CVE-2021-44228 Log4J RCE Notice
Applicable to: V4.X
The following information is provided in light of CVE-2021-44228, the Log4j remote code execution vulnerability. The steps below are provided for reference and describe what is required to mitigate the risks associated with the vulnerability.
CVE References
A zero-day exploit in the popular Java logging library log4j2 was discovered that results in Remote Code Execution (RCE) when a certain string is logged.
https://nvd.nist.gov/vuln/detail/CVE-2021-44228
https://github.com/advisories/GHSA-jfh8-c2jp-5v3q
Modules affected by the vulnerability disclosure
Elasticsearch (OpenDistro)
TheHive
TheHive JAR library does not include the vulnerable JNDI lookup code, so it does not require patching.
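If you want to confirm this on your own deployment, you can scan the JARs on disk for the vulnerable class. A minimal sketch, assuming unzip is installed; the /opt/thehive path is a hypothetical example and should be replaced with the actual install location:
# Hypothetical install path; substitute the real TheHive directory on your host.
# Prints the path of any JAR that bundles the vulnerable JndiLookup class.
find /opt/thehive -name '*.jar' -exec sh -c 'unzip -l "$1" | grep -qi JndiLookup.class && echo "$1"' _ {} \;
No output means none of the scanned JARs contain the class.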
Deployment type specific information
AWS Managed Elasticsearch
Currently in the patching phase. Please refer to https://aws.amazon.com/security/security-bulletins/AWS-2021-005/
SIEMonster Oracle/AWS SaaS:
All clients are currently in the patching phase. ETA for completion: 9:00 PM US EST, 12/12/2021.
Mitigation Steps (MSSP Pro tenant)
Check that the image pull policy for Elasticsearch is set to ‘Always’
Steps:
kubectl -n <tenant-name> edit sts <tenant-name>-master
Verify that the following fields match:
image: siemonster/elasticsearch:prod-v4.x
imagePullPolicy: Always
Steps to restart Elasticsearch by scaling the StatefulSet down and up:
kubectl -n <tenant-name> scale sts <tenant-name>-master --replicas=0
Wait for termination to complete:
kubectl -n <tenant-name> get pods |grep master
kubectl -n <tenant-name> scale sts <tenant-name>-master --replicas=1
Check logs for successful initialization
kubectl -n <tenant-name> logs -f <tenant-name>-master-0
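If you prefer to verify the StatefulSet without opening an editor, a jsonpath query reads both fields directly. A minimal sketch, assuming the Elasticsearch container is the first container in the pod template:
kubectl -n <tenant-name> get sts <tenant-name>-master \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}{.spec.template.spec.containers[0].imagePullPolicy}{"\n"}'
# Expected output:
#   siemonster/elasticsearch:prod-v4.x
#   Always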
Mitigation Steps (MSSP Enterprise tenant)
Check that the image pull policy for Elasticsearch is set to ‘Always’
Steps for Master Nodes:
kubectl -n <tenant-name> edit sts <tenant-name>-master
Verify that the following fields match:
image: siemonster/elasticsearch:prod-v4.x
imagePullPolicy: Always
Steps to restart Elasticsearch by scaling the StatefulSet down and up:
kubectl -n <tenant-name> scale sts <tenant-name>-master --replicas=0
Wait for termination to complete:
kubectl -n <tenant-name> get pods |grep master
kubectl -n <tenant-name> scale sts <tenant-name>-master --replicas=1
Check logs for successful initialization
kubectl -n <tenant-name> logs -f <tenant-name>-master-0
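Instead of polling get pods by hand, kubectl wait can block until the master pod is actually gone. A minimal sketch of the same down/up cycle:
kubectl -n <tenant-name> scale sts <tenant-name>-master --replicas=0
# Blocks until the pod object is deleted, or gives up after 5 minutes.
kubectl -n <tenant-name> wait --for=delete pod/<tenant-name>-master-0 --timeout=5m
kubectl -n <tenant-name> scale sts <tenant-name>-master --replicas=1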
Steps for Data Nodes:
kubectl -n <tenant-name> edit sts <tenant-name>-data
Verify that the following fields match:
image: siemonster/elasticsearch:prod-v4.x
imagePullPolicy: Always
Steps to restart Elasticsearch by scaling the StatefulSet down and up:
kubectl -n <tenant-name> scale sts <tenant-name>-data --replicas=0
Wait for termination to complete:
kubectl -n <tenant-name> get pods |grep data
kubectl -n <tenant-name> scale sts <tenant-name>-data --replicas=1
Check logs for successful initialization
kubectl -n <tenant-name> logs -f <tenant-name>-data-0
kubectl -n <tenant-name> logs -f <tenant-name>-data-1
The above is for a cluster with two data nodes; keep incrementing the final index until all data nodes have been covered.
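For larger clusters, a small shell loop saves retyping the log command per pod. A minimal sketch, assuming data pods are numbered consecutively from 0; -f is dropped so the loop does not block on the first pod:
# Set N to the number of data nodes in the tenant (2 in the example above).
N=2
for i in $(seq 0 $((N - 1))); do
  echo "--- <tenant-name>-data-$i ---"
  kubectl -n <tenant-name> logs --tail=50 "<tenant-name>-data-$i"
done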
Mitigation Steps (Enterprise)
Check that the image pull policy for Elasticsearch is set to ‘Always’
Steps for Master Nodes:
kubectl -n <cluster-name> edit sts <cluster-name>-master
Verify that the following fields match:
image: siemonster/elasticsearch:prod-v4.x
imagePullPolicy: Always
Steps to restart Elasticsearch by scaling the StatefulSet down and up:
kubectl -n <cluster-name> scale sts <cluster-name>-master --replicas=0
Wait for termination to complete:
kubectl -n <cluster-name> get pods |grep master
kubectl -n <cluster-name> scale sts <cluster-name>-master --replicas=1
Check logs for successful initialization
kubectl -n <cluster-name> logs -f <cluster-name>-master-0
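As an alternative to tailing logs, kubectl rollout status blocks until the StatefulSet's pods report Ready. A minimal sketch:
# Returns once the master pod is Ready, or fails after the timeout.
kubectl -n <cluster-name> rollout status sts/<cluster-name>-master --timeout=10m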
Steps for Data Nodes:
kubectl -n <cluster-name> edit sts <cluster-name>-data
Verify that the following fields match:
image: siemonster/elasticsearch:prod-v4.x
imagePullPolicy: Always
Steps to restart Elasticsearch by scaling the StatefulSet down and up:
kubectl -n <cluster-name> scale sts <cluster-name>-data --replicas=0
Wait for termination to complete:
kubectl -n <cluster-name> get pods |grep data
kubectl -n <cluster-name> scale sts <cluster-name>-data --replicas=1
Check logs for successful initialization
kubectl -n <cluster-name> logs -f <cluster-name>-data-0
kubectl -n <cluster-name> logs -f <cluster-name>-data-1
The above is for a cluster with two data nodes; keep incrementing the final index until all data nodes have been covered.
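Once all nodes are back, it is worth confirming the cluster reports green before calling the restart complete. A minimal sketch, assuming curl is available inside the Elasticsearch container and the HTTP endpoint answers on localhost without TLS or authentication; adjust for your security configuration:
kubectl -n <cluster-name> exec <cluster-name>-master-0 -- \
  curl -s 'http://localhost:9200/_cluster/health?pretty'
# Look for "status" : "green" in the output.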
Mitigation Steps (Professional - Cloud)
Check that the image pull policy for Elasticsearch is set to ‘Always’
Steps for Master Nodes:
kubectl -n <cluster-name> edit sts <cluster-name>-master
Verify that the following fields match:
image: siemonster/elasticsearch:prod-v4.x
imagePullPolicy: Always
Steps to restart Elasticsearch by scaling the StatefulSet down and up:
kubectl -n <cluster-name> scale sts <cluster-name>-master --replicas=0
Wait for termination to complete:
kubectl -n <cluster-name> get pods |grep master
kubectl -n <cluster-name> scale sts <cluster-name>-master --replicas=1
Check logs for successful initialization
kubectl -n <cluster-name> logs -f <cluster-name>-master-0
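To confirm that imagePullPolicy: Always actually triggered a fresh pull on restart, the pod's events record it. A minimal sketch:
kubectl -n <cluster-name> describe pod <cluster-name>-master-0 | grep -i -A1 'pulled'
# A recent "Successfully pulled image ..." event shows the patched image
# was fetched from the registry rather than served from the node's cache.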
Mitigation Steps (Professional Baremetal)
Custom deployments on baremetal
Beginning with Elasticsearch Master nodes:
Edit /etc/elasticsearch/jvm.options
Add the following line, then save and exit:
-Dlog4j2.formatMsgNoLookups=true
Restart the node: systemctl restart elasticsearch
Verify green cluster health before moving on to the next node:
curl -u elastic:xxxxxx http://<endpoint>:9200/_cluster/health?pretty
Repeat for the Data nodes, one node at a time.
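After all nodes have been restarted, the _nodes API can confirm the JVM flag is active cluster-wide, reusing the same credentials placeholder as the health check above:
# jvm.input_arguments in the _nodes output lists the flags each node started with.
curl -u elastic:xxxxxx 'http://<endpoint>:9200/_nodes/jvm?pretty' | grep formatMsgNoLookups
# One -Dlog4j2.formatMsgNoLookups=true line per node confirms the mitigation is live.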
Mitigation Steps (Professional OVA/Docker)
Update the Docker image for Elasticsearch referenced in /etc/siemonster/docker_images.env
Find the line DOCKER_ELASTIC_SEARCH=siemonster/elasticsearch:prod-v4.x.x
Update the image:
docker pull siemonster/elasticsearch:prod-v4.x.x
NOTE: Replace 4.x.x with the version listed in docker_images.env
Restart Elasticsearch
sudo systemctl restart es-master
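The steps above can be run as one short sequence. A minimal sketch, assuming docker_images.env is a plain KEY=value file that a shell can source, and that the running container is named es-master (both are assumptions; verify against your deployment):
# Read the pinned image tag, pull it, restart, then confirm what is running.
source /etc/siemonster/docker_images.env
docker pull "$DOCKER_ELASTIC_SEARCH"
sudo systemctl restart es-master
docker ps --filter name=es-master --format '{{.Image}} {{.Status}}'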