
AWS AMI Quick Start Guide

This guide has been created to assist with the deployment of the AWS AMI SIEMonster implementation. It contains the relevant information on deploying the provided image and configuring the individual components as needed. This guide covers deployment only; for any additional information, please consult the SIEMonster documentation.

Please note the following videos assist with getting started quickly.

Adding a user for the SIEMonster V5 AWS marketplace stack deployment

Deploying SIEMonster V5 to the AWS Marketplace


The SIEMonster SIEM Solution provides organizations from small to large with the ability to monitor their network assets in an affordable and scalable solution, whilst providing the latest SIRP and SOAR automation and response.

SIEMonster is a collection of the best open-source security tools, as well as our own, to provide a SIEM for everyone. We showcase the latest and greatest tools for security professionals, such as The Hive, Cortex, MISP, MITRE, PatrOwl, Suricata, Wazuh, AWS OpenSearch and more, to create a SIEM. Customers all around the world now use our product, many having switched from Gartner quadrant competitors.

Pricing & Licensing

The AWS pricing may vary per region. Log into your AWS account and view the pricing chart in the AWS Marketplace for more information. Pricing includes the SIEMonster license, and no license management is required.

Regions Supported

Regions supported include:

Technical & User Requirements

Requires an AWS account with Route 53 Hosted Zone and registered domain. Optional - existing VPC with availability for 4 x /22 CIDR ranges.

Note: Do not use an AWS Root account for deployment; a Root account is not required and its use is not recommended.

There are no licensing requirements for the deployment.

It is recommended to use an assumed role with administrative access for deployment, followed by AWS EKS cluster least privilege review, see


An administrative role is required for deployment. The following roles are created automatically for management; for identification purposes, each created role is prefixed with the cluster name.

  • OpenSearch service role for management after deployment

  • Batch Instance IAM role for EC2

  • ECS Task Execution Role Policy

  • EC2 Container Service for EC2 role

  • EKS list & describe policy

  • Submit Batch policy for interaction with Elastic Load Balancers

  • Secrets Manager & Security Groups

  • Roles for Lambda function integration with EKS, Route53, ACM, Secrets Provider and Log Streams

References: Reduced subset of the Administrator job function

Details of roles and related policy statements for the deployment and management of the application

NOTE ⚠️: To deploy the solution, an AWS Administrative Role should be created that includes full access to the following AWS services:



  • EC2 Auto Scaling

  • Certificate Manager

  • CloudWatch Logs

  • ELB v2

  • License Manager

  • OpenSearch Service

  • Route 53



Steps to configure:

Post Deployment EKS administration

To administer the EKS cluster, an AWS Role should be created with full EKS access and added to the EKS Auth ConfigMap.

Steps to configure:
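As an illustrative sketch (the account ID and role name below are placeholders, and `system:masters` grants full cluster access), the new role can be mapped into the cluster by editing the aws-auth ConfigMap:

```shell
# Open the aws-auth ConfigMap for editing (requires existing cluster access)
kubectl -n kube-system edit configmap aws-auth

# Then add an entry under the mapRoles section, for example:
#   - rolearn: arn:aws:iam::111122223333:role/<cluster_name>-eks-admin  # placeholder ARN
#     username: eks-admin
#     groups:
#       - system:masters
```

Alternatively, `eksctl create iamidentitymapping` can perform the same mapping non-interactively.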

Post Deployment Monitoring

A lesser privilege AWS Role should be created for the purpose of granting access to Monitoring solutions.

Steps to configure:

kubectl -n kube-system apply -f ClusterRole.yaml

kubectl -n kube-system apply -f ClusterRoleBinding.yaml
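The two manifests are assumed to exist alongside the commands above. As a minimal sketch of what a read-only monitoring ClusterRole might contain (the role name, resources and verbs here are illustrative assumptions, not the shipped files):

```shell
# Write a hypothetical least-privilege ClusterRole manifest for monitoring access.
# The shipped ClusterRole.yaml may differ; adjust resources/verbs to your needs.
cat <<'EOF' > ClusterRole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-readonly
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
EOF
```

A matching ClusterRoleBinding would then bind this role to the monitoring user or group before running the apply commands above.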

Knowledge Prerequisites

You should be familiar with or have knowledge of:

  • Familiarity with AWS infrastructure

  • Basic knowledge of CloudFormation deployment via templates

  • Basic knowledge of IAM users & roles

  • Familiarity with the following AWS services:

  • Route53

  • EC2

  • OpenSearch

  • MSK Managed Kafka

  • EFS, S3, ACM

  • EKS Managed Kubernetes

  • VPCs

  • Load Balancers

  • Security Groups

Billable Services and Guidance

Below is a picture of the AWS Marketplace page with pricing information. The example is Region US East, with SIEMonster at $0.452 per hour and $0.452 per hour for the EC2 instances, for a total of $0.904 per hour.

The following additional services are Mandatory or Optional billable services.

AWS Billable Services

  • Route 53
AWS Calculators

The following AWS calculator has been provided to help you estimate AWS costs.

CloudFormation Deployment

Launching a SIEMonster stack via a CloudFormation Template can be achieved by either using a downloaded template or one provided on the AWS Marketplace.

You can choose either a QuickStart or a Custom install to integrate with your own environment and to provide private zone options with custom certificates.

The SIEMonster deployment will install 2 EC2 instances spanning 2 availability zones and a single-node OpenSearch service in 1 AZ. Autoscaling ensures that additional nodes (EC2 instances) will be deployed based on the number of tenants and system load. Additional charges will be incurred for each new node deployed. Deployment options are limited to a single-region, 2-AZ configuration. SIEMonster can be installed into a new VPC or into an existing VPC provided that an Internet Gateway is available. 2 public and 2 private subnets are required, and suitable defaults are provided. A low-tier Bastion host will also be deployed to provide VPN options for ingestion and cluster SSH access. See Appendix 4.3 – Public Resource Exposure for further details.

The following additional AWS infrastructure will be installed/utilized with this template (see the Topographical Details section for further information):

  • 2 x Network Load Balancers

  • 1 x Application Load Balancer

  • 1 x Encrypted EFS volume

  • AWS OpenSearch

  • AWS Managed Kubernetes

  • AWS Managed Streaming for Apache Kafka

  • AWS EC2

  • AWS S3 Bucket


  • AWS Route 53 for DNS management

When a cluster is created, a Hosted Zone is created just for the cluster. The name of the zone will be <CLUSTER_NAME>.<DNS_ROOT_ZONE_NAME> (variables provided in the CloudFormation Template).

An initial tenant will be deployed as specified in the CloudFormation template.

Inside this zone, several A records will be created for various services in the SIEMonster application (e.g. wazuh.<CLUSTER_NAME>.<DNS_ROOT_ZONE_NAME>).

Whatever value you provide to DNS_ROOT_ZONE_NAME must already have a NS record configured in the parent domain (see AWS docs for configuring a subdomain as a Hosted Zone). For example, if you set

DNS_ROOT_ZONE_NAME to - a Hosted Zone named must be manually created.

The NS record (which is generated by AWS upon zone creation) must be copied to the parent zone.

It should be noted that it is completely acceptable to use a parent domain for DNS_ROOT_ZONE_NAME. For example, you could set the value to and then have a cluster at

CloudFormation Deployment Steps

Deployment will take 30-45 minutes, plus a further 30 minutes for internal mesh system synchronization and DNS propagation. Also refer to the ‘Traffic Light’ module health indicators, e.g., all green means good to go.

To launch and configure SIEMonster, you should select it from the AWS Marketplace and start the launch/setup procedure. To do this, use the following steps:

  1. Locate SIEMonster on the AWS Marketplace.

  2. From the SIEMonster product page on the AWS Marketplace, press the Continue to Subscribe button.

  3. From the Subscription page press the Continue to Configuration button.

Version and Location

On the Configure this software page, select CloudFormation as your Fulfillment Option.

For the CloudFormation Template, select one of the options below. All Template launches are completely independent and will not affect any current resources on the account.

  • QuickStart: Launches SIEMonster in a preconfigured HA/clustered environment.

  • Custom: Advanced options to integrate SIEMonster into your own AWS VPC etc.

Select a Software Version. It is highly recommended to always use the latest version.

Note that the setup will not be finalized, and you will not be charged at this point.

Completing the CloudFormation Template Launch - QuickStart

You should now have a completed Amazon S3 template URL field.

Click Next to reach the Stack Details page.

Give your Stack a name and configure the blank details as appropriate for your environment. Ensure you make note of the Service Password entered.

  1. Choose from the list of DNS hosted zones the required zone.

  2. Choose Bastion options for external access.

  3. Choose a cluster name. This will become part of the access URL, e.g., https://<cluster_name>.<dns_root_zone>

Choose to enable WAF if required along with a name for the initial tenant.

Click Next and then Next again, adding any tags as required, and accept the AWS IAM creation option.

The Stack creation will now begin and will take approximately 30 minutes to complete deployment of all the required infrastructure. A CREATE_COMPLETE status will be shown in the CloudFormation console.

Please allow a further 30-60 minutes for DNS propagation and SIEMonster initialization.

Completing the CloudFormation Template Launch – Custom

You should now have a completed Amazon S3 template URL field.

Click Next to reach the Stack Details page.

Give your Stack a name and configure the blank details as appropriate for your environment. Ensure you make note of the Bastion Admin Password and the K8s (Kubernetes) Dashboard password entered.

Specify the User ARN who will be interfacing with the Kubernetes backend.

Specify ExistingIgwID and ExistingVpcID if you are deploying to an existing VPC.

Choose the required instance type (recommended minimum is m5d.2xlarge).

If deploying to existing VPC, enter private and public subnets suitable for your environment as well as the existing VPC CIDR block.

Choose options for OpenSearch depending on expected log volume. If the service-linked role for OpenSearch does not exist, then select ‘true’.

Choose options for Managed Kafka, allowing sufficient volume size for expected message queue retention.

Choose Bastion options for external access and admin password.

Choose a cluster name. This will become part of the access URL, e.g., https://<cluster_name>.<dns_root_zone>

Choose the required SIEMonster Edition along with passwords for Prometheus and Kubernetes Dashboard access.

Click Next and then Next again, adding any tags as required, and accept the AWS IAM creation option.

The Stack creation will now begin and will take approximately 30 minutes to complete deployment of all the required infrastructure. A CREATE_COMPLETE status will be shown in the CloudFormation console.

Please allow a further 30-60 minutes for DNS propagation and SIEMonster initialization.

Site Access

With the above steps completed, the SIEMonster web application will be available at the public DNS of https://<cluster_name>.<dns_root_zone>.

Enter the required administrative user email and password then click the Setup button to proceed to the main console.

Health Check

After you sign into your dashboard you will notice the Module Status widget on the landing page. Ticks beside each module indicate that the cluster spun up correctly and everything is healthy, as in the screenshot below:

If modules are still showing yellow or red, it may take a while for everything to stabilize as backend processes finish.

Another health check that can be performed is by checking Grafana metrics found by going to “System Monitor” in the side menu:

From here you can view the health of many of the backend processes. This also uses the Traffic Light Protocol mentioned earlier for the dashboard health check.

Service Limits

The following Hyperlinks are provided as a reference to AWS service limits.

Service quotas page

Monitor Service quotas

Increase service limits

Log Ingestion

External log ingestion is handled via a single internet-facing Network Load Balancer. This has an identifiable Tag ‘<cluster_name>-main-lb’ under EC2 – Load Balancers.

There are 2 types of ingestion applications which have preconfigured DNS names:

Wazuh, to accept connections from agents, cluster nodes and via API. TCP ports 1514, 1515, 1516 & 55000

wazuh.<cluster_name>.<dns_root_zone>

Nifi, to accept Syslog connections from Firewalls and Endpoint devices. TCP port 5514

nifi.<cluster_name>.<dns_root_zone>

See the respective administration guide for details on configuring endpoint agents and syslog feeds.

Kubernetes Interaction

To interact with Kubernetes via CLI the following steps are required:

  1. Download the latest version of kubectl

  2. Install the latest version of AWS CLI version 2

  3. Ensure that your local AWS profile is set to that of the USER ARN defined in the CloudFormation Template. If using the QuickStart template, the USER/ROLE ARN will be that of the logged-in user/role deploying the CF template.

  4. Use the following command to update your local kubeconfig

aws eks --region <my-region> update-kubeconfig --name <cluster-name>

For example: aws eks --region ap-southeast-2 update-kubeconfig --name siem-1992

The K8s namespace is ‘siemonster’. Using kubectl, interact as follows:

kubectl -n siemonster get pods
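A few other routine kubectl commands are useful when working in this namespace (the pod name below is a placeholder):

```shell
# Show detailed state and recent events for a pod
kubectl -n siemonster describe pod <pod-name>

# Stream logs from a pod
kubectl -n siemonster logs -f <pod-name>

# List services and persistent volume claims in the namespace
kubectl -n siemonster get svc,pvc
```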

Software Patching

SIEMonster server patch management

From the SIEMonster server shell type

apt-mark hold docker-ce kubelet kubeadm kubectl    <Enter>

apt install unattended-upgrades <Enter>

dpkg-reconfigure unattended-upgrades <Enter>

This requires a manual run for the first time

unattended-upgrade <Enter>

Your system is now configured for automatic updates. If you prefer to patch manually, the SIEMonster server is built on Amazon Linux 2 and instructions can be found here.

Hydra patch management

For compliance, patch management is required on the Hydra. The solution has been designed with this in mind. Please follow the guidelines below. This will allow you to update the Hydra with all relevant security patches as required.

From the Hydra shell type

apt list --installed |grep docker <Enter>

apt-mark hold docker-ce <Enter>

Now configure automatic security updates

apt install unattended-upgrades <Enter>

dpkg-reconfigure unattended-upgrades <Enter>

This requires a manual run for the first time

unattended-upgrade <Enter>

Your system is now configured for automatic updates.

If you prefer to patch the Hydra box manually, it is an Ubuntu Linux box and patching can be applied as per Ubuntu tutorials.

Topographical Details

Kubernetes Dashboard Access & SSH Key

The token required for access to the Kubernetes Dashboard as well as the SSH private key for SSH access to the Bastion and the EKS worker node(s) can be found in the Parameters section under AWS Systems Manager, identified by the Cluster name:

K8s Dashboard is available at the public DNS https://infrastructure-dashboard. Username is ‘admin’ if using the Quickstart template.

For example:

On first access enter the 12-character Service password defined in the CloudFormation template.

Next, enter the token from Systems Manager/Parameters and click on Sign In

View all utilized Kubernetes resources.

User Data & Stored Credentials

Stored secrets for database credentials are kept in AWS Secrets Manager for the Kubernetes namespace <cluster-name> chosen during deployment. These secrets can be maintained using standard Kubernetes procedures, and further information can be found in the AWS resources.

A schedule can be set up to rotate these secrets every 90 days by editing the relevant secret’s ‘Rotation Configuration’ and initiating a 90-day cycle as appropriate with a Lambda Function.

Stored secrets for SSH certificates and Kubernetes Dashboard tokens are stored with AWS Systems Manager – Parameters.

Steps to rotate Parameter Store secrets

Within AWS Systems Manager, Parameter Store, highlight the relevant secret with a prefix equal to the deployed <cluster_name> and click on Edit.

Replace the existing secret by pasting the new value.
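The same rotation can be scripted with the AWS CLI; the parameter name and key file below are hypothetical placeholders following the <cluster_name> prefix convention described above:

```shell
# Overwrite an existing SecureString parameter with a new value.
# Parameter name and key file are illustrative placeholders.
aws ssm put-parameter \
  --name "<cluster_name>-ssh-private-key" \
  --type SecureString \
  --value "file://new_key.pem" \
  --overwrite
```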

Personally Identifiable Information (PII) is restricted to first name, last name & email address of users stored in a database using AES256-CBC encryption. The EC2 volume hosting this database is encrypted at rest using KMS AWS managed keys.

The following storage systems will be deployed utilizing encryption at rest:

  • Private S3 Buckets

  • EBS volumes, including root volumes

  • EFS file systems

  • OpenSearch data volumes & node to node data in transit encryption using TLSv1.2

AWS SDK version provided with the solution uses IMDSv2 by default.

The customer is provided the ability to disable Metadata Service Version 1 (IMDSv1) using the standard aws ec2 modify-instance-metadata-options calls.
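For example, requiring IMDSv2 tokens on a running instance can be sketched as follows (the instance ID is a placeholder):

```shell
# Require IMDSv2 session tokens, effectively disabling IMDSv1,
# while keeping the metadata endpoint itself enabled
aws ec2 modify-instance-metadata-options \
  --instance-id <instance-id> \
  --http-tokens required \
  --http-endpoint enabled
```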

The KMS encryption configuration uses the following AWS managed keys with automatic annual rotation:

  • aws/elasticfilesystem

  • aws/sns

  • aws/lambda

  • aws/kafka

  • aws/s3

  • aws/ebs

  • aws/es

  • aws/ssm

KMS customer-managed keys used for the deployment are configured with automatic annual rotation, identifiable by an alias name equal to the deployed cluster name. If you prefer to change the rotation schedule of your keys, you can change them manually at the following AWS URL


Velero backups should be scheduled as per

It is recommended to back up daily and to use an AWS Lifecycle strategy to either expire objects older than 7 days or to transition them to a less expensive storage class. See
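As a sketch under these recommendations (the schedule time, TTL, object prefix and bucket name are assumptions; the bucket follows the ‘velero’ prefix described below):

```shell
# Daily Velero backup at 03:00 UTC, retained for 7 days (168 hours)
velero -n kube-system schedule create daily-full \
  --schedule "0 3 * * *" \
  --ttl 168h0m0s

# Matching S3 lifecycle rule to expire backup objects after 7 days
cat <<'EOF' > lifecycle.json
{
  "Rules": [
    {
      "ID": "expire-velero-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Expiration": { "Days": 7 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
  --bucket <velero-bucket-name> \
  --lifecycle-configuration file://lifecycle.json
```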

The Velero backup includes all persistent volumes and data within the deployment except for OpenSearch, which relies on AWS automated snapshots for cluster recovery.

It is recommended to configure OpenSearch daily index snapshots in addition to allow for granular data restoration as and when required. This is covered in more depth in the SIEMonster Documentation.

Additionally, the AWS EFS component can be added to a Backup Plan. This file system is identified by the Name tag <cluster_name>. After the File system details page is displayed in the console:

  1. Under General, choose Edit.

  2. To enable automatic backups, select Enable automatic backups.

  3. Choose Save changes.

Reference:

During deployment an S3 bucket was created for the purpose of exporting and importing all cluster data. The bucket will have the prefix ‘velero’ and is used in conjunction with the preconfigured Velero K8s plugin. Example:

To back up the entire cluster follow these steps:

Install the current Velero client

Locate the preconfigured S3 Bucket location

velero -n kube-system backup-location get

Example response:


NAME   PROVIDER   BUCKET/PREFIX     ACCESS MODE
aws    aws        velero-c2f7d5b0   ReadWrite

Create a snapshot location:
velero -n kube-system snapshot-location create ebs-us-east-1 \
  --provider aws \
  --config region=us-east-1
Adjust region and name to suit.

Perform full cluster backup:
velero -n kube-system backup create full-cluster-backup \
  --snapshot-volumes=true \
  --provider aws

Check backup status
velero -n kube-system backup describe full-cluster-backup

To restore a full cluster backup:

When upgrading, identify the unique S3 bucket from the previous system backup, otherwise:
velero -n kube-system backup-location get

When upgrading, identify the snapshot location of the previous system backup, otherwise:

velero -n kube-system snapshot-location get

Identify backup:
velero -n kube-system backup get

Restore backup:

velero -n kube-system restore create --from-backup full-cluster-backup
For PersistentVolumeClaims & PersistentVolumes only:
velero -n kube-system restore create --from-backup full-cluster-backup \
  --include-resources persistentvolumeclaims,persistentvolumes

Check Restore status
velero -n kube-system restore describe full-cluster-backup

Check velero -n kube-system --help for additional options.

Hydra Setup (optional)

A universal OVA appliance is available for download should you need to centralize logging on premise and connect back to the AWS SIEMonster Stack.

Once downloaded, the OVA can be imported into VMWare ESX or Workstation locally.

Adjust the system resources depending on expected load, e.g. RAM, CPU, disk size expansion. These can also be increased later should there be an increase in log ingestion.

Once Hydra has been started there will be a one-off interactive session to establish the VPN link with the SIEMonster cloud deployment.

The initial screen configures the SSH username and password.

Use the TAB key to navigate each option. Highlight ‘Next’ and press ENTER when completed.

The following screen allows you to configure networking requirements. Press Enter on the Type option to switch between DHCP and Static.

The following screen establishes the VPN connection to SIEMonster. The URL is in the format https://<bastion>.<cluster-name>.<root-dns-name>.


The password is that which was entered during the CloudFormation deployment.

It can also be found in Systems Manager- Parameter store.

The next screen allows for configuration of the internal Kafka connection for the logging endpoint. Change Kafka port to 9092.

On the final screen, navigate to the Reboot option to save the configuration and reboot the Hydra instance.

The Hydra instance will now initialize and automatically establish a VPN connection to SIEMonster. Wait for this to complete, then press ENTER to display the SSH login banner.


For SIEMonster deployment and any ongoing maintenance, please reach out via the support portal or for free support. Please allow up to 24 hours for a response. You can also reach out to AWS Support for further information on AWS specific support.

For 24/7 support and dedicated support from our Professional Services team, contact our SIEMonster sales team with any enquiries. SIEMonster offers 2-, 5- or 10-hour support packages per month.


Rotating cryptographic Keys

Rotating programmatic system credentials and cryptographic keys - System Credentials

Within the SIEMonster Kubernetes environment, system credentials are maintained as secrets and can be rotated as follows:

  1. Ensure Section 2.7 Kubernetes Interaction has been set up

  2. List secrets for tenancy
    kubectl -n <tenant-name> get secrets

Edit Module requiring credential rotation

kubectl -n tenant1 edit secret tenant1-misp

Credentials are Base64 encoded; encode the new credential to comply, e.g.

echo -n 'New_Credential' | base64

Replace existing with newly encoded credential, save state and then exit.

Restart chosen module for credential changes to be updated, e.g.,
kubectl -n tenant1 delete pod tenant1-misp-0
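The edit-and-restart steps above can also be performed non-interactively; a sketch, assuming the secret key is named ‘password’ (actual key names vary per module):

```shell
# Base64-encode the new credential (Kubernetes secret data must be Base64)
NEW_B64=$(printf '%s' 'New_Credential' | base64)
echo "$NEW_B64"

# Then patch the secret and restart the pod so the change is picked up:
#   kubectl -n tenant1 patch secret tenant1-misp \
#     -p "{\"data\":{\"password\":\"$NEW_B64\"}}"
#   kubectl -n tenant1 delete pod tenant1-misp-0
```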

Update Auto-Login function in Admin Panel – Roles – admin - <Module Name> - Settings with new credential, e.g.

Click on Save. Module credential rotation is now complete. Cryptographic keys are preconfigured for automatic rotation, see

The relevant keys can be identified by the alias which will match the deployed cluster name.

AWS programmatic access key rotation for IAM user reference:

Handling Fault Conditions

Troubleshooting steps for typical fault conditions within the environment.

Inaccessible Web Interface

  • Verify target URL is correct – Section 2.4 Site Access

  • Verify that the CloudFormation deployment has completed successfully.

On error, find the solution from the first failed event, e.g., ‘Chosen hosted domain not found’. Rectify the issue, delete the stack and re-deploy.

Verify the Kubernetes infrastructure is in a healthy state.

On pending or error conditions, ensure at least 30 minutes has elapsed since deployment for stabilization. Refer to for further information.

View Kubernetes event logs using ‘kubectl’
kubectl get events -A
Check for warnings/errors and refer to

Inaccessible Modules
Modules in a ‘Yellow’ or ‘Red’ state (see Section 2.5 Health Check) may prevent access.

To troubleshoot, obtain the recent logs for the affected Module and refer to for further information.

Log ingestion

When viewing the Dashboards – Discovery window, no current logs are available.

Troubleshooting steps for log ingestion pipelines:
Check OpenSearch Cluster status -

Check MSK Status -

Check Flow Processors Module for errors:

Red bulletins with hover-over will indicate errors, see for potential solutions.

Check AWS Load Balancer Target Groups health status with prefix <tenant-name>wazuh/nifi

On an Unhealthy state, verify target instance health -
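Target health can also be checked from the CLI (the target group ARN below is a placeholder):

```shell
# Find the target group ARN for the Wazuh ingestion targets
aws elbv2 describe-target-groups \
  --query "TargetGroups[?contains(TargetGroupName, 'wazuh')].TargetGroupArn"

# Query the health of its registered targets
aws elbv2 describe-target-health \
  --target-group-arn <target-group-arn>
```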

Agent connections – Verify correct agent setup, see


Details of Kubernetes troubleshooting may be found at and at

Public Resource Exposure

Load Balancers

Application Load Balancer, identified by ‘<cluster_name>-main-alb’
Port exposed – TCP 443
Purpose – Web Application Endpoint
Additions – Configurable WAF option pre-deployment

Network Load Balancer, identified by ‘<cluster_name>-main-lb’

Purpose – SSH, VPN, Data Ingestion, Bastion URL
Ports exposed:
SSH TCP 22, access restriction included as a configuration option. Bastion SSH access.
TCP port range 1514-1516 – Data ingestion, access restriction - manual post deployment
VPN UDP 1194 – VPN link, access restriction - manual post deployment
TCP 8443 – Bastion URL identified by https://bastion.<cluster_name>.<dns_root_zone>:8443 access restriction - manual post deployment
Refer to for post deployment security group management.


Publicly exposed Route53 DNS records:
NS – identified by <cluster_name>.<dns_root_zone>
A record identification:
bastion.<cluster_name>.<dns_root_zone>
wazuh.<cluster_name>.<dns_root_zone>
inside.wazuh.<cluster_name>.<dns_root_zone>

Network Configuration

VPC options
Configurable pre-deployment to integrate with existing customer VPC. Default will deploy a new VPC with the following CIDR -


Configurable pre-deployment to integrate with existing customer VPC. Default will deploy a public subnet with CIDR ranges, and private subnets with CIDR ranges,

Security Groups

The following security groups are created during deployment:



Access restrictions configurable pre and post deployment.

A Network ACL is created during deployment, identified by Tag Key ‘aws:cloudformation:stack-name’. This NACL is associated with all 4 subnets and allows all traffic. For a customer-integrated VPC deployment, the NACL rules will reflect those of the existing VPC.

Route Tables

3 route tables are created during deployment, identified by Tag Name ‘<cluster_name>’.
2 route tables identified by Tag Key ‘aws:cloudformation:logical-id’ of ‘ClusterNodePrivateRT1’ route traffic from the private subnets to a NAT gateway.
1 route table identified by Tag Key ‘aws:cloudformation:logical-id’ of ‘ClusterNodePublicRT’ routes traffic to an Internet Gateway

Deployment Troubleshooting and Testing

Identifying CloudFormation errors during deployment

Reference -

During stack deployment, any errors that occur may be viewed under Stack Events.


Further information regarding the reason for the failure can be found within CloudWatch – Log groups. Relevant entries will contain a prefix of <stack_name> and contain detailed information regarding the deployment process.


Common causes for failures:

  • Elastic IP limits

  • Load Balancer limits

  • EC2 instance limits

  • VPC constraints when deploying within an existing VPC – check that CIDR ranges do not overlap with existing assets

Check that the hosted zone used for deployment exists and is publicly accessible.

Post deployment testing procedure - checking accessibility of assets defined in Section 4.3
