We recommend the automated CloudFormation deployment as the default configuration. If you need to deploy this integration in a different way, navigate to Advanced configuration.
Automated CloudFormation deployment
This project will create 2 Lambda functions:

- Shipper function: responsible for processing and shipping the CloudWatch logs to Logz.io. See the function's repo here.
- Trigger function: responsible for adding subscription filters to the desired CloudWatch log groups, to trigger the shipper function.
When the Trigger function runs for the first time, it adds subscription filters to the log groups chosen by the user. If the user chose a service, the Trigger function is also invoked whenever a new log group is created, to check whether that log group belongs to one of the selected services. If it does, the function adds a subscription filter to it.
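As an illustrative sketch (not the trigger function's actual code), the new-log-group check can be thought of as a prefix match against the selected services; the service-to-prefix mapping and function name below are assumptions for illustration:

```python
# Hypothetical sketch of the trigger function's "does this new log group
# belong to a selected service?" check. The prefix mapping is an assumption
# for illustration, not the integration's actual table.
SERVICE_LOG_GROUP_PREFIXES = {
    "lambda": "/aws/lambda/",
    "rds": "/aws/rds/",
    "eks": "/aws/eks/",
    "codebuild": "/aws/codebuild/",
}

def should_subscribe(log_group_name, selected_services):
    """Return True if the newly created log group matches a selected service."""
    for service in selected_services:
        prefix = SERVICE_LOG_GROUP_PREFIXES.get(service)
        if prefix and log_group_name.startswith(prefix):
            return True
    return False
```

When a check like this passes, the trigger function adds a subscription filter that routes the log group's events to the shipper function.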
Auto-deploy the Lambda function
👇 To begin, click this button to start the automated deployment. You will need to deploy it in your region.
Specify stack details
Specify the stack details as per the table below, select the acknowledgement checkboxes, and select Create stack.
Shipper config
Parameter | Description | Required/Default |
---|---|---|
logzioToken | Replace <<SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
logzioListener | Listener host and port (for example, https://<<LISTENER-HOST>>:8071). | Required |
logzioType | The log type you’ll use with this Lambda. This can be a built-in log type, or a custom log type. You should create a new Lambda for each log type you use. | logzio_cloudwatch_logs |
logzioFormat | json or text . If json , the Lambda function will attempt to parse the message field as JSON and populate the event data with the parsed fields. | text |
logzioCompress | Set to true to compress logs before sending them. Set to false to send uncompressed logs. | true |
logzioEnrich | Enrich CloudWatch events with custom properties, formatted as key1=value1;key2=value2 . | - |
shipperLambdaTimeout | The number of seconds that Lambda allows the shipper function to run before stopping it. | 60 |
shipperLambdaMemory | The memory allocated to the shipper function, in MB. CPU is allocated in proportion to the configured memory. | 512 |
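To illustrate what logzioCompress controls, here is a minimal sketch of gzip-compressing a log batch before shipment. The function name and the newline-delimited JSON layout are assumptions for illustration, not the shipper's exact wire format:

```python
import gzip
import json

def build_payload(logs, compress=True):
    """Serialize a batch of log events as newline-delimited JSON and
    optionally gzip it, mirroring the logzioCompress=true behavior.
    Illustrative only; not the shipper's actual implementation."""
    body = "\n".join(json.dumps(log) for log in logs).encode("utf-8")
    return gzip.compress(body) if compress else body
```

Compression trades a little CPU time in the Lambda for smaller HTTPS payloads, which is usually worthwhile for chatty log groups.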
Trigger config
Parameter | Description | Required/Default |
---|---|---|
services | A comma-separated list of services you want to collect logs from. Supported options are: apigateway, rds, cloudhsm, cloudtrail, codebuild, connect, elasticbeanstalk, ecs, eks, aws-glue, aws-iot, lambda, macie, amazon-mq. | - |
customLogGroups | A comma-separated list of custom log groups you want to collect logs from. | - |
triggerLambdaTimeout | The number of seconds that Lambda allows the trigger function to run before stopping it. | 60 |
triggerLambdaMemory | The memory allocated to the trigger function, in MB. CPU is allocated in proportion to the configured memory. | 512 |
triggerLambdaLogLevel | Log level for the trigger function. Can be one of: debug, info, warn, error, fatal, panic. | info |
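The logzioEnrich format (key1=value1;key2=value2) splits into key-value pairs that get attached to each event. A minimal parsing sketch, with a hypothetical helper name:

```python
def parse_enrich(enrich):
    """Parse a key1=value1;key2=value2 enrichment string into a dict of
    custom properties to add to each event. Illustrative helper only."""
    if not enrich:
        return {}
    # split on ";" into pairs, then on the first "=" into key and value
    return dict(pair.split("=", 1) for pair in enrich.split(";"))
```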
AWS allows each log group up to 2 subscription filters. If your chosen log group already has 2 subscription filters, the trigger function won't be able to add another one.
Send logs
Give the stack a few minutes to be deployed.
Once new logs are added to your chosen log group, they will be sent to your Logz.io account.
If you've used the services field, you'll have to wait 6 minutes before creating new log groups for your chosen services. This is due to cold start and custom resource invocation, which can cause the Lambda to behave unexpectedly.
Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open Kibana.
If you still don’t see your logs, see log shipping troubleshooting.
The advanced configuration settings for this integration include:
- Manual configuration with a Lambda function
- Deployment using a module
- Terraform deployment
- Working with test events
Manual configuration with a Lambda function
Create a new Lambda function
This Lambda function collects CloudWatch logs and sends them to Logz.io in bulk over HTTPS.
Open the AWS Lambda Console, and click Create function. Choose Author from scratch, and use this information:
- Name: We suggest adding the log type to the name, but you can name this function whatever you want.
- Runtime: Choose Python 3.7
- Role: Click Create new role from template(s). Then, from the Policy Templates list, select Basic Lambda@Edge permissions (for Cloudfront trigger).
Click Create Function (bottom right corner of the page). After a few moments, you’ll see configuration options for your Lambda function.
You’ll need this page later on, so keep it open.
Zip the source files
Clone the CloudWatch Logs Shipper - Lambda project from GitHub to your computer,and zip the Python files by running the following command:
```shell
git clone https://github.com/logzio/logzio_aws_serverless.git \
&& cd logzio_aws_serverless/python3/cloudwatch/ \
&& mkdir -p dist/python3/shipper; cp -r ../shipper/shipper.py dist/python3/shipper \
&& cp src/lambda_function.py dist \
&& cd dist/ \
&& zip logzio-cloudwatch lambda_function.py python3/shipper/*
```
Alternatively, you can zip the required files manually. To do this, zip the cloudwatch and shipper folders in the python3 directory.

You'll upload logzio-cloudwatch.zip in the next step.
Upload the zip file and set environment variables
In the Function code section of Lambda, find the Code entry type list. Choose Upload a .ZIP file from this list.
Click Upload, and choose the zip file you created earlier (logzio-cloudwatch.zip).
In the Environment variables section, set your Logz.io account token, URL, and log type, and any other variables that you need to use.
Environment variables
Parameter | Description | Required/Default |
---|---|---|
TOKEN | Your Logz.io account token. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
REGION | Your two-letter Logz.io account region code. Defaults to US East. Required only if your Logz.io region is different from US East. You can find your region code in the Available regions table. | Default: blank (US East) |
URL (Deprecated) | Use REGION instead. | -- |
TYPE | The log type you’ll use with this Lambda. This can be a built-in log type, or a custom log type. You should create a new Lambda for each log type you use. | logzio_cloudwatch_lambda |
FORMAT | json or text . If json , the Lambda function will attempt to parse the message field as JSON and populate the event data with the parsed fields. | text |
COMPRESS | Set to true to compress logs before sending them. Set to false to send uncompressed logs. | false |
ENRICH | Enrich CloudWatch events with custom properties, formatted as key1=value1;key2=value2 . | -- |
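To make the FORMAT behavior concrete, here is a minimal sketch, with a hypothetical helper name, of what json mode does: try to parse the message field as JSON and merge the parsed fields into the event, falling back to plain text when parsing fails:

```python
import json

def apply_format(event, fmt="text"):
    """In json mode, attempt to parse the message field as JSON and merge
    the parsed fields into the event; otherwise leave the event as-is.
    Illustrative sketch, not the shipper's exact implementation."""
    if fmt == "json":
        try:
            parsed = json.loads(event.get("message", ""))
            if isinstance(parsed, dict):
                event.update(parsed)
        except ValueError:
            pass  # message is not valid JSON; keep it as plain text
    return event
```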
Configure the function’s basic settings
In Basic settings, we recommend starting with these settings:
- Memory: 512 MB
- Timeout: 1 min 0 sec
These default settings are just a starting point. Check your Lambda usage regularly, and adjust these values if you need to.
Default settings
By default, we do not send logs of type START, END, REPORT.
If you prefer to send all log types, replace the method _parse_cloudwatch_log in the Lambda function with the following:
```python
def _parse_cloudwatch_log(log, additional_data):
    # type: (dict, dict) -> bool
    _add_timestamp(log)
    if LAMBDA_LOG_GROUP in additional_data['logGroup']:
        _extract_lambda_log_message(log)
    log.update(additional_data)
    _parse_to_json(log)
    return True
```
Set the CloudWatch Logs event trigger
- Find the Add triggers list (left side of the Designer panel). Choose CloudWatch Logs from this list.
- Below the Designer, you'll see the Configure triggers panel. Choose the Log group that the Lambda function will watch.
- Type a Filter name (required) and Filter pattern (optional).
- Click Add, then Save at the top of the page.
Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open Kibana.
If you still don’t see your logs, see log shipping troubleshooting.
Deployment using a module
Deploy this integration to add a module for CloudWatch to your existing stack. This integration uses the AWS CloudFormation Public Registry.
Logz.io Public Registry extensions are currently only available in the AWS region us-east-1.
Before you begin, you’ll need:
- A CloudFormation stack
- An S3 bucket to store the CloudFormation package
Select the Logz.io AWS Cloudwatch extension
- Navigate to CloudFormation > Registry > Public extensions.
- Set Extension type > Modules and Publisher > Third party.
- Select logzio::autoDeploymentLogzio::CloudWatch::MODULE.
Activate the Logz.io AWS Cloudwatch extension
- On the logzio::autoDeploymentLogzio::CloudWatch page, select Activate.
- In the Extension details section, select Use default.
- In the Automatic updates section, select On.
- Select Activate extension.
Copy the configuration template
On the logzio::autoDeploymentLogzio::CloudWatch page, navigate to Example template and select Copy template.
Add your stack values to the configuration template
```json
{
  "Resources": {
    "MyModule": {
      "Type": "logzio::autoDeploymentLogzio::CloudWatch::MODULE",
      "Properties": {
        "LogzioListenerUrl": "https://<<LISTENER-HOST>>:8071",
        "LogzioToken": "<<LOG-SHIPPING-TOKEN>>",
        "LogzioType": "LogzioType",
        "LogzioFormat": "LogzioFormat",
        "LogzioCompress": "LogzioCompress",
        "LogzioSendAll": "LogzioSendAll",
        "LogzioEnrich": "LogzioEnrich",
        "LogGroup": "LogGroup"
      }
    }
  }
}
```
Save the template as a YAML file and fill in your stack values as per the table below.
Parameter | Description | Required/Default |
---|---|---|
LogzioListenerUrl | Use the listener URL specific to the region where your Logz.io account is hosted. Click to look up your listener URL. The required port depends on whether HTTP or HTTPS is used: HTTP = 8070, HTTPS = 8071. | |
LogzioToken | Your Logz.io account token. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
LogzioType | The log type you’ll use with this Lambda. This can be a built-in log type, or a custom log type. You should create a new Lambda for each log type you use. | logzio_cloudwatch_lambda |
LogzioFormat | json or text . If json , the Lambda function will attempt to parse the message field as JSON and populate the event data with the parsed fields. | text |
LogzioCompress | Set to true to compress logs before sending them. Set to false to send uncompressed logs. | false |
LogzioSendAll | Set to true to send all logs. | |
LogzioEnrich | Enrich CloudWatch events with custom properties, formatted as key1=value1;key2=value2 . | -- |
LogGroup | CloudWatch log group. | -- |
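Since the template must be saved as a YAML file, the JSON example template above can be expressed equivalently in YAML (a direct translation, keeping the same placeholder values):

```yaml
Resources:
  MyModule:
    Type: logzio::autoDeploymentLogzio::CloudWatch::MODULE
    Properties:
      LogzioListenerUrl: "https://<<LISTENER-HOST>>:8071"
      LogzioToken: "<<LOG-SHIPPING-TOKEN>>"
      LogzioType: "LogzioType"
      LogzioFormat: "LogzioFormat"
      LogzioCompress: "LogzioCompress"
      LogzioSendAll: "LogzioSendAll"
      LogzioEnrich: "LogzioEnrich"
      LogGroup: "LogGroup"
```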
Create or update your stack
If you are creating a new stack:
- In step 1 of the Create stack process, select Template is ready.
- Select Upload a template file.
If you are editing an existing stack:
- Select the stack.
- Select Update.
- Select Edit template in designer.
- Paste the content of the yaml file into the Resources section of the template as follows:

```json
"MyModule": {
  "Type": "logzio::autoDeploymentLogzio::CloudWatch::MODULE",
  "Properties": {
    "LogzioListenerUrl": "https://<<LISTENER-HOST>>:8071",
    "LogzioToken": "<<LOG-SHIPPING-TOKEN>>",
    "LogzioType": "LogzioType",
    "LogzioFormat": "LogzioFormat",
    "LogzioCompress": "LogzioCompress",
    "LogzioSendAll": "LogzioSendAll",
    "LogzioEnrich": "LogzioEnrich",
    "LogGroup": "LogGroup"
  }
}
```
- If required, change the module name by editing the "MyModule" value.
Terraform deployment
You can deploy this integration using Terraform with the AWS provider.
Zip the source files
Clone the CloudWatch Logs Shipper - Lambda project from GitHub to your computer, and zip the Python files in the src/ folder as follows:
```shell
git clone https://github.com/logzio/logzio_aws_serverless.git \
&& cd logzio_aws_serverless/python3/cloudwatch/ \
&& mkdir -p dist/python3/shipper; cp -r ../shipper/shipper.py dist/python3/shipper \
&& cp src/lambda_function.py dist \
&& cd dist/ \
&& zip logzio-cloudwatch lambda_function.py python3/shipper/* \
&& mv logzio-cloudwatch.zip ../../../.. \
&& cd ../../../.. \
&& rm -rf logzio_aws_serverless
```
The above script creates logzio-cloudwatch.zip, which contains the source code for your Lambda function.
Set AWS credentials
Create environment variables that contain your AWS credentials:
```shell
export AWS_ACCESS_KEY_ID="<<AWS-ACCESS-KEY>>"
export AWS_SECRET_ACCESS_KEY="<<AWS-SECRET-KEY>>"
export AWS_DEFAULT_REGION="<<AWS-REGION>>"
```
This configuration uses AWS access key credentials. If you're using another authentication method for AWS, see here how to configure your provider accordingly.
Configure Terraform
Use the following configuration to create the necessary AWS resources to ship CloudWatch logs to Logz.io. Make sure you replace the placeholders within the configuration with your details, where applicable:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.65.0"
    }
  }
}

resource "aws_iam_role" "iam_lambda_cw_to_logzio" {
  name = "lambda_cw_to_logzio"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_policy" "policy_cw_to_logzio" {
  name = "policy_cw_to_logzio"
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : [
          "logs:PutResourcePolicy",
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        "Resource" : "*",
        "Effect" : "Allow"
      }
    ]
  })
}

resource "aws_iam_policy_attachment" "attach_cw_to_logzio" {
  name       = "attach_cw_to_logzio"
  roles      = [aws_iam_role.iam_lambda_cw_to_logzio.name]
  policy_arn = aws_iam_policy.policy_cw_to_logzio.arn
}

resource "aws_lambda_function" "lambda_cloudwatch_to_logzio" {
  # Change to the path of the zip file you created in the 1st step
  filename      = "/path/to/your/logzio-cloudwatch.zip"
  function_name = "cloudwatch_to_logzio"
  role          = aws_iam_role.iam_lambda_cw_to_logzio.arn
  runtime       = "python3.9"
  handler       = "lambda_function.lambda_handler"
  # These default settings are just a starting point. Check your Lambda usage regularly, and adjust these values if you need to.
  timeout     = 60
  memory_size = 512
  environment {
    variables = {
      # Required variables:
      TOKEN        = "<<LOG-SHIPPING-TOKEN>>"         # Your Logz.io shipping token
      LISTENER_URL = "https://<<LISTENER-HOST>>:8071" # Your Logz.io listener host (for example, listener.logz.io)
      # To learn more about all the environment variables for this function, see the parameters table in the documentation.
    }
  }
}

# Log group for the Lambda function we create
resource "aws_cloudwatch_log_group" "log_group_cw_to_logzio" {
  name = "/aws/lambda/${aws_lambda_function.lambda_cloudwatch_to_logzio.function_name}"
  # You can add retention days for the function's log group
  # retention_in_days = 5
}

resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_cloudwatch_to_logzio.function_name
  principal     = "logs.amazonaws.com"
}

# Subscription filter for the log group you wish to subscribe to.
# Create a separate "aws_cloudwatch_log_subscription_filter" resource for every log group you want to subscribe to.
resource "aws_cloudwatch_log_subscription_filter" "cw_to_logzio_subscription" {
  name = "cw_to_logzio_subscription"
  # Change "log_group_name" to the log group name you want to subscribe to. For example - /aws/lambda/my_lambda_function
  log_group_name  = "<<LOG-GROUP-TO-READ-FROM>>"
  filter_pattern  = ""
  destination_arn = aws_lambda_function.lambda_cloudwatch_to_logzio.arn
}
```
Parameter | Description | Required/Default |
---|---|---|
LogzioToken | Your Logz.io account token. Replace <<LOG-SHIPPING-TOKEN>> with the token of the account you want to ship to. | Required |
LogGroup | CloudWatch Log Group name from where you want to send logs. | Required |
LogzioListenerUrl | Listener host and port (for example, https://<<LISTENER-HOST>>:8071). | Required |
LogzioType | The log type you'll use with this Lambda. This can be a built-in log type, or a custom log type. You should create a new Lambda for each log type you use. | logzio_cloudwatch_lambda |
LogzioFormat | json or text . If json , the Lambda function will attempt to parse the message field as JSON and populate the event data with the parsed fields. | text |
LogzioSendAll | By default, we do not send logs of type START , END , REPORT . Change to true to send all log data. | false |
LogzioCompress | Set to true to compress logs before sending them. Set to false to send uncompressed logs. | false |
LogzioEnrich | Enrich CloudWatch events with custom properties, formatted as key1=value1;key2=value2. | -- |
Create Terraform execution plan
The following command creates a Terraform execution plan from your configuration:

```shell
terraform plan -out terraform.plan
```
Apply the plan
The following command will apply the execution plan you’ve created in the previous step, meaning that Terraform will create the resources you described in the configuration:
```shell
terraform apply terraform.plan
```
Check Logz.io for your logs
Give your logs some time to get from your system to ours, and then open Kibana.
If you still don’t see your logs, see log shipping troubleshooting.
Destroying the resources
To destroy the resources, use the following:
```shell
terraform destroy
```
Working with test events
You can generate test events using the Logz.io Lambda test events generator and add these events to your Lambda function. This functionality is currently only available on Linux & macOS.
Generate a test event
- In your terminal, run the following command:

```shell
bash <(curl -s https://raw.githubusercontent.com/logzio/logzio_aws_serverless/master/python3/cloudwatch/test_events/test_event_generator.sh)
```

This script generates a test event with a UTC timestamp of the moment you run the script.

- Copy the output JSON.
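If you want to inspect a test event programmatically, note that CloudWatch Logs delivers its payload base64-encoded and gzip-compressed under the awslogs.data field. A minimal decoding sketch (the function name is our own):

```python
import base64
import gzip
import json

def decode_cloudwatch_event(event):
    """Decode the awslogs payload of a CloudWatch Logs event: the data
    field is base64-encoded, gzip-compressed JSON."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))
```

This is the same unpacking any CloudWatch Logs-triggered Lambda must perform before it can read the individual log events.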
Add the generated test event to your Lambda function
- Select the Lambda function that you need to add the test event to.
- Open the Test tab.
- Select New event.
- In the Template field, select CloudWatch Logs.
- In the Name field, enter a name for the test event. No specific naming convention is required.
- Populate the body field with the output JSON of the test event generated in the previous step.
- Select Format to format the test event.
- Select Save changes.
Run the test event
To run the test event, select Test in the Test tab. The Lambda function will run and generate the following two logs in your account:

```
[ERROR] Logz.io cloudwatch test log1
[ERROR] Logz.io cloudwatch test log2
```