Before we learn how to automatically export RDS audit logs to S3, did you know that AWS CloudWatch can store your RDS DB logs? It does, but by default it only receives the general logs for your DB instance. Now, what if you want to find a specific query or piece of data in the logs? Which user ran a particular query, and from which IP address? Who deleted certain records from your DB tables?
Also, suppose you have managed to export all the audit logs of your RDS instance to CloudWatch, which is great. But did you know? Depending on the retention setting on the log group, CloudWatch may keep those logs for only a limited time (30 days is a common setting). To retain them for a longer period, you need to export the CloudWatch logs to an S3 bucket. You can do this manually from the CloudWatch dashboard. But what if you want to automate this?
This guide will walk you through all the steps required to export all such events automatically!
This guide includes:
1) Tweak RDS settings to send audit logs to CloudWatch
2) Create an S3 Bucket (to store CloudWatch audit logs)
3) Create an IAM Role (we will use this for Lambda automation)
4) Lambda (function to automate CloudWatch Logs export to S3)
Let’s begin!
Tweak RDS Settings to Send Audit Logs to CloudWatch
First of all, we need to tweak the RDS to send specific logs to CloudWatch. To do this, we will create one Option Group and one Parameter Group.
Create Option Group
⇒ Visit your Amazon RDS Dashboard.
⇒ Go to Option groups.
⇒ Click on the Create group button.
⇒ Enter the Name and Description for this group.
⇒ Select Engine: mysql and Engine Version: 8.0. Click on Create.
⇒ Click on the Option Group that you have created.
⇒ Under the Options section, click on the Add option button and choose the MARIADB_AUDIT_PLUGIN option (this is the audit plugin for RDS for MySQL).
⇒ Under the "SERVER_AUDIT_EVENTS" option setting, enter the event types that you want to audit, i.e. CONNECT, QUERY, QUERY_DDL, QUERY_DML, QUERY_DCL, QUERY_DML_NO_SELECT. If you want to log all the queries run on your tables, just enter QUERY in this field.
⇒ Set “Yes” for Apply Immediately. Click on Add Option.
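If you prefer to script this step, here is a minimal boto3 sketch that creates the same kind of option group and attaches the audit plugin. The group name, description, and event list are placeholders; adjust them to match what you chose above.

import boto3

rds = boto3.client("rds")

# Create the option group (name and description here are just examples)
rds.create_option_group(
    OptionGroupName="rds-audit-option-group",
    EngineName="mysql",
    MajorEngineVersion="8.0",
    OptionGroupDescription="Sends audit logs for our MySQL 8.0 RDS instance",
)

# Add the MariaDB audit plugin and choose which events to record
rds.modify_option_group(
    OptionGroupName="rds-audit-option-group",
    OptionsToInclude=[
        {
            "OptionName": "MARIADB_AUDIT_PLUGIN",
            "OptionSettings": [
                {"Name": "SERVER_AUDIT_EVENTS", "Value": "CONNECT,QUERY"},
            ],
        }
    ],
    ApplyImmediately=True,
)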
Create Parameter Group
⇒ Open the Amazon RDS Dashboard.
⇒ Click on Parameter groups.
⇒ Click on the Create parameter group button.
⇒ Select “Parameter group family”: mysql8.0
⇒ Select “Type”: DB Parameter Group
⇒ Enter Group Name and Description. Click on Create.
⇒ Now, click on the parameter group that you have created.
⇒ Under Parameters, edit the following parameters and set these values (a boto3 equivalent is sketched below):
⇒ Edit parameter "log_output": change its value from TABLE to FILE
⇒ Edit parameter "slow_query_log": change its value from blank to 1
⇒ Edit parameter "general_log": change its value from blank to 1
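The same parameter group can be created and edited with boto3; a minimal sketch follows, with the group name as a placeholder.

import boto3

rds = boto3.client("rds")

# Create a DB parameter group for the MySQL 8.0 family (example name)
rds.create_db_parameter_group(
    DBParameterGroupName="rds-audit-parameter-group",
    DBParameterGroupFamily="mysql8.0",
    Description="Writes general and slow query logs to FILE",
)

# Point log output at files and enable the general/slow query logs
rds.modify_db_parameter_group(
    DBParameterGroupName="rds-audit-parameter-group",
    Parameters=[
        {"ParameterName": "log_output", "ParameterValue": "FILE", "ApplyMethod": "immediate"},
        {"ParameterName": "slow_query_log", "ParameterValue": "1", "ApplyMethod": "immediate"},
        {"ParameterName": "general_log", "ParameterValue": "1", "ApplyMethod": "immediate"},
    ],
)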
We are all set with our Option Group and Parameter Group. Now, let's attach these groups to our RDS instance.
⇒ Go to your RDS Dashboard.
⇒ Click on the RDS instance that you have created.
⇒ Click on Modify.
⇒ Scroll down the page to the "Database options" section.
⇒ Under "DB parameter group", select the parameter group that you have created.
⇒ Under "Option group", select the option group that you have created.
⇒ Under "Log exports", make sure the log types you want published to CloudWatch Logs (e.g. audit, general, slow query) are selected.
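Attaching the two groups and enabling the log exports can also be scripted. Here is a sketch, assuming your instance identifier is my-rds-instance and the group names from the earlier sketches:

import boto3

rds = boto3.client("rds")

# Attach the option and parameter groups and publish logs to CloudWatch
# (all identifiers below are examples; replace them with your own)
rds.modify_db_instance(
    DBInstanceIdentifier="my-rds-instance",
    DBParameterGroupName="rds-audit-parameter-group",
    OptionGroupName="rds-audit-option-group",
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["audit", "general", "slowquery"],
    },
    ApplyImmediately=True,
)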
We are now all set with the RDS. Once you are done with the above steps, kindly allow 20-30 minutes for the data to populate in CloudWatch (NOTE: it may take longer depending on the size of the logs).
Once CloudWatch has gathered the logs, you will see the audit logs under your CloudWatch Dashboard, in a log group named like /aws/rds/instance/<your-instance-name>/audit.
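To confirm that events are arriving without clicking through the console, you can query the log group with boto3. The log group name below follows the RDS default pattern; adjust it to yours:

import boto3

logs = boto3.client("logs")

# Print the most recent audit events from the RDS log group
response = logs.filter_log_events(
    logGroupName="/aws/rds/instance/my-rds-instance/audit",
    limit=10,
)
for event in response["events"]:
    print(event["timestamp"], event["message"])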
Create an S3 Bucket (To Store CloudWatch Audit Logs)
⇒ Go to the Amazon S3 Dashboard.
⇒ Create a new bucket.
⇒ Once you have created the bucket, open it and navigate to the Permissions tab.
⇒ We need to allow CloudWatch Logs to put objects into the bucket (write access).
⇒ Click on the Edit button for Bucket policy and enter the following policy, replacing YOUR-REGION (e.g. us-east-1) and BUCKET_NAME_HERE with your own values:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.YOUR-REGION.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::BUCKET_NAME_HERE"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.YOUR-REGION.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
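If you would rather apply the same policy from code, here is a boto3 sketch; the bucket name and region are placeholders:

import json
import boto3

BUCKET = "my-rds-audit-logs-bucket"  # placeholder: your bucket name
REGION = "us-east-1"                 # placeholder: your region

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": f"logs.{REGION}.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Effect": "Allow",
            "Principal": {"Service": f"logs.{REGION}.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

# Attach the policy so CloudWatch Logs can write export objects into the bucket
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))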
Create IAM Role (We will use this for Lambda automation)
Now, we will create the IAM role that will be used in the Lambda function setup. The AWS Lambda service will require permission to log events and write to the S3 bucket we have created.
We will create the IAM Role “Export-RDS-CloudWatch-to-S3-Lambda” with “AmazonS3FullAccess”, “CloudWatchLogsFullAccess”, and “CloudWatchEventsFullAccess” policies.
⇒ Open your AWS IAM Dashboard.
⇒ Switch to Roles and click on the Create role button.
⇒ Under “Use case”, select Lambda and click on Next.
⇒ Search for “AmazonS3FullAccess” and select it.
⇒ Search for “CloudWatchLogsFullAccess” and select it.
⇒ Search for “CloudWatchEventsFullAccess” and select it.
⇒ Set Role Name: “Export-RDS-CloudWatch-to-S3-Lambda” and click on Create role.
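For scripting fans, the same role can be created with boto3. This sketch uses the role name and the three managed policies named above:

import json
import boto3

iam = boto3.client("iam")

# Trust policy letting the Lambda service assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="Export-RDS-CloudWatch-to-S3-Lambda",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the three managed policies used in this guide
for policy in ("AmazonS3FullAccess", "CloudWatchLogsFullAccess", "CloudWatchEventsFullAccess"):
    iam.attach_role_policy(
        RoleName="Export-RDS-CloudWatch-to-S3-Lambda",
        PolicyArn=f"arn:aws:iam::aws:policy/{policy}",
    )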
Lambda (Function to automate CloudWatch Logs export to S3)
A Lambda function lets you put your code under the function and run it on triggers, without provisioning or managing any server. Very easy and efficient!
⇒ Switch to the AWS Lambda Dashboard.
⇒ Click on Functions and then click on the Create function button.
⇒ Keep "Author from scratch" selected.
⇒ Set “Function name”: Export-RDS-CloudWatch-Logs-To-S3
⇒ Under “Runtime”, select Python 3.x.
⇒ Under “Permissions”, select “Use an existing role” and select the IAM role that we created in the previous step.
⇒ Click on Create function, navigate to the Code view, and enter the following script:
import boto3
import os
import datetime

# Configuration comes from the Lambda environment variables (set below)
GROUP_NAME = os.environ['GROUP_NAME']
DESTINATION_BUCKET = os.environ['DESTINATION_BUCKET']
PREFIX = os.environ['PREFIX']
NDAYS = os.environ['NDAYS']
nDays = int(NDAYS)

# Export a one-day window of logs: from nDays days ago to nDays-1 days ago
currentTime = datetime.datetime.now()
StartDate = currentTime - datetime.timedelta(days=nDays)
EndDate = currentTime - datetime.timedelta(days=nDays - 1)

# CloudWatch Logs expects timestamps in milliseconds since the epoch
fromDate = int(StartDate.timestamp() * 1000)
toDate = int(EndDate.timestamp() * 1000)

# Store each day's export under PREFIX/YYYY/MM/DD in the bucket
BUCKET_PREFIX = os.path.join(PREFIX, StartDate.strftime('%Y{0}%m{0}%d').format(os.path.sep))

def lambda_handler(event, context):
    client = boto3.client('logs')
    client.create_export_task(
        logGroupName=GROUP_NAME,
        fromTime=fromDate,
        to=toDate,
        destination=DESTINATION_BUCKET,
        destinationPrefix=BUCKET_PREFIX
    )
⇒ Now, click on Configuration ⇒ Environment variables.
⇒ We need to create 4 variables:
⇒ DESTINATION_BUCKET: <name of your S3 bucket>
⇒ GROUP_NAME: <name of the log group you are exporting>
⇒ NDAYS: 1
⇒ PREFIX: exported-logs
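One last piece: the function needs a trigger to run once a day. The IAM role we created already includes CloudWatchEventsFullAccess, so a scheduled CloudWatch Events (EventBridge) rule is a natural fit. You can add it from the Lambda console ("Add trigger"), or script it as in the sketch below, where the function ARN (including account ID and region) is a placeholder you must replace:

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Placeholder ARN: substitute your region, account ID, and function name
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:Export-RDS-CloudWatch-Logs-To-S3"

# Run the export once per day
rule = events.put_rule(
    Name="Daily-RDS-Log-Export",
    ScheduleExpression="rate(1 day)",
)

# Point the rule at the Lambda function
events.put_targets(
    Rule="Daily-RDS-Log-Export",
    Targets=[{"Id": "export-lambda", "Arn": FUNCTION_ARN}],
)

# Allow CloudWatch Events to invoke the function
lambda_client.add_permission(
    FunctionName="Export-RDS-CloudWatch-Logs-To-S3",
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)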
That is all! You have now set up an environment to export RDS logs to S3 automatically, and you can retain the logs as long as you want. If you still face any issues, feel free to post a comment here. We would love to hear from you!