Reduce AWS Costs using Lifecycle in AWS S3 via Lambda

Introduction

Enabling ALB Logs

This week I needed to enable ALB logs on AWS to troubleshoot an incident, so I forwarded the access logs to S3.

The main benefit of enabling ALB logs is the ability to troubleshoot and identify performance bottlenecks. By analyzing the logs, you can detect traffic patterns, identify errors and anomalies, and take corrective measures to improve application efficiency.


In addition, ALB logs are also valuable for compliance and security purposes. They record information such as IP addresses, URLs accessed, and HTTP status codes, which can be crucial for investigating suspicious activity, identifying potential attacks, and performing compliance audits.

Costs

However, it’s important to keep in mind that storing logs can come at a significant cost, especially in high-demand or request-heavy environments. Logs can take up considerable space on your data storage, which may incur additional charges.

To mitigate the costs associated with ALB logs, it is recommended that you implement an efficient management strategy. This can include setting up retention policies to limit how long logs are stored, using log compression to reduce file sizes, and using data analysis services to process and filter logs more efficiently.

It is also important to consider properly configuring access permissions to ALB logs. Ensuring that only relevant teams and individuals have access to logs can help prevent sensitive information leaks and minimize security risks.

In summary, enabling ALB logs on AWS is critical for monitoring and analyzing network traffic and ensuring proper application performance and security. However, it is important to be aware of the costs associated with storing logs and adopt efficient management practices to optimize usage and minimize unnecessary expenses.

Problem

As many know, the cost of storing a massive amount of logs in S3 is enormous, so I decided to enable S3's own lifecycle feature, scoped to the specific path where the ALB logs live, so that it would delete logs older than 2 days.
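
For reference, the rule I set up is roughly equivalent to the boto3 sketch below (the rule ID expire-alb-logs is just an illustrative name; the bucket and prefix are the ones used later in this article):

import boto3

# Roughly the prefix-scoped lifecycle rule configured through the console
s3 = boto3.client('s3')
s3.put_bucket_lifecycle_configuration(
    Bucket='devops-mind',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'expire-alb-logs',  # illustrative rule name
                'Filter': {'Prefix': 'tshoot-incidente-alb/AWSLogs/'},
                'Status': 'Enabled',
                'Expiration': {'Days': 2}
            }
        ]
    }
)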

However, 2 days passed and the objects were still in the bucket, generating unnecessary costs.

In another bucket, where I had enabled the lifecycle for the entire bucket, the process worked as expected.

aws bill

Solution

Since the solution offered by AWS was not working as expected, I decided to take another approach, creating a Python script that performs this lifecycle process.

Requirements

To create a structure where the lifecycle process runs automatically, similar to what I would achieve through the S3 configuration, I needed the following:

  • S3 Bucket
  • AWS Lambda in Python
  • Lambda Role Policy
  • Bucket policy
  • Cron configuration on Amazon EventBridge

Creating the Lambda

The first step is to access the AWS console and go to the Lambda service.

Within Lambda, access:

  • The Functions menu
  • Click on Create function

In the window that opens, leave the Author from scratch option checked. In Function name I used “s3-lifecycle” (you can choose whatever name you find most appropriate), and in Runtime select “Python 3.9”.

Here is an example image:

image

Leave the rest as is and click Create function.

A screen like this will be shown:

image 1

In the snippet with the example Python code, let’s remove the existing lines and add our own code:

import boto3
from datetime import datetime, timedelta, timezone

def delete_objects(bucket_name, prefix, days):
    s3 = boto3.client('s3')
    cutoff_date = datetime.now(timezone.utc) - timedelta(days=days)

    objects_to_delete = []

    # List every object under the prefix, handling pagination
    paginator = s3.get_paginator('list_objects_v2')
    page_iterator = paginator.paginate(Bucket=bucket_name, Prefix=prefix)

    for page in page_iterator:
        for obj in page.get('Contents', []):
            # LastModified is returned as a timezone-aware UTC datetime
            if obj['LastModified'] < cutoff_date:
                objects_to_delete.append({'Key': obj['Key']})

    if objects_to_delete:
        # delete_objects accepts at most 1000 keys per call, so delete in batches
        for i in range(0, len(objects_to_delete), 1000):
            batch = objects_to_delete[i:i + 1000]
            s3.delete_objects(Bucket=bucket_name, Delete={'Objects': batch})
        print(f'{len(objects_to_delete)} objects deleted.')
    else:
        print('No objects found to delete.')

def lambda_handler(event, context):
    bucket_name = 'devops-mind'
    prefix = 'tshoot-incidente-alb/AWSLogs/'
    days = 2

    delete_objects(bucket_name, prefix, days)

In this script, you only need to adjust the following fields:

  • bucket_name
    • Name of the bucket where the objects are stored.
  • prefix
    • Prefix/path where the objects are stored (the logs in our case).
  • days
    • Number of days after which objects are considered old and can be deleted.

After adjusting the code, a “Changes not deployed” message will appear; click Deploy.

image 2
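
If you prefer to script the packaging and deployment of the function instead of clicking through the console, a minimal boto3 sketch would look like this (the local file name lambda_function.py and the role ARN are assumptions, not values taken from the console walkthrough):

import io
import zipfile

import boto3

# Zip the handler file in memory and create the function via the API
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, 'w') as zf:
    zf.write('lambda_function.py')  # assumed local file containing the code above

lambda_client = boto3.client('lambda')
lambda_client.create_function(
    FunctionName='s3-lifecycle',
    Runtime='python3.9',
    Role='arn:aws:iam::123456:role/s3-lifecycle-role-fggxxkgz',  # assumed role ARN
    Handler='lambda_function.lambda_handler',
    Code={'ZipFile': buffer.getvalue()},
    Timeout=60,  # give the function time to list and delete many objects
)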

Before using the Lambda or configuring EventBridge, we need to adjust the permissions, both on the S3 bucket and on the role used by the Lambda.

Adjusting Role and Bucket policies

For the whole process to work properly, permissions related to the S3 bucket are required.

Bucket Policy

Assuming you already have a bucket in S3 (I won’t cover the bucket creation part in this article), go to:

  • Permissions tab.
  • In the Bucket policy section, click Edit


Let’s add the following policy:

{
    "Version": "2008-10-17",
    "Id": "Policy1335892530063",
    "Statement": [
        {
            "Sid": "DevOps-Mind-lambda",
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.us-east-1.amazonaws.com"
            },
            "Action": [
                "s3:*"
            ],
            "Resource": "arn:aws:s3:::devops-mind/tshoot-incidente-alb/AWSLogs/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "123456"
                }
            }
        }
    ]
}

Edit the fields according to the structure of your objects in S3.

It is also necessary to edit the SourceAccount; I chose to restrict permissions to resources of a specific account only.

For Action we allowed everything (s3:*), although we could grant more specific actions; to keep the article simpler, we will continue to allow all of them.

image 3

After finishing the adjustments in the policy, click on Save changes.
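
If you would rather apply the bucket policy programmatically, here is a small boto3 sketch, assuming the JSON above was saved locally as bucket-policy.json (an assumed file name):

import boto3

# Apply the bucket policy shown above, read from a local file (assumed name)
with open('bucket-policy.json') as f:
    policy_json = f.read()

s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket='devops-mind', Policy=policy_json)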

Policy for Role

When we created the Lambda, a role was created along with it.

In my case the role is s3-lifecycle-role-fggxxkgz:

image 4

We need to create a policy and attach it to this role to ensure that it has the necessary privileges on our S3 bucket.

Access the IAM service on AWS, click on Policies and then on the Create policy button, as highlighted in yellow in the image below:

image 5

Let’s use the following code to create our policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DevOpsMindBucket1",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Sid": "DevOpsMindBucket2",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::devops-mind/tshoot-incidente-alb/AWSLogs/*",
                "arn:aws:s3:::devops-mind"
            ]
        }
    ]
}

Our screen will look like this:

image 6

Define a name for the policy and click Create policy:

image 7

Once this is done, look for the role in IAM.

image 8

Open the role and click on Add permissions.

After that, click on Attach policies:

image 9

Select the policy in the search.

Click Add Permissions.

image 10

Expected outcome:

image 11
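
The same create-and-attach steps can also be scripted; here is a sketch with boto3, assuming the policy JSON above is saved as role-policy.json and that the policy is named s3-lifecycle-policy (both names are placeholders):

import boto3

iam = boto3.client('iam')

# Create the policy from the JSON shown above (file and policy names are assumptions)
with open('role-policy.json') as f:
    policy_document = f.read()

response = iam.create_policy(
    PolicyName='s3-lifecycle-policy',
    PolicyDocument=policy_document,
)

# Attach the new policy to the role that was created along with the Lambda
iam.attach_role_policy(
    RoleName='s3-lifecycle-role-fggxxkgz',
    PolicyArn=response['Policy']['Arn'],
)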

Configuring automation

For the lifecycle process to occur as expected, we need to configure a trigger in our Lambda.

The easiest way is to go to the AWS console, access our Lambda again and click on Add trigger, as highlighted in yellow:

image 12

Configure the event source to use EventBridge.

Give the rule a name and add a description.

In Schedule expression, add the following value:

cron(0 12 * * ? *)

The cron expression cron(0 12 * * ? *) in an Amazon EventBridge rule defines an event schedule that occurs every day at 12:00 (noon) UTC.

Let’s analyze the cron expression in detail:

  • The first field (0) indicates the minutes value. In this case, 0 means the event will fire at minute zero of the scheduled hour.
  • The second field (12) indicates the hour value. In this case, 12 indicates that the event will occur at 12 o’clock.
  • The third field (*) indicates the day of the month value. In this case, * means that the event will occur on any day of the month.
  • The fourth field (*) indicates the month value. In this case, * means the event will occur in any month.
  • The fifth field (?) is the day-of-week value. Because the day-of-month and day-of-week fields cannot both be specified, ? indicates that no particular day of the week is set.
  • The sixth field (*) indicates the year value. In this case, * means that the event will take place in any year.

Therefore, the event rule with this cron expression will fire every day at exactly 12:00 (noon) UTC.

After configuring all the fields, we will have something like this:

image 13

Click on Add to finish the process.

Expected outcome:

image 14
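
For completeness, the same schedule and trigger can also be created through the API; here is a sketch with boto3, assuming the rule name s3-lifecycle-daily and the account/region placeholders used earlier in the article:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Create the scheduled rule (rule name is a placeholder)
rule = events.put_rule(
    Name='s3-lifecycle-daily',
    ScheduleExpression='cron(0 12 * * ? *)',
    State='ENABLED',
    Description='Runs the s3-lifecycle Lambda every day at 12:00 UTC',
)

# Allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName='s3-lifecycle',
    StatementId='eventbridge-s3-lifecycle',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)

# Point the rule at the Lambda function (ARN uses the placeholder account/region)
events.put_targets(
    Rule='s3-lifecycle-daily',
    Targets=[{'Id': 's3-lifecycle', 'Arn': 'arn:aws:lambda:us-east-1:123456:function:s3-lifecycle'}],
)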


Conclusion

When the S3 lifecycle doesn’t work properly, it can be frustrating to deal with unnecessary storage of objects or failure to delete expired items. Fortunately, by using AWS Lambdas, you can create a custom process to manage the lifecycle of objects in an automated and efficient way.

Through Lambdas, you can solve specific lifecycle issues in S3, ensuring that objects are correctly transitioned or deleted according to your business rules. In addition, Lambdas offer flexibility and scalability, allowing you to adapt the process to your changing needs.

By using AWS Lambdas to automate the S3 lifecycle process, you can keep your storage optimized and reduce unnecessary costs. Julius would certainly approve of these actions.

julius a

