How to Use the aws_s3_bucket_notification Resource in Terraform


The aws_s3_bucket_notification resource plays a key role in building event-driven architectures with S3. It allows seamless integration with other AWS services like Lambda, SNS, and SQS, enabling automated workflows in response to bucket activity. 

Important notes up front

  • S3 uses a single notification configuration per bucket. Keep all targets in one aws_s3_bucket_notification resource, or they will overwrite each other.
  • If you prefer rules-based routing without SNS, SQS, or Lambda targets, you can also have S3 publish events to Amazon EventBridge by setting eventbridge = true in the notification configuration (a minimal sketch follows this list). EventBridge delivery is a bucket-wide toggle rather than a per-rule destination, so it can be enabled alongside SNS, SQS, and Lambda targets.
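
A minimal sketch of EventBridge delivery (the bucket reference assumes the bucket defined in Example 1 below):

resource "aws_s3_bucket_notification" "eventbridge" {
  bucket = aws_s3_bucket.my_bucket.id

  # Send all events on this bucket to the default EventBridge event bus
  eventbridge = true
}

From there, EventBridge rules can filter and route the events to any supported target, including multiple destinations per event.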

What is the aws_s3_bucket_notification resource?

The aws_s3_bucket_notification resource in Terraform is used to configure notifications for an Amazon S3 bucket. These notifications allow the S3 service to send events to other AWS services when specific actions occur on the bucket or its objects. 

Common targets for such notifications include AWS Lambda functions, Amazon Simple Notification Service (SNS) topics, and Amazon Simple Queue Service (SQS) queues. 

These notifications are helpful in designing event-driven architectures, where object-level operations in S3, such as file uploads or deletions, can trigger further processing automatically.

Example 1: Sending S3 events to an SNS topic

In this example, we’re creating an S3 bucket and an SNS topic. Then, we set up a notification so that anytime someone uploads a file to the bucket, an event is sent to the SNS topic. 

# S3 bucket that emits the events
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-example-bucket-12345"
}

# SNS topic that receives the events
resource "aws_sns_topic" "s3_events" {
  name = "s3-events-topic"
}

data "aws_caller_identity" "current" {}

# Topic policy that allows S3 (from this account and bucket only) to publish
resource "aws_sns_topic_policy" "allow_s3" {
  arn    = aws_sns_topic.s3_events.arn
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect = "Allow",
      Principal = { Service = "s3.amazonaws.com" },
      Action   = "SNS:Publish",
      Resource = aws_sns_topic.s3_events.arn,
      Condition = {
        StringEquals = { "aws:SourceAccount" = data.aws_caller_identity.current.account_id },
        ArnLike      = { "aws:SourceArn" = aws_s3_bucket.my_bucket.arn }
      }
    }]
  })
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.my_bucket.id

  topic {
    topic_arn = aws_sns_topic.s3_events.arn
    events    = ["s3:ObjectCreated:*"]
  }
}

events = ["s3:ObjectCreated:*"] means we want to catch all types of object creation events, whether they are standard uploads, multipart uploads, or anything else that adds a file to the bucket. A separate depends_on is not needed here because the reference to the topic ARN already creates the dependency.
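
If you only need specific creation paths, you can list concrete event types instead of the wildcard. A sketch of the same notification, narrowed to plain PUT uploads and completed multipart uploads:

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.my_bucket.id

  topic {
    topic_arn = aws_sns_topic.s3_events.arn
    # Only these creation event types produce a notification
    events    = ["s3:ObjectCreated:Put", "s3:ObjectCreated:CompleteMultipartUpload"]
  }
}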

The topic policy is required so S3 can publish to the topic. Without it, deliveries will not arrive. It is recommended that you use both aws:SourceArn (bucket ARN) and aws:SourceAccount in the policy condition.

Example 2: Trigger a Lambda function when a file is deleted

Here, we’re building a setup where an S3 bucket triggers a Lambda function every time a file is deleted. We include the Lambda permission that allows S3 to invoke the function, and we ensure the notification waits for that permission to exist. 

This example uses the Node.js 22 (nodejs22.x) runtime.

resource "aws_s3_bucket" "my_bucket" {
  bucket = "lambda-trigger-bucket-45678"
}

# Execution role the function assumes at runtime
resource "aws_iam_role" "lambda_exec" {
  name = "lambda_exec_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Effect = "Allow",
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_lambda_function" "s3_lambda" {
  filename         = "function.zip"
  function_name    = "S3DeleteTrigger"
  role             = aws_iam_role.lambda_exec.arn
  handler          = "index.handler"
  runtime          = "nodejs22.x"
  source_code_hash = filebase64sha256("function.zip")
}

# Resource-based policy that lets S3 invoke the function from this bucket
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.s3_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.my_bucket.arn
}

resource "aws_s3_bucket_notification" "lambda_notification" {
  bucket = aws_s3_bucket.my_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.s3_lambda.arn
    events              = ["s3:ObjectRemoved:*"]
  }

  # The permission must exist before S3 validates the destination
  depends_on = [aws_lambda_permission.allow_s3]
}

S3 is set up to notify the Lambda function whenever any type of “object removed” event occurs, including deletions made manually or programmatically.

depends_on ensures Terraform doesn't create the notification before the Lambda permission exists: S3 validates the destination when the notification is created, so the permission must already be in place. The function itself needs no explicit entry because the reference to its ARN already orders it. Optionally, attach the AWS-managed AWSLambdaBasicExecutionRole policy to the execution role so the function can write to CloudWatch Logs, as sketched below.
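
A minimal sketch of that attachment:

resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}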

In a real deployment, you must include the aws_lambda_permission resource as shown, so S3 is allowed to invoke the function.

Example 3: Send upload events for .jpg files to an SQS queue

In this setup, we start by creating an S3 bucket named image-upload-bucket-98765. This bucket might be used by an application or a website where users upload image files. 

Then, we create an SQS queue named s3-image-upload-queue that will receive event messages whenever new images are added to the bucket. We also attach a queue policy that lets S3 send messages to the queue. If the queue uses SSE-KMS, grant the S3 service principal permission to use the key.

resource "aws_s3_bucket" "image_bucket" {
  bucket = "image-upload-bucket-98765"
}

resource "aws_sqs_queue" "image_queue" {
  name = "s3-image-upload-queue"
}

data "aws_caller_identity" "current" {}

# Queue policy that allows S3 (from this account and bucket only) to send messages
resource "aws_sqs_queue_policy" "allow_s3" {
  queue_url = aws_sqs_queue.image_queue.id
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect = "Allow",
      Principal = { Service = "s3.amazonaws.com" },
      Action   = "sqs:SendMessage",
      Resource = aws_sqs_queue.image_queue.arn,
      Condition = {
        StringEquals = { "aws:SourceAccount" = data.aws_caller_identity.current.account_id },
        ArnLike      = { "aws:SourceArn" = aws_s3_bucket.image_bucket.arn }
      }
    }]
  })
}

resource "aws_s3_bucket_notification" "image_upload_notification" {
  bucket = aws_s3_bucket.image_bucket.id

  queue {
    queue_arn     = aws_sqs_queue.image_queue.arn
    events        = ["s3:ObjectCreated:Put"]
    # Only objects under images/ that end in .jpg trigger a message
    filter_prefix = "images/"
    filter_suffix = ".jpg"
  }

  # The queue policy must exist before S3 validates the destination
  depends_on = [aws_sqs_queue_policy.allow_s3]
}

The key part here is the queue block inside the aws_s3_bucket_notification resource. We specify that we only want notifications for a specific type of event: s3:ObjectCreated:Put. This means the event is triggered only when an object is uploaded via a PUT request, which typically corresponds to standard file uploads.

We also use two filters: filter_prefix and filter_suffix.

  • The prefix is set to images/, which means this rule only applies to files uploaded under a folder or key prefix named images/.
  • The suffix .jpg further narrows it down to JPEG images. For example, a file like images/profile1.jpg would trigger the event, while docs/readme.txt or images/logo.png would not.

Filters do not support wildcards, so use concrete prefix and suffix values. Also note that if the queue uses KMS encryption, you must allow S3 to use the key, or messages will fail to arrive; a sketch of the required key policy follows.
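
A minimal sketch for the SSE-KMS case, assuming a customer-managed key (the key resource and the encrypted queue variant here are hypothetical). Per AWS guidance, the key policy must let the S3 service principal generate data keys and decrypt:

resource "aws_kms_key" "queue_key" {
  description = "CMK for the S3 notification queue"
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        # Keep the account as key administrator
        Effect    = "Allow",
        Principal = { AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" },
        Action    = "kms:*",
        Resource  = "*"
      },
      {
        # Let S3 encrypt the messages it sends to the queue
        Effect    = "Allow",
        Principal = { Service = "s3.amazonaws.com" },
        Action    = ["kms:GenerateDataKey", "kms:Decrypt"],
        Resource  = "*"
      }
    ]
  })
}

resource "aws_sqs_queue" "encrypted_queue" {
  name              = "s3-image-upload-queue-encrypted"
  kms_master_key_id = aws_kms_key.queue_key.id
}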

We include depends_on on the notification because the queue policy must exist before S3 can validate the destination and deliver messages. In general, you do not need an explicit depends_on when resource references already establish the ordering; keep it only when enforcing creation after permissions or policies that the notification does not reference directly.

Key points

The aws_s3_bucket_notification Terraform resource configures event notifications on an existing S3 bucket. You can target Lambda functions, SNS topics, or SQS queues, or enable EventBridge delivery. Because a bucket holds a single notification configuration, all targets belong in one resource, as the sketch below shows.
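
A minimal sketch of one resource carrying several targets (resource names reuse the examples above and are illustrative; each target's policy or permission must be scoped to this bucket):

resource "aws_s3_bucket_notification" "all_targets" {
  bucket = aws_s3_bucket.my_bucket.id

  # Rules that share an event type must not overlap in prefix/suffix,
  # hence the distinct prefixes on the two ObjectCreated rules below.
  topic {
    topic_arn     = aws_sns_topic.s3_events.arn
    events        = ["s3:ObjectCreated:*"]
    filter_prefix = "logs/"
  }

  lambda_function {
    lambda_function_arn = aws_lambda_function.s3_lambda.arn
    events              = ["s3:ObjectRemoved:*"]
  }

  queue {
    queue_arn     = aws_sqs_queue.image_queue.arn
    events        = ["s3:ObjectCreated:Put"]
    filter_prefix = "images/"
    filter_suffix = ".jpg"
  }

  depends_on = [
    aws_sns_topic_policy.allow_s3,
    aws_lambda_permission.allow_s3,
    aws_sqs_queue_policy.allow_s3,
  ]
}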

Terraform is really powerful, but to achieve an end-to-end secure GitOps approach, you need to use a product that can run your Terraform workflows. Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:

  • Policies (based on Open Policy Agent)
  • Multi-IaC workflows
  • Self-service infrastructure
  • Integrations with any third-party tools

If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.

Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.

