
How to Create and Manage an AWS S3 Bucket Using Terraform


Amazon S3 is a storage service offered by AWS for data lakes, websites, mobile applications, backup and restore, archives, enterprise applications, and more. S3 stands for Simple Storage Service, and it scales to meet individual or organizational needs. On top of providing a storage solution, Amazon S3 also offers comprehensive access management, which lets you set up very granular permissions.

This blog post is all about managing an AWS S3 bucket using Terraform. Terraform provides several S3-related resources; we will focus on three of them:

  1. aws_s3_bucket
  2. aws_s3_bucket_object
  3. aws_s3_bucket_public_access_block

All three are used for managing S3 buckets, but each serves a different purpose, which we will explore in this post.

An AWS S3 bucket supports versioning, replication, encryption, ACLs (access control lists), and bucket policies. Here is the list of S3 tasks we are going to complete using Terraform:

  1. Setting up AWS Access Credentials (prerequisite).
  2. Using aws_s3_bucket resource to create S3 Bucket.
  3. Uploading files to S3 bucket using aws_s3_bucket_object.
  4. Managing ACL (Access Control List) using aws_s3_bucket_public_access_block.
  5. Deleting the S3 bucket using Terraform.

Let’s start.

How to set up AWS Access Credentials

Terraform needs an Access Key and Secret Key to work with AWS resources. However, these are static, plain-text credentials, and they should not be stored in your Terraform files.

There are a couple of ways to handle this problem:

  1. Using the Spacelift AWS integration with IAM roles. Spacelift provides AWS integration out of the box. Here is a comprehensive guide from Spacelift that can help you integrate with AWS: AWS Integration Tutorial
  2. Generating AWS access credentials dynamically based on IAM policies and storing them in HashiCorp Vault. Read more about Creating IAM Policies with Terraform.

The above-mentioned methods will help you integrate with AWS in a more secure way. 

Spacelift Programmatic Setup of IAM Role — If you are using Spacelift, here is the Terraform code snippet you should integrate with your existing Terraform infrastructure codebase.

# Creating a Spacelift stack.
resource "spacelift_stack" "managed-stack" {
 name        = "Stack managed by Spacelift"
 repository  = "my-awesome-repo"
 branch      = "master"
}
# Creating an IAM role.
resource "aws_iam_role" "managed-stack-role" {
 name = "spacelift-managed-stack-role"
 # Setting up the trust relationship.
 assume_role_policy = jsonencode({
   Version = "2012-10-17"
   Statement = [
     jsondecode(
       spacelift_stack.managed-stack.aws_assume_role_policy_statement
     )
   ]
 })
}
# Attaching a powerful administrative policy to the stack role.
resource "aws_iam_role_policy_attachment" "managed-stack-role" {
 role       = aws_iam_role.managed-stack-role.name
 policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}
# Linking AWS role to the Spacelift stack.
resource "spacelift_stack_aws_role" "managed-stack-role" {
 stack_id = spacelift_stack.managed-stack.id
 role_arn = aws_iam_role.managed-stack-role.arn
}

HashiCorp Vault Programmatic Setup — If you are using HashiCorp Vault, here is the Terraform code snippet that defines the AWS secrets engine and the IAM role for managing an S3 bucket.

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "name" { default = "dynamic-aws-creds-vault-admin" }
 
terraform {
 backend "local" {
   path = "terraform.tfstate"
 }
}
 
provider "vault" {}
 
resource "vault_aws_secret_backend" "aws" {
 access_key = var.aws_access_key
 secret_key = var.aws_secret_key
 path       = "${var.name}-path"
 
 default_lease_ttl_seconds = "120"
 max_lease_ttl_seconds     = "240"
}
 
resource "vault_aws_secret_backend_role" "admin" {
 backend         = vault_aws_secret_backend.aws.path
 name            = "${var.name}-role"
 credential_type = "iam_user"
 
 policy_document = <<EOF
{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Effect": "Allow",
     "Action": [
       "iam:*", "ec2:*", "s3:*"
     ],
     "Resource": "*"
   }
 ]
}
EOF
}
 
output "backend" {
 value = vault_aws_secret_backend.aws.path
}
 
output "role" {
 value = vault_aws_secret_backend_role.admin.name
}

How to create an S3 bucket using Terraform - Example

1. Use aws_s3_bucket Resource to Create S3 Bucket

After setting up the credentials, let’s use the Terraform aws_s3_bucket resource to create the first S3 bucket.  

The S3 Bucket name we are going to use is – spacelift-test1-s3.

Here are the values needed for creating the S3 bucket:

  1. region—Specify the name of the region.
  2. bucket—The name of the bucket, i.e. spacelift-test1-s3.
  3. acl—Access control list. We will set the bucket access to private.

Create a Terraform file named – main.tf and use the following Terraform code snippet:

variable "name" { default = "dynamic-aws-creds-operator" }
variable "region" { default = "eu-central-1" }
variable "path" { default = "../vault-admin-workspace/terraform.tfstate" }
variable "ttl" { default = "1" }
 
terraform {
 backend "local" {
   path = "terraform.tfstate"
 }
}
 
data "terraform_remote_state" "admin" {
 backend = "local"
 
 config = {
   path = var.path
 }
}
 
data "vault_aws_access_credentials" "creds" {
 backend = data.terraform_remote_state.admin.outputs.backend
 role    = data.terraform_remote_state.admin.outputs.role
}
 
provider "aws" {
 region     = var.region
 access_key = data.vault_aws_access_credentials.creds.access_key
 secret_key = data.vault_aws_access_credentials.creds.secret_key
}
 
resource "aws_s3_bucket" "spacelift-test1-s3" {
   bucket = "spacelift-test1-s3"
   acl = "private"  
}
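
Note that this example pins version 3.23.0 of the AWS provider (see version.tf below), where the acl argument on aws_s3_bucket is still supported. In newer versions of the AWS provider (4.x and later) that argument is deprecated, and the ACL is managed through a separate resource instead. A minimal sketch of the equivalent configuration under a newer provider would look roughly like this:

resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3"
}

# In AWS provider 4.x and later, the ACL is configured via its own resource.
resource "aws_s3_bucket_acl" "spacelift-test1-s3" {
  bucket = aws_s3_bucket.spacelift-test1-s3.id
  acl    = "private"
}

Depending on your account's object ownership settings, you may also need an aws_s3_bucket_ownership_controls resource before an ACL can be applied.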

Along with main.tf, let's create a version.tf file to pin the AWS and Vault provider versions.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.23.0"
    }
  }
}

If you are going to use HashiCorp Vault instead of Spacelift, then you must also add the HashiCorp Vault provider inside the required_providers block:

    vault = {
      source  = "hashicorp/vault"
      version = "2.17.0"
    }
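
For reference, here is the complete version.tf with both providers pinned:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.23.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "2.17.0"
    }
  }
}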

Let’s apply the above Terraform configuration using Terraform commands:

1. $ terraform init – This is the first command we are going to run. It initializes the working directory and downloads the required providers.

$ terraform init

2.  $ terraform plan – The second command would be to run a Terraform plan. This command will tell you how many AWS resources are going to be added, changed or destroyed.

$ terraform plan

3. $ terraform apply – Apply the Terraform configuration using the terraform apply command, which will create the S3 bucket in AWS.

$ terraform apply

2. Upload Files to S3 Bucket Using aws_s3_bucket_object

In the previous step, we saw how to create an S3 bucket using the aws_s3_bucket Terraform resource. In this step, we are going to upload files into that same S3 bucket (spacelift-test1-s3).

When we want to perform additional operations on the S3 bucket (e.g., uploading files), we use the aws_s3_bucket_object Terraform resource.

To upload the files to the S3 bucket, we will extend the existing Terraform script from the previous step with a new aws_s3_bucket_object resource block.

We are going to upload the two sample text files:

  1. test1.txt
  2. test2.txt

Here is the screenshot of my project structure for uploading files, which includes my main.tf along with the test1.txt and test2.txt files.


As you can see from the project structure, I have kept my test files under the uploads directory, so I need to reference the relative path inside my Terraform file (main.tf).
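
The layout looks roughly like this:

.
├── main.tf
├── version.tf
└── uploads/
    ├── test1.txt
    └── test2.txt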

Here is my Terraform file:

variable "name" { default = "dynamic-aws-creds-operator" }
variable "region" { default = "eu-central-1" }
variable "path" { default = "../vault-admin-workspace/terraform.tfstate" }
variable "ttl" { default = "1" }
 
terraform {
 backend "local" {
   path = "terraform.tfstate"
 }
}
 
data "terraform_remote_state" "admin" {
 backend = "local"
 
 config = {
   path = var.path
 }
}
 
data "vault_aws_access_credentials" "creds" {
 backend = data.terraform_remote_state.admin.outputs.backend
 role    = data.terraform_remote_state.admin.outputs.role
}
 
provider "aws" {
 region     = var.region
 access_key = data.vault_aws_access_credentials.creds.access_key
 secret_key = data.vault_aws_access_credentials.creds.secret_key
}
 
resource "aws_s3_bucket" "spacelift-test1-s3" {
   bucket = "spacelift-test1-s3"
   acl = "private"  
}
 
resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")
  bucket = aws_s3_bucket.spacelift-test1-s3.id
  key = each.value
  source = "uploads/${each.value}"
}

Here are some additional notes on the above Terraform file:

  1. for_each = fileset("uploads/", "*") – Iterates over the files located under the uploads directory.
  2. bucket = aws_s3_bucket.spacelift-test1-s3.id – The ID of the S3 bucket we created earlier.
  3. key = each.value – The key (name) each object will have once it is in the bucket.
  4. source = "uploads/${each.value}" – The local path of each file that will be uploaded to the S3 bucket.
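
Note that, as written, Terraform will not detect changes to the contents of a file that keeps the same name, because only the file path is tracked. A minimal sketch of one way to handle this, assuming the same uploads/ directory, is to add an etag based on each file's MD5 hash:

resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")

  bucket = aws_s3_bucket.spacelift-test1-s3.id
  key    = each.value
  source = "uploads/${each.value}"

  # filemd5() hashes the local file; a changed hash forces the object to be re-uploaded.
  etag = filemd5("uploads/${each.value}")
}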

How to Apply the New Changes?

Since we are working in the same main.tf file and we have added a new Terraform resource block aws_s3_bucket_object, we can start with the Terraform plan command:

1. $ terraform plan – This command will show that two new resources (test1.txt, test2.txt) are going to be added to the S3 bucket. Because we created the S3 bucket previously, this time Terraform will only add the new objects.


2. $ terraform apply – Run the Terraform apply command and you should be able to upload the files to the S3 bucket.


Here is the screenshot of the S3 bucket from the AWS console:


There are many more things that you can do with Terraform and the S3 Bucket. Here is a guide on how to rename an AWS S3 bucket in Terraform which can help you rename your S3 bucket.

3. Manage ACL (Access Control List) Using aws_s3_bucket_public_access_block

Now, after uploading the files to an S3 bucket, the next Terraform resource which we are going to talk about is aws_s3_bucket_public_access_block. This resource is going to help you manage the public access associated with your S3 bucket. 

By default, each of these settings is false, which means public access through ACLs (Access Control Lists) and bucket policies is allowed. To restrict public access, you have to set the values to true.

Here, we are going to take the same example which we have taken previously for uploading the files to an S3 bucket: 

variable "name" { default = "dynamic-aws-creds-operator" }
variable "region" { default = "eu-central-1" }
variable "path" { default = "../vault-admin-workspace/terraform.tfstate" }
variable "ttl" { default = "1" }
 
terraform {
 backend "local" {
   path = "terraform.tfstate"
 }
}
 
data "terraform_remote_state" "admin" {
 backend = "local"
 
 config = {
   path = var.path
 }
}
 
data "vault_aws_access_credentials" "creds" {
 backend = data.terraform_remote_state.admin.outputs.backend
 role    = data.terraform_remote_state.admin.outputs.role
}
 
provider "aws" {
 region     = var.region
 access_key = data.vault_aws_access_credentials.creds.access_key
 secret_key = data.vault_aws_access_credentials.creds.secret_key
}
 
resource "aws_s3_bucket" "spacelift-test1-s3" {
 bucket = "spacelift-test1-s3"
 acl = "private"
}
resource "aws_s3_bucket_object" "object1" {
 for_each = fileset("uploads/", "*")
 bucket = aws_s3_bucket.spacelift-test1-s3.id
 key = each.value
 source = "uploads/${each.value}"
}
resource "aws_s3_bucket_public_access_block" "app" {
bucket = aws_s3_bucket.spacelift-test1-s3.id
block_public_acls       = true
block_public_policy     = true
ignore_public_acls      = true
restrict_public_buckets = true
}

You can see in the above example that we have restricted public access with the following settings:

  1.  block_public_acls   = true
  2.  block_public_policy = true
  3.  ignore_public_acls  = true
  4.  restrict_public_buckets = true

With the help of aws_s3_bucket_public_access_block, you can manage the public access settings on your S3 bucket.
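
The four settings can also be toggled individually. As a purely illustrative sketch (the resource name website is hypothetical), a bucket that should reject public ACLs but still allow a public bucket policy, for example a static website bucket, could replace the block above with something like this:

resource "aws_s3_bucket_public_access_block" "website" {
  bucket = aws_s3_bucket.spacelift-test1-s3.id

  # Reject any attempt to set a public ACL and ignore existing public ACLs.
  block_public_acls  = true
  ignore_public_acls = true

  # Leave bucket policies alone so a public-read policy can still be attached.
  block_public_policy     = false
  restrict_public_buckets = false
}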

4. Delete S3 Bucket Using Terraform

In the previous steps, we saw how to create an S3 bucket and how to upload files to it using the Terraform aws_s3_bucket and aws_s3_bucket_object resources.

In this section, you will see how to delete the S3 bucket once we are done working with it.

When you are using Terraform, the deletion part is easy. You simply run the $ terraform destroy command, and it deletes all the resources you created previously.

$ terraform destroy
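
One caveat: terraform destroy can fail to delete a bucket that still contains objects Terraform does not manage (for example, files uploaded manually through the console). A minimal sketch of one way around this, assuming the bucket definition from the earlier examples, is to set force_destroy on the bucket:

resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3"
  acl    = "private"

  # Allow Terraform to delete the bucket even if it still contains objects.
  # Use with care: everything in the bucket is permanently removed on destroy.
  force_destroy = true
}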

As you can see in the screenshot, Terraform deleted the resources in reverse order of creation, starting with the uploaded objects (test1.txt and test2.txt) and finally the bucket spacelift-test1-s3.

Key Points

Using Terraform to create an S3 bucket is relatively simple, but it is not recommended to use Terraform for uploading thousands of files to an S3 bucket. Terraform is an infrastructure provisioning tool and should not be used for data-intensive tasks. This blog post is a comprehensive guide to getting familiar with Terraform and the S3 bucket.

We also encourage you to explore how Spacelift makes it easy to work with Terraform. If you need help managing your Terraform infrastructure, building more complex workflows based on Terraform, or managing AWS credentials per run instead of using a static pair on your local machine, Spacelift is a fantastic tool for this. It supports Git workflows, policy as code, programmatic configuration, context sharing, drift detection, and many more great features right out of the box. You can check it out for free by creating a trial account.

Note: New versions of Terraform will be placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that will expand on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6. OpenTofu retained all the features and functionalities that had made Terraform popular among developers while also introducing improvements and enhancements. OpenTofu is the future of the Terraform ecosystem, and having a truly open-source project to support all your IaC needs is the main priority.
