
How to Create an AWS RDS Instance Using Terraform


With AWS’s Relational Database Service (RDS), you can provision hosted relational databases that are simpler to operate and maintain than self-managed setups. Using Terraform, we can provision, scale, and adjust RDS instances and clusters seamlessly, giving us secure, programmatic, and declarative management of the entire RDS instance and cluster life cycle.

In this post, we explore various configuration options usually required by teams and enterprises for RDS databases. Let’s go through the following steps:

  1. Configure a basic RDS instance
  2. Provision RDS instance in a VPC network
  3. Configure backup and maintenance settings
  4. Configure monitoring and performance insights
  5. Manage parameter groups
  6. Set up access and security
  7. Manage HA and replication
  8. AWS RDS Terraform module

Note: All code examples discussed here are available in this GitHub repository.

1. Configure basic RDS instance

Creating an RDS database instance in AWS with Terraform is straightforward. The “aws_db_instance” resource block takes a few required arguments that define the essential characteristics of the database to be provisioned.

An example is shown below. In the upcoming sections we will build on the same configuration.

resource "aws_db_instance" "default" {
  allocated_storage = 10
  engine = "mysql"
  instance_class = "db.t3.micro"
  username = "foo"
  password = "foobarbaz"
  skip_final_snapshot = true // required to destroy
}

The attributes used here are briefly described below.

  1. allocated_storage: Storage allocated in gibibytes (GiB) for this database instance.
  2. engine: Choice of database engine. We have selected MySQL as the desired database engine. We can also select Postgres, MariaDB, Oracle, etc. You can refer to this for all possible values.
  3. instance_class: Defines the dimensions of the instance to provision based on factors like CPU, memory, networking, and storage. More options.
  4. username: The username of the main (admin) database user.
  5. password: Password for the main (admin) database user. This should be supplied securely, for example, via environment variables or a key store.
  6. skip_final_snapshot: Determines whether the final snapshot is skipped before deleting the database. The database can be created without this attribute; however, when destroying the instance with Terraform, the destroy fails unless this value is set to true or a final_snapshot_identifier is provided.
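As noted for the password attribute, hardcoding credentials should be avoided. A minimal sketch of supplying the password through a sensitive input variable instead (the variable name db_password is illustrative):

```hcl
# Declare a sensitive variable so the value is redacted in plan output
variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_db_instance" "default" {
  allocated_storage   = 10
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  username            = "foo"
  password            = var.db_password
  skip_final_snapshot = true
}
```

The value can then be supplied at run time, for example by exporting TF_VAR_db_password in the shell or injecting it from a secret store, so it never lands in version control.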

Example source directory

2. Provision RDS instance in a VPC network

Often databases are placed in a private subnet with secure network access defined in security groups and network ACLs.

In this section, we first create the VPC and then update the basic configuration for RDS introduced in the previous section.

The config for VPC is found here. It creates:

  1. A VPC
  2. Two subnets
  3. A security group with public ingress settings
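For reference, that VPC configuration is roughly as follows. The CIDR ranges and availability zones are illustrative, but the resource names (subnet_a, subnet_b, rds_sg) match those referenced in the RDS configuration later:

```hcl
resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"
}

# RDS subnet groups require subnets in at least two availability zones
resource "aws_subnet" "subnet_a" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-central-1a"
}

resource "aws_subnet" "subnet_b" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "eu-central-1b"
}

resource "aws_security_group" "rds_sg" {
  name   = "rds-sg"
  vpc_id = aws_vpc.my_vpc.id

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # public ingress, for demonstration only
  }
}
```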

To place RDS instances in VPC subnets in AWS, you need to create an RDS subnet group. A subnet group is a collection of subnets in which we can create and manage database instances.

The Terraform configuration below defines exactly that: a subnet group that includes the subnets created with the VPC earlier.

resource "aws_db_subnet_group" "my_db_subnet_group" {
  name = "my-db-subnet-group"
  subnet_ids = [aws_subnet.subnet_a.id, aws_subnet.subnet_b.id]

  tags = {
    Name = "My DB Subnet Group"
  }
}

To associate this subnet group with our database instance we created in the previous section, we update the configuration as shown below.

resource "aws_db_instance" "default" {
  allocated_storage = 10
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t2.micro"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  skip_final_snapshot = true
}

Here, we have also associated the database instance with the security group created along with the VPC. Additionally, we have added the engine_version attribute to pin a specific version of MySQL for our database.

Run terraform plan and terraform apply to provision this database instance with VPC.
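To retrieve the connection endpoint after provisioning, an output like the following can be added. This is a common convenience and not part of the original configuration:

```hcl
output "db_endpoint" {
  description = "Connection endpoint of the RDS instance"
  value       = aws_db_instance.default.endpoint
}
```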

Navigate to the AWS console > RDS > Databases > mydb > Connectivity & Security, and see if the database instance is placed in a VPC subnet and associated with the corresponding security group as configured above and shown in the screenshot below.


3. Configure backup and maintenance settings

In RDS, backup and maintenance are critical aspects of ensuring the availability, reliability, and durability of our database instances. RDS provides automated backup and maintenance functionalities to streamline the same. Automated backups enable the regular creation of snapshots of our database, allowing us to restore to specific points in time or recover from failures.

Additionally, RDS automates essential maintenance tasks such as applying operating system and database engine updates, which helps keep our database instances secure and up to date.

By configuring backup retention periods and preferred maintenance windows, we can tailor these operations to our requirements. This approach offloads the burden of manual backup management and maintenance scheduling, allowing us to focus on our applications while RDS takes care of crucial operational tasks behind the scenes.

To enable backup and maintenance features in our RDS MySQL instance, we add a few attributes to our RDS Terraform configuration, as shown below.

resource "aws_db_instance" "default" {
  allocated_storage = 10
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t2.micro"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7 # Number of days to retain automated backups
  backup_window = "03:00-04:00" # Preferred UTC backup window (hh24:mi-hh24:mi format)
  maintenance_window = "mon:04:00-mon:04:30" # Preferred UTC maintenance window

  # Enable automated backups
  skip_final_snapshot = false
  final_snapshot_identifier = "db-snap"

}

The inline comments for the additional attributes in the code above explain the purpose.

Note that we have now set skip_final_snapshot to false, as backups are created in the form of snapshots. The final_snapshot_identifier attribute is required when skip_final_snapshot is set to false.

Once the Terraform configuration is provisioned, navigate to the database and select the Maintenance & Backup tab.

Verify that the maintenance window is set for Monday (or a weekday of your choice) at 4 a.m. UTC, and that the backup window and backup retention period match the values specified in the configuration.

The configuration above results in the setting below.

4. Configure monitoring and performance insights

Monitoring and Performance Insights are integral components of managing Amazon RDS database instances effectively. RDS offers comprehensive monitoring tools that provide real-time visibility into the performance and health of our databases.

With Amazon CloudWatch integration, we can monitor various metrics such as CPU utilization, memory usage, storage capacity, and I/O activity. 

Performance Insights goes a step further, offering deeper insights into query performance by identifying and visualizing the most time-consuming queries. This helps in optimizing database performance, identifying bottlenecks, and improving application response times.

By utilizing these monitoring and insights capabilities, we can proactively detect anomalies, troubleshoot issues, and make informed decisions to ensure that our database workloads perform optimally and meet the demands of our applications.

To enable monitoring and performance insights in our RDS database instance, we need to provide a couple of attributes:

  1. monitoring_interval – the interval, in seconds, at which monitoring metrics are collected
  2. performance_insights_enabled – to be set to true to enable performance insights

Enabling monitoring creates a CloudWatch log group where all the logs are collected. This requires an IAM role with appropriate access to CloudWatch.

First, we create the config for the IAM role, as shown below. Here, we are creating a role and attaching a policy to access CloudWatch for monitoring logs creation.

resource "aws_iam_role" "rds_monitoring_role" {
  name = "rds-monitoring-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "monitoring.rds.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_policy_attachment" "rds_monitoring_attachment" {
  name = "rds-monitoring-attachment"
  roles = [aws_iam_role.rds_monitoring_role.name]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole"
}

Update the RDS configuration as shown below.

The comments explain the intentions of setting the additional attributes to enable monitoring and performance insights for our database instance.

Also, note that we have updated the allocated_storage and instance_class attributes to higher values, because Performance Insights is not supported on some of the smaller instance classes (such as db.t2.micro).

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t3.medium"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"

  skip_final_snapshot = false
  final_snapshot_identifier = "my-db"

  # Enable enhanced monitoring
  monitoring_interval = 60 # Interval in seconds (minimum 60 seconds)
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn

  # Enable performance insights
  performance_insights_enabled = true

}

Provision the above Terraform configuration, navigate to the RDS database, and click on the Monitoring tab. It should show various CloudWatch monitoring metrics in the form of graphs as shown below.


Click on the “Monitoring” dropdown on the top right to access the Performance Insights dashboard, as shown below.

Clicking on Performance Insights opens up a new tab/window, as shown below.

Thus, we have successfully configured monitoring and performance insights for the database instance using Terraform. 

Example source directory

5. Manage parameter groups

Until now, we have configured various database features like monitoring, performance insights, backup, and maintenance directly in the “aws_db_instance” resource.

However, database management involves a vast range of features, and as the Terraform documentation shows, not all of them can be configured in the aws_db_instance resource. Features like query optimization, caching, and memory allocation need to be managed through a different resource block.

Amazon RDS implements these features in parameter groups and associates them with the relevant databases.

A Parameter Group is a configuration setting that allows us to customize and fine-tune the behavior of our RDS database instances. It consists of a collection of database engine configuration parameters that influence various aspects of database performance, behavior, and functionality. With parameter groups, we can modify settings such as memory allocation, caching behavior, query optimization, replication behavior, and more tailored to our application’s specific requirements. 

Parameter groups are crucial for optimizing database performance and ensuring compatibility with our application’s needs. They are associated with RDS instances during or after their creation, allowing us to dynamically adjust settings without downtime. This flexibility empowers users to optimize their database environment by adjusting parameters to achieve better performance, reliability, and scalability according to their workload characteristics.

Separate sets of parameters are available for each kind of database engine, and parameter groups exist at both the cluster level and the instance level.

For the complete list of parameters for various kinds of databases, refer to this document. For the MySQL database instance provisioned in this example, we want to update the default value assigned to connect_timeout parameter from 10 seconds to 15 seconds.

Note: There might be some confusion while referring to the links provided in this document for MySQL. You can either use the link for “Aurora MySQL parameters” while working with RDS MySQL databases or directly refer to the MySQL documentation to configure the desired parameters.

We begin by creating a resource for a parameter group, where we include all the parameters we wish to configure. The resource block “aws_db_parameter_group” defines the parameter group in the form of Terraform IaC.

It has to be named and associated with a specific database family, as shown below.

resource "aws_db_parameter_group" "my_db_pmg" {
  name = "my-db-pg"
  family = "mysql5.7"

  parameter {
    name = "connect_timeout"
    value = "15"
  }

  # more parameters...
  # parameter {
    # name = "<parameter name>"
    # value = "<valid value>"
  # }
}

Specifying the family ensures that the parameters configured in this resource block exist and are valid for the given family of databases. The parameters are added in nested blocks in a straightforward “name-value” format. Depending on the level of customization desired, the parameter group resource block can grow to many lines of code.
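One detail worth noting: some engine parameters are dynamic and take effect immediately, while others are static and only take effect after a reboot. The parameter block accepts an optional apply_method attribute to handle the latter. A sketch extending the group above (the static parameter chosen here is illustrative):

```hcl
resource "aws_db_parameter_group" "my_db_pmg" {
  name   = "my-db-pg"
  family = "mysql5.7"

  # Dynamic parameter: applied immediately (the default apply_method)
  parameter {
    name  = "connect_timeout"
    value = "15"
  }

  # Static parameter: can only be applied on the next reboot
  parameter {
    name         = "innodb_buffer_pool_instances"
    value        = "4"
    apply_method = "pending-reboot"
  }
}
```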

Finally, we update the configuration for the database instance itself to associate it with the parameter group defined above. The last line in the configuration below refers to the parameter group to be associated with.

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t3.medium"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"
  skip_final_snapshot = false
  final_snapshot_identifier = "my-db"
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn
  performance_insights_enabled = true

  # Associate with parameter group
  parameter_group_name = aws_db_parameter_group.my_db_pmg.name
}

Apply this updated Terraform configuration and check if the parameter group is created with desired values and is associated with the database instance.

Parameter group with updated connect_timeout value.

Navigate to the Configuration tab of the database record in the AWS management console and verify that the parameter group is associated with it.

6. Set up access and security

Access and security are top considerations to safeguard sensitive data and maintain the integrity of our RDS database infrastructure. To ensure a robust security posture, it’s crucial to implement multi-factor authentication (MFA) for AWS accounts, utilize IAM roles and policies for controlled access, and adhere to network security practices by placing RDS instances within a VPC and configuring security groups and NACLs. 

We have already placed our RDS database instance in VPC subnets with basic network settings, which do not yet satisfy strict security requirements. Read more about how to configure a VPC using Terraform IaC.
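For instance, the permissive ingress rule could be narrowed so that only hosts inside the VPC (or a specific application subnet) can reach the database port. The CIDR below is illustrative:

```hcl
resource "aws_security_group" "rds_sg" {
  name   = "rds-sg"
  vpc_id = aws_vpc.my_vpc.id

  # Allow MySQL traffic only from within the VPC, not the public internet
  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"] # illustrative VPC CIDR
  }
}
```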

Encryption

Regular updates, database user privilege management, auditing, and monitoring mechanisms contribute to early threat detection and mitigation. Encryption of data at rest and in transit should be enforced using AWS KMS encryption and SSL/TLS protocols.
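For MySQL, encryption in transit can be enforced at the engine level through a parameter group. The require_secure_transport parameter (available in MySQL 5.7.8 and later) rejects client connections that do not use TLS; a sketch of a parameter group enforcing it:

```hcl
resource "aws_db_parameter_group" "my_db_pmg" {
  name   = "my-db-pg"
  family = "mysql5.7"

  # Reject client connections that do not use TLS
  parameter {
    name  = "require_secure_transport"
    value = "1"
  }
}
```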

To enable encryption for our example RDS database, first create the KMS key with the below resource configuration.

resource "aws_kms_key" "my_kms_key" {
  description = "My KMS Key for RDS Encryption"
  deletion_window_in_days = 30

  tags = {
    Name = "MyKMSKey"
  }
}

Update the RDS database resource configuration to enable encryption, and provide the key from the above resource as below. Comments indicate the additional attributes for this purpose.

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t3.medium"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"
  skip_final_snapshot = false
  final_snapshot_identifier = "my-db"
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn
  performance_insights_enabled = true
  # Enable storage encryption
  storage_encrypted = true
  # Specify the KMS key ID for encryption (replace with your own KMS key ARN)
  kms_key_id = aws_kms_key.my_kms_key.arn

  parameter_group_name = aws_db_parameter_group.my_db_pmg.name
}

Apply the Terraform configuration and verify that encryption is enabled on the RDS database and that it is associated with the KMS key.

AWS RDS security is an ongoing commitment that necessitates a holistic approach, incorporating industry best practices and vigilant adaptation to evolving security threats.

Example source directory

7. Manage HA and replication

AWS RDS offers robust high availability (HA) and replication solutions to ensure the availability and scalability of our database workloads. Multi-AZ (Availability Zone) deployment provides automatic failover capabilities, replicating your primary database to a standby instance in a different AZ. In the event of a failure, traffic seamlessly switches to the standby instance, minimizing downtime. 

Additionally, RDS supports read replicas, enabling you to offload read traffic from the primary instance and distribute the load across replicas, thereby improving performance. These replicas can be created within the same region or even cross-region for disaster recovery purposes. By strategically configuring RDS HA and replication, we can enhance both availability and scalability, catering to the demands of our applications while maintaining data integrity and minimizing disruptions.

For our example database, let us begin by enabling Multi-AZ to make the database more fault-tolerant.

To do this, simply add the “multi_az” attribute to the database resource block and set its value to true. The updated configuration looks like this.

resource "aws_db_instance" "default" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t3.medium"
  identifier = "mydb"
  username = "dbuser"
  password = "dbpassword"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name = aws_db_subnet_group.my_db_subnet_group.name

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"
  skip_final_snapshot = false
  final_snapshot_identifier = "my-db"
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn
  performance_insights_enabled = true
  storage_encrypted = true
  kms_key_id = aws_kms_key.my_kms_key.arn

  parameter_group_name = aws_db_parameter_group.my_db_pmg.name

  # Enable Multi-AZ deployment for high availability
  multi_az = true
}

Next, we have to create a replica to improve database performance during high-traffic periods.

Since the replica is essentially a database, we create another resource block similar to the “aws_db_instance” resource block defined above. The main difference is that the basic engine attributes are inherited from the source database, referenced via the replicate_source_db attribute.

The resource block for the read replica database is defined below.

resource "aws_db_instance" "replica" {
  replicate_source_db = aws_db_instance.default.identifier
  instance_class = "db.t3.medium"

  vpc_security_group_ids = [aws_security_group.rds_sg.id]

  backup_retention_period = 7
  backup_window = "03:00-04:00"
  maintenance_window = "mon:04:00-mon:04:30"
  skip_final_snapshot = false
  final_snapshot_identifier = "my-db-replica" # must differ from the source's snapshot identifier
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring_role.arn
  performance_insights_enabled = true
  storage_encrypted = true
  kms_key_id = aws_kms_key.my_kms_key.arn

  parameter_group_name = aws_db_parameter_group.my_db_pmg.name

  # Enable Multi-AZ deployment for high availability
  multi_az = true
}

Notice that we have replaced several attributes at the beginning with the single replicate_source_db attribute, since the replica instance inherits that information from the source database instance. The rest of the parameters are identical.

Applying this updated Terraform configuration creates the source database instance, a replica instance, and enables Multi-AZ on both of them as seen in the screenshot below. It may take time for the databases to be fully provisioned as the source database is created first, and then the replica database.


The database instances thus provisioned – both source and replica – reside in the same region, eu-central-1. It is also possible to replicate automated backups to a different region altogether for disaster recovery. The resource blocks below do the same.

The entire Terraform configuration so far has been provisioning all the resources in the eu-central-1 region. To create cross-region replicas, we have to explicitly provide a provider alias, as shown in the code below.

provider "aws" {
  region = "us-west-2"
  alias = "replica"
}

resource "aws_db_instance_automated_backups_replication" "default" {
  source_db_instance_arn = aws_db_instance.default.arn
  retention_period = 14
  kms_key_id = aws_kms_key.my_kms_key_us_west.arn

  provider = aws.replica
}

resource "aws_kms_key" "my_kms_key_us_west" {
  description = "My KMS Key for RDS Encryption"
  deletion_window_in_days = 30

  tags = {
    Name = "MyKMSKey"
  }

  provider = aws.replica
}

Apply the additional configuration above, and verify that automated backups of the source database are replicated to the us-west-2 region. This protects the system against data loss even if an entire region becomes unavailable.


AWS RDS Terraform module

Optionally, it is also possible to use this AWS RDS module published on the Terraform registry. The module provides complete example configurations for various database setups to get started quickly. An example is below.

module "rds_instance" {
  source = "terraform-aws-modules/rds/aws"
  version = "6.1.1"  # Specify the version of the module you want to use

  identifier = "my-db"
  engine = "mysql"
  engine_version = "5.7"
  instance_class = "db.t2.micro"
  allocated_storage = 20
  db_name = "mydb"
  username = "dbuser"
  password = "dbpassword"
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot = true

  // Other configuration options as needed
}

However, note that to enable specific RDS database configurations via this module, it becomes imperative to know about various parameter group configurations as well as other attribute settings.
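For example, the module can create and attach a custom parameter group when the relevant inputs are set. The input names used below (create_db_parameter_group, family, parameters) reflect the module’s documented interface at the time of writing and should be verified against the module version in use:

```hcl
module "rds_instance" {
  source  = "terraform-aws-modules/rds/aws"
  version = "6.1.1"

  identifier        = "my-db"
  engine            = "mysql"
  engine_version    = "5.7"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "dbuser"
  password          = "dbpassword"

  # Ask the module to manage a parameter group instead of using the default
  create_db_parameter_group = true
  family                    = "mysql5.7"
  parameters = [
    {
      name  = "connect_timeout"
      value = "15"
    }
  ]

  skip_final_snapshot = true
}
```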

Key points

Managing AWS RDS database instances using Terraform ensures consistency, flexibility, and ease. With Terraform, we can create, adjust, and control our RDS databases with far less room for error. We can set things up just the way we want: how often backups happen, who can access the database, and even maintaining copies in different places.

We can also automate tasks like applying updates and scaling our databases when needed. This combination keeps our databases flexible and resilient, adapting to what our business needs.

We encourage you also to explore how Spacelift makes it easy to work with Terraform. If you need any help managing your Terraform infrastructure, building more complex workflows based on Terraform, and managing AWS credentials per run, instead of using a static pair on your local machine, Spacelift is a fantastic tool for this. You can check it for free by creating a trial account.

Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open source. OpenTofu is an open-source fork of Terraform, created from Terraform version 1.5.6, that expands on Terraform’s existing concepts and offerings. It retains all the features and functionality that made Terraform popular among developers while introducing improvements and enhancements. OpenTofu does not maintain its own providers and modules, but it serves them through its own registry.

Terraform Management Made Easy

Spacelift effectively manages Terraform state and more complex workflows, and supports policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and many more features.

Start free trial