How to Deploy AWS Auto Scaling Groups with Terraform

In this article, we will look at how to deploy and manage auto-scaling groups in Amazon Web Services (AWS), explaining what they are, their purpose and functions, and how they differ from a launch configuration, before showing how to create an auto-scaling group with Terraform examples.

We will cover:

  1. What is an AWS Auto Scaling Group?
  2. Prerequisites
  3. How to create an AWS Auto Scaling Group in Terraform

What is an AWS Auto Scaling Group?

AWS Auto Scaling Groups (ASGs) let you quickly scale and manage a collection of EC2 instances that run the same instance configuration. ASGs automatically scale the number of instances in response to changes in demand or other scaling policies. They ensure that the desired number of instances are always running, helping to maintain application availability and handle fluctuating workloads.

Scaling policies define the conditions under which the group scales up or down, such as CPU utilization, network traffic, or other custom metrics.

To use an auto-scaling group effectively, you need a clear understanding of your application’s scaling requirements so you can define appropriate policies. The best way to achieve that understanding is to monitor performance under load testing, so you can accurately determine the metrics to use in your scaling policies.

What is the difference between a launch configuration and an auto-scaling group?

Both launch configurations and auto-scaling groups are AWS components.

  • The launch configuration defines the configuration settings for EC2 instances.
  • The auto-scaling group manages the scaling and deployment of EC2 instances based on the defined configuration.

A launch configuration is a template that defines the specifications for instances when they are launched in an Auto Scaling group. It includes parameters such as the Amazon Machine Image (AMI), instance type, security groups, key pairs, user data, and other instance launch-related settings. The launch configuration acts as a blueprint for creating new instances.

Note that the use of launch configurations is discouraged in favor of launch templates; see the AWS EC2 documentation and the corresponding Terraform documentation for more details.

An auto-scaling group, on the other hand, is a logical grouping of instances that are launched based on the specifications provided in the launch configuration or launch template.
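
For reference, a minimal legacy launch configuration might look like the sketch below (the AMI, instance type, and security group IDs are placeholder assumptions, not values from this article). The launch template example later in this article is the recommended equivalent.

resource "aws_launch_configuration" "legacy" {
  name_prefix     = "test-"
  image_id        = "ami-1a2b3c"    # placeholder AMI ID
  instance_type   = "t2.micro"
  security_groups = ["sg-12345678"] # placeholder security group ID

  # Launch configurations are immutable, so create the replacement
  # before destroying the old one when anything changes.
  lifecycle {
    create_before_destroy = true
  }
}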

Prerequisites

Before we dive in and create an ASG, there are a few prerequisites that need to be in place to define the instance and its supporting infrastructure:

  • The Amazon Machine Image (AMI) serves as the template for launching instances in the Auto Scaling group.
  • You need to create a launch configuration or launch template, which defines the instance configuration parameters such as AMI, instance type, security groups, key pairs, and user data. This configuration/template acts as a blueprint for creating instances in the Auto Scaling group.
  • Supporting network infrastructure: a VPC, subnets for your EC2 instances to launch into, security groups to control inbound and outbound traffic to those instances, and optionally a load balancer in front of them. (Read more about managing security groups through Terraform; a minimal sketch of this supporting infrastructure follows this list.)
  • Determine the scaling policies you want to apply to the Auto Scaling group.
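
The sketch below shows what that supporting infrastructure could look like in Terraform. The CIDR ranges, names, and rules are illustrative assumptions only; adapt them to your own network design.

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # assumed address range
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24" # assumed subnet range
}

resource "aws_security_group" "instances" {
  name   = "asg-instances"
  vpc_id = aws_vpc.main.id

  # Allow all outbound traffic; add ingress rules to match your application.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}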

How to create an AWS Auto Scaling Group in Terraform

1. Define a launch configuration block and autoscaling group block

The example below defines a launch template (which you should use instead of a launch configuration) and then references it in the autoscaling group resource block.

provider "aws" {
  region = "us-west-2"
}

resource "aws_launch_template" "template" {
  name_prefix     = "test"
  image_id        = "ami-1a2b3c"
  instance_type   = "t2.micro"
  security_groups = ["sg-12345678"]
}

resource "aws_autoscaling_group" "autoscale" {
  name                  = "test-autoscaling-group"  
  availability_zones    = ["us-west-2"]
  desired_capacity      = 3
  max_size              = 6
  min_size              = 3
  health_check_type     = "EC2"
  termination_policies  = ["OldestInstance"]
  vpc_zone_identifier   = ["subnet-12345678"]

  launch_template {
    id      = aws_launch_template.template.id
    version = "$Latest"
  }
}

The launch template block specifies:

  • a name prefix to use for all versions of this launch template. Terraform will append a unique identifier to the prefix for each launch template created.
  • a placeholder AMI ID (in practice, you would typically look this up with a data source, as shown in the sketch after this list).
  • an instance type.
  • a security group to associate with the instances.
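
If you prefer not to hard-code the AMI ID, you could look up a recent Amazon Linux image with the aws_ami data source and reference it in the launch template as image_id = data.aws_ami.amazon_linux.id. The owner and name filter below are illustrative assumptions:

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  # Match Amazon Linux 2 HVM images; adjust the pattern for other distributions.
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}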

The autoscaling group block specifies:

  • the minimum and maximum number of instances allowed in the group
  • the desired count to launch (desired_capacity)
  • a launch template to use for each instance in the group
  • a list of subnets where the ASG will launch new instances
  • the health check type, set to EC2
  • the termination policies, set to OldestInstance (a list of policies that determines which instances are terminated first when scaling in).

2. Force a scaling operation using the AWS CLI

Once you have your autoscaling group applied, you can force a scaling operation using the AWS CLI.

The example below scales the group to five instances. (The desired capacity must stay within the group’s min_size and max_size.)

aws autoscaling set-desired-capacity --auto-scaling-group-name "test-autoscaling-group" --desired-capacity 5

3. Add a lifecycle argument to Terraform

If you have made a change directly using the CLI, outside of Terraform, the next time you run terraform plan you will notice that Terraform wants to adjust the desired_capacity back to the value specified in the Terraform configuration file (three in our example).

To ensure Terraform respects dynamic scaling operations and to stop it from scaling your instances when it changes other aspects of your configuration, use a lifecycle argument to ignore changes to the desired capacity.

  # Add this inside the aws_autoscaling_group "autoscale" resource block.
  lifecycle {
    ignore_changes = [desired_capacity]
  }

Terraform is not aware of the member instances of the group, only the capacity, and so it will not list the instances in your Terraform state file.
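
If you do need to reference the group’s instances elsewhere in your configuration, one option is to query them by the tag AWS applies automatically to instances launched by an ASG. A minimal sketch, using the aws_instances data source (resolved whenever Terraform runs, so it only reflects the instances that exist at that moment):

data "aws_instances" "asg_members" {
  instance_tags = {
    # AWS tags every instance an ASG launches with its group name.
    "aws:autoscaling:groupName" = aws_autoscaling_group.autoscale.name
  }
}

output "asg_instance_ids" {
  value = data.aws_instances.asg_members.ids
}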

4. Add an automated scaling event

An alternative to manually triggering a scaling operation is to add an automated scaling event. You can trigger scaling events in response to metric thresholds or other benchmarks.

For example, the policy below uses a CloudWatch metric alarm resource to scale down the number of EC2 instances by one when average CPU utilization is at or below 25% across five 30-second evaluation periods (2 minutes 30 seconds).

AWS will continue to scale down your instances until the group reaches the minimum capacity that was set (three).

resource "aws_autoscaling_policy" "scale_down" {
  name                   = "test_scale_down"
  autoscaling_group_name = aws_autoscaling_group.autoscale.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = -1
  cooldown               = 120
}

resource "aws_cloudwatch_metric_alarm" "scale_down" {
  alarm_description   = "Monitors CPU utilization"
  alarm_actions       = [aws_autoscaling_policy.scale_down.arn]
  alarm_name          = "test_scale_down"
  comparison_operator = "LessThanOrEqualToThreshold"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  threshold           = 25
  evaluation_periods  = 5
  period              = 30
  statistic           = "Average"

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.autoscale.name
  }
}
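
You would typically pair this with a scale-up policy and alarm pointing in the opposite direction. The sketch below mirrors the resources above; the 75% threshold is an illustrative assumption. AWS will stop scaling up once the group reaches its maximum capacity (six).

resource "aws_autoscaling_policy" "scale_up" {
  name                   = "test_scale_up"
  autoscaling_group_name = aws_autoscaling_group.autoscale.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1
  cooldown               = 120
}

resource "aws_cloudwatch_metric_alarm" "scale_up" {
  alarm_description   = "Monitors CPU utilization"
  alarm_actions       = [aws_autoscaling_policy.scale_up.arn]
  alarm_name          = "test_scale_up"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  threshold           = 75
  evaluation_periods  = 5
  period              = 30
  statistic           = "Average"

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.autoscale.name
  }
}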

You can also scale instances on a schedule using the aws_autoscaling_schedule resource. The example below shows an autoscaling schedule resource block:

resource "aws_autoscaling_schedule" "schedule" {
  scheduled_action_name  = "schedule"
  min_size               = 3
  max_size               = 6
  desired_capacity       = 3
  start_time             = "2023-06-06T18:00:00Z"
  end_time               = "2023-06-07T06:00:00Z"
  autoscaling_group_name = aws_autoscaling_group.autoscale.name
}
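
Scheduled actions can also recur. As an illustrative sketch (the cron expressions and capacities are assumptions), you could scale the group down every evening and back up every morning:

resource "aws_autoscaling_schedule" "scale_down_evening" {
  scheduled_action_name  = "scale-down-evening"
  min_size               = 1
  max_size               = 6
  desired_capacity       = 1
  recurrence             = "0 20 * * *" # every day at 20:00 UTC
  autoscaling_group_name = aws_autoscaling_group.autoscale.name
}

resource "aws_autoscaling_schedule" "scale_up_morning" {
  scheduled_action_name  = "scale-up-morning"
  min_size               = 3
  max_size               = 6
  desired_capacity       = 3
  recurrence             = "0 6 * * *" # every day at 06:00 UTC
  autoscaling_group_name = aws_autoscaling_group.autoscale.name
}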

Key points

By combining ASGs and launch templates with dynamic autoscaling policies backed by CloudWatch metric alarms, you can automatically scale your EC2 instances based on the metrics you specify.

We also encourage you to explore how Spacelift makes it easier to work with Terraform. If you need help managing your Terraform infrastructure, building more complex workflows, or managing AWS credentials per run instead of using a static pair on your local machine, Spacelift is a great tool for the job. You can try it for free by creating a trial account.

Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open source. OpenTofu is an open-source fork of Terraform (from version 1.5.6) that expands on Terraform’s existing concepts and offerings. It retains all the features and functionality that made Terraform popular among developers while introducing improvements and enhancements, and it works with your existing Terraform state file, so you won’t have any issues when migrating to it.

Manage Terraform Better and Faster

If you are struggling with Terraform automation and management, check out Spacelift. It helps you manage Terraform state, build more complex workflows, and adds several must-have capabilities for end-to-end infrastructure management.
