
How to Use Ansible For DevOps? Hands-On Tutorial


DevOps is a practice typically associated with the application build and deployment lifecycle. More broadly, however, DevOps also helps automate a wide range of procedures.

In the past few years, CI/CD pipelines have been used to automate infrastructure management with IaC, mobile app development, microservices deployment, and more. This flexibility, combined with a streamlined and standardized approach, enables organizations to release more frequently and consistently, with less turbulence to the business.

In this post, we will explore how to use Ansible in DevOps to manage infrastructure and configurations. We will touch upon several key points, such as secrets management, dynamic inventory in Ansible, and SSH access for Ansible.

What we’ll cover: 

  1. What is Ansible?
  2. How does Ansible work?
  3. Why use Ansible in DevOps
  4. How to configure Ansible for DevOps
  5. DevOps pipeline example: Using GitHub Actions to automate Ansible playbooks

Note: The file names mentioned in this blog post do not follow any strict naming convention. They are used for cross-referencing within the example.

What is Ansible?

Ansible is an automation tool for managing infrastructure and application configuration. It employs an agentless architecture to provision cloud computing resources and configure them with the required patches, dependencies, and applications to support business processes. Playbooks play a key role in carrying out these automation procedures.

Ansible uses declarative language in YAML format to define the tasks to be run in sequence or in parallel. It logs into these machines using an SSH connection to perform these tasks. Ansible integrates with multiple cloud providers to create immutable infrastructure and perform idempotent configuration management operations.

Key features of Ansible

  • Agentless architecture – Unlike other automation tools, Ansible does not require agents on managed nodes, reducing overhead and simplifying management.
  • Declarative and idempotent – Ensures that configurations are applied consistently without unintended changes.
  • YAML-based playbooks – Uses human-readable YAML syntax for defining automation workflows, making it accessible to both developers and system administrators.
  • Extensive module library – Provides built-in modules for managing infrastructure, applications, and security configurations.
  • Integration with DevOps and CI/CD – Works with tools like Spacelift, Jenkins, Git, and Kubernetes for continuous deployment and infrastructure automation.

How does Ansible work?


Below are the core concepts of Ansible’s architecture; a minimal sketch tying them together follows the list:

  • Control node – The machine where Ansible is installed and executed
  • Managed nodes – Target systems that Ansible configures and automates
  • Inventory – A file listing the hosts or groups of hosts Ansible manages
  • Playbooks – YAML scripts defining automation tasks
  • Modules – Predefined scripts that perform specific tasks, such as package installation or user management
  • Roles – A structured way to organize tasks, variables, and handlers for better reusability
  • Plugins – Extend Ansible’s functionality, such as connection methods and logging
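
To see how these concepts fit together, here is a minimal sketch (the hostname, group, and file names are illustrative): a one-host inventory and a playbook that calls a built-in module, run from the control node.

# inventory.ini (illustrative)
# [web]
# 192.0.2.10

# ping.yaml: a minimal playbook using the built-in ping module
---
- name: Verify connectivity to managed nodes
  hosts: web
  tasks:
    - name: Ping every host in the web group
      ansible.builtin.ping:

# Run from the control node:
#   ansible-playbook -i inventory.ini ping.yaml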

Why use Ansible in DevOps?

Ansible uses agentless architecture to deliver a suite of automation functionalities, such as configuration management, application deployment, and infrastructure provisioning. As long as we provide the control node with appropriate access (keys and certificates), it can automatically SSH into the target machines to perform tasks. 

Allowing the CI/CD agent machines/runners to execute playbooks also lets developers integrate GitOps into their workflows.

Using Ansible in DevOps, any infrastructure, application, or configuration change can be deployed automatically as soon as developers merge the playbook YAML changes to the appropriate branches. This results in a faster feedback loop and, thus, more consistent releases in production. It also significantly streamlines complex workflows and reduces manual errors.

Beyond automation, Ansible brings key advantages to DevOps: it’s open-source (cost-effective), easy to set up, and backed by a strong community. It accelerates feedback loops, streamlines infrastructure coordination, and optimizes deployments. By eliminating repetitive tasks, Ansible aligns perfectly with CI/CD workflows, boosting both speed and reliability.

How to configure Ansible for DevOps

A properly tuned Ansible setup keeps your automation smooth, whether you’re managing a few servers or orchestrating infrastructure at scale. The steps below show how to set up Ansible for DevOps workflows.

Step 1. Install Ansible

Install Ansible on a control node (e.g., your local machine or a CI/CD server).

sudo apt update && sudo apt install ansible -y  # Ubuntu/Debian
sudo yum install ansible -y  # CentOS/RHEL
brew install ansible  # macOS
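
Verify the installation on the control node:

ansible --version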

See a detailed tutorial: How to Install Ansible on Ubuntu, RHEL, macOS & CentOS

Step 2. Configure SSH access for Ansible

Using Ansible in DevOps requires giving the agent machines appropriate SSH access to the target machines. In our case, GitHub runners are responsible for executing the playbooks, and they have no SSH access configured by default for Ansible to perform its tasks.

So, before we proceed, it is important to create a key pair (or use an existing one) and store the private key in the GitHub repository secrets. We also need to configure AWS credentials, i.e., the access key, secret key, and optionally the region, as secrets. The runner requires these AWS credentials to make the API calls that provision resources.
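
If you prefer the command line over the repository settings UI, the GitHub CLI can set the same secrets (the names match those referenced later in the workflow; the key path and values are placeholders):

gh secret set SSH_PRIVATE_KEY < ~/.ssh/ansible_key.pem   # hypothetical key path
gh secret set AWS_ACCESS_KEY_ID --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "..."
gh secret set AWS_DEFAULT_REGION --body "eu-central-1"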


Once these secrets are configured, Ansible implicitly uses them to interact with the AWS APIs and SSH into the EC2 instances. In the following steps, notice that we never explicitly reference a credential variable in any task.
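
For local testing outside the pipeline, the same credentials can simply be exported as environment variables; the amazon.aws modules (via boto3) pick these up automatically:

export AWS_ACCESS_KEY_ID="AKIA..."        # placeholder values
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="eu-central-1"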

Step 3. Set up Ansible dynamic inventory

Ansible usually depends on an inventory file, commonly named inventory.ini, that lists all the hosts we want to manage. For cloud resources, however, this static list of hosts will not work, because EC2 instances may be re-provisioned and assigned a new IP address each time. This is where dynamic inventory comes into play, especially in DevOps automation.
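
For contrast, a static inventory.ini is just a fixed list of addresses, optionally grouped (the addresses here are illustrative):

[webserver]
192.0.2.10
192.0.2.11

[webserver:vars]
ansible_user=ubuntu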

Instead of depending on volatile instance attributes like ID, IP address, FQDN, or AMI, a dynamic inventory keys off attributes the developer sets deliberately. For example, if we follow a consistent tagging standard, Ansible can track these instances irrespective of their other attributes, which gives us much more control.

The YAML file below uses the amazon.aws.aws_ec2 plugin to implement a basic dynamic inventory. It specifies the region and keyed_groups, which Ansible interprets as conditions for filtering resources. Any resource that satisfies these criteria is considered when the playbook executes. The contents are stored in the aws_ec2.yaml file.

plugin: amazon.aws.aws_ec2
regions:
 - eu-central-1
keyed_groups:
 - key: tags
   prefix: tag

The keyed groups are built from the inventory information Ansible queries.

Running ansible-inventory -i aws_ec2.yaml --list prints all the attributes of all the EC2 instances, and any of these attributes can be used in the keyed groups.

In the example above, the AWS tags are used to create a group of targeted EC2 instances. These tags are recognized in playbooks through the “tag” prefix, as we will see in the next sections.
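
For example, with the keyed_groups configuration above, an instance tagged MyKey=MyValue lands in a group named tag_MyKey_MyValue (prefix, tag key, and tag value joined by underscores). A play can target that group directly, which is exactly what we do later in apache.yaml:

# Inspect the generated groups with: ansible-inventory -i aws_ec2.yaml --list
- name: Example play targeting a keyed group
  hosts: tag_MyKey_MyValue
  gather_facts: false
  tasks:
    - name: Show which hosts matched
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} is tagged MyKey=MyValue"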

Refer to this document for more information on dynamic inventory.

Step 4. Create a playbook to provision EC2 instances

Let’s start writing a playbook to provision the EC2 instances. The first version is simple, but it will need a few more tasks to become truly usable.

Note: Using Ansible to provision EC2 instances is not considered a best practice. Instead, you should use an Infrastructure as Code (IaC) tool like Terraform, OpenTofu, Pulumi, or AWS CloudFormation to provision resources efficiently and consistently. Ansible is better suited for configuration management and post-deployment automation rather than infrastructure provisioning.

The YAML file (named ec2.yaml) below is a basic example of creating three EC2 instances in AWS. 

Notice that the key pair is specified in the variable key_name and associated with these instances. We have also followed a simple tagging strategy to identify these instances.

---
- name: Provision EC2
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    region: eu-central-1
    instance_type: t2.micro
    image: ami-0a628e1e89aaedf80  # Ubuntu 24.04
    key_name: ldtkeypair  # key-pair name
    security_group: sg-42f33f3a
    subnet_id: subnet-87e02dcb
  tasks:
    - name: Provision EC2 instance
      amazon.aws.ec2_instance:
        instance_type: "{{ instance_type }}"
        image_id: "{{ image }}"
        region: "{{ region }}"
        key_name: "{{ key_name }}"  # associating the key-pair
        security_groups:
          - "{{ security_group }}"
        vpc_subnet_id: "{{ subnet_id }}"
        wait: true
        state: present
        count: 3
        network:
          assign_public_ip: true
        tags:
          MyKey: "MyValue"  # tagging strategy
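
Assuming the AWS credentials from Step 2 are exported, the playbook can be tested locally before wiring it into a pipeline:

ansible-galaxy collection install amazon.aws   # provides the ec2_instance module
pip install boto3 botocore                     # AWS SDK used by the module
ansible-playbook ec2.yaml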

This works for the first run, but each subsequent run will create three additional EC2 instances. We need some form of tracking to avoid this.

Our tagging strategy lets us query the number of existing EC2 instances and adjust the count attribute accordingly, as shown below.

Here we have added a task that gathers information about existing instances matching the filter conditions. The result is stored in a variable named existing_instances and used to compute the count attribute of the next task, Provision EC2 instance: if all three instances already exist, the computed count is zero. This ensures that no unnecessary additional instances are created.

---
- name: Provision EC2
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    region: eu-central-1
    instance_type: t2.micro
    image: ami-0a628e1e89aaedf80  # Ubuntu 24.04
    key_name: ldtkeypair  # key-pair name
    security_group: sg-42f33f3a
    subnet_id: subnet-87e02dcb
  tasks:
    - name: Gather information about existing instances
      amazon.aws.ec2_instance_info:
        filters:
          "tag:MyKey": "MyValue"  # following the tagging convention
          instance-state-name: ["pending", "running", "stopping", "stopped"]
      register: existing_instances

    - name: Provision EC2 instance
      amazon.aws.ec2_instance:
        instance_type: "{{ instance_type }}"
        image_id: "{{ image }}"
        region: "{{ region }}"
        key_name: "{{ key_name }}"  # associating the key-pair
        security_groups:
          - "{{ security_group }}"
        vpc_subnet_id: "{{ subnet_id }}"
        wait: true
        state: present
        count: "{{ 3 - (existing_instances.instances | length) }}"
        network:
          assign_public_ip: true
        tags:
          MyKey: "MyValue"  # tagging strategy
      register: ec2

Further, if we want to use these instances in later plays, we can add them to a host group (temporarily) using the built-in ansible.builtin.add_host module.

For example, suppose we want to wait until the instances are fully ready with the SSH service up. In that case, append the additional tasks below to the file above.

---
- name: Provision EC2
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    region: eu-central-1
    instance_type: t2.micro
    image: ami-0a628e1e89aaedf80  # Ubuntu 24.04
    key_name: ldtkeypair  # key-pair name
    security_group: sg-42f33f3a
    subnet_id: subnet-87e02dcb
  tasks:
    - name: Gather information about existing instances
      amazon.aws.ec2_instance_info:
        filters:
          "tag:MyKey": "MyValue"  # following the tagging convention
          instance-state-name: ["pending", "running", "stopping", "stopped"]
      register: existing_instances

    - name: Provision EC2 instance
      amazon.aws.ec2_instance:
        instance_type: "{{ instance_type }}"
        image_id: "{{ image }}"
        region: "{{ region }}"
        key_name: "{{ key_name }}"  # associating the key-pair
        security_groups:
          - "{{ security_group }}"
        vpc_subnet_id: "{{ subnet_id }}"
        wait: true
        state: present
        count: "{{ 3 - (existing_instances.instances | length) }}"
        network:
          assign_public_ip: true
        tags:
          MyKey: "MyValue"  # tagging strategy
      register: ec2

    - name: Add new instance to host group
      ansible.builtin.add_host:
        hostname: "{{ item.public_ip_address }}"
        groupname: webserver
        ansible_host: "{{ item.public_ip_address }}"
      loop: "{{ ec2.instances }}"

    - name: Wait for SSH to come up  # if necessary
      ansible.builtin.wait_for:
        host: "{{ item.public_ip_address }}"
        port: 22
        delay: 60
        timeout: 320
        state: started
      loop: "{{ ec2.instances }}"

The “Wait for SSH to come up” task waits for the initial delay and then checks that the SSH port is accepting connections before proceeding to the next step. Note that the loop attribute iterates over the ec2.instances registered variable to make sure every instance is ready for SSH.

With this, we have our infrastructure provisioning playbook ready.

Step 5. Configure the Apache web server

Configure the Apache web server in the same playbook by adding the play below, which runs once SSH is ready on all the instances. It connects to each host in the webserver group, installs the Apache web server, and creates a custom index.html file.

In the “Configure SSH access for Ansible” section, we mentioned that we need not explicitly define the SSH key here. However, it is important to specify the remote_user attribute: Ubuntu AMIs in AWS use the ubuntu user, and omitting it results in an “Unreachable” error. If you use a different AMI, change this value accordingly.

Also, note that the hosts value is set to webserver, the group created in the previous step. This ensures the Apache web server is installed on every instance in that group.

- name: Configure Web Server
  hosts: webserver
  become: true
  remote_user: ubuntu
  tasks:
    - name: Install Apache
      ansible.builtin.apt:
        name: apache2
        state: present
        update_cache: true

    - name: Start Apache service
      ansible.builtin.service:
        name: apache2
        state: started
        enabled: true

    - name: Create index.html
      ansible.builtin.copy:
        content: "<html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>"
        dest: /var/www/html/index.html
        mode: '0644'
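
As an alternative to hard-coding remote_user in every play, the connection user can be set once in the dynamic inventory file through the plugin’s compose option (a sketch, assuming Ubuntu AMIs; compose values are Jinja2 expressions, hence the inner quotes):

plugin: amazon.aws.aws_ec2
regions:
  - eu-central-1
keyed_groups:
  - key: tags
    prefix: tag
compose:
  ansible_user: "'ubuntu'"  # literal string, so it is quoted twice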

Step 6. Test connectivity

Finally, to ensure the Apache web server is successfully installed and accessible, loop over the webserver group and request each page from the runner machine. The second task prints the web page content, which should match what we set in index.html in the previous step.

Note that we could also perform these checks as explicit steps in the GitHub Actions pipeline; a sketch of such a step follows the playbook below.

- name: Test Connectivity
  hosts: localhost  # run the checks once from the runner machine
  connection: local
  tasks:
    - name: Check web server connectivity
      ansible.builtin.uri:
        url: "http://{{ hostvars[item]['ansible_host'] }}"
        return_content: true
      loop: "{{ groups['webserver'] }}"
      register: webpages

    - name: Display web page content for all instances
      ansible.builtin.debug:
        msg: "Content from {{ hostvars[item.item]['ansible_host'] }}: {{ item.content }}"
      loop: "{{ webpages.results }}"
      when: item.status == 200
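
As noted above, a similar smoke test can also live in the pipeline itself. Here is a sketch of such a GitHub Actions step, assuming the AWS CLI is available on the runner, the region is configured, and the instances follow our tagging convention:

    - name: Smoke-test web servers  # illustrative pipeline-level check
      run: |
        for ip in $(aws ec2 describe-instances \
          --filters "Name=tag:MyKey,Values=MyValue" "Name=instance-state-name,Values=running" \
          --query "Reservations[].Instances[].PublicIpAddress" --output text); do
          curl --fail --silent "http://$ip" | grep "Hello"
        done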

This completes our configuration management using Ansible.

DevOps pipeline example: Using GitHub Actions to automate Ansible playbooks

Our example aims to automate the execution of the Ansible playbook we just created using a DevOps pipeline. The playbook may run many times through GitHub Actions, and it should always deliver a consistent result: three EC2 instances configured with an Apache web server and a custom index.html file.

Read more: Ansible with GitHub Actions: Automating Playbook Runs

Generic CI/CD tools like GitHub Actions are not designed for full infrastructure control and monitoring. A purpose-built orchestrator such as Spacelift (covered later in this post) can layer pull-request-driven workflows and compliance checks on top of Ansible.

Start by creating a GitHub Actions workflow YAML file in the .github/workflows directory. Before executing the playbook directly, we need to install dependencies on the runner machine. We can also perform linting to catch syntax errors early.

The workflow file below consists of one job: lint. It checks out the code (the playbook YAMLs), installs the linting dependencies, and lints the playbooks with the ansible-lint tool.

name: Ansible CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ansible-lint
          ansible-galaxy collection install amazon.aws
      - name: Lint Ansible Playbook
        run: |
          ansible-lint ./ec2.yaml
          ansible-lint ./apache.yaml

Commit this to the main branch of the GitHub repository, and the workflow should trigger automatically. Expect the job to fail if there is stray whitespace, invalid syntax, or misuse of an Ansible module. If everything validates successfully, we should see the output below in the “Lint Ansible Playbook” step.

Run ansible-lint ./ec2.yaml

Passed: 0 failure(s), 0 warning(s) on 1 files. Last profile that met the validation criteria was 'production'.

Passed: 0 failure(s), 0 warning(s) on 1 files. Last profile that met the validation criteria was 'production'.

Execute the playbook in the next job. A new job means GitHub Actions provisions a fresh runner machine for it, so we need to install the relevant dependencies again before executing the ansible-playbook command.

Add the following deploy job to the GitHub Actions workflow we created in the last step.

  deploy:
    needs: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - name: Install Ansible
        run: |
          python -m pip install --upgrade pip
          pip install ansible
          pip install boto3 botocore
      - name: Set up SSH key
        uses: webfactory/ssh-agent@v0.5.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Provision EC2 Instances
        uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: ec2.yaml
          directory: ./
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          options: |
            --verbose

The steps are largely self-explanatory, but here is a summary of what this job performs:

  1. Check out the Playbook YAMLs from the repository
  2. Set up a Python environment with a specified version
  3. Install Ansible and dependencies
  4. Set up SSH key to SSH into the EC2 instances
  5. Configure AWS credentials on the runner VM
  6. Finally, run the ec2.yaml Playbook to provision EC2 instances and configure the Apache web server

Note that we are using a specific Action from the GitHub Actions marketplace here. You could also simply run the ansible-playbook command instead, as sketched below.
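
For reference, a minimal equivalent using a plain run step (assuming Ansible is already installed on the runner, as in the earlier steps):

      - name: Provision EC2 Instances
        run: ansible-playbook ec2.yaml --verbose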

Run the workflow and observe the log output. We should see the output of every task defined in the Ansible playbook under its corresponding heading, as shown below. The output confirms that Ansible successfully checks connectivity and prints the contents of the index.html file served by each EC2 instance.

PLAY [Provision EC2] **************************
TASK [Gather information about existing instances] *************
TASK [Provision EC2 instance] **************
...
TASK [Check web server connectivity] *******************************************

ok: [localhost] => (item=3.75.101.135) => {"accept_ranges": "bytes", "ansible_loop_var": "item", "changed": false, "connection": "close", "content": "<html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>", "content_length": "69", "content_type": "text/html", "cookies": {}, "cookies_string": "", "date": "Sat, 04 Jan 2025 10:46:18 GMT", "elapsed": 0, "etag": "\"45-62adf1ade7e60\"", "item": "3.75.101.135", "last_modified": "Sat, 04 Jan 2025 10:46:17 GMT", "msg": "OK (69 bytes)", "redirected": false, "server": "Apache/2.4.58 (Ubuntu)", "status": 200, "url": "http://3.75.101.135", "vary": "Accept-Encoding"}

ok: [localhost] => (item=35.159.166.152) => {"accept_ranges": "bytes", "ansible_loop_var": "item", "changed": false, "connection": "close", "content": "<html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>", "content_length": "69", "content_type": "text/html", "cookies": {}, "cookies_string": "", "date": "Sat, 04 Jan 2025 10:46:19 GMT", "elapsed": 0, "etag": "\"45-62adf1addb233\"", "item": "35.159.166.152", "last_modified": "Sat, 04 Jan 2025 10:46:17 GMT", "msg": "OK (69 bytes)", "redirected": false, "server": "Apache/2.4.58 (Ubuntu)", "status": 200, "url": "http://35.159.166.152", "vary": "Accept-Encoding"}

ok: [localhost] => (item=18.192.5.108) => {"accept_ranges": "bytes", "ansible_loop_var": "item", "changed": false, "connection": "close", "content": "<html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>", "content_length": "69", "content_type": "text/html", "cookies": {}, "cookies_string": "", "date": "Sat, 04 Jan 2025 10:46:19 GMT", "elapsed": 0, "etag": "\"45-62adf1add7233\"", "item": "18.192.5.108", "last_modified": "Sat, 04 Jan 2025 10:46:17 GMT", "msg": "OK (69 bytes)", "redirected": false, "server": "Apache/2.4.58 (Ubuntu)", "status": 200, "url": "http://18.192.5.108", "vary": "Accept-Encoding"}
TASK [Display web page content for all instances] ******************************

ok: [localhost] => (item={'content': '<html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>', 'redirected': False, 'url': 'http://3.75.101.135', 'status': 200, 'date': 'Sat, 04 Jan 2025 10:46:18 GMT', 'server': 'Apache/2.4.58 (Ubuntu)', 'last_modified': 'Sat, 04 Jan 2025 10:46:17 GMT', 'etag': '"45-62adf1ade7e60"', 'accept_ranges': 'bytes', 'content_length': '69', 'vary': 'Accept-Encoding', 'connection': 'close', 'content_type': 'text/html', 'cookies_string': '', 'cookies': {}, 'msg': 'OK (69 bytes)', 'elapsed': 0, 'changed': False, 'invocation': {'module_args': {'url': 'http://3.75.101.135', 'return_content': True, 'force': False, 'http_agent': 'ansible-httpget', 'use_proxy': True, 'validate_certs': True, 'force_basic_auth': False, 'use_gssapi': False, 'body_format': 'raw', 'method': 'GET', 'follow_redirects': 'safe', 'status_code': [200], 'timeout': 30, 'headers': {}, 'remote_src': False, 'unredirected_headers': [], 'decompress': True, 'use_netrc': True, 'unsafe_writes': False, 'url_username': None, 'url_password': None, 'client_cert': None, 'client_key': None, 'dest': None, 'body': None, 'src': None, 'creates': None, 'removes': None, 'unix_socket': None, 'ca_path': None, 'ciphers': None, 'mode': None, 'owner': None, 'group': None, 'seuser': None, 'serole': None, 'selevel': None, 'setype': None, 'attributes': None}}, 'failed': False, 'item': '3.75.101.135', 'ansible_loop_var': 'item'}) => {

    "msg": "Content from 3.75.101.135: <html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>"

}

ok: [localhost] => (item={'content': '<html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>', 'redirected': False, 'url': 'http://35.159.166.152', 'status': 200, 'date': 'Sat, 04 Jan 2025 10:46:19 GMT', 'server': 'Apache/2.4.58 (Ubuntu)', 'last_modified': 'Sat, 04 Jan 2025 10:46:17 GMT', 'etag': '"45-62adf1addb233"', 'accept_ranges': 'bytes', 'content_length': '69', 'vary': 'Accept-Encoding', 'connection': 'close', 'content_type': 'text/html', 'cookies_string': '', 'cookies': {}, 'msg': 'OK (69 bytes)', 'elapsed': 0, 'changed': False, 'invocation': {'module_args': {'url': 'http://35.159.166.152', 'return_content': True, 'force': False, 'http_agent': 'ansible-httpget', 'use_proxy': True, 'validate_certs': True, 'force_basic_auth': False, 'use_gssapi': False, 'body_format': 'raw', 'method': 'GET', 'follow_redirects': 'safe', 'status_code': [200], 'timeout': 30, 'headers': {}, 'remote_src': False, 'unredirected_headers': [], 'decompress': True, 'use_netrc': True, 'unsafe_writes': False, 'url_username': None, 'url_password': None, 'client_cert': None, 'client_key': None, 'dest': None, 'body': None, 'src': None, 'creates': None, 'removes': None, 'unix_socket': None, 'ca_path': None, 'ciphers': None, 'mode': None, 'owner': None, 'group': None, 'seuser': None, 'serole': None, 'selevel': None, 'setype': None, 'attributes': None}}, 'failed': False, 'item': '35.159.166.152', 'ansible_loop_var': 'item'}) => {

    "msg": "Content from 35.159.166.152: <html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>"

}

ok: [localhost] => (item={'content': '<html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>', 'redirected': False, 'url': 'http://18.192.5.108', 'status': 200, 'date': 'Sat, 04 Jan 2025 10:46:19 GMT', 'server': 'Apache/2.4.58 (Ubuntu)', 'last_modified': 'Sat, 04 Jan 2025 10:46:17 GMT', 'etag': '"45-62adf1add7233"', 'accept_ranges': 'bytes', 'content_length': '69', 'vary': 'Accept-Encoding', 'connection': 'close', 'content_type': 'text/html', 'cookies_string': '', 'cookies': {}, 'msg': 'OK (69 bytes)', 'elapsed': 0, 'changed': False, 'invocation': {'module_args': {'url': 'http://18.192.5.108', 'return_content': True, 'force': False, 'http_agent': 'ansible-httpget', 'use_proxy': True, 'validate_certs': True, 'force_basic_auth': False, 'use_gssapi': False, 'body_format': 'raw', 'method': 'GET', 'follow_redirects': 'safe', 'status_code': [200], 'timeout': 30, 'headers': {}, 'remote_src': False, 'unredirected_headers': [], 'decompress': True, 'use_netrc': True, 'unsafe_writes': False, 'url_username': None, 'url_password': None, 'client_cert': None, 'client_key': None, 'dest': None, 'body': None, 'src': None, 'creates': None, 'removes': None, 'unix_socket': None, 'ca_path': None, 'ciphers': None, 'mode': None, 'owner': None, 'group': None, 'seuser': None, 'serole': None, 'selevel': None, 'setype': None, 'attributes': None}}, 'failed': False, 'item': '18.192.5.108', 'ansible_loop_var': 'item'}) => {

    "msg": "Content from 18.192.5.108: <html><body><h1>Hello from Ansible-configured EC2!</h1></body></html>"

We have managed to automate Ansible playbook runs with a DevOps pipeline. However, this is not yet the perfect solution.

Splitting infrastructure and configuration management in GitHub Actions

When the playbook provisions infrastructure, it’s often a one-time setup before configuration changes can be applied. Since configurations evolve more frequently than the underlying infrastructure, there’s no need to rerun the provisioning process with every configuration update.

Moreover, given that infrastructure and configuration management follow distinct workflows, managing them as separate projects aligns with best practices. This separation also lends itself well to independent DevOps automation pipelines.

Now that we’ve outlined the key steps, splitting the existing GitHub Actions pipeline should be straightforward. However, we must be mindful of certain critical details to ensure a smooth transition.

The current playbook consists of the following tasks:

  1. Gather information about existing instances
  2. Provision EC2 instance
  3. Add a new instance to host group
  4. Wait for SSH to come up
  5. Install Apache
  6. Start Apache service
  7. Create index.html
  8. Check web server connectivity
  9. Display web page content for all instances

It is safe to assume that steps #1 through #4 are responsible for infrastructure provisioning, and the rest of them perform configuration management.

Create a new file and move steps #5 through #9 into it. In this example, we have named it apache.yaml.

Below are the refactored tasks for apache.yaml.

---
- name: Install Apache on EC2 Instances
  hosts: tag_MyKey_MyValue  # keyed group from the dynamic inventory
  remote_user: ubuntu
  become: true
  gather_facts: true
  tasks:
    - name: Update apt cache
      ansible.builtin.apt:
        update_cache: true
      when: ansible_os_family == "Debian"
    - name: Install Apache
      ansible.builtin.package:
        name: apache2
        state: present
    - name: Start Apache service
      ansible.builtin.service:
        name: apache2
        state: started
        enabled: true
    - name: Create a simple index.html
      ansible.builtin.copy:
        content: |
          <html>
          <body>
          <h1>Hello from {{ ansible_hostname }}</h1>
          <p>This server was provisioned using Ansible.</p>
          </body>
          </html>
        dest: /var/www/html/index.html
        mode: '0644'
    - name: Ensure Apache is running
      ansible.builtin.service:
        name: apache2
        state: restarted

Some important things to note here:

  1. We used a dynamic inventory expression to specify the hosts to be affected. With the current value, it configures all EC2 instances tagged “MyKey = MyValue.”
  2. We set the remote_user property to “ubuntu” to make sure the instances are reachable with this user.
  3. We used built-in Ansible modules to install and start the Apache web server.
  4. We modified the contents of the index.html file.

Next, we could create a new repository and workflow to execute this new playbook. For the sake of this example, however, we will add steps to the existing workflow file to execute this playbook separately, after the ec2.yaml playbook (infrastructure provisioning) has run.

Add the steps below to the existing deploy job of the GitHub workflow.

      - name: Install Ansible Amazon AWS Collection
        run: ansible-galaxy collection install amazon.aws
      - name: Validate Inventory
        run: ansible-inventory -i aws_ec2.yaml --list --yaml
      - name: Set up SSH key
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
      - name: Install Apache Webserver
        env:
          ANSIBLE_HOST_KEY_CHECKING: "False"
        run: |
          ansible-playbook -i aws_ec2.yaml apache.yaml --user ubuntu --verbose
      # Alternative: execute the same playbook via the marketplace Action instead
      # - name: Install Apache Webserver
      #   uses: dawidd6/action-ansible-playbook@v2
      #   with:
      #     playbook: apache.yaml
      #     directory: ./
      #     key: ${{ secrets.SSH_PRIVATE_KEY }}
      #     inventory: aws_ec2.yaml
      #     options: |
      #       --verbose

The initial steps install the amazon.aws collection, validate the dynamic inventory, and set up the private key Ansible uses to SSH into the remote EC2 instances. The step named “Install Apache Webserver” then executes the apache.yaml playbook. (The connectivity-test play from the previous section can be appended to apache.yaml as well.)

Note that here we explicitly used the dynamic inventory defined in the aws_ec2.yaml file to make sure only the intended instances are targeted. As mentioned earlier, we could move the configuration management files to a separate repository and run this Action on its own, as sketched below.
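
If the configuration playbooks do move to their own repository, a dedicated workflow there could trigger only on changes to those files. A minimal sketch (the steps mirror the deploy job above):

name: Apache Configuration
on:
  push:
    paths:
      - 'apache.yaml'
      - 'aws_ec2.yaml'
jobs:
  configure:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...install Ansible and the amazon.aws collection, set up the SSH key
      #    and AWS credentials exactly as in the deploy job above...
      - name: Install Apache Webserver
        env:
          ANSIBLE_HOST_KEY_CHECKING: "False"
        run: ansible-playbook -i aws_ec2.yaml apache.yaml --user ubuntu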

How can Spacelift help you with Ansible projects?

Compared to building a custom and production-grade infrastructure management pipeline with a CI/CD tool like GitHub Actions, adopting a collaborative infrastructure delivery tool like Spacelift feels a bit like cheating. 

Spacelift’s vibrant ecosystem and excellent GitOps flow can greatly assist you in managing and orchestrating Ansible. By introducing Spacelift on top of Ansible, you can easily create custom workflows based on pull requests and apply any necessary compliance checks for your organization.

With Spacelift, you get:

  • Better playbook automation – Manage the execution of Ansible playbooks from one central location.
  • Inventory observability – View all Ansible-managed hosts and related playbooks, with clear visual indicators showing the success or failure of recent runs.
  • Playbook run insights – Audit Ansible playbook run results with detailed insights to pinpoint problems and simplify troubleshooting.
  • Policies – Control what kind of resources engineers can create, what parameters they can have, how many approvals you need for a run, what kind of task you execute, what happens when a pull request is open, and where to send your notifications.
  • Stack dependencies – Build multi-infrastructure automation workflows with dependencies, having the ability to build a workflow that, for example, generates your EC2 instances using Terraform and combines it with Ansible to configure them.
  • Self-service infrastructure via Blueprints, or Spacelift’s Kubernetes operator – Enable your developers to do what matters – developing application code while not sacrificing control.
  • Creature comforts such as contexts (reusable containers for your environment variables, files, and hooks), and the ability to run arbitrary code.
  • Drift detection and optional remediation.

If you want to learn more about using Spacelift with Ansible, check our documentation, read our Ansible guide, or book a demo with one of our engineers.

Would you like to see this in action – or just want a tl;dr? Check out this video showing you Spacelift’s new Ansible functionality:


Key points

Implementing DevOps automation for Ansible does not follow the usual CI/CD path, where we build source code into executables or container images and deploy them to a target environment. Executing Ansible playbooks takes a different path, which we have covered at a high level in this blog post.

This is by no means a production-ready example, but it demonstrates the approach. A real-world setup would require stronger security controls, standards, and error handling.

Manage Ansible Better with Spacelift

Managing large-scale playbook execution is hard. Spacelift enables you to automate Ansible playbook execution with visibility and control over resources, and seamlessly link provisioning and configuration workflows.
