In this article, we explore Ansible Playbooks, which are essentially blueprints for automation tasks. Playbooks allow us to define a recipe with all the steps we would like to automate in a repeatable, simple, and consistent manner.
We will cover:
- What is an Ansible playbook?
- What is the structure of an Ansible playbook?
- How to write an Ansible playbook?
- How to run Ansible playbooks?
- Ansible playbook example
- Using variables in playbooks
- Handling sensitive data in playbooks
- Triggering tasks on change with handlers
- Using conditional tasks in playbooks
- How to use loops in Ansible playbooks
- How to run multiple playbooks in Ansible
- Ansible playbooks tips and tricks
If you are entirely new to Ansible, check out this introductory Ansible Tutorial first.
Ansible playbooks are one of the basic components of Ansible as they record and execute Ansible’s configuration. Generally, a playbook is the primary way to automate a set of tasks that we would like to perform on a remote machine.
They help our automation efforts by gathering all the resources necessary to orchestrate ordered processes and avoid repetitive manual actions. Playbooks can be reused and shared between people, and they are designed to be human-friendly and easy to write in YAML.
What is the difference between a playbook and a role in Ansible?
Ansible playbooks are broader in scope and capable of orchestrating multiple plays and roles across different hosts and groups, while roles are more focused components, targeting specific tasks and configurations.
When it comes to how to use them, playbooks execute the automation, while roles are used to structure and package the automation in a reusable form.
A playbook is composed of one or more plays to run in a specific order. A play is an ordered list of tasks to run against the desired group of hosts.
Every task is associated with a module responsible for an action and its configuration parameters. Since most tasks are idempotent, we can safely rerun a playbook without any issues.
As discussed, Ansible playbooks are written in YAML using the standard extension .yml with minimal syntax.
Indentation is done with spaces: data elements that share the same level of the hierarchy must be aligned, and items that are children of other items must be indented more than their parents. There is no strict rule for the number of spaces used per level, but two spaces is common, and tab characters are not allowed.
Below is an example simple playbook with only two plays, each one having two tasks:
---
- name: Example Simple Playbook
  hosts: all
  become: yes
  tasks:
    - name: Copy file example_file to /tmp with permissions
      ansible.builtin.copy:
        src: ./example_file
        dest: /tmp/example_file
        mode: '0644'

    - name: Add the user 'bob' with a specific uid
      ansible.builtin.user:
        name: bob
        state: present
        uid: 1040

- name: Update postgres servers
  hosts: databases
  become: yes
  tasks:
    - name: Ensure postgres DB is at the latest version
      ansible.builtin.yum:
        name: postgresql
        state: latest

    - name: Ensure that postgresql is started
      ansible.builtin.service:
        name: postgresql
        state: started
At the top level of each play, we define a descriptive name according to its purpose. Then we specify the group of hosts from the inventory on which the play will be executed. Finally, we set the become option to yes so that the plays are executed as the root user.
You can also define many other Playbook Keywords at different levels, such as playbook, play, or task, to configure Ansible’s behavior. Most of these can also be set at runtime as command-line flags, in the Ansible configuration file (ansible.cfg), or in the inventory. Check out the precedence rules to understand how Ansible behaves in these cases.
Next, we use the tasks parameter to define the list of tasks for each play. For each task, we define a clear and descriptive name. Every task leverages a module to perform a specific operation.
For example, the first task of the first play uses the ansible.builtin.copy module. Along with the module, we usually have to define some module arguments. For the second task of the first play, we use the module ansible.builtin.user that helps us manage user accounts. In this specific case, we configure the name of the user, the state of the user account, and its uid accordingly.
Writing an Ansible playbook involves creating a YAML file that specifies the hosts to configure and the tasks to perform on these hosts.
As a best practice, you specify your hosts by defining an Ansible inventory file. To keep things simple, we will use localhost:
[local]
localhost ansible_connection=local
Next, let’s define a play file that will ping localhost:
- name: Play
  hosts: local
  tasks:
    - name: Ping my hosts
      ansible.builtin.ping:
To run this, we can use:
ansible-playbook -i inventory.ini play.yaml
PLAY [Play] ******************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************
[WARNING]: Platform darwin on host localhost is using the discovered Python interpreter at /opt/homebrew/bin/python3.12, but future installation of
another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-
core/2.16/reference_appendices/interpreter_discovery.html for more information.
ok: [localhost]
TASK [Ping my hosts] *********************************************************************************************************************************
ok: [localhost]
PLAY RECAP *******************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Things to take into consideration when you are writing an Ansible playbook:
- YAML is sensitive to indentation, typically requiring 2 spaces for each level of indentation
- Take advantage of variables in your playbooks to make them more dynamic and flexible. Variables can be defined in many places, including in the playbook itself, in inventory, in separate files, or even passed at the command line
- Use handlers – these are special tasks that only run when notified by another task. They are typically used to restart services when configurations change.
- Take advantage of templates – Ansible can use Jinja2 templates to dynamically generate files based on variables (see the short sketch after this list)
- Use roles – For complex setups, consider organizing your tasks into roles. This helps keep your playbooks clean and makes your tasks more reusable
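As a quick, hedged illustration of the templates tip, the snippet below renders a file from a Jinja2 template; the template file name templates/index.html.j2 is just an example and not part of the original playbooks:

# templates/index.html.j2
<h1>Served from {{ ansible_hostname }}</h1>

- name: Render index.html from a Jinja2 template
  ansible.builtin.template:
    src: templates/index.html.j2
    dest: /usr/share/nginx/html/index.html
    mode: '0644'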
When we run a playbook, Ansible executes each task in order, one at a time, for all the hosts we selected. This default behavior can be adjusted for different use cases using strategies.
If a task fails, Ansible stops executing the playbook on that specific host but continues on the hosts that succeeded. During execution, Ansible displays information about connection status, task names, execution status, and whether any changes have been made.
At the end, Ansible provides a summary of the playbook’s execution along with failures and successes. Let’s see these in action by running the example playbook we saw earlier with the ansible-playbook command.
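Assuming the playbook above is saved as example-simple-playbook.yml and the hosts are defined in inventory.ini (both file names are placeholders here), the invocation would look like this:

ansible-playbook -i inventory.ini example-simple-playbook.yml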
From the output, we notice the Play names, the Gathering Facts task, the Play tasks, and the Play Recap in the end. Since we didn’t define a databases hosts group, the second play of the playbook was skipped.
We can use the --limit flag to limit the playbook’s execution to specific hosts. For example:
ansible-playbook example-simple-playbook.yml --limit host1
Let’s write a more realistic example playbook. Suppose we have three EC2 servers in AWS that we want to configure as web servers. These web servers will sit behind a public load balancer, so whenever we access the load balancer, we want to see an HTML page that responds with the different hostnames.
To do this, we will need to create an inventory and a playbook. The inventory will be similar to:
[webservers]
ec2-instance-1 ansible_host=example1.compute.amazonaws.com
ec2-instance-2 ansible_host=example2.compute.amazonaws.com
ec2-instance-3 ansible_host=example3.compute.amazonaws.com
[webservers:vars]
ansible_ssh_private_key_file=/path/to/your/private-key.pem
Under the [webservers] hosts, we’ve added the three EC2 instances, and to make things easier, we have defined one variable in the inventory file:
- ansible_ssh_private_key_file – path to the private SSH key used to connect to the target machines
Now, let’s define a playbook that will install Nginx and create an index file that will be served from these web servers.
---
- name: Install and Configure Nginx
  hosts: webservers
  become: yes
In the first part of the playbook, we give it a name, specify which hosts to run on, and decide whether or not to become root.
Next, we define a couple of pre_tasks that set the SSH user depending on the operating system:
  pre_tasks:
    - name: Set SSH user for Ubuntu systems
      set_fact:
        ansible_user: ubuntu
      when: ansible_os_family == "Debian"

    - name: Set SSH user for RedHat systems
      set_fact:
        ansible_user: ec2-user
      when: ansible_os_family == "RedHat"
Next, we have two installation tasks for Nginx: one for when the OS family is RedHat and another for when it is Debian.
  tasks:
    - name: Install Nginx
      ansible.builtin.yum:
        name: nginx
        state: present
      when: ansible_os_family == "RedHat"

    - name: Install Nginx
      ansible.builtin.apt:
        update_cache: yes
        name: nginx
        state: present
      when: ansible_os_family == "Debian"
After that, we need to create the index.html file:
    - name: Create index.html
      ansible.builtin.copy:
        dest: /usr/share/nginx/html/index.html
        content: |
          <!DOCTYPE html>
          <html>
          <head><title>Server Details</title></head>
          <body>
          <h1>Served from {{ ansible_hostname }}</h1>
          </body>
          </html>
        mode: '0644'
In the end, we need to ensure that nginx is enabled and running:
    - name: Ensure Nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: yes
This is how the playbook looks in the end:
---
- name: Install and Configure Nginx
  hosts: webservers
  become: yes

  pre_tasks:
    - name: Set SSH user for Ubuntu systems
      set_fact:
        ansible_user: ubuntu
      when: ansible_os_family == "Debian"

    - name: Set SSH user for RedHat systems
      set_fact:
        ansible_user: ec2-user
      when: ansible_os_family == "RedHat"

  tasks:
    - name: Install Nginx
      ansible.builtin.yum:
        name: nginx
        state: present
      when: ansible_os_family == "RedHat"

    - name: Install Nginx
      ansible.builtin.apt:
        update_cache: yes
        name: nginx
        state: present
      when: ansible_os_family == "Debian"

    - name: Create index.html
      ansible.builtin.copy:
        dest: /usr/share/nginx/html/index.html
        content: |
          <!DOCTYPE html>
          <html>
          <head><title>Server Details</title></head>
          <body>
          <h1>Served from {{ ansible_hostname }}</h1>
          </body>
          </html>
        mode: '0644'

    - name: Ensure Nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: yes
Read more: Using Ansible to Automate AWS [Tutorial]
Variables are placeholders for values that you can reuse throughout a playbook or other Ansible objects. They can only contain letters, numbers, and underscores and must start with a letter.
Variables can be defined in Ansible at multiple levels, so look at variable precedence to understand how they are applied. For example, we can set variables at the global scope for all hosts, at the host scope for a particular host, or at the play scope for a specific play.
To set host and group variables, create the directories group_vars and host_vars. For example, to define group variables for the databases group, create the file group_vars/databases. Set common default variables in a group_vars/all file.
Similarly, to define host variables for a specific host, create a file with the same name as the host under the host_vars directory.
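As a rough sketch, the resulting layout could look like this (the host name host1 is just a placeholder):

group_vars/
  all        # default variables for every host
  databases  # variables for the databases group
host_vars/
  host1      # variables for the host named host1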
To set or override variables at runtime, use the -e (--extra-vars) flag.
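For example, to pass a value for a username variable on the command line (the playbook file name below is hypothetical):

ansible-playbook example-playbook.yml -e "username=bob"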
The most straightforward method to define variables is to use a vars block at the beginning of a play. They are defined using standard YAML syntax.
- name: Example Variables Playbook
  hosts: all
  vars:
    username: bob
    version: 1.2.3
Another way is to define variables in external YAML files.
- name: Example Variables Playbook
  hosts: all
  vars_files:
    - vars/example_variables.yml
To use them in tasks, we have to reference them by placing their name inside double braces using the Jinja2 syntax:
- name: Example Variables Playbook
  hosts: all
  vars:
    username: bob
  tasks:
    - name: Add the user {{ username }}
      ansible.builtin.user:
        name: "{{ username }}"
        state: present
If a variable’s value starts with curly braces, we must quote the whole expression to allow YAML to interpret the syntax correctly.
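For example, if one variable is built from another (both names below are made up for illustration), the value must be quoted:

vars:
  base_dir: /opt
  app_path: "{{ base_dir }}/app"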
We can also define variables with multiple values as lists.
package:
  - foo1
  - foo2
  - foo3
It’s also possible to reference individual values from a list. For example, to select the first value foo1:
package: "{{ package[0] }}"
Another possible option is to define variables using YAML dictionaries. For example:
dictionary_example:
  foo1: one
  foo2: two
Similarly, to get the first field from the dictionary:
dictionary_example['foo1']
To reference nested variables, we have to use a bracket or dot notation. For example, to get the example_name_2 value from this structure:
vars:
  var1:
    foo1:
      field1: example_name_1
      field2: example_name_2
tasks:
  - name: Create user for field2 value
    user:
      name: "{{ var1['foo1']['field2'] }}"
We can also create variables with the register statement, which captures the output of a command or task so that we can use it in other tasks.
- name: Example-2 Variables Playbook
  hosts: all
  tasks:
    - name: Run a script and register the output as a variable
      shell: "find example_file"
      args:
        chdir: "/tmp"
      register: example_script_output

    - name: Use the output variable of the previous task
      debug:
        var: example_script_output
At times, we need to access sensitive data (API keys, passwords, etc.) in our playbooks. Ansible provides Ansible Vault to assist us in these cases. Storing secrets as plaintext variables is considered a security risk, so we can use the ansible-vault command to encrypt and decrypt them.
After the secrets have been encrypted with a password of your choice, you can safely put them under source control in your code repositories. Ansible Vault protects only data at rest. After the secrets are decrypted, it’s our responsibility to handle them with care and not accidentally leak them.
We have the option to encrypt variables or files. Encrypted variables are decrypted on-demand only when needed, while encrypted files are always decrypted as Ansible doesn’t know in advance if it needs content from them.
In any case, we need to think about how we are going to manage our vault passwords. To define encrypted content, we add the !vault tag, which tells Ansible that the content needs to be decrypted, and the | character before our multi-line encrypted string.
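For illustration, an encrypted variable embedded in a vars file looks roughly like this; the payload line is shown as a placeholder here because the real value is generated by ansible-vault:

db_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          <encrypted payload produced by ansible-vault>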
To create a new encrypted file:
ansible-vault create new_file.yml
Then, an editor is launched to add our content to be encrypted. It’s also possible to encrypt existing files with the encrypt command:
ansible-vault encrypt existing_file.yml
To view an encrypted file:
ansible-vault view existing_file.yml
To edit an encrypted file in place, use the edit command to decrypt the file temporarily:
ansible-vault edit existing_file.yml
To change the password of an encrypted file, use the rekey command, providing the original password:
ansible-vault rekey existing_file.yml
In case you need to decrypt a file, you can do so with the decrypt command:
ansible-vault decrypt existing_file.yml
Similarly, we use the encrypt_string command to encrypt individual strings that we can later include in playbooks or variables files:
ansible-vault encrypt_string <password_source> '<string_to_encrypt>' --name '<variable_name>'
For example, to encrypt the db_password string '12345678' using Ansible Vault:
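A minimal sketch of that command; because no password source is passed, ansible-vault prompts for the Vault password:

ansible-vault encrypt_string '12345678' --name 'db_password'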
Since we omitted the <password_source>, we manually entered the Vault password. This could also be achieved by passing a password file with --vault-password-file.
To view the contents of the above example encrypted variable that we saved in the vars.yml file, use the same password as before with the --ask-vault-pass flag:
ansible localhost -m ansible.builtin.debug -a var="db_password" -e "@vars.yml" --ask-vault-pass
Vault password:
localhost | SUCCESS => {
"changed": false,
"db_password": "12345678"
}
For managing multiple passwords, use the --vault-id option to set a label. For example, to set the label dev on a file and prompt for a password to use:
ansible-vault encrypt existing_file.yml --vault-id dev@prompt
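When a playbook later uses secrets encrypted under different labels, the matching IDs can be passed together; the prod label and the site.yml file name below are hypothetical:

ansible-playbook site.yml --vault-id dev@prompt --vault-id prod@prompt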
To suppress output from a task that might log a sensitive value to the console, we use the no_log: true attribute:
tasks:
  - name: Hide sensitive value example
    debug:
      msg: "This is sensitive information"
    no_log: true
If we run this task we will notice that the message isn’t printed on the console:
TASK [Hide sensitive value example] ***********************************
ok: [host1]
Finally, let’s use the example encrypted variable we created above in a playbook and execute it.
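A minimal sketch of such a playbook, assuming the encrypted db_password variable is stored in vars.yml as above:

- name: Example Encrypted Variable Playbook
  hosts: all
  vars_files:
    - vars.yml
  tasks:
    - name: Print the decrypted variable
      ansible.builtin.debug:
        var: db_password

Running it with the --ask-vault-pass flag prompts for the Vault password and prints the decrypted value.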
Nice, we verified that we could decrypt the value successfully and use it in tasks.
In general, Ansible modules are idempotent and can be executed safely multiple times, but there are cases where we would like to run a task only when a change is made on the host. For example, we would like to restart a service only when updating its configuration files.
To solve this use case, Ansible uses handlers, which are triggered when notified by other tasks. Tasks notify their handlers with the notify parameter, and only when they actually change something.
Handlers should have globally unique names, and it’s common to author them at the bottom of the playbooks.
- name: Example with handler - Update apache config
  hosts: webservers
  tasks:
    - name: Update the apache config file
      ansible.builtin.template:
        src: ./httpd.conf
        dest: /etc/httpd.conf
      notify:
        - Restart apache

  handlers:
    - name: Restart apache
      ansible.builtin.service:
        name: httpd
        state: restarted
In the above example, the Restart apache task will only be triggered when we change something in the configuration. In reality, handlers can be considered inactive tasks waiting to be triggered with a notify statement.
An important thing to note about handlers is that they run by default after all the other tasks have been completed. This way, the handlers only run once, even if triggered many times.
To control this behavior, we can leverage the meta: flush_handlers task, which triggers any handlers that have already been notified at that point.
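A hedged sketch of this pattern, reusing the apache handler from the example above:

  tasks:
    - name: Update the apache config file
      ansible.builtin.template:
        src: ./httpd.conf
        dest: /etc/httpd.conf
      notify:
        - Restart apache

    - name: Run any handlers notified so far
      meta: flush_handlers

    - name: Continue only after apache has been restarted
      ansible.builtin.debug:
        msg: "Apache was already restarted by the handler"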
It’s also possible for a task to notify more than one handler in its notify statement.
To further control execution flow in Ansible, we can leverage conditionals. Conditionals allow us to run or skip tasks based on whether certain conditions are met. Variables, facts, or results of previous tasks, along with operators, can be used to create such conditions.
Some example use cases: update a variable based on the value of another variable, skip a task if a variable has a specific value, or execute a task only if a fact from the host returns a value higher than a threshold.
To apply a simple conditional statement, we use the Ansible when parameter on a task. If the condition is met, the task is executed. Otherwise, it is skipped.
- name: Example Simple Conditional
  hosts: all
  vars:
    trigger_task: true
  tasks:
    - name: Install nginx
      apt:
        name: "nginx"
        state: present
      when: trigger_task
In the above example, the task is executed since the condition is met.
Another common pattern is to control task execution based on attributes of the remote host that we can obtain from facts. Check out this list with commonly-used facts to get an idea of all the facts we can utilize in conditions.
- name: Example Facts Conditionals
  hosts: all
  vars:
    supported_os:
      - RedHat
      - Fedora
  tasks:
    - name: Install nginx
      yum:
        name: "nginx"
        state: present
      when: ansible_facts['distribution'] in supported_os
It’s possible to combine multiple conditions with logical operators and group them with parentheses:
when: (colour == "green" or colour == "red") and (size == "small" or size == "medium")
The when statement also supports a list for cases where we have multiple conditions that all need to be true:
when:
  - ansible_facts['distribution'] == "Ubuntu"
  - ansible_facts['distribution_version'] == "20.04"
  - ansible_facts['distribution_release'] == "focal"
Another option is to use conditions based on registered variables that we have defined in previous tasks:
- name: Example Registered Variables Conditionals
  hosts: all
  tasks:
    - name: Register an example variable
      ansible.builtin.shell: cat /etc/hosts
      register: hosts_contents

    - name: Check if hosts file contains "localhost"
      ansible.builtin.shell: echo "/etc/hosts contains localhost"
      when: hosts_contents.stdout.find("localhost") != -1
Ansible allows us to iterate over a set of items in a task to execute it multiple times with different parameters without rewriting it. For example, to create several files, we would use a single task that iterates over a list of file names instead of writing a separate task for each file.
Read more about using loops in Ansible.
To iterate over a simple list of items, use the loop keyword. We can reference the current value with the loop variable item.
- name: "Create some files"
ansible.builtin.file:
state: touch
path: /tmp/{{ item }}
loop:
- example_file1
- example_file2
- example_file3
The output of the above task that uses loop and item:
TASK [Create some files] *********************************
changed: [host1] => (item=example_file1)
changed: [host1] => (item=example_file2)
changed: [host1] => (item=example_file3)
It’s also possible to iterate over dictionaries:
- name: "Create some files with dictionaries"
ansible.builtin.file:
state: touch
path: "/tmp/{{ item.filename }}"
mode: "{{ item.mode }}"
loop:
- { filename: 'example_file1', mode: '755'}
- { filename: 'example_file2', mode: '775'}
- { filename: 'example_file3', mode: '777'}
Another useful pattern is to iterate over a group of hosts of the inventory:
- name: Show all the hosts in the inventory
  ansible.builtin.debug:
    msg: "{{ item }}"
  loop: "{{ groups['databases'] }}"
By combining conditionals and loops, we can select to execute the task only on some items in the list and skip it for others:
- name: Execute when values in list are lower than 10
  ansible.builtin.command: echo {{ item }}
  loop: [ 100, 200, 3, 600, 7, 11 ]
  when: item < 10
Finally, another option is to use the keyword until to retry a task until a condition is true.
- name: Retry a task until we find the word "success" in the logs
  shell: cat /var/log/example_log
  register: logoutput
  until: logoutput.stdout.find("success") != -1
  retries: 10
  delay: 15
In the above example, we check the example_log file up to 10 times, with a delay of 15 seconds between each check, until we find the word success. If we let the task run and add the word success to the example_log file after a while, we notice that the task completes successfully.
TASK [Retry a task until we find the word "success" in the logs] *********
FAILED - RETRYING: Retry a task until we find the word "success" in the logs (10 retries left).
FAILED - RETRYING: Retry a task until we find the word "success" in the logs (9 retries left).
changed: [host1]
Check out the official Ansible guide on Loops for more advanced use cases.
Running multiple playbooks in Ansible can be achieved in several ways:
- Using sequential execution – You can run the playbooks one by one or use the && operator
ansible-playbook -i inventory.ini playbook1.yaml && ansible-playbook -i inventory.ini playbook2.yaml
- Using a master playbook – Includes multiple other playbooks using the import_playbook directive
---
- import_playbook: playbook1.yaml
- import_playbook: playbook2.yaml
- Ansible Tower or AWX – you can use one of these tools to provide workflows that allow you to string together multiple playbooks
- Ansible runner – used to execute multiple playbooks programmatically, manage their execution environment, and also handle the outputs and logs (a short CLI sketch follows this list)
- Makefile – you could potentially take advantage of Makefile to organize and run Ansible playbooks
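As a rough illustration of the Ansible Runner option mentioned above, the commands below assume a private data directory at /tmp/runner-demo whose project/ folder contains the playbooks (the path is a placeholder):

ansible-runner run /tmp/runner-demo -p playbook1.yaml
ansible-runner run /tmp/runner-demo -p playbook2.yaml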
Keeping these tips and tricks in mind when building your playbooks will help you be more productive and improve your efficiency.
1) Keep it as simple as possible
Try to keep your tasks simple. There are many options and nested structures in Ansible, and by combining lots of features, you can end up with fairly complex setups. Spending some time simplifying your Ansible artifacts pays off in the long term.
2) Place your Ansible artifacts under version control
It’s considered best practice to store playbooks in git or any other version control system and take advantage of its benefits.
3) Always give descriptive names to your tasks, plays, and playbooks
Choose names that help you and others quickly understand the artifact’s functionality and purpose.
4) Strive for readability
Use consistent indentation and add blank lines between tasks to increase readability.
5) Always mention the state of tasks explicitly
Many modules have a default state that allows us to skip the state parameter. It’s always better to be explicit in these cases to avoid confusion.
6) Use comments when necessary
There will be times when the task definition won’t be enough to explain the whole situation, so feel free to use comments for more complex parts of playbooks.
Spacelift’s vibrant ecosystem and excellent GitOps flow can greatly assist you in managing and orchestrating Ansible. By introducing Spacelift on top of Ansible, you can then easily create custom workflows based on pull requests and apply any necessary compliance checks for your organization.
Another great advantage of using Spacelift is that you can manage different infrastructure tools like Ansible, Terraform, Pulumi, AWS CloudFormation, and even Kubernetes from the same place and combine their stacks to build workflows across tools.
Our latest Ansible enhancements solve three of the biggest challenges engineers face when they are using Ansible:
- Having a centralized place in which you can run your playbooks
- Combining IaC with configuration management to create a single workflow
- Getting insights into what ran and where
Provisioning, configuring, governing, and even orchestrating your containers can be performed with a single workflow, separating the elements into smaller chunks to identify issues more easily.
Would you like to see this in action – or just want a tl;dr? Check out this video I put together showing you Spacelift’s new Ansible functionality:
If you want to learn more about using Spacelift with Ansible, check our documentation, read our Ansible guide, or book a demo with one of our engineers.
In this article, we took a look at Ansible’s core automation component: playbooks. We saw how to create, structure, and trigger playbook runs.
Moreover, we explored leveraging variables, handling sensitive data, controlling task execution with handlers and conditions, and iterating over tasks with loops.
Thank you for reading, and I hope you enjoyed this article as much as I did!
Manage Ansible Better with Spacelift
Managing large-scale playbook execution is hard. Spacelift enables you to automate Ansible playbook execution with visibility and control over resources, and seamlessly link provisioning and configuration workflows.