Ansible is an open-source automation tool used for deploying applications, managing systems, and enforcing configuration across remote machines. It uses a simple, agentless push model from a central control node and defines tasks in YAML with reusable modules.
Modules are the building blocks of Ansible tasks — they handle everything from installing software and managing users to editing files and configuring networks. They’re idempotent, meaning they can run repeatedly without causing unexpected changes, which helps keep systems consistent and reliable. Ansible comes with thousands of built-in modules and supports custom ones for added flexibility.
One useful example is the replace module, which lets you modify text within files across multiple hosts. This makes it ideal for updating configs or making precise edits at scale. We'll explore how it works next.
What we will cover:
- What is the Ansible replace module?
- How to use the Ansible replace module
- How to replace multiline text in files with the Ansible replace module
- How to use regular expressions (regex) to replace patterns with the Ansible replace module
- How to conditionally replace phrases with the before and after keywords
- How to replace a string with a backup
- When not to use the replace module
The replace module in Ansible is used to search for and replace text or specific patterns within files on remote hosts. It functions similarly to the search-and-replace option in a text editor.
The Ansible replace module operates by scanning the file specified in the task for a defined string or regular expression and replacing it with a desired new value. This is useful for updating configuration files to change database connection strings, update port configurations or API keys, or modify file paths across multiple servers.
The replace module can be very useful if you need to update the version number in a configuration file after an application upgrade, or when you want to replace hardcoded values in configuration files with environment-specific variables.
Let’s now explore the syntax of the replace module and cover some basic examples of using it. Here is the syntax:
- name: Replace String in a File
  replace:
    path: /opt/app/conf/file.xml
    regexp: 'old_string'
    replace: 'new_string'
- path: The absolute path of the file on the remote host that you want to modify.
- regexp: The text or pattern you want to search for within the file. You can use a plain string or a regular expression, typically wrapped in single quotes so YAML does not interpret special characters.
- replace: The string that replaces whatever regexp matches.
Once you have specified these key parameters, you can replace a string in your files with a new value. In the coming sections, we will discuss other parameters you can utilize to enhance the use of this module.
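As a quick preview of two of those parameters, here is a hedged sketch of a task that combines the backup and after options described later in this article; the path, marker, and values are purely illustrative:
- name: Replace a string only after a marker, keeping a backup (illustrative values)
  replace:
    path: /opt/app/conf/file.xml
    regexp: 'old_string'
    replace: 'new_string'
    after: 'app settings'   # only content after this marker is considered for replacement
    backup: true            # keep a timestamped copy of the original file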
In this section, we cover real-world scenarios where we can use the replace module and how it can benefit and streamline our workflows.
Example 1: Replace port-mappings in a Docker configuration
In this example, we have multiple docker-compose.yml files across our environments, and we need to update the port mappings in these files to a new port configuration. Instead of manually modifying each docker-compose file, we can use the Ansible replace module to quickly update the values.
- name: Update Docker port mappings
  hosts: docker_servers
  tasks:
    - name: Update port mappings in Docker-Compose file
      replace:
        path: /opt/app/docker-compose.yml
        regexp: '8080:8080'
        replace: '9090:9090'
Example 2: Update logging configurations for a centralized logging solution
Here, we have a microservice architecture that is built using Node.js and logs events using Winston logging.
We usually send these logs to a specific location on the container, but we want to centralize our logging mechanism to a single source across all our application services (Auth service, Order service, Payment service, etc.) and send these logs to a centralized logging system such as ELK stack.
Manually updating these values across all of our microservices might be time-consuming, but the Ansible replace module makes it easy to update them.
- name: Update log file path for all microservices using Winston
  hosts: microservices
  tasks:
    - name: Replace log file in Auth Microservice
      replace:
        path: /opt/app/auth-service/winston-config.js
        regexp: '/var/log/app/old_logs/'
        replace: '/var/log/app/new_logs/'
    - name: Replace log file in Order Microservice
      replace:
        path: /opt/app/order-service/winston-config.js
        regexp: '/var/log/app/old_logs/'
        replace: '/var/log/app/new_logs/'
    - name: Replace log file in Payment Microservice
      replace:
        path: /opt/app/payment-service/winston-config.js
        regexp: '/var/log/app/old_logs/'
        replace: '/var/log/app/new_logs/'
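Since these three tasks differ only in the service directory, the same change can also be written as a single looped task; here is a minimal sketch, assuming the three service folder names used above:
- name: Replace log file path in all microservices
  replace:
    path: "/opt/app/{{ item }}/winston-config.js"
    regexp: '/var/log/app/old_logs/'
    replace: '/var/log/app/new_logs/'
  loop:
    - auth-service
    - order-service
    - payment-service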
Example 3: Modify environment variables across all Kubernetes deployment YAML files
Typically, we would work with multiple deployment YAML files that utilize environment variables like database connection strings.
To migrate to a new database, we need to update environment variables across all the deployment YAML files that utilize this specific database. Reviewing each deployment file and updating every environment variable associated with the database connection string might be cumbersome.
With Ansible's replace module, we can ensure all of our deployment YAML files have the correct updated values and can be managed seamlessly. We can also include other tasks to ensure our deployment pods come up properly and the application is validated, which increases flexibility.
- name: Update DB connection string in K8s deployment
  hosts: k8s_clusters
  tasks:
    - name: Replace old DB connection string with new one
      replace:
        path: /etc/k8s/deployments/deployment.yml
        regexp: 'envVar: "DB_CONNECTION=old_db_url"'
        replace: 'envVar: "DB_CONNECTION=new_db_url"'
Example 4: Update contact email across multiple servers
The Ansible replace module can also be useful for replacing a contact email address with a new one across all of your servers. This can happen if the company decides to change its support email approach. You can simply specify the application path where the email address is configured and update it to the new one.
If there are different application paths across your systems, you will have to add looping along with variables to get the job done. See the example below:
- name: Update contact email address in multiple application configurations
  hosts: myservers
  vars:
    applications:
      - { app_name: "app1", config_path: "/etc/app1/config/email_config.conf", old_email: "old_email@example.com", new_email: "new_email@example.com" }
      - { app_name: "app2", config_path: "/etc/app2/config/email_config.conf", old_email: "old_email@example.com", new_email: "new_email@example.com" }
      - { app_name: "app3", config_path: "/opt/app3/config/email_config.conf", old_email: "old_email@example.com", new_email: "new_email@example.com" }
  tasks:
    - name: Replace old email address with new one in each app config
      replace:
        path: "{{ item.config_path }}"
        regexp: "contact_email={{ item.old_email }}"
        replace: "contact_email={{ item.new_email }}"
      loop: "{{ applications }}"
Example 5: Update the new certificate path across all your web servers
In this example, we will demonstrate how to replace the SSL certificate path on all our web servers using NGINX. Once again, this approach increases the flexibility of renewing our certificates and updating the new path across all our web servers.
- name: Update SSL certificate paths in Nginx configuration
  hosts: web_servers
  tasks:
    - name: Replace old SSL certificate path with new one
      replace:
        path: /etc/nginx/nginx.conf
        regexp: 'ssl_certificate /etc/nginx/old_cert.crt'
        replace: 'ssl_certificate /etc/nginx/new_cert.crt'
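Because the replace module only reports a change when it actually rewrites the file, it pairs naturally with a handler that reloads NGINX after the certificate path is updated. A hedged sketch of that pattern follows; the handler name and service name are assumptions, not part of the original example:
- name: Update SSL certificate paths and reload NGINX
  hosts: web_servers
  tasks:
    - name: Replace old SSL certificate path with new one
      replace:
        path: /etc/nginx/nginx.conf
        regexp: 'ssl_certificate /etc/nginx/old_cert.crt'
        replace: 'ssl_certificate /etc/nginx/new_cert.crt'
      notify: Reload nginx
  handlers:
    - name: Reload nginx
      service:
        name: nginx
        state: reloaded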
Example 6: Update version numbers in application configuration files
Whenever a Java-based application has a major version upgrade, we typically update the version in the application.properties file.
We can also use the Ansible replace module to ensure all our Java-based applications are on the proper version.
- name: Update version number in application properties
  hosts: app_servers
  tasks:
    - name: Replace old version number with new one
      replace:
        path: /opt/app/config/application.properties
        regexp: 'version=1.1.0'
        replace: 'version=2.1.0'
Example 7: Update CI/CD secrets across our Kubernetes secret YAML
In Kubernetes, we typically store our sensitive values in the Secrets resource file.
In this example, we have a scenario where we need to rotate the API keys in our Kubernetes Secrets YAML file across all of our clusters. The Ansible replace module can easily update these values for us on every cluster:
- name: Update secrets in K8s YAML files
  hosts: k8s_clusters
  tasks:
    - name: Replace old API key in secrets.yaml
      replace:
        path: /etc/k8s/secrets/api-secrets.yaml
        regexp: 'api_key: old_key'
        replace: 'api_key: new_key'
In this section, we’ll walk through practical examples of using regular expressions to identify and replace patterns within our files. This can be very beneficial when we are not sure how many lines in the file can include a specific text. Instead, we search for patterns and replace the found value with our desired value.
Here is a basic example of using regular expressions to find any value in the file that starts with user followed by digits and replace it with the specific value user21625:
- name: Replace multiple occurrences of a pattern
  replace:
    path: /etc/config.conf
    regexp: 'user\d+'
    replace: 'user21625'
The \d+ pattern matches one or more digits, so any value such as user7 or user93784 is replaced with user21625.
Example 1: Update version numbers in configuration files using regex
Previously, we covered replacing version numbers with actual strings we know the file will contain, but now we will search for patterns to replace values in a file we are not sure about.
At this point, we know the version number follows a pattern of 1.5.1, and we want to update all our servers consistently with the latest version number throughout the application.properties file.
- name: Update version numbers
  hosts: servers
  tasks:
    - name: Replace old version number
      replace:
        path: /etc/app/config/settings.conf
        regexp: 'version=\d+\.\d+\.\d+'
        replace: 'version=2.0.0'
This regular expression matches three groups of one or more digits (\d+) separated by literal periods (\.). This allows us to find any version number pattern across our files and replace it with 2.0.0.
Example 2: Update API base URL across configuration files using regex
In this example, we use regex to find patterns of our API base URL across our configuration files and replace them with our new URL.
- name: Update API base URL
  hosts: servers
  tasks:
    - name: Replace old API base URL
      replace:
        path: /etc/app/config/api_config.json
        regexp: 'https?://oldapi\.com/v[0-9]+'
        replace: 'https://newapi.com/v2'
This pattern matches either http or https (the s? makes the s optional), followed by ://oldapi.com (with the period escaped by a backslash) and a version segment of v followed by one or more digits from 0-9. Once Ansible finds a match in the file, it replaces it with our new value.
Example 3: Update database connection strings using regex
Using regex in our Ansible replace module is also beneficial when we need to update our database connection strings.
- name: Replace old database connection string with new one
  replace:
    path: /etc/app/config/database.conf
    regexp: 'db_url="[^"]*"'
    replace: 'db_url="mysql://newuser:newpassword@newhost/newdb"'
Here, we are searching for db_url= followed by any string of characters inside a pair of double quotes (the [^"]* part matches everything up to the closing quote), which is useful for enforcing consistency across your database configuration files regardless of the current value.
Example 4: Match and replace email addresses in configuration files
Regular expressions can be very useful for updating all the email fields in your configuration files to a single email source instead of having inconsistent email addresses across your files:
- name: Replace old admin email with new one
  replace:
    path: /etc/app/config/settings.conf
    regexp: '[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}'
    replace: 'admin@newdomain.com'
In this regex, the local part of the email address (before the @) can contain letters, numbers, and basic symbols. The domain part (after the @) is matched as well and must end with a top-level domain of at least two letters, such as .com. This allows us to match any common email address in the file and replace it with our desired email address.
When working with Ansible's replace module, the before and after options give you precise control over what gets replaced and where. Instead of blindly replacing every match in a file, you can target content that appears only between two known points.
The before keyword tells Ansible to look for your match before a specific string, while after does the opposite: it scopes the search to anything after a defined marker. Even better, both accept regular expressions, so you're not limited to fixed strings.
You can also combine before and after to define a very specific range within your file, which helps minimize accidental replacements and keeps your changes safe and predictable.
Let’s walk through a few examples to see how this works in real-world scenarios.
Example 1: Update database connection strings only for dev
We want to update only the database connection string for our dev instance and ensure the replacement doesn't affect our production instance. Here is a sample file we will attempt to modify using the Ansible replace module:
# Global settings
db_url="http://old-db.com"
[dev]
db_url="http://dev-db.com"
[prod]
db_url="http://prod-db.com"
Now, we can specify the exact region we want to update by setting the before option to the [prod] header and the after option to the [dev] header. Because both options are treated as regular expressions, the square brackets need to be escaped in the task. The replace module then searches only for content that appears after the [dev] marker and before the [prod] marker, allowing us to replace the dev database connection string only. In this example, we will also use regex to find the DB connection string regardless of its current value.
- name: Replace DB connection string in dev sections
  hosts: all_servers
  tasks:
    - name: Replace DB connection string in DEV configurations
      replace:
        path: /etc/app/config/settings.conf
        regexp: 'db_url=".*"'
        replace: 'db_url="new_db_url"'
        before: '\[prod\]'
        after: '\[dev\]'
Example 2: Change log path for non-production environments
In this example, we will attempt to modify only the log path of our non-production configuration files in our main application configuration file without disrupting production.
This is the file we will attempt to modify:
# Global settings
db_url="mysql://localhost:3306/app_db"
api_key="old-api-key"
log_level="INFO"
log_path="/var/log/app/logfile.log"
[dev]
db_url="mysql://dev-db:3306/app_dev"
api_key="dev-api-key"
log_level="DEBUG"
log_path="/var/log/app/dev_logfile.log"
feature_flag=true
[staging]
db_url="mysql://staging-db:3306/app_staging"
api_key="staging-api-key"
log_level="INFO"
log_path="/var/log/app/staging_logfile.log"
feature_flag=true
[prod]
db_url="mysql://prod-db:3306/app_prod"
api_key="prod-api-key"
log_level="ERROR"
log_path="/var/log/app/prod_logfile.log"
feature_flag=false
With the following Ansible tasks, we can modify dev and staging without affecting our production values.
- name: Update log path for dev and staging sections only
  hosts: all_servers
  tasks:
    - name: Replace log path for dev
      replace:
        path: /etc/app/config/settings.conf
        regexp: 'log_path="/var/log/app/dev_logfile.log"'
        replace: 'log_path="/var/log/app/new_dev_logfile.log"'
        before: '\[staging\]'
        after: '\[dev\]'
    - name: Replace log path for staging
      replace:
        path: /etc/app/config/settings.conf
        regexp: 'log_path="/var/log/app/staging_logfile.log"'
        replace: 'log_path="/var/log/app/new_staging_logfile.log"'
        before: '\[prod\]'
        after: '\[staging\]'
Example 3: Update our Kubernetes container image tag without affecting production
We can also use the Ansible replace module to replace the image tag in our Kubernetes deployment manifest file for our non-production environments. This is the deployment file we will be attempting to modify using the replace module:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app-container
          image: app:prod-tag
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-dev
  template:
    metadata:
      labels:
        app: app-dev
    spec:
      containers:
        - name: app-container
          image: app:dev-tag
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-staging
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-staging
  template:
    metadata:
      labels:
        app: app-staging
    spec:
      containers:
        - name: app-container
          image: app:staging-tag
          ports:
            - containerPort: 8080
With this Ansible playbook, we can ensure only the dev and staging Deployments are changed. The after and before options scope each replacement to the section of the file that belongs to the matching Deployment name:
- name: Update container image tag in dev and staging deployments
  hosts: all_servers
  tasks:
    - name: Update image tag for dev deployment
      replace:
        path: /path/to/kubernetes/deployment.yml
        regexp: 'image: app:dev-tag'
        replace: 'image: app:new-dev-tag'
        after: 'name: app-dev'
        before: 'name: app-staging'
    - name: Update image tag for staging deployment
      replace:
        path: /path/to/kubernetes/deployment.yml
        regexp: 'image: app:staging-tag'
        replace: 'image: app:new-staging-tag'
        after: 'name: app-staging'   # staging is the last Deployment in the file, so no before marker is needed
The Ansible replace module includes a handy backup parameter that lets you create a backup of a file before making any changes. This is especially useful when working in production environments, where updating configuration files could potentially disrupt critical processes or workflows. Having a backup in place ensures you can quickly restore the original file if something goes wrong.
To enable this, just set the backup flag to true, and Ansible will automatically save a timestamped copy of the original file in the same directory before modifying it. For example, if you're modifying a file called deployment.yml, setting backup: true keeps a copy such as deployment.yml.<timestamp>~ alongside it, and the task returns the exact path as backup_file. This gives you a quick, easy way to roll back any changes.
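Because the module reports the backup location in its return values, you can also capture it with register and log it or reuse it in a later rollback step. A minimal sketch, with an illustrative file path and variable name:
- name: Replace a value and record where the backup was written
  replace:
    path: /opt/app/docker-compose.yml
    regexp: '8080:8080'
    replace: '9090:9090'
    backup: true
  register: replace_result
- name: Show the path of the backup file
  debug:
    var: replace_result.backup_file
  when: replace_result is changed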
Now, let's look at a few examples that show how to use the backup parameter in action.
Example 1: Replace API URLs in Kubernetes ConfigMaps
In this example, we will attempt to replace the API URL with a new one. However, since the ConfigMap stores key environment variables that can affect our workloads, we want to ensure we keep a backup of this file.
Here is a sample ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-api-config
data:
  api_url: "http://old-api.example.com"
The following Ansible task will search for the old API URL and replace it with the new one:
- name: Replace old API URL with backup in Kubernetes ConfigMap
  replace:
    path: /kubernetes/configmap-api.yml
    regexp: 'api_url: "http://old-api.example.com"'
    replace: 'api_url: "http://new-api.example.com"'
    backup: true
Example 2: Update environment variables in our Docker Compose files
We often need to update environment variable strings in our docker-compose file when the infrastructure changes. The replace module can be really helpful for automating this process across multiple places. That said, things can sometimes break when a new environment variable is introduced or doesn’t behave as expected.
In this example, we’ll update an environment variable in our docker-compose file, specifically the one tied to our Azure Client Secret. The current client secret has expired, so we’ll need to replace it with a new one.
Here’s the sample docker-compose file we’ll be working with:
version: '3.8'
services:
  app:
    image: app:latest
    environment:
      - AZURE_CLIENT_ID=my-client-id
      - AZURE_TENANT_ID=my-tenant-id
      - AZURE_CLIENT_SECRET=old-client-secret
    ports:
      - "8080:8080"
We want to replace the client secret, but we might be concerned that the new secret will not function as expected or that a separate configuration was not done properly.
If so, we can set the backup flag to true for peace of mind while modifying the environment variable in the docker-compose file.
- name: Update expired Azure client secret in Docker Compose file
  hosts: all_servers
  tasks:
    - name: Backup and replace expired Azure client secret
      replace:
        path: /path/to/docker-compose.yml
        regexp: 'AZURE_CLIENT_SECRET=old-client-secret'
        replace: 'AZURE_CLIENT_SECRET=new-client-secret'
        backup: true
Example 3: Adding a new feature to the Kubernetes API server
Modifying the kube-apiserver manifest is a sensitive task: a single invalid character in the static Pod file can cause serious issues, leading to downtime and troubleshooting to determine what went wrong. Therefore, it's important that we back up this file before making any changes.
This is the sample kube-apiserver manifest file we will be working with:
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - name: kube-apiserver
      image: k8s.gcr.io/kube-apiserver:v1.21.0
      command:
        - /bin/sh
        - -c
        - |
          kube-apiserver --advertise-address=$(hostname -i) \
            --allow-privileged=true \
            --authorization-mode=Node,RBAC \
            --enable-admission-plugins=NodeRestriction
      ports:
        - containerPort: 6443
          protocol: TCP
We will update the command field with a new feature by modifying the value of the enable-admission-plugins flag. If we incorrectly add an extra space or even use the wrong plugin name, the cluster can fail. Therefore, it's important to back this file up and ensure you have a rollback method in place.
- name: Update kube-apiserver flags
  hosts: master_nodes
  tasks:
    - name: Backup and update kube-apiserver flags with new feature gate
      replace:
        path: /etc/kubernetes/manifests/kube-apiserver.yaml
        regexp: '--enable-admission-plugins=NodeRestriction'
        replace: '--enable-admission-plugins=NodeRestriction,AnotherPlugin'
        backup: true
Example 4: Modify sensitive files that can make the server inaccessible
You may encounter edge cases where using the Ansible replace module to modify a file on a server could make the server inaccessible once the task is complete. In such situations, the backup flag isn't always helpful, because if the server becomes unreachable, you won't be able to retrieve the backup or even run a follow-up task to move it somewhere safe.
To handle this kind of scenario, we need a different approach. In this example, we'll walk through a safer way to back up the file to a remote location before making changes, so even if the server becomes unreachable, the backup is still secure.
Here’s how to do it:
First, we use the fetch module to pull the file from each host to a safe backup location off the server before any modification. Then, we proceed with the file modifications.
In this case, we've split the changes into two separate tasks, both of which could potentially impact server accessibility. Finally, we restart the SSH service, which is required after making these changes.
- name: Modify sshd_config with backup and external backup server
  hosts: servers
  become: true
  tasks:
    - name: Backup sshd_config file off the host before any modification
      fetch:
        src: /etc/ssh/sshd_config
        dest: /path/to/remote/backup/sshd_config_{{ ansible_hostname }}.bak
        flat: true   # fetch pulls the remote file to the control node, so the backup survives even if the host becomes unreachable
    - name: Backup and change PermitRootLogin in sshd_config file
      replace:
        path: /etc/ssh/sshd_config
        regexp: 'PermitRootLogin yes'
        replace: 'PermitRootLogin no'
        backup: true
    - name: Backup and change PasswordAuthentication in sshd_config file
      replace:
        path: /etc/ssh/sshd_config
        regexp: 'PasswordAuthentication yes'
        replace: 'PasswordAuthentication no'
        backup: true
    - name: Restart SSH service to apply changes
      systemd:
        name: sshd
        state: restarted
This will ensure you have a backup of the sshd_config file. If the server becomes inaccessible, you can use other recovery methods to access it, restore this file, and bring it back up.
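Once you regain access to the host, restoring the fetched copy is a straightforward copy back into place; here is a minimal sketch that reuses the backup path from the play above:
- name: Restore sshd_config from the external backup
  hosts: servers
  become: true
  tasks:
    - name: Copy the saved sshd_config back into place
      copy:
        src: /path/to/remote/backup/sshd_config_{{ ansible_hostname }}.bak
        dest: /etc/ssh/sshd_config
    - name: Restart SSH service
      systemd:
        name: sshd
        state: restarted
The replace module also accepts a validate parameter, so a command such as sshd -t can check the modified file before it is written into place, adding one more layer of protection for changes like these.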
The Ansible replace module is a great tool for modifying and replacing specific values in files across your servers. However, in certain situations, other Ansible modules might handle the task better.
What is the difference between Ansible lineinfile and replace modules?
The Ansible lineinfile module ensures that a specific line is present (or absent) in a file, making it ideal for managing configuration entries in a predictable, idempotent way.
The lineinfile module is best used when you need to add, remove, or modify a single line in a file, especially when you're validating the presence of a specific line. It's ideal for straightforward, line-based edits. On the other hand, the replace module is more suitable for working with complex patterns or when you need to replace specific text across multiple lines.
Usually, sshd_config contains the PermitRootLogin directive in only one place, so the lineinfile module is the better fit. It can ensure that the field is always set to a specific value.
- name: Ensure the line 'PermitRootLogin no' is set in sshd_config
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: 'PermitRootLogin no'
Another example where lineinfile will be a better option is when we are updating usernames in a config file. Typically, there would be one location in a file that we would update for usernames. In that case, we can utilize this module instead of the replace module:
- name: Update admin user in config.json using lineinfile
  lineinfile:
    path: /path/to/app/config.json
    regexp: '"admin_user": "olduser"'
    line: '"admin_user": "newuser"'
What is the difference between Ansible blockinfile and replace modules?
In scenarios where multiple lines of text, such as configuration blocks, need to be added, updated, or removed in a file, the blockinfile module is more suitable than the replace module. blockinfile is designed specifically to handle blocks of content, ensuring that the block is inserted only once and remains clearly marked and manageable.
Although the replace module is useful for single-line or pattern-based substitutions, it lacks the structure and idempotency that blockinfile provides for grouped entries.
For example, if we need to add multiple environment variables to a file, we can easily use the blockinfile module to add many environment variables to our files:
- name: Add environment variables block to systemd override configuration
  blockinfile:
    path: /etc/systemd/system/my-service.service.d/override.conf
    block: |
      [Service]
      Environment="ENV_VAR1=test"
      Environment="ENV_VAR2=value"
The blockinfile module wraps the content in marker comments, so repeated runs keep a single managed copy of the block instead of appending duplicates.
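If you manage more than one block in the same file, the marker option lets you give each block its own marker text so they remain independent. A brief sketch, with an illustrative marker and settings:
- name: Add a second, separately managed block to the same override file
  blockinfile:
    path: /etc/systemd/system/my-service.service.d/override.conf
    marker: "# {mark} ANSIBLE MANAGED BLOCK - resource limits"
    block: |
      [Service]
      LimitNOFILE=65535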
What is the difference between Ansible template and replace modules?
The Ansible template module is best used when you need to pass specific variables or external data into a file. It processes Jinja2 templates and renders them into the target file, enabling a more dynamic and flexible configuration. This makes it especially effective for handling complex setups that vary across environments.
On the other hand, the replace module is used for straightforward text substitutions within existing files using regular expressions. It's suited for simpler, targeted edits rather than generating full dynamic content.
For example, if we manage the values for PermitRootLogin or PasswordAuthentication in our sshd_config file on our Linux hosts through variables, we should use templates and pass the dynamic values into our files instead of hard-coding them into the replace task.
Instead of doing the following with the replace module:
- name: Set custom settings for sshd_config
  replace:
    path: /etc/ssh/sshd_config
    regexp: '^PasswordAuthentication.*'
    replace: 'PasswordAuthentication no'
We can have a Jinja2 template file with our sshd_config file content using Ansible variables:
PermitRootLogin {{ ssh_root_login }}
PasswordAuthentication {{ password_auth }}
Now we can copy that template file over to our Linux hosts with the correct variable values. This approach ensures the use of variables and flexibility when making changes down the line:
- name: Deploy sshd_config from template
  template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
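To complete the picture, the variables referenced in the template are typically defined in inventory, group_vars, or the play itself; here is a minimal sketch with an illustrative host group and values:
- name: Deploy sshd_config from template with environment-specific values
  hosts: linux_hosts
  become: true
  vars:
    ssh_root_login: "no"
    password_auth: "no"
  tasks:
    - name: Render sshd_config from the Jinja2 template
      template:
        src: sshd_config.j2
        dest: /etc/ssh/sshd_config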
Spacelift’s vibrant ecosystem and excellent GitOps flow are helpful for managing and orchestrating Ansible. By introducing Spacelift on top of Ansible, you can easily create custom workflows based on pull requests and apply any necessary compliance checks for your organization.
Another advantage of using Spacelift is that you can manage infrastructure tools like Ansible, Terraform, Pulumi, AWS CloudFormation, and even Kubernetes from the same place and combine their stacks to build workflows that span tools.
Our latest Ansible enhancements solve three of the biggest challenges engineers face when they are using Ansible:
- Having a centralized place in which you can run your playbooks
- Combining IaC with configuration management to create a single workflow
- Getting insights into what ran and where
Provisioning, configuring, governing, and even orchestrating your containers can be performed with a single workflow, separating the elements into smaller chunks to identify issues more easily.
Would you like to see this in action, or just get a tl;dr? Check out this video showing you Spacelift’s Ansible functionality:
If you want to learn more about using Spacelift with Ansible, check our documentation, read our Ansible guide, or book a demo with one of our engineers.
The Ansible replace module automates text replacements across files by matching patterns or strings, making it easier to manage configs across hosts. It supports regular expressions, backup options, and controlled replacements using the before and after keywords, offering flexibility and safety, especially in production.
It is powerful, but you should know when modules like lineinfile, blockinfile, or template are a better fit. Knowing when to use each helps keep your automation clean, efficient, and reliable.
Manage Ansible better with Spacelift
Managing large-scale playbook execution is hard. Spacelift enables you to automate Ansible playbook execution with visibility and control over resources, and seamlessly link provisioning and configuration workflows.