In the previous article, we discussed the benefits of self-service infrastructure and the Spacelift functionality that enables it. We’ll now move on to explore examples of self-service infrastructure, the problems it solved, and what a modern Spacelift-based implementation looks like today, including output sharing across stacks, drift detection, OpenTofu support, and service catalog workflows.
Unlike typical CI/CD tools, Spacelift is a multifunctional platform: it delivers CI/CD and orchestration functionality while also operating as a self-service infrastructure platform.
Self-service infrastructure in action
How does Spacelift do all that? Here is an overview of the features that enable Spacelift’s self-service approach:
- Blueprints. These are the backbone of Spacelift’s self-service infrastructure approach. Templating is a core element of self-service, allowing teams to select the infrastructure template that fits their needs and deploy it securely. Templates are tested and managed by experts.
- Spaces. The development team doesn’t need to know exactly how the structure of cloud accounts is defined. Spaces make the organization of the accounts and environments easier.
- Policies. Policies are powerful tools for controlling Blueprint usage. Using Open Policy Agent (OPA) policies to manage who can use specific templates, and when, strengthens infrastructure control.
- Spacelift Intent. Allows you to create and manage infrastructure using natural language through your LLM. Spacelift provides the MCP server and extensibility to simply ask your LLM for resources, and they are deployed. No code. No repository.
- Stack Dependencies v2. Development teams can create a clear chain of deployments across stacks and also reference values from earlier stacks, so later stacks can consume outputs without copy and paste.
- Drift detection. Self-service does not end at provisioning. Detecting and responding to changes outside IaC helps keep systems aligned with the intended state.
- Dynamic, short-lived cloud credentials. Self-service becomes safer when runs use temporary credentials instead of long-lived keys.
- OpenTofu support. Many teams now standardize on OpenTofu or run mixed estates. Self-service workflows should work the same way regardless of whether stacks use Terraform or OpenTofu.
- Service catalog entry points. Some organizations prefer a catalog experience. Spacelift can be used behind a catalog to keep requests familiar while preserving guardrails.
Real-world self-service infrastructure implementations
I will now walk you through some real cases I’ve observed or worked on:
Case 1: CloudFormation for setting up environments
This project centers on the team that managed the infrastructure templates for development teams. They used AWS CloudFormation to create separate environments in multiple AWS accounts. The infrastructure created in these environments includes networks, policies, container clusters, serverless functions, and more.
The problem
The first problem in this case concerned knowledge gaps around AWS architecture and best practices within development teams. The second was more complex. The organization had a large number of development teams, with each team owning at least 3 AWS accounts, sometimes more. This proliferation of teams and ownership meant there was little control over the infrastructure and no single pane of glass where information about systems could be gathered.
How it was addressed
The solution at that time was to create a set of CloudFormation snippets that could be collected by the development team and combined into one big template according to their needs.
| What it solved | Issues raised by this approach |
| --- | --- |
It is clear that this approach did not deliver a satisfactory solution, failing to resolve technical issues or improve the process.
How Spacelift could solve the problem
The first step is to devise an appropriate process for defining the requirements and designing the templates. With Spacelift, you don’t need to use unwieldy CloudFormation snippets. You can prepare full templates, parameterize them, and publish them as Blueprints so teams consume a stable interface instead of assembling building blocks by hand.
For flexibility, templates should be prepared by layers. For example, you can keep separate templates for the VPC, the ECS cluster, and other layers. The templates can then be published as Blueprints.
We mentioned the complicated AWS account setup earlier. This setup can be replicated using Spaces and guarded by Policies.
When development teams create stacks from Blueprints, they can use Stack Dependencies to connect stacks into a logical deployment chain. With Stack Dependencies v2, later stacks can also consume outputs from earlier stacks, like passing a VPC ID into the cluster stack, or reusing shared networking and IAM values without manual wiring.
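As a rough illustration of what this output-to-input wiring does, the sketch below models a dependency reference as a mapping from an upstream stack's output name to a downstream input name. The field names and shapes here are hypothetical, not Spacelift's actual configuration schema:

```python
# Illustrative model of cross-stack output references (hypothetical,
# not Spacelift's real schema): each reference maps an upstream stack's
# output name to an input on the downstream stack.

def resolve_inputs(upstream_outputs: dict, references: list[dict]) -> dict:
    """Build the downstream stack's inputs from upstream outputs."""
    inputs = {}
    for ref in references:
        output_name = ref["output_name"]
        if output_name not in upstream_outputs:
            raise KeyError(f"upstream stack has no output '{output_name}'")
        inputs[ref["input_name"]] = upstream_outputs[output_name]
    return inputs

# A VPC stack exposes its ID; the cluster stack consumes it as a variable.
vpc_outputs = {"vpc_id": "vpc-0abc123", "private_subnets": "subnet-1,subnet-2"}
refs = [
    {"output_name": "vpc_id", "input_name": "TF_VAR_vpc_id"},
    {"output_name": "private_subnets", "input_name": "TF_VAR_subnet_ids"},
]
cluster_inputs = resolve_inputs(vpc_outputs, refs)
```

The point of the model is that the downstream team never copies values by hand: the reference declares the wiring once, and any refresh of the upstream output flows through automatically.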
The first diagram below presents the process used by the company. The second shows the process with Spacelift. The diagrams also indicate the additional skills each team needs to work with these processes.
This diagram shows the process as-is. It is clear that the development team has to do a lot of manual work on infrastructure tasks to create the templates properly. The platform team doesn’t know who is using the snippets (and how).
Even worse, the development team has full access to the snippets, and they can modify them. These issues add tension and uncertainty in the process.
The second diagram shows the Spacelift approach for self-service infrastructure. We have a clear separation of responsibilities and one central tool. The respective roles of the platform and development teams are clearly defined, and there is an established control and communication channel.
Development teams can focus on delivering software. The platform team controls the CloudFormation Blueprints, and development teams cannot modify them. Policies and approvals can be applied consistently across all runs, and the platform team can observe usage and outcomes through a single system of record.
The platform team can track execution quality and improve Blueprints when needed. If parts of the estate change outside IaC, drift detection helps surface it so the platform team can respond before configuration diverges silently.
Case 2: Multicloud Kubernetes
Kubernetes shines as a platform-agnostic tool that suits a wide range of applications. It can be deployed on AWS and Azure, or on AWS and on-prem, for example. Such multicloud setups are useful, especially when different workloads operate under different security guardrails.
In this example, we explore creating self-service infrastructure for such a scenario.
The problem
This is a fairly straightforward case, in which the development team wants to quickly deploy a workload on a Kubernetes cluster. Depending on security recommendations, deployment will take place in AWS or on-prem.
How it can be solved
For this exercise, we assume that the platform team is in charge of Kubernetes infrastructure, and we keep the solution simple: one AWS account per environment and one Kubernetes cluster. The same setup applies on-prem, with one cluster per environment.
Let’s take a look at the responsibilities of platform and development teams in this scenario:
| Platform team responsibilities | Development team responsibilities |
| --- | --- |
As we can see, the platform team is responsible for the underlying infrastructure. Fortunately, this responsibility can be carried out through Spacelift, the same tool the development teams use.
To avoid confusion for development teams, Spacelift administrators create Spaces that logically separate the platform team's administrative work from development work. Another set of Spaces can be created to separate environments and clusters (on-prem, AWS). This logical structure must be part of the SDLC design.
The diagram above illustrates the potential process of this solution. It offers simple and compelling benefits:
- All decisions and deployments are made with one tool (Spacelift).
- Only one team is responsible for infrastructure.
- Developers need to know only where to deploy the workload. From their perspective, the only additional work is selecting a predefined worker pool.
- No additional knowledge is expected from development teams.
- Development teams are not blocked in their deployments.
- Spaces separate the responsibilities of platform and development teams.
Case 3: Creating entire projects for development teams
In this case, we look at a real-world scenario in which the platform team sets out to deliver fully functional templates that development teams use to create projects. These templates must cover infrastructure on AWS, as well as GitHub repositories and pipelines.
We will approach this case in two steps, assuming an appropriate setup already exists in Spacelift, with Spaces and related configuration in place.
The first step is to create and configure the GitHub repository. Configuration involves assigning proper users, configuring the protected branches, and so on.
The second step is to deploy infrastructure on AWS. This step is strictly related to infrastructure.
This is how the process might be constructed:
The platform team creates the templates for all use cases. There may be just one template related to GitHub repository creation, enabling almost complete control over the way repositories are constructed and managed. This is a very important consideration when your organization is scaling. The approach allows you to keep the SDLC approach unified for the whole organization, which can be important from a regulatory perspective.
For the development team, this is a very straightforward step. They simply select the name for the repositories and who should have access to them. They are then ready to ship their code into the repository.
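To make the division of inputs concrete, here is a minimal sketch of what such a repository template might resolve from the development team's two choices. Everything in it is hypothetical: the default values and field names are assumptions standing in for whatever standards the platform team bakes into the real template.

```python
# Hypothetical sketch: the development team supplies only a name and an
# access list; the platform team's standards (assumed values below) are
# enforced by the template, not chosen by developers.
ORG_DEFAULTS = {
    "visibility": "private",
    "default_branch": "main",
    "protected_branches": ["main"],
    "required_reviews": 2,           # assumed org-wide standard
    "ci_pipeline": "standard-build", # assumed shared pipeline template
}

def build_repo_spec(name: str, teams_with_access: list[str]) -> dict:
    """Combine developer inputs with platform-enforced defaults."""
    if not name.islower():
        raise ValueError("repository names must be lowercase")
    return {"name": name, "access": sorted(teams_with_access), **ORG_DEFAULTS}

spec = build_repo_spec("payments-service", ["team-payments", "team-sre"])
```

The design point is that developers cannot weaken branch protection or reviews by omission; the template owns those settings, which is what keeps the SDLC unified across the organization.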
The second step is similar to the previous scenarios. The platform team prepares and manages Blueprints for infrastructure, and development teams use these Blueprints to create infrastructure for their applications.
Case 4: ServiceNow-powered self-service catalog
Many organizations already rely on ServiceNow as the standard place to request access and infrastructure changes. In those environments, self-service succeeds when it fits existing workflows.
The problem
Development teams want a simple way to request infrastructure without learning every platform detail. Platform teams want strong guardrails, consistent templates, and a clear audit trail.
ServiceNow provides a familiar request experience, but if fulfillment stays manual, the platform team becomes a bottleneck again. The goal is to keep ServiceNow as the request entry point while moving fulfillment to an automated, policy-driven flow in Spacelift.
How it can be solved
ServiceNow becomes the front door, and Spacelift becomes the execution engine. The catalog item collects a small set of approved inputs, then Spacelift provisions and deploys using Blueprints, Spaces, Policies, and Stack Dependencies v2.
| Platform team responsibilities | Development team responsibilities |
| --- | --- |
| Build Blueprints for approved infrastructure patterns | Submit a ServiceNow request with the required inputs |
| Define inputs and guardrails with Policies | Select environment and workload options from approved choices |
| Organize ownership and blast radius using Spaces | Provide ownership and tagging information needed for governance |
| Operate dependency chains and shared outputs with Stack Dependencies v2 | Use the delivered outputs and follow the established operating model |
| Monitor drift and respond to unintended changes | Report issues and request updates through the same catalog flow |
A practical flow looks like this:
- A developer opens a ServiceNow catalog item like “Create application environment” or “Provision a Kubernetes namespace”.
- The form captures required fields like environment, region, service name, owner, and cost center.
- ServiceNow triggers Spacelift through an integration step that starts a run tied to a Blueprint.
- Spacelift creates or updates the stack in the correct Space based on the request.
- Policies validate the request, enforce naming, constrain regions, require tags, and apply approvals when needed.
- Stack Dependencies v2 orchestrates multi-step provisioning and shares outputs between layers, like network identifiers flowing into cluster and workload stacks.
- Spacelift returns key outputs like endpoints, repository links, or access details back to ServiceNow so the requester can find them in one place.
- Drift detection keeps the delivered system aligned over time and notifications can be routed back to the same operational channels.
This approach keeps the request workflow familiar for teams that already operate through ServiceNow. It also keeps the platform team out of the fulfillment loop for standard requests, while preserving visibility, governance, and consistent infrastructure outcomes.
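The guardrail step in the flow above can be sketched as a simple validation pass over the catalog form's fields. This is an illustrative stand-in for a real policy engine: the field names, the allowed-region list, and the naming rule are all assumptions, not Spacelift or ServiceNow APIs.

```python
import re

# Hypothetical guardrail checks applied to a catalog request before any
# run is triggered (field names and rules are assumptions for illustration).
ALLOWED_REGIONS = {"eu-west-1", "us-east-1"}
REQUIRED_FIELDS = {"environment", "region", "service_name", "owner", "cost_center"}

def validate_request(payload: dict) -> list[str]:
    """Return a list of violations; an empty list means the request passes."""
    errors = [f"missing required field: {f}"
              for f in sorted(REQUIRED_FIELDS - payload.keys())]
    region = payload.get("region")
    if region and region not in ALLOWED_REGIONS:
        errors.append(f"region '{region}' is not in the approved list")
    name = payload.get("service_name", "")
    if name and not re.fullmatch(r"[a-z][a-z0-9-]{2,30}", name):
        errors.append("service_name must be lowercase alphanumeric with hyphens")
    return errors

request = {
    "environment": "prod",
    "region": "eu-west-1",
    "service_name": "payments-api",
    "owner": "team-payments",
    "cost_center": "cc-1042",
}
violations = validate_request(request)
```

Because the checks run before fulfillment, a rejected request bounces back to the requester with actionable messages instead of landing in the platform team's queue.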
Here is a demo video showing how to implement self-service in Spacelift:
Case 5: Natural Language Infrastructure with Spacelift Intent
Modern development teams understand what infrastructure they need, but sometimes lack deep expertise in Infrastructure as Code syntax and cloud provider APIs. While Blueprints solve this for predefined templates, teams sometimes need the flexibility to request infrastructure variations or modifications using natural language rather than learning Terraform or OpenTofu syntax.
In this example, we explore how Spacelift Intent enables truly conversational self-service infrastructure through LLM integration.
The problem
Development teams frequently encounter scenarios where existing Blueprints don’t quite match their needs, or they need to make quick infrastructure changes during prototyping or incident response. Traditional approaches force teams to either:
- Learn IaC syntax to modify templates, creating a knowledge bottleneck
- Submit tickets to the platform team for simple variations, creating wait states
- Work around infrastructure limitations, leading to suboptimal solutions
Additionally, the iterative nature of infrastructure design often requires multiple adjustments. With traditional IaC, each adjustment means editing code, committing to a repository, and triggering a deployment pipeline.
How it can be solved
Spacelift Intent provides an MCP (Model Context Protocol) server that connects to your LLM of choice, allowing teams to describe infrastructure needs in plain language. The platform team maintains control through policies and guardrails while development teams gain unprecedented flexibility.
The key innovation is that no code repository is required, and no IaC syntax knowledge is needed. The LLM translates natural language requests into infrastructure actions, and Spacelift enforces organizational policies before execution.
Let’s examine the responsibilities in this model:
Platform team responsibilities:
- Define policies that control what infrastructure can be created through Intent
- Configure approved cloud services, regions, and resource constraints
- Set up MCP server integration and LLM access
- Establish naming conventions and tagging requirements
- Monitor Intent usage and outcomes through Spacelift’s audit trail
Development team responsibilities:
- Describe infrastructure needs in natural language to their LLM
- Provide context about environment, purpose, and requirements
- Review and approve the proposed infrastructure deployments
- Work within the constraints defined by platform team policies
A practical workflow
The process works seamlessly within the development team’s existing tools:
- A developer describes their need in their LLM interface: “I need an S3 bucket for storing application logs from our payment service in the production environment. It should have 30-day lifecycle policies and encryption enabled.”
- The LLM communicates with Spacelift Intent through the MCP server, translating the request into appropriate infrastructure specifications.
- Spacelift policies validate the request against organizational rules: approved regions, required tags, encryption standards, naming patterns, and resource quotas.
- If policies pass, Spacelift provisions the infrastructure. If not, the LLM provides clear feedback about what needs to change: “This request requires encryption to be enabled for production resources” or “The bucket name must follow the pattern ‘prod-appname-purpose’.”
- The infrastructure is deployed, and relevant outputs like bucket ARN and endpoint are returned directly in the conversation.
- Drift detection continues monitoring the resource to ensure it stays aligned with the intended state.
For modifications, the developer simply continues the conversation: “Actually, let’s extend that retention to 90 days” or “I need to add cross-region replication to our DR region.” No repository updates, no pull requests, no context switching.
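The policy feedback from the S3 bucket example can be sketched as a pair of checks that produce exactly those messages. The 'prod-appname-purpose' pattern and the production-encryption rule are taken from the narrative above; this is an illustrative sketch, not a real Spacelift policy.

```python
import re

# Illustrative policy checks for the S3 bucket request described in the
# example; the naming pattern and rules are assumptions from the narrative,
# not an actual Spacelift policy.
BUCKET_PATTERN = re.compile(r"prod-[a-z0-9]+-[a-z0-9]+")

def check_bucket_request(name: str, environment: str, encrypted: bool) -> list[str]:
    """Return human-readable violations for the LLM to relay to the requester."""
    errors = []
    if environment == "production" and not encrypted:
        errors.append("This request requires encryption to be enabled "
                      "for production resources")
    if not BUCKET_PATTERN.fullmatch(name):
        errors.append("The bucket name must follow the pattern "
                      "'prod-appname-purpose'")
    return errors

# A compliant request passes with no violations.
ok = check_bucket_request("prod-payments-logs", "production", True)
```

Returning plain-language messages, rather than a bare pass/fail, is what lets the LLM turn a policy rejection into the conversational feedback described above.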
The benefits of conversational infrastructure
This approach delivers compelling advantages for organizations embracing self-service:
- Zero IaC learning curve: Development teams describe what they need, not how to implement it.
- Rapid iteration: Changes happen through conversation, not code commits and reviews.
- Policy enforcement remains central: Platform teams maintain full control through Spacelift policies regardless of how requests arrive.
- No repository proliferation: Intent resources exist in Spacelift without creating additional repositories to manage.
- Natural documentation: The conversation itself becomes documentation of what was requested and why.
- Contextual assistance: LLMs can explain infrastructure decisions, suggest improvements, and help troubleshoot issues.
- Consistent with existing workflows: Teams using Blueprints and Intent benefit from the same Spaces, Policies, and drift detection.
Development teams gain autonomy for infrastructure variations and experiments without becoming IaC experts. Platform teams preserve governance and visibility while reducing the ticket volume for simple infrastructure requests.
The combination of Intent with Blueprints creates a flexible self-service model. Use Blueprints for well-defined, repeatable patterns, and Intent for the variations, experiments, and edge cases that don’t fit a template. Both paths remain governed by the same organizational policies and provide the same operational visibility.
Key takeaways from all scenarios
In all scenarios above, self-service infrastructure and a well-designed underlying system allow development teams to work autonomously. These factors matter in Agile environments. At the same time, organizations still need security, best practices, correct configuration, cost effectiveness, and operational clarity. These remain the responsibilities of a well-informed platform team.
Development teams are expected to deliver software as efficiently as possible. We cannot allow delivery to stall while teams wait for an ops team to provide infrastructure. The self-service approach lets development teams become independent without requiring them to become infrastructure experts.
Spacelift allows organizations to organize and control workloads using one platform. This can be an efficient and cost-effective way to operate. With components like Blueprints and Spaces in one place, Spacelift can support a self-service culture where teams move quickly while platform teams retain control.
A refreshed self-service model also accounts for what happens after provisioning. Drift detection helps identify unintended changes, dynamic credentials reduce risk, and Stack Dependencies v2 makes multi-stack systems easier to operate by sharing outputs across layers.
If you would like to discover how the Spacelift platform could enhance flexibility and efficiency in your organization, try it for free or book a demo with one of our engineers.
Solve your infrastructure challenges
At Spacelift, we understand that you need a platform that not only helps you with infrastructure provisioning, configuring, and governing but also fosters collaboration and increases developer velocity.
