When monolithic applications are broken down into microservices, the intention is to introduce more flexibility into the system’s architecture. However, if the microservices still communicate with each other synchronously, the transformation does not achieve much: in a way, we are still dealing with a pseudo-monolith. Event-driven architecture (EDA) is the paradigm, supported by services like Amazon EventBridge, that makes this transformation meaningful.
Amazon EventBridge is a service offering an event management platform with added intelligence. In this post, we will explore its features and benefits, understand various concepts and use cases, and compare how it differs from SNS and SQS.
What we will cover:
- What is event-driven architecture?
- What is Amazon EventBridge?
- When to use AWS EventBridge?
- Benefits of using AWS EventBridge
- Amazon EventBridge key features
- How does Amazon EventBridge work?
- AWS EventBridge use case examples
- AWS EventBridge vs SNS vs SQS
- What is the difference between CloudWatch and EventBridge?
- Amazon EventBridge alternatives
Event-driven architecture (EDA) enables asynchronous communication between microservices. When a microservice performs its task and wants to pass it on to the next one for further processing, it can simply emit an event with all the relevant details. These events are then accumulated in a “broker” system, which routes them to appropriate channels or topics.
Even if multiple microservices depend on this event, based on which channel they are subscribed to, they will be notified about the event so they can process it for the next steps. This is a crucial aspect of EDA, which enables dependency-less development and scaling of relevant microservices.
Amazon EventBridge is a serverless service that enables asynchronous communication between other AWS services and third-party SaaS applications. The communication is based on events, which is advantageous when building loosely coupled software components and aligns with EDA. It adds a layer of intelligence that lets you apply transformation logic, filter events, and send them to designated targets.
An event is either triggered or scheduled. The source service/platform triggers the event, which is meant to be consumed by a target service. Amazon EventBridge manages an event from the moment it is triggered until it is consumed, using concepts and features like buses, pipes, and schedulers.
AWS EventBridge pricing
Amazon EventBridge pricing is based on several factors, such as the number of events ingested, whether the events originate from within the AWS account or from external third-party SaaS applications, replays, and data processing. It does not charge based on the number of schemas or rules configured.
Since this is a serverless service, the costs depend on usage. Events published on the default event bus (originating from AWS services) are free; custom events, however, are charged at USD 1 per million events. The same rate applies to third-party (SaaS) and cross-account events.
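As a quick back-of-the-envelope calculation, the custom-event cost model above can be sketched as follows (using only the USD 1 per million rate mentioned here; a real bill would also include replays, data processing, and other line items):

```python
# Rough monthly cost estimate for custom events on EventBridge,
# based on the USD 1 per million events rate described above.
# Events from AWS services on the default bus are free.

PRICE_PER_MILLION_USD = 1.00

def estimate_event_cost(custom_events_per_month: int) -> float:
    """Return the approximate monthly charge for custom events."""
    return custom_events_per_month / 1_000_000 * PRICE_PER_MILLION_USD

# e.g., 250 million custom events in a month:
print(estimate_event_cost(250_000_000))  # 250.0
```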
Read more about AWS cost optimization.
Amazon EventBridge can be used for different tasks, for example:
- Integrating AWS services — You can use AWS EventBridge to react to events from various AWS services like Amazon CloudWatch, AWS Lambda, etc.
- Data processing — You can use EventBridge for scenarios that require real-time data processing. For example, it can trigger AWS Lambda functions to process data as soon as it becomes available from a connected source.
- Automating event-driven architecture — AWS EventBridge enables you to build event-driven, loosely coupled applications by routing events from sources to targets.
- Scheduling — You can use the EventBridge Scheduler to create scheduled rules that invoke targets at specific times, enabling automation of operational tasks.
- Monitoring and auditing — EventBridge integrates with CloudWatch, allowing you to monitor event flow and audit event delivery through metrics and logs.
- Integrating SaaS applications — EventBridge allows you to add events from SaaS applications and third-party sources.
Here are some key benefits of using Amazon EventBridge for event-driven architecture:
- Simplified and flexible integration — EventBridge lets you easily integrate event sources from most AWS services and 50+ external SaaS platforms using a centralized event bus.
- Real-time event processing — EventBridge enables real-time processing and routing of events from different sources to targets like AWS Lambda, allowing you to react to changes and automate workflows in near real-time.
- Scalability — As a serverless service, EventBridge automatically scales to handle increasing event volumes.
- Cost-optimization — Serverless architecture also introduces the pay-as-you-use model for Amazon EventBridge. There is no charge for events sourced from other AWS services.
- Filtering and routing — Amazon EventBridge introduces an intelligence layer that can be configured to filter and transform incoming events from various sources. It is also possible to “enrich” the events with additional data before routing them to appropriate targets for further processing.
- Ease of management — Since Amazon EventBridge is a serverless service, AWS takes care of most of the infrastructure management. In addition, the schema registry helps identify and create schemas of incoming events from internal and external sources. This accelerates the development process for application logic, as the schema helps export the bindings in the programming language of choice.
Let’s now look at the most important Amazon EventBridge features and components.
1. Event bus
When events are triggered from various sources, they first have to reach a “broker” that decides whether to forward them to appropriate targets. Amazon EventBridge is this “broker” with added intelligence: it uses rules to apply the filtering logic. You can configure up to 300 rules on each event bus, and each rule can forward events to a maximum of five targets. For more rules, you can create multiple event buses.
Events are JSON messages containing various attributes that identify the source and carry the data required by the target applications or services for further processing. Event buses receive these events and, based on the rules we configure on them, decide whether to forward each message to a given target.
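For example, an S3 “Object Created” event arrives on the event bus roughly in the following shape (abridged; the account ID, bucket name, and timestamps are placeholders):

```json
{
  "version": "0",
  "id": "17793124-05d4-b198-2fde-7ededc63b103",
  "detail-type": "Object Created",
  "source": "aws.s3",
  "account": "123456789012",
  "time": "2024-01-01T12:00:00Z",
  "region": "us-east-1",
  "resources": ["arn:aws:s3:::example-bucket"],
  "detail": {
    "bucket": { "name": "example-bucket" },
    "object": { "key": "reports/2024-01-01.csv", "size": 1024 },
    "reason": "PutObject"
  }
}
```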
The diagram below shows a high-level overview of the event bus in Amazon EventBridge.
For example, let us assume that you want to process an object as soon as it is uploaded to an S3 bucket. In this case, you use an Amazon EventBridge event bus and configure Amazon S3 as the source.
This event bus will receive all the events emitted by the Amazon S3 service, including the object upload event. You can then define rules on the event bus to listen for this specific action and forward it to a Lambda function, which then processes the object uploaded to S3.
Each AWS Account comes configured with a default event bus, which receives events from all eligible AWS services. You can configure rules on the default event bus, as described in the example above. For custom applications, a separate event bus needs to be created along with the rules. Additionally, Amazon EventBridge supports multiple third-party SaaS applications with individual event buses.
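As a rough sketch, the rule from the S3 example above could also be created programmatically with boto3. The bucket name and Lambda ARN below are placeholders, and the `create_rule_and_target` call requires AWS credentials:

```python
import json

# Hypothetical names: "my-bucket" and the Lambda ARN are placeholders.
RULE_NAME = "s3-object-created-rule"

# Event pattern matching S3 "Object Created" events for one bucket.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["my-bucket"]}},
}

def create_rule_and_target():
    # Requires AWS credentials; shown for illustration only.
    import boto3
    events = boto3.client("events")
    events.put_rule(
        Name=RULE_NAME,
        EventBusName="default",  # S3 events arrive on the default bus
        EventPattern=json.dumps(event_pattern),
        State="ENABLED",
    )
    events.put_targets(
        Rule=RULE_NAME,
        EventBusName="default",
        Targets=[{
            "Id": "process-upload",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:process-upload",
        }],
    )

print(json.dumps(event_pattern))
```

Note that for EventBridge to invoke the Lambda function, the function also needs a resource-based policy allowing the `events.amazonaws.com` principal; the console adds this automatically, but the API does not.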
2. Pipes
Pipes in Amazon EventBridge are used for point-to-point asynchronous communication between specific AWS services. Unlike event buses, pipes are explicitly configured to receive events from a single source. Pipes can also filter and enrich the events with more information before sending them to a defined single target.
The diagram below highlights various components of a pipe.
Currently, supported event sources in pipes are:
- DynamoDB stream
- Kinesis stream
- Amazon MQ message broker
- Amazon MSK topic
- Self-managed Kafka stream
- Amazon SQS Queue
Like event buses, pipes implement filtering functionality based on the event schema attributes and values. However, pipes do not use rules; events are matched against the patterns defined in the filter stage. Only those events that satisfy the filter patterns and conditions are sent for enrichment.
The qualifying events may carry only the bare minimum information required for processing. Since events are JSON messages, they often cannot carry heavy payloads. In such cases, it is desirable to attach the rest of the information related to the event before forwarding it to the target. This step also helps transform the data into the consumable format required by the target interface. At times, the information is encrypted or encoded for security and performance, and transformation helps decode or flatten it.
The enrichment is performed in four ways:
- Lambda functions — We can write logic to decode or add more information by performing database queries using Lambda functions.
- Step Functions — If the enrichment process involves multiple steps or manual approvals, Step Functions state machines can conduct them as state flows.
- API Gateway — There are multiple ways to use API Gateway. It is possible to pass the event data as a payload to API Gateway, which then leverages a dedicated backend where transformation logic is written. The backend could range from a simple service hosted on a dedicated EC2 instance to a service in a Kubernetes cluster or a Lambda function.
- API Connections — API connections are dedicated endpoints configured to communicate with third-party platforms. Like API Gateway, API connections are helpful when sending events for enrichment to an external service.
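Putting the stages together, a pipe with an SQS source, a filter, a Lambda enrichment, and an event bus target could be defined roughly as below. All names and ARNs are placeholders, and the `create_pipe` call requires AWS credentials:

```python
import json

# Placeholder ARNs. The filter keeps only events whose message body
# has orderStatus == "NEW"; everything else is dropped before enrichment.
pipe_params = {
    "Name": "orders-pipe",
    "RoleArn": "arn:aws:iam::123456789012:role/pipes-execution-role",
    "Source": "arn:aws:sqs:us-east-1:123456789012:orders-queue",
    "SourceParameters": {
        "FilterCriteria": {
            "Filters": [
                {"Pattern": json.dumps({"body": {"orderStatus": ["NEW"]}})}
            ]
        }
    },
    # Lambda function that enriches each qualifying event.
    "Enrichment": "arn:aws:lambda:us-east-1:123456789012:function:enrich-order",
    # Enriched events are forwarded to a custom event bus.
    "Target": "arn:aws:events:us-east-1:123456789012:event-bus/orders-bus",
}

def create_pipe():
    import boto3  # requires AWS credentials; illustration only
    boto3.client("pipes").create_pipe(**pipe_params)

print(pipe_params["Name"])
```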
3. Schedulers
As mentioned at the beginning of this post, events are not always triggered by an action. In certain situations, they are also required to be triggered periodically. For such situations, Amazon EventBridge implements a scheduler concept.
The source of these events is a schedule that runs at a given time or frequency. On the other side, the scheduler supports almost all the AWS services as targets.
As seen in the screenshot below, it supports 329 services and specific actions in each service. For example, it supports actions like “invoke” and 35 others for the Lambda AWS service alone.
While selecting the target, you have to define the payload in JSON format, which needs to be sent while triggering the corresponding target. For example, you may want to trigger a periodic backup process or set periodic triggers to keep Lambda functions “warm” to avoid cold starts.
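For instance, an hourly schedule that invokes a Lambda function with a fixed JSON payload might be sketched as follows (ARNs are placeholders; the `create_schedule` call requires AWS credentials and an execution role the scheduler can assume):

```python
import json

# Placeholder ARNs. An hourly schedule invoking a Lambda function with
# a fixed JSON payload, e.g., to trigger a periodic backup.
schedule_params = {
    "Name": "hourly-backup-trigger",
    "ScheduleExpression": "rate(1 hour)",
    "FlexibleTimeWindow": {"Mode": "OFF"},
    "Target": {
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:run-backup",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",
        # Payload delivered to the target on every invocation.
        "Input": json.dumps({"action": "backup", "scope": "daily-tables"}),
    },
}

def create_schedule():
    import boto3  # requires AWS credentials; illustration only
    boto3.client("scheduler").create_schedule(**schedule_params)

print(schedule_params["ScheduleExpression"])
```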
4. Schema Registry
One of Amazon EventBridge’s main differentiating features is its schema registry. Every event is processed as a JSON object, which can get quite complex depending on the number of attributes and the level of nested objects it contains. Writing code in every compute workload to parse and process such events can slow down development efforts.
Schemas are defined templates of events originating from a specific source. EventBridge provides the functionality to export the code bindings for these events in multiple languages. This makes developing the processing and transformation logic based on these bindings easy. For example, for such complex event schemas, if the development team can readily get the bindings for Go or Python programming language, they need not dive too deep to pick the relevant attributes for processing each event.
Besides third-party support for schemas, Amazon EventBridge also provides a feature in event buses to discover schemas from various sources and save them in the registry.
5. Replays
In an event-driven architecture, where communication between services happens asynchronously, missing out on events can cause escalations if not disasters. During peak times, high volumes of events are created via event streams. It is not possible to track and validate each event.
Amazon EventBridge provides the ability to archive events and replay chosen events manually. This is great for debugging, or for retrying an event that failed the first time. One of the advantages of EDA is that replaying events at our convenience produces the same output without compromising the system’s stability, because the system’s behavior is determined by the events it receives.
Event buses allow you to create archives of events and specify the retention period. During this time, all the events are archived and available for manual re-processing/replaying.
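A minimal sketch of archiving and replaying with boto3 might look like this (ARNs are placeholders; both calls require AWS credentials):

```python
from datetime import datetime, timezone

# Placeholder ARNs: the event bus being archived and the resulting archive.
BUS_ARN = "arn:aws:events:us-east-1:123456789012:event-bus/default"
ARCHIVE_ARN = "arn:aws:events:us-east-1:123456789012:archive/orders-archive"

def archive_and_replay():
    import boto3  # requires AWS credentials; illustration only
    events = boto3.client("events")
    # Archive events flowing through the bus, retained for 30 days.
    events.create_archive(
        ArchiveName="orders-archive",
        EventSourceArn=BUS_ARN,
        RetentionDays=30,  # 0 would retain events indefinitely
    )
    # Later: replay one day's worth of archived events back onto the bus.
    events.start_replay(
        ReplayName="orders-replay-2024-01-01",
        EventSourceArn=ARCHIVE_ARN,
        EventStartTime=datetime(2024, 1, 1, tzinfo=timezone.utc),
        EventEndTime=datetime(2024, 1, 2, tzinfo=timezone.utc),
        Destination={"Arn": BUS_ARN},
    )

print(ARCHIVE_ARN)
```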
Implementing a solution using the concepts we discussed is easy and flexible in Amazon EventBridge. In the example below, we will configure an event bus rule to capture all the events generated when files are uploaded to an S3 bucket. For demonstration purposes, we will log these events in a CloudWatch log group.
Step 1: Create an S3 bucket and configure event notifications to use Amazon EventBridge
First, let’s create an S3 bucket with default values. To forward events related to this bucket, you need to enable Event Notifications and forwarding to Amazon EventBridge.
As seen in the screenshot below, we have created an S3 bucket named “buckettobewatched” and, under Event Types, checked only “All object create events” (s3:ObjectCreated) since we are only interested in object creation events.
Click on save, and enable the Amazon EventBridge forwarding from the next section.
Step 2: Create an event bus rule on the default event bus
Events originating from all AWS services are forwarded to the default event bus. Given the limits on the number of rules per event bus, it is possible to create more event buses and forward events from the default event bus to custom event buses for further processing.
Navigate to Amazon EventBridge > Event buses > Rules. Select the event bus as “default” from the dropdown. All the rules related to the default event bus would be displayed in the Rules section.
Click on the “Create rule” button to add our custom rules. In the first step, give this rule a name and leave everything else as default, as shown in the screenshot below.
In the next step, to build an event pattern to capture, select the event source as “AWS events” since we are working with AWS’s S3 service.
We can optionally use a sample event to understand the event pattern. In the screenshot below, we have selected S3’s “Object Created” event to understand the JSON structure of the event. This information is not needed for the setup, but it helps when setting the event pattern in the next step.
In the event pattern section, you can systematically select the exact event to create a matching pattern.
In the screenshot below, we have selected the specific event under AWS Services > S3 > Amazon S3 Event Notification > Object Created. This generates the corresponding event pattern, shown on the right. Depending on your other requirements, you can add more attributes to the pattern and check whether your modifications are valid by clicking the “Test pattern” button.
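To build intuition for how a pattern matches an event, here is a deliberately simplified re-implementation of EventBridge’s exact-match semantics. The real service also supports prefix, numeric, exists, and other comparators, and the “Test pattern” button (backed by the TestEventPattern API) gives the authoritative answer:

```python
# Simplified exact-match semantics: every key in the pattern must exist
# in the event; a pattern leaf is a list of allowed values; nested dicts
# are matched recursively. Keys in the event but not in the pattern are
# ignored, just like in EventBridge.

def matches(pattern: dict, event: dict) -> bool:
    """Return True if the event satisfies the pattern's exact-match rules."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        value = event[key]
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not matches(expected, value):
                return False
        elif value not in expected:
            return False
    return True

pattern = {"source": ["aws.s3"], "detail-type": ["Object Created"]}
event = {"source": "aws.s3", "detail-type": "Object Created",
         "detail": {"bucket": {"name": "buckettobewatched"}}}
print(matches(pattern, event))  # True
```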
In the next step, select the target. Here, you can choose to send all the events captured and qualified by this Rule to be forwarded to another event bus within this AWS account or external accounts. You can also select another AWS service to forward these events to.
For this example, we want to log these events in a CloudWatch log group, so we select “AWS Service.”
In the example above, we have chosen to create a new CloudWatch log group named “/aws/events/bucketWatch”. You may also want to set some tags before creating the rule. We have kept everything else as default.
Step 3: Test the setup
We have configured the default event bus to capture all S3 “Object Created” events, and the S3 bucket in question to forward its events to Amazon EventBridge.
Upload a few files to the S3 bucket as an event example and monitor the event bus rule. You should see some activity as below.
Navigate to the CloudWatch log group we created in Step 2 and observe the logs. The screenshot below shows how “Object Created” from the aws.s3 source is logged here.
We configured an S3 bucket as the source and a CloudWatch log group as the target; you can similarly configure other AWS services to implement more event-driven functionality.
So far, we have covered the main concepts used in an architecture involving Amazon EventBridge. The sections below explain a few practical use cases.
E-commerce platform wants to handle bundled orders differently
An e-commerce platform receives orders from its customers via an online platform. They classify orders based on the number of items requested in each order. A simple order consists of a single item to be delivered, and this follows an established process. However, operational challenges can arise when processing orders containing multiple items to be delivered.
When orders are placed, they are persisted in a DynamoDB table. You can configure an Amazon EventBridge pipe with the DynamoDB table’s stream as a source. Pipes allow you to filter out invalid orders; however, that is not the main challenge.
All the valid orders should now be classified. Using the enrichment component of pipes, you can write a Lambda function that marks each incoming order as “simple” or “complex” based on the number of items requested. The marking is done by adding an attribute to the JSON event object; this is the enrichment.
Remember that at this point you have only enriched the orders, not segregated them. To segregate them, create an event bus and select it as the pipe’s target so that all the enriched events are forwarded to the event bus.
Create a couple of rules on the event bus based on the marking done in the pipe’s enrichment step. One rule will forward the events with “simple” markings to a target compute (Lambda function, Kubernetes workload, or EC2 instance) to handle the fulfillment. The second rule will forward the “complex” events to an elaborate workflow setup for group fulfillment, which may comprise multiple steps.
In this case, we propose to address the order segregation challenge using a combination of Amazon EventBridge pipes and event buses.
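A minimal sketch of the enrichment Lambda for this pipe follows. It assumes each event is already a plain JSON object with an `items` list (records from a real DynamoDB stream would first need to be unmarshalled from DynamoDB’s attribute-value format). Pipes invoke the enrichment function with a batch of events and expect the enriched batch back:

```python
# Enrichment Lambda sketch: classify each order as "simple" or "complex"
# by item count and return the enriched batch, as pipes expect.

def handler(events, context=None):
    enriched = []
    for event in events:
        item_count = len(event.get("items", []))
        event["orderType"] = "simple" if item_count <= 1 else "complex"
        enriched.append(event)
    return enriched

batch = [
    {"orderId": "A1", "items": ["book"]},
    {"orderId": "A2", "items": ["book", "pen", "lamp"]},
]
print(handler(batch))
```

Event bus rules downstream can then match on the added `orderType` attribute to route each order to the right fulfillment target.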
Social media platform wants to automate actions based on profanity moderation
A moderation team is responsible for implementing and managing the profanity policy for content posted by users on their platform. Currently, they have developed an application that helps detect and flag content that violates the policy. Team members manually address such violations by tracking user IDs and suspending accounts for repeated violations.
Assume the violation detector application built by the team has perfect accuracy. Owing to this, the team wants to automate the actions otherwise performed by teammates, freeing their time for new initiatives.
To solve this, you can start by creating a scheduler that triggers the already built application at a predefined frequency, let’s say hourly. Modify the application to collect the new data created in the past hour and generate results in the form of events. The events contain information about the profanity flag, the user ID of the user who posted the content, and the content itself. The application then wraps these events as messages and pushes them to the SQS queue.
Configure an Amazon EventBridge pipe to receive these events from SQS. The enrichment step of the pipe queries the user’s history for previous policy breaches. If the count has crossed a certain threshold, it enriches the event being processed with a positive suspension flag. The pipe then forwards the event to a target event bus or a Lambda function to suspend the user account and/or notify them.
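The enrichment logic for this pipe could be sketched as below; `get_violation_count` is a hypothetical stand-in for a real query against the user’s violation history (e.g., a DynamoDB lookup), and the threshold is illustrative:

```python
# Enrichment sketch for the moderation pipe: flag a user for suspension
# when a flagged post pushes them past the violation threshold.

SUSPENSION_THRESHOLD = 3

def get_violation_count(user_id: str) -> int:
    # Placeholder: in practice, query the user's violation history.
    history = {"user-42": 4, "user-7": 1}
    return history.get(user_id, 0)

def enrich(event: dict) -> dict:
    count = get_violation_count(event["userId"])
    event["suspend"] = event["profanityFlag"] and count >= SUSPENSION_THRESHOLD
    return event

print(enrich({"userId": "user-42", "profanityFlag": True}))
```

The downstream target then only needs to act on the `suspend` attribute instead of re-deriving the decision.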
Note: Please remember that the example use cases above are only meant to provide perspective for better understanding. In the real world, you will come across several factors that influence the design, and you may need to address multiple challenges with solutions along similar lines.
In the meantime, go ahead and learn how a platform like Spacelift can help you and your organization fully manage cloud resources within minutes.
Spacelift is a CI/CD platform for infrastructure-as-code that supports tools like Terraform, Pulumi, Kubernetes, and more. For example, it enables policy-as-code, which lets you define policies and rules that govern your infrastructure automatically. You can even invite your security and compliance teams to collaborate on and approve certain workflows and policies for parts that require a more manual approach.
EventBridge, SNS (Simple Notification Service), and SQS (Simple Queue Service) are messaging services provided by AWS. At a broad level, each of them seems to achieve similar objectives. However, they differ on multiple fronts (infrastructure, usability, scale, and synchronicity) that define their purpose and determine which one to use.
SQS implements a point-to-point messaging system. Events are queued in SQS queues, which are polled by the consumers. Choosing SQS makes sense in an event-driven architecture when the two sides have different paces of processing events in a sequence of steps. The message delivery sequence is maintained.
SNS implements a highly scalable publisher-subscriber (pub/sub) model. Unlike SQS, SNS maintains topics that consumers (subscribers) subscribe to. When SNS receives an event message on a topic, it fans it out to all of that topic’s subscribers, which are then notified to take the next steps. SNS does not maintain the sequence of events.
Amazon EventBridge, on the other hand, offers a richer experience. As discussed in this post, we can “compose” the event-driven part of a larger picture using concepts like event buses, pipes, and schedulers. Further, it is also possible to handle some of the filtering, transforming, and routing logic on the go, which would otherwise have to be done with Lambda functions. It is not as scalable as SNS, but it offers intelligence that streamlines event processing.
CloudWatch Events and EventBridge are the same products at their core, and they share some history. Before Amazon introduced EventBridge as a full-service offering, it was part of CloudWatch Events.
In the past, scheduling a certain event/trigger was possible using CloudWatch Events. Over the past few years, however, EventBridge has evolved into a full-fledged event management service with many rich features, while CloudWatch’s focus is on monitoring and log management. These are not alternatives to each other.
There are some alternatives to Amazon EventBridge that offer similar functionality in terms of event-driven architectures and integration capabilities. Here are a few notable options:
- Azure Event Grid — Part of the Microsoft Azure platform, Azure Event Grid allows easy event routing based on event source, type, and subject. It integrates seamlessly with other Azure services.
- Google Eventarc — Eventarc is Google Cloud’s event delivery service, primarily focused on integrating with other Google products.
- Confluent Kafka — Confluent Kafka is a managed Pub/Sub event streaming platform based on Apache Kafka. It is designed for high-volume data stream processing and real-time analytics across platforms.
- TriggerMesh — TriggerMesh is an open-source, multi-cloud alternative to EventBridge. It provides similar functionality but works across AWS, Google Cloud, and Azure, offering connectors for various cloud services and SaaS applications.
Event-driven architectures are highly flexible and scalable, which is why the tech industry has widely adopted this pattern in the last decade. Determining and managing an event thus becomes crucial, as the loss of a single event can result in undesirable consequences.
Amazon EventBridge is an advanced and intelligent broker service that handles high volumes of events by filtering, transforming, and forwarding them to the appropriate targets. It uses concepts like event buses, pipes, schedulers, and the schema registry to make this process dynamic. It also helps reduce development time through features like the schema registry’s code bindings. With its partner ecosystem, it is even easier to integrate with third-party services quickly.
Remember to check out Spacelift for free by creating a trial account or booking a demo with one of our engineers.
The Most Flexible CI/CD Automation Tool
Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.