Create a Pipeline
A guide on how to filter, enrich, transform, retry and deliver events in real-time to any cloud app, database, warehouse, server or existing streaming service

Once you start ingesting events from your data sources, you'll want to deliver those events to downstream endpoints. However, orchestrating data relationships in a many-to-many ecosystem (many sources of events and many destinations for those events) is extremely difficult at scale. This is where Event shines.
With Event, you can use our intelligent orchestration layer to filter, enrich, transform, retry and deliver events anywhere. Reliably deliver thousands, millions or billions of events in real-time.
Create a new Pipeline via the + New button in the Pipelines tab.

Give your Pipeline a human-readable name to make it easy to find or query later.

Next, set the events you'd like to use to trigger the Pipeline. Event gives you fine-grained control here: you can select one or many events from the Source as a trigger.
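For example, a trigger might listen for one or several events from a Stripe Source. The shape below is purely illustrative, since triggers are configured in the dashboard rather than in code:

// Hypothetical trigger selection: fire the Pipeline on one or
// many events from the Source. Event names follow Stripe's catalog.
const trigger = {
  source: "stripe",
  events: ["customer.created", "customer.updated"]
};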

Set where you want to deliver your data. In this section, you'll select both the type of Destination you want to use and the API action for that Destination (e.g., Create, Update, etc.).
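Conceptually, pairing a Destination type with an API action looks like the sketch below; the names are illustrative and the selection itself happens in the dashboard:

// Hypothetical Destination selection: the MongoDB Destination
// paired with its Create action, so each delivered event
// inserts a new document.
const destination = {
  type: "MongoDB",
  action: "Create"
};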

Here's where the magic starts to happen.
Using Extractors, you can pull data from other parts of your stack via an HTTP request and inject it into the payload of the trigger event. This lets you enrich any event that travels through a Pipeline on its way to the Destination of your choice.
To enrich your data, follow these steps:
Step 1: Click + Add Extractor

Step 2: Name your Extractor

Step 3: Select the Extractor type
You can select from the following Extractor types:
- HTTP: use an HTTP request to source the data you'd like to inject into the in-transit event payload

Event currently supports HTTP-based enrichment only. We plan to release support for more enrichment types in the near future, including direct database lookups and extraction from past events.
Step 4: Configure the HTTP Request
You can configure the call by entering the URL of the HTTP request and its method, as well as any logic you'd like to append to the call.
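For illustration, the request configuration for an HTTP Extractor might resemble the following. The field names and the lookup endpoint are hypothetical, not Event's exact schema:

// Hypothetical HTTP Extractor configuration. Field names and the
// lookup URL are illustrative; use the values shown in the dashboard.
const extractor = {
  name: "crm-profile",
  type: "HTTP",
  request: {
    url: "https://api.example.com/profiles/lookup", // data source to enrich from
    method: "GET",
    headers: { Authorization: "Bearer <API_KEY>" }  // placeholder credential
  }
};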

Step 5: Apply a Retry Policy
Event lets you apply an optional retry policy to each Extractor individually. You can turn retries on or off and set both the retry's Start to Close Timeout and its Maximum Attempts. This gives you fine-grained control over how to handle failures when enriching your in-flight data.
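As a minimal sketch, a per-Extractor retry policy could look like the object below; the field names are assumptions for illustration, not Event's exact schema:

// Hypothetical retry policy for a single Extractor.
const retryPolicy = {
  enabled: true,
  startToCloseTimeout: "30s", // give up if enrichment hasn't finished in 30 seconds
  maximumAttempts: 5          // retry the HTTP call up to 5 times
};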
Developers are able to add multiple Extractors when enriching data.
Here's where the magic continues.
Transformations are commonly used to map data seamlessly from a Source to a Destination. Using Event Transformations, you're able to shape any event payload using a simple function.

Here's an example Transformation function that transforms Stripe's customer.created event for a MongoDB collection:
function transform(event, context) {
  // Parse the raw Stripe webhook body.
  const body = JSON.parse(event.body);
  // Pull the fields we care about off the customer object.
  const { name, email } = body.data.object;
  // Return a document shaped for the MongoDB Destination.
  return {
    collection: "stripe-customers",
    document: {
      name,
      email
    }
  };
}
The transform function includes the following parameters:

Name | Required | Description
---|---|---
event | | Access the in-flight event body
context | | Access data from your Extractors
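Extractor results are surfaced through the context parameter. As a sketch, assuming an HTTP Extractor named crm-profile whose response is exposed on context keyed by the Extractor's name, a Transformation could merge that enrichment into the outgoing document:

// Sketch: merging Extractor output into the outgoing document.
// The "crm-profile" Extractor and the keyed-by-name shape of
// `context` are assumptions for illustration.
function transform(event, context) {
  const body = JSON.parse(event.body);
  const { name, email } = body.data.object;
  const profile = context["crm-profile"]; // enrichment fetched by the Extractor
  return {
    collection: "stripe-customers",
    document: {
      name,
      email,
      accountTier: profile.tier // field injected from the Extractor response
    }
  };
}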
Finally, Event lets you elegantly handle the downstream API failures that inevitably occur using intelligent retries. Retries are applied according to your policy and are executed with exponential backoff by default, as sketched after this list.
- Start to Close Timeout: the window of time during which Event will keep retrying if delivery fails
- Maximum Attempts: the maximum number of retry attempts within the selected window
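To make the backoff behavior concrete, here's a sketch of how exponentially spaced attempts might play out; the 1-second base delay and 2x multiplier are assumptions, not Event's documented defaults:

// Sketch: exponential backoff with an assumed 1s base delay and
// 2x multiplier. With maximumAttempts = 5, attempts are spaced
// roughly 1s, 2s, 4s, then 8s apart, and all of them must land
// inside the Start to Close Timeout.
const baseDelayMs = 1000;
const multiplier = 2;
const maximumAttempts = 5;

for (let attempt = 1; attempt < maximumAttempts; attempt++) {
  const delayMs = baseDelayMs * multiplier ** (attempt - 1);
  console.log(`after attempt ${attempt}, wait ${delayMs}ms before retrying`);
}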

Once you finalize your Pipeline, it will be live. This means it will be triggered whenever the event you set as the trigger is ingested from your active streams.
