Build data products faster.

Twirl is a platform for data teams looking to ship value, not tooling. It lets you deploy data pipelines from day one and manages all infrastructure and scaling. Twirl is proudly code-first and unifies analytics, machine learning and everything in between.


[Code sample: manifest.py]

Smooth local development.

Best practices built in. Abstractions like tables, streams, jobs and schemas make incremental updates and schema evolution easy. Process data using any language you want, sync data between data stores and mix batch and streaming, all in one platform.
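To make that concrete, here is a minimal sketch in plain Python and pandas rather than Twirl's actual API: the schema constant, the transform function and the watermark argument are all illustrative assumptions. It shows the shape of a code-first job that declares its output schema and only touches rows newer than the last run, which is what incremental updates and schema evolution rely on.

```python
# Illustrative sketch only (plain Python + pandas, not Twirl's API): a job that
# declares its output schema and processes data incrementally from a watermark.
import pandas as pd

# Declared output schema: column name -> dtype. Making this explicit is what
# lets a platform notice schema changes instead of breaking silently downstream.
OUTPUT_SCHEMA = {"user_id": "int64", "session_count": "int64"}


def transform(events: pd.DataFrame, last_processed: pd.Timestamp) -> pd.DataFrame:
    """Aggregate raw events into per-user session counts, incrementally."""
    # Incremental update: only process rows that arrived after the last run.
    new_events = events[events["event_time"] > last_processed]
    out = (
        new_events.groupby("user_id", as_index=False)
        .agg(session_count=("session_id", "nunique"))
    )
    return out.astype(OUTPUT_SCHEMA)
```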

[Code sample: twirl]

Easy local testing.

No more unexpected failures in production. With Twirl, you can run and test dependent jobs together to catch errors early and iterate faster. Unit test your code and continuously monitor your assumptions about your data.
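As a sketch of what "fixed input, fixed output" testing looks like (plain pytest and pandas here, nothing Twirl-specific; the transformation and test names are assumptions), a unit test pins a small input dataset and asserts on the exact result:

```python
# Illustrative unit test: run a transformation on a fixed input dataset and
# compare against a fixed expected output (pytest + pandas, not Twirl tooling).
import pandas as pd
import pandas.testing as pdt


def sessions_per_user(events: pd.DataFrame) -> pd.DataFrame:
    """Example transformation under test: distinct sessions per user."""
    return (
        events.groupby("user_id", as_index=False)
        .agg(session_count=("session_id", "nunique"))
    )


def test_sessions_per_user():
    events = pd.DataFrame({"user_id": [1, 1, 2], "session_id": ["a", "a", "b"]})
    expected = pd.DataFrame({"user_id": [1, 2], "session_count": [1, 1]})
    pdt.assert_frame_equal(sessions_per_user(events), expected)
```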

[Image: dependency graph]

Painless deployments.

Fully serverless infrastructure. Don't worry about scaling, containers, or CI/CD. Once you merge to main, all jobs run continuously in the cloud on whatever hardware you need. Use the Twirl web app to keep track of what’s running and when.

Don't take our word for it

Here's what our customers say.

Twirl worked out of the box. It powers our key product features that require data aggregation to provide users with insights about their speaking profiles. That means more than half of all English teachers in Mongolia rely on this data!

Once we decided to go with Twirl, the team was onboarded in less than a day and we could start spinning up pipelines immediately. I would highly recommend Twirl to other startups and scaleups that need a great data platform without wasting time.

Thúy N Trần
CTO & Co-founder, Astrid

Before Twirl, developers queried databases with inconsistent results. Now, Twirl handles all data transformation and creates universal definitions of metrics such as churn rate and the number of bookings.

It was an easy choice and shortened time to value. Instead of hiring multiple engineers to build the same thing, we got started instantly. Twirl is easy to use, and you can stay on one platform.

Stéphanie Cabrera
Data Science Lead, Bokadirekt

Twirl makes us more productive. It helps us collect and process financial data from our customers, to understand their business and offer them loans. I can focus on writing code to improve our product, as Twirl completely removed the need to set up any data infrastructure.

It was super easy to get started and I had my jobs up and running on the first day. The platform is very intuitive, concepts are easy to understand and everything is neatly designed.

William Keith
Head of Data, Fejron

By engineers, for engineers.

Best practices built in.

We've taken our learnings from building data platforms at startups, scale-ups and enterprises and turned them into the framework we always wished existed.

Data contracts. Specify schemas and your pipeline will break loudly if a column name or type changes (a minimal sketch follows after this list).
Containers. All jobs run in separate containers. Use pre-built defaults, or bring your own.
Testing. Test entire data jobs by providing fixed input and output datasets.
Monitoring. When a pipeline breaks or your data format changes, you will immediately find out.
Mix batch and streaming. Process some data in batches and some as streams, all in the same system.
Incremental updates. No need to rewrite your code: just specify that you want to process only new rows.
Plain code. No custom DSL, magic Python annotations or weird macros necessary. No steep learning curve and no lock-in.
Mix languages. SQL, Python, R or whatever you prefer. Start in one, continue in another and then switch to a third.
Unstructured data. Work with tables, streams or files of unstructured data.
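As an example of the data-contracts item above, a contract can be as simple as an explicit schema plus a check that fails loudly on any mismatch. The sketch below uses plain Python and pandas; the contract and column names are made up for illustration and are not Twirl's actual API.

```python
# Illustrative data-contract check (plain Python + pandas, not Twirl's API):
# compare actual columns and dtypes against a declared schema and fail loudly.
import pandas as pd

CONTRACT = {"booking_id": "int64", "amount": "float64", "booked_at": "datetime64[ns]"}


def enforce_contract(df: pd.DataFrame, contract: dict[str, str]) -> None:
    actual = {col: str(dtype) for col, dtype in df.dtypes.items()}
    missing = set(contract) - set(actual)
    extra = set(actual) - set(contract)
    retyped = {
        col: (contract[col], actual[col])
        for col in contract
        if col in actual and actual[col] != contract[col]
    }
    if missing or extra or retyped:
        raise ValueError(
            f"Contract violated: missing={missing}, extra={extra}, retyped={retyped}"
        )


bookings = pd.DataFrame(
    {"booking_id": [1], "amount": [99.0], "booked_at": [pd.Timestamp("2024-01-01")]}
)
enforce_contract(bookings, CONTRACT)  # passes; renaming or retyping a column raises
```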

See Twirl in action

Developing data pipelines is easy with the right tooling.