Workshop: First steps to running dlt in production by The Scalable Way

From theory to pipeline, it’s time to build.

A practical workshop for data teams ready to move from concepts to code. Learn how to build scalable, secure, and efficient data pipelines using dlt, with live coding and expert guidance. The session also covers the dlt fundamentals you need to take your pipelines from prototype to production.

Location: YouTube live stream

Date: July 24th, 2025

Time: 16:00 (CEST, Berlin)

Who is this workshop for?

Data Platform Engineers 

Data Engineers 

Data Architects & Platform Owners

Data Analysts and BI Analysts

About the workshop:

From theory to pipeline, it’s time to build.

In this hands-on, code-first workshop, you’ll learn how to create production-ready data pipelines using dlt, an open-source Python framework for modern data ingestion.

Through guided exercises and live coding, you’ll learn how to:

1. Use dlt’s core building blocks: sources, resources, and pipelines

2. Load data from different sources such as Python objects, REST APIs, CSV files, and real databases

3. Manage configuration and secrets using TOML files and environment variables

4. Implement incremental loading and control data flow with write dispositions

5. Understand and debug pipeline state with dlt’s system tables

This session is ideal for data professionals and engineers who want to build robust, maintainable data workflows with Python.

About TSW: 

The Scalable Way is a team of eager data experts who bring the best software engineering practices to data platforms to achieve higher scalability, cost-effectiveness, usability, and reliability.

With our code-first approach, we help teams build scalable, automated, and reliable data platforms that enable smooth collaboration, reduce manual work, and improve security and governance.

About the speaker: 

Michał Zawadzki specializes in building production-grade data pipelines and platforms that are reliable, scalable, and easy to maintain. With a strong focus on automation, modular architecture, and configuration management, he helps data teams streamline ingestion workflows, reduce errors, and simplify complex infrastructure. His expertise in pipeline orchestration and incremental data loading empowers teams to deliver faster, more efficient analytics solutions.
