Data Engineer - German speaker
Role Overview
We are looking for an experienced Senior Data Engineer with a strong focus on Databricks to design and implement robust data pipelines and modern data architectures in a cloud environment (Azure). The ideal candidate combines deep technical skills with a proactive, hands-on mindset and the ability to work independently while collaborating effectively with cross-functional teams.
Responsibilities
- Design and build reliable data pipelines using Databricks
- Model clean, reusable data products for analytics and machine learning
- Collaborate with data scientists, analysts, and platform teams
- Ensure data quality, lineage, and governance across the platform
Must-Have Skills
- Proven expertise in Databricks and Apache Spark (PySpark and SQL)
- Deep knowledge of Delta Lake, Unity Catalog, and lakehouse architectures
- Strong background in data modeling (dimensional, relational, event-based)
- Experience designing and implementing scalable batch and streaming pipelines
- Solid understanding of data architecture principles and best practices
- Familiarity with Azure cloud services (ADLS, ADF, Event Hub, Key Vault, etc.)
- Proficient in Python and SQL for data processing and transformation
- Basic experience with Azure DevOps, Git, and Docker
- Familiarity with Jira and Confluence
Nice-to-Have Skills
- Understanding of CI/CD concepts and Infrastructure as Code (Terraform)
- Experience with streaming technologies
- Hands-on knowledge of data quality tools or frameworks
Soft Skills
- Ability to work autonomously, take ownership, and act proactively
- Solution-oriented, structured, and pragmatic approach to challenges
- Strong communication skills with both technical and business stakeholders
Location & Setup
- Remote or hybrid (depending on client/project)
- Both German and English are required
- Consulting/project experience is welcome