Senior Data Engineer
Job Description:
We are seeking an experienced and highly skilled Data Engineer to join our team to design, develop, and optimize the Enterprise Data Warehouse (EDW) using Microsoft Fabric. In this role, you will work with cross-functional teams to build and maintain robust data pipelines, ensuring that high-quality data is available for analytics, reporting, and business intelligence across the organization.
The ideal candidate will have experience in data architecture, cloud technologies, and Microsoft Fabric, alongside strong skills in data modeling, ETL processes, and building scalable solutions to support business insights.
Key Responsibilities:
- Design & Build Data Infrastructure: Develop and maintain the Enterprise Data Warehouse (EDW) architecture using Microsoft Fabric, including data pipelines, storage solutions, and data models. Ensure that data is structured, accurate, and accessible for business use.
- ETL Development & Optimization: Design, implement, and optimize ETL (Extract, Transform, Load) processes to integrate data from various sources into the EDW. Ensure smooth data flow across systems with high performance and reliability.
- Data Modeling & Schema Design: Create and maintain relational and dimensional data models. Work with stakeholders to ensure that the data structure aligns with business requirements and supports easy querying and reporting.
- Cloud & Data Platform Management: Manage and optimize the cloud-based data storage and processing infrastructure within Microsoft Fabric and related Azure services, including Azure Synapse Analytics and Azure Data Lake Storage.
- Data Quality & Governance: Implement data quality checks and monitoring across data pipelines to ensure data integrity, consistency, and compliance. Collaborate with Data Governance teams to enforce best practices and policies.
- Collaboration with Cross-Functional Teams: Work closely with cross-functional teams to understand data needs and ensure that data is available and consumable for analytical purposes.
- Performance Tuning & Scalability: Continuously monitor the performance of data systems and workflows, improving processing speed, cost efficiency, and scalability as needed.
- Documentation & Best Practices: Maintain clear, comprehensive documentation for data pipelines, architecture, and data models. Contribute to defining and upholding data engineering best practices within the organization.
Qualifications:
Required:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field (or equivalent work experience).
- 5+ years of experience in data engineering, with a focus on building and maintaining data infrastructure.
- Strong experience with Microsoft Fabric and related Azure services such as Azure Synapse Analytics, Azure Data Lake Storage, and Azure Data Factory.
- Proficiency in data pipeline tools and technologies such as SQL, Python, ETL frameworks, and Azure Data Factory.
- Hands-on experience with data modeling (relational and dimensional, including star/snowflake schemas) and designing effective data storage solutions.
- Experience with cloud-based data architecture and services, preferably within Microsoft Azure.
- Familiarity with data quality frameworks, monitoring, and error handling in data pipelines.
- Knowledge of data warehousing concepts and architecture for enterprise-level solutions.
- Excellent problem-solving skills and attention to detail.