About the Job: Data Engineer
As a Data Engineer on the Data Experiences and Automation team, you will help shape how data is collected, consumed, and used to improve internal and external product experiences. You'll work closely with technical leadership to expand data consumption and reporting capabilities, while building solutions that improve the development lifecycle for data engineers and insight analysts across the organization.
This role will also support advancements and fast iteration in Machine Learning and GenAI pipelines. You will bring experience working with globally distributed systems and partner with software engineers to build efficient, scalable, and reliable data solutions. You will report to the Engineering Manager of the team.
Responsibilities:
Design, build, and optimize scalable data pipelines that provide insight into the broader data foundation ecosystem.
Design and implement data models using industry best practices, capturing a complete view of internal customer experiences while ensuring accuracy, scalability, and long-term usability.
Architect and implement robust, maintainable, and high-performance data solutions.
Automate workflows to reduce manual intervention and improve data-processing efficiency, including automation across content, growth, and sports coverage areas.
Optimize query performance and resolve pipeline bottlenecks to improve data accessibility.
Evaluate and adopt new tools, frameworks, and methodologies to advance data engineering capabilities.
Support cost optimization by ensuring scalable and efficient data solutions.
Ensure data quality, governance, and compliance with regulatory standards such as GDPR and CCPA.
Contribute to the broader data engineering discipline by helping shape infrastructure, engineering craft standards, tooling, and organizational best practices.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
5+ years of experience in data engineering or a related field.
Strong expertise in big-data technologies, including languages such as Python, Go, Scala, and SQL, frameworks such as Spark, and tools like Airflow and dbt.
Proficiency with cloud infrastructure (particularly AWS) and Databricks.
Expertise in designing efficient, scalable data models for large datasets, working across multiple teams and in collaboration with insights and data stakeholders.
Demonstrated ability to work cross-functionally with engineering, analytics, and product teams.
Proven experience mentoring and guiding other engineers.
Desired Qualifications:
Experience with MLOps, GenAI pipelines, and related infrastructure.