Senior Data Engineer (Hybrid)
Job Description:
Join a top Fortune 500 project in Canada as a Senior Data Engineer. Contribute to innovative solutions and technology advancements, and make an impact with a dynamic team. This hybrid role is based in Toronto, Ontario, Canada.
Responsibilities
- Product-Driven Development: Apply a product-focused mindset to understand business needs and design scalable, adaptable systems that evolve with changing requirements.
- Problem Solving & Technical Design: Deconstruct complex challenges, document technical solutions, and plan iterative improvements for fast, impactful results.
- Data Infrastructure & Processing: Build and scale robust data infrastructure to handle batch and real-time processing of billions of records efficiently.
- Automation & Cloud Infrastructure: Automate cloud infrastructure, services, and observability to enhance system efficiency and reliability.
- CI/CD & Testing: Develop CI/CD pipelines and integrate automated testing to ensure smooth, reliable deployments.
- Cross-Functional Collaboration: Work closely with data engineers, data scientists, product managers, and other stakeholders to understand requirements and promote best practices.
- Growth Mindset & Insights: Identify business challenges and opportunities, using data analysis and mining to provide strategic and tactical recommendations.
- Analytics & Reporting: Support analytics initiatives by delivering insights into product usage, campaign performance, funnel metrics, segmentation, conversion, and revenue growth.
- Ad-Hoc Analysis & Dashboarding: Conduct ad-hoc analyses, manage long-term projects, and create reports and dashboards that surface new insights and track progress on key initiatives.
- Stakeholder Engagement: Partner with business stakeholders to understand analytical needs, define key metrics, and maintain a data-driven approach to problem-solving.
- Cross-Team Partnership: Collaborate with cross-functional teams to gather business requirements and deliver tailored data solutions.
- Data Storytelling & Presentation: Deliver impactful presentations that translate complex data into clear, actionable insights for diverse audiences.
Minimum Qualifications
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience.
- Industry Experience: 5-7 years of industry experience in big data systems, data processing, and SQL databases.
- Spark & PySpark Expertise: 3 years of experience with Spark DataFrames, Spark SQL, and PySpark for large-scale data processing.
- Programming Skills: 3 years of hands-on experience in writing modular, maintainable code, preferably in Python and SQL.
- SQL & Data Modeling: Strong proficiency in SQL, dimensional modeling, and working with analytical big data warehouses like Hive and Snowflake.
- ETL Tools: Experience with ETL workflow management tools such as Airflow.
- Business Intelligence (BI) Tools: 2+ years of experience in building reports and dashboards using BI tools like Looker.
- Version Control & CI/CD: Proficiency with version control and CI/CD tools such as Git and Jenkins.
- Data Analysis Tools: Experience working with and analyzing data using notebook solutions such as Jupyter, EMR Notebooks, and Apache Zeppelin.
APPLY NOW!
NearSource Technologies values diversity and is committed to equal opportunity. All qualified applicants will be considered regardless of their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as protected veterans.
Required Skills:
Spark, Python, SQL, Hive, Snowflake, Big Data, Data Processing, Data Modeling, Data Analysis, Data Mining, Business Intelligence, Analytics, CI/CD Pipelines, Version Control, Git, Jenkins, Infrastructure Automation, Problem Solving, Stakeholder Engagement, Business Requirements, Testing, Programming, Databases, Presentations, Computer Science