Data Engineer
Location: Ortigas
Work Setup: Onsite (as required)
Role Overview
We are looking for a Data Engineer to join our growing technology team and play a key role in building and optimizing our data infrastructure. The role will focus on developing data pipelines, integrating master data into AWS environments, and supporting ETL/ELT processes to generate accurate reports and analytics for business stakeholders.
This role is highly collaborative, working closely with data leads and cross-functional teams to ensure that data is reliable, consistent, and usable for decision-making.
Key Responsibilities
- Design, build, and maintain data pipelines to ingest SunCBS master data into AWS Data Hive.
- Develop and optimize ETL/ELT workflows leveraging SQL, Python, and AWS-native tools (S3, Redshift, EC2, Lambda, Glue, etc.).
- Collaborate with CSB Data Leads to understand master data structures, mappings, and reporting requirements.
- Build, validate, and test queries for MVP and Feed Reports to ensure accuracy and performance.
- Maintain clear documentation of data processes, data dictionaries, and system integrations.
- Monitor and troubleshoot data pipelines, ensuring high availability and reliability.
- Implement best practices for data governance, quality, and security in alignment with organizational standards.
- Provide support for ad hoc data analysis requests and collaborate with BI/Analytics teams.
Qualifications
- Bachelor's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- AWS certifications (e.g., AWS Certified Data Analytics, AWS Solutions Architect) are an advantage.
Experience & Background
- 3+ years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and Python for data processing and scripting.
- Hands-on experience with AWS services: S3, Redshift, EC2, Glue, Lambda, and related data tools.
- Familiarity with ETL/ELT design patterns, data modeling, and pipeline automation.
- Exposure to data governance, data quality, and documentation practices.
- Experience with agile/scrum delivery models is a plus.
Technical Skills
- Advanced SQL development and performance tuning.
- Python programming for ETL/automation.
- AWS-native tools for data ingestion, storage, and analytics.
- Ability to work with structured and semi-structured datasets.
- Familiarity with data visualization/BI tools (Tableau, Power BI, QuickSight).
- Knowledge of CI/CD pipelines for data workflows.
- Experience with data lake or data warehouse architectures.
Soft Skills
- Strong analytical mindset and problem-solving skills.
- Excellent communication and collaboration skills.
- Detail-oriented with the ability to deliver in fast-paced environments.