Job Openings
JR-152705 Data Engineer Middle (AB)
About the job
Project Description:
We are looking for a Data Engineer to join an international team working on enterprise-scale data solutions in the AWS cloud. This role focuses on designing, developing, and maintaining scalable data pipelines, DBT models, and ETL processes to ensure high-quality, consistent data across multiple sources. You will work closely with both technical and business stakeholders to optimize data processing, automate workflows, and deliver actionable insights across a global organization.
Locations:
- Albania
- Portugal
- Bosnia and Herzegovina
- Bulgaria
- Croatia
- Kosovo
- Montenegro
- North Macedonia
- Romania
- Serbia
- Slovenia
Requirements:
- 3+ years of hands-on experience in data engineering (data lake, lakehouse, DWH);
- Minimum 1 year of experience developing data solutions in AWS;
- Real production experience with AWS services: Step Functions, Lambda, Glue Jobs / Catalog, Redshift (Spectrum), Athena;
- Experience implementing processing and transformations with DBT: incremental strategies, macros, tests;
- Proficiency in Python (PySpark, Pandas), SQL, and Jinja.
Will be a plus:
- Databricks experience / certifications
Other skills:
- English – upper intermediate;
- Good communication skills (both verbal and written);
- Ability to work in a global multi-cultural and multi-national company;
- Ability to lead conversations with both technical and business representatives;
- Proven ability to work both independently and as a part of an international project team;
- Ability to work in the CET time zone.
Job Responsibilities:
- Design, implement, and support a data solution in the AWS cloud;
- Develop scalable data pipelines (using AWS-native services) that meet data integration, ingestion, and processing requirements and common patterns;
- Maintain and improve data quality, ensuring consistency and integrity across multiple sources;
- Maintain and develop DBT models for data processing and business transformations;
- Optimize S3 storage, ETL pipelines, and DBT models for performance: query tuning and efficient data-partition management;
- Orchestrate existing pipelines and develop new ones with AWS Step Functions; automate pipeline monitoring;
- Collaborate with multiple Dev and BA teams to solve complex data challenges and align on integration patterns, processing patterns, and expected business results.
What We Offer:
- Competitive salary;
- Flexible work environment (fully remote, in-office, or hybrid, depending on preferences and manager approval);
- Opportunities for professional growth and career advancement;
- Collaborative and innovative work environment;
- Paid time off, including holidays, vacation and sick leave;
- Benefits may vary by location and will be confirmed during the interview process.
Job ID: JR-152705