Leiria, Portugal

Data Engineer - Databricks & Python (Common Library)

Job Description:

Syffer is an all-inclusive consulting company focused on talent, tech and innovation. We exist to elevate companies and humans all around the world, making change, from the inside to the outside.

We believe that technology + human kindness positively impacts every community around the world. Our approach is simple: we see a world without borders and believe in equal opportunities. We are guided by our core principles of spreading positivity and good energy, promoting equality, and caring for others.

Our hiring process is unique! People are selected for their value, education, talent and personality. We don't consider ethnicity, religion, national origin, age, gender, sexual orientation or identity.

It's time to burst the bubble, and we will do it together!

What You'll do:

- Design, implement, and maintain a shared Python library for Databricks, supporting batch and streaming pipelines;

- Develop reusable PySpark modules, base classes, and abstractions for Bronze, Silver, and Gold layers;

- Actively participate as a Scrum team member in Sprint Planning, Daily, Refinement, Review, and Retrospective ceremonies;

- Define and enforce software engineering best practices, including coding standards, documentation, testing strategies, and versioning;

- Establish and maintain code quality standards, including linting, formatting, and static analysis;

- Collaborate with Product Owners and fellow engineers to clarify requirements and deliver incremental value;

- Maintain and improve CI/CD pipelines using GitLab and Databricks Asset Bundles (DABs);

- Ensure controlled releases, backward compatibility, and smooth adoption of the common library across teams;

- Integrate logging, monitoring, and data quality controls using Grafana and DQX;

- Work closely with DataOps to ensure stability, observability, and reliability in production environments;

- Hybrid Work Model;



Who You Are:

- Minimum 10 years of professional experience developing in Python;

- At least 5 years of hands-on experience with Databricks, including PySpark development in production environments;

- Proven experience working as a member of Scrum or Agile teams;

- Solid experience designing Python libraries, frameworks, or shared components;

- Strong knowledge of software engineering best practices, including:

  - Object-Oriented Programming (OOP)
  - Design patterns
  - Unit and integration testing
  - CI/CD pipelines
- Experience with code standardization and quality tools, such as linting and formatting tools (e.g., pylint, flake8, black or equivalent);

- Strong understanding of batch and streaming data processing;

- Experience with Medallion Architecture and data lifecycle best practices;

- Familiarity with Airflow, Terraform, and Azure ADLS Gen2;

- Fluent in Portuguese and English;



What you'll get:

- Wage according to candidate's professional experience;

- Remote Work whenever possible;

- Health insurance from the start of employment;

- Work equipment suited to your role;

- And more.

Work with expert teams on large-scale, long-term projects alongside our clients, all leaders in their industries.

Are you ready to step into a diverse and inclusive world with us?

Together we will promote uniqueness!