About the job JR-124399 Lead Databricks Engineer

We are seeking a Lead Databricks Engineer to drive the design, development, and optimization of data pipelines and analytics solutions on the Databricks Lakehouse platform. This role is ideal for a hands-on technical leader who is passionate about big data technologies, cloud computing, and enabling business insights through scalable data architectures.

Locations:

  • Serbia
  • Albania
  • Bosnia and Herzegovina
  • Montenegro
  • North Macedonia
  • Ukraine
  • Georgia
  • Argentina
  • Brazil
  • Estonia
  • Latvia
  • Lithuania
  • Finland
  • Romania
  • Hungary
  • Slovakia
  • Slovenia
  • Czech Republic

Requirements:

  • Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field;
  • 5+ years of experience in Data Engineering or Big Data roles;
  • 2+ years of hands-on experience with Databricks, Spark (PySpark or Scala), and Delta Lake;
  • Strong knowledge of cloud platforms (AWS, Azure, or GCP) and modern data architectures (Lakehouse, Data Mesh, etc.);
  • Proficiency in SQL, Python, and distributed data processing;
  • Experience with CI/CD, version control (Git), and DataOps practices in a data environment;
  • Deep understanding of data governance, cataloging, and security concepts;
  • Experience leading a team or project as a team/tech lead.

Nice to Have:

  • Databricks certification (Pro or Architect);
  • Experience with machine learning pipelines and MLOps in Databricks;
  • Exposure to streaming technologies (Kafka, Spark Structured Streaming);
  • Knowledge of dbt, Airflow, or similar transformation and orchestration tools.

Other Skills:

  • Excellent written and verbal English communication skills;
  • Ability to work in a global, multi-cultural, and multi-national company;
  • Ability to lead conversations with both technical and business representatives;
  • Proven ability to work both independently and as part of an international project team.

Job Responsibilities:

  • Lead end-to-end development of data pipelines, ETL/ELT processes, and batch/streaming solutions using Databricks and Apache Spark;
  • Design and implement Lakehouse architectures that align with business and technical requirements;
  • Collaborate with data scientists, analysts, and engineers to deliver high-performance data products and ML features;
  • Define and enforce coding standards, best practices, and performance tuning strategies across Databricks notebooks and jobs;
  • Optimize data models in Delta Lake and implement data governance standards using Unity Catalog;
  • Manage integration of data sources across cloud platforms (e.g., AWS, Azure, GCP) using native and third-party connectors;
  • Contribute to and lead technical reviews, architecture sessions, and mentoring of less experienced engineers;
  • Automate infrastructure deployment with tools such as Terraform and the Databricks CLI;
  • Ensure data platform solutions are secure, compliant, and scalable across global business units.

What We Offer:

  • Competitive salary;
  • 100% remote opportunity;
  • Opportunities for professional growth and advancement;
  • A collaborative and innovative work environment;
  • 20 days of paid vacation, 15 paid days of sick leave with a doctor's note, and 5 days of paid sick leave without a doctor's note;
  • Medical insurance coverage for employees, with optional family coverage at corporate rates;
  • Support for participation in professional development opportunities (webinars, conferences, trainings, etc.);
  • Regular team-building activities and bi-annual company-wide events;
  • Flexible work environment (in-office, remote, or hybrid depending on preferences and manager approval).

Job ID: JR-124399