Job Opening: Data Engineer (GCP / BigQuery / dbt)

About the Job

We are a consulting company full of tech-savvy, happy people!

We love technology, we love design, and we love quality. Our diversity makes us unique and creates an inclusive and welcoming workplace where every individual is highly valued.

With us, everyone can be themselves while respecting others for who they are. We believe that when an amazing mix of people comes together and shares their knowledge, experiences, and ideas, we can help our clients on a completely different level.

We are looking for someone who can start immediately and wants to grow with us!

With us, you have great opportunities to make real progress in your career and the chance to take on significant responsibility.

About the Role

We are looking for a skilled Data Engineer with strong experience in Google Cloud Platform (GCP), BigQuery, and dbt to design and maintain scalable data pipelines and analytics infrastructure. You will play a key role in building reliable data systems that support reporting, analytics, and advanced data use cases.

Experience with Apache Pinot for real-time analytics is a strong plus.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines on GCP
  • Develop and optimize data models using dbt
  • Build and manage data warehouses in BigQuery
  • Implement ETL/ELT processes for structured and semi-structured data
  • Ensure data quality, reliability, and performance optimization
  • Collaborate with Data Analysts, Data Scientists, and engineering teams
  • Optimize query performance and cost efficiency in BigQuery
  • Implement monitoring and validation frameworks for data pipelines
  • Support real-time analytics use cases (Pinot experience is a plus)
  • Maintain documentation and data governance best practices

Required Skills & Qualifications

  • 3+ years of experience in Data Engineering
  • Strong hands-on experience with Google Cloud Platform (GCP)
  • Expertise in BigQuery
  • Strong experience with dbt
  • Strong SQL skills (complex transformations and optimization)
  • Experience designing scalable data models
  • Experience building and maintaining ETL/ELT pipelines
  • Understanding of data warehousing concepts
  • Experience with version control (Git)
  • Strong understanding of data performance tuning and cost optimization

Nice to Have

  • Experience with Apache Pinot (real-time OLAP systems)
  • Experience with streaming pipelines (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance and security best practices
  • Experience in high-volume production environments

Soft Skills

  • Strong analytical thinking
  • Attention to detail
  • Ownership mindset
  • Ability to work cross-functionally
  • Strong communication skills

What We Offer

  • Competitive compensation
  • Opportunity to build modern cloud-based data platforms
  • Flexible work environment
  • Growth and learning opportunities
  • Collaborative data-driven culture
