Senior Data Engineer @ Greece
Why are you looking for a job?
If your answer ticks all the boxes below, then maybe we can work together.
- You have a curious mind: you won't understand what we're talking about if you don't.
- You want to learn more about technology: you won't survive if you don't.
- You want to make the world a bit better: we won't like you if you don't.
We happen to be just like that as well. We like hacking things here and there (you included) and creating scalable solutions that bring value to the world.
Squaredev?
We use state-of-the-art technology to build solutions for our own customers and for the customers of our partners. We make sure we stay best-in-class by participating in research projects across Europe, collaborating with top universities and enterprises on AI, Data, and Cloud.
What you'll do:
You will be responsible for:
- Working hands-on with IBM Watson and/or BAW.
- Designing and implementing data pipelines (batch and streaming) for analytics and AI workloads, using Python and SQL as well as low-code tools in the IBM suite.
- Building and maintaining data lakes/warehouses (OneLake, BigQuery, Delta Lake, or similar).
- Developing and optimizing ETL/ELT workflows using tools like Spark, dbt, Airflow, or Prefect.
- Ensuring data quality, observability, and governance across all pipelines.
- Working closely with data scientists and software engineers to deploy and maintain AI-ready datasets.
To excel in this role, you'll need:
- At least 3 years of relevant work experience.
- Hands-on experience with IBM Watson and/or BAW.
- Strong experience in SQL and Python (PySpark or similar).
- Hands-on experience with data modeling, ETL frameworks, and data orchestration tools.
- Familiarity with distributed systems and modern data platforms (Spark, Databricks, Fabric, Snowflake, or BigQuery).
- Understanding of data lifecycle management, versioning, and data testing.
- Solid grasp of Git and CI/CD workflows.
- Strong communication skills in English.
Nice to have:
- Knowledge of vector databases (pgvector, Pinecone, Milvus) or semantic search pipelines.
- Interest in or knowledge of LLMs and AI pipelines.
- Familiarity with data catalogs, lineage tools, or dbt tests.
- DevOps familiarity (Docker, Kubernetes, Terraform).
What we offer:
- Hybrid working model.
- 5 extra holidays to spend with your family and friends.
- Private health insurance.
- Ticket restaurant card.
- An Apple MacBook Pro to do your magic.
Well, that's it! Feedback and questions are always welcome. We want to become better and learn from you, whether you want to join us or are just in the mood to help. Thanks for taking the time to read this. Looking forward to hearing from you!