Senior Data Engineer

About the job

== HYBRID JOB (3 days / week at the office) ==

What you can expect

Our client is a prominent financial marketplace operating across multiple European countries, facilitating the trading of financial instruments such as stocks, derivatives, commodities, and exchange-traded funds.

What you will be doing

  • Leverage your deep understanding of AWS services to design, implement, and maintain scalable data solutions, using Lambda, Glue, Step Functions, and other AWS tools to build robust architectures and cloud infrastructure.
  • Develop efficient data models, write and optimize SQL queries, and manage ETL processes. Work with databases such as Redshift, MySQL, and PostgreSQL, with attention to performance tuning, security, and compliance.
  • Implement and manage data workflows using Apache Airflow and AWS Step Functions, and use Athena for interactive query analysis of large datasets in Amazon S3 (see the orchestration sketch after this list).
  • Provide guidance and mentorship to junior team members. Write comprehensive documentation to communicate architectural designs and best practices.
  • Monitor system health, automate checks, and proactively address potential issues (see the monitoring sketch after this list). Apply strong troubleshooting skills to minimize downtime and operational impact.
  • Stay abreast of emerging technologies, incorporate new tools and techniques, and propose innovative solutions to improve efficiency and scalability.
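
As a flavour of the orchestration work above, here is a minimal sketch of an Airflow DAG that runs an AWS Glue job and then checks the curated output with an Athena query. It assumes the apache-airflow-providers-amazon package is installed; the DAG name, Glue job, database, and S3 locations are hypothetical placeholders, not details of the role.

    # Minimal sketch: run a Glue ETL job, then validate its S3 output
    # with an Athena query. All names and paths are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.amazon.aws.operators.athena import AthenaOperator
    from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

    with DAG(
        dag_id="daily_trades_etl",            # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                    # schedule_interval on older Airflow
        catchup=False,
    ) as dag:
        # Run a pre-existing Glue job that lands curated data in S3.
        transform = GlueJobOperator(
            task_id="transform_trades",
            job_name="trades-curation",       # hypothetical Glue job
        )

        # Interactive-style validation query over the curated data.
        validate = AthenaOperator(
            task_id="validate_row_counts",
            query="SELECT COUNT(*) FROM trades WHERE trade_date = DATE '{{ ds }}'",
            database="curated",               # hypothetical Glue catalog database
            output_location="s3://example-athena-results/",  # placeholder bucket
        )

        transform >> validate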
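
On the monitoring side, a sketch of one automated check: a CloudWatch alarm on a Lambda function's Errors metric, created with boto3. The function name and SNS topic ARN are again placeholders.

    # Sketch: alert when an ingestion Lambda reports any errors in a
    # 5-minute window. Function name and SNS topic are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="ingest-lambda-errors",
        AlarmDescription="Ingestion Lambda reported errors",
        Namespace="AWS/Lambda",
        MetricName="Errors",
        Dimensions=[{"Name": "FunctionName", "Value": "ingest-trades"}],
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:eu-west-1:123456789012:data-alerts"],
    )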

What you will bring

  • Extensive experience with AWS services such as Lambda, Glue, Step Functions, CloudFormation, and CloudWatch.
  • Ability to design scalable and efficient data solutions on AWS, considering best practices for cloud architecture.
  • Advanced programming skills in Python and experience with relational (MySQL, PostgreSQL, Redshift) and NoSQL databases.
  • Experience with tools like Apache Airflow and AWS Step Functions for managing data workflows.
  • Proficiency with ETL tools and experience handling large data volumes, preferably streamed through Kafka.
  • Familiarity with managing large datasets using Apache Iceberg tables, which provide data consistency and support ACID transactions (a short sketch follows this list).
  • Proactive monitoring and troubleshooting skills to anticipate and resolve issues.
  • Ability to lead technical initiatives and communicate effectively with cross-functional teams.
  • Aptitude for analyzing requirements, defining technical approaches, and proposing innovative solutions.
  • Experience in creating technical documentation and refining business requirements.
  • Knowledge of Apache Flink, Kafka, and other big data technologies (nice to have).
  • Experience with cloud-native architectures and serverless computing (nice to have).
  • Certification in AWS or relevant technologies (nice to have).
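
To make the Iceberg requirement concrete, here is a minimal sketch of an upsert into an Iceberg table through Athena, which supports ACID MERGE on Iceberg tables. It uses boto3's Athena client; the databases, tables, and result bucket are hypothetical placeholders.

    # Sketch: ACID upsert into an Apache Iceberg table via Athena.
    # Database, table, and bucket names are placeholders.
    import boto3

    athena = boto3.client("athena")

    MERGE_SQL = """
    MERGE INTO curated.positions AS t
    USING staging.position_updates AS s
      ON t.account_id = s.account_id AND t.isin = s.isin
    WHEN MATCHED THEN UPDATE SET quantity = s.quantity
    WHEN NOT MATCHED THEN INSERT (account_id, isin, quantity)
      VALUES (s.account_id, s.isin, s.quantity)
    """

    response = athena.start_query_execution(
        QueryString=MERGE_SQL,
        QueryExecutionContext={"Database": "curated"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    print(response["QueryExecutionId"])  # poll get_query_execution for status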