Software Engineer — Data Engineering
About the job
Salary: $42,000 - $54,000 per year
Location: International
Job type: Full-time
Job Summary:
We are looking for a strong Software Engineer with experience in data engineering to join an international team working on large-scale solutions in the financial domain. This role involves building robust, scalable, and maintainable data pipelines and services in a cloud-based environment. You’ll be part of a cross-functional, high-performance team working with real-time and high-volume data systems.
Skills
Must-have
Hard skills:
- novice in: Java
- beginner in: Pipeline, Data Pipelines, ETL
- competent in: Data Engineering, SQL
Soft skills:
- competent in: Communication Skills
Nice-to-have
Hard skills:
- beginner in: Performance Optimization, Logic, Prometheus, FinTech, Apache, Kubernetes, Microsoft Clarity, Microservices, Software Engineering, Redis, ClickHouse, Trading, OOD, Snowflake, Airflow, Kafka, RDBMS, Grafana, SOLID, Fraud Detection, Monitoring, Data Storage, Datadog, Accountability
Job Description
Additional Requirements
- 3+ years of experience with Java (in production environments)
- 3+ years in data engineering and pipeline development with large volumes of data
- Experience with ETL workflows and data processing using cloud-native tools
- Strong knowledge of SQL, relational and non-relational databases, and performance optimization
- Experience with monitoring tools (e.g., Prometheus, Grafana, Datadog); see the sketch after this list
- Familiarity with Kubernetes, Kafka, Redis, Snowflake, ClickHouse, Apache Airflow
- Solid understanding of software engineering principles and object-oriented design
- Ability to work independently and proactively, with strong communication skills
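To make the monitoring requirement concrete, here is a minimal sketch of exposing a pipeline counter with the classic Prometheus Java simpleclient (it assumes the `simpleclient` and `simpleclient_httpserver` artifacts are on the classpath); the metric name, label, and port are illustrative choices, not details from the posting.

```java
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class PipelineMetrics {
    // Counter tracking records processed per source; the metric and label
    // names here are hypothetical examples, not from the job posting.
    static final Counter RECORDS = Counter.build()
            .name("records_ingested_total")
            .help("Records ingested by the pipeline, labeled by source.")
            .labelNames("source")
            .register();

    public static void main(String[] args) throws Exception {
        // Expose /metrics on an arbitrary port for Prometheus to scrape;
        // the HTTPServer thread keeps the process alive.
        HTTPServer server = new HTTPServer(9091);
        RECORDS.labels("vendor_a").inc(); // one record from a hypothetical source
    }
}
```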
Responsibilities
- Design and develop microservices for the data engineering team (Java-based, running on Kubernetes)
- Build and maintain high-performance ETL workflows and data ingestion logic
- Handle data velocity, duplication, schema validation/versioning, and availability (see the sketch after this list)
- Integrate third-party data sources to enrich financial data
- Collaborate with cross-functional teams to align data consumption formats and standards
- Optimize data storage, queries, and delivery for internal and external consumers
- Maintain observability and monitoring across services and pipelines
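As a rough illustration of the deduplication and schema-validation responsibilities, the sketch below checks required fields and drops duplicate IDs with an in-memory set. The field names are hypothetical, and a production pipeline would back the seen-ID set with a bounded or external store (e.g., Redis) rather than a plain HashSet.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class IngestStep {
    // Fields every inbound record must carry; an illustrative schema,
    // not one specified by the posting.
    private static final Set<String> REQUIRED_FIELDS =
            Set.of("id", "symbol", "price", "ts");

    // IDs seen so far; in production this would live in an external
    // store such as Redis so the check survives restarts and scales out.
    private final Set<String> seenIds = new HashSet<>();

    /** Returns true if the record passed validation and was not a duplicate. */
    public boolean accept(Map<String, String> row) {
        if (!row.keySet().containsAll(REQUIRED_FIELDS)) {
            return false; // schema validation failed: missing required fields
        }
        return seenIds.add(row.get("id")); // false means duplicate, dropped
    }

    public static void main(String[] args) {
        IngestStep step = new IngestStep();
        Map<String, String> row = Map.of(
                "id", "42", "symbol", "ACME",
                "price", "10.5", "ts", "2024-01-01T00:00:00Z");
        System.out.println(step.accept(row)); // true: first time seen
        System.out.println(step.accept(row)); // false: duplicate
    }
}
```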
About Company