Job Opening: AI Data Engineer

About the Job

Responsibilities:

  • Data Pipeline Development: Create and manage ETL workflows using Python and relevant libraries (e.g., Pandas, NumPy) for high-volume data processing.
  • Data Optimization: Monitor and optimize data workflows to reduce latency, maximize throughput, and ensure high-quality data availability.
  • Collaboration: Work with Platform Operations, QA, and Analytics teams to guarantee seamless data integration and consistent data accuracy.
  • Quality Checks: Implement validation processes and address anomalies or performance bottlenecks in real time.
  • Integration & Automation: Develop REST API integrations and Python scripts to automate data exchanges with internal systems and BI dashboards.
  • Documentation: Maintain comprehensive technical documentation, data flow diagrams, and best-practice guidelines.
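
To give a concrete sense of the pipeline and quality-check work described above, here is a minimal, hypothetical sketch (all names and data invented) of the extract/transform/validate pattern using Pandas:

```python
import pandas as pd

def run_pipeline(records):
    """Toy ETL pass: ingest raw records, clean them, and validate the result."""
    # Extract: build a DataFrame from raw records (in practice, a source system).
    df = pd.DataFrame(records)

    # Transform: normalize column names and drop rows missing a key field.
    df.columns = [c.strip().lower() for c in df.columns]
    df = df.dropna(subset=["user_id"])

    # Quality check: fail fast on duplicate keys rather than loading bad data.
    if df["user_id"].duplicated().any():
        raise ValueError("duplicate user_id values detected")

    return df

result = run_pipeline([
    {"User_ID": 1, "amount": 10.0},
    {"User_ID": 2, "amount": 12.5},
    {"User_ID": None, "amount": 3.0},  # dropped by the missing-key check
])
print(len(result))  # rows surviving validation
```

Production pipelines would add scheduling (e.g., via Apache Airflow), logging, and loading into a target database, but the validate-before-load discipline shown here is the core of the role.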

Requirements:

  • Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.
  • Relevant coursework in Python programming, database management, or data integration techniques.
  • 3-5 years of professional experience in data engineering, ETL development, or similar roles.
  • Proven track record of building and maintaining scalable data pipelines.
  • Experience working with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL solutions (e.g., MongoDB).
  • Certifications (optional): AWS Certified Data Analytics - Specialty, Google Cloud Professional Data Engineer, or similar certifications are a plus.

Skills:

  • Advanced Python proficiency with data libraries (Pandas, NumPy, etc.).
  • Familiarity with ETL/orchestration tools (e.g., Apache Airflow).
  • Understanding of REST APIs and integration frameworks.
  • Experience with version control (Git) and continuous integration practices.
  • Exposure to cloud-based data solutions (AWS, Azure, or GCP) is advantageous.