Job Opening: Data Engineer

About the Job

The Data Engineer participates in the design, development, and implementation of solutions for the data warehouse. The data warehouse focuses on data and analytics that support business operations across Clinical, Hospital Performance, and Corporate functions.

This role is cloud-facing and requires experience building and managing data engineering code in modern cloud-based technologies such as Google BigQuery or equivalent. We are looking for a high-energy individual who is willing to learn and evolve and wants to contribute to a high-impact healthcare environment.

Description of Responsibilities

Assembling small to medium complexity data sets that meet functional and non-functional business requirements

Building the data pipelines required for optimal extraction, transformation, and loading of data from various sources using GCP and SQL technologies

Building analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics

Working with stakeholders including data, design, and product teams and assisting them with data-related technical issues

Participating in unit and system testing and following existing change control processes for promoting solutions to production systems, escalating issues as needed

Designing, developing, reviewing, testing, and deploying data warehouse solutions using CI/CD

Identifying data integrity issues and analyzing data and process flows for process improvement opportunities

Monitoring system performance and evaluating query execution plans to improve overall system performance

Working with the Integration Architect to develop, test, and deploy data pipelines

Participating in troubleshooting and maintaining existing solutions as required

Knowledge / Skills / Abilities

Required:

Strong SQL and relational database design/development skills

Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions

Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata

Development experience with modern cloud-based data warehouses such as Google BigQuery or equivalent

Demonstrated understanding of and experience with relational SQL databases, including BigQuery or equivalent, and functional/object-oriented scripting languages including Scala, Java, and Python

Excellent written and verbal communication skills

Strong interpersonal skills: the ability to communicate with all levels of customers, vendors, and IT resources

Preferred:

Prior experience with health information systems and/or patient financial services systems is a plus

Understanding of software and tools including big data tools such as Kafka and Spark, and workflow management tools such as Airflow

Ability to build and optimize data sets and big data pipelines

Experience building near real-time data pipelines is a plus

Data integration experience with Cerner, Meditech, CPSI, Allscripts, EPIC, McKesson, or NextGen is a plus

Education:

Bachelor's degree from an accredited college/university in a technology-related field, or an equivalent combination of education, training, and experience.