About the job Cloud Data Engineer - Informatica
Cloud Data Engineer with a strong technology background and hands-on experience working in an enterprise environment, designing and implementing data warehouses, data lakes, and data marts for large financial institutions. In this role you will work with technology and business leads to build or enhance critical enterprise applications both on-prem and in the cloud (AWS), along with a modern data stack built on the Snowflake Data Platform and the Starburst data virtualization tool for the semantic layer build-out. Successful candidates will possess in-depth knowledge of current and emerging technologies and demonstrate a passion for designing and building elegant solutions and for continuous self-improvement.
Roles and Responsibilities:
- Manage data analysis and data integration of disparate systems.
- Work with business users to translate functional specifications into technical designs for implementation and deployment.
- Extract, transform, and load large volumes of structured and unstructured data from various sources into AWS data lakes or data warehouses.
- Work with cross-functional team members to develop prototypes, produce design artifacts, develop components, perform and support SIT and UAT testing, and handle triage and bug fixing.
- Optimize and fine-tune data pipeline jobs for performance and scalability.
- Implement data quality and data validation processes to ensure data accuracy and integrity.
- Provide problem-solving expertise and complex analysis of data to develop business intelligence integration designs.
- Convert physical data integration models and other design specifications to source code.
- Ensure high quality and optimum performance of data integration systems to meet business needs.
Job Requirements:
- Bachelor's degree (or foreign equivalent) in Information Technology, Information Systems, Computer Science, Software Engineering, or a related field. Experience in the financial services or banking industry is preferred.
- 5+ years of experience working as a Data Engineer, with a focus on building data pipelines and processing large datasets.
- 2+ years of experience with the Snowflake Data Platform is highly desirable. Exposure to data virtualization platforms (Starburst, Denodo) is a plus.
- 3+ years of experience with the Informatica ETL tool for building data pipelines.
- 5+ years of strong proficiency in AWS services, including AWS Glue, Redshift, EMR, RDS, Kinesis, S3, Athena, DynamoDB, Step Functions, and Lambda.
- 5+ years of expertise in HiveQL and Python programming, with experience using Spark and Scala for big data processing and analysis.
- Solid understanding of data modeling, database design, and ETL principles.
- Experience working with data lakes, data warehouses, and distributed computing systems.
- Familiarity with data governance, data security, and compliance practices in cloud environments.
- Strong problem-solving skills and the ability to optimize and fine-tune data pipelines and Spark jobs for performance.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- AWS certifications in data-related specialties are a plus.