Job Opening: Azure Databricks Data Engineer | 6 to 8 Years | Gyansys | Hybrid
Job Description: Data Engineer (Azure Databricks & PySpark)

Position: Data Engineer
Experience: 6 to 8 years
Primary Skills: Azure Databricks, PySpark, SQL (M)
Secondary Skills: ADF (Azure Data Factory) (M)
Project Exposure: Cloud migration (M)
Location: Bengaluru/Hyderabad
Mode of Work: Hybrid
Salary: 13 lakh to 17 lakh
Notice Period: 15 to 30 Days (M)



Databricks Engineer Job Description

Responsibilities:

  • Work as part of a globally distributed team to design and implement Hadoop big data solutions in alignment with business needs and project schedules.
  • Bring 5+ years of experience in data warehousing/engineering and in designing and developing software solutions.
  • Code, test, and document new or modified data systems to create robust and scalable applications for data analytics.
  • Work with other Big Data developers to ensure that all data solutions are consistent.
  • Partner with the business community to understand requirements, determine training needs, and deliver user training sessions.
  • Perform technology and product research to better define requirements, resolve important issues, and improve the overall capability of the analytics technology stack.
  • Evaluate and provide feedback on future technologies and new releases/upgrades.
  • Support Big Data and batch/real-time analytical solutions leveraging transformational technologies.
  • Work on multiple projects as a technical team member or lead, covering:
    • User requirement analysis and elaboration
    • Design and development of software applications
    • Testing and build automation tools
  • Research and incubate new technologies and frameworks.
  • Experience with Agile or other rapid application development methodologies, and with tools such as Bitbucket, Jira, and Confluence.
  • Experience building solutions on public cloud providers such as AWS, Azure, or GCP.

Expertise Required:

  • Hands-on experience with the Databricks stack
  • Data engineering technologies (e.g., Spark, Hadoop, Kafka)
  • Proficiency in Streaming technologies
  • Hands-on experience in Python and SQL
  • Expertise in implementing Data Warehousing solutions
  • Expertise in any ETL tool (e.g., SSIS, Redwood)
  • Good understanding of submitting jobs via Databricks Workflows, the REST API, and the CLI