Job Openings
DevOps Engineer
About the Job
Location: San Antonio
Tools & Technologies:
- Apache Kafka (self-managed or Amazon MSK)
- Amazon Managed Service for Apache Flink
- Amazon EC2, S3, RDS, and VPC
- Terraform/CloudFormation
- Docker, Kubernetes (EKS)
- ELK Stack, CloudWatch
- Python, Bash
Skills and Expertise:
- AWS Managed Services:
- Proficiency in AWS services such as Amazon MSK (Managed Streaming for Apache Kafka), Amazon Kinesis, AWS Lambda, Amazon S3, Amazon EC2, Amazon RDS, Amazon VPC, and AWS IAM.
- Ability to manage infrastructure as code with AWS CloudFormation or Terraform.
- Apache Flink:
- Understanding of Apache Flink for real-time stream processing and batch data processing.
- Familiarity with Flink's integration with Kafka or other messaging services (an illustrative sketch follows this group).
- Experience in managing Flink clusters on AWS (using EC2, EKS, or managed services).
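To make the Flink-Kafka integration concrete, below is a minimal PyFlink sketch that reads a Kafka topic as a string stream and prints it. The bootstrap servers, topic name, and consumer group are placeholder assumptions, and the Kafka connector jar must be available to the Flink runtime.

```python
# Minimal PyFlink sketch: consume a Kafka topic as strings and print it.
# Broker endpoint, topic, and group id below are placeholder assumptions.
from pyflink.common import WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.kafka import KafkaOffsetsInitializer, KafkaSource

env = StreamExecutionEnvironment.get_execution_environment()
# When running locally, the Kafka connector jar must be on the classpath, e.g.:
# env.add_jars("file:///path/to/flink-sql-connector-kafka.jar")

source = (
    KafkaSource.builder()
    .set_bootstrap_servers("b-1.example.kafka.us-east-1.amazonaws.com:9092")  # assumed MSK endpoint
    .set_topics("events")                        # assumed topic name
    .set_group_id("flink-consumer")              # assumed consumer group
    .set_starting_offsets(KafkaOffsetsInitializer.latest())
    .set_value_only_deserializer(SimpleStringSchema())
    .build()
)

stream = env.from_source(source, WatermarkStrategy.no_watermarks(), "kafka-source")
stream.print()  # stand-in for real transformations and sinks
env.execute("kafka-to-stdout")
```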
- Kafka Broker (Apache Kafka):
- Deep knowledge of Kafka architecture, including brokers, topics, partitions, producers, consumers, and ZooKeeper.
- Proficiency with Kafka management, monitoring, scaling, and optimization (an illustrative topic-management sketch follows this group).
- Hands-on experience with Amazon MSK (Managed Streaming for Apache Kafka) or self-managed Kafka clusters on EC2.
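As an illustration of routine Kafka management work, here is a minimal sketch using the kafka-python admin client to create a topic with explicit partition and replication settings. The broker address, topic name, and sizing values are assumptions to adjust for the actual cluster.

```python
# Minimal kafka-python sketch: create a topic with explicit partitioning and
# replication. Broker address, topic name, and sizing are placeholder values.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(
    bootstrap_servers="b-1.example.kafka.us-east-1.amazonaws.com:9092",  # assumed broker
    client_id="ops-tooling",
)

topic = NewTopic(name="events", num_partitions=6, replication_factor=3)  # assumed sizing
admin.create_topics(new_topics=[topic], validate_only=False)
print("created topic:", topic.name)
admin.close()
```

Partition count and replication factor drive both throughput and fault tolerance, which is why they are set explicitly here rather than left to broker defaults.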
- DevOps & Automation:
- Strong experience in automating deployments and infrastructure provisioning.
- Familiarity with CI/CD pipelines using tools like Jenkins, GitLab, GitHub Actions, CircleCI, etc.
- Experience with Docker and Kubernetes, especially for containerizing and orchestrating applications in cloud environments.
- Programming & Scripting:
- Strong scripting skills in Python, Bash, or Go for automation tasks.
- Ability to write and maintain code for integrating data pipelines with Kafka, Flink, and other data sources (see the producer sketch after this group).
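As a small example of pipeline integration code, the sketch below publishes JSON records to a Kafka topic with kafka-python. The broker address, topic name, and sample records are placeholder assumptions standing in for a real upstream source.

```python
# Minimal kafka-python producer sketch: serialize records as JSON and publish
# them to a topic. Broker address and topic name are placeholder assumptions.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="b-1.example.kafka.us-east-1.amazonaws.com:9092",  # assumed broker
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
    acks="all",   # wait for acknowledgement from all in-sync replicas
    retries=5,
)

for event in ({"id": i, "status": "ok"} for i in range(3)):  # stand-in for a real source
    producer.send("events", value=event)

producer.flush()
producer.close()
```

Setting acks="all" with retries trades a little latency for delivery durability, which matches the fault-tolerance emphasis elsewhere in this role.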
- Monitoring & Performance Tuning:
- Knowledge of CloudWatch, Prometheus, Grafana, or similar monitoring tools to observe Kafka, Flink, and AWS service health (see the CloudWatch sketch after this group).
- Expertise in optimizing real-time data pipelines for scalability, fault tolerance, and performance.
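For monitoring, the minimal boto3 sketch below pulls a per-broker MSK CPU metric from CloudWatch to spot hot brokers. The region, cluster name, metric, and dimension names follow the AWS/Kafka namespace conventions but should be treated as assumptions to verify against the actual deployment.

```python
# Minimal boto3 sketch: read a per-broker MSK CPU metric from CloudWatch.
# Region, cluster name, metric, and dimensions are assumed values to verify.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region
now = datetime.now(timezone.utc)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Kafka",
    MetricName="CpuUser",                                          # assumed metric name
    Dimensions=[
        {"Name": "Cluster Name", "Value": "example-msk-cluster"},  # assumed cluster
        {"Name": "Broker ID", "Value": "1"},
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```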
Responsibilities:
- Infrastructure Design & Implementation:
- Design and deploy scalable and fault-tolerant real-time data processing pipelines using Apache Flink and Kafka on AWS.
- Build highly available, resilient infrastructure for data streaming, including Kafka brokers and Flink clusters.
- Platform Management:
- Manage and optimize the performance and scaling of Kafka clusters (Amazon MSK or self-managed).
- Configure, monitor, and troubleshoot Flink jobs on AWS infrastructure.
- Oversee the deployment of data processing workloads, ensuring low-latency, high-throughput processing.
- Automation & CI/CD:
- Automate infrastructure provisioning, deployment, and monitoring using Terraform, CloudFormation, or other tools.
- Integrate new applications and services into CI/CD pipelines for real-time processing.
- Collaboration with Data Engineering Teams:
- Work closely with Data Engineers, Data Scientists, and DevOps teams to ensure smooth integration of data systems and services.
- Ensure the data platform's scalability and performance meet the needs of real-time applications.
- Security and Compliance:
- Implement proper security mechanisms for Kafka and Flink clusters (e.g., encryption, access control, VPC configurations).
- Ensure compliance with organizational and regulatory standards, such as GDPR or HIPAA, where necessary.
- Optimization & Troubleshooting:
- Optimize Kafka and Flink deployments for performance, latency, and resource utilization.
- Troubleshoot issues related to Kafka message delivery, Flink job failures, or AWS service outages (see the consumer-lag sketch below).
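As an example of the kind of troubleshooting involved, the sketch below estimates consumer lag by comparing a group's committed offsets with the latest broker offsets using kafka-python. The broker address and consumer group are placeholder assumptions.

```python
# Minimal kafka-python sketch: estimate per-partition consumer lag by comparing
# committed offsets with log-end offsets. Broker and group id are assumptions.
from kafka import KafkaConsumer
from kafka.admin import KafkaAdminClient

BOOTSTRAP = "b-1.example.kafka.us-east-1.amazonaws.com:9092"  # assumed broker
GROUP_ID = "flink-consumer"                                    # assumed consumer group

admin = KafkaAdminClient(bootstrap_servers=BOOTSTRAP)
committed = admin.list_consumer_group_offsets(GROUP_ID)   # {TopicPartition: OffsetAndMetadata}

consumer = KafkaConsumer(bootstrap_servers=BOOTSTRAP)
latest = consumer.end_offsets(list(committed))             # {TopicPartition: log-end offset}

for tp, meta in sorted(committed.items()):
    lag = latest[tp] - meta.offset
    print(f"{tp.topic}[{tp.partition}] lag={lag}")

consumer.close()
admin.close()
```

Sustained growth in these lag numbers is usually the first visible symptom of delivery or throughput problems in the pipeline.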