About the job: DevOps
This position is part of a project with one of the foundational LLM companies, with the goal of helping them enhance their Large Language Models.
One way we help these companies improve their models is by providing them with high-quality proprietary data. This data serves two main purposes: first, as a basis for fine-tuning their models, and second, as an evaluation set to benchmark the performance of their models or competitor models.
For example, for SFT data generation, you might assemble (or be provided) a prompt containing source code and questions; you would then write the model responses along with corresponding scripts that solve the questions. A collection of 5k-10k such samples could form the dataset for model fine-tuning.
For RLHF data generation, you might assemble (or be provided) a prompt from the customer, ask the model questions, and evaluate the outputs generated by two versions of the LLM. You'd compare these outputs and provide feedback, which is then used to fine-tune the models. Please note that this role does not require you to build or fine-tune LLMs.
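As a rough illustration of the SFT workflow described above (the field names, the `parse_line` task, and the `verify_response` checker are all hypothetical, not a prescribed schema), one sample paired with its verification script might look like this:

```python
# Hypothetical shape of one SFT sample: a prompt with provided code and a
# question, the model's response, and a script that checks correctness.
# All names and fields here are illustrative assumptions, not a required schema.

def verify_response(response_code: str) -> bool:
    """Execute the model's answer and test it against a known case."""
    namespace = {}
    # In practice this would run in an isolated sandbox, not a bare exec().
    exec(response_code, namespace)
    # The (invented) question asked for a function that parses a
    # "key=value" config line into a (key, value) tuple.
    return namespace["parse_line"]("retries=3") == ("retries", "3")

sample = {
    "prompt": "Given this config parser stub, implement parse_line(line).",
    "response": (
        "def parse_line(line):\n"
        "    key, _, value = line.partition('=')\n"
        "    return key, value\n"
    ),
}

print(verify_response(sample["response"]))  # True for this response
```

The key point is that each sample ships with an executable check, so correctness of a model response can be verified mechanically rather than by eyeballing.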
What the day-to-day looks like:
* Design and develop challenging prompts based on provided source code, with good coverage of DevOps and infrastructure technologies.
* Implement verification code that can be executed to check whether a model's response to a prompt is correct.
* Conduct evaluations (Evals) to benchmark model performance and analyze results for continuous improvement.
* Evaluate and rank AI model responses to user queries across diverse domains, ensuring alignment with predefined criteria.
* Develop comprehensive explanations and rationales for evaluations, showcasing excellent reasoning and technical expertise.
* Lead efforts in Supervised Fine-Tuning (SFT), including creating and maintaining high-quality, task-specific datasets.
* Collaborate with researchers and annotators to execute Reinforcement Learning with Human Feedback (RLHF) and refine reward models.
* Design innovative evaluation strategies and processes to improve the model's alignment with user needs and ethical guidelines.
* Create and refine optimal responses to improve AI performance, emphasizing clarity, relevance, and technical accuracy.
* Conduct thorough peer reviews of code and documentation, providing constructive feedback and identifying areas for improvement.
* Collaborate with cross-functional teams to improve model performance and contribute to product enhancements.
* Continuously explore and integrate new tools, techniques, and methodologies to enhance AI training processes.
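To make the evaluation-and-ranking responsibilities above concrete, here is a minimal sketch of recording a pairwise RLHF comparison. The rubric criteria, their weights, and the 1-5 rating scale are assumptions for illustration, not a customer-specified format:

```python
# Minimal sketch of a pairwise RLHF comparison between two model responses.
# The criteria, weights, and rating scale are illustrative assumptions.

CRITERIA = {"correctness": 0.5, "clarity": 0.3, "relevance": 0.2}

def score(ratings: dict) -> float:
    """Weighted score from per-criterion ratings on a 1-5 scale."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

def compare(ratings_a: dict, ratings_b: dict) -> str:
    """Return which response the annotator prefers, or 'tie'."""
    a, b = score(ratings_a), score(ratings_b)
    if a == b:
        return "tie"
    return "A" if a > b else "B"

# Example: response A is more correct, response B is somewhat clearer.
print(compare(
    {"correctness": 5, "clarity": 3, "relevance": 4},
    {"correctness": 3, "clarity": 5, "relevance": 4},
))  # "A"
```

In practice the preference label would be accompanied by a written rationale, which is the "comprehensive explanations" deliverable mentioned above.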
Requirements:

Technical Expertise:
* 5+ years of proven experience with configuration management and infrastructure automation tools such as Ansible, Terraform, or similar platforms.
* Strong exposure to the AWS cloud platform, with experience designing and managing multi-cloud environments.
* Hands-on experience with container technologies (Docker) and container orchestration (Kubernetes).
* Proficiency in scripting languages (Bash, Python, etc.) for automation and tool integration.
* Familiarity with CI/CD tools (Jenkins, GitLab CI, CircleCI, etc.) and version control systems (Git).
Operational Excellence:
* Experience setting up monitoring, logging, and alerting mechanisms to ensure system health and quick incident response.
* Knowledge of networking, security best practices, and high availability design in cloud infrastructures.
Professional Skills:
* 5+ years of overall work experience in DevOps or related roles.
* Demonstrable ability to collaborate with cross-functional teams and communicate complex technical concepts.
* Strong problem-solving skills, with a proactive approach to identifying and resolving system bottlenecks and vulnerabilities.
* Fluent in spoken and written English.
Benefits:
* Work in a fully remote environment.
* Opportunity to work on cutting-edge projects with leading AI and cloud technology companies.
* Potential for contract extension based on performance and project needs.
Offer Details:
* Commitment Required: at least 4 hours per day and a minimum of 20 hours per week, with a 4-hour overlap with PST.
* Employment Type: Contractor position (Note: this role does not include medical/paid leave).
* Duration of Contract: 1 month; expected start date is next week.
* Location: Open to candidates in India, Pakistan, Nigeria, Kenya, Egypt, Ghana, Bangladesh, Turkey, Mexico.
Evaluation Process (approximately 75 minutes):
Interview Rounds:
* 60-minute technical interview focusing on DevOps tools, cloud architectures, and automation strategies.
* 15-minute cultural fit and offer discussion session.