About the job
Gen AI Engineer
We are looking for a Gen AI Engineer to collaborate with a US-based client!
We are seeking an AI Agent Developer/Engineer to design, develop, and deploy advanced AI agents that enhance enterprise automation, decision-making, and intelligent data processing. This role requires strong Python expertise, proficiency in AI agent orchestration frameworks (LangGraph, AutoGen, CrewAI), and hands-on experience with multi-modal models, retrieval-augmented generation (RAG/GraphRAG), and machine learning integrations. The ideal candidate is passionate about AI-driven automation, knowledge graphs, graph databases, and ontologies.
Key Responsibilities
1. AI Agent Development & Multi-Agent Orchestration
- Develop AI-powered agents using LangGraph, AutoGen, CrewAI, or similar frameworks to handle stateful, multi-agent collaboration.
- Implement multi-agent orchestration strategies, enabling AI agents to collaborate effectively and autonomously perform complex tasks.
- Engineer autonomous decision-making workflows, ensuring AI agents communicate and adjust based on dynamic conditions.
- Design and optimize state management for AI agents to enable seamless execution across long-running workflows.
2. Large Language Models (LLMs) & Multi-Modal AI
- Work with LLMs (GPT-4, Claude, LLaMA, Mistral) and fine-tune models to optimize task-specific performance.
- Integrate multi-modal AI models, including text-to-image, text-to-video, speech-to-text, and cross-modal transformers.
- Implement AI-powered retrieval systems using RAG and GraphRAG methodologies for contextualized knowledge extraction.
- Build and fine-tune custom AI models, leveraging Hugging Face Transformers, TensorFlow, and PyTorch.
- Work with vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to optimize AI search capabilities.
3. API Development & Cloud Integration
- Develop AI-powered APIs and deploy AI models using FastAPI, Flask, or gRPC for scalable and efficient model serving.
- Integrate AI systems with cloud platforms, prioritizing open-source solutions first, then GCP, AWS, and Microsoft Azure:
  - Google Cloud: Vertex AI, BigQuery ML, Cloud Run, AI Platform.
  - AWS: SageMaker, Bedrock, Lambda, DynamoDB.
  - Microsoft Azure: OpenAI Service, Cognitive Services, Synapse Analytics.
- Implement MLOps pipelines using Kubeflow, MLflow, Airflow, or Vertex AI Pipelines for model lifecycle automation.
- Build real-time AI inference solutions, leveraging Kafka, RabbitMQ, or Redis Streams.
4. Knowledge Graphs, Ontologies & Graph Databases
- Develop AI agents that interface with knowledge graphs and graph databases to enhance reasoning and retrieval.
- Leverage graph-based reasoning engines like Neo4j, TigerGraph, Amazon Neptune, or ArangoDB for AI inference.
- Design and implement semantic search capabilities, utilizing SPARQL, RDF, and OWL for ontological reasoning.
- Integrate RAG methods with graph-based knowledge systems to optimize retrieval efficiency.
5. Collaboration & Agile Development
- Work within Agile and Scaled Agile frameworks, ensuring AI project milestones align with business goals.
- Maintain Jira/Atlassian boards, tracking AI agent development progress and dependencies.
- Participate in code reviews, technical documentation, and sprint planning to enhance cross-functional collaboration.
Required Skills & Experience
- Proficiency in Python, with experience in AI/ML frameworks like TensorFlow, PyTorch, Hugging Face, and OpenAI APIs.
- Experience developing AI agents using LangGraph, AutoGen, CrewAI, or similar agent frameworks.
- Expertise in retrieval-augmented generation (RAG) and GraphRAG for knowledge-based AI applications.
- Hands-on experience with LLM integration in production environments, including fine-tuning and prompt engineering.
- API development experience, including FastAPI, Flask, GraphQL, and gRPC.
- Cloud AI experience, prioritizing open-source, followed by GCP, AWS, and Azure.
- Strong experience in knowledge graphs and graph databases such as Neo4j, ArangoDB, Amazon Neptune.
Preferred Qualifications
- Experience in AI Ethics & Responsible AI development, ensuring fairness and bias mitigation in AI models.
- Understanding of reinforcement learning (RLHF, PPO, or SAC) for training AI agents.
- Expertise in scalable AI deployment, using Ray, Dask, or Spark for distributed AI processing.
- Experience with MLOps and model versioning using Kubeflow, MLflow, or Airflow.
- Knowledge of distributed AI architectures (e.g., multi-agent RL, federated learning).
- Contributions to open-source AI projects or research publications in AI agent-based systems.
Why Join Us?
- Full-time position
- 100% remote anywhere in LATAM
- Payment in US dollars
- 12 PTO days per year
- Your country's holidays off and paid
- Birthday off and paid
- Career Path
- Recognition Program
- Paid Leaves