About the job
Fractional AI Architect (Consultant)
Generative AI, ML Systems & Scalable Platform Architecture
Contract / Fractional Engagement | Remote
Overview
Bridge-it.ai, an AI-driven SaaS platform operating in the career readiness and education technology space, is seeking a Fractional AI Architect to conduct an architecture review and provide technical guidance for the platform’s AI and data systems.
Experience in the U.S. K-12 education ecosystem or EdTech platforms is highly desirable, particularly in systems that support students, educators, counselors, or workforce readiness initiatives.
The platform combines Generative AI copilots, retrieval-augmented generation (RAG), knowledge graphs, and traditional machine learning models to support career exploration, pathway planning, and personalized recommendations for students.
The engagement focuses on conducting a structured architecture audit and evaluating whether the current system design aligns with the platform’s long-term goals for scalability, reliability, observability, and continuous improvement.
The consultant will collaborate with engineering and product leadership to identify architectural gaps and provide recommendations for strengthening the AI platform.
This role is intended for senior AI architects or principal-level engineers who have previously designed and operated production AI systems at scale.
Scope of Engagement
The consultant will review the current system architecture and provide recommendations across several key areas.
AI Platform Architecture Review
Conduct a structured audit of the platform’s AI architecture, including:
generative AI copilot design
agentic workflow orchestration
retrieval-augmented generation pipelines
knowledge retrieval systems
vector database usage
knowledge graph integration
context management and AI memory strategies
prompt and instruction architecture.
Assess whether the current design supports:
reliable AI behavior
scalable inference
controllable AI workflows
maintainable system architecture.
Generative AI & LLM Systems
Evaluate the architecture and technical strategy related to:
LLM model selection
API-based vs self-hosted model strategies
embeddings and vector search pipelines
prompt and context engineering
RAG architecture
agent orchestration frameworks
guardrails and reliability mechanisms.
Provide recommendations to improve:
model response quality
latency
cost efficiency
system reliability.
Traditional Machine Learning Systems
Review architecture related to traditional ML use cases such as:
recommendation systems
predictive analytics
forecasting models
clustering and segmentation pipelines.
Assess the architecture supporting:
training pipelines
experimentation workflows
model deployment
model lifecycle management.
Copilot Interaction & Agentic Workflows
Evaluate the design of AI-driven workflows supporting the copilot experience, including:
user-initiated interactions
event-driven AI recommendations
multi-step reasoning workflows
recommendation pipelines.
Provide guidance on improving:
intent detection
workflow orchestration
AI reasoning pipelines
reliability and safety mechanisms.
Platform Architecture & System Design
Assess the platform’s core architecture, including:
microservices architecture
event-driven system design
message-based communication patterns
API architecture
service boundaries and modularity.
Review the application of architectural patterns such as:
event-driven architecture
message-driven systems
asynchronous processing
hexagonal / ports-and-adapters architecture.
Provide recommendations for improving:
scalability
reliability
maintainability
operational efficiency.
Observability, Monitoring & Evaluation
Evaluate the platform’s ability to monitor both traditional services and AI systems.
Assess current capabilities in areas such as:
distributed tracing
system metrics and logging
operational monitoring
AI workflow traceability
prompt and model evaluation
experiment tracking.
Provide recommendations for implementing robust observability and evaluation frameworks.
Continuous Learning & Feedback Systems
Review architecture supporting long-term improvement of AI systems, including:
user feedback capture
interaction analytics
model performance evaluation
experimentation frameworks
learning pipelines.
Provide recommendations for enabling continuous learning and system improvement.
Deliverables
The consultant will deliver:
a structured architecture assessment report
identified design gaps and architectural risks
prioritized technical recommendations
suggested architecture evolution roadmap.
The consultant will present findings to the leadership and engineering teams.
Required Experience
Candidates should have substantial experience designing AI-driven software systems in production environments.
Minimum qualifications include:
12+ years of experience building distributed software systems and AI/ML platforms (this is a firm requirement; candidates with less experience will not be considered)
strong hands-on experience building Generative AI applications
deep understanding of:
Retrieval-Augmented Generation (RAG)
prompt and context engineering
embedding pipelines
vector search systems
agentic AI architectures
practical experience implementing traditional machine learning systems, including:
recommendation systems
forecasting models
predictive analytics pipelines.
Software Architecture Experience
Demonstrated experience designing modern distributed systems using:
microservices architecture
event-driven systems
message-based system communication
asynchronous processing patterns
hexagonal architecture / ports-and-adapters.
Cloud & Infrastructure
Experience building and operating systems on modern cloud platforms such as:
Google Cloud
AWS
Azure.
Experience with containerized systems and cloud-native infrastructure.
Observability & Production Systems
Strong experience operating production systems with:
distributed tracing
system monitoring and metrics
centralized logging
operational diagnostics.
Experience with AI system observability and evaluation tools is highly desirable.
Preferred Experience
Experience building AI copilots or conversational AI systems
Experience with agent orchestration frameworks
Experience with vector databases and knowledge graphs
Experience designing AI evaluation pipelines
Prior experience in EdTech platforms
Familiarity with U.S. K-12 education systems.
Engagement Model
Fractional consulting engagement (part-time).
Initial architecture review phase followed by optional advisory support.
Expected duration for the initial engagement: 1–3 months.
Ideal Candidate Profile
This role is best suited for professionals who have previously served as:
Principal Architect
AI Platform Architect
Staff / Principal Engineer
ML Platform Architect
AI Infrastructure Architect
and who have direct experience building and operating production AI systems.