Solution Architect – Data Platform
Responsibilities
- Own enterprise-level logical and physical data architecture design
- Define and govern data modelling standards aligned with enterprise principles
- Develop scalable architecture patterns for data ingestion, transformation, and serving layers
- Define and lead migration strategies from legacy platforms to modern big data environments
- Conduct architecture reviews and performance optimisation assessments
- Establish performance tuning and scalability frameworks
- Develop reusable design patterns and reference architectures
- Ensure compliance with data governance, security, and regulatory requirements
- Provide solution sizing, estimation inputs, and risk assessments
- Support production issue resolution at the architectural level (Level 3)
- Mentor senior engineers and guide architectural best practices
- Define target-state data architecture and standards
- Translate business strategy into scalable data solutions
- Work across business, engineering, and governance teams
- Identify architectural and migration risks early
- Drive adoption of standards and design patterns
- Provide oversight across multiple squads or workstreams
- Elevate engineering maturity and reduce technical debt
Requirements
Experience & Qualifications
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related discipline
- 10+ years of experience in data engineering, data architecture, or platform architecture roles
- Demonstrated experience designing enterprise-scale data platforms
- Strong experience in distributed data processing technologies
- 8+ years of hands-on experience in enterprise data modelling
Data Platform & Architecture
- Experience designing modern data platforms (Lakehouse, Data Lake, Data Warehouse, Hybrid architectures)
- Strong understanding of distributed data processing frameworks (e.g., Spark-based platforms)
- Experience with enterprise-grade relational databases (Oracle, SQL Server, PostgreSQL, etc.)
- Knowledge of both cloud and on-premises big data environments
Data Modelling
- Conceptual, Logical, and Physical modelling
- Dimensional modelling (Kimball)
- Data Vault methodologies
- Canonical data models
- Alignment with Enterprise Information Model (EIM)
- Metadata management and schema governance practices
Migration & Modernisation
- Legacy RDBMS to modern data platforms
- On-premises to cloud/hybrid transformation
- Batch to near real-time processing
- Data refactoring and technical debt remediation
- Cutover strategy, rollback planning, and risk mitigation
- Data reconciliation and validation frameworks post-migration
Performance Engineering
- Distributed compute optimisation (partitioning, indexing, clustering strategies)
- Query performance tuning and workload management
- Storage optimisation and cost-performance balancing
- Data pipeline performance monitoring and troubleshooting
- Scalability planning and capacity modelling
Governance & Engineering Practices
- Data governance frameworks (cataloguing, lineage, access control)
- CI/CD design for data pipelines
- Infrastructure as Code and automation concepts
- Data quality frameworks and observability practices
- Experience operating in Agile delivery environments