Accountabilities:
Technical & AI Leadership
- Lead and mentor a cross-functional team of data and AI engineers to deliver scalable, AI-ready data products and pipelines.
- Define and enforce standard processes for data engineering, data pipeline orchestration, and ELT/ETL development lifecycle management.
- Guide the development of solutions that integrate data engineering with machine learning, foundation models, and semantic enrichment.
AI-Driven Data Engineering
- Architect and develop data pipelines using tools such as dbt, Apache Airflow, and Snowflake, optimized to support both analytics and AI/ML workloads.
- Design infrastructures that facilitate automated feature engineering, metadata tracking, and real-time model inference.
- Enable large-scale data ingestion, preparation, and transformation to support AI use cases such as forecasting, natural language querying and processing (NLQ/NLP), and intelligent automation.
Governance and Metadata Management
- Champion data governance and compliance practices that ensure trust, transparency, and explainability in AI outputs.
- Manage and scale enterprise metadata frameworks using tools like Collibra, aligning with FAIR data principles and AI ethics guidelines.
- Establish traceability across data lineage, model lineage, and business outcomes.
Stakeholder Engagement
- Act as a trusted technical advisor to business leaders across enabling functions (e.g., Finance, M&A, GBS), helping translate strategic goals into AI-driven data solutions.
- Lead delivery across multiple workstreams, ensuring measurable KPIs and adoption of both data and AI capabilities.
Essential Skills/Experience:
- 12+ years of hands-on experience in data engineering and AI-enabling infrastructure, with expertise in dbt, Apache Airflow, Snowflake, PostgreSQL, and Amazon Redshift.
- 2+ years working with or supporting AI/ML teams in building production-ready pipelines and infrastructure.
- Strong communication skills with a demonstrated ability to influence both technical and non-technical collaborators.
- Experience in implementing data products by applying data mesh principles.
- Experience working across enabling business units such as Finance, HR, and M&A.
Academic Qualifications:
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field, with relevant industry experience.
Desirable Skills/Experience:
- Proficiency in Python, particularly with libraries such as pandas, NumPy, and scikit-learn for data and ML workflows.
- Exposure to ML lifecycle tools such as SageMaker, MLflow, Azure ML, or Databricks.
- Exposure to foundation models (e.g., LLMs), vector databases, and retrieval-augmented generation (RAG) methodologies.
- Knowledge of data cataloguing tools such as Collibra, semantic data models, ontologies, and business glossary tools.