What You’ll Do
- Lead and grow a high-performing engineering team focused on batch and streaming data pipelines using technologies like Spark, Trino, Flink, and dbt
- Define and drive the vision for intuitive, scalable metrics frameworks and a robust semantic signal layer
- Partner closely with product, analytics, and engineering stakeholders to align schemas, models, and data usage patterns across the org
- Set engineering direction and best practices for building reliable, observable, and testable data systems
- Mentor and guide engineers in both technical execution and career development
- Contribute to long-term strategy around data governance, AI-readiness, and intelligent system design
- Serve as a thought leader and connector across domains to ensure data products deliver clear, trusted value
What We’re Looking For
- 10+ years of experience in data engineering or backend systems, including at least two years in technical leadership or management roles
- Strong hands-on technical background, with deep experience in big data frameworks (e.g., Spark, Trino/Presto, dbt)
- Familiarity with streaming technologies such as Flink or Kafka
- Solid understanding of semantic layers, data modeling, and metrics systems
- Proven success leading teams that build data products or platforms at scale
- Experience with cloud infrastructure (especially AWS — S3, EMR, ECS, IAM)
- Exposure to modern metadata platforms, Snowflake, or knowledge graphs is a plus
- Excellent communication and stakeholder management skills
- A strategic, pragmatic thinker comfortable making high-impact decisions amid complexity