Your Role
- Develop and maintain data pipelines tailored to Azure environments, ensuring security and compliance with client data standards.
- Collaborate with cross-functional teams to gather data requirements, translate them into technical specifications, and develop data models.
- Leverage Python libraries for data handling to improve processing efficiency and robustness.
- Ensure SQL workflows meet client performance standards and handle large data volumes effectively.
- Build and maintain reliable ETL pipelines that support full and incremental loads while ensuring data integrity and scalability.
- Implement CI/CD pipelines for automated deployment and testing of data solutions.
- Monitor, troubleshoot, and tune data workflows and processes to ensure high performance and reliability.
- Document data infrastructure and workflows, and stay current with industry developments in data engineering and cloud technologies.
Your Profile
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- 4+ years of data engineering experience with a strong focus on Azure data services for client-centric solutions.
- Extensive expertise in Azure Synapse, Data Lake Storage, Data Factory, Databricks, and Blob Storage, ensuring secure, compliant data handling for clients.
- Strong interpersonal and communication skills.
- Skilled in designing and maintaining scalable data pipelines tailored to client needs in Azure environments.
- Proficient in SQL and PL/SQL for complex data processing and client-specific analytics.