Roles & Responsibilities:
- Develop highly reliable data pipelines and data platforms with comprehensive test coverage, working with limited supervision.
- Contribute to the technical design for system enhancements.
- Apply deployment mechanisms for CI/CD pipelines and Release Management on data platforms.
- Provide technical assistance to other members of the team.
- Participate actively in peer reviews.
- Identify and drive opportunities for continuous improvement within the team and in delivery of products.
- Produce high-quality, sustainable solutions that meet business requirements, leveraging approved delivery frameworks and applying industry best practice.
- Collaborate with stakeholders to ensure that solutions are robust, secure, and highly available.
- Complete data engineering duties aligned to core concepts of data design, preparation, transformation, and load.
- Build data pipelines on distributed data platforms (warehouses, databases, data lakes, and cloud lakehouses) using data integration tools and frameworks, enabling predictive models, reporting, and visualisation.
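The preparation-transformation-load duties above follow the familiar ETL pattern. A minimal sketch in plain Python (the function names `extract`, `transform`, and `load` and the sample records are illustrative, not part of this role description; a production pipeline would use Spark or a similar framework):

```python
# Illustrative extract-transform-load sketch; all names and data are hypothetical.

def extract():
    # Stand-in for reading raw records from a source system (e.g. HDFS or S3).
    return [
        {"id": 1, "amount": "10.5"},
        {"id": 2, "amount": "7.25"},
        {"id": 2, "amount": "7.25"},  # duplicate row to be removed
    ]

def transform(records):
    # Typical "preparation" work: deduplicate by key and cast types.
    seen, out = set(), []
    for r in records:
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        out.append({"id": r["id"], "amount": float(r["amount"])})
    return out

def load(records, sink):
    # Stand-in for writing to a warehouse or lakehouse table.
    sink.extend(records)

target_table = []
load(transform(extract()), target_table)
print(target_table)  # [{'id': 1, 'amount': 10.5}, {'id': 2, 'amount': 7.25}]
```

In a Spark-based pipeline the same steps would map onto DataFrame reads, `dropDuplicates`/cast transformations, and writes to a managed table.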
Essential Skills:
- 4-5 years of relevant experience
- Hadoop, Spark, Scala/Python, Hive, SQL, AWS (cloud), HDFS.
Education Qualifications:
- Bachelor’s degree in Engineering (Computer Science/Information Technology).