What you’ll do:
- Work with business stakeholders to understand their needs.
- Create data pipelines that extract, transform, and load (ETL) data from various sources into a usable format in a data warehouse.
- Clean, filter, and validate data to ensure it meets quality and format standards.
- Develop data model objects (tables, views) to transform the data into a unified format for downstream consumption.
- Monitor, control, configure, and maintain processes in the cloud data platform.
- Optimize data pipelines and data storage for performance and efficiency.
- Participate in code reviews and provide meaningful feedback to other team members.
- Provide technical support and troubleshoot issues.
What you’ll bring:
- Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
- Experience working in the AWS cloud platform.
- Data engineering expertise in developing big data and data warehouse platforms.
- Experience working with structured and semi-structured data.
- Expertise in developing big data solutions, ETL/ELT pipelines for data ingestion, data transformation, and optimization techniques.
- Experience working directly with technical and business teams.
- Ability to create technical documentation.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- AWS big data services: S3, Glue, Athena, EMR.
- Programming: Python, Spark, SQL, MuleSoft, Talend, dbt.
- Data warehousing: ETL, Redshift/Snowflake.
Additional Skills:
- Experience in data modeling.
- AWS certification in data engineering.
- Experience with ITSM processes/tools such as ServiceNow and Jira.
- Understanding of Spark, Hive, Kafka, Kinesis, Spark Streaming, and Airflow.