Associate, Data Engineer

DBS

9 months ago

5 - 7 years

Work From Office

Hyderabad, Telangana, India

  • Create Scala/Spark/PySpark jobs for data transformation and aggregation.
  • Produce unit tests for Spark transformations and helper methods.
  • Perform peer code-quality reviews and act as gatekeeper for quality checks.
  • Hadoop

    Spark

    Scala

    Java

    Hive

    CI/CD

    Agile Methodologies

    DevOps

    Data Warehousing

    Cloud Platforms

    Job description & requirements

    Roles & Responsibilities


    • Create Scala/Spark/PySpark jobs for data transformation and aggregation

    • Produce unit tests for Spark transformations and helper methods

    • Use Spark and Spark SQL to read Parquet data and create tables in Hive using the Scala API

    • Work closely with the Business Analyst team to review test results and obtain sign-off

    • Prepare necessary design/operations documentation for future use

    • Perform peer code-quality reviews and act as gatekeeper for quality checks

    • Hands-on coding, usually in a pair programming environment

    • Working in highly collaborative teams and building quality code

    • The candidate must exhibit a good understanding of data structures, data manipulation, distributed processing, application development, and automation

    • Familiarity with Oracle, Spark Streaming, Kafka, and ML

    • Develop applications using the Hadoop tech stack and deliver them effectively, efficiently, on time, to specification, and in a cost-effective manner

    • Ensure smooth production deployments as per plan, with post-deployment verification

    • This Hadoop developer will play a hands-on role, developing quality applications within the desired timeframes and resolving team queries
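    The Parquet-to-Hive responsibility above can be sketched in Scala as follows. This is a minimal illustration, not the team's actual pipeline: the input path, view name, column names, and target table are hypothetical placeholders.

    ```scala
    import org.apache.spark.sql.SparkSession

    object ParquetToHive {
      def main(args: Array[String]): Unit = {
        // enableHiveSupport lets saveAsTable persist a managed Hive table.
        val spark = SparkSession.builder()
          .appName("ParquetToHive")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical input path -- substitute the real dataset location.
        val df = spark.read.parquet("/data/raw/transactions")

        // Register a temp view so the aggregation can be written in Spark SQL.
        df.createOrReplaceTempView("transactions_raw")
        val aggregated = spark.sql(
          """SELECT account_id, SUM(amount) AS total_amount
            |FROM transactions_raw
            |GROUP BY account_id""".stripMargin)

        // Persist the aggregated result as a Hive table (hypothetical name).
        aggregated.write.mode("overwrite").saveAsTable("analytics.account_totals")

        spark.stop()
      }
    }
    ```

    The same DataFrame could equally be aggregated with the typed API (`groupBy`/`agg`); the SQL form is shown because the posting pairs Spark with Spark SQL.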


    Requirements

    • Hadoop data engineer with 4 - 7 years of total experience; strong experience in Hadoop, Spark, Scala, Java, Hive, Impala, CI/CD, Git, Jenkins, Agile methodologies, DevOps, and the Cloudera Distribution is required

    • Strong knowledge of data warehousing methodology

    • 4+ years of relevant Hadoop & Spark/PySpark experience is mandatory

    • Strong grounding in enterprise data architectures and data models

    • Good experience in the Core Banking / Finance domain

    • Good to have: cloud experience on AWS

    Experience :

    5 - 7 years

    Job Domain/Function :

    Data Engineering

    Job Type :

    Work From Office

    Employment Type :

    Full Time

    Number Of Position(s) :

    1

    Educational Qualifications :

    Bachelor's Degree

    Location :

    Hyderabad, Telangana, India
