
Senior Data Engineer - Databricks

Tredence

8 months ago

5 - 7 years

Work From Office

Bengaluru, Karnataka, India

  • SQL
  • PySpark
  • Python
  • ETL
  • Unix Shell Scripting
  • Performance Tuning
  • Azure Databricks
  • Azure Data Factory
  • Azure Data Lake

    Job description & requirements


    This position requires strong problem-solving skills, business understanding, and client presence.

    The candidate's overall professional experience should be at least 5 years, with a maximum of 15 years.


    The candidate must understand the use of data engineering tools for solving business problems and help clients in their data journey. They must have knowledge of emerging technologies used in companies for data management, including data governance, data quality, security, data integration, processing, and provisioning. The candidate must also possess the soft skills needed to work with teams and to lead medium-to-large teams.

    The candidate should be comfortable taking leadership roles in client projects, pre-sales/consulting, solutioning, business development conversations, and execution of data engineering projects.


    Role Description:


    ● Developing Modern Data Warehouse solutions using Databricks and Azure Stack

    ● Ability to provide solutions that are forward-thinking in data engineering and analytics space

    ● Collaborate with DW/BI leads to understand new ETL pipeline development requirements.

    ● Triage issues to find gaps in existing pipelines and fix the issues

    ● Work with business to understand the need in reporting layer and develop data model to fulfill reporting needs

    ● Drive technical discussion with client architect and team members

    ● Orchestrate the data pipelines in the scheduler via Airflow

    Skills and Qualifications:

    ● Bachelor's and/or master’s degree in computer science or equivalent experience.

    ● Good understanding of Databricks Data & AI platform and Databricks Delta Lake Architecture

    ● Should have hands-on experience in SQL, Python and Spark (PySpark)

    ● Experience in building ETL / data warehouse transformation processes

    ● Proficiency in RDBMS, complex SQL, PL/SQL, Unix Shell Scripting, performance tuning, and troubleshooting

    ● Databricks Certified Data Engineer Associate/Professional Certification (Desirable).

    ● Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects

    ● Should have experience working in Agile methodology

    ● Strong verbal and written communication skills.

    ● Strong analytical and problem-solving skills with a high attention to detail.
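    As a minimal illustration of the ETL / data-warehouse transformation pattern the qualifications above call for, the sketch below uses Python's stdlib sqlite3 as a stand-in warehouse (on the job, the same extract-aggregate-load logic would typically run in PySpark/SQL on Databricks; all table and column names here are invented for the example):

    ```python
    # Minimal ETL sketch: extract rows from a raw source table, apply an
    # aggregating transformation, and load the result into a reporting table.
    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Extract: a toy "raw orders" source (hypothetical schema).
    cur.execute("CREATE TABLE raw_orders (region TEXT, amount REAL)")
    cur.executemany(
        "INSERT INTO raw_orders VALUES (?, ?)",
        [("south", 100.0), ("south", 50.0), ("north", 75.0)],
    )

    # Transform + Load: aggregate into a reporting-layer table.
    cur.execute("""
        CREATE TABLE report_sales AS
        SELECT region, SUM(amount) AS total
        FROM raw_orders
        GROUP BY region
    """)

    rows = dict(cur.execute("SELECT region, total FROM report_sales"))
    print(sorted(rows.items()))  # [('north', 75.0), ('south', 150.0)]
    ```

    In PySpark the transform step would be the analogous `groupBy("region").agg(sum("amount"))` over a DataFrame, with the load writing to a Delta table rather than a SQLite table.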


    Mandatory Skills

    Azure Databricks, PySpark, Azure Data Factory, Azure Data Lake.


    Job Location - Bangalore, Chennai, Gurgaon, Pune, Kolkata


    Skills


    Azure Databricks,

    PySpark,

    Azure Data Factory


    Experience :

    5 - 7 years

    Job Domain/Function :

    Data Engineering

    Job Type :

    Work From Office

    Employment Type :

    Full Time

    Number Of Position(s) :

    1

    Educational Qualifications :

    Bachelor's Degree

    Location :

    Bengaluru, Karnataka, India
