Data Engineering Lead

Optum

9 months ago

5 - 7 years

Work From Office

Noida, Uttar Pradesh, India

    Key Skills: Python, PySpark, Cloud Platforms, Microsoft Azure Machine Learning, DevOps, Big Data, Apache Spark, Azure Cloud Platform, ETL framework, Data pipelines, Azure Data Factory, Azure Databricks, SQL

    Job description & requirements

    Primary Responsibilities:


    • Participate in scrum process and deliver stories/features according to the schedule
    • Collaborate with team, architects and product stakeholders to understand the scope and design of a deliverable
    • Participate in product support activities as needed by the team
    • Understand the product architecture and the features being built, and come up with product improvement ideas and POCs
    • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so


    Required Qualifications:

    • Undergraduate degree or equivalent experience
    • Tools/Technologies:
    • Programming Languages: Python, PySpark
    • Cloud Technologies: Azure (ADF, Databricks, Web App, Key Vault, SQL Server, Function App, Logic App, Synapse, Azure Machine Learning, DevOps)
    • Experience:
    • DevOps and implementation of Big Data, Apache Spark, and Azure Cloud solutions
    • Large-scale data processing using PySpark on the Azure ecosystem
    • Implementation of a self-service analytics platform ETL framework using PySpark on Azure
    • Deep experience in data analysis, including source data analysis, data profiling, and mapping
    • Good experience in building data pipelines using ADF/Azure Databricks
    • Proven hands-on experience with a large-scale data warehouse
    • Hands-on data migration experience from legacy systems to new solutions, such as from on-premises clusters to the cloud
    • Expert skills in Azure data processing tools (Azure Data Factory, Azure Databricks)
    • Ability to learn and adapt to new data technologies
    • Solid proficiency in SQL and complex queries
    • Good problem-solving skills
    • Good communication skills
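
    To make the pipeline-related qualifications concrete: the bullets above describe a classic extract-transform-load loop (read from a source, profile and clean the data, load an aggregated result). A minimal sketch in plain Python follows; the record fields and stage names are purely illustrative, not from this posting, and in the role itself the same shape would typically run as a PySpark job orchestrated by ADF or Databricks rather than in-process.

    ```python
    # Minimal ETL sketch: extract raw records, profile/clean them, load an aggregate.
    from typing import Iterable

    def extract() -> list[dict]:
        # Stand-in for reading from a legacy source (e.g. an on-prem cluster export).
        return [
            {"member_id": "A1", "claim_amount": "120.50"},
            {"member_id": "A2", "claim_amount": None},   # missing value to be profiled out
            {"member_id": "A1", "claim_amount": "80.00"},
        ]

    def transform(rows: Iterable[dict]) -> list[dict]:
        # Profiling/cleaning step: drop rows with missing amounts, cast strings to floats.
        return [
            {"member_id": r["member_id"], "claim_amount": float(r["claim_amount"])}
            for r in rows
            if r["claim_amount"] is not None
        ]

    def load(rows: list[dict]) -> dict[str, float]:
        # Aggregate per member, as a warehouse fact table might.
        totals: dict[str, float] = {}
        for r in rows:
            totals[r["member_id"]] = totals.get(r["member_id"], 0.0) + r["claim_amount"]
        return totals

    if __name__ == "__main__":
        print(load(transform(extract())))
    ```

    The same three stages map one-to-one onto a PySpark job: `extract` becomes `spark.read`, `transform` becomes DataFrame filters and casts, and `load` becomes a `groupBy` write to the warehouse.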


    Preferred Qualifications:

    • Knowledge of the US healthcare industry/pharmacy data
    • Knowledge of/experience with Azure Synapse and Power BI

    Experience :

    5 - 7 years

    Job Domain/Function :

    Data Engineering

    Job Type :

    Work From Office

    Employment Type :

    Full Time

    Number Of Position(s) :

    1

    Educational Qualifications :

    Bachelor's Degree

    Location :

    Noida, Uttar Pradesh, India
