Azure Data Engineer

Contract-to-Hire
Remote
Posted 7 months ago

Position: Azure Data Engineer

Experience: 7+ years

Location: 100% Remote (must be available to work EST hours)
Must Have: Azure

The Azure Data Engineer will be responsible for:

  • Develop data pipelines utilizing Azure services such as Azure Data Lake, Data Factory, Databricks, Synapse, and SQL Server.
  • Develop data transformations using ADF functionality and Databricks Python processing.
  • Work with external on-prem partners to bring data into cloud environments.
  • Design and maintain data flows and schemas.
  • Work with the Data Architect to translate functional specifications into technical specifications.
  • Partner with data analysts, product owners, and data scientists to understand requirements, refine solution designs, identify bottlenecks, and drive resolutions.
  • Support and enhance data pipelines and ETL processes built on heterogeneous sources.

Additional Responsibilities:

  • Work with other internal technical personnel to troubleshoot issues and propose solutions.
  • Support compliance with data stewardship standards and data security procedures.
  • Apply proven communication and problem-solving skills to resolve support issues as they arise.

Required Skills

  • 7+ years’ overall experience as a Data Engineer designing and developing big data pipelines using the Hadoop ecosystem and/or cloud platforms.
  • 4+ years’ experience with the Azure ecosystem is a must: Azure Data Lake, Data Factory, Databricks, Azure Functions, Azure SQL Data Warehouse.
  • Experience working with Azure Synapse.
  • Experience with Microsoft SSIS and developing SSIS packages to extract data from on-prem sources such as SAP.
  • Experience with Databricks to develop Python processing modules and integrate them with ADF pipelines.
  • Knowledge of design strategies for building scalable, resilient, always-on data lakes.
  • Programming: Python, Spark, or Java; Python highly preferred.
  • Query languages: SQL, Hive, Impala, Drill, etc.; SQL highly preferred.
  • Ability to transform data using data mapping and data processing capabilities such as Python, SQL, and Spark SQL.
  • Strong development and automation skills; must be very comfortable reading and writing Python, Spark, or Java code.
  • Ability to expand and grow data platform capabilities to solve new data problems and challenges.
  • Ability to adapt conventional big data frameworks and tools to the use cases required by the project.
  • Experience in Agile (Scrum) development methodology.
  • Excellent interpersonal and teamwork skills.
  • Ability to work in a fast-paced environment and manage multiple simultaneous priorities.
  • Can-do attitude toward problem solving, quality, and execution.
  • Master’s or Bachelor’s degree in Computer Science or Information Technology is desired.

Nice to have:

  • Experience with Theobald connector to extract data.

Job Features

Job Category: Information Technology

Apply Online
