Senior Azure Data Engineer
Job Overview
- Date Posted: November 1, 2025
- Expiration date: December 1, 2025
- Experience: 2-5 Years
- Gender: Both
Job Description
Job Summary
We are seeking an experienced Senior Azure Data Engineer to join our data and analytics team. In this role, you will be a technical leader responsible for designing, developing, and maintaining our enterprise-scale data platform on Microsoft Azure. You will leverage your deep expertise in Azure Databricks, the modern data stack, and data warehousing principles to build robust, scalable, and high-performance data pipelines.
A key part of this role will be managing data integration and migration projects, specifically involving Oracle databases. The ideal candidate is a hands-on engineer with a strong architectural mindset, proficient in PySpark and SQL, and passionate about building a best-in-class Lakehouse platform.
Key Responsibilities
Data Architecture & Design: Architect and implement end-to-end data solutions in Azure, including data ingestion, transformation, and storage.
Pipeline Development: Design, build, and optimize scalable ETL/ELT data pipelines using Azure Databricks (PySpark, Spark SQL) and Azure Data Factory (ADF).
Lakehouse Implementation: Lead the development and governance of our Delta Lakehouse, implementing a Medallion Architecture (Bronze, Silver, Gold layers) to ensure data quality and reliability.
Oracle Integration: Develop and maintain data ingestion pipelines to efficiently and reliably extract data from on-premises or cloud-based Oracle databases (using PL/SQL, GoldenGate, ADF connectors, or other methods).
Data Modeling: Create and manage data models within the Lakehouse and/or Azure Synapse Analytics to support BI, reporting, and advanced analytics use cases.
Performance Tuning: Monitor, troubleshoot, and optimize Databricks clusters, Spark jobs, and SQL queries for performance and cost-efficiency.
Governance & Security: Implement data governance, data quality, and security best practices within the Azure platform, utilizing tools like Unity Catalog for fine-grained access control and lineage.
DevOps & CI/CD: Implement and manage CI/CD pipelines using Azure DevOps (or GitHub Actions) for automated testing and deployment of data pipelines and infrastructure (IaC).
Mentorship: Provide technical guidance and mentorship to junior data engineers, conduct code reviews, and establish team-wide best practices.
Stakeholder Collaboration: Work closely with data scientists, BI developers, and business stakeholders to understand requirements and deliver data solutions that meet their needs.
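To illustrate the Medallion Architecture mentioned above (Bronze raw, Silver cleansed, Gold curated), here is a minimal, framework-free sketch. In a real Databricks Lakehouse each layer would be a Delta table transformed with PySpark; plain Python dicts stand in for rows here, and the schema (order_id, region, amount) is purely hypothetical:

```python
# Framework-free sketch of the Bronze -> Silver -> Gold flow.
# In production, each layer would be a Delta table processed by Spark jobs.

def to_silver(bronze_rows):
    """Silver layer: cleanse raw Bronze records (drop incomplete rows,
    normalize types and casing)."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # discard records that fail basic quality checks
        silver.append({
            "order_id": str(row["order_id"]),
            "region": (row.get("region") or "unknown").lower(),
            "amount": float(row["amount"]),
        })
    return silver

def to_gold(silver_rows):
    """Gold layer: aggregate cleansed data for BI consumption."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

# Bronze layer: raw ingested records, landed as-is (bad rows included).
bronze = [
    {"order_id": 1, "region": "EMEA", "amount": "120.50"},
    {"order_id": 2, "region": None, "amount": "40"},
    {"order_id": None, "region": "APAC", "amount": "99"},  # fails quality check
]

gold = to_gold(to_silver(bronze))
print(gold)  # {'emea': 120.5, 'unknown': 40.0}
```

The key design point the layers encode: raw data is preserved untouched in Bronze so quality rules in Silver can be re-run or revised later, and Gold exposes only curated aggregates to downstream BI tools.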
Required Qualifications & Skills
Azure Core Skills:
Azure Databricks: Expert-level proficiency in developing with notebooks, jobs, and clusters.
PySpark & SQL: Advanced skills in both for large-scale data transformation.
Azure Data Factory (ADF): Strong experience building and orchestrating complex pipelines.
Azure Storage: Deep knowledge of Azure Data Lake Storage (ADLS Gen2) and Delta Lake.