Construct and maintain data pipelines that extract, transform, and load data from diverse sources into centralized repositories such as data warehouses, lakehouses, and databases.
Implement physical data models that structure and organize data for efficient storage and analysis while ensuring data integrity.
Develop processes to transform raw data into a usable format for analysis and reporting, including data cleansing and deduplication.
Integrate data from disparate sources and ensure data quality and consistency throughout the data pipeline.
Oversee database management activities, including database design, administration, optimization, and performance enhancement.
Required Competencies:
Bachelor’s degree in Computer Science, Information Technology, Mathematics, or an equivalent field.
Minimum of 6 years of progressive, hands-on experience in Data Engineering.
At least 6 years of experience in ETL, ELT, data warehousing, and data modeling.
At least 3 years of experience delivering solutions on the Microsoft Azure platform, with exposure to data services such as Microsoft Fabric and Data Factory.
Proficiency in PySpark or Python and T-SQL scripting, along with strong technical knowledge of databases.
Expertise in designing, building, and maintaining data pipelines to extract, transform, and load (ETL) data from various sources into data storage systems such as data warehouses or lakehouses.