MS Cloud Data Architect

Rhineland-Palatinate, Ludwigshafen am Rhein  ‐ Onsite
This project has been archived and is not accepting more applications.

Keywords

Azure Cloud Architect

Description

Required skills:


• More than 12 years of overall IT industry experience, including 2+ years of experience with cloud-based solutions.
• More than 5 years of experience with Big Data and Spark applications.
• Good understanding of architecture development and management practices, and exposure to Enterprise Architecture frameworks (e.g. TOGAF)
• Well versed in cloud architecture, design and integration patterns, application migration methodologies, and implementation approaches
• Strong understanding of multi-tenancy implementation techniques across the various cloud service models (e.g. BPaaS, SaaS, PaaS, IaaS)
• Good knowledge of data lakes, staging areas, 3NF, data labs and data marts, including star and snowflake schemas
• Experience migrating and integrating different types of data into a data lake within on-premises or cloud ecosystems
• Experience with Big Data tools and technologies such as Spark, Kafka, Flume, Sqoop, Hive, HDFS, MapReduce, HBase, etc., as well as the MS Azure services commonly used to build data platforms, such as Azure AD, Data Catalog, streaming services, Data Factory, etc.
• Proficient in distributed computing principles and familiar with key architectures, including Lambda and Kappa architectures, with broad experience across data stores (e.g. HDFS, Azure Data Lake Store, Azure Blob Storage, Azure SQL Data Warehouse, Apache HBase, Azure DocumentDB), messaging systems (e.g. Apache Kafka, Azure Event Hubs, Azure IoT Hub) and data processing engines (e.g. Apache Hadoop, Apache Spark, Azure Data Lake Analytics, Apache Storm, Azure HDInsight, Azure Databricks).

Responsibility:

• The Azure Data Architect is responsible for helping to design, deploy, manage and support the systems and infrastructure required for a data processing pipeline in support of a product's requirements.
• Primary responsibilities revolve around DevOps and include implementing ETL (extract, transform, load) pipelines and monitoring and maintaining data pipeline performance.
Start date
2020
From
Adroit People Ltd
Published at
01.04.2020
Contact person:
Roshini Agarwal
Project ID:
1916360
Contract type
Freelance