Description
We are currently looking for support for our end customer:
Start: 01.08.2019
End: 31.01.2020
Location: Munich
Onsite: 768 hours
Offsite: 192 hours
Language requirement: English & German
Tasks:
Develop data models for structured and unstructured data with SQL databases, NoSQL, and Parquet
Develop batch data pipelines for collecting, cleaning, and archiving big data with Azure Data Factory, Spark, Python, and Databricks (a minimal sketch follows this list)
Fetch data from various sources, including web services, shared drives, and database connections
Deploy the solution to a production environment in the Microsoft Azure cloud
Support the definition of the solution architecture together with various business units
Support the API implementation for an energy analytics solution
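Purely for illustration, a minimal PySpark sketch of the kind of batch step described above: collect raw data, clean it, and archive it as Parquet. All paths, column names, and the app name are hypothetical assumptions, not details from this posting.

# Illustrative sketch only; paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("energy-batch-pipeline").getOrCreate()

# Collect: read raw time series readings from a (hypothetical) landing zone.
raw = spark.read.json("/mnt/landing/meter_readings/")

# Clean: drop duplicates and rows missing a timestamp or value,
# and cast the timestamp column to a proper timestamp type.
cleaned = (
    raw.dropDuplicates(["meter_id", "reading_time"])
       .dropna(subset=["reading_time", "value"])
       .withColumn("reading_time", F.to_timestamp("reading_time"))
)

# Archive: write the cleaned data as Parquet, partitioned by date
# so downstream jobs can prune partitions efficiently.
(cleaned
    .withColumn("reading_date", F.to_date("reading_time"))
    .write.mode("append")
    .partitionBy("reading_date")
    .parquet("/mnt/archive/meter_readings/"))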
Soft skills:
Willingness to take responsibility
Analytical thinking
Enjoy collaborating internationally and across cultural boundaries
Technical skills:
Degree in computer science or a related technical discipline
Experience in modelling and processing time series data
Experience in data pipeline development with Azure Data Factory and Databricks
Experience in Python development, preferably with PySpark
Experience in modelling data in RDBMS, NoSQL, and Parquet
Experience in web service development with REST and SOAP
Experience in deploying API services in a cloud environment, preferably Azure
Experience in setting up logging and monitoring for the data services
We look forward to receiving your application!