Description
What will you do, and what do we ask of you?
- Build and deploy big data pipelines in the Azure cloud, keeping storage and compute separated and using cloud-native components.
- Use your solid knowledge of data frames to code ETL jobs in Python or PySpark; both automating and prototyping your solutions is expected.
- Operate within a complex web of data sources and advise on and execute data flows.
- Specialize in data ingestion, building robust automated ingestion jobs that pull data from the data lake and from other sources inside and outside the business.
- Arrange data pipelines that connect systems and data environments on-premises and in the cloud.
- Apply your good sense for security to keep the data we work with protected, anonymizing or pseudonymizing datasets to safeguard sensitive information.
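As a minimal illustration of the pseudonymization duty above, the sketch below replaces an identifier column with a keyed, irreversible token using only the Python standard library. The field names, rows, and `SECRET_KEY` are hypothetical examples, not part of the role description; in practice the key would live in a managed secret store such as Azure Key Vault.

```python
import hmac
import hashlib

# Hypothetical key for illustration only; a real deployment would load
# this from a secret store (e.g. Azure Key Vault), never hard-code it.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed, irreversible token.

    HMAC-SHA256 keeps the mapping consistent across datasets (the same
    input always yields the same token) without exposing the raw value.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def pseudonymize_rows(rows, sensitive_fields):
    """Return copies of the rows with the sensitive fields pseudonymized."""
    return [
        {k: pseudonymize(v) if k in sensitive_fields else v for k, v in row.items()}
        for row in rows
    ]

# Hypothetical sample data: only customer_id is treated as sensitive.
rows = [{"customer_id": "C-1001", "amount": "42.50"}]
out = pseudonymize_rows(rows, {"customer_id"})
```

Because the token is deterministic for a given key, joins across pseudonymized datasets still work, while the original identifiers stay protected.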