AWS Data Devops Engineer - AWS - Data Lakes - Devops

GB - Onsite
This project has been archived and is not accepting more applications.

Description

An incredible opportunity to work for a leading company at the forefront of the financial services industry.

This company requires a data lake manager with extensive experience in building and scaling services around its data lake programme and rolling it out group-wide. You will be key to enabling thousands of data consumers to manage their data autonomously.

There will be opportunities to experiment with new technologies in order to further innovate the data handling process and the underlying big data platform.

As a Cloud DevOps engineer, you are the bridge between the latest scalable big data technologies and the data consumers who want to orchestrate their data with them. You will be in charge of building a next-level self-service data lake that serves 50,000 data consumers who want to manage their data autonomously. Experimenting with new technologies is a key aspect of further innovating the data handling process.
What you can expect:

  • Innovate and implement the cloud Data Lake platform infrastructure as the foundation for data engineers to work with their data
  • Optimize existing infrastructure and data transformation pipelines for greater scalability and self-service capability
  • Optimize the data provisioning workflow to allow real-time replication of data from on-premises into the Data Lake (AWS)
  • Implement new features such as provider or consumer connections as IaC (e.g. a Power BI connection to AWS)
  • Implement cost optimizations as part of the platform
  • Implement test cases as part of development process
  • Support solution architects to define the software architecture
  • Take end-to-end responsibility for changes throughout the delivery pipeline

What you bring to the table:

  • 5+ years professional experience in developing and operating software solutions
  • 2+ years professional experience in building scalable big data solutions in the area of cloud or Hadoop
  • 2+ years' experience with AWS and hands-on experience building Infrastructure as Code and services on top of it
  • Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency and workload management
  • Professional experience in designing and developing data pipelines in Python/Spark
  • Practical experience with Terraform, Airflow, Databricks and Delta Lake is preferred
  • Proactivity, curiosity, responsibility, ideas and confidence
  • Structured working approach and problem-solving skills
  • Fluent English; German or another CEE language is appreciated, but not mandatory
Start date
n.a.
From
Empiric Solutions
Published at
14.10.2021
Project ID:
2228779
Contract type
Freelance