Cloud Architect, Google Cloud Platform, GDPR, Spark, Hadoop

City of Newport - Onsite

Description

Cloud Architect, Google Cloud Platform, Cloud, GDPR, Spark, Hadoop, Data Engineer, Agile, 3 months, Outside IR35, £550 per day.

This will be remote work during COVID-19.

Job Description

At the core of this ambitious vision is the recognition that our client's data teams and operations will need to scale up. We anticipate leveraging the power of the Google Cloud Platform (GCP) as the foundation of our architecture and are looking for an experienced Google Cloud Architect/Data Engineer to help us on our client's journey.

You will:

- Work with the most innovative and most scalable data processing and cloud technologies.

- Build innovative state-of-the-art solutions with our team, stakeholders and customers.

- Work in an agile and dynamic environment together with a team of business analysts, data architects, data engineers, and both senior and junior software engineers.

- Be a leader and mentor to engineering team members across both software and data disciplines.

- Play a key role in shaping the engineering team.

To be considered, the successful GCP Cloud Architect must possess, in addition to significant cloud-based experience:

- BSc or MSc degree in Computer Science or a related technical field.

- Suitable experience building scalable cloud data architectures.

- Ability to develop and maintain relationships with key external stakeholders at various business levels.

- Expertise in cloud technologies and their design/usage across all areas, ie data, security, networking, etc.

- Experience building scalable, high-performance code.

- Ideally GCP Certified.

TYPICAL ROLE RESPONSIBILITIES


- Designing data processing systems that enable our reproducible analytical pipelines to operate at the required flexible scale, trading off latency, storage and throughput considerations. Our solution is likely to include both batch and streaming data (Cloud Dataflow, Cloud Dataproc, Apache Beam, Cloud Pub/Sub, Apache Kafka, etc.), while Apache Spark and the Hadoop ecosystem may be an option for dealing with large-volume data such as the Census (a minimal Beam pipeline sketch follows this list). Job automation and orchestration tools such as Cloud Composer are also necessary concerns.

- Designing storage systems while considering the effective use of managed services (eg Cloud Bigtable, Cloud Spanner, Cloud SQL, BigQuery, Cloud Storage, Cloud Datastore, Cloud Memorystore) and balancing strategic decisions against storage costs and performance (see the BigQuery partitioning sketch after this list).

- Designing for security and compliance to meet the requirements of our platform for identity and access management (eg Cloud IAM), data security (encryption, key management), privacy (eg the Data Loss Prevention API) and legal compliance (eg the General Data Protection Regulation (GDPR)); see the DLP sketch after this list.

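As a rough illustration of the batch/streaming design work described above, below is a minimal Apache Beam (Python) sketch that reads messages from Cloud Pub/Sub, counts them in fixed one-minute windows and appends the counts to BigQuery. The project, topic and table names are placeholders, and the choice of runner and schema would depend on the actual platform.

    import apache_beam as beam
    from apache_beam import window
    from apache_beam.options.pipeline_options import PipelineOptions

    # Placeholder resource names -- not real project resources.
    TOPIC = "projects/example-project/topics/example-events"
    TABLE = "example-project:analytics.event_counts"

    def run():
        # streaming=True for Pub/Sub; on GCP this would typically use the Dataflow runner.
        opts = PipelineOptions(streaming=True)
        with beam.Pipeline(options=opts) as p:
            (p
             | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
             | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
             | "FixedWindows" >> beam.WindowInto(window.FixedWindows(60))
             | "CountPerWindow" >> beam.CombineGlobally(
                   beam.combiners.CountCombineFn()).without_defaults()
             | "ToRow" >> beam.Map(lambda n: {"event_count": n})
             | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                   TABLE,
                   schema="event_count:INTEGER",
                   write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))

    if __name__ == "__main__":
        run()
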
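On the storage side, one common way to trade storage cost against query performance in BigQuery is day-partitioning plus clustering. A small sketch using the google-cloud-bigquery client library, with a hypothetical dataset, table and schema:

    from google.cloud import bigquery

    client = bigquery.Client()  # assumes application-default credentials

    # Hypothetical table identifier and schema for illustration only.
    table_id = "example-project.analytics.events"
    schema = [
        bigquery.SchemaField("event_id", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
        bigquery.SchemaField("payload", "STRING"),
    ]

    table = bigquery.Table(table_id, schema=schema)
    # Partition by day and cluster by event_id to keep scan costs predictable.
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY, field="event_ts"
    )
    table.clustering_fields = ["event_id"]

    table = client.create_table(table)
    print("Created", table.full_table_id)
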
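For the privacy and GDPR side, the Data Loss Prevention API can de-identify personal data before it reaches analytical storage. A minimal sketch, assuming a hypothetical project ID and sample text:

    from google.cloud import dlp_v2

    project = "example-project"  # placeholder project ID
    text = "Contact jane.doe@example.com or call 07700 900123."

    client = dlp_v2.DlpServiceClient()

    response = client.deidentify_content(
        request={
            "parent": f"projects/{project}",
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}]
            },
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        # Replace each finding with its infoType name, eg [EMAIL_ADDRESS].
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": text},
        }
    )
    print(response.item.value)
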
Start date: ASAP
Duration: Initially 3 months
From: SmartSourcing Ltd
Published at: 15.01.2021
Project ID: 2029065
Contract type: Freelance