Big Data Engineer | Python | Spark | AWS | Azure | GCP | Contract | Germany | Remote

Job type:
remote
Start date:
asap
Duration:
3 months (extension possible)
Location:
Berlin
Published at:
03/01/2021
Country:
Germany
Contact person:
Michael Outar
Project ID:
2060496


Parallel Consulting are currently looking for a Big Data Engineer to join a specialist AI company on a 3-month rolling contract. This will be a fully remote working position.

Please see below for what we're looking for:

- A strong background in big data engineering using Python and Spark, for example Apache Spark with Scala, PySpark, or Databricks

- A good understanding of the AWS cloud; ideally you will have exposure to AWS services such as Lambda, SQS, SNS, IAM, Glue, Athena, or S3. We can also consider candidates who have an understanding of Azure or GCP.

What you'll be doing:

You will be writing complex ETL pipelines in Python/PySpark to generate reports from a variety of sources.
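As a rough illustration of the kind of work this involves, here is a minimal PySpark ETL sketch. The source paths, column names, and report logic below are hypothetical placeholders, not details of the actual project.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal ETL sketch: read from two hypothetical sources, join them,
# aggregate into a report, and write the result back out.
spark = SparkSession.builder.appName("report-etl").getOrCreate()

# Extract: the bucket and paths are placeholders.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")
customers = spark.read.json("s3://example-bucket/raw/customers/")

# Transform: join the sources and aggregate into a daily report.
report = (
    orders.join(customers, on="customer_id", how="inner")
          .withColumn("order_date", F.to_date("order_ts"))
          .groupBy("order_date", "country")
          .agg(F.sum("amount").alias("total_revenue"),
               F.countDistinct("customer_id").alias("unique_customers"))
)

# Load: write the report partitioned by date.
report.write.mode("overwrite").partitionBy("order_date") \
      .parquet("s3://example-bucket/reports/daily_revenue/")

A real pipeline in the role would pull from several such sources and likely run on Glue or Databricks, but the extract-transform-load shape stays the same.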

This is a chance to get stuck into a project with a very exciting tech company, one that has experienced incredible growth over the last couple of years (even throughout the pandemic), and to work with some of the latest big data technologies.

We need the right candidate to join ASAP, and we can get the interview process wrapped up within a week or two.

If you're suitable and available, please send an updated CV straight away, or alternatively give me a call and I can give you more information.
