Data Engineer (Python/Java/Scala/Spark/MapReduce/Hadoop/AWS) - Global

North Holland - Onsite

Description

A Data Engineer with expertise in two of Python, Java and Scala, plus 2+ years of Big Data experience with Spark, MapReduce, Hadoop and AWS, is required by a leading global consumer goods organisation.

The successful candidate will work within a DevOps methodology, collaborating with the Product Owner and Scrum Master to understand requirements and develop new features, as well as writing unit tests, automated functional tests and integration tests.

Requirements:

  • Strong demonstrable skills in two of the following programming languages - Python, Java or Scala.
  • Minimum 2 years' experience working on full life cycle Big Data production projects as a programmer.
  • Strong experience in processing and analysing Big Data using Spark, MapReduce, Hadoop, Sqoop, Apache Airflow, HDFS, Hive and ZooKeeper (an illustrative sketch follows this list).
  • Experience with AWS services such as EMR, Lambda, S3 and DynamoDB.
  • Experience with microservices and NoSQL databases.
  • Experienced in unit, functional and automated testing.
  • Knowledge of HTTP and JavaScript frameworks.
  • Well versed in Agile methodology.
  • Excellent verbal and written communication and collaboration skills to effectively communicate with both business and technical teams.
  • Comfortable working in a fast-paced, results-oriented environment.
  • Fluent English is required, as this is a multinational company.
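
For illustration only (this is not part of the client's requirements), below is a minimal PySpark sketch of the kind of batch processing the role typically involves: reading raw data from S3, aggregating it with Spark, and writing partitioned output back to S3 for downstream use on EMR or Hive. The bucket names, paths, job name and column names are hypothetical placeholders.

    # Minimal PySpark sketch: read raw events from S3, aggregate, write back to S3.
    # Bucket names, paths and columns are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("daily-sales-aggregation")  # hypothetical job name
        .getOrCreate()
    )

    # Read raw JSON events from S3 (e.g. landed by an Airflow ingestion task).
    events = spark.read.json("s3://example-raw-bucket/events/2018-06-01/")

    # Aggregate units sold per product per country.
    daily_sales = (
        events
        .filter(F.col("event_type") == "purchase")
        .groupBy("product_id", "country")
        .agg(F.sum("quantity").alias("units_sold"))
    )

    # Write partitioned Parquet back to S3 for downstream Hive/EMR consumption.
    daily_sales.write.mode("overwrite").partitionBy("country").parquet(
        "s3://example-curated-bucket/daily_sales/2018-06-01/"
    )

    spark.stop()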

This is an exciting long-term assignment with one of the world's leading brand names. Please send your CV to (see below) or call Adam today.

Start date: June 2018
Duration: 6 months+ (extension possible)
From: Next Ventures Ltd
Published at: 16.05.2018
Project ID: 1555372
Contract type: Freelance