Description
A Data Engineer with expertise in two of Python, Java and Scala, plus 2+ years of Big Data experience with Spark, MapReduce, Hadoop and AWS, is required by a leading global consumer goods organisation.
The successful candidate will work in a DevOps methodology, collaborating with the Product Owner and Scrum Master to understand requirements and develop new features, and will write unit tests, automated functional tests and integration tests.
Requirements:
- Strong, demonstrable skills in two of the following programming languages: Python, Java or Scala.
- Minimum of 2 years' experience working as a programmer on full-lifecycle Big Data production projects.
- Strong experience processing and analysing Big Data using Spark, MapReduce, Hadoop, Sqoop, Apache Airflow, HDFS, Hive and ZooKeeper.
- Experience with AWS services such as EMR, Lambda, S3 and DynamoDB.
- Experience with microservices and NoSQL databases.
- Experienced in unit, functional and automated testing.
- Knowledge of HTTP and JavaScript frameworks.
- Well versed in Agile methodology.
- Excellent verbal and written communication and collaboration skills, enabling effective engagement with both business and technical teams.
- Comfortable working in a fast-paced, results-oriented environment.
- Fluent English is required, as this is a multinational company.
This is an exciting long-term assignment with one of the world's leading brand names. Please send your CV to (see below) or call Adam today.