Data Engineer - Hadoop.

London - Onsite

Description

Data Engineer required: Hadoop.

Big Data Engineer required for a leading e-commerce company based in Covent Garden. This is an exciting Hadoop role on a Big Data team: a 12-month contract running throughout 2017, in a prime location, offering longevity and stability.

We are looking for a skilled and experienced Software Engineer who will own solutions in a disruptive technology environment with agile engineering practices.

The Big Data Hadoop Engineer will work on the development of new and existing data pipelines, turning requirements into a finished technical solution and cooperating effectively with colleagues in both technical and business-facing roles.

You will get to work with massive data sets, with opportunities to learn and apply the latest big data technologies on a fast-growing platform. You will also work with data scientists to implement advanced machine learning algorithms and run them at scale.

This role will help shape how the organisation uses technology to exploit its data, joining it with external data to increase its value, generate unique insight, and deliver the best personalised loyalty scheme experience for our partners and customers.

You will be accountable for the design and build of the next generation of technology, allowing our customers to better understand current consumer behaviours, then predict and influence future behaviours using bleeding-edge technologies.

You will be expected to proactively improve the components for which you have long-term ownership and to automate repetitive tasks whenever possible.

You'll get to work with some intelligent people both from the technology team and wider business. As a team member, you'll be expected to work closely with 5-10 other engineers and data scientists in your team as well as your team lead. You'll be responsible in your own area for ensuring that quality, best practices and delivery timescales are maintained.

You will also work within a collaborative design culture; we work hard to create a supportive environment where you can progress your career. We believe in our PASSION values and apply them to all we do.

Required Data Engineering skills: Hadoop

- Proven Agile experience;

- Full life cycle development experience;

- Experienced in collaborating in design and development discussions;

- Understands good design principles and best practices;

- Able to assist with defining task breakdowns;

- Excellent communicator.

Required technical skills:

- Extensive Software/Data Engineering;

- Proven experience in Scala;

- Experience with Spark;

- Experience with AWS (S3, EC2, EMR, etc.);

- NoSQL exposure (e.g. HBase, Neo4j, Cassandra);

- Solid *nix experience;

- Experience working on big data or streaming server-side systems.
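To give a flavour of the Scala/Spark pipeline work the role involves: the sketch below uses plain Scala collections (which mirror the map/filter/aggregate style of Spark's RDD and Dataset APIs) so it runs without a Spark dependency. The event data and `totalSpendPerUser` function are hypothetical illustrations, not part of the role description.

```scala
// Hypothetical event log: (userId, amountSpent) records, as a data
// pipeline might receive from an e-commerce clickstream.
val events = Seq(
  ("alice", 20.0),
  ("bob", 5.0),
  ("alice", 15.0)
)

// Aggregate total spend per user. The same groupBy/map/sum shape
// translates directly to Spark (groupByKey / reduceByKey on an RDD,
// or groupBy + agg on a Dataset) when run at cluster scale.
def totalSpendPerUser(events: Seq[(String, Double)]): Map[String, Double] =
  events
    .groupBy { case (user, _) => user }
    .map { case (user, recs) => user -> recs.map(_._2).sum }

val totals = totalSpendPerUser(events)
// totals: Map("alice" -> 35.0, "bob" -> 5.0)
```

In a real Spark job the `Seq` would be a distributed `RDD` or `Dataset` read from S3/HDFS, but the transformation logic is written in the same functional style.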

Desirable technical skills:

- Hadoop (HDFS, Hive, YARN);

- An understanding of machine learning or data analysis implementations;

- R/Python;

- Microservices;

- Distributed systems;

- Kafka;

- RDBMS and data warehousing experience (e.g. Oracle, MySQL);

- DevOps and continuous delivery exposure using tools such as Ansible/Chef/Puppet/Terraform.


Start date
ASAP
Duration
12 month contract.
From
Coal Intelligent Technology Limited
Published at
24.12.2016
Project ID:
1260500
Contract type
Freelance