Big Data Developer

Zurich - Onsite

Description

My client, a global investment bank, is currently seeking a Big Data Developer to join its Zurich office for an initial 6-month contract, with a strong possibility of long-term extension.

Working as part of a team responsible for the development of a data lake based on the Cloudera stack, your key responsibilities will include:
  • Develop software components for ingesting end-of-day and intraday data, as well as data flows that join these data sets (an illustrative sketch follows this list)
  • Participate in design and technology reviews of the software components developed within the team
  • Evolve the overall architecture of the solution using the latest technologies available in the bank
  • Contribute to integration testing
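
For illustration only (not part of the role description), a minimal sketch of the kind of end-of-day/intraday join mentioned above, assuming Spark on the Cloudera stack; the HDFS paths, column names and schemas are hypothetical:

// Hypothetical paths and column names, for illustration only.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object EodIntradayJoin {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("eod-intraday-join")
      .getOrCreate()

    // End-of-day snapshot already landed in the data lake (e.g. Parquet on HDFS).
    val eod = spark.read.parquet("hdfs:///lake/eod/positions")

    // Intraday data ingested from an upstream feed.
    val intraday = spark.read.parquet("hdfs:///lake/intraday/trades")

    // Join the two data sets on a shared instrument key.
    val joined = eod
      .join(intraday, Seq("instrument_id"))
      .select(col("instrument_id"), col("eod_position"), col("intraday_price"))

    joined.write.mode("overwrite").parquet("hdfs:///lake/curated/eod_intraday")
    spark.stop()
  }
}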


The successful candidate will have experience building data warehousing and analytics solutions using one of the major Hadoop distributions and various ecosystem components (e.g. HDFS, Impala, Spark, Flink, Flume, Kafka). You will have 4+ years of experience with Python and/or Scala, along with experience in data modelling and SQL. Experience building production data pipelines using Spark, Spark Streaming and Flink, and experience with security in a Hadoop environment, is highly advantageous. Bash scripting experience is also desirable.
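
Similarly, and purely for illustration, a minimal sketch of a streaming ingestion pipeline of the kind referred to above, assuming Spark Structured Streaming reading an intraday feed from Kafka; the broker address, topic name and storage locations are assumptions:

// Hypothetical Kafka brokers, topic and HDFS locations, for illustration only.
import org.apache.spark.sql.SparkSession

object IntradayKafkaIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("intraday-kafka-ingest")
      .getOrCreate()

    // Read the intraday feed from Kafka as a streaming DataFrame.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "intraday-trades")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    // Land raw messages in the data lake as Parquet, with checkpointing for recovery.
    val query = stream.writeStream
      .format("parquet")
      .option("path", "hdfs:///lake/raw/intraday")
      .option("checkpointLocation", "hdfs:///lake/checkpoints/intraday")
      .start()

    query.awaitTermination()
  }
}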

Practical experience with one of the major Big Data platforms (Cloudera or Hortonworks) and experience with the Elastic technology stack are also desirable.

Michael Bailey International is acting as an Employment Business in relation to this vacancy.
Start date: 02/2020
Duration: 6 months + (extension possible)
From: Michael Bailey Associates
Published at: 04.01.2020
Project ID: 1869116
Contract type: Freelance