Big Data Engineer – Apache Kafka (60862)

Canton of Zurich, Zurich - Onsite

Description

For a project at our client's site, an international bank based in Zurich, we are looking for an experienced

Big Data Engineer – Apache Kafka (60862)

In this role you will operate and maintain the entire platform end to end and work closely with the development teams.

Your Qualifications:
• Strong knowledge of designing and operating Kafka clusters (Confluent and Apache Kafka) on-premises
• 5+ years of experience in designing, sizing, implementing, and maintaining Hortonworks-based Hadoop clusters
• Deep knowledge of securing and protecting Hadoop clusters (Ranger, Sentry, Kerberos, Knox, SSL, Shuffle)
• 5+ years of experience in designing Big Data architectures, with demonstrated experience in gathering and understanding customer business requirements to introduce Big Data technologies
• Well versed in working with tools from the Hadoop ecosystem, such as Hadoop, Hive, Impala, Spark, Kafka, Solr, and Flume
• 5+ years of experience in DevOps automation with Ansible and Terraform
• Experience with IBM DB2 and IBM Power Systems is a plus
• Hands-on experience implementing complex security requirements in the financial industry
• Good abstraction and conceptual skills combined with a self-reliant, team-minded, communicative, and proactive personality
• Fluent in English; German is an advantage

Your Responsibilities:

• Engineer and integrate the platform from a technology point of view
• Engineer core Big Data platform capabilities

Off to new destinations! Apply now directly or contact our team.

Start date: 05.2019
Duration: 8 months
From: iET SA
Published at: 18.02.2019
Contact person: Senior Recruiter
Project ID: 1721971
Contract type: Freelance