Cloudera CDP Data Engineer

This project has been archived and is not accepting more applications.




My client is looking for an experienced Data Engineer with Cloudera CDP experience (Streaming: Kafka).

Job Function

As a Data Engineer you will interact with business and IT stakeholders (architects, functional analysts, project managers, data scientists, developers) and participate in the data transformation program.
As part of this program, we are re-engineering our enterprise data platform and machine learning solutions and moving to the CDP technology stack (Cloudera Data Platform). In this re-engineering and migration, you will design and develop solutions for real-time data ingestion, processing, and serving in a highly available environment. You will also work on several generic frameworks, e.g. frameworks for logging and for access management.

  • You are an IT professional with at least five years of experience, including proven experience in big data engineering.
  • You have experience with the Cloudera products (HDFS, Ozone, Hive, Impala, Spark, Oozie, Atlas, Ranger, etc.).
  • Experience with real-time technologies is mandatory (NiFi, Kafka, Flink, Spark Streaming, high-availability setups, etc.).

Specifically on Kafka:

  • Design and implement broker and partition architecture.
  • Develop topics and the corresponding APIs/microservices to make data consumable.
  • Configure producers and consumers.
  • Master the core components of a Kafka ecosystem, such as ZooKeeper and YARN.
  • Integrate Kafka with Flink and NiFi.
  • Apply stream processing knowledge (Kafka with Flink).
  • Know-how at the level of a Kafka administrator is a plus.
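As a rough illustration of the producer/consumer configuration work described above, the sketch below assembles a typical Kafka producer configuration using only java.util.Properties and the standard Kafka client property names. The broker addresses and class names are placeholder assumptions for illustration; in a real project these properties would be passed to a KafkaProducer from the Kafka client library, which is omitted here so the snippet stays self-contained.

```java
import java.util.Properties;

public class ProducerConfigSketch {

    // Build a baseline producer configuration. Keys follow the standard
    // Kafka client property names; broker addresses are placeholders.
    static Properties producerConfig(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers); // comma-separated broker list
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                // wait for all in-sync replicas (HA setups)
        props.put("enable.idempotence", "true"); // avoid duplicates on retry
        props.put("retries", "2147483647");      // retry transient failures indefinitely
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerConfig("broker1:9092,broker2:9092");
        System.out.println("acks=" + props.getProperty("acks"));
        // In a real deployment: new KafkaProducer<String, String>(props)
    }
}
```

Settings like `acks=all` and idempotent retries matter in the highly available environment this role targets, since they trade a little latency for delivery guarantees across replicas.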
  • Experience with NoSQL databases (Couchbase, MongoDB, etc.) is an advantage.
  • You have experience with CI/CD tooling (Git, Jenkins/GitLab CI, Ansible, Nexus).
  • You have experience in the Java and/or Python programming languages.
  • Experience with Power BI, Scala, Docker, and Kubernetes is an advantage.
  • You can work autonomously, cooperate effectively with different teams onsite and offshore, are eager to learn new technologies, and like sharing knowledge and documenting solutions with the right level of detail.
  • You are fluent in English, both spoken and written.
  • Knowledge of Dutch and/or French is an advantage, although not essential.

Remote work is possible 3 days per week, with 2 days per week required onsite in Brussels, Belgium.

Start date: 12th December
Duration: 12 months
Company: ComTech Europe Limited