Hadoop Admin

ANYWHERE - Onsite
This project has been archived and is not accepting more applications.

Description

Responsible for implementation and ongoing administration of Hadoop infrastructure, and for maintaining and developing big data applications written in Scala, Java, and Python.
Aligning with the Data engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users, creating Kerberos principals, and testing HDFS and MapReduce access for the new users.
Should have hands-on experience with big data components such as Spark, Scala, Kafka, HBase, Oozie, and Drill.
Good to have: AWS S3, Python, Ansible, Airflow, Java.
Performance tuning of Hadoop clusters and YARN to deliver sub-second results from Apache Spark streaming applications.
Screen Hadoop cluster job performance and plan capacity. Monitor Hadoop cluster connectivity and security.
Infrastructure automation using Ansible and Docker. Manage and review Hadoop log files.
File system management and monitoring. HDFS support and maintenance.
Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
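The user-onboarding duty above (Linux account, Kerberos principal, HDFS and MapReduce access check) can be sketched roughly as below. This is a minimal illustration, not a procedure from the posting: the realm EXAMPLE.COM, the username alice, the keytab path, and the examples jar location are all assumptions, and the commands require a live Kerberized Hadoop cluster.

```shell
# Sketch of onboarding a new Hadoop user on a Kerberos-secured cluster.
# All names and paths are placeholders; adapt to the actual environment.
USER=alice
REALM=EXAMPLE.COM

# 1. Create the Linux account on the relevant nodes.
sudo useradd -m "$USER"

# 2. Create a Kerberos principal and export a keytab for the user.
sudo kadmin.local -q "addprinc -randkey ${USER}@${REALM}"
sudo kadmin.local -q "ktadd -k /etc/security/keytabs/${USER}.keytab ${USER}@${REALM}"

# 3. Create an HDFS home directory owned by the new user.
sudo -u hdfs hdfs dfs -mkdir -p "/user/${USER}"
sudo -u hdfs hdfs dfs -chown "${USER}:${USER}" "/user/${USER}"

# 4. Smoke-test HDFS and MapReduce access as the new user.
sudo -u "$USER" kinit -kt "/etc/security/keytabs/${USER}.keytab" "${USER}@${REALM}"
sudo -u "$USER" hdfs dfs -ls "/user/${USER}"
sudo -u "$USER" yarn jar \
  "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  pi 2 10
```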
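The Spark streaming tuning duty above might, for example, involve submitting a job with latency-oriented settings. A hedged sketch, assuming a YARN cluster and a Kafka-backed streaming job; every flag value and the application jar name are illustrative assumptions, not recommendations from the posting:

```shell
# Illustrative spark-submit invocation tuned for low-latency streaming.
# Executor sizing and rate limits must be adapted to the actual cluster.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 8 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.streaming.backpressure.enabled=true \
  --conf spark.streaming.kafka.maxRatePerPartition=10000 \
  --conf spark.sql.shuffle.partitions=64 \
  my-streaming-app.jar
```

Enabling backpressure and capping the per-partition Kafka ingest rate keeps micro-batch durations bounded, which is what sub-second end-to-end latency typically hinges on.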
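The Ansible-based infrastructure automation mentioned above could look like the following playbook fragment, which distributes a configuration file and restarts the affected service. The host group hadoop_workers, the file paths, and the service name hadoop-yarn-nodemanager are assumptions for the sketch, not details from the posting.

```yaml
# Illustrative Ansible playbook: push a Hadoop config change and
# restart the affected service only when the file actually changed.
- hosts: hadoop_workers
  become: true
  tasks:
    - name: Distribute updated yarn-site.xml
      ansible.builtin.copy:
        src: files/yarn-site.xml
        dest: /etc/hadoop/conf/yarn-site.xml
        owner: hdfs
        group: hadoop
        mode: "0644"
      notify: Restart NodeManager

  handlers:
    - name: Restart NodeManager
      ansible.builtin.service:
        name: hadoop-yarn-nodemanager
        state: restarted
```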

Job Type: Contract
Job Duration: 12 months and rolling
Rate: 330 Euros per day
Language: English
Location: Finland
Start Date: June 2021
Start date: ASAP
From: Ubique Systems GmbH
Published at: 19.05.2021
Contact person: Dipti Barik
Project ID: 2115672
Contract type: Freelance