Description
Big Data Engineer with Java and Spark experience is required to join the big data technology and design team of a leading Bank for an initial 6-month contract. As the Big Data Engineer, you will be entrusted to develop solutions and design ideas that enable the software to meet the acceptance and success criteria. You will work with architects and business analysts to build data components in the Big Data environment.
This role is inside IR35.
Experience:
- Ability to build data models using Hadoop technologies.
- Programming experience in Java or Scala.
- Experience with most of the following technologies (Apache Hadoop, Kafka, Apache Spark, YARN, Hive, HBase, Apache Atlas, SQL, RESTful services).
- Experience in testing, monitoring, administering, optimising and operating multiple Hadoop/Spark clusters across cloud providers (GCP) and on-premise data centres, primarily using Python, Java and Scala.
- Sound working knowledge of the Unix/Linux platform.
- Experience with GCP and/or AWS is advantageous.
- Knowledge of working with data is advantageous
- Hands-on experience building data pipelines using Hadoop components such as Hive and Spark SQL.
- Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins) and requirements management in JIRA.
- Hands-on experience with cloud, preferably Google Cloud Platform.
- Experience debugging code issues and communicating the findings to the development team and architects.
- Nice to have: Python/Shell, ELK, Kubernetes and/or experience with any cloud environment.