Description
Location: Krakow, Poland
Experience: 7+ years
Skills: Hadoop components (Apache Hadoop, Apache Spark, YARN, Hive, SQL), Python, PySpark, and Unix/Linux environments
Requirements:
Experience with industry standard version control tools (Git, GitHub) and automated deployment tools (Ansible & Jenkins)
Experience optimizing Spark jobs and basic shell-scripting knowledge
Understanding of big data modelling using relational and non-relational techniques
Self-starter: proactive, with a team-focused attitude.
Willingness to learn and adapt quickly to changing requirements.
Experience and understanding of SDLC (Software Development Lifecycle)
Ability to work with business users and the team stakeholders such as project managers and architects
Responsibilities:
Excellent data analysis skills with experience dealing with large and complex data sets.
Good knowledge of Structured Query Language (SQL)
Highly analytical with excellent attention to detail.
Experienced in analysing business requirements and turning them into effective functional solutions
Experience in Scala
Experience developing RESTful APIs using Java Spring Boot
Familiarity with cloud technologies
Language: English
Contract: 6 months + extension
Job Types: Full-time, Contract