Description
Job requirements
The Skills
- You have experience with the analysis and creation of data pipelines, data architecture, and ETL/ELT development, and with processing structured and unstructured data, including post-go-live activities
- You can analyse data, identify issues (e.g. gaps, inconsistencies) and troubleshoot them
- You have experience working with data stored in RDBMSs and at least some understanding of NoSQL databases
- You have knowledge of Scala and Spark, and a good understanding of the Hadoop ecosystem including Hadoop file formats like Parquet and ORC
- You can write performant Scala code and SQL statements and can design modular, future-proof solutions that are fit for purpose
- You can work autonomously on Unix-based systems
- You have a true agile mindset and are capable and willing to take on tasks outside your core competencies to help the team
- You have experience working with customers to identify and clarify requirements
- You have a strong interest in fintech and data-related technologies
- You have strong communication skills
Nice to haves
- You have experience with open-source technologies used in data analytics, such as Spark, Hive, HBase, and Kafka
- You have knowledge of Cloudera or IBM Mainframe environments
- You speak English fluently; any other language is a plus