Description
The Company is a global leader in Information and Deep Analytics, primarily serving the financial services industry.
They are currently investing in their technology and big data platform to develop a number of new revenue-generating products, leveraging open source and big data technologies. These include new data integration, advanced analytics, visualisation, aggregation and smart data initiatives that address new customer needs and are highly visible and strategic within the organisation.
These initiatives use best-of-breed technologies such as Hadoop, Spark, Cassandra, HDFS, Kafka, SOLR and AWS, along with in-house developed technologies. The successful candidate will work in a fast-paced, dynamic team environment, building brand-new commercial products that are at the heart of the business.
Position Summary
- Design and implement big data infrastructure for batch and real-time analytics.
- Ensure highly interactive response times.
- Prevent performance bottlenecks from creeping into the system.
- Interpret and translate business use cases and feature requests into technical designs and development tasks.
- Be an active player in system architecture and design discussions.
- Take ownership of development tasks and participate in regular design and code review meetings.
- Be proud of the high quality of your own work.
- Work with a number of teams (in multiple worldwide locations).
- Always follow quality standards (unit tests, integration tests and documented code).
- Be delivery-focused, have a passion for technology and enjoy offering new ideas and approaches.
Education and Experience
- Bachelor's degree in Computer Science, Applied Mathematics, Engineering, or a related discipline, or equivalent experience.
- Strong software development experience and the ability to build production software in Scala or Spark.
- Experience with distributed file systems, databases and distributed systems.
Business Competencies
- Be able to demonstrate commercial experience on big data/advanced analytics projects
- Linux/Unix
- Knowledge of algorithms, data structures and computational complexity
- Functional programming
- Git
Preferred (one or more of)
- Hadoop, HDFS, YARN, Spark, Hive, NoSQL databases, SOLR, Kafka, Druid
- AWS: EC2, EMR, S3
- Visualisation: knowledge of visualisation styles and tools, and the ability to programmatically create visualisations; able to create visualisations that minimise the time needed to understand complex data sets
- HTML5/D3
- Apache Zeppelin
- Akka