Data Engineer - Data Pipelining

London - Onsite

Description

Data Engineer

A leading financial services business is looking to hire an experienced Data Engineer with specific exposure to data pipelining and transaction monitoring.

£600-£670 a day (Umbrella) - Inside IR35 only

6-month contract with significant scope

The role is remote-based for the time being but will become London-based.

Role Purpose

We are looking for a Data Engineer with experience in developing data pipelines to join our team, working on a new Transaction Monitoring platform used across multiple businesses within our Institutional Clients Group.

As a Data Engineer in our team, you will be responsible for building data pipelines and integrating them with various internal and external systems. The Transaction Monitoring platform will run various models at scale on large data sets to identify possible instances of market abuse.
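
For illustration, a minimal sketch of the kind of pipeline step this involves, assuming Spark with Python. The schema, paths and the simple threshold rule below are hypothetical stand-ins, not the platform's actual models:

```python
# Illustrative PySpark sketch (hypothetical schema and paths): flag trades whose
# notional value is far above an account's trailing average, as a simple
# stand-in for the kind of model the platform would run at scale.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("tm-pipeline-sketch").getOrCreate()

trades = spark.read.parquet("s3://tm-raw/trades/")  # hypothetical source

# Trailing window over the previous 99 trades per account.
w = Window.partitionBy("account_id").orderBy("trade_ts").rowsBetween(-99, -1)

scored = (
    trades
    .withColumn("avg_notional", F.avg("notional").over(w))
    .withColumn("is_anomalous", F.col("notional") > 10 * F.col("avg_notional"))
)

# Persist only the flagged trades for downstream review.
scored.filter("is_anomalous").write.mode("append").parquet("s3://tm-alerts/")
```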

You will work with the data scientists and business stakeholders, as the end consumers of the data, to ensure we are meeting their requirements. You will contribute to the team's strategy around development and deployment best practices.

This is an exciting opportunity to work on an important project, one that will have a huge impact on the business and on our future architecture in this area.

Key Responsibilities

  • Working closely with a data-centric application that hosts algorithms to detect possible market abuse.
  • Designing the ETL architecture as we look to extract it from an existing legacy application, then building out additional ETL layers to support the onboarding of additional data sources (a minimal extraction sketch follows this list).
  • Working closely with quants/data scientists to ensure that they have the data necessary to add new algorithms, and that the data is of the necessary quality and timeliness to support them.
  • Acting as the subject matter expert on data pipelines for the DevOps-focused team and for external stakeholders.
  • Building close relationships with clients and stakeholders to understand the use cases for the platform and prioritise work accordingly.
  • Working well in a multidisciplinary DevOps-focused team, building close relationships with other developers, quants/data scientists, and production support teams.
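
As referenced in the ETL responsibility above, here is a minimal sketch of extracting data from a legacy relational store into a landing zone the new platform could consume, assuming Spark's JDBC source. All connection details, table names and paths are hypothetical:

```python
# Illustrative legacy-extraction sketch: read a table from a legacy Oracle
# store over JDBC and land it as partitioned Parquet for downstream models.
# Connection details, table names and paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("legacy-etl-sketch").getOrCreate()

legacy = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//legacy-host:1521/ORCL")
    .option("dbtable", "TM.TRANSACTIONS")
    .option("user", "tm_reader")
    .option("password", "********")
    .option("fetchsize", "10000")   # rows fetched per round trip
    .load()
)

(
    legacy
    .write.mode("overwrite")
    .partitionBy("TRADE_DATE")      # assumes a trade-date column exists
    .parquet("s3://tm-landing/transactions/")
)
```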

Skills & Qualifications

  • Experience building data pipelines on top of Big Data technologies, ideally using Spark and Python
  • Experience working with message queues (ideally Kafka or Solace), traditional relational databases (SQL) and NoSQL databases (ideally MongoDB); a minimal ingestion sketch follows this list
  • Experience working closely with data scientists, ideally including building pipelines that serve ML/statistical algorithms
  • Passionate about databases, with hands-on experience of SQL/NoSQL technologies (SQL Server, Oracle, Couchbase, MongoDB, etc.)
  • Experience working in a DevOps culture and a willingness to drive it: comfortable with CI/CD tools (ideally IBM UrbanCode Deploy, TeamCity or Jenkins), monitoring tools and log aggregation tools; ideally, you will have worked with VMs and/or Docker
  • High development standards, especially for code quality, code reviews, unit testing, continuous integration and deployment
  • Proven capability to interact with clients and deliver results, taking ideas to production
  • Experience working in fast-paced development environments
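
For the message-queue and NoSQL items above, a minimal ingestion sketch assuming the kafka-python and pymongo client libraries. The topic, broker and collection names are hypothetical:

```python
# Illustrative ingestion sketch: consume trade events from Kafka and land
# them in MongoDB, one document per event. Names are hypothetical.
import json

from kafka import KafkaConsumer   # kafka-python
from pymongo import MongoClient

consumer = KafkaConsumer(
    "trade-events",                          # hypothetical topic
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda v: json.loads(v),
    auto_offset_reset="earliest",
)

collection = MongoClient("mongodb://mongo:27017")["tm"]["trades"]

for message in consumer:
    collection.insert_one(message.value)     # deserialised JSON payload
```
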
Start date: ASAP
Duration: 6 months
From: Harvey Nash IT Recruitment UK
Published at: 19.06.2021
Project ID: 2139847
Contract type: Freelance