Description
My client is looking to hire multiple Contract Data Engineers on an initial 6-month contract with a view to extend. This role involves working on cutting-edge technology as they look to revitalise their approach to data and analytics.
Foundational skills:
- A degree in Computer Science or Engineering, or a solution architect or data engineering certification from a leading cloud provider (Azure, GCP or AWS)
- 8+ years of direct experience anticipating and solving complex, large-scale data challenges
- 5+ years of hands-on experience architecting and implementing solutions involving massive volumes of data, streaming data, relational and non-relational databases, orchestration frameworks, CI/CD pipelines, and distributed processing technologies
- Demonstrable experience building reusable frameworks, patterns, templates, and design artefacts in agile data development environments
5+ years of experience in data design and development in:
- Data warehousing
- Data modelling (3NF, star schema, etc.)
- Data lake development using big data technologies or cloud native solutions
- Complex data integration pipelines with varied data types (nested XML, JSON, COBOL, EBCDIC formats) and sources (direct integration, API-based, FTP transfer)
- Securing data solutions
- Data Management solutions (Reference data management, master data management)
- Staying abreast of trends and new capabilities in the domain of data engineering and identifying opportunities to introduce them into data solutions
- Understanding and applying regulatory and compliance considerations to all data processing solutions, GDPR and PCI in particular
- Contributing to Agile ALM, including refinement, planning, release, and retrospective sessions
- Committing to regular performance and development conversations with their line manager and embracing a culture of in-time feedback and coaching
Expertise in building data solutions with the following technologies:
- Azure data ecosystem, especially Azure Databricks, Data Factory, Data Lake, Event Hubs, Cosmos DB and Synapse
- Python, PySpark, Groovy
- Apache NiFi, Hive, Spark, HBase, Hadoop HDFS and Kafka
- Ab Initio
- Metadata management tools, e.g. Glue, Atlas, Data Catalogue
- Reporting tools - Power BI, Tableau, MicroStrategy
- Continuous Integration / Continuous Delivery (CI/CD) tooling and the DevOps model, including Git, Jira, Confluence and Jenkins
- Experience and ability to mentor and line manage a small development team
Development-focused skills:
- Experience in the banking and finance sector, or other sectors involving large data volumes
- Ability to multi-task across focused projects as well as continuous improvement initiatives
- Exposure to microservices-based workloads in data environments
- Experience in AI Engineering
- Good understanding of tools and principles used in ALM:
- Automated software testing
- Agile Methodologies - Scrum/Kanban/XP