Description
- Can confidently demonstrate significant hands-on experience designing and architecting data solutions within a large, complex, multi-faceted organization, translating user requirements into solution designs
- Proven experience architecting and implementing AWS services such as Amazon Elastic MapReduce (EMR), SageMaker, Redshift, Kinesis, Glue, and S3
- Implementation experience in the Big Data/Apache Hadoop ecosystem (including tools such as Hadoop Streaming, Spark, Pig, and Hive) or in the real-time ecosystem (with tools such as Amazon Kinesis, Apache Kafka, Flink, and Storm)
- Proven experience in working with data science teams and frameworks
- Experience with industry development technologies (Node.js, Go, Python, Java, Scala, or other languages) and a good understanding of modern software architecture concepts and development practices (APIs, containers, event sourcing, continuous integration/delivery/deployment)
- Understanding of security practices related to data ingestion, storage, analysis, and visualization
- Experience working in or leading Agile teams and participating in DevOps practices
- Highly technical and analytical, with 10 or more years of experience implementing analytics platforms
- Strong English communication skills