Description
Data Engineer (Hadoop, Scala, Java, Python, S3, Lambda, Kafka, AWS Kinesis, Yarn, Spark) - Remote and Stockholm, Sweden - English speaking
One of our Blue Chip Clients is urgently looking for a Data Engineer (Hadoop, Scala, Java, Python, S3, Lambda, Kafka, AWS Kinesis, Yarn, Spark).
For this role you can initially work remotely, but once COVID restrictions are lifted you will need to be based onsite in Stockholm, Sweden.
Please find some details below:
Experience working with large-scale data systems using Hadoop and its ecosystem.
Experience working with Scala/Java/Python
Good knowledge of S3 and building data pipelines
Good knowledge of Lambda functions and AWS Kinesis/Kafka
Hands-on experience with YARN and Spark, including fine-tuning and optimizing Spark applications
Working experience implementing streaming data ingestion using HDFS, Kafka, Spark, and Scala/Java.
Strong experience with Hadoop technologies such as HBase and Hive, plus the ability to help build prototypes for new tools/products
Good command of database queries and a Unix background
Good understanding of, and experience with, real-time/stream processing
Airflow knowledge
Good understanding of CI/CD with GitHub and GitHub Actions.
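To give a flavour of the Lambda/Kinesis part of the stack listed above, a minimal Python handler for a Kinesis-triggered Lambda might look like the sketch below. The event shape follows AWS's standard Kinesis-to-Lambda integration (base64-encoded payloads under `Records[i]["kinesis"]["data"]`); the processing logic itself is purely illustrative.

```python
import base64
import json

def handler(event, context):
    """Illustrative Lambda handler for a Kinesis event source.

    Decodes each record's base64 payload and parses it as JSON.
    The return shape here is hypothetical, not an AWS requirement.
    """
    processed = []
    for record in event["Records"]:
        # Kinesis delivers each record's data base64-encoded
        payload = base64.b64decode(record["kinesis"]["data"])
        processed.append(json.loads(payload))
    return {"count": len(processed), "items": processed}
```

A handler like this can be exercised locally by constructing a fake event with a base64-encoded JSON payload and calling `handler(event, None)`.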
Please send CV for full details and immediate interviews. We are a preferred supplier to the client.