JOB DESCRIPTION:
Duration: 6+ months
- Ability to architect and design Big Data solutions
- Ability to evaluate requirements in the context of various Big Data components and articulate recommendations
- Previous experience working with Data Warehouse, Big Data, and Cloud technologies
- Strong Agile background
- Experience with Continuous Integration and Continuous Delivery (CI/CD)
- Must be able to design and implement DevOps practices for version control, compliance, configuration management, build, release, and testing using Big Data technologies
- Excellent written communication skills
- Excellent analytical and problem-solving skills
- Experience architecting and designing Data Lake solutions
- Design Data Lake deployments based on customer requirements, best practices, and the patterns and practices developed by Agile IT
- Implement the relevant deployment pattern and scale a release pipeline to deploy to multiple endpoints
- Experience with Big Data / Hadoop ecosystem components: Spark (batch and streaming), Kafka, Storm, Hive, HDFS, MapReduce, Oozie, Pig, and Flume (a brief illustrative sketch follows this list)
- Architecture: Distributed, Client-Server, Multi-Tier, SOA, and Object-Oriented
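As a brief illustration of the Spark (streaming) and Kafka skills listed above, the sketch below shows a minimal PySpark Structured Streaming job that reads from a Kafka topic and counts records per one-minute window. The broker address (localhost:9092) and topic name ("events") are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the Spark classpath; it is an illustrative sketch, not part of any specific project codebase.

# Minimal sketch, assuming a hypothetical Kafka broker at localhost:9092,
# a topic named "events", and the spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = (
    SparkSession.builder
    .appName("streaming-sketch")
    .getOrCreate()
)

# Subscribe to the Kafka topic; the Kafka source exposes key, value,
# and timestamp columns for each record.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Aggregate record counts over one-minute event-time windows.
counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

# Stream the running counts to the console for demonstration.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()

The same pattern extends from the console sink used here to durable sinks such as HDFS files or Hive tables, which reflects the batch-and-streaming breadth the role calls for.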