Skills
Summary of Experience
- Professional career reflects 7 years of managing technology projects from concept to completion with a strong record of meeting deadlines.
- Top-producing Software Engineer with 7 years of overall experience providing programming expertise in Spark, Hadoop, Scala, Core Java, and PHP.
- Familiar with installing, configuring, and administering Hadoop clusters on a major Hadoop distribution (Cloudera).
- Familiar with the storage layer (Hadoop Distributed File System, HDFS), the computation layer (MapReduce), the Spark framework, and Hadoop ecosystem components YARN, Hive, HBase, and Pig.
- Hands-on experience writing Spark programs using RDDs, DataFrames, Datasets, Spark SQL, and Spark Streaming.
- Familiar with advanced Spark features such as GraphX and machine learning (MLlib).
- Basic knowledge of Hive, HBase, Kafka, and Flume.
- Knowledge of advanced programming in Core Java, PHP, and NodeJS, including multithreading, concurrency, thread synchronization, and socket programming.
- Familiar with design patterns such as Factory, Singleton, Observer, and Strategy.
- Forward-thinking in problem identification, research, analysis, and resolution.
- Spearheads full life-cycle project development while managing quality.
- Consistently delivers strong, sustainable technology gains.
- Quick to get acquainted with the latest technologies.
- Excellent analytical, problem-solving, communication, and interpersonal skills, with the ability to work both as part of a team and independently.
- Organizational driver offering productivity improvements, pioneering technologies, and process design and re-engineering.
- Big Data: HDFS, Spark, MapReduce, Kafka, Hive, HBase, Pig, Machine Learning
- Operating Systems: Windows, Linux (Red Hat, Ubuntu)
- Programming Languages: Core Java, Scala, PHP
- Protocols: RS-232, RS-485 (serial communication), TCP/IP & UDP, MODBUS
- Databases: MS SQL Server, MySQL (accessed via JDBC)
- Markup Language: XML
- IDEs & Tools: Eclipse, Microsoft Visual Studio 2005 & 2010, Git
Project history
- BIG DATA / SPARK PROJECTS
Project: Online Ground Booking Application Analytics
The application's booking datasets include fields such as city, location, slot date, slot time, ground information, and customer information.
The bookings data is in CSV format and is stored in HDFS, distributed over a cluster of nodes, which provides scalability, high availability, and fault tolerance.
The bookings data is processed using Spark RDDs, and the processed data is loaded into a Hive table for further processing.
Technology: HDFS, MapReduce, Hive, Spark RDDs, DataFrames, Datasets, Spark SQL
Roles and Responsibilities:
- Involved in the full lifecycle of the Hadoop solution, including requirements analysis, platform selection, technical architecture design, application design and development, testing, and deployment.
- Gathered and processed raw data at scale (including writing scripts, calling APIs, and writing SQL queries).
- Installed and managed the cluster.
- Planned and developed big data analytics projects based on business requirements.
- Understood the requirements for input-to-output transformations.
- Defined job flows.
- Performed data analysis on all results and prepared presentations for clients.
- Kept track of Hadoop cluster connectivity and security.
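The processing described above (parse CSV booking rows, skip malformed records, aggregate, then load into Hive) can be sketched as follows. This is a minimal standalone illustration: the field names (city, ground, slot date) are hypothetical, and plain Scala collections stand in for Spark RDDs so the example runs without a cluster; in the actual job these would be RDD operations (`map`, `filter`, `reduceByKey`) on data read from HDFS.

```scala
// Hypothetical booking record; the real schema is not shown in the project description.
case class Booking(city: String, ground: String, slotDate: String)

object BookingAnalytics {
  // Parse one CSV line into a Booking, returning None for malformed rows
  // (mirrors an rdd.map(...).filter(...) cleanup step).
  def parse(line: String): Option[Booking] =
    line.split(",").map(_.trim) match {
      case Array(city, ground, slotDate) => Some(Booking(city, ground, slotDate))
      case _                             => None
    }

  // Count bookings per city — the kind of aggregate a reduceByKey would
  // produce before the result is loaded into a Hive table.
  def bookingsPerCity(lines: Seq[String]): Map[String, Int] =
    lines.flatMap(parse).groupBy(_.city).map { case (city, bs) => (city, bs.size) }

  def main(args: Array[String]): Unit = {
    val sample = Seq(
      "Pune,Green Turf,2020-01-05",
      "Mumbai,City Arena,2020-01-05",
      "Pune,Blue Court,2020-01-06",
      "bad-row" // malformed record, dropped by parse
    )
    println(bookingsPerCity(sample)) // prints the per-city booking counts
  }
}
```

In the real pipeline the same transformations would run distributed, with `sc.textFile` reading from HDFS and the aggregated result written out for Hive to query.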
Project: Joonk
Tools: Eclipse
Technologies: PHP, MySQL, Core Java, AngularJS
Description: Joonk makes both individual and group communication apt, seamless, and real-time. It uses Artificial Intelligence (AI) to make each communication effective.
Local Availability
Only available in these countries:
India
- Available to work onsite in Europe, Oceania, and Northern America
- Available to start immediately