Description
Data Ingestion/Hadoop Engineer (Software Application Programmer III)
- Implement interfaces with different systems to ingest and distribute data over a variety of protocols: TCP/IP sockets, SFTP/SCP, HTTP, SOAP web services, REST, IBM MQ, JMS, and SNMP.
- Develop, test, and support interface code written in Java, Python, Unix shell, and Perl.
- Develop and support data processing scripts written in Unix shell, Java, Python, and Perl.
- Support data ingestion and processing routines built with Oracle SQL*Loader and PL/SQL stored procedures.
- Design, develop, and support data flows using Apache NiFi and Apache Kafka.
- Design, develop, and troubleshoot ad hoc interfaces and resolve data issues.
- Automate software delivery and infrastructure changes using DevOps tools.
- Manage big data in the Hadoop ecosystem using HDFS, Hive, and Sqoop.
- Support scheduled production code deployments.
MUST HAVE SKILLS:
- Very strong (expert-level) experience with Java, shell scripting, Python, and Perl.
- Comfortable and experienced with TCP/IP, HTTP, SOAP web services, REST, JMS, SFTP, SCP, rsync, and curl.
- Familiarity with protocols such as SNMP and TL1 is an advantage.
- Hands-on experience with Hadoop ecosystem products (HDFS, Hive, HBase, Pig, Sqoop, Oozie) is a big plus.
- Experience with Apache NiFi and Apache Kafka.
- Basic SQL skills, SQL*Loader experience, and experience with relational databases are a benefit.
- Experience with NoSQL databases (HBase or MongoDB) is an added advantage.
- Experience with DevOps tools.
- Exposure to analytics tools (Tableau, Splunk) is nice to have.
EDUCATION/CERTIFICATIONS:
Bachelor's degree in Information Systems, Computer Science, or a related field.
REQUIRED SHIFT:
Standard, flexible, Monday through Friday with occasional weekend work. Must be willing to work with VDSI teams (early-morning US hours) and with teams across the US Eastern, Central, and Mountain time zones.