
Amit Bhandari

Available

Last update: 22.11.2022

Azure Solution Architect, Big Data Technical Architect, Big Data Engineer on Azure

Graduation: Master's in Computer Science
Languages: English (Limited professional)

Keywords

Microsoft Azure, C++ (Programming Language), Python (Programming Language), Oracle Applications, JIRA, Big Data, Cloud Computing, Linux, DevOps, Agile Methodology

Attachments

Amit-Bhandari_221122.docx

Skills

Software Development Life Cycle (SDLC), Azure Databricks, Azure Key Vault, file systems, Azure, Pentaho, Python libraries, PyArrow, Pandas, Hive, Oracle, Sybase, web development frameworks, Bottle, C++, Tuxedo 8.1, Agile, Jira, Azure DevOps, Sun Solaris, Windows NT, Linux, Ubuntu 10, Azure Data Studio, VS Code, schedulers, DBeaver, Git, Confluence, SqlDbx, SQL Developer, Visual Studio, TOAD, VSS, PuTTY, Xcode, TIBCO, Control-M, Snowflake (basics), AWS Lambda, PySpark, Python, Sqoop, multi-threading, advanced Linux socket programming, message queues, PL/SQL, shell and Python scripting, databases, Azure SQL DB, Google Protocol Buffers, Matplotlib, Xerces-C++, OCCI, Purify memory-detection tool, design patterns, POSIX, Azure Data Factory, Azure Function Apps, Azure Cloud, Big Data, data pipelines, Azure Data Explorer, Azure Data Lake, algorithms, DevOps, cloud, pipelining, AWS, HQL, Hadoop, HDFS, data ingestion, Hue, bug fixing, unit testing, application integration, parsers, web-based applications, C++11, Oracle/OCCI, Tuxedo, fault tolerance, load balancing, Oracle libraries, code review, memory leaks, UNIX, Visual Basic

Project history

11/2022 - 11/2022
Big Data Engineer on Azure
Tata Consultancy Services

Role and Responsibilities
Working as a Big Data Engineer for a US-based client that runs one of the largest retail chains in
the world. My job is to design and develop data pipelines and to explore different Azure services
to find the best solutions in terms of cost, performance, and scalability. The client currently
uses Azure Data Lake, Azure Databricks, Azure Data Studio, and Azure Key Vault on the Azure cloud,
and deals with petabytes of data. I proposed using the PyArrow library to handle this volume of
data, and implemented solutions and algorithms with PyArrow that showed a clear performance gain
and support cross-language development. The client uses Azure Databricks, whose runtime is built
on Spark and tuned for the Azure platform, so the pipelines themselves are written in PySpark. I
also use Azure DevOps and CI/CD in this project.

Project Ford
Company Tata Consultancy Services - Autonomous Vehicle
Role and Responsibilities
Worked on designing, solutioning, and developing applications in technologies such as Python and
PySpark (or others requested by the customer), including a simulator to check ECUs for CAN
compliance. Worked with the CAN network and CAN hardware, connecting to the cloud to collect data
from sensors in real time and land it in the Cloudera data lake. I acted as Technical Lead and
individual contributor, managing a team of 15.
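As a rough illustration of the kind of frame handling such a simulator involves, here is a sketch
of packing and unpacking a SocketCAN-style frame in Python (the project's actual code is not shown
in this profile; the field layout follows the classic `struct can_frame` from linux/can.h):

```python
# Sketch: pack/unpack a SocketCAN-style CAN frame with the stdlib.
import struct

# can_id (u32), dlc (u8), 3 padding bytes, 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"

def pack_frame(can_id: int, data: bytes) -> bytes:
    """Build a 16-byte frame; short payloads are zero-padded to 8 bytes."""
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(raw: bytes):
    """Return (can_id, payload) with the payload trimmed to its DLC."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, raw)
    return can_id, data[:dlc]

raw = pack_frame(0x123, b"\x01\x02\x03")
can_id, payload = unpack_frame(raw)
print(hex(can_id), payload)  # 0x123 b'\x01\x02\x03'
```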

11/2022 - 11/2022
Big Data Engineer
Tata Consultancy Services - Latest Pipelines

Project British Petroleum
Company Tata Consultancy Services - Latest Pipelines

Role and Responsibilities
Worked as a Big Data Engineer on pipelining: designed and developed pipelines using PySpark on
AWS. In the initial phase I analysed the existing pipelines, which had been developed in HQL in
CDL (the Cloudera data lake), for different table patterns. I then produced a simpler design and
implemented it in PySpark, which gave a significant performance improvement over the CDL
pipelines. The client used Azure DevOps for continuous delivery and integration.
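The basic shape of such a pipeline (a filter stage followed by a keyed aggregation per table
pattern) can be sketched in plain Python; in PySpark the same steps would be DataFrame
`filter`/`groupBy`/`agg` calls, and the column names here are hypothetical:

```python
# Plain-Python sketch of the filter + aggregate pattern of the pipelines.
from collections import defaultdict

rows = [
    {"region": "EU", "volume": 10},
    {"region": "EU", "volume": 5},
    {"region": "US", "volume": 7},
]

# Stage 1: filter (in the original HQL this was a WHERE clause).
filtered = [r for r in rows if r["volume"] > 4]

# Stage 2: aggregate per key (GROUP BY region, SUM(volume)).
totals = defaultdict(int)
for r in filtered:
    totals[r["region"]] += r["volume"]

print(dict(totals))  # {'EU': 15, 'US': 7}
```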

11/2022 - 11/2022
Tech lead
Deutsche Bank; Tata Consultancy Services - Capital Markets Group

Project Deutsche Bank
Company Tata Consultancy Services - Capital Markets Group
Role and Responsibilities
I worked as Tech Lead and individual contributor in the credit-risk domain, managing a team of 10.
Used Python, PySpark, and C++ for the credit-risk application, which performs loan-risk
calculations. Some applications were implemented on Hadoop and HDFS, with data ingestion from
Oracle into HDFS; these were based on Hive, with Hue as the query-execution tool. Using the data,
we identified the customer sectors (e.g. healthcare, automotive, chemical industry) with the
highest default rates, a form of data analysis aimed at understanding reasons and patterns. The
client used Jira for issue tracking.
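The per-sector default analysis described above amounts to a default rate per grouping key. A
minimal sketch with synthetic data (the real job ran on Hive/HDFS at scale, not in-memory Python):

```python
# Sketch: default rate per customer sector over synthetic loan records.
from collections import Counter

# (sector, defaulted?) pairs; invented data for illustration only.
loans = [
    ("Healthcare", True), ("Healthcare", False),
    ("Automotive", True), ("Automotive", True),
    ("Chemicals", False),
]

totals = Counter(sector for sector, _ in loans)
defaults = Counter(sector for sector, defaulted in loans if defaulted)
rates = {s: defaults[s] / totals[s] for s in totals}

# Sector with the highest default rate.
worst = max(rates, key=rates.get)
print(worst, rates[worst])  # Automotive 1.0
```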

11/2022 - 11/2022
Technical Analyst
HSBC

Company HSBC, Pune
Role and Responsibilities
Worked as Technical Analyst and developer, using Visual Studio for development. Our application
generated front-office reconciliation reports: it pulls data from the database, writes the files,
and sends the generated report to the Intelli-match system. Technologies used were Python, C++,
and Control-M.
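A minimal sketch of the pull-from-database, write-to-file step (sqlite3 stands in for the
production database here; the table and column names are hypothetical):

```python
# Sketch: query a database and write a CSV extract for downstream handoff.
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (trade_id TEXT, amount REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("T1", 100.0), ("T2", 250.5)])

# Write the reconciliation extract; in production the resulting file
# would be handed off to the matching system.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["trade_id", "amount"])
for row in conn.execute("SELECT trade_id, amount FROM trades ORDER BY trade_id"):
    writer.writerow(row)

report = buf.getvalue()
print(report)
```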

09/2015 - 01/2022
Big Data Technical architect
Tata Consultancy Services - Capital Markets Group

Role and Responsibilities
Working as a Big Data Technical Architect for a US-based refinery client. My responsibilities are
to design and develop data pipelines using Azure Data Factory and to explore different Azure
services to find the best solutions in terms of cost, performance, and scalability. The client
currently uses Azure Data Lake, Azure Data Factory, Azure Function Apps, VS Code, Azure Data
Explorer, and Azure Key Vault on the Azure cloud. We use Azure DevOps and CI/CD, with deployment
automated across the dev, pre-prod, and prod environments.
For merging we use ARM templates for Azure Data Factory.

01/2014 - 02/2015
Project Lead
iGATE

Project Infusion Pump (a medical device)
Company iGATE, Pune (Now Capgemini)
Role and Responsibilities
Role was Project Lead; I worked as technical lead for a team of five and was also responsible for
application maintenance. I was part of an R&D team exploring different testing frameworks such as
VectorCAST and the lightweight MapuSoft libraries, and worked as a developer in C++ and Python.

02/2013 - 01/2014
software engineer
Inautix

Project Structured derivative reporting tool
Company Inautix, Pune
Role and Responsibilities
Role was Software Engineer; responsibilities were individual contribution and application
maintenance. The application manages the complexities of pricing, booking, and limits management.
Technologies: C++, Oracle, OCCI.

12/2010 - 01/2013
Senior Software Engineer
Synechron

Project File Search Tool
Company Synechron
Role and Responsibilities
Senior Software Engineer. I worked as an individual contributor on C++/Oracle technologies;
development, bug fixing, and enhancements were the main parts of the application work.
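The core of a file search tool of this kind is a recursive walk with name matching. The original
was written in C++; a Python sketch of the same idea, for illustration only:

```python
# Sketch: recursive file search by glob pattern using the stdlib.
import fnmatch
import os

def find_files(root: str, pattern: str) -> list:
    """Walk `root` recursively and return paths whose names match `pattern`."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in fnmatch.filter(filenames, pattern):
            matches.append(os.path.join(dirpath, name))
    return sorted(matches)
```

For example, `find_files("/var/log", "*.log")` would list all `.log` files under `/var/log`.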
11/2009 - 12/2010
Software Engineer
Round A Clock Solutions

Project CITIFT
Company Round A Clock Solutions
Role and Responsibilities
Software Engineer; worked mostly on Tuxedo, an integration middleware that provides features such
as fault tolerance and load balancing.

Responsibilities:
* Technical Research and Development: responsible for research on OCCI (the Oracle C++ Call
Interface libraries).
* Key responsibility was to develop the framework for a C++ application that handles message
routing (send and receive) across different middleware such as Tuxedo and MQSeries.
* Code Review and Optimization: provided solutions for memory leaks in applications and suggested
various C++ optimization techniques to peers.
Environment: C++, Oracle, UNIX, MQSeries, Tuxedo 8.1, OCCI, Visual Basic, Purify memory-detection
tool
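The routing framework's core idea, dispatching each outbound message to a handler for its target
middleware, can be sketched briefly. The production framework was C++ over Tuxedo and MQSeries;
this Python version, with invented handler names, only illustrates the dispatch pattern:

```python
# Sketch: route messages to a per-middleware handler.
from typing import Callable, Dict

class MessageRouter:
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[str], str]] = {}

    def register(self, middleware: str, handler: Callable[[str], str]) -> None:
        """Associate a middleware name with a send handler."""
        self._handlers[middleware] = handler

    def send(self, middleware: str, payload: str) -> str:
        """Dispatch payload to the registered handler, or fail loudly."""
        try:
            return self._handlers[middleware](payload)
        except KeyError:
            raise ValueError(f"no handler for {middleware!r}") from None

router = MessageRouter()
router.register("tuxedo", lambda p: f"tuxedo:{p}")
router.register("mq", lambda p: f"mq:{p}")
print(router.send("mq", "hello"))  # mq:hello
```

Keeping the middleware-specific code behind a uniform interface is what lets the same application
talk to both Tuxedo and MQSeries without scattering conditionals through the business logic.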

Local Availability

Only available in these countries: India