I am seeking a competitive and challenging environment where I can serve your organization and build a career as a Data Warehouse Developer.
Extensive experience across all phases of the ETL development lifecycle, from requirements gathering through design, testing, implementation, and support.
*Excellent technical and analytical skills, with a clear understanding of design goals and extensive experience preparing technical design specification documents, including source-to-target mapping documents.
*Strong leadership and problem-solving abilities, with experience assisting other developers and advising them on ETL best practices.
UNIX Korn shell
SQL, Hadoop HDFS, Hive, Python
Cloud Computing with Google BigQuery
IBM Tivoli Workload Scheduler
HP Quality Center 10
OOAD, E-R design, MS Project, MS Visio
THE HOME DEPOT SSC, Atlanta, GA
Data Warehouse Developer, June 2012 to Current
Worked with my manager and team leads to review project requirements and decide on resource allocation and timelines.
Involved in architectural and design discussions of new projects under Supply Chain Data Integration.
Developed data pipelines to move source data from Oracle and DB2 LUW to Google Cloud Platform and transform it in Google BigQuery.
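To illustrate the load-and-transform pattern described above (not the actual production code), here is a minimal Python sketch using the google-cloud-bigquery client; the project, dataset, table, and bucket names are hypothetical placeholders.

```python
from google.cloud import bigquery

# Hypothetical project/dataset/bucket names for illustration only.
client = bigquery.Client(project="supply-chain-di")

# Stage 1: load a source extract (e.g. exported from Oracle/DB2 LUW
# to CSV in Cloud Storage) into a BigQuery staging table.
load_job = client.load_table_from_uri(
    "gs://sc-di-landing/orders/orders_extract.csv",
    "supply-chain-di.staging.orders",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # block until the load completes

# Stage 2: transform the staged rows into a reporting table with SQL.
transform_sql = """
    CREATE OR REPLACE TABLE `supply-chain-di.reporting.daily_orders` AS
    SELECT order_id,
           store_id,
           DATE(order_ts) AS order_date,
           SUM(quantity)  AS total_qty
    FROM `supply-chain-di.staging.orders`
    GROUP BY order_id, store_id, order_date
"""
client.query(transform_sql).result()
```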
Worked closely with the Ab Initio ETL infrastructure team to define the sandbox directory structure for each new project.
Provided file space growth estimates to the UNIX infrastructure team and devised archiving strategies for each project.
Redesigned the Ab Initio ETL code for critical jobs to improve performance and reduce CPU consumption in Production, ensuring they met targeted SLAs.
Worked closely with the ETL infrastructure team to grant developers project check-out/check-in and migration privileges on an as-needed basis.
Communicated with Ab Initio Support to resolve product-related issues and concerns, such as product integration with the Hadoop ecosystem.
Responsible for code promotion from QA to Production environments for projects developed by the offshore team.
Performed cleanup of the Standard Sandbox and EME to remove unnecessary files checked in by other developers.
Worked extensively in the UNIX environment, created UNIX processes through Korn shell scripts, and collaborated with UNIX administrators to resolve performance-related issues.
Designed the scheduling and automation of over 50 Ab Initio jobs through IBM Tivoli Workload Scheduler (Maestro).
Presented key performance metrics and impact analysis to the Change Control Committee three days before deploying each project to production.
Supervised the code migration from Ab Initio 3.1 to Ab Initio 3.4.
Performed performance analysis of Ab Initio jobs when the target database was upgraded from Teradata 13 to Teradata 14.
Provided troubleshooting support for code deployed to Production by my team, in collaboration with ETL production support.
Responsible for impact analysis and rescheduling of about 32 data mover jobs following a major upgrade of the DCM (Demand Change Management) system.
Automated scripts to move data between Teradata systems (EDW and DCM).
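As an illustrative analogue of the data movers described above (not the actual production scripts), here is a simplified Python sketch of a Teradata-to-Teradata mover using the open-source teradatasql driver; host names, credentials, and table names are hypothetical.

```python
import teradatasql

# Hypothetical hosts, credentials, and tables for illustration only.
SRC = {"host": "edw.example.com", "user": "etl_user", "password": "***"}
DST = {"host": "dcm.example.com", "user": "etl_user", "password": "***"}

with teradatasql.connect(**SRC) as src, teradatasql.connect(**DST) as dst:
    src_cur, dst_cur = src.cursor(), dst.cursor()

    # Pull the day's delta from the EDW source table.
    src_cur.execute(
        "SELECT order_id, store_id, quantity FROM edw.orders "
        "WHERE load_dt = CURRENT_DATE"
    )

    # Batch-insert into the DCM target to keep memory use bounded.
    while True:
        rows = src_cur.fetchmany(10000)
        if not rows:
            break
        dst_cur.executemany(
            "INSERT INTO dcm.orders (order_id, store_id, quantity) "
            "VALUES (?, ?, ?)",
            rows,
        )
```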
THE HOME DEPOT SSC, Atlanta, GA
IT Intern, May 2011 to August 2011
Developed Ab Initio graphs to extract data from DB2, Oracle, flat files and XML files.
Developed graphs with multiple subgraphs in a large, massively parallel Teradata database environment, handling complex data sources.
Used various Ab Initio database and program components.