Dynamic ETL Developer/Data Engineer/Data Analyst with 9 years of experience helping companies through diverse data transitions, including sensitive financial data and large-scale big data installations. Brings strong knowledge of big data analytics and promotes extensive simulation and testing to ensure smooth ETL execution. Known for delivering quick, effective solutions that automate and optimize data management tasks. Talented BI professional with an advanced understanding of successful strategies for identifying, protecting, and capitalizing on data assets. Able to construct new systems to answer complex business needs.
GALAXY is the primary data warehouse of UHG and one of the most comprehensive in the entire health care industry. It follows a dimensional schema, with data and tables divided among different subject areas. The primary objective of this application is to provide a single, consolidated environment for processing claims and health service data sourced both internally and externally, and to make it available at a central location accessible to any entity within UnitedHealth Group that has the appropriate authority.
Responsibilities:
• Extracted data from a variety of source systems using different ETL tools.
• Involved in the design of ETL jobs that apply transformation logic.
• Worked on complex mappings in DataStage.
• Used FastLoad and MultiLoad utilities for loading data to staging and target Teradata tables respectively.
• Developed an automated script, as an innovation, to dynamically pick the suitable load utility based on data volume (see the first sketch after this list).
• Worked on TPT LOAD & TPT STREAM utilities.
• Used Teradata Export utilities for reporting purposes.
• Developed TWS jobs to schedule the process based on the frequency of each source.
• Created dependencies on the source systems using the hard-dependency concept rather than file watchers.
• Involved in purging data based on retention periods and developed an automated process for data purge and archival.
• Recently integrated with the Hadoop data lake, using Hive as a structured data layer on top of HDFS.
• Implemented Spark as the processing backbone for Hive queries and wrote Python functions to cleanse unstructured and poor-quality data into a form the ETL layer can consume (see the second sketch after this list).
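Below is a minimal sketch of the volume-based utility selection mentioned above, assuming flat-file sources and stdin-driven control scripts; the row-count threshold and file paths are illustrative assumptions, not values from the actual project.

import subprocess

# Illustrative threshold (an assumption, not the project's real cutoff):
# above it, FastLoad's bulk loading pays off; below it, MultiLoad
# handles the incremental load.
ROW_THRESHOLD = 1_000_000

def count_rows(flat_file):
    # Gauge the volume of the incoming flat file.
    with open(flat_file, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

def pick_utility(flat_file):
    # fastload targets empty staging tables; mload handles target tables.
    return "fastload" if count_rows(flat_file) >= ROW_THRESHOLD else "mload"

def run_load(flat_file, control_script):
    # Both Teradata utilities read their control script from stdin.
    utility = pick_utility(flat_file)
    with open(control_script, "r", encoding="utf-8") as script:
        subprocess.run([utility], stdin=script, check=True)

# Example usage (paths are hypothetical):
# run_load("/data/claims_20240101.dat", "/scripts/claims_load.ctl")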
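And a minimal sketch of Spark serving as the processing backbone for Hive queries, with a Python function cleansing poor-quality data before it reaches the ETL layer; the application, table, and column names below are placeholders, not the real GALAXY schema.

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# Hive-enabled session: Hive tables on HDFS become queryable via Spark SQL.
spark = (
    SparkSession.builder
    .appName("hive-refinement")  # application name is illustrative
    .enableHiveSupport()
    .getOrCreate()
)

def normalize_code(raw):
    # Example cleansing rule: trim, upper-case, and null out junk values
    # so the downstream ETL layer sees a consistent field.
    if raw is None:
        return None
    value = raw.strip().upper()
    return value if value and value not in ("N/A", "NULL", "?") else None

normalize_udf = udf(normalize_code, StringType())

# Hypothetical staging table and column:
claims = spark.sql("SELECT claim_id, diag_code FROM staging.claims_raw")
refined = claims.withColumn("diag_code", normalize_udf("diag_code"))
refined.write.mode("overwrite").saveAsTable("refined.claims")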
UGAP is a national consolidated data warehouse for UnitedHealth Group. It contains data from multiple claim processing systems, such as UNET, COSMOS, MAMSI, and GALAXY, with sources categorized into flat files and a DB2 database. Extracted data is transformed by applying business logic and loaded into the UGAP data mart, which resides on Teradata. The final warehouse describes the complete details of an individual claim, including provider, member, and pharmacy details.
Responsibilities:
Worked on the Group Economic Capital project under the Basel Accords (calculating the risk capital required for the HSBC group) and the Global Business Market project (creating a simple data mart for the global group's decision-making reports).
Responsibilities: