Jessica Claire
  • Montgomery Street, San Francisco, CA 94105
  • H: (555) 432-1000
Professional Summary

· Over 7 years of experience in data engineering, data integration, data analytics, and software development, leveraging the latest cloud and on-premises technologies.

· Built big data ecosystems and data lakes for structured and semi-structured data using various open-source technologies to address business needs.

· Extensive experience with SQL and other relational databases, with deep involvement in query optimization and query pruning.

· Worked on projects involving real-time data, web scraping, images, and PDFs, and supported data mining efforts across the organization using tech...

· Created data warehouses supporting the organization's operational analytics needs and helped build executive-level reports and dashboards.

· Working knowledge of developing scalable Spark applications using Spark Core, DataFrames, Spark SQL, PySpark, Python, and Spark Streaming with Kafka, orchestrated using Apache Airflow.

· End-to-end orchestration of data pipelines and code releases using Git for CI/CD.

· Expertise in performing root cause analysis and understanding business processes and requirements, with strong analytical skills.

· Worked independently and as a team player; successfully led and delivered data management projects.


Skills

    · Hadoop Stack

    · Python, SQL, Apache Spark

    · Google Cloud Platform (Google Cloud Storage, Google Compute, Google SQL Instance, BigQuery)

    · Microsoft Azure (Azure Data Factory, Azure Databricks, Azure Synapse)

    · Databases: IBM Netezza, Teradata, SQL Server, MySQL, PostgreSQL, CosmosDB

Education

Master of Science: Computer Science, Expected in 08/2015
University of Houston-Clear Lake - Houston, TX
Bachelor of Science: Computer Science, Expected in 05/2013
Jawaharlal Nehru Technological University - Hyderabad, IN
Certifications

  • Azure Fundamentals (AZ-900)
  • Azure Data Fundamentals
  • Google Cloud Fundamentals
  • Data Management for Clinical Research
  • Using Databases with Python
Work History
Big Data Application Developer , 08/2016 - Current
Ascension Health - Mount Prospect, IL
  • Involved in design, analysis, implementation, testing, and support across the full data engineering lifecycle.
  • As part of the Big Data team, ensured data integration (ETL/ELT), data integrity and data quality.
  • Extracted EHR (Epic) data from SQL Server to the data staging area using BCP.
  • Wrote complex SQL, shell, and Python scripts for data transformation and data modeling.
  • Wrote Python scripts to parse semi-structured data such as XML.
  • Increased daily pipeline performance by 90% by tuning SQL scripts.
  • Worked on multiple phenotyping projects using SQL, stored procedures, and triggers.
  • Led a project setting up the CLAMP NLP pipeline in GCP for researchers to extract medical concepts from EHR data (using Google Compute, BigQuery, Google SQL Instance, and Google Cloud Storage).
  • Migrating ETL pipelines to Azure using Azure Data Factory and Azure Databricks.
  • Heavily used Jupyter notebooks to analyze data.
  • Extracted data from APIs with Python.
  • Data Visualization using Apache Superset and Power BI.
  • Responsible for creating source-to-destination field mapping documents.
Hadoop Data Engineer, 12/2015 - 08/2016
Sears Holdings City, STATE,
  • Created data pipeline to migrate data from OLTP databases to Hadoop and Google Cloud.
  • Ingested data from Teradata to HDFS using Sqoop and BTEQ.
  • Used Hive for data transformation and created partitioned tables in Hive.
  • Performance tuning for Hive and Pig scripts.
  • Worked with Google Dataflow to ingest data from different sources, loaded it into BigQuery, and created Tableau dashboards.
  • Analyzed costs incurred by BigQuery and optimized queries for cost savings.
  • Compared performance between Hive and Google BigQuery.
Hadoop Developer Intern, 05/2015 - 12/2015
Software Technology Labs City, STATE,
  • Worked on a project migrating data from MySQL to Hadoop (HDFS) using the Sqoop utility.
  • Developed MapReduce jobs for data cleansing and preprocessing.
  • Created Hive tables for performance optimization using bucketing & partitioning.
  • Managed and reviewed Hadoop log files.
  • Developed HiveQL scripts to process data from raw to staging tables and store the refined data in partitioned tables for curation and consumption.
  • Involved in documentation of functional and technical requirements specifications.
Data Analyst, 05/2013 - 12/2013
Peoples IT City, STATE,
  • Gathered business report requirements and modeled database objects, including tables, views, and materialized views, using SQL.
  • Maintained data integrity while transforming data extracted from multiple sources, and ensured data quality through various validation checks.
  • Developed test data and created testing scenarios for the end-to-end testing life cycle.
  • Developed database objects, including tables, views, and materialized views, using SQL.
