Jessica Claire
100 Montgomery St., 10th Floor | (555) 432-1000
Professional Summary
  • Experienced IT professional with an MS in Computer Science and hands-on experience implementing and testing data-driven business solutions.
  • Deep expertise in data warehousing, process validation, and business needs analysis.
  • Proven ability to understand customer requirements and translate them into actionable project plans. Dedicated and hard-working, with a passion for Big Data.
  • Proficient in data structures, mathematics, statistics, and business intelligence.
  • SDLC Methodologies: Agile/Scrum, Waterfall, RAD (Rapid Application Development)
  • Programming Languages: Python, Scala, PySpark, Java
  • ETL & Reporting Tools: AWS Glue, Talend, SSIS, PowerBI
  • Databases: SQL Server XXX2/14, PostgreSQL, Cassandra, DynamoDB, AWS RDS
  • Data Processing: Hadoop, Spark, Vertica, Amazon Redshift, Kafka, AWS Lambda, Azure
  • Data Visualization: PowerBI, AWS QuickSight, TIBCO Spotfire
Work History
09/XXX9 to Current Big Data / Cloud Engineer University Of North Carolina Greensboro | Fort Collins, CO

Client: Pfizer

Duration: June, 2020 - Present

I am working on a project in the pharmaceutical industry. I am responsible for streaming and processing data from multiple applications within the Global Product Development Platform and for building CI/CD data pipelines for multiple applications. I authored data validation, data quality, and data transformation rules to meet the requirements of the individual products defined by the business.
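Data-validation and data-quality rules of the kind described above are often expressed as small composable predicates applied to each record. A minimal sketch in plain Python (the field names and thresholds here are invented for illustration and are not the project's actual rules):

```python
# Hedged sketch of a tiny rule engine: each rule is a predicate over a record.
# "claim_id" and "amount" are hypothetical field names, not from the project.

def not_null(field):
    """Rule: the field must be present and non-null."""
    return lambda rec: rec.get(field) is not None

def in_range(field, lo, hi):
    """Rule: the field must be a value within [lo, hi]."""
    return lambda rec: rec.get(field) is not None and lo <= rec[field] <= hi

RULES = [not_null("claim_id"), in_range("amount", 0, 1_000_000)]

def validate(records, rules=RULES):
    """Partition records into (valid, invalid) by applying every rule."""
    valid, invalid = [], []
    for rec in records:
        (valid if all(rule(rec) for rule in rules) else invalid).append(rec)
    return valid, invalid
```

In practice such rules would run inside a Spark job rather than over plain lists, but the partition-by-predicate pattern is the same.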


Client: MassMutual Life Insurance Company

Duration: September, XXX9 - June, 2020

I worked on a project with a client in the insurance industry. I was responsible for streaming and processing claims data and for building CI/CD data pipelines for various lines of business.


  • Performed performance tuning of targets, sources, mappings, and sessions.
  • Worked with AWS products such as EMR, RDS, Athena, and Glue.
  • Created data pipelines to handle large volumes of data on AWS using Spark, Athena, and Vertica.
  • Implemented data streaming with Kafka.
  • Built CI/CD pipelines to handle large volumes of data on AWS and on premises using Jenkins, Java, Spark, Athena, and Vertica.
  • Authored data transformation rules in Spark, PL/SQL, and PostgreSQL on AWS RDS for new and existing data pipelines.
  • Worked with both RDDs and DataFrames in Spark on AWS EMR for big-data processing.
  • Used tools such as Talend, Alteryx, and Spotfire as part of data-pipeline workflows.
  • Managed and supported multiple UDDLs (User-Defined Data Lakes) for the GPD platform.
  • Created and published interactive BI dashboards with machine-learning-powered insights using AWS QuickSight.

Environments: Java, Python, Spark, Jenkins, Apache Airflow, AWS RDS, AWS Redshift, AWS EMR, AWS QuickSight, Vertica
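The Spark transformations mentioned above (maps, filters, and keyed aggregations over RDDs or DataFrames) follow a transform-then-aggregate pattern. A dependency-free stand-in in plain Python, with invented column names, showing the shape of such a rule (this mirrors `rdd.map(...).filter(...).reduceByKey(...)` without requiring Spark):

```python
# Hedged stand-in for a Spark-style pipeline using plain Python iterables.
# "region" and "amount" are hypothetical column names for illustration only.

def transform(rows):
    # map: project each row to a (key, value) pair
    pairs = ((r["region"], r["amount"]) for r in rows)
    # filter: drop non-positive amounts
    pairs = ((k, v) for k, v in pairs if v > 0)
    # reduceByKey: sum amounts per key
    totals = {}
    for k, v in pairs:
        totals[k] = totals.get(k, 0) + v
    return totals
```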

01/XXX8 to 06/XXX9 Research Assistant Apex Systems | Columbus, MI

Worked as an Undergraduate Research Assistant in the Department of Computer Science and Operations Research.

  • NJIT Public Safety Project: Designed an application for crime prediction and analytics for the NJIT Department of Public Safety.
  • Twitter Sentiment Analysis and Optimized Querying on Unstructured Data: Performed sentiment analysis and analytics on raw tweets to extract useful insights.
11/XXX5 to 12/XXX7 Data Engineer Amgen Inc. | New Albany, OH

Client: Wells Fargo

TolloTech provides professional information technology recruiting, staffing, consulting, and business services to its clients.

I worked with the Card Systems and Analytics team. The project included multiple work streams and depended on many cross-functional teams, including Fraud Analytics, Claims, Virtual Assistant, and Marketing, which catered to more than 50 million active debit/credit card users.


  • Transformed project data requirements into project data models.
  • Optimized existing data models to reduce redundancy and complexity per the functional requirements gathered from the business.
  • Translated requirements into technical documentation such as data-flow and process-flow models.
  • Worked on both real-time and batch processing of transactional data for fraud analytics using Hadoop, MapReduce, and PL/SQL.
  • Deployed clusters and provisioned the appropriate hardware for processing large volumes of data on Azure, using various Azure services to optimize cost and performance.
  • Worked with web services (REST and/or SOAP) and APIs to integrate with third-party services and data.
  • Identified, recommended, and implemented the most appropriate paradigms and technology choices for batch and real-time data processing based on the business application and needs.

Environments: SQL Server XXX2/14, MS SSRS, MS SSIS, Python, Java, PowerBI, Teradata, Spark, Azure
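The batch MapReduce processing cited above reduces to the classic map/shuffle/reduce pattern. A toy word count illustrating that pattern in plain Python (an illustrative sketch, not code from the engagement):

```python
from collections import defaultdict

# Hedged illustration of MapReduce phases, not the project's actual jobs.

def map_phase(lines):
    # map: emit a (word, 1) pair per token
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # shuffle: group all values by key
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    # reduce: aggregate each key's values
    return {k: sum(vs) for k, vs in groups.items()}

def word_count(lines):
    return reduce_phase(shuffle(map_phase(lines)))
```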

01/XXX5 to 11/XXX5 Software Engineer TolloTech LLC | City, STATE

Client: Investment Portfolio Reporting - provides automated, integrated, web-based investment accounting and reporting.


  • Worked as a Software Engineer focused on developing and testing an in-house investment reporting application in Java 7/8.
  • Worked on a four-person development team, primarily in Java and SQL Server 2008 R2.
  • Used JIRA (issue tracking and project management), Fisheye (real-time notification of code changes, web-based reporting, and code reviews), OpenGrok (code search and cross-referencing), and Mockito (a mocking framework for unit testing).
  • Became familiar with accounting concepts and learned how investment portfolios are constructed.
Expected in XXX9 Master of Science | Computer Science New Jersey Institute of Technology, Newark, NJ
Expected in XXX4 Bachelor of Science | Computer Science New Jersey Institute of Technology, Newark, NJ
