
Software engineer resume example with 4+ years of experience

Jessica Claire
  • 609 Johnson Ave., Tulsa, OK 49204
  • H: (555) 432-1000
  • resumesample@example.com
  • Nationality: India
  • Marital status: Single
Summary

Data engineer specializing in AWS and Azure with 5 years of experience, driving innovation and efficiency through cloud-native data solutions. Expertise in architecting and implementing scalable data pipelines on AWS and Azure, empowering organizations to unlock the value of their data. Proven ability to design and optimize data architectures on both platforms, leveraging a wide range of services to ensure high-performance data processing and analytics. Proficient with AWS and Azure data services such as Amazon S3, AWS Glue, Azure Data Factory, and Azure Databricks, enabling seamless data integration and transformation. Skilled in deploying serverless data workflows on AWS Lambda and Azure Functions for cost-effective, automated data processing at scale. Demonstrated success in building end-to-end data solutions encompassing data ingestion, storage, processing, and visualization. Strong programming skills in Python and SQL, combined with deep knowledge of AWS and Azure, support robust data engineering solutions for complex business requirements. Passionate about staying abreast of the latest AWS and Azure advancements and continuously exploring new tools and techniques to drive innovation and maximize the value of data.
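The serverless data workflows mentioned above typically follow an event-driven pattern: an object lands in S3, a Lambda function is triggered, and a lightweight transformation writes the result back for downstream jobs. Below is a minimal Python sketch of that pattern; the bucket layout, the processed/ output prefix, and the toy line-normalization step are illustrative assumptions, not details taken from this resume.

```python
# Hypothetical sketch of an S3-triggered AWS Lambda data step. Bucket names,
# the processed/ prefix, and the normalization logic are placeholders.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        # Each record describes an object that just landed in the source bucket.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Read the raw object, apply a trivial clean-up, and write the result
        # under a processed/ prefix so downstream jobs can pick it up.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        cleaned = [line.strip().lower() for line in body.splitlines() if line.strip()]

        s3.put_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            Body="\n".join(cleaned).encode("utf-8"),
        )

    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```

An Azure Functions handler for the same workflow would follow the same shape, reacting to Blob Storage events instead of S3 notifications.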

Skills
  • Business Intelligence Tools
  • Extract, Transform, Load (ETL)
  • Business Intelligence (BI)
  • SQL Transactional Replication
  • Data Analysis
  • Load Balancing
  • Data Mining
  • Deep Learning
  • Warehouse Models
  • Data Analytics
  • Data Security
  • Machine Learning
  • Key Performance Indicators
  • Requirements Gathering
  • Database Design
Experience
Software Engineer, 05/2022 - Current
Inside Real Estate, Murray, UT
  • Managed performance monitoring and tuning while identifying and resolving issues within the database realm.
  • Collaborated with solution architects to define database and analytics engagement strategies for operational territories and key accounts.
  • Provided global thought leadership in analytics solutions to benefit customers.
  • Identified key use cases and associated reference architectures for market segments and industry verticals.
  • Developed and managed enterprise-wide data analytics environments.
  • Wrote and coded logical and physical database descriptions, specifying database identifiers to management systems.
  • Worked as part of project teams to coordinate database development and determine project scopes and limitations.
  • Created and implemented complex business intelligence solutions.
  • Collected, outlined and refined requirements, led design processes and oversaw project progress.
  • Identified, protected and leveraged existing data.
Data Engineer, 12/2020 - 04/2022
Facebook, Northridge, CA
  • Led data ingestion efforts, integrating diverse data sources into Snowflake and orchestrating data manipulation through efficient scripts.
  • Architected and implemented schema designs to integrate batch data from flat files and databases into Google Cloud Storage, enabling streamlined data processing.
  • Leveraged Spark on Cloud Dataproc to process and analyze large-scale datasets, generating data-driven insights and impactful reports on student finance accounts using Power BI (see the sketch after this list).
  • Developed a robust automation tool using Python, PostgreSQL, and Excel, enabling seamless processing of extensive transactional data and improving efficiency and accuracy.
  • Demonstrated expertise in handling complex data pipelines, ensuring data integrity, and delivering valuable insights by leveraging a combination of cloud technologies and programming languages.
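As a rough illustration of the Dataproc work described in this role, the sketch below reads batch CSV files from Google Cloud Storage with PySpark and aggregates balances for a reporting layer. The bucket, file layout, and column names (term, balance) are hypothetical placeholders rather than details from the resume.

```python
# Hypothetical sketch of a Dataproc batch job: read flat files landed in Google
# Cloud Storage and aggregate balances for a downstream report. Bucket, paths,
# and column names (term, balance) are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("student-finance-report").getOrCreate()

# Batch CSV files previously ingested into GCS.
accounts = spark.read.option("header", True).csv(
    "gs://example-bucket/student_finance/accounts/*.csv"
)

# Summarize outstanding balances per term for the reporting layer (e.g. Power BI).
summary = (
    accounts
    .withColumn("balance", F.col("balance").cast("double"))
    .groupBy("term")
    .agg(
        F.sum("balance").alias("total_balance"),
        F.count("*").alias("account_count"),
    )
)

# Write the aggregate back to GCS as Parquet for downstream consumption.
summary.write.mode("overwrite").parquet("gs://example-bucket/student_finance/summary/")
```

On Dataproc, a script like this would typically be submitted with `gcloud dataproc jobs submit pyspark`, with the aggregated output then surfaced to Power BI through a warehouse or connector.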
Associate Software Engineer, 02/2018 - 04/2020
Veritas, New York, NY
  • Worked with the BI team to gather report requirements and used Sqoop to export data into HDFS and Hive.
  • Involved in the following phases of analytics using R, Python, and Jupyter Notebook:
  • Data collection and treatment: Analyzed existing internal and external data, worked on entry errors and classification errors, and defined criteria for missing values.
  • Data mining: Used cluster analysis to identify customer segments, decision trees to distinguish profitable and non-profitable customers, and market basket analysis to study customer purchasing behavior and part/product associations.
  • Developed multiple MapReduce jobs in Java for data cleaning and pre-processing.
  • Assisted with data capacity planning and node forecasting.
  • Installed, configured, and managed Flume infrastructure.
  • Administered Pig, Hive, and HBase, installing updates, patches, and upgrades.
  • Worked closely with the claims processing team to identify patterns in the filing of fraudulent claims.
  • Developed MapReduce programs to extract and transform the data sets, exporting the results back to an RDBMS using Sqoop.
  • Observed patterns in fraudulent claims using text mining in R and Hive.
  • Exported the required information to the RDBMS using Sqoop, making the data available to the claims processing team to assist in processing claims.
  • Developed MapReduce programs to parse the raw data, populate staging tables, and store the refined data in partitioned tables in the EDW.
  • Created tables in Hive, loaded the structured data resulting from MapReduce jobs, and developed HiveQL queries to extract the required information.
  • Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics (see the sketch after this list).
  • Imported data (mostly log files) from various sources into HDFS using Flume.
  • Tested raw data and executed performance scripts.
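To make the Hive trend-comparison bullet above more concrete, here is a small Python sketch that runs a HiveQL query joining freshly staged claims against an EDW reference table. The PyHive client, hostname, and all table and column names (staging_claims_daily, edw_claims_reference, region, avg_daily_claims) are assumptions for illustration; the same query could equally be run directly from the Hive CLI.

```python
# Hypothetical sketch: compare freshly staged claim counts per region against
# historical averages kept in an EDW reference table. Host, database, table,
# and column names are placeholders, not taken from the resume.
from pyhive import hive

conn = hive.Connection(host="hive-server.example.com", port=10000, database="edw")
cursor = conn.cursor()

cursor.execute("""
    SELECT f.region,
           f.claim_count       AS fresh_claims,
           h.avg_daily_claims  AS historical_avg
    FROM (
        SELECT region, COUNT(*) AS claim_count
        FROM staging_claims_daily
        GROUP BY region
    ) f
    JOIN edw_claims_reference h
      ON f.region = h.region
""")

# Flag regions where today's volume is well above the historical baseline.
for region, fresh_claims, historical_avg in cursor.fetchall():
    if fresh_claims > 1.5 * historical_avg:
        print(f"Spike in claims for {region}: {fresh_claims} vs avg {historical_avg:.1f}")
```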
Education and Training
Master's degree: Information Technology, Expected in 05/2022
Arizona State University
Bachelor of Technology - BTech: Engineering, Expected in 01/2017
Vellore Institute of Technology
Intermediate: MPC, Expected in 01/2013
Sri Chaitanya Junior Kalasala

