
Senior Performance Engineer resume example with 7+ years of experience

Jessica Claire
  • 609 Johnson Ave., Tulsa, OK 49204
  • H: (555) 432-1000
  • resumesample@example.com
Professional Summary

Lead Senior Performance Engineer with 8+ years of experience in development, operations, and CI/CD. Worked extensively with load-generation tools such as LoadRunner, k6, JMeter, Gatling, and Locust, with expertise in DevOps infrastructure and monitoring tools such as Datadog and CloudWatch. Seeking a position that leverages this experience, allows me to implement industry-leading solutions, and lets me grow while bringing renewed efficiency to the organization.

Accomplishments
  • Docker Certified Associate (Credential ID: 12880623)
  • Fundamentals of Containers, Kubernetes, and Red Hat OpenShift (Red Hat, via edX; Credential ID: 70b8ca4144934655948c90bb46451446)
  • AWS Certified Solutions Architect
Skills
  • Infrastructure: Docker, Kubernetes, AWS, GCP, Azure, Helm, Terraform, Jenkins
  • BI Tools: Microsoft SSAS, Tableau
  • Languages/Scripts: XML, SQL, PL/SQL, shell scripting (Bash), Python 3, Scala (Gatling DSL)
  • Version Control Tools: Git, Rational ClearCase, TFS
  • Monitoring Tools: AppDynamics, CloudWatch, ELK, Datadog
  • Build Tools: Ant, Maven, Make, MSBuild
  • Methodologies: Agile, Scrum, Waterfall
  • ETL: IBM InfoSphere (DataStage, QualityStage) and Information Analyzer 8.1, IBM DataStage 11.5/9.1, client components (Designer, Director, Manager, Administrator)
  • Testing Tools: k6, LoadUI, SmartBear ReadyAPI, Visual Studio, HP LoadRunner 11.0/12.0/12.50, HP Performance Center 12.0/12.5, ALM, HP Quality Center, NeoLoad 5.1/5.2, JMeter 2.7–4.0, Gatling FrontLine, Locust (Python)
  • Build and Deploy Tools: Jenkins, Azure DevOps, Octopus
Work History
Senior Performance Engineer, 07/2022 - Current
TriNet, Atlanta, GA
  • Used monitoring tools such as Datadog and Splunk on a frequent basis.
  • Coordinated and monitored work of co-located and remote teams.
  • Developed and deployed load test scripts with JMeter and k6.
  • Shifted performance testing left to development teams by building a performance framework: JMeter on EC2, triggered via GitHub Actions/Jenkins, with Datadog for monitoring the tests.
  • Created load profiles and custom load libraries to tag tests to individual teams in Datadog (see the k6 sketch after this list).
  • Conducted load, stress, and endurance tests using JMeter to simulate realistic user activities.
  • Gathered and defined business and functional requirements for each program.
  • Reviewed scalability, performance, and load balancing of each application.
  • Designed, developed, modified, and debugged programs.
  • Maintained existing applications and designed and delivered new applications.
  • Worked closely with clients to establish specifications and system designs.
  • Contributed ideas and suggestions in team meetings and delivered updates on deadlines, designs, and enhancements.
  • Coordinated deployments of new software, feature updates, and fixes.
  • Authored code fixes and enhancements for inclusion in future code releases and patches.
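
A minimal sketch of the kind of k6 script such a framework could use to tag load test results to an individual team in Datadog, assuming metrics are shipped through the Datadog Agent's StatsD listener; the team name, service name, and URL below are hypothetical placeholders, not the actual framework code:

```javascript
// k6 load test sketch: per-team tagging so results can be filtered in Datadog.
// Depending on the k6 version, run with a StatsD output pointed at the
// Datadog Agent, e.g.: k6 run --out output-statsd team_smoke.js
// (team, service, and URL below are hypothetical placeholders)
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 20,              // virtual users for this load profile
  duration: '5m',       // steady-state duration
  tags: {
    team: 'payments',        // every emitted metric carries the owning team
    service: 'checkout-api', // lets Datadog dashboards filter by service
  },
  thresholds: {
    http_req_duration: ['p(95)<500'], // fail the run if p95 latency exceeds 500 ms
    http_req_failed: ['rate<0.01'],   // or if more than 1% of requests fail
  },
};

export default function () {
  const res = http.get('https://example.internal/api/health');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```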
Senior Performance Engineer, 09/2021 - 07/2022
TriNet
  • Developed a performance framework using k6 and JavaScript, with GitHub Actions for CI/CD, Datadog for monitoring, and Slack integration for alerting app teams (see the sketch after this list).
  • Released the performance framework to run 65 tests across 8 app teams using shared code libraries, with minimal effort required from the app teams.
  • Converted 36 JMeter/LoadRunner tests to k6 and authored the performance framework.
  • The framework comprises performance libraries, deployments, integration, monitoring, and alerting.
  • Documented testing procedures for developers and future testing use.
  • Reviewed scalability, performance, and load balancing of 6 applications.
  • Conducted load, stress, and endurance tests using k6 to simulate realistic user activities.
  • Used monitoring tools such as Datadog and Splunk on a frequent basis.
  • Queried application logs in Splunk and Datadog during load tests to debug and identify API issues.
  • Created end-to-end live monitoring using Docker containers and Datadog, providing a live-feed dashboard with per-team filters.
  • Authored a common JavaScript library used in each app team's repository.
  • Conducted and led peak-time spike, endurance, and load testing during Thanksgiving.
  • Standardized scenarios through shared k6/JavaScript code.
  • Authored code that lets app teams run multiple load profiles by updating a JSON file.
  • Recommended changes and corrections to developers for optimal software performance and usability.
  • Wrote reusable custom functions for data loading and processing during performance tests.
  • Wrote libraries to run different scenarios within individual tests.
  • Authored GitHub Actions workflows to run different scenarios via workflow_dispatch.
  • Reduced overall testing hours by 56% by writing and optimizing automated test scripts using JavaScript and GitHub Actions.
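
A minimal sketch of how the shared-library, JSON load-profile, and workflow_dispatch pieces could fit together in a single k6 test, assuming a hypothetical profiles.json in each repository and PROFILE/TEAM/BASE_URL environment variables supplied by the GitHub Actions workflow input; the file and helper names are illustrative, not the actual library:

```javascript
// k6 test sketch: select a load profile from a JSON file at runtime.
// A GitHub Actions workflow_dispatch input would set PROFILE, e.g.:
//   k6 run -e PROFILE=peak -e TEAM=payments checkout_test.js
// (profiles.json contents, URLs, and helper names are hypothetical)
import http from 'k6/http';
import { check, sleep } from 'k6';

// profiles.json example:
// { "smoke": { "vus": 5,   "duration": "2m"  },
//   "peak":  { "vus": 200, "duration": "30m" } }
const profiles = JSON.parse(open('./profiles.json'));
const profile = profiles[__ENV.PROFILE || 'smoke'];

export const options = {
  scenarios: {
    main: {
      executor: 'constant-vus',
      vus: profile.vus,
      duration: profile.duration,
    },
  },
  tags: { team: __ENV.TEAM || 'unassigned' }, // tag results per app team in Datadog
};

// Shared helper of the kind a common library could export for every repo.
function checkOk(res) {
  check(res, { 'status is 2xx': (r) => r.status >= 200 && r.status < 300 });
}

export default function () {
  const res = http.get(`${__ENV.BASE_URL || 'https://example.internal'}/api/orders`);
  checkOk(res);
  sleep(1);
}
```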
Senior Performance Engineer, 12/2020 - 08/2021
Mayo Clinic
  • Developed an end-to-end performance analysis program in Python to analyze over a million requests, generate graphs, and send automated email reports to senior management.
  • Developed and deployed load test scripts through Jenkins.
  • Used monitoring tools such as ELK and Datadog on a frequent basis.
  • Deployed JMeter master/worker configurations on multiple clusters for 8 application teams.
  • Automated collection of system-level and cluster metrics using shell scripts and SAR utilities.
  • Created and maintained a Jenkins node cluster, built pipelines, and automated the performance team's requirements into CI/CD using Jenkins, JMeter, a Python analytics program, and Datadog for monitoring.
  • Defined cluster configurations and deployed performance tests on on-prem servers and Azure instances in multi-region mode.
  • Performed front-end JavaScript performance testing with Lighthouse, both with and without caching (see the sketch after this list).
  • Converted 236 LoadRunner scripts to JMeter.
  • Reviewed scalability, performance, and load balancing of each application.
  • Gathered and defined business and functional requirements for each program.
  • Conducted load, stress, and endurance tests using JMeter to simulate realistic user activities.
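
A minimal sketch of the with-cache versus without-cache Lighthouse comparison described above, using Lighthouse's Node API; the target URL is a hypothetical placeholder and option names may vary slightly between Lighthouse versions:

```javascript
// Node sketch: run a Lighthouse performance audit twice, once with browser
// storage/cache cleared (cold) and once preserved (warm), then compare scores.
// (URL below is a hypothetical placeholder)
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function audit(url, keepCache) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      output: 'json',
      onlyCategories: ['performance'],
      // when true, Lighthouse skips clearing cache/storage (warm-cache run)
      disableStorageReset: keepCache,
    });
    return result.lhr.categories.performance.score;
  } finally {
    await chrome.kill();
  }
}

const url = 'https://app.example.internal/';
const coldScore = await audit(url, false); // cache cleared before the run
const warmScore = await audit(url, true);  // cache kept from earlier visits
console.log(`Performance score: cold=${coldScore} warm=${warmScore}`);
```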
Senior Performance Engineer Lead, 10/2019 - 12/2020
Company Name
  • Performance testing with Gatling FrontLine, using Gatling versions 1.8.0 and 1.2.0.
  • Created live monitoring dashboards using JMeter and the Taurus framework; automated test runs with Jenkins pipeline scripts.
  • Applied a shift-left testing approach: created a framework with a C# application and Gatling as the performance engine, and deployed the architecture in Docker containers on Fargate clusters to run load tests for 100k users. Used the same architecture in every PR build so application developers could test performance before code merge.
  • Performed load, spike, stress, endurance, performance bottleneck, benchmark, and baseline tests and analysis.
  • Created performance tests for complex workflows using Fargate and Aurora clusters, and created HL7 plugin performance test scripts for integrated messaging and data exchange services.
  • Used JMeter against web, application, and database servers at different levels and loads.
  • Expert knowledge of identifying and analyzing bottlenecks in performance testing, web throughput, server response time, and network latency.
  • Experienced with containers such as Docker, Docker Swarm, and Fargate, AWS serverless services such as Lambda, and Camunda.
  • Performance-tuned SQL databases, AWS DynamoDB, Aurora clusters, and AWS ElastiCache.
  • Defined automation roadmaps for the team, with a breakdown of enabling capabilities and enabling features.
  • Worked with advanced protocols such as XMPP, MQTT, WCTP, TCP, and HTTP/HTTPS.
  • Worked with teams to understand their needs and drove them toward continuous integration and delivery.
  • Migrated over 13 applications, including 18+ sub-applications, to DevOps standards.
  • Engaged in meetings with business and functional teams to understand business requirements and provided technical assistance to resolve complicated requirements.
  • Orchestrated the CloudFormation deployments project with Docker containers.
  • Experienced in configuring MQ objects such as queue managers, remote queues, local queues, queue aliases, channels, clusters, transmission queues, performance events, triggers, processes, and MQ error-trapping applications, plus performance tuning/monitoring.
  • Tracked team activity through codebeamer.
  • Defined sprint stories for the team and ran stand-up calls to track status.
  • Created reaper/bootstrap scripts to shut down and deploy EC2 instances through Jenkins.
  • Defined an automated process to copy the repository to S3 buckets using a Jenkins job triggered after each PR.
  • Created bootstrap scripts to jump-box into ECS for ADFS logins to AWS accounts and stack deployment using the Duo Push mobile application.
  • Automated the stack deployment process in staging and production environments with push-button collaboration from the teams.
  • Defined a process to build a mock service through stack deployment so test teams and developers could mock users into their cloud environments.
  • Developed and maintained deploy jobs for application code deployment across all environments.
  • Worked closely with test and performance teams to understand complex product requirements and translated them into automated solutions for test results and for running JMeter tests in distributed load configurations in Fargate containers.
  • Provided monitoring solutions using dashboards in AWS CloudWatch and Datadog to visually monitor live metrics, and set up alerts for other agents such as the Jenkins server.
  • Worked on CloudFormation templates to spin up application stacks in all environments.
  • Worked with AWS Lambda, diagnosed Lambda cold-start issues, and engaged the AWS development team to fix the problem.
  • Integrated JMeter with Jenkins and ran nightly automated performance jobs using shell scripting and CloudFormation templates.
Senior Performance Engineer, 09/2015 - 10/2019
Company Name
  • Experienced in creating Docker containers from existing Linux containers and AMIs as well as from scratch.
  • Designed, coded, configured, and automated deployment of services and the technology stack, supporting application development in a multi-layered microservices architecture.
  • Responsible for installing and configuring Jenkins to support various Java builds, and Jenkins plugins to automate continuous builds and publish Docker images to a repository.
  • Built and deployed Docker containers to break up a monolithic app into microservices, improving developer workflow, increasing scalability, and optimizing speed.
  • Orchestrated CI/CD processes driven by Git triggers, human input, dependency chains, and environment setup.
  • Collaborated with other developers to identify and resolve bugs and errors in software.
  • Managed continuous integration environment setup using Jenkins, Bitbucket, Nexus Repository, and Docker.
  • Integrated Git into the continuous integration (CI) environment along with Jenkins and Subversion.
  • Worked on cloud solution architecture on OpenStack and Amazon Web Services.
  • Used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
  • Implemented a continuous delivery framework using Jenkins and Maven across multiple environments.
  • Worked in an AWS environment, instrumental in utilizing compute services (EC2, ELB), storage services (S3, Glacier, block storage, lifecycle management policies), CloudFormation (JSON templates), Elastic Beanstalk, Lambda, VPC, RDS, Trusted Advisor, and CloudWatch.
  • Involved in LoadRunner scripting and performance testing along with teams from IBM and SAIC, creating a complete platform for performance testing.
  • Worked as an independent consultant for performance testing and coordinated with multiple vendors.
  • Prepared estimation, capacity matrix, test plan, capacity plan, and performance strategy documents, and conducted assessments and data modeling using Excel.
  • Performed recording, scripting, dynamic navigation, parameterization, and execution of scripts.
  • Responsible for testing messages from MQ by checking queue depth and pending messages.
  • Responsible for testing both asynchronous and synchronous batch jobs in an enterprise-wide environment.
  • Performed in-depth analysis to isolate points of failure in the application.
  • Assisted in producing testing and capacity certification reports.
  • Created comprehensive analysis and test results reports.
Education
Master of Applied Science: Electrical and Computer Engineering, 07/2015
Oklahoma State University - Stillwater, OK

Bachelor of Science: Electrical, Electronics and Communications Engineering, 05/2013
Gandhi Institute of Technology and Management - Visakhapatnam, India
