LiveCareer Resume: Sr. Cloud DevOps Engineer Resume Example with 5+ Years of Experience

Jessica Claire
resumesample@example.com
(555) 432-1000
100 Montgomery St., 10th Floor
Summary
  • Consistently recognized for development and troubleshooting skills used to deliver challenging software projects rapidly and cost-effectively.
  • Quick learner, equally successful in both team and self-directed settings, and proficient in a range of computer systems, languages, and tools.
  • Experience in the IT industry spanning development, systems administration, and Software Configuration Management (SCM), with extensive experience in SCM and DevOps build/release management.
  • Worked on build and release management methodologies and software procedures across all aspects of the SDLC.
  • Skilled in Software Development Life Cycle (SDLC), Agile programming, and Agile Ops methodologies.
Education and Training
University of Cumberlands, Williamsburg, KY, Expected in 2020: Ph.D., Computer Science Engineering
Oklahoma Christian University, Oklahoma, OK, Expected in 2015: Master of Science, Engineering
Jawaharlal Nehru Technological University, Hyderabad, India, Expected in 2012: Bachelor of Science, Computer Science
Experience
Motorola Solutions - Sr. Cloud DevOps Engineer
Fort Lauderdale, FL, 12/2019 - Current
  • Work with AWS managed services such as EC2, CloudFormation, Lambda, and Fargate.
  • Write scripts to automate operations, server management and Infrastructure as Code (IaC) processes.
  • Propose cloud solutions based on business and technology considerations, along with suitable alternatives to satisfy customer needs.
  • Aid in the creation of automated troubleshooting capabilities across multiple cloud providers.
  • Work as a key contributor on the Delivery APIs team, providing necessary cloud solutions.
  • Design APIs; develop shippable code, documentation, and unit tests for new features of digital products.
  • Work with fellow API developers, team leads, and architects to deliver features through reusable microservices and API products.
  • Collaborate with Quality, Product, and Cloud Engineering teams to keep digital assets fully functional, secure, and up to date with business needs.
  • Perform pair programming, effectively communicate ideas with the team, and assist in systems integration, performance testing, and product releases.
  • Implement policies, roles, and data access controls; monitor events; and resolve system and data issues to keep APIs continuously functioning.
  • Assist in maintaining cross-platform tooling across multiple cloud providers.
  • Work to integrate cloud solutions with existing enterprise tools and systems.
  • Analyze the current enterprise architecture to identify weaknesses and opportunities for improvement using cloud solutions.
  • Act as a subject matter expert on technologies and trends in cloud solutions architecture.
  • Evaluate enterprise cloud technology standards, tools, products, and solutions to identify opportunities for improvement.
  • Keep abreast of emerging cloud technologies and evaluate vendor offerings to determine the best fit for business needs.
  • Set up web application firewall rules and other required security implementations.
  • Work with scripting and programming languages such as Python, Ruby, PowerShell, and Bash.
  • Assist in architecting the cloud backend infrastructure and DevOps processes.
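The Infrastructure-as-Code scripting mentioned above could be sketched roughly as follows, assuming a boto3-style CloudFormation client; the helper names and parameter keys are illustrative, not taken from an actual project.

```python
# Sketch of an IaC deployment helper in the spirit of the automation
# scripting above. All names here (to_cfn_parameters, deploy_stack)
# are illustrative assumptions.

def to_cfn_parameters(params):
    """Convert a plain dict into the Parameters list expected by
    boto3's CloudFormation create_stack/update_stack calls."""
    return [
        {"ParameterKey": key, "ParameterValue": str(value)}
        for key, value in sorted(params.items())
    ]

def deploy_stack(cfn_client, stack_name, template_body, params):
    """Create the stack, or update it if it already exists."""
    kwargs = {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Parameters": to_cfn_parameters(params),
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
    }
    try:
        return cfn_client.create_stack(**kwargs)
    except cfn_client.exceptions.AlreadyExistsException:
        return cfn_client.update_stack(**kwargs)
```

Keeping the parameter conversion as a pure function makes it testable without AWS credentials; only `deploy_stack` touches the client.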
Chenega Corporation - Sr. AWS Cloud DevOps Engineer
West Point, MS, 02/2019 - 01/2020
  • Wrote Terraform modules for AWS Infrastructure as Code.
  • Wrote Python scripts to identify non-compliant AWS resources using AWS Config.
  • Wrote custom AWS SSM (Systems Manager) documents to apply security patch baselines across our production AWS environment.
  • Created SignalFx (monitoring tool) custom dashboards in Python to monitor production AWS infrastructure and application services.
  • Created a CI/CD pipeline using Jenkins and Terraform for Infrastructure as Code.
  • Established VPC peering between multiple AWS accounts using Terraform.
  • Wrote a Lambda function in Python to restore deleted data and deployed it using CloudFormation.
  • Wrote Cloud Custodian scripts for AWS cost optimization and security analysis.
  • Wrote custom IAM policies to restrict access to AWS resources based on Amazon's least-privilege principle.
  • Designed and implemented blue/green deployments for a customer-facing web application for high availability.
  • Troubleshoot production servers during downtime, resolve issues, and implement safeguards to prevent the same failures from recurring.
  • Experience working with CI tools such as Jenkins, CircleCI, and Atlantis.
  • Set up JFrog Artifactory and connected it to Jenkins to deploy the artifacts generated by builds.
  • Created a continuous delivery workflow using Docker and Ansible to accelerate application delivery and build scalable architecture platforms, allowing us to continuously test, build, release, and deploy applications to AWS by leveraging the EC2 Container Service to run Docker containers in production.
  • Configured a NAT gateway as part of VPC creation to allow EC2 instances to run in a private network.
  • Configured a bastion host to allow access to private EC2 instances, adding a security layer in front of production servers.
  • Wrote a Python script to automatically re-enable CloudTrail logging whenever a StopLogging event occurs.
  • Cleaned up unused resources using AWS Trusted Advisor for cost optimization.
  • Wrote Packer scripts to take snapshots of AMIs.
  • Wrote Ansible playbooks to install, update, and manage software packages on Linux EC2 instances.
  • Wrote a Python Lambda function to enable VPC Flow Logs for existing and new VPCs.
  • Wrote a Python Lambda function to alert when a new VPC peering connection is made on the prod AWS account.
  • Built a Kubernetes cluster with kubeadm and deployed a microservice application to it.
  • Monitored the Kubernetes cluster with Prometheus and Grafana.
  • Used Grafana with Prometheus for alerting and monitoring.
  • Scaled microservices in the Kubernetes cluster.
  • Deployed an application to EKS, auto-scaled the EKS cluster, and captured EKS API calls with CloudTrail.
  • Exposure to Netflix Asgard and Eureka.
  • Configured monitoring and alerting for Route 53 endpoints using New Relic.
  • Wrote a Dockerfile to install Terraform, Terragrunt, and the AWS CLI on the fly on a Jenkins build slave.
  • Wrote a Dockerfile to dockerize a Python Flask web application and deployed it on Elastic Beanstalk.
  • Built services using Docker Compose and configured Docker Swarm.
  • Load balanced containers and kept them updated with Watchtower.
  • Built a Docker image using Packer and Jenkins.
  • Installed and configured JupyterHub on EMR.
  • Experience writing Athena queries for auditing.
  • Experience working with the following AWS services: EC2, Lambda, EKS, ECS, CloudFormation, SSM, SNS, SES, Route 53, RDS, EMR, S3, IAM, Elastic Beanstalk, DynamoDB, Redshift, VPC, CloudFront, API Gateway, Config, CloudTrail, Trusted Advisor, CloudWatch, Athena, Elasticsearch, Kinesis, GuardDuty, Inspector, ELB, and ASG.
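The CloudTrail auto-re-enable script described above could look roughly like this. The event fields follow the standard shape of CloudTrail events delivered via CloudWatch Events, but the handler itself is a simplified assumption, not the production code.

```python
# Simplified sketch of a Lambda that re-enables CloudTrail logging when a
# StopLogging API call is observed. The pure parsing logic is separated
# from the AWS call so it can be exercised without credentials.

def extract_trail_name(event):
    """Pull the trail name/ARN out of a CloudTrail StopLogging event;
    return None for any other event."""
    detail = event.get("detail", {})
    if detail.get("eventName") != "StopLogging":
        return None
    return detail.get("requestParameters", {}).get("name")

def handler(event, context):
    trail = extract_trail_name(event)
    if trail is None:
        return {"status": "ignored"}
    import boto3  # imported lazily so the parser stays testable offline
    boto3.client("cloudtrail").start_logging(Name=trail)
    return {"status": "re-enabled", "trail": trail}
```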
Alteryx Inc. - DevOps Cloud Engineer
Louisville, KY, 09/2017 - 02/2019
  • Wrote CloudFormation scripts to create VPCs.
  • Wrote Python scripts to identify non-compliant AWS resources using AWS Config.
  • Wrote custom SSM (Systems Manager) documents to apply security patch baselines across our production AWS environment.
  • Wrote Python scripts to auto-tag AWS resources.
  • Created custom dashboards to monitor production AWS infrastructure and application services.
  • Created a CI/CD pipeline using Jenkins and Ansible, and deployed this infrastructure on AWS using Terraform.
  • Established VPC peering between multiple AWS accounts using Terraform.
  • Cleaned up old AMIs using Python and scheduled a Lambda function to trigger every three months.
  • Wrote a Lambda function in Python to restore deleted data and deployed it using CloudFormation.
  • Deployed a 3-tier customer-facing application on AWS using Terraform and Ansible.
  • Wrote an Ansible playbook to delete unused and untagged AWS resources for cost optimization.
  • Wrote custom IAM policies to restrict access to AWS resources based on Amazon's least-privilege principle.
  • Troubleshoot production servers during downtime, resolve issues, and implement safeguards to prevent the same failures from recurring.
  • Configured a NAT gateway as part of VPC creation to allow EC2 instances to run in a private network.
  • Configured a bastion host to allow access to private EC2 instances, adding a security layer in front of production servers.
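The AMI-cleanup selection logic mentioned above can be sketched as below: pick images older than a retention window based on their `CreationDate`. The 90-day window mirrors the quarterly schedule; the actual `deregister_image` call is omitted and would be made with boto3.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the AMI-cleanup selection described above. The 90-day
# retention window is an assumption based on the quarterly schedule.

def stale_amis(images, now=None, max_age_days=90):
    """Return the ImageIds from ec2.describe_images()["Images"] whose
    CreationDate is older than max_age_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for image in images:
        # AWS reports CreationDate as e.g. "2019-01-01T00:00:00.000Z"
        created = datetime.strptime(
            image["CreationDate"], "%Y-%m-%dT%H:%M:%S.%fZ"
        ).replace(tzinfo=timezone.utc)
        if created < cutoff:
            stale.append(image["ImageId"])
    return stale
```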
Saic - Cloud Engineer
Vienna, OH, 02/2017 - 07/2017
  • Ran spikes on AWS Config and wrote CloudFormation scripts to deploy AWS Config and its rules.
  • Wrote custom Config rules in Python (boto3) on AWS Lambda to identify non-compliant resources.
  • Wrote Cloud Custodian policies to identify non-compliant resources and take actions such as notifying on and deleting them.
  • Wrote Python unit test cases.
  • Wrote Python boto3 scripts to delete default VPCs.
  • Wrote server-side scripts to handle HTTP POST requests using a Python Flask app.
  • Wrote CloudFormation scripts to deploy an S3-SNS-Lambda pipeline that triggers a Step Functions state machine when an object is created in an S3 bucket.
  • Wrote custom Python boto3 scripts that check AWS service limits via the Trusted Advisor service and alert the DevOps team through an SNS topic when usage reaches the warning (50%) or critical (80%) level, so the service limit can be increased in time.
  • Ran a POC on Amazon EMR and wrote a CloudFormation template to install and configure Jupyter Notebook on Amazon EMR.
  • Created a continuous delivery workflow using Docker and Ansible to accelerate application delivery and build scalable architecture platforms, allowing us to continuously test, build, release, and deploy applications to AWS by leveraging the EC2 Container Service to run Docker containers in production.
  • Ran spikes on monitoring AWS infrastructure using CloudWatch.
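The threshold logic behind the service-limit alerting bullet above can be sketched as follows: classify current usage against the 50% warning / 80% critical levels before publishing to an SNS topic. Only the thresholds come from the description; the Trusted Advisor lookup and SNS publish are omitted.

```python
# Classify one service-limit reading against the warning/critical levels
# described above. The function and constant names are illustrative.

WARNING_RATIO = 0.50
CRITICAL_RATIO = 0.80

def limit_status(current, limit):
    """Return 'ok', 'warning', or 'critical' for one service limit."""
    if limit <= 0:
        return "ok"  # unlimited or unknown limit
    ratio = current / limit
    if ratio >= CRITICAL_RATIO:
        return "critical"
    if ratio >= WARNING_RATIO:
        return "warning"
    return "ok"
```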
Strategic Resource International - DevOps Engineer
City, STATE, 02/2016 - 01/2017
  • Set up and installed dedicated Jenkins servers for each project.
  • Created builds for continuous integration (CI) on each server.
  • Created a build flow between Jenkins and Ansible deployments.
  • Set up several environments and roles for each environment.
  • Custom-built Docker containers.
  • Deployed Docker containers to several environments as needed.
  • Actively practiced Test-Driven Development (TDD) for scripts.
  • Expertise in creating and setting up SSL certificates.
  • Migrated physical servers to AWS.
  • Built a highly scalable and fault-tolerant architecture on AWS using Terraform and CloudFormation.
  • Created self-healing system configurations using ELB and Auto Scaling.
  • Wrote several CloudFormation scripts for resources such as EC2, ELB, security groups, RDS, S3, ECS, SNS, SQS, VPC, OpsWorks, CDN, Elastic Beanstalk, EMR, and Route 53.
  • Configured a NAT instance and a bastion host, and assigned Elastic IPs to enhance security for EC2 resources.
  • Wrote Python boto3 scripts to take EBS snapshots of EC2 instances.
  • Automated the infrastructure using Terraform, and deployments using Chef recipes with OpsWorks.
  • Created an S3 bucket lifecycle policy and set up Amazon Glacier as a backup for S3 objects using CloudFormation.
  • Worked on core AWS services, including setting up new EC2 server instances, configuring security groups, and setting up Elastic IPs and Auto Scaling configurations.
  • Experience using the AWS command line with Elastic Beanstalk.
  • Installed and configured Kafka on AWS.
  • Exposure to Vagrant and KVM.
  • Primary responsibilities included build and deployment of Java applications into different environments: Dev, QA, CERT, and PROD.
  • Set up SonarQube to generate unit-test coverage, integration coverage, and mutation coverage reports for the JavaScript, Java, and Scala code in the Git repository.
  • Experience with the Maven build tool, including writing pom.xml files.
  • Responsible for troubleshooting application code coverage reports on the Sonar dashboard.
  • Automated the Sonar code coverage reports using Jenkins DSL scripts.
  • Set up Artifactory on AWS using NGINX and Apache.
  • Created build pipelines in Jenkins.
  • Wrote build scripts in Bash to efficiently run projects on the build system.
  • Experience setting up log reporting tools such as the ELK stack.
  • Experience working with JIRA.
  • Configured and administered Nexus Repository Manager and JFrog Artifactory.
  • Migrated Sonar from 5.1.2 to 6.1 on AWS using Terraform with zero downtime, following blue/green deployments.
  • Set up Nagios and AWS CloudWatch as monitoring tools for several Linux servers.
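The EBS-snapshot scripting mentioned above could be sketched roughly like this: build a dated description per volume, then create the snapshots through a boto3-style EC2 client. The description format is an illustrative assumption.

```python
from datetime import datetime, timezone

# Sketch in the spirit of the EBS-snapshot bullet above. The
# "auto-snapshot" description format is an assumption, not a
# convention from the original scripts.

def snapshot_description(volume_id, when=None):
    """Human-readable, dated description for an automated snapshot."""
    when = when or datetime.now(timezone.utc)
    return f"auto-snapshot {volume_id} {when:%Y-%m-%d}"

def snapshot_volumes(ec2_client, volume_ids):
    """Create one snapshot per volume and return the new snapshot IDs."""
    snapshot_ids = []
    for volume_id in volume_ids:
        response = ec2_client.create_snapshot(
            VolumeId=volume_id,
            Description=snapshot_description(volume_id),
        )
        snapshot_ids.append(response["SnapshotId"])
    return snapshot_ids
```

Passing the client in makes the function easy to exercise with a stub in place of a real boto3 EC2 client.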
