LiveCareer-Resume

Cloud Architect / Hadoop Engineer resume example with 8+ years of experience

Jessica
Claire
resumesample@example.com
(555) 432-1000
Montgomery Street, San Francisco, CA 94105
Summary

I am an engineer with over 4 years of work experience. I have a background in adding value and meeting organizational goals using Amazon Web Services (AWS) and Apache Hadoop, along with their complementary ecosystem tools, including Apache Spark, Apache HBase, Amazon Redshift, Amazon RDS, Amazon VPC, Amazon S3, and many more. Status: U.S. Citizen

Skills & Certifications
  • AWS Certified Solutions Architect – Associate (AWS-ASA-33209) 
  • CompTIA Security+ Certification (in progress)
  • Cloudera Certified Hadoop Developer (CERT # 100009638)
  • Cloudera Certified HBase Specialist (CERT # 100009638)
  • Jenkins, Docker & Ansible
  • Knowledge of Continuous Integration & DevOps Best Practices
  • Intermediate Java & Scala; Shell Scripting
  • Knowledge of Advanced Data Mining Techniques (Supervised & Unsupervised Learners)
  • Working Knowledge of GitHub & SBT
Education
Arizona State University - Part-time Remote, Expected in 2014, Master of Science: Business Analytics
University of Arizona, Tucson, AZ, Expected in 2012, Bachelor of Science: Business
Experience
AT&T (Accenture/Cognizant) - Cloud Architect/Hadoop Engineer
City, STATE, 2013 - Current
Overview: Multiple years of experience in the Big Data & Cloud development life cycle for the telecommunications domain, using Java, Scala, Spark, and other Hadoop components. My expertise includes technical delivery, delivery management & technical team management. I have been a developer for 4 years, a technical lead & architect for a year, and a thought leader in setting technical standards for the projects. The following points describe my skill set in some detail.
  • Experience with structured and semi-structured Telecommunication data.
  • Experience Architecting & Delivering Cloud Migration Solutions: Security Process implementation, Cost Analysis & Control.
  • Experience Deploying Dockerized Java Applications in AWS as well as writing Dockerfiles
  • Strong experience in executing Proof of concepts and setting up application prototypes using Apache Hadoop ecosystem.
  • Experience implementing Cloud Design Patterns (CDN, Eventual Consistency, Auto Scaling, Map-Reduce).
  • Setting up and configuring AWS Virtual Private Cloud (VPC) components (Subnets, IGW, Security Groups, EC2 Instances, Elastic Load Balancers & NAT Gateways) for an Elastic MapReduce cluster as well as Application & Web Layer client access.
  • Experience setting up Amazon S3 buckets and access control policies, plus S3 & Glacier lifecycle rules.
  • Designed Java API to connect the Amazon S3 service to store and retrieve the data files. 
  • Setting up and Configuring Amazon RDS and Amazon DynamoDB instances in Multi-AZ & Multi-Region for fault tolerance and Backups.
  • Experience implementing best practices for building Highly Available, Fault Tolerant applications (Active-Passive Warm).
  • Experience deploying Hadoop applications on a persistent Elastic MapReduce (EMR) cluster through S3.
  • Experience setting up and configuring IAM policies (Roles, Users, Groups) for security and identity management.
  • Experience writing and deploying AWS Lambda Functions.
  • Creating, editing and deploying CloudFormation Templates.
  • Experience coordinating with AT&T Legal to satisfy privacy requirements.
  • Knowledge of Cost Management and Security Best Practices.
  • Strong Architecture design, key design and table schema design skills on HBase for Clients API product.
  • Expanded aggregation parameters in pre-existing pipelines using MapReduce and Hive.
  • Designed and populated aggregated data into JSON documents for API exposure using the Jackson framework.
  • Maximized return on marketing spend with targeted campaigns based on Big Data analytics insights for the client.
  • Streamlined and expanded the Hive pipeline to accommodate additional client requirements while also optimizing pre-existing Hive queries (30% increase in efficiency).
  • Converted and tuned pipeline tasks from Hive to Spark SQL & DataFrames for vast performance and efficiency improvements (6x faster).
  • Performed pre-deployment & Production code review.
  • Strong experience in managing and leading a technical team in an Agile environment.
  • Developed CloudFormation scripts & templates to build on demand EC2 instance formation for faster deployment setup.
  • Ability to interact with and build strong Business Relationships with clients.
  • Experience setting up and configuring Jenkins pipeline for Continuous Testing, Continuous Inspection and Continuous Deployment.
  • Created and maintained Ansible playbooks and ad-hoc commands run on autoscaled EC2 instances.
  • Excellent communication skills; a great team player & outside-the-box thinker.
  • Knowledge Transfer and Production Support.
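The CloudFormation bullets above might correspond to a template along these lines. This is a sketch only; the instance type is arbitrary and the AMI ID is a placeholder, not a value from the actual project:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Sketch of an on-demand EC2 instance for faster deployment setup
Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro            # arbitrary example size
      ImageId: ami-00000000000000000    # placeholder AMI
      KeyName: !Ref KeyName
Outputs:
  InstanceId:
    Value: !Ref AppInstance
```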
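The Ansible bullet could look like this minimal playbook sketch; the inventory group name and the choice of Docker as the managed package are assumptions made for illustration:

```yaml
# Sketch of a playbook targeting autoscaled EC2 instances.
- hosts: autoscaled_ec2        # hypothetical inventory group
  become: true
  tasks:
    - name: Ensure Docker is installed
      ansible.builtin.yum:
        name: docker
        state: present
    - name: Ensure Docker is running and enabled at boot
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```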
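To make the Lambda bullet above concrete, here is a minimal handler sketch. The event shape (an S3 put notification) and every name in it are illustrative assumptions, not details taken from the actual project:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler sketch: collects the object keys from a
    hypothetical S3 put-event payload. All names are illustrative."""
    keys = [
        rec["s3"]["object"]["key"]
        for rec in event.get("Records", [])
        if "s3" in rec
    ]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": keys}),
    }

# Local smoke test with a fabricated S3-style event:
event = {"Records": [{"s3": {"object": {"key": "data/part-0000.gz"}}}]}
print(handler(event, None))
```

In a real deployment this function would be wired to an S3 event notification or invoked through the Lambda console; here it is only exercised locally.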
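The S3 & Glacier lifecycle bullet could be expressed as a configuration like the following sketch. The prefix and retention windows are assumptions; the returned dict has the same shape as the `LifecycleConfiguration` argument that boto3's `put_bucket_lifecycle_configuration` expects (the AWS call itself is left commented out):

```python
def glacier_lifecycle(prefix: str, glacier_after_days: int, expire_after_days: int) -> dict:
    """Build an S3 lifecycle configuration that transitions objects under
    `prefix` to Glacier after one window, then expires them after another.
    Values here are illustrative, not taken from the resume."""
    return {
        "Rules": [
            {
                "ID": f"archive-{prefix.strip('/')}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": expire_after_days},
            }
        ]
    }

policy = glacier_lifecycle("logs/", 30, 365)
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=policy)
```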
