Jessica Claire
  • 100 Montgomery St., 10th Floor
  • H: (555) 432-1000
  • resumesample@example.com
Professional Summary
  • Experienced Hadoop/systems administrator with an Information Technology background spanning the design and implementation of robust technology systems in Big Data (Hadoop), Linux administration, database analysis, and data engineering.
  • Hands-on experience installing, configuring, and supporting Hadoop clusters using MapR and Hortonworks.
  • Installed and configured Hadoop ecosystem tools such as Pig, Hive, HBase, Sqoop, Flume, Oozie, Ambari, Ranger, and Grafana.
  • Hands-on experience with automation, cloud orchestration, and configuration management tools.
  • Experience managing and reviewing log files and troubleshooting issues with MapReduce/YARN/Spark jobs.
  • Experience with the ELK stack and Redis (RLEC).
  • Implemented a metrics monitoring system using OpenTSDB, collectd, and Grafana for dashboard visualization of YARN/Spark and cluster metrics.
  • Implemented a log monitoring system using Elasticsearch, Logstash, Fluentd, and Kibana.
  • Used Splunk and Dynatrace extensively for analysis and troubleshooting.
  • Experienced in writing shell scripts to automate daily activities.
  • Created a cluster health monitoring script that checks the health of each service and node, runs sample jobs and validations, and reports any issues found in the cluster.
  • Implemented high availability (HA) for the Job History Server and Spark History Server.
  • Experience setting up automated 24x7 monitoring and escalation infrastructure for Hadoop clusters using Nagios, Ganglia, and Icinga2.
  • Hadoop cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.
  • Excellent command of backup, recovery, and disaster recovery procedures.
  • Involved in benchmarking Hadoop/HBase cluster file systems using various batch jobs and workloads.
  • Experience with minor and major upgrades and patching of Hadoop and the Hadoop ecosystem.
  • Familiar with writing Oozie workflows and job controllers for job automation.
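The cluster health monitoring script mentioned above can be sketched as follows; this is a minimal, illustrative version of the reporting core only, and the service names and `name:status` input format are assumptions, not taken from the actual script:

```shell
#!/bin/sh
# Hypothetical sketch of a per-node health check: each "service:status"
# argument is inspected, and any service not in the RUNNING state is
# reported as an alert line.
check_services() {
    for entry in "$@"; do
        svc=${entry%%:*}     # text before the first colon: service name
        state=${entry##*:}   # text after the last colon: reported state
        [ "$state" = "RUNNING" ] || echo "ALERT: $svc is $state"
    done
}

# Example run: two healthy services and one stopped DataNode.
check_services "namenode:RUNNING" "datanode:STOPPED" "nodemanager:RUNNING"
# prints: ALERT: datanode is STOPPED
```

A real script in this role would gather the statuses itself (e.g. from Ambari or `maprcli` output) and feed them into a reporting step like this one.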
Skills

Hadoop/Big Data: MapR, Hortonworks, Ambari, HDFS, MapReduce, YARN, Pig, Hive, Sqoop, Spark, Flume, Oozie, ZooKeeper, HBase, Grid Engine, MapR-DB, Elasticsearch, Kibana, Logstash, Filebeat, Grafana, Fluentd, Redis (RLEC), Apache Tomcat

Machine Learning: RevR, Skytree, Grid Engine

OS: UNIX, Linux, CentOS, MS Windows, Mac OS

Other: C, C++, Core Java, Linux shell scripts, Git, SQL, PL/SQL, MySQL, IntelliJ, MS Office, StackIQ, Splunk

Work History
Hadoop Administrator, 10/2018 - Current
Cognizant Technology Solutions, Madison, AL
  • Performed major/minor version upgrades from 2.5.3 to 2.6.4 and 3.1 on production Hortonworks (HDP) clusters, and from 3.0 to 4.2, 5.2, and 6.1 on MapR clusters.
  • Performed minor and ecosystem upgrades (Hive, Oozie, Spark, Pig, Sqoop) on a regular basis to keep components up to date.
  • Applied Hortonworks patches and OS/firmware patches to clusters to maintain interoperability.
  • Performed ELK stack upgrades from 6.2.2 to 6.8.0 and 7.7.0.
  • Performed Redis upgrades from 5.2 to 5.4.10 and 6.0.6.
  • Troubleshot and resolved issues, including P1 incidents, on both batch and real-time clusters.
  • Worked with developers and architects to troubleshoot and analyze jobs and tune them for optimum performance.
  • Added and decommissioned nodes, after thorough validation, to expand the production cluster as new hardware arrived.
  • Created cluster health monitoring and node validation scripts, along with other scripts, to automate day-to-day tasks and patches/upgrades.
  • Tuned YARN configurations for efficient resource utilization across clusters.
  • Monitored, managed, configured, and administered batch, HBase, Spark standalone, MapR-DB, and disaster recovery clusters.
  • Maintained the data mirroring process to remote DR clusters for all mirrored data, so that backups were available at any time.
  • Planned and implemented production changes without impact or downtime.
  • Documented and prepared change plans for each change.
  • Performed cluster validations and ran various pre-install and post-install tests.
  • Gained experience in architecture, node planning and preparation, data ingestion, disaster recovery, high availability, management, and monitoring.
  • Set up projects and volumes for new Hadoop projects.
  • Used snapshots and mirroring to maintain backups of cluster data, including remotely.
  • Created MySQL databases, set up users, and maintained database backups.
  • Helped users with production deployments throughout the process.
  • Analyzed system failures, identified root causes, and recommended courses of action.
  • Documented system processes and procedures for future reference.
  • Worked with the systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
  • Commissioned and decommissioned nodes in the cluster.
  • Handled ingestion failures, job waits, and job failures.
  • Performed data backup and data purging based on retention policies.
  • Managed data subscriptions and maintained cluster health and HDFS space for better performance.
  • Handled alerts for CPU, memory, network, and storage-related processes.
  • Integrated Oozie with the rest of the Hadoop stack, supporting several types of Hadoop jobs (MapReduce, Pig, Hive, Sqoop) as well as system-specific jobs such as Java programs and shell scripts.
  • Managed full data mining exports from huge data volumes to MySQL using Sqoop.
  • Configured the Hive Metastore to use a MySQL database, enabling multiple user connections to Hive tables.
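Pointing the Hive Metastore at MySQL, as described above, typically comes down to the standard JDBC connection properties in hive-site.xml. The fragment below is illustrative only; the host name, database name, and credentials are placeholders, not values from an actual cluster:

```xml
<!-- Illustrative hive-site.xml fragment; host, database, and
     credentials below are placeholders. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/hive_metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>CHANGE_ME</value>
</property>
```

With the metastore backed by MySQL rather than the default embedded Derby database, multiple Hive clients can open concurrent connections to the same table metadata.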
Hadoop Administrator, 01/2018 - 10/2018
Cognizant Technology Solutions, Madison, CT
  • Set up new environments with POCs running and data residing in staging/pre-production, using Flume for ingestion.
  • Worked with the MapR vendor to build and set up production/DR clusters from scratch for the project.
  • Performed aggregation operations using Hive, Pig, and Oozie in stage clusters.
  • Installed MapR core and ecosystem components on single- and multi-node clusters from scratch for production and non-production environments.
  • Performed cluster validations and ran various pre-install and post-install tests.
  • Gained experience in architecture, node planning and preparation, data ingestion, disaster recovery, high availability, management, and monitoring.
  • Set up projects and volumes for new Hadoop projects.
  • Used snapshots and mirroring to maintain backups of cluster data, including remotely.
  • Created MySQL databases, set up users, and maintained database backups.
  • Helped users with production deployments throughout the process.
  • Analyzed system failures, identified root causes, and recommended courses of action.
  • Documented system processes and procedures for future reference.
  • Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
Hadoop Administrator, 06/2017 - 01/2018
Cognizant Technology Solutions, Maplewood, MN
  • Set up new environments with POCs running and data residing in staging/pre-production, using Flume for ingestion.
  • Worked with the MapR vendor to build and set up production/DR clusters from scratch for the project.
  • Performed aggregation operations using Hive, Pig, and Oozie in stage clusters.
  • Installed MapR core and ecosystem components on single- and multi-node clusters from scratch for production and non-production environments.
  • Performed cluster validations and ran various pre-install and post-install tests.
  • Gained experience in architecture, node planning and preparation, data ingestion, disaster recovery, high availability, management, and monitoring.
  • Set up projects and volumes for new Hadoop projects.
  • Used snapshots and mirroring to maintain backups of cluster data, including remotely.
  • Created MySQL databases, set up users, and maintained database backups.
  • Helped users with production deployments throughout the process.
  • Analyzed system failures, identified root causes, and recommended courses of action.
  • Documented system processes and procedures for future reference.
  • Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
  • Designed and developed ETL workflows using Oozie for business requirements, including automating data extraction from a MySQL database into HDFS using Sqoop scripts.
  • Orchestrated Sqoop scripts, Pig scripts, and Hive queries using Oozie workflows and sub-workflows.
  • Conducted root cause analysis (RCA) to find and resolve data issues in various environments.
  • Proactively involved in ongoing maintenance, support, and improvements for continuous deployments.
  • Performed data analytics in Hive and exported the resulting metrics back to an Oracle database using Sqoop.
  • Involved in minor and major release activities.
  • Collaborated with business users, product owners, and developers to contribute to the analysis of functional requirements.
  • Analyzed system failures, identified root causes, and recommended courses of action.
  • Documented system processes and procedures for future reference.
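The Oozie-driven MySQL-to-HDFS extraction described above could be skeletoned as a workflow.xml like the following; the workflow name, connection string, table, and target directory are placeholders, not from an actual project:

```xml
<!-- Illustrative Oozie workflow skeleton for a Sqoop extraction step;
     names, paths, and the Sqoop command are placeholders. -->
<workflow-app xmlns="uri:oozie:workflow:0.5" name="mysql-to-hdfs-etl">
  <start to="sqoop-extract"/>
  <action name="sqoop-extract">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <command>import --connect jdbc:mysql://db-host/sales --table orders --target-dir /data/raw/orders -m 1</command>
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Sqoop extraction failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

In a fuller pipeline, the `ok` transition would chain into Pig or Hive actions (or sub-workflows) rather than going straight to `end`, matching the orchestration described in the bullets above.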
Education
Master of Science: Management Information Systems, Expected in
University of Houston - Clear - Houston, TX
Bachelor of Science: Computer Science, Expected in
SRM Institute of Science and Technology - Chennai
