
Hadoop Admin resume example with 6+ years of experience

Jessica Claire
  • Montgomery Street, San Francisco, CA 94105 609 Johnson Ave., 49204, Tulsa, OK
  • Home: (555) 432-1000
  • Cell:
  • resumesample@example.com
Summary
  • 6+ years of experience operating, maintaining, monitoring, and upgrading Hadoop clusters (Cloudera and Hortonworks distributions).
  • Hands-on experience installing, configuring, and maintaining Apache Hadoop clusters for application development, along with Hadoop tools such as Hive, Spark, YARN, Flume, Kafka, Impala, ZooKeeper, Hue, and Sqoop on both Cloudera and Hortonworks.
  • Experience in capacity planning, validating hardware and software requirements, building and configuring small and medium-sized clusters, smoke testing, and managing and performance-tuning Hadoop clusters.
  • Experience configuring NameNode High Availability and NameNode Federation, with in-depth knowledge of ZooKeeper for cluster coordination services (a brief HA sketch follows this list).
  • Hands-on experience analyzing log files for Hadoop and ecosystem services and finding root causes.
  • Expertise in implementing Kerberos security on Hadoop clusters.
  • Responsible for capacity planning, infrastructure planning, and version selection and fixes when building Hadoop clusters.
  • Excellent expertise and knowledge of cloud platforms and their components (IBM Private/Public Cloud, Kubernetes, Docker).
  • Experienced in using HDFS, Pig, Hive, Spark, Impala, Sqoop, Oozie, ZooKeeper and Cloudera Manager.
  • Experience in scheduling all Hadoop/Hive/Sqoop/HBase jobs using Oozie.
  • Diverse background with fast learning and creative analytical skills.
  • Self-starter with the ability to learn new things quickly.
  • Good communication and documentation skills.
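
A minimal sketch of the NameNode HA administration referenced in the summary above; it is illustrative only, and the service IDs (nn1, nn2) are assumed names rather than values from this resume:

    # Check which NameNode is currently active (nn1/nn2 are assumed service IDs)
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2

    # Gracefully fail over from nn1 to nn2, e.g. before maintenance
    hdfs haadmin -failover nn1 nn2

    # Confirm overall capacity and live/dead DataNodes after the failover
    hdfs dfsadmin -report | head -n 20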

Skilled System Administrator focused on performance optimization and technical improvements with an understanding of cost-effective decision making and usability.

Enthusiastic individual with superior skills in working in both team-based and independent capacities. Bringing a strong work ethic and excellent organizational skills to any setting. Excited to begin a new challenge with a successful team.

Skills
  • Big data and data management: Big data, Data lake, Data Warehousing, Data Integration, ETL, Real Time, Catalog, Tables
  • Databases: Database administration, Databases, RDBMS, SQL, SQLServer, MySQL, Oracle, Teradata, JDBC, ODBC, Access
  • BI and analytics: Tableau, SAS, SAP, Informatica, Reporting
  • Security: Active Directory, LDAP, SSL, Encryption
  • Systems and scripting: Linux, Operating system, Unix Shell Scripts, Shell Scripts, NFS, Network, Hardware, CPU, Memory
  • Operations: Backup, Disaster Recovery, Migration, Upgrades, Troubleshooting, Logging, Managing, CA-7
  • Tools and development: Eclipse, IDE, Version control, Express, Enterprise
  • Process and delivery: SDLC, Requirement, Quality, Delivery, Strategy, Designing, Meetings, Written
  • Certificate in Linux Programming and Administration
Experience
Hadoop Admin, 09/2018 to Current
Virtusa, Irving, TX
  • Project Description: The client is the corporate and investment banking division of Citizen Bank. The project implements big data analytics in Hadoop, loading data from multiple sources such as MySQL and web server logs into Hive and querying the data as required. The main goal is to understand the customer base, buying habits, and buying decisions.
  • Responsibilities:
  • Extensively involved in installation and configuration of the Cloudera Hadoop distribution: NameNode, Secondary NameNode, Resource Manager, Node Managers, and DataNodes. Performed stress testing, performance testing, and benchmarking of the cluster.
  • Installed patches and packages on Unix/Linux servers. Worked with development teams on the design and ongoing operation of several clusters running Cloudera's Distribution including Apache Hadoop (CDH).
    • Worked extensively with importing metadata into Hive and migrated existing tables and applications to work on Hive and HBase
    • Responsible for migrating from Hadoop MapReduce to Spark frameworks for in-memory distributed computing for real-time fraud detection.
    • Provided 24x7 system support and maintenance for Customer Experience Business Services.
    • Supported Data Analysts in running Map Reduce Programs.
    • Implemented the Fair Scheduler on the JobTracker to allocate a fair share of resources to small jobs.
    • Involved in running Hadoop jobs that process millions of records of text data. Troubleshot build issues during the Jenkins build process. Implemented Docker to create containers for Tomcat servers and Jenkins.
    • Responsible for scheduling jobs in Hadoop using FIFO, Fair scheduler and Capacity scheduler
    • Expertise in Hadoop Cluster capacity planning, performance tuning, cluster Monitoring, Troubleshooting.
    • Worked on a live Big Data Hadoop production environment with 220 nodes.
    • Implemented NameNode HA to avoid a single point of failure.
    • Experience working with LDAP user accounts and configuring LDAP on client machines.
    • Automated day-to-day activities using shell scripting and used Cloudera Manager to monitor the health of Hadoop daemon services, responding to any warning or failure conditions (a small automation sketch follows this list).
    • Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, managing and reviewing data backups, and managing and reviewing Hadoop log files.
    • Involved in planning the Hadoop cluster infrastructure, resources capacity and build plan for Hadoop cluster installations.
    • Resolved user-submitted tickets and P1 issues, troubleshooting, documenting, and resolving errors.
    • Installed and configured Hive in the Hadoop cluster and helped business users and application teams fine-tune their HiveQL for optimized performance and efficient use of cluster resources.
    • Installed and configured the Ganglia monitoring system to collect metrics and monitor the Hadoop cluster; also configured Hadoop log rotation and monitored logs frequently.
    • Performed performance tuning of the Hadoop cluster and MapReduce jobs, as well as real-time applications, applying best practices to fix design flaws.
    • Implemented Oozie workflows for ETL processes for critical data feeds across the platform.
    • Configured Ethernet bonding for all Nodes to double the network bandwidth
    • Implemented the Kerberos security authentication protocol for the existing cluster (see the Kerberos sketch after this list).
    • Built high availability for major production cluster and designed automatic failover control using Zookeeper Failover Controller (ZKFC) and Quorum Journal nodes.
    • Worked on Hive to expose data for further analysis and to transform files from different analytical formats into Parquet files.
    • Worked closely with business stakeholders, BI analysts, developers, and SAS users to establish SLAs and acceptable performance metrics for the Hadoop-as-a-service offering.
    Environment: Hadoop, Apache Pig, Hive, Oozie, Sqoop, Spark, HBase, LDAP, CDH5, Unravel, Splunk, Tomcat, and Java
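
A small sketch of the kind of shell automation mentioned in the monitoring bullet above; the usage threshold and alert address are placeholders, and the parsing assumes the standard hdfs dfsadmin -report output format:

    #!/usr/bin/env bash
    # Daily HDFS health check: alert if usage is high or any DataNode is dead.
    set -euo pipefail

    # Percentage of DFS space used, taken from the "DFS Used%" line of the report
    USED_PCT=$(hdfs dfsadmin -report | awk '/^DFS Used%/ {gsub(/%/, "", $3); print int($3); exit}')

    # Number of dead DataNodes, counted from the per-node "Name:" entries
    DEAD=$(hdfs dfsadmin -report -dead 2>/dev/null | grep -c '^Name:' || true)

    if [ "${USED_PCT:-0}" -ge 80 ] || [ "${DEAD:-0}" -gt 0 ]; then
        echo "HDFS check: used=${USED_PCT}% dead_datanodes=${DEAD}" \
          | mail -s "Hadoop cluster warning" hadoop-admins@example.com
    fi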
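
And a brief sketch of routine checks on the Kerberized cluster described above; the realm, principal, and keytab path are assumptions, not values taken from this project:

    # Obtain a ticket for the HDFS service principal (realm and keytab path are placeholders)
    kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@EXAMPLE.COM

    # Verify the ticket cache
    klist

    # Confirm that authenticated HDFS access works
    hdfs dfs -ls /user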


Hadoop Admin, 05/2016 to 07/2018
Virtusa, Jersey City, NJ
  • Project Description: The project is aimed at periodically collecting reports for AT&T customers and storing them in HDFS. Report data is then extracted to the service layer for presentation, analytics, and business insights.
  • Responsibilities:
  • Experience in managing scalable Hadoop cluster environments.
  • Involved in managing, administering and monitoring clusters in Hadoop Infrastructure.
  • Regularly maintained, commissioned, and decommissioned nodes as disk failures occurred, using MapR File
  • Used Sqoop to import and export data between HDFS and RDBMS (see the Sqoop sketch after this section's environment line).
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability.
  • Responsible for troubleshooting issues in the execution of MapReduce jobs by inspecting and reviewing log files.
  • Collaborating with application teams to install operating system and Hadoop updates, patches, version upgrades when required.
  • Experience in HDFS maintenance and administration.
  • Managing nodes on Hadoop cluster connectivity and security.
  • Experience in commissioning and decommissioning of nodes from cluster.
  • Experience in Name Node HA implementation.
  • Worked on architecting solutions that process massive amounts of data on corporate and AWS cloud-based servers.
  • Working with data delivery teams to setup new Hadoop users.
  • Installed Oozie workflow engine to run multiple Map Reduce, Hive and HBase jobs.
  • Configured the Metastore for the Hadoop ecosystem and management tools.
  • Installed and configured Zookeeper
  • Hands-on experience in Nagios and Ganglia monitoring tools.
  • Experience in HDFS data storage and support for running Map Reduce jobs.
  • Performing tuning and troubleshooting of MR jobs by analyzing and reviewing Hadoop log files.
  • Installing and configuring Hadoop eco system like Sqoop, Pig, Flume, and Hive.
  • Maintaining and monitoring clusters. Loaded data into the cluster from dynamically generated files using Flume and from relational database management systems using Sqoop.
  • Imported and exported data between MySQL/Oracle and Hive using Sqoop.
  • Experience using DistCp to migrate data within and across clusters (see the DistCp sketch after this section's environment line).
  • Hands on experience in analyzing Log files for Hadoop eco system services.
  • Coordinate root cause analysis efforts to minimize future system issues.
  • Highly involved in operations and troubleshooting Hadoop clusters.
  • Troubleshot hardware issues and worked closely with various vendors on hardware, OS, and Hadoop issues.
    Environment: Cloudera 4.2, HDFS, Hive, Sqoop, HBase, Chef, RHEL (Red Hat Linux), Mahout, Tableau, MicroStrategy, Shell Scripting.
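
A minimal sketch of the Sqoop import/export pattern referenced in the bullets above; the JDBC URL, credentials file, table names, and HDFS paths are placeholders:

    # Import an RDBMS table into HDFS
    sqoop import \
      --connect jdbc:mysql://dbhost:3306/reports \
      --username etl_user --password-file /user/etl/.db_password \
      --table customer_reports \
      --target-dir /data/raw/customer_reports \
      --num-mappers 4

    # Export processed results from HDFS back to the RDBMS
    sqoop export \
      --connect jdbc:mysql://dbhost:3306/reports \
      --username etl_user --password-file /user/etl/.db_password \
      --table report_summary \
      --export-dir /data/out/report_summary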
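
And a sketch of an inter-cluster copy with DistCp, as in the migration bullet above; the NameNode hosts and paths are placeholders:

    # Copy a directory from the source cluster to a second cluster,
    # transferring only changed files (-update) and preserving file attributes (-p)
    hadoop distcp -update -p \
      hdfs://prod-nn:8020/data/archive \
      hdfs://dr-nn:8020/data/archive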
Hadoop Administrator, 06/2014 to 05/2016
Cognizant Technology Solutions, Edina, MN

Project: Hadoop Admin support
John Wiley & amp Sons, Inc, also referred to as Wiley, is a global publishing company that specializes in academic publishing and markets its products to professionals and consumers, students and instructors in higher education, and researchers and practitioners in scientific, technical, medical, and scholarly fields. This project deals with maintaining complete end to end Hadoop environment support.

  • Responsibilities:
  • Hands-on experience in installation, configuration, support, and management of Hadoop clusters using Apache and Cloudera (CDH5) distributions. Responsible for cluster maintenance, monitoring, commissioning and decommissioning DataNodes, troubleshooting, managing and reviewing data backups, and managing and reviewing log files.
  • Day-to-day responsibilities included solving developer issues, deploying code from one environment to another, provisioning access for new users, providing prompt solutions to reduce impact, and documenting issues to prevent recurrence.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Strong experience in core Linux environments.
  • Worked on Capacity planning for the Production Cluster
  • Worked on Configuring Kerberos Authentication in the Hadoop cluster and AS400
  • Configured queues in the Capacity Scheduler and took snapshot backups of HBase tables (see the snapshot sketch after this section's environment line).
  • Worked on fixing the cluster issues and Configuring High Availability for Name Node in CDH5.
  • Involved in Cluster Monitoring backup, restore and troubleshooting activities.
  • Handled the imports and exports of data onto HDFS using Flume and Sqoop.
  • Used Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive.
  • Responsible for implementation and ongoing administration of Hadoop infrastructure.
  • Managed and reviewed Hadoop log files.
  • Importing and exporting data from RDBMS into HDFS and HBASE using Sqoop.
  • Good understanding of installing and configuring Spark and Impala.
  • Successfully installed and configured Queues in Capacity scheduler and Oozie scheduler.
  • Worked on performance optimization of Hive queries, performed cluster-level tuning, and added users to the clusters.
  • Monitored workload, job performance, and capacity planning.
  • Involved in analyzing system failures, identifying root causes, and recommending courses of action.
  • Worked closely with team members to deliver project requirements, develop solutions and meet deadlines.
  • Environment: RHEL, CDH 5.11, Hive, Sqoop, Flume, HBase, MySQL, Cassandra, Oozie, ZooKeeper, Puppet, Nagios, AWS (S3, EC2, IAM, EMR), GitHub
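
A brief sketch of the HBase snapshot backups mentioned in the bullets above; the table, snapshot, and backup-cluster names are placeholders:

    # Take a snapshot of a table from the HBase shell, then list snapshots
    echo "snapshot 'orders', 'orders_snap_20160501'" | hbase shell
    echo "list_snapshots" | hbase shell

    # Export the snapshot to a backup cluster as a MapReduce job
    hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
      -snapshot orders_snap_20160501 \
      -copy-to hdfs://backup-nn:8020/hbase \
      -mappers 8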
Education and Training
Bachelor of Science: Computer Science, Expected in 08/2013
Royal University of Dhaka (RUD)
Activities and Honors
  • Photographer
  • Writer
  • Member of the RUD Student Association
