Hadoop Admin Resume Example

HADOOP ADMIN
Professional Summary
  • Over 8 years of professional IT experience, including experience in Big Data ecosystem technologies.
  • 5 years of experience exclusively in Hadoop administration and its components, including HDFS, MapReduce, Apache Pig, Hive, Sqoop, Oozie, and Flume.
  • Proven expertise in Hadoop project implementation and system configuration.
  • Excellent experience with Hadoop architecture and its components, such as JobTracker, TaskTracker, NameNode, DataNode, MapReduce, and YARN, and tools including Pig and Hive for data analysis, Sqoop for data migration, Flume for data ingestion, Oozie for scheduling, and ZooKeeper for coordinating cluster resources.
  • Involved in design and development of technical specifications using Hadoop ecosystem tools, as well as administration, testing, change control processes, and Hadoop administration activities such as installation, configuration, and maintenance of clusters.
  • Expertise in setting up, configuring, and monitoring Hadoop clusters using Cloudera CDH3, CDH4, and Apache Hadoop on Red Hat, CentOS, and Windows.
  • Expertise in commissioning, decommissioning, balancing, and managing nodes, and tuning servers for optimal cluster performance.
  • Hadoop cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.
  • Hands-on experience analyzing log files for Hadoop and ecosystem services and finding root causes.
  • Expertise in benchmarking and in performing backup and disaster recovery of NameNode metadata and of important and sensitive data residing on the cluster.
  • Rack-aware configuration for quick availability and processing of data (an illustrative topology-script sketch follows this summary). Experience in designing and implementing secure Hadoop clusters using Kerberos.
  • Backup configuration and recovery from a NameNode failure. Good experience in planning, installing, and configuring Hadoop clusters with Apache Hadoop and Cloudera distributions.
  • Hands-on experience with Linux admin activities on RHEL and CentOS. Experience in deploying Hadoop 2.0 (YARN).
  • Excellent command of backup, recovery, and disaster recovery procedures, implementing backup and recovery strategies for offline and online backups.
  • Experience in Big Data domains such as Shared Services (Hadoop clusters, operational model, inter-company chargeback, and lifecycle management).
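
To illustrate the rack-aware configuration mentioned above, here is a minimal sketch of a topology script that Hadoop can call to map DataNode addresses to racks; the IP ranges, rack names, and script path are hypothetical, and the script would be referenced from core-site.xml via the net.topology.script.file.name property.

    #!/bin/bash
    # Minimal, hypothetical rack topology script. Hadoop invokes it with one or
    # more DataNode IPs/hostnames and expects one rack path per argument on stdout.
    # Referenced from core-site.xml, e.g.:
    #   <property>
    #     <name>net.topology.script.file.name</name>
    #     <value>/etc/hadoop/conf/rack-topology.sh</value>
    #   </property>
    for node in "$@"; do
      case "$node" in
        10.0.1.*) echo "/dc1/rack1" ;;    # first rack (hypothetical subnet)
        10.0.2.*) echo "/dc1/rack2" ;;    # second rack (hypothetical subnet)
        *)        echo "/default-rack" ;; # fallback for unknown nodes
      esac
    done
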
Skills
  • Big Data: Hadoop, HDFS, MapReduce, Pig, Hive, HBase, Sqoop, Oozie.
  • Languages: SQL, PL/SQL, Core Java, Unix shell scripts, C, C++.
  • Dev Tools: Eclipse, Tableau, Ganglia.
  • Processes: Agile-Scrum, TDD, Application services, Product development.
Work History
Hadoop Admin, 07/2014 to 09/2016
Virtusa – Buffalo
  • Involved in the end-to-end process of Hadoop cluster setup, including installation, configuration, and monitoring of the Hadoop cluster.
  • Administered cluster maintenance, commissioning and decommissioning of data nodes, cluster monitoring, and troubleshooting.
  • Added and removed nodes in an existing Hadoop cluster.
  • Implemented backup configurations and recoveries from a NameNode failure.
  • Monitored systems and services; handled architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures.
  • Configured various property files, such as core-site.xml, hdfs-site.xml, and mapred-site.xml, based on job requirements.
  • Imported and exported data into HDFS using Sqoop (see the illustrative commands after this section).
  • Installed various Hadoop ecosystem components and Hadoop daemons.
  • Installed and configured HDFS, ZooKeeper, MapReduce, YARN, HBase, Hive, Sqoop, Ansible, and Oozie.
  • Integrated Hive and HBase to perform analysis on data
  • Managed and reviewed Hadoop Log files as a part of administration for troubleshooting purposes.
  • Communicated and escalated issues appropriately.
  • Applied standard backup policies to ensure high availability of the cluster.
  • Involved in analyzing system failures, identifying root causes, and recommending courses of action.
  • Documented system processes and procedures for future reference.
  • Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
  • Involved in Installing and configuring Kerberos for the authentication of users and Hadoop daemons.
  • Monitored clusters with Ganglia and Nagios.

Environment: Hadoop, HDFS, ZooKeeper, MapReduce, YARN, HBase, Hive, Sqoop, Oozie, Linux (CentOS, Red Hat), Cloudera CDH, Hortonworks, Apache Hadoop, SQL*Plus, Shell Scripting.
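
The commands below are a rough sketch of the kind of Sqoop import/export runs referenced in this role; the JDBC URL, credentials, table names, and HDFS paths are hypothetical placeholders.

    # Hypothetical import: pull a MySQL table into HDFS with four parallel map tasks
    sqoop import \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /data/raw/orders \
      --num-mappers 4

    # Hypothetical export: push aggregated results from HDFS back to the database
    sqoop export \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username etl_user -P \
      --table order_summary \
      --export-dir /data/curated/order_summary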



Hadoop Admin, 12/2011 to 05/2014
Virtusa – Atlanta
    • Worked on the Hortonworks Hadoop distribution (HDP 2.6.0.2.2), which managed services including HDFS, MapReduce2, Hive, Pig, HBase, Sqoop, Flume, Spark, Ambari Metrics, ZooKeeper, Falcon, and Oozie, across four clusters ranging from LAB and DEV to QA and PROD.
    • Monitored Hadoop cluster connectivity and security with the Ambari monitoring system.
    • Led the installation, configuration, and deployment of product software on new edge nodes that connect to the Hadoop cluster for data acquisition.
    • Responsible for cluster maintenance, monitoring, commissioning and decommissioning of data nodes, troubleshooting, managing and reviewing data backups, and managing and reviewing log files.
    • Day-to-day responsibilities included solving developer issues, moving code deployments from one environment to another, providing access to new users, providing prompt solutions to reduce impact, documenting them, and preventing future issues.
    • Collaborated with application teams to install operating system and Hadoop updates, patches, and version upgrades.
    • Involved in analyzing system failures, identifying root causes, and recommending courses of action.
    • Interacted with HDP support, logged issues in the support portal, and fixed them per the recommendations.
    • Imported logs from web servers with Flume to ingest the data into HDFS. 
    • Used Flume with a spooling-directory source to load data from the local file system into HDFS (a sample agent configuration follows this section).
    • Retrieved data from HDFS into relational databases with Sqoop. Parsed, cleansed, and mined useful and meaningful data in HDFS using MapReduce for further analysis.
    • Fine-tuned Hive jobs for optimized performance.
    • Implemented custom interceptors for Flume to filter data, and defined channel selectors to multiplex the data into different sinks.
    • Partitioned and queried the data in Hive for further analysis by the BI team.
    • Extended the functionality of Hive and Pig with custom UDFs and UDAFs.
    • Involved in extracting the data from various sources into Hadoop HDFS for processing. 
    • Worked on analyzing the Hadoop cluster and different big data analytic tools, including Pig, the HBase database, and Sqoop.
    • Created and deployed corresponding SolrCloud collections.
    • Created collections and configurations, and registered a Lily HBase Indexer configuration with the Lily HBase Indexer Service.
    • Configured and managed permissions for users in Hue.
    • Commissioned and decommissioned nodes on the Hadoop cluster on Red Hat Linux.
    • Involved in loading data from the Linux file system to HDFS.
    • Created and managed cron jobs.
    • Worked on tuning the performance of Pig queries.
    • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
    • Configured Storm to load data from MySQL to HBase using JMS.
    • Responsible to manage data coming from different sources.   
    • Involved in loading data from UNIX file system to HDFS. 
    • Integrated Kerberos into Hadoop to make the cluster stronger and more secure against unauthorized users.
    • Managed and reviewed Hadoop log files.
    • Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
    • Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
    • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it. 
    • Supported in setting up QA environment and updating configurations for implementing scripts with Pig and Sqoop. 

Environment: Hadoop HDFS, MapReduce, Hive, Pig, Flume, Oozie, Sqoop, Eclipse, Hortonworks, Ambari, Red Hat, MySQL.
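
As a minimal sketch of the spooling-directory ingestion described above, the Flume agent configuration below watches a local directory and writes completed files to HDFS; the agent name, directories, and HDFS path are hypothetical.

    # spool-agent.conf (hypothetical): local spool directory -> file channel -> HDFS sink
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    a1.sources.r1.type = spooldir
    a1.sources.r1.spoolDir = /var/log/incoming
    a1.sources.r1.channels = c1

    a1.channels.c1.type = file

    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = hdfs://nameservice1/data/raw/weblogs/%Y-%m-%d
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.useLocalTimeStamp = true
    a1.sinks.k1.channel = c1

The agent would then be started with something like: flume-ng agent --conf /etc/flume/conf --conf-file spool-agent.conf --name a1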


Hadoop Admin, 09/2010 to 10/2011
Virtusa – Boston
    • Hands-on installation and configuration of Hortonworks Data Platform (HDP) 2.3.4.
    • Worked on installing the production cluster, commissioning and decommissioning of data nodes, NameNode recovery, capacity planning, and slots configuration.
    • Worked on Hadoop administration; responsibilities included software installation, configuration, software upgrades, backup and recovery, cluster setup, daily cluster performance monitoring, and keeping the cluster up and running healthily.
    • Implemented the security requirements for Hadoop and integrated them with the Kerberos authentication and authorization infrastructure.
    • Designed, developed, and implemented connectivity products that allow efficient exchange of data between the core database engine and the Hadoop ecosystem.
    • Involved in defining job flows using Oozie to schedule and manage Apache Hadoop jobs.
    • Implemented NameNode High Availability on the Hadoop cluster to overcome the single point of failure.
    • Worked on the YARN capacity scheduler, creating queues to allocate resource guarantees to specific groups (see the illustrative queue configuration after this section).
    • Worked on importing and exporting data from an Oracle database into HDFS and Hive using Sqoop.
    • Monitored and analyzed MapReduce job executions on the cluster at the task level.
    • Extensively involved in cluster capacity planning, hardware planning, and performance tuning of the Hadoop cluster.
    • Wrote automation scripts and set up crontab jobs to maintain cluster stability and health.
    • Installed Ambari on an already existing Hadoop cluster.  
    • Implemented Rack Awareness for data locality optimization.  
    • Optimized and tuned the Hadoop environments to meet performance requirements.  
    • Hands-on experience with AWS cloud services, including EC2 and S3.
    • Collaborated with the offshore team.
    • Ability to document existing processes and recommend improvements.
    • Shared knowledge and assisted other team members as needed.
    • Assisted with maintenance and troubleshooting of scheduled processes.
    • Participated in development of system test plans and acceptance criteria.
    • Collaborated with offshore developers to monitor ETL jobs and troubleshoot issues.

Environment: Hortonworks HDP 2.3.x, Ambari, Oozie 4.2, Sqoop 1.4.6, MapReduce2, SQL Developer, Teradata, SSH, Eclipse, JDK 1.7, CDH 3.x/4.x/5.x, Cloudera Manager 4 & 5, Ganglia, Tableau, Shell Scripting, Pig, Hive, Flume, Kafka, Impala, CentOS.
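
To illustrate the capacity scheduler queue setup mentioned above, the fragment below (which would sit inside the <configuration> element of capacity-scheduler.xml) defines two hypothetical queues splitting cluster capacity 60/40; the queue names and percentages are illustrative.

    <!-- Hypothetical queue layout for capacity-scheduler.xml -->
    <property>
      <name>yarn.scheduler.capacity.root.queues</name>
      <value>etl,analytics</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.etl.capacity</name>
      <value>60</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.analytics.capacity</name>
      <value>40</value>
    </property>

After editing the file, the new queue definitions can typically be applied without restarting the ResourceManager by running: yarn rmadmin -refreshQueues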

Hadoop Engineer, 01/2010 to 07/2010
Bank of America Corporation – Pittsburgh
    • Involved in extracting customer's big data from various data sources into Hadoop HDFS.
    • This included clickstream data (Omniture), databases, and log data from servers.
    • Developed data pipelines using Pig and Hive from Oracle and Omniture data sources.
    • These pipelines had customized UDFs to extend the ETL functionality.
    • Used Sqoop to efficiently transfer data between databases and HDFS. Developed MapReduce programs to cleanse the data in HDFS obtained from heterogeneous data sources and make it suitable for ingestion into the Hive schema for analysis.
    • The Hive tables created per requirements were internal or external tables defined with appropriate static and dynamic partitions, intended for efficiency.
    • Implemented partitioning and bucketing in Hive for better organization of the data (an example table definition follows this section).
    • Developed UDFs in Pig and Hive. Used the Oozie workflow engine to manage interdependent Hadoop jobs and to automate several types of Hadoop jobs, such as Java MapReduce, Hive, and Sqoop, as well as system-specific jobs.
    • Involved in End-to-End implementation of ETL logic.
    • Worked with BI teams to generate reports in Tableau.
    • Installed and configured various components of the Cloudera Hadoop ecosystem.
    • Involved in performance tuning of Hadoop clusters and Hadoop MapReduce programs.
    • Upgraded Hadoop versions using Cloudera Manager in the Dev and QA clusters.

Environment: JDK 1.7, Red Hat Linux, HDFS, Mahout, MapReduce, Hive, Pig, Sqoop, Flume, ZooKeeper, Oozie, DB2, HBase, and Pentaho.
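
As an illustration of the Hive partitioning and bucketing mentioned above, the statement below sketches one possible table layout; the table name, columns, partition key, and bucket count are hypothetical.

    -- Hypothetical Hive table: daily partitions plus bucketing on customer_id
    -- (bucketing helps with sampling and map-side joins on the bucketed column)
    CREATE TABLE customer_events (
      event_id    STRING,
      customer_id BIGINT,
      event_type  STRING,
      amount      DOUBLE
    )
    PARTITIONED BY (event_date STRING)
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC;

Each day's data would then be loaded into its own partition, either with a static partition spec in the INSERT statement or via dynamic partitioning.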



Java Developer, 11/2007 to 11/2009
Chenega Mios – Massachusetts
    • Designed Object Model using UML.
    • Developed use cases using UML according to business requirements.
    • Developed the project using Agile methodology and used continuous integration for submitting changes.
    • Developed the presentation components like Order management and Profile management in JSP as part of Spring framework implementing the MVC design pattern.
    • Used JavaScript for creating the form Components in the UI Screens and validations.
    • Implemented the DAO pattern for populating the Customer information in Customer DB and the billing details in Bills DB and developed the persistence layer using Hibernate.
    • Designed and implemented Servlets, JSP, Struts for integration with provisioning and billing systems.
    • Responsible for designing the front-end GUI using JSP, HTML and JavaScript for validation.

Environment: Windows 2000, UNIX, HTML, XML, Java 1.4, J2EE, Hibernate 3.0, Spring 2.0, JavaBeans, JSP, UML, JDBC, Oracle 10g, SQL.




Education
Bachelor of Science: Electronics And Communications, 2007
GITAM University - City