Big Data/Hadoop Developer Resume Example

Resume Score: 70%

BIG DATA/HADOOP DEVELOPER
Profile

7 years of overall experience with a strong emphasis on the design, development, implementation, testing and deployment of software applications in Hadoop, HDFS, MapReduce, the Hadoop ecosystem, ETL and RDBMS, plus extensive development experience using Java, J2EE, JSP, and Servlets.

Professional Summary
  • Hadoop Developer with 3 years of experience designing and implementing complete end-to-end Hadoop infrastructure using MapReduce, Pig, Hive, Sqoop, Oozie, Flume, Spark, HBase, and ZooKeeper.
  • Java programmer with 4 years of extensive experience developing web-based applications and client-server technologies using Java and J2EE.
  • Experience installing, configuring and testing Hadoop ecosystem components.
  • Good knowledge of Hadoop architecture and its components, such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node and MapReduce concepts.
  • Experience writing MapReduce programs in Hadoop for working with Big Data.
  • Experience in analyzing data using HiveQL, Pig Latin and custom MapReduce programs in Java.
  • Experience in importing and exporting data using Sqoop from Relational Database Systems to HDFS and vice-versa.
  • Collected and aggregated large amounts of log data using Apache Flume and stored the data in HDFS for further analysis.
  • Experience with job/workflow scheduling and monitoring tools such as Oozie and ZooKeeper.
  • Experience in designing both time driven and data driven automated workflows using Oozie.
  • Worked in complete Software Development Life Cycle (analysis, design, development, testing, implementation and support) using Agile Methodologies.
  • Experience automating Hadoop installation, configuration and cluster maintenance using tools like Puppet.
  • Experience in setting up monitoring infrastructure for Hadoop cluster using Nagios and Ganglia.
  • Experience on Hadoop clusters using major Hadoop Distributions – Cloudera (CDH3, CDH4), Hortonworks (HDP) and MapR (M3 v3.0).
  • Experienced with integrated development environments and editors such as Eclipse, NetBeans, Kate and gEdit.
  • Migrated data from different databases (i.e., Oracle, DB2, MySQL, MongoDB) to Hadoop.
  • Developed various dashboards in Tableau, using context filters and sets while dealing with huge volumes of data.
  • Prior experience working as Software Developer in Java/J2EE and related technologies.
  • Experience designing and coding web applications using Core Java and J2EE technologies: JSP, Servlets and JDBC.
  • Excellent knowledge of Java and SQL for application development and deployment.
  • Hands-on experience creating database objects such as tables, stored procedures, functions, and triggers using SQL, PL/SQL and DB2.
  • Excellent technical, communication, analytical and problem-solving skills; works well with people of cross-cultural backgrounds and has strong troubleshooting capabilities.
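As a rough illustration of the custom MapReduce work noted above, a Hadoop Streaming-style word count can be sketched in Python. This is a local, in-memory sketch under assumed inputs, not code from the actual projects; on a cluster the same mapper/reducer functions would read stdin and write stdout under the Hadoop Streaming jar.

```python
# Hypothetical sketch of a custom MapReduce job in the Hadoop Streaming style:
# the mapper emits (word, 1) pairs and the reducer sums counts per key.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Emit (word, 1) for every whitespace-separated token."""
    for word in line.split():
        yield (word.lower(), 1)

def reducer(key, values):
    """Sum the counts for one key."""
    return (key, sum(values))

def run_job(lines):
    # Map phase
    pairs = [kv for line in lines for kv in mapper(line)]
    # Shuffle/sort phase: group pairs by key, as Hadoop does between stages
    pairs.sort(key=itemgetter(0))
    # Reduce phase
    return dict(
        reducer(key, (v for _, v in group))
        for key, group in groupby(pairs, key=itemgetter(0))
    )

counts = run_job(["big data big pipelines", "data lakes"])
# counts == {"big": 2, "data": 2, "lakes": 1, "pipelines": 1}
```

The shuffle step (sort plus groupby) is what guarantees each reducer call sees all values for one key, which is the core contract of the MapReduce model.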

Education
Master of Science: Engineering Management, 2011, Murray State University - Murray, KY
Bachelor of Science: Computer Science, 2008, Osmania University - Hyderabad, India
Technical Skills

Big Data Ecosystems: Hadoop, MapReduce, HDFS, HBase, Hive, Pig, Sqoop, Cassandra, Oozie, Zookeeper, Flume, Spark, Scala, Kafka.

Scripting Languages: JSP & Servlets, PHP, JavaScript, XML, HTML, Python and Bash.

Programming Languages/Tools: Java, C, C++, VB, XML, HTML/XHTML, HDML, DHTML.

Operating System: Windows 95/98/NT/2000/XP, MS-DOS, UNIX, Linux, Ubuntu

Databases (RDBMS): Oracle 8i/9i/10g, MS SQL Server 2000, DB2, MS Access & MySQL

Browser Languages: HTML, XHTML, CSS, XML, XSL, XSD, XSLT

Testing & Case Tools: Eclipse, NetBeans, CVS, ANT, JBuilder.

Methodologies: Agile, Design Patterns

Work History
Big Data/Hadoop Developer 11/2015 to Current
Bristol-Myers Squibb – Plainsboro, NJ
  • Worked on analyzing Hadoop cluster and different big data analytic tools including Pig, Hive, Spark, Scala and Sqoop.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Used the Spark API over Hortonworks Hadoop YARN to perform analytics on data in Hive.
  • Developed Spark code using Scala and Spark SQL/Streaming for faster testing and processing of data.
  • Involved in importing and exporting data (SQL Server, Oracle, CSV and text files) from local/external file systems and RDBMS to HDFS; loaded log data into HDFS using Flume.
  • ETL data cleansing, integration and transformation using Pig: responsible for managing data from disparate sources.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the R&D team.
  • Designed a data warehouse using Hive, created and managed Hive tables in Hadoop.
  • Created and maintained Technical documentation for launching Hadoop Clusters and for executing Hive queries and Pig Scripts.
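The Pig-based cleansing and transformation step described for this role can be sketched in plain Python. The three-field CSV layout (id, name, amount) and the cleaning rules are illustrative assumptions, not the actual pipeline; the sketch mirrors what a Pig FILTER / FOREACH / DISTINCT chain does over raw source records.

```python
# Illustrative ETL cleanse-and-transform pass: drop malformed rows,
# normalize fields, and deduplicate on a key.
def clean_records(raw_rows):
    seen_ids = set()
    cleaned = []
    for row in raw_rows:
        fields = [f.strip() for f in row.split(",")]
        if len(fields) != 3:                  # FILTER: drop malformed rows
            continue
        rec_id, name, amount = fields
        if not rec_id or rec_id in seen_ids:  # DISTINCT on the key
            continue
        try:
            amount = float(amount)            # coerce type; drop bad values
        except ValueError:
            continue
        seen_ids.add(rec_id)
        cleaned.append({"id": rec_id, "name": name.title(), "amount": amount})
    return cleaned

rows = ["1, alice smith, 10.50", "garbage",
        "1, alice smith, 10.50", "2, bob, oops", "3, carol, 7"]
result = clean_records(rows)
# result keeps only the valid, deduplicated rows with ids "1" and "3"
```

Keeping each rule as a separate early-continue makes the pass easy to map back onto individual Pig relational operators when porting the logic to the cluster.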




Sr. Hadoop Developer 09/2014 to 11/2015
FedEx – Memphis, TN
  • Worked with highly unstructured and semi-structured data (Replication factor of 3).
  • Involved in ETL, Data Integration and Migration. Imported data using Sqoop (Version 1.4.4) to load data from Oracle to HDFS on regular basis.
  • Installed and configured Pig (version 0.12.1) and wrote Pig Latin scripts to transform raw data from several data sources into baseline data.
  • Wrote Hive (version 0.13.0) queries for ad hoc data analysis to meet business requirements.
  • Very good understanding of partitioning and bucketing concepts in Hive; designed both managed and external Hive tables to optimize performance.
  • Solved performance issues in Hive and Pig scripts by understanding joins, groups and aggregations and how they translate to MapReduce jobs. Created Hive tables and worked on them using HiveQL.
  • Imported and exported data between HDFS and the Oracle database using Sqoop.
  • Developed UDFs in Java as needed for use in Pig and Hive queries.
  • Experience using SequenceFile, RCFile, Avro and HAR file formats.
  • Developed Oozie workflow for scheduling and orchestrating the ETL process.
  • Experience in managing and reviewing Hadoop log files.
  • Implemented authentication using Kerberos and authorization using Apache Sentry.
  • Very good experience in monitoring and managing the Hadoop cluster using Cloudera Manager. 
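The regular Oracle-to-HDFS Sqoop loads mentioned above are typically driven by a checkpointed "last value", as with `sqoop import --incremental append`. A minimal sketch of that bookkeeping follows; the row shape and check column are hypothetical.

```python
# Hypothetical sketch of Sqoop-style incremental-append bookkeeping:
# each run pulls only rows whose check-column value exceeds the last
# checkpoint, then advances the checkpoint for the next run.
def incremental_pull(source_rows, last_value):
    new_rows = [r for r in source_rows if r["id"] > last_value]
    new_last = max((r["id"] for r in new_rows), default=last_value)
    return new_rows, new_last

table = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 3, "v": "c"}]
batch1, ckpt = incremental_pull(table, last_value=0)  # first run: all rows
table.append({"id": 4, "v": "d"})
batch2, ckpt = incremental_pull(table, ckpt)          # next run: only id 4
```

The `default=last_value` guard keeps the checkpoint stable when a run finds no new rows, so an empty batch never rewinds the watermark.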



Sr. Hadoop Developer 07/2013 to 08/2014
PNC Loans – Princeton, NJ
  • Loaded data from different sources (Teradata and DB2) into HDFS using Sqoop (version 1.4.3) and loaded it into partitioned Hive tables.
  • Developed Pig (version 0.11.1) scripts to transform the data into a structured format.
  • Developed Hive (version 0.11.0) queries for analysis across different banners.
  • Developed Hive UDFs to bring all customer Email_Id values into a structured format.
  • Developed Oozie workflows for daily incremental loads that pull data from Teradata and import it into Hive tables.
  • Developed bash scripts to fetch log files from the FTP server and process them for loading into Hive tables.
  • Scheduled all bash scripts using the Resource Manager scheduler.
  • Moved data from HDFS to Cassandra using MapReduce and the BulkOutputFormat class.
  • Developed MapReduce programs to apply business rules to the data.
  • Developed and executed Hive queries for denormalizing the data.
  • Worked on analyzing data with Hive and Pig.
  • Experience in Implementing Rack Topology scripts to the Hadoop Cluster.
  • Very good experience with both MapReduce 1 (Job Tracker) and MapReduce 2 (YARN) setups.
  • Worked with the admin team in designing and upgrading CDH 3 to CDH 4.
  • Good working knowledge of Cassandra.
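An Email_Id-normalizing Hive UDF like the one mentioned above can be sketched as a plain function. The validation and normalization rules here are assumptions; in Hive this logic would live in a Java UDF's evaluate() method and be invoked from HiveQL.

```python
# Hypothetical sketch of an email-normalizing UDF: trim whitespace,
# lower-case the address, and return None for values that do not look
# like an email (Hive treats a returned null as NULL in the result set).
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def normalize_email(raw):
    if raw is None:
        return None
    email = raw.strip().lower()
    return email if EMAIL_RE.match(email) else None

emails = ["  Jane.Doe@Example.COM ", "not-an-email", None]
normalized = [normalize_email(e) for e in emails]
# normalized == ["jane.doe@example.com", None, None]
```

Returning None rather than raising keeps a full-table scan from failing on a single malformed row, which matters when the UDF runs over millions of records.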
Sr. Java Developer 01/2012 to 06/2013
Freddie Mac – McLean, VA
  • Analyzed business requirements and identified mapping documents required for system and functional testing efforts for all test scenarios.
  • Used HTML, DHTML, JavaScript, AJAX, ExtJS, jQuery, JSP and tag libraries to develop view pages.
  • Moved log files generated from various sources to HDFS for further processing through Flume.
  • Prepared the design TSD document with sequence diagrams and class diagrams using Microsoft Visio.
  • Followed Agile methodology and participated in Scrum meetings; responsible for upgrading the Crash applications to the latest Java version.
  • Developed SOAP-based web services using JAX-WS for the UMM application and used SoapUI for testing.
  • Involved in the design, development and enhancement of applications using agile methodologies with a test-driven approach.
  • Created REST-based web services using JAX-RS.
  • Implemented DBCRs by developing PL/SQL scripts and stored procedures.
  • Implemented reports for various screens in the application using Jasper iReports.
  • Developed the payment flow using AJAX partial page refresh, validation and dynamic drop-down lists.
  • Implemented web services to send order details to downstream systems using RESTful and SOAP approaches.
  • Expertise in Object-Oriented Analysis and Design (OOAD) concepts and various J2EE design patterns, with excellent logical and analytical skills.
  • Extensive design framework experience using MVC, Struts, Spring, Ajax and Hibernate.
  • Extensively used JPA for object-relational mapping for data persistence.
  • Used Hibernate for object-relational mapping and for database operations in the Oracle database.
  • Used JUnit for testing the application, and ANT and Maven for building projects.
  • Involved in configuring JMS and JNDI in Rational Application Developer (RAD).
  • Used JProbe and JMeter for performance testing.



Java Developer 06/2008 to 07/2010
L&T Infotech – Surat, India
  • Developed all the UI using JSP and Spring MVC, with client-side validations using JavaScript.
  • Developed the DAO layer using Hibernate.
  • Designed class and sequence diagrams for enhancements.
  • Developed user interface presentation screens using HTML, XML, CSS and jQuery.
  • Experience working with Spring MVC using AOP and DI/IoC.
  • Coordinated with QA leads on the development of test plans, test cases and unit test code.
  • Involved in testing and deployment of the application on the Apache Tomcat application server during the integration and QA testing phases.
  • Involved in building JUnit test cases for various modules.
  • Maintained the existing code base developed with the Spring and Hibernate frameworks by incorporating new features and fixing bugs.
  • Involved in application server configuration and in production issue resolution.
  • Wrote SQL queries and stored procedures for interacting with the Oracle database.
  • Documented common problems prior to go-live while actively involved in a production support role.


