Azure Data Engineer Resume Example with 3+ Years of Experience

Jessica Claire
609 Johnson Ave., Tulsa, OK 49204
Home: (555) 432-1000
Professional Summary
  • Senior Data Engineer with 5 years of experience building data-intensive applications, tackling challenging architectural and scalability problems, and managing data repositories for efficient visualization across a wide range of products.
  • Highly analytical team player with an aptitude for prioritizing needs and risks. Constantly strives to streamline processes and experiments with optimizing and benchmarking solutions. Creative troubleshooter and problem-solver who loves challenges.
  • Experience implementing ML algorithms in production using the distributed paradigms of Spark/Flink on Azure Databricks/AWS SageMaker.
  • Experience shaping and implementing Big Data architecture for the connected-car, restaurant supply chain, and transport logistics (IoT) domains.
  • Reliable Data Engineer keen to help companies collect, collate, and exploit digital assets. Skilled administrator of Azure services ranging from Azure Databricks, Azure relational and non-relational databases, and Azure Data Factory to cloud services. Practiced at cleansing and organizing data into new, more functional formats to drive increased efficiency and enhanced return on investment.
  • Dynamic Database Engineer devoted to maintaining reliable computer systems for uninterrupted workflows. Delivers up-to-date methods to increase database stability and lower the likelihood of security breaches and data corruption. Offers detailed training and reference materials to teach best practices for system navigation and minor troubleshooting.
  • Background includes data mining, warehousing and analytics. Proficient in machine and deep learning. Quality-driven and hardworking with excellent communication and project management skills.
  • Experienced Data Architect well-versed in defining requirements, planning solutions and implementing structures at the enterprise level. Analytical problem-solver with a detail-oriented and methodical approach. Prepared to offer 5 years of related experience to a dynamic new position with room for advancement.
  • Dedicated big data industry professional with history of meeting company goals utilizing consistent and organized practices. Skilled in working under pressure and adapting to new situations and challenges to best enhance the organizational brand.
  • Programming languages: SQL, Python, R, MATLAB, SAS, C++, C, Java
  • Databases and Azure cloud tools: Microsoft SQL Server, MySQL, Cosmos DB, Azure Data Lake Storage Gen2, Azure Blob Storage, Azure Synapse, IoT Hub, Event Hub, Data Factory, Azure Databricks, Azure Monitor, Machine Learning Studio
  • Frameworks: Spark [Structured Streaming, SQL], Kafka Streams
  • Databases: Azure SQL Database, Neo4j/Azure Cosmos DB (graph), Cassandra, MongoDB/Azure Cosmos DB (document)
  • Visualization: SQL Analytics, Tableau
  • Cloud platforms: Azure
  • DevOps: Databricks, Docker
  • ML frameworks: scikit-learn, TensorFlow
  • Microsoft Azure Data Engineer (DP-203)
  • Microsoft Azure Data Fundamentals (DP-900)
  • 5 years of data engineering experience in the cloud.
  • Strong in Azure services including Azure Databricks (ADB) and Azure Data Factory (ADF)
  • Experienced in developing real-time streaming analytics data pipelines; confident building connections between Event Hub, IoT Hub, and Stream Analytics.
  • Experienced with data warehouse techniques such as the snowflake schema
  • Skilled, goal-oriented team worker using GitHub version control
  • Highly skilled in machine learning models such as SVM, neural networks, linear regression, logistic regression, and random forest
  • Fully skilled in data mining using Jupyter Notebook, scikit-learn, PyTorch, TensorFlow, NumPy, and pandas; data visualization using Seaborn, Excel, and Tableau
  • Strong communication skills and confidence in public speaking
  • Always looking forward to taking on challenges and curious to learn new things
  • Machine learning
  • Warehousing expertise
  • Excellent communication
  • Analytical and critical thinking
  • Data analysis
Work History
02/XXX0 to 01/XXX2
Azure Data Engineer, Cayuse Talent Solutions, Lutz, FL
  • Generated detailed studies on potential third-party data handling solutions, verifying compliance with internal needs and stakeholder requirements.
  • Collaborated on ETL (Extract, Transform, Load) tasks, maintaining data integrity and verifying pipeline stability.
  • Performed large-scale data conversions for integration into HDInsight.
  • Designed and implemented effective database solutions (Azure Blob Storage) to store and retrieve data.
  • Designed advanced analytics ranging from descriptive and predictive models to machine learning techniques.
  • Monitored incoming data analytics requests and distributed results to support IoT Hub and Stream Analytics
  • Prepared documentation and analytic reports, delivering summarized results, analysis and conclusions to stakeholders.
  • Communicated new or updated data requirements to global team.
  • Developed database architectural strategies at modeling, design and implementation stages to address business or industry requirements.
  • Employed data cleansing methods, significantly enhancing data quality.
  • Sensed real-time data from the CAN bus, batched it into groups, and sent it to the IoT Hub.
  • Took responsibility for data integration across the whole group
  • Wrote Azure Service Bus topics and Azure Functions triggered when abnormal data was found by the Stream Analytics service
  • Created a SQL database for storing vehicle trip information
  • Created Blob Storage to save raw data sent from Stream Analytics
  • Constructed an Azure Cosmos DB (DocumentDB) store to save the latest status of the target car
  • Connected the Blob Storage to HDInsight
  • Deployed Data Factory to create data pipelines that orchestrate data into the SQL database
  • Handled data integration and storage technologies with Jupyter Notebook and MySQL.
  • Provided a clean, usable interface for drivers to check their car's status, whether on mobile devices or through a web client.
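The CAN-bus batching step described above could be sketched as follows. This is a minimal illustration, not the production code: the `CanReading` message shape and its field names are hypothetical, and the real schema would depend on the vehicle's signal definitions.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class CanReading:
    """One decoded CAN-bus frame (hypothetical shape for illustration)."""
    vehicle_id: str
    signal: str
    value: float
    timestamp_ms: int

def batch_readings(readings: Iterable[CanReading],
                   batch_size: int) -> Iterator[List[CanReading]]:
    """Group a stream of readings into fixed-size batches, as done before
    forwarding them to the IoT Hub to reduce per-message overhead."""
    batch: List[CanReading] = []
    for reading in readings:
        batch.append(reading)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch
```

Each yielded batch would then be serialized and sent as a single IoT Hub message rather than one message per frame.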
12/2018 to 11/XXX0
Data Engineer, Splunk, Los Angeles, CA
  • Communicated new or updated data requirements to global team.
  • Performed large-scale data conversions for integration into MySQL.
  • Designed compliance frameworks for multi-site data warehousing efforts to verify conformity with restaurant supply chain and data security guidelines.
  • Built snowflake-structured data warehouse systems for the BA and BS teams.
  • Collaborated on ETL (Extract, Transform, Load) tasks, maintaining data integrity and verifying pipeline stability.
  • Prepared written summaries to accompany results and maintain documentation.
  • Contributed to internal activities for overall process improvements, efficiencies and innovation.
  • Prepared documentation and analytic reports, delivering summarized results, analysis, and conclusions to the BA team
  • Structured the data into MySQL
  • Used Cloud Kernel to add log information to the data, then saved it into Kafka
  • Connected Kafka with the ODS
  • Worked with the data warehouse, separating the data into fact and dimension tables
  • Created a BAS layer ahead of the facts and dimensions to help extract the latest data from the slowly changing dimensions
  • Deployed combinations of specific fact and dimension tables for ATP special needs
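The "latest record" extraction that such a layer performs over a slowly changing dimension can be sketched as below. The column names (`store_id`, `effective_date`) are hypothetical stand-ins; a real Type-2 dimension would use whatever business key and versioning columns the warehouse defines.

```python
from typing import Any, Dict, List, Mapping

def latest_dimension_rows(rows: List[Mapping[str, Any]],
                          key: str = "store_id",
                          version_col: str = "effective_date") -> Dict[Any, Mapping[str, Any]]:
    """For a Type-2 slowly changing dimension, keep only the most recent
    version of each business key, giving a 'current view' layer ahead of
    the fact/dimension joins. ISO date strings compare correctly as text."""
    latest: Dict[Any, Mapping[str, Any]] = {}
    for row in rows:
        k = row[key]
        if k not in latest or row[version_col] > latest[k][version_col]:
            latest[k] = row
    return latest
```

In the warehouse itself this would typically be a view or staging table rather than application code, but the selection logic is the same.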
01/2018 to 11/2018
Data Analytics, Moxie, City, State
  • Leveraged text, charts and graphs to communicate findings in understandable format.
  • Analyzed large amounts of data to identify trends and find patterns, signals and hidden stories within data.
  • Assessed large datasets, drew valid inferences and prepared insights in narrative or visual forms.
  • Identified, reviewed and evaluated data management metrics to recommend ways to strengthen data across enterprise.
  • Led recruitment and development of strategic alliances to maximize utilization of existing talent and capabilities.
  • Aggregated and cleaned data from TransUnion on thousands of customers' credit attributes
  • Performed missing value imputation using population median, check population distribution for numerical and categorical variables to screen outliers and ensure data quality
  • Leveraged binning algorithm to calculate the information value of each individual attribute to evaluate the separation strength for the target variable
  • Checked variable multicollinearity by calculating VIF across predictors
  • Built logistic regression model to predict the probability of default; used stepwise selection method to select model variables
  • Tested multiple models by switching variables and selected the best model using performance metrics including KS, ROC, and Somers' D
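The information-value calculation mentioned above can be sketched as follows. The bin counts in the usage example are illustrative only, not real credit data; a production scorer would also smooth zero-count bins rather than skip them.

```python
import math
from typing import Sequence, Tuple

def information_value(bins: Sequence[Tuple[int, int]]) -> float:
    """Compute the information value (IV) of a binned attribute.
    Each tuple is (goods, bads) in one bin. For each bin:
        WoE = ln(%good / %bad)
        IV += (%good - %bad) * WoE
    Higher IV means stronger separation of the target variable."""
    total_good = sum(g for g, _ in bins)
    total_bad = sum(b for _, b in bins)
    iv = 0.0
    for good, bad in bins:
        pct_good = good / total_good
        pct_bad = bad / total_bad
        if pct_good == 0 or pct_bad == 0:
            continue  # skip degenerate bins (real scorers would smooth these)
        iv += (pct_good - pct_bad) * math.log(pct_good / pct_bad)
    return iv
```

An attribute whose bins all have the same good/bad mix scores an IV of 0, while a strongly separating attribute scores well above the conventional 0.3 "strong predictor" threshold.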
Master of Science: Computer Science And Mathematics
Rensselaer Polytechnic Institute - Troy, NY
Bachelor of Science: Computer Science And Programming
Rensselaer Polytechnic Institute - Troy, NY
