Version: 0.7.0

Talks & Powered By

Adoption

Alibaba Cloud

Alibaba Cloud provides cloud computing services to online businesses and Alibaba's own e-commerce ecosystem. Apache Hudi is integrated into Alibaba Cloud Data Lake Analytics, offering real-time analysis on Hudi datasets.

Amazon Web Services

Amazon Web Services is the world's leading cloud services provider. Apache Hudi is pre-installed with the AWS Elastic MapReduce (EMR) offering, giving AWS users a way to perform record-level updates/deletes and manage storage efficiently.
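
As an illustration of the record-level upserts and deletes mentioned above, here is a minimal Spark (Scala) sketch using the Hudi datasource against a hypothetical S3 table. The table name, paths and field names (order_id, order_date, updated_at) are assumptions for illustration, not taken from any adopter's pipeline.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("hudi-upsert-delete-sketch")
  // Hudi requires Kryo serialization for Spark
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .getOrCreate()

// Hypothetical table location (e.g. an S3 path on EMR) and key fields.
val basePath  = "s3://my-bucket/hudi/orders"
val tableName = "orders"

// Upsert: rows whose record key already exists are updated, new keys are inserted.
val updates = spark.read.json("s3://my-bucket/incoming/orders_changes.json")
updates.write.format("hudi")
  .option("hoodie.table.name", tableName)
  .option("hoodie.datasource.write.recordkey.field", "order_id")
  .option("hoodie.datasource.write.partitionpath.field", "order_date")
  .option("hoodie.datasource.write.precombine.field", "updated_at")
  .option("hoodie.datasource.write.operation", "upsert")
  .mode(SaveMode.Append)
  .save(basePath)

// Record-level delete: the same datasource write with the 'delete' operation
// removes the rows whose keys appear in the incoming DataFrame.
val deletes = spark.read.json("s3://my-bucket/incoming/orders_deletes.json")
deletes.write.format("hudi")
  .option("hoodie.table.name", tableName)
  .option("hoodie.datasource.write.recordkey.field", "order_id")
  .option("hoodie.datasource.write.partitionpath.field", "order_date")
  .option("hoodie.datasource.write.precombine.field", "updated_at")
  .option("hoodie.datasource.write.operation", "delete")
  .mode(SaveMode.Append)
  .save(basePath)
```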

Clinbrain

Clinbrain is a leader in big data platforms for the medical industry. We have built 200 medical big data centers by integrating the Hudi data lake solution in numerous hospitals. Hudi provides the ability to upsert and delete records on HDFS, and its incremental view keeps fresh streaming data up to date in our Hadoop systems efficiently.
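
A minimal sketch of the incremental view idea described above, assuming a hypothetical Hudi table on HDFS: the path, commit time and table name are illustrative, and the option keys follow the Hudi Spark datasource as of the 0.7.x line.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("hudi-incremental-view-sketch")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .getOrCreate()

// Hypothetical HDFS location of a Hudi table written by an upstream pipeline.
val basePath = "hdfs:///warehouse/hudi/patient_events"

// Commit time to start from, e.g. the last instant the downstream job processed.
val beginInstant = "20210101000000"

// Incremental query: returns only records written after beginInstant,
// so downstream jobs stay fresh without full-table rescans.
val incremental = spark.read.format("hudi")
  .option("hoodie.datasource.query.type", "incremental")
  .option("hoodie.datasource.read.begin.instanttime", beginInstant)
  .load(basePath)

incremental.createOrReplaceTempView("patient_events_incr")
spark.sql("select count(*) from patient_events_incr").show()
```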

EMIS Health

EMIS Health is the largest provider of Primary Care IT software in the UK, with datasets comprising more than 500Bn healthcare records. Hudi is used to manage their analytics datasets in production and keep them up to date with their upstream sources. Presto is used to query the data written in Hudi format.

Grofers

Grofers is a grocery delivery provider operating across the APAC region. Grofers has integrated Hudi into its central pipelines for replicating backend database change data capture (CDC) into the warehouse.

H3C Digital Platform

The H3C digital platform provides end-to-end capabilities for data collection, storage, computation and governance, and enables the construction of data centers and data governance capabilities for the medical, smart park, smart city and other industries. Apache Hudi is integrated into the digital platform to meet the real-time update needs of massive data.

Kyligence

Kyligence is the leading big data analytics platform company. We've built end-to-end solutions for various Global Fortune 500 companies in the US and China. We adopted Apache Hudi in our cloud solution on AWS in 2019. With the help of Hudi, we are able to process upserts and deletes easily, and we use incremental views to build efficient data pipelines in AWS. Hudi datasets can also be integrated into Kyligence Cloud directly for highly concurrent OLAP access.

Lingyue-digital Corporation

Lingyue-digital Corporation belongs to the BMW Group. Apache Hudi is used to ingest MySQL and PostgreSQL change data capture, and we build upsert scenarios on Hadoop and Spark.

Logical Clocks

The Hopsworks 1.x series supports Apache Hudi feature groups, enabling upserts and time travel.

SF-Express

SF-Express is the leading logistics service provider in China. Hudi is used to build a real-time data warehouse, providing real-time computing solutions with higher efficiency and lower cost for our business.

Tathastu.ai

Tathastu.ai offers the largest AI/ML playground of consumer data for data scientists, AI experts and technologists to build upon. They have built a CDC pipeline using Apache Hudi and Debezium. Data from Hudi datasets is being queried using Hive, Presto and Spark.

Tencent

EMR from Tencent Cloud has integrated Hudi as one of its big data components since V2.2.0. Using Hudi, end users can handle either read-heavy or write-heavy use cases, and Hudi manages the underlying data stored on HDFS/COS/CHDFS using Apache Parquet and Apache Avro.

Uber

Apache Hudi was originally developed at Uber to achieve low-latency database ingestion with high efficiency. It has been in production since Aug 2016, powering Uber's massive 100PB data lake, including highly business-critical tables like core trips, riders and partners. It also powers several incremental Hive ETL pipelines and is currently being integrated into Uber's data dispersal system.

Udemy

At Udemy, Apache Hudi on AWS EMR is used to ingest MySQL change data capture.

Yields.io

Yields.io is the first FinTech platform that uses AI for automated model validation and real-time monitoring on an enterprise-wide scale. Their data lake is managed by Hudi. They are also actively building their infrastructure for incremental, cross language/platform machine learning using Hudi.

Yotpo

Yotpo uses Hudi in several ways. Firstly, Hudi is integrated as a writer in their open-source ETL framework, Metorikku, and is used as an output writer for a CDC pipeline, with events generated from database binlog streams to Kafka and then written to S3.

37 Interactive Entertainment

37 Interactive Entertainment is a global top-20 listed game company and a leading company on China's A-share market. Apache Hudi is integrated into our Data Middle Platform, offering a real-time data warehouse and solving the problem of frequently changing data. Meanwhile, we have built a set of data access standards based on Hudi, which provides a guarantee for massive data queries in game operation scenarios.

Talks & Presentations

  1. "Hoodie: Incremental processing on Hadoop at Uber" - By Vinoth Chandar & Prasanna Rajaperumal Mar 2017, Strata + Hadoop World, San Jose, CA

  2. "Hoodie: An Open Source Incremental Processing Framework From Uber" - By Vinoth Chandar. Apr 2017, DataEngConf, San Francisco, CA Slides Video

  3. "Incremental Processing on Large Analytical Datasets" - By Prasanna Rajaperumal June 2017, Spark Summit 2017, San Francisco, CA. Slides Video

  4. "Hudi: Unifying storage and serving for batch and near-real-time analytics" - By Nishith Agarwal & Balaji Vardarajan September 2018, Strata Data Conference, New York, NY

  5. "Hudi: Large-Scale, Near Real-Time Pipelines at Uber" - By Vinoth Chandar & Nishith Agarwal October 2018, Spark+AI Summit Europe, London, UK

  6. "Powering Uber's global network analytics pipelines in real-time with Apache Hudi" - By Ethan Guo & Nishith Agarwal, April 2019, Data Council SF19, San Francisco, CA.

  7. "Building highly efficient data lakes using Apache Hudi (Incubating)" - By Vinoth Chandar June 2019, SF Big Analytics Meetup, San Mateo, CA

  8. "Apache Hudi (Incubating) - The Past, Present and Future Of Efficient Data Lake Architectures" - By Vinoth Chandar & Balaji Varadarajan September 2019, ApacheCon NA 19, Las Vegas, NV, USA

  9. "Insert, upsert, and delete data in Amazon S3 using Amazon EMR" - By Paul Codding & Vinoth Chandar December 2019, AWS re:Invent 2019, Las Vegas, NV, USA

  10. "Building Robust CDC Pipeline With Apache Hudi And Debezium" - By Pratyaksh, Purushotham, Syed and Shaik December 2019, Hadoop Summit Bangalore, India

  11. "Using Apache Hudi to build the next-generation data lake and its application in medical big data" - By JingHuang & Leesf March 2020, Apache Hudi & Apache Kylin Online Meetup, China

  12. "Building a near real-time, high-performance data warehouse based on Apache Hudi and Apache Kylin" - By ShaoFeng Shi March 2020, Apache Hudi & Apache Kylin Online Meetup, China

  13. "Building large scale, transactional data lakes using Apache Hudi" - By Nishith Agarwal, June 2020, Berlin Buzzwords 2020.

  14. "Apache Hudi - Design/Code Walkthrough Session for Contributors" - By Vinoth Chandar, July 2020, Hudi community.

  15. "PrestoDB and Apache Hudi" - By Bhavani Sudha Saktheeswaran and Brandon Scheller, Aug 2020, PrestoDB Community Meetup.

  16. "DC_THURS : Apache Hudi w/ Nishith Agarwal & Vinoth Chandar", Aug 2020, Online discussion/Q&A with DataCouncil Founder

  17. "Panel Discussion on Presto Ecosystem" - By Vinoth Chandar, Sep 2020, PrestoCon "panel".

  18. "Next Generation Data lakes using Apache Hudi" - By Balaji Varadarajan and Sivabalan Narayanan, Sep 2020, "ApacheCon"

  19. "Building Large-Scale, Transactional Data Lakes using Apache Hudi" - By Nishith Agarwal, Data Summit 2020

  20. "Landing practice of Apache Hudi in T3go" - By VinoYang and XianghuWang, November 2020, Qcon.

  21. "Meetup talk by Nishith Agarwal" - Uber Data Platforms Meetup, Dec 2020

Articles

You can check out our blog pages for content written by our committers/contributors.

  1. "The Case for incremental processing on Hadoop" - O'reilly Ideas article by Vinoth Chandar
  2. "Hoodie: Uber Engineering's Incremental Processing Framework on Hadoop" - Engineering Blog By Prasanna Rajaperumal
  3. "New – Insert, Update, Delete Data on S3 with Amazon EMR and Apache Hudi" - AWS Blog by Danilo Poccia
  4. "The Apache Software Foundation Announces Apache® Hudi™ as a Top-Level Project" - ASF Graduation announcement
  5. "Apache Hudi grows cloud data lake maturity"
  6. "Building a Large-scale Transactional Data Lake at Uber Using Apache Hudi" - Uber eng blog by Nishith Agarwal
  7. "Hudi On Hops" - By NETSANET GEBRETSADKAN KIDANE
  8. "PrestoDB and Apache Hudi - PrestoDB - Hudi integration blog by Bhavani Sudha Saktheeswaran and Brandon Scheller
  9. "Origins of Data Lake at Grofers" - by Akshay Agarwal
  10. "Data Lake Change Capture using Apache Hudi & Amazon AMS/EMR" - Towards DataScience article, Oct 20
  11. "How nClouds Helps Accelerate Data Delivery with Apache Hudi on Amazon EMR" - published by nClouds in partnership with AWS
  12. "Apply record level changes from relational databases to Amazon S3 data lake using Apache Hudi on Amazon EMR and AWS Database Migration Service" - AWS blog
  13. "Architecting Data Lakes for the Modern Enterprise at Data Summit Connect Fall 2020"
  14. "Can Big Data Solutions Be Affordable?"
  15. "Building High-Performance Data Lake Using Apache Hudi and Alluxio at T3Go"
  16. "Data Lake Change Capture using Apache Hudi & Amazon AMS/EMR Part 2"
