We will look at how to migrate a large parquet table to Hudi without having to rewrite the entire dataset.
Apache Hudi maintains per record metadata to perform core operations such as upserts and incremental pull. To take advantage of Hudi’s upsert and incremental processing support, users would need to rewrite their whole dataset to make it an Apache Hudi table. Hudi 0.6.0 comes with an experimental feature to support efficient migration of large Parquet tables to Hudi without the need to rewrite the entire dataset.
High Level Idea:
Per Record Metadata:
Apache Hudi maintains record-level metadata to perform efficient upserts and incremental pulls.
An Apache Hudi physical file contains 3 parts:
1. For each record, 5 Hudi metadata fields with column indices 0 to 4
2. For each record, the original data columns that comprise the record (Original Data)
3. Additional Hudi metadata at the file footer for index lookup
Parts (1) and (3) constitute what we term the “Hudi skeleton”. The Hudi skeleton is the additional metadata Hudi maintains in each physical parquet file to support its primitives. The conceptual idea is to decouple the Hudi skeleton data from the original data (2): the Hudi skeleton can be stored in a Hudi file while the original data stays in an external non-Hudi file. Migrating a large Parquet table then only creates Hudi skeleton files, without rewriting the original data.
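To make the skeleton concrete, here is a minimal sketch (Spark shell, Scala) that projects the five per-record Hudi metadata fields from an existing Hudi table; the table path is a placeholder.

```scala
// Minimal sketch: project the per-record metadata fields that make up the
// Hudi skeleton. The table path below is a placeholder.
val df = spark.read.format("hudi").load("/tmp/hudi_trips")
df.select(
  "_hoodie_commit_time",     // commit that created or last updated the record
  "_hoodie_commit_seqno",    // sequence number of the record within that commit
  "_hoodie_record_key",      // record key used for upserts
  "_hoodie_partition_path",  // partition the record belongs to
  "_hoodie_file_name"        // physical file containing the record
).show(5, false)
```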
Design Deep Dive:
For a deep dive on the internals, please take a look at the RFC document.
Hudi supports 2 modes when migrating Parquet tables. We will use the terms bootstrap and migration interchangeably in this document.
METADATA_ONLY: In this mode, record-level metadata alone is generated for each source record and stored in the new bootstrap location.
FULL_RECORD: In this mode, record-level metadata is generated for each source record, and both the original record and its metadata are copied to the new bootstrap location.
You can pick and choose these modes at the partition level. A common strategy is to use FULL_RECORD mode for a small set of “hot” partitions that are accessed more frequently, and METADATA_ONLY for the larger set of “warm” partitions, as sketched below.
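For illustration, the following is a sketch of how per-partition modes could be selected via Hudi's regex-based bootstrap mode selector; the regex and partition layout are hypothetical, so verify the exact configuration names against the Hudi 0.6.0 configuration reference.

```scala
// Hypothetical example: bootstrap 2020/08 partitions in FULL_RECORD mode and
// everything else in METADATA_ONLY mode, selected by a regex over partition paths.
val bootstrapModeOpts = Map(
  "hoodie.bootstrap.mode.selector" ->
    "org.apache.hudi.client.bootstrap.selector.BootstrapRegexModeSelector",
  "hoodie.bootstrap.mode.selector.regex" -> "2020/08/.*",
  "hoodie.bootstrap.mode.selector.regex.mode" -> "FULL_RECORD"
)
```

These options would be passed alongside the other bootstrap write configurations shown later in this post.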
Query Engine Support:
For a METADATA_ONLY bootstrapped table, the Spark datasource, Spark-Hive, and native Hive query engines are supported. Presto support is in the works.
Ways To Migrate:
There are 2 ways to migrate a large Parquet table to Hudi:
Spark Datasource Write
Hudi DeltaStreamer
We will look at how to migrate using both these approaches.
These are bootstrap-specific configurations that need to be set in addition to the regular Hudi write configurations (a usage sketch follows this list).
Base path of the source Parquet table.
Spark parallelism used when running bootstrap.
Bootstrap index internally used by Hudi to map Hudi skeleton files to source Parquet files.
For METADATA_ONLY bootstrap, this class allows customization of the partition paths used in the Hudi target dataset. By default, no customization is done and the partition paths reflect what is available in the source Parquet table.
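Putting the above configurations together, here is a minimal sketch of a METADATA_ONLY bootstrap issued through the Spark datasource writer. The paths, table name, and key fields are placeholders, and the string option keys reflect my understanding of the Hudi 0.6.0 configuration names; double-check them against the Hudi configuration reference.

```scala
import org.apache.spark.sql.SaveMode

// Hypothetical paths and field names; replace with your own.
val srcPath  = "s3://bucket/path/to/source_parquet_table"
val basePath = "s3://bucket/path/to/hudi_table"

spark.emptyDataFrame.write
  .format("hudi")
  .option("hoodie.table.name", "my_hudi_table")
  // the "bootstrap" operation creates skeleton files instead of rewriting data
  .option("hoodie.datasource.write.operation", "bootstrap")
  .option("hoodie.datasource.write.recordkey.field", "id")
  .option("hoodie.datasource.write.partitionpath.field", "datestr")
  // bootstrap-specific configurations described above
  .option("hoodie.bootstrap.base.path", srcPath)
  .option("hoodie.bootstrap.parallelism", "500")
  .option("hoodie.bootstrap.keygen.class", "org.apache.hudi.keygen.SimpleKeyGenerator")
  .mode(SaveMode.Overwrite)
  .save(basePath)
```

Since bootstrap only writes skeleton files, an empty DataFrame suffices as the write input; the source data is picked up from the configured bootstrap base path.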
Hudi DeltaStreamer allows bootstrap to be performed using the --run-bootstrap command line option.
If you are planning to use DeltaStreamer after the initial bootstrap to incrementally ingest data into the new Hudi dataset, you need to pass either --checkpoint or --initial-checkpoint-provider to set the initial checkpoint for the DeltaStreamer.
Here is an example of running a METADATA_ONLY bootstrap using DeltaStreamer.
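The spark-submit invocation below is a sketch rather than a verbatim command: the bundle version, jar path, table paths, and field names are placeholders.

```bash
spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  /path/to/hudi-utilities-bundle_2.11-0.6.0.jar \
  --run-bootstrap \
  --target-base-path /path/to/hudi_table \
  --target-table my_hudi_table \
  --table-type COPY_ON_WRITE \
  --hoodie-conf hoodie.bootstrap.base.path=/path/to/source_parquet_table \
  --hoodie-conf hoodie.datasource.write.recordkey.field=id \
  --hoodie-conf hoodie.datasource.write.partitionpath.field=datestr \
  --hoodie-conf hoodie.bootstrap.keygen.class=org.apache.hudi.keygen.SimpleKeyGenerator
```

If you plan to continue incremental ingestion with DeltaStreamer afterwards, also pass --checkpoint (or --initial-checkpoint-provider) as noted above.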
Known Caveats:
Proper defaults are needed for the bootstrap config hoodie.bootstrap.full.input.provider. Here is the ticket.
DeltaStreamer manages checkpoints inside Hudi commit files and expects checkpoints in previously committed metadata. Users are expected to pass a checkpoint or an initial checkpoint provider when performing bootstrap through DeltaStreamer. Such support is not yet present when bootstrapping using the Spark datasource. Here is the ticket.