
Spark Guide

This guide provides a quick peek at Hudi's capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi table of default table type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.

Setup#

Hudi works with Spark 2.4.3+ and Spark 3.x versions. You can follow the instructions here for setting up Spark. With the 0.9.0 release, Spark SQL DML support has been added and is experimental.

From the extracted directory run spark-shell with Hudi as:

// spark-shell for spark 3
spark-shell \
  --packages org.apache.hudi:hudi-spark3-bundle_2.12:0.9.0,org.apache.spark:spark-avro_2.12:3.0.1 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'

// spark-shell for spark 2 with scala 2.12
spark-shell \
  --packages org.apache.hudi:hudi-spark-bundle_2.12:0.9.0,org.apache.spark:spark-avro_2.12:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'

// spark-shell for spark 2 with scala 2.11
spark-shell \
  --packages org.apache.hudi:hudi-spark-bundle_2.11:0.9.0,org.apache.spark:spark-avro_2.11:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
Please note the following:
  • the spark-avro module needs to be specified in --packages as it is not included with spark-shell by default
  • the spark-avro and Spark versions must match (we have used 3.0.1 for both above)
  • we have used hudi-spark-bundle built for Scala 2.12 since the spark-avro module used also depends on Scala 2.12. If spark-avro_2.11 is used, hudi-spark-bundle_2.11 needs to be used correspondingly.

Setup table name, base path and a data generator to generate records for this guide.

// spark-shell
import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._

val tableName = "hudi_trips_cow"
val basePath = "file:///tmp/hudi_trips_cow"
val dataGen = new DataGenerator
tip

The DataGenerator can generate sample inserts and updates based on the sample trip schema here

Create Table#

// scala
// No separate create table command is required in Spark. The first batch of writes to a table will create the table if it does not exist.

Insert data#

Generate some new trips, load them into a DataFrame and write the DataFrame into the Hudi table as below.

// spark-shell
val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Overwrite).
  save(basePath)
info

mode(Overwrite) overwrites and recreates the table if it already exists. You can check the data generated under /tmp/hudi_trips_cow/<region>/<country>/<city>/. We provided a record key (uuid in the schema), a partition field (region/country/city) and combine logic (ts in the schema) to ensure trip records are unique within each partition. For more info, refer to Modeling data stored in Hudi; for info on ways to ingest data into Hudi, refer to Writing Hudi Tables. Here we are using the default write operation: upsert. If you have a workload without updates, you can also issue insert or bulk_insert operations, which could be faster. To know more, refer to Write operations
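To make the combine logic concrete, here is a plain-Scala sketch (not Hudi's actual implementation) of what the precombine field does: when several incoming writes share the same record key within a partition, the one with the largest ts wins.

```scala
// Illustrative stand-in for a trip record; field names follow the sample schema.
case class TripRecord(uuid: String, partitionpath: String, ts: Long, fare: Double)

// Keep only the record with the highest precombine value (ts) per HoodieKey,
// i.e. per (partition path, record key) pair.
def precombine(records: Seq[TripRecord]): Seq[TripRecord] =
  records
    .groupBy(r => (r.partitionpath, r.uuid))
    .values
    .map(_.maxBy(_.ts))
    .toSeq

val batch = Seq(
  TripRecord("a", "americas/brazil/sao_paulo", 1L, 10.0),
  TripRecord("a", "americas/brazil/sao_paulo", 5L, 42.0), // later write wins
  TripRecord("b", "americas/brazil/sao_paulo", 2L, 7.0)
)
val deduped = precombine(batch)
```

The deduped batch retains only one record per key (the ts = 5 version of record "a"), which mirrors why trip records stay unique within each partition.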

Check out https://hudi.apache.org/blog/2021/02/13/hudi-key-generators for the various key generator options, such as timestamp-based, complex, custom and non-partitioned key generators.
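For intuition, composite record keys and partition paths in the spirit of Hudi's complex key generator can be sketched in plain Scala like this (the field names and exact formatting here are illustrative assumptions, not Hudi's implementation):

```scala
// Build a composite record key of the form "field1:value1,field2:value2".
def complexKey(row: Map[String, String], keyFields: Seq[String]): String =
  keyFields.map(f => s"$f:${row(f)}").mkString(",")

// Build a partition path by joining partition field values with "/".
def partitionPath(row: Map[String, String], partitionFields: Seq[String]): String =
  partitionFields.map(row(_)).mkString("/")

val row = Map(
  "uuid" -> "42",
  "region" -> "americas",
  "country" -> "brazil",
  "city" -> "sao_paulo"
)
val key  = complexKey(row, Seq("uuid"))                          // "uuid:42"
val path = partitionPath(row, Seq("region", "country", "city")) // "americas/brazil/sao_paulo"
```

This is why the quickstart's partitionpath field produces the region/country/city folder layout under the base path.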

Query data#

Load the data files into a DataFrame.

// spark-shell
val tripsSnapshotDF = spark.
  read.
  format("hudi").
  load(basePath)
// load(basePath) uses the "/partitionKey=partitionValue" folder structure for Spark auto partition discovery
tripsSnapshotDF.createOrReplaceTempView("hudi_trips_snapshot")

spark.sql("select fare, begin_lon, begin_lat, ts from hudi_trips_snapshot where fare > 20.0").show()
spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from hudi_trips_snapshot").show()

Time Travel Query#

Hudi supports time travel queries since 0.9.0. Three query time formats are currently supported, as shown below.

spark.read.
  format("hudi").
  option("as.of.instant", "20210728141108").
  load(basePath)

spark.read.
  format("hudi").
  option("as.of.instant", "2021-07-28 14:11:08").
  load(basePath)

// It is equivalent to "as.of.instant = 2021-07-28 00:00:00"
spark.read.
  format("hudi").
  option("as.of.instant", "2021-07-28").
  load(basePath)
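For intuition, the three accepted formats can be thought of as normalizing to Hudi's compact instant form, yyyyMMddHHmmss. The following is an illustrative plain-Scala sketch, not Hudi's actual parser:

```scala
import java.time.{LocalDate, LocalDateTime}
import java.time.format.DateTimeFormatter

val instantFmt  = DateTimeFormatter.ofPattern("yyyyMMddHHmmss")
val dateTimeFmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

// Normalize any of the three "as.of.instant" forms to a compact instant string.
def toInstant(s: String): String =
  if (s.matches("\\d{14}")) s // already compact: "20210728141108"
  else if (s.matches("\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}"))
    LocalDateTime.parse(s, dateTimeFmt).format(instantFmt) // "2021-07-28 14:11:08"
  else
    LocalDate.parse(s).atStartOfDay.format(instantFmt) // date only -> midnight
```

Under this sketch, "2021-07-28" maps to "20210728000000", matching the note above that a bare date is treated as midnight of that day.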
info

Since 0.9.0, Hudi supports a built-in FileIndex, HoodieFileIndex, to query Hudi tables, which supports partition pruning and metadata-table-based file listing. This helps improve query performance. It also supports a non-global query path, which means users can query the table by the base path without specifying "*" in the query path. This feature is enabled by default for the non-global query path. For the global query path, Hudi uses the old query path. Refer to Table types and queries for more info on all table types and query types supported.

Update data#

This is similar to inserting new data. Generate updates to existing trips using the data generator, load them into a DataFrame and write the DataFrame into the Hudi table.

// spark-shell
val updates = convertToStringList(dataGen.generateUpdates(10))
val df = spark.read.json(spark.sparkContext.parallelize(updates, 2))
df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Append).
  save(basePath)
note

Notice that the save mode is now Append. In general, always use Append mode unless you are trying to create the table for the first time. Querying the data again will now show the updated trips. Each write operation generates a new commit denoted by a timestamp. Look for changes in the _hoodie_commit_time, rider and driver fields for the same _hoodie_record_key values compared to the previous commit.

Incremental query#

Hudi also provides the capability to obtain a stream of records that changed since a given commit timestamp. This can be achieved using Hudi's incremental querying and providing a begin time from which changes need to be streamed. We do not need to specify endTime if we want all changes after the given commit (which is the common case).

// spark-shell
// reload data
spark.
  read.
  format("hudi").
  load(basePath).
  createOrReplaceTempView("hudi_trips_snapshot")

val commits = spark.sql("select distinct(_hoodie_commit_time) as commitTime from hudi_trips_snapshot order by commitTime").map(k => k.getString(0)).take(50)
val beginTime = commits(commits.length - 2) // commit time we are interested in

// incrementally query data
val tripsIncrementalDF = spark.read.format("hudi").
  option(QUERY_TYPE_OPT_KEY, QUERY_TYPE_INCREMENTAL_OPT_VAL).
  option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).
  load(basePath)
tripsIncrementalDF.createOrReplaceTempView("hudi_trips_incremental")

spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_incremental where fare > 20.0").show()
info

This will give all changes that happened after the beginTime commit with the filter of fare > 20.0. The unique thing about this feature is that it now lets you author streaming pipelines on batch data.
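Conceptually, the incremental query returns exactly the records whose commit time is strictly greater than beginTime. A minimal plain-Scala sketch of that filter (the type and field names here are illustrative, not Hudi internals):

```scala
// Illustrative stand-in for a committed record with its Hudi commit time.
case class CommitRow(commitTime: String, uuid: String, fare: Double)

// Hudi instant times are yyyyMMddHHmmss strings, so lexicographic comparison
// matches chronological order.
def incrementalSince(rows: Seq[CommitRow], beginTime: String): Seq[CommitRow] =
  rows.filter(_.commitTime > beginTime)

val history = Seq(
  CommitRow("20210728100000", "a", 12.0),
  CommitRow("20210728110000", "a", 35.0),
  CommitRow("20210728120000", "b", 22.0)
)
val changed = incrementalSince(history, "20210728100000") // the two later commits
```

This exclusive lower bound is why passing the second-to-last commit as beginTime, as the snippet above does, surfaces only the latest changes.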

Point in time query#

Let's look at how to query data as of a specific time. The specific time can be represented by pointing endTime to a specific commit time and beginTime to "000" (denoting the earliest possible commit time).

// spark-shell
val beginTime = "000" // Represents all commits > this time.
val endTime = commits(commits.length - 2) // commit time we are interested in

// incrementally query data
val tripsPointInTimeDF = spark.read.format("hudi").
  option(QUERY_TYPE_OPT_KEY, QUERY_TYPE_INCREMENTAL_OPT_VAL).
  option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).
  option(END_INSTANTTIME_OPT_KEY, endTime).
  load(basePath)
tripsPointInTimeDF.createOrReplaceTempView("hudi_trips_point_in_time")
spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_point_in_time where fare > 20.0").show()
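Conceptually, the point in time query adds an upper bound to the incremental filter: records with beginTime < commit time <= endTime are returned. An illustrative plain-Scala sketch (names are assumptions, not Hudi internals):

```scala
// Illustrative stand-in for a committed record with its Hudi commit time.
case class TripCommit(commitTime: String, uuid: String, fare: Double)

// Exclusive lower bound, inclusive upper bound on lexicographically ordered
// instant strings (yyyyMMddHHmmss).
def pointInTime(rows: Seq[TripCommit], beginTime: String, endTime: String): Seq[TripCommit] =
  rows.filter(r => r.commitTime > beginTime && r.commitTime <= endTime)

val log = Seq(
  TripCommit("20210728100000", "a", 12.0),
  TripCommit("20210728110000", "a", 35.0),
  TripCommit("20210728120000", "b", 22.0)
)
val asOf = pointInTime(log, "000", "20210728110000") // first two commits only
```

Setting beginTime to "000" makes the lower bound vacuous, so the query effectively reads the table as it stood at endTime.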

Delete data#

Delete records for the HoodieKeys passed in.
// spark-shell
// fetch total records count
spark.sql("select uuid, partitionpath from hudi_trips_snapshot").count()
// fetch two records to be deleted
val ds = spark.sql("select uuid, partitionpath from hudi_trips_snapshot").limit(2)

// issue deletes
val deletes = dataGen.generateDeletes(ds.collectAsList())
val df = spark.read.json(spark.sparkContext.parallelize(deletes, 2))

df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(OPERATION_OPT_KEY, "delete").
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Append).
  save(basePath)

// run the same read query as above.
val roAfterDeleteViewDF = spark.
  read.
  format("hudi").
  load(basePath)

roAfterDeleteViewDF.createOrReplaceTempView("hudi_trips_snapshot")
// fetch should return (total - 2) records
spark.sql("select uuid, partitionpath from hudi_trips_snapshot").count()
note

Only Append mode is supported for the delete operation.

See the deletion section of the writing data page for more details.
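Conceptually, a delete removes every record whose (record key, partition path) pair, i.e. its HoodieKey, matches one of the delete records. A minimal plain-Scala sketch (the type names here are illustrative stand-ins, not Hudi's classes):

```scala
// Illustrative stand-in for Hudi's HoodieKey: record key plus partition path.
case class HoodieKeyLike(recordKey: String, partitionPath: String)

// Drop every snapshot entry whose key appears in the delete set.
def applyDeletes(snapshot: Map[HoodieKeyLike, Double], deletes: Set[HoodieKeyLike]): Map[HoodieKeyLike, Double] =
  snapshot -- deletes

val snapshot = Map(
  HoodieKeyLike("u1", "americas/brazil/sao_paulo") -> 10.0,
  HoodieKeyLike("u2", "americas/brazil/sao_paulo") -> 20.0,
  HoodieKeyLike("u3", "asia/india/chennai")        -> 30.0
)
// Deleting two keys, as in the quickstart, leaves (total - 2) records.
val remaining = applyDeletes(snapshot, Set(
  HoodieKeyLike("u1", "americas/brazil/sao_paulo"),
  HoodieKeyLike("u2", "americas/brazil/sao_paulo")
))
```

This mirrors the count check above: after deleting two keys, the snapshot query returns two fewer records.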

Insert Overwrite Table#

Generate some new trips and overwrite the table logically at the Hudi metadata level. The Hudi cleaner will eventually clean up the previous table snapshot's file groups. This can be faster than deleting the old table and recreating it in Overwrite mode.

// spark-shell
spark.
  read.format("hudi").
  load(basePath).
  select("uuid", "partitionpath").
  show(10, false)

val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(OPERATION_OPT_KEY, "insert_overwrite_table").
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Append).
  save(basePath)

// Should have different keys now, from query before.
spark.
  read.format("hudi").
  load(basePath).
  select("uuid", "partitionpath").
  show(10, false)

Insert Overwrite#

Generate some new trips and overwrite all the partitions that are present in the input. This operation can be faster than upsert for batch ETL jobs that recompute entire target partitions at once (as opposed to incrementally updating the target tables), because we are able to completely bypass the indexing, precombining and other repartitioning steps in the upsert write path.

// spark-shell
spark.
  read.format("hudi").
  load(basePath).
  select("uuid", "partitionpath").
  sort("partitionpath", "uuid").
  show(100, false)

val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.
  read.json(spark.sparkContext.parallelize(inserts, 2)).
  filter("partitionpath = 'americas/united_states/san_francisco'")
df.write.format("hudi").
  options(getQuickstartWriteConfigs).
  option(OPERATION_OPT_KEY, "insert_overwrite").
  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
  option(TABLE_NAME, tableName).
  mode(Append).
  save(basePath)

// Should have different keys now for San Francisco alone, from query before.
spark.
  read.format("hudi").
  load(basePath).
  select("uuid", "partitionpath").
  sort("partitionpath", "uuid").
  show(100, false)
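Conceptually, insert_overwrite replaces only the partitions that appear in the incoming batch and leaves every other partition untouched. An illustrative plain-Scala sketch of that semantics (the map representation is an assumption for illustration, not how Hudi stores data):

```scala
// table and incoming are maps from partitionpath to the uuids in that partition.
def insertOverwrite(
    table: Map[String, Set[String]],
    incoming: Map[String, Set[String]]): Map[String, Set[String]] =
  table ++ incoming // partitions present in `incoming` are replaced wholesale

val before = Map(
  "americas/united_states/san_francisco" -> Set("u1", "u2"),
  "asia/india/chennai"                   -> Set("u3")
)
// The filtered batch above touches only the San Francisco partition.
val after = insertOverwrite(before, Map(
  "americas/united_states/san_francisco" -> Set("u9")
))
```

This is why, in the query after the write, only San Francisco shows new keys while the other partitions are unchanged.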

More Spark SQL Commands#

AlterTable#

Syntax

-- Alter table name
ALTER TABLE oldTableName RENAME TO newTableName

-- Alter table add columns
ALTER TABLE tableIdentifier ADD COLUMNS(colAndType (,colAndType)*)

-- Alter table column type
ALTER TABLE tableIdentifier CHANGE COLUMN colName colName colType

Examples

alter table h0 rename to h0_1;
alter table h0_1 add columns(ext0 string);
alter table h0_1 change column id id bigint;

Use set command#

You can use the set command to set any custom Hudi config, which will apply for the whole Spark session scope.

set hoodie.insert.shuffle.parallelism = 100;
set hoodie.upsert.shuffle.parallelism = 100;
set hoodie.delete.shuffle.parallelism = 100;

Set with table options#

You can also set configs with table options when creating a table. These apply for the table scope only and override configs set by the SET command.

create table if not exists h3(
  id bigint,
  name string,
  price double
) using hudi
options (
  primaryKey = 'id',
  type = 'mor',
  ${hoodie.config.key1} = '${hoodie.config.value1}',
  ${hoodie.config.key2} = '${hoodie.config.value2}',
  ....
);

e.g.

create table if not exists h3(
  id bigint,
  name string,
  price double
) using hudi
options (
  primaryKey = 'id',
  type = 'mor',
  hoodie.cleaner.fileversions.retained = '20',
  hoodie.keep.max.commits = '20'
);
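The precedence between the SET command and table options can be sketched as a simple map merge, where the table option wins (an illustrative plain-Scala sketch; the config values shown are example inputs, not Hudi defaults):

```scala
// Session-level configs set via the SET command (assumed example values).
val sessionConf  = Map("hoodie.upsert.shuffle.parallelism" -> "100")
// Table-scoped options from the CREATE TABLE statement.
val tableOptions = Map("hoodie.upsert.shuffle.parallelism" -> "16")

// Later entries in `++` override earlier ones, so table options take precedence.
val effective = sessionConf ++ tableOptions
```

Writes to that table would use parallelism 16 in this example, even though the session-level SET asked for 100.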

You can also alter the write config for a table using ALTER SERDEPROPERTIES.

e.g.

alter table h3 set serdeproperties (hoodie.keep.max.commits = '10')

Where to go from here?#

You can also do the quickstart by building Hudi yourself, and using --jars <path to hudi_code>/packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.1?-*.*.*-SNAPSHOT.jar in the spark-shell command above instead of --packages org.apache.hudi:hudi-spark3-bundle_2.12:0.9.0. Hudi also supports Scala 2.12; refer to build with Scala 2.12 for more info.

Also, we used Spark here to showcase the capabilities of Hudi. However, Hudi supports multiple table types and query types, and Hudi tables can be queried from engines like Hive, Spark, Presto and more. We have put together a demo video that showcases all of this on a Docker-based setup with all dependent systems running locally. We recommend you replicate the same setup and run the demo yourself by following the steps here to get a taste for it. Also, if you are looking for ways to migrate your existing data to Hudi, refer to the migration guide.