
Flink Tuning Guide

Global Configurations

When using Flink, you can set global configurations in $FLINK_HOME/conf/flink-conf.yaml.

Parallelism

| Option Name | Default | Type | Description |
| --- | --- | --- | --- |
| taskmanager.numberOfTaskSlots | 1 | Integer | The number of parallel operator or user function instances that a single TaskManager can run. We recommend setting this value > 4; the actual value should be set according to the amount of data |
| parallelism.default | 1 | Integer | The default parallelism used when no parallelism is specified anywhere (default: 1). For example, if write.bucket_assign.tasks is not set, this value is used |
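As a sketch, the two options above could be set in flink-conf.yaml like this (the concrete values are illustrative and should be tuned to your data volume):

```yaml
# $FLINK_HOME/conf/flink-conf.yaml
taskmanager.numberOfTaskSlots: 8   # recommended > 4; tune to the amount of data
parallelism.default: 8             # fallback parallelism, e.g. for write.bucket_assign.tasks
```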

Memory

| Option Name | Default | Type | Description |
| --- | --- | --- | --- |
| jobmanager.memory.process.size | (none) | MemorySize | Total process memory size for the JobManager. This includes all the memory that a JobManager JVM process consumes, consisting of Total Flink Memory, JVM Metaspace, and JVM Overhead |
| taskmanager.memory.task.heap.size | (none) | MemorySize | Task heap memory size for TaskExecutors. This is the JVM heap memory reserved for the write cache |
| taskmanager.memory.managed.size | (none) | MemorySize | Managed memory size for TaskExecutors. This is the off-heap memory managed by the memory manager, reserved for sorting and the RocksDB state backend. If you choose RocksDB as the state backend, you need to set this memory |
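A possible flink-conf.yaml sketch for these memory options (the sizes are placeholders, not universal recommendations):

```yaml
jobmanager.memory.process.size: 2g      # total JobManager process memory
taskmanager.memory.task.heap.size: 4g   # JVM heap reserved for the write cache
taskmanager.memory.managed.size: 2g     # off-heap; needed when using the RocksDB state backend
```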

Checkpoint

| Option Name | Default | Type | Description |
| --- | --- | --- | --- |
| execution.checkpointing.interval | (none) | Duration | The checkpoint interval, e.g. execution.checkpointing.interval = 150000ms (150000 ms = 2.5 min). Configuring this parameter enables checkpointing |
| state.backend | (none) | String | The state backend used to store state. We recommend rocksdb: state.backend: rocksdb |
| state.backend.rocksdb.localdir | (none) | String | The local directory (on the TaskManager) where RocksDB puts its files |
| state.checkpoints.dir | (none) | String | The default directory for storing checkpoint data files and metadata in a Flink-supported filesystem. The storage path must be accessible from all participating processes/nodes (i.e. all TaskManagers and JobManagers), e.g. an HDFS or OSS path |
| state.backend.incremental | false | Boolean | Whether the state backend should create incremental checkpoints, if possible. For an incremental checkpoint, only a diff from the previous checkpoint is stored rather than the complete checkpoint state. If the state backend is set to rocksdb, we recommend turning this on |
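Putting the checkpoint options together, a flink-conf.yaml sketch might look like this (the HDFS path and local directory are placeholders for your environment):

```yaml
execution.checkpointing.interval: 150000        # 150000 ms = 2.5 min; enables checkpointing
state.backend: rocksdb
state.backend.rocksdb.localdir: /tmp/rocksdb    # local dir on each TaskManager
state.checkpoints.dir: hdfs://namenode:8020/flink/checkpoints  # must be reachable by all nodes
state.backend.incremental: true                 # recommended with rocksdb
```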

Table Options

Flink SQL jobs can be configured through options in the WITH clause. The actual datasource-level configs are listed below.

Memory

note

When optimizing memory, pay attention first to the memory configuration, the number of TaskManagers, and the parallelism of write tasks (write.tasks: 4). After confirming that each write task is allocated enough memory, try tuning these memory options.

| Option Name | Description | Default | Remarks |
| --- | --- | --- | --- |
| write.task.max.size | Maximum memory in MB for a write task; when the threshold is hit, the largest data bucket is flushed to avoid OOM. Default 1024MB | 1024 | The memory reserved for the write buffer is write.task.max.size - compaction.max_memory. When the total buffer of the write tasks reaches the threshold, the largest buffer in memory is flushed |
| write.batch.size | To improve write efficiency, the Flink write task caches data in a buffer per write bucket until the memory reaches the threshold, then flushes the data buffer. Default 64MB | 64 | Recommend using the default settings |
| write.log_block.size | Hudi's log writer does not flush data immediately after receiving it; it flushes to disk in units of LogBlocks. Before a LogBlock reaches the threshold, records are buffered in the writer as serialized bytes. Default 128MB | 128 | Recommend using the default settings |
| write.merge.max_memory | If the write type is COPY_ON_WRITE, Hudi merges the incremental data with the base file data. The incremental data is cached and spilled to disk; this threshold controls the max heap size that can be used. Default 100MB | 100 | Recommend using the default settings |
| compaction.max_memory | Same as write.merge.max_memory, but during compaction. Default 100MB | 100 | For online compaction, this can be turned up when resources are sufficient, e.g. to 1024MB |
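For illustration, the memory options above are passed in the WITH clause of a Flink SQL sink; the table name, schema, and path below are placeholders:

```sql
CREATE TABLE hudi_sink (
  uuid STRING PRIMARY KEY NOT ENFORCED,
  name STRING,
  ts TIMESTAMP(3)
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://namenode:8020/tables/hudi_sink',
  'write.task.max.size' = '2048',    -- MB; write buffer = write.task.max.size - compaction.max_memory
  'compaction.max_memory' = '1024'   -- MB; raised here assuming sufficient resources
);
```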

Parallelism

| Option Name | Description | Default | Remarks |
| --- | --- | --- | --- |
| write.tasks | The parallelism of writer tasks. Each write task writes 1 to N buckets in sequence. Default 4 | 4 | Increasing the parallelism has no effect on the number of small files |
| write.bucket_assign.tasks | The parallelism of bucket assigner operators. No default value; uses Flink parallelism.default | parallelism.default | Increasing the parallelism also increases the number of buckets, and thus the number of small files (small buckets) |
| write.index_bootstrap.tasks | The parallelism of index bootstrap. Increasing the parallelism can speed up the bootstrap stage. The bootstrap stage blocks checkpointing, so it may be necessary to allow more checkpoint failures. Defaults to Flink parallelism.default | parallelism.default | Only takes effect when index.bootstrap.enabled is true |
| read.tasks | The parallelism of read operators (batch and stream). Default 4 | 4 | -- |
| compaction.tasks | The parallelism of online compaction. Default 4 | 4 | Online compaction occupies the resources of the write task; offline compaction is recommended |
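The parallelism options are plain WITH-clause entries as well. This fragment (part of a Hudi sink's CREATE TABLE ... WITH (...), as elsewhere in this guide) is only a sketch:

```sql
-- fragment of a Hudi sink WITH (...) clause; values are illustrative
'write.tasks' = '4',                 -- more writer tasks does not mean more small files
'write.bucket_assign.tasks' = '4',   -- more bucket assigners = more (small) buckets
'read.tasks' = '4',
'compaction.tasks' = '4'             -- online compaction; offline compaction is preferred
```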

Compaction

note

These are options only for online compaction.

note

Turn off online compaction by setting compaction.async.enabled = false, but we still recommend turning on compaction.schedule.enabled for the writing job; you can then execute the compaction plan with offline compaction.

| Option Name | Description | Default | Remarks |
| --- | --- | --- | --- |
| compaction.schedule.enabled | Whether to generate a compaction plan periodically | true | Recommend turning it on, even if compaction.async.enabled = false |
| compaction.async.enabled | Async compaction, enabled by default for MOR | true | Turn off online compaction by turning off this option |
| compaction.trigger.strategy | Strategy to trigger compaction | num_commits | Options: num_commits (trigger compaction when N delta commits are reached); time_elapsed (trigger when time elapsed > N seconds since the last compaction); num_and_time (trigger when both NUM_COMMITS and TIME_ELAPSED are satisfied); num_or_time (trigger when either NUM_COMMITS or TIME_ELAPSED is satisfied) |
| compaction.delta_commits | Max delta commits needed to trigger compaction, default 5 commits | 5 | -- |
| compaction.delta_seconds | Max delta seconds needed to trigger compaction, default 1 hour | 3600 | -- |
| compaction.max_memory | Max memory in MB for the compaction spillable map, default 100MB | 100 | If you have sufficient resources, recommend adjusting to 1024MB |
| compaction.target_io | Target IO per compaction (both read and write), default 500GB | 512000 | -- |
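As an example, an online-compaction configuration using the time-based trigger might look like the following fragment (the values are illustrative, not recommendations):

```sql
-- fragment of a Hudi sink WITH (...) clause
'compaction.async.enabled' = 'true',
'compaction.trigger.strategy' = 'time_elapsed',  -- compact when elapsed time exceeds the threshold
'compaction.delta_seconds' = '1800',             -- every 30 minutes
'compaction.max_memory' = '1024'                 -- MB; assuming sufficient resources
```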

Memory Optimization

MOR

  1. Set the Flink state backend to rocksdb (the default in-memory state backend is very memory intensive).
  2. If there is enough memory, compaction.max_memory can be set larger (100MB by default; it can be adjusted to 1024MB).
  3. Pay attention to the memory the TaskManager allocates to each write task, to ensure each write task gets the desired memory size write.task.max.size. For example, a TaskManager with 4GB of memory running two StreamWriteFunction instances can allocate 2GB to each write task. Please reserve some buffer, because network buffers and other tasks on the TaskManager (such as BucketAssignFunction) also consume memory.
  4. Pay attention to the memory used by compaction. compaction.max_memory controls the maximum memory each task can use when compaction tasks read logs. compaction.tasks controls the parallelism of compaction tasks.
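The MOR checklist above can be sketched as table options (the state backend itself is set in flink-conf.yaml via state.backend: rocksdb; the sizes assume a TaskManager with enough headroom and are illustrative):

```sql
-- fragment of a Hudi sink WITH (...) clause for MERGE_ON_READ
'table.type' = 'MERGE_ON_READ',
'write.task.max.size' = '2048',    -- leave headroom: network buffers etc. also use memory
'compaction.max_memory' = '1024',  -- per-task memory for reading logs during compaction
'compaction.tasks' = '4'
```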

COW

  1. Set the Flink state backend to rocksdb (the default in-memory state backend is very memory intensive).
  2. Increase both write.task.max.size and write.merge.max_memory (1024MB and 100MB by default; adjust to 2048MB and 1024MB).
  3. Pay attention to the memory the TaskManager allocates to each write task, to ensure each write task gets the desired memory size write.task.max.size. For example, a TaskManager with 4GB of memory running two write tasks can allocate 2GB to each write task. Please reserve some buffer, because network buffers and other tasks on the TaskManager (such as BucketAssignFunction) also consume memory.
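The COW checklist can likewise be sketched as table options (sizes are illustrative; state.backend: rocksdb goes in flink-conf.yaml):

```sql
-- fragment of a Hudi sink WITH (...) clause for COPY_ON_WRITE
'table.type' = 'COPY_ON_WRITE',
'write.task.max.size' = '2048',     -- up from the 1024 default
'write.merge.max_memory' = '1024'   -- up from the 100 default
```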

Write Rate Limit

In existing data-synchronization pipelines, snapshot data and incremental data are first sent to Kafka and then written to Hudi in streaming mode by Flink. Directly consuming the snapshot data leads to problems such as high throughput and severe disorder (writing partitions randomly), which cause write performance degradation and throughput glitches. In this case, the write.rate.limit option can be turned on to ensure smooth writing.

Options

| Option Name | Required | Default | Remarks |
| --- | --- | --- | --- |
| write.rate.limit | false | 0 | Turned off by default (0 means no rate limit) |
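A hedged example of enabling the rate limit in the sink definition (the limit value is illustrative; 0, the default, disables the limit):

```sql
-- fragment of a Hudi sink WITH (...) clause
'write.rate.limit' = '30000'   -- records per second; 0 (the default) means no limit
```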