danny0405 commented on code in PR #5304:
URL: https://github.com/apache/hudi/pull/5304#discussion_r849030000
########## website/docs/faq.md:
##########
@@ -253,6 +253,24 @@
 Simplest way to run compaction on MOR dataset is to run the [compaction inline]( That said, for obvious reasons of not blocking ingesting for compaction, you may want to run it asynchronously as well. This can be done either via a separate [compaction job](https://github.com/apache/hudi/blob/master/hudi-utilities/src/main/java/org/apache/hudi/utilities/HoodieCompactor.java) that is scheduled by your workflow scheduler/notebook independently. If you are using delta streamer, then you can run in [continuous mode](https://github.com/apache/hudi/blob/d3edac4612bde2fa9deca9536801dbc48961fb95/hudi-utilities/src/main/java/org/apache/hudi/utilities/deltastreamer/HoodieDeltaStreamer.java#L241) where the ingestion and compaction are both managed concurrently in a single spark run time.
+
+### What options do I have for asynchronous compactions on MOR dataset?
+
+There are a couple of options depending on how you write to Hudi. But first, let us briefly understand what is involved. There are two parts to compaction:
+- Scheduling: In this step, Hudi scans the partitions and selects file slices to be compacted. A compaction plan is finally written to the Hudi timeline. Scheduling needs tighter coordination with other writers (regular ingestion is considered one of the writers). If scheduling is done inline with the ingestion job, this coordination is automatically taken care of. Otherwise, when scheduling happens asynchronously, a lock provider needs to be configured for this coordination among multiple writers.
+- Execution: A separate process reads the compaction plan and performs the compaction of file slices. Execution does not need the same level of coordination with other writers as the scheduling step and can easily be decoupled from the ingestion job.
+
+Depending on how you write to Hudi, these are the possible options currently:
+- DeltaStreamer:
+  - In continuous mode, asynchronous compaction is achieved by default.
+    Here scheduling is done inline by the ingestion job, and compaction execution is carried out asynchronously by a separate parallel thread.
+- Spark datasource:
+  - Async scheduling and async execution can be achieved by periodically running the Hudi Compactor Utility or the Hudi CLI. However, this needs a lock provider to be configured.
+  - Alternatively, to avoid the dependency on lock providers, scheduling alone can be done inline by the regular writer using the config `hoodie.compact.schedule.inline`. Compaction execution can then be done asynchronously by periodically triggering the Hudi Compactor Utility or the Hudi CLI.
+- Spark structured streaming:
+  - Compactions are scheduled and executed asynchronously inside the streaming job. Async compactions are enabled by default for structured streaming jobs on Merge-On-Read tables.
+- Flink:
+  - TODO

Review Comment:
   `compaction.schedule.enabled`: Schedule the compaction plan, enabled by default for MOR
   `compaction.async.enabled`: Async compaction, enabled by default for MOR
   `compaction.tasks`: Parallelism of the tasks that do the actual compaction, default is 4
   `compaction.trigger.strategy`: Strategy to trigger compaction. Options are:
   'num_commits': trigger compaction when N delta commits are reached;
   'time_elapsed': trigger compaction when the time elapsed since the last compaction exceeds N seconds;
   'num_and_time': trigger compaction when both NUM_COMMITS and TIME_ELAPSED are satisfied;
   'num_or_time': trigger compaction when either NUM_COMMITS or TIME_ELAPSED is satisfied.
   Default is 'num_commits'.
   `compaction.delta_commits`: Max delta commits needed to trigger compaction, default 5 commits
   `compaction.delta_seconds`: Max delta seconds needed to trigger compaction, default 1 hour
   `compaction.timeout.seconds`: Max timeout in seconds for online compaction to roll back, default 20 minutes
   `compaction.max_memory`: Max memory in MB for the compaction spillable map, default 100 MB
   `compaction.target_io`: Target IO per compaction (both read and write), default 500 GB
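To make the review comment concrete, here is a minimal Flink SQL sketch that sets those compaction options on a MOR table. The table name, schema, and path are made up for illustration; only the `compaction.*` option keys (and their stated defaults) come from the comment above:

```sql
-- Hypothetical MOR table; compaction.* keys and defaults are from the review comment.
CREATE TABLE hudi_mor_demo (
  uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
  name VARCHAR(20),
  ts   TIMESTAMP(3)
) WITH (
  'connector' = 'hudi',
  'path' = 'file:///tmp/hudi_mor_demo',            -- illustrative path
  'table.type' = 'MERGE_ON_READ',
  'compaction.schedule.enabled' = 'true',          -- schedule compaction plans (default for MOR)
  'compaction.async.enabled' = 'true',             -- execute compaction asynchronously (default for MOR)
  'compaction.tasks' = '4',                        -- compaction parallelism (default)
  'compaction.trigger.strategy' = 'num_commits',   -- default trigger strategy
  'compaction.delta_commits' = '5'                 -- trigger after 5 delta commits (default)
);
```

Since the defaults already enable scheduling and async execution for MOR, a streaming write into such a table gets asynchronous compaction without any extra tuning.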
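Similarly, for the Spark datasource option discussed in the diff (schedule inline, execute asynchronously), a minimal sketch of the relevant write options. The table name and path are hypothetical; `hoodie.compact.schedule.inline` is from the diff above, and the remaining keys are standard Hudi write configs:

```python
# Sketch of Hudi Spark datasource options for "schedule inline, execute async".
# Table name and path are hypothetical; option keys are standard Hudi configs.
hudi_options = {
    "hoodie.table.name": "my_mor_table",                    # hypothetical table name
    "hoodie.datasource.write.table.type": "MERGE_ON_READ",
    "hoodie.compact.inline": "false",          # do not block ingestion on compaction execution
    "hoodie.compact.schedule.inline": "true",  # but do write compaction plans inline
}

# With a SparkSession and a DataFrame `df`, the write itself would look like:
# df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/my_mor_table")
```

Compaction execution is then triggered periodically out-of-band via the Hudi Compactor Utility or the Hudi CLI, with no lock provider required since only the ingestion writer touches the timeline for scheduling.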
