danny0405 commented on code in PR #12514: URL: https://github.com/apache/hudi/pull/12514#discussion_r1893439757
########## rfc/rfc-83/rfc-83.md: ##########
@@ -0,0 +1,233 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements. See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License. You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+# RFC-83: Incremental Table Service
+
+## Proposers
+
+- @zhangyue19921010
+
+## Approvers
+- @danny0405
+- @yuzhaojing
+
+## Status
+
+JIRA: https://issues.apache.org/jira/browse/HUDI-8780
+
+## Abstract
+
+In Hudi, scheduling Compaction or Clustering by default scans every partition of the table.
+When a table has many historical partitions (for example, 640,000 in our production environment), this scan-and-plan step becomes very inefficient.
+For Flink it often leads to checkpoint timeouts and, in turn, data delays.
+Cleaning, by contrast, already has the ability to operate on incremental partitions only.
+
+This RFC draws on the design of Incremental Clean to generalize incremental-partition processing to the other table services, namely Clustering and Compaction.
+
+## Background
+
+`earliestInstantToRetain` in the clean plan metadata:
+
+HoodieCleanerPlan.avsc
+
+```text
+{
+  "namespace": "org.apache.hudi.avro.model",
+  "type": "record",
+  "name": "HoodieCleanerPlan",
+  "fields": [
+    {
+      "name": "earliestInstantToRetain",
+      "type": ["null", {
+        "type": "record",
+        "name": "HoodieActionInstant",
+        "fields": [
+          {
+            "name": "timestamp",
+            "type": "string"
+          },
+          {
+            "name": "action",
+            "type": "string"
+          },
+          {
+            "name": "state",
+            "type": "string"
+          }
+        ]
+      }],
+      "default": null
+    },
+    xxxx
+  ]
+}
+```
+
+`earliestCommitToRetain` in the clean commit metadata:
+
+HoodieCleanMetadata.avsc
+
+```text
+{
+  "namespace": "org.apache.hudi.avro.model",
+  "type": "record",
+  "name": "HoodieCleanMetadata",
+  "fields": [
+    xxxx,
+    {"name": "earliestCommitToRetain", "type": "string"},
+    xxxx
+  ]
+}
+```
+
+How to get incremental partitions during cleaning:
+
+**Note**
+`earliestCommitToRetain` is recorded in `HoodieCleanMetadata`.
+`newInstantToRetain` is computed from the clean configs, such as `hoodie.clean.commits.retained`, and is recorded in the clean metadata as the new `earliestCommitToRetain`.
+
+## Design And Implementation
+
+### Changes in TableService Metadata Schema
+
+Add a new column `earliestInstantToRetain` (default null) to the Clustering/Compaction plan, mirroring `earliestInstantToRetain` in the clean plan:
+
+```text
+    {
+      "name": "earliestInstantToRetain",
+      "type": ["null", {
+        "type": "record",
+        "name": "HoodieActionInstant",
+        "fields": [
+          {
+            "name": "timestamp",
+            "type": "string"
+          },
+          {
+            "name": "action",
+            "type": "string"
+          },
+          {
+            "name": "state",
+            "type": "string"
+          }
+        ]
+      }],
+      "default": null
+    },
+```
+
+We also need a unified interface/abstract class to control the plan behavior of the table services, including clustering and compaction.
+
+### Abstraction
+
+Use `PartitionBaseTableServicePlanStrategy` to control how partitions are fetched, how they are filtered, how the table service plan is generated, etc.
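As context, the cleaning-style incremental computation that this RFC generalizes — collect the partitions touched by commits between the previously recorded `earliestCommitToRetain` and the new instant to retain — can be sketched as follows. This is a self-contained illustration using simplified stand-in types, not actual Hudi classes:

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical, self-contained sketch -- simplified stand-in types,
// not actual Hudi classes.
class IncrementalPartitionSketch {

  /** A commit instant together with the partitions it touched. */
  static class Commit {
    final String timestamp;
    final Set<String> partitions;

    Commit(String timestamp, Set<String> partitions) {
      this.timestamp = timestamp;
      this.partitions = partitions;
    }
  }

  /**
   * Collect the partitions touched by commits in the window
   * (lastEarliestCommitToRetain, newInstantToRetain], i.e. the incremental
   * partitions since the previous run. Hudi instant timestamps are
   * lexicographically ordered, so plain string comparison suffices.
   */
  static Set<String> incrementalPartitions(List<Commit> timeline,
                                           String lastEarliestCommitToRetain,
                                           String newInstantToRetain) {
    Set<String> result = new TreeSet<>();
    for (Commit c : timeline) {
      if (c.timestamp.compareTo(lastEarliestCommitToRetain) > 0
          && c.timestamp.compareTo(newInstantToRetain) <= 0) {
        result.addAll(c.partitions);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    List<Commit> timeline = List.of(
        new Commit("20240101000000", Set.of("p=2024-01-01")),
        new Commit("20240102000000", Set.of("p=2024-01-02")),
        new Commit("20240103000000", Set.of("p=2024-01-02", "p=2024-01-03")));
    // Only commits strictly after the previously retained instant contribute.
    System.out.println(
        incrementalPartitions(timeline, "20240101000000", "20240103000000"));
    // prints [p=2024-01-02, p=2024-01-03]
  }
}
```

The point of the sketch is that only the commit window since the last recorded retain-instant needs to be scanned, instead of all partitions of the table.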
+
+Since we want different strategies to control partition acquisition, partition filtering, and plan generation,
+the first step is to converge that logic into a base strategy abstraction.
+
+```java
+package org.apache.hudi.table;
+
+import org.apache.hudi.common.engine.HoodieEngineContext;
+import org.apache.hudi.common.table.HoodieTableMetaClient;
+import org.apache.hudi.common.table.timeline.HoodieInstant;
+import org.apache.hudi.common.util.Option;
+import org.apache.hudi.config.HoodieWriteConfig;
+
+import java.io.IOException;
+import java.util.List;
+
+public abstract class PartitionBaseTableServicePlanStrategy<R, S> {
+
+  /**
+   * Generate a table service plan based on the given instant.
+   */
+  public abstract R generateTableServicePlan(Option<String> instant) throws IOException;
+
+  /**
+   * Generate a table service plan based on the given operations.
+   */
+  public abstract R generateTableServicePlan(List<S> operations) throws IOException;
+
+  /**
+   * Get the partition paths on which the current table service should operate.
+   */
+  public abstract List<String> getPartitionPaths(HoodieWriteConfig writeConfig, HoodieTableMetaClient metaClient, HoodieEngineContext engineContext);
+
+  /**
+   * Filter the given list of partition paths.
+   */
+  public abstract List<String> filterPartitionPaths(HoodieWriteConfig writeConfig, List<String> partitionPaths);
+
+  /**
+   * Get the incremental partitions between earliestCommitToRetain and the given instantToRetain.
+   */
+  public List<String> getIncrementalPartitionPaths(Option<HoodieInstant> instantToRetain) {
+    throw new UnsupportedOperationException("Not supported yet");
+  }
+
+  /**
+   * Returns the earliest commit to retain from the instant metadata.
+   */
+  public Option<HoodieInstant> getEarliestCommitToRetain() {
+    throw new UnsupportedOperationException("Not supported yet");

Review Comment:
   The `IncrementalPartitionAwareStrategy` should be a user interface IMO; the only API we expose to the user is the incremental partitions since the last table service. So the following logic should be removed:
   1. generate plan (should be the responsibility of the planner)
   2. getEarliestCommitToRetain (should be the responsibility of the planner within the plan executor)

   And because the implementations of compaction and clustering are quite different, maybe we just add two new interfaces: `IncrementalPartitionAwareCompactionStrategy` and `IncrementalPartitionAwareClusteringStrategy`

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
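For illustration only, the split the reviewer suggests could look roughly like this. The interface names come from the comment above; the method signature is hypothetical and is not an actual Hudi API:

```java
import java.util.List;

// Hypothetical sketch of the reviewer's suggested split: the strategy only
// answers "which partitions changed since the last table service", while plan
// generation and getEarliestCommitToRetain stay with the planner/executor.
interface IncrementalPartitionAwareStrategy {
  /**
   * The single user-facing contract: partition paths touched since the last
   * table service run identified by lastInstantToRetain (hypothetical signature).
   */
  List<String> getIncrementalPartitionPaths(String lastInstantToRetain);
}

/** Compaction flavour, free to add compaction-specific hooks later. */
interface IncrementalPartitionAwareCompactionStrategy extends IncrementalPartitionAwareStrategy {
}

/** Clustering flavour, since clustering planning differs from compaction. */
interface IncrementalPartitionAwareClusteringStrategy extends IncrementalPartitionAwareStrategy {
}
```

Keeping the strategy to a single query method keeps plan generation and retain-instant bookkeeping out of user-facing code, which is the separation of responsibilities the comment argues for.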
