danny0405 commented on code in PR #12514:
URL: https://github.com/apache/hudi/pull/12514#discussion_r1895245752


##########
rfc/rfc-83/rfc-83.md:
##########
@@ -0,0 +1,173 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+# RFC-83: Incremental Table Service
+
+## Proposers
+
+- @zhangyue19921010
+
+## Approvers
+- @danny0405
+- @yuzhaojing
+
+## Status
+
+JIRA: https://issues.apache.org/jira/browse/HUDI-8780
+
+## Abstract
+
+In Hudi, when scheduling Compaction and Clustering, the default behavior is to scan all partitions under the current table.
+When there are many historical partitions, such as the 640,000 in our production environment, this scanning and planning operation becomes very inefficient.
+For Flink, it often leads to checkpoint timeouts, resulting in data delays.
+Cleaning, by contrast, already supports processing incremental partitions.
+
+This RFC draws on the design of Incremental Clean to generalize the capability of processing incremental partitions to all table services, such as Clustering and Compaction.
+
+## Background
+
+`earliestInstantToRetain` in the clean plan metadata
+
+HoodieCleanerPlan.avsc
+
+```text
+{
+  "namespace": "org.apache.hudi.avro.model",
+  "type": "record",
+  "name": "HoodieCleanerPlan",
+  "fields": [
+    {
+      "name": "earliestInstantToRetain",
+      "type":["null", {
+        "type": "record",
+        "name": "HoodieActionInstant",
+        "fields": [
+          {
+            "name": "timestamp",
+            "type": "string"
+          },
+          {
+            "name": "action",
+            "type": "string"
+          },
+          {
+            "name": "state",
+            "type": "string"
+          }
+        ]
+      }],
+      "default" : null
+    },
+    xxxx
+  ]
+}
+```
+
+`earliestCommitToRetain` in the clean commit metadata
+
+HoodieCleanMetadata.avsc
+
+```text
+{"namespace": "org.apache.hudi.avro.model",
+ "type": "record",
+ "name": "HoodieCleanMetadata",
+ "fields": [
+     xxxx,
+     {"name": "earliestCommitToRetain", "type": "string"},
+     xxxx
+ ]
+}
+```
+How incremental partitions are obtained during cleaning:
+
+![cleanIncrementalpartitions.png](cleanIncrementalpartitions.png)
+
+**Note**
+- `earliestCommitToRetain` is recorded in `HoodieCleanMetadata`.
+- `newInstantToRetain` is computed based on clean configs such as `hoodie.clean.commits.retained` and will be recorded in the clean metadata as the new `earliestCommitToRetain`.
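The derivation above could be sketched roughly as follows. This is a simplified illustration with stand-in types, not actual Hudi code; the `Commit` record and the exact inclusive/exclusive boundary semantics are assumptions:

```java
import java.util.*;
import java.util.stream.*;

// Simplified sketch: derive the incremental partition set from the commits
// completed between the previously recorded earliestCommitToRetain and the
// newly computed instant to retain.
class IncrementalPartitionSketch {

  // Stand-in for a completed commit: its timestamp and the partitions it
  // touched (in Hudi this information comes from the commit metadata).
  record Commit(String timestamp, Set<String> partitions) {}

  // Collect partitions touched by commits in
  // [lastEarliestCommitToRetain, newInstantToRetain).
  static Set<String> incrementalPartitions(List<Commit> timeline,
                                           String lastEarliestCommitToRetain,
                                           String newInstantToRetain) {
    return timeline.stream()
        .filter(c -> c.timestamp().compareTo(lastEarliestCommitToRetain) >= 0
                  && c.timestamp().compareTo(newInstantToRetain) < 0)
        .flatMap(c -> c.partitions().stream())
        .collect(Collectors.toCollection(TreeSet::new));
  }
}
```

Only the commits inside that timestamp window are scanned, so planning cost scales with recent write activity rather than with the total partition count.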
+
+## Design And Implementation
+### Abstraction
+Introduce an `IncrementalPartitionAwareStrategy` interface to control how partitions are obtained and filtered, including fetching incremental partitions when necessary.
+
+```java
+package org.apache.hudi.table;
+
+import org.apache.hudi.common.engine.HoodieEngineContext;
+import org.apache.hudi.common.table.HoodieTableMetaClient;
+import org.apache.hudi.common.table.timeline.HoodieInstant;
+import org.apache.hudi.common.util.Option;
+import org.apache.hudi.config.HoodieWriteConfig;
+
+import java.util.List;
+
+public interface IncrementalPartitionAwareStrategy {
+
+  /**
+   * Get the partition paths to be processed by the current table service.
+   * @param writeConfig the write config
+   * @param metaClient the table meta client
+   * @param engineContext the engine context
+   * @return the partition paths to process
+   */
+  List<String> getPartitionPaths(HoodieWriteConfig writeConfig, HoodieTableMetaClient metaClient, HoodieEngineContext engineContext);
+
+  /**
+   * Filter the given list of partition paths.
+   * @param writeConfig the write config
+   * @param partitionPaths the partition paths to filter
+   * @return the filtered partition paths
+   */
+  List<String> filterPartitionPaths(HoodieWriteConfig writeConfig, List<String> partitionPaths);
+
+  /**
+   * Get the incremental partition paths written between the recorded
+   * earliestCommitToRetain and the current instant to retain.
+   * @param writeConfig the write config
+   * @param metaClient the table meta client
+   * @return the incremental partition paths
+   */
+  List<String> getIncrementalPartitionPaths(HoodieWriteConfig writeConfig, HoodieTableMetaClient metaClient);

Review Comment:
   Should we just add one interface `List<String> filterPartitionPaths(HoodieWriteConfig writeConfig, List<String> allPartitionPaths, List<String> incrementalPartitionPaths);` so that the strategy can decide which partitions are chosen?
   
   The `getXXXPartitionPaths` methods should belong to the scope of the executor/planner; let's move them out.
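   A minimal sketch of the consolidated interface suggested here. The `HoodieWriteConfig` parameter is dropped and the `PreferIncrementalStrategy` name is invented purely to keep the example self-contained and runnable:

```java
import java.util.List;

// Sketch of the consolidated strategy surface: the planner/executor computes
// both the full and the incremental partition lists, and the strategy only
// decides which partitions are chosen.
interface IncrementalPartitionAwareStrategy {
  List<String> filterPartitionPaths(List<String> allPartitionPaths,
                                    List<String> incrementalPartitionPaths);
}

// Hypothetical strategy: prefer incremental partitions when any exist,
// otherwise fall back to all partitions (e.g. a first run with no prior
// metadata from which to derive an incremental window).
class PreferIncrementalStrategy implements IncrementalPartitionAwareStrategy {
  @Override
  public List<String> filterPartitionPaths(List<String> allPartitionPaths,
                                           List<String> incrementalPartitionPaths) {
    return incrementalPartitionPaths.isEmpty() ? allPartitionPaths : incrementalPartitionPaths;
  }
}
```

   This keeps timeline scanning inside the planner and leaves the strategy a pure partition-selection decision.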



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to