bvaradar commented on a change in pull request #1853:
URL: https://github.com/apache/hudi/pull/1853#discussion_r461124646



##########
File path: hudi-common/src/main/avro/HoodieReplaceMetadata.avsc
##########
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+ /*
+  * Note that all 'replace' instants are read for every query
+  * So it is important to keep this small. Please be careful
+  * before tracking additional information in this file.
+  * This will be used for 'insert_overwrite' (RFC-18) and also 'clustering' (RFC-19)
+  */
+{"namespace": "org.apache.hudi.avro.model",
+ "type": "record",
+ "name": "HoodieReplaceMetadata",
+ "fields": [
+     {"name": "totalFilesReplaced", "type": "int"},
+     {"name": "command", "type": "string"},
+     {"name": "partitionMetadata", "type": {

Review comment:
       High-level question, to make sure we are all on the same page: is this metadata sufficient to achieve clustering? Do you foresee any changes that need to happen to this metadata to support clustering? The PR mentions that this is for both clustering and overwrite, hence the question.

##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/timeline/HoodieTimeline.java
##########
@@ -126,6 +129,13 @@
    */
   HoodieTimeline getCommitsAndCompactionTimeline();
 
+  /**
+   * Timeline to just include replace instants that have valid (commit/deltacommit) actions.
+   *
+   * @return
+   */
+  HoodieTimeline getCompletedAndReplaceTimeline();

Review comment:
       Does the returned timeline contain only replace instants? The naming is confusing to me. How about getValidReplaceTimeline, or anything else that reflects the intention?
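
To illustrate the suggested rename, here is a hypothetical sketch of what getValidReplaceTimeline might do, using a minimal stand-in for HoodieInstant rather than the real Hudi classes (both the method name and the filtering criteria are assumptions drawn from this discussion, not the PR's implementation):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ReplaceTimelineSketch {

  // Minimal stand-in for HoodieInstant: an action name plus completion state.
  record Instant(String action, boolean completed) {}

  static final String REPLACE_ACTION = "replace";

  // Hypothetical shape of the suggested getValidReplaceTimeline(): keep only
  // replace instants that have reached the completed state, dropping
  // requested/inflight ones and all non-replace actions.
  static List<Instant> getValidReplaceTimeline(List<Instant> timeline) {
    return timeline.stream()
        .filter(i -> REPLACE_ACTION.equals(i.action()) && i.completed())
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Instant> timeline = List.of(
        new Instant("commit", true),
        new Instant("replace", true),
        new Instant("replace", false));
    // Only the completed replace instant survives the filter.
    System.out.println(getValidReplaceTimeline(timeline).size()); // prints 1
  }
}
```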

##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/timeline/HoodieTimeline.java
##########
@@ -57,7 +58,7 @@
 
   String[] VALID_ACTIONS_IN_TIMELINE = {COMMIT_ACTION, DELTA_COMMIT_ACTION,
       CLEAN_ACTION, SAVEPOINT_ACTION, RESTORE_ACTION, ROLLBACK_ACTION,
-      COMPACTION_ACTION};
+      COMPACTION_ACTION, REPLACE_ACTION};

Review comment:
       same comment as above

##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/timeline/HoodieActiveTimeline.java
##########
@@ -65,7 +65,8 @@
       COMMIT_EXTENSION, INFLIGHT_COMMIT_EXTENSION, REQUESTED_COMMIT_EXTENSION, DELTA_COMMIT_EXTENSION,
       INFLIGHT_DELTA_COMMIT_EXTENSION, REQUESTED_DELTA_COMMIT_EXTENSION, SAVEPOINT_EXTENSION,
       INFLIGHT_SAVEPOINT_EXTENSION, CLEAN_EXTENSION, REQUESTED_CLEAN_EXTENSION, INFLIGHT_CLEAN_EXTENSION,
-      INFLIGHT_COMPACTION_EXTENSION, REQUESTED_COMPACTION_EXTENSION, INFLIGHT_RESTORE_EXTENSION, RESTORE_EXTENSION));
+      INFLIGHT_COMPACTION_EXTENSION, REQUESTED_COMPACTION_EXTENSION, INFLIGHT_RESTORE_EXTENSION, RESTORE_EXTENSION,

Review comment:
        It's better to avoid this for rollout purposes. If this PR lands before the next one and a release is cut in between, then we need to worry about the ordering of the rollout between readers and writers.
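
To illustrate the rollout concern with a hypothetical sketch (the extension strings loosely mirror the constants in the diff above, but the literal values and the matching logic are invented for illustration): a reader running the old code has no entry for the new replace extension in its valid-extensions set, so it cannot classify an instant file produced by an upgraded writer. That is why readers would need to be upgraded before writers start producing the new action:

```java
import java.util.Set;

public class RolloutOrderSketch {

  // Valid-extensions set as an OLD (pre-this-PR) reader would see it.
  // Literal extension strings are assumptions for illustration only.
  static final Set<String> OLD_READER_EXTENSIONS =
      Set.of(".commit", ".deltacommit", ".clean", ".savepoint",
             ".restore", ".rollback", ".compaction");

  // Whether an old reader recognizes a given instant file on the timeline.
  static boolean oldReaderAccepts(String instantFileName) {
    return OLD_READER_EXTENSIONS.stream().anyMatch(instantFileName::endsWith);
  }

  public static void main(String[] args) {
    // An instant written by an old writer is recognized, but a replace
    // instant from a NEW writer is not, so the old reader misbehaves.
    System.out.println(oldReaderAccepts("20200727120000.commit"));   // true
    System.out.println(oldReaderAccepts("20200727120000.replace"));  // false
  }
}
```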




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

