chenjunjiedada commented on a change in pull request #2841:
URL: https://github.com/apache/iceberg/pull/2841#discussion_r672787141



##########
File path: 
core/src/main/java/org/apache/iceberg/actions/RewriteDeleteStrategy.java
##########
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.actions;
+
+import java.util.Map;
+import org.apache.iceberg.DeleteFile;
+import org.apache.iceberg.Table;
+
+public interface RewriteDeleteStrategy {
+
+  /**
+   * Returns the table being modified by this rewrite strategy.
+   */
+  Table table();
+
+  /**
+   * Select the deletes to rewrite.
+   *
+   * @return an iterable of the original delete files to be replaced.
+   */
+  Iterable<DeleteFile> selectDeletes();

Review comment:
       The argument of `selectFilesToRewrite(Iterable<FileScanTask> dataFiles)` 
comes from `planFileGroups` and the filter logic. I think we could build 
similar logic by storing the iterable of `FileScanTask` as a field in the 
concrete `RewriteDeleteStrategy` class and grouping the tasks in the 
`rewriteDeletes()` API. As for the filter logic, we could add it later and 
apply it in the `selectDeletes()` API, as in the sketch below.
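
   A minimal sketch of that shape, assuming a hypothetical concrete class 
`BinPackRewriteDeleteStrategy` with a `deleteFilter` field (neither is part 
of this PR), where the scan tasks are stored on the strategy instead of 
being passed in:

```java
package org.apache.iceberg.actions;

import java.util.function.Predicate;
import java.util.stream.StreamSupport;
import org.apache.iceberg.DeleteFile;
import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.Table;

// Hypothetical concrete strategy, for illustration only.
public class BinPackRewriteDeleteStrategy implements RewriteDeleteStrategy {
  private final Table table;
  private final Iterable<FileScanTask> tasks;       // from planFileGroups-style planning
  private final Predicate<DeleteFile> deleteFilter; // filter logic, added later

  BinPackRewriteDeleteStrategy(Table table, Iterable<FileScanTask> tasks,
                               Predicate<DeleteFile> deleteFilter) {
    this.table = table;
    this.tasks = tasks;
    this.deleteFilter = deleteFilter;
  }

  @Override
  public Table table() {
    return table;
  }

  @Override
  public Iterable<DeleteFile> selectDeletes() {
    // Collect the delete files attached to the stored scan tasks and apply
    // the filter here, instead of taking the tasks as an argument.
    return () -> StreamSupport.stream(tasks.spliterator(), false)
        .flatMap(task -> task.deletes().stream())
        .filter(deleteFilter)
        .iterator();
  }
}
```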
   
   As for strategies, it depends on the scenario. In our case, which mostly 
uses HDFS as storage, we care most about small files: they hurt planning and 
read performance and add overhead on the NameNode. So the strategy we need 
most is to merge deletes, and we are planning two strategies at first: 
1) merge all position deletes; 2) read all equality deletes under a partition 
and convert them into one position delete. The first is essentially 
bin-packing, the second convert-and-bin-packing (see the sketch after this 
paragraph). In the future, we might also want to sort the merged position 
delete by file name and split it into pieces so that executors read only the 
delete contents that match their data files.
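
   A hedged sketch of how the two selections could differ, relying only on 
`DeleteFile.content()` to distinguish position and equality deletes; the 
class and method names are illustrative, not part of the PR:

```java
import java.util.stream.StreamSupport;
import org.apache.iceberg.DeleteFile;
import org.apache.iceberg.FileContent;

class DeleteSelections {
  // 1) bin-packing: select every position delete so they can be merged.
  static Iterable<DeleteFile> positionDeletes(Iterable<DeleteFile> candidates) {
    return () -> StreamSupport.stream(candidates.spliterator(), false)
        .filter(file -> file.content() == FileContent.POSITION_DELETES)
        .iterator();
  }

  // 2) convert-and-bin-packing: select every equality delete under the
  //    partition; the rewrite step would read them all and write back a
  //    single position delete.
  static Iterable<DeleteFile> equalityDeletes(Iterable<DeleteFile> candidates) {
    return () -> StreamSupport.stream(candidates.spliterator(), false)
        .filter(file -> file.content() == FileContent.EQUALITY_DELETES)
        .iterator();
  }
}
```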




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


