chenjunjiedada commented on code in PR #4703:
URL: https://github.com/apache/iceberg/pull/4703#discussion_r877178095


##########
api/src/main/java/org/apache/iceberg/RewriteFiles.java:
##########
@@ -84,4 +84,12 @@ RewriteFiles rewriteFiles(Set<DataFile> dataFilesToReplace, Set<DeleteFile> dele
    * @return this for method chaining
    */
   RewriteFiles validateFromSnapshot(long snapshotId);
+
+  /**
+   * Ignore position deletes in rewrite validation. A Flink upsert job only generates position deletes in the
+   * ongoing transaction, so it is not necessary to validate position deletes when rewriting.
+   *
+   * @return this for method chaining
+   */
+  RewriteFiles ignorePosDeletesInValidation();

Review Comment:
   Theoretically, Spark and Flink can produce deletes at the same time, but in practice users usually write with a single engine, for example using Flink for upsert/CDC writes and rewriting the deletes with Spark. So I think providing an option would be better, since users know how their tables are written.

