aokolnychyi commented on code in PR #4703:
URL: https://github.com/apache/iceberg/pull/4703#discussion_r886247045
##########
api/src/main/java/org/apache/iceberg/RewriteFiles.java:
##########
@@ -84,4 +84,12 @@ RewriteFiles rewriteFiles(Set<DataFile> dataFilesToReplace, Set<DeleteFile> dele
* @return this for method chaining
*/
RewriteFiles validateFromSnapshot(long snapshotId);
+
+ /**
+ * Ignore the position deletes in rewrite validation. Flink upsert job only
generates position deletes in the
+ * ongoing transaction, so it is not necessary to validate position deletes
when rewriting.
+ *
+ * @return this for method chaining
+ */
+ RewriteFiles ignorePosDeletesInValidation();
Review Comment:
I guess you are right that the first case is handled, since
`validateNoNewDeletesForDataFiles` checks only new delete files added after the
snapshot we used for reading.
Could you confirm that the compaction you are using sets `validateFromSnapshot`
correctly? If so, then the problem is that our delete matching logic may produce
false positives (it is currently limited to partition and file path min/max
filtering). In that case, your solution starts to make sense, but I am still
concerned that it is up to the user to decide whether it is safe to apply this
optimization, and that is not something we can rely on when multiple engines
interact with the table.
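For reference, here is a minimal sketch of the pattern I would expect the
compaction to follow, assuming the two-argument `rewriteFiles(Set<DataFile>, Set<DataFile>)`
overload and a hypothetical `commitRewrite` helper; the point is only that
validation must be pinned to the snapshot that was current when the input files
were read:

```java
import java.util.Set;
import org.apache.iceberg.DataFile;
import org.apache.iceberg.Table;

class CompactionValidationSketch {
  // Hypothetical helper: readSnapshotId must be the snapshot that was current when
  // filesToReplace were planned/read, so validation only considers deletes added later.
  static void commitRewrite(Table table, long readSnapshotId,
                            Set<DataFile> filesToReplace, Set<DataFile> filesToAdd) {
    table.newRewrite()
        .rewriteFiles(filesToReplace, filesToAdd)
        .validateFromSnapshot(readSnapshotId)
        .commit();
  }
}
```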
Is there any immediate optimization we can make in the delete matching logic
to reduce the number of false positives? Thoughts, anyone? Bloom filtering and
opening delete files when there are only a few of them are the solutions I can
think of right now.
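To illustrate the idea, here is a small self-contained sketch (using plain Guava
just for illustration; all paths and bounds are made up) of how min/max path
filtering can match a data file that is not actually referenced by a position
delete, while a Bloom filter built over the referenced paths would rule it out:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

class DeleteMatchingSketch {
  public static void main(String[] args) {
    // Paths actually referenced by a position delete file (hypothetical example).
    List<String> referencedPaths = List.of(
        "s3://bucket/data/00000-1.parquet",
        "s3://bucket/data/00009-1.parquet");

    // Min/max filtering keeps any data file whose path falls within the delete
    // file's lower/upper path bounds, so 00004-1.parquet is a false positive here.
    String candidate = "s3://bucket/data/00004-1.parquet";
    String lowerBound = "s3://bucket/data/00000-1.parquet";
    String upperBound = "s3://bucket/data/00009-1.parquet";
    boolean minMaxMatch =
        lowerBound.compareTo(candidate) <= 0 && candidate.compareTo(upperBound) <= 0;

    // A Bloom filter over referenced paths (built by opening small delete files,
    // or persisted with the delete file) could reject such false positives cheaply.
    BloomFilter<String> referenced = BloomFilter.create(
        Funnels.stringFunnel(StandardCharsets.UTF_8), referencedPaths.size(), 0.01);
    referencedPaths.forEach(referenced::put);
    boolean bloomMatch = referenced.mightContain(candidate);

    System.out.println("min/max match: " + minMaxMatch + ", bloom match: " + bloomMatch);
  }
}
```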
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.