rdblue commented on code in PR #15006:
URL: https://github.com/apache/iceberg/pull/15006#discussion_r2718943705
##########
core/src/main/java/org/apache/iceberg/MergingSnapshotProducer.java:
##########
@@ -87,7 +97,7 @@ abstract class MergingSnapshotProducer<ThisT> extends SnapshotProducer<ThisT> {
   private final Map<Integer, DataFileSet> newDataFilesBySpec = Maps.newHashMap();
   private Long newDataFilesDataSequenceNumber;
   private final Map<Integer, DeleteFileSet> newDeleteFilesBySpec = Maps.newHashMap();
-  private final Set<String> newDVRefs = Sets.newHashSet();
+  private final Map<String, DeleteFileSet> dvsByReferencedFile = Maps.newHashMap();
Review Comment:
That sounds reasonable to me, and it may be a good idea not to delete v2
deletes that come in, because we don't want this to be used for conversion.
Now that I'm looking at this behavior again, I'm curious why we use a
`DeleteFileSet`. It seems strange to me that we would automatically dedup
`DeleteFile` instances passed in through the API. Why not fail in that case,
since it is a caller bug? I can't think of a reason to commit the same
DV twice.
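To illustrate the alternative being suggested, here is a minimal, hypothetical sketch (not Iceberg's actual classes or API) of a set wrapper that throws on a duplicate add instead of silently deduplicating, treating the duplicate as a caller bug:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration only: a set that rejects duplicates loudly,
// rather than deduplicating them the way a plain Set (or DeleteFileSet) would.
class FailOnDuplicateSet<T> {
  private final Set<T> elements = new HashSet<>();

  void add(T element) {
    // Set.add returns false when the element is already present;
    // surface that as an error instead of ignoring it.
    if (!elements.add(element)) {
      throw new IllegalArgumentException("Duplicate element added: " + element);
    }
  }

  int size() {
    return elements.size();
  }
}

public class Demo {
  public static void main(String[] args) {
    FailOnDuplicateSet<String> dvs = new FailOnDuplicateSet<>();
    dvs.add("dv-1.puffin");
    try {
      // Committing the same DV twice would be a caller bug under this design.
      dvs.add("dv-1.puffin");
      throw new AssertionError("expected duplicate add to fail");
    } catch (IllegalArgumentException e) {
      System.out.println("rejected duplicate: " + e.getMessage());
    }
  }
}
```

The trade-off is that silent dedup hides double-commit bugs in callers, while failing fast surfaces them at the point of the mistake.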
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]