aokolnychyi commented on code in PR #8278:
URL: https://github.com/apache/iceberg/pull/8278#discussion_r1289520720


##########
core/src/main/java/org/apache/iceberg/DeleteFileIndex.java:
##########
@@ -160,6 +173,20 @@ DeleteFile[] forDataFile(long sequenceNumber, DataFile file) {
         .toArray(DeleteFile[]::new);
   }
 
+  private DeleteFile[] limitWithoutColumnStatsFiltering(
+      long sequenceNumber, DeleteFileGroup partitionDeletes) {
+
+    if (partitionDeletes == null) {
+      return globalDeletes.filter(sequenceNumber);
+    } else if (globalDeletes == null) {
+      return partitionDeletes.filter(sequenceNumber);
+    } else {
+      DeleteFile[] matchingGlobalDeletes = globalDeletes.filter(sequenceNumber);
+      DeleteFile[] matchingPartitionDeletes = partitionDeletes.filter(sequenceNumber);
+      return ObjectArrays.concat(matchingGlobalDeletes, matchingPartitionDeletes, DeleteFile.class);

Review Comment:
   I am not saying the stats filtering should be disabled by default or that it 
does not make sense in general. In some cases it will help; in others it will 
hurt. I want a flag to disable column stats filtering and rely only on 
partition pruning and sequence numbers, since the stats step is expensive and 
may be useless. That may be highly beneficial for position deletes generated by 
Spark, and in other cases where lower/upper bounds are effectively random.
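
   To make the trade-off concrete, here is a minimal, self-contained sketch of 
the cheap path being proposed: matching delete files against a data file by 
sequence number alone, with no lower/upper bound comparison. The `DeleteFile` 
record and `filterBySequence` method below are hypothetical stand-ins for 
illustration only, not Iceberg's `org.apache.iceberg.DeleteFile` or the actual 
`DeleteFileIndex` API.

```java
import java.util.Arrays;

public class SequenceFilterSketch {
  // Hypothetical stand-in: just a path and the sequence number the delete
  // file was committed at. The real Iceberg class carries much more.
  record DeleteFile(String path, long dataSequenceNumber) {}

  // Keep only deletes whose sequence number is at least the data file's
  // sequence number. This is the inexpensive check; no column stats
  // (lower/upper bounds) are consulted at all.
  static DeleteFile[] filterBySequence(DeleteFile[] deletes, long dataSeq) {
    return Arrays.stream(deletes)
        .filter(d -> d.dataSequenceNumber() >= dataSeq)
        .toArray(DeleteFile[]::new);
  }

  public static void main(String[] args) {
    DeleteFile[] deletes = {
      new DeleteFile("d1", 3), new DeleteFile("d2", 5), new DeleteFile("d3", 7)
    };
    // A data file written at sequence 5 is matched by the deletes at 5 and 7.
    System.out.println(filterBySequence(deletes, 5).length); // prints 2
  }
}
```

   When bounds are random (e.g. position deletes keyed by file path), the extra 
stats comparison rarely prunes anything beyond what the sequence-number check 
above already does, which is why a flag to skip it can pay off.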



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

