capistrant commented on code in PR #18922:
URL: https://github.com/apache/druid/pull/18922#discussion_r2737963465


##########
multi-stage-query/src/main/java/org/apache/druid/msq/indexing/IndexerTableInputSpecSlicer.java:
##########
@@ -157,24 +161,25 @@ private Set<DataSegmentWithInterval> getPrunedSegmentSet(final TableInputSpec ta
   {
     final TimelineLookup<String, DataSegment> timeline =
         getTimeline(tableInputSpec.getDataSource(), tableInputSpec.getIntervals());
+    final Predicate<SegmentDescriptor> segmentFilter = tableInputSpec.getSegments() != null
+                                                       ? Set.copyOf(tableInputSpec.getSegments())::contains
+                                                       : Predicates.alwaysTrue();
 
     if (timeline == null) {
       return Collections.emptySet();
     } else {
+      // A segment can overlap with multiple search intervals, or even outside search intervals.
+      // The same segment can appear multiple times or 0 time, but each is also bounded within the overlapped search interval

Review Comment:
   I don't think I fully follow this comment.
   
   > or even outside search intervals
   
   A segment outside search intervals: is that referring to a segment that is in `tableInputSpec.getSegments()` but ends up not overlapping any search interval, and thus is never found?
   
   > The same segment can appear multiple times or 0 time
   
   Same question for the `0 time` part: is that just saying that even though a segment was in `tableInputSpec.getSegments()`, it may not appear in the iterator at all?
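   
   To make sure I follow, here is a minimal self-contained sketch of my reading, with a hypothetical `Interval` record standing in for Joda-Time intervals and plain `String` ids standing in for `SegmentDescriptor` (the real code goes through the timeline lookup rather than a flat loop over segments):
   
   ```java
   import java.util.List;
   import java.util.Set;
   import java.util.function.Predicate;
   
   public class SegmentOverlapSketch
   {
     // Hypothetical stand-in for a segment or search interval on a long timeline.
     record Interval(long start, long end)
     {
       boolean overlaps(Interval other)
       {
         return start < other.end && other.start < end;
       }
   
       Interval clipTo(Interval other)
       {
         return new Interval(Math.max(start, other.start), Math.min(end, other.end));
       }
     }
   
     public static void main(String[] args)
     {
       // Same shape as the segmentFilter in the diff: a null list accepts every
       // segment, while a non-null list is copied into a Set once and its
       // contains() becomes the predicate (the diff uses Guava's
       // Predicates.alwaysTrue(); id -> true is the plain-JDK equivalent).
       List<String> requestedSegments = List.of("seg-A", "seg-B");
       Predicate<String> segmentFilter = requestedSegments != null
                                         ? Set.copyOf(requestedSegments)::contains
                                         : id -> true;
   
       List<Interval> searchIntervals = List.of(new Interval(2, 4), new Interval(6, 8));
       Interval segA = new Interval(0, 10);  // overlaps both search intervals
       Interval segB = new Interval(20, 30); // overlaps neither, despite being requested
   
       // My reading: segA is emitted twice (once per overlapped search interval,
       // each occurrence clipped to that interval), while segB is emitted zero
       // times even though it passes segmentFilter.
       for (Interval search : searchIntervals) {
         if (segmentFilter.test("seg-A") && segA.overlaps(search)) {
           System.out.println("segA -> " + segA.clipTo(search));
         }
         if (segmentFilter.test("seg-B") && segB.overlaps(search)) {
           System.out.println("segB -> " + segB.clipTo(search));
         }
       }
       // Prints:
       // segA -> Interval[start=2, end=4]
       // segA -> Interval[start=6, end=8]
     }
   }
   ```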


