zachjsh commented on code in PR #15305:
URL: https://github.com/apache/druid/pull/15305#discussion_r1382227166


##########
server/src/main/java/org/apache/druid/metadata/SqlSegmentsMetadataQuery.java:
##########
@@ -345,6 +353,42 @@ private CloseableIterator<DataSegment> retrieveSegments(
       @Nullable final Integer limit
   )
   {
+    if (intervals.isEmpty()) {
+      return CloseableIterators.withEmptyBaggage(
+          retrieveSegmentsInIntervalsBatch(dataSource, intervals, matchMode, used, limit).getIterator()
+      );
+    } else {
+      final List<List<Interval>> intervalsLists = Lists.partition(new ArrayList<>(intervals), MAX_INTERVALS_PER_BATCH);
+      final List<Iterator<DataSegment>> resultingIterators = new ArrayList<>();
+      int totalFetched = 0;
+
+      for (final List<Interval> intervalList : intervalsLists) {
+        IteratorWithCount<DataSegment> resultIterator = retrieveSegmentsInIntervalsBatch(dataSource, intervalList, matchMode, used, limit);
+        resultingIterators.add(resultIterator.getIterator());
+        totalFetched += resultIterator.getCount();
+
+        if (null != limit && totalFetched >= limit) {
+          break;

Review Comment:
   If a user adds a version of this function that queries multiple intervals with a limit, I think we can return an iterator with more segments than the limit specifies, breaking the implied contract of the function. Can we cap the returned iterator at the limit? There is logic in `KillUnusedSegmentsTask` that relies heavily on the limit being the absolute maximum number of segments returned. Alternatively, maybe we could only add an iterator to `resultingIterators` if its size would not cause the limit to be exceeded?
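   A minimal sketch of the suggested cap, using plain JDK iterators rather than Druid's `CloseableIterator`/`IteratorWithCount` types (the wrapper class `CappedIterator` and the sample data below are hypothetical, for illustration only): concatenate the per-batch iterators, then wrap the result so it yields at most `limit` elements regardless of how much the last batch over-fetched.

   ```java
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.Iterator;
   import java.util.List;
   import java.util.stream.Stream;

   public class CappedIterator
   {
     // Wraps a delegate iterator so it stops after at most `limit` elements,
     // even if the delegate still has more to offer.
     static <T> Iterator<T> limit(final Iterator<T> delegate, final int limit)
     {
       return new Iterator<T>()
       {
         private int remaining = limit;

         @Override
         public boolean hasNext()
         {
           return remaining > 0 && delegate.hasNext();
         }

         @Override
         public T next()
         {
           remaining--;
           return delegate.next();
         }
       };
     }

     public static void main(String[] args)
     {
       // Two "batches" whose combined size (6) exceeds the limit (4).
       List<Integer> batch1 = Arrays.asList(1, 2, 3);
       List<Integer> batch2 = Arrays.asList(4, 5, 6);

       Iterator<Integer> capped = limit(
           Stream.concat(batch1.stream(), batch2.stream()).iterator(),
           4
       );

       List<Integer> out = new ArrayList<>();
       capped.forEachRemaining(out::add);
       System.out.println(out); // the cap truncates the second batch: [1, 2, 3, 4]
     }
   }
   ```

   In the PR's loop, the same effect could be achieved by wrapping the final concatenated iterator with such a cap, so the `totalFetched >= limit` early exit only controls how many batches are fetched, while the cap enforces the contract on what is actually returned.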



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

