kfaraz commented on code in PR #13967:
URL: https://github.com/apache/druid/pull/13967#discussion_r1146042780


##########
sql/src/test/java/org/apache/druid/sql/calcite/planner/SegmentMetadataCacheConfigTest.java:
##########
@@ -19,6 +19,7 @@
 
 package org.apache.druid.sql.calcite.planner;
 
+import org.apache.druid.client.SegmentMetadataCacheConfig;

Review Comment:
   This test should move to the same package as the class itself.



##########
sql/src/main/java/org/apache/druid/sql/calcite/schema/SystemSchema.java:
##########
@@ -1042,7 +1035,7 @@ private static JsonParserIterator<SupervisorStatus> getSupervisors(
       ObjectMapper jsonMapper
   )
   {
-    return getThingsFromLeaderNode(
+    return MetadataSegmentView.getThingsFromLeaderNode(

Review Comment:
   It doesn't make sense to have `SystemSchema` call a static method on 
`MetadataSegmentView`. If you want to reuse this method, maybe consider putting 
it in some other existing utility class. You could probably even have it as a 
non-static method on the `DruidLeaderClient` itself.



##########
processing/src/main/java/org/apache/druid/timeline/VersionedIntervalTimeline.java:
##########
@@ -144,6 +144,19 @@ public Collection<ObjectType> iterateAllObjects()
     );
   }
 
+  public Collection<PartitionChunkEntry<VersionType, ObjectType>> getAllPartitionChunkEntry() {

Review Comment:
   Nit: `getAllPartitionChunkEntries`



##########
server/src/main/java/org/apache/druid/client/BrokerServerView.java:
##########
@@ -334,22 +408,7 @@ private void serverRemovedSegment(DruidServerMetadata server, DataSegment segmen
       }
 
       if (selector.isEmpty()) {
-        VersionedIntervalTimeline<String, ServerSelector> timeline = timelines.get(segment.getDataSource());
         selectors.remove(segmentId);
-
-        final PartitionChunk<ServerSelector> removedPartition = timeline.remove(
-            segment.getInterval(), segment.getVersion(), segment.getShardSpec().createChunk(selector)
-        );
-
-        if (removedPartition == null) {

Review Comment:
   Removal of segments from the timeline has now been moved to the new method `polledSegmentsFromCoordinator`. We should move this logging and callback execution there instead of deleting it entirely.
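To make the suggestion concrete, here is a rough sketch of keeping both the warning and the callback execution when a segment's last server goes away. The types are simplified stand-ins, not Druid's actual `VersionedIntervalTimeline` or `TimelineCallback` classes:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch only: "timeline" is a plain map standing in for
// VersionedIntervalTimeline, and Callback stands in for Druid's callbacks.
class TimelineRemovalSketch
{
  interface Callback
  {
    void segmentRemoved(String segmentId);
  }

  static void removeFromTimeline(Map<String, String> timeline, String segmentId, List<Callback> callbacks)
  {
    String removedPartition = timeline.remove(segmentId);
    if (removedPartition == null) {
      // Preserve the original warning instead of silently dropping it.
      System.err.println("Asked to remove timeline entry for unknown segment: " + segmentId);
    } else {
      // Preserve the segment-removed callback execution as well.
      for (Callback callback : callbacks) {
        callback.segmentRemoved(segmentId);
      }
    }
  }
}
```

The point is only that the warn-on-missing and notify-on-removed branches survive the refactor, wherever the removal logic ends up living.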



##########
server/src/main/java/org/apache/druid/client/CachingClusteredClient.java:
##########
@@ -462,6 +462,10 @@ private Set<SegmentServerSelector> computeSegmentsToQuery(
               chunk.getChunkNumber()
           );
           segments.add(new SegmentServerSelector(server, segment));
+          if (server.isEmpty()) {
+            // fail the query or alternatively set some headers
+            throw new RuntimeException("Query failed, incomplete data");

Review Comment:
   Maybe throw a `QueryException` or a `QueryInterruptedException` with some more info, such as the segment/interval which was found to be unavailable.
   
   I also think that instead of failing fast, it would be nice if we could identify all the incomplete intervals, log them, and then decide whether to fail. It would be helpful for users to learn every incomplete interval in one go, rather than re-firing the query and having it fail multiple times.
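A minimal sketch of the collect-then-decide approach described above. The method and type names are illustrative, not Druid's actual API, and intervals are represented as plain strings:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: scan all queried intervals first, report every
// incomplete one, then decide whether to fail the query.
class IncompleteIntervalCheck
{
  // Maps each queried interval to the number of servers currently serving
  // its segments; 0 means the interval has unavailable data.
  static List<String> findIncompleteIntervals(Map<String, Integer> serverCountByInterval)
  {
    List<String> incomplete = new ArrayList<>();
    for (Map.Entry<String, Integer> entry : serverCountByInterval.entrySet()) {
      if (entry.getValue() == 0) {
        incomplete.add(entry.getKey());
      }
    }
    return incomplete;
  }

  static void checkOrFail(Map<String, Integer> serverCountByInterval, boolean failOnIncomplete)
  {
    List<String> incomplete = findIncompleteIntervals(serverCountByInterval);
    if (!incomplete.isEmpty()) {
      // Surface every incomplete interval at once so users do not have to
      // re-run the query to discover them one at a time.
      System.err.println("Incomplete intervals: " + incomplete);
      if (failOnIncomplete) {
        throw new RuntimeException("Query failed, data incomplete for intervals: " + incomplete);
      }
    }
  }
}
```

The exception message carries all incomplete intervals, which is the "in one go" behaviour the comment asks for.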



##########
server/src/main/java/org/apache/druid/client/BrokerServerView.java:
##########
@@ -265,29 +281,67 @@ private QueryableDruidServer removeServer(DruidServer server)
     return clients.remove(server.getName());
   }
 
+  private void polledSegmentsFromCoordinator(final ImmutableSortedSet<SegmentWithOvershadowedStatus> segmentWithOvershadowedStatusSet) {

Review Comment:
   This class does not know whether the segments were obtained via polling or 
otherwise. We just need to clarify what we intend to do with those segments. 
Please rename the other variables accordingly.
   
   ```suggestion
     private void addHandedOffSegmentsToTimeline(final ImmutableSortedSet<SegmentWithOvershadowedStatus> allPublishedSegments) {
   ```



##########
server/src/main/java/org/apache/druid/client/CachingClusteredClient.java:
##########
@@ -462,6 +462,10 @@ private Set<SegmentServerSelector> computeSegmentsToQuery(
               chunk.getChunkNumber()
           );
           segments.add(new SegmentServerSelector(server, segment));
+          if (server.isEmpty()) {

Review Comment:
   Instead of always failing, use a query context flag like `partialResult` 
with possible values `allow` (default behaviour) and `fail`. We should always 
log a warning when the result is partial, but fail only when specified.
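A sketch of how such a flag might be read and applied. `partialResult` and its values are the names proposed in this review, not an existing Druid context key, and the surrounding class is illustrative:

```java
import java.util.Map;

// Illustrative sketch of the proposed "partialResult" query context flag
// with values "allow" (the default) and "fail".
class PartialResultPolicy
{
  static final String CTX_PARTIAL_RESULT = "partialResult";

  // Returns true if the query should fail when results would be partial.
  static boolean shouldFail(Map<String, Object> queryContext)
  {
    Object value = queryContext.getOrDefault(CTX_PARTIAL_RESULT, "allow");
    if ("fail".equals(value)) {
      return true;
    }
    if ("allow".equals(value)) {
      return false;
    }
    throw new IllegalArgumentException("Unknown value for " + CTX_PARTIAL_RESULT + ": " + value);
  }

  static void handleEmptyServerSelector(Map<String, Object> queryContext, String segmentId)
  {
    // Always warn when the result would be partial; fail only when asked to.
    System.err.println("No servers found for segment: " + segmentId);
    if (shouldFail(queryContext)) {
      throw new RuntimeException("Query failed, incomplete data for segment: " + segmentId);
    }
  }
}
```

Defaulting to `allow` keeps today's behaviour for existing queries, while `fail` gives strict users an opt-in.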



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

