LakshSingla commented on code in PR #16800:
URL: https://github.com/apache/druid/pull/16800#discussion_r1751189746


##########
server/src/main/java/org/apache/druid/server/ClientQuerySegmentWalker.java:
##########
@@ -840,6 +846,7 @@ private static <T, QueryType extends Query<T>> Optional<DataSource> materializeR
                                 + "from the query context and/or the server config."
                             );
       } else {
+        resultSequence.set(results);
         return Optional.empty();

Review Comment:
   > why do we want to fall back, instead of telling the user that this has failed and that they might want to try the row-based limiting?
   
   Copied from one of my responses: 
   
   The fallback is mostly for when the types aren't known. I agree that it is a performance hit, but at the time this feature was added, the signature reported by the toolchest wasn't required to carry a type: scan queries only had knowledge of the column names (not their types), and the group by, timeseries, etc. toolchests could return null for the aggregators' dimensions. The fallback was present for these cases, where it's easy to detect the failure relatively early in the whole subquery processing flow, and falling back meant that transitioning from the row-based to the byte-based limit was simple. There's an undocumented parameter that treats these null types as JSON types, but that had logical flaws of its own, IIRC.
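   To illustrate the shape of the decision (this is a hypothetical sketch, not Druid's actual API; the class and method names are invented), the fallback amounts to checking whether every column in the signature has a known type before committing to the byte-based limit:

   ```java
   import java.util.Arrays;
   import java.util.List;

   // Hypothetical sketch of the byte-limit fallback decision. Names are
   // illustrative only and do not correspond to real Druid classes.
   public class SubqueryLimitFallback {
       // Byte-based limiting needs per-column type info to size rows;
       // a null type means the toolchest couldn't report one.
       static boolean canUseByteLimit(List<String> columnTypes) {
           return columnTypes.stream().allMatch(t -> t != null);
       }

       // Fall back to the older row-based limit rather than failing the
       // subquery when any type is unknown.
       static String chooseLimit(List<String> columnTypes) {
           return canUseByteLimit(columnTypes) ? "byte-based" : "row-based";
       }

       public static void main(String[] args) {
           System.out.println(chooseLimit(Arrays.asList("LONG", "STRING")));
           System.out.println(chooseLimit(Arrays.asList("LONG", null)));
       }
   }
   ```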



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
