abhishekagarwal87 commented on issue #13182:
URL: https://github.com/apache/druid/issues/13182#issuecomment-1272263726

   > Yes, while looking at the issue and changes above, I had observed that 
time-bucket generation for time-granularity queries could be improved. One 
crude way of stopping time-grain generation from getting out of hand is to 
limit the number of grains the query is allowed to generate.
   > A better, but more effort-intensive, way could be to derive the time 
grains from the data as we read it, instead of generating the time buckets 
first and then passing data through them.
   
   A similar problem happens on the ingestion side as well, when someone 
accidentally selects the wrong column for the timestamp and kaboom :) I think 
your crude solution makes sense. The limit should be high enough that we know 
something is wrong with the query whenever it is hit, yet low enough to stop 
an OOM. Something like 100,000 seems like a good number. What do you think?
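   To illustrate the crude limit being discussed, here is a minimal sketch of fail-fast bucket generation. This is not Druid's actual API; the class, method names, and the `MAX_BUCKETS` constant (set to the 100,000 suggested above) are all hypothetical, shown only to make the idea concrete:

   ```java
   import java.time.Duration;
   import java.time.Instant;
   import java.util.ArrayList;
   import java.util.List;

   public class BucketLimiter {
       // Hypothetical cap on generated time buckets; the issue suggests ~100,000.
       static final int MAX_BUCKETS = 100_000;

       // Generates fixed-width time buckets covering [start, end), throwing as
       // soon as the cap is exceeded instead of exhausting memory.
       static List<Instant> buckets(Instant start, Instant end, Duration grain) {
           List<Instant> out = new ArrayList<>();
           for (Instant t = start; t.isBefore(end); t = t.plus(grain)) {
               if (out.size() >= MAX_BUCKETS) {
                   throw new IllegalStateException(
                       "Query would generate more than " + MAX_BUCKETS + " time buckets");
               }
               out.add(t);
           }
           return out;
       }

       public static void main(String[] args) {
           Instant start = Instant.parse("2022-01-01T00:00:00Z");

           // One day of 1-second grains: 86,400 buckets, well under the cap.
           List<Instant> ok =
               buckets(start, start.plus(Duration.ofDays(1)), Duration.ofSeconds(1));
           System.out.println(ok.size());

           // Two days of 1-second grains (172,800) would exceed the cap, so the
           // query is rejected up front rather than OOMing the process.
           try {
               buckets(start, start.plus(Duration.ofDays(2)), Duration.ofSeconds(1));
           } catch (IllegalStateException e) {
               System.out.println("rejected");
           }
       }
   }
   ```

   The point of checking the count inside the generation loop (rather than computing it up front) is that it also guards against pathological inputs, like a bad timestamp column producing an interval spanning centuries.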


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

