renatocron commented on issue #12986:
URL: https://github.com/apache/druid/issues/12986#issuecomment-1294983860

   Hello @abhishekagarwal87, I can't really reproduce it easily; it happens 
randomly depending on the filters I choose.
   
   The original issue was opened with a daily rollup. When I opened it, I 
re-imported ~30 days of events from Kafka, and during the ingestion, while the 
segments were being created, I got a few invalid results. When I stopped the 
Overlord and ran the query again, the results were OK. Then I let 
auto-compaction run with type=range and the issue appeared; when I manually 
re-indexed with 'hash' partitioning, the issue was gone.
   
   This week, the datasource I posted is not a rollup and is using dynamic 
auto-compaction with a PT72H delay, and my query is using `where __time 
> CURRENT_TIMESTAMP - interval '24' hour`, so I think we can take 
auto-compaction out of the equation.
   
   Note that this week's issue only appeared when I added `TIMESTAMP_TO_MILLIS` 
to the `max(__time)` expression; when I ran it with only `max(__time)` I did 
not see any invalid results. Looking back, the original issue disappeared 
when I added LATEST or ARRAY_AGG, and this week's query is using LATEST as the 
expression, so maybe it's not the same bug.
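
   For context, here is a minimal sketch of the two query variants I mean. The 
datasource name `events` and the `dim` column are hypothetical placeholders, 
not my real schema; only the aggregation expression differs between the two:

   ```sql
   -- Variant that returned correct results for me:
   SELECT dim, MAX(__time) AS last_seen
   FROM events
   WHERE __time > CURRENT_TIMESTAMP - INTERVAL '24' HOUR
   GROUP BY dim;

   -- Variant that intermittently returned invalid results
   -- (same query, with the aggregate wrapped in TIMESTAMP_TO_MILLIS):
   SELECT dim, TIMESTAMP_TO_MILLIS(MAX(__time)) AS last_seen_millis
   FROM events
   WHERE __time > CURRENT_TIMESTAMP - INTERVAL '24' HOUR
   GROUP BY dim;
   ```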
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

