didip edited a comment on issue #11535:
URL: https://github.com/apache/druid/issues/11535#issuecomment-896446623


   Yes, this has been happening for the last 3 weeks. Every ingestion is 
successful and the data appears on S3, but it is unqueryable.
   The input data is huge, 3.5 TB per day, which is why we desperately need a 
high degree of parallelism to meet an acceptable SLA.
   
   When the task UI was still working, every log from index_parallel and its 
sub_tasks was clean; they all finished successfully.
   
   I am currently unable to load the task UI because there are far too many 
druid_tasks records in the DB, so I cannot get the latest logs.
   
   I will try searching the broker query logs to see whether there are any errors.
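
   Since the symptom is "data on S3 but unqueryable", one thing worth checking is whether the segments were published to the metadata store but never loaded by the historicals. A minimal sketch (not from this issue) that asks Druid's sys.segments system table about that over the SQL API; the router address and datasource name below are placeholder assumptions:

```python
# Sketch: find published-but-unavailable segments via Druid's SQL API.
# The router URL and 'my_datasource' are placeholders, not values from
# this issue.
import json
import urllib.request


def build_segment_check_sql(datasource: str) -> str:
    """SQL against the sys.segments system table: segments that are
    published but not yet served show is_available = 0."""
    return (
        "SELECT segment_id, is_published, is_available "
        "FROM sys.segments "
        f"WHERE datasource = '{datasource}' AND is_available = 0"
    )


def post_sql(router_url: str, sql: str):
    """POST a query to Druid's SQL endpoint (/druid/v2/sql)."""
    req = urllib.request.Request(
        router_url + "/druid/v2/sql",
        data=json.dumps({"query": sql}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    sql = build_segment_check_sql("my_datasource")
    print(sql)
    # Against a live cluster (address is an assumption):
    # rows = post_sql("http://router:8888", sql)
```

   If that query returns rows, the coordinator/historical side (load rules, deep-storage access, or historical capacity) would be the place to look rather than the ingestion tasks themselves.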


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


