520lailai edited a comment on issue #9780:
URL: https://github.com/apache/druid/issues/9780#issuecomment-875284542


   If you are running two or more tasks that generate segments for the same 
datasource and the same time chunk, the generated segments can overshadow 
each other, which can lead to incorrect query results. Druid ingestion tasks 
acquire task locks beforehand to avoid this problem. So why not acquire the 
task lock before creating the segments in the Spark job?
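
   To illustrate the locking behavior being asked about, here is a minimal, hypothetical in-memory sketch (not Druid's actual `TaskLockbox` API; the class and method names are invented for illustration) of a time-chunk lock keyed by (datasource, interval), where a second writer for an already-locked chunk is rejected instead of silently producing overshadowing segments:

   ```python
   from threading import Lock

   class TimeChunkLockManager:
       """Illustrative analogue of Druid's time-chunk task locks:
       at most one task may hold the lock for a (datasource, interval)."""

       def __init__(self):
           self._guard = Lock()
           self._held = {}  # (datasource, interval) -> task_id

       def try_acquire(self, task_id, datasource, interval):
           key = (datasource, interval)
           with self._guard:
               holder = self._held.get(key)
               if holder is None or holder == task_id:
                   self._held[key] = task_id
                   return True
               # Another task already owns this time chunk.
               return False

       def release(self, task_id, datasource, interval):
           key = (datasource, interval)
           with self._guard:
               if self._held.get(key) == task_id:
                   del self._held[key]

   locks = TimeChunkLockManager()
   # The Spark job takes the lock before writing segments.
   assert locks.try_acquire("spark-job-1", "wikipedia", "2021-07-01/2021-07-02")
   # A concurrent task for the same chunk is refused rather than allowed
   # to publish segments that could overshadow the first writer's output.
   assert not locks.try_acquire("native-task-2", "wikipedia", "2021-07-01/2021-07-02")
   locks.release("spark-job-1", "wikipedia", "2021-07-01/2021-07-02")
   assert locks.try_acquire("native-task-2", "wikipedia", "2021-07-01/2021-07-02")
   ```

   In real Druid the lock would be requested from the Overlord as part of task execution; the sketch only shows the mutual-exclusion property the question relies on.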


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


