cloud-fan commented on pull request #29413:
URL: https://github.com/apache/spark/pull/29413#issuecomment-675331719


   I think we can't 100% guarantee that no events are dropped; whatever we do is just best effort. That said, the consumer side of the event queue should not be sensitive to event drops. If your use case is to check whether a Spark application is still running jobs, can we query the Spark driver status via the REST API directly?
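   
   For example, a minimal sketch of polling the driver's monitoring REST API (assuming the driver UI is reachable at localhost:4040; the application id below is a placeholder):
   
   ```scala
   import scala.io.Source
   
   // Assumed driver host; the monitoring REST API is served on the
   // driver's web UI port (4040 by default).
   val driverUrl = "http://localhost:4040"
   
   // List the applications known to this driver.
   println(Source.fromURL(s"$driverUrl/api/v1/applications").mkString)
   
   // For a given application id, list only the jobs that are still
   // running; an empty list means no jobs are currently active.
   val appId = "app-20200818120000-0000" // placeholder
   println(Source.fromURL(
     s"$driverUrl/api/v1/applications/$appId/jobs?status=running").mkString)
   ```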
   
   I get the intention of this PR to make the queue size more dynamic, as the peak number of events is unpredictable. But I don't see how this patch solves it, as users still need to predict it and set the new config.
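   
   Concretely, the only knob today is the existing config spark.scheduler.listenerbus.eventqueue.capacity, which already forces users to guess the peak up front; a new config would shift that guess rather than remove it. A sketch (the 20000 value is an arbitrary guess, not a recommendation):
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   // spark.scheduler.listenerbus.eventqueue.capacity is an existing Spark
   // config (default 10000). The 20000 below is an arbitrary up-front
   // guess at peak event volume, which is exactly the hard part.
   val spark = SparkSession.builder()
     .appName("listener-queue-sizing") // hypothetical app name
     .config("spark.scheduler.listenerbus.eventqueue.capacity", "20000")
     .getOrCreate()
   ```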

