mridulm commented on code in PR #38181:
URL: https://github.com/apache/spark/pull/38181#discussion_r990890666
##########
core/src/main/scala/org/apache/spark/scheduler/AsyncEventQueue.scala:
##########
@@ -154,8 +154,9 @@ private class AsyncEventQueue(
return
}
- eventCount.incrementAndGet()
- if (eventQueue.offer(event)) {
+ if (eventQueue.offer(event, conf.get(LISTENER_BUS_EVENT_QUEUE_TIMEOUT),
+ TimeUnit.MILLISECONDS)) {
Review Comment:
While minimizing event drops is helpful, the event subsystem was deliberately
written to keep working when events are lost and to prioritize the stability of
internal subsystems by remaining non-blocking. Blocking here is not an option.
For specific applications exhibiting the issue, you can increase the event queue
size and driver memory as a workaround, but that is unfortunately not a solution.
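To illustrate the behavioral difference the review is objecting to, here is a
minimal, self-contained sketch using `java.util.concurrent.LinkedBlockingQueue`
(the queue type `AsyncEventQueue` wraps; the queue name and capacity below are
illustrative, not Spark's actual configuration). The one-argument `offer` fails
fast on a full queue, while the timed `offer` the PR proposes stalls the posting
thread for up to the timeout:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class OfferDemo {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical bounded queue standing in for AsyncEventQueue's eventQueue.
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1);
        queue.offer("first"); // succeeds: queue has capacity

        // Non-blocking offer (current behavior): returns false immediately when
        // the queue is full, so the caller can record the drop and move on.
        boolean accepted = queue.offer("second");
        System.out.println("non-blocking offer accepted: " + accepted);

        // Timed offer (the proposed change): blocks the *posting* thread for up
        // to the timeout waiting for space. On a hot event-posting path this
        // stalls the caller instead of dropping the event.
        long start = System.nanoTime();
        boolean acceptedTimed = queue.offer("second", 100, TimeUnit.MILLISECONDS);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("timed offer accepted: " + acceptedTimed
                + " after ~" + elapsedMs + " ms");
    }
}
```

The second call still returns `false` when no consumer drains the queue, but
only after holding the posting thread for the full timeout, which is exactly
the stall the non-blocking design avoids.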
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]