tgravescs commented on pull request #29413:
URL: https://github.com/apache/spark/pull/29413#issuecomment-673097079


   Sorry, I'm still not seeing any difference here versus just increasing the
size of the current queue. If neither approach actually allocates memory for
the entire capacity until runtime, then either way you have to set the driver
memory to the maximum amount used. Why not just set the queue size to
size + size * spark.set.optmized.event.queue.threshold?
   If you look at the driver memory in use, I don't think that is a very
reliable signal; it can change very quickly.
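
   For illustration, a minimal sketch of the sizing arithmetic being suggested
here: derive one capacity from the existing queue size plus a configurable
headroom fraction, rather than maintaining a separate optimized queue. The
`QueueSizing` object and `effectiveCapacity` helper are hypothetical names for
this sketch; the threshold value stands in for the
`spark.set.optmized.event.queue.threshold` config referenced above.

   ```scala
   // Hypothetical sketch of the suggested alternative: a single queue sized
   // as size + size * threshold, instead of a second optimized queue.
   object QueueSizing {
     // Compute the expanded capacity from a base size and a headroom fraction.
     def effectiveCapacity(size: Int, threshold: Double): Int =
       size + (size * threshold).toInt

     def main(args: Array[String]): Unit = {
       // e.g. a 10000-event queue with a 0.5 threshold grows to 15000 slots
       println(effectiveCapacity(10000, 0.5)) // prints 15000
     }
   }
   ```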


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]