tgravescs commented on pull request #29413:
URL: https://github.com/apache/spark/pull/29413#issuecomment-674877554


   >> We are already setting some value in cluster defaults . But as I said 
earlier also "There is no fixed size of the Queue which can be used in all the 
Spark Jobs and even for the Same Spark Job that ran on different input set on 
daily basis"
   
   So what do you plan to set these configs to? Unlimited? If there is no single 
size that works for all applications, then how does this help? You are adding a 
config that puts a fixed size on top of another fixed-size config.
   
   Let's say I set my event queue size to 10000 for the entire cluster and 
spark.set.optmized.event.queue.threshold to 10%: 10000 * 10% = 1000, so my 
effective event queue size is 11000.
   If users are already changing the event queue size from 10000 to, say, 30000, 
then there is no reason they can't make it 33000 instead (which has the same 
effect as setting spark.set.optmized.event.queue.threshold=10%).
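
   To spell out that arithmetic as a minimal sketch (the threshold semantics are 
assumed from the discussion above; the value names are illustrative and this is 
not actual Spark code):

```scala
// Illustrative only: assumes the proposed threshold simply adds a percentage
// of the configured queue capacity on top of it.
val baseCapacity = 10000       // e.g. spark.scheduler.listenerbus.eventqueue.capacity
val thresholdPercent = 10      // proposed spark.set.optmized.event.queue.threshold
val effectiveCapacity = baseCapacity + baseCapacity * thresholdPercent / 100
// effectiveCapacity == 11000, identical to just raising the capacity to 11000 directly
```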

