SaurabhChawla100 commented on pull request #29413:
URL: https://github.com/apache/spark/pull/29413#issuecomment-674930924


   > Let's say I make my event queue size 10000 for the entire cluster. My spark.set.optmized.event.queue.threshold is 10%; 10000 * 10% = 1000. This means my event queue size is 11000.
   > If users are already changing the event queue size from 10000 to say 30000, then there is no reason they can't make it 33000 (the same as having spark.set.optmized.event.queue.threshold=10%).
   
   So the idea here is: whatever initial capacity is set, add some threshold on top of that initial capacity so that event drops can be prevented in the best-case scenario, and less human effort goes into repeatedly changing the conf that sets the queue capacity.
   
   **If users are already changing the event queue size from 10000 to say 30000, then there is no reason they can't make it 33000 (the same as having spark.set.optmized.event.queue.threshold=10%).** - Yes, they can, but only on the next run, after they have already seen some abrupt behaviour (application hung / resources wasted). But what if, because of this extra threshold size, there is no event drop at all? That extra capacity (33000 with 10%, or 36000 with 20%) might have saved the user from changing the conf so frequently.
   
   Anyway, if we are the only ones impacted by this event drop problem and others are fine with setting the capacity manually, then I believe the current behaviour is fine.
   
   We can revisit this problem if there is demand for it in the future.
   
   Thank you, everyone, for your valuable feedback on this issue.
   
   
   

