pan3793 commented on PR #38902:
URL: https://github.com/apache/spark/pull/38902#issuecomment-1338013789

   Usually, it's a good idea to make the default value of a configuration 
adaptive, but I'm not sure about this one. (I chose 20s as the default value 
because I don't want to deviate much from the original 30s behavior; 
internally, we actually set this value to 10s.)
   
   In our practice, since all shutdown procedures in SparkShutdownHookManager 
share `hadoop.service.shutdown.timeout`, the procedures are usually blocked by 
custom listeners (they are provided by different teams or Spark users, and some 
of them do a final flush on `SparkListenerApplicationEnd` that can take dozens 
of seconds or even minutes). We recommend that all of them set a timeout of 10s 
or shorter, so they work fine with the default 30s timeout limit in most cases. 
So in my case, if I enlarge `hadoop.service.shutdown.timeout`, it's mostly 
because listeners take a long time during the shutdown phase, and automatically 
increasing `KUBERNETES_EXECUTOR_SNAPSHOTS_SUBSCRIBERS_GRACE_PERIOD` is not what 
I want.
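
   To illustrate, the bounded flush we ask listener owners to implement looks 
roughly like the sketch below (the class name and the `flushPendingRecords` 
helper are hypothetical placeholders; the point is only to cap the flush at 10s 
so it fits within the 30s shutdown budget):

```scala
import java.util.concurrent.{Executors, TimeUnit, TimeoutException}

import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationEnd}

// Hypothetical listener: performs a final flush on application end, but caps
// the flush at 10 seconds so it stays well within the 30s shutdown timeout
// shared via hadoop.service.shutdown.timeout.
class BoundedFlushListener extends SparkListener {
  private val flushTimeoutSeconds = 10L
  private val executor = Executors.newSingleThreadExecutor()

  override def onApplicationEnd(applicationEnd: SparkListenerApplicationEnd): Unit = {
    val future = executor.submit(new Runnable {
      override def run(): Unit = flushPendingRecords()
    })
    try {
      future.get(flushTimeoutSeconds, TimeUnit.SECONDS)
    } catch {
      case _: TimeoutException =>
        // Give up rather than block the whole shutdown hook sequence.
        future.cancel(true)
    } finally {
      executor.shutdownNow()
    }
  }

  // Placeholder for team-specific flush logic, e.g. pushing buffered
  // metrics or audit records to an external store.
  private def flushPendingRecords(): Unit = { /* no-op in this sketch */ }
}
```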

