GitHub user srowen commented on the pull request:
https://github.com/apache/spark/pull/9946#issuecomment-173397974
I disagree with: "At the point we call System.exit here all user code is
done and we are terminating." It should ideally be so, but what happens when it
isn't? A user shutdown hook could be the thing still executing (I think?), so I
don't know if that's a solution.
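To make the concern concrete, here is a minimal Scala sketch (not from this PR, names and paths are illustrative) of a perfectly ordinary shutdown hook that needs a short, bounded amount of time to flush state. If the JVM is halted with no grace period, a hook like this can be cut off mid-write:

```scala
import java.io.{File, PrintWriter}

object SlowButLegitimateHook {
  def main(args: Array[String]): Unit = {
    Runtime.getRuntime.addShutdownHook(new Thread(new Runnable {
      override def run(): Unit = {
        // e.g. a third-party library draining its pools and flushing to disk
        val out = new PrintWriter(new File("/tmp/cleanup.log"))
        try {
          Thread.sleep(2000) // short, bounded cleanup work
          out.println("state flushed")
        } finally {
          out.close()
        }
      }
    }))
    // By the time main returns (or something calls System.exit), this hook
    // may still be running, so "all user code is done" does not hold then.
  }
}
```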
It may be a common sin among fairly righteous apps, as I can imagine all
kinds of third-party libraries taking some short time to shut down their pools
and flush whatever they have to disk. Uncommonly, something pathological runs
forever: a bad app. There's no way to distinguish the two. Killing the JVM risks
a bad end for the app's cleanup; not killing it risks a stuck JVM. A timeout
doesn't solve it, and the implicit timeout of 0 here is the most extreme choice in
one direction. My gut is that it's better to avoid possibly harming several
fairly innocent apps with this behavior change, accepting that it means more
manual work to chase down and kill the occasional errant bad app.
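For reference, a hedged sketch of the timeout idea being debated (an illustration, not what this PR implements; the helper name and the bound are made up): let the shutdown hooks run, but halt the JVM outright if they exceed some bound. Any bound is an arbitrary line between truncating innocent cleanup and sitting behind a wedged hook; a bound of 0 is just the most aggressive point on that spectrum.

```scala
import java.util.concurrent.TimeUnit

object ExitWithWatchdog {
  // Hypothetical helper: run shutdown hooks, but forcibly halt after a bound.
  def exitWithTimeout(status: Int, timeoutSeconds: Long): Unit = {
    val watchdog = new Thread(new Runnable {
      override def run(): Unit = {
        TimeUnit.SECONDS.sleep(timeoutSeconds)
        // halt() terminates immediately and skips any hooks still running,
        // unlike System.exit(), which waits for them indefinitely.
        Runtime.getRuntime.halt(status)
      }
    })
    watchdog.setDaemon(true) // the watchdog must not itself keep the JVM alive
    watchdog.start()
    System.exit(status) // runs shutdown hooks; the watchdog caps the wait
  }
}
```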
I'm OK being out-voted too, but given the discussion so far that remains my
take.