Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/11693#discussion_r56023939
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -152,13 +152,14 @@ private[spark] class ApplicationMaster(
       val isLastAttempt = client.getAttemptId().getAttemptId() >= maxAppAttempts
       if (!finished) {
-        // This happens when the user application calls System.exit(). We have the choice
-        // of either failing or succeeding at this point. We report success to avoid
-        // retrying applications that have succeeded (System.exit(0)), which means that
-        // applications that explicitly exit with a non-zero status will also show up as
-        // succeeded in the RM UI.
+        // The default state of ApplicationMaster is failed if it is invoked by shut down hook.
+        // This behavior is different compared to 1.x version, we guarantee user will not call
+        // System.exit() at the end of application, so state will be updated before calling
--- End diff --
I think this comment is a bit confusing. We can't guarantee that the user
doesn't call System.exit(). I think you mean that Spark itself doesn't call
it.
Perhaps just remove that part and keep the last sentence, saying the user
shouldn't be calling System.exit(0) on a good shutdown.
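For context, a minimal sketch of the shutdown-hook behavior being discussed (not the actual ApplicationMaster code; the `finished` flag, `finish` helper, and message string are assumptions mirroring the diff):

```scala
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus

object ShutdownHookSketch {
  // Set to true once a final status has been reported to the RM.
  @volatile private var finished = false

  // Simplified stand-in for the AM's finish() method: records the final
  // status and would normally unregister from the ResourceManager.
  private def finish(status: FinalApplicationStatus, exitCode: Int, msg: String): Unit = {
    finished = true
    // ... unregister with `status`, log `msg`, etc.
  }

  // Hook that runs when the JVM exits, e.g. because user code called System.exit().
  val shutdownHook: Runnable = new Runnable {
    override def run(): Unit = {
      if (!finished) {
        // The user application exited before the AM recorded a final status.
        // Reporting SUCCEEDED here avoids YARN retrying an app that exited via
        // System.exit(0), at the cost of also showing apps that exited with a
        // non-zero status as succeeded in the RM UI (the trade-off described
        // in the original comment).
        finish(FinalApplicationStatus.SUCCEEDED, 0,
          "Shutdown hook called before final status was reported.")
      }
    }
  }
}
```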