holdenk commented on a change in pull request #33873:
URL: https://github.com/apache/spark/pull/33873#discussion_r707607075
##########
File path: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala
##########
@@ -52,6 +52,14 @@ package object config extends Logging {
.timeConf(TimeUnit.MILLISECONDS)
.createOptional
+ private[spark] val AM_CLIENT_MODE_EXIT_DIRECTLY =
+ ConfigBuilder("spark.yarn.am.clientModeExitDirectly")
+ .doc("When ture, if YarnClientSchedulerBackend.MonitorThread got report
with " +
Review comment:
nit: s/got/gets/
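For reference, a sketch of what the definition might look like with the nit applied ("ture" in the quoted hunk presumably also means "true"); the tail of the doc string, the `.booleanConf`, and the default are assumptions on my part, since the hunk above is truncated:

```scala
private[spark] val AM_CLIENT_MODE_EXIT_DIRECTLY =
  ConfigBuilder("spark.yarn.am.clientModeExitDirectly")
    .doc("When true, if YarnClientSchedulerBackend.MonitorThread gets report with " +
      "final status FAILED or KILLED, exit the SparkContext directly.") // tail assumed
    .booleanConf // assumed from the "When true" wording
    .createWithDefault(false) // assumed default
```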
##########
File path: resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
##########
@@ -122,6 +122,13 @@ private[spark] class YarnClientSchedulerBackend(
}
allowInterrupt = false
sc.stop()
+ state match {
+ case FinalApplicationStatus.FAILED | FinalApplicationStatus.KILLED =>
+ logWarning(s"ApplicationMaster finished with status ${state}, " +
+ s"SparkContext should exit with code 1.")
+ System.exit(1)
Review comment:
Yeah, OK, so we could be in the situation where someone is using Spark as
part of another application (e.g. some analytics). I think a good compromise
for now could be adding another flag for "exit on driver error", or just
switching this to a logError and dropping `System.exit`. What do folks think?
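
A minimal sketch of the flag-guarded compromise, assuming a hypothetical config entry `AM_CLIENT_MODE_EXIT_ON_ERROR` and a `SparkConf` reachable as `conf` in this scope; none of these names come from the PR:

```scala
state match {
  case FinalApplicationStatus.FAILED | FinalApplicationStatus.KILLED =>
    // Always surface the failure in the driver logs.
    logError(s"ApplicationMaster finished with status ${state}.")
    // Only kill the JVM when the (hypothetical) flag is set, so apps that
    // embed Spark (e.g. an analytics service hosting the driver) keep running.
    if (conf.get(AM_CLIENT_MODE_EXIT_ON_ERROR)) {
      System.exit(1)
    }
  case _ => // SUCCEEDED / UNDEFINED: nothing to exit over
}
```

With a `false` default this behaves like the logError-only option, while spark-submit-style deployments that need a non-zero exit code could opt in.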
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]