AngersZhuuuu commented on a change in pull request #33873:
URL: https://github.com/apache/spark/pull/33873#discussion_r707878507



##########
File path: 
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
##########
@@ -122,6 +122,13 @@ private[spark] class YarnClientSchedulerBackend(
         }
         allowInterrupt = false
         sc.stop()
+        state match {
+          case FinalApplicationStatus.FAILED | FinalApplicationStatus.KILLED =>
+            logWarning(s"ApplicationMaster finished with status ${state}, " +
+              s"SparkContext should exit with code 1.")
+            System.exit(1)

Review comment:
       > yeah ok so we could be in the situation where someone is using Spark 
as part of another application (e.g. some analytics). I think a good compromise 
for now could be adding another flag for "exit on driver error" or just 
switching this to a logError and dropping `system.exit`. What do folks think?
   
    You mean changing this to a configuration like `exit on driver error` with a 
default of true? So when it is true and the ApplicationMaster is killed or 
failed, the driver exits too, and when it is false we only log the error?
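
    The compromise discussed above could look roughly like the sketch below. 
This is only an illustration: the config key 
`spark.yarn.exitOnApplicationMasterError` is a hypothetical name (not an actual 
Spark setting), and `conf`, `state`, `logWarning`, and `logError` are assumed 
to be in scope as in the surrounding `YarnClientSchedulerBackend` code.

```scala
// Hypothetical flag gating the hard exit, so embedded applications can
// opt out. The config name below is an assumption for illustration only.
val exitOnAmError =
  conf.getBoolean("spark.yarn.exitOnApplicationMasterError", defaultValue = true)

state match {
  case FinalApplicationStatus.FAILED | FinalApplicationStatus.KILLED =>
    if (exitOnAmError) {
      logWarning(s"ApplicationMaster finished with status $state, " +
        "exiting with code 1.")
      System.exit(1)
    } else {
      // Embedded-application mode: surface the failure without killing the JVM.
      logError(s"ApplicationMaster finished with status $state.")
    }
  case _ => // SUCCEEDED or UNDEFINED: nothing to do
}
```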




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
