GitHub user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17431155
--- Diff: yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -172,22 +131,17 @@ object Client {
     }
     // Set an env variable indicating we are running in YARN mode.
-    // Note: anything env variable with SPARK_ prefix gets propagated to all (remote) processes -
-    // see Client#setupLaunchEnv().
+    // Note that any env variable with the SPARK_ prefix gets propagated to all (remote) processes
     System.setProperty("SPARK_YARN_MODE", "true")
-    val sparkConf = new SparkConf()
+    val sparkConf = new SparkConf
     try {
       val args = new ClientArguments(argStrings, sparkConf)
       new Client(args, sparkConf).run()
     } catch {
-      case e: Exception => {
+      case e: Exception =>
         Console.err.println(e.getMessage)
         System.exit(1)
-      }
     }
-
-    System.exit(0)
--- End diff ---
From my understanding, explicitly calling System.exit ensures that shutdown hooks are invoked even when other daemon threads that don't know about the exit are still running. I was originally thinking of client mode, which has other daemon threads running, but this is only called in cluster mode, so maybe it doesn't matter.
But if we do remove it, I would like to make sure the change is tested: the return value is still the same, things shut down properly, etc.
Otherwise I would like it left in, and we can file a separate JIRA to investigate.