Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3841#discussion_r22482464
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/ExecutorRunnableUtil.scala ---
@@ -75,8 +75,9 @@ trait ExecutorRunnableUtil extends Logging {
    // registers with the Scheduler and transfers the spark configs. Since the Executor backend
    // uses Akka to connect to the scheduler, the akka settings are needed as well as the
    // authentication settings.
-    sparkConf.getAll.
-      filter { case (k, v) => k.startsWith("spark.auth") || k.startsWith("spark.akka") }.
+    sparkConf.getAll.filter { case (k, v) =>
+      k.startsWith("spark.auth") || k.startsWith("spark.akka") || k.equals("spark.port.maxRetries")
+    }.
--- End diff ---
The crux of the problem here seems to be that YARN executors receive some
config settings as JVM system properties (`-D` flags), and other config
settings via the SparkEnv object they receive once they are connected to the
rest of the cluster.
This whitelist is for config options that the executor must receive before it
connects, and `spark.port.maxRetries` does indeed seem to be one of those.
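For illustration, here is a minimal sketch of what the whitelist effectively selects when the executor command is built: only the matching settings would be forwarded to the executor JVM (e.g. as `-D` system properties) so they are available before the executor connects back. The `whitelistedJavaOpts` helper and the sample keys below are hypothetical, not the actual code in `ExecutorRunnableUtil`.

```scala
import org.apache.spark.SparkConf

// Hypothetical sketch of the whitelist: forward only the settings the
// executor needs before it can connect, rendered as -D JVM options.
object WhitelistSketch {
  def whitelistedJavaOpts(sparkConf: SparkConf): Seq[String] = {
    sparkConf.getAll
      .filter { case (k, _) =>
        k.startsWith("spark.auth") ||
        k.startsWith("spark.akka") ||
        k.equals("spark.port.maxRetries")
      }
      .map { case (k, v) => s"-D$k=$v" }
      .toSeq
  }

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf(loadDefaults = false)
      .set("spark.port.maxRetries", "32")
      .set("spark.akka.frameSize", "64")
      .set("spark.ui.port", "4041") // not whitelisted, so not forwarded
    // Prints only spark.port.maxRetries and spark.akka.frameSize.
    whitelistedJavaOpts(conf).foreach(println)
  }
}
```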