Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/19631#discussion_r152093236
--- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -56,11 +56,38 @@ private[spark] class ApplicationMaster(args: ApplicationMasterArguments) extends
   // TODO: Currently, task to container is computed once (TaskSetManager) - which need not be
   // optimal as more containers are available. Might need to handle this better.
-  private val sparkConf = new SparkConf()
-  private val yarnConf: YarnConfiguration = SparkHadoopUtil.get.newConfiguration(sparkConf)
-    .asInstanceOf[YarnConfiguration]
   private val isClusterMode = args.userClass != null
+  private val sparkConf = new SparkConf()
+  if (args.propertiesFile != null) {
+    Utils.getPropertiesFromFile(args.propertiesFile).foreach { case (k, v) =>
+      sparkConf.set(k, v)
+    }
+  }
+
+  // Initialize the security manager for authentication, if enabled. This needs to be done
+  // before the config is propagated to the system properties.
+  private val securityMgr = new SecurityManager(sparkConf)
+  if (isClusterMode) {
+    securityMgr.initializeAuth()
+  }
+
+  // If an auth secret is configured, propagate it to executors.
+  Option(securityMgr.getSecretKey()).foreach { secret =>
+    sparkConf.setExecutorEnv(SecurityManager.ENV_AUTH_SECRET, secret)
--- End diff --
We also set ENV_AUTH_SECRET in ExecutorRunnable, so isn't this duplicated?
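
For context, a minimal sketch of the two paths that can carry the secret to
executors (the ExecutorRunnable fragment paraphrases the shape of its
environment setup and is an assumption, not a quote of the actual source):

    // Path 1 (this diff): the AM stores the secret in the Spark conf as an
    // executor env var, which reaches executors via spark.executorEnv.*.
    Option(securityMgr.getSecretKey()).foreach { secret =>
      sparkConf.setExecutorEnv(SecurityManager.ENV_AUTH_SECRET, secret)
    }

    // Path 2 (assumed shape of ExecutorRunnable's env preparation): the
    // container launch context receives the secret directly.
    val env = new scala.collection.mutable.HashMap[String, String]()
    if (securityMgr.isAuthenticationEnabled()) {
      env(SecurityManager.ENV_AUTH_SECRET) = securityMgr.getSecretKey()
    }

If both run, the container environment ends up populated twice with the same
value, which is what the question above is getting at.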
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]