Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112505736
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -30,6 +30,7 @@ import scala.util.Properties
import org.apache.commons.lang3.StringUtils
import org.apache.hadoop.fs.Path
import org.apache.hadoop.security.UserGroupInformation
+import org.apache.hadoop.yarn.conf.YarnConfiguration
--- End diff --
Yeah, it looks like this does require `hadoop-yarn-api`. I'll add that
dependency to `core` unless you object; if you do, I suppose I could just use
the raw string instead of `YarnConfiguration.RM_PRINCIPAL`, but that seems
hacky.
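
(For reference, `YarnConfiguration.RM_PRINCIPAL` just resolves to a plain
config key, so the raw-string fallback would look roughly like this; the
principal value here is a made-up example:)

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.yarn.conf.YarnConfiguration

val hadoopConf = new Configuration()

// With hadoop-yarn-api on core's classpath, the key is referenced symbolically:
hadoopConf.set(YarnConfiguration.RM_PRINCIPAL, "spark/[email protected]")

// Without the dependency, the same key as a raw string (no compile-time check):
hadoopConf.set("yarn.resourcemanager.principal", "spark/[email protected]")
```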
FYI, we originally talked about placing the code below in the Mesos scheduler,
but that seems to be too late. The `SparkContext` constructor creates a copy of
the configuration, so we need to set the required YARN property before
`SparkContext` is created, which means before any user code runs, and that
probably means somewhere in `SparkSubmit`.
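
To make the timing concrete, here's a rough sketch of the kind of hook I mean
in `SparkSubmit`; the helper name, the `sysProps` map, and the exact call site
are illustrative, not the final patch:

```scala
import scala.collection.mutable.HashMap

import org.apache.hadoop.security.UserGroupInformation
import org.apache.hadoop.yarn.conf.YarnConfiguration

// Called while SparkSubmit is assembling the submit environment, i.e. before
// the user's main class runs and constructs its SparkContext.
def setRMPrincipal(sysProps: HashMap[String, String]): Unit = {
  // The "spark.hadoop." prefix makes SparkContext copy this entry into the
  // Hadoop Configuration it builds, so the copied config already contains
  // yarn.resourcemanager.principal by the time anything reads it.
  val key = "spark.hadoop." + YarnConfiguration.RM_PRINCIPAL
  sysProps.put(key, UserGroupInformation.getCurrentUser.getShortUserName)
}
```

Whether that lands in `prepareSubmitEnvironment()` or some other spot in
`SparkSubmit` is up for discussion.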