Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/20945#discussion_r178902064
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
@@ -506,6 +506,10 @@ private[spark] class MesosClusterScheduler(
       options ++= Seq("--class", desc.command.mainClass)
     }
+    desc.conf.getOption("spark.mesos.proxyUser").foreach { v =>
+      options ++= Seq("--proxy-user", v)
+    }
--- End diff ---
> The driver will start the SparkJob's main as a proxy user (as usual) and
> will use the superuser credentials to impersonate the passed proxy user.
That's a big problem because, as I said, it makes the superuser's
credentials available to untrusted user code. How do you prevent the user's
Spark app from using those credentials?
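
To make the concern concrete, here's a minimal sketch of Hadoop-style
impersonation with `UserGroupInformation` (not code from this PR; the object
name and the "alice" user are placeholders). Creating the proxy UGI requires
the superuser's credentials in the same JVM, and nothing stops code running
under `doAs` from reaching back to them:

```scala
import java.security.PrivilegedExceptionAction
import org.apache.hadoop.security.UserGroupInformation

object ProxyUserLeakSketch {
  def main(args: Array[String]): Unit = {
    // Impersonation only works if the superuser's credentials are
    // present in this process.
    val superUser = UserGroupInformation.getLoginUser()
    val proxyUgi = UserGroupInformation.createProxyUser("alice", superUser)
    proxyUgi.doAs(new PrivilegedExceptionAction[Unit] {
      override def run(): Unit = {
        // User code runs as "alice" here...
        println(s"current user: ${UserGroupInformation.getCurrentUser()}")
        // ...but it can still grab the login (super) user's UGI, which
        // carries the real credentials (e.g. the Kerberos TGT).
        println(s"login user:   ${UserGroupInformation.getLoginUser()}")
      }
    })
  }
}
```

If the Mesos driver does the `createProxyUser` call itself, this JVM is the
one that also runs the application's main class.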
In YARN cluster mode the superuser's credentials are never available to
the user application. (In client mode they are, but really, if you're using
`--proxy-user` in client mode, either you're missing the point or I hope you
know what you're doing.)
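
For contrast, this is roughly the pattern `SparkSubmit` follows on the
gateway machine (a sketch, not the actual Spark code; `proxyUserName` and
the object name are placeholders):

```scala
import java.security.PrivilegedExceptionAction
import org.apache.hadoop.security.UserGroupInformation

object GatewaySideImpersonationSketch {
  def main(args: Array[String]): Unit = {
    val proxyUserName = "alice" // stand-in for the parsed --proxy-user value
    val realUser = UserGroupInformation.getCurrentUser()
    val proxyUser = UserGroupInformation.createProxyUser(proxyUserName, realUser)
    proxyUser.doAs(new PrivilegedExceptionAction[Unit] {
      override def run(): Unit = {
        // In cluster mode this runs on the gateway: delegation tokens are
        // obtained here as the proxy user and shipped with the application,
        // so the superuser's credentials never leave the gateway host.
        // Doing the impersonation inside the driver on the cluster (the
        // Mesos approach above) would instead require the superuser's
        // credentials on the driver host, alongside untrusted user code.
      }
    })
  }
}
```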
---