Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/20945#discussion_r178993421
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -506,6 +506,10 @@ private[spark] class MesosClusterScheduler(
       options ++= Seq("--class", desc.command.mainClass)
     }
+    desc.conf.getOption("spark.mesos.proxyUser").foreach { v =>
+      options ++= Seq("--proxy-user", v)
--- End diff ---
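For context, the change above maps a dispatcher conf entry onto a spark-submit flag. A minimal sketch of that pattern, with an illustrative helper and a plain `Map` standing in for `desc.conf` (both are assumptions, not the actual code in `MesosClusterScheduler`):

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative sketch of the pattern in the diff: translate an optional Spark
// conf entry into a spark-submit command-line flag, as the change does for
// "spark.mesos.proxyUser" -> "--proxy-user".
object ProxyUserOptionSketch {
  def appendProxyUserOption(conf: Map[String, String]): Seq[String] = {
    val options = ArrayBuffer.empty[String]
    conf.get("spark.mesos.proxyUser").foreach { v =>
      options ++= Seq("--proxy-user", v)
    }
    options.toSeq
  }
}
```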
> In Yarn client mode the driver's main is run with the impersonated user,
> right? Then why can't that user (the proxy) access the TGT there, and why
> is there no hole then?
There is only impersonation if you use `--proxy-user`; if you don't, there
isn't. Either way, the code running in the cluster (the AM and the
executors) only ever sees delegation tokens (DTs), never the TGT.
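For readers unfamiliar with how `--proxy-user` impersonation works, here is a minimal sketch using Hadoop's `UserGroupInformation` API. The `runAsProxy` helper is an assumption for illustration, not Spark's actual submit path:

```scala
import java.security.PrivilegedExceptionAction

import org.apache.hadoop.security.UserGroupInformation

// Sketch: with --proxy-user, the real (kerberos-authenticated) user creates a
// proxy UGI and runs the application code under it via doAs. Without
// --proxy-user, this step simply never happens and code runs as the real user.
object ProxyUserSketch {
  def runAsProxy[T](proxyUser: String)(body: => T): T = {
    val realUser = UserGroupInformation.getCurrentUser
    val proxyUgi = UserGroupInformation.createProxyUser(proxyUser, realUser)
    proxyUgi.doAs(new PrivilegedExceptionAction[T] {
      override def run(): T = body
    })
  }
}
```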
And yes, if you use impersonation in YARN client mode to run untrusted
code, you are potentially exposing your Kerberos credentials to that malicious
code. I've said several times in my comments that proxy users and client mode
should not be mixed unless you really know what you're doing.
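To make that exposure concrete: in client mode the driver JVM runs on the submitting host, right next to the real user's Kerberos ticket cache. A rough sketch of what untrusted code loaded into that JVM could do (the env-var and path handling here are assumptions for illustration, not anything Spark does):

```scala
import java.nio.file.{Files, Paths}

// Sketch of the exposure: the real user's TGT usually lives in a local ticket
// cache on the submitting host, so any code running in the driver JVM there
// can simply read the file, impersonated UGI or not.
object TicketCacheExposureSketch {
  def ticketCacheBytes(): Option[Array[Byte]] = {
    // KRB5CCNAME conventionally looks like "FILE:/tmp/krb5cc_<uid>".
    sys.env.get("KRB5CCNAME")
      .map(_.stripPrefix("FILE:"))
      .map(s => Paths.get(s))
      .filter(p => Files.isReadable(p))
      .map(p => Files.readAllBytes(p))
  }
}
```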
My problem here is that you're making spark-submit + proxy user + client
mode the *official* way to run Spark on Mesos in cluster mode, and now you're
basically exposing everyone to that security issue.
---