Github user skonto commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20945#discussion_r178990019
  
    --- Diff: 
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
 ---
    @@ -506,6 +506,10 @@ private[spark] class MesosClusterScheduler(
           options ++= Seq("--class", desc.command.mainClass)
         }
     
    +    desc.conf.getOption("spark.mesos.proxyUser").foreach { v =>
    +      options ++= Seq("--proxy-user", v)
    --- End diff ---
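    The forwarding logic in the diff above can be sketched in isolation. This is a
    minimal, self-contained approximation, not the actual MesosClusterScheduler code:
    the function name `buildDriverOptions` and the plain `Map` standing in for the
    driver description's conf are assumptions; only the `spark.mesos.proxyUser` key
    and the `--proxy-user` flag come from the diff.

    ```scala
    import scala.collection.mutable.ArrayBuffer

    // Hypothetical sketch of the option-building step shown in the diff.
    // `conf` stands in for desc.conf; `mainClass` for desc.command.mainClass.
    def buildDriverOptions(conf: Map[String, String], mainClass: Option[String]): Seq[String] = {
      val options = ArrayBuffer[String]()
      mainClass.foreach { c => options ++= Seq("--class", c) }
      // Forward a configured proxy user to spark-submit, as in the diff.
      conf.get("spark.mesos.proxyUser").foreach { v =>
        options ++= Seq("--proxy-user", v)
      }
      options.toSeq
    }
    ```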
    
    > I'm still confused about how submission works on Mesos in cluster mode.
    > You mention a DC/OS CLI. Does that mean you're not using spark-submit?
    
    For DC/OS there is a Spark CLI, and spark-submit is not used directly. People
    do use other ways to submit jobs, though.
    
    > The point I'm trying to make is that using --proxy-user in client mode in
    > this context is a security issue. And I'm really uncomfortable with adding
    > code in Spark that is basically a big security hole. You're basically giving
    > up the idea of multiple users here, since by doing that any user can
    > impersonate anyone else.
    
    If I use client mode, am logged in as user X, and have a TGT locally, then why
    is it a security hole?
    When the Spark main runs as a different user (the proxy user), that user
    shouldn't have access to user X's stuff. Isn't that true? I know DTs are safer
    (even if I steal them, I cannot renew them), but if there is user isolation
    within the container, is there still a hole?
    In YARN, does the launcher in client mode still use the same approach of
    uploading DTs?
    In YARN client mode the driver's main runs as the impersonated user, right?
    Then why can't that user access the TGT there?
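
    The security concern in the quoted comment can be illustrated with a small
    model. This is a hypothetical sketch, not Spark or Hadoop code: the function
    name `mayImpersonate` and the allowlist parameter are illustrative stand-ins
    for Hadoop-style server-side proxy-user authorization. The point it shows is
    that impersonation is only safe when the *service* checks the authenticated
    caller against a configured allowlist; if a client-side flag like
    --proxy-user is honored without such a check, any local user can impersonate
    anyone.

    ```scala
    // Hypothetical model of server-side proxy-user authorization.
    // A caller may act as `requestedProxy` only if it is that user already,
    // or if the authenticated identity is on the configured allowlist.
    def mayImpersonate(authenticatedUser: String,
                       requestedProxy: String,
                       allowedProxiers: Set[String]): Boolean =
      authenticatedUser == requestedProxy || allowedProxiers.contains(authenticatedUser)
    ```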



---
