Github user jerryshao commented on the issue:

    https://github.com/apache/spark/pull/16788
  
    >Trying to put it differently: if Spark had its own, secure method for 
distributing the initial set of delegation tokens needed by the executors (+ AM 
in case of YARN), then the YARN backend wouldn't need to use 
amContainer.setTokens at all. What I'm suggesting here is that this method be 
the base of the Mesos / Kerberos integration; and later we could change YARN to 
also use it.
    
    >This particular code is pretty self-contained and is the base of what you 
need here to bootstrap things. Moving it to "core" wouldn't be that hard, I 
think. The main thing would be to work on how the initial set of tokens is sent 
to executors, since that is the only thing YARN does for Spark right now.
    
    Agreed, I've been thinking about this too. The main reason that currently 
only Spark on YARN supports DTs (delegation tokens) is that YARN helps 
propagate the DTs during bootstrap. If Spark had a common solution for this, 
it could support accessing kerberized services under different cluster 
managers. One simple approach I prototyped before is to pass the serialized 
credentials as an executor launch command argument; when the executor 
launches, it deserializes the credentials and sets them on the UGI.
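    A minimal sketch of the idea above: the driver serializes the credentials 
and Base64-encodes them so they can ride along as a launch-command argument; 
the executor decodes them at startup. This is a self-contained illustration, 
not Spark's actual code — in a real implementation the payload would be a 
Hadoop `Credentials` object written with `writeTokenStorageToStream` and 
merged into the executor's UGI via 
`UserGroupInformation.getCurrentUser().addCredentials(...)`; here a plain 
byte payload stands in for it.

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import java.util.Base64;

    public class CredentialPassingSketch {
        // Driver side: encode the serialized credentials so they are safe to
        // append to the executor launch command as a single argument.
        static String encodeCredentials(byte[] serialized) {
            return Base64.getEncoder().encodeToString(serialized);
        }

        // Executor side: decode the launch argument back into the credential
        // bytes (which would then be deserialized and added to the UGI).
        static byte[] decodeCredentials(String arg) {
            return Base64.getDecoder().decode(arg);
        }

        public static void main(String[] args) {
            // Stand-in for serialized Hadoop Credentials.
            byte[] fakeTokens = "hdfs-delegation-token-bytes".getBytes(StandardCharsets.UTF_8);
            String launchArg = encodeCredentials(fakeTokens); // passed on the command line
            byte[] restored = decodeCredentials(launchArg);   // done in the executor's main()
            System.out.println(Arrays.equals(restored, fakeTokens) ? "round trip ok" : "mismatch");
        }
    }
    ```

    One caveat with this approach is that command-line arguments can be 
visible to other local users (e.g. via `ps`), so the real mechanism would 
need a more secure channel for the token bytes.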

