Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/607#issuecomment-42087138
  
    What is your concern with moving it up into the ExecutorBackend?
    
    I'd be ok with using the cache and closeAll (if the config is off) if the 
consensus is that moving it up is too risky for 1.0.  I looked more at the cache 
concern mentioned in HDFS-3545 and I believe the concern there was with Hive 
caching it across jobs.  In the Spark case, we have a single set of 
tokens/credentials per backend that won't be replaced.  On YARN the RM deals 
with renewing the tokens.  They eventually expire after a week (or so) and you 
can't currently run anything longer than that.  I assume that anyone using 
tokens on Mesos/standalone has their own way to refresh them, doesn't run for 
more than 24 hours, or has the config set much longer than the default.
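
    For illustration, here's a minimal Scala sketch of the cache-based 
approach (the `open`/`stop` names are hypothetical, not from this PR): 
FileSystem lookups go through Hadoop's built-in cache, and a shutdown hook 
closes everything cached for the backend's UGI.

    ```scala
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.hadoop.security.UserGroupInformation

    object CachedFsSketch {
      private val conf = new Configuration()

      // Goes through Hadoop's internal FileSystem cache (unless
      // fs.<scheme>.impl.disable.cache is set), so repeated calls under the
      // same conf and UGI return the same instance.
      def open(path: Path): FileSystem = path.getFileSystem(conf)

      // Hypothetical shutdown hook: close every cached FileSystem created
      // under this backend's UGI, since the tokens are never replaced.
      def stop(): Unit =
        FileSystem.closeAllForUGI(UserGroupInformation.getCurrentUser)
    }
    ```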
    
    I do still believe the better thing to do is move it up, though.  We 
wouldn't have to maintain our own cached version since the FileSystem cache does 
it for you, which I see as less maintenance; it's less prone to someone adding 
code in a place that should be within doAs; and in general it seems like it 
would be more secure and fit into a normal authentication protocol better.
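
    To make that concrete, a rough sketch (hypothetical `runAsUser` helper, 
not the actual change here) of wrapping the backend's startup in a single 
doAs, so anything the executor runs, including code added later, inherits the 
authenticated context:

    ```scala
    import java.security.PrivilegedExceptionAction
    import org.apache.hadoop.security.{Credentials, UserGroupInformation}

    object DoAsSketch {
      // Run the whole backend body inside doAs so every task and any future
      // code path executes as the application user.
      def runAsUser(appUser: String, creds: Credentials)(body: => Unit): Unit = {
        val ugi = UserGroupInformation.createRemoteUser(appUser)
        ugi.addCredentials(creds)  // attach the delegation tokens once
        ugi.doAs(new PrivilegedExceptionAction[Unit] {
          override def run(): Unit = body
        })
      }
    }

    // e.g. runAsUser(appUser, credentials) { backend.run() }
    ```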

