[ https://issues.apache.org/jira/browse/HBASE-15570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220402#comment-15220402 ]

Gary Helmling commented on HBASE-15570:
---------------------------------------

bq. I'd strongly prefer to avoid changing the way the HBase server side handles 
tokens, so it'd be new tokens most likely.

+1 to that.

I'm really not familiar with the Spark code. So they push a keytab into HDFS 
for the AM? That seems iffy, but it should have the same effect as the Twill 
approach. Does the Spark AM then push updated tokens out to the running 
containers?

FWIW, the Twill token update happens here in updateSecureStores / 
updateCredentials: 
https://git-wip-us.apache.org/repos/asf?p=incubator-twill.git;a=blob_plain;f=twill-yarn/src/main/java/org/apache/twill/yarn/YarnTwillRunnerService.java;hb=HEAD
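The pattern in both the Twill path above and a Spark AM-side renewer is the same: obtain a fresh credential well before the current one expires, then redistribute it to running containers. A minimal sketch of that scheduling loop, where `TokenRenewer`, the `obtainToken` supplier, and the `distribute` consumer are hypothetical stand-ins (not real HBase, Spark, or Twill APIs):

```java
import java.util.concurrent.*;
import java.util.function.*;

/**
 * Illustrative sketch only: re-obtains a credential on a fixed schedule and
 * hands it to running workers. In a real AM the supplier would fetch a new
 * HBase delegation token (using the keytab-based login) and the consumer
 * would push it out to the executors.
 */
public class TokenRenewer<T> implements AutoCloseable {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public TokenRenewer(Supplier<T> obtainToken,   // e.g. fetch a fresh delegation token
                        Consumer<T> distribute,    // e.g. ship it to the containers
                        long tokenLifetimeMs) {
        // Renew well before expiry; 75% of the lifetime is a common safety margin.
        long periodMs = (long) (tokenLifetimeMs * 0.75);
        scheduler.scheduleAtFixedRate(
                () -> distribute.accept(obtainToken.get()),
                periodMs, periodMs, TimeUnit.MILLISECONDS);
    }

    @Override
    public void close() {
        scheduler.shutdownNow();
    }
}
```

With the default 7-day max token lifetime from the issue description, this would refresh roughly every 5.25 days, so a streaming job never runs on a token in its last quarter of life.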

> renewable delegation tokens for long-lived spark applications
> -------------------------------------------------------------
>
>                 Key: HBASE-15570
>                 URL: https://issues.apache.org/jira/browse/HBASE-15570
>             Project: HBase
>          Issue Type: Improvement
>          Components: spark
>            Reporter: Sean Busbey
>            Assignee: Sean Busbey
>
> Right now our Spark integration works on secure clusters by getting 
> delegation tokens and sending them to the executors. Unfortunately, 
> applications that need to run for longer than the delegation token lifetime 
> (by default 7 days) will fail.
> In particular, this is an issue for Spark Streaming applications. Since they 
> expect to run indefinitely, we should have a means for renewing the 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
