[
https://issues.apache.org/jira/browse/HBASE-15570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220071#comment-15220071
]
Sean Busbey commented on HBASE-15570:
-------------------------------------
I have been under the assumption that our plan wrt Spark was that we'd support
folks using our dedicated module rather than leveraging the MR IO Formats,
since those were designed with batch jobs in mind. Unless the solution here
happens to cover the MR IO Formats as well, I guess we should have a dev@
discussion about that? Folks using the MR formats can always use keytab-based
logins on the executors and handle renewal themselves, along the lines of the
sketch below ([ref|https://github.com/saintstack/hbase-downstreamer/pull/6]).
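A minimal sketch of that keytab approach (illustrative names only, not code
from the linked PR): each executor logs in from a keytab shipped with the job,
e.g. via {{--files}}, so its HBase connections authenticate with Kerberos
directly and never depend on an expiring delegation token.
{code:scala}
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory}
import org.apache.hadoop.security.UserGroupInformation

object ExecutorHBase {
  // Illustrative placeholders; a real job would read these from its config.
  private val principal  = "appuser@EXAMPLE.COM"
  private val keytabPath = "app.keytab" // lands in the executor working dir

  // Initialized at most once per executor JVM.
  lazy val connection: Connection = {
    // Kerberos login from the local keytab instead of using a shipped token.
    UserGroupInformation.loginUserFromKeytab(principal, keytabPath)
    ConnectionFactory.createConnection(HBaseConfiguration.create())
  }

  // Long-running tasks (e.g. each streaming batch) can call this before
  // talking to HBase; it re-logs in from the keytab if the TGT is near expiry.
  def ensureLoggedIn(): Unit =
    UserGroupInformation.getLoginUser.checkTGTAndReloginFromKeytab()
}
{code}
Tasks would then grab {{ExecutorHBase.connection}} inside
{{foreachPartition}}/{{mapPartitions}} so the login and the connection are
reused for the lifetime of the executor.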
I was planning to fix this in the hbase-spark module, so I don't think anything
I do here will help Flink folks. Do we need to look into having an hbase-flink
module as well? Is there sufficient community interest to support it?
> renewable delegation tokens for long-lived spark applications
> -------------------------------------------------------------
>
> Key: HBASE-15570
> URL: https://issues.apache.org/jira/browse/HBASE-15570
> Project: HBase
> Issue Type: Improvement
> Components: spark
> Reporter: Sean Busbey
> Assignee: Sean Busbey
>
> Right now our Spark integration works on secure clusters by getting
> delegation tokens on the driver and sending them to the executors.
> Unfortunately, applications that need to run for longer than the delegation
> token's maximum lifetime (7 days by default) will fail.
> In particular, this is an issue for Spark Streaming applications: since they
> are expected to run indefinitely, we need a means of renewing the delegation
> tokens.
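For context, a rough sketch of the current flow the description refers to
(assumed shape, not the hbase-spark internals verbatim): the driver, which
holds Kerberos credentials, asks HBase for a delegation token and attaches it
to its Hadoop credentials, which Spark ships to the executors. Once
{{hbase.auth.token.max.lifetime}} (7 days by default) elapses, that token
stops working and a streaming job dies.
{code:scala}
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.security.token.TokenUtil
import org.apache.hadoop.security.UserGroupInformation

object DriverSideTokens {
  // Run on the driver, before work is handed to the executors.
  def obtainHBaseToken(): Unit = {
    val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
    try {
      // Ask HBase for a delegation token on behalf of the
      // Kerberos-authenticated driver user.
      val token = TokenUtil.obtainToken(conn)
      // Attach it to the driver's credentials; Spark serializes these out to
      // the executors, where the HBase client picks the token up.
      UserGroupInformation.getCurrentUser.addToken(token)
    } finally {
      conn.close()
    }
  }
}
{code}
A renewal mechanism, as requested here, would presumably mean re-running
something like the above periodically on the driver and redistributing the
fresh credentials before the old token reaches its maximum lifetime.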