GitHub user harishreedharan commented on the pull request:

    https://github.com/apache/spark/pull/4688#issuecomment-96856362
  
    So I noticed that if `dfs.namenode.delegation.token.renew-interval` is 
different from the token's max lifetime, a lot of token-expired exceptions get 
thrown and the executors may not be able to read the new tokens. It looks like 
tokens don't get renewed unless HDFS is accessed before the renew interval 
elapses, so an executor that accesses HDFS rarely enough may end up unable to 
read from HDFS.
    
    So instead of waiting until 80% of the max lifetime, I wait until `0.75 * 
dfs.namenode.delegation.token.renew-interval` before renewing (a sketch of the 
idea is below). This means the local `hdfs-site.xml` must be in sync with the 
one on the NameNode (my understanding is that this parameter is rarely 
changed, so this is unlikely to be an issue in practice).
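
    As an illustration only, here is a minimal Scala sketch of that scheduling 
rule, not the actual patch in this PR; the `RenewalDelaySketch` object and the 
`renewalDelayMs` helper are hypothetical names, and the 24-hour default used 
below is the stock HDFS value for the renew interval:

```scala
import org.apache.hadoop.conf.Configuration

// Hypothetical sketch of the scheduling idea above, not the code in this PR.
object RenewalDelaySketch {
  // Stock HDFS default for dfs.namenode.delegation.token.renew-interval:
  // 24 hours, in milliseconds.
  private val DefaultRenewIntervalMs = 24L * 60 * 60 * 1000

  // Wait until 75% of the renew interval has elapsed before renewing,
  // instead of 80% of the token's max lifetime. Assumes the local
  // hdfs-site.xml on the classpath carries the same renew-interval value
  // as the one on the NameNode.
  def renewalDelayMs(conf: Configuration): Long = {
    val renewIntervalMs = conf.getLong(
      "dfs.namenode.delegation.token.renew-interval", DefaultRenewIntervalMs)
    (renewIntervalMs * 0.75).toLong
  }
}
```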

