GitHub user harishreedharan commented on the pull request:
https://github.com/apache/spark/pull/8942#issuecomment-149666851
So this is my theory (I don't have anything concrete to back it up). It is
based on the fact that if we don't set `fs.hdfs.impl.disable.cache=true`,
token updates seem to fail on the cached `FileSystem` instance (we had to add
that config in an earlier PR to get token updates working at all). If that
config had to be set for `FileSystem.get()` to return a usable instance, it
likely means (again, my theory) that a `FileSystem` object created with the
older tokens never learns about the new ones. I can't be sure of this, but it
would explain why even updating the tokens locally via the
`ExecutorDelegationTokenRenewer` does not fix the event log writes.
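To make the caching behavior I'm describing concrete, here is a minimal
sketch (my own illustration, not code from this PR) of how Hadoop's
`FileSystem` cache hands back the same instance until
`fs.hdfs.impl.disable.cache` is set. The `hdfs://namenode:8020` address is a
hypothetical placeholder:

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

object FsCacheDemo {
  def main(args: Array[String]): Unit = {
    val uri = new URI("hdfs://namenode:8020/")
    val conf = new Configuration()

    // With caching on (the default), repeated get() calls return the
    // same FileSystem object, keyed by (scheme, authority, UGI). Any
    // delegation tokens it picked up at creation time are baked in.
    val fs1 = FileSystem.get(uri, conf)
    val fs2 = FileSystem.get(uri, conf)
    assert(fs1 eq fs2) // same cached instance, old tokens and all

    // With the cache disabled, get() builds a fresh instance each
    // time, initialized against the caller's *current* credentials,
    // so newly acquired tokens actually get used.
    conf.setBoolean("fs.hdfs.impl.disable.cache", true)
    val fs3 = FileSystem.get(uri, conf)
    assert(!(fs3 eq fs1)) // new instance, sees the updated tokens
  }
}
```

If the theory is right, a long-lived writer holding a `FileSystem` created
before the renewer delivered fresh tokens would keep authenticating with the
old ones, which matches the failure we're seeing.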