Github user harishreedharan commented on the pull request:
https://github.com/apache/spark/pull/8942#issuecomment-149344858
Hmm, I think the real issue is that the event logging does not run inside a `doAs`.
I think in `yarn-cluster`, since the SparkContext is created in the AM, the
updated credentials are already in the cache of the user writing to the event
logs (we are already running as that user, so no `doAs` is needed).
In `yarn-client`, though, because we don't do a `doAs` - is it possible that
the new tokens are not being used to write to the event log?
@SaintBacchus Let me open a PR that does the `doAs` and combines it with your
previous one #8867 - can you test it and see if it works? Or you can do it
yourself - just add a `doAs` here:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L66,
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L143, and
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L48
(basically anywhere HDFS is accessed).
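To make the suggestion concrete, here is a minimal sketch of the pattern being proposed. The real change would use Hadoop's `org.apache.hadoop.security.UserGroupInformation.doAs` around the filesystem calls in `EventLoggingListener`; since Hadoop isn't assumed on the classpath here, `Ugi` and `writeEventLog` below are hypothetical stand-ins that only illustrate the shape of the wrapping:

```scala
import java.security.PrivilegedExceptionAction

// Hypothetical stand-in for Hadoop's UserGroupInformation, so this sketch is
// self-contained. The real fix would call UserGroupInformation.doAs (or go
// through SparkHadoopUtil) instead.
object Ugi {
  def doAs[T](action: PrivilegedExceptionAction[T]): T = action.run()
}

object EventLogDoAsSketch {
  // Hypothetical stand-in for the HDFS write done by EventLoggingListener.
  def writeEventLog(json: String): String = s"wrote: $json"

  def main(args: Array[String]): Unit = {
    // The suggestion above: wrap each HDFS access in doAs so the write runs
    // with the (possibly refreshed) delegation tokens of the job's user.
    val result = Ugi.doAs(new PrivilegedExceptionAction[String] {
      def run(): String = writeEventLog("""{"Event":"SparkListenerLogStart"}""")
    })
    println(result)
  }
}
```

The same wrapping would apply at each of the three call sites linked above (open, log, and close/stop of the event log stream), which is why "anywhere HDFS is accessed" is the operative rule rather than any single line.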