GitHub user victor-wong opened a pull request:

    https://github.com/apache/spark/pull/17937

    Reload credentials file config when app starts with checkpoint file i…

    ## What changes were proposed in this pull request?
    
    Currently the credentials file configuration is recovered from the checkpoint file 
when a Spark Streaming application is restarted, which leads to some 
unwanted behaviors, for example:
    
    1. Submit Spark Streaming application using keytab file with checkpoint 
enabled in yarn-cluster mode.
    
    > spark-submit --master yarn-cluster --principal xxxx --keytab xxx ...
    
    2. Stop Spark Streaming application;
    3. Resubmit this application after a period of time (i.e. one day);
    4. The credentials file configuration is recovered from the checkpoint file, so 
the value of "spark.yarn.credentials.file" points to the old staging directory (i.e. 
hdfs://xxxx/.sparkStaging/application_xxxx/credentials-xxxx, where application_xxxx 
is the application id of the previous application, which was stopped);
    5. When launching executors, ExecutorDelegationTokenUpdater immediately updates 
credentials from the credentials file. As the credentials file was 
generated one day ago (or even earlier), its tokens have already expired, so after a 
period of time the executors keep failing.
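    
    The fix this PR proposes can be illustrated with a minimal sketch (this is 
not Spark's actual implementation; the object, method, and paths below are 
hypothetical): when restoring configuration from a checkpoint, a small set of 
keys is reloaded from the newly submitted configuration instead of the 
checkpointed one, so "spark.yarn.credentials.file" never points at the stale 
staging directory.
    
    ```scala
    // Sketch only: models reloading selected keys from the current submission
    // when restoring a checkpointed configuration. Names and paths are
    // hypothetical, not Spark internals.
    object CredentialsReloadSketch {
      // Keys whose checkpointed values must be overridden on restart.
      val propertiesToReload: Set[String] = Set("spark.yarn.credentials.file")
    
      // Start from the checkpointed config, then let the current submission
      // win for every key in propertiesToReload that it actually defines.
      def restoreConf(checkpointed: Map[String, String],
                      current: Map[String, String]): Map[String, String] =
        checkpointed ++ propertiesToReload.flatMap { key =>
          current.get(key).map(key -> _)
        }
    
      def main(args: Array[String]): Unit = {
        val checkpointed = Map(
          "spark.app.name" -> "streaming-app",
          // stale path from the previous application's staging directory
          "spark.yarn.credentials.file" ->
            "hdfs://xxxx/.sparkStaging/application_old/credentials"
        )
        val current = Map(
          "spark.yarn.credentials.file" ->
            "hdfs://xxxx/.sparkStaging/application_new/credentials"
        )
        val restored = restoreConf(checkpointed, current)
        // The credentials file path now comes from the new submission,
        // while other checkpointed settings are preserved.
        println(restored("spark.yarn.credentials.file"))
      }
    }
    ```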
    
    Some useful logs are shown below:
    
    >2017-04-27,15:08:08,098 INFO 
org.apache.spark.executor.CoarseGrainedExecutorBackend: Will periodically 
update credentials from: hdfs://xxxx/application_xxxx/credentials-xxxx
    >2017-04-27,15:08:12,519 INFO 
org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater: Reading new 
delegation tokens from hdfs://xxxx/application_1xxxx/credentials-xxxx-xx
    >2017-04-27,15:08:12,661 INFO 
org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater: Tokens updated 
from credentials file.
    ...
    >2017-04-27,15:08:48,156 WARN org.apache.hadoop.ipc.Client: Exception 
encountered while connecting to the server : 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
 token (HDFS_DELEGATION_TOKEN token xxxx for xx) can't be found in cache
    
    
    
    ## How was this patch tested?
    
    Manual tests.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/victor-wong/spark fix-credential-file

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17937.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #17937
    
----
commit fac97c69b8087fda62b776384539301df0230ae2
Author: jiasheng.wang <[email protected]>
Date:   2017-05-10T09:35:11Z

    Reload credentials file config when app starts with checkpoint file in 
cluster mode

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
