Github user vanzin commented on the issue:

    https://github.com/apache/spark/pull/18230
  
    The wording in this code always confuses me... I never know what "reload" means (`propertiesToReload`).
    
    Anyway, I think I understand why this is broken. It's because of this check in `ApplicationMaster`:
    
    ```scala
    if (sparkConf.contains(CREDENTIALS_FILE_PATH.key)) {
      // If a principal and keytab have been set, use that to create new
      // credentials for executors periodically
      credentialRenewer =
        new ConfigurableCredentialManager(sparkConf, yarnConf).credentialRenewer()
      credentialRenewer.scheduleLoginFromKeytab()
    }
    ```
    
    So if you restart the streaming application without providing a principal / keytab, `Client.scala` will not overwrite the credential file path, but the AM will still start the credential updater, because the file location is present in the configuration read from the checkpoint.
    
    So the workaround is to make sure the restarted application is also given the principal / keytab arguments.
    
    As for the fix, it seems that just setting the credential file path does solve the problem, since the AM only looks at that key to decide whether to start the credential updater thread. But a better fix might be to make the AM check the correct config (e.g. `PRINCIPAL` instead of `CREDENTIALS_FILE_PATH`) when deciding whether to start the credential updater.
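    To illustrate the difference between the two checks, here is a minimal self-contained sketch (not the actual Spark source): a plain `Map` stands in for `SparkConf`, and the key names mirror Spark's `spark.yarn.*` settings. `startsRenewerToday` models the current `CREDENTIALS_FILE_PATH`-based check, `startsRenewerProposed` models the suggested `PRINCIPAL`-based one; both function names are hypothetical.

    ```scala
    object CredentialRenewerCheck {
      val CREDENTIALS_FILE_PATH = "spark.yarn.credentials.file"
      val PRINCIPAL = "spark.yarn.principal"
      val KEYTAB = "spark.yarn.keytab"

      // Current behavior: a stale credential file path restored from a
      // checkpoint is enough to start the renewer, even with no keytab.
      def startsRenewerToday(conf: Map[String, String]): Boolean =
        conf.contains(CREDENTIALS_FILE_PATH)

      // Proposed behavior: only start the renewer when a principal and
      // keytab were actually supplied for this submission.
      def startsRenewerProposed(conf: Map[String, String]): Boolean =
        conf.contains(PRINCIPAL) && conf.contains(KEYTAB)

      def main(args: Array[String]): Unit = {
        // Config restored from a checkpoint of a kerberized run, restarted
        // without --principal / --keytab:
        val restored = Map(CREDENTIALS_FILE_PATH -> "hdfs:/user/foo/credentials")
        println(startsRenewerToday(restored))    // true: renewer starts, then fails
        println(startsRenewerProposed(restored)) // false: renewer correctly skipped
      }
    }
    ```

    With the proposed check, the checkpointed file location alone no longer triggers the renewer; the restarted app only gets one if it actually passed a principal and keytab.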

