Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21734#discussion_r201258595
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala
---
@@ -193,8 +193,7 @@ object YarnSparkHadoopUtil {
sparkConf: SparkConf,
hadoopConf: Configuration): Set[FileSystem] = {
val filesystemsToAccess = sparkConf.get(FILESYSTEMS_TO_ACCESS)
- .map(new Path(_).getFileSystem(hadoopConf))
- .toSet
+ val isRequestAllDelegationTokens = filesystemsToAccess.isEmpty
--- End diff ---
this would mean that if your running application accesses multiple
namespaces and you want to connect to a new one, just adding the new
namespace to the config can break the application, since we would no longer
be getting the tokens for the other namespaces.
I'd rather follow @jerryshao's comment about avoiding a crash when the
renewal fails; that seems to fix your problem without hurting other use
cases.
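
To make the concern concrete, here is a minimal sketch (hypothetical helper and names, not the actual Spark code) of the behavior being discussed: tokens are obtained only for the filesystems explicitly listed in the config, so a namespace dropped from the list silently loses its token even though the running app still needs it.

```scala
object TokenScopeSketch {
  // Hypothetical stand-in for the token-fetching logic under discussion:
  // with an empty config we fall back to the default FS, otherwise we
  // fetch tokens ONLY for the filesystems explicitly listed.
  def filesystemsToGetTokens(configured: Seq[String], defaultFs: String): Set[String] =
    if (configured.isEmpty) Set(defaultFs)
    else configured.toSet

  def main(args: Array[String]): Unit = {
    // The app accesses hdfs://nsA; the user then wants to add hdfs://nsB
    // and lists only the new namespace in the config.
    val before = filesystemsToGetTokens(Seq("hdfs://nsA"), "hdfs://default")
    val after  = filesystemsToGetTokens(Seq("hdfs://nsB"), "hdfs://default")

    assert(before.contains("hdfs://nsA"))
    // nsA's token is no longer fetched, so the running app can break.
    assert(!after.contains("hdfs://nsA"))
  }
}
```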
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]