tgravescs commented on pull request #33135:
URL: https://github.com/apache/spark/pull/33135#issuecomment-875650164


   > We have a custom implementation of setting up tokens for Yarn 
applications. This involves invoking some native libraries to generate 
different kinds of tokens and these needs to be set in AM Container launch 
context. 
   
   Can you just implement your own HadoopDelegationTokenProvider to get tokens 
for you?  We use the service loader to find any user-defined ones 
(https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/security/HadoopDelegationTokenManager.scala#L260)
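
   To make the suggestion concrete, here is a rough sketch of what such a 
provider could look like. The trait and method signatures are Spark's 
`org.apache.spark.security.HadoopDelegationTokenProvider` developer API; the 
package, class name, and the native-library call are hypothetical placeholders 
for your custom logic.

   ```scala
   package com.example.security

   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.security.Credentials
   import org.apache.spark.SparkConf
   import org.apache.spark.security.HadoopDelegationTokenProvider

   // Hypothetical custom provider; Spark discovers it via the JDK
   // ServiceLoader mechanism at startup.
   class CustomNativeTokenProvider extends HadoopDelegationTokenProvider {

     // Unique service name; can also be toggled via
     // spark.security.credentials.<serviceName>.enabled
     override def serviceName: String = "customNative"

     override def delegationTokensRequired(
         sparkConf: SparkConf,
         hadoopConf: Configuration): Boolean = true

     override def obtainDelegationTokens(
         hadoopConf: Configuration,
         sparkConf: SparkConf,
         creds: Credentials): Option[Long] = {
       // Invoke your native library here and add the resulting tokens
       // to `creds` (e.g. creds.addToken(alias, token)). Spark merges
       // these credentials into the AM container launch context for you.
       // Return the next renewal time in ms, or None if not renewable.
       None
     }
   }
   ```

   For the service loader to find it, ship a 
`META-INF/services/org.apache.spark.security.HadoopDelegationTokenProvider` 
file in your jar containing the fully qualified class name.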
   
   > Yarn configuration for running on multi-tenant Federated Yarn cluster.
   
   You can point Spark to any set of Hadoop/YARN configuration files, or pass 
them in; is there something special about these?  Otherwise I would have 
expected your normal HADOOP_CONF_DIR to point to the YARN-specific configs.
   
   Just for reference, since we're talking about different cluster managers: 
not sure what is going to happen to this PR, but I'll mention it here -> 
https://github.com/apache/spark/pull/31896


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
