ScrapCodes commented on pull request #33257:
URL: https://github.com/apache/spark/pull/33257#issuecomment-882289114


   Hi @cutiechi,
   
   Thank you for the PR!
   
   There is already a way to mount arbitrary Hadoop configuration on executors, via 
[Spark conf propagation](https://github.com/apache/spark/pull/27735) implemented 
in [SPARK-30985]: place all Hadoop configuration files in the SPARK_HOME/conf 
dir and they will be loaded on both the driver and the executors. Internally this 
works by creating a ConfigMap, one each for the driver and the executors. At the 
moment these ConfigMaps are not fully user-configurable.
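   The propagation mechanism can be sketched as follows (a minimal illustration, not the exact setup from this PR; the API server address, image name, and jar path are placeholders you would substitute for your cluster):

   ```shell
   # Drop Hadoop config files into SPARK_HOME/conf so the SPARK-30985
   # mechanism ships them to the driver and executors automatically.
   cp core-site.xml hdfs-site.xml "$SPARK_HOME/conf/"

   # Submit as usual; no extra mount configuration is needed.
   "$SPARK_HOME/bin/spark-submit" \
     --master k8s://https://<k8s-apiserver>:6443 \
     --deploy-mode cluster \
     --name hadoop-conf-demo \
     --conf spark.kubernetes.container.image=<spark-image> \
     local:///opt/spark/examples/jars/spark-examples.jar

   # The files travel in an auto-generated ConfigMap; you can inspect it
   # (the exact name pattern varies by Spark version):
   kubectl get configmap -l spark-app-selector
   ```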
   
   IMO, if we make these ConfigMaps user-configurable, that solution would 
apply to all frameworks, not just Hadoop. [SPARK-32223]
   
   In the meantime, we can have this. But we would need a k8s integration test.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


