GitHub user vanzin commented on the issue:
https://github.com/apache/spark/pull/22289
> I failed to find documentation about the RM adding its own Hadoop config
> files to the AM/executors' classpath
See `Client.getYarnAppClasspath` and `Client.getMRAppClasspath`.
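For reference, roughly what that lookup amounts to (a minimal sketch using Hadoop's `YarnConfiguration` constants, not the exact Spark code):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.yarn.conf.YarnConfiguration

// Sketch of the classpath lookup: read the yarn.application.classpath
// entries published by the cluster, falling back to YARN's compiled-in
// defaults when the property is unset.
def yarnAppClasspath(conf: Configuration): Seq[String] =
  Option(conf.getStrings(YarnConfiguration.YARN_APPLICATION_CLASSPATH))
    .map(_.toSeq)
    .getOrElse(YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH.toSeq)
```

`getMRAppClasspath` does the analogous lookup for the MapReduce classpath property.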
> however it seems like ApplicationMaster is actually not doing that,
> because it doesn't use the newConfiguration instance method
That may have been intentional. The AM-specific code (which is in the
`ApplicationMaster` class) should use the configuration provided by the YARN
cluster, since its purpose is to talk to YARN and that's it. User-specific
changes can actually break things (e.g. point the AM at a different HDFS than
the one where files were staged).
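To illustrate (a sketch, not the actual `ApplicationMaster` code): on the AM side, a plain cluster config with no user overlay is enough.

```scala
import org.apache.hadoop.yarn.conf.YarnConfiguration

// Sketch: a plain YarnConfiguration picks up the cluster's *-site.xml
// files from the classpath, with no user spark.hadoop.* overrides.
// That is all the AM needs to talk to YARN and to the HDFS instance
// where the application's files were staged.
val amYarnConf = new YarnConfiguration()
```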
The driver code, which also runs inside the AM process, overlays the user
config onto the `Configuration` used by the `SparkContext`.
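Conceptually, the overlay looks like this (a sketch assuming the standard `spark.hadoop.*` prefix convention; the real logic lives behind the `newConfiguration` path mentioned above):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf

// Sketch of the overlay: copy the user's spark.hadoop.* settings onto a
// Configuration that already loaded the cluster's *-site.xml files from
// the classpath. The driver-side Configuration ends up with both.
def overlayUserConfig(sparkConf: SparkConf): Configuration = {
  val hadoopConf = new Configuration()
  sparkConf.getAll.foreach { case (key, value) =>
    if (key.startsWith("spark.hadoop.")) {
      hadoopConf.set(key.stripPrefix("spark.hadoop."), value)
    }
  }
  hadoopConf
}
```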