Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/11885#issuecomment-200640251
Thanks a lot for your explanation.
I'm not sure I understand correctly: currently we add
`<spark_home>/etc/hadoop` to the classpath by default for the AM and executors.
If we now also add `__spark_conf__` to the executors' classpath, there will be
another copy of the hadoop conf, and we create a `Configuration()` at executor start,
which picks up some specific settings such as s3 credentials and `spark.hadoop.xxx`.
If the two copies, one in the cluster's hadoop home and one shipped from the client,
differ, I'm not sure whether there would be any side effects.
It's just a concern on my side; we haven't actually hit such an issue yet.
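For context, the worry is roughly the following. A rough sketch (illustrative names, not the exact Spark implementation) of how `spark.hadoop.xxx` keys typically get folded into a freshly created Hadoop `Configuration` at executor start:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf

// Hypothetical helper: copy spark.hadoop.* entries from the SparkConf onto a
// new Hadoop Configuration. `new Configuration()` first loads whichever
// core-site.xml / hdfs-site.xml it finds on the classpath, so with two hadoop
// conf copies on the classpath the result depends on classpath ordering.
def newHadoopConf(sparkConf: SparkConf): Configuration = {
  val hadoopConf = new Configuration()
  sparkConf.getAll.foreach { case (key, value) =>
    if (key.startsWith("spark.hadoop.")) {
      hadoopConf.set(key.substring("spark.hadoop.".length), value)
    }
  }
  hadoopConf
}
```

So if the cluster's hadoop conf dir and the `__spark_conf__` copy sent from the client disagree, the values the executor ends up with could differ from what the client saw.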