Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4292#issuecomment-72605117
I found some [previous discussion](https://issues.apache.org/jira/browse/SPARK-2546?focusedCommentId=14160842&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14160842) of this issue.
I'd say that allowing users to mutate `sc.hadoopConfiguration` after it has already been used to define RDDs isn't something we can or should realistically hope to support: there are just too many ways it could break (e.g. defensive copying, serialization, etc.), and it runs counter to user expectations around other kinds of Spark configuration (e.g. modifications to SparkConf after creating a SparkContext do not take effect).
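For illustration, here is a minimal Scala sketch of the ordering this implies (the config keys, paths, and object name are placeholders, not anything from this PR): Hadoop properties need to be set on `sc.hadoopConfiguration` before the RDDs that read through it are defined, just as SparkConf values must be set before the SparkContext is created.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: configuration should be fixed before it is consumed.
object ConfigOrderingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("config-ordering-sketch")
      .setMaster("local[*]")
      // Takes effect: set before the SparkContext exists.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

    val sc = new SparkContext(conf)

    // Changing `conf` here would NOT affect the already-created context.

    // Set Hadoop properties BEFORE defining the RDDs that read through them
    // (the property key and path below are just illustrative placeholders).
    sc.hadoopConfiguration.set("fs.s3a.connection.maximum", "100")
    val rdd = sc.textFile("hdfs:///data/input.txt")

    // Mutating the configuration at this point may or may not be visible to
    // `rdd`, since the conf may already have been copied or serialized for
    // the job -- which is exactly why relying on it isn't supported.
    sc.hadoopConfiguration.set("fs.s3a.connection.maximum", "200")

    println(rdd.count())
    sc.stop()
  }
}
```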