Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6848#issuecomment-211739456
IMO, this is useful in that the Hadoop configuration would no longer need to be
global state. We could keep a default set of configuration that is used
everywhere, and then in every Hadoop-related method the user would have the
option to override that default.
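A minimal sketch of the pattern described above, with a placeholder `Config` class standing in for `org.apache.hadoop.conf.Configuration` (the names `HadoopUtil`, `defaultConf`, and `readPath` are hypothetical, not actual Spark API):

```scala
// Placeholder for org.apache.hadoop.conf.Configuration.
case class Config(entries: Map[String, String]) {
  def get(key: String): Option[String] = entries.get(key)
}

object HadoopUtil {
  // A shared default configuration instead of mutable global state.
  val defaultConf: Config =
    Config(Map("fs.defaultFS" -> "hdfs://namenode:8020"))

  // Each Hadoop-related method accepts an optional configuration;
  // callers who pass nothing get the default.
  def readPath(path: String, conf: Config = HadoopUtil.defaultConf): String =
    s"reading $path with fs=${conf.get("fs.defaultFS").getOrElse("unset")}"
}

// Default applies when no configuration is passed:
//   HadoopUtil.readPath("/data")
// Caller overrides the default:
//   HadoopUtil.readPath("/data", Config(Map("fs.defaultFS" -> "file:///")))
```

Note that in Scala, adding a default parameter like this is exactly the kind of change that breaks binary compatibility while remaining source-compatible: existing code compiles unchanged against the new signature, but previously compiled bytecode does not link against it.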
Binary compatibility will definitely be broken, but source compatibility
might not be affected, i.e. one would only need to recompile the project
against the newer Spark version. As was already asked, should this be okay for 2.0?
@andrewor14 ping !