Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3917#issuecomment-74035028
The purpose for this is to make IntelliJ import easier. But can't you just enable
the Hadoop profiles when importing and get the same effect? Is it different?
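(For reference, a rough sketch of what I mean, assuming the hadoop-2.4 profile
name from the build at the time - in IntelliJ you'd tick the same profile in the
Maven Projects tool window, and on the command line it's just:

    mvn -Phadoop-2.4 -DskipTests clean package

)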
I would have concerns with this (as I said on a few other occasions too) on
the basis that it substantially changes the dependency graph for downstream
projects that link against our published POMs. There are embedded applications
that use Spark and don't really care about being on newer Hadoop APIs - for
instance, applications that primarily interact with S3, or that use their own
storage. For them this is a major build change, since Hadoop itself has a lot
of transitive dependencies.
I guess the path for such applications would be to exclude Spark's Hadoop
dependency and add their own Hadoop dependency at the older version?
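Something like this in a downstream POM, I'd imagine (a sketch only - the
spark-core artifact and both versions shown are illustrative):

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.2.1</version>
      <exclusions>
        <!-- drop the Hadoop client Spark pulls in by default -->
        <exclusion>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-client</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <!-- re-add Hadoop at the version the application actually needs -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>1.2.1</version>
    </dependency>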