Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5354#issuecomment-92532627
That ship may have sailed, for better or worse. Yes, we have to be careful
about bringing things into core, so I'm glad to see the exclusions, but I think
there are simpler and less brittle ways.
In general, if two components need to agree on a particular version of a
dependency, that is resolved in `dependencyManagement` in the parent.
`exclusions` is not a great way to do it; as soon as the other component stops
using the X that Tachyon needs, there are zero copies of it on the classpath. So
that's still the question -- if there are conflicts, what are they, and why not
manage the dependency directly?
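For illustration, pinning a shared dependency version in the parent POM might look like the sketch below (the coordinates are hypothetical, not taken from this PR):

```xml
<!-- In the parent POM: dependencyManagement pins the version for every
     module that declares this dependency, without forcing it on modules
     that don't. Coordinates here are placeholders. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>shared-lib</artifactId>
      <version>1.2.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Child modules then declare the dependency without a version, and both components resolve to the same one.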
The `hadoop-core` stuff is maybe more cosmetic. If `tachyon-client` works
fine without it, then perhaps it and other dependencies from upstream are
merely redundant? At the least, I'd be sort of surprised if lots of these
non-client, server-side libraries are _directly_ used by the client. This seems
like it may also be trying to exclude a bunch of transitive dependencies of
something already excluded.
You could say that the exclusions really don't hurt much, or that if they do
we'll know quickly, but I think it's worth double-checking the above first, as
this exclusion declaration looks surprising. Goodness knows exclusions are
necessary in some cases with Hadoop stuff though -- usually to exclude the
*same* code under *different artifact names*, because upstream projects made
uber-jars or shaded. (Looking at you, Jetty)
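As a sketch of that last case (artifact coordinates are illustrative, not the actual ones from this PR), an exclusion that drops a duplicate copy of Jetty shipped under a different artifact name might look like:

```xml
<!-- Hypothetical example: the same Jetty classes can arrive under both
     org.mortbay.jetty:jetty and org.eclipse.jetty:jetty-server, so one
     copy is excluded to avoid duplicate classes on the classpath. -->
<dependency>
  <groupId>org.tachyonproject</groupId>
  <artifactId>tachyon-client</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>jetty</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

This is the one scenario `dependencyManagement` can't fix, since Maven can't tell the two artifacts carry the same code.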