Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/5294#issuecomment-88259688
So yes, I could use hadoop-provided and then package my own Hadoop, but you
end up with the same scenario as I describe. If I don't package Hadoop, then I
rely on the version on the cluster, and at any point they can deploy a new
Hadoop version that breaks Spark. Note we've had issues with Hadoop breaking
APIs before.
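For concreteness, here's a minimal sbt sketch of the two options (the hadoop-client coordinates and version are just placeholders, not taken from this PR):

```scala
// Option 1: rely on the cluster's Hadoop. The "provided" scope keeps
// hadoop-client out of the assembly jar, so whatever Hadoop version the
// cluster deploys is what Spark links against at runtime.
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0" % "provided"

// Option 2: bundle and pin your own Hadoop. Without "provided", hadoop-client
// is packaged into the assembly jar, isolating the job from cluster upgrades.
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0"
```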
This really shouldn't happen very often, but the question comes down to risk.
If I'm running a production pipeline that's revenue-bearing, do I want to
potentially lose $$$, or should I isolate things, package it all together, and
minimize my risk? I'm leaning towards the latter.