Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/6599#issuecomment-108149511
  
    Is this really the only thing that stops it from working this way with Hadoop 1? I had imagined it was a lot more than this, like transitive dependencies and all that. I can see the practical argument if that's the case, since it's probably infeasible to start publishing twice as many artifacts just for this.
    
    But surely it's not suddenly guaranteed that the artifacts work with both Hadoop 1 and 2 in this way? That's a stronger promise to undertake, and, as you say, I'm not sure that compatibility has been maintained, since, for example, the Hadoop dependency changed to 2.2. How did it ever continue to work for Hadoop 1 + embedded situations, then? Maybe it just didn't come up, which is why I wonder if this is all there is.
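
    For context on the "Hadoop 1 + embedded" case mentioned above: a downstream application typically depends on the published Spark artifact and swaps in its own hadoop-client at build time. Below is a minimal sbt sketch of that setup; the artifact coordinates and version numbers are illustrative assumptions, not taken from this thread.

        // Hypothetical downstream build (versions are assumptions, not from this thread):
        // reuse the published spark-core artifact, exclude the Hadoop client it was
        // built against, and supply the Hadoop 1 client the deployment actually uses.
        libraryDependencies ++= Seq(
          ("org.apache.spark" %% "spark-core" % "1.4.0")
            .exclude("org.apache.hadoop", "hadoop-client"),
          "org.apache.hadoop" % "hadoop-client" % "1.2.1"
        )

    Whether such a substitution actually works is exactly the compatibility question being raised here.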

