Github user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/5294#issuecomment-88271630
  
    So I was mostly interested in understanding the use case, since the bug 
report was a little short on details. Tom's explanation makes sense; the 
opposite case (a hadoopA built into the Spark assembly breaking when the app 
runs against the cluster's hadoopB) already has workarounds, since Spark gives 
the user control over the app's classpath in several ways.
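    For context, a minimal sketch of what that classpath control can look 
like; the app name is hypothetical, but `spark.driver.userClassPathFirst` and 
`spark.executor.userClassPathFirst` are real (experimental) Spark settings 
that let user-provided jars take precedence over the Hadoop classes bundled 
into the assembly:
    
    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    
    val conf = new SparkConf()
      .setAppName("classpath-precedence-sketch") // hypothetical app name
      // Prefer user jars over the assembly's classes; both default to false.
      // Note: the driver-side setting normally has to be passed on the
      // spark-submit command line (--conf), since the driver JVM is already
      // running by the time code-level SparkConf settings are read.
      .set("spark.driver.userClassPathFirst", "true")
      .set("spark.executor.userClassPathFirst", "true")
    
    val sc = new SparkContext(conf)
    ```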
    
    Given that, the patch looks good; it should probably remain an 
undocumented option, though.

