GitHub user pwendell commented on the pull request:

    https://github.com/apache/spark/pull/5786#issuecomment-101090681
  
    Hey @FavioVazquez and @srowen. I took a look at this. A few questions:
    
    1. Does this mean that #5027 was just wrong? I guess I don't see how things worked before this patch.
    2. It's actually a pain for users when the default build changes. Why not just keep a -Phadoop-2.2 profile in the instructions? I wonder if we should just always advise users to use a Hadoop profile when building (see the example command below). Otherwise, we'll have to go to people and get them to change things, just like we are doing here.
    3. Should we just merge this into master and then revert #5027 in branch-1.4? From what I understand, the change upgrading the Hadoop version was just to make IDE importing more convenient, hardly a user-facing feature. Also, I think it would be good to e-mail the dev list and explain that the default build behavior is changing.
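
    For reference, a sketch of the kind of profile-based command the build instructions could keep pointing users at; the profile name and hadoop.version value here are assumptions based on the -Phadoop-2.2 profile mentioned above, not a final recommendation:

        # Sketch: pin the Hadoop version explicitly via a profile instead of
        # relying on whatever the default profile happens to be.
        mvn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package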

