Github user pwendell commented on the pull request:

    https://github.com/apache/spark/pull/5786#issuecomment-101131567
  
    Hey @FavioVazquez - thanks for commenting. I'm happy to have a patch that 
makes the default settings more coherent. I just looked into the other pull 
request (#5783) to better understand the origins of this change. 
    
    I was just a bit confused because it seems that if you were building for 
CDH 5.3 you would need non-default settings anyway. But it appears this was 
not specifically related to your issue and was instead some clean-up suggested 
by @vainzn.
    
    My suggestion was to make this change only in master rather than putting 
it into the 1.4 branch, since I see basically no benefit beyond tidiness, and 
it introduces some natural risk from mucking around with the build. Also, if 
we are going to make build changes that require developers to build Spark 
differently, I think we should give ample warning. And I suggested we retain 
the existing profiles in our documentation in order to avoid having to keep 
changing developer habits every time we bump the Hadoop version.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to