Github user jkbradley commented on the pull request:

    https://github.com/apache/spark/pull/5055#issuecomment-87808730
  
    @tanyinyan  I think what you're arguing for is actually option (1).  I 
propose the following combination of the two solutions:
    
    Expose setFeatureScaling() as an option.  Default to true.
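    For reference, here is a minimal sketch (plain Scala, hypothetical names, 
not the actual patch) of what the exposed setter could look like, following 
the usual MLlib builder-setter pattern; a private flag like this already 
exists on GeneralizedLinearAlgorithm:
    
        class LogisticRegressionWithLBFGS {
          // Hypothetical standalone field; in MLlib the flag lives on
          // GeneralizedLinearAlgorithm and is currently private[mllib].
          private var useFeatureScaling = true
    
          /** Whether to standardize features before optimization. Default: true. */
          def setFeatureScaling(value: Boolean): this.type = {
            this.useFeatureScaling = value
            this
          }
          // Usage would be e.g.:
          //   new LogisticRegressionWithLBFGS().setFeatureScaling(false)
        }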
    
    If featureScaling is true, then we scale features and do *not* adjust 
regularization.  This will change the optimal solution, but as your 
references note, it is generally better to do anyway.  (My experience is the 
same; see the sketch after the next paragraph.)
    
    If featureScaling is false, then we still scale features internally but 
also adjust the regularization to compensate.  This improves optimization 
behavior but does not change the optimal solution.
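    To make the difference between the two modes concrete, here is a 
self-contained sketch (plain Scala, not the MLlib code; the 1-D ridge setup 
and all names are mine for illustration).  Dividing a feature by sigma and 
dividing its L2 penalty by sigma^2 recovers the raw-feature optimum exactly, 
while keeping the penalty uniform yields a different optimum:
    
        object ScalingSketch {
          // 1-D ridge regression has the closed form
          //   w* = sum(x*y) / (sum(x*x) + lambda)
          def ridge1D(x: Array[Double], y: Array[Double], lambda: Double): Double = {
            val xy = x.zip(y).map { case (a, b) => a * b }.sum
            val xx = x.map(a => a * a).sum
            xy / (xx + lambda)
          }
    
          def main(args: Array[String]): Unit = {
            val x = Array(1.0, 2.0, 3.0, 4.0)
            val y = Array(2.1, 3.9, 6.2, 8.1)
            val lambda = 0.5
            // Crude scale estimate; any positive constant works for the argument.
            val sigma = math.sqrt(x.map(a => a * a).sum / x.length)
            val xScaled = x.map(_ / sigma)
    
            // Optimum of the raw-feature objective.
            val wRaw = ridge1D(x, y, lambda)
    
            // featureScaling = false: scale internally, divide lambda by sigma^2,
            // then map the solution back.  Recovers the raw optimum exactly.
            val wAdjusted = ridge1D(xScaled, y, lambda / (sigma * sigma)) / sigma
    
            // featureScaling = true: scale features, keep lambda as-is, map back.
            // Regularization now acts in the scaled space, so the optimum differs.
            val wUniform = ridge1D(xScaled, y, lambda) / sigma
    
            println(s"raw:      $wRaw")
            println(s"adjusted: $wAdjusted  (equal to raw)")
            println(s"uniform:  $wUniform  (different)")
          }
        }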
    
    Defaulting to true means the algorithm will probably do the best thing 
for most users, while still letting informed users opt out when they 
specifically need the unscaled objective.
    
    This proposal will also avoid an API change since the meaning of 
featureScaling will stay the same.


