Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/4259#discussion_r29098031
--- Diff: mllib/src/main/scala/org/apache/spark/ml/regression/LinearRegression.scala ---
@@ -42,34 +50,122 @@ private[regression] trait LinearRegressionParams extends RegressorParams
 class LinearRegression extends Regressor[Vector, LinearRegression, LinearRegressionModel]
   with LinearRegressionParams {

-  setDefault(regParam -> 0.1, maxIter -> 100)
-
-  /** @group setParam */
+  /**
+   * Set the regularization parameter.
+   * Default is 0.0.
+   * @group setParam
+   */
   def setRegParam(value: Double): this.type = set(regParam, value)
+  setDefault(regParam -> 0.0)
--- End diff ---
To match R's default result, we need `0.0`. Also, the meaning of lambda changes
if the number of samples changes, so it's hard to come up with a good default.
Why don't we implement a regularization path to find the best lambda?
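As an aside, a minimal self-contained sketch (not Spark code; all names and numbers here are hypothetical) of why lambda's meaning can depend on the sample count: if the data term is a *sum* of per-sample losses, the penalty's relative weight shrinks as n grows, so the same lambda regularizes less on bigger data; if the loss is *averaged*, the balance is independent of n.

```scala
// Toy comparison: share of the objective contributed by the L2 penalty
// lambda * w^2, under a summed vs. an averaged squared loss. Residual and
// coefficient values are arbitrary illustrative constants.
object LambdaScaling {
  val lambda = 0.1
  val w = 1.0        // hypothetical coefficient magnitude
  val residual = 0.5 // hypothetical per-sample residual

  // Summed loss: n * r^2 + lambda * w^2 -> penalty share shrinks with n.
  def penaltyShareSummed(n: Int): Double = {
    val data = n * residual * residual
    val pen = lambda * w * w
    pen / (data + pen)
  }

  // Averaged loss: (1/n) * n * r^2 + lambda * w^2 -> share independent of n.
  def penaltyShareAveraged(n: Int): Double = {
    val data = residual * residual
    val pen = lambda * w * w
    pen / (data + pen)
  }

  def main(args: Array[String]): Unit = {
    println(f"summed,   n=10:   ${penaltyShareSummed(10)}%.4f")
    println(f"summed,   n=1000: ${penaltyShareSummed(1000)}%.4f")
    println(f"averaged, n=10:   ${penaltyShareAveraged(10)}%.4f")
    println(f"averaged, n=1000: ${penaltyShareAveraged(1000)}%.4f")
  }
}
```

Under the summed formulation, keeping the penalty's effect constant would require scaling lambda with n, which is one way to see why a single fixed default is awkward and why searching over a regularization path is attractive.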
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]