Github user iyounus commented on the pull request:
https://github.com/apache/spark/pull/10702#issuecomment-177375091
I've completed this PR. I think all the tests are there. Here I'm going to
document a few minor issues just for future reference.
__Issue 1__
For the case when `yStd = 0` and `fitIntercept = false`, we have four
possibilities (`regParam`: zero/non-zero × `standardization`: true/false).
Using `WeightedLeastSquares` (the `normal` solver), I _can_ get the following
results:
```
// data used for the following results (run in spark-shell, where the
// sqlContext implicits needed for toDF are already in scope; Vectors is
// org.apache.spark.mllib.linalg.Vectors at the time of this PR)
import org.apache.spark.mllib.linalg.Vectors

val df = sc.parallelize(Seq(
  (17.0, Vectors.dense(0.0, 5.0)),
  (17.0, Vectors.dense(1.0, 7.0)),
  (17.0, Vectors.dense(2.0, 11.0)),
  (17.0, Vectors.dense(3.0, 13.0))
), 2).toDF("label", "features")
```
```
# coefficients obtained from WeightedLeastSquares
(1) reg: 0.0, standardization: false
--------> 0.0 [-9.508474576271158,3.457627118644062]
(2) reg: 0.0, standardization: true
--------> 0.0 [-9.508474576271158,3.457627118644062]
(3) reg: 0.1, standardization: false
--------> 0.0 [-7.134240246406588,3.010780287474336]
(4) reg: 0.1, standardization: true
--------> 0.0 [-5.730337078651679,2.7219101123595495]
```
This is with `L2` regularization, and ignoring standardization of the label
for case (4). For case (4) we throw an error because the problem is
ill-defined, so the user never sees these results.
For case (3), even though `standardization` is `false`, the label is
still standardized, because `standardizeLabel` is hardwired to `true`
when `WeightedLeastSquares` is called from the `LinearRegression` class.
Therefore an error is thrown in this case too, which, in my opinion, is not
the right thing to do, because the analytical solution does exist for this case.
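For reference, here is a minimal sketch of the kind of driver loop that
produces the table above (hypothetical, not code from this PR; `df` is the
DataFrame defined earlier, and with the changes in this PR cases (3) and (4)
throw instead of returning coefficients):
```scala
import org.apache.spark.ml.regression.LinearRegression

// Enumerate the four (regParam, standardization) combinations in the same
// order as the table above. solver = "normal" routes to WeightedLeastSquares;
// the default elasticNetParam of 0.0 gives pure L2 regularization.
for (reg <- Seq(0.0, 0.1); std <- Seq(false, true)) {
  val model = new LinearRegression()
    .setSolver("normal")
    .setFitIntercept(false)
    .setRegParam(reg)
    .setStandardization(std)
    .fit(df)
  println(s"reg: $reg, standardization: $std")
  println(s"--------> ${model.intercept} ${model.coefficients}")
}
```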
__Issue 2__
Again, for the case when `yStd = 0` and `fitIntercept = false`, I can get
the following results using the `l-bfgs` solver:
```
(1) reg: 0.0, standardization: false
--------> 0.0 [-9.508474576271176,3.4576271186440652]
(2) reg: 0.0, standardization: true
--------> 0.0 [-9.508474576271176,3.4576271186440652]
(3) reg: 0.1, standardization: false
--------> 0.0 [-9.327614273741196,3.423618722197146]
(4) reg: 0.1, standardization: true
--------> 0.0 [-9.08129403505256,3.374915377479131]
```
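(These numbers come from the same hypothetical driver loop sketched under
Issue 1, with `.setSolver("l-bfgs")` in place of `"normal"`.)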
Here, results (1) and (2) are identical to what we get from
`WeightedLeastSquares`, as expected. Case (4) is ill-defined and we throw an
error.
Now, for case (3), the numerical values differ from `WeightedLeastSquares`.
This is because we standardize the label using `yMean`; without that, the
values obtained from `l-bfgs` would be identical to `WeightedLeastSquares`.
Note that the user will not see these values either, because an error is
thrown for this case instead.
__Issue 3__
The normal equation with regularization (ridge regression) gives significantly
different results compared to case (3) above. Here is my R code with results:
```
ridge_regression <- function(A, b, lambda, intercept=TRUE){
  if (intercept) {
    A = cbind(rep(1.0, length(b)), A)  # prepend a column of ones
    I = diag(ncol(A))
    I[1,1] = 0.0                       # do not penalize the intercept
  } else {
    I = diag(ncol(A))
  }
  # solve (A'A + lambda*I) w = A'b via Cholesky: R'R = A'A + lambda*I
  R = chol( t(A) %*% A + lambda*I )
  z = solve(t(R), t(A) %*% b)  # forward solve: R'z = A'b
  w = solve(R, z)              # back solve:    R w = z
  return(w)
}

A <- matrix(c(0, 1, 2, 3, 5, 7, 11, 13), 4, 2)
b <- c(17, 17, 17, 17)
df <- as.data.frame(cbind(A, b))

ridge_regression(A, b, 0.1, intercept = FALSE)
[1,] -8.783272
[2,]  3.321237
```
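For reference, with `intercept = FALSE` the function above is just the
closed-form ridge solution, computed via a Cholesky factorization, with no
standardization of either the features or the label:
```
w = (A'A + lambda * I)^{-1} A'b
```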
In my opinion, when `standardization = false`, the results from the `normal`
solver should match this. Even though the user doesn't see this case, the
mismatch on such a simple example gives me less confidence in the
implementation of the normal equation. I also wrote about this at
https://github.com/apache/spark/pull/10274.