srowen commented on a change in pull request #33449:
URL: https://github.com/apache/spark/pull/33449#discussion_r673607385
##########
File path: mllib/src/test/scala/org/apache/spark/ml/optim/WeightedLeastSquaresSuite.scala
##########
@@ -531,7 +534,8 @@ class WeightedLeastSquaresSuite extends SparkFunSuite with MLlibTestSparkContext
        standardization <- Seq(false, true)) {
       for (solver <- WeightedLeastSquares.supportedSolvers) {
         val wls = new WeightedLeastSquares(fitIntercept, regParam, elasticNetParam = 0.0,
-          standardizeFeatures = standardization, standardizeLabel = true, solverType = solver)
+          standardizeFeatures = standardization, standardizeLabel = true, solverType = solver,
Review comment:
The updated tol is needed for the test to pass, and it matches the tol in the
R code that was used to generate the 'correct' answers, so it seems OK. The same
applies to maxIter. I made a similar change throughout this test suite for consistency.
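For context, a relative-tolerance comparison of the kind such suites use when checking results against reference values generated in R can be sketched as follows. This is a hypothetical standalone illustration, not Spark's actual test helper; the `relClose` name and the sample values are made up.

```scala
// Hypothetical sketch: comparing two doubles within a relative tolerance,
// the style of check that makes the choice of tol matter for test outcomes.
object TolSketch {
  def relClose(a: Double, b: Double, relTol: Double): Boolean =
    math.abs(a - b) <= relTol * math.max(math.abs(a), math.abs(b))

  def main(args: Array[String]): Unit = {
    // A discrepancy of 1e-6 passes at relTol = 1e-5; a 1% discrepancy does not.
    println(relClose(1.000001, 1.0, 1e-5))
    println(relClose(1.01, 1.0, 1e-5))
  }
}
```

A tighter tol in the solver than in the reference computation can thus make otherwise-correct answers fail such a check, which is why matching the R code's tol is reasonable.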
##########
File path: mllib/src/test/scala/org/apache/spark/mllib/linalg/VectorsSuite.scala
##########
@@ -295,7 +295,9 @@ class VectorsSuite extends SparkFunSuite with Logging {
val denseVector1 = Vectors.dense(sparseVector1.toArray)
val denseVector2 = Vectors.dense(sparseVector2.toArray)
-    val squaredDist = breezeSquaredDistance(sparseVector1.asBreeze, sparseVector2.asBreeze)
+    val squaredDist = sparseVector1.toArray.zip(sparseVector2.toArray).map {
Review comment:
There was a weird compile error on breeze's squaredDistance, so I just
wrote it out here instead.
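The manual replacement that the truncated `+` line above begins can be sketched like this. It is a standalone illustration, not the exact suite code; the object name and vector values are made up for the example.

```scala
// Standalone sketch of computing squared Euclidean distance without breeze:
// zip the two arrays, square each element-wise difference, and sum.
object SquaredDistSketch {
  def squaredDistance(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  def main(args: Array[String]): Unit = {
    val v1 = Array(1.0, 0.0, 3.0)  // example values, not from the test suite
    val v2 = Array(1.0, 2.0, 0.0)
    println(squaredDistance(v1, v2))  // (0)^2 + (-2)^2 + (3)^2 = 13.0
  }
}
```

This avoids the breeze dependency entirely for the test's purposes, at the cost of densifying the sparse vectors via `toArray`, which is fine for small test inputs.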
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]