Github user yanboliang commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17746#discussion_r113133446
  
    --- Diff: R/pkg/inst/tests/testthat/test_mllib_classification.R ---
    @@ -288,18 +288,18 @@ test_that("spark.mlp", {
         c(0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 9, 9, 9, 9, 9))
      mlpPredictions <- collect(select(predict(model, mlpTestDF), "prediction"))
      expect_equal(head(mlpPredictions$prediction, 10),
    -               c("1.0", "1.0", "1.0", "1.0", "2.0", "1.0", "2.0", "2.0", "1.0", "0.0"))
    +               c("1.0", "1.0", "2.0", "1.0", "2.0", "1.0", "2.0", "2.0", "1.0", "0.0"))
     
      model <- spark.mlp(df, label ~ features, layers = c(4, 3), maxIter = 2, initialWeights =
        c(0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 5.0, 5.0, 5.0, 9.0, 9.0, 9.0, 9.0, 9.0))
      mlpPredictions <- collect(select(predict(model, mlpTestDF), "prediction"))
      expect_equal(head(mlpPredictions$prediction, 10),
    -               c("1.0", "1.0", "1.0", "1.0", "2.0", "1.0", "2.0", "2.0", "1.0", "0.0"))
    +               c("1.0", "1.0", "2.0", "1.0", "2.0", "1.0", "2.0", "2.0", "1.0", "0.0"))
     
      model <- spark.mlp(df, label ~ features, layers = c(4, 3), maxIter = 2)
      mlpPredictions <- collect(select(predict(model, mlpTestDF), "prediction"))
      expect_equal(head(mlpPredictions$prediction, 10),
    -               c("1.0", "1.0", "1.0", "1.0", "0.0", "1.0", "0.0", "2.0", "1.0", "0.0"))
    +               c("1.0", "1.0", "1.0", "1.0", "0.0", "1.0", "0.0", "0.0", "1.0", "0.0"))
    --- End diff ---
    
    Yeah, it's weird. I suspect a numerical issue or some underlying breeze fix caused the output to change. The affected results only show up on the very tiny datasets (three rows) used in the PySpark/SparkR tests; none of the Scala tests (which usually run on thousands of rows) are affected.
    I ran PySpark ```LogisticRegression``` on a larger dataset against Spark built with breeze 0.12 and with breeze 0.13.1, and both produce the same result within a reasonable tolerance:
    For breeze 0.12:
    ```
    >>> df = spark.read.format("libsvm").load("/Users/yliang/data/trunk4/spark/data/mllib/sample_multiclass_classification_data.txt")
    >>> from pyspark.ml.classification import LogisticRegression
    >>> mlor = LogisticRegression(maxIter=100, regParam=0.01, family="multinomial")
    >>> mlorModel = mlor.fit(df)
    >>> mlorModel.coefficientMatrix
    DenseMatrix(3, 4, [1.0584, -1.8365, 3.2426, 3.6224, -2.1275, 2.8712, -2.8362, -2.5096, 1.069, -1.0347, -0.4064, -1.1128], 1)
    >>> mlorModel.interceptVector
    DenseVector([-1.1036, -0.5917, 1.6953])
    ```
    For breeze 0.13.1:
    ```
    >>> df = spark.read.format("libsvm").load("/Users/yliang/data/trunk4/spark/data/mllib/sample_multiclass_classification_data.txt")
    >>> from pyspark.ml.classification import LogisticRegression
    >>> mlor = LogisticRegression(maxIter=100, regParam=0.01, family="multinomial")
    >>> mlorModel = mlor.fit(df)
    >>> mlorModel.coefficientMatrix
    DenseMatrix(3, 4, [1.0584, -1.8365, 3.2426, 3.6224, -2.1274, 2.8712, -2.8363, -2.5096, 1.069, -1.0347, -0.4064, -1.1128], 1)
    >>> mlorModel.interceptVector
    DenseVector([-1.1036, -0.5917, 1.6953])
    ```
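    To be concrete, the largest element-wise difference between the two coefficient matrices above is about 1e-4, well within a reasonable test tolerance. A quick check of that claim (illustration only; values copied from the outputs above, numpy assumed available):
    ```
    import numpy as np

    # Coefficient values as printed above for breeze 0.12 and breeze 0.13.1
    # (rounded the way PySpark's repr rounds them).
    coef_012 = np.array([1.0584, -1.8365, 3.2426, 3.6224, -2.1275, 2.8712,
                         -2.8362, -2.5096, 1.069, -1.0347, -0.4064, -1.1128])
    coef_0131 = np.array([1.0584, -1.8365, 3.2426, 3.6224, -2.1274, 2.8712,
                          -2.8363, -2.5096, 1.069, -1.0347, -0.4064, -1.1128])

    print(np.max(np.abs(coef_012 - coef_0131)))         # ~1e-4
    print(np.allclose(coef_012, coef_0131, atol=1e-3))  # True
    ```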
    So I think we should update these fragile PySpark/SparkR test cases to use a larger dataset to make them more stable. What about merging this first and doing that in a separate PR? Thanks.
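    For reference, a rough sketch of what such a follow-up could look like in PySpark (SparkR would be analogous); the layer sizes and seed here are only illustrative, not necessarily what the follow-up PR would use:
    ```
    # Sketch only: train on the bundled libsvm dataset (~150 rows, 4 features,
    # 3 classes) instead of a 3-row DataFrame, so the expected predictions are
    # far less sensitive to tiny numerical changes in breeze.
    from pyspark.ml.classification import MultilayerPerceptronClassifier

    data = spark.read.format("libsvm") \
        .load("data/mllib/sample_multiclass_classification_data.txt")
    mlp = MultilayerPerceptronClassifier(maxIter=100, layers=[4, 5, 4, 3], seed=11)
    model = mlp.fit(data)
    predictions = model.transform(data).select("prediction").head(10)
    # The test would then assert against these predictions (or an accuracy
    # threshold), which should stay stable across breeze versions at this scale.
    ```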

