Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/14384
@junyangq The ALS wrapper might not need extra metadata in SparkR, since the
MLlib model should already store all of it. If that is the case, the PR could
be further simplified. This PR should
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75179320
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/ALSWrapper.scala ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75179030
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/ALSWrapper.scala ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75178881
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/ALSWrapper.scala ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75178780
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/ALSWrapper.scala ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75178434
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/ALSWrapper.scala ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/14392
LGTM. Merged into master. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/14384
test this please
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/14346
cc: @junyangq
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/14229
@junyangq Could you help review this PR?
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/14182
@junyangq Could you help review this PR?
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/14258
@junyangq @shivaram We might need to think about how to test
`install.spark`. Any ideas? It seems hard to me because the distribution tar
file is not available for version like 2.1.0-SNAPSHOT
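One way to make `install.spark` testable despite the missing SNAPSHOT tarball is to inject the download step, so a stub can stand in for the real distribution. The sketch below is Python rather than R (the actual function is R code), and the function shape and URL layout are hypothetical illustrations, not the real implementation:

```python
# Hypothetical sketch: separate URL resolution from retrieval so the
# install logic can be exercised without a published distribution.
def install_spark(version, fetch):
    """Resolve a distribution URL and delegate retrieval to `fetch`.

    In tests, `fetch` can be a stub that records the URL instead of
    downloading; in production it would perform the real download.
    The URL layout here is an assumption for illustration only.
    """
    url = ("https://archive.apache.org/dist/spark/"
           f"spark-{version}/spark-{version}-bin-hadoop2.7.tgz")
    return fetch(url)

# A test stub records what would have been downloaded:
seen = []
install_spark("2.1.0-SNAPSHOT", lambda u: seen.append(u) or u)
```

With this shape, a unit test never touches the network, which sidesteps the unavailable-tarball problem for snapshot versions.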
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72312404
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190554
--- Diff: R/pkg/R/utils.R ---
@@ -689,3 +689,7 @@ getSparkContext <- function() {
sc <- get(".sparkRjsc", envir = .spar
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190550
--- Diff: R/pkg/R/utils.R ---
@@ -689,3 +689,7 @@ getSparkContext <- function() {
sc <- get(".sparkRjsc", envir = .spar
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190498
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190493
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190477
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190472
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190503
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190487
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190481
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190507
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190500
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190491
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190465
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190467
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190484
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190475
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190505
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190480
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190496
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190466
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72190463
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,155 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13972
@yinxusen Do you have time to consolidate example files for
`mllib-data-types.md`?
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13972
LGTM2. Merged into master and branch-2.0. Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13921
The grouping of k-means methods LGTM. So I'm merging this into master and
branch-2.0 and leaving the remaining issues to SPARK-16144. Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13970
LGTM. Merged into master and branch-2.0. Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13493
Shall we change the logic in the Python wrapper and set `numPartitions`
correctly if it is `-1`? Please also update the PR description to add more
details to the changes.
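A minimal Python sketch of the default-handling being asked for; the `-1` sentinel convention comes from the comment, while the function name and the rest are hypothetical:

```python
# Hypothetical sketch: resolve the -1 sentinel on the Python side so the
# JVM side never sees it, rather than forwarding -1 unchanged.
def resolve_num_partitions(num_partitions, default_parallelism):
    """Treat -1 as 'use the default parallelism'; reject other non-positives."""
    if num_partitions == -1:
        return default_parallelism
    if num_partitions < 1:
        raise ValueError("numPartitions must be -1 or a positive integer")
    return num_partitions
```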
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13937
Merged into master and branch-2.0. Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13940
ok to test
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13921
@keypointt Could you add `seealso` links to the generic `predict` and
`write.ml` doc?
@felixcheung I think it is useful to have a page for the generic `predict`
and `write.ml`. We just need
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13921
I think the error was because this PR left `predict`, `write.ml`, etc.
documented without titles. So this PR has to be combined with SPARK-16144.
Basically, let us add some doc to the function
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13921
I just merged #13927. So please fetch and merge the latest master if you
want to push an update, to avoid merge conflicts (hopefully none).
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13915
@dongjoon-hyun @rxin Is it necessary? `l` and `I` look similar in most
monospace fonts but are different characters. Instead of banning `l`, we
should ban using `I` as a variable name, which is used less often. `l
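A toy illustration of the alternative being proposed: a hypothetical Python check over Scala-style declarations, not the actual Scalastyle rule:

```python
import re

# Hypothetical lint check: flag declarations of the variable name `I`
# (easily confused with `l` in many fonts) instead of banning `l`.
BANNED_NAME = re.compile(r"\b(?:val|var)\s+I\b")

def uses_banned_name(line):
    return BANNED_NAME.search(line) is not None
```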
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13935
LGTM pending Jenkins
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13936
LGTM pending Jenkins
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13921#discussion_r68693126
--- Diff: R/pkg/R/mllib.R ---
@@ -266,9 +266,9 @@ setMethod("summary", signature(object =
"NaiveBayesModel"),
return(li
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13927
LGTM. Merged into master and branch-2.0. Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13927
LGTM except one minor issue with `predict`.
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13927#discussion_r68677641
--- Diff: R/pkg/R/mllib.R ---
@@ -595,20 +597,14 @@ setMethod("summary", signature(object =
"AFTSurvivalRegressionModel"),
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13921
@keypointt For quicker tests, you can run `R/create-docs.sh` and then check
the html doc under `R/pkg/html`. It would be much faster than `jekyll build`.
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13921#discussion_r68677380
--- Diff: R/pkg/R/mllib.R ---
@@ -266,9 +266,9 @@ setMethod("summary", signature(object =
"NaiveBayesModel"),
return(li
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13928
Merged into master and branch-2.0. Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13928
LGTM pending Jenkins
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13888
Merged into master and branch-2.0. Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13927
@junyangq You can use `dev/lint-r` to check R code style on your local
machine.
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13927#discussion_r68638675
--- Diff: R/pkg/R/mllib.R ---
@@ -420,44 +440,96 @@ setMethod("spark.naiveBayes", signature(data =
"SparkDataFrame", formula = "for
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13927#discussion_r68638138
--- Diff: R/pkg/R/mllib.R ---
@@ -420,44 +440,96 @@ setMethod("spark.naiveBayes", signature(data =
"SparkDataFrame", formula = "for
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13927#discussion_r68637735
--- Diff: R/pkg/R/mllib.R ---
@@ -420,44 +440,96 @@ setMethod("spark.naiveBayes", signature(data =
"SparkDataFrame", formula = "for
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13927
ok to test
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13927
add to whitelist
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13921
@junyangq Could you help review this PR? Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13891
@hqzizania This could be tested with benchmarks without ALS. I guess even
with a correct implementation, we need a large rank to see improvement.
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13844
LGTM. Merged into master and branch-2.0. Thanks!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13877
Merged into master and branch-2.0. Thanks for reviewing!
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13881
ok to test
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13844#discussion_r68350339
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/MinMaxScaler.scala ---
@@ -232,7 +233,9 @@ object MinMaxScalerModel extends
MLReadable
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13879
LGTM. Merged into master and branch-2.0. Thanks!
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13877#discussion_r68337983
--- Diff: R/pkg/R/mllib.R ---
@@ -390,23 +376,41 @@ setMethod("predict", signature(object =
"KMeansModel"),
return(d
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13872
Merged into master and branch-2.0. Thanks!
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13877#discussion_r68316342
--- Diff: R/pkg/R/mllib.R ---
@@ -218,9 +218,10 @@ print.summary.GeneralizedLinearRegressionModel <-
function(x, ...) {
# Makes predictions f
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13877
cc: @junyangq @shivaram
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/13877
[SPARK-16142] [R] group naiveBayes method docs in a single Rd
## What changes were proposed in this pull request?
This PR groups `spark.naiveBayes`, `summary(NB)`, `predict(NB
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13872
@liancheng @rxin In the code, I didn't use UDF explicitly in a filter
expression. It is like the following:
~~~
filter b > 0
set a = udf(b)
filter a
~~~
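The pseudo-code above sketches a plan where a UDF runs between two filters. A plain-Python analogue (not Spark code; the concrete values and the `sqrt` UDF are hypothetical) of why evaluation order matters there:

```python
import math

# Hypothetical rows; only b > 0 makes the UDF safe to evaluate.
rows = [{"b": 4.0}, {"b": -1.0}, {"b": 9.0}]

def udf(b):
    # Assumes b > 0; the first filter is supposed to guarantee that.
    return math.sqrt(b)

# Intended order: filter b > 0, then compute a = udf(b), then filter on a.
safe = [dict(r, a=udf(r["b"])) for r in rows if r["b"] > 0]
result = [r for r in safe if r["a"] > 1]

# If an optimizer pushed the udf evaluation below the b > 0 filter, the
# udf would also run on b = -1.0 and raise a ValueError.
```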
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13859
Merged into master and branch-2.0.
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13841#discussion_r68191189
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -674,12 +674,12 @@ object LogisticRegressionModel extends
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13844#discussion_r68191096
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/MinMaxScaler.scala ---
@@ -232,7 +233,9 @@ object MinMaxScalerModel extends
MLReadable
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13859#discussion_r68190859
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/package-info.java ---
@@ -16,6 +16,26 @@
*/
/**
- * Spark's machine learning
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13872
cc: @liancheng
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13856
Merged into master and branch-2.0.
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13855
This is a trivial change that fixes the Java doc build. Merged into master
and branch-2.0.
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13859
cc: @jkbradley @MLnick @srowen
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/13859
[SPARK-16154] [MLLIB] Update spark.ml and spark.mllib package docs
## What changes were proposed in this pull request?
Since we decided to switch spark.mllib package into maintenance mode
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/13856
[SPARK-16155] [DOC] remove package grouping in Java docs
## What changes were proposed in this pull request?
In 1.4 and earlier releases, we have package grouping in the generated Java
API
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13855
cc: @JoshRosen It would be nice to have a Jenkins job to build the docs.
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/13855
[SPARK-16153] [MLLIB] switch to multi-line doc to avoid a genjavadoc bug
## What changes were proposed in this pull request?
We recently deprecated setLabelCol in ChiSqSelectorModel (#13823
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13828
Merged into master and branch-2.0.
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13840
LGTM. Merged into master and branch-2.0. Thanks for testing class
compatibility!
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13656#discussion_r68091611
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/fpm/AssociationRules.scala ---
@@ -120,6 +120,13 @@ object AssociationRules {
@Since("
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13847
Do we need to hash all values? This could be a performance issue if
`hashCode` is called frequently on very large arrays.
Story: MLlib had some performance issues caused by `Vector.hashCode
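A hedged sketch of the usual mitigation for this (plain Python, not the actual MLlib fix; the prefix size of 16 is an arbitrary choice here): hash a bounded prefix of the elements together with the length, instead of every element.

```python
# Sketch: bound the cost of hashing very large arrays by combining the
# length with a fixed-size prefix of elements. Equal arrays still hash
# equal; arrays that differ only past the prefix may collide, which is
# an acceptable trade-off for hash-table use.
def bounded_hash(values, prefix=16):
    return hash((len(values),) + tuple(values[:prefix]))
```

This keeps `hashCode`-style calls O(1) in the array length while preserving the contract that equal values hash equally.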
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/12968#discussion_r68089887
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StopWordsRemover.scala ---
@@ -109,6 +127,7 @@ class StopWordsRemover(override val uid: String
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/12968#discussion_r68089485
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StopWordsRemover.scala ---
@@ -73,22 +75,38 @@ class StopWordsRemover(override val uid: String
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13844#discussion_r68088732
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/MinMaxScaler.scala ---
@@ -232,9 +233,9 @@ object MinMaxScalerModel extends
MLReadable
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13841#discussion_r68088554
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -674,12 +674,13 @@ object LogisticRegressionModel extends
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13841#discussion_r68088372
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -674,12 +674,13 @@ object LogisticRegressionModel extends
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13844
@hhbyyh Since we don't have a test framework to check backward
compatibility, could you confirm that you manually tested loading models
saved in 1.6?
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13820
LGTM. Merged into master and branch-2.0. Thanks!
@junyangq Please follow @felixcheung 's suggestion and insert an empty line
between `#` comments and `#'` comments in your next PR
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13820#discussion_r68083635
--- Diff: R/pkg/R/mllib.R ---
@@ -124,24 +138,21 @@ setMethod("spark.glm", signature(data =
"SparkDataFrame", formula = "formula
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/13820#discussion_r67989751
--- Diff: R/pkg/R/mllib.R ---
@@ -173,10 +182,9 @@ setMethod("summary", signature(object =
"GeneralizedLinearRegressionModel"),