[GitHub] spark pull request #20592: [SPARK-23154][ML][DOC] Document backwards compati...

2018-02-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/20592


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #20592: [SPARK-23154][ML][DOC] Document backwards compati...

2018-02-13 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/20592#discussion_r167971683
  
--- Diff: docs/ml-pipeline.md ---
@@ -188,9 +188,36 @@ Parameters belong to specific instances of 
`Estimator`s and `Transformer`s.
 For example, if we have two `LogisticRegression` instances `lr1` and 
`lr2`, then we can build a `ParamMap` with both `maxIter` parameters specified: 
`ParamMap(lr1.maxIter -> 10, lr2.maxIter -> 20)`.
 This is useful if there are two algorithms with the `maxIter` parameter in 
a `Pipeline`.
 
-## Saving and Loading Pipelines
+## ML persistence: Saving and Loading Pipelines
 
-Often times it is worth it to save a model or a pipeline to disk for later 
use. In Spark 1.6, a model import/export functionality was added to the 
Pipeline API. Most basic transformers are supported as well as some of the more 
basic ML models. Please refer to the algorithm's API documentation to see if 
saving and loading is supported.
+Often times it is worth it to save a model or a pipeline to disk for later 
use. In Spark 1.6, a model import/export functionality was added to the 
Pipeline API.
+As of Spark 2.3, the DataFrame-based API in `spark.ml` and `pyspark.ml` 
has complete coverage.
+
+ML persistence works across Scala, Java and Python.  However, R currently 
uses a modified format,
+so models saved in R can only be loaded back in R; this should be fixed in 
the future and is
+tracked in 
[SPARK-15572](https://issues.apache.org/jira/browse/SPARK-15572).
+
+### Backwards compatibility for ML persistence
+
+In general, MLlib maintains backwards compatibility for ML persistence.  
I.e., if you save an ML
--- End diff --

Yep, what @HyukjinKwon said.  Gotta love the irregularities of the language!


---




[GitHub] spark pull request #20592: [SPARK-23154][ML][DOC] Document backwards compati...

2018-02-13 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/20592#discussion_r167847513
  
--- Diff: docs/ml-pipeline.md ---
@@ -188,9 +188,36 @@ (context identical to the diff quoted above)
+In general, MLlib maintains backwards compatibility for ML persistence.  
I.e., if you save an ML
--- End diff --

Oh, I ran into this before too. IIRC, I was confused about "an RDD" vs "a 
RDD". I learned that English picks "an" vs "a" by pronunciation, so I believe 
"an em-el" is correct :).


---




[GitHub] spark pull request #20592: [SPARK-23154][ML][DOC] Document backwards compati...

2018-02-13 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/20592#discussion_r167798378
  
--- Diff: docs/ml-pipeline.md ---
@@ -188,9 +188,36 @@ (context identical to the diff quoted above)
+In general, MLlib maintains backwards compatibility for ML persistence.  
I.e., if you save an ML
--- End diff --

an ML -> a ML


---




[GitHub] spark pull request #20592: [SPARK-23154][ML][DOC] Document backwards compati...

2018-02-12 Thread jkbradley
GitHub user jkbradley opened a pull request:

https://github.com/apache/spark/pull/20592

[SPARK-23154][ML][DOC] Document backwards compatibility guarantees for ML 
persistence

## What changes were proposed in this pull request?

Added documentation about what MLlib guarantees in terms of loading ML 
models and Pipelines from old Spark versions.  Discussed & confirmed on linked 
JIRA.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jkbradley/spark SPARK-23154-backwards-compat-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/20592.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #20592


commit 5366d169ca6b1c5f590ad1beaa15ac74cf731234
Author: Joseph K. Bradley 
Date:   2018-02-13T00:35:17Z

added backwards compatibility notes to ML pipeline persistence docs




---
