GitHub user jkbradley opened a pull request:
https://github.com/apache/spark/pull/15205
[SPARK-16240][ML] ML persistence backward compatibility for LDA - 2.0 backport
## What changes were proposed in this pull request?
Allow Spark 2.x to load instances of LDA, LocalLDAModel, and
DistributedLDAModel saved from Spark 1.6.
Backport of https://github.com/apache/spark/pull/15034 for branch-2.0
## How was this patch tested?
I tested this manually by saving all three types from 1.6 and loading them into
master (2.x). In the future, we can add generic backward-compatibility tests
covering all ML models under SPARK-15573.
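To illustrate what this change enables, here is a minimal sketch of loading a 1.6-era model in Spark 2.x. The `LocalLDAModel` and `DistributedLDAModel` classes and their `load` methods are the real `org.apache.spark.ml.clustering` API; the model paths are hypothetical, and the snippet assumes a running `SparkSession` and model directories that were written by Spark 1.6's ML persistence.

```scala
import org.apache.spark.ml.clustering.{LocalLDAModel, DistributedLDAModel}

// With this patch, Spark 2.x can read the 1.6 on-disk layout directly.
// (Paths below are placeholders for directories saved via model.save(...) in 1.6.)
val localModel = LocalLDAModel.load("/models/lda-local-saved-by-1.6")
println(s"vocabSize = ${localModel.vocabSize}")

val distModel = DistributedLDAModel.load("/models/lda-distributed-saved-by-1.6")
println(s"trainingLogLikelihood = ${distModel.trainingLogLikelihood}")
```

Before this backport, these `load` calls would fail on 1.6-format metadata; afterwards the loaders recognize the older layout and upgrade it transparently.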
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/jkbradley/spark lda-backward-2.0
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/15205.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #15205
----
commit 2a95b5c9b05496a4f80ba13e2f4db9127f3f0c05
Author: Gayathri Murali <[email protected]>
Date: 2016-09-22T23:34:42Z
[SPARK-16240][ML] ML persistence backward compatibility for LDA
Allow Spark 2.x to load instances of LDA, LocalLDAModel, and
DistributedLDAModel saved from Spark 1.6.
I tested this manually, saving the 3 types from 1.6 and loading them into
master (2.x). In the future, we can add generic backward-compatibility tests
covering all ML models under SPARK-15573.
Author: Joseph K. Bradley <[email protected]>
Closes #15034 from jkbradley/lda-backwards.
----