GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/16491
[SPARK-19110][ML][MLLIB] DistributedLDAModel returns different logPrior for original and loaded model
## What changes were proposed in this pull request?
While adding the DistributedLDAModel training summary for SparkR, I found that
the logPrior of the original model and the loaded model is different.
For example, I added the following assertions to the `test("read/write DistributedLDAModel")` test:
```scala
val logPrior = model.asInstanceOf[DistributedLDAModel].logPrior
val logPrior2 = model2.asInstanceOf[DistributedLDAModel].logPrior
assert(logPrior === logPrior2)
```
The test fails:
`-4.394180878889078 did not equal -4.294290536919573`
The reason is that `graph.vertices.aggregate(0.0)(seqOp, _ + _)` only
returns the value of a single vertex instead of the sum over all
vertices, because the `seqOp` drops the running accumulator and returns only the
current vertex's contribution. Consequently, when the loaded model aggregates its
vertices in a different order, it returns a different `logPrior`.
Please refer to #16464 for details.
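The faulty pattern can be reproduced outside of LDA with a plain RDD. The sketch below is a minimal illustration of the pitfall, not the actual `DistributedLDAModel` code: a `seqOp` that ignores its accumulator makes `aggregate` depend on partitioning and ordering.
```scala
import org.apache.spark.sql.SparkSession

object AggregatePitfall {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]").appName("aggregate-pitfall").getOrCreate()
    val sc = spark.sparkContext

    // Four values split across two partitions: [1.0, 2.0] and [3.0, 4.0].
    val values = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0), numSlices = 2)

    // Buggy seqOp: it ignores the running accumulator `acc`, so each partition
    // contributes only its last element (2.0 and 4.0 here), and the result
    // depends on how the data is partitioned and ordered.
    val buggy = values.aggregate(0.0)((acc, v) => v, _ + _)        // 6.0

    // Correct seqOp: it folds every element into the accumulator, so the
    // result is the true sum regardless of partitioning.
    val correct = values.aggregate(0.0)((acc, v) => acc + v, _ + _) // 10.0

    println(s"buggy = $buggy, correct = $correct")
    spark.stop()
  }
}
```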
## How was this patch tested?
Added a new unit test for `logPrior`.
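A rough sketch of the kind of save/load round-trip check the new test performs; the toy corpus, temp path, and tolerance below are illustrative and not the actual `LDASuite` code.
```scala
import java.nio.file.Files

import org.apache.spark.ml.clustering.{DistributedLDAModel, LDA}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object LogPriorRoundTrip {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]").appName("lda-logprior").getOrCreate()
    import spark.implicits._

    // Tiny toy corpus: each row is a term-count vector.
    val data = Seq(
      Vectors.dense(1.0, 2.0, 0.0, 1.0),
      Vectors.dense(0.0, 1.0, 3.0, 0.0),
      Vectors.dense(2.0, 0.0, 1.0, 1.0)
    ).map(Tuple1.apply).toDF("features")

    // The "em" optimizer produces a DistributedLDAModel.
    val model = new LDA().setK(2).setMaxIter(5).setOptimizer("em")
      .fit(data).asInstanceOf[DistributedLDAModel]

    // Save the model and load it back.
    val path = Files.createTempDirectory("lda-model").resolve("m").toString
    model.save(path)
    val model2 = DistributedLDAModel.load(path)

    // With the fix, logPrior should agree before and after the round trip
    // (a small tolerance guards against floating-point summation-order noise).
    assert(math.abs(model.logPrior - model2.logPrior) < 1e-6)

    spark.stop()
  }
}
```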
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/wangmiao1981/spark ldabug
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/16491.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #16491
----
commit 31d9bce7c1de016fa23f1783e41aa8b394a68cc4
Author: [email protected] <[email protected]>
Date: 2017-01-06T20:40:54Z
fix the bug of logPrior
----