GitHub user VinceShieh opened a pull request:
https://github.com/apache/spark/pull/19318
[SPARK-22096][ML] use aggregateByKeyLocally in feature frequency calc…
## What changes were proposed in this pull request?
NaiveBayes currently uses aggregateByKey followed by a collect to calculate
the frequency of each feature/label. We can implement a new function
'aggregateByKeyLocally' in RDD that merges locally on each mapper before
sending results to a reducer, saving one stage.
We tested this on NaiveBayes and saw a ~20% performance gain with these changes.
Signed-off-by: Vincent Xie <[email protected]>
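For illustration only, a minimal sketch of what such a locally-merging aggregation could look like; this is an assumption about the approach, not the actual code in the patch, and the signature (zeroValue, seqOp, combOp, returning a Map to the driver) is hypothetical:

```scala
import scala.collection.mutable
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// Hypothetical sketch: aggregate values per key inside each partition,
// then merge the per-partition maps with a single reduce, so the result
// reaches the driver without an extra shuffle stage + collect.
def aggregateByKeyLocally[K: ClassTag, V: ClassTag, U: ClassTag](
    rdd: RDD[(K, V)], zeroValue: => U)(
    seqOp: (U, V) => U, combOp: (U, U) => U): Map[K, U] = {
  rdd.mapPartitions { iter =>
    // Merge values locally on each mapper.
    val localMap = mutable.HashMap.empty[K, U]
    iter.foreach { case (k, v) =>
      localMap(k) = seqOp(localMap.getOrElse(k, zeroValue), v)
    }
    Iterator(localMap)
  }.reduce { (m1, m2) =>
    // Combine the per-partition maps; this replaces aggregateByKey + collect.
    m2.foreach { case (k, u) =>
      m1(k) = m1.get(k).map(combOp(_, u)).getOrElse(u)
    }
    m1
  }.toMap
}
```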
## How was this patch tested?
Existing tests.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/VinceShieh/spark SPARK-22096
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/19318.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #19318
----
commit efb0fe9c0544d8666c423ba9bde533735961ea75
Author: Vincent Xie <[email protected]>
Date: 2017-09-22T03:57:08Z
[SPARK-22096][ML] use aggregateByKeyLocally in feature frequency calculation
NaiveBayes currently uses aggregateByKey followed by a collect to calculate
the frequency of each feature/label. We can implement a new function
'aggregateByKeyLocally' in RDD that merges locally on each mapper before
sending results to a reducer, saving one stage.
We tested this on NaiveBayes and saw a ~20% performance gain with these changes.
Signed-off-by: Vincent Xie <[email protected]>
----
---