GitHub user Krimit opened a pull request:
https://github.com/apache/spark/pull/17263
[SPARK-19922][ML] small speedups to findSynonyms
Currently, generating synonyms using a large model (I've tested with 3M
words) is very slow. These changes have sped things up for us by ~17%.
I wasn't sure whether such small changes were worthy of a JIRA, but the
contribution guidelines seemed to suggest that is the preferred approach.
## What changes were proposed in this pull request?
Address a few small issues in the findSynonyms logic:
1) Remove the use of ``Array.fill`` to zero out the ``cosineVec`` array. Newly
allocated float arrays in Scala and Java already default to 0.0f, so explicitly
setting the values to zero is not needed.
2) Use Floats throughout. The conversion to Doubles before building the
``priorityQueue`` is entirely superfluous, since all the similarity computations
are done in Floats anyway; creating a second large array just puts extra strain
on the GC.
3) Convert the slow ``for (i <- cosVec.indices)`` to an ugly, but faster,
``while`` loop.
These efficiencies are really only apparent when working with a large model; a rough sketch of the resulting hot loop is included below.
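For context, here is a minimal, self-contained sketch of what the optimized scoring loop looks like after the three changes above. Names such as `wordVectors`, `wordVecNorms`, `numWords`, and `vectorSize` are illustrative stand-ins for the model's internals, not the exact patched code:

```scala
import com.github.fommil.netlib.BLAS.{getInstance => blas}

object FindSynonymsSketch {
  // Compute cosine-style scores of `queryVec` against every word vector,
  // staying in Float the whole way through.
  def cosineScores(
      queryVec: Array[Float],
      wordVectors: Array[Float],   // flattened matrix, vectorSize x numWords
      wordVecNorms: Array[Float],  // precomputed norm of each word vector
      numWords: Int,
      vectorSize: Int): Array[Float] = {
    // (1) a freshly allocated Float array is already all zeros, so no Array.fill
    val cosineVec = new Array[Float](numWords)
    // one BLAS call computes the dot product of the query with every word vector
    blas.sgemv("T", vectorSize, numWords, 1.0f, wordVectors, vectorSize,
      queryVec, 1, 0.0f, cosineVec, 1)
    // (2) stay in Float and (3) use a plain while loop instead of
    // `for (i <- cosineVec.indices)`, which pays for a Range plus a closure call
    var i = 0
    while (i < numWords) {
      cosineVec(i) /= wordVecNorms(i)
      i += 1
    }
    cosineVec
  }
}
```

Downstream, the top-k candidates can then be selected from the Float scores directly (e.g. with a bounded priority queue of `(word, score)` pairs), rather than first mapping the whole array to Double.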
## How was this patch tested?
Existing unit tests + some in-house tests to time the difference
cc @jkbradley @MLNick @srowen
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/Krimit/spark fasterFindSynonyms
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/17263.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #17263
----
commit f22f47f5b341f930b42ccea507a3697c0953abc1
Author: Asher Krim <krim.asher@gmail>
Date: 2017-03-12T01:19:24Z
small speedups to findSynonyms
Currently generating synonyms using a model with 3m words is painfully
slow. These efficiencies have sped things up by more than 17%.
Address a few issues in the findSynonyms logic:
1) no need to zero out the cosineVec array each time, since the default value
for float arrays is 0.0f. This should offer some nice speedups
2) use floats throughout. The conversion to Doubles before doing the
priorityQueue is totally superfluous, since all the computations are done using
floats anyway
3) convert the slow for(i <- cosVec.indices), which combines a Scala
closure with a Range, to an ugly but faster while loop
----