[ https://issues.apache.org/jira/browse/SPARK-7618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549229#comment-14549229 ]

Apache Spark commented on SPARK-7618:
-------------------------------------

User 'ezli' has created a pull request for this issue:
https://github.com/apache/spark/pull/6245

> Word2VecModel cache normalized wordVectors to speed up findSynonyms
> -------------------------------------------------------------------
>
>                 Key: SPARK-7618
>                 URL: https://issues.apache.org/jira/browse/SPARK-7618
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.3.1
>            Reporter: Eric Li
>            Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> In the current implementation, every findSynonyms call has to perform a Euclidean
> normalization (cosineVec / wordVecNorms), which is expensive. Caching a copy of the
> normalized wordVectors would speed up repeated findSynonyms calls; this is how
> Google's word2vec C code implements it (see the sketch below).
> In addition, lazily computing the normalized wordVectors would be a nice improvement as well.
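Not the actual patch in the pull request above; just a minimal, self-contained Scala sketch
of the idea described in the issue. Names such as wordIndex, wordVectors and vectorSize are
illustrative assumptions, modeled loosely on the MLlib Word2VecModel layout (a flat
Array[Float] of concatenated word vectors plus a word-to-index map). The normalized copy is
held in a lazy val, so it is built once on first use and every later findSynonyms call
reduces to plain dot products.

  // Sketch: cache L2-normalized word vectors so findSynonyms only does dot products.
  class CachedWord2VecModel(
      wordIndex: Map[String, Int],   // word -> row index (illustrative name)
      wordVectors: Array[Float],     // flattened numWords x vectorSize matrix
      vectorSize: Int) {

    private val numWords = wordIndex.size

    // Computed lazily, once; avoids dividing by the norms on every call.
    private lazy val normalizedVectors: Array[Float] = {
      val normed = wordVectors.clone()
      var w = 0
      while (w < numWords) {
        val offset = w * vectorSize
        var sumSq = 0.0
        var i = 0
        while (i < vectorSize) {
          val v = normed(offset + i)
          sumSq += v * v
          i += 1
        }
        val invNorm = if (sumSq > 0) (1.0 / math.sqrt(sumSq)).toFloat else 0.0f
        i = 0
        while (i < vectorSize) {
          normed(offset + i) *= invNorm
          i += 1
        }
        w += 1
      }
      normed
    }

    // Cosine similarity of unit vectors is just their dot product.
    def findSynonyms(word: String, num: Int): Seq[(String, Double)] = {
      val idx = wordIndex.getOrElse(word,
        throw new IllegalArgumentException(s"$word not in vocabulary"))
      val queryOffset = idx * vectorSize
      val scores = wordIndex.toSeq.map { case (w, j) =>
        val offset = j * vectorSize
        var dot = 0.0
        var i = 0
        while (i < vectorSize) {
          dot += normalizedVectors(queryOffset + i) * normalizedVectors(offset + i)
          i += 1
        }
        (w, dot)
      }
      scores.filter(_._1 != word).sortBy(-_._2).take(num)
    }
  }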


