Github user Ishiihara commented on a diff in the pull request:
https://github.com/apache/spark/pull/1932#discussion_r16222465
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/Word2Vec.scala ---
@@ -284,16 +284,15 @@ class Word2Vec extends Serializable with Logging {
val newSentences = sentences.repartition(numPartitions).cache()
val initRandom = new XORShiftRandom(seed)
-    var syn0Global =
-      Array.fill[Float](vocabSize * vectorSize)((initRandom.nextFloat() - 0.5f) / vectorSize)
- var syn1Global = new Array[Float](vocabSize * vectorSize)
-
+ var synGlobal =
--- End diff --
We need to perform reduceByKey on both syn0 and syn1, but the two arrays have
different updated keys. To run a single reduceByKey over syn0 and syn1
together, every entry needs a unique key; one way to achieve this is to treat
i + vocabSize as the key for syn1(i). Then, after we collect, we slice the
result to update syn0Global and syn1Global. Any better idea?
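A rough sketch of the key-offset idea, using plain Scala collections in place of an RDD (the `groupBy`/`reduce` below stands in for `reduceByKey`, and the update sequences, `combineAndSlice` name, and toy sizes are all hypothetical, not from the patch):

```scala
object SynKeySketch {
  // Merge syn0 and syn1 updates into one key space, aggregate, then slice back.
  // syn0 row i keeps key i; syn1 row i gets key i + vocabSize, so a single
  // reduceByKey-style pass can aggregate both without key collisions.
  def combineAndSlice(
      vocabSize: Int,
      vectorSize: Int,
      syn0Updates: Seq[(Int, Array[Float])],
      syn1Updates: Seq[(Int, Array[Float])]): (Array[Float], Array[Float]) = {
    val combined = syn0Updates ++ syn1Updates.map { case (i, v) => (i + vocabSize, v) }

    // Stand-in for RDD.reduceByKey: element-wise sum of vectors per key.
    val reduced: Map[Int, Array[Float]] =
      combined.groupBy(_._1).map { case (k, vs) =>
        k -> vs.map(_._2).reduce((a, b) => a.zip(b).map { case (x, y) => x + y })
      }

    // After collect(), slice by key range to update the two global arrays.
    val syn0Global = new Array[Float](vocabSize * vectorSize)
    val syn1Global = new Array[Float](vocabSize * vectorSize)
    reduced.foreach { case (k, v) =>
      if (k < vocabSize) Array.copy(v, 0, syn0Global, k * vectorSize, vectorSize)
      else Array.copy(v, 0, syn1Global, (k - vocabSize) * vectorSize, vectorSize)
    }
    (syn0Global, syn1Global)
  }
}
```

In a real Spark job the same offset would be applied before `reduceByKey` on the RDD of updates, and the slicing would happen on the driver after `collect()`.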