Github user yangliuyu commented on the pull request:
https://github.com/apache/spark/pull/964#issuecomment-48007755
@vrilleup I had missed persisting the RDD[Vector]; after adding it, the time cost only
drops from 20+s to 10+s, and the subsequent aggregate tasks still take more than 10s.
For stage 47 in particular, scheduler delay and GC take too much time. The matrix is
800371 x 100000 with 29898284 non-zeros. Our test environment only has 16 cores, so I
don't know whether it will perform better with a larger number of cores.
```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// One sparse row per song; the column indices are the users who played it.
val data = input.map { case (sid, uid) =>
  (songId2IndexMap(sid), userId2IndexMap(uid))
}.groupByKey().sortByKey()
  .map { case (_, uids) =>
    // Deduplicate and sort the user indices; each entry gets the value 1.0
    // (the original snippet referenced an undefined `v` here, presumably
    // the play indicator).
    val uidSeq = uids.toSet.toList.sorted.map(uid => (uid, 1.0))
    Vectors.sparse(userSize, uidSeq)
  }.persist()
val mat = new RowMatrix(data)
val svd = mat.computeSparseSVD(100, computeU = true)
```
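Since GC shows up in the stage breakdown, one variant that may be worth trying (just a
sketch on my side, not benchmarked on this data; `rowVectors` is a stand-in for the
RDD[Vector] pipeline built above) is caching the rows in serialized form:

```scala
import org.apache.spark.storage.StorageLevel

// Alternative to the plain .persist() above: MEMORY_ONLY_SER keeps one
// serialized buffer per partition instead of many small vector objects,
// which typically shortens GC pauses at the cost of deserializing rows
// on each pass over the matrix.
val data = rowVectors.persist(StorageLevel.MEMORY_ONLY_SER)
```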