Github user yangliuyu commented on the pull request:

    https://github.com/apache/spark/pull/964#issuecomment-48007755
  
    @vrilleup I had missed persisting the RDD[Vector]; after adding it, the total 
time only dropped from 20+s to 10+s, and the subsequent aggregate tasks still 
take more than 10s. In stage 47, scheduler delay and GC take most of the time. 
The matrix is 800371 x 100000 with 29898284 non-zeros (about 0.04% dense). Our 
testing environment has only 16 cores, so I don't know whether it would perform 
better on a larger number of cores.
    
    ```scala
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    // One sparse row per song; values are 1.0 (the original snippet used an
    // undefined `v`, so a binary occurrence matrix is assumed here).
    val data = input.map { case (sid, uid) =>
      (songId2IndexMap(sid), userId2IndexMap(uid))
    }.groupByKey().sortByKey()
      .map { case (sid, uids) =>
        val uidSeq = uids.toSet.toSeq.sorted.map(uid => (uid, 1.0))
        Vectors.sparse(userSize, uidSeq)
      }.persist()
    val mat = new RowMatrix(data)
    val svd = mat.computeSparseSVD(100, computeU = true)
    ```
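
    Since GC and scheduler delay dominate, one thing I plan to try is caching 
the vectors in serialized form and lowering the task count to better match our 
16 cores. A rough sketch (untested; the partition count is a guess):

    ```scala
    import org.apache.spark.storage.StorageLevel

    // Untested: serialized storage shrinks the cached vectors and eases GC;
    // fewer partitions (~3x the core count) should reduce per-task scheduler
    // delay. This would replace the plain persist() above.
    val cached = data
      .coalesce(48) // guess: ~3x our 16 cores
      .persist(StorageLevel.MEMORY_ONLY_SER)
    ```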
    
    
![song_clustering_svd_sparse_vector__-_spark_stages](https://cloud.githubusercontent.com/assets/1361821/3478215/f6162636-0330-11e4-8e5f-a56e36ba874b.png)
    
    
![song_clustering_svd_sparse_vector__-_details_for_stage_35](https://cloud.githubusercontent.com/assets/1361821/3478217/206e610a-0331-11e4-86ab-342ee3ce3ed0.png)
    
    
![song_clustering_svd_sparse_vector__-_storage](https://cloud.githubusercontent.com/assets/1361821/3478238/db2ca222-0331-11e4-96a7-b7c1c1af284d.png)


