[
https://issues.apache.org/jira/browse/SPARK-20446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15981122#comment-15981122
]
Nick Pentreath commented on SPARK-20446:
----------------------------------------
The GC would come from the temp result array in the BLAS3 case. The new result
array allocated per group, {{new Array[(Int, (Int, Double))](m * n)}}, is the same
in both cases. I think the temp result array could be pre-allocated per partition
to eliminate the GC issue for that part of the computation. That was the next
efficiency change I planned to look into here.
It could be that combining the above with the priority queue would work best?
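A rough sketch of that combination (not the actual MLlib code; the block layout, the names {{userIds}}, {{userFactors}}, {{itemIds}}, {{itemFactors}}, and the row-major factor storage with {{rank}} columns are all assumptions here): one scratch score buffer is allocated per partition and reused across block pairs, and a small per-user priority queue keeps only the topK candidates, so the full {{m * n}} tuple array is never materialized.
{code:scala}
import scala.collection.mutable

def recommendBlocks(
    blockPairs: Iterator[((Array[Int], Array[Double]), (Array[Int], Array[Double]))],
    rank: Int,
    topK: Int,
    maxBlockSize: Int): Iterator[(Int, (Int, Double))] = {

  // One scratch buffer per partition, reused for every block pair, so the
  // m * n scores are not reallocated (and garbage collected) per group.
  val scores = new Array[Double](maxBlockSize * maxBlockSize)

  blockPairs.flatMap { case ((userIds, userFactors), (itemIds, itemFactors)) =>
    val m = userIds.length
    val n = itemIds.length

    // Fill the reusable buffer with user-item scores (a BLAS-3 gemm in practice).
    var i = 0
    while (i < m) {
      var j = 0
      while (j < n) {
        var s = 0.0
        var k = 0
        while (k < rank) {
          s += userFactors(i * rank + k) * itemFactors(j * rank + k)
          k += 1
        }
        scores(i * n + j) = s
        j += 1
      }
      i += 1
    }

    // Emit only the topK items per user via a small bounded priority queue,
    // instead of materializing all m * n (Int, (Int, Double)) tuples.
    Iterator.range(0, m).flatMap { u =>
      val minHeap = mutable.PriorityQueue.empty[(Int, Double)](
        Ordering.by[(Int, Double), Double](_._2).reverse)
      var j = 0
      while (j < n) {
        val s = scores(u * n + j)
        if (minHeap.size < topK) {
          minHeap.enqueue((itemIds(j), s))
        } else if (s > minHeap.head._2) {
          minHeap.dequeue()
          minHeap.enqueue((itemIds(j), s))
        }
        j += 1
      }
      minHeap.iterator.map { case (item, score) => (userIds(u), (item, score)) }
    }
  }
}
{code}
Something like this would be called from a {{mapPartitions}} over the pairs of user and item factor blocks, so the {{scores}} buffer lives for the whole partition rather than being a fresh large allocation per group.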
> Optimize the process of MLLIB ALS recommendForAll
> -------------------------------------------------
>
> Key: SPARK-20446
> URL: https://issues.apache.org/jira/browse/SPARK-20446
> Project: Spark
> Issue Type: Improvement
> Components: ML, MLlib
> Affects Versions: 2.3.0
> Reporter: Peng Meng
>
> The recommendForAll of MLLIB ALS is very slow.
> GC is a key problem of the current method.
> The task uses the following code to keep the temp result:
> val output = new Array[(Int, (Int, Double))](m*n)
> m = n = 4096 (the default value; there is no way to set it)
> so output is about 4k * 4k * (4 + 4 + 8) bytes = 256 MB. This uses a lot of memory,
> causes serious GC problems, and frequently leads to OOM.
> Actually, we don't need to save all of the temp results. Suppose we recommend the
> topK (topK is about 10 or 20) products for each user; then we only need about
> 4k * topK * (4 + 4 + 8) bytes of memory to hold the temp result.
> I have written a solution for this, with the following test results.
> The Test Environment:
> 3 workers, each with 10 cores, 30 GB memory, and 1 executor.
> The Data: 480,000 users and 17,000 items.
> BlockSize:       1024   2048   4096   8192
> Old method:      245s   332s   488s   OOM
> This solution:   121s   118s   117s   120s
>
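As a rough check of the sizes quoted above (counting each (Int, (Int, Double)) entry as 4 + 4 + 8 = 16 bytes of payload and ignoring JVM object and tuple overhead, which only makes the real footprint larger), a small sketch:
{code:scala}
val m = 4096
val n = 4096
val topK = 10

val bytesPerEntry = 4 + 4 + 8                   // Int + Int + Double payload
val allPairs = m.toLong * n * bytesPerEntry     // 268435456 bytes = 256 MB per task
val onlyTopK = m.toLong * topK * bytesPerEntry  // 655360 bytes = 640 KB per task

println(s"all pairs: ${allPairs >> 20} MB, topK only: ${onlyTopK >> 10} KB")
{code}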