[ https://issues.apache.org/jira/browse/SPARK-20446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15981648#comment-15981648 ]

Nick Pentreath commented on SPARK-20446:
----------------------------------------

By "compare to DataFrame implementation" I mean the current "recommendAll" 
methods in master/branch-2.2 for {{ALSModel}}. Did you compare against that? If 
so what was the result?

The reason I ask is that conceptually it is doing something fairly similar 
(computing the vector dot products rather than a matrix-matrix multiply, 
followed by a priority-queue aggregator for top-k). The idea was that this 
SparkSQL approach would be more efficient. In practice I didn't find this to 
be the case for large data sizes when comparing to my approach with BLAS 3 
(though granted, yes, there is potential for more GC pressure).

Also, there is really no point in doing the "blockify" operation in this case, 
right? Since you're not using BLAS 3, blocking is unnecessary and the block 
size param is irrelevant.
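The dot-product-plus-priority-queue approach described above can be sketched 
outside Spark roughly as follows (a minimal sketch; `TopKRecommend`, `topK`, 
and the item-array layout are hypothetical names for illustration, not the 
actual ALSModel code):

```scala
import scala.collection.mutable.PriorityQueue

object TopKRecommend {
  // Dot product of a user factor vector and an item factor vector.
  def dot(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => x * y }.sum

  // Keep only the top-k (itemId, score) pairs for one user with a bounded
  // min-heap, instead of materializing every (user, item) score.
  def topK(userVec: Array[Double],
           itemVecs: Array[(Int, Array[Double])],
           k: Int): Array[(Int, Double)] = {
    // Reversed ordering makes the heap's head the SMALLEST of the current
    // top-k scores, so it can be evicted cheaply.
    val heap = PriorityQueue.empty[(Int, Double)](
      Ordering.by[(Int, Double), Double](_._2).reverse)
    for ((itemId, vec) <- itemVecs) {
      val score = dot(userVec, vec)
      if (heap.size < k) {
        heap.enqueue((itemId, score))
      } else if (score > heap.head._2) {
        heap.dequeue()
        heap.enqueue((itemId, score))
      }
    }
    // Return the retained pairs sorted by descending score.
    heap.dequeueAll.toArray.sortBy(-_._2)
  }
}
```

The point of the bounded heap is that per-user temp state is O(k), not O(n 
items), which is exactly the memory argument made in the issue below.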

> Optimize the process of MLLIB ALS recommendForAll
> -------------------------------------------------
>
>                 Key: SPARK-20446
>                 URL: https://issues.apache.org/jira/browse/SPARK-20446
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, MLlib
>    Affects Versions: 2.3.0
>            Reporter: Peng Meng
>
> The recommendForAll of MLLIB ALS is very slow.
> GC is a key problem of the current method.
> The task uses the following code to keep the temp result:
> val output = new Array[(Int, (Int, Double))](m*n)
> m = n = 4096 (the default value; there is no way to set it)
> so output is about 4k * 4k * (4 + 4 + 8) bytes = 256 MB. Allocating this 
> much memory per task causes serious GC pressure and frequently OOMs.
> Actually, we don't need to save all the temp results. Suppose we recommend 
> the topK (topK is about 10 or 20) products for each user; then we only need 
> 4k * topK * (4 + 4 + 8) bytes to save the temp result.
> I have written a solution for this method, with the following test results. 
> The test environment:
> 3 workers; each worker has 10 cores, 30 GB memory, and 1 executor.
> The Data: User 480,000, and Item 17,000
> BlockSize:      1024  2048  4096  8192
> Old method:     245s  332s  488s  OOM
> This solution:  121s  118s  117s  120s
>  
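The memory figures quoted in the issue (4 + 4 + 8 bytes per 
(Int, (Int, Double)) entry) can be checked with a bit of arithmetic. A sketch 
(`TempResultSize` and its methods are hypothetical; the figures count payload 
only and ignore JVM object and tuple overhead, which only makes the real 
footprint larger):

```scala
object TempResultSize {
  // Payload of one (Int, (Int, Double)) entry: 4 + 4 + 8 bytes.
  val bytesPerEntry: Long = 4 + 4 + 8

  // Old method: one entry per (user, item) pair in an m x n block.
  def fullBlock(m: Int, n: Int): Long = m.toLong * n * bytesPerEntry

  // Top-k method: only k entries kept per user.
  def topKBlock(m: Int, k: Int): Long = m.toLong * k * bytesPerEntry
}

// fullBlock(4096, 4096) = 268435456 bytes = 256 MB per task,
// while topKBlock(4096, 10) = 655360 bytes = 640 KB.
```

That roughly 400x reduction in temp-result size is what removes the GC 
pressure and OOMs described above.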



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
