[ https://issues.apache.org/jira/browse/SPARK-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-2121.
------------------------------
    Resolution: Not a Problem

> Not fully cached when there is enough memory in ALS
> ---------------------------------------------------
>
>                 Key: SPARK-2121
>                 URL: https://issues.apache.org/jira/browse/SPARK-2121
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, MLlib, Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Shuo Xiang
>
> While factorizing a large matrix with the latest Alternating Least Squares 
> (ALS) implementation in MLlib, the Spark UI shows that Spark fails to cache 
> all partitions of some RDDs even though memory is sufficient. Please see 
> [this 
> post](http://apache-spark-user-list.1001560.n3.nabble.com/Not-fully-cached-when-there-is-enough-memory-tt7429.html)
>  for screenshots. This may cause subsequent job failures when executing 
> `userOut.count()` or `productsOut.count()`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
