[ https://issues.apache.org/jira/browse/SPARK-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiangrui Meng resolved SPARK-2995.
----------------------------------

       Resolution: Fixed
    Fix Version/s: 1.1.0

Issue resolved by pull request 1913
[https://github.com/apache/spark/pull/1913]

> Allow to set storage level for intermediate RDDs in ALS
> -------------------------------------------------------
>
>                 Key: SPARK-2995
>                 URL: https://issues.apache.org/jira/browse/SPARK-2995
>             Project: Spark
>          Issue Type: New Feature
>          Components: MLlib
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>             Fix For: 1.1.0
>
>
> As mentioned in [SPARK-2465], using MEMORY_AND_DISK_SER together with
> spark.rdd.compress=true can reduce the space requirement substantially, at
> the cost of speed. It would be useful to let users set the storage level of
> ALS's intermediate RDDs so they can run ALS on much bigger datasets.
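
A minimal Scala sketch of how the requested option might look from a user's point of view. The setter name setIntermediateRDDStorageLevel is an assumption for illustration and is not confirmed by this message; the tiny in-memory ratings set is likewise only there to keep the sketch self-contained.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.storage.StorageLevel

object AlsStorageLevelSketch {
  def main(args: Array[String]): Unit = {
    // Enable RDD compression so serialized blocks take less space in memory and on disk.
    val conf = new SparkConf()
      .setAppName("ALS with custom intermediate storage level")
      .set("spark.rdd.compress", "true")
    val sc = new SparkContext(conf)

    // Ratings would normally come from a large dataset; a small sample keeps this runnable.
    val ratings = sc.parallelize(Seq(
      Rating(1, 10, 4.0),
      Rating(1, 20, 3.0),
      Rating(2, 10, 5.0)
    ))

    // Hypothetical setter: persist ALS's intermediate RDDs as serialized blocks
    // that can spill to disk, instead of the default storage level.
    val model = new ALS()
      .setRank(10)
      .setIterations(10)
      .setIntermediateRDDStorageLevel(StorageLevel.MEMORY_AND_DISK_SER)
      .run(ratings)

    println(s"Trained model with rank ${model.rank}")
    sc.stop()
  }
}
{code}

MEMORY_AND_DISK_SER keeps partitions serialized and lets them spill to disk, and spark.rdd.compress=true compresses those serialized blocks, which trades CPU time for space in line with the trade-off described above.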



