[ https://issues.apache.org/jira/browse/SPARK-1201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14031257#comment-14031257 ]

Mark Hamstra commented on SPARK-1201:
-------------------------------------

What causes this to not be fixable within the scope of 1.0.1?

> Do not materialize partitions whenever possible in BlockManager
> ---------------------------------------------------------------
>
>                 Key: SPARK-1201
>                 URL: https://issues.apache.org/jira/browse/SPARK-1201
>             Project: Spark
>          Issue Type: New Feature
>          Components: Block Manager, Spark Core
>            Reporter: Patrick Wendell
>            Assignee: Andrew Or
>             Fix For: 1.1.0
>
>
> This is a slightly more complex version of SPARK-942 where we try to avoid 
> unrolling iterators in other situations where it is possible. SPARK-942 
> focused on the case where the DISK_ONLY storage level was used. There are 
> other cases though, such as when data is stored serialized in memory but 
> there is not enough memory left to store the RDD.
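The idea in the description can be sketched as a bounded unroll: accumulate elements from the iterator only while a memory budget holds, and once it is exceeded, stream the already-unrolled prefix plus the untouched remainder of the iterator to disk instead of materializing the whole partition. This is a minimal illustrative sketch, not Spark's BlockManager code; `store_iterator`, `memory_budget_bytes`, and `disk_sink` are hypothetical names, and a plain list stands in for the disk store.

```python
import sys
from itertools import chain

def store_iterator(values, memory_budget_bytes, disk_sink):
    """Try to unroll the `values` iterator into memory within a budget.

    Returns ("memory", list) if everything fit, or ("disk", count) after
    spilling the partially unrolled prefix and the rest of the iterator
    to `disk_sink`. The key property: the full partition is never held
    in memory when it does not fit.
    """
    unrolled = []
    used = 0
    for v in values:
        size = sys.getsizeof(v)  # crude per-element size estimate
        if used + size > memory_budget_bytes:
            # Budget exceeded: stream the prefix, the current element,
            # and the remaining (never-materialized) tail to disk.
            count = 0
            for item in chain(unrolled, [v], values):
                disk_sink.append(item)
                count += 1
            return ("disk", count)
        unrolled.append(v)
        used += size
    return ("memory", unrolled)
```

Because `values` is an iterator, `chain(unrolled, [v], values)` resumes consuming it exactly where the unroll stopped, so the spill path only ever buffers the prefix that already fit in the budget.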



--
This message was sent by Atlassian JIRA
(v6.2#6252)
