[ https://issues.apache.org/jira/browse/SPARK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482768#comment-14482768 ]

uncleGen commented on SPARK-6695:
---------------------------------

[~srowen] Thanks for your patience. Yes, it is a good fix with the smallest 
number of changes. In my practical use, we often need to create a very big 
array, and sometimes its size would exceed the maximum Java array length of 
2^31 - 1 elements. So IMHO we could provide a general external `Iterator`, for 
the sake of both usability and memory usage. That said, 
[PR-5364|https://github.com/apache/spark/pull/5364] is enough for this issue.
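For illustration, here is a minimal sketch of such a general external `Iterator`, assuming Java serialization and a count-based spill threshold. The class name `ExternalCollector`, its constructor, and the spill format are assumptions for this sketch, not the API from [PR-5364|https://github.com/apache/spark/pull/5364]:

{code:borderStyle=solid}
import java.io._

import scala.collection.mutable.ArrayBuffer

// Hypothetical sketch: a collector that buffers elements in memory and spills
// them to temporary files once the buffer holds `maxInMemory` elements.
class ExternalCollector[T](maxInMemory: Int = 100000) {
  private val buffer = new ArrayBuffer[T]
  private val spillFiles = new ArrayBuffer[File]

  // Append one element, spilling the in-memory buffer to disk when it fills up.
  def collect(elem: T): Unit = {
    buffer += elem
    if (buffer.size >= maxInMemory) spill()
  }

  private def spill(): Unit = {
    val file = File.createTempFile("external-collector-", ".bin")
    file.deleteOnExit()
    val out = new ObjectOutputStream(
      new BufferedOutputStream(new FileOutputStream(file)))
    try {
      out.writeInt(buffer.size)
      buffer.foreach(e => out.writeObject(e.asInstanceOf[AnyRef]))
    } finally {
      out.close()
    }
    spillFiles += file
    buffer.clear()
  }

  // Stream back every spilled file in order, then the in-memory remainder, so
  // no single Array ever has to hold all elements at once.
  def iterator: Iterator[T] = {
    val spilled = spillFiles.iterator.flatMap { file =>
      val in = new ObjectInputStream(
        new BufferedInputStream(new FileInputStream(file)))
      val count = in.readInt()
      new Iterator[T] {
        private var remaining = count
        def hasNext: Boolean = {
          if (remaining == 0) in.close() // close once this file is drained
          remaining > 0
        }
        def next(): T = {
          remaining -= 1
          in.readObject().asInstanceOf[T]
        }
      }
    }
    spilled ++ buffer.iterator
  }
}
{code}

A production version would spill on estimated byte size (~100MB, say) rather than element count, and would delete spill files deterministically instead of relying on deleteOnExit.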

> Add an external iterator: a Hadoop-like output collector
> --------------------------------------------------------
>
>                 Key: SPARK-6695
>                 URL: https://issues.apache.org/jira/browse/SPARK-6695
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: uncleGen
>
> In practical use, we often need to create a very big iterator: one whose 
> contents are too large for memory or too long for a single array. On the one 
> hand, keeping everything in memory leads to excessive memory consumption. On 
> the other hand, one `Array` may not hold all the elements, since Java array 
> indices are of type `int` (signed 32-bit), which caps the length at 2^31 - 1 
> elements. So, IMHO, we could provide a `collector` with a buffer (100MB or 
> any other size) that can spill data to disk. The use case may look like:
> {code:borderStyle=solid}
> rdd.mapPartitions { it =>
>   ...
>   val collector = new ExternalCollector()
>   collector.collect(a)
>   ...
>   collector.iterator
> }
> {code}
> I have done some related work, and I need your opinions. Thanks!
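For concreteness, the quoted use case might be exercised end-to-end as follows, building on the hypothetical `ExternalCollector` sketched above (the example names and thresholds are assumptions, not code from the actual PR):

{code:borderStyle=solid}
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical end-to-end usage of the ExternalCollector sketched earlier.
object ExternalCollectorExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("external-collector-demo").setMaster("local[*]"))
    val rdd = sc.parallelize(1 to 1000, 4)

    // Each input record expands to 100 output strings; the collector spills
    // to disk whenever 10000 elements accumulate in memory.
    val expanded = rdd.mapPartitions { it =>
      val collector = new ExternalCollector[String](maxInMemory = 10000)
      it.foreach { n =>
        (0 until 100).foreach(i => collector.collect(s"$n-$i"))
      }
      collector.iterator
    }

    println(expanded.count()) // 100000
    sc.stop()
  }
}
{code}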


