[ https://issues.apache.org/jira/browse/SPARK-1122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-1122.
------------------------------
    Resolution: Won't Fix

You can also accomplish this with {{mapPartitions}} and simply convert the 
{{Iterator}} you get for each partition into an {{Array}}.
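
A minimal sketch of that workaround, assuming a {{SparkContext}} named {{sc}} and an example {{RDD[Int]}} (both names are illustrative, not part of the issue):

{code:scala}
import org.apache.spark.rdd.RDD

val rdd: RDD[Int] = sc.parallelize(1 to 100, numSlices = 4)

// Materialize each partition's Iterator into a single Array,
// giving an RDD[Array[Int]] with exactly one Array per partition.
val perPartitionArrays: RDD[Array[Int]] =
  rdd.mapPartitions(iter => Iterator(iter.toArray))
{code}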

> Collect the RDD and send to each partition to form a new RDD
> ------------------------------------------------------------
>
>                 Key: SPARK-1122
>                 URL: https://issues.apache.org/jira/browse/SPARK-1122
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: Shuo Xiang
>            Priority: Minor
>
>  Two methods (allCollect, allCollectBroadcast) are added to RDD[T]; each outputs a
> new RDD[Array[T]] in which every partition contains all records of the original RDD
> stored in a single Array[T] instance (the same records that RDD.collect returns).
> This functionality can be useful in machine learning tasks that require sharing
> updated parameters across partitions.
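
For comparison with the resolution above, a minimal sketch (not the proposed API) of how the requested behavior could be approximated with the existing {{collect}}, {{broadcast}}, and {{mapPartitions}} primitives; {{sc}} and {{rdd}} are assumed to exist as in the earlier sketch:

{code:scala}
import org.apache.spark.rdd.RDD

// Collect the full dataset on the driver and broadcast it to the executors.
val all = sc.broadcast(rdd.collect())

// Emit the broadcast Array once per partition, so every partition of the
// resulting RDD[Array[Int]] holds all records of the original RDD.
val allCollected: RDD[Array[Int]] =
  rdd.mapPartitions(_ => Iterator(all.value), preservesPartitioning = true)
{code}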



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
