[ https://issues.apache.org/jira/browse/SPARK-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603543#comment-14603543 ]

Perinkulam I Ganesh commented on SPARK-5997:
--------------------------------------------

Hi,

I'm new to Spark, so I may be speaking out of ignorance, but wouldn't the
following function accomplish the above?

The approach is simple: if the number of partitions requested is higher than
the current count, collapse the current partitions into a single partition
with no shuffle. With only one partition, repartitioning with or without a
shuffle behaves identically.

So take that single partition and, with a shuffle, split it back out into the
higher partition count.

  def repartitionv2(numPartitions: Int, shuffle: Boolean)(implicit ord: Ordering[T] = null)
      : RDD[T] = withScope {
    if (shuffle) {
      // A shuffle was requested: coalesce already handles both growing and
      // shrinking the partition count in that case.
      coalesce(numPartitions, shuffle)
    } else {
      val currentPartitions = partitions.length
      if (numPartitions > currentPartitions) {
        // Growing the partition count: first collapse everything into a single
        // partition without a shuffle, then split that one partition back out
        // to the requested count (this second coalesce does shuffle).
        val collapsed = coalesce(1, shuffle = false)
        collapsed.coalesce(numPartitions, shuffle = true)
      } else {
        // Shrinking (or keeping) the partition count: no shuffle needed.
        coalesce(numPartitions, shuffle)
      }
    }
  }

It seems to work ...
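
For example, a quick sanity check from the spark-shell might look like the
sketch below (just an illustration, assuming repartitionv2 has been added to
RDD as above; the variable names are made up):

  // Hypothetical usage of the repartitionv2 sketch above.
  val rdd = sc.parallelize(1 to 1000, 4)        // start with 4 partitions
  val grown = rdd.repartitionv2(16, shuffle = false)
  println(grown.partitions.length)              // 16
  println(grown.count())                        // 1000, all elements preserved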

thanks

- P. I. 

> Increase partition count without performing a shuffle
> -----------------------------------------------------
>
>                 Key: SPARK-5997
>                 URL: https://issues.apache.org/jira/browse/SPARK-5997
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Andrew Ash
>
> When decreasing partition count with rdd.repartition() or rdd.coalesce(), the 
> user has the ability to choose whether or not to perform a shuffle.  However 
> when increasing partition count there is no option of whether to perform a 
> shuffle or not -- a shuffle always occurs.
> This Jira is to create a {{rdd.repartition(largeNum, shuffle=false)}} call 
> that performs a repartition to a higher partition count without a shuffle.
> The motivating use case is to decrease the size of an individual partition 
> enough that .toLocalIterator has significantly reduced memory pressure on 
> the driver, since it loads one partition at a time into the driver.
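
For context, the motivating pattern from the description would look roughly
like the sketch below; today repartition() always shuffles when growing the
partition count, which is the cost the proposal wants to avoid (a rough
sketch, names illustrative):

  // Shrink individual partitions so that toLocalIterator, which pulls one
  // partition at a time into the driver, needs far less driver memory.
  // Note: this repartition() call shuffles today.
  val wide = sc.parallelize(1 to 10000000, 8)   // a few large partitions
  val narrow = wide.repartition(800)            // many small partitions
  narrow.toLocalIterator.foreach { x =>
    // process one element at a time; at most one small partition
    // is resident in driver memory at any point
  }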



