[ https://issues.apache.org/jira/browse/SPARK-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17462805#comment-17462805 ]

Nicholas Chammas commented on SPARK-5997:
-----------------------------------------

[~tenstriker] - I believe that in your case you should be able to set 
{{spark.sql.files.maxRecordsPerFile}} to a suitable value. Spark will not 
shuffle the data, but it will still split your output across multiple files, 
starting a new file whenever a task hits the record limit.
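
For illustration, a minimal Scala sketch (the paths and the record cap here are made up, not from your job):

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("max-records-demo").getOrCreate()

// Cap each output file at 100,000 records; each write task rolls over
// to a new file once the cap is reached. No shuffle is involved.
spark.conf.set("spark.sql.files.maxRecordsPerFile", 100000L)

// Illustrative paths.
val df = spark.read.parquet("/path/to/input")
df.write.parquet("/path/to/output")
{code}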

> Increase partition count without performing a shuffle
> -----------------------------------------------------
>
>                 Key: SPARK-5997
>                 URL: https://issues.apache.org/jira/browse/SPARK-5997
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Andrew Ash
>            Priority: Major
>
> When decreasing the partition count with rdd.repartition() or rdd.coalesce(), 
> the user can choose whether or not to perform a shuffle.  However, when 
> increasing the partition count, there is no such option: a shuffle always 
> occurs.
> This Jira is to create an {{rdd.repartition(largeNum, shuffle=false)}} call 
> that repartitions to a higher partition count without a shuffle.
> The motivating use case is to shrink each individual partition enough that 
> {{.toLocalIterator}} puts significantly less memory pressure on the driver, 
> since it loads one partition at a time into the driver.
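
For context, here is a short sketch of the behavior described above against the current RDD API; the app name and sizes are illustrative, and the final commented-out call is the proposal, which does not exist today:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

val sc  = new SparkContext(new SparkConf().setAppName("repartition-demo"))
val rdd = sc.parallelize(1 to 1000000, 4)

// Decreasing the partition count: the shuffle is optional.
println(rdd.coalesce(2, shuffle = false).getNumPartitions)  // 2, no shuffle

// Increasing it without a shuffle is silently a no-op today...
println(rdd.coalesce(16, shuffle = false).getNumPartitions) // still 4

// ...so a shuffle is currently the only way to get more partitions.
val more = rdd.repartition(16)                              // full shuffle

// Motivating use case: smaller partitions reduce driver memory
// pressure, since toLocalIterator loads one partition at a time.
more.toLocalIterator.foreach(_ => ())

// Proposed API (does not exist): rdd.repartition(16, shuffle = false)
{code}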


