GitHub user vgankidi opened a pull request:

    https://github.com/apache/spark/pull/19425

    [SPARK-22196][Core] Combine multiple input splits into a HadoopPartition

    ## What changes were proposed in this pull request?
    
Spark's native read path allows tuning the partition size via
spark.sql.files.maxPartitionBytes and spark.sql.files.openCostInBytes. It would
be useful to add similar behavior to HadoopRDD, i.e., pack multiple input
splits into a single partition based on maxPartitionBytes and openCostInBytes.
We have had several use cases that merge small files by coalescing them by
size to reduce the number of tasks launched.
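    As a rough illustration (not the PR's actual code), the packing heuristic used by Spark's native read path, which this change proposes to mirror in HadoopRDD, can be sketched as follows: greedily fill a partition with splits until maxPartitionBytes is reached, charging each split an extra openCostInBytes so that many tiny files still get grouped into fewer partitions. The object and method names below are hypothetical.

    ```scala
    // Minimal sketch of size-based split packing, under the assumption
    // that each input split contributes its length plus a fixed open cost.
    object SplitPacking {
      // `sizes` are input-split lengths in bytes; returns groups of split
      // sizes, each group becoming one partition.
      def pack(
          sizes: Seq[Long],
          maxPartitionBytes: Long,
          openCostInBytes: Long): Seq[Seq[Long]] = {
        val partitions = scala.collection.mutable.ArrayBuffer.empty[Seq[Long]]
        val current = scala.collection.mutable.ArrayBuffer.empty[Long]
        var currentSize = 0L

        // Close out the partition being built, if non-empty.
        def closePartition(): Unit = {
          if (current.nonEmpty) {
            partitions += current.toSeq
            current.clear()
            currentSize = 0L
          }
        }

        // Largest splits first, so oversized splits get their own partition
        // and small files fill in the remainder.
        sizes.sortBy(-_).foreach { size =>
          if (currentSize + size > maxPartitionBytes) closePartition()
          currentSize += size + openCostInBytes
          current += size
        }
        closePartition()
        partitions.toSeq
      }
    }
    ```

    For example, with maxPartitionBytes = 128 and openCostInBytes = 4, a 200-byte split lands in its own partition while the smaller splits are packed together, cutting the task count without changing the data read.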
    
    ## How was this patch tested?
    Added a unit test. It was also tested manually in a few production jobs. 


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/vgankidi/spark SPARK-22196

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19425.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19425
    
----
commit 2f4e32681e50d4b42ed5b3d05d91e45483679bee
Author: Vinitha Gankidi <vgank...@netflix.com>
Date:   2017-10-04T06:36:56Z

    [SPARK-22196][Core] Combine multiple input splits into a HadoopPartition

----


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
