GitHub user f7753 opened a pull request:

    https://github.com/apache/spark/pull/14239

    [SPARK-16593] [CORE] Provide a pre-fetch mechanism to accelerate shuffle stage.

    ## What changes were proposed in this pull request?
    
    Added a pre-fetch mechanism for the shuffle stage.
    The map-side server loads block data from disk before the openBlock message 
arrives. While earlier block data is being transferred over the network, the 
disks are no longer idle: they keep reading upcoming blocks into server memory.
     This shortens the time between the server receiving an openBlock message 
and being ready to transfer the data.
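    
    A minimal sketch of the idea (not the actual NettyBlockRpcServer code): a 
server-side cache eagerly reads blocks that are expected to be requested soon, 
so that an openBlock request can be answered from memory. `PrefetchCache`, 
`readBlockFromDisk`, and the use of a plain `String` block id are hypothetical 
names for illustration only.
    
    ```scala
    import java.nio.ByteBuffer
    import java.util.concurrent.{ConcurrentHashMap, Executors}
    
    // Hypothetical illustration of the pre-fetch idea: blocks likely to be
    // requested soon are read from disk on a background thread, so that when
    // an openBlock request arrives the data is already in memory.
    class PrefetchCache(readBlockFromDisk: String => ByteBuffer) {
      private val cache = new ConcurrentHashMap[String, ByteBuffer]()
      private val pool  = Executors.newSingleThreadExecutor()
    
      // Schedule an eager disk read for a block we expect to be requested.
      def prefetch(blockId: String): Unit = {
        pool.submit(new Runnable {
          override def run(): Unit =
            cache.putIfAbsent(blockId, readBlockFromDisk(blockId))
        })
      }
    
      // On openBlock: serve from memory if the prefetch finished,
      // otherwise fall back to reading from disk.
      def get(blockId: String): ByteBuffer =
        Option(cache.remove(blockId)).getOrElse(readBlockFromDisk(blockId))
    }
    ```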
    
    
    ## How was this patch tested?
    The mechanism is enabled by setting `spark.shuffle.prepare.count` > 0 and 
`spark.shuffle.prepare.open` to `true`.
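    
    For reference, a sketch of how these settings could be supplied, assuming 
the patch reads them as ordinary Spark configuration keys (the count value 
below is an illustrative example, not a recommended setting):
    
    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    
    // Illustrative only: enables the proposed pre-fetch path.
    val conf = new SparkConf()
      .setAppName("shuffle-prefetch-test")
      .set("spark.shuffle.prepare.open", "true") // turn the pre-fetch path on
      .set("spark.shuffle.prepare.count", "4")   // blocks to pre-load (example value)
    
    val sc = new SparkContext(conf)
    ```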


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/f7753/spark prefetch

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/14239.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #14239
    
----
commit ea3bd91c6df3a230099ebef908c35c5f463395f2
Author: MaBiao <[email protected]>
Date:   2016-07-17T14:05:07Z

    Add preFetch mechanism during shuffle process

commit 8d8f13c2ebc6d308a74f4feee40b091dea81be93
Author: MaBiao <[email protected]>
Date:   2016-07-17T15:07:02Z

    Modified the NettyBlockRpcServer's openBlock message handling mechanism

----

