GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/1549
Shuffle blocksByAddress to avoid too many reducers fetching data from one
executor at a time
Like MapReduce, we should shuffle blocksByAddress. This prevents many
reducers from connecting to the same executor at the same time. When a map
output has many partitions, many reducers may connect to that map's
executor simultaneously, which can cause network connection timeouts.
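
A minimal sketch of the idea, not the actual patch: each reducer randomizes
the order in which it visits the per-executor block lists, so requests to any
single executor are spread out instead of arriving from all reducers at once.
The `randomizeFetchOrder` helper and the simplified `BlockManagerId`/`BlockId`
case classes below are stand-ins invented for illustration; they only mirror
the shape of Spark's shuffle fetch types.

    import scala.util.Random

    // Simplified stand-ins for Spark's block identifiers (illustration only).
    case class BlockManagerId(executorId: String, host: String, port: Int)
    case class BlockId(name: String)

    // blocksByAddress pairs each executor's address with the shuffle blocks
    // (and their sizes) that this reducer must fetch from it. Shuffling the
    // sequence randomizes which executor each reducer contacts first.
    def randomizeFetchOrder(
        blocksByAddress: Seq[(BlockManagerId, Seq[(BlockId, Long)])]
    ): Seq[(BlockManagerId, Seq[(BlockId, Long)])] = {
      // Each reducer shuffles independently, so connections to any one
      // executor are staggered rather than opened by all reducers at once.
      Random.shuffle(blocksByAddress)
    }
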
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/lianhuiwang/spark shuffle-blocksByAddress
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/1549.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1549
----
commit 1dc79ba2cdc6c357d1f5c94dd25469f9383654a6
Author: lianhuiwang <[email protected]>
Date: 2014-07-23T17:14:02Z
shuffle blocksByAddress to avoid executor timeout
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---