GitHub user attilapiros opened a pull request:
https://github.com/apache/spark/pull/21592
[SPARK-24578][Core] Avoid timeout at reading remote cache block
## What changes were proposed in this pull request?
In MessageWithHeader the copyByteBuf() method is called several times when the
byteBuf is huge. If the buffer is a large CompositeByteBuf made up of many
small chunks, then each call to copyByteBuf requests the whole nioBuffer,
which re-merges those small chunks again and again, even though only a small
part of the merged buffer is actually written to the channel (there is
additional buffering at this level). This repeated re-merging can take
considerable time, which can cause a timeout at the client. When the client
times out, it closes the socket, which on the sender side leads to a
"java.io.IOException: Broken pipe" exception.
With this PR only the relevant parts of the nioBuffer are built.
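The actual fix lives in Spark's MessageWithHeader.copyByteBuf, which operates
on Netty ByteBufs. As a rough, self-contained illustration of the idea (merge
only a bounded window of the composite buffer per write, instead of merging
the whole thing every time), here is a hypothetical sketch using plain Java
NIO; the class, method, and the NIO_BUFFER_LIMIT value below are illustrative,
not the PR's actual code:

```java
import java.nio.ByteBuffer;
import java.util.List;

public class LimitedCopy {
    // Illustrative cap on how many bytes are merged per write call.
    static final int NIO_BUFFER_LIMIT = 4;

    // Merge at most `limit` bytes starting at `offset` from the chunk list
    // into one ByteBuffer, rather than flattening every chunk on each call
    // (which is what repeatedly asking a CompositeByteBuf for its whole
    // nioBuffer effectively does).
    static ByteBuffer sliceLimited(List<byte[]> chunks, int offset, int limit) {
        ByteBuffer out = ByteBuffer.allocate(limit);
        int pos = 0; // absolute position within the logical composite buffer
        for (byte[] chunk : chunks) {
            for (byte b : chunk) {
                if (pos++ >= offset && out.hasRemaining()) {
                    out.put(b);
                }
            }
        }
        out.flip(); // make the merged window readable
        return out;
    }
}
```

A caller would advance `offset` by the number of bytes actually written to the
channel, so each iteration pays only for merging the bounded window it needs.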
## How was this patch tested?
With unit tests.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/attilapiros/spark SPARK-24578
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/21592.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #21592
----
commit 6000c45c6937cedab8dd92933fb8c9fb0b2fc0a4
Author: attilapiros <piros.attila.zsolt@...>
Date: 2018-06-19T20:00:51Z
initial version
----
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]