GitHub user NiharS opened a pull request:
https://github.com/apache/spark/pull/22114
[SPARK-24938][Core] Prevent Netty from using onheap memory for headers
without regard for configuration
…ffer type instead of immediately opening a pool of onheap memory for
headers
## What changes were proposed in this pull request?
In MessageEncoder.java, the header buffer was always allocated in onheap
memory, regardless of whether Netty was configured to use or prefer onheap or
offheap buffers. By default this caused Netty to allocate a 16 MB onheap pool
just for a tiny header message. It would be more practical to draw the header
from the buffer pool the allocator already prefers.
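A minimal sketch of the kind of change described (the class and method names
below are illustrative stand-ins, not the exact MessageEncoder diff):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;

class HeaderAllocationSketch {
  // Before: the header is forced into a heap buffer, even when the channel's
  // allocator was configured to prefer offheap (direct) memory. With a pooled
  // allocator this spins up an onheap arena just for the small header.
  static ByteBuf allocateHeaderOld(ChannelHandlerContext ctx, int headerLength) {
    return ctx.alloc().heapBuffer(headerLength);
  }

  // After: let the channel's allocator decide, so the header comes out of
  // whichever pool (onheap or offheap) the transport was configured to prefer.
  static ByteBuf allocateHeaderNew(ChannelHandlerContext ctx, int headerLength) {
    return ctx.alloc().buffer(headerLength);
  }
}
```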
Using a memory monitor tool on a simple Spark application, the following
services currently allocate 16 MB of onheap memory each:
- netty-rpc-client
- netty-blockTransfer-client
- netty-external-shuffle-client

With this change, the memory monitor tool reports all three of these services
as using 0 B of onheap memory. The offheap memory allocation does not
increase; more of the already-allocated offheap space is simply used.
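The underlying allocator behavior can be reproduced with Netty alone; the
snippet below is only an illustration of pooled-allocator behavior (not
output from the memory monitor tool), and the printed numbers depend on the
Netty version's chunk size:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class PooledAllocDemo {
  public static void main(String[] args) {
    // Allocator configured to prefer direct (offheap) buffers.
    PooledByteBufAllocator alloc = new PooledByteBufAllocator(/* preferDirect = */ true);

    // Forcing a heap buffer makes the pool back it with an onheap chunk,
    // even though the allocator prefers direct memory.
    ByteBuf forcedHeapHeader = alloc.heapBuffer(16);
    System.out.println("after heapBuffer(): used heap = " + alloc.metric().usedHeapMemory()
        + ", used direct = " + alloc.metric().usedDirectMemory());
    forcedHeapHeader.release();

    // Respecting the allocator's preference draws from the direct pool instead.
    ByteBuf preferredHeader = alloc.buffer(16);
    System.out.println("after buffer():     used heap = " + alloc.metric().usedHeapMemory()
        + ", used direct = " + alloc.metric().usedDirectMemory());
    preferredHeader.release();
  }
}
```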
## How was this patch tested?
Manually tested the change using the spark-memory monitoring tool:
https://github.com/squito/spark-memory
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/NiharS/spark nettybuffer
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/22114.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #22114
----
commit c2f9ed10776842ffe0746fcc89b157675fa6c455
Author: Nihar Sheth <niharrsheth@...>
Date: 2018-08-14T22:49:41Z
netty defaults to using current buffers specified by the preferred buffer
type instead of immediately opening a pool of onheap memory for headers
----