GitHub user caneGuy reopened a pull request:
https://github.com/apache/spark/pull/18730
[SPARK-21527][CORE] Use buffer limit in order to use JAVA NIO Util's
buffercache
## What changes were proposed in this pull request?
Right now, ChunkedByteBuffer#writeFully does not slice the bytes before writing them to the channel. Observe the code in Java NIO's Util#getTemporaryDirectBuffer below:
```java
BufferCache cache = bufferCache.get();
ByteBuffer buf = cache.get(size);
if (buf != null) {
    return buf;
} else {
    // No suitable buffer in the cache so we need to allocate a new
    // one. To avoid the cache growing then we remove the first
    // buffer from the cache and free it.
    if (!cache.isEmpty()) {
        buf = cache.removeFirst();
        free(buf);
    }
    return ByteBuffer.allocateDirect(size);
}
```
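For illustration, here is a minimal, hedged example of the unsliced case (the class name, file name, and buffer size are made up). Writing a large heap buffer in a single call goes through the JDK's non-direct-buffer path, which requests a temporary direct buffer sized to the remaining bytes of that one call, so the sizes vary and the cache above rarely helps:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Illustration only: an unsliced write of one large heap buffer.
public class UnslicedWriteDemo {
    public static void main(String[] args) throws IOException {
        ByteBuffer huge = ByteBuffer.allocate(64 * 1024 * 1024); // 64 MB heap buffer
        try (FileChannel ch = FileChannel.open(Paths.get("demo.bin"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            while (huge.hasRemaining()) {
                // With a heap buffer, each call requests a temporary direct buffer of
                // up to huge.remaining() bytes, i.e. up to 64 MB of off-heap memory.
                ch.write(huge);
            }
        }
    }
}
```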
If we slice the buffer into fixed-size chunks first, every write requests a bounded size, so the buffer cache can be reused and a direct buffer only needs to be allocated on the first write call.
Since a new direct buffer is allocated for each oversized write, we cannot control when that buffer is freed; this once caused a memory issue in our production cluster.
In this patch, I add a new API that slices the buffer into fixed-size chunks for writing, as sketched below.
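A minimal sketch of the idea, not the actual Spark code; the class name, method signature, and the 256 KB chunk size are illustrative assumptions:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

public final class ChunkedWriteSketch {

    // Hypothetical chunk size; the real patch would pick its own constant or config value.
    private static final int WRITE_CHUNK_SIZE = 256 * 1024;

    // Writes all remaining bytes of src to channel, never handing more than
    // WRITE_CHUNK_SIZE bytes to a single write() call. Because no write exceeds
    // the chunk size, the JDK's per-thread temporary direct-buffer cache can
    // satisfy every call after the first allocation.
    public static void writeFully(WritableByteChannel channel, ByteBuffer src)
            throws IOException {
        while (src.hasRemaining()) {
            int originalLimit = src.limit();
            int chunkEnd = src.position() + Math.min(src.remaining(), WRITE_CHUNK_SIZE);
            src.limit(chunkEnd);          // expose at most one chunk to the channel
            while (src.hasRemaining()) {
                channel.write(src);       // short writes simply loop again
            }
            src.limit(originalLimit);     // restore the full limit for the next chunk
        }
    }
}
```

Capping each call at a fixed size is what lets cache.get(size) in the snippet above return a cached buffer instead of falling through to allocateDirect(size) on every large write.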
## How was this patch tested?
Unit tests and testing in production.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/caneGuy/spark zhoukang/improve-chunkwrite
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/18730.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #18730
----
commit 7cbadc5e367a045dd70af4c85e4c17fd0ac3cba7
Author: zhoukang <[email protected]>
Date: 2017-07-25T09:44:46Z
[SPARK][CORE] Slice write by channel
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]