anoopsjohn commented on a change in pull request #479: HBASE-22802 Avoid temp ByteBuffer allocation in FileIOEngine#read
URL: https://github.com/apache/hbase/pull/479#discussion_r318188823
##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java
##########
@@ -129,20 +129,20 @@ public Cacheable read(BucketEntry be) throws IOException {
     long offset = be.offset();
     int length = be.getLength();
     Preconditions.checkArgument(length >= 0, "Length of read can not be less than 0.");
-    ByteBuffer dstBuffer = ByteBuffer.allocate(length);
+    ByteBuff dstBuff = be.allocator.allocate(length);
     if (length != 0) {
-      accessFile(readAccessor, dstBuffer, offset);
+      accessFile(readAccessor, dstBuff, offset);
       // The buffer created out of the fileChannel is formed by copying the data from the file
       // Hence in this case there is no shared memory that we point to. Even if the BucketCache
       // evicts this buffer from the file the data is already copied and there is no need to
       // ensure that the results are not corrupted before consuming them.
-      if (dstBuffer.limit() != length) {
+      if (dstBuff.limit() != length) {
         throw new IllegalArgumentIOException(
-            "Only " + dstBuffer.limit() + " bytes read, " + length + " expected");
+            "Only " + dstBuff.limit() + " bytes read, " + length + " expected");
       }
     }
-    dstBuffer.rewind();
-    return be.wrapAsCacheable(new ByteBuffer[] { dstBuffer });
+    dstBuff.rewind();
+    return be.wrapAsCacheable(dstBuff);
Review comment:
IMHO this introduces unwanted sharing and an unnecessary ref count check.
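
For illustration only (these are not HBase's actual classes), here is a minimal sketch of the extra bookkeeping that a pooled, reference-counted buffer implies compared to the old throwaway heap ByteBuffer. The `RefCountedBuff` class and its `retain`/`release` methods below are hypothetical stand-ins:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical, simplified stand-in for a pooled, reference-counted buffer,
// loosely modelled on what allocating from be.allocator implies.
final class RefCountedBuff {
  private final ByteBuffer buf;
  private final AtomicInteger refCnt = new AtomicInteger(1);

  RefCountedBuff(int capacity) {
    // A real pooled allocator would hand out a recycled off-heap segment here.
    this.buf = ByteBuffer.allocateDirect(capacity);
  }

  RefCountedBuff retain() {
    refCnt.incrementAndGet();
    return this;
  }

  void release() {
    // When the count drops to zero the buffer would go back to the pool;
    // reading it after that point would observe recycled memory.
    if (refCnt.decrementAndGet() == 0) {
      System.out.println("buffer returned to pool");
    }
  }

  int refCnt() {
    return refCnt.get();
  }
}

public class RefCountSketch {
  public static void main(String[] args) {
    // Old behaviour: a throwaway heap copy, no bookkeeping, GC reclaims it.
    ByteBuffer heapCopy = ByteBuffer.allocate(64);

    // New behaviour: a pooled buffer whose lifetime is tracked by a ref count.
    RefCountedBuff pooled = new RefCountedBuff(64);
    pooled.retain();                                   // every sharer must retain ...
    System.out.println("refCnt = " + pooled.refCnt()); // prints refCnt = 2
    pooled.release();                                  // ... and release when done
    pooled.release();                                  // final release returns it to the pool
  }
}
```

The point of the sketch: with `ByteBuffer.allocate` the copy is reclaimed for free, whereas a pooled buffer forces every consumer to balance retain/release, which is the sharing and ref-count cost referred to above.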