[
https://issues.apache.org/jira/browse/HBASE-26659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yutong Xiao updated HBASE-26659:
--------------------------------
Description:
Currently, the process for writing HFileBlocks into the IOEngine in BucketCache is:
{code:java}
if (data instanceof HFileBlock) {
  // If an instance of HFileBlock, save on some allocations.
  HFileBlock block = (HFileBlock) data;
  ByteBuff sliceBuf = block.getBufferReadOnly();
  ByteBuffer metadata = block.getMetaData();
  ioEngine.write(sliceBuf, offset);
  ioEngine.write(metadata, offset + len - metadata.limit());
}
{code}
The getMetaData() function in HFileBlock is:
{code:java}
public ByteBuffer getMetaData() {
  ByteBuffer bb = ByteBuffer.allocate(BLOCK_METADATA_SPACE);
  bb = addMetaData(bb, true);
  bb.flip();
  return bb;
}
{code}
getMetaData() allocates a new ByteBuffer on every call. We could instead reuse a buffer held as a local variable of the WriterThread, avoiding a fresh allocation of this small ByteBuffer for every block.
Reasons:
1. Within a WriterThread, the blocks in doDrain() are written to the IOEngine sequentially, so there is no multi-thread safety concern.
2. Once IOEngine.write() returns, the contents of the metadata ByteBuffer have been safely copied into the backing byte array (ByteBufferIOEngine) or the FileChannel (FileIOEngine); the buffer's lifecycle is confined to the if statement above.
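A minimal, hypothetical sketch of the proposed reuse (names such as BLOCK_METADATA_SPACE and fillMetaData below are illustrative stand-ins, not the actual HBase API): the writer thread holds a single buffer, and clear()/flip() reset it for each block instead of allocating a new ByteBuffer per call.

```java
import java.nio.ByteBuffer;

public class MetadataBufferReuse {
  // Size is an assumption; the real constant lives in HFileBlock.
  static final int BLOCK_METADATA_SPACE = 36;

  // One buffer per writer thread, allocated exactly once.
  private final ByteBuffer reusedMetadata = ByteBuffer.allocate(BLOCK_METADATA_SPACE);

  // Stand-in for HFileBlock.addMetaData(bb, true): writes a few fields.
  private void fillMetaData(ByteBuffer bb, long offsetField) {
    bb.putLong(offsetField);
    bb.putInt(42); // placeholder field
  }

  // Called once per block inside the drain loop; the buffer is reset
  // and refilled, so no new allocation happens on this hot path.
  public ByteBuffer metaDataFor(long offsetField) {
    reusedMetadata.clear();        // reset position/limit for reuse
    fillMetaData(reusedMetadata, offsetField);
    reusedMetadata.flip();         // ready for ioEngine.write(...)
    return reusedMetadata;
  }

  public static void main(String[] args) {
    MetadataBufferReuse writer = new MetadataBufferReuse();
    ByteBuffer first = writer.metaDataFor(100L);
    int firstLimit = first.limit();
    ByteBuffer second = writer.metaDataFor(200L);
    // The same instance is handed back each time: no per-block allocation.
    System.out.println(first == second);              // prints "true"
    System.out.println(firstLimit == second.limit()); // prints "true"
  }
}
```

This is safe only under the two conditions above: the drain loop is single-threaded per buffer, and the write call fully consumes the buffer before the next block overwrites it.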
was:
Currently, the process to write HFileBlocks into IOEngine in BucketCache is:
{code:java}
if (data instanceof HFileBlock) {
  // If an instance of HFileBlock, save on some allocations.
  HFileBlock block = (HFileBlock) data;
  ByteBuff sliceBuf = block.getBufferReadOnly();
  ByteBuffer metadata = block.getMetaData();
  ioEngine.write(sliceBuf, offset);
  ioEngine.write(metadata, offset + len - metadata.limit());
}
{code}
The getMetaData() function in HFileBlock is:
{code:java}
public ByteBuffer getMetaData() {
  ByteBuffer bb = ByteBuffer.allocate(BLOCK_METADATA_SPACE);
  bb = addMetaData(bb, true);
  bb.flip();
  return bb;
}
{code}
It will allocate new ByteBuffer every time.
We could reuse a local variable of WriterThread to reduce the new allocation of
metadata.
Reasons:
1. In a WriterThread, blocks are written into IOEngine sequentially, so there is
no multi-thread problem.
2. After IOEngine.write() function, the data in metadata bytebuffer has been
transformed into ByteArray (ByteBufferIOEngine) or FileChannel (FileIOEngine)
safely. The lifecycle of it is within the if statement above.
> The ByteBuffer of metadata in RAMQueueEntry in BucketCache could be reused.
> ---------------------------------------------------------------------------
>
> Key: HBASE-26659
> URL: https://issues.apache.org/jira/browse/HBASE-26659
> Project: HBase
> Issue Type: Improvement
> Reporter: Yutong Xiao
> Assignee: Yutong Xiao
> Priority: Major
--
This message was sent by Atlassian Jira
(v8.20.1#820001)