[
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15870831#comment-15870831
]
Ted Yu commented on HBASE-17623:
--------------------------------
Got some compilation errors applying the patch on branch-1:
{code}
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile)
on project hbase-common: Compilation failure: Compilation failure:
[ERROR]
/Users/tyu/1-hbase/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java:[194,15]
error: constructor Bytes in class Bytes cannot be applied to given types;
[ERROR] int,int
[ERROR] reason: actual and formal argument lists differ in length
[ERROR]
/Users/tyu/1-hbase/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java:[198,15]
error: constructor Bytes in class Bytes cannot be applied to given types;
[ERROR] int,int
[ERROR] reason: actual and formal argument lists differ in length
[ERROR]
/Users/tyu/1-hbase/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java:[212,15]
error: constructor Bytes in class Bytes cannot be applied to given types;
{code}
ChiaPing:
Can you attach a patch for branch-1?
Thanks
> Reuse the bytes array when building the hfile block
> ---------------------------------------------------
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
> Issue Type: Improvement
> Reporter: ChiaPing Tsai
> Assignee: ChiaPing Tsai
> Priority: Minor
> Fix For: 2.0.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png,
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png,
> before(snappy_hfilesize=755MB).png, HBASE-17623.v0.patch,
> HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements:
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block (see the sketch
> after the code below).
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
>     this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, userDataStream,
>         baosInMemory.getBuffer(), blockType);
>     blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) {
>     onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
>         compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
>     onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
>         compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>       onDiskBlockBytesWithHeader.length, fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is unfilled-out.
>   putHeader(onDiskBlockBytesWithHeader, 0,
>       onDiskBlockBytesWithHeader.length + numBytes,
>       uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
>     putHeader(uncompressedBlockBytesWithHeader, 0,
>         onDiskBlockBytesWithHeader.length + numBytes,
>         uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
>     onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>       onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>       onDiskChecksum, 0, fileContext.getChecksumType(), fileContext.getBytesPerChecksum());
> }
> {code}
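To illustrate the reuse idea described above, here is a minimal sketch of a growable buffer that is reset between blocks and copied out only when the block must own its bytes (for example on the cache-on-write path). The ReusableBlockBuffer class and its method names are illustrative only and are not the attached patch:
{code:title=ReusableBlockBuffer.java (sketch)|borderStyle=solid}
import java.util.Arrays;

/**
 * Growable buffer that is reused across blocks: reset() keeps the backing
 * array, write() grows it only when needed, and copyBytes() is called only
 * when the caller must own the bytes (e.g. cache-on-write).
 */
public class ReusableBlockBuffer {
  private byte[] buf = new byte[64 * 1024];
  private int length = 0;

  /** Start a new block without allocating a new array. */
  public void reset() {
    length = 0;
  }

  /** Append bytes, growing the backing array only when it is too small. */
  public void write(byte[] src, int off, int len) {
    if (length + len > buf.length) {
      buf = Arrays.copyOf(buf, Math.max(buf.length * 2, length + len));
    }
    System.arraycopy(src, off, buf, length, len);
    length += len;
  }

  /** Zero-copy access for compression, checksumming and writing out. */
  public byte[] getBuffer() {
    return buf;
  }

  public int size() {
    return length;
  }

  /** Copy out only when the block really needs its own array. */
  public byte[] copyBytes() {
    return Arrays.copyOf(buf, length);
  }
}
{code}
With a buffer like this, finishBlock() could hand (getBuffer(), 0, size()) to the compression and checksum code and pay for an extra copy only when the block is actually cached.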
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)