[
https://issues.apache.org/jira/browse/HBASE-27049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17538554#comment-17538554
]
Andrew Kyle Purtell commented on HBASE-27049:
---------------------------------------------
I thought about doing this when adding the new codecs recently, but because we
have used Hadoop compression streams for compression and decompression, we have
a binary data format issue when switching to an implementation that does not
use them. Internally the streams write their own “block” metadata. It should be
possible to maintain compatibility with read logic that can handle both the old
way and the new way. If we change the write-side details we will need to
increment the HFile version number so older versions of HBase will know they
cannot read the files.
> Decrease memory copies when decompressing data
> ----------------------------------------------
>
> Key: HBASE-27049
> URL: https://issues.apache.org/jira/browse/HBASE-27049
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: chenfengge
> Priority: Minor
>
> HBase RegionServer uses createDecompressionStream in the class
> org.apache.hadoop.hbase.io.compress.Compression, which causes extra memory
> copies during decompression. We could offer an interface for block
> decompression, such as "void decompress(ByteBuff src, ByteBuff dst);", and
> provide a default implementation for all algorithms.
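
For illustration, a sketch of what the block-decompression hook proposed above
could look like. The proposal uses HBase's ByteBuff; java.nio.ByteBuffer and
the java.util.zip.Inflater-based DEFLATE example (using the Java 11+ buffer
overloads) are stand-ins chosen only to keep the sketch self-contained and
runnable, not the suggested implementation.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.util.zip.DataFormatException;
    import java.util.zip.Inflater;

    public interface BlockDecompressor {

      /** Decompress one compressed block from src into dst without intermediate copies. */
      void decompress(ByteBuffer src, ByteBuffer dst) throws IOException;

      /** Example buffer-to-buffer implementation for DEFLATE. */
      static BlockDecompressor deflate() {
        return (src, dst) -> {
          Inflater inflater = new Inflater();
          try {
            inflater.setInput(src);               // Java 11+: read input directly from the buffer
            while (!inflater.finished() && dst.hasRemaining()) {
              // Write output directly into the destination buffer.
              if (inflater.inflate(dst) == 0 && inflater.needsInput()) {
                throw new IOException("truncated compressed block");
              }
            }
          } catch (DataFormatException e) {
            throw new IOException("corrupt compressed block", e);
          } finally {
            inflater.end();
          }
        };
      }
    }

Other algorithms would plug in the same way, each supplying its own
decompress(src, dst) over the raw compressed block.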