[
https://issues.apache.org/jira/browse/HBASE-27049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
chenfengge updated HBASE-27049:
-------------------------------
Description: HBase RegionServer uses createDecompressionStream in the class
org.apache.hadoop.hbase.io.compress.Compression, which causes an extra memory copy
during decompression. We can offer an interface for block decompression, such as
"void decompress(ByteBuff src, ByteBuff dst);", and offer a default implementation
for all algorithms.  (was: HBase RegionServer use createDecompressionStream in
class org.apache.hadoop.hbase.io.compress.Compression, which cause extra memory
copy during decompression. We can offer interface for block decompression, like
"void decompress(ByteBuff src, ByteBuff dst);", and offer default implementation
for all algorithms.)
> Decrease memory copy when decompress data
> -----------------------------------------
>
> Key: HBASE-27049
> URL: https://issues.apache.org/jira/browse/HBASE-27049
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: chenfengge
> Priority: Minor
>
> HBase RegionServer uses createDecompressionStream in the class
> org.apache.hadoop.hbase.io.compress.Compression, which causes an extra memory
> copy during decompression. We can offer an interface for block decompression,
> like "void decompress(ByteBuff src, ByteBuff dst);", and offer a default
> implementation for all algorithms.
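
For illustration only, a minimal Java sketch of the kind of interface the description
proposes. The BlockDecompressor name, the fallback method, and the exact ByteBuff calls
are assumptions for discussion, not existing HBase API; only ByteBuff
(org.apache.hadoop.hbase.nio.ByteBuff) and the stream-based decompression path are
taken from the issue text.

{code:java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.hbase.nio.ByteBuff;

/**
 * Hypothetical block-level decompression hook sketched from the issue description.
 * Neither this interface nor its method names exist in HBase today.
 */
public interface BlockDecompressor {

  /**
   * Decompress the remaining bytes of {@code src} directly into {@code dst},
   * so callers can skip the intermediate copies a stream-based path requires.
   */
  void decompress(ByteBuff src, ByteBuff dst) throws IOException;

  /**
   * Possible default for algorithms without a direct buffer-to-buffer path:
   * drain an already-constructed decompression stream (e.g. one obtained from
   * Compression.Algorithm#createDecompressionStream) into {@code dst}. This
   * keeps today's behavior while allowing per-algorithm overrides later.
   */
  default void decompressViaStream(InputStream decompressionStream, ByteBuff dst)
      throws IOException {
    byte[] chunk = new byte[4096];
    int n;
    while ((n = decompressionStream.read(chunk)) > 0) {
      // Assumes ByteBuff.put(byte[], int, int), mirroring java.nio.ByteBuffer.
      dst.put(chunk, 0, n);
    }
  }
}
{code}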
--
This message was sent by Atlassian Jira
(v8.20.7#820007)