chenfengge created HBASE-27049:
----------------------------------
Summary: Decrease memory copy when decompress data
Key: HBASE-27049
URL: https://issues.apache.org/jira/browse/HBASE-27049
Project: HBase
Issue Type: Improvement
Components: regionserver
Reporter: chenfengge
The HBase RegionServer uses createDecompressionStream in org.apache.hadoop.hbase.io.compress.Compression, which causes extra memory copies during decompression. We could offer an interface for block decompression, such as "void decompress(ByteBuff src, ByteBuff dst);", and provide a default implementation for all algorithms.
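A minimal sketch of what such an interface might look like is below. The interface name BlockDecompressor and the stream-based fallback method are illustrative assumptions, not existing HBase API; ByteBuff refers to org.apache.hadoop.hbase.nio.ByteBuff from hbase-common.

{code:java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.hbase.nio.ByteBuff;

// Hypothetical interface name; not part of the current HBase codebase.
public interface BlockDecompressor {

  /**
   * Decompress all readable bytes of {@code src} into {@code dst}, starting at
   * dst's current position. Codec implementations that can operate directly on
   * the underlying buffers (e.g. a one-shot native call) avoid the intermediate
   * byte[] copies of the stream-based path.
   */
  void decompress(ByteBuff src, ByteBuff dst) throws IOException;

  /**
   * Stream-based fallback mirroring today's createDecompressionStream path:
   * the caller passes the codec's decompression stream over src, and the
   * output is copied chunk by chunk into dst. This keeps one copy, but gives
   * every algorithm a working default while zero-copy fast paths are added
   * codec by codec.
   */
  default void decompressViaStream(InputStream decompressed, ByteBuff dst)
      throws IOException {
    byte[] chunk = new byte[4096];
    int n;
    while ((n = decompressed.read(chunk)) > 0) {
      // copy the decompressed bytes into the destination buffer
      dst.put(chunk, 0, n);
    }
  }
}
{code}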