[ 
https://issues.apache.org/jira/browse/HBASE-27049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17538559#comment-17538559
 ] 

chenfengge commented on HBASE-27049:
------------------------------------

Our performance tests show that reducing memory copies when decompressing data 
can improve HBase read performance, especially when the CPU is fully loaded.

Maybe we can just create a configurable interface and keep the old way as the 
default implementation. If someone requires better performance, they can supply 
their own implementation through configuration.
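To make the idea concrete, here is a minimal sketch of what such a pluggable interface could look like. The interface name `BlockDecompressor` and the use of `java.nio.ByteBuffer` (instead of HBase's `ByteBuff`) are assumptions made so the example is self-contained; the default implementation shown uses `java.util.zip.Inflater`, which since Java 11 can inflate directly between buffers without an intermediate byte[] copy, which is the kind of copy-avoidance the comment describes.

```java
import java.nio.ByteBuffer;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical interface mirroring the proposed
// "void decompress(ByteBuff src, ByteBuff dst)" shape, using
// java.nio.ByteBuffer so the sketch is self-contained.
interface BlockDecompressor {
    void decompress(ByteBuffer src, ByteBuffer dst) throws DataFormatException;
}

public class DirectDecompressExample {
    // Example default implementation backed by java.util.zip.Inflater.
    // Inflater.setInput(ByteBuffer) / inflate(ByteBuffer) (Java 11+)
    // decompress buffer-to-buffer with no extra intermediate array.
    static final BlockDecompressor DEFLATE = (src, dst) -> {
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(src);
            while (!inflater.finished() && dst.hasRemaining()) {
                inflater.inflate(dst);
            }
        } finally {
            inflater.end();
        }
    };

    public static void main(String[] args) throws Exception {
        byte[] original = "hello hbase hello hbase".getBytes("UTF-8");

        // Compress some sample data to set up the test input.
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        byte[] compressed = new byte[128];
        int clen = deflater.deflate(compressed);
        deflater.end();

        ByteBuffer src = ByteBuffer.wrap(compressed, 0, clen);
        ByteBuffer dst = ByteBuffer.allocate(original.length);
        DEFLATE.decompress(src, dst);
        dst.flip();

        byte[] out = new byte[dst.remaining()];
        dst.get(out);
        System.out.println(new String(out, "UTF-8"));
    }
}
```

A real implementation would still need to cover codecs without buffer-to-buffer support, which is why keeping the existing stream-based path as the default, as suggested above, seems reasonable.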

> Decrease memory copy when decompress data
> -----------------------------------------
>
>                 Key: HBASE-27049
>                 URL: https://issues.apache.org/jira/browse/HBASE-27049
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: chenfengge
>            Priority: Minor
>
> HBase RegionServer uses createDecompressionStream in the class 
> org.apache.hadoop.hbase.io.compress.Compression, which causes extra memory 
> copies during decompression. We could offer an interface for block 
> decompression, like "void decompress(ByteBuff src, ByteBuff dst);", and 
> provide a default implementation for all algorithms.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
