[
https://issues.apache.org/jira/browse/HBASE-27049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17538950#comment-17538950
]
Andrew Kyle Purtell commented on HBASE-27049:
---------------------------------------------
A configuration toggle is not enough, because a change to the compression
output format changes the file format itself and would cause older versions
of HBase, or readers without the necessary configuration, to fail to read
the file. It is customary and best practice in the design and implementation
of databases that when some part of the binary file format changes, the
field in the file indicating the version or feature flags is updated
accordingly. In our case, the HFile version would move from 3 to 4.
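As a rough illustration of the kind of gate this implies (the names below
are hypothetical, not HBase's actual trailer-parsing code), a reader
compares the on-disk major version against the highest version it supports
and refuses anything newer:

    import java.io.IOException;

    // Hypothetical sketch; the real check lives in HFile trailer parsing
    // under different names.
    final class VersionGate {
        static final int MAX_SUPPORTED_MAJOR_VERSION = 3;

        static void checkVersion(int majorVersionFromTrailer) throws IOException {
            if (majorVersionFromTrailer > MAX_SUPPORTED_MAJOR_VERSION) {
                // An older reader takes this path when handed an HFile v4
                // file, e.g. a flush written by a newer server during a
                // rolling upgrade.
                throw new IOException("Unsupported HFile major version "
                    + majorVersionFromTrailer + "; this reader supports up to "
                    + MAX_SUPPORTED_MAJOR_VERSION);
            }
        }
    }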
There should then be a migration step that is clearly documented for the
user as a point of no return. They need to know that during a rolling
upgrade the older version of HBase will not be able to read HFiles
(flushes, etc.) written by the new version being deployed, so they can plan
for scenarios where the rolling upgrade becomes abnormal. They also need to
know that rollback to the older version will not be possible.
> Decrease memory copies when decompressing data
> ----------------------------------------------
>
> Key: HBASE-27049
> URL: https://issues.apache.org/jira/browse/HBASE-27049
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: chenfengge
> Priority: Minor
>
> HBase RegionServer uses createDecompressionStream in class
> org.apache.hadoop.hbase.io.compress.Compression, which causes an extra
> memory copy during decompression. We could offer an interface for block
> decompression, such as "void decompress(ByteBuff src, ByteBuff dst);",
> and provide a default implementation for all algorithms.
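For context, here is a minimal sketch of the shape of interface the report
proposes, using java.nio.ByteBuffer as a stand-in for HBase's ByteBuff and
illustrative method names (the signatures are simplified, not HBase's
actual API):

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.ByteBuffer;

    interface BlockDecompressor {

        // Buffer-to-buffer decompression; codecs that can operate directly
        // on buffers would override this and skip the intermediate copies.
        default void decompress(ByteBuffer src, ByteBuffer dst) throws IOException {
            // Default implementation: fall back to the existing stream-based
            // path, which incurs the extra copy described above.
            try (InputStream in = createDecompressionStream(src)) {
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    dst.put(chunk, 0, n);
                }
            }
        }

        // Existing stream-based entry point (simplified signature).
        InputStream createDecompressionStream(ByteBuffer src) throws IOException;
    }

A codec with a native block API (LZ4, for example) could then override
decompress to work on the buffers directly.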
--
This message was sent by Atlassian Jira
(v8.20.7#820007)