[
https://issues.apache.org/jira/browse/HBASE-27264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17580604#comment-17580604
]
Hudson commented on HBASE-27264:
--------------------------------
Results for branch branch-2
[build #619 on
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/619/]:
(/) *{color:green}+1 overall{color}*
----
details (if available):
(/) {color:green}+1 general checks{color}
-- For more information [see general
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/619/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2)
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/619/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3)
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/619/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/619/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color}
-- See build output for details.
(/) {color:green}+1 client integration test{color}
> Add options to consider compressed size when delimiting blocks during hfile
> writes
> ----------------------------------------------------------------------------------
>
> Key: HBASE-27264
> URL: https://issues.apache.org/jira/browse/HBASE-27264
> Project: HBase
> Issue Type: New Feature
> Affects Versions: 3.0.0-alpha-4
> Reporter: Wellington Chevreuil
> Assignee: Wellington Chevreuil
> Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4
>
>
> In HBASE-27232 we modified the "hbase.writer.unified.encoded.blocksize.ratio"
> property so that the encoded size can be considered when delimiting hfile
> blocks during writes.
> -Here we propose two additional properties,
> "hbase.block.size.limit.compressed" and "hbase.block.size.max.compressed",
> that would allow considering the compressed size (if compression is in use)
> when delimiting blocks during hfile writes. When compression is enabled,
> certain datasets can achieve very high compression ratios, so the default
> 64KB block size and 10GB max file size can lead to hfiles with a very large
> number of blocks.-
> -In this proposal, "hbase.block.size.limit.compressed" is a boolean flag that
> switches to the compressed size for delimiting blocks, and
> "hbase.block.size.max.compressed" is an int giving the limit, in bytes, for
> the compressed block size, in order to avoid very large uncompressed blocks
> (defaulting to 320KB).-
> Note: As of 15/08/2022, the original proposal above has been modified to
> define a pluggable strategy for predicting the block compression rate. Please
> refer to the release notes for more details.
>
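For illustration only, a hypothetical hbase-site.xml fragment using the two properties from the original (now supersededded by the pluggable-strategy approach) proposal might look as follows; the property names and the 320KB default come from the struck-through description above, with the value expressed in bytes:

{code:xml}
<!-- Hypothetical sketch of the original (superseded) proposal -->
<property>
  <!-- switch block delimiting to use the compressed size -->
  <name>hbase.block.size.limit.compressed</name>
  <value>true</value>
</property>
<property>
  <!-- cap on compressed block size, in bytes (proposed default: 320KB = 327680) -->
  <name>hbase.block.size.max.compressed</name>
  <value>327680</value>
</property>
{code}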
--
This message was sent by Atlassian Jira
(v8.20.10#820010)