[
https://issues.apache.org/jira/browse/HBASE-29135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932223#comment-17932223
]
Hudson commented on HBASE-29135:
--------------------------------
Results for branch branch-2.6
[build #286 on
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286/]:
(x) *{color:red}-1 overall{color}*
----
details (if available):
(/) {color:green}+1 general checks{color}
-- For more information [see general
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286/General_20Nightly_20Build_20Report/]
(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2)
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3)
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk17 hadoop3 checks{color}
-- For more information [see jdk17
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk17 hadoop ${HADOOP_THREE_VERSION} backward compatibility
checks{color}
-- For more information [see jdk17
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(x) {color:red}-1 source release artifact{color}
-- Something went wrong with this stage, [check relevant console
output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286//console].
(x) {color:red}-1 client integration test{color}
-- Something went wrong with this stage, [check relevant console
output|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/286//console].
> ZStandard decompression can operate directly on ByteBuffs
> ---------------------------------------------------------
>
> Key: HBASE-29135
> URL: https://issues.apache.org/jira/browse/HBASE-29135
> Project: HBase
> Issue Type: Improvement
> Reporter: Charles Connell
> Assignee: Charles Connell
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.0.0-beta-2, 2.6.3, 2.5.12
>
> Attachments: create-decompression-stream-zstd.html
>
>
> I've been thinking about ways to improve HBase's performance when reading
> HFiles, and I believe there is significant opportunity. I look at many
> RegionServer profile flamegraphs of my company's servers. A pattern that I've
> discovered is that object allocation in a very hot code path is a performance
> killer. The HFile decoding code makes some effort to avoid this, but it isn't
> totally successful.
> Each time a block is decoded in {{HFileBlockDefaultDecodingContext}}, a new
> {{DecompressorStream}} is allocated and used. This is a lot of allocation,
> and the use of the streaming pattern requires copying every byte to be
> decompressed more times than necessary. Each byte is copied from a
> {{ByteBuff}} into a {{byte[]}}, then decompressed, then copied back to a
> {{ByteBuff}}. For decompressors like
> {{org.apache.hadoop.hbase.io.compress.zstd.ZstdDecompressor}} that only
> operate on direct memory, two additional copies are introduced to move from a
> {{byte[]}} to a direct NIO {{ByteBuffer}}, then back to a {{byte[]}}.
> Aside from the copy inherent in the decompression algorithm itself (the
> necessity of copying from a compressed buffer to an uncompressed buffer), all
> of these other copies can be avoided without sacrificing functionality. Along
> the way, we'll also avoid allocating objects.
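The description above proposes letting the decompressor read from and write to buffers directly, instead of allocating a stream per block and shuttling bytes through intermediate {{byte[]}} arrays. As a rough, hypothetical sketch of that pattern (not HBase's actual code), the example below uses the ByteBuffer-aware {{java.util.zip.Inflater}} API from Java 11+ as a stand-in for the zstd codec: a single reusable decompressor instance operates directly on NIO buffers, with no per-block stream allocation and no intermediate array copies.

```java
import java.nio.ByteBuffer;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DirectBufferDecompression {
    // Reusable decompressor: reset() between blocks instead of
    // allocating a new DecompressorStream per block.
    private static final Inflater INFLATER = new Inflater();

    // Decompress straight from src into dst; no byte[] staging copies.
    static int decompress(ByteBuffer src, ByteBuffer dst) throws DataFormatException {
        INFLATER.reset();
        INFLATER.setInput(src);       // reads directly from the buffer
        return INFLATER.inflate(dst); // writes directly into the buffer
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "hbase block payload".getBytes();

        // Compress into a direct buffer to stand in for an on-heap/off-heap
        // HFile block read from disk.
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        ByteBuffer compressed = ByteBuffer.allocateDirect(128);
        deflater.deflate(compressed);
        deflater.end();
        compressed.flip();

        ByteBuffer decompressed = ByteBuffer.allocateDirect(128);
        int n = decompress(compressed, decompressed);
        decompressed.flip();
        byte[] out = new byte[n];
        decompressed.get(out);
        System.out.println(new String(out)); // prints the original payload
    }
}
```

For the actual patch, the zstd decompressor would presumably do the analogous thing over {{ByteBuff}} positions, but the shape of the win is the same: one long-lived decompression context plus buffer-to-buffer calls replaces per-block stream allocation and the array round-trips.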
--
This message was sent by Atlassian Jira
(v8.20.10#820010)