[ https://issues.apache.org/jira/browse/HBASE-26353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17435416#comment-17435416 ]
Hudson commented on HBASE-26353:
--------------------------------
Results for branch master
[build #426 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/]:
(x) *{color:red}-1 overall{color}*
----
details (if available):
(x) {color:red}-1 general checks{color}
-- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/426/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color}
-- See build output for details.
(/) {color:green}+1 client integration test{color}
> Support loadable dictionaries in hbase-compression-zstd
> -------------------------------------------------------
>
> Key: HBASE-26353
> URL: https://issues.apache.org/jira/browse/HBASE-26353
> Project: HBase
> Issue Type: Sub-task
> Reporter: Andrew Kyle Purtell
> Assignee: Andrew Kyle Purtell
> Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> ZStandard supports initialization of compressors and decompressors with a
> precomputed dictionary, which can dramatically improve compression ratio and
> speed for tables with small values. For more details, please see
> [The Case For Small Data Compression|https://github.com/facebook/zstd#the-case-for-small-data-compression].
>
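> Under the hood, the codec implementation can hand such a dictionary to the
> zstd library when it sets up each compressor and decompressor. Below is only
> a minimal sketch of that idea using the zstd-jni bindings that
> hbase-compression-zstd is built on; it is not the codec's actual code path,
> and the dictionary file name and sample value are placeholders:
> {noformat}
> import java.nio.charset.StandardCharsets;
> import java.nio.file.Files;
> import java.nio.file.Paths;
>
> import com.github.luben.zstd.ZstdCompressCtx;
> import com.github.luben.zstd.ZstdDecompressCtx;
>
> public class ZstdDictionaryRoundTrip {
>   public static void main(String[] args) throws Exception {
>     // Placeholder path to a dictionary trained as shown below
>     byte[] dict = Files.readAllBytes(Paths.get("mytable.dict"));
>     byte[] value = "small cell value".getBytes(StandardCharsets.UTF_8);
>
>     // Compress with the dictionary loaded into the compression context
>     ZstdCompressCtx cctx = new ZstdCompressCtx();
>     cctx.setLevel(6);
>     cctx.loadDict(dict);
>     byte[] compressed = cctx.compress(value);
>
>     // Decompression must load the same dictionary
>     ZstdDecompressCtx dctx = new ZstdDecompressCtx();
>     dctx.loadDict(dict);
>     byte[] restored = dctx.decompress(compressed, value.length);
>
>     System.out.println(new String(restored, StandardCharsets.UTF_8));
>   }
> }
> {noformat}
>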
> If a table is going to have a lot of small values and the user can put
> together a representative set of sample files, a dictionary for compressing
> those values can be trained with the {{zstd}} command line utility, available
> in any zstandard package for your favorite OS:
> Training:
> {noformat}
> $ zstd --maxdict=1126400 --train-fastcover=shrink \
> -o mytable.dict training_files/*
> Trying 82 different sets of parameters
> ...
> k=674
> d=8
> f=20
> steps=40
> split=75
> accel=1
> Save dictionary of size 1126400 into file mytable.dict
> {noformat}
> Deploy the dictionary file to HDFS or S3, etc.
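> For HDFS this can be as simple as {{hdfs dfs -put mytable.dict /zdicts/}}.
> Purely for illustration, the same copy done with the Hadoop FileSystem API
> might look like the following sketch; the namenode address and target path
> are placeholders matching the example below:
> {noformat}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class DeployDictionary {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Placeholder namenode; use the cluster's actual fs.defaultFS
>     FileSystem fs = FileSystem.get(URI.create("hdfs://nn/"), conf);
>     // Copy the trained dictionary where region servers can read it
>     fs.copyFromLocalFile(new Path("mytable.dict"),
>         new Path("/zdicts/mytable.dict"));
>   }
> }
> {noformat}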
> Create the table:
> {noformat}
> hbase> create "mytable",
>   ... ,
>   CONFIGURATION => {
>     'hbase.io.compress.zstd.level' => '6',
>     'hbase.io.compress.zstd.dictionary' => 'hdfs://nn/zdicts/mytable.dict'
>   }
> {noformat}
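> For completeness, the equivalent table could also be created from the Java
> client API. This is only an illustrative sketch: it assumes the settings are
> applied as column family CONFIGURATION (they can equally be set at table
> scope), and the family name "f" is a placeholder:
> {noformat}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
> import org.apache.hadoop.hbase.io.compress.Compression;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class CreateDictionaryTable {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     try (Connection conn = ConnectionFactory.createConnection(conf);
>          Admin admin = conn.getAdmin()) {
>       admin.createTable(TableDescriptorBuilder
>           .newBuilder(TableName.valueOf("mytable"))
>           .setColumnFamily(ColumnFamilyDescriptorBuilder
>               .newBuilder(Bytes.toBytes("f"))
>               .setCompressionType(Compression.Algorithm.ZSTD)
>               .setConfiguration("hbase.io.compress.zstd.level", "6")
>               .setConfiguration("hbase.io.compress.zstd.dictionary",
>                   "hdfs://nn/zdicts/mytable.dict")
>               .build())
>           .build());
>     }
>   }
> }
> {noformat}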
> Now start storing data. Compression results even for small values will be
> excellent.
> Note: Beware, if the dictionary is lost, the data will not be decompressible.