This is an automated email from the ASF dual-hosted git repository.
elserj pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git
The following commit(s) were added to refs/heads/branch-2.2 by this push:
new 66000c7 HBASE-22566 Update the 2.x upgrade chapter to include default compaction throughput limits
66000c7 is described below
commit 66000c7f3e5933e741a773f6f4c1098aeb80ed05
Author: Josh Elser <[email protected]>
AuthorDate: Wed Jun 12 11:00:07 2019 -0400
HBASE-22566 Update the 2.x upgrade chapter to include default compaction throughput limits
Signed-off-by: Sean Busbey <[email protected]>
---
src/main/asciidoc/_chapters/upgrading.adoc | 41 +++++++++++++++++++++---------
1 file changed, 29 insertions(+), 12 deletions(-)
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index 5025cb4..10942b6 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -629,18 +629,35 @@ Performance is also an area that is now under active review so look forward to
 improvement in coming releases (See
 link:https://issues.apache.org/jira/browse/HBASE-20188[HBASE-20188 TESTING Performance]).
-[[upgrade2.0.it.kerberos]]
-.Integration Tests and Kerberos
-Integration Tests (`IntegrationTests*`) used to rely on the Kerberos credential cache
-for authentication against secured clusters. This used to lead to tests failing due
-to authentication failures when the tickets in the credential cache expired.
-As of hbase-2.0.0 (and hbase-1.3.0+), the integration test clients will make use
-of the configuration properties `hbase.client.keytab.file` and
-`hbase.client.kerberos.principal`. They are required. The clients will perform a
-login from the configured keytab file and automatically refresh the credentials
-in the background for the process lifetime (See
-link:https://issues.apache.org/jira/browse/HBASE-16231[HBASE-16231]).
-
+[[upgrade2.0.compaction.throughput.limit]]
+.Default Compaction Throughput
+HBase 2.x comes with default limits to the speed at which compactions can execute. This
+limit is defined per RegionServer. In previous versions of HBase, there was no limit to
+the speed at which a compaction could run by default. Applying a limit to the throughput of
+a compaction should ensure more stable operations from RegionServers.
+
+Take care to notice that this limit is _per RegionServer_, not _per compaction_.
+
+The throughput limit is defined as a range of bytes written per second, and is
+allowed to vary within the given lower and upper bound. RegionServers observe the
+current throughput of a compaction and apply a linear formula to adjust the allowed
+throughput, within the lower and upper bound, with respect to external pressure.
+For compactions, external pressure is defined as the number of store files with
+respect to the maximum number of allowed store files. The more store files, the
+higher the compaction pressure.
+
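The linear adjustment described above can be sketched as follows. This is an illustrative model, not HBase's actual implementation (the real logic lives in `PressureAwareCompactionThroughputController`); the function name and the pressure ratio are assumptions made for the example:

```python
# Illustrative sketch of a pressure-aware throughput limit: the allowed
# rate interpolates linearly between the configured bounds as compaction
# pressure (store files vs. the allowed maximum) rises. Not HBase code.

LOWER_BOUND = 10 * 1024 * 1024  # 10 MB/s, the default lower bound
UPPER_BOUND = 20 * 1024 * 1024  # 20 MB/s, the default upper bound

def allowed_throughput(store_files: int, max_store_files: int) -> float:
    """Return the allowed bytes/sec for the current compaction pressure.

    Pressure is modeled as the ratio of current store files to the maximum
    allowed, capped at 1.0; more store files means higher pressure and a
    higher allowed throughput.
    """
    pressure = min(store_files / max_store_files, 1.0)
    return LOWER_BOUND + (UPPER_BOUND - LOWER_BOUND) * pressure

# No pressure: throttled down to the lower bound.
print(allowed_throughput(0, 16))   # 10485760.0
# Full pressure: allowed up to the upper bound.
print(allowed_throughput(16, 16))  # 20971520.0
```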
+Configuration of this throughput is governed by the following properties.
+
+- The lower bound is defined by `hbase.hstore.compaction.throughput.lower.bound`
+ and defaults to 10 MB/s (`10485760`).
+- The upper bound is defined by `hbase.hstore.compaction.throughput.higher.bound`
+ and defaults to 20 MB/s (`20971520`).
+
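For reference, the two bounds above could be tuned in `hbase-site.xml` like so (the values shown are simply the stated defaults, in bytes per second):

```xml
<property>
  <name>hbase.hstore.compaction.throughput.lower.bound</name>
  <value>10485760</value>
</property>
<property>
  <name>hbase.hstore.compaction.throughput.higher.bound</name>
  <value>20971520</value>
</property>
```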
+To revert this behavior to the unlimited compaction throughput of earlier versions
+of HBase, please set the following property to the implementation that applies no
+limits to compactions.
+
+`hbase.regionserver.throughput.controller=org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController`
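Expressed as an `hbase-site.xml` entry, that same property would read:

```xml
<property>
  <name>hbase.regionserver.throughput.controller</name>
  <value>org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController</value>
</property>
```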
 ////
 This would be a good place to link to an appendix on migrating applications