Added: release/hbase/2.2.7/RELEASENOTES.md
==============================================================================
--- release/hbase/2.2.7/RELEASENOTES.md (added)
+++ release/hbase/2.2.7/RELEASENOTES.md Fri Apr 16 05:07:26 2021
@@ -0,0 +1,2270 @@
+# RELEASENOTES
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Be careful doing manual edits in this file. Do not change format
+# of release header or remove the below marker. This file is generated.
+# DO NOT REMOVE THIS MARKER; FOR INTERPOLATING CHANGES!-->
+# HBASE  2.2.7 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-25738](https://issues.apache.org/jira/browse/HBASE-25738) | *Minor* | 
**Backport HBASE-24305 to branch-2.2**
+
+The following method was added to ServerName
+
+- #valueOf(Address, long)
+
+
+---
+
+* [HBASE-25587](https://issues.apache.org/jira/browse/HBASE-25587) | *Major* | 
**[hbck2] Schedule SCP for all unknown servers**
+
+Adds scheduleSCPsForUnknownServers to Hbck Service.
+
+
+---
+
+* [HBASE-25460](https://issues.apache.org/jira/browse/HBASE-25460) | *Major* | 
**Expose drainingServers as cluster metric**
+
+Exposed new jmx metrics: "draininigRegionServers" and 
"numDrainingRegionServers" to provide "comma separated names for regionservers 
that are put in draining mode" and "num of such regionservers" respectively.
+
+
+---
+
+* [HBASE-25449](https://issues.apache.org/jira/browse/HBASE-25449) | *Major* | 
**'dfs.client.read.shortcircuit' should not be set in hbase-default.xml**
+
+The presence of HDFS short-circuit read configuration properties in 
hbase-default.xml inadvertently causes short-circuit reads to not happen inside 
of RegionServers, despite short-circuit reads being enabled in hdfs-site.xml.
+
+
+---
+
+* [HBASE-25441](https://issues.apache.org/jira/browse/HBASE-25441) | 
*Critical* | **add security check for some APIs in RSRpcServices**
+
+The following RSRpcServices APIs now require Admin rights:
+- stopServer
+- updateFavoredNodes
+- updateConfiguration
+- clearRegionBlockCache
+- clearSlowLogsResponses
+
+
+---
+
+* [HBASE-25432](https://issues.apache.org/jira/browse/HBASE-25432) | *Blocker* 
| **we should add security checks for setTableStateInMeta and fixMeta**
+
+setTableStateInMeta and fixMeta can now be accessed only with Admin rights.
+
+
+---
+
+* [HBASE-25318](https://issues.apache.org/jira/browse/HBASE-25318) | *Minor* | 
**Configure where IntegrationTestImportTsv generates HFiles**
+
+Added IntegrationTestImportTsv.generatedHFileFolder configuration property to 
override the default location in IntegrationTestImportTsv. Useful for running 
the integration test when HDFS Transparent Encryption is enabled.
+
+
+---
+
+* [HBASE-25237](https://issues.apache.org/jira/browse/HBASE-25237) | *Major* | 
**'hbase master stop' shuts down the cluster, not the master only**
+
+\`hbase master stop\` now shuts down only the Master by default.
+1. Help added to \`hbase master stop\`:
+To stop the cluster, use \`stop-hbase.sh\` or \`hbase master stop --shutDownCluster\`
+
+2. Help added to \`stop-hbase.sh\`:
+stop-hbase.sh can only be used for shutting down the entire cluster. To shut down an (HMaster\|HRegionServer), use hbase-daemon.sh stop (master\|regionserver)
+
+
+---
+
+* [HBASE-25238](https://issues.apache.org/jira/browse/HBASE-25238) | 
*Critical* | **Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message 
missing required fields: state”**
+
+Fixes master procedure store migration issues going from 2.0.x to 2.2.x and/or 
2.3.x. Also fixes failed heartbeat parse during rolling upgrade from 2.0.x. to 
2.3.x.
+
+
+---
+
+* [HBASE-25224](https://issues.apache.org/jira/browse/HBASE-25224) | *Major* | 
**Maximize sleep for checking meta and namespace regions availability**
+
+Changed the max sleep time during the meta and namespace regions availability check to 60 seconds. Previously there was no such cap.
+
+
+---
+
+* [HBASE-25163](https://issues.apache.org/jira/browse/HBASE-25163) | *Major* | 
**Increase the timeout value for nightly jobs**
+
+Increase the timeout value for nightly jobs to 16 hours, since the new build machines are dedicated to the HBase project and we are allowed to use them all the time.
+
+
+---
+
+* [HBASE-22976](https://issues.apache.org/jira/browse/HBASE-22976) | *Major* | 
**[HBCK2] Add RecoveredEditsPlayer**
+
+WALPlayer can replay the content of recovered.edits directories.
+
+A side-effect is that the WAL filename timestamp is now factored in when applying start/end times in WALInputFormat; i.e. the wal.start.time and wal.end.time values on a job context. Previously we looked at wal.end.time only; now we consider wal.start.time too. If a file has a name whose timestamp falls outside wal.start.time\<-\>wal.end.time, it will be by-passed. This change in behavior makes it easier on operators crafting timestamp filters for processing WALs.
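+The filename check described above amounts to a simple inclusive range test; a minimal standalone sketch (class and method names are hypothetical, not the actual WALInputFormat code):

```java
// Sketch of the start/end-time filter applied to the timestamp embedded in a
// WAL file's name: files outside [wal.start.time, wal.end.time] are by-passed.
public class WalTimeFilter {
    // Long.MIN_VALUE / Long.MAX_VALUE stand in for "unset" bounds.
    public static boolean inRange(long walTimestamp, long start, long end) {
        return walTimestamp >= start && walTimestamp <= end;
    }
}
```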
+
+
+---
+
+* [HBASE-25154](https://issues.apache.org/jira/browse/HBASE-25154) | *Major* | 
**Set java.io.tmpdir to project build directory to avoid writing std\*deferred 
files to /tmp**
+
+Change the java.io.tmpdir to project.build.directory in surefire-maven-plugin, 
to avoid writing std\*deferred files to /tmp which may blow up the /tmp disk on 
our jenkins build node.
+
+
+---
+
+* [HBASE-25109](https://issues.apache.org/jira/browse/HBASE-25109) | *Major* | 
**Add MR Counters to WALPlayer; currently hard to tell if it is doing anything**
+
+Adds MR Counters to WALPlayer output:
+
+  org.apache.hadoop.hbase.mapreduce.WALPlayer$Counter
+    CELLS\_READ=89574
+    CELLS\_WRITTEN=89572
+    DELETES=64
+    PUTS=5305
+    WALEDITS=4375
+
+
+---
+
+* [HBASE-24776](https://issues.apache.org/jira/browse/HBASE-24776) | *Major* | 
**[hbtop] Support Batch mode**
+
+HBASE-24776 added the following command line parameters to hbtop:
+\| Argument \| Description \|
+\|---\|---\|
+\| -n,--numberOfIterations \<arg\> \| The number of iterations \|
+\| -O,--outputFieldNames \| Print each of the available field names on a 
separate line, then quit \|
+\| -f,--fields \<arg\> \| Show only the given fields. Specify comma separated 
fields to show multiple fields \|
+\| -s,--sortField \<arg\> \| The initial sort field. You can prepend a \`+' or 
\`-' to the field name to also override the sort direction. A leading \`+' will 
force sorting high to low, whereas a \`-' will ensure a low to high ordering \|
+\| -i,--filters \<arg\> \| The initial filters. Specify comma separated 
filters to set multiple filters \|
+\| -b,--batchMode \| Starts hbtop in Batch mode, which could be useful for 
sending output from hbtop to other programs or to a file. In this mode, hbtop 
will not accept input and runs until the iterations limit you've set with the 
\`-n' command-line option or until killed \|
+
+
+---
+
+* [HBASE-24305](https://issues.apache.org/jira/browse/HBASE-24305) | *Minor* | 
**Handle deprecations in ServerName**
+
+The following methods were removed or made private from ServerName (due to 
HBASE-17624):
+
+- getHostNameMinusDomain(String): Was made private without a replacement.
+- parseHostname(String): Use #valueOf(String) instead.
+- parsePort(String): Use #valueOf(String) instead.
+- parseStartcode(String): Use #valueOf(String) instead.
+- getServerName(String, int, long): Was made private. Use #valueOf(String, 
int, long) instead.
+- getServerName(String, long): Use #valueOf(String, long) instead.
+- getHostAndPort(): Use #getAddress() instead.
+- getServerStartcodeFromServerName(String): Use an instance of ServerName to pull out the start code.
+- getServerNameLessStartCode(String): Use #getAddress() instead.
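+For illustration, the \`host,port,startcode\` string form consumed by #valueOf(String) can be pulled apart with a standalone sketch (hypothetical helper, not the real ServerName internals):

```java
// Splits a ServerName string of the form "host,port,startcode" into its
// three components, mirroring what ServerName#valueOf(String) parses.
public class ServerNameParts {
    public final String host;
    public final int port;
    public final long startcode;

    public ServerNameParts(String serverName) {
        String[] parts = serverName.split(",");
        if (parts.length != 3) {
            throw new IllegalArgumentException("Expected host,port,startcode: " + serverName);
        }
        this.host = parts[0];
        this.port = Integer.parseInt(parts[1]);
        this.startcode = Long.parseLong(parts[2]);
    }
}
```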
+
+
+
+
+# HBASE  2.2.6 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-24892](https://issues.apache.org/jira/browse/HBASE-24892) | *Major* | 
**config 'hbase.hregion.memstore.mslab.indexchunksize' not be used**
+
+Remove the config "hbase.hregion.memstore.mslab.indexchunksize", which was never used. Use "hbase.hregion.memstore.mslab.indexchunksize.percent" instead.
+
+
+---
+
+* [HBASE-24150](https://issues.apache.org/jira/browse/HBASE-24150) | *Major* | 
**Allow module tests run in parallel**
+
+Pass -T2 to mvn. Makes it so we build two modules at a time, dependencies willing. Helps speed build and testing, but doubles the resource usage when running modules in parallel.
+
+
+---
+
+* [HBASE-24126](https://issues.apache.org/jira/browse/HBASE-24126) | *Major* | 
**Up the container nproc uplimit from 10000 to 12500**
+
+Start docker with an upped ulimit for nproc by passing '--ulimit nproc=12500'. The default was 10000; we made it 12500. Then set PROC\_LIMIT in hbase-personality so when yetus runs, it uses the new 12500 value.
+
+
+---
+
+* [HBASE-24625](https://issues.apache.org/jira/browse/HBASE-24625) | 
*Critical* | **AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the 
expected synced file length.**
+
+We add a method getSyncedLength to the WALProvider.WriterBase interface for the WALFileLengthProvider used by replication. Consider the case where we use AsyncFSWAL: we write to 3 DNs concurrently and, according to the visibility guarantee of HDFS, the data will be available immediately upon arriving at a DN, since every DN will consider itself the last one in the pipeline. This means replication may read uncommitted data, replicate it to the remote cluster, and cause data inconsistency. The method WriterBase#getLength may return a length that is still in the hdfs client buffer and not yet successfully synced to HDFS, so we use WriterBase#getSyncedLength to return the length successfully synced to HDFS; the replication thread may only read the WAL file being written up to this length.
+See also HBASE-14004 and this document for more details:
+https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
+
+Before this patch, replication could read uncommitted data and replicate it to the slave cluster, causing data inconsistency between the master and slave clusters. Without this patch applied, using FSHLog instead of AsyncFSWAL reduces the probability of inconsistency.
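+The effect on the replication reader can be sketched as clamping the readable range of the WAL being written (class and method names hypothetical):

```java
// Replication may only read a WAL that is still being written up to the
// length successfully synced to HDFS, never the raw writer length, which can
// include bytes still sitting in the HDFS client buffer.
public class ReplicationReadLimit {
    public static long readableLength(long writerLength, long syncedLength) {
        return Math.min(writerLength, syncedLength);
    }
}
```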
+
+
+---
+
+* [HBASE-24578](https://issues.apache.org/jira/browse/HBASE-24578) | *Major* | 
**[WAL] Add a parameter to config RingBufferEventHandler's SyncFuture count**
+
+Introduces a new parameter, "hbase.regionserver.wal.sync.batch.count", to control the WAL sync batch size; it defaults to the value of "hbase.regionserver.handler.count". The default should work well if you use the default WAL provider (one WAL per regionserver). If you use read/write separated handlers, set "hbase.regionserver.wal.sync.batch.count" to the number of write handlers. If you use wal-per-group or wal-per-region, consider lowering "hbase.regionserver.wal.sync.batch.count": the default will be too big and will consume more memory as the number of WALs grows.
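+For example, with read/write separated handlers, one might set the parameter to the write-handler count in hbase-site.xml (the value 30 below is purely illustrative):

```xml
<!-- Illustrative only: match this to your number of write handlers -->
<property>
  <name>hbase.regionserver.wal.sync.batch.count</name>
  <value>30</value>
</property>
```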
+
+
+---
+
+* [HBASE-24603](https://issues.apache.org/jira/browse/HBASE-24603) | 
*Critical* | **Zookeeper sync() call is async**
+
+<!-- markdown -->
+
+Fixes a couple of bugs in ZooKeeper interaction. Firstly, zk sync() call that 
is used to sync the lagging followers with leader so that the client sees a 
consistent snapshot state was actually asynchronous under the hood. We make it 
synchronous for correctness. Second, zookeeper events are now processed in a 
separate thread rather than doing it in the thread context of zookeeper client 
connection. This decoupling frees up client connection quickly and avoids 
deadlocks.
+
+
+---
+
+* [HBASE-24205](https://issues.apache.org/jira/browse/HBASE-24205) | *Major* | 
**Create metric to know the number of reads that happens from memstore**
+
+Adds a new metric where we collect the number of read requests (tracked per row), distinguishing whether the row was fetched completely from the memstore or was pulled from both files and the memstore.
+The metric is collected under the mbean for Tables and under the mbean for Regions.
+Under the table mbean, i.e.
+"name": "Hadoop:service=HBase,name=RegionServer,sub=Tables"
+the new metrics will be listed as
+{code}
+    "Namespace\_default\_table\_t3\_columnfamily\_f1\_metric\_memstoreOnlyRowReadsCount": 5,
+    "Namespace\_default\_table\_t3\_columnfamily\_f1\_metric\_mixedRowReadsCount": 1,
+{code}
+where the format is
+{code}
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_columnfamily\_\<columnfamilyname\>\_metric\_memstoreOnlyRowReadsCount
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_columnfamily\_\<columnfamilyname\>\_metric\_mixedRowReadsCount
+{code}
+
+The same metric under the region mbean, i.e.
+"name": "Hadoop:service=HBase,name=RegionServer,sub=Regions",
+comes as
+{code}
+    "Namespace\_default\_table\_t3\_region\_75a7846f4ac4a2805071a855f7d0dbdc\_store\_f1\_metric\_memstoreOnlyRowReadsCount": 5,
+    "Namespace\_default\_table\_t3\_region\_75a7846f4ac4a2805071a855f7d0dbdc\_store\_f1\_metric\_mixedRowReadsCount": 1,
+{code}
+where the format is
+{code}
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_region\_\<regionName\>\_store\_\<storeName\>\_metric\_memstoreOnlyRowReadsCount
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_region\_\<regionName\>\_store\_\<storeName\>\_metric\_mixedRowReadsCount
+{code}
+This is aggregated per store: the number of reads that happened purely from the memstore versus mixed reads that happened from both the memstore and files.
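+The table-level naming scheme above can be illustrated with a small formatting sketch (helper class hypothetical):

```java
// Builds a per-table metric name of the form
// Namespace_<ns>_table_<table>_columnfamily_<cf>_metric_<metricName>
public class TableMetricName {
    public static String of(String ns, String table, String cf, String metric) {
        return "Namespace_" + ns + "_table_" + table
            + "_columnfamily_" + cf + "_metric_" + metric;
    }
}
```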
+
+
+---
+
+* [HBASE-24524](https://issues.apache.org/jira/browse/HBASE-24524) | *Minor* | 
**SyncTable logging improvements**
+
+Notice this has changed the log level for mismatching row keys: originally those were logged at INFO level, now they are logged at DEBUG level. This is consistent with the logging of mismatching cells. Also, for missing row keys, it now logs row key values in a human-readable format, making it more meaningful for operators troubleshooting mismatches.
+
+
+
+
+# HBASE  2.2.5 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+---
+
+* [HBASE-24115](https://issues.apache.org/jira/browse/HBASE-24115) | *Major* | 
**Relocate test-only REST "client" from src/ to test/ and mark Private**
+
+Relocate the test-only REST RemoteHTable and RemoteAdmin from src/ to test/, and mark them as InterfaceAudience.Private.
+
+
+---
+
+* [HBASE-24271](https://issues.apache.org/jira/browse/HBASE-24271) | *Major* | 
**Set values in \`conf/hbase-site.xml\` that enable running on 
\`LocalFileSystem\` out of the box**
+
+<!-- markdown -->
+HBASE-24271 changes the default `conf/hbase-site.xml` such that `bin/hbase` will run directly out of the binary tarball or a compiled source tree without any configuration modifications, against Hadoop 2.8+. This changes our long-standing history of shipping no configured values in `conf/hbase-site.xml`, so existing processes that assume this file is empty of configuration properties may require attention.
+
+
+---
+
+* [HBASE-22710](https://issues.apache.org/jira/browse/HBASE-22710) | *Major* | 
**Wrong result in one case of scan that use  raw and versions and filter 
together**
+
+Makes the version-selection logic for raw scans more reasonable, to avoid losing results when using a filter.
+
+
+---
+
+* [HBASE-24252](https://issues.apache.org/jira/browse/HBASE-24252) | *Major* | 
**Implement proxyuser/doAs mechanism for hbase-http**
+
+This feature enables the HBase Web UI's to accept a 'proxyuser' via the HTTP 
Request's query string. When the parameter 
\`hbase.security.authentication.spnego.kerberos.proxyuser.enable\` is set to 
\`true\` in hbase-site.xml (default is \`false\`), the HBase UI will attempt to 
impersonate the user specified by the query parameter "doAs". This query 
parameter is checked case-insensitively. When this option is not provided, the 
user who executed the request is the "real" user and there is no ability to 
execute impersonation against the WebUI.
+
+For example, if the user "bob" with Kerberos credentials executes a request 
against the WebUI with this feature enabled and a query string which includes 
\`doAs=alice\`, the HBase UI will treat this request as executed as \`alice\`, 
not \`bob\`.
+
+The standard Hadoop proxyuser configuration properties to limit users who may 
impersonate others apply to this change (e.g. to enable \`bob\` to impersonate 
\`alice\`). See the Hadoop documentation for more information on how to 
configure these proxyuser rules.
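+For example, the standard Hadoop properties letting \`bob\` impersonate members of a group could look like the following (group and host values are illustrative):

```xml
<!-- Illustrative values: restrict to the groups/hosts appropriate for your site -->
<property>
  <name>hadoop.proxyuser.bob.groups</name>
  <value>admins</value>
</property>
<property>
  <name>hadoop.proxyuser.bob.hosts</name>
  <value>*</value>
</property>
```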
+
+
+---
+
+* [HBASE-24196](https://issues.apache.org/jira/browse/HBASE-24196) | *Major* | 
**[Shell] Add rename rsgroup command in hbase shell**
+
+Users or admins can now use
+hbase shell \> rename\_rsgroup 'oldname', 'newname'
+to rename an rsgroup.
+
+
+---
+
+* [HBASE-24218](https://issues.apache.org/jira/browse/HBASE-24218) | *Major* | 
**Add hadoop 3.2.x in hadoop check**
+
+Add hadoop-3.2.0 and hadoop-3.2.1 to the hadoop check; when '--quick-hadoopcheck' is used we will only check hadoop-3.2.1.
+
+Notice that, for aligning the personality scripts across all the active 
branches, we will commit the patch to all active branches, but the hadoop-3.2.x 
support in hadoopcheck is only applied to branch-2.2+.
+
+
+---
+
+* [HBASE-24112](https://issues.apache.org/jira/browse/HBASE-24112) | *Major* | 
**[RSGroup] Support renaming rsgroup**
+
+Support RSGroup renaming in core codebase. New API Admin#renameRSGroup(String, 
String) is introduced in 3.0.0.
+
+
+---
+
+* [HBASE-24121](https://issues.apache.org/jira/browse/HBASE-24121) | *Major* | 
**[Authorization] ServiceAuthorizationManager isn't dynamically updatable. And 
it should be.**
+
+Master & RegionServer now support refreshing the policy authorization defined in hbase-policy.xml without restarting the service. To refresh the policy, please execute the hbase shell command update\_config or update\_config\_all after the policy file has been updated and synced on all nodes.
+
+
+---
+
+* [HBASE-24099](https://issues.apache.org/jira/browse/HBASE-24099) | *Major* | 
**Use a fair ReentrantReadWriteLock for the region close lock**
+
+This change modifies the default acquisition policy for the region's close 
lock in order to prevent observed starvation of close requests. The new boolean 
configuration parameter 'hbase.regionserver.fair.region.close.lock' controls 
the lock acquisition policy: if true, the lock is created in fair mode 
(default); if false, the lock is created in nonfair mode (the old default).
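+The underlying switch is just the fairness flag on the JDK's ReentrantReadWriteLock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Fair mode (the new default) hands the lock to the longest-waiting thread,
// preventing starvation of close (write) requests; nonfair mode (the old
// default) permits barging readers to delay a pending writer indefinitely.
public class CloseLockDemo {
    public static ReentrantReadWriteLock createCloseLock(boolean fair) {
        return new ReentrantReadWriteLock(fair);
    }
}
```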
+
+
+---
+
+* [HBASE-24122](https://issues.apache.org/jira/browse/HBASE-24122) | *Major* | 
**Change machine ulimit-l to ulimit-a so dumps full ulimit rather than just 
'max locked memory'**
+
+Our 'Build Artifacts' have a machine directory under which we emit vitals on the host the build was run on. We used to emit the result of 'ulimit -l' as a file named 'ulimit-l'. This has been hijacked to instead emit the result of running 'ulimit -a', which includes the stat from ulimit -l.
+
+
+---
+
+* [HBASE-24050](https://issues.apache.org/jira/browse/HBASE-24050) | *Major* | 
**Deprecated PBType on all 2.x branches**
+
+org.apache.hadoop.hbase.types.PBType is marked as deprecated without any replacement. It will be moved to the hbase-example module and marked as IA.Private in 3.0.0. It was a mistake that it was part of our public API. Users who depend on this class should just copy the code into their own code base.
+
+
+---
+
+* [HBASE-8868](https://issues.apache.org/jira/browse/HBASE-8868) | *Minor* | 
**add metric to report client shortcircuit reads**
+
+Expose file system level read metrics for RegionServer.
+
+If the HBase RS runs on top of HDFS, calculate the aggregation of
+ReadStatistics of each HdfsFileInputStream. These metrics include:
+(1) total number of bytes read from HDFS.
+(2) total number of bytes read from local DataNode.
+(3) total number of bytes read locally through short-circuit read.
+(4) total number of bytes read locally through zero-copy read.
+
+Because HDFS ReadStatistics is calculated per input stream, it is not
+feasible to update the aggregated number in real time. Instead, the
+metrics are updated when an input stream is closed.
+
+
+---
+
+* [HBASE-24032](https://issues.apache.org/jira/browse/HBASE-24032) | *Major* | 
**[RSGroup] Assign created tables to respective rsgroup automatically instead 
of manual operations**
+
+Admins can determine which tables go to which rsgroup via a script on the Master side (set hbase.rsgroup.table.mapping.script to a local filesystem path), which aims to lighten the burden of admin operations. Note that since HBase 3+ the rsgroup can be specified in the TableDescriptor as well; if clients specify this, the master will skip the determination from the script.
+
+Here is a simple example of script:
+{code}
+#!/bin/bash
+# Input consists of two strings: the 1st is the namespace of the table, the 2nd is the table name
+namespace=$1
+tablename=$2
+if [[ $namespace == test ]]; then
+  echo test
+elif [[ $tablename == \*foo\* ]]; then
+  echo other
+else
+  echo default
+fi
+{code}
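+The script is wired up on the Master via the property mentioned above; for example (path illustrative):

```xml
<!-- Illustrative path: point at the script on the Master's local filesystem -->
<property>
  <name>hbase.rsgroup.table.mapping.script</name>
  <value>/etc/hbase/rsgroup-mapping.sh</value>
</property>
```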
+
+
+
+# HBASE  2.2.4 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-22827](https://issues.apache.org/jira/browse/HBASE-22827) | *Major* | 
**Expose multi-region merge in shell and Admin API**
+
+The merge\_region shell command can now be used to merge more than 2 regions. It takes a list of regions as comma-separated values or as an array of regions, not just 2 regions. Full region names and encoded region names continue to be accepted.
+
+
+---
+
+* [HBASE-23874](https://issues.apache.org/jira/browse/HBASE-23874) | *Minor* | 
**Move Jira-attached file precommit definition from script in Jenkins config to 
dev-support**
+
+The Jira Precommit job (https://builds.apache.org/job/PreCommit-HBASE-Build/) 
will now look for a file within the source tree 
(dev-support/jenkins\_precommit\_jira\_yetus.sh) instead of depending on a 
script section embedded in the job.
+
+
+---
+
+* [HBASE-17115](https://issues.apache.org/jira/browse/HBASE-17115) | *Major* | 
**HMaster/HRegion Info Server does not honour admin.acl**
+
+Implements authorization for the HBase Web UI by limiting access to certain 
endpoints which could be used to extract sensitive information from HBase.
+
+Access to these restricted endpoints can be limited to a group of 
administrators, identified either by a list of users 
(hbase.security.authentication.spnego.admin.users) or by a list of groups
+(hbase.security.authentication.spnego.admin.groups).  By default, neither of 
these values are set which will preserve backwards compatibility (allowing all 
authenticated users to access all endpoints).
+
+Further, users who have sensitive information in the HBase service 
configuration can set hbase.security.authentication.ui.config.protected to true 
which will treat the configuration endpoint as a protected, admin-only 
resource. By default, all authenticated users may access the configuration 
endpoint.
+
+
+---
+
+* [HBASE-23686](https://issues.apache.org/jira/browse/HBASE-23686) | *Major* | 
**Revert binary incompatible change and remove reflection**
+
+- Reverts a binary incompatible change to ByteRangeUtils
+- Removes usage of reflection inside CommonFSUtils
+
+
+---
+
+* [HBASE-23679](https://issues.apache.org/jira/browse/HBASE-23679) | 
*Critical* | **FileSystem instance leaks due to bulk loads with Kerberos 
enabled**
+
+This fixes an issue with Bulk Loading on installations with Kerberos enabled and more than a single RegionServer. When multiple RegionServers are involved in hosting a table's regions which are being bulk-loaded into, all but the RegionServer hosting the table's first Region will "leak" one DistributedFileSystem object onto the heap, never freeing that memory. Eventually, with enough bulk loads, this will create a situation for RegionServers where they have no free heap space and will either spend all time in JVM GC, lose their ZK session, or crash with an OutOfMemoryError.
+
+The only mitigation for this issue is to periodically restart RegionServers. 
All earlier versions of HBase 2.x are subject to this issue (2.0.x, \<=2.1.8, 
\<=2.2.3)
+
+
+
+# HBASE  2.2.3 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-23651](https://issues.apache.org/jira/browse/HBASE-23651) | *Major* | 
**Region balance throttling can be disabled**
+
+Setting hbase.balancer.max.balancing to an int value \<= 0 will disable region balance throttling.
+
+
+---
+
+* [HBASE-23596](https://issues.apache.org/jira/browse/HBASE-23596) | *Major* | 
**HBCKServerCrashProcedure can double assign**
+
+Makes it so the recently added HBCKServerCrashProcedure -- the SCP that gets 
invoked when an operator schedules an SCP via hbck2 scheduleRecoveries command 
-- now works the same as SCP EXCEPT if master knows nothing of the scheduled 
servername. In this latter case, HBCKSCP will do a full scan of hbase:meta 
looking for instances of the passed servername. If any found it will attempt 
cleanup of hbase:meta references by reassigning any found OPEN or OPENING and 
by closing any in CLOSING state.
+
+Used to fix instances of what the 'HBCK Report' page shows as 'Unknown 
Servers'.
+
+
+---
+
+* [HBASE-23619](https://issues.apache.org/jira/browse/HBASE-23619) | *Trivial* 
| **Use built-in formatting for logging in hbase-zookeeper**
+
+Changed the logging in hbase-zookeeper to use built-in formatting
+
+
+---
+
+* [HBASE-23320](https://issues.apache.org/jira/browse/HBASE-23320) | *Major* | 
**Upgrade surefire plugin to 3.0.0-M4**
+
+Bumped surefire plugin to 3.0.0-M4
+
+
+---
+
+* [HBASE-20461](https://issues.apache.org/jira/browse/HBASE-20461) | *Major* | 
**Implement fsync for AsyncFSWAL**
+
+Now AsyncFSWAL also supports Durability.FSYNC\_WAL.
+
+
+---
+
+* [HBASE-23239](https://issues.apache.org/jira/browse/HBASE-23239) | *Major* | 
**Reporting on status of backing MOB files from client-facing cells**
+
+<!-- markdown -->
+
+Users of the MOB feature can now use the `mobrefs` utility to get statistics 
about data in the MOB system and verify the health of backing files on HDFS.
+
+```
+HADOOP_CLASSPATH=/etc/hbase/conf:$(hbase mapredcp) yarn jar \
+    /some/path/to/hbase-shaded-mapreduce.jar mobrefs mobrefs-report-output 
some_table foo
+```
+
+See javadocs of the class `MobRefReporter` for more details.
+
+The reference guide has added some information about MOB internals and troubleshooting.
+
+
+---
+
+* [HBASE-23549](https://issues.apache.org/jira/browse/HBASE-23549) | *Minor* | 
**Document steps to disable MOB for a column family**
+
+The reference guide now includes a walk through of disabling the MOB feature 
if needed while maintaining availability.
+
+
+---
+
+* [HBASE-23582](https://issues.apache.org/jira/browse/HBASE-23582) | *Minor* | 
**Unbalanced braces in string representation of table descriptor**
+
+Fixed unbalanced braces in string representation within HBase shell
+
+
+---
+
+* [HBASE-23554](https://issues.apache.org/jira/browse/HBASE-23554) | *Major* | 
**Encoded regionname to regionname utility**
+
+    Adds shell command regioninfo:
+
+      hbase(main):001:0\>  regioninfo '0e6aa5c19ae2b2627649dc7708ce27d0'
+      {ENCODED =\> 0e6aa5c19ae2b2627649dc7708ce27d0, NAME =\> 
'TestTable,,1575941375972.0e6aa5c19ae2b2627649dc7708ce27d0.', STARTKEY =\> '', 
ENDKEY =\> '00000000000000000000299441'}
+      Took 0.4737 seconds
+
+
+---
+
+* [HBASE-23293](https://issues.apache.org/jira/browse/HBASE-23293) | *Minor* | 
**[REPLICATION] make ship edits timeout configurable**
+
+The default rpc timeout for ReplicationSourceShipper#shipEdits is 60s; when bulkload replication is enabled, timeout exceptions may occur.
+Now the timeout value can be configured through replication.source.shipedits.timeout, and it is adaptive.
+
+
+---
+
+* [HBASE-23312](https://issues.apache.org/jira/browse/HBASE-23312) | *Major* | 
**HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible**
+
+The newer HBase Thrift SPNEGO configs should not be required. The hbase.thrift.spnego.keytab.file and hbase.thrift.spnego.principal configs will fall back to the original hbase.thrift.keytab.file and hbase.thrift.kerberos.principal configs. The older configs will log a deprecation warning. It is preferred to use the newer SPNEGO configurations.
+
+
+---
+
+* [HBASE-22969](https://issues.apache.org/jira/browse/HBASE-22969) | *Minor* | 
**A new binary component comparator(BinaryComponentComparator) to perform 
comparison of arbitrary length and position**
+
+With BinaryComponentComparator, applications will be able to design a diverse and powerful set of filters for rows and columns. See https://issues.apache.org/jira/browse/HBASE-22969 for examples. In general, the comparator can be used with any filter taking a ByteArrayComparable. As of now, the following filters take a ByteArrayComparable:
+
+1. RowFilter
+2. ValueFilter
+3. QualifierFilter
+4. FamilyFilter
+5. ColumnValueFilter
+
+
+---
+
+* [HBASE-23322](https://issues.apache.org/jira/browse/HBASE-23322) | *Minor* | 
**[hbck2] Simplification on HBCKSCP scheduling**
+
+An hbck2 scheduleRecoveries will run a subclass of ServerCrashProcedure which 
asks Master what Regions were on the dead Server but it will also do a 
hbase:meta table scan to see if any vestiges of the old Server remain (for the 
case where an SCP failed mid-point leaving references in place or where Master 
and hbase:meta deviated in accounting).
+
+
+---
+
+* [HBASE-23321](https://issues.apache.org/jira/browse/HBASE-23321) | *Minor* | 
**[hbck2] fixHoles of fixMeta doesn't update in-memory state**
+
+If there are holes in hbase:meta, hbck2 fixMeta will now update the Master's in-memory state, so you do not need to restart the Master just to assign the new hole-bridging regions.
+
+
+---
+
+* [HBASE-23282](https://issues.apache.org/jira/browse/HBASE-23282) | *Major* | 
**HBCKServerCrashProcedure for 'Unknown Servers'**
+
+hbck2 scheduleRecoveries will now run a SCP that also looks in hbase:meta for any references to the scheduled server -- not just consult Master in-memory state -- just in case vestiges of the server are left over in hbase:meta.
+
+
+---
+
+* [HBASE-19450](https://issues.apache.org/jira/browse/HBASE-19450) | *Minor* | 
**Add log about average execution time for ScheduledChore**
+
+<!-- markdown -->
+HBase internal chores now log a moving average of how long execution of each 
chore takes at `INFO` level for the logger 
`org.apache.hadoop.hbase.ScheduledChore`.
+
+Such messages will happen at most once per five minutes.
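+The figure logged is a running average; a minimal sketch of such tracking (not the actual ScheduledChore implementation):

```java
// Cumulative moving average of chore execution times in milliseconds,
// analogous to the average that ScheduledChore reports at INFO level.
public class AvgChoreRuntime {
    private long totalMs = 0;
    private long runs = 0;

    public void record(long elapsedMs) {
        totalMs += elapsedMs;
        runs++;
    }

    public double average() {
        return runs == 0 ? 0.0 : (double) totalMs / runs;
    }
}
```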
+
+
+---
+
+* [HBASE-23250](https://issues.apache.org/jira/browse/HBASE-23250) | *Minor* | 
**Log message about CleanerChore delegate initialization should be at INFO**
+
+CleanerChore delegate initialization is now logged at INFO level instead of 
DEBUG
+
+
+---
+
+* [HBASE-23243](https://issues.apache.org/jira/browse/HBASE-23243) | *Major* | 
**[pv2] Filter out SUCCESS procedures; on decent-sized cluster, plethora 
overwhelms problems**
+
+The 'Procedures & Locks' tab in Master UI only displays problematic Procedures 
now (RUNNABLE, WAITING-TIMEOUT, etc.). It no longer notes procedures whose 
state is SUCCESS.
+
+
+---
+
+* [HBASE-23227](https://issues.apache.org/jira/browse/HBASE-23227) | *Blocker* 
| **Upgrade jackson-databind to 2.9.10.1 to avoid recent CVEs**
+
+<!-- markdown -->
+
+The Apache HBase REST Proxy now uses Jackson Databind version 2.9.10.1 to address the following CVEs:
+
+  - CVE-2019-16942
+  - CVE-2019-16943
+
+Users of prior releases with Jackson Databind 2.9.10 are advised to either 
upgrade to this release or to upgrade their local Jackson Databind jar directly.
+
+
+---
+
+* [HBASE-23222](https://issues.apache.org/jira/browse/HBASE-23222) | 
*Critical* | **Better logging and mitigation for MOB compaction failures**
+
+<!-- markdown -->
+
+The MOB compaction process in the HBase Master now logs more about its 
activity.
+
+In the event that you run into the problems described in HBASE-22075, there is 
a new HFileCleanerDelegate that will stop all removal of MOB hfiles from the 
archive area. It can be configured by adding 
`org.apache.hadoop.hbase.mob.ManualMobMaintHFileCleaner` to the list configured 
for `hbase.master.hfilecleaner.plugins`. This new cleaner delegate will cause 
your archive area to grow unbounded; you will have to manually prune files, which may be prohibitively complex. Consider whether your use case will allow you to mitigate by disabling mob compactions instead.
+
+Caveats:
+* Be sure the list of cleaner delegates still includes the default cleaners 
you will likely need: ttl, snapshot, and hlink.
+* Be mindful that if you enable this cleaner delegate then there will be *no* 
automated process for removing these mob hfiles. You should see a single region 
per table in `%hbase_root%/archive` that accumulates files over time. You will 
have to determine which of these files are safe or not to remove.
+* You should list this cleaner delegate after the snapshot and hlink delegates 
so that you can enable sufficient logging to determine when an archived mob 
hfile is needed by those subsystems. When set to `TRACE` logging, the 
CleanerChore logger will include archive retention decision justifications.
+* If your use case creates a large number of uniquely named tables, this new 
delegate will cause memory pressure on the master.
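+
+As a sketch of the configuration described above, an hbase-site.xml fragment might look like the following. The default cleaner class names shown here are as found in recent 2.x releases and should be verified against your version before use:
+
+```xml
+<property>
+  <name>hbase.master.hfilecleaner.plugins</name>
+  <!-- Keep the default ttl, hlink, and snapshot cleaners, and list the
+       manual MOB maintenance cleaner after the snapshot and hlink delegates. -->
+  <value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner,org.apache.hadoop.hbase.mob.ManualMobMaintHFileCleaner</value>
+</property>
+```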
+
+
+---
+
+* [HBASE-23172](https://issues.apache.org/jira/browse/HBASE-23172) | *Minor* | 
**HBase Canary region success count metrics reflect column family successes, 
not region successes**
+
+Added a comment to make clear that read/write success counts are tallying 
column family success counts, not region success counts. 
+
+Additionally, the region read and write latencies previously only stored the 
latencies of the last column family of the region reads/writes. This has been 
fixed by using a map of each region to a list of read and write latency values.
+
+
+---
+
+* [HBASE-23177](https://issues.apache.org/jira/browse/HBASE-23177) | *Major* | 
**If fail to open reference because FNFE, make it plain it is a Reference**
+
+Changes the message on the FNFE exception thrown when the file a Reference points to is missing; the message now includes detail on the Reference as well as the pointed-to file, so you can connect how the FNFE relates to the region open.
+
+
+
+# HBASE  2.2.2 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-20626](https://issues.apache.org/jira/browse/HBASE-20626) | *Major* | 
**Change the value of "Requests Per Second" on WEBUI**
+
+Use 'totalRowActionRequestCount' to calculate QPS on web UI.
+
+
+---
+
+* [HBASE-22874](https://issues.apache.org/jira/browse/HBASE-22874) | 
*Critical* | **Define a public interface for Canary and move existing 
implementation to LimitedPrivate**
+
+<!-- markdown -->
+Downstream users who wish to programmatically check the health of their HBase 
cluster may now rely on a public interface derived from the previously private 
implementation of the canary cli tool. The interface is named `Canary` and can 
be found in the user facing javadocs.
+
+Downstream users who previously relied on invoking the canary via the Java classname (either on the command line or programmatically) will need to change how they do so because the non-public implementation has moved.
+
+
+---
+
+* [HBASE-23035](https://issues.apache.org/jira/browse/HBASE-23035) | *Major* | 
**Retain region to the last RegionServer make the failover slower**
+
+Since 2.0.0, when a regionserver crashed and came back online, the AssignmentManager would retain the region locations and try to assign the regions to this regionserver (same host:port as the crashed one) again. In 1.x.x, the behavior was round-robin assignment of the regions belonging to the crashed regionserver. This issue changes the "retain" assignment back to round-robin assignment, the same as in the 1.x.x versions. This change makes failover faster and improves availability.
+
+
+---
+
+* [HBASE-22975](https://issues.apache.org/jira/browse/HBASE-22975) | *Minor* | 
**Add read and write QPS metrics at server level and table level**
+
+This issue adds read and write QPS (queries per second) metrics at the server and table level. The table-level QPS metrics are aggregated per table for each RegionServer.
+
+A Dropwizard Meter data structure is used to calculate QPS, and the metrics can be obtained from JMX.
+
+
+---
+
+* [HBASE-23040](https://issues.apache.org/jira/browse/HBASE-23040) | *Minor* | 
**region mover gives NullPointerException instead of saying a host isn't in the 
cluster**
+
+Giving the region mover "unload" command a region server name that isn't recognized by the cluster now results in an "I don't know about that host" message instead of an NPE.
+
+Set the log level to DEBUG if you'd like the region mover to log the set of region server names it got back from the cluster.
+
+
+---
+
+* [HBASE-22796](https://issues.apache.org/jira/browse/HBASE-22796) | *Major* | 
**[HBCK2] Add fix of overlaps to fixMeta hbck Service**
+
+Adds fixing of overlaps to the fixMeta hbck service method. Uses the bulk-merge facility, merging a maximum of 10 regions at a time. Set hbase.master.metafixer.max.merge.count higher if you want to merge more than 10 in one go.
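+
+For example, to let fixMeta merge up to 20 overlapping regions in one go, a sketch of the hbase-site.xml override (the value 20 is an arbitrary example):
+
+```xml
+<property>
+  <name>hbase.master.metafixer.max.merge.count</name>
+  <value>20</value>
+</property>
+```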
+
+
+---
+
+* [HBASE-21745](https://issues.apache.org/jira/browse/HBASE-21745) | 
*Critical* | **Make HBCK2 be able to fix issues other than region assignment**
+
+This issue adds via its subtasks:
+
+ \* An 'HBCK Report' page to the Master UI added by 
HBASE-22527+HBASE-22709+HBASE-22723+ (since 2.1.6, 2.2.1, 2.3.0). Lists 
consistency or anomalies found via new hbase:meta consistency checking 
extensions added to CatalogJanitor (holes, overlaps, bad servers) and by a new 
'HBCK chore' that runs at a lesser periodicity and notes filesystem orphans and overlaps as well as the following conditions:
+ \*\* Master thought this region opened, but no regionserver reported it.
+ \*\* Master thought this region opened on Server1, but the regionserver reported Server2.
+ \*\* More than one regionserver reported this region as opened.
+ Both chores can be triggered from the shell to regenerate ‘new’ reports.
+ \* Means of scheduling a ServerCrashProcedure (HBASE-21393).
+ \* An ‘offline’ hbase:meta rebuild (HBASE-22680).
+ \* Offline replace of hbase.version and hbase.id
+ \* Documentation on how to use completebulkload tool to ‘adopt’ orphaned 
data found by new HBCK2 ‘filesystem’ check (see below) and ‘HBCK chore’ 
(HBASE-22859)
+ \* A ‘holes’ and ‘overlaps’ fix that runs in the master that uses new 
bulk-merge facility to collapse many overlaps in the one go.
+ \* hbase-operator-tools HBCK2 client tool got a bunch of additions:
+ \*\* A specialized 'fix' for the case where operators ran old hbck 
'offlinemeta' repair and destroyed their hbase:meta; it ties together holes in 
meta with orphaned data in the fs (HBASE-22567)
+ \*\* A ‘filesystem’ command that reports on orphan data as well as bad 
references and hlinks with a ‘fix’ for the latter two options (based on 
hbck1 facility updated).
+ \*\* Adds back the ‘replication’ fix facility from hbck1 (HBASE-22717)
+
+The compound result is that hbck2 now exceeds hbck1's abilities. The provided functionality is disaggregated, as per the hbck2 philosophy of providing 'plumbing' rather than 'porcelain', so there is still work to do adding fix-it playbooks, scripting across outages, and automation.
+
+
+---
+
+* [HBASE-11062](https://issues.apache.org/jira/browse/HBASE-11062) | *Major* | 
**hbtop**
+
+Introduces hbtop, a real-time monitoring tool for HBase similar to Unix's top command. See the README for details: https://github.com/apache/hbase/blob/master/hbase-hbtop/README.md
+
+
+
+# HBASE  2.2.1 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-22867](https://issues.apache.org/jira/browse/HBASE-22867) | 
*Critical* | **The ForkJoinPool in CleanerChore will spawn thousands of threads 
in our cluster with thousands table**
+
+Replaced the ForkJoinPool in CleanerChore with a ThreadPoolExecutor, which can limit the number of spawned threads and avoid frequent GC on the master. The replacement is internal to CleanerChore, so there is no config key change; users can upgrade the HBase master without any other change.
+
+
+---
+
+* [HBASE-22810](https://issues.apache.org/jira/browse/HBASE-22810) | *Major* | 
**Initialize an separate ThreadPoolExecutor for taking/restoring snapshot**
+
+Introduced a new config key for snapshot taking/restoring operations on the master side: hbase.master.executor.snapshot.threads. Its default value is 3, meaning up to 3 snapshot operations can run at the same time.
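+
+A sketch of raising the snapshot operation parallelism in the Master's hbase-site.xml (the value 5 is an arbitrary example):
+
+```xml
+<property>
+  <name>hbase.master.executor.snapshot.threads</name>
+  <value>5</value>
+</property>
+```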
+
+
+---
+
+* [HBASE-22863](https://issues.apache.org/jira/browse/HBASE-22863) | *Major* | 
**Avoid Jackson versions and dependencies with known CVEs**
+
+1. Stopped exposing vulnerable Jackson1 dependencies so that downstream users do not pull them in from HBase.
+2. However, since Hadoop requires some Jackson1 dependencies, the vulnerable Jackson mapper is kept at test scope in some HBase modules; hence, the HBase tarball created by hbase-assembly contains the Jackson1 mapper jar in lib. Still, downstream applications can't pull in Jackson1 from HBase.
+
+
+---
+
+* [HBASE-22841](https://issues.apache.org/jira/browse/HBASE-22841) | *Major* | 
**TimeRange's factory functions do not support ranges, only \`allTime\` and 
\`at\`**
+
+Added several APIs to the TimeRange class to avoid using the deprecated TimeRange constructors:
+\* TimeRange#from: Represents the time interval [minStamp, Long.MAX\_VALUE)
+\* TimeRange#until: Represents the time interval [0, maxStamp)
+\* TimeRange#between: Represents the time interval [minStamp, maxStamp)
+
+
+---
+
+* [HBASE-22833](https://issues.apache.org/jira/browse/HBASE-22833) | *Minor* | 
**MultiRowRangeFilter should provide a method for creating a filter which is 
functionally equivalent to multiple prefix filters**
+
+Provides a public constructor in the MultiRowRangeFilter class to speed up filtering with multiple row prefixes. It expands the row prefixes into multiple rowkey ranges handled by MultiRowRangeFilter, which is more efficient.
+```
+public MultiRowRangeFilter(byte[][] rowKeyPrefixes);
+```
+
+
+---
+
+* [HBASE-22856](https://issues.apache.org/jira/browse/HBASE-22856) | *Major* | 
**HBASE-Find-Flaky-Tests fails with pip error**
+
+Update the base docker image to ubuntu 18.04 for the find flaky tests jenkins 
job.
+
+
+---
+
+* [HBASE-22771](https://issues.apache.org/jira/browse/HBASE-22771) | *Major* | 
**[HBCK2] fixMeta method and server-side support**
+
+Adds a fixMeta method to hbck Service. Fixes holes in hbase:meta. Follow-up to 
fix overlaps. See HBASE-22567 also.
+
+A follow-on adds a client side to hbase-operator-tools that can exploit this new addition (HBASE-22825).
+
+
+---
+
+* [HBASE-22777](https://issues.apache.org/jira/browse/HBASE-22777) | *Major* | 
**Add a multi-region merge (for fixing overlaps, etc.)**
+
+Changes merge so you can merge more than two regions at a time.  Currently 
only available inside HBase. HBASE-22827, a follow-on, is about exposing the 
facility in the Admin API (and then via the shell).
+
+
+---
+
+* [HBASE-15666](https://issues.apache.org/jira/browse/HBASE-15666) | 
*Critical* | **shaded dependencies for hbase-testing-util**
+
+New shaded artifact for testing: hbase-shaded-testing-util.
+
+
+---
+
+* [HBASE-22539](https://issues.apache.org/jira/browse/HBASE-22539) | *Blocker* 
| **WAL corruption due to early DBBs re-use when Durability.ASYNC\_WAL is used**
+
+We found a critical bug which can lead to WAL corruption when Durability.ASYNC\_WAL is used. The reason is that we release a ByteBuffer before actually persisting its content to the WAL file.
+
+The problem may lead to several errors, for example an ArrayIndexOutOfBoundsException when replaying the WAL. This happens because the ByteBuffer has been reused by others.
+
+```
+ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable while processing event RS_LOG_REPLAY
+java.lang.ArrayIndexOutOfBoundsException: 18056
+        at org.apache.hadoop.hbase.KeyValue.getFamilyLength(KeyValue.java:1365)
+        at org.apache.hadoop.hbase.KeyValue.getFamilyLength(KeyValue.java:1358)
+        at org.apache.hadoop.hbase.PrivateCellUtil.matchingFamily(PrivateCellUtil.java:735)
+        at org.apache.hadoop.hbase.CellUtil.matchingFamily(CellUtil.java:816)
+        at org.apache.hadoop.hbase.wal.WALEdit.isMetaEditFamily(WALEdit.java:143)
+        at org.apache.hadoop.hbase.wal.WALEdit.isMetaEdit(WALEdit.java:148)
+        at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:297)
+        at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:195)
+        at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:100)
+```
+
+It may even cause a segmentation fault and crash the JVM directly. You will see an hs\_err\_pidXXX.log file, and usually the problem is SIGSEGV. This is usually because the ByteBuffer has already been returned to the OS and used for another purpose.
+
+The problem has been reported several times in the past, and this time Wellington Ramos Chevreuil provided the full logs and deeply analyzed them so we could find the root cause. Lijin Bin figured out that the problem can only happen when Durability.ASYNC\_WAL is used. Thanks to them.
+
+The problem only affects the 2.x releases. All users are highly recommended to upgrade to a release which has this fix, especially if you use Durability.ASYNC\_WAL.
+
+
+---
+
+* [HBASE-22737](https://issues.apache.org/jira/browse/HBASE-22737) | *Major* | 
**Add a new admin method and shell cmd to trigger the hbck chore to run**
+
+Adds a new method runHbckChore to the Hbck interface and a new shell command hbck\_chore\_run to request that the HBCK chore run on the master side.
+
+
+---
+
+* [HBASE-22741](https://issues.apache.org/jira/browse/HBASE-22741) | *Major* | 
**Show catalogjanitor consistency complaints in new 'HBCK Report' page**
+
+Adds a "CatalogJanitor hbase:meta Consistency Issues" section to the new 'HBCK Report' page added by HBASE-22709. This section is empty unless the most recent CatalogJanitor scan turned up problems. If so, it will show a table of the issues found.
+
+
+---
+
+* [HBASE-22723](https://issues.apache.org/jira/browse/HBASE-22723) | *Major* | 
**Have CatalogJanitor report holes and overlaps; i.e. problems it sees when 
doing its regular scan of hbase:meta**
+
+When CatalogJanitor runs, it now checks for holes, overlaps, empty 
info:regioninfo columns and bad servers. Dumps findings into log. Follow-up 
adds report to new 'HBCK Report' linked off the Master UI.
+
+NOTE: All features but the badserver check made it into branch-2.1 and 
branch-2.0 backports.
+
+
+---
+
+* [HBASE-22709](https://issues.apache.org/jira/browse/HBASE-22709) | *Major* | 
**Add a chore thread in master to do hbck checking and display results in 'HBCK 
Report' page**
+
+1. Adds a new chore thread in the Master to do hbck checking.
+2. Adds a new web UI "HBCK Report" page to display the checking results.
+
+This feature is enabled by default, and the hbck chore runs every 60 minutes by default. You can set "hbase.master.hbck.checker.interval" to a value less than or equal to 0 to disable the chore.
+
+Note: the config "hbase.master.hbck.checker.interval" was renamed to "hbase.master.hbck.chore.interval" in HBASE-22737.
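+
+For example, to disable the chore entirely, a sketch of the hbase-site.xml override (on releases that include HBASE-22737, use the renamed key hbase.master.hbck.chore.interval instead):
+
+```xml
+<property>
+  <name>hbase.master.hbck.checker.interval</name>
+  <value>-1</value>
+</property>
+```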
+
+
+---
+
+* [HBASE-22722](https://issues.apache.org/jira/browse/HBASE-22722) | *Blocker* 
| **Upgrade jackson databind dependencies to 2.9.9.1**
+
+Upgrade jackson databind dependency to 2.9.9.1 due to CVEs
+
+https://nvd.nist.gov/vuln/detail/CVE-2019-12814
+
+https://nvd.nist.gov/vuln/detail/CVE-2019-12384
+
+
+---
+
+* [HBASE-22527](https://issues.apache.org/jira/browse/HBASE-22527) | *Major* | 
**[hbck2] Add a master web ui to show the problematic regions**
+
+Adds a new Master web UI page to show potentially problematic opened regions. There are three cases:
+1. Master thought this region opened, but no regionserver reported it.
+2. Master thought this region opened on Server1, but the regionserver reported Server2.
+3. More than one regionserver reported this region as opened.
+
+
+---
+
+* [HBASE-22610](https://issues.apache.org/jira/browse/HBASE-22610) | *Trivial* 
| **[BucketCache] Rename "hbase.offheapcache.minblocksize"**
+
+The config point "hbase.offheapcache.minblocksize" was wrong and is now 
deprecated. The new config point is "hbase.blockcache.minblocksize".
+
+
+---
+
+* [HBASE-22690](https://issues.apache.org/jira/browse/HBASE-22690) | *Major* | 
**Deprecate / Remove OfflineMetaRepair in hbase-2+**
+
+OfflineMetaRepair is no longer supported in HBase-2+. Please refer to 
https://hbase.apache.org/book.html#HBCK2
+
+This tool is deprecated in 2.x and will be removed in 3.0.
+
+
+---
+
+* [HBASE-22673](https://issues.apache.org/jira/browse/HBASE-22673) | *Major* | 
**Avoid to expose protobuf stuff in Hbck interface**
+
+Mark the Hbck#scheduleServerCrashProcedure(List\<HBaseProtos.ServerName\> 
serverNames) as deprecated. Use 
Hbck#scheduleServerCrashProcedures(List\<ServerName\> serverNames) instead.
+
+
+---
+
+* [HBASE-22617](https://issues.apache.org/jira/browse/HBASE-22617) | *Blocker* 
| **Recovered WAL directories not getting cleaned up**
+
+In HBASE-20734 we moved the recovered.edits onto the wal file system but when 
constructing the directory we missed the BASE\_NAMESPACE\_DIR('data'). So when 
using the default config, you will find lots of new directories at the same level as the 'data' directory.
+
+In this issue, we add the BASE\_NAMESPACE\_DIR back, and also try our best to 
clean up the wrong directories. But we can only clean up the region-level directories, so if you want a clean fs layout on HDFS you still need to manually delete the empty directories at the same level as 'data'.
+
+The affected versions are 2.2.0, 2.1.[1-5], 1.4.[8-10] and 1.3.[3-5].
+
+
+---
+
+* [HBASE-22596](https://issues.apache.org/jira/browse/HBASE-22596) | *Minor* | 
**[Chore] Separate the execution period between CompactionChecker and 
PeriodicMemStoreFlusher**
+
+hbase.regionserver.compaction.check.period controls how often the compaction checker runs. If unset, hbase.server.thread.wakefrequency is used as the default value.
+
+hbase.regionserver.flush.check.period controls how often the flush checker runs. If unset, hbase.server.thread.wakefrequency is used as the default value.
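+
+A sketch of setting the two periods independently in hbase-site.xml (values are in milliseconds; the numbers below are arbitrary examples):
+
+```xml
+<property>
+  <name>hbase.regionserver.compaction.check.period</name>
+  <value>20000</value>
+</property>
+<property>
+  <name>hbase.regionserver.flush.check.period</name>
+  <value>10000</value>
+</property>
+```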
+
+
+
+# HBASE  2.2.0 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-21970](https://issues.apache.org/jira/browse/HBASE-21970) | *Major* | 
**Document that how to upgrade from 2.0 or 2.1 to 2.2+**
+
+See the document http://hbase.apache.org/book.html#upgrade2.2 about how to 
upgrade from 2.0 or 2.1 to 2.2+.
+
+HBase 2.2+ uses a new form of Procedure for assigning/unassigning/moving Regions. It does not process HBase 2.1 and 2.0's Unassign/Assign Procedure types. The upgrade requires that we first drain the Master Procedure Store of old-style Procedures before starting the new 2.2 Master. So you need to make sure that before you kill the old-version (2.0 or 2.1) Master, there is no region in transition. Once the new-version (2.2+) Master is up, you can rolling-upgrade the RegionServers one by one.
+
+There is a safer way if you are running a 2.1.1+ or 2.0.3+ cluster. It takes four steps to upgrade the Master.
+
+1. Shut down both the active and standby Masters (your cluster will continue to serve reads and writes without interruption).
+2. Set the property hbase.procedure.upgrade-to-2-2 to true in hbase-site.xml 
for the Master, and start only one Master, still using the 2.1.1+ (or 2.0.3+) 
version.
+3. Wait until the Master quits. Confirm that there is a 'READY TO ROLLING 
UPGRADE' message in the Master log as the cause of the shutdown. The Procedure 
Store is now empty.
+4. Start new Masters with the new 2.2+ version.
+
+Then you can rolling upgrade RegionServers one by one. See HBASE-21075 for 
more details.
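+
+Step 2 above amounts to adding a property like the following sketch to the Master's hbase-site.xml before restarting the single old-version Master:
+
+```xml
+<property>
+  <name>hbase.procedure.upgrade-to-2-2</name>
+  <value>true</value>
+</property>
+```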
+
+
+---
+
+* [HBASE-21536](https://issues.apache.org/jira/browse/HBASE-21536) | *Trivial* 
| **Fix completebulkload usage instructions**
+
+Added completebulkload short name for BulkLoadHFilesTool to bin/hbase.
+
+
+---
+
+* [HBASE-22500](https://issues.apache.org/jira/browse/HBASE-22500) | *Blocker* 
| **Modify pom and jenkins jobs for hadoop versions**
+
+Change the default hadoop-3 version to 3.1.2. Drop support for the releases which are affected by CVE-2018-8029; see this email: https://lists.apache.org/thread.html/3d6831c3893cd27b6850aea2feff7d536888286d588e703c6ffd2e82@%3Cuser.hadoop.apache.org%3E
+
+
+---
+
+* [HBASE-22148](https://issues.apache.org/jira/browse/HBASE-22148) | *Blocker* 
| **Provide an alternative to CellUtil.setTimestamp**
+
+<!-- markdown -->
+
+The `CellUtil.setTimestamp` method changes to be an API with audience 
`LimitedPrivate(COPROC)` in HBase 3.0. With that designation the API should 
remain stable within a given minor release line, but may change between minor 
releases.
+
+Previously, this method was deprecated in HBase 2.0 for removal in HBase 3.0. 
Deprecation messages in HBase 2.y releases have been updated to indicate the 
expected API audience change.
+
+
+---
+
+* [HBASE-21991](https://issues.apache.org/jira/browse/HBASE-21991) | *Major* | 
**Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
improvements**
+
+The class LossyCounting was unintentionally marked Public but was never 
intended to be part of our public API. This oversight has been corrected and 
LossyCounting is now marked as Private and going forward may be subject to 
additional breaking changes or removal without notice. If you have taken a 
dependency on this class we recommend cloning it locally into your project 
before upgrading to this release.
+
+
+---
+
+* [HBASE-22226](https://issues.apache.org/jira/browse/HBASE-22226) | *Trivial* 
| **Incorrect level for headings in asciidoc**
+
+Warnings for level headings are corrected in the book for the HBase 
Incompatibilities section.
+
+
+---
+
+* [HBASE-20970](https://issues.apache.org/jira/browse/HBASE-20970) | *Major* | 
**Update hadoop check versions for hadoop3 in hbase-personality**
+
+Added hadoop 3.0.3, 3.1.1 and 3.1.2 to our hadoop check jobs.
+
+
+---
+
+* [HBASE-21784](https://issues.apache.org/jira/browse/HBASE-21784) | *Major* | 
**Dump replication queue should show list of wal files ordered chronologically**
+
+The DumpReplicationQueues tool will now list replication queues sorted in 
chronological order.
+
+
+---
+
+* [HBASE-22384](https://issues.apache.org/jira/browse/HBASE-22384) | *Minor* | 
**Formatting issues in administration section of book**
+
+Fixes a formatting issue in the administration section of the book, where list indentation was a little bit off.
+
+
+---
+
+* [HBASE-22399](https://issues.apache.org/jira/browse/HBASE-22399) | *Major* | 
**Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop 
checks**
+
+The default hadoop-two.version has been changed to 2.8.5, and all hadoop versions before 2.8.2 (exclusive) are no longer supported.
+
+
+---
+
+* [HBASE-22392](https://issues.apache.org/jira/browse/HBASE-22392) | *Trivial* 
| **Remove extra/useless +**
+
+Removed extra + in HRegion, HStore and LoadIncrementalHFiles for branch-2 and 
HRegion and HStore for branch-1.
+
+
+---
+
+* [HBASE-20494](https://issues.apache.org/jira/browse/HBASE-20494) | *Major* | 
**Upgrade com.yammer.metrics dependency**
+
+Updated metrics core from 3.2.1 to 3.2.6.
+
+
+---
+
+* [HBASE-22358](https://issues.apache.org/jira/browse/HBASE-22358) | *Minor* | 
**Change rubocop configuration for method length**
+
+The rubocop definition for the maximum method length was set to 75.
+
+
+---
+
+* [HBASE-22379](https://issues.apache.org/jira/browse/HBASE-22379) | *Minor* | 
**Fix Markdown for "Voting on Release Candidates" in book**
+
+Fixes the formatting of the "Voting on Release Candidates" to actually show 
the quote and code formatting of the RAT check.
+
+
+---
+
+* [HBASE-20851](https://issues.apache.org/jira/browse/HBASE-20851) | *Minor* | 
**Change rubocop config for max line length of 100**
+
+The rubocop configuration in the hbase-shell module now allows a line length 
with 100 characters, instead of 80 as before. For everything before 2.1.5 this 
change introduces rubocop itself.
+
+
+---
+
+* [HBASE-22054](https://issues.apache.org/jira/browse/HBASE-22054) | *Minor* | 
**Space Quota: Compaction is not working for super user in case of 
NO\_WRITES\_COMPACTIONS**
+
+This change allows the system and superusers to initiate compactions, even 
when a space quota violation policy disallows compactions from happening. The 
original intent behind disallowing of compactions was to prevent end-user 
compactions from creating undue I/O load, not disallowing \*any\* compaction in 
the system.
+
+
+---
+
+* [HBASE-22292](https://issues.apache.org/jira/browse/HBASE-22292) | *Blocker* 
| **PreemptiveFastFailInterceptor clean repeatedFailuresMap issue**
+
+Adds new configuration hbase.client.failure.map.cleanup.interval which 
defaults to ten minutes.
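+
+A sketch of overriding the new cleanup interval in hbase-site.xml (the value is in milliseconds; 600000 ms matches the stated ten-minute default):
+
+```xml
+<property>
+  <name>hbase.client.failure.map.cleanup.interval</name>
+  <value>600000</value>
+</property>
+```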
+
+
+---
+
+* [HBASE-22155](https://issues.apache.org/jira/browse/HBASE-22155) | *Major* | 
**Move 2.2.0 on to hbase-thirdparty-2.2.0**
+
+ Updates libs used internally by hbase via hbase-thirdparty as follows:
+
+ gson 2.8.1 -\\\> 2.8.5
+ guava 22.0 -\\\> 27.1-jre
+ pb 3.5.1 -\\\> 3.7.0
+ netty 4.1.17 -\\\> 4.1.34
+ commons-collections4 4.1 -\\\> 4.3
+
+
+---
+
+* [HBASE-22178](https://issues.apache.org/jira/browse/HBASE-22178) | *Major* | 
**Introduce a createTableAsync with TableDescriptor method in Admin**
+
+Introduced
+
+Future\<Void\> createTableAsync(TableDescriptor);
+
+
+---
+
+* [HBASE-22108](https://issues.apache.org/jira/browse/HBASE-22108) | *Major* | 
**Avoid passing null in Admin methods**
+
+Introduced these methods:
+void move(byte[]);
+void move(byte[], ServerName);
+Future\<Void\> splitRegionAsync(byte[]);
+
+These methods are deprecated:
+void move(byte[], byte[])
+
+
+---
+
+* [HBASE-22152](https://issues.apache.org/jira/browse/HBASE-22152) | *Major* | 
**Create a jenkins file for yetus to processing GitHub PR**
+
+Add a new Jenkins file for running the pre-commit check on GitHub PRs.
+
+
+---
+
+* [HBASE-22007](https://issues.apache.org/jira/browse/HBASE-22007) | *Major* | 
**Add restoreSnapshot and cloneSnapshot with acl methods in AsyncAdmin**
+
+Add cloneSnapshot/restoreSnapshot with acl methods in AsyncAdmin.
+
+
+---
+
+* [HBASE-22123](https://issues.apache.org/jira/browse/HBASE-22123) | *Minor* | 
**REST gateway reports Insufficient permissions exceptions as 404 Not Found**
+
+When permissions are insufficient, you now get:
+
+HTTP/1.1 403 Forbidden
+
+on the HTTP side, and in the message
+
+Forbidden
+org.apache.hadoop.hbase.security.AccessDeniedException: 
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions for user ‘myuser',action: get, tableName:mytable, family:cf.
+at 
org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.authorizeAccess(RangerAuthorizationCoprocessor.java:547)
+and the rest of the ADE stack
+
+
+---
+
+* [HBASE-22100](https://issues.apache.org/jira/browse/HBASE-22100) | *Minor* | 
**False positive for error prone warnings in pre commit job**
+
+Now we sort the javac WARNINGs/ERRORs before generating the diff in pre-commit so we can get a stable output for error prone. The downside is that we just sort the output lexicographically, so line numbers are also sorted lexicographically, which is a bit strange to humans.
+
+
+---
+
+* [HBASE-22057](https://issues.apache.org/jira/browse/HBASE-22057) | *Major* | 
**Impose upper-bound on size of ZK ops sent in a single multi()**
+
+Exposes a new configuration property "zookeeper.multi.max.size" which dictates 
the maximum size of deletes that HBase will make to ZooKeeper in a single RPC. 
This property defaults to 1MB, which should fall beneath the default ZooKeeper 
limit of 2MB, controlled by "jute.maxbuffer".
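+
+A sketch of setting the cap explicitly in hbase-site.xml (bytes; 1048576 matches the stated 1MB default, which sits beneath ZooKeeper's default 2MB jute.maxbuffer limit):
+
+```xml
+<property>
+  <name>zookeeper.multi.max.size</name>
+  <value>1048576</value>
+</property>
+```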
+
+
+---
+
+* [HBASE-22052](https://issues.apache.org/jira/browse/HBASE-22052) | *Major* | 
**pom cleaning; filter out jersey-core in hadoop2 to match hadoop3 and remove 
redunant version specifications**
+
+<!-- markdown -->
+Fixed awkward dependency issue that prevented site building.
+
+#### note specific to HBase 2.1.4
+HBase 2.1.4 shipped with an early version of this fix that incorrectly altered 
the libraries included in our binary assembly for using Apache Hadoop 2.7 (the 
current build default Hadoop version for 2.1.z). For folks running out of the 
box against a Hadoop 2.7 cluster (or folks who skip the installation step of 
[replacing the bundled Hadoop 
libraries](http://hbase.apache.org/book.html#hadoop)) this will result in a 
failure at Region Server startup due to a missing class definition. e.g.:
+```
+2019-03-27 09:02:05,779 ERROR [main] regionserver.HRegionServer: Failed 
construction RegionServer
+java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
+       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:644)
+       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:628)
+       at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
+       at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
+       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
+       at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
+       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
+       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
+       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:171)
+       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:356)
+       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
+       at 
org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:362)
+       at 
org.apache.hadoop.hbase.util.CommonFSUtils.isValidWALRootDir(CommonFSUtils.java:411)
+       at 
org.apache.hadoop.hbase.util.CommonFSUtils.getWALRootDir(CommonFSUtils.java:387)
+       at 
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:704)
+       at 
org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:613)
+       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
+       at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
+       at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
+       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
+       at 
org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:3029)
+       at 
org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:63)
+       at 
org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
+       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
+       at 
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
+       at 
org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3047)
+Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
+       at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
+       at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
+       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
+       at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
+       ... 26 more
+
+```
+
+Workaround via any _one_ of the following:
+* If you are running against a Hadoop cluster that is 2.8+, ensure you replace the Hadoop libraries in the default binary assembly with those for your version.
+* If you are running against a Hadoop cluster that is 2.8+, build the binary 
assembly from the source release while specifying your Hadoop version.
+* If you are running against a Hadoop cluster that is a supported 2.7 release, 
ensure the `hadoop` executable is in the `PATH` seen at Region Server startup 
and that you are not using the `HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP` bypass.
+* For any supported Hadoop version, manually make the Apache HTrace artifact 
`htrace-core-3.1.0-incubating.jar` available to all Region Servers via the 
HBASE_CLASSPATH environment variable.
+* For any supported Hadoop version, manually make the Apache HTrace artifact 
`htrace-core-3.1.0-incubating.jar` available to all Region Servers by copying 
it into the directory `${HBASE_HOME}/lib/client-facing-thirdparty/`.
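
As a hedged sketch of the HBASE_CLASSPATH workaround above (the install location below is a placeholder, not taken from this note), e.g. in hbase-env.sh:

```shell
# Placeholder install location; adjust HBASE_HOME for your environment.
HBASE_HOME=/opt/hbase
# Prepend the HTrace artifact so Region Servers can resolve
# org.apache.htrace.SamplerBuilder at startup.
export HBASE_CLASSPATH="${HBASE_HOME}/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar${HBASE_CLASSPATH:+:${HBASE_CLASSPATH}}"
```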
+
+
+---
+
+* [HBASE-22065](https://issues.apache.org/jira/browse/HBASE-22065) | *Major* | 
**Add listTableDescriptors(List\<TableName\>) method in AsyncAdmin**
+
+Add a listTableDescriptors(List\<TableName\>) method in the AsyncAdmin 
interface, to align with the Admin interface.
+
+
+---
+
+* [HBASE-22040](https://issues.apache.org/jira/browse/HBASE-22040) | *Major* | 
**Add mergeRegionsAsync with a List of region names method in AsyncAdmin**
+
+Add a mergeRegionsAsync(byte[][], boolean) method in the AsyncAdmin interface.
+
+Instead of using an assert, we now throw IllegalArgumentException on the client side when you try to merge fewer than 2 regions. Likewise, on the master side, instead of using an assert, we now throw DoNotRetryIOException if you try to merge more than 2 regions, since only merging two regions at once is supported for now.
+
+
+---
+
+* [HBASE-22039](https://issues.apache.org/jira/browse/HBASE-22039) | *Major* | 
**Should add the synchronous parameter for the XXXSwitch method in AsyncAdmin**
+
+Add a drainXXX parameter for the balancerSwitch/splitSwitch/mergeSwitch methods in the AsyncAdmin interface, which has the same meaning as the synchronous parameter for these methods in the Admin interface.
+
+
+---
+
+* [HBASE-21810](https://issues.apache.org/jira/browse/HBASE-21810) | *Major* | 
**bulkload  support set hfile compression on client**
+
+Bulk load (HFileOutputFormat2) supports configuring the compression on the client side. You can set the job configuration property "hbase.mapreduce.hfileoutputformat.compression" to override the auto-detection of the target table's compression.
+
+
+---
+
+* [HBASE-22000](https://issues.apache.org/jira/browse/HBASE-22000) | *Major* | 
**Deprecated isTableAvailable with splitKeys**
+
+Deprecated AsyncTable.isTableAvailable(TableName, byte[][]).
+
+
+---
+
+* [HBASE-21871](https://issues.apache.org/jira/browse/HBASE-21871) | *Major* | 
**Support to specify a peer table name in VerifyReplication tool**
+
+After HBASE-21871, we can specify a peer table name with --peerTableName in the VerifyReplication tool, like the following:
+hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication --peerTableName=peerTable 5 TestTable
+
+In addition, we can compare any two tables in any two remote clusters by specifying both peerId (or peerQuorumAddress) and --peerTableName.
+
+For example:
+hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 
--peerTableName=peerTable zk1,zk2,zk3:2181/hbase TestTable
+
+
+---
+
+* [HBASE-15728](https://issues.apache.org/jira/browse/HBASE-15728) | *Major* | 
**Add remaining per-table region / store / flush / compaction related metrics**
+
+Adds the below flush, split, and compaction metrics:
+
+ +  // split related metrics
+ +  private MutableFastCounter splitRequest;
+ +  private MutableFastCounter splitSuccess;
+ +  private MetricHistogram splitTimeHisto;
+ +
+ +  // flush related metrics
+ +  private MetricHistogram flushTimeHisto;
+ +  private MetricHistogram flushMemstoreSizeHisto;
+ +  private MetricHistogram flushOutputSizeHisto;
+ +  private MutableFastCounter flushedMemstoreBytes;
+ +  private MutableFastCounter flushedOutputBytes;
+ +
+ +  // compaction related metrics
+ +  private MetricHistogram compactionTimeHisto;
+ +  private MetricHistogram compactionInputFileCountHisto;
+ +  private MetricHistogram compactionInputSizeHisto;
+ +  private MetricHistogram compactionOutputFileCountHisto;
+ +  private MetricHistogram compactionOutputSizeHisto;
+ +  private MutableFastCounter compactedInputBytes;
+ +  private MutableFastCounter compactedOutputBytes;
+ +
+ +  private MetricHistogram majorCompactionTimeHisto;
+ +  private MetricHistogram majorCompactionInputFileCountHisto;
+ +  private MetricHistogram majorCompactionInputSizeHisto;
+ +  private MetricHistogram majorCompactionOutputFileCountHisto;
+ +  private MetricHistogram majorCompactionOutputSizeHisto;
+ +  private MutableFastCounter majorCompactedInputBytes;
+ +  private MutableFastCounter majorCompactedOutputBytes;
+
+
+---
+
+* [HBASE-20886](https://issues.apache.org/jira/browse/HBASE-20886) | 
*Critical* | **[Auth] Support keytab login in hbase client**
+
+Since 2.2.0, HBase supports client login via keytab. To use this feature, the client should specify \`hbase.client.keytab.file\` and \`hbase.client.keytab.principal\` in hbase-site.xml; the connection will then contain the needed credentials, which are renewed periodically, for communicating with the kerberized HBase cluster.
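
A minimal hbase-site.xml sketch for this (the keytab path and principal below are placeholders, not values from this release note):

```xml
<configuration>
  <!-- Placeholder values; point these at your real keytab and principal. -->
  <property>
    <name>hbase.client.keytab.file</name>
    <value>/etc/security/keytabs/hbase-client.keytab</value>
  </property>
  <property>
    <name>hbase.client.keytab.principal</name>
    <value>hbaseclient@EXAMPLE.COM</value>
  </property>
</configuration>
```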
+
+
+---
+
+* [HBASE-21410](https://issues.apache.org/jira/browse/HBASE-21410) | *Major* | 
**A helper page that help find all problematic regions and procedures**
+
+After HBASE-21410, we add a helper page to the Master UI. This helper page mainly helps HBase operators quickly find all regions and procedure IDs (pids) that are stuck.
+There are 2 entries to this page.
+One is in the Regions in Transition section: "num region(s) in transition" is now a link that you can click to check all regions in transition and their related procedure IDs.
+The other is in the table details section: the number of CLOSING or OPENING regions is now a link, which you can click to check the regions and related procedure IDs of the CLOSING or OPENING regions of a certain table.
+On this helper page, you can not only see all regions and related procedures; there are also 2 buttons at the top which show these regions or procedure IDs in text format. This mainly aims to help operators easily copy and paste all problematic procedure IDs and encoded region names into HBCK2's command line, by which HBase operators can bypass these procedures or assign these regions.
+
+
+---
+
+* [HBASE-21588](https://issues.apache.org/jira/browse/HBASE-21588) | *Major* | 
**Procedure v2 wal splitting implementation**
+
+After HBASE-21588, we introduce a new way to coordinate WAL splitting via the procedure framework. This simplifies the WAL splitting process and removes the need to connect to ZooKeeper.
+During a ServerCrashProcedure, the master creates a SplitWALProcedure for each WAL that needs to be split. Each SplitWALProcedure then spawns a SplitWALRemoteProcedure to send the request to a region server.
+On the region server side, the whole process is handled by SplitWALCallable, which splits the WAL and returns the result to the master.
+According to testing, this patch performs better as the number of WALs to split increases, and it relieves the pressure on ZooKeeper.
+
+
+---
+
+* [HBASE-20734](https://issues.apache.org/jira/browse/HBASE-20734) | *Major* | 
**Colocate recovered edits directory with hbase.wal.dir**
+
+Previously the recovered.edits directory was under the root directory. This JIRA moves the recovered.edits directory to be under the hbase.wal.dir if set. It also adds a check for any recovered.edits found under the root directory, for backwards compatibility. This gives improvements when faster media (like SSD) or a more local FileSystem is used for the hbase.wal.dir than for the root dir.
+
+
+---
+
+* [HBASE-20401](https://issues.apache.org/jira/browse/HBASE-20401) | *Minor* | 
**Make \`MAX\_WAIT\` and \`waitIfNotFinished\` in CleanerContext configurable**
+
+When the oldwals (and hfile) cleaner cleans stale WALs (and HFiles), it periodically checks and waits for the clean results from the filesystem; the total wait time will be no more than a max time.
+
+The periodic check-and-wait configurations are hbase.oldwals.cleaner.thread.check.interval.msec (default 500 ms) and hbase.regionserver.hfilecleaner.thread.check.interval.msec (default 1000 ms).
+
+Meanwhile, the max time configurations are hbase.oldwals.cleaner.thread.timeout.msec and hbase.regionserver.hfilecleaner.thread.timeout.msec; both are set to 60 seconds by default.
+
+All of these support dynamic configuration.
+
+E.g. in the oldwals cleaning scenario, one may consider tuning hbase.oldwals.cleaner.thread.timeout.msec and hbase.oldwals.cleaner.thread.check.interval.msec:
+
+1. If deleting an oldwal never completes (strange, but possible), the delete-file task waits for at most 60 seconds. That might be too long; or, the other way around, one might want more than 60 seconds for use cases with slow file deletes.
+2. The check-and-wait period for a file delete defaults to 500 milliseconds. One might tune this to a shorter interval to check more frequently, or to a longer interval to avoid checking too often (a longer interval may be useful with high-latency storage).
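
For instance, a hypothetical tuning in hbase-site.xml (the values below are illustrative only, not recommendations from this note):

```xml
<configuration>
  <!-- Illustrative values: check every 1 s, give up after 30 s. -->
  <property>
    <name>hbase.oldwals.cleaner.thread.check.interval.msec</name>
    <value>1000</value>
  </property>
  <property>
    <name>hbase.oldwals.cleaner.thread.timeout.msec</name>
    <value>30000</value>
  </property>
</configuration>
```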
+
+
+---
+
+* [HBASE-21481](https://issues.apache.org/jira/browse/HBASE-21481) | *Major* | 
**[acl] Superuser's permissions should not be granted or revoked by any non-su 
global admin**
+
+HBASE-21481 improves the quality of access control by strengthening the protection of superusers' privileges.
+
+
+---
+
+* [HBASE-21082](https://issues.apache.org/jira/browse/HBASE-21082) | 
*Critical* | **Reimplement assign/unassign related procedure metrics**
+
+Now we have four types of RIT procedure metrics: assign, unassign, move, and reopen. The meaning of assign/unassign has changed: we will no longer increase the unassign metric and then the assign metric when moving a region.
+Also introduced two new procedure metrics, open and close, which are used to track the open/close region calls to the region server. We may send open/close multiple times to finish a RIT, since we may retry multiple times.
+
+
+---
+
+* [HBASE-20724](https://issues.apache.org/jira/browse/HBASE-20724) | 
*Critical* | **Sometimes some compacted storefiles are still opened after 
region failover**
+
+Problem: This is an old problem, dating back to HBASE-2231. The compaction event marker was only written to the WAL; but after a flush, the WAL may be archived, which means a useful compaction event marker is deleted, too. So the compacted store files could not be archived when a region opened and replayed the WAL.
+
+Solution: After this JIRA, the compaction event tracker is written to the HFile. When a region opens and loads its store files, it reads the compaction event tracker from the HFile and archives the compacted store files which still exist.
+
+
+---
+
+* [HBASE-21820](https://issues.apache.org/jira/browse/HBASE-21820) | *Major* | 
**Implement CLUSTER quota scope**
+
+HBase contains two quota scopes: MACHINE and CLUSTER. Before this patch, set quota operations did not expose the scope option to the client API and used MACHINE as the default; the CLUSTER scope could not be set or used.
+Shell commands are as follows:
+set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec'
+
+This issue implements CLUSTER scope in a simple way: for user, namespace, and user-over-namespace quotas, use [ClusterLimit / RSNum] as the machine limit. For table and user-over-table quotas, use [ClusterLimit / TotalTableRegionNum \* MachineTableRegionNum] as the machine limit.
+After this patch, users can set a CLUSTER scope quota, but MACHINE is still the default if the user omits the scope.
+Shell commands are as follows:
+set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec'
+set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec', SCOPE 
=\> MACHINE
+set\_quota, TYPE =\> THROTTLE, TABLE =\> 't1', LIMIT =\> '10req/sec', SCOPE 
=\> CLUSTER
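
The machine-limit arithmetic described above can be sketched as follows (a simplification of the rule stated in this note, not the actual server code; function and parameter names are made up):

```python
def machine_limit(cluster_limit, rs_num=None, total_table_regions=None,
                  machine_table_regions=None):
    """Sketch of how a CLUSTER-scope limit maps to a per-machine limit.

    For user / namespace / user-over-namespace quotas:
        ClusterLimit / RSNum
    For table / user-over-table quotas:
        ClusterLimit / TotalTableRegionNum * MachineTableRegionNum
    """
    if rs_num is not None:
        return cluster_limit / rs_num
    return cluster_limit / total_table_regions * machine_table_regions

# A 100 req/sec CLUSTER quota on a 5-RS cluster -> 20 req/sec per machine.
print(machine_limit(100, rs_num=5))
# A table with 10 regions total, 4 of them on this RS -> 40 req/sec here.
print(machine_limit(100, total_table_regions=10, machine_table_regions=4))
```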
+
+
+---
+
+* [HBASE-21057](https://issues.apache.org/jira/browse/HBASE-21057) | *Minor* | 
**upgrade to latest spotbugs**
+
+Change spotbugs version to 3.1.11.
+
+
+---
+
+* [HBASE-21922](https://issues.apache.org/jira/browse/HBASE-21922) | *Major* | 
**BloomContext#sanityCheck may failed when use ROWPREFIX\_DELIMITED bloom 
filter**
+
+Remove the bloom filter type ROWPREFIX\_DELIMITED. It may be added back once a better solution is found.
+
+
+---
+
+* [HBASE-21783](https://issues.apache.org/jira/browse/HBASE-21783) | *Major* | 
**Support exceed user/table/ns throttle quota if region server has available 
quota**
+
+Support enabling or disabling the exceed throttle quota. Exceed throttle quota means a user can over-consume the user/namespace/table quota if the region server has additional available quota because other users are not consuming it at the same time.
+Use the following shell commands to enable/disable the exceed throttle quota:
+enable\_exceed\_throttle\_quota
+disable\_exceed\_throttle\_quota
+There are two limits when the exceed throttle quota is enabled:
+1. At least one read and one write region server throttle quota must be set;
+2. All region server throttle quotas must be in the seconds time unit, because once previous requests exceed their quota and consume region server quota, quota in other time units may take a long time to refill, which may affect later requests.
+
+
+---
+
+* [HBASE-20587](https://issues.apache.org/jira/browse/HBASE-20587) | *Major* | 
**Replace Jackson with shaded thirdparty gson**
+
+Remove Jackson dependencies from most HBase modules except hbase-rest; use shaded Gson instead. The output JSON will be a bit different, since Jackson can use getters/setters, but Gson will always use the fields.
+
+
+---
+
+* [HBASE-21928](https://issues.apache.org/jira/browse/HBASE-21928) | *Major* | 
**Deprecated HConstants.META\_QOS**
+
+Mark HConstants.META\_QOS as deprecated. It is for internal use only and is the highest priority. You should not try to set a priority greater than or equal to this value; doing so is harmless, but also useless.
+
+
+---
+
+* [HBASE-17942](https://issues.apache.org/jira/browse/HBASE-17942) | *Major* | 
**Disable region splits and merges per table**
+
+This patch adds the ability to disable split and/or merge for a table (By 
default, split and merge are enabled for a table).
+
+
+---
+
+* [HBASE-21636](https://issues.apache.org/jira/browse/HBASE-21636) | *Major* | 
**Enhance the shell scan command to support missing scanner specifications like 
ReadType, IsolationLevel etc.**
+
+Allows the shell to set Scan options previously not exposed. See the additions as part of the scan help by typing the following in the hbase shell:
+
+hbase\> help 'scan'
+
+
+---
+
+* [HBASE-21201](https://issues.apache.org/jira/browse/HBASE-21201) | *Major* | 
**Support to run VerifyReplication MR tool without peerid**
+
+We can specify peerQuorumAddress instead of peerId in the VerifyReplication tool, so it no longer requires a peerId to be set up when using this tool.
+
+For example:
+hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 
zk1,zk2,zk3:2181/hbase testTable
+
+
+---
+
+* [HBASE-21838](https://issues.apache.org/jira/browse/HBASE-21838) | *Major* | 
**Create a special ReplicationEndpoint just for verifying the WAL entries are 
fine**
+
+Introduces a VerifyWALEntriesReplicationEndpoint, which replicates nothing and only verifies that all the cells are valid.
+It can be used to catch bugs in WAL writing, as most of the time we will not read the WALs again after writing them if there are no region server crashes.
+
+
+---
+
+* [HBASE-21727](https://issues.apache.org/jira/browse/HBASE-21727) | *Minor* | 
**Simplify documentation around client timeout**
+
+Deprecated the HBaseConfiguration#getInt(Configuration, String, String, int) method and removed it in 3.0.0.
+
+
+---
+
+* [HBASE-21764](https://issues.apache.org/jira/browse/HBASE-21764) | *Major* | 
**Size of in-memory compaction thread pool should be configurable**
+
+Introduced a new config key in this issue: hbase.regionserver.inmemory.compaction.pool.size. The default value is 10. You can configure this to set the pool size of the in-memory compaction pool. Note that all memstores in one region server share the same pool, so if you have many regions in one region server, you may need to set this larger to compact faster for better read performance.
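
For example, in hbase-site.xml (the value 32 below is purely illustrative):

```xml
<configuration>
  <!-- Illustrative value; size the pool for your region count. -->
  <property>
    <name>hbase.regionserver.inmemory.compaction.pool.size</name>
    <value>32</value>
  </property>
</configuration>
```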
+
+
+---
+
+* [HBASE-21684](https://issues.apache.org/jira/browse/HBASE-21684) | *Major* | 
**Throw DNRIOE when connection or rpc client is closed**
+
+Make StoppedRpcClientException extend DoNotRetryIOException.
+
+
+---
+
+* [HBASE-21739](https://issues.apache.org/jira/browse/HBASE-21739) | *Major* | 
**Move grant/revoke from regionserver to master**
+
+To implement user permission control in Procedure V2, the grant and revoke methods are moved from AccessController to the master first.
+AccessController#grant and AccessController#revoke are marked as deprecated; please use Admin#grant and Admin#revoke instead.
+
+
+---
+
+* [HBASE-21791](https://issues.apache.org/jira/browse/HBASE-21791) | *Blocker* 
| **Upgrade thrift dependency to 0.12.0**
+
+IMPORTANT: Due to security issues, all users who use hbase thrift should avoid 
using releases which do not have this fix.
+
+The affected releases are:
+2.1.x: 2.1.2 and below
+2.0.x: 2.0.4 and below
+1.x: 1.4.x and below
+
+If you are using one of the affected releases above, please consider upgrading to a newer release ASAP.
+
+
+---
+
+* [HBASE-21792](https://issues.apache.org/jira/browse/HBASE-21792) | *Major* | 
**Mark HTableMultiplexer as deprecated and remove it in 3.0.0**
+
+HTableMultiplexer exposes the implementation class and is incomplete, so we mark it as deprecated and will remove it in the 3.0.0 release.
+
+There is no direct replacement for HTableMultiplexer; please use BufferedMutator if you want to batch mutations to a table.
+
+
+---
+
+* [HBASE-21782](https://issues.apache.org/jira/browse/HBASE-21782) | *Major* | 
**LoadIncrementalHFiles should not be IA.Public**
+
+Introduce a BulkLoadHFiles interface which is marked as IA.Public, for doing 
bulk load programmatically.
+Introduce a BulkLoadHFilesTool which extends BulkLoadHFiles and is marked as IA.LimitedPrivate(TOOLS), for use from the command line.
+The old LoadIncrementalHFiles is deprecated and will be removed in 3.0.0.
+
+
+---
+
+* [HBASE-21762](https://issues.apache.org/jira/browse/HBASE-21762) | *Major* | 
**Move some methods in ClusterConnection to Connection**
+
+Move the two getHbck methods from ClusterConnection to Connection, and mark them as IA.LimitedPrivate(HBCK), as ClusterConnection is IA.Private and should not be depended upon by HBCK2.
+
+Add a clearRegionLocationCache method in Connection to clear the region location cache for all the tables. As most of the methods in RegionLocator have a 'reload' parameter, which implicitly tells the user that we have a region location cache, adding a method to clear the cache is fine.
+
+
+---
+
+* [HBASE-21713](https://issues.apache.org/jira/browse/HBASE-21713) | *Major* | 
**Support set region server throttle quota**
+
+Support setting a region server rpc throttle quota, which represents the read/write capacity of region servers and throttles when a region server's total requests exceed the limit.
+
+Use the following shell commands to set an RS quota:
+set\_quota TYPE =\> THROTTLE, REGIONSERVER =\> 'all', THROTTLE\_TYPE =\> WRITE, LIMIT =\> '20000req/sec'
+set\_quota TYPE =\> THROTTLE, REGIONSERVER =\> 'all', LIMIT =\> NONE
+"all" represents the throttle quota of all region servers; setting a quota for a specific region server isn't supported currently.
+
+
+---
+
+* [HBASE-21689](https://issues.apache.org/jira/browse/HBASE-21689) | *Minor* | 
**Make table/namespace specific current quota info available in 
shell(describe\_namespace & describe)**
+
+In the shell commands "describe\_namespace" and "describe", which are used to see the descriptors of namespaces and tables respectively, quotas set on that particular namespace/table will also be printed.
+
+
+---
+
+* [HBASE-17370](https://issues.apache.org/jira/browse/HBASE-17370) | *Major* | 
**Fix or provide shell scripts to drain and decommission region server**
+
+Adds shell support for the following:
+- List decommissioned/draining region servers
+- Decommission a list of region servers, optionally offload corresponding 
regions
+- Recommission a region server, optionally load a list of passed regions
+
+
+---
+
+* [HBASE-21734](https://issues.apache.org/jira/browse/HBASE-21734) | *Major* | 
**Some optimization in FilterListWithOR**
+
+After HBASE-21620, FilterListWithOR became a bit slow because we need to merge each sub-filter's ReturnCode (RC); before HBASE-21620 we skipped much of the RC merging, but that logic was wrong. So here we chose another way to optimize performance: removing KeyValueUtil#toNewKeyCell.
+Anoop Sam John suggested that KeyValueUtil#toNewKeyCell used to save some GC, because if we copy the key part of a cell into a single byte[], the block the cell refers to is no longer referenced by the filter list, so the upper layer can GC the data block quickly. But after HBASE-21620 we update the prevCellList for every encountered cell, so the lifecycle of a cell in the FilterList's prevCellList is much shorter; thus we just use the cell reference to save CPU.
+BTW, we removed all the array stream usage in the filter list, because it was also quite time-consuming in our tests.
+
+
+---
+
+* [HBASE-21738](https://issues.apache.org/jira/browse/HBASE-21738) | 
*Critical* | **Remove all the CSLM#size operation in our memstore because it's 
an quite time consuming.**
+
+We found that memstore snapshotting took much time because of calls to the time-consuming ConcurrentSkipListMap#size, which caused p999 latency spikes. So in this issue, we removed all ConcurrentSkipListMap#size calls in the memstore by counting cellsCount in MemStoreSizing. As described in the issue, the p999 latency spikes were mitigated.
+
+
+---
+
+* [HBASE-21034](https://issues.apache.org/jira/browse/HBASE-21034) | *Major* | 
**Add new throttle type: read/write capacity unit**
+
+Provides a new throttle type: capacity unit. One read/write/request capacity unit represents reading/writing/reading+writing up to 1K of data. If the data size is more than 1K, additional capacity units are consumed.
+
+Use shell command to set capacity unit(CU):
+set\_quota TYPE =\> THROTTLE, THROTTLE\_TYPE =\> WRITE, USER =\> 'u1', LIMIT 
=\> '10CU/sec'
+
+Use the "hbase.quota.read.capacity.unit" property to set the data size of one 
read capacity unit in bytes, the default value is 1K. Use the 
"hbase.quota.write.capacity.unit" property to set the data size of one write 
capacity unit in bytes, the default value is 1K.
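
The capacity-unit accounting described above can be sketched like this (an illustration of the 1K rounding rule, not the actual HBase implementation; the function name is made up):

```python
import math

def capacity_units(size_bytes, unit_size=1024):
    """Number of capacity units (CUs) consumed for a request of size_bytes.

    One unit covers up to unit_size bytes (1K by default, cf. the
    hbase.quota.read.capacity.unit / hbase.quota.write.capacity.unit
    properties); larger requests consume additional units.
    """
    return max(1, math.ceil(size_bytes / unit_size))

print(capacity_units(100))    # small read: 1 CU
print(capacity_units(1024))   # exactly 1K: 1 CU
print(capacity_units(4100))   # just over 4K: 5 CUs
```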
+
+
+---
+
+* [HBASE-21595](https://issues.apache.org/jira/browse/HBASE-21595) | *Minor* | 
**Print thread's information and stack traces when RS is aborting forcibly**
+
+Does a thread dump to stdout on abort.
+
+
+---
+
+* [HBASE-21732](https://issues.apache.org/jira/browse/HBASE-21732) | 
*Critical* | **Should call toUpperCase before using Enum.valueOf in some 
methods for ColumnFamilyDescriptor**
+
+Now all the Enum configs in ColumnFamilyDescriptor can accept lower-case config values.
+
+
+---
+
+* [HBASE-21712](https://issues.apache.org/jira/browse/HBASE-21712) | *Minor* | 
**Make submit-patch.py python3 compatible**
+
+Python 3 support was added to dev-support/submit-patch.py. To install the newly required dependencies, run the \`pip install -r dev-support/python-requirements.txt\` command.
+
+
+---
+
+* [HBASE-21657](https://issues.apache.org/jira/browse/HBASE-21657) | *Major* | 
**PrivateCellUtil#estimatedSerializedSizeOf has been the bottleneck in 100% 
scan case.**
+
+In HBASE-21657, I simplified the path of estimatedSerializedSize() & estimatedSerializedSizeOfCell() by moving the general getSerializedSize() and heapSize() from ExtendedCell to the Cell interface. The patch also included some other improvements:
+
+1. In 99% of cases our cells have no tags, so HFileScannerImpl now just returns a NoTagsByteBufferKeyValue if there are no tags, which means we can save lots of CPU time when sending no-tags cells to RPC, because we can just return the length instead of computing the serialized size by calculating the offset/length of each field (row/cf/cq...).
+2. Move the subclasses' getSerializedSize implementations from ExtendedCell into their own classes, which means we no longer need to call ExtendedCell's getSerializedSize() first and then forward to the subclass's getSerializedSize(withTags).
+3. Give an estimated ArrayList size to avoid frequent list extension in a big scan; we now estimate the array size as min(scan.rows, 512), which also helps a lot.
+
+We gain almost ~40% throughput improvement in the 100% scan case for branch-2 (cacheHitRatio~100%)[1], which is a good thing. However, it is an incompatible change in some cases: if an upstream user has implemented their own Cell (rare, but it can happen), their code will no longer compile.
+
+
+---
+
+* [HBASE-21647](https://issues.apache.org/jira/browse/HBASE-21647) | *Major* | 
**Add status track for splitting WAL tasks**
+
+Adds a task monitor that shows ServerCrashProcedure progress in the UI.
+
+
+---
+

[... 620 lines stripped ...]
