Added: dev/hbase/2.4.1-rm_testRC30/RELEASENOTES.md
==============================================================================
--- dev/hbase/2.4.1-rm_testRC30/RELEASENOTES.md (added)
+++ dev/hbase/2.4.1-rm_testRC30/RELEASENOTES.md Sat Jan  9 02:17:07 2021
@@ -0,0 +1,19535 @@
+# RELEASENOTES
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Be careful doing manual edits in this file. Do not change format
+# of release header or remove the below marker. This file is generated.
+# DO NOT REMOVE THIS MARKER; FOR INTERPOLATING CHANGES!-->
+# hbase 2.4.1-rm_test Release Notes
+No changes
+
+# hbase 2.4.1-rm_test_gcp Release Notes
+No changes
+
+# hbase 2.4.1-rm_test_mac Release Notes
+No changes
+
+# hbase 2.4.1-rm_test-2 Release Notes
+No changes
+
+# hbase 2.4.1-rm_test7 Release Notes
+No changes
+
+# hbase 2.4.1-rm_test5 Release Notes
+No changes
+
+# hbase 2.4.1-rm_test3 Release Notes
+No changes
+
+# hbase 2.4.1-rm_test2 Release Notes
+No changes
+
+# hbase 2.4.1-rm_test-STAGING Release Notes
+No changes
+
+# hbase 2.4.0-rm_test Release Notes
+No changes
+
+
+# HBASE  2.4.0 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-25127](https://issues.apache.org/jira/browse/HBASE-25127) | *Major* | 
**Enhance PerformanceEvaluation to profile meta replica performance.**
+
+Three new commands are added to PE:
+
+metaWrite, metaRandomRead and cleanMeta.
+
+Usage example:
+hbase pe --rows=100000 metaWrite 1
+hbase pe --nomapreduce --rows=100000 metaRandomRead 32
+hbase pe --rows=100000 cleanMeta 1
+
+metaWrite and cleanMeta should be run with only 1 thread and the same number 
of rows so all the rows inserted will be cleaned up properly.
+
+metaRandomRead can be run with multiple threads. The rows option should be set within the range of rows inserted by metaWrite.
+
+
+---
+
+* [HBASE-25237](https://issues.apache.org/jira/browse/HBASE-25237) | *Major* | 
**'hbase master stop' shuts down the cluster, not the master only**
+
+\`hbase master stop\` now shuts down only the master by default.
+1. Help added to \`hbase master stop\`:
+To stop cluster, use \`stop-hbase.sh\` or \`hbase master stop 
--shutDownCluster\`
+
+2. Help added to \`stop-hbase.sh\`:
+stop-hbase.sh can only be used for shutting down entire cluster. To shut down 
(HMaster\|HRegionServer) use hbase-daemon.sh stop (master\|regionserver)
+
+
+---
+
+* [HBASE-25242](https://issues.apache.org/jira/browse/HBASE-25242) | 
*Critical* | **Add Increment/Append support to RowMutations**
+
+After HBASE-25242, we can add Increment/Append operations to RowMutations and 
perform those operations atomically in a single row.
+HBASE-25242 includes an API change where the mutateRow() API returns a Result 
object to get the result of the Increment/Append operations.
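A minimal sketch of the new usage (the column/value names here are hypothetical, and the snippet assumes a 2.4+ hbase-client plus a connected Table, so it is illustrative rather than runnable standalone):

```java
// Sketch only: assumes byte[] constants ROW, CF, Q, COUNTER, LOG, value, suffix
// are defined elsewhere and `table` is an org.apache.hadoop.hbase.client.Table.
RowMutations mutations = new RowMutations(ROW);
mutations.add(new Put(ROW).addColumn(CF, Q, value));
mutations.add(new Increment(ROW).addColumn(CF, COUNTER, 1L)); // newly allowed
mutations.add(new Append(ROW).addColumn(CF, LOG, suffix));    // newly allowed

// mutateRow() now returns a Result so the outcome of the Increment/Append
// parts of the atomic batch can be inspected.
Result result = table.mutateRow(mutations);
```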
+
+
+---
+
+* [HBASE-25263](https://issues.apache.org/jira/browse/HBASE-25263) | *Major* | 
**Change encryption key generation algorithm used in the HBase shell**
+
+Since the backward-compatible change we introduced in HBASE-25263, we use the more secure PBKDF2WithHmacSHA384 key generation algorithm (instead of PBKDF2WithHmacSHA1) to generate a secret key for HFile / WalFile encryption when the user defines a string encryption key in the hbase shell.
+
+
+---
+
+* [HBASE-24268](https://issues.apache.org/jira/browse/HBASE-24268) | *Minor* | 
**REST and Thrift server do not handle the "doAs" parameter case insensitively**
+
+This change allows the REST and Thrift servers to handle the "doAs" parameter 
case-insensitively, which is deemed as correct per the "specification" provided 
by the Hadoop community.
+
+
+---
+
+* [HBASE-25278](https://issues.apache.org/jira/browse/HBASE-25278) | *Minor* | 
**Add option to toggle CACHE\_BLOCKS in count.rb**
+
+A new option, CACHE\_BLOCKS, was added to the \`count\` shell command which will force the data for a table to be loaded into the block cache. By default, the \`count\` command will not cache any blocks. This option can serve as a means for a table's data to be loaded into the block cache on demand. See the help message on the count shell command for usage details.
+
+
+---
+
+* [HBASE-18070](https://issues.apache.org/jira/browse/HBASE-18070) | 
*Critical* | **Enable memstore replication for meta replica**
+
+"Async WAL Replication" [1] was added by HBASE-11183 "Timeline Consistent 
region replicas - Phase 2 design" but only for user-space tables. This feature 
adds "Async WAL Replication" for the hbase:meta table.  It also adds a client 
'LoadBalance' mode that has reads go to replicas first and to the primary only 
on fail so as to shed read load from the primary to alleviate \*hotspotting\* 
on the hbase:meta Region.
+
+Configuration is as it was for the user-space 'Async WAL Replication'. See [2] 
and [3] for details on how to enable.
+
+1. http://hbase.apache.org/book.html#async.wal.replication
+2. http://hbase.apache.org/book.html#async.wal.replication.meta
+3. http://hbase.apache.org/book.html#\_async\_wal\_replication\_for\_meta\_table\_as\_of\_hbase\_2\_4\_0
+
+
+---
+
+* [HBASE-25126](https://issues.apache.org/jira/browse/HBASE-25126) | *Major* | 
**Add load balance logic in hbase-client to distribute read load over meta 
replica regions.**
+
+See parent issue, HBASE-18070, release notes for how to enable.
+
+
+---
+
+* [HBASE-25026](https://issues.apache.org/jira/browse/HBASE-25026) | *Minor* | 
**Create a metric to track full region scans RPCs**
+
+Adds a new metric where we collect the number of full region scan requests at the RPC layer. This will be collected under "name": "Hadoop:service=HBase,name=RegionServer,sub=Server"
+
+
+---
+
+* [HBASE-25253](https://issues.apache.org/jira/browse/HBASE-25253) | *Major* | 
**Deprecated master carrys regions related methods and configs**
+
+Since 2.4.0, all methods related to the master carrying regions (in LoadBalancer, BaseLoadBalancer, and ZNodeClearer) and the configs hbase.balancer.tablesOnMaster and hbase.balancer.tablesOnMaster.systemTablesOnly are deprecated; they will be removed in 3.0.0.
+
+
+---
+
+* [HBASE-20598](https://issues.apache.org/jira/browse/HBASE-20598) | *Major* | 
**Upgrade to JRuby 9.2**
+
+<!-- markdown -->
+The HBase shell now relies on JRuby 9.2. This is a new major version change 
for JRuby. The most significant change is Ruby compatibility changed from Ruby 
2.3 to Ruby 2.5. For more detailed changes please see [the JRuby release 
announcement for the start of the 9.2 
series](https://www.jruby.org/2018/05/24/jruby-9-2-0-0.html) as well as the 
[general release announcement page for updates since that 
version](https://www.jruby.org/news).
+
+The runtime dependency versions present on the server side classpath for the 
Joni (now 2.1.31) and JCodings (now 1.0.55) libraries have also been updated to 
match those found in the JRuby version shipped with HBase. These version 
changes are maintenance releases and should be backwards compatible when 
updated in tandem.
+
+
+---
+
+* [HBASE-25181](https://issues.apache.org/jira/browse/HBASE-25181) | *Major* | 
**Add options for disabling column family encryption and choosing hash 
algorithm for wrapped encryption keys.**
+
+<!-- markdown -->
+This change adds options for disabling column family encryption and choosing 
hash algorithm for wrapped encryption keys. Changes are done such that defaults 
will keep the same behavior prior to this issue.
+    
+Prior to this change HBase always used the MD5 hash algorithm to store a hash for encryption keys. This hash is needed to verify the secret key of the subject (e.g. making sure that the same secret key is used during encrypted HFile read and write). The MD5 algorithm is considered weak and cannot be used in some (e.g. FIPS compliant) clusters. Having a configurable hash enables us to use newer and more secure hash algorithms like SHA-384 or SHA-512 (which are FIPS compliant).
+
+The hash is set via the configuration option 
`hbase.crypto.key.hash.algorithm`. It should be set to a JDK `MessageDigest` 
algorithm like "MD5", "SHA-256" or "SHA-384". The default is "MD5" for backward 
compatibility.
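As an illustration of the configurable digest (a standalone JDK sketch, not HBase's actual key-verification code):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class KeyHashExample {
  // Hash a secret key with a configurable algorithm name, as read from
  // hbase.crypto.key.hash.algorithm ("MD5" remains the default for compatibility).
  static byte[] hashKey(byte[] key, String algorithm) {
    try {
      return MessageDigest.getInstance(algorithm).digest(key);
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException("unsupported hash algorithm: " + algorithm, e);
    }
  }
}
```

Any JDK `MessageDigest` name works here, so FIPS-compliant clusters can pick "SHA-384" (48-byte digests) instead of "MD5" (16-byte digests).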
+
+Alternatively, clusters which rely on an encryption at rest mechanism outside 
of HBase (e.g. those offered by HDFS) and wish to ensure HBase's encryption at 
rest system is inactive can set `hbase.crypto.enabled` to `false`.
+
+
+---
+
+* [HBASE-25238](https://issues.apache.org/jira/browse/HBASE-25238) | 
*Critical* | **Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message 
missing required fields: state”**
+
+Fixes master procedure store migration issues going from 2.0.x to 2.2.x and/or 
2.3.x. Also fixes failed heartbeat parse during rolling upgrade from 2.0.x. to 
2.3.x.
+
+
+---
+
+* [HBASE-25234](https://issues.apache.org/jira/browse/HBASE-25234) | *Major* | 
**[Upgrade]Incompatibility in reading RS report from 2.1 RS when Master is 
upgraded to a version containing HBASE-21406**
+
+Fixes so auto-migration of the master procedure store works again going from 2.0.x =\> 2.2+. Also makes it so heartbeats work during a rolling upgrade from 2.0.x =\> 2.3+.
+
+
+---
+
+* [HBASE-25212](https://issues.apache.org/jira/browse/HBASE-25212) | *Major* | 
**Optionally abort requests in progress after deciding a region should close**
+
+If hbase.regionserver.close.wait.abort is set to true, interrupt RPC handler 
threads holding the region close lock. 
+
+Until requests in progress can be aborted, wait on the region close lock for a 
configurable interval (specified by hbase.regionserver.close.wait.time.ms, 
default 60000 (1 minute)). If we have failed to acquire the close lock after 
this interval elapses, if allowed (also specified by 
hbase.regionserver.close.wait.abort), abort the regionserver.
+
+We will attempt to interrupt any running handlers every 
hbase.regionserver.close.wait.interval.ms (default 10000 (10 seconds)) until 
either the close lock is acquired or we reach the maximum wait time.
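The wait/abort loop described above can be sketched as follows (a simplified standalone model of the behavior, not the actual HRegion code):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class CloseLockWaitExample {
  // Wait up to maxWaitMs for the close lock, retrying every intervalMs
  // (mirroring hbase.regionserver.close.wait.time.ms and
  // hbase.regionserver.close.wait.interval.ms). Returns false on timeout,
  // at which point the caller may abort if close.wait.abort is true.
  static boolean waitForCloseLock(ReentrantLock closeLock, long maxWaitMs, long intervalMs) {
    long deadline = System.currentTimeMillis() + maxWaitMs;
    while (true) {
      long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        return false; // timed out; aborting is the caller's decision
      }
      try {
        if (closeLock.tryLock(Math.min(intervalMs, remaining), TimeUnit.MILLISECONDS)) {
          return true;
        }
        // in the real code, this is where in-progress handlers are interrupted
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
  }
}
```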
+
+
+---
+
+* [HBASE-25167](https://issues.apache.org/jira/browse/HBASE-25167) | *Major* | 
**Normalizer support for hot config reloading**
+
+<!-- markdown -->
+This patch adds [dynamic 
configuration](https://hbase.apache.org/book.html#dyn_config) support for the 
following configuration keys related to the normalizer:
+* hbase.normalizer.throughput.max_bytes_per_sec
+* hbase.normalizer.split.enabled
+* hbase.normalizer.merge.enabled
+* hbase.normalizer.min.region.count
+* hbase.normalizer.merge.min_region_age.days
+* hbase.normalizer.merge.min_region_size.mb
+
+
+---
+
+* [HBASE-25224](https://issues.apache.org/jira/browse/HBASE-25224) | *Major* | 
**Maximize sleep for checking meta and namespace regions availability**
+
+Changed the max sleep time during the meta and namespace region availability check to 60 seconds. Previously there was no such cap.
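In effect the retry sleep is now clamped, e.g. (a toy sketch assuming a doubling backoff; the exact schedule is internal to the master):

```java
class CappedBackoffExample {
  static final long MAX_SLEEP_MS = 60_000; // the new 60 second cap

  // Hypothetical doubling backoff, now bounded instead of growing forever.
  static long nextSleepMs(long previousSleepMs) {
    return Math.min(previousSleepMs * 2, MAX_SLEEP_MS);
  }
}
```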
+
+
+---
+
+* [HBASE-24628](https://issues.apache.org/jira/browse/HBASE-24628) | *Major* | 
**Region normalizer now respects a rate limit**
+
+<!-- markdown -->
+Introduces a new configuration, 
`hbase.normalizer.throughput.max_bytes_per_sec`, for specifying a limit on the 
throughput of actions executed by the normalizer. Note that while this 
configuration value is in bytes, the minimum honored value is `1,000,000`, or `1m`. Supports values configured using the human-readable suffixes honored by [`Configuration.getLongBytes`](https://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html#getLongBytes-java.lang.String-long-).
+
+
+---
+
+* [HBASE-14067](https://issues.apache.org/jira/browse/HBASE-14067) | *Major* | 
**bundle ruby files for hbase shell into a jar.**
+
+<!-- markdown -->
+The `hbase-shell` artifact now contains the ruby files that implement the 
hbase shell. There should be no downstream impact for users of the shell that 
rely on the `hbase shell` command.
+
+Folks that wish to include the HBase ruby classes defined for the shell in 
their own JRuby scripts should add the `hbase-shell.jar` file to their 
classpath rather than add `${HBASE_HOME}/lib/ruby` to their load paths.
+
+
+---
+
+* [HBASE-24875](https://issues.apache.org/jira/browse/HBASE-24875) | *Major* | 
**Remove the force param for unassign since it dose not take effect any more**
+
+<!-- markdown -->
+The "force" flag to various unassign commands (java api, shell, etc) has been 
ignored since HBase 2. As of this change the methods that take it are now 
deprecated. Downstream users should stop passing/using this flag.
+
+The Admin and AsyncAdmin Java APIs will have the deprecated version of the 
unassign method with a force flag removed in HBase 4. Callers can safely 
continue to use the deprecated API until then; the internal implementation just 
calls the new method.
+
+The MasterObserver coprocessor API deprecates the `preUnassign` and 
`postUnassign` methods that include the force parameter and replaces them with 
versions that omit this parameter. The deprecated methods will be removed from 
the API in HBase 3. Until then downstream coprocessor implementations can 
safely continue to *just* implement the deprecated method if they wish; the 
replacement methods provide a default implementation that calls the deprecated 
method with force set to `false`.
+
+
+---
+
+* [HBASE-25099](https://issues.apache.org/jira/browse/HBASE-25099) | *Major* | 
**Change meta replica count by altering meta table descriptor**
+
+Now you can change the region replication config for the meta table by altering the meta table.
+The old "hbase.meta.replica.count" config is deprecated and will be removed in 4.0.0. But if it is set, we will still honor it, which means that when the master restarts, if we find that the value of 'hbase.meta.replica.count' differs from the region replication config of the meta table, we will schedule an alter table operation to change the region replication config to the value you configured for 'hbase.meta.replica.count'.
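For example, from the hbase shell (assumed syntax; REGION\_REPLICATION is an ordinary table descriptor attribute now):

```
alter 'hbase:meta', {REGION_REPLICATION => 3}
```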
+
+
+---
+
+* [HBASE-23834](https://issues.apache.org/jira/browse/HBASE-23834) | *Major* | 
**HBase fails to run on Hadoop 3.3.0/3.2.2/3.1.4 due to jetty version mismatch**
+
+Use shaded json and jersey in HBase.
+Ban the imports of unshaded json and jersey in code.
+
+
+---
+
+* [HBASE-25163](https://issues.apache.org/jira/browse/HBASE-25163) | *Major* | 
**Increase the timeout value for nightly jobs**
+
+Increase the timeout value for nightly jobs to 16 hours, since the new build machines are dedicated to the HBase project and we are allowed to use them all the time.
+
+
+---
+
+* [HBASE-22976](https://issues.apache.org/jira/browse/HBASE-22976) | *Major* | 
**[HBCK2] Add RecoveredEditsPlayer**
+
+WALPlayer can replay the content of recovered.edits directories.
+
+A side effect is that the WAL filename timestamp is now factored in when setting start/end times for WALInputFormat, i.e. the wal.start.time and wal.end.time values on a job context. Previously we looked at wal.end.time only. Now we consider wal.start.time too. If a file has a name outside of wal.start.time\<-\>wal.end.time, it will be bypassed. This change in behavior makes it easier on operators crafting timestamp filters for processing WALs.
+
+
+---
+
+* [HBASE-25165](https://issues.apache.org/jira/browse/HBASE-25165) | *Minor* | 
**Change 'State time' in UI so sorts**
+
+Start time on the Master UI is now displayed using ISO8601 format instead of 
java Date#toString().
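ISO8601 strings sort lexicographically in chronological order, which is what makes the new column sortable; a small JDK illustration:

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

class StartTimeFormatExample {
  // Format an epoch-millis start time as ISO8601 (UTC), as the Master UI now does.
  static String iso(long epochMillis) {
    return DateTimeFormatter.ISO_INSTANT.format(Instant.ofEpochMilli(epochMillis));
  }
}
```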
+
+
+---
+
+* [HBASE-25124](https://issues.apache.org/jira/browse/HBASE-25124) | *Major* | 
**Support changing region replica count without disabling table**
+
+Now you do not need to disable a table before changing its 'region 
replication' property.
+If you are decreasing the replica count, the excess region replicas will be 
closed before reopening other replicas.
+If you are increasing the replica count, the new region replicas will be 
opened after reopening the existing replicas.
+
+
+---
+
+* [HBASE-25154](https://issues.apache.org/jira/browse/HBASE-25154) | *Major* | 
**Set java.io.tmpdir to project build directory to avoid writing std\*deferred 
files to /tmp**
+
+Change the java.io.tmpdir to project.build.directory in surefire-maven-plugin, 
to avoid writing std\*deferred files to /tmp which may blow up the /tmp disk on 
our jenkins build node.
+
+
+---
+
+* [HBASE-25055](https://issues.apache.org/jira/browse/HBASE-25055) | *Major* | 
**Add ReplicationSource for meta WALs; add enable/disable when hbase:meta 
assigned to RS**
+
+Set hbase.region.replica.replication.catalog.enabled to enable async WAL replication for hbase:meta region replicas. It is off by default.
+
+Defaults to the RegionReadReplicaEndpoint.class shipping edits -- set 
hbase.region.replica.catalog.replication to target a different endpoint 
implementation.
+
+
+---
+
+* [HBASE-25109](https://issues.apache.org/jira/browse/HBASE-25109) | *Major* | 
**Add MR Counters to WALPlayer; currently hard to tell if it is doing anything**
+
+Adds MR Counters to the WALPlayer output:
+
+       org.apache.hadoop.hbase.mapreduce.WALPlayer$Counter
+               CELLS\_READ=89574
+               CELLS\_WRITTEN=89572
+               DELETES=64
+               PUTS=5305
+               WALEDITS=4375
+
+
+---
+
+* [HBASE-24896](https://issues.apache.org/jira/browse/HBASE-24896) | *Major* | 
**'Stuck' in static initialization creating RegionInfo instance**
+
+1. Untangle RegionInfo, RegionInfoBuilder, and MutableRegionInfo static
+initializations.
+2. Undo static initializing references from RegionInfo to RegionInfoBuilder.
+3. Mark RegionInfo#UNDEFINED IA.Private and deprecated;
+it is for internal use only and likely to be removed in HBase4. (sub-task 
HBASE-24918)
+4. Move MutableRegionInfo from inner-class of
+RegionInfoBuilder to be (package private) standalone. (sub-task HBASE-24918)
+
+
+---
+
+* [HBASE-24956](https://issues.apache.org/jira/browse/HBASE-24956) | *Major* | 
**ConnectionManager#locateRegionInMeta waits for user region lock 
indefinitely.**
+
+<!-- markdown -->
+
+Without this fix there are situations in which locateRegionInMeta() on a client is not bounded by a timeout. This happens because of a global lock whose acquisition was not bounded by any timeout. This affects client-facing API calls that rely on this method to locate a table region in meta. This fix brings the lock acquisition under the scope of "hbase.client.meta.operation.timeout", which guarantees a bounded wait time.
+
+
+---
+
+* [HBASE-24764](https://issues.apache.org/jira/browse/HBASE-24764) | *Minor* | 
**Add support of adding base peer configs via hbase-site.xml for all 
replication peers.**
+
+<!-- markdown -->
+
+Adds a new configuration parameter "hbase.replication.peer.base.config" which 
accepts a semi-colon separated key=CSV pairs (example: k1=v1;k2=v2_1,v3...). 
When this configuration is set on the server side, these kv pairs are added to 
every peer configuration if not already set. Peer specific configuration 
overrides have precedence over the above default configuration. This is useful 
in cases when some configuration has to be set for all the peers by default and 
one does not want to add to every peer definition.
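The value format and the override behavior can be sketched like this (a standalone model of the parsing/merging described above, not HBase's actual implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class BasePeerConfigExample {
  // Parse a semicolon-separated list of key=value pairs, e.g. "k1=v1;k2=v2_1,v3".
  static Map<String, String> parseBaseConfig(String spec) {
    Map<String, String> out = new LinkedHashMap<>();
    for (String pair : spec.split(";")) {
      int eq = pair.indexOf('=');
      if (eq < 0) continue; // skip malformed entries
      out.put(pair.substring(0, eq), pair.substring(eq + 1));
    }
    return out;
  }

  // Base entries are added to every peer configuration, but peer-specific
  // overrides take precedence over the base defaults.
  static Map<String, String> merge(Map<String, String> base, Map<String, String> peer) {
    Map<String, String> merged = new LinkedHashMap<>(base);
    merged.putAll(peer);
    return merged;
  }
}
```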
+
+
+---
+
+* [HBASE-24994](https://issues.apache.org/jira/browse/HBASE-24994) | *Minor* | 
**Add hedgedReadOpsInCurThread metric**
+
+Expose Hadoop hedgedReadOpsInCurThread metric to HBase.
+This metric counts the number of times the hedged reads service executor 
rejected a read task, falling back to the current thread.
+This will help determine the proper size of the thread pool 
(dfs.client.hedged.read.threadpool.size).
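The rejection-and-fallback behavior can be sketched with a plain `ThreadPoolExecutor` (the real counter lives in the HDFS client, not in this toy):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

class HedgedReadFallbackExample {
  // Counts rejected hedged-read tasks that fall back to the calling thread,
  // which is the situation the hedgedReadOpsInCurThread metric tracks.
  static final AtomicLong hedgedReadOpsInCurThread = new AtomicLong();

  static ExecutorService newHedgedReadPool(int size) {
    return new ThreadPoolExecutor(size, size, 0L, TimeUnit.MILLISECONDS,
        new SynchronousQueue<>(),
        (task, executor) -> {
          // rejection handler: count the fallback and run in the current thread
          hedgedReadOpsInCurThread.incrementAndGet();
          task.run();
        });
  }
}
```

If this counter climbs, the pool (sized by dfs.client.hedged.read.threadpool.size) is too small for the hedged-read load.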
+
+
+---
+
+* [HBASE-24776](https://issues.apache.org/jira/browse/HBASE-24776) | *Major* | 
**[hbtop] Support Batch mode**
+
+HBASE-24776 added the following command line parameters to hbtop:
+\| Argument \| Description \|
+\|---\|---\|
+\| -n,--numberOfIterations \<arg\> \| The number of iterations \|
+\| -O,--outputFieldNames \| Print each of the available field names on a separate line, then quit \|
+\| -f,--fields \<arg\> \| Show only the given fields. Specify comma separated fields to show multiple fields \|
+\| -s,--sortField \<arg\> \| The initial sort field. You can prepend a \`+' or \`-' to the field name to also override the sort direction. A leading \`+' will force sorting high to low, whereas a \`-' will ensure a low to high ordering \|
+\| -i,--filters \<arg\> \| The initial filters. Specify comma separated filters to set multiple filters \|
+\| -b,--batchMode \| Starts hbtop in Batch mode, which could be useful for sending output from hbtop to other programs or to a file. In this mode, hbtop will not accept input and runs until the iterations limit you've set with the \`-n' command-line option or until killed \|
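For example, to capture a few iterations non-interactively (hypothetical invocation):

```
hbase hbtop -b -n 5 > hbtop.out
```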
+
+
+---
+
+* [HBASE-24602](https://issues.apache.org/jira/browse/HBASE-24602) | *Major* | 
**Add Increment and Append support to CheckAndMutate**
+
+Summary of the change of HBASE-24602:
+- Add \`build(Increment)\` and \`build(Append)\` methods to the \`Builder\` 
class of the \`CheckAndMutate\` class. After this change, we can perform 
checkAndIncrement/Append operations as follows:
+\`\`\`
+// Build a CheckAndMutate object with a Increment object
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifEquals(family, qualifier, value)
+  .build(increment);
+
+// Perform a CheckAndIncrement operation
+CheckAndMutateResult checkAndMutateResult = table.checkAndMutate(checkAndMutate);
+
+// Get whether or not the CheckAndIncrement operation is successful
+boolean success = checkAndMutateResult.isSuccess();
+
+// Get the result of the increment operation
+Result result = checkAndMutateResult.getResult();
+\`\`\`
+- After this change, \`HRegion.batchMutate()\` is used for increment/append 
operations.
+- As the side effect of the above change, the following coprocessor methods of 
RegionObserver are called when increment/append operations are performed:
+  - preBatchMutate()
+  - postBatchMutate()
+  - postBatchMutateIndispensably()
+
+
+---
+
+* [HBASE-24694](https://issues.apache.org/jira/browse/HBASE-24694) | *Major* | 
**Support flush a single column family of table**
+
+Adds option for the flush command to flush all stores from the specified 
column family only, among all regions of the given table (stores from other 
column families on this table would not get flushed).
+
+
+---
+
+* [HBASE-24625](https://issues.apache.org/jira/browse/HBASE-24625) | 
*Critical* | **AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the 
expected synced file length.**
+
+We add a method getSyncedLength to the WALProvider.WriterBase interface for the WALFileLengthProvider used by replication. Consider the case where we use AsyncFSWAL: we write to 3 DNs concurrently, and according to the visibility guarantee of HDFS the data will be available immediately when arriving at a DN, since all the DNs will be considered as the last one in the pipeline. This means replication may read uncommitted data, replicate it to the remote cluster, and cause data inconsistency. The method WriterBase#getLength may return a length that is still in the hdfs client buffer and not yet successfully synced to HDFS, so we use the method WriterBase#getSyncedLength to return the length successfully synced to HDFS, and the replication thread will only read the WAL file being written up to this length.
+See also HBASE-14004 and this document for more details:
+https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
+
+Before this patch, replication could read uncommitted data and replicate it to the slave cluster, causing data inconsistency between the master and slave clusters. Without this patch applied, FSHLog can be used instead of AsyncFSWAL to reduce the probability of inconsistency.
+
+
+---
+
+* [HBASE-24779](https://issues.apache.org/jira/browse/HBASE-24779) | *Minor* | 
**Improve insight into replication WAL readers hung on checkQuota**
+
+New metrics are exposed, on the global source, for replication which indicate the "WAL entry buffer" usage that was introduced in HBASE-15995. When this usage reaches the limit, that RegionServer will cease to read more data for the sake of trying to replicate it. This usage (and limit) is local to each RegionServer and is shared across all peers being handled by that RegionServer.
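The quota idea can be modeled with a shared counter (an illustrative sketch, not the actual replication WAL reader code):

```java
import java.util.concurrent.atomic.AtomicLong;

class WalEntryBufferQuotaExample {
  private final AtomicLong bufferUsed = new AtomicLong();
  private final long quotaBytes; // RegionServer-wide limit shared by all peers

  WalEntryBufferQuotaExample(long quotaBytes) {
    this.quotaBytes = quotaBytes;
  }

  // Mirrors the checkQuota idea: a WAL reader only proceeds while the shared
  // buffer usage stays under the limit; otherwise it stops reading for now.
  boolean tryAcquire(long entrySize) {
    long newUsage = bufferUsed.addAndGet(entrySize);
    if (newUsage > quotaBytes) {
      bufferUsed.addAndGet(-entrySize); // over quota: back off and retry later
      return false;
    }
    return true;
  }

  // Called once the entry has been shipped to the peer.
  void release(long entrySize) {
    bufferUsed.addAndGet(-entrySize);
  }
}
```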
+
+
+---
+
+* [HBASE-24404](https://issues.apache.org/jira/browse/HBASE-24404) | *Major* | 
**Support flush a single column family of region**
+
+This adds an extra "flush" command option that allows for specifying an 
individual family to have its store flushed.
+
+Usage:
+flush 'REGIONNAME','FAMILYNAME' 
+flush 'ENCODED\_REGIONNAME','FAMILYNAME'
+
+
+---
+
+* [HBASE-24805](https://issues.apache.org/jira/browse/HBASE-24805) | *Major* | 
**HBaseTestingUtility.getConnection should be threadsafe**
+
+<!-- markdown -->
+Users of `HBaseTestingUtility` can now safely call the `getConnection` method 
from multiple threads.
+
+As a consequence of refactoring to improve the thread safety of the HBase 
testing classes, the protected `conf` member of the  
`HBaseCommonTestingUtility` class has been marked final. Downstream users who 
extend from the class hierarchy rooted at this class will need to pass the 
Configuration instance they want used to their super constructor rather than 
overwriting the instance variable.
+
+
+---
+
+* [HBASE-24767](https://issues.apache.org/jira/browse/HBASE-24767) | *Major* | 
**Change default to false for HBASE-15519 per-user metrics**
+
+Disables per-user metrics. They were enabled by default for the first time in 
hbase-2.3.0 but they need some work before they can be on all the time (See 
HBASE-15519)
+
+
+---
+
+* [HBASE-24704](https://issues.apache.org/jira/browse/HBASE-24704) | *Major* | 
**Make the Table Schema easier to view even there are multiple families**
+
+Improve the layout of column family from vertical to horizontal in table UI.
+
+
+---
+
+* [HBASE-11686](https://issues.apache.org/jira/browse/HBASE-11686) | *Minor* | 
**Shell code should create a binding / irb workspace instead of polluting the 
root namespace**
+
+In the shell, all HBase constants and commands have been moved out of the top-level namespace and into an IRB workspace. Piped stdin and scripts passed by name to the shell will be evaluated within this workspace. If you absolutely need the top-level definitions, use the new compatibility flag, i.e. hbase shell --top-level-defs or hbase shell --top-level-defs script2run.rb.
+
+
+---
+
+* [HBASE-24632](https://issues.apache.org/jira/browse/HBASE-24632) | *Major* | 
**Enable procedure-based log splitting as default in hbase3**
+
+Enables procedure-based distributed WAL splitting as default (HBASE-20610). To 
use 'classic' zk-coordinated splitting instead, set 
'hbase.split.wal.zk.coordinated' to 'true'.
+
+
+---
+
+* [HBASE-24698](https://issues.apache.org/jira/browse/HBASE-24698) | *Major* | 
**Turn OFF Canary WebUI as default**
+
+Flips default for 'HBASE-23994 Add WebUI to Canary' The UI defaulted to on at 
port 16050. This JIRA changes it so new UI is off by default.
+
+To enable the UI, set property 'hbase.canary.info.port' to the port you want 
the UI to use.
+
+
+---
+
+* [HBASE-24650](https://issues.apache.org/jira/browse/HBASE-24650) | *Major* | 
**Change the return types of the new checkAndMutate methods introduced in 
HBASE-8458**
+
+HBASE-24650 introduced CheckAndMutateResult class and changed the return type 
of checkAndMutate methods to this class in order to support CheckAndMutate with 
Increment/Append. CheckAndMutateResult class has two fields, one is \*success\* 
that indicates whether the operation is successful or not, and the other one is 
\*result\* that's the result of the operation and is used for  CheckAndMutate 
with Increment/Append.
+
+The new APIs for the Table interface:
+\`\`\`
+/\*\*
+ \* checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
+ \* it performs the specified action.
+ \*
+ \* @param checkAndMutate The CheckAndMutate object.
+ \* @return A CheckAndMutateResult object that represents the result for the 
CheckAndMutate.
+ \* @throws IOException if a remote or network exception occurs.
+ \*/
+default CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) throws IOException {
+  return checkAndMutate(Collections.singletonList(checkAndMutate)).get(0);
+}
+
+/\*\*
+ \* Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
+ \* that they are sent to a RS in one RPC, but each CheckAndMutate operation 
is still executed
+ \* atomically (and thus, each may fail independently of others).
+ \*
+ \* @param checkAndMutates The list of CheckAndMutate.
+ \* @return A list of CheckAndMutateResult objects that represents the result 
for each
+ \*   CheckAndMutate.
+ \* @throws IOException if a remote or network exception occurs.
+ \*/
+default List\<CheckAndMutateResult\> checkAndMutate(List\<CheckAndMutate\> checkAndMutates)
+  throws IOException {
+  throw new NotImplementedException("Add an implementation!");
+}
+\`\`\`
+
+The new APIs for the AsyncTable interface:
+\`\`\`
+/\*\*
+ \* checkAndMutate that atomically checks if a row matches the specified 
condition. If it does,
+ \* it performs the specified action.
+ \*
+ \* @param checkAndMutate The CheckAndMutate object.
+ \* @return A {@link CompletableFuture}s that represent the result for the 
CheckAndMutate.
+ \*/
+CompletableFuture\<CheckAndMutateResult\> checkAndMutate(CheckAndMutate 
checkAndMutate);
+
+/\*\*
+ \* Batch version of checkAndMutate. The specified CheckAndMutates are batched 
only in the sense
+ \* that they are sent to a RS in one RPC, but each CheckAndMutate operation 
is still executed
+ \* atomically (and thus, each may fail independently of others).
+ \*
+ \* @param checkAndMutates The list of CheckAndMutate.
+ \* @return A list of {@link CompletableFuture}s that represent the result for 
each
+ \*   CheckAndMutate.
+ \*/
+List\<CompletableFuture\<CheckAndMutateResult\>\> checkAndMutate(
+  List\<CheckAndMutate\> checkAndMutates);
+
+/\*\*
+ \* A simple version of batch checkAndMutate. It will fail if there are any 
failures.
+ \*
+ \* @param checkAndMutates The list of rows to apply.
+ \* @return A {@link CompletableFuture} that wraps the result list.
+ \*/
+default CompletableFuture\<List\<CheckAndMutateResult\>\> checkAndMutateAll(
+  List\<CheckAndMutate\> checkAndMutates) {
+  return allOf(checkAndMutate(checkAndMutates));
+}
+{code}
+
+
+---
+
+* [HBASE-24671](https://issues.apache.org/jira/browse/HBASE-24671) | *Major* | 
**Add excludefile and designatedfile options to graceful\_stop.sh**
+
+Add excludefile and designatedfile options to graceful\_stop.sh.
+
+The designated file should list one \<hostname:port\> per line; these are the unload targets.
+
+The exclude file should also list one \<hostname:port\> per line; we do not unload regions to the hostnames given in it.
+
+Here is a simple example using graceful\_stop.sh with the designatedfile option:
+./bin/graceful\_stop.sh --maxthreads 4 --designatedfile /path/designatedfile hostname
+The excludefile option is used in the same way.
+
+
+---
+
+* [HBASE-24560](https://issues.apache.org/jira/browse/HBASE-24560) | *Major* | 
**Add a new option of designatedfile in RegionMover**
+
+Add a new option "designatedfile" in RegionMover.
+
+If the designated file is present and has contents, regions will be unloaded to the hostnames provided in it.
+
+The designated file should have one 'host:port' per line.
+
+
+---
+
+* [HBASE-24289](https://issues.apache.org/jira/browse/HBASE-24289) | *Major* | 
**Heterogeneous Storage for Date Tiered Compaction**
+
+Enhance DateTieredCompaction to support HDFS storage policy within one column family.
+# First you need to enable DTCP.
+To turn on Date Tiered Compaction (it is not recommended to turn it on for the whole cluster, because that would put the meta table on it too, and random gets on the meta table would be impacted):
+hbase.hstore.compaction.compaction.policy=org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy
+## Parameters for Date Tiered Compaction:
+hbase.hstore.compaction.date.tiered.max.storefile.age.millis: Files with max-timestamp smaller than this will no longer be compacted. Default at Long.MAX\_VALUE.
+hbase.hstore.compaction.date.tiered.base.window.millis: base window size in 
milliseconds. Default at 6 hours.
+hbase.hstore.compaction.date.tiered.windows.per.tier: number of windows per 
tier. Default at 4.
+hbase.hstore.compaction.date.tiered.incoming.window.min: minimal number of 
files to compact in the incoming window. Set it to expected number of files in 
the window to avoid wasteful compaction. Default at 6.
+
+# Then enable HDTCP (Heterogeneous Date Tiered Compaction) with the following example configuration:
+hbase.hstore.compaction.date.tiered.storage.policy.enable=true
+hbase.hstore.compaction.date.tiered.hot.window.age.millis=3600000
+hbase.hstore.compaction.date.tiered.hot.window.storage.policy=ALL\_SSD
+hbase.hstore.compaction.date.tiered.warm.window.age.millis=20600000
+hbase.hstore.compaction.date.tiered.warm.window.storage.policy=ONE\_SSD
+hbase.hstore.compaction.date.tiered.cold.window.storage.policy=HOT
+## It is better to enable the WAL and flushing-HFile storage policies along with HDTCP. You can tune the following settings as well:
+hbase.wal.storage.policy=ALL\_SSD
+create 
'table',{NAME=\>'f1',CONFIGURATION=\>{'hbase.hstore.block.storage.policy'=\>'ALL\_SSD'}}
+
+# Disable HDTCP as follows:
+hbase.hstore.compaction.date.tiered.storage.policy.enable=false
+
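+For reference, the HDTCP example above can be collected into hbase-site.xml (a sketch; the window ages and storage policies are the illustrative values from this note):
+
+```xml
+<!-- Enable Heterogeneous Date Tiered Compaction (illustrative values from the example above) -->
+<property>
+  <name>hbase.hstore.compaction.date.tiered.storage.policy.enable</name>
+  <value>true</value>
+</property>
+<property>
+  <name>hbase.hstore.compaction.date.tiered.hot.window.age.millis</name>
+  <value>3600000</value>
+</property>
+<property>
+  <name>hbase.hstore.compaction.date.tiered.hot.window.storage.policy</name>
+  <value>ALL_SSD</value>
+</property>
+<property>
+  <name>hbase.hstore.compaction.date.tiered.warm.window.age.millis</name>
+  <value>20600000</value>
+</property>
+<property>
+  <name>hbase.hstore.compaction.date.tiered.warm.window.storage.policy</name>
+  <value>ONE_SSD</value>
+</property>
+<property>
+  <name>hbase.hstore.compaction.date.tiered.cold.window.storage.policy</name>
+  <value>HOT</value>
+</property>
+```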
+
+---
+
+* [HBASE-24648](https://issues.apache.org/jira/browse/HBASE-24648) | *Major* | 
**Remove the legacy 'forceSplit' related code at region server side**
+
+Add a canSplit method to RegionSplitPolicy to determine whether we can split a region. Usually this is not related to the RegionSplitPolicy, so the default implementation tests whether the region is available and has no reference files; DisabledRegionSplitPolicy, however, always returns false.
+
+
+---
+
+* [HBASE-24382](https://issues.apache.org/jira/browse/HBASE-24382) | *Major* | 
**Flush partial stores of region filtered by seqId when archive wal due to too 
many wals**
+
+Change the flush level from region to store when there are too many WALs. This reduces unnecessary flush tasks and small hfiles.
+
+
+---
+
+* [HBASE-24038](https://issues.apache.org/jira/browse/HBASE-24038) | *Major* | 
**Add a metric to show the locality of ssd in table.jsp**
+
+Add a metric to show the locality of SSD in table.jsp, and move the locality related metrics to a new tab named 'localities'.
+
+
+---
+
+* [HBASE-8458](https://issues.apache.org/jira/browse/HBASE-8458) | *Major* | 
**Support for batch version of checkAndMutate()**
+
+HBASE-8458 introduced a CheckAndMutate class that's used to perform CheckAndMutate operations. Use the builder class to instantiate a CheckAndMutate object; the builder provides a fluent-style API, so the code looks like:
+\`\`\`
+// A CheckAndMutate operation that performs the specified action if the column (specified by the
+// family and the qualifier) of the row equals the specified value
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifEquals(family, qualifier, value)
+  .build(put);
+
+// A CheckAndMutate operation that performs the specified action if the column (specified by the
+// family and the qualifier) of the row doesn't exist
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifNotExists(family, qualifier)
+  .build(put);
+
+// A CheckAndMutate operation that performs the specified action if the row matches the filter
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifMatches(filter)
+  .build(delete);
+\`\`\`
+
+This also added new checkAndMutate APIs to the Table and AsyncTable interfaces and deprecated the old checkAndMutate APIs. Example code for the new APIs follows:
+\`\`\`
+Table table = ...;
+
+CheckAndMutate checkAndMutate = ...;
+
+// Perform the checkAndMutate operation
+CheckAndMutateResult result = table.checkAndMutate(checkAndMutate);
+
+CheckAndMutate checkAndMutate1 = ...;
+CheckAndMutate checkAndMutate2 = ...;
+
+// Batch version
+List\<CheckAndMutateResult\> results =
+  table.checkAndMutate(Arrays.asList(checkAndMutate1, checkAndMutate2));
+\`\`\`
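+
+For AsyncTable, a corresponding usage sketch, following the async API declared above (placeholders as in the examples above):
+\`\`\`
+AsyncTable\<?\> asyncTable = ...;
+
+CheckAndMutate checkAndMutate = ...;
+
+// Perform the checkAndMutate operation asynchronously
+CompletableFuture\<CheckAndMutateResult\> future = asyncTable.checkAndMutate(checkAndMutate);
+
+CheckAndMutate checkAndMutate1 = ...;
+CheckAndMutate checkAndMutate2 = ...;
+
+// Batch version
+List\<CompletableFuture\<CheckAndMutateResult\>\> futures =
+  asyncTable.checkAndMutate(Arrays.asList(checkAndMutate1, checkAndMutate2));
+\`\`\`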
+
+This also has Protocol Buffers level changes. Old clients without this patch will work against new servers with this patch. However, new clients will break against old servers without this patch for checkAndMutate with RowMutations and for mutateRow. So, for a rolling upgrade, we will need to upgrade servers first, and then roll out the new clients.
+
+
+---
+
+* [HBASE-24471](https://issues.apache.org/jira/browse/HBASE-24471) | *Major* | 
**The way we bootstrap meta table is confusing**
+
+Move all the meta initialization code in MasterFileSystem and HRegionServer to InitMetaProcedure. Add a new step for InitMetaProcedure called INIT\_META\_WRITE\_FS\_LAYOUT to hold the moved code.
+
+This is an incompatible change, but should not have much impact. InitMetaProcedure is only executed once, when bootstrapping a fresh new cluster, so typically this will not affect rolling upgrades. And even if you hit this problem, as long as InitMetaProcedure has not finished, we can be sure that there is no user data in the cluster, so you can just clean up the cluster and try again. There will be no data loss.
+
+
+---
+
+* [HBASE-24017](https://issues.apache.org/jira/browse/HBASE-24017) | *Major* | 
**Turn down flakey rerun rate on all but hot branches**
+
+Changed master, branch-2, and branch-2.1 to twice a day.
+Left branch-2.3, branch-2.2, and branch-1 at every 4 hours.
+Changed branch-1.4 and branch-1.3 to @daily (1.3 was running every hour).
+
+
+
+# HBASE  2.3.0 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-24603](https://issues.apache.org/jira/browse/HBASE-24603) | 
*Critical* | **Zookeeper sync() call is async**
+
+<!-- markdown -->
+
+Fixes a couple of bugs in ZooKeeper interaction. First, the zk sync() call that is used to sync lagging followers with the leader, so that the client sees a consistent snapshot state, was actually asynchronous under the hood. We make it synchronous for correctness. Second, zookeeper events are now processed in a separate thread rather than in the thread context of the zookeeper client connection. This decoupling frees up the client connection quickly and avoids deadlocks.
+
+
+---
+
+* [HBASE-24631](https://issues.apache.org/jira/browse/HBASE-24631) | *Major* | 
**Loosen Dockerfile pinned package versions of the "debian-revision"**
+
+<!-- markdown -->
+Update our package version numbers throughout the Dockerfiles to be pinned to their epoch:upstream-version components only. Previously we'd specify the full debian package version number, including the debian-revision. This led to instability as debian packaging details changed.
+See also [man 
deb-version](http://manpages.ubuntu.com/manpages/xenial/en/man5/deb-version.5.html)
+
+
+---
+
+* [HBASE-24205](https://issues.apache.org/jira/browse/HBASE-24205) | *Major* | 
**Create metric to know the number of reads that happens from memstore**
+
+Adds new metrics that count read requests (tracked per row) according to whether the row was fetched completely from the memstore or was pulled from files and the memstore.
+The metrics are collected under the mbean for tables and under the mbean for regions.
+Under the table mbean, i.e.
+"name": "Hadoop:service=HBase,name=RegionServer,sub=Tables"
+the new metrics are listed as
+{code}
+    
"Namespace\_default\_table\_t3\_columnfamily\_f1\_metric\_memstoreOnlyRowReadsCount":
 5,
+ 
"Namespace\_default\_table\_t3\_columnfamily\_f1\_metric\_mixedRowReadsCount": 
1,
+{code}
+where the format is
+{code}
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_columnfamily\_\<columnfamilyname\>\_metric\_memstoreOnlyRowReadsCount
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_columnfamily\_\<columnfamilyname\>\_metric\_mixedRowReadsCount
+{code}
+
+The same metrics under the region mbean, i.e.
+"name": "Hadoop:service=HBase,name=RegionServer,sub=Regions",
+come as
+{code}
+   
"Namespace\_default\_table\_t3\_region\_75a7846f4ac4a2805071a855f7d0dbdc\_store\_f1\_metric\_memstoreOnlyRowReadsCount":
 5,
+    
"Namespace\_default\_table\_t3\_region\_75a7846f4ac4a2805071a855f7d0dbdc\_store\_f1\_metric\_mixedRowReadsCount":
 1,
+{code}
+where the format is
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_region\_\<regionName\>\_store\_\<storeName\>\_metric\_memstoreOnlyRowReadsCount
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_region\_\<regionName\>\_store\_\<storeName\>\_metric\_mixedRowReadsCount
+This is also aggregated per store: the number of reads that happened purely from the memstore, versus mixed reads that happened from the memstore and files.
+
+
+---
+
+* [HBASE-21773](https://issues.apache.org/jira/browse/HBASE-21773) | 
*Critical* | **rowcounter utility should respond to pleas for help**
+
+This adds [-h\|-help] options to rowcounter. Passing either -h or -help will print the rowcounter usage guide as below:
+
+$hbase rowcounter -h
+
+usage: hbase rowcounter \<tablename\> [options] [\<column1\> \<column2\>...]
+Options:
+    --starttime=\<arg\>       starting time filter to start counting rows from.
+    --endtime=\<arg\>         end time filter limit, to only count rows up to 
this timestamp.
+    --range=\<arg\>           [startKey],[endKey][;[startKey],[endKey]...]]
+    --expectedCount=\<arg\>   expected number of rows to be count.
+For performance, consider the following configuration properties:
+-Dhbase.client.scanner.caching=100
+-Dmapreduce.map.speculative=false
+
+
+---
+
+* [HBASE-24217](https://issues.apache.org/jira/browse/HBASE-24217) | *Major* | 
**Add hadoop 3.2.x support**
+
+CI coverage has been extended to include Hadoop 3.2.x for HBase 2.2+.
+
+
+---
+
+* [HBASE-23055](https://issues.apache.org/jira/browse/HBASE-23055) | *Major* | 
**Alter hbase:meta**
+
+Adds being able to edit hbase:meta table schema. For example,
+
+hbase(main):006:0\> alter 'hbase:meta', {NAME =\> 'info', 
DATA\_BLOCK\_ENCODING =\> 'ROW\_INDEX\_V1'}
+Updating all regions with the new schema...
+All regions updated.
+Done.
+Took 1.2138 seconds
+
+You can even add column families. However, you cannot delete any of the core hbase:meta column families such as 'info' and 'table'.
+
+
+---
+
+* [HBASE-15161](https://issues.apache.org/jira/browse/HBASE-15161) | *Major* | 
**Umbrella: Miscellaneous improvements from production usage**
+
+This ticket summarizes significant improvements and expansion to the metrics 
surface area. Interested users should review the individual sub-tasks.
+
+
+---
+
+* [HBASE-24545](https://issues.apache.org/jira/browse/HBASE-24545) | *Major* | 
**Add backoff to SCP check on WAL split completion**
+
+Adds backoff to the ServerCrashProcedure wait on WAL split completion when there is a large backlog of files to split. (It's possible to avoid the SCP blocking on WAL splitting altogether by using procedure-based splitting: set 'hbase.split.wal.zk.coordinated' to false to enable procedure-based WAL splitting.)
+
+
+---
+
+* [HBASE-24524](https://issues.apache.org/jira/browse/HBASE-24524) | *Minor* | 
**SyncTable logging improvements**
+
+Notice this has changed the log level for mismatching row keys: originally those were logged at INFO level, now they are logged at DEBUG level. This is consistent with the logging of mismatching cells. Also, for missing row keys, it now logs row key values in human readable format, making it more meaningful for operators troubleshooting mismatches.
+
+
+---
+
+* [HBASE-24359](https://issues.apache.org/jira/browse/HBASE-24359) | *Major* | 
**Optionally ignore edits for deleted CFs for replication.**
+
+Introduces a new config, hbase.replication.drop.on.deleted.columnfamily, defaulting to false. When set to true, replication will drop edits for a column family that has been deleted from both the replication source and target.
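+
+A minimal hbase-site.xml sketch enabling this behavior (using the key introduced above):
+
+```xml
+<property>
+  <name>hbase.replication.drop.on.deleted.columnfamily</name>
+  <value>true</value>
+</property>
+```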
+
+
+---
+
+* [HBASE-24418](https://issues.apache.org/jira/browse/HBASE-24418) | *Major* | 
**Consolidate Normalizer implementations**
+
+<!-- markdown -->
+This change extends the Normalizer with a handful of new configurations. The 
configuration points supported are:
+* `hbase.normalizer.split.enabled` Whether to split a region as part of 
normalization. Default: `true`.
+* `hbase.normalizer.merge.enabled` Whether to merge a region as part of 
normalization. Default `true`.
+* `hbase.normalizer.min.region.count` The minimum number of regions in a table 
to consider it for merge normalization. Default: 3.
+* `hbase.normalizer.merge.min_region_age.days` The minimum age for a region to 
be considered for a merge, in days. Default: 3.
+* `hbase.normalizer.merge.min_region_size.mb` The minimum size for a region to 
be considered for a merge, in whole MBs. Default: 1.
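+
+As a sketch, a merge-only normalizer setup in hbase-site.xml might look like the following (the numeric values are the defaults listed above; splitting is disabled purely for illustration):
+
+```xml
+<property>
+  <name>hbase.normalizer.split.enabled</name>
+  <value>false</value>
+</property>
+<property>
+  <name>hbase.normalizer.merge.enabled</name>
+  <value>true</value>
+</property>
+<property>
+  <name>hbase.normalizer.min.region.count</name>
+  <value>3</value>
+</property>
+<property>
+  <name>hbase.normalizer.merge.min_region_age.days</name>
+  <value>3</value>
+</property>
+<property>
+  <name>hbase.normalizer.merge.min_region_size.mb</name>
+  <value>1</value>
+</property>
+```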
+
+
+---
+
+* [HBASE-24309](https://issues.apache.org/jira/browse/HBASE-24309) | *Major* | 
**Avoid introducing log4j and slf4j-log4j dependencies for modules other than 
hbase-assembly**
+
+Add a hbase-logging module, put the log4j related code in this module only so 
other modules do not need to depend on log4j at compile scope. See the comments 
of Log4jUtils and InternalLog4jUtils for more details.
+
+Add a log4j.properties to the test jar of hbase-logging module, so for other 
sub modules we just need to depend on the test jar of hbase-logging module at 
test scope to output the log to console, without placing a log4j.properties in 
the test resources as they all (almost) have the same content. And this test 
module will not be included in the assembly tarball so it will not mess up the 
binary distribution.
+
+Ban the direct commons-logging dependency, and ban commons-logging and log4j imports in non-test code, to avoid messing up downstream users' logging frameworks. In the hbase-logging module we do need to use log4j classes, and the trick is to use fully qualified class names.
+
+Add jcl-over-slf4j and jul-to-slf4j dependencies, as some of our dependencies 
use jcl or jul as logging framework, we should also redirect their log message 
to slf4j.
+
+
+---
+
+* [HBASE-21406](https://issues.apache.org/jira/browse/HBASE-21406) | *Minor* | 
**"status 'replication'" should not show SINK if the cluster does not act as 
sink**
+
+Added new metric to differentiate sink startup time from last OP applied time.
+
+The original behaviour was to always set the startup time as TimestampsOfLastAppliedOp, and to always show it in the "status 'replication'" command, regardless of whether the sink ever applied any OP.
+
+This was confusing, especially for scenarios where the cluster was just acting as a source; the output could lead to wrong interpretations about the sink not applying edits or replication being stuck.
+
+With the new metric, we now compare the two metric values; if both are the same, there has never been any OP shipped to the given sink, so the output reflects this more clearly, for example:
+
+SINK: TimeStampStarted=Thu Dec 06 23:59:47 GMT 2018, Waiting for OPs...
+
+
+---
+
+* [HBASE-24132](https://issues.apache.org/jira/browse/HBASE-24132) | *Major* | 
**Upgrade to Apache ZooKeeper 3.5.7**
+
+<!-- markdown -->
+HBase now ships ZooKeeper 3.5.x; previously it shipped the EOL'd 3.4.x. The 3.5.x client can talk to a 3.4.x ensemble.
+
+The ZooKeeper project has built a 
[FAQ](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Upgrade+FAQ) that 
documents known issues and work-arounds when upgrading existing deployments.
+
+
+---
+
+* [HBASE-22287](https://issues.apache.org/jira/browse/HBASE-22287) | *Major* | 
**inifinite retries on failed server in RSProcedureDispatcher**
+
+Add backoff. Avoid retrying every 100ms.
+
+
+---
+
+* [HBASE-24425](https://issues.apache.org/jira/browse/HBASE-24425) | *Major* | 
**Run hbck\_chore\_run and catalogjanitor\_run on draw of 'HBCK Report' page**
+
+Runs 'catalogjanitor\_run' and 'hbck\_chore\_run' inline with the loading of 
the 'HBCK Report' page.
+
+Pass '?cache=true' to skip inline invocation of 'catalogjanitor\_run' and 
'hbck\_chore\_run' drawing the page.
+
+
+---
+
+* [HBASE-24408](https://issues.apache.org/jira/browse/HBASE-24408) | *Blocker* 
| **Introduce a general 'local region' to store data on master**
+
+Introduced a general 'local region' at master side to store the procedure 
data, etc.
+
+The hfile of this region will be stored on the root fs while the wal will be stored on the wal fs. This issue supersedes part of the code for HBASE-23326, as now we store the data in the 'MasterData' directory instead of 'MasterProcs'.
+
+The old hfiles will be moved to the global hfile archived directory with the 
suffix $-masterlocalhfile-$. The wal files will be moved to the global old wal 
directory with the suffix $masterlocalwal$. The 
TimeToLiveMasterLocalStoreHFileCleaner and TimeToLiveMasterLocalStoreWALCleaner 
are configured by default for cleaning the old hfiles and wal files, and the 
default TTLs are both 7 days.
+
+
+---
+
+* [HBASE-24115](https://issues.apache.org/jira/browse/HBASE-24115) | *Major* | 
**Relocate test-only REST "client" from src/ to test/ and mark Private**
+
+Relocate test-only REST RemoteHTable and RemoteAdmin from src/ to test/. And 
mark them as InterfaceAudience.Private.
+
+
+---
+
+* [HBASE-23938](https://issues.apache.org/jira/browse/HBASE-23938) | *Major* | 
**Replicate slow/large RPC calls to HDFS**
+
+Config key: hbase.regionserver.slowlog.systable.enabled
+Default value: false
+
+This config can be enabled if hbase.regionserver.slowlog.buffer.enabled is 
already enabled. While hbase.regionserver.slowlog.buffer.enabled ensures that 
any slow/large RPC logs with complete details are written to ring buffer 
available at each RegionServer, hbase.regionserver.slowlog.systable.enabled 
would ensure that all such logs are also persisted in new system table 
hbase:slowlog. 
+Operators can scan hbase:slowlog with filters to retrieve records matching specific attributes; the table is useful for capturing and analyzing the historical performance of slow RPC calls in detail.
+
+hbase:slowlog consists of a single ColumnFamily: info. info consists of multiple qualifiers, similar to the attributes available to query as part of the Admin API get\_slowlog\_responses.
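+
+A minimal hbase-site.xml sketch enabling both the ring buffer and the system-table persistence described above:
+
+```xml
+<!-- The ring buffer must be enabled for slowlog persistence to hbase:slowlog to work -->
+<property>
+  <name>hbase.regionserver.slowlog.buffer.enabled</name>
+  <value>true</value>
+</property>
+<property>
+  <name>hbase.regionserver.slowlog.systable.enabled</name>
+  <value>true</value>
+</property>
+```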
+
+One example of a row from hbase:slowlog scan result (Attached a sample 
screenshot in the Jira) :
+
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:call\_details, timestamp=2020-05-16T14:59:58.764Z, 
value=Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)
                             
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:client\_address, timestamp=2020-05-16T14:59:58.764Z, 
value=172.20.10.2:57348                                                         
                                 
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:method\_name, timestamp=2020-05-16T14:59:58.764Z, value=Scan        
                                                                                
                  
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:param, timestamp=2020-05-16T14:59:58.764Z, value=region { type: 
REGION\_NAME value: 
"cluster\_test,cccccccc,1589635796466.aa45e1571d533f5ed0bb31cdccaaf9cf." } scan 
{ a
+                                                             ttribute { name: 
"\_isolationlevel\_" value: "\\x5C000" } start\_row: "cccccccc" time\_range { 
from: 0 to: 9223372036854775807 } max\_versions: 1 cache\_blocks: true 
max\_result\_size: 2
+                                                             097152 caching: 
2147483647 include\_stop\_row: false } number\_of\_rows: 2147483647 
close\_scanner: false client\_handles\_partials: true 
client\_handles\_heartbeats: true track\_scan\_met
+                                                             rics: false       
                                                                                
                                                                               
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:processing\_time, timestamp=2020-05-16T14:59:58.764Z, value=24      
                                                                                
                  
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:queue\_time, timestamp=2020-05-16T14:59:58.764Z, value=0            
                                                                                
                  
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:region\_name, timestamp=2020-05-16T14:59:58.764Z, 
value=cluster\_test,cccccccc,1589635796466.aa45e1571d533f5ed0bb31cdccaaf9cf.    
                                     
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:response\_size, timestamp=2020-05-16T14:59:58.764Z, value=211227    
                                                                                
                  
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:server\_class, timestamp=2020-05-16T14:59:58.764Z, 
value=HRegionServer                                                             
                                   
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:start\_time, timestamp=2020-05-16T14:59:58.764Z, 
value=1589640743932                                                             
                                     
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:type, timestamp=2020-05-16T14:59:58.764Z, value=ALL                 
                                                                                
                 
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  
column=info:username, timestamp=2020-05-16T14:59:58.764Z, value=vjasani
+
+
+---
+
+* [HBASE-24271](https://issues.apache.org/jira/browse/HBASE-24271) | *Major* | 
**Set values in \`conf/hbase-site.xml\` that enable running on 
\`LocalFileSystem\` out of the box**
+
+<!-- markdown -->
+HBASE-24271 changes the default `conf/hbase-site.xml` such that `bin/hbase` will run directly out of the binary tarball or a compiled source tree against Hadoop 2.8+ without any configuration modifications. This changes our long-standing history of shipping no configured values in `conf/hbase-site.xml`, so existing processes that assume this file is empty of configuration properties may require attention.
+
+
+---
+
+* [HBASE-24310](https://issues.apache.org/jira/browse/HBASE-24310) | *Major* | 
**Use Slf4jRequestLog for hbase-http**
+
+Use Slf4jRequestLog instead of the log4j HttpRequestLogAppender in HttpServer.
+
+The request log is disabled by default in conf/log4j.properties by the 
following lines:
+
+# Disable request log by default, you can enable this by changing the appender
+log4j.category.http.requests=INFO,NullAppender
+log4j.additivity.http.requests=false
+
+Change the 'NullAppender' to whatever appender you want in order to enable the request log.
+
+Note that the logger name for the master status http server is 'http.requests.master', and for the region server it is 'http.requests.regionserver'.
+
+
+---
+
+* [HBASE-24335](https://issues.apache.org/jira/browse/HBASE-24335) | *Major* | 
**Support deleteall with ts but without column in shell mode**
+
+Use an empty string to represent that no column is specified for deleteall in shell mode.
+usage:
+deleteall 'test','r1','',12345
+deleteall 'test', {ROWPREFIXFILTER =\> 'prefix'}, '', 12345
+
+
+---
+
+* [HBASE-24304](https://issues.apache.org/jira/browse/HBASE-24304) | *Major* | 
**Separate a hbase-asyncfs module**
+
+Added a new hbase-asyncfs module to hold the asynchronous dfs output stream 
implementation for implementing WAL.
+
+
+---
+
+* [HBASE-22710](https://issues.apache.org/jira/browse/HBASE-22710) | *Major* | 
**Wrong result in one case of scan that use  raw and versions and filter 
together**
+
+Make the logic of version selection more reasonable for raw scans, to avoid losing results when using a filter.
+
+
+---
+
+* [HBASE-24285](https://issues.apache.org/jira/browse/HBASE-24285) | *Major* | 
**Move to hbase-thirdparty-3.3.0**
+
+Moved to hbase-thirdparty 3.3.0.
+
+
+---
+
+* [HBASE-24252](https://issues.apache.org/jira/browse/HBASE-24252) | *Major* | 
**Implement proxyuser/doAs mechanism for hbase-http**
+
+This feature enables the HBase Web UI's to accept a 'proxyuser' via the HTTP 
Request's query string. When the parameter 
\`hbase.security.authentication.spnego.kerberos.proxyuser.enable\` is set to 
\`true\` in hbase-site.xml (default is \`false\`), the HBase UI will attempt to 
impersonate the user specified by the query parameter "doAs". This query 
parameter is checked case-insensitively. When this option is not provided, the 
user who executed the request is the "real" user and there is no ability to 
execute impersonation against the WebUI.
+
+For example, if the user "bob" with Kerberos credentials executes a request 
against the WebUI with this feature enabled and a query string which includes 
\`doAs=alice\`, the HBase UI will treat this request as executed as \`alice\`, 
not \`bob\`.
+
+The standard Hadoop proxyuser configuration properties to limit users who may 
impersonate others apply to this change (e.g. to enable \`bob\` to impersonate 
\`alice\`). See the Hadoop documentation for more information on how to 
configure these proxyuser rules.
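+
+A sketch of the relevant hbase-site.xml entries, assuming the standard Hadoop proxyuser rule properties and a hypothetical proxying user 'bob' (tighten the hosts/groups wildcards for production):
+
+```xml
+<property>
+  <name>hbase.security.authentication.spnego.kerberos.proxyuser.enable</name>
+  <value>true</value>
+</property>
+<!-- Standard Hadoop proxyuser rules: who 'bob' may impersonate, and from where -->
+<property>
+  <name>hadoop.proxyuser.bob.groups</name>
+  <value>*</value>
+</property>
+<property>
+  <name>hadoop.proxyuser.bob.hosts</name>
+  <value>*</value>
+</property>
+```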
+
+
+---
+
+* [HBASE-24143](https://issues.apache.org/jira/browse/HBASE-24143) | *Major* | 
**[JDK11] Switch default garbage collector from CMS**
+
+<!-- markdown -->
+`bin/hbase` will now dynamically select a Garbage Collector implementation 
based on the detected JVM version. JDKs 8,9,10 use `-XX:+UseConcMarkSweepGC`, 
while JDK11+ use `-XX:+UseG1GC`.
+
+Notice a slight compatibility change. Previously, the garbage collector choice 
would always be appended to a user-provided value for `HBASE_OPTS`. As of this 
change, this setting will only be applied when `HBASE_OPTS` is unset. That 
means that operators who provide a value for this variable will now need to 
also specify the collector. This is especially important for those on JDK8, 
where the vm default GC is not the recommended ConcMarkSweep.
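+
+For example, an operator on JDK8 who provides HBASE\_OPTS in conf/hbase-env.sh might now include the collector explicitly (a sketch; heap sizes and other flags are deployment-specific):
+
+```sh
+# conf/hbase-env.sh (sketch): HBASE_OPTS is no longer augmented with a GC flag when set,
+# so pick the collector yourself
+export HBASE_OPTS="-XX:+UseConcMarkSweepGC ${HBASE_OPTS}"   # JDKs 8-10
+# export HBASE_OPTS="-XX:+UseG1GC ${HBASE_OPTS}"            # JDK 11+
+```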
+
+
+---
+
+* [HBASE-24024](https://issues.apache.org/jira/browse/HBASE-24024) | *Major* | 
**Optionally reject multi() requests with very high no of rows**
+
+New Config: hbase.rpc.rows.size.threshold.reject
+-----------------------------------------------------------------------
+
+Default value: false
+Description:
+If the value is true, RegionServer will abort batch requests of Put/Delete when the number of rows in the batch operation exceeds the threshold defined by the value of the config hbase.rpc.rows.warning.threshold.
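+
+A hbase-site.xml sketch combining the new reject switch with the existing warning threshold (the 5000 row count here is illustrative):
+
+```xml
+<property>
+  <name>hbase.rpc.rows.size.threshold.reject</name>
+  <value>true</value>
+</property>
+<!-- The row-count threshold itself comes from the pre-existing config -->
+<property>
+  <name>hbase.rpc.rows.warning.threshold</name>
+  <value>5000</value>
+</property>
+```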
+
+
+---
+
+* [HBASE-24139](https://issues.apache.org/jira/browse/HBASE-24139) | 
*Critical* | **Balancer should avoid leaving idle region servers**
+
+StochasticLoadBalancer functional improvement:
+
+StochasticLoadBalancer will now rebalance the cluster if there are any idle RegionServers (RegionServers having no region) while other RegionServers have at least 1 region.
+
+
+---
+
+* [HBASE-24196](https://issues.apache.org/jira/browse/HBASE-24196) | *Major* | 
**[Shell] Add rename rsgroup command in hbase shell**
+
+Users or admins can now use
+hbase shell \> rename\_rsgroup 'oldname', 'newname'
+to rename an rsgroup.
+
+
+---
+
+* [HBASE-24218](https://issues.apache.org/jira/browse/HBASE-24218) | *Major* | 
**Add hadoop 3.2.x in hadoop check**
+
+Add hadoop-3.2.0 and hadoop-3.2.1 to the hadoop check; with '--quick-hadoopcheck' we will only check hadoop-3.2.1.
+
+Notice that, for aligning the personality scripts across all the active 
branches, we will commit the patch to all active branches, but the hadoop-3.2.x 
support in hadoopcheck is only applied to branch-2.2+.
+
+
+---
+
+* [HBASE-23829](https://issues.apache.org/jira/browse/HBASE-23829) | *Major* | 
**Get \`-PrunSmallTests\` passing on JDK11**
+
+\`-PrunSmallTests\` now pass on JDK11 when using \`-Phadoop.profile=3.0\`.
+
+
+---
+
+* [HBASE-24185](https://issues.apache.org/jira/browse/HBASE-24185) | *Major* | 
**Junit tests do not behave well with System.exit or Runtime.halt or JVM exits 
in general.**
+
+Tests that fail because a process -- RegionServer or Master -- called 
System.exit, will now instead throw an exception.
+
+
+---
+
+* [HBASE-24072](https://issues.apache.org/jira/browse/HBASE-24072) | *Major* | 
**Nightlies reporting OutOfMemoryError: unable to create new native thread**
+
+Hadoop hosts have had their ulimit -u raised from 10000 to 30000 (per user, by 
INFRA). The Docker build container has had its limit raised from 10000 to 12500.
+
+
+---
+
+* [HBASE-24112](https://issues.apache.org/jira/browse/HBASE-24112) | *Major* | 
**[RSGroup] Support renaming rsgroup**
+
+Support RSGroup renaming in core codebase. New API Admin#renameRSGroup(String, 
String) is introduced in 3.0.0.
+
+
+---
+
+* [HBASE-23994](https://issues.apache.org/jira/browse/HBASE-23994) | *Trivial* 
| ** Add WebUI to Canary**
+
+<!-- markdown -->
+The Canary tool now offers a WebUI when run in `region` mode (the default 
mode). It is enabled by default, and by default, it binds to `0.0.0.0:16050`. 
This can be overridden by setting `hbase.canary.info.bindAddress` and 
`hbase.canary.info.port`. To disable entirely, set the port to `-1`.
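+
+An hbase-site.xml sketch overriding the Canary WebUI bind address and port (the values shown are illustrative; set the port to `-1` to disable the UI):
+
+```xml
+<property>
+  <name>hbase.canary.info.bindAddress</name>
+  <value>127.0.0.1</value>
+</property>
+<property>
+  <name>hbase.canary.info.port</name>
+  <value>16055</value>
+</property>
+```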
+
+
+---
+
+* [HBASE-23779](https://issues.apache.org/jira/browse/HBASE-23779) | *Major* | 
**Up the default fork count to make builds complete faster; make count relative 
to CPU count**
+
+Pass --threads=2 building on jenkins. It shortens nightly build times by about 25%.
+
+It works by running module build/test in parallel when dependencies allow. Upping the forkcount beyond the pom default of 0.25C would have us broach our CPU budget on jenkins when two modules are running in parallel (2 modules at 0.25C each makes 0.5C, and on jenkins, hadoop nodes run two jenkins executors per host). Higher forkcounts also seem to threaten build stability.
+
+For running tests locally, to go faster, up the fork count:
+
+$ x="0.5C"  ;  mvn --threads=2  -Dsurefire.firstPartForkCount=$x -Dsurefire.secondPartForkCount=$x test -PrunAllTests
+
+You could up x from 0.5C to 1.0C, but YMMV (on overcommitted hardware, tests start bombing out soon after startup). You could also try upping the thread count, but you are then likely to overcommit the hardware.
+
+
+---
+
+* [HBASE-24126](https://issues.apache.org/jira/browse/HBASE-24126) | *Major* | 
**Up the container nproc uplimit from 10000 to 12500**
+
+Start docker with an upped nproc ulimit by passing '--ulimit nproc=12500'. The default was 10000; it is now 12500. Then set PROC\_LIMIT in hbase-personality so that when yetus runs, it uses the new 12500 value.
+
+
+---
+
+* [HBASE-24150](https://issues.apache.org/jira/browse/HBASE-24150) | *Major* | 
**Allow module tests run in parallel**
+
+Pass -T2 to mvn. This builds two modules at a time, dependencies willing. It helps speed up builds and testing, but doubles the resource usage when modules run in parallel.
+
+
+---
+
+* [HBASE-24121](https://issues.apache.org/jira/browse/HBASE-24121) | *Major* | 
**[Authorization] ServiceAuthorizationManager isn't dynamically updatable. And 
it should be.**
+
+Master & RegionServer now support refreshing the policy authorization defined in hbase-policy.xml without restarting the service. To refresh the policy, execute the hbase shell command update\_config or update\_config\_all after the policy file has been updated and synced on all nodes.
+
+
+---
+
+* [HBASE-24099](https://issues.apache.org/jira/browse/HBASE-24099) | *Major* | 
**Use a fair ReentrantReadWriteLock for the region close lock**
+
+This change modifies the default acquisition policy for the region's close 
lock in order to prevent observed starvation of close requests. The new boolean 
configuration parameter 'hbase.regionserver.fair.region.close.lock' controls 
the lock acquisition policy: if true, the lock is created in fair mode 
(default); if false, the lock is created in nonfair mode (the old default).
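+
+For example, a sketch of restoring the old nonfair behavior in hbase-site.xml:
+
+```
+<property>
+  <name>hbase.regionserver.fair.region.close.lock</name>
+  <value>false</value>
+</property>
+```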
+
+
+---
+
+* [HBASE-23153](https://issues.apache.org/jira/browse/HBASE-23153) | *Major* | 
**PrimaryRegionCountSkewCostFunction SLB function should implement 
CostFunction#isNeeded**
+
+<!-- markdown -->
+The `PrimaryRegionCountSkewCostFunction` for the `StochasticLoadBalancer` is 
only needed when the read replicas feature is enabled. With this change, that 
function now properly indicates that it is not needed when the read replica 
feature is off.
+
+If this improvement is not available, operators with clusters that are not 
using the read replica feature should manually disable it by setting 
`hbase.master.balancer.stochastic.primaryRegionCountCost` to `0.0` in 
hbase-site.xml for all HBase Masters.
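+
+A sketch of that manual mitigation in hbase-site.xml:
+
+```
+<property>
+  <name>hbase.master.balancer.stochastic.primaryRegionCountCost</name>
+  <value>0.0</value>
+</property>
+```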
+
+
+---
+
+* [HBASE-24055](https://issues.apache.org/jira/browse/HBASE-24055) | *Major* | 
**Make AsyncFSWAL can run on EC cluster**
+
+AsyncFSWAL can now be used against a directory which has erasure coding (EC) enabled. Make sure you also use the hadoop 3.x client, as the required option is only available in hadoop 3.x.
+
+
+---
+
+* [HBASE-24113](https://issues.apache.org/jira/browse/HBASE-24113) | *Major* | 
**Upgrade the maven we use from 3.5.4 to 3.6.3 in nightlies**
+
+Branches 2.3+ build with maven 3.6.3. Older branches still use maven 3.5.4.
+
+
+---
+
+* [HBASE-24122](https://issues.apache.org/jira/browse/HBASE-24122) | *Major* | 
**Change machine ulimit-l to ulimit-a so dumps full ulimit rather than just 
'max locked memory'**
+
+Our 'Build Artifacts' have a machine directory under which we emit vitals on the host the build was run on. We used to emit the result of 'ulimit -l' into a file named 'ulimit-l'. That file now instead contains the result of running 'ulimit -a', which includes the 'ulimit -l' stat.
+
+
+---
+
+* [HBASE-23678](https://issues.apache.org/jira/browse/HBASE-23678) | *Major* | 
**Literate builder API for version management in schema**
+
+ColumnFamilyDescriptor new builder API:
+
+    /\*\*
+     \* Retain all versions for a given TTL (retentionInterval), and then only a specific number
+     \* of versions (versionAfterInterval) after that interval elapses.
+     \*
+     \* @param retentionInterval Retain all versions for this interval
+     \* @param versionAfterInterval Number of versions to retain after retentionInterval elapses
+     \*/
+    public ModifyableColumnFamilyDescriptor setVersionsWithTimeToLive(
+        final int retentionInterval, final int versionAfterInterval)
+
+
+---
+
+* [HBASE-24050](https://issues.apache.org/jira/browse/HBASE-24050) | *Major* | 
**Deprecated PBType on all 2.x branches**
+
+org.apache.hadoop.hbase.types.PBType is marked as deprecated without any replacement. It will be moved to the hbase-example module and marked as IA.Private in 3.0.0, as it was a mistake to make it part of our public API. Users who depend on this class should copy the code into their own code base.
+
+
+---
+
+* [HBASE-8868](https://issues.apache.org/jira/browse/HBASE-8868) | *Minor* | 
**add metric to report client shortcircuit reads**
+
+Expose file system level read metrics for RegionServer.
+
+If the HBase RS runs on top of HDFS, calculate the aggregation of
+ReadStatistics of each HdfsFileInputStream. These metrics include:
+(1) total number of bytes read from HDFS.
+(2) total number of bytes read from local DataNode.
+(3) total number of bytes read locally through short-circuit read.
+(4) total number of bytes read locally through zero-copy read.
+
+Because HDFS ReadStatistics is calculated per input stream, it is not
+feasible to update the aggregated number in real time. Instead, the
+metrics are updated when an input stream is closed.
+
+
+---
+
+* [HBASE-24032](https://issues.apache.org/jira/browse/HBASE-24032) | *Major* | 
**[RSGroup] Assign created tables to respective rsgroup automatically instead 
of manual operations**
+
+Admins can determine which tables go to which rsgroup via a script on the Master side (set hbase.rsgroup.table.mapping.script to a local filesystem path), which aims to lighten the burden of admin operations. Note that since HBase 3+, the rsgroup can be specified in TableDescriptor as well; if clients specify this, the master will skip the determination from the script.
+
+Here is a simple example of script:
+{code}
+#!/bin/bash
+# Input consists of two strings: the 1st is the namespace of the table, the 2nd is the table name
+namespace=$1
+tablename=$2
+if [[ $namespace == test ]]; then
+  echo test
+elif [[ $tablename == \*foo\* ]]; then
+  echo other
+else
+  echo default
+fi
+{code}
+
+
+---
+
+* [HBASE-23993](https://issues.apache.org/jira/browse/HBASE-23993) | *Major* | 
**Use loopback for zk standalone server in minizkcluster**
+
+MiniZKCluster now puts up its standalone node listening on loopback/127.0.0.1 
rather than "localhost".
+
+
+---
+
+* [HBASE-23986](https://issues.apache.org/jira/browse/HBASE-23986) | *Major* | 
**Bump hadoop-two.version to 2.10.0 on master and branch-2**
+
+Bumped hadoop-two.version to 2.10.0, which means we drop support for hadoop-2.8.x and hadoop-2.9.x.
+
+
+---
+
+* [HBASE-23930](https://issues.apache.org/jira/browse/HBASE-23930) | *Minor* | 
**Shell should attempt to format \`timestamp\` attributes as ISO-8601**
+
+Changes the timestamp display to ISO-8601 when calling toString on a Cell and when outputting in the shell.
+
+Users used to see:
+
+  column=table:state, timestamp=1583967620343 .....
+
+... but now see:
+
+  column=table:state, timestamp=2020-03-11T23:00:20.343Z ....
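+
+The new rendering matches java.time's ISO-8601 instant format; a minimal JDK-only sketch (not the actual HBase shell code) that reproduces the example above:

```java
import java.time.Instant;

public class TimestampDemo {
    public static void main(String[] args) {
        // Epoch-millis value from the example above
        long ts = 1583967620343L;
        // Instant#toString emits ISO-8601 with a trailing 'Z' (UTC)
        System.out.println(Instant.ofEpochMilli(ts)); // 2020-03-11T23:00:20.343Z
    }
}
```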
+
+
+---
+
+* [HBASE-22827](https://issues.apache.org/jira/browse/HBASE-22827) | *Major* | 
**Expose multi-region merge in shell and Admin API**
+
+The merge\_region shell command can now be used to merge more than 2 regions. It takes a list of regions as comma separated values or as an array of regions, not just 2 regions. Both full region names and encoded region names continue to be accepted.
+
+
+---
+
+* [HBASE-23767](https://issues.apache.org/jira/browse/HBASE-23767) | *Major* | 
**Add JDK11 compilation and unit test support to Github precommit**
+
+Rebuild our Dockerfile with support for multiple JDK versions. Use multiple 
stages in the Jenkinsfile instead of yetus's multijdk because of YETUS-953. Run 
those multiple stages in parallel to speed up results.
+
+Note that multiple stages means multiple Yetus invocations means multiple 
comments on the PreCommit. This should become more obvious to users once we can 
make use of GitHub Checks API, HBASE-23902.
+
+
+---
+
+* [HBASE-22978](https://issues.apache.org/jira/browse/HBASE-22978) | *Minor* | 
**Online slow response log**
+
+get\_slowlog\_responses and clear\_slowlog\_responses are used to retrieve and clear slow RPC logs from the RingBuffer maintained by RegionServers.
+
+New Admin APIs:
+1.   List\<SlowLogRecord\> getSlowLogResponses(final Set\<ServerName\> 
serverNames,
+      final SlowLogQueryFilter slowLogQueryFilter) throws IOException;
+
+2.   List\<Boolean\> clearSlowLogResponses(final Set\<ServerName\> serverNames)
+      throws IOException;
+
+Configs:
+
+1. hbase.regionserver.slowlog.ringbuffer.size:
+The size of the ring buffer maintained by each RegionServer in order to store online slowlog responses. This is an in-memory ring buffer of requests that were judged to be too slow, in addition to the responseTooSlow logging. The in-memory representation is complete. For more details, please see the doc section "Get Slow Response Log from shell".
+
+Default: 256
+
+2. hbase.regionserver.slowlog.buffer.enabled:
+Indicates whether RegionServers have a ring buffer running for storing online slow logs in FIFO manner with limited entries. The size of the ring buffer is indicated by the config hbase.regionserver.slowlog.ringbuffer.size. The default value is false; turn this on to get the latest slowlog responses with complete data.
+
+Default: false
+
+
+For more details, please look into "Get Slow Response Log from shell" section 
from HBase book.
+
+
+---
+
+* [HBASE-23926](https://issues.apache.org/jira/browse/HBASE-23926) | *Major* | 
**[Flakey Tests] Down the flakies re-run ferocity; it makes for too many 
fails.**
+
+Downs the flakey re-run fork count from 1.0C -- i.e. a fork per CPU -- to 0.25C. On a recent run, the machine had 16 cores, so 0.25C is 4 forks. The fork count had been hardcoded at 3 prior to the changes made by the parent issue.
+
+
+---
+
+* [HBASE-23146](https://issues.apache.org/jira/browse/HBASE-23146) | *Major* | 
**Support CheckAndMutate with multiple conditions**
+
+Add a checkAndMutate(row, filter) method in the AsyncTable interface and the 
Table interface.
+
+This method atomically checks if the row matches the specified filter. If it 
does, it adds the Put/Delete/RowMutations.
+
+This is a fluent style API, the code is like:
+
+For Table interface:
+{code}
+table.checkAndMutate(row, filter).thenPut(put);
+{code}
+
+For AsyncTable interface:
+{code}
+table.checkAndMutate(row, filter).thenPut(put)
+    .thenAccept(succ -\> {
+      if (succ) {
+        System.out.println("Check and put succeeded");
+      } else {
+        System.out.println("Check and put failed");
+      }
+    });
+{code}
+
+
+---
+
+* [HBASE-23874](https://issues.apache.org/jira/browse/HBASE-23874) | *Minor* | 
**Move Jira-attached file precommit definition from script in Jenkins config to 
dev-support**
+
+The Jira Precommit job (https://builds.apache.org/job/PreCommit-HBASE-Build/) 
will now look for a file within the source tree 
(dev-support/jenkins\_precommit\_jira\_yetus.sh) instead of depending on a 
script section embedded in the job.
+
+
+---
+
+* [HBASE-23865](https://issues.apache.org/jira/browse/HBASE-23865) | *Major* | 
**Up flakey history from 5 to 10**
+
+Changed flakey list reporting to show 10 rather than 5 items. Also changed the first and second part fork counts to be 1C rather than the hardcoded 3.
+
+
+---
+
+* [HBASE-23554](https://issues.apache.org/jira/browse/HBASE-23554) | *Major* | 
**Encoded regionname to regionname utility**
+
+    Adds shell command regioninfo:
+
+      hbase(main):001:0\>  regioninfo '0e6aa5c19ae2b2627649dc7708ce27d0'
+      {ENCODED =\> 0e6aa5c19ae2b2627649dc7708ce27d0, NAME =\> 
'TestTable,,1575941375972.0e6aa5c19ae2b2627649dc7708ce27d0.', STARTKEY =\> '', 
ENDKEY =\> '00000000000000000000299441'}
+      Took 0.4737 seconds
+
+
+---
+
+* [HBASE-23350](https://issues.apache.org/jira/browse/HBASE-23350) | *Major* | 
**Make compaction files cacheonWrite configurable based on threshold**
+
+This JIRA adds a new configuration - \`hbase.rs.cachecompactedblocksonwrite.threshold\`. This configuration is the maximum total size (in bytes) of the compacted files below which the configuration \`hbase.rs.cachecompactedblocksonwrite\` is honoured. If the total size of the compacted files exceeds this threshold, the data blocks are not cached even when \`hbase.rs.cachecompactedblocksonwrite\` is enabled. Caching index and bloom blocks is not affected by this configuration (user configuration is always honoured).
+
+The default value of this configuration is Long.MAX\_VALUE, which means the compacted files will be cached whatever their total size.
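+
+For example, a sketch of capping cached compacted files at 2 GB in hbase-site.xml (the threshold value is illustrative):
+
+```
+<property>
+  <name>hbase.rs.cachecompactedblocksonwrite.threshold</name>
+  <value>2147483648</value>
+</property>
+```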
+
+
+---
+
+* [HBASE-17115](https://issues.apache.org/jira/browse/HBASE-17115) | *Major* | 
**HMaster/HRegion Info Server does not honour admin.acl**
+
+Implements authorization for the HBase Web UI by limiting access to certain 
endpoints which could be used to extract sensitive information from HBase.
+
+Access to these restricted endpoints can be limited to a group of 
administrators, identified either by a list of users 
(hbase.security.authentication.spnego.admin.users) or by a list of groups
+(hbase.security.authentication.spnego.admin.groups).  By default, neither of 
these values are set which will preserve backwards compatibility (allowing all 
authenticated users to access all endpoints).
+
+Further, users who have sensitive information in the HBase service 
configuration can set hbase.security.authentication.ui.config.protected to true 
which will treat the configuration endpoint as a protected, admin-only 
resource. By default, all authenticated users may access the configuration 
endpoint.
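+
+A sketch of locking down the restricted endpoints in hbase-site.xml (the user and group names are placeholders):
+
+```
+<property>
+  <name>hbase.security.authentication.spnego.admin.users</name>
+  <value>hbase_admin</value>
+</property>
+<property>
+  <name>hbase.security.authentication.spnego.admin.groups</name>
+  <value>ops</value>
+</property>
+<property>
+  <name>hbase.security.authentication.ui.config.protected</name>
+  <value>true</value>
+</property>
+```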
+
+
+---
+
+* [HBASE-23647](https://issues.apache.org/jira/browse/HBASE-23647) | *Major* | 
**Make MasterRegistry the default registry impl**
+
+<!-- markdown -->
+Enables master based registry as the default registry used by clients to fetch 
connection metadata.
+Refer to the section "Master Registry" in the client documentation for more 
details and advantages
+of this implementation over the default Zookeeper based registry. 
+
+Configuration parameter that controls the registry in use: 
`hbase.client.registry.impl`
+
+Where to set this: HBase client configuration (hbase-site.xml)
+
+Possible values:
+- `org.apache.hadoop.hbase.client.ZKConnectionRegistry` (For ZK based registry 
implementation)
+- `org.apache.hadoop.hbase.client.MasterRegistry` (New, for master based 
registry implementation)
+
+Notes on defaults:
+
+- For v3.0.0 and later, MasterRegistry is the default registry
+- For all releases in 2.x line, ZK based registry is the default.
+
+This feature has been back ported to 2.3.0 and later releases. MasterRegistry 
can be enabled by setting the following client configuration.
+
+```
+<property>
+  <name>hbase.client.registry.impl</name>
+  <value>org.apache.hadoop.hbase.client.MasterRegistry</value>
+</property>
+```
+
+
+---
+
+* [HBASE-23069](https://issues.apache.org/jira/browse/HBASE-23069) | 
*Critical* | **periodic dependency bump for Sep 2019**
+
+caffeine: 2.6.2 =\> 2.8.1
+commons-codec: 1.10 =\> 1.13
+commons-io: 2.5 =\> 2.6
+disruptor: 3.3.6 =\> 3.4.2
+httpcore: 4.4.6 =\> 4.4.13
+jackson: 2.9.10 =\> 2.10.1
+jackson.databind: 2.9.10.1 =\> 2.10.1
+jetty: 9.3.27.v20190418 =\> 9.3.28.v20191105
+protobuf.plugin: 0.5.0 =\> 0.6.1
+zookeeper: 3.4.10 =\> 3.4.14
+slf4j: 1.7.25 =\> 1.7.30
+rat: 0.12 =\> 0.13
+asciidoctor: 1.5.5 =\> 1.5.8
+asciidoctor.pdf: 1.5.0-alpha.15 =\> 1.5.0-rc.2
+error-prone: 2.3.3 =\> 2.3.4
+
+
+---
+
+* [HBASE-23686](https://issues.apache.org/jira/browse/HBASE-23686) | *Major* | 
**Revert binary incompatible change and remove reflection**
+
+- Reverts a binary incompatible change for ByteRangeUtils
+- Removes usage of reflection inside CommonFSUtils
+
+
+---
+
+* [HBASE-23347](https://issues.apache.org/jira/browse/HBASE-23347) | *Major* | 
**Pluggable RPC authentication**
+
+This change introduces an internal abstraction layer which allows new SASL-based authentication mechanisms to be used inside HBase services. All existing SASL-based authentication mechanisms were ported to the new abstraction, making no external change in runtime semantics, client API, or RPC serialization format.
+
+Developers familiar with extending HBase can implement authentication mechanisms beyond simple Kerberos and DelegationTokens, authenticating HBase users against some other user database. HBase service authentication (Master to/from RegionServer) continues to operate solely over Kerberos.
+
+
+---
+
+* [HBASE-23156](https://issues.apache.org/jira/browse/HBASE-23156) | *Major* | 
**start-hbase.sh failed with ClassNotFoundException when build with hadoop3**
+
+Introduces a new hbase-assembly/src/main/assembly/hadoop-three-compat.xml for building with hadoop 3.x.
+
+
+---
+
+* [HBASE-23680](https://issues.apache.org/jira/browse/HBASE-23680) | *Major* | 
**RegionProcedureStore missing cleaning of hfile archive**
+
+Add a new config to hbase-default.xml
+
+  \<property\>
+    \<name\>hbase.procedure.store.region.hfilecleaner.plugins\</name\>
+    
\<value\>org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner\</value\>
+    \<description\>A comma-separated list of BaseHFileCleanerDelegate invoked 
by
+    the RegionProcedureStore HFileCleaner service. These HFiles cleaners are
+    called in order, so put the cleaner that prunes the most files in front. To
+    implement your own BaseHFileCleanerDelegate, just put it in HBase's 
classpath
+    and add the fully qualified class name here. Always add the above
+    default hfile cleaners in the list as they will be overwritten in
+    hbase-site.xml.\</description\>
+  \</property\>
+
+It will share the same TTL with other HFileCleaners. And you can also 
implement your own cleaner and change this property to enable it.
+
+
+---
+
+* [HBASE-23675](https://issues.apache.org/jira/browse/HBASE-23675) | *Minor* | 
**Move to Apache parent POM version 22**
+
+Updated parent pom to Apache version 22.
+
+
+---
+
+* [HBASE-23679](https://issues.apache.org/jira/browse/HBASE-23679) | 
*Critical* | **FileSystem instance leaks due to bulk loads with Kerberos 
enabled**
+
+This fixes an issue with bulk loading on installations with Kerberos enabled and more than a single RegionServer. When multiple RegionServers are involved in hosting the regions of a table being bulk-loaded into, all but the RegionServer hosting the table's first Region will "leak" one DistributedFileSystem object onto the heap, never freeing that memory. Eventually, with enough bulk loads, this creates a situation where RegionServers have no free heap space and will either spend all of their time in JVM GC, lose their ZK session, or crash with an OutOfMemoryError.
+
+The only mitigation for this issue is to periodically restart RegionServers. All earlier versions of HBase 2.x are subject to this issue (2.0.x, \<=2.1.8, \<=2.2.3)
+
+
+---
+
+* [HBASE-23286](https://issues.apache.org/jira/browse/HBASE-23286) | *Major* | 
**Improve MTTR: Split WAL to HFile**
+
+Adds a new feature to improve MTTR, with 3 failover steps:
+1. Read the WAL and write HFiles to each region's column family's recovered.hfiles directory.
+2. Open the region.
+3. Bulkload the recovered.hfiles for every column family.
+
+Compared to DLS (distributed log split), this feature reduces region open time significantly.
+
+Set hbase.wal.split.to.hfile to true to enable this feature.
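+
+To enable the feature, a sketch for hbase-site.xml:
+
+```
+<property>
+  <name>hbase.wal.split.to.hfile</name>
+  <value>true</value>
+</property>
+```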
+
+
+---
+
+* [HBASE-23619](https://issues.apache.org/jira/browse/HBASE-23619) | *Trivial* 
| **Use built-in formatting for logging in hbase-zookeeper**
+
+Changed the logging in hbase-zookeeper to use built-in formatting
+
+
+---
+
+* [HBASE-23628](https://issues.apache.org/jira/browse/HBASE-23628) | *Minor* | 
**Replace Apache Commons Digest Base64 with JDK8 Base64**
+
+From the PR:
+
+"Yes. The two create the same output... I just wrote a small test suite to 
increase my confidence on that. I generated many tens of millions of random 
byte patterns and compared the output of the two algorithms. They came back 
identical every time.
+
+"Just in case any inquiring minds would like to know, there is no longer an 
encoding required when generating the strings. The JDK implementation 
specifically specifies that strings returned are StandardCharsets.ISO\_8859\_1. 
This does not change anything because UTF8 and ISO\_8859 overlap for the 
limited character set (64 characters) the encoding uses."
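+
+As a JDK-only illustration of the replacement encoder (not HBase code; the sample string is arbitrary):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        byte[] data = "hello hbase".getBytes(StandardCharsets.UTF_8);
        // Encode with the JDK8 encoder that replaced commons-codec Base64
        String encoded = Base64.getEncoder().encodeToString(data);
        System.out.println(encoded); // aGVsbG8gaGJhc2U=
        // Round-trip back to the original string
        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(decoded); // hello hbase
    }
}
```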
+
+
+---
+
+* [HBASE-23651](https://issues.apache.org/jira/browse/HBASE-23651) | *Major* | 
**Region balance throttling can be disabled**
+
+Setting hbase.balancer.max.balancing to an int value \<= 0 disables region balance throttling.
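+
+A sketch of disabling throttling in hbase-site.xml:
+
+```
+<property>
+  <name>hbase.balancer.max.balancing</name>
+  <value>0</value>
+</property>
+```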
+
+
+---
+
+* [HBASE-23588](https://issues.apache.org/jira/browse/HBASE-23588) | *Major* | 
**Cache index blocks and bloom blocks on write if CacheCompactedBlocksOnWrite 
is enabled**
+
+If cacheOnWrite is enabled during flush or compaction, index and bloom blocks (along with data blocks) are automatically cached during write.
+
+
+---
+
+* [HBASE-23369](https://issues.apache.org/jira/browse/HBASE-23369) | *Major* | 
**Auto-close 'unknown' Regions reported as OPEN on RegionServers**
+
+If a RegionServer reports a Region as OPEN in disagreement with Master's 
status on the Region, the Master now tells the RegionServer to silently close 
the Region.
+
+
+---
+
+* [HBASE-23596](https://issues.apache.org/jira/browse/HBASE-23596) | *Major* | 
**HBCKServerCrashProcedure can double assign**
+
+Makes the recently added HBCKServerCrashProcedure -- the SCP invoked when an operator schedules an SCP via the hbck2 scheduleRecoveries command -- work the same as SCP EXCEPT when the master knows nothing of the scheduled servername. In this latter case, HBCKSCP will do a full scan of hbase:meta looking for instances of the passed servername. If any are found, it will attempt cleanup of hbase:meta references by reassigning any found OPEN or OPENING and by closing any in CLOSING state.
+
+Used to fix instances of what the 'HBCK Report' page shows as 'Unknown 
Servers'.
+
+
+---
+
+* [HBASE-23624](https://issues.apache.org/jira/browse/HBASE-23624) | *Major* | 
**Add a tool to dump the procedure info in HFile**
+
+Use ./hbase 
org.apache.hadoop.hbase.procedure2.store.region.HFileProcedurePrettyPrinter to 
run the tool.
+
+
+---
+
+* [HBASE-23590](https://issues.apache.org/jira/browse/HBASE-23590) | *Major* | 
**Update maxStoreFileRefCount to maxCompactedStoreFileRefCount**
+
+RegionsRecoveryChore, introduced as part of HBASE-22460, tries to reopen regions based on the config hbase.regions.recovery.store.file.ref.count.
+Region reopen needs to take into consideration all compacted-away store files that belong to the region, not the non-compacted store files.
+
+Fixed this bug as part of this Jira.
+Updated descriptions for the corresponding configs:
+
+1. hbase.master.regions.recovery.check.interval :
+
+Regions Recovery Chore interval in milliseconds. This chore keeps running at 
this interval to find all regions with configurable max store file ref count 
and reopens them. Defaults to 20 mins
+
+2. hbase.regions.recovery.store.file.ref.count :
+
+A very large ref count on a compacted store file indicates a reference leak on that object (the compacted store file). Such files can not be removed after they are invalidated via compaction. The only way to recover in such a scenario is to reopen the region, which releases all resources, like the refcount, leases, etc. This config represents the store file ref count threshold considered for reopening regions. Any region with a compacted store files ref count \> this value would be eligible for reopening by master. Here, we take the max refCount among all refCounts on all compacted-away store files that belong to a particular region. The default value -1 indicates this feature is turned off. Only a positive integer value should be provided to enable this feature.
+
+
+---
+
+* [HBASE-23618](https://issues.apache.org/jira/browse/HBASE-23618) | *Major* | 
**Add a tool to dump procedure info in the WAL file**
+
+Use ./hbase 
org.apache.hadoop.hbase.procedure2.store.region.WALProcedurePrettyPrinter to 
run the tool.
+
+
+---
+
+* [HBASE-23617](https://issues.apache.org/jira/browse/HBASE-23617) | *Major* | 
**Add a stress test tool for region based procedure store**
+
+Use ./hbase 
org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStorePerformanceEvaluation
 to run the tool.
+
+
+---
+
+* [HBASE-23326](https://issues.apache.org/jira/browse/HBASE-23326) | 
*Critical* | **Implement a ProcedureStore which stores procedures in a HRegion**
+
+Uses a region based procedure store to replace the old customized WAL based procedure store. The procedure data migration is done automatically during upgrade. After upgrading, the MasterProcWALs directory will be deleted and a new MasterProc directory will be created. Note that the region will still write a WAL, so we still have WAL files, and they will be moved to the oldWALs directory. The file name looks mostly like a normal WAL file name; the only difference is that it ends with "$masterproc$".
+
+
+---
+
+* [HBASE-23320](https://issues.apache.org/jira/browse/HBASE-23320) | *Major* | 
**Upgrade surefire plugin to 3.0.0-M4**
+
+Bumped surefire plugin to 3.0.0-M4
+
+
+---
+
+* [HBASE-20461](https://issues.apache.org/jira/browse/HBASE-20461) | *Major* | 
**Implement fsync for AsyncFSWAL**
+
+Now AsyncFSWAL also supports Durability.FSYNC\_WAL.
+
+
+---
+
+* [HBASE-23066](https://issues.apache.org/jira/browse/HBASE-23066) | *Minor* | 
**Create a config that forces to cache blocks on compaction**
+

[... 17836 lines stripped ...]
