http://git-wip-us.apache.org/repos/asf/hbase/blob/d7547c61/RELEASENOTES.md
----------------------------------------------------------------------
diff --git a/RELEASENOTES.md b/RELEASENOTES.md
new file mode 100644
index 0000000..7776c08
--- /dev/null
+++ b/RELEASENOTES.md
@@ -0,0 +1,8204 @@
+# HBASE 2.0.0 Release Notes
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-14175](https://issues.apache.org/jira/browse/HBASE-14175) | 
*Critical* | **Adopt releasedocmaker for better generated release notes**
+
+We will use yetus releasedocmaker to make our changes doc from here on out. A CHANGELOG.md will replace our current CHANGES.txt. Alongside it, we'll keep up a RELEASENOTES.md doc courtesy of releasedocmaker.
+
+HBASE-18828 is where we are working through the steps for the RM to integrate this new tooling.
+
+
+---
+
+* [HBASE-16499](https://issues.apache.org/jira/browse/HBASE-16499) | 
*Critical* | **slow replication for small HBase clusters**
+
+Changed the default value for replication.source.ratio from 0.1 to 0.5, which means that by default 50% of the total RegionServers in the peer cluster(s) will participate in replication.
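+
+For operators who want to pin the ratio explicitly -- for example, to restore the old behavior -- the property can be set in hbase-site.xml; the value below is the pre-change default:
+
+```xml
+<property>
+  <name>replication.source.ratio</name>
+  <value>0.1</value>
+</property>
+```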
+
+
+---
+
+* [HBASE-16459](https://issues.apache.org/jira/browse/HBASE-16459) | *Trivial* 
| **Remove unused hbase shell --format option**
+
+<!-- markdown -->
+
+The HBase `shell` command no longer recognizes the option `--format`. 
Previously this option only recognized the default value of 'console'. The 
default value is now always used.
+
+
+---
+
+* [HBASE-20259](https://issues.apache.org/jira/browse/HBASE-20259) | 
*Critical* | **Doc configs for in-memory-compaction and add detail to 
in-memory-compaction logging**
+
+Disables in-memory compaction by default.
+
+Adds logging of in-memory compaction configuration on creation.
+
+Adds a chapter to the refguide on this new feature.
+
+
+---
+
+* [HBASE-20282](https://issues.apache.org/jira/browse/HBASE-20282) | *Major* | 
**Provide short name invocations for useful tools**
+
+\`hbase regionsplitter\` is a new short invocation for \`hbase 
org.apache.hadoop.hbase.util.RegionSplitter\`
+
+
+---
+
+* [HBASE-20314](https://issues.apache.org/jira/browse/HBASE-20314) | *Major* | 
**Precommit build for master branch fails because of surefire fork fails**
+
+Upgrade surefire plugin to 2.21.0.
+
+
+---
+
+* [HBASE-20130](https://issues.apache.org/jira/browse/HBASE-20130) | 
*Critical* | **Use defaults (16020 & 16030) as base ports when the RS is bound 
to localhost**
+
+<!-- markdown -->
+When region servers bind to localhost (mostly in pseudo distributed mode), 
default ports (16020 & 16030) are used as base ports. This will support up to 9 
instances of region servers by default with `local-regionservers.sh` script. If 
additional instances are needed, see the reference guide on how to deploy with 
a different range using the environment variables `HBASE_RS_BASE_PORT` and 
`HBASE_RS_INFO_BASE_PORT`.
+
+
+---
+
+* [HBASE-20111](https://issues.apache.org/jira/browse/HBASE-20111) | 
*Critical* | **Able to split region explicitly even on shouldSplit return false 
from split policy**
+
+When a split is requested on a Region, the RegionServer hosting that Region 
will now consult the configured SplitPolicy for that table when determining if 
a split of that Region is allowed. When a split is disallowed (due to the 
Region not being OPEN or the SplitPolicy denying the request), the operation 
will \*not\* be implicitly retried as it has previously done. Users will need 
to guard against and explicitly retry region split requests which are denied by 
the system.
+
+
+---
+
+* [HBASE-20223](https://issues.apache.org/jira/browse/HBASE-20223) | *Blocker* 
| **Use hbase-thirdparty 2.1.0**
+
+Moves commons-cli and commons-collections4 into the HBase thirdparty shaded 
jar which means that these are no longer generally available for users on the 
classpath.
+
+
+---
+
+* [HBASE-19128](https://issues.apache.org/jira/browse/HBASE-19128) | *Major* | 
**Purge Distributed Log Replay from codebase, configurations, text; mark the 
feature as unsupported, broken.**
+
+Removes Distributed Log Replay feature. Disable the feature before upgrading.
+
+
+---
+
+* [HBASE-19504](https://issues.apache.org/jira/browse/HBASE-19504) | *Major* | 
**Add TimeRange support into checkAndMutate**
+
+1) checkAndMutate accepts a TimeRange to query the specified cell
+2) remove the writeToWAL flag from Region#checkAndMutate since it is useless (this is an incompatible change)
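+
+A minimal sketch of what the new call shape might look like through the client API (the table, row, and column names are made up for illustration; assumes a live cluster connection):
+
+```java
+// Sketch only: assumes a running cluster and an existing table 't1' with family 'cf'.
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.*;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+
+Configuration conf = HBaseConfiguration.create();
+try (Connection conn = ConnectionFactory.createConnection(conf);
+     Table table = conn.getTable(TableName.valueOf("t1"))) {
+  Put put = new Put(Bytes.toBytes("row1"));
+  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("new"));
+  // Only apply the Put if cf:q equals "old" within the given TimeRange.
+  boolean applied = table.checkAndMutate(Bytes.toBytes("row1"), Bytes.toBytes("cf"))
+      .qualifier(Bytes.toBytes("q"))
+      .timeRange(TimeRange.between(0L, 100L))
+      .ifEquals(Bytes.toBytes("old"))
+      .thenPut(put);
+}
+```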
+
+
+---
+
+* [HBASE-20224](https://issues.apache.org/jira/browse/HBASE-20224) | *Blocker* 
| **Web UI is broken in standalone mode**
+
+Standalone webui was broken inadvertently by HBASE-20027.
+
+
+---
+
+* [HBASE-20237](https://issues.apache.org/jira/browse/HBASE-20237) | 
*Critical* | **Put back getClosestRowBefore and throw UnknownProtocolException 
instead... for asynchbase client**
+
+Throw UnknownProtocolException if a client connects and tries to invoke the old getClosestRowOrBefore method. Pre-hbase-1.0.0 clients and asynchbase do this instead of using its replacement, the reverse Scan.
+
+getClosestRowOrBefore was implemented as a flag on Get. Before this patch, even though the flag was set, hbase2 was ignoring it. This made it look like a pre-1.0.0 client was 'working', but it would then fail to find the appropriate Region for a client-specified row when doing lookups into hbase:meta.
+
+
+---
+
+* [HBASE-20247](https://issues.apache.org/jira/browse/HBASE-20247) | *Major* | 
**Set version as 2.0.0 in branch-2.0 in prep for first RC**
+
+Set version as 2.0.0 on branch-2.0.
+
+
+---
+
+* [HBASE-20090](https://issues.apache.org/jira/browse/HBASE-20090) | *Major* | 
**Properly handle Preconditions check failure in 
MemStoreFlusher$FlushHandler.run**
+
+When there is a concurrent region split, MemStoreFlusher may not find a flushable region if the only candidate region left hasn't received writes (resulting in 0 data size).
+After this JIRA, such a scenario won't trigger the Precondition assertion (it is replaced by an if statement checking whether there is any flushable region).
+If there is no flushable region, a DEBUG log appears in the region server log, saying "Above memory mark but there is no flushable region".
+
+
+---
+
+* [HBASE-19552](https://issues.apache.org/jira/browse/HBASE-19552) | *Major* | 
**update hbase to use new thirdparty libs**
+
+hbase-thirdparty libs have moved to the o.a.h.thirdparty offset. The Netty shading system property is no longer necessary.
+
+
+---
+
+* [HBASE-20119](https://issues.apache.org/jira/browse/HBASE-20119) | *Minor* | 
**Introduce a pojo class to carry coprocessor information in order to make 
TableDescriptorBuilder accept multiple cp at once**
+
+1) Make all methods in TableDescriptorBuilder follow the setter pattern:
+addCoprocessor -\> setCoprocessor
+addColumnFamily -\> setColumnFamily
+(addCoprocessor and addColumnFamily are still in branch-2 but they are marked as deprecated)
+2) add CoprocessorDescriptor to carry cp information
+3) add CoprocessorDescriptorBuilder to build CoprocessorDescriptor
+4) TableDescriptorBuilder disallows setting a negative priority on a coprocessor, since parsing the negative value would cause an exception
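+
+A sketch of building a table descriptor with the builder-style API (the coprocessor class name is hypothetical):
+
+```java
+// Sketch: "org.example.MyObserver" is a hypothetical coprocessor class.
+import org.apache.hadoop.hbase.Coprocessor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.*;
+
+TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
+    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
+    .setCoprocessor(CoprocessorDescriptorBuilder.newBuilder("org.example.MyObserver")
+        .setPriority(Coprocessor.PRIORITY_USER)  // priority must be non-negative
+        .build())
+    .build();
+```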
+
+
+---
+
+* [HBASE-17165](https://issues.apache.org/jira/browse/HBASE-17165) | 
*Critical* | **Add retry to LoadIncrementalHFiles tool**
+
+Adds retry to load of incremental hfiles. Pertinent key is 
HConstants.HBASE\_CLIENT\_RETRIES\_NUMBER. Default is 
HConstants.DEFAULT\_HBASE\_CLIENT\_RETRIES\_NUMBER.
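+
+To raise the retry count for the tool, the key can be set in configuration; the value shown below is illustrative, not the default:
+
+```xml
+<property>
+  <name>hbase.client.retries.number</name>
+  <value>10</value>
+</property>
+```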
+
+
+---
+
+* [HBASE-20108](https://issues.apache.org/jira/browse/HBASE-20108) | 
*Critical* | **\`hbase zkcli\` falls into a non-interactive prompt after 
HBASE-15199**
+
+This issue fixes a runtime dependency issue where JLine is not made available on the classpath, which causes the ZooKeeper CLI to appear non-interactive. JLine was being made available unintentionally via the JRuby jar file on the classpath for the HBase shell. Because the JRuby jar is not always present, the fix made here was to selectively include the JLine dependency on the zkcli command's classpath.
+
+
+---
+
+* [HBASE-8770](https://issues.apache.org/jira/browse/HBASE-8770) | *Blocker* | 
**deletes and puts with the same ts should be resolved according to 
mvcc/seqNum**
+
+This behavior is available as a new feature. See HBASE-15968 release note.
+
+This issue is just about adding documentation on the HBASE-15968 feature to the refguide.
+
+
+---
+
+* [HBASE-19114](https://issues.apache.org/jira/browse/HBASE-19114) | *Major* | 
**Split out o.a.h.h.zookeeper from hbase-server and hbase-client**
+
+Splits out most of ZooKeeper related code into a separate new module: 
hbase-zookeeper.
+Also, renames some ZooKeeper related classes to follow a common naming pattern 
- "ZK" prefix - as compared to many different styles earlier.
+
+
+---
+
+* [HBASE-19437](https://issues.apache.org/jira/browse/HBASE-19437) | 
*Critical* | **Batch operation can't handle the null result for 
Append/Increment**
+
+The result from the server is changed from null to Result.EMPTY\_RESULT when an Append/Increment operation can't retrieve any data from the server.
+
+
+---
+
+* [HBASE-17448](https://issues.apache.org/jira/browse/HBASE-17448) | *Major* | 
**Export metrics from RecoverableZooKeeper**
+
+Committed to master and branch-1
+
+
+---
+
+* [HBASE-19400](https://issues.apache.org/jira/browse/HBASE-19400) | *Major* | 
**Add missing security checks in MasterRpcServices**
+
+Added ACL check to following Admin functions:
+enableCatalogJanitor, runCatalogJanitor, cleanerChoreSwitch, runCleanerChore, 
execProcedure, execProcedureWithReturn, normalize, normalizerSwitch, 
coprocessorService.
+When ACL is enabled, only those with ADMIN rights will be able to invoke these 
operations successfully.
+
+
+---
+
+* [HBASE-20048](https://issues.apache.org/jira/browse/HBASE-20048) | *Blocker* 
| **Revert serial replication feature**
+
+Reverts the serial replication feature from all branches. The plan is to reimplement it soon and land it on the 2.1 release line.
+
+
+---
+
+* [HBASE-19166](https://issues.apache.org/jira/browse/HBASE-19166) | *Blocker* 
| **AsyncProtobufLogWriter persists ProtobufLogWriter as class name for 
backward compatibility**
+
+For backward compatibility, AsyncProtobufLogWriter uses "ProtobufLogWriter" as 
writer class name and SecureAsyncProtobufLogWriter uses 
"SecureProtobufLogWriter" as writer class name.
+
+
+---
+
+* [HBASE-18596](https://issues.apache.org/jira/browse/HBASE-18596) | *Blocker* 
| **[TEST] A hbase1 cluster should be able to replicate to a hbase2 cluster; 
verify**
+
+Replication between versions was verified as basically working; 0.98.25-SNAPSHOT and a 1.2-ish version replicating to a beta-2 hbase2 cluster were tried.
+
+
+---
+
+* [HBASE-20017](https://issues.apache.org/jira/browse/HBASE-20017) | *Blocker* 
| **BufferedMutatorImpl submit the same mutation repeatedly**
+
+This change fixes multithreading issues in the implementation of 
BufferedMutator. BufferedMutator should not be used with 1.4 releases prior to 
1.4.2.
+
+
+---
+
+* [HBASE-20032](https://issues.apache.org/jira/browse/HBASE-20032) | *Minor* | 
**Receving multiple warnings for missing reporting.plugins.plugin.version**
+
+Add (latest) version elements missing from reporting plugins in top-level pom.
+
+
+---
+
+* [HBASE-19954](https://issues.apache.org/jira/browse/HBASE-19954) | *Major* | 
**Separate TestBlockReorder into individual tests to avoid ShutdownHook 
suppression error against hadoop3**
+
+The hadoop3 minidfscluster removes all shutdown handlers when the cluster goes down, which made this test, which does FS-stuff, fail. (The fix was to break up the test so each test method runs against an unadulterated FS.)
+
+
+---
+
+* [HBASE-20014](https://issues.apache.org/jira/browse/HBASE-20014) | *Major* | 
**TestAdmin1 Times out**
+
+Ups the overall test timeout from 10 minutes to 13 minutes. 15 minutes is the surefire timeout.
+
+
+---
+
+* [HBASE-20020](https://issues.apache.org/jira/browse/HBASE-20020) | 
*Critical* | **Make sure we throw DoNotRetryIOException when 
ConnectionImplementation is closed**
+
+Add checkClosed to core Client methods. Avoid unnecessary retry.
+
+
+---
+
+* [HBASE-19978](https://issues.apache.org/jira/browse/HBASE-19978) | *Major* | 
**The keepalive logic is incomplete in ProcedureExecutor**
+
+Completes the keep-alive logic and then enables it; ProcedureExecutor Workers will spin up more threads when needed, settling back to the core count after the burst in demand has passed. Default keep-alive is one minute. Default core-count is CPUs/4 or 16, whichever is greater. Maximum is an arbitrary core-count \* 10 (a limit that should never be hit, and if it is, there is something else very wrong).
+
+
+---
+
+* [HBASE-19950](https://issues.apache.org/jira/browse/HBASE-19950) | *Minor* | 
**Introduce a ColumnValueFilter**
+
+ColumnValueFilter provides a way to fetch only matching cells, given a specified column, value, and comparator. This is different from SingleColumnValueFilter, which fetches an entire row as soon as a matching cell is found.
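+
+A minimal sketch of using the new filter in a Scan (the family, qualifier, and value below are illustrative):
+
+```java
+// Sketch: return only the cells in cf:q whose value equals "v".
+import org.apache.hadoop.hbase.CompareOperator;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.ColumnValueFilter;
+import org.apache.hadoop.hbase.util.Bytes;
+
+Scan scan = new Scan().setFilter(new ColumnValueFilter(
+    Bytes.toBytes("cf"), Bytes.toBytes("q"),
+    CompareOperator.EQUAL, Bytes.toBytes("v")));
+```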
+
+
+---
+
+* [HBASE-18294](https://issues.apache.org/jira/browse/HBASE-18294) | *Major* | 
**Reduce global heap pressure: flush based on heap occupancy**
+
+A region is flushed if its memory component exceeds the region flush threshold.
+A flush policy decides which stores to flush by comparing the size of the 
store to a column-family-flush threshold.
+If the overall size of all memstores in the machine exceeds the bounds defined 
by the administrator (denoted global pressure) a region is selected and flushed.
+HBASE-18294 changes flush decisions to be based on heap-occupancy and not data 
(key-value) size, consistently across levels. This rolls back some of the 
changes by HBASE-16747. Specifically,
+(1) RSs, Regions and stores track their overall on-heap and off-heap occupancy,
+(2) A region is flushed when its on-heap+off-heap size exceeds the region 
flush threshold specified in hbase.hregion.memstore.flush.size,
+(3) The store to be flushed is chosen based on its on-heap+off-heap size
+(4) At the RS level, a flush is triggered when the overall on-heap exceeds the 
on-heap limit, or when the overall off-heap size exceeds the off-heap limit 
(low/high water marks).
+
+Note that when the region flush size is set to XXmb a region flush may be 
triggered even before writing keys and values of size XX because the total heap 
occupancy of the region which includes additional metadata exceeded the 
threshold.
+
+
+---
+
+* [HBASE-19116](https://issues.apache.org/jira/browse/HBASE-19116) | 
*Critical* | **Currently the tail of hfiles with CellComparator\* classname 
makes it so hbase1 can't open hbase2 written hfiles; fix**
+
+hbase-2.x sets KeyValue Comparators into the tail of hfiles rather than CellComparator (which it uses internally), just so hbase-1.x can continue to read hbase-2.x written hfiles.
+
+
+---
+
+* [HBASE-19948](https://issues.apache.org/jira/browse/HBASE-19948) | *Major* | 
**Since HBASE-19873, HBaseClassTestRule, Small/Medium/Large has different 
semantic**
+
+In a subtask, fixed doc and annotations to be more explicit that test timings are for the whole Test Fixture/Test Class/Test Suite, NOT the test method only, as we had been measuring up to this point. (Other subtasks untethered Categorization from test timeout such that all categories now have a ten-minute timeout -- no test can run longer than ten minutes or it gets killed/timed out.)
+
+
+---
+
+* [HBASE-16060](https://issues.apache.org/jira/browse/HBASE-16060) | *Blocker* 
| **1.x clients cannot access table state talking to 2.0 cluster**
+
+By default, we mirror table state to zookeeper so hbase-1.x clients will work 
against an hbase-2 cluster (With this patch, hbase-1.x clients can do most 
Admin functions including table create; hbase-1.x clients can do all Table/DML 
against hbase-2 cluster).
+
+Flag to disable mirroring is hbase.mirror.table.state.to.zookeeper; set it to 
false in Configuration.
+
+Relatedly, the Master on startup will look to see if there are table state znodes left over by an hbase-1 instance. If any are found, it will migrate the table state to hbase-2, setting the state into the hbase:meta table where table state is now kept. We do this check on every Master start. The notion is that this will be overall beneficial with low impediment. To disable the migration check, set hbase.migrate.table.state.from.zookeeper to false.
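+
+To disable the mirroring described above, the flag can be set in hbase-site.xml:
+
+```xml
+<property>
+  <name>hbase.mirror.table.state.to.zookeeper</name>
+  <value>false</value>
+</property>
+```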
+
+
+---
+
+* [HBASE-19900](https://issues.apache.org/jira/browse/HBASE-19900) | 
*Critical* | **Region-level exception destroy the result of batch**
+
+This fix changes how the client handles both the action result and the region exception:
+1) Honor the action result rather than the region exception. If the action has both a true result and a region exception, the action is fine, as the exception was caused by other actions in the same region.
+2) Honor the action exception rather than the region exception. If the action has both an action exception and a region exception, we deal with the action exception only. If we also handled the region exception for the same action, it would introduce a negative count of actions in progress, and AsyncRequestFuture#waitUntilDone would block forever.
+
+
+---
+
+* [HBASE-19841](https://issues.apache.org/jira/browse/HBASE-19841) | *Major* | 
**Tests against hadoop3 fail with StreamLacksCapabilityException**
+
+HBaseTestingUtility now assumes that all clusters will use local storage until 
a MiniDFSCluster is started or assigned.
+
+
+---
+
+* [HBASE-19528](https://issues.apache.org/jira/browse/HBASE-19528) | *Major* | 
**Major Compaction Tool**
+
+The tool allows you to compact a cluster, with a given number of regionservers compacting concurrently at any one time. If the tool completes successfully, everything requested for compaction will be compacted, regardless of region moves, splits, and merges.
+
+
+---
+
+* [HBASE-19919](https://issues.apache.org/jira/browse/HBASE-19919) | *Major* | 
**Tidying up logging**
+
+(I thought this change innocuous, but I made work for a co-worker when I upped the interval between log cleaner runs -- it meant a smoke test failed because we were slow doing an expected cleanup.)
+
+Edit of log lines removing redundancy. Shortened thread names shown in the log. Made some logs TRACE instead of DEBUG. Fixed capitalizations.
+
+Upped the log cleaner interval from every minute to every ten minutes (hbase.master.cleaner.interval).
+
+Lowered default count of threads started by Procedure Executor from count of 
CPUs to 1/4 of count of CPUs.
+
+
+---
+
+* [HBASE-19901](https://issues.apache.org/jira/browse/HBASE-19901) | *Major* | 
**Up yetus proclimit on nightlies**
+
+Pass to yetus a dockermemlimit of 20G and a proclimit of 10000. Defaults are 
4G and 1G respectively.
+
+
+---
+
+* [HBASE-19912](https://issues.apache.org/jira/browse/HBASE-19912) | *Minor* | 
**The flag "writeToWAL" of Region#checkAndRowMutate is useless**
+
+Removes the useless 'writeToWAL' flag of Region#checkAndRowMutate and related classes
+
+
+---
+
+* [HBASE-19911](https://issues.apache.org/jira/browse/HBASE-19911) | *Major* | 
**Convert some tests from small to medium because they are timing out: 
TestNettyRpcServer, TestClientClusterStatus, TestCheckTestClasses**
+
+Changed a few tests so they are medium sized rather than small sized.
+
+Also, upped the time we wait on small tests to 60 seconds from 30 seconds. Small tests are tests that run in 15 seconds or less. What we changed was the timeout watcher. It is now more lax, more tolerant of dodgy infrastructure that might be running tests slowly.
+
+
+---
+
+* [HBASE-19892](https://issues.apache.org/jira/browse/HBASE-19892) | *Major* | 
**Checking 'patch attach' and yetus 0.7.0 and move to Yetus 0.7.0**
+
+Moved our internal yetus reference from 0.6.0 to 0.7.0. Concurrently, I 
changed hadoopqa to run with 0.7.0 (by editing the config in jenkins).
+
+
+---
+
+* [HBASE-19873](https://issues.apache.org/jira/browse/HBASE-19873) | *Major* | 
**Add a CategoryBasedTimeout ClassRule for all UTs**
+
+Along with @category -- small, medium, large -- all hbase tests must now carry 
a ClassRule as follows:
+
++  @ClassRule
++  public static final HBaseClassTestRule CLASS\_RULE =
++      HBaseClassTestRule.forClass(TestInterfaceAudienceAnnotations.class);
+
+where the class changes by test.
+
+Currently the classrule enforces a timeout for the whole test suite -- i.e. if a SmallTest Category, then all the tests in the TestSuite must complete inside 60 seconds, the timeout we set on the SmallTest Category test suite -- but it is meant to be a repository for general, runtime, hbase test facilities.
+
+
+---
+
+* [HBASE-19770](https://issues.apache.org/jira/browse/HBASE-19770) | 
*Critical* | **Add '--return-values' option to Shell to print return values of 
commands in interactive mode**
+
+Introduces a new option to the HBase shell: -r, --return-values. When the shell is in "interactive" mode (the default), the return values of shell commands are not returned to the user, as they dirty the console output. For those who desire this functionality, the "--return-values" option restores the old behavior of commands passing their return value to the user.
+
+
+---
+
+* [HBASE-15321](https://issues.apache.org/jira/browse/HBASE-15321) | *Major* | 
**Ability to open a HRegion from hdfs snapshot.**
+
+HRegion.openReadOnlyFileSystemHRegion() provides the ability to open HRegion 
from a read-only hdfs snapshot.  Because hdfs snapshots are read-only, no 
cleanup happens when using this API.
+
+
+---
+
+* [HBASE-17513](https://issues.apache.org/jira/browse/HBASE-17513) | 
*Critical* | **Thrift Server 1 uses different QOP settings than RPC and Thrift 
Server 2 and can easily be misconfigured so there is no encryption when the 
operator expects it.**
+
+This change fixes an issue where users could have unintentionally configured 
the HBase Thrift1 server to run without wire-encryption, when they believed 
they had configured the Thrift1 server to do so.
+
+
+---
+
+* [HBASE-19828](https://issues.apache.org/jira/browse/HBASE-19828) | *Major* | 
**Flakey TestRegionsOnMasterOptions.testRegionsOnAllServers**
+
+Disables TestRegionsOnMasterOptions because Regions on Master does not work 
reliably; see HBASE-19831.
+
+
+---
+
+* [HBASE-18963](https://issues.apache.org/jira/browse/HBASE-18963) | *Major* | 
**Remove MultiRowMutationProcessor and implement mutateRows... methods using 
batchMutate()**
+
+Modified HRegion.mutateRow() APIs to use batchMutate() instead of 
processRowsWithLocks() with MultiRowMutationProcessor. 
MultiRowMutationProcessor is removed to have single write path that uses 
batchMutate().
+
+
+---
+
+* [HBASE-19163](https://issues.apache.org/jira/browse/HBASE-19163) | *Major* | 
**"Maximum lock count exceeded" from region server's batch processing**
+
+When there are many mutations against the same row in a batch, as each mutation will acquire a shared row lock, it can exceed the maximum shared lock count the java ReadWriteLock supports (64k). Along with other optimizations, the batch is divided into multiple minibatches where possible. A new config is added to limit the maximum number of mutations in a minibatch.
+
+   \<property\>
+    \<name\>hbase.regionserver.minibatch.size\</name\>
+    \<value\>20000\</value\>
+   \</property\>
+The default value is 20000.
+
+
+---
+
+* [HBASE-19739](https://issues.apache.org/jira/browse/HBASE-19739) | *Minor* | 
**Include thrift IDL files in HBase binary distribution**
+
+Thrift IDLs are now shipped, bundled up in the respective hbase-\*thrift.jars 
(look for files ending in .thrift).
+
+
+---
+
+* [HBASE-11409](https://issues.apache.org/jira/browse/HBASE-11409) | *Major* | 
**Add more flexibility for input directory structure to LoadIncrementalHFiles**
+
+Allows users to bulk load entire tables from hdfs by specifying the parameter -loadTable. This lets you pass in a table-level directory and have all region column families bulk loaded. If you do not specify the -loadTable parameter, LoadIncrementalHFiles will work as before. Note: you must have a pre-created table to run with -loadTable; it will not create one for you.
+
+
+---
+
+* [HBASE-19769](https://issues.apache.org/jira/browse/HBASE-19769) | 
*Critical* | **IllegalAccessError on package-private Hadoop metrics2 classes in 
MapReduce jobs**
+
+Client-side ZooKeeper metrics which were added to 2.0.0 alpha/beta releases 
cause issues when launching MapReduce jobs via {{yarn jar}} on the command 
line. This stems from ClassLoader separation issues that YARN implements. It 
was chosen that the easiest solution was to remove these ZooKeeper metrics 
entirely.
+
+
+---
+
+* [HBASE-19783](https://issues.apache.org/jira/browse/HBASE-19783) | *Minor* | 
**Change replication peer cluster key/endpoint from a not-null value to null is 
not allowed**
+
+To reduce confusing behavior, calling updatePeerConfig with an empty ClusterKey or ReplicationEndpointImpl while the corresponding field of the to-be-updated ReplicationPeerConfig is not null now throws an exception instead of ignoring them.
+
+
+---
+
+* [HBASE-19483](https://issues.apache.org/jira/browse/HBASE-19483) | *Major* | 
**Add proper privilege check for rsgroup commands**
+
+This JIRA refactors AccessController, using ACL as a core library in CPs:
+1. Strips out a public class AccessChecker from AccessController. AccessChecker doesn't have any dependency on anything CP related; other CPs create their own instance.
+2. Changes the default value of hbase.security.authorization to false.
+3. Doesn't use CP hooks to check access in RSGroup; uses the access checker instance directly in functions of RSGroupAdminServiceImpl.
+
+
+---
+
+* [HBASE-19358](https://issues.apache.org/jira/browse/HBASE-19358) | *Major* | 
**Improve the stability of splitting log when do fail over**
+
+HBASE-19358 introduces a new property, hbase.split.writer.creation.bounded, to limit the number of open writers for each WALSplitter. If set to true, we won't open any writer for recovered.edits until the entries accumulated in memory reach hbase.regionserver.hlog.splitlog.buffersize (which defaults to 128M), and we will write and close the file in one go instead of keeping the writer open. It's false by default, and we recommend setting it to true if your cluster has a high region load (like more than 300 regions per RS), especially if you have observed obvious NN/HDFS slowdown during hbase (single RS or cluster) failover.
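+
+To enable the bounded-writer behavior described above, set the property in hbase-site.xml:
+
+```xml
+<property>
+  <name>hbase.split.writer.creation.bounded</name>
+  <value>true</value>
+</property>
+```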
+
+
+---
+
+* [HBASE-19651](https://issues.apache.org/jira/browse/HBASE-19651) | *Minor* | 
**Remove LimitInputStream**
+
+HBase had copied the file LimitInputStream from guava. This commit removes the copied file in favor of (our internal, shaded) guava's ByteStreams.limit. Guava 14.0's LIS noted: "Use ByteStreams.limit(java.io.InputStream, long) instead. This class is scheduled to be removed in Guava release 15.0."
+
+
+---
+
+* [HBASE-19691](https://issues.apache.org/jira/browse/HBASE-19691) | 
*Critical* | **Do not require ADMIN permission for obtaining ClusterStatus**
+
+This change reverts an unintentional requirement for global ADMIN permission 
to obtain cluster status from the active HMaster.
+
+
+---
+
+* [HBASE-19486](https://issues.apache.org/jira/browse/HBASE-19486) | *Major* | 
** Periodically ensure records are not buffered too long by BufferedMutator**
+
+The BufferedMutator now supports two settings that are used to ensure records 
do not stay too long in the buffer of a BufferedMutator. For periodically 
flushing the BufferedMutator there is now a "Timeout": "How old may the oldest 
record in the buffer be before we force a flush" and a "TimerTick": How often 
do we check if the timeout has been exceeded. Using these settings you can make 
the BufferedMutator automatically flush the write buffer if after the specified 
number of milliseconds no flush has occurred.
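+
+A sketch of what wiring up the periodic flush might look like (the table name and timings below are illustrative):
+
+```java
+// Sketch: force a flush once the oldest buffered mutation is ~10s old,
+// checking roughly once per second.
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.BufferedMutatorParams;
+
+BufferedMutatorParams params = new BufferedMutatorParams(TableName.valueOf("t1"))
+    .setWriteBufferPeriodicFlushTimeoutMs(10_000L)   // the "Timeout"
+    .setWriteBufferPeriodicFlushTimerTickMs(1_000L); // the "TimerTick"
+// pass params to Connection#getBufferedMutator(params)
+```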
+
+This is mainly useful in streaming scenarios (i.e. writing data into HBase using Apache Flink/Beam/Storm) where it is common (especially in a test/development situation) to see small unpredictable bursts of data that need to be written into HBase. Until now, when using the BufferedMutator, the effect was that records would remain in the write buffer until the buffer was full or an explicit flush was triggered. In practice this would mean that the 'last few records' of a burst would remain in the write buffer until the next burst arrived, filling the buffer to capacity and thus triggering a flush.
+
+
+---
+
+* [HBASE-19670](https://issues.apache.org/jira/browse/HBASE-19670) | *Major* | 
**Workaround: Purge User API building from branch-2 so can make a beta-1**
+
+Disables filtering of the User API based on yetus annotations done in the doclet. See the parent issue for the build failure currently being worked on, which was not done in time for a beta-1.
+
+
+---
+
+* [HBASE-19282](https://issues.apache.org/jira/browse/HBASE-19282) | *Major* | 
**CellChunkMap Benchmarking and User Interface**
+
+When MSLAB is in use (the default config), we will always use the CellChunkMap indexing variant for in-memory flushed immutable segments. When MSLAB is turned off, we will use CellArrayMap. These cannot be changed with any configs. The in-memory flush threshold now defaults to 10% of the region flush size. This can be tuned using 'hbase.memstore.inmemoryflush.threshold.factor'.
+
+
+---
+
+* [HBASE-19628](https://issues.apache.org/jira/browse/HBASE-19628) | *Major* | 
**ByteBufferCell should extend ExtendedCell**
+
+ByteBufferCell → ByteBufferExtendedCell
+MapReduceCell → MapReduceExtendedCell
+ByteBufferChunkCell → ByteBufferChunkKeyValue
+NoTagByteBufferChunkCell → NoTagByteBufferChunkKeyValue
+KeyOnlyByteBufferCell → KeyOnlyByteBufferExtendedCell
+TagRewriteByteBufferCell → TagRewriteByteBufferExtendedCell
+ValueAndTagRewriteByteBufferCell → ValueAndTagRewriteByteBufferExtendedCell
+EmptyByteBufferCell → EmptyByteBufferExtendedCell
+FirstOnRowByteBufferCell → FirstOnRowByteBufferExtendedCell
+LastOnRowByteBufferCell → LastOnRowByteBufferExtendedCell
+FirstOnRowColByteBufferCell → FirstOnRowColByteBufferExtendedCell
+FirstOnRowColTSByteBufferCell → FirstOnRowColTSByteBufferExtendedCell
+LastOnRowColByteBufferCell → LastOnRowColByteBufferExtendedCell
+OffheapDecodedCell → OffheapDecodedExtendedCell
+
+
+---
+
+* [HBASE-19576](https://issues.apache.org/jira/browse/HBASE-19576) | *Major* | 
**Introduce builder for ReplicationPeerConfig and make it immutable**
+
+Adds a ReplicationPeerConfigBuilder to create ReplicationPeerConfig and makes ReplicationPeerConfig immutable. Meanwhile, the set\* methods in ReplicationPeerConfig are deprecated.
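+
+A minimal sketch of the builder usage (the ZooKeeper cluster key below is illustrative):
+
+```java
+// Sketch: build an immutable peer config via the new builder.
+import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
+
+ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
+    .setClusterKey("zk1.example.com:2181:/hbase")
+    .setReplicateAllUserTables(true)
+    .build();
+```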
+
+
+---
+
+* [HBASE-10092](https://issues.apache.org/jira/browse/HBASE-10092) | 
*Critical* | **Move to slf4j**
+
+We now have slf4j as our front-end. Be careful adding logging from here on out; make sure it is slf4j.
+
+From here on out, we devs need to convert log messages from being 'guarded' -- i.e. surrounded by if (LOG.isDebugEnabled...) -- to instead being parameterized log messages, e.g. the latter rather than the former in the below:
+
+logger.debug("The new entry is "+entry+".");
+logger.debug("The new entry is {}.", entry);
+
+See [1] for background on perf benefits.
+
+Note, FATAL log level is not present in slf4j. It is noted as a Marker but 
won't show in logs as a LEVEL.
+
+1.  https://www.slf4j.org/faq.html#logging\_performance
+
+
+---
+
+* [HBASE-19148](https://issues.apache.org/jira/browse/HBASE-19148) | *Blocker* 
| **Reevaluate default values of configurations**
+
+Removed unused hbase.fs.tmp.dir from hbase-default.xml.
+
+Upped hbase.master.fileSplitTimeout from 30 seconds to 10 minutes (suggested by production experience).
+
+Added note that handler-count should be ~CPU count.
+
+hbase.regionserver.logroll.multiplier has been changed from 0.95 to 0.5 AND 
the default block size has been doubled.
+
+A few of the core configs are now dumped to the log on startup.
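+
+If you depend on the previous behavior, the revisited values can be overridden in hbase-site.xml; for example (illustrative values restoring the earlier defaults, with the timeout in milliseconds):
+{code}
+\<property\>
+\<name\>hbase.master.fileSplitTimeout\</name\>
+\<value\>30000\</value\>
+\</property\>
+\<property\>
+\<name\>hbase.regionserver.logroll.multiplier\</name\>
+\<value\>0.95\</value\>
+\</property\>
+{code}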
+
+
+---
+
+* [HBASE-19492](https://issues.apache.org/jira/browse/HBASE-19492) | *Major* | 
**Add EXCLUDE\_NAMESPACE and EXCLUDE\_TABLECFS support to replication peer 
config**
+
+Adds two new fields, EXCLUDE\_NAMESPACE and EXCLUDE\_TABLECFS, to the replication peer config.
+
+If the replicate\_all flag is true, all user tables will be replicated to the peer cluster. You may then configure exclude namespaces or exclude table-cfs, which will not be replicated to the peer cluster.
+
+If the replicate\_all flag is false, no user tables will be replicated to the peer cluster. You may then configure the namespaces or table-cfs which will be replicated to the peer cluster.
+
+
+---
+
+* [HBASE-19494](https://issues.apache.org/jira/browse/HBASE-19494) | *Major* | 
**Create simple WALKey filter that can be plugged in on the Replication Sink**
+
+Adds a means of installing a very basic filter on the sink side of replication. We already have a means of installing a filter source-side, which is a better place to filter edits before they are shipped over the network, but this facility is needed by hbase-indexer.
+
+Set hbase.replication.sink.walentrysinkfilter to an implementation with a no-param constructor. See the test in the patch for an example.
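+
+For example (the filter class name below is hypothetical; supply your own implementation with a no-param constructor):
+{code}
+\<property\>
+\<name\>hbase.replication.sink.walentrysinkfilter\</name\>
+\<value\>org.example.MyWALEntrySinkFilter\</value\>
+\</property\>
+{code}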
+
+
+---
+
+* [HBASE-19112](https://issues.apache.org/jira/browse/HBASE-19112) | *Blocker* 
| **Suspect methods on Cell to be deprecated**
+
+Adds method Cell#getType which returns enum describing Cell Type.
+
+Deprecates the following Cell methods:
+
+ getTypeByte
+ getSequenceId
+ getTagsArray
+ getTagsOffset
+ getTagsLength
+
+CPs trying to build cells should use RawCellBuilderFactory, which supports building cells with tags.
+
+
+---
+
+* [HBASE-14790](https://issues.apache.org/jira/browse/HBASE-14790) | *Major* | 
**Implement a new DFSOutputStream for logging WAL only**
+
+Implement a FanOutOneBlockAsyncDFSOutput for writing the WAL only; the WAL provider which uses this class is AsyncFSWALProvider.
+
+It is based on netty and writes to 3 DNs concurrently (fan-out), so it generally leads to lower latency. It is also fail-fast: the stream becomes unwritable immediately after any read/write error, with no pipeline recovery; you need to call recoverLease to force close the output in this case. It only supports writing a file with a single block. For the WAL this is good behavior, as we can always open a new file when the old one is broken. The performance analysis in HBASE-16890 shows that it performs better.
+
+Behavior changes:
+1. As we now write to 3 DNs concurrently, according to the visibility guarantee of HDFS, the data will be available immediately when it arrives at a DN, since all the DNs are considered the last one in the pipeline. This means replication may read uncommitted data and replicate it to the remote cluster, causing data inconsistency. HBASE-14004 is used to solve the problem.
+2. There will be no sync failures. When the output is broken, we will open a new file and write all the unacked WAL entries to the new file. This means that we may have duplicated entries in WAL files. HBASE-14949 is used to solve this problem.
+
+
+---
+
+* [HBASE-15536](https://issues.apache.org/jira/browse/HBASE-15536) | 
*Critical* | **Make AsyncFSWAL as our default WAL**
+
+Now the default WALProvider is AsyncFSWALProvider, i.e. 'asyncfs'.
+If you want to change back to use FSHLog, please add this in hbase-site.xml
+{code}
+\<property\>
+\<name\>hbase.wal.provider\</name\>
+\<value\>filesystem\</value\>
+\</property\>
+{code}
+If you want to use FSHLog with multiwal, please add this in hbase-site.xml
+{code}
+\<property\>
+\<name\>hbase.wal.regiongrouping.delegate.provider\</name\>
+\<value\>filesystem\</value\>
+\</property\>
+{code}
+
+This patch also sets hbase.wal.async.use-shared-event-loop to false so WAL has 
its own netty event group.
+
+
+---
+
+* [HBASE-19462](https://issues.apache.org/jira/browse/HBASE-19462) | *Major* | 
**Deprecate all addImmutable methods in Put**
+
+Deprecates Put#addImmutable as of release 2.0.0; it will be removed in HBase 3.0.0. Use {@link #add(Cell)} and {@link org.apache.hadoop.hbase.CellBuilder} instead.
+
+
+---
+
+* [HBASE-19213](https://issues.apache.org/jira/browse/HBASE-19213) | *Minor* | 
**Align check and mutate operations in Table and AsyncTable**
+
+In the Table interface, deprecate the checkAndPut, checkAndDelete and checkAndMutate methods.
+Similarly to AsyncTable, a new method was added to replace the deprecated ones: CheckAndMutateBuilder checkAndMutate(byte[] row, byte[] family), where the CheckAndMutateBuilder interface can be used to construct the checkAnd\*() operations.
+
+
+---
+
+* [HBASE-19134](https://issues.apache.org/jira/browse/HBASE-19134) | *Major* | 
**Make WALKey an Interface; expose Read-Only version to CPs**
+
+Made WALKey an Interface and added a WALKeyImpl implementation. WALKey comes 
through to Coprocessors. WALKey is read-only.
+
+
+---
+
+* [HBASE-18169](https://issues.apache.org/jira/browse/HBASE-18169) | *Blocker* 
| **Coprocessor fix and cleanup before 2.0.0 release**
+
+Refactor of Coprocessor API for hbase2. Purged methods that exposed too much 
of our internals. Other hooks were recast so they no longer took or returned 
internal classes; instead we pass Interfaces or read-only versions of 
implementations.
+
+Here is some overview doc on changes in hbase2 for Coprocessors including 
detail on why the change was made:
+https://github.com/apache/hbase/blob/branch-2.0/dev-support/design-docs/Coprocessor\_Design\_Improvements-Use\_composition\_instead\_of\_inheritance-HBASE-17732.adoc
+
+
+---
+
+* [HBASE-19301](https://issues.apache.org/jira/browse/HBASE-19301) | *Major* | 
**Provide way for CPs to create short circuited connection with custom 
configurations**
+
+Provided a way for the CP users to create a short circuitable connection with 
custom configs.
+
+createConnection(Configuration) is added to MasterCoprocessorEnvironment, 
RegionServerCoprocessorEnvironment and RegionCoprocessorEnvironment.
+
+The getConnection() method already available in these Env interfaces returns the cluster connection used by the server, whereas this new method will create a new connection on request. The difference from a connection created using the ConnectionFactory APIs is that this connection can short circuit calls to the same server, avoiding the RPC path. The connection will NOT be cached/maintained by the server; that should be done by the CPs.
+
+Be careful creating Connections out of a Coprocessor. See the javadoc on these 
createConnection and getConnection.
+
+
+---
+
+* [HBASE-19357](https://issues.apache.org/jira/browse/HBASE-19357) | *Major* | 
**Bucket cache no longer L2 for LRU cache**
+
+Removed the cacheDataInL1 option for HCD.
+BucketCache is no longer the L2 for the LRU on-heap cache. When BC is used, data blocks will be strictly on BC only, whereas index/bloom blocks are on the LRU L1 cache.
+The config 'hbase.bucketcache.combinedcache.enabled' is removed. There is no way to set combined mode = false, i.e. to make BC the victim handler for the LRU cache.
+One more noticeable change appears when one uses BucketCache in file mode: the system tables' data blocks (including the META table) will be cached in BucketCache files only. A test doing plain scans from META alone reveals that the throughput of file-mode BC is almost halved. But for META entries we have the RegionLocation cache at client-side connections, so this should not be a big concern in real cluster usage. We will check more on this and probably fix it when we do tiered BucketCache.
+
+
+---
+
+* [HBASE-19430](https://issues.apache.org/jira/browse/HBASE-19430) | *Major* | 
**Remove the SettableTimestamp and SettableSequenceId**
+
+All the cells used on the server side are of ExtendedCell now.
+
+
+---
+
+* [HBASE-19295](https://issues.apache.org/jira/browse/HBASE-19295) | *Major* | 
**The Configuration returned by CPEnv should be read-only.**
+
+CoprocessorEnvironment#getConfiguration returns a READ-ONLY Configuration. 
Attempts at altering the returned Configuration -- whether setting or adding 
resources -- will result in an IllegalStateException warning of the Read-only 
condition of the returned Configuration.
+
+
+---
+
+* [HBASE-19410](https://issues.apache.org/jira/browse/HBASE-19410) | *Major* | 
**Move zookeeper related UTs to hbase-zookeeper and mark them as ZKTests**
+
+There is a new HBaseZKTestingUtility which can start only a mini zookeeper cluster. We will also publish test-jar sources for all modules.
+
+
+---
+
+* [HBASE-19323](https://issues.apache.org/jira/browse/HBASE-19323) | *Major* | 
**Make netty engine default in hbase2**
+
+NettyRpcServer is now our default RPC server replacing SimpleRpcServer.
+
+
+---
+
+* [HBASE-19426](https://issues.apache.org/jira/browse/HBASE-19426) | *Major* | 
**Move has() and setTimestamp() to Mutation**
+
+Moves #has and #setTimestamp back up to Mutation from the subclass Put so they are available to other Mutation implementations.
+
+
+---
+
+* [HBASE-19384](https://issues.apache.org/jira/browse/HBASE-19384) | 
*Critical* | **Results returned by preAppend hook in a coprocessor are replaced 
with null from other coprocessor even on bypass**
+
+When a coprocessor sets 'bypass', we will skip calling subsequent Coprocessors 
that may be stacked-up on the method invocation; e.g. if a prePut has three 
coprocessors hooked up, if the first coprocessor decides to set 'bypass', we 
will not call the two subsequent coprocessors (this is similar to the 
'complete' functionality that was in hbase1, removed in hbase2).
+
+
+---
+
+* [HBASE-19408](https://issues.apache.org/jira/browse/HBASE-19408) | *Trivial* 
| **Remove WALActionsListener.Base**
+
+1) Remove WALActionsListener.Base.
+2) Provide default method implementations in WALActionsListener.
+Those who want to receive notifications of WAL events should implement WALActionsListener rather than WALActionsListener.Base.
+
+
+---
+
+* [HBASE-19339](https://issues.apache.org/jira/browse/HBASE-19339) | 
*Critical* | **Eager policy results in the negative size of memstore**
+
+Enable TestAcidGuaranteesWithEagerPolicy and 
TestAcidGuaranteesWithAdaptivePolicy
+
+
+---
+
+* [HBASE-19336](https://issues.apache.org/jira/browse/HBASE-19336) | *Major* | 
**Improve rsgroup to allow assign all tables within a specified namespace by 
only writing namespace**
+
+Add two new shell cmds.
+move\_namespaces\_rsgroup is used to reassign tables of specified namespaces from one RegionServer group to another.
+move\_servers\_namespaces\_rsgroup is used to reassign RegionServers and tables of specified namespaces from one group to another.
+
+
+---
+
+* [HBASE-19285](https://issues.apache.org/jira/browse/HBASE-19285) | 
*Critical* | **Add per-table latency histograms**
+
+Per-RegionServer table latency histograms have been returned to HBase (after 
being removed due to impacting performance). These metrics are exposed via a 
new JMX bean "TableLatencies" with the typical naming conventions: namespace, 
table, and histogram component.
+
+
+---
+
+* [HBASE-19359](https://issues.apache.org/jira/browse/HBASE-19359) | *Major* | 
**Revisit the default config of hbase client retries number**
+
+The default value of hbase.client.retries.number was 35; it is now 10.
+On the server side, the default hbase.client.serverside.retries.multiplier was 10, so the server-side retries number was 35 \* 10 = 350. The multiplier is now 3, giving 10 \* 3 = 30 retries server-side.
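+
+If you depend on the older, more patient client behavior, the retries count can be raised again in hbase-site.xml (an illustrative value restoring the old default):
+{code}
+\<property\>
+\<name\>hbase.client.retries.number\</name\>
+\<value\>35\</value\>
+\</property\>
+{code}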
+
+
+---
+
+* [HBASE-18090](https://issues.apache.org/jira/browse/HBASE-18090) | *Major* | 
**Improve TableSnapshotInputFormat to allow more multiple mappers per region**
+
+In this task, we make it possible to run multiple mappers per region in the table snapshot. The following code is the primary table snapshot mapper initialization:
+
+TableMapReduceUtil.initTableSnapshotMapperJob(
+    snapshotName,     // the name of the snapshot (of a table) to read from
+    scan,             // Scan instance to control CF and attribute selection
+    mapper,           // mapper
+    outputKeyClass,   // mapper output key
+    outputValueClass, // mapper output value
+    job,              // the current job to adjust
+    true,             // upload HBase jars and jars for any of the configured job classes via the distributed cache (tmpjars)
+    restoreDir        // a temporary directory to copy the snapshot files into
+);
+
+The job only runs one map task per region in the table snapshot. With this feature, the client can specify the desired number of mappers when initializing the table snapshot mapper job:
+
+TableMapReduceUtil.initTableSnapshotMapperJob(
+    snapshotName,     // the name of the snapshot (of a table) to read from
+    scan,             // Scan instance to control CF and attribute selection
+    mapper,           // mapper
+    outputKeyClass,   // mapper output key
+    outputValueClass, // mapper output value
+    job,              // the current job to adjust
+    true,             // upload HBase jars and jars for any of the configured job classes via the distributed cache (tmpjars)
+    restoreDir,       // a temporary directory to copy the snapshot files into
+    splitAlgorithm,   // the algorithm to split with; current split algorithms support RegionSplitter.UniformSplit() and RegionSplitter.HexStringSplit()
+    n                 // how many input splits to generate per region
+);
+
+
+---
+
+* [HBASE-19035](https://issues.apache.org/jira/browse/HBASE-19035) | *Major* | 
**Miss metrics when coprocessor use region scanner to read data**
+
+1. Move the read requests count to the region level, because RegionScanner is exposed to CPs.
+2. Update the write requests count in processRowsWithLocks.
+3. Remove requestRowActionCount in RSRpcServices. This metric can be computed from a region's readRequestsCount and writeRequestsCount.
+
+
+---
+
+* [HBASE-19318](https://issues.apache.org/jira/browse/HBASE-19318) | 
*Critical* | **MasterRpcServices#getSecurityCapabilities explicitly checks for 
the HBase AccessController implementation**
+
+Fixes an issue with loading custom coprocessor endpoint implementations inside the HBase Master which breaks Apache Ranger.
+
+
+---
+
+* [HBASE-19092](https://issues.apache.org/jira/browse/HBASE-19092) | 
*Critical* | **Make Tag IA.LimitedPrivate and expose for CPs**
+
+This JIRA aims at exposing Tags for Coprocessor usage.
+The Tag interface is now exposed to Coprocessors, and CPs can make use of this interface to create their own Tags.
+RawCell is a new interface that is a subtype of Cell and that is exposed to CPs. RawCell has the following APIs:
+
+List\<Tag\> getTags()
+Optional\<Tag\> getTag(byte type)
+byte[] cloneTags()
+
+The above APIs help to read tags from the Cell.
+
+CellUtil#createCell(Cell cell, List\<Tag\> tags)
+CellUtil#createCell(Cell cell, byte[] tags)
+CellUtil#createCell(Cell cell, byte[] value, byte[] tags)
+are deprecated.
+If CPs want to create a cell with Tags they can use the 
RegionCoprocessorEnvironment#getCellBuilder() that returns an 
ExtendedCellBuilder.
+Using ExtendedCellBuilder the CP can create Cells with Tags. Other helper 
methods to work on Tags are available as static APIs in Tag interface.
+
+
+---
+
+* [HBASE-19266](https://issues.apache.org/jira/browse/HBASE-19266) | *Minor* | 
**TestAcidGuarantees should cover adaptive in-memory compaction**
+
+Separate TestAcidGuarantees by policy:
+1) NONE -\> TestAcidGuaranteesWithNoInMemCompaction
+2) BASIC -\> TestAcidGuaranteesWithBasicPolicy
+3) EAGER -\> TestAcidGuaranteesWithEagerPolicy
+4) ADAPTIVE -\> TestAcidGuaranteesWithAdaptivePolicy
+
+TestAcidGuaranteesWithEagerPolicy and TestAcidGuaranteesWithAdaptivePolicy are 
disabled by default as the eager policy may cause the negative size of memstore.
+
+
+---
+
+* [HBASE-16868](https://issues.apache.org/jira/browse/HBASE-16868) | 
*Critical* | **Add a replicate\_all flag to avoid misuse the namespaces and 
table-cfs config of replication peer**
+
+Add a replicate\_all flag to the replication peer config. The default value is true, which means all user tables (REPLICATION\_SCOPE != 0) will be replicated to the peer cluster.
+
+How to change a peer from replicating all to replicating only specific namespaces/table-cfs:
+Step 1. Add a new peer with no namespace/tablecfs config; the replicate\_all flag will be true automatically.
+Step 2. To replicate only some namespaces or tables, first set the replicate\_all flag to false.
+Step 3. Add the specific namespaces or table-cfs config to the replication peer.
+
+How to change a peer from replicating specific namespaces/table-cfs to replicating all:
+Step 1. Add a new peer with a specific namespace/tablecfs config; the replicate\_all flag will be false automatically.
+Step 2. To replicate all user tables, first remove the specific namespace/tablecfs config.
+Step 3. Set the replicate\_all flag to true.
+
+How to replicate nothing:
+Set the replicate\_all flag to false with no namespace/tablecfs config; then no tables will be replicated to the peer cluster.
+
+
+---
+
+* [HBASE-19122](https://issues.apache.org/jira/browse/HBASE-19122) | 
*Critical* | **preCompact and preFlush can bypass by returning null scanner; 
shut it down**
+
+Remove the ability to 'bypass' preFlush and preCompact by returning a null 
Scanner. Bypass is disallowed on these methods in hbase2.
+
+
+---
+
+* [HBASE-19200](https://issues.apache.org/jira/browse/HBASE-19200) | *Major* | 
**make hbase-client only depend on ZKAsyncRegistry and ZNodePaths**
+
+ConnectionImplementation now uses asynchronous connections to zookeeper via ZKAsyncRegistry to get the cluster id, master address, meta region location, etc.
+Since ZKAsyncRegistry uses the curator framework, this change purges a lot of zookeeper dependencies from hbase-client.
+Now hbase-client depends only on ZKAsyncRegistry, ZNodePaths and the newly introduced ZKMetadata.
+
+
+---
+
+* [HBASE-19311](https://issues.apache.org/jira/browse/HBASE-19311) | *Major* | 
**Promote TestAcidGuarantees to LargeTests and start mini cluster once to make 
it faster**
+
+Introduce an AcidGuaranteesTestTool and expose it as a tool instead of TestAcidGuarantees. Now TestAcidGuarantees is just a UT.
+
+
+---
+
+* [HBASE-19293](https://issues.apache.org/jira/browse/HBASE-19293) | *Major* | 
**Support adding a new replication peer in disabled state**
+
+Add a boolean parameter to Admin/AsyncAdmin's addReplicationPeer method indicating whether the new replication peer's state is enabled or disabled. Meanwhile, you can use the shell cmd to add an enabled/disabled replication peer. The STATE parameter is optional and the default state is enabled.
+
+hbase\> add\_peer '1', CLUSTER\_KEY =\> "server1.cie.com:2181:/hbase", STATE 
=\> "ENABLED"
+hbase\> add\_peer '1', CLUSTER\_KEY =\> "server1.cie.com:2181:/hbase", STATE 
=\> "DISABLED"
+
+
+---
+
+* [HBASE-19123](https://issues.apache.org/jira/browse/HBASE-19123) | *Major* | 
**Purge 'complete' support from Coprocesor Observers**
+
+This issue removes the 'complete' facility that was in ObserverContext. It is 
no longer possible for a Coprocessor to cut the chain-of-invocation and insist 
its response prevails.
+
+
+---
+
+* [HBASE-18911](https://issues.apache.org/jira/browse/HBASE-18911) | *Major* | 
**Unify Admin and AsyncAdmin's methods name**
+
+Deprecated 4 methods in the Admin interface.
+Deprecated compactRegionServer(ServerName, boolean). Use compactRegionServer(ServerName) and majorCompactRegionServer(ServerName) instead.
+Deprecated the getRegionLoad(ServerName) method. Use getRegionLoads(ServerName) instead.
+Deprecated the getRegionLoad(ServerName, TableName) method. Use getRegionLoads(ServerName, TableName) instead.
+Deprecated getQuotaRetriever(QuotaFilter). Use getQuota(QuotaFilter) instead.
+
+Add 7 methods to the Admin interface.
+ServerName getMaster();
+Collection\<ServerName\> getBackupMasters();
+Collection\<ServerName\> getRegionServers();
+boolean splitSwitch(boolean enabled, boolean synchronous);
+boolean mergeSwitch(boolean enabled, boolean synchronous);
+boolean isSplitEnabled();
+boolean isMergeEnabled();
+
+
+---
+
+* [HBASE-18703](https://issues.apache.org/jira/browse/HBASE-18703) | 
*Critical* | **Inconsistent behavior for preBatchMutate in doMiniBatchMutate 
and processRowsWithLocks**
+
+Two write paths Region.batchMutate() and Region.mutateRows() are unified and 
inconsistencies are resolved.
+
+
+---
+
+* [HBASE-18964](https://issues.apache.org/jira/browse/HBASE-18964) | *Major* | 
**Deprecate RowProcessor and processRowsWithLocks() APIs that take RowProcessor 
as an argument**
+
+RowProcessor and Region#processRowsWithLocks() methods that take RowProcessor 
as an argument are deprecated. Use Coprocessors if you want to customize 
handling.
+
+
+---
+
+* [HBASE-19251](https://issues.apache.org/jira/browse/HBASE-19251) | *Major* | 
**Merge RawAsyncTable and AsyncTable**
+
+Merge the RawAsyncTable and AsyncTable interfaces. Use generics to reflect the difference in the observer-style scan API. For the implementation which does not have a user-specified thread pool, the observer is AdvancedScanResultConsumer. For the implementation which needs a user-specified thread pool, the observer is ScanResultConsumer.
+
+
+---
+
+* [HBASE-19262](https://issues.apache.org/jira/browse/HBASE-19262) | *Major* | 
**Revisit checkstyle rules**
+
+Change the import order rule: now we should put shaded imports at the bottom. Ignore VisibilityModifier warnings for test code.
+
+
+---
+
+* [HBASE-19187](https://issues.apache.org/jira/browse/HBASE-19187) | *Minor* | 
**Remove option to create on heap bucket cache**
+
+Removing the on-heap bucket cache feature.
+The config "hbase.bucketcache.ioengine" no longer supports the 'heap' value.
+Its supported values now are 'offheap', 'file:\<path\>', 'files:\<path\>' and 'mmap:\<path\>'.
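+
+For example, to use a file-backed bucket cache (the path and size below are illustrative; hbase.bucketcache.size here is a cache size in MB):
+{code}
+\<property\>
+\<name\>hbase.bucketcache.ioengine\</name\>
+\<value\>file:/mnt/bucketcache/cache.data\</value\>
+\</property\>
+\<property\>
+\<name\>hbase.bucketcache.size\</name\>
+\<value\>8192\</value\>
+\</property\>
+{code}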
+
+
+---
+
+* [HBASE-12350](https://issues.apache.org/jira/browse/HBASE-12350) | *Minor* | 
**Backport error-prone build support to branch-1 and branch-2**
+
+This change introduces compile time support for running the error-prone suite 
of static analyses. Enable with -PerrorProne on the Maven command line. 
Requires JDK 8 or higher. (Don't enable if building with JDK 7.)
+
+
+---
+
+* [HBASE-14350](https://issues.apache.org/jira/browse/HBASE-14350) | *Blocker* 
| **Procedure V2 Phase 2: Assignment Manager**
+
+(Incomplete)
+
+= Incompatibles
+
+== Coprocessor Incompatibilities
+
+Split/Merge have moved to the Master; it runs them now. This means hooks around Split/Merge are now no-ops. To intercept Split/Merge phases, CPs need to intercept on MasterObserver.
+
+
+---
+
+* [HBASE-19189](https://issues.apache.org/jira/browse/HBASE-19189) | *Major* | 
**Ad-hoc test job for running a subset of tests lots of times**
+
+<!-- markdown -->
+
+Folks can now test out tests on an arbitrary release branch. Head over to 
[builds.a.o job 
"HBase-adhoc-run-tests"](https://builds.apache.org/view/H-L/view/HBase/job/HBase-adhoc-run-tests/),
 then pick "Build with parameters".
+Tests are specified as just names, e.g. TestLogRollingNoCluster. A name can also be a glob, e.g. TestHFile\*
+
+
+---
+
+* [HBASE-19220](https://issues.apache.org/jira/browse/HBASE-19220) | *Major* | 
**Async tests time out talking to zk; 'clusterid came back null'**
+
+Changed retries from 3 to 30 for zk initial connect for registry.
+
+
+---
+
+* [HBASE-19002](https://issues.apache.org/jira/browse/HBASE-19002) | *Minor* | 
**Introduce more examples to show how to intercept normal region operations**
+
+With the change in Coprocessor APIs, the hbase-examples module has been 
updated to provide additional examples that show how to write Coprocessors 
against the new API.
+
+
+---
+
+* [HBASE-18961](https://issues.apache.org/jira/browse/HBASE-18961) | *Major* | 
**doMiniBatchMutate() is big, split it into smaller methods**
+
+HRegion.batchMutate()/doMiniBatchMutate() is refactored with the aim of unifying the batchMutate() and mutateRows() code paths later. batchMutate() currently handles 2 types of batches: MutationBatchOperations and ReplayBatchOperations. The common base class BatchOperations is augmented with common methods which are overridden in derived classes as needed. doMiniBatchMutate() is implemented using the common methods in the base class BatchOperations.
+
+
+---
+
+* [HBASE-19103](https://issues.apache.org/jira/browse/HBASE-19103) | *Minor* | 
**Add BigDecimalComparator for filter**
+
+If a BigDecimal is stored as a value and you need a matching comparator for the value filter when scanning, a BigDecimalComparator can be used.
+
+
+---
+
+* [HBASE-19111](https://issues.apache.org/jira/browse/HBASE-19111) | 
*Critical* | **Add missing CellUtil#isPut(Cell) methods**
+
+A new public API method was added to CellUtil "isPut(Cell)" for clients to use 
to determine if the Cell is for a Put operation.
+
+Additionally, other CellUtil API calls which expose Cell implementation details were marked as deprecated and will be removed in a future version.
+
+
+---
+
+* [HBASE-19160](https://issues.apache.org/jira/browse/HBASE-19160) | 
*Critical* | **Re-expose CellComparator**
+
+CellComparator is now InterfaceAudience.Public
+
+
+---
+
+* [HBASE-19131](https://issues.apache.org/jira/browse/HBASE-19131) | *Major* | 
**Add the ClusterStatus hook and cleanup other hooks which can be replaced by 
ClusterStatus hook**
+
+1) Add preGetClusterStatus() and postGetClusterStatus() hooks
+2) add preGetClusterStatus() to access control check - an admin action
+
+
+---
+
+* [HBASE-19095](https://issues.apache.org/jira/browse/HBASE-19095) | *Major* | 
**Add CP hooks in RegionObserver for in memory compaction**
+
+Add 4 methods in RegionObserver:
+preMemStoreCompaction
+preMemStoreCompactionCompactScannerOpen
+preMemStoreCompactionCompact
+postMemStoreCompaction
+preMemStoreCompaction and postMemStoreCompaction will always be called for all in-memory compactions. Under eager mode, preMemStoreCompactionCompactScannerOpen will be called before opening the store scanner to allow you to change the max versions and TTL, and preMemStoreCompactionCompact will be called after the creation to let you do wrapping.
+
+
+---
+
+* [HBASE-19152](https://issues.apache.org/jira/browse/HBASE-19152) | *Trivial* 
| **Update refguide 'how to build an RC' and the make\_rc.sh script**
+
+The make\_rc.sh script can now run an hbase2 build, generating tarballs and pushing them up to the maven repository. TODO: sign and checksum, check the tarball, push to apache dist...
+
+
+---
+
+* [HBASE-19179](https://issues.apache.org/jira/browse/HBASE-19179) | 
*Critical* | **Remove hbase-prefix-tree**
+
+Purged the hbase-prefix-tree module and all references from the code base.
+
+prefix-tree data block encoding was a super cool experimental feature that saw 
some usage initially but has since languished. If interested in carrying this 
sweet facility forward, write the dev list and we'll restore this module.
+
+
+---
+
+* [HBASE-19176](https://issues.apache.org/jira/browse/HBASE-19176) | *Major* | 
**Remove hbase-native-client from branch-2**
+
+Removed the hbase-native-client module from branch-2 (it is still in Master). 
It is not complete. Look for a finished C++ client in the near future. Will 
restore native client to branch-2 at that point.
+
+
+---
+
+* [HBASE-19144](https://issues.apache.org/jira/browse/HBASE-19144) | *Major* | 
**[RSgroups] Retry assignments in FAILED\_OPEN state when servers (re)join the 
cluster**
+
+When regionserver placement groups (RSGroups) is active, as servers join the 
cluster the Master will attempt to reassign regions in FAILED\_OPEN state.
+
+
+---
+
+* [HBASE-18770](https://issues.apache.org/jira/browse/HBASE-18770) | 
*Critical* | **Remove bypass method in ObserverContext and implement the 
'bypass' logic case by case**
+
+Removes blanket bypass mechanism (Observer#bypass). Instead, a curated subset 
of methods are bypassable.
+
+    Changes Coprocessor ObserverContext 'bypass' semantic. We flip the
+    default so bypass is NOT supported on Observer invocations; only a
+    couple of preXXX methods in RegionObserver allow it: e.g.  preGet
+    and prePut but not preFlush, etc. Everywhere else, we throw
+    a Exception if a Coprocessor Observer tries to invoke bypass. Master
+    Observers can no longer stop or change move, split, assign, create table, 
etc.
+    preBatchMutate can no longer be bypassed (bypass the finer-grained
+    prePut, preDelete, etc. instead)
+
+    Ditto on complete, the mechanism that allowed a Coprocessor
+    rule that all subsequent Coprocessors are skipped in an
+    invocation chain; now, complete is only available to
+    bypassable methods (and Coprocessors will get an exception if
+    they try to 'complete' when it is not allowed).
+
+    See javadoc for whether a Coprocessor Observer method supports
+    'bypass'. If no mention, 'bypass' is NOT supported.
+
+The below methods have been marked deprecated in hbase2. We would have liked 
to have removed them because they use IA.Private parameters but they are in use 
by CoreCoprocessors or are critical to downstreamers and we have no 
alternatives to provide currently.
+
+@Deprecated public boolean prePrepareTimeStampForDeleteVersion(final Mutation 
mutation, final Cell kv, final byte[] byteNow, final Get get) throws 
IOException {
+
+@Deprecated public boolean preWALRestore(final RegionInfo info, final WALKey 
logKey, final WALEdit logEdit) throws IOException {
+
+@Deprecated public void postWALRestore(final RegionInfo info, final WALKey 
logKey, final WALEdit logEdit) throws IOException {
+
+@Deprecated public DeleteTracker postInstantiateDeleteTracker(DeleteTracker 
result) throws IOException
+
+Metrics are updated now even if the Coprocessor does a bypass; e.g. The put 
count is updated even if a Coprocessor bypasses the core put operation (We do 
it this way so no need for Coprocessors to have access to our core metrics 
system).
+
+
+---
+
+* [HBASE-19033](https://issues.apache.org/jira/browse/HBASE-19033) | *Blocker* 
| **Allow CP users to change versions and TTL before opening StoreScanner**
+
+Add back the three methods without a return value:
+preFlushScannerOpen
+preCompactScannerOpen
+preStoreScannerOpen
+
+Introduce a ScanOptions interface to let CP users change the max versions and 
TTL of a ScanInfo. It will be passed as a parameter in the three methods above.
+
+Introduce a new example, WriteHeavyIncrementObserver, which converts increments to puts and aggregates on get. It uses the above three methods.
+
+
+---
+
+* [HBASE-19110](https://issues.apache.org/jira/browse/HBASE-19110) | *Minor* | 
**Add default for Server#isStopping & #getFileSystem**
+
+Added default implementations for Server#isStopping and Server#getFileSystem. Should have been done when they were added (lesson learned; it was actually mentioned in a review).
+
+
+---
+
+* [HBASE-19047](https://issues.apache.org/jira/browse/HBASE-19047) | 
*Critical* | **CP exposed Scanner types should not extend Shipper**
+
+RegionObserver#preScannerOpen signature changed
+RegionScanner preScannerOpen( ObserverContext\<RegionCoprocessorEnvironment\> 
c, Scan scan,  RegionScanner s)   -\>   void preScannerOpen( 
ObserverContext\<RegionCoprocessorEnvironment\> c, Scan scan)
+The pre hook can no longer return a RegionScanner instance.
+
+
+---
+
+* [HBASE-18995](https://issues.apache.org/jira/browse/HBASE-18995) | 
*Critical* | **Move methods that are for internal usage from CellUtil to 
Private util class**
+
+Split CellUtil into a public CellUtil and a PrivateCellUtil for internal use only.
+
+
+---
+
+* [HBASE-18906](https://issues.apache.org/jira/browse/HBASE-18906) | 
*Critical* | **Provide Region#waitForFlushes API**
+
+Provided an API in Region (exposed to CPs):
+boolean waitForFlushes(long timeout)
+This call makes the current thread wait for all flushes in this region to finish, up to the specified timeout. The boolean return value indicates whether the flushes really finished or the timeout elapsed: it returns true when the flushes are over, and false when the timeout elapsed while flushes were still in progress.
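
The return-value contract can be illustrated with a plain-Java analogue (a hypothetical class, not HBase code) built on CountDownLatch, whose timed await has the same true-on-completion, false-on-timeout semantics:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical analogue of Region#waitForFlushes(timeout): true means all
// outstanding flushes finished within the timeout; false means the timeout
// elapsed while flushes were still in progress.
class FlushWaitSketch {
  private final CountDownLatch outstandingFlushes;

  FlushWaitSketch(int inFlightFlushCount) {
    this.outstandingFlushes = new CountDownLatch(inFlightFlushCount);
  }

  void flushFinished() {
    outstandingFlushes.countDown();
  }

  boolean waitForFlushes(long timeoutMs) throws InterruptedException {
    return outstandingFlushes.await(timeoutMs, TimeUnit.MILLISECONDS);
  }
}
```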
+
+
+---
+
+* [HBASE-18905](https://issues.apache.org/jira/browse/HBASE-18905) | *Major* | 
**Allow CPs to request flush on Region and know the completion of the requested 
flush**
+
+Add a FlushLifeCycleTracker, similar to CompactionLifeCycleTracker, for tracking flushes.
+Add a requestFlush method to the Region interface to let CP users request a flush on a region. The operation is asynchronous; use the FlushLifeCycleTracker to track the flush.
+The difference from CompactionLifeCycleTracker is that a flush is per region, so we do not use Store as a parameter of the methods. Also, notExecuted means the whole flush was not executed, and afterExecution means the whole flush has finished, so there is no separate completed method. A flush ends with either notExecuted or afterExecution.
+
+
+---
+
+* [HBASE-19048](https://issues.apache.org/jira/browse/HBASE-19048) | *Major* | 
**Cleanup MasterObserver hooks which takes IA private params**
+
+Purged InterfaceAudience.Private parameters from methods in MasterObserver.
+
+preAbortProcedure no longer takes a ProcedureExecutor.
+
+postGetProcedures no longer takes a list of Procedures.
+
+postGetLocks no longer takes a list of locks.
+
+preRequestLock and postRequestLock no longer take lock type.
+
+preLockHeartbeat and postLockHeartbeat no longer take a lock procedure.
+
+The implication is that Coprocessors that depended on these params have had to coarsen their checks; for example, the AccessController can no longer control access per Procedure or Lock but instead makes a judgement on general access (you'll need to be ADMIN to see the list of procedures and locks).
+
+
+---
+
+* [HBASE-18994](https://issues.apache.org/jira/browse/HBASE-18994) | *Major* | 
**Decide if META/System tables should use Compacting Memstore or Default 
Memstore**
+
+Added a new config 'hbase.systemtables.compacting.memstore.type' for the system tables. By default all the system tables will have 'NONE' as the type and so will use the default memstore.
+{code}
+ \<property\>
+    \<name\>hbase.systemtables.compacting.memstore.type\</name\>
+    \<value\>NONE\</value\>
+  \</property\>
+{code}
+
+
+---
+
+* [HBASE-19029](https://issues.apache.org/jira/browse/HBASE-19029) | 
*Critical* | **Align RPC timeout methods in Table and AsyncTableBase**
+
+Deprecate the following methods in Table:
+- int getRpcTimeout()
+- int getReadRpcTimeout()
+- int getWriteRpcTimeout()
+- int getOperationTimeout()
+
+Add the following methods to Table:
+- long getRpcTimeout(TimeUnit)
+- long getReadRpcTimeout(TimeUnit)
+- long getWriteRpcTimeout(TimeUnit)
+- long getOperationTimeout(TimeUnit)
+
+Add missing deprecation tag for long getRpcTimeout(TimeUnit unit) in AsyncTableBase.
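
The shape of the new accessors can be sketched as follows (a minimal illustration assuming the timeout is stored internally in milliseconds; not the actual Table implementation):

```java
import java.util.concurrent.TimeUnit;

// Minimal sketch of the deprecated int-millis accessor next to the new
// TimeUnit-parameterised one, assuming an internal value in milliseconds.
class TimeoutSketch {
  private final long rpcTimeoutMs;

  TimeoutSketch(long rpcTimeoutMs) {
    this.rpcTimeoutMs = rpcTimeoutMs;
  }

  // old style: implicitly milliseconds, limited to the int range
  @Deprecated
  int getRpcTimeout() {
    return (int) Math.min(Integer.MAX_VALUE, rpcTimeoutMs);
  }

  // new style: the caller chooses the unit explicitly
  long getRpcTimeout(TimeUnit unit) {
    return unit.convert(rpcTimeoutMs, TimeUnit.MILLISECONDS);
  }
}
```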
+
+
+---
+
+* [HBASE-18410](https://issues.apache.org/jira/browse/HBASE-18410) | *Major* | 
**FilterList  Improvement.**
+
+In this task, we fixed all existing bugs in FilterList and refactored the code while ensuring interface compatibility.
+
+The primary bug fixes are:
+1. For a sub-filter in a FilterList with MUST\_PASS\_ONE, if the previous filterKeyValue() of a sub-filter returned NEXT\_COL, we cannot be sure that the next cell will be the first cell in the next column, because FilterList chooses the minimal forward step among sub-filters and may return a SKIP. So we add an extra check to ensure that the next cell matches the previous return code for each sub-filter.
+2. The previous logic for transforming cells in FilterList was incorrect: we should set the previous transform result (rather than the given cell in question) as the initial value of the transform cell before calling filterKeyValue() of FilterList.
+3. Handle the ReturnCodes which the previous code did not handle.
+
+About the code refactor, we divided FilterList into two separate sub-classes: FilterListWithOR and FilterListWithAND. FilterListWithOR has been optimised to choose the next minimal step to seek to rather than SKIPping cells one by one, and FilterListWithAND has been optimised to choose the next maximal key to seek to among the sub-filters in the filter list. All in all, the code in FilterList is cleaner and easier to follow now.
+
+Note that ReturnCode NEXT\_ROW has been redefined as skipping to the next row in the current family, not the next row across all families. This is more reasonable, because ReturnCode is a concept at the store level, not the region level.
+
+Another bug that needs attention: filterAllRemaining() in a FilterList with MUST\_PASS\_ONE will now return false if the filter list is empty, whereas it used to return true for Operator.MUST\_PASS\_ONE. This is more reasonable now.
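
The minimal-forward-step rule for MUST\_PASS\_ONE described above can be illustrated with a small stand-alone sketch (the enum below only loosely mirrors HBase's Filter.ReturnCode; the real FilterListWithOR logic handles more cases):

```java
import java.util.List;

// Rough illustration of the MUST_PASS_ONE merge rule: among the sub-filter
// return codes, the code demanding the smallest forward step wins, so a
// NEXT_COL from one sub-filter can be overridden by another's SKIP.
class OrMergeSketch {
  // ordered from smallest forward step to largest; loosely mirrors
  // HBase's Filter.ReturnCode, for illustration only
  enum Code { INCLUDE, SKIP, NEXT_COL, NEXT_ROW }

  static Code mergeOr(List<Code> subFilterCodes) {
    Code min = Code.NEXT_ROW;
    for (Code c : subFilterCodes) {
      if (c.ordinal() < min.ordinal()) {
        min = c;
      }
    }
    return min;
  }
}
```

This is why bug 1 above needed an extra per-sub-filter check: after a merged SKIP, the sub-filter that asked for NEXT\_COL may still be handed cells from the column it wanted to skip.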
+
+
+---
+
+* [HBASE-19077](https://issues.apache.org/jira/browse/HBASE-19077) | 
*Critical* | **Have Region\*CoprocessorEnvironment provide an 
ImmutableOnlineRegions**
+
+Adds getOnlineRegions to the RegionCoprocessorEnvironment (Context) and ditto to RegionServerCoprocessorEnvironment. Allows a Coprocessor to get the list of Regions online on the currently hosting RegionServer.
+
+
+---
+
+* [HBASE-19021](https://issues.apache.org/jira/browse/HBASE-19021) | 
*Critical* | **Restore a few important missing logics for balancer in 2.0**
+
+Re-enabled 'hbase.master.loadbalance.bytable', default 'false'.
+Draining servers are removed from consideration by the balancer.balanceCluster() call.
+
+
+---
+
+* [HBASE-19049](https://issues.apache.org/jira/browse/HBASE-19049) | *Major* | 
**Update kerby to 1.0.1 GA release**
+
+HBase now relies on Kerby version 1.0.1 for its test environment. No 
downstream facing change is expected.
+
+
+---
+
+* [HBASE-16290](https://issues.apache.org/jira/browse/HBASE-16290) | *Major* | 
**Dump summary of callQueue content; can help debugging**
+
+Patch to print a summary of call queues by size and count. This is displayed on the debug dump page of the region server UI.
+
+
+---
+
+* [HBASE-18846](https://issues.apache.org/jira/browse/HBASE-18846) | *Major* | 
**Accommodate the hbase-indexer/lily/SEP consumer deploy-type**
+
+Makes it so hbase-indexer/lily can move off dependence on internal APIs and 
instead move to public APIs.
+
+Adds being able to disable near-all HRegionServer services. This along with an 
existing plugin mechanism which allows configuring the RegionServer to host an 
alternate Connection implementation, makes it so we can put up a cluster of 
hollowed-out HRegionServers purposed to pose as a Replication Sink for a source 
HBase Cluster (Users do not need to figure out our RPC, our PB encodings, build a 
distributed service, etc.). In the alternate supplied Connection 
implementation, hbase-indexer would install its own code to catch the 
Replication.
+
+Below and attached are sample hbase-server.xml files and alternate Connection 
implementations. To start up an HRegionServer as a sink, first make sure there 
is a ZooKeeper ensemble we can talk to. If none, just start one:
+{code}
+./bin/hbase-daemon.sh start zookeeper
+{code}
+
+To start up a single RegionServer, put in place the below sample 
hbase-site.xml and a derivative of the below IndexerConnection on the 
CLASSPATH, and then start the RegionServer:
+{code}
+./bin/hbase-daemon.sh  start  
org.apache.hadoop.hbase.regionserver.HRegionServer
+{code}
+Stdout and Stderr will go into files under configured logs directory. Browse 
to localhost:16030 to find webui (unless disabled).
+
+DETAILS
+
+This patch adds configuration to disable RegionServer internal Services, 
Managers, Caches, etc., starting up.
+
+By default a RegionServer starts up an Admin and Client Service. To disable 
either or both, use the below booleans:
+{code}
+hbase.regionserver.admin.service
+hbase.regionserver.client.service
+{code}
+
+Both default true.
+
+To make a HRegionServer startup and stay up without expecting to communicate 
with a master, set the below boolean to false:
+
+{code}
+hbase.masterless
+{code}
+Default is false.
+
+h3. Sample hbase-site.xml that disables internal HRegionServer Services
+Below is an example hbase-site.xml that turns off most Services and that then 
installs an alternate Connection implementation, one that is nulled out in all 
regards except in being able to return a "Table" that can catch a Replication 
Stream in its {code}batch(List\<? extends Row\> actions, Object[] 
results){code} method. i.e. what the hbase-indexer wants. I also add the 
example alternate Connection implementation below (both of these files are also 
attached to this issue). Expects there to be an up and running zookeeper 
ensemble.
+
+{code}
+\<configuration\>
+  \<!-- This file is an example for hbase-indexer. It shuts down
+       facility in the regionserver and interjects a special
+       Connection implementation which is how hbase-indexer will
+       receive the replication stream from source hbase cluster.
+       See the class referenced in the config.
+
+       Most of the config in here is booleans set to off and
+       setting values to zero so services don't start. Some of
+       the flags are new via this patch.
+--\>
+  \<!--Need this for the RegionServer to come up standalone--\>
+  \<property\>
+    \<name\>hbase.cluster.distributed\</name\>
+    \<value\>true\</value\>
+  \</property\>
+
+  \<!--This is what you implement, a Connection that returns a Table that
+       overrides the batch call. It is at this point you do your indexer 
inserts.
+    --\>
+  \<property\>
+    \<name\>hbase.client.connection.impl\</name\>
+    \<value\>org.apache.hadoop.hbase.client.IndexerConnection\</value\>
+    \<description\>A custom connection implementation just so we can 
interject our
+      own Table class, one that has an override for the batch call which 
receives
+      the replication stream edits; i.e. it is called by the replication sink
+      #replicateEntries method.\</description\>
+  \</property\>
+
+  \<!--Set hbase.regionserver.info.port to -1 for no webui--\>
+
+  \<!--Below are configs to shut down unused services in hregionserver--\>
+  \<property\>
+    \<name\>hbase.regionserver.admin.service\</name\>
+    \<value\>false\</value\>
+    \<description\>Do NOT stand up an Admin Service Interface on 
RPC\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.regionserver.client.service\</name\>
+    \<value\>false\</value\>
+    \<description\>Do NOT stand up a client-facing Service on 
RPC\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.wal.provider\</name\>
+    \<value\>org.apache.hadoop.hbase.wal.DisabledWALProvider\</value\>
+    \<description\>Set WAL service to be the null WAL\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.regionserver.workers\</name\>
+    \<value\>false\</value\>
+    \<description\>Turn off all background workers, log splitters, executors, 
etc.\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hfile.block.cache.size\</name\>
+    \<value\>0.0001\</value\>
+    \<description\>Turn off block cache completely\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.mob.file.cache.size\</name\>
+    \<value\>0\</value\>
+    \<description\>Disable MOB cache.\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.masterless\</name\>
+    \<value\>true\</value\>
+    \<description\>Do not expect Master in cluster.\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.regionserver.metahandler.count\</name\>
+    \<value\>1\</value\>
+    \<description\>How many priority handlers to run; we probably need none.
+    Default is 20 which is too much on a server like this.\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.regionserver.replication.handler.count\</name\>
+    \<value\>1\</value\>
+    \<description\>How many replication handlers to run; we probably need none.
+    Default is 3 which is too much on a server like this.\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.regionserver.handler.count\</name\>
+    \<value\>10\</value\>
+    \<description\>How many default handlers to run; tie to # of CPUs.
+    Default is 30 which is too much on a server like this.\</description\>
+  \</property\>
+  \<property\>
+    \<name\>hbase.ipc.server.read.threadpool.size\</name\>
+    \<value\>3\</value\>
+    \<description\>How many Listener request readers to run; tie to a portion 
# of CPUs (1/4?).
+    Default is 10 which is too much on a server like this.\</description\>
+  \</property\>
+\</configuration\>
+{code}
+
+h2. Sample Connection Implementation
+Has call-out for where an hbase-indexer would insert its capture code.
+{code}
+package org.apache.hadoop.hbase.client;
+
+import com.google.protobuf.Descriptors;
+import com.google.protobuf.Message;
+import com.google.protobuf.Service;
+import com.google.protobuf.ServiceException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.CompareOperator;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.coprocessor.Batch;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
+import org.apache.hadoop.hbase.security.User;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+
+
+/\*\*
+ \* Sample class for hbase-indexer.
+ \* DO NOT COMMIT TO HBASE CODEBASE!!!
+ \* Overrides Connection just so we can return a Table that has the
+ \* method that the replication sink calls, i.e. Table#batch.
+ \* It is at this point that the hbase-indexer catches the replication
+ \* stream so it can insert into the lucene index.
+ \*/
+public class IndexerConnection implements Connection {
+  private final Configuration conf;
+  private final User user;
+  private final ExecutorService pool;
+  private volatile boolean closed = false;
+
+  public IndexerConnection(Configuration conf, ExecutorService pool, User 
user) throws IOException {
+    this.conf = conf;
+    this.user = user;
+    this.pool = pool;
+  }
+
+  @Override
+  public void abort(String why, Throwable e) {}
+
+  @Override
+  public boolean isAborted() {
+    return false;
+  }
+
+  @Override
+  public Configuration getConfiguration() {
+    return this.conf;
+  }
+
+  @Override
+  public BufferedMutator getBufferedMutator(TableName tableName) throws 
IOException {
+    return null;
+  }
+
+  @Override
+  public BufferedMutator getBufferedMutator(BufferedMutatorParams params) 
throws IOException {
+    return null;
+  }
+
+  @Override
+  public RegionLocator getRegionLocator(TableName tableName) throws 
IOException {
+    return null;
+  }
+
+  @Override
+  public Admin getAdmin() throws IOException {
+    return null;
+  }
+
+  @Override
+  public void close() throws IOException {
+    if (!this.closed) this.closed = true;
+  }
+
+  @Override
+  public boolean isClosed() {
+    return this.closed;
+  }
+
+  @Override
+  public TableBuilder getTableBuilder(final TableName tn, ExecutorService 
pool) {
+    if (isClosed()) {
+      throw new RuntimeException("IndexerConnection is closed.");
+    }
+    final Configuration passedInConfiguration = getConfiguration();
+    return new TableBuilder() {
+      @Override
+      public TableBuilder setOperationTimeout(int timeout) {
+        return null;
+      }
+
+      @Override
+      public TableBuilder setRpcTimeout(int timeout) {
+        return null;
+      }
+
+      @Override
+      public TableBuilder setReadRpcTimeout(int timeout) {
+        return null;
+      }
+
+      @Override
+      public TableBuilder setWriteRpcTimeout(int timeout) {
+        return null;
+      }
+
+      @Override
+      public Table build() {
+        return new Table() {
+          private final Configuration conf = passedInConfiguration;
+          private final TableName tableName = tn;
+
+          @Override
+          public TableName getName() {
+            return this.tableName;
+          }
+
+          @Override
+          public Configuration getConfiguration() {
+            return this.conf;
+          }
+
+          @Override
+          public void batch(List\<? extends Row\> actions, Object[] results)
+          throws IOException, InterruptedException {
+            // Implementation goes here.
+          }
+
+          @Override
+          public HTableDescriptor getTableDescriptor() throws IOException {
+            return null;
+          }
+
+          @Override
+          public TableDescriptor getDescriptor() throws IOException {
+            return null;
+          }
+
+          @Override
+          public boolean exists(Get get) throws IOException {
+            return false;
+          }
+
+          @Override
+          public boolean[] existsAll(List\<Get\> gets) throws IOException {
+            return new boolean[0];
+          }
+
+          @Override
+          public \<R\> void batchCallback(List\<? extends Row\> actions, 
Object[] results, Batch.Callback\<R\> callback) throws IOException, 
InterruptedException {
+
+          }
+
+          @Override
+          public Result get(Get get) throws IOException {
+            return null;
+          }
+
+          @Override
+          public Result[] get(List\<Get\> gets) throws IOException {
+            return new Result[0];
+          }
+
+          @Override
+          public ResultScanner getScanner(Scan scan) throws IOException {
+            return null;
+          }
+
+          @Override
+          public ResultScanner getScanner(byte[] family) throws IOException {
+            return null;
+          }
+
+          @Override
+          public ResultScanner getScanner(byte[] family, byte[] qualifier) 
throws IOException {
+            return null;
+          }
+
+          @Override
+          public void put(Put put) throws IOException {
+
+          }
+
+          @Override
+          public void put(List\<Put\> puts) throws IOException {
+
+          }
+
+          @Override
+          public boolean checkAndPut(byte[] row, byte[] family, byte[] 
qualifier, byte[] value, Put put) throws IOException {
+            return false;
+          }
+
+          @Override
+          public boolean checkAndPut(byte[] row, byte[] family, byte[] 
qualifier, CompareFilter.CompareOp compareOp, byte[] value, Put put) throws 
IOException {
+            return false;
+          }
+
+          @Override
+          public boolean checkAndPut(byte[] row, byte[] family, byte[] 
qualifier, CompareOperator op, byte[] value, Put put) throws IOException {
+            return false;
+          }
+
+          @Override
+          public void delete(Delete delete) throws IOException {
+
+          }
+
+          @Override
+          public void delete(List\<Delete\> deletes) throws IOException {
+
+          }
+
+          @Override
+          public boolean checkAndDelete(byte[] row, byte[] family, byte[] 
qualifier, byte[] value, Delete delete) throws IOException {
+            return false;
+          }
+
+          @Override
+          public boolean checkAndDelete(byte[] row, byte[] family, byte[] 
qualifier, CompareFilter.CompareOp compareOp, byte[] value, Delete delete) 
throws IOException {
+            return false;
+          }
+
+          @Override
+          public boolean checkAndDelete(byte[] row, byte[] family, byte[] 
qualifier, CompareOperator op, byte[] value, Delete delete) throws IOException {
+            return false;
+          }
+
+          @Override
+          public void mutateRow(RowMutations rm) throws IOException {
+
+          }
+
+          @Override
+          public Result append(Append append) throws IOException {
+            return null;
+          }
+
+          @Override
+          public Result increment(Increment increment) throws IOException {
+            return null;
+          }
+
+          @Override
+          public long incrementColumnValue(byte[] row, byte[] family, byte[] 
qualifier, long amount) throws IOException {
+            return 0;
+          }
+
+          @Override
+          public long incrementColumnValue(byte[] row, byte[] family, byte[] 
qualifier, long amount, Durability durability) throws IOException {
+            return 0;
+          }
+
+          @Override
+          public void close() throws IOException {
+
+          }
+
+          @Override
+          public CoprocessorRpcChannel coprocessorService(byte[] row) {
+            return null;
+          }
+
+          @Override
+          public \<T extends Service, R\> Map\<byte[], R\> 
coprocessorService(Class\<T\> service, byte[] startKey, byte[] endKey, 
Batch.Call\<T, R\> callable) throws ServiceException, Throwable {
+            return null;
+          }
+
+          @Override
+          public \<T extends Service, R\> void coprocessorService(Class\<T\> 
service, byte[] startKey, byte[] endKey, Batch.Call\<T, R\> callable, 
Batch.Callback\<R\> callback) throws ServiceException, Throwable {
+
+          }
+
+          @Override
+          public \<R extends Message\> Map\<byte[], R\> 
batchCoprocessorService(Descriptors.MethodDescriptor methodDescriptor, Message 
request, byte[] startKey, byte[] endKey, R responsePrototype) throws 
ServiceException, Throwable {
+            return null;
+          }
+
+          @Override
+          public \<R extends Message\> void 
batchCoprocessorService(Descriptors.MethodDescriptor methodDescriptor, Message 
request, byte[] startKey, byte[] endKey, R responsePrototype, 
Batch.Callback\<R\> callback) throws ServiceException, Throwable {
+
+          }
+
+          @Override
+          public boolean checkAndMutate(byte[] row, byte[] family, byte[] 
qualifier, CompareFilter.CompareOp compareOp, byte[] value, RowMutations 
mutation) throws IOException {
+            return false;
+          }
+
+          @Override
+          public boolean checkAndMutate(byte[] row, byte[] family, byte[] 
qualifier, CompareOperator op, byte[] value, RowMutations mutation) throws 
IOException {
+            return false;
+          }
+
+          @Override
+          public void setOperationTimeout(int operationTimeout) {
+
+          }
+
+          @Override
+          public int getOperationTimeout() {
+            return 0;
+          }
+
+          @Override
+          public int getRpcTimeout() {
+            return 0;
+          }
+
+          @Override
+          public void setRpcTimeout(int rpcTimeout) {
+
+          }
+
+          @Override
+          public int getReadRpcTimeout() {
+            return 0;
+          }
+
+          @Override
+          public void setReadRpcTimeout(int readRpcTimeout) {
+
+          }
+
+          @Override
+          public int getWriteRpcTimeout() {
+            return 0;
+          }
+
+          @Override
+          public void setWriteRpcTimeout(int writeRpcTimeout) {
+
+          }
+        };
+      }
+    };
+  }
+}
+{code}
+
+
+---
+
+* [HBASE-18873](https://issues.apache.org/jira/browse/HBASE-18873) | 
*Critical* | **Hide protobufs in GlobalQuotaSettings**
+
+GlobalQuotaSettings was introduced to prevent protocol-specific Java classes from leaking into the API which users may leverage. This class has a number of methods which return plain Java objects instead of these protocol-specific classes, in an effort to provide better stability in the future.
+
+
+---
+
+* [HBASE-18893](https://issues.apache.org/jira/browse/HBASE-18893) | *Major* | 
**Remove Add/Modify/DeleteColumnFamilyProcedure in favor of using 
ModifyTableProcedure**
+
+The RPC calls for Add/Modify/DeleteColumn have been removed and are now backed 
by ModifyTable functionality. The corresponding permissions in AccessController 
have been removed as well.
+
+The shell already bypassed these RPCs and used ModifyTable directly, and thus would not be getting these permission checks; this change brings the rest of the RPCs in line with that.
+
+Coprocessor hooks for pre/post Add/Modify/DeleteColumn have likewise been 
removed. Coprocessors needing to take special actions on schema change should 
instead process ModifyTable events (which they should have been doing already, 
but it was easy for developers to miss this nuance).
+
+
+---
+
+* [HBASE-16338](https://issues.apache.org/jira/browse/HBASE-16338) | *Major* | 
**update jackson to 2.y**
+
+HBase has upgraded from Jackson 1 to Jackson 2. JSON output should not have 
changed and this should not be user facing, but server classpaths should be 
adjusted accordingly.
+
+
+---
+
+* [HBASE-19051](https://issues.apache.org/jira/browse

<TRUNCATED>
