This is an automated email from the ASF dual-hosted git repository.

psomogyi pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
     new 7828d6a  HBASE-22409 update branch-1 ref guide for Hadoop, Java, and 
HBase version support (#239)
7828d6a is described below

commit 7828d6a1066908f15b2ec722003dbef435ce321f
Author: Sean Busbey <[email protected]>
AuthorDate: Sat Jun 29 05:22:54 2019 -0500

    HBASE-22409 update branch-1 ref guide for Hadoop, Java, and HBase version 
support (#239)
    
    Signed-off-by: Andrew Purtell <[email protected]>
    Signed-off-by: Peter Somogyi <[email protected]>
---
 src/main/asciidoc/_chapters/architecture.adoc    |  17 +-
 src/main/asciidoc/_chapters/configuration.adoc   | 301 ++++++++++-------------
 src/main/asciidoc/_chapters/developer.adoc       |   9 +-
 src/main/asciidoc/_chapters/getting_started.adoc |  30 +--
 src/main/asciidoc/_chapters/ops_mgt.adoc         |  42 ++--
 src/main/asciidoc/_chapters/preface.adoc         |  35 +++
 src/main/asciidoc/_chapters/troubleshooting.adoc | 109 +-------
 src/main/asciidoc/_chapters/upgrading.adoc       | 270 +-------------------
 src/main/site/site.xml                           |  17 +-
 9 files changed, 231 insertions(+), 599 deletions(-)

diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index 7309410..5287801 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -1449,10 +1449,19 @@ Alphanumeric Rowkeys::
 Using a Custom Algorithm::
   The RegionSplitter tool is provided with HBase, and uses a _SplitAlgorithm_ 
to determine split points for you.
   As parameters, you give it the algorithm, desired number of regions, and 
column families.
-  It includes two split algorithms.
-  The first is the 
`link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.HexStringSplit.html[HexStringSplit]`
 algorithm, which assumes the row keys are hexadecimal strings.
-  The second, 
`link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.UniformSplit.html[UniformSplit]`,
 assumes the row keys are random byte arrays.
-  You will probably need to develop your own 
`link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.SplitAlgorithm.html[SplitAlgorithm]`,
 using the provided ones as models.
+  It includes three split algorithms.
+  The first is the
+  
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.HexStringSplit.html[HexStringSplit]`
+  algorithm, which assumes the row keys are hexadecimal strings.
+  The second is the
+  
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.DecimalStringSplit.html[DecimalStringSplit]`
+  algorithm, which assumes the row keys are decimal strings in the range 
00000000 to 99999999.
+  The third,
+  
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.UniformSplit.html[UniformSplit]`,
+  assumes the row keys are random byte arrays.
+  You will probably need to develop your own
+  
`link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.SplitAlgorithm.html[SplitAlgorithm]`,
+  using the provided ones as models.
 
 === Online Region Merges
 
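As a usage sketch for the RegionSplitter tool described in the hunk above: the table name, region count, and column family below are placeholders, not values taken from this commit.

[source,bourne]
----
# Pre-split a hypothetical table into 10 regions using the HexStringSplit algorithm;
# -c sets the requested region count and -f names the column family to create.
$ bin/hbase org.apache.hadoop.hbase.util.RegionSplitter test_table HexStringSplit -c 10 -f f1
----
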
diff --git a/src/main/asciidoc/_chapters/configuration.adoc 
b/src/main/asciidoc/_chapters/configuration.adoc
index 5e7d16d..e2013bd 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -28,7 +28,9 @@
 :experimental:
 
 This chapter expands upon the <<getting_started>> chapter to further explain 
configuration of Apache HBase.
-Please read this chapter carefully, especially the <<basic.prerequisites,Basic 
Prerequisites>> to ensure that your HBase testing and deployment goes smoothly, 
and prevent data loss.
+Please read this chapter carefully, especially the <<basic.prerequisites,Basic 
Prerequisites>>
+to ensure that your HBase testing and deployment goes smoothly.
+Familiarize yourself with <<hbase_supported_tested_definitions>> as well.
 
 == Configuration Files
 Apache HBase uses the same configuration system as Apache Hadoop.
@@ -91,54 +93,60 @@ This section lists required services and some required 
system configuration.
 
 [[java]]
 .Java
-[cols="1,1,1,4", options="header"]
+
+The following table summarizes the recommendation of the HBase community wrt 
deploying on various Java versions.
+A icon:check-circle[role="green"] symbol is meant to indicate a base level of 
testing and willingness to help diagnose and address issues you might run into.
+Similarly, an entry of icon:exclamation-circle[role="yellow"] or 
icon:times-circle[role="red"] generally means that should you run into an issue 
the community is likely to ask you to change the Java environment before 
proceeding to help.
+In some cases, specific guidance on limitations (e.g. whether compiling / unit 
tests work, specific operational issues, etc) will also be noted.
+
+.Long Term Support JDKs are recommended
+[TIP]
+====
+HBase recommends downstream users rely on JDK releases that are marked as Long 
Term Supported (LTS) either from the OpenJDK project or vendors. As of March 
2018 that means Java 8 is the only applicable version and that the next likely 
version to see testing will be Java 11 near Q3 2018.
+====
+
+.Java support by release line
+[cols="6*^.^", options="header"]
 |===
 |HBase Version
-|JDK 6
 |JDK 7
 |JDK 8
+|JDK 9 (Non-LTS)
+|JDK 10 (Non-LTS)
+|JDK 11
+
+|2.0+
+|icon:times-circle[role="red"]
+|icon:check-circle[role="green"]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-21110[HBASE-21110]
+
+|1.2+
+|icon:check-circle[role="green"]
+|icon:check-circle[role="green"]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-21110[HBASE-21110]
 
-|1.1
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
-|yes
-|Running with JDK 8 will work but is not well tested.
-
-|1.0
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
-|yes
-|Running with JDK 8 will work but is not well tested.
-
-|0.98
-|yes
-|yes
-|Running with JDK 8 works but is not well tested. Building with JDK 8 would 
require removal of the
-deprecated `remove()` method of the `PoolMap` class and is under 
consideration. See
-link:https://issues.apache.org/jira/browse/HBASE-7608[HBASE-7608] for more 
information about JDK 8
-support.
-
-|0.96
-|yes
-|yes
-|N/A
-
-|0.94
-|yes
-|yes
-|N/A
 |===
 
-NOTE: In HBase 0.98.5 and newer, you must set `JAVA_HOME` on each node of your 
cluster. _hbase-env.sh_ provides a handy mechanism to do this.
+NOTE: HBase will neither build nor run with Java 6.
+
+NOTE: You must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ 
provides a handy mechanism to do this.
 
 .Operating System Utilities
 ssh::
   HBase uses the Secure Shell (ssh) command and utilities extensively to 
communicate between cluster nodes. Each server in the cluster must be running 
`ssh` so that the Hadoop and HBase daemons can be managed. You must be able to 
connect to all nodes via SSH, including the local node, from the Master as well 
as any backup Master, using a shared key rather than a password. You can see 
the basic methodology for such a set-up in Linux or Unix systems at 
"<<passwordless.ssh.quickstart>>". If [...]
 
 DNS::
-  HBase uses the local hostname to self-report its IP address. Both forward 
and reverse DNS resolving must work in versions of HBase previous to 0.92.0. 
The link:https://github.com/sujee/hadoop-dns-checker[hadoop-dns-checker] tool 
can be used to verify DNS is working correctly on the cluster. The project 
`README` file provides detailed instructions on usage.
-
-Loopback IP::
-  Prior to hbase-0.96.0, HBase only used the IP address `127.0.0.1` to refer 
to `localhost`, and this could not be configured.
-  See <<loopback.ip,Loopback IP>> for more details.
+  HBase uses the local hostname to self-report its IP address.
 
 NTP::
   The clocks on cluster nodes should be synchronized. A small amount of 
variation is acceptable, but larger amounts of skew can cause erratic and 
unexpected behavior. Time synchronization is one of the first things to check 
if you see unexplained problems in your cluster. It is recommended that you run 
a Network Time Protocol (NTP) service, or another time-synchronization 
mechanism, on your cluster, and that all nodes look to the same service for 
time synchronization. See the link:http:/ [...]
@@ -160,9 +168,9 @@ It is recommended to raise the ulimit to at least 10,000, 
but more likely 10,240
 +
 For example, assuming that a schema had 3 ColumnFamilies per region with an 
average of 3 StoreFiles per ColumnFamily, and there are 100 regions per 
RegionServer, the JVM will open `3 * 3 * 100 = 900` file descriptors, not 
counting open JAR files, configuration files, and others. Opening a file does 
not take many resources, and the risk of allowing a user to open too many files 
is minimal.
 +
-Another related setting is the number of processes a user is allowed to run at 
once. In Linux and Unix, the number of processes is set using the `ulimit -u` 
command. This should not be confused with the `nproc` command, which controls 
the number of CPUs available to a given user. Under load, a `ulimit -u` that is 
too low can cause OutOfMemoryError exceptions. See Jack Levin's major HDFS 
issues thread on the hbase-users mailing list, from 2011.
+Another related setting is the number of processes a user is allowed to run at 
once. In Linux and Unix, the number of processes is set using the `ulimit -u` 
command. This should not be confused with the `nproc` command, which controls 
the number of CPUs available to a given user. Under load, a `ulimit -u` that is 
too low can cause OutOfMemoryError exceptions.
 +
-Configuring the maximum number of file descriptors and processes for the user 
who is running the HBase process is an operating system configuration, rather 
than an HBase configuration. It is also important to be sure that the settings 
are changed for the user that actually runs HBase. To see which user started 
HBase, and that user's ulimit configuration, look at the first line of the 
HBase log for that instance. A useful read setting config on you hadoop cluster 
is Aaron Kimballs' Config [...]
+Configuring the maximum number of file descriptors and processes for the user 
who is running the HBase process is an operating system configuration, rather 
than an HBase configuration. It is also important to be sure that the settings 
are changed for the user that actually runs HBase. To see which user started 
HBase, and that user's ulimit configuration, look at the first line of the 
HBase log for that instance.
 +
 .`ulimit` Settings on Ubuntu
 ====
@@ -181,14 +189,14 @@ Linux Shell::
   All of the shell scripts that come with HBase rely on the 
link:http://www.gnu.org/software/bash[GNU Bash] shell.
 
 Windows::
-  Prior to HBase 0.96, testing for running HBase on Microsoft Windows was 
limited.
-  Running a on Windows nodes is not recommended for production systems.
+  Running production systems on Windows machines is not recommended.
 
 
 [[hadoop]]
 === link:http://hadoop.apache.org[Hadoop](((Hadoop)))
 
-The following table summarizes the versions of Hadoop supported with each 
version of HBase.
+The following table summarizes the versions of Hadoop supported with each 
version of HBase. Older versions not appearing in this table are considered 
unsupported and likely missing necessary features, while newer versions are 
untested but may be suitable.
+
 Based on the version of HBase, you should select the most appropriate version 
of Hadoop.
 You can use Apache Hadoop, or a vendor's distribution of Hadoop.
 No distinction is made here.
@@ -197,146 +205,90 @@ See 
link:http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Suppor
 .Hadoop 2.x is recommended.
 [TIP]
 ====
-Hadoop 2.x is faster and includes features, such as short-circuit reads, which 
will help improve your HBase random read profile.
-Hadoop 2.x also includes important bug fixes that will improve your overall 
HBase experience.
-HBase 0.98 drops support for Hadoop 1.0, deprecates use of Hadoop 1.1+, and 
HBase 1.0 will not support Hadoop 1.x.
+Hadoop 2.x is faster and includes features, such as short-circuit reads (see 
<<perf.hdfs.configs.localread>>),
+which will help improve your HBase random read profile.
+Hadoop 2.x also includes important bug fixes that will improve your overall 
HBase experience. HBase does not support running with
+earlier versions of Hadoop. See the table below for requirements specific to 
different HBase versions.
+
+Hadoop 3.x is still in early access releases and has not yet been sufficiently 
tested by the HBase community for production use cases.
 ====
 
 Use the following legend to interpret this table:
 
 .Hadoop version support matrix
 
-* "S" = supported
-* "X" = not supported
-* "NT" = Not tested
+* icon:check-circle[role="green"] = Tested to be fully-functional
+* icon:times-circle[role="red"] = Known to not be fully-functional
+* icon:exclamation-circle[role="yellow"] = Not tested, may/may-not function
 
-[cols="1,1,1,1,1,1", options="header"]
+[cols="1,6*^.^", options="header"]
 |===
-| | HBase-1.2.x | HBase-1.3.x | HBase-1.5.x | HBase-2.0.x | HBase-2.1.x
-|Hadoop-2.4.x | S | S | X | X | X
-|Hadoop-2.5.x | S | S | X | X | X
-|Hadoop-2.6.0 | X | X | X | X | X
-|Hadoop-2.6.1+ | S | S | X | S | X
-|Hadoop-2.7.0 | X | X | X | X | X
-|Hadoop-2.7.1+ | S | S | S | S | S
-|Hadoop-2.8.[0-1] | X | X | X | X | X
-|Hadoop-2.8.2 | NT | NT | NT | NT | NT
-|Hadoop-2.8.3+ | NT | NT | NT | S | S
-|Hadoop-2.9.0 | X | X | X | X | X
-|Hadoop-3.0.0 | NT | NT | NT | NT | NT
+| | HBase-1.2.x, HBase-1.3.x | HBase-1.4.x | HBase-1.5.x | HBase-2.0.x | 
HBase-2.1.x | HBase-2.2.x
+|Hadoop-2.4.x | icon:check-circle[role="green"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"]
+|Hadoop-2.5.x | icon:check-circle[role="green"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"]
+|Hadoop-2.6.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] 
| icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.6.1+ | icon:check-circle[role="green"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:check-circle[role="green"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"]
+|Hadoop-2.7.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] 
| icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.7.1+ | icon:check-circle[role="green"] | 
icon:check-circle[role="green"] | icon:times-circle[role="red"] | 
icon:check-circle[role="green"] | icon:check-circle[role="green"] | 
icon:times-circle[role="red"]
+|Hadoop-2.8.[0-1] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"]
+|Hadoop-2.8.2 | icon:exclamation-circle[role="yellow"] | 
icon:exclamation-circle[role="yellow"] | icon:check-circle[role="green"] | 
icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"] 
| icon:check-circle[role="green"]
+|Hadoop-2.8.3+ | icon:exclamation-circle[role="yellow"] | 
icon:exclamation-circle[role="yellow"] | icon:check-circle[role="green"] | 
icon:check-circle[role="green"] | icon:check-circle[role="green"] | 
icon:check-circle[role="green"]
+|Hadoop-2.9.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] 
| icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.9.1+ | icon:exclamation-circle[role="yellow"] | 
icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"] 
| icon:exclamation-circle[role="yellow"] | 
icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"]
+|Hadoop-3.0.[0-2] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"]
+|Hadoop-3.0.3+ | icon:times-circle[role="red"] | icon:times-circle[role="red"] 
| icon:times-circle[role="red"] | icon:check-circle[role="green"] | 
icon:check-circle[role="green"] | icon:check-circle[role="green"]
+|Hadoop-3.1.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] 
| icon:times-circle[role="red"] | icon:times-circle[role="red"] | 
icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-3.1.1+ | icon:times-circle[role="red"] | icon:times-circle[role="red"] 
| icon:times-circle[role="red"] | icon:check-circle[role="green"] | 
icon:check-circle[role="green"] | icon:check-circle[role="green"]
 |===
 
-.Replace the Hadoop Bundled With HBase!
-[NOTE]
+.Hadoop Pre-2.6.1 and JDK 1.8 Kerberos
+[TIP]
 ====
-Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar 
under its _lib_ directory.
-The bundled jar is ONLY for use in standalone mode.
-In distributed mode, it is _critical_ that the version of Hadoop that is out 
on your cluster match what is under HBase.
-Replace the hadoop jar found in the HBase lib directory with the hadoop jar 
you are running on your cluster to avoid version mismatch issues.
-Make sure you replace the jar in HBase everywhere on your cluster.
-Hadoop version mismatch issues have various manifestations but often all looks 
like its hung up.
+When using pre-2.6.1 Hadoop versions and JDK 1.8 in a Kerberos environment, 
HBase server can fail
+and abort due to Kerberos keytab relogin error. Late version of JDK 1.7 
(1.7.0_80) has the problem too.
+Refer to link:https://issues.apache.org/jira/browse/HADOOP-10786[HADOOP-10786] 
for additional details.
+Consider upgrading to Hadoop 2.6.1+ in this case.
 ====
 
-[[hadoop2.hbase_0.94]]
-==== Apache HBase 0.94 with Hadoop 2
-
-To get 0.94.x to run on Hadoop 2.2.0, you need to change the hadoop 2 and 
protobuf versions in the _pom.xml_: Here is a diff with pom.xml changes:
-
-[source]
-----
-$ svn diff pom.xml
-Index: pom.xml
-===================================================================
---- pom.xml     (revision 1545157)
-+++ pom.xml     (working copy)
-@@ -1034,7 +1034,7 @@
-     <slf4j.version>1.4.3</slf4j.version>
-     <log4j.version>1.2.16</log4j.version>
-     <mockito-all.version>1.8.5</mockito-all.version>
--    <protobuf.version>2.4.0a</protobuf.version>
-+    <protobuf.version>2.5.0</protobuf.version>
-     <stax-api.version>1.0.1</stax-api.version>
-     <thrift.version>0.8.0</thrift.version>
-     <zookeeper.version>3.4.5</zookeeper.version>
-@@ -2241,7 +2241,7 @@
-         </property>
-       </activation>
-       <properties>
--        <hadoop.version>2.0.0-alpha</hadoop.version>
-+        <hadoop.version>2.2.0</hadoop.version>
-         <slf4j.version>1.6.1</slf4j.version>
-       </properties>
-       <dependencies>
-----
-
-The next step is to regenerate Protobuf files and assuming that the Protobuf 
has been installed:
-
-* Go to the HBase root folder, using the command line;
-* Type the following commands:
-+
-
-[source,bourne]
-----
-$ protoc -Isrc/main/protobuf --java_out=src/main/java 
src/main/protobuf/hbase.proto
-----
-+
-
-[source,bourne]
-----
-$ protoc -Isrc/main/protobuf --java_out=src/main/java 
src/main/protobuf/ErrorHandling.proto
-----
-
-
-Building against the hadoop 2 profile by running something like the following 
command:
-
-----
-$  mvn clean install assembly:single -Dhadoop.profile=2.0 -DskipTests
-----
-
-[[hadoop.hbase_0.94]]
-==== Apache HBase 0.92 and 0.94
-
-HBase 0.92 and 0.94 versions can work with Hadoop versions, 0.20.205, 0.22.x, 
1.0.x, and 1.1.x.
-HBase-0.94 can additionally work with Hadoop-0.23.x and 2.x, but you may have 
to recompile the code using the specific maven profile (see top level pom.xml)
-
-[[hadoop.hbase_0.96]]
-==== Apache HBase 0.96
-
-As of Apache HBase 0.96.x, Apache Hadoop 1.0.x at least is required.
-Hadoop 2 is strongly encouraged (faster but also has fixes that help MTTR). We 
will no longer run properly on older Hadoops such as 0.20.205 or 
branch-0.20-append.
-Do not move to Apache HBase 0.96.x if you cannot upgrade your Hadoop. See 
link:http://search-hadoop.com/m/7vFVx4EsUb2[HBase, mail # dev - DISCUSS:
-                Have hbase require at least hadoop 1.0.0 in hbase 0.96.0?]
-
-[[hadoop.older.versions]]
-==== Hadoop versions 0.20.x - 1.x
-
-HBase will lose data unless it is running on an HDFS that has a durable `sync` 
implementation.
-DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, and Hadoop 0.20.204.0 which DO 
NOT have this attribute.
-Currently only Hadoop versions 0.20.205.x or any release in excess of this 
version -- this includes hadoop-1.0.0 -- have a working, durable sync.
-The Cloudera blog post 
link:http://www.cloudera.com/blog/2012/01/an-update-on-apache-hadoop-1-0/[An
-            update on Apache Hadoop 1.0] by Charles Zedlweski has a nice 
exposition on how all the Hadoop versions relate.
-It's worth checking out if you are having trouble making sense of the Hadoop 
version morass.
-
-Sync has to be explicitly enabled by setting `dfs.support.append` equal to 
true on both the client side -- in _hbase-site.xml_ -- and on the serverside in 
_hdfs-site.xml_ (The sync facility HBase needs is a subset of the append code 
path).
-
-[source,xml]
-----
-
-<property>
-  <name>dfs.support.append</name>
-  <value>true</value>
-</property>
-----
+.Hadoop 2.6.x
+[TIP]
+====
+Hadoop distributions based on the 2.6.x line *must* have
+link:https://issues.apache.org/jira/browse/HADOOP-11710[HADOOP-11710] applied 
if you plan to run
+HBase on top of an HDFS Encryption Zone. Failure to do so will result in 
cluster failure and
+data loss. This patch is present in Apache Hadoop releases 2.6.1+.
+====
 
-You will have to restart your cluster after making this edit.
-Ignore the chicken-little comment you'll find in the _hdfs-default.xml_ in the 
description for the `dfs.support.append` configuration.
+.Hadoop 2.y.0 Releases
+[TIP]
+====
+Starting around the time of Hadoop version 2.7.0, the Hadoop PMC got into the 
habit of calling out new minor releases on their major version 2 release line 
as not stable / production ready. As such, HBase expressly advises downstream 
users to avoid running on top of these releases. Note that additionally the 
2.8.1 release was given the same caveat by the Hadoop PMC. For reference, see 
the release announcements for 
link:https://s.apache.org/hadoop-2.7.0-announcement[Apache Hadoop 2.7.0],  [...]
+====
 
-[[hadoop.security]]
-==== Apache HBase on Secure Hadoop
+.Hadoop 3.0.x Releases
+[TIP]
+====
+Hadoop distributions that include the Application Timeline Service feature may 
cause unexpected versions of HBase classes to be present in the application 
classpath. Users planning on running MapReduce applications with HBase should 
make sure that link:https://issues.apache.org/jira/browse/YARN-7190[YARN-7190] 
is present in their YARN service (currently fixed in 2.9.1+ and 3.1.0+).
+====
 
-Apache HBase will run on any Hadoop 0.20.x that incorporates Hadoop security 
features as long as you do as suggested above and replace the Hadoop jar that 
ships with HBase with the secure version.
-If you want to read more about how to setup Secure HBase, see 
<<hbase.secure.configuration,hbase.secure.configuration>>.
+.Hadoop 3.1.0 Release
+[TIP]
+====
+The Hadoop PMC called out the 3.1.0 release as not stable / production ready. 
As such, HBase expressly advises downstream users to avoid running on top of 
this release. For reference, see the 
link:https://s.apache.org/hadoop-3.1.0-announcement[release announcement for 
Hadoop 3.1.0].
+====
 
+.Replace the Hadoop Bundled With HBase!
+[NOTE]
+====
+Because HBase depends on Hadoop, it bundles Hadoop jars under its _lib_ 
directory.
+The bundled jars are ONLY for use in standalone mode.
+In distributed mode, it is _critical_ that the version of Hadoop that is out 
on your cluster match what is under HBase.
+Replace the hadoop jars found in the HBase lib directory with the equivalent 
hadoop jars from the version you are running
+on your cluster to avoid version mismatch issues.
+Make sure you replace the jars under HBase across your whole cluster.
+Hadoop version mismatch issues have various manifestations. Check for mismatch 
if
+HBase appears hung.
+====
 
 [[dfs.datanode.max.transfer.threads]]
 ==== `dfs.datanode.max.transfer.threads` 
(((dfs.datanode.max.transfer.threads)))
@@ -370,9 +322,7 @@ See also 
<<casestudies.max.transfer.threads,casestudies.max.transfer.threads>> a
 [[zookeeper.requirements]]
 === ZooKeeper Requirements
 
-ZooKeeper 3.4.x is required as of HBase 1.0.0.
-HBase makes use of the `multi` functionality that is only available since 
3.4.0 (The `useMulti` configuration option defaults to `true` in HBase 1.0.0).
-See link:https://issues.apache.org/jira/browse/HBASE-12241[HBASE-12241 (The 
crash of regionServer when taking deadserver's replication queue breaks 
replication)] and 
link:https://issues.apache.org/jira/browse/HBASE-6775[HBASE-6775 (Use ZK.multi 
when available for HBASE-6710 0.92/0.94 compatibility fix)] for background.
+ZooKeeper 3.4.x is required.
 
 [[standalone_dist]]
 == HBase run modes: Standalone and Distributed
@@ -392,6 +342,7 @@ Standalone mode is what is described in the 
<<quickstart,quickstart>> section.
 In standalone mode, HBase does not use HDFS -- it uses the local filesystem 
instead -- and it runs all HBase daemons and a local ZooKeeper all up in the 
same JVM.
 Zookeeper binds to a well known port so clients may talk to HBase.
 
+[[distributed]]
 === Distributed
 
 Distributed mode can be subdivided into distributed but all daemons run on a 
single node -- a.k.a _pseudo-distributed_ -- and _fully-distributed_ where the 
daemons are spread across all nodes in the cluster.
@@ -526,8 +477,6 @@ Check them out especially if HBase had trouble starting.
 HBase also puts up a UI listing vital attributes.
 By default it's deployed on the Master host at port 16010 (HBase RegionServers 
listen on port 16020 by default and put up an informational HTTP server at port 
16030). If the Master is running on a host named `master.example.org` on the 
default port, point your browser at _http://master.example.org:16010_ to see 
the web interface.
 
-Prior to HBase 0.98 the master UI was deployed on port 60010, and the HBase 
RegionServers UI on port 60030.
-
 Once HBase has started, see the <<shell_exercises,shell exercises>> section 
for how to create tables, add data, scan your insertions, and finally disable 
and drop your tables.
 
 To stop HBase after exiting the HBase shell enter
@@ -733,7 +682,7 @@ example9
 [[hbase_env]]
 ==== _hbase-env.sh_
 
-The following lines in the _hbase-env.sh_ file show how to set the `JAVA_HOME` 
environment variable (required for HBase 0.98.5 and newer) and set the heap to 
4 GB (rather than the default value of 1 GB). If you copy and paste this 
example, be sure to adjust the `JAVA_HOME` to suit your environment.
+The following lines in the _hbase-env.sh_ file show how to set the `JAVA_HOME` 
environment variable (required for HBase) and set the heap to 4 GB (rather than 
the default value of 1 GB). If you copy and paste this example, be sure to 
adjust the `JAVA_HOME` to suit your environment.
 
 ----
 # The java implementation to use.
@@ -856,7 +805,6 @@ For most use patterns, most of the time, you should use 
automatic splitting.
 See <<manual_region_splitting_decisions,manual region splitting decisions>> 
for more information about manual region splitting.
 
 Instead of allowing HBase to split your regions automatically, you can choose 
to manage the splitting yourself.
-This feature was added in HBase 0.90.0.
 Manually managing splits works if you know your keyspace well, otherwise let 
HBase figure where to split for you.
 Manual splitting can mitigate region creation and movement under load.
 It also makes it so region boundaries are known and invariant (if you disable 
region splitting). If you use manual splits, it is easier doing staggered, 
time-based major compactions to spread out your network IO load.
@@ -882,13 +830,12 @@ Otherwise, the cluster can be prone to compaction storms 
where a large number of
 It is important to understand that the data growth causes compaction storms, 
and not the manual split decision.
 
 If the regions are split into too many large regions, you can increase the 
major compaction interval by configuring `HConstants.MAJOR_COMPACTION_PERIOD`.
-HBase 0.90 introduced `org.apache.hadoop.hbase.util.RegionSplitter`, which 
provides a network-IO-safe rolling split of all regions.
+The `org.apache.hadoop.hbase.util.RegionSplitter` utility also provides a 
network-IO-safe rolling split of all regions.
 
 [[managed.compactions]]
 ==== Managed Compactions
 
 By default, major compactions are scheduled to run once in a 7-day period.
-Prior to HBase 0.96.x, major compactions were scheduled to happen once per day 
by default.
 
 If you need to control exactly when and how often major compaction runs, you 
can disable managed major compactions.
 See the entry for `hbase.hregion.majorcompaction` in the 
<<compaction.parameters,compaction.parameters>> table for details.
@@ -1008,8 +955,8 @@ To enable monitoring and management from remote systems, 
you need to set system
 See the 
link:http://docs.oracle.com/javase/6/docs/technotes/guides/management/agent.html[official
 documentation] for more information.
 Historically, besides above port mentioned, JMX opens two additional random 
TCP listening ports, which could lead to port conflict problem. (See 
link:https://issues.apache.org/jira/browse/HBASE-10289[HBASE-10289] for details)
 
-As an alternative, You can use the coprocessor-based JMX implementation 
provided by HBase.
-To enable it in 0.99 or above, add below property in _hbase-site.xml_: 
+As an alternative, you can use the coprocessor-based JMX implementation 
provided by HBase.
+To enable it, add below property in _hbase-site.xml_:
 
 [source,xml]
 ----
@@ -1104,8 +1051,8 @@ The corresponding properties for port configuration are 
`master.rmi.registry.por
 [[dyn_config]]
 == Dynamic Configuration
 
-Since HBase 1.0.0, it is possible to change a subset of the configuration 
without requiring a server restart.
-In the HBase shell, there are new operators, `update_config` and 
`update_all_config` that will prompt a server or all servers to reload 
configuration.
+It is possible to change a subset of the configuration without requiring a 
server restart.
+In the HBase shell, the operations `update_config` and `update_all_config` 
will prompt a server or all servers to reload configuration.
 
 Only a subset of all configurations can currently be changed in the running 
server.
 Here is an incomplete list: `hbase.regionserver.thread.compaction.large`, 
`hbase.regionserver.thread.compaction.small`, 
`hbase.regionserver.thread.split`, `hbase.regionserver.thread.merge`, as well 
as compaction policy and configurations and adjustment to offpeak hours.
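The `update_config` and `update_all_config` operations mentioned in the dynamic-configuration hunk above are HBase shell commands; a minimal session might look like the following sketch, where the server name string is only an illustrative placeholder.

----
$ bin/hbase shell
hbase> update_config 'regionserver1.example.org,16020,1500000000000'
hbase> update_all_config
----
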
diff --git a/src/main/asciidoc/_chapters/developer.adoc 
b/src/main/asciidoc/_chapters/developer.adoc
index 208ad73..a17e866 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -401,7 +401,7 @@ mvn -Dhadoop.profile=3.0 ...
 The above will build against whatever explicit hadoop 3.y version we have in 
our _pom.xml_ as our '3.0' version.
 Tests may not all pass so you may need to pass `-DskipTests` unless you are 
inclined to fix the failing tests.
 
-To pick a particular Hadoop 3.y release, you'd set e.g. 
`-Dhadoop-three.version=3.0.0-alpha1`.
+To pick a particular Hadoop 3.y release, you'd set the hadoop-three.version
property, e.g. `-Dhadoop-three.version=3.0.0`.
 
 [[build.protobuf]]
 ==== Build Protobuf
@@ -538,7 +538,12 @@ For the build to sign them for you, you a properly 
configured _settings.xml_ in
 
 [[maven.release]]
 === Making a Release Candidate
-Only committers may make releases of hbase artifacts.
+
+NOTE: These instructions are for building HBase 1.y.z
+
+.Point Releases
+If you are making a point release (for example to quickly address a critical 
incompatibility or security problem) off of a release branch instead of a 
development branch, the tagging instructions are slightly different.
+I'll prefix those special steps with _Point Release Only_.
 
 .Before You Begin
 Make sure your environment is properly set up. Maven and Git are the main 
tooling
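Combining the two build flags shown in the developer-guide hunk above, a build against a specific Hadoop 3 release might be invoked as in this sketch; the version number is illustrative only.

[source,bourne]
----
# Build against the Hadoop 3 profile, pinning a particular 3.y release.
# -DskipTests is optional; see the note above about possibly failing tests.
$ mvn clean install -DskipTests -Dhadoop.profile=3.0 -Dhadoop-three.version=3.0.0
----
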
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc 
b/src/main/asciidoc/_chapters/getting_started.adoc
index 374c946..986c0ff 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -19,6 +19,7 @@
  */
 ////
 
+[[getting_started]]
 = Getting Started
 :doctype: book
 :numbered:
@@ -38,35 +39,6 @@ This is not an appropriate configuration for a production 
instance of HBase, but
 This section shows you how to create a table in HBase using the `hbase shell` 
CLI, insert rows into the table, perform put and scan operations against the 
table, enable or disable the table, and start and stop HBase.
 Apart from downloading HBase, this procedure should take less than 10 minutes.
 
-.Local Filesystem and Durability
-WARNING: _The following is fixed in HBase 0.98.3 and beyond. See 
link:https://issues.apache.org/jira/browse/HBASE-11272[HBASE-11272] and 
link:https://issues.apache.org/jira/browse/HBASE-11218[HBASE-11218]._
-
-Using HBase with a local filesystem does not guarantee durability.
-The HDFS local filesystem implementation will lose edits if files are not 
properly closed.
-This is very likely to happen when you are experimenting with new software, 
starting and stopping the daemons often and not always cleanly.
-You need to run HBase on HDFS to ensure all writes are preserved.
-Running against the local filesystem is intended as a shortcut to get you 
familiar with how the general system works, as the very first phase of 
evaluation.
-See link:https://issues.apache.org/jira/browse/HBASE-3696[HBASE-3696] and its 
associated issues for more details about the issues of running on the local 
filesystem.
-
-[[loopback.ip]]
-.Loopback IP - HBase 0.94.x and earlier
-NOTE: _The below advice is for hbase-0.94.x and older versions only. This is 
fixed in hbase-0.96.0 and beyond._
-
-Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1. 
Ubuntu and some other distributions default to 127.0.1.1 and this will cause 
problems for you. See link:http://devving.com/?p=414[Why does HBase care about 
/etc/hosts?] for detail
-
-
-.Example /etc/hosts File for Ubuntu
-====
-The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, 
on Ubuntu. Use this as a template if you run into trouble. 
-[listing]
-----
-127.0.0.1 localhost
-127.0.0.1 ubuntu.ubuntu-domain ubuntu
-----
-
-====
-
-
 === JDK Version Requirements
 
 HBase requires that a JDK be installed.
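As a concrete illustration of the JDK requirement above, `JAVA_HOME` is typically exported in _conf/hbase-env.sh_; the path below is a placeholder for wherever your JDK 7 or JDK 8 installation lives.

[source,bourne]
----
# Hypothetical JAVA_HOME setting in conf/hbase-env.sh; adjust the path to your install.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
----
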
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc 
b/src/main/asciidoc/_chapters/ops_mgt.adoc
index 0193d63..43d11a0 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -44,13 +44,16 @@ Some commands, such as `version`, `pe`, `ltt`, `clean`, are 
not available in pre
 $ bin/hbase
 Usage: hbase [<options>] <command> [<args>]
 Options:
-  --config DIR    Configuration direction to use. Default: ./conf
-  --hosts HOSTS   Override the list in 'regionservers' file
+  --config DIR     Configuration direction to use. Default: ./conf
+  --hosts HOSTS    Override the list in 'regionservers' file
+  --auth-as-server Authenticate to ZooKeeper using servers configuration
 
 Commands:
 Some commands take arguments. Pass no args or -h for usage.
   shell           Run the HBase shell
   hbck            Run the hbase 'fsck' tool
+  snapshot        Tool for managing snapshots
+  snapshotinfo    Tool for dumping snapshot information
   wal             Write-ahead-log analyzer
   hfile           Store file analyzer
   zkcli           Run the ZooKeeper shell
@@ -64,8 +67,10 @@ Some commands take arguments. Pass no args or -h for usage.
   clean           Run the HBase clean up script
   classpath       Dump hbase CLASSPATH
   mapredcp        Dump CLASSPATH entries required by mapreduce
+  completebulkload Run LoadIncrementalHFiles tool
   pe              Run PerformanceEvaluation
   ltt             Run LoadTestTool
+  canary          Run the Canary tool
   version         Print the version
   CLASSNAME       Run the class named CLASSNAME
 ----
@@ -81,20 +86,29 @@ To see the usage, use the `--help` parameter.
 ----
 $ ${HBASE_HOME}/bin/hbase canary -help
 
-Usage: bin/hbase org.apache.hadoop.hbase.tool.Canary [opts] [table1 
[table2]...] | [regionserver1 [regionserver2]..]
+Usage: hbase org.apache.hadoop.hbase.tool.Canary [opts] [table1 [table2]...] | 
[regionserver1 [regionserver2]..]
  where [opts] are:
    -help          Show this help and exit.
    -regionserver  replace the table argument to regionserver,
       which means to enable regionserver mode
+   -allRegions    Tries all regions on a regionserver,
+      only works in regionserver mode.
+   -zookeeper    Tries to grab zookeeper.znode.parent
+      on each zookeeper instance
    -daemon        Continuous check at defined intervals.
+   -permittedZookeeperFailures <N>    Ignore first N failures when attempting 
to
+      connect to individual zookeeper nodes in the ensemble
    -interval <N>  Interval between checks (sec)
-   -e             Use region/regionserver as regular expression
-      which means the region/regionserver is regular expression pattern
+   -e             Use table/regionserver as regular expression
+      which means the table/regionserver is regular expression pattern
    -f <B>         stop whole program if first error occurs, default is true
-   -t <N>         timeout for a check, default is 600000 (milliseconds)
+   -t <N>         timeout for a check, default is 600000 (millisecs)
+   -writeTableTimeout <N>         write timeout for the writeTable, default is 
600000 (millisecs)
+   -readTableTimeouts <tableName>=<read timeout>,<tableName>=<read timeout>, 
...    comma-separated list of read timeouts per table (no spaces), default is 
600000 (millisecs)
    -writeSniffing enable the write sniffing in canary
    -treatFailureAsError treats read / write failure as error
    -writeTable    The table used for write sniffing. Default is hbase:canary
+   -Dhbase.canary.read.raw.enabled=<true/false> Use this flag to enable or 
disable raw scan during read canary test Default is false and raw is not 
enabled during scan
    -D<configProperty>=<value> assigning or override the configuration params
 ----
 
@@ -107,6 +121,7 @@ private static final int USAGE_EXIT_CODE = 1;
 private static final int INIT_ERROR_EXIT_CODE = 2;
 private static final int TIMEOUT_ERROR_EXIT_CODE = 3;
 private static final int ERROR_EXIT_CODE = 4;
+private static final int FAILURE_EXIT_CODE = 5;
 ----
 
 Here are some examples based on the following given case.
@@ -802,27 +817,20 @@ Options:
 
 === `hbase pe`
 
-The `hbase pe` command is a shortcut provided to run the 
`org.apache.hadoop.hbase.PerformanceEvaluation` tool, which is used for testing.
-The `hbase pe` command was introduced in HBase 0.98.4.
+The `hbase pe` command runs the PerformanceEvaluation tool, which is used for 
testing.
 
 The PerformanceEvaluation tool accepts many different options and commands.
 For usage instructions, run the command with no options.
 
-To run PerformanceEvaluation prior to HBase 0.98.4, issue the command `hbase 
org.apache.hadoop.hbase.PerformanceEvaluation`.
-
 The PerformanceEvaluation tool has received many updates in recent HBase 
releases, including support for namespaces, support for tags, cell-level ACLs 
and visibility labels, multiget support for RPC calls, increased sampling 
sizes, an option to randomly sleep during testing, and ability to "warm up" the 
cluster before testing starts.
 
 === `hbase ltt`
 
-The `hbase ltt` command is a shortcut provided to run the 
`org.apache.hadoop.hbase.util.LoadTestTool` utility, which is used for testing.
-The `hbase ltt` command was introduced in HBase 0.98.4.
+The `hbase ltt` command runs the LoadTestTool utility, which is used for 
testing.
 
-You must specify either `-write` or `-update-read` as the first option.
+You must specify one of `-write`, `-update`, or `-read` as the first option.
 For general usage instructions, pass the `-h` option.
 
-To run LoadTestTool prior to HBase 0.98.4, issue the command +hbase
-          org.apache.hadoop.hbase.util.LoadTestTool+.
-
 The LoadTestTool has received many updates in recent HBase releases, including 
support for namespaces, support for tags, cell-level ACLS and visibility 
labels, testing security-related features, ability to specify the number of 
regions per server, tests for multi-get RPC calls, and tests relating to 
replication.
 
 [[ops.regionmgt]]
@@ -885,7 +893,7 @@ See <<lb,lb>> below.
 [NOTE]
 ====
 In hbase-2.0, in the bin directory, we added a script named 
_considerAsDead.sh_ that can be used to kill a regionserver.
-Hardware issues could be detected by specialized monitoring tools before the  
zookeeper timeout has expired. _considerAsDead.sh_ is a simple function to mark 
a RegionServer as dead.
+Hardware issues could be detected by specialized monitoring tools before the 
zookeeper timeout has expired. _considerAsDead.sh_ is a simple function to mark 
a RegionServer as dead.
 It deletes all the znodes of the server, starting the recovery process.
 Plug in the script into your monitoring/fault detection tools to initiate 
faster failover.
 Be careful how you use this disruptive tool.
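Tying together several of the Canary options shown in the usage text above, two illustrative invocations follow; the table names and interval are placeholders rather than recommendations.

[source,bourne]
----
# Check two hypothetical tables once, with a 600000 ms per-check timeout.
$ ${HBASE_HOME}/bin/hbase canary -t 600000 usertable1 usertable2

# Run continuously, checking every 30 seconds and treating read/write failures as errors.
$ ${HBASE_HOME}/bin/hbase canary -daemon -interval 30 -treatFailureAsError
----
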
diff --git a/src/main/asciidoc/_chapters/preface.adoc 
b/src/main/asciidoc/_chapters/preface.adoc
index 960fcc4..859211d 100644
--- a/src/main/asciidoc/_chapters/preface.adoc
+++ b/src/main/asciidoc/_chapters/preface.adoc
@@ -61,4 +61,39 @@ Please use 
link:https://issues.apache.org/jira/browse/hbase[JIRA] to report non-
 
 To protect existing HBase installations from new vulnerabilities, please *do 
not* use JIRA to report security-related bugs. Instead, send your report to the 
mailing list [email protected], which allows anyone to send messages, but 
restricts who can read them. Someone on that list will contact you to follow up 
on your report.
 
+[[hbase_supported_tested_definitions]]
+.Support and Testing Expectations
+
+The phrases /supported/, /not supported/, /tested/, and /not tested/ occur 
several
+places throughout this guide. In the interest of clarity, here is a brief 
explanation
+of what is generally meant by these phrases, in the context of HBase.
+
+NOTE: Commercial technical support for Apache HBase is provided by many Hadoop 
vendors.
+This is not the sense in which the term /support/ is used in the context of the
+Apache HBase project. The Apache HBase team assumes no responsibility for your
+HBase clusters, your configuration, or your data.
+
+Supported::
+  In the context of Apache HBase, /supported/ means that HBase is designed to 
work
+  in the way described, and deviation from the defined behavior or 
functionality should
+  be reported as a bug.
+
+Not Supported::
+  In the context of Apache HBase, /not supported/ means that a use case or use 
pattern
+  is not expected to work and should be considered an antipattern. If you 
think this
+  designation should be reconsidered for a given feature or use pattern, file 
a JIRA
+  or start a discussion on one of the mailing lists.
+
+Tested::
+  In the context of Apache HBase, /tested/ means that a feature is covered by 
unit
+  or integration tests, and has been proven to work as expected.
+
+Not Tested::
+  In the context of Apache HBase, /not tested/ means that a feature or use 
pattern
+  may or may not work in a given way, and may or may not corrupt your data or 
cause
+  operational issues. It is an unknown, and there are no guarantees. If you 
can provide
+  proof that a feature designated as /not tested/ does work in a given way, 
please
+  submit the tests and/or the metrics so that other users can gain certainty 
about
+  such features or use patterns.
+
 :numbered:
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc 
b/src/main/asciidoc/_chapters/troubleshooting.adoc
index 906b2f8..e60eb41 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -101,6 +101,11 @@ To disable, set the logging level back to `INFO` level.
 [[trouble.log.gc]]
 === JVM Garbage Collection Logs
 
+[NOTE]
+====
+All example Garbage Collection logs in this section are based on Java 7 and 
Java 8 output. The introduction of Unified Logging in Java 9 and newer will 
result in very different looking logs.
+====
+
 HBase is memory intensive, and using the default GC you can see long pauses in 
all threads including the _Juliet Pause_ aka "GC of Death". To help debug this 
or confirm this is happening GC logging can be turned on in the Java virtual 
machine.
 
 To enable, in _hbase-env.sh_, uncomment one of the below lines :
@@ -253,7 +258,6 @@ link:https://issues.apache.org/jira/browse/HBASE[JIRA] is 
also really helpful wh
 ==== Master Web Interface
 
 The Master starts a web-interface on port 16010 by default.
-(Up to and including 0.98 this was port 60010)
 
 The Master web UI lists created tables and their definition (e.g., 
ColumnFamilies, blocksize, etc.). Additionally, the available RegionServers in 
the cluster are listed along with selected high-level metrics (requests, number 
of regions, usedHeap, maxHeap). The Master web UI allows navigation to each 
RegionServer's web UI.
 
@@ -261,7 +265,6 @@ The Master web UI lists created tables and their definition 
(e.g., ColumnFamilie
 ==== RegionServer Web Interface
 
 RegionServers starts a web-interface on port 16030 by default.
-(Up to an including 0.98 this was port 60030)
 
 The RegionServer web UI lists online regions and their start/end keys, as well 
as point-in-time RegionServer metrics (requests, regions, storeFileIndexSize, 
compactionQueueSize, etc.).
 
@@ -557,7 +560,7 @@ You can also tail all the logs at the same time, edit 
files, etc.
 [[trouble.client]]
 == Client
 
-For more information on the HBase client, see <<client,client>>.
+For more information on the HBase client, see <<architecture.client,client>>.
 
 [[trouble.client.scantimeout]]
 === ScannerTimeoutException or UnknownScannerException
@@ -670,12 +673,6 @@ A workaround is passing your client-side JVM a reasonable 
value for `-XX:MaxDire
 By default, the `MaxDirectMemorySize` is equal to your `-Xmx` max heapsize 
setting (if `-Xmx` is set). Try setting it to something smaller (for example, 
one user had success setting it to `1g` when they had a client-side heap of 
`12g`). If you set it too small, it will bring on `FullGCs` so keep it a bit 
hefty.
 You want to make this setting client-side only especially if you are running 
the new experimental server-side off-heap cache since this feature depends on 
being able to use big direct buffers (You may have to keep separate client-side 
and server-side config dirs).
 
-[[trouble.client.slowdown.admin]]
-=== Client Slowdown When Calling Admin Methods (flush, compact, etc.)
-
-This is a client issue fixed by 
link:https://issues.apache.org/jira/browse/HBASE-5073[HBASE-5073] in 0.90.6.
-There was a ZooKeeper leak in the client and the client was getting pummeled 
by ZooKeeper events with each additional invocation of the admin API.
-
 [[trouble.client.security.rpc]]
 === Secure Client Cannot Connect ([Caused by GSSException: No valid 
credentials provided(Mechanism level: Failed to find any Kerberos tgt)])
 
@@ -848,7 +845,6 @@ See <<managed.compactions>> for more information on 
managing compactions.
 === Loopback IP
 
 HBase expects the loopback IP Address to be 127.0.0.1.
-See the Getting Started section on <<loopback.ip>>.
 
 [[trouble.network.ints]]
 === Network Interfaces
@@ -1071,13 +1067,6 @@ This exception is returned back to the client and then 
the client goes back to `
 
 However, if the NotServingRegionException is logged ERROR, then the client ran 
out of retries and something probably wrong.
 
-[[trouble.rs.runtime.double_listed_regions]]
-==== Regions listed by domain name, then IP
-
-Fix your DNS.
-In versions of Apache HBase before 0.92.x, reverse DNS needs to give same 
answer as forward lookup.
-See link:https://issues.apache.org/jira/browse/HBASE-3431[HBASE 3431 
RegionServer is not using the name given it by the master; double entry in 
master listing of servers] for gorey details.
-
 [[brand.new.compressor]]
 ==== Logs flooded with '2011-01-10 12:40:48,407 INFO 
org.apache.hadoop.io.compress.CodecPool: Gotbrand-new compressor' messages
 
@@ -1234,31 +1223,6 @@ See Andrew's answer here, up on the user list: 
link:http://search-hadoop.com/m/s
 [[trouble.versions]]
 == HBase and Hadoop version issues
 
-[[trouble.versions.205]]
-=== `NoClassDefFoundError` when trying to run 0.90.x on hadoop-0.20.205.x (or 
hadoop-1.0.x)
-
-Apache HBase 0.90.x does not ship with hadoop-0.20.205.x, etc.
-To make it run, you need to replace the hadoop jars that Apache HBase shipped 
with in its _lib_ directory with those of the Hadoop you want to run HBase on.
-If even after replacing Hadoop jars you get the below exception:
-
-[source]
-----
-
-sv4r6s38: Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/commons/configuration/Configuration
-sv4r6s38:       at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
-sv4r6s38:       at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
-sv4r6s38:       at 
org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
-sv4r6s38:       at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
-sv4r6s38:       at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:177)
-sv4r6s38:       at 
org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:229)
-sv4r6s38:       at 
org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
-sv4r6s38:       at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:202)
-sv4r6s38:       at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:177)
-----
-
-you need to copy under _hbase/lib_, the _commons-configuration-X.jar_ you find 
in your Hadoop's _lib_ directory.
-That should fix the above complaint.
-
 [[trouble.wrong.version]]
 === ...cannot communicate with client version...
 
@@ -1267,67 +1231,6 @@ If you see something like the following in your logs 
[computeroutput]+... 2012-0
           shutdown. org.apache.hadoop.ipc.RemoteException: Server IPC version 
7 cannot communicate
           with client version 4 ...+ ...are you trying to talk to an Hadoop 
2.0.x from an HBase that has an Hadoop 1.0.x client? Use the HBase built 
against Hadoop 2.0 or rebuild your HBase passing the +-Dhadoop.profile=2.0+ 
attribute to Maven (See <<maven.build.hadoop>> for more).
 
-== IPC Configuration Conflicts with Hadoop
-
-If the Hadoop configuration is loaded after the HBase configuration, and you 
have configured custom IPC settings in both HBase and Hadoop, the Hadoop values 
may overwrite the HBase values.
-There is normally no need to change these settings for HBase, so this problem 
is an edge case.
-However, link:https://issues.apache.org/jira/browse/HBASE-11492[HBASE-11492] 
renames these settings for HBase to remove the chance of a conflict.
-Each of the setting names have been prefixed with `hbase.`, as shown in the 
following table.
-No action is required related to these changes unless you are already 
experiencing a conflict.
-
-These changes were backported to HBase 0.98.x and apply to all newer versions.
-
-[cols="1,1", options="header"]
-|===
-| Pre-0.98.x
-| 0.98-x And Newer
-
-| ipc.server.listen.queue.size
-| hbase.ipc.server.listen.queue.size
-
-| ipc.server.max.callqueue.size
-| hbase.ipc.server.max.callqueue.size
-
-| ipc.server.callqueue.handler.factor
-| hbase.ipc.server.callqueue.handler.factor
-
-| ipc.server.callqueue.read.share
-| hbase.ipc.server.callqueue.read.share
-
-| ipc.server.callqueue.type
-| hbase.ipc.server.callqueue.type
-
-| ipc.server.queue.max.call.delay
-| hbase.ipc.server.queue.max.call.delay
-
-| ipc.server.max.callqueue.length
-| hbase.ipc.server.max.callqueue.length
-
-| ipc.server.read.threadpool.size
-| hbase.ipc.server.read.threadpool.size
-
-| ipc.server.tcpkeepalive
-| hbase.ipc.server.tcpkeepalive
-
-| ipc.server.tcpnodelay
-| hbase.ipc.server.tcpnodelay
-
-| ipc.client.call.purge.timeout
-| hbase.ipc.client.call.purge.timeout
-
-| ipc.client.connection.maxidletime
-| hbase.ipc.client.connection.maxidletime
-
-| ipc.client.idlethreshold
-| hbase.ipc.client.idlethreshold
-
-| ipc.client.kill.max
-| hbase.ipc.client.kill.max
-
-| ipc.server.scan.vtime.weight
-| hbase.ipc.server.scan.vtime.weight
-|===
-
 == HBase and HDFS
 
 General configuration guidance for Apache HDFS is out of the scope of this 
guide.
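Returning to the direct-memory workaround discussed in the troubleshooting hunks above, a client-side-only setting might look like the following sketch for _hbase-env.sh_; the `1g` value simply mirrors the figure quoted in that paragraph and is an assumption, not a recommendation.

[source,bourne]
----
# Hypothetical client-side JVM option; size to your own heap and workload.
export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=1g"
----
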
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 1cb9bfd..91af2b6 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -27,19 +27,15 @@
 :icons: font
 :experimental:
 
-You cannot skip major versions when upgrading. If you are upgrading from 
version 0.90.x to 0.94.x, you must first go from 0.90.x to 0.92.x and then go 
from 0.92.x to 0.94.x.
+You cannot skip major versions when upgrading. If you are upgrading from 
version 0.98.x to 2.x, you must first go from 0.98.x to 1.2.x and then go from 
1.2.x to 2.x.
 
-NOTE: It may be possible to skip across versions -- for example go from 0.92.2 
straight to 0.98.0 just following the 0.96.x upgrade instructions -- but these 
scenarios are untested.
-
-Review <<configuration>>, in particular <<hadoop>>.
+Review <<configuration>>, in particular <<hadoop>>. Familiarize yourself with 
<<hbase_supported_tested_definitions>>.
 
 [[hbase.versioning]]
 == HBase version number and compatibility
 
-HBase has two versioning schemes, pre-1.0 and post-1.0. Both are detailed 
below.
-
 [[hbase.versioning.post10]]
-=== Post 1.0 versions
+=== Aspirational Semantic Versioning
 
 Starting with the 1.0.0 release, HBase is working towards 
link:http://semver.org/[Semantic Versioning] for its release versioning. In 
summary:
 
@@ -147,20 +143,9 @@ HBase LimitedPrivate API::
 HBase Private API::
   All classes annotated with InterfaceAudience.Private or all classes that do 
not have the annotation are for HBase internal use only. The interfaces and 
method signatures can change at any point in time. If you are relying on a 
particular interface that is marked Private, you should open a jira to propose 
changing the interface to be Public or LimitedPrivate, or an interface exposed 
for this purpose.
 
-[[hbase.versioning.pre10]]
-=== Pre 1.0 versions
-
-Before the semantic versioning scheme pre-1.0, HBase tracked either Hadoop's 
versions (0.2x) or 0.9x versions. If you are into the arcane, checkout our old 
wiki page on link:http://wiki.apache.org/hadoop/Hbase/HBaseVersions[HBase 
Versioning] which tries to connect the HBase version dots. Below sections cover 
ONLY the releases before 1.0.
-
-[[hbase.development.series]]
-.Odd/Even Versioning or "Development" Series Releases
-Ahead of big releases, we have been putting up preview versions to start the 
feedback cycle turning-over earlier. These "Development" Series releases, 
always odd-numbered, come with no guarantees, not even as regards being able to 
upgrade between two sequential releases (we reserve the right to break 
compatibility across "Development" Series releases). Needless to say, these 
releases are not for production deploys. They are a preview of what is coming 
in the hope that interested parties wil [...]
-
-Our first "Development" Series was the 0.89 set that came out ahead of HBase 
0.90.0. HBase 0.95 is another "Development" Series that portends HBase 0.96.0. 
0.99.x is the last series in "developer preview" mode before 1.0. Afterwards, 
we will be using semantic versioning naming scheme (see above).
-
 [[hbase.binary.compatibility]]
 .Binary Compatibility
-When we say two HBase versions are compatible, we mean that the versions are 
wire and binary compatible. Compatible HBase versions means that clients can 
talk to compatible but differently versioned servers. It means too that you can 
just swap out the jars of one version and replace them with the jars of 
another, compatible version and all will just work. Unless otherwise specified, 
HBase point versions are (mostly) binary compatible. You can safely do rolling 
upgrades between binary com [...]
+When we say two HBase versions are compatible, we mean that the versions are 
wire and binary compatible. Compatible HBase versions means that clients can 
talk to compatible but differently versioned servers. It means too that you can 
just swap out the jars of one version and replace them with the jars of 
another, compatible version and all will just work. Unless otherwise specified, 
HBase point versions are (mostly) binary compatible. You can safely do rolling 
upgrades between binary com [...]
 
 [[hbase.rolling.upgrade]]
 === Rolling Upgrades
@@ -178,9 +163,9 @@ The rolling-restart script will first gracefully stop and 
restart the master, an
 
 [[hbase.rolling.restart]]
 .Rolling Upgrade Between Versions that are Binary/Wire Compatible
-Unless otherwise specified, HBase point versions are binary compatible. You 
can do a <<hbase.rolling.upgrade>> between HBase point versions. For example, 
you can go to 0.94.6 from 0.94.5 by doing a rolling upgrade across the cluster 
replacing the 0.94.5 binary with a 0.94.6 binary.
+Unless otherwise specified, HBase minor versions are binary compatible. You 
can do a <<hbase.rolling.upgrade>> between HBase point versions. For example, 
you can go to 1.2.6 from 1.2.4 by doing a rolling upgrade across the cluster 
replacing the 1.2.4 binary with a 1.2.6 binary.
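
For illustration only, a minimal sketch of such a point-version rolling upgrade, 
assuming a tarball install driven from the master node and that the new binaries 
have already been staged on every host (the paths and symlink layout below are 
examples, not a prescribed procedure):

[source,bash]
----
# On each host, unpack the new release next to the old one (example paths).
$ tar xzf hbase-1.2.6-bin.tar.gz -C /opt
# Repoint the symlink the daemons run from at the new release.
$ ln -sfn /opt/hbase-1.2.6 /opt/hbase
# Then restart the daemons one node at a time using the shipped helper script.
$ /opt/hbase/bin/rolling-restart.sh --config /etc/hbase/conf
----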
 
-In the minor version-particular sections below, we call out where the versions 
are wire/protocol compatible and in this case, it is also possible to do a 
<<hbase.rolling.upgrade>>. For example, in <<upgrade1.0.rolling.upgrade>>, we 
state that it is possible to do a rolling upgrade between hbase-0.98.x and 
hbase-1.0.0.
+In the minor version-particular sections below, we call out where the versions 
are wire/protocol compatible and in this case, it is also possible to do a 
<<hbase.rolling.upgrade>>.
 
 == Upgrade Paths
 
@@ -238,247 +223,10 @@ The Date Tiered Compaction feature available as of 
0.98.19 is available in the 1
 
 [[upgrade1.0.rolling.upgrade]]
 ==== Rolling upgrade from 0.98.x to HBase 1.0.0
-.From 0.96.x to 1.0.0
-NOTE: You cannot do a <<hbase.rolling.upgrade,rolling upgrade>> from 0.96.x to 
1.0.0 without first doing a rolling upgrade to 0.98.x. See comment in 
link:https://issues.apache.org/jira/browse/HBASE-11164?focusedCommentId=14182330&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&#35;comment-14182330[HBASE-11164
 Document and test rolling updates from 0.98 -> 1.0] for the why. Also because 
HBase 1.0.0 enables HFile v3 by default, link:https://issues.apache.org/jira/ 
[...]
 
 There are no known issues running a <<hbase.rolling.upgrade,rolling upgrade>> 
from HBase 0.98.x to HBase 1.0.0.
 
-[[upgrade1.0.from.0.94]]
-==== Upgrading to 1.0 from 0.94
-You cannot do a rolling upgrade from 0.94.x to 1.x.x. You must stop your cluster, 
install the 1.x.x software, run the migration described at 
<<executing.the.0.96.upgrade>> (substituting 1.x.x wherever we make mention of 
0.96.x in the section below), and then restart. Be sure to upgrade your 
ZooKeeper if it is a version less than the required 3.4.x.
-
-[[upgrade0.98]]
-=== Upgrading from 0.96.x to 0.98.x
-A rolling upgrade from 0.96.x to 0.98.x works. The two versions are not binary 
compatible.
-
-Additional steps are required to take advantage of some of the new features of 
0.98.x, including cell visibility labels, cell ACLs, and transparent server 
side encryption. See <<security>> for more information. Significant performance 
improvements include a change to the write ahead log threading model that 
provides higher transaction throughput under high load, reverse scanners, 
MapReduce over snapshot files, and striped compaction.
-
-Clients and servers can run with 0.98.x and 0.96.x versions. However, 
applications may need to be recompiled due to changes in the Java API.
-
-=== Upgrading from 0.94.x to 0.98.x
-A rolling upgrade from 0.94.x directly to 0.98.x does not work. The upgrade 
path follows the same procedures as <<upgrade0.96>>. Additional steps are 
required to use some of the new features of 0.98.x. See <<upgrade0.98>> for an 
abbreviated list of these features.
-
-[[upgrade0.96]]
-=== Upgrading from 0.94.x to 0.96.x
-
-==== The "Singularity"
-
-.HBase 0.96.x was EOL'd, September 1st, 2014
-NOTE: Do not deploy 0.96.x. Deploy at least 0.98.x. See 
link:https://issues.apache.org/jira/browse/HBASE-11642[EOL 0.96].
-
-You will have to stop your old 0.94.x cluster completely to upgrade. If you 
are replicating between clusters, both clusters will have to go down to 
upgrade. Make sure it is a clean shutdown. The fewer WAL files around, the 
faster the upgrade will run (the upgrade will split any log files it finds in 
the filesystem as part of the upgrade process). All clients must be upgraded to 
0.96 too.
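
As a sketch of such a clean shutdown (the _.logs_ path below assumes the 
pre-1.0 WAL layout under your `hbase.rootdir`, shown here with the _/myHBase_ 
root used in the sample output later in this section):

[source,bash]
----
# Stop all HBase daemons cleanly from the master node.
$ ./bin/stop-hbase.sh
# Afterwards, few or no WAL files should remain under the root directory.
$ hadoop fs -ls /myHBase/.logs
----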
-
-The API has changed. You will need to recompile your code against 0.96 and you 
may need to adjust applications to go against new APIs (TODO: List of changes).
-
-[[executing.the.0.96.upgrade]]
-==== Executing the 0.96 Upgrade
-
-.HDFS and ZooKeeper must be up!
-NOTE: HDFS and ZooKeeper should be up and running during the upgrade process.
-
-HBase 0.96.0 comes with an upgrade script. Run
-
-[source,bash]
-----
-$ bin/hbase upgrade
-----
-to see its usage. The script has two main modes: `-check`, and `-execute`.
-
-.check
-The check step is run against a running 0.94 cluster. Run it from a downloaded 
0.96.x binary. The check step is looking for the presence of HFile v1 files. 
These are unsupported in HBase 0.96.0. To have them rewritten as HFile v2 you 
must run a compaction.
-
-The check step prints stats at the end of its run (grep for `“Result:”` in the 
log) printing absolute path of the tables it scanned, any HFile v1 files found, 
the regions containing said files (these regions will need a major compaction), 
and any corrupted files if found. A corrupt file is unreadable, and so its 
version is undefined (neither HFile v1 nor HFile v2).
-
-To run the check step, run
-
-[source,bash]
-----
-$ bin/hbase upgrade -check
-----
-
-Here is sample output:
-----
-Tables Processed:
-hdfs://localhost:41020/myHBase/.META.
-hdfs://localhost:41020/myHBase/usertable
-hdfs://localhost:41020/myHBase/TestTable
-hdfs://localhost:41020/myHBase/t
-
-Count of HFileV1: 2
-HFileV1:
-hdfs://localhost:41020/myHBase/usertable    
/fa02dac1f38d03577bd0f7e666f12812/family/249450144068442524
-hdfs://localhost:41020/myHBase/usertable    
/ecdd3eaee2d2fcf8184ac025555bb2af/family/249450144068442512
-
-Count of corrupted files: 1
-Corrupted Files:
-hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/1
-Count of Regions with HFileV1: 2
-Regions to Major Compact:
-hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812
-hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af
-
-There are some HFileV1, or corrupt files (files with incorrect major version)
-----
-
-In the above sample output, there are two HFile v1 files in two regions, and 
one corrupt file. Corrupt files should probably be removed. The regions that 
have HFile v1s need to be major compacted. To major compact, start up the hbase 
shell and review how to compact an individual region. After the major 
compaction is done, rerun the check step and the HFile v1 files should be gone, 
replaced by HFile v2 instances.
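
For example, from the HBase shell a major compaction can be requested per table 
(or per region, by passing a full region name instead); `usertable` below is the 
table from the sample output above:
----
hbase> major_compact 'usertable'
----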
-
-By default, the check step scans the HBase root directory (defined as 
`hbase.rootdir` in the configuration). To scan a specific directory only, pass 
the `-dir` option.
-[source,bash]
-----
-$ bin/hbase upgrade -check -dir /myHBase/testTable
-----
-The above command would detect HFile v1 files in the _/myHBase/testTable_ 
directory.
-
-Once the check step reports all the HFile v1 files have been rewritten, it is 
safe to proceed with the upgrade.
-
-.execute
-After the _check_ step shows the cluster is free of HFile v1, it is safe to 
proceed with the upgrade. Next is the _execute_ step. You must *SHUTDOWN YOUR 
0.94.x CLUSTER* before you can run the execute step. The execute step will not 
run if it detects running HBase masters or RegionServers.
-
-[NOTE]
-====
-HDFS and ZooKeeper should be up and running during the upgrade process. If 
zookeeper is managed by HBase, then you can start zookeeper so it is available 
to the upgrade by running
-[source,bash]
-----
-$ ./hbase/bin/hbase-daemon.sh start zookeeper
-----
-====
-
-The execute upgrade step is made of three substeps.
-
-* Namespaces: HBase 0.96.0 has support for namespaces. The upgrade needs to 
reorder directories in the filesystem for namespaces to work.
+[[upgrade2.0]]
+=== Upgrading to 2.x
 
-* ZNodes: All znodes are purged so that new ones can be written in their place 
using a new protobuf'ed format and a few are migrated in place: e.g. 
replication and table state znodes
-
-* WAL Log Splitting: If the 0.94.x cluster shutdown was not clean, we'll split 
WAL logs as part of migration before we startup on 0.96.0. This WAL splitting 
runs slower than the native distributed WAL splitting because it is all inside 
the single upgrade process (so try and get a clean shutdown of the 0.94.0 
cluster if you can).
-
-To run the _execute_ step, make sure that first you have copied HBase 0.96.0 
binaries everywhere under servers and under clients. Make sure the 0.94.0 
cluster is down. Then do as follows:
-[source,bash]
-----
-$ bin/hbase upgrade -execute
-----
-Here is some sample output.
-
-----
-Starting Namespace upgrade
-Created version file at hdfs://localhost:41020/myHBase with version=7
-Migrating table testTable to 
hdfs://localhost:41020/myHBase/.data/default/testTable
-.....
-Created version file at hdfs://localhost:41020/myHBase with version=8
-Successfully completed NameSpace upgrade.
-Starting Znode upgrade
-.....
-Successfully completed Znode upgrade
-
-Starting Log splitting
-...
-Successfully completed Log splitting
-----
-
-If the output from the execute step looks good, stop the zookeeper instance 
you started to do the upgrade:
-[source,bash]
-----
-$ ./hbase/bin/hbase-daemon.sh stop zookeeper
-----
-Now start up hbase-0.96.0.
-
-[[s096.migration.troubleshooting]]
-=== Troubleshooting
-
-[[s096.migration.troubleshooting.old.client]]
-.Old Client connecting to 0.96 cluster
-It will fail with an exception like the below. Upgrade.
-----
-17:22:15  Exception in thread "main" java.lang.IllegalArgumentException: Not a 
host:port pair: PBUF
-17:22:15  *
-17:22:15   api-compat-8.ent.cloudera.com ��  ���(
-17:22:15    at 
org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
-17:22:15    at org.apache.hadoop.hbase.ServerName.&init>(ServerName.java:101)
-17:22:15    at 
org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
-17:22:15    at 
org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)
-17:22:15    at 
org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:61)
-17:22:15    at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703)
-17:22:15    at 
org.apache.hadoop.hbase.client.HBaseAdmin.&init>(HBaseAdmin.java:126)
-17:22:15    at Client_4_3_0.setup(Client_4_3_0.java:716)
-17:22:15    at Client_4_3_0.main(Client_4_3_0.java:63)
-----
-
-==== Upgrading `META` to use Protocol Buffers (Protobuf)
-
-When you upgrade from versions prior to 0.96, `META` needs to be converted to 
use protocol buffers. This is controlled by the configuration option 
`hbase.MetaMigrationConvertingToPB`, which is set to `true` by default. 
Therefore, by default, no action is required on your part.
-
-The migration is a one-time event. However, every time your cluster starts, 
`META` is scanned to ensure that it does not need to be converted. If you have 
a very large number of regions, this scan can take a long time. Starting in 
0.98.5, you can set `hbase.MetaMigrationConvertingToPB` to `false` in 
_hbase-site.xml_, to disable this start-up scan. This should be considered an 
expert-level setting.
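
A sketch of the corresponding _hbase-site.xml_ entry (only the property named 
above; the rest of your configuration is unchanged):

[source,xml]
----
<property>
  <name>hbase.MetaMigrationConvertingToPB</name>
  <value>false</value>
</property>
----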
-
-[[upgrade0.94]]
-=== Upgrading from 0.92.x to 0.94.x
-We used to think that 0.92 and 0.94 were interface compatible and that you can 
do a rolling upgrade between these versions but then we figured that 
link:https://issues.apache.org/jira/browse/HBASE-5357[HBASE-5357 Use builder 
pattern in HColumnDescriptor] changed method signatures so rather than return 
`void` they instead return `HColumnDescriptor`. This will 
throw `java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V` so 0.92 and 0.94 
are NOT compatibl [...]
-
-[[upgrade0.92]]
-=== Upgrading from 0.90.x to 0.92.x
-==== Upgrade Guide
-You will find that 0.92.0 runs a little differently to 0.90.x releases. Here 
are a few things to watch out for upgrading from 0.90.x to 0.92.0.
-
-.tl;dr
-[NOTE]
-====
-These are the important things to know before upgrading.
-. Once you upgrade, you can’t go back.
-
-. MSLAB is on by default. Watch that heap usage if you have a lot of regions.
-
-. Distributed Log Splitting is on by default. It should make RegionServer 
failover faster.
-
-. There’s a separate tarball for security.
-
-. If `-XX:MaxDirectMemorySize` is set in your _hbase-env.sh_, it’s going to 
enable the experimental off-heap cache (You may not want this).
-====
-
-.You can’t go back!
-To move to 0.92.0, all you need to do is shut down your cluster, replace your 
HBase 0.90.x with HBase 0.92.0 binaries (be sure you clear out all 0.90.x 
instances) and restart (You cannot do a rolling restart from 0.90.x to 0.92.x 
-- you must restart). On startup, the `.META.` table content is rewritten 
removing the table schema from the `info:regioninfo` column. Also, any flushes 
done post first startup will write out data in the new 0.92.0 file format, 
<<hfilev2>>. This means you cannot  [...]
-
-.MSLAB is ON by default
-In 0.92.0, the 
`<<hbase.hregion.memstore.mslab.enabled,hbase.hregion.memstore.mslab.enabled>>` 
flag is set to `true` (See <<gcpause>>). In 0.90.x it was false. When it is 
enabled, memstores will step allocate memory in MSLAB 2MB chunks even if the 
memstore has zero or just a few small elements. This is fine usually but if you 
had lots of regions per RegionServer in a 0.90.x cluster (and MSLAB was off), 
you may find yourself OOME'ing on upgrade because the `thousands of regions * 
number o [...]
-
-[[dls]]
-.Distributed Log Splitting is on by default
-Previously, WAL logs on crash were split by the Master alone. In 0.92.0, log 
splitting is done by the cluster (See 
link:https://issues.apache.org/jira/browse/hbase-1364[HBASE-1364 [performance\] 
Distributed splitting of regionserver commit logs] or see the blog post 
link:http://blog.cloudera.com/blog/2012/07/hbase-log-splitting/[Apache HBase 
Log Splitting]). This should cut down significantly on the amount of time it 
takes splitting logs and getting regions back online again.
-
-.Memory accounting is different now
-In 0.92.0, <<hfilev2>> indices and bloom filters take up residence in the same 
LRU used for caching blocks that come from the filesystem. In 0.90.x, the HFile v1 
indices lived outside of the LRU so they took up space even if the index was on 
a ‘cold’ file, one that wasn’t being actively used. With the indices now in the 
LRU, you may find you have less space for block caching. Adjust your block 
cache accordingly. See the <<block.cache>> for more detail. The block size 
default size has been ch [...]
-
-.On the Hadoop version to use
-Run 0.92.0 on Hadoop 1.0.x (or CDH3u3). The performance benefits are worth 
making the move. Otherwise, our Hadoop prescription is as it has been; you need 
a Hadoop that supports a working sync. See <<hadoop>>.
-
-If running on Hadoop 1.0.x (or CDH3u3), enable local read. See 
link:http://files.meetup.com/1350427/hug_ebay_jdcryans.pdf[Practical Caching] 
presentation for ruminations on the performance benefits ‘going local’ (and for 
how to enable local reads).
-
-.HBase 0.92.0 ships with ZooKeeper 3.4.2
-If you can, upgrade your ZooKeeper. If you can’t, 3.4.2 clients should work 
against 3.3.X ensembles (HBase makes use of 3.4.2 API).
-
-.Online alter is off by default
-In 0.92.0, we’ve added an experimental online schema alter facility (See 
<<hbase.online.schema.update.enable,hbase.online.schema.update.enable>>). It's 
off by default. Enable it at your own risk. Online alter and splitting tables 
do not play well together so be sure your cluster quiescent using this feature 
(for now).
-
-.WebUI
-The web UI has had a few additions made in 0.92.0. It now shows a list of the 
regions currently transitioning, recent compactions/flushes, and a process list 
of running processes (usually empty if all is well and requests are being 
handled promptly). Other additions include requests by region, a debugging 
servlet dump, etc.
-
-.Security tarball
-We now ship with two tarballs; secure and insecure HBase. Documentation on how 
to setup a secure HBase is on the way.
-
-.Changes in HBase replication
-0.92.0 adds two new features: multi-slave and multi-master replication. The 
way to enable this is the same as adding a new peer, so in order to have 
multi-master you would just run add_peer for each cluster that acts as a master 
to the other slave clusters. Collisions are handled at the timestamp level 
which may or may not be what you want, this needs to be evaluated on a per use 
case basis. Replication is still experimental in 0.92 and is disabled by 
default, run it at your own risk.
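
As a sketch, adding a peer from the HBase shell looks like the following, where 
the peer id `1` and the slave cluster's ZooKeeper quorum, client port, and znode 
parent are placeholders:
----
hbase> add_peer '1', "slave-zk1,slave-zk2,slave-zk3:2181:/hbase"
----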
-
-.RegionServer now aborts if OOME
-If an OOME occurs, we now have the JVM kill -9 the RegionServer process so it goes 
down fast. Previously, a RegionServer might stick around after incurring an OOME, 
limping along in some wounded state. To disable this facility (though we recommend 
you leave it in place), you’d need to edit the bin/hbase file. Look for the 
addition of the -XX:OnOutOfMemoryError="kill -9 %p" arguments (See 
link:https://issues.apache.org/jira/browse/HBASE-4769[HBASE-4769 - ‘Abort 
RegionServer Immediately on OOME’]).
-
-.HFile v2 and the “Bigger, Fewer” Tendency
-0.92.0 stores data in a new format, <<hfilev2>>. As HBase runs, it will move 
all your data from HFile v1 to HFile v2 format. This auto-migration will run in 
the background as flushes and compactions run. HFile v2 allows HBase run with 
larger regions/files. In fact, we encourage that all HBasers going forward tend 
toward Facebook axiom #1, run with larger, fewer regions. If you have lots of 
regions now -- more than 100s per host -- you should look into setting your 
region size up after yo [...]
-
-[[upgrade0.90]]
-=== Upgrading to HBase 0.90.x from 0.20.x or 0.89.x
-This version of 0.90.x HBase can be started on data written by HBase 0.20.x or 
HBase 0.89.x. There is no need for a migration step. However, HBase 0.89.x and 0.90.x 
do write out the names of region directories differently -- they name them with 
an md5 hash of the region name rather than a jenkins hash -- so this means that 
once started, there is no going back to HBase 0.20.x.
-
-Be sure to remove the _hbase-default.xml_ from your _conf_ directory on 
upgrade. A 0.20.x version of this file will have sub-optimal configurations for 
0.90.x HBase. The _hbase-default.xml_ file is now bundled into the HBase jar 
and read from there. If you would like to review the content of this file, see 
it in the src tree at _src/main/resources/hbase-default.xml_ or see 
<<hbase_default_configurations>>.
-
-Finally, if upgrading from 0.20.x, check your .META. schema in the shell. In 
the past we would recommend that users run with a 16kb MEMSTORE_FLUSHSIZE. Run
-----
-hbase> scan '-ROOT-'
-----
-in the shell. This will output the current `.META.` schema. Check 
`MEMSTORE_FLUSHSIZE` size. Is it 16kb (16384)? If so, you will need to change 
this (The 'normal'/default value is 64MB (67108864)). Run the script 
`bin/set_meta_memstore_size.rb`. This will make the necessary edit to your 
`.META.` schema. Failure to run this change will make for a slow cluster. See 
link:https://issues.apache.org/jira/browse/HBASE-3499[HBASE-3499 Users 
upgrading to 0.90.0 need to have their .META. table upd [...]
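
The script ships with HBase; a sketch of one way to invoke it, through the 
bundled JRuby interpreter (the exact invocation may differ by release):

[source,bash]
----
$ ./bin/hbase org.jruby.Main bin/set_meta_memstore_size.rb
----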
+Please see the reference guide for the latest 2.x release for guidance on 
upgrading.
diff --git a/src/main/site/site.xml b/src/main/site/site.xml
index b7debd3..736ab7a 100644
--- a/src/main/site/site.xml
+++ b/src/main/site/site.xml
@@ -77,12 +77,17 @@
       <item name="Bulk Loads" href="book.html#arch.bulk.load" target="_blank" 
/>
       <item name="Metrics" href="metrics.html" target="_blank" />
       <item name="HBase on Windows" href="cygwin.html" target="_blank" />
-      <item name="Cluster replication" href="replication.html" target="_blank" 
/>
-    </menu>
-    <menu name="0.94 Documentation">
-      <item name="API" href="0.94/apidocs/index.html" target="_blank" />
-      <item name="X-Ref" href="0.94/xref/index.html" target="_blank" />
-      <item name="Ref Guide (single-page)" href="0.94/book.html" 
target="_blank" />
+      <item name="Cluster replication" href="book.html#replication" 
target="_blank" />
+      <item name="1.2 Documentation">
+        <item name="API" href="1.2/apidocs/index.html" target="_blank" />
+        <item name="X-Ref" href="1.2/xref/index.html" target="_blank" />
+        <item name="Ref Guide (single-page)" href="1.2/book.html" 
target="_blank" />
+      </item>
+      <item name="1.1 Documentation">
+        <item name="API" href="1.1/apidocs/index.html" target="_blank" />
+        <item name="X-Ref" href="1.1/xref/index.html" target="_blank" />
+        <item name="Ref Guide (single-page)" href="1.1/book.html" 
target="_blank" />
+      </item>
     </menu>
     <menu name="ASF">
       <item name="Apache Software Foundation" 
href="http://www.apache.org/foundation/"; target="_blank" />
