Repository: hbase
Updated Branches:
  refs/heads/master 38701ea8e -> 5fbf80ee5


http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc 
b/src/main/asciidoc/_chapters/troubleshooting.adoc
index b272849..afe24fe 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -49,19 +49,19 @@ For more information on GC pauses, see the 
link:http://www.cloudera.com/blog/201
 
 The key process logs are as follows... (replace <user> with the user that 
started the service, and <hostname> with the machine name) 
 
-NameNode: [path]_$HADOOP_HOME/logs/hadoop-<user>-namenode-<hostname>.log_    
+NameNode: _$HADOOP_HOME/logs/hadoop-<user>-namenode-<hostname>.log_    
 
-DataNode: [path]_$HADOOP_HOME/logs/hadoop-<user>-datanode-<hostname>.log_    
+DataNode: _$HADOOP_HOME/logs/hadoop-<user>-datanode-<hostname>.log_    
 
-JobTracker: [path]_$HADOOP_HOME/logs/hadoop-<user>-jobtracker-<hostname>.log_  
  
+JobTracker: _$HADOOP_HOME/logs/hadoop-<user>-jobtracker-<hostname>.log_    
 
-TaskTracker: 
[path]_$HADOOP_HOME/logs/hadoop-<user>-tasktracker-<hostname>.log_    
+TaskTracker: _$HADOOP_HOME/logs/hadoop-<user>-tasktracker-<hostname>.log_    
 
-HMaster: [path]_$HBASE_HOME/logs/hbase-<user>-master-<hostname>.log_    
+HMaster: _$HBASE_HOME/logs/hbase-<user>-master-<hostname>.log_    
 
-RegionServer: 
[path]_$HBASE_HOME/logs/hbase-<user>-regionserver-<hostname>.log_    
+RegionServer: _$HBASE_HOME/logs/hbase-<user>-regionserver-<hostname>.log_    
 
-ZooKeeper: [path]_TODO_    
+ZooKeeper: _TODO_    
 
 [[trouble.log.locations]]
 === Log Locations
@@ -94,17 +94,17 @@ Enabling the RPC-level logging on a RegionServer can often 
given insight on timi
 Once enabled, the amount of log spewed is voluminous.
 It is not recommended that you leave this logging on for more than short 
bursts of time.
 To enable RPC-level logging, browse to the RegionServer UI and click on _Log 
Level_.
-Set the log level to [var]+DEBUG+ for the package 
[class]+org.apache.hadoop.ipc+ (Thats right, for [class]+hadoop.ipc+, NOT, 
[class]+hbase.ipc+). Then tail the RegionServers log.
+Set the log level to `DEBUG` for the package `org.apache.hadoop.ipc` (That's 
right, for `hadoop.ipc`, NOT `hbase.ipc`). Then tail the RegionServer's log.
 Analyze.
 
-To disable, set the logging level back to [var]+INFO+ level. 
+To disable, set the log level back to `INFO`. 
 
 [[trouble.log.gc]]
 === JVM Garbage Collection Logs
 
 HBase is memory intensive, and using the default GC you can see long pauses in 
all threads including the _Juliet Pause_ aka "GC of Death". To help debug this 
or confirm this is happening GC logging can be turned on in the Java virtual 
machine. 
 
-To enable, in [path]_hbase-env.sh_, uncomment one of the below lines :
+To enable, in _hbase-env.sh_, uncomment one of the below lines:
 
 [source,bourne]
 ----
@@ -188,14 +188,14 @@ CMS pauses are always low, but if your ParNew starts 
growing, you can see minor
 This can be due to the size of the ParNew, which should be relatively small.
 If your ParNew is very large after running HBase for a while, in one example a 
ParNew was about 150MB, then you might have to constrain the size of ParNew 
(The larger it is, the longer the collections take but if its too small, 
objects are promoted to old gen too quickly). In the below we constrain new gen 
size to 64m. 
 
-Add the below line in [path]_hbase-env.sh_: 
+Add the below line in _hbase-env.sh_: 
 [source,bourne]
 ----
 
 export SERVER_GC_OPTS="$SERVER_GC_OPTS -XX:NewSize=64m -XX:MaxNewSize=64m"
 ----      
 
-Similarly, to enable GC logging for client processes, uncomment one of the 
below lines in [path]_hbase-env.sh_:
+Similarly, to enable GC logging for client processes, uncomment one of the 
below lines in _hbase-env.sh_:
 
 [source,bourne]
 ----
@@ -273,7 +273,7 @@ See <<hbase_metrics,hbase metrics>> for more information in 
metric definitions.
 [[trouble.tools.builtin.zkcli]]
 ==== zkcli
 
-[code]+zkcli+ is a very useful tool for investigating ZooKeeper-related issues.
+`zkcli` is a very useful tool for investigating ZooKeeper-related issues.
 To invoke: 
 [source,bourne]
 ----
@@ -312,14 +312,14 @@ The commands (and arguments) are:
 [[trouble.tools.tail]]
 ==== tail
 
-[code]+tail+ is the command line tool that lets you look at the end of a file.
+`tail` is the command line tool that lets you look at the end of a file.
 Add the ``-f'' option and it will refresh when new data is available.
 It's useful when you are wondering what's happening, for example, when a 
cluster is taking a long time to shutdown or startup as you can just fire a new 
terminal and tail the master log (and maybe a few RegionServers). 
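
As a minimal sketch (using a throwaway temp file here rather than a real 
master log, so it is safe to run anywhere):

```shell
# Demo of tail on a scratch file; in practice you would point it at
# something like $HBASE_HOME/logs/hbase-<user>-master-<hostname>.log instead.
LOG=$(mktemp)
printf 'line %s\n' 1 2 3 4 5 >> "$LOG"
tail -n 2 "$LOG"     # shows only the last two lines
# tail -f "$LOG"     # would keep the terminal open, printing new lines as they arrive
rm -f "$LOG"
```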
 
 [[trouble.tools.top]]
 ==== top
 
-[code]+top+ is probably one of the most important tool when first trying to 
see what's running on a machine and how the resources are consumed.
+`top` is probably one of the most important tools when first trying to see 
what's running on a machine and how the resources are consumed.
 Here's an example from production system:
 
 [source]
@@ -351,7 +351,7 @@ Typing ``1'' will give you the detail of how each CPU is 
used instead of the ave
 [[trouble.tools.jps]]
 ==== jps
 
-[code]+jps+ is shipped with every JDK and gives the java process ids for the 
current user (if root, then it gives the ids for all users). Example:
+`jps` is shipped with every JDK and gives the java process ids for the current 
user (if root, then it gives the ids for all users). Example:
 
 [source,bourne]
 ----
@@ -389,7 +389,7 @@ hadoop   17789  155 35.2 9067824 8604364 ?     S&lt;l  
Mar04 9855:48 /usr/java/j
 [[trouble.tools.jstack]]
 ==== jstack
 
-[code]+jstack+ is one of the most important tools when trying to figure out 
what a java process is doing apart from looking at the logs.
+`jstack` is one of the most important tools when trying to figure out what a 
java process is doing apart from looking at the logs.
 It has to be used in conjunction with jps in order to give it a process id.
 It shows a list of threads, each one has a name, and they appear in the order 
that they were created (so the top ones are the most recent threads). Here are 
a few examples: 
 
@@ -566,36 +566,36 @@ For more information on the HBase client, see 
<<client,client>>.
 === ScannerTimeoutException or UnknownScannerException
 
 This is thrown if the time between RPC calls from the client to RegionServer 
exceeds the scan timeout.
-For example, if [code]+Scan.setCaching+ is set to 500, then there will be an 
RPC call to fetch the next batch of rows every 500 [code]+.next()+ calls on the 
ResultScanner because data is being transferred in blocks of 500 rows to the 
client.
+For example, if `Scan.setCaching` is set to 500, then there will be an RPC 
call to fetch the next batch of rows every 500 `.next()` calls on the 
ResultScanner because data is being transferred in blocks of 500 rows to the 
client.
 Reducing the setCaching value may be an option, but setting this value too low 
makes for inefficient processing on numbers of rows. 
 
 See <<perf.hbase.client.caching,perf.hbase.client.caching>>. 
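
A back-of-envelope sketch of the trade-off (the row counts below are 
hypothetical, not from any measured workload):

```shell
# Approximate number of scanner fetch RPCs for a full scan: roughly one
# round trip per CACHING rows (ignoring the final partial batch and the
# scanner open/close calls).
ROWS=100000
for CACHING in 10 100 500; do
  echo "caching=$CACHING -> ~$(( ROWS / CACHING )) fetch RPCs"
done
```

Larger caching values mean fewer round trips but more data held per RPC, which 
is what pushes a scan toward the timeout.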
 
 === Performance Differences in Thrift and Java APIs
 
-Poor performance, or even [code]+ScannerTimeoutExceptions+, can occur if 
[code]+Scan.setCaching+ is too high, as discussed in 
<<trouble.client.scantimeout,trouble.client.scantimeout>>.
+Poor performance, or even `ScannerTimeoutExceptions`, can occur if 
`Scan.setCaching` is too high, as discussed in 
<<trouble.client.scantimeout,trouble.client.scantimeout>>.
 If the Thrift client uses the wrong caching settings for a given workload, 
performance can suffer compared to the Java API.
-To set caching for a given scan in the Thrift client, use the 
[code]+scannerGetList(scannerId,
-          numRows)+ method, where [code]+numRows+ is an integer representing 
the number of rows to cache.
+To set caching for a given scan in the Thrift client, use the 
`scannerGetList(scannerId,
+          numRows)` method, where `numRows` is an integer representing the 
number of rows to cache.
 In one case, it was found that reducing the cache for Thrift scans from 1000 
to 100 increased performance to near parity with the Java API given the same 
queries.
 
 See also Jesse Andersen's 
link:http://blog.cloudera.com/blog/2014/04/how-to-use-the-hbase-thrift-interface-part-3-using-scans/[blog
 post]  about using Scans with Thrift.
 
 [[trouble.client.lease.exception]]
-=== [class]+LeaseException+ when calling[class]+Scanner.next+
+=== `LeaseException` when calling `Scanner.next`
 
 In some situations clients that fetch data from a RegionServer get a 
LeaseException instead of the usual 
<<trouble.client.scantimeout,trouble.client.scantimeout>>.
-Usually the source of the exception is 
[class]+org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)+
        (line number may vary). It tends to happen in the context of a 
slow/freezing RegionServer#next call.
-It can be prevented by having [var]+hbase.rpc.timeout+ > 
[var]+hbase.regionserver.lease.period+.
+Usually the source of the exception is 
`org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)` 
(line number may vary). It tends to happen in the context of a slow/freezing 
RegionServer#next call.
+It can be prevented by having `hbase.rpc.timeout` > 
`hbase.regionserver.lease.period`.
 Harsh J investigated the issue as part of the mailing list thread 
link:http://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E[HBase,
           mail # user - Lease does not exist exceptions]      
 
 [[trouble.client.scarylogs]]
 === Shell or client application throws lots of scary exceptions during normal 
operation
 
-Since 0.20.0 the default log level for [code]+org.apache.hadoop.hbase.*+is 
DEBUG. 
+Since 0.20.0 the default log level for `org.apache.hadoop.hbase.*` is DEBUG. 
 
-On your clients, edit [path]_$HBASE_HOME/conf/log4j.properties_ and change 
this: [code]+log4j.logger.org.apache.hadoop.hbase=DEBUG+ to this: 
[code]+log4j.logger.org.apache.hadoop.hbase=INFO+, or even 
[code]+log4j.logger.org.apache.hadoop.hbase=WARN+. 
+On your clients, edit _$HBASE_HOME/conf/log4j.properties_ and change this: 
`log4j.logger.org.apache.hadoop.hbase=DEBUG` to this: 
`log4j.logger.org.apache.hadoop.hbase=INFO`, or even 
`log4j.logger.org.apache.hadoop.hbase=WARN`. 
 
 [[trouble.client.longpauseswithcompression]]
 === Long Client Pauses With Compression
@@ -606,7 +606,7 @@ Compression can exacerbate the pauses, although it is not 
the source of the prob
 
 See <<precreate.regions,precreate.regions>> on the pattern for pre-creating 
regions and confirm that the table isn't starting with a single region.
 
-See <<perf.configurations,perf.configurations>> for cluster configuration, 
particularly [code]+hbase.hstore.blockingStoreFiles+, 
[code]+hbase.hregion.memstore.block.multiplier+, [code]+MAX_FILESIZE+ (region 
size), and [code]+MEMSTORE_FLUSHSIZE.+      
+See <<perf.configurations,perf.configurations>> for cluster configuration, 
particularly `hbase.hstore.blockingStoreFiles`, 
`hbase.hregion.memstore.block.multiplier`, `MAX_FILESIZE` (region size), and 
`MEMSTORE_FLUSHSIZE`.
 
 A slightly longer explanation of why pauses can happen is as follows: Puts are 
sometimes blocked on the MemStores which are blocked by the flusher thread 
which is blocked because there are too many files to compact because the 
compactor is given too many small files to compact and has to compact the same 
data repeatedly.
 This situation can occur even with minor compactions.
@@ -631,7 +631,7 @@ Secure Client Connect ([Caused by GSSException: No valid 
credentials provided
 
 This issue is caused by bugs in the MIT Kerberos replay_cache component, 
link:http://krbdev.mit.edu/rt/Ticket/Display.html?id=1201[#1201] and 
link:http://krbdev.mit.edu/rt/Ticket/Display.html?id=5924[#5924].
 These bugs caused the old version of krb5-server to erroneously block 
subsequent requests sent from a Principal.
-This caused krb5-server to block the connections sent from one Client (one 
HTable instance with multi-threading connection instances for each 
regionserver); Messages, such as [literal]+Request is a replay (34)+, are 
logged in the client log You can ignore the messages, because HTable will retry 
5 * 10 (50) times for each failed connection by default.
+This caused krb5-server to block the connections sent from one client (one 
HTable instance with multi-threaded connection instances for each 
regionserver). Messages such as `Request is a replay (34)` are logged in the 
client log. You can ignore the messages, because HTable will retry 5 * 10 (50) 
times for each failed connection by default.
 HTable will throw IOException if any connection to the regionserver fails 
after the retries, so that the user client code for HTable instance can handle 
it further. 
 
 Alternatively, update krb5-server to a version which solves these issues, such 
as krb5-server-1.10.3.
@@ -673,8 +673,8 @@ The utility 
<<trouble.tools.builtin.zkcli,trouble.tools.builtin.zkcli>> may help
 You are likely running into the issue that is described and worked through in 
the mail thread 
link:http://search-hadoop.com/m/ubhrX8KvcH/Suspected+memory+leak&subj=Re+Suspected+memory+leak[HBase,
           mail # user - Suspected memory leak] and continued over in 
link:http://search-hadoop.com/m/p2Agc1Zy7Va/MaxDirectMemorySize+Was%253A+Suspected+memory+leak&subj=Re+FeedbackRe+Suspected+memory+leak[HBase,
           mail # dev - FeedbackRe: Suspected memory leak].
-A workaround is passing your client-side JVM a reasonable value for 
[code]+-XX:MaxDirectMemorySize+.
-By default, the [var]+MaxDirectMemorySize+ is equal to your [code]+-Xmx+ max 
heapsize setting (if [code]+-Xmx+ is set). Try seting it to something smaller 
(for example, one user had success setting it to [code]+1g+ when they had a 
client-side heap of [code]+12g+). If you set it too small, it will bring on 
[code]+FullGCs+ so keep it a bit hefty.
+A workaround is passing your client-side JVM a reasonable value for 
`-XX:MaxDirectMemorySize`.
+By default, the `MaxDirectMemorySize` is equal to your `-Xmx` max heapsize 
setting (if `-Xmx` is set). Try setting it to something smaller (for example, 
one user had success setting it to `1g` when they had a client-side heap of 
`12g`). If you set it too small, it will bring on `FullGCs` so keep it a bit 
hefty.
 You want to make this setting client-side only, especially if you are running 
the new experimental server-side off-heap cache, since this feature depends on 
being able to use big direct buffers (You may have to keep separate client-side 
and server-side config dirs). 
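
A minimal sketch of the client-side setting (the `1g` figure is the example 
value from the text above; tune it to your own heap):

```shell
# Append the direct-memory cap to the client-side JVM options only;
# do not add it to server-side opts if you rely on the off-heap cache.
export HBASE_OPTS="$HBASE_OPTS -XX:MaxDirectMemorySize=1g"
echo "$HBASE_OPTS"
```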
 
 [[trouble.client.slowdown.admin]]
@@ -715,7 +715,7 @@ Uncompress and extract the downloaded file, and install the 
policy jars into <ja
 [[trouble.mapreduce.local]]
 === You Think You're On The Cluster, But You're Actually Local
 
-This following stacktrace happened using [code]+ImportTsv+, but things like 
this can happen on any job with a mis-configuration.
+The following stacktrace happened using `ImportTsv`, but things like this can 
happen on any job with a misconfiguration.
 
 [source]
 ----
@@ -748,7 +748,7 @@ at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
 
 LocalJobRunner means the job is running locally, not on the cluster. 
 
-To solve this problem, you should run your MR job with your 
[code]+HADOOP_CLASSPATH+ set to include the HBase dependencies.
+To solve this problem, you should run your MR job with your `HADOOP_CLASSPATH` 
set to include the HBase dependencies.
 The "hbase classpath" utility can be used to do this easily.
 For example (substitute VERSION with your HBase version):
 
@@ -776,7 +776,7 @@ For more information on the NameNode, see 
<<arch.hdfs,arch.hdfs>>.
 [[trouble.namenode.disk]]
 === HDFS Utilization of Tables and Regions
 
-To determine how much space HBase is using on HDFS use the [code]+hadoop+ 
shell commands from the NameNode.
+To determine how much space HBase is using on HDFS, use the `hadoop` shell 
commands from the NameNode.
 For example... 
 
 
@@ -833,7 +833,7 @@ The HDFS directory structure of HBase WAL is..
 ----      
 
 See the 
link:http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html[HDFS User
-          Guide] for other non-shell diagnostic utilities like [code]+fsck+. 
+          Guide] for other non-shell diagnostic utilities like `fsck`. 
 
 [[trouble.namenode.0size.hlogs]]
 ==== Zero size WALs with data in them
@@ -856,7 +856,7 @@ Additionally, after a major compaction if the resulting 
StoreFile is "small" it
 [[trouble.network.spikes]]
 === Network Spikes
 
-If you are seeing periodic network spikes you might want to check the 
[code]+compactionQueues+ to see if major compactions are happening. 
+If you are seeing periodic network spikes you might want to check the 
`compactionQueues` to see if major compactions are happening. 
 
 See <<managed.compactions,managed.compactions>> for more information on 
managing compactions. 
 
@@ -886,7 +886,7 @@ The Master believes the RegionServers have the IP of 
127.0.0.1 - which is localh
 
 The RegionServers are erroneously informing the Master that their IP addresses 
are 127.0.0.1. 
 
-Modify [path]_/etc/hosts_ on the region servers, from...
+Modify _/etc/hosts_ on the region servers, from...
 
 [source]
 ----
@@ -933,7 +933,7 @@ See the Configuration section on link:[LZO compression 
configuration].
 
 Are you running an old JVM (< 1.6.0_u21?)? When you look at a thread dump, 
does it look like threads are BLOCKED but no one holds the lock all are blocked 
on? See link:https://issues.apache.org/jira/browse/HBASE-3622[HBASE 3622 
Deadlock in
             HBaseServer (JVM bug?)].
-Adding [code]`-XX:+UseMembar` to the HBase [var]+HBASE_OPTS+ in 
[path]_conf/hbase-env.sh_ may fix it. 
+Adding `-XX:+UseMembar` to the HBase `HBASE_OPTS` in _conf/hbase-env.sh_ may 
fix it. 
 
 [[trouble.rs.runtime.filehandles]]
 ==== java.io.IOException...(Too many open files)
@@ -1013,13 +1013,13 @@ ERROR 
org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session expi
 The JVM is doing a long running garbage collecting which is pausing every 
threads (aka "stop the world"). Since the RegionServer's local ZooKeeper client 
cannot send heartbeats, the session times out.
 By design, we shut down any node that isn't able to contact the ZooKeeper 
ensemble after getting a timeout so that it stops serving data that may already 
be assigned elsewhere. 
 
-* Make sure you give plenty of RAM (in [path]_hbase-env.sh_), the default of 
1GB won't be able to sustain long running imports.
+* Make sure you give plenty of RAM (in _hbase-env.sh_), the default of 1GB 
won't be able to sustain long running imports.
 * Make sure you don't swap, the JVM never behaves well under swapping.
 * Make sure you are not CPU starving the RegionServer thread.
   For example, if you are running a MapReduce job using 6 CPU-intensive tasks 
on a machine with 4 cores, you are probably starving the RegionServer enough to 
create longer garbage collection pauses.
 * Increase the ZooKeeper session timeout
 
-If you wish to increase the session timeout, add the following to your 
[path]_hbase-site.xml_ to increase the timeout from the default of 60 seconds 
to 120 seconds. 
+If you wish to increase the session timeout, add the following to your 
_hbase-site.xml_ to increase the timeout from the default of 60 seconds to 120 
seconds. 
 
 [source,xml]
 ----
@@ -1138,10 +1138,10 @@ A ZooKeeper server wasn't able to start, throws that 
error.
 xyz is the name of your server.
 
 This is a name lookup problem.
-HBase tries to start a ZooKeeper server on some machine but that machine isn't 
able to find itself in the [var]+hbase.zookeeper.quorum+ configuration. 
+HBase tries to start a ZooKeeper server on some machine but that machine isn't 
able to find itself in the `hbase.zookeeper.quorum` configuration. 
 
 Use the hostname presented in the error message instead of the value you used.
-If you have a DNS server, you can set [var]+hbase.zookeeper.dns.interface+ and 
[var]+hbase.zookeeper.dns.nameserver+ in [path]_hbase-site.xml_ to make sure it 
resolves to the correct FQDN. 
+If you have a DNS server, you can set `hbase.zookeeper.dns.interface` and 
`hbase.zookeeper.dns.nameserver` in _hbase-site.xml_ to make sure it resolves 
to the correct FQDN. 
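
A sketch of the two properties in _hbase-site.xml_ (the interface name and 
nameserver address below are placeholders; substitute your own):

```xml
<!-- hbase-site.xml: hypothetical values for illustration only -->
<property>
  <name>hbase.zookeeper.dns.interface</name>
  <value>eth0</value>
</property>
<property>
  <name>hbase.zookeeper.dns.nameserver</name>
  <value>192.168.1.1</value>
</property>
```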
 
 [[trouble.zookeeper.general]]
 === ZooKeeper, The Cluster Canary
@@ -1191,10 +1191,10 @@ See Andrew's answer here, up on the user list: 
link:http://search-hadoop.com/m/s
 == HBase and Hadoop version issues
 
 [[trouble.versions.205]]
-=== [code]+NoClassDefFoundError+ when trying to run 0.90.x on 
hadoop-0.20.205.x (or hadoop-1.0.x)
+=== `NoClassDefFoundError` when trying to run 0.90.x on hadoop-0.20.205.x (or 
hadoop-1.0.x)
 
 Apache HBase 0.90.x does not ship with hadoop-0.20.205.x, etc.
-To make it run, you need to replace the hadoop jars that Apache HBase shipped 
with in its [path]_lib_ directory with those of the Hadoop you want to run 
HBase on.
+To make it run, you need to replace the hadoop jars that Apache HBase shipped 
with in its _lib_ directory with those of the Hadoop you want to run HBase on.
 If even after replacing Hadoop jars you get the below exception:
 
 [source]
@@ -1212,7 +1212,7 @@ sv4r6s38:       at 
org.apache.hadoop.security.UserGroupInformation.initialize(Us
 sv4r6s38:       at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:177)
 ----
 
-you need to copy under [path]_hbase/lib_, the 
[path]_commons-configuration-X.jar_ you find in your Hadoop's [path]_lib_ 
directory.
+you need to copy the _commons-configuration-X.jar_ from your Hadoop's _lib_ 
directory into _hbase/lib_.
 That should fix the above complaint. 
 
 [[trouble.wrong.version]]
@@ -1228,7 +1228,7 @@ If you see something like the following in your logs 
[computeroutput]+... 2012-0
 If the Hadoop configuration is loaded after the HBase configuration, and you 
have configured custom IPC settings in both HBase and Hadoop, the Hadoop values 
may overwrite the HBase values.
 There is normally no need to change these settings for HBase, so this problem 
is an edge case.
 However, link:https://issues.apache.org/jira/browse/HBASE-11492[HBASE-11492] 
renames these settings for HBase to remove the chance of a conflict.
-Each of the setting names have been prefixed with [literal]+hbase.+, as shown 
in the following table.
+Each of the setting names has been prefixed with `hbase.`, as shown in the 
following table.
 No action is required related to these changes unless you are already 
experiencing a conflict.
 
 These changes were backported to HBase 0.98.x and apply to all newer versions.
@@ -1297,7 +1297,7 @@ To operate with the most efficiency, HBase needs data to 
be available locally.
 Therefore, it is a good practice to run an HDFS datanode on each RegionServer.
 
 .Important Information and Guidelines for HBase and HDFSHBase is a client of 
HDFS.::
-  HBase is an HDFS client, using the HDFS [code]+DFSClient+ class, and 
references to this class appear in HBase logs with other HDFS client log 
messages.
+  HBase is an HDFS client, using the HDFS `DFSClient` class, and references to 
this class appear in HBase logs with other HDFS client log messages.
 
 Configuration is necessary in multiple places.::
   Some HDFS configurations relating to HBase need to be done at the HDFS 
(server) side.
@@ -1309,9 +1309,9 @@ Write errors which affect HBase may be logged in the HDFS 
logs rather than HBase
   Communication problems between datanodes are logged in the HDFS logs, not 
the HBase logs.
 
 HBase communicates with HDFS using two different ports.::
-  HBase communicates with datanodes using the [code]+ipc.Client+ interface and 
the [code]+DataNode+ class.
+  HBase communicates with datanodes using the `ipc.Client` interface and the 
`DataNode` class.
   References to these will appear in HBase logs.
-  Each of these communication channels use a different port (50010 and 50020 
by default). The ports are configured in the HDFS configuration, via the 
[code]+dfs.datanode.address+ and [code]+dfs.datanode.ipc.address+            
parameters.
+  Each of these communication channels uses a different port (50010 and 50020 
by default). The ports are configured in the HDFS configuration, via the 
`dfs.datanode.address` and `dfs.datanode.ipc.address` parameters.
 
 Errors may be logged in HBase, HDFS, or both.::
   When troubleshooting HDFS issues in HBase, check logs in both places for 
errors.
@@ -1320,8 +1320,8 @@ HDFS takes a while to mark a node as dead. You can 
configure HDFS to avoid using
           datanodes.::
   By default, HDFS does not mark a node as dead until it is unreachable for 
630 seconds.
   In Hadoop 1.1 and Hadoop 2.x, this can be alleviated by enabling checks for 
stale datanodes, though this check is disabled by default.
-  You can enable the check for reads and writes separately, via 
[code]+dfs.namenode.avoid.read.stale.datanode+ and 
[code]+dfs.namenode.avoid.write.stale.datanode settings+.
-  A stale datanode is one that has not been reachable for 
[code]+dfs.namenode.stale.datanode.interval+            (default is 30 
seconds). Stale datanodes are avoided, and marked as the last possible target 
for a read or write operation.
+  You can enable the check for reads and writes separately, via the 
`dfs.namenode.avoid.read.stale.datanode` and 
`dfs.namenode.avoid.write.stale.datanode` settings.
+  A stale datanode is one that has not been reachable for 
`dfs.namenode.stale.datanode.interval` (default is 30 seconds). Stale 
datanodes are avoided, and marked as the last possible target for a read or 
write operation.
   For configuration details, see the HDFS documentation.
 
 Settings for HDFS retries and timeouts are important to HBase.::
@@ -1336,28 +1336,28 @@ Connection timeouts occur between the client (HBASE) 
and the HDFS datanode.
 They may occur when establishing a connection, attempting to read, or 
attempting to write.
 The two settings below are used in combination, and affect connections between 
the DFSClient and the datanode, the ipc.Client and the datanode, and 
communication between two datanodes. 
 
-[code]+dfs.client.socket-timeout+ (default: 60000)::
+`dfs.client.socket-timeout` (default: 60000)::
   The amount of time before a client connection times out when establishing a 
connection or reading.
   The value is expressed in milliseconds, so the default is 60 seconds.
 
-[code]+dfs.datanode.socket.write.timeout+ (default: 480000)::
+`dfs.datanode.socket.write.timeout` (default: 480000)::
   The amount of time before a write operation times out.
   The default is 8 minutes, expressed as milliseconds.
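
For reference, a hypothetical _hdfs-site.xml_ fragment setting both timeouts 
explicitly (values are in milliseconds; these are just the defaults restated, 
and there is normally no need to change them):

```xml
<!-- hdfs-site.xml: sketch only; keep the defaults unless you have measured a problem -->
<property>
  <name>dfs.client.socket-timeout</name>
  <value>60000</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value>
</property>
```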
 
 .Typical Error Logs
 The following types of errors are often seen in the logs.
 
-[code]+INFO HDFS.DFSClient: Failed to connect to /xxx50010, add to deadNodes 
and
+`INFO HDFS.DFSClient: Failed to connect to /xxx50010, add to deadNodes and
             continue java.net.SocketTimeoutException: 60000 millis timeout 
while waiting for channel
             to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending
-            remote=/region-server-1:50010]+::
+            remote=/region-server-1:50010]`::
   All datanodes for a block are dead, and recovery is not possible.
   Here is the sequence of events that leads to this error:
 
-[code]+INFO org.apache.hadoop.HDFS.DFSClient: Exception in 
createBlockOutputStream
+`INFO org.apache.hadoop.HDFS.DFSClient: Exception in createBlockOutputStream
             java.net.SocketTimeoutException: 69000 millis timeout while 
waiting for channel to be
             ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending remote=/
-            xxx:50010]+::
+            xxx:50010]`::
   This type of error indicates a write issue.
   In this case, the master wants to split the log.
   It does not have a local datanode so it tries to connect to a remote 
datanode, but the datanode is dead.
@@ -1423,7 +1423,7 @@ This problem appears to affect some versions of OpenJDK 7 
shipped by some Linux
 NSS is configured as the default provider.
 If the host has an x86_64 architecture, depending on if the vendor packages 
contain the defect, the NSS provider will not function correctly. 
 
-To work around this problem, find the JRE home directory and edit the file 
[path]_lib/security/java.security_.
+To work around this problem, find the JRE home directory and edit the file 
_lib/security/java.security_.
 Edit the file to comment out the line: 
 
 [source]
@@ -1446,7 +1446,7 @@ Some users have reported seeing the following error:
 kernel: java: page allocation failure. order:4, mode:0x20
 ----
 
-Raising the value of [code]+min_free_kbytes+ was reported to fix this problem.
+Raising the value of `min_free_kbytes` was reported to fix this problem.
 This parameter is set to a percentage of the amount of RAM on your system, and 
is described in more detail at 
link:http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s3-proc-sys-vm.html.
 
 
 To find the current value on your system, run the following command:
@@ -1460,7 +1460,7 @@ Try doubling, then quadrupling the value.
 Note that setting the value too low or too high could have detrimental effects 
on your system.
 Consult your operating system vendor for specific recommendations.
 
-Use the following command to modify the value of [code]+min_free_kbytes+, 
substituting [replaceable]_<value>_ with your intended value:
+Use the following command to modify the value of `min_free_kbytes`, 
substituting [replaceable]_<value>_ with your intended value:
 
 ----
 [user@host]# echo <value> > /proc/sys/vm/min_free_kbytes

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc 
b/src/main/asciidoc/_chapters/unit_testing.adoc
index 70d27f1..1ffedf1 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -73,7 +73,7 @@ The first step is to add JUnit dependencies to your Maven POM 
file:
 ----
 
 Next, add some unit tests to your code.
-Tests are annotated with [literal]+@Test+.
+Tests are annotated with `@Test`.
 Here, the unit tests are in bold.
 
 [source,java]
@@ -94,7 +94,7 @@ public class TestMyHbaseDAOData {
 }
 ----
 
-These tests ensure that your [code]+createPut+ method creates, populates, and 
returns a [code]+Put+ object with expected values.
+These tests ensure that your `createPut` method creates, populates, and 
returns a `Put` object with expected values.
 Of course, JUnit can do much more than this.
 For an introduction to JUnit, see 
link:https://github.com/junit-team/junit/wiki/Getting-started. 
 
@@ -105,9 +105,9 @@ It goes further than JUnit by allowing you to test the 
interactions between obje
 You can read more about Mockito at its project site, 
link:https://code.google.com/p/mockito/.
 
 You can use Mockito to do unit testing on smaller units.
-For instance, you can mock a [class]+org.apache.hadoop.hbase.Server+ instance 
or a [class]+org.apache.hadoop.hbase.master.MasterServices+ interface reference 
rather than a full-blown [class]+org.apache.hadoop.hbase.master.HMaster+.
+For instance, you can mock an `org.apache.hadoop.hbase.Server` instance or an 
`org.apache.hadoop.hbase.master.MasterServices` interface reference rather than 
a full-blown `org.apache.hadoop.hbase.master.HMaster`.
 
-This example builds upon the example code in <<unit.tests,unit.tests>>, to 
test the [code]+insertRecord+ method.
+This example builds upon the example code in <<unit.tests,unit.tests>>, to 
test the `insertRecord` method.
 
 First, add a dependency for Mockito to your Maven POM file.
 
@@ -122,7 +122,7 @@ First, add a dependency for Mockito to your Maven POM file.
 </dependency>
 ----
 
-Next, add a [code]+@RunWith+ annotation to your test class, to direct it to 
use Mockito.
+Next, add a `@RunWith` annotation to your test class, to direct it to use 
Mockito.
 
 [source,java]
 ----
@@ -158,7 +158,7 @@ public class TestMyHBaseDAO{
 }
 ----
 
-This code populates [code]+HBaseTestObj+ with ``ROWKEY-1'', ``DATA-1'', 
``DATA-2'' as values.
+This code populates `HBaseTestObj` with ``ROWKEY-1'', ``DATA-1'', ``DATA-2'' 
as values.
 It then inserts the record into the mocked table.
 The Put that the DAO would have inserted is captured, and values are tested to 
verify that they are what you expected them to be.
 
@@ -171,7 +171,7 @@ Similarly, you can now expand into other operations such as 
Get, Scan, or Delete
 link:http://mrunit.apache.org/[Apache MRUnit] is a library that allows you to 
unit-test MapReduce jobs.
 You can use it to test HBase jobs in the same way as other MapReduce jobs.
 
-Given a MapReduce job that writes to an HBase table called [literal]+MyTest+, 
which has one column family called [literal]+CF+, the reducer of such a job 
could look like the following:
+Given a MapReduce job that writes to an HBase table called `MyTest`, which has 
one column family called `CF`, the reducer of such a job could look like the 
following:
 
 [source,java]
 ----
@@ -338,7 +338,7 @@ public class MyHBaseIntegrationTest {
 ----
 
 This code creates an HBase mini-cluster and starts it.
-Next, it creates a table called [literal]+MyTest+ with one column family, 
[literal]+CF+.
+Next, it creates a table called `MyTest` with one column family, `CF`.
 A record is inserted, a Get is performed from the same table, and the 
insertion is verified.
 
 NOTE: Starting the mini-cluster takes about 20-30 seconds, but that should be 
appropriate for integration testing. 

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index ef1f816..e90b98a 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -157,10 +157,12 @@ When we say two HBase versions are compatible, we mean 
that the versions are wir
 A rolling upgrade is the process by which you update the servers in your 
cluster one server at a time. You can rolling upgrade across HBase versions if 
they are binary or wire compatible. See <<hbase.rolling.restart>> for more on 
what this means. Coarsely, a rolling upgrade is a graceful stop of each server, 
an update of the software, and then a restart. You do this for each server in 
the cluster. Usually you upgrade the Master first and then the regionservers. 
See <<rolling>> for tools that can help with the rolling upgrade process.
 
 For example, in the below, hbase was symlinked to the actual hbase install. On 
upgrade, before running a rolling restart over the cluster, we changed the 
symlink to point at the new HBase software version and then ran
+
 [source,bash]
 ----
 $ HADOOP_HOME=~/hadoop-2.6.0-CRC-SNAPSHOT ~/hbase/bin/rolling-restart.sh 
--config ~/conf_hbase
 ----
+
 The rolling-restart script will first gracefully stop and restart the master, 
and then each of the regionservers in turn. Because the symlink was changed, on 
restart the server will come up using the new hbase version. Check logs for 
errors as the rolling upgrade proceeds.
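The symlink swap described above can be sketched as follows; all paths here are hypothetical stand-ins for your real install directories:

```shell
# Sketch of the symlink swap done before the rolling restart.
# Paths are illustrative, not the actual install locations.
base=$(mktemp -d)                           # stand-in for the home directory
mkdir -p "$base/hbase-0.98.0" "$base/hbase-1.0.0"
ln -sfn "$base/hbase-0.98.0" "$base/hbase"  # cluster currently runs 0.98
ln -sfn "$base/hbase-1.0.0" "$base/hbase"   # repoint at the new release
readlink "$base/hbase"                      # now resolves to the 1.0.0 install
```

With the symlink repointed, the `rolling-restart.sh` invocation above brings each server back up on the new version.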
 
 [[hbase.rolling.restart]]
@@ -169,12 +171,14 @@ Unless otherwise specified, HBase point versions are 
binary compatible. You can
 
 In the minor version-particular sections below, we call out where the versions 
are wire/protocol compatible and in this case, it is also possible to do a 
<<hbase.rolling.upgrade>>. For example, in <<upgrade1.0.rolling.upgrade>>, we 
state that it is possible to do a rolling upgrade between hbase-0.98.x and 
hbase-1.0.0.
 
+== Upgrade Paths
+
 [[upgrade1.0]]
-== Upgrading from 0.98.x to 1.0.x
+=== Upgrading from 0.98.x to 1.0.x
 
 In this section we first note the significant changes that come in with 1.0.0 
HBase and then we go over the upgrade process.  Be sure to read the significant 
changes section with care so you avoid surprises.
 
-=== Changes of Note!
+==== Changes of Note!
 
 Here we list important changes that are in 1.0.0 since 0.98.x, changes you 
should be aware of, as they will go into effect once you upgrade.
 
@@ -184,7 +188,7 @@ See <<zookeeper.requirements>>.
 
 [[default.ports.changed]]
 .HBase Default Ports Changed
-The ports used by HBase changed.  The used to be in the 600XX range.  In 
hbase-1.0.0 they have been moved up out of the ephemeral port range and are 
160XX instead (Master web UI was 60010 and is now 16030; the RegionServer web 
UI was 60030 and is now 16030, etc). If you want to keep the old port 
locations, copy the port setting configs from [path]_hbase-default.xml_ into 
[path]_hbase-site.xml_, change them back to the old values from hbase-0.98.x 
era, and ensure you've distributed your configurations before you restart.
+The ports used by HBase changed.  They used to be in the 600XX range.  In 
hbase-1.0.0 they have been moved up out of the ephemeral port range and are 
160XX instead (Master web UI was 60010 and is now 16010; the RegionServer web 
UI was 60030 and is now 16030, etc). If you want to keep the old port 
locations, copy the port setting configs from _hbase-default.xml_ into 
_hbase-site.xml_, change them back to the old values from hbase-0.98.x era, and 
ensure you've distributed your configurations before you restart.
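As a sketch, keeping the old 0.98-era web UI ports might look like the following _hbase-site.xml_ fragment. The property names (`hbase.master.info.port`, `hbase.regionserver.info.port`) are assumptions here; confirm them against the port entries in _hbase-default.xml_ before use:

```xml
<!-- Hypothetical fragment: pin the pre-1.0 web UI ports.
     Verify the property names against your hbase-default.xml. -->
<property>
  <name>hbase.master.info.port</name>
  <value>60010</value> <!-- the 1.0.0 default is 16010 -->
</property>
<property>
  <name>hbase.regionserver.info.port</name>
  <value>60030</value> <!-- the 1.0.0 default is 16030 -->
</property>
```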
 
 [[upgrade1.0.hbase.bucketcache.percentage.in.combinedcache]]
 .hbase.bucketcache.percentage.in.combinedcache configuration has been REMOVED
@@ -199,31 +203,31 @@ See the release notes on the issue 
link:https://issues.apache.org/jira/browse/HB
 <<distributed.log.replay>> is off by default in hbase-1.0. Enabling it can 
make a big difference in improving HBase MTTR. Enable this feature if you are 
doing a clean stop/start when you are upgrading. You cannot do a rolling 
upgrade onto this feature (caveat: if you are running a version of hbase newer 
than hbase-0.98.4 -- see 
link:https://issues.apache.org/jira/browse/HBASE-12577[HBASE-12577 Disable 
distributed log replay by default] for more).
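For a clean stop/start upgrade, enabling the feature is a single flag in _hbase-site.xml_. The key name `hbase.master.distributed.log.replay` is our assumption here; verify it against <<distributed.log.replay>> before relying on it:

```xml
<!-- Assumed key name; verify against the distributed.log.replay section. -->
<property>
  <name>hbase.master.distributed.log.replay</name>
  <value>true</value>
</property>
```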
 
 [[upgrade1.0.rolling.upgrade]]
-=== Rolling upgrade from 0.98.x to HBase 1.0.0
+==== Rolling upgrade from 0.98.x to HBase 1.0.0
 .From 0.96.x to 1.0.0
 NOTE: You cannot do a <<hbase.rolling.upgrade,rolling upgrade>> from 0.96.x to 
1.0.0 without first doing a rolling upgrade to 0.98.x. See comment in 
link:https://issues.apache.org/jira/browse/HBASE-11164?focusedCommentId=14182330&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&#35;comment-14182330[HBASE-11164
Document and test rolling updates from 0.98 -> 1.0] for the why. Also because 
hbase-1.0.0 enables hfilev3 by default, 
link:https://issues.apache.org/jira/browse/HBASE-9801[HBASE-9801 Change the 
default HFile version to V3], and support for hfilev3 only arrives in 0.98, 
this is another reason you cannot rolling upgrade from hbase-0.96.x; if the 
rolling upgrade stalls, the 0.96.x servers cannot open files written by the 
newer hbase-1.0.0 servers using the hfilev3 format. 
 
 There are no known issues running a <<hbase.rolling.upgrade,rolling upgrade>> 
from hbase-0.98.x to hbase-1.0.0.
 
 [[upgrade1.0.from.0.94]]
-=== Upgrading to 1.0 from 0.94
+==== Upgrading to 1.0 from 0.94
 You cannot do a rolling upgrade from 0.94.x to 1.x.x.  You must stop your 
cluster, install the 1.x.x software, run the migration described at 
<<executing.the.0.96.upgrade>> (substituting 1.x.x wherever we make mention of 
0.96.x in the section below), and then restart.  Be sure to upgrade your 
zookeeper if it is a version less than the required 3.4.x.
 
 [[upgrade0.98]]
-== Upgrading from 0.96.x to 0.98.x
+=== Upgrading from 0.96.x to 0.98.x
 A rolling upgrade from 0.96.x to 0.98.x works. The two versions are not binary 
compatible.
 
 Additional steps are required to take advantage of some of the new features of 
0.98.x, including cell visibility labels, cell ACLs, and transparent server 
side encryption. See <<security>> for more information. Significant performance 
improvements include a change to the write ahead log threading model that 
provides higher transaction throughput under high load, reverse scanners, 
MapReduce over snapshot files, and striped compaction.
 
 Clients and servers can run with 0.98.x and 0.96.x versions. However, 
applications may need to be recompiled due to changes in the Java API.
 
-== Upgrading from 0.94.x to 0.98.x
+=== Upgrading from 0.94.x to 0.98.x
 A rolling upgrade from 0.94.x directly to 0.98.x does not work. The upgrade 
path follows the same procedures as <<upgrade0.96>>. Additional steps are 
required to use some of the new features of 0.98.x. See <<upgrade0.98>> for an 
abbreviated list of these features.
 
 [[upgrade0.96]]
-== Upgrading from 0.94.x to 0.96.x
+=== Upgrading from 0.94.x to 0.96.x
 
-=== The "Singularity"
+==== The "Singularity"
 
 .HBase 0.96.x was EOL'd, September 1st, 2014
 NOTE: Do not deploy 0.96.x. Deploy at least 0.98.x. See 
link:https://issues.apache.org/jira/browse/HBASE-11642[EOL 0.96].
@@ -233,12 +237,14 @@ You will have to stop your old 0.94.x cluster completely 
to upgrade. If you are
 The API has changed. You will need to recompile your code against 0.96 and you 
may need to adjust applications to go against new APIs (TODO: List of changes).
 
 [[executing.the.0.96.upgrade]]
-=== Executing the 0.96 Upgrade
+==== Executing the 0.96 Upgrade
 
 .HDFS and ZooKeeper must be up!
 NOTE: HDFS and ZooKeeper should be up and running during the upgrade process.
 
 hbase-0.96.0 comes with an upgrade script. Run
+
+[source,bash]
 ----
 $ bin/hbase upgrade
 ----
@@ -250,9 +256,12 @@ The check step is run against a running 0.94 cluster. Run 
it from a downloaded 0
 The check step prints stats at the end of its run (grep for `"Result:"` in 
the log), printing the absolute path of the tables it scanned, any HFileV1 
files found, the regions containing said files (the regions we need to major 
compact to purge the HFileV1s), and any corrupt files found. A corrupt file is 
unreadable, and so its format is undefined (neither HFileV1 nor HFileV2).
 
 To run the check step, run 
+
+[source,bash]
 ----
 $ bin/hbase upgrade -check
 ----
+
 Here is sample output:
 ----
 Tables Processed:
@@ -280,6 +289,7 @@ There are some HFileV1, or corrupt files (files with 
incorrect major version)
 In the above sample output, there are two HFileV1 files in two regions, and 
one corrupt file. Corrupt files should probably be removed. The regions that 
have HFileV1s need to be major compacted. To major compact, start up the hbase 
shell and review how to compact an individual region. After the major 
compaction is done, rerun the check step and the HFileV1s should be gone, 
replaced by HFileV2 instances.
 
 By default, the check step scans the hbase root directory (defined as 
hbase.rootdir in the configuration). To scan a specific directory only, pass 
the -dir option.
+[source,bash]
 ----
 $ bin/hbase upgrade -check -dir /myHBase/testTable
 ----
@@ -293,6 +303,7 @@ After the _check_ step shows the cluster is free of 
HFileV1, it is safe to proce
 [NOTE]
 ====
 HDFS and ZooKeeper should be up and running during the upgrade process. If 
zookeeper is managed by HBase, then you can start zookeeper so it is available 
to the upgrade by running 
+[source,bash]
 ----
 $ ./hbase/bin/hbase-daemon.sh start zookeeper
 ----
@@ -307,6 +318,7 @@ The execute upgrade step is made of three substeps.
 * WAL Log Splitting: If the 0.94.x cluster shutdown was not clean, we'll split 
WAL logs as part of migration before we startup on 0.96.0. This WAL splitting 
runs slower than the native distributed WAL splitting because it is all inside 
the single upgrade process (so try and get a clean shutdown of the 0.94.0 
cluster if you can).
 
 To run the _execute_ step, make sure that first you have copied hbase-0.96.0 
binaries everywhere under servers and under clients. Make sure the 0.94.0 
cluster is down. Then do as follows:
+[source,bash]
 ----
 $ bin/hbase upgrade -execute
 ----
@@ -329,6 +341,7 @@ Successfully completed Log splitting
 ----
          
 If the output from the execute step looks good, stop the zookeeper instance 
you started to do the upgrade:
+[source,bash]
 ----
 $ ./hbase/bin/hbase-daemon.sh stop zookeeper
 ----
@@ -355,19 +368,19 @@ It will fail with an exception like the below. Upgrade.
 17:22:15    at Client_4_3_0.main(Client_4_3_0.java:63)
 ----
 
-=== Upgrading `META` to use Protocol Buffers (Protobuf)
+==== Upgrading `META` to use Protocol Buffers (Protobuf)
 
 When you upgrade from versions prior to 0.96, `META` needs to be converted to 
use protocol buffers. This is controlled by the configuration option 
`hbase.MetaMigrationConvertingToPB`, which is set to `true` by default. 
Therefore, by default, no action is required on your part.
 
 The migration is a one-time event. However, every time your cluster starts, 
`META` is scanned to ensure that it does not need to be converted. If you have 
a very large number of regions, this scan can take a long time. Starting in 
0.98.5, you can set `hbase.MetaMigrationConvertingToPB` to `false` in 
_hbase-site.xml_, to disable this start-up scan. This should be considered an 
expert-level setting.
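For example, a cluster on 0.98.5 or later that has already completed the one-time migration could disable the start-up scan with this _hbase-site.xml_ fragment:

```xml
<!-- Expert-level: skip the start-up scan that checks whether META
     still needs conversion to protocol buffers. -->
<property>
  <name>hbase.MetaMigrationConvertingToPB</name>
  <value>false</value>
</property>
```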
 
 [[upgrade0.94]]
-== Upgrading from 0.92.x to 0.94.x
+=== Upgrading from 0.92.x to 0.94.x
 We used to think that 0.92 and 0.94 were interface compatible and that you can 
do a rolling upgrade between these versions but then we figured that 
link:https://issues.apache.org/jira/browse/HBASE-5357[HBASE-5357 Use builder 
pattern in HColumnDescriptor] changed method signatures so rather than return 
void they instead return HColumnDescriptor. This will throw 
`java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V`, 
so 0.92 and 0.94 are NOT compatible. You cannot do a rolling upgrade between 
them.
 
 [[upgrade0.92]]
-== Upgrading from 0.90.x to 0.92.x
-=== Upgrade Guide
+=== Upgrading from 0.90.x to 0.92.x
+==== Upgrade Guide
 You will find that 0.92.0 runs a little differently to 0.90.x releases. Here 
are a few things to watch out for upgrading from 0.90.x to 0.92.0.
 
 .tl:dr
@@ -425,7 +438,7 @@ If an OOME, we now have the JVM kill -9 the regionserver 
process so it goes down
 0.92.0 stores data in a new format, <<hfilev2>>. As HBase runs, it will move 
all your data from HFile v1 to HFile v2 format. This auto-migration will run in 
the background as flushes and compactions run. HFile V2 allows HBase to run with 
larger regions/files. In fact, we encourage that all HBasers going forward tend 
toward Facebook axiom #1, run with larger, fewer regions. If you have lots of 
regions now -- more than 100s per host -- you should look into setting your 
region size up after you move to 0.92.0 (In 0.92.0, default size is now 1G, up 
from 256M), and then running online merge tool (See 
link:https://issues.apache.org/jira/browse/HBASE-1621[HBASE-1621 merge tool 
should work on online cluster, but disabled table]).
 
 [[upgrade0.90]]
-== Upgrading to HBase 0.90.x from 0.20.x or 0.89.x
+=== Upgrading to HBase 0.90.x from 0.20.x or 0.89.x
 This version of 0.90.x HBase can be started on data written by HBase 0.20.x or 
HBase 0.89.x. There is no need for a migration step. HBase 0.89.x and 0.90.x 
do write out the names of region directories differently -- they name them 
with an md5 hash of the region name rather than a jenkins hash -- so this means 
that once started, there is no going back to HBase 0.20.x.
 
 Be sure to remove the _hbase-default.xml_ from your _conf_ directory on 
upgrade. A 0.20.x version of this file will have sub-optimal configurations for 
0.90.x HBase. The _hbase-default.xml_ file is now bundled into the HBase jar 
and read from there. If you would like to review the content of this file, see 
it in the src tree at _src/main/resources/hbase-default.xml_ or see 
<<hbase_default_configurations>>.

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/_chapters/zookeeper.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/zookeeper.adoc 
b/src/main/asciidoc/_chapters/zookeeper.adoc
index 973a6ad..f6134b7 100644
--- a/src/main/asciidoc/_chapters/zookeeper.adoc
+++ b/src/main/asciidoc/_chapters/zookeeper.adoc
@@ -32,19 +32,19 @@ All participating nodes and clients need to be able to 
access the running ZooKee
 Apache HBase by default manages a ZooKeeper "cluster" for you.
 It will start and stop the ZooKeeper ensemble as part of the HBase start/stop 
process.
 You can also manage the ZooKeeper ensemble independent of HBase and just point 
HBase at the cluster it should use.
-To toggle HBase management of ZooKeeper, use the [var]+HBASE_MANAGES_ZK+ 
variable in [path]_conf/hbase-env.sh_.
-This variable, which defaults to [var]+true+, tells HBase whether to 
start/stop the ZooKeeper ensemble servers as part of HBase start/stop.
+To toggle HBase management of ZooKeeper, use the `HBASE_MANAGES_ZK` variable 
in _conf/hbase-env.sh_.
+This variable, which defaults to `true`, tells HBase whether to start/stop the 
ZooKeeper ensemble servers as part of HBase start/stop.
 
-When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper 
configuration using its native [path]_zoo.cfg_ file, or, the easier option is 
to just specify ZooKeeper options directly in [path]_conf/hbase-site.xml_.
-A ZooKeeper configuration option can be set as a property in the HBase 
[path]_hbase-site.xml_ XML configuration file by prefacing the ZooKeeper option 
name with [var]+hbase.zookeeper.property+.
-For example, the [var]+clientPort+ setting in ZooKeeper can be changed by 
setting the [var]+hbase.zookeeper.property.clientPort+ property.
+When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper 
configuration using its native _zoo.cfg_ file, or, the easier option is to just 
specify ZooKeeper options directly in _conf/hbase-site.xml_.
+A ZooKeeper configuration option can be set as a property in the HBase 
_hbase-site.xml_ XML configuration file by prefacing the ZooKeeper option name 
with `hbase.zookeeper.property`.
+For example, the `clientPort` setting in ZooKeeper can be changed by setting 
the `hbase.zookeeper.property.clientPort` property.
 For all default values used by HBase, including ZooKeeper configuration, see 
<<hbase_default_configurations,hbase default configurations>>.
-Look for the [var]+hbase.zookeeper.property+ prefix.
-For the full list of ZooKeeper configurations, see ZooKeeper's [path]_zoo.cfg_.
-HBase does not ship with a [path]_zoo.cfg_ so you will need to browse the 
[path]_conf_ directory in an appropriate ZooKeeper download.
+Look for the `hbase.zookeeper.property` prefix.
+For the full list of ZooKeeper configurations, see ZooKeeper's _zoo.cfg_.
+HBase does not ship with a _zoo.cfg_ so you will need to browse the _conf_ 
directory in an appropriate ZooKeeper download.
 
-You must at least list the ensemble servers in [path]_hbase-site.xml_ using 
the [var]+hbase.zookeeper.quorum+ property.
-This property defaults to a single ensemble member at [var]+localhost+ which 
is not suitable for a fully distributed HBase.
+You must at least list the ensemble servers in _hbase-site.xml_ using the 
`hbase.zookeeper.quorum` property.
+This property defaults to a single ensemble member at `localhost` which is not 
suitable for a fully distributed HBase.
 (It binds to the local machine only and remote clients will not be able to 
connect). 
 
 .How many ZooKeepers should I run?
@@ -59,9 +59,9 @@ Thus, an ensemble of 5 allows 2 peers to fail, and thus is 
more fault tolerant t
 Give each ZooKeeper server around 1GB of RAM, and if possible, its own 
dedicated disk (A dedicated disk is the best thing you can do to ensure a 
performant ZooKeeper ensemble). For very heavily loaded clusters, run ZooKeeper 
servers on separate machines from RegionServers (DataNodes and TaskTrackers).
 ====
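The fault-tolerance arithmetic above (a majority of the ensemble must survive) can be sketched as:

```shell
# Majority quorum: an ensemble of N servers tolerates floor((N-1)/2)
# failed peers, which is why even-sized ensembles buy no extra safety.
tolerated_failures() {
  echo $(( ($1 - 1) / 2 ))
}
tolerated_failures 3   # tolerates 1 failure
tolerated_failures 4   # still tolerates only 1 failure
tolerated_failures 5   # tolerates 2 failures
```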
 
-For example, to have HBase manage a ZooKeeper quorum on nodes 
_rs{1,2,3,4,5}.example.com_, bound to port 2222 (the default is 2181) ensure 
[var]+HBASE_MANAGE_ZK+ is commented out or set to [var]+true+ in 
[path]_conf/hbase-env.sh_ and then edit [path]_conf/hbase-site.xml_    and set 
[var]+hbase.zookeeper.property.clientPort+ and [var]+hbase.zookeeper.quorum+.
-You should also set [var]+hbase.zookeeper.property.dataDir+ to other than the 
default as the default has ZooKeeper persist data under [path]_/tmp_ which is 
often cleared on system restart.
-In the example below we have ZooKeeper persist to 
[path]_/user/local/zookeeper_.
+For example, to have HBase manage a ZooKeeper quorum on nodes 
_rs{1,2,3,4,5}.example.com_, bound to port 2222 (the default is 2181), ensure 
`HBASE_MANAGES_ZK` is commented out or set to `true` in _conf/hbase-env.sh_ and 
then edit _conf/hbase-site.xml_ and set 
`hbase.zookeeper.property.clientPort` and `hbase.zookeeper.quorum`.
+You should also set `hbase.zookeeper.property.dataDir` to something other than 
the default, as the default has ZooKeeper persist data under _/tmp_, which is 
often cleared on system restart.
+In the example below we have ZooKeeper persist to _/user/local/zookeeper_.
 
 [source,java]
 ----
@@ -102,7 +102,7 @@ In the example below we have ZooKeeper persist to 
[path]_/user/local/zookeeper_.
 ====
 The newer version, the better.
 For example, some folks have been bitten by 
link:https://issues.apache.org/jira/browse/ZOOKEEPER-1277[ZOOKEEPER-1277].
-If running zookeeper 3.5+, you can ask hbase to make use of the new multi 
operation by enabling <<hbase.zookeeper.usemulti,hbase.zookeeper.useMulti>>" in 
your [path]_hbase-site.xml_. 
+If running zookeeper 3.5+, you can ask hbase to make use of the new multi 
operation by enabling <<hbase.zookeeper.usemulti,hbase.zookeeper.useMulti>> in 
your _hbase-site.xml_. 
 ====
 
 .ZooKeeper Maintenance
@@ -115,7 +115,7 @@ zookeeper could start dropping sessions if it has to run 
through a directory of
 
 == Using existing ZooKeeper ensemble
 
-To point HBase at an existing ZooKeeper cluster, one that is not managed by 
HBase, set [var]+HBASE_MANAGES_ZK+ in [path]_conf/hbase-env.sh_ to false
+To point HBase at an existing ZooKeeper cluster, one that is not managed by 
HBase, set `HBASE_MANAGES_ZK` in _conf/hbase-env.sh_ to `false`.
 
 ----
 
@@ -124,8 +124,8 @@ To point HBase at an existing ZooKeeper cluster, one that 
is not managed by HBas
   export HBASE_MANAGES_ZK=false
 ----
 
-Next set ensemble locations and client port, if non-standard, in 
[path]_hbase-site.xml_, or add a suitably configured [path]_zoo.cfg_ to HBase's 
[path]_CLASSPATH_.
-HBase will prefer the configuration found in [path]_zoo.cfg_ over any settings 
in [path]_hbase-site.xml_.
+Next set ensemble locations and client port, if non-standard, in 
_hbase-site.xml_, or add a suitably configured _zoo.cfg_ to HBase's _CLASSPATH_.
+HBase will prefer the configuration found in _zoo.cfg_ over any settings in 
_hbase-site.xml_.
 
 When HBase manages ZooKeeper, it will start/stop the ZooKeeper servers as a 
part of the regular start/stop scripts.
 If you would like to run ZooKeeper yourself, independent of HBase start/stop, 
you would do the following
@@ -136,7 +136,7 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
 ----
 
 Note that you can use HBase in this manner to spin up a ZooKeeper cluster, 
unrelated to HBase.
-Just make sure to set [var]+HBASE_MANAGES_ZK+ to [var]+false+      if you want 
it to stay up across HBase restarts so that when HBase shuts down, it doesn't 
take ZooKeeper down with it.
+Just make sure to set `HBASE_MANAGES_ZK` to `false` if you want it to 
stay up across HBase restarts so that when HBase shuts down, it doesn't take 
ZooKeeper down with it.
 
 For more information about running a distinct ZooKeeper cluster, see the 
ZooKeeper 
link:http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html[Getting
         Started Guide].
@@ -154,21 +154,21 @@ ZooKeeper/HBase mutual authentication 
(link:https://issues.apache.org/jira/brows
 === Operating System Prerequisites
 
 You need to have a working Kerberos KDC setup.
-For each [code]+$HOST+ that will run a ZooKeeper server, you should have a 
principle [code]+zookeeper/$HOST+.
-For each such host, add a service key (using the [code]+kadmin+ or 
[code]+kadmin.local+        tool's [code]+ktadd+ command) for 
[code]+zookeeper/$HOST+ and copy this file to [code]+$HOST+, and make it 
readable only to the user that will run zookeeper on [code]+$HOST+.
-Note the location of this file, which we will use below as 
[path]_$PATH_TO_ZOOKEEPER_KEYTAB_. 
+For each `$HOST` that will run a ZooKeeper server, you should have a principal 
`zookeeper/$HOST`.
+For each such host, add a service key (using the `kadmin` or `kadmin.local` 
tool's `ktadd` command) for `zookeeper/$HOST`, copy this file to 
`$HOST`, and make it readable only to the user that will run zookeeper on 
`$HOST`.
+Note the location of this file, which we will use below as 
_$PATH_TO_ZOOKEEPER_KEYTAB_. 
 
-Similarly, for each [code]+$HOST+ that will run an HBase server (master or 
regionserver), you should have a principle: [code]+hbase/$HOST+.
-For each host, add a keytab file called [path]_hbase.keytab_ containing a 
service key for [code]+hbase/$HOST+, copy this file to [code]+$HOST+, and make 
it readable only to the user that will run an HBase service on [code]+$HOST+.
-Note the location of this file, which we will use below as 
[path]_$PATH_TO_HBASE_KEYTAB_. 
+Similarly, for each `$HOST` that will run an HBase server (master or 
regionserver), you should have a principal: `hbase/$HOST`.
+For each host, add a keytab file called _hbase.keytab_ containing a service 
key for `hbase/$HOST`, copy this file to `$HOST`, and make it readable only to 
the user that will run an HBase service on `$HOST`.
+Note the location of this file, which we will use below as 
_$PATH_TO_HBASE_KEYTAB_. 
 
 Each user who will be an HBase client should also be given a Kerberos 
principal.
 This principal should usually have a password assigned to it (as opposed to, 
as with the HBase servers, a keytab file) which only this user knows.
-The client's principal's [code]+maxrenewlife+ should be set so that it can be 
renewed enough so that the user can complete their HBase client processes.
-For example, if a user runs a long-running HBase client process that takes at 
most 3 days, we might create this user's principal within [code]+kadmin+ with: 
[code]+addprinc -maxrenewlife 3days+.
+The client's principal's `maxrenewlife` should be set so that it can be 
renewed for long enough that the user can complete their HBase client processes.
+For example, if a user runs a long-running HBase client process that takes at 
most 3 days, we might create this user's principal within `kadmin` with: 
`addprinc -maxrenewlife 3days`.
 The Zookeeper client and server libraries manage their own ticket refreshment 
by running threads that wake up periodically to do the refreshment. 
 
-On each host that will run an HBase client (e.g. [code]+hbase shell+), add the 
following file to the HBase home directory's [path]_conf_ directory:
+On each host that will run an HBase client (e.g. `hbase shell`), add the 
following file to the HBase home directory's _conf_ directory:
 
 [source,java]
 ----
@@ -180,11 +180,11 @@ Client {
 };
 ----
 
-We'll refer to this JAAS configuration file as [path]_$CLIENT_CONF_        
below.
+We'll refer to this JAAS configuration file as _$CLIENT_CONF_        below.
 
 === HBase-managed Zookeeper Configuration
 
-On each node that will run a zookeeper, a master, or a regionserver, create a 
link:http://docs.oracle.com/javase/1.4.2/docs/guide/security/jgss/tutorials/LoginConfigFile.html[JAAS]
        configuration file in the conf directory of the node's 
[path]_HBASE_HOME_        directory that looks like the following:
+On each node that will run a zookeeper, a master, or a regionserver, create a 
link:http://docs.oracle.com/javase/1.4.2/docs/guide/security/jgss/tutorials/LoginConfigFile.html[JAAS]
        configuration file in the conf directory of the node's _HBASE_HOME_     
   directory that looks like the following:
 
 [source,java]
 ----
@@ -206,14 +206,14 @@ Client {
 };
 ----
 
-where the [path]_$PATH_TO_HBASE_KEYTAB_ and [path]_$PATH_TO_ZOOKEEPER_KEYTAB_ 
files are what you created above, and [code]+$HOST+ is the hostname for that 
node.
+where the _$PATH_TO_HBASE_KEYTAB_ and _$PATH_TO_ZOOKEEPER_KEYTAB_ files are 
what you created above, and `$HOST` is the hostname for that node.
 
-The [code]+Server+ section will be used by the Zookeeper quorum server, while 
the [code]+Client+ section will be used by the HBase master and regionservers.
-The path to this file should be substituted for the text 
[path]_$HBASE_SERVER_CONF_ in the [path]_hbase-env.sh_ listing below.
+The `Server` section will be used by the Zookeeper quorum server, while the 
`Client` section will be used by the HBase master and regionservers.
+The path to this file should be substituted for the text _$HBASE_SERVER_CONF_ 
in the _hbase-env.sh_ listing below.
 
-The path to this file should be substituted for the text [path]_$CLIENT_CONF_ 
in the [path]_hbase-env.sh_ listing below. 
+The path to this file should be substituted for the text _$CLIENT_CONF_ in the 
_hbase-env.sh_ listing below. 
 
-Modify your [path]_hbase-env.sh_ to include the following:
+Modify your _hbase-env.sh_ to include the following:
 
 [source,bourne]
 ----
@@ -225,9 +225,9 @@ export 
HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
 export 
HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
 ----
 
-where [path]_$HBASE_SERVER_CONF_ and [path]_$CLIENT_CONF_ are the full paths 
to the JAAS configuration files created above.
+where _$HBASE_SERVER_CONF_ and _$CLIENT_CONF_ are the full paths to the JAAS 
configuration files created above.
 
-Modify your [path]_hbase-site.xml_ on each node that will run zookeeper, 
master or regionserver to contain:
+Modify your _hbase-site.xml_ on each node that will run zookeeper, master or 
regionserver to contain:
 
 [source,java]
 ----
@@ -256,7 +256,7 @@ Modify your [path]_hbase-site.xml_ on each node that will 
run zookeeper, master
 </configuration>
 ----
 
-where [code]+$ZK_NODES+ is the comma-separated list of hostnames of the 
Zookeeper Quorum hosts.
+where `$ZK_NODES` is the comma-separated list of hostnames of the Zookeeper 
Quorum hosts.
 
 Start your hbase cluster by running one or more of the following commands on 
the appropriate hosts: 
 
@@ -283,9 +283,9 @@ Client {
 };
 ----
 
-where the [path]_$PATH_TO_HBASE_KEYTAB_ is the keytab created above for HBase 
services to run on this host, and [code]+$HOST+ is the hostname for that node.
+where the _$PATH_TO_HBASE_KEYTAB_ is the keytab created above for HBase 
services to run on this host, and `$HOST` is the hostname for that node.
 Put this in the HBase home's configuration directory.
-We'll refer to this file's full pathname as [path]_$HBASE_SERVER_CONF_ below.
+We'll refer to this file's full pathname as _$HBASE_SERVER_CONF_ below.
 
 Modify your hbase-env.sh to include the following:
 
@@ -298,7 +298,7 @@ export 
HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
 export 
HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
 ----
 
-Modify your [path]_hbase-site.xml_ on each node that will run a master or regionserver to contain:
+Modify your _hbase-site.xml_ on each node that will run a master or regionserver to contain:
 
 [source,xml]
 ----
@@ -315,9 +315,9 @@ Modify your [path]_hbase-site.xml_ on each node that will run a master or region
 </configuration>
 ----
 
-where [code]+$ZK_NODES+ is the comma-separated list of hostnames of the Zookeeper Quorum hosts.
+where `$ZK_NODES` is the comma-separated list of hostnames of the Zookeeper Quorum hosts.
 
-Add a [path]_zoo.cfg_ for each Zookeeper Quorum host containing:
+Add a _zoo.cfg_ for each Zookeeper Quorum host containing:
 
 [source,java]
 ----
@@ -342,8 +342,8 @@ Server {
 };
 ----
 
-where [code]+$HOST+ is the hostname of each Quorum host.
-We will refer to the full pathname of this file as [path]_$ZK_SERVER_CONF_ below. 
+where `$HOST` is the hostname of each Quorum host.
+We will refer to the full pathname of this file as _$ZK_SERVER_CONF_ below. 
 
 Start your Zookeepers on each Zookeeper Quorum host with:
 
@@ -427,7 +427,7 @@ bin/hbase regionserver &
 
 ==== Fix target/cached_classpath.txt
 
-You must override the standard hadoop-core jar file from the [code]+target/cached_classpath.txt+ file with the version containing the HADOOP-7070 fix.
+You must override the standard hadoop-core jar file from the `target/cached_classpath.txt` file with the version containing the HADOOP-7070 fix.
 You can use the following script to do this:
 
 ----
@@ -440,7 +440,7 @@ mv target/tmp.txt target/cached_classpath.txt
 
 This would avoid the need for a separate Hadoop jar that fixes link:https://issues.apache.org/jira/browse/HADOOP-7070[HADOOP-7070]. 
 
-==== Elimination of [code]+kerberos.removeHostFromPrincipal+ and [code]+kerberos.removeRealmFromPrincipal+
+==== Elimination of `kerberos.removeHostFromPrincipal` and `kerberos.removeRealmFromPrincipal`
 
 
 
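(Editor's note on the `target/cached_classpath.txt` hunk above: the substitution it describes can be sketched as a small shell script. Everything below is illustrative — the jar names, paths, and `sed` pattern are hypothetical placeholders, not taken from the book's own script, of which only the final `mv target/tmp.txt target/cached_classpath.txt` line is visible in this hunk.)

```shell
#!/bin/sh
# Hypothetical sketch: replace the stock hadoop-core jar entry in
# target/cached_classpath.txt with a patched build carrying the
# HADOOP-7070 fix. Both jar paths are illustrative placeholders.
STOCK_JAR="hadoop-core-1.0.0.jar"
PATCHED_JAR="/opt/patched/hadoop-core-1.0.0-patched.jar"

# Set up a sample classpath file for demonstration.
mkdir -p target
echo "/repo/lib/${STOCK_JAR}:/repo/lib/other.jar" > target/cached_classpath.txt

# Rewrite the matching classpath entry via a temp file, then move it
# into place, as the book's script does with target/tmp.txt.
sed "s|[^:]*${STOCK_JAR}|${PATCHED_JAR}|" target/cached_classpath.txt > target/tmp.txt
mv target/tmp.txt target/cached_classpath.txt
cat target/cached_classpath.txt
```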

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/asciidoc/book.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index da0162a..790a23c 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -19,7 +19,7 @@
  */
 ////
 
-= Apache HBase (TM) Reference Guide image:jumping-orca_rotated_25percent.png[]
+= Apache HBase (TM) Reference Guide image:hbase_logo.png[] image:jumping-orca_rotated_25percent.png[]
 :Author: Apache HBase Team
 :Email: <[email protected]>
 :doctype: book

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fbf80ee/src/main/xslt/configuration_to_asciidoc_chapter.xsl
----------------------------------------------------------------------
diff --git a/src/main/xslt/configuration_to_asciidoc_chapter.xsl b/src/main/xslt/configuration_to_asciidoc_chapter.xsl
index 7164fde..428d95a 100644
--- a/src/main/xslt/configuration_to_asciidoc_chapter.xsl
+++ b/src/main/xslt/configuration_to_asciidoc_chapter.xsl
@@ -1,7 +1,7 @@
 <?xml version="1.0"?>
 <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"; version="1.0">
-<xsl:output method="text"/>
-<xsl:template match="configuration">
+
+
 <!--
 /**
  * Copyright 2010 The Apache Software Foundation
@@ -26,6 +26,19 @@
 This stylesheet is used making an html version of hbase-default.adoc.
 -->
 
+<xsl:output method="text"/>
+
+<!-- Normalize space -->
+<xsl:template match="text()">
+    <xsl:if test="normalize-space(.)">
+      <xsl:value-of select="normalize-space(.)"/>
+    </xsl:if>
+</xsl:template>
+
+<!-- Grab nodes of the <configuration> element -->
+<xsl:template match="configuration">
+
+<!-- Print the license at the top of the file -->
 ////
 /**
  *
@@ -46,7 +59,6 @@ This stylesheet is used making an html version of hbase-default.adoc.
  * limitations under the License.
  */
 ////
-
 :doctype: book
 :numbered:
 :toc: left
@@ -58,18 +70,23 @@ This stylesheet is used making an html version of hbase-default.adoc.
 
 The documentation below is generated using the default hbase configuration file, _hbase-default.xml_, as source.
 
-<xsl:for-each select="property">
-  <xsl:if test="not(@skipInDoc)">
-[[<xsl:value-of select="name" />]]
-*`<xsl:value-of select="name"/>`*::
+  <xsl:for-each select="property">
+    <xsl:if test="not(@skipInDoc)">
+[[<xsl:apply-templates select="name"/>]]
+`<xsl:apply-templates select="name"/>`::
 +
 .Description
-<xsl:value-of select="description"/>
+<xsl:apply-templates select="description"/>
 +
 .Default
-`<xsl:value-of select="value"/>`
+<xsl:choose>
+  <xsl:when test="value != ''">`<xsl:apply-templates select="value"/>`
+
+</xsl:when>
+  <xsl:otherwise>none</xsl:otherwise>
+</xsl:choose>
+    </xsl:if>
+  </xsl:for-each>
 
-  </xsl:if>
-</xsl:for-each>
 </xsl:template>
 </xsl:stylesheet>
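(Editor's note: the `match="text()"` template added in this patch relies on XPath's `normalize-space()`, which strips leading/trailing whitespace and collapses each internal whitespace run to a single space, so indented `<description>` text from `hbase-default.xml` renders as one clean line. Its behavior can be mimicked in plain Python — a sketch; the sample string is illustrative:)

```python
def normalize_space(s: str) -> str:
    # Mimic XPath 1.0 normalize-space(): strip leading/trailing
    # whitespace and collapse internal whitespace runs to one space.
    # (Python's split() also treats Unicode whitespace as separators,
    # which XPath 1.0 does not; equivalent for ASCII documents.)
    return " ".join(s.split())

# A description's text as it might appear in hbase-default.xml,
# indented and wrapped across lines:
desc = """
    Temporary directory on the local filesystem.
    Change this setting to point to a more permanent location.
"""
print(normalize_space(desc))
```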
