http://git-wip-us.apache.org/repos/asf/hbase-site/blob/f07ee53f/book.html
----------------------------------------------------------------------
diff --git a/book.html b/book.html
index 876f41c..2810c41 100644
--- a/book.html
+++ b/book.html
@@ -4,11 +4,11 @@
 <meta charset="UTF-8">
 <!--[if IE]><meta http-equiv="X-UA-Compatible" content="IE=edge"><![endif]-->
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
-<meta name="generator" content="Asciidoctor 1.5.3">
+<meta name="generator" content="Asciidoctor 1.5.2">
 <meta name="author" content="Apache HBase Team">
 <title>Apache HBase &#8482; Reference Guide</title>
 <link rel="stylesheet" href="./hbase.css">
-<link rel="stylesheet" 
href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.4.0/css/font-awesome.min.css";>
+<link rel="stylesheet" 
href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.2.0/css/font-awesome.min.css";>
 <link rel="stylesheet" href="./coderay-asciidoctor.css">
 </head>
 <body class="book toc2 toc-left">
@@ -93,190 +93,191 @@
 <li><a href="#_constraints">42. Constraints</a></li>
 <li><a href="#schema.casestudies">43. Schema Design Case Studies</a></li>
 <li><a href="#schema.ops">44. Operational and Performance Configuration 
Options</a></li>
+<li><a href="#_special_cases">45. Special Cases</a></li>
 </ul>
 </li>
 <li><a href="#mapreduce">HBase and MapReduce</a>
 <ul class="sectlevel1">
-<li><a href="#hbase.mapreduce.classpath">45. HBase, MapReduce, and the 
CLASSPATH</a></li>
-<li><a href="#_mapreduce_scan_caching">46. MapReduce Scan Caching</a></li>
-<li><a href="#_bundled_hbase_mapreduce_jobs">47. Bundled HBase MapReduce 
Jobs</a></li>
-<li><a href="#_hbase_as_a_mapreduce_job_data_source_and_data_sink">48. HBase 
as a MapReduce Job Data Source and Data Sink</a></li>
-<li><a href="#_writing_hfiles_directly_during_bulk_import">49. Writing HFiles 
Directly During Bulk Import</a></li>
-<li><a href="#_rowcounter_example">50. RowCounter Example</a></li>
-<li><a href="#splitter">51. Map-Task Splitting</a></li>
-<li><a href="#mapreduce.example">52. HBase MapReduce Examples</a></li>
-<li><a href="#mapreduce.htable.access">53. Accessing Other HBase Tables in a 
MapReduce Job</a></li>
-<li><a href="#mapreduce.specex">54. Speculative Execution</a></li>
-<li><a href="#cascading">55. Cascading</a></li>
+<li><a href="#hbase.mapreduce.classpath">46. HBase, MapReduce, and the 
CLASSPATH</a></li>
+<li><a href="#_mapreduce_scan_caching">47. MapReduce Scan Caching</a></li>
+<li><a href="#_bundled_hbase_mapreduce_jobs">48. Bundled HBase MapReduce 
Jobs</a></li>
+<li><a href="#_hbase_as_a_mapreduce_job_data_source_and_data_sink">49. HBase 
as a MapReduce Job Data Source and Data Sink</a></li>
+<li><a href="#_writing_hfiles_directly_during_bulk_import">50. Writing HFiles 
Directly During Bulk Import</a></li>
+<li><a href="#_rowcounter_example">51. RowCounter Example</a></li>
+<li><a href="#splitter">52. Map-Task Splitting</a></li>
+<li><a href="#mapreduce.example">53. HBase MapReduce Examples</a></li>
+<li><a href="#mapreduce.htable.access">54. Accessing Other HBase Tables in a 
MapReduce Job</a></li>
+<li><a href="#mapreduce.specex">55. Speculative Execution</a></li>
+<li><a href="#cascading">56. Cascading</a></li>
 </ul>
 </li>
 <li><a href="#security">Securing Apache HBase</a>
 <ul class="sectlevel1">
-<li><a href="#_using_secure_http_https_for_the_web_ui">56. Using Secure HTTP 
(HTTPS) for the Web UI</a></li>
-<li><a href="#hbase.secure.spnego.ui">57. Using SPNEGO for Kerberos 
authentication with Web UIs</a></li>
-<li><a href="#hbase.secure.configuration">58. Secure Client Access to Apache 
HBase</a></li>
-<li><a href="#hbase.secure.simpleconfiguration">59. Simple User Access to 
Apache HBase</a></li>
-<li><a href="#_securing_access_to_hdfs_and_zookeeper">60. Securing Access to 
HDFS and ZooKeeper</a></li>
-<li><a href="#_securing_access_to_your_data">61. Securing Access To Your 
Data</a></li>
-<li><a href="#security.example.config">62. Security Configuration 
Example</a></li>
+<li><a href="#_using_secure_http_https_for_the_web_ui">57. Using Secure HTTP 
(HTTPS) for the Web UI</a></li>
+<li><a href="#hbase.secure.spnego.ui">58. Using SPNEGO for Kerberos 
authentication with Web UIs</a></li>
+<li><a href="#hbase.secure.configuration">59. Secure Client Access to Apache 
HBase</a></li>
+<li><a href="#hbase.secure.simpleconfiguration">60. Simple User Access to 
Apache HBase</a></li>
+<li><a href="#_securing_access_to_hdfs_and_zookeeper">61. Securing Access to 
HDFS and ZooKeeper</a></li>
+<li><a href="#_securing_access_to_your_data">62. Securing Access To Your 
Data</a></li>
+<li><a href="#security.example.config">63. Security Configuration 
Example</a></li>
 </ul>
 </li>
 <li><a href="#_architecture">Architecture</a>
 <ul class="sectlevel1">
-<li><a href="#arch.overview">63. Overview</a></li>
-<li><a href="#arch.catalog">64. Catalog Tables</a></li>
-<li><a href="#architecture.client">65. Client</a></li>
-<li><a href="#client.filter">66. Client Request Filters</a></li>
-<li><a href="#architecture.master">67. Master</a></li>
-<li><a href="#regionserver.arch">68. RegionServer</a></li>
-<li><a href="#regions.arch">69. Regions</a></li>
-<li><a href="#arch.bulk.load">70. Bulk Loading</a></li>
-<li><a href="#arch.hdfs">71. HDFS</a></li>
-<li><a href="#arch.timelineconsistent.reads">72. Timeline-consistent High 
Available Reads</a></li>
-<li><a href="#hbase_mob">73. Storing Medium-sized Objects (MOB)</a></li>
+<li><a href="#arch.overview">64. Overview</a></li>
+<li><a href="#arch.catalog">65. Catalog Tables</a></li>
+<li><a href="#architecture.client">66. Client</a></li>
+<li><a href="#client.filter">67. Client Request Filters</a></li>
+<li><a href="#architecture.master">68. Master</a></li>
+<li><a href="#regionserver.arch">69. RegionServer</a></li>
+<li><a href="#regions.arch">70. Regions</a></li>
+<li><a href="#arch.bulk.load">71. Bulk Loading</a></li>
+<li><a href="#arch.hdfs">72. HDFS</a></li>
+<li><a href="#arch.timelineconsistent.reads">73. Timeline-consistent High 
Available Reads</a></li>
+<li><a href="#hbase_mob">74. Storing Medium-sized Objects (MOB)</a></li>
 </ul>
 </li>
 <li><a href="#hbase_apis">Apache HBase APIs</a>
 <ul class="sectlevel1">
-<li><a href="#_examples">74. Examples</a></li>
+<li><a href="#_examples">75. Examples</a></li>
 </ul>
 </li>
 <li><a href="#external_apis">Apache HBase External APIs</a>
 <ul class="sectlevel1">
-<li><a href="#_rest">75. REST</a></li>
-<li><a href="#_thrift">76. Thrift</a></li>
-<li><a href="#c">77. C/C++ Apache HBase Client</a></li>
-<li><a href="#jdo">78. Using Java Data Objects (JDO) with HBase</a></li>
-<li><a href="#scala">79. Scala</a></li>
-<li><a href="#jython">80. Jython</a></li>
+<li><a href="#_rest">76. REST</a></li>
+<li><a href="#_thrift">77. Thrift</a></li>
+<li><a href="#c">78. C/C++ Apache HBase Client</a></li>
+<li><a href="#jdo">79. Using Java Data Objects (JDO) with HBase</a></li>
+<li><a href="#scala">80. Scala</a></li>
+<li><a href="#jython">81. Jython</a></li>
 </ul>
 </li>
 <li><a href="#thrift">Thrift API and Filter Language</a>
 <ul class="sectlevel1">
-<li><a href="#thrift.filter_language">81. Filter Language</a></li>
+<li><a href="#thrift.filter_language">82. Filter Language</a></li>
 </ul>
 </li>
 <li><a href="#spark">HBase and Spark</a>
 <ul class="sectlevel1">
-<li><a href="#_basic_spark">82. Basic Spark</a></li>
-<li><a href="#_spark_streaming">83. Spark Streaming</a></li>
-<li><a href="#_bulk_load">84. Bulk Load</a></li>
-<li><a href="#_sparksql_dataframes">85. SparkSQL/DataFrames</a></li>
+<li><a href="#_basic_spark">83. Basic Spark</a></li>
+<li><a href="#_spark_streaming">84. Spark Streaming</a></li>
+<li><a href="#_bulk_load">85. Bulk Load</a></li>
+<li><a href="#_sparksql_dataframes">86. SparkSQL/DataFrames</a></li>
 </ul>
 </li>
 <li><a href="#cp">Apache HBase Coprocessors</a>
 <ul class="sectlevel1">
-<li><a href="#_coprocessor_overview">86. Coprocessor Overview</a></li>
-<li><a href="#_types_of_coprocessors">87. Types of Coprocessors</a></li>
-<li><a href="#cp_loading">88. Loading Coprocessors</a></li>
-<li><a href="#cp_example">89. Examples</a></li>
-<li><a href="#_guidelines_for_deploying_a_coprocessor">90. Guidelines For 
Deploying A Coprocessor</a></li>
-<li><a href="#_monitor_time_spent_in_coprocessors">91. Monitor Time Spent in 
Coprocessors</a></li>
+<li><a href="#_coprocessor_overview">87. Coprocessor Overview</a></li>
+<li><a href="#_types_of_coprocessors">88. Types of Coprocessors</a></li>
+<li><a href="#cp_loading">89. Loading Coprocessors</a></li>
+<li><a href="#cp_example">90. Examples</a></li>
+<li><a href="#_guidelines_for_deploying_a_coprocessor">91. Guidelines For 
Deploying A Coprocessor</a></li>
+<li><a href="#_monitor_time_spent_in_coprocessors">92. Monitor Time Spent in 
Coprocessors</a></li>
 </ul>
 </li>
 <li><a href="#performance">Apache HBase Performance Tuning</a>
 <ul class="sectlevel1">
-<li><a href="#perf.os">92. Operating System</a></li>
-<li><a href="#perf.network">93. Network</a></li>
-<li><a href="#jvm">94. Java</a></li>
-<li><a href="#perf.configurations">95. HBase Configurations</a></li>
-<li><a href="#perf.zookeeper">96. ZooKeeper</a></li>
-<li><a href="#perf.schema">97. Schema Design</a></li>
-<li><a href="#perf.general">98. HBase General Patterns</a></li>
-<li><a href="#perf.writing">99. Writing to HBase</a></li>
-<li><a href="#perf.reading">100. Reading from HBase</a></li>
-<li><a href="#perf.deleting">101. Deleting from HBase</a></li>
-<li><a href="#perf.hdfs">102. HDFS</a></li>
-<li><a href="#perf.ec2">103. Amazon EC2</a></li>
-<li><a href="#perf.hbase.mr.cluster">104. Collocating HBase and 
MapReduce</a></li>
-<li><a href="#perf.casestudy">105. Case Studies</a></li>
+<li><a href="#perf.os">93. Operating System</a></li>
+<li><a href="#perf.network">94. Network</a></li>
+<li><a href="#jvm">95. Java</a></li>
+<li><a href="#perf.configurations">96. HBase Configurations</a></li>
+<li><a href="#perf.zookeeper">97. ZooKeeper</a></li>
+<li><a href="#perf.schema">98. Schema Design</a></li>
+<li><a href="#perf.general">99. HBase General Patterns</a></li>
+<li><a href="#perf.writing">100. Writing to HBase</a></li>
+<li><a href="#perf.reading">101. Reading from HBase</a></li>
+<li><a href="#perf.deleting">102. Deleting from HBase</a></li>
+<li><a href="#perf.hdfs">103. HDFS</a></li>
+<li><a href="#perf.ec2">104. Amazon EC2</a></li>
+<li><a href="#perf.hbase.mr.cluster">105. Collocating HBase and 
MapReduce</a></li>
+<li><a href="#perf.casestudy">106. Case Studies</a></li>
 </ul>
 </li>
 <li><a href="#trouble">Troubleshooting and Debugging Apache HBase</a>
 <ul class="sectlevel1">
-<li><a href="#trouble.general">106. General Guidelines</a></li>
-<li><a href="#trouble.log">107. Logs</a></li>
-<li><a href="#trouble.resources">108. Resources</a></li>
-<li><a href="#trouble.tools">109. Tools</a></li>
-<li><a href="#trouble.client">110. Client</a></li>
-<li><a href="#trouble.mapreduce">111. MapReduce</a></li>
-<li><a href="#trouble.namenode">112. NameNode</a></li>
-<li><a href="#trouble.network">113. Network</a></li>
-<li><a href="#trouble.rs">114. RegionServer</a></li>
-<li><a href="#trouble.master">115. Master</a></li>
-<li><a href="#trouble.zookeeper">116. ZooKeeper</a></li>
-<li><a href="#trouble.ec2">117. Amazon EC2</a></li>
-<li><a href="#trouble.versions">118. HBase and Hadoop version issues</a></li>
-<li><a href="#_ipc_configuration_conflicts_with_hadoop">119. IPC Configuration 
Conflicts with Hadoop</a></li>
-<li><a href="#_hbase_and_hdfs">120. HBase and HDFS</a></li>
-<li><a href="#trouble.tests">121. Running unit or integration tests</a></li>
-<li><a href="#trouble.casestudy">122. Case Studies</a></li>
-<li><a href="#trouble.crypto">123. Cryptographic Features</a></li>
-<li><a href="#_operating_system_specific_issues">124. Operating System 
Specific Issues</a></li>
-<li><a href="#_jdk_issues">125. JDK Issues</a></li>
+<li><a href="#trouble.general">107. General Guidelines</a></li>
+<li><a href="#trouble.log">108. Logs</a></li>
+<li><a href="#trouble.resources">109. Resources</a></li>
+<li><a href="#trouble.tools">110. Tools</a></li>
+<li><a href="#trouble.client">111. Client</a></li>
+<li><a href="#trouble.mapreduce">112. MapReduce</a></li>
+<li><a href="#trouble.namenode">113. NameNode</a></li>
+<li><a href="#trouble.network">114. Network</a></li>
+<li><a href="#trouble.rs">115. RegionServer</a></li>
+<li><a href="#trouble.master">116. Master</a></li>
+<li><a href="#trouble.zookeeper">117. ZooKeeper</a></li>
+<li><a href="#trouble.ec2">118. Amazon EC2</a></li>
+<li><a href="#trouble.versions">119. HBase and Hadoop version issues</a></li>
+<li><a href="#_ipc_configuration_conflicts_with_hadoop">120. IPC Configuration 
Conflicts with Hadoop</a></li>
+<li><a href="#_hbase_and_hdfs">121. HBase and HDFS</a></li>
+<li><a href="#trouble.tests">122. Running unit or integration tests</a></li>
+<li><a href="#trouble.casestudy">123. Case Studies</a></li>
+<li><a href="#trouble.crypto">124. Cryptographic Features</a></li>
+<li><a href="#_operating_system_specific_issues">125. Operating System 
Specific Issues</a></li>
+<li><a href="#_jdk_issues">126. JDK Issues</a></li>
 </ul>
 </li>
 <li><a href="#casestudies">Apache HBase Case Studies</a>
 <ul class="sectlevel1">
-<li><a href="#casestudies.overview">126. Overview</a></li>
-<li><a href="#casestudies.schema">127. Schema Design</a></li>
-<li><a href="#casestudies.perftroub">128. Performance/Troubleshooting</a></li>
+<li><a href="#casestudies.overview">127. Overview</a></li>
+<li><a href="#casestudies.schema">128. Schema Design</a></li>
+<li><a href="#casestudies.perftroub">129. Performance/Troubleshooting</a></li>
 </ul>
 </li>
 <li><a href="#ops_mgt">Apache HBase Operational Management</a>
 <ul class="sectlevel1">
-<li><a href="#tools">129. HBase Tools and Utilities</a></li>
-<li><a href="#ops.regionmgt">130. Region Management</a></li>
-<li><a href="#node.management">131. Node Management</a></li>
-<li><a href="#hbase_metrics">132. HBase Metrics</a></li>
-<li><a href="#ops.monitoring">133. HBase Monitoring</a></li>
-<li><a href="#_cluster_replication">134. Cluster Replication</a></li>
-<li><a href="#_running_multiple_workloads_on_a_single_cluster">135. Running 
Multiple Workloads On a Single Cluster</a></li>
-<li><a href="#ops.backup">136. HBase Backup</a></li>
-<li><a href="#ops.snapshots">137. HBase Snapshots</a></li>
-<li><a href="#snapshots_azure">138. Storing Snapshots in Microsoft Azure Blob 
Storage</a></li>
-<li><a href="#ops.capacity">139. Capacity Planning and Region Sizing</a></li>
-<li><a href="#table.rename">140. Table Rename</a></li>
+<li><a href="#tools">130. HBase Tools and Utilities</a></li>
+<li><a href="#ops.regionmgt">131. Region Management</a></li>
+<li><a href="#node.management">132. Node Management</a></li>
+<li><a href="#hbase_metrics">133. HBase Metrics</a></li>
+<li><a href="#ops.monitoring">134. HBase Monitoring</a></li>
+<li><a href="#_cluster_replication">135. Cluster Replication</a></li>
+<li><a href="#_running_multiple_workloads_on_a_single_cluster">136. Running 
Multiple Workloads On a Single Cluster</a></li>
+<li><a href="#ops.backup">137. HBase Backup</a></li>
+<li><a href="#ops.snapshots">138. HBase Snapshots</a></li>
+<li><a href="#snapshots_azure">139. Storing Snapshots in Microsoft Azure Blob 
Storage</a></li>
+<li><a href="#ops.capacity">140. Capacity Planning and Region Sizing</a></li>
+<li><a href="#table.rename">141. Table Rename</a></li>
 </ul>
 </li>
 <li><a href="#developer">Building and Developing Apache HBase</a>
 <ul class="sectlevel1">
-<li><a href="#getting.involved">141. Getting Involved</a></li>
-<li><a href="#repos">142. Apache HBase Repositories</a></li>
-<li><a href="#_ides">143. IDEs</a></li>
-<li><a href="#build">144. Building Apache HBase</a></li>
-<li><a href="#releasing">145. Releasing Apache HBase</a></li>
-<li><a href="#hbase.rc.voting">146. Voting on Release Candidates</a></li>
-<li><a href="#documentation">147. Generating the HBase Reference Guide</a></li>
-<li><a href="#hbase.org">148. Updating <a 
href="http://hbase.apache.org";>hbase.apache.org</a></a></li>
-<li><a href="#hbase.tests">149. Tests</a></li>
-<li><a href="#developing">150. Developer Guidelines</a></li>
+<li><a href="#getting.involved">142. Getting Involved</a></li>
+<li><a href="#repos">143. Apache HBase Repositories</a></li>
+<li><a href="#_ides">144. IDEs</a></li>
+<li><a href="#build">145. Building Apache HBase</a></li>
+<li><a href="#releasing">146. Releasing Apache HBase</a></li>
+<li><a href="#hbase.rc.voting">147. Voting on Release Candidates</a></li>
+<li><a href="#documentation">148. Generating the HBase Reference Guide</a></li>
+<li><a href="#hbase.org">149. Updating <a 
href="http://hbase.apache.org";>hbase.apache.org</a></a></li>
+<li><a href="#hbase.tests">150. Tests</a></li>
+<li><a href="#developing">151. Developer Guidelines</a></li>
 </ul>
 </li>
 <li><a href="#unit.tests">Unit Testing HBase Applications</a>
 <ul class="sectlevel1">
-<li><a href="#_junit">151. JUnit</a></li>
-<li><a href="#mockito">152. Mockito</a></li>
-<li><a href="#_mrunit">153. MRUnit</a></li>
-<li><a href="#_integration_testing_with_an_hbase_mini_cluster">154. 
Integration Testing with an HBase Mini-Cluster</a></li>
+<li><a href="#_junit">152. JUnit</a></li>
+<li><a href="#mockito">153. Mockito</a></li>
+<li><a href="#_mrunit">154. MRUnit</a></li>
+<li><a href="#_integration_testing_with_an_hbase_mini_cluster">155. 
Integration Testing with an HBase Mini-Cluster</a></li>
 </ul>
 </li>
 <li><a href="#protobuf">Protobuf in HBase</a>
 <ul class="sectlevel1">
-<li><a href="#_protobuf">155. Protobuf</a></li>
+<li><a href="#_protobuf">156. Protobuf</a></li>
 </ul>
 </li>
 <li><a href="#zookeeper">ZooKeeper</a>
 <ul class="sectlevel1">
-<li><a href="#_using_existing_zookeeper_ensemble">156. Using existing 
ZooKeeper ensemble</a></li>
-<li><a href="#zk.sasl.auth">157. SASL Authentication with ZooKeeper</a></li>
+<li><a href="#_using_existing_zookeeper_ensemble">157. Using existing 
ZooKeeper ensemble</a></li>
+<li><a href="#zk.sasl.auth">158. SASL Authentication with ZooKeeper</a></li>
 </ul>
 </li>
 <li><a href="#community">Community</a>
 <ul class="sectlevel1">
-<li><a href="#_decisions">158. Decisions</a></li>
-<li><a href="#community.roles">159. Community Roles</a></li>
-<li><a href="#hbase.commit.msg.format">160. Commit Message format</a></li>
+<li><a href="#_decisions">159. Decisions</a></li>
+<li><a href="#community.roles">160. Community Roles</a></li>
+<li><a href="#hbase.commit.msg.format">161. Commit Message format</a></li>
 </ul>
 </li>
 <li><a href="#_appendix">Appendix</a>
@@ -286,7 +287,7 @@
 <li><a href="#hbck.in.depth">Appendix C: hbck In Depth</a></li>
 <li><a href="#appendix_acl_matrix">Appendix D: Access Control Matrix</a></li>
 <li><a href="#compression">Appendix E: Compression and Data Block Encoding In 
HBase</a></li>
-<li><a href="#data.block.encoding.enable">161. Enable Data Block 
Encoding</a></li>
+<li><a href="#data.block.encoding.enable">162. Enable Data Block 
Encoding</a></li>
 <li><a href="#sql">Appendix F: SQL over HBase</a></li>
 <li><a href="#ycsb">Appendix G: YCSB</a></li>
 <li><a href="#_hfile_format_2">Appendix H: HFile format</a></li>
@@ -295,8 +296,8 @@
 <li><a href="#asf">Appendix K: HBase and the Apache Software 
Foundation</a></li>
 <li><a href="#orca">Appendix L: Apache HBase Orca</a></li>
 <li><a href="#tracing">Appendix M: Enabling Dapper-like Tracing in 
HBase</a></li>
-<li><a href="#tracing.client.modifications">162. Client Modifications</a></li>
-<li><a href="#tracing.client.shell">163. Tracing from HBase Shell</a></li>
+<li><a href="#tracing.client.modifications">163. Client Modifications</a></li>
+<li><a href="#tracing.client.shell">164. Tracing from HBase Shell</a></li>
 <li><a href="#hbase.rpc">Appendix N: 0.95 RPC Specification</a></li>
 </ul>
 </li>
@@ -5821,7 +5822,7 @@ It may be possible to skip across 
versions&#8201;&#8212;&#8201;for example go fr
 <p>APIs available in a patch version will be available in all later patch 
versions. However, new APIs may be added which will not be available in earlier 
patch versions.</p>
 </li>
 <li>
-<p>New APIs introduced in a patch version will only be added in a source 
compatible way <sup class="footnote">[<a id="_footnoteref_1" class="footnote" 
href="#_footnote_1" title="View footnote.">1</a>]</sup>: i.e. code that 
implements public APIs will continue to compile.</p>
+<p>New APIs introduced in a patch version will only be added in a source 
compatible way <span class="footnote">[<a id="_footnoteref_1" class="footnote" 
href="#_footnote_1" title="View footnote.">1</a>]</span>: i.e. code that 
implements public APIs will continue to compile.</p>
 </li>
 <li>
 <p>Example: A user using a newly deprecated API does not need to modify 
application code with HBase API calls until the next major version.</p>
@@ -5885,7 +5886,7 @@ It may be possible to skip across 
versions&#8201;&#8212;&#8201;for example go fr
 <div class="title">Summary</div>
 <ul>
 <li>
-<p>A patch upgrade is a drop-in replacement. Any change that is not Java 
binary and source compatible would not be allowed.<sup class="footnote">[<a 
id="_footnoteref_2" class="footnote" href="#_footnote_2" title="View 
footnote.">2</a>]</sup> Downgrading versions within patch releases may not be 
compatible.</p>
+<p>A patch upgrade is a drop-in replacement. Any change that is not Java 
binary and source compatible would not be allowed.<span class="footnote">[<a 
id="_footnoteref_2" class="footnote" href="#_footnote_2" title="View 
footnote.">2</a>]</span> Downgrading versions within patch releases may not be 
compatible.</p>
 </li>
 <li>
 <p>A minor upgrade requires no application/client code modification. Ideally 
it would be a drop-in replacement but client code, coprocessors, filters, etc 
might have to be recompiled if new jars are used.</p>
@@ -5896,7 +5897,7 @@ It may be possible to skip across 
versions&#8201;&#8212;&#8201;for example go fr
 </ul>
 </div>
 <table class="tableblock frame-all grid-all spread">
-<caption class="title">Table 3. Compatibility Matrix <sup class="footnote">[<a 
id="_footnoteref_3" class="footnote" href="#_footnote_3" title="View 
footnote.">3</a>]</sup></caption>
+<caption class="title">Table 3. Compatibility Matrix <span 
class="footnote">[<a id="_footnoteref_3" class="footnote" href="#_footnote_3" 
title="View footnote.">3</a>]</span></caption>
 <colgroup>
 <col style="width: 25%;">
 <col style="width: 25%;">
@@ -5924,7 +5925,7 @@ It may be possible to skip across 
versions&#8201;&#8212;&#8201;for example go fr
 </tr>
 <tr>
 <td class="tableblock halign-left valign-top"><p class="tableblock">File 
Format Compatibility</p></td>
-<td class="tableblock halign-left valign-top"><p class="tableblock">N <sup 
class="footnote">[<a id="_footnoteref_4" class="footnote" href="#_footnote_4" 
title="View footnote.">4</a>]</sup></p></td>
+<td class="tableblock halign-left valign-top"><p class="tableblock">N <span 
class="footnote">[<a id="_footnoteref_4" class="footnote" href="#_footnote_4" 
title="View footnote.">4</a>]</span></p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">Y</p></td>
 <td class="tableblock halign-left valign-top"><p class="tableblock">Y</p></td>
 </tr>
@@ -9341,8 +9342,293 @@ If you don&#8217;t have time to build it both ways and 
compare, my advice would
 <div class="sect1">
 <h2 id="schema.ops"><a class="anchor" href="#schema.ops"></a>44. Operational 
and Performance Configuration Options</h2>
 <div class="sectionbody">
+<div class="sect3">
+<h4 id="_tune_hbase_server_rpc_handling"><a class="anchor" 
href="#_tune_hbase_server_rpc_handling"></a>44.1. Tune HBase Server RPC 
Handling</h4>
+<div class="ulist">
+<ul>
+<li>
+<p>Set <code>hbase.regionserver.handler.count</code> (in <code>hbase-site.xml</code>) to cores x spindles for concurrency. A combined configuration sketch follows this list.</p>
+</li>
+<li>
+<p>Optionally, split the call queues into separate read and write queues for 
differentiated service. The parameter 
<code>hbase.ipc.server.callqueue.handler.factor</code> specifies the number of 
call queues:</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>0</code> means a single shared queue</p>
+</li>
+<li>
+<p><code>1</code> means one queue for each handler.</p>
+</li>
+<li>
+<p>A value between <code>0</code> and <code>1</code> allocates the number of queues proportionally to the number of handlers. For instance, a value of <code>.5</code> provides one queue for every two handlers.</p>
+</li>
+</ul>
+</div>
+</li>
+<li>
+<p>Use <code>hbase.ipc.server.callqueue.read.ratio</code> 
(<code>hbase.ipc.server.callqueue.read.share</code> in 0.98) to split the call 
queues into read and write queues:</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>0.5</code> means there will be the same number of read and write queues</p>
+</li>
+<li>
+<p><code>&lt; 0.5</code> allocates fewer queues to reads than to writes (favor this for write-heavy workloads)</p>
+</li>
+<li>
+<p><code>&gt; 0.5</code> allocates more queues to reads than to writes (favor this for read-heavy workloads)</p>
+</li>
+</ul>
+</div>
+</li>
+<li>
+<p>Set <code>hbase.ipc.server.callqueue.scan.ratio</code> (HBase 1.0+) to split the read call queues into short-read and long-read queues:</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>0.5</code> means that there will be the same number of short-read and long-read queues</p>
+</li>
+<li>
+<p><code>&lt; 0.5</code> for more short-read</p>
+</li>
+<li>
+<p><code>&gt; 0.5</code> for more long-read</p>
+</li>
+</ul>
+</div>
+</li>
+</ul>
+</div>
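+<div class="paragraph">
+<p>A minimal <code>hbase-site.xml</code> sketch combining the settings above. The values are illustrative starting points, not recommendations; tune them against your own hardware and workload:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>&lt;!-- e.g. 6 cores x 5 spindles --&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.regionserver.handler.count&lt;/name&gt;
+  &lt;value&gt;30&lt;/value&gt;
+&lt;/property&gt;
+&lt;!-- 0.2 gives one call queue for every five handlers --&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.ipc.server.callqueue.handler.factor&lt;/name&gt;
+  &lt;value&gt;0.2&lt;/value&gt;
+&lt;/property&gt;
+&lt;!-- more read queues than write queues, for a read-heavy workload --&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.ipc.server.callqueue.read.ratio&lt;/name&gt;
+  &lt;value&gt;0.6&lt;/value&gt;
+&lt;/property&gt;
+&lt;!-- most read queues serve short reads; scans get the remainder --&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.ipc.server.callqueue.scan.ratio&lt;/name&gt;
+  &lt;value&gt;0.3&lt;/value&gt;
+&lt;/property&gt;</pre>
+</div>
+</div>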
+</div>
+<div class="sect3">
+<h4 id="_disable_nagle_for_rpc"><a class="anchor" 
href="#_disable_nagle_for_rpc"></a>44.2. Disable Nagle for RPC</h4>
+<div class="paragraph">
+<p>Disable Nagle’s algorithm. Delayed ACKs can add up to ~200ms to RPC round-trip time. Set the following parameters (a combined sketch follows the list):</p>
+</div>
+<div class="ulist">
+<ul>
+<li>
+<p>In Hadoop’s <code>core-site.xml</code>:</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>ipc.server.tcpnodelay = true</code></p>
+</li>
+<li>
+<p><code>ipc.client.tcpnodelay = true</code></p>
+</li>
+</ul>
+</div>
+</li>
+<li>
+<p>In HBase’s <code>hbase-site.xml</code>:</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>hbase.ipc.client.tcpnodelay = true</code></p>
+</li>
+<li>
+<p><code>hbase.ipc.server.tcpnodelay = true</code></p>
+</li>
+</ul>
+</div>
+</li>
+</ul>
+</div>
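+<div class="paragraph">
+<p>A sketch of the corresponding entries; the same <code>&lt;property&gt;</code> form applies to both files:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>&lt;!-- core-site.xml --&gt;
+&lt;property&gt;
+  &lt;name&gt;ipc.server.tcpnodelay&lt;/name&gt;
+  &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;ipc.client.tcpnodelay&lt;/name&gt;
+  &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;
+
+&lt;!-- hbase-site.xml --&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.ipc.client.tcpnodelay&lt;/name&gt;
+  &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.ipc.server.tcpnodelay&lt;/name&gt;
+  &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;</pre>
+</div>
+</div>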
+</div>
+<div class="sect3">
+<h4 id="_limit_server_failure_impact"><a class="anchor" 
href="#_limit_server_failure_impact"></a>44.3. Limit Server Failure Impact</h4>
 <div class="paragraph">
-<p>See the Performance section <a href="#perf.schema">perf.schema</a> for more 
information operational and performance schema design options, such as Bloom 
Filters, Table-configured regionsizes, compression, and blocksizes.</p>
+<p>Detect RegionServer failure as quickly as is reasonable. Set the following parameters (see the sketch after this list):</p>
+</div>
+<div class="ulist">
+<ul>
+<li>
+<p>In <code>hbase-site.xml</code>, set <code>zookeeper.session.timeout</code> 
to 30 seconds or less to bound failure detection (20-30 seconds is a good 
start).</p>
+</li>
+<li>
+<p>Detect and avoid unhealthy or failed HDFS DataNodes: in 
<code>hdfs-site.xml</code> and <code>hbase-site.xml</code>, set the following 
parameters:</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>dfs.namenode.avoid.read.stale.datanode = true</code></p>
+</li>
+<li>
+<p><code>dfs.namenode.avoid.write.stale.datanode = true</code></p>
+</li>
+</ul>
+</div>
+</li>
+</ul>
+</div>
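+<div class="paragraph">
+<p>A sketch of these settings. The ZooKeeper timeout is in milliseconds, and per the list above the stale-DataNode properties go in both <code>hdfs-site.xml</code> and <code>hbase-site.xml</code>:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>&lt;!-- hbase-site.xml: bound failure detection to ~30 seconds --&gt;
+&lt;property&gt;
+  &lt;name&gt;zookeeper.session.timeout&lt;/name&gt;
+  &lt;value&gt;30000&lt;/value&gt;
+&lt;/property&gt;
+
+&lt;!-- hdfs-site.xml and hbase-site.xml: steer around stale DataNodes --&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.namenode.avoid.read.stale.datanode&lt;/name&gt;
+  &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.namenode.avoid.write.stale.datanode&lt;/name&gt;
+  &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;</pre>
+</div>
+</div>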
+</div>
+<div class="sect3">
+<h4 id="_optimize_on_the_server_side_for_low_latency"><a class="anchor" 
href="#_optimize_on_the_server_side_for_low_latency"></a>44.4. Optimize on the 
Server Side for Low Latency</h4>
+<div class="ulist">
+<ul>
+<li>
+<p>Skip the network for local blocks. In <code>hbase-site.xml</code>, set the following parameters (a combined sketch covering this list appears after it):</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>dfs.client.read.shortcircuit = true</code></p>
+</li>
+<li>
+<p><code>dfs.client.read.shortcircuit.buffer.size = 131072</code> (Important 
to avoid OOME)</p>
+</li>
+</ul>
+</div>
+</li>
+<li>
+<p>Ensure data locality. In <code>hbase-site.xml</code>, set <code>hbase.hstore.min.locality.to.skip.major.compact = 0.7</code> (meaning the value n should satisfy 0.7 &lt;= n &lt;= 1)</p>
+</li>
+<li>
+<p>Make sure DataNodes have enough handlers for block transfers. In <code>hdfs-site.xml</code>, set the following parameters:</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>dfs.datanode.max.xcievers &gt;= 8192</code></p>
+</li>
+<li>
+<p><code>dfs.datanode.handler.count =</code> number of spindles</p>
+</li>
+</ul>
+</div>
+</li>
+</ul>
+</div>
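+<div class="paragraph">
+<p>A combined sketch of the server-side latency settings above. Note that HDFS short-circuit reads also require a DataNode domain socket; the <code>dfs.domain.socket.path</code> value below is a common convention, not a requirement:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>&lt;!-- hbase-site.xml --&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.client.read.shortcircuit&lt;/name&gt;
+  &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.client.read.shortcircuit.buffer.size&lt;/name&gt;
+  &lt;value&gt;131072&lt;/value&gt; &lt;!-- important to avoid OOME --&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.hstore.min.locality.to.skip.major.compact&lt;/name&gt;
+  &lt;value&gt;0.7&lt;/value&gt;
+&lt;/property&gt;
+
+&lt;!-- hdfs-site.xml --&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.domain.socket.path&lt;/name&gt;
+  &lt;value&gt;/var/lib/hadoop-hdfs/dn_socket&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.datanode.max.xcievers&lt;/name&gt;
+  &lt;value&gt;8192&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.datanode.handler.count&lt;/name&gt;
+  &lt;value&gt;10&lt;/value&gt; &lt;!-- match the number of spindles --&gt;
+&lt;/property&gt;</pre>
+</div>
+</div>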
+</div>
+<div class="sect2">
+<h3 id="_jvm_tuning"><a class="anchor" href="#_jvm_tuning"></a>44.5. JVM 
Tuning</h3>
+<div class="sect3">
+<h4 id="_tune_jvm_gc_for_low_collection_latencies"><a class="anchor" 
href="#_tune_jvm_gc_for_low_collection_latencies"></a>44.5.1. Tune JVM GC for 
low collection latencies</h4>
+<div class="ulist">
+<ul>
+<li>
+<p>Use the CMS collector: <code>-XX:+UseConcMarkSweepGC</code></p>
+</li>
+<li>
+<p>Keep eden space as small as possible to minimize average collection time, optimizing for low collection latency rather than throughput. Example:</p>
+<div class="literalblock">
+<div class="content">
+<pre>-Xmn512m</pre>
+</div>
+</div>
+</li>
+<li>
+<p>Collect eden in parallel: <code>-XX:+UseParNewGC</code></p>
+</li>
+<li>
+<p>Avoid collection under pressure by starting CMS cycles early, and only at the configured threshold: <code>-XX:CMSInitiatingOccupancyFraction=70</code> together with <code>-XX:+UseCMSInitiatingOccupancyOnly</code></p>
+</li>
+<li>
+<p>Limit per-request scanner result size so everything fits into survivor space but does not tenure. In <code>hbase-site.xml</code>, set <code>hbase.client.scanner.max.result.size</code> to 1/8th of eden space (with <code>-Xmn512m</code> this is ~51MB)</p>
+</li>
+<li>
+<p>Keep <code>hbase.client.scanner.max.result.size</code> x <code>hbase.regionserver.handler.count</code> below the survivor space size, so that concurrent scan results cannot force tenuring. A combined <code>hbase-env.sh</code> sketch follows this list.</p>
+</li>
+</ul>
+</div>
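+<div class="paragraph">
+<p>Pulled together in <code>hbase-env.sh</code>, the flags above might look roughly like this (a sketch; the sizes assume the <code>-Xmn512m</code> example used above):</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre># Illustrative low-latency GC settings for a RegionServer
+export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
+  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
+  -Xmn512m \
+  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"</pre>
+</div>
+</div>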
+</div>
+<div class="sect3">
+<h4 id="_os_level_tuning"><a class="anchor" 
href="#_os_level_tuning"></a>44.5.2. OS-Level Tuning</h4>
+<div class="ulist">
+<ul>
+<li>
+<p>Turn transparent huge pages (THP) off (a combined sketch of these OS settings follows the list):</p>
+<div class="literalblock">
+<div class="content">
+<pre>echo never &gt; /sys/kernel/mm/transparent_hugepage/enabled
+echo never &gt; /sys/kernel/mm/transparent_hugepage/defrag</pre>
+</div>
+</div>
+</li>
+<li>
+<p>Set <code>vm.swappiness = 0</code></p>
+</li>
+<li>
+<p>Set <code>vm.min_free_kbytes</code> to at least 1GB (8GB on larger memory 
systems)</p>
+</li>
+<li>
+<p>Disable NUMA zone reclaim with <code>vm.zone_reclaim_mode = 0</code></p>
+</li>
+</ul>
+</div>
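+<div class="paragraph">
+<p>A sketch of how these settings are commonly applied; run as root, persist the sysctl values in <code>/etc/sysctl.conf</code>, and reapply the THP setting at every boot:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre># Disable THP (does not survive a reboot on its own)
+echo never &gt; /sys/kernel/mm/transparent_hugepage/enabled
+echo never &gt; /sys/kernel/mm/transparent_hugepage/defrag
+
+# /etc/sysctl.conf
+vm.swappiness = 0
+vm.min_free_kbytes = 1048576   # 1GB; larger on big-memory systems
+vm.zone_reclaim_mode = 0</pre>
+</div>
+</div>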
+</div>
+</div>
+</div>
+</div>
+<div class="sect1">
+<h2 id="_special_cases"><a class="anchor" href="#_special_cases"></a>45. 
Special Cases</h2>
+<div class="sectionbody">
+<div class="sect3">
+<h4 id="_for_applications_where_failing_quickly_is_better_than_waiting"><a 
class="anchor" 
href="#_for_applications_where_failing_quickly_is_better_than_waiting"></a>45.1.
 For applications where failing quickly is better than waiting</h4>
+<div class="ulist">
+<ul>
+<li>
+<p>In <code>hbase-site.xml</code> on the client side, set the following parameters (a combined sketch follows this list):</p>
+<div class="ulist">
+<ul>
+<li>
+<p>Set <code>hbase.client.pause = 1000</code></p>
+</li>
+<li>
+<p>Set <code>hbase.client.retries.number = 3</code></p>
+</li>
+<li>
+<p>If you want to ride over splits and region moves, increase 
<code>hbase.client.retries.number</code> substantially (&gt;= 20)</p>
+</li>
+<li>
+<p>Set the <code>RecoverableZooKeeper</code> retry count: <code>zookeeper.recovery.retry = 1</code> (no retry)</p>
+</li>
+</ul>
+</div>
+</li>
+<li>
+<p>In <code>hbase-site.xml</code> on the server side, set the ZooKeeper session timeout for detecting server failures: <code>zookeeper.session.timeout</code> &#8804; 30 seconds (20-30 is a good range).</p>
+</li>
+</ul>
+</div>
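+<div class="paragraph">
+<p>A client-side <code>hbase-site.xml</code> sketch for the fail-fast settings above (the retry counts are illustrative):</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre>&lt;property&gt;
+  &lt;name&gt;hbase.client.pause&lt;/name&gt;
+  &lt;value&gt;1000&lt;/value&gt;
+&lt;/property&gt;
+&lt;!-- use &gt;= 20 instead to ride over splits and region moves --&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.client.retries.number&lt;/name&gt;
+  &lt;value&gt;3&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;zookeeper.recovery.retry&lt;/name&gt;
+  &lt;value&gt;1&lt;/value&gt;
+&lt;/property&gt;</pre>
+</div>
+</div>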
+</div>
+<div class="sect3">
+<h4 
id="_for_applications_that_can_tolerate_slightly_out_of_date_information"><a 
class="anchor" 
href="#_for_applications_that_can_tolerate_slightly_out_of_date_information"></a>45.2.
 For applications that can tolerate slightly out of date information</h4>
+<div class="paragraph">
+<p><strong>HBase timeline consistency (HBASE-10070).</strong>
+With read replicas enabled, read-only copies of regions (replicas) are distributed over the cluster. One RegionServer services the default or primary replica, which is the only replica that can service writes. Other RegionServers serve the secondary replicas, follow the primary RegionServer, and only see committed updates. The secondary replicas are read-only, but can serve reads immediately while the primary is failing over, cutting read availability blips from seconds to milliseconds. Phoenix supports timeline consistency as of 4.4.0.</p>
+</div>
+<div class="paragraph">
+<p>Tips (a Java client sketch follows this list):</p>
+</div>
+<div class="ulist">
+<ul>
+<li>
+<p>Deploy HBase 1.0.0 or later.</p>
+</li>
+<li>
+<p>Enable timeline consistent replicas on the server side.</p>
+</li>
+<li>
+<p>Use one of the following methods to set timeline consistency:</p>
+<div class="ulist">
+<ul>
+<li>
+<p>Use <code>ALTER SESSION SET CONSISTENCY = 'TIMELINE'</code></p>
+</li>
+<li>
+<p>Set the connection property <code>Consistency</code> to 
<code>timeline</code> in the JDBC connect string</p>
+</li>
+</ul>
+</div>
+</li>
+</ul>
+</div>
+</div>
+<div class="sect2">
+<h3 id="_more_information"><a class="anchor" 
href="#_more_information"></a>45.3. More Information</h3>
+<div class="paragraph">
+<p>See the Performance section <a href="#perf.schema">perf.schema</a> for more 
information about operational and performance schema design options, such as 
Bloom Filters, Table-configured regionsizes, compression, and blocksizes.</p>
+</div>
 </div>
 </div>
 </div>
@@ -9384,7 +9670,7 @@ In the notes below, we refer to o.a.h.h.mapreduce but 
replace with the o.a.h.h.m
 </div>
 </div>
 <div class="sect1">
-<h2 id="hbase.mapreduce.classpath"><a class="anchor" 
href="#hbase.mapreduce.classpath"></a>45. HBase, MapReduce, and the 
CLASSPATH</h2>
+<h2 id="hbase.mapreduce.classpath"><a class="anchor" 
href="#hbase.mapreduce.classpath"></a>46. HBase, MapReduce, and the 
CLASSPATH</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>By default, MapReduce jobs deployed to a MapReduce cluster do not have 
access to either the HBase configuration under <code>$HBASE_CONF_DIR</code> or 
the HBase classes.</p>
@@ -9540,7 +9826,7 @@ $ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar 
MyJob.jar MyJobMainClass</code>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_mapreduce_scan_caching"><a class="anchor" 
href="#_mapreduce_scan_caching"></a>46. MapReduce Scan Caching</h2>
+<h2 id="_mapreduce_scan_caching"><a class="anchor" 
href="#_mapreduce_scan_caching"></a>47. MapReduce Scan Caching</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>TableMapReduceUtil now restores the option to set scanner caching (the 
number of rows which are cached before returning the result to the client) on 
the Scan object that is passed in.
@@ -9575,7 +9861,7 @@ If you think of the scan as a shovel, a bigger cache 
setting is analogous to a b
 </div>
 </div>
 <div class="sect1">
-<h2 id="_bundled_hbase_mapreduce_jobs"><a class="anchor" 
href="#_bundled_hbase_mapreduce_jobs"></a>47. Bundled HBase MapReduce Jobs</h2>
+<h2 id="_bundled_hbase_mapreduce_jobs"><a class="anchor" 
href="#_bundled_hbase_mapreduce_jobs"></a>48. Bundled HBase MapReduce Jobs</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>The HBase JAR also serves as a Driver for some bundled MapReduce jobs.
@@ -9606,7 +9892,7 @@ To run one of the jobs, model your command after the 
following example.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_hbase_as_a_mapreduce_job_data_source_and_data_sink"><a class="anchor" 
href="#_hbase_as_a_mapreduce_job_data_source_and_data_sink"></a>48. HBase as a 
MapReduce Job Data Source and Data Sink</h2>
+<h2 id="_hbase_as_a_mapreduce_job_data_source_and_data_sink"><a class="anchor" 
href="#_hbase_as_a_mapreduce_job_data_source_and_data_sink"></a>49. HBase as a 
MapReduce Job Data Source and Data Sink</h2>
 <div class="sectionbody">
 <div class="paragraph">
<p>HBase can be used as a data source, <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html">TableInputFormat</a>,
and data sink, <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html">TableOutputFormat</a>
or <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.html">MultiTableOutputFormat</a>,
for MapReduce jobs.
@@ -9635,7 +9921,7 @@ Otherwise use the default partitioner.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_writing_hfiles_directly_during_bulk_import"><a class="anchor" 
href="#_writing_hfiles_directly_during_bulk_import"></a>49. Writing HFiles 
Directly During Bulk Import</h2>
+<h2 id="_writing_hfiles_directly_during_bulk_import"><a class="anchor" 
href="#_writing_hfiles_directly_during_bulk_import"></a>50. Writing HFiles 
Directly During Bulk Import</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>If you are importing into a new table, you can bypass the HBase API and 
write your content directly to the filesystem, formatted into HBase data files 
(HFiles). Your import will run faster, perhaps an order of magnitude faster.
@@ -9644,7 +9930,7 @@ For more on how this mechanism works, see <a 
href="#arch.bulk.load">Bulk Loading
 </div>
 </div>
 <div class="sect1">
-<h2 id="_rowcounter_example"><a class="anchor" 
href="#_rowcounter_example"></a>50. RowCounter Example</h2>
+<h2 id="_rowcounter_example"><a class="anchor" 
href="#_rowcounter_example"></a>51. RowCounter Example</h2>
 <div class="sectionbody">
 <div class="paragraph">
<p>The included <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html">RowCounter</a>
 MapReduce job uses <code>TableInputFormat</code> and does a count of all rows 
in the specified table.
@@ -9665,17 +9951,17 @@ If you have classpath errors, see <a 
href="#hbase.mapreduce.classpath">HBase, Ma
 </div>
 </div>
 <div class="sect1">
-<h2 id="splitter"><a class="anchor" href="#splitter"></a>51. Map-Task 
Splitting</h2>
+<h2 id="splitter"><a class="anchor" href="#splitter"></a>52. Map-Task 
Splitting</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="splitter.default"><a class="anchor" href="#splitter.default"></a>51.1. 
The Default HBase MapReduce Splitter</h3>
+<h3 id="splitter.default"><a class="anchor" href="#splitter.default"></a>52.1. 
The Default HBase MapReduce Splitter</h3>
 <div class="paragraph">
<p>When <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html">TableInputFormat</a>
 is used to source an HBase table in a MapReduce job, its splitter will make a 
map task for each region of the table.
 Thus, if there are 100 regions in the table, there will be 100 map-tasks for 
the job - regardless of how many column families are selected in the Scan.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="splitter.custom"><a class="anchor" href="#splitter.custom"></a>51.2. 
Custom Splitters</h3>
+<h3 id="splitter.custom"><a class="anchor" href="#splitter.custom"></a>52.2. 
Custom Splitters</h3>
 <div class="paragraph">
 <p>For those interested in implementing custom splitters, see the method 
<code>getSplits</code> in <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html">TableInputFormatBase</a>.
 That is where the logic for map-task assignment resides.</p>
@@ -9684,10 +9970,10 @@ That is where the logic for map-task assignment 
resides.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="mapreduce.example"><a class="anchor" href="#mapreduce.example"></a>52. 
HBase MapReduce Examples</h2>
+<h2 id="mapreduce.example"><a class="anchor" href="#mapreduce.example"></a>53. 
HBase MapReduce Examples</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="mapreduce.example.read"><a class="anchor" 
href="#mapreduce.example.read"></a>52.1. HBase MapReduce Read Example</h3>
+<h3 id="mapreduce.example.read"><a class="anchor" 
href="#mapreduce.example.read"></a>53.1. HBase MapReduce Read Example</h3>
 <div class="paragraph">
 <p>The following is an example of using HBase as a MapReduce source in 
read-only manner.
 Specifically, there is a Mapper instance but no Reducer, and nothing is being 
emitted from the Mapper.
@@ -9735,7 +10021,7 @@ job.setOutputFormatClass(NullOutputFormat.class);   
<span class="comment">// bec
 </div>
 </div>
 <div class="sect2">
-<h3 id="mapreduce.example.readwrite"><a class="anchor" 
href="#mapreduce.example.readwrite"></a>52.2. HBase MapReduce Read/Write 
Example</h3>
+<h3 id="mapreduce.example.readwrite"><a class="anchor" 
href="#mapreduce.example.readwrite"></a>53.2. HBase MapReduce Read/Write 
Example</h3>
 <div class="paragraph">
 <p>The following is an example of using HBase both as a source and as a sink 
with MapReduce.
 This example will simply copy data from one table to another.</p>
@@ -9805,13 +10091,13 @@ Note: this is what the CopyTable utility does.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="mapreduce.example.readwrite.multi"><a class="anchor" 
href="#mapreduce.example.readwrite.multi"></a>52.3. HBase MapReduce Read/Write 
Example With Multi-Table Output</h3>
+<h3 id="mapreduce.example.readwrite.multi"><a class="anchor" 
href="#mapreduce.example.readwrite.multi"></a>53.3. HBase MapReduce Read/Write 
Example With Multi-Table Output</h3>
 <div class="paragraph">
 <p>TODO: example for <code>MultiTableOutputFormat</code>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="mapreduce.example.summary"><a class="anchor" 
href="#mapreduce.example.summary"></a>52.4. HBase MapReduce Summary to HBase 
Example</h3>
+<h3 id="mapreduce.example.summary"><a class="anchor" 
href="#mapreduce.example.summary"></a>53.4. HBase MapReduce Summary to HBase 
Example</h3>
 <div class="paragraph">
 <p>The following example uses HBase as a MapReduce source and sink with a 
summarization step.
 This example will count the number of distinct instances of a value in a table 
and write those summarized counts in another table.</p>
@@ -9891,7 +10177,7 @@ This value is used as the key to emit from the mapper, 
and an <code>IntWritable<
 </div>
 </div>
 <div class="sect2">
-<h3 id="mapreduce.example.summary.file"><a class="anchor" 
href="#mapreduce.example.summary.file"></a>52.5. HBase MapReduce Summary to 
File Example</h3>
+<h3 id="mapreduce.example.summary.file"><a class="anchor" 
href="#mapreduce.example.summary.file"></a>53.5. HBase MapReduce Summary to 
File Example</h3>
 <div class="paragraph">
<p>This is very similar to the summary example above, with the exception that this one uses HBase as a MapReduce source but HDFS as the sink.
 The differences are in the job setup and in the reducer.
@@ -9945,7 +10231,7 @@ As for the Reducer, it is a "generic" Reducer instead of 
extending TableMapper a
 </div>
 </div>
 <div class="sect2">
-<h3 id="mapreduce.example.summary.noreducer"><a class="anchor" 
href="#mapreduce.example.summary.noreducer"></a>52.6. HBase MapReduce Summary 
to HBase Without Reducer</h3>
+<h3 id="mapreduce.example.summary.noreducer"><a class="anchor" 
href="#mapreduce.example.summary.noreducer"></a>53.6. HBase MapReduce Summary 
to HBase Without Reducer</h3>
 <div class="paragraph">
 <p>It is also possible to perform summaries without a reducer - if you use 
HBase as the reducer.</p>
 </div>
@@ -9960,7 +10246,7 @@ However, your mileage may vary depending on the number 
of rows to be processed a
 </div>
 </div>
 <div class="sect2">
-<h3 id="mapreduce.example.summary.rdbms"><a class="anchor" 
href="#mapreduce.example.summary.rdbms"></a>52.7. HBase MapReduce Summary to 
RDBMS</h3>
+<h3 id="mapreduce.example.summary.rdbms"><a class="anchor" 
href="#mapreduce.example.summary.rdbms"></a>53.7. HBase MapReduce Summary to 
RDBMS</h3>
 <div class="paragraph">
 <p>Sometimes it is more appropriate to generate summaries to an RDBMS.
 For these cases, it is possible to generate summaries directly to an RDBMS via 
a custom reducer.
@@ -10001,7 +10287,7 @@ Recognize that the more reducers that are assigned to 
the job, the more simultan
 </div>
 </div>
 <div class="sect1">
-<h2 id="mapreduce.htable.access"><a class="anchor" 
href="#mapreduce.htable.access"></a>53. Accessing Other HBase Tables in a 
MapReduce Job</h2>
+<h2 id="mapreduce.htable.access"><a class="anchor" 
href="#mapreduce.htable.access"></a>54. Accessing Other HBase Tables in a 
MapReduce Job</h2>
 <div class="sectionbody">
 <div class="paragraph">
<p>Although the framework currently allows one HBase table as input to a MapReduce job, other HBase tables can be accessed as lookup tables, etc., in a MapReduce job by creating a Table instance in the setup method of the Mapper.</p>
@@ -10026,7 +10312,7 @@ Recognize that the more reducers that are assigned to 
the job, the more simultan
 </div>
 </div>
 <div class="sect1">
-<h2 id="mapreduce.specex"><a class="anchor" href="#mapreduce.specex"></a>54. 
Speculative Execution</h2>
+<h2 id="mapreduce.specex"><a class="anchor" href="#mapreduce.specex"></a>55. 
Speculative Execution</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>It is generally advisable to turn off speculative execution for MapReduce 
jobs that use HBase as a source.
@@ -10039,7 +10325,7 @@ Especially for longer running jobs, speculative 
execution will create duplicate
 </div>
 </div>
 <div class="sect1">
-<h2 id="cascading"><a class="anchor" href="#cascading"></a>55. Cascading</h2>
+<h2 id="cascading"><a class="anchor" href="#cascading"></a>56. Cascading</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p><a href="http://www.cascading.org/";>Cascading</a> is an alternative API for 
MapReduce, which
@@ -10128,7 +10414,7 @@ To protect existing HBase installations from 
exploitation, please <strong>do not
 </div>
 </div>
 <div class="sect1">
-<h2 id="_using_secure_http_https_for_the_web_ui"><a class="anchor" 
href="#_using_secure_http_https_for_the_web_ui"></a>56. Using Secure HTTP 
(HTTPS) for the Web UI</h2>
+<h2 id="_using_secure_http_https_for_the_web_ui"><a class="anchor" 
href="#_using_secure_http_https_for_the_web_ui"></a>57. Using Secure HTTP 
(HTTPS) for the Web UI</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>A default HBase install uses insecure HTTP connections for Web UIs for the 
master and region servers.
@@ -10181,7 +10467,7 @@ If you know how to fix this without opening a second 
port for HTTPS, patches are
 </div>
 </div>
 <div class="sect1">
-<h2 id="hbase.secure.spnego.ui"><a class="anchor" 
href="#hbase.secure.spnego.ui"></a>57. Using SPNEGO for Kerberos authentication 
with Web UIs</h2>
+<h2 id="hbase.secure.spnego.ui"><a class="anchor" 
href="#hbase.secure.spnego.ui"></a>58. Using SPNEGO for Kerberos authentication 
with Web UIs</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Kerberos-authentication to HBase Web UIs can be enabled via configuring 
SPNEGO with the <code>hbase.security.authentication.ui</code>
@@ -10238,7 +10524,7 @@ for RPCs (e.g 
<code>hbase.security.authentication</code> = <code>kerberos</code>
 </div>
 </div>
 <div class="sect1">
-<h2 id="hbase.secure.configuration"><a class="anchor" 
href="#hbase.secure.configuration"></a>58. Secure Client Access to Apache 
HBase</h2>
+<h2 id="hbase.secure.configuration"><a class="anchor" 
href="#hbase.secure.configuration"></a>59. Secure Client Access to Apache 
HBase</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Newer releases of Apache HBase (&gt;= 0.92) support optional SASL 
authentication of clients.
@@ -10248,7 +10534,7 @@ See also Matteo Bertozzi&#8217;s article on <a 
href="https://blog.cloudera.com/b
 <p>This describes how to set up Apache HBase and clients for connection to 
secure HBase resources.</p>
 </div>
 <div class="sect2">
-<h3 id="security.prerequisites"><a class="anchor" 
href="#security.prerequisites"></a>58.1. Prerequisites</h3>
+<h3 id="security.prerequisites"><a class="anchor" 
href="#security.prerequisites"></a>59.1. Prerequisites</h3>
 <div class="dlist">
 <dl>
 <dt class="hdlist1">Hadoop Authentication Configuration</dt>
@@ -10265,7 +10551,7 @@ Otherwise, you would be using strong authentication for 
HBase but not for the un
 </div>
 </div>
 <div class="sect2">
-<h3 id="_server_side_configuration_for_secure_operation"><a class="anchor" 
href="#_server_side_configuration_for_secure_operation"></a>58.2. Server-side 
Configuration for Secure Operation</h3>
+<h3 id="_server_side_configuration_for_secure_operation"><a class="anchor" 
href="#_server_side_configuration_for_secure_operation"></a>59.2. Server-side 
Configuration for Secure Operation</h3>
 <div class="paragraph">
 <p>First, refer to <a 
href="#security.prerequisites">security.prerequisites</a> and ensure that your 
underlying HDFS configuration is secure.</p>
 </div>
@@ -10293,7 +10579,7 @@ Otherwise, you would be using strong authentication for 
HBase but not for the un
 </div>
 </div>
 <div class="sect2">
-<h3 id="_client_side_configuration_for_secure_operation"><a class="anchor" 
href="#_client_side_configuration_for_secure_operation"></a>58.3. Client-side 
Configuration for Secure Operation</h3>
+<h3 id="_client_side_configuration_for_secure_operation"><a class="anchor" 
href="#_client_side_configuration_for_secure_operation"></a>59.3. Client-side 
Configuration for Secure Operation</h3>
 <div class="paragraph">
 <p>First, refer to <a href="#security.prerequisites">Prerequisites</a> and 
ensure that your underlying HDFS configuration is secure.</p>
 </div>
@@ -10347,7 +10633,7 @@ conf.set(<span class="string"><span 
class="delimiter">&quot;</span><span class="
 </div>
 </div>
 <div class="sect2">
-<h3 id="security.client.thrift"><a class="anchor" 
href="#security.client.thrift"></a>58.4. Client-side Configuration for Secure 
Operation - Thrift Gateway</h3>
+<h3 id="security.client.thrift"><a class="anchor" 
href="#security.client.thrift"></a>59.4. Client-side Configuration for Secure 
Operation - Thrift Gateway</h3>
 <div class="paragraph">
 <p>Add the following to the <code>hbase-site.xml</code> file for every Thrift 
gateway:</p>
 </div>
@@ -10397,7 +10683,7 @@ All client access via the Thrift gateway will use the 
Thrift gateway&#8217;s cre
 </div>
 </div>
 <div class="sect2">
-<h3 id="security.gateway.thrift"><a class="anchor" 
href="#security.gateway.thrift"></a>58.5. Configure the Thrift Gateway to 
Authenticate on Behalf of the Client</h3>
+<h3 id="security.gateway.thrift"><a class="anchor" 
href="#security.gateway.thrift"></a>59.5. Configure the Thrift Gateway to 
Authenticate on Behalf of the Client</h3>
 <div class="paragraph">
 <p><a href="#security.client.thrift">Client-side Configuration for Secure 
Operation - Thrift Gateway</a> describes how to authenticate a Thrift client to 
HBase using a fixed user.
 As an alternative, you can configure the Thrift gateway to authenticate to 
HBase on the client&#8217;s behalf, and to access HBase using a proxy user.
@@ -10455,7 +10741,7 @@ To start Thrift on a node, run the command 
<code>bin/hbase-daemon.sh start thrif
 </div>
 </div>
 <div class="sect2">
-<h3 id="security.gateway.thrift.doas"><a class="anchor" 
href="#security.gateway.thrift.doas"></a>58.6. Configure the Thrift Gateway to 
Use the <code>doAs</code> Feature</h3>
+<h3 id="security.gateway.thrift.doas"><a class="anchor" 
href="#security.gateway.thrift.doas"></a>59.6. Configure the Thrift Gateway to 
Use the <code>doAs</code> Feature</h3>
 <div class="paragraph">
 <p><a href="#security.gateway.thrift">Configure the Thrift Gateway to 
Authenticate on Behalf of the Client</a> describes how to configure the Thrift 
gateway to authenticate to HBase on the client&#8217;s behalf, and to access 
HBase using a proxy user. The limitation of this approach is that after the 
client is initialized with a particular set of credentials, it cannot change 
these credentials during the session. The <code>doAs</code> feature provides a 
flexible way to impersonate multiple principals using the same client. This 
feature was implemented in <a 
href="https://issues.apache.org/jira/browse/HBASE-12640";>HBASE-12640</a> for 
Thrift 1, but is currently not available for Thrift 2.</p>
 </div>
@@ -10500,7 +10786,7 @@ to get an overall idea of how to use this feature in 
your client.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_client_side_configuration_for_secure_operation_rest_gateway"><a 
class="anchor" 
href="#_client_side_configuration_for_secure_operation_rest_gateway"></a>58.7. 
Client-side Configuration for Secure Operation - REST Gateway</h3>
+<h3 id="_client_side_configuration_for_secure_operation_rest_gateway"><a 
class="anchor" 
href="#_client_side_configuration_for_secure_operation_rest_gateway"></a>59.7. 
Client-side Configuration for Secure Operation - REST Gateway</h3>
 <div class="paragraph">
 <p>Add the following to the <code>hbase-site.xml</code> file for every REST 
gateway:</p>
 </div>
@@ -10577,7 +10863,7 @@ For more information, refer to <a 
href="http://hadoop.apache.org/docs/stable/had
 </div>
 </div>
 <div class="sect2">
-<h3 id="security.rest.gateway"><a class="anchor" 
href="#security.rest.gateway"></a>58.8. REST Gateway Impersonation 
Configuration</h3>
+<h3 id="security.rest.gateway"><a class="anchor" 
href="#security.rest.gateway"></a>59.8. REST Gateway Impersonation 
Configuration</h3>
 <div class="paragraph">
 <p>By default, the REST gateway doesn&#8217;t support impersonation.
It accesses HBase on behalf of clients as the user configured in the previous section.
@@ -10639,7 +10925,7 @@ So it can apply proper authorizations.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="hbase.secure.simpleconfiguration"><a class="anchor" 
href="#hbase.secure.simpleconfiguration"></a>59. Simple User Access to Apache 
HBase</h2>
+<h2 id="hbase.secure.simpleconfiguration"><a class="anchor" 
href="#hbase.secure.simpleconfiguration"></a>60. Simple User Access to Apache 
HBase</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Newer releases of Apache HBase (&gt;= 0.92) support optional SASL 
authentication of clients.
@@ -10649,7 +10935,7 @@ See also Matteo Bertozzi&#8217;s article on <a 
href="https://blog.cloudera.com/b
 <p>This describes how to set up Apache HBase and clients for simple user 
access to HBase resources.</p>
 </div>
 <div class="sect2">
-<h3 id="_simple_versus_secure_access"><a class="anchor" 
href="#_simple_versus_secure_access"></a>59.1. Simple versus Secure Access</h3>
+<h3 id="_simple_versus_secure_access"><a class="anchor" 
href="#_simple_versus_secure_access"></a>60.1. Simple versus Secure Access</h3>
 <div class="paragraph">
 <p>The following section shows how to set up simple user access.
 Simple user access is not a secure method of operating HBase.
@@ -10663,13 +10949,13 @@ Refer to the section <a 
href="#hbase.secure.configuration">Secure Client Access
 </div>
 </div>
 <div class="sect2">
-<h3 id="_prerequisites"><a class="anchor" href="#_prerequisites"></a>59.2. 
Prerequisites</h3>
+<h3 id="_prerequisites"><a class="anchor" href="#_prerequisites"></a>60.2. 
Prerequisites</h3>
 <div class="paragraph">
 <p>None</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_server_side_configuration_for_simple_user_access_operation"><a 
class="anchor" 
href="#_server_side_configuration_for_simple_user_access_operation"></a>59.3. 
Server-side Configuration for Simple User Access Operation</h3>
+<h3 id="_server_side_configuration_for_simple_user_access_operation"><a 
class="anchor" 
href="#_server_side_configuration_for_simple_user_access_operation"></a>60.3. 
Server-side Configuration for Simple User Access Operation</h3>
 <div class="paragraph">
 <p>Add the following to the <code>hbase-site.xml</code> file on every server 
machine in the cluster:</p>
 </div>
@@ -10721,7 +11007,7 @@ Refer to the section <a 
href="#hbase.secure.configuration">Secure Client Access
 </div>
 </div>
 <div class="sect2">
-<h3 id="_client_side_configuration_for_simple_user_access_operation"><a 
class="anchor" 
href="#_client_side_configuration_for_simple_user_access_operation"></a>59.4. 
Client-side Configuration for Simple User Access Operation</h3>
+<h3 id="_client_side_configuration_for_simple_user_access_operation"><a 
class="anchor" 
href="#_client_side_configuration_for_simple_user_access_operation"></a>60.4. 
Client-side Configuration for Simple User Access Operation</h3>
 <div class="paragraph">
 <p>Add the following to the <code>hbase-site.xml</code> file on every 
client:</p>
 </div>
@@ -10748,7 +11034,7 @@ Refer to the section <a 
href="#hbase.secure.configuration">Secure Client Access
 <p>Be advised that if the <code>hbase.security.authentication</code> settings in the
client- and server-side site files do not match, the client will not be able to
communicate with the cluster.</p>
 </div>
 <div class="sect3">
-<h4 
id="_client_side_configuration_for_simple_user_access_operation_thrift_gateway"><a
 class="anchor" 
href="#_client_side_configuration_for_simple_user_access_operation_thrift_gateway"></a>59.4.1.
 Client-side Configuration for Simple User Access Operation - Thrift 
Gateway</h4>
+<h4 
id="_client_side_configuration_for_simple_user_access_operation_thrift_gateway"><a
 class="anchor" 
href="#_client_side_configuration_for_simple_user_access_operation_thrift_gateway"></a>60.4.1.
 Client-side Configuration for Simple User Access Operation - Thrift 
Gateway</h4>
 <div class="paragraph">
 <p>The Thrift gateway user will need access.
 For example, to give the Thrift API user, <code>thrift_server</code>, 
administrative access, a command such as this one will suffice:</p>
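<div class="listingblock">
<div class="content">
<pre># a sketch; 'RWCA' grants read, write, create, and admin permissions
hbase&gt; grant 'thrift_server', 'RWCA'</pre>
</div>
</div>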
@@ -10768,7 +11054,7 @@ All client access via the Thrift gateway will use the 
Thrift gateway&#8217;s cre
 </div>
 </div>
 <div class="sect3">
-<h4 
id="_client_side_configuration_for_simple_user_access_operation_rest_gateway"><a
 class="anchor" 
href="#_client_side_configuration_for_simple_user_access_operation_rest_gateway"></a>59.4.2.
 Client-side Configuration for Simple User Access Operation - REST Gateway</h4>
+<h4 
id="_client_side_configuration_for_simple_user_access_operation_rest_gateway"><a
 class="anchor" 
href="#_client_side_configuration_for_simple_user_access_operation_rest_gateway"></a>60.4.2.
 Client-side Configuration for Simple User Access Operation - REST Gateway</h4>
 <div class="paragraph">
 <p>The REST gateway will authenticate with HBase using the supplied credential.
 No authentication will be performed by the REST gateway itself.
@@ -10795,13 +11081,13 @@ This is future work.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_securing_access_to_hdfs_and_zookeeper"><a class="anchor" 
href="#_securing_access_to_hdfs_and_zookeeper"></a>60. Securing Access to HDFS 
and ZooKeeper</h2>
+<h2 id="_securing_access_to_hdfs_and_zookeeper"><a class="anchor" 
href="#_securing_access_to_hdfs_and_zookeeper"></a>61. Securing Access to HDFS 
and ZooKeeper</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Secure HBase requires secure ZooKeeper and HDFS so that users cannot access
and/or modify the metadata and data out from under HBase. HBase uses HDFS (or the
configured file system) to keep its data files as well as write-ahead logs
(WALs) and other data. HBase uses ZooKeeper to store some metadata for
operations (master address, table locks, recovery state, etc.).</p>
 </div>
 <div class="sect2">
-<h3 id="_securing_zookeeper_data"><a class="anchor" 
href="#_securing_zookeeper_data"></a>60.1. Securing ZooKeeper Data</h3>
+<h3 id="_securing_zookeeper_data"><a class="anchor" 
href="#_securing_zookeeper_data"></a>61.1. Securing ZooKeeper Data</h3>
 <div class="paragraph">
 <p>ZooKeeper has a pluggable authentication mechanism to enable access from
clients using different methods. ZooKeeper even allows authenticated and
unauthenticated clients at the same time. Access to znodes can be
restricted by providing Access Control Lists (ACLs) per znode. An ACL contains
two components: the authentication method and the principal. ACLs are NOT
enforced hierarchically. See the <a
href="https://zookeeper.apache.org/doc/r3.3.6/zookeeperProgrammers.html#sc_ZooKeeperPluggableAuthentication";>ZooKeeper
Programmers Guide</a> for details.</p>
 </div>
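<div class="paragraph">
<p>As an illustration (a sketch assuming HBase authenticates to ZooKeeper as
the SASL principal <code>hbase</code> and keeps its data under
<code>/hbase</code>), the znode ACL can be inspected and restricted from the
ZooKeeper CLI:</p>
</div>
<div class="listingblock">
<div class="content">
<pre>[zk: localhost:2181(CONNECTED) 0] getAcl /hbase
[zk: localhost:2181(CONNECTED) 1] setAcl /hbase sasl:hbase:cdrwa</pre>
</div>
</div>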
@@ -10810,7 +11096,7 @@ This is future work.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_securing_file_system_hdfs_data"><a class="anchor" 
href="#_securing_file_system_hdfs_data"></a>60.2. Securing File System (HDFS) 
Data</h3>
+<h3 id="_securing_file_system_hdfs_data"><a class="anchor" 
href="#_securing_file_system_hdfs_data"></a>61.2. Securing File System (HDFS) 
Data</h3>
 <div class="paragraph">
 <p>All of the data under management is kept under the root directory in the
file system (<code>hbase.rootdir</code>). Access to the data and WAL files in
the filesystem should be restricted so that users cannot bypass the HBase
layer and peek at the underlying data files from the file system. HBase
assumes the filesystem used (HDFS or other) enforces permissions
hierarchically. If sufficient protection from the file system (both
authorization and authentication) is not provided, HBase-level authorization
control (ACLs, visibility labels, etc.) is meaningless, since the user can
always access the data from the file system.</p>
 </div>
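<div class="paragraph">
<p>For example (a sketch assuming <code>hbase.rootdir</code> points at
<code>/hbase</code> on HDFS and HBase runs as the <code>hbase</code> user), the
root directory can be locked down so that only that user can read it:</p>
</div>
<div class="listingblock">
<div class="content">
<pre>hadoop fs -chown -R hbase:hbase /hbase
hadoop fs -chmod -R 700 /hbase</pre>
</div>
</div>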
@@ -10832,7 +11118,7 @@ This is future work.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_securing_access_to_your_data"><a class="anchor" 
href="#_securing_access_to_your_data"></a>61. Securing Access To Your Data</h2>
+<h2 id="_securing_access_to_your_data"><a class="anchor" 
href="#_securing_access_to_your_data"></a>62. Securing Access To Your Data</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>After you have configured secure authentication between HBase client and 
server processes and gateways, you need to consider the security of your data 
itself.
@@ -10911,7 +11197,7 @@ This is the default for HBase 1.0 and newer.</p>
 </ol>
 </div>
 <div class="sect2">
-<h3 id="hbase.tags"><a class="anchor" href="#hbase.tags"></a>61.1. Tags</h3>
+<h3 id="hbase.tags"><a class="anchor" href="#hbase.tags"></a>62.1. Tags</h3>
 <div class="paragraph">
 <p><em class="firstterm">Tags</em> are a feature of HFile v3.
 A tag is a piece of metadata which is part of a cell, separate from the key, 
value, and version.
@@ -10921,7 +11207,7 @@ It is possible that in the future, tags will be used to 
implement other HBase fe
 You don&#8217;t need to know a lot about tags in order to use the security 
features they enable.</p>
 </div>
 <div class="sect3">
-<h4 id="_implementation_details"><a class="anchor" 
href="#_implementation_details"></a>61.1.1. Implementation Details</h4>
+<h4 id="_implementation_details"><a class="anchor" 
href="#_implementation_details"></a>62.1.1. Implementation Details</h4>
 <div class="paragraph">
 <p>Every cell can have zero or more tags.
 Every tag has a type and the actual tag byte array.</p>
@@ -10942,9 +11228,9 @@ Tag compression uses dictionary encoding.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="hbase.accesscontrol.configuration"><a class="anchor" 
href="#hbase.accesscontrol.configuration"></a>61.2. Access Control Labels 
(ACLs)</h3>
+<h3 id="hbase.accesscontrol.configuration"><a class="anchor" 
href="#hbase.accesscontrol.configuration"></a>62.2. Access Control Labels 
(ACLs)</h3>
 <div class="sect3">
-<h4 id="_how_it_works"><a class="anchor" href="#_how_it_works"></a>61.2.1. How 
It Works</h4>
+<h4 id="_how_it_works"><a class="anchor" href="#_how_it_works"></a>62.2.1. How 
It Works</h4>
 <div class="paragraph">
 <p>ACLs in HBase are based upon a user&#8217;s membership in or exclusion from 
groups, and a given group&#8217;s permissions to access a given resource.
 ACLs are implemented as a coprocessor called AccessController.</p>
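<div class="paragraph">
<p>As a quick sketch of the shell interface covered in this section (the user
and table names are examples), permissions are granted and inspected like
so:</p>
</div>
<div class="listingblock">
<div class="content">
<pre>hbase&gt; grant 'user1', 'RW', 'table1'
hbase&gt; user_permission 'table1'</pre>
</div>
</div>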
@@ -11609,7 +11895,7 @@ hbase&gt; user_permission JAVA_REGEX</pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="hbase.visibility.labels"><a class="anchor" 
href="#hbase.visibility.labels"></a>61.3. Visibility Labels</h3>
+<h3 id="hbase.visibility.labels"><a class="anchor" 
href="#hbase.visibility.labels"></a>62.3. Visibility Labels</h3>
 <div class="paragraph">
 <p>Visibility label control can be used to permit only users or principals
associated with a given label to read or access cells with that label.
 For instance, you might label a cell <code>top-secret</code>, and only grant 
access to that label to the <code>managers</code> group.
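<div class="paragraph">
<p>A minimal sketch in Java (table and column names are examples, and
<code>table</code> is assumed to be an open <code>Table</code>; the
coprocessor and HFile v3 setup described below must be in place):</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="CodeRay highlight"><code data-lang="java">import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

// Write a cell that only readers holding the 'top-secret' label can see.
Put put = new Put(Bytes.toBytes("row1"));
put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("q1"), Bytes.toBytes("value"));
put.setCellVisibility(new CellVisibility("top-secret"));
table.put(put);

// Read with matching authorizations; cells you are not authorized for are
// silently filtered out.
Scan scan = new Scan();
scan.setAuthorizations(new Authorizations("top-secret"));</code></pre>
</div>
</div>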
@@ -11722,7 +12008,7 @@ Visibility labels are not currently applied for 
superusers.
 </tbody>
 </table>
 <div class="sect3">
-<h4 id="_server_side_configuration_2"><a class="anchor" 
href="#_server_side_configuration_2"></a>61.3.1. Server-Side Configuration</h4>
+<h4 id="_server_side_configuration_2"><a class="anchor" 
href="#_server_side_configuration_2"></a>62.3.1. Server-Side Configuration</h4>
 <div class="olist arabic">
 <ol class="arabic">
 <li>
@@ -11772,7 +12058,7 @@ In that case, the mutation will fail if it makes use of 
labels the user is not a
 </div>
 </div>
 <div class="sect3">
-<h4 id="_administration_2"><a class="anchor" 
href="#_administration_2"></a>61.3.2. Administration</h4>
+<h4 id="_administration_2"><a class="anchor" 
href="#_administration_2"></a>62.3.2. Administration</h4>
 <div class="paragraph">
 <p>Administration tasks can be performed using the HBase Shell or the Java API.
 For defining the list of visibility labels and associating labels with users, 
the HBase Shell is probably simpler.</p>
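<div class="paragraph">
<p>For instance (a sketch; the label and user names are examples), labels are
defined and associated with a user from the shell like this:</p>
</div>
<div class="listingblock">
<div class="content">
<pre>hbase&gt; add_labels [ 'admin', 'service', 'developer', 'test' ]
hbase&gt; set_auths 'service_user', [ 'service' ]
hbase&gt; get_auths 'service_user'</pre>
</div>
</div>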
@@ -12006,7 +12292,7 @@ The correct way to apply cell level labels is to do so 
in the application code w
 </div>
 </div>
 <div class="sect3">
-<h4 id="reading_cells_with_labels"><a class="anchor" 
href="#reading_cells_with_labels"></a>61.3.3. Reading Cells with Labels</h4>
+<h4 id="reading_cells_with_labels"><a class="anchor" 
href="#reading_cells_with_labels"></a>62.3.3. Reading Cells with Labels</h4>
 <div class="paragraph">
 <p>When you issue a Scan or Get, HBase uses your default set of authorizations 
to
 filter out cells that you do not have access to. A superuser can set the 
default
@@ -12067,7 +12353,7 @@ public <span class="predefined-type">Void</span> run() 
<span class="directive">t
 </div>
 </div>
 <div class="sect3">
-<h4 id="_implementing_your_own_visibility_label_algorithm"><a class="anchor" 
href="#_implementing_your_own_visibility_label_algorithm"></a>61.3.4. 
Implementing Your Own Visibility Label Algorithm</h4>
+<h4 id="_implementing_your_own_visibility_label_algorithm"><a class="anchor" 
href="#_implementing_your_own_visibility_label_algorithm"></a>62.3.4. 
Implementing Your Own Visibility Label Algorithm</h4>
 <div class="paragraph">
 <p>Interpreting the labels authenticated for a given get/scan request is a 
pluggable algorithm.</p>
 </div>
@@ -12079,7 +12365,7 @@ public <span class="predefined-type">Void</span> run() 
<span class="directive">t
 </div>
 </div>
 <div class="sect3">
-<h4 id="_replicating_visibility_tags_as_strings"><a class="anchor" 
href="#_replicating_visibility_tags_as_strings"></a>61.3.5. Replicating 
Visibility Tags as Strings</h4>
+<h4 id="_replicating_visibility_tags_as_strings"><a class="anchor" 
href="#_replicating_visibility_tags_as_strings"></a>62.3.5. Replicating 
Visibility Tags as Strings</h4>
 <div class="paragraph">
 <p>As mentioned in the above sections, the interface 
<code>VisibilityLabelService</code> could be used to implement a different way 
of storing the visibility expressions in the cells. Clusters with replication 
enabled also must replicate the visibility expressions to the peer cluster. If 
<code>DefaultVisibilityLabelServiceImpl</code> is used as the implementation 
for <code>VisibilityLabelService</code>, all the visibility expressions are
converted to the corresponding expression based on the ordinals for each 
visibility label stored in the labels table. During replication, visible cells 
are also replicated with the ordinal-based expression intact. The peer cluster 
may not have the same <code>labels</code> table with the same ordinal mapping 
for the visibility labels. In that case, replicating the ordinals makes no 
sense. It would be better if the replication occurred with the visibility 
expressions transmitted as strings. To replicate the visibility expression as 
strings to the peer 
 cluster, create a <code>RegionServerObserver</code> configuration which works 
based on the implementation of the <code>VisibilityLabelService</code> 
interface. The configuration below enables replication of visibility 
expressions to peer clusters as strings. See <a 
href="https://issues.apache.org/jira/browse/HBASE-11639";>HBASE-11639</a> for 
more details.</p>
 </div>
@@ -12094,7 +12380,7 @@ public <span class="predefined-type">Void</span> run() 
<span class="directive">t
 </div>
 </div>
 <div class="sect2">
-<h3 id="hbase.encryption.server"><a class="anchor" 
href="#hbase.encryption.server"></a>61.4. Transparent Encryption of Data At 
Rest</h3>
+<h3 id="hbase.encryption.server"><a class="anchor" 
href="#hbase.encryption.server"></a>62.4. Transparent Encryption of Data At 
Rest</h3>
 <div class="paragraph">
 <p>HBase provides a mechanism for protecting your data at rest, in HFiles and 
the WAL, which reside within HDFS or another distributed filesystem.
 A two-tier architecture is used for flexible and non-intrusive key rotation.
@@ -12103,7 +12389,7 @@ When data is written, it is encrypted.
 When it is read, it is decrypted on demand.</p>
 </div>
 <div class="sect3">
-<h4 id="_how_it_works_2"><a class="anchor" href="#_how_it_works_2"></a>61.4.1. 
How It Works</h4>
+<h4 id="_how_it_works_2"><a class="anchor" href="#_how_it_works_2"></a>62.4.1. 
How It Works</h4>
 <div class="paragraph">
 <p>The administrator provisions a master key for the cluster, which is stored 
in a key provider accessible to every trusted HBase process, including the 
HMaster, RegionServers, and clients (such as HBase Shell) on administrative 
workstations.
 The default key provider is integrated with the Java KeyStore API and any key 
management systems with support for it.
@@ -12134,7 +12420,7 @@ When WAL encryption is enabled, all WALs are encrypted, 
regardless of whether th
 </div>
 </div>
 <div class="sect3">
-<h4 id="_server_side_configuration_3"><a class="anchor" 
href="#_server_side_configuration_3"></a>61.4.2. Server-Side Configuration</h4>
+<h4 id="_server_side_configuration_3"><a class="anchor" 
href="#_server_side_configuration_3"></a>62.4.2. Server-Side Configuration</h4>
 <div class="paragraph">
 <p>This procedure assumes you are using the default Java keystore 
implementation.
 If you are using a custom implementation, check its documentation and adjust 
accordingly.</p>
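<div class="paragraph">
<p>With the default Java keystore provider, creating a cluster master key might
look like the following sketch (the keystore path, alias, and password handling
are examples to adapt):</p>
</div>
<div class="listingblock">
<div class="content">
<pre>keytool -keystore /etc/hbase/conf/hbase.jks -storetype jceks \
        -genseckey -keyalg AES -keysize 128 -alias hbase-master-key</pre>
</div>
</div>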
@@ -12289,7 +12575,7 @@ You can include these in the HMaster&#8217;s 
<em>hbase-site.xml</em> as well, bu
 </div>
 </div>
 <div class="sect3">
-<h4 id="_administration_3"><a class="anchor" 
href="#_administration_3"></a>61.4.3. Administration</h4>
+<h4 id="_administration_3"><a class="anchor" 
href="#_administration_3"></a>62.4.3. Administration</h4>
 <div class="paragraph">
 <p>Administrative tasks can be performed in HBase Shell or the Java API.</p>
 </div>
@@ -12343,7 +12629,7 @@ Next, configure fallback to the old master key in the 
<em>hbase-site.xml</em> fi
 </div>
 </div>
 <div class="sect2">
-<h3 id="hbase.secure.bulkload"><a class="anchor" 
href="#hbase.secure.bulkload"></a>61.5. Secure Bulk Load</h3>
+<h3 id="hbase.secure.bulkload"><a class="anchor" 
href="#hbase.secure.bulkload"></a>62.5. Secure Bulk Load</h3>
 <div class="paragraph">
 <p>Bulk loading in secure mode is a bit more involved than the normal setup, since
the client has to transfer ownership of the files generated from the
MapReduce job to HBase.
 Secure bulk loading is implemented by a coprocessor, named
@@ -12400,7 +12686,7 @@ HBase manages creation and deletion of this 
directory.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="security.example.config"><a class="anchor" 
href="#security.example.config"></a>62. Security Configuration Example</h2>
+<h2 id="security.example.config"><a class="anchor" 
href="#security.example.config"></a>63. Security Configuration Example</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>This configuration example includes support for HFile v3, ACLs, Visibility 
Labels, and transparent encryption of data at rest and the WAL.
@@ -12550,10 +12836,10 @@ All options have been discussed separately in the 
sections above.</p>
 </div>
 <h1 id="_architecture" class="sect0"><a class="anchor" 
href="#_architecture"></a>Architecture</h1>
 <div class="sect1">
-<h2 id="arch.overview"><a class="anchor" href="#arch.overview"></a>63. 
Overview</h2>
+<h2 id="arch.overview"><a class="anchor" href="#arch.overview"></a>64. 
Overview</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="arch.overview.nosql"><a class="anchor" 
href="#arch.overview.nosql"></a>63.1. NoSQL?</h3>
+<h3 id="arch.overview.nosql"><a class="anchor" 
href="#arch.overview.nosql"></a>64.1. NoSQL?</h3>
 <div class="paragraph">
 <p>HBase is a type of "NoSQL" database.
 "NoSQL" is a general term meaning that the database isn&#8217;t an RDBMS which 
supports SQL as its primary access language, but there are many types of NoSQL 
databases: BerkeleyDB is an example of a local NoSQL database, whereas HBase is 
very much a distributed database.
@@ -12601,7 +12887,7 @@ This makes it very suitable for tasks such as 
high-speed counter aggregation.</p
 </div>
 </div>
 <div class="sect2">
-<h3 id="arch.overview.when"><a class="anchor" 
href="#arch.overview.when"></a>63.2. When Should I Use HBase?</h3>
+<h3 id="arch.overview.when"><a class="anchor" 
href="#arch.overview.when"></a>64.2. When Should I Use HBase?</h3>
 <div class="paragraph">
 <p>HBase isn&#8217;t suitable for every problem.</p>
 </div>
@@ -12623,7 +12909,7 @@ Even HDFS doesn&#8217;t do well with anything less than 
5 DataNodes (due to thin
 </div>
 </div>
 <div class="sect2">
-<h3 id="arch.overview.hbasehdfs"><a class="anchor" 
href="#arch.overview.hbasehdfs"></a>63.3. What Is The Difference Between HBase 
and Hadoop/HDFS?</h3>
+<h3 id="arch.overview.hbasehdfs"><a class="anchor" 
href="#arch.overview.hbasehdfs"></a>64.3. What Is The Difference Between HBase 
and Hadoop/HDFS?</h3>
 <div class="paragraph">
 <p><a href="http://hadoop.apache.org/hdfs/";>HDFS</a> is a distributed file 
system that is well suited for the storage of large files.
 Its documentation states that it is not, however, a general purpose file 
system, and does not provide fast individual record lookups in files.
@@ -12636,13 +12922,13 @@ See the <a href="#datamodel">Data Model</a> and the 
rest of this chapter for mor
 </div>
 </div>
 <div class="sect1">
-<h2 id="arch.catalog"><a class="anchor" href="#arch.catalog"></a>64. Catalog 
Tables</h2>
+<h2 id="arch.catalog"><a class="anchor" href="#arch.catalog"></a>65. Catalog 
Tables</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>The catalog table <code>hbase:meta</code> exists as an HBase table and is 
filtered out of the HBase shell&#8217;s <code>list</code> command, but is in 
fact a table just like any other.</p>
 </div>
 <div class="sect2">
-<h3 id="arch.catalog.root"><a class="anchor" 
href="#arch.catalog.root"></a>64.1. -ROOT-</h3>
+<h3 id="arch.catalog.root"><a class="anchor" 
href="#arch.catalog.root"></a>65.1. -ROOT-</h3>
 <div class="admonitionblock note">
 <table>
 <tr>
@@ -12685,7 +12971,7 @@ region key (<code>.META.,,1</code>)</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="arch.catalog.meta"><a class="anchor" 
href="#arch.catalog.meta"></a>64.2. hbase:meta</h3>
+<h3 id="arch.catalog.meta"><a class="anchor" 
href="#arch.catalog.meta"></a>65.2. hbase:meta</h3>
 <div class="paragraph">
 <p>The <code>hbase:meta</code> table (previously called <code>.META.</code>) 
keeps a list of all regions in the system.
 The location of <code>hbase:meta</code> was previously tracked within the 
<code>-ROOT-</code> table, but is now stored in ZooKeeper.</p>
@@ -12746,7 +13032,7 @@ utility.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="arch.catalog.startup"><a class="anchor" 
href="#arch.catalog.startup"></a>64.3. Startup Sequencing</h3>
+<h3 id="arch.catalog.startup"><a class="anchor" 
href="#arch.catalog.startup"></a>65.3. Startup Sequencing</h3>
 <div class="paragraph">
 <p>First, the location of <code>hbase:meta</code> is looked up in ZooKeeper.
 Next, <code>hbase:meta</code> is updated with server and startcode values.</p>
@@ -12758,7 +13044,7 @@ Next, <code>hbase:meta</code> is updated with server 
and startcode values.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="architecture.client"><a class="anchor" 
href="#architecture.client"></a>65. Client</h2>
+<h2 id="architecture.client"><a class="anchor" 
href="#architecture.client"></a>66. Client</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>The HBase client finds the RegionServers that are serving the particular 
row range of interest.
@@ -12775,12 +13061,12 @@ Should a region be reassigned either by the master 
load balancer or because a Re
 <p>Administrative functions are done via an instance of <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html";>Admin</a></p>
 </div>
 <div class="sect2">
-<h3 id="client.connections"><a class="anchor" 
href="#client.connections"></a>65.1. Cluster Connections</h3>
+<h3 id="client.connections"><a class="anchor" 
href="#client.connections"></a>66.1. Cluster Connections</h3>
 <div class="paragraph">
 <p>The API changed in HBase 1.0. For connection configuration information, see 
<a href="#client_dependencies">Client configuration and dependencies connecting 
to an HBase cluster</a>.</p>
 </div>
 <div class="sect3">
-<h4 id="_api_as_of_hbase_1_0_0"><a class="anchor" 
href="#_api_as_of_hbase_1_0_0"></a>65.1.1. API as of HBase 1.0.0</h4>
+<h4 id="_api_as_of_hbase_1_0_0"><a class="anchor" 
href="#_api_as_of_hbase_1_0_0"></a>66.1.1. API as of HBase 1.0.0</h4>
 <div class="paragraph">
 <p>The API has been cleaned up so that users are returned Interfaces to work
against rather than particular types.
In HBase 1.0, obtain a <code>Connection</code> object from
<code>ConnectionFactory</code> and thereafter get from it instances of
<code>Table</code>, <code>Admin</code>, and <code>RegionLocator</code> on an
as-needed basis.
@@ -12793,7 +13079,7 @@ See the <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/
 </div>
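<div class="paragraph">
<p>A minimal sketch of the 1.0+ pattern (the table name is an example):</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="CodeRay highlight"><code data-lang="java">import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

Configuration conf = HBaseConfiguration.create();
// Connection is heavyweight and thread-safe: create once, share, close at exit.
try (Connection connection = ConnectionFactory.createConnection(conf);
     // Table is lightweight and not thread-safe: get per use, close promptly.
     Table table = connection.getTable(TableName.valueOf("myTable"))) {
  // work with the table ...
}</code></pre>
</div>
</div>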
 </div>
 <div class="sect3">
-<h4 id="_api_before_hbase_1_0_0"><a class="anchor" 
href="#_api_before_hbase_1_0_0"></a>65.1.2. API before HBase 1.0.0</h4>
+<h4 id="_api_before_hbase_1_0_0"><a class="anchor" 
href="#_api_before_hbase_1_0_0"></a>66.1.2. API before HBase 1.0.0</h4>
 <div class="paragraph">
 <p>Instances of <code>HTable</code> are the way to interact with an HBase
cluster in versions earlier than 1.0.0. <em><a
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html";>Table</a>
instances are not thread-safe</em>. Only one thread can use an instance of
Table at any given time.
When creating Table instances, it is advisable to use the same <a
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HBaseConfiguration";>HBaseConfiguration</a>
instance.
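<div class="paragraph">
<p>A pre-1.0 sketch (the table name is an example); both instances share the
same configuration object, and therefore the same ZooKeeper and socket
resources:</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="CodeRay highlight"><code data-lang="java">import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

Configuration conf = HBaseConfiguration.create();
HTable table1 = new HTable(conf, "myTable");
HTable table2 = new HTable(conf, "myTable");</code></pre>
</div>
</div>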
@@ -12865,7 +13151,7 @@ Please use <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/clie
 </div>
 </div>
 <div class="sect2">
-<h3 id="client.writebuffer"><a class="anchor" 
href="#client.writebuffer"></a>65.2. WriteBuffer and Batch Methods</h3>
+<h3 id="client.writebuffer"><a class="anchor" 
href="#client.writebuffer"></a>66.2. WriteBuffer and Batch Methods</h3>
 <div class="paragraph">
 <p>In HBase 1.0 and later, <a 
href="http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTable.html";>HTable</a>
 is deprecated in favor of <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html";>Table</a>.
 <code>Table</code> does not use autoflush. To do buffered writes, use the 
BufferedMutator class.</p>
 </div>
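<div class="paragraph">
<p>A minimal sketch of buffered writes (table and column names are examples,
and <code>connection</code> is assumed to be an open
<code>Connection</code>):</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="CodeRay highlight"><code data-lang="java">import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

try (BufferedMutator mutator =
         connection.getBufferedMutator(TableName.valueOf("myTable"))) {
  Put put = new Put(Bytes.toBytes("row1"));
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
  mutator.mutate(put); // buffered; flushed when the buffer fills and on close()
}</code></pre>
</div>
</div>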
@@ -12880,7 +13166,7 @@ Please use <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/clie
 </div>
 </div>
 <div class="sect2">
-<h3 id="client.external"><a class="anchor" href="#client.external"></a>65.3. 
External Clients</h3>
+<h3 id="client.external"><a class="anchor" href="#client.external"></a>66.3. 
External Clients</h3>
 <div class="paragraph">
 <p>Information on non-Java clients and custom protocols is covered in <a 
href="#external_apis">Apache HBase External APIs</a></p>
 </div>
@@ -12888,7 +13174,7 @@ Please use <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/clie
 </div>
 </div>
 <div class="sect1">
-<h2 id="client.filter"><a class="anchor" href="#client.filter"></a>66. Client 
Request Filters</h2>
+<h2 id="client.filter"><a class="anchor" href="#client.filter"></a>67. Client 
Request Filters</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p><a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html";>Get</a>
 and <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html";>Scan</a>
 instances can be optionally configured with <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html";>filters</a>
 which are applied on the RegionServer.</p>
@@ -12897,12 +13183,12 @@ Please use <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/clie
 <p>Filters can be confusing because there are many different types, and it is 
best to approach them by understanding the groups of Filter functionality.</p>
 </div>
 <div class="sect2">
-<h3 id="client.filter.structural"><a class="anchor" 
href="#client.filter.structural"></a>66.1. Structural</h3>
+<h3 id="client.filter.structural"><a class="anchor" 
href="#client.filter.structural"></a>67.1. Structural</h3>
 <div class="paragraph">
 <p>Structural Filters contain other Filters.</p>
 </div>
 <div class="sect3">
-<h4 id="client.filter.structural.fl"><a class="anchor" 
href="#client.filter.structural.fl"></a>66.1.1. FilterList</h4>
+<h4 id="client.filter.structural.fl"><a class="anchor" 
href="#client.filter.structural.fl"></a>67.1.1. FilterList</h4>
 <div class="paragraph">
 <p><a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FilterList.html";>FilterList</a>
 represents a list of Filters with a relationship of 
<code>FilterList.Operator.MUST_PASS_ALL</code> or 
<code>FilterList.Operator.MUST_PASS_ONE</code> between the Filters.
 The following example shows an 'or' between two Filters (checking for either 
'my value' or 'my other value' on the same attribute).</p>
@@ -12930,9 +13216,9 @@ scan.setFilter(list);</code></pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="client.filter.cv"><a class="anchor" href="#client.filter.cv"></a>66.2. 
Column Value</h3>
+<h3 id="client.filter.cv"><a class="anchor" href="#client.filter.cv"></a>67.2. 
Column Value</h3>
 <div class="sect3">
-<h4 id="client.filter.cv.scvf"><a class="anchor" 
href="#client.filter.cv.scvf"></a>66.2.1. SingleColumnValueFilter</h4>
+<h4 id="client.filter.cv.scvf"><a class="anchor" 
href="#client.filter.cv.scvf"></a>67.2.1. SingleColumnValueFilter</h4>
 <div class="paragraph">
 <p>A SingleColumnValueFilter (see:
 <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.html";
 
class="bare">http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.html</a>)
@@ -12954,13 +13240,13 @@ scan.setFilter(filter);</code></pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="client.filter.cvp"><a class="anchor" 
href="#client.filter.cvp"></a>66.3. Column Value Comparators</h3>
+<h3 id="client.filter.cvp"><a class="anchor" 
href="#client.filter.cvp"></a>67.3. Column Value Comparators</h3>
 <div class="paragraph">
 <p>There are several Comparator classes in the Filter package that deserve 
special mention.
 These Comparators are used in concert with other Filters, such as <a 
href="#client.filter.cv.scvf">SingleColumnValueFilter</a>.</p>
 </div>
 <div class="sect3">
-<h4 id="client.filter.cvp.rcs"><a class="anchor" 
href="#client.filter.cvp.rcs"></a>66.3.1. RegexStringComparator</h4>
+<h4 id="client.filter.cvp.rcs"><a class="anchor" 
href="#client.filter.cvp.rcs"></a>67.3.1. RegexStringComparator</h4>
 <div class="paragraph">
 <p><a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/RegexStringComparator.html";>RegexStringComparator</a>
 supports regular expressions for value comparisons.</p>
 </div>
@@ -12981,7 +13267,7 @@ scan.setFilter(filter);</code></pre>
 </div>
 </div>
 <div class="sect3">
-<h4 id="client.filter.cvp.substringcomparator"><a class="anchor" 
href="#client.filter.cvp.substringcomparator"></a>66.3.2. 
SubstringComparator</h4>
+<h4 id="client.filter.cvp.substringcomparator"><a class="anchor" 
href="#client.filter.cvp.substringcomparator"></a>67.3.2. 
SubstringComparator</h4>
 <div class="paragraph">
 <p><a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SubstringComparator.html";>SubstringComparator</a>
 can be used to determine if a given substring exists in a value.
 The comparison is case-insensitive.</p>
@@ -13000,38 +13286,38 @@ scan.setFilter(filter);</code></pre>
 </div>
 </div>
 <div class="sect3">
-<h4 id="client.filter.cvp.bfp"><a class="anchor" 
href="#client.filter.cvp.bfp"></a>66.3.3. BinaryPrefixComparator</h4>
+<h4 id="client.filter.cvp.bfp"><a class="anchor" 
href="#client.filter.cvp.bfp"></a>67.3.3. BinaryPrefixComparator</h4>
 <div class="paragraph">
 <p>See <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.html";>BinaryPrefixComparator</a>.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="client.filter.cvp.bc"><a class="anchor" 
href="#client.filter.cvp.bc"></a>66.3.4. BinaryComparator</h4>
+<h4 id="client.filter.cvp.bc"><a class="anchor" 
href="#client.filter.cvp.bc"></a>67.3.4. BinaryComparator</h4>
 <div class="paragraph">
 <p>See <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/BinaryComparator.html";>BinaryComparator</a>.</p>
 </div>
 </div>
 </div>
 <div class="sect2">
-<h3 id="client.filter.kvm"><a class="anchor" 
href="#client.filter.kvm"></a>66.4. KeyValue Metadata</h3>
+<h3 id="client.filter.kvm"><a class="anchor" 
href="#client.filter.kvm"></a>67.4. KeyValue Metadata</h3>
 <div class="paragraph">
 <p>As HBase stores data internally as KeyValue pairs, KeyValue Metadata
Filters evaluate the existence of keys (i.e., ColumnFamily:Column qualifiers)
for a row, as opposed to the values covered in the previous section.</p>
 </div>
 <div class="sect3">
-<h4 id="client.filter.kvm.ff"><a class="anchor" 
href="#client.filter.kvm.ff"></a>66.4.1. FamilyFilter</h4>
+<h4 id="client.filter.kvm.ff"><a class="anchor" 
href="#client.filter.kvm.ff"></a>67.4.1. FamilyFilter</h4>
 <div class="paragraph">
 <p><a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FamilyFilter.html";>FamilyFilter</a>
 can be used to filter on the ColumnFamily.
 It is generally a better idea to select ColumnFamilies in the Scan than to do 
it with a Filter.</p>
 </div>
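<div class="paragraph">
<p>A sketch of the preferred approach (the family name is an example):
restrict the family on the Scan itself rather than filtering afterwards:</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="CodeRay highlight"><code data-lang="java">import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

Scan scan = new Scan();
scan.addFamily(Bytes.toBytes("cf")); // the server skips other families entirely</code></pre>
</div>
</div>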
 </div>
 <div class="sect3">
-<h4 id="client.filter.kvm.qf"><a class="anchor" 
href="#client.filter.kvm.qf"></a>66.4.2. QualifierFilter</h4>
+<h4 id="client.filter.kvm.qf"><a class="anchor" 
href="#client.filter.kvm.qf"></a>67.4.2. QualifierFilter</h4>
 <div class="paragraph">
 <p><a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/QualifierFilter.html";>QualifierFilter</a>
 can be used to filter based on Column (aka Qualifier) name.</p>
 </div>
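<div class="paragraph">
<p>For example (a sketch; the qualifier name is an example), to match only
cells whose qualifier equals <code>q1</code>:</p>
</div>
<div class="listingblock">
<div class="content">
<pre class="CodeRay highlight"><code data-lang="java">import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.util.Bytes;

Scan scan = new Scan();
scan.setFilter(new QualifierFilter(
    CompareFilter.CompareOp.EQUAL,
    new BinaryComparator(Bytes.toBytes("q1"))));</code></pre>
</div>
</div>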
 </div>
 <div class="sect3">
-<h4 id="client.filter.kvm.cpf"><a class="anchor" 
href="#client.filter.kvm.cpf"></a>66.4.3. ColumnPrefixFilter</h4>
+<h4 id="client.filter.kvm.cpf"><a class="anchor" 
href="#client.filter.kvm.cpf"></a>67.4.3. ColumnPrefixFilter</h4>
 <div class="paragraph">
 <p><a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.html";>ColumnPrefixFilter</a>
 can be used to filter based on the lead portion of Column (aka Qualifier) 
names.</p>
 </div>
@@ -13068,7 +13354,7 @@ rs.close();</code></pre>
 </div>
 </div>
 <div class="sect3">
-<h4 id="client.filter.kvm.mcpf"><a class="anchor" 
href="#client.filter.kvm.mcpf"></a>66.4.4. MultipleColumnPrefixFilter</h4>
+<h4 id="client.filter.kvm.mcpf"><a class="anchor" 
href="#client.filter.kvm.mcpf"></a>67.4.4. MultipleColumnPrefixFilter</h4>
 <div class="paragraph">
 <p><a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.html";>MultipleColumnPrefixFilter</a>
 behaves like ColumnPrefixFilter but allows specifying multiple prefixes.</p>
 </div>
@@ -13101,7 +13387,7 @@ rs.close();</code></pre>
 </div>
 </div>
 <div class="sect3">
-<h4 id="client.filter.kvm.crf"><a class="anchor" 
href="#client.filter.kvm.crf"></a>66.4.5. ColumnRangeFilter</h4>
+<h4 id="client.filter.kvm.crf"><a class="anchor" 
href="#client.filter.kvm.crf"></a>67.4.5. ColumnRangeFilter</h4>
 <div class="paragraph">
 <p>A <a
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/ColumnRangeFilter.html";>ColumnRangeFilter</a>
allows efficient intra-row scanning.</p>
 </div>
@@ -13145,18 +13431,18 @@ rs.close();</code></pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="client.filter.row"><a class="anchor" 
href="#client.filter.row"></a>66.5. RowKey</h3>
+<h3 id="client.filter.row"><a class="anchor" 
href="#client.filter.row"></a>67.5. RowKey</h3>
 <div class="sect3">
-<h4 id="client.filter.row.rf"><a class="anchor" 
href="#client.filter.row.rf"></a>66.5.1. RowFilter</h4>
+<h4 id="client.filter.row.rf"><a class="anchor" 
href="#client.filter.row.rf"></a>67.5.1. RowFilter</h4>
 <div class="paragraph">
 <p>It is generally a better idea to use the startRow/stopRow methods on Scan 
for row selection, however <a 
href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/RowFilter.html";>RowFilter</a>
 can also be

<TRUNCATED>
