Author: stack
Date: Fri Apr 22 19:07:56 2011
New Revision: 1095988
URL: http://svn.apache.org/viewvc?rev=1095988&view=rev
Log:
Added a heads-up to preface that user is about to enter the realm of distributed
computing, added how to enable rpc logging, added note to decommissioning
server that balancer should be off, and converted links to xrefs when they were
linkends
Modified:
hbase/trunk/src/docbkx/book.xml
hbase/trunk/src/docbkx/configuration.xml
hbase/trunk/src/docbkx/getting_started.xml
hbase/trunk/src/docbkx/performance.xml
hbase/trunk/src/docbkx/preface.xml
hbase/trunk/src/docbkx/shell.xml
hbase/trunk/src/docbkx/upgrading.xml
Modified: hbase/trunk/src/docbkx/book.xml
URL:
http://svn.apache.org/viewvc/hbase/trunk/src/docbkx/book.xml?rev=1095988&r1=1095987&r2=1095988&view=diff
==============================================================================
--- hbase/trunk/src/docbkx/book.xml (original)
+++ hbase/trunk/src/docbkx/book.xml Fri Apr 22 19:07:56 2011
@@ -148,7 +148,7 @@ throws InterruptedException, IOException
<title>
Schema Creation
</title>
- <para>HBase schemas can be created or updated through the <link
linkend="shell">HBase shell</link>
+ <para>HBase schemas can be created or updated with <xref linkend="shell" />
or by using <link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html">HBaseAdmin</link>
in the Java API.
</para>
</section>
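A minimal sketch of the HBaseAdmin route mentioned above, using the 0.90-era client API; the table and column family names are hypothetical, and a running cluster reachable from the default configuration is assumed:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class SchemaSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Build the schema: one table with a single column family.
    HTableDescriptor desc = new HTableDescriptor("myTable"); // hypothetical name
    desc.addFamily(new HColumnDescriptor("cf"));             // hypothetical family
    admin.createTable(desc);
  }
}
```

The shell route (`create 'myTable', 'cf'`) produces the same schema; the Java route is useful when table creation must happen from application code.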
@@ -197,7 +197,7 @@ throws InterruptedException, IOException
the case described by Marc Limotte at the tail of
<link
xlink:url="https://issues.apache.org/jira/browse/HBASE-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005272#comment-13005272">HBASE-3551</link>
(recommended!).
- Therein, the indices that are kept on HBase storefiles (<link
linkend="hfile">HFile</link>s)
+ Therein, the indices that are kept on HBase storefiles (<xref
linkend="hfile" />)
to facilitate random access may end up occupying large chunks
of the HBase
allotted RAM because the cell value coordinates are large.
Marc in the above-cited comment suggests upping the block
size so
@@ -213,7 +213,7 @@ throws InterruptedException, IOException
<para>The number of row versions to store is configured per column
family via <link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html">HColumnDescriptor</link>.
The default is 3.
- This is an important parameter because as described in the <link
linkend="datamodel">Data Model</link>
+ This is an important parameter because as described in <xref
linkend="datamodel" />
section HBase does <emphasis>not</emphasis> overwrite row values, but
rather
stores different values per row by time (and qualifier). Excess
versions are removed during major
compactions. The number of versions may need to be increased or
decreased depending on application needs.
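A short sketch of setting the version count per column family, assuming the 0.90-era `HColumnDescriptor` API; the table and family names are hypothetical:

```java
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;

public class VersionsSketch {
  public static void main(String[] args) {
    // Hypothetical schema: raise retained row versions from the default 3 to 5.
    HColumnDescriptor family = new HColumnDescriptor("cf");
    family.setMaxVersions(5);
    HTableDescriptor table = new HTableDescriptor("myTable");
    table.addFamily(family);
    // Pass the descriptor to HBaseAdmin.createTable(...) or modifyColumn(...).
  }
}
```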
@@ -248,7 +248,7 @@ throws InterruptedException, IOException
<para>Size of the compaction queue. This is the number of stores in
the region that have been targeted for compaction.</para>
</section>
<section
xml:id="hbase.regionserver.fsReadLatency_avg_time"><title><varname>hbase.regionserver.fsReadLatency_avg_time</varname></title>
- <para>Filesystem read latency (ms)</para>
+ <para>Filesystem read latency (ms). This is the average time to
read from HDFS.</para>
</section>
<section
xml:id="hbase.regionserver.fsReadLatency_num_ops"><title><varname>hbase.regionserver.fsReadLatency_num_ops</varname></title>
<para>TODO</para>
@@ -294,11 +294,10 @@ throws InterruptedException, IOException
<chapter xml:id="datamodel">
<title>Data Model</title>
- <para>In short, applications store data into HBase <link
linkend="table">tables</link>.
- Tables are made of <link linkend="row">rows</link> and
<emphasis>columns</emphasis>.
- All columns in HBase belong to a particular
- <link linkend="columnfamily">column family</link>.
- Table <link linkend="cell">cells</link> -- the intersection of row and
column
+ <para>In short, applications store data into an HBase table.
+ Tables are made of rows and columns.
+ All columns in HBase belong to a particular column family.
+ Table cells -- the intersection of row and column
coordinates -- are versioned.
A cell's content is an uninterpreted array of bytes.
</para>
@@ -709,7 +708,7 @@ throws InterruptedException, IOException
<para>Administrative functions are handled through <link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html">HBaseAdmin</link>
</para>
<section xml:id="client.connections"><title>Connections</title>
- <para>For connection configuration information, see the <link
linkend="client_dependencies">configuration</link> section.
+ <para>For connection configuration information, see <xref
linkend="client_dependencies" />.
</para>
<para><link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html">HTable</link>
instances are not thread-safe. When creating HTable instances, it is
advisable to use the same <link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HBaseConfiguration">HBaseConfiguration</link>
@@ -728,7 +727,8 @@ HTable table2 = new HTable(conf2, "myTab
</para>
</section>
<section xml:id="client.writebuffer"><title>WriteBuffer and Batch
Methods</title>
- <para>If <link
linkend="perf.hbase.client.autoflush">autoflush</link> is turned off on <link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html">HTable</link>,
+ <para>If <xref linkend="perf.hbase.client.autoflush" /> is turned
off on
+ <link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html">HTable</link>,
<classname>Put</classname>s are sent to region servers when the
writebuffer
is filled. The writebuffer is 2MB by default. Before an
HTable instance is
discarded, either <methodname>close()</methodname> or
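The write-buffer behavior described above can be sketched as follows, assuming the 0.90-era `HTable` API; the table, family, and qualifier names are hypothetical, and a running cluster is assumed:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteBufferSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "myTable"); // hypothetical table
    table.setAutoFlush(false);                  // buffer Puts client-side
    table.setWriteBufferSize(4 * 1024 * 1024);  // optional: grow the 2MB default
    for (int i = 0; i < 1000; i++) {
      Put put = new Put(Bytes.toBytes("row" + i));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("a"), Bytes.toBytes("value" + i));
      table.put(put); // queued in the write buffer, not yet sent
    }
    table.flushCommits(); // close() would also flush
    table.close();
  }
}
```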
@@ -813,7 +813,7 @@ HTable table2 = new HTable(conf2, "myTab
participate. The RegionServer splits a region, offlines the split
region and then adds the daughter regions to META, opens daughters on
the parent's hosting RegionServer and then reports the split to the
- Master. See <link linkend="disable.splitting">Managed Splitting</link>
for how to manually manage
+ Master. See <xref linkend="disable.splitting" /> for how to manually
manage
splits (and for why you might do this)</para>
</section>
@@ -872,7 +872,7 @@ HTable table2 = new HTable(conf2, "myTab
For a description of how a minor compaction picks files to compact,
see the <link
xlink:href="http://hbase.apache.org/xref/org/apache/hadoop/hbase/regionserver/Store.html#836">ascii
diagram in the Store source code.</link>
</para>
<para>After a major compaction runs there will be a single storefile
per store, and this usually helps performance. Caution: major compactions
rewrite all of the stores' data and on a loaded system, this may not be tenable;
- major compactions will usually have to be <link
linkend="disable.splitting">managed</link> on large systems.
+ major compactions will usually have to be managed (see <xref
linkend="disable.splitting" />) on large systems.
</para>
</section>
@@ -888,7 +888,7 @@ HTable table2 = new HTable(conf2, "myTab
<title>Purpose</title>
<para>Each RegionServer adds updates (Puts, Deletes) to its
write-ahead log (WAL)
- first, and then to the <link linkend="store.memstore">MemStore</link>
for the affected <link linkend="store">Store</link>.
+ first, and then to the <xref linkend="store.memstore"/> for the
affected <xref linkend="store" />.
This ensures that HBase has durable writes. Without WAL, there is the
possibility of data loss in the case of a RegionServer failure
before each MemStore is flushed and new StoreFiles are written. <link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/wal/HLog.html">HLog</link>
is the HBase WAL implementation, and there is one HLog instance per
RegionServer.
@@ -1090,7 +1090,7 @@ HTable table2 = new HTable(conf2, "myTab
</para>
</section>
<section><title>HFile Tool</title>
- <para>See <link linkend="hfile_tool" >HFile Tool</link>.</para>
+ <para>See <xref linkend="hfile_tool" />.</para>
</section>
<section xml:id="wal_tools">
<title>WAL Tools</title>
@@ -1113,10 +1113,31 @@ HTable table2 = new HTable(conf2, "myTab
</section>
</section>
<section xml:id="compression.tool"><title>Compression Tool</title>
- <para>See <link linkend="compression.tool" >Compression
Tool</link>.</para>
+ <para>See <xref linkend="compression.tool" />.</para>
</section>
<section xml:id="decommission"><title>Node Decommission</title>
- <para>Since HBase 0.90.2, you can have a node gradually shed its load
and then shutdown using the
+ <para>You can stop an individual regionserver by running the following
+ script in the HBase directory on the particular node:
+ <programlisting>$ ./bin/hbase-daemon.sh stop regionserver</programlisting>
+ The regionserver will first close all regions and then shut itself
down.
+ On shutdown, the regionserver's ephemeral node in ZooKeeper will
expire.
+ The master will notice the regionserver gone and will treat it as
+ a 'crashed' server; it will reassign the regions the regionserver
was carrying.
+ <note><title>Disable the Load Balancer before Decommissioning a
node</title>
+ <para>If the load balancer runs while a node is shutting down,
then
+ there could be contention between the Load Balancer and the
+ Master's recovery of the just decommissioned regionserver.
+ Avoid any problems by disabling the balancer first.
+ See <xref linkend="lb" /> below.
+ </para>
+ </note>
+ </para>
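One way to flip the balancer off before decommissioning, assuming a shell version that has the `balance_switch` command (the command echoes the balancer's previous state):

```shell
# Disable the region balancer before decommissioning a node, then re-enable it.
echo "balance_switch false" | ./bin/hbase shell
# ... decommission the regionserver ...
echo "balance_switch true" | ./bin/hbase shell
```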
+ <para>
+ A downside to the above stop of a regionserver is that regions could be offline for
+ a good period of time. Regions are closed in order. If there are many regions on the
+ server, the first region to close may not be back online until all regions close and
+ after the master notices the regionserver's znode gone. In HBase 0.90.2, we added a
+ facility for having a node gradually shed its load and then shut itself down:
+ the <filename>graceful_stop.sh</filename> script. Here is its usage:
<programlisting>$ ./bin/graceful_stop.sh
Usage: graceful_stop.sh [--config &lt;conf-dir&gt;] [--restart] [--reload]
[--thrift] [--rest] &lt;hostname&gt;
@@ -1152,7 +1173,7 @@ Usage: graceful_stop.sh [--config &c
RegionServer gone but all regions will have already been redeployed
and because the RegionServer went down cleanly, there will be no
WAL logs to split.
- <note><title>Load Balancer</title>
+ <note xml:id="lb"><title>Load Balancer</title>
<para>
It is assumed that the Region Load Balancer is disabled while
the
<command>graceful_stop</command> script runs (otherwise the
balancer
@@ -1270,7 +1291,7 @@ false
LZO
</title>
<para>
- See <link linkend="lzo">LZO Compression</link> above.
+ See <xref linkend="lzo" /> above.
</para>
</section>
@@ -1285,7 +1306,7 @@ false
available on the CLASSPATH; in this case it will use native
compressors instead (If the native libs are NOT present,
you will see lots of <emphasis>Got brand-new compressor</emphasis>
- reports in your logs; see <link linkend="brand.new.compressor">FAQ</link>).
+ reports in your logs; see <xref linkend="brand.new.compressor" />).
</para>
</section>
</appendix>
@@ -1309,7 +1330,7 @@ false
<answer>
<para>
Not really. SQL-ish support for HBase via <link
xlink:href="http://hive.apache.org/">Hive</link> is in development; however,
Hive is based on MapReduce, which is not generally suitable for low-latency
requests.
- See the <link linkend="datamodel">Data Model</link>
section for examples on the HBase client.
+ See the <xref linkend="datamodel" /> section for examples
on the HBase client.
</para>
</answer>
</qandaentry>
@@ -1320,7 +1341,7 @@ false
<link
xlink:href="http://hadoop.apache.org/hdfs/">HDFS</link> is a distributed file
system that is well suited for the storage of large files. Its documentation
states that it is not, however, a general purpose file
system, and does not provide fast individual record lookups in files.
HBase, on the other hand, is built on top of HDFS and
provides fast record lookups (and updates) for large tables. This can
sometimes be a point of conceptual confusion.
- See the <link linkend="datamodel">Data Model</link> and
<link linkend="architecture">Architecture</link> sections for more information
on how HBase achieves its goals.
+ See the <xref linkend="datamodel" /> and <xref
linkend="architecture" /> sections for more information on how HBase achieves
its goals.
</para>
</answer>
</qandaentry>
@@ -1406,6 +1427,7 @@ When I build, why do I always get <code>
<appendix>
<title xml:id="ycsb"><link
xlink:href="https://github.com/brianfrankcooper/YCSB/">YCSB: The Yahoo! Cloud
Serving Benchmark</link> and HBase</title>
+ <para>TODO: Describe how YCSB is poor for putting up a decent cluster
load.</para>
<para>TODO: Describe setup of YCSB for HBase</para>
<para>Ted Dunning redid YCSB so it's mavenized and added a facility for
verifying workloads. See <link
xlink:href="https://github.com/tdunning/YCSB">Ted Dunning's YCSB</link>.</para>
Modified: hbase/trunk/src/docbkx/configuration.xml
URL:
http://svn.apache.org/viewvc/hbase/trunk/src/docbkx/configuration.xml?rev=1095988&r1=1095987&r2=1095988&view=diff
==============================================================================
--- hbase/trunk/src/docbkx/configuration.xml (original)
+++ hbase/trunk/src/docbkx/configuration.xml Fri Apr 22 19:07:56 2011
@@ -40,7 +40,7 @@ to ensure well-formedness of your docume
for HBase, site specific customizations go into
the file <filename>conf/hbase-site.xml</filename>.
For the list of configurable properties, see
- <link linkend="hbase_default_configurations">Default HBase
Configurations</link>
+ <xref linkend="hbase_default_configurations" />
below or view the raw <filename>hbase-default.xml</filename>
source file in the HBase source code at
<filename>src/main/resources</filename>.
@@ -99,10 +99,10 @@ to ensure well-formedness of your docume
<section xml:id="required_configuration"><title>Required
Configurations</title>
- <para>See the <link linkend="requirements">Requirements</link> section.
+ <para>See <xref linkend="requirements" />.
It lists at least two required configurations needed to run HBase
bearing
- load: i.e. <link linkend="ulimit">file descriptors
<varname>ulimit</varname></link> and
- <link
linkend="dfs.datanode.max.xcievers"><varname>dfs.datanode.max.xcievers</varname></link>.
+ load: i.e. <xref linkend="ulimit" /> and
+ <xref linkend="dfs.datanode.max.xcievers" />.
</para>
</section>
@@ -185,10 +185,10 @@ to ensure well-formedness of your docume
fixup on the new machine. In versions since HBase 0.90.0, we should
fail in a way that makes it plain what the problem is, but maybe not.
Remember you read this paragraph<footnote><para>See
- <link
linkend="hbase.regionserver.codecs">hbase.regionserver.codecs</link>
+ <xref linkend="hbase.regionserver.codecs" />
for a feature to help protect against failed LZO
install</para></footnote>.
</para>
- <para>See also the <link linkend="compression">Compression
Appendix</link>
+ <para>See also <xref linkend="compression" />
at the tail of this book.</para>
</section>
<section xml:id="bigger.regions">
@@ -303,11 +303,11 @@ of all regions.
(Invocation will also factor in any
<filename>hbase-default.xml</filename> found;
an hbase-default.xml ships inside the
<filename>hbase.X.X.X.jar</filename>).
It is also possible to specify configuration directly without having
to read from a
- <filename>hbase-site.xml</filename>. For example, to set the
- <link linkend="zookeeper">zookeeper</link> ensemble for the cluster
programmatically do as follows:
+ <filename>hbase-site.xml</filename>. For example, to set the ZooKeeper
+ ensemble for the cluster programmatically do as follows:
<programlisting>Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "localhost"); // Here we are running
zookeeper locally</programlisting>
- If multiple <link linkend="zookeeper">zookeeper</link> instances make
up your zookeeper ensemble,
+ If multiple ZooKeeper instances make up your ZooKeeper ensemble,
they may be specified in a comma-separated list (just as in the
<filename>hbase-site.xml</filename> file).
This populated <classname>Configuration</classname> instance can then
be passed to an
<link
xlink:href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html">HTable</link>,
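The comma-separated form described above can be sketched as follows, assuming the 0.90-era API; the ZooKeeper hostnames and the table name are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class QuorumSketch {
  public static void main(String[] args) throws Exception {
    Configuration config = HBaseConfiguration.create();
    // Comma-separated list, just as in hbase-site.xml (hostnames hypothetical).
    config.set("hbase.zookeeper.quorum",
        "zk1.example.com,zk2.example.com,zk3.example.com");
    HTable table = new HTable(config, "myTable");
  }
}
```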
Modified: hbase/trunk/src/docbkx/getting_started.xml
URL:
http://svn.apache.org/viewvc/hbase/trunk/src/docbkx/getting_started.xml?rev=1095988&r1=1095987&r2=1095988&view=diff
==============================================================================
--- hbase/trunk/src/docbkx/getting_started.xml (original)
+++ hbase/trunk/src/docbkx/getting_started.xml Fri Apr 22 19:07:56 2011
@@ -12,9 +12,9 @@
<section>
<title>Introduction</title>
- <para><link linkend="quickstart">Quick Start</link> will get you up and
+ <para><xref linkend="quickstart" /> will get you up and
running on a single-node instance of HBase using the local filesystem. The
- <link linkend="notsoquick">Not-so-quick Start Guide</link> describes setup
+ <xref linkend="notsoquick" /> describes setup
of HBase in distributed mode running on top of HDFS.</para>
</section>
@@ -23,7 +23,7 @@
<para>This guide describes setup of a standalone HBase instance that uses
the local filesystem. It leads you through creating a table, inserting
- rows via the <link linkend="shell">HBase Shell</link>, and then cleaning
+ rows via the HBase <command>shell</command>, and then cleaning
up and shutting down your standalone HBase instance. The below exercise
should take no more than ten minutes (not including download time).</para>
@@ -97,8 +97,7 @@ starting Master, logging to logs/hbase-u
<section xml:id="shell_exercises">
<title>Shell Exercises</title>
- <para>Connect to your running HBase via the <link linkend="shell">HBase
- Shell</link>.</para>
+ <para>Connect to your running HBase via the
<command>shell</command>.</para>
<para><programlisting>$ ./bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
@@ -114,8 +113,7 @@ hbase(main):001:0> </programlisting><
HBase shell; in particular note how table names, rows, and columns,
etc., must be quoted.</para>
- <para>Create a table named <varname>test</varname> with a single <link
- linkend="columnfamily">column family</link> named <varname>cf</varname>.
+ <para>Create a table named <varname>test</varname> with a single column
family named <varname>cf</varname>.
Verify its creation by listing all tables and then insert some
values.</para>
@@ -133,8 +131,7 @@ hbase(main):006:0> put 'test', 'row3'
<para>Above we inserted 3 values, one at a time. The first insert is at
<varname>row1</varname>, column <varname>cf:a</varname> with a value of
- <varname>value1</varname>. Columns in HBase are comprised of a <link
- linkend="columnfamily">column family</link> prefix --
+ <varname>value1</varname>. Columns in HBase are comprised of a column
family prefix --
<varname>cf</varname> in this example -- followed by a colon and then a
column qualifier suffix (<varname>a</varname> in this case).</para>
@@ -182,8 +179,7 @@ stopping hbase...............</programli
<title>Where to go next</title>
<para>The above described standalone setup is good for testing and
- experiments only. Move on to the next section, the <link
- linkend="notsoquick">Not-so-quick Start Guide</link> where we'll go into
+ experiments only. Next, move on to <xref linkend="notsoquick" />
where we'll go into
depth on the different HBase run modes, requirements and critical
configurations needed to set up a distributed HBase deploy.</para>
</section>
@@ -437,9 +433,7 @@ stopping hbase...............</programli
<section xml:id="standalone_dist">
<title>HBase run modes: Standalone and Distributed</title>
- <para>HBase has two run modes: <link
- linkend="standalone">standalone</link> and <link
- linkend="distributed">distributed</link>. Out of the box, HBase runs in
+ <para>HBase has two run modes: <xref linkend="standalone" /> and <xref
linkend="distributed" />. Out of the box, HBase runs in
standalone mode. To set up a distributed deploy, you will need to
configure HBase by editing files in the HBase <filename>conf</filename>
directory.</para>
@@ -456,7 +450,7 @@ stopping hbase...............</programli
<title>Standalone HBase</title>
<para>This is the default mode. Standalone mode is what is described
- in the <link linkend="quickstart">quickstart</link> section. In
+ in the <xref linkend="quickstart" /> section. In
standalone mode, HBase does not use HDFS -- it uses the local
filesystem instead -- and it runs all HBase daemons and a local
zookeeper all up in the same JVM. Zookeeper binds to a well known port
@@ -485,8 +479,7 @@ stopping hbase...............</programli
verification and exploration of your install, whether a
<emphasis>pseudo-distributed</emphasis> or
<emphasis>fully-distributed</emphasis> configuration is described in a
- section that follows, <link linkend="confirm">Running and Confirming
- your Installation</link>. The same verification script applies to both
+ section that follows, <xref linkend="confirm" />. The same
verification script applies to both
deploy types.</para>
<section xml:id="pseudo">
@@ -499,10 +492,8 @@ stopping hbase...............</programli
<para>Once you have confirmed your HDFS setup, edit
<filename>conf/hbase-site.xml</filename>. This is the file into
- which you add local customizations and overrides for <link
- linkend="hbase_default_configurations">Default HBase
- Configurations</link> and <link linkend="hdfs_client_conf">HDFS
- Client Configurations</link>. Point HBase at the running Hadoop HDFS
+ which you add local customizations and overrides for
+ <xref linkend="hbase_default_configurations" /> and <xref
linkend="hdfs_client_conf" />. Point HBase at the running Hadoop HDFS
instance by setting the <varname>hbase.rootdir</varname> property.
This property points HBase at the Hadoop filesystem instance to use.
For example, adding the properties below to your
@@ -543,8 +534,7 @@ stopping hbase...............</programli
want to connect from a remote location.</para>
</note>
- <para>Now skip to <link linkend="confirm">Running and Confirming
- your Installation</link> for how to start and verify your
+ <para>Now skip to <xref linkend="confirm" /> for how to start and
verify your
pseudo-distributed install. <footnote>
<para>See <link
xlink:href="http://hbase.apache.org/pseudo-distributed.html">Pseudo-distributed
@@ -594,8 +584,7 @@ stopping hbase...............</programli
<para>In addition, a fully-distributed mode requires that you
modify <filename>conf/regionservers</filename>. The
- <filename><link
- linkend="regionservrers">regionservers</link></filename> file
+ <xref linkend="regionservers" /> file
lists all hosts that you would have running
<application>HRegionServer</application>s, one host per line (This
file in HBase is like the Hadoop <filename>slaves</filename>
@@ -634,9 +623,7 @@ stopping hbase...............</programli
by setting the
<varname>hbase.zookeeper.property.clientPort</varname> property.
For all default values used by HBase, including ZooKeeper
- configuration, see the section <link
- linkend="hbase_default_configurations">Default HBase
- Configurations</link>. Look for the
+ configuration, see <xref linkend="hbase_default_configurations"
/>. Look for the
<varname>hbase.zookeeper.property</varname> prefix <footnote>
<para>For the full list of ZooKeeper configurations, see
ZooKeeper's <filename>zoo.cfg</filename>. HBase does not ship
@@ -835,8 +822,7 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {star
- <para>Once HBase has started, see the <link
- linkend="shell_exercises">Shell Exercises</link> section for how to
+ <para>Once HBase has started, see <xref linkend="shell_exercises" />
for how to
create tables, add data, scan your insertions, and finally disable and
drop your tables.</para>
Modified: hbase/trunk/src/docbkx/performance.xml
URL:
http://svn.apache.org/viewvc/hbase/trunk/src/docbkx/performance.xml?rev=1095988&r1=1095987&r2=1095988&view=diff
==============================================================================
--- hbase/trunk/src/docbkx/performance.xml (original)
+++ hbase/trunk/src/docbkx/performance.xml Fri Apr 22 19:07:56 2011
@@ -15,6 +15,20 @@
factors involved; RAM, compression, JVM settings, etc. Afterward, come back
here for more pointers.</para>
+ <note xml:id="rpc.logging"><title>Enabling RPC-level logging</title>
+ <para>Enabling RPC-level logging on a regionserver can often give
+ insight into timings at the server. Once enabled, the amount of log
+ spewed is voluminous. It is not recommended that you leave this
+ logging on for more than short bursts of time. To enable RPC-level
+ logging, browse to the regionserver UI and click on
+ <emphasis>Log Level</emphasis>. Set the log level to DEBUG for the
package
+ <classname>org.apache.hadoop.ipc</classname> (That's right, for
+ hadoop.ipc, NOT hbase.ipc). Then tail the regionserver's log.
+ Analyze.</para>
+ <para>To disable, set the log level back to WARN.
+ </para>
+ </note>
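The Log Level UI described above can also be driven from the command line with Hadoop's `daemonlog` tool; the hostname here is hypothetical, and 60030 is assumed to be the regionserver's default info port:

```shell
# Set the hadoop.ipc logger to DEBUG via the regionserver's info port,
# tail the log for a short burst, then restore the level.
hadoop daemonlog -setlevel rs1.example.com:60030 org.apache.hadoop.ipc DEBUG
# ... tail the regionserver log, analyze ...
hadoop daemonlog -setlevel rs1.example.com:60030 org.apache.hadoop.ipc WARN
```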
+
<section xml:id="jvm">
<title>Java</title>
@@ -46,16 +60,14 @@
<section xml:id="perf.configurations">
<title>Configurations</title>
- <para>See the section on <link
- linkend="recommended_configurations">recommended
- configurations</link>.</para>
+ <para>See <xref linkend="recommended_configurations" />.</para>
<section xml:id="perf.number.of.regions">
<title>Number of Regions</title>
- <para>The number of regions for an HBase table is driven by the <link
- linkend="bigger.regions">filesize</link>. Also, see the architecture
- section on <link linkend="arch.regions.size">region size</link></para>
+ <para>The number of regions for an HBase table is driven by the <xref
+ linkend="bigger.regions" />. Also, see the architecture
+ section on <xref linkend="arch.regions.size" />.</para>
</section>
<section xml:id="perf.compactions.and.splits">
@@ -68,18 +80,28 @@
<section xml:id="perf.compression">
<title>Compression</title>
-
- <para>Production systems should use compression such as <link
- linkend="lzo">LZO</link> compression with their column family
+ <para>Production systems should use compression such as <xref
linkend="lzo" /> with their column family
definitions.</para>
</section>
+
+ <section xml:id="perf.handlers">
+ <title><varname>hbase.regionserver.handler.count</varname></title>
+ <para>This setting in essence sets how many requests are
+ concurrently being processed inside the regionserver at any
+ one time. If set too high, then throughput may suffer as
+ the concurrent requests contend; if set too low, requests will
+ be stuck waiting to get into the machine. You can get a
+ sense of whether you have too few or too many handlers by
+ enabling <xref linkend="rpc.logging" />
+ on an individual regionserver and then tailing its logs.</para>
+ </section>
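As a sketch, the handler count is set in <filename>conf/hbase-site.xml</filename>; the value below is a hypothetical starting point, not a recommendation:

```xml
<!-- hbase-site.xml: hypothetical starting value; tune against observed contention -->
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>25</value>
</property>
```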
+
</section>
<section xml:id="perf.number.of.cfs">
<title>Number of Column Families</title>
- <para>See the section on <link linkend="number.of.cfs">Number of Column
- Families</link>.</para>
+ <para>See <xref linkend="number.of.cfs" />.</para>
</section>
<section xml:id="perf.one.region">
Modified: hbase/trunk/src/docbkx/preface.xml
URL:
http://svn.apache.org/viewvc/hbase/trunk/src/docbkx/preface.xml?rev=1095988&r1=1095987&r2=1095988&view=diff
==============================================================================
--- hbase/trunk/src/docbkx/preface.xml (original)
+++ hbase/trunk/src/docbkx/preface.xml Fri Apr 22 19:07:56 2011
@@ -23,4 +23,25 @@
hope to fill in the holes with time. Feel free to add to this book by adding
a patch to an issue up in the HBase <link
xlink:href="https://issues.apache.org/jira/browse/HBASE">JIRA</link>.</para>
+
+ <note xml:id="headsup">
+ <title>Heads-up</title>
+ <para>
+ If this is your first foray into the wonderful world of
+ Distributed Computing, then you are in for
+ some interesting times. First off, distributed systems are
+ hard; making a distributed system hum requires a disparate
+ skillset that spans systems (hardware and software) and
+ networking. Your cluster's operation can hiccup because of any
+ of a myriad of reasons, from bugs in HBase itself through
+ misconfigurations -- misconfiguration of HBase but also operating
+ system misconfigurations -- through to hardware problems, whether
+ it be a bug in your network card drivers or an underprovisioned
+ RAM bus (to mention two recent examples of hardware issues that
+ manifested as "HBase is slow").
+ You will also need to do a recalibration if up to this point your
+ computing has been bound to a single box. Here is one good
+ starting point:
+ <link
xlink:href="http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing">Fallacies
of Distributed Computing</link>.
+ </para>
+ </note>
</preface>
Modified: hbase/trunk/src/docbkx/shell.xml
URL:
http://svn.apache.org/viewvc/hbase/trunk/src/docbkx/shell.xml?rev=1095988&r1=1095987&r2=1095988&view=diff
==============================================================================
--- hbase/trunk/src/docbkx/shell.xml (original)
+++ hbase/trunk/src/docbkx/shell.xml Fri Apr 22 19:07:56 2011
@@ -24,7 +24,7 @@
arguments are entered into the
HBase shell; in particular note how table names, rows, and
columns, etc., must be quoted.</para>
- <para>See <link linkend="shell_exercises">Shell Exercises</link>
+ <para>See <xref linkend="shell_exercises" />
for example basic shell operation.</para>
<section xml:id="scripting"><title>Scripting</title>
Modified: hbase/trunk/src/docbkx/upgrading.xml
URL:
http://svn.apache.org/viewvc/hbase/trunk/src/docbkx/upgrading.xml?rev=1095988&r1=1095987&r2=1095988&view=diff
==============================================================================
--- hbase/trunk/src/docbkx/upgrading.xml (original)
+++ hbase/trunk/src/docbkx/upgrading.xml Fri Apr 22 19:07:56 2011
@@ -9,8 +9,7 @@
xmlns:db="http://docbook.org/ns/docbook">
<title>Upgrading</title>
<para>
- Review the <link linkend="requirements">requirements</link>
- section above, in particular the section on Hadoop version.
+ Review <xref linkend="requirements" />, in particular the section on
Hadoop version.
</para>
<section xml:id="upgrade0.90">
<title>Upgrading to HBase 0.90.x from 0.20.x or 0.89.x</title>
@@ -30,7 +29,7 @@
HBase jar and read from there. If you would like to review
the content of this file, see it in the src tree at
<filename>src/main/resources/hbase-default.xml</filename> or
- see <link linkend="hbase_default_configurations">Default HBase
Configurations</link>.
+ see <xref linkend="hbase_default_configurations" />.
</para>
<para>
Finally, if upgrading from 0.20.x, check your