Author: khorgath
Date: Tue Aug 28 23:58:57 2012
New Revision: 1378391

URL: http://svn.apache.org/viewvc?rev=1378391&view=rev
Log:
HCATALOG-481 Fix CLI usage syntax in doc & revise HCat docset (lefty via 
khorgath)

Modified:
    incubator/hcatalog/trunk/CHANGES.txt
    incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/cli.xml
    
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/dynpartition.xml
    incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/index.xml
    
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/install.xml
    
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/notification.xml

Modified: incubator/hcatalog/trunk/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/incubator/hcatalog/trunk/CHANGES.txt?rev=1378391&r1=1378390&r2=1378391&view=diff
==============================================================================
--- incubator/hcatalog/trunk/CHANGES.txt (original)
+++ incubator/hcatalog/trunk/CHANGES.txt Tue Aug 28 23:58:57 2012
@@ -38,6 +38,8 @@ Trunk (unreleased changes)
   HCAT-427 Document storage-based authorization (lefty via gates)
 
   IMPROVEMENTS
+  HCAT-481 Fix CLI usage syntax in doc & revise HCat docset (lefty via 
khorgath)
+
   HCAT-444 Document reader & writer interfaces (lefty via gates)
 
   HCAT-425 Pig cannot read/write SMALLINT/TINYINT columns (traviscrawford)

Modified: 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/cli.xml
URL: 
http://svn.apache.org/viewvc/incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/cli.xml?rev=1378391&r1=1378390&r2=1378391&view=diff
==============================================================================
--- incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/cli.xml 
(original)
+++ incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/cli.xml 
Tue Aug 28 23:58:57 2012
@@ -41,31 +41,62 @@ where <em>hive_home</em> is the director
 <title>HCatalog CLI</title>
 
 <p>The HCatalog CLI supports these command line options:</p>
-<ul>
-<li><strong>-g</strong>: Usage is -g mygroup .... This indicates to HCatalog 
that table that needs to be created must have group "mygroup" </li>
-<li><strong>-p</strong>: Usage is -p rwxr-xr-x .... This indicates to HCatalog 
that table that needs to be created must have permissions "rwxr-xr-x" </li>
-<li><strong>-f</strong>: Usage is -f myscript.hcatalog .... This indicates to 
HCatalog that myscript.hcatalog is a file which contains DDL commands it needs 
to execute. </li>
-<li><strong>-e</strong>: Usage is -e 'create table mytable(a int);' .... This 
indicates to HCatalog to treat the following string as a DDL command and 
execute it. </li>
-<li><strong>-D</strong>: Usage is -Dkey=value .... The key value pair is 
passed to HCatalog as a Java System Property.</li>
-</ul>
-<p></p>        
+
+<table>
+  <tr>
+    <th><p class="center">Option</p></th>
+    <th><p class="center">Usage</p></th>
+    <th><p class="center">Description</p></th>
+  </tr>
+  <tr>
+    <td><p class="cell"><strong>-g</strong></p></td>
+    <td><p class="cell"><code>hcat -g mygroup ...</code></p></td>
+    <td><p class="cell">Tells HCatalog that the table which needs to be 
created must have group "mygroup".</p></td>
+  </tr>
+  <tr>
+    <td><p class="cell"><strong>-p</strong></p></td>
+    <td><p class="cell"><code>hcat -p rwxr-xr-x ...</code></p></td>
+    <td><p class="cell">Tells HCatalog that the table which needs to be 
created must have permissions "rwxr-xr-x".</p></td>
+  </tr>
+  <tr>
+    <td><p class="cell"><strong>-f</strong></p></td>
+    <td><p class="cell"><code>hcat -f myscript.hcatalog ...</code></p></td>
+    <td><p class="cell">Tells HCatalog that myscript.hcatalog is a file 
containing DDL commands to execute.</p></td>
+  </tr>
+  <tr>
+    <td><p class="cell"><strong>-e</strong></p></td>
+    <td><p class="cell"><code>hcat -e 'create table mytable(a int);' 
...</code></p></td>
+    <td><p class="cell">Tells HCatalog to treat the following string as a DDL 
command and execute it.</p></td>
+  </tr>
+  <tr>
+    <td><p class="cell"><strong>-D</strong></p></td>
+    <td><p class="cell"><code>hcat -D</code><em>key</em>=<em>value</em><code> 
...</code></p></td>
+    <td><p class="cell">Passes the key-value pair to HCatalog as a Java System 
Property.</p></td>
+  </tr>
+  <tr>
+    <td></td>
+    <td><p class="cell"><code>hcat</code></p></td>
+    <td><p class="cell">Prints a usage message.</p></td>
+  </tr>
+</table>
+
 <p>Note the following:</p>
 <ul>
 <li>The <strong>-g</strong> and <strong>-p</strong> options are not mandatory. 
 </li>
-<li>Only one of the <strong>-e</strong> or <strong>-f</strong> option can be 
provided, not both. 
+<li>Only one <strong>-e</strong> or <strong>-f</strong> option can be 
provided, not both. 
 </li>
 <li>The order of options is immaterial; you can specify the options in any 
order. 
 </li>
-<li>If no option is provided, then a usage message is printed: 
+</ul>
+<p>If no option is provided, then a usage message is printed:</p>
 <source>
-Usage: hcat  { -e "&lt;query&gt;" | -f "&lt;filepath&gt;" } [-g 
"&lt;group&gt;" ] [-p "&lt;perms&gt;"] [-D "&lt;name&gt;=&lt;value&gt;"]
+Usage:  hcat  { -e "&lt;query&gt;" | -f &lt;filepath&gt; }  [-g &lt;group&gt;] 
[-p &lt;perms&gt;] [-D&lt;name&gt;=&lt;value&gt;]
 </source>
-</li>
-</ul>
 <p></p>
-<p><strong>Assumptions</strong></p>
-<p>When using the HCatalog CLI, you cannot specify a permission string without 
read permissions for owner, such as -wxrwxr-x. If such a permission setting is 
desired, you can use the octal version instead, which in this case would be 
375. Also, any other kind of permission string where the owner has read 
permissions (for example r-x------ or r--r--r--) will work fine.</p>
+
+<p><strong>Owner Permissions</strong></p>
+<p>When using the HCatalog CLI, you cannot specify a permission string without 
read permissions for owner, such as -wxrwxr-x, because the string begins with 
"-". If such a permission setting is desired, you can use the octal version 
instead, which in this case would be 375. Also, any other kind of permission 
string where the owner has read permissions (for example r-x------ or 
r--r--r--) will work fine.</p>
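
The octal workaround in the paragraph above can be verified mechanically. The following is an illustrative sketch only (this helper is not part of HCatalog): it converts a 9-character symbolic permission string to its octal form, showing why "-wxrwxr-x" corresponds to 375.

```java
// Hypothetical helper, not part of HCatalog: converts a 9-character
// symbolic permission string such as "rwxr-xr-x" to its octal form.
public class PermOctal {
    static String toOctal(String perms) {
        if (perms.length() != 9) {
            throw new IllegalArgumentException("expected 9 characters, got: " + perms);
        }
        StringBuilder octal = new StringBuilder();
        for (int i = 0; i < 9; i += 3) {
            int digit = 0;
            if (perms.charAt(i)     == 'r') digit += 4;  // read
            if (perms.charAt(i + 1) == 'w') digit += 2;  // write
            if (perms.charAt(i + 2) == 'x') digit += 1;  // execute
            octal.append(digit);
        }
        return octal.toString();
    }

    public static void main(String[] args) {
        // "-wxrwxr-x" begins with "-", so the CLI cannot accept the
        // symbolic form; the octal equivalent 375 can be used instead.
        System.out.println(toOctal("-wxrwxr-x"));  // 375
        System.out.println(toOctal("rwxr-xr-x"));  // 755
    }
}
```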
        
 </section>
 
@@ -113,7 +144,7 @@ Usage: hcat  { -e "&lt;query&gt;" | -f "
 <!-- ==================================================================== -->
 <section>
        <title>Create/Drop/Alter View</title>
-<p>Note: Pig and MapReduce coannot read from or write to views.</p>
+<p>Note: Pig and MapReduce cannot read from or write to views.</p>
 
 <p><strong>CREATE VIEW</strong></p>    
 <p>Supported. Behavior same as Hive.</p>               
@@ -162,7 +193,7 @@ Usage: hcat  { -e "&lt;query&gt;" | -f "
        
        <!-- 
==================================================================== -->
 <section>
-       <title>"dfs" command and "set" command</title>
+       <title>"dfs" Command and "set" Command</title>
        <p>Supported. Behavior same as Hive.</p>
 </section>
 <section>
@@ -172,15 +203,20 @@ Usage: hcat  { -e "&lt;query&gt;" | -f "
 
 </section>
 
+<section>
+    <title>CLI Errors</title>
 <p><strong>Authentication</strong></p>
 <table>
        <tr>
        <td><p>If a failure results in a message like "2010-11-03 16:17:28,225 
WARN hive.metastore ... - Unable to connect metastore with URI thrift://..." in 
/tmp/&lt;username&gt;/hive.log, then make sure you have run "kinit 
&lt;username&gt;@FOO.COM" to get a Kerberos ticket and to be able to 
authenticate to the HCatalog server. </p></td>
        </tr>
 </table>
-<p>If other errors occur while using the HCatalog CLI, more detailed messages 
are written to /tmp/&lt;username&gt;/hive.log. </p>
 
+<p><strong>Error Log</strong></p>
 
+<p>If other errors occur while using the HCatalog CLI, more detailed messages 
are written to /tmp/&lt;username&gt;/hive.log. </p>
+
+</section>
 
   </body>
 </document>

Modified: 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/dynpartition.xml
URL: 
http://svn.apache.org/viewvc/incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/dynpartition.xml?rev=1378391&r1=1378390&r2=1378391&view=diff
==============================================================================
--- 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/dynpartition.xml
 (original)
+++ 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/dynpartition.xml
 Tue Aug 28 23:58:57 2012
@@ -49,7 +49,8 @@ store Z into 'processed' using HCatStore
 </source> 
 
 <p>The way dynamic partitioning works is that HCatalog locates partition 
columns in the data passed to it and uses the data in these columns to split 
the rows across multiple partitions. (The data passed to HCatalog 
<strong>must</strong> have a schema that matches the schema of the destination 
table and hence should always contain partition columns.)  It is important to 
note that partition columns can’t contain null values or the whole process 
will fail.</p>
-<p>It is also important to note that all partitions created during a single 
run are part of a transaction and if any part of the process fails none of the 
partitions will be added to the table.</p>
+<p>It is also important to note that all partitions created during a single 
run are part of one transaction;
+therefore if any part of the process fails, none of the partitions will be 
added to the table.</p>
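
The splitting behavior described above can be sketched in plain Java. This is an illustration of the idea only, not HCatalog code: rows are grouped by the value in the partition column, each group destined for its own partition, and a null partition value aborts the whole run.

```java
import java.util.*;

// Illustrative sketch only (not HCatalog internals): dynamic partitioning
// groups rows by the value found in the partition column, so each distinct
// value maps to one partition's worth of rows.
public class DynPartSketch {
    static Map<String, List<String[]>> splitByPartition(List<String[]> rows, int partCol) {
        Map<String, List<String[]>> partitions = new HashMap<>();
        for (String[] row : rows) {
            String key = row[partCol];
            if (key == null) {
                // Mirrors the documented rule: partition columns
                // cannot contain null values or the process fails.
                throw new IllegalArgumentException("partition column may not be null");
            }
            partitions.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
        }
        return partitions;
    }
}
```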
 </section>
   
 <!-- ==================================================================== -->  
@@ -101,7 +102,7 @@ store A2 into 'mytable' using HCatStorer
 <title>Usage from MapReduce</title>
 <p>As with Pig, the only change in dynamic partitioning that a MapReduce 
programmer sees is that they don't have to specify all the partition key/value 
combinations.</p>   
 
-<p>A current code example for writing out a specific partition for (a=1,b=1) 
would go something like this: </p>  
+<p>A current code example for writing out a specific partition for (a=1, b=1) 
would go something like this: </p> 
    
 <source>
 Map&lt;String, String&gt; partitionValues = new HashMap&lt;String, 
String&gt;();
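
The hunk above shows only the first line of the MapReduce example. A minimal sketch of how the (a=1, b=1) partition specification continues, assuming the usual map-of-strings shape (the call that ultimately consumes this map is elided here because the hunk is truncated):

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionSpec {
    // Builds the key/value map for the specific partition (a=1, b=1).
    // In real HCatalog code this map would be handed to the output
    // configuration; that call is elided since the hunk above is truncated.
    static Map<String, String> forA1B1() {
        Map<String, String> partitionValues = new HashMap<String, String>();
        partitionValues.put("a", "1");
        partitionValues.put("b", "1");
        return partitionValues;
    }
}
```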

Modified: 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/index.xml
URL: 
http://svn.apache.org/viewvc/incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/index.xml?rev=1378391&r1=1378390&r2=1378391&view=diff
==============================================================================
--- incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/index.xml 
(original)
+++ incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/index.xml 
Tue Aug 28 23:58:57 2012
@@ -26,7 +26,7 @@
       <title>HCatalog </title>
       
        <p>HCatalog is a table and storage management layer for Hadoop that 
enables users with different data processing tools – Pig, MapReduce, and Hive 
– to more easily read and write data on the grid. HCatalog’s table 
abstraction presents users with a relational view of data in the Hadoop 
distributed file system (HDFS) and ensures that users need not worry about 
where or in what format their data is stored – RCFile format, text files, or 
SequenceFiles. </p>
-<p>HCatalog supports reading and writing files in any format for which a SerDe 
can be written. By default, HCatalog supports RCFile, CSV, JSON, and 
SequenceFile formats. To use a custom format, you must provide the InputFormat, 
OutputFormat, and SerDe.</p>
+<p>HCatalog supports reading and writing files in any format for which a SerDe 
(serializer-deserializer) can be written. By default, HCatalog supports RCFile, 
CSV, JSON, and SequenceFile formats. To use a custom format, you must provide 
the InputFormat, OutputFormat, and SerDe.</p>
 <p></p>
 <figure src="images/hcat-product.jpg" align="left" alt="HCatalog Product"/>
 
@@ -42,20 +42,25 @@
 
 <section>
 <title>Interfaces</title>   
-<p>The HCatalog interface for Pig consists of HCatLoader and HCatStorer, which 
implement the Pig load and store interfaces respectively. HCatLoader accepts a 
table to read data from; you can indicate which partitions to scan by 
immediately following the load statement with a partition filter statement. 
HCatStorer accepts a table to write to and optionally a specification of 
partition keys to create a new partition. You can write to a single partition 
by specifying the partition key(s) and value(s) in the STORE clause; and you 
can write to multiple partitions if the partition key(s) are columns in the 
data being stored. HCatLoader is implemented on top of HCatInputFormat and 
HCatStorer is implemented on top of HCatOutputFormat (see <a 
href="loadstore.html">HCatalog Load and Store</a>).</p>
+<p>The HCatalog interface for Pig consists of HCatLoader and HCatStorer, which 
implement the Pig load and store interfaces respectively. HCatLoader accepts a 
table to read data from; you can indicate which partitions to scan by 
immediately following the load statement with a partition filter statement. 
HCatStorer accepts a table to write to and optionally a specification of 
partition keys to create a new partition. You can write to a single partition 
by specifying the partition key(s) and value(s) in the STORE clause; and you 
can write to multiple partitions if the partition key(s) are columns in the 
data being stored. HCatLoader is implemented on top of HCatInputFormat and 
HCatStorer is implemented on top of HCatOutputFormat.
+(See <a href="loadstore.html">Load and Store Interfaces</a>.)</p>
 
-<p>The HCatalog interface for MapReduce – HCatInputFormat and 
HCatOutputFormat – is an implementation of Hadoop InputFormat and 
OutputFormat. HCatInputFormat accepts a table to read data from and optionally 
a selection predicate to indicate which partitions to scan. HCatOutputFormat 
accepts a table to write to and optionally a specification of partition keys to 
create a new partition. You can write to a single partition by specifying the 
partition key(s) and value(s) in the setOutput method; and you can write to 
multiple partitions if the partition key(s) are columns in the data being 
stored. (See <a href="inputoutput.html">HCatalog Input and Output</a>.)</p>
+<p>The HCatalog interface for MapReduce &#8212; HCatInputFormat and 
HCatOutputFormat &#8212; is an implementation of Hadoop InputFormat and 
OutputFormat. HCatInputFormat accepts a table to read data from and optionally 
a selection predicate to indicate which partitions to scan. HCatOutputFormat 
accepts a table to write to and optionally a specification of partition keys to 
create a new partition. You can write to a single partition by specifying the 
partition key(s) and value(s) in the setOutput method; and you can write to 
multiple partitions if the partition key(s) are columns in the data being 
stored.
+(See <a href="inputoutput.html">Input and Output Interfaces</a>.)</p>
 
-<p>Note: There is no Hive-specific interface. Since HCatalog uses Hive's 
metastore, Hive can read data in HCatalog directly.</p>
+<p><strong>Note:</strong> There is no Hive-specific interface. Since HCatalog 
uses Hive's metastore, Hive can read data in HCatalog directly.</p>
 
-<p>Data is defined using HCatalog's command line interface (CLI). The HCatalog 
CLI supports all Hive DDL that does not require MapReduce to execute, allowing 
users to create, alter, drop tables, etc. (Unsupported Hive DDL includes 
import/export, CREATE TABLE AS SELECT, ALTER TABLE options REBUILD and 
CONCATENATE, and ANALYZE TABLE ... COMPUTE STATISTICS.) The CLI also supports 
the data exploration part of the Hive command line, such as SHOW TABLES, 
DESCRIBE TABLE, etc. (see the <a href="cli.html">HCatalog Command Line 
Interface</a>).</p> 
+<p>Data is defined using HCatalog's command line interface (CLI). The HCatalog 
CLI supports all Hive DDL that does not require MapReduce to execute, allowing 
users to create, alter, drop tables, etc. The CLI also supports the data 
exploration part of the Hive command line, such as SHOW TABLES, DESCRIBE TABLE, 
and so on.
+Unsupported Hive DDL includes import/export, the REBUILD and CONCATENATE 
options of ALTER TABLE, CREATE TABLE AS SELECT, and ANALYZE TABLE ... COMPUTE 
STATISTICS.
+(See <a href="cli.html">Command Line Interface</a>.)</p>
 </section>
 
 <section>
 <title>Data Model</title>
 <p>HCatalog presents a relational view of data. Data is stored in tables and 
these tables can be placed in databases. Tables can also be hash partitioned on 
one or more keys; that is, for a given value of a key (or set of keys) there 
will be one partition that contains all rows with that value (or set of 
values). For example, if a table is partitioned on date and there are three 
days of data in the table, there will be three partitions in the table. New 
partitions can be added to a table, and partitions can be dropped from a table. 
Partitioned tables have no partitions at create time. Unpartitioned tables 
effectively have one default partition that must be created at table creation 
time. There is no guaranteed read consistency when a partition is dropped.</p>
 
-<p>Partitions contain records. Once a partition is created records cannot be 
added to it, removed from it, or updated in it. Partitions are 
multi-dimensional and not hierarchical. Records are divided into columns. 
Columns have a name and a datatype. HCatalog supports the same datatypes as 
Hive (see <a href="loadstore.html">HCatalog Load and Store</a>). </p>
+<p>Partitions contain records. Once a partition is created records cannot be 
added to it, removed from it, or updated in it. Partitions are 
multi-dimensional and not hierarchical. Records are divided into columns. 
Columns have a name and a datatype. HCatalog supports the same datatypes as 
Hive.
+See <a href="loadstore.html">Load and Store Interfaces</a> for more 
information about datatypes.</p>
 </section>
      </section>
      

Modified: 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/install.xml
URL: 
http://svn.apache.org/viewvc/incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/install.xml?rev=1378391&r1=1378390&r2=1378391&view=diff
==============================================================================
--- 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/install.xml 
(original)
+++ 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/install.xml 
Tue Aug 28 23:58:57 2012
@@ -19,7 +19,7 @@
 
 <document>
   <header>
-    <title>Source Installation</title>
+    <title>Installation from Tarball</title>
   </header>
   <body>
 
@@ -28,11 +28,11 @@
 
     <p><strong>Prerequisites</strong></p>
     <ul>
-        <li>Machine to build the installation tar on</li>
-        <li>Machine on which the server can be installed - this should have
+        <li>machine to build the installation tar on</li>
+        <li>machine on which the server can be installed &#8212; this should 
have
         access to the Hadoop cluster in question, and be accessible from
         the machines you launch jobs from</li>
-        <li>an RDBMS - we recommend MySQL and provide instructions for it</li>
+        <li>an RDBMS &#8212; we recommend MySQL and provide instructions for 
it</li>
         <li>Hadoop cluster</li>
         <li>Unix user that the server will run as, and, if you are running your
           cluster in secure mode, an associated Kerberos service principal and 
keytabs.</li>
@@ -227,7 +227,7 @@
         </tr>
         <tr>
             <td>hive.metastore.sasl.enabled</td>
-            <td>Set to true if you are using kerberos security with your Hadoop
+            <td>Set to true if you are using Kerberos security with your Hadoop
             cluster, false otherwise.</td>
         </tr>
         <tr>
@@ -245,7 +245,7 @@
         </tr>
     </table>
 
-    <p>You can now procede to starting the server.</p>
+    <p>You can now proceed to starting the server.</p>
   </section>
 
   <section>
@@ -254,7 +254,8 @@
     <p>To start your server, HCatalog needs to know where Hive is installed.
     This is communicated by setting the environment variable 
<code>HIVE_HOME</code>
     to the location you installed Hive.  Start the HCatalog server by 
switching directories to
-    <em>root</em> and invoking <code>HIVE_HOME=</code><em>hive_home</em><code> 
sbin/hcat_server.sh start</code></p>
+    <em>root</em> and invoking
+    "<code>HIVE_HOME=</code><em>hive_home</em><code> sbin/hcat_server.sh 
start</code>".</p>
 
   </section>
 
@@ -273,7 +274,8 @@
     <title>Stopping the Server</title>
 
     <p>To stop the HCatalog server, change directories to the <em>root</em>
-    directory and invoking <code>HIVE_HOME=</code><em>hive_home</em><code> 
sbin/hcat_server.sh stop</code></p>
+    directory and invoke 
+    "<code>HIVE_HOME=</code><em>hive_home</em><code> sbin/hcat_server.sh 
stop</code>".</p>
 
   </section>
 

Modified: 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/notification.xml
URL: 
http://svn.apache.org/viewvc/incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/notification.xml?rev=1378391&r1=1378390&r2=1378391&view=diff
==============================================================================
--- 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/notification.xml
 (original)
+++ 
incubator/hcatalog/trunk/src/docs/src/documentation/content/xdocs/notification.xml
 Tue Aug 28 23:58:57 2012
@@ -23,7 +23,7 @@
   </header>
   <body>
   
- <p>Since HCatalog 0.2 provides notifications for certain events happening in 
the system. This way applications such as Oozie can wait for those events and 
schedule the work that depends on them. The current version of HCatalog 
supports two kinds of events: </p>
+ <p>Since version 0.2, HCatalog provides notifications for certain events 
happening in the system. This way applications such as Oozie can wait for those 
events and schedule the work that depends on them. The current version of 
HCatalog supports two kinds of events: </p>
 <ul>
 <li>Notification when a new partition is added</li>
 <li>Notification when a set of partitions is added</li>
@@ -31,58 +31,64 @@
 
 <p>No additional work is required to send a notification when a new partition 
is added: the existing addPartition call will send the notification message.</p>
 
+<!-- ==================================================================== -->
 <section>
 <title>Notification for a New Partition</title>
 
 <p>To receive notification that a new partition has been added, you need to 
follow these three steps.</p>
  
- <p>1. To start receiving messages, create a connection to a message bus as 
shown here:</p>
- <source>
+<ol>
+  <li>To start receiving messages, create a connection to a message bus as 
shown here:
+<source>
 ConnectionFactory connFac = new ActiveMQConnectionFactory(amqurl);
 Connection conn = connFac.createConnection();
 conn.start();
- </source>
- 
-  <p>2. Subscribe to a topic you are interested in. When subscribing on a 
message bus, you need to subscribe to a particular topic to receive the 
messages that are being delivered on that topic. </p>
+</source>
+  </li>
+
+  <li>Subscribe to a topic you are interested in. When subscribing on a 
message bus, you need to subscribe to a particular topic to receive the 
messages that are being delivered on that topic.
   <ul>
-  <li>  
-  <p>The topic name corresponding to a particular table is stored in table 
properties and can be retrieved using the following piece of code: </p>
- <source>
+    <li>The topic name corresponding to a particular table is stored in table 
properties and can be retrieved using the following piece of code:
+<source>
 HiveMetaStoreClient msc = new HiveMetaStoreClient(hiveConf);
-String topicName = msc.getTable("mydb", 
"myTbl").getParameters().get(HCatConstants.HCAT_MSGBUS_TOPIC_NAME);
- </source>
- </li>
-  
-  <li>  
-  <p>Use the topic name to subscribe to a topic as follows: </p>
- <source>
+String topicName = msc.getTable("mydb",
+                   
"myTbl").getParameters().get(HCatConstants.HCAT_MSGBUS_TOPIC_NAME);
+</source>
+    </li>
+
+    <li>Use the topic name to subscribe to a topic as follows:
+<source>
 Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
 Destination hcatTopic = session.createTopic(topicName);
 MessageConsumer consumer = session.createConsumer(hcatTopic);
 consumer.setMessageListener(this);
- </source>
- </li>
+</source>
+    </li>
   </ul>
+  </li>
 
-  <p>3. To start receiving messages you need to implement the JMS interface 
<code>MessageListener</code>, which, in turn, will make you implement the 
method <code>onMessage(Message msg)</code>. This method will be called whenever 
a new message arrives on the message bus. The message contains a partition 
object representing the corresponding partition, which can be retrieved as 
shown here: </p>
- <source>
+  <li>To start receiving messages you need to implement the JMS interface 
<code>MessageListener</code>, which, in turn, will make you implement the 
method <code>onMessage(Message msg)</code>. This method will be called whenever 
a new message arrives on the message bus. The message contains a partition 
object representing the corresponding partition, which can be retrieved as 
shown here:
+<source>
 @Override
-   public void onMessage(Message msg) {
-      // We are interested in only add_partition events on this table.
-      // So, check message type first.
-      
if(msg.getStringProperty(HCatConstants.HCAT_EVENT).equals(HCatConstants.HCAT_ADD_PARTITION_EVENT)){
-          Object obj = (((ObjectMessage)msg).getObject());
-      }
-   }
- </source>
- 
-  <p>You need to have a JMS jar in your classpath to make this work. 
Additionally, you need to have a JMS provider’s jar in your classpath. 
HCatalog is tested with ActiveMQ as a JMS provider, although any JMS provider 
can be used. ActiveMQ can be obtained from: 
http://activemq.apache.org/activemq-550-release.html .</p>
+public void onMessage(Message msg) {
+  // We are interested in only add_partition events on this table.
+  // So, check message type first.
+  
if(msg.getStringProperty(HCatConstants.HCAT_EVENT).equals(HCatConstants.HCAT_ADD_PARTITION_EVENT)){
+       Object obj = (((ObjectMessage)msg).getObject());
+  }
+}
+</source>
+  </li>
+</ol>
+
+  <p>You need to have a JMS jar in your classpath to make this work. 
Additionally, you need to have a JMS provider’s jar in your classpath. 
HCatalog is tested with ActiveMQ as a JMS provider, although any JMS provider 
can be used. ActiveMQ can be obtained from: <a 
href="http://activemq.apache.org/activemq-550-release.html">http://activemq.apache.org/activemq-550-release.html</a>.</p>

 </section>
 
+<!-- ==================================================================== -->
 <section>
 <title>Notification for a Set of Partitions</title>
 
-<p>Sometimes a user wants to wait until a collection of partitions is 
finished. For example, you may want to start processing after all partitions 
for a day are done. However, HCatalog has no notion of collections or 
hierarchies of partitions. To support this, HCatalog allows data writers to 
signal when they are finished writing a collection of partitions. Data readers 
may wait for this signal before beginning to read.</p>
+<p>Sometimes you need to wait until a collection of partitions is finished 
before proceeding with another operation. For example, you may want to start 
processing after all partitions for a day are done. However, HCatalog has no 
notion of collections or hierarchies of partitions. To support this, HCatalog 
allows data writers to signal when they are finished writing a collection of 
partitions. Data readers may wait for this signal before beginning to read.</p>
 
 <p>The example code below illustrates how to send a notification when a set of 
partitions has been added.</p>
 
@@ -154,17 +160,19 @@ public void onMessage(Message msg) {
   System.out.println("Message: "+msg);
 </source>
 
-
 </section>
 
+<!-- ==================================================================== -->
 <section>
        <title>Server Configuration</title>
        <p>To enable notification, you need to configure the server (see 
below). </p>
        <p>To disable notification, you need to leave 
<code>hive.metastore.event.listeners</code> blank or remove it from 
<code>hive-site.xml.</code></p>
-       
-       <p><strong>Enable JMS Notifications</strong></p>
-       <p>You need to make (add/modify) the following changes to the 
hive-site.xml file of your HCatalog server to turn on notifications.</p>
-       
+
+  <section>
+      <title>Enable JMS Notifications</title>
+
+<p>You need to make (add/modify) the following changes to the hive-site.xml 
file of your HCatalog server to turn on notifications.</p>
+
 <source>
 &lt;property&gt;
 &lt;name&gt;hive.metastore.event.expiry.duration&lt;/name&gt;
@@ -175,7 +183,7 @@ public void onMessage(Message msg) {
 &lt;property&gt;
 &lt;name&gt;hive.metastore.event.clean.freq&lt;/name&gt;
 &lt;value&gt;360L&lt;/value&gt;
-&lt;description&gt;Frequency at which timer task runs to purge expired events 
in metastore(in seconds).&lt;/description&gt;
+&lt;description&gt;Frequency at which timer task runs to purge expired events 
in metastore (in seconds).&lt;/description&gt;
 &lt;/property&gt;
 
 &lt;property&gt;
@@ -198,34 +206,35 @@ public void onMessage(Message msg) {
 </source>
 
 <p>For the server to start with support for notifications, the following must 
be in the classpath:</p>
-<ul>
-       <li>(a) activemq jar </li>
-    <li>(b) jndi.properties file with properties suitably configured for 
notifications</li>
-</ul>
-<p></p>
-<p>Then, follow these steps:</p>
+<p>&nbsp;&nbsp; (a) activemq jar </p>
+<p>&nbsp;&nbsp; (b) jndi.properties file with properties suitably configured 
for notifications</p>
+
+<p>Then, follow these guidelines to set up your environment:</p>
 <ol>
-<li>HCatalog server start script is 
$YOUR_HCAT_SERVER/share/hcatalog/scripts/hcat_server_start.sh</li>
+<li>The HCatalog server start script is 
$<em>YOUR_HCAT_SERVER</em>/share/hcatalog/scripts/hcat_server_start.sh.</li>
 <li>This script expects classpath to be set by the AUX_CLASSPATH environment 
variable.</li>
 <li>Therefore set AUX_CLASSPATH to satisfy (a) and (b) above.</li>
-<li>jndi.properties file is located at 
$YOUR_HCAT_SERVER/etc/hcatalog/jndi.properties</li>
-<li>You need to uncomment and set the following properties in this file: -
+<li>The jndi.properties file is located at 
$<em>YOUR_HCAT_SERVER</em>/etc/hcatalog/jndi.properties.</li>
+<li>You need to uncomment and set the following properties in the 
jndi.properties file:
 <ul>
 <li>java.naming.factory.initial = 
org.apache.activemq.jndi.ActiveMQInitialContextFactory</li>
-<li>java.naming.provider.url = tcp://localhost:61616 (this is activemq url in 
your setup)
+<li>java.naming.provider.url = tcp://localhost:61616 &nbsp;&nbsp; (This is the 
ActiveMQ URL in your setup.)
 </li>
 </ul>
 </li>
 </ol>
+</section>
 
-<p><strong>Topic Names</strong></p>
-<p>If tables are created while the server is configured for notifications, a 
default topic name is automatically set as table property. To use notifications 
with tables created previously (previous HCatalog installations or created 
prior to enabling notifications), you will have to manually set a topic name, 
an example will be: </p>
+<section>
+    <title>Topic Names</title>
+<p>If tables are created while the server is configured for notifications, a 
default topic name is automatically set as a table property. To use 
notifications with tables created previously (either in other HCatalog 
installations or prior to enabling notifications in the current installation) 
you will have to manually set a topic name. For example:</p>
 <source>
-$YOUR_HCAT_CLIENT_HOME/bin/hcat -e "ALTER TABLE access SET 
hcat.msgbus.topic.name=$TOPIC_NAME"
+$<em>YOUR_HCAT_CLIENT_HOME</em>/bin/hcat -e "ALTER TABLE access SET 
hcat.msgbus.topic.name=$TOPIC_NAME"
 </source>
        
-<p>You then need to configure your activemq Consumer(s) to listen for messages 
on the topic you gave in $TOPIC_NAME. A good default policy for TOPIC_NAME = 
"$database.$table" (that is a literal dot)</p>     
-       
+<p>You then need to configure your ActiveMQ Consumer(s) to listen for messages 
on the topic you gave in $TOPIC_NAME. A good default policy is TOPIC_NAME = 
"$database.$table" (that is a literal dot).</p>
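
The suggested default policy above is just string concatenation with a literal dot. A one-line sketch, for illustration only (this helper is hypothetical, not an HCatalog API):

```java
public class TopicName {
    // Sketch of the suggested default policy: the topic name is
    // "<database>.<table>", joined with a literal dot.
    static String defaultTopic(String database, String table) {
        return database + "." + table;
    }
}
```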
+
+</section>
 </section>
     
   </body>

