Author: lewismc
Date: Mon Sep 29 00:59:33 2014
New Revision: 1628110

URL: http://svn.apache.org/r1628110
Log:
Update GoraCI documentation

Modified:
    gora/site/trunk/content/current/index.md

Modified: gora/site/trunk/content/current/index.md
URL: 
http://svn.apache.org/viewvc/gora/site/trunk/content/current/index.md?rev=1628110&r1=1628109&r2=1628110&view=diff
==============================================================================
--- gora/site/trunk/content/current/index.md (original)
+++ gora/site/trunk/content/current/index.md Mon Sep 29 00:59:33 2014
@@ -129,6 +129,7 @@ for your datastore.  To run against Accu
 
 <code>
   vim src/main/resources/gora.properties //set Accumulo properties
+
   mvn package -Paccumulo-1.4
 </code>
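
The <code>gora.properties</code> edit above is where the datastore is selected. As an illustrative sketch only (property names follow the Gora AccumuloStore configuration documentation; all values are placeholders for your own cluster):

```properties
# Hypothetical gora.properties fragment for an Accumulo-backed run.
# Values below are placeholders, not defaults shipped with GoraCI.
gora.datastore.default=org.apache.gora.accumulo.store.AccumuloStore
gora.accumulostore.instance=myinstance
gora.accumulostore.zookeepers=localhost
gora.accumulostore.user=root
gora.accumulostore.password=secret
```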
 
@@ -136,6 +137,7 @@ To run against HBase, do the following.
 
 <code>
   vim src/main/resources/gora.properties //set HBase properties
+
   mvn package -Phbase-0.92
 </code>
 
@@ -143,6 +145,7 @@ To run against Cassandra, do the followi
 
 <code>
   vim src/main/resources/gora.properties //set Cassandra properties
+
   mvn package -Pcassandra-1.1.2
 </code>
 
@@ -172,6 +175,7 @@ You can just run <code>goraci.sh Generat
 
 <code>
   $ ./goraci.sh Generator
+
   Usage : Generator <num mappers> <num nodes>
 </code>
 
@@ -189,20 +193,22 @@ The two libraries  jackson-core and jack
 jackson-core-asl-1.4.2.jar and jackson-mapper-asl-1.4.2.jar.  For details see
 [HADOOP-6945](https://issues.apache.org/jira/browse/HADOOP-6945). 
 
-GORACI AND HBASE
------------------
+#### GoraCI and HBase
 
 To improve performance running read jobs such as the Verify step, enable
 scanner caching on the command line.  For example:
 
+<code>
    $ ./goraci.sh Verify -Dhbase.client.scanner.caching=1000 \
          -Dmapred.map.tasks.speculative.execution=false verify_dir 1000
+</code>
 
-Dependent on how you have your hadoop and hbase deployed, you may need to
-change the gorachi.sh script around some.  Here is one suggestion that may help
-in the case where your hadoop and hbase configuration are other than under the
-hadoop and hbase home directories.
+Depending on how your Hadoop and HBase setup is deployed, you may need to
+modify the <code>goraci.sh</code> script.  Here is one suggestion that may help
+when your Hadoop and HBase configuration lives somewhere other than the
+Hadoop and HBase home directories.
 
+<code>
   diff --git a/org.apache.gora.goraci.sh b/org.apache.gora.goraci.sh
   index db1562a..31c3c94 100755
   --- a/org.apache.gora.goraci.sh
@@ -215,35 +221,38 @@ hadoop and hbase home directories.
   -
   -
  +CLASSPATH="${HBASE_CONF_DIR}" hadoop --config "${HADOOP_CONF_DIR}" jar "$GORACI_HOME/lib/org.apache.gora.goraci-0.0.1-SNAPSHOT.jar" $CLASS -files "${HBASE_CONF_DIR}/hbase-site.xml" -libjars "$LIBJARS" "$@"
+</code>
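
The invocation above assumes a <code>LIBJARS</code> variable already holds a comma-separated list of dependency jars. A minimal, hypothetical sketch of assembling that value from the jars under <code>$GORACI_HOME/lib</code> (the demo directory and jar names below are illustrative, not part of GoraCI):

```shell
#!/bin/sh
# Sketch: build a comma-separated -libjars value from $GORACI_HOME/lib/*.jar.
# GORACI_HOME and the demo jars are illustrative; point GORACI_HOME at your
# real checkout in practice.
GORACI_HOME="${GORACI_HOME:-/tmp/goraci-demo}"

# Create a throwaway lib directory so the sketch runs on its own.
mkdir -p "$GORACI_HOME/lib"
touch "$GORACI_HOME/lib/gora-core.jar" "$GORACI_HOME/lib/gora-hbase.jar"

LIBJARS=""
for jar in "$GORACI_HOME"/lib/*.jar; do
  if [ -z "$LIBJARS" ]; then
    LIBJARS="$jar"
  else
    LIBJARS="$LIBJARS,$jar"
  fi
done

# The resulting value is what gets passed to hadoop via -libjars "$LIBJARS".
echo "$LIBJARS"
```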
 
-You will need to define HBASE_CONF_DIR and HADOOP_CONF_DIR before you run your
-org.apache.gora.goraci jobs.  For example:
+You will need to define <code>HBASE_CONF_DIR</code> and <code>HADOOP_CONF_DIR</code> before you run your
+**goraci** jobs.  For example:
 
+<code>
   $ export HADOOP_CONF_DIR=/home/you/hadoop-conf
+
   $ export HBASE_CONF_DIR=/home/you/hbase-conf
-  $ PATH=/home/you/hadoop-1.0.2/bin:$PATH ./org.apache.gora.goraci.sh Generator 1000 1000000
 
-CONCURRENCY
-------------
+  $ PATH=/home/you/hadoop-1.0.2/bin:$PATH ./goraci.sh Generator 1000 1000000
+</code>
+
+#### Concurrency
 
 It's possible to run verification at the same time as generation.  To do this,
 supply the **-c** option to Generator and Verify.  This will cause Generator to
 create a secondary table which holds information about what verification can
-safely verify.  Running Verify with the -c option will make it run slower
+safely verify.  Running Verify with the **-c** option will make it run slower
 because more information must be brought back to the client side for filtering
 purposes.  The Loop program also supports the **-c** option, which will cause it to
 run verification concurrently with generation.
 
-If verification is run at the same time as generation without the -c option,
+If verification is run at the same time as generation without the **-c** option,
 then it will inevitably fail.  This is because verification mappers read
 different parts of the table at different times, giving an inconsistent view
 of the table.  One mapper may read a part of the table before a node is
 written; when the node is later referenced, it will appear to be missing.  The
--c option basically filters out newer information using data written to the
+**-c** option basically filters out newer information using data written to the
 secondary table.
 
-CONCLUSIONS
-------------
+#### Conclusions
 
 This test suite does not do everything that the Accumulo test suite does;
 mainly, it does not collect statistics and generate reports.  The reports
@@ -253,61 +262,72 @@ Below shows running a test of the test. 
 in it, ensure the verification map reduce job notices that the node is missing.
 Not all output is shown, just the important parts.
 
+<code>
   $ ./org.apache.gora.goraci.sh Generator  1 25000000
+
   $ ./org.apache.gora.goraci.sh Print -s 2000000000000000 -l 1
+
   2000001f65dbd238:30350f9ae6f6e8f7:000004265852:ef09f9dd-75b1-4c16-9f14-0fa84f3029b6
+
   $ ./org.apache.gora.goraci.sh Print -s 30350f9ae6f6e8f7 -l 1
+
   30350f9ae6f6e8f7:4867fe03de6ea6c8:000003265852:ef09f9dd-75b1-4c16-9f14-0fa84f3029b6
+
   $ ./org.apache.gora.goraci.sh Delete 30350f9ae6f6e8f7
+
   Delete returned true
+
   $ ./org.apache.gora.goraci.sh Verify gci_verify_1 2
+
   11/12/20 17:12:31 INFO mapred.JobClient:   org.apache.gora.goraci.Verify$Counts
+
   11/12/20 17:12:31 INFO mapred.JobClient:     UNDEFINED=1
+
   11/12/20 17:12:31 INFO mapred.JobClient:     REFERENCED=24999998
+
   11/12/20 17:12:31 INFO mapred.JobClient:     UNREFERENCED=1
-  $ hadoop fs -cat gci_verify_1/part\*
-  30350f9ae6f6e8f7     2000001f65dbd238
+
+  $ hadoop fs -cat gci_verify_1/part\*
+
+  30350f9ae6f6e8f7     2000001f65dbd238
+</code>
 
 The map reduce job found the one undefined node and gave the node that
 referenced it.
 
-Below are some timing statistics for running org.apache.gora.goraci on a 10 node cluster.
+Below are some timing statistics for running GoraCI on a 10 node cluster.
 
+<code>
   Store           | Task                   | Time    | Undef  | Unref | Ref
   ----------------+------------------------+---------+--------+-------+------------
   accumulo-1.4.0  | Generator 10 100000000 | 40m 16s |    N/A |   N/A |        N/A
   accumulo-1.4.0  | Verify /tmp/goraci1 40 |  6m  7s |      0 |     0 | 1000000000
   hbase-0.92.1    | Generator 10 100000000 |  2h 44m |    N/A |   N/A |        N/A
   hbase-0.92.1    | Verify /tmp/goraci2 40 |  6m 34s |      0 |     0 | 1000000000
+</code>
 
-Hbase and Accumulo are configured differently out-of-the-box.  We used the Accumulo
-3G, native configuration examples in the conf/examples directory.
+HBase and Accumulo are configured differently out-of-the-box.  We used the Accumulo
+3G, native configuration examples in the [conf/examples](https://github.com/apache/gora/tree/master/gora-goraci/src/main/resources) directory.
 
 To provide a comparable memory footprint, we increased the HBase JVM to "-Xmx4000m",
 and turned on compression for the ci table:
 
+<code>
 create 'ci', {NAME=>'meta', COMPRESSION=>'GZ'}
+</code>
 
 We also turned down the replication of write-ahead logs to be comparable to Accumulo:
 
-  <property>
-    <name>hbase.regionserver.hlog.replication</name>
-    <value>2</value>
-  </property>
+    <property>
+      <name>hbase.regionserver.hlog.replication</name>
+      <value>2</value>
+    </property>
 
 For the accumulo run, we set the split threshold to 512M:
 
- shell> config -t ci -s table.split.threshold=512M
+    shell> config -t ci -s table.split.threshold=512M
 
 This was done so that Accumulo would end up with 64 tablets, which is the
-number of regions hbase had.   The number of tablets/regions determines how
+number of regions HBase had. The number of tablets/regions determines how
 much parallelism there is in the map phase of the verify step.
 
 Sometimes when this test suite is run against HBase, data is lost.  This issue
-is being tracked under HBASE-5754 [4].
-
-[0] http://accumulo.apache.org
-[1] http://gora.apache.org
-[2] http://gora.apache.org/docs/current/gora-conf.html
-
-[4] https://issues.apache.org/jira/browse/HBASE-5754
+is being tracked under [HBASE-5754](https://issues.apache.org/jira/browse/HBASE-5754).

