Author: billie
Date: Fri May 31 19:25:22 2013
New Revision: 1488359
URL: http://svn.apache.org/r1488359
Log:
ACCUMULO-1465 regenerated website examples
Modified:
accumulo/site/trunk/content/1.5/examples/batch.mdtext
accumulo/site/trunk/content/1.5/examples/classpath.mdtext
accumulo/site/trunk/content/1.5/examples/client.mdtext
accumulo/site/trunk/content/1.5/examples/combiner.mdtext
accumulo/site/trunk/content/1.5/examples/constraints.mdtext
accumulo/site/trunk/content/1.5/examples/export.mdtext
accumulo/site/trunk/content/1.5/examples/helloworld.mdtext
accumulo/site/trunk/content/1.5/examples/index.mdtext
accumulo/site/trunk/content/1.5/examples/tabletofile.mdtext
accumulo/site/trunk/content/1.5/examples/terasort.mdtext
Modified: accumulo/site/trunk/content/1.5/examples/batch.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/batch.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/batch.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/batch.mdtext Fri May 31 19:25:22 2013
@@ -16,7 +16,7 @@ Notice: Licensed to the Apache Softwa
specific language governing permissions and limitations
under the License.
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the simple-examples module:
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the examples-simple module:
* SequentialBatchWriter.java - writes mutations with sequential rows and random values
* RandomBatchWriter.java - used by SequentialBatchWriter to generate random values
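For readers who want a feel for what these two classes produce, here is a minimal standalone sketch of the data shape: sequential, zero-padded row ids paired with fixed-size random values. The row format and value size used here are illustrative assumptions, not the exact output of SequentialBatchWriter or RandomBatchWriter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Standalone sketch of the data shape the batch writer examples produce:
// sequential, zero-padded row ids paired with fixed-size random values.
class SequentialRows {
    // Zero-padded so the rows sort lexically in the same order as numerically.
    static List<String> rows(long start, int count) {
        List<String> out = new ArrayList<>();
        for (long i = start; i < start + count; i++) {
            out.add(String.format("row_%010d", i));
        }
        return out;
    }

    // Seeded so repeated runs generate the same bytes (reproducible tests).
    static byte[] randomValue(int size, long seed) {
        byte[] v = new byte[size];
        new Random(seed).nextBytes(v);
        return v;
    }
}
```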
Modified: accumulo/site/trunk/content/1.5/examples/classpath.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/classpath.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/classpath.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/classpath.mdtext Fri May 31 19:25:22 2013
@@ -1,4 +1,4 @@
-Title: Apache Accumulo Client Examples
+Title: Apache Accumulo Classpath Example
Notice: Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
@@ -25,44 +25,44 @@ table reference that jar.
Execute the following command in the shell.
- $ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
+ $ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib
Execute the following in the Accumulo shell to set up the classpath context
- root@test15> config -s general.vfs.context.classpath.cx1=hdfs://<namenode host>:<namenode port>/user1/lib
+ root@test15> config -s general.vfs.context.classpath.cx1=hdfs://<namenode host>:<namenode port>/user1/lib
Create a table
- root@test15> createtable nofoo
+ root@test15> createtable nofoo
The following command makes this table use the configured classpath context
- root@test15 nofoo> config -t nofoo -s table.classpath.context=cx1
+ root@test15 nofoo> config -t nofoo -s table.classpath.context=cx1
The following command configures an iterator that's in FooFilter.jar
- root@test15 nofoo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
- Filter accepts or rejects each Key/Value pair
- ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
+ root@test15 nofoo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+ Filter accepts or rejects each Key/Value pair
+ ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
The commands below show the filter is working.
- root@test15 nofoo> insert foo1 f1 q1 v1
- root@test15 nofoo> insert noo1 f1 q1 v2
- root@test15 nofoo> scan
- noo1 f1:q1 [] v2
- root@test15 nofoo>
+ root@test15 nofoo> insert foo1 f1 q1 v1
+ root@test15 nofoo> insert noo1 f1 q1 v2
+ root@test15 nofoo> scan
+ noo1 f1:q1 [] v2
+ root@test15 nofoo>
Below, an attempt is made to add the FooFilter to a table that's not configured
to use the classpath context cx1. This fails until the table is configured to
use cx1.
- root@test15 nofoo> createtable nofootwo
- root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
- 2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
- root@test15 nofootwo> config -t nofootwo -s table.classpath.context=cx1
- root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
- Filter accepts or rejects each Key/Value pair
- ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
+ root@test15 nofoo> createtable nofootwo
+ root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+ 2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
+ root@test15 nofootwo> config -t nofootwo -s table.classpath.context=cx1
+ root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
+ Filter accepts or rejects each Key/Value pair
+ ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
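The shell session above suggests FooFilter drops entries whose row contains "foo": the foo1 insert disappears from the scan while noo1 survives. Below is a minimal standalone sketch of that accept logic, under that assumption; the real class extends Accumulo's Filter and implements accept(Key, Value) over full key/value pairs, not a bare row string.

```java
// Standalone sketch of the row-filtering behavior the shell session shows.
// Assumption: entries whose row contains "foo" are rejected; all others pass.
class FooFilterSketch {
    static boolean accept(String row) {
        // Case-insensitive match, mirroring a typical substring filter.
        return !row.toLowerCase().contains("foo");
    }
}
```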
Modified: accumulo/site/trunk/content/1.5/examples/client.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/client.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/client.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/client.mdtext Fri May 31 19:25:22 2013
@@ -18,6 +18,12 @@ Notice: Licensed to the Apache Softwa
This documents how to run the simplest Java examples.
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the examples-simple module:
+
+ * Flush.java - flushes a table
+ * RowOperations.java - reads and writes rows
+ * ReadWriteExample.java - creates a table, writes to it, and reads from it
+
Using the accumulo command, you can run the simple client examples by providing their
class name, and enough arguments to find your accumulo instance. For example,
the Flush class will flush a table:
Modified: accumulo/site/trunk/content/1.5/examples/combiner.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/combiner.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/combiner.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/combiner.mdtext Fri May 31 19:25:22 2013
@@ -16,7 +16,7 @@ Notice: Licensed to the Apache Softwa
specific language governing permissions and limitations
under the License.
-This tutorial uses the following Java class, which can be found in org.apache.accumulo.examples.simple.combiner in the simple-examples module:
+This tutorial uses the following Java class, which can be found in org.apache.accumulo.examples.simple.combiner in the examples-simple module:
* StatsCombiner.java - a combiner that calculates max, min, sum, and count
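As a rough standalone sketch of the reduction a stats combiner performs, collapsing all values under one key into min, max, sum, and count, assuming plain long values; the real StatsCombiner extends Accumulo's Combiner and encodes the four statistics into a single Value rather than returning an array.

```java
// Standalone sketch of the min/max/sum/count reduction StatsCombiner performs.
class StatsReduce {
    // Returns {min, max, sum, count} over the input values.
    static long[] reduce(long[] values) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE, sum = 0;
        for (long v : values) {
            min = Math.min(min, v);
            max = Math.max(max, v);
            sum += v;
        }
        return new long[] {min, max, sum, values.length};
    }
}
```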
Modified: accumulo/site/trunk/content/1.5/examples/constraints.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/constraints.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/constraints.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/constraints.mdtext Fri May 31 19:25:22 2013
@@ -16,7 +16,7 @@ Notice: Licensed to the Apache Softwa
specific language governing permissions and limitations
under the License.
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.constraints in the simple-examples module:
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.constraints in the examples-simple module:
* AlphaNumKeyConstraint.java - a constraint that requires alphanumeric keys
* NumericValueConstraint.java - a constraint that requires numeric string values
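A standalone sketch of the checks these two constraints enforce, assuming ASCII keys and values; the real classes implement Accumulo's Constraint interface and report numeric violation codes per mutation rather than booleans.

```java
// Standalone sketch of the validation logic behind AlphaNumKeyConstraint
// and NumericValueConstraint. ASCII-only is an assumption of this sketch.
class ConstraintSketch {
    // True if the key is non-empty and contains only [a-zA-Z0-9].
    static boolean isAlphaNumeric(String key) {
        if (key.isEmpty()) return false;
        for (int i = 0; i < key.length(); i++) {
            char c = key.charAt(i);
            boolean ok = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9');
            if (!ok) return false;
        }
        return true;
    }

    // True if the value is a non-empty string of decimal digits.
    static boolean isNumericString(String value) {
        if (value.isEmpty()) return false;
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            if (c < '0' || c > '9') return false;
        }
        return true;
    }
}
```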
Modified: accumulo/site/trunk/content/1.5/examples/export.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/export.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/export.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/export.mdtext Fri May 31 19:25:22 2013
@@ -1,3 +1,4 @@
+Title: Apache Accumulo Export/Import Example
Notice: Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
@@ -23,67 +24,67 @@ the table. A table must be offline to e
for the duration of the distcp. An easy way to take a table offline without
interrupting access to it is to clone it and take the clone offline.
- root@test15> createtable table1
- root@test15 table1> insert a cf1 cq1 v1
- root@test15 table1> insert h cf1 cq1 v2
- root@test15 table1> insert z cf1 cq1 v3
- root@test15 table1> insert z cf1 cq2 v4
- root@test15 table1> addsplits -t table1 b r
- root@test15 table1> scan
- a cf1:cq1 [] v1
- h cf1:cq1 [] v2
- z cf1:cq1 [] v3
- z cf1:cq2 [] v4
- root@test15> config -t table1 -s table.split.threshold=100M
- root@test15 table1> clonetable table1 table1_exp
- root@test15 table1> offline table1_exp
- root@test15 table1> exporttable -t table1_exp /tmp/table1_export
- root@test15 table1> quit
+ root@test15> createtable table1
+ root@test15 table1> insert a cf1 cq1 v1
+ root@test15 table1> insert h cf1 cq1 v2
+ root@test15 table1> insert z cf1 cq1 v3
+ root@test15 table1> insert z cf1 cq2 v4
+ root@test15 table1> addsplits -t table1 b r
+ root@test15 table1> scan
+ a cf1:cq1 [] v1
+ h cf1:cq1 [] v2
+ z cf1:cq1 [] v3
+ z cf1:cq2 [] v4
+ root@test15> config -t table1 -s table.split.threshold=100M
+ root@test15 table1> clonetable table1 table1_exp
+ root@test15 table1> offline table1_exp
+ root@test15 table1> exporttable -t table1_exp /tmp/table1_export
+ root@test15 table1> quit
After executing the export command, a few files are created in the hdfs dir.
One of the files is a list of files to distcp as shown below.
- $ hadoop fs -ls /tmp/table1_export
- Found 2 items
- -rw-r--r-- 3 user supergroup 162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
- -rw-r--r-- 3 user supergroup 821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
- $ hadoop fs -cat /tmp/table1_export/distcp.txt
- hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
- hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
+ $ hadoop fs -ls /tmp/table1_export
+ Found 2 items
+ -rw-r--r-- 3 user supergroup 162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
+ -rw-r--r-- 3 user supergroup 821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
+ $ hadoop fs -cat /tmp/table1_export/distcp.txt
+ hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
+ hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
Before the table can be imported, it must be copied using distcp. After the
distcp completes, the cloned table may be deleted.
- $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+ $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
The Accumulo shell session below shows importing the table and inspecting it.
The data, splits, config, and logical time information for the table were preserved.
- root@test15> importtable table1_copy /tmp/table1_export_dest
- root@test15> table table1_copy
- root@test15 table1_copy> scan
- a cf1:cq1 [] v1
- h cf1:cq1 [] v2
- z cf1:cq1 [] v3
- z cf1:cq2 [] v4
- root@test15 table1_copy> getsplits -t table1_copy
- b
- r
- root@test15> config -t table1_copy -f split
- ---------+--------------------------+-------------------------------------------
- SCOPE | NAME | VALUE
- ---------+--------------------------+-------------------------------------------
- default | table.split.threshold .. | 1G
- table | @override ........... | 100M
- ---------+--------------------------+-------------------------------------------
- root@test15> tables -l
- !METADATA => !0
- trace => 1
- table1_copy => 5
- root@test15 table1_copy> scan -t !METADATA -b 5 -c srv:time
- 5;b srv:time [] M1343224500467
- 5;r srv:time [] M1343224500467
- 5< srv:time [] M1343224500467
+ root@test15> importtable table1_copy /tmp/table1_export_dest
+ root@test15> table table1_copy
+ root@test15 table1_copy> scan
+ a cf1:cq1 [] v1
+ h cf1:cq1 [] v2
+ z cf1:cq1 [] v3
+ z cf1:cq2 [] v4
+ root@test15 table1_copy> getsplits -t table1_copy
+ b
+ r
+ root@test15> config -t table1_copy -f split
+ ---------+--------------------------+-------------------------------------------
+ SCOPE | NAME | VALUE
+ ---------+--------------------------+-------------------------------------------
+ default | table.split.threshold .. | 1G
+ table | @override ........... | 100M
+ ---------+--------------------------+-------------------------------------------
+ root@test15> tables -l
+ !METADATA => !0
+ trace => 1
+ table1_copy => 5
+ root@test15 table1_copy> scan -t !METADATA -b 5 -c srv:time
+ 5;b srv:time [] M1343224500467
+ 5;r srv:time [] M1343224500467
+ 5< srv:time [] M1343224500467
Modified: accumulo/site/trunk/content/1.5/examples/helloworld.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/helloworld.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/helloworld.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/helloworld.mdtext Fri May 31 19:25:22 2013
@@ -16,7 +16,7 @@ Notice: Licensed to the Apache Softwa
specific language governing permissions and limitations
under the License.
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.helloworld in the simple-examples module:
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.helloworld in the examples-simple module:
* InsertWithBatchWriter.java - Inserts 10K rows (50K entries) into accumulo with each row having 5 entries
* ReadData.java - Reads all data between two rows
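A standalone sketch of the data shape InsertWithBatchWriter generates: 10K rows with 5 entries each gives 50K entries. The row and column naming below is an illustrative assumption; the real example builds Mutations and sends them through a BatchWriter.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the helloworld data shape: numRows rows, each
// contributing colsPerRow entries of {row, family, qualifier, value}.
class HelloWorldData {
    static List<String[]> entries(int numRows, int colsPerRow) {
        List<String[]> out = new ArrayList<>();
        for (int r = 0; r < numRows; r++) {
            for (int c = 0; c < colsPerRow; c++) {
                out.add(new String[] {
                    String.format("row_%04d", r),   // row id (illustrative format)
                    "colf",                          // column family (assumed name)
                    String.format("colqual_%d", c), // column qualifier (assumed name)
                    "value"                          // cell value
                });
            }
        }
        return out;
    }
}
```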
Modified: accumulo/site/trunk/content/1.5/examples/index.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/index.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/index.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/index.mdtext Fri May 31 19:25:22 2013
@@ -45,9 +45,9 @@ features of Apache Accumulo.
[bulkIngest](bulkIngest.html): Ingesting bulk data using map/reduce jobs on Hadoop.
- [classpath](classpath.html)
+ [classpath](classpath.html): Using per-table classpaths.
- [client](client.html)
+ [client](client.html): Using table operations, reading and writing data in Java.
[combiner](combiner.html): Using example StatsCombiner to find min, max, sum, and count.
@@ -56,7 +56,7 @@ features of Apache Accumulo.
[dirlist](dirlist.html): Storing filesystem information.
- [export](export.html)
+ [export](export.html): Exporting and importing tables.
[filedata](filedata.html): Storing file data.
@@ -74,16 +74,19 @@ features of Apache Accumulo.
[maxmutation](maxmutation.html): Limiting mutation size to avoid running out of memory.
- [regex](regex.html)
+ [regex](regex.html): Using MapReduce and Accumulo to find data using regular
+ expressions.
- [rowhash](rowhash.html)
+ [rowhash](rowhash.html): Using MapReduce to read a table and write to a new
+ column in the same table.
[shard](shard.html): Using the intersecting iterator with a term index
partitioned by document.
- [tabletofile](tabletofile.html)
+ [tabletofile](tabletofile.html): Using MapReduce to read a table and write one of its
+ columns to a file in HDFS.
- [terasort](terasort.html)
+ [terasort](terasort.html): Generating random data and sorting it using Accumulo.
[visibility](visibility.html): Using visibilities (or combinations of authorizations).
Also shows user permissions.
Modified: accumulo/site/trunk/content/1.5/examples/tabletofile.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/tabletofile.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/tabletofile.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/tabletofile.mdtext Fri May 31 19:25:22 2013
@@ -1,4 +1,4 @@
-Title: Apache Accumulo Regex Example
+Title: Apache Accumulo Table-to-File Example
Notice: Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
Modified: accumulo/site/trunk/content/1.5/examples/terasort.mdtext
URL: http://svn.apache.org/viewvc/accumulo/site/trunk/content/1.5/examples/terasort.mdtext?rev=1488359&r1=1488358&r2=1488359&view=diff
==============================================================================
--- accumulo/site/trunk/content/1.5/examples/terasort.mdtext (original)
+++ accumulo/site/trunk/content/1.5/examples/terasort.mdtext Fri May 31 19:25:22 2013
@@ -1,4 +1,4 @@
-Title: Apache Accumulo MapReduce Example
+Title: Apache Accumulo Terasort Example
Notice: Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information