ACCUMULO-4532 Improve documentation of examples

* Renamed files and structured them to be markdown
* Fixed references in user manual


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/52d526b9
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/52d526b9
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/52d526b9

Branch: refs/heads/master
Commit: 52d526b9b25a3cacae48092bc7669eb911af94e2
Parents: add2217
Author: Mike Walch <mwa...@apache.org>
Authored: Mon Dec 5 16:43:49 2016 -0500
Committer: Mike Walch <mwa...@apache.org>
Committed: Tue Dec 6 14:04:34 2016 -0500

----------------------------------------------------------------------
 docs/src/main/asciidoc/chapters/analytics.txt   |   3 +-
 docs/src/main/asciidoc/chapters/clients.txt     |  13 +-
 .../asciidoc/chapters/high_speed_ingest.txt     |   7 +-
 docs/src/main/asciidoc/chapters/sampling.txt    |  10 +-
 .../asciidoc/chapters/table_configuration.txt   |  17 +-
 docs/src/main/resources/examples/README         |  97 --------
 docs/src/main/resources/examples/README.batch   |  55 -----
 docs/src/main/resources/examples/README.bloom   | 219 ------------------
 .../main/resources/examples/README.bulkIngest   |  33 ---
 .../main/resources/examples/README.classpath    |  68 ------
 docs/src/main/resources/examples/README.client  |  79 -------
 .../src/main/resources/examples/README.combiner |  70 ------
 .../examples/README.compactionStrategy          |  65 ------
 .../main/resources/examples/README.constraints  |  54 -----
 docs/src/main/resources/examples/README.dirlist | 114 ----------
 docs/src/main/resources/examples/README.export  |  91 --------
 .../src/main/resources/examples/README.filedata |  47 ----
 docs/src/main/resources/examples/README.filter  | 110 ---------
 .../main/resources/examples/README.helloworld   |  47 ----
 .../main/resources/examples/README.isolation    |  50 -----
 docs/src/main/resources/examples/README.mapred  | 154 -------------
 .../main/resources/examples/README.maxmutation  |  49 ----
 docs/src/main/resources/examples/README.regex   |  57 -----
 .../main/resources/examples/README.reservations |  66 ------
 .../main/resources/examples/README.rgbalancer   | 159 -------------
 docs/src/main/resources/examples/README.rowhash |  59 -----
 docs/src/main/resources/examples/README.sample  | 192 ----------------
 docs/src/main/resources/examples/README.shard   |  66 ------
 .../main/resources/examples/README.tabletofile  |  59 -----
 .../src/main/resources/examples/README.terasort |  50 -----
 .../main/resources/examples/README.visibility   | 131 -----------
 docs/src/main/resources/examples/batch.md       |  57 +++++
 docs/src/main/resources/examples/bloom.md       | 221 +++++++++++++++++++
 docs/src/main/resources/examples/bulkIngest.md  |  35 +++
 docs/src/main/resources/examples/classpath.md   |  69 ++++++
 docs/src/main/resources/examples/client.md      |  81 +++++++
 docs/src/main/resources/examples/combiner.md    |  72 ++++++
 .../resources/examples/compactionStrategy.md    |  67 ++++++
 docs/src/main/resources/examples/constraints.md |  56 +++++
 docs/src/main/resources/examples/dirlist.md     | 118 ++++++++++
 docs/src/main/resources/examples/export.md      |  93 ++++++++
 docs/src/main/resources/examples/filedata.md    |  51 +++++
 docs/src/main/resources/examples/filter.md      | 112 ++++++++++
 docs/src/main/resources/examples/helloworld.md  |  49 ++++
 docs/src/main/resources/examples/index.md       | 100 +++++++++
 docs/src/main/resources/examples/isolation.md   |  51 +++++
 docs/src/main/resources/examples/mapred.md      | 156 +++++++++++++
 docs/src/main/resources/examples/maxmutation.md |  51 +++++
 docs/src/main/resources/examples/regex.md       |  59 +++++
 .../src/main/resources/examples/reservations.md |  68 ++++++
 docs/src/main/resources/examples/rgbalancer.md  | 161 ++++++++++++++
 docs/src/main/resources/examples/rowhash.md     |  61 +++++
 docs/src/main/resources/examples/sample.md      | 193 ++++++++++++++++
 docs/src/main/resources/examples/shard.md       |  68 ++++++
 docs/src/main/resources/examples/tabletofile.md |  61 +++++
 docs/src/main/resources/examples/terasort.md    |  52 +++++
 docs/src/main/resources/examples/visibility.md  | 133 +++++++++++
 57 files changed, 2316 insertions(+), 2270 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/asciidoc/chapters/analytics.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/analytics.txt b/docs/src/main/asciidoc/chapters/analytics.txt
index 00e0403..3954788 100644
--- a/docs/src/main/asciidoc/chapters/analytics.txt
+++ b/docs/src/main/asciidoc/chapters/analytics.txt
@@ -185,8 +185,7 @@ AccumuloOutputFormat.setZooKeeperInstance(job, "myinstance",
 AccumuloOutputFormat.setMaxLatency(job, 300000); // milliseconds
 AccumuloOutputFormat.setMaxMutationBufferSize(job, 50000000); // bytes
 
-An example of using MapReduce with Accumulo can be found at
-+accumulo/docs/examples/README.mapred+.
+An example of using MapReduce with Accumulo can be found at +docs/examples/mapred.md+.
 
 === Combiners
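
For reference, a minimal sketch of the mapper side of such a job (a hedged illustration, not part of this commit: it assumes the mapreduce API excerpted above, and the column names are made up). Each input line becomes one Mutation, which AccumuloOutputFormat writes to the configured default table:

    import java.io.IOException;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Value;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class IngestMapper extends Mapper<LongWritable,Text,Text,Mutation> {
      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        Mutation m = new Mutation(new Text(line));     // row = the input line
        m.put(new Text("cf"), new Text("cq"), new Value("1".getBytes()));
        context.write(null, m);  // null table name = use the default table
      }
    }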
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/asciidoc/chapters/clients.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/clients.txt b/docs/src/main/asciidoc/chapters/clients.txt
index 63bb937..713abad 100644
--- a/docs/src/main/asciidoc/chapters/clients.txt
+++ b/docs/src/main/asciidoc/chapters/clients.txt
@@ -123,10 +123,10 @@ writer.addMutation(mutation);
 writer.close();
 ----
 
-An example of using the batch writer can be found at
-+accumulo/docs/examples/README.batch+.
+An example of using the batch writer can be found at +docs/examples/batch.md+.
 
 ==== ConditionalWriter
+
 The ConditionalWriter enables efficient, atomic read-modify-write operations on
 rows.  The ConditionalWriter writes special Mutations which have a list of per
 column conditions that must all be met before the mutation is applied.  The
@@ -148,8 +148,7 @@ and possibly sending another conditional mutation.  If this is not sufficient,
 then a higher level of abstraction can be built by storing transactional
 information within a row.
 
-An example of using the batch writer can be found at
-+accumulo/docs/examples/README.reservations+.
+An example of using the conditional writer can be found at +docs/examples/reservations.md+.
 
 ==== Durability
 
@@ -233,9 +232,7 @@ crash a tablet server. By default rows are buffered in memory, but the user
 can easily supply their own buffer if they wish to buffer to disk when rows are
 large.
 
-For an example, look at the following
-
-  examples/simple/src/main/java/org/apache/accumulo/examples/simple/isolation/InterferenceTest.java
+For an example, see +docs/examples/src/isolation/InterferenceTest.java+
 
 ==== BatchScanner
 
@@ -264,7 +261,7 @@ for(Entry<Key,Value> entry : bscan) {
 }
 ----
 
-An example of the BatchScanner can be found at +accumulo/docs/examples/README.batch+.
+An example of the BatchScanner can be found at +docs/examples/batch.md+.
 
 At this time, there is no client side isolation support for the BatchScanner.
 You may consider using the WholeRowIterator with the BatchScanner to achieve
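
For reference, a minimal sketch of the ConditionalWriter pattern described above (hedged: `connector` is an existing Connector, and the table, row, and column names are illustrative):

    // Classes are in org.apache.accumulo.core.client and ...core.data.
    // Apply an update only if meta:seq still holds the expected value "1";
    // Status.REJECTED signals that a concurrent writer won the race.
    ConditionalWriter writer = connector.createConditionalWriter("mytable",
        new ConditionalWriterConfig());
    ConditionalMutation cm = new ConditionalMutation("row1");
    cm.addCondition(new Condition("meta", "seq").setValue("1"));
    cm.put("meta", "seq", "2");              // bump the sequence number
    cm.put("data", "x", "new-value");
    ConditionalWriter.Status status = writer.write(cm).getStatus();
    writer.close();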

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/asciidoc/chapters/high_speed_ingest.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/high_speed_ingest.txt b/docs/src/main/asciidoc/chapters/high_speed_ingest.txt
index 909f4c4..1e1be48 100644
--- a/docs/src/main/asciidoc/chapters/high_speed_ingest.txt
+++ b/docs/src/main/asciidoc/chapters/high_speed_ingest.txt
@@ -92,8 +92,7 @@ Note that the paths referenced are directories within the same HDFS instance over
 which Accumulo is running. Accumulo places any files that failed to be added to the
 second directory specified.
 
-A complete example of using Bulk Ingest can be found at
-+accumulo/docs/examples/README.bulkIngest+.
+A complete example of using Bulk Ingest can be found at +docs/examples/bulkIngest.md+.
 
 === Logical Time for Bulk Ingest
 
@@ -116,10 +115,10 @@ undefined. This could occur if an insert and an update were in the same bulk
 import file.
 
 === MapReduce Ingest
+
 It is possible to efficiently write many mutations to Accumulo in parallel via a
 MapReduce job. In this scenario the MapReduce is written to process data that lives
 in HDFS and write mutations to Accumulo using the AccumuloOutputFormat. See
 the MapReduce section under Analytics for details.
 
-An example of using MapReduce can be found under
-+accumulo/docs/examples/README.mapred+.
+An example of using MapReduce can be found at +docs/examples/mapred.md+.
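
For reference, the bulk import described above can also be triggered from the client API; a hedged sketch (table name and HDFS paths are illustrative, and the failures directory must exist and be empty):

    // Import pre-built RFiles; files that cannot be loaded are moved into
    // the failures directory, as described above.
    connector.tableOperations().importDirectory("test_bulk",
        "/data/bulk/files", "/data/bulk/failures", false);  // false = keep timestamps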

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/asciidoc/chapters/sampling.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/sampling.txt b/docs/src/main/asciidoc/chapters/sampling.txt
index f035c56..99c3c7b 100644
--- a/docs/src/main/asciidoc/chapters/sampling.txt
+++ b/docs/src/main/asciidoc/chapters/sampling.txt
@@ -22,7 +22,7 @@ This sample data is kept up to date as a table is mutated.  What key values are
 placed in the sample data is configurable per table.
 
 This feature can be used for query estimation and optimization.  For an example
-of estimaiton assume an Accumulo table is configured to generate a sample
+of estimation assume an Accumulo table is configured to generate a sample
 containing one millionth of a tables data.   If a query is executed against the
 sample and returns one thousand results, then the same query against all the
 data would probably return a billion results.  A nice property of having
@@ -40,8 +40,8 @@ that class.  For guidance on implementing a Sampler see that interface's
 javadoc.  Accumulo provides a few implementations out of the box.   For
 information on how to use the samplers that ship with Accumulo look in the
 package `org.apache.accumulo.core.sample` and consult the javadoc of the
-classes there.  See +README.sample+ and +SampleExample.java+ for examples of
-how to configure a Sampler on a table.
+classes there.  See +docs/examples/sample.md+ and +docs/examples/src/sample/SampleExample.java+
+for examples of how to configure a Sampler on a table.
 
 Once a table is configured with a sampler all writes after that point will
 generate sample data.  For data written before sampling was configured sample
@@ -61,8 +61,8 @@ Inorder to scan sample data, use the +setSamplerConfiguration(...)+  method on
 information.
 
 Sample data can also be scanned from within an Accumulo
-+SortedKeyValueIterator+.  To see how to do this look at the example iterator
-referenced in README.sample.  Also, consult the javadoc on
++SortedKeyValueIterator+.  To see how to do this, look at the example iterator
+referenced in +docs/examples/sample.md+.  Also, consult the javadoc on
 +org.apache.accumulo.core.iterators.IteratorEnvironment.cloneWithSamplingEnabled()+.
 
 Map reduce jobs using the +AccumuloInputFormat+ can also read sample data.  See
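
For reference, a minimal sketch of configuring and reading sample data (hedged: it assumes the RowSampler shipped in org.apache.accumulo.core.client.sample, and the modulus value is illustrative):

    // Keep roughly 1/1000 of rows in the sample.
    SamplerConfiguration sc = new SamplerConfiguration(RowSampler.class.getName());
    sc.addOption("hasher", "murmur3_32");
    sc.addOption("modulus", "1000");
    connector.tableOperations().setSamplerConfiguration("mytable", sc);

    // A scanner opts in to reading only the sample data.
    Scanner scanner = connector.createScanner("mytable", Authorizations.EMPTY);
    scanner.setSamplerConfiguration(sc);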

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/asciidoc/chapters/table_configuration.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/table_configuration.txt b/docs/src/main/asciidoc/chapters/table_configuration.txt
index dd167ec..fa2b16c 100644
--- a/docs/src/main/asciidoc/chapters/table_configuration.txt
+++ b/docs/src/main/asciidoc/chapters/table_configuration.txt
@@ -21,6 +21,7 @@ These include locality groups, constraints, bloom filters, iterators, and block
 cache.  For a complete list of available configuration options, see <<configuration>>.
 
 === Locality Groups
+
 Accumulo supports storing sets of column families separately on disk to allow
 clients to efficiently scan over columns that are frequently used together and to avoid
 scanning over column families that are not requested. After a locality group is set,
@@ -102,11 +103,11 @@ new constraint and place it in the lib directory of the Accumulo installation. New
 constraint jars can be added to Accumulo and enabled without restarting but any
 change to an existing constraint class requires Accumulo to be restarted.
 
-An example of constraints can be found in
-+accumulo/docs/examples/README.constraints+ with corresponding code under
-+accumulo/examples/simple/src/main/java/accumulo/examples/simple/constraints+ .
+An example of constraints can be found in +docs/examples/constraints.md+ with
+corresponding code in +docs/examples/src/constraints+ .
 
 === Bloom Filters
+
 As mutations are applied to an Accumulo table, several files are created per tablet. If
 bloom filters are enabled, Accumulo will create and load a small data structure into
 memory to determine whether a file contains a given key before opening the file.
@@ -116,8 +117,7 @@ To enable bloom filters, enter the following command in the Shell:
 
   user@myinstance> config -t mytable -s table.bloom.enabled=true
 
-An extensive example of using Bloom Filters can be found at
-+accumulo/docs/examples/README.bloom+ .
+An extensive example of using Bloom Filters can be found at +docs/examples/bloom.md+ .
 
 === Iterators
 Iterators provide a modular mechanism for adding functionality to be executed by
@@ -345,10 +345,7 @@ Additional Combiners can be added by creating a Java class that extends
 +org.apache.accumulo.core.iterators.Combiner+ and adding a jar containing that
 class to Accumulo's lib/ext directory.
 
-An example of a Combiner can be found under
-
-  accumulo/examples/simple/src/main/java/org/apache/accumulo/examples/simple/combiner/StatsCombiner.java
-
+An example of a Combiner can be found at +docs/examples/src/combiner/StatsCombiner.java+.
 
 === Block Cache
 
@@ -661,4 +658,4 @@ splits, and logical time. Tables are exported and then copied via the hadoop
 distcp command. To export a table, it must be offline and stay offline while
 discp runs. The reason it needs to stay offline is to prevent files from being
 deleted. A table can be cloned and the clone taken offline inorder to avoid
-losing access to the table. See +docs/examples/README.export+ for an example.
+losing access to the table. See +docs/examples/export.md+ for an example.
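
For reference, a hedged sketch of that export flow through the client API (table name and export path are illustrative):

    // Clone the table and take the clone offline so the live table stays usable.
    TableOperations ops = connector.tableOperations();
    ops.clone("mytable", "mytable_export", true,
        Collections.<String,String>emptyMap(), Collections.<String>emptySet());
    ops.offline("mytable_export", true);     // wait until the clone is offline
    ops.exportTable("mytable_export", "/exports/mytable");
    // The export directory then lists the files to copy with hadoop distcp.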

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README b/docs/src/main/resources/examples/README
deleted file mode 100644
index 1c88b56..0000000
--- a/docs/src/main/resources/examples/README
+++ /dev/null
@@ -1,97 +0,0 @@
-Title: Apache Accumulo Examples
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-Before running any of the examples, the following steps must be performed.
-
-1. Install and run Accumulo via the instructions found in INSTALL.md.
-   Remember the instance name. It will be referred to as "instance" throughout
-   the examples. A comma-separated list of zookeeper servers will be referred
-   to as "zookeepers".
-
-2. Create an Accumulo user (see the [user manual][1]), or use the root user.
-   The "username" Accumulo user name with password "password" is used
-   throughout the examples. This user needs the ability to create tables.
-
-In all commands, you will need to replace "instance", "zookeepers",
-"username", and "password" with the values you set for your Accumulo instance.
-
-Commands intended to be run in bash are prefixed by '$'. These are always
-assumed to be run the from the root of your Accumulo installation.
-
-Commands intended to be run in the Accumulo shell are prefixed by '>'.
-
-Each README in the examples directory highlights the use of particular
-features of Apache Accumulo.
-
-   README.batch:       Using the batch writer and batch scanner.
-
-   README.bloom:       Creating a bloom filter enabled table to increase query
-                       performance.
-
-   README.bulkIngest:  Ingesting bulk data using map/reduce jobs on Hadoop.
-
-   README.classpath:   Using per-table classpaths.
-
-   README.client:      Using table operations, reading and writing data in Java.
-
-   README.combiner:    Using example StatsCombiner to find min, max, sum, and
-                       count.
-
-   README.constraints: Using constraints with tables.
-
-   README.dirlist:     Storing filesystem information.
-
-   README.export:      Exporting and importing tables.
-
-   README.filedata:    Storing file data.
-
-   README.filter:      Using the AgeOffFilter to remove records more than 30
-                       seconds old.
-
-   README.helloworld:  Inserting records both inside map/reduce jobs and
-                       outside. And reading records between two rows.
-
-   README.isolation:   Using the isolated scanner to ensure partial changes
-                       are not seen.
-
-   README.mapred:      Using MapReduce to read from and write to Accumulo
-                       tables.
-
-   README.maxmutation: Limiting mutation size to avoid running out of memory.
-
-   README.regex:       Using MapReduce and Accumulo to find data using regular
-                       expressions.
-
-   README.rowhash:     Using MapReduce to read a table and write to a new
-                       column in the same table.
-
-   README.sample:      Building and using sample data in Accumulo.
-
-   README.shard:       Using the intersecting iterator with a term index
-                       partitioned by document.
-
-   README.tabletofile: Using MapReduce to read a table and write one of its
-                       columns to a file in HDFS.
-
-   README.terasort:    Generating random data and sorting it using Accumulo.
-
-   README.visibility:  Using visibilities (or combinations of authorizations).
-                       Also shows user permissions.
-
-
-[1]: https://accumulo.apache.org/1.5/accumulo_user_manual#_user_administration

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.batch
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.batch b/docs/src/main/resources/examples/README.batch
deleted file mode 100644
index 463481b..0000000
--- a/docs/src/main/resources/examples/README.batch
+++ /dev/null
@@ -1,55 +0,0 @@
-Title: Apache Accumulo Batch Writing and Scanning Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the examples-simple module:
-
- * SequentialBatchWriter.java - writes mutations with sequential rows and random values
- * RandomBatchWriter.java - used by SequentialBatchWriter to generate random values
- * RandomBatchScanner.java - reads random rows and verifies their values
-
-This is an example of how to use the batch writer and batch scanner. To compile
-the example, run maven and copy the produced jar into the accumulo lib dir.
-This is already done in the tar distribution.
-
-Below are commands that add 10000 entries to accumulo and then do 100 random
-queries. The write command generates random 50 byte values.
-
-Be sure to use the name of your instance (given as instance here) and the appropriate
-list of zookeeper nodes (given as zookeepers here).
-
-Before you run this, you must ensure that the user you are running has the
-"exampleVis" authorization. (you can set this in the shell with "setauths -u 
username -s exampleVis")
-
-    $ ./bin/accumulo shell -u root -e "setauths -u username -s exampleVis"
-
-You must also create the table, batchtest1, ahead of time. (In the shell, use "createtable batchtest1")
-
-    $ ./bin/accumulo shell -u username -e "createtable batchtest1"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance -z zookeepers -u username -p password -t batchtest1 --start 0 --num 10000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner -i instance -z zookeepers -u username -p password -t batchtest1 --num 100 --min 0 --max 10000 --size 50 --scanThreads 20 --auths exampleVis
-    07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
-    07 11:33:11,112 [client.CountingVerifyingReceiver] INFO : finished
-    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : 694.44 lookups/sec   0.14 secs
-
-    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : num results : 100
-
-    07 11:33:11,364 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
-    07 11:33:11,370 [client.CountingVerifyingReceiver] INFO : finished
-    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
-
-    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
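
For reference, a minimal sketch of what those two programs do through the client API (hedged: `connector` is an existing Connector, and the row, column, and sizes are illustrative):

    // Write one entry with a BatchWriter, then look it up with a BatchScanner.
    BatchWriter writer = connector.createBatchWriter("batchtest1",
        new BatchWriterConfig().setMaxMemory(20 * 1024 * 1024));
    Mutation m = new Mutation("row_0000000000");
    m.put("foo", "1", new ColumnVisibility("exampleVis"), "value");
    writer.addMutation(m);
    writer.close();

    BatchScanner scanner = connector.createBatchScanner("batchtest1",
        new Authorizations("exampleVis"), 20);   // 20 query threads
    scanner.setRanges(Collections.singleton(Range.exact("row_0000000000")));
    for (Map.Entry<Key,Value> entry : scanner)
      System.out.println(entry.getKey() + " -> " + entry.getValue());
    scanner.close();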

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.bloom
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.bloom b/docs/src/main/resources/examples/README.bloom
deleted file mode 100644
index 555f06d..0000000
--- a/docs/src/main/resources/examples/README.bloom
+++ /dev/null
@@ -1,219 +0,0 @@
-Title: Apache Accumulo Bloom Filter Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example shows how to create a table with bloom filters enabled.  It also
-shows how bloom filters increase query performance when looking for values that
-do not exist in a table.
-
-Below table named bloom_test is created and bloom filters are enabled.
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    -
-    - type 'help' for a list of available commands
-    -
-    username@instance> setauths -u username -s exampleVis
-    username@instance> createtable bloom_test
-    username@instance bloom_test> config -t bloom_test -s table.bloom.enabled=true
-    username@instance bloom_test> exit
-
-Below 1 million random values are inserted into accumulo. The randomly
-generated rows range between 0 and 1 billion. The random number generator is
-initialized with the seed 7.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis
-
-Below the table is flushed:
-
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test -w'
-    05 10:40:06,069 [shell.Shell] INFO : Flush of table bloom_test completed.
-
-After the flush completes, 500 random queries are done against the table. The
-same seed is used to generate the queries, therefore everything is found in the
-table.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
-    Generating 500 random queries...finished
-    96.19 lookups/sec   5.20 secs
-    num results : 500
-    Generating 500 random queries...finished
-    102.35 lookups/sec   4.89 secs
-    num results : 500
-
-Below another 500 queries are performed, using a different seed which results
-in nothing being found. In this case the lookups are much faster because of
-the bloom filters.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 8 -i instance -z zookeepers -u username -p password -t bloom_test --num 500 --min 0 --max 1000000000 --size 50 -batchThreads 20 -auths exampleVis
-    Generating 500 random queries...finished
-    2212.39 lookups/sec   0.23 secs
-    num results : 0
-    Did not find 500 rows
-    Generating 500 random queries...finished
-    4464.29 lookups/sec   0.11 secs
-    num results : 0
-    Did not find 500 rows
-
-********************************************************************************
-
-Bloom filters can also speed up lookups for entries that exist. In accumulo
-data is divided into tablets and each tablet has multiple map files. Every
-lookup in accumulo goes to a specific tablet where a lookup is done on each
-map file in the tablet. So if a tablet has three map files, lookup performance
-can be three times slower than a tablet with one map file. However if the map
-files contain unique sets of data, then bloom filters can help eliminate map
-files that do not contain the row being looked up. To illustrate this two
-identical tables were created using the following process. One table had bloom
-filters, the other did not. Also the major compaction ratio was increased to
-prevent the files from being compacted into one file.
-
- * Insert 1 million entries using  RandomBatchWriter with a seed of 7
- * Flush the table using the shell
- * Insert 1 million entries using  RandomBatchWriter with a seed of 8
- * Flush the table using the shell
- * Insert 1 million entries using  RandomBatchWriter with a seed of 9
- * Flush the table using the shell
-
-After following the above steps, each table will have a tablet with three map
-files. Flushing the table after each batch of inserts will create a map file.
-Each map file will contain 1 million entries generated with a different seed.
-This is assuming that Accumulo is configured with enough memory to hold 1
-million inserts. If not, then more map files will be created.
-
-The commands for creating the first table without bloom filters are below.
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    -
-    - type 'help' for a list of available commands
-    -
-    username@instance> setauths -u username -s exampleVis
-    username@instance> createtable bloom_test1
-    username@instance bloom_test1> config -t bloom_test1 -s table.compaction.major.ratio=7
-    username@instance bloom_test1> exit
-
-    $ ARGS="-i instance -z zookeepers -u username -p password -t bloom_test1 --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 8 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test1 -w'
-
-The commands for creating the second table with bloom filers are below.
-
-    $ ./bin/accumulo shell -u username -p password
-    Shell - Apache Accumulo Interactive Shell
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    -
-    - type 'help' for a list of available commands
-    -
-    username@instance> setauths -u username -s exampleVis
-    username@instance> createtable bloom_test2
-    username@instance bloom_test2> config -t bloom_test2 -s table.compaction.major.ratio=7
-    username@instance bloom_test2> config -t bloom_test2 -s table.bloom.enabled=true
-    username@instance bloom_test2> exit
-
-    $ ARGS="-i instance -z zookeepers -u username -p password -t bloom_test2 --num 1000000 --min 0 --max 1000000000 --size 50 --batchMemory 2M --batchLatency 60s --batchThreads 3 --vis exampleVis"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 7 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 8 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchWriter --seed 9 $ARGS
-    $ ./bin/accumulo shell -u username -p password -e 'flush -t bloom_test2 -w'
-
-Below 500 lookups are done against the table without bloom filters using random
-NG seed 7. Even though only one map file will likely contain entries for this
-seed, all map files will be interrogated.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test1 --num 500 --min 0 --max 1000000000 --size 50 --scanThreads 20 --auths exampleVis
-    Generating 500 random queries...finished
-    35.09 lookups/sec  14.25 secs
-    num results : 500
-    Generating 500 random queries...finished
-    35.33 lookups/sec  14.15 secs
-    num results : 500
-
-Below the same lookups are done against the table with bloom filters. The
-lookups were 2.86 times faster because only one map file was used, even though three
-map files existed.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.RandomBatchScanner --seed 7 -i instance -z zookeepers -u username -p password -t bloom_test2 --num 500 --min 0 --max 1000000000 --size 50 -scanThreads 20 --auths exampleVis
-    Generating 500 random queries...finished
-    99.03 lookups/sec   5.05 secs
-    num results : 500
-    Generating 500 random queries...finished
-    101.15 lookups/sec   4.94 secs
-    num results : 500
-
-You can verify the table has three files by looking in HDFS. To look in HDFS
-you will need the table ID, because this is used in HDFS instead of the table
-name. The following command will show table ids.
-
-    $ ./bin/accumulo shell -u username -p password -e 'tables -l'
-    accumulo.metadata    =>        !0
-    accumulo.root        =>        +r
-    bloom_test1          =>        o7
-    bloom_test2          =>        o8
-    trace                =>         1
-
-So the table id for bloom_test2 is o8. The command below shows what files this
-table has in HDFS. This assumes Accumulo is at the default location in HDFS.
-
-    $ hadoop fs -lsr /accumulo/tables/o8
-    drwxr-xr-x   - username supergroup          0 2012-01-10 14:02 /accumulo/tables/o8/default_tablet
-    -rw-r--r--   3 username supergroup   52672650 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dj.rf
-    -rw-r--r--   3 username supergroup   52436176 2012-01-10 14:01 /accumulo/tables/o8/default_tablet/F00000dk.rf
-    -rw-r--r--   3 username supergroup   52850173 2012-01-10 14:02 /accumulo/tables/o8/default_tablet/F00000dl.rf
-
-Running the rfile-info command shows that one of the files has a bloom filter
-and its 1.5MB.
-
-    $ ./bin/accumulo rfile-info /accumulo/tables/o8/default_tablet/F00000dj.rf
-    Locality group         : <DEFAULT>
-       Start block          : 0
-       Num   blocks         : 752
-       Index level 0        : 43,598 bytes  1 blocks
-       First key            : row_0000001169 foo:1 [exampleVis] 1326222052539 false
-       Last key             : row_0999999421 foo:1 [exampleVis] 1326222052058 false
-       Num entries          : 999,536
-       Column families      : [foo]
-
-    Meta block     : BCFile.index
-      Raw size             : 4 bytes
-      Compressed size      : 12 bytes
-      Compression type     : gz
-
-    Meta block     : RFile.index
-      Raw size             : 43,696 bytes
-      Compressed size      : 15,592 bytes
-      Compression type     : gz
-
-    Meta block     : acu_bloom
-      Raw size             : 1,540,292 bytes
-      Compressed size      : 1,433,115 bytes
-      Compression type     : gz
-
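
For reference, the shell `config` settings used in this example can also be applied through the client API; a minimal, hedged sketch:

    // Same effect as the shell's "config -t ... -s" commands above.
    connector.tableOperations().setProperty("bloom_test2",
        "table.compaction.major.ratio", "7");
    connector.tableOperations().setProperty("bloom_test2",
        "table.bloom.enabled", "true");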

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.bulkIngest
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.bulkIngest b/docs/src/main/resources/examples/README.bulkIngest
deleted file mode 100644
index bc9f913..0000000
--- a/docs/src/main/resources/examples/README.bulkIngest
+++ /dev/null
@@ -1,33 +0,0 @@
-Title: Apache Accumulo Bulk Ingest Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This is an example of how to bulk ingest data into accumulo using map reduce.
-
-The following commands show how to run this example. This example creates a
-table called test_bulk which has two initial split points. Then 1000 rows of
-test data are created in HDFS. After that the 1000 rows are ingested into
-accumulo. Then we verify the 1000 rows are in accumulo.
-
-    $ PKG=org.apache.accumulo.examples.simple.mapreduce.bulk
-    $ ARGS="-i instance -z zookeepers -u username -p password"
-    $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
-    $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
-    $ ./contrib/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
-    $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
-
-For a high level discussion of bulk ingest, see the docs dir.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.classpath
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.classpath b/docs/src/main/resources/examples/README.classpath
deleted file mode 100644
index 7497014..0000000
--- a/docs/src/main/resources/examples/README.classpath
+++ /dev/null
@@ -1,68 +0,0 @@
-Title: Apache Accumulo Classpath Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-
-This example shows how to use per table classpaths. The example leverages a
-test jar which contains a Filter that supresses rows containing "foo". The
-example shows copying the FooFilter.jar into HDFS and then making an Accumulo
-table reference that jar.
-
-
-Execute the following command in the shell.
-
-    $ hadoop fs -copyFromLocal /path/to/accumulo/test/src/test/resources/FooFilter.jar /user1/lib
-
-Execute following in Accumulo shell to setup classpath context
-
-    root@test15> config -s general.vfs.context.classpath.cx1=hdfs://<namenode host>:<namenode port>/user1/lib/[^.].*.jar
-
-Create a table
-
-    root@test15> createtable nofoo
-
-The following command makes this table use the configured classpath context
-
-    root@test15 nofoo> config -t nofoo -s table.classpath.context=cx1
-
-The following command configures an iterator thats in FooFilter.jar
-
-    root@test15 nofoo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
-    Filter accepts or rejects each Key/Value pair
-    ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-
-The commands below show the filter is working.
-
-    root@test15 nofoo> insert foo1 f1 q1 v1
-    root@test15 nofoo> insert noo1 f1 q1 v2
-    root@test15 nofoo> scan
-    noo1 f1:q1 []    v2
-    root@test15 nofoo>
-
-Below, an attempt is made to add the FooFilter to a table thats not configured
-to use the clasppath context cx1. This fails util the table is configured to
-use cx1.
-
-    root@test15 nofoo> createtable nofootwo
-    root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
-    2013-05-03 12:49:35,943 [shell.Shell] ERROR: java.lang.IllegalArgumentException: org.apache.accumulo.test.FooFilter
-    root@test15 nofootwo> config -t nofootwo -s table.classpath.context=cx1
-    root@test15 nofootwo> setiter -n foofilter -p 10 -scan -minc -majc -class org.apache.accumulo.test.FooFilter
-    Filter accepts or rejects each Key/Value pair
-    ----------> set FooFilter parameter negate, default false keeps k/v that pass accept method, true rejects k/v that pass accept method: false
-
-
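
For reference, a hedged sketch of the same configuration through the client API (the namenode URI is illustrative):

    // Define the classpath context system-wide, then point the table at it.
    connector.instanceOperations().setProperty(
        "general.vfs.context.classpath.cx1",
        "hdfs://namenode:8020/user1/lib/[^.].*.jar");
    connector.tableOperations().setProperty("nofoo",
        "table.classpath.context", "cx1");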

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.client
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.client b/docs/src/main/resources/examples/README.client
deleted file mode 100644
index f6b8bcb..0000000
--- a/docs/src/main/resources/examples/README.client
+++ /dev/null
@@ -1,79 +0,0 @@
-Title: Apache Accumulo Client Examples
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This documents how you run the simplest java examples.
-
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.simple.client in the examples-simple module:
-
- * Flush.java - flushes a table
- * RowOperations.java - reads and writes rows
- * ReadWriteExample.java - creates a table, writes to it, and reads from it
-
-Using the accumulo command, you can run the simple client examples by providing their
-class name, and enough arguments to find your accumulo instance. For example,
-the Flush class will flush a table:
-
-    $ PACKAGE=org.apache.accumulo.examples.simple.client
-    $ bin/accumulo $PACKAGE.Flush -u root -p mypassword -i instance -z zookeeper -t trace
-
-The very simple RowOperations class demonstrates how to read and write rows using the BatchWriter
-and Scanner:
-
-    $ bin/accumulo $PACKAGE.RowOperations -u root -p mypassword -i instance -z zookeeper
-    2013-01-14 14:45:24,738 [client.RowOperations] INFO : This is everything
-    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
-    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,744 [client.RowOperations] INFO : Key: row1 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:1 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,746 [client.RowOperations] INFO : Key: row2 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,747 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,756 [client.RowOperations] INFO : This is row1 and row3
-    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:1 [] 1358192724640 false Value: This is the value for this key
-    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,757 [client.RowOperations] INFO : Key: row1 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,761 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,765 [client.RowOperations] INFO : This is just row3
-    2013-01-14 14:45:24,769 [client.RowOperations] INFO : Key: row3 column:1 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:2 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:3 [] 1358192724642 false Value: This is the value for this key
-    2013-01-14 14:45:24,770 [client.RowOperations] INFO : Key: row3 column:4 [] 1358192724642 false Value: This is the value for this key
-
-To create a table, write to it and read from it:
-
-    $ bin/accumulo $PACKAGE.ReadWriteExample -u root -p mypassword -i instance -z zookeeper --createtable --create --read
-    hello%00; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%01; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%02; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%03; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%04; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%05; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%06; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%07; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%08; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-    hello%09; datatypes:xml [LEVEL1|GROUP1] 1358192329450 false -> world
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.combiner
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.combiner b/docs/src/main/resources/examples/README.combiner
deleted file mode 100644
index f388e5b..0000000
--- a/docs/src/main/resources/examples/README.combiner
+++ /dev/null
@@ -1,70 +0,0 @@
-Title: Apache Accumulo Combiner Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java class, which can be found in org.apache.accumulo.examples.simple.combiner in the examples-simple module:
-
- * StatsCombiner.java - a combiner that calculates max, min, sum, and count
-
-This is a simple combiner example. To build this example run maven and then
-copy the produced jar into the accumulo lib dir. This is already done in the
-tar distribution.
-
-    $ bin/accumulo shell -u username
-    Enter current password for 'username'@'instance': ***
-
-    Shell - Apache Accumulo Interactive Shell
-    -
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    -
-    - type 'help' for a list of available commands
-    -
-    username@instance> createtable runners
-    username@instance runners> setiter -t runners -p 10 -scan -minc -majc -n decStats -class org.apache.accumulo.examples.simple.combiner.StatsCombiner
-    Combiner that keeps track of min, max, sum, and count
-    ----------> set StatsCombiner parameter all, set to true to apply Combiner to every column, otherwise leave blank. if true, columns option will be ignored.:
-    ----------> set StatsCombiner parameter columns, <col fam>[:<col qual>]{,<col fam>[:<col qual>]} escape non aplhanum chars using %<hex>.: stat
-    ----------> set StatsCombiner parameter radix, radix/base of the numbers: 10
-    username@instance runners> setiter -t runners -p 11 -scan -minc -majc -n hexStats -class org.apache.accumulo.examples.simple.combiner.StatsCombiner
-    Combiner that keeps track of min, max, sum, and count
-    ----------> set StatsCombiner parameter all, set to true to apply Combiner to every column, otherwise leave blank. if true, columns option will be ignored.:
-    ----------> set StatsCombiner parameter columns, <col fam>[:<col qual>]{,<col fam>[:<col qual>]} escape non aplhanum chars using %<hex>.: hstat
-    ----------> set StatsCombiner parameter radix, radix/base of the numbers: 16
-    username@instance runners> insert 123456 name first Joe
-    username@instance runners> insert 123456 stat marathon 240
-    username@instance runners> scan
-    123456 name:first []    Joe
-    123456 stat:marathon []    240,240,240,1
-    username@instance runners> insert 123456 stat marathon 230
-    username@instance runners> insert 123456 stat marathon 220
-    username@instance runners> scan
-    123456 name:first []    Joe
-    123456 stat:marathon []    220,240,690,3
-    username@instance runners> insert 123456 hstat virtualMarathon 6a
-    username@instance runners> insert 123456 hstat virtualMarathon 6b
-    username@instance runners> scan
-    123456 hstat:virtualMarathon []    6a,6b,d5,2
-    123456 name:first []    Joe
-    123456 stat:marathon []    220,240,690,3
-
-In this example a table is created and the example stats combiner is applied to
-the column family stat and hstat. The stats combiner computes min,max,sum, and
-count. It can be configured to use a different base or radix. In the example
-above the column family stat is configured for base 10 and the column family
-hstat is configured for base 16.
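
For reference, a combiner can also be attached through the client API. A minimal, hedged sketch using the SummingCombiner that ships with Accumulo (the example StatsCombiner is configured the same way, with its own options):

    // Sum values in the "stat" column family at scan, minc, and majc time.
    IteratorSetting setting = new IteratorSetting(10, "sum", SummingCombiner.class);
    SummingCombiner.setColumns(setting,
        Collections.singletonList(new IteratorSetting.Column("stat")));
    SummingCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
    connector.tableOperations().attachIterator("runners", setting);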

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.compactionStrategy
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.compactionStrategy b/docs/src/main/resources/examples/README.compactionStrategy
deleted file mode 100644
index 344080b..0000000
--- a/docs/src/main/resources/examples/README.compactionStrategy
+++ /dev/null
@@ -1,65 +0,0 @@
-Title: Apache Accumulo Customizing the Compaction Strategy 
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java classes, which can be found in org.apache.accumulo.tserver.compaction: 
-
- * DefaultCompactionStrategy.java - determines which files to compact based on table.compaction.major.ratio and table.file.max
- * EverythingCompactionStrategy.java - compacts all files
- * SizeLimitCompactionStrategy.java - compacts files no bigger than table.majc.compaction.strategy.opts.sizeLimit
- * TwoTierCompactionStrategy.java - uses default compression for smaller files and table.majc.compaction.strategy.opts.file.large.compress.type for larger files
-
-This is an example of how to configure a compaction strategy. By default Accumulo will always use the DefaultCompactionStrategy, unless 
-these steps are taken to change the configuration.  Use the strategy and settings that best fits your Accumulo setup. This example shows
-how to configure and test one of the more complicated strategies, the TwoTierCompactionStrategy. Note that this example requires hadoop
-native libraries built with snappy in order to use snappy compression.
-
-To begin, run the command to create a table for testing:
-
-    $ ./bin/accumulo shell -u root -p secret -e "createtable test1"
-
-The command below sets the compression for smaller files and minor compactions for that table.
-
-    $ ./bin/accumulo shell -u root -p secret -e "config -s table.file.compress.type=snappy -t test1"
-
-The commands below will configure the TwoTierCompactionStrategy to use gz compression for files larger than 1M. 
-
-    $ ./bin/accumulo shell -u root -p secret -e "config -s table.majc.compaction.strategy.opts.file.large.compress.threshold=1M -t test1"
-    $ ./bin/accumulo shell -u root -p secret -e "config -s table.majc.compaction.strategy.opts.file.large.compress.type=gz -t test1"
-    $ ./bin/accumulo shell -u root -p secret -e "config -s table.majc.compaction.strategy=org.apache.accumulo.tserver.compaction.TwoTierCompactionStrategy -t test1"
-
-Generate some data and files in order to test the strategy:
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance17 -z localhost:2181 -u root -p secret -t test1 --start 0 --num 10000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20
-    $ ./bin/accumulo shell -u root -p secret -e "flush -t test1"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance17 -z localhost:2181 -u root -p secret -t test1 --start 0 --num 11000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20
-    $ ./bin/accumulo shell -u root -p secret -e "flush -t test1"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance17 -z localhost:2181 -u root -p secret -t test1 --start 0 --num 12000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20
-    $ ./bin/accumulo shell -u root -p secret -e "flush -t test1"
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.client.SequentialBatchWriter -i instance17 -z localhost:2181 -u root -p secret -t test1 --start 0 --num 13000 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20
-    $ ./bin/accumulo shell -u root -p secret -e "flush -t test1"
-
-View the tserver log in <accumulo_home>/logs for the compaction and find the 
name of the <rfile> that was compacted for your table. Print info about this 
file using the PrintInfo tool:
-
-    $ ./bin/accumulo rfile-info <rfile>
-
-Details about the rfile will be printed, and the compression type should match
-the type used in the compaction:
-
-    Meta block     : RFile.index
-          Raw size             : 512 bytes
-          Compressed size      : 278 bytes
-          Compression type     : gz
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.constraints
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.constraints 
b/docs/src/main/resources/examples/README.constraints
deleted file mode 100644
index b15b409..0000000
--- a/docs/src/main/resources/examples/README.constraints
+++ /dev/null
@@ -1,54 +0,0 @@
-Title: Apache Accumulo Constraints Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java classes, which can be found in 
org.apache.accumulo.examples.simple.constraints in the examples-simple module:
-
- * AlphaNumKeyConstraint.java - a constraint that requires alphanumeric keys
- * NumericValueConstraint.java - a constraint that requires numeric string 
values
-
-This is an example of how to create a table with constraints. Below, a table
-is created with two example constraints: one rejects keys that are not
-alphanumeric, and the other rejects values that are not numeric. Two inserts
-that violate these constraints are attempted and denied. The scan at the end
-shows the inserts were not allowed.
-
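-For reference, a constraint is a class implementing the Constraint interface.
-The simplified sketch below shows the general shape of a value-checking
-constraint; it is illustrative only (the class name is hypothetical), not the
-actual source of NumericValueConstraint:
-
-    import java.util.Collections;
-    import java.util.List;
-    import org.apache.accumulo.core.constraints.Constraint;
-    import org.apache.accumulo.core.data.ColumnUpdate;
-    import org.apache.accumulo.core.data.Mutation;
-
-    public class DigitsOnlyValueConstraint implements Constraint {
-      private static final short NON_NUMERIC = 1;
-
-      @Override
-      public String getViolationDescription(short code) {
-        return code == NON_NUMERIC ? "Value is not numeric" : null;
-      }
-
-      @Override
-      public List<Short> check(Environment env, Mutation mutation) {
-        for (ColumnUpdate update : mutation.getUpdates()) {
-          for (byte b : update.getValue()) {
-            if (b < '0' || b > '9')
-              return Collections.singletonList(NON_NUMERIC);
-          }
-        }
-        return null; // null (or an empty list) means no violations
-      }
-    }
-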
-    $ ./bin/accumulo shell -u username -p password
-
-    Shell - Apache Accumulo Interactive Shell
-    -
-    - version: 1.5.0
-    - instance name: instance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    -
-    - type 'help' for a list of available commands
-    -
-    username@instance> createtable testConstraints
-    username@instance testConstraints> constraint -a 
org.apache.accumulo.examples.simple.constraints.NumericValueConstraint
-    username@instance testConstraints> constraint -a 
org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint
-    username@instance testConstraints> insert r1 cf1 cq1 1111
-    username@instance testConstraints> insert r1 cf1 cq1 ABC
-      Constraint Failures:
-          
ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.NumericValueConstraint,
 violationCode:1, violationDescription:Value is not numeric, 
numberOfViolatingMutations:1)
-    username@instance testConstraints> insert r1! cf1 cq1 ABC
-      Constraint Failures:
-          
ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.NumericValueConstraint,
 violationCode:1, violationDescription:Value is not numeric, 
numberOfViolatingMutations:1)
-          
ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint,
 violationCode:1, violationDescription:Row was not alpha numeric, 
numberOfViolatingMutations:1)
-    username@instance testConstraints> scan
-    r1 cf1:cq1 []    1111
-    username@instance testConstraints>
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.dirlist
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.dirlist 
b/docs/src/main/resources/examples/README.dirlist
deleted file mode 100644
index 50623c6..0000000
--- a/docs/src/main/resources/examples/README.dirlist
+++ /dev/null
@@ -1,114 +0,0 @@
-Title: Apache Accumulo File System Archive
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example stores filesystem information in Accumulo. The example stores the
-information in the following three tables. More information about the table
-structures can be found at the end of README.dirlist.
-
- * directory table : This table stores information about the filesystem 
directory structure.
- * index table     : This table stores a file name index. It can be used to 
quickly find files with given name, suffix, or prefix.
- * data table      : This table stores the file data. Files with duplicate data are only stored once.
-
-This example shows how to use Accumulo to store a file system history. It has 
the following classes:
-
- * Ingest.java - Recursively lists the files and directories under a given path, ingests their names and file info into one Accumulo table, indexes the file names in a separate table, and ingests the file data into a third table.
- * QueryUtil.java - Provides utility methods for getting the info for a file, 
listing the contents of a directory, and performing single wild card searches 
on file or directory names.
- * Viewer.java - Provides a GUI for browsing the file system information 
stored in Accumulo.
- * FileCount.java - Computes recursive counts over file system information and 
stores them back into the same Accumulo table.
-
-To begin, ingest some data with Ingest.java.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Ingest -i 
instance -z zookeepers -u username -p password --vis exampleVis --chunkSize 
100000 /local/username/workspace
-
-This may take some time if there are large files in the 
/local/username/workspace directory. If you use 0 instead of 100000 on the 
command line, the ingest will run much faster, but it will not put any file 
data into Accumulo (the dataTable will be empty).
-Note that running this example will create tables dirTable, indexTable, and 
dataTable in Accumulo that you should delete when you have completed the 
example.
-If you modify a file or add new files in the directory ingested (e.g. 
/local/username/workspace), you can run Ingest again to add new information 
into the Accumulo tables.
-
-To browse the data ingested, use Viewer.java. Be sure to give the "username"
-user the authorizations to see the data. In this case, run
-
-    $ ./bin/accumulo shell -u root -e 'setauths -u username -s exampleVis'
-
-then run the Viewer:
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.Viewer -i 
instance -z zookeepers -u username -p password -t dirTable --dataTable 
dataTable --auths exampleVis --path /local/username/workspace
-
-To list the contents of specific directories, use QueryUtil.java.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i 
instance -z zookeepers -u username -p password -t dirTable --auths exampleVis 
--path /local/username
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i 
instance -z zookeepers -u username -p password -t dirTable --auths exampleVis 
--path /local/username/workspace
-
-To perform searches on file or directory names, also use QueryUtil.java.
-Search terms must contain no more than one wildcard and cannot contain "/".
-*Note* that these queries run on the _indexTable_ table instead of the
-dirTable table.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i 
instance -z zookeepers -u username -p password -t indexTable --auths exampleVis 
--path filename --search
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i 
instance -z zookeepers -u username -p password -t indexTable --auths exampleVis 
--path 'filename*' --search
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i 
instance -z zookeepers -u username -p password -t indexTable --auths exampleVis 
--path '*jar' --search
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.QueryUtil -i 
instance -z zookeepers -u username -p password -t indexTable --auths exampleVis 
--path 'filename*jar' --search
-
-To count the number of direct children (directories and files) and descendants
-(children and children's descendants, directories and files), run FileCount
-over the dirTable table.
-The results are written back to the same table. FileCount reads from and 
writes to Accumulo. This requires scan authorizations for the read and a 
visibility for the data written.
-In this example, the authorizations and visibility are set to the same value, 
exampleVis. See README.visibility for more information on visibility and 
authorizations.
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.dirlist.FileCount -i 
instance -z zookeepers -u username -p password -t dirTable --auths exampleVis
-
-## Directory Table
-
-Here is an illustration of what data looks like in the directory table:
-
-    row colf:colq [vis]        value
-    000 dir:exec [exampleVis]    true
-    000 dir:hidden [exampleVis]    false
-    000 dir:lastmod [exampleVis]    1291996886000
-    000 dir:length [exampleVis]    1666
-    001/local dir:exec [exampleVis]    true
-    001/local dir:hidden [exampleVis]    false
-    001/local dir:lastmod [exampleVis]    1304945270000
-    001/local dir:length [exampleVis]    272
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:exec [exampleVis]  
  false
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:hidden 
[exampleVis]    false
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:lastmod 
[exampleVis]    1308746481000
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:length 
[exampleVis]    9192
-    002/local/Accumulo.README \x7F\xFF\xFE\xCFH\xA1\x82\x97:md5 [exampleVis]   
 274af6419a3c4c4a259260ac7017cbf1
-
-The rows are of the form depth + path, where depth is the number of slashes 
("/") in the path padded to 3 digits. This is so that all the children of a 
directory appear as consecutive keys in Accumulo; without the depth, you would 
for example see all the subdirectories of /local before you saw /usr.
-For directories the column family is "dir". For files, the column family is
-Long.MAX_VALUE - lastModified, encoded as bytes rather than as a string, so
-that newer versions sort earlier.
-
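-A hypothetical helper showing the row encoding described above (illustrative
-only; the method name is made up for this sketch):
-
-    // rowForPath("/local/Accumulo.README") -> "002/local/Accumulo.README"
-    static String rowForPath(String path) {
-      int depth = 0;
-      for (char c : path.toCharArray())
-        if (c == '/')
-          depth++;
-      return String.format("%03d%s", depth, path); // depth padded to 3 digits
-    }
-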
-## Index Table
-
-Here is an illustration of what data looks like in the index table:
-
-    row colf:colq [vis]
-    fAccumulo.README i:002/local/Accumulo.README [exampleVis]
-    flocal i:001/local [exampleVis]
-    rEMDAER.olumuccA i:002/local/Accumulo.README [exampleVis]
-    rlacol i:001/local [exampleVis]
-
-The values of the index table are null. The rows are of the form "f" + 
filename or "r" + reverse file name. This is to enable searches with wildcards 
at the beginning, middle, or end.
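-
-A sketch of the two index rows for a file name, matching the layout above
-(illustrative only; the method names are made up for this sketch):
-
-    // forwardRow("Accumulo.README") -> "fAccumulo.README"
-    static String forwardRow(String fileName) {
-      return "f" + fileName;
-    }
-
-    // reverseRow("Accumulo.README") -> "rEMDAER.olumuccA"
-    static String reverseRow(String fileName) {
-      return "r" + new StringBuilder(fileName).reverse();
-    }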
-
-## Data Table
-
-Here is an illustration of what data looks like in the data table:
-
-    row colf:colq [vis]        value
-    274af6419a3c4c4a259260ac7017cbf1 
refs:e77276a2b56e5c15b540eaae32b12c69\x00filext [exampleVis]    README
-    274af6419a3c4c4a259260ac7017cbf1 
refs:e77276a2b56e5c15b540eaae32b12c69\x00name [exampleVis]    
/local/Accumulo.README
-    274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x00 
[exampleVis]    
*******************************************************************************\x0A1.
 Building\x0A\x0AIn the normal tarball release of accumulo, [truncated]
-    274af6419a3c4c4a259260ac7017cbf1 ~chunk:\x00\x0FB@\x00\x00\x00\x01 
[exampleVis]
-
-The rows are the md5 hash of the file. Some column family : column qualifier
-pairs are "refs" : hash of file name + null byte + property name, in which
-case the value is the property value. There can be multiple references to the
-same file, which are distinguished by the hash of the file name.
-Other column family : column qualifier pairs are "~chunk" : chunk size in 
bytes + chunk number in bytes, in which case the value is the bytes for that 
chunk of the file. There is an end of file data marker whose chunk number is 
the number of chunks for the file and whose value is empty.
-
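-A sketch of how the ~chunk column qualifier in the rows above could be built,
-with chunk size and chunk number as 4-byte big-endian integers (illustrative
-only; the method name is made up for this sketch):
-
-    import java.nio.ByteBuffer;
-    import org.apache.hadoop.io.Text;
-
-    // chunkCq(1000000, 0) yields \x00\x0FB@\x00\x00\x00\x00 as seen above.
-    static Text chunkCq(int chunkSize, int chunkNumber) {
-      byte[] b = ByteBuffer.allocate(8).putInt(chunkSize).putInt(chunkNumber).array();
-      return new Text(b);
-    }
-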
-There may exist multiple copies of the same file (with the same md5 hash) with 
different chunk sizes or different visibilities. There is an iterator that can 
be set on the data table that combines these copies into a single copy with a 
visibility taken from the visibilities of the file references, e.g. (vis from 
ref1)|(vis from ref2).

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.export
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.export 
b/docs/src/main/resources/examples/README.export
deleted file mode 100644
index b6ea8f8..0000000
--- a/docs/src/main/resources/examples/README.export
+++ /dev/null
@@ -1,91 +0,0 @@
-Title: Apache Accumulo Export/Import Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-Accumulo provides a mechanism to export and import tables. This README shows
-how to use this feature.
-
-The shell session below shows creating a table, inserting data, and exporting
-the table. A table must be offline to export it, and it should remain offline
-for the duration of the distcp. An easy way to take a table offline without
-interrupting access to it is to clone it and take the clone offline.
-
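-The same steps can also be performed with the Java client API. The sketch
-below is illustrative only and assumes an existing, already authenticated
-Connector named conn:
-
-    import java.util.Collections;
-
-    // Assumes conn is an authenticated Connector.
-    conn.tableOperations().clone("table1", "table1_exp", true,
-        Collections.<String,String>emptyMap(), Collections.<String>emptySet());
-    conn.tableOperations().offline("table1_exp");
-    conn.tableOperations().exportTable("table1_exp", "/tmp/table1_export");
-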
-    root@test15> createtable table1
-    root@test15 table1> insert a cf1 cq1 v1
-    root@test15 table1> insert h cf1 cq1 v2
-    root@test15 table1> insert z cf1 cq1 v3
-    root@test15 table1> insert z cf1 cq2 v4
-    root@test15 table1> addsplits -t table1 b r
-    root@test15 table1> scan
-    a cf1:cq1 []    v1
-    h cf1:cq1 []    v2
-    z cf1:cq1 []    v3
-    z cf1:cq2 []    v4
-    root@test15> config -t table1 -s table.split.threshold=100M
-    root@test15 table1> clonetable table1 table1_exp
-    root@test15 table1> offline table1_exp
-    root@test15 table1> exporttable -t table1_exp /tmp/table1_export
-    root@test15 table1> quit
-
-After executing the export command, a few files are created in the HDFS
-directory. One of these files is a list of files to distcp, as shown below.
-
-    $ hadoop fs -ls /tmp/table1_export
-    Found 2 items
-    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 
/tmp/table1_export/distcp.txt
-    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 
/tmp/table1_export/exportMetadata.zip
-    $ hadoop fs -cat /tmp/table1_export/distcp.txt
-    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
-    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
-
-Before the table can be imported, it must be copied using distcp. After the
-distcp completes, the cloned table may be deleted.
-
-    $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
-
-The Accumulo shell session below shows importing the table and inspecting it.
-The data, splits, config, and logical time information for the table were
-preserved.
-
-    root@test15> importtable table1_copy /tmp/table1_export_dest
-    root@test15> table table1_copy
-    root@test15 table1_copy> scan
-    a cf1:cq1 []    v1
-    h cf1:cq1 []    v2
-    z cf1:cq1 []    v3
-    z cf1:cq2 []    v4
-    root@test15 table1_copy> getsplits -t table1_copy
-    b
-    r
-    root@test15> config -t table1_copy -f split
-    
---------+--------------------------+-------------------------------------------
-    SCOPE    | NAME                     | VALUE
-    
---------+--------------------------+-------------------------------------------
-    default  | table.split.threshold .. | 1G
-    table    |    @override ........... | 100M
-    
---------+--------------------------+-------------------------------------------
-    root@test15> tables -l
-    accumulo.metadata    =>        !0
-    accumulo.root        =>        +r
-    table1_copy          =>         5
-    trace                =>         1
-    root@test15 table1_copy> scan -t accumulo.metadata -b 5 -c srv:time
-    5;b srv:time []    M1343224500467
-    5;r srv:time []    M1343224500467
-    5< srv:time []    M1343224500467
-
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.filedata
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.filedata 
b/docs/src/main/resources/examples/README.filedata
deleted file mode 100644
index a94d493..0000000
--- a/docs/src/main/resources/examples/README.filedata
+++ /dev/null
@@ -1,47 +0,0 @@
-Title: Apache Accumulo File System Archive Example (Data Only)
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This example archives file data into an Accumulo table. Files with duplicate 
data are only stored once.
-The example has the following classes:
-
- * CharacterHistogram - A MapReduce that computes a histogram of byte 
frequency for each file and stores the histogram alongside the file data. An 
example use of the ChunkInputFormat.
- * ChunkCombiner - An Iterator that dedupes file data and sets their 
visibilities to a combined visibility based on current references to the file 
data.
- * ChunkInputFormat - An Accumulo InputFormat that provides keys containing 
file info (List<Entry<Key,Value>>) and values with an InputStream over the file 
(ChunkInputStream).
- * ChunkInputStream - An input stream over file data stored in Accumulo.
- * FileDataIngest - Takes a list of files and archives them into Accumulo 
keyed on hashes of the files.
- * FileDataQuery - Retrieves file data based on the hash of the file. (Used by 
the dirlist.Viewer.)
- * KeyUtil - A utility for creating and parsing null-byte separated strings into/from Text objects (see the sketch after this list)
- * VisibilityCombiner - A utility for merging visibilities into the form 
(VIS1)|(VIS2)|...
-
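-A simplified sketch of the null-byte separator convention that KeyUtil
-implements (illustrative only, not the actual KeyUtil source):
-
-    import java.nio.charset.StandardCharsets;
-    import org.apache.hadoop.io.Text;
-
-    // Joins parts with a single null byte, e.g. hash \x00 propertyName.
-    static Text buildNullSepText(String... parts) {
-      Text t = new Text();
-      for (int i = 0; i < parts.length; i++) {
-        if (i > 0)
-          t.append(new byte[] {0}, 0, 1); // null-byte separator
-        byte[] b = parts[i].getBytes(StandardCharsets.UTF_8);
-        t.append(b, 0, b.length);
-      }
-      return t;
-    }
-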
-This example is coupled with the dirlist example. See README.dirlist for 
instructions.
-
-If you haven't already run the README.dirlist example, ingest a file with 
FileDataIngest.
-
-    $ ./bin/accumulo 
org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z 
zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 
/path/to/accumulo/README.md
-
-Open the Accumulo shell and look at the data. The row is the MD5 hash of the
-file, which you can verify by running a command such as 'md5sum' on the file.
-
-    > scan -t dataTable
-
-Run the CharacterHistogram MapReduce to add some information about the file.
-
-    $ ./contrib/tool.sh lib/accumulo-examples-simple.jar 
org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z 
zookeepers -u username -p password -t dataTable --auths exampleVis --vis 
exampleVis
-
-Scan again to see the histogram stored in the 'info' column family.
-
-    > scan -t dataTable

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.filter
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.filter 
b/docs/src/main/resources/examples/README.filter
deleted file mode 100644
index e00ba4a..0000000
--- a/docs/src/main/resources/examples/README.filter
+++ /dev/null
@@ -1,110 +0,0 @@
-Title: Apache Accumulo Filter Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This is a simple filter example. It uses the AgeOffFilter that is provided as
-part of the core package org.apache.accumulo.core.iterators.user. Filters are
-iterators that select desired key/value pairs (or weed out undesired ones).
-Filters extend the org.apache.accumulo.core.iterators.Filter class
-and must implement a method accept(Key k, Value v). This method returns true
-if the key/value pair is to be delivered and false if it is to be ignored.
-Filter takes a "negate" parameter which defaults to false. If set to true, the
-return value of the accept method is negated, so that key/value pairs accepted
-by the method are omitted by the Filter.
-
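-A minimal custom Filter might look like the sketch below (illustrative only;
-the class name is hypothetical):
-
-    import org.apache.accumulo.core.data.Key;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.iterators.Filter;
-
-    // Delivers only key/value pairs whose value is non-empty.
-    public class NonEmptyValueFilter extends Filter {
-      @Override
-      public boolean accept(Key k, Value v) {
-        return v.getSize() > 0; // true: deliver the pair; false: skip it
-      }
-    }
-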
-    username@instance> createtable filtertest
-    username@instance filtertest> setiter -t filtertest -scan -p 10 -n 
myfilter -ageoff
-    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds 
old
-    ----------> set AgeOffFilter parameter negate, default false keeps k/v 
that pass accept method, true rejects k/v that pass accept method:
-    ----------> set AgeOffFilter parameter ttl, time to live (milliseconds): 
30000
-    ----------> set AgeOffFilter parameter currentTime, if set, use the given 
value as the absolute time in milliseconds as the current time of day:
-    username@instance filtertest> scan
-    username@instance filtertest> insert foo a b c
-    username@instance filtertest> scan
-    foo a:b []    c
-    username@instance filtertest>
-
-... wait 30 seconds ...
-
-    username@instance filtertest> scan
-    username@instance filtertest>
-
-Note the absence of the entry inserted more than 30 seconds ago. Since the
-scope was set to "scan", this means the entry is still in Accumulo, but is
-being filtered out at query time. To delete entries from Accumulo based on
-the ages of their timestamps, AgeOffFilters should be set up for the "minc"
-and "majc" scopes, as well.
-
-To force an ageoff of the persisted data, after setting up the ageoff iterator
-on the "minc" and "majc" scopes you can flush and compact your table. This will
-happen automatically as a background operation on any table that is being
-actively written to, but can also be requested in the shell.
-
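-The same iterator can also be attached programmatically with an
-IteratorSetting. The sketch below is illustrative only and assumes an
-existing, already authenticated Connector named conn:
-
-    import java.util.EnumSet;
-    import org.apache.accumulo.core.client.IteratorSetting;
-    import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-    import org.apache.accumulo.core.iterators.user.AgeOffFilter;
-
-    IteratorSetting setting = new IteratorSetting(10, "myfilter", AgeOffFilter.class);
-    AgeOffFilter.setTTL(setting, 30000L); // age off entries older than 30 seconds
-    conn.tableOperations().attachIterator("filtertest", setting,
-        EnumSet.of(IteratorScope.minc, IteratorScope.majc));
-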
-The first setiter command used the special -ageoff flag to specify the
-AgeOffFilter, but any Filter can be configured by using the -class flag. The
-following commands show how to enable the AgeOffFilter for the minc and majc
-scopes using the -class flag, then flush and compact the table.
-
-    username@instance filtertest> setiter -t filtertest -minc -majc -p 10 -n 
myfilter -class org.apache.accumulo.core.iterators.user.AgeOffFilter
-    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds 
old
-    ----------> set AgeOffFilter parameter negate, default false keeps k/v 
that pass accept method, true rejects k/v that pass accept method:
-    ----------> set AgeOffFilter parameter ttl, time to live (milliseconds): 
30000
-    ----------> set AgeOffFilter parameter currentTime, if set, use the given 
value as the absolute time in milliseconds as the current time of day:
-    username@instance filtertest> flush
-    06 10:42:24,806 [shell.Shell] INFO : Flush of table filtertest initiated...
-    username@instance filtertest> compact
-    06 10:42:36,781 [shell.Shell] INFO : Compaction of table filtertest 
started for given range
-    username@instance filtertest> flush -t filtertest -w
-    06 10:42:52,881 [shell.Shell] INFO : Flush of table filtertest completed.
-    username@instance filtertest> compact -t filtertest -w
-    06 10:43:00,632 [shell.Shell] INFO : Compacting table ...
-    06 10:43:01,307 [shell.Shell] INFO : Compaction of table filtertest 
completed for given range
-    username@instance filtertest>
-
-By default, flush and compact execute in the background, but with the -w flag
-they will wait to return until the operation has completed. Both are
-demonstrated above, though only one call to each would be necessary. A
-specific table can be specified with -t.
-
-After the compaction runs, the newly created files will not contain any data
-that should have been aged off, and the Accumulo garbage collector will remove
-the old files.
-
-To see the iterator settings for a table, use config.
-
-    username@instance filtertest> config -t filtertest -f iterator
-    
---------+---------------------------------------------+---------------------------------------------------------------------------
-    SCOPE    | NAME                                        | VALUE
-    
---------+---------------------------------------------+---------------------------------------------------------------------------
-    table    | table.iterator.majc.myfilter .............. | 
10,org.apache.accumulo.core.iterators.user.AgeOffFilter
-    table    | table.iterator.majc.myfilter.opt.ttl ...... | 30000
-    table    | table.iterator.majc.vers .................. | 
20,org.apache.accumulo.core.iterators.user.VersioningIterator
-    table    | table.iterator.majc.vers.opt.maxVersions .. | 1
-    table    | table.iterator.minc.myfilter .............. | 
10,org.apache.accumulo.core.iterators.user.AgeOffFilter
-    table    | table.iterator.minc.myfilter.opt.ttl ...... | 30000
-    table    | table.iterator.minc.vers .................. | 
20,org.apache.accumulo.core.iterators.user.VersioningIterator
-    table    | table.iterator.minc.vers.opt.maxVersions .. | 1
-    table    | table.iterator.scan.myfilter .............. | 
10,org.apache.accumulo.core.iterators.user.AgeOffFilter
-    table    | table.iterator.scan.myfilter.opt.ttl ...... | 30000
-    table    | table.iterator.scan.vers .................. | 
20,org.apache.accumulo.core.iterators.user.VersioningIterator
-    table    | table.iterator.scan.vers.opt.maxVersions .. | 1
-    
---------+---------------------------------------------+---------------------------------------------------------------------------
-    username@instance filtertest>
-
-When setting new iterators, make sure to order their priority numbers
-(specified with -p) in the order you would like the iterators to be applied.
-Also, each iterator must have a unique name and priority within each scope.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/52d526b9/docs/src/main/resources/examples/README.helloworld
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.helloworld 
b/docs/src/main/resources/examples/README.helloworld
deleted file mode 100644
index 618e301..0000000
--- a/docs/src/main/resources/examples/README.helloworld
+++ /dev/null
@@ -1,47 +0,0 @@
-Title: Apache Accumulo Hello World Example
-Notice:    Licensed to the Apache Software Foundation (ASF) under one
-           or more contributor license agreements.  See the NOTICE file
-           distributed with this work for additional information
-           regarding copyright ownership.  The ASF licenses this file
-           to you under the Apache License, Version 2.0 (the
-           "License"); you may not use this file except in compliance
-           with the License.  You may obtain a copy of the License at
-           .
-             http://www.apache.org/licenses/LICENSE-2.0
-           .
-           Unless required by applicable law or agreed to in writing,
-           software distributed under the License is distributed on an
-           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-           KIND, either express or implied.  See the License for the
-           specific language governing permissions and limitations
-           under the License.
-
-This tutorial uses the following Java classes, which can be found in 
org.apache.accumulo.examples.simple.helloworld in the examples-simple module:
-
- * InsertWithBatchWriter.java - Inserts 10K rows (50K entries) into Accumulo, with each row having 5 entries
- * ReadData.java - Reads all data between two rows
-
-Log into the accumulo shell:
-
-    $ ./bin/accumulo shell -u username -p password
-
-Create a table called 'hellotable':
-
-    username@instance> createtable hellotable
-
-Launch a Java program that inserts data with a BatchWriter:
-
-    $ ./bin/accumulo 
org.apache.accumulo.examples.simple.helloworld.InsertWithBatchWriter -i 
instance -z zookeepers -u username -p password -t hellotable
-
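-The core of what InsertWithBatchWriter does can be sketched with the client
-API (illustrative only; assumes an existing, already authenticated Connector
-named conn):
-
-    import org.apache.accumulo.core.client.BatchWriter;
-    import org.apache.accumulo.core.client.BatchWriterConfig;
-    import org.apache.accumulo.core.data.Mutation;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.hadoop.io.Text;
-
-    // Assumes conn is an authenticated Connector.
-    BatchWriter bw = conn.createBatchWriter("hellotable", new BatchWriterConfig());
-    Mutation m = new Mutation(new Text("row_0"));
-    m.put(new Text("colf"), new Text("colq"), new Value("value".getBytes()));
-    bw.addMutation(m);
-    bw.close();
-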
-On the Accumulo status page at the URL below (where 'master' is replaced with
-the name or IP of your Accumulo master), you should see 50K entries.
-
-    http://master:9995/
-
-To view the entries, use the shell to scan the table:
-
-    username@instance> table hellotable
-    username@instance hellotable> scan
-
-You can also use a Java class to scan the table:
-
-    $ ./bin/accumulo org.apache.accumulo.examples.simple.helloworld.ReadData 
-i instance -z zookeepers -u username -p password -t hellotable --startKey 
row_0 --endKey row_1001
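-
-The scan that ReadData performs can be sketched with the client API
-(illustrative only; assumes an existing, already authenticated Connector
-named conn):
-
-    import java.util.Map;
-    import org.apache.accumulo.core.client.Scanner;
-    import org.apache.accumulo.core.data.Key;
-    import org.apache.accumulo.core.data.Range;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.security.Authorizations;
-
-    Scanner scanner = conn.createScanner("hellotable", Authorizations.EMPTY);
-    scanner.setRange(new Range("row_0", "row_1001"));
-    for (Map.Entry<Key,Value> entry : scanner)
-      System.out.println(entry.getKey() + " -> " + entry.getValue());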
