http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/Accumulo_Shell.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/Accumulo_Shell.md 
b/user_manual_1.3-incubating/Accumulo_Shell.md
deleted file mode 100644
index e8612ce..0000000
--- a/user_manual_1.3-incubating/Accumulo_Shell.md
+++ /dev/null
@@ -1,136 +0,0 @@
----
-title: "User Manual: Accumulo Shell"
----
-
-** Next:** [Writing Accumulo Clients][2] ** Up:** [Apache Accumulo User Manual 
Version 1.3][4] ** Previous:** [Accumulo Design][6]   ** [Contents][8]**   
-  
-<a id="CHILD_LINKS"></a>**Subsections**
-
-* [Basic Administration][9]
-* [Table Maintenance][10]
-* [User Administration][11]
-
-* * *
-
-## <a id="Accumulo_Shell"></a> Accumulo Shell
-
-Accumulo provides a simple shell that can be used to examine the contents and 
configuration settings of tables, apply individual mutations, and change 
configuration settings. 
-
-The shell can be started by the following command: 
-    
-    
-    $ACCUMULO_HOME/bin/accumulo shell -u [username]
-    
-
-The shell will prompt for the corresponding password to the username specified 
and then display the following prompt: 
-    
-    
-    Shell - Apache Accumulo Interactive Shell
-    -
-    - version 1.3
-    - instance name: myinstance
-    - instance id: 00000000-0000-0000-0000-000000000000
-    -
-    - type 'help' for a list of available commands
-    -
-    
-
-## <a id="Basic_Administration"></a> Basic Administration
-
-The Accumulo shell can be used to create and delete tables, as well as to 
configure table and instance specific options. 
-    
-    
-    root@myinstance> tables
-    !METADATA
-    
-    root@myinstance> createtable mytable
-    
-    root@myinstance mytable>
-    
-    root@myinstance mytable> tables
-    !METADATA
-    mytable
-    
-    root@myinstance mytable> createtable testtable
-    
-    root@myinstance testtable>
-    
-    root@myinstance testtable> deletetable testtable
-    
-    root@myinstance>
-    
-
-The Shell can also be used to insert updates and scan tables. This is useful 
for inspecting tables. 
-    
-    
-    root@myinstance mytable> scan
-    
-    root@myinstance mytable> insert row1 colf colq value1
-    insert successful
-    
-    root@myinstance mytable> scan
-    row1 colf:colq [] value1
-    
-
-## <a id="Table_Maintenance"></a> Table Maintenance
-
-The **compact** command instructs Accumulo to schedule a compaction of the 
table during which files are consolidated and deleted entries are removed. 
-    
-    
-    root@myinstance mytable> compact -t mytable
-    07 16:13:53,201 [shell.Shell] INFO : Compaction of table mytable
-    scheduled for 20100707161353EDT
-    
-
-The **flush** command instructs Accumulo to write all entries currently in 
memory for a given table to disk. 
-    
-    
-    root@myinstance mytable> flush -t mytable
-    07 16:14:19,351 [shell.Shell] INFO : Flush of table mytable
-    initiated...
-    
-
-## <a id="User_Administration"></a> User Administration
-
-The Shell can be used to add and remove users, and to grant and revoke their 
privileges. 
-    
-    
-    root@myinstance mytable> createuser bob
-    Enter new password for 'bob': *********
-    Please confirm new password for 'bob': *********
-    
-    root@myinstance mytable> authenticate bob
-    Enter current password for 'bob': *********
-    Valid
-    
-    root@myinstance mytable> grant System.CREATE_TABLE -s -u bob
-    
-    root@myinstance mytable> user bob
-    Enter current password for 'bob': *********
-    
-    bob@myinstance mytable> userpermissions
-    System permissions: System.CREATE_TABLE
-    Table permissions (!METADATA): Table.READ
-    Table permissions (mytable): NONE
-    
-    bob@myinstance mytable> createtable bobstable
-    bob@myinstance bobstable>
-    
-    bob@myinstance bobstable> user root
-    Enter current password for 'root': *********
-    
-    root@myinstance bobstable> revoke System.CREATE_TABLE -s -u bob
-    
-
-* * *
-
-** Next:** [Writing Accumulo Clients][2] ** Up:** [Apache Accumulo User Manual 
Version 1.3][4] ** Previous:** [Accumulo Design][6]   ** [Contents][8]**
-
-[2]: Writing_Accumulo_Clients.html
-[4]: accumulo_user_manual.html
-[6]: Accumulo_Design.html
-[8]: Contents.html
-[9]: Accumulo_Shell.html#Basic_Administration
-[10]: Accumulo_Shell.html#Table_Maintenance
-[11]: Accumulo_Shell.html#User_Administration
-

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/Administration.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/Administration.md 
b/user_manual_1.3-incubating/Administration.md
deleted file mode 100644
index f231617..0000000
--- a/user_manual_1.3-incubating/Administration.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-title: "User Manual: Administration"
----
-
-** Next:** [Shell Commands][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Security][6]   ** [Contents][8]**   
-  
-<a id="CHILD_LINKS"></a>**Subsections**
-
-* [Hardware][9]
-* [Network][10]
-* [Installation][11]
-* [Dependencies][12]
-* [Configuration][13]
-* [Initialization][14]
-* [Running][15]
-* [Monitoring][16]
-* [Logging][17]
-* [Recovery][18]
-
-* * *
-
-## <a id="Administration"></a> Administration
-
-## <a id="Hardware"></a> Hardware
-
-Because we are essentially running two or three systems simultaneously layered 
across the cluster (HDFS, Accumulo, and MapReduce), it is typical for hardware to 
consist of 4 to 8 cores and 8 to 32 GB RAM, so that each running process 
can have at least one core and 2 to 4 GB of RAM. 
-
-One core running HDFS can typically keep 2 to 4 disks busy, so each machine 
may have as little as 2 x 300GB disks and as much as 4 x 1TB or 2TB 
disks. 
-
-It is possible to make do with less than this, such as with 1U servers with 2 
cores and 4GB each, but in this case it is recommended to run only up to two 
processes per machine, i.e. DataNode and TabletServer, or DataNode and 
MapReduce worker, but not all three. The constraint here is having enough 
available heap space for all the processes on a machine. 
-
-## <a id="Network"></a> Network
-
-Accumulo communicates via remote procedure calls over TCP/IP for both passing 
data and control messages. In addition, Accumulo uses HDFS clients to 
communicate with HDFS. To achieve good ingest and query performance, sufficient 
network bandwidth must be available between any two machines. 
-
-## <a id="Installation"></a> Installation
-
-Choose a directory for the Accumulo installation. This directory will be 
referenced by the environment variable $ACCUMULO_HOME. Run the following: 
-    
-    
-    $ tar xzf $ACCUMULO_HOME/accumulo.tar.gz
-    
-
-Repeat this step at each machine within the cluster. Usually all machines have 
the same $ACCUMULO_HOME. 
-
-## <a id="Dependencies"></a> Dependencies
-
-Accumulo requires HDFS and ZooKeeper to be configured and running before 
starting. Password-less SSH should be configured between at least the Accumulo 
master and TabletServer machines. It is also a good idea to run Network Time 
Protocol (NTP) within the cluster to ensure nodes' clocks don't get too out of 
sync, which can cause problems with automatically timestamped data. Accumulo 
will remove from the set of TabletServers those machines whose times differ too 
much from the master's. 
-
-## <a id="Configuration"></a> Configuration
-
-Accumulo is configured by editing several Shell and XML files found in 
$ACCUMULO_HOME/conf. The structure closely resembles Hadoop's configuration 
files. 
-
-### <a id="Edit_conf/accumulo-env.sh"></a> Edit conf/accumulo-env.sh
-
-Accumulo needs to know where to find the software it depends on. Edit 
accumulo-env.sh and specify the following: 
-
-1. Enter the location of the installation directory of Accumulo for 
$ACCUMULO_HOME
-2. Enter your system's Java home for $JAVA_HOME
-3. Enter the location of Hadoop for $HADOOP_HOME
-4. Choose a location for Accumulo logs and enter it for $ACCUMULO_LOG_DIR
-5. Enter the location of ZooKeeper for $ZOOKEEPER_HOME
-
-By default Accumulo TabletServers are set to use 1GB of memory. You may change 
this by altering the value of $ACCUMULO_TSERVER_OPTS. Note the syntax is that 
of the Java JVM command line options. This value should be less than the 
physical memory of the machines running TabletServers. 
-
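-For example, to give each TabletServer a 1GB heap, one might add a line like 
-the following (a sketch; the exact contents of accumulo-env.sh in your release 
-may differ): 
-    
-    
-    export ACCUMULO_TSERVER_OPTS="-Xmx1g -Xms1g"
-    
-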
-There are similar options for the master's memory usage and the garbage 
collector process. Reduce these if they exceed the physical RAM of your 
hardware and increase them, within the bounds of the physical RAM, if a process 
fails because of insufficient memory. 
-
-Note that you will be specifying the Java heap space in accumulo-env.sh. You 
should make sure that the total heap space used for the Accumulo tserver and 
the Hadoop DataNode and TaskTracker is less than the available memory on each 
slave node in the cluster. On large clusters, it is recommended that the 
Accumulo master, Hadoop NameNode, secondary NameNode, and Hadoop JobTracker all 
be run on separate machines to allow them to use more heap space. If you are 
running these on the same machine on a small cluster, likewise make sure their 
heap space settings fit within the available memory. 
-
-### <a id="Cluster_Specification"></a> Cluster Specification
-
-On the machine that will serve as the Accumulo master: 
-
-1. Write the IP address or domain name of the Accumulo Master to the   
-$ACCUMULO_HOME/conf/masters file. 
-2. Write the IP addresses or domain names of the machines that will be 
TabletServers in   
-$ACCUMULO_HOME/conf/slaves, one per line. 
-
-Note that if using domain names rather than IP addresses, DNS must be 
configured properly for all machines participating in the cluster. DNS can be a 
confusing source of errors. 
-
-### <a id="Accumulo_Settings"></a> Accumulo Settings
-
-Specify appropriate values for the following settings in   
-$ACCUMULO_HOME/conf/accumulo-site.xml: 
-    
-    
-    <property>
-        <name>zookeeper</name>
-        <value>zooserver-one:2181,zooserver-two:2181</value>
-        <description>list of zookeeper servers</description>
-    </property>
-    <property>
-        <name>walog</name>
-        <value>/var/accumulo/walogs</value>
-        <description>local directory for write ahead logs</description>
-    </property>
-    
-
-This enables Accumulo to find ZooKeeper. Accumulo uses ZooKeeper to coordinate 
settings between processes and to help finalize TabletServer failure. 
-
-Accumulo records all changes to tables to a write-ahead log before committing 
them to the table. The `walog` setting specifies the local directory on each 
machine to which write-ahead logs are written. This directory should exist on 
all machines acting as TabletServers. 
-
-Some settings can be modified via the Accumulo shell and take effect 
immediately. However, any settings that should be persisted across system 
restarts must be recorded in the accumulo-site.xml file. 
-
-### <a id="Deploy_Configuration"></a> Deploy Configuration
-
-Copy the masters, slaves, accumulo-env.sh, and if necessary, accumulo-site.xml 
from the   
-$ACCUMULO_HOME/conf/ directory on the master to all the machines specified in 
the slaves file. 
-
-## <a id="Initialization"></a> Initialization
-
-Accumulo must be initialized to create the structures it uses internally to 
locate data across the cluster. HDFS is required to be configured and running 
before Accumulo can be initialized. 
-
-Once HDFS is started, initialization can be performed by executing   
-$ACCUMULO_HOME/bin/accumulo init. This script will prompt for a name for this 
instance of Accumulo. The instance name is used to identify a set of tables and 
instance-specific settings. The script will then write some information into 
HDFS so Accumulo can start properly. 
-
-The initialization script will prompt you to set a root password. Once 
Accumulo is initialized it can be started. 
-
-## <a id="Running"></a> Running
-
-### <a id="Starting_Accumulo"></a> Starting Accumulo
-
-Make sure Hadoop is configured on all of the machines in the cluster, 
including access to a shared HDFS instance. Make sure HDFS is running, and that 
ZooKeeper is configured and running on at least one machine 
in the cluster. Start Accumulo using the bin/start-all.sh script. 
-
-To verify that Accumulo is running, check the Status page as described under 
*Monitoring*. In addition, the Shell can provide some information about the 
status of tables via reading the !METADATA table. 
-
-### <a id="Stopping_Accumulo"></a> Stopping Accumulo
-
-To shut down cleanly, run bin/stop-all.sh and the master will orchestrate the 
shutdown of all the tablet servers. Shutdown waits for all minor compactions to 
finish, so it may take some time depending on the configuration. 
-
-## <a id="Monitoring"></a> Monitoring
-
-The Accumulo Master provides an interface for monitoring the status and health 
of Accumulo components. This interface can be accessed by pointing a web 
browser to   
-http://accumulomaster:50095/status
-
-## <a id="Logging"></a> Logging
-
-Accumulo processes each write to a set of log files. By default these are 
found under   
-$ACCUMULO_LOG_DIR/, the location configured in accumulo-env.sh. 
-
-## <a id="Recovery"></a> Recovery
-
-In the event of a TabletServer failure or an error while shutting Accumulo down, 
some mutations may not have been properly minor compacted to HDFS. In this case, 
Accumulo will automatically reapply such mutations from the write-ahead log, 
either when the tablets from the failed server are reassigned by the Master (in 
the case of a single TabletServer failure) or the next time Accumulo starts (in 
the event of a failure during shutdown). 
-
-Recovery is performed by asking the loggers to copy their write-ahead logs 
into HDFS. As the logs are copied, they are also sorted, so that tablets can 
easily find their missing updates. The copy/sort status of each file is 
displayed on the Accumulo monitor status page. Once the recovery is complete, any 
tablets involved should return to an "online" state. Until then those tablets 
will be unavailable to clients. 
-
-The Accumulo client library is configured to retry failed mutations and in 
many cases clients will be able to continue processing after the recovery 
process without throwing an exception. 
-
-Note that because Accumulo uses timestamps to order mutations, any mutations 
that are applied as part of the recovery process should appear to have been 
applied when they originally arrived at the TabletServer that failed. This 
makes the ordering of mutations consistent in the presence of failure. 
-
-* * *
-
-** Next:** [Shell Commands][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Security][6]   ** [Contents][8]**
-
-[2]: Shell_Commands.html
-[4]: accumulo_user_manual.html
-[6]: Security.html
-[8]: Contents.html
-[9]: Administration.html#Hardware
-[10]: Administration.html#Network
-[11]: Administration.html#Installation
-[12]: Administration.html#Dependencies
-[13]: Administration.html#Configuration
-[14]: Administration.html#Initialization
-[15]: Administration.html#Running
-[16]: Administration.html#Monitoring
-[17]: Administration.html#Logging
-[18]: Administration.html#Recovery
-

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/Analytics.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/Analytics.md 
b/user_manual_1.3-incubating/Analytics.md
deleted file mode 100644
index ba833be..0000000
--- a/user_manual_1.3-incubating/Analytics.md
+++ /dev/null
@@ -1,150 +0,0 @@
----
-title: "User Manual: Analytics"
----
-
-** Next:** [Security][2] ** Up:** [Apache Accumulo User Manual Version 1.3][4] 
** Previous:** [High-Speed Ingest][6]   ** [Contents][8]**   
-  
-<a id="CHILD_LINKS"></a>**Subsections**
-
-* [MapReduce][9]
-* [Aggregating Iterators][10]
-* [Statistical Modeling][11]
-
-* * *
-
-## <a id="Analytics"></a> Analytics
-
-Accumulo supports more advanced data processing than simply keeping keys 
sorted and performing efficient lookups. Analytics can be developed by using 
MapReduce and Iterators in conjunction with Accumulo tables. 
-
-## <a id="MapReduce"></a> MapReduce
-
-Accumulo tables can be used as the source and destination of MapReduce jobs. 
To use an Accumulo table with a MapReduce job (specifically with the new Hadoop 
API as of version 0.20), configure the job parameters to use the 
AccumuloInputFormat and AccumuloOutputFormat. Accumulo specific parameters can 
be set via these two format classes to do the following: 
-
-* Authenticate and provide user credentials for the input 
-* Restrict the scan to a range of rows 
-* Restrict the input to a subset of available columns 
-
-### <a id="Mapper_and_Reducer_classes"></a> Mapper and Reducer classes
-
-To read from an Accumulo table create a Mapper with the following class 
parameterization and be sure to configure the AccumuloInputFormat. 
-    
-    
-    class MyMapper extends Mapper<Key,Value,WritableComparable,Writable> {
-        public void map(Key k, Value v, Context c) {
-            // transform key and value data here
-        }
-    }
-    
-
-To write to an Accumulo table, create a Reducer with the following class 
parameterization and be sure to configure the AccumuloOutputFormat. The key 
emitted from the Reducer identifies the table to which the mutation is sent. 
This allows a single Reducer to write to more than one table if desired. A 
default table can be configured using the AccumuloOutputFormat, in which case 
the output table name does not have to be passed to the Context object within 
the Reducer. 
-    
-    
-    class MyReducer extends Reducer<WritableComparable, Writable, Text, 
Mutation> {
-    
-        public void reduce(WritableComparable key, Iterable<Writable> values, 
Context c) {
-            
-            Mutation m;
-            
-            // create the mutation based on input key and value
-            
-            c.write(new Text("output-table"), m);
-        }
-    }
-    
-
-The Text object passed as the output should contain the name of the table to 
which this mutation should be applied. The Text can be null in which case the 
mutation will be applied to the default table name specified in the 
AccumuloOutputFormat options. 
-
-### <a id="AccumuloInputFormat_options"></a> AccumuloInputFormat options
-    
-    
-    Job job = new Job(getConf());
-    AccumuloInputFormat.setInputInfo(job,
-            "user",
-            "passwd".getBytes(),
-            "table",
-            new Authorizations());
-    
-    AccumuloInputFormat.setZooKeeperInstance(job, "myinstance",
-            "zooserver-one,zooserver-two");
-    
-
-**Optional settings:**
-
-To restrict Accumulo to a set of row ranges: 
-    
-    
-    ArrayList<Range> ranges = new ArrayList<Range>();
-    // populate array list of row ranges ...
-    AccumuloInputFormat.setRanges(job, ranges);
-    
-
-To restrict Accumulo to a list of columns: 
-    
-    
-    ArrayList<Pair<Text,Text>> columns = new ArrayList<Pair<Text,Text>>();
-    // populate list of columns
-    AccumuloInputFormat.fetchColumns(job, columns);
-    
-
-To use a regular expression to match row IDs: 
-    
-    
-    AccumuloInputFormat.setRegex(job, RegexType.ROW, "^.*");
-    
-
-### <a id="AccumuloOutputFormat_options"></a> AccumuloOutputFormat options
-    
-    
-    boolean createTables = true;
-    String defaultTable = "mytable";
-    
-    AccumuloOutputFormat.setOutputInfo(job,
-            "user",
-            "passwd".getBytes(),
-            createTables,
-            defaultTable);
-    
-    AccumuloOutputFormat.setZooKeeperInstance(job, "myinstance",
-            "zooserver-one,zooserver-two");
-    
-
-**Optional Settings:**
-    
-    
-    AccumuloOutputFormat.setMaxLatency(job, 300); // milliseconds
-    AccumuloOutputFormat.setMaxMutationBufferSize(job, 5000000); // bytes
-    
-
-An example of using MapReduce with Accumulo can be found at   
-accumulo/docs/examples/README.mapred 
-
-## <a id="Aggregating_Iterators"></a> Aggregating Iterators
-
-Many applications can benefit from the ability to aggregate values across 
common keys. This can be done via aggregating iterators and is similar to the 
Reduce step in MapReduce. This provides the ability to define online, 
incrementally updated analytics without the overhead or latency associated with 
batch-oriented MapReduce jobs. 
-
-All that is needed to aggregate values of a table is to identify the fields 
over which values will be grouped, insert mutations with those fields as the 
key, and configure the table with an aggregating iterator that supports the 
summarization operation desired. 
-
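-As a concrete illustration, the following sketch writes repeated counts under 
-the same key (the table, column names, and aggregator configuration here are 
-assumptions, e.g. a table created with the shell's createtable -a option and a 
-summation aggregator); the aggregating iterator combines the values so a scan 
-returns a single sum: 
-    
-    
-    import org.apache.accumulo.core.client.BatchWriter;
-    import org.apache.accumulo.core.client.Connector;
-    import org.apache.accumulo.core.client.ZooKeeperInstance;
-    import org.apache.accumulo.core.data.Mutation;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.hadoop.io.Text;
-    
-    public class AggregatedCountWriter {
-        public static void main(String[] args) throws Exception {
-            Connector conn = new ZooKeeperInstance("myinstance",
-                    "zooserver-one,zooserver-two")
-                    .getConnector("user", "passwd".getBytes());
-            BatchWriter writer = conn.createBatchWriter("counts",
-                    1000000L, 60000L, 2);
-    
-            // Ten separate mutations for the same row/column; the summation
-            // aggregator folds the values so readers see 10, not ten entries.
-            for (int i = 0; i < 10; i++) {
-                Mutation m = new Mutation(new Text("event-xyz"));
-                m.put(new Text("day"), new Text("20100707"),
-                        new Value("1".getBytes()));
-                writer.addMutation(m);
-            }
-            writer.close();
-        }
-    }
-    
-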
-The only restriction on an aggregating iterator is that the aggregator 
developer should not assume that all values for a given key have been seen, 
since new mutations can be inserted at any time. This precludes using the total 
number of values in the aggregation, such as when calculating an average. 
-
-### <a id="Feature_Vectors"></a> Feature Vectors
-
-An interesting use of aggregating iterators within an Accumulo table is to 
store feature vectors for use in machine learning algorithms. For example, many 
algorithms such as k-means clustering, support vector machines, anomaly 
detection, etc. use the concept of a feature vector and the calculation of 
distance metrics to learn a particular model. The columns in an Accumulo table 
can be used to efficiently store sparse features and their weights to be 
incrementally updated via the use of an aggregating iterator. 
-
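-A sketch of this pattern (all names hypothetical): each row holds one entity's 
-sparse vector, with one column qualifier per feature and the weight as the 
-value, so that a summation aggregator folds repeated writes into updated 
-weights: 
-    
-    
-    import org.apache.accumulo.core.client.BatchWriter;
-    import org.apache.accumulo.core.data.Mutation;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.hadoop.io.Text;
-    
-    public class FeatureVectorWriter {
-        // Adds observed feature counts for one entity; features that were
-        // not observed cost nothing to store, keeping the vector sparse.
-        static void addObservation(BatchWriter writer, String entity)
-                throws Exception {
-            Mutation m = new Mutation(new Text(entity));
-            m.put(new Text("features"), new Text("word:accumulo"),
-                    new Value("3".getBytes()));
-            m.put(new Text("features"), new Text("word:tablet"),
-                    new Value("1".getBytes()));
-            writer.addMutation(m);
-        }
-    }
-    
-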
-## <a id="Statistical_Modeling"></a> Statistical Modeling
-
-Statistical models that need to be updated by many machines in parallel could 
be similarly stored within an Accumulo table. For example, a MapReduce job that 
is iteratively updating a global statistical model could have each map or 
reduce worker reference the parts of the model to be read and updated through 
an embedded Accumulo client. 
-
-Using Accumulo this way enables efficient and fast lookups and updates of 
small pieces of information in a random access pattern, which is complementary 
to MapReduce's sequential access model. 
-
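-A sketch of what one such worker might do (the table layout and names are 
-assumptions, not part of the manual): read a single model parameter, then 
-write back an updated value through the client API: 
-    
-    
-    import java.util.Map.Entry;
-    
-    import org.apache.accumulo.core.client.BatchWriter;
-    import org.apache.accumulo.core.client.Connector;
-    import org.apache.accumulo.core.client.Scanner;
-    import org.apache.accumulo.core.client.ZooKeeperInstance;
-    import org.apache.accumulo.core.data.Key;
-    import org.apache.accumulo.core.data.Mutation;
-    import org.apache.accumulo.core.data.Range;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.security.Authorizations;
-    import org.apache.hadoop.io.Text;
-    
-    public class ModelWorker {
-        public static void main(String[] args) throws Exception {
-            Connector conn = new ZooKeeperInstance("myinstance",
-                    "zooserver-one,zooserver-two")
-                    .getConnector("user", "passwd".getBytes());
-    
-            // Read the current weight of one model parameter.
-            Scanner scan = conn.createScanner("model", new Authorizations());
-            scan.setRange(new Range(new Text("weights")));
-            double w = 0.0;
-            for (Entry<Key,Value> e : scan)
-                if (e.getKey().getColumnQualifier().toString()
-                        .equals("feature-7"))
-                    w = Double.parseDouble(new String(e.getValue().get()));
-    
-            // Write back an updated value for just that parameter.
-            BatchWriter writer = conn.createBatchWriter("model",
-                    1000000L, 60000L, 1);
-            Mutation m = new Mutation(new Text("weights"));
-            m.put(new Text("param"), new Text("feature-7"),
-                    new Value(Double.toString(w + 0.1).getBytes()));
-            writer.addMutation(m);
-            writer.close();
-        }
-    }
-    
-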
-* * *
-
-** Next:** [Security][2] ** Up:** [Apache Accumulo User Manual Version 1.3][4] 
** Previous:** [High-Speed Ingest][6]   ** [Contents][8]**
-
-[2]: Security.html
-[4]: accumulo_user_manual.html
-[6]: High_Speed_Ingest.html
-[8]: Contents.html
-[9]: Analytics.html#MapReduce
-[10]: Analytics.html#Aggregating_Iterators
-[11]: Analytics.html#Statistical_Modeling
-

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/Contents.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/Contents.md 
b/user_manual_1.3-incubating/Contents.md
deleted file mode 100644
index 7deba5e..0000000
--- a/user_manual_1.3-incubating/Contents.md
+++ /dev/null
@@ -1,232 +0,0 @@
----
-title: "User Manual: Contents"
----
-
-** Next:** [Introduction][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Apache Accumulo User Manual Version 1.3][4]   
-  
-  
-
-
-### <a id="Contents"></a> Contents
-
-* [Introduction][2]
-* [Accumulo Design][6]
-
-    * [Data Model][7]
-    * [Architecture][8]
-    * [Components][9]
-
-        * [Tablet Server][10]
-        * [Loggers][11]
-        * [Garbage Collector][12]
-        * [Master][13]
-        * [Client][14]
-
-    * [Data Management][15]
-    * [Tablet Service][16]
-    * [Compactions][17]
-    * [Fault-Tolerance][18]
-
-  
-
-* [Accumulo Shell][19]
-
-    * [Basic Administration][20]
-    * [Table Maintenance][21]
-    * [User Administration][22]
-
-  
-
-* [Writing Accumulo Clients][23]
-
-    * [Writing Data][24]
-
-        * [BatchWriter][25]
-
-    * [Reading Data][26]
-
-        * [Scanner][27]
-        * [BatchScanner][28]
-
-  
-
-* [Table Configuration][29]
-
-    * [Locality Groups][30]
-
-        * [Managing Locality Groups via the Shell][31]
-        * [Managing Locality Groups via the Client API][32]
-
-    * [Constraints][33]
-    * [Bloom Filters][34]
-    * [Iterators][35]
-
-        * [Setting Iterators via the Shell][36]
-        * [Setting Iterators Programmatically][37]
-        * [Versioning Iterators and Timestamps][38]
-        * [Filtering Iterators][39]
-
-    * [Aggregating Iterators][40]
-    * [Block Cache][41]
-
-  
-
-* [Table Design][42]
-
-    * [Basic Table][43]
-    * [RowID Design][44]
-    * [Indexing][45]
-    * [Entity-Attribute and Graph Tables][46]
-    * [Document-Partitioned Indexing][47]
-
-  
-
-* [High-Speed Ingest][48]
-
-    * [Pre-Splitting New Tables][49]
-    * [Multiple Ingester Clients][50]
-    * [Bulk Ingest][51]
-    * [MapReduce Ingest][52]
-
-  
-
-* [Analytics][53]
-
-    * [MapReduce][54]
-
-        * [Mapper and Reducer classes][55]
-        * [AccumuloInputFormat options][56]
-        * [AccumuloOutputFormat options][57]
-
-    * [Aggregating Iterators][58]
-
-        * [Feature Vectors][59]
-
-    * [Statistical Modeling][60]
-
-  
-
-* [Security][61]
-
-    * [Security Label Expressions][62]
-    * [Security Label Expression Syntax][63]
-    * [Authorization][64]
-    * [Secure Authorizations Handling][65]
-    * [Query Services Layer][66]
-
-  
-
-* [Administration][67]
-
-    * [Hardware][68]
-    * [Network][69]
-    * [Installation][70]
-    * [Dependencies][71]
-    * [Configuration][72]
-
-        * [Edit conf/accumulo-env.sh][73]
-        * [Cluster Specification][74]
-        * [Accumulo Settings][75]
-        * [Deploy Configuration][76]
-
-    * [Initialization][77]
-    * [Running][78]
-
-        * [Starting Accumulo][79]
-        * [Stopping Accumulo][80]
-
-    * [Monitoring][81]
-    * [Logging][82]
-    * [Recovery][83]
-
-  
-
-* [Shell Commands][84]
-
-  
-
-
-* * *
-
-[2]: Introduction.html
-[4]: accumulo_user_manual.html
-[6]: Accumulo_Design.html
-[7]: Accumulo_Design.html#Data_Model
-[8]: Accumulo_Design.html#Architecture
-[9]: Accumulo_Design.html#Components
-[10]: Accumulo_Design.html#Tablet_Server
-[11]: Accumulo_Design.html#Loggers
-[12]: Accumulo_Design.html#Garbage_Collector
-[13]: Accumulo_Design.html#Master
-[14]: Accumulo_Design.html#Client
-[15]: Accumulo_Design.html#Data_Management
-[16]: Accumulo_Design.html#Tablet_Service
-[17]: Accumulo_Design.html#Compactions
-[18]: Accumulo_Design.html#Fault-Tolerance
-[19]: Accumulo_Shell.html
-[20]: Accumulo_Shell.html#Basic_Administration
-[21]: Accumulo_Shell.html#Table_Maintenance
-[22]: Accumulo_Shell.html#User_Administration
-[23]: Writing_Accumulo_Clients.html
-[24]: Writing_Accumulo_Clients.html#Writing_Data
-[25]: Writing_Accumulo_Clients.html#BatchWriter
-[26]: Writing_Accumulo_Clients.html#Reading_Data
-[27]: Writing_Accumulo_Clients.html#Scanner
-[28]: Writing_Accumulo_Clients.html#BatchScanner
-[29]: Table_Configuration.html
-[30]: Table_Configuration.html#Locality_Groups
-[31]: Table_Configuration.html#Managing_Locality_Groups_via_the_Shell
-[32]: Table_Configuration.html#Managing_Locality_Groups_via_the_Client_API
-[33]: Table_Configuration.html#Constraints
-[34]: Table_Configuration.html#Bloom_Filters
-[35]: Table_Configuration.html#Iterators
-[36]: Table_Configuration.html#Setting_Iterators_via_the_Shell
-[37]: Table_Configuration.html#Setting_Iterators_Programmatically
-[38]: Table_Configuration.html#Versioning_Iterators_and_Timestamps
-[39]: Table_Configuration.html#Filtering_Iterators
-[40]: Table_Configuration.html#Aggregating_Iterators
-[41]: Table_Configuration.html#Block_Cache
-[42]: Table_Design.html
-[43]: Table_Design.html#Basic_Table
-[44]: Table_Design.html#RowID_Design
-[45]: Table_Design.html#Indexing
-[46]: Table_Design.html#Entity-Attribute_and_Graph_Tables
-[47]: Table_Design.html#Document-Partitioned_Indexing
-[48]: High_Speed_Ingest.html
-[49]: High_Speed_Ingest.html#Pre-Splitting_New_Tables
-[50]: High_Speed_Ingest.html#Multiple_Ingester_Clients
-[51]: High_Speed_Ingest.html#Bulk_Ingest
-[52]: High_Speed_Ingest.html#MapReduce_Ingest
-[53]: Analytics.html
-[54]: Analytics.html#MapReduce
-[55]: Analytics.html#Mapper_and_Reducer_classes
-[56]: Analytics.html#AccumuloInputFormat_options
-[57]: Analytics.html#AccumuloOutputFormat_options
-[58]: Analytics.html#Aggregating_Iterators
-[59]: Analytics.html#Feature_Vectors
-[60]: Analytics.html#Statistical_Modeling
-[61]: Security.html
-[62]: Security.html#Security_Label_Expressions
-[63]: Security.html#Security_Label_Expression_Syntax
-[64]: Security.html#Authorization
-[65]: Security.html#Secure_Authorizations_Handling
-[66]: Security.html#Query_Services_Layer
-[67]: Administration.html
-[68]: Administration.html#Hardware
-[69]: Administration.html#Network
-[70]: Administration.html#Installation
-[71]: Administration.html#Dependencies
-[72]: Administration.html#Configuration
-[73]: Administration.html#Edit_conf/accumulo-env.sh
-[74]: Administration.html#Cluster_Specification
-[75]: Administration.html#Accumulo_Settings
-[76]: Administration.html#Deploy_Configuration
-[77]: Administration.html#Initialization
-[78]: Administration.html#Running
-[79]: Administration.html#Starting_Accumulo
-[80]: Administration.html#Stopping_Accumulo
-[81]: Administration.html#Monitoring
-[82]: Administration.html#Logging
-[83]: Administration.html#Recovery
-[84]: Shell_Commands.html
-

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/High_Speed_Ingest.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/High_Speed_Ingest.md 
b/user_manual_1.3-incubating/High_Speed_Ingest.md
deleted file mode 100644
index a30e395..0000000
--- a/user_manual_1.3-incubating/High_Speed_Ingest.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: "User Manual: High Speed Ingest"
----
-
-** Next:** [Analytics][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Table Design][6]   ** [Contents][8]**   
-  
-<a id="CHILD_LINKS"></a>**Subsections**
-
-* [Pre-Splitting New Tables][9]
-* [Multiple Ingester Clients][10]
-* [Bulk Ingest][11]
-* [MapReduce Ingest][12]
-
-* * *
-
-## <a id="High-Speed_Ingest"></a> High-Speed Ingest
-
-Accumulo is often used as part of a larger data processing and storage system. 
To maximize the performance of a parallel system involving Accumulo, the 
ingestion and query components should be designed to provide enough parallelism 
and concurrency to avoid creating bottlenecks for users and other systems 
writing to and reading from Accumulo. There are several ways to achieve high 
ingest performance. 
-
-## <a id="Pre-Splitting_New_Tables"></a> Pre-Splitting New Tables
-
-New tables consist of a single tablet by default. As mutations are applied, 
the table grows and splits into multiple tablets which are balanced by the 
Master across TabletServers. This implies that the aggregate ingest rate will 
be limited to fewer servers than are available within the cluster until the 
table has reached the point where there are tablets on every TabletServer. 
-
-Pre-splitting a table ensures that as many tablets as desired are available 
before ingest begins, allowing ingest to take advantage of all the parallelism 
possible with the cluster hardware. Tables can be split at any time by using the 
shell: 
-    
-    
-    user@myinstance mytable> addsplits -sf /local_splitfile -t mytable
-    
-
-For the purposes of providing parallelism to ingest, it is not necessary to 
create more tablets than there are physical machines within the cluster, as the 
aggregate ingest rate is a function of the number of physical machines. Note 
that the aggregate ingest rate is still subject to the number of machines 
running ingest clients and the distribution of rowIDs across the table. The 
aggregate ingest rate will be suboptimal if there are many inserts into a 
small number of rowIDs. 
-
-## <a id="Multiple_Ingester_Clients"></a> Multiple Ingester Clients
-
-Accumulo is capable of scaling to very high rates of ingest, which is 
dependent upon not just the number of TabletServers in operation but also the 
number of ingest clients. This is because a single client, while capable of 
batching mutations and sending them to all TabletServers, is ultimately limited 
by the amount of data that can be processed on a single machine. The aggregate 
ingest rate will scale linearly with the number of clients up to the point at 
which either the aggregate I/O of TabletServers or total network bandwidth 
capacity is reached. 
-
-In operational settings where high rates of ingest are paramount, clusters are 
often configured to dedicate some number of machines solely to running Ingester 
Clients. The exact ratio of clients to TabletServers necessary for optimum 
ingestion rates will vary according to the distribution of resources per 
machine and by data type. 
-
-## <a id="Bulk_Ingest"></a> Bulk Ingest
-
-Accumulo supports the ability to import files produced by an external process 
such as MapReduce into an existing table. In some cases it may be faster to 
load data this way rather than by ingesting through clients using 
BatchWriters. This allows a large number of machines to format data the way 
Accumulo expects. The new files can then simply be introduced to Accumulo via a 
shell command. 
-
-To configure MapReduce to format data in preparation for bulk loading, the job 
should be set to use a range partitioner instead of the default hash 
partitioner. The range partitioner uses the split points of the Accumulo table 
that will receive the data. The split points can be obtained from the shell and 
used by the MapReduce RangePartitioner. Note that this is only useful if the 
existing table is already split into multiple tablets. 
-    
-    
-    user@myinstance mytable> getsplits
-    aa
-    ab
-    ac
-    ...
-    zx
-    zy
-    zz
-    
-
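-A minimal sketch of such a partitioner (the class is hypothetical, and a real 
-job would load the split points from a file rather than hard-coding them): 
-    
-    
-    import java.util.Arrays;
-    
-    import org.apache.hadoop.io.Text;
-    import org.apache.hadoop.io.Writable;
-    import org.apache.hadoop.mapreduce.Partitioner;
-    
-    public class SplitPointPartitioner extends Partitioner<Text,Writable> {
-        // Sorted split points, as obtained from getsplits above.
-        private static final Text[] SPLITS = { new Text("aa"),
-                new Text("ab"), new Text("ac"), new Text("zz") };
-    
-        @Override
-        public int getPartition(Text key, Writable value, int numPartitions) {
-            // Send each key to the reducer that covers the tablet range
-            // containing it, so each reducer produces one tablet's file.
-            int pos = Arrays.binarySearch(SPLITS, key);
-            if (pos < 0)
-                pos = -(pos + 1); // insertion point between split points
-            return pos % numPartitions;
-        }
-    }
-    
-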
-Run the MapReduce job, using the AccumuloFileOutputFormat to create the files 
to be introduced to Accumulo. Once this is complete, the files can be added to 
Accumulo via the shell: 
-    
-    
-    user@myinstance mytable> importdirectory /files_dir /failures
-    
-
-Note that the paths referenced are directories within the same HDFS instance 
over which Accumulo is running. Accumulo places any files that failed to be 
added to the second directory specified. 
-
-A complete example of using Bulk Ingest can be found at   
-accumulo/docs/examples/README.bulkIngest 
-
-## <a id="MapReduce_Ingest"></a> MapReduce Ingest
-
-It is possible to efficiently write many mutations to Accumulo in parallel via 
a MapReduce job. In this scenario the MapReduce is written to process data that 
lives in HDFS and write mutations to Accumulo using the AccumuloOutputFormat. 
See the MapReduce section under Analytics for details. 
-
-An example of using MapReduce can be found under   
-accumulo/docs/examples/README.mapred 
-
-* * *
-
-** Next:** [Analytics][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Table Design][6]   ** [Contents][8]**
-
-[2]: Analytics.html
-[4]: accumulo_user_manual.html
-[6]: Table_Design.html
-[8]: Contents.html
-[9]: High_Speed_Ingest.html#Pre-Splitting_New_Tables
-[10]: High_Speed_Ingest.html#Multiple_Ingester_Clients
-[11]: High_Speed_Ingest.html#Bulk_Ingest
-[12]: High_Speed_Ingest.html#MapReduce_Ingest
-

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/Introduction.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/Introduction.md 
b/user_manual_1.3-incubating/Introduction.md
deleted file mode 100644
index b8e6247..0000000
--- a/user_manual_1.3-incubating/Introduction.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "User Manual: Introduction"
----
-
-** Next:** [Accumulo Design][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Contents][6]   ** [Contents][6]**   
-  
-
-
-## <a id="Introduction"></a> Introduction
-
-Apache Accumulo is a highly scalable structured store based on Google's 
BigTable. Accumulo is written in Java and operates over the Hadoop Distributed 
File System (HDFS), which is part of the popular Apache Hadoop project. 
Accumulo supports efficient storage and retrieval of structured data, including 
queries for ranges, and provides support for using Accumulo tables as input and 
output for MapReduce jobs. 
-
-Accumulo features automatic load-balancing and partitioning, data compression 
and fine-grained security labels. 
-
-  
-
-
-* * *
-
-[2]: Accumulo_Design.html
-[4]: accumulo_user_manual.html
-[6]: Contents.html
-

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/Security.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/Security.md 
b/user_manual_1.3-incubating/Security.md
deleted file mode 100644
index f0cc9bb..0000000
--- a/user_manual_1.3-incubating/Security.md
+++ /dev/null
@@ -1,108 +0,0 @@
----
-title: "User Manual: Security"
----
-
-** Next:** [Administration][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Analytics][6]   ** [Contents][8]**   
-  
-<a id="CHILD_LINKS"></a>**Subsections**
-
-* [Security Label Expressions][9]
-* [Security Label Expression Syntax][10]
-* [Authorization][11]
-* [Secure Authorizations Handling][12]
-* [Query Services Layer][13]
-
-* * *
-
-## <a id="Security"></a> Security
-
-Accumulo extends the BigTable data model to implement a security mechanism 
known as cell-level security. Every key-value pair has its own security label, 
stored under the column visibility element of the key, which is used to 
determine whether a given user meets the security requirements to read the 
value. This enables data of various security levels to be stored within the 
same row, and users of varying degrees of access to query the same table, while 
preserving data confidentiality. 
-
-## <a id="Security_Label_Expressions"></a> Security Label Expressions
-
-When mutations are applied, users can specify a security label for each value. 
This is done as the Mutation is created by passing a ColumnVisibility object to 
the put() method: 
-    
-    
-    Text rowID = new Text("row1");
-    Text colFam = new Text("myColFam");
-    Text colQual = new Text("myColQual");
-    ColumnVisibility colVis = new ColumnVisibility("public");
-    long timestamp = System.currentTimeMillis();
-    
-    Value value = new Value("myValue");
-    
-    Mutation mutation = new Mutation(rowID);
-    mutation.put(colFam, colQual, colVis, timestamp, value);
-    
-
-## <a id="Security_Label_Expression_Syntax"></a> Security Label Expression 
Syntax
-
-Security labels consist of a set of user-defined tokens that are required to 
read the value the label is associated with. The set of tokens required can be 
specified using syntax that supports logical AND and OR combinations of tokens, 
as well as nesting groups of tokens together. 
-
-For example, suppose within our organization we want to label our data values 
with security labels defined in terms of user roles. We might have tokens such 
as: 
-    
-    
-    admin
-    audit
-    system
-    
-
-These can be specified alone or combined using logical operators: 
-    
-    
-    // Users must have admin privileges:
-    admin
-    
-    // Users must have admin and audit privileges
-    admin&audit
-    
-    // Users with either admin or audit privileges
-    admin|audit
-    
-    // Users must have audit and one or both of admin or system
-    (admin|system)&audit
-    
-
-When both `|` and `&` operators are used, parentheses must be used to specify 
precedence of the operators. 
-
-## <a id="Authorization"></a> Authorization
-
-When clients attempt to read data from Accumulo, any security labels present 
are examined against the set of authorizations passed by the client code when 
the Scanner or BatchScanner is created. If the authorizations are determined 
to be insufficient to satisfy the security label, the value is suppressed from 
the set of results sent back to the client. 
-
-Authorizations are specified as a comma-separated list of tokens the user 
possesses: 
-    
-    
-    // user possesses both admin and system level access
-    Authorizations auths = new Authorizations("admin","system");
-    
-    Scanner s = connector.createScanner("table", auths);
-    
-
-## <a id="Secure_Authorizations_Handling"></a> Secure Authorizations Handling
-
-Because the client can pass any authorization tokens to Accumulo, applications 
must be designed to obtain users' authorization tokens from a trusted third party 
rather than having the users specify their authorizations directly. 
-
-Often production systems will integrate with Public-Key Infrastructure (PKI) 
and designate client code within the query layer to negotiate with PKI servers 
in order to authenticate users and retrieve their authorization tokens 
(credentials). This requires users to specify only the information necessary to 
authenticate themselves to the system. Once user identity is established, their 
credentials can be accessed by the client code and passed to Accumulo outside 
of the reach of the user. 
-
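-A sketch of this pattern (the credential service here is hypothetical): the 
-query layer resolves a user's tokens from a trusted source, so the user never 
-supplies authorizations directly: 
-    
-    
-    import org.apache.accumulo.core.client.Connector;
-    import org.apache.accumulo.core.client.Scanner;
-    import org.apache.accumulo.core.security.Authorizations;
-    
-    public class TrustedAuthsExample {
-        // Stand-in for a PKI- or directory-backed credential service.
-        interface CredentialService {
-            String[] authorizationsFor(String userId);
-        }
-    
-        static Scanner scannerFor(Connector conn, CredentialService creds,
-                String userId) throws Exception {
-            // The user supplies only an identity; the tokens come from the
-            // trusted service, outside of the user's reach.
-            Authorizations auths =
-                    new Authorizations(creds.authorizationsFor(userId));
-            return conn.createScanner("table", auths);
-        }
-    }
-    
-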
-## <a id="Query_Services_Layer"></a> Query Services Layer
-
-Since the primary method of interaction with Accumulo is through the Java API, 
production environments often call for the implementation of a Query layer. 
This can be done using web services in containers such as Apache Tomcat, but is 
not a requirement. The Query Services Layer provides a platform on which 
user-facing applications can be built. This allows the 
application designers to isolate potentially complex query logic, and enables a 
convenient point at which to perform essential security functions. 
-
-Several production environments choose to implement authentication at this 
layer, where user identifiers are used to retrieve their access credentials, 
which are then cached within the query layer and presented to Accumulo through 
the Authorizations mechanism. 
-
-Typically, the query services layer sits between Accumulo and user 
workstations. 
-
-* * *
-
-** Next:** [Administration][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Analytics][6]   ** [Contents][8]**
-
-[2]: Administration.html
-[4]: accumulo_user_manual.html
-[6]: Analytics.html
-[8]: Contents.html
-[9]: Security.html#Security_Label_Expressions
-[10]: Security.html#Security_Label_Expression_Syntax
-[11]: Security.html#Authorization
-[12]: Security.html#Secure_Authorizations_Handling
-[13]: Security.html#Query_Services_Layer
-

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/Shell_Commands.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/Shell_Commands.md 
b/user_manual_1.3-incubating/Shell_Commands.md
deleted file mode 100644
index 94f9080..0000000
--- a/user_manual_1.3-incubating/Shell_Commands.md
+++ /dev/null
@@ -1,534 +0,0 @@
----
-title: "User Manual: Shell Commands"
----
-
-** Up:** [Apache Accumulo User Manual Version 1.3][3] ** Previous:** 
[Administration][5]   ** [Contents][7]**   
-  
-
-
-## <a id="Shell_Commands"></a> Shell Commands
-
-**?**   
-  
-    usage: ? [ <command> <command> ] [-?] [-np]   
-    description: provides information about the available commands   
-      -?,-help  display this help   
-      -np,-no-pagination  disables pagination of output   
-  
-**about**   
-  
-    usage: about [-?] [-v]   
-    description: displays information about this program   
-      -?,-help  display this help   
-      -v,-verbose displays detailed session information   
-  
-**addsplits**   
-  
-    usage: addsplits [<split> <split> ] [-?] [-b64] [-sf <filename>] -t 
<tableName>   
-    description: add split points to an existing table   
-      -?,-help  display this help   
-      -b64,-base64encoded decode encoded split points   
-      -sf,-splits-file <filename> file with newline separated list of rows to 
add   
-           to table   
-      -t,-table <tableName>  name of a table to add split points to   
-  
-**authenticate**   
-  
-    usage: authenticate <username> [-?]   
-    description: verifies a user's credentials   
-      -?,-help  display this help   
-  
-**bye**   
-  
-    usage: bye [-?]   
-    description: exits the shell   
-      -?,-help  display this help   
-  
-**classpath**   
-  
-    usage: classpath [-?]   
-    description: lists the current files on the classpath   
-      -?,-help  display this help   
-  
-**clear**   
-  
-    usage: clear [-?]   
-    description: clears the screen   
-      -?,-help  display this help   
-  
-**cls**   
-  
-    usage: cls [-?]   
-    description: clears the screen   
-      -?,-help  display this help   
-  
-**compact**   
-  
-    usage: compact [-?] [-override] -p <pattern> | -t <tableName>   
-    description: sets all tablets for a table to major compact as soon as 
possible   
-           (based on current time)   
-      -?,-help  display this help   
-      -override  override a future scheduled compaction   
-      -p,-pattern <pattern>  regex pattern of table names to flush   
-      -t,-table <tableName>  name of a table to flush   
-  
-**config**   
-  
-    usage: config [-?] [-d <property> | -f <string> | -s <property=value>] 
[-np]   
-           [-t <table>]   
-    description: prints system properties and table specific properties   
-      -?,-help  display this help   
-      -d,-delete <property>  delete a per-table property   
-      -f,-filter <string> show only properties that contain this string   
-      -np,-no-pagination  disables pagination of output   
-      -s,-set <property=value>  set a per-table property   
-      -t,-table <table>  display/set/delete properties for specified table   
-  
-**createtable**   
-  
-    usage: createtable <tableName> [-?] [-a   
-           <<columnfamily>[:<columnqualifier>]=<aggregation_class>>] [-b64]   
-           [-cc <table>] [-cs <table> | -sf <filename>] [-ndi]  [-tl | -tm]   
-    description: creates a new table, with optional aggregators and optionally 
  
-           pre-split   
-      -?,-help  display this help   
-      -a,-aggregator <<columnfamily>[:<columnqualifier>]=<aggregation_class>>  
 
-           comma separated column=aggregator   
-      -b64,-base64encoded decode encoded split points   
-      -cc,-copy-config <table>  table to copy configuration from   
-      -cs,-copy-splits <table>  table to copy current splits from   
-      -ndi,-no-default-iterators  prevents creation of the normal default 
iterator   
-           set   
-      -sf,-splits-file <filename> file with newline separated list of rows to  
 
-           create a pre-split table   
-      -tl,-time-logical  use logical time   
-      -tm,-time-millis  use time in milliseconds   
-  
-**createuser**   
-  
-    usage: createuser <username> [-?] [-s <comma-separated-authorizations>]   
-    description: creates a new user   
-      -?,-help  display this help   
-      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
-  
-**debug**   
-  
-    usage: debug [ on | off ] [-?]   
-    description: turns debug logging on or off   
-      -?,-help  display this help   
-  
-**delete**   
-  
-    usage: delete <row> <colfamily> <colqualifier> [-?] [-l <expression>] [-t  
 
-           <timestamp>]   
-    description: deletes a record from a table   
-      -?,-help  display this help   
-      -l,-authorization-label <expression>  formatted authorization label 
expression   
-      -t,-timestamp <timestamp>  timestamp to use for insert   
-  
-**deleteiter**   
-  
-    usage: deleteiter [-?] [-majc] [-minc] -n <itername> [-scan] [-t <table>]  
 
-    description: deletes a table-specific iterator   
-      -?,-help  display this help   
-      -majc,-major-compaction  applied at major compaction   
-      -minc,-minor-compaction  applied at minor compaction   
-      -n,-name <itername> iterator to delete   
-      -scan,-scan-time  applied at scan time   
-      -t,-table <table>  tableName   
-  
-**deletemany**   
-  
-    usage: deletemany [-?] [-b <start-row>] [-c   
-           <<columnfamily>[:<columnqualifier>]>] [-e <end-row>] [-f] [-np]   
-           [-s <comma-separated-authorizations>] [-st]   
-    description: scans a table and deletes the resulting records   
-      -?,-help  display this help   
-      -b,-begin-row <start-row>  begin row (inclusive)   
-      -c,-columns <<columnfamily>[:<columnqualifier>]>  comma-separated 
columns   
-      -e,-end-row <end-row>  end row (inclusive)   
-      -f,-force  forces deletion without prompting   
-      -np,-no-pagination  disables pagination of output   
-      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
-           (all user auths are used if this argument is not specified)   
-      -st,-show-timestamps  enables displaying timestamps   
-  
-**deletescaniter**   
-  
-    usage: deletescaniter [-?] [-a] [-n <itername>] [-t <table>]   
-    description: deletes a table-specific scan iterator so it is no longer 
used   
-           during this shell session   
-      -?,-help  display this help   
-      -a,-all  delete all for tableName   
-      -n,-name <itername> iterator to delete   
-      -t,-table <table>  tableName   
-  
-**deletetable**   
-  
-    usage: deletetable <tableName> [-?]   
-    description: deletes a table   
-      -?,-help  display this help   
-  
-**deleteuser**   
-  
-    usage: deleteuser <username> [-?]   
-    description: deletes a user   
-      -?,-help  display this help   
-  
-**droptable**   
-  
-    usage: droptable <tableName> [-?]   
-    description: deletes a table   
-      -?,-help  display this help   
-  
-**dropuser**   
-  
-    usage: dropuser <username> [-?]   
-    description: deletes a user   
-      -?,-help  display this help   
-  
-**egrep**   
-  
-    usage: egrep <regex> <regex> [-?] [-b <start-row>] [-c   
-           <<columnfamily>[:<columnqualifier>]>] [-e <end-row>] [-np] [-s   
-           <comma-separated-authorizations>] [-st] [-t <arg>]   
-    description: egreps a table in parallel on the server side (uses java 
regex)   
-      -?,-help  display this help   
-      -b,-begin-row <start-row>  begin row (inclusive)   
-      -c,-columns <<columnfamily>[:<columnqualifier>]>  comma-separated 
columns   
-      -e,-end-row <end-row>  end row (inclusive)   
-      -np,-no-pagination  disables pagination of output   
-      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
-           (all user auths are used if this argument is not specified)   
-      -st,-show-timestamps  enables displaying timestamps   
-      -t,-num-threads <arg>  num threads   
-  
-**execfile**   
-  
-    usage: execfile [-?] [-v]   
-    description: specifies a file containing accumulo commands to execute   
-      -?,-help  display this help   
-      -v,-verbose displays command prompt as commands are executed   
-  
-**exit**   
-  
-    usage: exit [-?]   
-    description: exits the shell   
-      -?,-help  display this help   
-  
-**flush**   
-  
-    usage: flush [-?] -p <pattern> | -t <tableName>   
-    description: makes a best effort to flush tables from memory to disk   
-      -?,-help  display this help   
-      -p,-pattern <pattern>  regex pattern of table names to flush   
-      -t,-table <tableName>  name of a table to flush   
-  
-**formatter**   
-  
-    usage: formatter [-?] -f <className> | -l | -r   
-    description: specifies a formatter to use for displaying database entries  
 
-      -?,-help  display this help   
-      -f,-formatter <className>  fully qualified name of formatter class to 
use   
-      -l,-list  display the current formatter   
-      -r,-reset  reset to default formatter   
-  
-**getauths**   
-  
-    usage: getauths [-?] [-u <user>]   
-    description: displays the maximum scan authorizations for a user   
-      -?,-help  display this help   
-      -u,-user <user>  user to operate on   
-  
-**getgroups**   
-  
-    usage: getgroups [-?] -t <table>   
-    description: gets the locality groups for a given table   
-      -?,-help  display this help   
-      -t,-table <table>  get locality groups for specified table   
-  
-**getsplits**   
-  
-    usage: getsplits [-?] [-b64] [-m <num>] [-o <file>] [-v]   
-    description: retrieves the current split points for tablets in the current 
table   
-      -?,-help  display this help   
-      -b64,-base64encoded encode the split points   
-      -m,-max <num>  specifies the maximum number of splits to create   
-      -o,-output <file>  specifies a local file to write the splits to   
-      -v,-verbose print out the tablet information with start/end rows   
-  
-**grant**   
-  
-    usage: grant <permission> [-?] -p <pattern> | -s | -t <table>  -u 
<username>   
-    description: grants system or table permissions for a user   
-      -?,-help  display this help   
-      -p,-pattern <pattern>  regex pattern of tables to grant permissions on   
-      -s,-system  grant a system permission   
-      -t,-table <table>  grant a table permission on this table   
-      -u,-user <username> user to operate on   
-  
-**grep**   
-  
-    usage: grep <term> <term> [-?] [-b <start-row>] [-c   
-           <<columnfamily>[:<columnqualifier>]>] [-e <end-row>] [-np] [-s   
-           <comma-separated-authorizations>] [-st] [-t <arg>]   
-    description: searches a table for a substring, in parallel, on the server 
side   
-      -?,-help  display this help   
-      -b,-begin-row <start-row>  begin row (inclusive)   
-      -c,-columns <<columnfamily>[:<columnqualifier>]>  comma-separated 
columns   
-      -e,-end-row <end-row>  end row (inclusive)   
-      -np,-no-pagination  disables pagination of output   
-      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
-           (all user auths are used if this argument is not specified)   
-      -st,-show-timestamps  enables displaying timestamps   
-      -t,-num-threads <arg>  num threads   
-  
-**help**   
-  
-    usage: help [ <command> <command> ] [-?] [-np]   
-    description: provides information about the available commands   
-      -?,-help  display this help   
-      -np,-no-pagination  disables pagination of output   
-  
-**importdirectory**   
-  
-    usage: importdirectory <directory> <failureDirectory> [-?] [-a <num>] [-f 
<num>]   
-           [-g] [-v]   
-    description: bulk imports an entire directory of data files to the current 
table   
-      -?,-help  display this help   
-      -a,-numAssignThreads <num>  number of assign threads for import 
(default: 20)   
-      -f,-numFileThreads <num>  number of threads to process files (default: 
8)   
-      -g,-disableGC  prevents imported files from being deleted by the garbage 
  
-           collector   
-      -v,-verbose displays statistics from the import   
-  
-**info**   
-  
-    usage: info [-?] [-v]   
-    description: displays information about this program   
-      -?,-help  display this help   
-      -v,-verbose displays detailed session information   
-  
-**insert**   
-  
-    usage: insert <row> <colfamily> <colqualifier> <value> [-?] [-l 
<expression>] [-t   
-           <timestamp>]   
-    description: inserts a record   
-      -?,-help  display this help   
-      -l,-authorization-label <expression>  formatted authorization label 
expression   
-      -t,-timestamp <timestamp>  timestamp to use for insert   
-  
-**listscans**   
-  
-    usage: listscans [-?] [-np] [-ts <tablet server>]   
-    description: list what scans are currently running in accumulo. See the   
-           org.apache.accumulo.core.client.admin.ActiveScan javadoc for more 
information   
-           about columns.   
-      -?,-help  display this help   
-      -np,-no-pagination  disables pagination of output   
-      -ts,-tabletServer <tablet server>  list scans for a specific tablet 
server   
-  
-**masterstate**   
-  
-    usage: masterstate <NORMAL|SAFE_MODE|CLEAN_STOP> [-?]   
-    description: set the master state: NORMAL, SAFE_MODE or CLEAN_STOP   
-      -?,-help  display this help   
-  
-**offline**   
-  
-    usage: offline [-?] -p <pattern> | -t <tableName>   
-    description: starts the process of taking table offline   
-      -?,-help  display this help   
-      -p,-pattern <pattern>  regex pattern of table names to take offline
-      -t,-table <tableName>  name of a table to take offline
-  
-**online**   
-  
-    usage: online [-?] -p <pattern> | -t <tableName>   
-    description: starts the process of putting a table online   
-      -?,-help  display this help   
-      -p,-pattern <pattern>  regex pattern of table names to bring online
-      -t,-table <tableName>  name of a table to bring online
-  
-**passwd**   
-  
-    usage: passwd [-?] [-u <user>]   
-    description: changes a user's password   
-      -?,-help  display this help   
-      -u,-user <user>  user to operate on   
-  
-**quit**   
-  
-    usage: quit [-?]   
-    description: exits the shell   
-      -?,-help  display this help   
-  
-**renametable**   
-  
-    usage: renametable <current table name> <new table name> [-?]   
-    description: rename a table   
-      -?,-help  display this help   
-  
-**revoke**   
-  
-    usage: revoke <permission> [-?] -s | -t <table>  -u <username>   
-    description: revokes system or table permissions from a user   
-      -?,-help  display this help   
-      -s,-system  revoke a system permission   
-      -t,-table <table>  revoke a table permission on this table   
-      -u,-user <username> user to operate on   
-  
-**scan**   
-  
-    usage: scan [-?] [-b <start-row>] [-c <<columnfamily>[:<columnqualifier>]>] [-e <end-row>] [-np] [-s <comma-separated-authorizations>] [-st]
-    description: scans the table, and displays the resulting records   
-      -?,-help  display this help   
-      -b,-begin-row <start-row>  begin row (inclusive)   
-      -c,-columns <<columnfamily>[:<columnqualifier>]>  comma-separated columns
-      -e,-end-row <end-row>  end row (inclusive)   
-      -np,-no-pagination  disables pagination of output   
-      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations
-           (all user auths are used if this argument is not specified)   
-      -st,-show-timestamps  enables displaying timestamps   
-  
-**select**   
-  
-    usage: select <row> <columnfamily> <columnqualifier> [-?] [-np] [-s   
-           <comma-separated-authorizations>] [-st]   
-    description: scans for and displays a single record   
-      -?,-help  display this help   
-      -np,-no-pagination  disables pagination of output   
-      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations
-      -st,-show-timestamps  enables displaying timestamps   
-  
-**selectrow**   
-  
-    usage: selectrow <row> [-?] [-np] [-s <comma-separated-authorizations>] [-st]
-    description: scans a single row and displays all resulting records   
-      -?,-help  display this help   
-      -np,-no-pagination  disables pagination of output   
-      -s,-scan-authorizations <comma-separated-authorizations>  scan authorizations
-      -st,-show-timestamps  enables displaying timestamps   
-  
-**setauths**   
-  
-    usage: setauths [-?] -c | -s <comma-separated-authorizations> [-u <user>]
-    description: sets the maximum scan authorizations for a user   
-      -?,-help  display this help   
-      -c,-clear-authorizations  clears the scan authorizations   
-      -s,-scan-authorizations <comma-separated-authorizations>  set the scan   
-           authorizations   
-      -u,-user <user>  user to operate on   
-  
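-For example, the following illustrative command limits the user bob to the scan authorizations A and B:
-    
-    
-    root@myinstance mytable> setauths -s A,B -u bob
-    
-  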
-**setgroups**   
-  
-    usage: setgroups <group>=<col fam>,<col fam> <group>=<col fam>,<col fam>   
-           [-?] -t <table>   
-    description: sets the locality groups for a given table (for binary or commas,
-           use Java API)   
-      -?,-help  display this help   
-      -t,-table <table>  set locality groups for the specified table
-  
-**setiter**   
-  
-    usage: setiter [-?] -agg | -class <name> | -filter | -nolabel | -regex | -vers
-           [-majc] [-minc] [-n <itername>]  -p <pri>  [-scan] [-t <table>]   
-    description: sets a table-specific iterator   
-      -?,-help  display this help   
-      -agg,-aggregator  an aggregating type   
-      -class,-class-name <name>  a java class type   
-      -filter,-filter  a filtering type   
-      -majc,-major-compaction  applied at major compaction   
-      -minc,-minor-compaction  applied at minor compaction   
-      -n,-name <itername> iterator to set   
-      -nolabel,-no-label  a no-labeling type   
-      -p,-priority <pri>  the order in which the iterator is applied   
-      -regex,-regular-expression  a regex matching type   
-      -scan,-scan-time  applied at scan time   
-      -t,-table <table>  tableName   
-      -vers,-version  a versioning type   
-  
-**setscaniter**   
-  
-    usage: setscaniter [-?] -agg | -class <name> | -filter | -nolabel | -regex |
-           -vers  [-n <itername>]  -p <pri> [-t <table>]   
-    description: sets a table-specific scan iterator for this shell session   
-      -?,-help  display this help   
-      -agg,-aggregator  an aggregating type   
-      -class,-class-name <name>  a java class type   
-      -filter,-filter  a filtering type   
-      -n,-name <itername> iterator to set   
-      -nolabel,-no-label  a no-labeling type   
-      -p,-priority <pri>  the order in which the iterator is applied   
-      -regex,-regular-expression  a regex matching type   
-      -t,-table <table>  tableName   
-      -vers,-version  a versioning type   
-  
-**systempermissions**   
-  
-    usage: systempermissions [-?]   
-    description: displays a list of valid system permissions   
-      -?,-help  display this help   
-  
-**table**   
-  
-    usage: table <tableName> [-?]   
-    description: switches to the specified table   
-      -?,-help  display this help   
-  
-**tablepermissions**   
-  
-    usage: tablepermissions [-?]   
-    description: displays a list of valid table permissions   
-      -?,-help  display this help   
-  
-**tables**   
-  
-    usage: tables [-?] [-l]   
-    description: displays a list of all existing tables   
-      -?,-help  display this help   
-      -l,-list-ids  display internal table ids along with the table name   
-  
-**trace**   
-  
-    usage: trace [ on | off ] [-?]   
-    description: turns trace logging on or off   
-      -?,-help  display this help   
-  
-**user**   
-  
-    usage: user <username> [-?]   
-    description: switches to the specified user   
-      -?,-help  display this help   
-  
-**userpermissions**   
-  
-    usage: userpermissions [-?] [-u <user>]   
-    description: displays a user's system and table permissions   
-      -?,-help  display this help   
-      -u,-user <user>  user to operate on   
-  
-**users**   
-  
-    usage: users [-?]   
-    description: displays a list of existing users   
-      -?,-help  display this help   
-  
-**whoami**   
-  
-    usage: whoami [-?]   
-    description: reports the current user name   
-      -?,-help  display this help   
-  
-  
-
-
-* * *
-
-** Up:** [Apache Accumulo User Manual Version 1.3][3] ** Previous:** [Administration][5]   ** [Contents][7]**
-
-[3]: accumulo_user_manual.html
-[5]: Administration.html
-[7]: Contents.html
-

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/user_manual_1.3-incubating/Table_Configuration.md
----------------------------------------------------------------------
diff --git a/user_manual_1.3-incubating/Table_Configuration.md b/user_manual_1.3-incubating/Table_Configuration.md
deleted file mode 100644
index 172a10d..0000000
--- a/user_manual_1.3-incubating/Table_Configuration.md
+++ /dev/null
@@ -1,330 +0,0 @@
----
-title: "User Manual: Table Configuration"
----
-
-** Next:** [Table Design][2] ** Up:** [Apache Accumulo User Manual Version 1.3][4] ** Previous:** [Writing Accumulo Clients][6]   ** [Contents][8]**
-  
-<a id="CHILD_LINKS"></a>**Subsections**
-
-* [Locality Groups][9]
-* [Constraints][10]
-* [Bloom Filters][11]
-* [Iterators][12]
-* [Aggregating Iterators][13]
-* [Block Cache][14]
-
-* * *
-
-## <a id="Table_Configuration"></a> Table Configuration
-
-Accumulo tables have a few options that can be configured to alter the default behavior of Accumulo as well as improve performance based on the data stored. These include locality groups, constraints, and iterators.
-
-## <a id="Locality_Groups"></a> Locality Groups
-
-Accumulo supports storing sets of column families separately on disk, which allows clients to efficiently scan over columns that are frequently used together and to avoid scanning over column families that are not requested. Once a locality group is set, Scanner and BatchScanner operations will automatically take advantage of it whenever the fetchColumnFamilies() method is used.
-
-By default, tables place all column families into the same "default" locality group. Additional locality groups can be configured at any time via the shell or programmatically as follows:
-
-### <a id="Managing_Locality_Groups_via_the_Shell"></a> Managing Locality Groups via the Shell
-    
-    
-    usage: setgroups <group>=<col fam>{,<col fam>}{ <group>=<col fam>{,<col
-    fam>}} [-?] -t <table>
-    
-    user@myinstance mytable> setgroups -t mytable group_one=colf1,colf2
-    
-    user@myinstance mytable> getgroups -t mytable
-    group_one=colf1,colf2
-    
-
-### <a id="Managing_Locality_Groups_via_the_Client_API"></a> Managing Locality Groups via the Client API
-    
-    
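-    // conn is assumed to have been obtained beforehand from an Instance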
-    Connector conn;
-    
-    HashMap<String,Set<Text>> localityGroups =
-        new HashMap<String, Set<Text>>();
-    
-    HashSet<Text> metadataColumns = new HashSet<Text>();
-    metadataColumns.add(new Text("domain"));
-    metadataColumns.add(new Text("link"));
-    
-    HashSet<Text> contentColumns = new HashSet<Text>();
-    contentColumns.add(new Text("body"));
-    contentColumns.add(new Text("images"));
-    
-    localityGroups.put("metadata", metadataColumns);
-    localityGroups.put("content", contentColumns);
-    
-    conn.tableOperations().setLocalityGroups("mytable", localityGroups);
-    
-    // existing locality groups can be obtained as follows
-    Map<String, Set<Text>> groups =
-        conn.tableOperations().getLocalityGroups("mytable");
-    
-
-The assignment of column families to locality groups can be changed at any time. The physical movement of column families into their new locality groups takes place via the periodic Major Compaction process that runs continuously in the background. Major Compaction can also be scheduled to take place immediately through the shell:
-    
-    
-    user@myinstance mytable> compact -t mytable
-    
-
-## <a id="Constraints"></a> Constraints
-
-Accumulo supports constraints applied on mutations at insert time. These can be used to disallow certain inserts according to a user-defined policy. Any mutation that fails to meet the requirements of the constraint is rejected and sent back to the client.
-
-Constraints can be enabled by setting a table property as follows: 
-    
-    
-    user@myinstance mytable> config -t mytable -s table.constraint.1=com.test.ExampleConstraint
-    user@myinstance mytable> config -t mytable -s table.constraint.2=com.test.AnotherConstraint
-    user@myinstance mytable> config -t mytable -f constraint
-    ---------+--------------------------------+----------------------------
-    SCOPE    | NAME                           | VALUE
-    ---------+--------------------------------+----------------------------
-    table    | table.constraint.1............ | com.test.ExampleConstraint
-    table    | table.constraint.2............ | com.test.AnotherConstraint
-    ---------+--------------------------------+----------------------------
-    
-
-Currently there are no general-purpose constraints provided with the Accumulo distribution. New constraints can be created by writing a Java class that implements the org.apache.accumulo.core.constraints.Constraint interface.
-
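-For illustration, a constraint that rejects oversized values might look like the following sketch. It assumes the 1.3 Constraint interface exposes getViolationDescription(short) and check(Environment, Mutation); consult the javadoc for the exact signatures, and note the class itself is hypothetical:
-    
-    
-    import java.util.Collections;
-    import java.util.List;
-    
-    import org.apache.accumulo.core.constraints.Constraint;
-    import org.apache.accumulo.core.data.ColumnUpdate;
-    import org.apache.accumulo.core.data.Mutation;
-    
-    public class ExampleConstraint implements Constraint {
-    
-      private static final short VALUE_TOO_LARGE = 1;
-    
-      public String getViolationDescription(short violationCode) {
-        return violationCode == VALUE_TOO_LARGE ? "value exceeds 1KB" : null;
-      }
-    
-      public List<Short> check(Environment env, Mutation mutation) {
-        // reject any mutation containing a value larger than 1KB
-        for (ColumnUpdate update : mutation.getUpdates())
-          if (update.getValue().length > 1024)
-            return Collections.singletonList(VALUE_TOO_LARGE);
-        return null; // null indicates no violations
-      }
-    }
-    
-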
-To deploy a new constraint, create a jar file containing the class implementing the new constraint and place it in the lib directory of the Accumulo installation. New constraint jars can be added to Accumulo and enabled without restarting, but any change to an existing constraint class requires Accumulo to be restarted.
-
-An example of constraints can be found in accumulo/docs/examples/README.constraints with corresponding code under accumulo/src/examples/main/java/accumulo/examples/constraints.
-
-## <a id="Bloom_Filters"></a> Bloom Filters
-
-As mutations are applied to an Accumulo table, several files are created per tablet. If bloom filters are enabled, Accumulo will create and load a small data structure into memory to determine whether a file contains a given key before opening the file. This can speed up lookups considerably.
-
-To enable bloom filters, enter the following command in the Shell: 
-    
-    
-    user@myinstance> config -t mytable -s table.bloom.enabled=true
-    
-
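-The same property can also be set through the client API; a minimal sketch, assuming an existing Connector named conn:
-    
-    
-    // equivalent to the shell command above
-    conn.tableOperations().setProperty("mytable", "table.bloom.enabled", "true");
-    
-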
-An extensive example of using Bloom Filters can be found at accumulo/docs/examples/README.bloom.
-
-## <a id="Iterators"></a> Iterators
-
-Iterators provide a modular mechanism for adding functionality to be executed by TabletServers when scanning or compacting data. This allows users to efficiently summarize, filter, and aggregate data. In fact, the built-in features of cell-level security and age-off are implemented using Iterators.
-
-### <a id="Setting_Iterators_via_the_Shell"></a> Setting Iterators via the Shell
-    
-    
-    usage: setiter [-?] -agg | -class <name> | -filter | -nolabel | 
-    -regex | -vers [-majc] [-minc] [-n <itername>] -p <pri> [-scan] 
-    [-t <table>]
-    
-    user@myinstance mytable> setiter -t mytable -scan -p 10 -n myiter
-    
-
-### <a id="Setting_Iterators_Programmatically"></a> Setting Iterators Programmatically
-    
-    
-    scanner.setScanIterators(
-        15, // priority
-        "com.company.MyIterator", // class name
-        "myiter"); // name this iterator
-    
-
-Some iterators take additional parameters from client code, as in the following example:
-    
-    
-    bscan.setIteratorOption(
-        "myiter", // iterator reference
-        "myoptionname",
-        "myoptionvalue");
-    
-
-Tables support separate iterator settings applied at scan time, upon minor compaction, and upon major compaction. For most uses, tables will have identical iterator settings for all three to avoid inconsistent results.
-
-### <a id="Versioning_Iterators_and_Timestamps"></a> Versioning Iterators and Timestamps
-
-Accumulo provides the capability to manage versioned data through the use of timestamps within the Key. If a timestamp is not specified in the key created by the client, then the system will set the timestamp to the current time. Two keys with identical rowIDs and columns but different timestamps are considered two versions of the same key. If two inserts are made into Accumulo with the same rowID, column, and timestamp, then the behavior is non-deterministic.
-
-Timestamps are sorted in descending order, so the most recent data comes first. Accumulo can be configured to return the top k versions, or versions later than a given date. The default is to return the one most recent version.
-
-The version policy can be changed by changing the VersioningIterator options for a table as follows:
-    
-    
-    user@myinstance mytable> config -t mytable -s table.iterator.scan.vers.opt.maxVersions=3
-    
-    user@myinstance mytable> config -t mytable -s table.iterator.minc.vers.opt.maxVersions=3
-    
-    user@myinstance mytable> config -t mytable -s table.iterator.majc.vers.opt.maxVersions=3
-    
-
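-The same options can be set programmatically; a minimal sketch, assuming an existing Connector named conn:
-    
-    
-    // set maxVersions for the scan, minor compaction, and major compaction scopes
-    for (String scope : new String[] { "scan", "minc", "majc" })
-      conn.tableOperations().setProperty("mytable",
-          "table.iterator." + scope + ".vers.opt.maxVersions", "3");
-    
-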
-#### <a id="Logical_Time"></a> Logical Time
-
-Accumulo 1.2 introduced the concept of logical time. This ensures that timestamps set by Accumulo always move forward, which helps avoid problems caused by TabletServers that have different time settings. A per-tablet counter gives unique, one-up timestamps on a per-mutation basis. When using time in milliseconds, if two things arrive within the same millisecond then both receive the same timestamp.
-
-A table can be configured to use logical timestamps at creation time as follows:
-    
-    
-    user@myinstance> createtable -tl logical
-    
-
-#### <a id="Deletes"></a> Deletes
-
-Deletes are special keys in Accumulo that get sorted along with all the other data. When a delete key is inserted, Accumulo will not show anything that has a timestamp less than or equal to the delete key. During major compaction, any keys older than a delete key are omitted from the new file created, and the omitted keys are removed from disk as part of the regular garbage collection process.
-
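-A delete key is written through a normal Mutation; a minimal sketch, assuming an existing BatchWriter named writer:
-    
-    
-    // insert a delete key covering row1 colf:colq
-    Mutation m = new Mutation(new Text("row1"));
-    m.putDelete(new Text("colf"), new Text("colq"));
-    writer.addMutation(m);
-    
-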
-### <a id="Filtering_Iterators"></a> Filtering Iterators
-
-When scanning over a set of key-value pairs, it is possible to apply an arbitrary filtering policy through the use of a FilteringIterator. These types of iterators return only key-value pairs that satisfy the filter logic. Accumulo has two built-in filtering iterators that can be configured on any table: AgeOff and RegEx. More can be added by writing a Java class that implements the org.apache.accumulo.core.iterators.filter.Filter interface.
-
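-For illustration, a custom filter might look like the following sketch. It assumes the 1.3 Filter interface exposes init(Map) and accept(Key, Value); the class itself is hypothetical:
-    
-    
-    import java.util.Map;
-    
-    import org.apache.accumulo.core.data.Key;
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.iterators.filter.Filter;
-    
-    // accepts only entries whose value is non-empty
-    public class NonEmptyValueFilter implements Filter {
-    
-      public void init(Map<String,String> options) {
-        // this filter takes no options
-      }
-    
-      public boolean accept(Key k, Value v) {
-        return v.get().length > 0;
-      }
-    }
-    
-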
-The AgeOff filter can be configured to remove data older than a certain date or a fixed amount of time from the present. The following example sets a table to delete everything inserted more than 30 seconds ago:
-    
-    
-    user@myinstance> createtable filtertest
-    user@myinstance filtertest> setiter -t filtertest -scan -minc -majc -p
-    10 -n myfilter -filter
-    
-    FilteringIterator uses Filters to accept or reject key/value pairs
-    ----------> entering options: <filterPriorityNumber>
-    <ageoff|regex|filterClass>
-    
-    ----------> set org.apache.accumulo.core.iterators.FilteringIterator option
-    (<name> <value>, hit enter to skip): 0 ageoff
-    
-    ----------> set org.apache.accumulo.core.iterators.FilteringIterator option
-    (<name> <value>, hit enter to skip):
-    AgeOffFilter removes entries with timestamps more than <ttl>
-    milliseconds old
-    
-    ----------> set org.apache.accumulo.core.iterators.filter.AgeOffFilter parameter
-    currentTime, if set, use the given value as the absolute time in
-    milliseconds as the current time of day:
-    
-    ----------> set org.apache.accumulo.core.iterators.filter.AgeOffFilter parameter
-    ttl, time to live (milliseconds): 30000
-    
-    user@myinstance filtertest>
-    user@myinstance filtertest> scan
-    user@myinstance filtertest> insert foo a b c
-    insert successful
-    user@myinstance filtertest> scan
-    foo a:b [] c
-    
-    ... wait 30 seconds ...
-    
-    user@myinstance filtertest> scan
-    user@myinstance filtertest>
-    
-
-To see the iterator settings for a table, use: 
-    
-    
-    user@example filtertest> config -t filtertest -f iterator
-    ---------+------------------------------------------+------------------
-    SCOPE    | NAME                                     | VALUE
-    ---------+------------------------------------------+------------------
-    table    | table.iterator.majc.myfilter ........... | 10,org.apache.accumulo.core.iterators.FilteringIterator
-    table    | table.iterator.majc.myfilter.opt.0 ..... | org.apache.accumulo.core.iterators.filter.AgeOffFilter
-    table    | table.iterator.majc.myfilter.opt.0.ttl . | 30000
-    table    | table.iterator.minc.myfilter ........... | 10,org.apache.accumulo.core.iterators.FilteringIterator
-    table    | table.iterator.minc.myfilter.opt.0 ..... | org.apache.accumulo.core.iterators.filter.AgeOffFilter
-    table    | table.iterator.minc.myfilter.opt.0.ttl . | 30000
-    table    | table.iterator.scan.myfilter ........... | 10,org.apache.accumulo.core.iterators.FilteringIterator
-    table    | table.iterator.scan.myfilter.opt.0 ..... | org.apache.accumulo.core.iterators.filter.AgeOffFilter
-    table    | table.iterator.scan.myfilter.opt.0.ttl . | 30000
-    ---------+------------------------------------------+------------------
-    
-
-## <a id="Aggregating_Iterators"></a> Aggregating Iterators
-
-Accumulo allows aggregating iterators to be configured on tables and column families. When an aggregating iterator is set, the iterator is applied across the values associated with any keys that share rowID, column family, and column qualifier. This is similar to the reduce step in MapReduce, which applies some function to all the values associated with a particular key.
-
-For example, if an aggregating iterator were configured on a table and the following mutations were inserted:
-    
-    
-    Row     Family Qualifier Timestamp  Value
-    rowID1  colfA  colqA     20100101   1
-    rowID1  colfA  colqA     20100102   1
-    
-
-The table would reflect only one aggregate value: 
-    
-    
-    rowID1  colfA  colqA     -          2
-    
-
-Aggregating iterators can be enabled for a table as follows: 
-    
-    
-    user@myinstance> createtable perDayCounts -a
-    day=org.apache.accumulo.core.iterators.aggregation.StringSummation
-    
-    user@myinstance perDayCounts> insert row1 day 20080101 1
-    user@myinstance perDayCounts> insert row1 day 20080101 1
-    user@myinstance perDayCounts> insert row1 day 20080103 1
-    user@myinstance perDayCounts> insert row2 day 20080101 1
-    user@myinstance perDayCounts> insert row3 day 20080101 1
-    
-    user@myinstance perDayCounts> scan
-    row1 day:20080101 [] 2
-    row1 day:20080103 [] 1
-    row2 day:20080101 [] 1
-    row3 day:20080101 [] 1
-    
-
-Accumulo includes the following aggregators: 
-
-* **LongSummation**: expects values of type long and adds them. 
-* **StringSummation**: expects numbers represented as strings and adds them. 
-* **StringMax**: expects numbers as strings and retains the maximum number inserted. 
-* **StringMin**: expects numbers as strings and retains the minimum number inserted. 
-
-Additional aggregators can be added by creating a Java class that implements **org.apache.accumulo.core.iterators.aggregation.Aggregator** and adding a jar containing that class to Accumulo's lib directory.
-
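-For illustration, a custom aggregator might look like the following sketch. It assumes the 1.3 Aggregator interface exposes reset(), collect(Value), and aggregate(); the class itself is hypothetical:
-    
-    
-    import org.apache.accumulo.core.data.Value;
-    import org.apache.accumulo.core.iterators.aggregation.Aggregator;
-    
-    // counts the number of values seen for each key
-    public class CountAggregator implements Aggregator {
-    
-      private long count = 0;
-    
-      public void reset() { count = 0; }
-    
-      public void collect(Value value) { count++; }
-    
-      public Value aggregate() {
-        return new Value(Long.toString(count).getBytes());
-      }
-    }
-    
-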
-An example of an aggregator can be found under accumulo/src/examples/main/java/org/apache/accumulo/examples/aggregation/SortedSetAggregator.java.
-
-## <a id="Block_Cache"></a> Block Cache
-
-In order to increase throughput of commonly accessed entries, Accumulo employs a block cache. This block cache buffers data in memory so that it doesn't have to be read off disk. The RFile format that Accumulo prefers is a mix of index blocks and data blocks, where the index blocks are used to find the appropriate data blocks. Typical queries to Accumulo result in a binary search over several index blocks followed by a linear scan of one or more data blocks.
-
-The block cache can be configured on a per-table basis, and all tablets hosted on a tablet server share a single resource pool. To configure the size of the tablet server's block cache, set the following properties:
-    
-    
-    tserver.cache.data.size: Specifies the size of the cache for file data blocks.
-    tserver.cache.index.size: Specifies the size of the cache for file indices.
-    
-
-To enable the block cache for your table, set the following properties: 
-    
-    
-    table.cache.block.enable: Determines whether file (data) block cache is enabled.
-    table.cache.index.enable: Determines whether index cache is enabled.
-    
-
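-These table properties can also be set through the client API; a minimal sketch, assuming an existing Connector named conn:
-    
-    
-    // enable both the data and index block caches for mytable
-    conn.tableOperations().setProperty("mytable", "table.cache.block.enable", "true");
-    conn.tableOperations().setProperty("mytable", "table.cache.index.enable", "true");
-    
-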
-The block cache can have a significant effect on alleviating hot spots, as well as reducing query latency. It is enabled by default for the !METADATA table.
-
-* * *
-
-** Next:** [Table Design][2] ** Up:** [Apache Accumulo User Manual Version 1.3][4] ** Previous:** [Writing Accumulo Clients][6]   ** [Contents][8]**
-
-[2]: Table_Design.html
-[4]: accumulo_user_manual.html
-[6]: Writing_Accumulo_Clients.html
-[8]: Contents.html
-[9]: Table_Configuration.html#Locality_Groups
-[10]: Table_Configuration.html#Constraints
-[11]: Table_Configuration.html#Bloom_Filters
-[12]: Table_Configuration.html#Iterators
-[13]: Table_Configuration.html#Aggregating_Iterators
-[14]: Table_Configuration.html#Block_Cache
-
