ACCUMULO-4518 Use Jekyll posts for releases

* Made releases Jekyll posts so they now show in Latest News
  and RSS feed.
* Reorganized and simplified navbar
* Created user manual, examples, and javadoc archive pages
* Removed old documentation page
* Moved 1.3 user manual to 1.3/ directory


Project: http://git-wip-us.apache.org/repos/asf/accumulo-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo-website/commit/9a50bd13
Tree: http://git-wip-us.apache.org/repos/asf/accumulo-website/tree/9a50bd13
Diff: http://git-wip-us.apache.org/repos/asf/accumulo-website/diff/9a50bd13

Branch: refs/heads/master
Commit: 9a50bd132206284c1f4f7f93eec8e07ce619bea6
Parents: dd2d8cf
Author: Mike Walch <[email protected]>
Authored: Wed Nov 9 15:39:22 2016 -0500
Committer: Mike Walch <[email protected]>
Committed: Thu Nov 10 16:28:31 2016 -0500

----------------------------------------------------------------------
 1.3/user_manual/Accumulo_Design.md              | 104 ++++
 1.3/user_manual/Accumulo_Shell.md               | 136 +++++
 1.3/user_manual/Administration.md               | 169 ++++++
 1.3/user_manual/Analytics.md                    | 150 ++++++
 1.3/user_manual/Contents.md                     | 232 ++++++++
 1.3/user_manual/High_Speed_Ingest.md            |  85 +++
 1.3/user_manual/Introduction.md                 |  23 +
 1.3/user_manual/Security.md                     | 108 ++++
 1.3/user_manual/Shell_Commands.md               | 534 +++++++++++++++++++
 1.3/user_manual/Table_Configuration.md          | 330 ++++++++++++
 1.3/user_manual/Table_Design.md                 | 197 +++++++
 1.3/user_manual/Writing_Accumulo_Clients.md     | 124 +++++
 1.3/user_manual/accumulo_user_manual.md         |  49 ++
 1.3/user_manual/data_distribution.png           | Bin 0 -> 86936 bytes
 1.3/user_manual/examples.md                     |   7 +
 1.3/user_manual/examples/aggregation.md         |  36 ++
 1.3/user_manual/examples/batch.md               |  39 ++
 1.3/user_manual/examples/bloom.md               | 113 ++++
 1.3/user_manual/examples/bulkIngest.md          |  20 +
 1.3/user_manual/examples/constraints.md         |  34 ++
 1.3/user_manual/examples/dirlist.md             |  43 ++
 1.3/user_manual/examples/filter.md              |  91 ++++
 1.3/user_manual/examples/helloworld.md          |  38 ++
 1.3/user_manual/examples/index.md               |  42 ++
 1.3/user_manual/examples/mapred.md              |  71 +++
 1.3/user_manual/examples/shard.md               |  52 ++
 1.3/user_manual/failure_handling.png            | Bin 0 -> 48904 bytes
 1.3/user_manual/img1.png                        | Bin 0 -> 2977 bytes
 1.3/user_manual/img2.png                        | Bin 0 -> 4121 bytes
 1.3/user_manual/img3.png                        | Bin 0 -> 6520 bytes
 1.3/user_manual/img4.png                        | Bin 0 -> 16325 bytes
 1.3/user_manual/img5.png                        | Bin 0 -> 3974 bytes
 1.3/user_manual/index.md                        |  50 ++
 _config-asf.yml                                 |   8 +
 _config.yml                                     |   8 +
 _includes/nav.html                              |  32 +-
 _layouts/release.html                           |   8 +
 .../blog/2016-10-28-durability-performance.md   |   6 +-
 _posts/release/2014-03-06-accumulo-1.5.1.md     | 205 +++++++
 _posts/release/2014-05-02-accumulo-1.6.0.md     | 350 ++++++++++++
 _posts/release/2014-09-25-accumulo-1.6.1.md     | 189 +++++++
 _posts/release/2015-02-16-accumulo-1.6.2.md     | 172 ++++++
 _posts/release/2015-05-18-accumulo-1.7.0.md     | 399 ++++++++++++++
 _posts/release/2015-06-25-accumulo-1.5.3.md     | 113 ++++
 _posts/release/2015-07-04-accumulo-1.6.3.md     | 113 ++++
 _posts/release/2015-09-19-accumulo-1.5.2.md     | 179 +++++++
 _posts/release/2015-09-21-accumulo-1.5.4.md     |  68 +++
 _posts/release/2015-10-03-accumulo-1.6.4.md     |  69 +++
 _posts/release/2016-02-17-accumulo-1.6.5.md     | 110 ++++
 _posts/release/2016-02-26-accumulo-1.7.1.md     | 150 ++++++
 _posts/release/2016-06-22-accumulo-1.7.2.md     |  94 ++++
 _posts/release/2016-09-06-accumulo-1.8.0.md     | 196 +++++++
 _posts/release/2016-09-18-accumulo-1.6.6.md     | 136 +++++
 downloads/index.md                              |   6 +-
 index.md                                        |   4 +-
 news.md                                         |  12 +-
 notable_features.md                             |   2 +-
 old_documentation.md                            |  50 --
 pages/examples.md                               |  10 +
 pages/javadocs.md                               |  10 +
 pages/old_archive.md                            |  57 ++
 pages/release.md                                |  24 +
 pages/user-manual.md                            |  11 +
 release_notes/1.5.1.md                          | 204 -------
 release_notes/1.5.2.md                          | 178 -------
 release_notes/1.5.3.md                          | 112 ----
 release_notes/1.5.4.md                          |  67 ---
 release_notes/1.6.0.md                          | 349 ------------
 release_notes/1.6.1.md                          | 188 -------
 release_notes/1.6.2.md                          | 171 ------
 release_notes/1.6.3.md                          | 112 ----
 release_notes/1.6.4.md                          |  68 ---
 release_notes/1.6.5.md                          | 109 ----
 release_notes/1.6.6.md                          | 129 -----
 release_notes/1.7.0.md                          | 398 --------------
 release_notes/1.7.1.md                          | 149 ------
 release_notes/1.7.2.md                          |  87 ---
 release_notes/1.8.0.md                          | 189 -------
 release_notes/index.md                          |  57 --
 user_manual_1.3-incubating/Accumulo_Design.md   | 104 ----
 user_manual_1.3-incubating/Accumulo_Shell.md    | 136 -----
 user_manual_1.3-incubating/Administration.md    | 169 ------
 user_manual_1.3-incubating/Analytics.md         | 150 ------
 user_manual_1.3-incubating/Contents.md          | 232 --------
 user_manual_1.3-incubating/High_Speed_Ingest.md |  85 ---
 user_manual_1.3-incubating/Introduction.md      |  23 -
 user_manual_1.3-incubating/Security.md          | 108 ----
 user_manual_1.3-incubating/Shell_Commands.md    | 534 -------------------
 .../Table_Configuration.md                      | 330 ------------
 user_manual_1.3-incubating/Table_Design.md      | 197 -------
 .../Writing_Accumulo_Clients.md                 | 124 -----
 .../accumulo_user_manual.md                     |  49 --
 .../data_distribution.png                       | Bin 86936 -> 0 bytes
 user_manual_1.3-incubating/examples.md          |   7 -
 .../examples/aggregation.md                     |  36 --
 user_manual_1.3-incubating/examples/batch.md    |  39 --
 user_manual_1.3-incubating/examples/bloom.md    | 113 ----
 .../examples/bulkIngest.md                      |  20 -
 .../examples/constraints.md                     |  34 --
 user_manual_1.3-incubating/examples/dirlist.md  |  43 --
 user_manual_1.3-incubating/examples/filter.md   |  91 ----
 .../examples/helloworld.md                      |  38 --
 user_manual_1.3-incubating/examples/index.md    |  42 --
 user_manual_1.3-incubating/examples/mapred.md   |  71 ---
 user_manual_1.3-incubating/examples/shard.md    |  52 --
 user_manual_1.3-incubating/failure_handling.png | Bin 48904 -> 0 bytes
 user_manual_1.3-incubating/img1.png             | Bin 2977 -> 0 bytes
 user_manual_1.3-incubating/img2.png             | Bin 4121 -> 0 bytes
 user_manual_1.3-incubating/img3.png             | Bin 6520 -> 0 bytes
 user_manual_1.3-incubating/img4.png             | Bin 16325 -> 0 bytes
 user_manual_1.3-incubating/img5.png             | Bin 3974 -> 0 bytes
 user_manual_1.3-incubating/index.md             |  49 --
 112 files changed, 5585 insertions(+), 5526 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Accumulo_Design.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Accumulo_Design.md 
b/1.3/user_manual/Accumulo_Design.md
new file mode 100644
index 0000000..0e74f8f
--- /dev/null
+++ b/1.3/user_manual/Accumulo_Design.md
@@ -0,0 +1,104 @@
+---
+title: "User Manual: Accumulo Design"
+---
+
+** Next:** [Accumulo Shell][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Introduction][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [Data Model][9]
+* [Architecture][10]
+* [Components][11]
+* [Data Management][12]
+* [Tablet Service][13]
+* [Compactions][14]
+* [Fault-Tolerance][15]
+
+* * *
+
+## <a id="Accumulo_Design"></a> Accumulo Design
+
+## <a id="Data_Model"></a> Data Model
+
+Accumulo provides a richer data model than simple key-value stores, but is not 
a fully relational database. Data is represented as key-value pairs, where the 
key and value are comprised of the following elements: 
+
+![converted table][16]
+
+All elements of the Key and the Value are represented as byte arrays except 
for Timestamp, which is a Long. Accumulo sorts keys by element and 
lexicographically in ascending order. Timestamps are sorted in descending order 
so that later versions of the same Key appear first in a sequential scan. 
Tables consist of a set of sorted key-value pairs. 
+
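+The elements above correspond to the Key class in the client API. As a rough 
sketch (the literal values below are placeholders, not examples taken from the 
manual), a key could be assembled as follows: 
+    
+    
+    // Key is in org.apache.accumulo.core.data;
+    // Text is org.apache.hadoop.io.Text
+    long timestamp = System.currentTimeMillis();
+    Key key = new Key(new Text("rowID"), new Text("columnFamily"),
+            new Text("columnQualifier"), new Text("public"), timestamp);
+    
+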
+## <a id="Architecture"></a> Architecture
+
+Accumulo is a distributed data storage and retrieval system and as such 
consists of several architectural components, some of which run on many 
individual servers. Much of the work Accumulo does involves maintaining certain 
properties of the data, such as organization, availability, and integrity, 
across many commodity-class machines. 
+
+## <a id="Components"></a> Components
+
+An instance of Accumulo includes many TabletServers, write-ahead Logger 
servers, one Garbage Collector process, one Master server and many Clients. 
+
+### <a id="Tablet_Server"></a> Tablet Server
+
+The TabletServer manages some subset of all the tablets (partitions of 
tables). This includes receiving writes from clients, persisting writes to a 
write‐ahead log, sorting new key‐value pairs in memory, periodically 
flushing sorted key‐value pairs to new files in HDFS, and responding to reads 
from clients, forming a merge‐sorted view of all keys and values from all the 
files it has created and the sorted in‐memory store. 
+
+TabletServers also perform recovery of a tablet that was previously on a 
server that failed, reapplying any writes found in the write-ahead log to the 
tablet. 
+
+### <a id="Loggers"></a> Loggers
+
+The Loggers accept updates to Tablet servers and write them to local on-disk 
storage. Each tablet server will write its updates to multiple loggers to 
preserve data in case of hardware failure. 
+
+### <a id="Garbage_Collector"></a> Garbage Collector
+
+Accumulo processes will share files stored in HDFS. Periodically, the Garbage 
Collector will identify files that are no longer needed by any process, and 
delete them. 
+
+### <a id="Master"></a> Master
+
+The Accumulo Master is responsible for detecting and responding to 
TabletServer failure. It tries to balance the load across TabletServers by 
assigning tablets carefully and instructing TabletServers to migrate tablets 
when necessary. The Master ensures all tablets are assigned to one TabletServer 
each, and handles table creation, alteration, and deletion requests from 
clients. The Master also coordinates startup, graceful shutdown and recovery of 
changes in write-ahead logs when Tablet servers fail. 
+
+### <a id="Client"></a> Client
+
+Accumulo includes a client library that is linked to every application. The 
client library contains logic for finding servers managing a particular tablet, 
and communicating with TabletServers to write and retrieve key-value pairs. 
+
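+For example, a client typically looks up the instance in ZooKeeper and asks it 
for a Connector. A minimal sketch (the instance name, ZooKeeper hosts, and 
credentials are placeholder values): 
+    
+    
+    // classes below are in org.apache.accumulo.core.client
+    Instance instance = new ZooKeeperInstance("myinstance",
+            "zooserver-one,zooserver-two");
+    Connector connector = instance.getConnector("user", "passwd".getBytes());
+    
+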
+## <a id="Data_Management"></a> Data Management
+
+Accumulo stores data in tables, which are partitioned into tablets. Tablets 
are partitioned on row boundaries so that all of the columns and values for a 
particular row are found together within the same tablet. The Master assigns 
Tablets to one TabletServer at a time. This enables row-level transactions to 
take place without using distributed locking or some other complicated 
synchronization mechanism. As clients insert and query data, and as machines 
are added and removed from the cluster, the Master migrates tablets to ensure 
they remain available and that the ingest and query load is balanced across the 
cluster. 
+
+![Image data_distribution][17]
+
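+Because each row is held entirely within a single tablet, all of the changes 
placed in one Mutation for that row are applied together. A hedged sketch using 
the client API (the table name, columns, and BatchWriter settings are 
placeholder values): 
+    
+    
+    // BatchWriter and Connector: org.apache.accumulo.core.client
+    // Mutation and Value: org.apache.accumulo.core.data
+    BatchWriter writer = connector.createBatchWriter("mytable",
+            1000000L, 60000L, 2); // max memory, max latency (ms), threads
+    
+    Mutation m = new Mutation(new Text("row1"));
+    m.put(new Text("colf"), new Text("colq1"), new Value("value1".getBytes()));
+    m.put(new Text("colf"), new Text("colq2"), new Value("value2".getBytes()));
+    
+    writer.addMutation(m); // both cells for row1 are committed together
+    writer.close();
+    
+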
+## <a id="Tablet_Service"></a> Tablet Service
+
+When a write arrives at a TabletServer it is written to a Write‐Ahead Log 
and then inserted into a sorted data structure in memory called a MemTable. 
When the MemTable reaches a certain size the TabletServer writes out the sorted 
key-value pairs to a file in HDFS called an Indexed Sequential Access Method 
(ISAM) file. This process is called a minor compaction. A new MemTable is then 
created and the fact of the compaction is recorded in the Write‐Ahead Log. 
+
+When a request to read data arrives at a TabletServer, the TabletServer does a 
binary search across the MemTable as well as the in-memory indexes associated 
with each ISAM file to find the relevant values. If clients are performing a 
scan, several key‐value pairs are returned to the client in order from the 
MemTable and the set of ISAM files by performing a merge‐sort as they are 
read. 
+
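+From the client side this merge-sorted view is consumed through a Scanner. A 
minimal sketch (the table name, row, and empty authorizations are placeholder 
values): 
+    
+    
+    // Scanner: org.apache.accumulo.core.client;
+    // Authorizations: org.apache.accumulo.core.security
+    Scanner scanner = connector.createScanner("mytable", new Authorizations());
+    scanner.setRange(new Range(new Text("row1")));
+    
+    for (Map.Entry<Key,Value> entry : scanner) {
+        // entries are returned in sorted key order
+        System.out.println(entry.getKey() + " -> " + entry.getValue());
+    }
+    
+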
+## <a id="Compactions"></a> Compactions
+
+In order to manage the number of files per tablet, periodically the 
TabletServer performs Major Compactions of files within a tablet, in which some 
set of ISAM files are combined into one file. The previous files will 
eventually be removed by the Garbage Collector. This also provides an 
opportunity to permanently remove deleted key‐value pairs by omitting 
key‐value pairs suppressed by a delete entry when the new file is created. 
+
+## <a id="Fault-Tolerance"></a> Fault-Tolerance
+
+If a TabletServer fails, the Master detects it and automatically reassigns the 
tablets from the failed server to other servers. Any key-value pairs 
that were in memory at the time the TabletServer failed are automatically reapplied 
from the Write-Ahead Log to prevent any loss of data. 
+
+The Master will coordinate the copying of write-ahead logs to HDFS so the logs 
are available to all tablet servers. To make recovery efficient, the updates 
within a log are grouped by tablet. The sorting process can be performed by 
Hadoop's MapReduce or the Logger server. TabletServers can quickly apply the 
mutations from the sorted logs that are destined for the tablets they have now 
been assigned. 
+
+TabletServer failures are noted on the Master's monitor page, accessible via   
+http://master-address:50095/monitor. 
+
+![Image failure_handling][18]
+
+* * *
+
+** Next:** [Accumulo Shell][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Introduction][6]   ** [Contents][8]**
+
+[2]: Accumulo_Shell.html
+[4]: accumulo_user_manual.html
+[6]: Introduction.html
+[8]: Contents.html
+[9]: Accumulo_Design.html#Data_Model
+[10]: Accumulo_Design.html#Architecture
+[11]: Accumulo_Design.html#Components
+[12]: Accumulo_Design.html#Data_Management
+[13]: Accumulo_Design.html#Tablet_Service
+[14]: Accumulo_Design.html#Compactions
+[15]: Accumulo_Design.html#Fault-Tolerance
+[16]: img1.png
+[17]: ./data_distribution.png
+[18]: ./failure_handling.png
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Accumulo_Shell.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Accumulo_Shell.md 
b/1.3/user_manual/Accumulo_Shell.md
new file mode 100644
index 0000000..e8612ce
--- /dev/null
+++ b/1.3/user_manual/Accumulo_Shell.md
@@ -0,0 +1,136 @@
+---
+title: "User Manual: Accumulo Shell"
+---
+
+** Next:** [Writing Accumulo Clients][2] ** Up:** [Apache Accumulo User Manual 
Version 1.3][4] ** Previous:** [Accumulo Design][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [Basic Administration][9]
+* [Table Maintenance][10]
+* [User Administration][11]
+
+* * *
+
+## <a id="Accumulo_Shell"></a> Accumulo Shell
+
+Accumulo provides a simple shell that can be used to examine the contents and 
configuration settings of tables, apply individual mutations, and change 
configuration settings. 
+
+The shell can be started by the following command: 
+    
+    
+    $ACCUMULO_HOME/bin/accumulo shell -u [username]
+    
+
+The shell will prompt for the corresponding password to the username specified 
and then display the following prompt: 
+    
+    
+    Shell - Apache Accumulo Interactive Shell
+    -
+    - version 1.3
+    - instance name: myinstance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    
+
+## <a id="Basic_Administration"></a> Basic Administration
+
+The Accumulo shell can be used to create and delete tables, as well as to 
configure table and instance specific options. 
+    
+    
+    root@myinstance> tables
+    !METADATA
+    
+    root@myinstance> createtable mytable
+    
+    root@myinstance mytable>
+    
+    root@myinstance mytable> tables
+    !METADATA
+    mytable
+    
+    root@myinstance mytable> createtable testtable
+    
+    root@myinstance testtable>
+    
+    root@myinstance testtable> deletetable testtable
+    
+    root@myinstance>
+    
+
+The Shell can also be used to insert updates and scan tables. This is useful 
for inspecting tables. 
+    
+    
+    root@myinstance mytable> scan
+    
+    root@myinstance mytable> insert row1 colf colq value1
+    insert successful
+    
+    root@myinstance mytable> scan
+    row1 colf:colq [] value1
+    
+
+## <a id="Table_Maintenance"></a> Table Maintenance
+
+The **compact** command instructs Accumulo to schedule a compaction of the 
table during which files are consolidated and deleted entries are removed. 
+    
+    
+    root@myinstance mytable> compact -t mytable
+    07 16:13:53,201 [shell.Shell] INFO : Compaction of table mytable
+    scheduled for 20100707161353EDT
+    
+
+The **flush** command instructs Accumulo to write all entries currently in 
memory for a given table to disk. 
+    
+    
+    root@myinstance mytable> flush -t mytable
+    07 16:14:19,351 [shell.Shell] INFO : Flush of table mytable
+    initiated...
+    
+
+## <a id="User_Administration"></a> User Administration
+
+The Shell can be used to add and remove users, and to grant privileges to them. 
+    
+    
+    root@myinstance mytable> createuser bob
+    Enter new password for 'bob': *********
+    Please confirm new password for 'bob': *********
+    
+    root@myinstance mytable> authenticate bob
+    Enter current password for 'bob': *********
+    Valid
+    
+    root@myinstance mytable> grant System.CREATE_TABLE -s -u bob
+    
+    root@myinstance mytable> user bob
+    Enter current password for 'bob': *********
+    
+    bob@myinstance mytable> userpermissions
+    System permissions: System.CREATE_TABLE
+    Table permissions (!METADATA): Table.READ
+    Table permissions (mytable): NONE
+    
+    bob@myinstance mytable> createtable bobstable
+    bob@myinstance bobstable>
+    
+    bob@myinstance bobstable> user root
+    Enter current password for 'root': *********
+    
+    root@myinstance bobstable> revoke System.CREATE_TABLE -s -u bob
+    
+
+* * *
+
+** Next:** [Writing Accumulo Clients][2] ** Up:** [Apache Accumulo User Manual 
Version 1.3][4] ** Previous:** [Accumulo Design][6]   ** [Contents][8]**
+
+[2]: Writing_Accumulo_Clients.html
+[4]: accumulo_user_manual.html
+[6]: Accumulo_Design.html
+[8]: Contents.html
+[9]: Accumulo_Shell.html#Basic_Administration
+[10]: Accumulo_Shell.html#Table_Maintenance
+[11]: Accumulo_Shell.html#User_Administration
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Administration.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Administration.md 
b/1.3/user_manual/Administration.md
new file mode 100644
index 0000000..f231617
--- /dev/null
+++ b/1.3/user_manual/Administration.md
@@ -0,0 +1,169 @@
+---
+title: "User Manual: Administration"
+---
+
+** Next:** [Shell Commands][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Security][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [Hardware][9]
+* [Network][10]
+* [Installation][11]
+* [Dependencies][12]
+* [Configuration][13]
+* [Initialization][14]
+* [Running][15]
+* [Monitoring][16]
+* [Logging][17]
+* [Recovery][18]
+
+* * *
+
+## <a id="Administration"></a> Administration
+
+## <a id="Hardware"></a> Hardware
+
+Because we are running essentially two or three systems simultaneously layered 
across the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware to 
consist of 4 to 8 cores, and 8 to 32 GB RAM. This is so each running process 
can have at least one core and 2 to 4 GB of RAM. 
+
+One core running HDFS can typically keep 2 to 4 disks busy, so each machine 
may typically have as little as 2 x 300GB disks and as much as 4 x 1TB or 2TB 
disks. 
+
+It is possible to do with less than this, such as with 1u servers with 2 cores 
and 4GB each, but in this case it is recommended to only run up to two 
processes per machine - i.e. DataNode and TabletServer or DataNode and 
MapReduce worker but not all three. The constraint here is having enough 
available heap space for all the processes on a machine. 
+
+## <a id="Network"></a> Network
+
+Accumulo communicates via remote procedure calls over TCP/IP for both passing 
data and control messages. In addition, Accumulo uses HDFS clients to 
communicate with HDFS. To achieve good ingest and query performance, sufficient 
network bandwidth must be available between any two machines. 
+
+## <a id="Installation"></a> Installation
+
+Choose a directory for the Accumulo installation. This directory will be 
referenced by the environment variable $ACCUMULO_HOME. Run the following: 
+    
+    
+    $ tar xzf $ACCUMULO_HOME/accumulo.tar.gz
+    
+
+Repeat this step at each machine within the cluster. Usually all machines have 
the same $ACCUMULO_HOME. 
+
+## <a id="Dependencies"></a> Dependencies
+
+Accumulo requires HDFS and ZooKeeper to be configured and running before 
starting. Password-less SSH should be configured between at least the Accumulo 
master and TabletServer machines. It is also a good idea to run Network Time 
Protocol (NTP) within the cluster to ensure nodes' clocks don't get too out of 
sync, which can cause problems with automatically timestamped data. Accumulo 
will remove from the set of TabletServers those machines whose times differ too 
much from the master's. 
+
+## <a id="Configuration"></a> Configuration
+
+Accumulo is configured by editing several Shell and XML files found in 
$ACCUMULO_HOME/conf. The structure closely resembles Hadoop's configuration 
files. 
+
+### <a id="Edit_conf/accumulo-env.sh"></a> Edit conf/accumulo-env.sh
+
+Accumulo needs to know where to find the software it depends on. Edit 
accumulo-env.sh and specify the following: 
+
+1. Enter the location of the installation directory of Accumulo for 
$ACCUMULO_HOME
+2. Enter your system's Java home for $JAVA_HOME
+3. Enter the location of Hadoop for $HADOOP_HOME
+4. Choose a location for Accumulo logs and enter it for $ACCUMULO_LOG_DIR
+5. Enter the location of ZooKeeper for $ZOOKEEPER_HOME
+
+By default Accumulo TabletServers are set to use 1GB of memory. You may change 
this by altering the value of $ACCUMULO_TSERVER_OPTS. Note the syntax is that 
of the Java JVM command line options. This value should be less than the 
physical memory of the machines running TabletServers. 
+
+There are similar options for the master's memory usage and the garbage 
collector process. Reduce these if they exceed the physical RAM of your 
hardware and increase them, within the bounds of the physical RAM, if a process 
fails because of insufficient memory. 
+
+Note that you will be specifying the Java heap space in accumulo-env.sh. You 
should make sure that the total heap space used for the Accumulo tserver and 
the Hadoop DataNode and TaskTracker is less than the available memory on each 
slave node in the cluster. On large clusters, it is recommended that the 
Accumulo master, Hadoop NameNode, secondary NameNode, and Hadoop JobTracker all 
be run on separate machines to allow them to use more heap space. If you are 
running these on the same machine on a small cluster, likewise make sure their 
heap space settings fit within the available memory. 
+
+### <a id="Cluster_Specification"></a> Cluster Specification
+
+On the machine that will serve as the Accumulo master: 
+
+1. Write the IP address or domain name of the Accumulo Master to the   
+$ACCUMULO_HOME/conf/masters file. 
+2. Write the IP addresses or domain names of the machines that will be 
TabletServers in   
+$ACCUMULO_HOME/conf/slaves, one per line. 
+
+Note that if using domain names rather than IP addresses, DNS must be 
configured properly for all machines participating in the cluster. DNS can be a 
confusing source of errors. 
+
+### <a id="Accumulo_Settings"></a> Accumulo Settings
+
+Specify appropriate values for the following settings in   
+$ACCUMULO_HOME/conf/accumulo-site.xml : 
+    
+    
+    <property>
+        <name>zookeeper</name>
+        <value>zooserver-one:2181,zooserver-two:2181</value>
+        <description>list of zookeeper servers</description>
+    </property>
+    <property>
+        <name>walog</name>
+        <value>/var/accumulo/walogs</value>
+        <description>local directory for write ahead logs</description>
+    </property>
+    
+
+This enables Accumulo to find ZooKeeper. Accumulo uses ZooKeeper to coordinate 
settings between processes and helps finalize TabletServer failure. 
+
+Accumulo records all changes to tables to a write-ahead log before committing 
them to the table. The `walog` setting specifies the local directory on each 
machine to which write-ahead logs are written. This directory should exist on 
all machines acting as TabletServers. 
+
+Some settings can be modified via the Accumulo shell and take effect 
immediately. However, any settings that should be persisted across system 
restarts must be recorded in the accumulo-site.xml file. 
+
+### <a id="Deploy_Configuration"></a> Deploy Configuration
+
+Copy the masters, slaves, accumulo-env.sh, and if necessary, accumulo-site.xml 
from the   
+$ACCUMULO_HOME/conf/ directory on the master to all the machines specified in 
the slaves file. 
+
+## <a id="Initialization"></a> Initialization
+
+Accumulo must be initialized to create the structures it uses internally to 
locate data across the cluster. HDFS is required to be configured and running 
before Accumulo can be initialized. 
+
+Once HDFS is started, initialization can be performed by executing   
+$ACCUMULO_HOME/bin/accumulo init. This script will prompt for a name for this 
instance of Accumulo. The instance name is used to identify a set of tables and 
instance-specific settings. The script will then write some information into 
HDFS so Accumulo can start properly. 
+
+The initialization script will prompt you to set a root password. Once 
Accumulo is initialized it can be started. 
+
+## <a id="Running"></a> Running
+
+### <a id="Starting_Accumulo"></a> Starting Accumulo
+
+Make sure Hadoop is configured on all of the machines in the cluster, 
including access to a shared HDFS instance. Make sure HDFS is running, and that 
ZooKeeper is configured and running on at least one machine in the cluster. 
Start Accumulo using the bin/start-all.sh script. 
+
+To verify that Accumulo is running, check the Status page as described under 
*Monitoring*. In addition, the Shell can provide some information about the 
status of tables via reading the !METADATA table. 
+
+### <a id="Stopping_Accumulo"></a> Stopping Accumulo
+
+To shutdown cleanly, run bin/stop-all.sh and the master will orchestrate the 
shutdown of all the tablet servers. Shutdown waits for all minor compactions to 
finish, so it may take some time for particular configurations. 
+
+## <a id="Monitoring"></a> Monitoring
+
+The Accumulo Master provides an interface for monitoring the status and health 
of Accumulo components. This interface can be accessed by pointing a web 
browser to   
+http://accumulomaster:50095/status
+
+## <a id="Logging"></a> Logging
+
+Accumulo processes each write to a set of log files. By default these are 
found under   
+$ACCUMULO_HOME/logs/. 
+
+## <a id="Recovery"></a> Recovery
+
+In the event of TabletServer failure or error on shutting Accumulo down, some 
mutations may not have been minor compacted to HDFS properly. In this case, 
Accumulo will automatically reapply such mutations from the write-ahead log 
either when the tablets from the failed server are reassigned by the Master, in 
the case of a single TabletServer failure, or the next time Accumulo starts, in 
the event of failure during shutdown. 
+
+Recovery is performed by asking the loggers to copy their write-ahead logs 
into HDFS. As the logs are copied, they are also sorted, so that tablets can 
easily find their missing updates. The copy/sort status of each file is 
displayed on the Accumulo monitor status page. Once the recovery is complete, any 
tablets involved should return to an "online" state. Until then those tablets 
will be unavailable to clients. 
+
+The Accumulo client library is configured to retry failed mutations and in 
many cases clients will be able to continue processing after the recovery 
process without throwing an exception. 
+
+Note that because Accumulo uses timestamps to order mutations, any mutations 
that are applied as part of the recovery process should appear to have been 
applied when they originally arrived at the TabletServer that failed. This 
makes the ordering of mutations consistent in the presence of failure. 
+
+* * *
+
+** Next:** [Shell Commands][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Security][6]   ** [Contents][8]**
+
+[2]: Shell_Commands.html
+[4]: accumulo_user_manual.html
+[6]: Security.html
+[8]: Contents.html
+[9]: Administration.html#Hardware
+[10]: Administration.html#Network
+[11]: Administration.html#Installation
+[12]: Administration.html#Dependencies
+[13]: Administration.html#Configuration
+[14]: Administration.html#Initialization
+[15]: Administration.html#Running
+[16]: Administration.html#Monitoring
+[17]: Administration.html#Logging
+[18]: Administration.html#Recovery
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Analytics.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Analytics.md b/1.3/user_manual/Analytics.md
new file mode 100644
index 0000000..ba833be
--- /dev/null
+++ b/1.3/user_manual/Analytics.md
@@ -0,0 +1,150 @@
+---
+title: "User Manual: Analytics"
+---
+
+** Next:** [Security][2] ** Up:** [Apache Accumulo User Manual Version 1.3][4] 
** Previous:** [High-Speed Ingest][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [MapReduce][9]
+* [Aggregating Iterators][10]
+* [Statistical Modeling][11]
+
+* * *
+
+## <a id="Analytics"></a> Analytics
+
+Accumulo supports more advanced data processing than simply keeping keys 
sorted and performing efficient lookups. Analytics can be developed by using 
MapReduce and Iterators in conjunction with Accumulo tables. 
+
+## <a id="MapReduce"></a> MapReduce
+
+Accumulo tables can be used as the source and destination of MapReduce jobs. 
To use an Accumulo table with a MapReduce job (specifically with the new Hadoop 
API as of version 0.20), configure the job parameters to use the 
AccumuloInputFormat and AccumuloOutputFormat. Accumulo specific parameters can 
be set via these two format classes to do the following: 
+
+* Authenticate and provide user credentials for the input 
+* Restrict the scan to a range of rows 
+* Restrict the input to a subset of available columns 
+
+### <a id="Mapper_and_Reducer_classes"></a> Mapper and Reducer classes
+
+To read from an Accumulo table create a Mapper with the following class 
parameterization and be sure to configure the AccumuloInputFormat. 
+    
+    
+    class MyMapper extends Mapper<Key,Value,WritableComparable,Writable> {
+        public void map(Key k, Value v, Context c) {
+            // transform key and value data here
+        }
+    }
+    
+
+To write to an Accumulo table, create a Reducer with the following class 
parameterization and be sure to configure the AccumuloOutputFormat. The key 
emitted from the Reducer identifies the table to which the mutation is sent. 
This allows a single Reducer to write to more than one table if desired. A 
default table can be configured using the AccumuloOutputFormat, in which case 
the output table name does not have to be passed to the Context object within 
the Reducer. 
+    
+    
+    class MyReducer extends Reducer<WritableComparable, Writable, Text, 
Mutation> {
+    
+        public void reduce(WritableComparable key, Iterable<Writable> values, 
Context c) {
+            
+            Mutation m = new Mutation(new Text(key.toString()));
+            
+            // add columns to the mutation based on the input key and values
+            
+            c.write(new Text("output-table"), m);
+        }
+    }
+    
+
+The Text object passed as the output should contain the name of the table to 
which this mutation should be applied. The Text can be null in which case the 
mutation will be applied to the default table name specified in the 
AccumuloOutputFormat options. 
+
+### <a id="AccumuloInputFormat_options"></a> AccumuloInputFormat options
+    
+    
+    Job job = new Job(getConf());
+    AccumuloInputFormat.setInputInfo(job,
+            "user",
+            "passwd".getBytes(),
+            "table",
+            new Authorizations());
+    
+    AccumuloInputFormat.setZooKeeperInstance(job, "myinstance",
+            "zooserver-one,zooserver-two");
+    
+
+**Optional settings:**
+
+To restrict Accumulo to a set of row ranges: 
+    
+    
+    ArrayList<Range> ranges = new ArrayList<Range>();
+    // populate array list of row ranges ...
+    AccumuloInputFormat.setRanges(job, ranges);
+    
+
+To restrict accumulo to a list of columns: 
+    
+    
+    ArrayList<Pair<Text,Text>> columns = new ArrayList<Pair<Text,Text>>();
+    // populate list of columns
+    AccumuloInputFormat.fetchColumns(job, columns);
+    
+
+To use a regular expression to match row IDs: 
+    
+    
+    AccumuloInputFormat.setRegex(job, RegexType.ROW, "^.*");
+    
+
+### <a id="AccumuloOutputFormat_options"></a> AccumuloOutputFormat options
+    
+    
+    boolean createTables = true;
+    String defaultTable = "mytable";
+    
+    AccumuloOutputFormat.setOutputInfo(job,
+            "user",
+            "passwd".getBytes(),
+            createTables,
+            defaultTable);
+    
+    AccumuloOutputFormat.setZooKeeperInstance(job, "myinstance",
+            "zooserver-one,zooserver-two");
+    
+
+**Optional Settings:**
+    
+    
+    AccumuloOutputFormat.setMaxLatency(job, 300); // milliseconds
+    AccumuloOutputFormat.setMaxMutationBufferSize(job, 5000000); // bytes
+    
+
+An example of using MapReduce with Accumulo can be found at   
+accumulo/docs/examples/README.mapred 
+
+## <a id="Aggregating_Iterators"></a> Aggregating Iterators
+
+Many applications can benefit from the ability to aggregate values across 
common keys. This can be done via aggregating iterators and is similar to the 
Reduce step in MapReduce. This provides the ability to define online, 
incrementally updated analytics without the overhead or latency associated with 
batch-oriented MapReduce jobs. 
+
+All that is needed to aggregate values of a table is to identify the fields 
over which values will be grouped, insert mutations with those fields as the 
key, and configure the table with an aggregating iterator that supports the 
summarization operation desired. 
+
+The only restriction on an aggregating iterator is that the aggregator 
developer should not assume that all values for a given key have been seen, 
since new mutations can be inserted at any time. This precludes, for example, 
relying on the total number of values seen in the aggregation, as would be 
needed when calculating an average. 
+
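+For instance, a summing aggregator satisfies this restriction because partial 
sums can themselves be summed later. A hedged sketch (the class name is 
illustrative, not one of the aggregators shipped with Accumulo): 
+    
+    
+    // assumes the 1.3-era Aggregator interface in
+    // org.apache.accumulo.core.iterators.aggregation
+    public class SummingAggregator implements Aggregator {
+        private long sum = 0;
+    
+        public void reset() { sum = 0; }
+    
+        public void collect(Value value) {
+            sum += Long.parseLong(new String(value.get()));
+        }
+    
+        public Value aggregate() {
+            return new Value(Long.toString(sum).getBytes());
+        }
+    }
+    
+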
+### <a id="Feature_Vectors"></a> Feature Vectors
+
+An interesting use of aggregating iterators within an Accumulo table is to 
store feature vectors for use in machine learning algorithms. For example, many 
algorithms such as k-means clustering, support vector machines, anomaly 
detection, etc. use the concept of a feature vector and the calculation of 
distance metrics to learn a particular model. The columns in an Accumulo table 
can be used to efficiently store sparse features and their weights to be 
incrementally updated via the use of an aggregating iterator. 
+
+## <a id="Statistical_Modeling"></a> Statistical Modeling
+
+Statistical models that need to be updated by many machines in parallel could 
be similarly stored within an Accumulo table. For example, a MapReduce job that 
is iteratively updating a global statistical model could have each map or 
reduce worker reference the parts of the model to be read and updated through 
an embedded Accumulo client. 
+
+Using Accumulo this way enables efficient and fast lookups and updates of 
small pieces of information in a random access pattern, which is complementary 
to MapReduce's sequential access model. 
+
+* * *
+
+** Next:** [Security][2] ** Up:** [Apache Accumulo User Manual Version 1.3][4] 
** Previous:** [High-Speed Ingest][6]   ** [Contents][8]**
+
+[2]: Security.html
+[4]: accumulo_user_manual.html
+[6]: High_Speed_Ingest.html
+[8]: Contents.html
+[9]: Analytics.html#MapReduce
+[10]: Analytics.html#Aggregating_Iterators
+[11]: Analytics.html#Statistical_Modeling
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Contents.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Contents.md b/1.3/user_manual/Contents.md
new file mode 100644
index 0000000..7deba5e
--- /dev/null
+++ b/1.3/user_manual/Contents.md
@@ -0,0 +1,232 @@
+---
+title: "User Manual: Contents"
+---
+
+** Next:** [Introduction][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Apache Accumulo User Manual Version 1.3][4]   
+  
+  
+
+
+### <a id="Contents"></a> Contents
+
+* [Introduction][2]
+* [Accumulo Design][6]
+
+    * [Data Model][7]
+    * [Architecture][8]
+    * [Components][9]
+
+        * [Tablet Server][10]
+        * [Loggers][11]
+        * [Garbage Collector][12]
+        * [Master][13]
+        * [Client][14]
+
+    * [Data Management][15]
+    * [Tablet Service][16]
+    * [Compactions][17]
+    * [Fault-Tolerance][18]
+
+  
+
+* [Accumulo Shell][19]
+
+    * [Basic Administration][20]
+    * [Table Maintenance][21]
+    * [User Administration][22]
+
+  
+
+* [Writing Accumulo Clients][23]
+
+    * [Writing Data][24]
+
+        * [BatchWriter][25]
+
+    * [Reading Data][26]
+
+        * [Scanner][27]
+        * [BatchScanner][28]
+
+  
+
+* [Table Configuration][29]
+
+    * [Locality Groups][30]
+
+        * [Managing Locality Groups via the Shell][31]
+        * [Managing Locality Groups via the Client API][32]
+
+    * [Constraints][33]
+    * [Bloom Filters][34]
+    * [Iterators][35]
+
+        * [Setting Iterators via the Shell][36]
+        * [Setting Iterators Programmatically][37]
+        * [Versioning Iterators and Timestamps][38]
+        * [Filtering Iterators][39]
+
+    * [Aggregating Iterators][40]
+    * [Block Cache][41]
+
+  
+
+* [Table Design][42]
+
+    * [Basic Table][43]
+    * [RowID Design][44]
+    * [Indexing][45]
+    * [Entity-Attribute and Graph Tables][46]
+    * [Document-Partitioned Indexing][47]
+
+  
+
+* [High-Speed Ingest][48]
+
+    * [Pre-Splitting New Tables][49]
+    * [Multiple Ingester Clients][50]
+    * [Bulk Ingest][51]
+    * [MapReduce Ingest][52]
+
+  
+
+* [Analytics][53]
+
+    * [MapReduce][54]
+
+        * [Mapper and Reducer classes][55]
+        * [AccumuloInputFormat options][56]
+        * [AccumuloOutputFormat options][57]
+
+    * [Aggregating Iterators][58]
+
+        * [Feature Vectors][59]
+
+    * [Statistical Modeling][60]
+
+  
+
+* [Security][61]
+
+    * [Security Label Expressions][62]
+    * [Security Label Expression Syntax][63]
+    * [Authorization][64]
+    * [Secure Authorizations Handling][65]
+    * [Query Services Layer][66]
+
+  
+
+* [Administration][67]
+
+    * [Hardware][68]
+    * [Network][69]
+    * [Installation][70]
+    * [Dependencies][71]
+    * [Configuration][72]
+
+        * [Edit conf/accumulo-env.sh][73]
+        * [Cluster Specification][74]
+        * [Accumulo Settings][75]
+        * [Deploy Configuration][76]
+
+    * [Initialization][77]
+    * [Running][78]
+
+        * [Starting Accumulo][79]
+        * [Stopping Accumulo][80]
+
+    * [Monitoring][81]
+    * [Logging][82]
+    * [Recovery][83]
+
+  
+
+* [Shell Commands][84]
+
+  
+
+
+* * *
+
+[2]: Introduction.html
+[4]: accumulo_user_manual.html
+[6]: Accumulo_Design.html
+[7]: Accumulo_Design.html#Data_Model
+[8]: Accumulo_Design.html#Architecture
+[9]: Accumulo_Design.html#Components
+[10]: Accumulo_Design.html#Tablet_Server
+[11]: Accumulo_Design.html#Loggers
+[12]: Accumulo_Design.html#Garbage_Collector
+[13]: Accumulo_Design.html#Master
+[14]: Accumulo_Design.html#Client
+[15]: Accumulo_Design.html#Data_Management
+[16]: Accumulo_Design.html#Tablet_Service
+[17]: Accumulo_Design.html#Compactions
+[18]: Accumulo_Design.html#Fault-Tolerance
+[19]: Accumulo_Shell.html
+[20]: Accumulo_Shell.html#Basic_Administration
+[21]: Accumulo_Shell.html#Table_Maintenance
+[22]: Accumulo_Shell.html#User_Administration
+[23]: Writing_Accumulo_Clients.html
+[24]: Writing_Accumulo_Clients.html#Writing_Data
+[25]: Writing_Accumulo_Clients.html#BatchWriter
+[26]: Writing_Accumulo_Clients.html#Reading_Data
+[27]: Writing_Accumulo_Clients.html#Scanner
+[28]: Writing_Accumulo_Clients.html#BatchScanner
+[29]: Table_Configuration.html
+[30]: Table_Configuration.html#Locality_Groups
+[31]: Table_Configuration.html#Managing_Locality_Groups_via_the_Shell
+[32]: Table_Configuration.html#Managing_Locality_Groups_via_the_Client_API
+[33]: Table_Configuration.html#Constraints
+[34]: Table_Configuration.html#Bloom_Filters
+[35]: Table_Configuration.html#Iterators
+[36]: Table_Configuration.html#Setting_Iterators_via_the_Shell
+[37]: Table_Configuration.html#Setting_Iterators_Programmatically
+[38]: Table_Configuration.html#Versioning_Iterators_and_Timestamps
+[39]: Table_Configuration.html#Filtering_Iterators
+[40]: Table_Configuration.html#Aggregating_Iterators
+[41]: Table_Configuration.html#Block_Cache
+[42]: Table_Design.html
+[43]: Table_Design.html#Basic_Table
+[44]: Table_Design.html#RowID_Design
+[45]: Table_Design.html#Indexing
+[46]: Table_Design.html#Entity-Attribute_and_Graph_Tables
+[47]: Table_Design.html#Document-Partitioned_Indexing
+[48]: High_Speed_Ingest.html
+[49]: High_Speed_Ingest.html#Pre-Splitting_New_Tables
+[50]: High_Speed_Ingest.html#Multiple_Ingester_Clients
+[51]: High_Speed_Ingest.html#Bulk_Ingest
+[52]: High_Speed_Ingest.html#MapReduce_Ingest
+[53]: Analytics.html
+[54]: Analytics.html#MapReduce
+[55]: Analytics.html#Mapper_and_Reducer_classes
+[56]: Analytics.html#AccumuloInputFormat_options
+[57]: Analytics.html#AccumuloOutputFormat_options
+[58]: Analytics.html#Aggregating_Iterators
+[59]: Analytics.html#Feature_Vectors
+[60]: Analytics.html#Statistical_Modeling
+[61]: Security.html
+[62]: Security.html#Security_Label_Expressions
+[63]: Security.html#Security_Label_Expression_Syntax
+[64]: Security.html#Authorization
+[65]: Security.html#Secure_Authorizations_Handling
+[66]: Security.html#Query_Services_Layer
+[67]: Administration.html
+[68]: Administration.html#Hardware
+[69]: Administration.html#Network
+[70]: Administration.html#Installation
+[71]: Administration.html#Dependencies
+[72]: Administration.html#Configuration
+[73]: Administration.html#Edit_conf/accumulo-env.sh
+[74]: Administration.html#Cluster_Specification
+[75]: Administration.html#Accumulo_Settings
+[76]: Administration.html#Deploy_Configuration
+[77]: Administration.html#Initialization
+[78]: Administration.html#Running
+[79]: Administration.html#Starting_Accumulo
+[80]: Administration.html#Stopping_Accumulo
+[81]: Administration.html#Monitoring
+[82]: Administration.html#Logging
+[83]: Administration.html#Recovery
+[84]: Shell_Commands.html
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/High_Speed_Ingest.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/High_Speed_Ingest.md 
b/1.3/user_manual/High_Speed_Ingest.md
new file mode 100644
index 0000000..a30e395
--- /dev/null
+++ b/1.3/user_manual/High_Speed_Ingest.md
@@ -0,0 +1,85 @@
+---
+title: "User Manual: High Speed Ingest"
+---
+
+** Next:** [Analytics][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Table Design][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [Pre-Splitting New Tables][9]
+* [Multiple Ingester Clients][10]
+* [Bulk Ingest][11]
+* [MapReduce Ingest][12]
+
+* * *
+
+## <a id="High-Speed_Ingest"></a> High-Speed Ingest
+
+Accumulo is often used as part of a larger data processing and storage system. 
To maximize the performance of a parallel system involving Accumulo, the 
ingestion and query components should be designed to provide enough parallelism 
and concurrency to avoid creating bottlenecks for users and other systems 
writing to and reading from Accumulo. There are several ways to achieve high 
ingest performance. 
+
+## <a id="Pre-Splitting_New_Tables"></a> Pre-Splitting New Tables
+
+New tables consist of a single tablet by default. As mutations are applied, 
the table grows and splits into multiple tablets which are balanced by the 
Master across TabletServers. This implies that the aggregate ingest rate will 
be limited to fewer servers than are available within the cluster until the 
table has reached the point where there are tablets on every TabletServer. 
+
+Pre-splitting a table ensures that there are as many tablets as desired 
available before ingest begins to take advantage of all the parallelism 
possible with the cluster hardware. Tables can be split anytime by using the 
shell: 
+    
+    
+    user@myinstance mytable> addsplits -sf /local_splitfile -t mytable
+    
+
+For the purposes of providing parallelism to ingest it is not necessary to 
create more tablets than there are physical machines within the cluster as the 
aggregate ingest rate is a function of the number of physical machines. Note 
that the aggregate ingest rate is still subject to the number of machines 
running ingest clients, and the distribution of rowIDs across the table. The 
aggregate ingest rate will be suboptimal if there are many inserts into a 
small number of rowIDs. 
+
+## <a id="Multiple_Ingester_Clients"></a> Multiple Ingester Clients
+
+Accumulo is capable of scaling to very high rates of ingest, which is 
dependent upon not just the number of TabletServers in operation but also the 
number of ingest clients. This is because a single client, while capable of 
batching mutations and sending them to all TabletServers, is ultimately limited 
by the amount of data that can be processed on a single machine. The aggregate 
ingest rate will scale linearly with the number of clients up to the point at 
which either the aggregate I/O of TabletServers or total network bandwidth 
capacity is reached. 
+
+In operational settings where high rates of ingest are paramount, clusters are 
often configured to dedicate some number of machines solely to running Ingester 
Clients. The exact ratio of clients to TabletServers necessary for optimum 
ingestion rates will vary according to the distribution of resources per 
machine and by data type. 
+
+## <a id="Bulk_Ingest"></a> Bulk Ingest
+
+Accumulo supports the ability to import files produced by an external process 
such as MapReduce into an existing table. In some cases it may be faster to 
load data this way rather than ingesting it through clients using 
BatchWriters. This allows a large number of machines to format data the way 
Accumulo expects. The new files can then simply be introduced to Accumulo via a 
shell command. 
+
+To configure MapReduce to format data in preparation for bulk loading, the job 
should be set to use a range partitioner instead of the default hash 
partitioner. The range partitioner uses the split points of the Accumulo table 
that will receive the data. The split points can be obtained from the shell and 
used by the MapReduce RangePartitioner. Note that this is only useful if the 
existing table is already split into multiple tablets. 
+    
+    
+    user@myinstance mytable> getsplits
+    aa
+    ab
+    ac
+    ...
+    zx
+    zy
+    zz
+    
+
+Run the MapReduce job, using the AccumuloFileOutputFormat to create the files 
to be introduced to Accumulo. Once this is complete, the files can be added to 
Accumulo via the shell: 
+    
+    
+    user@myinstance mytable> importdirectory /files_dir /failures
+    
+
+Note that the paths referenced are directories within the same HDFS instance 
over which Accumulo is running. Accumulo places any files that failed to be 
added to the second directory specified. 
+
+A complete example of using Bulk Ingest can be found at   
+accumulo/docs/examples/README.bulkIngest 
+
+## <a id="MapReduce_Ingest"></a> MapReduce Ingest
+
+It is possible to efficiently write many mutations to Accumulo in parallel via 
a MapReduce job. In this scenario the MapReduce is written to process data that 
lives in HDFS and write mutations to Accumulo using the AccumuloOutputFormat. 
See the MapReduce section under Analytics for details. 
+
+An example of using MapReduce can be found under   
+accumulo/docs/examples/README.mapred 
+
+* * *
+
+** Next:** [Analytics][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Table Design][6]   ** [Contents][8]**
+
+[2]: Analytics.html
+[4]: accumulo_user_manual.html
+[6]: Table_Design.html
+[8]: Contents.html
+[9]: High_Speed_Ingest.html#Pre-Splitting_New_Tables
+[10]: High_Speed_Ingest.html#Multiple_Ingester_Clients
+[11]: High_Speed_Ingest.html#Bulk_Ingest
+[12]: High_Speed_Ingest.html#MapReduce_Ingest
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Introduction.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Introduction.md b/1.3/user_manual/Introduction.md
new file mode 100644
index 0000000..b8e6247
--- /dev/null
+++ b/1.3/user_manual/Introduction.md
@@ -0,0 +1,23 @@
+---
+title: "User Manual: Introduction"
+---
+
+** Next:** [Accumulo Design][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Contents][6]   ** [Contents][6]**   
+  
+
+
+## <a id="Introduction"></a> Introduction
+
+Apache Accumulo is a highly scalable structured store based on Google's 
BigTable. Accumulo is written in Java and operates over the Hadoop Distributed 
File System (HDFS), which is part of the popular Apache Hadoop project. 
Accumulo supports efficient storage and retrieval of structured data, including 
queries for ranges, and provides support for using Accumulo tables as input and 
output for MapReduce jobs. 
+
+Accumulo features automatic load-balancing and partitioning, data compression 
and fine-grained security labels. 
+
+  
+
+
+* * *
+
+[2]: Accumulo_Design.html
+[4]: accumulo_user_manual.html
+[6]: Contents.html
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Security.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Security.md b/1.3/user_manual/Security.md
new file mode 100644
index 0000000..f0cc9bb
--- /dev/null
+++ b/1.3/user_manual/Security.md
@@ -0,0 +1,108 @@
+---
+title: "User Manual: Security"
+---
+
+** Next:** [Administration][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Analytics][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [Security Label Expressions][9]
+* [Security Label Expression Syntax][10]
+* [Authorization][11]
+* [Secure Authorizations Handling][12]
+* [Query Services Layer][13]
+
+* * *
+
+## <a id="Security"></a> Security
+
+Accumulo extends the BigTable data model to implement a security mechanism 
known as cell-level security. Every key-value pair has its own security label, 
stored under the column visibility element of the key, which is used to 
determine whether a given user meets the security requirements to read the 
value. This enables data of various security levels to be stored within the 
same row, and users of varying degrees of access to query the same table, while 
preserving data confidentiality. 
+
+## <a id="Security_Label_Expressions"></a> Security Label Expressions
+
+When mutations are applied, users can specify a security label for each value. 
This is done as the Mutation is created by passing a ColumnVisibility object to 
the put() method: 
+    
+    
+    Text rowID = new Text("row1");
+    Text colFam = new Text("myColFam");
+    Text colQual = new Text("myColQual");
+    ColumnVisibility colVis = new ColumnVisibility("public");
+    long timestamp = System.currentTimeMillis();
+    
+    Value value = new Value("myValue".getBytes());
+    
+    Mutation mutation = new Mutation(rowID);
+    mutation.put(colFam, colQual, colVis, timestamp, value);
+    
+
+## <a id="Security_Label_Expression_Syntax"></a> Security Label Expression 
Syntax
+
+Security labels consist of a set of user-defined tokens that are required to 
read the value the label is associated with. The set of tokens required can be 
specified using syntax that supports logical AND and OR combinations of tokens, 
as well as nesting groups of tokens together. 
+
+For example, suppose within our organization we want to label our data values 
with security labels defined in terms of user roles. We might have tokens such 
as: 
+    
+    
+    admin
+    audit
+    system
+    
+
+These can be specified alone or combined using logical operators: 
+    
+    
+    // Users must have admin privileges:
+    admin
+    
+    // Users must have admin and audit privileges
+    admin&audit
+    
+    // Users with either admin or audit privileges
+    admin|audit
+    
+    // Users must have audit and one or both of admin or system
+    (admin|system)&audit
+    
+
+When both `|` and `&` operators are used, parentheses must be used to specify 
precedence of the operators. 
+
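+In the Java API these expressions are passed to the same ColumnVisibility class 
shown earlier, for example (a sketch reusing the tokens above): 
+    
+    
+    ColumnVisibility colVis = new ColumnVisibility("(admin|system)&audit");
+    
+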
+## <a id="Authorization"></a> Authorization
+
+When clients attempt to read data from Accumulo, any security labels present 
are examined against the set of authorizations passed by the client code when 
the Scanner or BatchScanner are created. If the authorizations are determined 
to be insufficient to satisfy the security label, the value is suppressed from 
the set of results sent back to the client. 
+
+Authorizations are specified as a comma-separated list of tokens the user 
possesses: 
+    
+    
+    // user possesses both admin and system level access
+    Authorizations auths = new Authorizations("admin","system");
+    
+    Scanner s = connector.createScanner("table", auths);
+    
+
+## <a id="Secure_Authorizations_Handling"></a> Secure Authorizations Handling
+
+Because the client can pass any authorization tokens to Accumulo, applications 
must be designed to obtain users' authorization tokens from a trusted 3rd party 
rather than having the users specify their authorizations directly. 
+
+Often production systems will integrate with Public-Key Infrastructure (PKI) 
and designate client code within the query layer to negotiate with PKI servers 
in order to authenticate users and retrieve their authorization tokens 
(credentials). This requires users to specify only the information necessary to 
authenticate themselves to the system. Once user identity is established, their 
credentials can be accessed by the client code and passed to Accumulo outside 
of the reach of the user. 
+
+## <a id="Query_Services_Layer"></a> Query Services Layer
+
+Since the primary method of interaction with Accumulo is through the Java API, 
production environments often call for the implementation of a query services 
layer. This can be done using web services in containers such as Apache Tomcat, 
but it is not a requirement. The query services layer provides a platform on 
which user-facing applications can be built. It allows application designers to 
isolate potentially complex query logic and provides a convenient point at 
which to perform essential security functions. 
+
+Several production environments choose to implement authentication at this 
layer, where user identifiers are used to retrieve their access credentials, 
which are then cached within the query layer and presented to Accumulo through 
the Authorizations mechanism. 
+
+Typically, the query services layer sits between Accumulo and user 
workstations. 
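+
+The sketch below is one rough illustration of such a layer; QueryService, its constructor, and fetchAuthorizationsFromPKI() are hypothetical names for the application's own code and trusted authorization source, not part of the Accumulo API: 
+    
+    
+    public abstract class QueryService {
+        private final Connector connector;
+        private final Map<String,Authorizations> authCache = new HashMap<String,Authorizations>();
+    
+        public QueryService(Connector connector) {
+            this.connector = connector;
+        }
+    
+        // integration point with the trusted PKI / authorization service
+        protected abstract Authorizations fetchAuthorizationsFromPKI(String userId);
+    
+        public List<String> query(String userId, String table) throws Exception {
+            // credentials are fetched once from the trusted source, then cached
+            Authorizations auths = authCache.get(userId);
+            if (auths == null) {
+                auths = fetchAuthorizationsFromPKI(userId);
+                authCache.put(userId, auths);
+            }
+            List<String> results = new ArrayList<String>();
+            Scanner scanner = connector.createScanner(table, auths);
+            for (Map.Entry<Key,Value> entry : scanner) {
+                results.add(entry.getKey().getRow() + " " + entry.getValue());
+            }
+            return results;
+        }
+    }
+    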
+
+* * *
+
+** Next:** [Administration][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Analytics][6]   ** [Contents][8]**
+
+[2]: Administration.html
+[4]: accumulo_user_manual.html
+[6]: Analytics.html
+[8]: Contents.html
+[9]: Security.html#Security_Label_Expressions
+[10]: Security.html#Security_Label_Expression_Syntax
+[11]: Security.html#Authorization
+[12]: Security.html#Secure_Authorizations_Handling
+[13]: Security.html#Query_Services_Layer
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Shell_Commands.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Shell_Commands.md 
b/1.3/user_manual/Shell_Commands.md
new file mode 100644
index 0000000..94f9080
--- /dev/null
+++ b/1.3/user_manual/Shell_Commands.md
@@ -0,0 +1,534 @@
+---
+title: "User Manual: Shell Commands"
+---
+
+** Up:** [Apache Accumulo User Manual Version 1.3][3] ** Previous:** 
[Administration][5]   ** [Contents][7]**   
+  
+
+
+## <a id="Shell_Commands"></a> Shell Commands
+
+**?**   
+  
+    usage: ? [ <command> <command> ] [-?] [-np]   
+    description: provides information about the available commands   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+  
+**about**   
+  
+    usage: about [-?] [-v]   
+    description: displays information about this program   
+      -?,-help  display this help   
+      -v,-verbose displays details session information   
+  
+**addsplits**   
+  
+    usage: addsplits [<split> <split> ] [-?] [-b64] [-sf <filename>] -t 
<tableName>   
+    description: add split points to an existing table   
+      -?,-help  display this help   
+      -b64,-base64encoded decode encoded split points   
+      -sf,-splits-file <filename> file with newline separated list of rows to 
add   
+           to table   
+      -t,-table <tableName>  name of a table to add split points to   
+  
+**authenticate**   
+  
+    usage: authenticate <username> [-?]   
+    description: verifies a user's credentials   
+      -?,-help  display this help   
+  
+**bye**   
+  
+    usage: bye [-?]   
+    description: exits the shell   
+      -?,-help  display this help   
+  
+**classpath**   
+  
+    usage: classpath [-?]   
+    description: lists the current files on the classpath   
+      -?,-help  display this help   
+  
+**clear**   
+  
+    usage: clear [-?]   
+    description: clears the screen   
+      -?,-help  display this help   
+  
+**cls**   
+  
+    usage: cls [-?]   
+    description: clears the screen   
+      -?,-help  display this help   
+  
+**compact**   
+  
+    usage: compact [-?] [-override] -p <pattern> | -t <tableName>   
+    description: sets all tablets for a table to major compact as soon as 
possible   
+           (based on current time)   
+      -?,-help  display this help   
+      -override  override a future scheduled compaction   
+      -p,-pattern <pattern>  regex pattern of table names to flush   
+      -t,-table <tableName>  name of a table to flush   
+  
+**config**   
+  
+    usage: config [-?] [-d <property> | -f <string> | -s <property=value>] 
[-np]   
+           [-t <table>]   
+    description: prints system properties and table specific properties   
+      -?,-help  display this help   
+      -d,-delete <property>  delete a per-table property   
+      -f,-filter <string> show only properties that contain this string   
+      -np,-no-pagination  disables pagination of output   
+      -s,-set <property=value>  set a per-table property   
+      -t,-table <table>  display/set/delete properties for specified table   
+  
+**createtable**   
+  
+    usage: createtable <tableName> [-?] [-a   
+           <<columnfamily>[:<columnqualifier>]=<aggregation_class>>] [-b64]   
+           [-cc <table>] [-cs <table> | -sf <filename>] [-ndi]  [-tl | -tm]   
+    description: creates a new table, with optional aggregators and optionally 
  
+           pre-split   
+      -?,-help  display this help   
+      -a,-aggregator <<columnfamily>[:<columnqualifier>]=<aggregation_class>>  
 
+           comma separated column=aggregator   
+      -b64,-base64encoded decode encoded split points   
+      -cc,-copy-config <table>  table to copy configuration from   
+      -cs,-copy-splits <table>  table to copy current splits from   
+      -ndi,-no-default-iterators  prevents creation of the normal default 
iterator   
+           set   
+      -sf,-splits-file <filename> file with newline separated list of rows to  
 
+           create a pre-split table   
+      -tl,-time-logical  use logical time   
+      -tm,-time-millis  use time in milliseconds   
+  
+**createuser**   
+  
+    usage: createuser <username> [-?] [-s <comma-separated-authorizations>]   
+    description: creates a new user   
+      -?,-help  display this help   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
+  
+**debug**   
+  
+    usage: debug [ on | off ] [-?]   
+    description: turns debug logging on or off   
+      -?,-help  display this help   
+  
+**delete**   
+  
+    usage: delete <row> <colfamily> <colqualifier> [-?] [-l <expression>] [-t  
 
+           <timestamp>]   
+    description: deletes a record from a table   
+      -?,-help  display this help   
+      -l,-authorization-label <expression>  formatted authorization label 
expression   
+      -t,-timestamp <timestamp>  timestamp to use for insert   
+  
+**deleteiter**   
+  
+    usage: deleteiter [-?] [-majc] [-minc] -n <itername> [-scan] [-t <table>]  
 
+    description: deletes a table-specific iterator   
+      -?,-help  display this help   
+      -majc,-major-compaction  applied at major compaction   
+      -minc,-minor-compaction  applied at minor compaction   
+      -n,-name <itername> iterator to delete   
+      -scan,-scan-time  applied at scan time   
+      -t,-table <table>  tableName   
+  
+**deletemany**   
+  
+    usage: deletemany [-?] [-b <start-row>] [-c   
+           <<columnfamily>[:<columnqualifier>]>] [-e <end-row>] [-f] [-np]   
+           [-s <comma-separated-authorizations>] [-st]   
+    description: scans a table and deletes the resulting records   
+      -?,-help  display this help   
+      -b,-begin-row <start-row>  begin row (inclusive)   
+      -c,-columns <<columnfamily>[:<columnqualifier>]>  comma-separated 
columns   
+      -e,-end-row <end-row>  end row (inclusive)   
+      -f,-force  forces deletion without prompting   
+      -np,-no-pagination  disables pagination of output   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
+           (all user auths are used if this argument is not specified)   
+      -st,-show-timestamps  enables displaying timestamps   
+  
+**deletescaniter**   
+  
+    usage: deletescaniter [-?] [-a] [-n <itername>] [-t <table>]   
+    description: deletes a table-specific scan iterator so it is no longer 
used   
+           during this shell session   
+      -?,-help  display this help   
+      -a,-all  delete all for tableName   
+      -n,-name <itername> iterator to delete   
+      -t,-table <table>  tableName   
+  
+**deletetable**   
+  
+    usage: deletetable <tableName> [-?]   
+    description: deletes a table   
+      -?,-help  display this help   
+  
+**deleteuser**   
+  
+    usage: deleteuser <username> [-?]   
+    description: deletes a user   
+      -?,-help  display this help   
+  
+**droptable**   
+  
+    usage: droptable <tableName> [-?]   
+    description: deletes a table   
+      -?,-help  display this help   
+  
+**dropuser**   
+  
+    usage: dropuser <username> [-?]   
+    description: deletes a user   
+      -?,-help  display this help   
+  
+**egrep**   
+  
+    usage: egrep <regex> <regex> [-?] [-b <start-row>] [-c   
+           <<columnfamily>[:<columnqualifier>]>] [-e <end-row>] [-np] [-s   
+           <comma-separated-authorizations>] [-st] [-t <arg>]   
+    description: egreps a table in parallel on the server side (uses java 
regex)   
+      -?,-help  display this help   
+      -b,-begin-row <start-row>  begin row (inclusive)   
+      -c,-columns <<columnfamily>[:<columnqualifier>]>  comma-separated 
columns   
+      -e,-end-row <end-row>  end row (inclusive)   
+      -np,-no-pagination  disables pagination of output   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
+           (all user auths are used if this argument is not specified)   
+      -st,-show-timestamps  enables displaying timestamps   
+      -t,-num-threads <arg>  num threads   
+  
+**execfile**   
+  
+    usage: execfile [-?] [-v]   
+    description: specifies a file containing accumulo commands to execute   
+      -?,-help  display this help   
+      -v,-verbose displays command prompt as commands are executed   
+  
+**exit**   
+  
+    usage: exit [-?]   
+    description: exits the shell   
+      -?,-help  display this help   
+  
+**flush**   
+  
+    usage: flush [-?] -p <pattern> | -t <tableName>   
+    description: makes a best effort to flush tables from memory to disk   
+      -?,-help  display this help   
+      -p,-pattern <pattern>  regex pattern of table names to flush   
+      -t,-table <tableName>  name of a table to flush   
+  
+**formatter**   
+  
+    usage: formatter [-?] -f <className> | -l | -r   
+    description: specifies a formatter to use for displaying database entries  
 
+      -?,-help  display this help   
+      -f,-formatter <className>  fully qualified name of formatter class to 
use   
+      -l,-list  display the current formatter   
+      -r,-reset  reset to default formatter   
+  
+**getauths**   
+  
+    usage: getauths [-?] [-u <user>]   
+    description: displays the maximum scan authorizations for a user   
+      -?,-help  display this help   
+      -u,-user <user>  user to operate on   
+  
+**getgroups**   
+  
+    usage: getgroups [-?] -t <table>   
+    description: gets the locality groups for a given table   
+      -?,-help  display this help   
+      -t,-table <table>  get locality groups for specified table   
+  
+**getsplits**   
+  
+    usage: getsplits [-?] [-b64] [-m <num>] [-o <file>] [-v]   
+    description: retrieves the current split points for tablets in the current 
table   
+      -?,-help  display this help   
+      -b64,-base64encoded encode the split points   
+      -m,-max <num>  specifies the maximum number of splits to create   
+      -o,-output <file>  specifies a local file to write the splits to   
+      -v,-verbose print out the tablet information with start/end rows   
+  
+**grant**   
+  
+    usage: grant <permission> [-?] -p <pattern> | -s | -t <table>  -u 
<username>   
+    description: grants system or table permissions for a user   
+      -?,-help  display this help   
+      -p,-pattern <pattern>  regex pattern of tables to grant permissions on   
+      -s,-system  grant a system permission   
+      -t,-table <table>  grant a table permission on this table   
+      -u,-user <username> user to operate on   
+  
+**grep**   
+  
+    usage: grep <term> <term> [-?] [-b <start-row>] [-c   
+           <<columnfamily>[:<columnqualifier>]>] [-e <end-row>] [-np] [-s   
+           <comma-separated-authorizations>] [-st] [-t <arg>]   
+    description: searches a table for a substring, in parallel, on the server 
side   
+      -?,-help  display this help   
+      -b,-begin-row <start-row>  begin row (inclusive)   
+      -c,-columns <<columnfamily>[:<columnqualifier>]>  comma-separated 
columns   
+      -e,-end-row <end-row>  end row (inclusive)   
+      -np,-no-pagination  disables pagination of output   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
+           (all user auths are used if this argument is not specified)   
+      -st,-show-timestamps  enables displaying timestamps   
+      -t,-num-threads <arg>  num threads   
+  
+**help**   
+  
+    usage: help [ <command> <command> ] [-?] [-np]   
+    description: provides information about the available commands   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+  
+**importdirectory**   
+  
+    usage: importdirectory <directory> <failureDirectory> [-?] [-a <num>] [-f 
<num>]   
+           [-g] [-v]   
+    description: bulk imports an entire directory of data files to the current 
table   
+      -?,-help  display this help   
+      -a,-numAssignThreads <num>  number of assign threads for import 
(default: 20)   
+      -f,-numFileThreads <num>  number of threads to process files (default: 
8)   
+      -g,-disableGC  prevents imported files from being deleted by the garbage 
  
+           collector   
+      -v,-verbose displays statistics from the import   
+  
+**info**   
+  
+    usage: info [-?] [-v]   
+    description: displays information about this program   
+      -?,-help  display this help   
+      -v,-verbose displays details session information   
+  
+**insert**   
+  
+    usage: insert <row> <colfamily> <colqualifier> <value> [-?] [-l 
<expression>] [-t   
+           <timestamp>]   
+    description: inserts a record   
+      -?,-help  display this help   
+      -l,-authorization-label <expression>  formatted authorization label 
expression   
+      -t,-timestamp <timestamp>  timestamp to use for insert   
+  
+**listscans**   
+  
+    usage: listscans [-?] [-np] [-ts <tablet server>]   
+    description: list what scans are currently running in accumulo. See the   
+           org.apache.accumulo.core.client.admin.ActiveScan javadoc for more 
information   
+           about columns.   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+      -ts,-tabletServer <tablet server>  list scans for a specific tablet 
server   
+  
+**masterstate**   
+  
+    usage: masterstate <NORMAL|SAFE_MODE|CLEAN_STOP> [-?]   
+    description: set the master state: NORMAL, SAFE_MODE or CLEAN_STOP   
+      -?,-help  display this help   
+  
+**offline**   
+  
+    usage: offline [-?] -p <pattern> | -t <tableName>   
+    description: starts the process of taking table offline   
+      -?,-help  display this help   
+      -p,-pattern <pattern>  regex pattern of table names to flush   
+      -t,-table <tableName>  name of a table to flush   
+  
+**online**   
+  
+    usage: online [-?] -p <pattern> | -t <tableName>   
+    description: starts the process of putting a table online   
+      -?,-help  display this help   
+      -p,-pattern <pattern>  regex pattern of table names to flush   
+      -t,-table <tableName>  name of a table to flush   
+  
+**passwd**   
+  
+    usage: passwd [-?] [-u <user>]   
+    description: changes a user's password   
+      -?,-help  display this help   
+      -u,-user <user>  user to operate on   
+  
+**quit**   
+  
+    usage: quit [-?]   
+    description: exits the shell   
+      -?,-help  display this help   
+  
+**renametable**   
+  
+    usage: renametable <current table name> <new table name> [-?]   
+    description: rename a table   
+      -?,-help  display this help   
+  
+**revoke**   
+  
+    usage: revoke <permission> [-?] -s | -t <table>  -u <username>   
+    description: revokes system or table permissions from a user   
+      -?,-help  display this help   
+      -s,-system  revoke a system permission   
+      -t,-table <table>  revoke a table permission on this table   
+      -u,-user <username> user to operate on   
+  
+**scan**   
+  
+    usage: scan [-?] [-b <start-row>] [-c 
<<columnfamily>[:<columnqualifier>]>] [-e   
+           <end-row>] [-np] [-s <comma-separated-authorizations>] [-st]   
+    description: scans the table, and displays the resulting records   
+      -?,-help  display this help   
+      -b,-begin-row <start-row>  begin row (inclusive)   
+      -c,-columns <<columnfamily>[:<columnqualifier>]>  comma-separated 
columns   
+      -e,-end-row <end-row>  end row (inclusive)   
+      -np,-no-pagination  disables pagination of output   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
+           (all user auths are used if this argument is not specified)   
+      -st,-show-timestamps  enables displaying timestamps   
+  
+**select**   
+  
+    usage: select <row> <columnfamily> <columnqualifier> [-?] [-np] [-s   
+           <comma-separated-authorizations>] [-st]   
+    description: scans for and displays a single record   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
+      -st,-show-timestamps  enables displaying timestamps   
+  
+**selectrow**   
+  
+    usage: selectrow <row> [-?] [-np] [-s <comma-separated-authorizations>] 
[-st]   
+    description: scans a single row and displays all resulting records   
+      -?,-help  display this help   
+      -np,-no-pagination  disables pagination of output   
+      -s,-scan-authorizations <comma-separated-authorizations>  scan 
authorizations   
+      -st,-show-timestamps  enables displaying timestamps   
+  
+**setauths**   
+  
+    usage: setauths [-?] -c | -s <comma-separated-authorizations>  [-u <user>] 
  
+    description: sets the maximum scan authorizations for a user   
+      -?,-help  display this help   
+      -c,-clear-authorizations  clears the scan authorizations   
+      -s,-scan-authorizations <comma-separated-authorizations>  set the scan   
+           authorizations   
+      -u,-user <user>  user to operate on   
+  
+**setgroups**   
+  
+    usage: setgroups <group>=<col fam>,<col fam> <group>=<col fam>,<col fam>   
+           [-?] -t <table>   
+    description: sets the locality groups for a given table (for binary or 
commas,   
+           use Java API)   
+      -?,-help  display this help   
+      -t,-table <table>  get locality groups for specified table   
+  
+**setiter**   
+  
+    usage: setiter [-?] -agg | -class <name> | -filter | -nolabel | -regex | 
-vers   
+           [-majc] [-minc] [-n <itername>]  -p <pri>  [-scan] [-t <table>]   
+    description: sets a table-specific iterator   
+      -?,-help  display this help   
+      -agg,-aggregator  an aggregating type   
+      -class,-class-name <name>  a java class type   
+      -filter,-filter  a filtering type   
+      -majc,-major-compaction  applied at major compaction   
+      -minc,-minor-compaction  applied at minor compaction   
+      -n,-name <itername> iterator to set   
+      -nolabel,-no-label  a no-labeling type   
+      -p,-priority <pri>  the order in which the iterator is applied   
+      -regex,-regular-expression  a regex matching type   
+      -scan,-scan-time  applied at scan time   
+      -t,-table <table>  tableName   
+      -vers,-version  a versioning type   
+  
+**setscaniter**   
+  
+    usage: setscaniter [-?] -agg | -class <name> | -filter | -nolabel | -regex 
|   
+           -vers  [-n <itername>]  -p <pri> [-t <table>]   
+    description: sets a table-specific scan iterator for this shell session   
+      -?,-help  display this help   
+      -agg,-aggregator  an aggregating type   
+      -class,-class-name <name>  a java class type   
+      -filter,-filter  a filtering type   
+      -n,-name <itername> iterator to set   
+      -nolabel,-no-label  a no-labeling type   
+      -p,-priority <pri>  the order in which the iterator is applied   
+      -regex,-regular-expression  a regex matching type   
+      -t,-table <table>  tableName   
+      -vers,-version  a versioning type   
+  
+**systempermissions**   
+  
+    usage: systempermissions [-?]   
+    description: displays a list of valid system permissions   
+      -?,-help  display this help   
+  
+**table**   
+  
+    usage: table <tableName> [-?]   
+    description: switches to the specified table   
+      -?,-help  display this help   
+  
+**tablepermissions**   
+  
+    usage: tablepermissions [-?]   
+    description: displays a list of valid table permissions   
+      -?,-help  display this help   
+  
+**tables**   
+  
+    usage: tables [-?] [-l]   
+    description: displays a list of all existing tables   
+      -?,-help  display this help   
+      -l,-list-ids  display internal table ids along with the table name   
+  
+**trace**   
+  
+    usage: trace [ on | off ] [-?]   
+    description: turns trace logging on or off   
+      -?,-help  display this help   
+  
+**user**   
+  
+    usage: user <username> [-?]   
+    description: switches to the specified user   
+      -?,-help  display this help   
+  
+**userpermissions**   
+  
+    usage: userpermissions [-?] [-u <user>]   
+    description: displays a user's system and table permissions   
+      -?,-help  display this help   
+      -u,-user <user>  user to operate on   
+  
+**users**   
+  
+    usage: users [-?]   
+    description: displays a list of existing users   
+      -?,-help  display this help   
+  
+**whoami**   
+  
+    usage: whoami [-?]   
+    description: reports the current user name   
+      -?,-help  display this help   
+  
+  
+
+
+* * *
+
+** Up:** [Apache Accumulo User Manual Version 1.3][3] ** Previous:** 
[Administration][5]   ** [Contents][7]**
+
+[3]: accumulo_user_manual.html
+[5]: Administration.html
+[7]: Contents.html
+
