http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Table_Configuration.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Table_Configuration.md 
b/1.3/user_manual/Table_Configuration.md
new file mode 100644
index 0000000..172a10d
--- /dev/null
+++ b/1.3/user_manual/Table_Configuration.md
@@ -0,0 +1,330 @@
+---
+title: "User Manual: Table Configuration"
+---
+
+** Next:** [Table Design][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Writing Accumulo Clients][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [Locality Groups][9]
+* [Constraints][10]
+* [Bloom Filters][11]
+* [Iterators][12]
+* [Aggregating Iterators][13]
+* [Block Cache][14]
+
+* * *
+
+## <a id="Table_Configuration"></a> Table Configuration
+
+Accumulo tables have a few options that can be configured to alter the default 
behavior of Accumulo as well as improve performance based on the data stored. 
These include locality groups, constraints, bloom filters, iterators, and the 
block cache. 
+
+## <a id="Locality_Groups"></a> Locality Groups
+
+Accumulo supports storing sets of column families separately on disk to allow 
clients to scan over columns that are frequently used together efficiently and 
to avoid scanning over column families that are not requested. After a locality 
group is set, Scanner and BatchScanner operations will automatically take 
advantage of it whenever the fetchColumnFamily() method is used. 
+
+By default tables place all column families into the same "default" locality 
group. Additional locality groups can be configured at any time via the shell 
or programmatically as follows: 
+
+### <a id="Managing_Locality_Groups_via_the_Shell"></a> Managing Locality 
Groups via the Shell
+    
+    
+    usage: setgroups <group>=<col fam>{,<col fam>}{ <group>=<col fam>{,<col
+    fam>}} [-?] -t <table>
+    
+    user@myinstance mytable> setgroups -t mytable group_one=colf1,colf2
+    
+    user@myinstance mytable> getgroups -t mytable
+    group_one=colf1,colf2
+    
+
+### <a id="Managing_Locality_Groups_via_the_Client_API"></a> Managing Locality 
Groups via the Client API
+    
+    
+    Connector conn;
+    
+    HashMap<String,Set<Text>> localityGroups =
+        new HashMap<String, Set<Text>>();
+    
+    HashSet<Text> metadataColumns = new HashSet<Text>();
+    metadataColumns.add(new Text("domain"));
+    metadataColumns.add(new Text("link"));
+    
+    HashSet<Text> contentColumns = new HashSet<Text>();
+    contentColumns.add(new Text("body"));
+    contentColumns.add(new Text("images"));
+    
+    localityGroups.put("metadata", metadataColumns);
+    localityGroups.put("content", contentColumns);
+    
+    conn.tableOperations().setLocalityGroups("mytable", localityGroups);
+    
+    // existing locality groups can be obtained as follows
+    Map<String, Set<Text>> groups =
+        conn.tableOperations().getLocalityGroups("mytable");
+    
+
+The assignment of Column Families to Locality Groups can be changed at any 
time. The physical movement of column families into their new locality groups 
takes place via the Major Compaction process that runs periodically in the 
background. A Major Compaction can also be scheduled to take place immediately 
through the shell: 
+    
+    
+    user@myinstance mytable> compact -t mytable
+    
+
+## <a id="Constraints"></a> Constraints
+
+Accumulo supports constraints applied on mutations at insert time. This can be 
used to disallow certain inserts according to a user-defined policy. Any 
mutation that fails to meet the requirements of the constraint is rejected and 
sent back to the client. 
+
+Constraints can be enabled by setting a table property as follows: 
+    
+    
+    user@myinstance mytable> config -t mytable -s 
table.constraint.1=com.test.ExampleConstraint
+    user@myinstance mytable> config -t mytable -s 
table.constraint.2=com.test.AnotherConstraint
+    user@myinstance mytable> config -t mytable -f constraint
+    ---------+--------------------------------+----------------------------
+    SCOPE    | NAME                           | VALUE
+    ---------+--------------------------------+----------------------------
+    table    | table.constraint.1............ | com.test.ExampleConstraint
+    table    | table.constraint.2............ | com.test.AnotherConstraint
+    ---------+--------------------------------+----------------------------
+    
+
+Currently there are no general-purpose constraints provided with the Accumulo 
distribution. New constraints can be created by writing a Java class that 
implements the org.apache.accumulo.core.constraints.Constraint interface. 
+
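+A minimal sketch of such a constraint is shown below. It assumes the 1.3 
interface declares check(Environment, Mutation) and 
getViolationDescription(short); the class name and the rule it enforces are 
purely illustrative. 
+    
+    
+    package com.test;
+    
+    import java.util.Collections;
+    import java.util.List;
+    
+    import org.apache.accumulo.core.constraints.Constraint;
+    import org.apache.accumulo.core.data.Mutation;
+    
+    // hypothetical constraint that rejects mutations with an empty row ID
+    public class ExampleConstraint implements Constraint {
+    
+        private static final short EMPTY_ROW = 1;
+    
+        public String getViolationDescription(short violationCode) {
+            return violationCode == EMPTY_ROW ? "row ID was empty" : null;
+        }
+    
+        public List<Short> check(Environment env, Mutation mutation) {
+            if (mutation.getRow().length == 0)
+                return Collections.singletonList(EMPTY_ROW);
+            return null; // no violations
+        }
+    }
+    
+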
+To deploy a new constraint, create a jar file containing the class 
implementing the new constraint and place it in the lib directory of the 
Accumulo installation. New constraint jars can be added to Accumulo and enabled 
without restarting, but any change to an existing constraint class requires 
Accumulo to be restarted. 
+
+An example of constraints can be found in   
+accumulo/docs/examples/README.constraints with corresponding code under   
+accumulo/src/examples/main/java/accumulo/examples/constraints. 
+
+## <a id="Bloom_Filters"></a> Bloom Filters
+
+As mutations are applied to an Accumulo table, several files are created per 
tablet. If bloom filters are enabled, Accumulo will create and load a small 
data structure into memory to determine whether a file contains a given key 
before opening the file. This can speed up lookups considerably. 
+
+To enable bloom filters, enter the following command in the Shell: 
+    
+    
+    user@myinstance> config -t mytable -s table.bloom.enabled=true
+    
+
+An extensive example of using Bloom Filters can be found at   
+accumulo/docs/examples/README.bloom . 
+
+## <a id="Iterators"></a> Iterators
+
+Iterators provide a modular mechanism for adding functionality to be executed 
by TabletServers when scanning or compacting data. This allows users to 
efficiently summarize, filter, and aggregate data. In fact, the built-in 
features of cell-level security and age-off are implemented using Iterators. 
+
+### <a id="Setting_Iterators_via_the_Shell"></a> Setting Iterators via the 
Shell
+    
+    
+    usage: setiter [-?] -agg | -class <name> | -filter | -nolabel | 
+    -regex | -vers [-majc] [-minc] [-n <itername>] -p <pri> [-scan] 
+    [-t <table>]
+    
+    user@myinstance mytable> setiter -t mytable -scan -p 10 -n myiter
+    
+
+### <a id="Setting_Iterators_Programmatically"></a> Setting Iterators 
Programmatically
+    
+    
+    scanner.setScanIterators(
+        15, // priority
+        "com.company.MyIterator", // class name
+        "myiter"); // name this iterator
+    
+
+Some iterators take additional parameters from client code, as in the 
following example: 
+    
+    
+    bscan.setScanIteratorOption(
+        "myiter", // iterator reference
+        "myoptionname",
+        "myoptionvalue");
+    
+
+Tables support separate Iterator settings to be applied at scan time, upon 
minor compaction, and upon major compaction. For most uses, tables will have 
identical iterator settings for all three to avoid inconsistent results. 
+
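+For example, the same iterator can be applied to all three scopes with a 
single shell command, as is done in the filtering example later in this 
chapter: 
+    
+    
+    user@myinstance mytable> setiter -t mytable -scan -minc -majc -p 10 -n myiter
+    
+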
+### <a id="Versioning_Iterators_and_Timestamps"></a> Versioning Iterators and 
Timestamps
+
+Accumulo provides the capability to manage versioned data through the use of 
timestamps within the Key. If a timestamp is not specified in the key created 
by the client then the system will set the timestamp to the current time. Two 
keys with identical rowIDs and columns but different timestamps are considered 
two versions of the same key. If two inserts are made into Accumulo with the 
same rowID, column, and timestamp, then the behavior is non-deterministic. 
+
+Timestamps are sorted in descending order, so the most recent data comes 
first. Accumulo can be configured to return the top k versions, or versions 
later than a given date. The default is to return the one most recent version. 
+
+The version policy can be changed by changing the VersioningIterator options 
for a table as follows: 
+    
+    
+    user@myinstance mytable> config -t mytable -s
+    table.iterator.scan.vers.opt.maxVersions=3
+    
+    user@myinstance mytable> config -t mytable -s
+    table.iterator.minc.vers.opt.maxVersions=3
+    
+    user@myinstance mytable> config -t mytable -s
+    table.iterator.majc.vers.opt.maxVersions=3
+    
+
+#### <a id="Logical_Time"></a> Logical Time
+
+Accumulo 1.2 introduced the concept of logical time. This ensures that 
timestamps set by Accumulo always move forward. This helps avoid problems 
caused by TabletServers that have different time settings. The per-tablet 
counter gives unique, one-up timestamps on a per-mutation basis. When using 
time in milliseconds, if two things arrive within the same millisecond then 
both receive the same timestamp. 
+
+A table can be configured to use logical timestamps at creation time as 
follows: 
+    
+    
+    user@myinstance> createtable -tl logical
+    
+
+#### <a id="Deletes"></a> Deletes
+
+Deletes are special keys in Accumulo that get sorted along with all the other 
data. When a delete key is inserted, Accumulo will not show anything that has a 
timestamp less than or equal to the delete key. During major compaction, any 
keys older than a delete key are omitted from the new file created, and the 
omitted keys are removed from disk as part of the regular garbage collection 
process. 
+
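+From client code, a delete is added to a Mutation much like a put. A minimal 
sketch, reusing the client API from the previous chapter (the column names are 
illustrative): 
+    
+    
+    Mutation m = new Mutation(new Text("row1"));
+    
+    // inserts a delete key covering this column and any older versions
+    m.putDelete(new Text("colf1"), new Text("colq1"));
+    
+    writer.addMutation(m); // writer is a BatchWriter, as shown earlier
+    
+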
+### <a id="Filtering_Iterators"></a> Filtering Iterators
+
+When scanning over a set of key-value pairs, it is possible to apply an 
arbitrary filtering policy through the use of a FilteringIterator. These types 
of iterators return only key-value pairs that satisfy the filter logic. 
Accumulo has two built-in filtering iterators that can be configured on any 
table: AgeOff and RegEx. More can be added by writing a Java class that 
implements the   
+org.apache.accumulo.core.iterators.filter.Filter interface. 
+
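+A minimal sketch of a custom filter is shown below. It assumes the interface 
declares init(Map) for option handling, as the AgeOffFilter's ttl option 
suggests, along with accept(Key, Value); the class and its rule are 
hypothetical. 
+    
+    
+    package com.company;
+    
+    import java.util.Map;
+    
+    import org.apache.accumulo.core.data.Key;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.accumulo.core.iterators.filter.Filter;
+    
+    // hypothetical filter that drops entries with empty values
+    public class NonEmptyValueFilter implements Filter {
+    
+        public void init(Map<String,String> options) {
+            // this filter takes no options
+        }
+    
+        public boolean accept(Key k, Value v) {
+            return v.get().length > 0; // keep only non-empty values
+        }
+    }
+    
+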
+The AgeOff filter can be configured to remove data older than a certain date 
or a fixed amount of time from the present. The following example sets a table 
to delete everything inserted over 30 seconds ago: 
+    
+    
+    user@myinstance> createtable filtertest
+    user@myinstance filtertest> setiter -t filtertest -scan -minc -majc -p
+    10 -n myfilter -filter
+    
+    FilteringIterator uses Filters to accept or reject key/value pairs
+    ----------> entering options: <filterPriorityNumber>
+    <ageoff|regex|filterClass>
+    
+    ----------> set org.apache.accumulo.core.iterators.FilteringIterator option
+    (<name> <value>, hit enter to skip): 0 ageoff
+    
+    ----------> set org.apache.accumulo.core.iterators.FilteringIterator option
+    (<name> <value>, hit enter to skip):
+    AgeOffFilter removes entries with timestamps more than <ttl>
+    milliseconds old
+    
+    ----------> set org.apache.accumulo.core.iterators.filter.AgeOffFilter 
parameter
+    currentTime, if set, use the given value as the absolute time in
+    milliseconds as the current time of day:
+    
+    ----------> set org.apache.accumulo.core.iterators.filter.AgeOffFilter 
parameter
+    ttl, time to live (milliseconds): 30000
+    
+    user@myinstance filtertest>
+    user@myinstance filtertest> scan
+    user@myinstance filtertest> insert foo a b c
+    insert successful
+    user@myinstance filtertest> scan
+    foo a:b [] c
+    
+    ... wait 30 seconds ...
+    
+    user@myinstance filtertest> scan
+    user@myinstance filtertest>
+    
+
+To see the iterator settings for a table, use: 
+    
+    
+    user@example filtertest> config -t filtertest -f iterator
+    ---------+------------------------------------------+------------------
+    SCOPE    | NAME                                     | VALUE
+    ---------+------------------------------------------+------------------
+    table    | table.iterator.majc.myfilter ........... |
+    10,org.apache.accumulo.core.iterators.FilteringIterator
+    table    | table.iterator.majc.myfilter.opt.0 ..... |
+    org.apache.accumulo.core.iterators.filter.AgeOffFilter
+    table    | table.iterator.majc.myfilter.opt.0.ttl . | 30000
+    table    | table.iterator.minc.myfilter ........... |
+    10,org.apache.accumulo.core.iterators.FilteringIterator
+    table    | table.iterator.minc.myfilter.opt.0 ..... |
+    org.apache.accumulo.core.iterators.filter.AgeOffFilter
+    table    | table.iterator.minc.myfilter.opt.0.ttl . | 30000
+    table    | table.iterator.scan.myfilter ........... |
+    10,org.apache.accumulo.core.iterators.FilteringIterator
+    table    | table.iterator.scan.myfilter.opt.0 ..... |
+    org.apache.accumulo.core.iterators.filter.AgeOffFilter
+    table    | table.iterator.scan.myfilter.opt.0.ttl . | 30000
+    ---------+------------------------------------------+------------------
+    
+
+## <a id="Aggregating_Iterators"></a> Aggregating Iterators
+
+Accumulo allows aggregating iterators to be configured on tables and column 
families. When an aggregating iterator is set, the iterator is applied across 
the values associated with any keys that share rowID, column family, and column 
qualifier. This is similar to the reduce step in MapReduce, which applies some 
function to all the values associated with a particular key. 
+
+For example, if an aggregating iterator were configured on a table and the 
following mutations were inserted: 
+    
+    
+    Row     Family Qualifier Timestamp  Value
+    rowID1  colfA  colqA     20100101   1
+    rowID1  colfA  colqA     20100102   1
+    
+
+The table would reflect only one aggregate value: 
+    
+    
+    rowID1  colfA  colqA     -          2
+    
+
+Aggregating iterators can be enabled for a table as follows: 
+    
+    
+    user@myinstance> createtable perDayCounts -a
+    day=org.apache.accumulo.core.iterators.aggregation.StringSummation
+    
+    user@myinstance perDayCounts> insert row1 day 20080101 1
+    user@myinstance perDayCounts> insert row1 day 20080101 1
+    user@myinstance perDayCounts> insert row1 day 20080103 1
+    user@myinstance perDayCounts> insert row2 day 20080101 1
+    user@myinstance perDayCounts> insert row3 day 20080101 1
+    
+    user@myinstance perDayCounts> scan
+    row1 day:20080101 [] 2
+    row1 day:20080103 [] 1
+    row2 day:20080101 [] 1
+    row3 day:20080101 [] 1
+    
+
+Accumulo includes the following aggregators: 
+
+* **LongSummation**: expects values of type long and adds them. 
+* **StringSummation**: expects numbers represented as strings and adds them. 
+* **StringMax**: expects numbers as strings and retains the maximum number 
inserted. 
+* **StringMin**: expects numbers as strings and retains the minimum number 
inserted. 
+
+Additional Aggregators can be added by creating a Java class that implements   
+**org.apache.accumulo.core.iterators.aggregation.Aggregator** and adding a jar 
containing that class to Accumulo's lib directory. 
+
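+A minimal sketch, assuming the interface declares reset(), collect(Value), and 
aggregate(); the class below mimics the built-in StringSummation and is only 
illustrative: 
+    
+    
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.accumulo.core.iterators.aggregation.Aggregator;
+    
+    // hypothetical aggregator that sums values encoded as decimal strings
+    public class StringSum implements Aggregator {
+    
+        private long sum = 0;
+    
+        public void reset() {
+            sum = 0;
+        }
+    
+        public void collect(Value value) {
+            sum += Long.parseLong(new String(value.get()));
+        }
+    
+        public Value aggregate() {
+            return new Value(Long.toString(sum).getBytes());
+        }
+    }
+    
+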
+An example of an aggregator can be found under   
+accumulo/src/examples/main/java/org/apache/accumulo/examples/aggregation/SortedSetAggregator.java
 
+
+## <a id="Block_Cache"></a> Block Cache
+
+In order to increase throughput of commonly accessed entries, Accumulo employs 
a block cache. This block cache buffers data in memory so that it doesn't have 
to be read from disk. The RFile format that Accumulo prefers is a mix of 
index blocks and data blocks, where the index blocks are used to find the 
appropriate data blocks. Typical queries to Accumulo result in a binary search 
over several index blocks followed by a linear scan of one or more data blocks. 
+
+The block cache can be configured on a per-table basis, and all tablets hosted 
on a tablet server share a single resource pool. To configure the size of the 
tablet server's block cache, set the following properties: 
+    
+    
+    tserver.cache.data.size: Specifies the size of the cache for file data 
blocks.
+    tserver.cache.index.size: Specifies the size of the cache for file indices.
+    
+
+To enable the block cache for your table, set the following properties: 
+    
+    
+    table.cache.block.enable: Determines whether file (data) block cache is 
enabled.
+    table.cache.index.enable: Determines whether index cache is enabled.
+    
+
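+For example, both table properties can be set from the shell with the same 
config command used elsewhere in this chapter: 
+    
+    
+    user@myinstance> config -t mytable -s table.cache.block.enable=true
+    user@myinstance> config -t mytable -s table.cache.index.enable=true
+    
+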
+The block cache can have a significant effect on alleviating hot spots, as 
well as reducing query latency. It is enabled by default for the !METADATA 
table. 
+
+* * *
+
+** Next:** [Table Design][2] ** Up:** [Apache Accumulo User Manual Version 
1.3][4] ** Previous:** [Writing Accumulo Clients][6]   ** [Contents][8]**
+
+[2]: Table_Design.html
+[4]: accumulo_user_manual.html
+[6]: Writing_Accumulo_Clients.html
+[8]: Contents.html
+[9]: Table_Configuration.html#Locality_Groups
+[10]: Table_Configuration.html#Constraints
+[11]: Table_Configuration.html#Bloom_Filters
+[12]: Table_Configuration.html#Iterators
+[13]: Table_Configuration.html#Aggregating_Iterators
+[14]: Table_Configuration.html#Block_Cache
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Table_Design.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Table_Design.md b/1.3/user_manual/Table_Design.md
new file mode 100644
index 0000000..f5dc981
--- /dev/null
+++ b/1.3/user_manual/Table_Design.md
@@ -0,0 +1,197 @@
+---
+title: "User Manual: Table Design"
+---
+
+** Next:** [High-Speed Ingest][2] ** Up:** [Apache Accumulo User Manual 
Version 1.3][4] ** Previous:** [Table Configuration][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [Basic Table][9]
+* [RowID Design][10]
+* [Indexing][11]
+* [Entity-Attribute and Graph Tables][12]
+* [Document-Partitioned Indexing][13]
+
+* * *
+
+## <a id="Table_Design"></a> Table Design
+
+## <a id="Basic_Table"></a> Basic Table
+
+Since Accumulo tables are sorted by row ID, each table can be thought of as 
being indexed by the row ID. Lookups performed by row ID can be executed 
quickly, by doing a binary search, first across the tablets, and then within a 
tablet. Clients should choose a row ID carefully in order to support their 
desired application. A simple rule is to select a unique identifier as the row 
ID for each entity to be stored and assign all the other attributes to be 
tracked to be columns under this row ID. For example, if we have the following 
data in a comma-separated file: 
+    
+    
+        userid,age,address,account-balance
+    
+
+We might choose to store this data using the userid as the rowID and the rest 
of the data in column families: 
+    
+    
+    Mutation m = new Mutation(new Text(userid));
+    // each attribute is stored in its own column family, with an empty qualifier
+    m.put(new Text("age"), new Text(""), new Value(age.getBytes()));
+    m.put(new Text("address"), new Text(""), new Value(address.getBytes()));
+    m.put(new Text("balance"), new Text(""), new Value(account_balance.getBytes()));
+    
+    writer.addMutation(m);
+    
+
+We could then retrieve any of the columns for a specific userid by specifying 
the userid as the range of a scanner and fetching specific columns: 
+    
+    
+    Range r = new Range(userid, userid); // single row
+    Scanner s = conn.createScanner("userdata", auths);
+    s.setRange(r);
+    s.fetchColumnFamily(new Text("age"));
+    
+    for(Entry<Key,Value> entry : s)
+        System.out.println(entry.getValue().toString());
+    
+
+## <a id="RowID_Design"></a> RowID Design
+
+Often it is necessary to transform the rowID in order to have rows ordered in 
a way that is optimal for anticipated access patterns. A good example of this 
is reversing the order of components of internet domain names in order to group 
rows of the same parent domain together: 
+    
+    
+    com.google.code
+    com.google.labs
+    com.google.mail
+    com.yahoo.mail
+    com.yahoo.research
+    
+
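+A small sketch of a helper that performs this transformation at ingest time 
(the method name is hypothetical): 
+    
+    
+    // reverses the components of a domain name,
+    // e.g. "code.google.com" becomes "com.google.code"
+    static String reverseDomain(String domain) {
+        String[] parts = domain.split("\\.");
+        StringBuilder sb = new StringBuilder();
+        for (int i = parts.length - 1; i >= 0; i--) {
+            sb.append(parts[i]);
+            if (i > 0)
+                sb.append('.');
+        }
+        return sb.toString();
+    }
+    
+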
+Some data may result in the creation of very large rows - rows with many 
columns. In this case the table designer may wish to split up these rows for 
better load balancing while keeping them sorted together for scanning purposes. 
This can be done by appending a random substring at the end of the row: 
+    
+    
+    com.google.code_00
+    com.google.code_01
+    com.google.code_02
+    com.google.labs_00
+    com.google.mail_00
+    com.google.mail_01
+    
+
+It could also be done by appending a string representation of some period of 
time, such as the date to the week or month: 
+    
+    
+    com.google.code_201003
+    com.google.code_201004
+    com.google.code_201005
+    com.google.labs_201003
+    com.google.mail_201003
+    com.google.mail_201004
+    
+
+Appending dates provides the additional capability of restricting a scan to a 
given date range. 
+
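+For example, with month-suffixed rowIDs as above, a scan can be restricted to 
the March through May 2010 portion of a single domain (the table name is 
illustrative): 
+    
+    
+    Scanner s = conn.createScanner("mytable", auths);
+    s.setRange(new Range("com.google.code_201003", "com.google.code_201005"));
+    
+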
+## <a id="Indexing"></a> Indexing
+
+In order to support lookups via more than one attribute of an entity, 
additional indexes can be built. However, because Accumulo tables can support 
any number of columns without specifying them beforehand, a single additional 
index will often suffice for supporting lookups of records in the main table. 
Here, the index has, as the rowID, the Value or Term from the main table, the 
column families are the same, and the column qualifier of the index table 
contains the rowID from the main table. 
+
+![converted table][14]
+
+Note: We store rowIDs in the column qualifier rather than the Value so that we 
can have more than one rowID associated with a particular term within the 
index. If we stored this in the Value we would only see one of the rows in 
which the value appears since Accumulo is configured by default to return the 
one most recent value associated with a key. 
+
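+A sketch of writing one record and its index entry under this layout; the 
names and tables are illustrative, and the empty index Value mirrors the note 
above: 
+    
+    
+    // main table: rowID = record ID, value stored under its attribute column
+    Mutation record = new Mutation(new Text("user007"));
+    record.put(new Text("age"), new Text(""), new Value("39".getBytes()));
+    
+    // index table: rowID = the term, column qualifier = the main-table rowID
+    Mutation index = new Mutation(new Text("39"));
+    index.put(new Text("age"), new Text("user007"), new Value(new byte[0]));
+    
+    // mainWriter and indexWriter are BatchWriters for the two tables
+    mainWriter.addMutation(record);
+    indexWriter.addMutation(index);
+    
+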
+Lookups can then be done by scanning the Index Table first for occurrences of 
the desired values in the columns specified, which returns a list of row IDs 
from the main table. These can then be used to retrieve each matching record, 
in its entirety or a subset of its columns, from the Main Table. 
+
+To support efficient lookups of multiple rowIDs from the same table, the 
Accumulo client library provides a BatchScanner. Users specify a set of Ranges 
to the BatchScanner, which performs the lookups in multiple threads to multiple 
servers and returns an Iterator over all the rows retrieved. The rows returned 
are NOT in sorted order, as is the case with the basic Scanner interface. 
+    
+    
+    // first we scan the index for IDs of rows matching our query
+    
+    Text term = new Text("mySearchTerm");
+    
+    HashSet<Range> matchingRows = new HashSet<Range>();
+    
+    Scanner indexScanner = conn.createScanner("index", auths);
+    indexScanner.setRange(new Range(term, term));
+    
+    // we retrieve the matching rowIDs and create a set of ranges
+    for(Entry<Key,Value> entry : indexScanner)
+        matchingRows.add(new Range(new Text(entry.getValue().get())));
+    
+    // now we pass the set of row ranges to the batch scanner to retrieve them
+    BatchScanner bscan = conn.createBatchScanner("table", auths, 10);
+    
+    bscan.setRanges(matchingRows);
+    bscan.fetchColumnFamily(new Text("attributes"));
+    
+    for(Entry<Key,Value> entry : bscan)
+        System.out.println(entry.getValue());
+    
+
+One advantage of the dynamic schema capabilities of Accumulo is that different 
fields may be indexed into the same physical table. However, it may be 
necessary to create different index tables if the terms must be formatted 
differently in order to maintain proper sort order. For example, real numbers 
must be formatted differently than their usual notation in order to be sorted 
correctly. In these cases, usually one index per unique data type will suffice. 
+
+## <a id="Entity-Attribute_and_Graph_Tables"></a> Entity-Attribute and Graph 
Tables
+
+Accumulo is ideal for storing entities and their attributes, especially if the 
attributes are sparse. It is often useful to join several datasets together on 
common entities within the same table. This can allow for the representation of 
graphs, including nodes, their attributes, and connections to other nodes. 
+
+Rather than storing individual events, Entity-Attribute or Graph tables store 
aggregate information about the entities involved in the events and the 
relationships between entities. This is often preferable when single events 
aren't very useful and when a continuously updated summarization is desired. 
+
+The physical schema for an entity-attribute or graph table is as follows: 
+
+![converted table][15]
+
+For example, to keep track of employees, managers and products the following 
entity-attribute table could be used. Note that the weights are not always 
necessary and are set to 0 when not used. 
+
+![converted table][16]   
+  
+
+
+To allow efficient updating of edge weights, an aggregating iterator can be 
configured to add the value of all mutations applied with the same key. These 
types of tables can easily be created from raw events by simply extracting the 
entities, attributes, and relationships from individual events and inserting 
the keys into Accumulo each with a count of 1. The aggregating iterator will 
take care of maintaining the edge weights. 
+
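+A sketch of the ingest side, assuming a summing aggregator (such as 
StringSummation) has been configured on the table; the entities shown are 
illustrative: 
+    
+    
+    // each observed relationship is inserted with a count of 1; the
+    // aggregating iterator maintains the running edge weight
+    Mutation edge = new Mutation(new Text("employee003"));
+    edge.put(new Text("manages"), new Text("employee004"),
+        new Value("1".getBytes()));
+    writer.addMutation(edge); // writer is a BatchWriter for the graph table
+    
+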
+## <a id="Document-Partitioned_Indexing"></a> Document-Partitioned Indexing
+
+Using a simple index as described above works well when looking for records 
that match one of a set of given criteria. When looking for records that match 
more than one criterion simultaneously, such as when looking for documents that 
contain all of the words 'the' and 'white' and 'house', there are several 
issues. 
+
+First is that the set of all records matching any one of the search terms must 
be sent to the client, which incurs a lot of network traffic. The second 
problem is that the client is responsible for performing set intersection on 
the sets of records returned to eliminate all but the records matching all 
search terms. The memory of the client may easily be overwhelmed during this 
operation. 
+
+For these reasons Accumulo includes support for a scheme known as sharded 
indexing, in which these set operations can be performed at the TabletServers 
and decisions about which records to include in the result set can be made 
without incurring network traffic. 
+
+This is accomplished via partitioning records into bins that each reside on at 
most one TabletServer, and then creating an index of terms per record within 
each bin as follows: 
+
+![converted table][17]
+
+Documents or records are mapped into bins by a user-defined ingest 
application. By storing the BinID as the RowID we ensure that all the 
information for a particular bin is contained in a single tablet and hosted on 
a single TabletServer since Accumulo never splits rows across tablets. Storing 
the Terms as column families serves to enable fast lookups of all the documents 
within this bin that contain the given term. 
+
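+A sketch of the ingest side of this layout; the binning function, bin count, 
and names are hypothetical: 
+    
+    
+    int numBins = 16; // fixed at ingest time
+    String docId = "doc042";
+    String[] docTerms = {"the", "white", "house"};
+    
+    // user-defined mapping of a document to a bin
+    int bin = Math.abs(docId.hashCode()) % numBins;
+    
+    Mutation m = new Mutation(new Text(String.format("bin%04d", bin)));
+    for (String term : docTerms)
+        m.put(new Text(term), new Text(docId), new Value(new byte[0]));
+    writer.addMutation(m); // writer is a BatchWriter for the shard table
+    
+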
+Finally, we perform set intersection operations on the TabletServer via a 
special iterator called the Intersecting Iterator. Since documents are 
partitioned into many bins, a search of all documents must search every bin. We 
can use the BatchScanner to scan all bins in parallel. The Intersecting 
Iterator should be enabled on a BatchScanner within user query code as follows: 
+    
+    
+    Text[] terms = {new Text("the"), new Text("white"), new Text("house")};
+    
+    BatchScanner bs = conn.createBatchScanner(table, auths, 20);
+    bs.setScanIterators(20, IntersectingIterator.class.getName(), "ii");
+    
+    // tells scanner to look for terms in the column family and sends terms
+    bs.setScanIteratorOption("ii",
+        IntersectingIterator.columnFamiliesOptionName,
+        IntersectingIterator.encodeColumns(terms));
+    
+    bs.setRanges(Collections.singleton(new Range()));
+    
+    for(Entry<Key,Value> entry : bs) {
+        System.out.println(" " + entry.getKey().getColumnQualifier());
+    }
+    
+
+This code effectively has the BatchScanner scan all tablets of a table, 
looking for documents that match all the given terms. Because all tablets are 
being scanned for every query, each query is more expensive than other Accumulo 
scans, which typically involve a small number of TabletServers. This reduces 
the number of concurrent queries supported and is subject to what is known as 
the 'straggler' problem in which every query runs as slow as the slowest server 
participating. 
+
+Of course, fast servers will return their results to the client, which can 
display them to the user immediately while the rest of the results arrive. If 
the results are unordered, this is quite effective, as the first results to 
arrive are as good as any others to the user. 
+
+* * *
+
+** Next:** [High-Speed Ingest][2] ** Up:** [Apache Accumulo User Manual 
Version 1.3][4] ** Previous:** [Table Configuration][6]   ** [Contents][8]**
+
+[2]: High_Speed_Ingest.html
+[4]: accumulo_user_manual.html
+[6]: Table_Configuration.html
+[8]: Contents.html
+[9]: Table_Design.html#Basic_Table
+[10]: Table_Design.html#RowID_Design
+[11]: Table_Design.html#Indexing
+[12]: Table_Design.html#Entity-Attribute_and_Graph_Tables
+[13]: Table_Design.html#Document-Partitioned_Indexing
+[14]: img2.png
+[15]: img3.png
+[16]: img4.png
+[17]: img5.png
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/Writing_Accumulo_Clients.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/Writing_Accumulo_Clients.md 
b/1.3/user_manual/Writing_Accumulo_Clients.md
new file mode 100644
index 0000000..a31f91c
--- /dev/null
+++ b/1.3/user_manual/Writing_Accumulo_Clients.md
@@ -0,0 +1,124 @@
+---
+title: "User Manual: Writing Accumulo Clients"
+---
+
+** Next:** [Table Configuration][2] ** Up:** [Apache Accumulo User Manual 
Version 1.3][4] ** Previous:** [Accumulo Shell][6]   ** [Contents][8]**   
+  
+<a id="CHILD_LINKS"></a>**Subsections**
+
+* [Writing Data][9]
+* [Reading Data][10]
+
+* * *
+
+## <a id="Writing_Accumulo_Clients"></a> Writing Accumulo Clients
+
+All clients must first identify the Accumulo instance with which they will be 
communicating. Code to do this is as follows: 
+    
+    
+    String instanceName = "myinstance";
+    String zooServers = "zooserver-one,zooserver-two";
+    Instance inst = new ZooKeeperInstance(instanceName, zooServers);
+    
+    Connector conn = new Connector(inst, "user","passwd".getBytes());
+    
+
+## <a id="Writing_Data"></a> Writing Data
+
+Data are written to Accumulo by creating Mutation objects that represent all 
the changes to the columns of a single row. The changes are made atomically in 
the TabletServer. Clients then add Mutations to a BatchWriter which submits 
them to the appropriate TabletServers. 
+
+Mutations can be created thus: 
+    
+    
+    Text rowID = new Text("row1");
+    Text colFam = new Text("myColFam");
+    Text colQual = new Text("myColQual");
+    ColumnVisibility colVis = new ColumnVisibility("public");
+    long timestamp = System.currentTimeMillis();
+    
+    Value value = new Value("myValue".getBytes());
+    
+    Mutation mutation = new Mutation(rowID);
+    mutation.put(colFam, colQual, colVis, timestamp, value);
+    
+
+### <a id="BatchWriter"></a> BatchWriter
+
+The BatchWriter is highly optimized to send Mutations to multiple 
TabletServers and automatically batches Mutations destined for the same 
TabletServer to amortize network overhead. Care must be taken to avoid changing 
the contents of any Object passed to the BatchWriter since it keeps objects in 
memory while batching. 
+
+Mutations are added to a BatchWriter thus: 
+    
+    
+    long memBuf = 1000000L; // bytes to store before sending a batch
+    long timeout = 1000L; // milliseconds to wait before sending
+    int numThreads = 10;
+    
+    BatchWriter writer =
+        conn.createBatchWriter("table", memBuf, timeout, numThreads);
+    
+    writer.addMutation(mutation);
+    
+    writer.close();
+    
+
+An example of using the batch writer can be found at   
+accumulo/docs/examples/README.batch 
+
+## <a id="Reading_Data"></a> Reading Data
+
+Accumulo is optimized to quickly retrieve the value associated with a given 
key, and to efficiently return ranges of consecutive keys and their associated 
values. 
+
+### <a id="Scanner"></a> Scanner
+
+To retrieve data, clients use a Scanner, which acts like an Iterator over 
keys and values. Scanners can be configured to start and stop at particular 
keys, and to return a subset of the columns available. 
+    
+    
+    // specify which visibilities we are allowed to see
+    Authorizations auths = new Authorizations("public");
+    
+    Scanner scan =
+        conn.createScanner("table", auths);
+    
+    scan.setRange(new Range("harry","john"));
+    scan.fetchColumnFamily(new Text("attributes"));
+    
+    for(Entry<Key,Value> entry : scan) {
+        Text row = entry.getKey().getRow();
+        Value value = entry.getValue();
+    }
+    
+
+### <a id="BatchScanner"></a> BatchScanner
+
+For some types of access, it is more efficient to retrieve several ranges 
simultaneously. This arises, for example, when accessing a set of 
non-consecutive rows whose IDs have been retrieved from a secondary index. 
+
+The BatchScanner is configured similarly to the Scanner; it can be configured 
to retrieve a subset of the columns available, but rather than passing a single 
Range, BatchScanners accept a set of Ranges. It is important to note that the 
keys returned by a BatchScanner are not in sorted order since the keys are 
streamed from multiple TabletServers in parallel. 
+    
+    
+    ArrayList<Range> ranges = new ArrayList<Range>();
+    // populate list of ranges ...
+    
+    BatchScanner bscan =
+        conn.createBatchScanner("table", auths, 10);
+    
+    bscan.setRanges(ranges);
+    bscan.fetchColumnFamily(new Text("attributes"));
+    
+    for(Entry<Key,Value> entry : bscan)
+        System.out.println(entry.getValue());
+    
+
+An example of the BatchScanner can be found at   
+accumulo/docs/examples/README.batch 
+
+* * *
+
+** Next:** [Table Configuration][2] ** Up:** [Apache Accumulo User Manual 
Version 1.3][4] ** Previous:** [Accumulo Shell][6]   ** [Contents][8]**
+
+[2]: Table_Configuration.html
+[4]: accumulo_user_manual.html
+[6]: Accumulo_Shell.html
+[8]: Contents.html
+[9]: Writing_Accumulo_Clients.html#Writing_Data
+[10]: Writing_Accumulo_Clients.html#Reading_Data
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/accumulo_user_manual.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/accumulo_user_manual.md 
b/1.3/user_manual/accumulo_user_manual.md
new file mode 100644
index 0000000..e40cda8
--- /dev/null
+++ b/1.3/user_manual/accumulo_user_manual.md
@@ -0,0 +1,49 @@
+---
+title: "User Manual: index"
+---
+
+** Next:** [Contents][2]   ** [Contents][2]**   
+  
+
+
+## Apache Accumulo User Manual   
+Version 1.3
+
+  
+
+
+* * *
+
+<a id="CHILD_LINKS"></a>
+
+* [Contents][2]
+* [Introduction][6]
+* [Accumulo Design][7]
+* [Accumulo Shell][8]
+* [Writing Accumulo Clients][9]
+* [Table Configuration][10]
+* [Table Design][11]
+* [High-Speed Ingest][12]
+* [Analytics][13]
+* [Security][14]
+* [Administration][15]
+* [Shell Commands][16]
+
+  
+
+
+* * *
+
+[2]: Contents.html
+[6]: Introduction.html
+[7]: Accumulo_Design.html
+[8]: Accumulo_Shell.html
+[9]: Writing_Accumulo_Clients.html
+[10]: Table_Configuration.html
+[11]: Table_Design.html
+[12]: High_Speed_Ingest.html
+[13]: Analytics.html
+[14]: Security.html
+[15]: Administration.html
+[16]: Shell_Commands.html
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/data_distribution.png
----------------------------------------------------------------------
diff --git a/1.3/user_manual/data_distribution.png 
b/1.3/user_manual/data_distribution.png
new file mode 100644
index 0000000..71b585b
Binary files /dev/null and b/1.3/user_manual/data_distribution.png differ

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples.md b/1.3/user_manual/examples.md
new file mode 100644
index 0000000..4654a0a
--- /dev/null
+++ b/1.3/user_manual/examples.md
@@ -0,0 +1,7 @@
+---
+title: Examples
+redirect_to: examples/
+---
+
+This page redirects to the 1.3 examples
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/aggregation.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/aggregation.md 
b/1.3/user_manual/examples/aggregation.md
new file mode 100755
index 0000000..3def5e4
--- /dev/null
+++ b/1.3/user_manual/examples/aggregation.md
@@ -0,0 +1,36 @@
+---
+title: Aggregation Example
+---
+
+This is a simple aggregation example.  To build this example, run Maven and then
+copy the produced jar into the Accumulo lib dir.  This is already done in the
+tar distribution.
+
+    $ bin/accumulo shell -u username
+    Enter current password for 'username'@'instance': ***
+    
+    Shell - Apache Accumulo Interactive Shell
+    - 
+    - version: 1.3.x-incubating
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    - 
+    - type 'help' for a list of available commands
+    - 
+    username@instance> createtable aggtest1 -a 
app=org.apache.accumulo.examples.aggregation.SortedSetAggregator
+    username@instance aggtest1> insert foo app 1 a
+    username@instance aggtest1> insert foo app 1 b
+    username@instance aggtest1> scan
+    foo app:1 []  a,b
+    username@instance aggtest1> insert foo app 1 z,1,foo,w
+    username@instance aggtest1> scan
+    foo app:1 []  1,a,b,foo,w,z
+    username@instance aggtest1> insert foo app 2 cat,dog,muskrat
+    username@instance aggtest1> insert foo app 2 mouse,bird
+    username@instance aggtest1> scan
+    foo app:1 []  1,a,b,foo,w,z
+    foo app:2 []  bird,cat,dog,mouse,muskrat
+    username@instance aggtest1> 
+
+In this example a table is created and the example set aggregator is
+applied to the column family app.

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/batch.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/batch.md 
b/1.3/user_manual/examples/batch.md
new file mode 100755
index 0000000..6989f59
--- /dev/null
+++ b/1.3/user_manual/examples/batch.md
@@ -0,0 +1,39 @@
+---
+title: Batch Writing and Scanning Example
+---
+
+This is an example of how to use the batch writer and batch scanner. To compile
+the example, run Maven and copy the produced jar into the Accumulo lib dir.
+This is already done in the tar distribution. 
+
+Below are commands that add 10000 entries to Accumulo and then do 100 random
+queries.  The write command generates random 50-byte values. 
+
+Be sure to use the name of your instance (given as instance here) and the 
appropriate 
+list of zookeeper nodes (given as zookeepers here).
+
+Before you run this, you must ensure that the user you are running as has the
+"exampleVis" authorization. (You can set this in the shell with "setauths -u 
username -s exampleVis".)
+
+    $ ./bin/accumulo shell -u root
+    > setauths -u username -s exampleVis
+    > exit
+
+You must also create the table, batchtest1, ahead of time. (In the shell, use 
"createtable batchtest1")
+
+    $ ./bin/accumulo shell -u username
+    > createtable batchtest1
+    > exit
+    $ ./bin/accumulo org.apache.accumulo.examples.client.SequentialBatchWriter 
instance zookeepers username password batchtest1 0 10000 50 20000000 500 20 
exampleVis
+    $ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner 
instance zookeepers username password batchtest1 100 0 10000 50 20 exampleVis
+    07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 
random queries...
+    07 11:33:11,112 [client.CountingVerifyingReceiver] INFO : finished
+    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : 694.44 
lookups/sec   0.14 secs
+    
+    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : num results : 100
+    
+    07 11:33:11,364 [client.CountingVerifyingReceiver] INFO : Generating 100 
random queries...
+    07 11:33:11,370 [client.CountingVerifyingReceiver] INFO : finished
+    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 
lookups/sec   0.05 secs
+    
+    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/bloom.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/bloom.md 
b/1.3/user_manual/examples/bloom.md
new file mode 100755
index 0000000..d77a966
--- /dev/null
+++ b/1.3/user_manual/examples/bloom.md
@@ -0,0 +1,113 @@
+---
+title: Bloom Filter Example
+---
+
+This example shows how to create a table with bloom filters enabled.  It also
+shows how bloom filters increase query performance when looking for values that
+do not exist in a table.
+
+Below a table named bloom_test is created and bloom filters are enabled.
+
+    $ ./accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.3.x-incubating
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    - 
+    - type 'help' for a list of available commands
+    - 
+    username@instance> setauths -u username -s exampleVis
+    username@instance> createtable bloom_test
+    username@instance bloom_test> config -t bloom_test -s 
table.bloom.enabled=true
+    username@instance bloom_test> exit
+
+Below 1 million random values are inserted into Accumulo.  The randomly
+generated rows range between 0 and 1 billion.  The random number generator is
+initialized with the seed 7.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchWriter -s 
7 instance zookeepers username password bloom_test 1000000 0 1000000000 50 
2000000 60000 3 exampleVis
+
+Below the table is flushed.  Look at the monitor page and wait for the flush
+to complete.  
+
+    $ ./bin/accumulo shell -u username -p password
+    username@instance> flush -t bloom_test
+    Flush of table bloom_test initiated...
+    username@instance> exit
+
+The flush will be finished when there are no entries in memory and the 
+number of minor compactions goes to zero. Refresh the page to see changes to 
the table.
+
+After the flush completes, 500 random queries are done against the table.  The
+same seed is used to generate the queries, therefore everything is found in the
+table.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 
7 instance zookeepers username password bloom_test 500 0 1000000000 50 20 
exampleVis
+    Generating 500 random queries...finished
+    96.19 lookups/sec   5.20 secs
+    num results : 500
+    Generating 500 random queries...finished
+    102.35 lookups/sec   4.89 secs
+    num results : 500
+
+Below another 500 queries are performed, using a different seed which results
+in nothing being found.  In this case the lookups are much faster because of
+the bloom filters.
+
+    $ ../bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner 
-s 8 instance zookeepers username password bloom_test 500 0 1000000000 50 20 
exampleVis
+    Generating 500 random queries...finished
+    2212.39 lookups/sec   0.23 secs
+    num results : 0
+    Did not find 500 rows
+    Generating 500 random queries...finished
+    4464.29 lookups/sec   0.11 secs
+    num results : 0
+    Did not find 500 rows
+
+********************************************************************************
+
+Bloom filters can also speed up lookups for entries that exist.  In Accumulo,
+data is divided into tablets and each tablet has multiple map files.  Every
+lookup in Accumulo goes to a specific tablet, where a lookup is done on each
+map file in the tablet.  So if a tablet has three map files, lookup performance
+can be three times slower than for a tablet with one map file.  However, if the
+map files contain unique sets of data, then bloom filters can help eliminate
+map files that do not contain the row being looked up.  To illustrate this,
+two identical tables were created using the following process.  One table had
+bloom filters, the other did not.  Also, the major compaction ratio was
+increased to prevent the files from being compacted into one file.
+
+ * Insert 1 million entries using  RandomBatchWriter with a seed of 7
+ * Flush the table using the shell
+ * Insert 1 million entries using  RandomBatchWriter with a seed of 8
+ * Flush the table using the shell
+ * Insert 1 million entries using  RandomBatchWriter with a seed of 9
+ * Flush the table using the shell
+
+After following the above steps, each table will have a tablet with three map
+files.  Each map file will contain 1 million entries generated with a different
+seed. 
+
+Below 500 lookups are done against the table without bloom filters using
+random number generator seed 7.  Even though only one map file will likely
+contain entries for this seed, all map files will be interrogated.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 
7 instance zookeepers username password bloom_test1 500 0 1000000000 50 20 
exampleVis
+    Generating 500 random queries...finished
+    35.09 lookups/sec  14.25 secs
+    num results : 500
+    Generating 500 random queries...finished
+    35.33 lookups/sec  14.15 secs
+    num results : 500
+
+Below the same lookups are done against the table with bloom filters.  The
+lookups were 2.86 times faster because only one map file was used, even though 
three
+map files existed.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.client.RandomBatchScanner -s 
7 instance zookeepers username password bloom_test2 500 0 1000000000 50 20 
exampleVis
+    Generating 500 random queries...finished
+    99.03 lookups/sec   5.05 secs
+    num results : 500
+    Generating 500 random queries...finished
+    101.15 lookups/sec   4.94 secs
+    num results : 500

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/bulkIngest.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/bulkIngest.md 
b/1.3/user_manual/examples/bulkIngest.md
new file mode 100755
index 0000000..9b3081c
--- /dev/null
+++ b/1.3/user_manual/examples/bulkIngest.md
@@ -0,0 +1,20 @@
+---
+title: Bulk Ingest Example
+---
+
+This is an example of how to bulk ingest data into Accumulo using MapReduce.
+
+The following commands show how to run this example.  This example creates a
+table called test_bulk which has two initial split points. Then 1000 rows of
+test data are created in HDFS. After that, the 1000 rows are ingested into
+Accumulo.  Then we verify the 1000 rows are in Accumulo. The
+first two arguments to all of the commands except for GenerateTestData are the
+Accumulo instance name and a comma-separated list of zookeepers.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.mapreduce.bulk.SetupTable 
instance zookeepers username password test_bulk row_00000333 row_00000666
+    $ ./bin/accumulo 
org.apache.accumulo.examples.mapreduce.bulk.GenerateTestData 0 1000 
bulk/test_1.txt
+    
+    $ ./bin/tool.sh lib/accumulo-examples-*[^c].jar 
org.apache.accumulo.examples.mapreduce.bulk.BulkIngestExample instance 
zookeepers username password test_bulk bulk tmp/bulkWork
+    $ ./bin/accumulo org.apache.accumulo.examples.mapreduce.bulk.VerifyIngest 
instance zookeepers username password test_bulk 0 1000
+
+For a high level discussion of bulk ingest, see the docs dir.

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/constraints.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/constraints.md 
b/1.3/user_manual/examples/constraints.md
new file mode 100755
index 0000000..58aa354
--- /dev/null
+++ b/1.3/user_manual/examples/constraints.md
@@ -0,0 +1,34 @@
+---
+title: Constraints Example
+---
+
+This is an example of how to create a table with constraints.  Below, a table
+is created with two example constraints.  One constraint does not allow
+non-alphanumeric keys.  The other constraint does not allow non-numeric values.
+Two inserts that violate these constraints are attempted and denied.  The scan
+at the end shows the inserts were not allowed. 
+
+    $ ./bin/accumulo shell -u username -p pass
+    
+    Shell - Apache Accumulo Interactive Shell
+    - 
+    - version: 1.3.x-incubating
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    - 
+    - type 'help' for a list of available commands
+    - 
+    username@instance> createtable testConstraints
+    username@instance testConstraints> config -t testConstraints -s 
table.constraint.1=org.apache.accumulo.examples.constraints.NumericValueConstraint
+    username@instance testConstraints> config -t testConstraints -s 
table.constraint.2=org.apache.accumulo.examples.constraints.AlphaNumKeyConstraint
+    username@instance testConstraints> insert r1 cf1 cq1 1111
+    username@instance testConstraints> insert r1 cf1 cq1 ABC
+      Constraint Failures:
+          
ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.constraints.NumericValueConstraint,
 violationCode:1, violationDescription:Value is not numeric, 
numberOfViolatingMutations:1)
+    username@instance testConstraints> insert r1! cf1 cq1 ABC 
+      Constraint Failures:
+          
ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.constraints.NumericValueConstraint,
 violationCode:1, violationDescription:Value is not numeric, 
numberOfViolatingMutations:1)
+          
ConstraintViolationSummary(constrainClass:org.apache.accumulo.examples.constraints.AlphaNumKeyConstraint,
 violationCode:1, violationDescription:Row was not alpha numeric, 
numberOfViolatingMutations:1)
+    username@instance testConstraints> scan
+    r1 cf1:cq1 []    1111
+    username@instance testConstraints> 

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/dirlist.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/dirlist.md 
b/1.3/user_manual/examples/dirlist.md
new file mode 100755
index 0000000..cb419c5
--- /dev/null
+++ b/1.3/user_manual/examples/dirlist.md
@@ -0,0 +1,43 @@
+---
+title: File System Archive
+---
+
+This example shows how to use Accumulo to store a file system history.  It has 
the following classes:
+
+ * Ingest.java - Recursively lists the files and directories under a given 
path, ingests their names and file info (not the file data!) into an Accumulo 
table, and indexes the file names in a separate table.
+ * QueryUtil.java - Provides utility methods for getting the info for a file, 
listing the contents of a directory, and performing single wild card searches 
on file or directory names.
+ * Viewer.java - Provides a GUI for browsing the file system information 
stored in Accumulo.
+ * FileCountMR.java - Runs MR over the file system information and writes out 
counts to an Accumulo table.
+ * FileCount.java - Accomplishes the same thing as FileCountMR, but in a 
different way.  Computes recursive counts and stores them back into the table.
+ * StringArraySummation.java - Aggregates counts for the FileCountMR reducer.
+ 
+To begin, ingest some data with Ingest.java.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.dirlist.Ingest instance 
zookeepers username password direxample dirindex exampleVis 
/local/user1/workspace
+
+Note that running this example will create tables direxample and dirindex in 
Accumulo that you should delete when you have completed the example.
+If you modify a file or add new files in the directory ingested (e.g. 
/local/user1/workspace), you can run Ingest again to add new information into 
the Accumulo tables.
+
+To browse the data ingested, use Viewer.java.  Be sure to give the "username" 
user the authorizations to see the data.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.dirlist.Viewer instance 
zookeepers username password direxample exampleVis /local/user1/workspace
+
+To list the contents of specific directories, use QueryUtil.java.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance 
zookeepers username password direxample exampleVis /local/user1
+    $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance 
zookeepers username password direxample exampleVis /local/user1/workspace
+
+To perform searches on file or directory names, also use QueryUtil.java.  
Search terms must contain no more than one wild card and cannot contain "/".
+*Note* these queries run on the _dirindex_ table instead of the direxample 
table.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance 
zookeepers username password dirindex exampleVis filename -search
+    $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance 
zookeepers username password dirindex exampleVis 'filename*' -search
+    $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance 
zookeepers username password dirindex exampleVis '*jar' -search
+    $ ./bin/accumulo org.apache.accumulo.examples.dirlist.QueryUtil instance 
zookeepers username password dirindex exampleVis filename*jar -search
+
+To count the number of direct children (directories and files) and descendants 
(children and children's descendants, directories and files), run the 
FileCountMR over the direxample table.
+The results can be written back to the same table.
+
+    $ ./bin/tool.sh lib/accumulo-examples-*[^c].jar 
org.apache.accumulo.examples.dirlist.FileCountMR instance zookeepers username 
password direxample direxample exampleVis exampleVis
+
+Alternatively, you can also run FileCount.java.

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/filter.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/filter.md 
b/1.3/user_manual/examples/filter.md
new file mode 100755
index 0000000..08b297c
--- /dev/null
+++ b/1.3/user_manual/examples/filter.md
@@ -0,0 +1,91 @@
+---
+title: Filter Example
+---
+
+This is a simple filter example.  It uses the AgeOffFilter that is provided as
+part of the core package org.apache.accumulo.core.iterators.filter.  Filters
+are used by the FilteringIterator to select desired key/value pairs (or weed
+out undesired ones).  Filters implement the
+org.apache.accumulo.core.iterators.filter.Filter interface, which contains a
+method accept(Key k, Value v).  This method returns true if the key/value pair
+is to be delivered and false if it is to be ignored.
+
+    username@instance> createtable filtertest
+    username@instance filtertest> setiter -t filtertest -scan -p 10 -n 
myfilter -filter
+    FilteringIterator uses Filters to accept or reject key/value pairs
+    ----------> entering options: <filterPriorityNumber> 
<ageoff|regex|filterClass>
+    ----------> set org.apache.accumulo.core.iterators.FilteringIterator 
option (<name> <value>, hit enter to skip): 0 ageoff
+    ----------> set org.apache.accumulo.core.iterators.FilteringIterator 
option (<name> <value>, hit enter to skip): 
+    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds 
old
+    ----------> set org.apache.accumulo.core.iterators.filter.AgeOffFilter 
parameter currentTime, if set, use the given value as the absolute time in 
milliseconds as the current time of day: 
+    ----------> set org.apache.accumulo.core.iterators.filter.AgeOffFilter 
parameter ttl, time to live (milliseconds): 30000
+    username@instance filtertest> 
+    
+    username@instance filtertest> scan
+    username@instance filtertest> insert foo a b c
+    username@instance filtertest> scan
+    foo a:b []    c
+    
+... wait 30 seconds ...
+    
+    username@instance filtertest> scan
+    username@instance filtertest>
+
+Note the absence of the entry inserted more than 30 seconds ago.  Since the
+scope was set to "scan", this means the entry is still in Accumulo, but is
+being filtered out at query time.  To delete entries from Accumulo based on
+the ages of their timestamps, AgeOffFilters should be set up for the "minc"
+and "majc" scopes, as well.
+
+To force an ageoff in the persisted data, after setting up the ageoff iterator 
+on the "minc" and "majc" scopes you can flush and compact your table. This will
+happen automatically as a background operation on any table that is being 
+actively written to, but these are the commands to force compaction:
+
+    username@instance filtertest> setiter -t filtertest -scan -minc -majc -p 10 -n myfilter -filter
+    FilteringIterator uses Filters to accept or reject key/value pairs
+    ----------> entering options: <filterPriorityNumber> <ageoff|regex|filterClass>
+    ----------> set org.apache.accumulo.core.iterators.FilteringIterator option (<name> <value>, hit enter to skip): 0 ageoff
+    ----------> set org.apache.accumulo.core.iterators.FilteringIterator option (<name> <value>, hit enter to skip): 
+    AgeOffFilter removes entries with timestamps more than <ttl> milliseconds old
+    ----------> set org.apache.accumulo.core.iterators.filter.AgeOffFilter parameter currentTime, if set, use the given value as the absolute time in milliseconds as the current time of day: 
+    ----------> set org.apache.accumulo.core.iterators.filter.AgeOffFilter parameter ttl, time to live (milliseconds): 30000
+    username@instance filtertest> 
+    
+    username@instance filtertest> flush -t filtertest
+    08 11:13:55,745 [shell.Shell] INFO : Flush of table filtertest initiated...
+    username@instance filtertest> compact -t filtertest
+    08 11:14:10,800 [shell.Shell] INFO : Compaction of table filtertest scheduled for 20110208111410EST
+    username@instance filtertest> 
+
+After the compaction runs, the newly created files will not contain any data that should be aged off, and the
+Accumulo garbage collector will remove the old files.
+
+To see the iterator settings for a table, use:
+
+    username@instance filtertest> config -t filtertest -f iterator
+    ---------+------------------------------------------+----------------------------------------------------------
+    SCOPE    | NAME                                     | VALUE
+    ---------+------------------------------------------+----------------------------------------------------------
+    table    | table.iterator.majc.myfilter .............. | 10,org.apache.accumulo.core.iterators.FilteringIterator
+    table    | table.iterator.majc.myfilter.opt.0 ........ | org.apache.accumulo.core.iterators.filter.AgeOffFilter
+    table    | table.iterator.majc.myfilter.opt.0.ttl .... | 30000
+    table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.majc.vers.opt.maxVersions .. | 1
+    table    | table.iterator.minc.myfilter .............. | 10,org.apache.accumulo.core.iterators.FilteringIterator
+    table    | table.iterator.minc.myfilter.opt.0 ........ | org.apache.accumulo.core.iterators.filter.AgeOffFilter
+    table    | table.iterator.minc.myfilter.opt.0.ttl .... | 30000
+    table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.minc.vers.opt.maxVersions .. | 1
+    table    | table.iterator.scan.myfilter .............. | 10,org.apache.accumulo.core.iterators.FilteringIterator
+    table    | table.iterator.scan.myfilter.opt.0 ........ | org.apache.accumulo.core.iterators.filter.AgeOffFilter
+    table    | table.iterator.scan.myfilter.opt.0.ttl .... | 30000
+    table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.scan.vers.opt.maxVersions .. | 1
+    ---------+------------------------------------------+----------------------------------------------------------
+    username@instance filtertest> 
+
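+The same settings can also be read programmatically; a minimal sketch, assuming the 1.3 TableOperations API and a Connector conn:
+
+    import java.util.Map.Entry;
+    
+    // print only the iterator-related properties of the table
+    for (Entry<String,String> prop : conn.tableOperations().getProperties("filtertest")) {
+      if (prop.getKey().startsWith("table.iterator."))
+        System.out.println(prop.getKey() + " = " + prop.getValue());
+    }
+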
+If you would like to apply multiple filters, this can be done using a single
+iterator. Just continue adding entries during the
+"set org.apache.accumulo.core.iterators.FilteringIterator option" step.
+Make sure to order the filterPriorityNumbers in the order you would like
+the filters to be applied.
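+For example, a second filter can be added alongside the ageoff filter by giving it the next number. A hypothetical sketch that does this through table properties (RegExFilter is part of the same filter package; the rowRegex option name is an assumption):
+
+    // Sketch: add a RegExFilter as entry 1 of the existing "myfilter"
+    // FilteringIterator by writing the table properties it reads.
+    conn.tableOperations().setProperty("filtertest",
+        "table.iterator.scan.myfilter.opt.1",
+        "org.apache.accumulo.core.iterators.filter.RegExFilter");
+    conn.tableOperations().setProperty("filtertest",
+        "table.iterator.scan.myfilter.opt.1.rowRegex", "foo.*");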

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/helloworld.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/helloworld.md 
b/1.3/user_manual/examples/helloworld.md
new file mode 100755
index 0000000..bff4977
--- /dev/null
+++ b/1.3/user_manual/examples/helloworld.md
@@ -0,0 +1,38 @@
+---
+title: Hello World Example
+---
+
+This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.helloworld in the accumulo-examples module:
+
+ * InsertWithBatchWriter.java - Inserts 10K rows (50K entries) into accumulo with each row having 5 entries
+ * InsertWithOutputFormat.java - Example of inserting data in MapReduce
+ * ReadData.java - Reads all data between two rows
+
+Log into the accumulo shell:
+
+    $ ./bin/accumulo shell -u username -p password
+
+Create a table called 'hellotable':
+
+    username@instance> createtable hellotable
+
+Launch a Java program that inserts data with a BatchWriter:
+
+    $ ./bin/accumulo org.apache.accumulo.examples.helloworld.InsertWithBatchWriter instance zookeepers hellotable username password
+
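+The core of InsertWithBatchWriter is the BatchWriter pattern; a minimal sketch (assuming the 1.3 client API; the buffer sizes and column names here are illustrative, not the exact source):
+
+    import org.apache.accumulo.core.client.BatchWriter;
+    import org.apache.accumulo.core.client.Connector;
+    import org.apache.accumulo.core.client.ZooKeeperInstance;
+    import org.apache.accumulo.core.data.Mutation;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.hadoop.io.Text;
+    
+    Connector conn = new ZooKeeperInstance("instance", "zookeepers")
+        .getConnector("username", "password".getBytes());
+    // 200K byte buffer, 1 second max latency, 2 writer threads
+    BatchWriter bw = conn.createBatchWriter("hellotable", 200000L, 1000L, 2);
+    for (int i = 0; i < 10000; i++) {
+      Mutation m = new Mutation(new Text("row_" + i));
+      for (int j = 0; j < 5; j++)  // 5 entries per row -> 50K entries total
+        m.put(new Text("colf"), new Text("colqual_" + j),
+            new Value(("value_" + i + "_" + j).getBytes()));
+      bw.addMutation(m);  // buffered; sent in batches in the background
+    }
+    bw.close();  // flush any remaining mutations
+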
+Alternatively, the same data can be inserted using MapReduce writers:
+
+    $ ./bin/accumulo org.apache.accumulo.examples.helloworld.InsertWithOutputFormat instance zookeepers hellotable username password
+
+On the accumulo status page at the URL below (where 'master' is replaced with the name or IP of your accumulo master), you should see 50K entries:
+
+    http://master:50095/
+
+To view the entries, use the shell to scan the table:
+
+    username@instance> table hellotable
+    username@instance hellotable> scan
+
+You can also use a Java class to scan the table:
+
+    $ ./bin/accumulo org.apache.accumulo.examples.helloworld.ReadData instance zookeepers hellotable username password row_0 row_1001
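+
+ReadData amounts to a Scanner over the given row range; a minimal sketch (assuming the 1.3 client API, with conn obtained as in the BatchWriter sketch above):
+
+    import java.util.Map.Entry;
+    
+    import org.apache.accumulo.core.client.Scanner;
+    import org.apache.accumulo.core.data.Key;
+    import org.apache.accumulo.core.data.Range;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.accumulo.core.security.Authorizations;
+    
+    Scanner scanner = conn.createScanner("hellotable", new Authorizations());
+    scanner.setRange(new Range("row_0", "row_1001"));  // start and end rows
+    for (Entry<Key,Value> entry : scanner)
+      System.out.println(entry.getKey() + " -> " + entry.getValue());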

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/index.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/index.md 
b/1.3/user_manual/examples/index.md
new file mode 100644
index 0000000..eaa3e4b
--- /dev/null
+++ b/1.3/user_manual/examples/index.md
@@ -0,0 +1,42 @@
+---
+title: Examples
+---
+
+Each README in the examples directory highlights the use of particular features of Apache Accumulo.
+
+Before running any of the examples, the following steps must be performed.
+
+1. Install and run Accumulo via the instructions found in $ACCUMULO_HOME/README.
+Remember the instance name.  It will be referred to as "instance" throughout the examples.
+A comma-separated list of zookeeper servers will be referred to as "zookeepers".
+
+2. Create an Accumulo user (see the [user manual][1]), or use the root user.
+The Accumulo user name will be referred to as "username" with password "password" throughout the examples.
+
+In all commands, you will need to replace "instance", "zookeepers", "username", and "password" with the values you set for your Accumulo instance.
+
+Commands intended to be run in bash are prefixed by '$'.  These are always assumed to be run from the $ACCUMULO_HOME directory.
+
+Commands intended to be run in the Accumulo shell are prefixed by '>'.
+
+[1]: {{ site.baseurl }}/user_manual_1.3-incubating/Accumulo_Shell.html#User_Administration
+
+[aggregation](aggregation.html)
+
+[batch](batch.html)
+
+[bloom](bloom.html)
+
+[bulkIngest](bulkIngest.html)
+
+[constraints](constraints.html)
+
+[dirlist](dirlist.html)
+
+[filter](filter.html)
+
+[helloworld](helloworld.html)
+
+[mapred](mapred.html)
+
+[shard](shard.html)
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/mapred.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/mapred.md 
b/1.3/user_manual/examples/mapred.md
new file mode 100755
index 0000000..975b5a0
--- /dev/null
+++ b/1.3/user_manual/examples/mapred.md
@@ -0,0 +1,71 @@
+---
+title: MapReduce Example
+---
+
+This example uses mapreduce and accumulo to compute word counts for a set of
+documents.  This is accomplished using a map-only mapreduce job and an
+accumulo table with aggregators.
+
+To run this example you will need a directory in HDFS containing text files.
+The accumulo readme will be used to show how to run this example.
+
+    $ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README
+    $ hadoop fs -ls /user/username/wc
+    Found 1 items
+    -rw-r--r--   2 username supergroup       9359 2009-07-15 17:54 /user/username/wc/Accumulo.README
+
+The first part of running this example is to create a table with aggregation
+for the column family count.
+
+    $ ./bin/accumulo shell -u username -p password
+    Shell - Apache Accumulo Interactive Shell
+    - version: 1.3.x-incubating
+    - instance name: instance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    - 
+    - type 'help' for a list of available commands
+    - 
+    username@instance> createtable wordCount -a count=org.apache.accumulo.core.iterators.aggregation.StringSummation
+    username@instance wordCount> quit
+
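+Because the table aggregates the count column family, the job can be map-only: each mapper emits one Mutation per word with a value of 1, and StringSummation sums those values at scan and compaction time. A rough sketch of such a mapper (illustrative, not the exact WordCount source; the date qualifier is arbitrary):
+
+    import java.io.IOException;
+    
+    import org.apache.accumulo.core.data.Mutation;
+    import org.apache.accumulo.core.data.Value;
+    import org.apache.hadoop.io.LongWritable;
+    import org.apache.hadoop.io.Text;
+    import org.apache.hadoop.mapreduce.Mapper;
+    
+    public class WordCountMapper extends Mapper<LongWritable,Text,Text,Mutation> {
+      @Override
+      public void map(LongWritable key, Text value, Context context)
+          throws IOException, InterruptedException {
+        for (String word : value.toString().split("\\s+")) {
+          if (word.length() == 0)
+            continue;
+          Mutation m = new Mutation(new Text(word));
+          // row = word, colf = count, colq = a date, value = 1 (to be summed)
+          m.put(new Text("count"), new Text("20080906"), new Value("1".getBytes()));
+          context.write(null, m);  // null table name -> the job's default output table
+        }
+      }
+    }
+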
+After creating the table, run the word count map reduce job.
+
+    [user1@instance accumulo]$ bin/tool.sh lib/accumulo-examples-*[^c].jar org.apache.accumulo.examples.mapreduce.WordCount instance zookeepers /user/user1/wc wordCount -u username -p password
+    
+    11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
+    11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
+    11/02/07 18:20:13 INFO mapred.JobClient:  map 0% reduce 0%
+    11/02/07 18:20:20 INFO mapred.JobClient:  map 100% reduce 0%
+    11/02/07 18:20:22 INFO mapred.JobClient: Job complete: job_201102071740_0003
+    11/02/07 18:20:22 INFO mapred.JobClient: Counters: 6
+    11/02/07 18:20:22 INFO mapred.JobClient:   Job Counters 
+    11/02/07 18:20:22 INFO mapred.JobClient:     Launched map tasks=1
+    11/02/07 18:20:22 INFO mapred.JobClient:     Data-local map tasks=1
+    11/02/07 18:20:22 INFO mapred.JobClient:   FileSystemCounters
+    11/02/07 18:20:22 INFO mapred.JobClient:     HDFS_BYTES_READ=10487
+    11/02/07 18:20:22 INFO mapred.JobClient:   Map-Reduce Framework
+    11/02/07 18:20:22 INFO mapred.JobClient:     Map input records=255
+    11/02/07 18:20:22 INFO mapred.JobClient:     Spilled Records=0
+    11/02/07 18:20:22 INFO mapred.JobClient:     Map output records=1452
+
+After the map reduce job completes, query the accumulo table to see word
+counts.
+
+    $ ./bin/accumulo shell -u username -p password
+    username@instance> table wordCount
+    username@instance wordCount> scan -b the
+    the count:20080906 []    75
+    their count:20080906 []    2
+    them count:20080906 []    1
+    then count:20080906 []    1
+    there count:20080906 []    1
+    these count:20080906 []    3
+    this count:20080906 []    6
+    through count:20080906 []    1
+    time count:20080906 []    3
+    time. count:20080906 []    1
+    to count:20080906 []    27
+    total count:20080906 []    1
+    tserver, count:20080906 []    1
+    tserver.compaction.major.concurrent.max count:20080906 []    1
+    ...

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/examples/shard.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/examples/shard.md 
b/1.3/user_manual/examples/shard.md
new file mode 100755
index 0000000..0e0668a
--- /dev/null
+++ b/1.3/user_manual/examples/shard.md
@@ -0,0 +1,52 @@
+---
+title: Shard Example
+---
+
+Accumulo has an iterator called the intersecting iterator which supports querying a term index that is partitioned by
+document, or "sharded". This example shows how to use the intersecting iterator through these four programs:
+
+ * Index.java - Indexes a set of text files into an Accumulo table
+ * Query.java - Finds documents containing a given set of terms.
+ * Reverse.java - Reads the index table and writes a map of documents to terms into another table.
+ * ContinuousQuery.java - Uses the table populated by Reverse.java to select N random terms per document.  Then it continuously and randomly queries those terms.
+
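+The table layout is what makes this work: each row of the shard table is one partition of the index, each column family within a row is a term, and each column qualifier is a document id, so the intersecting iterator can find every document in a partition containing all query terms in one pass over the row. A sketch of writing index entries in that layout (illustrative; hashing the document id to pick a partition is an assumption, and bw is a BatchWriter on the shard table):
+
+    int numPartitions = 30;  // presumably the trailing 30 in the Index command below
+    String docId = "src/core/Foo.java";  // illustrative document id
+    Text partition = new Text(Integer.toString(Math.abs(docId.hashCode()) % numPartitions));
+    
+    Mutation m = new Mutation(partition);
+    for (String term : new String[] {"public", "class", "foo"})
+      // colf = term, colq = document id, empty value
+      m.put(new Text(term), new Text(docId), new Value(new byte[0]));
+    bw.addMutation(m);
+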
+To run these example programs, create two tables like below.
+
+    username@instance> createtable shard
+    username@instance shard> createtable doc2term
+
+After creating the tables, index some files.  The following command indexes all of the java files in the Accumulo source code.
+
+    $ cd /local/user1/workspace/accumulo/
+    $ find src -name "*.java" | xargs ./bin/accumulo 
org.apache.accumulo.examples.shard.Index instance zookeepers shard username 
password 30
+
+The following command queries the index to find all files containing 'foo' and 'bar'.
+
+    $ cd $ACCUMULO_HOME
+    $ ./bin/accumulo org.apache.accumulo.examples.shard.Query instance zookeepers shard username password foo bar
+    /local/user1/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
+    /local/user1/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
+    /local/user1/workspace/accumulo/src/core/src/test/java/accumulo/core/security/VisibilityEvaluatorTest.java
+    /local/user1/workspace/accumulo/src/server/src/main/java/accumulo/server/test/functional/RowDeleteTest.java
+    /local/user1/workspace/accumulo/src/server/src/test/java/accumulo/server/logger/TestLogWriter.java
+    /local/user1/workspace/accumulo/src/server/src/main/java/accumulo/server/test/functional/DeleteEverythingTest.java
+    /local/user1/workspace/accumulo/src/core/src/test/java/accumulo/core/data/KeyExtentTest.java
+    /local/user1/workspace/accumulo/src/server/src/test/java/accumulo/server/constraints/MetadataConstraintsTest.java
+    /local/user1/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
+    /local/user1/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
+    /local/user1/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
+
+In order to run ContinuousQuery, we first need to run Reverse.java to populate doc2term:
+
+    $ ./bin/accumulo org.apache.accumulo.examples.shard.Reverse instance zookeepers shard doc2term username password
+
+Below, ContinuousQuery is run using 5 terms: it selects 5 random terms from each document, then continually picks one set of 5 terms at random and queries with it, printing the number of matching documents and the query time in seconds.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.shard.ContinuousQuery instance zookeepers shard doc2term username password 5
+    [public, core, class, binarycomparable, b] 2  0.081
+    [wordtodelete, unindexdocument, doctablename, putdelete, insert] 1  0.041
+    [import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1  0.049
+    [getpackage, testversion, util, version, 55] 1  0.048
+    [for, static, println, public, the] 55  0.211
+    [sleeptime, wrappingiterator, options, long, utilwaitthread] 1  0.057
+    [string, public, long, 0, wait] 12  0.132

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/failure_handling.png
----------------------------------------------------------------------
diff --git a/1.3/user_manual/failure_handling.png 
b/1.3/user_manual/failure_handling.png
new file mode 100644
index 0000000..90b9f0f
Binary files /dev/null and b/1.3/user_manual/failure_handling.png differ

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/img1.png
----------------------------------------------------------------------
diff --git a/1.3/user_manual/img1.png b/1.3/user_manual/img1.png
new file mode 100644
index 0000000..8a5846c
Binary files /dev/null and b/1.3/user_manual/img1.png differ

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/img2.png
----------------------------------------------------------------------
diff --git a/1.3/user_manual/img2.png b/1.3/user_manual/img2.png
new file mode 100644
index 0000000..cbfe2b3
Binary files /dev/null and b/1.3/user_manual/img2.png differ

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/img3.png
----------------------------------------------------------------------
diff --git a/1.3/user_manual/img3.png b/1.3/user_manual/img3.png
new file mode 100644
index 0000000..3b6f1f2
Binary files /dev/null and b/1.3/user_manual/img3.png differ

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/img4.png
----------------------------------------------------------------------
diff --git a/1.3/user_manual/img4.png b/1.3/user_manual/img4.png
new file mode 100644
index 0000000..5b0ceb2
Binary files /dev/null and b/1.3/user_manual/img4.png differ

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/img5.png
----------------------------------------------------------------------
diff --git a/1.3/user_manual/img5.png b/1.3/user_manual/img5.png
new file mode 100644
index 0000000..83d8955
Binary files /dev/null and b/1.3/user_manual/img5.png differ

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/1.3/user_manual/index.md
----------------------------------------------------------------------
diff --git a/1.3/user_manual/index.md b/1.3/user_manual/index.md
new file mode 100644
index 0000000..06b81e2
--- /dev/null
+++ b/1.3/user_manual/index.md
@@ -0,0 +1,50 @@
+---
+title: "User Manual: index"
+redirect_from: /user_manual_1.3-incubating/
+---
+
+** Next:** [Contents][2]   ** [Contents][2]**   
+  
+
+
+## Apache Accumulo User Manual   
+Version 1.3
+
+  
+
+
+* * *
+
+<a id="CHILD_LINKS"></a>
+
+* [Contents][2]
+* [Introduction][6]
+* [Accumulo Design][7]
+* [Accumulo Shell][8]
+* [Writing Accumulo Clients][9]
+* [Table Configuration][10]
+* [Table Design][11]
+* [High-Speed Ingest][12]
+* [Analytics][13]
+* [Security][14]
+* [Administration][15]
+* [Shell Commands][16]
+
+  
+
+
+* * *
+
+[2]: Contents.html
+[6]: Introduction.html
+[7]: Accumulo_Design.html
+[8]: Accumulo_Shell.html
+[9]: Writing_Accumulo_Clients.html
+[10]: Table_Configuration.html
+[11]: Table_Design.html
+[12]: High_Speed_Ingest.html
+[13]: Analytics.html
+[14]: Security.html
+[15]: Administration.html
+[16]: Shell_Commands.html
+

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/_config-asf.yml
----------------------------------------------------------------------
diff --git a/_config-asf.yml b/_config-asf.yml
index e501720..bc6840c 100644
--- a/_config-asf.yml
+++ b/_config-asf.yml
@@ -35,5 +35,13 @@ defaults:
     values:
       layout: "post"
       category: "blog"
+  -
+    scope:
+      path: "_posts/release"
+      type: "posts"
+    values:
+      layout: "release"
+      category: "release"
+      permalink: "/:categories/:title/"
 whitelist: [jekyll-redirect-from]
 gems: [jekyll-redirect-from]

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/_config.yml
----------------------------------------------------------------------
diff --git a/_config.yml b/_config.yml
index 68c5817..77bb62e 100644
--- a/_config.yml
+++ b/_config.yml
@@ -35,5 +35,13 @@ defaults:
     values:
       layout: "post"
       category: "blog"
+  -
+    scope:
+      path: "_posts/release"
+      type: "posts"
+    values:
+      layout: "release"
+      category: "release"
+      permalink: "/:categories/:title/"
 whitelist: [jekyll-redirect-from]
 gems: [jekyll-redirect-from]

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/_includes/nav.html
----------------------------------------------------------------------
diff --git a/_includes/nav.html b/_includes/nav.html
index 85bbe65..27b7a01 100644
--- a/_includes/nav.html
+++ b/_includes/nav.html
@@ -13,29 +13,21 @@
       <ul class="nav navbar-nav">
         <li class="nav-link"><a href="{{ site.baseurl 
}}/downloads">Download</a></li>
         <li class="dropdown">
+        <a class="dropdown-toggle" data-toggle="dropdown" 
href="#">Releases<span class="caret"></span></a>
+        <ul class="dropdown-menu">
+          <li><a href="{{ site.baseurl }}/release/accumulo-1.8.0/">1.8.0 
(Latest)</a></li>
+          <li><a href="{{ site.baseurl 
}}/release/accumulo-1.7.2/">1.7.2</a></li>
+          <li><a href="{{ site.baseurl 
}}/release/accumulo-1.6.6/">1.6.6</a></li>
+          <li><a href="{{ site.baseurl }}/release/">Archive</a></li>
+        </ul>
+        </li>
+        <li class="dropdown">
         <a class="dropdown-toggle" data-toggle="dropdown" 
href="#">Documentation<span class="caret"></span></a>
         <ul class="dropdown-menu">
-          <li class="dropdown-header">1.8.0 Release (Latest)</li>
-          <li><a href="{{ site.baseurl }}/release_notes/1.8.0">Release 
Notes</a></li>
-          <li><a href="{{ site.baseurl }}/1.8/accumulo_user_manual">User 
Manual</a></li>
-          <li><a href="{{ site.baseurl }}/1.8/apidocs">Javadoc</a></li>
-          <li><a href="{{ site.baseurl }}/1.8/examples">Examples</a></li>
-          <li class="divider"></li>
-          <li class="dropdown-header">1.7.2 Release</li>
-          <li><a href="{{ site.baseurl }}/release_notes/1.7.2">Release 
Notes</a></li>
-          <li><a href="{{ site.baseurl }}/1.7/accumulo_user_manual">User 
Manual</a></li>
-          <li><a href="{{ site.baseurl }}/1.7/apidocs">Javadoc</a></li>
-          <li><a href="{{ site.baseurl }}/1.7/examples">Examples</a></li>
-          <li class="divider"></li>
-          <li class="dropdown-header">1.6.6 Release</li>
-          <li><a href="{{ site.baseurl }}/release_notes/1.6.6">Release 
Notes</a></li>
-          <li><a href="{{ site.baseurl }}/1.6/accumulo_user_manual">User 
Manual</a></li>
-          <li><a href="{{ site.baseurl }}/1.6/apidocs">Javadoc</a></li>
-          <li><a href="{{ site.baseurl }}/1.6/examples">Examples</a></li>
-          <li class="divider"></li>
+          <li><a href="{{ site.baseurl }}/user-manual/">User Manuals</a></li>
+          <li><a href="{{ site.baseurl }}/javadocs/">Javadocs</a></li>
+          <li><a href="{{ site.baseurl }}/examples/">Examples</a></li>
           <li><a href="{{ site.baseurl }}/notable_features">Features</a></li>
-          <li><a href="{{ site.baseurl }}/release_notes">Release Notes 
Archive</a></li>
-          <li><a href="{{ site.baseurl }}/old_documentation">Documentation 
Archive</a></li>
           <li><a href="{{ site.baseurl }}/screenshots">Screenshots</a></li>
           <li><a href="{{ site.baseurl }}/papers">Papers &amp; 
Presentations</a></li>
           <li><a href="{{ site.baseurl }}/glossary">Glossary</a></li>

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/_layouts/release.html
----------------------------------------------------------------------
diff --git a/_layouts/release.html b/_layouts/release.html
new file mode 100644
index 0000000..5c3d856
--- /dev/null
+++ b/_layouts/release.html
@@ -0,0 +1,8 @@
+---
+layout: default
+---
+<p>{{ page.date | date_to_string }}</p>
+
+{{ content }}
+
+<p><strong>View all releases in the <a href="{{ site.baseurl 
}}/release/">archive</a></strong></p>

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/9a50bd13/_posts/blog/2016-10-28-durability-performance.md
----------------------------------------------------------------------
diff --git a/_posts/blog/2016-10-28-durability-performance.md 
b/_posts/blog/2016-10-28-durability-performance.md
index 14b2f01..eee9ce5 100644
--- a/_posts/blog/2016-10-28-durability-performance.md
+++ b/_posts/blog/2016-10-28-durability-performance.md
@@ -165,10 +165,10 @@ problems with Per-durability write ahead logs.
 [fos]: https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java#L358
 [ACCUMULO-4146]: https://issues.apache.org/jira/browse/ACCUMULO-4146
 [ACCUMULO-4112]: https://issues.apache.org/jira/browse/ACCUMULO-4112
-[160_RN_WAL]: {{ site.baseurl }}/release_notes/1.6.0#slower-writes-than-previous-accumulo-versions
-[161_RN_WAL]: {{ site.baseurl }}/release_notes/1.6.1#write-ahead-log-sync-implementation
+[160_RN_WAL]: {{ site.baseurl }}/release/accumulo-1.6.0#slower-writes-than-previous-accumulo-versions
+[161_RN_WAL]: {{ site.baseurl }}/release/accumulo-1.6.1#write-ahead-log-sync-implementation
 [16_UM_SM]: {{ site.baseurl }}/1.6/accumulo_user_manual#_tserver_wal_sync_method
 [17_UM_TD]: {{ site.baseurl }}/1.7/accumulo_user_manual#_table_durability
-[172_RN_MCHS]: {{ site.baseurl }}/release_notes/1.7.2#minor-performance-improvements
+[172_RN_MCHS]: {{ site.baseurl }}/release/accumulo-1.7.2#minor-performance-improvements
 [SD]: {{ site.baseurl }}/1.8/apidocs/org/apache/accumulo/core/client/BatchWriterConfig.html#setDurability(org.apache.accumulo.core.client.Durability)
 [ML]: {{ site.baseurl }}/mailing_list
