Author: buildbot
Date: Wed Dec 14 21:04:20 2011
New Revision: 800257

Log:
Staging update by buildbot

Modified:
    
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/Analytics.html
    
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/Table_Configuration.html
    
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/examples/dirlist.html
    
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/examples/shard.html
    
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Analytics.html
    
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Security.html
    
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Table_Configuration.html
    
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Writing_Accumulo_Clients.html

Modified: 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/Analytics.html
==============================================================================
--- 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/Analytics.html
 (original)
+++ 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/Analytics.html
 Wed Dec 14 21:04:20 2011
@@ -104,14 +104,14 @@
 <h2 id="a_idanalyticsa_analytics"><a id=Analytics></a> Analytics</h2>
 <p>Accumulo supports more advanced data processing than simply keeping keys 
sorted and performing efficient lookups. Analytics can be developed by using 
MapReduce and Iterators in conjunction with Accumulo tables. </p>
 <h2 id="a_idmapreducea_mapreduce"><a id=MapReduce></a> MapReduce</h2>
-<p>Accumulo tables can be used as the source and destination of MapReduce 
jobs. To use a Accumulo table with a MapReduce job (specifically with the new 
Hadoop API as of version 0.20), configure the job parameters to use the 
AccumuloInputFormat and AccumuloOutputFormat. Accumulo specific parameters can 
be set via these two format classes to do the following: </p>
+<p>Accumulo tables can be used as the source and destination of MapReduce 
jobs. To use an Accumulo table with a MapReduce job (specifically with the new 
Hadoop API as of version 0.20), configure the job parameters to use the 
AccumuloInputFormat and AccumuloOutputFormat. Accumulo specific parameters can 
be set via these two format classes to do the following: </p>
 <ul>
 <li>Authenticate and provide user credentials for the input </li>
 <li>Restrict the scan to a range of rows </li>
 <li>Restrict the input to a subset of available columns </li>
 </ul>
 <h3 id="a_idmapper_and_reducer_classesa_mapper_and_reducer_classes"><a 
id=Mapper_and_Reducer_classes></a> Mapper and Reducer classes</h3>
-<p>To read from a Accumulo table create a Mapper with the following class 
parameterization and be sure to configure the AccumuloInputFormat. </p>
+<p>To read from an Accumulo table, create a Mapper with the following class 
parameterization and be sure to configure the AccumuloInputFormat. </p>
 <div class="codehilite"><pre><span class="n">class</span> <span 
class="n">MyMapper</span> <span class="n">extends</span> <span 
class="n">Mapper</span><span 
class="sr">&lt;Key,Value,WritableComparable,Writable&gt;</span> <span 
class="p">{</span>
     <span class="n">public</span> <span class="n">void</span> <span 
class="nb">map</span><span class="p">(</span><span class="n">Key</span> <span 
class="n">k</span><span class="p">,</span> <span class="n">Value</span> <span 
class="n">v</span><span class="p">,</span> <span class="n">Context</span> <span 
class="n">c</span><span class="p">)</span> <span class="p">{</span>
         <span class="sr">//</span> <span class="n">transform</span> <span 
class="n">key</span> <span class="ow">and</span> <span class="n">value</span> 
<span class="n">data</span> <span class="n">here</span>
@@ -120,7 +120,7 @@
 </pre></div>
 
 
-<p>To write to a Accumulo table, create a Reducer with the following class 
parameterization and be sure to configure the AccumuloOutputFormat. The key 
emitted from the Reducer identifies the table to which the mutation is sent. 
This allows a single Reducer to write to more than one table if desired. A 
default table can be configured using the AccumuloOutputFormat, in which case 
the output table name does not have to be passed to the Context object within 
the Reducer. </p>
+<p>To write to an Accumulo table, create a Reducer with the following class 
parameterization and be sure to configure the AccumuloOutputFormat. The key 
emitted from the Reducer identifies the table to which the mutation is sent. 
This allows a single Reducer to write to more than one table if desired. A 
default table can be configured using the AccumuloOutputFormat, in which case 
the output table name does not have to be passed to the Context object within 
the Reducer. </p>
 <div class="codehilite"><pre><span class="n">class</span> <span 
class="n">MyReducer</span> <span class="n">extends</span> <span 
class="n">Reducer</span><span class="o">&lt;</span><span 
class="n">WritableComparable</span><span class="p">,</span> <span 
class="n">Writable</span><span class="p">,</span> <span 
class="n">Text</span><span class="p">,</span> <span 
class="n">Mutation</span><span class="o">&gt;</span> <span class="p">{</span>
 
     <span class="n">public</span> <span class="n">void</span> <span 
class="n">reduce</span><span class="p">(</span><span 
class="n">WritableComparable</span> <span class="n">key</span><span 
class="p">,</span> <span class="n">Iterator</span><span 
class="sr">&lt;Text&gt;</span> <span class="nb">values</span><span 
class="p">,</span> <span class="n">Context</span> <span class="n">c</span><span 
class="p">)</span> <span class="p">{</span>
@@ -197,9 +197,9 @@ accumulo/docs/examples/README.mapred </p
 <p>All that is needed to aggregate values of a table is to identify the fields 
over which values will be grouped, insert mutations with those fields as the 
key, and configure the table with an aggregating iterator that supports the 
summarization operation desired. </p>
 <p>The only restriction on an aggregating iterator is that the aggregator 
developer should not assume that all values for a given key have been seen, 
since new mutations can be inserted at any time. This precludes using the total 
number of values in the aggregation such as when calculating an average, for 
example. </p>
 <h3 id="a_idfeature_vectorsa_feature_vectors"><a id=Feature_Vectors></a> 
Feature Vectors</h3>
-<p>An interesting use of aggregating iterators within a Accumulo table is to 
store feature vectors for use in machine learning algorithms. For example, many 
algorithms such as k-means clustering, support vector machines, anomaly 
detection, etc. use the concept of a feature vector and the calculation of 
distance metrics to learn a particular model. The columns in a Accumulo table 
can be used to efficiently store sparse features and their weights to be 
incrementally updated via the use of an aggregating iterator. </p>
+<p>An interesting use of aggregating iterators within an Accumulo table is to 
store feature vectors for use in machine learning algorithms. For example, many 
algorithms such as k-means clustering, support vector machines, anomaly 
detection, etc. use the concept of a feature vector and the calculation of 
distance metrics to learn a particular model. The columns in an Accumulo table 
can be used to efficiently store sparse features and their weights to be 
incrementally updated via the use of an aggregating iterator. </p>
 <h2 id="a_idstatistical_modelinga_statistical_modeling"><a 
id=Statistical_Modeling></a> Statistical Modeling</h2>
-<p>Statistical models that need to be updated by many machines in parallel 
could be similarly stored within a Accumulo table. For example, a MapReduce job 
that is iteratively updating a global statistical model could have each map or 
reduce worker reference the parts of the model to be read and updated through 
an embedded Accumulo client. </p>
+<p>Statistical models that need to be updated by many machines in parallel 
could be similarly stored within an Accumulo table. For example, a MapReduce 
job that is iteratively updating a global statistical model could have each map 
or reduce worker reference the parts of the model to be read and updated 
through an embedded Accumulo client. </p>
 <p>Using Accumulo this way enables efficient and fast lookups and updates of 
small pieces of information in a random access pattern, which is complementary 
to MapReduce's sequential access model. </p>
 <hr />
 <p><strong> Next:</strong> <a href="Security.html">Security</a> <strong> 
Up:</strong> <a href="accumulo_user_manual.html">Accumulo User Manual Version 
1.3</a> <strong> Previous:</strong> <a href="High_Speed_Ingest.html">High-Speed 
Ingest</a>   <strong> <a href="Contents.html">Contents</a></strong></p>

Modified: 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/Table_Configuration.html
==============================================================================
--- 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/Table_Configuration.html
 (original)
+++ 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/Table_Configuration.html
 Wed Dec 14 21:04:20 2011
@@ -171,7 +171,7 @@
 accumulo/docs/examples/README.constraints with corresponding code under <br />
 accumulo/src/examples/main/java/accumulo/examples/constraints . </p>
 <h2 id="a_idbloom_filtersa_bloom_filters"><a id=Bloom_Filters></a> Bloom 
Filters</h2>
-<p>As mutations are applied to a Accumulo table, several files are created per 
tablet. If bloom filters are enabled, Accumulo will create and load a small 
data structure into memory to determine whether a file contains a given key 
before opening the file. This can speed up lookups considerably. </p>
+<p>As mutations are applied to an Accumulo table, several files are created 
per tablet. If bloom filters are enabled, Accumulo will create and load a small 
data structure into memory to determine whether a file contains a given key 
before opening the file. This can speed up lookups considerably. </p>
 <p>To enable bloom filters, enter the following command in the Shell: </p>
 <div class="codehilite"><pre><span class="n">user</span><span 
class="nv">@myinstance</span><span class="o">&gt;</span> <span 
class="n">config</span> <span class="o">-</span><span class="n">t</span> <span 
class="n">mytable</span> <span class="o">-</span><span class="n">s</span> <span 
class="n">table</span><span class="o">.</span><span class="n">bloom</span><span 
class="o">.</span><span class="n">enabled</span><span class="o">=</span><span 
class="n">true</span>
 </pre></div>

Modified: 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/examples/dirlist.html
==============================================================================
--- 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/examples/dirlist.html
 (original)
+++ 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/examples/dirlist.html
 Wed Dec 14 21:04:20 2011
@@ -94,10 +94,10 @@
     <h1 class="title">File System Archive</h1>
     <p>This example shows how to use Accumulo to store a file system history.  
It has the following classes:</p>
 <ul>
-<li>Ingest.java - Recursively lists the files and directories under a given 
path, ingests their names and file info (not the file data!) into a Accumulo 
table, and indexes the file names in a separate table.</li>
+<li>Ingest.java - Recursively lists the files and directories under a given 
path, ingests their names and file info (not the file data!) into an Accumulo 
table, and indexes the file names in a separate table.</li>
 <li>QueryUtil.java - Provides utility methods for getting the info for a file, 
listing the contents of a directory, and performing single wild card searches 
on file or directory names.</li>
 <li>Viewer.java - Provides a GUI for browsing the file system information 
stored in Accumulo.</li>
-<li>FileCountMR.java - Runs MR over the file system information and writes out 
counts to a Accumulo table.</li>
+<li>FileCountMR.java - Runs MR over the file system information and writes out 
counts to an Accumulo table.</li>
 <li>FileCount.java - Accomplishes the same thing as FileCountMR, but in a 
different way.  Computes recursive counts and stores them back into the table.</li>
 <li>StringArraySummation.java - Aggregates counts for the FileCountMR 
reducer.</li>
 </ul>

Modified: 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/examples/shard.html
==============================================================================
--- 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/examples/shard.html
 (original)
+++ 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.3-incubating/examples/shard.html
 Wed Dec 14 21:04:20 2011
@@ -95,7 +95,7 @@
    <p>Accumulo has an iterator called the intersecting iterator which 
supports querying a term index that is partitioned by 
 document, or "sharded". This example shows how to use the intersecting 
iterator through these four programs:</p>
 <ul>
-<li>Index.java - Indexes a set of text files into a Accumulo table</li>
+<li>Index.java - Indexes a set of text files into an Accumulo table</li>
 <li>Query.java - Finds documents containing a given set of terms.</li>
 <li>Reverse.java - Reads the index table and writes a map of documents to 
terms into another table.</li>
 <li>ContinuousQuery.java - Uses the table populated by Reverse.java to select N 
random terms per document.  Then it continuously and randomly queries those 
terms.</li>

Modified: 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Analytics.html
==============================================================================
--- 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Analytics.html
 (original)
+++ 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Analytics.html
 Wed Dec 14 21:04:20 2011
@@ -104,14 +104,14 @@
 <h2 id="a_idanalyticsa_analytics"><a id=Analytics></a> Analytics</h2>
 <p>Accumulo supports more advanced data processing than simply keeping keys 
sorted and performing efficient lookups. Analytics can be developed by using 
MapReduce and Iterators in conjunction with Accumulo tables. </p>
 <h2 id="a_idmapreducea_mapreduce"><a id=MapReduce></a> MapReduce</h2>
-<p>Accumulo tables can be used as the source and destination of MapReduce 
jobs. To use a Accumulo table with a MapReduce job (specifically with the new 
Hadoop API as of version 0.20), configure the job parameters to use the 
AccumuloInputFormat and AccumuloOutputFormat. Accumulo specific parameters can 
be set via these two format classes to do the following: </p>
+<p>Accumulo tables can be used as the source and destination of MapReduce 
jobs. To use an Accumulo table with a MapReduce job (specifically with the new 
Hadoop API as of version 0.20), configure the job parameters to use the 
AccumuloInputFormat and AccumuloOutputFormat. Accumulo specific parameters can 
be set via these two format classes to do the following: </p>
 <ul>
 <li>Authenticate and provide user credentials for the input </li>
 <li>Restrict the scan to a range of rows </li>
 <li>Restrict the input to a subset of available columns </li>
 </ul>
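<p>As a rough sketch of that wiring (method names and parameter order are recalled 
from the 1.4 AccumuloInputFormat/AccumuloOutputFormat static helpers, and the 
instance, table, user, and column names are placeholders; verify exact signatures 
against the javadoc and accumulo/docs/examples/README.mapred): </p>
<div class="codehilite"><pre>// Hypothetical sketch: configure a job that reads from and writes to Accumulo.
Job job = new Job(new Configuration(), "accumulo-mr-example");
job.setMapperClass(MyMapper.class);
job.setReducerClass(MyReducer.class);
job.setInputFormatClass(AccumuloInputFormat.class);
job.setOutputFormatClass(AccumuloOutputFormat.class);

// Authenticate and provide user credentials for the input (placeholder names)
AccumuloInputFormat.setZooKeeperInstance(job.getConfiguration(), "myinstance", "zkhost1,zkhost2");
AccumuloInputFormat.setInputInfo(job.getConfiguration(), "user", "passwd".getBytes(),
    "inputtable", new Authorizations());

// Restrict the scan to a range of rows and to a subset of the available columns
AccumuloInputFormat.setRanges(job.getConfiguration(), Collections.singleton(new Range("a", "z")));
AccumuloInputFormat.fetchColumns(job.getConfiguration(),
    Collections.singleton(new Pair&lt;Text,Text&gt;(new Text("attributes"), null)));

// Configure the output; "outputtable" is the default table for Mutations
// emitted without an explicit table name
AccumuloOutputFormat.setZooKeeperInstance(job.getConfiguration(), "myinstance", "zkhost1,zkhost2");
AccumuloOutputFormat.setOutputInfo(job.getConfiguration(), "user", "passwd".getBytes(),
    true, "outputtable");
</pre></div>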
 <h3 id="a_idmapper_and_reducer_classesa_mapper_and_reducer_classes"><a 
id=Mapper_and_Reducer_classes></a> Mapper and Reducer classes</h3>
-<p>To read from a Accumulo table create a Mapper with the following class 
parameterization and be sure to configure the AccumuloInputFormat. </p>
+<p>To read from an Accumulo table, create a Mapper with the following class 
parameterization and be sure to configure the AccumuloInputFormat. </p>
 <div class="codehilite"><pre><span class="n">class</span> <span 
class="n">MyMapper</span> <span class="n">extends</span> <span 
class="n">Mapper</span><span 
class="sr">&lt;Key,Value,WritableComparable,Writable&gt;</span> <span 
class="p">{</span>
     <span class="n">public</span> <span class="n">void</span> <span 
class="nb">map</span><span class="p">(</span><span class="n">Key</span> <span 
class="n">k</span><span class="p">,</span> <span class="n">Value</span> <span 
class="n">v</span><span class="p">,</span> <span class="n">Context</span> <span 
class="n">c</span><span class="p">)</span> <span class="p">{</span>
         <span class="sr">//</span> <span class="n">transform</span> <span 
class="n">key</span> <span class="ow">and</span> <span class="n">value</span> 
<span class="n">data</span> <span class="n">here</span>
@@ -120,7 +120,7 @@
 </pre></div>
 
 
-<p>To write to a Accumulo table, create a Reducer with the following class 
parameterization and be sure to configure the AccumuloOutputFormat. The key 
emitted from the Reducer identifies the table to which the mutation is sent. 
This allows a single Reducer to write to more than one table if desired. A 
default table can be configured using the AccumuloOutputFormat, in which case 
the output table name does not have to be passed to the Context object within 
the Reducer. </p>
+<p>To write to an Accumulo table, create a Reducer with the following class 
parameterization and be sure to configure the AccumuloOutputFormat. The key 
emitted from the Reducer identifies the table to which the mutation is sent. 
This allows a single Reducer to write to more than one table if desired. A 
default table can be configured using the AccumuloOutputFormat, in which case 
the output table name does not have to be passed to the Context object within 
the Reducer. </p>
 <div class="codehilite"><pre><span class="n">class</span> <span 
class="n">MyReducer</span> <span class="n">extends</span> <span 
class="n">Reducer</span><span class="o">&lt;</span><span 
class="n">WritableComparable</span><span class="p">,</span> <span 
class="n">Writable</span><span class="p">,</span> <span 
class="n">Text</span><span class="p">,</span> <span 
class="n">Mutation</span><span class="o">&gt;</span> <span class="p">{</span>
 
     <span class="n">public</span> <span class="n">void</span> <span 
class="n">reduce</span><span class="p">(</span><span 
class="n">WritableComparable</span> <span class="n">key</span><span 
class="p">,</span> <span class="n">Iterator</span><span 
class="sr">&lt;Text&gt;</span> <span class="nb">values</span><span 
class="p">,</span> <span class="n">Context</span> <span class="n">c</span><span 
class="p">)</span> <span class="p">{</span>
@@ -197,9 +197,9 @@ accumulo/docs/examples/README.mapred </p
 <p>All that is needed to aggregate values of a table is to identify the fields 
over which values will be grouped, insert mutations with those fields as the 
key, and configure the table with a combining iterator that supports the 
summarizing operation desired. </p>
 <p>The only restriction on a combining iterator is that the combiner 
developer should not assume that all values for a given key have been seen, 
since new mutations can be inserted at any time. This precludes using the total 
number of values in the aggregation such as when calculating an average, for 
example. </p>
 <h3 id="a_idfeature_vectorsa_feature_vectors"><a id=Feature_Vectors></a> 
Feature Vectors</h3>
-<p>An interesting use of combining iterators within a Accumulo table is to 
store feature vectors for use in machine learning algorithms. For example, many 
algorithms such as k-means clustering, support vector machines, anomaly 
detection, etc. use the concept of a feature vector and the calculation of 
distance metrics to learn a particular model. The columns in a Accumulo table 
can be used to efficiently store sparse features and their weights to be 
incrementally updated via the use of an combining iterator. </p>
+<p>An interesting use of combining iterators within an Accumulo table is to 
store feature vectors for use in machine learning algorithms. For example, many 
algorithms such as k-means clustering, support vector machines, anomaly 
detection, etc. use the concept of a feature vector and the calculation of 
distance metrics to learn a particular model. The columns in an Accumulo table 
can be used to efficiently store sparse features and their weights to be 
incrementally updated via the use of a combining iterator. </p>
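<p>As a rough illustration (the calls are recalled from the 1.4 combiner API, and 
the table name and "features" column family are placeholders; verify against the 
javadoc), a SummingCombiner can be attached to a table so that weight updates 
written as separate mutations are summed into a single value per feature: </p>
<div class="codehilite"><pre>// Hypothetical sketch: sum feature weights stored under a "features" column family.
Connector conn = new ZooKeeperInstance("myinstance", "zkhost1,zkhost2")
    .getConnector("user", "passwd".getBytes());

IteratorSetting setting = new IteratorSetting(10, "sumWeights", SummingCombiner.class);
// Only combine values in the (placeholder) "features" column family
Combiner.setColumns(setting, Collections.singletonList(new IteratorSetting.Column("features")));
// Values are encoded as strings; VARLEN or FIXEDLEN long encodings are alternatives
LongCombiner.setEncodingType(setting, LongCombiner.Type.STRING);

conn.tableOperations().attachIterator("featureVectors", setting);
</pre></div>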
 <h2 id="a_idstatistical_modelinga_statistical_modeling"><a 
id=Statistical_Modeling></a> Statistical Modeling</h2>
-<p>Statistical models that need to be updated by many machines in parallel 
could be similarly stored within a Accumulo table. For example, a MapReduce job 
that is iteratively updating a global statistical model could have each map or 
reduce worker reference the parts of the model to be read and updated through 
an embedded Accumulo client. </p>
+<p>Statistical models that need to be updated by many machines in parallel 
could be similarly stored within an Accumulo table. For example, a MapReduce 
job that is iteratively updating a global statistical model could have each map 
or reduce worker reference the parts of the model to be read and updated 
through an embedded Accumulo client. </p>
 <p>Using Accumulo this way enables efficient and fast lookups and updates of 
small pieces of information in a random access pattern, which is complementary 
to MapReduce's sequential access model. </p>
 <hr />
 <p><strong> Next:</strong> <a href="Security.html">Security</a> <strong> 
Up:</strong> <a href="accumulo_user_manual.html">Accumulo User Manual Version 
1.4</a> <strong> Previous:</strong> <a href="High_Speed_Ingest.html">High-Speed 
Ingest</a>   <strong> <a href="Contents.html">Contents</a></strong></p>

Modified: 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Security.html
==============================================================================
--- 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Security.html
 (original)
+++ 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Security.html
 Wed Dec 14 21:04:20 2011
@@ -166,7 +166,7 @@
 
 <p>Any user with the alter table permission can add or remove this constraint. 
This constraint is not applied to bulk imported data; if this is a concern, then 
disable the bulk import permission. </p>
 <h2 id="a_idsecure_authorizations_handlinga_secure_authorizations_handling"><a 
id=Secure_Authorizations_Handling></a> Secure Authorizations Handling</h2>
-<p>For applications serving many users, it is not expected that a accumulo 
user will be created for each application user. In this case a accumulo user 
with all authorizations needed by any of the applications users must be 
created. To service queries, the application should create a scanner with the 
application users authorizations. These authorizations could be obtained from a 
trusted 3rd party. </p>
+<p>For applications serving many users, it is not expected that an Accumulo 
user will be created for each application user. In this case, an Accumulo user 
with all authorizations needed by any of the application's users must be 
created. To service queries, the application should create a scanner with the 
application user's authorizations. These authorizations could be obtained from 
a trusted third party. </p>
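<p>For illustration (the table name, principal, and authorization tokens below are 
placeholders), the application might hold a single Accumulo user with a superset of 
authorizations and pass each end user's own authorizations, obtained from the 
trusted third party, when it creates a scanner: </p>
<div class="codehilite"><pre>// Hypothetical sketch: scan with only the authorizations granted to the end user.
Connector conn = new ZooKeeperInstance("myinstance", "zkhost1,zkhost2")
    .getConnector("appuser", "passwd".getBytes());

// Tokens come from the trusted third party for this particular end user
Authorizations endUserAuths = new Authorizations("public", "projectA");

Scanner scanner = conn.createScanner("records", endUserAuths);
for (Map.Entry&lt;Key,Value&gt; entry : scanner) {
    // Only entries whose column visibility is satisfied by endUserAuths are returned
    System.out.println(entry.getKey() + " " + entry.getValue());
}
</pre></div>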
 <p>Often production systems will integrate with Public-Key Infrastructure 
(PKI) and designate client code within the query layer to negotiate with PKI 
servers in order to authenticate users and retrieve their authorization tokens 
(credentials). This requires users to specify only the information necessary to 
authenticate themselves to the system. Once user identity is established, their 
credentials can be accessed by the client code and passed to Accumulo outside 
of the reach of the user. </p>
 <h2 id="a_idquery_services_layera_query_services_layer"><a 
id=Query_Services_Layer></a> Query Services Layer</h2>
 <p>Since the primary method of interaction with Accumulo is through the Java 
API, production environments often call for the implementation of a Query 
Services Layer. This can be done using web services in containers such as 
Apache Tomcat, but is not a requirement. The Query Services Layer provides a 
platform on which user-facing applications can be built. This allows 
application designers to isolate potentially complex query logic, and provides 
a convenient point at which to perform essential security functions. </p>

Modified: 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Table_Configuration.html
==============================================================================
--- 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Table_Configuration.html
 (original)
+++ 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Table_Configuration.html
 Wed Dec 14 21:04:20 2011
@@ -175,7 +175,7 @@
 accumulo/docs/examples/README.constraints with corresponding code under <br />
 accumulo/src/examples/main/java/accumulo/examples/constraints . </p>
 <h2 id="a_idbloom_filtersa_bloom_filters"><a id=Bloom_Filters></a> Bloom 
Filters</h2>
-<p>As mutations are applied to a Accumulo table, several files are created per 
tablet. If bloom filters are enabled, Accumulo will create and load a small 
data structure into memory to determine whether a file contains a given key 
before opening the file. This can speed up lookups considerably. </p>
+<p>As mutations are applied to an Accumulo table, several files are created 
per tablet. If bloom filters are enabled, Accumulo will create and load a small 
data structure into memory to determine whether a file contains a given key 
before opening the file. This can speed up lookups considerably. </p>
 <p>To enable bloom filters, enter the following command in the Shell: </p>
 <div class="codehilite"><pre><span class="n">user</span><span 
class="nv">@myinstance</span><span class="o">&gt;</span> <span 
class="n">config</span> <span class="o">-</span><span class="n">t</span> <span 
class="n">mytable</span> <span class="o">-</span><span class="n">s</span> <span 
class="n">table</span><span class="o">.</span><span class="n">bloom</span><span 
class="o">.</span><span class="n">enabled</span><span class="o">=</span><span 
class="n">true</span>
 </pre></div>

Modified: 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Writing_Accumulo_Clients.html
==============================================================================
--- 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Writing_Accumulo_Clients.html
 (original)
+++ 
websites/staging/accumulo/trunk/content/accumulo/user_manual_1.4-incubating/Writing_Accumulo_Clients.html
 Wed Dec 14 21:04:20 2011
@@ -171,7 +171,7 @@ accumulo/docs/examples/README.batch </p>
 <li>iterators executed as part of a minor or major compaction </li>
 <li>bulk import of new files </li>
 </ul>
-<p>Isolation guarantees that either all or none of the changes made by these 
operations on a row are seen. Use the IsolatedScanner to obtain an isolated 
view of a accumulo table. When using the regular scanner it is possible to see 
a non isolated view of a row. For example if a mutation modifies three columns, 
it is possible that you will only see two of those modifications. With the 
isolated scanner either all three of the changes are seen or none. </p>
+<p>Isolation guarantees that either all or none of the changes made by these 
operations on a row are seen. Use the IsolatedScanner to obtain an isolated 
view of an Accumulo table. When using the regular scanner it is possible to see 
a non-isolated view of a row. For example, if a mutation modifies three 
columns, it is possible that you will only see two of those modifications. With 
the isolated scanner, either all three of the changes are seen or none. </p>
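<p>For example (a sketch only; the table name is a placeholder and conn is a 
Connector obtained as described earlier in this chapter), a regular scanner is 
simply wrapped: </p>
<div class="codehilite"><pre>// Hypothetical sketch: obtain an isolated, row-at-a-time view of a table.
Scanner plain = conn.createScanner("mytable", new Authorizations());
Scanner isolated = new IsolatedScanner(plain);

for (Map.Entry&lt;Key,Value&gt; entry : isolated) {
    // Either all of a mutation's changes to a row are visible here, or none of them are
    System.out.println(entry.getKey());
}
</pre></div>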
 <p>The IsolatedScanner buffers rows on the client side so a large row will not 
crash a tablet server. By default, rows are buffered in memory, but the user can 
easily supply their own buffer if they wish to buffer to disk when rows are 
large. </p>
 <p>For an example, look at the following <br />
 
src/examples/src/main/java/org/apache/accumulo/examples/isolation/InterferenceTest.java</p>

