Author: buildbot
Date: Thu Jun 20 13:31:40 2013
New Revision: 866625

Log:
Staging update by buildbot for accumulo

Modified:
    websites/staging/accumulo/trunk/content/   (props changed)
    websites/staging/accumulo/trunk/content/notable_features.html

Propchange: websites/staging/accumulo/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Thu Jun 20 13:31:40 2013
@@ -1 +1 @@
-1494813
+1494983

Modified: websites/staging/accumulo/trunk/content/notable_features.html
==============================================================================
--- websites/staging/accumulo/trunk/content/notable_features.html (original)
+++ websites/staging/accumulo/trunk/content/notable_features.html Thu Jun 20 13:31:40 2013
@@ -178,7 +178,8 @@ Zookeeper to synchronize operations acro
 <p>If consecutive keys have identical portions (row, colf, colq, or colvis), there
 is a flag to indicate that a portion is the same as that of the previous key.
 This is applied when keys are stored on disk and when transferred over the
-network.</p>
+network.  Starting with 1.5, prefix erasure is supported.  When it is cost
+effective, prefixes that repeat in subsequent key fields are not stored again.</p>
 <h3 id="native-in-memory-map">Native In-Memory Map</h3>
 <p>By default data written is stored outside of Java managed memory into a C++ STL
 map of maps.  It maps rows to columns to values.  This hierarchical structure
@@ -203,10 +204,23 @@ blocks. The entire index never has to be
 written. When an index block exceeds the configurable size threshold, it is
 written out between data blocks. The size of index blocks is configurable on a
 per-table basis.</p>
+<h3 id="binary-search-in-rfile-blocks-15">Binary search in RFile blocks 
(1.5)</h3>
+<p>RFile uses its index to locate a block of key values.  Once it reaches a
+block, it performs a linear scan to find the key of interest.  Starting with
+1.5, Accumulo will generate indexes of cached blocks in an adaptive manner.
+Accumulo indexes the blocks that are read most frequently.  When a block is
+read a few times, a small index is generated.  As a block is read more, larger
+indexes are generated, making future seeks faster.  This strategy allows
+Accumulo to dynamically respond to read patterns without precomputing block
+indexes when RFiles are written.</p>
 <h2 id="testing-wzxhzdk6wzxhzdk7">Testing <a id="testing"></a></h2>
 <h3 id="mock">Mock</h3>
 <p>The Accumulo client API has a mock implementation that is useful for writing
 unit tests against Accumulo. Mock Accumulo is in memory and in process.</p>
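+<p>For example, a unit test might write and read back a single entry entirely
+in memory.  The sketch below uses the 1.4-era client signatures, which are
+still present (though deprecated) in 1.5; the instance, table, and column
+names are illustrative.</p>
+<pre><code>import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.mock.MockInstance;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.hadoop.io.Text;
+
+public class MockAccumuloTest {
+  public static void main(String[] args) throws Exception {
+    // Everything below runs in memory and in process; no servers are started.
+    Connector conn = new MockInstance("test").getConnector("root", "".getBytes());
+    conn.tableOperations().create("demo");
+
+    BatchWriter writer = conn.createBatchWriter("demo", 1000000L, 60000L, 2);
+    Mutation m = new Mutation(new Text("row1"));
+    m.put(new Text("cf"), new Text("cq"), new Value("value1".getBytes()));
+    writer.addMutation(m);
+    writer.close();
+
+    Scanner scanner = conn.createScanner("demo", new Authorizations());
+    System.out.println(scanner.iterator().next());  // prints the entry written above
+  }
+}
+</code></pre>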
+<h3 id="mini-accumulo-cluster-15-144">Mini Accumulo Cluster (1.5 &amp; 
1.4.4)</h3>
+<p>Mini Accumulo cluster is a set of utility code that makes it easy to spin 
up 
+a local Accumulo instance running against the local filesystem.  Mini Accumulo
+is slower than Mock Accumulo, but its behavior is mirrors a real Accumulo 
+instance more closely.  </p>
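+<p>A test might spin up and tear down a mini cluster as in the sketch below.
+The package and constructor follow the 1.5 minicluster module; the directory
+handling and root password are illustrative.</p>
+<pre><code>import java.io.File;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.ZooKeeperInstance;
+import org.apache.accumulo.minicluster.MiniAccumuloCluster;
+
+public class MiniClusterTest {
+  public static void main(String[] args) throws Exception {
+    // MiniAccumuloCluster wants an empty directory to hold its config and data.
+    File dir = new File(System.getProperty("java.io.tmpdir"),
+        "mini-accumulo-" + System.currentTimeMillis());
+    dir.mkdirs();
+
+    // Launches local ZooKeeper, master, and tablet server processes backed by
+    // the local filesystem.
+    MiniAccumuloCluster cluster = new MiniAccumuloCluster(dir, "secret");
+    cluster.start();
+
+    // Connect with the regular client API, just as with a full cluster.
+    Connector conn = new ZooKeeperInstance(cluster.getInstanceName(),
+        cluster.getZooKeepers()).getConnector("root", "secret".getBytes());
+    conn.tableOperations().create("demo");
+
+    cluster.stop();
+  }
+}
+</code></pre>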
 <h3 id="functional-test">Functional Test</h3>
 <p>Small, system-level tests of basic Accumulo features run in a test harness,
 external to the build and unit-tests.  These tests start a complete Accumulo
@@ -251,6 +265,12 @@ flexibility in resource allocation.  The
 could be different from the Accumulo nodes.</p>
 <h3 id="map-reduce"><a 
href="/1.4/user_manual/Writing_Accumulo_Clients.html">Map Reduce</a></h3>
 <p>Accumulo can be a source and/or sink for map reduce jobs.</p>
+<h3 id="thrift-proxy-15-144">Thrift Proxy (1.5 &amp; 1.4.4)</h3>
+<p>The Accumulo client code contains a lot of complexity.  For example, the 
+client code locates tablets, retries in the case of failures, and supports 
+concurrent reading and writing.  All of this is written in Java.  The Thrift
+proxy wraps the Accumulo client API with Thrift, making this API easily
+available to other languages like Python, Ruby, C++, etc.</p>
 <h2 id="extensible-behaviors-wzxhzdk10wzxhzdk11">Extensible Behaviors <a 
id="behaviors"></a></h2>
 <h3 id="pluggable-balancer">Pluggable balancer</h3>
 <p>Users can provide a balancer plugin that decides how to distribute tablets
@@ -318,13 +338,18 @@ even if major compactions were falling b
 was growing.  Without this feature, ingest performance can roughly continue at a
 constant rate, even as scan performance decreases because tablets have too many
 files.</p>
+<h3 id="loading-jars-using-vfs-15">Loading jars using VFS (1.5)</h3>
+<p>User-written iterators are a useful way to manipulate data in Accumulo.
+Before 1.5, users had to copy their iterators to each tablet server.  Starting
+with 1.5, Accumulo can load iterators from HDFS using Apache Commons VFS.</p>
 <h2 id="on-demand-data-management-wzxhzdk16wzxhzdk17">On-demand Data 
Management <a id="ondemand_dm"></a></h2>
 <h3 id="compactions">Compactions</h3>
 <p>Ability to force tablets to compact to one file. Even tablets with one file are
 compacted.  This is useful for improving query performance, permanently
 applying iterators, or using a new locality group configuration.  One example
 of using iterators is applying a filtering iterator to remove data from a
-table. </p>
+table. As of 1.5, users can initiate a compaction with iterators that are
+applied only during that compaction.</p>
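+<p>For example, with the 1.5 client API a one-off age-off compaction over a row
+range might look like the sketch below; the table name, range, and TTL are
+illustrative.</p>
+<pre><code>import java.util.Collections;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.hadoop.io.Text;
+
+public class CompactWithIterator {
+  public static void ageOffCompact(Connector conn) throws Exception {
+    // Configure an age-off filter for this compaction only; it is not added
+    // to the table configuration.
+    IteratorSetting ageoff = new IteratorSetting(30, "ageoff",
+        "org.apache.accumulo.core.iterators.user.AgeOffFilter");
+    ageoff.addOption("ttl", "86400000");  // drop entries older than one day (ms)
+
+    // flush=true writes out in-memory data first; wait=true blocks until the
+    // compaction finishes.
+    conn.tableOperations().compact("demo", new Text("a"), new Text("z"),
+        Collections.singletonList(ageoff), true, true);
+  }
+}
+</code></pre>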
 <h3 id="split-points">Split points</h3>
 <p>Arbitrary split points can be added to an online table at any point in time.
 This is useful for increasing ingest performance on a new table. It can also be
@@ -338,14 +363,15 @@ data and copies its configuration. A clo
 mutated independently. Testing was the motivating reason behind this new
 feature. For example, to test a new filtering iterator, clone the table, add the
 filter to the clone, and force a major compaction.</p>
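+<p>That clone-and-test workflow might look like the sketch below, using the 1.4
+client API; the table names and the filtering iterator class are illustrative.</p>
+<pre><code>import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IteratorSetting;
+
+public class CloneAndFilter {
+  public static void cloneAndFilter(Connector conn) throws Exception {
+    // The clone shares the source table's files; flush=true includes any
+    // in-memory data.
+    Map props = new HashMap();    // no property overrides on the clone
+    Set exclude = new HashSet();  // copy the full table configuration
+    conn.tableOperations().clone("prod_table", "prod_table_test", true, props, exclude);
+
+    // Attach the filtering iterator to the clone only.
+    IteratorSetting filter = new IteratorSetting(30, "myfilter",
+        "com.example.MyFilteringIterator");
+    conn.tableOperations().attachIterator("prod_table_test", filter);
+
+    // Force a major compaction so the filter is applied to the clone's data.
+    conn.tableOperations().compact("prod_table_test", null, null, true, false);
+  }
+}
+</code></pre>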
+<h3 id="importexport-table-15">Import/Export Table (1.5)</h3>
+<p>An offline table's metadata and files can easily be copied to another
+cluster and imported.</p>
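+<p>With the 1.5 client API, the export side might look like the sketch below;
+the table name and HDFS directories are illustrative.</p>
+<pre><code>import org.apache.accumulo.core.client.Connector;
+
+public class ExportTableExample {
+  public static void exportForCopy(Connector conn) throws Exception {
+    // The table must be offline so its files do not change during the copy.
+    conn.tableOperations().offline("demo");
+
+    // Writes the table configuration and a listing of the table's files,
+    // suitable for copying with hadoop distcp.
+    conn.tableOperations().exportTable("demo", "/exports/demo");
+
+    // After copying the export directory and the listed files to the other
+    // cluster, import there with:
+    //   destConn.tableOperations().importTable("demo_copy", "/imports/demo");
+  }
+}
+</code></pre>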
 <h3 id="compact-range-14">Compact Range (1.4)</h3>
-<p>Compact each tablet that falls within a row range down to a single file.<br />
-</p>
+<p>Compact each tablet that falls within a row range down to a single file.</p>
 <h3 id="delete-range-14">Delete Range (1.4)</h3>
 <p>Added an operation to efficiently delete a range of rows from a table. Tablets
 that fall completely within a range are simply dropped. Tablets overlapping the
-beginning and end of the range are split, compacted, and then merged.<br />
-</p>
+beginning and end of the range are split, compacted, and then merged.  </p>
   </div>
 
   <div id="footer">

