This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new b049a43  Jekyll build from master:af2e63b
b049a43 is described below

commit b049a4392e967bf6a907dd9138a5cbd7406cd06e
Author: Mike Walch <mwa...@apache.org>
AuthorDate: Sat Jan 26 15:29:32 2019 -0500

    Jekyll build from master:af2e63b
    
    Updates to table config
---
 docs/2.x/development/iterators.html               |   2 +-
 docs/2.x/getting-started/table_configuration.html | 102 +++++++++++-----------
 feed.xml                                          |   4 +-
 redirects.json                                    |   2 +-
 search_data.json                                  |   4 +-
 5 files changed, 57 insertions(+), 57 deletions(-)

diff --git a/docs/2.x/development/iterators.html b/docs/2.x/development/iterators.html
index b8a5f9d..aa47085 100644
--- a/docs/2.x/development/iterators.html
+++ b/docs/2.x/development/iterators.html
@@ -436,7 +436,7 @@ in the iteration, Accumulo Iterators must also support the ability to “move”
 iteration (the Accumulo table). Accumulo Iterators are designed to be concatenated together, similar to applying a
 series of transformations to a list of elements. Accumulo Iterators can duplicate their underlying source to create
 multiple “pointers” over the same underlying data (which is extremely powerful since each stream is sorted) or they can
-merge multiple Iterators into a single view. In this sense, a collection of Iterators operating in tandem is close to
+merge multiple Iterators into a single view. In this sense, a collection of Iterators operating in tandem is closer to
 a tree-structure than a list, but there is always a sense of a flow of Key-Value pairs through some Iterators. Iterators
 are not designed to act as triggers nor are they designed to operate outside of the purview of a single table.</p>
 
diff --git a/docs/2.x/getting-started/table_configuration.html b/docs/2.x/getting-started/table_configuration.html
index 771b117..1b8f83e 100644
--- a/docs/2.x/getting-started/table_configuration.html
+++ b/docs/2.x/getting-started/table_configuration.html
@@ -510,7 +510,7 @@ and place it in the <code class="highlighter-rouge">lib/</code> directory of the
 constraint jars can be added to Accumulo and enabled without restarting but any
 change to an existing constraint class requires Accumulo to be restarted.</p>
 
-<p>See the <a href="https://github.com/apache/accumulo-examples/blob/master/docs/contraints.md">constraints examples</a> for example code.</p>
+<p>See the <a href="https://github.com/apache/accumulo-examples/blob/master/docs/constraints.md">constraints examples</a> for example code.</p>
 
 <h2 id="bloom-filters">Bloom Filters</h2>
 
@@ -1062,72 +1062,72 @@ importing tables.</p>
 <p>The shell session below illustrates creating a table, inserting data, and
 exporting the table.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    root@test15&gt; createtable table1
-    root@test15 table1&gt; insert a cf1 cq1 v1
-    root@test15 table1&gt; insert h cf1 cq1 v2
-    root@test15 table1&gt; insert z cf1 cq1 v3
-    root@test15 table1&gt; insert z cf1 cq2 v4
-    root@test15 table1&gt; addsplits -t table1 b r
-    root@test15 table1&gt; scan
-    a cf1:cq1 []    v1
-    h cf1:cq1 []    v2
-    z cf1:cq1 []    v3
-    z cf1:cq2 []    v4
-    root@test15&gt; config -t table1 -s table.split.threshold=100M
-    root@test15 table1&gt; clonetable table1 table1_exp
-    root@test15 table1&gt; offline table1_exp
-    root@test15 table1&gt; exporttable -t table1_exp /tmp/table1_export
-    root@test15 table1&gt; quit
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; createtable table1
+root@test15 table1&gt; insert a cf1 cq1 v1
+root@test15 table1&gt; insert h cf1 cq1 v2
+root@test15 table1&gt; insert z cf1 cq1 v3
+root@test15 table1&gt; insert z cf1 cq2 v4
+root@test15 table1&gt; addsplits -t table1 b r
+root@test15 table1&gt; scan
+a cf1:cq1 []    v1
+h cf1:cq1 []    v2
+z cf1:cq1 []    v3
+z cf1:cq2 []    v4
+root@test15&gt; config -t table1 -s table.split.threshold=100M
+root@test15 table1&gt; clonetable table1 table1_exp
+root@test15 table1&gt; offline table1_exp
+root@test15 table1&gt; exporttable -t table1_exp /tmp/table1_export
+root@test15 table1&gt; quit
 </code></pre></div></div>
 
 <p>After executing the export command, a few files are created in the hdfs dir.
 One of the files is a list of files to distcp as shown below.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    $ hadoop fs -ls /tmp/table1_export
-    Found 2 items
-    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
-    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
-    $ hadoop fs -cat /tmp/table1_export/distcp.txt
-    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
-    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop fs -ls /tmp/table1_export
+Found 2 items
+-rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
+-rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
+$ hadoop fs -cat /tmp/table1_export/distcp.txt
+hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
+hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
 </code></pre></div></div>
 
 <p>Before the table can be imported, it must be copied using <code class="highlighter-rouge">distcp</code>. After the
 <code class="highlighter-rouge">distcp</code> completes, the cloned table may be deleted.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
 </code></pre></div></div>
 
 <p>The Accumulo shell session below shows importing the table and inspecting it.
 The data, splits, config, and logical time information for the table were
 preserved.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    root@test15&gt; importtable table1_copy /tmp/table1_export_dest
-    root@test15&gt; table table1_copy
-    root@test15 table1_copy&gt; scan
-    a cf1:cq1 []    v1
-    h cf1:cq1 []    v2
-    z cf1:cq1 []    v3
-    z cf1:cq2 []    v4
-    root@test15 table1_copy&gt; getsplits -t table1_copy
-    b
-    r
-    root@test15&gt; config -t table1_copy -f split
-    ---------+--------------------------+-------------------------------------------
-    SCOPE    | NAME                     | VALUE
-    ---------+--------------------------+-------------------------------------------
-    default  | table.split.threshold .. | 1G
-    table    |    @override ........... | 100M
-    ---------+--------------------------+-------------------------------------------
-    root@test15&gt; tables -l
-    accumulo.metadata    =&gt;        !0
-    accumulo.root        =&gt;        +r
-    table1_copy          =&gt;         5
-    trace                =&gt;         1
-    root@test15 table1_copy&gt; scan -t accumulo.metadata -b 5 -c srv:time
-    5;b srv:time []    M1343224500467
-    5;r srv:time []    M1343224500467
-    5&lt; srv:time []    M1343224500467
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@test15&gt; importtable table1_copy /tmp/table1_export_dest
+root@test15&gt; table table1_copy
+root@test15 table1_copy&gt; scan
+a cf1:cq1 []    v1
+h cf1:cq1 []    v2
+z cf1:cq1 []    v3
+z cf1:cq2 []    v4
+root@test15 table1_copy&gt; getsplits -t table1_copy
+b
+r
+root@test15&gt; config -t table1_copy -f split
+---------+--------------------------+-------------------------------------------
+SCOPE    | NAME                     | VALUE
+---------+--------------------------+-------------------------------------------
+default  | table.split.threshold .. | 1G
+table    |    @override ........... | 100M
+---------+--------------------------+-------------------------------------------
+root@test15&gt; tables -l
+accumulo.metadata    =&gt;        !0
+accumulo.root        =&gt;        +r
+table1_copy          =&gt;         5
+trace                =&gt;         1
+root@test15 table1_copy&gt; scan -t accumulo.metadata -b 5 -c srv:time
+5;b srv:time []    M1343224500467
+5;r srv:time []    M1343224500467
+5&lt; srv:time []    M1343224500467
 </code></pre></div></div>
 
 
diff --git a/feed.xml b/feed.xml
index 9f0b2be..8b4f642 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>https://accumulo.apache.org/</link>
     <atom:link href="https://accumulo.apache.org/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Tue, 15 Jan 2019 19:25:58 -0500</pubDate>
-    <lastBuildDate>Tue, 15 Jan 2019 19:25:58 -0500</lastBuildDate>
+    <pubDate>Sat, 26 Jan 2019 15:29:22 -0500</pubDate>
+    <lastBuildDate>Sat, 26 Jan 2019 15:29:22 -0500</lastBuildDate>
     <generator>Jekyll v3.7.3</generator>
     
     
diff --git a/redirects.json b/redirects.json
index 9d051e4..6b9f5d5 100644
--- a/redirects.json
+++ b/redirects.json
@@ -1 +1 @@
-{"/release_notes/1.5.1.html":"https://accumulo.apache.org/release/accumulo-1.5.1/","/release_notes/1.6.0.html":"https://accumulo.apache.org/release/accumulo-1.6.0/","/release_notes/1.6.1.html":"https://accumulo.apache.org/release/accumulo-1.6.1/","/release_notes/1.6.2.html":"https://accumulo.apache.org/release/accumulo-1.6.2/","/release_notes/1.7.0.html":"https://accumulo.apache.org/release/accumulo-1.7.0/","/release_notes/1.5.3.html":"https://accumulo.apache.org/release/accumulo-1.5.3/";
 [...]
\ No newline at end of file
+{"/release_notes/1.5.1.html":"https://accumulo.apache.org/release/accumulo-1.5.1/","/release_notes/1.6.0.html":"https://accumulo.apache.org/release/accumulo-1.6.0/","/release_notes/1.6.1.html":"https://accumulo.apache.org/release/accumulo-1.6.1/","/release_notes/1.6.2.html":"https://accumulo.apache.org/release/accumulo-1.6.2/","/release_notes/1.7.0.html":"https://accumulo.apache.org/release/accumulo-1.7.0/","/release_notes/1.5.3.html":"https://accumulo.apache.org/release/accumulo-1.5.3/";
 [...]
\ No newline at end of file
diff --git a/search_data.json b/search_data.json
index 4b7588f..4d19868 100644
--- a/search_data.json
+++ b/search_data.json
@@ -100,7 +100,7 @@
   
     "docs-2-x-development-iterators": {
       "title": "Iterators",
-      "content"         : "Accumulo SortedKeyValueIterators, commonly referred to as Iterators for short, are server-side programming constructsthat allow users to implement custom retrieval or computational purpose within Accumulo TabletServers.  The name rightlybrings forward similarities to the Java Iterator interface; however, Accumulo Iterators are more complex than JavaIterators. Notably, in addition to the expected methods to retrieve the current element and advance to the next elementin [...]
+      "content"         : "Accumulo SortedKeyValueIterators, commonly referred to as Iterators for short, are server-side programming constructsthat allow users to implement custom retrieval or computational purpose within Accumulo TabletServers.  The name rightlybrings forward similarities to the Java Iterator interface; however, Accumulo Iterators are more complex than JavaIterators. Notably, in addition to the expected methods to retrieve the current element and advance to the next elementin [...]
       "url": " /docs/2.x/development/iterators",
       "categories": "development"
     },
@@ -177,7 +177,7 @@
   
     "docs-2-x-getting-started-table-configuration": {
       "title": "Table Configuration",
-      "content"         : "Accumulo tables have a few options that can be configured to alter the defaultbehavior of Accumulo as well as improve performance based on the data stored.These include locality groups, constraints, bloom filters, iterators, and blockcache.  See the server properties documentation for a complete list of availableconfiguration options.Locality GroupsAccumulo supports storing sets of column families separately on disk to allowclients to efficiently scan over columns tha [...]
+      "content"         : "Accumulo tables have a few options that can be configured to alter the defaultbehavior of Accumulo as well as improve performance based on the data stored.These include locality groups, constraints, bloom filters, iterators, and blockcache.  See the server properties documentation for a complete list of availableconfiguration options.Locality GroupsAccumulo supports storing sets of column families separately on disk to allowclients to efficiently scan over columns tha [...]
       "url": " /docs/2.x/getting-started/table_configuration",
       "categories": "getting-started"
     },
