This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 8e58cbc  Jekyll build from master:ab08d17
8e58cbc is described below

commit 8e58cbc1325e1622cab110ac5aaa309adddd42f9
Author: Mike Walch <mwa...@apache.org>
AuthorDate: Tue Jan 15 19:26:06 2019 -0500

    Jekyll build from master:ab08d17
    
    Improved links in docs
---
 docs/2.x/administration/in-depth-install.html | 11 ++++++-----
 docs/2.x/development/mapreduce.html           |  2 +-
 feed.xml                                      |  4 ++--
 search_data.json                              |  2 +-
 4 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/docs/2.x/administration/in-depth-install.html b/docs/2.x/administration/in-depth-install.html
index 42720b5..872a2ab 100644
--- a/docs/2.x/administration/in-depth-install.html
+++ b/docs/2.x/administration/in-depth-install.html
@@ -829,7 +829,7 @@ configuration is:</p>
 <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>general.vfs.context.classpath.app1.delegation=post
 </code></pre></div></div>
 
-<p>To use contexts in your application you can set the <code class="highlighter-rouge">table.classpath.context</code> on your tables or use the <code class="highlighter-rouge">setClassLoaderContext()</code> method on Scanner
+<p>To use contexts in your application you can set the <a href="/docs/2.x/configuration/server-properties#table_classpath_context">table.classpath.context</a> on your tables or use the <code class="highlighter-rouge">setClassLoaderContext()</code> method on Scanner
 and BatchScanner passing in the name of the context, app1 in the example 
above. Setting the property on the table allows your minc, majc, and scan 
 iterators to load classes from the locations defined by the context. Passing 
the context name to the scanners allows you to override the table setting
 to load only scan time iterators from a different location.</p>
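
The context mechanism described in the hunk above can be exercised from the Accumulo shell. A minimal sketch, assuming the `app1` context from the docs and a hypothetical table named `mytable`:

```shell
# Hypothetical Accumulo shell session (table name "mytable" is illustrative).
# Point the table at the "app1" classpath context so its minc, majc, and scan
# iterators load classes from the locations that context defines:
config -t mytable -s table.classpath.context=app1
# Deleting the property reverts the table to the system classpath:
config -t mytable -d table.classpath.context
```

Passing the same context name to `setClassLoaderContext()` on a Scanner or BatchScanner overrides the table setting for scan-time iterators only, as the paragraph above notes.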
@@ -933,11 +933,12 @@ to be able to scale to using 10’s of GB of RAM and 10’s of CPU cores.</p>
 <p>Accumulo TabletServers bind certain ports on the host to accommodate remote procedure calls to/from
 other nodes. Running more than one TabletServer on a host requires that you set the environment variable
 <code class="highlighter-rouge">ACCUMULO_SERVICE_INSTANCE</code> to an instance number (e.g., 1, 2) for each instance that is started. Also, set
-these properties in <a href="/docs/2.x/configuration/files#accumuloproperties">accumulo.properties</a>:</p>
+these properties in <a href="/docs/2.x/configuration/files#accumuloproperties">accumulo.properties</a>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tserver.port.search=true
-replication.receipt.service.port=0
-</code></pre></div></div>
+<ul>
+  <li><a href="/docs/2.x/configuration/server-properties#tserver_port_search">tserver.port.search</a> = <code class="highlighter-rouge">true</code></li>
+  <li><a href="/docs/2.x/configuration/server-properties#replication_receipt_service_port">replication.receipt.service.port</a> = <code class="highlighter-rouge">0</code></li>
+</ul>
 
 <h2 id="logging">Logging</h2>
 
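The multi-TabletServer recipe in the hunk above can be sketched end to end. A rough sketch, assuming Accumulo 2.x launch scripts on the PATH; the instance numbers are illustrative:

```shell
# Sketch: run two TabletServers on one host (Accumulo 2.x assumed).
#
# accumulo.properties carries the two settings listed above:
#   tserver.port.search=true
#   replication.receipt.service.port=0
#
# Give each instance its own number so their per-instance state does not collide:
ACCUMULO_SERVICE_INSTANCE=1 accumulo tserver &
ACCUMULO_SERVICE_INSTANCE=2 accumulo tserver &
```

With `tserver.port.search=true`, the second instance probes for a free port instead of failing when the first instance already holds the default one.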
diff --git a/docs/2.x/development/mapreduce.html b/docs/2.x/development/mapreduce.html
index 2e5a4bf..9f13f6c 100644
--- a/docs/2.x/development/mapreduce.html
+++ b/docs/2.x/development/mapreduce.html
@@ -473,7 +473,7 @@ MapReduce jobs to run with both Accumulo’s &amp; Hadoop’s 
dependencies on th
 <p>Since 2.0, Accumulo no longer has the same versions for dependencies as 
Hadoop. While this allows
 Accumulo to update its dependencies more frequently, it can cause problems if 
both Accumulo’s &amp;
 Hadoop’s dependencies are on the classpath of the MapReduce job. When 
launching a MapReduce job that
-uses Accumulo, you should build a shaded jar with all of your dependencies and complete the following
+uses Accumulo, you should build a <a href="https://maven.apache.org/plugins/maven-shade-plugin/index.html">shaded jar</a> with all of your dependencies and complete the following
 steps so YARN only includes Hadoop code (and not all of Hadoop’s dependencies) 
when running your MapReduce job:</p>
 
 <ol>
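
The shaded-jar advice above can be sketched as a build-and-submit sequence. The artifact and class names below are hypothetical, and the Maven project is assumed to configure the maven-shade-plugin:

```shell
# Build one fat ("shaded") jar bundling the job plus Accumulo client
# dependencies (assumes maven-shade-plugin is configured in pom.xml):
mvn clean package
# Submit via YARN so Hadoop classes come from the cluster, not the jar
# (jar and main class names are hypothetical):
yarn jar target/myjob-shaded.jar com.example.MyMapReduceJob
```

Shading keeps Accumulo's newer dependency versions from clashing with the Hadoop versions already on the job's classpath, which is the problem the paragraph above describes.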
diff --git a/feed.xml b/feed.xml
index d9aab6e..9f0b2be 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>https://accumulo.apache.org/</link>
     <atom:link href="https://accumulo.apache.org/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Tue, 15 Jan 2019 17:50:28 -0500</pubDate>
-    <lastBuildDate>Tue, 15 Jan 2019 17:50:28 -0500</lastBuildDate>
+    <pubDate>Tue, 15 Jan 2019 19:25:58 -0500</pubDate>
+    <lastBuildDate>Tue, 15 Jan 2019 19:25:58 -0500</lastBuildDate>
     <generator>Jekyll v3.7.3</generator>
     
     
diff --git a/search_data.json b/search_data.json
index 978b936..4b7588f 100644
--- a/search_data.json
+++ b/search_data.json
@@ -16,7 +16,7 @@
   
     "docs-2-x-administration-in-depth-install": {
       "title": "In-depth Installation",
-      "content"         : "This document provides detailed instructions for installing Accumulo. For basicinstructions, see the quick start.HardwareBecause we are running essentially two or three systems simultaneously layeredacross the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware toconsist of 4 to 8 cores, and 8 to 32 GB RAM. This is so each running process can haveat least one core and 2 - 4 GB each.One core running HDFS can typically keep 2 to 4 disks busy, so each machi [...]
+      "content"         : "This document provides detailed instructions for installing Accumulo. For basicinstructions, see the quick start.HardwareBecause we are running essentially two or three systems simultaneously layeredacross the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware toconsist of 4 to 8 cores, and 8 to 32 GB RAM. This is so each running process can haveat least one core and 2 - 4 GB each.One core running HDFS can typically keep 2 to 4 disks busy, so each machi [...]
       "url": " /docs/2.x/administration/in-depth-install",
       "categories": "administration"
     },
