This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
     new ab08d17  Improved links in docs
ab08d17 is described below

commit ab08d1718e49c359cf8b478bba072ed5fd22474b
Author: Mike Walch <mwa...@apache.org>
AuthorDate: Tue Jan 15 19:25:38 2019 -0500

    Improved links in docs
---
 _docs-2/administration/in-depth-install.md | 10 ++++------
 _docs-2/development/mapreduce.md           |  3 ++-
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/_docs-2/administration/in-depth-install.md b/_docs-2/administration/in-depth-install.md
index 530191c..c22ce2c 100644
--- a/_docs-2/administration/in-depth-install.md
+++ b/_docs-2/administration/in-depth-install.md
@@ -344,7 +344,7 @@ configuration is:
 general.vfs.context.classpath.app1.delegation=post
 ```
 
-To use contexts in your application you can set the `table.classpath.context` on your tables or use the `setClassLoaderContext()` method on Scanner
+To use contexts in your application you can set the {% plink table.classpath.context %} on your tables or use the `setClassLoaderContext()` method on Scanner
 and BatchScanner passing in the name of the context, app1 in the example above. Setting the property on the table allows your minc, majc, and scan
 iterators to load classes from the locations defined by the context. Passing the context name to the scanners allows you to override the table setting
 to load only scan time iterators from a different location.
@@ -445,12 +445,10 @@ to be able to scale to using 10's of GB of RAM and 10's of CPU cores.
 Accumulo TabletServers bind certain ports on the host to accommodate remote procedure calls to/from
 other nodes. Running more than one TabletServer on a host requires that you set the environment variable
 `ACCUMULO_SERVICE_INSTANCE` to an instance number (i.e 1, 2) for each instance that is started. Also, set
-these properties in [accumulo.properties]:
+these properties in [accumulo.properties]:
 
-```
-tserver.port.search=true
-replication.receipt.service.port=0
-```
+* {% plink tserver.port.search %} = `true`
+* {% plink replication.receipt.service.port %} = `0`
 
 ## Logging
 
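The per-table setting from the first hunk above can also be applied from the Accumulo shell; a minimal sketch, where the table name `mytable` is a hypothetical placeholder and `app1` is the context name from the example:

```
# Accumulo shell: point scan/minc/majc iterators for this table at context "app1"
config -t mytable -s table.classpath.context=app1
```

For a per-scan override, the text above notes that `setClassLoaderContext("app1")` can instead be called on the Scanner or BatchScanner before iterating.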
diff --git a/_docs-2/development/mapreduce.md b/_docs-2/development/mapreduce.md
index adf0643..f5877e4 100644
--- a/_docs-2/development/mapreduce.md
+++ b/_docs-2/development/mapreduce.md
@@ -42,7 +42,7 @@ MapReduce jobs to run with both Accumulo's & Hadoop's dependencies on the classp
 Since 2.0, Accumulo no longer has the same versions for dependencies as Hadoop. While this allows
 Accumulo to update its dependencies more frequently, it can cause problems if both Accumulo's &
 Hadoop's dependencies are on the classpath of the MapReduce job. When launching a MapReduce job that
-uses Accumulo, you should build a shaded jar with all of your dependencies and complete the following
+uses Accumulo, you should build a [shaded jar] with all of your dependencies and complete the following
 steps so YARN only includes Hadoop code (and not all of Hadoop's dependencies) when running your MapReduce job:
 
 1. Set `export HADOOP_USE_CLIENT_CLASSLOADER=true` in your environment before submitting
@@ -181,6 +181,7 @@ The [Accumulo Examples repo][examples-repo] has several MapReduce examples:
 * [tablettofile] - Uses MapReduce to read a table and write one of its columns to a file in HDFS
 * [uniquecols] - Uses MapReduce to count unique columns in Accumulo
 
+[shaded jar]: https://maven.apache.org/plugins/maven-shade-plugin/index.html
 [AccumuloInputFormat]: {% jurl org.apache.accumulo.hadoop.mapreduce.AccumuloInputFormat %}
 [AccumuloOutputFormat]: {% jurl org.apache.accumulo.hadoop.mapreduce.AccumuloOutputFormat %}
 [AccumuloFileOutputFormat]: {% jurl org.apache.accumulo.hadoop.mapreduce.AccumuloFileOutputFormat %}
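The mapreduce.md change above boils down to a submission sequence; a sketch, where the shaded jar name and job class are hypothetical placeholders (only the `HADOOP_USE_CLIENT_CLASSLOADER` variable comes from the doc itself):

```
# Step 1 from the doc: isolate the job's classloader from Hadoop's dependencies
export HADOOP_USE_CLIENT_CLASSLOADER=true
# Submit the shaded jar (placeholder names) so YARN ships only Hadoop code
yarn jar my-app-shaded.jar org.example.MyMapReduceJob
```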
