Modified: knox/site/books/knox-1-1-0/user-guide.html
URL: 
http://svn.apache.org/viewvc/knox/site/books/knox-1-1-0/user-guide.html?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/site/books/knox-1-1-0/user-guide.html (original)
+++ knox/site/books/knox-1-1-0/user-guide.html Tue Jul  3 19:13:36 2018
@@ -38,7 +38,9 @@
       <li><a href="#Externalized+Provider+Configurations">Externalized 
Provider Configurations</a></li>
       <li><a href="#Sharing+HA+Providers">Sharing HA Providers</a></li>
       <li><a href="#Simplified+Descriptor+Files">Simplified Descriptor 
Files</a></li>
-      <li><a href="#Cluster+Configuration+Monitoring">Cluster Configuration 
Monitoring</a></li>
+    </ul></li>
+    <li><a href="#Cluster+Configuration+Monitoring">Cluster Configuration 
Monitoring</a>
+    <ul>
       <li><a href="#Remote+Configuration+Monitor">Remote Configuration 
Monitor</a></li>
       <li><a href="#Remote+Configuration+Registry+Clients">Remote 
Configuration Registry Clients</a></li>
       <li><a href="#Remote+Alias+Discovery">Remote Alias Discovery</a></li>
@@ -159,11 +161,11 @@
  <li>Access Hadoop with Knox</li>
 </ol><h3><a id="1+-+Requirements">1 - Requirements</a> <a 
href="#1+-+Requirements"><img src="markbook-section-link.png"/></a></h3><h4><a 
id="Java">Java</a> <a href="#Java"><img 
src="markbook-section-link.png"/></a></h4><p>Java 1.8 is required for the Knox 
Gateway runtime. Use the command below to check the version of Java installed 
on the system where Knox will be running.</p>
 <pre><code>java -version
-</code></pre><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img 
src="markbook-section-link.png"/></a></h4><p>Knox 1.1.0 supports Hadoop 3.x, 
the quick start instructions assume a Hadoop 2.x virtual machine based 
environment.</p><h3><a id="2+-+Download+Hadoop+2.x+VM">2 - Download Hadoop 2.x 
VM</a> <a href="#2+-+Download+Hadoop+2.x+VM"><img 
src="markbook-section-link.png"/></a></h3><p>The quick start provides a link to 
download Hadoop 2.0 based Hortonworks virtual machine <a 
href="http://hortonworks.com/products/hdp-2/#install";>Sandbox</a>. Please note 
Knox supports other Hadoop distributions and is configurable against a 
full-blown Hadoop cluster. Configuring Knox for Hadoop 2.x version, or Hadoop 
deployed in EC2 or a custom Hadoop cluster is documented in advance deployment 
guide.</p><h3><a id="3+-+Download+Apache+Knox+Gateway">3 - Download Apache Knox 
Gateway</a> <a href="#3+-+Download+Apache+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h3><p>Download one of the dist
 ributions below from the <a 
href="http://www.apache.org/dyn/closer.cgi/knox";>Apache mirrors</a>.</p>
+</code></pre><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img 
src="markbook-section-link.png"/></a></h4><p>Knox 1.1.0 supports Hadoop 2.x and 
3.x; the quick start instructions assume a Hadoop 2.x virtual machine based 
environment.</p><h3><a id="2+-+Download+Hadoop+2.x+VM">2 - Download Hadoop 2.x 
VM</a> <a href="#2+-+Download+Hadoop+2.x+VM"><img 
src="markbook-section-link.png"/></a></h3><p>The quick start provides a link to 
download Hadoop 2.0 based Hortonworks virtual machine <a 
href="http://hortonworks.com/products/hdp-2/#install";>Sandbox</a>. Please note 
Knox supports other Hadoop distributions and is configurable against a 
full-blown Hadoop cluster. Configuring Knox for Hadoop 2.x, for Hadoop deployed 
in EC2, or for a custom Hadoop cluster is documented in the advanced deployment 
guide.</p><h3><a id="3+-+Download+Apache+Knox+Gateway">3 - Download Apache Knox 
Gateway</a> <a href="#3+-+Download+Apache+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h3><p>Download one of 
 the distributions below from the <a 
href="http://www.apache.org/dyn/closer.cgi/knox";>Apache mirrors</a>.</p>
 <ul>
  <li>Source archive: <a 
href="http://www.apache.org/dyn/closer.cgi/knox/1.1.0/knox-1.1.0-src.zip">knox-1.1.0-src.zip</a>
 (<a href="http://www.apache.org/dist/knox/1.1.0/knox-1.1.0-src.zip.asc">PGP 
signature</a>, <a 
href="http://www.apache.org/dist/knox/1.1.0/knox-1.1.0-src.zip.sha">SHA1 
digest</a>, <a 
href="http://www.apache.org/dist/knox/1.1.0/knox-1.1.0-src.zip.md5">MD5 
digest</a>)</li>
  <li>Binary archive: <a 
href="http://www.apache.org/dyn/closer.cgi/knox/1.1.0/knox-1.1.0.zip">knox-1.1.0.zip</a>
 (<a href="http://www.apache.org/dist/knox/1.1.0/knox-1.1.0.zip.asc">PGP 
signature</a>, <a 
href="http://www.apache.org/dist/knox/1.1.0/knox-1.1.0.zip.sha">SHA1 
digest</a>, <a 
href="http://www.apache.org/dist/knox/1.1.0/knox-1.1.0.zip.md5">MD5 
digest</a>)</li>
-</ul><p>Apache Knox Gateway releases are available under the <a 
href="http://www.apache.org/licenses/LICENSE-2.0";>Apache License, Version 
2.0</a>. See the NOTICE file contained in each release artifact for applicable 
copyright attribution notices.</p><h3><a id="Verify">Verify</a> <a 
href="#Verify"><img src="markbook-section-link.png"/></a></h3><p>While 
recommended, verify is an optional step. You can verify the integrity of any 
downloaded files using the PGP signatures. Please read <a 
href="http://httpd.apache.org/dev/verification.html";>Verifying Apache HTTP 
Server Releases</a> for more information on why you should verify our 
releases.</p><p>The PGP signatures can be verified using PGP or GPG. First 
download the <a 
href="https://dist.apache.org/repos/dist/release/knox/KEYS";>KEYS</a> file as 
well as the .asc signature files for the relevant release packages. Make sure 
you get these files from the main distribution directory linked above, rather 
than from a mirror. Then verify the si
 gnatures using one of the methods below.</p>
+</ul><p>Apache Knox Gateway releases are available under the <a 
href="http://www.apache.org/licenses/LICENSE-2.0";>Apache License, Version 
2.0</a>. See the NOTICE file contained in each release artifact for applicable 
copyright attribution notices.</p><h3><a id="Verify">Verify</a> <a 
href="#Verify"><img src="markbook-section-link.png"/></a></h3><p>While 
recommended, verification of signatures is an optional step. You can verify the 
integrity of any downloaded files using the PGP signatures. Please read <a 
href="http://httpd.apache.org/dev/verification.html";>Verifying Apache HTTP 
Server Releases</a> for more information on why you should verify our 
releases.</p><p>The PGP signatures can be verified using PGP or GPG. First 
download the <a 
href="https://dist.apache.org/repos/dist/release/knox/KEYS";>KEYS</a> file as 
well as the <code>.asc</code> signature files for the relevant release 
packages. Make sure you get these files from the main distribution directory 
linked above, rather than 
 from a mirror. Then verify the signatures using one of the methods below.</p>
 <pre><code>% pgpk -a KEYS
 % pgpv knox-1.1.0.zip.asc
 </code></pre><p>or</p>
@@ -177,22 +179,22 @@
 </code></pre><p>This will create a directory <code>knox-{VERSION}</code> in 
your current directory. The directory <code>knox-{VERSION}</code> will be 
considered your <code>{GATEWAY_HOME}</code>.</p><h3><a 
id="6+-+Start+LDAP+embedded+in+Knox">6 - Start LDAP embedded in Knox</a> <a 
href="#6+-+Start+LDAP+embedded+in+Knox"><img 
src="markbook-section-link.png"/></a></h3><p>Knox comes with an LDAP server for 
demonstration purposes. Note: If the tool used to extract the contents of the 
tar or tar.gz file was not capable of making the files in the bin directory 
executable, you may need to do so manually.</p>
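<p>For example (a sketch; the exact command may vary by platform):</p>
<pre><code>cd {GATEWAY_HOME}
chmod a+x bin/*.sh
</code></pre><p>Then start the demo LDAP server:</p>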
 <pre><code>cd {GATEWAY_HOME}
 bin/ldap.sh start
-</code></pre><h3><a id="7+-+Create+the+Master+Secret">7 - Create the Master 
Secret</a> <a href="#7+-+Create+the+Master+Secret"><img 
src="markbook-section-link.png"/></a></h3><p>Run the knoxcli create-master 
command in order to persist the master secret that is used to protect the key 
and credential stores for the gateway instance.</p>
+</code></pre><h3><a id="7+-+Create+the+Master+Secret">7 - Create the Master 
Secret</a> <a href="#7+-+Create+the+Master+Secret"><img 
src="markbook-section-link.png"/></a></h3><p>Run the <code>knoxcli.sh 
create-master</code> command in order to persist the master secret that is used 
to protect the key and credential stores for the gateway instance.</p>
 <pre><code>cd {GATEWAY_HOME}
 bin/knoxcli.sh create-master
-</code></pre><p>The cli will prompt you for the master secret (i.e. 
password).</p><h3><a id="7+-+Start+Knox">7 - Start Knox</a> <a 
href="#7+-+Start+Knox"><img src="markbook-section-link.png"/></a></h3><p>The 
gateway can be started using the provided shell script.</p><p>The server will 
discover the persisted master secret during start up and complete the setup 
process for demo installs. A demo install will consist of a knox gateway 
instance with an identity certificate for localhost. This will require clients 
to be on the same machine or to turn off hostname verification. For more 
involved deployments, See the Knox CLI section of this document for additional 
configuration options, including the ability to create a self-signed 
certificate for a specific hostname.</p>
+</code></pre><p>The CLI will prompt you for the master secret (i.e. 
password).</p><h3><a id="7+-+Start+Knox">7 - Start Knox</a> <a 
href="#7+-+Start+Knox"><img src="markbook-section-link.png"/></a></h3><p>The 
gateway can be started using the provided shell script.</p><p>The server will 
discover the persisted master secret during startup and complete the setup 
process for demo installs. A demo install will consist of a Knox gateway 
instance with an identity certificate for localhost. This will require clients 
to be on the same machine or to turn off hostname verification. For more 
involved deployments, see the Knox CLI section of this document for additional 
configuration options, including the ability to create a self-signed 
certificate for a specific hostname.</p>
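<p>As a sketch of one such option, an identity certificate for a specific 
hostname can be created with the Knox CLI before the first start; here 
<code>{gateway-hostname}</code> is a placeholder for your actual host name 
(see the Knox CLI section for the full syntax):</p>
<pre><code>cd {GATEWAY_HOME}
bin/knoxcli.sh create-cert --hostname {gateway-hostname}
</code></pre><p>The gateway itself is started as follows:</p>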
 <pre><code>cd {GATEWAY_HOME}
 bin/gateway.sh start
-</code></pre><p>When starting the gateway this way the process will be run in 
the background. The log files will be written to {GATEWAY_HOME}/logs and the 
process ID files (PIDS) will b written to {GATEWAY_HOME}/pids.</p><p>In order 
to stop a gateway that was started with the script use this command.</p>
+</code></pre><p>When starting the gateway this way, the process will be run in 
the background. The log files will be written to 
<code>{GATEWAY_HOME}/logs</code> and the process ID files (PIDs) will be 
written to <code>{GATEWAY_HOME}/pids</code>.</p><p>In order to stop a gateway 
that was started with the script, use this command:</p>
 <pre><code>cd {GATEWAY_HOME}
 bin/gateway.sh stop
-</code></pre><p>If for some reason the gateway is stopped other than by using 
the command above you may need to clear the tracking PID.</p>
+</code></pre><p>If for some reason the gateway is stopped other than by using 
the command above, you may need to clear the tracking PID:</p>
 <pre><code>cd {GATEWAY_HOME}
 bin/gateway.sh clean
-</code></pre><p><strong>NOTE: This command will also clear any .out and .err 
file from the {GATEWAY_HOME}/logs directory so use this with 
caution.</strong></p><h3><a id="8+-+Do+Hadoop+with+Knox">8 - Do Hadoop with 
Knox</a> <a href="#8+-+Do+Hadoop+with+Knox"><img 
src="markbook-section-link.png"/></a></h3><h4><a 
id="Invoke+the+LISTSTATUS+operation+on+WebHDFS+via+the+gateway.">Invoke the 
LISTSTATUS operation on WebHDFS via the gateway.</a> <a 
href="#Invoke+the+LISTSTATUS+operation+on+WebHDFS+via+the+gateway."><img 
src="markbook-section-link.png"/></a></h4><p>This will return a directory 
listing of the root (i.e. /) directory of HDFS.</p>
+</code></pre><p><strong>NOTE: This command will also clear any 
<code>.out</code> and <code>.err</code> files from the 
<code>{GATEWAY_HOME}/logs</code> directory, so use this with 
caution.</strong></p><h3><a id="8+-+Access+Hadoop+with+Knox">8 - Access Hadoop 
with Knox</a> <a href="#8+-+Access+Hadoop+with+Knox"><img 
src="markbook-section-link.png"/></a></h3><h4><a 
id="Invoke+the+LISTSTATUS+operation+on+WebHDFS+via+the+gateway.">Invoke the 
LISTSTATUS operation on WebHDFS via the gateway.</a> <a 
href="#Invoke+the+LISTSTATUS+operation+on+WebHDFS+via+the+gateway."><img 
src="markbook-section-link.png"/></a></h4><p>This will return a directory 
listing of the root (i.e. <code>/</code>) directory of HDFS.</p>
 <pre><code>curl -i -k -u guest:guest-password -X GET \
     &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS&#39;
-</code></pre><p>The results of the above command should result in something to 
along the lines of the output below. The exact information returned is subject 
to the content within HDFS in your Hadoop cluster. Successfully executing this 
command at a minimum proves that the gateway is properly configured to provide 
access to WebHDFS. It does not necessarily provide that any of the other 
services are correct configured to be accessible. To validate that see the 
sections for the individual services in <a href="#Service+Details">Service 
Details</a>.</p>
+</code></pre><p>The above command should produce output along the lines of the 
example below. The exact information returned is subject 
to the content within HDFS in your Hadoop cluster. Successfully executing this 
command at a minimum proves that the gateway is properly configured to provide 
access to WebHDFS. It does not necessarily mean that any of the other services 
are correctly configured to be accessible. To validate that, see the sections 
for the individual services in <a href="#Service+Details">Service 
Details</a>.</p>
 <pre><code>HTTP/1.1 200 OK
 Content-Type: application/json
 Content-Length: 760
@@ -209,18 +211,18 @@ Server: Jetty(6.1.26)
     
&#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=CREATE&#39;
 
 curl -i -k -u guest:guest-password -T LICENSE -X PUT \
-    &#39;{Value of Location header from response   above}&#39;
+    &#39;{Value of Location header from response above}&#39;
 </code></pre><h4><a id="Get+a+file+in+HDFS+via+Knox.">Get a file in HDFS via 
Knox.</a> <a href="#Get+a+file+in+HDFS+via+Knox."><img 
src="markbook-section-link.png"/></a></h4>
 <pre><code>curl -i -k -u guest:guest-password -X GET \
     
&#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=OPEN&#39;
 
 curl -i -k -u guest:guest-password -X GET \
     &#39;{Value of Location header from command response above}&#39;
-</code></pre><h2><a id="Apache+Knox+Details">Apache Knox Details</a> <a 
href="#Apache+Knox+Details"><img 
src="markbook-section-link.png"/></a></h2><p>This section provides everything 
you need to know to get the Knox gateway up and running against a Hadoop 
cluster.</p><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img 
src="markbook-section-link.png"/></a></h4><p>An existing Hadoop 2.x cluster is 
required for Knox to sit in front of and protect. It is possible to use a 
Hadoop cluster deployed on EC2 but this will require additional configuration 
not covered here. It is also possible to protect access to a services of a 
Hadoop cluster that is secured with Kerberos. This too requires additional 
configuration that is described in other sections of this guide. See <a 
href="#Supported+Services">Supported Services</a> for details on what is 
supported for this release.</p><p>The Hadoop cluster should be ensured to have 
at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, deploy
 ed and running. HBase/Stargate and Hive can also be accessed via the Knox 
Gateway given the proper versions and configuration.</p><p>The instructions 
that follow assume a few things:</p>
+</code></pre><h2><a id="Apache+Knox+Details">Apache Knox Details</a> <a 
href="#Apache+Knox+Details"><img 
src="markbook-section-link.png"/></a></h2><p>This section provides everything 
you need to know to get the Knox gateway up and running against a Hadoop 
cluster.</p><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img 
src="markbook-section-link.png"/></a></h4><p>An existing Hadoop 2.x or 3.x 
cluster is required for Knox to sit in front of and protect. It is possible to 
use a Hadoop cluster deployed on EC2 but this will require additional 
configuration not covered here. It is also possible to protect access to the 
services of a Hadoop cluster that is secured with Kerberos. This too requires 
additional configuration that is described in other sections of this guide. See 
<a href="#Supported+Services">Supported Services</a> for details on what is 
supported for this release.</p><p>The instructions that follow assume a few 
things:</p>
 <ol>
   <li>The gateway is <em>not</em> collocated with the Hadoop clusters 
themselves.</li>
  <li>The host names and IP addresses of the cluster services are accessible 
by the gateway wherever it happens to be running.</li>
-</ol><p>All of the instructions and samples provided here are tailored and 
tested to work &ldquo;out of the box&rdquo; against a <a 
href="http://hortonworks.com/products/hortonworks-sandbox";>Hortonworks Sandbox 
2.x VM</a>.</p><h4><a id="Apache+Knox+Directory+Layout">Apache Knox Directory 
Layout</a> <a href="#Apache+Knox+Directory+Layout"><img 
src="markbook-section-link.png"/></a></h4><p>Knox can be installed by expanding 
the zip/archive file.</p><p>The table below provides a brief explanation of the 
important files and directories within <code>{GATEWAY_HOME}</code></p>
+</ol><p>All of the instructions and samples provided here are tailored and 
tested to work &ldquo;out of the box&rdquo; against a <a 
href="https://hortonworks.com/products/sandbox/";>Hortonworks Sandbox 2.x 
VM</a>.</p><h4><a id="Apache+Knox+Directory+Layout">Apache Knox Directory 
Layout</a> <a href="#Apache+Knox+Directory+Layout"><img 
src="markbook-section-link.png"/></a></h4><p>Knox can be installed by expanding 
the zip/archive file.</p><p>The table below provides a brief explanation of the 
important files and directories within <code>{GATEWAY_HOME}</code>.</p>
 <table>
   <thead>
     <tr>
@@ -275,7 +277,7 @@ curl -i -k -u guest:guest-password -X GE
     </tr>
     <tr>
       <td>pids/ </td>
-      <td>Contains the process ids for running ldap and gateway servers </td>
+      <td>Contains the process IDs for running LDAP and gateway servers </td>
     </tr>
     <tr>
       <td>samples/ </td>
@@ -398,7 +400,7 @@ curl -i -k -u guest:guest-password -X GE
   <li><a href="#Hive+Examples">Hive Examples</a></li>
   <li><a href="#Yarn+Examples">Yarn Examples</a></li>
   <li><a href="#Storm+Examples">Storm Examples</a></li>
-</ul><h3><a id="Gateway+Samples">Gateway Samples</a> <a 
href="#Gateway+Samples"><img src="markbook-section-link.png"/></a></h3><p>The 
purpose of the samples within the {GATEWAY_HOME}/samples directory is to 
demonstrate the capabilities of the Apache Knox Gateway to provide access to 
the numerous APIs that are available from the service components of a Hadoop 
cluster.</p><p>Depending on exactly how your Knox installation was done, there 
will be some number of steps required in order fully install and configure the 
samples for use.</p><p>This section will help describe the assumptions of the 
samples and the steps to get them to work in a couple of different deployment 
scenarios.</p><h4><a id="Assumptions+of+the+Samples">Assumptions of the 
Samples</a> <a href="#Assumptions+of+the+Samples"><img 
src="markbook-section-link.png"/></a></h4><p>The samples were initially written 
with the intent of working out of the box for the various Hadoop demo 
environments that are deployed as a single no
 de cluster inside of a VM. The following assumptions were made from that 
context and should be understood in order to get the samples to work in other 
deployment scenarios:</p>
+</ul><h3><a id="Gateway+Samples">Gateway Samples</a> <a 
href="#Gateway+Samples"><img src="markbook-section-link.png"/></a></h3><p>The 
purpose of the samples within the <code>{GATEWAY_HOME}/samples</code> directory 
is to demonstrate the capabilities of the Apache Knox Gateway to provide access 
to the numerous APIs that are available from the service components of a Hadoop 
cluster.</p><p>Depending on exactly how your Knox installation was done, there 
will be some number of steps required in order to fully install and configure the 
samples for use.</p><p>This section will help describe the assumptions of the 
samples and the steps to get them to work in a couple of different deployment 
scenarios.</p><h4><a id="Assumptions+of+the+Samples">Assumptions of the 
Samples</a> <a href="#Assumptions+of+the+Samples"><img 
src="markbook-section-link.png"/></a></h4><p>The samples were initially written 
with the intent of working out of the box for the various Hadoop demo 
environments that are deployed a
 s a single node cluster inside of a VM. The following assumptions were made 
from that context and should be understood in order to get the samples to work 
in other deployment scenarios:</p>
 <ul>
  <li>That there is a valid Java JDK on the PATH for executing the samples</li>
  <li>The Knox Demo LDAP server is running on localhost and port 33389, which 
is the default port for the ApacheDS LDAP server.</li>
@@ -407,17 +409,17 @@ curl -i -k -u guest:guest-password -X GE
  <li>Finally, that there is a properly provisioned <code>sandbox.xml</code> 
topology in the <code>{GATEWAY_HOME}/conf/topologies</code> directory that is 
configured to point to the actual host and ports of running service 
components.</li>
 </ul><h4><a id="Steps+for+Demo+Single+Node+Clusters">Steps for Demo Single 
Node Clusters</a> <a href="#Steps+for+Demo+Single+Node+Clusters"><img 
src="markbook-section-link.png"/></a></h4><p>There should be little to do if 
anything in a demo environment that has been provisioned with illustrating the 
use of Apache Knox.</p><p>However, the following items will be worth ensuring 
before you start:</p>
 <ol>
-  <li>The sandbox.xml topology is configured properly for the deployed 
services</li>
+  <li>The <code>sandbox.xml</code> topology is configured properly for the 
deployed services</li>
  <li>That there is an LDAP server running with the guest/guest-password user 
available in the directory</li>
-</ol><h4><a id="Steps+for+Ambari+Deployed+Knox+Gateway">Steps for Ambari 
Deployed Knox Gateway</a> <a 
href="#Steps+for+Ambari+Deployed+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h4><p>Apache Knox instances that are 
under the management of Ambari are generally assumed not to be demo instances. 
These instances are in place to facilitate development, testing or production 
Hadoop clusters.</p><p>The Knox samples can however be made to work with Ambari 
managed Knox instances with a few steps:</p>
+</ol><h4><a id="Steps+for+Ambari+deployed+Knox+Gateway">Steps for Ambari 
deployed Knox Gateway</a> <a 
href="#Steps+for+Ambari+deployed+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h4><p>Apache Knox instances that are 
under the management of Ambari are generally assumed not to be demo instances. 
These instances are in place to facilitate development, testing or production 
Hadoop clusters.</p><p>The Knox samples can however be made to work with Ambari 
managed Knox instances with a few steps:</p>
 <ol>
-  <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
+  <li>You need to have SSH access to the environment in order for the 
localhost assumption within the samples to be valid</li>
   <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
-  <li>The default.xml topology file can be copied to sandbox.xml in order to 
satisfy the topology name assumption in the samples.</li>
+  <li>The <code>default.xml</code> topology file can be copied to 
<code>sandbox.xml</code> in order to satisfy the topology name assumption in 
the samples</li>
  <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p><code>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</code></p></li>
-</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway">Steps for a 
Manually Installed Knox Gateway</a> <a 
href="#Steps+for+a+Manually+Installed+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h4><p>For manually installed Knox 
instances, there is really no way for the installer to know how to configure 
the topology file for you.</p><p>Essentially, these steps are identical to the 
Ambari deployed instance except that #3 should be replaced with the 
configuration of the out of the box sandbox.xml to point the configuration at 
the proper hosts and ports.</p>
+</ol><h4><a id="Steps+for+a+manually+installed+Knox+Gateway">Steps for a 
manually installed Knox Gateway</a> <a 
href="#Steps+for+a+manually+installed+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h4><p>For manually installed Knox 
instances, there is really no way for the installer to know how to configure 
the topology file for you.</p><p>Essentially, these steps are identical to the 
Ambari deployed instance, except that #3 should be replaced with editing the 
out-of-the-box <code>sandbox.xml</code> to point the configuration at the 
proper hosts and ports.</p>
 <ol>
-  <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
+  <li>You need to have SSH access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
  <li>The Knox Demo LDAP Server is started - you can start it with 
<code>bin/ldap.sh start</code></li>
   <li>Change the hosts and ports within the 
<code>{GATEWAY_HOME}/conf/topologies/sandbox.xml</code> to reflect your actual 
cluster service locations.</li>
  <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p><code>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</code></p></li>
@@ -440,12 +442,12 @@ curl -i -k -u guest:guest-password -X GE
 --><h2><a id="Gateway+Details">Gateway Details</a> <a 
href="#Gateway+Details"><img src="markbook-section-link.png"/></a></h2><p>This 
section describes the details of the Knox Gateway itself. Including: </p>
 <ul>
   <li>How URLs are mapped between a gateway that services multiple Hadoop 
clusters and the clusters themselves</li>
-  <li>How the gateway is configured through gateway-site.xml and cluster 
specific topology files</li>
+  <li>How the gateway is configured through <code>gateway-site.xml</code> and 
cluster specific topology files</li>
   <li>How to configure the various policy enforcement provider features such 
as authentication, authorization, auditing, hostmapping, etc.</li>
-</ul><h3><a id="URL+Mapping">URL Mapping</a> <a href="#URL+Mapping"><img 
src="markbook-section-link.png"/></a></h3><p>The gateway functions much like a 
reverse proxy. As such, it maintains a mapping of URLs that are exposed 
externally by the gateway to URLs that are provided by the Hadoop 
cluster.</p><h4><a id="Default+Topology+URLs">Default Topology URLs</a> <a 
href="#Default+Topology+URLs"><img 
src="markbook-section-link.png"/></a></h4><p>In order to provide compatibility 
with the Hadoop java client and existing CLI tools, the Knox Gateway has 
provided a feature called the Default Topology. This refers to a topology 
deployment that will be able to route URLs without the additional context that 
the gateway uses for differentiating from one Hadoop cluster to another. This 
allows the URLs to match those used by existing clients that may access webhdfs 
through the Hadoop file system abstraction.</p><p>When a topology file is 
deployed with a file name that matches the configured defaul
 t topology name, a specialized mapping for URLs is installed for that 
particular topology. This allows the URLs that are expected by the existing 
Hadoop CLIs for webhdfs to be used in interacting with the specific Hadoop 
cluster that is represented by the default topology file.</p><p>The 
configuration for the default topology name is found in gateway-site.xml as a 
property called: &ldquo;default.app.topology.name&rdquo;.</p><p>The default 
value for this property is &ldquo;sandbox&rdquo;.</p><p>Therefore, when 
deploying the sandbox.xml topology, both of the following example URLs work for 
the same underlying Hadoop cluster:</p>
+</ul><h3><a id="URL+Mapping">URL Mapping</a> <a href="#URL+Mapping"><img 
src="markbook-section-link.png"/></a></h3><p>The gateway functions much like a 
reverse proxy. As such, it maintains a mapping of URLs that are exposed 
externally by the gateway to URLs that are provided by the Hadoop 
cluster.</p><h4><a id="Default+Topology+URLs">Default Topology URLs</a> <a 
href="#Default+Topology+URLs"><img 
src="markbook-section-link.png"/></a></h4><p>In order to provide compatibility 
with the Hadoop Java client and existing CLI tools, the Knox Gateway has 
provided a feature called the <em>Default Topology</em>. This refers to a 
topology deployment that will be able to route URLs without the additional 
context that the gateway uses for differentiating from one Hadoop cluster to 
another. This allows the URLs to match those used by existing clients that may 
access WebHDFS through the Hadoop file system abstraction.</p><p>When a 
topology file is deployed with a file name that matches the configur
 ed default topology name, a specialized mapping for URLs is installed for that 
particular topology. This allows the URLs that are expected by the existing 
Hadoop CLIs for WebHDFS to be used in interacting with the specific Hadoop 
cluster that is represented by the default topology file.</p><p>The 
configuration for the default topology name is found in 
<code>gateway-site.xml</code> as a property called: 
<code>default.app.topology.name</code>.</p><p>The default value for this 
property is <code>sandbox</code>.</p><p>Therefore, when deploying the 
<code>sandbox.xml</code> topology, both of the following example URLs work for 
the same underlying Hadoop cluster:</p>
 <pre><code>https://{gateway-host}:{gateway-port}/webhdfs
 https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs
-</code></pre><p>These default topology URLs exist for all of the services in 
the topology.</p><h4><a id="Fully+Qualified+URLs">Fully Qualified URLs</a> <a 
href="#Fully+Qualified+URLs"><img 
src="markbook-section-link.png"/></a></h4><p>Examples of mappings for the 
WebHDFS, WebHCat, Oozie and HBase are shown below. These mapping are generated 
from the combination of the gateway configuration file (i.e. 
<code>{GATEWAY_HOME}/conf/gateway-site.xml</code>) and the cluster topology 
descriptors (e.g. 
<code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>). The port 
numbers shown for the Cluster URLs represent the default ports for these 
services. The actual port number may be different for a given cluster.</p>
+</code></pre><p>These default topology URLs exist for all of the services in 
the topology.</p><h4><a id="Fully+Qualified+URLs">Fully Qualified URLs</a> <a 
href="#Fully+Qualified+URLs"><img 
src="markbook-section-link.png"/></a></h4><p>Examples of mappings for WebHDFS, 
WebHCat, Oozie and HBase are shown below. These mappings are generated from the 
combination of the gateway configuration file (i.e. 
<code>{GATEWAY_HOME}/conf/gateway-site.xml</code>) and the cluster topology 
descriptors (e.g. 
<code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>). The port 
numbers shown for the Cluster URLs represent the default ports for these 
services. The actual port number may be different for a given cluster.</p>
 <ul>
   <li>WebHDFS
   <ul>
@@ -472,7 +474,7 @@ https://{gateway-host}:{gateway-port}/{g
     <li>Gateway: 
<code>jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password};transportMode=http;httpPath={gateway-path}/{cluster-name}/hive</code></li>
     <li>Cluster: <code>http://{hive-host}:10001/cliservice</code></li>
   </ul></li>
-</ul><p>The values for <code>{gateway-host}</code>, 
<code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the 
gateway configuration file (i.e. 
<code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for 
<code>{cluster-name}</code> is derived from the file name of the cluster 
topology descriptor (e.g. 
<code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The value 
for <code>{webhdfs-host}</code>, <code>{webhcat-host}</code>, 
<code>{oozie-host}</code>, <code>{hbase-host}</code> and 
<code>{hive-host}</code> are provided via the cluster topology descriptor (e.g. 
<code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>).</p><p>Note: 
The ports 50070, 50111, 11000, 8080 and 10001 are the defaults for WebHDFS, 
WebHCat, Oozie, HBase and Hive respectively. Their values can also be provided 
via the cluster topology descriptor if your Hadoop cluster uses different 
ports.</p><p>Note: The HBase REST API uses port 8080 by default. This often 
clash
 es with other running services. In the Hortonworks Sandbox, Apache Ambari 
might be running on this port so you might have to change it to a different 
port (e.g. 60080). </p><h4><a id="Topology+Port+Mapping">Topology Port 
Mapping</a> <a href="#Topology+Port+Mapping"><img 
src="markbook-section-link.png"/></a></h4><p>This feature allows mapping of a 
topology to a port, as a result one can have a specific topology listening on a 
configured port. This feature routes URLs to these port-mapped topologies 
without the additional context that the gateway uses for differentiating from 
one Hadoop cluster to another, just like the <a 
href="#Default+Topology+URLs">Default Topology URLs</a> feature, but on a 
dedicated port. </p><p>The configuration for Topology Port Mapping goes in 
<code>gateway-site.xml</code> file. The configuration uses the property name 
and value model to configure the settings for this feature. The format for the 
property name is <code>gateway.port.mapping.{topologyName}</cod
 e> and value is the port number that this topology would listen on. </p><p>In 
the following example, the topology <code>development</code> will listen on 
9443 (if the port is not already taken).</p>
+</ul><p>The values for <code>{gateway-host}</code>, 
<code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the 
gateway configuration file (i.e. 
<code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for 
<code>{cluster-name}</code> is derived from the file name of the cluster 
topology descriptor (e.g. 
<code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The values 
for <code>{webhdfs-host}</code>, <code>{webhcat-host}</code>, 
<code>{oozie-host}</code>, <code>{hbase-host}</code> and 
<code>{hive-host}</code> are provided via the cluster topology descriptor (e.g. 
<code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>).</p><p>Note: 
The ports 50070 (9870 for Hadoop 3.x), 50111, 11000, 8080 and 10001 are the 
defaults for WebHDFS, WebHCat, Oozie, HBase and Hive respectively. Their values 
can also be provided via the cluster topology descriptor if your Hadoop cluster 
uses different ports.</p><p>Note: The HBase REST API uses port 8080 by def
 ault. This often clashes with other running services. In the Hortonworks 
Sandbox, Apache Ambari might be running on this port, so you might have to 
change it to a different port (e.g. 60080). </p><h4><a 
id="Topology+Port+Mapping">Topology Port Mapping</a> <a 
href="#Topology+Port+Mapping"><img 
src="markbook-section-link.png"/></a></h4><p>This feature allows mapping of a 
topology to a port; as a result, one can have a specific topology listening on a 
configured port. This feature routes URLs to these port-mapped topologies 
without the additional context that the gateway uses for differentiating from 
one Hadoop cluster to another, just like the <a 
href="#Default+Topology+URLs">Default Topology URLs</a> feature, but on a 
dedicated port. </p><p>The configuration for Topology Port Mapping goes in the 
<code>gateway-site.xml</code> file. The configuration uses the property name 
and value model to configure the settings for this feature. The format for the 
property name is <code>gateway.port.mapping.{topologyName}</code> and the 
value is the port number that this topology will listen on. </p><p>In the 
following example, the topology <code>development</code> will listen on 9443 
(if the port is not already taken).</p>
 <pre><code>  &lt;property&gt;
       &lt;name&gt;gateway.port.mapping.development&lt;/name&gt;
       &lt;value&gt;9443&lt;/value&gt;
@@ -488,8 +490,7 @@ https://{gateway-host}:{gateway-port}/{g
      &lt;value&gt;false&lt;/value&gt;
      &lt;description&gt;Enable/Disable port mapping 
feature.&lt;/description&gt;
  &lt;/property&gt;
-</code></pre>
-<!--If a topology mapped port is in use by another topology or process then an 
ERROR message is logged and gateway startup continues as normal.-->
+</code></pre><p>If a topology-mapped port is in use by another topology or 
process, then an ERROR message is logged and gateway startup continues as 
normal.</p>
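<p>With the <code>development</code> mapping above, WebHDFS in that topology 
can then be reached directly on the mapped port, without the 
<code>{gateway-path}/{cluster-name}</code> context (an illustrative sketch 
reusing the demo credentials from the quick start):</p>
<pre><code>curl -i -k -u guest:guest-password -X GET \
    &#39;https://{gateway-host}:9443/webhdfs/v1/?op=LISTSTATUS&#39;
</code></pre>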
 <!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
@@ -564,216 +565,216 @@ https://{gateway-host}:{gateway-port}/{g
 <table>
   <thead>
     <tr>
-      <th>property </th>
-      <th>description </th>
-      <th>default</th>
+      <th>Property </th>
+      <th>Description </th>
+      <th>Default</th>
     </tr>
   </thead>
   <tbody>
     <tr>
-      <td>gateway.deployment.dir</td>
-      <td>The directory within GATEWAY_HOME that contains gateway topology 
deployments.</td>
-      <td>{GATEWAY_HOME}/data/deployments</td>
+      <td><code>gateway.deployment.dir</code></td>
+      <td>The directory within <code>GATEWAY_HOME</code> that contains gateway 
topology deployments</td>
+      <td><code>{GATEWAY_HOME}/data/deployments</code></td>
     </tr>
     <tr>
-      <td>gateway.security.dir</td>
-      <td>The directory within GATEWAY_HOME that contains the required 
security artifacts</td>
-      <td>{GATEWAY_HOME}/data/security</td>
+      <td><code>gateway.security.dir</code></td>
+      <td>The directory within <code>GATEWAY_HOME</code> that contains the 
required security artifacts</td>
+      <td><code>{GATEWAY_HOME}/data/security</code></td>
     </tr>
     <tr>
-      <td>gateway.data.dir</td>
-      <td>The directory within GATEWAY_HOME that contains the gateway instance 
data</td>
-      <td>{GATEWAY_HOME}/data</td>
+      <td><code>gateway.data.dir</code></td>
+      <td>The directory within <code>GATEWAY_HOME</code> that contains the 
gateway instance data</td>
+      <td><code>{GATEWAY_HOME}/data</code></td>
     </tr>
     <tr>
-      <td>gateway.services.dir</td>
-      <td>The directory within GATEWAY_HOME that contains the gateway services 
definitions.</td>
-      <td>{GATEWAY_HOME}/services</td>
+      <td><code>gateway.services.dir</code></td>
+      <td>The directory within <code>GATEWAY_HOME</code> that contains the 
gateway services definitions</td>
+      <td><code>{GATEWAY_HOME}/services</code></td>
     </tr>
     <tr>
-      <td>gateway.hadoop.conf.dir</td>
-      <td>The directory within GATEWAY_HOME that contains the gateway 
configuration</td>
-      <td>{GATEWAY_HOME}/conf</td>
+      <td><code>gateway.hadoop.conf.dir</code></td>
+      <td>The directory within <code>GATEWAY_HOME</code> that contains the 
gateway configuration</td>
+      <td><code>{GATEWAY_HOME}/conf</code></td>
     </tr>
     <tr>
-      <td>gateway.frontend.url</td>
+      <td><code>gateway.frontend.url</code></td>
       <td>The URL that should be used during rewriting so that it can rewrite 
the URLs with the correct &ldquo;frontend&rdquo; URL</td>
       <td>none</td>
     </tr>
     <tr>
-      <td>gateway.xforwarded.enabled</td>
+      <td><code>gateway.xforwarded.enabled</code></td>
       <td>Indicates whether support for some X-Forwarded-* headers is 
enabled</td>
-      <td>true</td>
+      <td><code>true</code></td>
     </tr>
     <tr>
-      <td>gateway.trust.all.certs</td>
+      <td><code>gateway.trust.all.certs</code></td>
       <td>Indicates whether all presented client certs should establish 
trust</td>
-      <td>false</td>
+      <td><code>false</code></td>
     </tr>
     <tr>
-      <td>gateway.client.auth.needed</td>
+      <td><code>gateway.client.auth.needed</code></td>
       <td>Indicates whether clients are required to establish a trust 
relationship with client certificates</td>
-      <td>false</td>
+      <td><code>false</code></td>
     </tr>
     <tr>
-      <td>gateway.truststore.path</td>
+      <td><code>gateway.truststore.path</code></td>
       <td>Location of the truststore for client certificates to be trusted</td>
-      <td>gateway.jks</td>
+      <td><code>gateway.jks</code></td>
     </tr>
     <tr>
-      <td>gateway.truststore.type</td>
+      <td><code>gateway.truststore.type</code></td>
       <td>Indicates the type of truststore</td>
-      <td>JKS</td>
+      <td><code>JKS</code></td>
     </tr>
     <tr>
-      <td>gateway.keystore.type</td>
+      <td><code>gateway.keystore.type</code></td>
       <td>Indicates the type of keystore for the identity store</td>
-      <td>JKS</td>
+      <td><code>JKS</code></td>
     </tr>
     <tr>
-      <td>gateway.jdk.tls.ephemeralDHKeySize</td>
-      <td>jdk.tls.ephemeralDHKeySize, is defined to customize the ephemeral DH 
key sizes. The minimum acceptable DH key size is 1024 bits, except for 
exportable cipher suites or legacy mode (jdk.tls.ephemeralDHKeySize=legacy)</td>
-      <td>2048</td>
+      <td><code>gateway.jdk.tls.ephemeralDHKeySize</code></td>
+      <td><code>jdk.tls.ephemeralDHKeySize</code> is defined to customize the 
ephemeral DH key sizes. The minimum acceptable DH key size is 1024 bits, except 
for exportable cipher suites or legacy mode 
(<code>jdk.tls.ephemeralDHKeySize=legacy</code>)</td>
+      <td><code>2048</code></td>
     </tr>
     <tr>
-      <td>gateway.threadpool.max</td>
+      <td><code>gateway.threadpool.max</code></td>
       <td>The maximum concurrent requests the server will process. Connections 
beyond this will be queued.</td>
-      <td>254</td>
+      <td><code>254</code></td>
     </tr>
     <tr>
-      <td>gateway.httpclient.maxConnections</td>
-      <td>The maximum number of connections that a single httpclient will 
maintain to a single host:port. The default is 32.</td>
-      <td>32</td>
+      <td><code>gateway.httpclient.maxConnections</code></td>
+      <td>The maximum number of connections that a single HttpClient will 
maintain to a single host:port.</td>
+      <td><code>32</code></td>
     </tr>
     <tr>
-      <td>gateway.httpclient.connectionTimeout</td>
-      <td>The amount of time to wait when attempting a connection. The natural 
unit is milliseconds but a &lsquo;s&rsquo; or &lsquo;m&rsquo; suffix may be 
used for seconds or minutes respectively. The default timeout is 20 sec. </td>
-      <td>20 sec.</td>
+      <td><code>gateway.httpclient.connectionTimeout</code></td>
+      <td>The amount of time to wait when attempting a connection. The natural 
unit is milliseconds, but a &lsquo;s&rsquo; or &lsquo;m&rsquo; suffix may be 
used for seconds or minutes respectively.</td>
+      <td>20s</td>
     </tr>
     <tr>
-      <td>gateway.httpclient.socketTimeout</td>
-      <td>The amount of time to wait for data on a socket before aborting the 
connection. The natural unit is milliseconds but a &lsquo;s&rsquo; or 
&lsquo;m&rsquo; suffix may be used for seconds or minutes respectively. The 
default timeout is 20 sec. </td>
-      <td>20 sec.</td>
+      <td><code>gateway.httpclient.socketTimeout</code></td>
+      <td>The amount of time to wait for data on a socket before aborting the 
connection. The natural unit is milliseconds, but a &lsquo;s&rsquo; or 
&lsquo;m&rsquo; suffix may be used for seconds or minutes respectively.</td>
+      <td>20s</td>
     </tr>
     <tr>
-      <td>gateway.httpserver.requestBuffer</td>
-      <td>The size of the HTTP server request buffer. The default is 16K.</td>
-      <td>16384</td>
+      <td><code>gateway.httpserver.requestBuffer</code></td>
+      <td>The size of the HTTP server request buffer in bytes</td>
+      <td><code>16384</code></td>
     </tr>
     <tr>
-      <td>gateway.httpserver.requestHeaderBuffer</td>
-      <td>The size of the HTTP server request header buffer. The default is 
8K.</td>
-      <td>8192</td>
+      <td><code>gateway.httpserver.requestHeaderBuffer</code></td>
+      <td>The size of the HTTP server request header buffer in bytes</td>
+      <td><code>8192</code></td>
     </tr>
     <tr>
-      <td>gateway.httpserver.responseBuffer</td>
-      <td>The size of the HTTP server response buffer. The default is 32K.</td>
-      <td>32768</td>
+      <td><code>gateway.httpserver.responseBuffer</code></td>
+      <td>The size of the HTTP server response buffer in bytes</td>
+      <td><code>32768</code></td>
     </tr>
     <tr>
-      <td>gateway.httpserver.responseHeaderBuffer</td>
-      <td>The size of the HTTP server response header buffer. The default is 
8K.</td>
-      <td>8192</td>
+      <td><code>gateway.httpserver.responseHeaderBuffer</code></td>
+      <td>The size of the HTTP server response header buffer in bytes</td>
+      <td><code>8192</code></td>
     </tr>
     <tr>
-      <td>gateway.websocket.feature.enabled</td>
-      <td>Enable/Disable websocket feature.</td>
-      <td>false</td>
+      <td><code>gateway.websocket.feature.enabled</code></td>
+      <td>Enable/Disable WebSocket feature</td>
+      <td><code>false</code></td>
     </tr>
     <tr>
-      <td>gateway.gzip.compress.mime.types</td>
+      <td><code>gateway.gzip.compress.mime.types</code></td>
       <td>Content types to be gzip compressed by Knox on the way out to the 
browser.</td>
       <td>text/html, text/plain, text/xml, text/css, application/javascript, 
text/javascript, application/x-javascript</td>
     </tr>
     <tr>
-      <td>gateway.signing.keystore.name</td>
-      <td>OPTIONAL Filename of keystore file that contains the signing 
keypair. NOTE: An alias needs to be created using &ldquo;knoxcli.sh 
create-alias&rdquo; for the alias name signing.key.passphrase in order to 
provide the passphrase to access the keystore.</td>
+      <td><code>gateway.signing.keystore.name</code></td>
+      <td>OPTIONAL Filename of keystore file that contains the signing 
keypair. NOTE: An alias needs to be created using <code>knoxcli.sh 
create-alias</code> for the alias name <code>signing.key.passphrase</code> in 
order to provide the passphrase to access the keystore.</td>
       <td>null</td>
     </tr>
     <tr>
-      <td>gateway.signing.key.alias</td>
-      <td>OPTIONAL alias for the signing keypair within the keystore specified 
via gateway.signing.keystore.name.</td>
+      <td><code>gateway.signing.key.alias</code></td>
+      <td>OPTIONAL alias for the signing keypair within the keystore specified 
via <code>gateway.signing.keystore.name</code></td>
       <td>null</td>
     </tr>
     <tr>
-      <td>ssl.enabled</td>
+      <td><code>ssl.enabled</code></td>
       <td>Indicates whether SSL is enabled for the Gateway</td>
-      <td>true</td>
+      <td><code>true</code></td>
     </tr>
     <tr>
-      <td>ssl.include.ciphers</td>
+      <td><code>ssl.include.ciphers</code></td>
       <td>A comma separated list of ciphers to accept for SSL. See the <a 
href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSEProvider";>JSSE
 Provider docs</a> for possible ciphers. These can also contain regular 
expressions as shown in the <a 
href="http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html";>Jetty
 documentation</a>.</td>
       <td>all</td>
     </tr>
     <tr>
-      <td>ssl.exclude.ciphers</td>
+      <td><code>ssl.exclude.ciphers</code></td>
       <td>A comma separated list of ciphers to reject for SSL. See the <a 
href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSEProvider";>JSSE
 Provider docs</a> for possible ciphers. These can also contain regular 
expressions as shown in the <a 
href="http://www.eclipse.org/jetty/documentation/current/configuring-ssl.html";>Jetty
 documentation</a>.</td>
       <td>none</td>
     </tr>
     <tr>
-      <td>ssl.exclude.protocols</td>
+      <td><code>ssl.exclude.protocols</code></td>
       <td>A comma separated list of protocols that should not be accepted for 
SSL, or &ldquo;none&rdquo;</td>
-      <td>SSLv3</td>
+      <td><code>SSLv3</code></td>
     </tr>
     <tr>
-      <td>gateway.remote.config.monitor.client</td>
-      <td>A reference to the <a 
href="#Remote+Configuration+Registry+Clients">remote configuration registry 
client</a> the remote configuration monitor will employ.</td>
+      <td><code>gateway.remote.config.monitor.client</code></td>
+      <td>A reference to the <a 
href="#Remote+Configuration+Registry+Clients">remote configuration registry 
client</a> the remote configuration monitor will employ</td>
       <td>null</td>
     </tr>
     <tr>
-      <td>gateway.remote.config.monitor.client.allowUnauthenticatedReadAccess 
</td>
+      
<td><code>gateway.remote.config.monitor.client.allowUnauthenticatedReadAccess</code>
 </td>
       <td>When a remote registry client is configured to access a registry 
securely, this property can be set to allow unauthenticated clients to continue 
to read the content from that registry by setting the ACLs accordingly. </td>
-      <td>false</td>
+      <td><code>false</code></td>
     </tr>
     <tr>
-      <td>gateway.remote.config.registry.<b>&lt;name&gt;</b></td>
+      <td><code>gateway.remote.config.registry.&lt;name&gt;</code></td>
       <td>A named <a href="#Remote+Configuration+Registry+Clients">remote 
configuration registry client</a> definition</td>
       <td>null</td>
     </tr>
     <tr>
-      <td>gateway.cluster.config.monitor.ambari.enabled </td>
-      <td>Indicates whether the cluster monitoring and associated dynamic 
topology updating is enabled. </td>
-      <td>false</td>
+      <td><code>gateway.cluster.config.monitor.ambari.enabled</code></td>
+      <td>Indicates whether the cluster monitoring and associated dynamic 
topology updating is enabled </td>
+      <td><code>false</code></td>
     </tr>
     <tr>
-      <td>gateway.cluster.config.monitor.ambari.interval </td>
-      <td>The interval (in seconds) at which the cluster monitor will poll 
Ambari for cluster configuration changes. </td>
-      <td>60</td>
+      <td><code>gateway.cluster.config.monitor.ambari.interval</code> </td>
+      <td>The interval (in seconds) at which the cluster monitor will poll 
Ambari for cluster configuration changes </td>
+      <td><code>60</code></td>
     </tr>
     <tr>
-      <td>gateway.remote.alias.service.enabled </td>
+      <td><code>gateway.remote.alias.service.enabled</code> </td>
       <td>Turn on/off Remote Alias Discovery; this will take effect only when 
the remote configuration monitor is enabled </td>
-      <td>true</td>
+      <td><code>true</code></td>
     </tr>
     <tr>
-      <td>gateway.read.only.override.topologies </td>
+      <td><code>gateway.read.only.override.topologies</code> </td>
       <td>A comma-delimited list of topology names which should be forcibly 
treated as read-only. </td>
       <td>none</td>
     </tr>
     <tr>
-      <td>gateway.discovery.default.address </td>
+      <td><code>gateway.discovery.default.address</code> </td>
       <td>The default discovery address, which is applied if no address is 
specified in a descriptor. </td>
       <td>null</td>
     </tr>
     <tr>
-      <td>gateway.discovery.default.cluster </td>
+      <td><code>gateway.discovery.default.cluster</code> </td>
       <td>The default discovery cluster name, which is applied if no cluster 
name is specified in a descriptor. </td>
       <td>null</td>
     </tr>
     <tr>
-      <td>gateway.dispatch.whitelist </td>
+      <td><code>gateway.dispatch.whitelist</code> </td>
       <td>A semicolon-delimited list of regular expressions controlling the 
endpoints to which Knox dispatches and redirects will be permitted. If DEFAULT 
is specified, or the property is omitted entirely, then a default domain-based 
whitelist will be derived from the Knox host. An empty value means no 
dispatches will be permitted. </td>
       <td>null</td>
     </tr>
     <tr>
-      <td>gateway.dispatch.whitelist.services </td>
+      <td><code>gateway.dispatch.whitelist.services</code> </td>
       <td>A comma-delimited list of service roles to which the 
<code>gateway.dispatch.whitelist</code> will be applied. </td>
       <td>none</td>
     </tr>
     <tr>
-      <td>gateway.strict.topology.validation </td>
+      <td><code>gateway.strict.topology.validation</code> </td>
       <td>If true, topology XML files will be validated against the topology 
schema during redeploy </td>
-      <td>false</td>
+      <td><code>false</code></td>
     </tr>
   </tbody>
 </table><h4><a id="Topology+Descriptors">Topology Descriptors</a> <a 
href="#Topology+Descriptors"><img 
src="markbook-section-link.png"/></a></h4><p>The topology descriptor files 
provide the gateway with per-cluster configuration information. This includes 
configuration for both the providers within the gateway and the services within 
the Hadoop cluster. These files are located in 
<code>{GATEWAY_HOME}/conf/topologies</code>. The general outline of this 
document looks like this.</p>
@@ -828,7 +829,7 @@ ec2-23-23-25-10.compute-1.amazonaws.com
 Internal HOSTNAMES:
 ip-10-118-99-172.ec2.internal
 ip-10-39-107-209.ec2.internal
-</code></pre><p>The Hostmap configuration required to allow access external to 
the Hadoop cluster via the Apache Knox Gateway would be this.</p>
+</code></pre><p>The Hostmap configuration required to allow access external to 
the Hadoop cluster via the Apache Knox Gateway would be this:</p>
 <pre><code>&lt;topology&gt;
     &lt;gateway&gt;
         ...
@@ -934,41 +935,41 @@ ip-10-39-107-209.ec2.internal
 <table>
   <thead>
     <tr>
-      
<th>property&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</th>
-      <th>description</th>
+      <th>Property </th>
+      <th>Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
-      <td>discovery-type</td>
-      <td>The discovery source type. (Currently, the only supported type is 
<em>AMBARI</em>).</td>
+      <td><code>discovery-type</code></td>
+      <td>The discovery source type. (Currently, the only supported type is 
<code>AMBARI</code>).</td>
     </tr>
     <tr>
-      <td>discovery-address</td>
-      <td>The endpoint address for the discovery source. If omitted, then Knox 
will check for the gateway-site configuration property named 
<em>gateway.discovery.default.address</em>, and use its value if defined.</td>
+      <td><code>discovery-address</code></td>
+      <td>The endpoint address for the discovery source.</td>
     </tr>
     <tr>
-      <td>discovery-user</td>
-      <td>The username with permission to access the discovery source. If 
omitted, then Knox will check for an alias named 
<em>ambari.discovery.user</em>, and use its value if defined.</td>
+      <td><code>discovery-user</code></td>
+      <td>The username with permission to access the discovery source. If 
omitted, then Knox will check for an alias named 
<code>ambari.discovery.user</code>, and use its value if defined.</td>
     </tr>
     <tr>
-      <td>discovery-pwd-alias</td>
-      <td>The alias of the password for the user with permission to access the 
discovery source. If omitted, then Knox will check for an alias named 
<em>ambari.discovery.password</em>, and use its value if defined.</td>
+      <td><code>discovery-pwd-alias</code></td>
+      <td>The alias of the password for the user with permission to access the 
discovery source. If omitted, then Knox will check for an alias named 
<code>ambari.discovery.password</code>, and use its value if defined.</td>
     </tr>
     <tr>
-      <td>provider-config-ref</td>
+      <td><code>provider-config-ref</code></td>
       <td>A reference to a provider configuration in 
<code>{GATEWAY_HOME}/conf/shared-providers/</code>.</td>
     </tr>
     <tr>
-      <td>cluster</td>
-      <td>The name of the cluster from which the topology service endpoints 
should be determined. If omitted, then Knox will check for the gateway-site 
configuration property named <em>gateway.discovery.default.cluster</em>, and 
use its value if defined.</td>
+      <td><code>cluster</code></td>
+      <td>The name of the cluster from which the topology service endpoints 
should be determined.</td>
     </tr>
     <tr>
-      <td>services</td>
+      <td><code>services</code></td>
       <td>The collection of services to be included in the topology.</td>
     </tr>
     <tr>
-      <td>applications</td>
+      <td><code>applications</code></td>
       <td>The collection of applications to be included in the topology.</td>
     </tr>
   </tbody>
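<p>For reference, a sketch of a simple descriptor using these properties (the 
discovery address, credentials, cluster name, and provider configuration 
reference below are illustrative placeholders):</p>
<pre><code>discovery-type: AMBARI
discovery-address: http://sandbox.hortonworks.com:8080
discovery-user: maria_dev
discovery-pwd-alias: ambari.discovery.password
provider-config-ref: sandbox-providers
cluster: Sandbox
services:
  - name: NAMENODE
  - name: WEBHDFS
</code></pre>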
@@ -1083,9 +1084,9 @@ services:
 &lt;/property&gt;
 </code></pre><p><em>The actual name of the client (e.g., 
sandbox-zookeeper-client) is not important, provided that the reference matches 
the name specified in the client definition.</em></p><p>With this 
configuration, the gateway will monitor the following znodes in the specified 
ZooKeeper instance:</p>
 <pre><code>/knox
-   /config
-      /shared-providers
-      /descriptors
+    /config
+        /shared-providers
+        /descriptors
 </code></pre><p>The creation of these znodes, and the population of their 
respective contents, is an activity <strong>not</strong> currently managed by 
the gateway. However, the <a href="#Knox+CLI">Knox CLI</a> includes commands 
for managing the contents of these znodes.</p><p>These znodes are treated 
similarly to the local <em>shared-providers</em> and <em>descriptors</em> 
directories described in <a href="#Deployment+Directories">Deployment 
Directories</a>. When the monitor notices a change to these znodes, it will 
attempt to effect the same change locally.</p><p>If a provider configuration is 
added to the <em>/knox/config/shared-providers</em> znode, the monitor will 
download the new configuration to the local shared-providers directory. 
Likewise, if a descriptor is added to the <em>/knox/config/descriptors</em> 
znode, the monitor will download the new descriptor to the local descriptors 
directory, which will trigger an attempt to generate and deploy a corresponding 
topology.</p>
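<p>For example, assuming the registry client named 
<code>sandbox-zookeeper-client</code> shown below has been defined, provider 
configurations and descriptors could be published to these znodes with 
commands along the following lines (see the Knox CLI section for the 
authoritative syntax):</p>
<pre><code>bin/knoxcli.sh upload-provider-config /tmp/sandbox-providers.xml \
    --registry-client sandbox-zookeeper-client
bin/knoxcli.sh upload-descriptor /tmp/sandbox.json \
    --registry-client sandbox-zookeeper-client
</code></pre>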
 <p>Modifications to the contents of these znodes will yield the same behavior 
 as the corresponding local modifications.</p>
 <table>
   <thead>
@@ -1097,32 +1098,32 @@ services:
   </thead>
   <tbody>
     <tr>
-      <td>/knox/config/shared-providers </td>
+      <td><code>/knox/config/shared-providers</code> </td>
       <td>add </td>
       <td>Download the new file to the local shared-providers directory</td>
     </tr>
     <tr>
-      <td>/knox/config/shared-providers </td>
+      <td><code>/knox/config/shared-providers</code> </td>
       <td>modify </td>
       <td>Download the new file to the local shared-providers directory; If 
there are any existing descriptor references, then topology will be regenerated 
and redeployed for those referencing descriptors.</td>
     </tr>
     <tr>
-      <td>/knox/config/shared-providers </td>
+      <td><code>/knox/config/shared-providers</code> </td>
       <td>delete </td>
       <td>Delete the corresponding file from the local shared-providers 
directory</td>
     </tr>
     <tr>
-      <td>/knox/config/descriptors </td>
+      <td><code>/knox/config/descriptors</code> </td>
       <td>add </td>
       <td>Download the new file to the local descriptors directory; A 
corresponding topology will be generated and deployed.</td>
     </tr>
     <tr>
-      <td>/knox/config/descriptors </td>
+      <td><code>/knox/config/descriptors</code> </td>
       <td>modify </td>
       <td>Download the new file to the local descriptors directory; The 
corresponding topology will be regenerated and redeployed.</td>
     </tr>
     <tr>
-      <td>/knox/config/descriptors </td>
+      <td><code>/knox/config/descriptors</code> </td>
       <td>delete </td>
       <td>Delete the corresponding file from the local descriptors 
directory</td>
     </tr>
@@ -1132,9 +1133,9 @@ services:
 that ACLs be applied to restrict at least writing of the entries referenced by 
this monitor. If write
 access is available to everyone, then the contents of the configuration cannot 
be known to be trustworthy,
 and there is the potential for malicious activity. Be sure to carefully 
consider who will have the ability
-to define configuration in monitored remote registries, and apply the 
necessary measures to ensure its
+to define configuration in monitored remote registries and apply the necessary 
measures to ensure its
 trustworthiness.
-</code></pre><h4><a id="Remote+Configuration+Registry+Clients">Remote 
Configuration Registry Clients</a> <a 
href="#Remote+Configuration+Registry+Clients"><img 
src="markbook-section-link.png"/></a></h4><p>One or more features of the 
gateway employ remote configuration registry (e.g., ZooKeeper) clients. These 
clients are configured by setting properties in the gateway configuration 
(gateway-site.xml).</p><p>Each client configuration is a single property, the 
name of which is prefixed with <strong>gateway.remote.config.registry.</strong> 
and suffixed by the client identifier. The value of such a property, is a 
registry-type-specific set of semicolon-delimited properties for that client, 
including the type of registry with which it will interact.</p>
+</code></pre><h4><a id="Remote+Configuration+Registry+Clients">Remote 
Configuration Registry Clients</a> <a 
href="#Remote+Configuration+Registry+Clients"><img 
src="markbook-section-link.png"/></a></h4><p>One or more features of the 
gateway employ remote configuration registry (e.g., ZooKeeper) clients. These 
clients are configured by setting properties in the gateway configuration 
(<code>gateway-site.xml</code>).</p><p>Each client configuration is a single 
property, the name of which is prefixed with 
<strong>gateway.remote.config.registry.</strong> and suffixed by the client 
identifier. The value of such a property is a registry-type-specific set of 
semicolon-delimited properties for that client, including the type of registry 
with which it will interact.</p>
 <pre><code>&lt;property&gt;
     &lt;name&gt;gateway.remote.config.registry.a-zookeeper-client&lt;/name&gt;
     
&lt;value&gt;type=ZooKeeper;address=zkhost1:2181,zkhost2:2181,zkhost3:2181&lt;/value&gt;
@@ -1164,7 +1165,7 @@ trustworthiness.
     &lt;value&gt;false&lt;/value&gt;
    &lt;description&gt;Turn on/off Remote Alias Discovery (true by 
default)&lt;/description&gt;
 &lt;/property&gt;
-</code></pre><h4><a id="Logging">Logging</a> <a href="#Logging"><img 
src="markbook-section-link.png"/></a></h4><p>If necessary you can enable 
additional logging by editing the <code>log4j.properties</code> file in the 
<code>conf</code> directory. Changing the <code>rootLogger</code> value from 
<code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug 
logging. A number of useful, more fine loggers are also provided in the 
file.</p><h4><a id="Java+VM+Options">Java VM Options</a> <a 
href="#Java+VM+Options"><img src="markbook-section-link.png"/></a></h4><p>TODO 
- Java VM options doc.</p><h4><a id="Persisting+the+Master+Secret">Persisting 
the Master Secret</a> <a href="#Persisting+the+Master+Secret"><img 
src="markbook-section-link.png"/></a></h4><p>The master secret is required to 
start the server. This secret is used to access secured artifacts by the 
gateway instance. Keystore, trust stores and credential stores are all 
protected with the master secret.</p><p>You m
 ay persist the master secret by supplying the <em>-persist-master</em> switch 
at startup. This will result in a warning indicating that persisting the secret 
is less secure than providing it at startup. We do make some provisions in 
order to protect the persisted password.</p><p>It is encrypted with AES 128 bit 
encryption and where possible the file permissions are set to only be 
accessible by the user that the gateway is running as.</p><p>After persisting 
the secret, ensure that the file at data/security/master has the appropriate 
permissions set for your environment. This is probably the most important layer 
of defense for master secret. Do not assume that the encryption is sufficient 
protection.</p><p>A specific user should be created to run the gateway. This 
user will be the only user with permissions for the persisted master 
file.</p><p>See the Knox CLI section for descriptions of the command line 
utilities related to the master secret.</p><h4><a 
id="Management+of+Security+Arti
 facts">Management of Security Artifacts</a> <a 
href="#Management+of+Security+Artifacts"><img 
src="markbook-section-link.png"/></a></h4><p>There are a number of artifacts 
that are used by the gateway in ensuring the security of wire level 
communications, access to protected resources and the encryption of sensitive 
data. These artifacts can be managed from outside of the gateway instances or 
generated and populated by the gateway instance itself.</p><p>The following is 
a description of how this is coordinated with both standalone (development, 
demo, etc) gateway instances and instances as part of a cluster of gateways in 
mind.</p><p>Upon start of the gateway server we:</p>
+</code></pre><h4><a id="Logging">Logging</a> <a href="#Logging"><img 
src="markbook-section-link.png"/></a></h4><p>If necessary you can enable 
additional logging by editing the <code>log4j.properties</code> file in the 
<code>conf</code> directory. Changing the <code>rootLogger</code> value from 
<code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug 
logging. A number of useful, finer-grained loggers are also provided in the 
file.</p><h4><a id="Java+VM+Options">Java VM Options</a> <a 
href="#Java+VM+Options"><img src="markbook-section-link.png"/></a></h4><p>TODO 
- Java VM options doc.</p><h4><a id="Persisting+the+Master+Secret">Persisting 
the Master Secret</a> <a href="#Persisting+the+Master+Secret"><img 
src="markbook-section-link.png"/></a></h4><p>The master secret is required to 
start the server. This secret is used to access secured artifacts by the 
gateway instance. Keystore, trust stores and credential stores are all 
protected with the master secret.</p><p>You may 
persist the master secret by supplying the <em>-persist-master</em> switch 
at startup. This will result in a warning indicating that persisting the secret 
is less secure than providing it at startup. We do make some provisions in 
order to protect the persisted password.</p><p>It is encrypted with AES 128 bit 
encryption and where possible the file permissions are set to only be 
accessible by the user that the gateway is running as.</p><p>After persisting 
the secret, ensure that the file at <code>data/security/master</code> has the 
appropriate permissions set for your environment. This is probably the most 
important layer of defense for the master secret. Do not assume that the encryption 
is sufficient protection.</p><p>A specific user should be created to run the 
gateway. This user will be the only user with permissions for the persisted 
master file.</p><p>See the Knox CLI section for descriptions of the command 
line utilities related to the master secret.</p><h4><a 
id="Management+of+Security+Artifacts">Management of Security Artifacts</a> <a 
href="#Management+of+Security+Artifacts"><img 
src="markbook-section-link.png"/></a></h4><p>There are a number of artifacts 
that are used by the gateway in ensuring the security of wire level 
communications, access to protected resources and the encryption of sensitive 
data. These artifacts can be managed from outside of the gateway instances or 
generated and populated by the gateway instance itself.</p><p>The following is 
a description of how this is coordinated with both standalone (development, 
demo, etc.) gateway instances and instances as part of a cluster of gateways in 
mind.</p><p>Upon start of the gateway server we:</p>
 <ol>
   <li>Look for an identity store at 
<code>data/security/keystores/gateway.jks</code>.  The identity store contains 
the certificate and private key used to represent the identity of the server 
for SSL connections and signature creation.
   <ul>
@@ -1188,7 +1189,7 @@ trustworthiness.
  <li>Using a single gateway instance as a master instance, the artifacts can 
be generated or placed into the expected location and then replicated across 
all of the slave instances before startup.</li>
   <li>Using an NFS mount as a central location for the artifacts would provide 
a single source of truth without the need to replicate them over the network. 
Of course, NFS mounts have their own challenges.</li>
   <li>Using the KnoxCLI to create and manage the security artifacts.</li>
-</ol><p>See the Knox CLI section for descriptions of the command line 
utilities related to the security artifact management.</p><h4><a 
id="Keystores">Keystores</a> <a href="#Keystores"><img 
src="markbook-section-link.png"/></a></h4><p>In order to provide your own 
certificate for use by the gateway, you will need to either import an existing 
key pair into a Java keystore or generate a self-signed cert using the Java 
keytool.</p><h5><a id="Importing+a+key+pair+into+a+Java+keystore">Importing a 
key pair into a Java keystore</a> <a 
href="#Importing+a+key+pair+into+a+Java+keystore"><img 
src="markbook-section-link.png"/></a></h5><p>One way to accomplish this is to 
start with a PKCS12 store for your key pair and then convert it to a Java 
keystore or JKS.</p><p>The following example uses openssl to create a PKCS12 
encoded store from your provided certificate and private key that are in PEM 
format.</p>
+</ol><p>See the Knox CLI section for descriptions of the command line 
utilities related to security artifact management.</p><h4><a 
id="Keystores">Keystores</a> <a href="#Keystores"><img 
src="markbook-section-link.png"/></a></h4><p>In order to provide your own 
certificate for use by the gateway, you will need to either import an existing 
key pair into a Java keystore or generate a self-signed cert using the Java 
keytool.</p><h5><a id="Importing+a+key+pair+into+a+Java+keystore">Importing a 
key pair into a Java keystore</a> <a 
href="#Importing+a+key+pair+into+a+Java+keystore"><img 
src="markbook-section-link.png"/></a></h5><p>One way to accomplish this is to 
start with a PKCS12 store for your key pair and then convert it to a Java 
keystore or JKS.</p><p>The following example uses OpenSSL to create a PKCS12 
encoded store from your provided certificate and private key that are in PEM 
format.</p>
 <pre><code>openssl pkcs12 -export -in cert.pem -inkey key.pem &gt; server.p12
 </code></pre><p>The next example converts the PKCS12 store into a Java 
keystore (JKS). It should prompt you for the keystore and key passwords for the 
destination keystore. You must use the master secret for the keystore password 
and keep track of the password that you use for the key passphrase.</p>
 <pre><code>keytool -importkeystore -srckeystore server.p12 -destkeystore 
gateway.jks -srcstoretype pkcs12
@@ -1197,7 +1198,7 @@ trustworthiness.
  <li><p>the alias MUST be &ldquo;gateway-identity&rdquo;. You may need to 
 change it after the import of the PKCS12 store. You can use keytool to do 
 this - for example:</p>
   <pre><code>keytool -changealias -alias &quot;1&quot; -destalias 
&quot;gateway-identity&quot; -keystore gateway.jks -storepass {knoxpw}
 </code></pre></li>
-  <li><p>the name of the expected identity keystore for the gateway MUST be 
gateway.jks</p></li>
+  <li><p>the name of the expected identity keystore for the gateway MUST be 
<code>gateway.jks</code></p></li>
   <li><p>the passwords for the keystore and the imported key may both be set 
to the master secret for the gateway install. You can change the key passphrase 
after import using keytool as well. You may need to do this in order to 
provision the password in the credential store as described later in this 
section. For example:</p>
   <pre><code>keytool -keypasswd -alias gateway-identity -keystore gateway.jks
 </code></pre></li>
@@ -1235,190 +1236,190 @@ keytool -keystore gateway.jks -storepass
  <li><p>Verify that clients can use the CA authority cert to access Knox 
 (which is the goal of using a publicly signed cert) using curl or a web 
 browser which has the CA certificate installed</p>
   <pre><code>curl --cacert supwin12ad.cer -u hdptester:hadoop -X GET 
&#39;https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS&#39;
 </code></pre></li>
-</ol><h5><a id="Credential+Store">Credential Store</a> <a 
href="#Credential+Store"><img 
src="markbook-section-link.png"/></a></h5><p>Whenever you provide your own 
keystore with either a self-signed cert or an issued certificate signed by a 
trusted authority, you will need to set an alias for the 
gateway-identity-passphrase or create an empty credential store. This is 
necessary for the current release in order for the system to determine the 
correct password for the keystore and the key.</p><p>The credential stores in 
Knox use the JCEKS keystore type as it allows for the storage of general 
secrets in addition to certificates.</p><p>Keytool may be used to create 
credential stores but the Knox CLI section details how to create aliases. These 
aliases are managed within credential stores which are created by the CLI as 
needed. The simplest approach is to create the gateway-identity-passpharse 
alias with the Knox CLI. This will create the credential store if it 
doesn&rsquo;t already exist
  and add the key passphrase.</p><p>See the Knox CLI section for descriptions 
of the command line utilities related to the management of the credential 
stores.</p><h5><a id="Provisioning+of+Keystores">Provisioning of Keystores</a> 
<a href="#Provisioning+of+Keystores"><img 
src="markbook-section-link.png"/></a></h5><p>Once you have created these 
keystores you must move them into place for the gateway to discover them and 
use them to represent its identity for SSL connections. This is done by copying 
the keystores to the <code>{GATEWAY_HOME}/data/security/keystores</code> 
directory for your gateway install.</p><h4><a 
id="Summary+of+Secrets+to+be+Managed">Summary of Secrets to be Managed</a> <a 
href="#Summary+of+Secrets+to+be+Managed"><img 
src="markbook-section-link.png"/></a></h4>
+</ol><h5><a id="Credential+Store">Credential Store</a> <a 
href="#Credential+Store"><img 
src="markbook-section-link.png"/></a></h5><p>Whenever you provide your own 
keystore with either a self-signed cert or an issued certificate signed by a 
trusted authority, you will need to set an alias for the 
<code>gateway-identity-passphrase</code> or create an empty credential store. 
This is necessary for the current release in order for the system to determine 
the correct password for the keystore and the key.</p><p>The credential stores 
in Knox use the JCEKS keystore type as it allows for the storage of general 
secrets in addition to certificates.</p><p>Keytool may be used to create 
credential stores, but the Knox CLI section details how to create aliases. 
These aliases are managed within credential stores which are created by the 
CLI as needed. The simplest approach is to create the 
<code>gateway-identity-passphrase</code> alias with the Knox CLI. This will 
create the credential store if it 
doesn&rsquo;t already exist and add the key passphrase.</p><p>See the Knox CLI 
section for descriptions of the command line utilities related to the 
management of the credential stores.</p><h5><a 
id="Provisioning+of+Keystores">Provisioning of Keystores</a> <a 
href="#Provisioning+of+Keystores"><img 
src="markbook-section-link.png"/></a></h5><p>Once you have created these 
keystores you must move them into place for the gateway to discover them and 
use them to represent its identity for SSL connections. This is done by copying 
the keystores to the <code>{GATEWAY_HOME}/data/security/keystores</code> 
directory for your gateway install.</p><h4><a 
id="Summary+of+Secrets+to+be+Managed">Summary of Secrets to be Managed</a> <a 
href="#Summary+of+Secrets+to+be+Managed"><img 
src="markbook-section-link.png"/></a></h4>
 <ol>
   <li>Master secret - the same for all gateway instances in a cluster of 
gateways</li>
   <li>All security related artifacts are protected with the master secret</li>
   <li>Secrets used by the gateway itself are stored within the gateway 
credential store and are the same across all gateway instances in the cluster 
of gateways</li>
-  <li>Secrets used by providers within cluster topologies are stored in 
topology specific credential stores and are the same for the same topology 
across the cluster of gateway instances.  However, they are specific to the 
topology - so secrets for one hadoop cluster are different from those of 
another.  This allows for fail-over from one gateway instance to another even 
when encryption is being used while not allowing the compromise of one 
encryption key to expose the data for all clusters.</li>
-</ol><p>NOTE: the SSL certificate will need special consideration depending on 
the type of certificate. Wildcard certs may be able to be shared across all 
gateway instances in a cluster. When certs are dedicated to specific machines 
the gateway identity store will not be able to be blindly replicated as host 
name verification problems will ensue. Obviously, trust-stores will need to be 
taken into account as well.</p><h3><a id="Knox+CLI">Knox CLI</a> <a 
href="#Knox+CLI"><img src="markbook-section-link.png"/></a></h3><p>The Knox CLI 
is a command line utility for the management of various aspects of the Knox 
deployment. It is primarily concerned with the management of the security 
artifacts for the gateway instance and each of the deployed topologies or 
Hadoop clusters that are gated by the Knox Gateway instance.</p><p>The various 
security artifacts are also generated and populated automatically by the Knox 
Gateway runtime when they are not found at startup. The assumptions made in tho
 se cases are appropriate for a test or development gateway instance and assume 
&lsquo;localhost&rsquo; for hostname specific activities. For production 
deployments the use of the CLI may aid in managing some production 
deployments.</p><p>The knoxcli.sh script is located in the 
<code>{GATEWAY_HOME}/bin</code> directory.</p><h4><a id="Help">Help</a> <a 
href="#Help"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+[--help]`"><code>bin/knoxcli.sh [--help]</code></a> <a 
href="#`bin/knoxcli.sh+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>prints help for all 
commands</p><h4><a id="Knox+Version+Info">Knox Version Info</a> <a 
href="#Knox+Version+Info"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+version+[--help]`"><code>bin/knoxcli.sh version 
[--help]</code></a> <a href="#`bin/knoxcli.sh+version+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Displays Knox version 
information.</p><h4><a id="Master+secret+persi
 stence">Master secret persistence</a> <a 
href="#Master+secret+persistence"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-master+[--force][--help]`"><code>bin/knoxcli.sh 
create-master [--force][--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-master+[--force][--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Creates and persists an encrypted 
master secret in a file within 
<code>{GATEWAY_HOME}/data/security/master</code>. </p><p>NOTE: This command 
fails when there is an existing master file in the expected location. You may 
force it to overwrite the master file with the --force switch. NOTE: this will 
require you to change passwords protecting the keystores for the gateway 
identity keystores and all credential stores.</p><h4><a 
id="Alias+creation">Alias creation</a> <a href="#Alias+creation"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"><code>
 bin/knoxcli.sh create-alias name [--cluster c] [--value v] [--generate] 
[--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"><img
 src="markbook-section-link.png"/></a></h5><p>Creates a password alias and 
stores it in a credential store within the 
<code>{GATEWAY_HOME}/data/security/keystores</code> dir. </p>
+  <li>Secrets used by providers within cluster topologies are stored in 
topology specific credential stores and are the same for the same topology 
across the cluster of gateway instances.  However, they are specific to the 
topology - so secrets for one Hadoop cluster are different from those of 
another.  This allows for fail-over from one gateway instance to another even 
when encryption is being used while not allowing the compromise of one 
encryption key to expose the data for all clusters.</li>
+</ol><p>NOTE: the SSL certificate will need special consideration depending on 
the type of certificate. Wildcard certs may be able to be shared across all 
gateway instances in a cluster. When certs are dedicated to specific machines, 
the gateway identity store cannot simply be replicated, as hostname 
verification problems will ensue. Obviously, trust stores will need to be 
taken into account as well.</p><h3><a id="Knox+CLI">Knox CLI</a> <a 
href="#Knox+CLI"><img src="markbook-section-link.png"/></a></h3><p>The Knox CLI 
is a command line utility for the management of various aspects of the Knox 
deployment. It is primarily concerned with the management of the security 
artifacts for the gateway instance and each of the deployed topologies or 
Hadoop clusters that are gated by the Knox Gateway instance.</p><p>The various 
security artifacts are also generated and populated automatically by the Knox 
Gateway runtime when they are not found at startup. The assumptions made in 
those cases are appropriate for a test or development gateway instance and 
assume &lsquo;localhost&rsquo; for hostname specific activities. For 
production deployments, the CLI may aid in managing the security 
artifacts.</p><p>The <code>knoxcli.sh</code> script is located in the 
<code>{GATEWAY_HOME}/bin</code> directory.</p><h4><a id="Help">Help</a> <a 
href="#Help"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+[--help]`"><code>bin/knoxcli.sh [--help]</code></a> <a 
href="#`bin/knoxcli.sh+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>prints help for all 
commands</p><h4><a id="Knox+Version+Info">Knox Version Info</a> <a 
href="#Knox+Version+Info"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+version+[--help]`"><code>bin/knoxcli.sh version 
[--help]</code></a> <a href="#`bin/knoxcli.sh+version+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Displays Knox version 
information.</p><h4><a 
id="Master+secret+persistence">Master secret persistence</a> <a 
href="#Master+secret+persistence"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-master+[--force][--help]`"><code>bin/knoxcli.sh 
create-master [--force][--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-master+[--force][--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Creates and persists an encrypted 
master secret in a file within 
<code>{GATEWAY_HOME}/data/security/master</code>. </p><p>NOTE: This command 
fails when there is an existing master file in the expected location. You may 
force it to overwrite the master file with the <code>--force</code> switch. 
Doing so will require you to change the passwords protecting the gateway 
identity keystore and all credential stores.</p><h4><a 
id="Alias+creation">Alias creation</a> <a href="#Alias+creation"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--h
 elp]`"><code>bin/knoxcli.sh create-alias name [--cluster c] [--value v] 
[--generate] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"><img
 src="markbook-section-link.png"/></a></h5><p>Creates a password alias and 
stores it in a credential store within the 
<code>{GATEWAY_HOME}/data/security/keystores</code> dir. </p>
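<p>For example, the <code>gateway-identity-passphrase</code> alias discussed 
in the Credential Store section could be created as follows, while the second 
command generates a random value for a cluster-specific alias (the password 
value, alias name, and cluster name are illustrative placeholders):</p>
<pre><code>bin/knoxcli.sh create-alias gateway-identity-passphrase --value P@ssw0rd
bin/knoxcli.sh create-alias aliasname --cluster sandbox --generate
</code></pre>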
 <table>
   <thead>
     <tr>
-      <th>argument </th>
-      <th>description</th>
+      <th>Argument </th>
+      <th>Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
-      <td>name</td>
-      <td>name of the alias to create</td>
+      <td>name </td>
+      <td>Name of the alias to create</td>
     </tr>
     <tr>
-      <td>--cluster</td>
-      <td>name of Hadoop cluster for the cluster specific credential store 
otherwise assumes that it is for the gateway itself</td>
+      <td>--cluster </td>
+      <td>Name of Hadoop cluster for the cluster specific credential store; 
otherwise assumes that it is for the gateway itself</td>
     </tr>
     <tr>
-      <td>--value</td>
-      <td>parameter for specifying the actual password otherwise prompted. 
Escape complex passwords or surround with single quotes.<br/></td>
+      <td>--value </td>
+      <td>Parameter for specifying the actual password; otherwise the user is 
prompted. Escape complex passwords or surround them with single quotes</td>
     </tr>
     <tr>
-      <td>--generate</td>
-      <td>boolean flag to indicate whether the tool should just generate the 
value. This assumes that --value is not set - will result in error otherwise. 
User will not be prompted for the value when --generate is set.</td>
+      <td>--generate </td>
+      <td>Boolean flag to indicate whether the tool should just generate the 
value. This assumes that --value is not set, and will result in an error 
otherwise. The user will not be prompted for the value when --generate is 
set.</td>
     </tr>
   </tbody>
 </table><h4><a id="Alias+deletion">Alias deletion</a> <a 
href="#Alias+deletion"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+delete-alias+name+[--cluster+c]+[--help]`"><code>bin/knoxcli.sh
 delete-alias name [--cluster c] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+delete-alias+name+[--cluster+c]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Deletes a password and alias 
mapping from a credential store within 
<code>{GATEWAY_HOME}/data/security/keystores</code>.</p>
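<p>For example (the alias name and cluster name are illustrative 
placeholders):</p>
<pre><code>bin/knoxcli.sh delete-alias aliasname --cluster sandbox
</code></pre>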
 <table>
   <thead>
     <tr>
-      <th>argument </th>
-      <th>description</th>
+      <th>Argument </th>
+      <th>Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
       <td>name </td>
-      <td>name of the alias to delete</td>
+      <td>Name of the alias to delete</td>
     </tr>
     <tr>
       <td>--cluster </td>
-      <td>name of Hadoop cluster for the cluster specific credential store 
otherwise assumes &rsquo;__gateway&rsquo;</td>
+      <td>Name of Hadoop cluster for the cluster specific credential store; 
otherwise assumes &lsquo;__gateway&rsquo;</td>
     </tr>
   </tbody>
 </table><h4><a id="Alias+listing">Alias listing</a> <a 
href="#Alias+listing"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+list-alias+[--cluster+c]+[--help]`"><code>bin/knoxcli.sh 
list-alias [--cluster c] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+list-alias+[--cluster+c]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Lists the alias names for the 
credential store within 
<code>{GATEWAY_HOME}/data/security/keystores</code>.</p><p>NOTE: This command 
will list the aliases in lowercase, which is a result of the underlying 
credential store implementation. Lookup of credentials is a case-insensitive 
operation - so this is not an issue.</p>
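<p>For example, to list the gateway credential store aliases and then those of 
an illustrative cluster-specific credential store:</p>
<pre><code>bin/knoxcli.sh list-alias
bin/knoxcli.sh list-alias --cluster sandbox
</code></pre>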
 <table>
   <thead>
     <tr>
-      <th>argument </th>
-      <th>description</th>
+      <th>Argument </th>
+      <th>Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
       <td>--cluster </td>
-      <td>name of Hadoop cluster for the cluster specific credential store 
otherwise assumes &rsquo;__gateway&rsquo;</td>
+      <td>Name of Hadoop cluster for the cluster specific credential store; 
otherwise assumes &lsquo;__gateway&rsquo;</td>
     </tr>
   </tbody>
 </table><h4><a id="Self-signed+cert+creation">Self-signed cert creation</a> <a 
href="#Self-signed+cert+creation"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-cert+[--hostname+n]+[--help]`"><code>bin/knoxcli.sh 
create-cert [--hostname n] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-cert+[--hostname+n]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Creates and stores a self-signed 
certificate to represent the identity of the gateway instance. This is stored 
within the <code>{GATEWAY_HOME}/data/security/keystores/gateway.jks</code> 
keystore. </p>
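<p>For example (the hostname is an illustrative placeholder):</p>
<pre><code>bin/knoxcli.sh create-cert --hostname knox.example.com
</code></pre>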
 <table>
   <thead>
     <tr>
-      <th>argument </th>
-      <th>description</th>
+      <th>Argument </th>
+      <th>Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
-      <td>--hostname</td>
-      <td>name of the host to be used in the self-signed certificate. This 
allows multi-host deployments to specify the proper hostnames for hostname 
verification to succeed on the client side of the SSL connection. The default 
is &lsquo;localhost&rsquo;.</td>
+      <td>--hostname </td>
+      <td>Name of the host to be used in the self-signed certificate. This 
allows multi-host deployments to specify the proper hostnames for hostname 
verification to succeed on the client side of the SSL connection. The default 
is &lsquo;localhost&rsquo;.</td>
     </tr>
   </tbody>

[... 1435 lines stripped ...]
