Modified: knox/site/books/knox-0-7-0/user-guide.html
URL: 
http://svn.apache.org/viewvc/knox/site/books/knox-0-7-0/user-guide.html?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/site/books/knox-0-7-0/user-guide.html (original)
+++ knox/site/books/knox-0-7-0/user-guide.html Fri Jan 15 15:24:45 2016
@@ -33,7 +33,7 @@
     <li><a href="#Authentication">Authentication</a></li>
     <li><a href="#Advanced+LDAP+Authentication">Advanced LDAP 
Authentication</a></li>
     <li><a href="#LDAP+Authentication+Caching">LDAP Authentication 
Caching</a></li>
-    <li><a href="#LDAPGroupLookup">LDAPGroupLookup</a></li>
+    <li><a href="#LDAP+Group+Lookup">LDAP Group Lookup</a></li>
     <li><a href="#Identity+Assertion">Identity Assertion</a></li>
     <li><a href="#Authorization">Authorization</a></li>
     <li><a href="#Secure+Clusters">Secure Clusters</a></li>
@@ -41,6 +41,7 @@
     <li><a href="#Web+App+Security+Provider">Web App Security Provider</a></li>
     <li><a href="#Preauthenticated+SSO+Provider">Preauthenticated SSO 
Provider</a></li>
     <li><a href="#KnoxSSO+Setup+and+Configuration">KnoxSSO Setup and 
Configuration</a></li>
+    <li><a href="#Mutual+Authentication+with+SSL">Mutual Authentication with 
SSL</a></li>
     <li><a href="#Audit">Audit</a></li>
   </ul></li>
   <li><a href="#Client+Details">Client Details</a></li>
@@ -53,6 +54,7 @@
     <li><a href="#Hive">Hive</a></li>
     <li><a href="#Yarn">Yarn</a></li>
     <li><a href="#Storm">Storm</a></li>
+    <li><a href="#Default+Service+HA+support">Default Service HA 
support</a></li>
   </ul></li>
   <li><a href="#UI+Service+Details">UI Service Details</a></li>
   <li><a href="#Limitations">Limitations</a></li>
@@ -83,7 +85,7 @@
   <li>Do Hadoop with Knox</li>
 </ol><h3><a id="1+-+Requirements">1 - Requirements</a> <a 
href="#1+-+Requirements"><img src="markbook-section-link.png"/></a></h3><h4><a 
id="Java">Java</a> <a href="#Java"><img 
src="markbook-section-link.png"/></a></h4><p>Java 1.6 or later is required for 
the Knox Gateway runtime. Use the command below to check the version of Java 
installed on the system where Knox will be running.</p>
 <pre><code>java -version
-</code></pre><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img 
src="markbook-section-link.png"/></a></h4><p>Knox 0.7.0 supports Hadoop 2.x, 
the quick start instructions assume a Hadoop 2.x virtual machine based 
environment.</p><h3><a id="2+-+Download+Hadoop+2.x+VM">2 - Download Hadoop 2.x 
VM</a> <a href="#2+-+Download+Hadoop+2.x+VM"><img 
src="markbook-section-link.png"/></a></h3><p>The quick start provides a link to 
download Hadoop 2.0 based Hortonworks virtual machine <a 
href="http://hortonworks.com/products/hdp-2/#install";>Sandbox</a>. Please note 
Knox supports other Hadoop distributions and is configurable against a full 
blown Hadoop cluster. Configuring Knox for Hadoop 2.x version, or Hadoop 
deployed in EC2 or a custom Hadoop cluster is documented in advance deployment 
guide.</p><h3><a id="3+-+Download+Apache+Knox+Gateway">3 - Download Apache Knox 
Gateway</a> <a href="#3+-+Download+Apache+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h3><p>Download one of the dist
 ributions below from the <a 
href="http://www.apache.org/dyn/closer.cgi/knox";>Apache mirrors</a>.</p>
+</code></pre><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img 
src="markbook-section-link.png"/></a></h4><p>Knox 0.7.0 supports Hadoop 2.x; 
the quick start instructions assume a Hadoop 2.x virtual machine based 
environment.</p><h3><a id="2+-+Download+Hadoop+2.x+VM">2 - Download Hadoop 2.x 
VM</a> <a href="#2+-+Download+Hadoop+2.x+VM"><img 
src="markbook-section-link.png"/></a></h3><p>The quick start provides a link to 
download a Hadoop 2.0 based Hortonworks virtual machine <a 
href="http://hortonworks.com/products/hdp-2/#install">Sandbox</a>. Please note that 
Knox supports other Hadoop distributions and is configurable against a 
full-blown Hadoop cluster. Configuring Knox for a Hadoop 2.x cluster, for Hadoop 
deployed in EC2, or for a custom Hadoop cluster is documented in the advanced deployment 
guide.</p><h3><a id="3+-+Download+Apache+Knox+Gateway">3 - Download Apache Knox 
Gateway</a> <a href="#3+-+Download+Apache+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h3><p>Download one of the 
distributions below from the <a 
href="http://www.apache.org/dyn/closer.cgi/knox">Apache mirrors</a>.</p>
 <ul>
  <li>Source archive: <a 
href="http://www.apache.org/dyn/closer.cgi/knox/0.7.0/knox-0.7.0-src.zip">knox-0.7.0-src.zip</a>
 (<a href="http://www.apache.org/dist/knox/0.7.0/knox-0.7.0-src.zip.asc">PGP 
signature</a>, <a 
href="http://www.apache.org/dist/knox/0.7.0/knox-0.7.0-src.zip.sha">SHA1 
digest</a>, <a 
href="http://www.apache.org/dist/knox/0.7.0/knox-0.7.0-src.zip.md5">MD5 
digest</a>)</li>
  <li>Binary archive: <a 
href="http://www.apache.org/dyn/closer.cgi/knox/0.7.0/knox-0.7.0.zip">knox-0.7.0.zip</a>
 (<a href="http://www.apache.org/dist/knox/0.7.0/knox-0.7.0.zip.asc">PGP 
signature</a>, <a 
href="http://www.apache.org/dist/knox/0.7.0/knox-0.7.0.zip.sha">SHA1 
digest</a>, <a 
href="http://www.apache.org/dist/knox/0.7.0/knox-0.7.0.zip.md5">MD5 
digest</a>)</li>
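The "Java 1.6 or later" requirement above lends itself to a quick scripted check. A minimal sketch, assuming the pre-Java-9 banner format (`java version "1.x.y_z"`); the function names and the stub path used below are illustrative, not part of Knox:

```shell
# Sketch: verify the "Java 1.6 or later" requirement before starting Knox.
# Assumes the classic banner format: java version "1.x.y_z" (vendor banners vary).
get_java_minor() {
  # java -version prints its banner to stderr; extract the quoted version,
  # then take the second dotted component (the "6" in "1.6.0_45").
  "$1" -version 2>&1 | awk -F '"' '/version/ {print $2}' | cut -d. -f2
}

check_java() {
  minor=$(get_java_minor "$1")
  if [ -n "$minor" ] && [ "$minor" -ge 6 ]; then
    echo "Java 1.$minor detected: OK for the Knox Gateway runtime"
  else
    echo "Java 1.6 or later is required" >&2
    return 1
  fi
}
```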
@@ -140,7 +142,7 @@ curl -i -k -u guest:guest-password -T LI
 
 curl -i -k -u guest:guest-password -X GET \
     &#39;{Value of Location header from command response above}&#39;
-</code></pre><h2><a id="Apache+Knox+Details">Apache Knox Details</a> <a 
href="#Apache+Knox+Details"><img 
src="markbook-section-link.png"/></a></h2><p>This section provides everything 
you need to know to get the Knox gateway up and running against a Hadoop 
cluster.</p><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img 
src="markbook-section-link.png"/></a></h4><p>An existing Hadoop 2.x cluster is 
required for Knox 0.7.0 to sit in front of and protect. It is possible to use a 
Hadoop cluster deployed on EC2 but this will require additional configuration 
not covered here. It is also possible to protect access to a services of a 
Hadoop cluster that is secured with kerberos. This too requires additional 
configuration that is described in other sections of this guide. See <a 
href="#Supported+Services">Supported Services</a> for details on what is 
supported for this release.</p><p>The Hadoop cluster should be ensured to have 
at least WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, 
 deployed and running. HBase/Stargate and Hive can also be accessed via the 
Knox Gateway given the proper versions and configuration.</p><p>The 
instructions that follow assume a few things:</p>
+</code></pre><h2><a id="Apache+Knox+Details">Apache Knox Details</a> <a 
href="#Apache+Knox+Details"><img 
src="markbook-section-link.png"/></a></h2><p>This section provides everything 
you need to know to get the Knox gateway up and running against a Hadoop 
cluster.</p><h4><a id="Hadoop">Hadoop</a> <a href="#Hadoop"><img 
src="markbook-section-link.png"/></a></h4><p>An existing Hadoop 2.x cluster is 
required for Knox 0.7.0 to sit in front of and protect. It is possible to use a 
Hadoop cluster deployed on EC2 but this will require additional configuration 
not covered here. It is also possible to protect access to the services of a 
Hadoop cluster that is secured with Kerberos. This too requires additional 
configuration that is described in other sections of this guide. See <a 
href="#Supported+Services">Supported Services</a> for details on what is 
supported for this release.</p><p>Ensure that the Hadoop cluster has at least 
WebHDFS, WebHCat (i.e. Templeton) and Oozie configured, 
 deployed and running. HBase/Stargate and Hive can also be accessed via the 
Knox Gateway given the proper versions and configuration.</p><p>The 
instructions that follow assume a few things:</p>
 <ol>
   <li>The gateway is <em>not</em> collocated with the Hadoop clusters 
themselves.</li>
   <li>The host names and IP addresses of the cluster services are accessible 
by the gateway wherever it happens to be running.</li>
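The curl samples earlier in this guide all target URLs of the same shape. As a rough sketch of how the externally exposed WebHDFS URL is assembled, assuming the sandbox defaults for host, port, gateway path and cluster name:

```shell
# Sketch: compose the gateway-side WebHDFS URL that the curl samples target.
# All four values are assumptions matching the sandbox defaults described here.
GATEWAY_HOST=localhost
GATEWAY_PORT=8443
GATEWAY_PATH=gateway
CLUSTER_NAME=sandbox

webhdfs_url() {
  # $1 = HDFS path, $2 = WebHDFS operation (e.g. LISTSTATUS, CREATE, OPEN)
  echo "https://${GATEWAY_HOST}:${GATEWAY_PORT}/${GATEWAY_PATH}/${CLUSTER_NAME}/webhdfs/v1$1?op=$2"
}

# Equivalent of the earlier sample, not executed here:
#   curl -i -k -u guest:guest-password -X GET "$(webhdfs_url /tmp LISTSTATUS)"
```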
@@ -264,7 +266,7 @@ curl -i -k -u guest:guest-password -X GE
       <td><img src="check.png"  alt="y"/></td>
     </tr>
     <tr>
-      <td>HBase/Stargate </td>
+      <td>HBase </td>
       <td>0.98.0 </td>
       <td><img src="check.png"  alt="y"/> </td>
       <td><img src="check.png"  alt="y"/> </td>
@@ -314,22 +316,22 @@ curl -i -k -u guest:guest-password -X GE
   <li>The Knox Demo LDAP server is running on localhost and port 33389 which 
is the default port for the ApacheDS LDAP server.</li>
   <li>That the LDAP directory in use has a set of demo users provisioned with 
the convention of username and &ldquo;username-password&rdquo; as the password. 
Most of the samples have some variation of this pattern with 
&ldquo;guest&rdquo; and &ldquo;guest-password&rdquo;.</li>
  <li>That the Knox Gateway instance is running on the same machine from which you 
will be running the samples - therefore &ldquo;localhost&rdquo; and that 
the default port of &ldquo;8443&rdquo; is being used.</li>
-  <li>Finally, that there is a properly provisioned sandbox.xml topology in 
the {GATEWAY_HOME}/conf/topologies directory that is configured to point to the 
actual host and ports of running service components.</li>
+  <li>Finally, that there is a properly provisioned sandbox.xml topology in 
the <code>{GATEWAY_HOME}/conf/topologies</code> directory that is configured to 
point to the actual host and ports of running service components.</li>
 </ul><h4><a id="Steps+for+Demo+Single+Node+Clusters">Steps for Demo Single 
Node Clusters</a> <a href="#Steps+for+Demo+Single+Node+Clusters"><img 
src="markbook-section-link.png"/></a></h4><p>There should be little if 
anything to do in a demo environment that has been provisioned for illustrating the 
use of Apache Knox.</p><p>However, the following items are worth checking 
before you start:</p>
 <ol>
   <li>The sandbox.xml topology is configured properly for the deployed 
services</li>
-  <li>That there is an LDAP server running with guest/guest-password user 
available in the directory</li>
+  <li>That there is an LDAP server running with guest/guest-password user 
available in the directory</li>
 </ol><h4><a id="Steps+for+Ambari+Deployed+Knox+Gateway">Steps for Ambari 
Deployed Knox Gateway</a> <a 
href="#Steps+for+Ambari+Deployed+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h4><p>Apache Knox instances that are 
under the management of Ambari are generally assumed not to be demo instances. 
These instances are in place to support development, testing or production 
Hadoop clusters.</p><p>The Knox samples can, however, be made to work with Ambari 
managed Knox instances with a few steps:</p>
 <ol>
   <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
   <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
   <li>The default.xml topology file can be copied to sandbox.xml in order to 
satisfy the topology name assumption in the samples.</li>
   <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p><code>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</code></p></li>
-</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway">Steps for a 
Manually Installed Knox Gateway</a> <a 
href="#Steps+for+a+Manually+Installed+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h4><p>For manually installed Knox 
instances, there is really no way for the installer to know how to configure 
the topology file for you.</p><p>Essentially, these steps are identical to the 
Ambari deployed instance except that #3 should be replaced with the 
configuration of the ootb sandbox.xml to point the configuration at the proper 
hosts and ports.</p>
+</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway">Steps for a 
Manually Installed Knox Gateway</a> <a 
href="#Steps+for+a+Manually+Installed+Knox+Gateway"><img 
src="markbook-section-link.png"/></a></h4><p>For manually installed Knox 
instances, there is really no way for the installer to know how to configure 
the topology file for you.</p><p>Essentially, these steps are identical to the 
Ambari deployed instance except that #3 should be replaced with editing the 
out-of-the-box sandbox.xml to point at 
the proper hosts and ports.</p>
 <ol>
   <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
   <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
-  <li>Change the hosts and ports within the 
{GATEWAY_HOME}/conf/topologies/sandbox.xml to reflect your actual cluster 
service locations.</li>
+  <li>Change the hosts and ports within the 
<code>{GATEWAY_HOME}/conf/topologies/sandbox.xml</code> to reflect your actual 
cluster service locations.</li>
   <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p><code>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</code></p></li>
 </ol><h2><a id="Gateway+Details">Gateway Details</a> <a 
href="#Gateway+Details"><img src="markbook-section-link.png"/></a></h2><p>This 
section describes the details of the Knox Gateway itself, including:</p>
 <ul>
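The manual-install checklist above can be sketched as a shell fragment; both paths below are assumptions taken from the surrounding text (there is no fixed install root), so adjust them for your environment:

```shell
# Sketch of the manual-install checklist above; both paths are assumptions,
# adjust GATEWAY_HOME and JAVA for your environment.
GATEWAY_HOME=${GATEWAY_HOME:-/opt/knox}            # hypothetical install root
JAVA=${JAVA:-/usr/jdk64/jdk1.7.0_67/bin/java}      # JRE path from the sample above

sample_command() {
  # After editing conf/topologies/sandbox.xml to point at the real cluster
  # hosts and ports, a sample is run with an actual Java JRE like so:
  echo "$JAVA -jar $GATEWAY_HOME/bin/shell.jar $GATEWAY_HOME/samples/ExampleWebHdfsLs.groovy"
}
```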
@@ -339,7 +341,7 @@ curl -i -k -u guest:guest-password -X GE
 </ul><h3><a id="URL+Mapping">URL Mapping</a> <a href="#URL+Mapping"><img 
src="markbook-section-link.png"/></a></h3><p>The gateway functions much like a 
reverse proxy. As such, it maintains a mapping of URLs that are exposed 
externally by the gateway to URLs that are provided by the Hadoop 
cluster.</p><h4><a id="Default+Topology+URLs">Default Topology URLs</a> <a 
href="#Default+Topology+URLs"><img 
src="markbook-section-link.png"/></a></h4><p>In order to provide compatibility 
with the Hadoop java client and existing CLI tools, the Knox Gateway has 
provided a feature called the Default Topology. This refers to a topology 
deployment that will be able to route URLs without the additional context that 
the gateway uses for differentiating one Hadoop cluster from another. This 
allows the URLs to match those used by existing clients that may access 
webhdfs through the Hadoop file system abstraction.</p><p>When a topology file 
is deployed with a file name that matches the configured 
 default topology name, a specialized mapping for URLs is installed for that 
particular topology. This allows the URLs that are expected by the existing 
Hadoop CLIs for webhdfs to be used in interacting with the specific Hadoop 
cluster that is represented by the default topology file.</p><p>The 
configuration for the default topology name is found in gateway-site.xml as a 
property called: &ldquo;default.app.topology.name&rdquo;.</p><p>The default 
value for this property is &ldquo;sandbox&rdquo;.</p><p>Therefore, when 
deploying the sandbox.xml topology, both of the following example URLs work for 
the same underlying Hadoop cluster:</p>
 <pre><code>https://{gateway-host}:{gateway-port}/webhdfs
 https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs
-</code></pre><p>These default topology URLs exist for all of the services in 
the topology.</p><h4><a id="Fully+Qualified+URLs">Fully Qualified URLs</a> <a 
href="#Fully+Qualified+URLs"><img 
src="markbook-section-link.png"/></a></h4><p>Examples of mappings for the 
WebHDFS, WebHCat, Oozie and Stargate/HBase are shown below. These mapping are 
generated from the combination of the gateway configuration file (i.e. 
<code>{GATEWAY_HOME}/conf/gateway-site.xml</code>) and the cluster topology 
descriptors (e.g. 
<code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>). The port 
numbers show for the Cluster URLs represent the default ports for these 
services. The actual port number may be different for a given cluster.</p>
+</code></pre><p>These default topology URLs exist for all of the services in 
the topology.</p><h4><a id="Fully+Qualified+URLs">Fully Qualified URLs</a> <a 
href="#Fully+Qualified+URLs"><img 
src="markbook-section-link.png"/></a></h4><p>Examples of mappings for the 
WebHDFS, WebHCat, Oozie and HBase are shown below. These mappings are generated 
from the combination of the gateway configuration file (i.e. 
<code>{GATEWAY_HOME}/conf/gateway-site.xml</code>) and the cluster topology 
descriptors (e.g. 
<code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>). The port 
numbers shown for the Cluster URLs represent the default ports for these 
services. The actual port number may be different for a given cluster.</p>
 <ul>
   <li>WebHDFS
   <ul>
@@ -356,22 +358,22 @@ https://{gateway-host}:{gateway-port}/{g
     <li>Gateway: 
<code>https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/oozie</code></li>
    <li>Cluster: <code>http://{oozie-host}:11000/oozie</code></li>
   </ul></li>
-  <li>Stargate (HBase)
+  <li>HBase
   <ul>
     <li>Gateway: 
<code>https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/hbase</code></li>
-    <li>Cluster: <code>http://{hbase-host}:60080</code></li>
+    <li>Cluster: <code>http://{hbase-host}:8080</code></li>
   </ul></li>
   <li>Hive JDBC
   <ul>
-    <li>Gateway: 
jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password}?hive.server2.transport.mode=http;hive.server2.thrift.http.path={gateway-path}/{cluster-name}/hive</li>
+    <li>Gateway: 
<code>jdbc:hive2://{gateway-host}:{gateway-port}/;ssl=true;sslTrustStore={gateway-trust-store-path};trustStorePassword={gateway-trust-store-password};transportMode=http;httpPath={gateway-path}/{cluster-name}/hive</code></li>
     <li>Cluster: <code>http://{hive-host}:10001/cliservice</code></li>
   </ul></li>
-</ul><p>The values for <code>{gateway-host}</code>, 
<code>{gateway-port}</code>, <code>{gateway-path}</code> are provided via the 
gateway configuration file (i.e. 
<code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for 
<code>{cluster-name}</code> is derived from the file name of the cluster 
topology descriptor (e.g. 
<code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The value 
for <code>{webhdfs-host}</code>, <code>{webhcat-host}</code>, 
<code>{oozie-host}</code>, <code>{hbase-host}</code> and 
<code>{hive-host}</code> are provided via the cluster topology descriptor (e.g. 
<code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>).</p><p>Note: 
The ports 50070, 50111, 11000, 60080 (default 8080) and 10001 are the defaults 
for WebHDFS, WebHCat, Oozie, Stargate/HBase and Hive respectively. Their values 
can also be provided via the cluster topology descriptor if your Hadoop cluster 
uses different ports.</p><h3><a id="Configuration">Configuration</a> <a
  href="#Configuration"><img 
src="markbook-section-link.png"/></a></h3><p>Configuration for Apache Knox 
includes:</p>
+</ul><p>The values for <code>{gateway-host}</code>, 
<code>{gateway-port}</code>, and <code>{gateway-path}</code> are provided via the 
gateway configuration file (i.e. 
<code>{GATEWAY_HOME}/conf/gateway-site.xml</code>).</p><p>The value for 
<code>{cluster-name}</code> is derived from the file name of the cluster 
topology descriptor (e.g. 
<code>{GATEWAY_HOME}/deployments/{cluster-name}.xml</code>).</p><p>The values 
for <code>{webhdfs-host}</code>, <code>{webhcat-host}</code>, 
<code>{oozie-host}</code>, <code>{hbase-host}</code> and 
<code>{hive-host}</code> are provided via the cluster topology descriptor (e.g. 
<code>{GATEWAY_HOME}/conf/topologies/{cluster-name}.xml</code>).</p><p>Note: 
The ports 50070, 50111, 11000, 8080 and 10001 are the defaults for WebHDFS, 
WebHCat, Oozie, HBase and Hive respectively. Their values can also be provided 
via the cluster topology descriptor if your Hadoop cluster uses different 
ports.</p><p>Note: The HBase REST API uses port 8080 by default. This often 
clashes with other running services. In the Hortonworks Sandbox, Ambari might be 
running on this port, so you might have to change it to a different port (e.g. 
60080).</p><h3><a id="Configuration">Configuration</a> <a 
href="#Configuration"><img 
src="markbook-section-link.png"/></a></h3><p>Configuration for Apache Knox 
includes:</p>
 <ol>
   <li><a href="#Related+Cluster+Configuration">Related Cluster 
Configuration</a> that must be done within the Hadoop cluster to allow Knox to 
communicate with various services</li>
   <li><a href="#Gateway+Server+Configuration">Gateway Server Configuration</a> 
- the configurable elements of the server itself, which apply to 
behavior that spans all topologies or managed Hadoop clusters</li>
   <li><a href="#Topology+Descriptors">Topology Descriptors</a> which are the 
descriptors for controlling access to Hadoop clusters in various ways</li>
-</ol><h3><a id="Related+Cluster+Configuration">Related Cluster 
Configuration</a> <a href="#Related+Cluster+Configuration"><img 
src="markbook-section-link.png"/></a></h3><p>The following configuration 
changes must be made to your cluster to allow Apache Knox to dispatch requests 
to the various service components on behalf of end users.</p><h4><a 
id="Grant+Proxy+privileges+for+Knox+user+in+`core-site.xml`+on+Hadoop+master+nodes">Grant
 Proxy privileges for Knox user in <code>core-site.xml</code> on Hadoop master 
nodes</a> <a 
href="#Grant+Proxy+privileges+for+Knox+user+in+`core-site.xml`+on+Hadoop+master+nodes"><img
 src="markbook-section-link.png"/></a></h4><p>Update <code>core-site.xml</code> 
and add the following lines towards the end of the file.</p><p>Replace 
FQDN_OF_KNOX_HOST with the fully qualified domain name of the host running the 
gateway. You can usually find this by running <code>hostname -f</code> on that 
host.</p><p>You could use * for local developer testing if Knox host 
 does not have static IP.</p>
+</ol><h3><a id="Related+Cluster+Configuration">Related Cluster 
Configuration</a> <a href="#Related+Cluster+Configuration"><img 
src="markbook-section-link.png"/></a></h3><p>The following configuration 
changes must be made to your cluster to allow Apache Knox to dispatch requests 
to the various service components on behalf of end users.</p><h4><a 
id="Grant+Proxy+privileges+for+Knox+user+in+`core-site.xml`+on+Hadoop+master+nodes">Grant
 Proxy privileges for Knox user in <code>core-site.xml</code> on Hadoop master 
nodes</a> <a 
href="#Grant+Proxy+privileges+for+Knox+user+in+`core-site.xml`+on+Hadoop+master+nodes"><img
 src="markbook-section-link.png"/></a></h4><p>Update <code>core-site.xml</code> 
and add the following lines towards the end of the file.</p><p>Replace 
<code>FQDN_OF_KNOX_HOST</code> with the fully qualified domain name of the host 
running the Knox gateway. You can usually find this by running <code>hostname 
-f</code> on that host.</p><p>You can use <code>*</code> for local 
 developer testing if the Knox host does not have a static IP.</p>
 <pre><code>&lt;property&gt;
     &lt;name&gt;hadoop.proxyuser.knox.groups&lt;/name&gt;
     &lt;value&gt;users&lt;/value&gt;
@@ -380,7 +382,7 @@ https://{gateway-host}:{gateway-port}/{g
     &lt;name&gt;hadoop.proxyuser.knox.hosts&lt;/name&gt;
     &lt;value&gt;FQDN_OF_KNOX_HOST&lt;/value&gt;
 &lt;/property&gt;
-</code></pre><h4><a 
id="Grant+proxy+privilege+for+Knox+in+`webhcat-site.xml`+on+Hadoop+master+nodes">Grant
 proxy privilege for Knox in <code>webhcat-site.xml</code> on Hadoop master 
nodes</a> <a 
href="#Grant+proxy+privilege+for+Knox+in+`webhcat-site.xml`+on+Hadoop+master+nodes"><img
 src="markbook-section-link.png"/></a></h4><p>Update 
<code>webhcat-site.xml</code> and add the following lines towards the end of 
the file.</p><p>Replace FQDN_OF_KNOX_HOST with right value in your cluster. You 
could use * for local developer testing if Knox host does not have static 
IP.</p>
+</code></pre><h4><a 
id="Grant+proxy+privilege+for+Knox+in+`webhcat-site.xml`+on+Hadoop+master+nodes">Grant
 proxy privilege for Knox in <code>webhcat-site.xml</code> on Hadoop master 
nodes</a> <a 
href="#Grant+proxy+privilege+for+Knox+in+`webhcat-site.xml`+on+Hadoop+master+nodes"><img
 src="markbook-section-link.png"/></a></h4><p>Update 
<code>webhcat-site.xml</code> and add the following lines towards the end of 
the file.</p><p>Replace <code>FQDN_OF_KNOX_HOST</code> with the fully qualified 
domain name of the host running the Knox gateway. You can use <code>*</code> 
for local developer testing if the Knox host does not have a static IP.</p>
 <pre><code>&lt;property&gt;
     &lt;name&gt;webhcat.proxyuser.knox.groups&lt;/name&gt;
     &lt;value&gt;users&lt;/value&gt;
@@ -389,19 +391,19 @@ https://{gateway-host}:{gateway-port}/{g
     &lt;name&gt;webhcat.proxyuser.knox.hosts&lt;/name&gt;
     &lt;value&gt;FQDN_OF_KNOX_HOST&lt;/value&gt;
 &lt;/property&gt;
-</code></pre><h4><a 
id="Grant+proxy+privilege+for+Knox+in+`oozie-site.xml`+on+Oozie+host">Grant 
proxy privilege for Knox in <code>oozie-site.xml</code> on Oozie host</a> <a 
href="#Grant+proxy+privilege+for+Knox+in+`oozie-site.xml`+on+Oozie+host"><img 
src="markbook-section-link.png"/></a></h4><p>Update <code>oozie-site.xml</code> 
and add the following lines towards the end of the file.</p><p>Replace 
FQDN_OF_KNOX_HOST with right value in your cluster. You could use * for local 
developer testing if Knox host does not have static IP.</p>
+</code></pre><h4><a 
id="Grant+proxy+privilege+for+Knox+in+`oozie-site.xml`+on+Oozie+host">Grant 
proxy privilege for Knox in <code>oozie-site.xml</code> on Oozie host</a> <a 
href="#Grant+proxy+privilege+for+Knox+in+`oozie-site.xml`+on+Oozie+host"><img 
src="markbook-section-link.png"/></a></h4><p>Update <code>oozie-site.xml</code> 
and add the following lines towards the end of the file.</p><p>Replace 
<code>FQDN_OF_KNOX_HOST</code> with the fully qualified domain name of the host 
running the Knox gateway. You can use <code>*</code> for local developer 
testing if the Knox host does not have a static IP.</p>
 <pre><code>&lt;property&gt;
-   
&lt;name&gt;oozie.service.ProxyUserService.proxyuser.knox.groups&lt;/name&gt;
-   &lt;value&gt;users&lt;/value&gt;
+    
&lt;name&gt;oozie.service.ProxyUserService.proxyuser.knox.groups&lt;/name&gt;
+    &lt;value&gt;users&lt;/value&gt;
 &lt;/property&gt;
 &lt;property&gt;
-   &lt;name&gt;oozie.service.ProxyUserService.proxyuser.knox.hosts&lt;/name&gt;
-   &lt;value&gt;FQDN_OF_KNOX_HOST&lt;/value&gt;
+    
&lt;name&gt;oozie.service.ProxyUserService.proxyuser.knox.hosts&lt;/name&gt;
+    &lt;value&gt;FQDN_OF_KNOX_HOST&lt;/value&gt;
 &lt;/property&gt;
-</code></pre><h4><a 
id="Enable+http+transport+mode+and+use+substitution+in+Hive+Server2">Enable 
http transport mode and use substitution in Hive Server2</a> <a 
href="#Enable+http+transport+mode+and+use+substitution+in+Hive+Server2"><img 
src="markbook-section-link.png"/></a></h4><p>Update <code>hive-site.xml</code> 
and set the following properties on Hive Server2 hosts. Some of the properties 
may already be in the hive-site.xml. Ensure that the values match the ones 
below.</p>
+</code></pre><h4><a 
id="Enable+http+transport+mode+and+use+substitution+in+HiveServer2">Enable http 
transport mode and use substitution in HiveServer2</a> <a 
href="#Enable+http+transport+mode+and+use+substitution+in+HiveServer2"><img 
src="markbook-section-link.png"/></a></h4><p>Update <code>hive-site.xml</code> 
and set the following properties on HiveServer2 hosts. Some of the properties 
may already be in the hive-site.xml. Ensure that the values match the ones 
below.</p>
 <pre><code>&lt;property&gt;
-  &lt;name&gt;hive.server2.allow.user.substitution&lt;/name&gt;
-  &lt;value&gt;true&lt;/value&gt;
+    &lt;name&gt;hive.server2.allow.user.substitution&lt;/name&gt;
+    &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
 
 &lt;property&gt;
@@ -618,7 +620,10 @@ ip-10-39-107-209.ec2.internal
             &lt;role&gt;hostmap&lt;/role&gt;
             &lt;name&gt;static&lt;/name&gt;
             &lt;enabled&gt;true&lt;/enabled&gt;
-            
&lt;param&gt;&lt;name&gt;localhost&lt;/name&gt;&lt;value&gt;sandbox,sandbox.hortonworks.com&lt;/value&gt;&lt;/param&gt;
+            &lt;param&gt;
+                &lt;name&gt;localhost&lt;/name&gt;
+                &lt;value&gt;sandbox,sandbox.hortonworks.com&lt;/value&gt;
+            &lt;/param&gt;
         &lt;/provider&gt;
         ...
     &lt;/gateway&gt;
@@ -626,7 +631,7 @@ ip-10-39-107-209.ec2.internal
 &lt;/topology&gt;
 </code></pre><h5><a id="Hostmap+Provider+Configuration">Hostmap Provider 
Configuration</a> <a href="#Hostmap+Provider+Configuration"><img 
src="markbook-section-link.png"/></a></h5><p>Details about each provider 
configuration element are enumerated below.</p>
 <dl><dt>topology/gateway/provider/role</dt><dd>The role for a Hostmap provider 
must always be 
<code>hostmap</code>.</dd><dt>topology/gateway/provider/name</dt><dd>The 
Hostmap provider supplied out-of-the-box is selected via the name 
<code>static</code>.</dd><dt>topology/gateway/provider/enabled</dt><dd>Host 
mapping can be enabled or disabled by providing <code>true</code> or 
<code>false</code>.</dd><dt>topology/gateway/provider/param</dt><dd>Host 
mapping is configured by providing parameters for each external to internal 
mapping.</dd><dt>topology/gateway/provider/param/name</dt><dd>The parameter 
names represent the external host names associated with the internal host names 
provided by the value element. This can be a comma-separated list of host names 
that all represent the same physical host. When mapping from internal to 
external host names, the first external host name in the list is 
used.</dd><dt>topology/gateway/provider/param/value</dt><dd>The parameter 
values represent the 
 internal host names associated with the external host names provided by the name 
element. This can be a comma-separated list of host names that all represent 
the same physical host. When mapping from external to internal host names, the 
first internal host name in the list is used.</dd>
-</dl><h4><a id="Logging">Logging</a> <a href="#Logging"><img 
src="markbook-section-link.png"/></a></h4><p>If necessary you can enable 
additional logging by editing the <code>log4j.properties</code> file in the 
<code>conf</code> directory. Changing the rootLogger value from 
<code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug 
logging. A number of useful, more fine loggers are also provided in the 
file.</p><h4><a id="Java+VM+Options">Java VM Options</a> <a 
href="#Java+VM+Options"><img src="markbook-section-link.png"/></a></h4><p>TODO 
- Java VM options doc.</p><h4><a id="Persisting+the+Master+Secret">Persisting 
the Master Secret</a> <a href="#Persisting+the+Master+Secret"><img 
src="markbook-section-link.png"/></a></h4><p>The master secret is required to 
start the server. This secret is used to access secured artifacts by the 
gateway instance. Keystore, trust stores and credential stores are all 
protected with the master secret.</p><p>You may persist the master
  secret by supplying the <em>-persist-master</em> switch at startup. This will 
result in a warning indicating that persisting the secret is less secure than 
providing it at startup. We do make some provisions in order to protect the 
persisted password.</p><p>It is encrypted with AES 128 bit encryption and where 
possible the file permissions are set to only be accessible by the user that 
the gateway is running as.</p><p>After persisting the secret, ensure that the 
file at config/security/master has the appropriate permissions set for your 
environment. This is probably the most important layer of defense for master 
secret. Do not assume that the encryption if sufficient protection.</p><p>A 
specific user should be created to run the gateway this user will be the only 
user with permissions for the persisted master file.</p><p>See the Knox CLI 
section for descriptions of the command line utilities related to the master 
secret.</p><h4><a id="Management+of+Security+Artifacts">Management of
  Security Artifacts</a> <a href="#Management+of+Security+Artifacts"><img 
src="markbook-section-link.png"/></a></h4><p>There are a number of artifacts 
that are used by the gateway in ensuring the security of wire level 
communications, access to protected resources and the encryption of sensitive 
data. These artifacts can be managed from outside of the gateway instances or 
generated and populated by the gateway instance itself.</p><p>The following is 
a description of how this is coordinated with both standalone (development, 
demo, etc) gateway instances and instances as part of a cluster of gateways in 
mind.</p><p>Upon start of the gateway server we:</p>
+</dl><h4><a id="Logging">Logging</a> <a href="#Logging"><img 
src="markbook-section-link.png"/></a></h4><p>If necessary you can enable 
additional logging by editing the <code>log4j.properties</code> file in the 
<code>conf</code> directory. Changing the <code>rootLogger</code> value from 
<code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug 
logging. A number of useful, finer-grained loggers are also provided in the 
file.</p><h4><a id="Java+VM+Options">Java VM Options</a> <a 
href="#Java+VM+Options"><img src="markbook-section-link.png"/></a></h4><p>TODO 
- Java VM options doc.</p><h4><a id="Persisting+the+Master+Secret">Persisting 
the Master Secret</a> <a href="#Persisting+the+Master+Secret"><img 
src="markbook-section-link.png"/></a></h4><p>The master secret is required to 
start the server. This secret is used to access secured artifacts by the 
gateway instance. Keystore, trust stores and credential stores are all 
protected with the master secret.</p><p>You may persist the master secret by supplying the <em>-persist-master</em> switch at startup. This will result in a warning indicating that persisting the secret is less secure than providing it at startup. We do make some provisions in order to protect the persisted password.</p><p>It is encrypted with AES 128 bit encryption and where possible the file permissions are set to be accessible only by the user that the gateway is running as.</p><p>After persisting the secret, ensure that the file at config/security/master has the appropriate permissions set for your environment. This is probably the most important layer of defense for the master secret. Do not assume that the encryption is sufficient protection.</p><p>A specific user should be created to run the gateway; this user will be the only user with permissions for the persisted master file.</p><p>See the Knox CLI section for descriptions of the command line 
utilities related to the master secret.</p><h4><a 
id="Management+of+Security+Artifacts">
 Management of Security Artifacts</a> <a 
href="#Management+of+Security+Artifacts"><img 
src="markbook-section-link.png"/></a></h4><p>There are a number of artifacts 
that are used by the gateway in ensuring the security of wire level 
communications, access to protected resources and the encryption of sensitive 
data. These artifacts can be managed from outside of the gateway instances or 
generated and populated by the gateway instance itself.</p><p>The following is 
a description of how this is coordinated with both standalone (development, 
demo, etc) gateway instances and instances as part of a cluster of gateways in 
mind.</p><p>Upon start of the gateway server we:</p>
 <ol>
   <li>Look for an identity store at 
<code>data/security/keystores/gateway.jks</code>.  The identity store contains 
the certificate and private key used to represent the identity of the server 
for SSL connections and signature creation.
   <ul>
@@ -653,12 +658,16 @@ ip-10-39-107-209.ec2.internal
 </ol><p>See the Knox CLI section for descriptions of the command line 
utilities related to the security artifact management.</p><h4><a 
id="Keystores">Keystores</a> <a href="#Keystores"><img 
src="markbook-section-link.png"/></a></h4><p>In order to provide your own 
certificate for use by the gateway, you will need to either import an existing 
key pair into a Java keystore or generate a self-signed cert using the Java 
keytool.</p><h5><a id="Importing+a+key+pair+into+a+Java+keystore">Importing a 
key pair into a Java keystore</a> <a 
href="#Importing+a+key+pair+into+a+Java+keystore"><img 
src="markbook-section-link.png"/></a></h5><p>One way to accomplish this is to 
start with a PKCS12 store for your key pair and then convert it to a Java 
keystore or JKS.</p><p>The following example uses openssl to create a PKCS12 
encoded store from your provided certificate and private key that are in PEM 
format.</p>
 <pre><code>openssl pkcs12 -export -in cert.pem -inkey key.pem &gt; server.p12
 </code></pre><p>The next example converts the PKCS12 store into a Java 
keystore (JKS). It should prompt you for the keystore and key passwords for the 
destination keystore. You must use the master-secret for the keystore password 
and keep track of the password that you use for the key passphrase.</p>
-<pre><code>keytool -importkeystore -srckeystore {server.p12} -destkeystore 
gateway.jks -srcstoretype pkcs12
+<pre><code>keytool -importkeystore -srckeystore server.p12 -destkeystore 
gateway.jks -srcstoretype pkcs12
 </code></pre><p>While using this approach there are a couple of important things to be aware of:</p>
 <ol>
-  <li><p>the alias MUST be &ldquo;gateway-identity&rdquo;. You may need to 
change it using keytool after the import of the PKCS12 store. You can use 
keytool to do this - for example:</p><p>keytool -changealias -alias 
&ldquo;1&rdquo; -destalias &ldquo;gateway-identity&rdquo; -keystore gateway.jks 
-storepass {knoxpw}</p></li>
+  <li><p>the alias MUST be &ldquo;gateway-identity&rdquo;. You may need to change it after the import of the PKCS12 store. You can use keytool to do this - for example:</p>
+  <pre><code>keytool -changealias -alias &quot;1&quot; -destalias 
&quot;gateway-identity&quot; -keystore gateway.jks -storepass {knoxpw}
+</code></pre></li>
   <li><p>the name of the expected identity keystore for the gateway MUST be 
gateway.jks</p></li>
-  <li><p>the passwords for the keystore and the imported key may both be set 
to the master secret for the gateway install. You can change the key passphrase 
after import using keytool as well. You may need to do this in order to 
provision the password in the credential store as described later in this 
section. For example:</p><p>keytool -keypasswd -alias gateway-identity 
-keystore gateway.jks</p></li>
+  <li><p>the passwords for the keystore and the imported key may both be set 
to the master secret for the gateway install. You can change the key passphrase 
after import using keytool as well. You may need to do this in order to 
provision the password in the credential store as described later in this 
section. For example:</p>
+  <pre><code>keytool -keypasswd -alias gateway-identity -keystore gateway.jks
+</code></pre></li>
 </ol><p>NOTE: The password for the keystore as well as that of the imported 
key may be the master secret for the gateway instance or you may set the 
gateway-identity-passphrase alias using the Knox CLI to the actual key 
passphrase. See the Knox CLI section for details.</p><p>The following will 
allow you to provision the passphrase for the private key that you set during 
keystore creation above - it will prompt you for the actual passphrase.</p>
 <pre><code>bin/knoxcli.sh create-alias gateway-identity-passphrase
 </code></pre><h5><a 
id="Generating+a+self-signed+cert+for+use+in+testing+or+development+environments">Generating
 a self-signed cert for use in testing or development environments</a> <a 
href="#Generating+a+self-signed+cert+for+use+in+testing+or+development+environments"><img
 src="markbook-section-link.png"/></a></h5>
@@ -666,42 +675,40 @@ ip-10-39-107-209.ec2.internal
     -storepass {master-secret} -validity 360 -keysize 2048
 </code></pre><p>Keytool will prompt you for a number of elements that will comprise the distinguished name (DN) within your certificate. 
</p><p><em>NOTE:</em> When it prompts you for your First and Last name be sure 
to type in the hostname of the machine that your gateway instance will be 
running on. This is used by clients during hostname verification to ensure that 
the presented certificate matches the hostname that was used in the URL for the 
connection - so they need to match.</p><p><em>NOTE:</em> When it prompts for 
the key password just press enter to ensure that it is the same as the keystore password, which, as described earlier, must match the master secret for the gateway instance. Alternatively, you can set it to another passphrase - take 
note of it and set the gateway-identity-passphrase alias to that passphrase 
using the Knox CLI.</p><p>See the Knox CLI section for descriptions of the 
command line utilities related to the management of the keystores.</p><h5><a 
id="Using+a+CA+Signed+Key+Pair">Using a CA Signed Key Pair</a> <a 
href="#Using+a+CA+Signed+Key+Pair"><img 
src="markbook-section-link.png"/></a></h5><p>For certain deployments a 
certificate key pair that is signed by a trusted certificate authority is 
required. There are a number of different ways in which these certificates are 
acquired and can be converted and imported into the Apache Knox 
keystore.</p><p>The following steps have been used to do this and are provided 
here for guidance in your installation. You may have to adjust according to 
your environment.</p><p>General steps:</p>
 <ol>
-  <li>stop gateway and back up all files in 
/var/lib/knox/data/security/keystores<br/>gateway.sh stop</li>
-  <li>create new master key for knox and persist, the master key will be 
referred to in following steps as $master-key<br/>knoxcli.sh create-master 
-force</li>
-  <li>create identity keystore gateway.jks. cert in alias gateway-identity
-  <ul>
-    <li>cd /var/lib/knox/data/security/keystore</li>
-    <li>keytool -genkeypair -alias gateway-identity -keyalg RSA -keysize 1024 
-dname &ldquo;CN=$fqdn_knox,OU=hdp,O=sdge&rdquo; -keypass $keypass -keystore 
gateway.jks -storepass $master-key -validity 300<br/>NOTE: above $fqdn_knox is 
the hostname of the knox host. adjust validity as needed. some may choose 
$keypass to be the same as $master-key</li>
-  </ul></li>
-  <li>create credential store to store the $keypass in step 3. this creates 
__gateway-credentials.jceks file<br/>
-  <ul>
-    <li>knoxcli.sh create-alias gateway-identity-passphrase &ndash;value 
$keypass</li>
-  </ul></li>
-  <li>generate a certificate signing request from the gateway.jks
-  <ul>
-    <li>keytool -keystore gateway.jks -storepass $master-key -alias 
gateway-identity -certreq -file knox.csr</li>
-  </ul></li>
-  <li>send the knox.csr file to the CA authority and get back the singed 
certificate, signed cert referred to as knox.signed in following steps. Also 
need the CA cert, which normally can be requested through openssl command or 
web browser. (or can ask the CA authority to send a copy).</li>
-  <li>import both the CA authority certificate (referred as corporateCA.cer) 
and the signed knox certificate back into gateway.jks
-  <ul>
-    <li>keytool -keystore gateway.jks -storepass $master-key -alias $hwhq 
-import -file corporateCA.cer</li>
-    <li>keytool -keystore gateway.jks -storepass $master-key -alias 
gateway-identity -import -file knox.signed<br/>Note: use any alias appropriate 
for the corporate CA.</li>
-  </ul></li>
-  <li>restart gateway. check gateway.log to see that gateway started properly 
and clusters are deployed. Can check the timestamp on cluster deployment files
-  <ul>
-    <li>ls -alrt /var/lib/knox/data/deployment</li>
-  </ul></li>
-  <li>verify that clients can use the CA authority cert to access Knox (which 
is the goal of using public signed cert)
-  <ul>
-    <li>curl &ndash;cacert supwin12ad.cer -u hdptester:hadoop -X GET &lsquo;<a 
href="https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS";>https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS</a>&rsquo;
 or can verify through client browser which already has the corporate CA cert 
installed.</li>
-  </ul></li>
+  <li><p>Stop the Knox gateway and back up all files in <code>{GATEWAY_HOME}/data/security/keystores</code></p>
+  <pre><code>gateway.sh stop
+</code></pre></li>
+  <li><p>Create a new master key for Knox and persist it. The master key will be referred to in the following steps as <code>$master-key</code></p>
+  <pre><code>knoxcli.sh create-master -force
+</code></pre></li>
+  <li><p>Create the identity keystore gateway.jks, with the certificate under the alias gateway-identity</p>
+  <pre><code>cd {GATEWAY_HOME}/data/security/keystores
+keytool -genkeypair -alias gateway-identity -keyalg RSA -keysize 1024 -dname 
&quot;CN=$fqdn_knox,OU=hdp,O=sdge&quot; -keypass $keypass -keystore gateway.jks 
-storepass $master-key -validity 300  
+</code></pre><p>NOTE: <code>$fqdn_knox</code> is the hostname of the Knox 
host. Some may choose <code>$keypass</code> to be the same as 
<code>$master-key</code>.</p></li>
+  <li><p>Create a credential store to store the <code>$keypass</code> from step 3. This creates the <code>__gateway-credentials.jceks</code> file</p>
+  <pre><code>knoxcli.sh create-alias gateway-identity-passphrase --value 
$keypass
+</code></pre></li>
+  <li><p>Generate a certificate signing request from the gateway.jks</p>
+  <pre><code>keytool -keystore gateway.jks -storepass $master-key -alias 
gateway-identity -certreq -file knox.csr
+</code></pre></li>
+  <li><p>Send the <code>knox.csr</code> file to the CA and get back the signed certificate (<code>knox.signed</code>). You also need the CA certificate, which can normally be requested through an openssl command, a web browser, or directly from the CA.</p></li>
+  <li><p>Import both the CA certificate (referred to as <code>corporateCA.cer</code>) and the signed Knox certificate back into <code>gateway.jks</code></p>
+  <pre><code>keytool -keystore gateway.jks -storepass $master-key -alias $hwhq 
-import -file corporateCA.cer  
+keytool -keystore gateway.jks -storepass $master-key -alias gateway-identity 
-import -file knox.signed  
+</code></pre><p>NOTE: Use any alias appropriate for the corporate CA.</p></li>
+  <li><p>Restart the Knox gateway. Check <code>gateway.log</code> to verify that the gateway started properly and clusters are deployed. You can check the timestamp on cluster deployment files</p>
+  <pre><code>ls -alrt {GATEWAY_HOME}/data/deployment
+</code></pre></li>
+  <li><p>Verify that clients can use the CA certificate to access Knox (which is the goal of using a publicly signed certificate) using curl or a web browser which has the CA certificate installed</p>
+  <pre><code>curl --cacert supwin12ad.cer -u hdptester:hadoop -X GET 
&#39;https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS&#39;
+</code></pre></li>
 </ol><h5><a id="Credential+Store">Credential Store</a> <a 
href="#Credential+Store"><img 
src="markbook-section-link.png"/></a></h5><p>Whenever you provide your own 
keystore with either a self-signed cert or an issued certificate signed by a 
trusted authority, you will need to set an alias for the 
gateway-identity-passphrase or create an empty credential store. This is 
necessary for the current release in order for the system to determine the 
correct password for the keystore and the key.</p><p>The credential stores in 
Knox use the JCEKS keystore type as it allows for the storage of general 
secrets in addition to certificates.</p><p>Keytool may be used to create 
credential stores but the Knox CLI section details how to create aliases. These 
aliases are managed within credential stores which are created by the CLI as 
needed. The simplest approach is to create the gateway-identity-passphrase 
alias with the Knox CLI. This will create the credential store if it 
doesn&rsquo;t already exist
  and add the key passphrase.</p><p>See the Knox CLI section for descriptions 
of the command line utilities related to the management of the credential 
stores.</p><h5><a id="Provisioning+of+Keystores">Provisioning of Keystores</a> 
<a href="#Provisioning+of+Keystores"><img 
src="markbook-section-link.png"/></a></h5><p>Once you have created these 
keystores you must move them into place for the gateway to discover them and 
use them to represent its identity for SSL connections. This is done by copying 
the keystores to the <code>{GATEWAY_HOME}/data/security/keystores</code> 
directory for your gateway install.</p><h4><a 
id="Summary+of+Secrets+to+be+Managed">Summary of Secrets to be Managed</a> <a 
href="#Summary+of+Secrets+to+be+Managed"><img 
src="markbook-section-link.png"/></a></h4>
 <ol>
   <li>Master secret - the same for all gateway instances in a cluster of 
gateways</li>
   <li>All security related artifacts are protected with the master secret</li>
   <li>Secrets used by the gateway itself are stored within the gateway 
credential store and are the same across all gateway instances in the cluster 
of gateways</li>
   <li>Secrets used by providers within cluster topologies are stored in 
topology specific credential stores and are the same for the same topology 
across the cluster of gateway instances.  However, they are specific to the 
topology - so secrets for one Hadoop cluster are different from those of 
another.  This allows for fail-over from one gateway instance to another even 
when encryption is being used while not allowing the compromise of one 
encryption key to expose the data for all clusters.</li>
-</ol><p>NOTE: the SSL certificate will need special consideration depending on 
the type of certificate. Wildcard certs may be able to be shared across all 
gateway instances in a cluster. When certs are dedicated to specific machines 
the gateway identity store will not be able to be blindly replicated as host 
name verification problems will ensue. Obviously, trust-stores will need to be 
taken into account as well.</p><h3><a id="Knox+CLI">Knox CLI</a> <a 
href="#Knox+CLI"><img src="markbook-section-link.png"/></a></h3><p>The Knox CLI 
is a command line utility for management of various aspects of the Knox 
deployment. It is primarily concerned with the management of the security 
artifacts for the gateway instance and each of the deployed topologies or 
hadoop clusters that are gated by the Knox Gateway instance.</p><p>The various 
security artifacts are also generated and populated automatically by the Knox 
Gateway runtime when they are not found at startup. The assumptions made in 
those c
 ases are appropriate for a test or development gateway instance and assume 
&lsquo;localhost&rsquo; for hostname specific activities. For production 
deployments the use of the CLI may aid in managing some production 
deployments.</p><p>The knoxcli.sh script is located in the {GATEWAY_HOME}/bin 
directory.</p><h4><a id="Help">Help</a> <a href="#Help"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+[--help]`"><code>bin/knoxcli.sh [--help]</code></a> <a 
href="#`bin/knoxcli.sh+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>prints help for all 
commands</p><h4><a id="Knox+Version+Info">Knox Version Info</a> <a 
href="#Knox+Version+Info"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+version+[--help]`"><code>bin/knoxcli.sh version 
[--help]</code></a> <a href="#`bin/knoxcli.sh+version+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Displays Knox version 
information.</p><h4><a id="Master+secret+persistence">Master se
 cret persistence</a> <a href="#Master+secret+persistence"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-master+[--force][--help]`"><code>bin/knoxcli.sh 
create-master [--force][--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-master+[--force][--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Creates and persists an encrypted 
master secret in a file within {GATEWAY_HOME}/data/security/master. 
</p><p>NOTE: This command fails when there is an existing master file in the 
expected location. You may force it to overwrite the master file with the 
--force switch. NOTE: this will require you to change passwords protecting the 
keystores for the gateway identity keystores and all credential 
stores.</p><h4><a id="Alias+creation">Alias creation</a> <a 
href="#Alias+creation"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"><code>bin/knoxcli.sh
 create-alias na
 me [--cluster c] [--value v] [--generate] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"><img
 src="markbook-section-link.png"/></a></h5><p>Creates a password alias and 
stores it in a credential store within the 
{GATEWAY_HOME}/data/security/keystores dir. </p>
+</ol><p>NOTE: the SSL certificate will need special consideration depending on 
the type of certificate. Wildcard certs may be able to be shared across all 
gateway instances in a cluster. When certs are dedicated to specific machines 
the gateway identity store will not be able to be blindly replicated as host 
name verification problems will ensue. Obviously, trust-stores will need to be 
taken into account as well.</p><h3><a id="Knox+CLI">Knox CLI</a> <a 
href="#Knox+CLI"><img src="markbook-section-link.png"/></a></h3><p>The Knox CLI 
is a command line utility for the management of various aspects of the Knox 
deployment. It is primarily concerned with the management of the security 
artifacts for the gateway instance and each of the deployed topologies or 
Hadoop clusters that are gated by the Knox Gateway instance.</p><p>The various 
security artifacts are also generated and populated automatically by the Knox 
Gateway runtime when they are not found at startup. The assumptions made in those cases are appropriate for a test or development gateway instance and assume &lsquo;localhost&rsquo; for hostname specific activities. For production deployments the CLI may aid in managing these artifacts.</p><p>The knoxcli.sh script is located in the 
<code>{GATEWAY_HOME}/bin</code> directory.</p><h4><a id="Help">Help</a> <a 
href="#Help"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+[--help]`"><code>bin/knoxcli.sh [--help]</code></a> <a 
href="#`bin/knoxcli.sh+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Prints help for all 
commands</p><h4><a id="Knox+Version+Info">Knox Version Info</a> <a 
href="#Knox+Version+Info"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+version+[--help]`"><code>bin/knoxcli.sh version 
[--help]</code></a> <a href="#`bin/knoxcli.sh+version+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Displays Knox version 
information.</p><h4><a id="Master+secret+persistence">Master secret persistence</a> <a 
href="#Master+secret+persistence"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-master+[--force][--help]`"><code>bin/knoxcli.sh 
create-master [--force][--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-master+[--force][--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Creates and persists an encrypted 
master secret in a file within 
<code>{GATEWAY_HOME}/data/security/master</code>. </p><p>NOTE: This command 
fails when there is an existing master file in the expected location. You may 
force it to overwrite the master file with the --force switch. NOTE: This will 
require you to change passwords protecting the keystores for the gateway 
identity keystores and all credential stores.</p><h4><a 
id="Alias+creation">Alias creation</a> <a href="#Alias+creation"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"><code>bin/knoxcli.sh create-alias name [--cluster c] [--value v] [--generate] 
[--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"><img
 src="markbook-section-link.png"/></a></h5><p>Creates a password alias and 
stores it in a credential store within the 
<code>{GATEWAY_HOME}/data/security/keystores</code> dir. </p>
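For illustration, a few hypothetical invocations of create-alias might look like the following. The alias names, cluster name, and value below are invented for this sketch, not taken from a real deployment:

```shell
# Hypothetical Knox CLI invocations; alias/cluster/value names are illustrative.
# Store an explicit value in a cluster-specific credential store:
bin/knoxcli.sh create-alias ldapSystemPassword --cluster sandbox --value secret
# Prompt interactively for the value (stored in the '__gateway' credential store):
bin/knoxcli.sh create-alias gateway-identity-passphrase
# Let the tool generate a random value without prompting:
bin/knoxcli.sh create-alias encryptQueryString --generate
```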
 <table>
   <thead>
     <tr>
@@ -727,7 +734,7 @@ ip-10-39-107-209.ec2.internal
      <td>Boolean flag to indicate whether the tool should just generate the value. This assumes that --value is not set - it will result in an error otherwise. The user will not be prompted for the value when --generate is set.</td>
     </tr>
   </tbody>
-</table><h4><a id="Alias+deletion">Alias deletion</a> <a 
href="#Alias+deletion"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+delete-alias+name+[--cluster+c]+[--help]`"><code>bin/knoxcli.sh
 delete-alias name [--cluster c] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+delete-alias+name+[--cluster+c]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Deletes a password and alias 
mapping from a credential store within {GATEWAY_HOME}/data/security/keystores. 
</p>
+</table><h4><a id="Alias+deletion">Alias deletion</a> <a 
href="#Alias+deletion"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+delete-alias+name+[--cluster+c]+[--help]`"><code>bin/knoxcli.sh
 delete-alias name [--cluster c] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+delete-alias+name+[--cluster+c]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Deletes a password and alias 
mapping from a credential store within 
<code>{GATEWAY_HOME}/data/security/keystores</code>.</p>
 <table>
   <thead>
     <tr>
@@ -745,7 +752,7 @@ ip-10-39-107-209.ec2.internal
       <td>name of Hadoop cluster for the cluster specific credential store 
otherwise assumes &rsquo;__gateway&rsquo;</td>
     </tr>
   </tbody>
-</table><h4><a id="Alias+listing">Alias listing</a> <a 
href="#Alias+listing"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+list-alias+[--cluster+c]+[--help]`"><code>bin/knoxcli.sh 
list-alias [--cluster c] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+list-alias+[--cluster+c]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Lists the alias names for the 
credential store within {GATEWAY_HOME}/data/security/keystores. </p><p>NOTE: 
This command will list the aliases in lowercase which is a result of the 
underlying credential store implementation. Lookup of credentials is a case 
insensitive operation - so this is not an issue.</p>
+</table><h4><a id="Alias+listing">Alias listing</a> <a 
href="#Alias+listing"><img src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+list-alias+[--cluster+c]+[--help]`"><code>bin/knoxcli.sh 
list-alias [--cluster c] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+list-alias+[--cluster+c]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Lists the alias names for the 
credential store within 
<code>{GATEWAY_HOME}/data/security/keystores</code>.</p><p>NOTE: This command 
will list the aliases in lowercase which is a result of the underlying 
credential store implementation. Lookup of credentials is a case insensitive 
operation - so this is not an issue.</p>
 <table>
   <thead>
     <tr>
@@ -759,7 +766,7 @@ ip-10-39-107-209.ec2.internal
       <td>name of Hadoop cluster for the cluster specific credential store 
otherwise assumes &rsquo;__gateway&rsquo;</td>
     </tr>
   </tbody>
-</table><h4><a id="Self-signed+cert+creation">Self-signed cert creation</a> <a 
href="#Self-signed+cert+creation"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-cert+[--hostname+n]+[--help]`"><code>bin/knoxcli.sh 
create-cert [--hostname n] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-cert+[--hostname+n]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Creates and stores a self-signed 
certificate to represent the identity of the gateway instance. This is stored 
within the {GATEWAY_HOME}/data/security/keystores/gateway.jks keystore. </p>
+</table><h4><a id="Self-signed+cert+creation">Self-signed cert creation</a> <a 
href="#Self-signed+cert+creation"><img 
src="markbook-section-link.png"/></a></h4><h5><a 
id="`bin/knoxcli.sh+create-cert+[--hostname+n]+[--help]`"><code>bin/knoxcli.sh 
create-cert [--hostname n] [--help]</code></a> <a 
href="#`bin/knoxcli.sh+create-cert+[--hostname+n]+[--help]`"><img 
src="markbook-section-link.png"/></a></h5><p>Creates and stores a self-signed 
certificate to represent the identity of the gateway instance. This is stored 
within the <code>{GATEWAY_HOME}/data/security/keystores/gateway.jks</code> 
keystore. </p>
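As a sketch, creating and then inspecting the self-signed certificate might look like the following. The hostname is illustrative, and a JDK keytool is assumed to be on the PATH:

```shell
# Hypothetical invocation; the hostname is illustrative.
bin/knoxcli.sh create-cert --hostname gateway.example.com
# Inspect the stored certificate under the required 'gateway-identity' alias
# (prompts for the keystore password, i.e. the master secret):
keytool -list -v -keystore data/security/keystores/gateway.jks -alias gateway-identity
```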
 <table>
   <thead>
     <tr>
@@ -883,74 +890,37 @@ ip-10-39-107-209.ec2.internal
   </tbody>
 </table><p>Please note that to access the admin API, the user attempting to 
connect must have admin credentials inside of the LDAP Server</p><h5><a 
id="API+Documentation">API Documentation</a> <a href="#API+Documentation"><img 
src="markbook-section-link.png"/></a></h5><h6><a id="Operations">Operations</a> 
<a href="#Operations"><img src="markbook-section-link.png"/></a></h6>
 <ul>
-  <li><h6>HTTP GET</h6> 1. <a href="#Server+Version">Server Version</a><br/> 
2. <a href="#Topology+Collection">Topology Collection</a><br/> 3. <a 
href="#Topology">Topology</a></li>
+  <li><h6>HTTP GET</h6></li>
+</ul>
+<ol>
+  <li><a href="#Server+Version">Server Version</a></li>
+  <li><a href="#Topology+Collection">Topology Collection</a></li>
+  <li><a href="#Topology">Topology</a></li>
+</ol>
+<ul>
   <li><h6>HTTP PUT</h6></li>
   <li><h6>HTTP DELETE</h6></li>
 </ul><h5><a id="Server+Version">Server Version</a> <a 
href="#Server+Version"><img src="markbook-section-link.png"/></a></h5><h6><a 
id="Description">Description</a> <a href="#Description"><img 
src="markbook-section-link.png"/></a></h6><p>Calls to Knox and returns the 
gateway&rsquo;s current version and the version hash inside of a JSON object. 
</p><h6><a id="Example+Request+URL">Example Request URL</a> <a 
href="#Example+Request+URL"><img 
src="markbook-section-link.png"/></a></h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/version</code>
 </p><h6><a id="Example+cURL+Request">Example cURL Request</a> <a 
href="#Example+cURL+Request"><img 
src="markbook-section-link.png"/></a></h6><p><code>curl -u admin:admin-password 
-i -k 
https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/version</code></p><h6><a
 id="Response">Response</a> <a href="#Response"><img 
src="markbook-section-link.png"/></a></h6>
-<pre><code>    &lt;ServerVersion&gt;
-       &lt;version&gt;0.7.0&lt;/version&gt;
-       &lt;hash&gt;{version-hash}&lt;/hash&gt;
-    &lt;/ServerVersion&gt;
+<pre><code>&lt;ServerVersion&gt;
+    &lt;version&gt;0.7.0&lt;/version&gt;
+    &lt;hash&gt;{version-hash}&lt;/hash&gt;
+&lt;/ServerVersion&gt;
 </code></pre><h5><a id="Topology+Collection">Topology Collection</a> <a 
href="#Topology+Collection"><img 
src="markbook-section-link.png"/></a></h5><h6><a 
id="Description">Description</a> <a href="#Description"><img 
src="markbook-section-link.png"/></a></h6><p>Calls to Knox return an array 
of JSON objects that represent the list of deployed topologies currently inside 
of the gateway. </p><h6><a id="Example+Request+URL">Example Request URL</a> <a 
href="#Example+Request+URL"><img 
src="markbook-section-link.png"/></a></h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/{api-version}/topologies</code>
 </p><h6><a id="Example+cURL+Request">Example cURL Request</a> <a 
href="#Example+cURL+Request"><img 
src="markbook-section-link.png"/></a></h6><p><code>curl -u admin:admin-password 
-i -k -H Accept:application/json 
https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies</code></p><h6><a
 id="Response">Response</a> <a href="#Response"><img 
src="markbook-section-link.png"/></a></h6>
 <pre><code>[  
-    {  
-       
&quot;href&quot;:&quot;https://localhost:8443/gateway/admin/api/v1/topologies/_default&quot;,
-       &quot;name&quot;:&quot;_default&quot;,
-       &quot;timestamp&quot;:&quot;1405633120000&quot;,
-       &quot;uri&quot;:&quot;https://localhost:8443/gateway/_default&quot;
-    },
-    {  
-       
&quot;href&quot;:&quot;https://localhost:8443/gateway/admin/api/v1/topologies/admin&quot;,
-       &quot;name&quot;:&quot;admin&quot;,
-       &quot;timestamp&quot;:&quot;1406672646000&quot;,
-       &quot;uri&quot;:&quot;https://localhost:8443/gateway/admin&quot;
-    }
+  {  
+    
&quot;href&quot;:&quot;https://localhost:8443/gateway/admin/api/v1/topologies/_default&quot;,
+    &quot;name&quot;:&quot;_default&quot;,
+    &quot;timestamp&quot;:&quot;1405633120000&quot;,
+    &quot;uri&quot;:&quot;https://localhost:8443/gateway/_default&quot;
+  },
+  {  
+    
&quot;href&quot;:&quot;https://localhost:8443/gateway/admin/api/v1/topologies/admin&quot;,
+    &quot;name&quot;:&quot;admin&quot;,
+    &quot;timestamp&quot;:&quot;1406672646000&quot;,
+    &quot;uri&quot;:&quot;https://localhost:8443/gateway/admin&quot;
+  }
 ]  
-</code></pre><h5><a id="Topology">Topology</a> <a href="#Topology"><img 
src="markbook-section-link.png"/></a></h5><h6><a 
id="Description">Description</a> <a href="#Description"><img 
src="markbook-section-link.png"/></a></h6><p>Calls to Knox and return a JSON 
object that represents the requested topology </p><h6><a 
id="Example+Request+URL">Example Request URL</a> <a 
href="#Example+Request+URL"><img 
src="markbook-section-link.png"/></a></h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies/{topology-name}</code>
 </p><h6><a id="Example+cURL+Request">Example cURL Request</a> <a 
href="#Example+cURL+Request"><img 
src="markbook-section-link.png"/></a></h6><p><code>curl -u admin:admin-password 
-i -k -H Accept:application/json 
https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies/{topology-name}</code></p><h6><a
 id="Response">Response</a> <a href="#Response"><img 
src="markbook-section-link.png"/></a></h6>
-<pre><code>{
-    &quot;name&quot;: &quot;admin&quot;,
-    &quot;providers&quot;: [{
-       &quot;enabled&quot;: true,
-       &quot;name&quot;: &quot;ShiroProvider&quot;,
-       &quot;params&quot;: {
-         &quot;sessionTimeout&quot;: &quot;30&quot;,
-         &quot;main.ldapRealm&quot;: 
&quot;org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm&quot;,
-         &quot;main.ldapRealm.userDnTemplate&quot;: 
&quot;uid={0},ou=people,dc=hadoop,dc=apache,dc=org&quot;,
-         &quot;main.ldapRealm.contextFactory.url&quot;: 
&quot;ldap://localhost:33389&quot;,
-         &quot;main.ldapRealm.contextFactory.authenticationMechanism&quot;: 
&quot;simple&quot;,
-         &quot;urls./**&quot;: &quot;authcBasic&quot;
-       },
-       &quot;role&quot;: &quot;authentication&quot;
-    }, {
-       &quot;enabled&quot;: true,
-       &quot;name&quot;: &quot;AclsAuthz&quot;,
-       &quot;params&quot;: {
-         &quot;knox.acl&quot;: &quot;admin;*;*&quot;
-       },
-       &quot;role&quot;: &quot;authorization&quot;
-    }, {
-       &quot;enabled&quot;: true,
-       &quot;name&quot;: &quot;Default&quot;,
-       &quot;params&quot;: {},
-       &quot;role&quot;: &quot;identity-assertion&quot;
-    }, {
-       &quot;enabled&quot;: true,
-       &quot;name&quot;: &quot;static&quot;,
-       &quot;params&quot;: {
-         &quot;localhost&quot;: &quot;sandbox,sandbox.hortonworks.com&quot;
-       },
-       &quot;role&quot;: &quot;hostmap&quot;
-    }],
-    &quot;services&quot;: [{
-       &quot;name&quot;: null,
-       &quot;params&quot;: {},
-       &quot;role&quot;: &quot;KNOX&quot;,
-       &quot;url&quot;: null
-    }],
-    &quot;timestamp&quot;: 1406672646000,
-    &quot;uri&quot;: &quot;https://localhost:8443/gateway/admin&quot;
-}
-</code></pre><h3><a id="X-Forwarded-*+Headers+Support">X-Forwarded-* Headers 
Support</a> <a href="#X-Forwarded-*+Headers+Support"><img 
src="markbook-section-link.png"/></a></h3><p>Out-of-the-box Knox provides 
support for some X-Forwarded-* headers through the use of a Servlet Filter. 
Specifically the headers handled/populated by Knox are:</p>
+</code></pre><h5><a id="Topology">Topology</a> <a href="#Topology"><img 
src="markbook-section-link.png"/></a></h5><h6><a 
id="Description">Description</a> <a href="#Description"><img 
src="markbook-section-link.png"/></a></h6><p>A call to this endpoint returns a 
JSON object representing the requested topology.</p><h6><a 
id="Example+Request+URL">Example Request URL</a> <a 
href="#Example+Request+URL"><img 
src="markbook-section-link.png"/></a></h6><p><code>https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies/{topology-name}</code>
 </p><h6><a id="Example+cURL+Request">Example cURL Request</a> <a 
href="#Example+cURL+Request"><img 
src="markbook-section-link.png"/></a></h6><p><code>curl -u admin:admin-password 
-i -k -H Accept:application/json 
https://{gateway-host}:{gateway-port}/{gateway-path}/admin/api/v1/topologies/{topology-name}</code></p><h6><a
 id="Response">Response</a> <a href="#Response"><img 
src="markbook-section-link.png"/></a></h6>
+<pre><code>{
+  &quot;name&quot;: &quot;admin&quot;,
+  &quot;providers&quot;: [{
+    &quot;enabled&quot;: true,
+    &quot;name&quot;: &quot;ShiroProvider&quot;,
+    &quot;params&quot;: {
+      &quot;sessionTimeout&quot;: &quot;30&quot;,
+      &quot;main.ldapRealm&quot;: &quot;org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm&quot;,
+      &quot;main.ldapRealm.userDnTemplate&quot;: &quot;uid={0},ou=people,dc=hadoop,dc=apache,dc=org&quot;,
+      &quot;main.ldapRealm.contextFactory.url&quot;: &quot;ldap://localhost:33389&quot;,
+      &quot;main.ldapRealm.contextFactory.authenticationMechanism&quot;: &quot;simple&quot;,
+      &quot;urls./**&quot;: &quot;authcBasic&quot;
+    },
+    &quot;role&quot;: &quot;authentication&quot;
+  }, {
+    &quot;enabled&quot;: true,
+    &quot;name&quot;: &quot;AclsAuthz&quot;,
+    &quot;params&quot;: {
+      &quot;knox.acl&quot;: &quot;admin;*;*&quot;
+    },
+    &quot;role&quot;: &quot;authorization&quot;
+  }, {
+    &quot;enabled&quot;: true,
+    &quot;name&quot;: &quot;Default&quot;,
+    &quot;params&quot;: {},
+    &quot;role&quot;: &quot;identity-assertion&quot;
+  }, {
+    &quot;enabled&quot;: true,
+    &quot;name&quot;: &quot;static&quot;,
+    &quot;params&quot;: {
+      &quot;localhost&quot;: &quot;sandbox,sandbox.hortonworks.com&quot;
+    },
+    &quot;role&quot;: &quot;hostmap&quot;
+  }],
+  &quot;services&quot;: [{
+    &quot;name&quot;: null,
+    &quot;params&quot;: {},
+    &quot;role&quot;: &quot;KNOX&quot;,
+    &quot;url&quot;: null
+  }],
+  &quot;timestamp&quot;: 1406672646000,
+  &quot;uri&quot;: &quot;https://localhost:8443/gateway/admin&quot;
+}
+</code></pre><h3><a id="X-Forwarded-*+Headers+Support">X-Forwarded-* Headers 
Support</a> <a href="#X-Forwarded-*+Headers+Support"><img 
src="markbook-section-link.png"/></a></h3><p>Out-of-the-box Knox provides 
support for some X-Forwarded-* headers through the use of a Servlet Filter. 
Specifically the headers handled/populated by Knox are:</p>
 <ul>
   <li>X-Forwarded-For</li>
   <li>X-Forwarded-Proto</li>
@@ -959,11 +929,11 @@ ip-10-39-107-209.ec2.internal
   <li>X-Forwarded-Server</li>
   <li>X-Forwarded-Context</li>
</ul><p>This functionality can be turned off by a configuration setting in the 
gateway-site.xml file and redeploying the necessary 
topology/topologies.</p><p>The setting (under the &lsquo;configuration&rsquo; 
tag) is:</p>
-<pre><code>   &lt;property&gt;
-        &lt;name&gt;gateway.xforwarded.enabled&lt;/name&gt;
-        &lt;value&gt;false&lt;/value&gt;
-    &lt;/property&gt;
-</code></pre><p>If this setting is absent, the default behavior is that the 
X-Forwarded-* header support is on or in other words, 
&lsquo;gateway.xforwarded.enabled&rsquo; is set to &lsquo;true&rsquo; by 
default.</p><h4><a id="Header+population">Header population</a> <a 
href="#Header+population"><img src="markbook-section-link.png"/></a></h4><p>The 
following are the various rules for population of these headers:</p><h5><a 
id="X-Forwarded-For">X-Forwarded-For</a> <a href="#X-Forwarded-For"><img 
src="markbook-section-link.png"/></a></h5><p>This header represents a list of 
client IP addresses. If the header is already present Knox adds a comma 
separated value to the list. The value added is the client&rsquo;s IP address 
as Knox sees it. This value is added to the end of the list.</p><h5><a 
id="X-Forwarded-Proto">X-Forwarded-Proto</a> <a href="#X-Forwarded-Proto"><img 
src="markbook-section-link.png"/></a></h5><p>The protocol used in the client 
request. If this header is passed into Knox 
 it&rsquo;s value is maintained, otherwise Knox will populate the header with 
the value &lsquo;https&rsquo; if the request is a secure one or 
&lsquo;http&rsquo; otherwise.</p><h5><a 
id="X-Forwarded-Port">X-Forwarded-Port</a> <a href="#X-Forwarded-Port"><img 
src="markbook-section-link.png"/></a></h5><p>The port used in the client 
request. If this header is passed into Knox it&rsquo;s value is maintained, 
otherwise Knox will populate the header with the value of the port that the 
request was made coming into Knox.</p><h5><a 
id="X-Forwarded-Host">X-Forwarded-Host</a> <a href="#X-Forwarded-Host"><img 
src="markbook-section-link.png"/></a></h5><p>Represents the original host 
requested by the client in the Host HTTP request header. The value passed into 
Knox is maintained by Knox. If no value is present, Knox populates the header 
with the value of the HTTP Host header.</p><h5><a 
id="X-Forwarded-Server">X-Forwarded-Server</a> <a 
href="#X-Forwarded-Server"><img src="markbook-section-link.png"
 /></a></h5><p>The hostname of the server Knox is running on.</p><h5><a 
id="X-Forwarded-Context">X-Forwarded-Context</a> <a 
href="#X-Forwarded-Context"><img 
src="markbook-section-link.png"/></a></h5><p>This header value contains the 
context path of the request to Knox.</p><h3><a 
id="Authentication">Authentication</a> <a href="#Authentication"><img 
src="markbook-section-link.png"/></a></h3><p>There are two types of providers 
supported in Knox for establishing a user&rsquo;s identity:</p>
+<pre><code>&lt;property&gt;
+    &lt;name&gt;gateway.xforwarded.enabled&lt;/name&gt;
+    &lt;value&gt;false&lt;/value&gt;
+&lt;/property&gt;
+</code></pre><p>If this setting is absent, X-Forwarded-* header support is on; 
in other words, &lsquo;gateway.xforwarded.enabled&rsquo; defaults to 
&lsquo;true&rsquo;.</p><h4><a id="Header+population">Header population</a> <a 
href="#Header+population"><img src="markbook-section-link.png"/></a></h4><p>The 
following are the various rules for population of these headers:</p><h5><a 
id="X-Forwarded-For">X-Forwarded-For</a> <a href="#X-Forwarded-For"><img 
src="markbook-section-link.png"/></a></h5><p>This header represents a list of 
client IP addresses. If the header is already present, Knox appends a 
comma-separated value to the end of the list: the client&rsquo;s IP address as 
Knox sees it.</p><h5><a 
id="X-Forwarded-Proto">X-Forwarded-Proto</a> <a href="#X-Forwarded-Proto"><img 
src="markbook-section-link.png"/></a></h5><p>The protocol used in the client 
request. If this header is passed into Knox, its value is maintained; otherwise 
Knox populates the header with &lsquo;https&rsquo; if the request is secure or 
&lsquo;http&rsquo; if not.</p><h5><a id="X-Forwarded-Port">X-Forwarded-Port</a> <a 
href="#X-Forwarded-Port"><img src="markbook-section-link.png"/></a></h5><p>The 
port used in the client request. If this header is passed into Knox, its value 
is maintained; otherwise Knox populates the header with the port on which the 
request came into Knox.</p><h5><a 
id="X-Forwarded-Host">X-Forwarded-Host</a> <a href="#X-Forwarded-Host"><img 
src="markbook-section-link.png"/></a></h5><p>Represents the original host 
requested by the client in the Host HTTP request header. The value passed into 
Knox is maintained by Knox. If no value is present, Knox populates the header 
with the value of the HTTP Host header.</p><h5><a 
id="X-Forwarded-Server">X-Forwarded-Server</a> <a 
href="#X-Forwarded-Server"><img src="markbook-section-link.png"/></a></h5><p>
 The hostname of the server Knox is running on.</p><h5><a 
id="X-Forwarded-Context">X-Forwarded-Context</a> <a 
href="#X-Forwarded-Context"><img 
src="markbook-section-link.png"/></a></h5><p>This header value contains the 
context path of the request to Knox.</p><h3><a 
id="Authentication">Authentication</a> <a href="#Authentication"><img 
src="markbook-section-link.png"/></a></h3><p>There are two types of providers 
supported in Knox for establishing a user&rsquo;s identity:</p>
 <ol>
   <li>Authentication Providers</li>
   <li>Federation Providers</li>
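The X-Forwarded-For behavior described in the previous section (Knox appends the client address it observes to the end of any existing comma-separated list) can be sketched as follows; this is an illustrative snippet, not Knox's actual servlet filter, and the function name is hypothetical:

```python
def populate_x_forwarded_for(headers, client_ip):
    """Sketch of the X-Forwarded-For rule: append the client address the
    gateway observes to the end of any existing list, creating the header
    if it is absent. `headers` is a plain dict of header name -> value."""
    existing = headers.get("X-Forwarded-For")
    if existing:
        headers["X-Forwarded-For"] = existing + "," + client_ip
    else:
        headers["X-Forwarded-For"] = client_ip
    return headers

# A request that already passed through one proxy gains a second entry:
h = populate_x_forwarded_for({"X-Forwarded-For": "10.0.0.5"}, "192.168.1.7")
print(h["X-Forwarded-For"])  # 10.0.0.5,192.168.1.7
```

The other X-Forwarded-* headers follow a similar keep-if-present, populate-if-absent pattern, as described above.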
@@ -1018,12 +988,16 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
             &lt;value&gt;authcBasic&lt;/value&gt;
         &lt;/param&gt;
     &lt;/provider&gt;
-</code></pre><p>This happens to be the way that we are currently configuring 
Shiro for BASIC/LDAP authentication. This same config approach may be used to 
achieve other authentication mechanisms or variations on this one. We however 
have not tested additional uses for it for this release.</p><h4><a 
id="LDAP+Configuration">LDAP Configuration</a> <a 
href="#LDAP+Configuration"><img 
src="markbook-section-link.png"/></a></h4><p>This section discusses the LDAP 
configuration used above for the Shiro Provider. Some of these configuration 
elements will need to be customized to reflect your deployment 
environment.</p><p><strong>main.ldapRealm</strong> - this element indicates the 
fully qualified classname of the Shiro realm to be used in authenticating the 
user. The classname provided by default in the sample is the 
<code>org.apache.shiro.realm.ldap.JndiLdapRealm</code> this implementation 
provides us with the ability to authenticate but by default has authorization 
disabled. In order to prov
 ide authorization - which is seen by Shiro as dependent on an LDAP schema that 
is specific to each organization - an extension of JndiLdapRealm is generally 
used to override and implement the doGetAuhtorizationInfo method. In this 
particular release we are providing a simple authorization provider that can be 
used along with the Shiro authentication 
provider.</p><p><strong>main.ldapRealm.userDnTemplate</strong> - in order to 
bind a simple username to an LDAP server that generally requires a full 
distinguished name (DN), we must provide the template into which the simple 
username will be inserted. This template allows for the creation of a DN by 
injecting the simple username into the common name (CN) portion of the DN. 
<strong>This element will need to be customized to reflect your deployment 
environment.</strong> The template provided in the sample is only an example 
and is valid only within the LDAP schema distributed with Knox and is 
represented by the users.ldif file in the {GATE
 WAY_HOME}/conf 
directory.</p><p><strong>main.ldapRealm.contextFactory.url</strong> - this 
element is the URL that represents the host and port of LDAP server. It also 
includes the scheme of the protocol to use. This may be either ldap or ldaps 
depending on whether you are communicating with the LDAP over SSL (highly 
recommended). <strong>This element will need to be customized to reflect your 
deployment 
environment.</strong>.</p><p><strong>main.ldapRealm.contextFactory.authenticationMechanism</strong>
 - this element indicates the type of authentication that should be performed 
against the LDAP server. The current default value is <code>simple</code> which 
indicates a simple bind operation. This element should not need to be modified 
and no mechanism other than a simple bind has been tested for this particular 
release.</p><p><strong>urls./</strong>** - this element represents a single 
URL_Ant_Path_Expression and the value the Shiro filter chain to apply to it. 
This particular sample 
 indicates that all paths into the application have the same Shiro filter chain 
applied. The paths are relative to the application context path. The use of the 
value <code>authcBasic</code> here indicates that BASIC authentication is 
expected for every path into the application. Adding an additional Shiro filter 
to that chain for validating that the request isSecure() and over SSL can be 
achieved by changing the value to <code>ssl, authcBasic</code>. It is not 
likely that you need to change this element for your environment.</p><h4><a 
id="Active+Directory+-+Special+Note">Active Directory - Special Note</a> <a 
href="#Active+Directory+-+Special+Note"><img 
src="markbook-section-link.png"/></a></h4><p>You would use LDAP configuration 
as documented above to authenticate against Active Directory as 
well.</p><p>Some Active Directory specific things to keep in 
mind:</p><p>Typical AD main.ldapRealm.userDnTemplate value looks slightly 
different, such as  cn={0},cn=users,DC=lab,DC=sample,dc=com
 </p><p>Please compare this with a typical Apache DS 
main.ldapRealm.userDnTemplate value and make note of the difference.  
uid={0},ou=people,dc=hadoop,dc=apache,dc=org</p><p>If your AD is configured to 
authenticate based on just the cn and password and does not require user DN, 
you do not have to specify value for main.ldapRealm.userDnTemplate.</p><h4><a 
id="LDAP+over+SSL+(LDAPS)+Configuration">LDAP over SSL (LDAPS) 
Configuration</a> <a href="#LDAP+over+SSL+(LDAPS)+Configuration"><img 
src="markbook-section-link.png"/></a></h4><p>In order to communicate with your 
LDAP server over SSL (again, highly recommended), you will need to modify the 
topology file in a couple ways and possibly provision some keying material.</p>
+</code></pre><p>This happens to be the way that we are currently configuring 
Shiro for BASIC/LDAP authentication. This same config approach may be used to 
achieve other authentication mechanisms or variations on this one. We have not, 
however, tested additional uses of it for this release.</p><h4><a 
id="LDAP+Configuration">LDAP Configuration</a> <a 
href="#LDAP+Configuration"><img 
src="markbook-section-link.png"/></a></h4><p>This section discusses the LDAP 
configuration used above for the Shiro Provider. Some of these configuration 
elements will need to be customized to reflect your deployment 
environment.</p><p><strong>main.ldapRealm</strong> - this element indicates the 
fully qualified class name of the Shiro realm to be used in authenticating the 
user. The class name provided by default in the sample is 
<code>org.apache.shiro.realm.ldap.JndiLdapRealm</code>; this implementation 
provides the ability to authenticate but by default has authorization disabled. 
In order to provide authorization - which is seen by Shiro as dependent on an 
LDAP schema that is specific to each organization - an extension of 
JndiLdapRealm is generally used to override and implement the 
doGetAuthorizationInfo method. In this particular release we provide a simple 
authorization provider that 
can be used along with the Shiro authentication 
provider.</p><p><strong>main.ldapRealm.userDnTemplate</strong> - in order to 
bind a simple username to an LDAP server that generally requires a full 
distinguished name (DN), we must provide the template into which the simple 
username will be inserted. This template allows for the creation of a DN by 
injecting the simple username into the common name (CN) portion of the DN. 
<strong>This element will need to be customized to reflect your deployment 
environment.</strong> The template provided in the sample is only an example 
and is valid only within the LDAP schema distributed with Knox and is 
represented by the users.ldif file in the <code>{GATEWAY_HOME}/conf</code> 
directory.</p><p><strong>main.ldapRealm.contextFactory.url</strong> - this 
element is the URL that represents the host and port of the LDAP server. It also 
includes the scheme of the protocol to use. This may be either ldap or ldaps 
depending on whether you are communicating with the LDAP server over SSL (highly 
recommended). <strong>This element will need to be customized to reflect your 
deployment 
environment.</strong></p><p><strong>main.ldapRealm.contextFactory.authenticationMechanism</strong>
 - this element indicates the type of authentication that should be performed 
against the LDAP server. The current default value is <code>simple</code> which 
indicates a simple bind operation. This element should not need to be modified 
and no mechanism other than a simple bind has been tested for this particular 
release.</p><p><strong>urls./</strong>** - this element represents a single 
URL_Ant_Path_Expression; its value is the Shiro filter chain to apply to that 
path. This particular sample indicates that all paths into the application have the same 
Shiro filter chain applied. The paths are relative to the application context 
path. The use of the value <code>authcBasic</code> here indicates that BASIC 
authentication is expected for every path into the application. Adding an 
additional Shiro filter to that chain for validating that the request 
isSecure() and over SSL can be achieved by changing the value to <code>ssl, 
authcBasic</code>. It is not likely that you need to change this element for 
your environment.</p><h4><a id="Active+Directory+-+Special+Note">Active 
Directory - Special Note</a> <a href="#Active+Directory+-+Special+Note"><img 
src="markbook-section-link.png"/></a></h4><p>You would use LDAP configuration 
as documented above to authenticate against Active Directory as 
well.</p><p>Some Active Directory specific things to keep in 
mind:</p><p>Typical AD main.ldapRealm.userDnTemplate value looks slightly 
different, such as</p>
+<pre><code>cn={0},cn=users,DC=lab,DC=sample,dc=com
+</code></pre><p>Please compare this with a typical Apache DS 
main.ldapRealm.userDnTemplate value and make note of the difference:</p>
+<pre><code>uid={0},ou=people,dc=hadoop,dc=apache,dc=org
+</code></pre><p>If your AD is configured to authenticate based on just the cn 
and password and does not require the user DN, you do not have to specify a 
value for main.ldapRealm.userDnTemplate.</p><h4><a 
id="LDAP+over+SSL+(LDAPS)+Configuration">LDAP over SSL (LDAPS) 
Configuration</a> <a href="#LDAP+over+SSL+(LDAPS)+Configuration"><img 
src="markbook-section-link.png"/></a></h4><p>In order to communicate with your 
LDAP server over SSL (again, highly recommended), you will need to modify the 
topology file in a couple ways and possibly provision some keying material.</p>
 <ol>
   <li><strong>main.ldapRealm.contextFactory.url</strong> must be changed to 
have the <code>ldaps</code> protocol scheme and the port must be the SSL 
listener port on your LDAP server.</li>
   <li>Identity certificate (keypair) provisioned to LDAP server - your LDAP 
server specific documentation should indicate what is required for providing a 
cert or keypair to represent the LDAP server identity to connecting 
clients.</li>
   <li>Trusting the LDAP Server&rsquo;s public key - if the LDAP Server&rsquo;s 
identity certificate is issued by a well known and trusted certificate 
authority and is already represented in the JRE&rsquo;s cacerts truststore then 
you don&rsquo;t need to do anything for trusting the LDAP server&rsquo;s cert. 
If, however, the cert is self-signed or issued by an untrusted authority, you 
will need to either add it to the cacerts keystore or to another truststore 
that you may direct Knox to utilize through a system property.</li>
-</ol><h4><a id="Session+Configuration">Session Configuration</a> <a 
href="#Session+Configuration"><img 
src="markbook-section-link.png"/></a></h4><p>Knox maps each cluster topology to 
a web application and leverages standard JavaEE session management.</p><p>To 
configure session idle timeout for the topology, please specify value of 
parameter sessionTimeout for ShiroProvider in your topology file. If you do not 
specify the value for this parameter, it defaults to 30minutes.</p><p>The 
definition would look like the following in the topoloogy file:</p>
+</ol><h4><a id="Session+Configuration">Session Configuration</a> <a 
href="#Session+Configuration"><img 
src="markbook-section-link.png"/></a></h4><p>Knox maps each cluster topology to 
a web application and leverages standard JavaEE session management.</p><p>To 
configure the session idle timeout for a topology, specify a value for the 
sessionTimeout parameter of the ShiroProvider in your topology file. If you do 
not specify a value for this parameter, it defaults to 30 minutes.</p><p>The 
definition would look like the following in the topology file:</p>
 <pre><code>...
 &lt;provider&gt;
     &lt;role&gt;authentication&lt;/role&gt;
@@ -1041,221 +1015,263 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
     &lt;/param&gt;
 &lt;/provider&gt;
 ...
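 &lt;!-- Sketch: the sessionTimeout discussed above is declared as a
      ShiroProvider param; 30 (minutes) matches the documented default. --&gt;
 &lt;param&gt;
     &lt;name&gt;sessionTimeout&lt;/name&gt;
     &lt;value&gt;30&lt;/value&gt;
 &lt;/param&gt;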

[... 2290 lines stripped ...]
Modified: knox/site/index.html
URL: 
http://svn.apache.org/viewvc/knox/site/index.html?rev=1724836&r1=1724835&r2=1724836&view=diff
==============================================================================
--- knox/site/index.html (original)
+++ knox/site/index.html Fri Jan 15 15:24:45 2016
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at 2016-01-14
+ | Generated by Apache Maven Doxia at 2016-01-15
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20160114" />
+    <meta name="Date-Revision-yyyymmdd" content="20160115" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Knox Gateway &#x2013; REST API Gateway for the Hadoop 
Ecosystem</title>
     <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
@@ -58,7 +58,7 @@
               
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 
2016-01-14</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 
2016-01-15</li> 
             
                             </ul>
       </div>


