Author: kminder
Date: Tue Sep  8 13:32:22 2015
New Revision: 1701803

URL: http://svn.apache.org/r1701803
Log:
Fix mailing list link for commits.

Modified:
    knox/site/books/knox-0-7-0/dev-guide.html
    knox/site/books/knox-0-7-0/user-guide.html
    knox/site/index.html
    knox/site/issue-tracking.html
    knox/site/license.html
    knox/site/mail-lists.html
    knox/site/project-info.html
    knox/site/team-list.html
    knox/trunk/pom.xml

Modified: knox/site/books/knox-0-7-0/dev-guide.html
URL: 
http://svn.apache.org/viewvc/knox/site/books/knox-0-7-0/dev-guide.html?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/site/books/knox-0-7-0/dev-guide.html (original)
+++ knox/site/books/knox-0-7-0/dev-guide.html Tue Sep  8 13:32:22 2015
@@ -365,7 +365,7 @@ public void testDevGuideSample() throws
 
   assertThat( match.getValue(), is( "fake-chain") );
 }
-</code></pre><h2><a id="Extension+Logistics"></a>Extension 
Logistics</h2><p>There are a number of extension points available in the 
gateway: services, providers, rewrite steps and functions, etc. All of these 
use the Java ServiceLoader mechanism for their discovery. There are two ways to 
make these extensions available on the class path at runtime. The first way to 
to add a new module to the project and have the extension 
&ldquo;built-in&rdquo;. The second is to add the extension to the class path of 
the server after it is installed. Both mechanism are described in more detail 
below.</p><h3><a id="Service+Loaders"></a>Service Loaders</h3><p>Extensions are 
discovered via Java&rsquo;s [Service 
Loader|http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html] 
mechanism. There are good 
[tutorials|http://docs.oracle.com/javase/tutorial/ext/basics/spi.html] 
available for learning more about this. The basics come town to two things.</p>
+</code></pre><h2><a id="Extension+Logistics"></a>Extension 
Logistics</h2><p>There are a number of extension points available in the 
gateway: services, providers, rewrite steps and functions, etc. All of these 
use the Java ServiceLoader mechanism for their discovery. There are two ways to 
make these extensions available on the class path at runtime. The first way is 
to add a new module to the project and have the extension 
&ldquo;built-in&rdquo;. The second is to add the extension to the class path of 
the server after it is installed. Both mechanisms are described in more detail 
below.</p><h3><a 
id="Service+Loaders"></a>Service Loaders</h3><p>Extensions are discovered via 
Java&rsquo;s <a 
href="http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html">Service 
Loader</a> mechanism. There are good <a 
href="http://docs.oracle.com/javase/tutorial/ext/basics/spi.html">tutorials</a> 
available for learning more about this. The basics come down to two things.</p>
 <ol>
   <li><p>Implement the service contract interface (e.g. 
ServiceDeploymentContributor, ProviderDeploymentContributor)</p></li>
   <li><p>Create a file in META-INF/services of the JAR that will contain the 
extension. This file will be named as the fully qualified name of the contract 
interface (e.g. 
org.apache.hadoop.gateway.deploy.ProviderDeploymentContributor). The contents 
of the file will be the fully qualified names of any implementation of that 
contract interface in that JAR.</p></li>
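
For illustration, the following is a minimal sketch of the discovery side of 
that contract, using the standard java.util.ServiceLoader pattern; the class 
name and the printing loop are illustrative only and are not taken from the 
gateway code base.

    import java.util.ServiceLoader;
    import org.apache.hadoop.gateway.deploy.ProviderDeploymentContributor;

    public class ContributorDiscoverySketch {
      public static void main( String[] args ) {
        // ServiceLoader reads every META-INF/services file named for the
        // contract interface on the class path and instantiates each
        // implementation listed inside it.
        ServiceLoader<ProviderDeploymentContributor> loader =
            ServiceLoader.load( ProviderDeploymentContributor.class );
        for( ProviderDeploymentContributor contributor : loader ) {
          System.out.println( "Found contributor: " + contributor.getClass().getName() );
        }
      }
    }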

Modified: knox/site/books/knox-0-7-0/user-guide.html
URL: 
http://svn.apache.org/viewvc/knox/site/books/knox-0-7-0/user-guide.html?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/site/books/knox-0-7-0/user-guide.html (original)
+++ knox/site/books/knox-0-7-0/user-guide.html Tue Sep  8 13:32:22 2015
@@ -131,7 +131,7 @@ Server: Jetty(6.1.26)
     
&#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=CREATE&#39;
 
 curl -i -k -u guest:guest-password -T LICENSE -X PUT \
-    &#39;{Value of Location header from response response above}&#39;
+    &#39;{Value of Location header from the response above}&#39;
 </code></pre><h4><a id="Get+a+file+in+HDFS+via+Knox."></a>Get a file in HDFS 
via Knox.</h4>
 <pre><code>curl -i -k -u guest:guest-password -X GET \
     
&#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=OPEN&#39;
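
For reference, the read shown above can also be driven from plain Java. The 
sketch below uses only JDK classes, hard-codes the sandbox guest credentials 
from the curl examples, and assumes the gateway's SSL certificate is already 
trusted by the JVM (the -k flag in the curl examples bypasses that check); it 
is an illustration, not part of the Knox client libraries.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.xml.bind.DatatypeConverter;

    public class WebHdfsOpenSketch {
      public static void main( String[] args ) throws Exception {
        // Equivalent of: curl -u guest:guest-password '.../webhdfs/v1/tmp/LICENSE?op=OPEN'
        String credentials = DatatypeConverter.printBase64Binary(
            "guest:guest-password".getBytes( "UTF-8" ) );
        URL url = new URL(
            "https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=OPEN" );
        HttpURLConnection connection = (HttpURLConnection)url.openConnection();
        connection.setRequestProperty( "Authorization", "Basic " + credentials );
        InputStream in = connection.getInputStream();
        int b;
        while( ( b = in.read() ) != -1 ) {
          System.out.write( b );
        }
        in.close();
        System.out.flush();
      }
    }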
@@ -314,7 +314,7 @@ curl -i -k -u guest:guest-password -X GE
   <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
   <li>The default.xml topology file can be copied to sandbox.xml in order to 
satisfy the topology name assumption in the samples.</li>
   <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</p></li>
-</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway"></a>Steps for a 
Manually Installed Knox Gateway</h4><p>For manually installed Knox instances, 
there is really no way for the installer to know how to configure the topology 
file for you.</p><p>Essentially, these steps are identical to the Amabari 
deployed instance except that #3 should be replaced with the configuration of 
the ootb sandbox.xml to point the configuration at the proper hosts and 
ports.</p>
+</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway"></a>Steps for a 
Manually Installed Knox Gateway</h4><p>For manually installed Knox instances, 
there is really no way for the installer to know how to configure the topology 
file for you.</p><p>Essentially, these steps are identical to the Ambari 
deployed instance except that #3 should be replaced with the configuration of 
the ootb sandbox.xml to point the configuration at the proper hosts and 
ports.</p>
 <ol>
   <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
   <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
@@ -324,7 +324,7 @@ curl -i -k -u guest:guest-password -X GE
 <ul>
   <li>How URLs are mapped between a gateway that services multiple Hadoop 
clusters and the clusters themselves</li>
   <li>How the gateway is configured through gateway-site.xml and cluster 
specific topology files</li>
-  <li>How to configure the various policy enfocement provider features such as 
authentication, authorization, auditing, hostmapping, etc.</li>
+  <li>How to configure the various policy enforcement provider features such 
as authentication, authorization, auditing, hostmapping, etc.</li>
 </ul><h3><a id="URL+Mapping"></a>URL Mapping</h3><p>The gateway functions much 
like a reverse proxy. As such, it maintains a mapping of URLs that are exposed 
externally by the gateway to URLs that are provided by the Hadoop 
cluster.</p><h4><a id="Default+Topology+URLs"></a>Default Topology 
URLs</h4><p>In order to provide compatibility with the Hadoop java client and 
existing CLI tools, the Knox Gateway has provided a feature called the Default 
Topology. This refers to a topology deployment that will be able to route URLs 
without the additional context that the gateway uses for differentiating one 
Hadoop cluster from another. This allows the URLs to match those used by 
existing clients that may access webhdfs through the Hadoop file system 
abstraction.</p><p>When a topology file is deployed with a file name that 
matches the configured default topology name, a specialized mapping for URLs is 
installed for that particular topology. This allows the URLs that are expected 
by the existing Hadoop CLIs for webhdfs to be used in interacting with the specific 
Hadoop cluster that is represented by the default topology file.</p><p>The 
configuration for the default topology name is found in gateway-site.xml as a 
property called: &ldquo;default.app.topology.name&rdquo;.</p><p>The default 
value for this property is &ldquo;sandbox&rdquo;.</p><p>Therefore, when 
deploying the sandbox.xml topology, both of the following example URLs work for 
the same underlying Hadoop cluster:</p>
 <pre><code>https://{gateway-host}:{gateway-port}/webhdfs
 https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/webhdfs
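
As a hedged illustration of what the first, shorter form enables, the Hadoop 
file system client can be pointed directly at the gateway; the swebhdfs scheme, 
host, and port below are assumptions for a typical SSL-enabled sandbox, and 
whether the call succeeds still depends on the topology's authentication 
provider accepting the Hadoop client's authentication scheme.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DefaultTopologySketch {
      public static void main( String[] args ) throws Exception {
        Configuration conf = new Configuration();
        // swebhdfs:// is WebHDFS over SSL; thanks to the default topology the
        // URL needs no {gateway-path}/{cluster-name} prefix.
        FileSystem fs = FileSystem.get( URI.create( "swebhdfs://localhost:8443/" ), conf );
        for( FileStatus status : fs.listStatus( new Path( "/tmp" ) ) ) {
          System.out.println( status.getPath() );
        }
      }
    }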
@@ -427,7 +427,7 @@ https://{gateway-host}:{gateway-port}/{g
     &lt;/param&gt;
 &lt;/provider&gt;
 </code></pre>
-<dl><dt>/topology/gateway/provider</dt><dd>Groups information for a specific 
provider.</dd><dt>/topology/gateway/provider/role</dt><dd>Defines the role of a 
particular provider. There are a number of pre-defined roles used by 
out-of-the-box provider plugins for the gateay. These roles are: 
authentication, identity-assertion, authentication, rewrite and 
hostmap</dd><dt>/topology/gateway/provider/name</dt><dd>Defines the name of the 
provider for which this configuration applies. There can be multiple provider 
implementations for a given role. Specifying the name is used identify which 
particular provider is being configured. Typically each topology descriptor 
should contain only one provider for each role but there are 
exceptions.</dd><dt>/topology/gateway/provider/enabled</dt><dd>Allows a 
particular provider to be enabled or disabled via <code>true</code> or 
<code>false</code> respectively. When a provider is disabled any filters 
associated with that provider are excluded from the pr
 ocessing chain.</dd><dt>/topology/gateway/provider/param</dt><dd>These 
elements are used to supply provider configuration. There can be zero or more 
of these per 
provider.</dd><dt>/topology/gateway/provider/param/name</dt><dd>The name of a 
parameter to pass to the 
provider.</dd><dt>/topology/gateway/provider/param/value</dt><dd>The value of a 
parameter to pass to the provider.</dd>
+<dl><dt>/topology/gateway/provider</dt><dd>Groups information for a specific 
provider.</dd><dt>/topology/gateway/provider/role</dt><dd>Defines the role of a 
particular provider. There are a number of pre-defined roles used by 
out-of-the-box provider plugins for the gateway. These roles are: 
authentication, identity-assertion, authorization, rewrite and 
hostmap</dd><dt>/topology/gateway/provider/name</dt><dd>Defines the name of the 
provider for which this configuration applies. There can be multiple provider 
implementations for a given role. Specifying the name identifies which 
particular provider is being configured. Typically each topology descriptor 
should contain only one provider for each role but there are 
exceptions.</dd><dt>/topology/gateway/provider/enabled</dt><dd>Allows a 
particular provider to be enabled or disabled via <code>true</code> or 
<code>false</code> respectively. When a provider is disabled any filters 
associated with that provider are excluded from the processing 
chain.</dd><dt>/topology/gateway/provider/param</dt><dd>These 
elements are used to supply provider configuration. There can be zero or more 
of these per 
provider.</dd><dt>/topology/gateway/provider/param/name</dt><dd>The name of a 
parameter to pass to the 
provider.</dd><dt>/topology/gateway/provider/param/value</dt><dd>The value of a 
parameter to pass to the provider.</dd>
 </dl><h5><a id="Service+Configuration"></a>Service 
Configuration</h5><p>Service configuration is used to specify the location of 
services within the Hadoop cluster. The general outline of a service element 
looks like this.</p>
 <pre><code>&lt;service&gt;
     &lt;role&gt;WEBHDFS&lt;/role&gt;
@@ -449,7 +449,7 @@ https://{gateway-host}:{gateway-port}/{g
     &lt;/gateway&gt;
     ...
 &lt;/topology&gt;
-</code></pre><p>This mapping is required because the Hadoop servies running 
within the cluster are unaware that they are being accessed from outside the 
cluster. Therefore URLs returned as part of REST API responses will typically 
contain internal host names. Since clients outside the cluster will be unable 
to resolve those host name they must be mapped to external host 
names.</p><h5><a id="Hostmap+Provider+Example+-+EC2"></a>Hostmap Provider 
Example - EC2</h5><p>Consider an EC2 example where two VMs have been allocated. 
Each VM has an external host name by which it can be accessed via the internet. 
However the EC2 VM is unaware of this external host name and instead is 
configured with the internal host name.</p>
+</code></pre><p>This mapping is required because the Hadoop services running 
within the cluster are unaware that they are being accessed from outside the 
cluster. Therefore URLs returned as part of REST API responses will typically 
contain internal host names. Since clients outside the cluster will be unable 
to resolve those host names, they must be mapped to external host 
names.</p><h5><a id="Hostmap+Provider+Example+-+EC2"></a>Hostmap Provider 
Example - EC2</h5><p>Consider an EC2 example where two VMs have been allocated. 
Each VM has an external host name by which it can be accessed via the internet. 
However the EC2 VM is unaware of this external host name and instead is 
configured with the internal host name.</p>
 <pre><code>External HOSTNAMES:
 ec2-23-22-31-165.compute-1.amazonaws.com
 ec2-23-23-25-10.compute-1.amazonaws.com
@@ -494,7 +494,7 @@ ip-10-39-107-209.ec2.internal
 &lt;/topology&gt;
 </code></pre><h5><a id="Hostmap+Provider+Configuration"></a>Hostmap Provider 
Configuration</h5><p>Details about each provider configuration element are 
 enumerated below.</p>
 <dl><dt>topology/gateway/provider/role</dt><dd>The role for a Hostmap provider 
must always be 
<code>hostmap</code>.</dd><dt>topology/gateway/provider/name</dt><dd>The 
Hostmap provider supplied out-of-the-box is selected via the name 
<code>static</code>.</dd><dt>topology/gateway/provider/enabled</dt><dd>Host 
mapping can be enabled or disabled by providing <code>true</code> or 
<code>false</code>.</dd><dt>topology/gateway/provider/param</dt><dd>Host 
mapping is configured by providing parameters for each external to internal 
mapping.</dd><dt>topology/gateway/provider/param/name</dt><dd>The parameter 
names represent the external host names associated with the internal host names 
provided by the value element. This can be a comma separated list of host names 
that all represent the same physical host. When mapping from internal to 
external host names, the first external host name in the list is 
used.</dd><dt>topology/gateway/provider/param/value</dt><dd>The parameter 
 values represent the internal host names associated with the external host 
names provided by the name 
element. This can be a comma separated list of host names that all represent 
the same physical host. When mapping from external to internal host names, the 
first internal host name in the list is used.</dd>
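
Because the first-name-wins rule above is easy to misread, here is a small 
hypothetical sketch of the described lookup semantics; it restates the 
comma-separated list behavior as code and is not the gateway's actual hostmap 
implementation. The pairing of the example EC2 names is illustrative.

    import java.util.Arrays;
    import java.util.List;

    public class HostmapSemanticsSketch {
      // One hostmap <param>: name = external host names, value = internal host names.
      static List<String> external = Arrays.asList( "ec2-23-22-31-165.compute-1.amazonaws.com" );
      static List<String> internal = Arrays.asList( "ip-10-39-107-209.ec2.internal" );

      // External to internal: the first internal host name in the value list is used.
      static String toInternal( String host ) {
        return external.contains( host ) ? internal.get( 0 ) : host;
      }

      // Internal to external: the first external host name in the name list is used.
      static String toExternal( String host ) {
        return internal.contains( host ) ? external.get( 0 ) : host;
      }

      public static void main( String[] args ) {
        System.out.println( toInternal( "ec2-23-22-31-165.compute-1.amazonaws.com" ) );
        System.out.println( toExternal( "ip-10-39-107-209.ec2.internal" ) );
      }
    }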
-</dl><h4><a id="Logging"></a>Logging</h4><p>If necessary you can enable 
additional logging by editing the <code>log4j.properties</code> file in the 
<code>conf</code> directory. Changing the rootLogger value from 
<code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug 
logging. A number of useful, more fine loggers are also provided in the 
file.</p><h4><a id="Java+VM+Options"></a>Java VM Options</h4><p>TODO - Java VM 
options doc.</p><h4><a id="Persisting+the+Master+Secret"></a>Persisting the 
Master Secret</h4><p>The master secret is required to start the server. This 
secret is used to access secured artifacts by the gateway instance. Keystore, 
trust stores and credential stores are all protected with the master 
secret.</p><p>You may persist the master secret by supplying the 
<em>-persist-master</em> switch at startup. This will result in a warning 
indicating that persisting the secret is less secure than providing it at 
startup. We do make some provisions in ord
 er to protect the persisted password.</p><p>It is encrypted with AES 128 bit 
encryption and where possible the file permissions are set to only be 
accessible by the user that the gateway is running as.</p><p>After persisting 
the secret, ensure that the file at config/security/master has the appropriate 
permissions set for your environment. This is probably the most important layer 
of defense for master secret. Do not assume that the encryption if sufficient 
protection.</p><p>A specific user should be created to run the gateway this 
user will be the only user with permissions for the persisted master 
file.</p><p>See the Knox CLI section for descriptions of the command line 
utilties related to the master secret.</p><h4><a 
id="Management+of+Security+Artifacts"></a>Management of Security 
Artifacts</h4><p>There are a number of artifacts that are used by the gateway 
in ensuring the security of wire level communications, access to protected 
resources and the encryption of sensitive data. T
 hese artifacts can be managed from outside of the gateway instances or 
generated and populated by the gateway instance itself.</p><p>The following is 
a description of how this is coordinated with both standalone (development, 
demo, etc) gateway instances and instances as part of a cluster of gateways in 
mind.</p><p>Upon start of the gateway server we:</p>
+</dl><h4><a id="Logging"></a>Logging</h4><p>If necessary you can enable 
additional logging by editing the <code>log4j.properties</code> file in the 
<code>conf</code> directory. Changing the rootLogger value from 
<code>ERROR</code> to <code>DEBUG</code> will generate a large amount of debug 
logging. A number of useful, finer-grained loggers are also provided in the 
file.</p><h4><a id="Java+VM+Options"></a>Java VM Options</h4><p>TODO - Java VM 
options doc.</p><h4><a id="Persisting+the+Master+Secret"></a>Persisting the 
Master Secret</h4><p>The master secret is required to start the server. This 
secret is used to access secured artifacts by the gateway instance. Keystore, 
trust stores and credential stores are all protected with the master 
secret.</p><p>You may persist the master secret by supplying the 
<em>-persist-master</em> switch at startup. This will result in a warning 
indicating that persisting the secret is less secure than providing it at 
startup. We do make some provisions in order to protect the persisted 
password.</p><p>It is encrypted with AES 128 bit 
encryption and where possible the file permissions are set to only be 
accessible by the user that the gateway is running as.</p><p>After persisting 
the secret, ensure that the file at config/security/master has the appropriate 
permissions set for your environment. This is probably the most important layer 
of defense for the master secret. Do not assume that the encryption is 
sufficient protection.</p><p>A specific user should be created to run the 
gateway; this user will be the only user with permissions for the persisted master 
file.</p><p>See the Knox CLI section for descriptions of the command line 
utilities related to the master secret.</p><h4><a 
id="Management+of+Security+Artifacts"></a>Management of Security 
Artifacts</h4><p>There are a number of artifacts that are used by the gateway 
in ensuring the security of wire level communications, access to protected 
resources and the encryption of sensitive data. 
 These artifacts can be managed from outside of the gateway instances or 
generated and populated by the gateway instance itself.</p><p>The following is 
a description of how this is coordinated with both standalone (development, 
demo, etc) gateway instances and instances as part of a cluster of gateways in 
mind.</p><p>Upon start of the gateway server we:</p>
 <ol>
   <li>Look for an identity store at 
<code>data/security/keystores/gateway.jks</code>.  The identity store contains 
the certificate and private key used to represent the identity of the server 
for SSL connections and signature creation.
   <ul>
@@ -518,7 +518,7 @@ ip-10-39-107-209.ec2.internal
   <li>Using a single gateway instance as a master instance the artifacts can 
be generated or placed into the expected location and then replicated across 
all of the slave instances before startup.</li>
   <li>Using an NFS mount as a central location for the artifacts would provide 
a single source of truth without the need to replicate them over the network. 
Of course, NFS mounts have their own challenges.</li>
   <li>Using the KnoxCLI to create and manage the security artifacts.</li>
-</ol><p>See the Knox CLI section for descriptions of the command line utilties 
related to the security artifact management.</p><h4><a 
id="Keystores"></a>Keystores</h4><p>In order to provide your own certificate 
for use by the gateway, you will need to either import an existing key pair 
into a Java keystore or generate a self-signed cert using the Java 
keytool.</p><h5><a id="Importing+a+key+pair+into+a+Java+keystore"></a>Importing 
a key pair into a Java keystore</h5><p>One way to accomplish this is to start 
with a PKCS12 store for your key pair and then convert it to a Java keystore or 
JKS.</p><p>The following example uses openssl to create a PKCS12 encoded store 
from your provided certificate and private key that are in PEM format.</p>
+</ol><p>See the Knox CLI section for descriptions of the command line 
utilities related to security artifact management.</p><h4><a 
id="Keystores"></a>Keystores</h4><p>In order to provide your own certificate 
for use by the gateway, you will need to either import an existing key pair 
into a Java keystore or generate a self-signed cert using the Java 
keytool.</p><h5><a id="Importing+a+key+pair+into+a+Java+keystore"></a>Importing 
a key pair into a Java keystore</h5><p>One way to accomplish this is to start 
with a PKCS12 store for your key pair and then convert it to a Java keystore or 
JKS.</p><p>The following example uses openssl to create a PKCS12 encoded store 
from your provided certificate and private key that are in PEM format.</p>
 <pre><code>openssl pkcs12 -export -in cert.pem -inkey key.pem &gt; server.p12
 </code></pre><p>The next example converts the PKCS12 store into a Java 
keystore (JKS). It should prompt you for the keystore and key passwords for the 
destination keystore. You must use the master-secret for the keystore password 
and keep track of the password that you use for the key passphrase.</p>
 <pre><code>keytool -importkeystore -srckeystore {server.p12} -destkeystore 
gateway.jks -srcstoretype pkcs12
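
To sanity-check the conversion, the following sketch loads the resulting 
keystore with the JDK KeyStore API and lists its aliases; the file name and the 
{master-secret} placeholder mirror the keytool example above and are 
assumptions about your local setup. A successful import should list the key 
pair's alias as a key entry.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.util.Enumeration;

    public class KeystoreCheckSketch {
      public static void main( String[] args ) throws Exception {
        KeyStore keystore = KeyStore.getInstance( "JKS" );
        FileInputStream in = new FileInputStream( "gateway.jks" );
        // The keystore password must be the Knox master secret.
        keystore.load( in, "{master-secret}".toCharArray() );
        in.close();
        Enumeration<String> aliases = keystore.aliases();
        while( aliases.hasMoreElements() ) {
          String alias = aliases.nextElement();
          System.out.println( alias + " (key entry: " + keystore.isKeyEntry( alias ) + ")" );
        }
      }
    }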
@@ -532,7 +532,7 @@ ip-10-39-107-209.ec2.internal
 </code></pre><h5><a 
id="Generating+a+self-signed+cert+for+use+in+testing+or+development+environments"></a>Generating
 a self-signed cert for use in testing or development environments</h5>
 <pre><code>keytool -genkey -keyalg RSA -alias gateway-identity -keystore 
gateway.jks \
     -storepass {master-secret} -validity 360 -keysize 2048
-</code></pre><p>Keytool will prompt you for a number of elements used will 
comprise the distiniguished name (DN) within your certificate. 
</p><p><em>NOTE:</em> When it prompts you for your First and Last name be sure 
to type in the hostname of the machine that your gateway instance will be 
running on. This is used by clients during hostname verification to ensure that 
the presented certificate matches the hostname that was used in the URL for the 
connection - so they need to match.</p><p><em>NOTE:</em> When it prompts for 
the key password just press enter to ensure that it is the same as the keystore 
password. Which, as was described earlier, must match the master secret for the 
gateway instance. Alternatively, you can set it to another passphrase - take 
note of it and set the gateway-identity-passphrase alias to that passphrase 
using the Knox CLI.</p><p>See the Knox CLI section for descriptions of the 
command line utilties related to the management of the keystores.</p><h5><a 
id="U
 sing+a+CA+Signed+Key+Pair"></a>Using a CA Signed Key Pair</h5><p>For certain 
deployments a certificate key pair that is signed by a trusted certificate 
authority is required. There are a number of different ways in which these 
certificates are acquired and can be converted and imported into the Apache 
Knox keystore.</p><p>The following steps have been used to do this and are 
provided here for guidance in your installation. You may have to adjust 
according to your environment.</p><p>General steps:</p>
</code></pre><p>Keytool will prompt you for a number of elements that will 
comprise the distinguished name (DN) within your certificate. 
</p><p><em>NOTE:</em> When it prompts you for your First and Last name be sure 
to type in the hostname of the machine that your gateway instance will be 
running on. This is used by clients during hostname verification to ensure that 
the presented certificate matches the hostname that was used in the URL for the 
connection - so they need to match.</p><p><em>NOTE:</em> When it prompts for 
the key password just press enter to ensure that it is the same as the keystore 
password, which, as described earlier, must match the master secret for the 
gateway instance. Alternatively, you can set it to another passphrase - take 
note of it and set the gateway-identity-passphrase alias to that passphrase 
using the Knox CLI.</p><p>See the Knox CLI section for descriptions of the 
command line utilities related to the management of the keystores.</p><h5><a 
id="U
 sing+a+CA+Signed+Key+Pair"></a>Using a CA Signed Key Pair</h5><p>For certain 
deployments a certificate key pair that is signed by a trusted certificate 
authority is required. There are a number of different ways in which these 
certificates are acquired and can be converted and imported into the Apache 
Knox keystore.</p><p>The following steps have been used to do this and are 
provided here for guidance in your installation. You may have to adjust 
according to your environment.</p><p>General steps:</p>
 <ol>
   <li>stop gateway and back up all files in 
/var/lib/knox/data/security/keystores<br/>gateway.sh stop</li>
   <li>create new master key for knox and persist, the master key will be 
referred to in following steps as $master-key<br/>knoxcli.sh create-master 
-force</li>
@@ -563,13 +563,13 @@ ip-10-39-107-209.ec2.internal
   <ul>
    <li>curl --cacert supwin12ad.cer -u hdptester:hadoop -X GET &lsquo;<a 
href="https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS";>https://$fqdn_knox:8443/gateway/$topologyname/webhdfs/v1/tmp?op=LISTSTATUS</a>&rsquo;
 or verify through a client browser that already has the corporate CA cert 
installed.</li>
   </ul></li>
-</ol><h5><a id="Credential+Store"></a>Credential Store</h5><p>Whenever you 
provide your own keystore with either a self-signed cert or an issued 
certificate signed by a trusted authority, you will need to set an alias for 
the gateway-identity-passphrase or create an empty credential store. This is 
necessary for the current release in order for the system to determine the 
correct password for the keystore and the key.</p><p>The credential stores in 
Knox use the JCEKS keystore type as it allows for the storage of general 
secrets in addition to certificates.</p><p>Keytool may be used to create 
credential stores but the Knox CLI section details how to create aliases. These 
aliases are managed within credential stores which are created by the CLI as 
needed. The simplest approach is to create the gateway-identity-passpharse 
alias with the Knox CLI. This will create the credential store if it 
doesn&rsquo;t already exist and add the key passphrase.</p><p>See the Knox CLI 
section for descrip
 tions of the command line utilties related to the management of the credential 
stores.</p><h5><a id="Provisioning+of+Keystores"></a>Provisioning of 
Keystores</h5><p>Once you have created these keystores you must move them into 
place for the gateway to discover them and use them to represent its identity 
for SSL connections. This is done by copying the keystores to the 
<code>{GATEWAY_HOME}/data/security/keystores</code> directory for your gateway 
install.</p><h4><a id="Summary+of+Secrets+to+be+Managed"></a>Summary of Secrets 
to be Managed</h4>
+</ol><h5><a id="Credential+Store"></a>Credential Store</h5><p>Whenever you 
provide your own keystore with either a self-signed cert or an issued 
certificate signed by a trusted authority, you will need to set an alias for 
the gateway-identity-passphrase or create an empty credential store. This is 
necessary for the current release in order for the system to determine the 
correct password for the keystore and the key.</p><p>The credential stores in 
Knox use the JCEKS keystore type as it allows for the storage of general 
secrets in addition to certificates.</p><p>Keytool may be used to create 
credential stores but the Knox CLI section details how to create aliases. These 
aliases are managed within credential stores which are created by the CLI as 
needed. The simplest approach is to create the gateway-identity-passphrase 
alias with the Knox CLI. This will create the credential store if it 
doesn&rsquo;t already exist and add the key passphrase.</p><p>See the Knox CLI 
section for descriptions of the command line utilities related to the 
management of the 
credential stores.</p><h5><a id="Provisioning+of+Keystores"></a>Provisioning of 
Keystores</h5><p>Once you have created these keystores you must move them into 
place for the gateway to discover them and use them to represent its identity 
for SSL connections. This is done by copying the keystores to the 
<code>{GATEWAY_HOME}/data/security/keystores</code> directory for your gateway 
install.</p><h4><a id="Summary+of+Secrets+to+be+Managed"></a>Summary of Secrets 
to be Managed</h4>
 <ol>
   <li>Master secret - the same for all gateway instances in a cluster of 
gateways</li>
   <li>All security related artifacts are protected with the master secret</li>
   <li>Secrets used by the gateway itself are stored within the gateway 
credential store and are the same across all gateway instances in the cluster 
of gateways</li>
   <li>Secrets used by providers within cluster topologies are stored in 
topology specific credential stores and are the same for the same topology 
across the cluster of gateway instances.  However, they are specific to the 
topology - so secrets for one hadoop cluster are different from those of 
another.  This allows for fail-over from one gateway instance to another even 
when encryption is being used while not allowing the compromise of one 
encryption key to expose the data for all clusters.</li>
-</ol><p>NOTE: the SSL certificate will need special consideration depending on 
the type of certificate. Wildcard certs may be able to be shared across all 
gateway instances in a cluster. When certs are dedicated to specific machines 
the gateway identity store will not be able to be blindly replicated as host 
name verification problems will ensue. Obviously, trust-stores will need to be 
taken into account as well.</p><h3><a id="Knox+CLI"></a>Knox CLI</h3><p>The 
Knox CLI is a command line utility for management of various aspects of the 
Knox deployment. It is primarily concerned with the management of the security 
artifacts for the gateway instance and each of the deployed topologies or 
hadoop clusters that are gated by the Knox Gateway instance.</p><p>The various 
security artifacts are also generated and populated automatically by the Knox 
Gateway runtime when they are not found at startup. The assumptions made in 
those cases are appropriate for a test or development gateway instance
  and assume &lsquo;localhost&rsquo; for hostname specific activities. For 
production deployments the use of the CLI may aid in managing some production 
deployments.</p><p>The knoxcli.sh script is located in the {GATEWAY_HOME}/bin 
directory.</p><h4><a id="Help"></a>Help</h4><h5><a 
id="`bin/knoxcli.sh+[--help]`"></a><code>bin/knoxcli.sh 
[--help]</code></h5><p>prints help for all commands</p><h4><a 
id="Knox+Verison+Info"></a>Knox Verison Info</h4><h5><a 
id="`bin/knoxcli.sh+version+[--help]`"></a><code>bin/knoxcli.sh version 
[--help]</code></h5><p>Displays Knox version information.</p><h4><a 
id="Master+secret+persistence"></a>Master secret persistence</h4><h5><a 
id="`bin/knoxcli.sh+create-master+[--force][--help]`"></a><code>bin/knoxcli.sh 
create-master [--force][--help]</code></h5><p>Creates and persists an encrypted 
master secret in a file within {GATEWAY_HOME}/data/security/master. 
</p><p>NOTE: This command fails when there is an existing master file in the 
expected location. You may
  force it to overwrite the master file with the --force switch. NOTE: this 
will require you to change passwords protecting the keystores for the gateway 
identity keystores and all credential stores.</p><h4><a 
id="Alias+creation"></a>Alias creation</h4><h5><a 
id="`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"></a><code>bin/knoxcli.sh
 create-alias name [--cluster c] [--value v] [--generate] 
[--help]</code></h5><p>Creates a password alias and stores it in a credential 
store within the {GATEWAY_HOME}/data/security/keystores dir. </p>
+</ol><p>NOTE: the SSL certificate will need special consideration depending on 
the type of certificate. Wildcard certs may be able to be shared across all 
gateway instances in a cluster. When certs are dedicated to specific machines 
the gateway identity store will not be able to be blindly replicated as host 
name verification problems will ensue. Obviously, trust-stores will need to be 
taken into account as well.</p><h3><a id="Knox+CLI"></a>Knox CLI</h3><p>The 
Knox CLI is a command line utility for management of various aspects of the 
Knox deployment. It is primarily concerned with the management of the security 
artifacts for the gateway instance and each of the deployed topologies or 
hadoop clusters that are gated by the Knox Gateway instance.</p><p>The various 
security artifacts are also generated and populated automatically by the Knox 
Gateway runtime when they are not found at startup. The assumptions made in 
those cases are appropriate for a test or development gateway instance
  and assume &lsquo;localhost&rsquo; for hostname-specific activities. For 
production deployments the use of the CLI may aid in managing these security 
artifacts.</p><p>The knoxcli.sh script is located in the {GATEWAY_HOME}/bin 
directory.</p><h4><a id="Help"></a>Help</h4><h5><a 
id="`bin/knoxcli.sh+[--help]`"></a><code>bin/knoxcli.sh 
[--help]</code></h5><p>Prints help for all commands.</p><h4><a 
id="Knox+Version+Info"></a>Knox Version Info</h4><h5><a 
id="`bin/knoxcli.sh+version+[--help]`"></a><code>bin/knoxcli.sh version 
[--help]</code></h5><p>Displays Knox version information.</p><h4><a 
id="Master+secret+persistence"></a>Master secret persistence</h4><h5><a 
id="`bin/knoxcli.sh+create-master+[--force][--help]`"></a><code>bin/knoxcli.sh 
create-master [--force][--help]</code></h5><p>Creates and persists an encrypted 
master secret in a file within {GATEWAY_HOME}/data/security/master. 
</p><p>NOTE: This command fails when there is an existing master file in the 
expected location. You may
  force it to overwrite the master file with the --force switch. NOTE: this 
will require you to change passwords protecting the keystores for the gateway 
identity keystores and all credential stores.</p><h4><a 
id="Alias+creation"></a>Alias creation</h4><h5><a 
id="`bin/knoxcli.sh+create-alias+name+[--cluster+c]+[--value+v]+[--generate]+[--help]`"></a><code>bin/knoxcli.sh
 create-alias name [--cluster c] [--value v] [--generate] 
[--help]</code></h5><p>Creates a password alias and stores it in a credential 
store within the {GATEWAY_HOME}/data/security/keystores dir. </p>
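
As a hypothetical companion to create-alias, the sketch below reads an alias 
back out of a JCEKS credential store using only the JDK KeyStore API. The store 
path, the use of the master secret as both store and key password, and the 
raw-bytes secret convention are all assumptions for illustration, not a 
description of the gateway's internal credential store format.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import javax.crypto.SecretKey;

    public class CredentialStoreReadSketch {
      public static void main( String[] args ) throws Exception {
        // JCEKS is used because it can hold general secrets, not just certificates.
        KeyStore store = KeyStore.getInstance( "JCEKS" );
        FileInputStream in = new FileInputStream(
            "data/security/keystores/__gateway-credentials.jceks" );
        store.load( in, "{master-secret}".toCharArray() );
        in.close();
        SecretKey secret = (SecretKey)store.getKey(
            "ldcSystemPassword", "{master-secret}".toCharArray() );
        System.out.println( new String( secret.getEncoded(), "UTF-8" ) );
      }
    }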
 <table>
   <thead>
     <tr>
@@ -886,10 +886,10 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
             &lt;value&gt;authcBasic&lt;/value&gt;
         &lt;/param&gt;
     &lt;/provider&gt;
-</code></pre><p>This happens to be the way that we are currently configuring 
Shiro for BASIC/LDAP authentication. This same config approach may be used to 
achieve other authentication mechanisms or variations on this one. We however 
have not tested additional uses for it for this release.</p><h4><a 
id="LDAP+Configuration"></a>LDAP Configuration</h4><p>This section discusses 
the LDAP configuration used above for the Shiro Provider. Some of these 
configuration elements will need to be customized to reflect your deployment 
environment.</p><p><strong>main.ldapRealm</strong> - this element indicates the 
fully qualified classname of the Shiro realm to be used in authenticating the 
user. The classname provided by default in the sample is the 
<code>org.apache.shiro.realm.ldap.JndiLdapRealm</code> this implementation 
provides us with the ability to authenticate but by default has authorization 
disabled. In order to provide authorization - which is seen by Shiro as 
dependent on an LDAP schema
  that is specific to each organization - an extension of JndiLdapRealm is 
generally used to override and implement the doGetAuhtorizationInfo method. In 
this particular release we are providing a simple authorization provider that 
can be used along with the Shiro authentication 
provider.</p><p><strong>main.ldapRealm.userDnTemplate</strong> - in order to 
bind a simple username to an LDAP server that generally requires a full 
distinguished name (DN), we must provide the template into which the simple 
username will be inserted. This template allows for the creation of a DN by 
injecting the simple username into the common name (CN) portion of the DN. 
<strong>This element will need to be customized to reflect your deployment 
environment.</strong> The template provided in the sample is only an example 
and is valid only within the LDAP schema distributed with Knox and is 
represented by the users.ldif file in the {GATEWAY_HOME}/conf 
directory.</p><p><strong>main.ldapRealm.contextFactory.url
 </strong> - this element is the URL that represents the host and port of LDAP 
server. It also includes the scheme of the protocol to use. This may be either 
ldap or ldaps depending on whether you are communicating with the LDAP over SSL 
(higly recommended). <strong>This element will need to be cusomized to reflect 
your deployment 
environment.</strong>.</p><p><strong>main.ldapRealm.contextFactory.authenticationMechanism</strong>
 - this element indicates the type of authentication that should be performed 
against the LDAP server. The current default value is <code>simple</code> which 
indicates a simple bind operation. This element should not need to be modified 
and no mechanism other than a simple bind has been tested for this particular 
release.</p><p><strong>urls./</strong>** - this element represents a single 
URL_Ant_Path_Expression and the value the Shiro filter chain to apply to it. 
This particular sample indicates that all paths into the application have the 
same Shiro filter ch
 ain applied. The paths are relative to the application context path. The use 
of the value <code>authcBasic</code> here indicates that BASIC authentication 
is expected for every path into the application. Adding an additional Shiro 
filter to that chain for validating that the request isSecure() and over SSL 
can be achieved by changing the value to <code>ssl, authcBasic</code>. It is 
not likely that you need to change this element for your environment.</p><h4><a 
id="Active+Directory+-+Special+Note"></a>Active Directory - Special 
Note</h4><p>You would use LDAP configuration as documented above to 
authenticate against Active Directory as well.</p><p>Some Active Directory 
specifc things to keep in mind:</p><p>Typical AD main.ldapRealm.userDnTemplate 
value looks slightly different, such as  
cn={0},cn=users,DC=lab,DC=sample,dc=com</p><p>Please compare this with a 
typical Apache DS main.ldapRealm.userDnTemplate value and make note of the 
difference.  uid={0},ou=people,dc=hadoop,dc=apache,dc
 =org</p><p>If your AD is configured to authenticate based on just the cn and 
password and does not require user DN, you do not have to specify value for 
main.ldapRealm.userDnTemplate.</p><h4><a 
id="LDAP+over+SSL+(LDAPS)+Configuration"></a>LDAP over SSL (LDAPS) 
Configuration</h4><p>In order to communicate with your LDAP server over SSL 
(again, highly recommended), you will need to modify the topology file in a 
couple ways and possibly provision some keying material.</p>
+</code></pre><p>This happens to be the way that we are currently configuring 
Shiro for BASIC/LDAP authentication. This same config approach may be used to 
achieve other authentication mechanisms or variations on this one. We have not, 
however, tested additional uses for it in this release.</p><h4><a 
id="LDAP+Configuration"></a>LDAP Configuration</h4><p>This section discusses 
the LDAP configuration used above for the Shiro Provider. Some of these 
configuration elements will need to be customized to reflect your deployment 
environment.</p><p><strong>main.ldapRealm</strong> - this element indicates the 
fully qualified classname of the Shiro realm to be used in authenticating the 
user. The classname provided by default in the sample is the 
<code>org.apache.shiro.realm.ldap.JndiLdapRealm</code>; this implementation 
provides us with the ability to authenticate but by default has authorization 
disabled. In order to provide authorization - which is seen by Shiro as 
dependent on an LDAP schema
  that is specific to each organization - an extension of JndiLdapRealm is 
generally used to override and implement the doGetAuthorizationInfo method. In 
this particular release we are providing a simple authorization provider that 
can be used along with the Shiro authentication 
provider.</p><p><strong>main.ldapRealm.userDnTemplate</strong> - in order to 
bind a simple username to an LDAP server that generally requires a full 
distinguished name (DN), we must provide the template into which the simple 
username will be inserted. This template allows for the creation of a DN by 
injecting the simple username into the common name (CN) portion of the DN. 
<strong>This element will need to be customized to reflect your deployment 
environment.</strong> The template provided in the sample is only an example 
and is valid only within the LDAP schema distributed with Knox and is 
represented by the users.ldif file in the {GATEWAY_HOME}/conf 
directory.</p><p><strong>main.ldapRealm.contextFactory.url</strong> - this 
element is the URL that represents the host and port of the LDAP 
server. It also includes the scheme of the protocol to use. This may be either 
ldap or ldaps depending on whether you are communicating with the LDAP server over SSL 
(highly recommended). <strong>This element will need to be customized to 
reflect your deployment 
environment.</strong></p><p><strong>main.ldapRealm.contextFactory.authenticationMechanism</strong>
 - this element indicates the type of authentication that should be performed 
against the LDAP server. The current default value is <code>simple</code> which 
indicates a simple bind operation. This element should not need to be modified 
and no mechanism other than a simple bind has been tested for this particular 
release.</p><p><strong>urls./**</strong> - this element represents a single 
URL_Ant_Path_Expression and the value represents the Shiro filter chain to apply to it. 
This particular sample indicates that all paths into the application have the 
same Shiro filter 
 chain applied. The paths are relative to the application context path. The use 
of the value <code>authcBasic</code> here indicates that BASIC authentication 
is expected for every path into the application. Adding an additional Shiro 
filter to that chain for validating that the request isSecure() and over SSL 
can be achieved by changing the value to <code>ssl, authcBasic</code>. It is 
not likely that you need to change this element for your environment.</p><h4><a 
id="Active+Directory+-+Special+Note"></a>Active Directory - Special 
Note</h4><p>You would use LDAP configuration as documented above to 
authenticate against Active Directory as well.</p><p>Some Active Directory 
specific things to keep in mind:</p><p>Typical AD main.ldapRealm.userDnTemplate 
value looks slightly different, such as  
cn={0},cn=users,DC=lab,DC=sample,dc=com</p><p>Please compare this with a 
typical Apache DS main.ldapRealm.userDnTemplate value and make note of the 
difference: uid={0},ou=people,dc=hadoop,dc=apache,dc=org</p><p>If your AD is 
configured to authenticate based on just the cn and password and does not 
require the user DN, you do not have to specify a value for 
main.ldapRealm.userDnTemplate.</p><h4><a 
id="LDAP+over+SSL+(LDAPS)+Configuration"></a>LDAP over SSL (LDAPS) 
Configuration</h4><p>In order to communicate with your LDAP server over SSL 
(again, highly recommended), you will need to modify the topology file in a 
couple ways and possibly provision some keying material.</p>
 <ol>
   <li><strong>main.ldapRealm.contextFactory.url</strong> must be changed to 
have the <code>ldaps</code> protocol scheme and the port must be the SSL 
listener port on your LDAP server.</li>
-  <li>Identity certificate (keypair) provisioned to LDAP server - your LDAP 
server specific documentation should indicate what is requried for providing a 
cert or keypair to represent the LDAP server identity to connecting 
clients.</li>
+  <li>Identity certificate (keypair) provisioned to LDAP server - your LDAP 
server specific documentation should indicate what is required for providing a 
cert or keypair to represent the LDAP server identity to connecting 
clients.</li>
   <li>Trusting the LDAP Server&rsquo;s public key - if the LDAP Server&rsquo;s 
identity certificate is issued by a well known and trusted certificate 
authority and is already represented in the JRE&rsquo;s cacerts truststore then 
you don&rsquo;t need to do anything for trusting the LDAP server&rsquo;s cert. 
If, however, the cert is self-signed or issued by an untrusted authority, you 
will need to either add it to the cacerts keystore or to another truststore 
that you may direct Knox to utilize through a system property.</li>
 </ol><h4><a id="Session+Configuration"></a>Session Configuration</h4><p>Knox 
maps each cluster topology to a web application and leverages standard JavaEE 
session management.</p><p>To configure session idle timeout for the topology, 
please specify the value of the parameter sessionTimeout for ShiroProvider in 
your topology file. If you do not specify the value for this parameter, it 
defaults to 30 minutes.</p><p>The definition would look like the following in 
the topology file:</p>
 <pre><code>...
@@ -1010,7 +1010,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 &lt;!-- search base used to search for user bind DN.
      Defaults to the value of main.ldapRealm.searchBase. 
      If main.ldapRealm.userSearchAttributeName is defined, 
-     vlaue for main.ldapRealm.searchBase  or main.ldapRealm.userSearchBase 
+     value for main.ldapRealm.searchBase  or main.ldapRealm.userSearchBase 
      should be defined --&gt;
 &lt;param&gt;
     &lt;name&gt;main.ldapRealm.userSearchBase&lt;/name&gt;
@@ -1020,7 +1020,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 &lt;!-- search base used to search for groups.
      Defaults to the value of main.ldapRealm.searchBase.
        If value of main.ldapRealm.authorizationEnabled is true,
-     vlaue for main.ldapRealm.searchBase  or main.ldapRealm.groupSearchBase 
should be defined --&gt;
+     value for main.ldapRealm.searchBase  or main.ldapRealm.groupSearchBase 
should be defined --&gt;
 &lt;param&gt;
     &lt;name&gt;main.ldapRealm.groupSearchBase&lt;/name&gt;
     &lt;value&gt;dc=hadoop,dc=apache,dc=org&lt;/value&gt;
@@ -1028,7 +1028,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
 
 &lt;!-- optional, default value: groupOfNames
      Objectclass to identify group entries in ldap, used to build search 
-   filter to search for group entires --&gt; 
+   filter to search for group entries --&gt; 
 &lt;param&gt;
     &lt;name&gt;main.ldapRealm.groupObjectClass&lt;/name&gt;
     &lt;value&gt;groupOfNames&lt;/value&gt;
@@ -1060,7 +1060,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
     &lt;name&gt;sessionTimeout&lt;/name&gt;
     &lt;value&gt;30&lt;/value&gt;
 &lt;/param&gt;
-</code></pre><p></provider></p><h4><a 
id="Special+note+on+parameter+main.ldapRealm.contextFactory.systemPassword"></a>Special
 note on parameter main.ldapRealm.contextFactory.systemPassword</h4><p>The 
value for this could have one of the following 2 formats</p><p>plantextpassword 
${ALIAS=ldcSystemPassword}</p><p>The first format specifies the password in 
plain text in the provider configuration. Use of this format should be limited 
for testing and troubleshooting.</p><p>We strongly recommend using the second 
format ${ALIAS=ldcSystemPassword} n production. This format uses an alias for 
the password stored in credential store. In the example 
${ALIAS=ldcSystemPassword}, ldcSystemPassword is the alias for the password 
stored in credential store.</p><p>Assuming plain text password is 
&ldquo;hadoop&rdquo;, and your topology file name is &ldquo;hdp.xml&rdquo;, you 
would use following command to create the right password alias in credential 
store.</p><p>$gateway_home/bin/knoxcli.sh create-al
 ias ldcSystemPassword &ndash;cluster hdp &ndash;value hadoop</p><h3><a 
id="LDAP+Authentication+Caching"></a>LDAP Authentication Caching</h3><p>Knox 
can be configured to cache LDAP authentication information. Knox leverages 
Shiro&rsquo;s built in caching mechanisms and has been tested with 
Shiro&rsquo;s EhCache cache manager implementation.</p><p>The following 
provider snippet demonstrates how to configure turning on the cache using the 
ShiroProvider. In addition to using 
org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm in the Shiro configuration, 
and setting up the cache you <em>must</em> set the flag for enabling caching 
authentication to true. Please see the property, 
main.ldapRealm.authenticationCachingEnabled below.</p>
+&lt;/provider&gt;
</code></pre><h4><a 
id="Special+note+on+parameter+main.ldapRealm.contextFactory.systemPassword"></a>Special
 note on parameter main.ldapRealm.contextFactory.systemPassword</h4><p>The 
value for this can have one of the following two formats: plaintextpassword or 
${ALIAS=ldcSystemPassword}</p><p>The first format specifies the password in 
plain text in the provider configuration. Use of this format should be limited 
to testing and troubleshooting.</p><p>We strongly recommend using the second 
format ${ALIAS=ldcSystemPassword} in production. This format uses an alias for 
the password stored in the credential store. In the example 
${ALIAS=ldcSystemPassword}, ldcSystemPassword is the alias for the password 
stored in the credential store.</p><p>Assuming the plain text password is 
&ldquo;hadoop&rdquo; and your topology file name is &ldquo;hdp.xml&rdquo;, you 
would use the following command to create the right password alias in the 
credential store:</p><p>$gateway_home/bin/knoxcli.sh create-alias 
ldcSystemPassword --cluster hdp --value hadoop</p><h3><a 
id="LDAP+Authentication+Caching"></a>LDAP Authentication Caching</h3><p>Knox 
can be configured to cache LDAP authentication information. Knox leverages 
Shiro&rsquo;s built-in caching mechanisms and has been tested with 
Shiro&rsquo;s EhCache cache manager implementation.</p><p>The following 
provider snippet demonstrates how to configure turning on the cache using the 
ShiroProvider. In addition to using 
org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm in the Shiro configuration 
and setting up the cache, you <em>must</em> set the flag for enabling caching 
authentication to true. Please see the property, 
main.ldapRealm.authenticationCachingEnabled below.</p>
 <pre><code>          &lt;provider&gt;
               &lt;role&gt;authentication&lt;/role&gt;
               &lt;name&gt;ShiroProvider&lt;/name&gt;
@@ -1216,7 +1216,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
         &lt;!-- 
         session timeout in minutes,  this is really idle timeout,
        defaults to 30 mins; if the property value is not defined, 
-        current client authentication would expire if client idles contiuosly 
for more than this value
+        current client authentication would expire if the client idles 
continuously for more than this value
         --&gt;
         &lt;!-- defaults to: 30 minutes
         &lt;param&gt;
@@ -1326,7 +1326,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
           &lt;name&gt;main.ldapRealm.memberAttribute&lt;/name&gt;
           &lt;value&gt;memberUrl&lt;/value&gt;
         &lt;/param&gt;
-</code></pre><h3><a 
id="Template+topology+files+and+LDIF+files+to+try+out+LDAP+Group+Look+up"></a>Template
 topology files and LDIF files to try out LDAP Group Look up</h3><p>Knox 
bundles some template topology files and ldif files that you can use to try and 
test LDAP Group Lookup and associated authorization acls. All these template 
files are located under {GATEWAY_HOME}/templates.</p><h4><a 
id="LDAP+Static+Group+Lookup+Templates,+authentication+and+group+lookup+from+the+same+directory"></a>LDAP
 Static Group Lookup Templates, authentication and group lookup from the same 
directory</h4><p>topology file: sandbox.knoxrealm1.xml ldif file : 
users.ldapgroups.ldif</p><p>To try this out</p><p>cd {GATEWAY_HOME} cp 
templates/sandbox.knoxrealm1.xml conf/topologies/sandbox.xml cp 
templates/users.ldapgroups.ldif conf/users.ldif java -jar bin/ldap.jar conf 
java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar 
-persist-master</p><p>Following call to WebHDFS should report HTTP/1.1 
 401 Unauthorized As guest is not a member of group &ldquo;analyst&rdquo;, 
authorization prvoider states user should be member of group 
&ldquo;analyst&rdquo;</p><p>curl -i -v -k -u guest:guest-password -X GET <a 
href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY";>https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><p>Following
 call to WebHDFS should report: {&ldquo;Path&rdquo;:&ldquo;/user/sam&rdquo;} As 
sam is a member of group &ldquo;analyst&rdquo;, authorization prvoider states 
user should be member of group &ldquo;analyst&rdquo;</p><p>curl -i -v -k -u 
sam:sam-password -X GET <a 
href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY";>https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><h4><a
 
id="LDAP+Static+Group+Lookup+Templates,+authentication+and+group+lookup+from+different++directories"></a>LDAP
 Static Group Lookup Templates, authentication and group lookup from different 
directorie
 s</h4><p>topology file: sandbox.knoxrealm2.xml ldif file : 
users.ldapgroups.ldif</p><p>To try this out</p><p>cd {GATEWAY_HOME} cp 
templates/sandbox.knoxrealm2.xml conf/topologies/sandbox.xml cp 
templates/users.ldapgroups.ldif conf/users.ldif java -jar bin/ldap.jar conf 
java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar 
-persist-master</p><p>Following call to WebHDFS should report HTTP/1.1 401 
Unauthorized As guest is not a member of group &ldquo;analyst&rdquo;, 
authorization prvoider states user should be member of group 
&ldquo;analyst&rdquo;</p><p>curl -i -v -k -u guest:guest-password -X GET <a 
href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY";>https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><p>Following
 call to WebHDFS should report: {&ldquo;Path&rdquo;:&ldquo;/user/sam&rdquo;} As 
sam is a member of group &ldquo;analyst&rdquo;, authorization prvoider states 
user should be member of group &ldquo;analyst&rdquo;
 </p><p>curl -i -v -k -u sam:sam-password -X GET <a 
href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY";>https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><h4><a
 
id="LDAP+Dynamic+Group+Lookup+Templates,+authentication+and+dynamic+group+lookup+from+same++directory"></a>LDAP
 Dynamic Group Lookup Templates, authentication and dynamic group lookup from 
same directory</h4><p>topology file: sandbox.knoxrealmdg.xml ldif file : 
users.ldapdynamicgroups.ldif</p><p>To try this out</p><p>cd {GATEWAY_HOME} cp 
templates/sandbox.knoxrealmdg.xml conf/topologies/sandbox.xml cp 
templates/users.ldapdynamicgroups.ldif conf/users.ldif java -jar bin/ldap.jar 
conf java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar 
-persist-master</p><p>Please note that user.ldapdynamicgroups.ldif also loads 
ncessary schema to create dynamic groups in Apache DS.</p><p>Following call to 
WebHDFS should report HTTP/1.1 401 Unauthorized As guest is not a membe
 r of dynamic group &ldquo;directors&rdquo;, authorization prvoider states user 
should be member of group &ldquo;directors&rdquo;</p><p>curl -i -v -k -u 
guest:guest-password -X GET <a 
href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY";>https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><p>Following
 call to WebHDFS should report: {&ldquo;Path&rdquo;:&ldquo;/user/bob&rdquo;} As 
bob is a member of dynamic group &ldquo;directors&rdquo;, authorization 
prvoider states user should be member of group 
&ldquo;directors&rdquo;</p><p>curl -i -v -k -u sam:sam-password -X GET <a 
href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY";>https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><h3><a
 id="Identity+Assertion"></a>Identity Assertion</h3><p>The identity assertion 
provider within Knox plays the critical role of communicating the identity 
principal to be used within the Hadoop cluster to represent th
 e identity that has been authenticated at the gateway.</p><p>The general 
responsibilities of the identity assertion provider is to interrogate the 
current Java Subject that has been established by the authentication or 
federation provider and:</p>
+</code></pre><h3><a id="Template+topology+files+and+LDIF+files+to+try+out+LDAP+Group+Look+up"></a>Template topology files and LDIF files to try out LDAP Group Lookup</h3><p>Knox bundles some template topology files and LDIF files that you can use to try out and test LDAP group lookup and the associated authorization ACLs. All of these template files are located under {GATEWAY_HOME}/templates.</p><h4><a id="LDAP+Static+Group+Lookup+Templates,+authentication+and+group+lookup+from+the+same+directory"></a>LDAP Static Group Lookup Templates, authentication and group lookup from the same directory</h4><p>topology file: sandbox.knoxrealm1.xml<br/>ldif file: users.ldapgroups.ldif</p><p>To try this out:</p>
<pre><code>cd {GATEWAY_HOME}
cp templates/sandbox.knoxrealm1.xml conf/topologies/sandbox.xml
cp templates/users.ldapgroups.ldif conf/users.ldif
java -jar bin/ldap.jar conf
java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
</code></pre><p>The following call to WebHDFS should report HTTP/1.1 401 Unauthorized, since guest is not a member of group &ldquo;analyst&rdquo; and the authorization provider requires that the user be a member of group &ldquo;analyst&rdquo;:</p><p>curl -i -v -k -u guest:guest-password -X GET <a href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY">https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><p>The following call to WebHDFS should report {&ldquo;Path&rdquo;:&ldquo;/user/sam&rdquo;}, since sam is a member of group &ldquo;analyst&rdquo; and satisfies the authorization provider&rsquo;s requirement:</p><p>curl -i -v -k -u sam:sam-password -X GET <a href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY">https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><h4><a id="LDAP+Static+Group+Lookup+Templates,+authentication+and+group+lookup+from+different++directories"></a>LDAP Static Group Lookup Templates, authentication and group lookup from different directories</h4><p>topology file: sandbox.knoxrealm2.xml<br/>ldif file: users.ldapgroups.ldif</p><p>To try this out:</p>
<pre><code>cd {GATEWAY_HOME}
cp templates/sandbox.knoxrealm2.xml conf/topologies/sandbox.xml
cp templates/users.ldapgroups.ldif conf/users.ldif
java -jar bin/ldap.jar conf
java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
</code></pre><p>The following call to WebHDFS should report HTTP/1.1 401 Unauthorized, since guest is not a member of group &ldquo;analyst&rdquo; and the authorization provider requires that the user be a member of group &ldquo;analyst&rdquo;:</p><p>curl -i -v -k -u guest:guest-password -X GET <a href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY">https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><p>The following call to WebHDFS should report {&ldquo;Path&rdquo;:&ldquo;/user/sam&rdquo;}, since sam is a member of group &ldquo;analyst&rdquo; and satisfies the authorization provider&rsquo;s requirement:</p><p>curl -i -v -k -u sam:sam-password -X GET <a href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY">https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><h4><a id="LDAP+Dynamic+Group+Lookup+Templates,+authentication+and+dynamic+group+lookup+from+same++directory"></a>LDAP Dynamic Group Lookup Templates, authentication and dynamic group lookup from the same directory</h4><p>topology file: sandbox.knoxrealmdg.xml<br/>ldif file: users.ldapdynamicgroups.ldif</p><p>To try this out:</p>
<pre><code>cd {GATEWAY_HOME}
cp templates/sandbox.knoxrealmdg.xml conf/topologies/sandbox.xml
cp templates/users.ldapdynamicgroups.ldif conf/users.ldif
java -jar bin/ldap.jar conf
java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
</code></pre><p>Please note that users.ldapdynamicgroups.ldif also loads the schema necessary to create dynamic groups in Apache DS.</p><p>The following call to WebHDFS should report HTTP/1.1 401 Unauthorized, since guest is not a member of dynamic group &ldquo;directors&rdquo; and the authorization provider requires that the user be a member of group &ldquo;directors&rdquo;:</p><p>curl -i -v -k -u guest:guest-password -X GET <a href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY">https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><p>The following call to WebHDFS should report {&ldquo;Path&rdquo;:&ldquo;/user/bob&rdquo;}, since bob is a member of dynamic group &ldquo;directors&rdquo; and satisfies the authorization provider&rsquo;s requirement:</p><p>curl -i -v -k -u bob:bob-password -X GET <a href="https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY">https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY</a></p><h3><a id="Identity+Assertion"></a>Identity Assertion</h3><p>The identity assertion provider within Knox plays the critical role of communicating the identity principal to be used within the Hadoop cluster to represent the identity that has been authenticated at the gateway.</p><p>The general responsibility of the identity assertion provider is to interrogate the current Java Subject that has been established by the authentication or federation provider and:</p>
 <ol>
   <li>determine whether it matches any principal mapping rules and apply them 
appropriately</li>
   <li>determine whether it matches any group principal mapping rules and apply 
them</li>
@@ -1337,7 +1337,7 @@ ldapRealm.userDnTemplate=uid={0},ou=peop
     &lt;name&gt;Default&lt;/name&gt;
     &lt;enabled&gt;true&lt;/enabled&gt;
 &lt;/provider&gt;
-</code></pre><p>This particular configuration indicates that the Default 
identity assertion provider is enabled and that there are no principal mapping 
rules to apply to identities flowing from the authentication in the gateway to 
the backend Hadoop cluster services. The primary principal of the current 
subject will therefore be asserted via a query paramter or as a form parameter 
- ie. ?user.name={primaryPrincipal}</p>
+</code></pre><p>This particular configuration indicates that the Default identity assertion provider is enabled and that there are no principal mapping rules to apply to identities flowing from the authentication in the gateway to the backend Hadoop cluster services. The primary principal of the current subject will therefore be asserted via a query parameter or as a form parameter, i.e. ?user.name={primaryPrincipal}</p>
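<p>For example, with the Default provider and no mapping rules in effect, a request authenticated at the gateway as guest is dispatched to the cluster with the asserted identity attached as a query parameter. The sketch below assumes the sandbox topology; the WebHDFS host and port are illustrative:</p>
<pre><code># Client request to the gateway, authenticating as guest:
curl -i -k -u guest:guest-password -X GET &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS&#39;

# Request the gateway dispatches to WebHDFS inside the cluster:
#   http://{webhdfs-host}:50070/webhdfs/v1/tmp?op=LISTSTATUS&amp;user.name=guest
</code></pre>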
 <pre><code>&lt;provider&gt;
     &lt;role&gt;identity-assertion&lt;/role&gt;
     &lt;name&gt;Default&lt;/name&gt;
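    &lt;!-- A sketch of how a principal mapping rule could be added to this
         provider as a param; the mapping below is illustrative only:
         &lt;param&gt;
             &lt;name&gt;principal.mapping&lt;/name&gt;
             &lt;value&gt;guest=hdfs;&lt;/value&gt;
         &lt;/param&gt;
    --&gt;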
@@ -1893,7 +1893,7 @@ APACHE_HOME/bin/apachectl -k stop
  <li>Support the notion of an SSO session for multiple Hadoop interactions</li>
   <li>Support the multiple authentication and federation token capabilities of 
the Apache Knox Gateway</li>
   <li>Promote the use of REST APIs as the dominant remote client mechanism for 
Hadoop services</li>
-  <li>Promote the the sense of Hadoop as a single unified product</li>
+  <li>Promote the sense of Hadoop as a single unified product</li>
   <li>Aligned with the Apache Knox Gateway&rsquo;s overall goals for 
security</li>
 </ul><p>The result is a very simple DSL (<a href="http://en.wikipedia.org/wiki/Domain-specific_language">Domain Specific Language</a>) of sorts that is used via <a href="http://groovy.codehaus.org">Groovy</a> scripts. Here is an example of a command that copies a file from the local file system to HDFS.</p><p><em>Note: The variables session, localFile and remoteFile are assumed to be defined.</em></p>
 <pre><code>Hdfs.put( session ).file( localFile ).to( remoteFile ).now()
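// For context, a minimal sketch of how such a session is typically established
// and released; the gateway URL and credentials below are illustrative:
//   import org.apache.hadoop.gateway.shell.Hadoop
//   session = Hadoop.login( &quot;https://localhost:8443/gateway/sandbox&quot;, &quot;guest&quot;, &quot;guest-password&quot; )
//   Hdfs.put( session ).file( localFile ).to( remoteFile ).now()
//   session.shutdown()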
@@ -2307,7 +2307,7 @@ session.shutdown()
 <ul>
   <li>Request
   <ul>
-    <li>text( String text ) - Text to upload to HDFS. Takes precidence over 
file if both present.</li>
+    <li>text( String text ) - Text to upload to HDFS. Takes precedence over 
file if both present.</li>
     <li>file( String name ) - The name of a local file to upload to HDFS.</li>
     <li>to( String name ) - The fully qualified name to create in HDFS.</li>
   </ul></li>
@@ -2478,7 +2478,7 @@ session.shutdown()
   <ul>
     <li><code>Job.queryStatus(session).jobId(jobId).now().string</code></li>
   </ul></li>
-</ul><h3><a id="Oozie"></a>Oozie</h3><p>Oozie is a Hadoop component provides 
complex job workflows to be submitted and managed. Please refer to the latest 
<a href="http://oozie.apache.org/docs/4.0.0/";>Oozie documentation</a> for 
details.</p><p>In order to make Oozie accessible via the gateway there are 
several important Haddop configuration settings. These all relate to the 
network endpoint exposed by various Hadoop services.</p><p>The HTTP endpoint at 
which Oozie is running can be found via the oozie.base.url property in the 
oozie-site.xml file. In a Sandbox installation this can typically be found in 
/etc/oozie/conf/oozie-site.xml.</p>
+</ul><h3><a id="Oozie"></a>Oozie</h3><p>Oozie is a Hadoop component provides 
complex job workflows to be submitted and managed. Please refer to the latest 
<a href="http://oozie.apache.org/docs/4.0.0/";>Oozie documentation</a> for 
details.</p><p>In order to make Oozie accessible via the gateway there are 
several important Hadoop configuration settings. These all relate to the 
network endpoint exposed by various Hadoop services.</p><p>The HTTP endpoint at 
which Oozie is running can be found via the oozie.base.url property in the 
oozie-site.xml file. In a Sandbox installation this can typically be found in 
/etc/oozie/conf/oozie-site.xml.</p>
 <pre><code>&lt;property&gt;
     &lt;name&gt;oozie.base.url&lt;/name&gt;
     &lt;value&gt;http://sandbox.hortonworks.com:11000/oozie&lt;/value&gt;
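    &lt;!-- The gateway topology file&#39;s Oozie entry then points at this same
         endpoint; a sketch using the sandbox value above:
         &lt;service&gt;
             &lt;role&gt;OOZIE&lt;/role&gt;
             &lt;url&gt;http://sandbox.hortonworks.com:11000/oozie&lt;/url&gt;
         &lt;/service&gt;
    --&gt;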
@@ -2615,7 +2615,7 @@ curl -i -k -u guest:guest-password -X DE
       <td><code>http://{stargate-host}:60080/</code> </td>
     </tr>
   </tbody>
-</table><h4><a id="HBase+Examples"></a>HBase Examples</h4><p>The examples 
below illustrate the set of basic operations with HBase instance using Stargate 
REST API. Use following link to get more more details about HBase/Stargate API: 
<a 
href="http://wiki.apache.org/hadoop/Hbase/Stargate";>http://wiki.apache.org/hadoop/Hbase/Stargate</a>.</p><p>Note:
 Some HBase examples may not work due to enabled <a 
href="https://hbase.apache.org/book/hbase.accesscontrol.configuration.html";>Access
 Control</a>. User may not be granted for performing operations in samples. In 
order to check if Access Control is configured in the HBase instance verify 
hbase-site.xml for a presence of 
<code>org.apache.hadoop.hbase.security.access.AccessController</code> in 
<code>hbase.coprocessor.master.classes</code> and 
<code>hbase.coprocessor.region.classes</code> properties.<br/>To grant the 
Read, Write, Create permissions to <code>guest</code> user execute the 
following command:</p>
+</table><h4><a id="HBase+Examples"></a>HBase Examples</h4><p>The examples 
below illustrate the set of basic operations with HBase instance using Stargate 
REST API. Use following link to get more details about HBase/Stargate API: <a 
href="http://wiki.apache.org/hadoop/Hbase/Stargate";>http://wiki.apache.org/hadoop/Hbase/Stargate</a>.</p><p>Note:
 Some HBase examples may not work due to enabled <a 
href="https://hbase.apache.org/book/hbase.accesscontrol.configuration.html";>Access
 Control</a>. User may not be granted for performing operations in samples. In 
order to check if Access Control is configured in the HBase instance verify 
hbase-site.xml for a presence of 
<code>org.apache.hadoop.hbase.security.access.AccessController</code> in 
<code>hbase.coprocessor.master.classes</code> and 
<code>hbase.coprocessor.region.classes</code> properties.<br/>To grant the 
Read, Write, Create permissions to <code>guest</code> user execute the 
following command:</p>
 <pre><code>echo grant &#39;guest&#39;, &#39;RWC&#39; | hbase shell
 </code></pre><p>If you are using a cluster secured with Kerberos you will need to have used <code>kinit</code> to authenticate to the KDC.</p><h4><a id="HBase+Stargate+Setup"></a>HBase Stargate Setup</h4><h4><a id="Launch+Stargate"></a>Launch Stargate</h4><p>The command below launches the Stargate daemon on port 60080:</p>
 <pre><code>sudo {HBASE_BIN}/hbase-daemon.sh start rest -p 60080
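# A quick, illustrative way to confirm the daemon is up, run on the HBase host:
# curl -i http://localhost:60080/version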
@@ -2874,7 +2874,7 @@ HBase.session(session).table(tableName).
    <li>endTime(Long) - the upper bound for filtering by time.</li>
    <li>times(Long startTime, Long endTime) - the lower and upper bounds for filtering by time.</li>
     <li>filter(String) - the filter XML definition.</li>
-    <li>maxVersions(Integer) - the the maximum number of versions to 
return.</li>
+    <li>maxVersions(Integer) - the maximum number of versions to return.</li>
   </ul></li>
   <li>Response
   <ul>
@@ -3217,7 +3217,7 @@ session.shutdown(10, SECONDS)
       <td><code>http://{hive-host}:{hive-port}/{hive-path}</code></td>
     </tr>
   </tbody>
-</table><h4><a id="Hive+Examples"></a>Hive Examples</h4><p>This guide provides 
detailed examples for how to to some basic interactions with Hive via the 
Apache Knox Gateway.</p><h5><a id="Hive+Setup"></a>Hive Setup</h5>
+</table><h4><a id="Hive+Examples"></a>Hive Examples</h4><p>This guide provides 
detailed examples for how to do some basic interactions with Hive via the 
Apache Knox Gateway.</p><h5><a id="Hive+Setup"></a>Hive Setup</h5>
 <ol>
   <li>Make sure you are running the correct version of Hive to ensure 
JDBC/Thrift/HTTP support.</li>
   <li>Make sure Hive Server is running on the correct port.</li>
@@ -3360,7 +3360,7 @@ while ( resultSet.next() ) {
 resultSet.close();
 statement.close();
 connection.close();
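// For reference, a connection like the one closed above is typically obtained
// through the gateway with the Hive JDBC driver; the URL below is a sketch and
// the sandbox host, port, and path are illustrative:
//   connection = DriverManager.getConnection(
//       &quot;jdbc:hive2://localhost:8443/;ssl=true;transportMode=http;httpPath=gateway/sandbox/hive&quot;,
//       &quot;guest&quot;, &quot;guest-password&quot; );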
-</code></pre><p>Exampes use &lsquo;log.txt&rsquo; with content:</p>
+</code></pre><p>Examples use &lsquo;log.txt&rsquo; with the following content:</p>
 <pre><code>2012-02-03 18:35:34 SampleClass6 [INFO] everything normal for id 
577725851
 2012-02-03 18:35:34 SampleClass4 [FATAL] system problem at id 1991281254
 2012-02-03 18:35:34 SampleClass3 [DEBUG] detail for id 1304807656
@@ -3785,7 +3785,7 @@ org.apache.http.conn.HttpHostConnectExce
     at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
     at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
     at 
org.apache.hadoop.gateway.dispatch.HttpClientDispatch.executeRequest(HttpClientDispatch.java:99)
-</code></pre><p>The the resulting behavior on the client will differ by 
client. For the client DSL executing the 
{GATEWAY_HOME}/samples/ExampleWebHdfsLs.groovy the output will look look like 
this.</p>
+</code></pre><p>The resulting behavior on the client will differ by client. For the client DSL executing the {GATEWAY_HOME}/samples/ExampleWebHdfsLs.groovy the output will look like this:</p>
 <pre><code>Caught: org.apache.hadoop.gateway.shell.HadoopException: 
org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
 org.apache.hadoop.gateway.shell.HadoopException: 
org.apache.hadoop.gateway.shell.ErrorResponse: HTTP/1.1 500 Server Error
   at 
org.apache.hadoop.gateway.shell.AbstractRequest.now(AbstractRequest.java:72)
@@ -3808,7 +3808,7 @@ Server: Jetty(8.1.12.v20130726)
 <pre><code>curl -i -k -u guest:guest-password -X GET &#39;http://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS&#39;
</code></pre><p>The following error is returned:</p>
<pre><code>curl: (52) Empty reply from server
-</code></pre><p>This is the default behavior for Jetty SSL listener. While the 
credentials to the default authentication provider continue to be username and 
password, we do not want to encourage sending these in clear text. Since 
prememptively sending BASIC credentials is a common pattern with REST APIs it 
would be unwise to redirect to a HTTPS listener thus allowing clear text 
passwords.</p><p>To resolve this issue, we have two options:</p>
+</code></pre><p>This is the default behavior of the Jetty SSL listener. While the credentials to the default authentication provider continue to be username and password, we do not want to encourage sending these in clear text. Since preemptively sending BASIC credentials is a common pattern with REST APIs, it would be unwise to redirect to an HTTPS listener, thus allowing clear text passwords.</p><p>To resolve this issue, we have two options:</p>
 <ol>
  <li>Change the scheme in the URL to https and deal with any trust relationship issues with the presented server certificate (see the example below)</li>
  <li>Disable SSL in gateway-site.xml - this is not encouraged due to the reasoning described above.</li>
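</ol><p>As a quick sketch of the first option, the same request can be re-issued over https; the -k flag bypasses certificate verification and is suitable only for testing:</p>
<pre><code>curl -i -k -u guest:guest-password -X GET &#39;https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS&#39;
</code></pre>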
@@ -3832,7 +3832,7 @@ curl: (52) Empty reply from server
 WWW-Authenticate: BASIC realm=&quot;application&quot;
 Content-Length: 0
 Server: Jetty(8.1.12.v20130726)
-</code></pre><h4><a 
id="Using+ldapsearch+to+verify+ldap+connectivtiy+and+credentials"></a>Using 
ldapsearch to verify ldap connectivtiy and credentials</h4><p>If your 
authentication to knox fails and you believe your are using correct 
creedentilas, you could try to verify the connectivity and credentials usong 
ldapsearch, assuming you are using ldap directory for 
authentication.</p><p>Assuming you are using the default values that came out 
of box with knox, your ldapsearch command would be like the following</p>
+</code></pre><h4><a id="Using+ldapsearch+to+verify+ldap+connectivtiy+and+credentials"></a>Using ldapsearch to verify LDAP connectivity and credentials</h4><p>If your authentication to Knox fails and you believe you are using correct credentials, you could try to verify the connectivity and credentials using ldapsearch, assuming you are using an LDAP directory for authentication.</p><p>Assuming you are using the default values that came out of the box with Knox, your ldapsearch command would be like the following:</p>
 <pre><code>ldapsearch -h localhost -p 33389 -D 
&quot;uid=guest,ou=people,dc=hadoop,dc=apache,dc=org&quot; -w guest-password -b 
&quot;uid=guest,ou=people,dc=hadoop,dc=apache,dc=org&quot; 
&quot;objectclass=*&quot;
 </code></pre><p>This should produce output like the following</p>
 <pre><code># extended LDIF
@@ -3885,11 +3885,11 @@ org.apache.hadoop.gateway.shell.HadoopEx
 Cluster version : 0.96.0.2.0.6.0-76-hadoop2
 Status : {...}
 Creating table &#39;test_table&#39;...
-</code></pre><p>HBase and Starget can be restred using the following commands 
on the Hadoop Sandbox VM. You will need to ssh into the VM in order to run 
these commands.</p>
+</code></pre><p>HBase and Stargate can be restarted using the following commands on the Hadoop Sandbox VM. You will need to ssh into the VM in order to run these commands.</p>
 <pre><code>sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh stop master
 sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh start master
 sudo -u hbase /usr/lib/hbase/bin/hbase-daemon.sh restart rest -p 60080
-</code></pre><h3><a id="SSL+Certificate+Issues"></a>SSL Certificate 
Issues</h3><p>Clients that do not trust the certificate presented by the server 
will behave in different ways. A browser will typically warn you of the 
inability to trust the receieved certificate and give you an opportunity to add 
an exception for the particular certificate. Curl will present you with the 
follow message and instructions for turning of certificate verification:</p>
+</code></pre><h3><a id="SSL+Certificate+Issues"></a>SSL Certificate 
Issues</h3><p>Clients that do not trust the certificate presented by the server 
will behave in different ways. A browser will typically warn you of the 
inability to trust the received certificate and give you an opportunity to add 
an exception for the particular certificate. Curl will present you with the 
follow message and instructions for turning of certificate verification:</p>
 <pre><code>curl performs SSL certificate verification by default, using a 
&quot;bundle&quot; 
  of Certificate Authority (CA) public keys (CA certs). If the default
  bundle file isn&#39;t adequate, you can specify an alternate file

Modified: knox/site/index.html
URL: 
http://svn.apache.org/viewvc/knox/site/index.html?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/site/index.html (original)
+++ knox/site/index.html Tue Sep  8 13:32:22 2015
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at 2015-07-22
+ | Generated by Apache Maven Doxia at 2015-09-08
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20150722" />
+    <meta name="Date-Revision-yyyymmdd" content="20150908" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Knox Gateway &#x2013; REST API Gateway for the Hadoop 
Ecosystem</title>
     <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
@@ -58,7 +58,7 @@
               
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 
2015-07-22</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-09-08</li> 
             
                             </ul>
       </div>

Modified: knox/site/issue-tracking.html
URL: 
http://svn.apache.org/viewvc/knox/site/issue-tracking.html?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/site/issue-tracking.html (original)
+++ knox/site/issue-tracking.html Tue Sep  8 13:32:22 2015
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at 2015-07-22
+ | Generated by Apache Maven Doxia at 2015-09-08
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20150722" />
+    <meta name="Date-Revision-yyyymmdd" content="20150908" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Knox Gateway &#x2013; Issue Tracking</title>
     <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
@@ -58,7 +58,7 @@
               
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 
2015-07-22</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-09-08</li> 
             
                             </ul>
       </div>

Modified: knox/site/license.html
URL: 
http://svn.apache.org/viewvc/knox/site/license.html?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/site/license.html (original)
+++ knox/site/license.html Tue Sep  8 13:32:22 2015
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at 2015-07-22
+ | Generated by Apache Maven Doxia at 2015-09-08
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20150722" />
+    <meta name="Date-Revision-yyyymmdd" content="20150908" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Knox Gateway &#x2013; Project License</title>
     <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
@@ -58,7 +58,7 @@
               
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 
2015-07-22</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-09-08</li> 
             
                             </ul>
       </div>

Modified: knox/site/mail-lists.html
URL: 
http://svn.apache.org/viewvc/knox/site/mail-lists.html?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/site/mail-lists.html (original)
+++ knox/site/mail-lists.html Tue Sep  8 13:32:22 2015
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at 2015-07-22
+ | Generated by Apache Maven Doxia at 2015-09-08
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20150722" />
+    <meta name="Date-Revision-yyyymmdd" content="20150908" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Knox Gateway &#x2013; Project Mailing Lists</title>
     <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
@@ -58,7 +58,7 @@
               
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 
2015-07-22</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-09-08</li> 
             
                             </ul>
       </div>
@@ -317,7 +317,7 @@
 <td><a class="externalLink" 
href="mailto:[email protected]";>Subscribe</a></td>
 <td><a class="externalLink" 
href="mailto:[email protected]";>Unsubscribe</a></td>
 <td><a class="externalLink" href="mailto:[email protected]";>Post</a></td>
-<td><a class="externalLink" 
href="http://mail-archives.apache.org/mod_mbox/knox-commit/";>mail-archives.apache.org</a></td></tr></table></div>
+<td><a class="externalLink" 
href="http://mail-archives.apache.org/mod_mbox/knox-commits/";>mail-archives.apache.org</a></td></tr></table></div>
                   </div>
             </div>
           </div>

Modified: knox/site/project-info.html
URL: 
http://svn.apache.org/viewvc/knox/site/project-info.html?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/site/project-info.html (original)
+++ knox/site/project-info.html Tue Sep  8 13:32:22 2015
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at 2015-07-22
+ | Generated by Apache Maven Doxia at 2015-09-08
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20150722" />
+    <meta name="Date-Revision-yyyymmdd" content="20150908" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Knox Gateway &#x2013; Project Information</title>
     <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
@@ -58,7 +58,7 @@
               
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 
2015-07-22</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-09-08</li> 
             
                             </ul>
       </div>

Modified: knox/site/team-list.html
URL: 
http://svn.apache.org/viewvc/knox/site/team-list.html?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/site/team-list.html (original)
+++ knox/site/team-list.html Tue Sep  8 13:32:22 2015
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at 2015-07-22
+ | Generated by Apache Maven Doxia at 2015-09-08
  | Rendered using Apache Maven Fluido Skin 1.3.0
 -->
 <html xmlns="http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20150722" />
+    <meta name="Date-Revision-yyyymmdd" content="20150908" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Knox Gateway &#x2013; Team list</title>
     <link rel="stylesheet" href="./css/apache-maven-fluido-1.3.0.min.css" />
@@ -58,7 +58,7 @@
               
                 
                     
-                  <li id="publishDate" class="pull-right">Last Published: 
2015-07-22</li> 
+                  <li id="publishDate" class="pull-right">Last Published: 
2015-09-08</li> 
             
                             </ul>
       </div>

Modified: knox/trunk/pom.xml
URL: 
http://svn.apache.org/viewvc/knox/trunk/pom.xml?rev=1701803&r1=1701802&r2=1701803&view=diff
==============================================================================
--- knox/trunk/pom.xml (original)
+++ knox/trunk/pom.xml Tue Sep  8 13:32:22 2015
@@ -87,7 +87,7 @@
             <subscribe>mailto:[email protected]</subscribe>
             
<unsubscribe>mailto:[email protected]</unsubscribe>
             <post>mailto:[email protected]</post>
-            
<archive>http://mail-archives.apache.org/mod_mbox/knox-commit/</archive>
+            
<archive>http://mail-archives.apache.org/mod_mbox/knox-commits/</archive>
         </mailingList>
     </mailingLists>
 

