Added: knox/trunk/books/1.3.0/config_authz.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_authz.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_authz.md (added)
+++ knox/trunk/books/1.3.0/config_authz.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,321 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+### Authorization ###
+
+#### Service Level Authorization ####
+
+The Knox Gateway has an out-of-the-box authorization provider that allows 
administrators to restrict access to the individual services within a Hadoop 
cluster.
+
+This provider utilizes a simple and familiar pattern of using ACLs to protect 
Hadoop resources by specifying users, groups and IP addresses that are 
permitted access.
+
+Note: This feature will not work as expected if 'anonymous' authentication is 
used. 
+
+#### Configuration ####
+
+ACLs are bound to services within the topology descriptors by introducing the 
authorization provider with configuration like:
+
+    <provider>
+        <role>authorization</role>
+        <name>AclsAuthz</name>
+        <enabled>true</enabled>
+    </provider>
+
+The above configuration enables the authorization provider but does not 
indicate any ACLs yet and therefore there is no restriction on accessing the 
Hadoop services. In order to indicate the resources to be protected and the 
specific users, groups or IPs to grant access, we need to provide parameters 
like the following:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        
<value>username[,*|username...];group[,*|group...];ipaddr[,*|ipaddr...]</value>
+    </param>
+    
+where `{serviceName}` would need to be the name of a configured Hadoop service 
within the topology.
+
+NOTE: ipaddr is unique among the parts of the ACL in that you are able to 
specify a wildcard within an ipaddr to indicate that the remote address must 
begin with the string prior to the asterisk within the ipaddr ACL. For instance:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;*;192.168.*</value>
+    </param>
+    
+This indicates that the request must come from an IP address that begins with 
'192.168.' in order to be granted access.
+
+Note also that a configuration without any ACLs defined is equivalent to:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;*;*</value>
+    </param>
+
+meaning: all users, groups and IPs have access.
+Each of the elements of the ACL parameter supports multiple values via a 
comma-separated list and the `*` wildcard to match any.
+
+For instance:
+
+    <param>
+        <name>webhdfs.acl</name>
+        <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
+    </param>
+
+this configuration indicates that ALL of the following have to be satisfied to 
be granted access:
+
+1. The user name must be "hdfs" AND
+2. the user must be in the group "admin" AND
+3. the user must come from either 127.0.0.2 or 127.0.0.3
+
+This allows us to craft policy that restricts the members of a large group to 
a subset that should have access.
+Removing a user from the group will then cause access to be denied even though 
their username may still be listed in the ACL.
+
+An additional configuration element may be used to alter the processing of the 
ACL to be OR instead of the default AND behavior:
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+
+this processing behavior requires that the effective user satisfy one of the 
parts of the ACL definition in order to be granted access.
+For instance:
+
+    <param>
+        <name>webhdfs.acl</name>
+        <value>hdfs,guest;admin;127.0.0.2,127.0.0.3</value>
+    </param>
+
+You may also set the ACL processing mode at the top level for the topology. 
This essentially sets the default for the managed cluster.
+It may then be overridden at the service level as well.
+
+    <param>
+        <name>acl.mode</name>
+        <value>OR</value>
+    </param>
+
+In combination with the `webhdfs.acl` example above, this configuration 
indicates that ONE of the following must be satisfied to be granted access:
+
+1. The user is "hdfs" or "guest" OR
+2. the user is in "admin" group OR
+3. the request is coming from 127.0.0.2 or 127.0.0.3
+
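+Putting these pieces together, a provider configuration that sets OR as the 
+topology-wide default while overriding a single service back to AND might look 
+like the following sketch (the service name and values are illustrative):
+
+    <provider>
+        <role>authorization</role>
+        <name>AclsAuthz</name>
+        <enabled>true</enabled>
+        <param>
+            <name>acl.mode</name>
+            <value>OR</value>
+        </param>
+        <param>
+            <name>webhdfs.acl.mode</name>
+            <value>AND</value>
+        </param>
+        <param>
+            <name>webhdfs.acl</name>
+            <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
+        </param>
+    </provider>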
+
+Following are a few concrete examples on how to use this feature.
+
+Note: In the examples below `{serviceName}` represents a real service name 
(e.g. WEBHDFS) and would be replaced with the actual service name in a real 
configuration.
+
+##### Usecases #####
+
+###### USECASE-1: Restrict access to specific Hadoop services to specific Users
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;*;*</value>
+    </param>
+
+###### USECASE-2: Restrict access to specific Hadoop services to specific 
Groups
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;admins;*</value>
+    </param>
+
+###### USECASE-3: Restrict access to specific Hadoop services to specific 
Remote IPs
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;*;127.0.0.1</value>
+    </param>
+
+###### USECASE-4: Restrict access to specific Hadoop services to specific 
Users OR users within specific Groups
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;admin;*</value>
+    </param>
+
+###### USECASE-5: Restrict access to specific Hadoop services to specific 
Users OR users from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;*;127.0.0.1</value>
+    </param>
+
+###### USECASE-6: Restrict access to specific Hadoop services to users within 
specific Groups OR from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;admin;127.0.0.1</value>
+    </param>
+
+###### USECASE-7: Restrict access to specific Hadoop services to specific 
Users OR users within specific Groups OR from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;admin;127.0.0.1</value>
+    </param>
+
+###### USECASE-8: Restrict access to specific Hadoop services to specific 
Users AND users within specific Groups
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;admin;*</value>
+    </param>
+
+###### USECASE-9: Restrict access to specific Hadoop services to specific 
Users AND users from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;*;127.0.0.1</value>
+    </param>
+
+###### USECASE-10: Restrict access to specific Hadoop services to users within 
specific Groups AND from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;admins;127.0.0.1</value>
+    </param>
+
+###### USECASE-11: Restrict access to specific Hadoop services to specific 
Users AND users within specific Groups AND from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;admins;127.0.0.1</value>
+    </param>
+
+###### USECASE-12: Full example including identity assertion/principal mapping 
######
+
+The principal mapping aspect of the identity assertion provider is important 
to understand in order to fully utilize the authorization features of this 
provider.
+
+This feature allows us to map the authenticated principal to a runAs or 
impersonated principal to be asserted to the Hadoop services in the backend. It 
is fully documented in the Identity Assertion section of this guide.
+
+These additional mapping capabilities are used together with the authorization 
ACL policy.
+An example of a full topology that illustrates these together is below.
+
+    <topology>
+        <gateway>
+            <provider>
+                <role>authentication</role>
+                <name>ShiroProvider</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>main.ldapRealm</name>
+                    <value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.userDnTemplate</name>
+                    <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.contextFactory.url</name>
+                    <value>ldap://localhost:33389</value>
+                </param>
+                <param>
+                    
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+                    <value>simple</value>
+                </param>
+                <param>
+                    <name>urls./**</name>
+                    <value>authcBasic</value>
+                </param>
+            </provider>
+            <provider>
+                <role>identity-assertion</role>
+                <name>Default</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>principal.mapping</name>
+                    <value>guest=hdfs;</value>
+                </param>
+                <param>
+                    <name>group.principal.mapping</name>
+                    <value>*=users;hdfs=admin</value>
+                </param>
+            </provider>
+            <provider>
+                <role>authorization</role>
+                <name>AclsAuthz</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>acl.mode</name>
+                    <value>OR</value>
+                </param>
+                <param>
+                    <name>webhdfs.acl.mode</name>
+                    <value>AND</value>
+                </param>
+                <param>
+                    <name>webhdfs.acl</name>
+                    <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
+                </param>
+                <param>
+                    <name>webhcat.acl</name>
+                    <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
+                </param>
+            </provider>
+            <provider>
+                <role>hostmap</role>
+                <name>static</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>localhost</name>
+                    <value>sandbox,sandbox.hortonworks.com</value>
+                </param>
+            </provider>
+        </gateway>
+
+        <service>
+            <role>JOBTRACKER</role>
+            <url>rpc://localhost:8050</url>
+        </service>
+
+        <service>
+            <role>WEBHDFS</role>
+            <url>http://localhost:50070/webhdfs</url>
+        </service>
+
+        <service>
+            <role>WEBHCAT</role>
+            <url>http://localhost:50111/templeton</url>
+        </service>
+
+        <service>
+            <role>OOZIE</role>
+            <url>http://localhost:11000/oozie</url>
+        </service>
+
+        <service>
+            <role>WEBHBASE</role>
+            <url>http://localhost:8080</url>
+        </service>
+
+        <service>
+            <role>HIVE</role>
+            <url>http://localhost:10001/cliservice</url>
+        </service>
+    </topology>

Added: knox/trunk/books/1.3.0/config_ha.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_ha.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_ha.md (added)
+++ knox/trunk/books/1.3.0/config_ha.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,165 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### High Availability ###
+
+This describes how Knox itself can be made highly available.
+
+All Knox instances must be configured to use the same topology credential 
keystores.
+These files are located under 
`{GATEWAY_HOME}/conf/security/keystores/{TOPOLOGY_NAME}-credentials.jceks`.
+They are generated after the first topology deployment.
+
+In addition to these topology-specific credentials, gateway credentials and 
topologies must also be kept in-sync for Knox to operate in an HA manner.
+
+#### Manually Synchronize Knox Instances ####
+
+Here are the steps to manually sync topology credential keystores:
+
+1. Choose a Knox instance that will be the source for topology credential 
keystores. Let's call it _keystores master_
+2. Replace the topology credential keystores in the other Knox instances with 
topology credential keystores from the _keystores master_
+3. Restart Knox instances
+
+Manually synchronizing the gateway credentials and topologies involves using 
ssh/scp to copy the topology-related files to all the participating Knox 
instances, and running the Knox CLI on each participating instance to define 
the gateway credential aliases.
+
+This manual process can be tedious and error-prone. As such, [ZooKeeper-based 
HA](#High+Availability+with+Apache+ZooKeeper) is recommended to simplify the 
management of these deployments.
+
+#### High Availability with Apache ZooKeeper ####
+
+Rather than manually keeping Knox HA instances in sync (in terms of 
credentials and topology), Knox can get its state from Apache ZooKeeper.
+By configuring all the Knox instances to monitor the same ZooKeeper ensemble, 
they can be kept in-sync by modifying the topology-related
+configuration and/or credential aliases at only one of the instances (using 
the Admin UI, Admin API, or Knox CLI).
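+
+The ZooKeeper monitoring itself is configured in gateway-site.xml. A rough 
+sketch is shown below; the registry name "sandbox-zookeeper-client" and the 
+ensemble address are placeholders, and the property names should be checked 
+against the remote configuration registry settings for your Knox version:
+
+    <property>
+        <name>gateway.remote.config.monitor.client</name>
+        <value>sandbox-zookeeper-client</value>
+    </property>
+    <property>
+        <name>gateway.remote.config.registry.sandbox-zookeeper-client</name>
+        <value>type=ZooKeeper;address=localhost:2181</value>
+    </property>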
+
+##### What is Automatically Synchronized Across Instances?
+
+* Provider Configurations
+* Descriptors
+* *Topologies* (generated only)
+* Credential Aliases
+
+When a provider configuration or descriptor is added to or updated in the 
ZooKeeper ensemble, all of the participating Knox instances will get the 
change, and the affected topologies will be [re]generated and [re]deployed. 
Similarly, if one of these is deleted, the affected topologies will be deleted 
and undeployed.
+
+When provider configurations and descriptors are added, modified or removed 
using the Admin UI or API (when the Knox instance is configured to monitor a 
ZooKeeper ensemble), then those changes will be automatically reflected in the 
associated ZooKeeper ensemble. Those changes will subsequently be consumed by 
all the other Knox instances monitoring that ensemble.
+By using the Admin UI or API, ssh/scp access to the Knox hosts can be avoided 
completely for the purpose of effecting topology changes.
+
+Similarly, when the Knox CLI is used to create or delete a gateway alias (when 
the Knox instance is configured to monitor a ZooKeeper ensemble), that alias 
change is reflected in the ZooKeeper ensemble, and all other Knox instances 
monitoring that ensemble will apply the change.
+
+
+##### What is NOT Automatically Synchronized Across Instances?
+
+* Topologies (XML)
+* Gateway config (e.g., gateway-site, gateway-logging, etc...)
+
+If you're creating/modifying topology XML files directly, then there is no 
automated support for keeping these in sync across Knox HA instances.
+
+However, if the Knox instances are running in an Apache Ambari-managed 
cluster, there is limited support for keeping topology XML files and gateway 
configuration synchronized across those instances.
+
+<br>
+
+#### High Availability with Apache HTTP Server + mod_proxy + 
mod_proxy_balancer ####
+
+##### 1 - Requirements #####
+
+###### openssl-devel ######
+
+openssl-devel is required for Apache Module mod_ssl.
+
+    sudo yum install openssl-devel
+
+###### Apache HTTP Server ######
+
+Apache HTTP Server 2.4.6 or later is required. See this document for 
installing and setting up Apache HTTP Server: 
http://httpd.apache.org/docs/2.4/install.html
+
+Hint: pass `--enable-ssl` to the `./configure` command to enable the 
generation of the Apache Module _mod_ssl_.
+
+###### Apache Module mod_proxy ######
+
+See this document for setting up Apache Module mod_proxy: 
http://httpd.apache.org/docs/2.4/mod/mod_proxy.html
+
+###### Apache Module mod_proxy_balancer ######
+
+See this document for setting up Apache Module mod_proxy_balancer: 
http://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html
+
+###### Apache Module mod_ssl ######
+
+See this document for setting up Apache Module mod_ssl: 
http://httpd.apache.org/docs/2.4/mod/mod_ssl.html
+
+##### 2 - Configuration example #####
+
+###### Generate certificate for Apache HTTP Server ######
+
+See this document for an example: 
http://www.akadia.com/services/ssh_test_certificate.html
+
+By convention, Apache HTTP Server and Knox certificates are put into the 
`/etc/apache2/ssl/` folder.
+
+###### Update Apache HTTP Server configuration file ######
+
+This file is located under `{APACHE_HOME}/conf/httpd.conf`.
+
+The following directives have to be added or uncommented in the configuration file:
+
+* LoadModule proxy_module modules/mod_proxy.so
+* LoadModule proxy_http_module modules/mod_proxy_http.so
+* LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
+* LoadModule ssl_module modules/mod_ssl.so
+* LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
+* LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so
+* LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
+* LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so
+* LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
+
+The following lines also have to be added to the file. Replace the 
placeholders (`${...}`) with real data:
+
+    Listen 443
+    <VirtualHost *:443>
+       SSLEngine On
+       SSLProxyEngine On
+       SSLCertificateFile ${PATH_TO_CERTIFICATE_FILE}
+       SSLCertificateKeyFile ${PATH_TO_CERTIFICATE_KEY_FILE}
+       SSLProxyCACertificateFile ${PATH_TO_PROXY_CA_CERTIFICATE_FILE}
+
+       ProxyRequests Off
+       ProxyPreserveHost Off
+
+       RequestHeader set X-Forwarded-Port "443"
+       Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" 
env=BALANCER_ROUTE_CHANGED
+       <Proxy balancer://mycluster>
+         BalancerMember ${HOST_#1} route=1
+         BalancerMember ${HOST_#2} route=2
+         ...
+         BalancerMember ${HOST_#N} route=N
+
+         ProxySet failontimeout=On lbmethod=${LB_METHOD} stickysession=ROUTEID 
+       </Proxy>
+
+       ProxyPass / balancer://mycluster/
+       ProxyPassReverse / balancer://mycluster/
+    </VirtualHost>
+
+Note:
+
+* SSLProxyEngine enables SSL between Apache HTTP Server and Knox instances;
+* SSLCertificateFile and SSLCertificateKeyFile have to point to certificate 
data of the Apache HTTP Server. Users will use this certificate for communications 
with Apache HTTP Server;
+* SSLProxyCACertificateFile has to point to Knox certificates.
+
+###### Start/stop Apache HTTP Server ######
+
+    APACHE_HOME/bin/apachectl -k start
+    APACHE_HOME/bin/apachectl -k stop
+
+###### Verify ######
+
+Use Knox samples.

Added: knox/trunk/books/1.3.0/config_hadoop_auth_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_hadoop_auth_provider.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_hadoop_auth_provider.md (added)
+++ knox/trunk/books/1.3.0/config_hadoop_auth_provider.md Wed Jan  2 17:31:29 
2019
@@ -0,0 +1,98 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### HadoopAuth Authentication Provider ###
+The HadoopAuth authentication provider for Knox integrates the use of the 
Apache Hadoop module for SPNEGO and delegation token based authentication. This 
introduces the same authentication pattern used across much of the Hadoop 
ecosystem to Apache Knox and allows clients to use the strong authentication 
and SSO capabilities of Kerberos.
+
+#### Configuration ####
+##### Overview #####
+As with all providers in the Knox gateway, the HadoopAuth provider is 
configured through provider parameters. The configuration parameters are the 
same parameters used within Apache Hadoop for the same capabilities. In this 
section, we provide an example configuration and description of each of the 
parameters. We do encourage the reader to refer to the Hadoop documentation for 
this as well. (see 
http://hadoop.apache.org/docs/current/hadoop-auth/Configuration.html)
+
+One of the interesting things to note about this configuration is the use of 
the `config.prefix` parameter. In Hadoop there may be multiple components with 
their own specific configuration values for these parameters and since they may 
get mixed into the same Configuration object - there needs to be a way to 
identify the component specific values. The `config.prefix` parameter is used 
for this and is prepended to each of the configuration parameters for this 
provider. Below, you see an example configuration where the value for 
config.prefix happens to be `hadoop.auth.config`. You will also notice that 
this same value is prepended to the name of the rest of the configuration 
parameters.
+
+    <provider>
+      <role>authentication</role>
+      <name>HadoopAuth</name>
+      <enabled>true</enabled>
+      <param>
+        <name>config.prefix</name>
+        <value>hadoop.auth.config</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.signature.secret</name>
+        <value>knox-signature-secret</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.type</name>
+        <value>kerberos</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.simple.anonymous.allowed</name>
+        <value>false</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.token.validity</name>
+        <value>1800</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.cookie.domain</name>
+        <value>novalocal</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.cookie.path</name>
+        <value>gateway/default</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.kerberos.principal</name>
+        
<value>HTTP/[email protected]</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.kerberos.keytab</name>
+        <value>/etc/security/keytabs/spnego.service.keytab</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.kerberos.name.rules</name>
+        <value>DEFAULT</value>
+      </param>
+    </provider>
+  
+
+#### Descriptions ####
+The following table describes the configuration parameters for the HadoopAuth 
provider:
+
+###### Config
+
+Name | Description | Default
+---------|-----------|----
+config.prefix            | If specified, all other configuration parameter 
names must start with the prefix. | none
+signature.secret|This is the secret used to sign the delegation token in the 
hadoop.auth cookie. This same secret needs to be used across all instances of 
the Knox gateway in a given cluster. Otherwise, the delegation token will fail 
validation and authentication will be repeated for each request. | A simple random 
number  
+type                     | This parameter needs to be set to `kerberos` | 
none (an exception is thrown if it is not set)
+simple.anonymous.allowed | This should always be false for a secure 
deployment. | true
+token.validity           | The validity -in seconds- of the generated 
authentication token. This is also used for the rollover interval when 
`signer.secret.provider` is set to random or ZooKeeper. | 36000 seconds
+cookie.domain            | Domain to use for the HTTP cookie that stores the 
authentication token | null
+cookie.path              | Path to use for the HTTP cookie that stores the 
authentication token | null
+kerberos.principal       | The web-application Kerberos principal name. The 
Kerberos principal name must start with HTTP/.... For example: 
`HTTP/localhost@LOCALHOST` | null
+kerberos.keytab          | The path to the keytab file containing the 
credentials for the kerberos principal. For example: 
`/Users/lmccay/lmccay.keytab` | null
+kerberos.name.rules      | The name of the ruleset for extracting the username 
from the kerberos principal. | DEFAULT
+
+###### REST Invocation
+Once a user logs in with kinit, their Kerberos session may be used across 
client requests with things like curl.
+The following curl command can be used to request a directory listing from 
HDFS while authenticating with SPNEGO via the `--negotiate` flag:
+
+    curl -k -i --negotiate -u : 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+
+

Added: knox/trunk/books/1.3.0/config_id_assertion.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_id_assertion.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_id_assertion.md (added)
+++ knox/trunk/books/1.3.0/config_id_assertion.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,299 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+### Identity Assertion ###
+The identity assertion provider within Knox plays the critical role of 
communicating the identity principal to be used within the Hadoop cluster to 
represent the identity that has been authenticated at the gateway.
+
+The general responsibilities of the identity assertion provider are to 
interrogate the current Java Subject that has been established by the 
authentication or federation provider and:
+
+1. determine whether it matches any principal mapping rules and apply them 
appropriately
+2. determine whether it matches any group principal mapping rules and apply 
them
+3. if it is determined that the principal will be impersonating another 
through a principal mapping rule then a Subject.doAs is required so providers 
farther downstream can determine the appropriate effective principal name and 
groups for the user
+
+#### Default Identity Assertion Provider ####
+The following configuration is required for asserting the user's identity to 
the Hadoop cluster using Pseudo or Simple "authentication" and for using 
Kerberos/SPNEGO for secure clusters.
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Default</name>
+        <enabled>true</enabled>
+    </provider>
+
+This particular configuration indicates that the Default identity assertion 
provider is enabled and that there are no principal mapping rules to apply to 
identities flowing from the authentication in the gateway to the backend Hadoop 
cluster services. The primary principal of the current subject will therefore 
be asserted via a query parameter or as a form parameter - i.e. 
`?user.name={primaryPrincipal}`
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Default</name>
+        <enabled>true</enabled>
+        <param>
+            <name>principal.mapping</name>
+            <value>guest=hdfs;</value>
+        </param>
+        <param>
+            <name>group.principal.mapping</name>
+            <value>*=users;hdfs=admin</value>
+        </param>
+    </provider>
+
+This configuration identifies the same identity assertion provider but does 
provide principal and group mapping rules. In this case, when a user is 
authenticated as "guest" his identity is actually asserted to the Hadoop 
cluster as "hdfs". In addition, since there are group principal mappings 
defined, he will also be considered as a member of the groups "users" and 
"admin". In this particular example the wildcard "*" is used to indicate that 
all authenticated users need to be considered members of the "users" group and 
that only the user "hdfs" is mapped to be a member of the "admin" group.
+
+**NOTE: These group memberships are currently only meaningful for Service 
Level Authorization using the AclsAuthorization provider. The groups are not 
asserted to the Hadoop cluster at this time. See the Authorization 
section within this guide to see how this is used.**
+
+The principal mapping aspect of the identity assertion provider is important 
to understand in order to fully utilize the authorization features of this 
provider.
+
+This feature allows us to map the authenticated principal to a runAs or 
impersonated principal to be asserted to the Hadoop services in the backend.
+
+When a principal mapping is defined that results in an impersonated principal, 
this impersonated principal is then the effective principal.
+
+If there is no mapping to another principal then the authenticated or primary 
principal is the effective principal.
+
+#### Principal Mapping ####
+
+    <param>
+        <name>principal.mapping</name>
+        <value>{primaryPrincipal}[,...]={impersonatedPrincipal}[;...]</value>
+    </param>
+
+For instance:
+
+    <param>
+        <name>principal.mapping</name>
+        <value>guest=hdfs</value>
+    </param>
+
+For multiple mappings:
+
+    <param>
+        <name>principal.mapping</name>
+        <value>guest,alice=hdfs;mary=alice2</value>
+    </param>
+
+#### Group Principal Mapping ####
+
+    <param>
+        <name>group.principal.mapping</name>
+        
<value>{userName[,*|userName...]}={groupName[,groupName...]}[,...]</value>
+    </param>
+
+For instance:
+
+    <param>
+        <name>group.principal.mapping</name>
+        <value>*=users;hdfs=admin</value>
+    </param>
+
+this configuration indicates that all (*) authenticated users are members of 
the "users" group and that user "hdfs" is a member of the admin group. Group 
principal mapping has been added along with the authorization provider 
described in this document.
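+
+For example, combined with the AclsAuthz authorization provider described in 
+the Authorization section of this guide, such a mapping lets an ACL grant 
+access by group rather than by individual user; an illustrative sketch:
+
+    <param>
+        <name>webhdfs.acl</name>
+        <value>*;admin;*</value>
+    </param>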
+
+#### Concat Identity Assertion Provider ####
+The Concat identity assertion provider allows for composition of a new user 
principal through the concatenation of optionally configured prefix and/or 
suffix provider parameters. This is a useful assertion provider for converting 
an incoming identity into a disambiguated identity within the Hadoop cluster 
based on what topology is used to access Hadoop.
+
+The following configuration would convert the user principal into a value that 
represents a domain specific identity where the identities used inside the 
Hadoop cluster represent this same separation.
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Concat</name>
+        <enabled>true</enabled>
+        <param>
+            <name>concat.suffix</name>
+            <value>_domain1</value>
+        </param>
+    </provider>
+
+The above configuration will result in all user interactions through that 
topology having their principal communicated to the Hadoop cluster with a 
domain designator concatenated to the username. Possibly useful for 
multi-tenant deployment scenarios.
+
+In addition to the concat.suffix parameter, the provider supports the setting 
of a prefix through a `concat.prefix` parameter.
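+
+For example, a sketch that prepends a designator instead (the "domain1_" value 
+is purely illustrative) might look like:
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Concat</name>
+        <enabled>true</enabled>
+        <param>
+            <name>concat.prefix</name>
+            <value>domain1_</value>
+        </param>
+    </provider>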
+
+#### SwitchCase Identity Assertion Provider ####
+The SwitchCase identity assertion provider solves issues where downstream 
ecosystem components require user and group principal names to be a specific 
case.
+An example of how this provider is enabled and configured within the 
`<gateway>` section of a topology file is shown below.
+This particular example will switch user principal names to lower case and 
group principal names to upper case.
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>SwitchCase</name>
+        <param>
+            <name>principal.case</name>
+            <value>lower</value>
+        </param>
+        <param>
+            <name>group.principal.case</name>
+            <value>upper</value>
+        </param>
+        <enabled>true</enabled>
+    </provider>
+
+These are the configuration parameters used to control the behavior of the 
provider.
+
+Param                | Description
+---------------------|------------
+principal.case       | The case mapping of user principal names. Choices are: 
lower, upper, none.  Defaults to lower.
+group.principal.case | The case mapping of group principal names. Choices are: 
lower, upper, none. Defaults to the setting of principal.case.
+
+If no parameters are provided, the defaults will result in both user and 
group principal names being switched to lower case.
+A setting of "none" or anything other than "upper" or "lower" leaves the case 
of the principal name unchanged.
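+
+A minimal configuration that relies entirely on those defaults (both user and 
+group principal names switched to lower case) would therefore be simply:
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>SwitchCase</name>
+        <enabled>true</enabled>
+    </provider>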
+
+#### Regular Expression Identity Assertion Provider ####
+The regular expression identity assertion provider allows incoming identities 
to be translated using a regular expression, template and lookup table.
+This will probably be most useful in conjunction with the HeaderPreAuth 
federation provider.
+
+There are three configuration parameters used to control the behavior of the 
provider.
+
+Param | Description
+------|-----------
+input | This is a regular expression that will be applied to the incoming 
identity. The most critical part of the regular expression is the group 
notation within the expression. In regular expressions, groups are expressed 
within parentheses. For example, in the regular expression "`(.*)@(.*?)\..*`" 
there are two groups. When this regular expression is applied to 
"[email protected]" group 1 matches "nobody" and group 2 matches "us". 
+output| This is a template that assembles the result identity. The result is 
assembled from the static text and the matched groups from the input regular 
expression. In addition, the matched group values can be looked up in the 
lookup table. An output value of "`{1}_{2}`" will result in "nobody_us".
+lookup| This lookup table provides a simple (albeit limited) way to translate 
text in the incoming identities. This configuration takes the form of "=" 
separated name/value pairs separated by ";". For example, a lookup setting is 
"us=USA;ca=CANADA". The lookup is invoked in the output setting by surrounding 
the desired group number in square brackets (i.e. []). Putting it all together, 
an output setting of "`{1}_[{2}]`" combined with input of "`(.*)@(.*?)\..*`" and 
lookup of "us=USA;ca=CANADA" will turn "[email protected]" into 
"nobody_USA".
+use.original.on.lookup.failure | (Optional) Default value is false. If set to 
true, it will preserve the original string if there is no match. For example, in 
the above lookup case, the email "[email protected]" (where "uk" has no entry in 
the lookup table) would be transformed to "nobody_"; if this property is set to 
true it would instead be transformed to "nobody_uk".
+
+Within the topology file the provider configuration might look like this.
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Regex</name>
+        <enabled>true</enabled>
+        <param>
+            <name>input</name>
+            <value>(.*)@(.*?)\..*</value>
+        </param>
+        <param>
+            <name>output</name>
+            <value>{1}_{[2]}</value>
+        </param>
+        <param>
+            <name>lookup</name>
+            <value>us=USA;ca=CANADA</value>
+        </param>
+    </provider>  
+
+Using curl with this type of configuration might produce the following 
results. 
+
+    curl -k --header "SM_USER: [email protected]" 
'https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY'
+    
+    {"Path":"/user/member_USA"}
+    
+    curl -k --header "SM_USER: [email protected]" 
'https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY'
+    
+    {"Path":"/user/member_CANADA"}
+
+### Hadoop Group Lookup Provider ###
+
+An identity assertion provider that looks up 'group membership' for 
authenticated users using Hadoop's group mapping service 
(GroupMappingServiceProvider).
+
+This allows existing investments in Hadoop to be leveraged within Knox and 
used within the access control policy enforcement at the perimeter.
+
+The 'role' for this provider is 'identity-assertion' and the name is 
'HadoopGroupProvider'.
+
+        <provider>
+            <role>identity-assertion</role>
+            <name>HadoopGroupProvider</name>
+            <enabled>true</enabled>
+            <param> ... </param>
+        </provider>
+
+### Configuration ###
+
+All the configuration for 'HadoopGroupProvider' resides in the provider 
section in a gateway topology file.
+The 'hadoop.security.group.mapping' property determines the implementation. 
This configuration may be centralized within the gateway-site.xml through the 
use of a special param to this provider called CENTRAL_GROUP_CONFIG_PREFIX. 
This indicates to the provider that the required configuration can be found 
within the gateway-site.xml file with the provided prefix.
+
+         <param>
+            <name>CENTRAL_GROUP_CONFIG_PREFIX</name>
+            <value>gateway.group.config.</value>
+         </param>
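+
+With that prefix in place, the corresponding entries live in gateway-site.xml 
+rather than in the topology. A sketch, assuming the `gateway.group.config.` 
+prefix shown above and the LDAP values used in the example later in this 
+section, might look like:
+
+    <property>
+        <name>gateway.group.config.hadoop.security.group.mapping</name>
+        <value>org.apache.hadoop.security.LdapGroupsMapping</value>
+    </property>
+    <property>
+        <name>gateway.group.config.hadoop.security.group.mapping.ldap.url</name>
+        <value>ldap://localhost:33389</value>
+    </property>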
+
+Some of the valid implementations are as follows:
+#### org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
+
+This is the default implementation and will be picked up if 
'hadoop.security.group.mapping' is not specified. This implementation will 
determine if the Java Native Interface (JNI) is available. If JNI is available, 
the implementation will use the API within Hadoop to resolve a list of groups 
for a user. If JNI is not available then the shell implementation, 
`org.apache.hadoop.security.ShellBasedUnixGroupsMapping`, is used, which shells 
out with the `bash -c id -gn <user> ; id -Gn <user>` command (for a Linux/Unix 
environment) or the `groups -F <user>` command (for a Windows environment) to 
resolve a list of groups for a user.
+
+#### org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMappingWithFallback
+
+As above, if JNI is available then the netgroup membership is obtained using 
the Hadoop native API; otherwise it falls back to 
ShellBasedUnixGroupsNetgroupMapping to resolve the list of groups for a user.
+
+#### org.apache.hadoop.security.ShellBasedUnixGroupsMapping
+
+Uses the `bash -c id -gn <user> ; id -Gn <user>` command (for a Linux/Unix 
environment) or the `groups -F <user>` command (for a Windows environment) to 
resolve the list of groups for a user.
+
+#### org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping
+
+Similar to `org.apache.hadoop.security.ShellBasedUnixGroupsMapping` except it 
uses the `getent netgroup` command to get netgroup membership.
+
+#### org.apache.hadoop.security.LdapGroupsMapping
+
+This implementation connects directly to an LDAP server to resolve the list of 
groups. However, this should only be used if the required groups reside 
exclusively in LDAP, and are not materialized on the Unix servers.
+
+#### org.apache.hadoop.security.CompositeGroupsMapping
+
+This implementation asks multiple other group mapping providers for 
determining group membership, see [Composite Groups 
Mapping](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/GroupsMapping.html#Composite_Groups_Mapping)
 for more details.
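+
+A rough sketch of a composite configuration is shown below; the property names 
+are taken from the Hadoop Composite Groups Mapping documentation linked above 
+and should be verified there, and the provider labels are arbitrary:
+
+    <param>
+        <name>hadoop.security.group.mapping</name>
+        <value>org.apache.hadoop.security.CompositeGroupsMapping</value>
+    </param>
+    <param>
+        <name>hadoop.security.group.mapping.providers</name>
+        <value>shell4users,ldap4users</value>
+    </param>
+    <param>
+        <name>hadoop.security.group.mapping.provider.shell4users</name>
+        <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
+    </param>
+    <param>
+        <name>hadoop.security.group.mapping.provider.ldap4users</name>
+        <value>org.apache.hadoop.security.LdapGroupsMapping</value>
+    </param>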
+
+For more information on the implementation and properties refer to Hadoop 
Group Mapping.
+
+### Example ###
+
+The following example snippet works with the demo LDAP server that ships with 
Apache Knox. Replace the existing 'Default' identity-assertion provider with 
the one below (HadoopGroupProvider).
+
+        <provider>
+            <role>identity-assertion</role>
+            <name>HadoopGroupProvider</name>
+            <enabled>true</enabled>
+            <param>
+                <name>hadoop.security.group.mapping</name>
+                <value>org.apache.hadoop.security.LdapGroupsMapping</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.bind.user</name>
+                <value>uid=tom,ou=people,dc=hadoop,dc=apache,dc=org</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.bind.password</name>
+                <value>tom-password</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.url</name>
+                <value>ldap://localhost:33389</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.base</name>
+                <value></value>
+            </param>
+            <param>
+                
<name>hadoop.security.group.mapping.ldap.search.filter.user</name>
+                
<value>(&amp;(|(objectclass=person)(objectclass=applicationProcess))(cn={0}))</value>
+            </param>
+            <param>
+                
<name>hadoop.security.group.mapping.ldap.search.filter.group</name>
+                <value>(objectclass=groupOfNames)</value>
+            </param>
+            <param>
+                
<name>hadoop.security.group.mapping.ldap.search.attr.member</name>
+                <value>member</value>
+            </param>
+            <param>
+                
<name>hadoop.security.group.mapping.ldap.search.attr.group.name</name>
+                <value>cn</value>
+            </param>
+        </provider>
+
+
+Here, we are working with the demo LDAP server running at 
'ldap://localhost:33389' which populates some dummy users for testing that we 
will use in this example. This example uses the user 'tom' for LDAP binding. If 
you have different LDAP/AD settings, you will have to update the properties 
accordingly. 
+
+Let's test our setup using the following command (assuming the gateway is 
started and listening on localhost:8443). Note that we are using credentials 
for the user 'sam' along with the command. 
+
+        curl -i -k -u sam:sam-password -X GET 
'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS' 
+
+The command should execute successfully and you should see the groups 
'scientist' and 'analyst', to which the user 'sam' belongs, in 
gateway-audit.log, i.e.
+
+        
||a99aa0ab-fc06-48f2-8df3-36e6fe37c230|audit|WEBHDFS|sam|||identity-mapping|principal|sam|success|Groups:
 [scientist, analyst]

Added: knox/trunk/books/1.3.0/config_kerberos.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_kerberos.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_kerberos.md (added)
+++ knox/trunk/books/1.3.0/config_kerberos.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,68 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Secure Clusters ###
+
+See the Hadoop documentation for setting up a secure Hadoop cluster
+http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html
+
+Once you have a Hadoop cluster that is using Kerberos for authentication, you 
have to do the following to configure Knox to work with that cluster.
+
+#### Create Unix account for Knox on Hadoop master nodes ####
+
+    useradd -g hadoop knox
+
+#### Create Kerberos principal, keytab for Knox ####
+
+One way of doing this, assuming your KDC realm is EXAMPLE.COM, is to ssh into 
your host running the KDC and execute `kadmin.local`.
+That will result in an interactive session in which you can execute commands.
+
+ssh into your host running the KDC:
+
+    kadmin.local
+    add_principal -randkey knox/[email protected]
+    ktadd -k knox.service.keytab -norandkey knox/[email protected]
+    exit
+
+
+#### Copy knox keytab to Knox host ####
+
+Add a Unix account for the knox user on the Knox host
+
+    useradd -g hadoop knox
+
+Copy the knox.service.keytab created on the KDC host to your Knox host at 
`{GATEWAY_HOME}/conf/knox.service.keytab`
+
+    chown knox knox.service.keytab
+    chmod 400 knox.service.keytab
+
+#### Update `krb5.conf` at `{GATEWAY_HOME}/conf/krb5.conf` on Knox host ####
+
+You could copy the `{GATEWAY_HOME}/templates/krb5.conf` file provided in the 
Knox binary download and customize it to suit your cluster.
+
+#### Update `krb5JAASLogin.conf` at `/etc/knox/conf/krb5JAASLogin.conf` on 
Knox host ####
+
+You could copy the `{GATEWAY_HOME}/templates/krb5JAASLogin.conf` file provided 
in the Knox binary download and customize it to suit your cluster.
+
+#### Update `gateway-site.xml` on Knox host ####
+
+Update `conf/gateway-site.xml` in your Knox installation and set the value of 
`gateway.hadoop.kerberos.secured` to true.
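+
+For example:
+
+    <property>
+        <name>gateway.hadoop.kerberos.secured</name>
+        <value>true</value>
+    </property>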
+
+#### Restart Knox ####
+
+After you complete the above configuration and restart Knox, Knox will use SPNEGO 
to authenticate with Hadoop services and Oozie.
+There is no change in the way you make calls to Knox whether you use curl or 
Knox DSL.

Added: knox/trunk/books/1.3.0/config_knox_sso.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_knox_sso.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_knox_sso.md (added)
+++ knox/trunk/books/1.3.0/config_knox_sso.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,149 @@
+## KnoxSSO Setup and Configuration
+
+### Introduction
+---
+
+Authentication of the Hadoop component UIs, and those of the overall 
ecosystem, is usually limited to Kerberos (which requires SPNEGO to be 
configured for the user's browser) and simple/pseudo. This often results in the 
UIs not being secured - even in secured clusters. This is where KnoxSSO 
provides value by providing WebSSO capabilities to the Hadoop cluster.
+
+By leveraging the hadoop-auth module in Hadoop common, we have introduced the 
ability to consume a common SSO cookie for web UIs while retaining the non-web 
browser authentication through Kerberos/SPNEGO. We do this by extending the 
AltKerberosAuthenticationHandler class, which provides the user-agent based 
multiplexing. 
+
+We also provide integration guidance within the developers guide for other 
applications to be able to participate in these SSO capabilities.
+
+The flexibility of the Apache Knox authentication and federation providers 
allows KnoxSSO to provide a normalization of authentication events through 
token exchange resulting in a common JWT (JSON WebToken) based token.
+
+KnoxSSO provides an abstraction for integrating any number of authentication 
systems and SSO solutions and enables participating web applications to scale 
to those solutions more easily. Without the token exchange capabilities offered 
by KnoxSSO each component UI would need to integrate with each desired solution 
on its own. With KnoxSSO they only need to integrate with the single solution 
and common token.
+
+In addition, KnoxSSO comes with its own form-based IdP. This allows for easily 
integrating a form-based login with the enterprise AD/LDAP server.
+
+This document describes the overall setup requirements for KnoxSSO and 
participating applications.
+
+### Form-based IdP Setup
+By default the `knoxsso.xml` topology contains an application element for the 
knoxauth login application. This is a simple single page application for 
providing a login page and authenticating the user with HTTP basic auth against 
AD/LDAP.
+
+    <application>
+        <name>knoxauth</name>
+    </application>
+
+The Shiro Provider has specialized configuration beyond the typical HTTP Basic 
authentication requirements for REST APIs or other non-knoxauth applications. 
You will notice below that there are a couple additional elements - namely, 
**redirectToUrl** and **restrictedCookies with WWW-Authenticate**. These are 
used to short-circuit the browser's HTTP basic dialog challenge so that we can 
use a form instead.
+
+    <provider>
+       <role>authentication</role>
+       <name>ShiroProvider</name>
+       <enabled>true</enabled>
+       <param>
+          <name>sessionTimeout</name>
+          <value>30</value>
+       </param>
+       <param>
+          <name>redirectToUrl</name>
+          <value>/gateway/knoxsso/knoxauth/login.html</value>
+       </param>
+       <param>
+          <name>restrictedCookies</name>
+          <value>rememberme,WWW-Authenticate</value>
+       </param>
+       <param>
+          <name>main.ldapRealm</name>
+          <value>org.apache.knox.gateway.shirorealm.KnoxLdapRealm</value>
+       </param>
+       <param>
+          <name>main.ldapContextFactory</name>
+          
<value>org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory</value>
+       </param>
+       <param>
+          <name>main.ldapRealm.contextFactory</name>
+          <value>$ldapContextFactory</value>
+       </param>
+       <param>
+          <name>main.ldapRealm.userDnTemplate</name>
+          <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+       </param>
+       <param>
+          <name>main.ldapRealm.contextFactory.url</name>
+          <value>ldap://localhost:33389</value>
+       </param>
+       <param>
+          <name>main.ldapRealm.authenticationCachingEnabled</name>
+          <value>false</value>
+       </param>
+       <param>
+          <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+          <value>simple</value>
+       </param>
+       <param>
+          <name>urls./**</name>
+          <value>authcBasic</value>
+       </param>
+    </provider>
+
+### KnoxSSO Service Setup
+
+#### knoxsso.xml Topology
+To enable KnoxSSO, we use the KnoxSSO topology for exposing an API that can be 
used to abstract the use of any number of enterprise or customer IdPs. By 
default, the `knoxsso.xml` file is configured for using the simple KnoxAuth 
application for form-based authentication against LDAP/AD. By swapping the 
Shiro authentication provider that is there out-of-the-box with another 
authentication or federation provider, an admin may leverage many of the 
existing providers for SSO for the UI components that participate in KnoxSSO.
+
+Just as with any Knox service, the KNOXSSO service is protected by the gateway 
providers defined above it. In this case, the ShiroProvider is taking care of 
HTTP Basic Auth against LDAP for us. Once the user authenticates, request 
processing continues to the KNOXSSO service that will create the required 
cookie and do the necessary redirects.
+
+The knoxsso.xml topology will result in a KnoxSSO URL that looks something 
like:
+
+    https://{gateway_host}:{gateway_port}/gateway/knoxsso/api/v1/websso
+
+This URL is needed when configuring applications that participate in KnoxSSO 
for a given deployment. We will refer to this as the Provider URL.
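+
+When a participating application redirects an unauthenticated browser to 
+KnoxSSO, it passes its own address in the originalUrl query parameter (the 
+same parameter referenced by knoxsso.redirect.whitelist.regex below). Such a 
+request might look like the following, with the braced values as placeholders:
+
+    https://{gateway_host}:{gateway_port}/gateway/knoxsso/api/v1/websso?originalUrl={application_url}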
+
+#### KnoxSSO Configuration Parameters
+
+Parameter                        | Description | Default
+-------------------------------- |------------ |----------- 
+knoxsso.cookie.name              | This optional setting allows the admin to 
set the name of the sso cookie to use to represent a successful authentication 
event. | hadoop-jwt
+knoxsso.cookie.secure.only       | This determines whether the browser is 
allowed to send the cookie over unsecured channels. This should always be set 
to true in production systems. If during development a relying party is not 
running SSL then you can turn this off. Running with it off exposes the cookie 
and underlying token for capture and replay by others. | true
+knoxsso.cookie.max.age           | optional: This indicates that a cookie can 
only live for a specified amount of time - in seconds. This should probably be 
left to the default which makes it a session cookie. Session cookies are 
discarded once the browser session is closed. | session
+knoxsso.cookie.domain.suffix     | optional: This indicates the portion of the 
request hostname that represents the domain to be used for the cookie domain. 
For single host development scenarios, the default behavior should be fine. For 
production deployments, the expected domain should be set and all configured 
URLs that are related to SSO should use this domain. Otherwise, the cookie will 
not be presented by the browser to mismatched URLs. | Default cookie domain or 
a domain derived from a hostname that includes more than 2 dots.
+knoxsso.token.ttl                | This indicates the lifespan of the token 
within the cookie. Once it expires a new cookie must be acquired from KnoxSSO. 
This is in milliseconds. For example, a value of 36000000 gives you 10 hours. | 
30000 (30 seconds)
+knoxsso.token.audiences          | This is a comma separated list of audiences 
to add to the JWT token. This is used to ensure that a token received by a 
participating application knows that the token was intended for use with that 
application. It is optional. In the event that an application has expected 
audiences and they are not present the token must be rejected. In the event 
where the token has audiences and the application has none expected then the 
token is accepted.| empty
+knoxsso.redirect.whitelist.regex | A semicolon-delimited list of regular 
expressions. The incoming originalUrl must match one of the expressions in 
order for KnoxSSO to redirect to it after authentication. Note that cookie use 
is still constrained to redirect destinations in the same domain as the KnoxSSO 
service - regardless of the expressions specified here. | The value of the 
gateway-site property named *gateway.dispatch.whitelist*. If that is not 
defined, the default allows only relative paths, localhost or destinations in 
the same domain as the Knox host (with or without SSL). This may need to be 
opened up for production use and actual participating applications.
+knoxsso.expected.params          | Optional: Comma separated list of query 
parameters that are expected and consumed by KnoxSSO and will not be passed on 
to originalUrl | empty
+
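+Taken together, these parameters are set as `<param>` elements on the KNOXSSO service in the knoxsso.xml topology. The following sketch is illustrative only; the domain suffix and whitelist regex shown are assumptions for an example deployment, not defaults:
+
+    <service>
+        <role>KNOXSSO</role>
+        <param>
+            <name>knoxsso.cookie.secure.only</name>
+            <value>true</value>
+        </param>
+        <param>
+            <name>knoxsso.cookie.domain.suffix</name>
+            <value>.example.com</value>
+        </param>
+        <param>
+            <name>knoxsso.token.ttl</name>
+            <value>36000000</value>
+        </param>
+        <param>
+            <name>knoxsso.redirect.whitelist.regex</name>
+            <value>^https?:\/\/(.*\.example\.com|localhost|127\.0\.0\.1).*$</value>
+        </param>
+    </service>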
+
+### Participating Application Configuration
+#### Hadoop Configuration Example
+The following is used as the KnoxSSO configuration in the Hadoop 
JWTRedirectAuthenticationHandler implementation. Any participating application 
will need similar configuration. Since JWTRedirectAuthenticationHandler extends 
the AltKerberosAuthenticationHandler, the typical Kerberos configuration 
parameters for authentication are also required.
+
+
+    <property>
+        <name>hadoop.http.authentication.type</name>
+        
<value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
+    </property>
+
+
+This is the handler classname in Hadoop auth for JWT token (KnoxSSO) support.
+
+
+    <property>
+        <name>hadoop.http.authentication.authentication.provider.url</name>
+        
<value>https://c6401.ambari.apache.org:8443/gateway/knoxsso/api/v1/websso</value>
+    </property>
+
+
+The above property is the SSO provider URL that points to the knoxsso endpoint.
+
+    <property>
+        <name>hadoop.http.authentication.public.key.pem</name>
+        <value>MIICVjCCAb+gAwIBAgIJAPPvOtuTxFeiMA0GCSqGSIb3DQEBBQUAMG0xCzAJBgNV
+      BAYTAlVTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MQ8wDQYDVQQKEwZI
+      YWRvb3AxDTALBgNVBAsTBFRlc3QxIDAeBgNVBAMTF2M2NDAxLmFtYmFyaS5hcGFj
+      aGUub3JnMB4XDTE1MDcxNjE4NDcyM1oXDTE2MDcxNTE4NDcyM1owbTELMAkGA1UE
+      BhMCVVMxDTALBgNVBAgTBFRlc3QxDTALBgNVBAcTBFRlc3QxDzANBgNVBAoTBkhh
+      ZG9vcDENMAsGA1UECxMEVGVzdDEgMB4GA1UEAxMXYzY0MDEuYW1iYXJpLmFwYWNo
+      ZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMFs/rymbiNvg8lDhsdA
+      qvh5uHP6iMtfv9IYpDleShjkS1C+IqId6bwGIEO8yhIS5BnfUR/fcnHi2ZNrXX7x
+      QUtQe7M9tDIKu48w//InnZ6VpAqjGShWxcSzR6UB/YoGe5ytHS6MrXaormfBg3VW
+      tDoy2MS83W8pweS6p5JnK7S5AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEANyVg6EzE
+      2q84gq7wQfLt9t047nYFkxcRfzhNVL3LB8p6IkM4RUrzWq4kLA+z+bpY2OdpkTOe
+      wUpEdVKzOQd4V7vRxpdANxtbG/XXrJAAcY/S+eMy1eDK73cmaVPnxPUGWmMnQXUi
+      TLab+w8tBQhNbq6BOQ42aOrLxA8k/M4cV1A=</value>
+    </property>
+
+The above property holds the KnoxSSO server's public key for signature verification. Adding it directly to the configuration like this is convenient and is easily done through Ambari for existing config files that accept custom properties. Since configuration files are generally readable only by root, this is a reasonably secure approach.
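+
+If you need to obtain the PEM value from the Knox gateway itself, one approach (a sketch, assuming the default keystore location and the default `gateway-identity` alias) is to export the gateway certificate with `keytool` and then copy the base64 body, without the BEGIN/END CERTIFICATE lines, into the property value:
+
+    # exports the gateway identity certificate in PEM form;
+    # you will typically be prompted for the keystore password (the Knox master secret)
+    keytool -exportcert -rfc -alias gateway-identity \
+        -keystore {GATEWAY_HOME}/data/security/keystores/gateway.jks \
+        -file knoxsso-public.pem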
+
+Individual UIs within the Hadoop ecosystem will have similar configuration for 
participating in the KnoxSSO websso capabilities.
+
+Blogs will be provided on the Apache Knox project site for these use cases as they become available.

Added: knox/trunk/books/1.3.0/config_knox_token.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_knox_token.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_knox_token.md (added)
+++ knox/trunk/books/1.3.0/config_knox_token.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,51 @@
+## KnoxToken Configuration
+
+### Introduction
+---
+
+The Knox Token Service enables clients to acquire the same JWT token that KnoxSSO uses for WebSSO flows with UIs, so that it can also be used for accessing REST APIs. By acquiring the token and setting it as a Bearer token on a request, a client is able to access REST APIs that are protected with the JWTProvider federation provider.
+
+This section describes the overall setup requirements and options for the KnoxToken service.
+
+### KnoxToken service
+The Knox Token Service can be configured in any topology and tailored to issue tokens to authenticated users and to constrain the usage of those tokens in a number of ways.
+
+    <service>
+       <role>KNOXTOKEN</role>
+       <param>
+          <name>knox.token.ttl</name>
+          <value>36000000</value>
+       </param>
+       <param>
+          <name>knox.token.audiences</name>
+          <value>tokenbased</value>
+       </param>
+       <param>
+          <name>knox.token.target.url</name>
+          <value>https://localhost:8443/gateway/tokenbased</value>
+       </param>
+    </service>
+
+#### KnoxToken Configuration Parameters
+
+Parameter                        | Description | Default
+-------------------------------- |------------ |----------- 
+knox.token.ttl                | This indicates the lifespan of the token. Once it expires, a new token must be acquired from the KnoxToken service. This is in milliseconds. The 36000000 in the topology above gives you 10 hours. | 30000 (30 seconds)
+knox.token.audiences          | This is a comma-separated list of audiences to add to the JWT token. It is used to ensure that an application receiving the token knows that the token was intended for use with that application. It is optional. If an endpoint has expected audiences and they are not present, the token must be rejected. If the token has audiences and the endpoint has none expected, then the token is accepted. | empty
+knox.token.target.url         | This is an optional configuration parameter to 
indicate the intended endpoint for which the token may be used. The KnoxShell 
token credential collector can pull this URL from a knoxtokencache file to be 
used in scripts. This eliminates the need to prompt for or hardcode endpoints 
in your scripts. | n/a
+
+Adding the KnoxToken configuration shown above to a topology that is protected with the ShiroProvider is a simple and effective way to expose an endpoint from which a Knox token can be requested. Once it is acquired, it may be used to access resources at the intended endpoints until it expires.
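+
+On the consuming side, the REST API topology (for example, a topology named `tokenbased` as referenced above) is protected with the JWT federation provider. A minimal sketch, assuming the same audience value that was configured for the KnoxToken service:
+
+    <provider>
+        <role>federation</role>
+        <name>JWTProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <name>knox.token.audiences</name>
+            <value>tokenbased</value>
+        </param>
+    </provider>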
+
+The following curl command can be used to acquire a token from the Knox Token 
service as configured in the sandbox topology:
+
+    curl -ivku guest:guest-password 
https://localhost:8443/gateway/sandbox/knoxtoken/api/v1/token
+    
+Resulting in a JSON response that contains the token, the expiration and the 
optional target endpoint:
+
+          
`{"access_token":"eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJndWVzdCIsImF1ZCI6InRva2VuYmFzZWQiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNDg5OTQyMTg4fQ.bcqSK7zMnABEM_HVsm3oWNDrQ_ei7PcMI4AtZEERY9LaPo9dzugOg3PA5JH2BRF-lXM3tuEYuZPaZVf8PenzjtBbuQsCg9VVImuu2r1YNVJlcTQ7OV-eW50L6OTI0uZfyrFwX6C7jVhf7d7YR1NNxs4eVbXpS1TZ5fDIRSfU3MU","target_url":"https://localhost:8443/gateway/tokenbased","token_type":"Bearer
 ","expires_in":1489942188233}`
+
+The following curl example shows how to add a bearer token to an Authorization 
header:
+
+    curl -ivk -H "Authorization: Bearer 
eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJndWVzdCIsImF1ZCI6InRva2VuYmFzZWQiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNDg5OTQyMTg4fQ.bcqSK7zMnABEM_HVsm3oWNDrQ_ei7PcMI4AtZEERY9LaPo9dzugOg3PA5JH2BRF-lXM3tuEYuZPaZVf8PenzjtBbuQsCg9VVImuu2r1YNVJlcTQ7OV-eW50L6OTI0uZfyrFwX6C7jVhf7d7YR1NNxs4eVbXpS1TZ5fDIRSfU3MU"
 https://localhost:8443/gateway/tokenbased/webhdfs/v1/tmp?op=LISTSTATUS
+
+See documentation in Client Details for KnoxShell init, list and destroy for 
commands that leverage this token service for CLI sessions.
\ No newline at end of file

Added: knox/trunk/books/1.3.0/config_ldap_authc_cache.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_ldap_authc_cache.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_ldap_authc_cache.md (added)
+++ knox/trunk/books/1.3.0/config_ldap_authc_cache.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,211 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### LDAP Authentication Caching ###
+
+Knox can be configured to cache LDAP authentication information. Knox leverages Shiro's built-in caching mechanisms and has been tested with Shiro's EhCache cache manager implementation.
+
+The following provider snippet demonstrates how to turn on the cache using the ShiroProvider. In addition to using `org.apache.knox.gateway.shirorealm.KnoxLdapRealm` in the Shiro configuration and setting up the cache, you *must* set the flag for enabling authentication caching to true. Please see the property `main.ldapRealm.authenticationCachingEnabled` below.
+
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.knox.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        <param>
+            <name>main.ldapGroupContextFactory</name>
+            
<value>org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapGroupContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://localhost:33389</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.userDnTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.authorizationEnabled</name>
+            <!-- defaults to: false -->
+            <value>true</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.cacheManager</name>
+            <value>org.apache.knox.gateway.shirorealm.KnoxCacheManager</value>
+        </param>
+        <param>
+            <name>main.securityManager.cacheManager</name>
+            <value>$cacheManager</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.authenticationCachingEnabled</name>
+            <value>true</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>guest-password</value>
+        </param>
+        <param>
+            <name>urls./**</name>
+            <value>authcBasic</value>
+        </param>
+    </provider>
+
+
+### Trying out caching ###
+
+Knox bundles a template topology file that can be used to try out the caching functionality.
+The template file, located under `{GATEWAY_HOME}/templates`, is `sandbox.knoxrealm.ehcache.xml`.
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm.ehcache.xml conf/topologies/sandbox.xml
+    bin/ldap.sh start
+    bin/gateway.sh start
+
+The following call to WebHDFS should report: `{"Path":"/user/tom"}`
+
+    curl  -i -v  -k -u tom:tom-password  -X GET 
https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+In order to see the cache working, LDAP can now be shut down and the user will still authenticate successfully.
+
+    bin/ldap.sh stop
+
+and then the following should still return successfully, as it did earlier.
+
+    curl  -i -v  -k -u tom:tom-password  -X GET 
https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+
+#### Advanced Caching Config ####
+
+By default the EhCache support in Shiro contains an ehcache.xml on its classpath, which is the following:
+
+    <ehcache name="knox-YOUR_TOPOLOGY_NAME">
+
+        <!-- Sets the path to the directory where cache .data files are 
created.
+
+             If the path is a Java System Property it is replaced by
+             its value in the running VM. The following properties are 
translated:
+
+                user.home - User's home directory
+                user.dir - User's current working directory
+                java.io.tmpdir - Default temp file path
+        -->
+        <diskStore path="java.io.tmpdir/shiro-ehcache"/>
+
+
+        <!--Default Cache configuration. These will applied to caches 
programmatically created through
+        the CacheManager.
+
+        The following attributes are required:
+
+        maxElementsInMemory            - Sets the maximum number of objects 
that will be created in memory
+        eternal                        - Sets whether elements are eternal. If 
eternal,  timeouts are ignored and the
+                                         element is never expired.
+        overflowToDisk                 - Sets whether elements can overflow to 
disk when the in-memory cache
+                                         has reached the maxInMemory limit.
+
+        The following attributes are optional:
+        timeToIdleSeconds              - Sets the time to idle for an element 
before it expires.
+                                         i.e. The maximum amount of time 
between accesses before an element expires
+                                         Is only used if the element is not 
eternal.
+                                         Optional attribute. A value of 0 
means that an Element can idle for infinity.
+                                         The default value is 0.
+        timeToLiveSeconds              - Sets the time to live for an element 
before it expires.
+                                         i.e. The maximum time between 
creation time and when an element expires.
+                                         Is only used if the element is not 
eternal.
+                                         Optional attribute. A value of 0 
means that and Element can live for infinity.
+                                         The default value is 0.
+        diskPersistent                 - Whether the disk store persists 
between restarts of the Virtual Machine.
+                                         The default value is false.
+        diskExpiryThreadIntervalSeconds- The number of seconds between runs of 
the disk expiry thread. The default value
+                                         is 120 seconds.
+        memoryStoreEvictionPolicy      - Policy would be enforced upon 
reaching the maxElementsInMemory limit. Default
+                                         policy is Least Recently Used 
(specified as LRU). Other policies available -
+                                         First In First Out (specified as 
FIFO) and Less Frequently Used
+                                         (specified as LFU)
+        -->
+
+        <defaultCache
+                maxElementsInMemory="10000"
+                eternal="false"
+                timeToIdleSeconds="120"
+                timeToLiveSeconds="120"
+                overflowToDisk="false"
+                diskPersistent="false"
+                diskExpiryThreadIntervalSeconds="120"
+                />
+
+        <!-- We want eternal="true" and no timeToIdle or timeToLive settings 
because Shiro manages session
+             expirations explicitly.  If we set it to false and then set 
corresponding timeToIdle and timeToLive properties,
+             ehcache would evict sessions without Shiro's knowledge, which 
would cause many problems
+            (e.g. "My Shiro session timeout is 30 minutes - why isn't a 
session available after 2 minutes?"
+                   Answer - ehcache expired it due to the timeToIdle property 
set to 120 seconds.)
+
+            diskPersistent=true since we want an enterprise session management 
feature - ability to use sessions after
+            even after a JVM restart.  -->
+        <cache name="shiro-activeSessionCache"
+               maxElementsInMemory="10000"
+               overflowToDisk="true"
+               eternal="true"
+               timeToLiveSeconds="0"
+               timeToIdleSeconds="0"
+               diskPersistent="true"
+               diskExpiryThreadIntervalSeconds="600"/>
+
+        <cache name="org.apache.shiro.realm.text.PropertiesRealm-0-accounts"
+               maxElementsInMemory="1000"
+               eternal="true"
+               overflowToDisk="true"/>
+
+    </ehcache>
+
+A custom configuration file (ehcache.xml) can be used in place of this in 
order to set specific caching configuration.
+
+In order to set the ehcache.xml file to use for a particular topology, set the 
following parameter in the configuration
+for the ShiroProvider:
+
+    <param>
+        <name>main.cacheManager.cacheManagerConfigFile</name>
+        <value>classpath:ehcache.xml</value>
+    </param>
+
+In the above example, place the ehcache.xml file under `{GATEWAY_HOME}/conf` 
and restart the gateway server.
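+
+As an illustration only, a minimal custom ehcache.xml that simply adjusts the default cache lifetimes (the values shown are assumptions for the example, not recommendations) could look like:
+
+    <ehcache>
+        <!-- applied to caches created programmatically through the CacheManager,
+             including the authentication cache enabled above -->
+        <defaultCache
+                maxElementsInMemory="10000"
+                eternal="false"
+                timeToIdleSeconds="300"
+                timeToLiveSeconds="600"
+                overflowToDisk="false"/>
+    </ehcache>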

Added: knox/trunk/books/1.3.0/config_ldap_group_lookup.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_ldap_group_lookup.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_ldap_group_lookup.md (added)
+++ knox/trunk/books/1.3.0/config_ldap_group_lookup.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,228 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### LDAP Group Lookup ###
+
+Knox can be configured to look up the LDAP groups that the authenticated user belongs to.
+Knox can look up both Static LDAP Groups and Dynamic LDAP Groups.
+The looked-up groups are populated as Principal(s) in the Java Subject of the authenticated user.
+Therefore, service authorization rules can be defined in terms of LDAP groups looked up from an LDAP directory, as illustrated in the sketch below.
+
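+For example, once group principals are populated, the AclsAuthz authorization provider can grant access based on those groups. A sketch; the group name `analyst` matches the sample LDIF used later in this section:
+
+    <provider>
+        <role>authorization</role>
+        <name>AclsAuthz</name>
+        <enabled>true</enabled>
+        <param>
+            <name>webhdfs.acl</name>
+            <value>*;analyst;*</value>
+        </param>
+    </provider>
+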
+To look up LDAP groups of the authenticated user from LDAP, you have to use `org.apache.knox.gateway.shirorealm.KnoxLdapRealm` in the Shiro configuration.
+
+Below is a sample Shiro configuration snippet from a topology file that was tested for looking up LDAP groups.
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <!--
+        Session timeout in minutes. This is really an idle timeout;
+        it defaults to 30 minutes if the property value is not defined.
+        The current client authentication will expire if the client idles
+        continuously for more than this value.
+        -->
+        <!-- defaults to: 30 minutes
+        <param>
+            <name>sessionTimeout</name>
+            <value>30</value>
+        </param>
+        -->
+
+        <!--
+          Use single KnoxLdapRealm to do authentication and ldap group look up
+        -->
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.knox.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        <param>
+            <name>main.ldapGroupContextFactory</name>
+            
<value>org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapGroupContextFactory</value>
+        </param>
+        <!-- defaults to: simple
+        <param>
+            <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+            <value>simple</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://localhost:33389</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.userDnTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+
+        <param>
+            <name>main.ldapRealm.authorizationEnabled</name>
+            <!-- defaults to: false -->
+            <value>true</value>
+        </param>
+        <!-- defaults to: simple
+        <param>
+            
<name>main.ldapRealm.contextFactory.systemAuthenticationMechanism</name>
+            <value>simple</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <!-- defaults to: groupOfNames
+        <param>
+            <name>main.ldapRealm.groupObjectClass</name>
+            <value>groupOfNames</value>
+        </param>
+        -->
+        <!-- defaults to: member
+        <param>
+            <name>main.ldapRealm.memberAttribute</name>
+            <value>member</value>
+        </param>
+        -->
+        <param>
+             <name>main.cacheManager</name>
+             
<value>org.apache.shiro.cache.MemoryConstrainedCacheManager</value>
+        </param>
+        <param>
+            <name>main.securityManager.cacheManager</name>
+            <value>$cacheManager</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <!-- the above element is the template for most ldap servers 
+            for active directory use the following instead and
+            remove the above configuration.
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>cn={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>${ALIAS=ldcSystemPassword}</value>
+        </param>
+
+        <param>
+            <name>urls./**</name> 
+            <value>authcBasic</value>
+        </param>
+
+    </provider>
+
+The configuration shown above would look up Static LDAP groups of the 
authenticated user and populate the group principals in the Java Subject 
corresponding to the authenticated user.
+
+If you want to look up Dynamic LDAP Groups instead of Static LDAP Groups, you 
would have to specify groupObjectClass and memberAttribute params as shown 
below:
+
+    <param>
+        <name>main.ldapRealm.groupObjectClass</name>
+        <value>groupOfUrls</value>
+    </param>
+    <param>
+        <name>main.ldapRealm.memberAttribute</name>
+        <value>memberUrl</value>
+    </param>
+
+### Template topology files and LDIF files to try out LDAP Group Look up ###
+
+Knox bundles some template topology files and ldif files that you can use to 
try and test LDAP Group Lookup and associated authorization ACLs.
+All these template files are located under `{GATEWAY_HOME}/templates`.
+
+
+#### LDAP Static Group Lookup Templates, authentication and group lookup from 
the same directory ####
+
+* topology file: sandbox.knoxrealm1.xml
+* ldif file: users.ldapgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm1.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar 
-persist-master
+
+The following call to WebHDFS should report `HTTP/1.1 401 Unauthorized`, as guest is not a member of group "analyst" and the authorization provider requires membership in group "analyst":
+
+    curl  -i -v  -k -u guest:guest-password  -X GET 
https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report `{"Path":"/user/sam"}`, as sam is a member of group "analyst", which satisfies the authorization provider's requirement:
+
+    curl  -i -v  -k -u sam:sam-password  -X GET 
https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+
+#### LDAP Static Group Lookup Templates, authentication and group lookup from different directories ####
+
+* topology file: sandbox.knoxrealm2.xml
+* ldif file: users.ldapgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm2.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar 
-persist-master
+
+The following call to WebHDFS should report `HTTP/1.1 401 Unauthorized`, as guest is not a member of group "analyst" and the authorization provider requires membership in group "analyst":
+
+    curl  -i -v  -k -u guest:guest-password  -X GET 
https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report `{"Path":"/user/sam"}`, as sam is a member of group "analyst", which satisfies the authorization provider's requirement:
+
+    curl  -i -v  -k -u sam:sam-password  -X GET 
https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+#### LDAP Dynamic Group Lookup Templates, authentication and dynamic group lookup from the same directory ####
+
+* topology file: sandbox.knoxrealmdg.xml
+* ldif file: users.ldapdynamicgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealmdg.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapdynamicgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar 
-persist-master
+
+Please note that users.ldapdynamicgroups.ldif also loads the necessary schema to create dynamic groups in Apache DS.
+
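+For reference, a dynamic group entry with objectClass `groupOfUrls` is defined by an LDAP URL in its `memberUrl` attribute. The following LDIF entry is purely illustrative and is an assumption for this explanation, not the exact content of users.ldapdynamicgroups.ldif:
+
+    dn: cn=directors,ou=groups,dc=hadoop,dc=apache,dc=org
+    objectclass: top
+    objectclass: groupOfURLs
+    cn: directors
+    memberURL: ldap:///ou=people,dc=hadoop,dc=apache,dc=org??sub?(uid=*)
+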
+The following call to WebHDFS should report `HTTP/1.1 401 Unauthorized`, as guest is not a member of dynamic group "directors" and the authorization provider requires membership in group "directors":
+
+    curl  -i -v  -k -u guest:guest-password  -X GET 
https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report `{"Path":"/user/bob"}`, as bob is a member of dynamic group "directors", which satisfies the authorization provider's requirement:
+
+    curl  -i -v  -k -u bob:bob-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+

Added: knox/trunk/books/1.3.0/config_metrics.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.3.0/config_metrics.md?rev=1850181&view=auto
==============================================================================
--- knox/trunk/books/1.3.0/config_metrics.md (added)
+++ knox/trunk/books/1.3.0/config_metrics.md Wed Jan  2 17:31:29 2019
@@ -0,0 +1,49 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Metrics ###
+
+See the KIP for details on the implementation of metrics available in the 
gateway.
+
+[Metrics KIP](https://cwiki.apache.org/confluence/display/KNOX/KIP-2+Metrics)
+
+#### Metrics Configuration ####
+
+Metrics configuration can be done in `gateway-site.xml`.
+
+The initial configuration is mainly for turning metrics collection on or off and for enabling reporters with their required configuration.
+
+The two initial reporters implemented are JMX and Graphite.
+
+    gateway.metrics.enabled 
+
+Turns metrics collection on or off. The default is 'true'.
+ 
+    gateway.jmx.metrics.reporting.enabled
+
+Turns the JMX reporter on or off. The default is 'true'.
+
+    gateway.graphite.metrics.reporting.enabled
+
+Turns the Graphite reporter on or off. The default is 'false'.
+
+    gateway.graphite.metrics.reporting.host
+    gateway.graphite.metrics.reporting.port
+    gateway.graphite.metrics.reporting.frequency
+
+The above are the host, port, and reporting frequency (in seconds) parameters for the Graphite reporter.
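+
+In `gateway-site.xml` these are regular gateway properties. A sketch enabling the Graphite reporter; the host, port, and frequency values shown are illustrative assumptions:
+
+    <property>
+        <name>gateway.graphite.metrics.reporting.enabled</name>
+        <value>true</value>
+    </property>
+    <property>
+        <name>gateway.graphite.metrics.reporting.host</name>
+        <value>graphite.example.com</value>
+    </property>
+    <property>
+        <name>gateway.graphite.metrics.reporting.port</name>
+        <value>2003</value>
+    </property>
+    <property>
+        <name>gateway.graphite.metrics.reporting.frequency</name>
+        <value>10</value>
+    </property>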
+

