Modified: knox/trunk/books/1.1.0/config_hadoop_auth_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_hadoop_auth_provider.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_hadoop_auth_provider.md (original)
+++ knox/trunk/books/1.1.0/config_hadoop_auth_provider.md Tue Jul  3 19:13:36 
2018
@@ -20,9 +20,9 @@ The HadoopAuth authentication provider f
 
 #### Configuration ####
 ##### Overview #####
-As with all providers in the Knox gateway, the HadoopAuth provider is 
configured through provider params. The configuration parameters are the same 
parameters used within Apache Hadoop for the same capabilities. In this 
section, we provide an example configuration and description of each of the 
parameters. We do encourage the reader to refer to the Hadoop documentation for 
this as well. (see 
http://hadoop.apache.org/docs/current/hadoop-auth/Configuration.html)
+As with all providers in the Knox gateway, the HadoopAuth provider is 
configured through provider parameters. The configuration parameters are the 
same parameters used within Apache Hadoop for the same capabilities. In this 
section, we provide an example configuration and a description of each of the 
parameters. We also encourage the reader to refer to the Hadoop documentation 
(see 
http://hadoop.apache.org/docs/current/hadoop-auth/Configuration.html)
 
-One of the interesting things to note about this configuration is the use of 
the config.prefix parameter. In Hadoop there may be multiple components with 
their own specific configuration values for these parameters and since they may 
get mixed into the same Configuration object - there needs to be a way to 
identify the component specific values. The config.prefix parameter is used for 
this and is prepended to each of the configuration parameters for this 
provider. Below, you see an example configuration where the value for 
config.prefix happens to be 'hadoop.auth.config'. You will also notice that 
this same value is prepended to the name of the rest of the configuration 
parameters.
+One of the interesting things to note about this configuration is the use of 
the `config.prefix` parameter. In Hadoop there may be multiple components with 
their own specific configuration values for these parameters, and since they 
may get mixed into the same Configuration object, there needs to be a way to 
identify the component-specific values. The `config.prefix` parameter is used 
for this and is prepended to each of the configuration parameters for this 
provider. Below, you see an example configuration where the value for 
`config.prefix` happens to be `hadoop.auth.config`. You will also notice that 
this same value is prepended to the name of the rest of the configuration 
parameters.
 
     <provider>
       <role>authentication</role>
@@ -77,22 +77,22 @@ The following tables describes the confi
 ###### Config
 
 Name | Description | Default
----------|-----------
-config.prefix|If specified, all other configuration parameter names must start 
with the prefix.|none
-signature.secret|This is the secret used to sign the delegation token in the 
hadoop.auth cookie. This same secret needs to be used across all instances of 
the Knox gateway in a given cluster. Otherwise, the delegation token will fail 
validation and authentication will be repeated each request.|a simple random 
number  
-type|This parameter needs to be set to kerberos.|none, would throw exception
-simple.anonymous.allowed|This should always be false for a secure 
deployment.|true
-token.validity|The validity -in seconds- of the generated authentication 
token. This is also used for the rollover interval when signer.secret.provider 
is set to random or zookeeper.|36000 seconds
-cookie.domain|domain to use for the HTTP cookie that stores the authentication 
token|null
-cookie.path|path to use for the HTTP cookie that stores the authentication 
token|null
-kerberos.principal|The web-application Kerberos principal name. The Kerberos 
principal name must start with HTTP/.... For example: 
HTTP/localhost@LOCALHOST|null
-kerberos.keytab|The path to the keytab file containing the credentials for the 
kerberos principal. For example: /Users/lmccay/lmccay.keytab|null
-kerberos.name.rules|The name of the ruleset for extracting the username from 
the kerberos principal.|DEFAULT
+---------|-----------|----
+config.prefix            | If specified, all other configuration parameter 
names must start with the prefix. | none
+signature.secret         | The secret used to sign the delegation token in the 
hadoop.auth cookie. This same secret needs to be used across all instances of 
the Knox gateway in a given cluster. Otherwise, the delegation token will fail 
validation and authentication will be repeated for each request. | A simple 
random number
+type                     | This parameter needs to be set to `kerberos`. | 
none (an exception is thrown if unset)
+simple.anonymous.allowed | This should always be false for a secure 
deployment. | true
+token.validity           | The validity (in seconds) of the generated 
authentication token. This is also used for the rollover interval when 
`signer.secret.provider` is set to random or ZooKeeper. | 36000 seconds
+cookie.domain            | Domain to use for the HTTP cookie that stores the 
authentication token | null
+cookie.path              | Path to use for the HTTP cookie that stores the 
authentication token | null
+kerberos.principal       | The web-application Kerberos principal name. The 
Kerberos principal name must start with HTTP/.... For example: 
`HTTP/localhost@LOCALHOST` | null
+kerberos.keytab          | The path to the keytab file containing the 
credentials for the Kerberos principal. For example: 
`/Users/lmccay/lmccay.keytab` | null
+kerberos.name.rules      | The name of the ruleset for extracting the username 
from the Kerberos principal. | DEFAULT
 
 ###### REST Invocation
-Once a user logs in with kinit then their kerberos session may be used across 
client requests with things like curl.
-The following curl command can be used to request a directory listing from 
HDFS while authenticating with SPNEGO via the --negotiate flag
+Once a user logs in with kinit, their Kerberos session may be used across 
client requests with tools like curl.
+The following curl command can be used to request a directory listing from 
HDFS while authenticating with SPNEGO via the `--negotiate` flag:
 
-    curl -k -i --negotiate -u 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+    curl -k -i --negotiate -u : 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
 
 

Modified: knox/trunk/books/1.1.0/config_id_assertion.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_id_assertion.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_id_assertion.md (original)
+++ knox/trunk/books/1.1.0/config_id_assertion.md Tue Jul  3 19:13:36 2018
@@ -22,10 +22,10 @@ The general responsibilities of the iden
 
 1. determine whether it matches any principal mapping rules and apply them 
appropriately
 2. determine whether it matches any group principal mapping rules and apply 
them
-3. if it is determined that the principal will be impersonating another 
through a principal mapping rule then a Subject.doAS is required in order for 
providers farther downstream can determine the appropriate effective principal 
name and groups for the user
+3. if it is determined that the principal will be impersonating another 
through a principal mapping rule then a `Subject.doAs` is required so providers 
farther downstream can determine the appropriate effective principal name and 
groups for the user
 
 #### Default Identity Assertion Provider ####
-The following configuration is required for asserting the users identity to 
the Hadoop cluster using Pseudo or Simple "authentication" and for using 
kerberos/SPNEGO for secure clusters.
+The following configuration is required for asserting the user's identity to 
the Hadoop cluster using Pseudo or Simple "authentication" and for using 
Kerberos/SPNEGO for secure clusters.
 
     <provider>
         <role>identity-assertion</role>
@@ -55,11 +55,11 @@ This configuration identifies the same i
 
 The principal mapping aspect of the identity assertion provider is important 
to understand in order to fully utilize the authorization features of this 
provider.
 
-This feature allows us to map the authenticated principal to a runas or 
impersonated principal to be asserted to the Hadoop services in the backend.
+This feature allows us to map the authenticated principal to a runAs or 
impersonated principal to be asserted to the Hadoop services in the backend.
 
 When a principal mapping is defined that results in an impersonated principal, 
this impersonated principal is then the effective principal.
 
-If there is no mapping to another principal then the authenticated or primary 
principal is then the effective principal.
+If there is no mapping to another principal then the authenticated or primary 
principal is the effective principal.
 
 #### Principal Mapping ####
 
@@ -115,11 +115,11 @@ The following configuration would conver
 
 The above configuration will result in all user interactions through that 
topology having their principal communicated to the Hadoop cluster with a 
domain designator concatenated to the username. This is possibly useful for 
multi-tenant deployment scenarios.
 
-In addition to the concat.suffix parameter, the provider supports the setting 
of a prefix through a concat.prefix parameter.
+In addition to the `concat.suffix` parameter, the provider supports the 
setting of a prefix through a `concat.prefix` parameter.
 
 #### SwitchCase Identity Assertion Provider ####
 The SwitchCase identity assertion provider solves issues where downstream 
ecosystem components require user and group principal names to be a specific 
case.
-An example of how this provider is enabled and configured within the 
\<gateway> section of a topology file is shown below.
+An example of how this provider is enabled and configured within the 
`<gateway>` section of a topology file is shown below.
 This particular example will switch user principal names to lower case and 
group principal names to upper case.
 
     <provider>
@@ -140,8 +140,8 @@ These are the configuration parameters u
 
 Param                | Description
 ---------------------|------------
-principal.case       | The case mapping of user principal names.  Choices are: 
lower, upper, none.  Defaults to lower.
-group.principal.case | The case mapping of group principal names.  Choices 
are: lower, upper, none. Defaults to setting of principal.case.
+principal.case       | The case mapping of user principal names. Choices are: 
lower, upper, none. Defaults to lower.
+group.principal.case | The case mapping of group principal names. Choices are: 
lower, upper, none. Defaults to setting of principal.case.
 
 If no parameters are provided, the full defaults will result in both user and 
group principal names being switched to lower case.
 A setting of "none" or anything other than "upper" or "lower" leaves the case 
of the principal name unchanged.
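
The case-mapping rules above can be sketched in a few lines of Python (a 
hypothetical illustration of the described behavior, not the Knox provider 
code):

```python
# Hypothetical sketch of the SwitchCase mapping described above
# (not the actual Knox implementation).
def switch_case(principal, case="lower"):
    if case == "lower":
        return principal.lower()
    if case == "upper":
        return principal.upper()
    # "none" or any unrecognized value leaves the name unchanged
    return principal

print(switch_case("Sam"))              # sam
print(switch_case("admins", "upper"))  # ADMINS
print(switch_case("MiXeD", "bogus"))   # MiXeD
```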
@@ -154,9 +154,9 @@ There are three configuration parameters
 
 Param | Description
 ------|-----------
-input | This is a regular expression that will be applied to the incoming 
identity. The most critical part of the regular expression is the group 
notation within the expression. In regular expressions, groups are expressed 
within parenthesis. For example in the regular expression "(.*)@(.*?)\..*" 
there are two groups. When this regular expression is applied to 
"[email protected]" group 1 matches "nobody" and group 2 matches "us". 
-output| This is a template that assembles the result identity. The result is 
assembled from the static text and the matched groups from the input regular 
expression. In addition, the matched group values can be looked up in the 
lookup table. An output value of "{1}_{2}" of will result in "nobody_us".       
          
-lookup| This lookup table provides a simple (albeit limited) way to translate 
text in the incoming identities. This configuration takes the form of "=" 
separated name values pairs separated by ";". For example a lookup setting is 
"us=USA;ca=CANADA". The lookup is invoked in the output setting by surrounding 
the desired group number in square brackets (i.e. []). Putting it all together, 
output setting of "{1}_[{2}]" combined with input of "(.*)@(.*?)\..*" and 
lookup of "us=USA;ca=CANADA" will turn "nobody@us.imaginary.tld" into 
"nobody@USA".
+input | This is a regular expression that will be applied to the incoming 
identity. The most critical part of the regular expression is the group 
notation within the expression. In regular expressions, groups are expressed 
within parentheses. For example, in the regular expression "`(.*)@(.*?)\..*`" 
there are two groups. When this regular expression is applied to 
"nobody@us.imaginary.tld" group 1 matches "nobody" and group 2 matches "us". 
+output| This is a template that assembles the result identity. The result is 
assembled from the static text and the matched groups from the input regular 
expression. In addition, the matched group values can be looked up in the 
lookup table. An output value of "`{1}_{2}`" will result in "nobody_us".
+lookup| This lookup table provides a simple (albeit limited) way to translate 
text in the incoming identities. This configuration takes the form of "=" 
separated name/value pairs separated by ";". For example, a lookup setting is 
"us=USA;ca=CANADA". The lookup is invoked in the output setting by surrounding 
the desired group number in square brackets (i.e. []). Putting it all together, 
an output setting of "`{1}@[{2}]`" combined with input of "`(.*)@(.*?)\..*`" 
and lookup of "us=USA;ca=CANADA" will turn "nobody@us.imaginary.tld" into 
"nobody@USA".
 use.original.on.lookup.failure | (Optional) Default value is false. If set to 
true, it will preserve the original string if there is no match. E.g. in the 
above lookup case, for email nobody@uk.imaginary.tld, it will be transformed to 
"nobody@"; if this property is set to true it will be transformed to 
"nobody@uk".
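
Putting the three parameters together, here is a small Python sketch of the 
transformation they describe (a hypothetical re-implementation for 
illustration only, not the Knox source; the identity strings are placeholders):

```python
import re

# Hypothetical sketch of how the input, output and lookup parameters compose
# in the Regex identity assertion provider (not the Knox implementation).
def map_identity(identity, input_pattern, output_template, lookup="",
                 use_original_on_lookup_failure=False):
    # lookup is ";"-separated "name=value" pairs, e.g. "us=USA;ca=CANADA"
    table = dict(pair.split("=", 1) for pair in lookup.split(";") if pair)
    match = re.match(input_pattern, identity)
    if not match:
        return identity
    result = output_template
    for i, group in enumerate(match.groups(), start=1):
        # "[{n}]" runs the matched group through the lookup table first
        fallback = group if use_original_on_lookup_failure else ""
        result = result.replace("[{%d}]" % i, table.get(group, fallback))
        result = result.replace("{%d}" % i, group)
    return result

pattern = r"(.*)@(.*?)\..*"
print(map_identity("nobody@us.imaginary.tld", pattern, "{1}_{2}"))
# nobody_us
print(map_identity("nobody@us.imaginary.tld", pattern, "{1}@[{2}]",
                   "us=USA;ca=CANADA"))
# nobody@USA
```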
 
 
 Within the topology file the provider configuration might look like this.
@@ -210,19 +210,19 @@ All the configuration for 'HadoopGroupPr
 The 'hadoop.security.group.mapping' property determines the implementation. 
Some of the valid implementations are as follows:
 #### org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
 
-This is the default implementation and will be picked up if 
'hadoop.security.group.mapping' is not specified. This implementation will 
determine if the Java Native Interface (JNI) is available. If JNI is available, 
the implementation will use the API within Hadoop to resolve a list of groups 
for a user. If JNI is not available then the shell implementation, 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping, is used, which shells 
out with the 'bash -c groups' command (for a Linux/Unix environment) or the 
'net group' command (for a Windows environment) to resolve a list of groups for 
a user.
+This is the default implementation and will be picked up if 
'hadoop.security.group.mapping' is not specified. This implementation will 
determine if the Java Native Interface (JNI) is available. If JNI is available, 
the implementation will use the API within Hadoop to resolve a list of groups 
for a user. If JNI is not available then the shell implementation, 
`org.apache.hadoop.security.ShellBasedUnixGroupsMapping`, is used, which shells 
out with the `bash -c id -gn <user> ; id -Gn <user>` command (for a Linux/Unix 
environment) or the `groups -F <user>` command (for a Windows environment) to 
resolve a list of groups for a user.
 
 #### org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMappingWithFallback
 
-As above, if JNI is available then we get the netgroup membership using Hadoop 
native API, else fallback on ShellBasedUnixGroupsMapping to resolve list of 
groups for a user.
+As above, if JNI is available then we get the netgroup membership using the 
Hadoop native API, else we fall back on `ShellBasedUnixGroupsNetgroupMapping` 
to resolve the list of groups for a user.
 
 #### org.apache.hadoop.security.ShellBasedUnixGroupsMapping
 
-Uses the 'bash -c groups' command (for a Linux/Unix environment) or the 'net 
group' command (for a Windows environment) to resolve list of groups for a user.
+Uses the `bash -c id -gn <user> ; id -Gn <user>` command (for a Linux/Unix 
environment) or the `groups -F <user>` command (for a Windows environment) to 
resolve the list of groups for a user.
 
 #### org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping
 
-Similar to org.apache.hadoop.security.ShellBasedUnixGroupsMapping except it 
uses 'getent netgroup' command to get netgroup membership.
+Similar to `org.apache.hadoop.security.ShellBasedUnixGroupsMapping` except it 
uses the `getent netgroup` command to get netgroup membership.
 
 #### org.apache.hadoop.security.LdapGroupsMapping
 
@@ -230,7 +230,7 @@ This implementation connects directly to
 
 #### org.apache.hadoop.security.CompositeGroupsMapping
 
-This implementation composits other group mapping providers for determining 
group membership, see [Composite Groups 
Mapping](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/GroupsMapping.html#Composite_Groups_Mapping)
 for more details.
+This implementation asks multiple other group mapping providers to determine 
group membership; see [Composite Groups 
Mapping](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/GroupsMapping.html#Composite_Groups_Mapping)
 for more details.
 
 For more information on the implementation and properties refer to Hadoop 
Group Mapping.
 
@@ -281,7 +281,7 @@ The following example snippet works with
         </provider>
 
 
-Here, we are working with the demo ldap server running at 
'ldap://localhost:33389' which populates some dummy users for testing that we 
will use in this example. This example uses the user 'tom' for LDAP binding.  
If you have different LDAP/AD settings you will have to update the properties 
accordingly. 
+Here, we are working with the demo LDAP server running at 
'ldap://localhost:33389' which populates some dummy users for testing that we 
will use in this example. This example uses the user 'tom' for LDAP binding. If 
you have different LDAP/AD settings, you will have to update the properties 
accordingly. 
 
 Let's test our setup using the following command (assuming the gateway is 
started and listening on localhost:8443). Note that we are using credentials 
for the user 'sam' along with the command. 
 

Modified: knox/trunk/books/1.1.0/config_kerberos.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_kerberos.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_kerberos.md (original)
+++ knox/trunk/books/1.1.0/config_kerberos.md Tue Jul  3 19:13:36 2018
@@ -64,5 +64,5 @@ Update `conf/gateway-site.xml` in your K
 
 #### Restart Knox ####
 
-After you do the above configurations and restart Knox, Knox would use SPNego 
to authenticate with Hadoop services and Oozie.
-There is no change in the way you make calls to Knox whether you use Curl or 
Knox DSL.
+After you complete the above configuration and restart Knox, it will use 
SPNEGO to authenticate with Hadoop services and Oozie.
+There is no change in the way you make calls to Knox whether you use curl or 
the Knox DSL.

Modified: knox/trunk/books/1.1.0/config_knox_sso.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_knox_sso.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_knox_sso.md (original)
+++ knox/trunk/books/1.1.0/config_knox_sso.md Tue Jul  3 19:13:36 2018
@@ -3,7 +3,7 @@
 ### Introduction
 ---
 
-Authentication of the Hadoop component UIs, and those of the overall 
ecosystem, is usually limited to Kerberos (which requires SPNEGO to be 
configured for the user's browser) and simple/psuedo. This often results in the 
UIs not being secured - even in secured clusters. This is where KnoxSSO 
provides value by providing WebSSO capabilities to the Hadoop cluster.
+Authentication of the Hadoop component UIs, and those of the overall 
ecosystem, is usually limited to Kerberos (which requires SPNEGO to be 
configured for the user's browser) and simple/pseudo. This often results in the 
UIs not being secured - even in secured clusters. This is where KnoxSSO 
provides value by providing WebSSO capabilities to the Hadoop cluster.
 
 By leveraging the hadoop-auth module in Hadoop common, we have introduced the 
ability to consume a common SSO cookie for web UIs while retaining the non-web 
browser authentication through Kerberos/SPNEGO. We do this by extending the 
AltKerberosAuthenticationHandler class which provides the user-agent based 
multiplexing.
 
@@ -13,12 +13,12 @@ The flexibility of the Apache Knox authe
 
 KnoxSSO provides an abstraction for integrating any number of authentication 
systems and SSO solutions and enables participating web applications to scale 
to those solutions more easily. Without the token exchange capabilities offered 
by KnoxSSO each component UI would need to integrate with each desired solution 
on its own. With KnoxSSO they only need to integrate with the single solution 
and common token.
 
-In addition, KnoxSSO comes with its own Form-based IDP. This allows for easily 
integrating a form-based login with the enterprise AD/LDAP server.
+In addition, KnoxSSO comes with its own form-based IdP. This allows for easily 
integrating a form-based login with the enterprise AD/LDAP server.
 
 This document describes the overall setup requirements for KnoxSSO and 
participating applications.
 
-### Form-based IDP Setup
-By default the knoxsso.xml topology contains an application element for the 
knoxauth login application. This is a simple single page application for 
providing a login page and authenticating the user with HTTP basic auth against 
AD/LDAP.
+### Form-based IdP Setup
+By default the `knoxsso.xml` topology contains an application element for the 
knoxauth login application. This is a simple single-page application that 
provides a login page and authenticates the user with HTTP Basic Auth against 
AD/LDAP.
 
     <application>
         <name>knoxauth</name>
@@ -79,7 +79,7 @@ The Shiro Provider has specialized confi
 ### KnoxSSO Service Setup
 
 #### knoxsso.xml Topology
-To enable KnoxSSO, we use the KnoxSSO topology for exposing an API that can be 
used to abstract the use of any number of enterprise or customer IDPs. By 
default, the knoxsso.xml file is configured for using the simple KnoxAuth 
application for form-based authentication against LDAP/AD. By swapping the 
Shiro authentication provider that is there out-of-the-box with another 
authentication or federation provider, an admin may leverage many of the 
existing providers for SSO for the UI components that participate in KnoxSSO.
+To enable KnoxSSO, we use the KnoxSSO topology for exposing an API that can be 
used to abstract the use of any number of enterprise or customer IdPs. By 
default, the `knoxsso.xml` file is configured for using the simple KnoxAuth 
application for form-based authentication against LDAP/AD. By swapping the 
Shiro authentication provider that is there out-of-the-box with another 
authentication or federation provider, an admin may leverage many of the 
existing providers for SSO for the UI components that participate in KnoxSSO.
 
 Just as with any Knox service, the KNOXSSO service is protected by the gateway 
providers defined above it. In this case, the ShiroProvider is taking care of 
HTTP Basic Auth against LDAP for us. Once the user authenticates, request 
processing continues to the KNOXSSO service, which will create the required 
cookie and do the necessary redirects.
 
@@ -93,10 +93,10 @@ This URL is needed when configuring appl
 
 Parameter                        | Description | Default
 -------------------------------- |------------ |----------- 
-knoxsso.cookie.name       | This optional setting allows the admin to set the 
name of the sso cookie to use to represent a successful authentication event. | 
hadoop-jwt
-knoxsso.cookie.secure.only       | This determines whether the browser is 
allowed to send the cookie over unsecured channels. This should always be set 
to true in production systems. If during development a relying party is not 
running ssl then you can turn this off. Running with it off exposes the cookie 
and underlying token for capture and replay by others. | true
+knoxsso.cookie.name              | This optional setting allows the admin to 
set the name of the SSO cookie used to represent a successful authentication 
event. | hadoop-jwt
+knoxsso.cookie.secure.only       | This determines whether the browser is 
allowed to send the cookie over unsecured channels. This should always be set 
to true in production systems. If during development a relying party is not 
running SSL then you can turn this off. Running with it off exposes the cookie 
and underlying token for capture and replay by others. | true
 knoxsso.cookie.max.age           | optional: This indicates that a cookie can 
only live for a specified amount of time - in seconds. This should probably be 
left to the default which makes it a session cookie. Session cookies are 
discarded once the browser session is closed. | session
-knoxsso.cookie.domain.suffix     | optional: This indicates the portion of the 
request hostname that represents the domain to be used for the cookie domain. 
For single host development scenarios the default behavior should be fine. For 
production deployments, the expected domain should be set and all configured 
URLs that are related to SSO should use this domain. Otherwise, the cookie will 
not be presented by the browser to mismatched URLs. | Default cookie domain or 
a domain derived from a hostname that includes more than 2 dots.
+knoxsso.cookie.domain.suffix     | optional: This indicates the portion of the 
request hostname that represents the domain to be used for the cookie domain. 
For single host development scenarios, the default behavior should be fine. For 
production deployments, the expected domain should be set and all configured 
URLs that are related to SSO should use this domain. Otherwise, the cookie will 
not be presented by the browser to mismatched URLs. | Default cookie domain or 
a domain derived from a hostname that includes more than 2 dots.
 knoxsso.token.ttl                | This indicates the lifespan of the token 
within the cookie. Once it expires, a new cookie must be acquired from KnoxSSO. 
This is in milliseconds. The 36000000 in the topology above gives you 10 
hours. | 30000 (30 seconds)
 knoxsso.token.audiences          | This is a comma-separated list of audiences 
to add to the JWT token. This is used to ensure that a token received by a 
participating application knows that the token was intended for use with that 
application. It is optional. If an application has expected audiences and they 
are not present, the token must be rejected. If the token has audiences and 
the application expects none, then the token is accepted. | empty
 knoxsso.redirect.whitelist.regex | A semicolon-delimited list of regular 
expressions. The incoming originalUrl must match one of the expressions in 
order for KnoxSSO to redirect to it after authentication. Note that cookie use 
is still constrained to redirect destinations in the same domain as the KnoxSSO 
service - regardless of the expressions specified here. | The value of the 
gateway-site property named *gateway.dispatch.whitelist*. If that is not 
defined, the default allows only relative paths, localhost or destinations in 
the same domain as the Knox host (with or without SSL). This may need to be 
opened up for production use and actual participating applications.
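
These parameters are supplied as `<param>` elements on the KNOXSSO service 
within the `knoxsso.xml` topology. A minimal sketch follows; the values shown 
are illustrative examples, not recommendations:

```xml
<service>
    <role>KNOXSSO</role>
    <param>
        <name>knoxsso.cookie.secure.only</name>
        <value>true</value>
    </param>
    <param>
        <!-- 36000000 ms = 10 hours -->
        <name>knoxsso.token.ttl</name>
        <value>36000000</value>
    </param>
    <param>
        <!-- example whitelist: allow only localhost redirect destinations -->
        <name>knoxsso.redirect.whitelist.regex</name>
        <value>^https?:\/\/(localhost|127\.0\.0\.1)(:[0-9]+)?\/.*$</value>
    </param>
</service>
```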
@@ -145,4 +145,4 @@ The above property holds the KnoxSSO ser
 
 Individual UIs within the Hadoop ecosystem will have similar configuration for 
participating in the KnoxSSO WebSSO capabilities.
 
-Blogs will be provided on the Apache Knox project site for these usecases as 
they become available.
\ No newline at end of file
+Blogs will be provided on the Apache Knox project site for these use cases as 
they become available.

Modified: knox/trunk/books/1.1.0/config_ldap_authc_cache.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_ldap_authc_cache.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_ldap_authc_cache.md (original)
+++ knox/trunk/books/1.1.0/config_ldap_authc_cache.md Tue Jul  3 19:13:36 2018
@@ -101,7 +101,7 @@ To try this out
     bin/ldap.sh start
     bin/gateway.sh start
 
-The following call to WebHDFS should report: {"Path":"/user/tom"}
+The following call to WebHDFS should report: `{"Path":"/user/tom"}`
 
     curl  -i -v  -k -u tom:tom-password  -X GET 
https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
 
@@ -116,7 +116,7 @@ and then the following should still retu
 
 #### Advanced Caching Config ####
 
-By default the ehcache support in shiro contains a ehcache.xml in its 
classpath which is the following
+By default the EhCache support in Shiro contains an `ehcache.xml` on its 
classpath, which is the following:
 
     <ehcache>
 

Modified: knox/trunk/books/1.1.0/config_ldap_group_lookup.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_ldap_group_lookup.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_ldap_group_lookup.md (original)
+++ knox/trunk/books/1.1.0/config_ldap_group_lookup.md Tue Jul  3 19:13:36 2018
@@ -19,7 +19,7 @@
 
 Knox can be configured to look up LDAP groups that the authenticated user 
belongs to.
 Knox can look up both Static LDAP Groups and Dynamic LDAP Groups.
-The looked up groups are populated as Principal(s) in the Java Subject of 
authenticated user.
+The looked up groups are populated as Principal(s) in the Java Subject of the 
authenticated user.
 Therefore, service authorization rules can be defined in terms of LDAP groups 
looked up from an LDAP directory.
 
 To look up LDAP groups of the authenticated user from LDAP, you have to use 
`org.apache.knox.gateway.shirorealm.KnoxLdapRealm` in the Shiro configuration.
@@ -135,7 +135,7 @@ Please see below a sample Shiro configur
 
     </provider>
 
-The configuration shown above would look up Static LDAP groups of 
authenticated user and populate the group principals in the Java Subject 
corresponding to authenticated user.
+The configuration shown above would look up Static LDAP groups of the 
authenticated user and populate the group principals in the Java Subject 
corresponding to the authenticated user.
 
 If you want to look up Dynamic LDAP Groups instead of Static LDAP Groups, you 
would have to specify the `groupObjectClass` and `memberAttribute` params as 
shown below:
 
@@ -151,7 +151,7 @@ If you want to look up Dynamic LDAP Grou
 ### Template topology files and LDIF files to try out LDAP Group Look up ###
 
 Knox bundles some template topology files and LDIF files that you can use to 
try out and test LDAP Group Lookup and associated authorization ACLs.
-All these template files are located under {GATEWAY_HOME}/templates.
+All these template files are located under `{GATEWAY_HOME}/templates`.
 
 
 #### LDAP Static Group Lookup Templates, authentication and group lookup from 
the same directory ####

Modified: knox/trunk/books/1.1.0/config_metrics.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_metrics.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_metrics.md (original)
+++ knox/trunk/books/1.1.0/config_metrics.md Tue Jul  3 19:13:36 2018
@@ -46,5 +46,4 @@ Turns on or off the graphite reporter, d
     gateway.graphite.metrics.reporting.frequency
 
 The above are the host, port and frequency of reporting (in seconds) 
parameters for the graphite reporter.
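As a sketch, the reporter could be wired up in `gateway-site.xml` along these lines (the host, port and frequency values are illustrative, and the host and port property names are assumed to share the `gateway.graphite.metrics.reporting` prefix shown above):

    <property>
        <name>gateway.graphite.metrics.reporting.host</name>
        <value>graphite.example.com</value>
    </property>
    <property>
        <name>gateway.graphite.metrics.reporting.port</name>
        <value>2003</value>
    </property>
    <property>
        <name>gateway.graphite.metrics.reporting.frequency</name>
        <value>10</value>
    </property>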
- 
- 
+

Modified: knox/trunk/books/1.1.0/config_mutual_authentication_ssl.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_mutual_authentication_ssl.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_mutual_authentication_ssl.md (original)
+++ knox/trunk/books/1.1.0/config_mutual_authentication_ssl.md Tue Jul  3 
19:13:36 2018
@@ -17,7 +17,7 @@
 
 ### Mutual Authentication with SSL ###
 
-To establish a stronger trust relationship between client and server, we 
provide mutual authentication with SSL via client certs. This is particularly 
useful in providing additional validation for Preauthenticated SSO with HTTP 
Headers. Rather than just ip address validation, connections will only be 
accepted by Knox from clients presenting trusted certificates.
+To establish a stronger trust relationship between client and server, we 
provide mutual authentication with SSL via client certs. This is particularly 
useful in providing additional validation for Preauthenticated SSO with HTTP 
Headers. Rather than just IP address validation, connections will only be 
accepted by Knox from clients presenting trusted certificates.
 
 This behavior is configured for the entire gateway instance within the 
gateway-site.xml file. All topologies deployed within the gateway instance with 
mutual authentication enabled will require incoming connections to present 
trusted client certificates during the SSL handshake. Otherwise, connections 
will be refused.
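A minimal sketch of enabling this in `gateway-site.xml` (assuming the `gateway.client.auth.needed` property; verify the property name against your Knox release):

    <property>
        <name>gateway.client.auth.needed</name>
        <value>true</value>
    </property>

With this set, the SSL handshake will request a client certificate and reject connections whose certificate is not trusted by the configured truststore.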
 
@@ -36,4 +36,4 @@ We can specify the path to a dedicated t
     knoxcli.sh create-alias gateway-truststore-password --value {pwd} 
   
 Otherwise, the master secret will be used.
-If the truststore is not a JKS type then it can be set via 
`gateway.truststore.type`.
\ No newline at end of file
+If the truststore is not a JKS type then it can be set via 
`gateway.truststore.type`.

Modified: knox/trunk/books/1.1.0/config_pac4j_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_pac4j_provider.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_pac4j_provider.md (original)
+++ knox/trunk/books/1.1.0/config_pac4j_provider.md Tue Jul  3 19:13:36 2018
@@ -105,7 +105,7 @@ and protect the KnoxSSO service by the p
       </param>
     </service>
 
-Notice that the pac4j callback url is the KnoxSSO url (`pac4j.callbackUrl` 
parameter). An additional `pac4j.cookie.domain.suffix` parameter allows you to 
define the domain suffix for the pac4j cookies.
+Notice that the pac4j callback URL is the KnoxSSO URL (`pac4j.callbackUrl` 
parameter). An additional `pac4j.cookie.domain.suffix` parameter allows you to 
define the domain suffix for the pac4j cookies.
 
 In this example, the pac4j provider is configured to authenticate users via a 
CAS server hosted at: https://casserverpac4j.herokuapp.com/login.
 
@@ -136,57 +136,57 @@ Otherwise, you can use Facebook, Twitter
 
 Name | Value
 -----|------
-facebook.id | Identifier of the OAuth Facebook application
+facebook.id     | Identifier of the OAuth Facebook application
 facebook.secret | Secret of the OAuth Facebook application
-facebook.scope | Requested scope at Facebook login
+facebook.scope  | Requested scope at Facebook login
 facebook.fields | Fields returned by Facebook
-twitter.id | Identifier of the OAuth Twitter application
-twitter.secret | Secret of the OAuth Twitter application
+twitter.id      | Identifier of the OAuth Twitter application
+twitter.secret  | Secret of the OAuth Twitter application
 
 ##### For CAS support:
 
 Name | Value
 -----|------
-cas.loginUrl | Login url of the CAS server
+cas.loginUrl | Login URL of the CAS server
 cas.protocol | CAS protocol (`CAS10`, `CAS20`, `CAS20_PROXY`, `CAS30`, 
`CAS30_PROXY`, `SAML`)
 
 ##### For SAML support:
 
 Name | Value
 -----|------
-saml.keystorePassword | Password of the keystore (storepass)
-saml.privateKeyPassword | Password for the private key (keypass)
-saml.keystorePath | Path of the keystore
-saml.identityProviderMetadataPath | Path of the identity provider metadata
+saml.keystorePassword              | Password of the keystore (storepass)
+saml.privateKeyPassword            | Password for the private key (keypass)
+saml.keystorePath                  | Path of the keystore
+saml.identityProviderMetadataPath  | Path of the identity provider metadata
 saml.maximumAuthenticationLifetime | Maximum lifetime for authentication
-saml.serviceProviderEntityId | Identifier of the service provider
-saml.serviceProviderMetadataPath | Path of the service provider metadata
+saml.serviceProviderEntityId       | Identifier of the service provider
+saml.serviceProviderMetadataPath   | Path of the service provider metadata
 
 > Get more details on the [pac4j 
 > wiki](https://github.com/pac4j/pac4j/wiki/Clients#saml-support).
 
-The SSO url in your SAML 2 provider config will need to include a special 
query parameter that lets the pac4j provider know that the request is coming 
back from the provider rather than from a redirect from a KnoxSSO participating 
application. This query parameter is "pac4jCallback=true".
+The SSO URL in your SAML 2 provider config will need to include a special 
query parameter that lets the pac4j provider know that the request is coming 
back from the provider rather than from a redirect from a KnoxSSO participating 
application. This query parameter is "pac4jCallback=true".
 
 This results in a URL that looks something like:
 
-  
https://hostname:8443/gateway/knoxsso/api/v1/websso?pac4jCallback=true&client_name=SAML2Client
+    
https://hostname:8443/gateway/knoxsso/api/v1/websso?pac4jCallback=true&client_name=SAML2Client
 
 This also means that the SP Entity ID should also include this query parameter 
as appropriate for your provider.
-Often something like the above URL is used for both the SSO url and SP Entity 
ID.
+Often something like the above URL is used for both the SSO URL and SP Entity 
ID.
 
 ##### For OpenID Connect support:
 
 Name | Value
 -----|------
-oidc.id | Identifier of the OpenID Connect provider
-oidc.secret | Secret of the OpenID Connect provider
-oidc.discoveryUri | Direcovery URI of the OpenID Connect provider
-oidc.useNonce | Whether to use nonce during login process
+oidc.id                    | Identifier of the OpenID Connect provider
+oidc.secret                | Secret of the OpenID Connect provider
+oidc.discoveryUri          | Discovery URI of the OpenID Connect provider
+oidc.useNonce              | Whether to use nonce during login process
 oidc.preferredJwsAlgorithm | Preferred JWS algorithm
-oidc.maxClockSkew | Max clock skew during login process
-oidc.customParamKey1 | Key of the first custom parameter
-oidc.customParamValue1 | Value of the first custom parameter
-oidc.customParamKey2 | Key of the second custom parameter
-oidc.customParamValue2 | Value of the second custom parameter
+oidc.maxClockSkew          | Max clock skew during login process
+oidc.customParamKey1       | Key of the first custom parameter
+oidc.customParamValue1     | Value of the first custom parameter
+oidc.customParamKey2       | Key of the second custom parameter
+oidc.customParamValue2     | Value of the second custom parameter
 
 > Get more details on the [pac4j 
 > wiki](https://github.com/pac4j/pac4j/wiki/Clients#openid-connect-support).
 
@@ -194,4 +194,4 @@ In fact, you can even define several ide
 
 ##### UI invocation
 
-In a browser, when calling your Hadoop service (for example: 
`https://127.0.0.1:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS`), you are 
redirected to the identity provider for login. Then, after a successful 
authentication, your are redirected back to your originally requested url and 
your KnoxSSO session is initialized.
+In a browser, when calling your Hadoop service (for example: 
`https://127.0.0.1:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS`), you are 
redirected to the identity provider for login. Then, after a successful 
authentication, you are redirected back to your originally requested URL and 
your KnoxSSO session is initialized.

Modified: knox/trunk/books/1.1.0/config_pam_authn.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_pam_authn.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_pam_authn.md (original)
+++ knox/trunk/books/1.1.0/config_pam_authn.md Tue Jul  3 19:13:36 2018
@@ -17,13 +17,13 @@
 
 ### PAM based Authentication ###
 
-There is a large number of pluggable authentication modules available on many 
linux installations and from vendors of authentication solutions that are great 
to leverage for authenticating access to Hadoop through the Knox Gateway. In 
addition to LDAP support described in this guide, the ShiroProvider also 
includes support for PAM based authentication for unix based systems.
+There is a large number of pluggable authentication modules available on many 
Linux installations and from vendors of authentication solutions that are great 
to leverage for authenticating access to Hadoop through the Knox Gateway. In 
addition to LDAP support described in this guide, the ShiroProvider also 
includes support for PAM based authentication for Unix-based systems.
 
 This opens up the integration possibilities to many other readily available 
authentication mechanisms as well as other implementations for LDAP based 
authentication. More flexibility may be available through various PAM modules 
for group lookup, more complicated LDAP schemas or other areas where the 
KnoxLdapRealm is not sufficient.
 
 #### Configuration ####
 ##### Overview #####
-The primary motivation for leveraging PAM based authentication is to provide 
the ability to use the configuration provided by existing PAM modules that are 
available in a system's /etc/pam.d/ directory. Therefore, the solution provided 
here is as simple as possible in order to allow the PAM module config itself to 
be the source of truth. What we do need to configure is the fact that we are 
using PAM through the main.pamRealm parameter and the KnoxPamRealm classname 
and the particular PAM module to use with the main.pamRealm.service parameter 
in the below example we have 'login'.
+The primary motivation for leveraging PAM based authentication is to provide 
the ability to use the configuration provided by existing PAM modules that are 
available in a system's `/etc/pam.d/` directory. Therefore, the solution 
provided here is as simple as possible in order to allow the PAM module config 
itself to be the source of truth. What we do need to configure is the fact that 
we are using PAM through the `main.pamRealm` parameter and the KnoxPamRealm 
classname and the particular PAM module to use with the `main.pamRealm.service` 
parameter; in the example below we have 'login'.
 
     <provider> 
        <role>authentication</role> 
@@ -48,7 +48,7 @@ The primary motivation for leveraging PA
     </provider>
   
 
-As a non-normative example of a PAM config file see the below from my macbook 
/etc/pam.d/login:
+As a non-normative example of a PAM config file see the below from my MacBook 
`/etc/pam.d/login`:
 
     # login: auth account password session
     auth       optional       pam_krb5.so use_kcminit
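If you would rather not reuse the system `login` service, a dedicated PAM service file can be referenced instead. The following is a hypothetical `/etc/pam.d/knox` (module choices are illustrative) that `main.pamRealm.service` could then name as `knox`:

    # knox: auth account
    auth       required       pam_unix.so
    account    required       pam_unix.so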

Modified: knox/trunk/books/1.1.0/config_preauth_sso_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_preauth_sso_provider.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_preauth_sso_provider.md (original)
+++ knox/trunk/books/1.1.0/config_preauth_sso_provider.md Tue Jul  3 19:13:36 
2018
@@ -40,16 +40,16 @@ The following table describes the config
 ##### Descriptions #####
 
 Name | Description | Default
----------|-----------
-preauth.validation.method|Optional parameter that indicates the types of trust 
validation to perform on incoming requests. There could be one or more 
comma-separated validators defined in this property. If there are multiple 
validators, Apache Knox validates each validator in the same sequence as it is 
configured. This works similar to short-circuit AND operation i.e. if any 
validator fails, Knox does not perform further validation and returns overall 
failure immediately. Possible values are: null, preauth.default.validation, 
preauth.ip.validation, custom validator (details described in [Custom 
Validator](dev-guide.html#Validator)). Failure results in a 403 forbidden HTTP 
status response.|null - which means 'preauth.default.validation' that is  no 
validation will be performed and that we are assuming that the network security 
and external authentication system is sufficient. 
-preauth.ip.addresses|Optional parameter that indicates the list of trusted ip 
addresses. When preauth.ip.validation is indicated as the validation method 
this parameter must be provided to indicate the trusted ip address set. 
Wildcarded IPs may be used to indicate subnet level trust. ie. 127.0.*|null - 
which means that no validation will be performed.
-preauth.custom.header|Required parameter for indicating a custom header to use 
for extracting the preauthenticated principal. The value extracted from this 
header is utilized as the PrimaryPrincipal within the established Subject. An 
incoming request that is missing the configured header will be refused with a 
401 unauthorized HTTP status.|SM_USER for SiteMinder usecase
-preauth.custom.group.header|Optional parameter for indicating a HTTP header 
name that contains a comma separated list of groups. These are added to the 
authenticated Subject as group principals. A missing group header will result 
in no groups being extracted from the incoming request and a log entry but 
processing will continue.|null - which means that there will be no group 
principals extracted from the request and added to the established Subject.
+---------|-----------|--------
+preauth.validation.method   | Optional parameter that indicates the types of 
trust validation to perform on incoming requests. There could be one or more 
comma-separated validators defined in this property. If there are multiple 
validators, Apache Knox validates each validator in the same sequence as it is 
configured. This works similar to short-circuit AND operation i.e. if any 
validator fails, Knox does not perform further validation and returns overall 
failure immediately. Possible values are: null, preauth.default.validation, 
preauth.ip.validation, custom validator (details described in [Custom 
Validator](dev-guide.html#Validator)). Failure results in a 403 forbidden HTTP 
status response. | null - which means 'preauth.default.validation', i.e. no 
validation will be performed and we assume that the network security and 
external authentication system are sufficient. 
+preauth.ip.addresses        | Optional parameter that indicates the list of 
trusted IP addresses. When preauth.ip.validation is indicated as the validation 
method this parameter must be provided to indicate the trusted IP address set. 
Wildcarded IPs may be used to indicate subnet level trust, e.g. 127.0.* | null - 
which means that no validation will be performed.
+preauth.custom.header       | Required parameter for indicating a custom 
header to use for extracting the preauthenticated principal. The value 
extracted from this header is utilized as the PrimaryPrincipal within the 
established Subject. An incoming request that is missing the configured header 
will be refused with a 401 unauthorized HTTP status. | SM_USER for SiteMinder 
use case
+preauth.custom.group.header | Optional parameter for indicating an HTTP header 
name that contains a comma-separated list of groups. These are added to the 
authenticated Subject as group principals. A missing group header will result 
in no groups being extracted from the incoming request and a log entry but 
processing will continue. | null - which means that there will be no group 
principals extracted from the request and added to the established Subject.
 
 NOTE: Mutual authentication can be used to establish a strong trust 
relationship between clients and servers while using the Preauthenticated SSO 
provider. See the configuration for Mutual Authentication with SSL in this 
document.
 
 ##### Configuration for SiteMinder
-The following is an example of a configuration of the preauthenticated sso 
provider that leverages the default SM_USER header name - assuming use with CA 
SiteMinder. It further configures the validation based on the IP address from 
the incoming request.
+The following is an example of a configuration of the preauthenticated SSO 
provider that leverages the default SM_USER header name - assuming use with CA 
SiteMinder. It further configures the validation based on the IP address from 
the incoming request.
 
     <provider>
         <role>federation</role>
@@ -64,7 +64,7 @@ The following curl command can be used t
 
     curl -k -i --header "SM_USER: guest" -v 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
 
-Omitting the --header "SM_USER: guest" above will result in a rejected request.
+Omitting the `--header "SM_USER: guest"` above will result in a rejected 
request.
 
 ##### Configuration for IBM Tivoli AM
 As an example for configuring the preauthenticated SSO provider for another 
SSO provider, the following illustrates the values used for IBM's Tivoli Access 
Manager:
@@ -84,4 +84,4 @@ The following curl command can be used t
 
     curl -k -i --header "iv_user: guest" --header "iv_group: admin" -v 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
 
-Omitting the --header "iv_user: guest" above will result in a rejected request.
+Omitting the `--header "iv_user: guest"` above will result in a rejected 
request.

Modified: knox/trunk/books/1.1.0/config_sso_cookie_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_sso_cookie_provider.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_sso_cookie_provider.md (original)
+++ knox/trunk/books/1.1.0/config_sso_cookie_provider.md Tue Jul  3 19:13:36 
2018
@@ -22,12 +22,12 @@ The SSOCookieProvider enables the federa
 
 Knox Gateway needs a pluggable mechanism for consuming these cookies and 
federating the KnoxSSO authentication event as an asserted identity in its 
interaction with the Hadoop cluster for REST API invocations. This provider is 
useful when an application that is integrated with KnoxSSO for authentication 
also consumes REST APIs through the Knox Gateway.
 
-Based on our understanding of the websso flow it should behave like:
+Based on our understanding of the WebSSO flow, it should behave as follows:
 
 * SSOCookieProvider checks for hadoop-jwt cookie and in its absence redirects 
to the configured SSO provider URL (knoxsso endpoint)
-* The configured Provider on the KnoxSSO endpoint challenges the user in a 
provider specific way (presents form, redirects to SAML IdP, etc)
+* The configured Provider on the KnoxSSO endpoint challenges the user in a 
provider specific way (presents form, redirects to SAML IdP, etc.)
 * The authentication provider on KnoxSSO validates the identity of the user 
through credentials/tokens
-* The WebSSO service exchanges the normalized Java Subject into a JWT token 
and sets it on the response as a cookie named hadoop-jwt
+* The WebSSO service exchanges the normalized Java Subject into a JWT token 
and sets it on the response as a cookie named `hadoop-jwt`
 * The WebSSO service then redirects the user agent back to the originally 
requested URL - the requested Knox service subsequent invocations will find the 
cookie in the incoming request and not need to engage the WebSSO service again 
until it expires.
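The flow above is driven by a federation provider configured with the KnoxSSO endpoint to redirect to; a sketch (the endpoint URL is illustrative):

    <provider>
        <role>federation</role>
        <name>SSOCookieProvider</name>
        <enabled>true</enabled>
        <param>
            <name>sso.authentication.provider.url</name>
            <value>https://localhost:8443/gateway/idp/api/v1/websso</value>
        </param>
    </provider>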
 
 #### Configuration ####
@@ -69,8 +69,8 @@ The following table describes the config
 ##### Descriptions #####
 
 Name | Description | Default
----------|-----------
-sso.authentication.provider.url|Required parameter that indicates the location 
of the KnoxSSO endpoint and where to redirect the useragent when no SSO cookie 
is found in the incoming request.|N/A
+---------|-----------|---------
+sso.authentication.provider.url | Required parameter that indicates the 
location of the KnoxSSO endpoint and where to redirect the user agent when no 
SSO cookie is found in the incoming request. | N/A
 
 ### JWT Provider ###
 
@@ -79,7 +79,7 @@ The JWT federation provider accepts JWT
 
 This provider is closely related to the Knox Token Service and is essentially 
the provider that is used to consume the tokens issued by the Knox Token 
Service.
 
-Typical deployments have the KnoxToken service defined in a topology such as 
sandbox.xml that authenticates users based on username and password which as 
with the ShiroProvider. They also have a topology dedicated to clients that 
wish to use KnoxTokens to access Hadoop resources through Knox. 
+Typical deployments have the KnoxToken service defined in a topology such as 
`sandbox.xml` that authenticates users based on username and password, as with 
the ShiroProvider. They also have a topology dedicated to clients that 
wish to use KnoxTokens to access Hadoop resources through Knox. 
 
 The following provider configuration can be used within such a topology.
 
@@ -93,14 +93,14 @@ The following provider configuration can
        </param>
     </provider>
 
-The knox.token.audiences parameter above indicates that any token in an 
incoming request must contain an audience claim called "tokenbased". In this 
case, the idea is that the issuing KnoxToken service will be configured to 
include such an audience claim and that the resulting token is valid to use in 
the topology that contains configuration like above. This would generally be 
the name of the topology but you can standardize on anything.
+The `knox.token.audiences` parameter above indicates that any token in an 
incoming request must contain an audience claim called "tokenbased". In this 
case, the idea is that the issuing KnoxToken service will be configured to 
include such an audience claim and that the resulting token is valid to use in 
the topology that contains configuration like above. This would generally be 
the name of the topology but you can standardize on anything.
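On the issuing side, the matching audience claim might be configured on the KnoxToken service along these lines (a sketch; see the Knox Token service documentation for the authoritative parameter list):

    <service>
        <role>KNOXTOKEN</role>
        <param>
            <name>knox.token.audiences</name>
            <value>tokenbased</value>
        </param>
    </service>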
 
 The following table describes the configuration options for the JWT federation 
provider:
 
 ##### Descriptions #####
 
 Name | Description | Default
----------|-----------
-knox.token.audiences|Optional parameter. This parameter allows the 
administrator to constrain the use of tokens on this endpoint to those that 
have tokens with at least one of the configured audience claims. These claims 
have associated configuration within the KnoxToken service as well. This 
provides an interesting way to make sure that the token issued based on 
authentication to a particular LDAP server or other IdP is accepted but not 
others.|N/A
+---------|-----------|--------
+knox.token.audiences | Optional parameter. This parameter allows the 
administrator to constrain the use of tokens on this endpoint to those that 
have tokens with at least one of the configured audience claims. These claims 
have associated configuration within the KnoxToken service as well. This 
provides an interesting way to make sure that the token issued based on 
authentication to a particular LDAP server or other IdP is accepted but not 
others.|N/A
 
 See the documentation for the Knox Token service for related details.

Modified: knox/trunk/books/1.1.0/config_webappsec_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/config_webappsec_provider.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/config_webappsec_provider.md (original)
+++ knox/trunk/books/1.1.0/config_webappsec_provider.md Tue Jul  3 19:13:36 2018
@@ -24,15 +24,15 @@ There are three aspects of web applicati
 Cross site request forgery (CSRF) attacks attempt to force an authenticated 
user to 
 execute functionality without their knowledge. By presenting them with a link 
or image that when clicked invokes a request to another site with which the 
user may have already established an active session.
 
-CSRF is entirely a browser based attack. Some background knowledge of how 
browsers work enables us to provide a filter that will prevent CSRF attacks. 
HTTP requests from a web browser performed via form, image, iframe, etc are 
unable to set custom HTTP headers. The only way to create a HTTP request from a 
browser with a custom HTTP header is to use a technology such as Javascript 
XMLHttpRequest or Flash. These technologies can set custom HTTP headers, but 
have security policies built in to prevent web sites from sending requests to 
each other 
+CSRF is entirely a browser-based attack. Some background knowledge of how 
browsers work enables us to provide a filter that will prevent CSRF attacks. 
HTTP requests from a web browser performed via form, image, iframe, etc. are 
unable to set custom HTTP headers. The only way to create an HTTP request from a 
browser with a custom HTTP header is to use a technology such as JavaScript 
XMLHttpRequest or Flash. These technologies can set custom HTTP headers but 
have security policies built in to prevent web sites from sending requests to 
each other 
 unless specifically allowed by policy. 
 
-This means that a website www.bad.com cannot send a request to  
http://bank.example.com with the custom header X-XSRF-Header unless they use a 
technology such as a XMLHttpRequest. That technology  would prevent such a 
request from being made unless the bank.example.com domain specifically allowed 
it. This then results in a REST endpoint that can only be called via 
XMLHttpRequest (or similar technology).
+This means that a website www.bad.com cannot send a request to 
http://bank.example.com with the custom header X-XSRF-Header unless it uses a 
technology such as XMLHttpRequest. That technology would prevent such a 
request from being made unless the bank.example.com domain specifically allowed 
it. This then results in a REST endpoint that can only be called via 
XMLHttpRequest (or similar technology).
 
 NOTE: by enabling this protection within the topology, this custom header will 
be required for *all* clients that interact with it - not just browsers.
 
 #### CORS
-For security reasons, browsers restrict cross-origin HTTP requests initiated 
from within scripts.  For example, XMLHttpRequest follows the same-origin 
policy. So, a web application using XMLHttpRequest could only make HTTP 
requests to its own domain. To improve web applications, developers asked 
browser vendors to allow XMLHttpRequest to make cross-domain requests.
+For security reasons, browsers restrict cross-origin HTTP requests initiated 
from within scripts. For example, XMLHttpRequest follows the same-origin 
policy. So, a web application using XMLHttpRequest could only make HTTP 
requests to its own domain. To improve web applications, developers asked 
browser vendors to allow XMLHttpRequest to make cross-domain requests.
 
 Cross Origin Resource Sharing is a way to explicitly alter the same-origin 
policy for a given application or API. In order to allow for applications to 
make cross domain requests through Apache Knox, we need to configure the CORS 
filter of the WebAppSec provider.
 
@@ -42,9 +42,9 @@ HTTP Strict Transport Security (HSTS) is
 
 #### Configuration ####
 ##### Overview #####
-As with all providers in the Knox gateway, the web app security provider is 
configured through provider params. Unlike many other providers, the web app 
security provider may actually host multiple vulnerability/security filters. 
Currently, we only have implementations for CSRF, CORS and HTTP STS but others 
will follow and you may be interested in creating your own.
+As with all providers in the Knox gateway, the web app security provider is 
configured through provider parameters. Unlike many other providers, the web 
app security provider may actually host multiple vulnerability/security 
filters. Currently, we only have implementations for CSRF, CORS and HTTP STS 
but others might follow, and you may be interested in creating your own.
 
-Because of this one-to-many provider/filter relationship, there is an extra 
configuration element for this provider per filter. As you can see in the 
sample below, the actual filter configuration is defined entirely within the 
params of the WebAppSec provider.
+Because of this one-to-many provider/filter relationship, there is an extra 
configuration element for this provider per filter. As you can see in the 
sample below, the actual filter configuration is defined entirely within the 
parameters of the WebAppSec provider.
 
     <provider>
         <role>webappsec</role>
@@ -65,18 +65,18 @@ The following tables describes the confi
 
 ###### Config
 
-Name | Description | Default
----------|-----------
-csrf.enabled|This param enables the CSRF protection capabilities|false  
-csrf.customHeader|This is an optional param that indicates the name of the 
header to be used in order to determine that the request is from a trusted 
source. It defaults to the header name described by the NSA in its guidelines 
for dealing with CSRF in REST.|X-XSRF-Header
-csrf.methodsToIgnore|This is also an optional param that enumerates the HTTP 
methods to allow through without the custom HTTP header. This is useful for 
allowing things like GET requests from the URL bar of a browser but it assumes 
that the GET request adheres to REST principals in terms of being idempotent. 
If this cannot be assumed then it would be wise to not include GET in the list 
of methods to ignore.|GET,OPTIONS,HEAD
+Name                 | Description | Default
+---------------------|-------------|--------
+csrf.enabled         | This parameter enables the CSRF protection capabilities 
| false  
+csrf.customHeader    | This is an optional parameter that indicates the name 
of the header to be used in order to determine that the request is from a 
trusted source. It defaults to the header name described by the NSA in its 
guidelines for dealing with CSRF in REST. | X-XSRF-Header
+csrf.methodsToIgnore | This is also an optional parameter that enumerates the 
HTTP methods to allow through without the custom HTTP header. This is useful 
for allowing things like GET requests from the URL bar of a browser, but it 
assumes that the GET request adheres to REST principals in terms of being 
idempotent. If this cannot be assumed then it would be wise to not include GET 
in the list of methods to ignore. |  GET,OPTIONS,HEAD
 
 ###### REST Invocation
 The following curl command can be used to request a directory listing from 
HDFS while passing in the expected header X-XSRF-Header.
 
     curl -k -i --header "X-XSRF-Header: valid" -v -u guest:guest-password 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
 
-Omitting the --header "X-XSRF-Header: valid" above should result in an HTTP 
400 bad_request.
+Omitting the `--header "X-XSRF-Header: valid"` above should result in an HTTP 
400 bad_request.
 
 Disabling the provider will then allow a request that is missing the header 
through. 
 
@@ -86,10 +86,10 @@ Disabling the provider will then allow a
 
 Name                         | Description | Default
 -----------------------------|-------------|---------
-cors.enabled                 | This param enables the CORS capabilities|false
-cors.allowGenericHttpRequests| {true\|false} defaults to true. If true generic 
HTTP requests will be allowed to pass through the filter, else only valid and 
accepted CORS requests will be allowed (strict CORS filtering).|true
+cors.enabled                 | This parameter enables the CORS 
capabilities|false
+cors.allowGenericHttpRequests| {true\|false} defaults to true. If true, 
generic HTTP requests will be allowed to pass through the filter, else only 
valid and accepted CORS requests will be allowed (strict CORS filtering).|true
 cors.allowOrigin             | {"\*"\|origin-list} defaults to "\*". 
Whitespace-separated list of origins that the CORS filter must allow. Requests 
from origins not included here will be refused with an HTTP 403 "Forbidden" 
response. If set to \* (asterisk) any origin will be allowed.|"\*"
-cors.allowSubdomains         | {true\|false} defaults to false. If true the 
CORS filter will allow requests from any origin which is a subdomain origin of 
the allowed origins. A subdomain is matched by comparing its scheme and suffix 
(host name / IP address and optional port number).|false
+cors.allowSubdomains         | {true\|false} defaults to false. If true, the 
CORS filter will allow requests from any origin which is a subdomain origin of 
the allowed origins. A subdomain is matched by comparing its scheme and suffix 
(host name / IP address and optional port number).|false
 cors.supportedMethods        | {method-list} defaults to GET, POST, HEAD, 
OPTIONS. List of the supported HTTP methods. These are advertised through the 
Access-Control-Allow-Methods header and must also be implemented by the actual 
CORS web service. Requests for methods not included here will be refused by the 
CORS filter with an HTTP 405 "Method not allowed" response.| GET, POST, HEAD, 
OPTIONS
 cors.supportedHeaders        | {"\*"\|header-list} defaults to \*. The names 
of the supported author request headers. These are advertised through the 
Access-Control-Allow-Headers header. If the configuration property value is set 
to \* (asterisk) any author request header will be allowed. The CORS Filter 
implements this by simply echoing the requested value back to the browser.|\*
 cors.exposedHeaders          | {header-list} defaults to empty list. List of 
the response headers other than simple response headers that the browser should 
expose to the author of the cross-domain request through the 
XMLHttpRequest.getResponseHeader() method. The CORS filter supplies this 
information through the Access-Control-Expose-Headers header.| empty
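The CORS parameters are set the same way, as params on the `webappsec` provider. A hedged sketch with illustrative values (the origin URL is a placeholder; param names are taken from the table above):

    <param><name>cors.enabled</name><value>true</value></param>
    <param><name>cors.allowOrigin</name><value>https://trusted.example.org</value></param>
    <param><name>cors.supportedMethods</name><value>GET, POST, HEAD, OPTIONS</value></param>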
@@ -103,10 +103,10 @@ Cross Frame Scripting and Clickjacking a
 
 ###### Config
 
-Name                         | Description | Default
------------------------------|-------------|---------
-xframe.options.enabled                 | This param enables the 
X-Frame-Options capabilities|false
-xframe.options                 | This param specifies a particular value for 
the X-Frame-Options header. Most often the default value of DENY will be most 
appropriate. You can also use SAMEORIGIN or ALLOW-FROM uri|DENY
+Name                   | Description | Default
+-----------------------|-------------|---------
+xframe-options.enabled | This parameter enables the X-Frame-Options 
capabilities|false
+xframe-options.value   | This parameter specifies a particular value for the 
X-Frame-Options header. Most often the default value of DENY will be most 
appropriate. You can also use SAMEORIGIN or ALLOW-FROM uri|DENY
 
 ##### X-Content-Type-Options
 
@@ -125,8 +125,8 @@ Web applications can be protected by pro
 
 ###### Config
 
-Name                         | Description | Default
------------------------------|-------------|---------
-strict.transport.enabled                 | This param enables the HTTP 
Strict-Transport-Security response header|false
-strict.transport                 | This param specifies a particular value for 
the HTTP Strict-Transport-Security header. Default value is max-age=31536000. 
You can also use `max-age=<expire-time>` or `max-age=<expire-time>; 
includeSubDomains` or `max-age=<expire-time>;preload`|max-age=31536000
+Name                     | Description | Default
+-------------------------|-------------|---------
+strict.transport.enabled | This parameter enables the HTTP 
Strict-Transport-Security response header|false
+strict.transport         | This parameter specifies a particular value for the 
HTTP Strict-Transport-Security header. Default value is max-age=31536000. You 
can also use `max-age=<expire-time>` or `max-age=<expire-time>; 
includeSubDomains` or `max-age=<expire-time>;preload`|max-age=31536000
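Using the parameter names from the table above, a sketch of the corresponding provider params (the header value shown is one of the documented variants):

    <param><name>strict.transport.enabled</name><value>true</value></param>
    <param><name>strict.transport</name><value>max-age=31536000; includeSubDomains</value></param>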
 

Modified: knox/trunk/books/1.1.0/knox_cli.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.1.0/knox_cli.md?rev=1835012&r1=1835011&r2=1835012&view=diff
==============================================================================
--- knox/trunk/books/1.1.0/knox_cli.md (original)
+++ knox/trunk/books/1.1.0/knox_cli.md Tue Jul  3 19:13:36 2018
@@ -18,9 +18,9 @@
 ### Knox CLI ###
 The Knox CLI is a command line utility for the management of various aspects 
of the Knox deployment. It is primarily concerned with the management of the 
security artifacts for the gateway instance and each of the deployed topologies 
or Hadoop clusters that are gated by the Knox Gateway instance.
 
-The various security artifacts are also generated and populated automatically 
by the Knox Gateway runtime when they are not found at startup. The assumptions 
made in those cases are appropriate for a test or development gateway instance 
and assume 'localhost' for hostname specific activities. For production 
deployments the use of the CLI may aid in managing  some production deployments.
+The various security artifacts are also generated and populated automatically 
by the Knox Gateway runtime when they are not found at startup. The assumptions 
made in those cases are appropriate for a test or development gateway instance 
and assume 'localhost' for hostname specific activities. For production 
deployments the use of the CLI may aid in managing some production deployments.
 
-The knoxcli.sh script is located in the `{GATEWAY_HOME}/bin` directory.
+The `knoxcli.sh` script is located in the `{GATEWAY_HOME}/bin` directory.
 
 #### Help ####
 ##### `bin/knoxcli.sh [--help]` #####
@@ -40,21 +40,21 @@ NOTE: This command fails when there is a
 ##### `bin/knoxcli.sh create-alias name [--cluster c] [--value v] [--generate] 
[--help]` #####
 Creates a password alias and stores it in a credential store within the 
`{GATEWAY_HOME}/data/security/keystores` dir. 
 
-argument    | description
-------------|-----------
-name|name of the alias to create  
-\-\-cluster|name of Hadoop cluster for the cluster specific credential store 
otherwise assumes that it is for the gateway itself
-\-\-value|parameter for specifying the actual password otherwise prompted. 
Escape complex passwords or surround with single quotes.<br/>
-\-\-generate|boolean flag to indicate whether the tool should just generate 
the value. This assumes that \-\-value is not set - will result in error 
otherwise. User will not be prompted for the value when \-\-generate is set.    
          
+Argument     | Description
+-------------|-----------
+name         | Name of the alias to create  
+\-\-cluster  | Name of Hadoop cluster for the cluster specific credential 
store otherwise assumes that it is for the gateway itself
+\-\-value    | Parameter for specifying the actual password; otherwise the 
user is prompted. Escape complex passwords or surround with single quotes
+\-\-generate | Boolean flag to indicate whether the tool should just generate 
the value. This assumes that \-\-value is not set - will result in error 
otherwise. User will not be prompted for the value when \-\-generate is set.
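For example, to generate and store a random password for a cluster-specific alias (the alias and cluster names here are illustrative):

    bin/knoxcli.sh create-alias myAlias --cluster sandbox --generate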
 
 #### Alias deletion ####
 ##### `bin/knoxcli.sh delete-alias name [--cluster c] [--help]` #####
 Deletes a password and alias mapping from a credential store within 
`{GATEWAY_HOME}/data/security/keystores`.
 
-argument | description
----------|-----------
-name | name of the alias to delete  
-\-\-cluster | name of Hadoop cluster for the cluster specific credential store 
otherwise assumes '__gateway'
+Argument    | Description
+------------|-----------
+name        | Name of the alias to delete  
+\-\-cluster | Name of Hadoop cluster for the cluster specific credential store 
otherwise assumes '__gateway'
 
 #### Alias listing ####
 ##### `bin/knoxcli.sh list-alias [--cluster c] [--help]` #####
@@ -62,21 +62,21 @@ Lists the alias names for the credential
 
 NOTE: This command will list the aliases in lowercase which is a result of the 
underlying credential store implementation. Lookup of credentials is a case 
insensitive operation - so this is not an issue.
 
-argument | description
----------|-----------
-\-\-cluster    |       name of Hadoop cluster for the cluster specific 
credential store otherwise assumes '__gateway'
+Argument    | Description
+------------|-----------
+\-\-cluster | Name of Hadoop cluster for the cluster specific credential store 
otherwise assumes '__gateway'
 
 #### Self-signed cert creation ####
 ##### `bin/knoxcli.sh create-cert [--hostname n] [--help]` #####
 Creates and stores a self-signed certificate to represent the identity of the 
gateway instance. This is stored within the 
`{GATEWAY_HOME}/data/security/keystores/gateway.jks` keystore.  
 
-argument | description
----------|-----------
-\-\-hostname|name of the host to be used in the self-signed certificate. This 
allows multi-host deployments to specify the proper hostnames for hostname 
verification to succeed on the client side of the SSL connection. The default 
is 'localhost'.
+Argument     | Description
+-------------|-----------
+\-\-hostname | Name of the host to be used in the self-signed certificate. 
This allows multi-host deployments to specify the proper hostnames for hostname 
verification to succeed on the client side of the SSL connection. The default 
is 'localhost'.
 
 #### Certificate Export ####
 ##### `bin/knoxcli.sh export-cert [--type JKS|PEM] [--help]` #####
-Exports and stores the gateway-identity certificate as the type indicated or 
PEM by default. This is stored within the 
`{GATEWAY_HOME}/data/security/keystores/` directory as either 
gateway-identity.pem or gateway-client-trust.jks depending on the type 
specified.  
+Exports and stores the gateway-identity certificate as the type indicated or 
PEM by default. This is stored within the 
`{GATEWAY_HOME}/data/security/keystores/` directory as either 
`gateway-identity.pem` or `gateway-client-trust.jks` depending on the type 
specified.  
 
 #### Topology Redeploy ####
 ##### `bin/knoxcli.sh redeploy [--cluster c]` #####
@@ -88,48 +88,48 @@ Lists all of the topologies found in Kno
 
 #### Topology Validation ####
 ##### `bin/knoxcli.sh validate-topology [--cluster c] [--path path] [--help]` 
####
-This ensures that a cluster's description (a.k. topology) follows the correct 
formatting rules. It is possible to specify a name of a cluster already in the 
topology directory, or a path to any file.
+This ensures that a cluster's description (a.k.a. topology) follows the 
correct formatting rules. It is possible to specify a name of a cluster already 
in the topology directory, or a path to any file.
 
-argument | description
----------|-----------
-\-\-cluster    |       name of Hadoop cluster for which you want to validate
-\-\-path | path to topology file that you wish to validate.
+Argument    | Description
+------------|-----------
+\-\-cluster | Name of Hadoop cluster for which you want to validate
+\-\-path    | Path to topology file that you wish to validate.
 
 #### LDAP Authentication and Authorization ####
 ##### `bin/knoxcli.sh user-auth-test [--cluster c] [--u username] [--p 
password] [--g] [--d] [--help]` ####
-This command will test a topology's ability to connect, authenticate, and 
authorize a user with an LDAP server. The only required argument is the 
--cluster argument to specify the name of the topology you wish to use. The 
topology must be valid (passes validate-topology command). If a --u and  --p 
argument are not specified, the command line will prompt for a username and 
password. If authentication is successful then the command will attempt to use 
the topology to do an LDAP group lookup. The topology must be configured 
correctly to do this. If it is not, groups will not return and no errors will 
be printed unless the `--g` command is specified. Currently this command only 
works if a topology supports the use of ShiroProvider for authentication.
+This command will test a topology's ability to connect, authenticate, and 
authorize a user with an LDAP server. The only required argument is the 
--cluster argument to specify the name of the topology you wish to use. The 
topology must be valid (passes validate-topology command). If a `--u` and `--p` 
argument are not specified, the command line will prompt for a username and 
password. If authentication is successful then the command will attempt to use 
the topology to do an LDAP group lookup. The topology must be configured 
correctly to do this. If it is not, groups will not return and no errors will 
be printed unless the `--g` command is specified. Currently this command only 
works if a topology supports the use of ShiroProvider for authentication.
 
-argument | description
----------|-----------
-\-\-cluster    | Required; name of cluster for which you want to test 
authentication
-\-\-u | Optional; username you wish you authenticate with.
-\-\-p | Optional; password you wish to authenticate with
-\-\-g | Optional; Specify that you are looking to return a user's groups. If 
not specified, group lookup errors won't return.
-\-\-d | Optional; Print extra debug info on failed authentication
+Argument    | Description
+------------|-----------
+\-\-cluster | Required; Name of cluster for which you want to test 
authentication
+\-\-u       | Optional; Username you wish to authenticate with
+\-\-p       | Optional; Password you wish to authenticate with
+\-\-g       | Optional; Specify that you are looking to return a user's 
groups. If not specified, group lookup errors won't return
+\-\-d       | Optional; Print extra debug info on failed authentication
 
 #### Topology LDAP Bind ####
 ##### `bin/knoxcli.sh system-user-auth-test [--cluster c] [--d] [--help]` ####
-This command will test a given topology's ability to connect, bind, and 
authenticate with the ldap server from the settings specified in the topology 
file. The bind currently only will with Shiro as the authentication provider. 
There are also two parameters required inside of the topology for these  
+This command will test a given topology's ability to connect, bind, and 
authenticate with the LDAP server from the settings specified in the topology 
file. The bind currently only works with Shiro as the authentication provider. 
There are also two parameters required inside of the topology for this test.
 
-argument | description
----------|-----------
-\-\-cluster    | Required; name of cluster for which you want to test 
authentication
-\-\-d | Optional; Print extra debug info on failed authentication
+Argument    | Description
+------------|-----------
+\-\-cluster | Required; Name of cluster for which you want to test 
authentication
+\-\-d       | Optional; Print extra debug info on failed authentication
 
 
 #### Gateway Service Test ####
 ##### `bin/knoxcli.sh service-test [--cluster c] [--hostname hostname] [--port 
port] [--u username] [--p password] [--d] [--help]` ####
 
-This will test a topology configuration's ability to connect to multiple 
hadoop services. Each service found in a topology will be tested with multiple 
URLs. Results are printed to the console in JSON format..
+This will test a topology configuration's ability to connect to multiple 
Hadoop services. Each service found in a topology will be tested with multiple 
URLs. Results are printed to the console in JSON format.
 
-argument | description
----------|-----------
-\-\-cluster    | Required; name of cluster for which you want to test 
authentication
-\-\-hostname   | Required; hostname of the cluster currently running on the 
machine
-\-\-port       | Optional; port that the cluster is running on. If not 
supplied CLI will try to read config files to find the port.
-\-\-u  | Required; username to authorize against Hadoop services
-\-\-p  | Required; password to match username
-\-\-d | Optional; Print extra debug info on failed authentication
+Argument     | Description
+-------------|-----------
+\-\-cluster  | Required; Name of cluster for which you want to test 
authentication
+\-\-hostname | Required; Hostname of the cluster currently running on the 
machine
+\-\-port     | Optional; Port that the cluster is running on. If not 
supplied, the CLI will try to read config files to find the port.
+\-\-u        | Required; Username to authorize against Hadoop services
+\-\-p        | Required; Password to match username
+\-\-d        | Optional; Print extra debug info on failed authentication
 
 #### Remote Configuration Registry Client Listing ####
 ##### `bin/knoxcli.sh list-registry-clients` #####
@@ -140,9 +140,9 @@ Lists the [remote configuration registry
 #### List Provider Configurations in a Remote Configuration Registry ####
 ##### `bin/knoxcli.sh list-provider-configs --registry-client name` ####
 
-List the provider configurations in the remote configuration registry for 
which the referenced client provides acces.
+List the provider configurations in the remote configuration registry for 
which the referenced client provides access.
 
-argument | description
+Argument | Description
 ---------|-----------
 \-\-registry\-client | Required; The name of a [remote configuration registry 
client](#Remote+Configuration+Registry+Clients), as defined in gateway-site.xml
 
@@ -150,9 +150,9 @@ argument | description
 #### List Descriptors in a Remote Configuration Registry ####
 ##### `bin/knoxcli.sh list-descriptors --registry-client name` ####
 
-List the descriptors in the remote configuration registry for which the 
referenced client provides acces.
+List the descriptors in the remote configuration registry for which the 
referenced client provides access.
 
-argument | description
+Argument | Description
 ---------|-----------
 \-\-registry\-client | Required; The name of a [remote configuration registry 
client](#Remote+Configuration+Registry+Clients), as defined in gateway-site.xml
 
@@ -160,10 +160,10 @@ argument | description
 #### Upload Provider Configuration to a Remote Configuration Registry ####
 ##### `bin/knoxcli.sh upload-provider-config providerConfigFile 
--registry-client name [--entry-name entryName]` ####
 
-Upload a provider configuration file to the remote configuration registry for 
which the referenced client provides acces.
+Upload a provider configuration file to the remote configuration registry for 
which the referenced client provides access.
 By default, the entry name will be the same as the uploaded file's name.
 
-argument | description
+Argument | Description
 ---------|-----------
 \-\-registry\-client | Required; The name of a [remote configuration registry 
client](#Remote+Configuration+Registry+Clients), as defined in gateway-site.xml
 \-\-entry\-name        | Optional; The name of the entry for the uploaded 
content in the registry.
@@ -172,10 +172,10 @@ argument | description
 #### Upload Descriptor to a Remote Configuration Registry ####
 ##### `bin/knoxcli.sh upload-descriptor descriptorFile --registry-client name 
[--entry-name entryName]` ####
 
-Upload a descriptor file to the remote configuration registry for which the 
referenced client provides acces.
+Upload a descriptor file to the remote configuration registry for which the 
referenced client provides access.
 By default, the entry name will be the same as the uploaded file's name.
 
-argument | description
+Argument | Description
 ---------|-----------
 \-\-registry\-client | Required; The name of a [remote configuration registry 
client](#Remote+Configuration+Registry+Clients), as defined in gateway-site.xml
 \-\-entry\-name        | Optional; The name of the entry for the uploaded 
content in the registry.
@@ -184,9 +184,9 @@ argument | description
 #### Delete a Provider Configuration From a Remote Configuration Registry ####
 ##### `bin/knoxcli.sh delete-provider-config providerConfig --registry-client 
name` ####
 
-Delete a provider configuration from the remote configuration registry for 
which the referenced client provides acces.
+Delete a provider configuration from the remote configuration registry for 
which the referenced client provides access.
 
-argument | description
+Argument | Description
 ---------|-----------
 \-\-registry\-client | Required; The name of a [remote configuration registry 
client](#Remote+Configuration+Registry+Clients), as defined in gateway-site.xml
 
@@ -194,9 +194,9 @@ argument | description
 #### Delete a Descriptor From a Remote Configuration Registry ####
 ##### `bin/knoxcli.sh delete-descriptor descriptor --registry-client name` ####
 
-Delete a descriptor from the remote configuration registry for which the 
referenced client provides acces.
+Delete a descriptor from the remote configuration registry for which the 
referenced client provides access.
 
-argument | description
+Argument | Description
 ---------|-----------
 \-\-registry\-client | Required; The name of a [remote configuration registry 
client](#Remote+Configuration+Registry+Clients), as defined in gateway-site.xml
 
@@ -204,9 +204,9 @@ argument | description
 #### Get the ACL For an Entry in a Remote Configuration Registry ####
 ##### `bin/knoxcli.sh get-registry-acl entry --registry-client name` ####
 
-List the ACL set for the specified entry in the remote configuration registry 
for which the referenced client provides acces.
+List the ACL set for the specified entry in the remote configuration registry 
for which the referenced client provides access.
 
-argument | description
+Argument | Description
 ---------|-----------
 \-\-registry\-client | Required; The name of a [remote configuration registry 
client](#Remote+Configuration+Registry+Clients), as defined in gateway-site.xml
 

