Author: lmccay
Date: Sat Nov 14 17:46:14 2015
New Revision: 1714349
URL: http://svn.apache.org/viewvc?rev=1714349&view=rev
Log:
cleaned up knoxsso integration guide
Added:
knox/trunk/books/0.7.0/dev-guide/general_saml_flow.puml
Modified:
knox/site/books/knox-0-7-0/knoxsso_integration.html
knox/trunk/books/0.7.0/dev-guide/knoxsso_integration.md
Modified: knox/site/books/knox-0-7-0/knoxsso_integration.html
URL:
http://svn.apache.org/viewvc/knox/site/books/knox-0-7-0/knoxsso_integration.html?rev=1714349&r1=1714348&r2=1714349&view=diff
==============================================================================
--- knox/site/books/knox-0-7-0/knoxsso_integration.html (original)
+++ knox/site/books/knox-0-7-0/knoxsso_integration.html Sat Nov 14 17:46:14 2015
@@ -1,152 +1,152 @@
-<h1>Knox SSO Integration for UIs</h1><h2>Introduction</h2><p>KnoxSSO provides
an abstraction for integrating any number of authentication systems and SSO
solutions and enables participating web applications to scale to those
solutions more easily. Without the token exchange capabilities offered by
KnoxSSO each component UI would need to integrate with each desired solution on
its own. </p><p>This document examines the way to integrate with Knox SSO in
the form of a Servlet Filter. This approach should be easily extrapolated into
other frameworks, e.g. Spring Security.</p><h3><a id="General+Flow">General
Flow</a> <a href="#General+Flow"><img
src="markbook-section-link.png"/></a></h3><p>The following is a generic
sequence diagram for SAML integration through KnoxSSO. <img
src='general_saml_flow.png'/> </p><h4><a id="KnoxSSO+Setup">KnoxSSO Setup</a>
<a href="#KnoxSSO+Setup"><img src="markbook-section-link.png"/></a></h4><h5><a
id="knoxsso.xml+Topology">knoxsso.xml Topology</a> <a href=
"#knoxsso.xml+Topology"><img src="markbook-section-link.png"/></a></h5><p>In
order to enable KnoxSSO, we need to configure the IdP topology. The following
is an example of this topology that is configured to use HTTP Basic Auth
against the Knox Demo LDAP server. This is the lowest barrier of entry for your
development environment that actually authenticates against a real user store.
What's great is that if you work against the IdP with Basic Auth, then you will
work with SAML or anything else as well.</p>
-<pre><code> <?xml version="1.0" encoding="utf-8"?>
- <topology>
- <gateway>
- <provider>
- <role>authentication</role>
- <name>ShiroProvider</name>
- <enabled>true</enabled>
- <param>
- <name>sessionTimeout</name>
- <value>30</value>
- </param>
- <param>
- <name>main.ldapRealm</name>
-
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
- </param>
- <param>
- <name>main.ldapContextFactory</name>
-
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
- </param>
- <param>
- <name>main.ldapRealm.contextFactory</name>
- <value>$ldapContextFactory</value>
- </param>
- <param>
- <name>main.ldapRealm.userDnTemplate</name>
-
<value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
- </param>
- <param>
- <name>main.ldapRealm.contextFactory.url</name>
- <value>ldap://localhost:33389</value>
- </param>
- <param>
-
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
- <value>simple</value>
- </param>
- <param>
- <name>urls./**</name>
- <value>authcBasic</value>
- </param>
- </provider>
-
- <provider>
- <role>identity-assertion</role>
- <name>Default</name>
- <enabled>true</enabled>
- </provider>
- </gateway>
-
- <service>
- <role>KNOXSSO</role>
- <param>
- <name>knoxsso.cookie.secure.only</name>
- <value>true</value>
- </param>
- <param>
- <name>knoxsso.token.ttl</name>
- <value>100000</value>
- </param>
- </service>
- </topology>
+<h1>Knox SSO Integration for UIs</h1><h2>Introduction</h2><p>KnoxSSO provides
an abstraction for integrating any number of authentication systems and SSO
solutions and enables participating web applications to scale to those
solutions more easily. Without the token exchange capabilities offered by
KnoxSSO each component UI would need to integrate with each desired solution on
its own. </p><p>This document examines the way to integrate with Knox SSO in
the form of a Servlet Filter. This approach should be easily extrapolated into
other frameworks, e.g. Spring Security.</p><h3><a id="General+Flow">General
Flow</a> <a href="#General+Flow"><img
src="markbook-section-link.png"/></a></h3><p>The following is a generic
sequence diagram for SAML integration through KnoxSSO.</p><p><img
src='general_saml_flow.png'/> </p><h4><a id="KnoxSSO+Setup">KnoxSSO Setup</a>
<a href="#KnoxSSO+Setup"><img src="markbook-section-link.png"/></a></h4><h5><a
id="knoxsso.xml+Topology">knoxsso.xml Topology</a> <a
href="#knoxsso.xml+Topology"><img
src="markbook-section-link.png"/></a></h5><p>In order to enable KnoxSSO, we
need to configure the IdP topology. The following is an example of this
topology that is configured to use HTTP Basic Auth against the Knox Demo LDAP
server. This is the lowest barrier of entry for your development environment
that actually authenticates against a real user store. What's great is that if
you work against the IdP with Basic Auth, then you will work with SAML or anything
else as well.</p>
+<pre><code> <?xml version="1.0"
encoding="utf-8"?>
+ <topology>
+ <gateway>
+ <provider>
+ <role>authentication</role>
+ <name>ShiroProvider</name>
+ <enabled>true</enabled>
+ <param>
+ <name>sessionTimeout</name>
+ <value>30</value>
+ </param>
+ <param>
+ <name>main.ldapRealm</name>
+
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
+ </param>
+ <param>
+ <name>main.ldapContextFactory</name>
+
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
+ </param>
+ <param>
+
<name>main.ldapRealm.contextFactory</name>
+ <value>$ldapContextFactory</value>
+ </param>
+ <param>
+
<name>main.ldapRealm.userDnTemplate</name>
+
<value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+ </param>
+ <param>
+
<name>main.ldapRealm.contextFactory.url</name>
+
<value>ldap://localhost:33389</value>
+ </param>
+ <param>
+
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+ <value>simple</value>
+ </param>
+ <param>
+ <name>urls./**</name>
+ <value>authcBasic</value>
+ </param>
+ </provider>
+
+ <provider>
+ <role>identity-assertion</role>
+ <name>Default</name>
+ <enabled>true</enabled>
+ </provider>
+ </gateway>
+
+ <service>
+ <role>KNOXSSO</role>
+ <param>
+
<name>knoxsso.cookie.secure.only</name>
+ <value>true</value>
+ </param>
+ <param>
+ <name>knoxsso.token.ttl</name>
+ <value>100000</value>
+ </param>
+ </service>
+ </topology>
</code></pre><p>Just as with any Knox service, the KNOXSSO service is
protected by the gateway providers defined above it. In this case, the
ShiroProvider is taking care of HTTP Basic Auth against LDAP for us. Once the
user authenticates, request processing continues to the KNOXSSO service, which
will create the required cookie and do the necessary redirects.</p><p>The
authentication/federation provider can be swapped out to fit your deployment
environment.</p><h5><a id="sandbox.xml+Topology">sandbox.xml Topology</a> <a
href="#sandbox.xml+Topology"><img
src="markbook-section-link.png"/></a></h5><p>In order to see the end-to-end
story and use it as an example in your development, you can configure one of
the cluster topologies to use the SSOCookieProvider instead of the out of the
box ShiroProvider. The following is an example sandbox.xml topology that is
configured for using KnoxSSO to protect access to the Hadoop REST APIs.</p>
-<pre><code><?xml version="1.0" encoding="utf-8"?>
-<topology>
-<gateway>
- <provider>
- <role>federation</role>
- <name>SSOCookieProvider</name>
- <enabled>true</enabled>
- <param>
- <name>sso.authentication.provider.url</name>
- <value>https://localhost:9443/gateway/idp/api/v1/websso</value>
- </param>
- </provider>
-
- <provider>
- <role>identity-assertion</role>
- <name>Default</name>
- <enabled>true</enabled>
- </provider>
-
-</gateway>
-
-<service>
- <role>NAMENODE</role>
- <url>hdfs://localhost:8020</url>
-</service>
-
-<service>
- <role>JOBTRACKER</role>
- <url>rpc://localhost:8050</url>
-</service>
-
-<service>
- <role>WEBHDFS</role>
- <url>http://localhost:50070/webhdfs</url>
-</service>
-
-<service>
- <role>WEBHCAT</role>
- <url>http://localhost:50111/templeton</url>
-</service>
-
-<service>
- <role>OOZIE</role>
- <url>http://localhost:11000/oozie</url>
-</service>
-
-<service>
- <role>WEBHBASE</role>
- <url>http://localhost:60080</url>
-</service>
-
-<service>
- <role>HIVE</role>
- <url>http://localhost:10001/cliservice</url>
-</service>
-
-<service>
- <role>RESOURCEMANAGER</role>
- <url>http://localhost:8088/ws</url>
-</service>
-</topology>
+<pre><code> <?xml version="1.0" encoding="utf-8"?>
+ <topology>
+ <gateway>
+ <provider>
+ <role>federation</role>
+ <name>SSOCookieProvider</name>
+ <enabled>true</enabled>
+ <param>
+ <name>sso.authentication.provider.url</name>
+
<value>https://localhost:9443/gateway/idp/api/v1/websso</value>
+ </param>
+ </provider>
+
+ <provider>
+ <role>identity-assertion</role>
+ <name>Default</name>
+ <enabled>true</enabled>
+ </provider>
+
+ </gateway>
+
+ <service>
+ <role>NAMENODE</role>
+ <url>hdfs://localhost:8020</url>
+ </service>
+
+ <service>
+ <role>JOBTRACKER</role>
+ <url>rpc://localhost:8050</url>
+ </service>
+
+ <service>
+ <role>WEBHDFS</role>
+ <url>http://localhost:50070/webhdfs</url>
+ </service>
+
+ <service>
+ <role>WEBHCAT</role>
+ <url>http://localhost:50111/templeton</url>
+ </service>
+
+ <service>
+ <role>OOZIE</role>
+ <url>http://localhost:11000/oozie</url>
+ </service>
+
+ <service>
+ <role>WEBHBASE</role>
+ <url>http://localhost:60080</url>
+ </service>
+
+ <service>
+ <role>HIVE</role>
+ <url>http://localhost:10001/cliservice</url>
+ </service>
+
+ <service>
+ <role>RESOURCEMANAGER</role>
+ <url>http://localhost:8088/ws</url>
+ </service>
+ </topology>
</code></pre>
<ul>
<li>NOTE: Be aware that when using Chrome as your browser, cookies
don't seem to work for "localhost". Either use a VM or, like I did, use
127.0.0.1. Safari works with localhost without problems.</li>
</ul><p>As you can see above, the only thing being configured is the SSO
provider URL. Since Knox is the issuer of the cookie and token, we don't need
to configure the public key since we have programmatic access to the actual
keystore for use at verification time.</p><h4><a id="Curl+the+Flow">Curl the
Flow</a> <a href="#Curl+the+Flow"><img
src="markbook-section-link.png"/></a></h4><p>We should now be able to walk
through the SSO Flow at the command line with curl to see everything that
happens.</p><p>First, issue a request to WEBHDFS through Knox.</p>
-<pre><code>bash-3.2$ curl -iku guest:guest-password
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
-
-HTTP/1.1 302 Found
-Location:
https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
-Content-Length: 0
-Server: Jetty(8.1.14.v20131031)
+<pre><code> bash-3.2$ curl -iku guest:guest-password
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+
+ HTTP/1.1 302 Found
+ Location:
https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+ Content-Length: 0
+ Server: Jetty(8.1.14.v20131031)
</code></pre><p>Note the redirect to the knoxsso endpoint and the loginUrl
with the originalUrl request parameter. We need to see the same redirect come
from your integration as well.</p><p>Let's manually follow that redirect with
curl now:</p>
-<pre><code>bash-3.2$ curl -iku guest:guest-password
"https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:9443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS"
+<pre><code> bash-3.2$ curl -iku guest:guest-password
"https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:9443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS"
-HTTP/1.1 307 Temporary Redirect
-Set-Cookie:
JSESSIONID=mlkda4crv7z01jd0q0668nsxp;Path=/gateway/idp;Secure;HttpOnly
-Set-Cookie:
hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE0NDM1ODUzNzEsInN1YiI6Imd1ZXN0IiwiYXVkIjoiSFNTTyIsImlzcyI6IkhTU08ifQ.RpA84Qdr6RxEZjg21PyVCk0G1kogvkuJI2bo302bpwbvmc-i01gCwKNeoGYzUW27MBXf6a40vylHVR3aZuuBUxsJW3aa_ltrx0R5ztKKnTWeJedOqvFKSrVlBzJJ90PzmDKCqJxA7JUhyo800_lDHLTcDWOiY-ueWYV2RMlCO0w;Path=/;Domain=localhost;Secure;HttpOnly
-Expires: Thu, 01 Jan 1970 00:00:00 GMT
-Location: https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
-Content-Length: 0
-Server: Jetty(8.1.14.v20131031)
+ HTTP/1.1 307 Temporary Redirect
+ Set-Cookie:
JSESSIONID=mlkda4crv7z01jd0q0668nsxp;Path=/gateway/idp;Secure;HttpOnly
+ Set-Cookie:
hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE0NDM1ODUzNzEsInN1YiI6Imd1ZXN0IiwiYXVkIjoiSFNTTyIsImlzcyI6IkhTU08ifQ.RpA84Qdr6RxEZjg21PyVCk0G1kogvkuJI2bo302bpwbvmc-i01gCwKNeoGYzUW27MBXf6a40vylHVR3aZuuBUxsJW3aa_ltrx0R5ztKKnTWeJedOqvFKSrVlBzJJ90PzmDKCqJxA7JUhyo800_lDHLTcDWOiY-ueWYV2RMlCO0w;Path=/;Domain=localhost;Secure;HttpOnly
+ Expires: Thu, 01 Jan 1970 00:00:00 GMT
+ Location:
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+ Content-Length: 0
+ Server: Jetty(8.1.14.v20131031)
</code></pre><p>Note the redirect back to the original URL in the Location
header and the Set-Cookie for the hadoop-jwt cookie. This is what the
SSOCookieProvider in sandbox (and ultimately in your integration) will be
looking for.</p><p>Finally, we should be able to take the above cookie and pass
it to the original url as indicated in the Location header for our originally
requested resource:</p>
-<pre><code>bash-3.2$ curl -ikH "Cookie:
hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE0NDM1ODY2OTIsInN1YiI6Imd1ZXN0IiwiYXVkIjoiSFNTTyIsImlzcyI6IkhTU08ifQ.Os5HEfVBYiOIVNLRIvpYyjeLgAIMbBGXHBWMVRAEdiYcNlJRcbJJ5aSUl1aciNs1zd_SHijfB9gOdwnlvQ_0BCeGHlJBzHGyxeypIoGj9aOwEf36h-HVgqzGlBLYUk40gWAQk3aRehpIrHZT2hHm8Pu8W-zJCAwUd8HR3y6LF3M;Path=/;Domain=localhost;Secure;HttpOnly"
https://localhost:9443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+<pre><code> bash-3.2$ curl -ikH "Cookie:
hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE0NDM1ODY2OTIsInN1YiI6Imd1ZXN0IiwiYXVkIjoiSFNTTyIsImlzcyI6IkhTU08ifQ.Os5HEfVBYiOIVNLRIvpYyjeLgAIMbBGXHBWMVRAEdiYcNlJRcbJJ5aSUl1aciNs1zd_SHijfB9gOdwnlvQ_0BCeGHlJBzHGyxeypIoGj9aOwEf36h-HVgqzGlBLYUk40gWAQk3aRehpIrHZT2hHm8Pu8W-zJCAwUd8HR3y6LF3M;Path=/;Domain=localhost;Secure;HttpOnly"
https://localhost:9443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
-TODO: cluster was down and needs to be recreated :/
+ TODO: cluster was down and needs to be recreated :/
</code></pre><h4><a id="Browse+the+Flow">Browse the Flow</a> <a
href="#Browse+the+Flow"><img src="markbook-section-link.png"/></a></h4><p>At
this point, we can use a web browser instead of the command line and see how
the browser will challenge the user for Basic Auth Credentials and then manage
the cookies such that the SSO and token exchange aspects of the flow are hidden
from the user.</p><p>Simply try to invoke the same webhdfs API from the
browser URL bar.</p>
-<pre><code>
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+<pre><code>
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
</code></pre><p>Based on our understanding of the flow, it should behave like
this:</p>
<ul>
<li>SSOCookieProvider checks for hadoop-jwt cookie and in its absence
redirects to the configured SSO provider URL (knoxsso endpoint)</li>
@@ -154,183 +154,375 @@ TODO: cluster was down and needs to be r
<li>The ShiroProvider authenticates the user against the Demo LDAP Server
using a simple LDAP bind and establishes the security context for the WebSSO
request</li>
<li>The WebSSO service exchanges the normalized Java Subject into a JWT
token and sets it on the response as a cookie named hadoop-jwt</li>
<li>The WebSSO service then redirects the user agent back to the originally
requested URL - the webhdfs Knox service. Subsequent invocations will find the
cookie in the incoming request and will not need to engage the WebSSO service
again until it expires.</li>
-</ul><h4><a id="Filter+by+Example">Filter by Example</a> <a
href="#Filter+by+Example"><img src="markbook-section-link.png"/></a></h4><p>We
have added a federation provider to Knox for accepting KnoxSSO cookies for REST
APIs. This provides us with a couple of benefits: KnoxSSO support for REST APIs
for XmlHttpRequests from JavaScript (basic CORS functionality is also
included), and a model and real-world use case for others to base their
integrations on. This is still rather basic and considered beta
code.</p><p>In addition,
<a
href="https://issues.apache.org/jira/browse/HADOOP-11717">https://issues.apache.org/jira/browse/HADOOP-11717</a>
added support for the Hadoop UIs to the hadoop-auth module and it can be used
as another example.</p><p>We will examine the new SSOCookieFederationFilter in
Knox here.</p><p>package
org.apache.hadoop.gateway.provider.federation.jwt.filter;</p>
-<pre><code>import java.io.IOException;
- import java.security.Principal;
- import java.security.PrivilegedActionException;
- import java.security.PrivilegedExceptionAction;
- import java.util.ArrayList;
- import java.util.Date;
- import java.util.HashSet;
- import java.util.List;
- import java.util.Set;
-
- import javax.security.auth.Subject;
- import javax.servlet.Filter;
- import javax.servlet.FilterChain;
- import javax.servlet.FilterConfig;
- import javax.servlet.ServletException;
- import javax.servlet.ServletRequest;
- import javax.servlet.ServletResponse;
- import javax.servlet.http.Cookie;
- import javax.servlet.http.HttpServletRequest;
- import javax.servlet.http.HttpServletResponse;
-
- import org.apache.hadoop.gateway.i18n.messages.MessagesFactory;
- import org.apache.hadoop.gateway.provider.federation.jwt.JWTMessages;
- import org.apache.hadoop.gateway.security.PrimaryPrincipal;
- import org.apache.hadoop.gateway.services.GatewayServices;
- import org.apache.hadoop.gateway.services.security.token.JWTokenAuthority;
- import
org.apache.hadoop.gateway.services.security.token.TokenServiceException;
- import org.apache.hadoop.gateway.services.security.token.impl.JWTToken;
-
- public class SSOCookieFederationFilter implements Filter {
- private static JWTMessages log = MessagesFactory.get( JWTMessages.class
);
- private static final String ORIGINAL_URL_QUERY_PARAM =
"originalUrl=";
- private static final String SSO_COOKIE_NAME =
"sso.cookie.name";
- private static final String SSO_EXPECTED_AUDIENCES =
"sso.expected.audiences";
- private static final String SSO_AUTHENTICATION_PROVIDER_URL =
"sso.authentication.provider.url";
- private static final String DEFAULT_SSO_COOKIE_NAME =
"hadoop-jwt";
+</ul><h4><a id="Filter+by+Example">Filter by Example</a> <a
href="#Filter+by+Example"><img src="markbook-section-link.png"/></a></h4><p>We
have added a federation provider to Knox for accepting KnoxSSO cookies for REST
APIs. This provides us with a couple of benefits: KnoxSSO support for REST APIs
for XmlHttpRequests from JavaScript (basic CORS functionality is also
included), and a model and real-world use case for others to base their
integrations on. This is still rather basic and considered beta
code.</p><p>In addition,
<a
href="https://issues.apache.org/jira/browse/HADOOP-11717">https://issues.apache.org/jira/browse/HADOOP-11717</a>
added support for the Hadoop UIs to the hadoop-auth module and it can be used
as another example.</p><p>We will examine the new SSOCookieFederationFilter in
Knox here.</p>
+<pre><code>package org.apache.hadoop.gateway.provider.federation.jwt.filter;
+
+ import java.io.IOException;
+ import java.security.Principal;
+ import java.security.PrivilegedActionException;
+ import java.security.PrivilegedExceptionAction;
+ import java.util.ArrayList;
+ import java.util.Date;
+ import java.util.HashSet;
+ import java.util.List;
+ import java.util.Set;
+
+ import javax.security.auth.Subject;
+ import javax.servlet.Filter;
+ import javax.servlet.FilterChain;
+ import javax.servlet.FilterConfig;
+ import javax.servlet.ServletException;
+ import javax.servlet.ServletRequest;
+ import javax.servlet.ServletResponse;
+ import javax.servlet.http.Cookie;
+ import javax.servlet.http.HttpServletRequest;
+ import javax.servlet.http.HttpServletResponse;
+
+ import org.apache.hadoop.gateway.i18n.messages.MessagesFactory;
+ import
org.apache.hadoop.gateway.provider.federation.jwt.JWTMessages;
+ import org.apache.hadoop.gateway.security.PrimaryPrincipal;
+ import org.apache.hadoop.gateway.services.GatewayServices;
+ import
org.apache.hadoop.gateway.services.security.token.JWTokenAuthority;
+ import
org.apache.hadoop.gateway.services.security.token.TokenServiceException;
+ import
org.apache.hadoop.gateway.services.security.token.impl.JWTToken;
+
+ public class SSOCookieFederationFilter implements Filter {
+ private static JWTMessages log = MessagesFactory.get(
JWTMessages.class );
+ private static final String ORIGINAL_URL_QUERY_PARAM =
"originalUrl=";
+ private static final String SSO_COOKIE_NAME =
"sso.cookie.name";
+ private static final String SSO_EXPECTED_AUDIENCES =
"sso.expected.audiences";
+ private static final String SSO_AUTHENTICATION_PROVIDER_URL =
"sso.authentication.provider.url";
+ private static final String DEFAULT_SSO_COOKIE_NAME =
"hadoop-jwt";
</code></pre><p>The above constants represent the configurable aspects of the
integration.</p>
-<pre><code>private JWTokenAuthority authority = null;
-private String cookieName = null;
-private List<String> audiences = null;
-private String authenticationProviderUrl = null;
-
-@Override
-public void init( FilterConfig filterConfig ) throws ServletException {
- GatewayServices services = (GatewayServices)
filterConfig.getServletContext().getAttribute(GatewayServices.GATEWAY_SERVICES_ATTRIBUTE);
- authority =
(JWTokenAuthority)services.getService(GatewayServices.TOKEN_SERVICE);
+<pre><code> private JWTokenAuthority authority = null;
+ private String cookieName = null;
+ private List<String> audiences = null;
+ private String authenticationProviderUrl = null;
+
+ @Override
+ public void init( FilterConfig filterConfig ) throws ServletException {
+ GatewayServices services = (GatewayServices)
filterConfig.getServletContext().getAttribute(GatewayServices.GATEWAY_SERVICES_ATTRIBUTE);
+ authority =
(JWTokenAuthority)services.getService(GatewayServices.TOKEN_SERVICE);
</code></pre><p>The above is a Knox-specific internal service that we use to
issue and verify JWT tokens. This will be covered separately, and you will need
to implement something similar in your filter implementation.</p>
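<p>NOTE: The following is not Knox code. It is a hypothetical, stdlib-only sketch of what "something similar" could look like: a tiny token authority that signs and verifies the header.payload portion of a serialized token. Knox itself signs with RS256 against the gateway keystore; this sketch substitutes an HMAC secret purely to keep it self-contained, and the class and method names are made up.</p>

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical stand-in for Knox's token authority service (NOT Knox code).
// Knox signs tokens with RS256 via its keystore; an HMAC secret is used here
// only to show the verify-by-recomputing-the-signature shape.
public class SimpleTokenAuthority {
    private final byte[] secret;

    public SimpleTokenAuthority(byte[] secret) {
        this.secret = secret;
    }

    // Issue a token by appending a signature over the header.payload input.
    public String issueToken(String headerAndPayload) throws Exception {
        return headerAndPayload + "." + sign(headerAndPayload);
    }

    // Verify by recomputing the signature and comparing it to the last segment.
    public boolean verifyToken(String wireToken) throws Exception {
        int lastDot = wireToken.lastIndexOf('.');
        if (lastDot < 0) {
            return false; // not a signed token
        }
        String signingInput = wireToken.substring(0, lastDot);
        String signature = wireToken.substring(lastDot + 1);
        return sign(signingInput).equals(signature);
    }

    private String sign(String input) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(input.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        SimpleTokenAuthority authority =
                new SimpleTokenAuthority("s3cret".getBytes(StandardCharsets.UTF_8));
        String token = authority.issueToken("e30.e30"); // "{}" header and payload
        System.out.println(authority.verifyToken(token));              // true
        System.out.println(authority.verifyToken("e30.e30.tampered")); // false
    }
}
```

<p>A real implementation would verify the RS256 signature with the KnoxSSO server's public key rather than a shared secret.</p>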
-<pre><code>// configured cookieName
-cookieName = filterConfig.getInitParameter(SSO_COOKIE_NAME);
-if (cookieName == null) {
- cookieName = DEFAULT_SSO_COOKIE_NAME;
-}
+<pre><code> // configured cookieName
+ cookieName = filterConfig.getInitParameter(SSO_COOKIE_NAME);
+ if (cookieName == null) {
+ cookieName = DEFAULT_SSO_COOKIE_NAME;
+ }
</code></pre><p>The cookie name is configurable so that it can be changed to
fit your deployment environment. The default name is hadoop-jwt, which is also
the default in the Hadoop implementation. This name must match the name being
used by the KnoxSSO endpoint when setting the cookie.</p>
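<p>As a hypothetical illustration of the fallback logic above (a plain Map stands in for the servlet FilterConfig, and the class name is made up for this sketch):</p>

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper (not Knox code): a plain Map stands in for the servlet
// FilterConfig to show the default-name fallback described above.
public class CookieNameConfig {
    static final String SSO_COOKIE_NAME = "sso.cookie.name";
    static final String DEFAULT_SSO_COOKIE_NAME = "hadoop-jwt";

    public static String cookieName(Map<String, String> initParams) {
        String configured = initParams.get(SSO_COOKIE_NAME);
        // fall back to the Hadoop-compatible default when nothing is configured
        return (configured != null) ? configured : DEFAULT_SSO_COOKIE_NAME;
    }

    public static void main(String[] args) {
        System.out.println(cookieName(new HashMap<>())); // hadoop-jwt

        Map<String, String> params = new HashMap<>();
        params.put(SSO_COOKIE_NAME, "custom-jwt");
        System.out.println(cookieName(params)); // custom-jwt
    }
}
```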
-<pre><code>// expected audiences or null
-String expectedAudiences =
filterConfig.getInitParameter(SSO_EXPECTED_AUDIENCES);
-if (expectedAudiences != null) {
- audiences = parseExpectedAudiences(expectedAudiences);
-}
+<pre><code> // expected audiences or null
+ String expectedAudiences =
filterConfig.getInitParameter(SSO_EXPECTED_AUDIENCES);
+ if (expectedAudiences != null) {
+ audiences = parseExpectedAudiences(expectedAudiences);
+ }
</code></pre><p>Audiences are configured as a comma-separated list of audience
strings: names of intended recipients or intents. The semantics we use for this
processing are: if no audiences are configured, then any (or no) audience is
accepted. If audiences are configured, then the token is accepted as long as
one of the expected audiences is found in its set of claims.</p>
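<p>The audience rule described above can be sketched as follows (AudienceCheck is a hypothetical helper written for this guide, not part of Knox):</p>

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helper (not Knox code) capturing the audience semantics above:
// nothing configured means accept any (or no) audience; otherwise at least one
// configured audience must appear among the token's audience claims.
public class AudienceCheck {
    public static boolean accepts(List<String> configured, String[] tokenAudiences) {
        if (configured == null) {
            return true; // no expected audiences configured: accept anything
        }
        for (String aud : tokenAudiences) {
            if (configured.contains(aud)) {
                return true; // one match is sufficient
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> expected = Arrays.asList("HSSO", "ui");
        System.out.println(accepts(null, new String[] {"anything"})); // true
        System.out.println(accepts(expected, new String[] {"HSSO"})); // true
        System.out.println(accepts(expected, new String[] {"other"})); // false
    }
}
```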
-<pre><code>// url to SSO authentication provider
-authenticationProviderUrl =
filterConfig.getInitParameter(SSO_AUTHENTICATION_PROVIDER_URL);
-if (authenticationProviderUrl == null) {
- log.missingAuthenticationProviderUrlConfiguration();
-}
-</code></pre><p> }</p><p>This is the URL to the KnoxSSO endpoint. It is
required and SSO/token exchange will not work without this set correctly.</p>
-<pre><code>/**
-</code></pre><p> * @param expectedAudiences * @return */ private
List<String> parseExpectedAudiences(String expectedAudiences) {
ArrayList<String> audList = null; // setup the list of valid audiences for
token validation if (expectedAudiences != null) { // parse into the list
String[] audArray = expectedAudiences.split(“,”); audList = new
ArrayList<String>(); for (String a : audArray) { audList.add(a); } }
return audList; } The above method parses the comma separated list of expected
audiences and makes it available for interrogation during token validation.</p>
-<pre><code>public void destroy() {
-}
-
-public void doFilter(ServletRequest request, ServletResponse response,
FilterChain chain)
- throws IOException, ServletException {
- String wireToken = null;
- HttpServletRequest req = (HttpServletRequest) request;
-
- String loginURL = constructLoginURL(req);
- wireToken = getJWTFromCookie(req);
- if (wireToken == null) {
- if (req.getMethod().equals("OPTIONS")) {
- // CORS preflight requests to determine allowed origins and related
config
- // must be able to continue without being redirected
- Subject sub = new Subject();
- sub.getPrincipals().add(new PrimaryPrincipal("anonymous"));
- continueWithEstablishedSecurityContext(sub, req, (HttpServletResponse)
response, chain);
+<pre><code> // url to SSO authentication provider
+ authenticationProviderUrl =
filterConfig.getInitParameter(SSO_AUTHENTICATION_PROVIDER_URL);
+ if (authenticationProviderUrl == null) {
+ log.missingAuthenticationProviderUrlConfiguration();
}
- log.sendRedirectToLoginURL(loginURL);
- ((HttpServletResponse) response).sendRedirect(loginURL);
}
- else {
- JWTToken token = new JWTToken(wireToken);
- boolean verified = false;
- try {
- verified = authority.verifyToken(token);
- if (verified) {
- Date expires = token.getExpiresDate();
- if (expires != null && new Date().before(expires)) {
- boolean audValid = validateAudiences(token);
- if (audValid) {
- Subject subject = createSubjectFromToken(token);
- continueWithEstablishedSecurityContext(subject,
(HttpServletRequest)request, (HttpServletResponse)response, chain);
+</code></pre><p>This is the URL to the KnoxSSO endpoint. It is required and
SSO/token exchange will not work without this set correctly.</p>
+<pre><code> /**
+ * @param expectedAudiences
+ * @return
+ */
+ private List<String> parseExpectedAudiences(String
expectedAudiences) {
+ ArrayList<String> audList = null;
+ // setup the list of valid audiences for token validation
+ if (expectedAudiences != null) {
+ // parse into the list
+ String[] audArray = expectedAudiences.split(",");
+ audList = new ArrayList<String>();
+ for (String a : audArray) {
+ audList.add(a);
+ }
+ }
+ return audList;
+ }
+</code></pre><p>The above method parses the comma-separated list of expected
audiences and makes it available for interrogation during token validation.</p>
+<pre><code> public void destroy() {
+ }
+
+ public void doFilter(ServletRequest request, ServletResponse response,
FilterChain chain)
+ throws IOException, ServletException {
+ String wireToken = null;
+ HttpServletRequest req = (HttpServletRequest) request;
+
+ String loginURL = constructLoginURL(req);
+ wireToken = getJWTFromCookie(req);
+ if (wireToken == null) {
+ if (req.getMethod().equals("OPTIONS")) {
+ // CORS preflight requests to determine allowed origins and related
config
+ // must be able to continue without being redirected
+ Subject sub = new Subject();
+ sub.getPrincipals().add(new PrimaryPrincipal("anonymous"));
+ continueWithEstablishedSecurityContext(sub, req,
(HttpServletResponse) response, chain);
+ }
+ log.sendRedirectToLoginURL(loginURL);
+ ((HttpServletResponse) response).sendRedirect(loginURL);
+ }
+ else {
+ JWTToken token = new JWTToken(wireToken);
+ boolean verified = false;
+ try {
+ verified = authority.verifyToken(token);
+ if (verified) {
+ Date expires = token.getExpiresDate();
+ if (expires != null && new Date().before(expires)) {
+ boolean audValid = validateAudiences(token);
+ if (audValid) {
+ Subject subject = createSubjectFromToken(token);
+ continueWithEstablishedSecurityContext(subject,
(HttpServletRequest)request, (HttpServletResponse)response, chain);
+ }
+ else {
+ log.failedToValidateAudience();
+ ((HttpServletResponse) response).sendRedirect(loginURL);
+ }
+ }
+ else {
+ log.tokenHasExpired();
+ ((HttpServletResponse) response).sendRedirect(loginURL);
+ }
}
else {
- log.failedToValidateAudience();
- ((HttpServletResponse) response).sendRedirect(loginURL);
+ log.failedToVerifyTokenSignature();
+ ((HttpServletResponse) response).sendRedirect(loginURL);
}
- }
- else {
- log.tokenHasExpired();
+ } catch (TokenServiceException e) {
+ log.unableToVerifyToken(e);
((HttpServletResponse) response).sendRedirect(loginURL);
}
}
- else {
- log.failedToVerifyTokenSignature();
- ((HttpServletResponse) response).sendRedirect(loginURL);
- }
- } catch (TokenServiceException e) {
- log.unableToVerifyToken(e);
- ((HttpServletResponse) response).sendRedirect(loginURL);
}
- }
-}
</code></pre><p>The doFilter method above is where all the real work is done.
We look for a cookie by the configured name. If it isn't there, we redirect to
the configured SSO provider URL in order to acquire one. The exception is an
OPTIONS request, which may be a preflight CORS request; you shouldn't need to
worry about this aspect, as it is really a REST API concern, not a web app UI
one.</p><p>Once we get a cookie, the underlying JWT token is
extracted and returned as the wireToken, from which we create a Knox-specific
JWTToken. This abstraction is around the use of the nimbus JWT library, which
you can use directly. We will cover those details separately.</p><p>We then ask
the token authority component to verify the token. This involves signature
validation of the signed token. In order to verify the signature of the token
you will need to have the public key of the Knox SSO server configured and
provided to the nimbus library through its API at verification time. NOTE:
This is a good place to look at the Hadoop implementation as an
example.</p><p>Once we know the token is signed by a trusted party, we then
validate whether it has expired and whether it carries an expected (or no)
audience claim.</p><p>Finally, when we have a valid token, we create a Java Subject
from it and continue the request through the filterChain as the authenticated
user.</p>
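<p>Since the nimbus details are covered separately, the following stdlib-only sketch just illustrates the post-signature checks described above: decoding the payload segment of a serialized JWT and the shape of the expiry comparison. Signature verification is deliberately elided, and the class name is hypothetical.</p>

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical helper (not Knox code). Shows only the post-signature checks:
// pulling the payload JSON out of a serialized JWT and the expiry comparison.
// Real signature verification (done by the token authority) is elided.
public class JwtPayloadPeek {
    // A serialized JWT is base64url(header).base64url(payload).signature
    public static String payloadJson(String wireToken) {
        String[] parts = wireToken.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
    }

    // The filter accepts the token only while "now" is before the exp claim.
    public static boolean notExpired(long expSeconds, long nowSeconds) {
        return nowSeconds < expSeconds;
    }

    public static void main(String[] args) {
        String payload = "{\"exp\":1443585371,\"sub\":\"guest\",\"aud\":\"HSSO\",\"iss\":\"HSSO\"}";
        String token = "e30." // base64url of an empty {} header
                + Base64.getUrlEncoder().withoutPadding()
                        .encodeToString(payload.getBytes(StandardCharsets.UTF_8))
                + ".fake-signature";
        System.out.println(payloadJson(token)); // prints the payload JSON
    }
}
```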
-<pre><code>/**
-</code></pre><p> * Encapsulate the acquisition of the JWT token from
HTTP cookies within the * request. * * @param req servlet request to get the
JWT token from * @return serialized JWT token */ protected String
getJWTFromCookie(HttpServletRequest req) { String serializedJWT = null;
Cookie[] cookies = req.getCookies(); if (cookies != null) { for (Cookie
cookie : cookies) { if (cookieName.equals(cookie.getName())) {
log.cookieHasBeenFound(cookieName); serializedJWT = cookie.getValue(); break;
} } } return serializedJWT; }</p><p>The above method extracts the
serialized token from the cookie and returns it as the wireToken.</p><p> /** *
Create the URL to be used for authentication of the user in the absence of * a
JWT token within the incoming request. * * @param request for getting the
original request URL * @return url to use as login url for redirect */
protected String constructLoginURL(HttpServletRequest request) { String
delimiter = “?”; if
(authenticationProviderUrl.contains(“?”)) { delimiter =
“&”; } String loginURL = authenticationProviderUrl +
delimiter + ORIGINAL_URL_QUERY_PARAM + request.getRequestURL().toString()+
getOriginalQueryString(request); return loginURL; }</p><p> private String
getOriginalQueryString(HttpServletRequest request) { String
originalQueryString = request.getQueryString(); return (originalQueryString ==
null) ? "“ : ”?" + originalQueryString; }</p><p>The
above method creates the full URL to be used in redirecting to the KnoxSSO
endpoint. It includes the SSO provider URL as well as the original request URL
so that we can redirect back to it after authentication and token
exchange.</p><p> /** * Validate whether any of the accepted audience claims is
present in the * issued token claims list for audience. Override this method
in subclasses * in order to customize the audience validation behavior. * *
@param jwtToken * the JWT token
where the allowed audiences will be found * @return true if an expected
audience is present, otherwise false */ protected boolean
validateAudiences(JWTToken jwtToken) { boolean valid = false; String[]
tokenAudienceList = jwtToken.getAudienceClaims(); // if there were no expected
audiences configured then just // consider any audience acceptable if
(audiences == null) { valid = true; } else { // if any of the configured
audiences is found then consider it // acceptable for (String aud :
tokenAudienceList) { if (audiences.contains(aud)) { //log.debug(“JWT
token audience has been successfully validated”);
log.jwtAudienceValidated(); valid = true; break; } } } return valid;
}</p><p>The above method implements the audience claim semantics explained
earlier.</p>
-<pre><code>private void continueWithEstablishedSecurityContext(Subject
subject, final HttpServletRequest request, final HttpServletResponse
response, final FilterChain chain) throws IOException, ServletException {
-try {
- Subject.doAs(
- subject,
- new PrivilegedExceptionAction<Object>() {
- @Override
- public Object run() throws Exception {
- chain.doFilter(request, response);
- return null;
+<pre><code> /**
+ * Encapsulate the acquisition of the JWT token from HTTP cookies within
the
+ * request.
+ *
+ * @param req servlet request to get the JWT token from
+ * @return serialized JWT token
+ */
+ protected String getJWTFromCookie(HttpServletRequest req) {
+ String serializedJWT = null;
+ Cookie[] cookies = req.getCookies();
+ if (cookies != null) {
+ for (Cookie cookie : cookies) {
+ if (cookieName.equals(cookie.getName())) {
+ log.cookieHasBeenFound(cookieName);
+ serializedJWT = cookie.getValue();
+ break;
+ }
}
}
- );
-}
-catch (PrivilegedActionException e) {
- Throwable t = e.getCause();
- if (t instanceof IOException) {
- throw (IOException) t;
- }
- else if (t instanceof ServletException) {
- throw (ServletException) t;
- }
- else {
- throw new ServletException(t);
- }
-}
-</code></pre><p> }</p><p>This method continues the filter chain processing
upon successful validation of the token. This would need to be replaced with
your environment’s equivalent of continuing the request or login to the app
as the authenticated user.</p><p> private Subject
createSubjectFromToken(JWTToken token) { final String principal =
token.getSubject(); @SuppressWarnings(“rawtypes”) HashSet
emptySet = new HashSet(); Set<Principal> principals = new
HashSet<Principal>(); Principal p = new PrimaryPrincipal(principal);
principals.add(p); javax.security.auth.Subject subject = new
javax.security.auth.Subject(true, principals, emptySet, emptySet); return
subject; }</p><p>This method takes a JWTToken and creates a Java Subject with
the principals expected by the rest of the Knox processing. This would need to
be implemented in a way appropriate for your operating environment as well. For
instance, the Hadoop handler implementation returns a Hadoop
AuthenticationToken to the calling filter which in turn ends up in the Hadoop auth
cookie.</p>
-<pre><code>}
+ return serializedJWT;
+ }
+</code></pre><p>The above method extracts the serialized token from the cookie
and returns it as the wireToken.</p>
+<pre><code> /**
+ * Create the URL to be used for authentication of the user in the
absence of
+ * a JWT token within the incoming request.
+ *
+ * @param request for getting the original request URL
+ * @return url to use as login url for redirect
+ */
+ protected String constructLoginURL(HttpServletRequest request) {
+ String delimiter = "?";
+ if (authenticationProviderUrl.contains("?")) {
+ delimiter = "&";
+ }
+ String loginURL = authenticationProviderUrl + delimiter
+ + ORIGINAL_URL_QUERY_PARAM
+ + request.getRequestURL().toString()+ getOriginalQueryString(request);
+ return loginURL;
+ }
+
+ private String getOriginalQueryString(HttpServletRequest request) {
+ String originalQueryString = request.getQueryString();
+ return (originalQueryString == null) ? "" : "?" +
originalQueryString;
+ }
+</code></pre><p>The above method creates the full URL to be used in
redirecting to the KnoxSSO endpoint. It includes the SSO provider URL as well
as the original request URL so that we can redirect back to it after
authentication and token exchange.</p>
+<pre><code> /**
+ * Validate whether any of the accepted audience claims is present in the
+ * issued token claims list for audience. Override this method in
subclasses
+ * in order to customize the audience validation behavior.
+ *
+ * @param jwtToken
+ * the JWT token where the allowed audiences will be found
+ * @return true if an expected audience is present, otherwise false
+ */
+ protected boolean validateAudiences(JWTToken jwtToken) {
+ boolean valid = false;
+ String[] tokenAudienceList = jwtToken.getAudienceClaims();
+ // if there were no expected audiences configured then just
+ // consider any audience acceptable
+ if (audiences == null) {
+ valid = true;
+ } else {
+ // if any of the configured audiences is found then consider it
+ // acceptable
+ for (String aud : tokenAudienceList) {
+ if (audiences.contains(aud)) {
+ //log.debug("JWT token audience has been
successfully validated");
+ log.jwtAudienceValidated();
+ valid = true;
+ break;
+ }
+ }
+ }
+ return valid;
+ }
+</code></pre><p>The above method implements the audience claim semantics
explained earlier.</p>
+<pre><code> private void continueWithEstablishedSecurityContext(Subject
subject, final HttpServletRequest request, final
HttpServletResponse response, final FilterChain chain) throws IOException,
ServletException {
+ try {
+ Subject.doAs(
+ subject,
+ new PrivilegedExceptionAction<Object>() {
+ @Override
+ public Object run() throws Exception {
+ chain.doFilter(request, response);
+ return null;
+ }
+ }
+ );
+ }
+ catch (PrivilegedActionException e) {
+ Throwable t = e.getCause();
+ if (t instanceof IOException) {
+ throw (IOException) t;
+ }
+ else if (t instanceof ServletException) {
+ throw (ServletException) t;
+ }
+ else {
+ throw new ServletException(t);
+ }
+ }
+ }
+</code></pre><p>This method continues the filter chain processing upon
successful validation of the token. This would need to be replaced with your
environment’s equivalent of continuing the request or login to the app as the
authenticated user.</p>
+<pre><code> private Subject createSubjectFromToken(JWTToken token) {
+ final String principal = token.getSubject();
+ @SuppressWarnings("rawtypes")
+ HashSet emptySet = new HashSet();
+ Set<Principal> principals = new HashSet<Principal>();
+ Principal p = new PrimaryPrincipal(principal);
+ principals.add(p);
+ javax.security.auth.Subject subject = new
javax.security.auth.Subject(true, principals, emptySet, emptySet);
+ return subject;
+ }
+</code></pre><p>This method takes a JWTToken and creates a Java Subject with
the principals expected by the rest of the Knox processing. This would need to
be implemented in a way appropriate for your operating environment as well. For
instance, the Hadoop handler implementation returns a Hadoop
AuthenticationToken to the calling filter which in turn ends up in the Hadoop
auth cookie.</p>
+<pre><code> }
</code></pre><h4><a id="Token+Signature+Validation">Token Signature
Validation</a> <a href="#Token+Signature+Validation"><img
src="markbook-section-link.png"/></a></h4><p>The following is the method from
the Hadoop handler implementation that validates the signature.</p>
-<pre><code>/**
-</code></pre><p> * Verify the signature of the JWT token in this method.
This method depends on the * public key that was established during init based
upon the provisioned public key. * Override this method in subclasses in order
to customize the signature verification behavior. * @param jwtToken the token
that contains the signature to be validated * @return valid true if signature
verifies successfully; false otherwise */ protected boolean
validateSignature(SignedJWT jwtToken){ boolean valid=false; if
(JWSObject.State.SIGNED == jwtToken.getState()) { LOG.debug(“JWT token
is in a SIGNED state”); if (jwtToken.getSignature() != null) {
LOG.debug(“JWT token signature is not null”); try { JWSVerifier
verifier=new RSASSAVerifier(publicKey); if (jwtToken.verify(verifier)) {
valid=true; LOG.debug(“JWT token has been successfully verified”);
} else { LOG.warn(“JWT signature verification failed.”); } }
catch (JOSEException
je) { LOG.warn(“Error while validating signature”,je); } }
return valid; }</p><p>Hadoop Configuration Example The following is like the
configuration in the Hadoop handler implementation.</p><p>OBSOLETE but in the
proper spirit of HADOOP-11717 ( HADOOP-11717 - Add Redirecting WebSSO behavior
with JWT Token in Hadoop Auth RESOLVED )</p>
-<pre><code><property>
-</code></pre><p> <name>hadoop.http.authentication.type</name>
<value>org.apache.hadoop/security.authentication/server.JWTRedirectAuthenticationHandler</value>
</property></p><p>This is the handler classname in Hadoop auth for JWT token
(KnoxSSO) support.</p>
-<pre><code><property>
-</code></pre><p>
<name>hadoop.http.authentication.authentication.provider.url</name> <value><a
href="http://c6401.ambari.apache.org:8888/knoxsso</value">http://c6401.ambari.apache.org:8888/knoxsso</value</a>
</property></p><p>The above property is the SSO provider URL that points to
the knoxsso endpoint.</p>
-<pre><code><property>
-</code></pre><p> <name>hadoop.http.authentication.public.key.pem</name>
<value>MIICVjCCAb+gAwIBAgIJAPPvOtuTxFeiMA0GCSqGSIb3DQEBBQUAMG0xCzAJBgNV
BAYTAlVTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MQ8wDQYDVQQKEwZI
YWRvb3AxDTALBgNVBAsTBFRlc3QxIDAeBgNVBAMTF2M2NDAxLmFtYmFyaS5hcGFj
aGUub3JnMB4XDTE1MDcxNjE4NDcyM1oXDTE2MDcxNTE4NDcyM1owbTELMAkGA1UE
BhMCVVMxDTALBgNVBAgTBFRlc3QxDTALBgNVBAcTBFRlc3QxDzANBgNVBAoTBkhh
ZG9vcDENMAsGA1UECxMEVGVzdDEgMB4GA1UEAxMXYzY0MDEuYW1iYXJpLmFwYWNo
ZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMFs/rymbiNvg8lDhsdA
qvh5uHP6iMtfv9IYpDleShjkS1C+IqId6bwGIEO8yhIS5BnfUR/fcnHi2ZNrXX7x
QUtQe7M9tDIKu48w//InnZ6VpAqjGShWxcSzR6UB/YoGe5ytHS6MrXaormfBg3VW
tDoy2MS83W8pweS6p5JnK7S5AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEANyVg6EzE
2q84gq7wQfLt9t047nYFkxcRfzhNVL3LB8p6IkM4RUrzWq4kLA+z+bpY2OdpkTOe
wUpEdVKzOQd4V7vRxpdANxtbG/XXrJAAcY/S+eMy1eDK73cmaVPnxPUGWmMnQXUi
TLab+w8tBQhNbq6BOQ42aOrLxA8k/M4cV1A=</value> </property></p><p>The above
property holds the KnoxSSO server’s public key
for signature verification. Adding it directly to the config like this is
convenient and is easily done through Ambari to existing config files that take
custom properties. Config is generally protected as root access only as well -
so it is a pretty good solution.</p><h4><a id="Public+Key+Parsing">Public Key
Parsing</a> <a href="#Public+Key+Parsing"><img
src="markbook-section-link.png"/></a></h4><p>In order to turn the pem encoded
config item into a public key the hadoop handler implementation does the
following in the init() method.</p><p> if (publicKey == null) { String
pemPublicKey = config.getProperty(PUBLIC_KEY_PEM); if (pemPublicKey == null) {
throw new ServletException( “Public key for signature validation must
be provisioned.”); } publicKey =
CertificateUtil.parseRSAPublicKey(pemPublicKey); }</p><p>and the
CertificateUtil class is below:</p>
-<pre><code>package org.apache.hadoop.security.authentication.util;
-
-import java.io.ByteArrayInputStream;
-import java.io.UnsupportedEncodingException;
-import java.security.PublicKey;
-import java.security.cert.CertificateException;
-import java.security.cert.CertificateFactory;
-import java.security.cert.X509Certificate;
-import java.security.interfaces.RSAPublicKey;
-
-import javax.servlet.ServletException;
-
-public class CertificateUtil {
- private static final String PEM_HEADER = "-----BEGIN
CERTIFICATE-----\n";
- private static final String PEM_FOOTER = "\n-----END
CERTIFICATE-----";
-
- /**
- * Gets an RSAPublicKey from the provided PEM encoding.
-</code></pre><p> * * @param pem * - the pem encoding from config without the
header and footer * @return RSAPublicKey the RSA public key * @throws
ServletException thrown if a processing error occurred */ public static
RSAPublicKey parseRSAPublicKey(String pem) throws ServletException { String
fullPem = PEM_HEADER + pem + PEM_FOOTER; PublicKey key = null; try {
CertificateFactory fact = CertificateFactory.getInstance(“X.509”);
ByteArrayInputStream is = new ByteArrayInputStream(
fullPem.getBytes(“UTF8”)); X509Certificate cer = (X509Certificate)
fact.generateCertificate(is); key = cer.getPublicKey(); } catch
(CertificateException ce) { String message = null; if
(pem.startsWith(PEM_HEADER)) { message = “CertificateException - be sure
not to include PEM header ” + “and footer in the PEM configuration
element.”; } else { message = “CertificateException - PEM may be
corrupt”; } throw new ServletException(message,
ce); } catch (UnsupportedEncodingException uee) { throw new
ServletException(uee); } return (RSAPublicKey) key; } }</p>
\ No newline at end of file
+<pre><code> /**
+ * Verify the signature of the JWT token in this method. This method
depends on the public key that was established during init based upon
the provisioned public key. Override this method in subclasses in order to
customize the signature verification behavior.
+ * @param jwtToken the token that contains the signature to be validated
+ * @return valid true if signature verifies successfully; false otherwise
+ */
+ protected boolean validateSignature(SignedJWT jwtToken){
+ boolean valid=false;
+ if (JWSObject.State.SIGNED == jwtToken.getState()) {
+ LOG.debug("JWT token is in a SIGNED state");
+ if (jwtToken.getSignature() != null) {
+ LOG.debug("JWT token signature is not null");
+ try {
+ JWSVerifier verifier=new
RSASSAVerifier(publicKey);
+ if (jwtToken.verify(verifier)) {
+ valid=true;
+ LOG.debug("JWT token has been successfully
verified");
+ }
+ else {
+ LOG.warn("JWT signature verification
failed.");
+ }
+ }
+ catch (JOSEException je) {
+ LOG.warn("Error while validating signature",je);
+ }
+ }
+ }
+ return valid;
+ }
+</code></pre><p>Hadoop Configuration Example: The following is similar to the
configuration in the Hadoop handler implementation.</p><p>OBSOLETE but in the
proper spirit of HADOOP-11717 (Add Redirecting WebSSO behavior
with JWT Token in Hadoop Auth, RESOLVED)</p>
+<pre><code> <property>
+ <name>hadoop.http.authentication.type</name>
+
<value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
+ </property>
+</code></pre><p>This is the handler classname in Hadoop auth for JWT token
(KnoxSSO) support.</p>
+<pre><code> <property>
+
<name>hadoop.http.authentication.authentication.provider.url</name>
+
<value>http://c6401.ambari.apache.org:8888/knoxsso</value>
+ </property>
+</code></pre><p>The above property is the SSO provider URL that points to the
knoxsso endpoint.</p>
+<pre><code> <property>
+
<name>hadoop.http.authentication.public.key.pem</name>
+
<value>MIICVjCCAb+gAwIBAgIJAPPvOtuTxFeiMA0GCSqGSIb3DQEBBQUAMG0xCzAJBgNV
+ BAYTAlVTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MQ8wDQYDVQQKEwZI
+ YWRvb3AxDTALBgNVBAsTBFRlc3QxIDAeBgNVBAMTF2M2NDAxLmFtYmFyaS5hcGFj
+ aGUub3JnMB4XDTE1MDcxNjE4NDcyM1oXDTE2MDcxNTE4NDcyM1owbTELMAkGA1UE
+ BhMCVVMxDTALBgNVBAgTBFRlc3QxDTALBgNVBAcTBFRlc3QxDzANBgNVBAoTBkhh
+ ZG9vcDENMAsGA1UECxMEVGVzdDEgMB4GA1UEAxMXYzY0MDEuYW1iYXJpLmFwYWNo
+ ZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMFs/rymbiNvg8lDhsdA
+ qvh5uHP6iMtfv9IYpDleShjkS1C+IqId6bwGIEO8yhIS5BnfUR/fcnHi2ZNrXX7x
+ QUtQe7M9tDIKu48w//InnZ6VpAqjGShWxcSzR6UB/YoGe5ytHS6MrXaormfBg3VW
+ tDoy2MS83W8pweS6p5JnK7S5AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEANyVg6EzE
+ 2q84gq7wQfLt9t047nYFkxcRfzhNVL3LB8p6IkM4RUrzWq4kLA+z+bpY2OdpkTOe
+ wUpEdVKzOQd4V7vRxpdANxtbG/XXrJAAcY/S+eMy1eDK73cmaVPnxPUGWmMnQXUi
+ TLab+w8tBQhNbq6BOQ42aOrLxA8k/M4cV1A=</value>
+ </property>
+</code></pre><p>The above property holds the KnoxSSO server’s public key for
signature verification. Adding it directly to the config like this is
convenient and is easily done through Ambari to existing config files that take
custom properties. Config is generally protected as root access only as well -
so it is a pretty good solution.</p><h4><a id="Public+Key+Parsing">Public Key
Parsing</a> <a href="#Public+Key+Parsing"><img
src="markbook-section-link.png"/></a></h4><p>In order to turn the pem encoded
config item into a public key the hadoop handler implementation does the
following in the init() method.</p>
+<pre><code> if (publicKey == null) {
+ String pemPublicKey = config.getProperty(PUBLIC_KEY_PEM);
+ if (pemPublicKey == null) {
+ throw new ServletException(
+ "Public key for signature validation must be
provisioned.");
+ }
+ publicKey = CertificateUtil.parseRSAPublicKey(pemPublicKey);
+ }
+</code></pre><p>and the CertificateUtil class is below:</p>
+<pre><code> package org.apache.hadoop.security.authentication.util;
+
+ import java.io.ByteArrayInputStream;
+ import java.io.UnsupportedEncodingException;
+ import java.security.PublicKey;
+ import java.security.cert.CertificateException;
+ import java.security.cert.CertificateFactory;
+ import java.security.cert.X509Certificate;
+ import java.security.interfaces.RSAPublicKey;
+
+ import javax.servlet.ServletException;
+
+ public class CertificateUtil {
+ private static final String PEM_HEADER = "-----BEGIN
CERTIFICATE-----\n";
+ private static final String PEM_FOOTER = "\n-----END
CERTIFICATE-----";
+
+ /**
+ * Gets an RSAPublicKey from the provided PEM encoding.
+ *
+ * @param pem
+ * - the pem encoding from config without the header and footer
+ * @return RSAPublicKey the RSA public key
+ * @throws ServletException thrown if a processing error occurred
+ */
+ public static RSAPublicKey parseRSAPublicKey(String pem) throws
ServletException {
+ String fullPem = PEM_HEADER + pem + PEM_FOOTER;
+ PublicKey key = null;
+ try {
+ CertificateFactory fact =
CertificateFactory.getInstance("X.509");
+ ByteArrayInputStream is = new ByteArrayInputStream(
+ fullPem.getBytes("UTF8"));
+ X509Certificate cer = (X509Certificate)
fact.generateCertificate(is);
+ key = cer.getPublicKey();
+ } catch (CertificateException ce) {
+ String message = null;
+ if (pem.startsWith(PEM_HEADER)) {
+ message = "CertificateException - be sure
not to include PEM header "
+ + "and footer in the PEM configuration
element.";
+ } else {
+ message = "CertificateException - PEM may
be corrupt";
+ }
+ throw new ServletException(message, ce);
+ } catch (UnsupportedEncodingException uee) {
+ throw new ServletException(uee);
+ }
+ return (RSAPublicKey) key;
+ }
+ }
+</code></pre>
\ No newline at end of file
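As a related JDK-only sketch: CertificateUtil above parses a *certificate* PEM, but if the provisioned base64 body were a bare RSA public key instead, the analogous parse would go through KeyFactory with an X509EncodedKeySpec rather than CertificateFactory. The class and method names below are invented for this illustration; the round trip is not part of the Hadoop handler.

```java
import java.security.KeyFactory;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.interfaces.RSAPublicKey;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

// Round trip: encode an RSA public key to the kind of single-line base64
// body a config property could hold, then parse it back into an RSAPublicKey.
public class PemRoundTrip {

  // Parse a base64 body (no BEGIN/END PUBLIC KEY header or footer).
  static RSAPublicKey parseBareRSAPublicKey(String base64Body) throws Exception {
    byte[] der = Base64.getDecoder().decode(base64Body);
    return (RSAPublicKey) KeyFactory.getInstance("RSA")
        .generatePublic(new X509EncodedKeySpec(der));
  }

  public static void main(String[] args) throws Exception {
    KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
    kpg.initialize(2048);
    PublicKey pub = kpg.generateKeyPair().getPublic();

    // What would sit in the config: a single-line base64 body.
    String body = Base64.getEncoder().encodeToString(pub.getEncoded());

    RSAPublicKey parsed = parseBareRSAPublicKey(body);
    System.out.println(parsed.getAlgorithm()); // RSA
  }
}
```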
Added: knox/trunk/books/0.7.0/dev-guide/general_saml_flow.puml
URL:
http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/dev-guide/general_saml_flow.puml?rev=1714349&view=auto
==============================================================================
--- knox/trunk/books/0.7.0/dev-guide/general_saml_flow.puml (added)
+++ knox/trunk/books/0.7.0/dev-guide/general_saml_flow.puml Sat Nov 14 17:46:14
2015
@@ -0,0 +1,45 @@
+@startuml
+autonumber
+footbox off
+participant Browser as B
+participant UI as A
+participant Knox as G
+participant SAML_IdP as E
+
+B->A: GET(ui-origin-url)
+note right: User/browser makes request to UI without valid token
+activate A
+A-->B: redirect(knox-sso+ui-origin-url)
+note right: AuthFilter in UI detects no/invalid token redirects to\nKnoxSSO
preserving ui-origin-url
+deactivate A
+B->G: GET(knox-sso+ui-origin-url)
+note right: Browser follows redirect
+activate G
+G-->B: redirect(idp-login-ui)
+note right: KnoxSSO finds no/invalid token, redirects to SAML IdP
+deactivate G
+B->E: POST(idp-login-ui)
+note right: Browser follows redirect
+activate E
+E-->B: ok(idp-login-ui)
+note right: SAML IdP presents login form to user
+deactivate E
+B->E: POST(idp-login-ui,credentials)
+note right: User provides credentials to IdP via login form.\nSAML IdP
validates credentials.
+activate E
+E-->B: redirect(knox-sso,saml-assertion)
+note right: IdP redirects back to knox-origin-url with SAML assertion\nin form
POST
+deactivate E
+B->G: POST(knox-sso,saml-assertion)
+note right: KnoxSSO converts SAML assertion to a KnoxSSO cookie\nand extracts
ui-origin-url from original-url cookie
+activate G
+G-->B: redirect(ui-origin-url,knox-token)
+note right: KnoxSSO redirects client back to ui-origin-url with KnoxSSO cookie
+deactivate G
+B->A: GET(ui-origin-url,knox-token)
+note right: Browser follows redirect to ui-origin-url with JWT Bearer Token in
cookie.\nJWT Bearer Token validated by AuthFilter in UI
+activate A
+A->B: ok(ui-cookie)
+note right: Request processes and response returned to client.
+deactivate A
+@enduml
Modified: knox/trunk/books/0.7.0/dev-guide/knoxsso_integration.md
URL:
http://svn.apache.org/viewvc/knox/trunk/books/0.7.0/dev-guide/knoxsso_integration.md?rev=1714349&r1=1714348&r2=1714349&view=diff
==============================================================================
--- knox/trunk/books/0.7.0/dev-guide/knoxsso_integration.md (original)
+++ knox/trunk/books/0.7.0/dev-guide/knoxsso_integration.md Sat Nov 14 17:46:14
2015
@@ -10,6 +10,7 @@ This document examines the way to integr
### General Flow
The following is a generic sequence diagram for SAML integration through
KnoxSSO.
+
<<general_saml_flow.puml>>
#### KnoxSSO Setup
@@ -17,6 +18,7 @@ The following is a generic sequence diag
##### knoxsso.xml Topology
In order to enable KnoxSSO, we need to configure the IdP topology. The
following is an example of this topology that is configured to use HTTP Basic
Auth against the Knox Demo LDAP server. This is the lowest barrier of entry for
your development environment that actually authenticates against a real user
store. What’s great is that if you work against the IdP with Basic Auth then you
will work with SAML or anything else as well.
+```
<?xml version="1.0" encoding="utf-8"?>
<topology>
<gateway>
@@ -77,6 +79,7 @@ In order to enable KnoxSSO, we need to c
</param>
</service>
</topology>
+```
Just as with any Knox service, the KNOXSSO service is protected by the gateway
providers defined above it. In this case, the ShiroProvider is taking care of
HTTP Basic Auth against LDAP for us. Once the user authenticates the request
processing continues to the KNOXSSO service that will create the required
cookie and do the necessary redirects.
@@ -85,6 +88,7 @@ The authenticate/federation provider can
##### sandbox.xml Topology
In order to see the end to end story and use it as an example in your
development, you can configure one of the cluster topologies to use the
SSOCookieProvider instead of the out of the box ShiroProvider. The following is
an example sandbox.xml topology that is configured for using KnoxSSO to protect
access to the Hadoop REST APIs.
+```
<?xml version="1.0" encoding="utf-8"?>
<topology>
<gateway>
@@ -146,6 +150,7 @@ In order to see the end to end story and
<url>http://localhost:8088/ws</url>
</service>
</topology>
+```
* NOTE: Be aware that when using Chrome as your browser, cookies don’t
seem to work for “localhost”. Either use a VM or, like I did, use
127.0.0.1. Safari works with localhost without problems.
@@ -156,17 +161,20 @@ We should now be able to walk through th
First, issue a request to WEBHDFS through knox.
+```
bash-3.2$ curl -iku guest:guest-password
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op+LISTSTATUS
HTTP/1.1 302 Found
Location:
https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op+LISTSTATUS
Content-Length: 0
Server: Jetty(8.1.14.v20131031)
+```
Note the redirect to the knoxsso endpoint and the loginUrl with the
originalUrl request parameter. We need to see that come from your integration
as well.
Let’s manually follow that redirect with curl now:
+```
bash-3.2$ curl -iku guest:guest-password
"https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:9443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS"
HTTP/1.1 307 Temporary Redirect
@@ -176,21 +184,27 @@ Letâs manually follow that redirect
Location:
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
Content-Length: 0
Server: Jetty(8.1.14.v20131031)
+```
Note the redirect back to the original URL in the Location header and the
Set-Cookie for the hadoop-jwt cookie. This is what the SSOCookieProvider in
sandbox (and ultimately in your integration) will be looking for.
Finally, we should be able to take the above cookie and pass it to the
original url as indicated in the Location header for our originally requested
resource:
+```
bash-3.2$ curl -ikH "Cookie:
hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE0NDM1ODY2OTIsInN1YiI6Imd1ZXN0IiwiYXVkIjoiSFNTTyIsImlzcyI6IkhTU08ifQ.Os5HEfVBYiOIVNLRIvpYyjeLgAIMbBGXHBWMVRAEdiYcNlJRcbJJ5aSUl1aciNs1zd_SHijfB9gOdwnlvQ_0BCeGHlJBzHGyxeypIoGj9aOwEf36h-HVgqzGlBLYUk40gWAQk3aRehpIrHZT2hHm8Pu8W-zJCAwUd8HR3y6LF3M;Path=/;Domain=localhost;Secure;HttpOnly"
https://localhost:9443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
TODO: cluster was down and needs to be recreated :/
+```
#### Browse the Flow
At this point, we can use a web browser instead of the command line and see
how the browser will challenge the user for Basic Auth Credentials and then
manage the cookies such that the SSO and token exchange aspects of the flow are
hidden from the user.
Simply, try to invoke the same webhdfs API from the browser URL bar.
+
+```
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+```
Based on our understanding of the flow it should behave like:
@@ -210,6 +224,7 @@ In addition, https://issues.apache.org/j
We will examine the new SSOCookieFederationFilter in Knox here.
+```
package org.apache.hadoop.gateway.provider.federation.jwt.filter;
import java.io.IOException;
@@ -248,9 +263,11 @@ package org.apache.hadoop.gateway.provid
private static final String SSO_EXPECTED_AUDIENCES =
"sso.expected.audiences";
private static final String SSO_AUTHENTICATION_PROVIDER_URL =
"sso.authentication.provider.url";
private static final String DEFAULT_SSO_COOKIE_NAME =
"hadoop-jwt";
+```
The above represent the configurable aspects of the integration
+```
private JWTokenAuthority authority = null;
private String cookieName = null;
private List<String> audiences = null;
@@ -260,34 +277,42 @@ The above represent the configurable asp
public void init( FilterConfig filterConfig ) throws ServletException {
GatewayServices services = (GatewayServices)
filterConfig.getServletContext().getAttribute(GatewayServices.GATEWAY_SERVICES_ATTRIBUTE);
authority =
(JWTokenAuthority)services.getService(GatewayServices.TOKEN_SERVICE);
+```
The above is a Knox specific internal service that we use to issue and verify
JWT tokens. This will be covered separately and you will need to implement
something similar in your filter implementation.
+```
// configured cookieName
cookieName = filterConfig.getInitParameter(SSO_COOKIE_NAME);
if (cookieName == null) {
cookieName = DEFAULT_SSO_COOKIE_NAME;
}
+```
The configurable cookie name is something that can be used to change a cookie
name to fit your deployment environment. The default name is hadoop-jwt which
is also the default in the Hadoop implementation. This name must match the name
being used by the KnoxSSO endpoint when setting the cookie.
+```
// expected audiences or null
String expectedAudiences =
filterConfig.getInitParameter(SSO_EXPECTED_AUDIENCES);
if (expectedAudiences != null) {
audiences = parseExpectedAudiences(expectedAudiences);
}
+```
Audiences are configured as a comma separated list of audience strings: names
of intended recipients or intents. The semantics that we are using for this
processing are: if no audiences are configured, then any (or no) audience is accepted.
If audiences are configured, then as long as one of the expected ones is
found in the set of claims in the token it is accepted.
+```
// url to SSO authentication provider
authenticationProviderUrl =
filterConfig.getInitParameter(SSO_AUTHENTICATION_PROVIDER_URL);
if (authenticationProviderUrl == null) {
log.missingAuthenticationProviderUrlConfiguration();
}
- }
+ }
+```
This is the URL to the KnoxSSO endpoint. It is required and SSO/token exchange
will not work without this set correctly.
+```
/**
* @param expectedAudiences
* @return
@@ -305,8 +330,11 @@ This is the URL to the KnoxSSO endpoint.
}
return audList;
}
+```
+
The above method parses the comma separated list of expected audiences and
makes it available for interrogation during token validation.
+```
public void destroy() {
}
@@ -361,6 +389,7 @@ The above method parses the comma separa
}
}
}
+```
The doFilter method above is where all the real work is done. We look for a
cookie by the configured name. If it isn’t there then we redirect to the
configured SSO provider URL in order to acquire one. That is unless it is an
OPTIONS request which may be a preflight CORS request. You shouldn’t need to
worry about this aspect. It is really a REST API concern not a web app UI one.
@@ -372,6 +401,7 @@ Once we know the token is signed by a tr
Finally, when we have a valid token, we create a Java Subject from it and
continue the request through the filterChain as the authenticated user.
+```
/**
* Encapsulate the acquisition of the JWT token from HTTP cookies within
the
* request.
@@ -393,9 +423,11 @@ Finally, when we have a valid token, we
}
return serializedJWT;
}
+```
The above method extracts the serialized token from the cookie and returns it
as the wireToken.
+```
/**
* Create the URL to be used for authentication of the user in the
absence of
* a JWT token within the incoming request.
@@ -418,9 +450,11 @@ The above method extracts the serialized
String originalQueryString = request.getQueryString();
return (originalQueryString == null) ? "" : "?" + originalQueryString;
}
+```
The above method creates the full URL to be used in redirecting to the KnoxSSO
endpoint. It includes the SSO provider URL as well as the original request URL
so that we can redirect back to it after authentication and token exchange.
+```
/**
* Validate whether any of the accepted audience claims is present in the
* issued token claims list for audience. Override this method in
subclasses
@@ -451,9 +485,11 @@ The above method creates the full URL to
}
return valid;
}
+```
The above method implements the audience claim semantics explained earlier.
+```
private void continueWithEstablishedSecurityContext(Subject subject,
final HttpServletRequest request, final HttpServletResponse
response, final FilterChain chain) throws IOException, ServletException {
try {
Subject.doAs(
@@ -480,9 +516,11 @@ The above method implements the audience
}
}
}
+```
This method continues the filter chain processing upon successful validation
of the token. This would need to be replaced with your environment’s
equivalent of continuing the request or login to the app as the authenticated
user.
+```
private Subject createSubjectFromToken(JWTToken token) {
final String principal = token.getSubject();
@SuppressWarnings("rawtypes")
@@ -493,14 +531,17 @@ This method continues the filter chain p
javax.security.auth.Subject subject = new
javax.security.auth.Subject(true, principals, emptySet, emptySet);
return subject;
}
-
+```
This method takes a JWTToken and creates a Java Subject with the principals
expected by the rest of the Knox processing. This would need to be implemented
in a way appropriate for your operating environment as well. For instance, the
Hadoop handler implementation returns a Hadoop AuthenticationToken to the
calling filter which in turn ends up in the Hadoop auth cookie.
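A self-contained sketch of the Subject construction is shown below; a minimal `Principal` implementation stands in for Knox's `PrimaryPrincipal`, which is not assumed to be on the classpath, and the method name is illustrative.

```java
import java.security.Principal;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import javax.security.auth.Subject;

public class SubjectSketch {
  // Stand-in for Knox's PrimaryPrincipal.
  static final class SimplePrincipal implements Principal {
    private final String name;
    SimplePrincipal(String name) { this.name = name; }
    @Override public String getName() { return name; }
  }

  static Subject createSubjectFromPrincipal(String principalName) {
    Set<Principal> principals = new HashSet<>();
    principals.add(new SimplePrincipal(principalName));
    // read-only Subject with no public or private credentials
    return new Subject(true, principals, Collections.emptySet(), Collections.emptySet());
  }

  public static void main(String[] args) {
    Subject subject = createSubjectFromPrincipal("guest");
    System.out.println(subject.getPrincipals().iterator().next().getName());
  }
}
```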
```
}
```
#### Token Signature Validation
The following is the method from the Hadoop handler implementation that
validates the signature.
```
/**
 * Verify the signature of the JWT token in this method. This method depends
 * on the public key that was established during init based upon the
 * provisioned public key. Override this method in subclasses in order to
 * customize the signature verification behavior.
 * @param jwtToken the token that contains the signature to be validated
 * ...
 */
// ...
  }
  return valid;
}
```
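The essence of that check can be demonstrated with plain JDK primitives. This sketch round-trips a signature using a throwaway RSA key pair rather than the provisioned KnoxSSO key; `SHA256withRSA` here corresponds to the JWT `RS256` algorithm, and the sample signing input is an arbitrary placeholder.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class JwtSigSketch {
  // Signs and then verifies the given bytes with a freshly generated RSA
  // key pair, standing in for the provisioned public key used by the handler.
  static boolean signAndVerify(byte[] signingInput) throws Exception {
    KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
    gen.initialize(2048);
    KeyPair pair = gen.generateKeyPair();

    Signature signer = Signature.getInstance("SHA256withRSA");
    signer.initSign(pair.getPrivate());
    signer.update(signingInput);
    byte[] signature = signer.sign();

    Signature verifier = Signature.getInstance("SHA256withRSA");
    verifier.initVerify(pair.getPublic());
    verifier.update(signingInput);
    return verifier.verify(signature);
  }

  public static void main(String[] args) throws Exception {
    // A JWT signature covers "<base64url header>.<base64url payload>".
    byte[] input = "eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJndWVzdCJ9"
        .getBytes(StandardCharsets.US_ASCII);
    System.out.println(signAndVerify(input));
  }
}
```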
#### Hadoop Configuration Example
The following resembles the configuration in the Hadoop handler
implementation. It is now obsolete but remains in the proper spirit of
HADOOP-11717 (Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth).
```
<property>
  <name>hadoop.http.authentication.type</name>
  <value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
</property>
```
This is the handler classname in Hadoop auth for JWT token (KnoxSSO) support.
```
<property>
  <name>hadoop.http.authentication.authentication.provider.url</name>
  <value>http://c6401.ambari.apache.org:8888/knoxsso</value>
</property>
```
The above property is the SSO provider URL that points to the knoxsso endpoint.
```
<property>
  <name>hadoop.http.authentication.public.key.pem</name>
  <value>MIICVjCCAb+gAwIBAgIJAPPvOtuTxFeiMA0GCSqGSIb3DQEBBQUAMG0xCzAJBgNV
  BAYTAlVTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MQ8wDQYDVQQKEwZI
  YWRvb3AxDTALBgNVBAsTBFRlc3QxIDAeBgNVBAMTF2M2NDAxLmFtYmFyaS5hcGFj
  aGUub3JnMB4XDTE1MDcxNjE4NDcyM1oXDTE2MDcxNTE4NDcyM1owbTELMAkGA1UE
  ...
  wUpEdVKzOQd4V7vRxpdANxtbG/XXrJAAcY/S+eMy1eDK73cmaVPnxPUGWmMnQXUi
  TLab+w8tBQhNbq6BOQ42aOrLxA8k/M4cV1A=</value>
</property>
```
The above property holds the KnoxSSO server's public key for signature
verification. Adding it directly to the config like this is convenient and is
easily done through Ambari to existing config files that take custom
properties. Config is generally protected with root access only as well, so it
is a reasonably good solution.
#### Public Key Parsing
In order to turn the PEM encoded config item into a public key, the Hadoop
handler implementation does the following in the init() method.
```
if (publicKey == null) {
  String pemPublicKey = config.getProperty(PUBLIC_KEY_PEM);
  if (pemPublicKey == null) {
    throw new ServletException(
        "Public key for signature validation must be provisioned.");
  }
  publicKey = CertificateUtil.parseRSAPublicKey(pemPublicKey);
}
```
and the CertificateUtil class is below:
```
package org.apache.hadoop.security.authentication.util;

import java.io.ByteArrayInputStream;
// ...

    return (RSAPublicKey) key;
  }
}
```