[ https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gregory Chanan updated HADOOP-10911:
------------------------------------

    Attachment: HADOOP-10911v3.patch

Latest version of the patch.

This patch has the following changes from the previous patch:
- Adds a check that a cookie with a proper token is included with each POST 
request.  This guarantees that the cookies are being handled correctly (i.e., 
the test can't pass by simply redoing the negotiation on each request); a rough 
sketch of this kind of check is at the end of this comment.  The previous 
patch had a test for this with httpclient, but not with AuthenticatedURL.
- Changes the cookie format.  Note that with the new tests we are checking {jetty, 
tomcat} x {AuthenticatedURL, HttpClient}, so we should be pretty confident in 
any cookie format that passes all those tests.  To generate the cookie format, I used 
jetty 8.1.15.v20140411.  (NOTE: I didn't actually use the Cookie class and the 
Servlet 3.0 API; I called the helper function jetty uses to produce the format, 
sketched at the end of this comment: 
http://archive.eclipse.org/jetty/9.0.0.RC0/apidocs/org/eclipse/jetty/http/HttpFields.html#addSetCookie%28java.lang.String,%20java.lang.String,%20java.lang.String,%20java.lang.String,%20long,%20java.lang.String,%20boolean,%20boolean,%20int%29
 ).  It produced a cookie like:
{code}
Set-Cookie=hadoop.auth="u=client&[email protected]&t=kerberos&e=1409128342379&s=R6rNnd4CcMV0bNtK1dNLiJr1ivk=";Expires=Mon, 31-Aug-2026 08:36:23 GMT;HttpOnly
{code}
This is with version 0 cookies; version 1 is identical, with a Max-Age entry 
added:
{code}
Set-Cookie=hadoop.auth="u=client&[email protected]&t=kerberos&e=1409128342379&s=R6rNnd4CcMV0bNtK1dNLiJr1ivk=";Expires=Mon, 31-Aug-2026 08:36:23 GMT;Max-Age=379069291;HttpOnly
{code}
The Max-Age entry doesn't seem necessary, given that even the JDK docs say version 1 
is experimental.  Note that I kept the spaces after the ";" in our format, as 
they don't seem to affect anything and they make the cookie easier to read.  Also, 
I tested the new format against Solr on a real cluster and it passed, while the 
old format failed.
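For reference, here is roughly how that jetty helper gets called to produce the 
format above.  This is just an illustration, not code from the patch: the 
principal, expiry, and signature values are placeholders, and it assumes the 
pre-Jetty-10 HttpFields class from the javadoc link above.
{code}
import org.eclipse.jetty.http.HttpFields;

public class CookieFormatSketch {
  public static void main(String[] args) {
    HttpFields fields = new HttpFields();
    // addSetCookie(name, value, domain, path, maxAge, comment, isSecure, isHttpOnly, version)
    fields.addSetCookie("hadoop.auth",
        "u=client&p=client@EXAMPLE.COM&t=kerberos&e=1409128342379&s=R6rNnd4CcMV0bNtK1dNLiJr1ivk=",
        null,        // domain
        null,        // path
        379069291L,  // maxAge in seconds, which also drives the Expires date
        null,        // comment
        false,       // isSecure
        true,        // isHttpOnly
        0);          // version: 0 emits Expires only, 1 also emits Max-Age
    // Dump the resulting header; it comes out along the lines of
    //   Set-Cookie: hadoop.auth="u=client&...&s=...";Expires=...;HttpOnly
    System.out.println(fields.toString());
  }
}
{code}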
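And this is the flavor of the per-POST cookie check from the first bullet.  
Again, just a sketch of the idea using a hypothetical servlet, not the actual 
test code: any POST that arrives without a hadoop.auth cookie is rejected, so a 
client that only re-negotiates instead of replaying the cookie can't pass.
{code}
import java.io.IOException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PostCookieCheckServlet extends HttpServlet {
  @Override
  protected void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    boolean hasAuthCookie = false;
    Cookie[] cookies = req.getCookies();
    if (cookies != null) {
      for (Cookie c : cookies) {
        if ("hadoop.auth".equals(c.getName())
            && c.getValue() != null && !c.getValue().isEmpty()) {
          hasAuthCookie = true;
        }
      }
    }
    if (!hasAuthCookie) {
      // The client re-negotiated (or dropped the cookie) instead of replaying it.
      resp.sendError(HttpServletResponse.SC_UNAUTHORIZED,
          "POST is missing the hadoop.auth cookie");
      return;
    }
    resp.setStatus(HttpServletResponse.SC_OK);
  }
}
{code}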

> hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-10911
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10911
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.5.0
>            Reporter: Gregory Chanan
>         Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
> HADOOP-10911v2.patch, HADOOP-10911v3.patch
>
>
> I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
> unable to authenticate with servers running the authentication filter), even 
> with HADOOP-10710 applied.
> From my reading of the spec, the problem is as follows:
> Expires is not a valid directive according to the RFC, though it is mentioned 
> for backwards compatibility with the Netscape draft spec.  When httpclient 
> sees "Expires", it parses according to the Netscape draft spec, but note this 
> passage from RFC2109:
> {code}
> Note that the Expires date format contains embedded spaces, and that "old" 
> cookies did not have quotes around values. 
> {code}
> and note that AuthenticationFilter puts quotes around the value:
> https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
> So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
