> -----Original Message-----
> From: Brian Eaton [mailto:[email protected]]
> Sent: Thursday, May 27, 2010 8:59 PM
> To: Blaine Cook
> Cc: Eran Hammer-Lahav; [email protected]; OAuth WG ([email protected])
> Subject: Re: [OAUTH-WG] FW: Duplicating request component in an HTTP
> authentication scheme
> 
> On Thu, May 27, 2010 at 6:48 PM, Blaine Cook <[email protected]> wrote:
> > On 28 May 2010 02:21, Brian Eaton <[email protected]> wrote:
> >> OAuth 1.0 was unusual in that it required that the server match a
> >> hash of the URL, rather than the real URL.  It's an extra layer of
> >> indirection and complexity.  It doesn't improve security.
> >
> > To be more precise, OAuth 1.0 required that the server match a
> > normalised form of the URL. You're absolutely correct that it doesn't
> > improve security [over matching the URL], but it *is* more secure than
> > either not proving that the token bearer provided the URL in the first
> > place or having the client and server match potentially different
> > versions of the URL.
> 
> Cool.  Glad we can put Roy's security concern to rest, at least.

I disagree. Your signature proposal doesn't make matching any easier; it just 
moves the "canonicalization problem" to the server side. You have flipped the 
problem: the client gets much simpler, but the server becomes potentially less 
secure (as a likely result of poor implementation).

The server has to compare the HTTP request method, as well as the host, port, 
scheme, and request URI, with the signed blob. Within the request URI, the 
server has to compare the path and query parameters, which means it needs 
rules about how to perform this comparison. Does parameter order matter? Case 
sensitivity? Duplicate parameter names? Percent-encoding?

Starting to sound familiar?
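
To make this concrete, here is a minimal sketch (mine, not from either 
proposal) of three URIs that many HTTP stacks treat as the same resource, yet 
no two of which compare equal as raw strings:

    # Illustrative only: each URI below can refer to the same resource
    # (host case, default port, parameter order, and percent-encoding of
    # an unreserved character all vary), but naive comparison fails.
    candidates = [
        "http://Example.COM:80/photos?size=small&file=vacation.jpg",
        "http://example.com/photos?file=vacation.jpg&size=small",
        "http://example.com/photos?file=vacation%2Ejpg&size=small",
    ]
    signed = candidates[0]
    for received in candidates[1:]:
        print(signed == received)   # False both times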

Comparing two URIs is notoriously tricky, which means servers will need to 
either allow loose comparison and risk security holes, or require a very 
strict comparison (such as a case-sensitive string comparison of the raw 
request URI to the signed request URI, minus the scheme, host, and port). In 
the strict case, the client request will often fail because the URI doesn't 
match (for the same reasons 1.0 jumps through hoops).
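
A sketch of the strict variant (my own code, with illustrative values) and 
where it breaks:

    # Hypothetical strict verifier: byte-for-byte match of the raw
    # request-target (path + query) against what the client signed.
    def strict_match(signed_target: str, raw_target: str) -> bool:
        return signed_target == raw_target

    # The client signed this...
    signed = "/photos?file=vacation.jpg&size=small"
    # ...but an intermediary re-encoded the '.' before the server saw it.
    received = "/photos?file=vacation%2Ejpg&size=small"
    print(strict_match(signed, received))  # False: legitimate request fails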

The only way to avoid all of this is for the server to ignore the HTTP request 
and only care about what is signed. This means that a client can make an HTTP 
GET request with a signed POST request in the authentication header, bypassing 
routing security (Roy's point).
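
A sketch of that failure mode (my own, not code from your proposal), assuming 
a verifier that dispatches on the signed blob and never looks at the actual 
request line:

    import json

    # Hypothetical broken verifier: signature checking elided; the bug is
    # that it routes on claims["method"] and ignores actual_method.
    def route(actual_method: str, signed_blob: str) -> str:
        claims = json.loads(signed_blob)
        return "write path" if claims["method"] == "POST" else "read path"

    # Attacker sends a GET (allowed by the front-end) carrying a validly
    # signed blob that claims POST.
    print(route("GET", '{"method": "POST", "uri": "/admin/delete"}'))
    # -> "write path": the GET/POST routing distinction is bypassed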

Your proposal makes it extremely tempting to cut corners (and when you do, yes, 
it is significantly simpler than 1.0).

> I think we're going to get some real data on which approach is easier soon. =)

I didn't know 'easier' was a guiding principle in security design. Someone 
should tell the TLS/SSL folks. :-)

---

To be clear, I am not completely against your proposal.

But it has a fundamental design flaw in its duplication of HTTP request data. 
Take the HTTP bits out of the JSON structure and I'm completely supportive 
(i.e., the signature base string becomes a simple combination of the HTTP bits 
plus the provided base64-encoded JSON blob, or something like that). 
Otherwise, it needs to include specific rules on how the server MUST validate 
the request by comparing the signed bits with the actual request (and those 
rules have to be simpler than the OAuth 1.0 signature base string and the 
current text in 2.0).
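
Something along these lines (my own sketch; the field choices and separators 
are illustrative, not a wire format):

    import base64, hashlib, hmac, json

    # The HTTP bits stay outside the JSON blob; the base string is just
    # their concatenation with the base64-encoded blob, so there is
    # nothing to duplicate and nothing for the server to re-compare.
    def base_string(method: str, host: str, port: int, target: str,
                    blob: dict) -> str:
        encoded = base64.b64encode(json.dumps(blob).encode()).decode()
        return "\n".join([method.upper(), host.lower(), str(port),
                          target, encoded])

    def sign(key: bytes, base: str) -> str:
        digest = hmac.new(key, base.encode(), hashlib.sha256).digest()
        return base64.b64encode(digest).decode()

    base = base_string("GET", "example.com", 80, "/photos?size=small",
                       {"nonce": "abc123", "issued_at": 1275000000})
    print(sign(b"shared-secret", base))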

Your proposal has parts I really like, such as the ability to sign arbitrary 
data beyond just the HTTP request bits. This makes it useful for signing 
identity assertions, server responses, etc. But before we get to that, we need 
to address the duplication problem.

Everything we do is a tradeoff between security and simplicity, but that only 
works when we make an accurate analysis. I haven't seen anything in your 
response to Roy that justifies your conclusion that we can put his concern 
aside.

EHL


