> -----Original Message-----
> From: Blaine Cook [mailto:[email protected]]
> Sent: Thursday, May 27, 2010 6:49 PM
> To: Brian Eaton
> Cc: Eran Hammer-Lahav; [email protected]; OAuth WG ([email protected])
> Subject: Re: [OAUTH-WG] FW: Duplicating request component in an HTTP
> authentication scheme
>
> On 28 May 2010 02:21, Brian Eaton <[email protected]> wrote:
> > OAuth 1.0 was unusual in that it required that the server match a hash
> > of the URL, rather than the real URL. It's an extra layer of
> > indirection and complexity. It doesn't improve security.
>
> To be more precise, OAuth 1.0 required that the server match a normalised
> form of the URL.
No, it doesn't.
OAuth 1.0 requires the server to *independently* normalize the received
request. There is no duplication of data and there is nothing to match. There
are limitations in the way the URI is normalized (parameter order is not
preserved, for example), but the server works with the actual HTTP request
bits.
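(To make that concrete, here is a rough Python sketch of a server rebuilding
the 1.0 signature base string from the raw request bits alone. The function
and parameter names are mine, for illustration only, not spec text:)

    from urllib.parse import quote, parse_qsl

    def base_string(method, scheme, host, port, path, query, oauth_params):
        # Lowercase scheme/host; omit default ports, per the 1.0 rules.
        authority = host.lower()
        if (scheme, port) not in (("http", 80), ("https", 443)):
            authority += ":%d" % port
        base_uri = "%s://%s%s" % (scheme.lower(), authority, path)
        # Gather the query and oauth_* parameters (minus the signature
        # itself), percent-encode names and values, then sort -- this is
        # where the original parameter order is deliberately thrown away.
        params = parse_qsl(query, keep_blank_values=True)
        params += [(k, v) for (k, v) in oauth_params
                   if k != "oauth_signature"]
        pairs = sorted((quote(k, safe=""), quote(v, safe=""))
                       for (k, v) in params)
        normalized = "&".join("%s=%s" % kv for kv in pairs)
        return "&".join(quote(s, safe="")
                        for s in (method.upper(), base_uri, normalized))

Every input above comes straight off the wire; nothing the client signed is
echoed back to the server for comparison.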
> You're absolutely correct that it doesn't improve security
> [over matching the URL]
I'm not sure I get what you mean, but having to compare what was signed with
what was received adds complexity and a potential security risk. There is very
little room for mismatch in 1.0 and in the current 2.0 text.
>, but it *is* more secure than either not proving that
> the token bearer provided the URL in the first place or having the client and
> server match potentially different versions of the URL.
>
> This is a problem of leaky abstractions: if HTTP was used in a way such that
> the client unequivocally asserted "This: {x} is the unabridged HTTP URL that I
> am requesting", and such that {x} was presented untouched to the service
> handling the request, then we wouldn't have to worry about normalisation.
Between the request URI and the Host header, you have pretty much everything
you need. The only thing missing is the scheme, which can probably be ignored
since the port number is being signed. Some poorly written platforms might not
expose the raw HTTP request, but that doesn't mean HTTP fails to give you
everything you need to construct the absolute request URI.
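(Again a sketch in Python; the is_tls flag stands in for however a given
platform signals the scheme:)

    def absolute_uri(request_target, host_header, is_tls):
        # Rebuild the absolute request URI from what HTTP actually puts
        # on the wire: the request-target plus the Host header.
        scheme = "https" if is_tls else "http"
        default = 443 if is_tls else 80
        host, _, port = host_header.partition(":")  # IPv6 literals aside
        port = int(port) if port else default
        if port == default:
            return "%s://%s%s" % (scheme, host.lower(), request_target)
        return "%s://%s:%d%s" % (scheme, host.lower(), port,
                                 request_target)

    # absolute_uri("/resource?x=1", "Example.COM:8080", False)
    #   -> "http://example.com:8080/resource?x=1"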
> As it stands, getting access to the raw request URL is relatively difficult in
> many environments that handle HTTP requests, and even more difficult to
> obtain from HTTP client libraries, since the actual request URI is often
> constructed in a private method at the last moment before a request is
> actually made.
>
> Which is all to say that it is indeed complex, but much of that complexity is
> a result of HTTP libraries trying to hide complexity from users. I'd echo Roy's
> assertion that as library support improves, approaches to URL normalisation
> will become hidden behind the same layers of abstraction as constructing
> query strings and request URIs are today.
I'm willing to limit signatures to platforms that expose the raw bits on the
client and server sides, or that implement native OAuth signature support. This
is, after all, being labeled an advanced feature...
EHL
_______________________________________________
OAuth mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/oauth