On Tue, Dec 9, 2008 at 11:54 AM, John Hayes <[EMAIL PROTECTED]> wrote:
> Another problem with RFC 2616 (call this 3a) is that it says the byte order
> of the hash depends on the byte order in the content type (so uploading a
> JPEG is big-endian while text is undefined). This standard should specify
> little-endian.

I think you're misreading RFC 2616.  The definitions of Content-MD5
and of the entity-body do not depend on byte order: MD5 operates on the
octet stream of the entity-body, so there is no byte order to specify.

   The MD5 digest is computed based on the content of the entity-body,
   including any content-coding that has been applied, but not including
   any transfer-encoding applied to the message-body. If the message is
   received with a transfer-encoding, that encoding MUST be removed
   prior to checking the Content-MD5 value against the received entity.

As an example, consider a request with a chunked transfer-encoding and
a gzip content-encoding.  The hash is taken over the gzipped octets;
the chunked transfer-coding is stripped before the value is checked.
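
To make that concrete, here's a rough sketch (Python, not from the
draft; the payload is just an illustration) of how a sender would
produce the Content-MD5 value for that request.  The chunked
transfer-coding never enters into it, since the receiver removes it
before verifying.

    import base64
    import gzip
    import hashlib

    plaintext = b'{"greeting": "hello"}'      # application payload
    entity_body = gzip.compress(plaintext)    # Content-Encoding: gzip applied

    # Content-MD5 is the base64 encoding of the 128-bit digest of exactly
    # these octets; no byte order, charset, or transfer-coding is involved.
    digest = hashlib.md5(entity_body).digest()
    print("Content-MD5:", base64.b64encode(digest).decode("ascii"))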

> This really came out of the example where you demonstrated how an entity
> hash would appear in a base string, however this example in practice isn't
> useful because it's not actually signed with any OAuth fields. Maybe a
> complete example would help.

Ah, got it, I can see how the example might imply that the base string
is fundamentally different from the one in OAuth 1.0.  I'll fix that up
in the next draft.

>> I agree normalizing HTTP headers is tricky.  Given that it provides
>> negligible security benefit and is likely to cause interoperability
>> problems, we shouldn't do it.
>
> The normalization isn't for the benefit of security it's for
> interoperability. Normalization usually reduces security by creating
> accidental iso-representations,

Sorry, my argument was unclear: I don't think we should include any
HTTP headers in the signature, because doing so creates opportunities
for interoperability errors without significantly improving security.

> ... we can't reliably depend on every proxy
> on the internet (and every web server API) providing an exactly byte-wise
> representation of what was transmitted in headers.

Every transparent proxy on the internet will leave the byte-wise
representation of the entity body intact.  See the definition of
transparent proxy in section 1.3 of RFC 2616.  Section 13.5.2 also
describes in detail what we can expect of transparent proxies in terms
of content encoding changes.  The language there is very strict:
transparent proxies do not muck with content encodings.

Some web server APIs do indeed change content encodings or charset
encodings before forwarding content to application code.  However,
most of those web servers have alternative APIs that deal with the raw
entity body and leave content encodings to application code.
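
For instance, here's a minimal sketch of what I mean, assuming a plain
WSGI entry point (any servlet/CGI-style raw-stream API looks much the
same).  The server strips any transfer-coding, but in my experience the
content-coded octets come through untouched, which is all a body
signature needs.

    import hashlib

    def application(environ, start_response):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        raw_body = environ["wsgi.input"].read(length)  # literal entity-body octets
        # Digest of the raw octets; compare it against the value the
        # client signed (comparison elided in this sketch).
        body_digest = hashlib.md5(raw_body).digest()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]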

If a web server really doesn't let an application see the literal
bytes of a request body (and some of the more rigid frameworks don't),
then those applications aren't going to be able to check signatures on
bodies.  I'm not overly fussed about that.  The reason body signing
got left out of the OAuth 1.0 spec was that no one could guarantee
perfect compatibility everywhere.  That's a good decision for the core
spec, but for an optional extension we should not let perfect be the
enemy of good.

Cheers,
Brian
