From: Eran Hammer-Lahav
>* If HTTPS is to remain optional for protected resource requests, a 
>signature-based alternative is required.

I agree.  If we're going to have signatures for this reason, we should have at 
least one use case on Zeltsan's use case list 
(https://datatracker.ietf.org/doc/draft-zeltsan-use-cases-oauth/) where 
circumstances rule out https.  The non-https and non-signature alternative 
that's in the present draft of the spec should go away.  (I had to look a 
little while to find that alternative.  It's mentioned at "the authorization 
server SHOULD [as opposed to MUST] require the use of a transport-layer 
security mechanism such as TLS when sending requests to the end-user 
authorization endpoint" at page 15 of 
http://tools.ietf.org/html/draft-ietf-oauth-v2-10, perhaps in addition to other 
places.)

>Discovery as well as future use cases for OAuth will create the need for 
>clients to authenticate using tokens against
>servers that are not hard-coded into the application. When this happens, 
>bearer tokens are simply too dangerous.
>Any solution that is based on sending secrets in the clear (that is, whoever 
>receives them on the other side can use
>them, legally or not) is going to cause secrets to leak.

You're probably better off converting those future use cases (including 
discovery) into present use cases, that is, making them specific and putting 
them on the use case list.  Otherwise your guesses about what will be requested 
in the future stay implicit, so there's no way to review whether a proposed 
interaction -- with or without signatures -- is good enough to support them, 
and no way to get agreement on whether those guesses are reasonable.

>I don't need to lay out the exact exploit, only to point to the past 20 years 
>and show that every single password-based solution has been proven broken, 
>even when used over secure channels.

I think it's better to describe the exploit.   That would make it possible to 
verify that a proposed scheme with signatures avoids the exploit.  Furthermore, 
the present version of the spec describes bearer tokens, so unless you're 
planning to eliminate that as a possibility, users of bearer tokens need to 
know about the exploit.   Exploits are use cases.  (In general, the exploit 
you're talking about is where the access token is disclosed but the private key 
that would be used for the signatures is not disclosed, right?  So maybe the 
former is in a database and the latter is in a file, and the database password 
is guessable and the network is misconfigured so random people can read the 
database.)
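
To make that concrete, here's a rough Python sketch (not any particular 
proposal from the signature drafts; the names and key handling are made up) of 
why that asymmetry matters: a token lifted from the database is replayable by 
itself if it's a bearer token, but useless for forging a signed request without 
the key that stayed in the file.

import hmac, hashlib

access_token = "tok-8f3a"         # lives in the database; assume it leaks
signing_key = b"long-random-key"  # lives in a file; assume it does not leak

def sign_request(method, url, token, key):
    # Bind the token to one specific request with an HMAC over its parts.
    base_string = "&".join([method, url, token])
    return hmac.new(key, base_string.encode(), hashlib.sha256).hexdigest()

# Legitimate client: has both the token and the key.
good = sign_request("GET", "https://photos.example/album/1",
                    access_token, signing_key)

# Attacker who read the database: has the token but can only guess the key,
# so the resource server rejects the signature.
forged = sign_request("GET", "https://photos.example/album/1",
                      access_token, b"guess")
assert good != forged

# With bearer tokens, by contrast, the stolen value alone is enough:
bearer_header = "Authorization: Bearer " + access_token  # replayable as-is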

I agree that passwords are a chronic problem, but the examples that come to 
mind have happened when people were creating low-entropy passwords or typing 
them in while being spied upon, which is not what we're talking about here.  Do 
you have a specific paper in mind with evidence for "every single 
password-based solution [used in the last 20 years] has been proven broken, 
even when used over secure channels"?  Can you recall instances of this 
happening with secure channels and high-entropy passwords that humans never 
type?

The alternatives with asymmetric cryptography aren't perfect either -- SSH was 
once generating weak key pairs 
(http://nic.phys.ethz.ch/news/1210776776/index_html), and some RSA 
implementations were insecure when the exponent was too low 
(http://www.links.org/?p=136), and then as you know there was the OAuth session 
fixation attack 
(http://hueniverse.com/2009/04/explaining-the-oauth-session-fixation-attack/).

It's hard to get fresh cryptographic protocols correct.  Bearer tokens can 
conceivably be stolen, but https has been in use since 1994.  It's not obvious 
to me which path is safer.  Decreasing the number of alternatives increases the 
security of the remaining alternatives because they'll be checked more 
carefully, so keeping all conceivable alternatives isn't the solution either.  
(IMO the present spec has already gone too far in the direction of keeping all 
conceivable alternatives.)

Do you want to support delegation of work?  That is, the user logs into server 
A, uses OAuth to give A the right to access server B on the user's behalf, and 
then server A delegates to server C the work of accessing server B?  That 
appears to need A to have portable credentials it can give to C for this 
specific authority.  If A has a private key that it has to use in conjunction 
with the credentials it has, then it can't delegate work to C unless it gives C 
its private key, which is too much power for C.
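
For what it's worth, here's the shape of that problem in a few lines of Python; 
everything here is hypothetical and no particular token format is assumed.

token_for_B = "tok-users-photos-on-B"  # the authority the user delegated to A
a_private_key = b"known-only-to-A"     # key A must use alongside that token

def delegate_with_bearer_token():
    # A hands C exactly the credential it holds; C calls B directly.
    return token_for_B

def delegate_with_key_bound_token():
    # C can't use the token without also signing with A's key, so A must
    # either ship the key (far more power than the one delegated authority)
    # or sit in the middle and proxy every request C makes to B.
    return token_for_B, a_private_key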

Tim Freeman
Email: [email protected]
Desk in Palo Alto: (650) 857-2581
Home: (408) 774-1298
Cell: (408) 348-7536

From: [email protected] [mailto:[email protected]] On Behalf Of Eran 
Hammer-Lahav
Sent: Friday, September 24, 2010 7:44 PM
To: Yaron Goland; OAuth WG
Subject: Re: [OAUTH-WG] What's the use case for signing OAuth 2.0 requests?

These are my two:

1. Remove the need to rely solely on HTTPS

There are plenty of cases where people can't or don't want to use HTTPS. 
Clearly, the web is not all HTTPS and OAuth should be useful on the entire web. 
We are not going to settle the long debate over the cost or speed of using 
HTTPS. It doesn't matter. There are enough people who are set in their minds 
about it, and we need to offer an alternative.

From an implementation perspective, HTTP client implementations are notoriously 
broken. Not the library code, but the way developers use it. For example, most 
developers ignore exceptions about invalid or expired certificates. This is why 
the major browsers had to change their UI to make it really hard for users to 
accept bad certificates (which people still do). We keep forgetting that while 
HTTPS is properly executed in most browsers, this is not the case in most 
client applications.

I would like to see a detailed section in the specification covering everything 
a developer has to check to actually ensure their HTTPS connection is secure 
and goes to the right destination. It is nice to put the burden on HTTPS and 
point people there, but if we know developers are going to do stupid things (as 
they have done before), we have a responsibility to write a defensive protocol 
that prevents exploits when they make predictable mistakes.
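
As a rough illustration of the kind of checklist meant here (a sketch using 
Python's standard library, not spec text):

import socket, ssl

def open_verified_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    # create_default_context() enables the checks developers commonly skip:
    # certificate chain validation against trusted CAs, expiry checks, and
    # hostname verification against the certificate's subject.
    context = ssl.create_default_context()
    context.check_hostname = True            # the default, but be explicit
    context.verify_mode = ssl.CERT_REQUIRED  # never CERT_NONE
    sock = socket.create_connection((host, port))
    # server_hostname enables SNI and is what the hostname check compares.
    return context.wrap_socket(sock, server_hostname=host)

# A connection that fails any of these checks raises
# ssl.SSLCertVerificationError instead of silently proceeding -- the failure
# mode must not be "ignore the warning and go on".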

Another aspect of HTTPS is that in most cases it terminates at the entry point 
to the server deployment, which exposes the tokens internally to anyone with 
access, not just the final applications.

*** If HTTPS is to remain optional for protected resource requests, a 
signature-based alternative is required.

2. Ensure that tokens are not open to phishing attacks

Discovery as well as future use cases for OAuth will create the need for 
clients to authenticate using tokens against servers that are not hard-coded 
into the application. When this happens, bearer tokens are simply too 
dangerous. Any solution that is based on sending secrets in the clear (that is, 
whoever receives them on the other side can use them, legally or not) is going 
to cause secrets to leak.

I don't need to lay out the exact exploit, only to point to the past 20 years 
and show that every single password-based solution has been proven broken, even 
when used over secure channels. It is sad that some of the people who 
criticized WRAP for this exact reason are not taking part in this discussion 
and gave up (some due to lack of time or interest, others due to being told to 
STFU by their employer).

It is true that there are methods for using bearer tokens with discovery, but 
those I have seen all require the client to basically ask the server, after 
each new resource it encounters, whether it is ok to send the token there. This 
is very inefficient, whereas a signature solves the problem (as offered 
successfully by OpenID). The other solution is to tell the client through some 
form of policy where it is ok to send tokens (as is done with cookies), and 
that's clearly an approach guaranteed to fail.
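
As a rough sketch of what that first approach costs (the 
ask_authorization_server hook and the host cache below are made up for 
illustration):

import urllib.parse

known_safe_hosts = set()

def may_send_token_to(resource_url, ask_authorization_server):
    # Before a bearer token goes to a newly discovered host, ask the
    # authorization server whether that host is an acceptable destination.
    host = urllib.parse.urlparse(resource_url).netloc
    if host not in known_safe_hosts:
        if not ask_authorization_server(host):  # extra round trip per new host
            return False
        known_safe_hosts.add(host)
    return True

# With signed requests there is no pre-flight step: the client signs each
# request with a key the resource server can verify, so a host that receives
# the request cannot replay the token anywhere else.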

EHL



On 9/24/10 2:18 PM, "Yaron Goland" <[email protected]> wrote:
My understanding of Eran's article 
(http://hueniverse.com/2010/09/oauth-2-0-without-signatures-is-bad-for-the-web/)
 is that Eran believes that bearer tokens are not good enough as a security 
mechanism because they allow for replay attacks in discovery style scenarios. 
He then, if I understood the article correctly, argues that the solution to the 
replay attack is to sign OAuth 2.0 requests.
In http://www.goland.org/bearer-tokens-discovery-and-oauth-2-0/ I tried to 
demonstrate that in fact one can easily prevent replay attacks in discovery 
scenarios using OAuth 2.0 and bearer tokens. If the article is correct then it 
is not a requirement to introduce message signing into OAuth 2.0 in order to 
prevent the attacks that Eran identified.

So this leaves me wondering, what's the critical scenario that can't be met 
unless we sign OAuth 2.0 requests?

                Thanks,

                                                Yaron
_______________________________________________
OAuth mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/oauth
