I don't know if I've misunderstood things, but just to clarify: there
*isn't* an issue of an attacker getting hold of an actual access token
itself, because the request for the access token is signed with the
request token secret (which an attacker has no access to).

But basically, I've tried to break down the problem into a couple of
basic ideas about trust.

1) We need to make sure that, if present, the oauth_callback
parameter was generated by the consumer. This can be done easily by
including an additional oauth_callback_signature parameter, carrying
the callback value signed with the consumer secret and request token
secret; that's the whole point of HMAC. Perhaps an additional
oauth_callback_signature_method parameter could be included to name
the method (e.g. HMAC-SHA1 or RSA-SHA1, etc).
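
To make that concrete, here's a minimal Python sketch of the consumer
side. The base string (just the callback value) and the parameter
names are this proposal's assumptions, not anything in the current
spec; the key construction mirrors OAuth Core 1.0 signing:

    import base64
    import hashlib
    import hmac
    import urllib.parse

    def sign_callback(callback, consumer_secret, request_token_secret):
        # Key as in OAuth Core 1.0 signing: the two secrets,
        # percent-encoded and joined with '&'. The base string here is
        # just the callback value; a real revision would pin this down.
        key = "&".join(
            urllib.parse.quote(s, safe="")
            for s in (consumer_secret, request_token_secret)
        ).encode("utf-8")
        digest = hmac.new(key, callback.encode("utf-8"),
                          hashlib.sha1).digest()
        return base64.b64encode(digest).decode("ascii")

    # Hypothetical extra parameters the consumer would send:
    callback = "http://consumer.example.com/ready"
    extra = {
        "oauth_callback": callback,
        "oauth_callback_signature":
            sign_callback(callback, "consumersecret", "requestsecret"),
        "oauth_callback_signature_method": "HMAC-SHA1",
    }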

2) The following sentence is a monster: we need to ensure that the
user who initiated the consumer's request for the request token is the
same as the one who's authorizing it on the provider. This is a much
harder problem to solve, because it means we have to bridge the gap
between user accounts on the consumer and provider, whilst making any
solutions feasible for non-HTTP/HTML systems. Unfortunately, at first
glance, it seems an intractable problem, one that cannot be solved
without a fair amount of work on the user's part. This is the real
'social engineering' problem; a PIN probably won't work, because if I,
as a malicious user, can convince someone to authorize a service to
use their account, then I can probably just give them a PIN and ask
them to enter it. However, there are still ways of reducing
uncertainty, which is what any protocol revisions or provider
recommendations should tackle.

So how can we ensure that the request token user is the same as the
authorization user? We can begin by making sure that the request token
is authorized within a short period of being issued. This can be done
purely on the provider side, but since some consumers may wish to
cache request tokens, a fixed provider-side window might not be
optimal. So a consumer could stamp its user authorization URL with a
nonce and timestamp, and sign the whole thing, enabling the provider
to verify that the user auth URL was presented within a sensible
amount of time. For example:

 
http://provider.example.com/oauth/authorize?oauth_token=foo&oauth_callback=bar&oauth_timestamp=1010011010&oauth_nonce=4237&oauth_signature=blahblah&oauth_signature_method=HMAC-SHA1

So it's a bit of a beast, but it clears the hurdle of ensuring that
authorization requests were made within a sensible amount of time. In
fact, any important parameters in the user auth URL should probably
also be signed by the consumer, so that attackers cannot spoof
authorization parameters (which is no doubt an important issue in and
of itself).
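
Here's a rough Python sketch of how a consumer might build such a
signed URL, and how a provider might check its freshness. The
endpoint, the 300-second window, and the base-string normalization
(loosely modelled on OAuth Core 1.0 signing) are all assumptions that
a real revision would have to pin down:

    import base64
    import hashlib
    import hmac
    import time
    import urllib.parse
    import uuid

    AUTH_ENDPOINT = "http://provider.example.com/oauth/authorize"
    MAX_URL_AGE = 300  # seconds; provider policy, assumed here

    def sign(base_string, consumer_secret, token_secret):
        key = "&".join(
            urllib.parse.quote(s, safe="")
            for s in (consumer_secret, token_secret)
        ).encode("utf-8")
        digest = hmac.new(key, base_string.encode("utf-8"),
                          hashlib.sha1).digest()
        return base64.b64encode(digest).decode("ascii")

    def build_signed_auth_url(token, callback, consumer_secret,
                              token_secret):
        # Every parameter that matters is covered by the signature,
        # so none of them can be spoofed later.
        params = {
            "oauth_token": token,
            "oauth_callback": callback,
            "oauth_timestamp": str(int(time.time())),
            "oauth_nonce": uuid.uuid4().hex,
            "oauth_signature_method": "HMAC-SHA1",
        }
        base = "&".join(
            "%s=%s" % (k, urllib.parse.quote(v, safe=""))
            for k, v in sorted(params.items()))
        params["oauth_signature"] = sign(base, consumer_secret,
                                         token_secret)
        return AUTH_ENDPOINT + "?" + urllib.parse.urlencode(params)

    def url_is_fresh(oauth_timestamp):
        # Provider side: reject auth URLs presented too long after
        # they were generated.
        return time.time() - int(oauth_timestamp) <= MAX_URL_AGE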

In addition, as has already been suggested, the provider could be
configured such that when a consumer tries to exchange a request token
for an access token and that request fails, the request token is put
on a blacklist so that it will not be usable again, whether for
authorization or exchange. On top of that, a timeout could be imposed
on the exchange, such that if the exchange is requested a long time
after the token is authorized, it fails (a sketch of this
provider-side logic follows the walkthrough below). So what might
happen is this:

    * Malicious User (MU) visits consumer site or uses consumer
desktop app and gets a user auth URL with a request token.
    * MU gets Ignorant User (IU) to click on user auth URL and
authorize the third-party application.
    * If the consumer is a web app, then the user will be redirected
to the callback URL (which has been signed, so it cannot be spoofed).
The consumer then gets a request at the callback URL, and tries to
exchange the request token for an access token. This succeeds, but
one of two things should then happen: the consumer should either
realise that the request token was fetched for another user and
signal an error, or it should associate the new access token with IU
instead of MU. Any other behaviour would be unacceptable, but there's
nothing anyone can do to stop evil or ignorant consumer
implementations (a consumer could, for example, decide to share its
entire database of access tokens and its consumer key/secret pair).
Luckily, this is where OAuth helps: access can be revoked.
    * If the consumer is a desktop app, then a few things might
happen. MU could start brute-forcing the timing of the token
exchange, which would lead to one of three things:
        1) They would start too early and invalidate the request
token.
        2) They would start too late and the request token would time
out.
        3) They would ask for an exchange at just the right time and
get access.
      The last of these is the dangerous one, but also the least
likely to happen. An exchange timeout of 5 seconds, for example,
would mitigate most attacks; a good idea would be for the provider to
keep the timeout secret, to prevent people from getting in towards
the end of the Goldilocks period. Don't forget that it will also take
some time for MU to convince IU to click the authorization link.
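
As promised above, here's a minimal Python sketch of the
provider-side blacklist-and-timeout logic. The in-memory structures
and the issue_access_token helper are hypothetical; a real provider
would persist this state:

    import time

    EXCHANGE_TIMEOUT = 5  # seconds; the provider keeps this secret

    blacklisted = set()   # request tokens burned forever
    authorized_at = {}    # request token -> time user authorized it

    def exchange_request_token(token, signature_valid):
        # Swap an authorized request token for an access token. Any
        # failed attempt burns the token, and even a valid attempt
        # must arrive within the secret window after authorization.
        fresh = (
            token in authorized_at
            and time.time() - authorized_at[token] <= EXCHANGE_TIMEOUT
        )
        if token in blacklisted or not signature_valid or not fresh:
            # One strike: no further authorization or exchange.
            blacklisted.add(token)
            authorized_at.pop(token, None)
            return None
        del authorized_at[token]
        return issue_access_token(token)  # hypothetical helper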

Alright, so that's a less-than-brief summary of my ideas on reducing
the severity of this issue; at its heart it's not one that can be
easily solved by just introducing a few more parameters, because it
has to do with people, not computers. Hope someone can glean some
value from this.

Regards,
Zack
