On Wed, Apr 29, 2009 at 11:05 PM, Blaine Cook <[email protected]> wrote:
>
> On Wed, Apr 29, 2009 at 1:58 PM, Dossy Shiobara <[email protected]> wrote:
>>
>>> What if, in the case of "Login with Twitter", the "identity" of the
>>> user logging in is a random cookie string?
>>
>> As long as it's random every time - i.e., a nonce.
>
> It's not - on the consumer side, a persistent cookie is the user's
> only identifier. They prove that they own a given twitter account by
> using OAuth.
>
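If I read this right, the consumer-side flow would be roughly the sketch
below. This is only my own minimal reading of Blaine's description, not
Twitter's or any consumer's actual code: the names are made up, the store is
an in-memory dict, and the OAuth dance itself is stubbed out.

import secrets

# Consumer-side state: maps the random persistent cookie value to the
# OAuth access token obtained for the Twitter account the user proved
# ownership of. (A real consumer would keep this in a database.)
cookie_to_token = {}

def issue_persistent_cookie():
    """The consumer's only identifier for the user: a random cookie string."""
    return secrets.token_urlsafe(32)

def complete_oauth_dance():
    """Placeholder for the OAuth 1.0a flow (request token -> user
    authorization at the SP -> access token). Assumed, not shown."""
    return {"oauth_token": "access-token", "oauth_token_secret": "token-secret"}

def login_with_twitter(cookie_value=None):
    # First visit: mint the random persistent cookie.
    if cookie_value is None:
        cookie_value = issue_persistent_cookie()
    # The user proves they control a given Twitter account via OAuth;
    # the consumer then ties that proof to the persistent cookie.
    access_token = complete_oauth_dance()
    cookie_to_token[cookie_value] = access_token
    return cookie_value

# Later requests identify the user only by the cookie; the stored access
# token is what lets the consumer act against the SP on the user's behalf.
cookie = login_with_twitter()
assert cookie in cookie_to_token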
So the persistent cookie is the user's identifier, and as long as there is a
way for the user to know that this persistent-cookie user is him at the
consumer site, and he grants permissions to that persistent-cookie user at
the SP, then that is OK. It is still Grant (S:V:Data to C:V). (Well, sort of,
for data that is meant to be broadcast anyway. For sensitive data I would be
more careful; e.g., it would probably not pass the FSA criteria.)

But the current model seems to be Grant (S:V:Data to C:*) ...

I suppose the signed callback would be OK as a patch for the last
vulnerability that was identified, but I feel the discussion has now opened
up issues bigger than that.

The other approach is to make it clear that OAuth is Grant (S:V:Data to C:*),
so that users are fully aware of the consequences. That would keep the
problem rather contained. Perhaps that is what is needed, instead of bolting
on more security.

But wait: such a policy would not pass Japanese privacy law. The purpose and
place of use are not specific enough to be legal.

=nat

>> The signed callback approach only closes the security problem we face
>> *right now* if and only if ALL consumers maintain perfect secrecy of the
>> consumer key and secret. If the SP allows even one consumer to use OAuth
>> to gain access to its resources and that consumer is compromised, the
>> signed callback approach does NOT close the security problem.
>
> If the consumer is compromised, and holds access tokens and secrets,
> then yes, all bets are off. If the consumer key and secret are stolen
> from the consumer, then yes, phishing attacks become much easier
> (until said theft is discovered and the consumer key is disabled).
> This isn't a problem we can solve, since the real problem is that the
> consumer was compromised through another means in the first place.
>
> In the case of applications that are distributed to end users, this
> becomes a DRM problem and not one we can solve without user education
> and due signaling and out-of-band trust metrics on the service
> provider's side.
>
> b.

--
Nat Sakimura (=nat)
http://www.sakimura.org/en/
