On Mon, Jun 7, 2010 at 7:35 PM, Peter Watkins <[email protected]> wrote:
> On Mon, Jun 07, 2010 at 04:38:11PM -0700, John Panzer wrote:
> > On Mon, Jun 7, 2010 at 2:27 PM, Peter Watkins <[email protected]> wrote:
> > > Do you mean that your current, centralized XAuth implementation doesn't
> > > break privacy, or that your "end game" desired XAuth would not break
> > > privacy?
> >
> > Both.
>
> Thanks for clarifying.
>
> > > > I think it would be great to have a discussion about privacy and
> > > > security aspects of XAuth. Which should start with a discussion about
> > > > what attacks we're worried about preventing, and how XAuth affects
> > > > them. As an example, there could be a security concern that knowing
> > > > that I have an active session with Google may help phishers know
> > > > which identity provider to simulate when I go to their site. Or, ....
> > >
> > > Exactly. That's why I ask about "current" vs "end game". Do you think
> > > those concerns are not significant enough to say that the current XAuth
> > > "breaks" privacy, or that the "end game" would defend against those
> > > attacks?
> >
> > I don't think they add significant risk to the status quo. Note that it
> > is today possible for a rogue RP and rogue IdP to collude to leak your
> > identity with or without XAuth. XAuth provides mechanisms to allow IdPs
> > to whitelist the RPs they'll expose information to.
>
> Whitelists again? Ugh. Chris had that nice video explaining how XAuth
> could put a few IdPs up there in a short, relevant list of choices. Now
> you're saying that to make privacy OK, IdPs have to whitelist RP sites?
> That sounds like a mess, and like a much weaker and less interesting
> approach to improving the UX than I expected. And why would the IdP be
> in the business of whitelisting? It's more likely, as I've said in other
> threads, that the end user would want to control what information was
> sent to each RP.

What makes you think their IdP wouldn't be doing this based on the user's
preferences?

> > I think that browser support would make some things easier -- perhaps
> > defending against "pretend" IdPs that use social engineering to get
> > themselves on your IdP list -- but (a) those attacks aren't privacy
> > issues and (b) they appear low value to me.
>
> What social engineering? Getting you to open a page with an XAuth iframe?
> That sounds like a relatively easy attack to carry out, esp. in the era
> of tinyurl and bit.ly.

And if successful, it gets the attacker a slightly annoying link with a big
glowing verified pointer back to their web site. I hope spammers do attempt
this; it'll be much easier to combat than other things they do today.

> > I will agree that things would be more secure if we encased every client
> > computer in concrete with no 'net connection and sank them to the bottom
> > of the ocean.
>
> Excuse me?

My somewhat flippant point was that eliminating all possible risks also
eliminates all possible usefulness.

> > > I haven't given a lot of thought to this, but it appears to me that the
> > > current XAuth setup isn't some rough draft of an oversimplified account
> > > management mechanism that could easily be extended or improved. It
> > > looks to me like a pretty mature, feature-complete system. I mean, you
> > > seem pretty tied to some very specific HTML5 tech in XAuth.
> >
> > No. I'm envisioning that browsers would directly replace the higher-level
> > JS API so there would be no postMessage() calls at all. Why would there
> > need to be? The providers just need to call setters and the relying
> > parties to call getters (or queriers).
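(For concreteness, here is roughly what I mean by setters and getters. This
is a sketch written from memory of the current xauth.js, so treat the exact
method and parameter names as illustrative rather than definitive. Note the
extend list: that is the whitelist hook I mentioned above, and it is the
obvious place for an IdP to honor per-user sharing preferences.)

    // Assumes the hub script from http://xauth.org/xauth.js is already
    // loaded on the page; names below are from memory and may not be exact.

    // IdP ("Extender") side -- the setter. The extend list limits which
    // retriever (RP) domains the hub will reveal this token to.
    XAuth.extend({
      token: "1",                       // opaque "user has a session here" hint
      expire: Date.now() + 86400000,    // expiry as a millisecond timestamp
      extend: ["rp.example.com"],       // whitelisted RPs (hypothetical domain)
      callback: function () { /* token stored in the xauth.org hub */ }
    });

    // RP ("Retriever") side -- the getter/querier. Asks the hub which of the
    // listed extender (IdP) domains have set a token for this browser.
    XAuth.retrieve({
      retrieve: ["idp.example.com"],    // hypothetical IdP domain
      callback: function (response) {
        // response lists tokens keyed by extender domain, if any were shared
        console.log(response);
      }
    });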
> Ah, looking at your (Google's) current XAuth materials, I don't get the
> sense of this changing.
>
> > > BTW, I think it's a little sad that Google is pushing this early
> > > centralized model for gaining "deployment experience" when you could
> > > bake this into Chrome and Firefox extensions (or even release custom
> > > builds of Firefox) to test the system with decentralized code.
> >
> > Sure, we could host extensions at xauth.org. And then people could
> > download them. From, um, a centralized site. How is that more
> > decentralized exactly?
>
> Users could vet the code and not worry about it changing on them the way
> it could from a SaaS site like xauth.org.

Of course regular users are not going to vet the code. And xauth.org is not
a SaaS site; it's just a stock web server. But yes, it would be better for
_security in the long run_ to have this code baked into an otherwise trusted
download (like the browser). It is not necessary to do this to start with,
and indeed it is a very bad idea while the APIs are still in flux.

> xauth.org would have no indication whatsoever that a user was interacting
> with XAuth-compatible Extenders or Receivers.

Not sure what you're saying here.

> And if you use a decent license (or write a nice spec), anyone else could
> implement their own extension. Including IE extensions; I don't know why
> I omitted IE from my list.

Absolutely -- if the OWFa isn't the stated goal for xauth, it should be.

> I'll post this on the xauth group now, too, but for instance currently the
> xauth.org site is not accessible via a https URL (at least not one that
> passes normal CN/hostname checks), so a malicious wifi hotspot operator
> could intercept traffic and do interesting things like read that shared
> LocalStorage even if all the normal RP and IdP traffic was https. Switch
> to an in-browser model and that hole, and others, disappear.

Heh. They're currently using Akamai to mitigate SPOF issues, and Akamai is
responding to SSL requests as *.akamai.net hosts. Obviously this
configuration issue will be fixed. (Note that exactly the same issues arise
when downloading extensions. JS is just a way of delivering
always-latest-version extensions to your browser.)

> -Peter
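P.S. For anyone following along who hasn't dug into the mechanism Peter is
describing: the hub today is essentially a hidden iframe served from
xauth.org whose page proxies reads and writes to its own HTML5 localStorage
via postMessage. The sketch below illustrates that general pattern only; it
is not the actual xauth.org code, and the message format is made up for the
example.

    // Simplified illustration of a postMessage/localStorage hub, as it might
    // run inside the hidden xauth.org iframe. Because localStorage is scoped
    // to xauth.org, every embedding IdP/RP page shares this one store --
    // which is also why serving the hub page over plain http on a hostile
    // network is the interception point Peter describes.
    window.addEventListener("message", function (event) {
      // A real hub must validate event.origin against each token's extend
      // (whitelist) list before answering; omitted here for brevity.
      var msg = JSON.parse(event.data);
      if (msg.cmd === "extend") {
        localStorage.setItem(msg.domain, JSON.stringify(msg.token));
      } else if (msg.cmd === "retrieve") {
        var reply = {};
        msg.domains.forEach(function (d) {
          var stored = localStorage.getItem(d);
          if (stored) { reply[d] = JSON.parse(stored); }
        });
        event.source.postMessage(JSON.stringify({ tokens: reply }), event.origin);
      }
    }, false);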
_______________________________________________
specs mailing list
[email protected]
http://lists.openid.net/mailman/listinfo/openid-specs
