Re: CORS security hole?
On Mon, Jul 16, 2012 at 11:22 PM, Henry Story henry.st...@bblfish.net wrote: On 17 Jul 2012, at 08:10, Adam Barth wrote: On Mon, Jul 16, 2012 at 11:01 PM, Henry Story henry.st...@bblfish.net wrote: I first posted this to public-webapps, and was then told the security discussions were taking place on public-webappsec, so I reposted there. On 17 Jul 2012, at 00:39, Adam Barth wrote: As I wrote when you first posted this to public-webapps: [[ I'm not sure I fully understand the issue you're worried about, but if I understand it correctly, you've installed an HTTP proxy that forcibly adds CORS headers to every response. Such a proxy will indeed lead to security problems. You and I can debate whether the blame for those security problems lies with the proxy or with CORS, but I think reasonable people would agree that forcibly injecting a security policy into the responses of other servers without their consent is a bad practice. ]]

Hmm, I think I just understood where the mistake in my reasoning is: when making a request on the CORS proxy (http://proxy.com/cors?url=http://bank.com/joe/statement) on a resource intended for bank.com, the browser will not send the bank.com credentials to the proxy, since bank.com and proxy.com are different domains. So the attack I was imagining won't work, since the proxy won't be able to use those credentials to pass itself off as the browser user. Installing an HTTP proxy locally would of course be a trojan-horse attack, and then all bets are off.

The danger might rather be the inverse, namely that if a CORS proxy is hosted by a web site used by the browser user for other authenticated purposes (say, a social web server running on a freedom box), the browser will pass the authentication information to the proxy in the form of cookies, and the proxy could then pass those on to any site the initial script was connected to.
Here using only client side certificate authentication helps, as that is an authentication mechanism that cannot be forged or accidentally passed on. --- Having said that, are there plans to get to a point where JavaScript agents could be identified in a more fine grained manner? No. Cryptographically perhaps with a WebID? That's unlikely to happen soon. Then the Origin header could be a WebID and it would be possible for a user to specify in his foaf profile his trust for a number of such agents? But perhaps I should start that discussion in another thread I'd recommend getting some implementor interest in your proposals before starting such a thread. A standard without implementations is like a fish without water. Ok, I don't really have a browser to hack on. On the other hand a few of us are working on building a CORS proxy at the read-write-web community group to enable javascript linked data agents to build up in the browser views of data distributed on the web. There may be some interesting results this brings up that we can discuss at TPAC in Lyon... :-) Building a CORS proxy for any reason other than to prototype something sounds (at first blush) like a bad idea; as Adam says, you're basically changing the security policies the web is supposed to be using. It seems like if you're having to do this you're solving the wrong problem, and maybe there's something else that should be changed? -- Dirk
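To make concrete the proxy behaviour Adam objects to above, here is a minimal Python sketch (the function and header names are illustrative, not taken from any real proxy) of a proxy that forcibly injects CORS headers into responses it does not own:

```python
def inject_cors_headers(upstream_headers: dict) -> dict:
    """Return a copy of the upstream response headers with permissive CORS
    headers forcibly added -- the practice criticized in the thread, since
    the upstream server never consented to this policy."""
    headers = dict(upstream_headers)
    headers["Access-Control-Allow-Origin"] = "*"
    headers["Access-Control-Allow-Credentials"] = "true"
    return headers

# Every response relayed through such a proxy becomes readable cross-origin,
# regardless of what policy the origin server intended.
example = inject_cors_headers({"Content-Type": "text/html"})
```

Note that the injected combination, `*` together with credentials, is exactly the pairing the CORS spec forbids a server from using itself, which is one way of seeing why injecting it on someone else's behalf is a bad idea.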
Re: Behavior Attachment Redux, was Re: HTML element content models vs. components
On Wed, Sep 28, 2011 at 4:52 PM, Ian Hickson i...@hixie.ch wrote: If an author invents a new element, it doesn't matter what it inherits from. It won't have fallback behaviour, it won't have semantics that can be interpreted by search engines and accessibility tools, it won't have default renderings, and it won't allow for validation to catch authoring mistakes. I don't see what inheritance has to do with anything here. Ian, apologies if you have answered this before and I haven't seen it, but a (fairly brief) query didn't turn up anything for me: when is it okay to create new elements? Obviously, we created a bunch for HTML 5 that don't have fallback behavior ... It seems like it would be helpful to distinguish between new elements that can be reasonably mapped onto existing elements' semantics and elements that cannot; perhaps we can agree that elements should be reused where possible, but that there should also be mechanism for defining new elements otherwise? -- Dirk
Re: Rename XBL2 to something without X, B, or L?
I like Web Components. -- Dirk On Tue, Dec 21, 2010 at 3:34 PM, Alex Russell slightly...@google.com wrote: How 'bouts a shorter version of Tab's suggestion: Web Components ? On Thu, Dec 16, 2010 at 5:59 AM, Anne van Kesteren ann...@opera.com wrote: On Thu, 16 Dec 2010 14:51:39 +0100, Robin Berjon ro...@berjon.com wrote: On Dec 14, 2010, at 22:24 , Dimitri Glazkov wrote: Looking at the use cases and the problems the current XBL2 spec is trying address, I think it might be a good idea to rename it into something that is less legacy-bound? I strongly object. We have a long and proud tradition of perfectly horrible and meaningless names such as XMLHttpRequest. I don't see why we'd ever have to change. Shadow HTML Anonymous DOm for the Web! Cause I know you are being serious I will be serious as well and point out that XMLHttpRequest's name is legacy bound as that is what implementations call it and applications are using. XBL2 has none of that. -- Anne van Kesteren http://annevankesteren.nl/
Re: CORS ISSUE-108
My recollection matches Tyler's. At one point I volunteered to work on the Security Considerations section and did a draft, but sadly got distracted by other things. I can attempt to dust that draft off and try again if that is useful. -- Dirk On Tue, Nov 23, 2010 at 3:05 PM, Tyler Close tyler.cl...@gmail.com wrote: My recollection of the status of ISSUE-108 is that CORS was going to provide functionality equivalent to that of UMP when the CORS credentials flag is false. CORS was also going to expand its Security Considerations section to explain the Confused Deputy issues, possibly by borrowing text from UMP. Are you saying that work has been completed or it will not be undertaken? The current editor's draft of CORS does mention a credentials flag, but I haven't found much detail on it. For example, what effect does it have on use of the browser's request cache? --Tyler On Wed, Nov 17, 2010 at 6:40 AM, Anne van Kesteren ann...@opera.com wrote: http://www.w3.org/2008/webapps/track/issues/108 has been open for a year and we have made little concrete progress on it unfortunately. Meanwhile, CORS is shipping, deployed and nobody is planning to take it out or down as far as I know. I think it is time to move on and go to Last Call. I am open to spending a few more days on finding a solution to this problem we can all agree with, but if we have nothing by December 1 and at that point it does not seem likely it will get anywhere we should go for a Last Call CfC (or maybe straight to a formal vote) and call it a day. -- Anne van Kesteren http://annevankesteren.nl/
Re: HTTP access control confusion
On Fri, Jul 30, 2010 at 1:45 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Thu, Jul 29, 2010 at 8:10 AM, Douglas Beck db...@mail.ucf.edu wrote: I have recently read through: https://developer.mozilla.org/En/HTTP_access_control https://wiki.mozilla.org/Security/Origin I've discussed what I've read and learned with my coworkers and there's been some confusion. I understand and appreciate the need for a security policy that allows for cross-site HTTP requests. I do not understand how Access-Control-Allow-Origin addresses usability and security concerns. The basis of our confusion: I create domain-a.com and I want to make an ajax request to domain-b.com. A preflight request is made to domain-b, and domain-b responds with whether it is safe to send the request. Does it not make more sense for me (the author of domain-a) to define the security policy of my website? I know each and every request that should be made on my site and can define a list of all acceptable content sources. If the preflight request is made to domain-a (not domain-b) then the content author is the source of authority. A more functional example (and the source of my curiosity): I work for the University of Central Florida. I am currently working on a subdomain that wants to pull from the main .edu domain. The university has yet to define an Access-Control header policy, so my subdomain is unable to read what's available on the main .edu website. Additionally, if I am working with authorized content, it would be useful for me to define/limit where cross-site requests can be made. It seems backwards that an external source can define a security policy that affects the usability of my content. As the author of your site, you *already* have complete control over where cross-site requests can be made. If you don't want to make a particular cross-site request, *just don't make that request*. On the other hand, the content source doesn't have that kind of control.
They can't prevent you from making requests to them that they don't want, or selectively allow the requests that they do want. That's where the same-origin policy (default deny all such requests) and CORS (selectively allow certain requests) come in. I suppose you might be thinking of a situation where you are allowing untrusted users to add content to your site, and you only want them to be able to link to specific other sites. Same-origin restrictions do part of this for you automatically. Most of the rest should be handled by you in the first place - if untrusted users are doing XHRs, you've got bigger problems. ~TJ

Untrusted users (or untrusted content) will probably be doing XHRs quite frequently (this is almost the definition of a widget). You just need to ensure that they're running inside a sandboxed iframe (or something like Caja) so that they can't exploit your site as easily. -- Dirk
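The division of authority described above (domain-b, not domain-a, decides who may read domain-b's resources; no header means the browser's default deny applies) can be sketched as a tiny server-side helper. This is a hypothetical illustration, not UCF's actual setup:

```python
def cors_allow_origin(request_origin, allowed_origins):
    """Decide the Access-Control-Allow-Origin response header for a
    cross-origin request arriving at this server (playing domain-b).

    Returning None means no header is sent, so the browser's same-origin
    policy applies and the requesting page cannot read the response
    (default deny). Returning the requesting origin selectively allows
    that one origin to read it."""
    if request_origin in allowed_origins:
        return request_origin  # selectively allow this cross-origin reader
    return None  # withhold the header: browser blocks the read
```

A usage sketch, with hypothetical origins: if the main .edu site listed its subdomain, `cors_allow_origin("https://sub.ucf.edu", {"https://sub.ucf.edu"})` would return the origin to echo back, while any unlisted origin would get `None` and be denied by the browser.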
Re: CfC: to publish new WD of CORS; deadline July 20
That is correct (both that I volunteered and that I have not had time). I find myself home-bound for a couple days so I should be able to get something out to Anne for feedback by the end of the week. Apologies to all for the delay, -- Dirk On Wed, Jul 14, 2010 at 3:48 AM, Anne van Kesteren ann...@opera.com wrote: On Tue, 13 Jul 2010 17:50:26 +0200, Mark S. Miller erig...@google.com wrote: Has anyone been working towards a revised Security Considerations section? Your Google colleague Dirk has volunteered but I believe has not yet had the time unfortunately. -- Anne van Kesteren http://annevankesteren.nl/
Re: [cors] Allow-Credentials vs Allow-Origin: * on image elements?
On Wed, Jul 7, 2010 at 5:53 PM, Charlie Reis cr...@chromium.org wrote: It's not just implementation effort-- as I mentioned, it's potentially a compatibility question. If you are proposing not sending cookies on any cross-origin images (or other potential candidates for CORS), do you have any data about which sites that might affect? Personally, I would love to see cross-origin subresource requests change to not using cookies, but that could break existing web sites that include subresources from partner sites, etc. Is there a proposal or discussion about this somewhere? I believe we have discussed this in the past and been uncertain as to whether or not this would break things on the web; we have very little real-world data as to how CORS is currently being used (if at all). I think I mentioned the possibility of instrumenting Chrome to look into this, but haven't yet done so. -- Dirk In the mean time, the canvas tainting example in the spec seems difficult to achieve. Charlie On Wed, Jul 7, 2010 at 5:05 PM, Devdatta Akhawe dev.akh...@gmail.com wrote: hmm, I think I quoted the wrong part of your email. I wanted to ask why would it be undesirable to make CORS GET requests cookie-less. It seems the argument here is reduction of implementation work. Is this the only one? Note that even AnonXmlHttpRequest intends to make GET requests cookie-less. Regards devdatta I meant undesirable in that it will require much deeper changes to browsers. I wouldn't mind making it possible to request an image or other subresource without cookies, but I don't think there's currently a mechanism for that, is there? And if there's consensus that user agents shouldn't send cookies at all on third party subresources, I'm ok with that, but I imagine there would be pushback on that sort of proposal-- it would likely affect compatibility with existing web sites. I haven't gathered any data on it, though. 
The benefit to allowing * with credentials is that it lets CORS work with the existing browser request logic for images and other subresources, where cookies are currently sent with the request. Charlie On 7 July 2010 16:11, Charlie Reis cr...@chromium.org wrote: On Wed, Jul 7, 2010 at 4:04 PM, Mark S. Miller erig...@google.com wrote: On Wed, Jul 7, 2010 at 1:09 PM, Charlie Reis cr...@chromium.org wrote: [...] That's unfortunate-- at least for now, that prevents servers from echoing the origin in the Access-Control-Allow-Origin header, so servers cannot host public images that don't taint canvases. The same problem likely exists for other types of requests that might adopt CORS, like fonts, etc. Why would public images or fonts need credentials? Because it's undesirable to prevent the browser from sending cookies on an img request, and the user might have cookies for the image's site. It's typical for the browser to send cookies on such requests, and those are considered a type of credentials by CORS. Charlie I believe the plan is to change HTML5 once CORS is somewhat more stable and use it for various pieces of infrastructure there. At that point we can change img to transmit an Origin header with an origin. We could also decide to change CORS and allow the combination of * and the credentials flag being true. I think * is not too different from echoing back the value of a header. I would second the proposal to allow * with credentials. It seems roughly equivalent to echoing back the Origin header, and it would allow CORS to work on images and other types of requests without changes to HTML5. Thanks, Charlie -- Cheers, --MarkM
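The equivalence argued above, that allowing `*` with credentials is roughly the same as a server echoing back the Origin header, can be sketched in a few lines of Python (the helper names are hypothetical):

```python
def echo_origin_policy(request_origin):
    """A server that unconditionally echoes the requesting origin while
    allowing credentials -- permitted by CORS today, unlike '*' plus
    credentials."""
    return {"Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Credentials": "true"}

def grants_read_access(response_headers, request_origin):
    """Approximation of the browser-side check: the response is readable
    if the allow-origin header is '*' or matches the requesting origin."""
    allow = response_headers.get("Access-Control-Allow-Origin")
    return allow == "*" or allow == request_origin
```

Under this model, `grants_read_access(echo_origin_policy(o), o)` is true for every origin `o`, which is the sense in which an unconditional echo is already as permissive as `*` would be.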
Re: widget example of CORS and UMP
On Thu, May 13, 2010 at 7:53 PM, Ian Hickson i...@hixie.ch wrote: On Thu, 13 May 2010, Dirk Pranke wrote: The initial, insecure CORS solution is straightforward ... a gadget running on My Yahoo! sends an XHR with the user's credentials to http://finance.yahoo.com/api/v1/my_portfolio and gets some JSON back. If I understand this right, you are saying that the user visits my.yahoo.com, and that page does an XHR to finance.yahoo.com, right? Yes. One interesting aspect of the CORS solution is that there is no way for the CORS-based gadget to get another user's data; it is simply not possible to request it, since a different set of cookies cannot be included in the request. The CORS solution requires you to believe in the security of proper cookie handling. With an HttpOnly cookie, cookies can be handled pretty safely, but I imagine all of us are fairly aware that cookies get exposed all the time, and hence this is hardly a perfect solution, maybe just good enough. One could also observe that with the naive implementation of the CORS API, *any* site could trivially fetch the user's portfolio data, which is presumably not desirable, and so Yahoo! Finance would also need to check the Origin: header. Actually it just needs to send back a header saying that only my.yahoo.com is allowed to read it, and the client will take care of it. True, with a trusted client (which is generally what we're assuming). And an untrusted client could of course forge the Origin header. Now suppose that My Yahoo! allows the user to install third-party gadgets. Suddenly neither Yahoo! Finance nor the browser can distinguish a safe request from a trusted gadget from an unsafe request from an untrusted gadget. If my.yahoo.com is running _untrusted_ script in the context of my.yahoo.com, then forget XHR -- the game is already over long before we get to CORS.
For example, the script could do a convincing phishing attack trivially, get the user's credentials, and then send them to the attacker for use offline. UMP or CORS are irrelevant here. If you want to run untrusted script, you can use iframe sandbox, and then CORS is neutralised (since the Origin is unguessable and unforgeable), which seems to me to be what we want in that scenario. True. -- Dirk
Re: widget example of CORS and UMP
On Fri, May 14, 2010 at 1:15 AM, Maciej Stachowiak m...@apple.com wrote: On May 13, 2010, at 6:40 PM, Dirk Pranke wrote: On Thu, May 13, 2010 at 6:13 PM, Maciej Stachowiak m...@apple.com wrote: ; you're right. If you don't run the code in an off-domain iframe or through a sanitizer like Caja, then everything on your site is vulnerable, not just resources protected via CORS. Using different-origin iframes with postMessage to communicate to the container seems like a fine solution for third-party gadgets. What goal is it defeating? Why would embedding gadgets inline without a frame be a goal? One could argue that maintaining off-domain iframes is a hack, or is a maintenance burden. You are correct of course that if you don't do either, you are vulnerable. Hack: not sure why it would be a hack. Embedding self-contained pieces of interactive content is *exactly* what iframes are designed for. The case where using iframes is a bit of a hack (in my opinion) is when you use an invisible iframe as a way to implement a cross-site data API, where visual embedding is not an issue. I think the idea of running untrusted JavaScript code from your own origin is terrifying, no matter how much you have tried to verify and restrict it. So even if I were using a Caja-style tool, I would still want to put the untrusted content in an off-domain iframe. Maintenance burden: I don't see that either. You could either host the iframe content on the domain of the gadget provider, in which case it reduces burden on the hosting site, or else it's easy to support an unbounded number of subdomain hostnames from a single server. Where's the burden? Speaking from the viewpoint of a (fairly naive) content developer, it still seems like a hassle, but you're right that it's not much of one. And, as others have pointed out, the sandbox attribute in HTML 5 makes this pretty painless (as well as making the intent obvious).
At any rate, I grant that best practice would probably be to run any third-party code in a sandboxed iframe unless you really needed to not do that. Alternately, tools like Caja would block all use of XHR other than the anonymous kind. Exactly, so the off-domain IFRAME is the only option here. I'm not following you. Wrapping untrusted third-party widgets with Caja would not prevent my.yahoo.com from doing XHR on its own behalf. I guess it is I who am not following you. Why is that an issue? Even if non-cajoled code is doing credentialled XHRs on the page, the cajoled code can't make use of that fact. Thus, any of the reasonably secure ways to embed a third-party widget would not be vulnerable. What about the UMP-based solution; is it vulnerable? If the page containing the third-party gadget does not also contain the Yahoo!-provided portfolio gadget, then the $UNGUESSABLE_ID is not easily obtained, and so, not really. Actually, if any page served off of my.yahoo.com contains $UNGUESSABLE_ID, and the widget is embedded on the My Yahoo origin and not protected with a tool like Caja, then the third-party gadget can trivially get the unguessable ID. It doesn't even have to use it right away while the My Yahoo site is embedding widgets in an insecure way - it can exfiltrate it for later use at the time and place of its choosing. True. I did not consider this interestingly different, but in retrospect I was perhaps wrong. It makes it more clear why tools like Caja have to restrict the networking that cajoled content is allowed to do. This is the main risk of UMP compared to CORS. Because secret tokens are the only security tool you have, you have the problem of maintaining confidentiality of a shared secret. If that shared secret is embedded in Web pages you serve, and/or embedded in a URL, that is hard to do. Agreed. In this particular use case, protecting that URL is probably not difficult, but it is a general problem. Why do you think so? URLs tend to leak.
Browsers store them all over and do not generally treat them like secure information. Users share them freely. If you want to keep something secret, the last thing you want to do is put it in a URL. If I were designing a secret token based defense, at minimum I would use POST and put the secret token in the POST body instead of in the URL. While it is true that users share URLs freely, they don't tend to share URLs that aren't trivially exposed to them (in the URL bar, when you hover over a link). There are plenty of ways to create the URL that a regular user will never see. In my experience, they are a fairly safe way of protecting confidentiality if used properly by page authors. Although, I do not want to downplay this concern too much. There are also more subtle risks to shared secrets. If you are creating your secrets with a bad random number generator, then they will not in fact be unguessable and you have a huge vulnerability. Even security experts can make this mistake, here
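On the random-number pitfall above: in Python, a sketch of generating UMP-style unguessable tokens would draw from a cryptographically secure source such as the stdlib `secrets` module rather than a seeded PRNG like `random.Random`. The function name and token size here are illustrative:

```python
import secrets

def make_unguessable_token(nbytes: int = 16) -> str:
    """Generate a shared-secret token of the kind UMP relies on.

    secrets.token_urlsafe draws from the OS CSPRNG, so the token is
    computationally unguessable; a seeded general-purpose PRNG would
    produce tokens an attacker could reconstruct, which is exactly the
    'bad random number generator' vulnerability described above."""
    return secrets.token_urlsafe(nbytes)
```

With 16 bytes (128 bits) of entropy, collisions and brute-force guessing are infeasible in practice; the remaining risk, as the thread notes, is keeping the token confidential once it is embedded in pages or URLs.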
Re: widget example of CORS and UMP
On Fri, May 14, 2010 at 1:17 AM, Anne van Kesteren ann...@opera.com wrote: On Fri, 14 May 2010 03:40:12 +0200, Dirk Pranke dpra...@chromium.org wrote: Exactly, so the off-domain IFRAME is the only option here. <iframe srcdoc="..." sandbox="allow-scripts"> is an alternative solution, if you want everything in the same document. HTML 5 to the rescue! -- Dirk
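As a server-side sketch of the same idea (a hypothetical Python helper, not from the thread): untrusted widget markup can be wrapped in a sandboxed srcdoc iframe, with HTML-escaping so the untrusted content cannot break out of the attribute. Using `sandbox="allow-scripts"` without `allow-same-origin` gives the content a unique opaque origin, so it cannot reuse the embedding page's credentials:

```python
from html import escape

def sandboxed_widget_html(untrusted_html: str) -> str:
    """Wrap untrusted widget markup in an inline sandboxed iframe.

    escape(..., quote=True) encodes <, >, &, and quotes, so the widget
    markup stays inside the srcdoc attribute value and cannot inject
    tags or attributes into the host page."""
    return ('<iframe sandbox="allow-scripts" srcdoc="%s"></iframe>'
            % escape(untrusted_html, quote=True))
```

For example, a widget containing a script tag ends up entity-encoded inside the attribute, where the browser will parse it only within the sandboxed frame's document.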
Re: widget example of CORS and UMP
On Fri, May 14, 2010 at 10:18 AM, Tyler Close tyler.cl...@gmail.com wrote: On Fri, May 14, 2010 at 1:15 AM, Maciej Stachowiak m...@apple.com wrote: OK, so there's two vulnerability scenarios: Actually, there is at least one other kind of vulnerability in the CORS design that has not been mentioned by anyone yet and that does not require XSS or untrusted code. Before I describe the attack, I want to remind everyone that the purpose of this particular scenario was to study the usability of CORS and UMP in a benign situation. This example only has a page from Yahoo talking to servers also operated by Yahoo. There are also no side-effects in this example; it's purely a data presentation example. Given that CORS and UMP are new protocols and that this is the most benign scenario we can conjure, I think it's fair to expect a solution with strong security properties. It should be damning if the solution to this very simple scenario introduces complex security problems. We are talking about enabling a class of functionality (cross-origin messaging) that isn't currently possible on the web. Obviously if it is possible to do so securely and easily, that's a good thing. If that is not possible, and the options are to enable things that have relative degrees of security or ease of use, then it becomes much more debatable. Damning is a strong word to use in this situation, especially since I think most people would see from the interchange between Maciej and me that neither solution (CORS or UMP) makes things trivially securable. Another conclusion could be that doing this stuff is just hard. First I'll explain a concrete attack against the concrete example, and then I'll generalize it to explain why we should expect this problem to be recurring. The CORS solution to the scenario creates a widely known URL, http://finance.yahoo.com/api/v1/my_portfolio, that is treated specially when the request happens to come from the my.yahoo.com origin.
If you have tunnel vision on only the portfolio widget, then you might see no problem, but there are also other pages with other content on the my.yahoo.com domain. What happens if they make a request to this same URL? Could something unexpected and wrong happen? Let's say My Yahoo also has a page that fetches the HotTrade of the Day and posts its current price to my public activity stream, letting my friends know what investment I'm researching at the moment. The code for this content is audited by Yahoo and is not malicious. The HotTrade of the Day can be on any market in the world and for anything. It might be hog futures in China or rice in Chicago. The page is used by momentum traders who just want to invest in anything that's moving quickly. Since no site lists the price of everything that can be traded, the HotTrades service returns the URL to GET the current price. The page content was created by a trading firm that wants to boost trades and attract new customers interested in trading on all of the world's markets. The page content makes the following requests:

GET http://hottrades.foo/hotnow HTTP/1.0
Origin: my.yahoo.com

HTTP/1.0 200 OK
Content-Type: text/plain

http://finance.yahoo.com/stock/goog/instaprice

and then:

GET http://finance.yahoo.com/stock/goog/instaprice HTTP/1.0
Origin: my.yahoo.com

HTTP/1.0 200 OK
Content-Type: text/plain

510.88

and then:

POST http://my.yahoo.com/stream/append HTTP/1.0
Origin: my.yahoo.com
Content-Type: text/plain

I got my tip at 510.88. Find your price at: http://finance.yahoo.com/stock/goog/instaprice

HTTP/1.0 204 No Content

Later, an attacker causes an unexpected URL to get into the HotTrades tip stream, resulting in the my.yahoo.com page doing the following:

GET http://hottrades.foo/hotnow HTTP/1.0
Origin: my.yahoo.com

HTTP/1.0 200 OK
Content-Type: text/plain

http://finance.yahoo.com/api/v1/portfolio/mine

and then:

GET http://finance.yahoo.com/api/v1/portfolio/mine HTTP/1.0
Origin: my.yahoo.com

HTTP/1.0 200 OK
Content-Type: application/json

{ /* all my portfolio data */ }

and then:

POST http://my.yahoo.com/stream/append HTTP/1.0
Origin: my.yahoo.com
Content-Type: text/plain

I got my tip at { /* all my portfolio data */ }. Find your price at: http://finance.yahoo.com/api/v1/portfolio/mine

HTTP/1.0 204 No Content

If the Yahoo Finance portfolio was designed to use UMP instead of CORS, this hack would not compromise any private portfolio data, since the attacker doesn't know the unguessable secret for anyone's private portfolio. If the code had been audited, then it is reasonable to assume that someone would have caught that allowing the HotTrades service to tell the user's page to fetch *any URL at all* was a bad idea, and the API should have been restricted to "GET http://finance.yahoo.com/stock/%s/instaprice" instead of "GET %s". This is also what Maciej said in his Don't Be a Deputy slides - Guarantee that requests on behalf of a third party look different than your own.
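The audit fix described above, accepting only URLs matching the instaprice template rather than arbitrary URLs, might look like this Python sketch (the function name and ticker-symbol pattern are hypothetical):

```python
import re
from urllib.parse import urlsplit

def is_allowed_price_url(url: str) -> bool:
    """Allow only 'GET http://finance.yahoo.com/stock/%s/instaprice'
    style URLs, instead of letting the HotTrades tip stream name any
    URL at all -- the confused-deputy fix discussed in the thread."""
    parts = urlsplit(url)
    return (parts.scheme == "http"
            and parts.netloc == "finance.yahoo.com"
            and re.fullmatch(r"/stock/[A-Za-z0-9.\-]+/instaprice",
                             parts.path) is not None
            and not parts.query
            and not parts.fragment)
```

With this check in place, the benign tip URL passes while the attacker's substituted portfolio URL is rejected before any credentialled request is made; the design choice is to validate against a single known-safe template rather than attempting a blocklist of every origin that might grant special access to my.yahoo.com.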
Re: widget example of CORS and UMP
On Fri, May 14, 2010 at 12:00 PM, Tyler Close tyler.cl...@gmail.com wrote: On Fri, May 14, 2010 at 11:27 AM, Dirk Pranke dpra...@chromium.org wrote: On Fri, May 14, 2010 at 10:18 AM, Tyler Close tyler.cl...@gmail.com wrote: On Fri, May 14, 2010 at 1:15 AM, Maciej Stachowiak m...@apple.com wrote: OK, so there's two vulnerability scenarios: Actually, there is at least one other kind of vulnerability in the CORS design that has not been mentioned by anyone yet and that does not require XSS or untrusted code. Before I describe the attack, I want to remind everyone that the purpose of this particular scenario was to study the usability of CORS and UMP in a benign situation. This example only has a page from Yahoo talking to servers also operated by Yahoo. There are also no side-effects in this example; it's purely a data presentation example. Given that CORS and UMP are new protocols and that this is the most benign scenario we can conjure, I think it's fair to expect a solution with strong security properties. It should be damning if the solution to this very simple scenario introduces complex security problems. We are talking about enabling a class of functionality (cross-origin messaging) that isn't currently possible on the web. Obviously if it is possible to do so securely and easily, that's a good thing. If that is not possible, and the options are to enable things that have relative degrees of security or ease of use, then it becomes much more debatable. Damning is a strong word to use in this situation, If the introduced security problems are complex and therefore hard or infeasible to solve, then damning is the right word. If the simple, benign scenario is made infeasible, that's damning. especially since I think most people would see the interchange between Maciej and I that neither solution (CORS or UMP) makes things trivially securable. Another conclusion could be that doing this stuff is just hard. There's a big difference between trivially and infeasible. 
What are the issues with UMP where we cannot provide concrete guidance to developers? As I've shown, there are hard unknowns in the CORS solution. You've shown that there are cases where CORS is not secure. I don't know that I would agree with your assessment that you've shown that there are hard unknowns. As Maciej has shown, simply saying make sure the URL can't be easily obtained is not that easy. If the Yahoo Finance portfolio was designed to use UMP instead of CORS, this hack would not compromise any private portfolio data since the attacker doesn't know the unguessable secret for anyone's private portfolio. If the code had been audited, then it is reasonable to assume that someone would have caught that allowing the HotTrades service to tell the user to fetch *any url at all* was a bad idea, and the API should have been restricted to GET http://finance.yahoo.com/stock/%s/instaprice; instead of GET %s. You've changed the scenario so that now HotTrades can only happen on Yahoo listed securities, instead of those listed on any exchange in the world. You have to allow fetching of any URL to make the application work. If that is true, then a reasonable audit would not allow that app to run on my.yahoo.com, because of the dangers involved. A possible CORS solution is to check that the URL does not refer back to a user private resource on my.yahoo.com and so do a check on the domain in the URL from HotTrades. However, now you have to wonder about other domains that accept cross-domain requests from my.yahoo.com, such as finance.yahoo.com. How do you list all other domains that might be giving special cross-domain access to my.yahoo.com? You can't; it's an unbounded list that is not even under the control of my.yahoo.com. This is also what Maciej said in his Don't Be a Deputy Slides - Guarantee that requests on behalf of a third party look different than your own. How do I make a GET request for public data look different? 
I look forward to seeing DBAD explained in greater depth. I am completely unconvinced by what has been presented to date. If it is not possible to deploy an app with the level of distinction possible, then you don't deploy it. You are correct that it is possible to use CORS unsafely. It is possible to use UMP unsafely, Again, that is broken logic. It is possible to write unsafe code in C++, but it is also possible to write unsafe code in Java, so there's no security difference between the two languages. Please, this illogical argument needs to die. I did not say that they were the same. since - as others have expressed - everything depends on the URL being unguessable. or putting an unguessable token somewhere else in the request. In CORS, everything depends on the cookie being unguessable and confidential. Adam has shown how hard that is. True, and yet people use it all the time to provide a good enough level of security
Re: widget example of CORS and UMP
On Fri, May 14, 2010 at 12:27 PM, Tyler Close tyler.cl...@gmail.com wrote: On Fri, May 14, 2010 at 12:20 PM, Ojan Vafai o...@chromium.org wrote: On Fri, May 14, 2010 at 12:00 PM, Tyler Close tyler.cl...@gmail.com wrote: On Fri, May 14, 2010 at 11:27 AM, Dirk Pranke dpra...@chromium.org wrote: You are correct that it is possible to use CORS unsafely. It is possible to use UMP unsafely, Again, that is broken logic. It is possible to write unsafe code in C++, but it is also possible to write unsafe code in Java, so there's no security difference between the two languages. Please, this illogical argument needs to die. This feels like a legal proceeding. Taken out of context, this sounds illogical, in the context of the rest of the paragraph Dirk's point makes perfect sense. My email included all of Dirk's text. I didn't remove it from context. I don't think it makes any sense, even in context. In the same way that CORS has security problems, so does UMP. No, not in the same way. The security issues are different in nature and severity. You can't just say there exist problems in both, so they're equivalent. That's not sensible. Ojan said in the same way, not me, and I did not say that they were equivalent. I agree that the security issues are different in nature. Your assessment of the relative severity of the two problems differs from others' assessments. At this point, I think this thread has run its course, unless there are further specific criticisms against the example I gave. I think the important takeaway is that as long as the two sites are only running trusted code (i.e., no third party gadgets), CORS met the intended use case securely. As soon as a third party was introduced, the potential for confused deputies arose and life got a whole lot more complicated. -- Dirk
Re: widget example of CORS and UMP
On Fri, May 14, 2010 at 1:44 PM, Tyler Close tyler.cl...@gmail.com wrote: On Fri, May 14, 2010 at 12:27 PM, Dirk Pranke dpra...@chromium.org wrote: On Fri, May 14, 2010 at 12:00 PM, Tyler Close tyler.cl...@gmail.com wrote: On Fri, May 14, 2010 at 11:27 AM, Dirk Pranke dpra...@chromium.org wrote: On Fri, May 14, 2010 at 10:18 AM, Tyler Close tyler.cl...@gmail.com wrote: On Fri, May 14, 2010 at 1:15 AM, Maciej Stachowiak m...@apple.com wrote: OK, so there's two vulnerability scenarios: Actually, there is at least one other kind of vulnerability in the CORS design that has not been mentioned by anyone yet and that does not require XSS or untrusted code. Before I describe the attack, I want to remind everyone that the purpose of this particular scenario was to study the usability of CORS and UMP in a benign situation. This example only has a page from Yahoo talking to servers also operated by Yahoo. There are also no side-effects in this example; it's purely a data presentation example. Given that CORS and UMP are new protocols and that this is the most benign scenario we can conjure, I think it's fair to expect a solution with strong security properties. It should be damning if the solution to this very simple scenario introduces complex security problems. We are talking about enabling a class of functionality (cross-origin messaging) that isn't currently possible on the web. Obviously if it is possible to do so securely and easily, that's a good thing. If that is not possible, and the options are to enable things that have relative degrees of security or ease of use, then it becomes much more debatable. Damning is a strong word to use in this situation, If the introduced security problems are complex and therefore hard or infeasible to solve, then damning is the right word. If the simple, benign scenario is made infeasible, that's damning. 
especially since I think most people would see from the interchange between Maciej and me that neither solution (CORS or UMP) makes things trivially securable. Another conclusion could be that doing this stuff is just hard. There's a big difference between trivially and infeasible. What are the issues with UMP where we cannot provide concrete guidance to developers? As I've shown, there are hard unknowns in the CORS solution. You've shown that there are cases where CORS is not secure. I don't know that I would agree with your assessment that you've shown that there are hard unknowns. Further down in this email, you punt on my example, saying that it can't be deployed. If there are classes of applications that CORS cannot address, and these classes are important, then there are hard unknowns in CORS. I should have mentioned that, as many others have pointed out, simply running the widget off-origin (or in a sandboxed iframe) would also work. I do not agree with your second sentence. We know the circumstances that make cookie-based solutions dangerous; you yourself have been very helpful in pointing them out. So, I don't think your characterization of hard unknowns is accurate. For example, will the Security Considerations section of CORS have to say: It is not safe in CORS to make a GET request for public data using a URL obtained from a possibly malicious party. Validating the URL requires global knowledge of all origins that might grant special access to the requestor's origin, and so return private user data. Yes, one would imagine saying something quite similar to that. As Maciej has shown, simply saying make sure the URL can't be easily obtained is not that easy. I saw him assert that. I didn't see him show that. What are the pitfalls that have not been addressed in the UMP spec? Well, he didn't show that any more than I showed how to get the unguessable URL in the first place.
However, the fact that he caught errors in my description of what you had to protect against seemed like a strong argument. If the Yahoo Finance portfolio was designed to use UMP instead of CORS, this hack would not compromise any private portfolio data, since the attacker doesn't know the unguessable secret for anyone's private portfolio. If the code had been audited, then it is reasonable to assume that someone would have caught that allowing the HotTrades service to tell the user to fetch *any url at all* was a bad idea, and that the API should have been restricted to "GET http://finance.yahoo.com/stock/%s/instaprice" instead of "GET %s". You've changed the scenario so that now HotTrades can only work with Yahoo-listed securities, instead of those listed on any exchange in the world. You have to allow fetching of any URL to make the application work. If that is true, then a reasonable audit would not allow that app to run on my.yahoo.com, because of the dangers involved. Or, a reasonable audit could say: that's a fine app, so long as you're using UMP. If CORS requires the app to be rejected, that's a failure.
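The restriction proposed above can be illustrated with a short sketch. This is hypothetical code (the template URL, the ticker format, and the function names are illustrative, not any real Yahoo! API): instead of letting a third-party widget supply an arbitrary URL ("GET %s"), the host fills an untrusted ticker symbol into a fixed template, so the widget can never redirect the fetch elsewhere.

```python
import re

# Fixed template from the discussion above; only the ticker varies.
INSTAPRICE_TEMPLATE = "http://finance.yahoo.com/stock/%s/instaprice"
# Assumed ticker format for the sketch: 1-5 uppercase letters.
TICKER_RE = re.compile(r"^[A-Z]{1,5}$")

def instaprice_url(ticker):
    """Build a fetch URL from an untrusted ticker string, or reject it."""
    if not TICKER_RE.match(ticker):
        raise ValueError("not a ticker symbol: %r" % ticker)
    return INSTAPRICE_TEMPLATE % ticker
```

The point of the audit argument is that `instaprice_url("http://attacker.example/steal")` fails loudly, whereas an unrestricted "GET %s" API would happily fetch it.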
Re: UMP / CORS: Implementor Interest
On Wed, May 12, 2010 at 6:41 PM, Tyler Close tyler.cl...@gmail.com wrote: On Wed, May 12, 2010 at 5:36 PM, Dirk Pranke dpra...@chromium.org wrote: On Wed, May 12, 2010 at 5:15 PM, Tyler Close tyler.cl...@gmail.com wrote: On Wed, May 12, 2010 at 5:07 PM, Adam Barth w...@adambarth.com wrote: On Wed, May 12, 2010 at 4:56 PM, Tyler Close tyler.cl...@gmail.com wrote: Both Adam and Dirk understood correctly. Ideally, I'd like an actual CORS example to work on, since I'd have to make analogies with postMessage(), and I've already made a ton of analogies, apparently to little effect. If people don't fully appreciate the relationship between form based CSRF and CORS based Confused Deputy, then we need an actual CORS application. Out of curiosity, who are the 3 parties involved in the facebook chat example? The little chat widget in the corner of the facebook page looks like a same origin application. Facebook uses a lot of different domains for different purposes. I don't have a complete count, but at least a dozen. The chat feature itself uses a bunch, possibly to get around various connection limits in browsers. This doesn't seem like a good example then, since the attacker would have to be facebook itself. For robustness, I personally consider these scenarios when designing an application, but people on this list might not find it compelling. They might also argue that no attempt was made to protect against such a vulnerability. Keep in mind that the browser's notion of an origin is often much smaller than a single application, which is part of the reason web developers are so keen on CORS. Many of them plan to use it only to talk to trusted hosts without having to use goofy things like JSONP. Enabling this scenario is a fine thing, but it's not the scenario we should be using to test the security properties of CORS. UMP also enables communication between fully trusted participants. It seems like a fine scenario to me. 
We know people want to use CORS for this purpose because it makes their code easier and cleaner (both of which are nice security properties in and of themselves). If both CORS and UMP are secure for this use case, then an interesting question is, which is easier to use? This is particularly relevant if the existing JSONP-based solution uses cookies, since CORS would support this but UMP wouldn't (meaning the degree of rework necessary to port the app would be higher). Note that I am not saying that this should be the only scenario to be reviewed, but you shouldn't just pick and choose the cases that best fit your hypothesis. Over the course of this discussion, I've taken every use case, with every arbitrary constraint that anyone wants to add, and shown a corresponding UMP solution, so it is grossly unfair to accuse me of picking and choosing cases. For this particular discussion, we were explicitly looking for an example of a Confused Deputy vulnerability in an actual CORS application. Such a thing doesn't exist in a scenario with only 2 parties and no attacker. When testing security properties, you need an attacker. I apologize; I thought your intent was to do a security review of CORS, which in my mind would show both the situations in which it can be used safely and the situations in which it can't, not just the latter. The two-party situation is interesting because it is a situation that cannot be solved today without CORS or UMP. So, the point is that CORS can safely enable a class of scenarios that are not possible otherwise. This is a non-trivial point given the current discussion. Besides, I thought part of the point of the argument is that sites that are often considered to be trusted aren't, because of the possibility of SQL injection, XSS, etc. All that said, if you want to compare the usability of CORS and UMP in a 2-party interaction between fully trusted participants, we can do that.
Go ahead and sketch out the challenge problem and corresponding CORS solution. I will send something shortly. -- Dirk
Re: CORS suggestions [Was: Re: UMP / CORS: Implementor Interest]
On Thu, May 13, 2010 at 6:39 AM, Arthur Barstow art.bars...@nokia.com wrote: On May 12, 2010, at 2:42 PM, ext Jonas Sicking wrote: If so, I'd really like to see the chairs move forward with making the WG make some sort of formal decision on whether CORS should be published or not. Repeating the same discussion over and over is not a good use of your time or mine. There is sufficient interest in CORS such that we should continue to work on it. As such, I don't think any type of formal decision re publication is needed. Although this and other recent and related threads have indeed re-hashed some previous discussions, among some of the suggestions made are: * CORS' security considerations section needs improvements http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0625.html http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0630.html * Need security analysis e.g. with multi-party deployments; test the security properties of CORS (e.g. versus UMP) http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0645.html * Need usage information for the app developer and server admin; when is CORS safe to use; which is easier to use; guidelines for not falling prey to attacks with CORS http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0543.html http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0646.html http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0648.html * CORS needs text about Confused Deputy http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0612.html http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0648.html Is anyone willing to contribute to the above? I will happily contribute to this and to whatever work is necessary to merge UMP and CORS into a single spec (plus additional non-normative documents), if that's helpful. -- Dirk
Re: UMP / CORS: Implementor Interest
On Wed, May 12, 2010 at 10:02 PM, Ian Hickson i...@hixie.ch wrote: On Wed, 12 May 2010, Tyler Close wrote: So HTML is not vulnerable to Cross-Site Scripting, C++ is not vulnerable to buffer overflows and so CORS is not vulnerable to Confused Deputy. Correct. As some (at least me) might be confused by what you're saying here, are you saying that C++ isn't vulnerable to buffer overflows, but rather that *some programs* written in C++ are vulnerable to buffer overflows? And, hence, that some usages of CORS aren't vulnerable to Confused Deputy and so you can say that CORS itself is not, either? Or are you saying something stronger, and I'm still not following you? Like MarkM, I perhaps am not understanding the Web standards manner of using the word vulnerable, and so it would be helpful if you could elaborate. To continue the analogy, there is an essential distinction between C++'s vulnerability to buffer overflows and the total lack of such vulnerability in Java, Python, ML, etc. To say that C++ is not subject to buffer overflows but rather that individual programs are at fault is to lose sight of that essential distinction. Much as Tyler is attempting to distinguish between APIs that use ambient authority (and hence are vulnerable, even if some usages are safe) and APIs where that simply cannot happen. Regardless of the above, I agree 100% that it is more fruitful to focus on actual examples so we can be completely clear about this ... -- Dirk
widget example of CORS and UMP
I mentioned earlier that I would attempt to provide a concrete use case for CORS. Here it is; I suggest that this text be used as a basis for part of the security considerations section of the spec. If my example and the analysis is incorrect, or if this is not in fact an intended use case for CORS, please speak up :) The CORS Use Cases section cites "a service such as a news or stock ticker can be on a central server and shared with many other servers" as an API that might make use of CORS. Of course, it mentions the HTML5 EventSource, which I am not very comfortable with, so I will attempt to describe a concrete example using something I am familiar with. I don't think my example changes anything materially and can be deployed using otherwise existing browser technology. There are two variants of such a service - either the service provides generic, anonymous information (e.g., current stock prices), or it provides customized, sensitive information. In the former case, the service does not require authorization or presumably any real credentials, and so the CORS and UMP solutions are essentially identical. Problem solved, value added :) In the latter case, assume the service does provide sensitive data. Here's a real example: Yahoo!'s My Yahoo! service (my.yahoo.com) allows the user to build a customizable page including data from multiple different services. One option is to include portfolio information from Yahoo! Finance (finance.yahoo.com) (*). Such components are variously called widgets or gadgets, and I assume we are all familiar with this concept. (Google's iGoogle does the same thing, as do lots of other services). Historically, I believe this functionality was implemented (and may still be) using backend service-to-service communication (so all browser requests went from client to my.yahoo.com servers).
It would also be possible to create the desired user experience by embedding an IFRAME pointing to a URL on finance.yahoo.com that returned the same content. I understand one desired usage of CORS is to allow us to remove this IFRAME or backend communication, so that the client can directly pull and reformat the data, and so that finance.yahoo.com need only provide a data feed. Here is how I would imagine a regular web developer (meaning, me) would attempt to solve the problem using CORS and UMP. The initial, insecure CORS solution is straightforward ... a gadget running on My Yahoo! sends an XHR with the user's credentials to "http://finance.yahoo.com/api/v1/my_portfolio" and gets some JSON back. The insecure UMP solution is similarly obvious: you just replace the URL with "http://finance.yahoo.com/api/v1/portfolio/$USERID". Given that Yahoo! Finance uses cookies today to identify the user, the CORS solution is trivial to implement. The UMP solution requires My Yahoo! to develop some way of translating $UNGUESSABLE_ID into a user id; this could presumably be almost as easy as stuffing the session id from the cookie into the token. This is slightly more work than the CORS solution, but not a lot more. Now, what are the security implications of these two designs? One interesting aspect of the CORS solution is that there is no way for the CORS-based gadget to get another user's data; it is simply not possible to request it, since a different set of cookies cannot be included in the request. The CORS solution requires you to believe in the security of proper cookie handling. With an HttpOnly cookie, cookies can be handled pretty safely, but I imagine all of us are fairly aware that cookies get exposed all the time, and hence this is hardly a perfect solution, maybe just good enough. With UMP, on the other hand, the gadget can request any user's data that can be guessed.
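The token-translation step mentioned above can be sketched in a few lines. This is a hypothetical illustration (function names and the in-memory table are mine, not any Yahoo! API): the server mints an unguessable token per user and later maps it back to a user id, much as it would map a session cookie.

```python
import secrets

# In a real service this table would live in a session store, not a dict.
_token_to_user = {}

def mint_portfolio_token(user_id):
    """Issue a fresh unguessable token tied to this user."""
    token = secrets.token_urlsafe(16)  # ~128 bits of randomness
    _token_to_user[token] = user_id
    return token

def lookup_user(token):
    """Translate a presented token back to a user id; None if unknown."""
    return _token_to_user.get(token)
```

A guessed or fabricated token simply maps to no user, which is the property the UMP design leans on; the cost, as discussed below, is keeping the issued token confidential.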
The obvious answer to this is to change $USERID to an $UNGUESSABLE_ID, and ensure that $UNGUESSABLE_ID isn't leaked. It is debatable whether ensuring this is easier than ensuring the cookies aren't leaked. One could also observe that with the naive implementation of the CORS API, *any* site could trivially fetch the user's portfolio data, which is presumably not desirable, and so Yahoo! Finance would also need to check the Origin: header. With the UMP solution, this is not strictly necessary but might be a useful defense in depth. The disadvantage of requiring the Origin header is that this is not really an Open API - Yahoo! Finance has to whitelist the callers. Not very web friendly, but this is maybe okay, since this particular API was never intended to be Open. Similarly, UMP requires the caller to somehow obtain $UNGUESSABLE_ID. Doing so is undoubtedly some more work than not needing to do so; whether or not this would be more work than maintaining a whitelist would depend on the implementation of both and is outside the scope of this note. Given all this, both designs are relatively okay as long as you trust what is running on My Yahoo!. Now suppose that My Yahoo! allows the user to install
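The Origin-header whitelist described above amounts to a small piece of server-side logic. A minimal sketch, assuming a hypothetical handler on finance.yahoo.com (the origin list and function name are illustrative): the server only emits an Access-Control-Allow-Origin header for callers it has explicitly approved, and the browser withholds the response body from any other page.

```python
# Hypothetical whitelist for the example API; a real deployment would
# load this from configuration.
ALLOWED_ORIGINS = {"http://my.yahoo.com", "https://my.yahoo.com"}

def cors_headers_for(origin):
    """Return the CORS response headers for a request's Origin, if allowed."""
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Credentials": "true",
        }
    # No CORS headers: the browser refuses to expose the response
    # to the requesting page.
    return {}
```

Note that this check protects the data only from pages the browser faithfully reports; it does nothing against a request made outside a browser, which is why the data itself still needs cookie (or token) protection.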
Re: widget example of CORS and UMP
On Thu, May 13, 2010 at 6:13 PM, Maciej Stachowiak m...@apple.com wrote: On May 13, 2010, at 5:37 PM, Dirk Pranke wrote: One could also observe that with the naive implementation of the CORS API, *any* site could trivially fetch the user's portfolio data, which is presumably not desirable, and so Yahoo! Finance would also need to check the Origin: header. With the UMP solution, this is not strictly necessary but might be a useful defense in depth. The disadvantage of requiring the Origin header means that this is not really an Open API - Yahoo! Finance has to whitelist the callers. Not very web friendly, but this is maybe okay since this particular API was never intended to be Open. Actually, with UMP you can't check the Origin header, since it will be missing (or Origin: null) regardless of the origin. But with CORS you can add defense in depth by requiring an unguessable token in the request, as with the UMP solution. Good point; you're right. Now suppose that My Yahoo! allows the user to install third-party gadgets. Suddenly neither Yahoo! Finance nor the browser can distinguish a safe request from a trusted gadget from an unsafe request from an untrusted gadget. Since the CORS solution uses well-known URLs, it is now helpless against exposing this data to a third party. How can we protect against that? One solution would be to run the third-party gadget in an IFRAME (and from a different domain), again. But this would partially defeat the goal we started with in the first place. Another approach would be to attempt to inspect the widget code (either by a human or by something like Caja) and only allow appropriately sanitized code to execute. However, given that the URL is just a string, I suspect it would be difficult to write a general purpose sanitizer to protect against this, or at least to do so and allow the resulting sanitized gadget to do anything very interesting.
Maybe a human could do it correctly - this is more or less the definition of trusted code, after all. If you don't run the code in an off-domain iframe or through a sanitizer like Caja, then everything on your site is vulnerable, not just resources protected via CORS. Using different-origin iframes with postMessage to communicate to the container seems like a fine solution for third-party gadgets. What goal is it defeating? Why would embedding gadgets inline without a frame be a goal? One could argue that maintaining off-domain iframes is a hack, or is a maintenance burden. You are correct of course that if you don't do either, you are vulnerable. Alternately, tools like Caja would block all use of XHR other than the anonymous kind. Exactly, so the off-domain IFRAME is the only option here. Thus, any of the reasonably secure ways to embed a third-party widget would not be vulnerable. What about the UMP-based solution; is it vulnerable? If the page containing the third-party gadget does not also contain the Yahoo!-provided portfolio gadget, then the $UNGUESSABLE_ID is not easily obtained, and so, not really. Actually, if any page served off of my.yahoo.com contains $UNGUESSABLE_ID, and the widget is embedded on the My Yahoo origin and not protected with a tool like Caja, then the third-party gadget can trivially get the unguessable ID. It doesn't even have to use it right away while the My Yahoo site is embedding widgets in an insecure way - it can exfiltrate it for later use at the time and place of its choosing. True. I did not consider this interestingly different, but in retrospect I was perhaps wrong. This is the main risk of UMP compared to CORS. Because secret tokens are the only security tool you have, you have the problem of maintaining confidentiality of a shared secret. If that shared secret is embedded in Web pages you serve, and/or embedded in a URL, that is hard to do. Agreed.
In this particular use case, protecting that URL is probably not difficult, but it is a general problem. If the page does contain both the third-party gadget and the Yahoo!-provided portfolio gadget, then there would have to be a way to prevent the third-party gadget from being able to crawl the DOM and extract the $UNGUESSABLE_ID. I don't think that that's possible unless you put the third-party gadget in an IFRAME, again. Or, you can again run the third-party gadget through a sanitizer, but in this case we know that you can implement this programmatically, since that's what Caja does. Indeed, but a gadget running same-origin doesn't even have to crawl the DOM; it could use XHR or other means (e.g. iframes) to get access to any resource on the origin unless somehow prevented from doing so. Which Caja does. Agreed. A third option to protect the API would be to modify the Yahoo! Finance API to require an unguessable token even in the CORS case (so you would use cookies + token). This is the analogy to what we do today for XSRF
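The cookies-plus-token option above is essentially the familiar anti-XSRF pattern. A minimal sketch, with hypothetical names (the session table and functions are illustrative): a request must carry both a valid session cookie and an unguessable token previously issued to that session, and the comparison is done in constant time.

```python
import hmac
import secrets

# In a real service this would be a session store keyed by the cookie value.
_session_tokens = {}

def issue_token(session_id):
    """Issue an unguessable token bound to an authenticated session."""
    token = secrets.token_urlsafe(16)
    _session_tokens[session_id] = token
    return token

def request_allowed(session_id, presented_token):
    """Allow the request only if the cookie session and token both match."""
    expected = _session_tokens.get(session_id)
    if expected is None:
        return False  # no such session: the cookie alone is not enough
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, presented_token)
```

With this scheme, a page that can trigger a credentialed cross-origin request but does not know the token gets nothing, which is exactly the defense-in-depth property being discussed.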
Re: widget example of CORS and UMP
On Thu, May 13, 2010 at 6:40 PM, Dirk Pranke dpra...@chromium.org wrote: On Thu, May 13, 2010 at 6:13 PM, Maciej Stachowiak m...@apple.com wrote: I think a more likely use case for CORS does not involve embedded gadgets at all. Consider the example of a social network asking for access to your GMail contacts. Fair enough. Sounds like a different email thread; I'll see what I can do :) Okay, having thought about this for a few minutes, I think that this is essentially identical to the calendar example you worked through in your DBAD slide deck (and which is also listed as an example use case in the CORS spec) (it's just a GET instead of a PUT). So, rehashing that thread here doesn't make a lot of sense, but incorporating it into the security considerations section does. Do you agree? -- Dirk
Re: UMP / CORS: Implementor Interest
On Wed, May 12, 2010 at 4:06 PM, Adam Barth w...@adambarth.com wrote: On Wed, May 12, 2010 at 3:16 PM, Tyler Close tyler.cl...@gmail.com wrote: On Wed, May 12, 2010 at 1:38 PM, Jonas Sicking jo...@sicking.cc wrote: On Wed, May 12, 2010 at 1:31 PM, Tyler Close tyler.cl...@gmail.com wrote: On Wed, May 12, 2010 at 1:13 PM, Jonas Sicking jo...@sicking.cc wrote: On Wed, May 12, 2010 at 12:38 PM, Devdatta dev.akh...@gmail.com wrote: While most of the discussion in this thread is just repeats of previous discussions, I think Tyler makes a good (and new) point in that the current CORS draft still has no mention of the possible security problems that Tyler talks about. The current draft's security section http://dev.w3.org/2006/waf/access-control/#security is ridiculous considering the amount of discussion that has taken place on this issue on this mailing list. Before going to rec, I believe Anne needs to substantially improve this section - based on stuff from maybe Maciej's presentation - which I found really informative. He could also cite UMP as a possible option for those worried about security. I agree that the security section in CORS needs to be improved. As for the should CORS exist discussion, I'll bow out of those until we're starting to move towards officially adopting a WG decision one way or another, or genuinely new information is provided which would affect such a decision (for the record, I don't think I've seen any new information provided since last fall's TPAC). A smart guy once told me that You can't tell people anything, meaning they have to experience it for themselves before they really get it. Has Mozilla tried to build anything non-trivial using CORS where cookies + Origin are the access control mechanism? If so, I'll do a security review of it and we'll see what we learn. Not to my knowledge, no. I believe we use CORS for tinderboxpushlog [1], however since that is only dealing with public data I don't believe it uses cookies or Origin headers. 
Does anyone have something? At the risk of getting myself involved in this discussion again, you might consider doing a security analysis of Facebook Chat. Although Facebook Chat uses postMessage, it uses both cookies and postMessage's origin property for authentication, so it might be a system of the kind you're interested in analyzing. I think (although I'm not certain) that Tyler is asking partially to figure out where a non-anonymous CORS request is used in the real world. If he isn't, then I am :) Given that a major (but not the only) claim of the need to adopt CORS with support for cookies and the Origin header is that it is in fact already implemented and shipping, it would be good to see how it's being used. If we can't find any examples of it being used (in the non-anonymous case, at least), then the argument that we must keep it would hold less water. If we can find it being used, then we can see both how we would handle the case with UMP, and whether or not the CORS usage is in fact secure. -- Dirk
Re: UMP / CORS: Implementor Interest
On Wed, May 12, 2010 at 4:45 PM, Adam Barth w...@adambarth.com wrote: On Wed, May 12, 2010 at 4:38 PM, Dirk Pranke dpra...@google.com wrote: ... I think (although I'm not certain) that Tyler is asking partially to figure out where a non-anonymous CORS request is used in the real world. If he isn't, then I am :) Given that a major (but not the only) claim of the need to adopt CORS with support for cookies and the Origin header is that it is in fact already implemented and shipping, it would be good to see how it's being used. If we can't find any examples of it being used (in the non-anonymous case, at least), then the argument that we must keep it would hold less water. If we can find it being used, then we can see both how we would handle the case with UMP, and whether or not the CORS usage is in fact secure. Oh, I misunderstood. I thought he wanted to do a security review to show that there was a confused deputy causing problems. I think that's part of the same thing (the whether or not the CORS usage is in fact secure part of my note). -- Dirk
Re: UMP / CORS: Implementor Interest
On Wed, May 12, 2010 at 5:15 PM, Tyler Close tyler.cl...@gmail.com wrote: On Wed, May 12, 2010 at 5:07 PM, Adam Barth w...@adambarth.com wrote: On Wed, May 12, 2010 at 4:56 PM, Tyler Close tyler.cl...@gmail.com wrote: ... Oh, I misunderstood. I thought he wanted to do a security review to show that there was a confused deputy causing problems. Both Adam and Dirk understood correctly. Ideally, I'd like an actual CORS example to work on, since I'd have to make analogies with postMessage(), and I've already made a ton of analogies, apparently to little effect. If people don't fully appreciate the relationship between form based CSRF and CORS based Confused Deputy, then we need an actual CORS application. Out of curiosity, who are the 3 parties involved in the facebook chat example?
The little chat widget in the corner of the facebook page looks like a same origin application. Facebook uses a lot of different domains for different purposes. I don't have a complete count, but at least a dozen. The chat feature itself uses a bunch, possibly to get around various connection limits in browsers. This doesn't seem like a good example then, since the attacker would have to be facebook itself. For robustness, I personally consider these scenarios when designing an application, but people on this list might not find it compelling. They might also argue that no attempt was made to protect against such a vulnerability. Keep in mind that the browser's notion of an origin is often much smaller than a single application, which is part of the reason web developers are so keen on CORS. Many of them plan to use it only to talk to trusted hosts without having to use goofy things like JSONP. Enabling this scenario
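The "goofy things like JSONP" remark refers to the pre-CORS workaround where a cross-origin `<script>` tag fetches data wrapped in a caller-named callback. A minimal sketch of what such an endpoint does server-side (`jsonp_wrap` is an illustrative name, not a real library function):

```python
import json

def jsonp_wrap(data, callback):
    """Wrap a JSON payload in a caller-supplied callback name, as a JSONP
    endpoint does so a cross-origin <script> tag can receive the data.
    Sketch only: a real endpoint must also validate the callback name,
    since this response executes as script in the caller's page."""
    return f"{callback}({json.dumps(data)})"
```

The security cost CORS removes here is that the data arrives as executable script running with the requesting page's full authority, rather than as inert data the page can inspect.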
Re: UMP / CORS: Implementor Interest
On Tue, May 11, 2010 at 12:01 PM, Tyler Close tyler.cl...@gmail.com wrote: On Tue, May 11, 2010 at 11:41 AM, Ojan Vafai o...@chromium.org wrote: What is the difference between an authoring guide and a specification for web developers?

The difference is whether or not the normative statements in UMP actually are normative for a CORS implementation. This comes down to whether a developer reading UMP can trust what it says, or whether he must also read the CORS spec.

The key point of making this distinction is that implementors should be able to look solely at the combined spec. No, the key point is to relieve developers of the burden of reading and understanding CORS. The CORS spec takes on the burden of restating UMP in its own algorithmic way so that an implementor can read only CORS.

If figuring out how to have two specs is too much hassle, you could probably get 90%+ of what people are looking for by putting all of the normative stuff in the CORS spec and writing an informational note describing UMP that only discusses the subset of CORS needed for UMP. User agent implementors will have to read the CORS spec regardless of whether UMP is part of it or a separate spec, so creating two specs doesn't help much. And, as others have noted, service developers and web authors don't really tend to need to read the specs, so an article would probably be sufficient. -- Dirk
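The behavioral subset that UMP carves out of CORS comes down to requests carrying no ambient credentials. A minimal sketch of that difference (the header names are real HTTP headers; the filtering function and its name are illustrative, not from either spec):

```python
# A UMP-style "uniform" request goes out with no credential-bearing
# headers attached, so the response cannot depend on who the user is.
# uniform_request_headers is an illustrative helper, not a spec API.

CREDENTIAL_HEADERS = {"cookie", "authorization", "proxy-authorization"}

def uniform_request_headers(headers):
    """Drop credential-bearing headers, as a UMP-style client would."""
    return {k: v for k, v in headers.items()
            if k.lower() not in CREDENTIAL_HEADERS}
```

Under this model any authority the request carries must be explicit, for example an unguessable token in the URL or body, which is what makes the confused-deputy analysis simpler.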
Chromium's support for CORS and UMP
Hi all, A couple weeks back there was a question as to implementor support for UMP and CORS, and that ended up launching a longish thread on the chromium-dev mailing list [1]. Tyler Close has asked me to summarize the conclusions of that thread here, so ...

1) CORS is already implemented and shipping in WebKit, so Chromium supports CORS and will continue to do so for the foreseeable future.

2) We (the Chromium team) are curious as to what CORS is being used for - we don't have a lot of real-world examples, and so we may end up instrumenting the dev channels of Chromium to see if we can actually figure this out [2].

3) UMP appears to be nearly a subset of CORS, and does have a lot of nice properties for security and simplicity. We support UMP and would like to see the syntax continue to be unified with CORS so that it is in fact a subset (I believe this is already happening). We also (mostly) support UMP being a separate spec so that web authors can read it without being bogged down by the additional complexity CORS offers. If there is a good editorial way to handle this in a single spec, that would probably be fine.

4) We acknowledge that CORS can fall prey to confused-deputy style attacks, although they can be mitigated, as Maciej demonstrated a few months ago. However, it appears that there are certain use cases for CORS that are safe and can be easily deployed, and it is unclear whether a UMP-style solution will be as easy to use for web authors. Accordingly, we are reluctant to remove support for CORS altogether until we can better answer (2). Assuming (2) shows that people are using CORS, then, for compatibility reasons, we will not really be in a position to disable it.
Thanks, -- Dirk [1] http://groups.google.com/a/chromium.org/group/chromium-dev/browse_thread/thread/4ffa158e71ec4613/5751f9bed8fe7128?lnk=gst&q=Implementor+interest+in+a+W3C+WebApps+proposal#5751f9bed8fe7128 [2] I am referring to the usage statistics we gather on an opt-in basis, as documented here: http://www.google.com/support/chrome/bin/answer.py?answer=96817&hl=en-US
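The mitigation alluded to in point (4) is, in broad strokes, to authorize each cross-site action with an explicit, unguessable token rather than ambient credentials alone, so a deputy cannot be tricked into spending authority its caller never held. The sketch below is my own illustrative reconstruction of that shape, not Maciej's actual demonstration; all names (`grant`, `act`, `TOKENS`) are hypothetical.

```python
import secrets

# Token-based authorization sketch: each grant mints an unguessable token
# bound to one (user, path) pair, and actions are checked against the
# token, not against who ambiently delivered the request.

TOKENS = {}

def grant(user, path):
    token = secrets.token_hex(16)  # 128 bits; infeasible to guess
    TOKENS[token] = (user, path)
    return token

def act(token, path):
    entry = TOKENS.get(token)
    if entry is None or entry[1] != path:
        return "denied"
    user, _ = entry
    return f"{user} may access {path}"
```

Because the token names both the subject and the specific resource, forwarding a request to a different target fails even when it reaches a fully credentialed deputy.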
Re: UMP / CORS: Implementor Interest
On Wed, Apr 21, 2010 at 11:39 PM, Maciej Stachowiak m...@apple.com wrote: On Apr 21, 2010, at 11:11 PM, Anne van Kesteren wrote: On Thu, 22 Apr 2010 14:36:50 +0900, Adam Barth w...@adambarth.com wrote: Unfortunately ambient doesn't have any good antonyms: http://www.synonym.com/antonym/ambient/

Simon suggested XMLHttpRequestNoContext in #whatwg. Seems relatively clear and works nicely with indexes and autocomplete. Other ideas (also suffix variants instead of prefix variants):
- GuestXMLHttpRequest - Suggested by Mark originally. We now envision more use cases than guest code, however, guest is also the traditional name for an unprivileged account.
- UnprivilegedXMLHttpRequest (or abbreviate to NoPrivs)
- NoAuthorityXMLHttpRequest (or abbreviate to NoAuth, though that may seem like no authentication)
- NoCredentialsXMLHttpRequest (or abbreviate to NoCred)
Can anyone else think of ideas? I tried to mentally fill in sentences like, "I don't want to do this as root, I want to use a/an ___ account" or "I don't want to be logged into site X, I want to be ___ when I visit it." Regards, Maciej

Here are some new directions ...
- ContextFreeRequest
- StatelessRequest
- SessionlessRequest
or, since we're really talking about cookies here ...
- CookielessRequest
- CookieFreeRequest
- SugarFreeRequest
- IncognitoRequest (playing off of Chrome's Incognito mode, which doesn't use your browser's normal cookie store)
-- Dirk