Re: [AC] Helping server admins not making mistakes
Thomas Roessler wrote:
> I think we've both been arguing this all over the place, and the thread might be getting a bit incoherent. So let's try to start over...
>
> The question here is whether it makes sense to add fine-grained controls to the authorization mechanisms to control -- in addition to whether or not cross-site requests are permitted at all --:
> (a) whether or not cookies are sent
> (b) what HTTP methods can be used in cross-site requests.
>
> I have two basic points:
>
> 1. *If* we have to have that kind of fine-grained controls, let's please do them coherently, and within the same framework. The argument here is simply consistency.

Am I understanding you correctly that this is just an argument about what syntax to use? Syntax is certainly important, since it's a tool for reducing human-factor errors, so I'm not saying syntax doesn't matter.

> 2. We shouldn't do (a) above, for several reasons:
> - it adds complexity

For whom? It seems to me that it makes things *a lot* simpler for server operators that want to create mashups with public data. For private data we are already relying on server operators to be clueful enough to ask the user first, so asking them to add an additional opt-in (or tweak their syntax) isn't much to ask at all.

> - it adds confusion (witness this thread)
> - it's pointless
>
> I don't think I articulated the thinking behind the third of these reasons very clearly. The whole point of the access-control model (with pre-flight check and all that) is that requests that can be caused to come from the user's browser are more dangerous than requests that a third party can make itself.
>
> Consider a.example.com and b.example.com. Alice has an account with a.example.com and can wreak some havoc there through requests that have the right authentication headers. The purpose of having the access-control mechanism is:
> - to prevent b.example.com from reading information at a.example.com *using* *Alice's* *credentials* (because b.example.com can also just send HTTP requests from its own server), unless specifically authorized
> - to prevent b.example.com from causing non-GET requests to occur at a.example.com *using* *Alice's* *credentials* (because b.example.com can also just send HTTP requests from its own server), unless specifically authorized
>
> So, if there is an additional way to authorize third-party requests, but without Alice's credentials, we're effectively introducing an authorization regime for the same requests that our attacker could send through the network anyway, by using their own server -- modulo source IP address, that is.

And modulo the fact that the user might be able to connect to a.example.com whereas b.example.com might not be able to, which is the case if a.example.com and the user are both sitting behind the same firewall. These are some pretty important modulos.

> Is that really worth the extra complexity, spec-, implementation-, and deployment-wise? I don't think so.

Content and servers behind firewalls mean that we have no choice but to authorize even requests that don't include the user's credentials.

> (Oh, and what does a no-cookies primitive mean in the presence of VPNs or TLS client certificates?)

That is a good question, and one that we should address.

> About the methods point, my concern is that the same people who are clueless about methods when writing web applications will be clueless about the policies.

I don't agree.
I think it requires more knowledge to know how your server reacts to the full matrix of methods and headers than to opt in to just the headers that you are planning on handling in your CGI script. Of course, it is very hard to get data on this. I do have some ideas on how to get input from experienced people, so hopefully I will have more data in a few days.

/ Jonas
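To make the kind of opt-in Jonas describes concrete, here is a minimal sketch of a server-side handler (Python/WSGI) in which the operator enumerates only the methods and request headers the script actually handles and answers the pre-flight check from those lists. The header names (X-Example-Allow-Methods, X-Example-Allow-Headers) and the shape of the pre-flight response are invented for illustration; they are not the Access-Control draft's syntax.

    # Hypothetical sketch only: the operator's opt-in reduced to two short lists.
    ALLOWED_METHODS = {"GET", "POST"}            # what the script really implements
    ALLOWED_REQUEST_HEADERS = {"content-type"}   # the only extra header it reads

    def application(environ, start_response):
        method = environ["REQUEST_METHOD"]

        if method == "OPTIONS":
            # Pre-flight check: advertise only what the operator opted in to.
            start_response("200 OK", [
                ("X-Example-Allow-Methods", ", ".join(sorted(ALLOWED_METHODS))),
                ("X-Example-Allow-Headers", ", ".join(sorted(ALLOWED_REQUEST_HEADERS))),
            ])
            return [b""]

        if method not in ALLOWED_METHODS:
            # Anything outside the opt-in never reaches the script's real logic.
            start_response("405 Method Not Allowed",
                           [("Allow", ", ".join(sorted(ALLOWED_METHODS)))])
            return [b""]

        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"public data the operator chose to share cross-site\n"]

    if __name__ == "__main__":
        from wsgiref.simple_server import make_server
        make_server("", 8000, application).serve_forever()

The point of the sketch is only that the surface the operator has to reason about shrinks from the full matrix of methods and headers to the two lists at the top.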
Re: [AC] Helping server admins not making mistakes
I think we've both been arguing this all over the place, and the thread might be getting a bit incoherent. So let's try to start over...

The question here is whether it makes sense to add fine-grained controls to the authorization mechanisms to control -- in addition to whether or not cross-site requests are permitted at all --:
(a) whether or not cookies are sent
(b) what HTTP methods can be used in cross-site requests.

I have two basic points:

1. *If* we have to have that kind of fine-grained controls, let's please do them coherently, and within the same framework. The argument here is simply consistency.

2. We shouldn't do (a) above, for several reasons:
- it adds complexity
- it adds confusion (witness this thread)
- it's pointless

I don't think I articulated the thinking behind the third of these reasons very clearly. The whole point of the access-control model (with pre-flight check and all that) is that requests that can be caused to come from the user's browser are more dangerous than requests that a third party can make itself.

Consider a.example.com and b.example.com. Alice has an account with a.example.com and can wreak some havoc there through requests that have the right authentication headers. The purpose of having the access-control mechanism is:
- to prevent b.example.com from reading information at a.example.com *using* *Alice's* *credentials* (because b.example.com can also just send HTTP requests from its own server), unless specifically authorized
- to prevent b.example.com from causing non-GET requests to occur at a.example.com *using* *Alice's* *credentials* (because b.example.com can also just send HTTP requests from its own server), unless specifically authorized

So, if there is an additional way to authorize third-party requests, but without Alice's credentials, we're effectively introducing an authorization regime for the same requests that our attacker could send through the network anyway, by using their own server -- modulo source IP address, that is.

Is that really worth the extra complexity, spec-, implementation-, and deployment-wise? I don't think so.

(Oh, and what does a no-cookies primitive mean in the presence of VPNs or TLS client certificates?)

About the methods point, my concern is that the same people who are clueless about methods when writing web applications will be clueless about the policies.

Hope this helps,
-- Thomas Roessler, W3C [EMAIL PROTECTED]

On 2008-06-11 15:30:22 -0700, Jonas Sicking wrote:

> Thomas Roessler wrote:
>> On 2008-06-10 16:41:41 -0700, Jonas Sicking wrote:
>>> Getting access to a user's cookie information is no small task.
>> I disagree. There are any number of real-world scenarios in which cookies are regularly leaked: JavaScript that's loaded from untrusted sources and captive portals are just two examples that make people bleed cookies. Basing the design here on the premise that cookie-based authentication should somehow be enough to protect against cross-site request forgery strikes me as unwise, in particular when the cost is in additional complexity (and therefore risk).
> Well, if you can get access to a user's cookies and auth information, then nothing that we do here matters at all. Or at least it matters to a much, much smaller extent. This whole spec is basically here precisely to protect the information that is protected by cookies and auth headers (and for most sites only cookies).

>> Ooops, I let your remark about getting access to a user's cookie information above lead me on a bogus path of argument. That's what I get for doing e-mail on a train. :-)
>>
>> The fundamental point is that (a) GET requests and (b) some POST requests can be made with the user's cookies. As far as web application development is concerned, it's probably a really good idea to consider the cross-site request horse (as opposed to the cross-site information access horse) to have left the barn (even if it hasn't left the barn totally).
>>
>> Now, the working group evidently doesn't want to make that assumption for generic POST requests (which is fine); that's why we have the pre-flight check. That's fine with me. However, we're now getting to a point at which we try to prevent the occurrence of certain cross-site requests, specifically ones with cookies. The cost here is complexity in the protocol design and implementation (which is complexity to the deployer).

> Hmm.. I'm confused. All requests we are trying to prevent are ones with cookies. First you say that you are ok
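For the "horse has left the barn" point in the quoted exchange above: a cross-site POST that carries Alice's cookies needs no cross-site XMLHttpRequest and no Access-Control opt-in at all. Below is a minimal sketch (Python; the hostnames are the thread's a.example.com/b.example.com examples, and the /transfer path and form fields are made up) of a page served from b.example.com whose form the browser submits to a.example.com with whatever cookies Alice has there.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # An auto-submitting form: the browser attaches a.example.com's cookies to the
    # resulting POST on its own, with no pre-flight check involved anywhere.
    CSRF_PAGE = b"""<!DOCTYPE html>
    <form action="https://a.example.com/transfer" method="POST">
      <input type="hidden" name="amount" value="1000">
    </form>
    <script>document.forms[0].submit();</script>
    """

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Imagine this handler running on b.example.com.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(CSRF_PAGE)

    if __name__ == "__main__":
        HTTPServer(("", 8000), Handler).serve_forever()

Which cross-site requests should nevertheless trigger a pre-flight check, and whether cookie-less requests deserve their own authorization switch, is exactly what is being debated above.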
Re: [AC] Helping server admins not making mistakes
Hi Thomas and everyone,

So I realize that I'm not quite understanding your previous mail. It sounds like you have some alternative proposal in mind which I'm not following. So let me start by stating my concerns:

My concern with the current spec is that once a server has opted in to the Access-Control spec through the pre-flight request, it is not going to be able to correctly handle all the possible requests that the opt-in enables (with "correctly" here defined as what the server operator had in mind when opting in). I have this concern since currently opting in means that you have to deal with all possible combinations of all valid HTTP headers and HTTP methods. There is currently no way for the server operator to opt in without also having to deal with this.

In the initial mail in this thread I had a proposal to address this concern, at the cost of some complexity in the client. It sounds like you have a counter-proposal. Before you describe it, I have four questions:

- What is the purpose of the proposal?
- Does it still address all or part of my concern above?
- Is it simpler than my proposal?
- Is it simpler than the current spec?

And then, finally, I'm of course interested to hear what your proposal actually is :)

Best Regards,
/ Jonas
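To picture the concern Jonas states above: the decision his proposal asks the policy to capture is roughly the predicate below. This is a hypothetical Python sketch; the names HANDLED and within_opt_in and the sample values are invented for illustration and are not the proposal's actual syntax.

    # The operator enumerates what their script actually handles; the open question
    # is whether anything outside that set should ever reach the script at all.
    HANDLED = {
        "methods": {"GET", "POST"},
        "headers": {"content-type"},   # non-default request headers the script reads
    }

    def within_opt_in(method, extra_headers):
        """True if a cross-site request stays inside what the operator opted in to."""
        return method in HANDLED["methods"] and set(extra_headers) <= HANDLED["headers"]

    # The "all possible combinations" problem: with the current opt-in all three of
    # these reach the script, although only the first one was planned for.
    print(within_opt_in("POST", {"content-type"}))    # True  -- planned for
    print(within_opt_in("DELETE", set()))             # False -- method never handled
    print(within_opt_in("POST", {"x-unexpected"}))    # False -- header never handled

As described earlier in the thread, the proposal would let the operator express those two short lists up front instead of defending against the full matrix inside the script itself.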