Thomas Roessler wrote:
> I think we've both been arguing this all over the place, and the
> thread might be getting a bit incoherent.
>
> So let's try to start over...
>
> The question here is whether it makes sense to add fine-grained
> controls to the authorization mechanisms to govern, in addition to
> whether or not cross-site requests are permitted at all:
>
>   (a) whether or not cookies are sent
>   (b) what HTTP methods can be used in cross-site requests.
>
> I have two basic points:
>
> 1. *If* we have to have that kind of fine-grained control, let's
> please do it coherently, and within the same framework.  The
> argument here is simply consistency.

Am I understanding you right that this is "just" an argument about which syntax to use? Syntax is certainly important, since it's a tool for reducing "human factor" errors, so I'm not saying it doesn't matter.
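To make the contrast concrete (all syntax below is invented strawman, not anything from the draft), compare expressing the whole policy in one framework:

    Access-Control: allow <*> methods <GET, POST> credentials <no>

with the same policy scattered across unrelated mechanisms:

    Access-Control: allow <*>
    Allowed-Methods: GET, POST
    Omit-Credentials: true

The former gives authors a single place to look when auditing what a resource permits.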

> 2. We shouldn't do (a) above, for several reasons:
>
>  - it adds complexity

For whom? It seems to me that it makes things *a lot* simpler for server operators who want to create mashups with public data.

For private data we are already relying on server operators being clueful enough to ask the user first, so asking them to add an additional directive (or tweak their syntax) is not much to ask at all.
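To make the public-data case concrete, here is roughly the sort of thing I picture such an operator deploying. This is only a sketch, and the Access-Control value is strawman syntax:

    #!/usr/bin/env python3
    # Sketch: a CGI script publishing *public* data cross-site.
    # The response never depends on who the user is, so the operator
    # can say "any site may read this, without the user's cookies"
    # and stop worrying about sessions entirely.
    print("Content-Type: application/json")
    print("Access-Control: allow <*>")  # strawman policy syntax
    print()  # blank line ends the CGI headers
    print('{"temperature": 17, "unit": "C"}')

No sessions, no cookies, nothing to audit beyond the fact that the data really is public.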

>  - it adds confusion (witness this thread)
>  - it's pointless
>
> I don't think I articulated the thinking behind the third of these
> reasons very clearly.  The whole point of the access-control model
> (with pre-flight check and all that) is that requests that can be
> caused to come from the user's browser are more dangerous than
> requests that a third party can make itself.
>
> Consider a.example.com and b.example.com.  Alice has an account with
> a.example.com and can wreak some havoc there through requests that
> have the right authentication headers.

> The purpose of having the access-control mechanism is:
>
> - to prevent b.example.com from reading information at a.example.com
>   *using* *Alice's* *credentials* (because b.example.com can also
>   just send HTTP requests from its own server), unless specifically
>   authorized
>
> - to prevent b.example.com from causing non-GET requests to occur at
>   a.example.com *using* *Alice's* *credentials* (because
>   b.example.com can also just send HTTP requests from its own
>   server), unless specifically authorized

> So, if there is an additional way to authorize third-party requests,
> but without Alice's credentials, we're effectively introducing an
> authorization regime for the same requests that our attacker could
> send through the network anyway, by using their own server -- modulo
> source IP address, that is.

And modulo the fact that the user might be able to connect to a.example.com, whereas b.example.com might not be able to. This is the case if a.example.com and the user are both sitting behind the same firewall.

These are some pretty important modulos.
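To spell the asymmetry out: b.example.com's own server, out on the public internet, simply cannot reach a host behind the firewall, while Alice's browser can. A sketch, with intranet.a.example.com as a made-up host name:

    #!/usr/bin/env python3
    # Run from the attacker's server on the public internet.
    import socket

    try:
        # The firewall drops this; the attacker's own server never
        # reaches the intranet host at all.
        socket.create_connection(("intranet.a.example.com", 80), timeout=5)
    except OSError:
        print("unreachable from outside the firewall")
    # A request launched from Alice's browser, sitting *inside* the
    # same firewall, would reach that host without any trouble.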

> Is that really worth the extra complexity, spec-, implementation-,
> and deployment-wise?  I don't think so.

Content and servers sitting behind firewalls mean that we have no choice but to authorize even requests that don't include the user's credentials.

> (Oh, and what does a "no cookies" primitive mean in the presence of
> VPNs or TLS client certificates?)

That is a good question, one that we should address.

> About the methods point, my concern is that the same people who are
> clueless about methods when writing web applications will be
> clueless about the policies.

I don't agree. I think it takes more knowledge to understand how your server reacts to the full matrix of methods and headers than it does to opt in to just the methods and headers you actually plan to handle in your CGI script.
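For comparison, the opt-in style is something an author can do almost mechanically: list the methods the script actually implements and refuse the rest. A sketch (the Access-Control-Allow-Methods header name is illustrative, mirroring the kind of method list under discussion):

    #!/usr/bin/env python3
    # Sketch: a CGI script that opts in to exactly the methods it
    # implements, instead of the author having to reason about how
    # the whole server stack reacts to every method anyone might send.
    import os

    ALLOWED_METHODS = ("GET", "POST")  # everything this script handles
    method = os.environ.get("REQUEST_METHOD", "GET")

    print("Access-Control-Allow-Methods: " + ", ".join(ALLOWED_METHODS))
    if method not in ALLOWED_METHODS:
        print("Status: 405 Method Not Allowed")
        print()
    else:
        print("Content-Type: text/plain")
        print()
        print("handled " + method)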

Of course, it is very hard to get data on this. I do have some ideas on how to get input from experienced developers, so hopefully I will have more data in a few days.

/ Jonas
