Your proposal seems to be two-fold:

1. Drop the <?Access-Control?> PI
2. Simplify the syntax for the Access-Control HTTP header such
   that you can no longer list multiple sites, but rather just
   a single one.

I must admit that I've thought about 2 as well. It's mostly just a change in syntax: is the client origin compared against a list of allowed URIs, or against a single URI?

The only downside I can think of is that it gets harder to keep the policy separated out in a configuration file, such as Apache's .htaccess, for sites that only want to whitelist a small set of sites which remain the same for all users. One example would be www.cnn.com wanting to whitelist cnn.com and cnnpolitics.com in order to allow those sites to read private user data, such as a list of recently read articles or a home zip code.

If we remove the whitelist, you could still in theory write a server module that echoes back the origin for those sites, but it is a lot more work.
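
To make that concrete, here's a rough sketch of the kind of module I mean (Python/WSGI middleware, purely illustrative; I'm reusing the Authorize-XDomain-Origin name from your proposal below, and the origins are just the CNN example from above):

  # Sketch only: echo the request's Origin back when it is on a small,
  # static whitelist, and mark the response as varying on Origin.
  ALLOWED_ORIGINS = {"http://cnn.com", "http://cnnpolitics.com"}

  class EchoOriginWhitelist(object):
      def __init__(self, app):
          self.app = app

      def __call__(self, environ, start_response):
          origin = environ.get("HTTP_ORIGIN")

          def wrapped_start_response(status, headers, exc_info=None):
              if origin in ALLOWED_ORIGINS:
                  headers = list(headers) + [
                      ("Authorize-XDomain-Origin", origin),
                      ("Vary", "Origin"),
                  ]
              if exc_info is not None:
                  return start_response(status, headers, exc_info)
              return start_response(status, headers)

          return self.app(environ, wrapped_start_response)

Not rocket science, but compared to a one-line whitelist in .htaccess it is still code that every such site has to write, deploy and keep correct.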

What I do like about the proposal is that if someone forgets to add the Vary: Origin header, the failure mode is toward being more secure rather than less.

However, we probably do want to add the ability to allow sharing with '*', in order to allow sharing of static resources (especially non-XML ones). I think requiring everyone who wants to share static public data to write a server module that echoes back the Origin header under a new name is a pretty high bar.
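
For the static case I'd expect something as simple as a single fixed response header on the resource, e.g. (reusing the header name from your proposal below purely as a placeholder):

  Authorize-XDomain-Origin: *

That way a static file on a plain HTTP server could be shared with a single line of server configuration and no per-request logic at all.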


On removing the <?Access-Control?> PI: I do think that enabling simple sharing of, for example, XBL bindings and XSLT stylesheets is something that UAs are likely to do no matter what we put in this spec. As a recent example I can point to the number of complaints we are getting about the fact that XSLT stylesheets obey the same-origin policy while CSS stylesheets do not. These complaints increased a lot when we tightened the same-origin algorithm for local-file URIs in FF3.

Whether we want to put this PI in *this* spec or not can of course still be debated. However, note that you'll likely have to deal with the reality of a PI with the same functionality no matter what. If we put it in this spec we can at least ensure that there is a single comprehensible cross-site spec, and that we apply good security principles to the features in that spec.

Like Hixie pointed out, even if we remove the PI from this spec, it is likely that the same feature will appear elsewhere.

/ Jonas

Thomas Roessler wrote:
On 2008-06-12 14:10:56 -0700, Jonas Sicking wrote:

My concern with the current spec is that once a server has opted in
to the Access-Control spec in the pre-flight request, it is
not going to be able to "correctly" handle all the possible
requests that are enabled by the opt-in. With "correctly" here
defined as what the server operator had in mind when opting in.

I share that concern, and I think that your Range example (while
demonstrating the danger of a particularly stupid hypothetical
implementation) in fact illustrates two very real dangers of the
current model:

 - for XML content, the authorization can be triggered by the
   content, and can't be filtered on the HTTP level
 - quite generally, the fact that we are moving toward a model with
   relatively complex messaging from the server to the client means
   that server-side filtering is more difficult (and therefore more
   error-prone) than it needs to be.  The fact that we put the
   default enforcement point on the client (despite the
   *possibility* for server-side enforcement) means that, for many
   web applications, the same code paths that confuse GET and FOOBAR
   in existing web applications will get exercised for serving the
   policies, e.g. in response to a preflight check.  Additionally,
   other weird behavior (such as unknown headers) might further be
   used to cause a badly written web application to send data to the
   client that is then interpreted as an authorization -- even when
   that wasn't intended by the application author at all.

Therefore, I'd propose two additional requirements:

   Servers should be able to easily recognize and handle cross-site
   requests by inspecting HTTP requests, without having to
   manipulate the HTTP response they send.

   Servers should be able to easily recognize and handle any
   authorization data that are sent back to the client, by only
   inspecting HTTP response headers.

Put differently, I'd like to be able to implement the server side of
all of this as a separate module or in an HTTP proxy.

The second requirement above rules out the processing instruction.
Let's get rid of it.

Both requirements together mandate two things:

- Extreme simplicity.

- Making information about the party that causes the cross-domain
  request to occur available to the server.

Now, I don't think the current model is all bad -- as has been
discussed ad nauseam, some policy enforcement *must* happen on the
client, that's inevitable.

However, I'd like to look again at a simpler model:

- Let's keep most of the processing model (including the pre-flight
  check), and let's keep the Origin header.

- Let's throw away the policy language.  Instead, let's add a single
  header to the HTTP Response:

  Authorize-XDomain-Origin: http://.....

  That header can carry a single origin, nothing else.

In terms of processing model changes, read access would only be
permitted if the origin of the request matches the origin of that
response header.

Pre-flight checks would only succeed if the origin of the request
matches the origin of that response header.

To deal with different policies for different origins, send
different answers depending on the HTTP request's origin header. Use
"Vary: Origin" to control cache behavior.

The main benefit of this approach is that there is a single response
header that would have to be checked in an HTTP response filter, in
order to exclude weird server-side processing from enabling bad
client-side behavior.  A paranoid site could even perform the
comparison of Origin and Authorize-XDomain-Origin headers in a
firewall, and (paranoidly) replace responses to unauthorized
cross-domain requests with a 403 -- without having to implement the
entire policy language.
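
The comparison such a filter has to make is deliberately trivial; a
sketch (Python, names illustrative, assuming the header above):

  def cross_site_response_allowed(request_origin, response_headers):
      # request_origin: value of the request's Origin header, or None.
      # response_headers: list of (name, value) tuples from the response.
      if request_origin is None:
          return True  # not a cross-site request, nothing to enforce
      allowed = None
      for name, value in response_headers:
          if name.lower() == "authorize-xdomain-origin":
              allowed = value.strip()
      return allowed == request_origin

Anything for which this returns False gets replaced with a 403, and
the policy language never enters the picture.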

My preference would be to not differentiate by method in the
client-side decisions, but force the server to do that --
preferably, in fact, by way of a dedicated Web server module that
can live outside the web application proper, and has a configuration
setup that fails safely.  (Yeah, I know, dream on. *sigh*)

If people here think that a differentiation by method is really
called for, then the model I'm suggesting would at least give that
differentiation the same granularity that is present for all the
other decisions; I consider that consistency a very good thing.

Thoughts?


