Over the past few years numerous others have hit this issue - for
instance, TimBL and his team on the Tabulator project had to move to a
browser extension, where the access is granted. Many others have done
the same, as I'll probably end up doing, and as will those who follow
after me.
To quote TimBL yesterday, in response to this issue: "Note if they run
an add-on, like Tabulator, then they skip this problem as the code is
deemed trusted." [1]
Here's what happens: applications are no longer portable and, moreover,
they become platform specific. As more and more people start to see the
web as a data tier, they too move to making browser extensions -
browser wars / social impact / the notion of trying to "own" the web
(or access to it) aside...
This leaves us in a scenario where it is the norm to download, install
and trust an application that runs in the browser - an application with
all the access that xhr would have today without any same-origin rules
or CORS.
Now, if you apply the same-origin restriction and push browser
extensions into CORS too, then either the apps everybody depends on
stop working, or everybody opens up their data with CORS and the
original issues CORS tries to address are back.
Problem is, do you address it in the past when it was first noted (time
machine?), now, or wait till the .... really hits the fan?
CORS completely unlinks the web; it makes it not a web (as far as xhr
is concerned, anyway). The only solution that will work, now and into
the future, is to invert the model - everything is public unless access
controlled otherwise. This is why the web works and xhr doesn't -
"arguably broken", as somebody recently said.
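The inversion can be sketched in a few lines - all names here are
invented for illustration, not from any spec. Today a cross-origin read
fails unless the publisher opted in; under the inverted model a
resource is readable unless somebody explicitly protected it, which is
how following a link already works:

```python
def readable_today(resource: dict) -> bool:
    # Default deny: silence from the publisher means "no cross-origin reads".
    return resource.get("cors_opt_in", False)

def readable_inverted(resource: dict) -> bool:
    # Default allow: public unless explicitly access controlled,
    # mirroring how the rest of the web behaves.
    return not resource.get("access_controlled", False)

blog_post = {}                         # plain public page, no headers set
payroll = {"access_controlled": True}  # explicitly protected resource
```

Under the first function the plain public page is unreadable
cross-origin; under the second it is readable and only the protected
resource is not.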
[1] http://lists.w3.org/Archives/Public/www-tag/2010May/0015.html
Replies to specifics below, but they are pretty much obsolete given the
above.
Boris Zbarsky wrote:
On 5/10/10 11:14 PM, Nathan wrote:
2: Implement a user UI confirmation screen to allow JS applications xhr
access to other origin resources. (Similar to the allow desktop
notifications scenario in chromium)
Under what conditions would the typical user be able to make an
informed decision here?
under the same conditions that it's the web and they can put any
address into the address bar that they want?
That's not the same thing as giving site A access to data from site B.
agree~ish; imho it's more the user giving Site A potential access to
all the data from Site B which the user has permission to see. If the
browser pops up that facebook is trying to access company-payroll, then
surely the user will be able to make a pretty informed decision..?
surely people are free to make decisions
Yes, but forcing people to make decisions given insufficient information
is not a nice thing to do, exactly. It happens all the time, but
purposefully making it happen more often is not called for.
agreed; perhaps if that decision were taken out of their hands for
critical resources which they didn't control (like company-payroll) it
would be a better scenario... perhaps one could use something like CORS
to limit xhr access to critical resources..
and indeed make mistakes + learn from them.
Not when their mistakes don't really affect them. See "externality".
agreed, and as above - still the same thing though; the site A/B admins
/ security people should be covering that, I guess..
You're being _way_ too optimistic about this. "corporate sys admins"
are still using HTML blacklists in HTML filters on a routine basis,
after years of education attempts...
Yes, I'm probably being way too optimistic, but the incompetency of
some doesn't mean it's not a better approach.
It might if the "incompetency" is widespread enough. We need to design
for the world we live in, not some utopia where everyone has infinite
amounts of time to study everything and be an expert on it.
again, agreed - hence why I'm suggesting it's better to flip the
scenario: allow unless denied. The people in charge of securing
sensitive data should really be doing the studying for their chosen
career, not Joe Public, who just wants to put up a few blog posts, a
social profile, etc. on a shared host. How do you explain to Joe that
his public profile isn't viewable in his public profile viewer unless
he changes some CORS headers on a shared host he doesn't have access to?
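For the record, the opt-in Joe's host would have to send is a single
header. A minimal sketch (the handler name is invented; `http.server`
is Python's stdlib) of a server that marks everything it serves as
public:

```python
from http.server import SimpleHTTPRequestHandler

class PublicProfileHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # The one-line opt-in: "any origin may read this response".
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

# e.g. HTTPServer(("", 8000), PublicProfileHandler).serve_forever()
```

Trivial if you run the server; impossible if, like Joe, you only rent
space on one.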
I get the feeling I'm not the first person to say this, and certainly
not the last - yet I feel I'm hitting a bit of a brick wall here -
whose web is this again?
All of ours.
:)
no.. CORS is needed if you want to perform any actions cross-site, with
or without credentials, yeah?
No. If you don't need credentials, then:
1) You could have the UA ask your server to perform the action
cross-site itself. No one's stopping you from doing that.
well noted - yet the whole point of making a purely client-side
application is so that it doesn't need a server and can use the web as
a distributed data tier. Not to mention HTTP caching and bandwidth
issues: a server in the middle creates a silo and a bottleneck, and
adds a layer where all a user's data, in and out, can be monitored and
collected; there are many implications to this.
2) You could use one of the APIs being proposed that do not send
user credentials and hence might well not require CORS, since
they will only be able to fetch public data (modulo the
firewall/routing issue, which will indeed need to be resolved).
So what's the point of CORS if it can be completely circumvented?
And for the use-case where user credentials are needed, the browsers
already do a pretty good job of asking for credentials / certs and
letting the user decide what to do;
In the 99% common case they just send the credentials automatically
without asking the user anything.
it's the one case where CORS totally isn't needed, because the server
at the other side does its own auth* ...
No. The whole point here is that just because a user visits your site
doesn't mean that your script should be able to impersonate that user
when talking to other sites. That means either not allowing your script
to talk to other sites unless they explicitly say they're ready for that
sort of thing (current setup + CORS) or not allowing your script to
impersonate the user in the requests it sends.
granted - I was conflating the current common setup with the scope I'm
talking about, which is using client-side certificates over HTTPS,
RESTful and stateless - no impersonation can happen :)
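As a footnote on the credentials case: CORS is in fact stricter once
the user's credentials ride along on the request. A simplified sketch
of that rule (to the best of my reading of the spec): the server must
echo the exact origin - the "*" wildcard is not accepted - and must
also send Access-Control-Allow-Credentials: true before the response is
exposed.

```python
def credentialed_cors_allows(origin: str, headers: dict) -> bool:
    """Simplified check for whether a credentialed cross-origin
    response may be exposed to `origin`."""
    if headers.get("Access-Control-Allow-Credentials") != "true":
        return False
    allowed = headers.get("Access-Control-Allow-Origin")
    # With credentials, a wildcard is rejected; the exact origin is required.
    return allowed == origin and allowed != "*"
```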
Best & obviously nothing personal at all,
Nathan