Re: [XHR2] AnonXMLHttpRequest()

2010-02-04 Thread Kenton Varda
On Thu, Feb 4, 2010 at 2:05 PM, Tyler Close tyler.cl...@gmail.com wrote:

 On Wed, Feb 3, 2010 at 2:34 PM, Maciej Stachowiak m...@apple.com wrote:
  I don't think I've ever seen a Web server send Vary: Cookie. I don't
 know offhand if they consistently send enough cache control headers to
 prevent caching across users.

 I've been doing a little poking around. Wikipedia sends "Vary:
 Cookie". Wikipedia additionally uses "Cache-Control: private", as do
 some other sites I checked. Other sites seem to be relying on
 revalidation of cached entries by making them already expired.


Unfortunately, lots of sites don't get this right.  Look back to 2005-ish
when Google released the Google web accelerator -- basically a glorified
HTTP proxy.  It assumed that servers correctly implemented the standards,
and got seriously burned for serving private pages meant for one user to
other users.  Naturally, web masters all blamed Google, and the product was
withdrawn.  (Note that I was not an employee at the time, much less on the
team, so my version of the story should not be taken as authoritative.)

On the other hand, refusing to cache anything for which the request
contained a cookie seems like a pretty unfortunate limitation.
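
To make the options concrete, here is a minimal sketch of that header
discipline in TypeScript on Node's built-in http module (the handler,
port, and values are illustrative, not any particular site's policy):

import { createServer } from "http";

createServer((req, res) => {
  const personalized = Boolean(req.headers.cookie);
  if (personalized) {
    // Cacheable by this user's browser, but not by shared proxies.
    res.setHeader("Cache-Control", "private, max-age=60");
    res.setHeader("Vary", "Cookie");
  } else {
    // Anonymous responses are safe for any cache.
    res.setHeader("Cache-Control", "public, max-age=3600");
  }
  res.end(personalized ? "per-user page" : "shared page");
}).listen(8080);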


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Kenton Varda
On Mon, Dec 21, 2009 at 5:35 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Dec 21, 2009 at 5:17 PM, Kenton Varda ken...@google.com wrote:
  The problem we're getting at is that CORS is being presented as a
  security mechanism, when in fact it does not provide security.  Yes,
  CORS is absolutely easier to use than UM in some cases -- I don't think
  anyone is going to dispute that.  The problem is that the security it
  provides in those cases simply doesn't exist unless you can ensure that
  no resource on *any* of your allowed origins can be tricked into
  fetching your protected resource for a third party.  In practice this
  will be nearly impossible to ensure except in the most simple cases.

 Why isn't this a big problem today for normal XMLHttpRequest?  Normal
 XMLHttpRequest is just like a CORS deployment in which every server
 has a policy of allowing its own origin.


It *is* a problem today with XMLHttpRequest.  This is, for example, one
reason why we cannot host arbitrary HTML documents uploaded by users on
google.com -- a rather large inconvenience!  If it were feasible, we'd be
arguing for removing this ability from XMLHttpRequest.  However, removing a
feature that exists is generally not possible; better to avoid adding it in
the first place.

With CORS, the problems would be worse: now you must vouch not only for
your own server being trustworthy and free of CSRF vulnerabilities, but
also for the servers of everyone you allow to access your resource.  The
problems multiply with every origin you whitelist.


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-18 Thread Kenton Varda
On Fri, Dec 18, 2009 at 12:04 AM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 17 Dec 2009, Kenton Varda wrote:
 
  With the right capability-based infrastructure, the capability-based
  solution would be trivial too.  We don't have this infrastructure.
  This is a valid concern.

 It's not so much that we don't have one, so much as nobody is proposing
 one... I'd be happy if there was a concrete proposal on the table that
 made things as simple as CORS, supported the Web's key use cases as
 easily, and that the browser vendors were all ready to implement.


I hope to work on that.  A lot of it is more a question of software than
standards -- e.g. having a web server which provides easy access to
capability-based design patterns.


  You probably also question the effect of my solution on caching, or
  other technical issues like that.  I could explain how I'd deal with
  them, but then you'd find finer details to complain about, and so on.

 If you're saying that a caps-based infrastructure would have insoluble
 problems, then that makes it a non-starter.


No, I think all the problems are solvable, but the time we might spend
debating them is unbounded.


 If not, then someone who
 thinks this is the right way to go should write up the spec on how to do
 it, and we should iterate it until all the finer details are fixed, just
 like we do with all specs.


  I'm not sure the conversation would benefit anyone, so let's call it a
  draw.

 I'm not in this to win arguments, I'm in this to improve the Web. I'd be
 more than happy to lose if we got something out of it that didn't have
 technical problems. If there's no concrete proposal on the table that
 makes things as simple as CORS, supports the Web's key use cases as
 easily, and that the browser vendors are all ready to implement, then the
 conversation can indeed not benefit anyone.




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
Somehow I suspect all this has been said many times before...

On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak m...@apple.com wrote:

 CORS would provide at least two benefits, using the exact protocol you'd
 use with UM:

 1) It lets you know what site is sending the request; with UM there is no
 way for the receiving server to tell. Site A may wish to enforce a policy
 that any other site that wants access has to request it individually. But
 with UM, there is no way to prevent Site B from sharing its unguessable URL
 to the resource with another site, or even to tell that Site B has done so.
 (I've seen papers cited that claim you can do proper logging using an
 underlying capabilities mechanism if you do the right things on top of it,
 but Tyler's protocol does not do that; and it is not at all obvious to me
 how to extend such results to tokens passed over the network, where you
 can't count on a type system to enforce integrity at the endpoints like you
 can with a system all running in a single object capability language.)


IMO, this isn't useful information.  If Alice is a user at my site, and I
hand Alice a capability to access her data from my site, it should not make
a difference to me whether Alice chooses to access that data using Bob's
site or Charlie's site, any more than it makes a difference to me whether
Alice chooses to use Firefox or Chrome.  Saying that Alice is only allowed
to access her data using Bob's site but not Charlie's is analogous to saying
she can only use approved browsers.  This provides a small amount of
security at the price of greatly annoying users and stifling innovation
(think mash-ups).

Perhaps, though, you're suggesting that users should be able to edit the
whitelist that is applied to their data, in order to provide access to new
sites?  But this seems cumbersome to me -- both to the user, who needs to
manage this whitelist, and to app developers, who can no longer delegate
work to other hosts.

(Of course, if you want to know the origin for non-security reasons (e.g. to
log usage for statistical purposes, or deal with compatibility issues) then
you can have the origin voluntarily identify itself, just as browsers
voluntarily identify themselves.)


 2) It provides additional defense if the unguessable URL is guessed,
 either because of the many natural ways URLs tend to leak, or because of a
 mistake in the algorithm that generates unguessable URLs, or because either
 Site B or Site A unintentionally discloses it to a third party. By using an
 unguessable URL *and* checking Origin and Cookie, Site A would still have
 some protection in this case. An attacker would have to not only break the
 security of the secret token but would also need to manage a confused
 deputy type attack against Site B, which has legitimate access, thus
 greatly narrowing the scope of the vulnerability. You would need two
 separate vulnerabilities, and an attacker with the opportunity to exploit
 both, in order to be vulnerable to unauthorized access.


Given the right UI, a capability URL should be no more leak-prone than a
cookie.  Sure, we don't want users to ever actually see capability URLs
since they might then choose to copy/paste them into who knows where, but
it's quite possible to hide the details behind the scenes, just like we hide
cookie data.

So, I don't think this additional defense is really worth much, unless you
are arguing that cookies are insecure for the same reasons.  (Perhaps we
should only allow users to use approved browsers because other browsers
might leak cookie data?)

And again, this additional defense has great costs, as described above.

So, no, I still think CORS provides no benefit for the protocol I described.
It may seem to provide benefits, but the benefits actually cost far more
than they are worth.
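
(For reference, the "defense in depth" Maciej describes boils down to two
independent checks that must both pass; a minimal TypeScript sketch, with
every token and origin illustrative:)

const validTokens = new Set(["c2c9f1b4d6e8a0f2"]); // unguessable secrets
const allowedOrigins = new Set(["https://site-b.example"]);

// A request succeeds only if BOTH checks pass, so an attacker needs two
// separate breaks: the secret token AND a confused deputy on an allowed
// origin.
function authorize(token: string, origin: string | undefined): boolean {
  if (!validTokens.has(token)) return false;
  if (origin === undefined || !allowedOrigins.has(origin)) return false;
  return true;
}

My claim above is that if capability URLs are handled as carefully as
cookies, that second check adds little.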


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 2:21 AM, Maciej Stachowiak m...@apple.com wrote:


 On Dec 17, 2009, at 1:42 AM, Kenton Varda wrote:

 Somehow I suspect all this has been said many times before...

 On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak m...@apple.com wrote:

 CORS would provide at least two benefits, using the exact protocol you'd
 use with UM:

 1) It lets you know what site is sending the request; with UM there is no
 way for the receiving server to tell. Site A may wish to enforce a policy
 that any other site that wants access has to request it individually. But
 with UM, there is no way to prevent Site B from sharing its unguessable URL
 to the resource with another site, or even to tell that Site B has done so.
 (I've seen papers cited that claim you can do proper logging using an
 underlying capabilities mechanism if you do the right things on top of it,
 but Tyler's protocol does not do that; and it is not at all obvious to me
 how to extend such results to tokens passed over the network, where you
 can't count on a type system to enforce integrity at the endpoints like you
 can with a system all running in a single object capability language.)


 IMO, this isn't useful information.  If Alice is a user at my site, and I
 hand Alice a capability to access her data from my site, it should not make
 a difference to me whether Alice chooses to access that data using Bob's
 site or Charlie's site, any more than it makes a difference to me whether
 Alice chooses to use Firefox or Chrome.  Saying that Alice is only allowed
 to access her data using Bob's site but not Charlie's is analogous to saying
 she can only use approved browsers.  This provides a small amount of
 security at the price of greatly annoying users and stifling innovation
 (think mash-ups).


 I'm not saying that Alice should be restricted in who she shares the feed
 with. Just that Bob's site should not be able to automatically grant
 Charlie's site access to the feed without Alice explicitly granting that
 permission. Many sites that use workarounds (e.g. server-to-server
 communication combined with client-side form posts and redirects) to share
 their data today would like grants to be to another site, not to another
 site plus any third party site that the second site chooses to share with.


OK, I'm sure that this has been said before, because it is critical to the
capability argument:

If Bob can access the data, and Bob can talk to Charlie *in any way at all*,
then it *is not possible* to prevent Bob from granting access to Charlie,
because Bob can always just serve as a proxy for Charlie's requests.

What CORS does do is make it so that Bob (and Charlie, if he is proxying
through Bob) can only access the resource while Alice has Bob's site open in
her browser.  The same can be achieved with UM by generating a new URL for
each visit, and revoking it as soon as Alice browses away.
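
To sketch that per-visit scheme concretely (TypeScript; the storage and
URL shape are illustrative):

import { randomBytes } from "crypto";

const liveTokens = new Map<string, string>(); // token -> Alice's session id

// Mint a fresh, unguessable capability URL each time Alice opens Bob's site.
function mintCapability(sessionId: string): string {
  const token = randomBytes(16).toString("hex");
  liveTokens.set(token, sessionId);
  return `https://site-a.example/feed/${token}`;
}

// Revoke everything minted for the visit as soon as Alice browses away.
function endVisit(sessionId: string): void {
  for (const [token, sid] of liveTokens) {
    if (sid === sessionId) liveTokens.delete(token);
  }
}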



 Perhaps, though, you're suggesting that users should be able to edit the
 whitelist that is applied to their data, in order to provide access to new
 sites?  But this seems cumbersome to me -- both to the user, who needs to
 manage this whitelist, and to app developers, who can no longer delegate
 work to other hosts.


 An automated permission grant system that vends unguessable URLs could just
 as easily manage the whitelist. It is true that app developers could not
 unilaterally grant access to other origins, but this is actually a desired
 property for many service providers. Saying that this feature is
 cumbersome for the service consumer does not lead the service provider to
 desire it any less.


You're right, the same UI I want for hooking up capabilities could also
update the whitelist.  But I still don't see where this is useful, given the
above.



 (Of course, if you want to know the origin for non-security reasons (e.g.
 to log usage for statistical purposes, or deal with compatibility issues)
 then you can have the origin voluntarily identify itself, just as browsers
 voluntarily identify themselves.)


 2) It provides additional defense if the unguessable URL is guessed,
 either because of the many natural ways URLs tend to leak, or because of a
 mistake in the algorithm that generates unguessable URLs, or because either
  Site B or Site A unintentionally discloses it to a third party. By using an
 unguessable URL *and* checking Origin and Cookie, Site A would still have
 some protection in this case. An attacker would have to not only break the
 security of the secret token but would also need to manage a confused
 deputy type attack against Site B, which has legitimate access, thus
 greatly narrowing the scope of the vulnerability. You would need two
 separate vulnerabilities, and an attacker with the opportunity to exploit
 both, in order to be vulnerable to unauthorized access.


 Given the right UI, a capability URL should be no more leak-prone than a
 cookie.  Sure, we don't want users to ever actually see capability URLs

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 10:08 AM, Maciej Stachowiak m...@apple.com wrote:


 On Dec 17, 2009, at 9:15 AM, Kenton Varda wrote:



 On Thu, Dec 17, 2009 at 2:21 AM, Maciej Stachowiak m...@apple.com wrote:


 I'm not saying that Alice should be restricted in who she shares the feed
 with. Just that Bob's site should not be able to automatically grant
 Charlie's site access to the feed without Alice explicitly granting that
 permission. Many sites that use workarounds (e.g. server-to-server
 communication combined with client-side form posts and redirects) to share
 their data today would like grants to be to another site, not to another
 site plus any third party site that the second site chooses to share with.


 OK, I'm sure that this has been said before, because it is critical to the
 capability argument:

 If Bob can access the data, and Bob can talk to Charlie *in any way at
 all*, then it *is not possible* to prevent Bob from granting access to
 Charlie, because Bob can always just serve as a proxy for Charlie's
 requests.


 Indeed, you can always act as a proxy and directly share the data rather
 than sharing the token. However, this is not the same as the ability to
 share the token anonymously. Here are a few important differences:

 - As Ian mentioned, in the case of some kinds of resources, one of the
 service provider's goals may be to prevent abuse of their bandwidth.


It seems more useful to attribute resource usage to the user rather than to
the sites the user uses to access those resources.  In my example, I might
want to limit Alice to, say, 1GB data transfer per month, but I don't see
why I would care if that transfer happened through Bob's site vs. Charlie's
site.


 - Service providers often like to know for the sake of record-keeping who
 is using their data, even if they have no interest in restricting it. Often,
 just creating an incentive to identify yourself and ask for separate
 authorization is enough, even if proxy workarounds are possible. The reason
 given below states such an incentive.


I think this is separate from the security question.  As I said earlier,
origins can voluntarily identify themselves for this purpose, just as
browsers voluntarily identify themselves.


 - Proxying to subvert CORS would only work while the user is logged into
 both the service provider and the actually authorized service consumer who
 is acting as a proxy, and only in the user's browser. This limits the window
 in which to get data. Meanwhile, a capability token sent anonymously could
 be used at any time, even when the user is not logged in. The ability to get
 snapshots of the user's data may not be seen to be as great a risk as
 ongoing on-demand access.


Yes, I directly addressed exactly that point...


 I will also add that users may want to revoke capabilities they grant. This
 is likely to be presented to the user as a whitelist of sites to which they
 granted access, whether the actual mechanism is modifying Origin checks, or
 mapping the site to a capability token and disabling it.


Sure.  This is easy to do via caps.


 How would the service provider generate a new URL for each visit to Bob's
 site? How would the service provider even know whether it's Bob asking for
 an update, or whether the user is logged in? If the communication is via UM,
 the service provider has no way to know. If it's via a hidden form post,
 then you are just using forms to fake the effect of CORS. Note also that
 such elaborations increase complexity of the protocol.


Assuming some UI exists for granting capabilities, as I suggested earlier,
it can automatically take care of generating a new capability for every
connection/visit and revoking it when appropriate.


 To enable permissions to be revoked in a granular way, you must vend
 different capability tokens per site. Given that, it seems only sensible to
 check that the token is actually being used by the party to which it was
 granted.


I disagree.  Delegation is useful, and prohibiting it has a cost.  If we
granted the capability to Bob, why should we care if Bob chooses to delegate
to Charlie?  If Charlie misuses the capability, then we blame Bob for that
misuse.  It's Bob's responsibility to take appropriate measures to prevent
this.  If we don't trust Bob we shouldn't have granted him the capability in
the first place.

And again, CORS doesn't prevent delegation anyway; it only makes it less
convenient.
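
To see why, consider how little code delegation-by-proxy takes; a
hypothetical sketch of Bob's server (TypeScript, using the global fetch
of Node 18+; the endpoint and token are made up):

import { createServer } from "http";

// Bob's server: Site A sees only Bob's legitimate credentials, yet the
// data flows on to Charlie (or anyone else Bob chooses to serve).
createServer(async (_req, res) => {
  const upstream = await fetch("https://site-a.example/feed/BOBS-TOKEN");
  res.writeHead(upstream.status, { "Content-Type": "text/plain" });
  res.end(await upstream.text());
}).listen(9090);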


 My goal was merely to argue that adding an origin/cookie check to a
 secret-token-based mechanism adds meaningful defense in depth, compared to
 just using any of the proposed protocols over UM. I believe my argument
 holds. If the secret token scheme has any weakness whatsoever, whether in
 generation of the tokens, or in accidental disclosure by the user or the
 service consumer, origin checks provide an orthogonal defense that must be
 breached separately. This greatly reduces the attack surface. While this may
 not provide any

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 12:58 PM, Ian Hickson i...@hixie.ch wrote:

 With CORS, I can trivially (one line in the .htaccess file for my site)
 make sure that no sites can use XBL files from my site other than my
 sites. My sites don't do any per-user tracking; doing that would involve
 orders of magnitude more complexity.


I was debating about one particular use case, and this one that you're
talking about now is completely different.  I can propose a different
solution for this case, but I think someone will just change the use case
again to make my new solution look silly, and we'll go in circles.


 How can an origin voluntarily identify itself in an unspoofable fashion?
 Without running scripts?


It can't.  My point was that for simple non-security-related statistics
gathering, spoofing is not a big concern.  People can spoof browser UA
strings but we still gather statistics on them.
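
One conceivable shape for such voluntary identification, sketched in
TypeScript; "X-Consumer-Origin" is a hypothetical header, not a standard
one, and is spoofable by design:

// Spoofable, but adequate for statistics -- just like a User-Agent string.
async function fetchFeed(capabilityUrl: string): Promise<string> {
  const res = await fetch(capabilityUrl, {
    headers: { "X-Consumer-Origin": "https://site-b.example" },
  });
  return res.text();
}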


 I have no problem with offering a feature like UM in CORS. My objection is
 to making the simple cases non-trivial, e.g. by never including Origin
 headers in any requests.


Personally I'm not actually arguing against standardizing CORS.  What I'm
arguing is that UM is the natural solution for software designed in an
object-oriented, loosely-coupled way.  I'm also arguing that loosely-coupled
object-oriented systems are more powerful and better for users.


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 4:41 PM, Ian Hickson i...@hixie.ch wrote:

 What one liner are your proposing that would solve the problem for XBL,
 XML data, videos, etc, all at once?


Are we debating about the state of existing infrastructure, or theoretically
ideal infrastructure? Honest question.  .htaccess is an example of existing
infrastructure built around the ACL approach.  If no similarly-easy-to-use
capability-based infrastructure exists, that doesn't necessarily mean ACLs
are theoretically better.  But the thread subject line seems to suggest
we're more interested in theory.


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 17 Dec 2009, Tyler Close wrote:
  X-FRAME-OPTIONS: *.example.com
  Access-Control-Allow-Origin: *

 Why is this better than:

   Access-Control-Allow-Origin: *.example.com

 ...?


I think Tyler missed the mark on this one.  X-FRAME-OPTIONS looks to me like the same
thing as CORS, except that it doesn't pretend to provide security.

In a capability-based world, when the user accesses your site, you'd send
back the HTML together with a set of capabilities to access other resources
on the site.  These capabilities would expire after some period of time.
Want to allow one particular other site to use your resources as well?
Then give them the capability to generate capabilities to your resources --
e.g. by giving them a secret key which they can hash together with the
current time.
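
A minimal sketch of that key-plus-time scheme, assuming HMAC-SHA256 via
Node's crypto module (the secret, lifetime, and URL shape are all
illustrative):

import { createHmac, timingSafeEqual } from "crypto";

const SHARED_SECRET = "issued-to-the-partner-site-out-of-band";

// The partner site mints an expiring capability URL for a resource.
function mintCapability(resource: string, now = Date.now()): string {
  const expires = now + 5 * 60_000; // five-minute lifetime
  const mac = createHmac("sha256", SHARED_SECRET)
    .update(`${resource}:${expires}`)
    .digest("hex");
  return `https://site-a.example${resource}?expires=${expires}&mac=${mac}`;
}

// Your server recomputes the MAC to verify; no per-grant state needed.
function verify(resource: string, expires: number, mac: string): boolean {
  if (Date.now() > expires) return false;
  const expected = createHmac("sha256", SHARED_SECRET)
    .update(`${resource}:${expires}`)
    .digest("hex");
  return (
    mac.length === expected.length &&
    timingSafeEqual(Buffer.from(mac), Buffer.from(expected))
  );
}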

I know, your response is: "That's way more complicated than my one-line
.htaccess change!"

But your one-line .htaccess change is leveraging a great deal of
infrastructure already built around that model.  With the right
capability-based infrastructure, the capability-based solution would be
trivial too.  We don't have this infrastructure.  This is a valid concern.
Unfortunately, few people are working to build this infrastructure because
most people would rather focus on the established model, simply because it
is established.  So we have a chicken-and-egg problem.

You probably also question the effect of my solution on caching, or other
technical issues like that.  I could explain how I'd deal with them, but
then you'd find finer details to complain about, and so on.  I'm not sure
the conversation would benefit anyone, so let's call it a draw.

On Thu, Dec 17, 2009 at 5:56 PM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 17 Dec 2009, Kenton Varda wrote:
  On Thu, Dec 17, 2009 at 12:58 PM, Ian Hickson i...@hixie.ch wrote:
  
   With CORS, I can trivially (one line in the .htaccess file for my
   site) make sure that no sites can use XBL files from my site other
   than my sites. My sites don't do any per-user tracking; doing that
   would involve orders of magnitude more complexity.
 
  I was debating about one particular use case, and this one that you're
  talking about now is completely different.  I can propose a different
  solution for this case, but I think someone will just change the use
  case again to make my new solution look silly, and we'll go in circles.

 The advantage of CORS is that it addresses all these use cases well.


There are perfectly good cap-based solutions as well.  But every
capability-based equivalent to an existing ACL-based solution is obviously
not going to be identical, and thus will have some trade-offs.  Usually
these trade-offs can be reasonably tailored to fit any particular real-world
use case.  But if you're bent on a solution that provides *exactly* what the
ACL solution provides (ignoring real-world considerations), the solution
usually won't be pretty.

Of course, when presented with a different way of doing things, it's always
easier to see the negative trade-offs than to see the positives, which is
why most debates about capability-based security seem to come down to people
nit-picking about the perceived disadvantages of caps while ignoring the
benefits.  I think this is what makes Mark so grumpy.  :/


   How can an origin voluntarily identify itself in an unspoofable
   fashion? Without running scripts?
 
  It can't.

 I don't understand how it can solve the problem then. If it's trivial for
 a site to spoof another, then the use case isn't solved.

  My point was that for simple non-security-related statistics gathering,
  spoofing is not a big concern.

 None of the use cases I've mentioned involve statistics gathering.


It was Maciej that brought up this use case.  I was responding to him.


   I have no problem with offering a feature like UM in CORS. My
   objection is to making the simple cases non-trivial, e.g. by never
   including Origin headers in any requests.
 
  Personally I'm not actually arguing against standardizing CORS.  What
  I'm arguing is that UM is the natural solution for software designed in
  an object-oriented, loosely-coupled way.

 CORS is a superset of UM; I have no objection to CORS-enabled APIs
 exposing the UM subset (i.e. allowing scripts to opt out of sending the
 Origin header). However, my understanding is that the UM proposal is to
 explicitly not allow Origin to ever be sent, which is why there is a
 debate. (If the question were just "should we add a feature to CORS to
 allow Origin to not be sent", then I think the debate would have concluded
 without much argument long ago.)


I think the worry is about the chicken-and-egg problem I mentioned above:
we justify the standard based on the existing infrastructure, but new
infrastructure will be built based on the direction in the standards.  Mark,
Tyler, and I believe the web would be better off if most things were
capability-based.

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Kenton Varda
Without the benefit of full context (I only started following this list
recently), I'd like cautiously to suggest that the UM solution to Ian's
challenge seems awkward because the challenge is itself a poor design, and
UM tends to be more difficult to work with when used to implement designs
that are poor in the first place.

Specifically -- and note that I'm not sure I follow all the details, so I
could be missing things -- it seems that the challenge calls for site B to
be hard-coded to talk to site A.  In a better world, site B would be able to
talk to any site that provides feeds in the desired format.  In order for
this to be possible, the user obviously has to explicitly hook up site B
to site A somehow.  Ideally, this hook-up act itself would additionally
imply permission for site B to access the user's data on site A.  The
natural way to accomplish this would be for an unguessable access token to
be communicated from site A to site B as part of the hook-up step.  Once
such a mechanism exists, UM is obviously the best way for site B to actually
access the data -- CORS provides no benefit at this point.

So how does this hook-up happen?  This is mostly a UI question.  One way
that could work with current browsers would be for the user to copy/paste an
unguessable URL representing the capability from one site to the other, but
this is obviously a poor UI.  Instead, I think what we need is some sort of
browser support for establishing these connections.  This is something I've
already been talking about on the public-device-apis list, as I think the
same UI should be usable to hook-up web apps with physical devices
connected to the user's machine.

So imagine, for example, that when the user visits site A originally, the
site can somehow tell the browser "I would like to provide a capability
implementing the com.example.Feed interface.  The URL for this capability is
[something unguessable]."  Then, when the user visits site B, it has a
"socket" for an object implementing com.example.Feed.  When the user
clicks on this socket, the browser pops up a list of com.example.Feed
implementations that it knows about, such as the one from site A.  The user
can then click on that one and thus hook up the sites.
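
In code, the hook-up might look something like the following. To be
clear, this is purely hypothetical; no such browser API exists today
(TypeScript):

interface CapabilityOffer {
  interfaceName: string; // e.g. "com.example.Feed"
  url: string;           // the unguessable capability URL
}

// Hypothetical browser-provided functions; declared here, not implemented.
declare function registerCapability(offer: CapabilityOffer): void;
declare function requestCapability(interfaceName: string): Promise<string>;

// Site A advertises its capability to the browser...
registerCapability({
  interfaceName: "com.example.Feed",
  url: "https://site-a.example/feed/unguessable-token",
});

// ...and on site B, clicking the "socket" lets the user pick site A's feed.
async function connectFeed(): Promise<string> {
  return requestCapability("com.example.Feed");
}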

Obviously there are many issues to work through before this sort of thing
would be possible.  Ian proposed a new "device" tag on public-device-apis
yesterday -- it serves as the "socket" in my example above.  But how a
device list gets populated (and the security implications of this) has yet
to be discussed much at all (as far as I know).

I just wanted to propose this as the "ideal world."  In the ideal world,
UM is clearly the right standard.  I worry that CORS, if standardized, would
encourage developers to go down the path of hard-coding which sites they
talk to, since that's the approach that CORS makes easy and UM does not.  In
the long run, I think this would be bad for the web, since it would mean
less interoperability between apps and more vendor lock-in.

That said, I think it's safe to say that if you *want* to hard-code the list
of sites that you can interoperate with, it's easier to do with CORS than
with UM.

On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com wrote:

 On Mon, Dec 14, 2009 at 11:35 AM, Maciej Stachowiak m...@apple.com wrote:
 
  On Dec 14, 2009, at 10:44 AM, Tyler Close wrote:
 
  On Mon, Dec 14, 2009 at 10:16 AM, Adam Barth w...@adambarth.com wrote:
 
  On Mon, Dec 14, 2009 at 5:53 AM, Jonathan Rees 
 j...@creativecommons.org
  wrote:
 
  The only complaint I know of regarding UM is that it is so complicated
  to use in practice that it will not be as enabling as CORS
 
  Actually, Tyler's UM protocol requires the user to confirm message 5
  to prevent a CSRF attack.  Maciej's CORS version of the protocol
  requires no such user confirmation.  I think it's safe to say that
  asking the user to confirm security-critical operations is not a good
  approach.
 
  For Ian Hickson's challenge problem, I came up with a design that does
  not require any confirmation, or any other user interaction. See:
 
  http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1232.html
 
  That same design can be used to solve Maciej's challenge problem.
 
  I see three ways it wouldn't satisfy the requirements given for my CORS
  example:
 
  1) Fails AJAX UI requirement in the grant phase, since a form post is
  required.

 I thought "AJAX UI" just meant no full page reload. The grant phase
 could be done in an iframe.

  2) The permission grant is initiated and entirely driven by Site B (the
  service consumer). Thus Site A (the service provider in this case) has no
  way to know that the request to grant access is a genuine grant from the
  user.
 
  3) When Site A receives the request from Site B, there is no indication
 of
  what site initiated the communication (unless the request from B is
 expected
  to come with an Origin header). How does it even know it's supposed to
  

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Kenton Varda
On Wed, Dec 16, 2009 at 9:25 PM, Ian Hickson i...@hixie.ch wrote:

 A concrete example of the example I was talking about is Google's Finance
 GData API. There's a fixed URL on A (Google's site) that represents my
 finance information. There's a site B (my portal page) that is hard-coded
 to fetch that data and display it. I'm logged into A, I'm not logged into
 B, and I've told A (Google) that it's ok to give B access to my financial
 data. Today, this involves a complicated set of bouncing back and forth.
 With CORS, it could be done with zero server-side scripting -- the file
 could just be statically generated with an HTTP header that grants
 permission to my portal to read the page.

 ...

 As a user, in both the finance case and XBL case, I don't want any UI. I
 just want it to Work.


Yet you must go through a UI on site A to tell it that it's OK to give your
data to B.  Obviously this step cannot be altogether eliminated.  What I am
suggesting is a slightly different UI which I think would be no more
difficult to use, but which would avoid the need to hard-code.

In fact, I think my UI is easier for users, because in all likelihood, when
you decide "I want site B to access my data from site A," you are probably
already on site B at the time.  In your UI, you have to navigate back to A
in order to grant permission to B (and doesn't that also require
copy-pasting the host name?).  In my UI, you don't have to leave site B to
make the connection, because the browser remembers that site A provided the
desired capability and thus can present the option to you directly.

The downside is that I don't know how to implement my UI in today's
browsers.