Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-23 Thread Anne van Kesteren

On Tue, 22 Dec 2009 02:48:42 +0100, Kenton Varda ken...@google.com wrote:

It *is* a problem today with XMLHttpRequest.  This is, for example, one
reason why we cannot host arbitrary HTML documents uploaded by users on
google.com -- a rather large inconvenience!  If it were feasible, we'd be
arguing for removing this ability from XMLHttpRequest.  However,  
removing a feature that exists is generally not possible; better to  
avoid adding it in the first place.


There are plenty of other features that already make that impossible.



With CORS, the problems would be worse, because now you not only have to
ensure that your own server is trustworthy and free of CSRF, but also  
the servers of everyone you allow to access your resource.  Problems are  
likely to multiply exponentially.


Isn't this also true for the non-CORS solution? A secret token can be  
stolen as well.



I'm personally not really married to either approach, but it is still not  
clear to me how we can make use of UM to address the use cases CORS  
has. And for the cases where UM can replace it, it appears to be much more  
complicated, which I do not think is a good sign if we expect authors to  
make mistakes.


I tried to clarify the use cases for CORS here (if more detail is needed  
please let me know):


  http://dev.w3.org/2006/waf/access-control/#use-cases

It would be nice to have sufficient detail on how each of these would work  
with UM so we can evaluate things better.



--
Anne van Kesteren
http://annevankesteren.nl/



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Dec 2009, Tyler Close wrote:

 Starting from the X-FRAME-OPTIONS proposal, say the response header
 also applies to all embedding that the page renderer does. So it also
 covers img, video, etc. In addition to the current values, the
 header can also list hostname patterns that may embed the content. So,
 in your case:

 X-FRAME-OPTIONS: *.example.com
 Access-Control-Allow-Origin: *

 Which means anyone can access this content, but sites outside
 *.example.com should host their own copy, rather than framing or
 otherwise directly embedding my copy.

 Why is this better than:

   Access-Control-Allow-Origin: *.example.com

X-FRAME-OPTIONS is a rendering instruction and
Access-Control-Allow-Origin is part of an access-control mechanism.
Combining the two in the way you propose creates an access-control
mechanism that is inherently vulnerable to CSRF-like attacks, because
it determines read access to bits based on the identity of the
requestor.

Using your example, assume an XML resource sitting on an intranet
server at resources.example.com. The author of this resource is trying
to restrict access to the XML data to only other intranet resources
hosted at *.example.com. The author believes this can be accomplished
by simply setting the Access-Control-Allow-Origin header as you've
shown above, but that's not strictly true. Every page hosted on
*.example.com is now a potential target for a CSRF-like attack that
reveals the secret data. For example, consider a page at
victim.example.com that uses a third party storage service. To copy
data, the page does a GET on the location of the existing data,
followed by a POST to another location with the data to be copied. If
the storage service says the location of the existing data is the URL
for the secret XML data (http://resources.example.com/...), then the
victim page suffers a CSRF-like attack that exposes the secret data.
The victim page may know nothing of the existence or purpose of the
secret XML resource.
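The attack can be sketched as follows. This is a hypothetical simulation, not any real API: `intranet_server` and `storage_copy` are illustrative stand-ins for the server policy and the victim page's copy routine.

```python
# Hypothetical sketch of the CSRF-like attack described above. All names
# (intranet_server, storage_copy) are illustrative.

def intranet_server(url, request_origin):
    """resources.example.com: serves the secret XML to any *.example.com
    origin, as an Access-Control-Allow-Origin: *.example.com policy would."""
    if request_origin.endswith(".example.com"):
        return "<secret-xml/>"
    raise PermissionError("origin not allowed")

def storage_copy(source_url, fetch, own_origin):
    """victim.example.com's copy routine: GET the source, then POST it
    elsewhere. The third-party storage service (the attacker) controls
    source_url."""
    data = fetch(source_url, own_origin)  # the GET carries the victim's origin
    return data                           # the secret bits then leave in the POST

# The attacker points the copy routine at the protected resource:
leaked = storage_copy("http://resources.example.com/secret.xml",
                      intranet_server, "victim.example.com")
```

The server never sees the attacker's origin at all; it sees only the trusted victim's, which is exactly why identity-of-requestor checks fail here.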

To avoid this pitfall, we instead design the access-control mechanism
to not create these traps. With the bogus technique removed, the
author of a protected resource can now choose amongst techniques that
actually work.

To address your bandwidth stealing concerns, and other similar issues,
we define X-FRAME-OPTIONS so that a resource author can inform the
browser's renderer of these preferences. So your XBL resource can
declare that it was only expecting to be applied to another resource
from *.example.com. The browser can detect this misconfiguration and
raise an error notification.

By separating the two mechanisms, we make the access-control model
clear and correct, while still providing the rendering control you
desired.
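Under this separation, a server response for a publicly readable but embedding-restricted resource might carry both headers (a minimal sketch; the header names are the ones proposed in this thread, not a finalized spec):

```python
# Sketch of the response headers under the two-header design:
# read access open to everyone, embedding restricted to *.example.com.

def protected_resource_headers():
    return {
        "Access-Control-Allow-Origin": "*",   # anyone may read the raw bits
        "X-FRAME-OPTIONS": "*.example.com",   # only *.example.com may embed
    }

headers = protected_resource_headers()
```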

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Ian Hickson
On Mon, 21 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Dec 2009, Tyler Close wrote:
 
  Starting from the X-FRAME-OPTIONS proposal, say the response header
  also applies to all embedding that the page renderer does. So it also
  covers img, video, etc. In addition to the current values, the
  header can also list hostname patterns that may embed the content. So,
  in your case:
 
  X-FRAME-OPTIONS: *.example.com
  Access-Control-Allow-Origin: *
 
  Which means anyone can access this content, but sites outside
  *.example.com should host their own copy, rather than framing or
  otherwise directly embedding my copy.
 
  Why is this better than:
 
    Access-Control-Allow-Origin: *.example.com
 
 X-FRAME-OPTIONS is a rendering instruction and
 Access-Control-Allow-Origin is part of an access-control mechanism.
 Combining the two in the way you propose creates an access-control
 mechanism that is inherently vulnerable to CSRF-like attacks, because
 it determines read access to bits based on the identity of the
 requestor.
 
 Using your example, assume an XML resource sitting on an intranet
 server at resources.example.com. The author of this resource is trying
 to restrict access to the XML data to only other intranet resources
 hosted at *.example.com. The author believes this can be accomplished
 by simply setting the Access-Control-Allow-Origin header as you've
 shown above, but that's not strictly true. Every page hosted on
 *.example.com is now a potential target for a CSRF-like attack that
 reveals the secret data. For example, consider a page at
 victim.example.com that uses a third party storage service. To copy
 data, the page does a GET on the location of the existing data,
 followed by a POST to another location with the data to be copied. If
 the storage service says the location of the existing data is the URL
 for the secret XML data (http://resources.example.com/...), then the
 victim page suffers a CSRF-like attack that exposes the secret data.
 The victim page may know nothing of the existence or purpose of the
 secret XML resource.
 
 To avoid this pitfall, we instead design the access-control mechanism
 to not create these traps. With the bogus technique removed, the
 author of a protected resource can now choose amongst techniques that
 actually work.
 
 To address your bandwidth stealing concerns, and other similar issues,
 we define X-FRAME-OPTIONS so that a resource author can inform the
 browser's renderer of these preferences. So your XBL resource can
 declare that it was only expecting to be applied to another resource
 from *.example.com. The browser can detect this misconfiguration and
 raise an error notification.
 
 By separating the two mechanisms, we make the access-control model
 clear and correct, while still providing the rendering control you
 desired.

I don't understand the difference between <opaque string> <origin> 
<opaque string> and <opaque string> <origin>.

With XBL in particular, what we need is something that decides whether a 
page can access the DOM of the XBL file or not, on a per-origin basis. 
Whether the magic string is:

   X-FRAME-OPTIONS: *.example.com
   Access-Control-Allow-Origin: *

...or:

   X-FRAME-OPTIONS: *.example.com

...or:

   Access-Control-Allow-Origin: *.example.com

...or:

   X: *.example.com

...or some other sequence of bytes doesn't seem to make any difference to 
any actual concrete security. There's only one mechanism here. Either 
access is granted to that origin, or it isn't.
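Whichever header name is used, the browser-side check Ian describes reduces to matching the requesting origin's host against a pattern list. A minimal sketch (the function name is illustrative):

```python
from fnmatch import fnmatch

# The single mechanism: is this origin's host granted access by the pattern?
def origin_allowed(origin_host, pattern):
    return pattern == "*" or fnmatch(origin_host, pattern)

origin_allowed("victim.example.com", "*.example.com")  # True
origin_allowed("evil.example.net", "*.example.com")    # False
```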

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Mon, Dec 21, 2009 at 2:16 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 21 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Dec 2009, Tyler Close wrote:
 
  Starting from the X-FRAME-OPTIONS proposal, say the response header
  also applies to all embedding that the page renderer does. So it also
  covers img, video, etc. In addition to the current values, the
  header can also list hostname patterns that may embed the content. So,
  in your case:
 
  X-FRAME-OPTIONS: *.example.com
  Access-Control-Allow-Origin: *
 
  Which means anyone can access this content, but sites outside
  *.example.com should host their own copy, rather than framing or
  otherwise directly embedding my copy.
 
  Why is this better than:
 
    Access-Control-Allow-Origin: *.example.com

 X-FRAME-OPTIONS is a rendering instruction and
 Access-Control-Allow-Origin is part of an access-control mechanism.
 Combining the two in the way you propose creates an access-control
 mechanism that is inherently vulnerable to CSRF-like attacks, because
 it determines read access to bits based on the identity of the
 requestor.

 Using your example, assume an XML resource sitting on an intranet
 server at resources.example.com. The author of this resource is trying
 to restrict access to the XML data to only other intranet resources
 hosted at *.example.com. The author believes this can be accomplished
 by simply setting the Access-Control-Allow-Origin header as you've
 shown above, but that's not strictly true. Every page hosted on
 *.example.com is now a potential target for a CSRF-like attack that
 reveals the secret data. For example, consider a page at
 victim.example.com that uses a third party storage service. To copy
 data, the page does a GET on the location of the existing data,
 followed by a POST to another location with the data to be copied. If
 the storage service says the location of the existing data is the URL
 for the secret XML data (http://resources.example.com/...), then the
 victim page suffers a CSRF-like attack that exposes the secret data.
 The victim page may know nothing of the existence or purpose of the
 secret XML resource.

 To avoid this pitfall, we instead design the access-control mechanism
 to not create these traps. With the bogus technique removed, the
 author of a protected resource can now choose amongst techniques that
 actually work.

 To address your bandwidth stealing concerns, and other similar issues,
 we define X-FRAME-OPTIONS so that a resource author can inform the
 browser's renderer of these preferences. So your XBL resource can
 declare that it was only expecting to be applied to another resource
 from *.example.com. The browser can detect this misconfiguration and
 raise an error notification.

 By separating the two mechanisms, we make the access-control model
 clear and correct, while still providing the rendering control you
 desired.

 I don't understand the difference between <opaque string> <origin>
 <opaque string> and <opaque string> <origin>.

 With XBL in particular, what we need is something that decides whether a
 page can access the DOM of the XBL file or not, on a per-origin basis.
 Whether the magic string is:

   X-FRAME-OPTIONS: *.example.com
   Access-Control-Allow-Origin: *

 ...or:

   X-FRAME-OPTIONS: *.example.com

 ...or:

   Access-Control-Allow-Origin: *.example.com

 ...or:

   X: *.example.com

 ...or some other sequence of bytes doesn't seem to make any difference to
 any actual concrete security. There's only one mechanism here. Either
 access is granted to that origin, or it isn't.

No, there is a difference in access-control between the two designs.

In the two header design:
1) An XHR GET of the XBL file data by example.org *is* allowed.
2) An xbl import of the XBL data by example.org triggers a rendering error.

In the one header design:
1) An XHR GET of the XBL file data by example.org is *not* allowed.
2) An xbl import of the XBL data by example.org triggers a rendering error.

Under the two header design, everyone has read access to the raw bits
of the XBL file. The one header design makes an empty promise to
protect read access to the XBL file.
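The behavioral difference Tyler enumerates can be written out as a table of outcomes. A hypothetical simulation (function names and the `access` labels are illustrative, not an API):

```python
# Outcomes for a request by example.org under each design.
# Returns True if the access succeeds, False if it is refused / errors.

def two_header_design(access):
    # Access-Control-Allow-Origin: *  plus  X-FRAME-OPTIONS: *.example.com
    if access == "xhr-get":
        return True       # raw read allowed for every origin
    if access == "xbl-import":
        return False      # embedding from example.org -> rendering error

def one_header_design(access):
    # Access-Control-Allow-Origin: *.example.com only
    if access == "xhr-get":
        return False      # read refused for example.org (but see the
                          # CSRF-like attack: this refusal can be bypassed)
    if access == "xbl-import":
        return False      # embedding also refused
```

The two designs differ only in the first row, which is exactly the read access the one-header design claims, but fails, to protect.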

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Ian Hickson
On Mon, 21 Dec 2009, Tyler Close wrote:
 
 No, there is a difference in access-control between the two designs.
 
 In the two header design:
 1) An XHR GET of the XBL file data by example.org *is* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

That's a bad design. It would make people think they had secured the file 
when they had not.

Security should be consistent across everything.


 In the one header design:
 1) An XHR GET of the XBL file data by example.org is *not* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

That's what I want.


 Under the two header design, everyone has read access to the raw bits
 of the XBL file.

That's a bad thing.


 The one header design makes an empty promise to protect read access to 
 the XBL file.

How is it an empty promise?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Mon, Dec 21, 2009 at 2:39 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 21 Dec 2009, Tyler Close wrote:

 No, there is a difference in access-control between the two designs.

 In the two header design:
 1) An XHR GET of the XBL file data by example.org *is* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

 That's a bad design. It would make people think they had secured the file
 when they had not.

The headers explicitly say that a read request from any Origin is allowed:

Access-Control-Allow-Origin: *

The above syntax is the one CORS came up with. How could it be made clearer?

 Security should be consistent across everything.

It is. All Origins have read access. The data just renders in a
different way depending on if/how it is embedded.

 In the one header design:
 1) An XHR GET of the XBL file data by example.org is *not* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

 That's what I want.

What you want, and the mechanism you propose to get it, are at odds.
I've described the CSRF-like attack multiple times. The access control
model you propose doesn't actually work.

To actually control access to the XBL file data you need to use
something like the secret token designs we've discussed.
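One standard way (not specified in this thread; the URL shape is illustrative) to mint the kind of unguessable capability URL those designs rely on:

```python
import secrets

# Mint an unguessable capability URL: the path component is the secret token.
def mint_capability_url(base="https://resources.example.com/xbl/"):
    token = secrets.token_urlsafe(32)   # ~256 bits of entropy, URL-safe
    return base + token
```

Access is then granted to whoever presents the URL, independent of origin, which is what makes delegation possible and origin-spoofing irrelevant.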

 Under the two header design, everyone has read access to the raw bits
 of the XBL file.

 That's a bad thing.

In the scenario you described, everyone *does* have read access to
the raw bits. Anyone can just direct their browser to example.org and
save the data. In your scenario, we were just trying to discourage
bandwidth stealing.

 The one header design makes an empty promise to protect read access to
 the XBL file.

 How is it an empty promise?

See above.

We don't seem to be making any progress at understanding each other,
so I'm going to give up on this thread until I see some signs of
progress. Thanks for your time.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Ian Hickson
On Mon, 21 Dec 2009, Tyler Close wrote:
 On Mon, Dec 21, 2009 at 2:39 PM, Ian Hickson i...@hixie.ch wrote:
  On Mon, 21 Dec 2009, Tyler Close wrote:
 
  No, there is a difference in access-control between the two designs.
 
  In the two header design:
  1) An XHR GET of the XBL file data by example.org *is* allowed.
  2) An xbl import of the XBL data by example.org triggers a rendering 
  error.
 
  That's a bad design. It would make people think they had secured the file
  when they had not.
 
 The headers explicitly say that a read request from any Origin is allowed:
 
 Access-Control-Allow-Origin: *
 
 The above syntax is the one CORS came up with. How could it be made clearer?

By not having two headers, but just having one.


  Security should be consistent across everything.
 
 It is. All Origins have read access. The data just renders in a
 different way depending on if/how it is embedded.

I am not interested in this kind of distinction. I think we should only 
have one distinction -- either an origin can use the data, or it can't.


  In the one header design:
  1) An XHR GET of the XBL file data by example.org is *not* allowed.
  2) An xbl import of the XBL data by example.org triggers a rendering 
  error.
 
  That's what I want.
 
 What you want, and the mechanism you propose to get it, are at odds.
 I've described the CSRF-like attack multiple times.

Sure, you can misuse Origin in complicated scenarios to introduce CSRF 
attacks. But XBL2 doesn't have those scenarios, and nor do video, 
img+canvas, and any number of other options. Most XHR2 uses don't 
involve multiple sites either. We shouldn't make _everything_ far more 
complicated just because there is a way to misuse the feature in a case 
that is itself already complicated.


 The access control model you propose doesn't actually work.

It works fine for XBL2, Web Sockets, video, img+canvas, sharing 
data across multiple servers in one environment, etc.


 To actually control access to the XBL file data you need to use 
 something like the secret token designs we've discussed.

I'm sorry but it's simply a non-starter to have to use secret tokens for 
embedding XBL resources. That's orders of magnitude more complexity than 
most authors will be able to deal with.

There are no scripts involved in these scenarios. It would simply lead to 
the secret tokens being baked into public resources, which would make it 
trivial for them to be forged, which defeats the entire purpose.


  Under the two header design, everyone has read access to the raw bits 
  of the XBL file.
 
  That's a bad thing.
 
 In the scenario you described, everyone *does* have read access to the 
 raw bits.

Only people behind the intranet, or with the right cookies, or with the 
right HTTP authentication, or with the right IP addresses. That's not 
everyone.


 In your scenario, we were just trying to discourage bandwidth stealing.

I am trying to do many things. Bandwidth stealing is one. Securing 
semi-public resources is another. Securing resources behind intranets is 
yet another. These are all use cases that CORS makes trivial and which UM 
makes incredibly complicated.


Personally the more I discuss this the more convinced I am becoming that 
CORS is the way to go.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Kenton Varda
On Mon, Dec 21, 2009 at 5:35 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Dec 21, 2009 at 5:17 PM, Kenton Varda ken...@google.com wrote:
  The problem we're getting at is that CORS is being presented as a
  security mechanism, when in fact it does not provide security.  Yes,
  CORS is absolutely easier to use than UM in some cases -- I don't think
  anyone is going to dispute that.  The problem is that the security it
  provides in those cases simply doesn't exist unless you can ensure that
  no resource on *any* of your allowed origins can be tricked into
  fetching your protected resource for a third party.  In practice this
  will be nearly impossible to ensure except in the most simple cases.

 Why isn't this a big problem today for normal XMLHttpRequest?  Normal
 XMLHttpRequest is just like a CORS deployment in which every server
 has a policy of allowing its own origin.


It *is* a problem today with XMLHttpRequest.  This is, for example, one
reason why we cannot host arbitrary HTML documents uploaded by users on
google.com -- a rather large inconvenience!  If it were feasible, we'd be
arguing for removing this ability from XMLHttpRequest.  However, removing a
feature that exists is generally not possible; better to avoid adding it in
the first place.

With CORS, the problems would be worse, because now you not only have to
ensure that your own server is trustworthy and free of CSRF, but also the
servers of everyone you allow to access your resource.  Problems are likely
to multiply exponentially.


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-18 Thread Ian Hickson
On Thu, 17 Dec 2009, Kenton Varda wrote:
 
 With the right capability-based infrastructure, the capability-based 
 solution would be trivial too.  We don't have this infrastructure.  
 This is a valid concern.

It's not so much that we don't have one, so much as nobody is proposing 
one... I'd be happy if there was a concrete proposal on the table that 
made things as simple as CORS, supported the Web's key use cases as 
easily, and that the browser vendors were all ready to implement.


 You probably also question the effect of my solution on caching, or 
 other technical issues like that.  I could explain how I'd deal with 
 them, but then you'd find finer details to complain about, and so on.

If you're saying that a caps-based infrastructure would have insoluble 
problems, then that makes it a non-starter. If not, then someone who 
thinks this is the right way to go should write up the spec on how to do 
it, and we should iterate it until all the finer details are fixed, just 
like we do with all specs.


 I'm not sure the conversation would benefit anyone, so let's call it a 
 draw.

I'm not in this to win arguments, I'm in this to improve the Web. I'd be 
more than happy to lose if we got something out of it that didn't have 
technical problems. If there's no concrete proposal on the table that 
makes things as simple as CORS, supports the Web's key use cases as 
easily, and that the browser vendors are all ready to implement, then the 
conversation can indeed not benefit anyone.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-18 Thread Kenton Varda
On Fri, Dec 18, 2009 at 12:04 AM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 17 Dec 2009, Kenton Varda wrote:
 
  With the right capability-based infrastructure, the capability-based
  solution would be trivial too.  We don't have this infrastructure.
  This is a valid concern.

 It's not so much that we don't have one, so much as nobody is proposing
 one... I'd be happy if there was a concrete proposal on the table that
 made things as simple as CORS, supported the Web's key use cases as
 easily, and that the browser vendors were all ready to implement.


I hope to work on that.  A lot of it is more a question of software than
standards -- e.g. having a web server which provides easy access to
capability-based design patterns.


  You probably also question the effect of my solution on caching, or
  other technical issues like that.  I could explain how I'd deal with
  them, but then you'd find finer details to complain about, and so on.

 If you're saying that a caps-based infrastructure would have insoluble
 problems, then that makes it a non-starter.


No, I think all the problems are solvable, but the time we might spend
debating them is unbounded.


 If not, then someone who
 thinks this is the right way to go should write up the spec on how to do
 it, and we should iterate it until all the finer details are fixed, just
 like we do with all specs.


  I'm not sure the conversation would benefit anyone, so let's call it a
  draw.

 I'm not in this to win arguments, I'm in this to improve the Web. I'd be
 more than happy to lose if we got something out of it that didn't have
 technical problems. If there's no concrete proposal on the table that
 makes things as simple as CORS, supports the Web's key use cases as
 easily, and that the browser vendors are all ready to implement, then the
 conversation can indeed not benefit anyone.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-18 Thread Ian Hickson
On Fri, 18 Dec 2009, Kenton Varda wrote:
 
  If you're saying that a caps-based infrastructure would have 
  insoluble problems, then that makes it a non-starter.
 
 No, I think all the problems are solvable, but the time we might spend 
 debating them is unbounded.

If the time it takes to create an acceptable solution is bounded, then the 
time it takes to discuss it would also be bounded. Discussion isn't going 
to continue past the point where the solution is acceptable to the people 
debating.

I look forward to seeing a proposal. I recommend studying the lists of use 
cases that were written up when XHR2 and CORS were being designed.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
Somehow I suspect all this has been said many times before...

On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak m...@apple.com wrote:

 CORS would provide at least two benefits, using the exact protocol you'd
 use with UM:

 1) It lets you know what site is sending the request; with UM there is no
 way for the receiving server to tell. Site A may wish to enforce a policy
 that any other site that wants access has to request it individually. But
 with UM, there is no way to prevent Site B from sharing its unguessable URL
 to the resource with another site, or even to tell that Site B has done so.
 (I've seen papers cited that claim you can do proper logging using an
 underlying capabilities mechanism if you do the right things on top of it,
 but Tyler's protocol does not do that; and it is not at all obvious to me
 how to extend such results to tokens passed over the network, where you
 can't count on a type system to enforce integrity at the endpoints like you
 can with a system all running in a single object capability language.)


IMO, this isn't useful information.  If Alice is a user at my site, and I
hand Alice a capability to access her data from my site, it should not make
a difference to me whether Alice chooses to access that data using Bob's
site or Charlie's site, any more than it makes a difference to me whether
Alice chooses to use Firefox or Chrome.  Saying that Alice is only allowed
to access her data using Bob's site but not Charlie's is analogous to saying
she can only use approved browsers.  This provides a small amount of
security at the price of greatly annoying users and stifling innovation
(think mash-ups).

Perhaps, though, you're suggesting that users should be able to edit the
whitelist that is applied to their data, in order to provide access to new
sites?  But this seems cumbersome to me -- both to the user, who needs to
manage this whitelist, and to app developers, who can no longer delegate
work to other hosts.

(Of course, if you want to know the origin for non-security reasons (e.g. to
log usage for statistical purposes, or deal with compatibility issues) then
you can have the origin voluntarily identify itself, just as browsers
voluntarily identify themselves.)


 2) It provides additional defense if the unguessable URL is guessed,
 either because of the many natural ways URLs tend to leak, or because of a
 mistake in the algorithm that generates unguessable URLs, or because either
 Site B or Site A unintentionally disclose it to a third party. By using an
 unguessable URL *and* checking Origin and Cookie, Site A would still have
 some protection in this case. An attacker would have to not only break the
 security of the secret token but would also need to manage a confused
 deputy type attack against Site B, which has legitimate access, thus
 greatly narrowing the scope of the vulnerability. You would need two
 separate vulnerabilities, and an attacker with the opportunity to exploit
 both, in order to be vulnerable to unauthorized access.
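The defense-in-depth Maciej describes can be sketched as a server-side check requiring both factors. This is an illustrative simulation, not a real framework API; `expected_token` and `allowed_origins` stand in for whatever server state holds the grant:

```python
import hmac

# Require BOTH the unguessable token AND an allowed Origin, so a single
# leak or bug is not enough for unauthorized access.
def authorize(token, origin, expected_token, allowed_origins):
    token_ok = hmac.compare_digest(token, expected_token)  # constant-time compare
    origin_ok = origin in allowed_origins
    return token_ok and origin_ok

authorize("s3cret", "https://siteb.example", "s3cret",
          {"https://siteb.example"})   # True: token and origin both check out
authorize("s3cret", "https://evil.example", "s3cret",
          {"https://siteb.example"})   # False: leaked token alone is not enough
```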


Given the right UI, a capability URL should be no more leak-prone than a
cookie.  Sure, we don't want users to ever actually see capability URLs
since they might then choose to copy/paste them into who knows where, but
it's quite possible to hide the details behind the scenes, just like we hide
cookie data.

So, I don't think this additional defense is really worth much, unless you
are arguing that cookies are insecure for the same reasons.  (Perhaps we
should only allow users to use approved browsers because other browsers
might leak cookie data?)

And again, this additional defense has great costs, as described above.

So, no, I still think CORS provides no benefit for the protocol I described.
 It may seem to provide benefits, but the benefits actually cost far more
than they are worth.


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Maciej Stachowiak


On Dec 17, 2009, at 1:42 AM, Kenton Varda wrote:


Somehow I suspect all this has been said many times before...

On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak m...@apple.com  
wrote:
CORS would provide at least two benefits, using the exact protocol  
you'd use with UM:


1) It lets you know what site is sending the request; with UM there  
is no way for the receiving server to tell. Site A may wish to  
enforce a policy that any other site that wants access has to  
request it individually. But with UM, there is no way to prevent  
Site B from sharing its unguessable URL to the resource with another  
site, or even to tell that Site B has done so. (I've seen papers  
cited that claim you can do proper logging using an underlying  
capabilities mechanism if you do the right things on top of it, but  
Tyler's protocol does not do that; and it is not at all obvious to  
me how to extend such results to tokens passed over the network,  
where you can't count on a type system to enforce integrity at the  
endpoints like you can with a system all running in a single object  
capability language.)


IMO, this isn't useful information.  If Alice is a user at my site,  
and I hand Alice a capability to access her data from my site, it  
should not make a difference to me whether Alice chooses to access  
that data using Bob's site or Charlie's site, any more than it makes  
a difference to me whether Alice chooses to use Firefox or Chrome.   
Saying that Alice is only allowed to access her data using Bob's  
site but not Charlie's is analogous to saying she can only use  
approved browsers.  This provides a small amount of security at  
the price of greatly annoying users and stifling innovation (think  
mash-ups).


I'm not saying that Alice should be restricted in who she shares the  
feed with. Just that Bob's site should not be able to automatically  
grant Charlie's site access to the feed without Alice explicitly  
granting that permission. Many sites that use workarounds (e.g. server-to-server communication combined with client-side form posts and
redirects) to share their data today would like grants to be to  
another site, not to another site plus any third party site that the  
second site chooses to share with.


Perhaps, though, you're suggesting that users should be able to edit  
the whitelist that is applied to their data, in order to provide  
access to new sites?  But this seems cumbersome to me -- both to the  
user, who needs to manage this whitelist, and to app developers, who  
can no longer delegate work to other hosts.


An automated permission grant system that vends unguessable URLs could  
just as easily manage the whitelist. It is true that app developers  
could not unilaterally grant access to other origins, but this is  
actually a desired property for many service providers. Saying that  
this feature is cumbersome for the service consumer does not lead  
the service provider to desire it any less.


(Of course, if you want to know the origin for non-security reasons  
(e.g. to log usage for statistical purposes, or deal with  
compatibility issues) then you can have the origin voluntarily  
identify itself, just as browsers voluntarily identify themselves.)


2) It provides additional defense if the unguessable URL is  
guessed, either because of the many natural ways URLs tend to leak,  
or because of a mistake in the algorithm that generates unguessable  
URLs, or because either Site B or Site A unintentionally disclose it  
to a third party. By using an unguessable URL *and* checking Origin  
and Cookie, Site A would still have some protection in this case. An  
attacker would have to not only break the security of the secret  
token but would also need to manage a confused deputy type attack  
against Site B, which has legitimate access, thus greatly narrowing  
the scope of the vulnerability. You would need two separate  
vulnerabilities, and an attacker with the opportunity to exploit  
both, in order to be vulnerable to unauthorized access.
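Maciej's defense-in-depth argument can be sketched as a server-side guard that demands both checks before honoring a request. This is a minimal illustration, not anyone's actual implementation; the grant table, token values, and origin names are all hypothetical:

```python
import hmac
from typing import Optional

# Hypothetical per-grant state: secret token -> the origin it was issued to.
GRANTS = {"tok-bob-7f3a9c": "https://bob.example"}

def authorize(token: str, origin: Optional[str]) -> bool:
    """Allow the request only if the secret token is valid AND the browser-set
    Origin header names the site the token was granted to. An attacker must
    defeat both checks independently: steal the token and also mount a
    confused-deputy attack through the legitimately authorized site."""
    granted_origin = GRANTS.get(token)
    if granted_origin is None:
        return False  # unknown or revoked token
    # compare_digest avoids timing side channels when comparing secrets.
    return origin is not None and hmac.compare_digest(granted_origin, origin)

# A leaked token alone is not enough from an unauthorized origin:
assert authorize("tok-bob-7f3a9c", "https://bob.example")
assert not authorize("tok-bob-7f3a9c", "https://evil.example")
assert not authorize("guessed-token", "https://bob.example")
```

Either check failing alone denies access, which is exactly the "two separate vulnerabilities" property described above.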


Given the right UI, a capability URL should be no more leak-prone  
than a cookie.  Sure, we don't want users to ever actually see  
capability URLs since they might then choose to copy/paste them into  
who knows where, but it's quite possible to hide the details behind  
the scenes, just like we hide cookie data.


Hiding capability URLs completely from the user would require some  
mechanism that has not yet been proposed in a concrete form. So far  
the ways to vend the URL to the service consumer that have been  
proposed include user copy/paste, and cross-site form submission with  
redirects, both of which expose the URL. However, accidental  
disclosure by the user is not the only risk.


So, I don't think this additional defense is really worth much,  
unless you are arguing that cookies are insecure for the same reasons.


Sites do, on occasion, make mistakes in the algorithms for generating  
session cookies. Or 

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Ian Hickson
On Wed, 16 Dec 2009, Devdatta wrote:

 hmm.. just an XDR GET on the file at hixie.ch which allows access only if 
 the request is from damowmow.com?

It couldn't be XDR -- XDR is a script-based mechanism, whereas XBL can be 
invoked before the root element is parsed. But even assuming the XDR 
protocol could be extended to XBL, that would require scripting or much 
more complicated .htaccess rules. With CORS, I can do it with one simple 
line in .htaccess.

Also, as I understand it, XDR sends an Origin header, which is what UM 
is trying to avoid.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 2:21 AM, Maciej Stachowiak m...@apple.com wrote:


 On Dec 17, 2009, at 1:42 AM, Kenton Varda wrote:

 Somehow I suspect all this has been said many times before...

 On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak m...@apple.com wrote:

 CORS would provide at least two benefits, using the exact protocol you'd
 use with UM:

 1) It lets you know what site is sending the request; with UM there is no
 way for the receiving server to tell. Site A may wish to enforce a policy
 that any other site that wants access has to request it individually. But
 with UM, there is no way to prevent Site B from sharing its unguessable URL
 to the resource with another site, or even to tell that Site B has done so.
 (I've seen papers cited that claim you can do proper logging using an
 underlying capabilities mechanism if you do the right things on top of it,
 but Tyler's protocol does not do that; and it is not at all obvious to me
 how to extend such results to tokens passed over the network, where you
 can't count on a type system to enforce integrity at the endpoints like you
 can with a system all running in a single object capability language.)


 IMO, this isn't useful information.  If Alice is a user at my site, and I
 hand Alice a capability to access her data from my site, it should not make
 a difference to me whether Alice chooses to access that data using Bob's
 site or Charlie's site, any more than it makes a difference to me whether
 Alice chooses to use Firefox or Chrome.  Saying that Alice is only allowed
 to access her data using Bob's site but not Charlie's is analogous to saying
 she can only use approved browsers.  This provides a small amount of
 security at the price of greatly annoying users and stifling innovation
 (think mash-ups).


 I'm not saying that Alice should be restricted in who she shares the feed
 with. Just that Bob's site should not be able to automatically grant
 Charlie's site access to the feed without Alice explicitly granting that
 permission. Many sites that use workarounds (e.g. server-to-server
 communication combined with client-side form posts and redirects) to share
 their data today would like grants to be to another site, not to another
 site plus any third party site that the second site chooses to share with.


OK, I'm sure that this has been said before, because it is critical to the
capability argument:

If Bob can access the data, and Bob can talk to Charlie *in any way at all*,
then it *is not possible* to prevent Bob from granting access to Charlie,
because Bob can always just serve as a proxy for Charlie's requests.

What CORS does do is make it so that Bob (and Charlie, if he is proxying
through Bob) can only access the resource while Alice has his site open in
her browser.  The same can be achieved with UM by generating a new URL for
each visit, and revoking it as soon as Alice browses away.
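The per-visit scheme Kenton describes can be sketched as a small vendor object on the service provider's side; the class, base URL, and session identifiers here are illustrative assumptions, not part of any proposal in the thread:

```python
import secrets

class CapabilityVendor:
    """Mint a fresh unguessable URL when Alice opens Bob's site,
    and revoke it as soon as she browses away."""

    def __init__(self, base: str = "https://provider.example/feed/"):
        self._base = base
        self._live = {}  # token -> session id that minted it

    def mint(self, session_id: str) -> str:
        token = secrets.token_urlsafe(32)  # 256 bits of entropy: unguessable
        self._live[token] = session_id
        return self._base + token

    def revoke_session(self, session_id: str) -> None:
        # Drop every capability minted for this visit.
        self._live = {t: s for t, s in self._live.items() if s != session_id}

    def is_valid(self, url: str) -> bool:
        return url.rsplit("/", 1)[-1] in self._live

vendor = CapabilityVendor()
url = vendor.mint(session_id="alice-visit-1")
assert vendor.is_valid(url)
vendor.revoke_session("alice-visit-1")  # Alice browses away
assert not vendor.is_valid(url)
```

Once revoked, a proxied or shared copy of the URL is as useless as an expired session, which bounds the window of access in the same way the CORS check would.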



 Perhaps, though, you're suggesting that users should be able to edit the
 whitelist that is applied to their data, in order to provide access to new
 sites?  But this seems cumbersome to me -- both to the user, who needs to
 manage this whitelist, and to app developers, who can no longer delegate
 work to other hosts.


 An automated permission grant system that vends unguessable URLs could just
 as easily manage the whitelist. It is true that app developers could not
 unilaterally grant access to other origins, but this is actually a desired
 property for many service providers. Saying that this feature is
 cumbersome for the service consumer does not lead the service provider to
 desire it any less.


You're right, the same UI I want for hooking up capabilities could also
update the whitelist.  But I still don't see where this is useful, given the
above.



 (Of course, if you want to know the origin for non-security reasons (e.g.
 to log usage for statistical purposes, or deal with compatibility issues)
 then you can have the origin voluntarily identify itself, just as browsers
 voluntarily identify themselves.)


 2) It provides additional defense if the unguessable URL is guessed,
 either because of the many natural ways URLs tend to leak, or because of a
 mistake in the algorithm that generates unguessable URLs, or because either
 Site B or Site A unintentionally disclose it to a third party. By using an
 unguessable URL *and* checking Origin and Cookie, Site A would still have
 some protection in this case. An attacker would have to not only break the
 security of the secret token but would also need to manage a confused
 deputy type attack against Site B, which has legitimate access, thus
 greatly narrowing the scope of the vulnerability. You would need two
 separate vulnerabilities, and an attacker with the opportunity to exploit
 both, in order to be vulnerable to unauthorized access.


 Given the right UI, a capability URL should be no more leak-prone than a
 cookie.  Sure, we don't want users to ever actually see capability URLs
 

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Ian Hickson
On Thu, 17 Dec 2009, Kenton Varda wrote:
 
 OK, I'm sure that this has been said before, because it is critical to 
 the capability argument:
 
 If Bob can access the data, and Bob can talk to Charlie *in any way at 
 all*, then it *is not possible* to prevent Bob from granting access to 
 Charlie, because Bob can always just serve as a proxy for Charlie's 
 requests.

If confidentiality was the only problem, this would be true. However, it's 
not the only problem. One of the big reasons to restrict which origin can 
use a particular resource is bandwidth management. For example, 
resources.example.com might want to allow *.example.com to use its XBL 
files, but not allow anyone else to directly use the XBL files straight 
from resources.example.com. A proxy isn't a plausible attack in this 
scenario, because if someone can set up a proxy, they can with much more 
ease simply host the original file (which isn't a problem from the point 
of view of the original site). Furthermore, if someone _does_ host a 
proxy, then they are taking the same load hit as the original site, and 
therefore the risk to the original site is capped.




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Tyler Close
On Thu, Dec 17, 2009 at 10:08 AM, Maciej Stachowiak m...@apple.com wrote:
 My goal was merely to argue that adding an origin/cookie check to a
 secret-token-based mechanism adds meaningful defense in depth, compared to
 just using any of the proposed protocols over UM. I believe my argument
 holds. If the secret token scheme has any weakness whatsoever, whether in
 generation of the tokens, or in accidental disclosure by the user or the
 service consumer, origin checks provide an orthogonal defense that must be
 breached separately. This greatly reduces the attack surface. While this may
 not provide any additional security in theory, where we can assume the
 shared secret is generated and managed correctly, it does provide additional
 security in the real world, where people make mistakes.

The reason the origin/cookie check doesn't provide defense in depth is
that the programming patterns we want to support necessarily blow
holes in any origin/cookie defense. We want clients to act as
deputies, because that's a useful thing to be able to do. For example,
consider a web page widget that implements the Observer pattern: when
its state changes, it fires off a POST request to a list of observer
URLs. Clients can register any URL they want with the web page widget.
If these POST requests carry origin/cookies, then a CSRF-like attack
is easy.
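Tyler's widget scenario can be modelled as a toy simulation (the classes, cookie value, and URLs are all hypothetical): a cookie-authenticated endpoint plus an observer widget that POSTs with the user's ambient credentials reproduces the CSRF-like attack, while a credential-free UM-style request does not:

```python
class Bank:
    """A classic CSRF target: any POST bearing the user's cookie is trusted."""
    def handle_post(self, url: str, cookie) -> str:
        return "transferred" if cookie == "alice-session" else "rejected"

class Widget:
    """Observer pattern: on state change, POST to every registered URL.
    Anyone may register any URL with the widget."""
    def __init__(self, send_credentials: bool):
        self.observers = []
        self.send_credentials = send_credentials
    def register(self, url: str) -> None:
        self.observers.append(url)
    def fire(self, bank: Bank):
        # Runs in Alice's browser: ambient credentials means her cookie rides
        # along on every observer notification, wherever it is aimed.
        cookie = "alice-session" if self.send_credentials else None
        return [bank.handle_post(url, cookie) for url in self.observers]

bank = Bank()
# With ambient cookies attached, registering a sensitive URL weaponizes the widget:
w = Widget(send_credentials=True)
w.register("https://bank.example/transfer?to=attacker")
assert w.fire(bank) == ["transferred"]
# The same notification sent without credentials is refused:
w2 = Widget(send_credentials=False)
w2.register("https://bank.example/transfer?to=attacker")
assert w2.fire(bank) == ["rejected"]
```

The vulnerability lives in the combination of attacker-chosen targets and automatically attached credentials, which is why supporting this pattern and an origin/cookie defense at the same time is hard.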

There are lots of other ways we want to use the Web, as it is meant to
be used, that aren't viable if you're trying to maintain the viability
of an origin/cookie defense. For example, Ian correctly points out
that under an origin/cookie defense, using URIs as identifiers is
dangerous, see:

http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1247.html

But we want to use URIs to identify things, because it's useful, and we
want it to be safe. For cross-origin scenarios, it can't be safe while
still maintaining the viability of origin/cookie defenses.

Basically, the programming patterns of the Web, when used in
cross-origin scenarios, break origin/cookie defenses. We want to keep
the Web programming patterns and replace the origin/cookie defense
with something that better fits the Web. We're willing to give up our
cookies before we'll give up our URIs.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 10:08 AM, Maciej Stachowiak m...@apple.com wrote:


 On Dec 17, 2009, at 9:15 AM, Kenton Varda wrote:



 On Thu, Dec 17, 2009 at 2:21 AM, Maciej Stachowiak m...@apple.com wrote:


 I'm not saying that Alice should be restricted in who she shares the feed
 with. Just that Bob's site should not be able to automatically grant
 Charlie's site access to the feed without Alice explicitly granting that
 permission. Many sites that use workarounds (e.g. server-to-server
 communication combined with client-side form posts and redirects) to share
 their data today would like grants to be to another site, not to another
 site plus any third party site that the second site chooses to share with.


 OK, I'm sure that this has been said before, because it is critical to the
 capability argument:

 If Bob can access the data, and Bob can talk to Charlie *in any way at
 all*, then it *is not possible* to prevent Bob from granting access to
 Charlie, because Bob can always just serve as a proxy for Charlie's
 requests.


 Indeed, you can always act as a proxy and directly share the data rather
 than sharing the token. However, this is not the same as the ability to
 share the token anonymously. Here are a few important differences:

 - As Ian mentioned, in the case of some kinds of resources, one of the
 service provider's goals may be to prevent abuse of their bandwidth.


It seems more useful to attribute resource usage to the user rather than to
the sites the user uses to access those resources.  In my example, I might
want to limit Alice to, say, 1GB data transfer per month, but I don't see
why I would care if that transfer happened through Bob's site vs. Charlie's
site.


 - Service providers often like to know for the sake of record-keeping who
 is using their data, even if they have no interest in restricting it. Often,
 just creating an incentive to identify yourself and ask for separate
 authorization is enough, even if proxy workarounds are possible. The reason
 given below states such an incentive.


I think this is separate from the security question.  As I said earlier,
origins can voluntarily identify themselves for this purpose, just as
browsers voluntarily identify themselves.


 - Proxying to subvert CORS would only work while the user is logged into
 both the service provider and the actually authorized service consumer who
 is acting as a proxy, and only in the user's browser. This limits the window
 in which to get data. Meanwhile, a capability token sent anonymously could
 be used at any time, even when the user is not logged in. The ability to get
 snapshots of the user's data may not be seen to be as great a risk as
 ongoing on-demand access.


Yes, I directly addressed exactly that point...


 I will also add that users may want to revoke capabilities they grant. This
 is likely to be presented to the user as a whitelist of sites to which they
 granted access, whether the actual mechanism is modifying Origin checks, or
 mapping the site to a capability token and disabling it.


Sure.  This is easy to do via caps.


 How would the service provider generate a new URL for each visit to Bob's
 site? How would the service provider even know whether it's Bob asking for
 an update, or whether the user is logged in? If the communication is via UM,
 the service provider has no way to know. If it's via a hidden form post,
 then you are just using forms to fake the effect of CORS. Note also that
 such elaborations increase complexity of the protocol.


Assuming some UI exists for granting capabilities, as I suggested earlier,
it can automatically take care of generating a new capability for every
connection/visit and revoking it when appropriate.


 To enable permissions to be revoked in a granular way, you must vend
 different capability tokens per site. Given that, it seems only sensible to
 check that the token is actually being used by the party to which it was
 granted.


I disagree.  Delegation is useful, and prohibiting it has a cost.  If we
granted the capability to Bob, why should we care if Bob chooses to delegate
to Charlie?  If Charlie misuses the capability, then we blame Bob for that
misuse.  It's Bob's responsibility to take appropriate measures to prevent
this.  If we don't trust Bob we shouldn't have granted him the capability in
the first place.

And again, CORS doesn't prevent delegation anyway; it only makes it less
convenient.


 My goal was merely to argue that adding an origin/cookie check to a
 secret-token-based mechanism adds meaningful defense in depth, compared to
 just using any of the proposed protocols over UM. I believe my argument
 holds. If the secret token scheme has any weakness whatsoever, whether in
 generation of the tokens, or in accidental disclosure by the user or the
 service consumer, origin checks provide an orthogonal defense that must be
 breached separately. This greatly reduces the attack surface. While this may
 not provide any 

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Ian Hickson
On Thu, 17 Dec 2009, Kenton Varda wrote:

 It seems more useful to attribute resource usage to the user rather than 
 to the sites the user uses to access those resources.  In my example, I 
 might want to limit Alice to, say, 1GB data transfer per month, but I 
 don't see why I would care if that transfer happened through Bob's site 
 vs. Charlie's site.

With CORS, I can trivially (one line in the .htaccess file for my site) 
make sure that no sites can use XBL files from my site other than my 
sites. My sites don't do any per-user tracking; doing that would involve 
orders of magnitude more complexity.
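For concreteness, the sort of one-liner Ian describes might look like this in an Apache `.htaccess` file (assuming `mod_headers` is enabled; the origin value is illustrative, since his actual site configuration isn't shown in the thread):

```apache
# One line covering every file under this directory: only pages from the
# named origin may read the responses cross-origin; browsers enforce it.
Header set Access-Control-Allow-Origin "https://www.example.com"
```

The same directive applies uniformly to XBL, XML, video, and everything else served from the directory, with no scripting and no per-user state.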


  - Service providers often like to know for the sake of record-keeping 
  who is using their data, even if they have no interest in restricting 
  it. Often, just creating an incentive to identify yourself and ask for 
  separate authorization is enough, even if proxy workarounds are 
  possible. The reason given below states such an incentive.
 
 I think this is separate from the security question.  As I said earlier, 
 origins can voluntarily identify themselves for this purpose, just as 
 browsers voluntarily identify themselves.

How can an origin voluntarily identify itself in an unspoofable fashion? 
Without running scripts?


 It seems like the fundamental disagreements here are:
 - Cap proponents think that the ability to delegate is extremely valuable,
 and ACLs provide too much of a barrier against delegation.  ACL people think
 delegation is not as important as Cap people think it is.  Arguments either
 way tend to be abstract, and thus unconvincing to either side.
 - ACL proponents think that capabilities are too easy to leak accidentally.
  Cap people think that the defenses provided by capability design patterns
 provide plenty of protection, but ACL people disagree.  Arguments either way
 again tend to be abstract, and thus unconvincing.

I have no problem with offering a feature like UM in CORS. My objection is 
to making the simple cases non-trivial, e.g. by never including Origin 
headers in any requests.




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Tyler Close
On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:
 One of the big reasons to restrict which origin can
 use a particular resource is bandwidth management. For example,
 resources.example.com might want to allow *.example.com to use its XBL
 files, but not allow anyone else to directly use the XBL files straight
 from resources.example.com.

An XBL file could include some JavaScript code that blows up the page
if the manipulated DOM has an unexpected document.domain.

I think this solution more precisely implements the control you want.
You're not trying to prevent other sites from downloading your XBL
file. You're only trying to encourage them to host their own version
of your XBL file.

In general, the control you want is most similar to iframe busting.
A separate standard that covers these rendering instructions would be
better than conflating them with an access-control standard. For
example, a new HTTP response header could provide instructions on what
embedding configurations are supported. The instructions may be
independent of how the embedding is created, such as by: iframe,
img, script or xbl.

--Tyler




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Ian Hickson
On Thu, 17 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:
  One of the big reasons to restrict which origin can use a particular 
  resource is bandwidth management. For example, resources.example.com 
  might want to allow *.example.com to use its XBL files, but not allow 
  anyone else to directly use the XBL files straight from 
  resources.example.com.
 
 An XBL file could include some JavaScript code that blows up the page if 
 the manipulated DOM has an unexpected document.domain.

This again requires script. I don't deny there are plenty of solutions you 
could use to do this with script. The point is that CORS allows one line 
in an .htaccess file to solve this for all XBL files, all XML files, all 
videos, everything on a site, all at once.




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Tyler Close
On Thu, Dec 17, 2009 at 3:46 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:
  One of the big reasons to restrict which origin can use a particular
  resource is bandwidth management. For example, resources.example.com
  might want to allow *.example.com to use its XBL files, but not allow
  anyone else to directly use the XBL files straight from
  resources.example.com.

 An XBL file could include some JavaScript code that blows up the page if
 the manipulated DOM has an unexpected document.domain.

 This again requires script. I don't deny there are plenty of solutions you
 could use to do this with script. The point is that CORS allows one line
 in an .htaccess file to solve this for all XBL files, all XML files, all
 videos, everything on a site, all at once.

I'm not trying to deny you your one line fix. I'm just saying it
should be a different one line than the one used for access control.
Conflating the two issues, the way CORS does, creates CSRF-like
problems. Address bandwidth management, along with other embedding
issues, while standardizing an iframe busting technique.

--Tyler




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Ian Hickson
On Thu, 17 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 3:46 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Dec 2009, Tyler Close wrote:
  On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:
   One of the big reasons to restrict which origin can use a 
   particular resource is bandwidth management. For example, 
   resources.example.com might want to allow *.example.com to use its 
   XBL files, but not allow anyone else to directly use the XBL files 
   straight from resources.example.com.
 
  An XBL file could include some JavaScript code that blows up the page 
  if the manipulated DOM has an unexpected document.domain.
 
  This again requires script. I don't deny there are plenty of solutions 
  you could use to do this with script. The point is that CORS allows 
  one line in an .htaccess file to solve this for all XBL files, all XML 
  files, all videos, everything on a site, all at once.
 
 I'm not trying to deny you your one line fix. I'm just saying it should 
 be a different one line than the one used for access control. Conflating 
 the two issues, the way CORS does, creates CSRF-like problems. Address 
 bandwidth management, along with other embedding issues, while 
 standardizing an iframe busting technique.

What one liner are your proposing that would solve the problem for XBL, 
XML data, videos, etc, all at once?




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 12:58 PM, Ian Hickson i...@hixie.ch wrote:

 With CORS, I can trivially (one line in the .htaccess file for my site)
 make sure that no sites can use XBL files from my site other than my
 sites. My sites don't do any per-user tracking; doing that would involve
 orders of magnitude more complexity.


I was debating about one particular use case, and this one that you're
talking about now is completely different.  I can propose a different
solution for this case, but I think someone will just change the use case
again to make my new solution look silly, and we'll go in circles.


 How can an origin voluntarily identify itself in an unspoofable fashion?
 Without running scripts?


It can't.  My point was that for simple non-security-related statistics
gathering, spoofing is not a big concern.  People can spoof browser UA
strings but we still gather statistics on them.


 I have no problem with offering a feature like UM in CORS. My objection is
 to making the simple cases non-trivial, e.g. by never including Origin
 headers in any requests.


Personally I'm not actually arguing against standardizing CORS.  What I'm
arguing is that UM is the natural solution for software designed in an
object-oriented, loosely-coupled way.  I'm also arguing that loosely-coupled
object-oriented systems are more powerful and better for users.


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 4:41 PM, Ian Hickson i...@hixie.ch wrote:

 What one liner are your proposing that would solve the problem for XBL,
 XML data, videos, etc, all at once?


Are we debating about the state of existing infrastructure, or theoretically
ideal infrastructure? Honest question.  .htaccess is an example of existing
infrastructure built around the ACL approach.  If no similarly-easy-to-use
capability-based infrastructure exists, that doesn't necessarily mean ACLs
are theoretically better.  But the thread subject line seems to suggest
we're more interested in theory.


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Tyler Close
On Thu, Dec 17, 2009 at 4:41 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 3:46 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Dec 2009, Tyler Close wrote:
  On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:
   One of the big reasons to restrict which origin can use a
   particular resource is bandwidth management. For example,
   resources.example.com might want to allow *.example.com to use its
   XBL files, but not allow anyone else to directly use the XBL files
   straight from resources.example.com.
 
  An XBL file could include some JavaScript code that blows up the page
  if the manipulated DOM has an unexpected document.domain.
 
  This again requires script. I don't deny there are plenty of solutions
  you could use to do this with script. The point is that CORS allows
  one line in an .htaccess file to solve this for all XBL files, all XML
  files, all videos, everything on a site, all at once.

 I'm not trying to deny you your one line fix. I'm just saying it should
 be a different one line than the one used for access control. Conflating
 the two issues, the way CORS does, creates CSRF-like problems. Address
 bandwidth management, along with other embedding issues, while
 standardizing an iframe busting technique.

 What one liner are your proposing that would solve the problem for XBL,
 XML data, videos, etc, all at once?

Well, I wasn't intending to make a frame busting proposal, but it
seems something like the following could work...

Starting from the X-FRAME-OPTIONS proposal, say the response header
also applies to all embedding that the page renderer does. So it also
covers img, video, etc. In addition to the current values, the
header can also list hostname patterns that may embed the content. So,
in your case:

X-FRAME-OPTIONS: *.example.com
Access-Control-Allow-Origin: *

Which means anyone can access this content, but sites outside
*.example.com should host their own copy, rather than framing or
otherwise directly embedding my copy.
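Checking an extended X-FRAME-OPTIONS value like the one above could be as simple as matching the embedder's hostname against the listed patterns. This is only a sketch of one plausible matching rule (space-separated glob patterns), since Tyler's proposal doesn't pin down the syntax:

```python
from fnmatch import fnmatch
from urllib.parse import urlsplit

def may_embed(frame_options: str, embedder_origin: str) -> bool:
    """Return True if the embedding page's hostname matches any of the
    space-separated hostname patterns in the response header value."""
    host = urlsplit(embedder_origin).hostname or ""
    # Glob semantics: '*' also spans dots, so '*.example.com' admits
    # any subdomain depth but not the bare 'example.com'.
    return any(fnmatch(host, pattern) for pattern in frame_options.split())

assert may_embed("*.example.com", "https://www.example.com")
assert not may_embed("*.example.com", "https://evil.example.net")
```

Under this reading, the renderer would consult the header for every embedded `img`, `video`, XBL file, and so on, independently of the separate access-control decision.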

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Ian Hickson
On Thu, 17 Dec 2009, Kenton Varda wrote:
 On Thu, Dec 17, 2009 at 4:41 PM, Ian Hickson i...@hixie.ch wrote:
  
  What one liner are your proposing that would solve the problem for 
  XBL, XML data, videos, etc, all at once?
 
 Are we debating about the state of existing infrastructure, or 
 theoretically ideal infrastructure? Honest question.  .htaccess is an 
 example of existing infrastructure built around the ACL approach.  If no 
 similarly-easy-to-use capability-based infrastructure exists, that 
 doesn't necessarily mean ACLs are theoretically better.  But the thread 
 subject line seems to suggest we're more interested in theory.

I'm interested in the practical impact of our specifications on authors. 
Those specifications have to be something that can be implemented; given 
the security model we're starting from, there's basically no way that can 
be an ideal anything.




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Ian Hickson
On Thu, 17 Dec 2009, Tyler Close wrote:
 
 Starting from the X-FRAME-OPTIONS proposal, say the response header
 also applies to all embedding that the page renderer does. So it also
 covers img, video, etc. In addition to the current values, the
 header can also list hostname patterns that may embed the content. So,
 in your case:
 
 X-FRAME-OPTIONS: *.example.com
 Access-Control-Allow-Origin: *
 
 Which means anyone can access this content, but sites outside 
 *.example.com should host their own copy, rather than framing or 
 otherwise directly embedding my copy.

Why is this better than:

   Access-Control-Allow-Origin: *.example.com

...?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Ian Hickson
On Thu, 17 Dec 2009, Kenton Varda wrote:
 On Thu, Dec 17, 2009 at 12:58 PM, Ian Hickson i...@hixie.ch wrote:
  
  With CORS, I can trivially (one line in the .htaccess file for my 
  site) make sure that no sites can use XBL files from my site other 
  than my sites. My sites don't do any per-user tracking; doing that 
  would involve orders of magnitude more complexity.

 I was debating about one particular use case, and this one that you're 
 talking about now is completely different.  I can propose a different 
 solution for this case, but I think someone will just change the use 
 case again to make my new solution look silly, and we'll go in circles.

The advantage of CORS is that it addresses all these use cases well.


  How can an origin voluntarily identify itself in an unspoofable 
  fashion? Without running scripts?
 
 It can't.

I don't understand how it can solve the problem then. If it's trivial for 
a site to spoof another, then the use case isn't solved.


 My point was that for simple non-security-related statistics gathering, 
 spoofing is not a big concern.

None of the use cases I've mentioned involve statistics gathering.


  I have no problem with offering a feature like UM in CORS. My 
  objection is to making the simple cases non-trivial, e.g. by never 
  including Origin headers in any requests.
 
 Personally I'm not actually arguing against standardizing CORS.  What 
 I'm arguing is that UM is the natural solution for software designed in 
 an object-oriented, loosely-coupled way.

CORS is a superset of UM; I have no objection to CORS-enabled APIs 
exposing the UM subset (i.e. allowing scripts to opt out of sending the 
Origin header). However, my understanding is that the UM proposal is to 
explicitly not allow Origin to ever be sent, which is why there is a 
debate. (If the question was just "should we add a feature to CORS to 
allow Origin to not be sent?", then I think the debate would have concluded
without much argument long ago.)


 I'm also arguing that loosely-coupled object-oriented systems are more 
 powerful and better for users.

Powerful is not a requirement I'm looking for. Simple is.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Kenton Varda
On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 17 Dec 2009, Tyler Close wrote:
  X-FRAME-OPTIONS: *.example.com
  Access-Control-Allow-Origin: *

 Why is this better than:

   Access-Control-Allow-Origin: *.example.com

 ...?


I think Tyler missed on this one.  X-FRAME-OPTIONS looks to me like the same
thing as CORS, except that it doesn't pretend to provide security.

In a capability-based world, when the user accessed your site, you'd send
back the HTML together with a set of capabilities to access other resources
on the site.  These capabilities would expire after some period of time.
 Want to allow one particular other site to use your resources as well?
 Then give them the capability to generate capabilities to your resources --
e.g. by giving them a secret key which they can hash together with the
current time.
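The expiring-capability scheme sketched above (a shared secret key hashed together with the current time) might look roughly like this. Everything here -- the function names, the validity window, the token format -- is an illustrative assumption, not part of any proposal in this thread:

```python
import hashlib
import hmac
import time

SECRET = b"key-shared-with-the-partner-site"  # hypothetical shared secret
WINDOW = 3600  # tokens valid for roughly an hour (assumed)

def mint_capability(resource, now=None):
    """Derive an unguessable, expiring token for one resource."""
    epoch = int((time.time() if now is None else now) // WINDOW)
    msg = ("%s:%d" % (resource, epoch)).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def check_capability(resource, token, now=None):
    """Accept tokens from the current or previous window (clock skew)."""
    t = time.time() if now is None else now
    return any(
        hmac.compare_digest(mint_capability(resource, t - skew), token)
        for skew in (0, WINDOW)
    )
```

The partner site, holding the secret, can mint fresh capability URLs on demand without any round trip back to the resource host.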

I know, your response is: "That's way more complicated than my one-line
.htaccess change!"

But your one-line .htaccess change is leveraging a great deal of
infrastructure already built around that model.  With the right
capability-based infrastructure, the capability-based solution would be
trivial too.  We don't have this infrastructure.  This is a valid concern.
 Unfortunately, few people are working to build this infrastructure because
most people would rather focus on the established model, simply because it
is established.  So we have a chicken-and-egg problem.

You probably also question the effect of my solution on caching, or other
technical issues like that.  I could explain how I'd deal with them, but
then you'd find finer details to complain about, and so on.  I'm not sure
the conversation would benefit anyone, so let's call it a draw.

On Thu, Dec 17, 2009 at 5:56 PM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 17 Dec 2009, Kenton Varda wrote:
  On Thu, Dec 17, 2009 at 12:58 PM, Ian Hickson i...@hixie.ch wrote:
  
   With CORS, I can trivially (one line in the .htaccess file for my
   site) make sure that no sites can use XBL files from my site other
   than my sites. My sites don't do any per-user tracking; doing that
   would involve orders of magnitude more complexity.
 
  I was debating about one particular use case, and this one that you're
  talking about now is completely different.  I can propose a different
  solution for this case, but I think someone will just change the use
  case again to make my new solution look silly, and we'll go in circles.

 The advantage of CORS is that it addresses all these use cases well.


There are perfectly good cap-based solutions as well.  But every
capability-based equivalent to an existing ACL-based solution is obviously
not going to be identical, and thus will have some trade-offs.  Usually
these trade-offs can be reasonably tailored to fit any particular real-world
use case.  But if you're bent on a solution that provides *exactly* what the
ACL solution provides (ignoring real-world considerations), the solution
usually won't be pretty.

Of course, when presented with a different way of doing things, it's always
easier to see the negative trade-offs than to see the positives, which is
why most debates about capability-based security seem to come down to people
nit-picking about the perceived disadvantages of caps while ignoring the
benefits.  I think this is what makes Mark so grumpy.  :/


   How can an origin voluntarily identify itself in an unspoofable
   fashion? Without running scripts?
 
  It can't.

 I don't understand how it can solve the problem then. If it's trivial for
 a site to spoof another, then the use case isn't solved.

  My point was that for simple non-security-related statistics gathering,
  spoofing is not a big concern.

 None of the use cases I've mentioned involve statistics gathering.


It was Maciej that brought up this use case.  I was responding to him.


   I have no problem with offering a feature like UM in CORS. My
   objection is to making the simple cases non-trivial, e.g. by never
   including Origin headers in any requests.
 
  Personally I'm not actually arguing against standardizing CORS.  What
  I'm arguing is that UM is the natural solution for software designed in
  an object-oriented, loosely-coupled way.

 CORS is a superset of UM; I have no objection to CORS-enabled APIs
 exposing the UM subset (i.e. allowing scripts to opt out of sending the
 Origin header). However, my understanding is that the UM proposal is to
 explicitly not allow Origin to ever be sent, which is why there is a
 debate. (If the question was just should we add a feature to CORS to
 allow Origin to not be sent, then I think the debate would have concluded
 without much argument long ago.)


I think the worry is about the chicken-and-egg problem I mentioned above:
 We justify the standard based on the existing infrastructure, but new
infrastructure will be built based on the direction in the standards.  Mark,
Tyler, and I believe the web would be better off if most things were

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Kenton Varda
Without the benefit of full context (I only started following this list
recently), I'd like cautiously to suggest that the UM solution to Ian's
challenge seems awkward because the challenge is itself a poor design, and
UM tends to be more difficult to work with when used to implement designs
that are poor in the first place.

Specifically -- and note that I'm not sure I follow all the details, so I
could be missing things -- it seems that the challenge calls for site B to
be hard-coded to talk to site A.  In a better world, site B would be able to
talk to any site that provides feeds in the desired format.  In order for
this to be possible, the user obviously has to explicitly hook up site B
to site A somehow.  Ideally, this hook-up act itself would additionally
imply permission for site B to access the user's data on site A.  The
natural way to accomplish this would be for an unguessable access token to
be communicated from site A to site B as part of the hook-up step.  Once
such a mechanism exists, UM is obviously the best way for site B to actually
access the data -- CORS provides no benefit at this point.

So how does this hook-up happen?  This is mostly a UI question.  One way
that could work with current browsers would be for the user to copy/paste an
unguessable URL representing the capability from one site to the other, but
this is obviously a poor UI.  Instead, I think what we need is some sort of
browser support for establishing these connections.  This is something I've
already been talking about on the public-device-apis list, as I think the
same UI should be usable to hook-up web apps with physical devices
connected to the user's machine.

So imagine, for example, that when the user visits site A originally, the
site can somehow tell the browser "I would like to provide a capability
implementing the com.example.Feed interface. The URL for this capability is
[something unguessable]."  Then, when the user visits site B, it has a
"socket" for an object implementing com.example.Feed.  When the user
clicks on this socket, the browser pops up a list of com.example.Feed
implementations that it knows about, such as the one from site A.  The user
can then click on that one and thus hook up the sites.

Obviously there are many issues to work through before this sort of thing
would be possible.  Ian proposed a new device tag on public-device-apis
yesterday -- it serves as the socket in my example above.  But, how a
device list gets populated (and the security implications of this) has yet
to be discussed much at all (as far as I know).

I just wanted to propose this as the ideal world.  In the ideal world,
UM is clearly the right standard.  I worry that CORS, if standardized, would
encourage developers to go down the path of hard-coding which sites they
talk to, since that's the approach that CORS makes easy and UM does not.  In
the long run, I think this would be bad for the web, since it would mean
less interoperability between apps and more vendor lock-in.

That said, I think it's safe to say that if you *want* to hard-code the list
of sites that you can interoperate with, it's easier to do with CORS than
with UM.

On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com wrote:

 On Mon, Dec 14, 2009 at 11:35 AM, Maciej Stachowiak m...@apple.com wrote:
 
  On Dec 14, 2009, at 10:44 AM, Tyler Close wrote:
 
  On Mon, Dec 14, 2009 at 10:16 AM, Adam Barth w...@adambarth.com wrote:
 
  On Mon, Dec 14, 2009 at 5:53 AM, Jonathan Rees 
 j...@creativecommons.org
  wrote:
 
  The only complaint I know of regarding UM is that it is so complicated
  to use in practice that it will not be as enabling as CORS
 
  Actually, Tyler's UM protocol requires the user to confirm message 5
  to prevent a CSRF attack.  Maciej's CORS version of the protocol
  requires no such user confirmation.  I think it's safe to say that
  asking the user to confirm security-critical operations is not a good
  approach.
 
  For Ian Hickson's challenge problem, I came up with a design that does
  not require any confirmation, or any other user interaction. See:
 
  http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1232.html
 
  That same design can be used to solve Maciej's challenge problem.
 
  I see three ways it wouldn't satisfy the requirements given for my CORS
  example:
 
  1) Fails AJAX UI requirement in the grant phase, since a form post is
  required.

 I thought "AJAX UI" just meant no full page reload. The grant phase
 could be done in an iframe.

  2) The permission grant is intiated and entirely drive by Site B (the
  service consumer). Thus Site A (the service provider in this case) has no
  way to know that the request to grant access is a genuine grant from the
  user.
 
  3) When Site A receives the request from Site B, there is no indication
 of
  what site initiated the communication (unless the request from B is
 expected
  to come with an Origin header). How does it even know it's supposed to
  

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Ian Hickson
On Wed, 16 Dec 2009, Kenton Varda wrote:

 Without the benefit of full context (I only started following this list 
 recently), I'd like cautiously to suggest that the UM solution to Ian's 
 challenge seems awkward because the challenge is itself a poor design, 
 and UM tends to be more difficult to work with when used to implement 
 designs that are poor in the first place.
 
 Specifically -- and note that I'm not sure I follow all the details, so 
 I could be missing things -- it seems that the challenge calls for site 
 B to be hard-coded to talk to site A.  In a better world, site B would 
 be able to talk to any site that provides feeds in the desired format.

A concrete example of the example I was talking about is Google's Finance 
GData API. There's a fixed URL on A (Google's site) that represents my 
finance information. There's a site B (my portal page) that is hard-coded 
to fetch that data and display it. I'm logged into A, I'm not logged into 
B, and I've told A (Google) that it's ok to give B access to my financial 
data. Today, this involves a complicated set of bouncing back and forth. 
With CORS, it could be done with zero server-side scripting -- the file 
could just be statically generated with an HTTP header that grants 
permission to my portal to read the page.

Another example would be an XBL binding file on hixie.ch that is 
accessible only to pages on damowmow.com. With CORS I can do this with one 
line in my .htaccess file. I don't see how to do it at all with UM.
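For reference, the "one line" in question would, assuming Apache with mod_headers enabled, look something like this sketch:

```apache
# In hixie.ch's .htaccess: let only pages on damowmow.com read this
# XBL binding cross-origin.
Header set Access-Control-Allow-Origin "http://damowmow.com"
```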


 So imagine, for example, that when the user visits site A originally, 
 the site can somehow tell the browser I would like to provide a 
 capability implementing the com.example.Feed interface.  The URL for 
 this capability is [something unguessable]..  Then, when the user 
 visits site B, it has a socket for an object implementing 
 com.example.Feed.  When the user clicks on this socket, the browser 
 pops up a list of com.example.Feed implementations that it knows about, 
 such as the one from site A.  The user can then click on that one and 
 thus hook up the sites.

As a user, in both the finance case and XBL case, I don't want any UI. I 
just want it to Work.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Kenton Varda
On Wed, Dec 16, 2009 at 9:25 PM, Ian Hickson i...@hixie.ch wrote:

 A concrete example of the example I was talking about is Google's Finance
 GData API. There's a fixed URL on A (Google's site) that represents my
 finance information. There's a site B (my portal page) that is hard-coded
 to fetch that data and display it. I'm logged into A, I'm not logged into
 B, and I've told A (Google) that it's ok to give B access to my financial
 data. Today, this involves a complicated set of bouncing back and forth.
 With CORS, it could be done with zero server-side scripting -- the file
 could just be statically generated with an HTTP header that grants
 permission to my portal to read the page.

 ...

 As a user, in both the finance case and XBL case, I don't want any UI. I
 just want it to Work.


Yet you must go through a UI on site A to tell it that it's OK to give your
data to B.  Obviously this step cannot be altogether eliminated.  What I am
suggesting is a slightly different UI which I think would be no more
difficult to use, but which would avoid the need to hard-code.

In fact, I think my UI is easier for users, because in all likelihood, when
you decide "I want site B to access my data from site A", you are probably
already on site B at the time.  In your UI, you have to navigate back to A
in order to grant permission to B (and doesn't that also require
copy-pasting the host name?).  In my UI, you don't have to leave site B to
make the connection, because the browser remembers that site A provided the
desired capability and thus can present the option to you directly.

The down side is that I don't know how to implement my UI in today's
browsers.


Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Devdatta

 Another example would be an XBL binding file on hixie.ch that is
 accessible only to pages on damowmow.com. With CORS I can do this with one
 line in my .htaccess file. I don't see how to do it at all with UM.


Seems to me that these examples can just as easily be done with IE's
XDomainRequest. Are there examples for CORS which can't be done by
UM/XDR  ?

Cheers
devdatta

2009/12/16 Ian Hickson i...@hixie.ch:
 On Wed, 16 Dec 2009, Kenton Varda wrote:

 Without the benefit of full context (I only started following this list
 recently), I'd like cautiously to suggest that the UM solution to Ian's
 challenge seems awkward because the challenge is itself a poor design,
 and UM tends to be more difficult to work with when used to implement
 designs that are poor in the first place.

 Specifically -- and note that I'm not sure I follow all the details, so
 I could be missing things -- it seems that the challenge calls for site
 B to be hard-coded to talk to site A.  In a better world, site B would
 be able to talk to any site that provides feeds in the desired format.

 A concrete example of the example I was talking about is Google's Finance
 GData API. There's a fixed URL on A (Google's site) that represents my
 finance information. There's a site B (my portal page) that is hard-coded
 to fetch that data and display it. I'm logged into A, I'm not logged into
 B, and I've told A (Google) that it's ok to give B access to my financial
 data. Today, this involves a complicated set of bouncing back and forth.
 With CORS, it could be done with zero server-side scripting -- the file
 could just be statically generated with an HTTP header that grants
 permission to my portal to read the page.

 Another example would be an XBL binding file on hixie.ch that is
 accessible only to pages on damowmow.com. With CORS I can do this with one
 line in my .htaccess file. I don't see how to do it at all with UM.


 So imagine, for example, that when the user visits site A originally,
 the site can somehow tell the browser I would like to provide a
 capability implementing the com.example.Feed interface.  The URL for
 this capability is [something unguessable]..  Then, when the user
 visits site B, it has a socket for an object implementing
 com.example.Feed.  When the user clicks on this socket, the browser
 pops up a list of com.example.Feed implementations that it knows about,
 such as the one from site A.  The user can then click on that one and
 thus hook up the sites.

 As a user, in both the finance case and XBL case, I don't want any UI. I
 just want it to Work.

 --
 Ian Hickson               U+1047E                )\._.,--,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'





Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Ian Hickson
On Wed, 16 Dec 2009, Devdatta wrote:
 
  Another example would be an XBL binding file on hixie.ch that is
  accessible only to pages on damowmow.com. With CORS I can do this with one
  line in my .htaccess file. I don't see how to do it at all with UM.
 
 Seems to me that these examples can just as easily be done with IE's
 XDomainRequest.

How?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Devdatta
hmm.. just an XDR GET on the file at hixie.ch which allows access only
if the request is from damowmow.com ?

I am not sure -- is there anything special about XBL bindings which
would result in this not working ?

Cheers
devdatta

2009/12/16 Ian Hickson i...@hixie.ch:
 On Wed, 16 Dec 2009, Devdatta wrote:
 
  Another example would be an XBL binding file on hixie.ch that is
  accessible only to pages on damowmow.com. With CORS I can do this with one
  line in my .htaccess file. I don't see how to do it at all with UM.

 Seems to me that these examples can just as easily be done with IE's
 XDomainRequest.

 How?

 --
 Ian Hickson               U+1047E                )\._.,--,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Maciej Stachowiak


On Dec 16, 2009, at 11:30 PM, Devdatta wrote:


hmm.. just a XDR GET on the file at hixie.ch which allows access only
if the request is from damowmow.com ?

I am not sure -- is there anything special about XBL bindings which
would result in this not working ?


If I recall correctly, XDR sends an Origin header, so it would work  
for this kind of use case so long as the resource is not per-user. XDR  
essentially uses a profile of CORS with the credentials flag always  
off. UM is different - it would not send an Origin header. So it would  
be more difficult to apply it to Hixie's problem.


Regards,
Maciej




Cheers
devdatta

2009/12/16 Ian Hickson i...@hixie.ch:

On Wed, 16 Dec 2009, Devdatta wrote:


Another example would be an XBL binding file on hixie.ch that is
accessible only to pages on damowmow.com. With CORS I can do this  
with one

line in my .htaccess file. I don't see how to do it at all with UM.


Seems to me that these examples can just as easily be done with IE's
XDomainRequest.


How?

--
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'









Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-16 Thread Maciej Stachowiak


On Dec 16, 2009, at 9:10 PM, Kenton Varda wrote:

Without the benefit of full context (I only started following this  
list recently), I'd like cautiously to suggest that the UM solution  
to Ian's challenge seems awkward because the challenge is itself a  
poor design, and UM tends to be more difficult to work with when  
used to implement designs that are poor in the first place.


Specifically -- and note that I'm not sure I follow all the details,  
so I could be missing things -- it seems that the challenge calls  
for site B to be hard-coded to talk to site A.  In a better world,  
site B would be able to talk to any site that provides feeds in the  
desired format.  In order for this to be possible, the user  
obviously has to explicitly hook up site B to site A somehow.   
Ideally, this hook-up act itself would additionally imply  
permission for site B to access the user's data on site A.  The  
natural way to accomplish this would be for an unguessable access  
token to be communicated from site A to site B as part of the hook- 
up step.  Once such a mechanism exists, UM is obviously the best  
way for site B to actually access the data -- CORS provides no  
benefit at this point.


CORS would provide at least two benefits, using the exact protocol  
you'd use with UM:


1) It lets you know what site is sending the request; with UM there is  
no way for the receiving server to tell. Site A may wish to enforce a  
policy that any other site that wants access has to request it  
individually. But with UM, there is no way to prevent Site B from  
sharing its unguessable URL to the resource with another site, or even  
to tell that Site B has done so. (I've seen papers cited that claim  
you can do proper logging using an underlying capabilities mechanism  
if you do the right things on top of it, but Tyler's protocol does not  
do that; and it is not at all obvious to me how to extend such results  
to tokens passed over the network, where you can't count on a type  
system to enforce integrity at the endpoints like you can with a  
system all running in a single object capability language.)


2) It provides additional defense if the unguessable URL is guessed,  
either because of the many natural ways URLs tend to leak, or because  
of a mistake in the algorithm that generates unguessable URLs, or  
because either Site B or Site A unintentionally disclose it to a third  
party. By using an unguessable URL *and* checking Origin and Cookie,  
Site A would still have some protection in this case. An attacker  
would have to not only break the security of the secret token but  
would also need to manage a confused deputy type attack against Site  
B, which has legitimate access, thus greatly narrowing the scope of  
the vulnerability. You would need two separate vulnerabilities, and an  
attacker with the opportunity to exploit both, in order to be  
vulnerable to unauthorized access.
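The layered check described above -- a secret token *and* an Origin check -- could be sketched server-side like this. The Origin header name comes from the thread; the allow-list, handler shape, and helper names are hypothetical:

```python
import hmac

# Hypothetical allow-list of origins granted individual access.
ALLOWED_ORIGINS = {"https://site-b.example"}

def authorize(headers, presented_token, expected_token):
    """Defense in depth: require BOTH the unguessable token AND an
    allowed Origin. A leaked token alone is not enough; the attacker
    would also need a confused-deputy attack through an allowed site."""
    token_ok = hmac.compare_digest(presented_token, expected_token)
    origin_ok = headers.get("Origin") in ALLOWED_ORIGINS
    return token_ok and origin_ok
```

Either vulnerability alone (a guessed token, or a compromised allowed site) fails the other check, which is the narrowing of scope described above.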


Regards,
Maciej




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-15 Thread Adam Barth
On Mon, Dec 14, 2009 at 6:14 PM, Jonas Sicking jo...@sicking.cc wrote:
 For what it's worth, I'm not sure that "eliminating" is correct here.
 With UM, I can certainly see people doing things like using a wrapping
 library for all UM requests (very commonly done with XHR today), and
 then letting that library add the security token to the request.

There are real examples of this exact vulnerably occurring in CSRF
defenses based on secret tokens.  There's no silver bullet for
security.

Adam



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-15 Thread Tyler Close
On Mon, Dec 14, 2009 at 6:14 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Dec 14, 2009 at 4:52 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Sun, Dec 13, 2009 at 6:15 PM, Maciej Stachowiak m...@apple.com wrote:
 There seem to be two schools of thought that to some extent inform the
 thinking of participants in this discussion:
 1) Try to encourage capability-based mechanisms by not providing anything
 that lets you extend the use of origins and cookies.
 2) Try to build on the model that already exists and that we are likely
 stuck with, and provide practical ways to mitigate its risks.

 My own perspective on this is:
 3) In scenarios involving more than 2 parties, the ACL model is
 inherently vulnerable to CSRF-like problems. So, for cross-origin
 scenarios, a non-ACL model solution is needed.

 The above is a purely practical perspective. When writing or auditing
 code, UM provides a way to eliminate an entire class of attacks. I
 view it the same way I do moving from C to a memory safe language to
 avoid buffer overflow and related attacks.

 For what it's worth, I'm not sure that "eliminating" is correct here.
 With UM, I can certainly see people doing things like using a wrapping
 library for all UM requests (very commonly done with XHR today), and
 then letting that library add the security token to the request.

Yes, I said "provides a way to eliminate". I agree that UM doesn't by
itself eliminate CSRF in a way that can't be undone by poor
application design. The UM draft we sent to this list covers this
point in the Security Considerations section. See the second to last
paragraph in that section:

http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/att-0931/draft.html#security

That paragraph reads:

Application designers should design protocols that transmit only those
permissions justified by the purpose of each request. These
permissions should not be context sensitive, such as "apply delete
permission to any identifier in this request". Such a permission
creates the danger of a CSRF-like attack in which an attacker causes
an unexpected identifier to be in the request. Instead, a permission
should be specific, such as "apply delete permission to resource foo".
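A minimal sketch of the distinction the draft draws, with all names hypothetical: a context-sensitive permission authorizes whatever identifier happens to appear in the request, while a specific permission is cryptographically bound to one resource:

```python
import hashlib
import hmac

# CSRF-prone shape: POST /delete?token=T with body id=<anything>,
# where T means "delete whatever id is in this request" -- an attacker
# who can influence the request body chooses the victim identifier.
#
# Safer shape: the token is derived from, and only valid for, one
# resource, e.g. POST /delete/foo?token=T_foo.

def token_for(secret, resource):
    """Mint a permission bound to exactly one resource."""
    return hmac.new(secret, resource.encode(), hashlib.sha256).hexdigest()

def may_delete(secret, resource, token):
    """A token minted for "foo" is useless against "bar", so smuggling
    an unexpected identifier into the request gains nothing."""
    return hmac.compare_digest(token_for(secret, resource), token)
```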


UM provides a safe substrate for application protocols that are
invulnerable to CSRF-like attacks. Without UM, this can't be done
since the browser automatically adds credentials to all requests.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-15 Thread Tyler Close
On Mon, Dec 14, 2009 at 4:26 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Mon, Dec 14, 2009 at 2:38 PM, Adam Barth w...@adambarth.com wrote:
 On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com wrote:
 For example, the
 User Consent Phase and Grant Phase above could be replaced by a single
 copy-paste operation by the user.

 Any design that involves storing confidential information in the
 clipboard is insecure because IE lets arbitrary web sites read the
 user's clipboard.  You can judge that to be a regrettable choice by
 the IE team, but it's just a fact of the world.

 And so we use the alternate, no-copy-paste design on IE while waiting
 for a better world; one in which users can safely copy data between
 web pages.

Just so that everyone knows, IE has changed this policy, so it's not a
situation where we'll be waiting forever. See:

http://msdn.microsoft.com/en-us/library/bb250473(VS.85).aspx

Adam, were you aware of this policy change?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-15 Thread Adam Barth
On Tue, Dec 15, 2009 at 10:12 AM, Tyler Close tyler.cl...@gmail.com wrote:
 Just so that everyone knows, IE has changed this policy, so it's not a
 situation where we'll be waiting forever. See:

 http://msdn.microsoft.com/en-us/library/bb250473(VS.85).aspx

 Adam, were you aware of this policy change?

Nope.  I'm glad to see IE is making progress.  I suspect a large
percentage of users will click through that dialog, but that gives us
hope that the IE team will eventually remove the dialog as well.

Adam



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Jonathan Rees
Comments inline

On Sun, Dec 13, 2009 at 9:15 PM, Maciej Stachowiak m...@apple.com wrote:

 On Dec 13, 2009, at 3:47 PM, Mark S. Miller wrote:

 On Sun, Dec 13, 2009 at 3:19 PM, Maciej Stachowiak m...@apple.com wrote:

 The literature you cited seems to mostly be about whether capability
 systems have various technical flaws, and whether they can be made to do
 various things that ACL-based systems can do. This does not seem to me to
 show that the science is settled on how to design security systems.

The question is whether separating credentials from naming has
advantages over keeping them together. The references talk about
certain kinds of putative advantages that have been proven illusory.
It is true that there may be other advantages that haven't been
articulated or surfaced. Mark is asking for help in understanding what
they are.

 If there are undisputed weaknesses of ACLs compared to capabilities, and
 undisputed refutations of all claimed weaknesses of capabilities compared to ACLs,
 then what more is needed for the science to be settled?

If the security considerations can't be convincing, then you are
making your judgment of the inadequacy of the capability approach
based on other considerations. I think there is a sincere question as
to what those considerations are.

 Even if that is true with respect to formal security properties (and I
 honestly don't know), it doesn't necessarily show that ACL-based systems are
 always dangerously unsafe, or that the formal differences actually matter in
 practice in a particular case, enough to outweigh any pragmatic
 considerations in the other direction.

Because the trusted computing base can always have flaws, and desired
security policy may be formalized incorrectly, there is *always* risk.
When comparing approaches based on security criteria, you have to ask
which approach has lower risk. When applying other criteria, the
questions are different. This may be a disagreement over "goodness",
so we need to work on being transparent about what "good" means.

 I'm also not sure that this Working Group is an appropriate venue to
 determine the answer to that question in a general way. I don't think most
 of us have the skillset  to review the literature. Beyond that, our goal in
 the Working Group is to do practical security analysis of concrete
 protocols, and if there are flaws, to address them. If there are theoretical
 results that have direct bearing on Working Group deliverables, then the
 best way to provide that information would be to explain how they apply in
 that specific context.

 Fine with me. That's what we were doing before Adam raised the history of
 this controversy as an argument that we should stop.

 One important point to consider is that we are not deploying into a vacuum.
 The Web already pervasively makes use of tokens that are passively passed
 around to identify the agent (I feel a little weird calling these ACLs given
 the specific uses). In particular, the notion of origin is used already to
 control client-side scripting access to the DOM; and cookies are used
 pervasively for persistent login.
 I don't see a clear plan on the table for removing these passive
 identifiers. Removing same-origin policy for scripting would require either
 majorly redesigning scripting APIs or would lead to massive security holes
 in existing sites. As for cookies, it does not seem anyone has a practical
 replacement that allows a user to persistently stay logged into a site. In
 fact, many proposed mechanisms for cross-site communication ultimately
 depend at some point on cookies, including you and Tyler's proposed UM-based
 protocol for cross-site communication without prior arrangement.
 Even if a pure capability-based system is better than a pure ACL-based
 system (and I really have no way to evaluate, except to note that a large
 number of security architectures in widespread production use seem to be on
 some level ACL-based), it's not clear to me that solely pushing capabilities
 is the best way to improve the already existing Web.
 There seem to be two schools of thought that to some extent inform the
 thinking of participants in this discussion:
 1) Try to encourage capability-based mechanisms by not providing anything
 that lets you extend the use of origins and cookies.
 2) Try to build on the model that already exists and that we are likely
 stuck with, and provide practical ways to mitigate its risks.
 I don't see how we are going to settle the disagreement by further mailing
 list debate, because it seems to me that much of it is at the level of
 design philosophy, not provable security properties.

This is a straw man as it does not address the question on the table.
As far as I know, even if current credential-carrying same-origin
requests are being challenged, prohibiting them is in neither the
interest nor the power of the WG, so it's off the table. (Mark may
argue for deprecation, but that in itself will have little effect.)
AFAICT 

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Adam Barth
On Mon, Dec 14, 2009 at 5:53 AM, Jonathan Rees j...@creativecommons.org wrote:
 The only complaint I know of regarding UM is that it is so complicated
 to use in practice that it will not be as enabling as CORS

Actually, Tyler's UM protocol requires the user to confirm message 5
to prevent a CSRF attack.  Maciej's CORS version of the protocol
requires no such user confirmation.  I think it's safe to say that
asking the user to confirm security-critical operations is not a good
approach.

 Regarding the idea that UM is unproven or undeployed - I think this is
 a peculiar charge given that object-oriented programming dates from
 1967, and actors date from 1973; and current use of the capability
 pattern, for example in email list validation, shared calendar access
 control, and CSRF defense (Mark can probably provide many other and
 better examples), *is* something we can build on. Ocaps have been
 essentially unchanged for 40 years, with essentially no elaboration or
 revision despite heavy stress testing. AFAIK the academic and
 practical security communities have not converged on any distributed
 (i.e. multilateral) access control system *other* than capabilities.

You're really overstating your case to the point where it's ridiculous.

Adam



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Tyler Close
On Mon, Dec 14, 2009 at 10:16 AM, Adam Barth w...@adambarth.com wrote:
 On Mon, Dec 14, 2009 at 5:53 AM, Jonathan Rees j...@creativecommons.org 
 wrote:
 The only complaint I know of regarding UM is that it is so complicated
 to use in practice that it will not be as enabling as CORS

 Actually, Tyler's UM protocol requires the user to confirm message 5
 to prevent a CSRF attack.  Maciej's CORS version of the protocol
 requires no such user confirmation.  I think it's safe to say that
 asking the user to confirm security-critical operations is not a good
 approach.

For Ian Hickson's challenge problem, I came up with a design that does
not require any confirmation, or any other user interaction. See:

http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1232.html

That same design can be used to solve Maciej's challenge problem.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Maciej Stachowiak


On Dec 14, 2009, at 10:44 AM, Tyler Close wrote:

On Mon, Dec 14, 2009 at 10:16 AM, Adam Barth w...@adambarth.com wrote:
On Mon, Dec 14, 2009 at 5:53 AM, Jonathan Rees j...@creativecommons.org wrote:
The only complaint I know of regarding UM is that it is so complicated
to use in practice that it will not be as enabling as CORS


Actually, Tyler's UM protocol requires the user to confirm message 5
to prevent a CSRF attack.  Maciej's CORS version of the protocol
requires no such user confirmation.  I think it's safe to say that
asking the user to confirm security-critical operations is not a good
approach.


For Ian Hickson's challenge problem, I came up with a design that does
not require any confirmation, or any other user interaction. See:

http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1232.html


That same design can be used to solve Maciej's challenge problem.


I see three ways it wouldn't satisfy the requirements given for my  
CORS example:


1) Fails AJAX UI requirement in the grant phase, since a form post  
is required.


2) The permission grant is initiated and entirely driven by Site B (the  
service consumer). Thus Site A (the service provider in this case) has  
no way to know that the request to grant access is a genuine grant  
from the user.


3) When Site A receives the request from Site B, there is no  
indication of what site initiated the communication (unless the  
request from B is expected to come with an Origin header). How does it  
even know it's supposed to redirect to B? Is site A expecting that  
it's only going to get service requests from B? That would amount to a  
prior bilateral arrangement.


I also note that the protocol you describe there uses cookies (and  
possibly origins, if point 3 is addressed) to bootstrap a
shared-secret based scheme. As I've mentioned before, CORS would be a useful  
tool for that type of technique. It can allow such bootstrapping  
without having to jump through hoops with form posts, without  
disrupting the user's interaction with a full page load, and without  
necessarily having to put secrets in the URL (since the URL part of  
the request is by far the most likely to leak to the outside world  
inadvertently).
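A toy sketch of this bootstrapping pattern (all names and values invented; this simulates the server-side decision, not a real CORS implementation):

```typescript
// Toy model of a CORS-guarded endpoint that hands a fresh shared secret
// to a known origin in the response body, keeping the secret out of the URL.
type CorsRequest = { origin: string; cookie?: string };
type CorsResponse = { allowOrigin?: string; body?: string };

const TRUSTED_ORIGINS = new Set(["https://b.example"]); // invented origin
const USER_COOKIE = "session=alice";                    // invented credential

function issueSecret(req: CorsRequest): CorsResponse {
  if (!TRUSTED_ORIGINS.has(req.origin) || req.cookie !== USER_COOKIE) {
    // No Access-Control-Allow-Origin header: the browser withholds the body.
    return {};
  }
  return { allowOrigin: req.origin, body: "secret=9d2c1" };
}

console.log(issueSecret({ origin: "https://b.example", cookie: USER_COOKIE }).body);
console.log(issueSecret({ origin: "https://evil.example", cookie: USER_COOKIE }).body);
```

The trusted origin receives the secret in the body; an unrecognized origin gets no CORS header, so the browser never exposes the response to it.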


Regards,
Maciej




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Adam Barth
On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com wrote:
 For example, the
 User Consent Phase and Grant Phase above could be replaced by a single
 copy-paste operation by the user.

Any design that involves storing confidential information in the
clipboard is insecure because IE lets arbitrary web sites read the
user's clipboard.  You can judge that to be a regrettable choice by
the IE team, but it's just a fact of the world.

Adam



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Maciej Stachowiak


On Dec 14, 2009, at 2:38 PM, Adam Barth wrote:

On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com wrote:

For example, the
User Consent Phase and Grant Phase above could be replaced by a single
copy-paste operation by the user.


Any design that involves storing confidential information in the
clipboard is insecure because IE lets arbitrary web sites read the
user's clipboard.  You can judge that to be a regrettable choice by
the IE team, but it's just a fact of the world.


Information that's copied and pasted is highly likely to leak in other  
ways than just the IE paste behavior. For example, if it looks like a  
URL, users are likely to think it's a good idea to do things like  
share the URL with their friends, or to post it to a social bookmark  
site, or to Twitter it, or to send it in email. Even if it does not  
look like a URL, users may think they need to save it (likely  
somewhere insecure) so they don't forget.


Regards,
Maciej




Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Tyler Close
On Mon, Dec 14, 2009 at 2:38 PM, Adam Barth w...@adambarth.com wrote:
 On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com wrote:
 For example, the
 User Consent Phase and Grant Phase above could be replaced by a single
 copy-paste operation by the user.

 Any design that involves storing confidential information in the
 clipboard is insecure because IE lets arbitrary web sites read the
 user's clipboard.  You can judge that to be a regrettable choice by
 the IE team, but it's just a fact of the world.

And so we use the alternate, no-copy-paste design on IE while waiting
for a better world; one in which users can safely copy data between
web pages.

I imagine many passwords and other PII are made vulnerable by IE's
clipboard policy.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Tyler Close
On Mon, Dec 14, 2009 at 3:04 PM, Maciej Stachowiak m...@apple.com wrote:

 On Dec 14, 2009, at 2:38 PM, Adam Barth wrote:

 On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com
 wrote:

 For example, the
 User Consent Phase and Grant Phase above could be replaced by a single
 copy-paste operation by the user.

 Any design that involves storing confidential information in the
 clipboard is insecure because IE lets arbitrary web sites read the
 user's clipboard.  You can judge that to be a regrettable choice by
 the IE team, but it's just a fact of the world.

 Information that's copied and pasted is highly likely to leak in other ways
 than just the IE paste behavior. For example, if it looks like a URL, users
 are likely to think it's a good idea to do things like share the URL with
 their friends, or to post it to a social bookmark site, or to Twitter it, or
 to send it in email. Even if it does not look like a URL, users may think
 they need to save it (likely somewhere insecure) so they don't forget.

I think the user would only be tempted to post the URL to the world if
the returned representation was interesting to talk about. That
doesn't need to be the case.

In any case, like I said earlier, if you think copy-paste is evil,
I've provided alternate designs that avoid it.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Tyler Close
On Sun, Dec 13, 2009 at 6:15 PM, Maciej Stachowiak m...@apple.com wrote:
 There seem to be two schools of thought that to some extent inform the
 thinking of participants in this discussion:
 1) Try to encourage capability-based mechanisms by not providing anything
 that lets you extend the use of origins and cookies.
 2) Try to build on the model that already exists and that we are likely
 stuck with, and provide practical ways to mitigate its risks.

My own perspective on this is:
3) In scenarios involving more than 2 parties, the ACL model is
inherently vulnerable to CSRF-like problems. So, for cross-origin
scenarios, a non-ACL model solution is needed.

The above is a purely practical perspective. When writing or auditing
code, UM provides a way to eliminate an entire class of attacks. I
view it the same way I do moving from C to a memory safe language to
avoid buffer overflow and related attacks.
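To make the contrast concrete, here is a toy simulation (all names and tokens invented; neither "server" is a real API) of why ambient credentials confuse a deputy while an unguessable capability URL does not:

```typescript
// Simulation of the two authorization models (hypothetical names throughout).
type SimRequest = { url: string; cookie?: string };

const SESSION_COOKIE = "session=alice"; // ambient credential; the browser
                                        // attaches it to every request
const CAP_TOKEN = "k7f93q";             // unguessable secret naming the resource

// ACL-style server: authorizes on the ambient cookie alone.
function aclServer(req: SimRequest): boolean {
  return req.cookie === SESSION_COOKIE;
}

// UM-style server: authorizes on the unguessable token carried in the URL.
function umServer(req: SimRequest): boolean {
  const u = new URL(req.url, "https://a.example");
  return u.searchParams.get("t") === CAP_TOKEN;
}

// Site B forges a request to site A on Alice's behalf. The browser adds
// Alice's cookie automatically, but B cannot guess the capability token.
const forged: SimRequest = { url: "/transfer?to=mallory", cookie: SESSION_COOKIE };

console.log(aclServer(forged)); // true: the deputy is confused
console.log(umServer(forged));  // false: no token, no authority
```

In the ACL model the forged request carries authority the attacker never held; in the UM model authority travels only with explicit designation of the token.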

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Jonas Sicking
On Mon, Dec 14, 2009 at 4:52 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Sun, Dec 13, 2009 at 6:15 PM, Maciej Stachowiak m...@apple.com wrote:
 There seem to be two schools of thought that to some extent inform the
 thinking of participants in this discussion:
 1) Try to encourage capability-based mechanisms by not providing anything
 that lets you extend the use of origins and cookies.
 2) Try to build on the model that already exists and that we are likely
 stuck with, and provide practical ways to mitigate its risks.

 My own perspective on this is:
 3) In scenarios involving more than 2 parties, the ACL model is
 inherently vulnerable to CSRF-like problems. So, for cross-origin
 scenarios, a non-ACL model solution is needed.

 The above is a purely practical perspective. When writing or auditing
 code, UM provides a way to eliminate an entire class of attacks. I
 view it the same way I do moving from C to a memory safe language to
 avoid buffer overflow and related attacks.

For what it's worth, I'm not sure that "eliminating" is correct here.
With UM, I can certainly see people doing things like using a wrapping
library for all UM requests (very commonly done with XHR today), and
then letting that library add the security token to the request.

If such a site then retrieves a URL from a 3rd party and uses the
library to fetch, or POST to, a resource, that could lead to the same
confused deputy problems.

I agree that UM lessens the risk that this will happen though. And it
eliminates the ability for anyone to blame the browser vendor when it
happens.
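The wrapping-library concern can be sketched like this (names invented; the "fetch" is simulated by returning the URL that would be requested):

```typescript
// Hypothetical convenience wrapper that attaches the site's UM token to
// every outgoing request, the way many XHR wrappers add auth headers today.
const SECRET_TOKEN = "k7f93q"; // capability token this site was granted

function fetchWithToken(url: string): string {
  const u = new URL(url);
  u.searchParams.set("t", SECRET_TOKEN); // token added indiscriminately
  return u.toString(); // stand-in for actually issuing the request
}

// The site later fetches a URL it received from an untrusted third party...
const attackerSupplied = "https://evil.example/collect";
const requested = fetchWithToken(attackerSupplied);

// ...and its authority leaks: the wrapper has recreated a confused deputy.
console.log(requested); // https://evil.example/collect?t=k7f93q
```

The browser no longer attaches credentials ambiently, but the library does, which is Jonas's point: UM removes the attack from the platform, not necessarily from the application.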

/ Jonas



Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-13 Thread Maciej Stachowiak


I enter this subthread with trepidation, because I do not think the  
Working Group is in a position to engage in a literature review on an  
active research topic. However, a few comments below:


On Dec 13, 2009, at 1:29 PM, Mark S. Miller wrote:

On Sun, Dec 13, 2009 at 12:26 PM, Adam Barth w...@adambarth.com wrote:
On Sun, Dec 13, 2009 at 8:54 AM, Mark S. Miller erig...@google.com wrote:
 On Sat, Dec 12, 2009 at 7:17 PM, Adam Barth w...@adambarth.com wrote:

 I agree with Jonas.  It seems unlikely we'll be able to
 design-by-committee around a difference in security philosophy dating
 back to the 70s.

 Hi Adam, the whole point of arguing is to settle controversies. That is how
 human knowledge advances. If after 40 years the ACL side has no defenses
 left for its position, ACL advocates should have the good grace to concede
 rather than cite the length of the argument as a reason not to resolve the
 argument.

I seriously doubt we're going to advance the state of human knowledge
by debating this topic on this mailing list.  The scientific community
is better equipped for that than the standards community.


AFAICT, the last words on this debate in the scientific literature  
are the Horton paper http://www.usenix.org/event/hotsec07/tech/full_papers/miller/miller.pdf 
 and the prior refutations it cites:


Because ocaps operate on an anonymous “bearer right” basis, they  
seem to make reactive control impossible. Indeed, although many  
historical criticisms of ocaps have since been refuted [11, 16, 10,  
17], a remaining unrefuted criticism is that they cannot record who  
to blame for which action [6]. This lack has led some to forego the  
benefits of ocaps.


The point of the Horton paper itself is to refute that last criticism.


That paper seems to respond to a criticism of object-capability  
systems. Specifically, it shows a protocol that apparently lets you  
associate communication with an identity in an object capability  
system to allow logging and reactively restricting access. At least  
I'm pretty sure it does, it took me several readings to properly  
understand it.


This paper does not appear to give an argument that capability models  
are in general superior to other models.




[11] Capability Myths Demolished http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf 
 or http://www.usenix.org/events/hotsec07/tech/full_papers/miller/miller_html/ 



Those two don't seem to link to the same paper.

Referee rejection of Myths at http://www.eros-os.org/pipermail/cap-talk/2003-March/001133.html. Read carefully, especially Boebert's criticisms.


I'm not sure what we are supposed to conclude from the rejection  
comments (or from a rejected paper in general).


[16] Verifying the EROS Confinement Mechanism http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.6577 



The point being made here seems too technical to relate to the current  
discussion (at least to my non-expert understanding).




[10] Robust Composition http://erights.org/talks/thesis/. Notice  
in particular the counter-example to Boebert's famous claim in seven  
lines of simple code, in Figure 11.2.


[17] Patterns of Safe Collaboration http://www.evoluware.eu/fsp_thesis.pdf 
, which does a formal analysis of (among other things) confused  
deputy, Boebert's claim, and my counter-example.


[6] Traditional capability-based systems: An analysis of their  
ability to meet the trusted computer security evaluation criteria. http://www.webstart.com/jed/papers/P-1935/ 



There seems to be a whole lot of material on whether capability-based  
systems can enforce the *-property. It's not obvious to me how this is  
relevant to the discussion. If my understanding of the *-property is  
correct, it's not a property that we are trying to enforce in the  
context of Web security. But to be fair, I did not know what the *- 
property was until just a few minutes ago, so my opinion cannot be  
considered very well informed.




If you know of any responses to these refutations in the scientific  
literature, please cite them. If you believe (as I do) that the lack  
of responses is due to ignorance and avoidance, then either
1) the scientific community has shown itself less well equipped to  
engage in this debate than those who are actively engaged in it --  
such as us here on this list,
2) that the case against these alleged refutations are so obvious  
that they need not be stated, or
3) that the members of the scientific community that cares about  
these issues have found no flaw in these refutations -- in which  
case they legitimately should stand as the last word.


The literature you cited seems to mostly be about whether capability  
systems have various technical flaws, and whether they can be made to  
do various things that ACL-based systems can do. This does not seem to  
me to show that the science is settled on how to design security  
systems.


I'm also not sure that this Working Group is an appropriate 

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-13 Thread Mark S. Miller
On Sun, Dec 13, 2009 at 3:19 PM, Maciej Stachowiak m...@apple.com wrote:


 I enter this subthread with trepidation, because I do not think the Working
 Group is in a position to engage in a literature review on an active
 research topic. However, a few comments below:


I am not the one who brought up the controversy dating from the '70s as
relevant to this discussion. I am merely clarifying how one should interpret
the history of that controversy.



 On Dec 13, 2009, at 1:29 PM, Mark S. Miller wrote:

 On Sun, Dec 13, 2009 at 12:26 PM, Adam Barth w...@adambarth.com wrote:

 On Sun, Dec 13, 2009 at 8:54 AM, Mark S. Miller erig...@google.com
 wrote:
  On Sat, Dec 12, 2009 at 7:17 PM, Adam Barth w...@adambarth.com wrote:
  I agree with Jonas.  It seems unlikely we'll be able to
  design-by-committee around a difference in security philosophy dating
  back to the 70s.
 
  Hi Adam, the whole point of arguing is to settle controversies. That is
 how
  human knowledge advances. If after 40 years the ACL side has no defenses
  left for its position, ACL advocates should have the good grace to
 concede
  rather than cite the length of the argument as a reason not to
 resolve the
  argument.

 I seriously doubt we're going to advance the state of human knowledge
 by debating this topic on this mailing list.  The scientific community
 is better equipped for that than the standards community.


 AFAICT, the last words on this debate in the scientific literature are the
 Horton paper 
 http://www.usenix.org/event/hotsec07/tech/full_papers/miller/miller.pdf
 and the prior refutations it cites:

 Because ocaps operate on an anonymous “bearer right” basis, they seem to
 make reactive control impossible. Indeed, although many historical
 criticisms of ocaps have since been refuted [11, 16, 10, 17], a remaining
 unrefuted criticism is that they cannot record who to blame for which action
 [6]. This lack has led some to forego the benefits of ocaps.


 The point of the Horton paper itself is to refute that last criticism.


 That paper seems to respond to a criticism of object-capability systems.
 Specifically, it shows a protocol that apparently lets you associate
 communication with an identity in an object capability system to allow
 logging and reactively restricting access. At least I'm pretty sure it does,
 it took me several readings to properly understand it.

 This paper does not appear to give an argument that capability models are
 in general superior to other models.

 Agreed. Since no one any more challenges the assertion that ocaps are
superior to ACLs in some ways, the remaining question is whether ACLs are
superior to ocaps in other ways. If they are not, then ocaps are simply
strictly superior to ACLs. The references cited refute prior claims about
weaknesses of ocaps compared to ACLs.




 [11] Capability Myths Demolished 
 http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf or 
 http://www.usenix.org/events/hotsec07/tech/full_papers/miller/miller_html/
 


 Those two don't seem to link to the same paper.

 Yes, my mistake. The second link is another link to Horton, not to Myths.


 Referee rejection of Myths at 
 http://www.eros-os.org/pipermail/cap-talk/2003-March/001133.html. Read
 carefully, especially Boebert's criticisms.


 I'm not sure what we are supposed to conclude from the rejection comments
 (or from a rejected paper in general).

 [16] Verifying the EROS Confinement Mechanism 
 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.6577


 The point being made here seems too technical to relate to the current
 discussion (at least to my non-expert understanding).


Adam is an expert. I am challenging him wrt the status of the debate in the
scientific literature and what we should conclude from it, since he raised
the topic.


 [10] Robust Composition http://erights.org/talks/thesis/. Notice in
 particular the counter-example to Boebert's famous claim in seven lines of
 simple code, in Figure 11.2.

 [17] Patterns of Safe Collaboration 
 http://www.evoluware.eu/fsp_thesis.pdf, which does a formal analysis of
 (among other things) confused deputy, Boebert's claim, and my
 counter-example.

 [6] Traditional capability-based systems: An analysis of their ability to
 meet the trusted computer security evaluation criteria. 
 http://www.webstart.com/jed/papers/P-1935/


 There seems to be a whole lot of material on whether capability-based
 systems can enforce the *-property. It's not obvious to me how this is
 relevant to the discussion. If my understanding of the *-property is
 correct, it's not a property that we are trying to enforce in the context of
 Web security. But to be fair, I did not know what the *-property was until
 just a few minutes ago, so my opinion cannot be considered very well
 informed.

 I actually think the *-properties are almost completely unimportant, and
hardly ever relate to any practical issue. However, Boebert's impossibility
claim was then repeatedly cited, including by 

Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-13 Thread Maciej Stachowiak


On Dec 13, 2009, at 3:47 PM, Mark S. Miller wrote:

On Sun, Dec 13, 2009 at 3:19 PM, Maciej Stachowiak m...@apple.com wrote:



The literature you cited seems to mostly be about whether capability  
systems have various technical flaws, and whether they can be made  
to do various things that ACL-based systems can do. This does not  
seem to me to show that the science is settled on how to design  
security systems.



If there are undisputed weaknesses of ACLs compared to capabilities,  
and undisputed refutations of all claimed weaknesses of capabilities
compared to ACLs, then what more is needed for the science to be settled?


Even if that is true with respect to formal security properties (and I  
honestly don't know), it doesn't necessarily show that ACL-based  
systems are always dangerously unsafe, or that the formal differences  
actually matter in practice in a particular case, enough to outweigh  
any pragmatic considerations in the other direction.




I'm also not sure that this Working Group is an appropriate venue to  
determine the answer to that question in a general way. I don't  
think most of us have the skillset  to review the literature. Beyond  
that, our goal in the Working Group is to do practical security  
analysis of concrete protocols, and if there are flaws, to address  
them. If there are theoretical results that have direct bearing on  
Working Group deliverables, then the best way to provide that  
information would be to explain how they apply in that specific  
context.


Fine with me. That's what we were doing before Adam raised the  
history of this controversy as an argument that we should stop.


One important point to consider is that we are not deploying into a  
vacuum. The Web already pervasively makes use of tokens that are  
passively passed around to identify the agent (I feel a little weird  
calling these ACLs given the specific uses). In particular, the notion  
of origin is used already to control client-side scripting access to  
the DOM; and cookies are used pervasively for persistent login.


I don't see a clear plan on the table for removing these passive  
identifiers. Removing same-origin policy for scripting would require  
either majorly redesigning scripting APIs or would lead to massive  
security holes in existing sites. As for cookies, it does not seem  
anyone has a practical replacement that allows a user to persistently  
stay logged into a site. In fact, many proposed mechanisms for
cross-site communication ultimately depend at some point on cookies,
including you and Tyler's proposed UM-based protocol for cross-site  
communication without prior arrangement.


Even if a pure capability-based system is better than a pure ACL-based  
system (and I really have no way to evaluate, except to note that a  
large number of security architectures in widespread production use  
seem to be on some level ACL-based), it's not clear to me that solely  
pushing capabilities is the best way to improve the already existing  
Web.


There seem to be two schools of thought that to some extent inform the  
thinking of participants in this discussion:


1) Try to encourage capability-based mechanisms by not providing  
anything that lets you extend the use of origins and cookies.
2) Try to build on the model that already exists and that we are  
likely stuck with, and provide practical ways to mitigate its risks.


I don't see how we are going to settle the disagreement by further  
mailing list debate, because it seems to me that much of it is at the  
level of design philosophy, not provable security properties.


Regards,
Maciej