Re: XMLHttpRequest Comments from W3C Forms WG
On Dec 16, 2009, at 21:47, Klotz, Leigh wrote: I'd like to suggest that the main issue is dependency of the XHR document on concepts where HTML5 is the only specification that defines several core concepts of the Web platform architecture, such as event loops, event handler attributes, etc.

A user agent that doesn't implement the core concepts isn't much use for browsing the Web. Since the point of the XHR spec is getting interop among Web browsers, it isn't a good allocation of resources to make XHR not depend on things that a user agent that is suitable for browsing the Web needs to support anyway. XHR interop doesn't matter much if XHR is transplanted into an environment where the other pieces fail to be interoperable with Web browsing software. That is, in such a case, it isn't much use if XHR itself works like XHR in browsers--the system as a whole still doesn't interoperate with Web browsers.

-- Henri Sivonen hsivo...@iki.fi http://hsivonen.iki.fi/
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
Somehow I suspect all this has been said many times before...

On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak m...@apple.com wrote: CORS would provide at least two benefits, using the exact protocol you'd use with UM:

1) It lets you know what site is sending the request; with UM there is no way for the receiving server to tell. Site A may wish to enforce a policy that any other site that wants access has to request it individually. But with UM, there is no way to prevent Site B from sharing its unguessable URL to the resource with another site, or even to tell that Site B has done so. (I've seen papers cited that claim you can do proper logging using an underlying capabilities mechanism if you do the right things on top of it, but Tyler's protocol does not do that; and it is not at all obvious to me how to extend such results to tokens passed over the network, where you can't count on a type system to enforce integrity at the endpoints like you can with a system all running in a single object capability language.)

IMO, this isn't useful information. If Alice is a user at my site, and I hand Alice a capability to access her data from my site, it should not make a difference to me whether Alice chooses to access that data using Bob's site or Charlie's site, any more than it makes a difference to me whether Alice chooses to use Firefox or Chrome. Saying that Alice is only allowed to access her data using Bob's site but not Charlie's is analogous to saying she can only use approved browsers. This provides a small amount of security at the price of greatly annoying users and stifling innovation (think mash-ups).

Perhaps, though, you're suggesting that users should be able to edit the whitelist that is applied to their data, in order to provide access to new sites? But this seems cumbersome to me -- both to the user, who needs to manage this whitelist, and to app developers, who can no longer delegate work to other hosts. 
(Of course, if you want to know the origin for non-security reasons (e.g. to log usage for statistical purposes, or deal with compatibility issues) then you can have the origin voluntarily identify itself, just as browsers voluntarily identify themselves.)

2) It provides additional defense if the unguessable URL is guessed, either because of the many natural ways URLs tend to leak, or because of a mistake in the algorithm that generates unguessable URLs, or because either Site B or Site A unintentionally disclose it to a third party. By using an unguessable URL *and* checking Origin and Cookie, Site A would still have some protection in this case. An attacker would have to not only break the security of the secret token but would also need to manage a confused deputy type attack against Site B, which has legitimate access, thus greatly narrowing the scope of the vulnerability. You would need two separate vulnerabilities, and an attacker with the opportunity to exploit both, in order to be vulnerable to unauthorized access.

Given the right UI, a capability URL should be no more leak-prone than a cookie. Sure, we don't want users to ever actually see capability URLs since they might then choose to copy/paste them into who knows where, but it's quite possible to hide the details behind the scenes, just like we hide cookie data. So, I don't think this additional defense is really worth much, unless you are arguing that cookies are insecure for the same reasons. (Perhaps we should only allow users to use approved browsers because other browsers might leak cookie data?) And again, this additional defense has great costs, as described above.

So, no, I still think CORS provides no benefit for the protocol I described. It may seem to provide benefits, but the benefits actually cost far more than they are worth.
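The defense-in-depth being debated here (an unguessable URL *and* an Origin check) can be sketched in a few lines. This is only an illustration of the idea under discussion, not any proposal's actual protocol; the host names, the allowlist, and the function names are all hypothetical:

```python
import hmac
import secrets

# Hypothetical in-memory state for Site A (illustration only).
issued_tokens = set()                         # unguessable tokens Site A has minted
origin_allowlist = {"https://bob.example"}    # origins Site A has granted access

def mint_capability_url(base="https://site-a.example/feed/"):
    # ~128 bits of entropy makes the URL unguessable in practice
    token = secrets.token_urlsafe(16)
    issued_tokens.add(token)
    return base + token

def authorize(token, origin):
    # Defense in depth: require BOTH a valid secret token and an allowed
    # Origin, so an attacker needs two separate breaks to gain access.
    token_ok = any(hmac.compare_digest(token, t) for t in issued_tokens)
    return token_ok and origin in origin_allowlist

url = mint_capability_url()
token = url.rsplit("/", 1)[1]
assert authorize(token, "https://bob.example")
assert not authorize(token, "https://evil.example")      # right token, wrong origin
assert not authorize("guessed-token", "https://bob.example")  # wrong token
```

Under this shape an attacker needs both the secret and a request from an allowed origin, which is the "two separate vulnerabilities" property Maciej describes; Kenton's counterargument is that the origin check adds little once the token is properly random.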
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Dec 17, 2009, at 1:42 AM, Kenton Varda wrote: Somehow I suspect all this has been said many times before... On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak m...@apple.com wrote: CORS would provide at least two benefits, using the exact protocol you'd use with UM: 1) It lets you know what site is sending the request; with UM there is no way for the receiving server to tell. Site A may wish to enforce a policy that any other site that wants access has to request it individually. But with UM, there is no way to prevent Site B from sharing its unguessable URL to the resource with another site, or even to tell that Site B has done so. (I've seen papers cited that claim you can do proper logging using an underlying capabilities mechanism if you do the right things on top of it, but Tyler's protocol does not do that; and it is not at all obvious to me how to extend such results to tokens passed over the network, where you can't count on a type system to enforce integrity at the endpoints like you can with a system all running in a single object capability language.)

IMO, this isn't useful information. If Alice is a user at my site, and I hand Alice a capability to access her data from my site, it should not make a difference to me whether Alice chooses to access that data using Bob's site or Charlie's site, any more than it makes a difference to me whether Alice chooses to use Firefox or Chrome. Saying that Alice is only allowed to access her data using Bob's site but not Charlie's is analogous to saying she can only use approved browsers. This provides a small amount of security at the price of greatly annoying users and stifling innovation (think mash-ups).

I'm not saying that Alice should be restricted in who she shares the feed with. Just that Bob's site should not be able to automatically grant Charlie's site access to the feed without Alice explicitly granting that permission. Many sites that use workarounds (e.g. 
server-to-server communication combined with client-side form posts and redirects) to share their data today would like grants to be to another site, not to another site plus any third party site that the second site chooses to share with.

Perhaps, though, you're suggesting that users should be able to edit the whitelist that is applied to their data, in order to provide access to new sites? But this seems cumbersome to me -- both to the user, who needs to manage this whitelist, and to app developers, who can no longer delegate work to other hosts.

An automated permission grant system that vends unguessable URLs could just as easily manage the whitelist. It is true that app developers could not unilaterally grant access to other origins, but this is actually a desired property for many service providers. Saying that this feature is cumbersome for the service consumer does not lead the service provider to desire it any less.

(Of course, if you want to know the origin for non-security reasons (e.g. to log usage for statistical purposes, or deal with compatibility issues) then you can have the origin voluntarily identify itself, just as browsers voluntarily identify themselves.)

2) It provides additional defense if the unguessable URL is guessed, either because of the many natural ways URLs tend to leak, or because of a mistake in the algorithm that generates unguessable URLs, or because either Site B or Site A unintentionally disclose it to a third party. By using an unguessable URL *and* checking Origin and Cookie, Site A would still have some protection in this case. An attacker would have to not only break the security of the secret token but would also need to manage a confused deputy type attack against Site B, which has legitimate access, thus greatly narrowing the scope of the vulnerability. You would need two separate vulnerabilities, and an attacker with the opportunity to exploit both, in order to be vulnerable to unauthorized access. 
Given the right UI, a capability URL should be no more leak-prone than a cookie. Sure, we don't want users to ever actually see capability URLs since they might then choose to copy/paste them into who knows where, but it's quite possible to hide the details behind the scenes, just like we hide cookie data.

Hiding capability URLs completely from the user would require some mechanism that has not yet been proposed in a concrete form. So far the ways to vend the URL to the service consumer that have been proposed include user copy/paste, and cross-site form submission with redirects, both of which expose the URL. However, accidental disclosure by the user is not the only risk.

So, I don't think this additional defense is really worth much, unless you are arguing that cookies are insecure for the same reasons.

Sites do, on occasion, make mistakes in the algorithms for generating session cookies. Or
[widgets] white space handling
Hi Widget addicts,

While reading again through the spec, I'm wondering why there are differences between the PC spec and the XML spec in terms of white space handling. PC defines:

* space characters as: U+0020, U+0009, U+000A, U+000B, U+000C, U+000D
* Unicode white space characters as: U+0009-U+000D, U+0020, U+0085, U+00A0, U+1680, U+180E, U+2000-U+200A, U+2028, U+2029, U+202F, U+205F, U+3000
* control characters as: U+0000-U+001F, U+007F
* forbidden characters as: control characters and U+003C, U+003E, U+003A, U+0022, U+002F, U+005C, U+007C, U+003F, U+002A, U+005E, U+0060, U+007B, U+007D, U+0021.

Space characters are used in the Rule for Getting a Single Attribute Value, the Rule for Getting a List of Keywords From an Attribute, the Rule for Parsing a Non-negative Integer, the algorithm to derive the user agent locales, and ZIP handling. Unicode white space characters are used only in the Rule for Getting Text Content with Normalized White Space. Control characters are used only in forbidden characters, and forbidden characters are used only in ZIP processing. XML defines white space as: U+0020, U+0009, U+000A, U+000D.

Given that, I have the following questions/remarks:

- Why do you define control characters; can't you put their code points in forbidden characters? This would simplify the spec and make it easier to understand.
- Could you rename forbidden characters to ZIP forbidden characters? This would clearly indicate in which area they are forbidden and why they are defined.
- Why do the definitions of PC space characters and Unicode white space characters differ from the XML white space definition? For Unicode white space characters, I could understand this difference since it's only used in the Rule for Getting Text Content with Normalized White Space, which first applies XML parsing and DOM3 textContent behavior and then applies additional PC-defined behavior. But still, I'm wondering: is this difference really needed? 
If yes, can you add a note explaining the rationale and the difference from the basic XML processing? For space characters, why did you add U+000B and U+000C?

- Ignoring U+000B and U+000C, the Rule for Getting a Single Attribute Value seems to me to be already defined in XML as Attribute-Value Normalization (http://www.w3.org/TR/xml/#AVNormalize). I could understand that you want a self-contained spec, but you should at least indicate that the behavior is the same as the basic XML processing.

Best regards,
Cyril

-- Cyril Concolato Maître de Conférences/Associate Professor Groupe Multimédia/Multimedia Group Telecom ParisTech 46 rue Barrault 75 013 Paris, France http://concolato.blog.telecom-paristech.fr/
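The difference Cyril is asking about is easy to demonstrate: the PC and XML space-character sets disagree only on U+000B and U+000C. A minimal sketch, assuming the attribute-value rule strips leading and trailing space characters (the function names are mine, not the spec's):

```python
# PC "space characters" include U+000B and U+000C; XML's S production does not.
PC_SPACE = "\u0020\u0009\u000A\u000B\u000C\u000D"
XML_SPACE = "\u0020\u0009\u000A\u000D"

def strip_pc_space(value: str) -> str:
    # Rough reading of the PC attribute-value rule: strip leading and
    # trailing space characters.
    return value.strip(PC_SPACE)

def strip_xml_space(value: str) -> str:
    # Same operation over XML's four-character white space set.
    return value.strip(XML_SPACE)

# The two sets disagree only on vertical tab (U+000B) and form feed (U+000C):
assert strip_pc_space("\u000bhello\u000c") == "hello"
assert strip_xml_space("\u000bhello\u000c") == "\u000bhello\u000c"
```

Any attribute value that avoids vertical tab and form feed is handled identically by both definitions, which is why the question of whether the extra two characters are really needed seems worth asking.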
[widgets] Authorities will never have authority?
Sorry, I missed the followup on Larry's email http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0131.html - can someone tell me where this is tracked? Specifically I want to check that the 'authority' component is adequately futureproofed. Devoid of semantics could mean devoid in this and future versions, i.e. it's a comment field. In this case it would be better to call it comment rather than authority. If instead you mean devoid in this version, but some revision of the spec gives some meaning, then you have to provide at least one value that a widget: URI minter can put in that field that will never, in the future, be taken to mean something that's not meant. Sorry for the post-LC comment, but probably a simple change of wording can clarify this to those who will be implementing the spec. Best Jonathan

On Sat, Oct 10, 2009 at 12:44 PM, Larry Masinter masin...@adobe.com wrote: Re the widget: scheme http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0115.html http://www.w3.org/TR/2009/WD-widgets-uri-20091008/ (bcc www-tag since original call was; there's an AWWW suggestion buried in here somewhere, though)

1) ** WELL DEFINED QUERY AND AUTHORITY ** http://www.w3.org/TR/webarch/#URI-scheme points to RFC 2717, which has been replaced by RFC 4395. I think WebArch should be updated to recommend that W3C recommendations must use permanent schemes and not provisional ones. RFC 4395 requires that permanent scheme definitions be Well-defined. Leaving in syntactic components and declaring them out of scope is leaving them undefined. Suggestion: Remove 'authority' from the syntax, and any sections that refer to them; disallow query components. Alternate Suggestion: define the meaning of authority and query components. 
2) ** WELL-DEFINED MAPPING TO FILES ** Section 4.4 Step 2 makes normative reference: http://www.w3.org/TR/widgets/#rule-for-finding-a-file-within-a-widget- The algorithm there seems to be lacking a clear definition of matches which deals reasonably with the issues surrounding matching and equivalence for Unicode strings, or the handling of character sets in IRIs which are not represented in UTF8. Suggestion (Editorial): Move the definition of the mapping algorithm into the URI scheme registration document so that its definition can be reviewed for completeness. Suggestion (Technical): Define exactly and precisely what match means and make it clear what the appropriate response or error conditions are if there is more than one file that matches. 3) ** Reuse URI schemes ** http://www.w3.org/TR/webarch/#URI-scheme includes Good practice: Reuse URI schemes A specification SHOULD reuse an existing URI scheme (rather than create a new one) when it provides the desired properties of identifiers and their relation to resources. The draft suggests there are many other schemes (with merit) already proposed, but that these existing efforts, rather than identify packaged resources from the outside, widget URIs identify them only on the inside of a package, irrespective of that package's own location., but this seems to indicate that the requirements for widget URIs are weaker, not stronger. Suggestion: Supply use cases where reuse of existing schemes (including thismessage:/) do not provide the desired properties of identifiers and their relation to resources. Alternate Suggestion: Withdraw registration of widget: and reference existing scheme. Alternate Suggestion: Provide guidelines so that widget: can be used for other applications that need a way of referencing components within ZIP packages; rename widget: to use a scheme name that is appropriate for this broader application. AWWW Suggestion: add guideline: Make New URI Schemes Reusable If You Can't Reuse URI schemes. 
4) ** EDITORIAL RE OTHER SCHEME ** In fact, it is possible that both this scheme and another defined to access Zip archive content would be used jointly, with little or no overlap in functionality. Without any other context, this is incomprehensible. Suggestion: remove sentence. 5) ** EDITORIAL USE OF URI FOR IRI ** Throughout this specification, wherever the term URI [URI] is used, it can be replaced interchangeably with the term IRI [RFC3987]. All widget URIs are IRIs, but the term URI is more common and was therefore preferred for readability. Seriously, do we need a W3C Guideline or Finding to cover DO NOT REDEFINE TERMS? There's glory for you! (see http://www.sabian.org/Alice/lgchap06.htm ). Suggestion: Use IRI since that's what is meant. 6) ** EDITORIAL RE FRAGMENT ** Note that assigning semantics or interpretation to the query or fragment components is outside the scope of this specification. The ways in which they are used depends on the content types that they are applied to, or what executable script decides to do with them. The wording might be taken to mean that a URI scheme
[widgets] Draft Minutes for 17 December 2009 Voice Conference
The draft minutes from the 17 December Widgets voice conference are available at the following and copied below: http://www.w3.org/2009/12/17-wam-minutes.html WG Members - if you have any comments, corrections, etc., please send them to the public-webapps mail list before 7 January 2010 (the next Widgets voice conference); otherwise these minutes will be considered Approved. -Regards, Art Barstow [1]W3C [1] http://www.w3.org/ - DRAFT - Widgets Voice Conference 17 Dec 2009 [2]Agenda [2] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1343.html See also: [3]IRC log [3] http://www.w3.org/2009/12/17-wam-irc Attendees Present Art, Arve, Josh, Marcos, Frederick, AndyB, Robin, Benoit Regrets Marcin Chair Art Scribe Art Contents * [4]Topics 1. [5]Review and tweak agenda 2. [6]Announcements 3. [7]Widget DigSig spec: test suite status 4. [8]Widget DigSig spec: implementation status 5. [9]URI spec: status of LC comments 6. [10]URI spec: scheme registration 7. [11]View Modes Media Features spec 8. [12]Updates spec 9. [13]AOB * [14]Summary of Action Items Scribe: Art ScribeNick: ArtB Date: 17 December 2009 Review and tweak agenda AB: yesterday I submitted the draft agenda for today ( [15]http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1343.html ). The meeting will end at the end of this hour at the latest. Any change requests? [15] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1343.html [ no ] Announcements AB: the only announcement I have is the next call is 7 January 2010. Any other short announcements? ... any others? [ no ] Widget DigSig spec: test suite status AB: the Test Suite wiki ( [16]http://www.w3.org/2008/webapps/wiki/WidgetTesting#Widgets_1.0:_Digital_Signature_spec ) contains some info about DigSig. What is the status of the DigSig test suite? [16] http://www.w3.org/2008/webapps/wiki/WidgetTesting#Widgets_1.0:_Digital_Signature_spec MC: I have not been working on it ... Kai and Dom started ... 
but they haven't done anything in the last two months AB: does anyone anticipate participating in this test suite? MC: yes; expect Opera to contribute tests next year ... but there is a lot of work to do ... must go through every assertion in the spec ... and create a test case ... Not sure how much has been done by Kai and Dom ... perhaps they have discussed it in the MWTS WG ... The spec isn't written the same way PC is so it's a bit more work to generate test assertions scribe ACTION: barstow to follow-up with MWTS WG re the Widget DigSig test suite re their plans, status, etc. [recorded in [17]http://www.w3.org/2009/12/17-wam-minutes.html#action01] trackbot Created ACTION-471 - Follow-up with MWTS WG re the Widget DigSig test suite re their plans, status, etc. [on Arthur Barstow - due 2009-12-24]. AB: I'll plan to provide an update re MWTS on Jan 7 MC: I'll also follow-up with Kai AB: anyone else on DigSig test suite for today? Widget DigSig spec: implementation status AB: the Implementation wiki ( [18]http://www.w3.org/2008/webapps/wiki/WidgetImplementation ) contains some info about DigSig. Is there any other Public info about DigSig implementations we can add? ... is there anything else to add here? [18] http://www.w3.org/2008/webapps/wiki/WidgetImplementation [ silence ] fjh what do we need to do with andreas? AB: anything else on this topic for today? FH: I think Andreas is asking for some help ... he is looking for a CA ... perhaps there is some process that needs to start ... I'm not quite sure how W3C would work with him AB: is there anything from the previous XML DigSig interop that can be used? FH: just use openssl ... perhaps he can move his stuff into the WG's space MC: have you looked at Andreas' work? FH: perhaps he's just offering the service ... 
and we need to wait for another impl scribe ACTION: barstow respond to the 21-Oct-2009 email from Andreas re Widget Dig Sig [recorded in [19]http://www.w3.org/2009/12/17-wam-minutes.html#action02] trackbot Created ACTION-472 - Respond to the 21-Oct-2009 email from Andreas re Widget Dig Sig [on Arthur Barstow - due 2009-12-24]. AB: anything else on DigSig for today? [ no ] URI spec: status of LC comments AB: comment tracking document is ( [20]http://www.w3.org/2006/02/lc-comments-tracker/42538/WD-widgets-u ri-20091007/doc/ ). What is the status of Larry Masinter's replies to your responses Robin? [20]
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Wed, 16 Dec 2009, Devdatta wrote: hmm.. just a XDR GET on the file at hixie.ch which allows access only if the request is from damowmow.com ? It couldn't be XDR -- XDR is a script-based mechanism, whereas XBL can be invoked before the root element is parsed. But even assuming the XDR protocol could be extended to XBL, that would require scripting or much more complicated .htaccess rules. With CORS, I can do it with one simple line in .htaccess. Also, as I understand it, XDR sends an Origin header, which is what UM is trying to avoid. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
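For reference, the "one simple line in .htaccess" Ian alludes to would presumably look something like the following, assuming Apache with mod_headers enabled; the origin value is illustrative:

```apache
# Allow cross-origin use of these resources from damowmow.com only
# (sketch of the kind of one-liner described above; requires mod_headers)
Header set Access-Control-Allow-Origin "http://damowmow.com"
```

Because the header is emitted by static server configuration, no scripting is involved on either end, which is the deployment advantage being claimed for CORS here.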
RE: XMLHttpRequest Comments from W3C Forms WG
If XHR is wholly dependent on HTML5 then it should either be moved into the HTML5 recommendation-track document, or renamed XHR for HTML5. Ian has made a point that modularizing HTML5 itself is a large task; it's not clear that the same applies to this XHR document, at least to the same degree of work required. I don't see what harm comes from waiting to advance this XHR document until the necessary work has been done.

Leigh.

-Original Message- From: Henri Sivonen [mailto:hsivo...@iki.fi] Sent: Thursday, December 17, 2009 12:12 AM To: Klotz, Leigh Cc: Anne van Kesteren; WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG

On Dec 16, 2009, at 21:47, Klotz, Leigh wrote: I'd like to suggest that the main issue is dependency of the XHR document on concepts where HTML5 is the only specification that defines several core concepts of the Web platform architecture, such as event loops, event handler attributes, etc.

A user agent that doesn't implement the core concepts isn't much use for browsing the Web. Since the point of the XHR spec is getting interop among Web browsers, it isn't a good allocation of resources to make XHR not depend on things that a user agent that is suitable for browsing the Web needs to support anyway. XHR interop doesn't matter much if XHR is transplanted into an environment where the other pieces fail to be interoperable with Web browsing software. That is, in such a case, it isn't much use if XHR itself works like XHR in browsers--the system as a whole still doesn't interoperate with Web browsers.

-- Henri Sivonen hsivo...@iki.fi http://hsivonen.iki.fi/
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 2:21 AM, Maciej Stachowiak m...@apple.com wrote: On Dec 17, 2009, at 1:42 AM, Kenton Varda wrote: Somehow I suspect all this has been said many times before... On Wed, Dec 16, 2009 at 11:45 PM, Maciej Stachowiak m...@apple.com wrote: CORS would provide at least two benefits, using the exact protocol you'd use with UM: 1) It lets you know what site is sending the request; with UM there is no way for the receiving server to tell. Site A may wish to enforce a policy that any other site that wants access has to request it individually. But with UM, there is no way to prevent Site B from sharing its unguessable URL to the resource with another site, or even to tell that Site B has done so. (I've seen papers cited that claim you can do proper logging using an underlying capabilities mechanism if you do the right things on top of it, but Tyler's protocol does not do that; and it is not at all obvious to me how to extend such results to tokens passed over the network, where you can't count on a type system to enforce integrity at the endpoints like you can with a system all running in a single object capability language.) IMO, this isn't useful information. If Alice is a user at my site, and I hand Alice a capability to access her data from my site, it should not make a difference to me whether Alice chooses to access that data using Bob's site or Charlie's site, any more than it makes a difference to me whether Alice chooses to use Firefox or Chrome. Saying that Alice is only allowed to access her data using Bob's site but not Charlie's is analogous to saying she can only use approved browsers. This provides a small amount of security at the price of greatly annoying users and stifling innovation (think mash-ups). I'm not saying that Alice should be restricted in who she shares the feed with. Just that Bob's site should not be able to automatically grant Charlie's site access to the feed without Alice explicitly granting that permission. 
Many sites that use workarounds (e.g. server-to-server communication combined with client-side form posts and redirects) to share their data today would like grants to be to another site, not to another site plus any third party site that the second site chooses to share with.

OK, I'm sure that this has been said before, because it is critical to the capability argument: If Bob can access the data, and Bob can talk to Charlie *in any way at all*, then it *is not possible* to prevent Bob from granting access to Charlie, because Bob can always just serve as a proxy for Charlie's requests. What CORS does do is make it so that Bob (and Charlie, if he is proxying through Bob) can only access the resource while Alice has his site open in her browser. The same can be achieved with UM by generating a new URL for each visit, and revoking it as soon as Alice browses away.

Perhaps, though, you're suggesting that users should be able to edit the whitelist that is applied to their data, in order to provide access to new sites? But this seems cumbersome to me -- both to the user, who needs to manage this whitelist, and to app developers, who can no longer delegate work to other hosts. An automated permission grant system that vends unguessable URLs could just as easily manage the whitelist. It is true that app developers could not unilaterally grant access to other origins, but this is actually a desired property for many service providers. Saying that this feature is cumbersome for the service consumer does not lead the service provider to desire it any less.

You're right, the same UI I want for hooking up capabilities could also update the whitelist. But I still don't see where this is useful, given the above.

(Of course, if you want to know the origin for non-security reasons (e.g. to log usage for statistical purposes, or deal with compatibility issues) then you can have the origin voluntarily identify itself, just as browsers voluntarily identify themselves.) 
2) It provides additional defense if the unguessable URL is guessed, either because of the many natural ways URLs tend to leak, or because of a mistake in the algorithm that generates unguessable URLs, or because either Site B or Site A unintentionally disclose it to a third party. By using an unguessable URL *and* checking Origin and Cookie, Site A would still have some protection in this case. An attacker would have to not only break the security of the secret token but would also need to manage a confused deputy type attack against Site B, which has legitimate access, thus greatly narrowing the scope of the vulnerability. You would need two separate vulnerabilities, and an attacker with the opportunity to exploit both, in order to be vulnerable to unauthorized access. Given the right UI, a capability URL should be no more leak-prone than a cookie. Sure, we don't want users to ever actually see capability URLs
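The per-visit scheme suggested in this thread (generate a new unguessable URL for each visit, and revoke it as soon as Alice browses away) could be sketched roughly as follows. The class and method names are hypothetical, as is the departure signal that triggers revocation:

```python
import secrets

class PerVisitCapabilities:
    """Sketch: mint a fresh unguessable URL per visit and revoke it when the
    visit ends, approximating CORS's 'only while the page is open' property
    under a UM-style design."""

    def __init__(self, base):
        self.base = base
        self.live = {}  # token -> session id

    def grant(self, session_id):
        # Issue a fresh ~128-bit token tied to this visit.
        token = secrets.token_urlsafe(16)
        self.live[token] = session_id
        return self.base + token

    def revoke_session(self, session_id):
        # Called when Alice browses away (unload beacon, timeout, logout --
        # the exact trigger is an assumption of this sketch).
        self.live = {t: s for t, s in self.live.items() if s != session_id}

    def check(self, token):
        return token in self.live

caps = PerVisitCapabilities("https://site-a.example/feed/")
url = caps.grant("alice-visit-1")
token = url.rsplit("/", 1)[1]
assert caps.check(token)
caps.revoke_session("alice-visit-1")
assert not caps.check(token)
```

Once the session is revoked, a proxying Bob (or Charlie behind him) holds only a dead token, which is the time-bounding property being claimed for this approach.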
[widgets] test-suite: start file encoding
Test cases e5, e6, z1 and z2 test the ability of a UA to use a widget-specified charset (ISO 8859-1); however the PC specification states that a UA only has to implement UTF-8, and support for additional encodings is optional. Do these test cases then really only require that a UA processes the package and obtains the value ISO 8859-1 or Windows-1252 for the encoding attribute, even if the start file is actually encoded in UTF-8 when it is served by the UA?
Re: XMLHttpRequest Comments from W3C Forms WG
On Thu, Dec 17, 2009 at 9:10 AM, Klotz, Leigh leigh.kl...@xerox.com wrote: If XHR is wholly dependent on HTML5 then it should either be moved into the HTML5 recommendation-track document, or renamed XHR for HTML5. Ian has made a point that modularizing HTML5 itself is a large task; it's not clear that the same applies to this XHR document, at least to the same degree of work required. I don't see what harm comes from waiting to advance this XHR document until the necessary work has been done. XHR isn't wholly dependent on HTML5. It is however dependent on a few things that are currently only defined by the HTML5 specification. This means that if someone wants to implement XHR they will have to read parts of the HTML5 specification, and implement a few things defined in that specification. It does *not* however mean that if someone wants to implement XHR they will have to implement all, or even a significant portion, of the HTML5 specification. I hope that makes it clear? Yes, we could move XHR into the HTML5 specification, but I don't understand what problems that would solve. Feel free to elaborate. / Jonas
RE: XMLHttpRequest Comments from W3C Forms WG
Jonas, Thank you for your response; comments below:

-Original Message- From: Jonas Sicking [mailto:jo...@sicking.cc] Sent: Thursday, December 17, 2009 9:22 AM To: Klotz, Leigh Cc: Henri Sivonen; Anne van Kesteren; WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG

On Thu, Dec 17, 2009 at 9:10 AM, Klotz, Leigh leigh.kl...@xerox.com wrote: If XHR is wholly dependent on HTML5 then it should either be moved into the HTML5 recommendation-track document, or renamed XHR for HTML5. Ian has made a point that modularizing HTML5 itself is a large task; it's not clear that the same applies to this XHR document, at least to the same degree of work required. I don't see what harm comes from waiting to advance this XHR document until the necessary work has been done.

XHR isn't wholly dependent on HTML5. It is however dependent on a few things that are currently only defined by the HTML5 specification. This means that if someone wants to implement XHR they will have to read parts of the HTML5 specification, and implement a few things defined in that specification. It does *not* however mean that if someone wants to implement XHR they will have to implement all, or even a significant portion, of the HTML5 specification. I hope that makes it clear?

The Forms WG comment is that those items be abstracted out as requirements for any host of XHR, so that XHR need not depend on HTML5, but can interoperate with any system which provides those few things. One easy way to do this is to split the XHR document in two, with one document called XHR which lists the dependencies, and another called XHR For HTML5 which satisfies them. The amount of text changed need not be large: the references to HTML5 need to be changed to Requirements for host languages. (Moving away from monolithic specs and towards more interoperable documents is one of our goals for next year.)

Yes, we could move XHR into the HTML5 specification, but I don't understand what problems that would solve. 
Feel free to elaborate. I agree, it's not desirable at all, but it's the current state; the XHR document currently works only with HTML5. That's why we're suggesting an alternative that preserves the dependency on HTML5 for HTML5 integration, yet allows other implementations. / Jonas Leigh.
Re: [widgets] test-suite: start file encoding
On Thu, Dec 17, 2009 at 6:21 PM, Scott Wilson scott.bradley.wil...@gmail.com wrote: Test cases e5, e6, z1 and z2 test the ability of a UA to use a widget-specified charset (ISO 8859-1); however the PC specification states that a UA only has to implement UTF-8, and support for additional encodings is optional. Correct. Do these test cases then really only require that a UA processes the package and obtains the value ISO 8859-1 or Windows-1252 for the encoding attribute, even if the start file is actually encoded in UTF-8 when it is served by the UA? Yes, the purpose is only to obtain the value from the encoding string. The widget user agent may choose to disregard the acquired value if it does not support it. I might need to clarify that in the spec a bit. -- Marcos Caceres http://datadriven.com.au
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, 17 Dec 2009, Kenton Varda wrote: OK, I'm sure that this has been said before, because it is critical to the capability argument: If Bob can access the data, and Bob can talk to Charlie *in any way at all*, then it *is not possible* to prevent Bob from granting access to Charlie, because Bob can always just serve as a proxy for Charlie's requests. If confidentiality was the only problem, this would be true. However, it's not the only problem. One of the big reasons to restrict which origin can use a particular resource is bandwidth management. For example, resources.example.com might want to allow *.example.com to use its XBL files, but not allow anyone else to directly use the XBL files straight from resources.example.com. A proxy isn't a plausible attack in this scenario, because if someone can set up a proxy, they can with much more ease simply host the original file (which isn't a problem from the point of view of the original site). Furthermore, if someone _does_ host a proxy, then they are taking the same load hit as the original site, and therefore the risk to the original site is capped. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
RE: XMLHttpRequest Comments from W3C Forms WG
Jonas, I apologize if you and other group members consider this to be a pedantic exercise, but it's a necessary part of making the specification reusable. -Original Message- From: Jonas Sicking [mailto:jo...@sicking.cc] Sent: Thursday, December 17, 2009 9:45 AM To: Klotz, Leigh Cc: Henri Sivonen; Anne van Kesteren; WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG ... I don't think I understand your suggested changes. As long as the concepts that XHR uses are only defined in the HTML5 spec, XHR will always require that those things from the HTML5 spec are implemented when implementing XHR. This doesn't seem to change even if the XHR spec is split into two. XHR requires things; the things might come from HTML5, though any sufficient definition of them could come from somewhere else. It's up to the implementor of the XHR spec to say where the things come from. HTML5 implementors would obviously choose to provide the HTML5 definitions. This could be done by splitting the XHR document in two, with one part called XHR saying I need things and another part called XHR for HTML5 saying I have perfectly fine things here from HTML5 for XHR to use. HTML5 implementors would then be implementing XHR for HTML5, which is meet and just. However I don't really understand what specifically you are suggesting should live in the two XHR specs, so I could very well be misunderstanding you. Could you describe that in more detail? It's in the previous thread on this topic, and Ian and Anne have also variously listed the items. Please take the example of one dependency, that of origin and base from Anne's message of October 8, 2009. Anne explains the strategy below: If you reuse it you have to define the XMLHttpRequest origin and XMLHttpRequest base URL. 
From: Anne van Kesteren annevk at opera.com Subject: Re: [XHR] LC comments from the XForms Working Group Date: 2009-10-08 15:31:27 GMT On Tue, 17 Jun 2008 05:24:48 +0200, Boris Zbarsky bzbarsky at mit.edu wrote: Anne van Kesteren wrote: It would change the conformance criteria. I'm not sure that's a good idea. Especially since the use case put forward is mostly theoretical. Overall, I'm still not convinced this is a good idea. It doesn't seem necessarily that theoretical to me, for what it's worth. Anne, do you happen to have a more or less complete list of the current dependencies of XHR on Window, by chance? I think that information would be very helpful in seeing where things stand. To wrap this up, I changed XMLHttpRequest some time ago so it can be used in other contexts as well now. If you reuse it you have to define the XMLHttpRequest origin and XMLHttpRequest base URL. My apologies for being a bit stubborn on this earlier. It was mostly because I was hesitant about reworking how everything was put together, but it turned out that had to happen anyway. Hopefully it can now be of use to the Forms WG. Kind regards, -- Anne van Kesteren http://annevankesteren.nl/ So, to be clear, here's how to complete the change for the specific dependency that Anne calls out above. (This process is repeated for each dependency of XHR on HTML5.) Cf. section http://www.w3.org/TR/2009/WD-XMLHttpRequest-20091119/#origin-and-base-url Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification defines their values when the global object is represented by the Window object. When the XMLHttpRequest object is used in other contexts their values will have to be defined as appropriate for that context. That is considered to be out of scope for this specification. This text still results in a normative reference to HTML5. 
So change the XHR document to this: Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification does not define their values; they MUST be defined by the host integration. For an example integration with [HTML5 informative reference] see [XHR For HTML5 informative reference]. Further, the actual definitions would be removed from where they actually occur. Then the new rec-track document XHR for HTML5 would say this: Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification defines their values when the global object is represented by the Window object. And then go on to contain the actual text of the definitions pulled out from XHR. Leigh.
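The split Leigh proposes, a core XHR spec that states its requirements and a companion spec that satisfies them, can be sketched as an abstract contract. This is only an illustrative sketch; the class and method names here are invented for the example and appear in no spec:

```python
from abc import ABC, abstractmethod

class XhrHostIntegration(ABC):
    """What the core XHR document would require of any host (invented names)."""

    @abstractmethod
    def origin(self) -> str:
        """The XMLHttpRequest origin, as defined by the host integration."""

    @abstractmethod
    def base_url(self) -> str:
        """The XMLHttpRequest base URL, as defined by the host integration."""

class Html5WindowIntegration(XhrHostIntegration):
    """The companion 'XHR for HTML5' document binds both values to the Window."""

    def __init__(self, window_origin: str, document_base_url: str):
        self._origin = window_origin
        self._base = document_base_url

    def origin(self) -> str:
        return self._origin

    def base_url(self) -> str:
        return self._base

# A non-browser host (say, an XForms processor) could satisfy the same
# contract with its own definitions, with no dependency on HTML5.
host = Html5WindowIntegration("https://example.org", "https://example.org/app/")
assert host.origin() == "https://example.org"
assert host.base_url() == "https://example.org/app/"
```

The design point is the direction of the dependency: the XHR side names the values it needs, and each host spec, HTML5 or otherwise, supplies concrete definitions.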
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 10:08 AM, Maciej Stachowiak m...@apple.com wrote: My goal was merely to argue that adding an origin/cookie check to a secret-token-based mechanism adds meaningful defense in depth, compared to just using any of the proposed protocols over UM. I believe my argument holds. If the secret token scheme has any weakness whatsoever, whether in generation of the tokens, or in accidental disclosure by the user or the service consumer, origin checks provide an orthogonal defense that must be breached separately. This greatly reduces the attack surface. While this may not provide any additional security in theory, where we can assume the shared secret is generated and managed correctly, it does provide additional security in the real world, where people make mistakes. The reason the origin/cookie check doesn't provide defense in depth is that the programming patterns we want to support necessarily blow holes in any origin/cookie defense. We want clients to act as deputies, because that's a useful thing to be able to do. For example, consider a web page widget that implements the Observer pattern: when its state changes, it fires off a POST request to a list of observer URLs. Clients can register any URL they want with the web page widget. If these POST requests carry origin/cookies, then a CSRF-like attack is easy. There are lots of other ways we want to use the Web, as it is meant to be used, that aren't viable if you're trying to maintain the viability of an origin/cookie defense. For example, Ian correctly points out that under an origin/cookie defense, using URIs as identifiers is dangerous, see: http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/1247.html But we want to use URIs to identify things, because it's useful, and we want it to be safe. For cross-origin scenarios, it can't be safe while still maintaining the viability of origin/cookie defenses. 
Basically, the programming patterns of the Web, when used in cross-origin scenarios, break origin/cookie defenses. We want to keep the Web programming patterns and replace the origin/cookie defense with something that better fits the Web. We're willing to give up our cookies before we'll give up our URIs. --Tyler -- Waterken News: Capability security on the Web http://waterken.sourceforge.net/recent.html
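Tyler's Observer-pattern example can be simulated to show where ambient credentials go wrong. This is a minimal sketch with invented names and data; the point is that the browser, not the attacker, attaches the user's cookies to the widget's notification POSTs:

```python
# Simulate a widget that POSTs state changes to registered observer URLs.
# Under the origin/cookie model the browser attaches the user's cookies
# to those POSTs, so any URL an attacker registers receives a
# credentialed request "from" the user: a CSRF-like attack.

def notify_observers(observers, state, user_cookies):
    """Return the requests the browser would emit on a state change."""
    return [
        {"url": url, "body": state, "cookies": user_cookies}
        for url in observers
    ]

observers = [
    "https://friendly.example/hook",
    "https://attacker.example/steal",   # the attacker registered this URL
]
requests = notify_observers(observers, {"count": 3}, {"session": "s3cret"})

# Every observer, including the attacker's, receives the user's cookies.
assert all(r["cookies"]["session"] == "s3cret" for r in requests)
```

With an unguessable-token (UM-style) design, the widget would instead carry no ambient credentials, and each observer URL would itself be the authority to receive notifications.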
Re: XMLHttpRequest Comments from W3C Forms WG
From: Anne van Kesteren annevk at opera.com Subject: Re: [XHR] LC comments from the XForms Working Group Date: 2009-10-08 15:31:27 GMT On Tue, 17 Jun 2008 05:24:48 +0200, Boris Zbarsky bzbarsky at mit.edu wrote: Anne van Kesteren wrote: It would change the conformance criteria. I'm not sure that's a good idea. Especially since the use case put forward is mostly theoretical. Overall, I'm still not convinced this is a good idea. It doesn't seem necessarily that theoretical to me, for what it's worth. Anne, do you happen to have a more or less complete list of the current dependencies of XHR on Window, by chance? I think that information would be very helpful in seeing where things stand. To wrap this up, I changed XMLHttpRequest some time ago so it can be used in other contexts as well now. If you reuse it you have to define the XMLHttpRequest origin and XMLHttpRequest base URL. My apologies for being a bit stubborn on this earlier. It was mostly because I was hesitant about reworking how everything was put together, but it turned out that had to happen anyway. Hopefully it can now be of use to the Forms WG. Kind regards, -- Anne van Kesteren http://annevankesteren.nl/ So, to be clear, here's how to complete the change for the specific dependency that Anne calls out above. (This process is repeated for each dependency of XHR on HTML5.) Cf. section http://www.w3.org/TR/2009/WD-XMLHttpRequest-20091119/#origin-and-base-url Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification defines their values when the global object is represented by the Window object. When the XMLHttpRequest object is used in other contexts their values will have to be defined as appropriate for that context. That is considered to be out of scope for this specification. This text still results in a normative reference to HTML5. 
So change the XHR document to this: Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification does not define their values; they MUST be defined by the host integration. For an example integration with [HTML5 informative reference] see [XHR For HTML5 informative reference]. Further, the actual definitions would be removed from where they actually occur. Then the new rec-track document XHR for HTML5 would say this: Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification defines their values when the global object is represented by the Window object. And then go on to contain the actual text of the definitions pulled out from XHR. Ah, thanks for the concrete example. This makes it clear what you are suggesting. What you are saying makes sense. However it seems to add unnecessary overhead to split the spec in two to accomplish this, for the spec editor, for someone implementing the spec, and for someone using the spec. It would seem to be much lower overhead to put these things in an appendix or something similar. / Jonas
Re: [DataCache] Some Corrections
Joseph Pecoraro wrote: I have changed to using the new method "immediate" and that also removed this call. Immediate looks useful. The specification for immediate is: [[ When this method is called, the user agent creates a new cache transaction, and performs the steps to add a resource to be captured in that cache transaction, and when the identified resource is captured, performs the steps to activate updates for this data cache group. ]] I think this should clarify that it creates an "online" transaction. An off-line transaction will not be particularly meaningful in this case, so yes the transaction is "online". I will clarify this in the spec.
Re: XMLHttpRequest Comments from W3C Forms WG
On Thu, Dec 17, 2009 at 10:54 AM, Jonas Sicking jo...@sicking.cc wrote: From: Anne van Kesteren annevk at opera.com Subject: Re: [XHR] LC comments from the XForms Working Group Date: 2009-10-08 15:31:27 GMT On Tue, 17 Jun 2008 05:24:48 +0200, Boris Zbarsky bzbarsky at mit.edu wrote: Anne van Kesteren wrote: It would change the conformance criteria. I'm not sure that's a good idea. Especially since the use case put forward is mostly theoretical. Overall, I'm still not convinced this is a good idea. It doesn't seem necessarily that theoretical to me, for what it's worth. Anne, do you happen to have a more or less complete list of the current dependencies of XHR on Window, by chance? I think that information would be very helpful in seeing where things stand. To wrap this up, I changed XMLHttpRequest some time ago so it can be used in other contexts as well now. If you reuse it you have to define the XMLHttpRequest origin and XMLHttpRequest base URL. My apologies for being a bit stubborn on this earlier. It was mostly because I was hesitant about reworking how everything was put together, but it turned out that had to happen anyway. Hopefully it can now be of use to the Forms WG. Kind regards, -- Anne van Kesteren http://annevankesteren.nl/ So, to be clear, here's how to complete the change for the specific dependency that Anne calls out above. (This process is repeated for each dependency of XHR on HTML5.) Cf. section http://www.w3.org/TR/2009/WD-XMLHttpRequest-20091119/#origin-and-base-url Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification defines their values when the global object is represented by the Window object. When the XMLHttpRequest object is used in other contexts their values will have to be defined as appropriate for that context. That is considered to be out of scope for this specification. This text still results in a normative reference to HTML5. 
So change the XHR document to this: Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification does not define their values; they MUST be defined by the host integration. For an example integration with [HTML5 informative reference] see [XHR For HTML5 informative reference]. Further, the actual definitions would be removed from where they actually occur. Then the new rec-track document XHR for HTML5 would say this: Each XMLHttpRequest object has an associated XMLHttpRequest origin and an XMLHttpRequest base URL. This specification defines their values when the global object is represented by the Window object. And then go on to contain the actual text of the definitions pulled out from XHR. Ah, thanks for the concrete example. This makes it clear what you are suggesting. What you are saying makes sense. However it seems to add unnecessary overhead to split the spec in two to accomplish this, for the spec editor, for someone implementing the spec, and for someone using the spec. It would seem to be much lower overhead to put these things in an appendix or something similar. Though I just realized that I'm not sure all dependencies can be solved this way. How would you, for example, break the dependency on the event loop, currently only specified in the HTML5 spec (but implemented in basically every piece of software with a modern UI)? / Jonas
RE: XMLHttpRequest Comments from W3C Forms WG
-Original Message- From: Jonas Sicking [mailto:jo...@sicking.cc] Sent: Thursday, December 17, 2009 10:54 AM To: Klotz, Leigh Cc: Henri Sivonen; Anne van Kesteren; WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG ...snip And then go on to contain the actual text of the definitions pulled out from XHR. Ah, thanks for the concrete example. This makes it clear what you are suggesting. What you are saying makes sense. However it seems to add unnecessary overhead to split the spec in two to accomplish this, for the spec editor, for someone implementing the spec, and for someone using the spec. It would seem to be much lower overhead to put these things in an appendix or something similar. As for someone using the spec, the XHR spec would remain small, and the XHR for HTML5 spec would remain small; all spec users would have small mind-sized bites to understand, and it would be clear that XHR works with HTML5 and can be made to work with other specifications, so it seems a good solution to me. However, I'm not one of the Web API editors, so I don't want to say concretely how the problem must be solved, and wasn't directed to do so by the Forms WG comments. The example is the most obvious solution to me, as the problem is about inter-specification dependence, normative language, and conformance, and I believe it should be solved that way. The Forms WG comment is about the normative reference and the express dependence on the 688-page HTML5 document for definitions. Part of the issue was addressed in Anne's change, but there is no conformance section which declares the implementation optional, and the normative reference remains. Leigh.
RE: XMLHttpRequest Comments from W3C Forms WG
Jonas, I'm not sure how the dependency is specified in the XHR draft. Can you point me to it? The word event loop doesn't appear. I know how XForms defines synchronous vs. asynchronous submissions using XML Events (which are an XML syntax for accessing DOM Events), and XHR is directly specified using DOM Events. Leigh. P.S. Again, I'm not an editor for the Web API specs, so I can't really know how the WG would like to solve these dependency issues, but to the degree that I have technical competence I'm happy to explore possible solutions. This is of course outside the scope of my representation of the Forms WG comment, which is about the issue, not about its solutions. -Original Message- From: Jonas Sicking [mailto:jo...@sicking.cc] Sent: Thursday, December 17, 2009 11:06 AM To: Klotz, Leigh Cc: Henri Sivonen; Anne van Kesteren; WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG ...snip Though I just realized that I'm not sure all dependencies can be solved this way. How would you for example break the dependency on the event loop, currently only specified in the HTML5 spec (but implemented in basically every piece of software with a modern UI)? / Jonas
Re: XMLHttpRequest Comments from W3C Forms WG
On Thu, Dec 17, 2009 at 11:18 AM, Klotz, Leigh leigh.kl...@xerox.com wrote: Jonas, I'm not sure how the dependency is specified in the XHR draft. Can you point me to it? The word event loop doesn't appear. The term queue a task is defined in HTML5, and uses the event loop. / Jonas
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 10:08 AM, Maciej Stachowiak m...@apple.com wrote: On Dec 17, 2009, at 9:15 AM, Kenton Varda wrote: On Thu, Dec 17, 2009 at 2:21 AM, Maciej Stachowiak m...@apple.com wrote: I'm not saying that Alice should be restricted in who she shares the feed with. Just that Bob's site should not be able to automatically grant Charlie's site access to the feed without Alice explicitly granting that permission. Many sites that use workarounds (e.g. server-to-server communication combined with client-side form posts and redirects) to share their data today would like grants to be to another site, not to another site plus any third party site that the second site chooses to share with. OK, I'm sure that this has been said before, because it is critical to the capability argument: If Bob can access the data, and Bob can talk to Charlie *in any way at all*, then it *is not possible* to prevent Bob from granting access to Charlie, because Bob can always just serve as a proxy for Charlie's requests. Indeed, you can always act as a proxy and directly share the data rather than sharing the token. However, this is not the same as the ability to share the token anonymously. Here are a few important differences: - As Ian mentioned, in the case of some kinds of resources, one of the service provider's goals may be to prevent abuse of their bandwidth. It seems more useful to attribute resource usage to the user rather than to the sites the user uses to access those resources. In my example, I might want to limit Alice to, say, 1GB data transfer per month, but I don't see why I would care if that transfer happened through Bob's site vs. Charlie's site. - Service providers often like to know for the sake of record-keeping who is using their data, even if they have no interest in restricting it. Often, just creating an incentive to identify yourself and ask for separate authorization is enough, even if proxy workarounds are possible. 
The reason given below states such an incentive. I think this is separate from the security question. As I said earlier, origins can voluntarily identify themselves for this purpose, just as browsers voluntarily identify themselves. - Proxying to subvert CORS would only work while the user is logged into both the service provider and the actually authorized service consumer who is acting as a proxy, and only in the user's browser. This limits the window in which to get data. Meanwhile, a capability token sent anonymously could be used at any time, even when the user is not logged in. The ability to get snapshots of the user's data may not be seen to be as great a risk as ongoing on-demand access. Yes, I directly addressed exactly that point... I will also add that users may want to revoke capabilities they grant. This is likely to be presented to the user as a whitelist of sites to which they granted access, whether the actual mechanism is modifying Origin checks, or mapping the site to a capability token and disabling it. Sure. This is easy to do via caps. How would the service provider generate a new URL for each visit to Bob's site? How would the service provider even know whether it's Bob asking for an update, or whether the user is logged in? If the communication is via UM, the service provider has no way to know. If it's via a hidden form post, then you are just using forms to fake the effect of CORS. Note also that such elaborations increase complexity of the protocol. Assuming some UI exists for granting capabilities, as I suggested earlier, it can automatically take care of generating a new capability for every connection/visit and revoking it when appropriate. To enable permissions to be revoked in a granular way, you must vend different capability tokens per site. Given that, it seems only sensible to check that the token is actually being used by the party to which it was granted. I disagree. Delegation is useful, and prohibiting it has a cost. 
If we granted the capability to Bob, why should we care if Bob chooses to delegate to Charlie? If Charlie misuses the capability, then we blame Bob for that misuse. It's Bob's responsibility to take appropriate measures to prevent this. If we don't trust Bob we shouldn't have granted him the capability in the first place. And again, CORS doesn't prevent delegation anyway; it only makes it less convenient. My goal was merely to argue that adding an origin/cookie check to a secret-token-based mechanism adds meaningful defense in depth, compared to just using any of the proposed protocols over UM. I believe my argument holds. If the secret token scheme has any weakness whatsoever, whether in generation of the tokens, or in accidental disclosure by the user or the service consumer, origin checks provide an orthogonal defense that must be breached separately. This greatly reduces the attack surface. While this may not provide any additional security in theory, where we can assume the shared secret is generated and managed correctly, it does provide additional security in the real world, where people make mistakes.
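The grant-and-revoke flow discussed in this thread can be sketched as a per-site token table. This is illustrative only; the names are invented and the token scheme is the simplest possible:

```python
import secrets

class CapabilityGrants:
    """Per-site unguessable tokens that can be revoked individually."""

    def __init__(self):
        self._tokens = {}          # token -> grantee site

    def grant(self, site: str) -> str:
        token = secrets.token_urlsafe(16)   # unguessable capability
        self._tokens[token] = site
        return token

    def revoke_site(self, site: str) -> None:
        """Disable every token granted to one site (the 'whitelist' UI)."""
        self._tokens = {t: s for t, s in self._tokens.items() if s != site}

    def check(self, token: str) -> bool:
        return token in self._tokens

grants = CapabilityGrants()
bob = grants.grant("bob.example")
charlie = grants.grant("charlie.example")
grants.revoke_site("bob.example")
assert not grants.check(bob)      # Bob's access is gone...
assert grants.check(charlie)      # ...without affecting Charlie's.
```

Because tokens are keyed by grantee, revocation is granular; whether `check` should additionally compare the presented Origin against the grantee is exactly the point the thread disputes.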
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, 17 Dec 2009, Kenton Varda wrote: It seems more useful to attribute resource usage to the user rather than to the sites the user uses to access those resources. In my example, I might want to limit Alice to, say, 1GB data transfer per month, but I don't see why I would care if that transfer happened through Bob's site vs. Charlie's site. With CORS, I can trivially (one line in the .htaccess file for my site) make sure that no sites can use XBL files from my site other than my sites. My sites don't do any per-user tracking; doing that would involve orders of magnitude more complexity. - Service providers often like to know for the sake of record-keeping who is using their data, even if they have no interest in restricting it. Often, just creating an incentive to identify yourself and ask for separate authorization is enough, even if proxy workarounds are possible. The reason given below states such an incentive. I think this is separate from the security question. As I said earlier, origins can voluntarily identify themselves for this purpose, just as browsers voluntarily identify themselves. How can an origin voluntarily identify itself in an unspoofable fashion? Without running scripts? It seems like the fundamental disagreements here are: - Cap proponents think that the ability to delegate is extremely valuable, and ACLs provide too much of a barrier against delegation. ACL people think delegation is not as important as Cap people think it is. Arguments either way tend to be abstract, and thus unconvincing to either side. - ACL proponents think that capabilities are too easy to leak accidentally. Cap people think that the defenses provided by capability design patterns provide plenty of protection, but ACL people disagree. Arguments either way again tend to be abstract, and thus unconvincing. I have no problem with offering a feature like UM in CORS. My objection is to making the simple cases non-trivial, e.g. 
by never including Origin headers in any requests. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
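Ian's "one line in the .htaccess file" is not quoted anywhere in the thread, so the following is only a guess at its shape, assuming Apache with mod_headers and a CORS-enforcing browser:

```apache
# Only pages whose origin is example.com may use these resources
# cross-origin; CORS browsers will refuse to expose them to any
# other origin, since no matching allow header is sent.
Header set Access-Control-Allow-Origin "https://example.com"
```

Allowing a whole family such as *.example.com takes slightly more than one line, e.g. a SetEnvIf match on the incoming Origin request header, because Access-Control-Allow-Origin accepts only a single origin or the `*` wildcard.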
Why preflight per-resource rather than per-origin?
Despite the costs of doing preflight opt-in on a per-resource basis rather than a per-origin basis, to meet its security goals, CORS proposes to do preflight on a per-resource basis. I have seen the rationale for this stated in bits and pieces. Can anyone point me at a reasonably self contained statement for why we need preflight on a per-resource rather than a per-origin basis? If there's nothing adequate to point at, could someone state a reasonably self contained rationale for this? Thanks. -- Cheers, --MarkM
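One way to make MarkM's question concrete is to compare the two policy shapes directly. This is a hedged sketch with invented data structures, not the CORS wire format (real preflights are OPTIONS requests answered with Access-Control-* headers):

```python
# Per-resource: each resource answers its own preflight (CORS's model).
per_resource_policy = {
    "/api/public":  {"origins": {"*"}, "methods": {"GET"}},
    "/api/private": {"origins": {"https://partner.example"},
                     "methods": {"GET"}},
}

# Per-origin: one site-wide answer would cover every resource at once.
per_origin_policy = {"origins": {"https://partner.example"},
                     "methods": {"GET"}}

def preflight_ok(policy, origin, method):
    """Would this origin/method pair pass the given policy?"""
    return (("*" in policy["origins"] or origin in policy["origins"])
            and method in policy["methods"])

# Under per-resource policies, opening /api/public to everyone does not
# open /api/private; a single per-origin grant could not distinguish them.
assert preflight_ok(per_resource_policy["/api/public"],
                    "https://anyone.example", "GET")
assert not preflight_ok(per_resource_policy["/api/private"],
                        "https://anyone.example", "GET")
```

The trade-off the question points at: per-origin opt-in would be cheaper (one preflight cached per origin pair) but forces the whole site to share one cross-origin policy, while per-resource opt-in pays more preflights to keep differently-sensitive resources independently protected.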
[widgets] Anyone working on SNIFF in Java?
I've finally narrowed it down to just one test case to pass PC conformance! Unfortunately it involves implementing SNIFF... Does anyone know of an implementation already existing in Java? S /-/-/-/-/-/ Scott Wilson Apache Wookie: http://incubator.apache.org/projects/wookie.html scott.bradley.wil...@gmail.com http://www.cetis.ac.uk/members/scott
RE: XMLHttpRequest Comments from W3C Forms WG
-Original Message- From: Jonas Sicking [mailto:jo...@sicking.cc] Sent: Thursday, December 17, 2009 11:33 AM To: Klotz, Leigh Cc: Henri Sivonen; Anne van Kesteren; WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG On Thu, Dec 17, 2009 at 11:18 AM, Klotz, Leigh leigh.kl...@xerox.com wrote: Jonas, I'm not sure how the dependency is specified in the XHR draft. Can you point me to it? The word event loop doesn't appear. The term queue a task is defined in HTML5, and uses the event loop. / Jonas Jonas, Thank you for finding it for me. I see the use of queue a task now: http://www.w3.org/TR/2009/WD-XMLHttpRequest-20091119/#terminology The terms and algorithms fragment, scheme, document base URL, document's character encoding, event handler attributes, event handler event type, fully active, Function, innerHTML, origin, preferred MIME name, resolve a URL, same origin, storage mutex, task, task source, URL, URL character encoding, queue a task, and valid MIME type are defined by the HTML 5 specification. [HTML5] I'd be surprised if some of these aren't terms already defined elsewhere. URL, for example, is surely not given a different definition in HTML5 from the definition in RFC 3986. The rest of these terms not elsewhere defined would need to be defined sufficiently by contract in the XHR document and satisfied in the HTML5 implementation document, yet left open for implementation by a collaborating spec for HTML5's implementation. In the case of queue a task, it appears to be used in XHR, but event loop is not used in XHR. While I can't really comment on whether XHR should leave to the implementation the resolution of single vs multiple task queues, it may in fact not be germane to the XHR specification. 
When I follow the implied link from #terminology to the HTML5 draft, I get this: http://www.w3.org/TR/2009/WD-html5-20090825/Overview.html#queue-a-task This section of the HTML5 document itself admits that there may be other implementations of task queues, as in this note: Note: Other specifications can define other event loops; in particular, the Web Workers specification does so. Therefore, it seems like it would be in the best interest of not only HTML5 but also Web Workers (no link) to have XHR define its requirements, and let them be satisfied by integration with other specifications, HTML5 being a prime case, but Web Workers possibly another, in addition to the usual suspects. In summary, I must say that I don't see any roadblocks to a positive response to the Forms WG comment in question, and it doesn't appear to me that it requires all of the many months of work cited by Ian. Ian's point may be valid for the entirety of the HTML5 document, but for this XHR document to advance to the next stage, it still seems both necessary and possible to resolve the required definitions in a way that makes HTML5 integration almost unchanged, yet leaves open integration with, as Anne aptly puts it, other contexts. Leigh.
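To see how small the "queue a task" dependency is in isolation, here is a minimal sketch of what a host would have to supply: named task sources feeding some event loop. The names are invented for illustration; HTML5's actual event loop is considerably more involved:

```python
from collections import deque

class EventLoop:
    """Bare-bones event loop: named task sources, FIFO within each source."""

    def __init__(self):
        self._sources = {}

    def queue_a_task(self, source: str, task) -> None:
        """The one operation XHR's algorithms actually invoke."""
        self._sources.setdefault(source, deque()).append(task)

    def run(self) -> None:
        # A real host interleaves sources by its own scheduling policy;
        # here we simply drain each source in insertion order.
        for source in list(self._sources):
            queue = self._sources[source]
            while queue:
                queue.popleft()()

fired = []
loop = EventLoop()
# XHR only needs the ability to queue tasks (e.g. to dispatch
# readystatechange); whether the host is HTML5's Window, a Web Worker,
# or an XForms processor is an integration detail.
loop.queue_a_task("xhr", lambda: fired.append("readystatechange"))
loop.run()
assert fired == ["readystatechange"]
```

On this reading, an abstract contract of "queue a task preserves per-source ordering and runs tasks eventually" might be all the core XHR document needs to state, leaving the concrete loop to each host.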
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote: One of the big reasons to restrict which origin can use a particular resource is bandwidth management. For example, resources.example.com might want to allow *.example.com to use its XBL files, but not allow anyone else to directly use the XBL files straight from resources.example.com. An XBL file could include some JavaScript code that blows up the page if the manipulated DOM has an unexpected document.domain. I think this solution more precisely implements the control you want. You're not trying to prevent other sites from downloading your XBL file. You're only trying to encourage them to host their own version of your XBL file. In general, the control you want is most similar to iframe busting. A separate standard that covers these rendering instructions would be better than conflating them with an access-control standard. For example, a new HTTP response header could provide instructions on what embedding configurations are supported. The instructions may be independent of how the embedding is created, such as by: iframe, img, script or xbl. --Tyler -- Waterken News: Capability security on the Web http://waterken.sourceforge.net/recent.html
RE: XMLHttpRequest Comments from W3C Forms WG
Boris, Thank you for the clarification. Surely then this ought to be fixed with an IETF or W3C document describing this fact, and not by requiring all future specifications which use URLs to reference the HTML5 document. Is it defined in http://www.w3.org/html/wg/href/draft ? If so, perhaps that document needs to have a better title than Web Addresses in HTML5 if it's already in use in user agents in practice in web reality? Thank you, Leigh. -Original Message- From: Boris Zbarsky [mailto:bzbar...@mit.edu] Sent: Thursday, December 17, 2009 2:17 PM To: Klotz, Leigh Cc: WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG On 12/17/09 2:10 PM, Klotz, Leigh wrote: I'd be surprised if some of these aren't terms already defined elsewhere. URL for example, is surely not given a different definition in HTML5 from the definition in RFC 3986. As it happens, it is. There are various strings that are defined to not be a URL in RFC 3986 terms (as in, don't match the production) but are used on the web in practice and which handling needs to be defined for. In other words, RFC 3986 is pretty well divorced from web reality; a UA trying to actually implement it ends up not compatible with the web. -Boris
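[Editor's note: an illustration of Boris's point, not from the thread. The strings below fail the RFC 3986 grammar yet occur on real pages; Node's WHATWG `URL` class, a later descendant of the "Web Addresses" work, implements the web-compatible handling he describes rather than rejecting the input.]

```javascript
// Two inputs that do not match the RFC 3986 URI production:
const rawSpace = new URL('http://example.com/a b');    // raw space in the path
const backslash = new URL('http://example.com\\path'); // backslash after the host
// A web-compatible parser percent-encodes the space and treats the
// backslash as a forward slash, matching what browsers actually do.
```

A strict RFC 3986 parser would have to reject both strings, which is exactly the incompatibility with deployed content that motivated a separate "web addresses" definition.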
Re: XMLHttpRequest Comments from W3C Forms WG
On 12/17/09 2:22 PM, Klotz, Leigh wrote: Thank you for the clarification. Surely then this ought to be fixed with an IETF or W3C document describing this fact After some pushback, there is in fact such a document being worked on. It's not quite far enough to reference normatively last I checked Is it defined in http://www.w3.org/html/wg/href/draft ? Yep. -Boris
RE: XMLHttpRequest Comments from W3C Forms WG
Great! It sounds like more progress is being made on both putting experience from implementations back into specifications, and in modularizing the XHR document references, since it will give a better place than HTML5 for reference. Leigh. -Original Message- From: Boris Zbarsky [mailto:bzbar...@mit.edu] Sent: Thursday, December 17, 2009 2:38 PM To: Klotz, Leigh Cc: WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG On 12/17/09 2:22 PM, Klotz, Leigh wrote: Thank you for the clarification. Surely then this ought to be fixed with an IETF or W3C document describing this fact After some pushback, there is in fact such a document being worked on. It's not quite far enough to reference normatively last I checked Is it defined in http://www.w3.org/html/wg/href/draft ? Yep. -Boris
Re: XMLHttpRequest Comments from W3C Forms WG
As Ian already has mentioned. No one is disputing that most of these things should be factored out of the HTML5 spec. But so far no one has stepped up to that task. Until someone does we'll have to live with the reality that these things are defined in the HTML5 spec and the HTML5 spec alone. / Jonas On Thu, Dec 17, 2009 at 2:40 PM, Klotz, Leigh leigh.kl...@xerox.com wrote: Great! It sounds like more progress is being made on both putting experience from implementations back into specifications, and in modularizing the XHR document references, since it will give a better place than HTML5 for reference. Leigh. -Original Message- From: Boris Zbarsky [mailto:bzbar...@mit.edu] Sent: Thursday, December 17, 2009 2:38 PM To: Klotz, Leigh Cc: WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG On 12/17/09 2:22 PM, Klotz, Leigh wrote: Thank you for the clarification. Surely then this ought to be fixed with an IETF or W3C document describing this fact After some pushback, there is in fact such a document being worked on. It's not quite far enough to reference normatively last I checked Is it defined in http://www.w3.org/html/wg/href/draft ? Yep. -Boris
RE: XMLHttpRequest Comments from W3C Forms WG
OK, so is the conclusion that XHR is implementable only in HTML5 and should be re-titled XMLHttpRequest in HTML5 or something similar? -Original Message- From: Jonas Sicking [mailto:jo...@sicking.cc] Sent: Thursday, December 17, 2009 3:14 PM To: Klotz, Leigh Cc: Boris Zbarsky; WebApps WG; Forms WG Subject: Re: XMLHttpRequest Comments from W3C Forms WG As Ian already has mentioned. No one is disputing that most of these things should be factored out of the HTML5 spec. But so far no one has stepped up to that task. Until someone does we'll have to live with the reality that these things are defined in the HTML5 spec and the HTML5 spec alone. / Jonas
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, 17 Dec 2009, Tyler Close wrote: On Thu, Dec 17, 2009 at 9:38 AM, Ian Hickson i...@hixie.ch wrote: One of the big reasons to restrict which origin can use a particular resource is bandwidth management. For example, resources.example.com might want to allow *.example.com to use its XBL files, but not allow anyone else to directly use the XBL files straight from resources.example.com. An XBL file could include some JavaScript code that blows up the page if the manipulated DOM has an unexpected document.domain. This again requires script. I don't deny there are plenty of solutions you could use to do this with script. The point is that CORS allows one line in an .htaccess file to solve this for all XBL files, all XML files, all videos, everything on a site, all at once. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
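[Editor's note: for concreteness, the one-line `.htaccess` change Ian refers to would plausibly look like the fragment below, assuming Apache's mod_headers. Note that the CORS header as specified takes a single origin or `*`; a `*.example.com` wildcard would require server-side logic to echo back a matching Origin.]

```apache
# Hypothetical .htaccess fragment: allow pages from www.example.com to
# read every resource under this directory via XHR. Other origins get
# no Access-Control-Allow-Origin header, so browsers refuse them access.
Header set Access-Control-Allow-Origin "http://www.example.com"
```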
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 3:46 PM, Ian Hickson i...@hixie.ch wrote: I don't deny there are plenty of solutions you could use to do this with script. The point is that CORS allows one line in an .htaccess file to solve this for all XBL files, all XML files, all videos, everything on a site, all at once. I'm not trying to deny you your one line fix. I'm just saying it should be a different one line than the one used for access control. Conflating the two issues, the way CORS does, creates CSRF-like problems. Address bandwidth management, along with other embedding issues, while standardizing an iframe busting technique. --Tyler
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, 17 Dec 2009, Tyler Close wrote: I'm not trying to deny you your one line fix. I'm just saying it should be a different one line than the one used for access control. Conflating the two issues, the way CORS does, creates CSRF-like problems. Address bandwidth management, along with other embedding issues, while standardizing an iframe busting technique. What one-liner are you proposing that would solve the problem for XBL, XML data, videos, etc, all at once? -- Ian Hickson http://ln.hixie.ch/
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 12:58 PM, Ian Hickson i...@hixie.ch wrote: With CORS, I can trivially (one line in the .htaccess file for my site) make sure that no sites can use XBL files from my site other than my sites. My sites don't do any per-user tracking; doing that would involve orders of magnitude more complexity. I was debating about one particular use case, and this one that you're talking about now is completely different. I can propose a different solution for this case, but I think someone will just change the use case again to make my new solution look silly, and we'll go in circles. How can an origin voluntarily identify itself in an unspoofable fashion? Without running scripts? It can't. My point was that for simple non-security-related statistics gathering, spoofing is not a big concern. People can spoof browser UA strings but we still gather statistics on them. I have no problem with offering a feature like UM in CORS. My objection is to making the simple cases non-trivial, e.g. by never including Origin headers in any requests. Personally I'm not actually arguing against standardizing CORS. What I'm arguing is that UM is the natural solution for software designed in an object-oriented, loosely-coupled way. I'm also arguing that loosely-coupled object-oriented systems are more powerful and better for users.
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 4:41 PM, Ian Hickson i...@hixie.ch wrote: What one-liner are you proposing that would solve the problem for XBL, XML data, videos, etc, all at once? Are we debating about the state of existing infrastructure, or theoretically ideal infrastructure? Honest question. .htaccess is an example of existing infrastructure built around the ACL approach. If no similarly-easy-to-use capability-based infrastructure exists, that doesn't necessarily mean ACLs are theoretically better. But the thread subject line seems to suggest we're more interested in theory.
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 4:41 PM, Ian Hickson i...@hixie.ch wrote: What one-liner are you proposing that would solve the problem for XBL, XML data, videos, etc, all at once? Well, I wasn't intending to make a frame busting proposal, but it seems something like the following could work... Starting from the X-FRAME-OPTIONS proposal, say the response header also applies to all embedding that the page renderer does. So it also covers img, video, etc. In addition to the current values, the header can also list hostname patterns that may embed the content. 
So, in your case: X-FRAME-OPTIONS: *.example.com Access-Control-Allow-Origin: * Which means anyone can access this content, but sites outside *.example.com should host their own copy, rather than framing or otherwise directly embedding my copy. --Tyler
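[Editor's note: a sketch of how a renderer might evaluate Tyler's extended X-FRAME-OPTIONS value. The function name and pattern syntax are hypothetical; no such algorithm was specified in the thread.]

```javascript
// Match the embedding page's host against a header value containing a
// list of hostname patterns, e.g. "*.example.com" or "*".
function mayEmbed(headerValue, embedderHost) {
  return headerValue.split(/[,\s]+/).filter(Boolean).some(pattern => {
    if (pattern === '*') return true;
    if (pattern.startsWith('*.')) {
      // "*.example.com" -> any host ending in ".example.com"
      return embedderHost.endsWith(pattern.slice(1));
    }
    return embedderHost === pattern;
  });
}
```

Under this sketch, `X-FRAME-OPTIONS: *.example.com` would let `www.example.com` frame or embed the resource while other hosts would be told to host their own copy.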
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, 17 Dec 2009, Kenton Varda wrote: On Thu, Dec 17, 2009 at 4:41 PM, Ian Hickson i...@hixie.ch wrote: What one-liner are you proposing that would solve the problem for XBL, XML data, videos, etc, all at once? Are we debating about the state of existing infrastructure, or theoretically ideal infrastructure? Honest question. .htaccess is an example of existing infrastructure built around the ACL approach. If no similarly-easy-to-use capability-based infrastructure exists, that doesn't necessarily mean ACLs are theoretically better. But the thread subject line seems to suggest we're more interested in theory. I'm interested in the practical impact of our specifications on authors. Those specifications have to be something that can be implemented; given the security model we're starting from, there's basically no way that can be an ideal anything. -- Ian Hickson http://ln.hixie.ch/
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, 17 Dec 2009, Tyler Close wrote: Starting from the X-FRAME-OPTIONS proposal, say the response header also applies to all embedding that the page renderer does. So it also covers img, video, etc. In addition to the current values, the header can also list hostname patterns that may embed the content. So, in your case: X-FRAME-OPTIONS: *.example.com Access-Control-Allow-Origin: * Which means anyone can access this content, but sites outside *.example.com should host their own copy, rather than framing or otherwise directly embedding my copy. Why is this better than: Access-Control-Allow-Origin: *.example.com ...? -- Ian Hickson http://ln.hixie.ch/
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, 17 Dec 2009, Kenton Varda wrote: On Thu, Dec 17, 2009 at 12:58 PM, Ian Hickson i...@hixie.ch wrote: With CORS, I can trivially (one line in the .htaccess file for my site) make sure that no sites can use XBL files from my site other than my sites. My sites don't do any per-user tracking; doing that would involve orders of magnitude more complexity. I was debating about one particular use case, and this one that you're talking about now is completely different. I can propose a different solution for this case, but I think someone will just change the use case again to make my new solution look silly, and we'll go in circles. The advantage of CORS is that it addresses all these use cases well. How can an origin voluntarily identify itself in an unspoofable fashion? Without running scripts? It can't. I don't understand how it can solve the problem then. If it's trivial for a site to spoof another, then the use case isn't solved. My point was that for simple non-security-related statistics gathering, spoofing is not a big concern. None of the use cases I've mentioned involve statistics gathering. I have no problem with offering a feature like UM in CORS. My objection is to making the simple cases non-trivial, e.g. by never including Origin headers in any requests. Personally I'm not actually arguing against standardizing CORS. What I'm arguing is that UM is the natural solution for software designed in an object-oriented, loosely-coupled way. CORS is a superset of UM; I have no objection to CORS-enabled APIs exposing the UM subset (i.e. allowing scripts to opt out of sending the Origin header). However, my understanding is that the UM proposal is to explicitly not allow Origin to ever be sent, which is why there is a debate. (If the question was just should we add a feature to CORS to allow Origin to not be sent, then I think the debate would have concluded without much argument long ago.) 
I'm also arguing that loosely-coupled object-oriented systems are more powerful and better for users. Powerful is not a requirement I'm looking for. Simple is. -- Ian Hickson http://ln.hixie.ch/
Re: XMLHttpRequest Comments from W3C Forms WG
On Dec 17, 2009, at 2:37 PM, Boris Zbarsky wrote: On 12/17/09 2:22 PM, Klotz, Leigh wrote: Thank you for the clarification. Surely then this ought to be fixed with an IETF or W3C document describing this fact After some pushback, there is in fact such a document being worked on. It's not quite far enough to reference normatively last I checked Is it defined in http://www.w3.org/html/wg/href/draft ? Yep. It's expected that at some point soon, all of the necessary rules for processing URLish strings in a Web-compatible way will be defined in the next version of the IRI RFC. Current draft: http://tools.ietf.org/html/draft-duerst-iri-bis-07 . However, not all the necessary definitions are in there yet. We should change our reference to IRIbis when it is ready. Regards, Maciej
Re: XMLHttpRequest Comments from W3C Forms WG
On Dec 17, 2009, at 3:15 PM, Klotz, Leigh wrote: OK, so is the conclusion that XHR is implementable only in HTML5 and should be re-titled XMLHttpRequest in HTML5 or something similar? I think your premise is false, and I don't think such a retitling would be helpful. The XHR spec does not require a full implementation of HTML5. It only references some concepts from HTML5. The XHR spec could be implemented in an SVG or HTML4 or XHTML 1.0 implementation that did not support most aspects of HTML5 at all, as long as it could satisfy the requirements implied by those definitions. Your proposed title change would imply that the XHR spec could only be implemented by an HTML5 UA, but that is not accurate. All we have here is a case of suboptimal factoring of the specifications, so that some concepts of very general applicability to the Web platform are presently only defined in HTML5. Some of them are in the process of being broken out, some of them already have been broken out, and more are likely to be broken out in the future. XMLHttpRequest is in fact a pretty good example of factoring something out of HTML5, and even though we haven't cleaned up its whole chain of dependencies, I do not think that is a reason to stuff it back into HTML5, or to block progress on perfecting its dependencies. Regards, Maciej
Re: Scientific Literature on Capabilities (was Re: CORS versus Uniform Messaging?)
On Thu, Dec 17, 2009 at 5:49 PM, Ian Hickson i...@hixie.ch wrote: On Thu, 17 Dec 2009, Tyler Close wrote: X-FRAME-OPTIONS: *.example.com Access-Control-Allow-Origin: * Why is this better than: Access-Control-Allow-Origin: *.example.com ...? I think Tyler missed on this one. X-FRAME-OPTIONS looks to me like the same thing as CORS, except that it doesn't pretend to provide security. In a capability-based world, when the user accessed your site, you'd send back the HTML together with a set of capabilities to access other resources on the site. These capabilities would expire after some period of time. Want to allow one particular other site to use your resources as well? Then give them the capability to generate capabilities to your resources -- e.g. by giving them a secret key which they can hash together with the current time. I know, your response is: That's way more complicated than my one-line .htaccess change! But your one-line .htaccess change is leveraging a great deal of infrastructure already built around that model. With the right capability-based infrastructure, the capability-based solution would be trivial too. We don't have this infrastructure. This is a valid concern. Unfortunately, few people are working to build this infrastructure because most people would rather focus on the established model, simply because it is established. So we have a chicken-and-egg problem. You probably also question the effect of my solution on caching, or other technical issues like that. I could explain how I'd deal with them, but then you'd find finer details to complain about, and so on. I'm not sure the conversation would benefit anyone, so let's call it a draw. 
On Thu, Dec 17, 2009 at 5:56 PM, Ian Hickson i...@hixie.ch wrote: On Thu, 17 Dec 2009, Kenton Varda wrote: On Thu, Dec 17, 2009 at 12:58 PM, Ian Hickson i...@hixie.ch wrote: With CORS, I can trivially (one line in the .htaccess file for my site) make sure that no sites can use XBL files from my site other than my sites. My sites don't do any per-user tracking; doing that would involve orders of magnitude more complexity. I was debating about one particular use case, and this one that you're talking about now is completely different. I can propose a different solution for this case, but I think someone will just change the use case again to make my new solution look silly, and we'll go in circles. The advantage of CORS is that it addresses all these use cases well. There are perfectly good cap-based solutions as well. But every capability-based equivalent to an existing ACL-based solution is obviously not going to be identical, and thus will have some trade-offs. Usually these trade-offs can be reasonably tailored to fit any particular real-world use case. But if you're bent on a solution that provides *exactly* what the ACL solution provides (ignoring real-world considerations), the solution usually won't be pretty. Of course, when presented with a different way of doing things, it's always easier to see the negative trade-offs than to see the positives, which is why most debates about capability-based security seem to come down to people nit-picking about the perceived disadvantages of caps while ignoring the benefits. I think this is what makes Mark so grumpy. :/ How can an origin voluntarily identify itself in an unspoofable fashion? Without running scripts? It can't. I don't understand how it can solve the problem then. If it's trivial for a site to spoof another, then the use case isn't solved. My point was that for simple non-security-related statistics gathering, spoofing is not a big concern. 
None of the use cases I've mentioned involve statistics gathering. It was Maciej that brought up this use case. I was responding to him. I have no problem with offering a feature like UM in CORS. My objection is to making the simple cases non-trivial, e.g. by never including Origin headers in any requests. Personally I'm not actually arguing against standardizing CORS. What I'm arguing is that UM is the natural solution for software designed in an object-oriented, loosely-coupled way. CORS is a superset of UM; I have no objection to CORS-enabled APIs exposing the UM subset (i.e. allowing scripts to opt out of sending the Origin header). However, my understanding is that the UM proposal is to explictly not allow Origin to ever be sent, which is why there is a debate. (If the question was just should we add a feature to CORS to allow Origin to not be sent, then I think the debate would have concluded without much argument long ago.) I think the worry is about the chicken-and-egg problem I mentioned above: We justify the standard based on the existing infrastructure, but new infrastructure will be built based on the direction in the standards. Mark, Tyler, and I believe the web would be better off if most things were