Re: Fallout of non-encapsulated shadow trees

2014-07-02 Thread Adam Barth
On Tue, Jul 1, 2014 at 8:52 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 7/1/14, 9:13 PM, Brendan Eich wrote:
 Are you sure? Because Gecko has used XBL (1) to implement, e.g., <input
 type=file>, or so my aging memory says.

 We use XBL to implement <marquee>.

I'm working on using web components to implement <marquee> in Blink:

https://github.com/abarth/marquee

I've studied the XBL implementation of <marquee> in Gecko, and it does
leak some implementation details.  As a simple example,
alert(document.createElement('marquee')) in Firefox says [object
HTMLDivElement] because the XBL implementation uses a <div>.
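
A quick way to observe the same leak from script (an illustrative probe, not
from the original message; a leak-free implementation is assumed to report a
marquee-specific interface):

  var m = document.createElement('marquee');
  // XBL-based Gecko reports the implementation's wrapper element:
  alert(m);   // [object HTMLDivElement]; a non-leaky engine would say
              // something like [object HTMLMarqueeElement]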

The approach I'm using is roughly the one outlined by Maciej in [1].
The most challenging aspect by far is isolating the script interface
inside and outside the component.

If you ignore script isolation, we already know that the current
design of shadow DOM can provide isolation by twiddling some internal
bits because we use shadow DOM in the engine to implement <details>,
<keygen>, <video>, <progress>, and several other elements.  We could
expose an API to authors that would let them twiddle those same bits,
but I'm not sure we should do that without providing script isolation
of some form.

My sense from following this discussion is that there's been a lot of
talking about this subject and not very much coding.  Hopefully I'll
learn something interesting by writing code that I can report back to
this group.

Kindly,
Adam

[1] http://lists.w3.org/Archives/Public/public-webapps/2014JulSep/0024.html



Re: Fallout of non-encapsulated shadow trees

2014-07-02 Thread Adam Barth
On Wed, Jul 2, 2014 at 8:15 AM, Ryosuke Niwa rn...@apple.com wrote:
 On Jul 2, 2014, at 8:07 AM, Adam Barth w...@adambarth.com wrote:
 On Tue, Jul 1, 2014 at 8:52 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 7/1/14, 9:13 PM, Brendan Eich wrote:
 Are you sure? Because Gecko has used XBL (1) to implement, e.g., <input
 type=file>, or so my aging memory says.

 We use XBL to implement <marquee>.

 I'm working on using web components to implement <marquee> in Blink:

 https://github.com/abarth/marquee

 I've studied the XBL implementation of <marquee> in Gecko, and it does
 leak some implementation details.  As a simple example,
 alert(document.createElement('marquee')) in Firefox says [object
 HTMLDivElement] because the XBL implementation uses a <div>.

 The approach I'm using is roughly the one outlined by Maciej in [1].
 The most challenging aspect by far is isolating the script interface
 inside and outside the component.

 If you ignore script isolation, we already know that the current
 design of shadow DOM can provide isolation by twiddling some internal
 bits because we use shadow DOM in the engine to implement <details>,
 <keygen>, <video>, <progress>, and several other elements.  We could
 expose an API to authors that would let them twiddle those same bits,
 but I'm not sure we should do that without providing script isolation
 of some form.

 By "twiddling some internal bits", you mean not exposing said shadow roots
 on the element?

That's the general idea, but there are some more details.  Take a look
at how UserAgentShadowRoot is used in Blink if you want to get some
idea of what's required:

https://code.google.com/p/chromium/codesearch#search/q=UserAgentShadowRoot&sq=package:chromium&type=cs
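
Conceptually, the effect of those bits is that a user-agent shadow tree is
never reachable through any author-facing accessor (an illustrative sketch,
not Blink's actual internals):

  var video = document.createElement('video');
  // The engine builds the media controls inside an internal shadow tree,
  // but none of it leaks out to the page:
  alert(video);             // [object HTMLVideoElement], not a <div>
  alert(video.firstChild);  // null -- the controls are not DOM children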

 My sense from following this discussion is that there's been a lot of
 talking about this subject and not very much coding.  Hopefully I'll
 learn something interesting by writing code that I can report back to
 this group.

 I don't think we necessarily have to code anything in order to have a 
 discussion in this mailing list.

Yes, the hundreds of messages on this topic demonstrate that's the case.  :)

 Correct me if I'm wrong, but as far as I know neither the WebApps WG nor the
 W3C has any sort of policy mandating that we create a polyfill or prototype
 in order to write a working draft, for example.

W3C culture doesn't place as much emphasis on running code as IETF
culture does, for example, but for topics like this one, the best way
to understand them is to try writing some code.

 Having said that, gaining implementation experience is definitely valuable, 
 and I look forward to hearing what you find out with your work.

Thanks!

Adam



Re: [webcomponents] Proposal for Cross Origin Use Case and Declarative Syntax

2013-11-11 Thread Adam Barth
On Mon, Nov 11, 2013 at 12:57 AM, Ryosuke Niwa rn...@apple.com wrote:
 On Nov 11, 2013, at 3:56 PM, Adam Barth w...@adambarth.com wrote:
 Can you help me understand what security properties your proposal
 achieves and how it achieves them?  I spent some time thinking about
 this problem a couple of years ago when this issue was discussed in
 depth, but I couldn't come up with a design that was simultaneously
 useful and secure.

 For example, your proposal seems to have the vulnerability described below:

 == Trusted container document ==

 <link rel="import"
 href="https://untrusted.org/untrusted-components.html"
 importcomponents="name-card">
 <body>
 <name-card></name-card>

 == untrusted-components.html ==

 <template defines="name-card" interface="NameCardElement">
 Name: {{name}}<br>Email: {{email}}
 </template>
 <script>
 NameCardElement.prototype.created = function (shadowRoot) {
  var victim = shadowRoot.ownerDocument;
  var script = victim.createElement("script");
  script.textContent = "alert(/hacked/);";
  victim.body.appendChild(script);
 };
 </script>

 Maybe I'm not understanding your proposal correctly?  If this issue is
 indeed a vulnerability with your proposal, I have no doubt that you
 can modify your proposal to patch this hole, but iterating in that way
 isn't likely to lead to a secure design.

 The owner document of the shadow root in that case will be that of the 
 component; i.e. https://untrusted.org/untrusted-components.html in this case.

 In other words, we’re inserting a security boundary between the host element 
 and the shadow root.  The shadow root in this case is a funky node object in 
 that it has its host element in an entirely different document.

Was that written somewhere in your proposal and I missed it?

Adam



Re: [webcomponents] Proposal for Cross Origin Use Case and Declarative Syntax

2013-11-10 Thread Adam Barth
Hi Ryosuke,

Can you help me understand what security properties your proposal
achieves and how it achieves them?  I spent some time thinking about
this problem a couple of years ago when this issue was discussed in
depth, but I couldn't come up with a design that was simultaneously
useful and secure.

For example, your proposal seems to have the vulnerability described below:

== Trusted container document ==

<link rel="import"
href="https://untrusted.org/untrusted-components.html"
importcomponents="name-card">
<body>
<name-card></name-card>

== untrusted-components.html ==

<template defines="name-card" interface="NameCardElement">
Name: {{name}}<br>Email: {{email}}
</template>
<script>
NameCardElement.prototype.created = function (shadowRoot) {
  var victim = shadowRoot.ownerDocument;
  var script = victim.createElement("script");
  script.textContent = "alert(/hacked/);";
  victim.body.appendChild(script);
};
</script>

Maybe I'm not understanding your proposal correctly?  If this issue is
indeed a vulnerability with your proposal, I have no doubt that you
can modify your proposal to patch this hole, but iterating in that way
isn't likely to lead to a secure design.

Thanks,
Adam


On Fri, Nov 8, 2013 at 11:24 AM, Ryosuke Niwa rn...@apple.com wrote:
 Hi all,

 We have been discussing the cross-origin use case and declarative syntax of
 web components internally at Apple, and here is our straw-man proposal to
 amend the existing Web Components specifications to support it.

 1. Modify HTML Imports to run scripts in the imported document itself
 This allows the importee and the importer to not share the same script
 context, etc…

 2. Add an "importcomponents" content attribute on the <link> element
 It defines the list of custom element tag names to be imported from the
 imported HTML document.
 e.g. <link rel="import" href="~" importcomponents="tag-1 tag-2"> will export
 custom elements of tag names tag-1 and tag-2 from ~.  Any name that
 didn't have a definition in the imported document is ignored (i.e. if tag-2
 was not defined in ~, it would be skipped but tag-1 will still be
 imported).

 This mechanism prevents the imported document from defining arbitrary
 components in the host document.

 3. Support static (write-once) binding of an HTML template
 e.g.
 <template id="cardTemplate">Name: {{name}}<br>Email: {{email}}</template>
 <script>
 document.body.appendChild(cardTemplate.instantiate({name: "Ryosuke Niwa",
 email: "rn...@webkit.org"}));
 </script>

 4. Add an "interface" content attribute to the <template> element
 This content attribute specifies the name of the JavaScript constructor
 function to be created in the global scope. The UA creates one, and it will
 be used to instantiate a given custom element.  The author can then set up
 the prototype chain as needed:

 <template defines="name-card" interface="NameCardElement">
 Name: {{name}}<br>Email: {{email}}
 </template>
 <script>
 NameCardElement.prototype.name = function () {...}
 NameCardElement.prototype.email = function () {...}
 </script>

 This is similar to doing:
 var NameCardElement = document.register('name-card');

 5. Add a "defines" content attribute on the HTML <template> element to
 define a custom element
 This new attribute defines a custom element of the given name for the
 template content.
 e.g. <template defines="nestedDiv"><div><div></div></div></template> will
 let you use <nestedDiv></nestedDiv>

 We didn't think having a separate custom element was useful because we
 couldn't think of a use case where you wanted to define a custom element
 declaratively and not use a template by default, and having to associate the
 first <template> element with the custom element seemed like unnecessary
 complexity.

 5.1. When a custom element is instantiated, automatically instantiate the
 template inside a shadow root after statically binding the template with
 dataset
 This allows statically declaring arguments to a component.
 e.g.
 <template defines="name-card">Name: {{name}}<br>Email: {{email}}</template>
 <name-card data-name="Ryosuke Niwa" data-email="rn...@webkit.org">

 5.2. When a new custom element object is constructed, the "created" callback
 is called with a shadow root
 Unfortunately, we can't let the author define a constructor because the
 element hasn't been properly initialized with the right JS wrapper at the
 time of its construction.  So just like we can't do new HTMLTitleElement,
 we're not going to let the author do interesting things inside a custom
 element's constructor.  Instead, we're going to call the created function on
 its prototype chain:

 <template defines="name-card" interface="NameCardElement">
 Name: {{name}}<br>Email: {{email}}
 </template>
 <script>
 NameCardElement.prototype.name = function () {...}
 NameCardElement.prototype.email = function () {...}
 NameCardElement.prototype.created = function (shadowRoot) {
 ... // Initialize the shadowRoot here.
 }
 </script>

 This is similar to the way document.register works in that document.register
 creates a constructor automatically.

 6. The cross-origin component does not have access to the shadow host
 

Re: CfC: publish WD of XHR; deadline November 29

2012-11-26 Thread Adam Barth
On Mon, Nov 26, 2012 at 5:53 AM, Ms2ger ms2...@gmail.com wrote:
 On 11/26/2012 02:44 PM, Jungkee Song wrote:
 From: Arthur Barstow [mailto:art.bars...@nokia.com]
 Sent: Monday, November 26, 2012 9:46 PM

 On 11/26/12 1:38 AM, ext Jungkee Song wrote:
 I suggest we put in the following wording so that Anne's work and the WHATWG
 are credited. If we reach consensus, let me use this content for publishing
 the WD.

 Please put your proposed text in a version of the spec we can review and
 send us the URL of that version.

 Please find the version at:
 http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html

 Thanks, this looks a lot better.

Yes.  Thanks for addressing my concern.

Adam

 However, I'd also like to see a link to the
 source in the dl in the header.

 Thanks
 Ms2ger





Re: CfC: publish WD of DOM; deadline December 2

2012-11-25 Thread Adam Barth
It seems like we should be consistent in our handling of the DOM and
XHR documents.  For example, the copy of DOM at
http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html lacks a
Status of this Document section, but presumably the version published
by this working group will have one.  If we decide that the SotD
section of XHR ought to acknowledge the WHATWG, we likely should do
the same for this document.

The copy of DOM at
http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html seems to
give appropriate credit by linking to the Living Standard and listing
sensible Editors.  Will the version of the document published by this
working group also give credit appropriately?

Adam


On Sun, Nov 25, 2012 at 5:49 AM, Arthur Barstow art.bars...@nokia.com wrote:
 This is a Call for Consensus to publish a Working Draft of the DOM spec using
 #ED as the basis.

 Please note Lachlan will continue to edit the ED during this CfC period.

 Agreement to this proposal: a) indicates support for publishing a new WD;
 and b) does not necessarily indicate support of the contents of the WD.

 If you have any comments or concerns about this proposal, please reply to
 this e-mail by December 2 at the latest.

 Positive response to this CfC is preferred and encouraged and silence will
 be assumed to mean agreement with the proposal.

 -Thanks, AB

 #ED http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html







Re: CfC: publish WD of XHR; deadline November 29

2012-11-23 Thread Adam Barth
On Fri, Nov 23, 2012 at 7:57 AM, Glenn Adams gl...@skynav.com wrote:
 On Fri, Nov 23, 2012 at 12:09 AM, Adam Barth w...@adambarth.com wrote:
 On Thu, Nov 22, 2012 at 9:16 AM, Ms2ger ms2...@gmail.com wrote:
  On 11/22/2012 02:01 PM, Arthur Barstow wrote:
  The XHR Editors would like to publish a new WD of XHR and this is a
  Call for Consensus to do so using the following ED (not yet using the
  WD template) as the basis:
  http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html.
 
  Agreement to this proposal: a) indicates support for publishing a new
  WD; and b) does not necessarily indicate support of the contents of the
  WD.
 
  If you have any comments or concerns about this proposal, please reply
  to this e-mail by December 29 at the latest.
 
  Positive response to this CfC is preferred and encouraged and silence
  will be assumed to mean agreement with the proposal.
 
  I object unless the draft contains a clear pointer to the canonical spec
  on
  whatwg.org.

 I agree.  The W3C should not be in the business of plagiarizing the
 work of others.

 Are you claiming that the W3C is in the business of plagiarizing?

I'm saying that the W3C (and this working group in particular) is
taking Anne's work, without his permission, and passing it off as its
own.  That is plagiarism, and we should not do it.

 plagiarism. n. The practice of taking someone else's work or ideas and
 passing them off as one's own.

 The Status of this Document section should state clearly that this
 document is not an original work of authorship of the W3C.

 The SotD section need only refer to the working group that produced the
 document. Authorship is not noted or tracked in W3C documents.

 If Anne's work was submitted to and prepared in the context of the WebApps
 WG, then it is a product of the WG, and there is no obligation to refer to
 other, prior or variant versions.

 Referring to an earlier, draft version published outside of the W3C process
 does not serve any purpose nor is it required by the W3C Process.

Legally, we are under no obligation to acknowledge Anne's work.
However, we should be honest about the origin of the text and not try
to pass off Anne's work as our own.

More pointedly: plagiarism is not illegal but that doesn't mean we should do it.

 If there is a question on the status of the Copyright declaration of the
 material or its origin, then that should be taken up by the W3C Pubs team.

My concern is not about copyright.  My concern is about passing off
Anne's work as our own.

Adam



Re: Re: CfC: publish WD of XHR; deadline November 29

2012-11-23 Thread Adam Barth
On Fri, Nov 23, 2012 at 9:01 AM, Hallvord Reiar Michaelsen Steen
hallv...@opera.com wrote:
 Are you claiming that the W3C is in the business of plagiarizing?

 I'm saying that the W3C (and this working group in particular) is
 taking Anne's work, without his permission, and passing it off as its
 own.

 Speaking as one of the W3C editors of the spec: first, I agree that crediting 
 needs to be sorted out, and that Anne should be credited in a way that better 
 reflects his contributions. I appreciate that Ms2ger points this out during 
 the CfC.

 Secondly, I think it's a bit harsh to say that we take his work without his 
 permission - legally, I believe the WHATWG deliberately publishes under a 
 licence that allows this, and on a moral and practical basis, as W3C editors 
 we intend to collaborate with Anne in the best possible way in a situation 
 that's not really of our design: we involve him in discussions, appreciate 
 his input, and I've also sent pull requests on GitHub to keep the specs in 
 sync and intend to continue doing so. I think claiming that we act without 
 Anne's permission depicts a working environment that's less constructive than 
 the one we're both aiming for and achieving.

I'm happy that you and Anne have a productive working relationship.
My comment is based on this message:

http://lists.w3.org/Archives/Public/public-webapps/2012OctDec/0538.html

Perhaps I should have moved the phrase "without his permission" to the
end of the sentence.

Adam



Re: CfC: publish WD of XHR; deadline November 29

2012-11-23 Thread Adam Barth
On Fri, Nov 23, 2012 at 9:11 AM, Glenn Adams gl...@skynav.com wrote:
 On Fri, Nov 23, 2012 at 9:36 AM, Adam Barth w...@adambarth.com wrote:
 My concern is not about copyright.  My concern is about passing off
 Anne's work as our own.

 As I have pointed out above, W3C specs do not track authorship or individual
 contributions to the WG process. If Anne performed his work as author in the
 context of participating in the W3C process,

This premise is false.  We're discussing the work that he is currently
performing outside the W3C process.  Specifically, the changes noted
as "Merge Anne's change" in the past 11 days:

http://dvcs.w3.org/hg/xhr/shortlog

 then there is no obligation to
 acknowledge that, though there is a long standing practice of including an
 Acknowledgments section or paragraph that enumerates contributors. I would
 think that listing Anne as Editor or Former Editor and listing Anne in an
 Acknowledgments paragraph should be entirely consistent with all existing
 W3C practice.

 Are you asking for more than this?

Yes.  I'm asking for the Status of this Document section to more honestly
convey the origin of the text in the document by stating that this
document is based in part (or in whole) on
http://xhr.spec.whatwg.org/.

 And if so, then what is the basis for that?

As I wrote before, not doing the above is taking Anne's work and
passing it off as our own.  That's plagiarism, and we shouldn't do it.

If this working group isn't comfortable stating the truth about the
origin of this document, then we shouldn't publish the document at
all.

On Fri, Nov 23, 2012 at 9:16 AM, Julian Aubourg j...@ubourg.net wrote:
 In an ideal world, Anne would be the editor of the W3C version of the spec
 and that would be the end of it. Such is not the case. Anne is not the
 editor of the W3C version: he doesn't edit and/or publish anything related
 to the W3C XHR spec. Current editors do and while it's mostly brain-dead
 copy/paste, some decisions (especially regarding spec merging) are to be
 made W3C-side. Current editors also act as first-level reviewers and
 actually give Anne feedback.

 To be honest, I hate this situation. As far as I'm concerned, Anne *is* the
 author of the XHR spec but, AFAIK, there is no standardized way to
 acknowledge this in W3C documents nor does the WHATWG licensing make it
 mandatory. As a side note, as an open source developer, I can understand
 why. If the specs are on public repos and accept pull requests (or diffs, or
 whatever), then the very notion of authorship becomes a bit blurry.

 Anyway, I'm one of the co-editors of the W3C XHR spec and I don't claim to be
 the author of anything in the spec. I'm more interested in pushing the spec
 forward than achieving glory. I accepted the co-editor position to help
 because help was needed. So while I empathize with the whole "W3C
 plagiarizes WHATWG" outrage, could this conversation be held where it
 belongs? That is, far higher up the food chain than this WG.

I'm happy to take this discussion to wherever is appropriate.
However, I object to publishing this document until this issue is
resolved.

 Now, that being said, and seeing as we cannot put Anne as an editor of the
 W3C version of the spec (because, technically, he's not), how do you guys
 suggest we go about acknowledging the WHATWG source? Where in the spec? How?
 With what kind of wording?

I would recommend acknowledging the WHATWG upfront in the Status of
this Document section.  The document currently reads:

---8<---
This document is produced by the Web Applications (WebApps) Working
Group. The WebApps Working Group is part of the Rich Web Clients
Activity in the W3C Interaction Domain.
---8<---

I would recommend modifying this paragraph to state that this document
is being produced by the WebApps Working Group based on the WHATWG
version and to include a link or citation to the WHATWG version of the
specification.

Perhaps Anne would be willing to suggest some text that he would find
appropriate?

Adam



Re: CfC: publish WD of XHR; deadline November 29

2012-11-23 Thread Adam Barth
On Fri, Nov 23, 2012 at 11:35 AM, Glenn Adams gl...@skynav.com wrote:
 On Fri, Nov 23, 2012 at 10:28 AM, Adam Barth w...@adambarth.com wrote:
 On Fri, Nov 23, 2012 at 9:11 AM, Glenn Adams gl...@skynav.com wrote:
  On Fri, Nov 23, 2012 at 9:36 AM, Adam Barth w...@adambarth.com wrote:
  My concern is not about copyright.  My concern is about passing off
  Anne's work as our own.
 
  As I have pointed out above, W3C specs do not track authorship or
  individual
  contributions to the WG process. If Anne performed his work as author in
  the
  context of participating in the W3C process,

 This premise is false.  We're discussing the work that he is currently
 performing outside the W3C process.  Specifically, the changes noted
 as Merge Anne's change in the past 11 days:

 http://dvcs.w3.org/hg/xhr/shortlog

 How is this different from the process being used in the HTML WG w.r.t.
 bringing WHATWG ongoing work by Ian back into the W3C draft?

I am not a member of the HTML Working Group.  Were I a member, I might
well object to the process being used there as well.

 It seems like
 whatever solution is used here to satisfy Anne's concerns should be
 coordinated with Ian and the HTML5 editor team so that we don't end up with
 two methods for acknowledgment.

That might be worth doing, but it does not remove my objection to this
working group publishing this document.

Adam



Re: CfC: publish WD of XHR; deadline November 29

2012-11-22 Thread Adam Barth
On Thu, Nov 22, 2012 at 9:16 AM, Ms2ger ms2...@gmail.com wrote:
 On 11/22/2012 02:01 PM, Arthur Barstow wrote:
 The XHR Editors would like to publish a new WD of XHR and this is a
 Call for Consensus to do so using the following ED (not yet using the
 WD template) as the basis:
 http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html.

 Agreement to this proposal: a) indicates support for publishing a new
 WD; and b) does not necessarily indicate support of the contents of the
 WD.

 If you have any comments or concerns about this proposal, please reply
 to this e-mail by December 29 at the latest.

 Positive response to this CfC is preferred and encouraged and silence
 will be assumed to mean agreement with the proposal.

 I object unless the draft contains a clear pointer to the canonical spec on
 whatwg.org.

I agree.  The W3C should not be in the business of plagiarizing the
work of others.

plagiarism. n. The practice of taking someone else's work or ideas and
passing them off as one's own.

The Status of this Document section should state clearly that this
document is not an original work of authorship of the W3C.  Instead,
the document should clearly state that it is based in part (or in
whole) on the WHATWG version.  I don't have a problem with the W3C
attaching its copyright and license to the document.  I do have a
problem with plagiarism.

Adam



Re: Call for Editor: URL spec

2012-11-05 Thread Adam Barth
On Mon, Nov 5, 2012 at 4:46 AM, Arthur Barstow art.bars...@nokia.com wrote:
 On 11/5/12 7:29 AM, ext Julian Reschke wrote:
 On 2012-11-05 12:46, Arthur Barstow wrote:
 We need an Editor(s) to move WebApps' URL spec towards Recommendation.
 If you are interested in this Editor position, please contact me offlist.

 -Thanks, AB

 [URL] http://dvcs.w3.org/hg/url/

 Is this about the URI above or about http://url.spec.whatwg.org/?

 Yes, my expectation is WebApps will use the work Anne is doing.

Is there some reason Anne isn't going to edit the spec?

Adam



Re: Call for Editor: URL spec

2012-11-05 Thread Adam Barth
On Mon, Nov 5, 2012 at 3:32 PM, Arthur Barstow art.bars...@nokia.com wrote:
 On 11/5/12 5:47 PM, ext Tab Atkins Jr. wrote:
 In the meantime, W3C is copying Anne's work in several specs, to

 It seems like W3C groups copying WHATWG's work has been ongoing for several
 years (so I think this is old news, especially since, AFAIU, it is
 permissible, perhaps even encouraged? via the WHATWG copyright) ;-).

For my part, I don't mind Anne forking my URL spec given that he plans
to put in the effort to improve the document.  I appreciate that he's
licensed his fork in such a way as to let me merge his improvements
into my copy of the spec if I choose.  Does the WebApps Working Group
plan to do either of these things?

A) Put in technical effort to improve the specification
B) License the fork in such a way as to let me merge improvements into my copy

I realize that I did not choose a license for my work that imposes
these requirements as a matter of law.

Adam



Re: [webcomponents] More backward-compatible templates

2012-11-01 Thread Adam Barth
On Thu, Nov 1, 2012 at 6:33 AM, Maciej Stachowiak m...@apple.com wrote:


 On Nov 1, 2012, at 1:57 PM, Adam Barth w...@adambarth.com wrote:



 (5) The nested template fragment parser operates like the template
 fragment parser, but with the following additional difference:
  (a) When a close tag named +script is encountered which does not
 match any currently open script tag:


 Let me try to understand what you've written here concretely:

 1) We need to change the end tag open state to somehow recognize
 </+script> as an end tag rather than as a bogus comment.
 2) When the tree builder encounters such an end tag in the relevant state(s),
 we execute the substeps you've outlined below.

 The problem with this approach is that nested templates parse differently
 than top-level templates.  Consider the following example:

 <script type=template>
  <b
 </script>

 In this case, none of the nested template parser modifications apply and
 we'll parse this as normal for HTML.  That means the contents of the
 template will be "<b" (let's ignore whitespace for simplicity).

 <script type=template>
   <h1>Inbox</h1>
   <script type=template>
 <b
   </+script>
  </script>

 Unfortunately, the nested template in this example parses differently than
 it did when it was a top-level template.  The problem is that the
 characters </+script> are not recognized by the tokenizer as an end tag
 because they are encountered by the nested template fragment parser in the
 "before attribute name" state.  That means they get treated as some sort of
 bogus attributes of the <b> tag rather than as an end tag.


 OK. Do you believe this to be a serious problem? I feel like inconsistency
 in the case of a malformed tag is not a very important problem, but perhaps
 there are cases that would be more obviously problematic, or reasons not
 obvious to me to be very concerned about cases exactly like this one.


It's going to lead to subtle parsing bugs in web sites, which usually means
security vulnerabilities.  :(

Also: can you think of a way to fix this problem? Or alternately, do you
 believe it's fundamentally not fixable? I've only spent a short amount of
 time thinking about this approach, and I am not nearly as much an expert on
 HTML parsing as you are.


I definitely see the appeal of trying to re-use <script> for templates.
Unfortunately, I couldn't figure out how to make it work sensibly with
nested templates, which is why I ended up recommending that we use the
<template> element.

Another approach we considered was to separate out the "hide from legacy
user agents" and the "define a template" operations.  That approach pushes
you towards a design like

<xmp>
  <template>
    <h1>Inbox</h1>
    <template>
      <h2>Folder</h2>
    </template>
  </template>
</xmp>

You could do the same thing with <script type=something>, but <xmp> is
shorter (and currently unused).  This approach has a bunch of
disadvantages, including being verbose and having some unexpected parsing:

<xmp>
  <template>
    <div data-foo="<xmp>bar</xmp>">
      This text is actually outside the template!
    </div>
  </template>
</xmp>

The <script type=template> has similar problems, of course:

<script type=template>
  <div data-foo="<script>bar</script>">
    This text is actually outside the template!
  </div>
</script>

Perhaps developers have a clearer understanding of such problems from
having to escape </script> in JavaScript?
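
For comparison, the escape JavaScript developers already perform today
(illustrative):

  // A literal close tag would end the surrounding <script> block early,
  // so the slash is escaped to hide it from the HTML parser:
  document.write('<script src=x.js><\/script>');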

All this goofiness eventually convinced me that if we want to support
nested templates, we ought to use the usual nesting mechanics of HTML,
which leads to a design like <template> that nests like a normal tag.

  (a.i) Consume the token for the close tag named +script.
  (a.ii) Create a DocumentFragment containing the parsed contents
 of the fragment.
  (a.iii) [return to the parent template fragment parser] with the
 result of step (a.ii) with the parent parser to resume after the +script
 close tag.


 This is pretty rough and I'm sure I got some details wrong. But I believe
 it demonstrates the following properties:
 (B) Allows for perfect fidelity polyfills, because it will manifestly end
 the template in the same place that an unaware browser would close the
 script element.
 (C) Does not require multiple levels of escaping.
 (A) Can be implemented without changes to the core HTML parser (though
 you'd need to introduce a new fragment parsing mode).


 I suspect we're quibbling over no true Scotsman semantics here, but you
 obviously need to modify both the HTML tokenizer and tree builder for this
 approach to work.


 In principle you could create a whole separate tokenizer and tree builder.
 But obviously that would probably be a poor choice for a native
 implementation compared to adding some flags and variable behavior. I'm not
 even necessarily claiming that all the above properties are advantages, I
 just wanted to show that there need not be a multi-escaping problem nor
 necessarily scary complicated changes to the tokenizer states for script.

 I think the biggest advantage

Re: WebApps' File: Writer & File: DS specs and SysApps' File System & Media Storage specs

2012-11-01 Thread Adam Barth
On Thu, Nov 1, 2012 at 9:31 AM, Arthur Barstow art.bars...@nokia.com wrote:
 [ My apologies in advance for cross-posting but I'd like feedback from both
 the WebApps and SysApps communities ... ]

 Hi Eric, Adam, WonSuk, All,

 During WebApps' Oct 29 discussion about the File: Writer (#Writer) and File:
 Directories and System (#DS) specs (#Mins), someone mentioned SysApps was
 doing some related work. I just scanned SysApps' #Charter and noticed it
 includes these two related deliverables:

 1. File System - I didn't notice any additional information in the charter
 other than a FPWD is expected in Q3-Q6.

 2. Media Storage - An API to manage the device's storage of specific
 content types (e.g. pictures). Examples: Tizen Media Content (#Tizen), B2G
 Device Storage (#B2G).

 Would someone please explain the relationship between these various specs?

There are two different facilities:

1) An application might want to gain access to system-level resources,
such as files and directories from the host operating system.  For
example, a photo editing application might want to gain access to the
user's My pictures directory.

2) Once the application has access to these resources, the application
might want to read and write the resources.  For example, a photo
editing application might want to read in files, manipulate them, and
save the result back to the user's My pictures directory.

The tentative plan for SysApps was to take on (1) and have another
working group, such as WebApps or DAP, provide (2).  The idea
behind this approach is that an API for (2) is useful more broadly
than just in the SysApps security model.  For example, there are many
use cases for web applications to read and write persistent data in an
origin-sandboxed storage location (e.g., as addressed by IndexedDB and
FileSystem).  The part that's unique to SysApps is the privilege to
read and write persistent data outside the origin sandbox (e.g., from
and to the user's My pictures directory).
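
As a sketch of how the two layers compose (using the prefixed, 2012-era
FileSystem API for (2); the quota and file name here are made up):

  // Layer (2): read/write inside an origin-sandboxed storage area.  Layer
  // (1) would instead hand the app a directory entry for a real OS
  // directory such as the user's My pictures.
  webkitRequestFileSystem(TEMPORARY, 1024 * 1024, function (fs) {
    fs.root.getFile('photo-edit.tmp', {create: true}, function (entry) {
      // entry.createWriter() / entry.file() give write and read access
    });
  }, function (err) { /* quota denied, etc. */ });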

 In the case of SysApps' Media Storage spec, perhaps it is WebApps' IDB spec
 that is (more) relevant but that may depend on the UCs of Media Storage so
 clarification here is also welcome.

We need to have a discussion in the SysApps working group about
precisely which use cases we want to address.  For example, we might
limit our attention to media, such as photos, audio, and video, in
which case something like IndexedDB might be sufficient for (2), but I
expect that some folks in the working group will want to address
broader use cases, such as a text editor, in which case something like
FileSystem might be more appropriate.

As far as the SysApps working group is concerned, we likely won't get
to that discussion until phase 2.  At the moment, the working group
is focused on less controversial topics.  :)

Adam


 (I noticed a somewhat related thread this past summer in #public-sysapps but
 it appears that was mostly about the applicability (or not) of Web Intents
 vis-a-vis #1 and #2 above.)

 -Thanks, AB

 #Writer http://dev.w3.org/2009/dap/file-system/file-writer.html
 #DS http://dev.w3.org/2009/dap/file-system/file-dir-sys.html
 #Charter http://www.w3.org/2012/09/sysapps-wg-charter
 #Mins http://www.w3.org/2012/10/29-webapps-minutes.html#item19
 #Tizen
 https://developer.tizen.org/help/index.jsp?topic=%2Forg.tizen.web.device.apireference%2Ftizen%2Fmediacontent.html
 #B2G https://wiki.mozilla.org/WebAPI/DeviceStorageAPI
 #public-sysapps
 http://lists.w3.org/Archives/Public/public-sysapps/2012May/0038.html




Re: [webcomponents] More backward-compatible templates

2012-10-31 Thread Adam Barth
On Tue, Oct 30, 2012 at 6:58 AM, Maciej Stachowiak m...@apple.com wrote:

 In the WebApps meeting, we discussed possible approaches to <template> that 
 may ease the transition between polyfilled implementations and native 
 support, avoid HTML/XHTML parsing inconsistency, and in general add less 
 weirdness to the Web platform.

 Here are some possibilities, not necessarily mutually exclusive:

 (1) <template src>

 Specify templates via external files rather than inline content. External CSS 
 and external scripts are very common, and in most cases inline scripts and 
 style are the exception. It seems likely this will also be true for 
 templates. It's likely desirable to provide this even if there is also some 
 inline form of templates.

 (2) <template srcdoc>

 Use the same approach as <iframe srcdoc> for specifying content inline while 
 remaining compatible with legacy parsing. The main downside is that the 
 required escaping is ugly and hard to follow, especially in the face of 
 nesting.

 (3) <script type=template> (or <script language=template>?)

 Define a new script type to use for templates. This provides almost all the 
 syntactic convenience of the original <template> element - the main downside 
 is that, if your template contains a script or another nested template, you 
 have to escape the close script tag in some way.

 The contents of the script would be made available as an inert DOM outside 
 the document, via a new IDL attribute on script (say, 
 HTMLScriptElement.template).

 Here's a comparison of syntaxes:

 Template element:
 <template>
 <div id=foo class=bar></div>
 <script> something();</script>
 <template>
  <div class=nested-template></div>
 </template>
 </template>

 Script template:
 <script type=template>
 <div id=foo class=bar></div>
 <script> something();<\/script>
 <script type=template>
  <div class=nested-template></div>
 <\/script>
 </script>

 Pros:
 - Similar to the way many JS-implemented templating schemes work today
 - Can be polyfilled with full fidelity and no risk of content that's meant to 
 be inert accidentally running
 - Can be translated consistently and compatibly to the XHTML syntax of HTML
 - Less new weirdness. You don't have to create a new construct that appears 
 to have normal markup content, but puts it outside the document.
 - Can be specified (and perhaps even implemented, at least at first) without 
 having to modify the HTML parsing algorithm. In principle, you could specify 
 this as a postprocessing step after parsing, where accessing .template for 
 the first time would be responsible for reparsing the contents and creating 
 the template DOM. In practice, browsers would eventually want to parse in a 
 single pass for performance. (A rough sketch of this reparse-on-access idea 
 follows this message.)


 Cons:
 - <script type=template> is slightly more verbose than <template>
 - Closing of nested scripts/templates requires some escaping

 In my opinion, the advantages of the script template approach outweigh the 
 disadvantages. I wanted to raise it for discussion on the list.
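
To make the reparse-on-access idea concrete, here is a rough polyfill sketch
(the .template property name is from the proposal; everything else, such as
_templateDoc, is invented for illustration):

  // On first access, reparse the script's text into an inert document
  // created outside the main document, so nothing in it runs or loads.
  Object.defineProperty(HTMLScriptElement.prototype, 'template', {
    get: function () {
      if (this.type !== 'template') return null;
      if (!this._templateDoc) {
        var doc = document.implementation.createHTMLDocument('');
        doc.body.innerHTML = this.textContent;
        this._templateDoc = doc;
      }
      return this._templateDoc.body;  // the inert template content tree
    }
  });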

I'm not sure I understand the escaping rules.  Will all \ characters
need to be escaped as well?

How would multiply nested templates work?

<script type=template>
<h1>Inbox</h1>
<script type=template>
<h2>Folder i</h2>
<script type=template>
<h3>Email i</h3>
<script type=template>
<h4>Recipient i</h4>
</script>
<\\/script>
<\/script>
</script>

It seems like the number of backslashes might grow exponentially...

Adam



Re: Moving File API: Directories and System API to Note track?

2012-09-20 Thread Adam Barth
On Wed, Sep 19, 2012 at 11:50 PM, James Graham jgra...@opera.com wrote:
 On Wed, 19 Sep 2012, Adam Barth wrote:
 On Wed, Sep 19, 2012 at 1:46 PM, James Graham jgra...@opera.com wrote:
 On Wed, 19 Sep 2012, Edward O'Connor wrote:
 Olli wrote:
 I think we should discuss moving File API: Directories and
 System API from Recommendation track to Note.

 Sounds good to me.

 Indeed. We are not enthusiastic about implementing an API that has to
 traverse directory trees as this has significant technical challenges, or
 may expose users' path names, as this has security implications. Also
 AIUI this API is not a good fit for all platforms.

 There's nothing in the spec that exposes user paths.  That's just FUD.

 I was thinking specifically of the combination of Drag and Drop and
 this API. I assumed that at some level one would end up with a bunch of
 Entry objects which seem to expose a path. It then seems that a user
 who is tricked into dragging their root drive onto a webapp would expose all
 their paths.

 It is quite possible that this is a horrible misunderstanding of the spec,
 and if so I apologise. Nevertheless I think it's poor form to immediately
 characterise an error as a deliberate attempt to spread lies.

It just has nothing to do with the spec.  It's like complaining that
DOMString might leak user paths because if you use a DOMString with
drag and drop, you might leak user paths.

Adam



Re: Moving File API: Directories and System API to Note track?

2012-09-19 Thread Adam Barth
On Wed, Sep 19, 2012 at 1:46 PM, James Graham jgra...@opera.com wrote:
 On Wed, 19 Sep 2012, Edward O'Connor wrote:
 Olli wrote:
 I think we should discuss moving File API: Directories and
 System API from Recommendation track to Note.

 Sounds good to me.

 Indeed. We are not enthusiastic about implementing an API that has to
 traverse directory trees as this has significant technical challenges, or
 may expose users' path names, as this has security implications. Also AIUI
 this API is not a good fit for all platforms.

There's nothing in the spec that exposes user paths.  That's just FUD.

Adam



Re: sandbox

2012-09-15 Thread Adam Barth
You might be interested in the SysApps working group, which is going
to address these sorts of use cases, including the security issues:

http://www.w3.org/2012/05/sysapps-wg-charter.html

Adam


On Sat, Sep 15, 2012 at 5:01 AM, Angelo Borsotti
angelo.borso...@gmail.com wrote:
 Hello,

 restricting the access made by a web app to a sandboxed filesystem is a
 severe restriction.
 I understand that this is done to preserve security, but the result falls
 short of the mark.
 Web apps that cannot access the local filesystem are meant to access mainly
 the data
 that are stored in some computer in the network (albeit they can somehow
 save them in
 some sandboxed storage so as to let the user work offline).
 Now, consider sensitive data, like, e.g. my bank accounts, what shares I
 own, my medical
 data, etc. Storing them in my computer is a lot more secure than storing
 them in some other
 computer in the network. It has some drawbacks, like, e.g. that I cannot access
 them when
 I am away from home or from my computer, but I could well trade this for
 security.
 I would like to have web apps access them, read and write them, manage them,
 etc.
 Unfortunately, with the current technology, and standards such as the one you
 are developing,
 web apps cannot access them. Of course, I could install and run a web server
 on my
 computer, and have web apps then access my data, but that would effectively
 decrease
 security instead of increasing it.
 We have all lived for decades using traditional apps, implemented in C++ and
 Java,
 accessing the local filesystem (and the whole OS). It is time to shift from
 these technologies
 to the new web ones, and implement apps using HTML and JavaScript --
 provided that we
 can do the same things, at least.
 Security is an issue, but it applies to apps implemented with traditional
 technologies too.
 When I download Firefox, or LibreOffice, I trust them not to wipe out my
 filesystem or
 disrupt my OS because I trust the people that implemented them and I trust
 the place from
 which I downloaded them (i.e. that they are not counterfeited and do not,
 e.g., contain viruses).
 Once I have installed them I have effectively granted them access to my
 computer.
 This simple scheme could also apply to web apps. Note that downloading a
 (traditional)
 app such as Firefox, installing it and running it is something that is
 nowadays done
 using the web. So, the distinction between apps and web apps tends to be
 confined
 to the technology that is used to implement them. From the users'
 perspective they differ
 mostly in the way they are installed. Why then should they differ in what
 they can do?

 So, my proposal is to get rid altogether of the notion of sandboxed
 filesystem, or,
 alternatively, to consider it as a special case of filesystem, and to
 provide access to
 the whole local filesystem.

 Thank you
 -Angelo Borsotti



Re: [XHR] What referrer to use when making requests

2012-09-12 Thread Adam Barth
I certainly agree that we should define these things clearly.  :)

Adam


On Tue, Sep 11, 2012 at 11:44 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Sep 11, 2012 at 10:43 PM, Adam Barth w...@adambarth.com wrote:
 I'm sorry this thread has taken too long.  I've lost all context.  If
 you'd like to continue this discussion, we'll likely need to start
 again.

 My main reaction to what you've written is that Chromium's network
 stack is in an entirely separate process and we manage to implement
 these requirements without using global variables.  I'm not sure if
 that's at all relevant to the topic we were discussing.

 Yeah, sorry I dropped the ball here.

 My main concern is this:

 It's currently generally poorly defined what the referer header is
 supposed to be when running the fetch [1] algorithm.

 For example, the spec says to use the entry script's document when
 the fetch happens "in response to a call to an API".
 But it's very undefined what that includes.

 For example removing an element from the DOM can cause a restyling to
 happen, which can trigger a new CSS rule to apply, which means that a
 new background image is used, which in turn means that a resource needs
 to be fetched. This fetch is definitely happening in response to a call
 to an API, specifically the call to remove the element from the DOM.
 Does that mean that we should use the document of the entry script
 which called element.removeChild as the referrer? It seems much more
 logical to use the URI of the stylesheet which included the rule.
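
 A minimal illustration of that scenario (file names made up):

   <!-- page.html -->
   <link rel=stylesheet href=https://cdn.example.org/theme.css>
   <!-- theme.css contains: li:only-child { background-image: url(bg.png) } -->
   <ul id=list><li>a</li><li>b</li></ul>
   <script>
     var list = document.getElementById('list');
     // After this call the remaining <li> matches li:only-child, so bg.png
     // must be fetched.  Should the referrer be page.html (the entry
     // script's document) or theme.css (the sheet that names the image)?
     list.removeChild(list.firstElementChild);
   </script>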

 What's worse is that it might not even be the call to .removeChild
 which is what's causing the fetch to happen since restyles are lazy.
 So it might be a much later call to get an .offsetTop property which
 causes the fetch to happen. I would think that from a web developer's
 point of view, using the entry script for background fetches
 essentially means that the referrer is random.

 Instead we should always use the URI of the stylesheet which linked to
 the background, which doesn't match any of the options the
 fetch algorithm currently calls for.

 In general, there are currently very few places that I think should
 fall into the category of in response to a call to an API. Off the
 top of my head it's just EventSource, XMLHttpRequest, Worker,
 SharedWorker and importScripts. I'd rather explicitly define for them
 what the referrer should be rather than using the ambiguous catch-all
 "in response to a call to an API" rule.

 Additionally, for at least importScripts there doesn't seem to be a
 Document object we could use that would give us the correct referrer.
 We should use the URI of the worker which called importScripts. The
 same thing applies when EventSource, XMLHttpRequest, Worker or
 SharedWorker is used inside a Worker or SharedWorker.

 And for EventSource, XMLHttpRequest, Worker and SharedWorker I'd much
 rather use the same Document or worker as is used for base-URI
 resolution than the document of the entry script.

 [1] 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/fetching-resources.html#fetching-resources

 / Jonas



Re: [WebIDL] A new way to define UnionTypes

2012-08-29 Thread Adam Barth
On Wed, Aug 29, 2012 at 8:11 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 8/29/12 7:59 AM, Andrei Bucur wrote:
 I was wondering if it would make sense to use supplemental interfaces as a
 way to generate union types, like in this example:
 X implements S
 Y implements S
 Z implements S

 interface someInterface
 {
 NewSyntaxGeneratingUnionTypeFrom(S) someMethod();
 }

 Why can't you just do:

  interface someInterface
  {
 S someMethod();
  };


 ?

 Having this syntax makes it easy to define functions that will definitely
 return objects adhering to a certain interface but do not necessarily
 contain S in the prototype chain.

 Yes, that's the idea of implements...

 A real life use case for this is the Region interface [1]. Right now, the
 only interface implementing Region is Element. The plan is to also allow
 pseudo elements [2] to implement the Region interface. Because Element and
 CSSPseudoElement are totally different types it is impossible to define a
 method that returns a Region (e.g. NamedFlow.getRegions() [3]).

 Why is it impossible, exactly?  Just define it as returning a Region in the
 IDL.
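
For reference, the IDL shape being pointed at looks roughly like this (a
sketch; the exact signatures are assumptions, not quoted from the drafts):

  interface Region { /* ... */ };
  Element implements Region;
  CSSPseudoElement implements Region;

  interface NamedFlow {
    sequence<Region> getRegions();
  };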

It's not impossible in IDL.  In fact, it's remarkably easy to define
in IDL.  We just don't want to implement multi-inheritance in WebKit
because it's slow.  However, I don't see how Andrei's proposal makes
the implementation any more efficient.

Note: Gecko already pays this implementation cost because XPCOM
supports queryInterface.  We'd rather not add queryInterface to WebKit.

Adam



Re: [WebIDL] A new way to define UnionTypes

2012-08-29 Thread Adam Barth
On Wed, Aug 29, 2012 at 9:46 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 8/29/12 12:40 PM, Andrei Bucur wrote:
 It's not impossible in IDL.  In fact, it's remarkably easy to define in
 IDL.  We
 just don't want to implement multi-inheritance in WebKit because it's
 slow.
 However, I don't see how Andrei's proposal makes the implementation any
 more efficient.

 The proposal tries to reduce this issue by providing a mechanism to
 distinguish between the two: the inherited type and the implements type.

 I don't understand this part.  The WebIDL already says S is an implements
 type...  What are you trying to distinguish between and why?  Again, WebIDL
 already provides the "list all things that have S on the RHS of
 'implements'" information: it's right there in the IDL!

 If it's actually OK to return a supplemental interface, then I suppose
 this proposal is useless and the differentiation between the two cases is
 implementation specific.

 Sure sounds like it to me.

 Returning any interface is fine, whether supplemental or not.

That matches my understanding of WebIDL.

Andrei, the problem is not in how to specify this behavior.  The
problem is that we don't want to implement the behavior that you're
trying to specify.

For those of you interested in this topic, the reasons have been
discussed extensively on webkit-dev.

Adam



Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-21 Thread Adam Barth
[Re-sending from the proper address.]

On Tue, Aug 21, 2012 at 1:54 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Aug 20, 2012 at 11:56 PM, Ryosuke Niwa rn...@webkit.org wrote:
 No. Allowing the host to be moved without removing automatic transaction is
 what causes the problem because automatic transactions need to keep relevant
 nodes alive.

 Essentially, this has the same problem as the magic iframe. We can
 alternatively change the way automatic transactions work so that they don't
 retain the original nodes but that model has its own problems.

 I'm not entirely sure what the magic iframe is. But this simply
 seems like a shortcoming in the WebKit memory model. Dealing with
 cycles in C++ objects is a solved problem in Gecko and I'm surprised
 if this doesn't happen in other situations in WebKit too.

WebKit is not able to handle cycles in C++ objects without leaking
memory.  There's no equivalent to Gecko's cycle collector.

 How do you for example deal with the fact that if you create two
 nodes, A and B, and make A a child of B. As long as a reference is
 held to either A or B you need to keep both nodes alive. But as soon
 as the only thing holding references to A and B are just A and B, both
 nodes need to be GCed.

Managing the lifetime of Nodes efficiently is quite complicated.  As a
simplification, you can imagine that we keep track of external
references to the tree separately from internal references within the
tree.
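
Jonas's two-node example, spelled out (illustrative):

  var a = document.createElement('div');
  var b = document.createElement('div');
  b.appendChild(a);  // only internal edges now: b -> a (child), a -> b (parent)
  a = b = null;      // no external references remain; both nodes must become
                     // collectable even though each still references the other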

Adam



Re: [IndexedDB] Problems unprefixing IndexedDB

2012-08-08 Thread Adam Barth
On Wed, Aug 8, 2012 at 5:12 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Wed, Aug 8, 2012 at 5:06 PM, Kyle Huey m...@kylehuey.com wrote:
 Hello all,

 Jonas mentioned earlier on this list that we unprefixed IndexedDB in Firefox
 nightlies some time ago.  We ran into a bit of a problem.[0]  Most of the
 IndexedDB tutorials (including ours and HTML5 Rocks[1] :-/) tell authors to
 deal with prefixing with:

 var indexedDB = window.indexedDB || window.webkitIndexedDB ||
 window.mozIndexedDB || window.msIndexedDB || ...

 This code has a bug when executed at global scope.  Because the properties
 are on the prototype chain of the global object, 'var indexedDB' creates a
 new property on the global.  Then window.indexedDB finds the new property
 (which has the value undefined) instead of the IDBFactory on the prototype
 chain.  The result is that all of the pages that do this no longer work
 after we unprefix IndexedDB.

 Just for reference, the correct way to do this is:

 window.indexedDB = window.indexedDB || window.webkitIndexedDB ||
 window.mozIndexedDB || window.msIndexedDB || ...

 This avoids the var hoisting that's causing the problems.
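
Spelling out the hoisting behavior behind the bug (illustrative; assumes the
prefixed alias was removed at the same time the API was unprefixed):

  // Before any statement runs, "var indexedDB" has already created an
  // *own* property (value: undefined) on the global object, shadowing the
  // unprefixed IDBFactory on the prototype chain.  So in that browser:
  var indexedDB = window.indexedDB     // reads the own property: undefined
      || window.webkitIndexedDB        // absent here: undefined
      || window.mozIndexedDB;          // prefixed alias removed: undefined
  // Assigning to window.indexedDB instead creates no hoisted declaration,
  // so the read on the right-hand side still sees the prototype's value.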

If we're telling people to use that pattern, we might as well just not
prefix the API in the first place because that pattern just tells the
web developers to unilaterally unprefix the API themselves.

Adam



Re: [whatwg] allowfullscreen vs sandbox=allow-fullscreen, and mimicking for pointer lock

2012-08-01 Thread Adam Barth
On Tue, Jul 31, 2012 at 10:24 PM, Robert O'Callahan
rob...@ocallahan.org wrote:
 On Wed, Aug 1, 2012 at 10:33 AM, Adam Barth w...@adambarth.com wrote:
 It's not clear to me from the spec how the allowfullscreen attribute
 works.  It appears to be mentioned only in the security and privacy
 considerations section.  For example, suppose I have three frames:

 Main frame: a.html
   - <iframe src="b.html">
     - <iframe src="c.html" allowfullscreen>

 Can c.html go full screen?  Where is that specified?

 The intent is that no, it can't. You're right that this is currently
 unspecified.

Even if we don't use the iframe@sandbox syntax, it might be worth
re-using the spec machinery because it's good at solving problems like
the above.

Adam



Re: [whatwg] allowfullscreen vs sandbox=allow-fullscreen, and mimicking for pointer lock

2012-07-31 Thread Adam Barth
It looks like the ability to go full screen is off-by-default and then
enabled via the attribute.  If we used iframe@sandbox, the ability
would be on-by-default for non-sandboxed iframes.
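
In markup terms the asymmetry looks like this (the allow-fullscreen sandbox
token is hypothetical; whether it should exist is exactly the question):

  <!-- allowfullscreen: the ability is denied unless explicitly granted -->
  <iframe src="c.html" allowfullscreen></iframe>

  <!-- sandbox: a non-sandboxed frame has every ability by default, and
       sandbox="" removes abilities that allow-* tokens selectively restore -->
  <iframe src="c.html" sandbox="allow-scripts allow-fullscreen"></iframe>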

Adam


On Tue, Jul 31, 2012 at 3:11 PM, Vincent Scheib sch...@google.com wrote:
 [correcting Anne van Kesteren's email]


 On Tue, Jul 31, 2012 at 3:03 PM, Vincent Scheib sch...@google.com wrote:

 I'm currently implementing Pointer Lock [1] in WebKit, which was adjusted
 recently to mimic Fullscreen [2].

 Why does the Fullscreen specification use an iframe attribute
 allowfullscreen to permit/restrict iframe capabilities instead of using
 <iframe sandbox="allow-fullscreen">?

 [1] http://dvcs.w3.org/hg/pointerlock/raw-file/default/index.html
 [2] http://dvcs.w3.org/hg/fullscreen/raw-file/tip/Overview.html





Re: [whatwg] allowfullscreen vs sandbox=allow-fullscreen, and mimicking for pointer lock

2012-07-31 Thread Adam Barth
It's not clear to me from the spec how the allowfullscreen attribute
works.  It appears to be mentioned only in the security and privacy
considerations section.  For example, suppose I have three frames:

Main frame: a.html
  - <iframe src="b.html">
    - <iframe src="c.html" allowfullscreen>

Can c.html go full screen?  Where is that specified?

Adam


On Tue, Jul 31, 2012 at 3:26 PM, Adam Barth w...@adambarth.com wrote:
 It looks like the ability to go full screen is off-by-default and then
 enabled via the attribute.  If we used iframe@sandbox, the ability
 would be on-by-default for non-sandboxed iframes.

 Adam


 On Tue, Jul 31, 2012 at 3:11 PM, Vincent Scheib sch...@google.com wrote:
 [correcting Anne van Kesteren's email]


 On Tue, Jul 31, 2012 at 3:03 PM, Vincent Scheib sch...@google.com wrote:

 I'm currently implementing Pointer Lock [1] in WebKit, which was adjusted
 recently to mimic Fullscreen [2].

 Why does the Fullscreen specification use an iframe attribute
 allowfullscreen to permit/restrict iframe capabilities instead of using
 <iframe sandbox="allow-fullscreen">?

 [1] http://dvcs.w3.org/hg/pointerlock/raw-file/default/index.html
 [2] http://dvcs.w3.org/hg/fullscreen/raw-file/tip/Overview.html





Re: [gamepad] Polling access point

2012-07-26 Thread Adam Barth
What triggers us to stop polling the gamepad?  When the object returned
by getGamepads() gets garbage collected?

Adam


On Thu, Jul 26, 2012 at 4:32 PM, Scott Graham scot...@chromium.org wrote:

 Thanks Travis, that seems like the least surprising solution.

 I guess getGamepads() would be the preferred name by analogy with
 getUserMedia() then.
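
That shape composes naturally with a polling loop (a sketch assuming the
getGamepads() name):

  function poll() {
    var pads = navigator.getGamepads();      // explicit call: the page has
    for (var i = 0; i < pads.length; i++) {  // clearly opted in to hardware
      var pad = pads[i];                     // access
      if (pad) { /* read pad.buttons and pad.axes */ }
    }
    requestAnimationFrame(poll);
  }
  requestAnimationFrame(poll);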


 On Thu, Jul 26, 2012 at 4:10 PM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

  Going to a function-based design is typically preferable (especially to
 making an attribute non-enumerable). This is the approach taken by
 getUserMedia.


 From: scot...@google.com [mailto:scot...@google.com] On Behalf Of Scott
 Graham
 Sent: Thursday, July 26, 2012 4:02 PM
 To: public-webapps@w3.org
 Cc: Ted Mielczarek
 Subject: [gamepad] Polling access point

 Hi,

 It looks like the access point for the polling part of the API
 (navigator.gamepads[]) is not a good idea.

 Based on a prototype implementation, pages seem to have a tendency to
 enumerate Navigator. When the .gamepads[] attribute is accessed, it causes
 possibly expensive background resources to be created to access hardware,
 even though the content is not really interested in reading data from the
 hardware.

 Possible solutions:

 - change from navigator.gamepads[] to navigator.gamepads() (or
 navigator.getGamepads()) to be more explicit about when the API is actually
 being used

 - require something to activate the API (meta tag, calling some sort of
 start function)

 - require registering for at least one gamepad-related event before data
 is provided in gamepads[].

 - make .gamepads[] non-enumerable

 Any thoughts or other suggestions?

 Thanks,

 scott





Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Thu, Jul 19, 2012 at 7:50 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

 No.

 Could you expand on this response, please?

 My understanding is that requests generated from XHR will have Origin
 applied. This can be used to reject requests from 3rd party websites
 within browsers. Therefore, intranets have the potential to restrict
 access from internal user browsing habits.

They have the potential, but existing networks don't do that.  We need
to protect legacy systems that don't understand the Origin header.
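
As a sketch of the mitigation being discussed -- a hypothetical Node-style
handler on an intranet service that does understand Origin (legacy
services, the concern above, run no such check):

    function rejectForeignOrigins(request, response) {
      var origin = request.headers['origin'];
      if (origin && origin !== 'https://intranet.example') {
        response.statusCode = 403;          // refuse cross-site requests
        response.end('cross-origin request refused');
        return true;                        // handled: request rejected
      }
      return false;                         // same-origin or legacy client
    }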

 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

 It is far more unethical to expose a user's private data.

 Yes, but if no user private data is being exposed then there is cost
 being paid for no benefit.

I think it's difficult to discuss ethics without agreeing on an
ethical theory.  Let's stick to technical, rather than ethical,
discussions.

Adam



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 8:29 AM, Adam Barth w...@adambarth.com wrote:
 On Thu, Jul 19, 2012 at 7:50 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones cmhjo...@gmail.com wrote:
 Isn't this mitigated by the Origin header?

 No.

 Could you expand on this response, please?

 My understanding is that requests generated from XHR will have Origin
 applied. This can be used to reject requests from 3rd party websites
 within browsers. Therefore, intranets have the potential to restrict
 access from internal user browsing habits.

 They have the potential, but existing networks don't do that.  We need
 to protect legacy systems that don't understand the Origin header.


 Yes, i understand that. When new features are introduced someone's
 security policy is impacted, in this case (and by policy always the
 case) it is those who provide public services whose security policy is
 broken.

 It just depends on whose perspective you look at it from.

 The costs of private security *is* being paid by the public, although
 it seems the public has to pay a high price for everything nowadays.

I'm not sure I understand the point you're making, but it doesn't
really matter.  We're not going to introduce vulnerabilities into
legacy systems.

 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?

 It is far more unethical to expose a user's private data.

 Yes, but if no user private data is being exposed then there is cost
 being paid for no benefit.

 I think it's difficult to discuss ethics without agreeing on an
 ethical theory.  Let's stick to technical, rather than ethical,
 discussions.

 Yes, but as custodians of a public space there is an ethical duty and
 responsibility to represent the interests of all users of that space.
 This is why the concerns deserve attention even if they may have been
 visited before.

I'm sorry, but I'm unable to respond to any ethical arguments.  I can
only respond to technical arguments.

 Given the level of impact affects the entire corpus of global public
 data, it is valuable to do an impact and risk assessment to gauge
 whether the costs are significantly outweighed by either party.

 With some further consideration, i can't see any other way to protect
 IP authentication against targeted attacks through to their systems
 without the mandatory upgrade of these systems to IP + Origin
 Authentication.

 So, this is a non-starter. Thanks for all the fish.

That's why we have the current design.

Adam



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones cmhjo...@gmail.com wrote:
 On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth w...@adambarth.com wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones cmhjo...@gmail.com wrote:
 So, this is a non-starter. Thanks for all the fish.

 That's why we have the current design.

 Yes, i note the use of the word current and not final.

 Ethics are a starting point for designing technology responsibly. If
 the goals can not be met for valid technological reasons then that it
 a unfortunate outcome and one that should be avoided at all costs.

 The costs of supporting legacy systems have real financial implications
 notwithstanding an ethical ideology. If those costs become too great,
 legacy systems lose their impenetrable pedestal.

 The architectural impact of supporting non-maintained legacy
 systems is that web proxy intermediates are something we will all have
 to live with.

Welcome to the web.  We support legacy systems.  If you don't want to
support legacy systems, you might not enjoy working on improving the
web platform.

Adam



Re: Making template play nice with XML and tags-and-text

2012-07-18 Thread Adam Barth
On Wed, Jul 18, 2012 at 10:43 AM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 18 Jul 2012, Adam Barth wrote:
 
  Inspired by a conversation with hsivonen in #whatwg, I spent some time
  thinking about how we would design template for an XML world.  One
  idea I had was to put the elements inside the template into a namespace
  other than http://www.w3.org/1999/xhtml.

 Interesting idea.

 To handle multiple namespaces (specifically SVG and MathML), we could say
 that inert namespaces are namespaces that start with or end with a
 particular prefix, e.g. that are in the inert: scheme or that end with
 #inert. Then to de-inert nodes, you just strip the relevant part of the
 namespace string when cloning.

 To make this work in HTML with nested namespaces might be interesting:

 <template>
   <div>
     <svg>
       <foreignObject>
         <math>
           <mi>
             <var>

 I guess what we do in the HTML parser is have it use all the same
 codepaths as now, except the create an element operations check if
 there's a template on the stack, and if there is, then they add the
 inert marker to the namespace, but everything else in the parser acts as
 if the marker is not there?


Yeah, that could work.  We also need to consider nested templates:

<template>
  <div>
    <template>
      <span>

I suspect we'll want the span to be in the same inert namespace as the
div, not doubly inerted.
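
A sketch of that single-level inerting, using the namespace URL floated
earlier in this thread (assumed behavior, not specified anywhere):

    var INERT_NS = 'http://www.w3.org/2012/xhtml-template';
    var outer = document.querySelector('template');
    // One inert marker, applied once, regardless of nesting depth:
    outer.querySelector('div').namespaceURI === INERT_NS;   // true
    outer.querySelector('span').namespaceURI === INERT_NS;  // true, not doubly marked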

Adam


Re: Making template play nice with XML and tags-and-text

2012-07-18 Thread Adam Barth
On Wed, Jul 18, 2012 at 11:29 AM, Adam Klein ad...@chromium.org wrote:

 On Wed, Jul 18, 2012 at 9:19 AM, Adam Barth w...@adambarth.com wrote:

 Inspired by a conversation with hsivonen in #whatwg, I spent some time
 thinking about how we would design template for an XML world.  One idea I
 had was to put the elements inside the template into a namespace other than
 http://www.w3.org/1999/xhtml.


 Interesting idea! We considered something like this back before Rafael's
 initial email to WebApps but discarded it for reasons discussed below.

 One question about your proposal: do the contents of template in an HTML
 document parse as HTML or XHTML (I'm not as familiar as I should be with
 how the contents of svg are parsed in HTML)? For example, can I omit
 closing /p tags?


We get to pick, but presumably we'd pick HTML-like parsing.


 Unlike the existing wormhole template semantics, in this approach the
 tags-and-text inside template would translate into DOM as usual for XML.
  We'd get the inert behavior for free because we'd avoid defining any
 behavior for elements in the http://www.w3.org/2012/xhtml-template
 namespace (just as no behavior is defined today).


 This does get you inertness, but doesn't avoid querySelector matching
 elements inside template.

 Also, the elements inside template, though they appear to be HTML,
 wouldn't have any of the IDL attributes one might expect, e.g., <a
 href="foo"></a> would have no href property in JS (nor would <img> have
 src, etc). They are, perhaps, too inert.

 When combined with the querySelector problem, this seems especially bad,
 since jQuery-style invocations would expect everything matching, say,
 $('img') to behave the same way (I guess we already have this problem with
 SVG a tags, but it seems especially weird in a document that one might
 think is all-HTML).


That's unfortunate.  I guess that means CSS styles will get applied to them
as well, which wouldn't be what authors would want.

Adam


Re: CORS security hole?

2012-07-17 Thread Adam Barth
On Mon, Jul 16, 2012 at 11:01 PM, Henry Story henry.st...@bblfish.net wrote:

 I first posted this to public-webapps, and was then told the security
 discussions were taking
 place on public-webappsec, so I reposted there.

 On 17 Jul 2012, at 00:39, Adam Barth wrote:

 As I wrote when you first posted this to public-webapps:

 [[
 I'm not sure I fully understand the issue you're worried about, but if I
 understand it correctly, you've installed an HTTP proxy that forcibly adds
 CORS headers to every response.  Such a proxy will indeed lead to security
 problems.  You and I can debate whether the blame for those security
 problem lies with the proxy or with CORS, but I think reasonable people
 would agree that forcibly injecting a security policy into the responses of
 other servers without their consent is a bad practice.
 ]]


 Hmm, I think I just understood where the mistake in my reasoning is: When
 making a request on the CORS proxy (
 http://proxy.com/cors?url=http://bank.com/joe/statement on a resource
 intended for bank.com, the browser will not send the bank.com credentials
 to the proxy - since bank.com and proxy.com are different domains.  So
 the attack I was imagining won't work, since the proxy won't just be able
 to use those to pass itself for the browser user.

 Installing an HTTP proxy locally would of course be a trojan horse attack,
 and then all bets are off.

 The danger might rather be the inverse, namely that if a CORS proxy is
 hosted by a web site used by the browser user for other authenticated
 purposes ( say a social web server running on a freedom box ) that the
 browser pass the authentication information to the proxy in the form of
 cookies, and that this proxy then pass those to any site the initial script
 was connected to.

 Here using only client side certificate authentication helps, as that is
 an authentication mechanism that cannot be forged or accidentally passed on.

 ---

 Having said that, are there plans to get to a point where JavaScript
 agents could be identified in a more fine-grained manner?


No.


  Cryptographically perhaps with a WebID?


That's unlikely to happen soon.

 Then the Origin header could be a WebID and it would be possible for a user
 to specify in his foaf profile his trust for a number of such agents? But
 perhaps I should start that discussion in another thread


I'd recommend getting some implementor interest in your proposals before
starting such a thread.  A standard without implementations is like a fish
without water.

Adam



 On Mon, Jul 16, 2012 at 8:13 AM, Henry Story henry.st...@bblfish.net wrote:

 Hi,

  Two things came together to make me notice the problem I want to discuss
 here:

  1. On the read-write-web and WebID community groups we were discussion
 the possibility of delegated authorisation with WebID [1]
  2. I was building a CORS proxy for linked data javascript agents (see
 the forwarded message )

 Doing this I ended up seeing some very strong parallels between CORS and
 WebID delegation. One could
 say that they are nearly the same protocol except that WebID delegation
 uses a WebID URL to identify an agent, rather than the CORS Origin
 protocol:hostname:port service id. The use case for WebID authorisation
 delegation is also different. In CORS the browser is the secretary doing
 protecting information leakage to an external JS agent, in WebID
 authorisation delegation a server would be doing work on behalf of a user.

 Having worked out the parallel I was able to start looking into the
 security reasoning between CORS. And this is where I found a huge security
 hole ( I think ). Essentially if you allow authenticated CORS requests,
 then any javascript agent can use a dodgy CORS proxy, and use that to fool
 any browser into thinking the server was aware that it was not the user
 directly it was communicating with but the javascript agent.

 This is possible because CORS authenticated delegation with cookies is
 hugely insecure. CORS authenticated delegation over TLS would not suffer
 from this problem, as the proxy could not pass itself off as the browser
 user.

  Another reason to use TLS, another reason to use WebID.


  More of the detailed reasoning below...

 Henry


 [1] http://www.w3.org/wiki/WebID/Authorization_Delegation using
 http://webid.info/

 Begin forwarded message:
  From: Henry Story henry.st...@bblfish.net
  Subject: Re: CORS Proxy
  Date: 7 July 2012 08:37:24 CEST
  To: Read-Write-Web public-...@w3.org
  Cc: WebID public-we...@w3.org, Joe Presbrey presb...@gmail.com,
 Mike Jones mike.jo...@manchester.ac.uk, Romain BLIN 
 romain.b...@etu.univ-st-etienne.fr, Julien Subercaze 
 julien.suberc...@univ-st-etienne.fr
 
 
  On 7 Jul 2012, at 07:35, Henry Story wrote:
 
 
  On 6 Jul 2012, at 23:10, Henry Story wrote:
 
  Hi,
 
  I just quickly put together a CORS Proxy [1], inspired by Joe
 Presbrey's data.fm CORS proxy [2].
 
  But first, what is a CORS proxy

Re: CORS security hole

2012-07-16 Thread Adam Barth
I'm not sure I fully understand the issue you're worried about, but if I
understand it correctly, you've installed an HTTP proxy that forcibly adds
CORS headers to every response.  Such a proxy will indeed lead to security
problems.  You and I can debate whether the blame for those security
problem lies with the proxy or with CORS, but I think reasonable people
would agree that forcibly injecting a security policy into the responses of
other servers without their consent is a bad practice.

Adam


On Mon, Jul 16, 2012 at 8:13 AM, Henry Story henry.st...@bblfish.net wrote:

 Hi,

Two things came together to make me notice the problem I want to
 discuss here:

1. On the read-write-web and WebID community groups we were discussion
 the possibility of delegated authorisation with WebID [1]
2. I was building a CORS proxy for linked data javascript agents (see
 the forwarded message )

 Doing this I ended up seeing some very strong parallels between CORS and
 WebID delegation. One could
 say that they are nearly the same protocol except that WebID delegation
 uses a WebID URL to identify an agent, rather than the CORS Origin
 protocol:hostname:port service id. The use case for WebID authorisation
 delegation is also different.

   Having worked out the parallel I was able to start looking into the
 security reasoning between CORS. And this is where I found a huge security
 hole ( I think ). Essentially if you allow authenticated CORS requests,
 then any javascript agent can use a dodgy CORS proxy, and use that to fool
 any browser into thinking the server was aware that it was not the user
 directly it was communicating with but the javascript agent.

   This is possible because CORS authenticated delegation with cookies is
 hugely insecure. CORS authenticated delegation over TLS would not suffer
 from this problem, as the proxy could not pass itself off as the browser
 user.

Another reason to use TLS, another reason to use WebID.

 Henry


 [1] http://www.w3.org/wiki/WebID/Authorization_Delegation using
 http://webid.info/

 Begin forwarded message:
  From: Henry Story henry.st...@bblfish.net
  Subject: Re: CORS Proxy
  Date: 7 July 2012 08:37:24 CEST
  To: Read-Write-Web public-...@w3.org
  Cc: WebID public-we...@w3.org, Joe Presbrey presb...@gmail.com,
 Mike Jones mike.jo...@manchester.ac.uk, Romain BLIN 
 romain.b...@etu.univ-st-etienne.fr, Julien Subercaze 
 julien.suberc...@univ-st-etienne.fr
 
 
  On 7 Jul 2012, at 07:35, Henry Story wrote:
 
 
  On 6 Jul 2012, at 23:10, Henry Story wrote:
 
  Hi,
 
  I just quickly put together a CORS Proxy [1], inspired by Joe
 Presbrey's data.fm CORS proxy [2].
 
  But first, what is a CORS proxy?
  
 
  A CORS [3] proxy is needed in order to allow read-write-web pages
 containing javascript agents written with libraries such as rdflib [5] to
 fetch remote resources. Pages containing such javascript agents are able to
 fetch and parse RDF on the web, and thus crawl the web by following their
 robotic nose. A CORS Proxy is needed here because:
 
  1- browsers restrict which sites javascript agents can fetch data from
 to those from which the javascript came from - the famous same origin
 policy ( javascript can only fetch resources from the same site it
 originated from)
  2- CORS allows an exception to the above restriction, if the resource
 has the proper headers. For a GET request this is the
 Access-Control-Allow-Origin header
  3- most RDF resources on the web are not served with such headers
 
  Hence javascript agents running inside web browsers that need to crawl
 the web, need a CORS proxy, so that libraries such as rdflib can go forward
 and make those requests through the proxy. In short: a CORS proxy is a
 service that can forward the request to the appropriate server and on
 receiving the answer add the correct headers, if none were found.
 
  Security
  
 
  So is there a security problem having a CORS proxy make a GET request
 for some information on behalf of JS Agent? This is an important question,
 because otherwise we'd be introducing a security hole with such a proxy.
 
  In order to answer that question we need to explain why browsers have
 the same origin restriction.
 
  The answer is quite simple I think. A Javascript agent running in a
 browser is using the credentials of the user when it makes requests for
 resources on the web. One can therefore think of the browser as acting as a
 secretary for the javascript agent: the JS agent makes a request, but does
 not log in to a web site, but instead asks the browser to fetch the
 information. The browser uses its authentication credentials - the user of
 the browser's credentials to be precise - to connect to remote sites and
 request resources. The remote site may be perfectly fine with the browser
 user/owner having access to the resource, but not like the idea of the
 agent in the browser doing so. (after all that could be some JS 

Re: [XHR] What referrer to use when making requests

2012-07-08 Thread Adam Barth
On Sun, Jul 8, 2012 at 3:33 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Sun, Jul 8, 2012 at 3:54 AM, Jonas Sicking jo...@sicking.cc wrote:
 What is the reason for this? This seems less consistent than using the
 same document as we use for things like same-origin checks and
 resolving relative urls. In general, we've been trying to move away
 from using the entry script in Gecko for things since it basically
 amounts to using a global variable which tends to be a source of bugs
 and unexpected behavior.

 Really? All new APIs still use the entry script to determine origin,
 base URL, and such. The reason is that HTML fetch works this way and
 provides no overrides and last time we discussed this nobody thought
 it was important enough to create an override just for XMLHttpRequest.

When I researched this several years ago, this behavior was consistent
across non-WebKit browsers, so I changed WebKit to match other
browsers (and the spec).  The entry script is used fairly consistently
throughout the platform to set the Referer.
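
For illustration, the kind of scenario the entry-script rule resolves (URLs
hypothetical; the commented behavior is my reading of this thread): a
script in https://a.example/page.html calls into a function defined by a
frame from https://b.example/:

    // Defined by the b.example frame:
    function makeRequest() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/data');
      xhr.send();  // the entry script came from a.example, so the Referer
                   // would be https://a.example/page.html, not b.example's URL
    }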

Adam



Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-05 Thread Adam Barth
On Thu, Jul 5, 2012 at 1:37 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 07/05/2012 08:00 AM, Adam Barth wrote:
 On Wed, Jul 4, 2012 at 5:25 PM, Olli Pettay olli.pet...@helsinki.fi
 wrote:
 On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:
 So, it is very much implementation detail.

 (And I still don't understand how a callback can be so hard in this case.
 There are plenty of different kinds of callback objects.
   new MutationObserver(some_callback_function_object) )

 I haven't tested, but my reading of the MutationObserver implementation
 in WebKit is that it leaks.  Specifically:

 MutationObserver --retains-- MutationCallback --retains--
 some_callback_function_object --retains-- MutationObserver

 I don't see any code that breaks this cycle.

 Ok. In Gecko cycle collector breaks the cycle. But very much an
 implementation detail.

 DOM events

 Probably EventListeners, not Events.

 have a bunch of delicate code to break these
 reference cycles and avoid leaks.  We can re-invent that wheel here,

 Or use some generic approach to fix such leaks.

 but it's going to be buggy and leaky.

 In certain kinds of implementations.

 I appreciate that these jQuery-style APIs are fashionable at the
 moment, but API fashions come and go.  If we use this approach, we'll
 need to maintain this buggy, leaky code forever.

 Implementation detail. Very much so :)

Right, my point is that this style of API is difficult to implement
correctly, which means authors will end up suffering low-quality
implementations for a long time.

On Thu, Jul 5, 2012 at 2:22 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
 But anyhow, event based API is ok to me.
 In general I prefer events/event listeners over other callbacks.

Great.  I'd recommend going with that approach because it will let us
provide authors with high-quality implementations of the spec much
sooner.

Adam



Re: [UndoManager] Re-introduce DOMTransaction interface?

2012-07-04 Thread Adam Barth
On Wed, Jul 4, 2012 at 5:25 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 07/05/2012 03:11 AM, Ryosuke Niwa wrote:

 On Wed, Jul 4, 2012 at 5:00 PM, Olli Pettay olli.pet...@helsinki.fi
 mailto:olli.pet...@helsinki.fi wrote:

 On 07/05/2012 01:38 AM, Ryosuke Niwa wrote:

 Hi all,

 Sukolsak has been implementing the Undo Manager API in WebKit but
 the fact undoManager.transact() takes a pure JS object with callback
 functions is
 making it very challenging.  The problem is that this object needs
 to be kept alive by either JS reference or DOM but doesn't have a backing
 C++
 object.  Also, as far as we've looked, there are no other
 specifications that use the same mechanism.


 I don't understand what is difficult.
 How is that any different to
 target.addEventListener("foo", { handleEvent: function() {}})


 It will be very similar to that except this object is going to have 3
 callbacks instead of one.

 The problem is that the event listener is a very special object in WebKit
 for which we have a lot of custom binding code. We don't want to implement a
 similar behavior for the DOM transaction because it's very error prone.


 So, it is very much implementation detail.
 (And I still don't understand how a callback can be so hard in this case.
 There are plenty of different kinds of callback objects.
  new MutationObserver(some_callback_function_object) )

I haven't tested, but my reading of the MutationObserver implementation
in WebKit is that it leaks.  Specifically:

MutationObserver --retains-- MutationCallback --retains--
some_callback_function_object --retains-- MutationObserver

I don't see any code that breaks this cycle.
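
A minimal sketch of the cycle being described (whether it actually leaks
depends on the engine's ability to collect cycles across the DOM/JS
boundary):

    var observer = new MutationObserver(function callback(records) {
      observer.takeRecords();  // the closure retains `observer`...
    });
    // ...and `observer` retains `callback`, closing the loop:
    // MutationObserver -> MutationCallback -> function -> MutationObserver
    observer.observe(document.body, { childList: true });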

DOM events have a bunch of delicate code to break these
reference cycles and avoid leaks.  We can re-invent that wheel here,
but it's going to be buggy and leaky.

I appreciate that these jQuery-style APIs are fashionable at the
moment, but API fashions come and go.  If we use this approach, we'll
need to maintain this buggy, leaky code forever.  Instead, we can save
ourselves a lot of pain by just using events, like the rest of the web
platform.
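
A sketch of the event-based shape being advocated here (the constructor and
event names are assumptions for illustration, not from any spec):

    var transaction = new DOMTransaction('append foo');  // hypothetical
    transaction.addEventListener('execute', function () { scope.appendChild(foo); });
    transaction.addEventListener('undo',    function () { scope.removeChild(foo); });
    scope.undoManager.transact(transaction);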

Adam


 Since I want to make the API consistent with the rest of the
 platform and the implementation maintainable in WebKit, I propose the
 following
 changes:

* Re-introduce DOMTransaction interface so that scripts can
 instantiate new DOMTransaction().
* Introduce AutomaticDOMTransaction that inherits from
 DOMTransaction and has a constructor that takes two arguments: a function
 and an
 optional label


 After this change, authors can write:

 scope.undoManager.transact(new AutomaticDOMTransaction(function () {
   scope.appendChild(foo);
 }, 'append foo'));


 Looks somewhat odd. DOMTransaction would be just a container for a
 callback?


 Right. If we wanted, we can make DOMTransaction an event target and
 implement execute, undo,  redo as event listeners to further simplify the
 matter.


 That could make the code more consistent with rest of the platform, but the
 API would become harder to use.


 - Ryosuke





Re: [IndexedDB] Origins and document.domain

2012-06-19 Thread Adam Barth
IndexedDB should ignore document.domain.
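
For illustration, the case the question raises, on a page at
https://app.example.com (the commented behavior is the answer above, not
quoted from any spec):

    // Loosening document.domain merges origins for some same-origin checks...
    document.domain = 'example.com';
    // ...but IndexedDB keeps keying databases by the original origin:
    indexedDB.open('db');  // still scoped to https://app.example.com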

Adam


On Tue, Jun 19, 2012 at 9:06 PM, Kyle Huey m...@kylehuey.com wrote:
 By my reading, the spec does not clearly specify what the 'origin' should
 be.  IDBFactory.open/deleteDatabase say Let origin be the origin of the
 IDBEnvironment used to access this IDBFactory.  The IDL states Window
 implements IDBEnvironment;  The HTML5 spec, as far as I can tell, does not
 define the concept of an origin for a window, but only for a document.

 There is another related question here: how should IndexedDB behave in the
 presence of modifications to document.domain?

 - Kyle



Re: Proposal: Document.parse() [AKA: Implied Context Parsing]

2012-06-05 Thread Adam Barth
On Tue, Jun 5, 2012 at 12:58 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Tue, Jun 5, 2012 at 2:10 AM, Adam Barth w...@adambarth.com wrote:
 Doesn't e4h have the same security problems as e4x?

 If you mean http://code.google.com/p/doctype-mirror/wiki/ArticleE4XSecurity
 I guess that would depend on how we define it.

By the way, it occurs to me that we can solve these security problems
if we restrict the syntax to only working when executing inline or via
<script crossorigin src="...">.  If the script has appropriate CORS
headers, then it doesn't matter if we leak its contents because
they're already readable by the document executing the script.

Adam



Re: Proposal: Document.parse() [AKA: Implied Context Parsing]

2012-06-04 Thread Adam Barth
On Mon, Jun 4, 2012 at 4:38 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 4 Jun 2012, Rafael Weinstein wrote:

 Just to be clear: what you are objecting to is the addition of formal
 API for this.

 You're generally supportive of adding a template element whose
 contents would parse the way we're discussing here -- and given that, a
 webdev could trivially polyfil Document.parse().

 Sure.


 I.e. you're ok with the approach of the parser picking a context element
 based on the contents of markup, but against giving webdevs the
 impression that innerHTML is good practice, by adding more API in that
 direction?

 Right.


 Put another way, though you're not happy with adding the API, you
 willing to set that aside and help spec the parser changes required for
 both this and template element (assuming the remaining issues with
 template can be agreed upon)?

 I think template is important. If implementing that happens to make it
 easier for a script to implement a bad practice, so be it.

 (See my e-mail on the template thread for comments on that subject.)


 FWIW, I agree with Hixie in principle, but disagree in practice. I
 think innerHTML is generally to be avoided, but I feel that adding
 Document.parse() improves the situation by making some current uses
 (which aren't likely to go away) less hacky.

 If we want to make things less hacky, let's actually make them less
 hacky, not introduce more APIs that suck.


 Also, I'm not as worried with webdevs taking the wrong message from us
 adding API. My feeling is that they just do what works best for them and
 don't think much about what we are or are not encouraging.

 I strongly disagree on that. Whether consciously or not, we set the
 standard for what is good practice. I've defintely seen authors look at
 the standards community for leadership. Just look at how authors adopted
 XHTML's syntax, even in the absence of actually using XHTML. It was such a
 tidal wave that we ended up actually changing HTML's conformance criteria
 to ignore the extra characters rather than say they were invalid. Why?
 Because XHTML was what the W3C was working on, so it must have been good,
 even though objectively it really added no semantics (literally nothing,
 the language was defined by deferring to HTML4) and the syntax changes
 were a net negative.


 Also, I'm highly supportive of the goal of allowing HTML literals in
 script. I fully agree that better load (compile) time feedback would
 be beneficial to authors here.

 Let's do it! As far as I can tell, the impact on a JS parser would be
 pretty minimal.

   http://www.hixie.ch/specs/e4h/strawman

 Who wants to be first to implement it?

Doesn't e4h have the same security problems as e4x?

Adam



Re: [manifest] Is the Webapp Manifest spec ready for FPWD?

2012-06-01 Thread Adam Barth
On Fri, Jun 1, 2012 at 6:43 AM, Marcos Caceres w...@marcosc.com wrote:
 On 31 May 2012, at 23:23, Adam Barth w...@adambarth.com wrote:
 Is anyone besides Mozilla interested in implementing this specification?

 I think people are just trying to work out what it does and if it
 brings value to particular communities.

 Having said that, the only other really big (i.e. in terms of millions of 
 users right now) candidate for implementation is Google as this proposal is 
 supposed to harmonise the Chrome store approach to installable Web apps with 
 Moz's go at entering the same market (and sharing apps across 
 stores/browsers).

 Adam, as you are associated with Google, can you find out if Google's chrome 
 store team are interested in moving it forward?

My understanding is that standardizing the package format isn't very
high on the team's priority list.  They're more interested in
standardizing the security model and associated APIs, which is why
we're interested in forming the SysApps working group.

As an example, http://dvcs.w3.org/hg/app-manifest/raw-file/tip/index.html
has a required_features field, which makes some implicit assumptions
about the security model.  It seems premature to work on a format for
declaring this sort of information without having nailed down the
security model.  If I were running the show, I'd take up this work in
the SysApps working group after we'd made some progress on the
security model and at least a handful of APIs.  Then we'll have
something concrete to describe in the manifest.

Having said that (and I can check with the team if you'd like a
definitive answer), I doubt they'd oppose folks working on this topic.
 I just wouldn't expect much feedback from them in the near term.

Adam


 On Wed, May 30, 2012 at 11:36 AM, Arthur Barstow art.bars...@nokia.com 
 wrote:
 Hi All,

 Besides the thread below that Anant started a few weeks re the Webapp
 Manifest spec, Marcos also started a few threads on this spec ...

 What are people's thoughts on whether or not the Quota Management API spec
 is ready for First Public Working Draft (FPWD)?

 A rule of thumb for FPWD is that the ED's scope should cover most of the
 expected functionality although the depth of some functionality may be very
 shallow, and it is OK if the ED has some open bugs/issues.

 -Thanks, AB

 On 5/12/12 2:02 PM, ext Anant Narayanan wrote:

 Hi everyone,

 I recently joined the webapps working group and I'd like to introduce
 myself! I work at Mozilla and for the past year or so have been working on
 our Apps initiative [1]. Our goal has been to make it very easy for
 developers to build apps using web technologies that can go above and 
 beyond
 what one might achieve using native SDKs on platforms like iOS and
 Android. We're also trying to make it really easy for users to find and
 acquire these apps, and use them on any device they happen to own 
 regardless
 of platform.

 As part of this work we have devised a simple JSON based manifest format
 to describe an installable web app, in addition to a few DOM APIs to 
 install
 and manage these apps. We have a working implementation of the entire 
 system
 in our latest Nightly builds.

 The manifest and corresponding APIs are described in an early draft at:
 http://dvcs.w3.org/hg/app-manifest/raw-file/tip/index.html

 We'd like to propose using that draft as the basis for a FPWD on this
 topic. I look forward to your feedback!


 FAQs
 --
 There are a few questions I anticipate in advance, which I will try to
 answer here, but we can definitely go in more depth as necessary on the
 list:

 Q. Why not simply reuse the widgets spec [2]?

 A. Aside from naming (we're talking about apps, the word widget seems to
 imply an artificial limitation), and replacing XML with JSON; the other
 fundamental difference is that the widget spec describes packaged apps,
 whereas our manifest describes hosted apps.

 We think hosted apps have several interesting and unique web-like
 properties that are worth retaining. Hosted apps can be made to work 
 offline
 just as well as packaged apps with AppCache (which is in need of some
 improvement, but can be made to work!). Packaged apps do have their own
 advantages though, which we acknowledge, and are open to extending the spec
 to support both types of apps.


 Q. Why is the DOM API in the same spec as the manifest?

 A. One success condition for us would be standardize the DOM APIs so that
 users will be able to visit any app marketplace that publishes web apps
 conforming to the manifest spec in any browser and be able to install and
 use them.

 We understand there might be other platforms on which a JS API may not be
 feasible (for eg: A Java API to install and manage these apps is equally
 important), but that shouldn't preclude us from standardizing the DOM API 
 in
 browsers. The manifest and the API go hand-in-hand, as we think each of 
 them
 is dramatically less useful without the other.


 Q. Why only one app

Re: [manifest] Is the Webapp Manifest spec ready for FPWD?

2012-05-31 Thread Adam Barth
Is anyone besides Mozilla interested in implementing this specification?

Adam


On Wed, May 30, 2012 at 11:36 AM, Arthur Barstow art.bars...@nokia.com wrote:
 Hi All,

 Besides the thread below that Anant started a few weeks re the Webapp
 Manifest spec, Marcos also started a few threads on this spec ...

 What are people's thoughts on whether or not the Quota Management API spec
 is ready for First Public Working Draft (FPWD)?

 A rule of thumb for FPWD is that the ED's scope should cover most of the
 expected functionality although the depth of some functionality may be very
 shallow, and it is OK if the ED has some open bugs/issues.

 -Thanks, AB

 On 5/12/12 2:02 PM, ext Anant Narayanan wrote:

 Hi everyone,

 I recently joined the webapps working group and I'd like to introduce
 myself! I work at Mozilla and for the past year or so have been working on
 our Apps initiative [1]. Our goal has been to make it very easy for
 developers to build apps using web technologies that can go above and beyond
 what one might achieve using native SDKs on platforms like iOS and
 Android. We're also trying to make it really easy for users to find and
 acquire these apps, and use them on any device they happen to own regardless
 of platform.

 As part of this work we have devised a simple JSON based manifest format
 to describe an installable web app, in addition to a few DOM APIs to install
 and manage these apps. We have a working implementation of the entire system
 in our latest Nightly builds.

 The manifest and corresponding APIs are described in an early draft at:
 http://dvcs.w3.org/hg/app-manifest/raw-file/tip/index.html

 We'd like to propose using that draft as the basis for a FPWD on this
 topic. I look forward to your feedback!


 FAQs
 --
 There are a few questions I anticipate in advance, which I will try to
 answer here, but we can definitely go in more depth as necessary on the
 list:

 Q. Why not simply reuse the widgets spec [2]?

 A. Aside from naming (we're talking about apps, the word widget seems to
 imply an artificial limitation), and replacing XML with JSON; the other
 fundamental difference is that the widget spec describes packaged apps,
 whereas our manifest describes hosted apps.

 We think hosted apps have several interesting and unique web-like
 properties that are worth retaining. Hosted apps can be made to work offline
 just as well as packaged apps with AppCache (which is in need of some
 improvement, but can be made to work!). Packaged apps do have their own
 advantages though, which we acknowledge, and are open to extending the spec
 to support both types of apps.


 Q. Why is the DOM API in the same spec as the manifest?

 A. One success condition for us would be standardize the DOM APIs so that
 users will be able to visit any app marketplace that publishes web apps
 conforming to the manifest spec in any browser and be able to install and
 use them.

 We understand there might be other platforms on which a JS API may not be
 feasible (for eg: A Java API to install and manage these apps is equally
 important), but that shouldn't preclude us from standardizing the DOM API in
 browsers. The manifest and the API go hand-in-hand, as we think each of them
 is dramatically less useful without the other.


 Q. Why only one app per origin?

 A. We originally placed this restriction for security reasons. In Firefox
 (and most other browsers), the domain name is the primary security boundary
 - cookie jars, localStorage, XHRs are all bound to the domain. For
 supporting multiple apps per domain we would have to do some extra work to
 ensure that (potentially sensitive) permissions granted to one app do not
 leak into another app from the same domain. Additionally, this lets us use
 the origin of the domain as a globally unique identifier. Note that
 app1.example.org and app2.example.org are two different origins under this
 scheme.

 That said, we've received a lot of developer feedback about the
 inconvenience of this restriction, and we are actively looking to lift it
 [3]. We cannot do this without a few other changes around permissions and
 enforcing specific UA behavior in app mode (as opposed to browser mode),
 but is something we can work towards.


 Q. Apps are just web pages, why bother installing them?

 A. This has been previously discussed on the list [4]. There are clear
 differences in perception between an app and a website for most users. Most
 web content is expected to be free, but the same content wrapped in an app
 is something people seem to be willing to pay for. Monetization is important
 to encourage a thriving web developer community.

 Additionally, treating certain installed websites as apps gives us a
 context separate from loading pages in a browser, which allows us to provide
 privileged APIs to such trusted apps, APIs we would normally not give to
 untrusted web content.


 Thanks for reading!

 Regards,
 -Anant

 [1] https://mozilla.org/apps/
 [2] 

Re: [manifest] Parsing origins, was Re: Review of Web Application Manifest Format and Management APIs

2012-05-26 Thread Adam Barth
On Fri, May 25, 2012 at 7:39 AM, Marcos Caceres w...@marcosc.com wrote:
 On Sunday, May 13, 2012 at 5:47 PM, Anant Narayanan wrote:
   installs_allowed_from: An array of origins that are allowed to trigger 
   installation of this application. This field allows the developer to 
   restrict installation of their application to specific sites. If the 
   value is omitted, installs are allowed from any site.
 
  How are origins parsed?

 I'm not sure what the question means, but origins are essentially a
 combination of [protocol]://[hostname]:[port]. Whenever an install is
 triggered, the UA must check if the origin of the page triggering the
 install is present in this array. * is a valid value for
 installs_allowed_from, in which case the UA may skip this check.

 By parsing I mean which ones win, which ones get discarded, what happens to 
 invalid ones, are they resolved already, etc. in the following:

 installs_allowed_from: ["http://foo/", "bar://", "22",
 "https://foo/bar/#*", "http://foo:80/", "wee!!!", "http://baz/hello there!",
 "http://baz/hello%20there!"]

 And so on. So, all the error handling stuff. Or is a single error fatal?

I seem to have missed the context for this thread, but typically
origins are not parsed.  They're compared character-by-character to
see if they're identical.  If you have a URL, you can find its origin
and then serialize it to ASCII or Unicode if you want to compare it
with another origin.
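
A sketch of that serialize-then-compare approach, using the modern URL API
for brevity (the helper name is invented):

    function originAllowed(pageUrl, allowedOrigins) {
      var origin = new URL(pageUrl).origin;  // serialized ASCII origin
      return allowedOrigins.indexOf('*') !== -1 ||
             allowedOrigins.indexOf(origin) !== -1;  // exact string match
    }
    // originAllowed('https://store.example/app', ['https://store.example'])
    //   => true; no parsing of the stored origin strings is involved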

Adam



Re: [manifest] Parsing origins, was Re: Review of Web Application Manifest Format and Management APIs

2012-05-26 Thread Adam Barth
On Sat, May 26, 2012 at 10:26 AM, Anant Narayanan an...@mozilla.com wrote:
 On 05/25/2012 11:11 PM, Adam Barth wrote:
 On Fri, May 25, 2012 at 7:39 AM, Marcos Caceresw...@marcosc.com  wrote:
 On Sunday, May 13, 2012 at 5:47 PM, Anant Narayanan wrote:
 installs_allowed_from: An array of origins that are allowed to trigger
 installation of this application. This field allows the developer to
 restrict installation of their application to specific sites. If the 
 value
 is omitted, installs are allowed from any site.

 How are origins parsed?

 I'm not sure what the question means, but origins are essentially a
 combination of [protocol]://[hostname]:[port]. Whenever an install is
 triggered, the UA must check if the origin of the page triggering the
 install is present in this array. * is a valid value for
 installs_allowed_from, in which case the UA may skip this check.

 By parsing I mean which ones win, which ones get discarded, what happens
 to invalid ones, are they resolved already, etc. in the following:

 installs_allowed_from: ["http://foo/", "bar://", "22",
 "https://foo/bar/#*", "http://foo:80/", "wee!!!", "http://baz/hello there!",
 "http://baz/hello%20there!"]

 And so on. So, all the error handling stuff. Or is a single error fatal?

 I seem to have missed the context for this thread, but typically
 origins are not parsed.  They're compared character-by-character to
 see if they're identical.  If you have a URL, you can find its origin
 and then serialize it to ASCII or Unicode if you want to compare it
 with another origin.

 Ah we could certainly do this, but in our current implementation a single
 error is fatal. I do like the idea of not making sure that the origins are
 valid, especially for installs_allowed_from.

As a point of reference, here's what CORS does:

---8<---
If the value of Access-Control-Allow-Origin is not a case-sensitive
match for the value of the Origin header as defined by its
specification, return fail and terminate this algorithm.
---8<---

http://www.w3.org/TR/cors/#resource-sharing-check-0

I would encourage you not to allow sloppiness in origins.  That's just
asking for security problems.
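
The quoted check, expressed as code -- an exact, case-sensitive string
comparison with no parsing or normalization (a sketch):

    function corsOriginMatches(allowOriginHeader, originHeader) {
      return allowOriginHeader === originHeader;  // no case folding, no trimming
    }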

Adam



Re: XHR's setRequestHeader and the Do Not Track (DNT) header

2012-05-09 Thread Adam Barth
On Wed, May 9, 2012 at 2:38 PM, Anne van Kesteren ann...@annevk.nl wrote:
 On Tue, May 8, 2012 at 9:34 PM, Ian Melven imel...@mozilla.com wrote:
 i'd like to propose that the Do Not Track header (see 
 http://www.w3.org/TR/tracking-dnt/#dnt-header-field) DNT
 be added to the list of request headers not allowed to be set via XHR's 
 setRequestHeader method (see
 http://dvcs.w3.org/hg/xhr/raw-file/tip/Overview.html#the-setrequestheader%28%29-method)

 That shouldn't be a problem. I wonder, should we remove the Sec-
 handling? That was suggested at some point as we are special casing
 header naming, but it does not appear to be used.

It's used by WebSockets.

 And given that
 updating this magic list is not really a big problem and browsers are
 updated quickly enough, maybe that is just as well.

Maybe.  Another perspective is that not all browsers are on the
fast-update train yet and folks might want to define headers that
can't be spoofed by them.
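
A sketch of what the proposal would mean for page script, assuming DNT
joins the existing forbidden-header list:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/resource');
    xhr.setRequestHeader('DNT', '0');        // proposed: silently not set
    xhr.setRequestHeader('Sec-Foo', 'bar');  // Sec- prefix already reserved
    xhr.send();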

Adam



Re: [webcomponents] HTML Parsing and the template element

2012-04-05 Thread Adam Barth
On Wed, Apr 4, 2012 at 12:12 PM, Rafael Weinstein rafa...@google.com wrote:
 On Mon, Apr 2, 2012 at 3:21 PM, Dimitri Glazkov dglaz...@chromium.org wrote:
 On Wed, Feb 8, 2012 at 11:25 PM, Henri Sivonen hsivo...@iki.fi wrote:
 On Thu, Feb 9, 2012 at 12:00 AM, Dimitri Glazkov dglaz...@chromium.org 
 wrote:
 == IDEA 1: Keep template contents parsing in the tokenizer ==

 Not this!

 Here's why:
 Making something look like markup but then not tokenizing it as markup
 is confusing. The confusion leads to authors not having a clear mental
 model of what's going on and where stuff ends. Trying to make things
 just work for authors leads to even more confusing here be dragons
 solutions. Check out
 http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html#script-data-double-escaped-dash-dash-state

 Making something that looks like markup but isn't tokenized as markup
 also makes the delta between HTML and XHTML greater. Some people may
 be ready to throw XHTML under the bus completely at this point, but
 this also goes back to the confusion point. Apart from namespaces, the
 mental model you can teach for XML is remarkably sane. Whenever HTML
 deviates from it, it's a complication in the understandability of
 HTML.

 Also, multi-level parsing is in principle bad for perf. (How bad
 really? Dunno.) I *really* don't want to end up writing a single-pass
 parser that has to be black-box indistinguishable from something
 that's defined as a multi-pass parser.

 (There might be a longer essay about how this sucks in the public-html
 archives, since the SVG WG proposed something like this at one point,
 too.)

 == IDEA 2: Just tweak insertion modes ==

 I think a DWIM insertion mode that switches to another mode and
 reprocesses the token upon the first start tag token *without* trying
 to return to the DWIM insertion mode when the matching end tag is seen
 for the start tag that switched away from the DWIM mode is something
 that might be worth pursuing. If we do it, I think we should make it
 work for a fragment parsing API that doesn't require context beyond
 assuming HTML, too. (I think we shouldn't try to take the DWIM so far
 that a contextless API would try to guess HTML vs. SVG vs. MathML.)

 Just to connect the threads. A few weeks back, I posted an update
 about the HTML Templates spec:
 http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/1171.html

 Perhaps lost among other updates was the fact that I've gotten the
 first draft of HTML Templates spec out:

 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html

 The draft is roughly two parts: motivation for the spec and deltas to
 HTML specification to allow serialization and parsing of the
 template element. To be honest, after finishing the draft, I
 wondered if we should just merge the whole thing into the HTML
 specification.

 As a warm-up exercise for the draft, I first implemented the changes
 to tree construction algorithm here in WebKit
 (https://bugs.webkit.org/show_bug.cgi?id=78734). The patch
 (https://bugs.webkit.org/attachment.cgi?id=128579action=review)
 includes new parsing tests, and should be fairly intuitive to read to
 those familiar with the test format.

 The interesting bit here is that all parser changes are additive: we
 are only adding what effectively are extensions points -- well, that
 and a new contextless parsing mode for when inside of the template
 tag.

 I think the task previously was to show how dramatic the changes to
 the parser would need to be. Talking to Dimitri, it sounds to me like
 they turned out to be less open-heart-surgery and more quick
 outpatient procedure. Adam, Hixie, Henri, how do you guys feel about
 the invasiveness of the parser changes that Dimitri has turned out
 here?

If you're going to change the parser when adding the template
element, what's in that spec looks fairly reasonable to me.  Hixie and
Henri have spent more time designing the algorithm than I have (Eric
and I just implemented it), so they might have a different
perspective.

Adam


 The violation of the Degrade Gracefully principle and tearing the
 parser spec open right when everybody converged on the spec worry me,
 though. I'm still hoping for a design that doesn't require parser
 changes at all and that doesn't blow up in legacy browsers (even
 better if the results in legacy browsers were sane enough to serve as
 input for a polyfill).

 I agree with your concern. It's bugging me too -- that's why I am not
 being an arrogant jerk yelling at people and trying to shove this
 through. In general, it's difficult to justify making changes to
 anything that's stable -- especially considering how long and painful
 the road to getting stable was. However, folks like Yehuda, Erik, and
 Rafael spent years tackling this problem, and I tend to trust their
 steady hand... hands?

 I don't think there's an option to degrade gracefully here. My
 personal feeling is that even if it's years 

Re: [XHR] Constructor behavior seems to be underdefined

2012-04-04 Thread Adam Barth
On Mon, Apr 2, 2012 at 5:27 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 4/2/12 6:46 PM, Cameron McCormack wrote:

 Boris Zbarsky:

 And just to be clear, the discussion about security and document.domain
 is somewhat orthogonal to the original issue. WebIDL requires that all
 objects be associated with a particular global and that any spec
 defining anything that creates an object needs to define how this
 association is set up. For the particular case of constructors, that
 means that either WebIDL needs to have a default (that particular specs
 may be able to override) or that any spec that uses constructors needs
 to explicitly define the global association (which is not quite
 identical to things like which origin and base URI are used).

 Would it make sense to require objects that are returned from a
 constructor be associated with the same global that the constructor
 itself is?

 That seems like the simplest approach to me, yes.  It's what Gecko does in
 practice anyway at the moment, afaict.

Note: WebKit has bugs in this regard, but we've been (slowly!)
converging towards Gecko's behavior, which we believe is aesthetically
correct.

Adam



Re: Review of CORS and WebAppSec prior to LCWD

2012-03-06 Thread Adam Barth
This feedback is probably better addressed to public-webappsec, which
is the main forum for discussing these security-related specs.

Adam


On Tue, Mar 6, 2012 at 6:06 AM, Cameron Jones cmhjo...@gmail.com wrote:
 Hello wg,

 I've been contemplating various aspects of web app security in review
 of CORS, UMP, CSP and resource protection and would like to bring some
 feedback to the group as a set of concerns over the combined proposed
 solution.

 While the pre-cfc calls for positive response i can only provide the
 review in the light that it is seen. I hereby highlight concerns, of
 which some have been previously raised, and some which have lead to
 further specification resulting in a combined solution which, i
 believe, has scope for conglomeration into a more concise set of
 recommendations for implementation.

 This review and feedback is predominantly related to CORS as a
 solution for resource sharing. To step back from this initially and
 scope the problem which it proposes to address, this specification is
 a solution to expanding XHR access control across origins due to the
 implied requirement that resources must be proactively protected from
 CSRF in combination with the exposure to 'ambient authority'.

 This premise requires examination due to its coming into existence
 through proprietary development and release further to wider
 acceptance and adoption over a number of years. To look at the history
 of XHR in this regard, the requirements for its mode of operation have
 been driven by proprietary specification and while there are no
 concerns over independent development and product offering, the
 expansion toward global recommendation and deployment garners greater
 scope and requirements for global integration within a more permanent
 and incumbent environment.

 The principal concerns over XHR, and which are in part being addressed
 within UMP, are in relation to the incurred requirement of CORS due to
 the imposed security policy. This security policy is in response to
 the otherwise unspecified functional requirement of 'ambient
 authority' within HTTP agents. Cookies and HTTP authentication, which
 comprise the 'authority', are shared within a single HTTP agent which
 supports multiple HTTP communication interaction through distinct user
 interface. This multi-faceted interface represents the 'ambient'
 environment. The question this raises with regards to the premise of
 XHR and CSRF protection is therefore; why does the HTTP agent
 introduce the problem of 'ambient authority' which must subsequently
 be solved?

 To examine this further we can look at the consequent definition of
 the 'origin' with respect to HTTP requests. This definition is
 introduced to resolve the problem that HTTP 'authority' was only
 intended to be granted to requests originating from same-origin
 sources. This extension to subsequent requests may be withdrawn back
 to the originating request and reframed by examining - why is HTTP
 'authority' shared between cross-origin browsing contexts?

 To highlight this with a specific example, a user initiates a new
 browsing session with http://www.example.com whereby cookies are set
 and the user logs in using HTTP authentication. In a separate
 workflow, the user instructs the UA to open a new UI and initiate a
 new browsing session with http://www.test.com which happens to include
 resources (images, scripts etc) from the www.example.com domain. Where
 in these two separate user tasks has either the server of
 www.example.com, or the user, or even www.test.com, declared that they
 want to interleave - or share - the independent browsing contexts
 consisting of HTTP authority? This is the primary leak of security and
 privacy which is the root of all further breaches and vulnerabilities.

 When 'ambient authority' is removed from the definition of
 cross-origin requests, as in UMP, the potential for CSRF is
 eliminated, with the exception of public web services which are
 postulated to be at risk due to an assumption that the public web
 service was intended only for same-origin operation. This further
 premise is flawed due to its incompatibility with the architecture of
 the WWW and HTTP as a globally deployed and publicly accessible
 system.

 To summarize this review as a recommendation, it is viewed that the
 problem of CSRF is better addressed through the restriction of imposed
 sharing of 'ambient authority' which amounts to a breach of trust
 between the server and UA and the UA and the user.

 Further to this, the specification of cross-origin resource sharing
 should be targeted at the cross-origin site expressing the request for
 exposure to 'ambient authority' instead of the host site having to
 specify that it will accept cross-origin requests.

 I would further like to express grievous concern over some of the
 specific premise and implementation of CORS as it currently stands.
 The notion of 'pre-flight-requests' is too onerous on server
 

Re: [webcomponents] HTML Parsing and the template element

2012-02-08 Thread Adam Barth
Re-using the generic raw text element parsing algorithm would be the
simplest change to the parser.  Do you have a concrete example of
where nested template declarations are required?  For example,
rather than including nested templates, you might instead consider
referencing other template elements by id.
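
As a rough sketch of what id-based referencing could look like -- the
instantiate() method and its argument are invented for illustration,
nothing like them is specified:

    // Hypothetical instantiation API; instantiate() is not specified.
    var list = document.getElementById("list"); // refers to id=item via an attribute
    var item = document.getElementById("item"); // declared at top level, not nested
    document.body.appendChild(list.instantiate({itemTemplate: item}));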

Adam


On Wed, Feb 8, 2012 at 2:00 PM, Dimitri Glazkov dglaz...@chromium.org wrote:
 Hello folks!

 You may be familiar with the work around the template element, or a
 way to declare document fragments in HTML (see
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-November/033868.html
 for some background).

 In trying to understand how this newfangled beast would work, I
 started researching HTML parsing, and--oh boy was I ever sorry! Err..
 I mean.. --and investigating how the contents of the template
 element could be parsed.

 So far, I have two ideas. Both introduce changes to HTML parsing
 algorithm. Both have flaws, and I thought the best thing to do would
 be to share the data with the experts and seek their opinions. Those
 of you cc'd -- you're the designated experts :)

 == IDEA 1: Keep template contents parsing in the tokenizer ==

 PRO: if we could come up with a way to perceive the stuff between
 <template> and </template> as a character stream, we enable a set of
 use cases where the template contents do not need to be a complete
 HTML subtree. For example, I could define a template that sets up a
 start of a table, then a few that provide repetition patterns for
 rows/cells, and then one to close out a table:

 <template id="head"><table><caption>Nyan-nyan</caption><thead> ...
 <tbody></template>
 <template id="row"><tr><template><td> ... </td></template></tr></template>
 <template id="foot"></tbody></table></template>

 Then I could slam these templates together with some API and produce
 an arbitrary set of tables.
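
 A hedged sketch of what such an API might mean -- the rawText
 accessor below is invented for illustration and is not part of any
 proposal:

     var head = document.getElementById("head").rawText; // invented accessor
     var row  = document.getElementById("row").rawText;
     var foot = document.getElementById("foot").rawText;
     var container = document.createElement("div");
     // Only the concatenation parses as a complete table subtree:
     container.innerHTML = head + row + row + row + foot;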

 PRO: Since the template contents are parsed as a string, we create
 opportunities for performance optimizations at the UA level. If a
 bunch of templates is declared, but only a handful are used, we could
 parse template contents on demand, thus reducing the churn of DOM
 elements.

 CON: Tokenizer needs to be really smart and will start looking a lot
 like a specialized parser. At first glance, template behaves much
 like a textarea -- any tags inside will just be treated as
 characters. It works until you realize that templates sometimes need
 to be nested. Any use case that involves building a
 larger-than-one-dimensional data representation (like tables) will
 involve nested templates. This makes things rather tricky. I made an
 attempt of sketching this out here:
 http://dvcs.w3.org/hg/webcomponents/raw-file/a28e16cc4167/spec/templates/index.html#parsing.
 As you can see, this adds a largish set of new states to tokenizer.
 And it is still incomplete, breaking in cases like
 <template><script>alert('template is
 awesome!');</script></template>.

 It could be argued that--while pursuing the tokenizer algorithm
 perfection--we could just stop at some point of complexity and issue a
 stern warning for developers to not get too crazy, because stuff will
 break -- akin to including the "</script>" string in your JavaScript code.
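
 For reference, the same failure mode in inline script, and the
 conventional workaround:

     // Inside an inline <script>, the parser ends the element at the
     // first occurrence of "</script>", even inside a string literal:
     //     var markup = "</script>";   // breaks the document
     var markup = "</scr" + "ipt>";     // conventional workaround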

 == IDEA 2: Just tweak insertion modes ==

 PRO: It's a lot less intrusive to the parser -- just adjust insertion
 modes to allow template tags in places where they would ordinarily be
 ignored or foster-parented, and add a new insertion mode for template
 contents to let all tags in. I made a quick sketch here:
 http://dvcs.w3.org/hg/webcomponents/raw-file/c96f051ca008/spec/templates/index.html#parsing
 (Note: more massaging is needed to make it really work)

 CON: You can't address fun partial-tree scenarios.

 Which idea appeals to you? Perhaps you have better ideas? Please share.

 :DG



Re: [webcomponents] HTML Parsing and the template element

2012-02-08 Thread Adam Barth
On Wed, Feb 8, 2012 at 2:20 PM, Erik Arvidsson a...@chromium.org wrote:
 On Wed, Feb 8, 2012 at 14:10, Adam Barth w...@adambarth.com wrote:
 ... Do you have a concrete example of
 where nested template declarations are required?

 When working with tree-like structures it is common to use recursive
 templates.

 http://code.google.com/p/mdv/source/browse/use_cases/tree.html

I'm not sure I fully understand how templates work, so please forgive
me if I'm butchering it, but here's how I could imagine changing that
example:

=== Original ===

<ul class="tree">
  <template iterate id="t1">
    <li class="{{ children | toggle('has-children') }}">{{name}}
      <ul>
        <template ref="t1" iterate="children"></template>
      </ul>
    </li>
  </template>
</ul>

=== Changed ===

<ul class="tree">
  <template iterate id="t1">
    <li class="{{ children | toggle('has-children') }}">{{name}}
      <ul>
        <template-reference ref="t1" iterate="children"></template-reference>
      </ul>
    </li>
  </template>
</ul>

(Obviously you'd want a snappier name than template-reference to
reference another template element.)

I looked at the other examples in the same directory and I didn't see
any other examples of nested template declarations.

Adam



Re: [webcomponents] HTML Parsing and the template element

2012-02-08 Thread Adam Barth
On Wed, Feb 8, 2012 at 2:47 PM, Rafael Weinstein rafa...@chromium.org wrote:
 Here's a real-world example, that's probably relatively simple
 compared to high traffic web pages (i.e. amazon or facebook)

 http://src.chromium.org/viewvc/chrome/trunk/src/chrome/common/extensions/docs/template/api_template.html?revision=120962content-type=text%2Fplain

 that produces each page of the chrome extensions API doc, e.g.

 http://code.google.com/chrome/extensions/contextMenus.html

 This uses jstemplate. Do a search in the first link. Every time you
 see jsdisplay or jsselect, think template.

It's a bit hard for me to understand that example because I don't know
how jstemplate works.

I'm just suggesting that rather than trying to jam a square peg
(template) into a round hole (the HTML parser), there might be a way
of reshaping both the peg and the hole into an octagon.

Adam


 On Wed, Feb 8, 2012 at 2:36 PM, Adam Barth w...@adambarth.com wrote:
 On Wed, Feb 8, 2012 at 2:20 PM, Erik Arvidsson a...@chromium.org wrote:
 On Wed, Feb 8, 2012 at 14:10, Adam Barth w...@adambarth.com wrote:
 ... Do you have a concrete example of
 where nested template declarations are required?

 When working with tree-like structures it is common to use recursive
 templates.

 http://code.google.com/p/mdv/source/browse/use_cases/tree.html

 I'm not sure I fully understand how templates work, so please forgive
 me if I'm butchering it, but here's how I could imagine changing that
 example:

 === Original ===

 <ul class="tree">
   <template iterate id="t1">
     <li class="{{ children | toggle('has-children') }}">{{name}}
       <ul>
         <template ref="t1" iterate="children"></template>
       </ul>
     </li>
   </template>
 </ul>

 === Changed ===

 <ul class="tree">
   <template iterate id="t1">
     <li class="{{ children | toggle('has-children') }}">{{name}}
       <ul>
         <template-reference ref="t1" iterate="children"></template-reference>
       </ul>
     </li>
   </template>
 </ul>

 (Obviously you'd want a snappier name than template-reference to
 reference another template element.)

 I looked at the other examples in the same directory and I didn't see
 any other examples of nested template declarations.

 Adam



Re: Concerns regarding cross-origin copy/paste security

2012-02-07 Thread Adam Barth
On Mon, May 16, 2011 at 8:41 PM, Hallvord R. M. Steen
hallv...@opera.com wrote:
 On Thu, 05 May 2011 06:46:55 +0900, Daniel Cheng dch...@chromium.org
 wrote:

 There was a recent discussion involving directly exposing the HTML
 fragment
 in a paste to a page, since we're doing the parsing anyway for security
 reasons. I have some concerns regarding

 http://www.w3.org/TR/clipboard-apis/#cross-origin-copy-paste-of-source-code
 though.

 From my understanding, we are trying to protect against [1] hidden data

 being copied without a user's knowledge and [2] XSS via pasting hostile
 HTML. In my opinion, the algorithm as written is either going to remove
 too
 much information or not enough. If it removes too much, the HTML paste is
 effectively useless to a client app. If it doesn't remove enough, then the
 client app is going to have to sanitize the HTML itself anyway.

 FWIW, my main concern was the hidden data aspect because it can be abused
 for cross-site request forgery if a malicious site, by getting the user to
 copy and paste, gains access to anti-CSRF form tokens and such.

That's certainly possible, but I don't think it's possible for us to
protect against the long tail of risks here.  In these sorts of cases,
it can be better for security to not implement a half-correct solution
and instead decide not to try to mitigate a particular risk.

 I *intend* to
 leave some processing of the HTML to the client application, for example the
 removal of third-party application-specific or browser-specific CSS
 properties.

 I see that Chrome applies different security policies depending on whether
 the content is read by a JavaScript (getData('text/html') - style) and
 inserted directly. You do some extra work to avoid XSS, such as removing on*
 event listener attributes and href=javascript: when content is inserted
 directly (you also remove some browser-specific elements and class names).
 This sort of clean up and processing on direct data insertion by the
 user-agent is not really in scope for the events spec IMO.

That makes sense.  The risk here is somewhat different from what
you've articulated above.  Rather than trying to prevent information
leaks from the source of the copy to the target of the paste,
these checks aim to prevent the source from injecting script into the
target.

 However, for getData('text/html') it seems you do no clean-up at all, not
 for cross-origin paste either.

Correct.  The idea here is to have a secure default but still let a
sophisticated web application handle the complicated cases if they
want to.  I just spoke with Ryosuke and Daniel, and we're considering
tightening up the default behavior somewhat to prevent injections of
style and other dangerous elements (probably by switching to a
whitelist).
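
To give a flavor of the whitelist direction -- the element list here is
purely illustrative, not the one under discussion:

    // Illustrative only: keep a small set of elements, drop everything
    // else (and its subtree) before exposing pasted markup.  A real
    // filter would also need to handle attributes, URLs, and CSS.
    var ALLOWED = { P: true, B: true, I: true, EM: true, STRONG: true };
    function whitelist(node) {
      var child = node.firstChild;
      while (child) {
        var next = child.nextSibling;
        if (child.nodeType === 1) {        // element node
          if (ALLOWED[child.tagName])
            whitelist(child);              // recurse into kept elements
          else
            node.removeChild(child);       // drop disallowed subtree
        }
        child = next;
      }
    }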

 Implementing the current spec would thus
 require that you tighten your existing security policy. Will you consider
 doing so, or would you rather argue for removal of any spec-mandated
 clean-up of cross-origin source code?

IMHO, we shouldn't try to protect the source of the data, but we
should aim to protect the target.  My understanding of your message
is that would cause us to remove the text in this spec.  If we find a
good whitelist for protecting the target, that's probably worth
writing in a spec so that browsers can interoperate, but it doesn't
have to be this spec if you feel that this behavior is out of scope.

 [2] is no different than using data from any
 other untrusted source, like dragging HTML or data from an XHR. It doesn't
 make sense to special-case HTML pastes.

 Using data is not the only threat model - limiting the damage potential
 when the page you paste into is malicious is harder. However, there is some
 overlap in the strategies we might use - for example event attributes are
 certainly hidden data, might contain secrets and might cause XSS attacks so
 you might argue for their removal based on both abuse scenarios though I
 think [2] is a more relevant threat.

The problem is that the tail of where sensitive information might
reside is long and thick, making these security measures only
partially effective, at best.

Adam



Re: [XHR] chunked requests

2011-12-17 Thread Adam Barth
On Sat, Dec 17, 2011 at 6:11 AM, Anne van Kesteren ann...@opera.com wrote:
 On Fri, 09 Dec 2011 19:54:31 +0100, Eric Rescorla e...@rtfm.com wrote:

 Unfortunately, many servers do not support TLS 1.1, and to make matters
 worse, they do so in a way that is not securely verifiable. By which I
 mean that an active attacker can force a client/server pair both of which
 support TLS 1.1 down to TLS 1.0. This may be detectable in some way, but not
 by TLS's built-in mechanisms. And since the threat model here is an active
 attacker, this is a problem.

 It seems user agents are addressing this issue in general by simply removing
 support for those servers so we might not have to define anything here and
 just leave it to the TLS standards:

 http://my.opera.com/securitygroup/blog/2011/12/11/opera-11-60-and-new-problems-with-some-secure-servers

I would still add a security consideration so folks who implement this
are aware that the two issues are related.

Adam



Re: [XHR] chunked requests

2011-12-09 Thread Adam Barth
On Fri, Dec 9, 2011 at 7:59 AM, Anne van Kesteren ann...@opera.com wrote:
 On Fri, 09 Dec 2011 16:33:08 +0100, Eric Rescorla e...@rtfm.com wrote:
 On Fri, Dec 9, 2011 at 4:59 AM, Anne van Kesteren ann...@opera.com
 wrote:
 Are you saying that if responseType is set to stream and the server
 only supports TLS 1.0 the connection should fail, but if it is greater than
 that it is okay?

 I'm not sure I understand this feature well enough. The issue is streaming
 content from the client, not from the server, and in particular the ability
 of the JS to provide additional content to be sent after the data has
 started to be transmitted.

 My bad. I meant send(Stream) which would indeed allow for that.

 As for what should happen in this setting if the negotiated TLS version
 is 1.0, I could imagine a number of possibilities, including:

 1. The client refuses to send
 2. There is a pre-flight and the server has to explicitly accept.
 3. There is a big nasty warning.
 4. We just warn people in the spec and hope they do something sensible

 Okay. I think I would very much prefer 1 here.

 Same-origin requests are always okay? (Though it seems we should just
 require TLS 1.1 there too then to not make matters too confusing.)


 Same-origin requests should be OK because the JS would have access
 to the relevant sensitive data in any case.

 Okay, I guess we can make that difference.

Correct me if I'm wrong, but I believe these issues are fixed in TLS
1.1.  Most user agents implement TLS 1.1 anyway, so this seems mostly
like a requirement to put in the security considerations section.

Adam



Re: [XHR] chunked requests

2011-12-08 Thread Adam Barth
Keep in mind that streamed or chunked uploads will expose the ability
to exploit the BEAST vulnerability in SSL:

http://www.educatedguesswork.org/2011/09/security_impact_of_the_rizzodu.html

Whatever spec we end up going with should note in its security
considerations that the user agent must implement TLS 1.2 or greater to
avoid this attack.

Adam


On Thu, Dec 8, 2011 at 2:16 PM, Jonas Sicking jo...@sicking.cc wrote:
 I think Microsoft's stream proposal would address this use case.

 / Jonas

 On Wed, Dec 7, 2011 at 5:59 PM, Wenbo Zhu wen...@google.com wrote:
 One use case that we have which is not currently handled by XMLHttpRequest
 is incrementally sending data that takes a long time to generate _from the
 client to the server_. For example, if we were to record data from a
 microphone, we couldn't upload it in real time to the server with the
 current API.

 The MediaStreaming spec also mentioned several use cases which would require
 streaming request data via an API:
 - Sending the locally-produced streams to remote peers and receiving streams
 from remote peers.
 - Sending arbitrary data to remote peers.

 http://www.whatwg.org/specs/web-apps/current-work/multipage/video-conferencing-and-peer-to-peer-communication.html

 - Wenbo




Re: TAG Comment on

2011-11-20 Thread Adam Barth
On Sun, Nov 20, 2011 at 3:30 PM, Mark Nottingham m...@mnot.net wrote:
 Yes, if you configure your browser to do so, you'll be assaulted with 
 requests for a test db from many Web sites that use common frameworks.

 I don't think that this should count as use.

Indeed.  That is not the sort of use I'm referring to.

 I do think now is precisely the time to be asking this kind of question; 
 these features are NOT yet used at *Web* scale -- they're used by people 
 willing to live on the bleeding edge, and therefore willing to accept risk of 
 change.

You're welcome to tilt at that windmill, but the chance that you get
these APIs removed from browsers is approximately zero.

Adam


 One of the problems with lumping in a lot of new feature development with a 
 spec maintenance / interop effort is confusion like this. Hopefully, the W3C 
 (and others) will learn from this.



 On 16/11/2011, at 9:47 AM, Adam Barth wrote:

 These APIs are quite widely used on the web.  It seems unlikely that
 we'll be able to delete either of them in favor of a single facility.

 Adam


 On Tue, Nov 15, 2011 at 2:05 PM, Noah Mendelsohn n...@arcanedomain.com 
 wrote:
 This is a comment from the W3C Technical Architecture Group on the last call
 working draft: Web Storage [1].

 The HTML5 Application Cache (AppCache) [2] and Local Storage [1] both
 provide client-side storage that can be used by Web Applications. Although
 the interfaces are different (AppCache has an HTML interface while Local
 Storage has a JavaScript API), and they do seem to have been designed with
 different use cases in mind, they provide somewhat related facilities: both
 cause persistent storage for an application to be created, accessed and
 managed locally at the client. If, for example, the keys in Local Storage
 were interpreted as URIs then Local Storage could be used to store manifest
 files and Web Applications could be written to look transparently for
 manifest files in either the AppCache or in Local Storage. One might also
 envision common facilities for querying the size of or releasing all of the
 local storage for a given application.

 At the Offline Web Applications Workshop on Nov 5, 2011 [3] there was a
 request for a JavaScript API for AppCache and talk about coordinating
 AppCache and Local Storage.

 The TAG believes it is important to consider more carefully the potential
 advantages of providing a single facility to cover the use cases, of perhaps
 modularizing the architecture so that some parts are shared, or if separate
 facilities are indeed the best design, providing common data access and
 manipulation APIs. If further careful analysis suggests that no such
 integration is practical, then, at a minimum, each specification should
 discuss how it is positioned with respect to the other.

 Noah Mendelsohn
 For the: W3C Technical Architecture Group

 [1] http://www.w3.org/TR/2011/WD-webstorage-20111025/
 [2] http://www.w3.org/TR/html5/offline.html#appcache
 [3] http://www.w3.org/2011/web-apps-ws/




 --
 Mark Nottingham   http://www.mnot.net/







Re: Sanitising HTML content through sandboxing

2011-11-10 Thread Adam Barth
On Thu, Nov 10, 2011 at 3:05 PM, Ryan Seddon seddon.r...@gmail.com wrote:
 DOMParser.parseFromString already takes a content type as the second

 argument. The plan is to support HTML parsing when the second argument
 is text/html.

 I quite like this as it keeps it agnostic towards what it is parsing so
 other formats like MathML and SVG won't look out of place with HTMLParser
 object.

 How would this handle sanitising?

The web page could walk the parsed DOM and sanitize it however it
liked before adopting the nodes from the detached document into its
own document.
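
For example, assuming xhr holds a completed request and DOMParser has
grown the text/html support mentioned above, a sketch of such a walk
(not exhaustive -- style, javascript: URLs, etc. would need the same
treatment):

    var doc = new DOMParser().parseFromString(xhr.responseText, "text/html");
    // Drop script elements outright; nothing in a detached document
    // has executed yet.
    var scripts = doc.querySelectorAll("script");
    for (var i = 0; i < scripts.length; i++)
      scripts[i].parentNode.removeChild(scripts[i]);
    // Strip on* event handler attributes everywhere.
    var all = doc.querySelectorAll("*");
    for (var j = 0; j < all.length; j++)
      for (var k = all[j].attributes.length - 1; k >= 0; k--)
        if (/^on/i.test(all[j].attributes[k].name))
          all[j].removeAttribute(all[j].attributes[k].name);
    // Only now adopt the sanitized content into the live document.
    while (doc.body.firstChild)
      document.body.appendChild(document.adoptNode(doc.body.firstChild));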

Adam



Re: Sanitising HTML content through sandboxing

2011-11-08 Thread Adam Barth
Also, a div doesn't represent a security boundary.  It's difficult to
sandbox something unless you have a security boundary around it.
IMHO, an easy way to solve this problem is to just exposes an
HTMLParser object, analogous to DOMParser, which folks can use to
safely parse HTML, e.g., from XMLHttpRequest.

Adam


On Tue, Nov 8, 2011 at 11:28 PM, Jonas Sicking jo...@sicking.cc wrote:
 Given that this type of sandbox would work very differently from the
 iframe sandbox, I think reusing the same attribute name would be
 confusing.

 Additionally, what's the behavior if you remove the attribute? What if
 you do elem.innerHTML += "foo" on the element after having removed the
 sandbox? Or on an element's parent?

 Or what happens if you do foo.innerHTML = bar.innerHTML where a parent
 of bar has sandbox set?

 When sanitizing, I strongly feel that we should simply remove all
 content that could execute script as to ensure that it doesn't leak
 somewhere else when markup is copied. Trying to ensure that it never
 executes, while still allowing it to exist, is too high risk IMO.

 / Jonas

 On Tue, Nov 8, 2011 at 5:21 PM, Ryan Seddon seddon.r...@gmail.com wrote:
 Right now there is no simple way to sanitise HTML content by stripping it of
 any potentially malicious HTML such as scripts etc.

 In the innerHTML in DocumentFragment thread I suggested following the
 sandbox attribute approach that can be applied to iframes. I've moved this
 out into its own thread, as Jonas suggested, so as not to dilute the
 innerHTML discussion.

 There was mention of a suggested API called innerStaticHTML as a potential
 solution to this, I personally would prefer to reuse the sandbox approach
 that the iframes use.

 e.g.

 xhr.responseText = "<script
 src='malicious.js'></script><div><h1>contentM</h1></div>";

 var div = document.createElement("div");

 div.sandbox = ""; // Static content only
 div.innerHTML = xhr.responseText;

 document.body.appendChild(div);

 This could also apply to a documentFragment and any other applicable DOM
 API's, being able to let the HTML parser do what it does best would make
 sense.

 The advantage of this over a new API is that it would also allow the use of
 the space separated tokens to white list certain things within the HTML
 being parsed into the document and open it to future extension.

 -Ryan






Re: AW: AW: AW: WebSocket API: close and error events

2011-10-26 Thread Adam Barth
On Wed, Oct 26, 2011 at 2:09 AM, Tobias Oberstein
tobias.oberst...@tavendo.de wrote:
 Generally speaking, we don't want to show certificate errors for subresource
 loads (including WebSocket connections) because the user has no context
 for making a reasonable security decision.  For example, Chrome doesn't let
 the user click through certificate errors for images or iframes.

 Ok, I see.

 However, aren't subresources somehow different? :

 1. HTML elements which refer to an HTTP-addressable resource, like IMG or IFRAME

 treating those as subresources and not presenting certificate dialogs for 
 each of them
 definitely is desirable

 2. XMLHttpRequest objects controlled from JS

 same origin policy applies, thus the certificate for the XMLHttpRequest
 is no different from the certificate of the original site serving the JS
 doing the request

With CORS, XMLHttpRequest is very capable of encountering certificate
errors on third-party origins.

 3. WebSocket objects controlled from JS

 same origin policy does not apply, the WS server gets the origin, but
 needs to make its own decision

 the target hosts/ports for WS connections embedded in a HTML/JS are designed 
 to and might often
 address different hosts/ports than the original serving host

 So 3. is somehow different from 2. and 1.

They're really all very similar.

 Is it nevertheless agreed consensus that 3. is to be treated like 1. -- like 
 a subresource, and thus
 presenting no browser built-in certificate dialog?

 ==

 Just asking .. since if that's the case, we're left with the following
 situation wrt self-signed certs and WS:

 - browsers won't show dialog because WS is a subresource
 - browsers won't give detailed close code to JS, since that would allow to 
 probe the network

 JS will get the general close code 1006 'abnormal closure' for a failed
 WS connection. That 1006 could be multiple things.

 [*] So we can only present a JS-rendered dialog: 'something went wrong
 with WS establishment'

 We can offer the user a link which takes him to an HTML status page
 rendered by the dedicated WS server.

 When he does, he can then accept the self-signed cert. Subsequent WS
 connections should then succeed.

 The dedicated WS server renders the status page when it receives an
 HTTP GET without an 'Upgrade: websocket' header.

 The [*] is not optimal UX .. but probably acceptable.

Usually (in non-attack scenarios) when a server has a self-signed
certificate, all users of the server see the self-signed certificate.
In particular, the developer is likely to find the problem and fix it
by getting a real certificate, which improves security.

Adam



Re: AW: AW: AW: WebSocket API: close and error events

2011-10-25 Thread Adam Barth
On Tue, Oct 25, 2011 at 2:34 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Oct 25, 2011 at 5:18 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 25 Oct 2011, Tobias Oberstein wrote:
 
  There are situations when self-signed certs are quite common like on
  private networks or where self-signed certs might be necessary, like
  with a software appliance that auto-creates a self-signed cert on first
  boot (and the user is too lazy / does not have own CA).

 A self-signed cert essentially provides you with no security. You might as
 well be not bothering with encryption.

 This is complete nonsense.  Protecting against passive attacks is a major,
 clear-cut win, even without protecting against active (MITM) attacks.

Generally speaking, we don't want to show certificate errors for
subresource loads (including WebSocket connections) because the user
has no context for making a reasonable security decision.  For
example, Chrome doesn't let the user click through certificate errors
for images or iframes.

Protection against passive eavesdroppers isn't worth losing protection
against active network attackers.

Adam



Re: A proposal for Element constructors

2011-10-25 Thread Adam Barth
Another solution to the "more than one tag per interface" problem is
to introduce subclasses of those interfaces for each tag.
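
For instance -- a sketch only, neither subclass below exists in any
spec -- HTMLQuoteElement currently backs both q and blockquote, so
per-tag subclasses would let each constructor imply exactly one tag:

    // Hypothetical per-tag subclasses of a shared interface:
    //   interface HTMLBlockQuoteElement : HTMLQuoteElement {};  // <blockquote>
    //   interface HTMLInlineQuoteElement : HTMLQuoteElement {}; // <q>
    // Then the constructor needs no tag name argument:
    var bq = new HTMLBlockQuoteElement({cite: "http://example.com/"});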

Adam


On Tue, Oct 25, 2011 at 8:42 PM, Kentaro Hara hara...@chromium.org wrote:
 Hi folks,
 * Background *
 I have been working on making DOM objects look and feel more like ordinary
 JavaScript objects. I implemented Event constructors  [1], for example.

 * Proposal *
 Element.create() has been proposed and is under discussion [2]. Besides
 Element.create(), I propose constructors for Elements, like new
 HTMLButtonElement(...), new HTMLDivElement(...) and new
 HTMLVideoElement(...). I think that both Element.create() *and* Element
 constructors are useful.
 For example,
     var button = new HTMLButtonElement({disabled: true},
                                        [new TextNode("Click Me!!"),
                                         new Image({src: "http://example.com/xxx.png"})]);
     document.body.appendChild(button);
 is equivalent to the following HTML:
     <button disabled>
       Click Me!!
       <img src="http://example.com/xxx.png" />
     </button>
 As shown above, the constructor has two arguments. The first one is a
 dictionary of Element properties. The second one is an array of Nodes, which
 become child nodes of the Element.

 * Advantages of Element constructors *
 (a) Better errors when you misspell it
 Element.create("vdieo", {src: ...}) ==> No error; HTMLUnknownElement is
 created
 new HTMLVdieoElement({src: ...}) ==> ReferenceError; stack trace points
 you to the faulty line of code
 (b) Consistency with other constructable DOM objects
 e.g. new XMLHttpRequest(), new Image(), new Event(), new CustomEvent(), new
 MessageEvent(), ...
 (c) Enables subtyping DOM objects in the future
 We are planning to make DOM objects subtype-able, like this:
     function MyButton(text) {
         HTMLButtonElement.call(this);    /* (#) */
         this.textContent = text;
     }
     MyButton.prototype = Object.create(HTMLButtonElement.prototype, {...});
     var fancyButton = new MyButton("Click Me!!");
 In order to make the line (#) work, HTMLButtonElement must have a
 constructor.

 * Spec examples *
 interface [
     NamedConstructor(optional HTMLButtonElementInit initDict, optional
 NodeArray children)
 ] HTMLButtonElement : HTMLElement {
     attribute boolean disabled;
     attribute DOMString value;
     ... (omitted)
 }
 dictionary HTMLButtonElementInit : HTMLElementInit {
     boolean disabled;
     DOMString value;
     ... (omitted)
 }
 interface [
     NamedConstructor(optional HTMLElementInit initDict, optional NodeArray
 children)
 ] HTMLElement : Element {
     attribute DOMString lang;
     attribute DOMString className;
     ... (omitted)
 }
 dictionary HTMLElementInit : ElementInit {
     DOMString lang;
     DOMString className;
     ... (omitted)
 }
 interface Element : Node {
     readonly attribute DOMString tagName;
     ... (omitted)
 }
 dictionary ElementInit : NodeInit {
     DOMString tagName;
     ... (omitted)
 }
 interface Node {
     readonly attribute unsigned short nodeType;
     ... (omitted)
 }
 dictionary NodeInit {
     unsigned short nodeType;
     ... (omitted)
 }

 * Discussion
 (a) Element.create() *and* Element constructors?
 I think that both are useful for the reasons that dominicc pointed out in
 [3]. Element.create() is good when we have a tag name in hand, like
     var tag = "button";
     Element.create(tag, { }, ...);
 On the other hand, Element constructors have the advantages that I described
 above.
 (b) How to specify properties and attributes in the dictionary argument
 A property and an attribute are two different things [4]. A property is the
 thing that can be set by a setter like foo.value, and an attribute is the
 thing that can be set by foo.setAttribute(). A discussion has been conducted
 on how we can set up properties and attributes in the dictionary argument
 [2]. Proposed approaches in [2] are as follows:
 (b-1) Let a constructor have two dictionaries, one for properties and the
 other for attributes, e.g. new HTMLButtonElement( {disabled: true}, {class:
 "myButton", onclick: func}, ...)
 (b-2) Add a prefix character to attributes, e.g. new HTMLButtonElement(
 {disabled: true, "@class": "myButton", "@onclick": func} )
 (b-3) Introduce a reserved key "attributes", e.g. new HTMLButtonElement(
 {disabled: true, attributes: {class: "myButton", onclick: func} } )
 Another difficulty around attributes is how to naturally set a value of a
 boolean attribute in the dictionary when we have the value in hand. In case
 of a property, you just need to write like this:
     var b = new HTMLButtonElement({disabled: value});
 However, in case of an attribute, we need to write like this:
     var options = {};
     if (value)
          options.disabled = "";
     var b = new HTMLButtonElement(options);
 Basically, I think that we should keep the succinctness of constructors and
 

Re: [DOM] Name

2011-09-05 Thread Adam Barth
On Sun, Sep 4, 2011 at 2:08 PM, Charles Pritchard ch...@jumis.com wrote:
 On 9/4/11 6:39 AM, Anne van Kesteren wrote:

 On Sun, 04 Sep 2011 15:12:45 +0200, Arthur Barstow art.bars...@nokia.com
 wrote:

 The CfC to publish a new WD of DOM Core was blocked by this RfC. I will
 proceed with a  request to publish a new WD of DOM Core in TR/. The name DOM
 Core will be used for the upcoming WD. If anyone wants to propose a name
 change, please start a *new* thread.

 Given that the specification replaces most of DOM2 and DOM3 I suggest we
 name it DOM4, including for the upcoming WD (or alternatively a WD we
 publish a couple of weeks later).

 I propose calling it Web Core.
 WC1 (Web Core version 1).

WebCore is one of the major implementation components of WebKit.
Calling this spec Web Core might be confusing for folks who work on
WebKit.  It would be somewhat like calling a spec Presto.  :)

Adam


 The Web semantic is popular, easy.

 The w3c lists are heavy with the web semantic: web apps, web components,
 web events.
 The primary dependency for DOMCore is named Web IDL.

 It'd give DOM3 some breathing room, to go down its own track.

 I'd much prefer to go around referring to Web IDL and Web Core.

 -Charles






Re: [DOM] Name

2011-09-05 Thread Adam Barth
On Mon, Sep 5, 2011 at 2:33 PM, Charles Pritchard ch...@jumis.com wrote:
 On Sep 5, 2011, at 12:06 PM, Adam Barth w...@adambarth.com wrote:
 On Sun, Sep 4, 2011 at 2:08 PM, Charles Pritchard ch...@jumis.com wrote:
 On 9/4/11 6:39 AM, Anne van Kesteren wrote:

 On Sun, 04 Sep 2011 15:12:45 +0200, Arthur Barstow art.bars...@nokia.com
 wrote:

 The CfC to publish a new WD of DOM Core was blocked by this RfC. I will
 proceed with a  request to publish a new WD of DOM Core in TR/. The name DOM
 Core will be used for the upcoming WD. If anyone wants to propose a name
 change, please start a *new* thread.

 Given that the specification replaces most of DOM2 and DOM3 I suggest we
 name it DOM4, including for the upcoming WD (or alternatively a WD we
 publish a couple of weeks later).

 I propose calling it Web Core.
 WC1 (Web Core version 1).

 WebCore is one of the major implementation components of WebKit.
 Calling this spec Web Core might be confusing for folks who work on
 WebKit.  It would be somewhat like calling a spec Presto.  :)

 Or calling a browser Chrome.

:)

 Web Core does implement web core, doesn't it?

Yes, but it also implements HTML5, which isn't part of Web Core.

Adam



Re: HTTP, websockets, and redirects

2011-09-01 Thread Adam Barth
On Thu, Sep 1, 2011 at 4:53 AM, Rich Tibbett ri...@opera.com wrote:
 Adam Barth wrote:

 Generally speaking, browsers have been moving away from triggering
 authentication dialogs for subresource loads because they are more
 often used for phishing than for legitimate purposes.  A WebSocket
 connection is much like a subresource load.

 Couldn't we only allow digest-based 401/407 authentication for subresource
 loads so an attacker cannot simply obtain unencrypted credentials?

Unfortunately, digest isn't robust against phishing.

Adam


 The latest WebSocket protocol spec [1] says:

 This protocol doesn't prescribe any particular way that servers can
 authenticate clients during the WebSocket handshake.  The WebSocket server
 can use any client authentication mechanism available to a generic HTTP
 server, such as Cookies, HTTP Authentication, TLS authentication.

 Without HTTP Auth and the difficulty/performance issues of doing TLS auth in
 pure JavaScript it seems we are left with a cookie-only authentication
 option.

 Disabling support for Basic HTTP Authentication but still ticking the box
 for HTTP auth, with the added benefit that all such HTTP auth will be
 encrypted, seems like a win-win here.

 - Rich

 [1]
 http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-13#page-54



 On Wed, Aug 10, 2011 at 9:36 PM, Brian Raymor
 brian.ray...@microsoft.com  wrote:

 What is the rationale for also failing the websocket connection when a
 response for authentication is received such as:

 401 Unauthorized
 407 Proxy Authentication Required





Re: xdash name prefixes (was Re: Component Model Update)

2011-08-26 Thread Adam Barth
On Fri, Aug 26, 2011 at 1:07 AM, Roland Steiner
rolandstei...@google.com wrote:
 On Thu, Aug 25, 2011 at 9:12 AM, Adam Barth w...@adambarth.com wrote:

 On the other hand, it seems likely that some of these xdash names will
 come into multi-party use.  For example, the following use cases
 involve xdash names chosen by one party and then used by another:

 http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Widget_Mix-and-Match
 http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Contacts_Widget
 http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Like.2F.2B1_Button


 Since the components that are used on a page are under the control of the
 page's author, it should be possible to avoid clashes by separating a
 component's definition (potentially pulled from third party) from its tag
 registration (done by page author), e.g.
 // Importing component definition for Facebook Like button
 // Importing component definition for Google+ +1 button
 // ... later:
 Element.register("x-fb", Facebook.LikeButton)
 Element.register("x-gg", GooglePlus.PlusOneButton)

Doesn't it seem more likely that the third-party will do the
registration in whatever script you include that implements the Like
button, or whatever?
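
I.e., something like this sketch, reusing the Element.register shape
from above (the file name and tag name are invented):

    // Inside like-button.js, served by the third party and included
    // by the page; registration happens as a side effect of loading.
    Element.register("x-fb-like", Facebook.LikeButton);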

Adam


 That's something like 40% of the use cases...

 I don't have much of a better suggestion.  You're running up against
 all the usual distributed extensibility issues.

 We could use namespaces... *ducks and runs* :D

 Cheers,
 - Roland




Re: xdash name prefixes (was Re: Component Model Update)

2011-08-25 Thread Adam Barth
On Wed, Aug 24, 2011 at 5:18 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Wed, Aug 24, 2011 at 5:12 PM, Adam Barth w...@adambarth.com wrote:
 On Wed, Aug 24, 2011 at 2:30 PM, Dominic Cooney domin...@google.com wrote:
 On Thu, Aug 25, 2011 at 2:03 AM, Dimitri Glazkov dglaz...@chromium.org 
 wrote:
 On Tue, Aug 23, 2011 at 9:19 PM, Adam Barth w...@adambarth.com wrote:
 This section http://wiki.whatwg.org/wiki/Component_Model#Performance
 says "When an unknown DOM element with an 'x-'-prefixed tagName is
 encountered ...". It seems unfortunate to special-case tag names that
 begin with x-.  The IETF has a lot of experience with x- prefixes,
 and they're somewhat unhappy with them:

 http://tools.ietf.org/html/draft-saintandre-xdash

 I can't seem to draw a parallel between prefixing author-defined
 custom DOM elements and prefixing HTTP parameters -- other than the
 prefix itself, that is. There's a clear meaning of the prefix in the
 Component Model -- this element was defined by an author.
 Additionally, we are explicitly trying to avoid a registry-like
 situation, where one has to announce or qualify for the right to use a
 tag name. Can you help me understand what your concerns are?

 That RFC is interesting, but I didn’t find it a perfect parallel either.

 In protocol headers, clients and servers need to agree on the meaning
 of headers, and require migration from non-standard to standard
 headers with attendant interoperability issues. Components are
 different, because both the x-name and its definition are under
 control of the author. The intent is that if HTML standardizes an
 x-name, it will be christened with the un-prefixed name; the UA can
 continue supporting old x-names and definitions using the generic
 component mechanism.

 I guess we could get into interoperability difficulties if user agents
 start to rely on specific x-names and ignoring or augment their
 definitions. For example, if a crawler ignores the scripts that define
 components but interpret a common x-name a particular way. Or if a
 browser automatically augments the definition of a given x-name for
 better security or accessibility.

 Yeah, the parallel breaks down a bit because in HTTP the X- names
 are used by two parties and here we're only talking about one party.
 Maybe a better parallel is data attributes, which are also segmented
 into their own namespace...

 Yes, the data-* attributes are the correct thing to draw parallels to here.


 On the other hand, it seems likely that some of these xdash names will
 come into multi-party use.  For example, the following use cases
 involve xdash names chosen by one party and then used by another:

 http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Widget_Mix-and-Match
 http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Contacts_Widget
 http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Like.2F.2B1_Button

 That's something like 40% of the use cases...

 These are fine as well; the important case where prefixing causes
 problems is when one of the parties is the browser itself, where it
 will eventually want to change from recognizing the prefixed name to
 recognizing the unprefixed name.

That's a pretty narrow view.  :)

Adam



xdash name prefixes (was Re: Component Model Update)

2011-08-24 Thread Adam Barth
On Wed, Aug 24, 2011 at 2:30 PM, Dominic Cooney domin...@google.com wrote:
 On Thu, Aug 25, 2011 at 2:03 AM, Dimitri Glazkov dglaz...@chromium.org 
 wrote:
 On Tue, Aug 23, 2011 at 9:19 PM, Adam Barth w...@adambarth.com wrote:
 This section http://wiki.whatwg.org/wiki/Component_Model#Performance
 says "When an unknown DOM element with an 'x-'-prefixed tagName is
 encountered ...". It seems unfortunate to special-case tag names that
 begin with x-.  The IETF has a lot of experience with x- prefixes,
 and they're somewhat unhappy with them:

 http://tools.ietf.org/html/draft-saintandre-xdash

 I can't seem to draw a parallel between prefixing author-defined
 custom DOM elements and prefixing HTTP parameters -- other than the
 prefix itself, that is. There's a clear meaning of the prefix in the
 Component Model -- this element was defined by an author.
 Additionally, we are explicitly trying to avoid a registry-like
 situation, where one has to announce or qualify for the right to use a
 tag name. Can you help me understand what your concerns are?

 That RFC is interesting, but I didn’t find it a perfect parallel either.

 In protocol headers, clients and servers need to agree on the meaning
 of headers, and require migration from non-standard to standard
 headers with attendant interoperability issues. Components are
 different, because both the x-name and its definition are under
 control of the author. The intent is that if HTML standardizes an
 x-name, it will be christened with the un-prefixed name; the UA can
 continue supporting old x-names and definitions using the generic
 component mechanism.

 I guess we could get into interoperability difficulties if user agents
 start to rely on specific x-names and ignoring or augment their
 definitions. For example, if a crawler ignores the scripts that define
 components but interpret a common x-name a particular way. Or if a
 browser automatically augments the definition of a given x-name for
 better security or accessibility.

Yeah, the parallel breaks down a bit because in HTTP the X- names
are used by two parties and here we're only talking about one party.
Maybe a better parallel is data attributes, which are also segmented
into their own namespace...

On the other hand, it seems likely that some of these xdash names will
come into multi-party use.  For example, the following use cases
involve xdash names chosen by one party and then used by another:

http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Widget_Mix-and-Match
http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Contacts_Widget
http://wiki.whatwg.org/wiki/Component_Model_Use_Cases#Like.2F.2B1_Button

That's something like 40% of the use cases...

I don't have much of a better suggestion.  You're running up against
all the usual distributed extensibility issues.

Adam



Re: Component Model Update

2011-08-23 Thread Adam Barth
I feel somewhat like I'm walking into the middle of a movie, but I
have a couple questions.  Please forgive me if my questions have
already been answer in previous discussions.

On Tue, Aug 23, 2011 at 1:40 PM, Dimitri Glazkov dglaz...@chromium.org wrote:
 All,

 Over the last few weeks, a few folks and myself have been working on
 fleshing out the vision for the Component Model. Here's what we've
 done so far:

 * Created a general overview document for the behavior attachment problem
 on the Web (http://wiki.whatwg.org/wiki/Behavior_Attachment);
 * Wrote down a set of guidelines on how we intend to tackle the
 problem (http://wiki.whatwg.org/wiki/Component_Model_Methodology);
 * Updated the list of use cases and desired properties for each case
 (http://wiki.whatwg.org/wiki/Component_Model_Use_Cases);
 * Captured the overall component model design and how it satisfies
 each desired property (http://wiki.whatwg.org/wiki/Component_Model),

This section http://wiki.whatwg.org/wiki/Component_Model#Consistency
seems to imply that components can override the traversal and
manipulation APIs defined by DOM Core.  Do you mean that they can
override the JavaScript APIs folks use for traversal and manipulation,
or can they override the traversal and manipulation APIs used by other
languages bound to the DOM and internally by specifications?

For example, suppose we implemented the Component Model in WebKit and
a component overrode the nextSibling traversal API.  Would
Objective-C code interacting with the component (e.g., via Mac OS X's
Objective-C API for interacting with the DOM) see the original API or the
override?  Similarly, for browsers such as Safari, Chrome, Firefox,
and Opera that provide a script-based extension mechanism, would
extensions interacting with these components (e.g., via isolated
worlds or XPCNativeWrappers) see the original API or the override?

My sense is that you only mean that Components can shadow (and here I
mean shadow in the traditional Computer Science sense
http://en.wikipedia.org/wiki/Variable_shadowing) the normal
traversal and manipulation, not that they can override it, per se.
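
In plain ES5 terms, the distinction is roughly this sketch:

    var el = document.createElement("div");
    // An own property *shadows* the prototype's nextSibling accessor
    // for JavaScript callers that go through this object...
    Object.defineProperty(el, "nextSibling", {
      get: function () { return null; }
    });
    // ...but the engine's internal traversal and other language
    // bindings keep using the original, un-overridden accessor.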


This section http://wiki.whatwg.org/wiki/Component_Model#Encapsulation
says ... and ensures that no information about the shadow DOM tree
crosses this boundary.  Surely that's an overstatement.  At a
minimum, I assume the shadow DOM participates in layout, so its height
and width is leaked.


---8---
var shadow2 = new ShadowRoot(this); // throws an exception.
---8---

I'm not sure I understand why that's the best design decision.  Maybe
this is explained elsewhere?  A link would help folks like me
understand better.  It looks like this design decision is tied up with
how http://wiki.whatwg.org/wiki/Component_Model#Composability works.


This section http://wiki.whatwg.org/wiki/Component_Model#Desugaring
says ... this also explains why you can't add shadow DOM subtrees to
input or details elements.  It seems unfortunate that some elements
will accept new ShadowRoots but others will not.  Is this an
implementation detail?  What's the list of elements that reject
ShadowRoots?

As an example, it seems entirely reasonable that you'd want to create
an autocomplete dropdown component for use with an input element.  It
seems like the natural thing to do would be to subclass the input
element and add an autocomplete dropdown as a shadow DOM.  This design
choice appears to preclude this use case.  Instead, I need to subclass
div or whatever and replicate the HTMLInputElement API, which seems
like the opposite of the reuse existing mechanisms design principle
http://wiki.whatwg.org/wiki/Component_Model_Methodology#Design_Priniciples.


This section http://wiki.whatwg.org/wiki/Component_Model#Performance
says "When an unknown DOM element with an 'x-'-prefixed tagName is
encountered ...". It seems unfortunate to special-case tag names that
begin with x-.  The IETF has a lot of experience with x- prefixes,
and they're somewhat unhappy with them:

http://tools.ietf.org/html/draft-saintandre-xdash


This section 
http://wiki.whatwg.org/wiki/Component_Model#Confinement_Primitives
seems somewhat half-baked at the moment.  It says as much, so I
presume it's more of a work-in-progress.  Getting confinement right is
pretty tricky.

 including a handy comparison with existing relevant specs and
 implementations
 (http://wiki.whatwg.org/wiki/Component_Model#Comparison_With_Existing_Specs_and_Implementations).

 After this iteration, the proposed shadow DOM API no longer
 includes the .shadow accessor (see details here
 http://dglazkov.github.com/component-model/dom.html). Instead, the
 shadow DOM subtree association happens in ShadowRoot constructor:

 var element = document.createElement("div");
 var shadow = new ShadowRoot(element); // {element} now has a shadow DOM
 subtree, and {shadow} is its root.
 shadow.appendChild(document.createElement("p")).textContent = 'weee!!';

This looks like a substantial improvement.

 Keeping the 

Re: Component Model Update

2011-08-23 Thread Adam Barth
On Tue, Aug 23, 2011 at 9:19 PM, Adam Barth w...@adambarth.com wrote:
 I feel somewhat like I'm walking into the middle of a movie, but I
 have a couple questions.  Please forgive me if my questions have
 already been answer in previous discussions.

 On Tue, Aug 23, 2011 at 1:40 PM, Dimitri Glazkov dglaz...@chromium.org 
 wrote:
 All,

 Over the last few weeks, a few folks and myself have been working on
 fleshing out the vision for the Component Model. Here's what we've
 done so far:

 * Created a general overview document for the behavior attachment problem
 on the Web (http://wiki.whatwg.org/wiki/Behavior_Attachment);
 * Wrote down a set of guidelines on how we intend to tackle the
 problem (http://wiki.whatwg.org/wiki/Component_Model_Methodology);
 * Updated the list of use cases and desired properties for each case
 (http://wiki.whatwg.org/wiki/Component_Model_Use_Cases);
 * Captured the overall component model design and how it satisfies
 each desired property (http://wiki.whatwg.org/wiki/Component_Model),

 This section http://wiki.whatwg.org/wiki/Component_Model#Consistency
 seems to imply that components can override the traversal and
 manipulation APIs defined by DOM Core.  Do you mean that they can
 override the JavaScript APIs folks use for traversal and manipulation,
 or can they override the traversal and manipulation APIs used by other
 languages bound to the DOM and internally by specifications?

 For example, suppose we implemented the Component Model in WebKit and
 a component overrode the nextSibling traversal API.  Would
 Objective-C code interacting with the component (e.g., via Mac OS X's
 Objective-C API for interacting with the DOM) see the original API or the
 override?  Similarly, for browsers such as Safari, Chrome, Firefox,
 and Opera that provide a script-based extension mechanism, would
 extensions interacting with these components (e.g., via isolated
 worlds or XPCNativeWrappers) see the original API or the override?

 My sense is that you only mean that Components can shadow (and here I
 mean shadow in the traditional Computer Science sense
 http://en.wikipedia.org/wiki/Variable_shadowing) the normal
 traversal and manipulation, not that they can override it, per se.


 This section http://wiki.whatwg.org/wiki/Component_Model#Encapsulation
 says ... and ensures that no information about the shadow DOM tree
 crosses this boundary.  Surely that's an overstatement.  At a
 minimum, I assume the shadow DOM participates in layout, so its height
 and width is leaked.


 ---8---
 var shadow2 = new ShadowRoot(this); // throws an exception.
 ---8---

 I'm not sure I understand why that's the best design decision.  Maybe
 this is explained elsewhere?  A link would help folks like me
 understand better.  It looks like this design decision is tied up with
 how http://wiki.whatwg.org/wiki/Component_Model#Composability works.


 This section http://wiki.whatwg.org/wiki/Component_Model#Desugaring
 says ... this also explains why you can't add shadow DOM subtrees to
 input or details elements.  It seems unfortunate that some elements
 will accept new ShadowRoots but others will not.  Is this an
 implementation detail?  What's the list of elements that reject
 ShadowRoots?

 As an example, it seems entirely reasonable that you'd want to create
 an autocomplete dropdown component for use with an input element.  It
 seems like the natural thing to do would be to subclass the input
 element and add an autocomplete dropdown as a shadow DOM.  This design
 choice appears to preclude this use case.  Instead, I need to subclass
 div or whatever and replicate the HTMLInputElement API, which seems
 like the opposite of the reuse existing mechanisms design principle
 http://wiki.whatwg.org/wiki/Component_Model_Methodology#Design_Priniciples.


 This section http://wiki.whatwg.org/wiki/Component_Model#Performance
 says "When an unknown DOM element with an 'x-'-prefixed tagName is
 encountered ...". It seems unfortunate to special-case tag names that
 begin with x-.  The IETF has a lot of experience with x- prefixes,
 and they're somewhat unhappy with them:

 http://tools.ietf.org/html/draft-saintandre-xdash


 This section 
 http://wiki.whatwg.org/wiki/Component_Model#Confinement_Primitives
 seems somewhat half-baked at the moment.  It says as much, so I
 presume it's more of a work-in-progress.  Getting confinement right is
 pretty tricky.

 including a handy comparison with existing relevant specs and
 implementations
 (http://wiki.whatwg.org/wiki/Component_Model#Comparison_With_Existing_Specs_and_Implementations).

 After this iteration, the proposed shadow DOM API no longer
 includes the .shadow accessor (see details here
 http://dglazkov.github.com/component-model/dom.html). Instead, the
 shadow DOM subtree association happens in ShadowRoot constructor:

 var element = document.createElement("div");
 var shadow = new ShadowRoot(element); // {element} now has a shadow DOM
 subtree, and {shadow} is its

Re: HTTP, websockets, and redirects

2011-08-11 Thread Adam Barth
Generally speaking, browsers have been moving away from triggering
authentication dialogs for subresource loads because they are more
often used for phishing than for legitimate purposes.  A WebSocket
connection is much like a subresource load.

Adam


On Wed, Aug 10, 2011 at 9:36 PM, Brian Raymor
brian.ray...@microsoft.com wrote:

 What is the rationale for also failing the websocket connection when a 
 response for authentication is received such as:

 401 Unauthorized
 407 Proxy Authentication Required


 On 8/10/11 Art Barstow wrote:

 Hi All,

 Bugzilla now reports only 2 bugs for the Web Socket API [WSAPI] and I would
 characterize them both as editorial [Bugs]. As such, the redirect issue 
 Thomas
 originally reported in this thread (see [Head]) appears to be the only
 substantive issue blocking WSAPI Last Call.

 If anyone wants to continue discussing this redirect issue for WSAPI, I
 recommend using e-mail (additionally, it may be useful to also create a new
 bug in Bugzilla).

 As I understand it, the HyBi WG plans to freeze the Web Socket Protocol spec
 real soon now (~August 19?).

 -Art Barstow

 [WSAPI] http://dev.w3.org/html5/websockets/
 [Head]
 http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/0474.html
 [Bugs]
 http://www.w3.org/Bugs/Public/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&product=WebAppsWG&component=WebSocket+API+%28editor%3A+Ian+Hickson%29&longdesc_type=allwordssubstr&longdesc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&emailtype1=substring&email1=&emailtype2=substring&email2=&bug_id_type=anyexact&bug_id=&votes=&chfieldfrom=&chfieldto=Now&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&field0-0-0=noop&type0-0-0=noop&value0-0-0=


 On 7/27/11 8:12 PM, ext Adam Barth wrote:
  On Mon, Jul 25, 2011 at 3:52 PM, Gabriel Montenegro
  gabriel.montene...@microsoft.com  wrote:
  Thanks Adam,
 
  By discussed on some  mailing list, do you mean a *W3C* mailing list?
  A quick search turned up this message:
 
  But I'm totally fine with punting on this for the future and just
  disallowing redirects on an API level for now.
 
  http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-March/031079.html
 
  I started that thread at Greg Wilkins' recommendation:
 
  This is essentially an API issue for the browser websocket object.
 
  http://www.ietf.org/mail-archive/web/hybi/current/msg06954.html
 
  Also, allowing the users to handle these explicitly implies that the API 
  does
 not mandate dropping the connection. Currently, the API does not have this
 flexibility, nor does it allow other uses of non-101 codes, like for
 authentication. I understand the potential risks with redirects in browsers, 
 and I
 thought at one moment we were going to augment the security considerations
 with your help for additional guidance. If websec has already worked on 
 similar
 language in some draft that we could reuse that would be great, or, 
 similarly, if
 we could work with you on that text.
  We can always add support for explicitly following redirects in the
  future.  If we were to automatically follow them today, we'd never be
  able to remove that behavior by default.
 
  Adam
 
 
  -Original Message-
  From: Adam Barth [mailto:w...@adambarth.com]
  Sent: Sunday, July 24, 2011 13:35
  To: Thomas Roessler
  Cc: public-ietf-...@w3.org; WebApps WG; Salvatore Loreto; Gabriel
  Montenegro; Art Barstow; François Daoust; Eric Rescorla; Harald
  Alvestrand; Tobias Gondrom
  Subject: Re: HTTP, websockets, and redirects
 
  This issue was discussed on some mailing list a while back (I forget
  which).  The consensus seemed to be that redirects are the source of
  a large number of security vulnerabilities in HTTP and we'd like
  users of the WebSocket API to handle them explicitly.
 
  I'm not sure I understand your question regarding WebRTC, but the
  general answer to that class of questions is that WebRTC relies, in
  large part, on ICE to be secure against cross-protocol and voicehammer
 attacks.
 
  Adam
 
 
  On Sun, Jul 24, 2011 at 6:52 AM, Thomas Roessler t...@w3.org wrote:
  The hybi WG is concerned about the following clause in the
  websocket API
  spec:
  "When the user agent validates the server's response during the 'establish
  a WebSocket connection' algorithm, if the status code received from the
  server is not 101 (e.g. it is a redirect), the user agent must fail the
  websocket connection."
  http://dev.w3.org/html5/websockets/
 
  Discussion with the WG chairs:
 
  - this looks like a conservative attempt to lock down redirects in
  the face of ill-understood cross-protocol interactions
  - critical path for addressing includes analysis of interaction
  with XHR, XHR2, CORS
  - following redirects in HTTP is optional for the client, therefore
  in principle a decision

Re: HTTP, websockets, and redirects

2011-07-27 Thread Adam Barth
On Mon, Jul 25, 2011 at 3:52 PM, Gabriel Montenegro
gabriel.montene...@microsoft.com wrote:
 Thanks Adam,

 By "discussed on some mailing list", do you mean a *W3C* mailing list?

A quick search turned up this message:

But I'm totally fine with punting on this for the future and just
disallowing redirects on an API level for now.

http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-March/031079.html

I started that thread at Greg Wilkins' recommendation:

This is essentially an API issue for the browser websocket object.

http://www.ietf.org/mail-archive/web/hybi/current/msg06954.html

 Also, allowing the users to handle these explicitly implies that the API does 
 not mandate dropping the connection. Currently, the API does not have this 
 flexibility, nor does it allow other uses of non-101 codes, like for 
 authentication. I understand the potential risks with redirects in browsers, 
 and I thought at one moment we were going to augment the security 
 considerations with your help for additional guidance. If websec has already 
 worked on similar language in some draft that we could reuse that would be 
 great, or, similarly, if we could work with you on that text.

We can always add support for explicitly following redirects in the
future.  If we were to automatically follow them today, we'd never be
able to remove that behavior by default.
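
A page that wants redirect-like behavior therefore has to build it at the
application layer today. A minimal sketch, assuming a hypothetical server
convention where the first message may carry a redirect target (the endpoint
URL and the message shape are illustrative, not part of any spec):

// Sketch only: "redirect" is an application-level convention here,
// not part of the WebSocket protocol or the API.
function openWithAppRedirects(url, maxHops) {
  var ws = new WebSocket(url);
  ws.onmessage = function (event) {
    var msg;
    try { msg = JSON.parse(event.data); } catch (e) { return; }
    if (msg && msg.redirect && maxHops > 0) {
      ws.close();
      // The page, not the user agent, decides whether to follow.
      openWithAppRedirects(msg.redirect, maxHops - 1);
    }
  };
  ws.onerror = function () {
    // A non-101 response (such as a 3xx) surfaces only as a failed
    // connection; the status code is deliberately not exposed.
    console.log("connection failed; redirect was not followed");
  };
}

openWithAppRedirects("wss://example.com/socket", 3);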

Adam


 -Original Message-
 From: Adam Barth [mailto:w...@adambarth.com]
 Sent: Sunday, July 24, 2011 13:35
 To: Thomas Roessler
 Cc: public-ietf-...@w3.org; WebApps WG; Salvatore Loreto; Gabriel
 Montenegro; Art Barstow; François Daoust; Eric Rescorla; Harald Alvestrand;
 Tobias Gondrom
 Subject: Re: HTTP, websockets, and redirects

 This issue was discussed on some mailing list a while back (I forget which).  The
 consensus seemed to be that redirects are the source of a large number of
 security vulnerabilities in HTTP and we'd like users of the WebSocket API to
 handle them explicitly.

 I'm not sure I understand your question regarding WebRTC, but the general
 answer to that class of questions is that WebRTC relies, in large part, on ICE to
 be secure against cross-protocol and voicehammer attacks.

 Adam


 On Sun, Jul 24, 2011 at 6:52 AM, Thomas Roessler t...@w3.org wrote:
  The hybi WG is concerned about the following clause in the websocket API
 spec:
 
  "When the user agent validates the server's response during the 'establish
  a WebSocket connection' algorithm, if the status code received from the
  server is not 101 (e.g. it is a redirect), the user agent must fail the
  websocket connection."
 
  http://dev.w3.org/html5/websockets/
 
  Discussion with the WG chairs:
 
  - this looks like a conservative attempt to lock down redirects in the
  face of ill-understood cross-protocol interactions
  - critical path for addressing includes analysis of interaction with
  XHR, XHR2, CORS
  - following redirects in HTTP is optional for the client, therefore in
  principle a decision that a client-side spec can profile
  - concern about ability to use HTTP fully before 101 succeeds, and
  future extensibility
 
  Salvatore and Gabriel will bring this up later in the week with websec, 
  and we'll
 probably want to make it a discussion with Webappsec, too.
 
  Side note: Does WebRTC have related issues concerning multiple protocols 
  in a
 single-origin context?  Are there lessons to learn from them, or is the case
 sufficiently different that we need a specific analysis here?
 
  Thanks,





Re: HTTP, websockets, and redirects

2011-07-24 Thread Adam Barth
This issue was discussed on some mailing list a while back (I forget
which).  The consensus seemed to be that redirects are the source of a
large number of security vulnerabilities in HTTP and we'd like users
of the WebSocket API to handle them explicitly.

I'm not sure I understand your question regarding WebRTC, but the
general answer to that class of questions is that WebRTC relies, in
large part, on ICE to be secure against cross-protocol and voicehammer
attacks.

Adam


On Sun, Jul 24, 2011 at 6:52 AM, Thomas Roessler t...@w3.org wrote:
 The hybi WG is concerned about the following clause in the websocket API spec:

 "When the user agent validates the server's response during the 'establish a
 WebSocket connection' algorithm, if the status code received from the server
 is not 101 (e.g. it is a redirect), the user agent must fail the websocket
 connection."

 http://dev.w3.org/html5/websockets/

 Discussion with the WG chairs:

 - this looks like a conservative attempt to lock down redirects in the face 
 of ill-understood cross-protocol interactions
 - critical path for addressing includes analysis of interaction with XHR, 
 XHR2, CORS
 - following redirects in HTTP is optional for the client, therefore in 
 principle a decision that a client-side spec can profile
 - concern about ability to use HTTP fully before 101 succeeds, and future 
 extensibility

 Salvatore and Gabriel will bring this up later in the week with websec, and 
 we'll probably want to make it a discussion with Webappsec, too.

 Side note: Does WebRTC have related issues concerning multiple protocols in a 
 single-origin context?  Are there lessons to learn from them, or is the case 
 sufficiently different that we need a specific analysis here?

 Thanks,



Re: Frame embedding: One problem, three possible specs?

2011-07-07 Thread Adam Barth
My sense from talking with folks is that there isn't a lot of
enthusiasm for supporting this use case in CSP at the present time.
We're trying to concentrate on a core set of directives for the first
iteration.  If it helps reduce complexity, you might consider dropping
option (1) for the time being.
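
For readers comparing the three mechanisms: all of them act as response
headers set by the embedded resource. Roughly (the From-Origin value follows
my reading of the draft, and the CSP spelling is hypothetical, since that
deliverable was only proposed at this point):

  3. draft-gondrom-frame-options, standardizing existing practice:
       X-Frame-Options: SAMEORIGIN

  2. The From-Origin draft, restricting embedding to the same origin:
       From-Origin: same

  1. A hypothetical CSP spelling of the same policy:
       Content-Security-Policy: frame-ancestors 'self'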

Adam


On Thu, Jul 7, 2011 at 2:11 PM, Thomas Roessler t...@w3.org wrote:
 (Warning, this is cross-posted widely. One of the lists is the IETF websec 
 mailing list, to which the IETF NOTE WELL applies: 
 http://www.ietf.org/about/note-well.html)


 Folks,

 there appear to be at least three possible specifications addressing this 
 space, with similar but different designs:

 1. A proposed deliverable in the WebAppSec group to take up on 
 X-Frame-Options and express those in CSP:
  http://www.w3.org/2011/07/appsecwg-charter.html

 (We expect that this charter might go to the W3C AC for review as soon as 
 next week.)

 2. The From-Origin draft (aka Cross-Origin Resource Embedding Exclusion) 
 currently considered for publication as an FPWD in the Webapps WG:
  http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/0088.html

 This draft mentions integration into CSP as a possible path forward.

 3. draft-gondrom-frame-options, an individual I-D mentioned to websec:
  https://datatracker.ietf.org/doc/draft-gondrom-frame-options/
  http://www.ietf.org/mail-archive/web/websec/current/msg00388.html


 How do we go about it?  One path forward might be to just proceed as 
 currently planned and coordinate when webappsec starts working.

 Another path forward might be to see whether we can agree now on what forum 
 to take these things forward in (and what the coordination dance might look 
 like).

 Thoughts welcome.

 Regards,
 --
 Thomas Roessler, W3C  t...@w3.org  (@roessler)







Re: Mouse Lock

2011-06-20 Thread Adam Barth
So it sounds like we don't have a security model but we're hoping UA
implementors can dream one up by combining enough heuristics.

Adam


On Mon, Jun 20, 2011 at 9:07 AM, Vincent Scheib sch...@google.com wrote:
 A range of security methods have been discussed. Please read the thread in
 detail if this summary is too succinct:
 The Security concern is that of the user agent hiding the mouse and not
 letting it be used normally due to malicious code on a web site. Thus, user
 agents must address this issue. No other security issue has been raised.
 A User agent has a large number of options to gauge user intent, and may use
 a combination of these techniques to avoid user frustration: prompt user
 before lock, is page full screen, require user gesture, avoid immediate
 relock, only lock on a foreground page/tab in focus, per-domain permissions,
 installed application permissions, persistent instructional message to user
 on how to unlock, ESC key (or others) always unlocks.

 On Sat, Jun 18, 2011 at 11:38 PM, Adam Barth w...@adambarth.com wrote:

 I'm sorry that I didn't follow the earlier thread.  What is the
 security model for mouse lock?  (Please feel free to point me to a
 message in the archive if this has already been discussed.)

 Thanks,
 Adam


 On Thu, Jun 16, 2011 at 3:21 PM, Vincent Scheib sch...@google.com wrote:
  [Building on the Mouse Capture for Canvas
 
  thread: http://lists.w3.org/Archives/Public/public-webapps/2011JanMar/thread.html#msg437 ]
  I'm working on an implementation of mouse lock in Chrome, and would
  appreciate collaboration on refinement of the spec proposal. Hopefully
  webapps is willing to pick up this spec work? I'm up for helping write
  the
  draft.
  Some updates from Sirisian's Comment 12 on the w3 bug
  (http://www.w3.org/Bugs/Public/show_bug.cgi?id=9557#c12):
  - We shouldn't need events for success/failure to obtain lock, those can
  be
  callbacks similar to geolocation API:
  (http://dev.w3.org/geo/api/spec-source.html#geolocation_interface).
 
  My short summary is then:
  - 2 new methods on an element to enter and exit mouse lock. Two
  callbacks on
  the entering call provide notification of success or failure.
  - Mousemove event gains .deltaX .deltaY members, always valid, not just
  during mouse lock.
  - Elements have an event to detect when mouseLock is lost.
  Example
   x = document.getElementById("x");
   x.addEventListener("mousemove", mouseMove);
   x.addEventListener("mouseLockLost", mouseLockLost);
   x.lockMouse(
     function() { console.log("Locked."); },
     function() { console.log("Lock rejected."); } );
   function mouseMove(e) { console.log(e.deltaX + ", " + e.deltaY); }
   function mouseLockLost(e) { console.log("Lost lock."); }
 





Re: Mouse Lock

2011-06-20 Thread Adam Barth
On Mon, Jun 20, 2011 at 10:48 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Jun 20, 2011 at 10:18 AM, Adam Barth w...@adambarth.com wrote:
 So it sounds like we don't have a security model but we're hoping UA
 implementors can dream one up by combining enough heuristics.

 A model which I suggested privately, and which I believe others have
 suggested publicly, is this:

 1. While fullscreen is enabled, you can lock the mouse to the
 fullscreened element without a prompt or persistent message.  A
 temporary message may still be shown.  The lock is automatically
 released if the user exits fullscreen.

^^^ This part sounds solid.

 2. During a user-initiated click, you can lock the mouse to the target
 or an ancestor without a permissions prompt, but with a persistent
 message, either as an overlay or in the browser's chrome.

^^^ That also sounds reasonable.  There's some subtlety to make sure
the message is actually visible to the user, especially in desktop
situations where one window can overlay another.  It's probably also
useful to instruct the user how to release the lock.
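
Using the lockMouse()/callback shape proposed earlier in this thread, model
(2) might look like the sketch below; lockMouse itself is only a proposal at
this point, and the overlay element and message text are illustrative.

var game = document.getElementById("game");      // assumed game element
var notice = document.getElementById("notice");  // assumed overlay element

game.addEventListener("click", function () {
  // A user-initiated click: lock without a prompt, but keep a
  // persistent message visible for as long as the lock is held.
  game.lockMouse(
    function () { notice.textContent = "Mouse locked. Press Esc to release."; },
    function () { notice.textContent = "Mouse lock was refused."; });
});

game.addEventListener("mouseLockLost", function () {
  notice.textContent = "";
});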

 3. Otherwise, any attempt to lock the mouse triggers a permissions
 prompt, and while the lock is active a persistent message is shown.

^^^ This part sounds scary.  What's the use case for locking the mouse
without the user interacting with the page?

 These wouldn't be normative, of course, because different platforms
 may have different permissions models, but they seem like a good
 outline for balancing user safety with author convenience/lack of user
 annoyance.

Sure, it shouldn't be normative, but it's important to think through
the security model for features before adding them to the platform.

Adam



Re: Mouse Lock

2011-06-20 Thread Adam Barth
On Mon, Jun 20, 2011 at 3:30 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Jun 20, 2011 at 3:26 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 06/21/2011 01:08 AM, Tab Atkins Jr. wrote:
  On Mon, Jun 20, 2011 at 3:03 PM, Olli Pettay olli.pet...@helsinki.fi
   wrote:
 On 06/21/2011 12:25 AM, Tab Atkins Jr. wrote:
 The use-case is non-fullscreen games and similar, where you'd prefer
 to lock the mouse as soon as the user clicks into the game.  Minecraft
 is the first example that pops into my head that works like this -
 it's windowed, and mouselocks you as soon as you click at it.

 And how would the user unlock when some evil site locks the mouse?
 Could you give some concrete example about
  It's probably also useful to instruct the user how to release the
 lock.

 I'm assuming that the browser reserves some logical key (like Esc) for
 releasing things like this, and communicates this in the overlay
 message.

 And what if the web page moves focus to some browser window, so that ESC
 is fired there? Or what if the web page moves the window to be outside the
 screen so that user can't actually see the message how to
 unlock mouse?

 How is a webpage able to do either of those things?

window.focus()

Adam



Re: [indexeddb] IDBDatabase.setVersion non-nullable parameter has a default for null

2011-06-19 Thread Adam Barth
On Mon, Jun 13, 2011 at 9:40 AM, Adam Barth w...@adambarth.com wrote:
 On Mon, Jun 13, 2011 at 1:31 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Jun 13, 2011 at 12:15 AM, Adam Barth w...@adambarth.com wrote:
 On Sun, Jun 12, 2011 at 11:19 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, Jun 12, 2011 at 8:35 PM, Cameron McCormack c...@mcc.id.au wrote:
 Adam Barth:
  WebKit is looser in this regard.  We probably should change the
  default for new IDL, but it's a delicate task and I've been busy.  :(

 What about for old IDL?  Do you feel that you can make this change
 without breaking sites?  One of the “advantages” of specifying the
 looser approach is that it’s further down the “race to the bottom” hill,
 so if we are going to tend towards it eventually we may as well jump
 there now.

 I can't remember getting a single bug filed on Gecko's current
 behavior. There probably have been some which I've missed, but it's
 not a big enough problem that it's ever been discussed at mozilla as
 far as I can remember.

 Unfortunately, there's a bunch of WebKit-dominant content out there
 that folks are afraid to break.  We discussed this topic on some bug
 (which I might be able to dig up).  The consensus was that the value
 in tightening this for old APIs wasn't worth the compat risk (e.g., in
 mobile and in Mac applications such as Dashboard and Mail.app).

 For new APIs, of course, we can do the tighter things (which I agree
 is more aesthetic).  It mostly requires someone to go into the code
 generator and make it the default (and then to special-case all the
 existing APIs).  I'd like to do that, but it's a big job that needs to
 be done carefully and I've got other higher priority things to do, so
 it hasn't happened yet.

 If there is agreement that new APIs should throw for omitted
 non-optional parameters, then it seems clear that WebIDL should use
 that behavior.

 That leaves the work for safari (and possibly other webkit devs) to go
 through and mark parameters as [optional] in their IDL. Possibly also
 filing bugs for cases where you want the relevant spec to actually
 make the argument optional. I realize that this is a large amount of
 work, but this is exactly what we have asked in particular of
 microsoft in the past which has been in a similar situation of large
 divergence from the DOM specs, and large bodies of existing content
 which potentially can depend on IE specific behavior.

 I think folks are agreed that's the path we should follow.  My only
 concern is that we don't have anyone signed up to do the work on the
 WebKit side.

Just to update this thread, Mark Pilgrim has stepped forward to get
the ball rolling on this work, so WebKit is making progress on this
front.

Thanks,
Adam


 FWIW, I'd be happy to do the same to align Gecko behavior with specs
 when needed. For example I thought we were going to end up having to
 do that to make null coerce to null by default for DOMString
 arguments. It appears that in that case the WebIDL behavior changed to
 align with Gecko, which I think is unfortunate, and so it doesn't look
 like this change will have to happen (I used to argue otherwise in the
 past, but I've come around to the idea that aligning with JS behavior
 is more important, even when I'm not a fan of JS behavior).



Re: Mouse Lock

2011-06-19 Thread Adam Barth
I'm sorry that I didn't follow the earlier thread.  What is the
security model for mouse lock?  (Please feel free to point me to a
message in the archive if this has already been discussed.)

Thanks,
Adam


On Thu, Jun 16, 2011 at 3:21 PM, Vincent Scheib sch...@google.com wrote:
 [Building on the Mouse Capture for Canvas
 thread: http://lists.w3.org/Archives/Public/public-webapps/2011JanMar/thread.html#msg437 ]
 I'm working on an implementation of mouse lock in Chrome, and would
 appreciate collaboration on refinement of the spec proposal. Hopefully
 webapps is willing to pick up this spec work? I'm up for helping write the
 draft.
 Some updates from Sirisian's Comment 12 on the w3 bug
 (http://www.w3.org/Bugs/Public/show_bug.cgi?id=9557#c12):
 - We shouldn't need events for success/failure to obtain lock, those can be
 callbacks similar to geolocation API:
 (http://dev.w3.org/geo/api/spec-source.html#geolocation_interface).

 My short summary is then:
 - 2 new methods on an element to enter and exit mouse lock. Two callbacks on
 the entering call provide notification of success or failure.
 - Mousemove event gains .deltaX .deltaY members, always valid, not just
 during mouse lock.
 - Elements have an event to detect when mouseLock is lost.
 Example
 x = document.getElementById("x");
 x.addEventListener("mousemove", mouseMove);
 x.addEventListener("mouseLockLost", mouseLockLost);
 x.lockMouse(
   function() { console.log("Locked."); },
   function() { console.log("Lock rejected."); } );
 function mouseMove(e) { console.log(e.deltaX + ", " + e.deltaY); }
 function mouseLockLost(e) { console.log("Lost lock."); }




Re: [XHR][XHR2] Same-origin policy protection

2011-06-15 Thread Adam Barth
The server still needs to opt-in to allowing the web site to read the
response or you get into trouble with firewalls.  This functionality
is already available in every modern browser.
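
Concretely, the opt-in is CORS: the response has to name the requesting
origin before the page may read it. A minimal sketch (the endpoint URL is
hypothetical):

// An ordinary cross-origin XHR from https://caller.example.org:
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.example.com/data");
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();

// The read succeeds only if the server opted in with a header such as:
//   Access-Control-Allow-Origin: https://caller.example.org
// Without it, the response never reaches the page, which is what
// protects servers sitting behind a firewall.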

Adam


On Wed, Jun 15, 2011 at 10:15 AM, Charles Pritchard ch...@jumis.com wrote:
 There have been a few requests for an XHR which does not expose session data 
 to the target. I believe IE9 has an interface for this; I know it's been 
 requested on chromium bug list.



 On Jun 15, 2011, at 9:18 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 6/15/11 6:43 AM, David Bruant wrote:
 Could someone explain how running in a web browser justify such a
 difference? For instance, could someone explain a threat particular to
 cross-origin XHR in web browser?

 Off the top of my head:

 1)  XHR in the web browser sends the user's cookies, HTTP auth credentials, 
 etc. with the request.  Which means that if you're logged in to some site A, 
 and cross-site XHR to that site is allowed from some other other site B, 
 then B can access all the information you can access due to being logged in 
 to site A.

 2)  XHR in the web browser gives (at the moment, at least) sites that are 
 outside a firewall that your browser is behind the ability to make requests 
 to hosts that are behind the firewall.

 Item 2 is somewhat of an issue for installed apps as well, of course, but 
 installing an app is a trust decision.  I would imagine browsers could relax 
 some of these restrictions if a similar trust decision is explicitly made 
 for a non-locally-hosted app.

 Item 1 is pretty specific to a web browser; the only way to avoid that issue 
 is to run the app's XHRs in a special mode (effectively treating it as not 
 running in the browser at all).

 -Boris






Re: [indexeddb] IDBDatabase.setVersion non-nullable parameter has a default for null

2011-06-13 Thread Adam Barth
On Sun, Jun 12, 2011 at 11:19 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, Jun 12, 2011 at 8:35 PM, Cameron McCormack c...@mcc.id.au wrote:
 Adam Barth:
  WebKit is looser in this regard.  We probably should change the
  default for new IDL, but it's a delicate task and I've been busy.  :(

 What about for old IDL?  Do you feel that you can make this change
 without breaking sites?  One of the “advantages” of specifying the
 looser approach is that it’s further down the “race to the bottom” hill,
 so if we are going to tend towards it eventually we may as well jump
 there now.

 I can't remember getting a single bug filed on Gecko's current
 behavior. There probably have been some which I've missed, but it's
 not a big enough problem that it's ever been discussed at mozilla as
 far as I can remember.

Unfortunately, there's a bunch of WebKit-dominant content out there
that folks are afraid to break.  We discussed this topic on some bug
(which I might be able to dig up).  The consensus was that the value
in tightening this for old APIs wasn't worth the compat risk (e.g., in
mobile and in Mac applications such as Dashboard and Mail.app).

For new APIs, of course, we can do the tighter things (which I agree
is more aesthetic).  It mostly requires someone to go into the code
generator and make it the default (and then to special-case all the
existing APIs).  I'd like to do that, but it's a big job that needs to
be done carefully and I've got other higher priority things to do, so
it hasn't happened yet.

Adam


 We saw that happen with addEventListener.

 The reason we made the last argument optional for addEventListener
 wasn't that we had compatibility problems, but rather that it seemed
 like a good idea as the argument is almost always set to false, and
 being the last argument, making it optional works great.

 The fact that webkit already had this behavior was only discussed 
 tangentially.

 Jonas Sicking:
 This is why it surprises me that WebIDL specifies WebKit behavior as
 the compliant behavior, as Cameron seems to indicate.

 In the spec right now it’s listed as an open issue, and it was the
 WebKit behaviour that I was going to specify to resolve the issue this
 week.  (So it’s not what the spec currently says.)  This is what I
 mentioned in
 http://lists.w3.org/Archives/Public/public-script-coord/2010OctDec/0094.html
 although I didn’t get any pushback then.  I am happy to keep discussing
 it but I would like to settle on a solution soon.

 So I guess you are arguing for “throw if too few arguments are passed,
 ignore any extra arguments”.

 Yes. In fact, unless someone can show that the web depends on the
 currently specced behavior, I don't see a reason to change Gecko.

 When we have overloads like

  void f(in long x);
  void f(in long x, in long y, in long z);

 we’d need to decide whether f(1, 2) throws or is considered a call to
 the first f with an extra, ignored argument.  The former seems more
 consistent to me.

 I agree, throwing seems better as it's more likely a bug on the caller's
 part. Or at the very least it's ambiguous which behavior they want, which
 means it's better to throw.

 / Jonas
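
In plain JavaScript terms, the behavior being converged on above ("throw if
too few arguments are passed, ignore any extra arguments") amounts to the
following sketch for an operation declared as void f(in long x, in long y):

function f(x, y) {
  if (arguments.length < 2) {
    // Gecko-style, and the proposed default for new APIs:
    throw new TypeError("Not enough arguments");
  }
  // Extra arguments are silently ignored: f(1, 2, 3) behaves as f(1, 2).
}

f(1, 2);     // ok
f(1, 2, 3);  // ok, third argument ignored
f(1);        // throws TypeError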




Re: [indexeddb] IDBDatabase.setVersion non-nullable parameter has a default for null

2011-06-13 Thread Adam Barth
On Mon, Jun 13, 2011 at 12:15 AM, Adam Barth w...@adambarth.com wrote:
 On Sun, Jun 12, 2011 at 11:19 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, Jun 12, 2011 at 8:35 PM, Cameron McCormack c...@mcc.id.au wrote:
 Adam Barth:
  WebKit is looser in this regard.  We probably should change the
  default for new IDL, but it's a delicate task and I've been busy.  :(

 What about for old IDL?  Do you feel that you can make this change
 without breaking sites?  One of the “advantages” of specifying the
 looser approach is that it’s further down the “race to the bottom” hill,
 so if we are going to tend towards it eventually we may as well jump
 there now.

 I can't remember getting a single bug filed on Gecko's current
 behavior. There probably have been some which I've missed, but it's
 not a big enough problem that it's ever been discussed at mozilla as
 far as I can remember.

 Unfortunately, there's a bunch of WebKit-dominant content out there
 that folks are afraid to break.  We discussed this topic on some bug
 (which I might be able to dig up).  The consensus was that the value
 in tightening this for old APIs wasn't worth the compat risk (e.g., in
 mobile and in Mac applications such as Dashboard and Mail.app).

 For new APIs, of course, we can do the tighter things (which I agree
 is more aesthetic).  It mostly requires someone to go into the code
 generator and make it the default (and then to special-case all the
 existing APIs).  I'd like to do that, but it's a big job that needs to
 be done carefully and I've got other higher priority things to do, so
 it hasn't happened yet.

Here's the bug:

https://bugs.webkit.org/show_bug.cgi?id=21257

Sample opinion from one of the project leaders:

[[
Like other such sweeping changes, it’s highly likely there is some
content that unwittingly depends on our old behavior. All a programmer has
to do is forget one argument to a function, and the program gets an
exception whereas before it would keep running. If it’s code or a code
path that was never tested with engines other than WebKit then this
could easily be overlooked.

Thus this change is almost certain to break Safari-only code paths
or even WebKit-only ones on at least some websites, Dashboard
widgets, Mac OS X applications, iPhone applications, and iTunes Store
content. Other than that, I’m all for it!
]]

Adam


 We saw that happen with addEventListener.

 The reason we made the last argument optional for addEventListener
 wasn't that we had compatibility problems, but rather that it seemed
 like a good idea as the argument is almost always set to false, and
 being the last argument, making it optional works great.

 The fact that webkit already had this behavior was only discussed 
 tangentially.

 Jonas Sicking:
 This is why it surprises me that WebIDL specifies WebKit behavior as
 the compliant behavior, as Cameron seems to indicate.

 In the spec right now it’s listed as an open issue, and it was the
 WebKit behaviour that I was going to specify to resolve the issue this
 week.  (So it’s not what the spec currently says.)  This is what I
 mentioned in
 http://lists.w3.org/Archives/Public/public-script-coord/2010OctDec/0094.html
 although I didn’t get any pushback then.  I am happy to keep discussing
 it but I would like to settle on a solution soon.

 So I guess you are arguing for “throw if too few arguments are passed,
 ignore any extra arguments”.

 Yes. In fact, unless someone can show that the web depends on the
 currently specced behavior, I don't see a reason to change Gecko.

 When we have overloads like

  void f(in long x);
  void f(in long x, in long y, in long z);

 we’d need to decide whether f(1, 2) throws or is considered a call to
 the first f with an extra, ignored argument.  The former seems more
 consistent to me.

 I agree, throwing seems better as it's more likely a bug on the caller's
 part. Or at the very least it's ambiguous which behavior they want, which
 means it's better to throw.

 / Jonas





Re: [indexeddb] IDBDatabase.setVersion non-nullable parameter has a default for null

2011-06-13 Thread Adam Barth
On Mon, Jun 13, 2011 at 1:31 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Jun 13, 2011 at 12:15 AM, Adam Barth w...@adambarth.com wrote:
 On Sun, Jun 12, 2011 at 11:19 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, Jun 12, 2011 at 8:35 PM, Cameron McCormack c...@mcc.id.au wrote:
 Adam Barth:
  WebKit is looser in this regard.  We probably should change the
  default for new IDL, but it's a delicate task and I've been busy.  :(

 What about for old IDL?  Do you feel that you can make this change
 without breaking sites?  One of the “advantages” of specifying the
 looser approach is that it’s further down the “race to the bottom” hill,
 so if we are going to tend towards it eventually we may as well jump
 there now.

 I can't remember getting a single bug filed on Gecko's current
 behavior. There probably have been some which I've missed, but it's
 not a big enough problem that it's ever been discussed at mozilla as
 far as I can remember.

 Unfortunately, there's a bunch of WebKit-dominant content out there
 that folks are afraid to break.  We discussed this topic on some bug
 (which I might be able to dig up).  The consensus was that the value
 in tightening this for old APIs wasn't worth the compat risk (e.g., in
 mobile and in Mac applications such as Dashboard and Mail.app).

 For new APIs, of course, we can do the tighter things (which I agree
 is more aesthetic).  It mostly requires someone to go into the code
 generator and make it the default (and then to special-case all the
 existing APIs).  I'd like to do that, but it's a big job that needs to
 be done carefully and I've got other higher priority things to do, so
 it hasn't happened yet.

 If there is agreement that new APIs should throw for omitted
 non-optional parameters, then it seems clear that WebIDL should use
 that behavior.

 That leaves the work for safari (and possibly other webkit devs) to go
 through and mark parameters as [optional] in their IDL. Possibly also
 filing bugs for cases where you want the relevant spec to actually
 make the argument optional. I realize that this is a large amount of
 work, but this is exactly what we have asked in particular of
 microsoft in the past which has been in a similar situation of large
 divergence from the DOM specs, and large bodies of existing content
 which potentially can depend on IE specific behavior.

I think folks are agreed that's the path we should follow.  My only
concern is that we don't have anyone signed up to do the work on the
WebKit side.

 FWIW, I'd be happy to do the same to align Gecko behavior with specs
 when needed. For example I thought we were going to end up having to
 do that to make null coerce to null by default for DOMString
 arguments. It appears that in that case the WebIDL behavior changed to
 align with Gecko, which I think is unfortunate, and so it doesn't look
 like this change will have to happen (I used to argue otherwise in the
 past, but I've come around to the idea that aligning with JS behavior
 is more important, even when I'm not a fan of JS behavior).

Thanks,
Adam



Re: Request for feedback: DOMCrypt API proposal

2011-06-02 Thread Adam Barth
Why only SHA256?  Presumably sha1 and md5 are worth exposing as well.
Also, pk and sym appear to be algorithm agnostic but hash isn't.  In
addition to hashing, it would be valuable to expose HMAC modes of the
hash functions.
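
One algorithm-agnostic shape, analogous to how pk and sym take an algorithm
property, would pass the algorithm name as a parameter. The sketch below is
purely hypothetical and not part of the DOMCrypt draft:

var data = "message to digest";
var key = "shared secret";

// Hypothetical shape, for illustration only.
window.cipher.hash.digest("SHA-256", data, function (digest) {
  console.log("digest: " + digest);
});
window.cipher.hash.hmac("SHA-256", key, data, function (mac) {
  console.log("hmac: " + mac);
});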

In the pk API, there doesn't seem to be any way to install a
public/private keypair from another location (e.g., the network).
Also, the encrypt and decrypt functions don't let me specify which
public key I want to use.  Consider introducing a keyID concept to let
me refer to keypairs.

What is a cipherAddressbook?

When I use generateKeypair, how long does the keypair persist?  Are
there privacy implications?

Finally, this all should be on the crypto object, not on a new cipher object.

Adam


On Wed, Jun 1, 2011 at 3:54 PM, David Dahl dd...@mozilla.com wrote:
 Hello public-webapps members,

 (I wanted to post this proposed draft spec for the DOMCrypt API ( 
 https://wiki.mozilla.org/Privacy/Features/DOMCryptAPISpec/Latest ) to this 
 list - if there is a more fitting mailing list, please let me know)

 I recently posted this draft spec for a crypto API for browsers to the whatwg 
 (see: 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-May/031741.html) and 
 wanted to get feedback from W3C as well.

 Privacy and user control on the web is of utter importance. Tracking, 
 unauthorized user data aggregation and personal information breaches are 
 becoming so commonplace you see a new headline almost daily. (It seems).

 We need crypto APIs in browsers to allow developers to create more secure 
 communications tools and web applications that don’t have to implicitly trust 
 the server, among other use cases.

 The DOMCrypt API is a good start, and more feedback and discussion will 
 really help round out how all of this should work – as well as how it can 
 work in any browser that will support such an API.

 This API will provide each web browser window with a ‘cipher’ property[1] 
 that facilitates:

    asymmetric encryption key pair generation
    public key encryption
    public key decryption
    symmetric encryption
    signature generation
    signature verification
    hashing
    easy public key discovery via meta tags or an ‘addressbookentry’ tag

 [1] There is a bit of discussion around adding this API to window.navigator 
 or consolidation within window.crypto

 I have created a Firefox extension that implements most of the above, and am 
 working on an experimental patch that integrates this API into Firefox.

 The project originated in an extension I wrote, the home page is here: 
 http://domcrypt.org

 The source code for the extension is here: 
 https://github.com/daviddahl/domcrypt

 The Mozilla bugs are here:

 https://bugzilla.mozilla.org/show_bug.cgi?id=649154
 https://bugzilla.mozilla.org/show_bug.cgi?id=657432

 Firefox feature wiki page: 
 https://wiki.mozilla.org/Privacy/Features/DOMCryptAPI

 You can test the API by installing the extension hosted at domcrypt.org, and 
 going to http://domcrypt.org

 A recent blog post updating all of this is posted here: 
 http://monocleglobe.wordpress.com/2011/06/01/domcrypt-update-2011-06-01/

 The API:

 window.cipher = {
  // Public Key API
  pk: {
   set algorithm(algorithm){ },
   get algorithm(){ },

  // Generate a keypair and then execute the callback function
  generateKeypair: function ( function callback( aPublicKey ) { } ) {  },

  // encrypt a plainText
  encrypt: function ( plainText, function callback (cipherMessageObject) { } ) {  },

  // decrypt a cipherMessage
  decrypt: function ( cipherMessageObject, function callback ( plainText ) { } 
 ) {  },

  // sign a message
  sign: function ( plainText, function callback ( signature ) { } ) {  },

  // verify a signature
  verify: function ( signature, plainText, function callback ( boolean ) { } ) 
 {  },

  // get the JSON cipherAddressbook
  get addressbook() {},

  // make changes to the addressbook
  saveAddressbook: function (JSONObject, function callback ( addressbook ) { 
 }) {  }
  },

  // Symmetric Crypto API
  sym: {
  get algorithm(),
  set algorithm(algorithm),

  // create a new symmetric key
  generateKey: function (function callback ( key ){ }) {  },

  // encrypt some data
  encrypt: function (plainText, key, function callback( cipherText ){ }) {  },

  // decrypt some data
  decrypt: function (cipherText, key, function callback( plainText ) { }) {  },
  },

  // hashing
  hash: {
    SHA256: function (function callback (hash){}) {  }
  }
 }

 Your feedback and criticism will be invaluable.

 Best regards,

 David Dahl

 Firefox Engineer, Mozilla Corp.






Re: Request for feedback: DOMCrypt API proposal

2011-06-02 Thread Adam Barth
This spec is also incredibly vague:

https://wiki.mozilla.org/Privacy/Features/DOMCryptAPISpec/Latest

There's no description of what these functions do.  There's no way
this spec can be used to create a second interoperable implementation.

Adam


On Thu, Jun 2, 2011 at 4:19 PM, Adam Barth w...@adambarth.com wrote:
 Why only SHA256?  Presumably sha1 and md5 are worth exposing as well.
 Also, pk and sym appear to be algorithm agnostic but hash isn't.  In
 addition to hashing, it would be valuable to expose HMAC modes of the
 hash functions.

 In the pk API, there doesn't seem to be any way to install a
 public/private keypair from another location (e.g., the network).
 Also, the encrypt and decrypt functions don't let me specify which
 public key I want to use.  Consider introducing a keyID concept to let
 me refer to keypairs.

 What is a cipherAddressbook?

 When I use generateKeypair, how long does the keypair persist?  Are
 there privacy implications?

 Finally, this all should be on the crypto object, not on a new cipher object.

 Adam


 On Wed, Jun 1, 2011 at 3:54 PM, David Dahl dd...@mozilla.com wrote:
 Hello public-webapps members,

 (I wanted to post this proposed draft spec for the DOMCrypt API ( 
 https://wiki.mozilla.org/Privacy/Features/DOMCryptAPISpec/Latest ) to this 
 list - if there is a more fitting mailing list, please let me know)

 I recently posted this draft spec for a crypto API for browsers to the 
 whatwg (see: 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-May/031741.html) 
 and wanted to get feedback from W3C as well.

 Privacy and user control on the web is of utter importance. Tracking, 
 unauthorized user data aggregation and personal information breaches are 
 becoming so commonplace you see a new headline almost daily. (It seems).

 We need crypto APIs in browsers to allow developers to create more secure 
 communications tools and web applications that don’t have to implicitly 
 trust the server, among other use cases.

 The DOMCrypt API is a good start, and more feedback and discussion will 
 really help round out how all of this should work – as well as how it can 
 work in any browser that will support such an API.

 This API will provide each web browser window with a ‘cipher’ property[1] 
 that facilitates:

    asymmetric encryption key pair generation
    public key encryption
    public key decryption
    symmetric encryption
    signature generation
    signature verification
    hashing
    easy public key discovery via meta tags or an ‘addressbookentry’ tag

 [1] There is a bit of discussion around adding this API to window.navigator 
 or consolidation within window.crypto

 I have created a Firefox extension that implements most of the above, and am 
 working on an experimental patch that integrates this API into Firefox.

 The project originated in an extension I wrote, the home page is here: 
 http://domcrypt.org

 The source code for the extension is here: 
 https://github.com/daviddahl/domcrypt

 The Mozilla bugs are here:

 https://bugzilla.mozilla.org/show_bug.cgi?id=649154
 https://bugzilla.mozilla.org/show_bug.cgi?id=657432

 Firefox feature wiki page: 
 https://wiki.mozilla.org/Privacy/Features/DOMCryptAPI

 You can test the API by installing the extension hosted at domcrypt.org, and 
 going to http://domcrypt.org

 A recent blog post updating all of this is posted here: 
 http://monocleglobe.wordpress.com/2011/06/01/domcrypt-update-2011-06-01/

 The API:

 window.cipher = {
  // Public Key API
  pk: {
   set algorithm(algorithm){ },
   get algorithm(){ },

  // Generate a keypair and then execute the callback function
  generateKeypair: function ( function callback( aPublicKey ) { } ) {  },

  // encrypt a plainText
  encrypt: function ( plainText, function callback (cipherMessageObject) { } ) {  },

  // decrypt a cipherMessage
  decrypt: function ( cipherMessageObject, function callback ( plainText ) { 
 } ) {  },

  // sign a message
  sign: function ( plainText, function callback ( signature ) { } ) {  },

  // verify a signature
  verify: function ( signature, plainText, function callback ( boolean ) { } 
 ) {  },

  // get the JSON cipherAddressbook
  get addressbook() {},

  // make changes to the addressbook
 saveAddressbook: function (JSONObject, function callback ( addressbook ) { 
 }) {  }
  },

  // Symmetric Crypto API
  sym: {
  get algorithm(),
  set algorithm(algorithm),

  // create a new symmetric key
  generateKey: function (function callback ( key ){ }) {  },

  // encrypt some data
  encrypt: function (plainText, key, function callback( cipherText ){ }) {  },

  // decrypt some data
  decrypt: function (cipherText, key, function callback( plainText ) { }) {  
 },
  },

  // hashing
  hash: {
    SHA256: function (function callback (hash){}) {  }
  }
 }

 Your feedback and criticism will be invaluable.

 Best regards,

 David Dahl

 Firefox Engineer, Mozilla Corp.







Re: Request for feedback: DOMCrypt API proposal

2011-06-02 Thread Adam Barth
On Thu, Jun 2, 2011 at 4:46 PM, David Dahl dd...@mozilla.com wrote:
 - Original Message -
 From: Adam Barth w...@adambarth.com
 To: David Dahl dd...@mozilla.com
 Cc: public-webapps@w3.org
 Sent: Thursday, June 2, 2011 6:19:24 PM
 Subject: Re: Request for feedback: DOMCrypt API proposal

 Why only SHA256?  Presumably sha1 and md5 are worth exposing as well.
  Also, pk and sym appear to be algorithm agnostic but hash isn't.  In
  addition to hashing, it would be valuable to expose HMAC modes of the
  hash functions.

 hash should probably have SHA512 as well, also, I just added an hmac API to 
 the spec.

Really, the API should be algorithm agnostic.  We can discuss
separately which algorithms to provide.

 In the pk API, there doesn't seem to be any way to install a
  public/private keypair from another location (e.g., the network).

 No, there is not. Can you suggest what that API would look like? Importing 
 keys should be allowed.

Something like

crypto.pk.importKey(keyBlob)

where keyBlob is something like a string containing the keypair in
something like PEM format.

 Also, the encrypt and decrypt functions don't let me specify which
  public key I want to use.  Consider introducing a keyID concept to let
  me refer to keypairs.

 Hmmm, I updated the wiki today as I forgot to include the pubKey on 
 encrypt(). There should be a setter and getter for specifying the keypair to 
 use.

Decrypt needs to take a keypair too (or at least a keyid).  As the
caller of the API, you want control over which of your secrets are
used to decrypt the message!  Otherwise, you can get in big trouble if
you accidentally decrypt with a powerful key.
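
Putting those two suggestions together, the calls might look like the sketch
below; importKey, the keyID it yields, and the PEM blob are all hypothetical:

var pem = "...a PEM-encoded keypair obtained from the network...";

crypto.pk.importKey(pem, function (keyID) {
  crypto.pk.encrypt(keyID, "hello", function (cipherMessage) {
    // decrypt() explicitly names which of the caller's keys to use,
    // so a page never decrypts with a more powerful key by accident.
    crypto.pk.decrypt(keyID, cipherMessage, function (plainText) {
      console.log(plainText);  // "hello"
    });
  });
});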

 What is a cipherAddressbook ?

 It is an object literal that you store discovered public keys in - which are 
 referred to as addressbook entries. The Addressbook bits of this API will 
 have to wait for their own spec I think. I was trying to allow for key 
 discovery and storage in the simplest way possible: a custom tag or meta tag 
 is published on your blog and your contacts are alerted by the browser during 
 a visit to the page. The user can then store that 'addressbook entry' (or 
 key) for later use.

Oh man.  I'd remove this from the spec.  That's a whole other can of
worms.  This API should just be the nuts and bolts of the crypto.

 When I use generateKeypair, how long does the keypair persist?  Are
  there privacy implications?
 There is nothing in the spec right now. I should probably use a standard cert 
 type that declares validity dates.

This is a very important question.  Also, what sites are allowed to
use the generated keypair?  One hopes just the origin that generated
it.  You could also ask for something tighter so that some random XSS
in one page won't completely compromise all my keys.

You really need to spell all this stuff out in the spec.  What you
have now just completely lacks any details about what these functions
actually do.

 Finally, this all should be on the crypto object, not on a new cipher object.

 More and more people are saying that.

Maybe you should listen to them?

On Thu, Jun 2, 2011 at 4:47 PM, David Dahl dd...@mozilla.com wrote:
 This spec is also incredibly vague:

 https://wiki.mozilla.org/Privacy/Features/DOMCryptAPISpec/Latest

 There's no description of what these functions do.  There's no way
  this spec can be used to create a second interoperable implementation.

 I really need to change the format to WebIDL or something along those lines.

You should do that, of course, but that's just the first step.  You
need to actually explain what these functions do in a level of detail
such that someone else could implement them in exactly the same way as
you have without looking at your implementation.  That's what it means
to be a standard for the open web.

Adam


 On Wed, Jun 1, 2011 at 3:54 PM, David Dahl dd...@mozilla.com wrote:
 Hello public-webapps members,

 (I wanted to post this proposed draft spec for the DOMCrypt API ( 
 https://wiki.mozilla.org/Privacy/Features/DOMCryptAPISpec/Latest ) to this 
 list - if there is a more fitting mailing list, please let me know)

 I recently posted this draft spec for a crypto API for browsers to the 
 whatwg (see: 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-May/031741.html) 
 and wanted to get feedback from W3C as well.

 Privacy and user control on the web is of utter importance. Tracking, 
 unauthorized user data aggregation and personal information breaches are 
 becoming so commonplace you see a new headline almost daily. (It seems).

 We need crypto APIs in browsers to allow developers to create more secure 
 communications tools and web applications that don’t have to implicitly 
 trust the server, among other use cases.

 The DOMCrypt API is a good start, and more feedback and discussion will 
 really help round out how all of this should work – as well as how it can 
 work in any browser that will support such an API.

 This API will provide

Re: Suggestion (redirection)

2011-04-15 Thread Adam Barth
Feedback about the WebSocket protocol should be sent to the IETF HyBi
working group:

https://www.ietf.org/mailman/listinfo/hybi

Adam


On Wed, Apr 13, 2011 at 7:54 AM, Everton Sales
everton.sales.bra...@gmail.com wrote:
 I'm implementing WebSocket on my company's servers, and would like an
 opcode that means redirection; in my case I set the code %x6 for
 this function.






Re: [websockets] What needs to be done before the spec is LC ready?

2011-04-05 Thread Adam Barth
There are also potentially protocol changes that will cause us to need
to fix things at the API layer.  For example, if the IETF introduces
redirects into the protocol, we'll likely need to account for them at
the API layer:

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-March/031070.html

Adam


On Tue, Apr 5, 2011 at 10:15 AM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 5 Apr 2011, Arthur Barstow wrote:

 What needs to be done before the WebSocket API is LC ready?

 I'm currently waiting for a number of editorial changes to the protocol
 specification to provide hooks for the API specification so that I can
 update the API apecification to work with the new protocol.


 3. The definition of absolute url makes https:foo not an absolute url, since
 its behavior depends on whether the base is https: or not. Is that desired?
 ...
    http://www.w3.org/Bugs/Public/show_bug.cgi?id=10213

 This particular bug is blocked on there being a specification that defines
 how URLs work. Last I heard, Adam was going to be writing it. I would also
 be unblocked if the IRI work at the IETF were to actually happen (it was
 supposed to happen about 2 years ago now). There was a also a recent HTML
 WG decision to put the requirements into the HTML spec. The decision
 starts with text that is known to be broken and so also requires editing
 work; however, it's possible that I'll end up updating that before Adam's
 work is done, in which case I'll use that instead.

 Either way this particular bug isn't likely to be fixed in a hurry. In
 practice it's not a huge blocker; in fact I'm not sure it's a blocker at
 all, as it is possible that the WebSocket protocol spec now defines how
 this issue is handled anyway, in which case the bug won't actually block
 the API spec. I'll find out when the edits to the protocol spec are done
 and I get to update the API spec for the new protocol spec.

 --
 Ian Hickson               U+1047E                )\._.,--,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




Re: SearchBox API

2011-03-21 Thread Adam Barth
On Mon, Mar 21, 2011 at 11:16 AM, Edward Lee edi...@mozilla.com wrote:
 enables instant style interaction between the user agent's search
 Assuming the user agent automatically loads a url that is triggered by
 a user key stroke, e.g., typing "g" results in
 "http://www.google.com/", the instant-style interaction is almost
 there already for certain urls. These instant-style urls would include
 what the user typed -- perhaps as a query parameter.

 For example, a user agent might request these pages as the user types:
 http://www.google.com/search?q=g
 http://www.google.com/search?q=go
 http://www.google.com/search?q=goo

 Here, the results page shows the new query and updated results on
 every user keystroke.

 These instant-style urls can also avoid refetching and rerendering the
 whole page if the user's input shows up in the #fragment and the page
 detects onHashChange.

That's true, but you can only transmit one event that way.  In this
design, you've chosen to map the change event to hashchange.  How
should the user agent communicate that the user is done typing (i.e.,
the submit event, which triggers when the user presses the enter key)?
Similarly, the communication in that approach is unidirectional, which
means the page can't suggest search completions.
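
A sketch of the fragment-based approach makes both halves of that limitation
visible; renderResults is an assumed page-specific function:

// The hash is the only inbound signal...
window.addEventListener("hashchange", function () {
  var query = decodeURIComponent(location.hash.slice(1));
  renderResults(query);
});

// ...and there is no outbound channel at all: the page cannot tell
// "still typing" from "pressed enter", and it has no way to send
// suggested completions back to the browser's search field.
function renderResults(query) {
  console.log("showing results for " + query);
}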

Adam



Re: SearchBox API

2011-03-20 Thread Adam Barth
[Re-sending to the correct list.]

On Sun, Mar 20, 2011 at 2:34 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 03/20/2011 01:36 AM, Tony Gentilcore wrote:
 Back in October I proposed the SearchBox API to the whatwg [1]. It
 enables instant style interaction between the user agent's search
 box and the default search provider's results page.

 When I tried instant search on Chrome, it did something only when
 I was typing an url. It preloaded possibly right url before
 I pressed enter. It didn't seem to utilize the coordinate
 information of SearchBox API at all. (Perhaps I wasn't testing it correctly)
 A browser could certainly preload pages similarly even
 without the API.

The instant search feature has a bunch of different components.  One
aspect is URL preloading, which happens when the browser thinks you're
typing something navigational (like a URL) into the omnibox and is
not related to the SearchBox API.  Another aspect is what happens when
the browser thinks you're typing something search-like (like potato)
into the omnibox.  In that case, the browser displays a search engine
results page.

 So, why does the search page need any data?

The SearchBox API has a two-way flow of information between the search
engine results page (SERP) and the browser's search field.  (In
Chrome's case, that's the omnibox, but it would work just as sensibly
for browsers with a dedicated search box.)  Essentially, the browser
tells the SERP various information about what the user has typed in
the search field (much like the web site would learn if the user typed
into a text input field in the web site) and the SERP tells the
browser some suggested completions for what the user has typed so far
(e.g., so the browser can display those suggestions to the user).

Additionally, the browser can tell the SERP about the geometry of the
search field (if it overlaps the SERP), which lets the SERP move its
UI out from underneath the search field, if desired.
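
From the SERP's side, the two-way flow looks roughly like the sketch below.
The names follow the description at dev.chromium.org/searchbox but should be
treated as illustrative rather than normative; the three page-side functions
are assumed:

function updateResults(q) { console.log("results for " + q); }
function commitQuery(q) { console.log("committed " + q); }
function moveUiBelow(x, y, w, h) { console.log("occluded: " + w + "x" + h); }

var box = window.chrome && window.chrome.searchBox;
if (box) {
  box.onchange = function () {
    updateResults(box.value);  // the user typed; refresh the results
    // ...and send completions back for the browser's dropdown:
    box.setSuggestions({ suggestions: [box.value + " recipes"] });
  };
  box.onsubmit = function () {
    commitQuery(box.value);    // the user pressed enter
  };
  box.onresize = function () {
    // Move the page's UI out from under the browser's search field.
    moveUiBelow(box.x, box.y, box.width, box.height);
  };
}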

 Couldn't browser interact with the web search in the
 background and show (and possibly preload) results the
 way it wants to. That way there wouldn't be an API which
 fits in to only one kind of UI.

I wasn't involved in the design, but I suspect there are latency and
synchronization challenges with that approach.  Most modern browsers
use that approach for showing search suggestions in their search
fields today, but with this UI, it's important to synchronize the
browser's search field with the SERP.  Using JavaScript events to
communicate removes some of the network latency.

 I think I'm missing some of the reasoning for the API.
 Could you perhaps clarify why Google ended up with such
 an API?

As a general principle, Chrome shouldn't have any special
integrations with google.com.  For example, bing.com should be able to
use any Chrome feature, and other browsers, such as Safari and
Firefox, should be able to use any google.com feature.  Now, the
project doesn't always live up to that principle, but that's the
reasoning behind implementing and specifying a general-purpose API.

 Also, would be great to see some examples where all of
 features of the API are being used.

My understanding is that Google's SERP uses all the features of the
API.  Tony designed the API in coordination with the folks who work on
Google's SERP.  For example, if you enable the instant feature in
Chrome and type potato in the omnibox, you should see similar search
suggestions in the autocomplete dropdown as you'd see if you typed
potato into the google.com search box (assuming you have Google set
as your default search provider).  Similarly, if you type a character,
the SERP should react immediately to the change event instead of
waiting for network latency.  Finally, you'll notice that the
autocomplete dropdown does not overlap with the search results because
of the geometry information provided by the SearchBox API.

Adam



Re: [public-webapps] SearchBox API

2011-03-20 Thread Adam Barth
On Sun, Mar 20, 2011 at 5:26 PM, Olli Pettay olli.pet...@helsinki.fi wrote:
 On 03/21/2011 01:23 AM, Adam Barth wrote:
  On Sun, Mar 20, 2011 at 2:34 PM, Olli Pettay olli.pet...@helsinki.fi
  wrote:
 On 03/20/2011 01:36 AM, Tony Gentilcore wrote:
 Back in October I proposed the SearchBox API to the whatwg [1]. It
 enables instant style interaction between the user agent's search
 box and the default search provider's results page.

 When I tried instant search on Chrome, it did something only when
 I was typing an url. It preloaded possibly right url before
 I pressed enter. It didn't seem to utilize the coordinate
 information of SearchBox API at all. (Perhaps I wasn't testing it
 correctly)
 A browser could certainly preload pages similarly even
 without the API.

 The instant search feature has a bunch of different components.  One
 aspect is URL preloading, which happens when the browser thinks you're
 typing something navigational (like a URL) into the omnibox and is
 not related to the SearchBox API.  Another aspect is what happens when
  the browser thinks you're typing something search-like (like potato)
 into the omnibox.  In that case, the browser displays a search engine
 results page.

 So, why does the search page need any data?

 The SearchBox API has a two-way flow of information between the search
 engine results page (SERP) and the browser's search field.  (In
 Chrome's case, that's the omnibox, but it would work just as sensibly
 for browsers with a dedicated search box.)  Essentially, the browser
 tells the SERP various information about what the user has typed in
 the search field (much like the web site would learn if the user typed
 into a text input field in the web site) and the SERP tells the
 browser some suggested completions for what the user has typed so far
 (e.g., so the browser can display those suggestions to the user).

 Additionally, the browser can tell the SERP about the geometry of the
 search field (if it overlaps the SERP), which lets the SERP move its
 UI out from underneath the search field, if desired.

 Couldn't browser interact with the web search in the
 background and show (and possibly preload) results the
 way it wants to. That way there wouldn't be an API which
 fits in to only one kind of UI.

 I wasn't involved in the design, but I suspect there are latency and
 synchronization challenges with that approach.  Most modern browsers
 use that approach for showing search suggestions in their search
 fields today, but with this UI, it's important to synchronize the
 browser's search field with the SERP.  Using JavaScript events to
 communicate removes some of the network latency.

 One of the problems I have with this API is that
 either the browser implements the UI the API expects
 (a rectangular dropdown list) or it just doesn't support the API.

AFAIK, every user agent occludes the content area with a rectangular
(or roughly rectangular) region as part of the search field
interaction.  If you've got examples of non-rectangular occlusion, I
suspect Tony would be willing to update the API to support other
geometries.

Of course, you could implement the API to always report no occlusion
and still make use of the other features.
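
For what it's worth, the SERP side of the geometry handling can be
quite small.  A sketch (again using the names from the
http://dev.chromium.org/searchbox draft, which may be out of date, and
a hypothetical 'results' element):

  var box = window.chrome && window.chrome.searchBox;
  if (box) {
    box.onresize = function() {
      // Push the results below the rectangle the browser reports as
      // occluded.  A browser that never occludes the page could simply
      // report a zero-sized rectangle here.
      document.getElementById('results').style.marginTop =
          (box.y + box.height) + 'px';
    };
  }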

 I think I'm missing some of the reasoning for the API.
 Could you perhaps clarify why Google ended up with such an
 API?

 As a general principle, Chrome shouldn't have any special
 integrations with google.com.  For example, bing.com should be able to
 use any Chrome feature, and other browsers, such as Safari and
 Firefox, should be able to use any google.com feature.  Now, the
 project doesn't always live up to that principle, but that's the
 reasoning behind implementing and specifying a general-purpose API.

 Sure, but that still isn't the reasoning for the API design.
 (Btw, I sure hope the API is still somehow prefixed.)

Perhaps I misunderstood your question.  As described on
http://dev.chromium.org/searchbox, the API is exposed on the
window.chrome object, but likely a better long-term place for the API
is on window.navigator.  So, yes, it is currently vendor-prefixed.

 Also, it would be great to see some examples where all of
 the features of the API are being used.

 My understanding is that Google's SERP uses all the features of the
 API.  Tony designed the API in coordination with the folks who work on
 Google's SERP.  For example, if you enable the instant feature in
 Chrome and type "potato" in the omnibox, you should see similar search
 suggestions in the autocomplete dropdown as you'd see if you typed
 "potato" into the google.com search box (assuming you have Google set
 as your default search provider).

 The only difference I can see when enabling instant search in Chrome and
 typing "potato" is that the current web page gets dimmed somehow.
 The web page is not updated in any way (it doesn't matter if the current
 web page is the default search or some other page).
 The dropdown list under the omnibox contains exactly the same entries

Re: Cross-Origin Resource Embedding Restrictions

2011-03-01 Thread Adam Barth
On Mon, Feb 28, 2011 at 11:57 PM, Maciej Stachowiak m...@apple.com wrote:
 For what it's worth, I think this is a useful draft and a useful technology. 
 Hotlinking prevention is of considerable interest to Web developers, and 
 doing it via server-side Referer checks is inconvenient and error-prone. I 
 hope we can fit it into Web Apps WG, or if not, find another good home for it 
 at the W3C.

 One thing I am not totally clear on is how this would fit into CSP. A big 
 focus for CSP is to enable site X to have a policy that prevents it from 
 accidentally including scripts from site Y, and things of that nature. In 
 other words, voluntarily limit the embedding capabilities of site X itself. 
 But the desired feature is kind of the opposite of that. I think it would be 
 confusing to stretch CSP to this use case, much as it would have been 
 confusing to reuse CORS for this purpose.

There's been a bunch of discussion on the public-web-security mailing
list about the scope of CSP.  Some folks think that CSP should be a
narrow feature targeted at mitigating cross-site scripting.  Other
folks (e.g., as articulated in
http://w2spconf.com/2010/papers/p11.pdf) would like to see CSP be
more of a one-stop shop for configuring security-relevant policy for a
web site.

From-Origin is closely related to one of the proposed CSP features,
namely frame-ancestors, which also controls how the given resource can
be embedded in other documents:

https://wiki.mozilla.org/Security/CSP/Specification

Aside from the aesthetic questions, I'd imagine folks will want to
include a list of permissible origins in the From-Origin header (or
else they'd have to give up caching their resources).  CSP already has
syntax, semantics, and processing models for lists of origins,
including wildcards.  At a minimum, we wouldn't want to create a
gratuitously different syntax for the same thing.
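
For concreteness, I'd expect the header to end up looking something
like this (illustrative only; the actual value grammar is whatever
Anne's draft specifies):

  From-Origin: http://example.com http://cdn.example.com

i.e., the resource names the origins that may embed it, which is
exactly the kind of origin-list syntax CSP already defines.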

Adam


 On Feb 28, 2011, at 11:35 PM, Anne van Kesteren wrote:
 The WebFonts WG is looking for a way to prevent cross-origin embedding of 
 fonts as certain font vendors want to license their fonts with such a 
 restriction. Some people think CORS is appropriate for this, some don't. 
 Here is some background material:

 http://weblogs.mozillazine.org/roc/archives/2011/02/distinguishing.html
 http://annevankesteren.nl/2011/02/web-platform-consistency
 http://lists.w3.org/Archives/Public/public-webfonts-wg/2011Feb/0066.html


 More generally, having a way to prevent cross-origin embedding of resources 
 can be useful. In addition to license enforcement it can help with:

 * Bandwidth theft
 * Clickjacking
 * Privacy leakage

 To that effect I wrote up a draft that complements CORS. Rather than 
 enabling sharing of resources, it allows for denying the sharing of 
 resources:

 http://dvcs.w3.org/hg/from-origin/raw-file/tip/Overview.html

 And although it might end up being part of the Content Security Policy work 
 I think it would be useful if we publish a Working Draft of this work to gather 
 more input, committing us nothing.

 What do you think?

 Kind regards,


 --
 Anne van Kesteren
 http://annevankesteren.nl/







Re: Cross-Origin Resource Embedding Restrictions

2011-03-01 Thread Adam Barth
+dveditz and +bsterne because they have strong opinions about CSP.

Adam


On Tue, Mar 1, 2011 at 12:26 AM, Adam Barth w...@adambarth.com wrote:
 On Mon, Feb 28, 2011 at 11:57 PM, Maciej Stachowiak m...@apple.com wrote:
 For what it's worth, I think this is a useful draft and a useful technology. 
 Hotlinking prevention is of considerable interest to Web developers, and 
 doing it via server-side Referer checks is inconvenient and error-prone. I 
 hope we can fit it into Web Apps WG, or if not, find another good home for it 
 at the W3C.

 One thing I am not totally clear on is how this would fit into CSP. A big 
 focus for CSP is to enable site X to have a policy that prevents it from 
 accidentally including scripts from site Y, and things of that nature. In 
 other words, voluntarily limit the embedding capabilities of site X itself. 
 But the desired feature is kind of the opposite of that. I think it would be 
 confusing to stretch CSP to this use case, much as it would have been 
 confusing to reuse CORS for this purpose.

 There's been a bunch of discussion on the public-web-security mailing
 list about the scope of CSP.  Some folks think that CSP should be a
 narrow feature targeted at mitigating cross-site scripting.  Other
 folks (e.g., as articulated in
 http://w2spconf.com/2010/papers/p11.pdf) would like to see CSP be
 more of a one-stop shop for configuring security-relevant policy for a
 web site.

 From-Origin is closely related to one of the proposed CSP features,
 namely frame-ancestors, which also controls how the given resource can
 be embedded in other documents:

 https://wiki.mozilla.org/Security/CSP/Specification

 Aside from the aesthetic questions, I'd imagine folks will want to
 include a list of permissible origins in the From-Origin header (or
 else they'd have to give up caching their resources).  CSP already has
 syntax, semantics, and processing models for lists of origins,
 including wildcards.  At a minimum, we wouldn't want to create a
 gratuitously different syntax for the same thing.

 Adam


 On Feb 28, 2011, at 11:35 PM, Anne van Kesteren wrote:
 The WebFonts WG is looking for a way to prevent cross-origin embedding of 
 fonts as certain font vendors want to license their fonts with such a 
 restriction. Some people think CORS is appropriate for this, some don't. 
 Here is some background material:

 http://weblogs.mozillazine.org/roc/archives/2011/02/distinguishing.html
 http://annevankesteren.nl/2011/02/web-platform-consistency
 http://lists.w3.org/Archives/Public/public-webfonts-wg/2011Feb/0066.html


 More generally, having a way to prevent cross-origin embedding of resources 
 can be useful. In addition to license enforcement it can help with:

 * Bandwidth theft
 * Clickjacking
 * Privacy leakage

 To that effect I wrote up a draft that complements CORS. Rather than 
 enabling sharing of resources, it allows for denying the sharing of 
 resources:

 http://dvcs.w3.org/hg/from-origin/raw-file/tip/Overview.html

 And although it might end up being part of the Content Security Policy work 
 I think it would be useful if we publish a Working Draft of this work to 
 gather more input, committing us nothing.

 What do you think?

 Kind regards,


 --
 Anne van Kesteren
 http://annevankesteren.nl/








Re: [XHR2] Feedback on sec-* headers

2011-02-21 Thread Adam Barth
I replied on HTTPbis, but I can reply here too.  It seems like the
XMLHttpRequest API is free to decide which headers can and can't be
set using the XMLHttpRequest API.  For example, the XMLHttpRequest API
could decide that it can or cannot be used to set the Banana HTTP
header as the designers of that API see fit.
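
For example (a minimal sketch; per the XHR2 text quoted below, a
blocked header is silently ignored rather than raising an error):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/resource');
  xhr.setRequestHeader('Banana', 'ripe');        // fine; header is sent
  xhr.setRequestHeader('Sec-Banana', 'ripe');    // dropped: Sec- prefix
  xhr.setRequestHeader('Proxy-Banana', 'ripe');  // dropped: Proxy- prefix
  xhr.send();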

Adam


On Mon, Feb 21, 2011 at 2:38 PM, Mark Nottingham m...@mnot.net wrote:
 Hello,

 An HTTPbis WG member noticed that the XHR2 draft gives special status to HTTP 
 headers starting with Sec-* and Proxy-*:

 http://www.w3.org/TR/XMLHttpRequest2/#the-setrequestheader-method

 
 Terminate these steps if header is a case-insensitive match for one of the 
 following headers … or if the start of header is a case-insensitive match for 
 Proxy- or Sec- (including when header is just Proxy- or Sec-).
 

 This is problematic. XHR2 is effectively reserving a name space in the range 
 of possible HTTP header field names. Future applications with similar 
 requirements will use this as precedent, and will mint their own header 
 prefixes. When those prefixes need to be combined, we'll see fragmentation 
 (e.g., the Sec-Private-Special-ID header, with all of the associated parsing 
 and special handling of the field name that this entails).

 Instead, it would be much better to use an approach somewhat like the 
 Connection header does; i.e., have the sender declare what headers it isn't 
 allowing the client to modify in a separate header. E.g.,

  XHR2-Secure: Foo, Bar, Baz

 This way, another application can still talk about existing headers without 
 changing their names; e.g.,

  FooAPI-Private: Bar, Boo

 Cheers,


 --
 Mark Nottingham   http://www.mnot.net/








Re: [XHR2] Feedback on sec-* headers

2011-02-21 Thread Adam Barth
On Mon, Feb 21, 2011 at 3:53 PM, Mark Nottingham m...@mnot.net wrote:
 Probably best to follow up here.

 Yes, of course it can define the specific headers it can or cannot send.

 The problem is XHR not only enumerates the headers, it also defines a prefix; 
 by doing so, you're effectively defining a convention that *all* people who 
 register new HTTP headers need to consider;

I guess I don't see a difference between an enumeration and a prefix.

  * If I want to register a header and make it secure in XHR2, I have to start 
 its name with those four characters. If another convention comes along that 
 requires a different prefix, and I want to invoke both behaviours with my 
 header, I'm out of luck.

I'm not sure what you mean by "secure in XHR2".  The list that
contains Sec- also contains a number of headers that don't start with
any particular characters.  We can certainly extend that list in the
future.  Sec- is just a convenient way to avoid having to update all
the XMLHttpRequest implementations.

  * If I'm registering a header and am completely oblivious to XHR2, I still 
 need to know that my choice of name makes a difference in some contexts.

That's the case regardless of whether we use an enumeration or a
prefix or our current strategy of using both.  If we decide to block
setting the Banana header and you're completely oblivious to XHR2,
then your choosing to use or not use the name Banana has the same
effect.

Adam


 On 22/02/2011, at 10:09 AM, Adam Barth wrote:
 I replied on HTTPbis, but I can reply here too.  It seems like the
 XMLHttpRequest API is free to decide which headers can and can't be
 set using the XMLHttpRequest API.  For example, the XMLHttpRequest API
 could decide that it can or cannot be used to set the Banana HTTP
 header as the designers of that API see fit.

 Adam


 On Mon, Feb 21, 2011 at 2:38 PM, Mark Nottingham m...@mnot.net wrote:
 Hello,

 An HTTPbis WG member noticed that the XHR2 draft gives special status to 
 HTTP headers starting with Sec-* and Proxy-*:

 http://www.w3.org/TR/XMLHttpRequest2/#the-setrequestheader-method

 
 Terminate these steps if header is a case-insensitive match for one of the 
 following headers … or if the start of header is a case-insensitive match 
 for Proxy- or Sec- (including when header is just Proxy- or Sec-).
 

 This is problematic. XHR2 is effectively reserving a name space in the 
 range of possible HTTP header field names. Future applications with similar 
 requirements will use this as precedent, and will mint their own header 
 prefixes. When those prefixes need to be combined, we'll see fragmentation 
 (e.g., the Sec-Private-Special-ID header, with all of the associated 
 parsing and special handling of the field name that this entails).

 Instead, it would be much better to use an approach somewhat like the 
 Connection header does; i.e., have the sender declare what headers it isn't 
 allowing the client to modify in a separate header. E.g.,

  XHR2-Secure: Foo, Bar, Baz

 This way, another application can still talk about existing headers without 
 changing their names; e.g.,

  FooAPI-Private: Bar, Boo

 Cheers,


 --
 Mark Nottingham   http://www.mnot.net/






 --
 Mark Nottingham   http://www.mnot.net/







Re: [XHR2] Feedback on sec-* headers

2011-02-21 Thread Adam Barth
On Mon, Feb 21, 2011 at 5:13 PM, Mark Nottingham m...@mnot.net wrote:
 On 22/02/2011, at 11:39 AM, Adam Barth wrote:
 On Mon, Feb 21, 2011 at 3:53 PM, Mark Nottingham m...@mnot.net wrote:
 Probably best to follow up here.

 Yes, of course it can define the specific headers it can or cannot send.

 The problem is XHR not only enumerates the headers, it also defines a 
 prefix; by doing so, you're effectively defining a convention that *all* 
 people who register new HTTP headers need to consider;

 I guess I don't see a difference between an enumeration and a prefix.

 An enumeration is specific; you will (presumably) only include headers that 
 are already existent and understood. Furthermore, an enumeration doesn't 
 force the header name into a specific syntactic form.

 A prefix includes a potentially unbounded set of headers, both existent and 
 yet to come.



  * If I want to register a header and make it secure in XHR2, I have to 
 start its name with those four characters. If another convention comes 
 along that requires a different prefix, and I want to invoke both 
 behaviours with my header, I'm out of luck.

 I'm not sure what you mean by "secure in XHR2".  The list that
 contains Sec- also contains a number of headers that don't start with
 any particular characters.  We can certainly extend that list in the
 future.  Sec- is just a convenient way to avoid having to update all
 the XMLHttpRequest implementations.

 As would be defining a header that carries this information separately, as I 
 suggested.

I'm not sure I understand how this would work.  Let's take the example
of Sec-WebSocket-Key.  When would the user agent send "XHR2-Secure:
Sec-WebSocket-Key"?

Adam


  * If I'm registering a header and am completely oblivious to XHR2, I still 
 need to know that my choice of name makes a difference in some contexts.

 That's the case regardless of whether we use an enumeration or a
 prefix or our current strategy of using both.  If we decide to block
 setting the Banana header and you're completely oblivious to XHR2,
 then your choosing to use or not use the name Banana has the same
 effect.

 Really? You're going to enumerate a header that isn't yet defined?





 Adam


 On 22/02/2011, at 10:09 AM, Adam Barth wrote:
 I replied on HTTPbis, but I can reply here too.  It seems like the
 XMLHttpRequest API is free to decide which headers can and can't be
 set using the XMLHttpRequest API.  For example, the XMLHttpRequest API
 could decide that it can or cannot be used to set the Banana HTTP
 header as the designers of that API see fit.

 Adam


 On Mon, Feb 21, 2011 at 2:38 PM, Mark Nottingham m...@mnot.net wrote:
 Hello,

 An HTTPbis WG member noticed that the XHR2 draft gives special status to 
 HTTP headers starting with Sec-* and Proxy-*:

 http://www.w3.org/TR/XMLHttpRequest2/#the-setrequestheader-method

 
 Terminate these steps if header is a case-insensitive match for one of 
 the following headers … or if the start of header is a case-insensitive 
 match for Proxy- or Sec- (including when header is just Proxy- or Sec-).
 

 This is problematic. XHR2 is effectively reserving a name space in the 
 range of possible HTTP header field names. Future applications with 
 similar requirements will use this as precedent, and will mint their own 
 header prefixes. When those prefixes need to be combined, we'll see 
 fragmentation (e.g., the Sec-Private-Special-ID header, with all of the 
 associated parsing and special handling of the field name that this 
 entails).

 Instead, it would be much better to use an approach somewhat like the 
 Connection header does; i.e., have the sender declare what headers it 
 isn't allowing the client to modify in a separate header. E.g.,

  XHR2-Secure: Foo, Bar, Baz

 This way, another application can still talk about existing headers 
 without changing their names; e.g.,

  FooAPI-Private: Bar, Boo

 Cheers,


 --
 Mark Nottingham   http://www.mnot.net/






 --
 Mark Nottingham   http://www.mnot.net/





 --
 Mark Nottingham   http://www.mnot.net/







Re: [XHR2] Feedback on sec-* headers

2011-02-21 Thread Adam Barth
On Mon, Feb 21, 2011 at 6:28 PM, Mark Nottingham m...@mnot.net wrote:
 On 22/02/2011, at 1:08 PM, Adam Barth wrote:
 I'm not sure I understand how this would work.  Let's take the example
 of Sec-WebSocket-Key.  When would the user agent send "XHR2-Secure:
 Sec-WebSocket-Key"?


 Ah, I see; you want to dynamically prohibit the client from sending a header, 
 rather than declare what headers the client didn't allow modification of.

 A separate header won't help you, no.

 The problems I brought up still stand, however. I think we need to have a 
 discussion about how much convenience the implementers really need here, and 
 also to look at the impact on the registration procedure for HTTP headers.

The Sec- behavior has only been implemented for a few years at this
point.  If there was another solution that worked better, we could
likely adopt it.  I couldn't think of one at the time, but other folks
might have more clever ideas.

Adam



Re: Local Storage Not Retaining Data Between Local Browser Sessions

2010-12-11 Thread Adam Barth
On Mon, Nov 29, 2010 at 2:03 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Nov 29, 2010 at 1:55 PM, Adam Barth w...@adambarth.com wrote:
 On Sun, Nov 28, 2010 at 10:11 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, Nov 28, 2010 at 11:25 AM, Ronald Lane rlane6...@verizon.net wrote:
 We would like to develop a system (JavaScript) to run on a local browser.
 We would like to make use of local storage; however, in testing this out we
 see that when running the app from a server, local storage works fine, but
 when running the app locally it fails to retain the data between sessions.

 Is this an intended function?

 This is really a problem with the file: protocol rather than a problem
 with localStorage. localStorage works on the basis of origin, a
 concept which is defined for protocols like http: and https:, however
 I believe so far has not been defined for file:. I'd recommend talking
 with the IETF, though I wouldn't get my hopes up as it's really a very
 hard problem to solve unfortunately.

 Yeah, we've basically punted on the origin of file URLs in IETF-land.

 The only decent solution I've been able to think of is to make it
 possible to configure a browser to tell it that a given local
 directory constitutes a site and that all files in that directory
 (and its sub-directories) are same-origin.

Yeah.  One clean way to do that is to create a new scheme, say
local-origin, and map in parts of the file system under different host
names.  For example:

local-origin://foobarbaz/css/main.css

would internally be resolved to
file:///home/abarth/Projects/FunWithCSS/css/main.css.

The idea is that everything in the FunWithCSS folder would be mapped
into the local-origin://foobarbaz origin, safely isolated from the
rest of the filesystem.
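
A sketch of the resolution step (purely illustrative pseudo-logic, not
actual Chrome code):

  // Hypothetical user-configured table: host name -> directory.
  var mappings = {
    'foobarbaz': 'file:///home/abarth/Projects/FunWithCSS'
  };

  function resolveLocalOrigin(url) {
    var match = /^local-origin:\/\/([^\/]+)(\/.*)$/.exec(url);
    if (!match || !mappings[match[1]])
      throw new Error('no mapping for ' + url);
    // e.g. local-origin://foobarbaz/css/main.css
    //   -> file:///home/abarth/Projects/FunWithCSS/css/main.css
    return mappings[match[1]] + match[2];
  }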

 A solution like that is likely out of scope for IETF. The best way to
 pursue a solution like that is to start talking to individual browser
 vendors directly.

We've actually implemented the above in Chrome; it's just not really
advertised in the UI.

Adam



Re: Local Storage Not Retaining Data Between Local Browser Sessions

2010-11-29 Thread Adam Barth
On Sun, Nov 28, 2010 at 10:11 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, Nov 28, 2010 at 11:25 AM, Ronald Lane rlane6...@verizon.net wrote:
 We would like to develop a system (JavaScript) to run on a local browser.
 We would like to make use of local storage; however, in testing this out we
 see that when running the app from a server, local storage works fine, but
 when running the app locally it fails to retain the data between sessions.

 Is this an intended function?

 This is really a problem with the file: protocol rather than a problem
 with localStorage. localStorage works on the basis of origin, a
 concept which is defined for protocols like http: and https:, however
 I believe so far has not been defined for file:. I'd recommend talking
 with the IETF, though I wouldn't get my hopes up as it's really a very
 hard problem to solve unfortunately.

Yeah, we've basically punted on the origin of file URLs in IETF-land.

Adam



Re: Use cases for Range::createContextualFragment and script nodes

2010-10-20 Thread Adam Barth
On Wed, Oct 20, 2010 at 7:14 AM, Stewart Brodie
stewart.bro...@antplc.com wrote:
 Henri Sivonen hsivo...@iki.fi wrote:
 When WebKit or Firefox trunk create an HTML script element node via
 Range::createContextualFragment, the script has its 'already started' flag
 set, so the script won't run when inserted into a document. In Opera 10.63
 and in Firefox 3.6.x, the script doesn't have the 'already started' flag
 set, so the script behaves like a script created with
 document.createElement(script) when inserted into a document.

 I'd be interested in use cases around createContextualFragment in order to
 get a better idea of which behavior should be the correct behavior going
 forward.

 Does the specification for createContextualFragment say anything about this?

I don't believe such a spec exists, or at least I couldn't find one
the other month.
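
For reference, the divergence Henri describes can be observed with a
snippet like this (a minimal sketch):

  var range = document.createRange();
  range.selectNode(document.body);
  var fragment = range.createContextualFragment(
      '<script>alert("ran")<\/script>');
  // WebKit and Firefox trunk set the script's 'already started' flag,
  // so this insertion does NOT run the script; Opera 10.63 and Firefox
  // 3.6.x do run it.
  document.body.appendChild(fragment);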

Adam


