Re: Custom elements Constructor-Dmitry baseline proposal

2015-08-21 Thread Maciej Stachowiak

 On Aug 17, 2015, at 3:19 PM, Domenic Denicola d...@domenic.me wrote:
 
 In 
 https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Constructor-Dmitry.md 
 I’ve written up in some detail what I consider to be the current 
 state-of-the-art in custom elements proposals. That is, if we take the 
 current spec, and modify it in ways that everyone agrees are good ideas, we 
 end up with the Constructor-Dmitry proposal.
 
 The changes, in descending order of importance, are:
 
 - Don't generate new classes as return values from registerElement, i.e. 
 don't treat the second argument as a dumb { prototype } property bag. (This 
 is the original Dmitry proposal.)
 - Allow the use of ES2015 constructors directly, instead of createdCallback. 
 (This uses the constructor-call trick we discovered at the F2F.)
 - Use symbols instead of strings for custom element callbacks.
 - Fire attributeChanged and attached callbacks during parsing/upgrading
 
 Those of you at the F2F may remember me saying something like “If only we 
 knew about the constructor call trick before this meeting, I think we would 
 have had consensus!” This document outlines what I think the consensus would 
 have looked like, perhaps modulo some quibbling about replacing or 
 supplementing attached/detached with different callbacks.
 
 So my main intent in writing this up is to provide a starting point that we 
 can all use, to talk about potential modifications. In particular, at the F2F 
 there was a lot of contention over the “consistent world view” issue, which 
 is still present in the proposal:
 
 - Parser-created custom elements and upgraded custom elements will have their 
 constructor and attributeChanged callbacks called at a time when all their 
 children and attributes are already present, but

Did you change that relative to the spec? Previously, parser-created custom 
elements would have their constructor called at a time when an unpredictable 
number of their children were present. 

 - Elements created via new XCustomElement() or 
 document.createElement("x-custom-element") will have their constructor run at 
 a time when no children or attributes are present.

If you really did get it down to two states, I like reducing the number of 
world states, but I would prefer to put parser-created custom elements in this 
second bucket. They should have their constructor called while they have no 
children and attributes, instead of when they have all of them.

My reasons for this:

(1) This is more likely to lead to correctly coding your custom elements to 
rely on change notifications exclusively, rather than on what is present at 
parse time.
(2) Framework developers find it a footgun and an inconvenience for 
parser-created elements to have their constructor called with children present.
(3) It is possible for a parser-created element to render before all of its 
children are present. It's also possible for it to never reach the state where 
it is known all of its children [that will be created by the parser] are 
present. I can give examples if you like. Based on this, I think it's better 
to upgrade them as early as possible.

For these reasons (#1 and #2 were mentioned at the F2F, #3 is something I 
thought of later), I think parser-created elements should have their 
constructor called early instead of late.
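
To make reason #1 concrete, here is a minimal sketch of an element coded to 
rely on change notifications exclusively. The class syntax follows the 
proposal, but the child-change callback name is purely illustrative, since the 
exact callback set is still under discussion:

// Illustrative only: an element that never inspects its children in the
// constructor, so it behaves identically whether it is parser-created,
// upgraded, or created via new.
class XItemList extends HTMLElement {
    constructor() {
        super();
        this._count = 0; // safe: assumes no children or attributes yet
    }
    // Hypothetical name standing in for whatever child-change notification
    // the final spec provides.
    childrenChangedCallback() {
        this._count = this.children.length;
        this.setAttribute("aria-label", this._count + " items");
    }
}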

However, if we care about these edge cases, note that #3 *also* applies to 
upgrade. If we want to do upgrade late, and it's on a parser-created element, 
it may be the case that the element has already rendered when the info needed 
to upgrade it comes in, but it may be an unboundedly long time until all its 
children come in.

I guess I should give examples, so cases where this can happen:

(A) Load of the main document stalls partway through the element's children, 
before a close tag has been seen. But the upgrade script comes in via <script 
async> even during the stall.
(B) The document keeps loading forever by design. Admittedly this is unlikely 
for modern web design.
(C) The document is created with document.open()/document.write(), the element 
in question never has a close tag written, and no one ever calls 
document.close(). People actually still do this sometimes to populate opened 
windows or the like.
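
For concreteness, here is case (C) in code (the element and file names are 
made up):

// Populate a freshly opened window via document.write(), never closing it.
var win = window.open("", "popup");
win.document.open();
win.document.write("<x-custom-element><img src='a.png'>");
// No </x-custom-element> close tag is ever written, and document.close()
// is never called, so "all children are present" never becomes true for
// this element, even though it rendered long ago.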

If any of this happens, an upgradeable element will be stuck in the pre-upgrade 
state for a possibly unboundedly long amount of time.

This seems like a bad property. 

Against this, we have the proposal to forcibly put elements in a naked state 
before calling the constructor for upgrade, then restore them. That has the 
known bad property of forcing iframe children to reload. I owe the group 
thoughts on whether we can avoid that problem.

 
 If we still think that this is a showstopper to consensus (do we!?) then I 
 

Apple's updated feedback on Custom Elements and Shadow DOM

2015-07-20 Thread Maciej Stachowiak


A while back we sent a consolidated pile of feedback on the Web Components 
family of specs. In preparation for tomorrow's F2F, here is an update on our 
positions. We've also changed the bugzilla links to point to relevant github 
issues instead.

We're only covering Custom Elements (the main expected topic), and also Shadow 
DOM (in case that gets discussed too).


I.  Custom Elements 

A. ES6 classes / Upgrade / Synchronous Constructors
1. In general, we support the synchronous constructors 
approach over the prototype swizzling approach, as the lesser evil. While 
tricky to implement correctly, it makes a lot more sense and fits more 
naturally into the language. We are willing to do the work to make it feasible.
2. Custom elements should support initialization using an ES6 
class constructor instead of a separate callback (see the sketch after this 
list). 
https://github.com/w3c/webcomponents/issues/139 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28541
3. We don’t think upgrading should be supported. The tradeoffs 
of different options have been much-discussed. 
https://github.com/w3c/webcomponents/issues/134 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28544
4. Specifically, we don't really like the Optional Upgrades, 
Optional Constructors proposal (seems like it's the worst of both worlds in 
terms of complexity and weirdness) or the Parser-Created Classes proposal 
(not clear how this even solves the problem).
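
To illustrate point A.2 above, here is a sketch contrasting the shipped 
createdCallback style with direct use of an ES6 class constructor; the exact 
registration signature for classes is part of what remains under discussion:

// Shipped style: prototype bag plus a separate initialization callback.
var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function() { this.textContent = "hello"; };
document.registerElement("x-old-style", { prototype: proto });

// Desired style: initialization happens in an ordinary constructor.
class XNewStyle extends HTMLElement {
    constructor() {
        super();
        this.textContent = "hello";
    }
}
document.registerElement("x-new-style", XNewStyle); // illustrative signature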

B. Insertion/Removal Callbacks
1. We think the current attached/detached callbacks should be 
removed. They don’t match core DOM concepts, and insert/remove is a more 
natural bracket. The primitives should be insertedIntoDocument / 
removedFromDocument and inserted / removed (see the sketch after this list). 
If you care about whether your document is rendered, look at its defaultView 
property. https://github.com/w3c/webcomponents/issues/286
2. We think inserted/removed callbacks should be added, for 
alignment with DOM. https://github.com/w3c/webcomponents/issues/222
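
A sketch of the callback vocabulary proposed in B.1 and B.2; the four names 
come from the text above, while the argument lists are guesses:

class XWidget extends HTMLElement {
    insertedIntoDocument() { /* start observers, timers, etc. */ }
    removedFromDocument() { /* tear them down */ }
    inserted(parent) { /* became a child of parent; argument is a guess */ }
    removed(oldParent) { /* detached from oldParent; argument is a guess */ }
}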

C. Inheritance for Built-ins
1. We think support for inheritance from built-in elements 
(other than HTMLElement/SVGElement) should be omitted from a cross-browser v1. 
It raises complex implementation issues. 
https://github.com/w3c/webcomponents/issues/133 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28547

D. Syntactic Sugar / Developer Ergonomics
1. We think it would be useful (perhaps post-v1) to make it 
simpler to create a custom element that is always instantiated with a shadow 
DOM from a template. Right now, this common use case requires script and a 
template in separate places, and a few lines of confusing boilerplate code. 
https://github.com/w3c/webcomponents/issues/135 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28546
2. We think at some point (perhaps post-V1), there should be a 
convenient declarative syntax that combines script and a template to define a 
custom element. JavaScript frameworks on top of web components provide 
something like this. Perhaps with field experience we can make a standardized 
common syntax. https://github.com/w3c/webcomponents/issues/136

E. Renaming the API
1. We’re still not wholly sold on document.registerElement as a 
name. We like document.define or document.defineElement. At minimum, we’d like 
the WG to decide on the name instead of just leaving it at the editor’s initial 
decision. We can probably live with this not changing though. 
https://github.com/w3c/webcomponents/issues/140 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=24087
2. If anything about Custom Elements is changed incompatibly, 
we suggest renaming document.registerElement (whether to one of our suggestions 
or another). This is to avoid compat problems with content written for Chrome’s 
shipping implementation. This will almost certainly be true if we switch from 
createdCallback to constructors as the initializers.


II.  Shadow DOM 

A. Closed vs. Open.
1. A closed/open flag has been added to createShadowRoot(), 
which addresses our earlier request. We are ok with the syntax; a possible 
shape is sketched after this list. 
https://github.com/w3c/webcomponents/issues/100
2. The behavior of closed mode should be actually defined. We 
hope this does not need much justification. We think this is critical for v1. 
https://github.com/w3c/webcomponents/issues/100
3. We wanted closed mode to be the default but we are ok with 
having no default, as was decided at the last F2F.
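
As referenced in A.1, a sketch of the flag; the dictionary shape here is 
illustrative rather than final:

var closedRoot = host.createShadowRoot({ mode: "closed" }); // illustrative
var openRoot = other.createShadowRoot({ mode: "open" });

// One observable difference that A.2 asks to have actually defined:
host.shadowRoot;  // should be null for a closed root
other.shadowRoot; // the root itself for an open one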

B. Multiple Generations of Shadow DOM
1. We are glad to see that multiple generations of Shadow DOM 
has been removed per F2F agreement.
2. After further consideration, we are even more convinced that 
the named slot proposal is the way to go for distribution for v1. Original 
proposal here: 

Re: Apple's updated feedback on Custom Elements and Shadow DOM

2015-07-20 Thread Maciej Stachowiak

 On Jul 20, 2015, at 10:29 PM, Domenic Denicola d...@domenic.me wrote:
 
 Thanks very much for your feedback Maciej! I know we'll be talking a lot more 
 tomorrow, but one point in particular confused me:
 
 From: Maciej Stachowiak [mailto:m...@apple.com] 
 
 4. Specifically, we don't really like the Optional Upgrades, Optional 
 Constructors proposal (seems like it's the worst of both worlds in terms of 
 complexity and weirdness) or the Parser-Created Classes proposal (not 
 clear how this even solves the problem).
 
 Specifically with regard to the latter, what is unclear about how it solves 
 the problem? It completely gets rid of upgrades, which I thought you would be 
 in favor of.
 
 The former is, as you noted, a compromise solution, that brings in the best 
 of both worlds (from some perspectives) and the worst of them (from others).


Sorry that this was unclear.

From our (many Apple folks') perspective, the biggest problem with the 
prototype swizzling solution is that it doesn't allow natural use of ES6 
classes, in particular with initialization happening through the constructor. 
It seems like parser-created classes do not solve that problem, since 
initialization happens before the class is even defined. It also does not solve 
the secondary problem of FOUC, or the related flash of non-interactive content. 
It *does* seem to solve the secondary problem of modifying prototype chains 
after the fact and in some sense changing the class identity of elements. 

By my best understanding of the “anti synchronous constructors” position, I 
think there are two key concerns - the need to run arbitrary user code at 
possibly inconvenient moments of parsing or cloning; and the fact that elements 
can't be upgraded to a fancier version after the fact if they are parsed before 
a relevant library loads. It does seem to solve both those problems.

Does that sound right to you?

If so, it is not much more appealing than prototype swizzling to us, since 
our biggest concern is allowing natural use of ES6 classes.


Regards,
Maciej

(The “we” in this case includes at least myself, Ryosuke Niwa, Sam Weinig, and 
Gavin Barraclough, who composed this position statement today; but others at 
Apple have also expressed similar views in the past.)


Re: Making ARIA and native HTML play better together

2015-05-11 Thread Maciej Stachowiak

 On May 7, 2015, at 12:59 AM, Domenic Denicola d...@domenic.me wrote:
 
 From: Anne van Kesteren ann...@annevk.nl
 
 On Thu, May 7, 2015 at 9:02 AM, Steve Faulkner faulkner.st...@gmail.com 
 wrote:
 Currently ARIA does not do this stuff AFAIK.
 
 Correct. ARIA only exposes strings to AT. We could maybe make it do more, 
 once we understand what more means, which is basically figuring out HTML as 
 Custom Elements...
 
 These are my thoughts as well. The proposal seems nice as a convenient way to 
 get a given bundle of behaviors. But we *really* need to stop considering 
 these roles as atomic, and instead break them down into what they really 
 mean.
 
 In other words, I want to explain the button behavior as something like:
 
 - Default-focusable

This point isn’t correct for built-in buttons on all browsers and platforms. 
For example, <input type=button> is not keyboard-focusable in Safari for Mac, 
but it is mouse-focusable. Likewise in Safari on iOS (both if you connect a 
physical keyboard, and if you use the onscreen focus-cycle arrows in navigation 
view).

This raises an interesting and subtle point. Built-in controls can add value by 
being consistent with the platform behavior when it varies between platforms. 
Giving very low-level primitives results in developers hardcoding the behavior 
of their biggest target platform - generally Windows, but for some 
mobile-targeted sites it can be iOS. It’s hard to make low-level primitives 
that can sensibly capture these details. Sure, I guess we could have a feature 
that’s “default-focusable but only on platforms where that is true for controls 
you don’t type into”. That is pretty specific to particular platform details 
though. Other platforms could make different choices. In fact, what controls 
fall in which focus bucket has changed over time in Safari.

Let’s say you really want to capture all the essences of buttonness in a custom 
control, but give it special appearance or behavior. I think two good ways the 
web platform could provide for that are:

(1) Provide standardized cross-browser support for styling of form controls.
(2) Allow custom elements to subclass <button> or <input type=button> (the 
latter is hard to define cleanly due to the way many form controls are all 
overloaded onto a single element). A sketch of what (2) might look like follows.
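
The sketch uses the type-extension syntax from the then-current Custom 
Elements draft; whether that mechanism survives is exactly the question raised 
in section I.C of our earlier feedback:

// A button that keeps all built-in buttonness (focus rules, activation,
// form participation) and only layers behavior on top.
var proto = Object.create(HTMLButtonElement.prototype);
proto.createdCallback = function() {
    this.addEventListener("click", function() { /* custom behavior */ });
};
document.registerElement("fancy-button", {
    prototype: proto,
    extends: "button" // used in markup as <button is="fancy-button">
});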

 - Activatable with certain key commands
 - Announced by AT as a button

Buttons also have further aspects to their specialness, such as the way they 
participate in forms.

I think adding clean primitives for these has value. Adding an easy way to get 
a package deal of standard button behaviors with greater customizability is 
also valuable, potentially more so in some circumstances.

(I don’t think ARIA is the best way to invoke a package deal of behaviors 
though, since it’s already pretty established as a way to expose behavior 
through AT without having many of these effects. It would risk breaking author 
intent to change it now.)

Regards,
Maciej

 
 and then I want to be able to apply any of these abilities (or others like 
 them) to any given custom element. Once we have these lower-level primitives 
 we'll be in a much better place.
 




Re: [components] Isolated Imports and Foreign Custom Elements

2015-05-01 Thread Maciej Stachowiak


On May 1, 2015, at 4:35 PM, Domenic Denicola d...@domenic.me wrote:

 alert(weirdArray.__proto__ == localArray.__proto__)
 
 This alerts false in IE, Firefox, and Chrome.
 

That is what I'd expect it to do. (It does the same in Safari).

I guess I didn't explain why I put this line in, so for clarity: this line 
demonstrates that the instance created was not of the local browsing context's 
Array, but rather of the distinct Array from the other frame. Which shows that 
there is a well-defined meaning to creating instances of constructors from 
another global object. Apologies if this was too subtle.

Regards,
Maciej


Re: [components] Isolated Imports and Foreign Custom Elements

2015-05-01 Thread Maciej Stachowiak

 On May 1, 2015, at 9:47 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Thu, Apr 23, 2015 at 8:58 PM, Maciej Stachowiak m...@apple.com wrote:
 I wrote up a proposal (with input and advice from Ryosuke Niwa) on a
 possible way to extend Web Components to support fully isolated components:
 
 https://github.com/w3c/webcomponents/wiki/Isolated-Imports-Proposal
 
 I welcome comments on whether this approach makes sense.
 
 I don't get the bit where you create a node in one global, but run its
 constructor in another.

It’s already possible to run a constructor from another global object in the 
non-cross-origin case. Simple example below (it uses a built-in type as the 
example, but it could work just as easily with a custom defined prototype-based 
constructor or ES6 class constructor).

<iframe id="frame">
</iframe>
<script>
function doIt() {
    window.ForeignArray = document.getElementById("frame").contentWindow.Array;
    var localArray = new Array();
    var weirdArray = new ForeignArray();
    alert(weirdArray.__proto__ == localArray.__proto__);
}

window.addEventListener("load", doIt);
</script>

My proposal suggests something similar, except everything is wrapped with 
translating proxies at the origin boundary.

I think it may be necessary to do an experimental implementation to work out 
all the details of how the two-way isolation works.

 That seems rather Frankenstein-esque. Would
 love to see more details overall, as the direction this is going in
 certainly seems like the kind of thing we want. Allowing a dozen
 Facebook Like buttons to appear on a page using only one additional
 global.

Yes, that’s the goal.

Regards,
Maciej




Re: [components] Isolated Imports and Foreign Custom Elements

2015-05-01 Thread Maciej Stachowiak

Your proposal seems conceptually very similar to mine. I guess that’s a good 
sign!

It seems the biggest difference is the foreign registration hook - whether it’s 
done at class registration time (registerElement) or done at import time.

The reasons I did not go with class registration time are:

(1) All of the parameters to registerElement() should really be provided by the 
cross-origin element itself. It makes no sense for prototype and extends to 
come from the hosting environment.
(2) You need to define an elaborate protocol for the outside origin to do the 
“inside” of the registration operation.
(3) At least in the obvious way to do it, you need one external document per 
custom element type.
(4) It’s less aligned with regular HTML imports, which are the non-cross-origin 
non-isolated tool for importing a bunch of element definitions.
(5) Documents referenced by <link> can be preloaded aggressively, but loads 
initiated from a script parameter cannot.

On these grounds, I think doing the loading at import time, and explicit 
import/export lists, are a simpler and cleaner solution. It sounds like you 
agree with some of the reasons below.

I guess the other big difference is that your approach allows customizing what 
API is exposed, rather than proxying everything defined by the custom element 
instance in its own world. That is intriguing to me, but I’m not sure it’s 
necessary. The custom element can have truly private methods and slots by using 
ES6 symbols, and my proposed rule to do extended structured cloning on 
parameters and returns limits the risk of exposing everything. But I agree with 
you that if we need fine-grained API control, it’s better to do it with 
property descriptor-like structures than in a fully programmatic way.
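
For example, a small sketch of the symbol-based privacy mentioned above, using 
nothing beyond ES6 itself:

// A symbol that never escapes the component's world cannot be forged by
// the hosting page, so properties keyed off it are effectively private.
const render = Symbol("render");
const state = Symbol("state");

class XSecret extends HTMLElement {
    constructor() {
        super();
        this[state] = { clicks: 0 };
    }
    [render]() { /* update shadow DOM from this[state] */ }
}
// Without the symbol values, this[state] and this[render] are unreachable
// (modulo Object.getOwnPropertySymbols, which the membrane would need to
// censor at the boundary).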

Regards,
Maciej

 On May 1, 2015, at 10:46 AM, Travis Leithead travis.leith...@microsoft.com 
 wrote:
 
 If you take a look at [1], we extend the custom elements registration 
 mechanism so that the constructor is still available in the hosting global, 
 yet the implementation is defined in the isolated environment.
 
 An approach to solving this might address another concern I have...
 
 I've been thinking about the way that the APIs are created with my proposal 
 and the design wherein you have an explicit API to create the API signature 
 on the prototype (and instances) leaves a lot of room for potential issues. 
 For example:
 * Nothing requires the isolated component to create any APIs initially 
 (leaving the custom element without any API until some random later time of 
 the isolated component's choosing).
 * There is no way to know when the isolated component's API creation is done
 * The isolated component can remove APIs at any time; this is not a pattern 
 that user agents ever make use of and there's no use case for it--doesn't 
 seem appropriate to give this power to the isolated component
 
 To address these problems, if you change the model to work more like what 
 Maciej proposed where you can have N number of custom elements defined by one 
 global, then in the creation of a particular custom element (specifically 
its prototype) you can specify what APIs should be defined on it in one shot 
 (creation time) and don't provide any other way to do it. This naturally 
 satisfies my above concerns. So, a rough sketch might be something like:
 
   void exportElement(DOMString customElementName, PropDescDictionary 
 definitions);
 
 usage example:
 
 ```js
 document.exportElement("element-name", {
    api1: { enumerable: true, value: function() { return "hello, from the 
 isolated component"; }},
    api2: { /* etc... */ }
 });
 // returns void (or throws if "element-name" is already defined/exported?)
 ```
 
 Once you divorce the isolated component in this way, you rightly point out 
 the problem of how to get the custom element's constructor function exported 
 outside of the isolated environment. One possible approach to solve this 
 allows the host to ask for the custom element constructor function 
 explicitly. Rough idea:
 
 Function importConstructor("element-name");
 
 usage example:
 
 ```js
 window.MyElementName = document.importConstructor("element-name");
 // now new MyElementName(); returns an instance of the element-name element
 ```
 
 You can imagine this might be useful for any custom element (either those 
 exported as shown above, or those defined using registerElement -- the 
 non-isolated custom elements).
 
 Just some food for thought.
 
 [1] 
 https://github.com/w3c/webcomponents/wiki/Cross-Origin-Custom-Elements:-Concept-and-Proposal
 
 -Original Message-
 From: Anne van Kesteren [mailto:ann...@annevk.nl] 
 Sent: Friday, May 1, 2015 9:48 AM
 To: Maciej Stachowiak
 Cc: WebApps WG
 Subject: Re: [components] Isolated Imports and Foreign Custom Elements
 
 On Thu, Apr 23, 2015 at 8:58 PM, Maciej Stachowiak m...@apple.com wrote:
 I wrote up a proposal (with input and advice from Ryosuke Niwa) on a 
 possible way to extend Web

Re: Exposing structured clone as an API?

2015-04-23 Thread Maciej Stachowiak

 On Apr 23, 2015, at 3:27 PM, Martin Thomson martin.thom...@gmail.com wrote:
 
 On 23 April 2015 at 15:02, Ted Mielczarek t...@mozilla.com wrote:
 Has anyone ever proposed exposing the structured clone algorithm directly as 
 an API?
 
 If you didn't just do so, I will :)
 
 1. https://twitter.com/TedMielczarek/status/591315580277391360
 
 Looking at your jsfiddle, here's a way to turn that into something useful.
 
 +Object.prototype.clone = Object.prototype.clone || function() {
 - function clone(x) {
return new Promise(function (resolve, reject) {
window.addEventListener('message', function(e) {
resolve(e.data);
});
 +window.postMessage(this, '*');
 -window.postMessage(x, '*');
});
 }
 
 But are we are in the wrong place to have that discussion?

Code nitpick: it probably should remove the event listener from within the 
handler, or calling this function repeatedly will leak memory. Also it will get 
slower every time.

Actually, now that I think about it, this isn’t usable at all if you are using 
postMessage for anything else, since you could accidentally capture 
non-cloning-related messages.

I guess these are potentially arguments to expose cloning directly.
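
For what it's worth, both nitpicks can be worked around with a dedicated 
MessageChannel instead of window messages: each call gets its own listener, 
and unrelated postMessage traffic can't be captured. Still a workaround, so 
the argument for a direct API stands. A sketch, assuming MessageChannel 
support:

function structuredClone(value) {
    return new Promise(function (resolve) {
        var channel = new MessageChannel(); // private pipe: no crosstalk
        channel.port1.onmessage = function (e) {
            channel.port1.close();          // nothing accumulates across calls
            resolve(e.data);
        };
        channel.port2.postMessage(value);   // value is structured-cloned here
    });
}

// usage: the resolved copy is structurally equal but shares no references
structuredClone({ a: [1, 2, 3] }).then(function (copy) { /* use copy */ });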

Regards,
Maciej




[components] Isolated Imports and Foreign Custom Elements

2015-04-23 Thread Maciej Stachowiak

Hi everyone,

I wrote up a proposal (with input and advice from Ryosuke Niwa) on a possible 
way to extend Web Components to support fully isolated components:

https://github.com/w3c/webcomponents/wiki/Isolated-Imports-Proposal

I welcome comments on whether this approach makes sense.


I’d also like to discuss this during the proposals section of tomorrow’s F2F. I 
know it’s supposed to be focused on Shadow DOM, but I think this proposal is 
relevant context.

I think this proposal, even if it is itself a post-v1 project, helps explain 
why closed mode is important and relevant. While closed mode doesn’t provide 
strong security isolation by default, it’s a building block that can be used to 
build full isolation, whereas open mode can’t be used that way.

If we agree to have a closed mode eventually, then it’s probably sensible to 
add it for v1. We’d also want to decide sooner rather than later whether closed 
or open is the default (or whether there is no default and we require authors 
to choose one explicitly).

Regards,
Maciej



Re: [components] Apple's consolidated feedback on Web Components

2015-04-23 Thread Maciej Stachowiak

 On Apr 22, 2015, at 11:10 PM, Maciej Stachowiak m...@apple.com wrote:
 
 
 Hi everyone,
 
 In preparation for Friday’s face-to-face, a number of us at Apple (including 
 me, Ryosuke Niwa, Sam Weinig, and Ted O’Connor

I forgot to mention that Gavin Barraclough also contributed to this discussion. 
We also incorporated past feedback from others on the team.

 - Maciej


[components] Apple's consolidated feedback on Web Components

2015-04-23 Thread Maciej Stachowiak

Hi everyone,

In preparation for Friday’s face-to-face, a number of us at Apple (including 
me, Ryosuke Niwa, Sam Weinig, and Ted O’Connor) got together to collect our 
thoughts and feedback about the current state of Web Components.

Before going into the changes we propose, we want to reiterate that we think 
the concept of Web Components is great, and we even like many of the specifics. 
We’re considering significant implementation effort, but we have some concerns. 
We think there is a set of targeted changes that would help web developers, and 
allow us to address a broader set of use cases.

With that in mind, here are our key points of feedback, by spec.

I.  Shadow DOM 

A. Closed vs. Open.
1. Add a closed/open flag to createShadowRoot(). The Shadow DOM 
spec now has the notion of an encapsulation flag for closed mode. Yay! 
Unfortunately, there’s no way yet for a Web developer to pass this flag in. 
Open vs. closed has been much discussed, and while the default is contentious, 
we felt there was a rough consensus to at least expose both modes. We think 
this is critical for v1. https://www.w3.org/Bugs/Public/show_bug.cgi?id=20144
2. The behavior of closed mode should be actually defined. We 
hope this does not need much justification. We think this is critical for v1. 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=20144
3. We think closed mode should be the default. Alternately, we 
would be ok with a mandatory argument so developers must always explicitly 
choose open or closed. This has been much discussed, so we won’t give further 
rationale here, and can wait for the meeting Friday to debate. 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28445

B. Multiple Generations of Shadow DOM
1. We think this should be removed. Discussion can wait for 
debate of contentious bits. https://www.w3.org/Bugs/Public/show_bug.cgi?id=28446
2. We think that the Apple / Component Kitchen named slot 
proposal does a better job of addressing the main use cases for this. We think 
it is a superior replacement. 
https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution
https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0184.html
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28542

C. Imperative Distribution API
1. We think the imperative distribution API is still worth 
doing. There has been positive feedback from web developers on the concept and 
there isn’t an obvious reason against it. 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=18429

D. Event Retargeting
1. We agree with making it optional (opt-in or opt-out). We 
don’t feel that strongly, but many web developers have asked for this. The 
default should likely match the default for open vs. closed (no retargeting by 
default if open by default).  
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28444

E. Renaming
1. If any strongly incompatible changes are made, we suggest 
renaming createShadowRoot. This is to avoid compat problems with content 
written for Chrome’s shipping implementation.


II.  Custom Elements 

A. Insertion/Removal Callbacks
1. We think the current attached/detached callbacks should be 
removed. They don’t match core DOM concepts and insert/remove is a more natural 
bracket. https://www.w3.org/Bugs/Public/show_bug.cgi?id=24314
2. We think inserted/removed callbacks should be added, for 
alignment with DOM. https://www.w3.org/Bugs/Public/show_bug.cgi?id=24866

B. ES6 classes
1. Custom elements should support ES6 classes in a natural way 
- allowing use of the ES6 class constructor instead of a separate callback. We 
believe there is rough consensus on this point. 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28541

C. Upgrade
1. We don’t think upgrading should be supported. The tradeoffs 
of different options have been much-discussed. 
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28544
2. We support synchronous constructors, or 

Re: Shadow tree style isolation primitive

2015-02-05 Thread Maciej Stachowiak

 
 ... Hanging but?! Oh lordy. Oooh, let me turn this into a contemplative 
 sidebar opportunity.
 
 Shadow DOM and Web Components seem to have what I call the “Unicorn 
 Syndrome”. There's a set of specs that works, proven by at least one browser 
 implementation and the use in the wild. It's got warts (compromises) and some 
 of those warts are quite ugly. Those warts weren't there in the beginning -- 
 they are a result of a hard, multi-year slog of trying to make a complete 
 system that doesn't fall over in edge cases, and compromising. A lot.


Compromise
1. a. A settlement of differences in which each side makes concessions.

I don’t remember any of that happening. I guess you mean a different definition 
of “compromising”?

Regards,
Maciej




Re: Fallout of non-encapsulated shadow trees

2014-07-01 Thread Maciej Stachowiak

 On Jul 1, 2014, at 3:26 PM, Domenic Denicola dome...@domenicdenicola.com 
 wrote:
 
 From: Maciej Stachowiak m...@apple.com
 
 Web Components as currently designed cannot explain the behavior of any 
 built-in elements (except maybe those which can be explained with CSS alone).
 
 Unfortunately this is a hard problem that nobody has even sketched a solution 
 to.

I have sketched a solution to it (including publicly in the last Web Apps WG 
meeting). I believe the following set of primitives would be sufficient for 
implementing built-in elements or their close equivalents:

(1) “closed” or “Type 2 Encapsulation” mode for the Shadow DOM as I have 
advocated it (i.e. no access to get into or style the Shadow DOM when in this 
mode).
(2) Ability to have the script associated with the component run in a separate 
“world”
(3) A two-way membrane at the API layer between a component and a script; 
approximately, this would be the Structured Clone algorithm, but extended to 
also translate references to DOM objects between the worlds.
(4) An import mechanism that loads some script to run in a separate world, and 
allows importing custom element definitions from it.
(5) Custom elements with the ability to work with the membrane from (3) and the 
script world from (2).

With this in place, the only “magic” of built-in elements would be access to 
platform capabilities that some built-in elements have (which could be exported 
as lower-level specific APIs, e.g. participation in form submission for form 
controls), and the fact that they come pre-imported.
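
To be concrete about (3), here is a deliberately naive, one-direction sketch; 
wrapperFor is a hypothetical lookup table, and structuredClone stands in for a 
direct cloning primitive like the one discussed elsewhere in this digest:

// Arguments cross the boundary by structured clone, except DOM references,
// which are translated to the peer world's wrapper for the same node.
function translate(value, peerWorld) {
    if (value instanceof Node) {
        return peerWorld.wrapperFor(value); // hypothetical lookup
    }
    return structuredClone(value); // assumed cloning primitive
}

// Expose a component function to the other world; a real membrane would
// also translate the return value back in the opposite direction.
function exposeFunction(fn, peerWorld) {
    return function () {
        var args = Array.prototype.map.call(arguments, function (a) {
            return translate(a, peerWorld);
        });
        return fn.apply(null, args);
    };
}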

Unfortunately, Shadow DOM doesn’t provide (1) and HTML Imports doesn’t provide 
(4). Thus, merely adding (2) and (3) won’t be enough; we’ll need to create 
parallels to API that Google has already shipped and frozen without addressing 
this problem.

Thus, when you say the following:

On Jul 1, 2014, at 5:34 PM, Domenic Denicola dome...@domenicdenicola.com 
wrote:

 From: Edward O'Connor [mailto:eocon...@apple.com] 
 
 But soft encapsulation is just as useless for explaining the platform 
 as no encapsulation at all.
 
 I think just as useless overstates your case. Type 2 allows you to hide 
 implementation details of your component from authors better than Type 1 
 does. Yes, it's not isolation for security purposes, so it doesn't get you 
 the whole way, but like Brendan said, we shouldn't let the perfect be the 
 enemy of the good.
 
 Well, but *for explaining the platform* it is just as useless. It may be 
 useful independently for authors who wish to protect against interference by 
 people who are afraid of feeling bad, but it is not useful for explaining the 
 platform.
 
 My personal perspective is that it is already a shame we are on track to have 
 two versions (in some sense) of web components: the existing one, and one 
 that explains the platform. It would be a shame to have a third in between 
 those two, that is unlike the existing one but also does not explain the 
 platform. So I guess along this axis I would strongly prefer “perfect” to 
 “good”, I suppose because I think what we have already is good.

I believe you are wrong. DOM-level encapsulation can be used as-is in 
combination with scripting encapsulation to provide a secure platform for 
untrusted components, and a way to explain built-in elements. Shadow DOM 
without DOM-level encapsulation cannot be used in combination with other 
primitives.

Thus, if we’d built Shadow DOM with DOM-level encapsulation in the first place, 
we could add the right other primitives to it and be able to explain built-in 
elements. But the way it was actually done can’t explain anything built-in.


I would really like to build a system that can provide security for mutually 
distrusting components and embedders, and that can explain the platform. 
However, I feel that my efforts to give feedback, both to nudge in this 
direction and otherwise, have been rejected and stonewalled. With Google 
controlling editorship of all the Web Components specs, shipping the only 
implementation so far -- unprefixed, freezing said implementation, and not 
being very open to non-Google input, and making all decisions in private via 
“consensus within Google”, it is hard to see how to meaningfully participate. 
If there is any possibility of some of those factors changing, I think it’s not 
too late to end up with a better component model and I’d be interested in 
helping.

Regards,
Maciej

P.S. Due to the factors stated in the last paragraph, I’m going to try to limit 
my participation in this thread as much as possible for the sake of my own 
mental equilibrium.







Re: Fallout of non-encapsulated shadow trees

2014-06-30 Thread Maciej Stachowiak

 On May 15, 2014, at 6:17 AM, Anne van Kesteren ann...@annevk.nl wrote:
 
 I'm still trying to grasp the philosophy behind shadow trees.
 Sometimes it's explained as exposing the primitives but the more I
 learn (rather slowly, this time at BlinkOn) the more it looks like a
 bunch of new primitives.
 
 We cannot explain <input> still, but since we allow going inside the
 shadow tree we now see the need for a composed tree walker (a way to
 iterate over a tree including its non-encapsulated interleaved shadow
 trees). In addition we see the need for a composed range of sorts, so
 selection across boundaries makes sense. Neither of these are really
 needed to explain bits of the existing platform.

I agree with the need for encapsulation in Web Components and have been arguing 
for it for a long time. Currently, despite agreement dating back several years, 
it doesn’t even offer a mode with better encapsulation. Now that the 
non-encapsulation version has shipped in Chrome, it may be hard to change other 
than by renaming everything.

Web Components as currently designed cannot explain the behavior of any 
built-in elements (except maybe those which can be explained with CSS alone).

Regards,
Maciej


Re: [April2014Meeting] Building an Issue and Bug focused agenda

2014-04-09 Thread Maciej Stachowiak

Would the WG be willing to put Web Components related topics on Friday when 
building the agenda? (Note: there are at least some Apple folks who will be 
there both days in any case.)

Cheers,
Maciej

On Apr 8, 2014, at 8:40 AM, Dimitri Glazkov dglaz...@chromium.org wrote:

 Actually, Friday sounds better for me too!
 
 :DG
 
 
 On Mon, Apr 7, 2014 at 10:55 PM, Maciej Stachowiak m...@apple.com wrote:
 
 Hi folks,
 
 I’d really appreciate it if we could decide whether Web Components related 
 topics will be discussed Thursday or Friday. It is the topic I am most 
 personally interested in, and I think I might only be able to spare the time 
 for one of the two days, so I’d appreciate knowing which day it will be (even 
 if we don’t work out the other schedule details). Friday would be more 
 convenient for me, for what it’s worth, but either is fine.
 
 Thanks,
 Maciej
 
 On Apr 2, 2014, at 3:50 AM, Arthur Barstow art.bars...@nokia.com wrote:
 
  Hi All,
 
  The [Agenda] page for the April 10-11 meeting includes a list of Potential 
  Topics and other than a meeting with some members of SysApps on April 10 to 
  discuss the Service Worker and Manifest specs, currently, all other time 
  slots are unallocated.
 
  Although we will allocate time slots for agenda topics at the meeting, I 
  think it would be helpful if each of the specs' stakeholders and leads - 
  not just Editors but also implementers and developers - would please 
  identify high priority Issues and Bugs in advance of the meeting. This 
  should help attendees' meeting preparations, reduce some of the overhead of 
  agenda tweaking and help make the meeting bug/issue focused.
 
  Among the specs that come to mind (including the two specs mentioned 
  above) are:
 
  * Web Components; led by Dimitri et al. - what are the high priority 
  issues, bugs, blockers, etc. to discuss?
 
  * File API; led by Jonas (will Arun join remotely?); which high priority 
  bugs need discussion; status of LC processing; next steps
 
  * Streams specs; led by Feras and Takeshi (will Domenic join remotely?); 
  status and plans for this effort; high priority issues
 
  * Editing; led by Ryosuke (will Aryeh join remotely?); plans; spec split?
 
  * Service Workers; led by Alex and Jungkee; high priority issues, bugs, 
  etc.; plan and expectations for first Technical Report publication
 
  * Manifest; led by Marcos; high priority issues, bugs, etc.
 
  * Other high priority topics?
 
  Feedback from All is welcome, as well as proposals for a day + time slot 
  for specific topics.
 
  -Thanks, AB
 
  [Agenda] https://www.w3.org/wiki/Webapps/April2014Meeting#Potential_Topics
 
 
 
 



Re: Indexed DB: Opening connections, versions, and priority

2014-02-27 Thread Maciej Stachowiak

On Feb 26, 2014, at 10:35 AM, Joshua Bell jsb...@google.com wrote:

 While looking at a Chrome bug [1], I reviewed the Indexed DB draft, section 
 3.3.1 [2] Opening a database:
 
 These steps are not run for any other connections with the same origin and 
 name but with a higher version
 
 And the note: This means that if two databases with the same name and 
 origin, but with different versions, are being opened at the same time, the 
 one with the highest version will attempt to be opened first. If it is able 
 to successfully open, then the one with the lower version will receive an 
 error.
 
 I interpret that as (and perhaps the spec should be updated to read): This 
 means that if two open requests are made to the database with the same name 
 and origin at the same time, the open request with the highest version will 
 be processed first. If it is able to successfully open, then the request with 
 the lower version will receive an error.
 
 So far as I can tell with a test [3], none of Chrome (33), Firefox (27), or 
 IE (10) implement this per spec. Instead of processing the request with the 
 highest version first, they process the first request that was received.
 
 Is my interpretation of the spec correct? Is my test [3] correct? If yes and 
 yes, should we update the spec to match reality?

I think the ambiguous language in the spec, and also in your substitute 
proposal, is “at the same time”. I would think if one request is received 
first, then they are not, in fact, at the same time. Indeed, it would be pretty 
hard for two requests to be exactly simultaneous.

If “at the same time” is actually supposed to mean something about receiving a 
new open request while an older one is still in flight in some sense, then the 
spec should say that, and specify exactly what it means. I would think the only 
observable time is actually delivering the callback. That would imply a rule 
that if you receive a request with a higher version number before the 
completion callback for a currently pending open request has been delivered, 
you need to cancel the attempt and try with the higher version (possibly 
retrying with the lower version again later).
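
To make the scenario concrete, here is roughly what the test exercises 
(database name and version numbers are arbitrary):

// Two open requests arrive "at the same time", i.e. the second is made
// before the first's completion callback has been delivered.
var reqLow = indexedDB.open("db", 2);  // received first
var reqHigh = indexedDB.open("db", 3); // received second, higher version

// Per the quoted spec text, reqHigh should be processed first, and if it
// succeeds, reqLow should fail with a VersionError.
reqLow.onerror = function () { /* what the spec text implies */ };
// Per Chrome 33 / Firefox 27 / IE 10, reqLow is instead processed first.
reqLow.onsuccess = function () { /* what shipping browsers do */ };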

Regards,
Maciej




Re: Form submission participation (was Re: Goals for Shadow DOM review)

2014-02-21 Thread Maciej Stachowiak

On Feb 21, 2014, at 11:04 AM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 20 Feb 2014, Jonas Sicking wrote:
 On Thu, Feb 20, 2014 at 2:51 PM, Edward O'Connor eocon...@apple.com wrote:
 
 Yeah, I think we just say that [form.elements] is the legacy feature 
 that only exposes built-in controls. form.getParticipants() works for 
 me.
 
 Agreed. [form.elements] is pretty crappy anyway in that it's live and 
 that it doesn't include input type=image elements for webcompat 
 reasons (stemming from an old Netscape bug :))
 
 I actually think form.elements is more important than form submission, as 
 far as custom form controls go.
 
 Pages with custom form controls are highly likely, I would guess, to be 
 interactive apps that don't have any unscripted form submission. I would 
 expect lots of XHR, WebSockets, and the like. However, scripts are hugely 
 simplified by form.elements. Instead of having to grab things by ID, you 
 can just name them, for example. This is even more true for event handlers 
 on form controls, which have the form in the scope chain. For example, one 
 of the big reasons for adding output was that it makes it easier to 
 update text -- instead of:
 
  oninput="document.getElementById('a').textContent = process(value)"
 
 ...you can write:
 
  oninput="a.value = process(value)"

I'd guess most sophisticated webapps do not use attribute-based event handlers 
(as opposed to addEventListener), so they would not get this convenient scoping 
benefit. If you're looking at an out-of-line function, then your comparison is:

this.a.value = process(value)
this.querySelector("#a").value = process(value)

which is a less dramatic difference. Also, the short version gives you the risk 
of namespace conflicts with the built-in methods and properties of form.


Regards,
Maciej




Re: Form submission participation (was Re: Goals for Shadow DOM review)

2014-02-21 Thread Maciej Stachowiak

On Feb 21, 2014, at 2:28 PM, Ian Hickson i...@hixie.ch wrote:

 On Fri, 21 Feb 2014, Maciej Stachowiak wrote:
 
 I'd guess most sophisticated webapps do not use attribute-based event 
 handlers (as opposed to addEventListener), so they would not get this 
 convenient scoping benefit.
 
 That's not clear to me. I mean, certainly today, with div soup, they 
 don't. But that's at least partly because there's no sane way to do it 
 when your markup isn't really declarative in any useful sense.
 
 When you have Web components that let you get the effect you want while 
 sticking to a terse markup language, it becomes much more feasible to 
 return to using inline event handlers.

Might be, but I'm not sure you can center a design on a hypothetical future 
change in web developer behavior.

 
 If you're looking at an out-of-line function, then your comparison is:
 
 this.a.value = process(value)
 this.querySelector("#a").value = process(value)
 
 which is a less dramatic difference.
 
 It's a pretty compelling difference, IMHO.
 
 
 Also, the short version gives you the risk of namespace conflicts with 
 the built-in methods and properties of form.
 
 You can do this instead if that feels like a real risk:
 
  this.elements.a.value = process(value)

Which is hardly an upgrade at all over:
this.querySelector("#a").value = process(value)

Cheers,
Maciej




WebKit interest in ServiceWorkers (was Re: [manifest] Utility of bookmarking to home screen, was V1 ready for wider review)

2014-02-17 Thread Maciej Stachowiak

On Feb 16, 2014, at 2:16 AM, Marcos Caceres mar...@marcosc.com wrote:

 
 
 On Sunday, February 16, 2014, Alex Russell slightly...@google.com wrote:
 On Sat, Feb 15, 2014 at 5:56 AM, Marcos Caceres w...@marcosc.com wrote:
 tl;dr: I strongly agree (and data below shows) that installable web apps 
 without offline capabilities are essentially useless.
 
 Things currently specified in the manifest are supposed to help make these 
 apps less useless (as I said in the original email, they by no means give us 
 the dream of installable web apps, just one little step closer) - even if 
 we had SW tomorrow, we would still need orientation, display mode, start URL, 
 etc.
 
 So yes, SW and manifest will converge... questions for us to decide on is 
 when? And if appcache can see us through this transitional period to having 
 SW support in browsers? I believe we can initially standardize a limited set 
 of functionality, while we continue to wait for SW to come to fruition, 
 which could take another year or two.
 
 SW will be coming to Chrome ASAP. We're actively implementing. Jonas or Nikhil 
 can probably provide more Mozilla context.
 
 I'm also interested in the WebKit and Microsoft context. I just don't know 
 who to ask there. Have there been any public signals of their level of 
 interest in SW? 

In general I think it's a good idea and I bet many other WebKit folks do too. 
We haven't yet had a chance to review thoroughly but I expect we'll like the 
general principles. I personally would like to see it become an official draft 
of the Working Group if it isn't already (the Publication Status page implies 
not, but perhaps I have missed something). If it is being actively implemented, 
it would be great to publish it as a Working Draft also, so we can get the IPR 
disclosures out of the way.

Regards,
Maciej



Re: [webcomponents] Imperative API for Insertion Points

2014-02-16 Thread Maciej Stachowiak
On Feb 15, 2014, at 4:57 PM, Ryosuke Niwa rn...@apple.com wrote:

 Hi all,
 
 I’d like to propose one solution for
 
 [Shadow]: Specify imperative API for node distribution
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=18429
 
 because the select content attribute doesn’t satisfy the needs of 
 framework/library authors to support conditionals in their templates, and 
 doesn’t satisfy my random image element use case below.
 
 == Use Case ==
 Random image element is a custom element that shows one of its child img 
 elements chosen uniformly at random. e.g. the markup of a document that uses 
 random-image-element may look like this:
 
 <random-image-element>
   <img src="">
   <img src="">
   <img src="">
 </random-image-element>
 
 random-image-element displays one out of the three img child elements when a 
 user clicks on it. As an author of this element, I could modify the DOM and 
 add the style content attribute directly on those elements, but I would 
 rather use shadow DOM to encapsulate the implementation.

I wanted to mention that this handles other use cases besides selecting a 
random child which are impossible (or at least very awkward) with 
<content select=""> as presently defined:

(1) A container component that can hold an arbitrary number of children, and 
wraps each of its light DOM children in a piece of markup inside the shadow 
DOM. Consider a buttonbar component that placed each child into a button, and 
styled them all specially:

<buttonbar>
  <i onclick="execCommand('italic')">I</i>
  <b onclick="execCommand('bold')">B</b>
  <u onclick="execCommand('underline')">U</u>
</buttonbar>

Imagine it would render like this (explaining why separate individual button 
elements won't cut it).

(2) A component that expects alternate labels and corresponding items, wants 
to parent them into different boxes, but wants to make sure they remain 
corresponding.

<tabview>
  <tabtitle>Puppies</tabtitle>
  <tabpane> lots of pictures of puppies </tabpane>
  <tabtitle>Kittens</tabtitle>
  <tabpane> lots of pictures of kittens </tabpane>
  <tabtitle>Sadness</tabtitle>
  <!-- no tab pane provided for this title yet -->
  <tabtitle>Bunnies</tabtitle>
  <tabpane> lots of pictures of bunnies ...</tabpane>
</tabview>

The component author would like this to render as a tabview with 4 tab labels 
at the top ("Puppies", "Kittens", "Sadness", "Bunnies") and 3 actual tab panes 
with one placeholder inserted (the puppy pane, the kitten pane, a blank 
placeholder, the bunny pane). But if my shadow DOM looks like this:

<div class="tab-label-bar"><content select="tabtitle"></div>
<div class="tab-holder"><content select="tabpane"></div>

then the pictures of bunnies would line up with the "Sadness" label, and I 
don't have an easy way to add the placeholder anywhere but at the beginning or 
the end of the tab panes.

(3) An element that selects some of its children conditionally. Let's say you 
have an element that will select different children depending on what features 
the browser supports:

<cond>
  <case condition="Modernizr.webgl">Spiffy WebGL view goes here!</case>
  <case condition="Modernizr.canvas">Passable 2-D canvas view goes here</case>
  <case default>Oh noes! You need more browser features to use this site!</case>
</cond>

The idea is to select in only exactly one of the cases - the first that 
matches. The others don't go into the shadow DOM. There isn't a great way to 
select only one of the "case" elements here (after having run the JS to 
evaluate which applies). The SVG "switch" element does something similar, as 
does Modernizr's normal class-based mode of operation.

I hope these examples give more motivation for why programmatically binding an 
insertion point may be useful.

Regards,
Maciej
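
P.S. Since this whole thread is about an API that does not yet exist, here is 
a purely hypothetical sketch of what "programmatically binding an insertion 
point" might look like for the random image case; the distribute() call is 
invented for illustration:

// Hypothetical imperative distribution, in v0-era registration syntax.
var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function() {
    var root = this.createShadowRoot();
    var insertionPoint = document.createElement("content");
    root.appendChild(insertionPoint);
    var imgs = this.querySelectorAll("img");
    var pick = imgs[Math.floor(Math.random() * imgs.length)];
    insertionPoint.distribute([pick]); // hypothetical: bind exactly one child
};
document.registerElement("random-image-element", { prototype: proto });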

Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-15 Thread Maciej Stachowiak

On Feb 14, 2014, at 7:16 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/14/14 10:07 PM, Ryosuke Niwa wrote:
 We most vigorously object to making the CSS style resolver depend on JS
 DOM object properties.
 
 Ryosuke, I think you misunderstood the proposal.  I'm pretty sure we all 
 object to having the CSS style resolver depend on anything that involves JS 
 properties.  But it's not necessarily a problem to have it depend on 
 internal node state that can be set via DOM APIs (e.g. it already depends on 
 DOM attribute values and whatnot).
 
 So in implementation terms, an element would just have an internal boolean 
 and setting this.shadowRoot = undefined or whatnot would set that boolean; 
 the CSS machinery would read that boolean.  That seems fairly workable to me; 
 whether it's the sort of API we want is a separate issue that I have no 
 opinion on.

The API we currently have doesn't actually allow this:

readonly attribute ShadowRoot? shadowRoot;

Now, we could change this attribute to not be readonly. However, that makes me 
wonder - what happens if you assign something other than null? Would this 
become an API to replace the shadow root? That seems like a bad idea. On the 
whole, this choice seems like messier API change than a parameter to 
createShadowRoot indicating the mode (which I thought the WG had rough 
consensus on over a year ago).

Regards,
Maciej





Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-13 Thread Maciej Stachowiak

On Feb 13, 2014, at 4:01 PM, Alex Russell slightly...@google.com wrote:

 On Thu, Feb 13, 2014 at 1:25 PM, Maciej Stachowiak m...@apple.com wrote:
 
 On Feb 12, 2014, at 4:04 PM, Alex Russell slightly...@google.com wrote:
 
 
 
 In discussion with Elliot and Erik, there appears to be an additional 
 complication: any of the DOM manipulation methods that aren't locked down 
 (marked non-configurable and filtered, ala caja) create avenues to get 
 elements from the Shadow DOM and then inject styles. E.g., even with Arv's 
 lockdown sketch: 
 
   https://gist.github.com/arv/8857167
 
 You still have most of your work ahead of you. DocumentFragment provides 
 tons of ins, as will all incidentally composed APIs.
 
 I'm not totally clear on what you're saying. I believe you're pointing out 
 that injecting hooks into the scripting environment a component runs in (such 
 as by replacing methods on global protototypes) can allow the shadow root to 
 be accessed even if no explicit access is given. I agree. This is not 
 addressed with simple forms of Type 2 encapsulation. It is a non-goal for 
 Type 2.
 
 I'd like to understand what differentiates simple forms of Type 2 
 encapsulation from other potential forms that still meet the Type 2 
 criteria. Can you walk me through an example and show how they would be used 
 in a framework?

Type 4 encapsulation would also meet the Type 2 criteria, for example.

The difference is that with simple Type 2 you do not attempt to protect 
against a pre-poisoned scripting environment. I would be satisfied with Type 4 
(if it's usable) but it is much harder to spec as you need it to give rigorous 
security guarantees.

  
 This is fraught.
 
 Calling something fraught is not an argument.
 
 Good news! I provided an argument in the following sentence to help 
 contextualize my conclusion and, I had hoped, lead you to understand why I'd 
 said that.

Your argument seems to be based on an incorrect premise.

  
 To get real ocap-style denial of access to the shadow DOM, we likely need to 
 intercept and check all DOM accesses. Is the system still usable at this 
 point? It's difficult to know. In either case, a system like caja *can* 
 exist without explicit support, which raises the question: what's the 
 goal? Is Type 2 defined by real denial of access? Or is the request for a 
 fig-leaf (perception of security)?
 
 Type 2 is not meant to be a security mechanism.
 
 I'd like to see an example of Type 2 isolation before I agree to that.
  
 It is meant to be an encapsulation mechanism. Let me give a comparison. Many 
 JavaScript programmers choose to use closures as a way to store private data 
 for objects. That is an encapsulation mechanism. It is not, in itself, a hard 
 security mechanism. If the caller can hook your global environment, and for 
 example modify commonly used Object methods, then they may force a leak of 
 your data.
 
 A closure is an iron-clad isolation mechanism for object ownership with 
 regards to the closing-over function object. There's absolutely no iteration 
 of the closed-over state of a function object; any such enumeration would be 
 a security hole (as with the old Mozilla object-as-param-to-eval bug). You 
 can't get the value of foo in this example except with the consent of the 
 returned function:
 
 var maybeVendFoo = function() {
   var foo = 1;
   return function(willMaybeCall) {
 if (/* some test */) { willMaybeCall(foo); }
   }
 };
 
 Leakage via other methods can be locked down by the first code to run in an 
 environment (caja does this, and nothing prevents it from doing this for SD 
 as it can pre-process/filter scripts that might try to access internals).

Caja is effective for protecting a page from code it embeds, since the page can 
have a guarantee that its code is the first to run. But it cannot be used to 
protect embedded code from a page, so for example a JS library cannot guarantee 
that objects it holds only in closure variables will not leak to the 
surrounding page. This is exactly analogous to shadow DOM leakage via DOM 
methods.
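
To make the analogy concrete, here is a minimal sketch (the poisoned method is 
chosen arbitrarily for illustration): if page script runs first and replaces a 
built-in that a library later calls, a value held only in a closure escapes.

var stolen = [];
var realAppendChild = Node.prototype.appendChild;
Node.prototype.appendChild = function(child) {
  stolen.push(child); // every node any library appends now leaks to the page
  return realAppendChild.call(this, child);
};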

 
 Getting to closure-strength encapsulation means neutering all potential 
 DOM/CSS access. Maybe I'm dense, but that seems stronger than the simple 
 form of Type 2.

That seems like a false conclusion to me given the above.

 
 If you're making the case that it might be helpful to folks trying to 
 implement Type 4 if the platform gave them a way to neuter access without so 
 much pre-processing/runtime-filtering, I could take that as an analog with 
 marking things non-configurable in ES. But it seems you think there's an 
 interim point that developers will use directly. I don't understand that and 
 would like to.

I don't believe Type 4 can be implemented on top of Type 1 with a 
pre-processing/runtime-filtering strategy. It would require additions to the 
platform. Filtering cannot create a two-way distrust boundary, only one-way. 
You would need a mutually trusted agent

Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-11 Thread Maciej Stachowiak

On Feb 11, 2014, at 3:29 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

  
 Dimitri, Maciej, Ryosuke - is there a mutually agreeable solution here?
 
 I am not exactly sure what problem this thread hopes to raise and whether there 
 is a need for anything other than what is already planned.

In the email Ryosuke cited, Tab said something that sounded like a claim that 
the WG had decided to do public mode only:

http://lists.w3.org/Archives/Public/www-style/2014Feb/0221.html
Quoting Tab:
 The decision to do the JS side of Shadow DOM this way was made over a
 year ago.  Here's the relevant thread for the decision:
 http://lists.w3.org/Archives/Public/public-webapps/2012OctDec/thread.html#msg312
 (it's rather long) and a bug tracking it
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=19562.

I can't speak for Ryosuke but when I saw this claim, I was honestly unsure 
whether there had been a formal WG decision on the matter that I'd missed. I 
appreciate your clarification that you do not see it that way.


Quoting Dimitri again:
 The plan is, per thread I mentioned above, is to add a flag to 
 createShadowRoot that hides it from DOM traversal APIs and relevant CSS 
 selectors: https://www.w3.org/Bugs/Public/show_bug.cgi?id=20144.

That would be great. Can you please prioritize resolving this bug[1]? It has 
been waiting for a year, and at the time the private/public change was made, it 
sounded like this would be part of the package.

It seems like there are a few controversies that are gated on having the other 
mode defined:
- Which of the two modes should be the default (if any)?
- Should shadow DOM styling primitives be designed so that they can work for 
private/closed components too?

Regards,
Maciej

[1] Incidentally, if you find the word "private" problematic, we could call the 
two modes "open" and "closed", then someday the third mode can be "secure" or 
"sandboxed".


Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-11 Thread Maciej Stachowiak

On Feb 11, 2014, at 4:04 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 On Tue, Feb 11, 2014 at 3:50 PM, Maciej Stachowiak m...@apple.com wrote:
 
 On Feb 11, 2014, at 3:29 PM, Dimitri Glazkov dglaz...@chromium.org wrote:
 
  
 Dimitri, Maciej, Ryosuke - is there a mutually agreeable solution here?
 
 I am not exactly sure what problem this thread hopes to raise and whether there 
 is a need for anything other than what is already planned.
 
 In the email Ryosuke cited, Tab said something that sounded like a claim that 
 the WG had decided to do public mode only:
 
 http://lists.w3.org/Archives/Public/www-style/2014Feb/0221.html
 Quoting Tab:
 The decision to do the JS side of Shadow DOM this way was made over a
 year ago.  Here's the relevant thread for the decision:
 http://lists.w3.org/Archives/Public/public-webapps/2012OctDec/thread.html#msg312
 (it's rather long) and a bug tracking it
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=19562.
 
 I can't speak for Ryosuke but when I saw this claim, I was honestly unsure 
 whether there had been a formal WG decision on the matter that I'd missed. I 
 appreciate your clarification that you do not see it that way.
 
 
 Quoting Dimitri again:
 The plan is, per thread I mentioned above, is to add a flag to 
 createShadowRoot that hides it from DOM traversal APIs and relevant CSS 
 selectors: https://www.w3.org/Bugs/Public/show_bug.cgi?id=20144.
 
 That would be great. Can you please prioritize resolving this bug[1]? It has 
 been waiting for a year, and at the time the private/public change was made, 
 it sounded like this would be part of the package.
 
 Can you help me understand why you feel this needs to be prioritized? I mean, 
 I don't mind, but it would be great if I had an idea on what's the driving 
 force behind the urgency?

(1) It blocks the two dependent issues I mentioned.
(2) As a commenter on a W3C spec and member of the relevant WG, I think I am 
entitled to a reasonably prompt level of response from a spec editor. This bug 
has been open since November 2012. I think I have waited long enough, and it is 
fair to ask for some priority now. If it continues to go on, then an outside 
observer might get the impression that failing to address this bug is 
deliberate stalling. Personally, I prefer to assume good faith, and I think you 
have just been super busy. But it would show good faith in return to address 
the bug soon.

Note: as far as I know there is no technical issue or required feedback 
blocking bug 20144. However, if there is any technical input you need, or if 
you would find it helpful to have a spec diff provided to use as you see fit, I 
would be happy to provide such. Please let me know!

 
 It seems like there are a few controversies that are gated on having the 
 other mode defined:
 - Which of the two modes should be the default (if any)?
 
 This is re-opening the old year-old discussion, settled in 
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/thread.html#msg800,
  right? 

I'm not sure what you mean by "settled". You had a private meeting and the 
people there agreed on what the default should be. That is fine. Even using 
that to make a provisional editing decision seems fine. However, I do not 
believe that makes it "settled" for purposes of the WG as a whole. In 
particular, I have chosen not to further debate which mode should be the 
default until both modes exist, something that I've been waiting on for a 
while. I don't think that means I lose my right to comment and to have my 
feedback addressed. 

In fact, my understanding of the process is this: the WG is required to address 
any and all feedback that comes in at any point in the process. And an issue is 
not even settled to the point of requiring explicit reopening unless there is a 
formal WG decision (as opposed to just an editor's decision based on their own 
read of input from the WG.)


  
 - Should shadow DOM styling primitives be designed so that they can work for 
 private/closed components too?
 
 Sure. The beauty of a hidden/closed mode is that it's a special case of the 
 open mode, so we can simply say that if a shadow root is closed, the 
 selectors don't match anything in that tree. I left the comment to that 
 effect on the bug.

Right, but that leaves you with no styling mechanism that offers more 
fine-grained control, suitable for use with closed mode. Advocates of the 
current styling approach have said we need not consider closed mode at all, 
because the Web Apps WG has decided on open mode. If what we actually decided 
is to have both (and that is my understanding of the consensus), then I'd like 
the specs to reflect that, so the discussion in www-style can be based on facts.

As a more basic point, mention of closed mode to exclude it from /shadow most 
likely has to exist in the shadow styling spec, not just the Shadow DOM spec. 
So there is a cross-spec dependency even if no new constructs are added.


Thank you for your

Re: Officially deprecating main-thread synchronous XHR?

2014-02-07 Thread Maciej Stachowiak

On Feb 7, 2014, at 9:32 AM, Scott González scott.gonza...@gmail.com wrote:

 What about developers who are sending requests as the page is unloading? My 
 understanding is that sync requests are required. Is this not the case?

Besides the proposed Beacon API, if you don't need to do a POST then you can 
use an img element created at unload time - a load initiated this way will 
generally survive unloading the page.
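
A minimal sketch of that technique (the logging endpoint is hypothetical):

window.addEventListener("unload", function() {
  var img = document.createElement("img");
  img.src = "https://example.com/log?event=unload"; // hypothetical endpoint
  // No need to insert the img into the document; setting src starts the load.
});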

 - Maciej

 
 On Friday, February 7, 2014, Anne van Kesteren ann...@annevk.nl wrote:
 On Fri, Feb 7, 2014 at 6:18 PM, Jonas Sicking jo...@sicking.cc wrote:
  Agreed. I think for this to be effective we need to get multiple browser
  vendors being willing to add such a warning. We would also need to add text
  to the various versions of the spec (whatwg and w3c).
 
 For what it's worth, that was done when Olli brought this up in #whatwg:
 http://xhr.spec.whatwg.org/#sync-warning
 
 
 --
 http://annevankesteren.nl/
 



Re: Officially deprecating main-thread synchronous XHR?

2014-02-07 Thread Maciej Stachowiak

On Feb 7, 2014, at 9:18 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Feb 7, 2014 8:57 AM, Domenic Denicola dome...@domenicdenicola.com 
 wrote:
 
  From: Olli Pettay olli.pet...@helsinki.fi
 
   And at least we'd improve responsiveness of those websites which stop 
   using sync XHR because of the warning.
 
  I think this is a great point that makes such an effort worthwhile even if 
  it ends up not leading to euthanizing sync XHR.
 
 Agreed. I think for this to be effective we need to get multiple browser 
 vendors being willing to add such a warning. We would also need to add text 
 to the various versions of the spec (whatwg and w3c).
 
 Which browsers are game? (I think mozilla is). Which spec editors are?
 

I usually hate deprecation warnings because I think they are ineffective and 
time-wasting. But this case may be worthy of an exception. In addition to 
console warnings in browsers and the alert in the spec, it might be useful to 
have a concerted documentation and outreach effort (e.g. blog posts on the 
topic) as an additional push to get Web developers to stop using sync XHR.

Regards,
Maciej





Re: Background sync push messaging: declarative vs imperative

2014-01-03 Thread Maciej Stachowiak

On Jan 2, 2014, at 9:33 AM, John Mellor joh...@google.com wrote:

 On Thu, Dec 19, 2013 at 9:32 PM, Maciej Stachowiak m...@apple.com wrote:
 
 On Dec 19, 2013, at 9:02 AM, John Mellor joh...@google.com wrote:
 
 [cross-posted to public-webapps and public-sysapps]
 
 A couple of us from Chrome have taken a holistic look at how we could add 
 standardized APIs for web apps to execute/sync in the background.
 
 This is an important capability, yet can be safely granted to low-privilege 
 web apps, as long as battery consumption and metered data usage are (very) 
 carefully limited.
 
 Running arbitrary JS in the background, without the relevant page open, and 
 with full networking capabilities, does not seem safe to me. How did you 
 conclude that this capability can safely be granted to low-privilege web apps?
 
 Good question (and great examples!). Low-privilege is relative - I did not 
 intend that any website could use this without asking; instead some 
 combination of heuristics like the following might be required:
 user added webapp to their homescreen (or bookmarked it?)
Bookmarking doesn't give the feel of an install operation so it seems poor to 
tie it to elevated privileges (as opposed to something like an install 
operation from a curated store).
 user frequently visits webapp domain
That's frighteningly implicit and would also prevent webapps from using this 
mechanism for sync until after the user has used the webapp more than some 
threshold, which gives a poor out-of-the-box experience if sync is a key 
feature.
 user accepted permission dialog/infobar
Explicit permission seems poor for this because the consequences are too subtle 
to understand easily (unlike, say, granting access to geolocation where the 
consequence is obvious and understandable).

 Some of the specific threats I'd be concerned about include:
 
 (1) An attacker could use this capability to spread a botnet that can be used 
 to mount DDOS attacks or to perform offline distributed computations that are 
 not in the user's interest, without needing a code execution exploit or 
 sandbox escape.
 
 Some mitigating factors:
 The same-origin policy would apply as usual, so you'd only be able to DDOS 
 hosts that allow CORS (and the app's own host).
The network request is issued before the browser knows whether CORS is allowed, 
so that would not be a mitigation (unless DDOS specifically requires non-GET 
methods or reading the response).
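
A minimal sketch of why (the target URL is hypothetical): the request below 
goes out on the wire whether or not the server opts in via CORS; the policy 
only gates reading the response.

var xhr = new XMLHttpRequest();
xhr.open("GET", "https://victim.example/endpoint"); // hypothetical target
xhr.send(); // the request is sent either way; only reading the response is blocked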
 Both network requests and computations (Bitcoin mining?) would be heavily 
 throttled by the browser in order to conserve battery; if we base the 
 throttling on how often the webapp is used, and penalise expensive background 
 syncs, it would be possible to enforce that total execution time in the 
 background is limited to some small multiple (possibly even 1x or below) of 
 the total execution time in the foreground.
It seems like such a limit would make it hard to use the feature for the 
posited use cases because it would be unreliable.
 
 (2) An attacker could use this capability to widely spread an innocuous 
 looking background service which, based on remote command-and-control, 
 attempts zero-day exploits against the user's browser at a time of the 
 attacker's choosing. This can happen without visiting the malicious site, 
 indeed, while a user doesn't have a browser open, and can possibly happen 
 faster than even the fastest-updating browser could plausibly patch.
  
 Yes, this is a subtle downside of the imperative approach. For frequently 
 used apps it probably doesn't make a huge difference; conversely if we 
 stopped syncing completely for apps that haven't been used in a long time, 
 this concern might be minimized.

For the posited use cases, the time limit probably needs to be long enough that 
your apps won't stop working after a reasonable but long vacation. That would 
still be a bigger window of opportunity for attack than a one-time visit.

  
 These both seem unsafe to me, compared to the security of the Web as it 
 stands today. And I don't think a permissions dialog would be a sufficient 
 mitigation, since it is hard to explain the danger to users.
 
 On the other hand, native apps can already do both of the above unsafe 
 activities, and installing them is effectively just a permissions dialog 
 away. App store review processes may filter out undesirable behavior in the 
 static parts of apps, but struggle to keep up with dynamically-updating 
 content, e.g. in embedded WebViews.

There are a few relevant differences between apps (particularly installed via an 
app store) and webpages besides review:

(1) Users have different expectations of what apps and webpages can do.
(2) Apps generally have a way to uninstall, after which they stop affecting 
your system. For webpages, there generally isn't an uninstall operation other 
than just closing the page.
(3) At least app stores require personally identifying information such as a 
credit card to get a developer

Re: Password managers and autocomplete='off'

2013-12-17 Thread Maciej Stachowiak

On Dec 17, 2013, at 11:21 AM, Joel Weinberger j...@chromium.org wrote:

 Thanks for the feedback, everyone. A few people at this point have suggested 
 emailing the wha...@whatwg.org list since this is really an HTML feature; 
 I'll do that in a few. In response to Ian's question, I'm referring to the W3 
 WebForms standard: http://www.w3.org/Submission/web-forms2/#the-autocomplete

That is pretty out of date and not a standards track document.


 
 
 Safari has similar behavior available as a non-default user preference: "Allow 
 AutoFill even for websites that request passwords not be saved". Even with 
 this setting enabled, some user interaction is required both to save and to 
 fill. We agree that on net, refusing to autofill passwords harms security.
 As of yesterday, in Chrome Canary, it is now a flag that users can turn on as 
 well. Not quite the menu option it is in Safari, but an option nonetheless. 
 
 We did not make it the default, because website owners have objections to 
 bypassing autofill=off as a default behavior. The primary types of objections 
 are:
 
 (1) Public computer scenarios. Accidentally saving a password on a shared 
 or public computer may endanger the user's account.
 The browser saving passwords seems like the least of concerns with public 
 computers :-P From our perspective, given that key loggers etc are a big 
 threat on a public computer, we don't consider the benefit here to outweigh 
 the overall user benefit of having managed passwords.
  
 (2) Casual walk-up attacks, for example temporary access to a friend's or 
 relative's computer. AKA friendly fraud.
 Similar to the public computer case, from our perspective, this benefit 
 doesn't seem to outweigh the benefit of having more complex, managed 
 passwords.
 
 I should also mention that, in general, Chrome explicitly does not consider a 
 physically local attacker in our threat model.

To be clear, I'm not saying I agree with these reasons. But these are the 
reasons some major financial or shopping sites state when asked why they use 
autocomplete=off. We have mentioned counter-arguments nearly identical to what 
you say, and in most cases, the site operators were not persuaded. (We did not 
say we explicitly don't consider physically local attackers, as we do give it 
some consideration; though obviously we cannot protect against a sufficiently 
determined attacker with physical access.)

I also forgot to mention:
(3) At least some sites believe that consumer finance regulations require them 
to use autocomplete=off. They believe their requirement to protect the user's 
authentication information and prevent it from being accessed by third parties 
extends to preventing passwords from being automatically stored on the user's 
computer.

 
 At least some website operators (often for financial or shopping sites) are 
 more worried about these threats than about the risk of users using lower 
 quality or shared passwords. This factor is magnified in Safari where we 
 suggest autogenerated per-site random passwords, but only if we can autofill 
 them.
 This is precisely our concern. We believe the priorities of website operators 
 are badly misplaced. There is a huge amount of evidence of the problems with 
 non-complex passwords (see various password database leaks of late), and 
 extremely low evidence of the prevalence of these various local attacker 
 threats. We applaud Safari's generated password management and believe it is 
 the best interest of our users to make their passwords more complex for all 
 of the web.
 
 It's also the case that by prioritizing the request of the website over the 
 desire of users to manage their passwords, we are violating the Priority of 
 Constituencies. I suppose that by offering a flag we are somewhat skirting 
 around that, but I don't consider it a real option.

I largely agree with this.

 
 I do not know if we can collectively assuage the worries of sites that use 
 autocomplete=off. So far, though, I'm not aware of any complaints about our 
 non-default setting to bypass it.
 I'm not sure we can convince website operators of anything like this without 
 making the change happen and demonstrating the world won't collapse. I 
 suspect that a lot of this is a fear of the unknown. 

The sites that are concerned about password autocomplete have at least two 
possible actions they could take that made us hesitant to flip the default:

(A) They could blacklist browsers that ignore autocomplete=off - this has been 
explicitly threatened in the past. It's also been claimed that, even if the 
sites did not want to do this, regulatory bodies would require them to do so 
because of point #3 above.

(B) They could reimplement password input from scratch (using script and not 
using a real password field) in a way that evades browser autocomplete 
heuristics. This would be bad because it would prevent even off-by-default 
autocomplete=off bypass from working.

For these reasons, we 

Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-14 Thread Maciej Stachowiak

On Dec 7, 2013, at 3:31 PM, Dominic Cooney domin...@google.com wrote:

 
 It's not reasonable to require developers to write the above super tricky 
 line of code. It involves at least four different concepts and is easy to get 
 subtly wrong.
 
 For example, when I wrote my first component (after much reading of docs), I 
 did this, which has at least two bugs[1]:
 
 shadow = this.createShadowRoot(this);
 shadow.appendChild(template.content.cloneNode(true));
 
 One of these bugs was copied from the Web Components Primer even.
 
 I assume you are referring to the Explainer (if not, could you please provide 
 a link.) I am the editor of the Explainer. Please file a bug. In some areas 
 the Explainer is out of date.

I meant the document titled "Introduction to Web Components". It's the document 
here: http://w3c.github.io/webcomponents/explainer/. At first I did not know 
what you meant by "Explainer" but I see now that the URL contains that.

 
 I do not find either of those lines in the Explainer. But I could surmise 
 that you are referring to using cloneNode instead of importNode. No doubt 
 your bug will explain what you mean.

The specific lines are:

this._root = this.createShadowRoot();
this._root.appendChild(template.content.cloneNode());

And yes, the issue is that cloneNode() is not a best practice since in general 
the template might be imported. (I don't think we should recommend to authors 
that they pick cloneNode vs importNode on a case-by-case basis). 
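
For concreteness, a sketch of the variant that works in both cases, whether the 
template lives in this document or in an import:

this._root = this.createShadowRoot();
this._root.appendChild(document.importNode(template.content, true));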

I'm not sure offhand how to file a bug against this document but if you point 
me to the right bug tracker and component I'll be glad to do so. It

Another bug I noticed: the callback is named readyCallback instead of 
createdCallback.

Note: it appears that everywhere a shadow root is created in the Introduction, 
it's immediately followed by cloning a template into its contents. The intro 
certainly seems to assume they will be used together, despite the objections to 
making that path more convenient. And similar boilerplate appears over and over.

As an alternate suggestion, and one that might dodge the subclassing issues, 
perhaps createShadowRoot could take an optional template argument and clone it 
automatically. Then this:

this._root = this.createShadowRoot();
this._root.appendChild(template.content.cloneNode());

Could turn into this:

this._root = this.createShadowRoot(template);

Which is quite a bit simpler, and involves fewer basic concepts.
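
A rough sketch of what that convenience could desugar to in terms of today's 
primitives (the method name here is hypothetical):

Element.prototype.createShadowRootFromTemplate = function(template) {
  var root = this.createShadowRoot();
  // importNode works whether the template came from this document or an import
  root.appendChild(document.importNode(template.content, true));
  return root;
};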


  
 In practice, the code delta is going to be more than one line, if you want to 
 avoid polluting the global namespace.
 
 Here is my attempt at writing a trivial component with Custom Elements, 
 Shadow DOM and Templates as they exist today. Note that this works in Chrome 
 Canary as of a week ago with experimental web features turned on:
 
 
 
 
 I sincerely tried to minimize verbosity as much as possible, without 
 introducing gratuitous namespace pollution or sacrificing clarity.
 
 Key lines:
 
 <!-- custom element definition -->
 <script>
 (function () {
   var prototype = Object.create(HTMLElement.prototype);
   var template = document.getElementById("rainbow-span-template");
 
   prototype.createdCallback = function() {
     var shadow = this.createShadowRoot(this);
     shadow.appendChild(template.content.cloneNode(true));
   };
 
   document.register("rainbow-span", {prototype: prototype});
 })();
 </script>
 
 
 Here is how it would look with the proposed 'template' parameter to 
 document.register:
 
 
 
 
 This replaces 9 rather non-obvious non-blank lines of code with 1:
 
 <!-- custom element definition -->
 <script>
 document.register("rainbow-span", {template: 
   document.getElementById("rainbow-span-template")});
 </script>
 
 
 Now, obviously things will get more complicated with components that do need 
 to run code on creation beyond the basics. But I believe there will be 
 substantial complexity savings. To make this work properly for more complex 
 cases, the createdCallback would need to take
 
 
 [1] (a) Forgot to var-declare shadow; (b) used cloneNode instead of importNode 
 which would be wrong in case of a template from an import document.
 
 
 
  Web components are part of the extensible web concept where we provide a 
  minimal subset of features necessary for opinionated frameworks to build 
  things on top. Supporting template in document.register is easily done in 
  script, so I believe it's better left to developers as they're better at 
  building frameworks than we are.
 
  In either case, that's something we can always add later so it shouldn't 
  stand in the way of the current spec.
 
 The spec as it is makes it gratuitously complicated to create a basic 
 component. As my examples above demonstrate, this would be a significant 
 improvement in developer ergonomics and reduction of learning curve for a 
 modest increase in 

Re: [custom elements] Improving the name of document.register()

2013-12-13 Thread Maciej Stachowiak

Thanks, Google folks, for considering a new name for document.register(). Though a 
small change, I think it will be a nice improvement to code clarity.

Since we're bikeshedding, let me add a few more notes in favor of defineElement 
for consideration:

1) In programming languages, you would normally say you define or declare a 
function, class structure, variable, etc. I don't know of any language where 
you register a function or class. Defining a custom element seems parallel to 
these cases. It is true that one registers a COM object. At first glance, 
that may seem like an analogous operation, but I don't think it really is. COM 
registration is usually done out of band by a separate tool, not by the program 
itself at runtime.

2) registerElement sounds kind of like it would take an instance of Element and 
register it for some purpose. defineElement sounds more like it is introducing 
a new kind of element, rather than registering a concrete instance of an 
element.

3) If we someday define a standardized declarative equivalent (note that I'm 
not necessarily saying we have to do so right now), defineElement has more 
natural analogs. For example, a "define" or "definition" element would convey 
the concept really well. But a "register" or "registration" or even 
"register-element" element would be a weird name.

4) The analogy to registerProtocolHandler is also not a great one, in my 
opinion. First, it has a different scope - it is on navigator and applies 
globally for the UI, rather than being on document and having scope limited to 
that document. Second, the true parallel to registerProtocolHandler would be 
registerElementDefinition. After all, it's not just called registerProtocol. 
That would be an odd name. But defineElement conveys the same idea as 
registerElementDefinition more concisely. The Web Components spec itself says 
"Element registration is a process of adding an element definition to a 
registry".

5) "Register with the parser" is not a good description of what 
document.register does, either. It has an effect regardless of whether elements 
are created with the parser. The best description is what the custom elements 
spec itself calls it

I feel that the preference for registerElement over defineElement may partly be 
inertia due to the old name being document.register. Think about it - is 
registerElement really the name you'd come up with, starting from a blank 
slate? I hope you will give more consideration to defineElement (which seems to 
be the most preferred candidate among the non-register-based names).

Thanks,
Maciej


On Dec 12, 2013, at 10:09 PM, Dominic Cooney domin...@google.com wrote:

 
 
 
 On Fri, Dec 13, 2013 at 2:29 AM, Brian Kardell bkard...@gmail.com wrote:
 
 On Dec 11, 2013 11:48 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 
  On Dec 11, 2013, at 6:46 PM, Dominic Cooney domin...@google.com wrote:
 
 ...
   On 11/12/2013 21:10, Edward O'Connor eocon...@apple.com wrote:
 
  Hi,
 
  The name register is very generic and could mean practically anything.
  We need to adopt a name for document.register() that makes its purpose
  clear to authors looking to use custom elements or those reading someone
  else's code that makes use of custom elements.
 
  I think the method should be called registerElement, for these reasons:
 
  - It's more descriptive about the purpose of the method than just 
  register.
  - It's not too verbose; it doesn't have any redundant part.
  - It's nicely parallel to registerProtocolHandler.
 
 
  I'd still refer declareElement (or defineElement) since registerElement 
  sounds as if we're registering an instance of element with something.  
  Define and declare also match SGML/XML terminologies.
 
  - R. Niwa
 
 
 Define/declare seem a little confusing because we are in the imperative space 
 where these have somewhat different connotations.  It really does seem to me 
 that conceptually we are registering (connecting the definition) with the 
 parser or something.  For whatever that comment is worth. 
 
 While there's no consensus, I think this thread expresses a slightly stronger 
 preference for registerElement than other proposals. I have filed this bug 
 suggesting registerElement.
 
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=24087
 
 Dominic



Re: Password managers and autocomplete='off'

2013-12-13 Thread Maciej Stachowiak

On Dec 12, 2013, at 11:20 AM, Joel Weinberger j...@chromium.org wrote:

 Hi all. For a while now, we have wanted on Chrome to ignore 
 autocomplete='off' for password fields for the password manager. We believe 
 that the current respect for autocomplete='off' for passwords is, in fact, 
 harming the security of users by making browser password managers 
 significantly less useful than they should be, thus discouraging their 
 adoption, making it difficult for users to generate, store, and use more 
 complex or (preferably) random passwords. Additionally, the added benefit of 
 autocomplete='off' for security is questionable at best.
 
 We believe that our implementation of this ignore functionality actually 
 falls within the letter of the web-forms standard. A user's password save for 
 an autocomplete='off' field requires a user interaction to save (we do not do 
 it automatically), which ultimately is not different than a copy/paste 
 approach from the user. Additionally, we have taken precautions against 
 password harvesting via XSS. We do not autofill into the DOM until the user 
 has made a gesture (click, keypress, etc.) within the page, and we never 
 autofill into iframe forms (we wait for a user to explicitly select their 
 username from a dropdown).
 
 Part of the issue here is that autocomplete='off' is overloaded. It is 
 simultaneously meant to denote a secure or sensitive field *or* that a 
 field's completion will be handled by the application itself. Thus, we are 
 not proposing to ignore autocomplete='off' for our form fill as there are 
 many places where the application itself creates a suggestion box, and we 
 have no desire to override that functionality. Rather, we care about the 
 sensitive use, which in the case of password fields, is already denoted by 
 the input type='password'.
 
 In the latest version of Chrome (currently in our Canary build), we have 
 already implemented this feature. However, we will putting in behind a flag 
 shortly so that it is not the default, but to still allow users to opt into 
 this. We hope to make this the default for users in the not very distant 
 future.
 
 What are this group's thoughts on this? Any particular concerns with this 
 approach? While we believe that we are within the letter of the standards in 
 our approach, we would love to see this made explicitly clear in the 
 standards and hopefully see other browsers adopt this in the future, as we 
 believe it is in the security interests of all users.

Safari has similar behavior available as a non-default user preference: "Allow 
AutoFill even for websites that request passwords not be saved". Even with this 
setting enabled, some user interaction is required both to save and to fill. We 
agree that on net, refusing to autofill passwords harms security.

We did not make it the default, because website owners have objections to 
bypassing autocomplete=off as a default behavior. The primary types of objections 
are:

(1) Public computer scenarios. Accidentally saving a password on a shared or 
public computer may endanger the user's account.
(2) Casual walk-up attacks, for example temporary access to a friend's or 
relative's computer. AKA friendly fraud.

At least some website operators (often for financial or shopping sites) are 
more worried about these threats than about the risk of users using lower 
quality or shared passwords. This factor is magnified in Safari where 
we suggest autogenerated per-site random passwords, but only if we can autofill 
them.

I do not know if we can collectively assuage the worries of sites that use 
autocomplete=off. So far, though, I'm not aware of any complaints about our 
non-default setting to bypass it.

Regards,
Maciej



Custom form elements (was Re: [webcomponents] Inheritance in Custom Elements (Was Proposal for Cross Origin Use Case and Declarative Syntax))

2013-12-13 Thread Maciej Stachowiak

On Dec 7, 2013, at 4:38 PM, Dominic Cooney domin...@google.com wrote:

 
 
 Built-in HTML elements have lots of hooks to modify their behaviour (for 
 example, HTMLVideoElement's autoplay attribute.) The analogy is extending a 
 Java class which has private and public final members, but no protected or 
 public non-final ones.
 
 If someone were to make proposals about adding more hooks to the web platform 
 to enable more subtyping use cases (for example, a protocol for participating 
 in form submission) I would look forward to working with them on those 
 proposals.

Let's say you have the ability to define a custom element inheriting from a 
form element, but completely replace its rendering and behavior with custom 
shadow DOM.

Is that actually sufficient to participate correctly in form submission? I 
don't think it is. In particular, I don't see how a custom element could 
inherit from HTMLInputElement, fully support the interface, and correctly 
submit a value that is specified via a completely custom mechanism.

Also, even if it worked, inheriting from HTMLInputElement is a particularly 
clunky approach to participating in form submission. To actually correctly 
support the interface, you need your component to support every input type. But 
that is way overkill if you only want to replace a single kind of input, or 
define a new type of your own. The more convenient approach would be to ignore 
type-switching and as a consequence make a subclass that does not respect the 
Liskov Substitution Principle and is therefore bad OO.

I think giving custom elements a specific API for participating in form 
submission, to enable defining new kinds of form elements, would be a better 
approach to this problem and ultimately easier to use. It is also much more 
relevant as a way to explain how the way the Web platform works. Built-in 
form elements participate in form submission by having special hooks to 
participate in form submission, not by inheriting from other form elements.

Regards,
Maciej




Re: [webcomponents] Inheritance in Custom Elements (Was Proposal for Cross Origin Use Case and Declarative Syntax)

2013-12-13 Thread Maciej Stachowiak

On Dec 10, 2013, at 12:24 PM, Elliott Sprehn espr...@chromium.org wrote:

 
 On Tue, Dec 10, 2013 at 8:00 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Tue, Dec 10, 2013 at 3:54 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 12/10/13 10:34 AM, Anne van Kesteren wrote:
  E.g. the dialog's close() method won't work as defined
  right now on a subclass of HTMLDialogElement.
 
  Why not?
 
  I assumed that actual ES6 subclassing, complete with invoking the right
  superclass @@create, would in fact produce an object for which this would
  work correctly.  At least any situation that doesn't lead to that is a
  UA/spec bug.
 
 Well for one because the specification at the moment talks about a
 dialog element and does not consider the case where it may have been
 subclassed. The pending dialog stack is also for dialog elements
 only, not exposed in any way, etc. The way the platform is set up at
 the moment is very defensive and not very extensible.
 
 
 When extending native elements like that you use type extensions, so it'd be 
 <dialog is="my-subclass"> and the tagName is still "DIALOG". Registering 
 something that extends HTMLDialogElement but isn't a type extension of 
 <dialog> does not work, in the same way that doing __proto__ = HTMLDivElement 
 doesn't magically make you into a <div> today.

The document.register method does not seem to support what you describe as 
currently spec'd, but does have an API doing the thing that won't actually 
work, i.e. registering my-subclass to be a subclass of HTMLDialogElement.

Regards,
Maciej




Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-13 Thread Maciej Stachowiak

On Dec 9, 2013, at 11:13 AM, Scott Miles sjmi...@google.com wrote:

 Domenic Denicola a few messages back gave a highly cogent explanation of the 
 exact line of thinking arrived at last time we went through all this 
 material. 
 
 I'm not wont to try to summarize it here, since he said it already better 
 there. Perhaps the short version is: nobody knows what the 'standard use 
 case' is yet.
 
 In previous adjudications, the straw that broke that camel's back was with 
 respect to handling auto-generation with inheritance. Shadow-roots may need 
 to be generated for each entry in the inheritance chain. Having the system 
 perform this task takes it out of the control of the user's code, which 
 otherwise has ability to modulate calls to super-class methods and manage 
 this process.
 
 class XFoo {
   constructor_or_createdCallback: function() {
     // my shadowRoot was auto-generated
     this.doUsefulStuffLikeDatabinding(this.shadowRoot);
   }
 }
 
 class XBar extends XFoo {
   constructor_or_createdCallback: function() {
     super(); // uh-oh, super call operates on wrong shadowRoot
   }
 }

If the shadow root is optionally automatically generated, it should probably be 
passed to the createdCallback (or constructor) rather than made a property 
named shadowRoot. That makes it possible to pass a different shadow root to 
the base class than to the derived class, thus solving the problem.

Using an object property named shadowRoot would be a bad idea in any case 
since it automatically breaks encapsulation. There needs to be a private way to 
store the shadow root, either using ES6 symbols, or some new mechanism specific 
to custom elements. As it is, there's no way for ES5 custom elements to have 
private storage, which seems like a problem. They can't even use the closure 
approach, because the constructor is not called and the methods are expected to 
be on the prototype. (I guess you could create per-instance copies of the 
methods closing over the private data in the created callback, but that would 
preclude prototype monkeypatching of the sort built-in HTML elements allow.)
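
A hedged sketch combining the two suggestions (a createdCallback that receives 
the root as an argument is hypothetical; nothing like it is spec'd):

var shadowKey = Symbol("shadowRoot");   // private: unreachable via string keys

prototype.createdCallback = function(root) {
  // Each class in the inheritance chain is handed its own generated root,
  // rather than reading a public this.shadowRoot property.
  this[shadowKey] = root;
  root.appendChild(template.content.cloneNode(true));
};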

Regards,
Maciej




Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-13 Thread Maciej Stachowiak

On Dec 7, 2013, at 1:29 PM, Domenic Denicola dome...@domenicdenicola.com 
wrote:

 From: Brendan Eich [mailto:bren...@secure.meer.net] 
 
 Requiring this kind of boilerplate out of the gate is not:
 
 this.createShadowRoot().appendChild(document.importNode(template.contents));
 
 Wanting to avoid this kind of boilerplate is not a stab in the dark. 
 Why can't we avoid it, even with separate specs that compose well? Part of 
 composing well is not requiring excessive boilerplate.
 
 Part of the issue is that I don't think that's the boilerplate people will be 
 using, uniformly. Adding a line to eliminate *that* boilerplate doesn't help 
 all the other cases, e.g. ones without a shadow DOM but instead a normal DOM, 
 or ones which don't use a template, or which don't use an imported template, 
 or which use multiple nodes. There are lots of ways to create a 
 fully-functioning custom element, and assuming that it will be done via a 
 single imported template element put into a shadow DOM seems like a stab in 
 the dark to me.

In what way does that require using an imported template element specifically, 
as opposed to any template whatsoever? Note that importNode will DTRT whether 
the template comes from another document or not. Or even if I'm mistaken and it 
does not, you could specify that the template contents are cloned into the 
right document whether it was imported or not.

Note also that using normal DOM instead of shadow DOM can be served by never 
using the optional feature (or by making a lightdom-template version if using 
custom elements with normal DOM becomes popular).

Constructing shadow DOM programmatically can also be served by not using the 
optional feature, or by 


 The other aspect of my critique was that scenario-solving this particular use 
 case isn't very useful in light of the large number of other things people 
 will be building out of our building blocks. Why assume this scenario is more 
 common than, say, HTML imports + template elements? Why not add sugar for 
 that? There's a lot of possible combinations that could benefit from some 
 unifying sugar, but we just don't know which of them are actually useful yet.

It's fine to allow those other things. It just seems like using all three of 
Custom Elements, Shadow DOM and Templates could be smoothed out without 
precluding those other options. It also seems like Polymer uses those three 
things together, so it seems unlikely that no one will do it. Using these three 
technologies together is very natural and pretending otherwise seems like an 
excessive level of agnosticism to me.

BTW, I do think using HTML imports + template elements needs more sugar (there 
is no easy way afaik to instantiate a template defined in an imported 
document), but I am not sure it serves the same use cases.
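
For reference, a sketch of what instantiating a template from an imported 
document took at the time (the selector and the template id are hypothetical):

// Find the imported document behind a <link rel="import">, locate the
// template inside it, and import its content into this document.
var importDoc = document.querySelector('link[rel="import"]').import;
var template = importDoc.getElementById("widget-template");
document.body.appendChild(document.importNode(template.content, true));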

Regards,
Maciej




Re: [custom elements] Improving the name of document.register()

2013-12-12 Thread Maciej Stachowiak
I like defineElement a lot too. I think it gets to the heart of this method's 
potential - the ability to define your own elements.

 - Maciej

 On Dec 11, 2013, at 6:46 PM, Dominic Cooney domin...@google.com wrote:
 
 On Thu, Dec 12, 2013 at 5:17 AM, pira...@gmail.com pira...@gmail.com wrote:
 I have seen registerProtocolHandler() and it's being discused 
 registerServiceWorker(). I believe registerElementDefinition() or 
 registerCustomElement() could help to keep going on this path.
 
 Send from my Samsung Galaxy Note II
 
  On 11/12/2013 21:10, Edward O'Connor eocon...@apple.com wrote:
 
 Hi,
 
 The name register is very generic and could mean practically anything.
 We need to adopt a name for document.register() that makes its purpose
 clear to authors looking to use custom elements or those reading someone
 else's code that makes use of custom elements.
 
 I support this proposal.
  
 Here are some ideas:
 
 document.defineElement()
 document.declareElement()
 document.registerElementDefinition()
 document.defineCustomElement()
 document.declareCustomElement()
 document.registerCustomElementDefinition()
 
 I like document.defineCustomElement() the most, but
 document.defineElement() also works for me if people think
 document.defineCustomElement() is too long.
 
 I think the method should be called registerElement, for these reasons:
 
 - It's more descriptive about the purpose of the method than just register.
 - It's not too verbose; it doesn't have any redundant part.
 - It's nicely parallel to registerProtocolHandler.
 
 If I had to pick from the list Ted suggested, I think defineElement is the 
 best of that bunch and also an improvement over just register. It doesn't 
 line up with registerProtocolHandler, but there's some poetry to 
 defineElement/createElement. 
 
 
 Ted
 
 P.S. Sorry for the bikeshedding. I really believe we can improve the
 name of this function to make its purpose clear.
 
 I searched for bugs on this and found none; I expect this was discussed but I 
 can't find a mail thread about it. The naming of register is something that's 
 been on my mind so thanks for bringing it up.
 
 Dominic


Re: Beacon API

2013-02-15 Thread Maciej Stachowiak

On Feb 15, 2013, at 3:51 AM, Ian Fette (イアンフェッティ) ife...@google.com wrote:

 Anne,
 
 Both Chrome and Safari support the ping attribute. I am not sure about IE, I 
 believe Firefox has it disabled by default. FWIW I wouldn't consider this a 
 huge failure, if anything I'd expect over time people to use ping where it's 
 supported and fallback where it's not, resulting in the same privacy tradeoff 
 for users of all browsers but better performance for some browsers than 
 others, which will eventually lead to a predictable outcome...

Are there any websites that use it, at least in the browsers that support it? 
Relative lack of web developer adoption so far makes it seem like a bad bet to 
make more features that do the same thing, unless we're confident that we know 
what was wrong with <a ping> in the first place.

 - Maciej





Re: Beacon API

2013-02-15 Thread Maciej Stachowiak

On Feb 15, 2013, at 9:21 PM, Maciej Stachowiak m...@apple.com wrote:

 
 On Feb 15, 2013, at 3:51 AM, Ian Fette (イアンフェッティ) ife...@google.com wrote:
 
 Anne,
 
 Both Chrome and Safari support the ping attribute. I am not sure about IE, I 
 believe Firefox has it disabled by default. FWIW I wouldn't consider this a 
 huge failure, if anything I'd expect over time people to use ping where it's 
 supported and fallback where it's not, resulting in the same privacy 
 tradeoff for users of all browsers but better performance for some browsers 
 than others, which will eventually lead to a predictable outcome...
 
 Are there any websites that use it, at least in the browsers that support it? 
 Relative lack of web developer adoption so far makes it seem like a bad bet 
 to make more features that do the same thing, unless we're confident that we 
 know what was wrong with <a ping> in the first place.

BTW as far as I know the best current nonblocking technique to phone home on 
unload is to create an img in your unload handler pointing to the ping URL, 
this will result in reliable delivery without blocking at least in IE and 
WebKit-based browsers. I've found it hard to convince even knowledgable web 
developers to use this technique or a ping over synchronous XHR, even sites 
that are otherwise willing to do Safari-specific optimizations. I am not sure 
why sync XHR in unload is so tantalizing.

Regards,
Maciej




Re: Defenses against phishing via the fullscreen api (was Re: full screen api)

2012-12-19 Thread Maciej Stachowiak

On Dec 18, 2012, at 6:44 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Oct 23, 2012 at 12:50 AM, Maciej Stachowiak m...@apple.com wrote:
 Based on all this, I continue to think that requesting keyboard access
 should involve separate API, so that it can be feature-detected and given
 different security treatment by vendors as desired. This is what Flash does,
 and they have the most experience dealing with the security implications of
 fullscreen on the Web.
 
 Gecko and Chrome have indicated they do not desire this distinction.
 You have indicated your desire to maybe enable keyboard access in the
 future, but you do not have a thought out UI. Given this is the data
 we are working with it seems unwise to change direction at this point.
 
 The specification is modeled after Gecko and Chrome and very much
 intends to have keyboard access working. As per usual, everything that
 is not restricted is expected to work.

That seems like a bad basis to make a decision about a security issue.

 
 I am willing to add some wording to the security section to make the
 risks of keyboard access more clear. Does anyone have some suggested
 wording?

What would be the point? Web developers can't protect themselves from phishing 
attacks by other sites, and as you state the spec currently does not allow UAs 
to limit keyboard access. So who is the audience for such a security 
considerations warning? End users?


At minimum, I'd like the spec to explicitly allow not providing full keyboard 
access, as requested in my original message on this thread:

 Despite both of these defenses having drawbacks, I think it is wise for 
 implementations to implement at least one of them. I think the spec should 
 explicitly permit implementations to apply either or both of these 
 limitations, and should discuss their pros and cons in the Security 
 Considerations section.

As you point out, the spec does not currently allow this behavior. Are you 
rejecting this request? If so, why? Safari has had this behavior since forever 
and is unlikely to change in the foreseeable future, so it seems pointless to 
disallow it.

And given this difference in UA behavior, it seems useful to let web developers 
feature-detect the difference in behavior somehow.

Regards,
Maciej




Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-12-01 Thread Maciej Stachowiak

On Nov 29, 2012, at 12:24 PM, Elliott Sprehn espr...@gmail.com wrote:

 
 On Wed, Nov 28, 2012 at 2:51 PM, Maciej Stachowiak m...@apple.com wrote:
 
 Does this support the previously discussed mechanism of allowing either 
 public or private components? I'm not able to tell from the referenced 
 sections.
  
 Can you explain the use case for wanting private shadows that are not 
 isolated?

I still don't entirely buy the use cases for making shadow dom contents public. 
But I thought the rough consensus in the earlier discussion was to support both 
modes, not to support public components only.

Regards,
Maciej



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-28 Thread Maciej Stachowiak

Does this support the previously discussed mechanism of allowing either public 
or private components? I'm not able to tell from the referenced sections.

 - Maciej

On Nov 28, 2012, at 1:17 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 As of http://dvcs.w3.org/hg/webcomponents/rev/0714c60f265d, there's now an 
 API to traverse the shadow trees:
 
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html#api-shadow-aware-shadow-root
 http://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/shadow/index.html#api-html-shadow-element-older-shadow-root
 
 Please let me know if I goofed anything up. File bugs or yell at me.
 
 :DG
 
 
 On Fri, Nov 9, 2012 at 10:17 AM, Dimitri Glazkov dglaz...@chromium.org 
 wrote:
 On Thu, Nov 8, 2012 at 9:26 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 11/8/12 9:28 AM, Elliott Sprehn wrote:
 
  If you're worried about malicious attacks on your widget, shadows being
  private is not enough. You need a whole new scripting context.
 
  Er... yes, you do.  Do widgets not get that?  If not, that's pretty
  broken...
 
 Having a separate scripting context is certainly useful for a certain
 class of widgets (see Like/+1 button discussion).  That's why we have
 this whole notion of the isolated shadow trees. This is also a
 fundamental requirement for enabling UA controls to be built with
 script (I _just_ had a discussion with a fellow WebKit engineer who
 asked for that).
 
 However, for a large class of use cases, mostly represented by the
 libraries/frameworks, the separate scripting context is an unnecessary
 barrier. Libraries like bootstrap, quickui, and x-tags are eager to
 start using shadow DOM to delineate lightweight, purely functional
 boundaries between composable bits in the same document. For these
 developers, where things like:
 a) examining (and sometimes modifying), say a currently selected tab
 in a tab manager;
 b) having a central state stored in the document;
 c) nesting and reusing bits across different libraries;
 are all expected and counted upon. Adding a scripting context boundary
 here is just a WTF! stumbling block.
 
 :DG
 



Re: [webcomponents]: Changing API from constructable ShadowRoot to factory-like

2012-11-09 Thread Maciej Stachowiak

I think a factory function is better here for the reasons Dimitri stated. But I 
also agree that an addFoo function returning a new object seems strange. I 
think that createShadowRoot may be better than either option.

 - Maciej

On Nov 8, 2012, at 11:42 AM, Erik Arvidsson a...@chromium.org wrote:

 addShadowRoot seems wrong to me too. Usually add* methods take an
 argument of something that is supposed to be added to the context
 object.
 
 If we are going with a factory function I think that createShadowRoot
 is the right name even though create methods have a lot of bad history
 in the DOM APIs.
 
 On Thu, Nov 8, 2012 at 1:01 PM, Elliott Sprehn espr...@google.com wrote:
 True, though that's actually one character longer, probably two with normal
 formatting ;P
 
 new ShadowRoot(element,{
 element.addShadowRoot({
 
 I'm more concerned about the constructor with irreversible side effects of
 course.
 
 - E
 
 
 On Thu, Nov 8, 2012 at 9:57 AM, Dimitri Glazkov dglaz...@google.com wrote:
 
 That _is_ pretty nice, but we can add this as a second argument to the
 constructor, as well:
 
 root = new ShadowRoot(element, {
  applyAuthorSheets: false,
  resetStyleInheritance: true
 });
 
 At this point, the stakes are primarily in aesthetics... Which makes
 the whole question so much more difficult to address objectively.
 
 :DG
 
 On Thu, Nov 8, 2012 at 9:54 AM, Elliott Sprehn espr...@google.com wrote:
 The real sugar I think is in the dictionary version of addShadowRoot:
 
 root = element.addShadowRoot({
  applyAuthorSheets: false,
  resetStyleInheritance: true
 })
 
 
 On Thu, Nov 8, 2012 at 9:49 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
 
 Sure. Here's a simple example without getting into traversable shadow
 trees (those are still being discussed in a different thread):
 
 A1) Using constructable ShadowRoot:
 
 var element = document.querySelector('div#foo');
 // let's add a shadow root to element
 var shadowRoot = new ShadowRoot(element);
 // do work with it..
 shadowRoot.applyAuthorSheets = false;
 shadowRoot.appendChild(myDocumentFragment);
 
 A2) Using addShadowRoot:
 
 var element = document.querySelector('div#foo');
 // let's add a shadow root to element
 var shadowRoot = element.addShadowRoot();
 // do work with it..
 shadowRoot.applyAuthorSheets = false;
 shadowRoot.appendChild(myDocumentFragment);
 
 Now with traversable shadow trees:
 
 B1) Using constructable ShadowRoot:
 
 var element = document.querySelector('div#foo');
 alert(element.shadowRoot); // null
 var root = new ShadowRoot(element);
 alert(root === element.shadowRoot); // true
 var root2 = new ShadowRoot(element);
 alert(root === element.shadowRoot); // false
 alert(root2 === element.shadowRoot); // true
 
 B2) Using addShadowRoot:
 
 var element = document.querySelector('div#foo');
 alert(element.shadowRoot); // null
 var root = element.addShadowRoot();
 alert(root === element.shadowRoot); // true
 var root2 = element.addShadowRoot();
 alert(root === element.shadowRoot); // false
 alert(root2 === element.shadowRoot); // true
 
 :DG
 
 On Thu, Nov 8, 2012 at 9:42 AM, Maciej Stachowiak m...@apple.com
 wrote:
 
 Could you please provide equivalent code examples using both
 versions?
 
 Cheers,
 Maciej
 
 On Nov 7, 2012, at 10:36 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
 
 Folks,
 
 Throughout the year-long (whoa!) history of the Shadow DOM spec,
 various people commented on how odd the constructable ShadowRoot
 pattern was:
 
 var root = new ShadowRoot(host); // both creates an instance *and*
 makes an association between this instance and host.
 
 People (I cc'd most of them) noted various quirks, from the
 side-effecty constructor to the relatively uncommon style of the API.
 
 I once was of the strong opinion that having a nice, constructable
 object has better ergonomics and would overcome the mentioned code
 smells.
 
 But... As we're discussing traversable shadows and the possibility
 of
 having Element.shadowRoot, the idea of changing to a factory pattern
 now looks more appealing:
 
 var element = document.querySelector('div#foo');
 alert(element.shadowRoot); // null
 var root = element.addShadowRoot({ resetStyleInheritance: true });
 alert(root === element.shadowRoot); // true
 var root2 = element.addShadowRoot();
 alert(root === element.shadowRoot); // false
 alert(root2 === element.shadowRoot); // true
 
 You gotta admit this looks very consistent and natural relative to
 how
 DOM APIs work today.
 
 We could still keep constructable object syntax as alternative
 method
 or ditch it altogether and make calling constructor throw an
 exception.
 
 What do you think, folks? In the spirit of last night's events,
 let's
 vote:
 
 1) element.addShadowRoot rocks! Let's make it the One True Way!
 2) Keep ShadowRoot constructable! Factories stink!
 3) Let's have both!
 4) element.addShadowRoot, but ONLY if we have traversable shadow
 trees
 5) Kodos.
 
 :DG
 
 P.S. I would like to retain the atomic quality of the operation:
 instantiate+associate in one go. There's a whole forest of problems
 awaiting those who contemplate detached shadow roots.

Re: [webcomponents]: Changing API from constructable ShadowRoot to factory-like

2012-11-08 Thread Maciej Stachowiak

Could you please provide equivalent code examples using both versions?

Cheers,
Maciej

On Nov 7, 2012, at 10:36 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Folks,
 
 Throughout the year-long (whoa!) history of the Shadow DOM spec,
 various people commented on how odd the constructable ShadowRoot
 pattern was:
 
 var root = new ShadowRoot(host); // both creates an instance *and*
 makes an association between this instance and host.
 
 People (I cc'd most of them) noted various quirks, from the
 side-effecty constructor to the relatively uncommon style of the API.
 
 I once was of the strong opinion that having a nice, constructable
 object has better ergonomics and would overcome the mentioned code
 smells.
 
 But... As we're discussing traversable shadows and the possibility of
 having Element.shadowRoot, the idea of changing to a factory pattern
 now looks more appealing:
 
 var element = document.querySelector('div#foo');
 alert(element.shadowRoot); // null
 var root = element.addShadowRoot({ resetStyleInheritance: true });
 alert(root === element.shadowRoot); // true
 var root2 = element.addShadowRoot();
 alert(root === element.shadowRoot); // false
 alert(root2 === element.shadowRoot); // true
 
 You gotta admit this looks very consistent and natural relative to how
 DOM APIs work today.
 
 We could still keep constructable object syntax as alternative method
 or ditch it altogether and make calling constructor throw an
 exception.
 
 What do you think, folks? In the spirit of last night's events, let's vote:
 
 1) element.addShadowRoot rocks! Let's make it the One True Way!
 2) Keep ShadowRoot constructable! Factories stink!
 3) Let's have both!
 4) element.addShadowRoot, but ONLY if we have traversable shadow trees
 5) Kodos.
 
 :DG
 
 P.S. I would like to retain the atomic quality of the operation:
 instantiate+associate in one go. There's a whole forest of problems
 awaiting those who contemplate detached shadow roots.
 




Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-08 Thread Maciej Stachowiak

On Nov 6, 2012, at 3:29 PM, Dimitri Glazkov dglaz...@google.com wrote:

 On Thu, Nov 1, 2012 at 8:39 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 
 6) The isolated setting essentially means that there's a new
 document and scripting context for this shadow subtree (specifics
 TBD). Watch https://www.w3.org/Bugs/Public/show_bug.cgi?id=16509 for 
 progress.
 
 That seems like a whole separate feature - perhaps we should figure out 
 private vs public first. It would be good to know the use cases for 
 this feature over using private or something like seamless iframes.
 
 Yeah, sure.  It's useful to bring up at the same time, though, because
 there are some decent use-cases that sound at first blush like they
 should be private, but really want even stronger security/isolation
 constraints.
 
 An existing example, iirc, is the Google +1 button widget.  Every
 single +1 includes an iframe so it can do some secure scripting
 without the page being able to reach in and fiddle with things.
 
 What are the advantages to using an isolated component for the +1 button
 instead of an iframe, or a private component containing an iframe?
 
 I'm not 100% sure (Dimitri can answer better), but I think it's
 because we can do a somewhat more lightweight isolation than what a
 full iframe provides.
 
 IIRC, several of our use-cases *really* want all of the instances of a
 given component to use the same scripting context, because there's
 going to be a lot of them, and they all need the same simple data;
 they'd gain no benefit from being fully separate and paying the cost
 of a thousand unique scripting contexts.

Is that the semantics "isolated" would have? All instances of the same
component are in the same scripting context, but one separate from the page? I
assumed that "new document and scripting context for this shadow subtree"
would mean there's a new one per instance, and the document plus the scripting
context is most of the cost of an iframe.

 
 Yup. The typical example that the Google+ people point out to me is
 techcrunch.com. The count of iframes had gotten so high that it
 affected performance to the point where the crunchmasters had to fake
 the buttons (and reveal them on hover, which is tangential to the
 story and may or may not have been the wisest choice).
 
 With isolated shadow trees, the number of scripting contexts would
 equal the number of button-makers, and offer additional opportunities
 like sharing styles among instances.

OK, it wasn't clear that the separate document and scripting context for 
isolated components would be per unique component, rather than per-instance. 
That does seem like a meaningfully different behavior.



 
 
 One thing that makes me nervous about the isolated idea is that a
 scripting context is normally bound one-to-one to either a browsing context
 or a worker; and having multiple scripting contexts per browsing context
 seems like it could be tricky to implement and may have security risks. But
 I don't have any more concrete objection at this time.
 
 I think that Workers or something very much like them is a productive
 direction to look in for the isolated components, actually.

Wouldn't that require making the DOM and UI event dispatch threadsafe (which 
are likely not very practical things to do)?

 
 Flipping it around, isolation also serves as a great way for the
 *page* to protect itself from the *component*.  There are tons of
 components that have absolutely no need to interact with the outside
 page, so sealing them off loses you nothing and gains you peace of
 mind when worrying about whether you should include some random
 plugins you found on your favorite component library site.
 
 Would the page be able to choose to make a component isolated without the 
 cooperation of the component? Or alternately load components in such a way 
 that only isolated ones would succeed?
 
 I think we'd like that, but haven't thought it through very hard yet.
 
 Isolation as a problem is something that's often considered in design
 discussions (hence it being brought up here), but it's in a distant
 future in relation to actual progress of the spec. If there were a
 Shadow DOM L2, that would be a nice place to start.

Maybe it should be set aside from this public vs private discussion for now 
then.

If it may be desirable to force isolated from the outside, then that makes it 
substantially different from the public vs private distinction, which should be 
completely under the control of the component. There's not much point to 
discussing isolated without having a handle on this aspect of its design.

Regards,
Maciej








Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-08 Thread Maciej Stachowiak

On Nov 8, 2012, at 2:15 AM, Elliott Sprehn espr...@gmail.com wrote:

 
 On Thu, Nov 1, 2012 at 6:43 AM, Maciej Stachowiak m...@apple.com wrote:
 
 On Nov 1, 2012, at 12:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 
 ...
 
  For example, being able to re-render the page manually via DOM
  inspection and custom canvas painting code.  Google Feedback does
  this, for example.  If shadows are only exposed when the component
  author thinks about it, and then only by convention, this means that
  most components will be un-renderable by tools like this.
 
 As Adam Barth often points out, in general it's not safe to paint pieces of a 
 webpage into canvas without security/privacy risk. How does Google Feedback 
 deal with non-same-origin images or videos or iframes, or with visited link 
 coloring, to cite a few examples? Does it just not handle those things?
 
 
 We don't handle visited link coloring as there's no way to get that from JS.
 
 For images we proxy all images and do the actual drawing to the canvas in a 
 nested iframe that's on the same domain as the proxy.
 
 For cross domain iframes we have a JS API that the frame can include that 
 handles a special postMessage which serializes the entire page and then 
 unserializes on the other side for rendering. Thankfully this case is 
 extremely rare, unlike web components, where it turns out you end up with
 almost the entire page down in some component or another (e.g. x-panel,
 x-conversation-view …). This of course requires you to have control of the 
 cross origin page.
 
 For an architectural overview of Google Feedback's JS HTML rendering engine 
 you can look at this presentation, slides 6 and 10 explain the image proxy:
 
 http://www.elliottsprehn.com/preso/fluentconf/
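 
 In rough outline, the cross-frame handoff looks something like this (a
 sketch with invented names, not the actual Feedback code):
 
 // In the cooperating cross-origin frame, which has included the JS API:
 window.addEventListener('message', function (e) {
   if (e.data === 'capture-page') {
     // serializePage is a hypothetical helper standing in for the real serializer
     e.source.postMessage(serializePage(document), e.origin);
   }
 }, false);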

Are these types of workarounds adequate for the web components case? If not, 
why not?

Regards,
Maciej






Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-01 Thread Maciej Stachowiak

On Nov 1, 2012, at 12:02 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Hi folks!
 
 While you are all having good TPAC fun, I thought I would bring this
 bug to your attention:
 
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=19562
 
 There have been several comments from developers about the fact that
 Shadow DOM encapsulation is _too_ well-sealed for various long-tail,
 but important use cases

What are these use cases? I did not see them in the bug.

http://w3cmemes.tumblr.com/post/34633601085/grumpy-old-maciej-has-a-question-about-your-spec

 In other words, the information that could be accessible (no security 
 concerns, for example) is not. One has to use
 hacks to get at the truth.



 
 Here's a simple strawman (copied from bug for easy reading):
 
 1) There's a 3-position switch on each shadow DOM subtree: public,
 private, isolated.

Is there any special behavior associated with these three settings besides what 
is in the other numbered points?

 
 2) There's a mechanism in place to flip this switch (specifics TBD)

Who gets to flip the switch? Can a private subtree still be accessed via the 
element it is attached to by simply marking it public? That would make 
private useless if so. It seems like whoever creates the shadow DOM should be 
able to make it private in an irreversible way. Without knowing the mechanism 
it's hard to judge if that is the case. 

In cases where a browser implementation provides a built-in shadow DOM, it 
seems particularly necessary to make it irreversibly private.


 
 3) the element.shadowRoot property points to the top of the tree
 stack, or null if the subtree at the top is in private or
 isolated setting.
 
 4) shadow.olderSubtree points to the older subtree in the stack or
 null if the older subtree is in private or isolated setting.
 
 5) ShadowRoot.host points to the shadow host or null, if the subtree
 is in private or isolated setting.
 
 6) The isolated setting essentially means that there's a new
 document and scripting context for this shadow subtree (specifics
 TBD). Watch https://www.w3.org/Bugs/Public/show_bug.cgi?id=16509 for progress.

That seems like a whole separate feature - perhaps we should figure out 
private vs public first. It would be good to know the use cases for this 
feature over using private or something like seamless iframes.

Cheers,
Maciej






Re: [webcomponents] More backward-compatible templates

2012-11-01 Thread Maciej Stachowiak

On Nov 1, 2012, at 1:57 PM, Adam Barth w...@adambarth.com wrote:

 
 
 (5) The nested template fragment parser operates like the template fragment 
 parser, but with the following additional difference:
  (a) When a close tag named +script is encountered which does not match 
 any currently open script tag:
 
 Let me try to understand what you've written here concretely:
 
 1) We need to change the "end tag open" state to somehow recognize
 </+script as an end tag rather than as a bogus comment.
 2) When the tree builder encounters such an end tag in the  state(s), we
 execute the substeps you've outlined below.
 
 The problem with this approach is that nested templates parse differently 
 than top-level templates.  Consider the following example:
 
 <script type=template>
  <b
 </script>
 
 In this case, none of the nested template parser modifications apply and
 we'll parse this as normal for HTML.  That means the contents of the template
 will be "<b" (let's ignore whitespace for simplicity).
 
 <script type=template>
   <h1>Inbox</h1>
   <script type=template>
     <b
   </+script>
 </script>
 
 Unfortunately, the nested template in this example parses differently than it
 did when it was a top-level template.  The problem is that the characters
 </+script> are not recognized by the tokenizer as an end tag because they
 are encountered by the nested template fragment parser in the "before
 attribute name" state.  That means they get treated as some sort of bogus
 attributes of the <b> tag rather than as an end tag.

OK. Do you believe this to be a serious problem? I feel like inconsistency in 
the case of a malformed tag is not a very important problem, but perhaps there 
are cases that would be more obviously problematic, or reasons not obvious to 
me to be very concerned about cases exactly like this one.

Also: can you think of a way to fix this problem? Or alternately, do you 
believe it's fundamentally not fixable? I've only spent a short amount of time 
thinking about this approach, and I am not nearly as much an expert on HTML 
parsing as you are.


  
  (a.i) Consume the token for the close tag named +script.
  (a.ii) Create a DocumentFragment containing the parsed contents of
 the fragment.
  (a.iii) [return to the parent template fragment parser] with the
 result of step (a.ii), with the parent parser resuming after the +script
 close tag.
 
 
 This is pretty rough and I'm sure I got some details wrong. But I believe it 
 demonstrates the following properties:
 (B) Allows for perfect fidelity polyfills, because it will manifestly end the 
 template in the same place that an unaware browser would close the script 
 element.
 (C) Does not require multiple levels of escaping.
 (A) Can be implemented without changes to the core HTML parser (though you'd 
 need to introduce a new fragment parsing mode).
 
 I suspect we're quibbling over "no true Scotsman" semantics here, but you
 obviously need to modify both the HTML tokenizer and tree builder for this 
 approach to work.

In principle you could create a whole separate tokenizer and tree builder. But 
obviously that would probably be a poor choice for a native implementation 
compared to adding some flags and variable behavior. I'm not even necessarily 
claiming that all the above properties are advantages, I just wanted to show 
that there need not be a multi-escaping problem nor necessarily scary,
complicated changes to the tokenizer states for script.

I think the biggest advantage to this kind of approach is that it can be 
polyfilled with full fidelity. But I am in no way wedded to this solution and I 
am intrigued at the mention of other approaches with this property. The others 
I know of (external source only, srcdoc like on iframe) seem clearly worse, but 
there might be other bigger ones.

  
 (D) Can be implemented with near-identical behavior for XHTML, except that 
 you'd need an XML fragment parser.
 
 The downside is that nested templates don't parse the same as top-level 
 templates.  

Indeed. That is in addition to the previously conceded downsides that the 
syntax is somewhat less congenial.

 Another issue is that you've also introduced the following security risk:
 
 Today, the following line of JavaScript is safe to include in an inline 
 script tag:
 
 var x = "</+script><img onerror=alert(1)>";
 
 Because that line does not contain "</script>", the string alert(1) will be
 treated as the contents of a string.  However, if that line is included in an
 inline script inside of a template, the modifications to the parser above
 will mean that alert(1) will execute as JavaScript rather than being treated
 as a string, introducing an XSS vector.

I don't follow. Can you give a full example of how this would be included in a 
template and therefore be executed?


  
 I hope this clarifies the proposal.
 
 Notes:
 - Just because it's described this way doesn't mean it has to be implemented 
 this way - implementations 

Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-01 Thread Maciej Stachowiak

On Nov 1, 2012, at 12:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Thu, Nov 1, 2012 at 9:37 AM, Maciej Stachowiak m...@apple.com wrote:
 On Nov 1, 2012, at 12:02 AM, Dimitri Glazkov dglaz...@google.com wrote:
 Hi folks!
 
 While you are all having good TPAC fun, I thought I would bring this
 bug to your attention:
 
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=19562
 
 There have been several comments from developers about the fact that
 Shadow DOM encapsulation is _too_ well-sealed for various long-tail,
 but important use cases
 
 What are these use cases? I did not see them in the bug.
 
 http://w3cmemes.tumblr.com/post/34633601085/grumpy-old-maciej-has-a-question-about-your-spec
 
 For example, being able to re-render the page manually via DOM
 inspection and custom canvas painting code.  Google Feedback does
 this, for example.  If shadows are only exposed when the component
 author thinks about it, and then only by convention, this means that
 most components will be un-renderable by tools like this.

As Adam Barth often points out, in general it's not safe to paint pieces of a 
webpage into canvas without security/privacy risk. How does Google Feedback 
deal with non-same-origin images or videos or iframes, or with visited link 
coloring, to cite a few examples? Does it just not handle those things?

 For the public/private part at least, this is just a switching of the
 defaults.  There was no good *reason* to be private by default, we
 just took the shortest path to *allowing* privacy and the default fell
 out of that.  As a general rule, we should favor being public over
 being private unless there's a good privacy or security reason to be
 private.  So, I don't think we need strong use-cases here, since we're
 not having to make a compat argument, and the new model adds minimal
 complexity.

I don't know enough of the context to follow this. Right now there's no good
general mechanism for a component exposing its guts, other than by convention,
right? It seems like adding a general mechanism to do so is a good idea, but it
could work with either default, or with no default at all, requiring authors to
specify explicitly. I think specifying either way explicitly would be best. JS
lets you have public properties in an object, or (effectively) private
properties in a closure, so both options are available and neither is the
default. It's your choice whether to use encapsulation. I am not sure we need
to specifically nudge web developers away from encapsulation.
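
To spell out the analogy in code (plain JS, nothing web-components-specific):

function makeCounter() {
  var count = 0;                          // effectively private, via the closure
  return {
    label: 'clicks',                      // public property
    increment: function () { return ++count; }
  };
}

var counter = makeCounter();
counter.label;   // accessible
counter.count;   // undefined: the closure keeps it encapsulated

Neither style is the language default; the author picks.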

 
 1) There's a 3-position switch on each shadow DOM subtree: public,
 private, isolated.
 
 Is there any special behavior associated with these three settings besides 
 what is in the other numbered points?
 
 I don't think so, no.  Public/private is just a matter of exposing or
 nulling some references.  Isolated is obviously not well-defined in
 this email, but the implications are relatively straightforward - it's
 like a cross-domain iframe.  (Which is, in fact, exactly how existing
 components hack in some isolation/security.)
 
 
 2) There's a mechanism in place to flip this switch (specifics TBD)
 
 Who gets to flip the switch? Can a private subtree still be accessed via 
 the element it is attached to by simply marking it public? That would make 
 private useless if so. It seems like whoever creates the shadow DOM should 
 be able to make it private in an irreversible way. Without knowing the 
 mechanism it's hard to judge if that is the case.
 
 In cases where a browser implementation provides a built-in shadow DOM, it 
 seems particularly necessary to make it irreversibly private.
 
 The idea so far is that the switch is just set at shadow creation
 time, and can't be changed.

That seems workable.
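
For concreteness, the observable difference might look like this (the
creation-time option name is purely hypothetical; the actual mechanism is
TBD per the strawman):

var host = document.querySelector('div#foo');
var root = new ShadowRoot(host);                          // public
alert(host.shadowRoot === root);                          // true
var hidden = new ShadowRoot(host, { access: 'private' }); // hypothetical flag
alert(host.shadowRoot);                                   // null: not exposed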

 
 
 6) The isolated setting essentially means that there's a new
 document and scripting context for this shadow subtree (specifics
 TBD). Watch https://www.w3.org/Bugs/Public/show_bug.cgi?id=16509 for 
 progress.
 
 That seems like a whole separate feature - perhaps we should figure out 
 private vs public first. It would be good to know the use cases for this 
 feature over using private or something like seamless iframes.
 
 Yeah, sure.  It's useful to bring up at the same time, though, because
 there are some decent use-cases that sound at first blush like they
 should be private, but really want even stronger security/isolation
 constraints.
 
 An existing example, iirc, is the Google +1 button widget.  Every
 single +1 includes an iframe so it can do some secure scripting
 without the page being able to reach in and fiddle with things.

What are the advantages to using an isolated component for the +1 button
instead of an iframe, or a private component containing an iframe?

One thing that makes me nervous about the isolated idea is that a scripting
context is normally bound one-to-one to either a browsing context or a worker;
and having multiple scripting contexts per browsing context seems like it could
be tricky to implement and may have security risks. But I don't have any more
concrete objection at this time.

[webcomponents] More backward-compatible templates

2012-10-30 Thread Maciej Stachowiak

In the WebApps meeting, we discussed possible approaches to template that may 
ease the transition between polyfilled implementations and native support, 
avoid HTML/XHTML parsing inconsistency, and in general adding less weirdness to 
the Web platform.

Here are some possibilities, not necessarily mutually exclusive:

(1) <template src>

Specify templates via external files rather than inline content. External CSS 
and external scripts are very common, and in most cases inline scripts and 
style are the exception. It seems likely this will also be true for templates. 
It's likely desirable to provide this even if there is also some inline form of 
templates.

(2) <template srcdoc>

Use the same approach as <iframe srcdoc> for specifying content inline while
remaining compatible with legacy parsing. The main downside is that the
required escaping is ugly and hard to follow, especially in the face of nesting.

(3) <script type=template> (or <script language=template>?)

Define a new script type to use for templates. This provides almost all the
syntactic convenience of the original <template> element - the main downside is
that, if your template contains a <script> or another nested template, you have
to escape the close script tag in some way.

The contents of the script would be made available as an inert DOM outside the 
document, via a new IDL attribute on script (say, HTMLScriptElement.template).

Here's a comparison of syntaxes:

Template element:
<template>
<div id=foo class=bar></div>
<script> something();</script>
<template>
 <div class=nested-template></div>
</template>
</template>

Script template:
<script type=template>
<div id=foo class=bar></div>
<script> something();<\/script>
<script type=template>
 <div class=nested-template></div>
<\/script>
</script>

Pros:
- Similar to the way many JS-implemented templating schemes work today
- Can be polyfilled with full fidelity and no risk of content that's meant to 
be inert accidentally running
- Can be translated consistently and compatibly to the XHTML syntax of HTML
- Less new weirdness. You don't have to create a new construct that appears to 
have normal markup content, but puts it outside the document.
- Can be specified (and perhaps even implemented, at least at first) without 
having to modify the HTML parsing algorithm. In principle, you could specify 
this as a postprocessing step after parsing, where accessing .template for the 
first time would be responsible for reparsing the contents and creating the 
template DOM. In practice, browsers would eventually want to parse in a single 
pass for performance.


Cons:
- <script type=template> is slightly more verbose than <template>
- Closing of nested scripts/templates requires some escaping


In my opinion, the advantages of the script template approach outweigh the 
disadvantages. I wanted to raise it for discussion on the list.
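
As a rough illustration of the postprocessing point above, here is a polyfill
sketch for the proposed HTMLScriptElement.template (it ignores the nested
/+script escaping question and is not meant to be complete):

Object.defineProperty(HTMLScriptElement.prototype, 'template', {
  get: function () {
    if (this.type !== 'template') return null;
    // Parse in a separate inert document, so nothing runs or loads:
    var doc = document.implementation.createHTMLDocument('');
    doc.body.innerHTML = this.textContent;
    var fragment = doc.createDocumentFragment();
    while (doc.body.firstChild)
      fragment.appendChild(doc.body.firstChild);
    return fragment;
  }
});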



Re: Defenses against phishing via the fullscreen api (was Re: full screen api)

2012-10-22 Thread Maciej Stachowiak

On Oct 22, 2012, at 3:04 PM, Chris Pearce cpea...@mozilla.com wrote:

 
 This looks remarkably like Mozilla's original proposal:
 https://wiki.mozilla.org/Gecko:FullScreenAPI
 
 We chose not to implement this as it offers little protection against 
 phishing or spoofing attacks that don't rely on keyboard access. In those 
 cases making the user aware that they've entered fullscreen is pretty much 
 the best defence the user has. Other than not having a fullscreen API at all.

There may be phishing scenarios that work without keyboard access, but I expect 
they are *far* less common and harder to pull off. To argue from anecdote, I 
visit many sites where I identify myself with a typed password, and none where 
I exclusively have a mouse-based credential that does not involve typing 
(though I've seen sites that use it as an additional factor). I think it's not 
justified to conclude that the phishing risks with and without alphanumeric
keyboard access are identical. They are not.

 
 Our fullscreen approval UI in Firefox is based around the assumption that for 
 most users the set of sites that use the fullscreen API that the user 
 encounters on a daily basis is small, and users would tend to opt to 
 remember the fullscreen approval for those domains. I'd imagine the set 
 would be YouTube, Facebook, and possibly ${FavouriteGame}.com for most users. 
 Thus users would see a notification and not an approval prompt most of the 
 time when they entered fullscreen. But when some other site goes fullscreen 
 they do get a prompt, which is out of the ordinary and more likely to be read.

I think the chance of the user paying attention to a prompt that, every time 
they have seen it before, has been completely harmless, is pretty low. The odds 
of the user making an informed security decision based on what the prompt says 
is even lower.

Based on all this, I continue to think that requesting keyboard access should 
involve separate API, so that it can be feature-detected and given different 
security treatment by vendors as desired. This is what Flash does, and they 
have the most experience dealing with the security implications of fullscreen 
on the Web.

Regards,
Maciej


Re: Defenses against phishing via the fullscreen api (was Re: full screen api)

2012-10-15 Thread Maciej Stachowiak

On Oct 14, 2012, at 3:54 PM, Chris Pearce cpea...@mozilla.com wrote:

 On 14/10/12 00:49, Maciej Stachowiak wrote:
 
 Despite both of these defenses having drawbacks, I think it is wise for 
 implementations to implement at least one of them. I think the spec should 
 explicitly permit implementations to apply either or both of these 
 limitations, and should discuss their pros and cons in the Security 
 Considerations section.
 
 
 I don't support making these mandatory, but they should certainly be added to 
 the Security Considerations section; we considered them, and we may indeed 
 re-consider them in future if it proves necessary.
 
 I support making the spec general enough that implementors can choose their
 security features based on their requirements; what's appropriate for a 
 desktop browser may not be appropriate for a tablet, for example.

I agree with both of these comments (in case it wasn't clear). I suggest that 
these mechanisms should be permitted, not mandatory. Right now it is not 
entirely clear if either is permitted per spec.

Regards,
Maciej




Re: full screen api

2012-10-15 Thread Maciej Stachowiak

On Oct 14, 2012, at 3:52 PM, Chris Pearce cpea...@mozilla.com wrote:

 On 13/10/12 07:20, Carr, Wayne wrote:
 There’s a recent post on a phishing attack using the full screen api
 [1][2][3].
 
 It's worth noting that this attack has been possible in Flash for years, and 
 the sky hasn't fallen.

For most of that time, Flash has either not allowed any keyboard input, or 
allowed only non-alphanumeric keys. That has significantly different security 
characteristics against a phishing threat model than full-keyboard-enabled 
fullscreen.

Just recently (in Flash 11.3) they added optional full keyboard input, but that 
puts up a separate permission prompt and doesn't pass through keys until the 
user approves.

Regards,
Maciej



Re: Defenses against phishing via the fullscreen api (was Re: full screen api)

2012-10-15 Thread Maciej Stachowiak

That's why I liked having a separate API to request fullscreen with full 
alphanumeric keyboard access. This allows apps to determine if fullscreen with 
keyboard is available on a given browser, and allows browsers to set separate 
security policies for that case. I think the spec should change back to having 
two distinct APIs, even though Mozilla is not interested in making a 
distinction between the two cases.
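
(For reference, the shape this takes in the prefixed WebKit API, which pages
can feature-detect today:

var el = document.documentElement;
if ('ALLOW_KEYBOARD_INPUT' in Element) {
  // fullscreen with full keyboard access is at least expressible
  el.webkitRequestFullScreen(Element.ALLOW_KEYBOARD_INPUT);
} else if (el.webkitRequestFullScreen) {
  el.webkitRequestFullScreen();   // fullscreen; keyboard policy unknown
}

Whether the request is honored with keyboard access remains up to the
browser's security policy, which is exactly the point.)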

Regards,
Maciej

On Oct 15, 2012, at 3:45 AM, Florian Bösch pya...@gmail.com wrote:

 Ok, so here's my question. You have a webapp (that, oh, happens to be a game,
 or a slideshow app, or a video player with controls, etc.) which needs
 keyboard/UI event access to work (come to think of it, can you honestly
 think of any sort of use case that works entirely without user
 interaction?). Anyways, so now this app needs to figure out if it's worth the
 bother to even display a fullscreen icon/request fullscreen (see, after all,
 there wouldn't be a point if there's no keyboard/UI access).
 
 So how does an app do that? How do we figure out what the random behavior
 changes are that vendors add, that would break our app, that make it
 pointless to try to use the API on that vendor's browser? Anyone?
 
 On Mon, Oct 15, 2012 at 12:32 PM, Maciej Stachowiak m...@apple.com wrote:
 
 On Oct 14, 2012, at 3:54 PM, Chris Pearce cpea...@mozilla.com wrote:
 
  On 14/10/12 00:49, Maciej Stachowiak wrote:
 
  Despite both of these defenses having drawbacks, I think it is wise for 
  implementations to implement at least one of them. I think the spec should 
  explicitly permit implementations to apply either or both of these 
  limitations, and should discuss their pros and cons in the Security 
  Considerations section.
 
 
  I don't support making these mandatory, but they should certainly be added 
  to the Security Considerations section; we considered them, and we may 
  indeed re-consider them in future if it proves necessary.
 
  I support making the spec general enough that implementors can choose their
  security features based on their requirements; what's appropriate for a 
  desktop browser may not be appropriate for a tablet, for example.
 
 I agree with both of these comments (in case it wasn't clear). I suggest that 
 these mechanisms should be permitted, not mandatory. Right now it is not 
 entirely clear if either is permitted per spec.
 
 Regards,
 Maciej
 
 



Re: Defenses against phishing via the fullscreen api (was Re: full screen api)

2012-10-15 Thread Maciej Stachowiak

On Oct 15, 2012, at 5:01 PM, Chris Pearce cpea...@mozilla.com wrote:

 On 16/10/12 11:39, Maciej Stachowiak wrote:
 
 That's why I liked having a separate API to request fullscreen with full 
 alphanumeric keyboard access. This allows apps to determine if fullscreen 
 with keyboard is available on a given browser, and allows browsers to set 
 separate security policies for that case.
 
  Would you implement keyboard access in fullscreen via this API if we spec'd
  it? Or are you looking for a way for authors to determine if key input
  isn't supported in fullscreen mode?

Our most likely short-term goal would be the latter (enabling capability 
detection) but I wouldn't take full keyboard access off the table forever. We 
would want the freedom to apply different security policy to that case when/if 
we do it though. 

 
 
 I think the spec should change back to having two distinct APIs, even though 
 Mozilla is not interested in making a distinction between the two cases.
 
 I'd say fullscreen video is the only fullscreen use case where page script 
 shouldn't need key events dispatched to it. I'm sure some other fullscreen 
 uses wouldn't want key events, but most non-trivial users of fullscreen would 
 want keyboard shortcuts or input.

Many games could work with only non-alphanumeric keys or in some cases only the 
mouse. As could slideshows. You only need space/enter/arrows for a full screen 
slide presentation.
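
A slide deck's entire input handling can be sketched as:

document.addEventListener('keydown', function (e) {
  switch (e.keyCode) {
    case 37: previousSlide(); break;   // left arrow
    case 32:                           // space
    case 13:                           // enter
    case 39: nextSlide(); break;       // right arrow
  }
}, false);

(previousSlide/nextSlide being whatever the app provides.) No alphanumeric
keys required.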

What are the cases where webpage-driven (as opposed to browser-chrome-driven) 
fullscreen is really compelling, but they need full keyboard access including 
alphanumeric keys? (Not saying there aren't any, I am just not sure what they 
would be - fullscreen Nethack?)

 
 Anyway, I'm curious what the Chrome guys think.

Likewise.


Cheers,
Maciej




Defenses against phishing via the fullscreen api (was Re: full screen api)

2012-10-13 Thread Maciej Stachowiak

On Oct 13, 2012, at 1:49 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Fri, Oct 12, 2012 at 8:25 PM, Florian Bösch pya...@gmail.com wrote:
 There was a limited discussion on that a few days ago with the limited
 consensus (?) being that requiring user-consent up front before switching to
 fullscreen is desired, should be in the standard and isn't sacrificing UX.
 
 There was no implementor involved in that discussion. I want to see
 their feedback before changing the standard.
 
 Also, FYI, http://dvcs.w3.org/hg/fullscreen/raw-file/tip/Overview.html
 is not maintained, http://fullscreen.spec.whatwg.org/ is.

I think it's unlikely that Apple would implement a requirement of prior user 
consent before entering fullscreen.

I also personally think OK/Cancel security nag dialogs are a very poor security
mechanism in general. Users do not read them, and placing them in the path of
operations that are harmless the vast majority of the time only has the effect
of training users to click OK on dialogs. "Cancel or allow" dialogs are nearly
useless for real security and seem mainly to provide CYA security - if a user
gets hacked, you can tell them they were bad for clicking OK on the dialog.

Now, there are some limited cases where a permissions dialog may make sense. 
Specifically, these are cases where the user can reasonably be expected to 
relate the risk to the functionality requested. For example, when a site asks 
for your geolocation, a user can generally understand that there may be privacy 
implications to having a location tracked. But this does not really apply to 
fullscreen. A user is not likely to understand the security implications of 
fullscreen. So they won't be able to make a reasoned risk assessment based on a 
warning dialog. This situation is much like bad certificate warnings, where the 
evidence indicates that users almost always click through, even relatively 
informed users.


I think the most effective defense against phishing via fullscreen is to 
prevent keyboard access. The original design for requestFullscreen had an 
optional argument for requesting keyboard access, which led to a warning in 
some browsers and which for Safari we chose to ignore as the risk outweighed 
the benefit. The new spec does not have this parameter and makes no mention of 
keyboard access. It is not even clear if refusing to send key events or grant 
keyboard focus in fullscreen would be conforming. I think this should be fixed. 
I think the spec should at minimum explicitly allow browsers to block delivery 
of key events (or at least key events for alphanumeric keys). Regrettably, this 
defense would not be very effective on pure touchscreen devices, since there is 
no physical keyboard and the soft keyboard can likely be convincingly faked 
with HTML.

The second most effective defense that I can think of is a distinctive visible 
indicator that prevents convincingly faking the system UI. The common 
notification to press escape to exit partly serves that purpose. A potentially 
more effective version would be to show a noticeable visible indicator every 
time the user moves the mouse, presses a key, or registers a tap on a 
touchscreen. Ideally this would cover key areas needed to fake a real browser 
UI such as where the toolbar and address bar would go, and would indicate what 
site is showing the fullscreen UI. However, while such an effect is reasonable 
for fullscreen video (where the user will mostly watch without interacting), it 
might be distracting for fullscreen games, or the fullscreen mode of a 
presentation program, or a fullscreen editor.

Despite both of these defenses having drawbacks, I think it is wise for 
implementations to implement at least one of them. I think the spec should 
explicitly permit implementations to apply either or both of these limitations, 
and should discuss their pros and cons in the Security Considerations section.

Regards,
Maciej




Re: Defenses against phishing via the fullscreen api (was Re: full screen api)

2012-10-13 Thread Maciej Stachowiak

On Oct 13, 2012, at 4:58 AM, Florian Bösch pya...@gmail.com wrote:

 On Sat, Oct 13, 2012 at 1:49 PM, Maciej Stachowiak m...@apple.com wrote:
 I think the most effective defense against phishing via fullscreen is to 
 prevent keyboard access. The original design for requestFullscreen had an 
 optional argument for requesting keyboard access, which led to a warning in 
 some browsers and which for Safari we chose to ignore as the risk outweighed 
 the benefit. The new spec does not have this parameter and makes no mention 
 of keyboard access. It is not even clear if refusing to send key events or 
 grant keyboard focus in fullscreen would be conforming. I think this should 
 be fixed. I think the spec should at minimum explicitly allow browsers to 
 block delivery of key events (or at least key events for alphanumeric keys). 
 Regrettably, this defense would not be very effective on pure touchscreen 
 devices, since there is no physical keyboard and the soft keyboard can likely 
 be convincingly faked with HTML.
 I've got no objection against a user poll for things like keyboard 
 interactions in fullscreen as long as the implementation honors the intent to
 show this once for a session or remembered state and not all the time when 
 going back and forth.

Our current intended behavior in Safari is to never allow alphanumeric keyboard 
access in fullscreen. No cancel/allow prompt. Did you read the part where I 
explained why such prompts are useless for security?

  
 The second most effective defense that I can think of is a distinctive 
 visible indicator that prevents convincingly faking the system UI. The common 
 notification to press escape to exit partly serves that purpose. A 
 potentially more effective version would be to show a noticeable visible 
 indicator every time the user moves the mouse, presses a key, or registers a 
 tap on a touchscreen. Ideally this would cover key areas needed to fake a 
 real browser UI such as where the toolbar and address bar would go, and would 
 indicate what site is showing the fullscreen UI. However, while such an 
 effect is reasonable for fullscreen video (where the user will mostly watch 
 without interacting), it might be distracting for fullscreen games, or the 
 fullscreen mode of a presentation program, or a fullscreen editor
 Such a scheme would render fullscreen virtually useless for most of its 
 intended purpose. 

That depends on what you think most of its intended purpose is. Many native 
video fullscreen implementations already have behavior somewhat like this, 
because they expect that the user is not producing UI events most of the time 
while watching the video. It may be annoying in the context of a game or 
slideshow. So far I have encountered such uses much less often than video.

Regards,
Maciej



Sandboxed Filesystem use cases? (was Re: Moving File API: Directories and System API to Note track?)

2012-09-25 Thread Maciej Stachowiak

On Sep 25, 2012, at 10:20 AM, James Graham jgra...@opera.com wrote:

 
 In addition, this would be the fourth storage API that we have tried to 
 introduce to the platform in 5 years (localStorage, WebSQL, IndexedDB being 
 the other three), and the fifth in total. Of the four APIs excluding this 
 one, one has failed over interoperability concerns (WebSQL), one has 
 significant performance issues and is discouraged from production use 
 (localStorage) and one suffers from a significant problems due to its legacy 
 design (cookies). The remaining API (IndexedDB) has not yet achieved 
 widespread use. It seems to me that we don't have a great track record in 
 this area, and rushing to add yet another API probably isn't wise. I would 
 rather see JS-level implementations of a filesystem-like API on top of 
 IndexedDB in order to work out the kinks without creating a legacy that has 
 to be maintained for back-compat than native implementations at this time.

I share your concerns about adding yet-another-storage API. (Although I believe 
there are major websites that have adopted or are in the process of adopting 
IndexedDB). I like my version better than the Google one, too, but I also worry 
about whether we should be adding another storage API at all.

I think we need to go back to the use case for sandboxed filesystem storage and 
understand which use cases cannot be served with IndexedDB.


Here are some use cases I have heard:

(1) A webapp (possibly working on offline mode) wants to stage files for later 
upload (e.g. via XHR).
Requirements:
- Must be able to store distinct named items containing arbitrary 
binary data.
- Must be able to read the data back for later upload.
- Must be able to delete items.

(2) A web-based mail client wants to download the user's attachments locally, 
then reference them by URL from the email and allow them to be extracted into 
the user's filesystem space.
Requirements:
- Must be able to store distinct named items containing arbitrary 
binary data.
- Must be able to reference items by persistent  URL from constructs in 
a webpage that use URLs.
- Must be able to delete items.

(3) A web-based web developer tool downloads copies of all the resources of a 
webpage, lets the user edit the webpage live potentially adding new resources, 
and then uploads it all again to one or more servers.
Requirements: 
- Must be able to store distinct named items containing arbitrary 
binary data.
- Must be able to replace items.
- Must be able to reference items by persistent  URL from constructs in 
a webpage that use URLs.
- Must be able to delete items.
- Must be able to enumerate items.
Highly desirable:
- Hierarchical namespace.

(4) A game wants to download game resources locally for efficient operation, 
and later update them
Requirements: 
- Must be able to store distinct named items containing arbitrary 
binary data.
- Must be able to replace items.
- Must be able to reference items by persistent URL from constructs in 
a webpage that use URLs.
- Must be able to delete items.
Highly desirable:
- Hierarchical namespace.


I believe the only requirement here that is not met by IndexedDB is:
- The ability to reference an item by persistent URL.

IndexedDB has enumeration, hierarchical namespace, ability to add, replace, 
remove, get, etc.


Are there other use cases? In particular, are there use cases that justify a 
whole new storage API instead of adding this one feature to IndexedDB?
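
For example, use case (1) can be sketched on IndexedDB as it stands (modulo
the vendor prefixes of the era, and assuming Blob values are supported;
database and store names are illustrative):

var openReq = indexedDB.open('upload-staging', 1);
openReq.onupgradeneeded = function () {
  openReq.result.createObjectStore('files');   // out-of-line keys: file names
};
openReq.onsuccess = function () {
  var tx = openReq.result.transaction('files', 'readwrite');
  var store = tx.objectStore('files');
  store.put(someBlob, 'photo-1.png');          // stage the item
  store.get('photo-1.png').onsuccess = function (e) {
    upload(e.target.result);                   // e.g. send via XHR; upload() is illustrative
  };
  // store.delete('photo-1.png') once the upload succeeds
};

What it cannot do is mint a persistent URL for 'photo-1.png'.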


Note: one aspect of the MinimalFileSystem proposal that is not obviously 
required by any of these use cases is the ability to incrementally update a 
file (beyond what you could already do with slice() and BlobBuilder). Basically 
the whole FileHandle interface. Is there truly a use case that you can't 
satisfy by using BlobBuilder to make your update and then atomically replacing?
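
That is, something along these lines (BlobBuilder per the drafts of the era,
modulo vendor prefixes; offset and newBytes are illustrative):

var bb = new BlobBuilder();
bb.append(original.slice(0, offset));              // unchanged prefix
bb.append(newBytes);                               // the edited region, as a Blob
bb.append(original.slice(offset + newBytes.size)); // unchanged suffix
var updated = bb.getBlob(original.type);
// ...then store `updated` back, replacing the old file atomically.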


Regards,
Maciej





Re: Moving File API: Directories and System API to Note track?

2012-09-25 Thread Maciej Stachowiak

Hi Glenn,

I read over your points. But I don't think they would change Apple's 
calculation about exposing an API to the real user filesystem in Safari, 
particularly as specified. I do think that my more minimal API might also be a 
better fit for the real filesystem use case, as it removes a bunch of 
unnecessary levels of indirection and abstraction that exist in the other 
proposals. But I think it is unlikely we would expose that aspect.

We are still open to solving the sandboxed local storage area use case, with 
a more minimal API. But first we'd like to understand why those use cases can't 
be solved with targeted additions to IndexedDB (see forked thread).

Regards,
Maciej

On Sep 25, 2012, at 12:35 PM, Glenn Maynard gl...@zewt.org wrote:

 On Wed, Sep 19, 2012 at 3:46 PM, James Graham jgra...@opera.com wrote:
 Indeed. We are not enthusiastic about implementing an API that has to 
 traverse directory trees as this has significant technical challenges, or may 
 expose user's path names, as this has security implications. Also AIUI this 
 API is not a good fit for all platforms.
 
 I don't think there are any unsolvable problems for traversing native 
 directory trees.  (For example, hard links can be dealt with if they do turn 
 up--keep track of inodes in parents--and pathname limitations put an upper 
 limit, preventing infinite recursion.  I think they're very unlikely in 
 practice--infinite directory loops would break tons of native apps, too.  No 
 filesystem I've tried in Linux allows it.)  I don't think any proposals 
 expose user pathnames (eg. above the item that was dropped/opened).
 
 The main issue that needs to be explored is how to prevent users from 
 accidentally giving more access than they mean to, eg. pages saying please 
 open C:\ and drag in Users, and users not realizing what they're doing.  
 This is critically important, but I think it's too early to conclude that 
 this is unsolvable.
 
 (That's a UI issue more than a technical issue.  There are other clearly 
 technical issues, but I haven't seen any raised that look unsolvable.  The 
 two tricky ones I've seen are maximum filename/path length limitations, and 
 valid characters varying between platforms, which have been discussed a lot 
 and would need to be revisited--it's been too long and I forget where that 
 left off.)
 
 
 On Fri, Sep 21, 2012 at 7:37 PM, Maciej Stachowiak m...@apple.com wrote:
 - I'm not keen on exposing portions of the user's filesystem. In particular:
 - This seems like a potential security/social-engineering risk.
 - This use case seems more relevant to system apps based on Web 
 technology than Web apps as such; I fear that system-app-motivated complexity 
 is bloating the api.
 
 These are the *primary* use cases of a filesystem-like API.  There's lot of 
 value in being able to let a media player app play and (optionally) 
 manipulate my local media directories, for example--obviously I won't move my 
 music into IndexedDB.
 
 (I don't find the sandboxed case very compelling, and while I consider 
 IndexedDB to-be-proven for things like storing gigabytes of data for a 
 game, it should definitely be given that chance before introducing another 
 API for that set of use cases.)
 
 - Many of Apple's key target platforms don't even *have* a user-visible 
 filesystem.
 
 It's not inconsistent to have APIs that aren't relevant on every single 
 platform.  When you don't have a user filesystem in the first place, the use 
 cases themselves go away and you don't care about any of this.
 
 (For the sandboxed case, this isn't relevant.  It would be useful to separate 
 the sandboxed and native discussions, since except for the API style 
 question, the issues and objections are almost completely distinct, and it's 
 sometimes hard to keep track of which one we're talking about.)
 
 - We already have way too many storage APIs. To add another, the use cases 
 would have to be overwhelmingly compelling, and highly impractical to serve 
 by extending any of the other storage APIs (e.g. IndexedDB).
 
 I do find being able to allow users to work with their native data, just as 
 native apps do, to be overwhelmingly compelling.  I have about 4 TB of data, 
 and the only way I can use any of it with web apps is by dragging in 
 individual, flat lists of files, and even that's a strictly one-way street 
 (FileSaver is only a very tiny help here).
 
 -- 
 Glenn Maynard
 
 



Re: Moving File API: Directories and System API to Note track?

2012-09-24 Thread Maciej Stachowiak

On Sep 22, 2012, at 9:35 PM, Maciej Stachowiak m...@apple.com wrote:

 
 On Sep 22, 2012, at 8:18 PM, Brendan Eich bren...@mozilla.com wrote:
 
 
 And two of the interfaces are generic and reusable in other contexts.
 
 Nice, and DOMRequest predates yours -- should it be done separately since (I 
 believe) it is being used by other proposals unrelated to FileSystem-like 
 ones?
 
 Sorry if I missed it and it's already being split out.
 
 Yes, I borrowed DOMRequest. I think DOMRequest and DOMMultiRequest could be a 
 separate spec, if that sort of asynchronous response pattern is generally 
 useful. And it seems like it might be. That would leave only two interfaces 
 specific to the Minimal File System proposal, Directory and FileHandle.

Here's an alternate version where I renamed some things to match Filesystem API 
and FileWriter, and added the missing key feature of getting a persistent URL 
for a file in a local filesystem (extending the URL interface). It's still a 
much simpler API that provides most of the same functionality. 

https://trac.webkit.org/wiki/MinimalFileStorageAlternate

Regards,
Maciej



Re: Moving File API: Directories and System API to Note track?

2012-09-22 Thread Maciej Stachowiak

What does getMetadata asynchronously return?

I think this API as written is still a fair bit more complex than needed for 
the sandboxed storage use case. It does seem simpler than Filesystem API.

Regards,
Maciej

On Sep 21, 2012, at 10:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Sep 21, 2012 at 5:37 PM, Maciej Stachowiak m...@apple.com wrote:
 
 My personal objections (ones that I think are shared by at least some other 
 Safari folks):
 
 - It's way too complicated. (As one crude metric, I count 22 interfaces; and 
 yes, I know many of those are callback interfaces or sync versions of 
 interfaces; it still seems overengineered).
 - I see value in the use case of a private sandboxed storage area, as 
 staging for uploads, to hold downloaded resources, etc. But:
 - It's not totally clear to me why a filesystem-like API is the best way
 to serve these use cases, particularly if in practice it is actually backed 
 by a database.
- The API as designed is way too complex for just that use case.
 - I'm not keen on exposing portions of the user's filesystem. In particular:
- This seems like a potential security/social-engineering risk.
- This use case seems more relevant to system apps based on Web 
 technology than Web apps as such; I fear that system-app-motivated 
 complexity is bloating the api.
- Many of Apple's key target platforms don't even *have* a user-visible 
 filesystem.
 - We'd like to keep Web APIs consistent between our various platforms as
 much as possible.
- I think trying to serve the real filesystem use cases adds a lot of 
 complexity over the private storage area use case.
 - We already have way too many storage APIs. To add another, the use cases 
 would have to be overwhelmingly compelling, and highly impractical to serve 
 by extending any of the other storage APIs (e.g. IndexedDB).
- In particular, I have heard an explanation of why IndexedDB as it 
 currently exists can't handle all the use cases for file-like storage, but I 
 have heard no explanation of why it can't be extended in that direction.
 
 For these reasons, I think it is unlikely that Safari would ever support 
 Filesystem API in its current form. I could imagine considering a *much* 
 narrower API, scoped only to the use case of private storage areas for Web 
 apps, but only if a compelling case is made that there's no other way to 
 serve that use case.
 
 For what it's worth, even the DeviceStorage API proposal is too complex for 
 my tastes, in its current iteration.
 
 I think keeping the Filesystem API on the REC track in its current form is
 actively bad, because it leads outside observers to be misled about where 
 the Web platform is going. For example,  sites like http://html5test.com 
 give out points for it even though it seems unlikely to advance on the 
 standards track as it stands today.
 
 For what it's worth, I put together a draft for what an API would look
 like that has basically the same feature set as the current FileSystem
 API, but based on DeviceStorage. It's a much smaller API than the
 current FileSystem drafts, but supports things like shallow as well as
 deep directory iteration.
 
 https://wiki.mozilla.org/WebAPI/DeviceStorageAPI2
 
 I think that if we at mozilla were to implement a sandboxed
 filesystem, it'd be something more like this.
 
 The FileHandle part of the API is definitely more complex to implement
 than the FileWriter API, but I'd argue that it's actually easier to
 use. For example you can start multiple write operations without
 having to wait for each individual write to finish.
 
 I also added an example of what a read-only DeviceStorage would look
 like if we wanted something like that for input type=file,
 drag'n'drop or a zip-reader. This shouldn't be considered as part of
 the remaining draft though since exposing a filesystem in those
 scenarios might very well be overkill.
 
 / Jonas



Re: Moving File API: Directories and System API to Note track?

2012-09-22 Thread Maciej Stachowiak

On Sep 21, 2012, at 10:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 
 For what it's worth, I put together a draft for what an API would look
 like that has basically the same feature set as the current FileSystem
 API, but based on DeviceStorage. It's a much smaller API than the
 current FileSystem drafts, but supports things like shallow as well as
 deep directory iteration.
 
 https://wiki.mozilla.org/WebAPI/DeviceStorageAPI2
 
 I think that if we at mozilla were to implement a sandboxed
 filesystem, it'd be something more like this.

I took a crack at a pruned and cleaned up version:

https://trac.webkit.org/wiki/MinimalFileStorage

Only 4 interfaces (excluding the Navigator addition), 16 methods, 9 attributes. 
And two of the interfaces are generic and reusable in other contexts.

It removes cruft but adds features relative to the Mozilla version:

- Atomic create/open operation
- Renaming files and directories
- A Directory interface for less weirdness in directory handling
- Ability to open files in append mode (this makes more sense than an append() 
operation on the handle, given the way underlying filesystems work)

Features omitted include:
- Multiple named filesystems (just use directories)
- Separate enumeration for writing (just use open)
- Ability to fetch metadata (does not appear to be needed for the use case)


It could be simplified a little bit, but really not that much, by removing 
hierarchy.
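
To make that concrete, a hedged usage sketch (the method names createFile, 
openAppend and moveTo are illustrative assumptions, not necessarily the 
spellings in the proposal):

dir.createFile("notes.txt").onsuccess = function (e) {
  var handle = e.target.result;   // atomic create+open: fails if it exists
  handle.write("hello\n");
};

dir.openAppend("log.txt").onsuccess = function (e) {
  e.target.result.write("another entry\n");   // append mode, per the list above
};

dir.moveTo("notes.txt", "renamed.txt");       // renaming within a Directory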

Regards,
Maciej 



Re: Moving File API: Directories and System API to Note track?

2012-09-22 Thread Maciej Stachowiak

On Sep 22, 2012, at 8:18 PM, Brendan Eich bren...@mozilla.com wrote:

 
 And two of the interfaces are generic and reusable in other contexts.
 
 Nice, and DOMRequest predates yours -- should it be done separately since (I 
 believe) it is being used by other proposals unrelated to FileSystem-like 
 ones?
 
 Sorry if I missed it and it's already being split out.

Yes, I borrowed DOMRequest. I think DOMRequest and DOMMultiRequest could be a 
separate spec, if that sort of asynchronous response pattern is generally 
useful. And it seems like it might be. That would leave only two interfaces 
specific to the Minimal File System proposal, Directory and FileHandle.
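
For reference, the response pattern in question looks roughly like this from 
script (a generic sketch of the DOMRequest style; startSomeAsyncOperation is a 
hypothetical operation, not any particular spec's method):

var request = startSomeAsyncOperation();
request.onsuccess = function () {
  console.log("done:", request.result);
};
request.onerror = function () {
  console.log("failed:", request.error);
};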

Regards,
Maciej



Re: Moving File API: Directories and System API to Note track?

2012-09-21 Thread Maciej Stachowiak

I like the idea of offering asynchronous listing of files in <input type=file 
multiple>. But I think Filesystem API is overkill for this use case.

Regards,
Maciej

On Sep 21, 2012, at 3:50 PM, Darin Fisher da...@chromium.org wrote:

 No comment on the value of DirectoryEntry for enabling asynchronous listing 
 of files in <input type=file multiple>?
 
 -Darin
 
 On Thu, Sep 20, 2012 at 4:48 PM, Maciej Stachowiak m...@apple.com wrote:
 
 +1
 
 I don't see an indication of any major browser but Chrome planning to 
 implement this and expose it to the Web.
 
  - Maciej
 
 On Sep 18, 2012, at 4:04 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
 
  Hi all,
 
 
  I think we should discuss moving File API: Directories and System API
  from the Recommendation track to a Note, mainly because the API hasn't been
  widely accepted or implemented, and also because there are other proposals
  which handle the same use cases.
  The problem with keeping the API on the Recommendation track is that people
  outside the standardization world think that the API is the one which all
  the browsers will implement, and as of now that doesn't seem likely.
 
 
 
 
  -Olli
 
 
 
 



Re: Moving File API: Directories and System API to Note track?

2012-09-21 Thread Maciej Stachowiak

My personal objections (ones that I think are shared by at least some other 
Safari folks):

- It's way too complicated. (As one crude metric, I count 22 interfaces; and 
yes, I know many of those are callback interfaces or sync versions of 
interfaces; it still seems overengineered.)
- I see value in the use case of a private sandboxed storage area, as staging 
for uploads, to hold downloaded resources, etc. But:
  - It's not totally clear to me why a filesystem-like API is the best way to 
  serve these use cases, particularly if in practice it is actually backed by 
  a database.
  - The API as designed is way too complex for just that use case.
- I'm not keen on exposing portions of the user's filesystem. In particular:
  - This seems like a potential security/social-engineering risk.
  - This use case seems more relevant to system apps based on Web technology 
  than Web apps as such; I fear that system-app-motivated complexity is 
  bloating the API.
  - Many of Apple's key target platforms don't even *have* a user-visible 
  filesystem.
  - We'd like to keep Web APIs consistent between our various platforms as 
  much as possible.
  - I think trying to serve the real filesystem use cases adds a lot of 
  complexity over the private storage area use case.
- We already have way too many storage APIs. To add another, the use cases 
would have to be overwhelmingly compelling, and highly impractical to serve by 
extending any of the other storage APIs (e.g. IndexedDB).
  - In particular, I have heard an explanation of why IndexedDB as it 
  currently exists can't handle all the use cases for file-like storage, but I 
  have heard no explanation of why it can't be extended in that direction.

For these reasons, I think it is unlikely that Safari would ever support 
Filesystem API in its current form. I could imagine considering a *much* 
narrower API, scoped only to the use case of private storage areas for Web 
apps, but only if a compelling case is made that there's no other way to serve 
that use case.

For what it's worth, even the DeviceStorage API proposal is too complex for my 
tastes, in its current iteration.

I think keeping the Filesystem API on the REC track in its current form is 
actively bad, because it leads outside observers to be misled about where the 
Web platform is going. For example, sites like http://html5test.com give out 
points for it even though it seems unlikely to advance on the standards track 
as it stands today.

Regards,
Maciej

On Sep 21, 2012, at 4:32 PM, Eric U er...@google.com wrote:

 While I don't see any other browsers showing interest in implementing
 the FileSystem API as currently specced, I do see Firefox coming
 around to the belief that a filesystem-style API is a good thing,
 hence their DeviceStorage API.  Rather than scrap the API that we've
 put 2 years of discussion and work into, why not work with us to
 evolve it to something you'd like more?  If you have objections to
 specific attributes of the API, wouldn't it be more efficient to
 change just those things than to start over from scratch?  Or worse,
 to have the Chrome filesystem API, the Firefox filesystem API, etc.?
 
 If I understand correctly, folks at Mozilla think having a directory
 abstraction is too heavy-weight, and would prefer users to slice and
 dice paths by hand.  OK, that's a small change, and the
 functionality's roughly equivalent.  We could probably even make
 migration fairly easy with a small polyfill.
 
 Jonas suggests FileHandle to replace FileWriter.  That's clearly not a
 move to greater simplicity, and no polyfill is possible, but it does
 open up the potential for higher performance, especially in a
 multi-process browser.  As I said when you proposed it, I'm
 interested, and we'd also like to solve the locking use cases.
 
 Let's talk about it, rather than throw the baby out with the bathwater.
 
   Eric
 
 On Tue, Sep 18, 2012 at 4:04 AM, Olli Pettay olli.pet...@helsinki.fi wrote:
 Hi all,
 
 
 I think we should discuss moving File API: Directories and System API
 from the Recommendation track to a Note, mainly because the API hasn't been
 widely accepted or implemented, and also because there are other proposals
 which handle the same use cases.
 The problem with keeping the API on the Recommendation track is that people
 outside the standardization world think that the API is the one which all the
 browsers will implement, and as of now that doesn't seem likely.
 
 
 
 
 -Olli
 
 




Re: Moving File API: Directories and System API to Note track?

2012-09-20 Thread Maciej Stachowiak

+1

I don't see an indication of any major browser but Chrome planning to implement 
this and expose it to the Web.

 - Maciej

On Sep 18, 2012, at 4:04 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

 Hi all,
 
 
 I think we should discuss moving File API: Directories and System API
 from the Recommendation track to a Note, mainly because the API hasn't been
 widely accepted or implemented, and also because there are other proposals
 which handle the same use cases.
 The problem with keeping the API on the Recommendation track is that people
 outside the standardization world think that the API is the one which all the
 browsers will implement, and as of now that doesn't seem likely.
 
 
 
 
 -Olli
 




Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-28 Thread Maciej Stachowiak

On Aug 27, 2012, at 2:07 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 
 
 
 I have mixed feelings about this proposal overall, but I think it's a little 
 weird to use CSS property syntax instead of markup-like attribute syntax to 
 set attributes. I think this makes the syntax confusingly similar to CSS 
 even though proposed behavior is quite different. Something like this might 
 make more sense:
 
  img.placeholder {
      src="/images/1x1.gif"
      alt=""
  }
 
 In other words, make attribute setting look like attribute syntax.
 
 This is similar to the discussions about monocle-mustache in JS.  I
 prefer being able to simply re-use the existing CSS parser, but I'm
 not completely wedded to it.  I'd need some strong opinions from more
 relevant parties to switch over, though.

I doubt we would be able to actually reuse the CSS parser, at least at the code 
level, at least for WebKit. To give some examples of issues that arise:

- The CSS parser has specific knowledge of all the CSS properties we recognize 
and their value syntax. 
- It has no provision for catchall recognition of all other property names with 
a quoted string value.
- There are some CSS properties that have the same name as meaningful HTML 
attributes.
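
To make the last point concrete (plain DOM code, not CAS itself): width is 
both a CSS property with length syntax and an ordinary attribute on some 
elements, so a reused CSS parser would have to decide which interpretation a 
rule like width: ... means.

var img = document.createElement("img");
img.setAttribute("width", "100");   // the HTML attribute: a plain string
img.style.width = "100px";          // the CSS property: a CSS length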

 
 And finally, I'm skeptical whether the convenience here is worth adding a 
 whole new language to the Web Platform technology stack. It is literally 
 just convenience / syntactic sugar. I'm not sure that rises to the high bar 
 needed to add an additional language to the Web Platform.
 
 Yeah, this is the major hurdle for this.  My hope is that, by reusing
 the CSS parser and restricting all the expensive parts of CSS
 (dynamicness, remembering where a value came from, etc.), I can pull
 the bar down low enough for this to make it.

I think your assumption that it's possible to reuse the CSS parser is 
incorrect, at least for strong values of reuse.

Regards,
Maciej




Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-23 Thread Maciej Stachowiak

On Aug 22, 2012, at 11:08 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 08/22/2012 10:44 PM, Maciej Stachowiak wrote:
 
 On Aug 22, 2012, at 6:53 PM, Ojan Vafai o...@chromium.org 
 mailto:o...@chromium.org wrote:
 
 On Wed, Aug 22, 2012 at 6:49 PM, Ryosuke Niwa rn...@webkit.org 
 mailto:rn...@webkit.org wrote:
 
On Wed, Aug 22, 2012 at 5:55 PM, Glenn Maynard gl...@zewt.org 
 mailto:gl...@zewt.org wrote:
 
On Wed, Aug 22, 2012 at 7:36 PM, Maciej Stachowiak m...@apple.com 
 mailto:m...@apple.com wrote:
 
Ryosuke also raised the possibility of multiple text fields 
 having separate UndoManagers. On Mac, most apps wipe the undo queue when
you change text field focus. WebKit preserves a single undo 
 queue across text fields, so that tabbing out does not kill your ability to
undo. I don't know of any app where you get separate switchable 
 persistent undo queues. Things are similar on iOS.
 
 
 Think of the use-case of a threaded email client where you can reply to any 
 message in the thread. If it shows your composing mails inline (e.g. as
 gmail does), the most common user expectation IMO is that each email gets 
 its own undo stack. If you undo the whole stack in one email you wouldn't
 expect the next undo to start undoing stuff in another composing mail. In 
 either case, since there's a simple workaround (seamless iframes), I don't
 think we need the added complexity of the attribute.
 
 Depends on the user and their platform of choice. On the Mac I think it's 
 pretty much never the case that changing focus within a window changes your
 undo stack; it either has a shared one or wipes undo history on focus 
 switch. So if GMail forced that, users would probably be surprised. I can
 imagine a use case for having an API that allows multiple undo stacks on 
 platforms where they are appropriate, but merges to a single undo stack on
 platforms where they are not. However, I suspect an API that could handle 
 this automatically would be pretty hairy. So maybe we should handle the
 basic single-undo-stack use case first and then think about complexifying it.
 
 
 I think the undo-stack per editing context (like input) is pretty basic, 
 and certainly something I wouldn't remove from Gecko.
 (Largely because using the same undo for separate input elements is just 
 very weird, and forcing web apps to use iframes to achieve
 Gecko's current behavior would be horribly complicated.)

It might be ok to let Web pages conditionally get Gecko-like separate undo 
stack behavior inside Firefox, at least on Windows. (Firefox even seems to do 
per-field undo on Mac, so I'm starting to think that it's more of a Gecko quirk 
than a Windows platform thing.)

But, again, letting webpages force that behavior in Safari seems wrong to me. I 
don't think we should allow violating the platform conventions for undo so 
freely. You seem to feel strongly that webpages should be able to align with 
the Gecko behavior, but wouldn't it be even worse to let them forcibly violate 
the WebKit behavior?

So if there is an API for separate undo stacks, it has to handle the case where 
there's really a single undo stack. And that would potentially be hard to 
program with.

On the other hand, there are certainly use cases where a single global undo 
stack is right (such as a page with a single rich text editor). And it's easy 
to handle those cases without adding a lot of complexity. And if we get that 
right, we could try to add on something for conditional multiple undo stacks.

Regards,
Maciej


 



Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-22 Thread Maciej Stachowiak

Hi folks,

I wanted to mention that, in addition to the extra implementation complexity, I 
am not sure that multiple independent UndoManagers per page is even a good 
feature.

The use cases document gives a use case of a text editor with an embedded 
vector graphics editor. But for all the native apps I know of that have this 
feature, they present a single unified undo queue per top level document, at 
least on the Mac.

Ryosuke also raised the possibility of multiple text fields having separate 
UndoManagers. On Mac, most apps wipe they undo queue when you change text field 
focus. WebKit preserves a single undo queue across text fields, so that tabbing 
out does not kill your ability to undo. I don't know of any app where you get 
separate switchable persistent undo queues. Thins are similar on iOS.

Maybe things are wildly different on Windows or Linux or Android. But my 
feeling is that the use case for scoped UndoManagers is dubious at best, and 
may cause authors to create confusing UI.

Since the use cases are dubious *and* this aspect of UndoManager causes extra 
implementation complexity, I think dropping it is the right thing to do.

Regards,
Maciej


On Aug 21, 2012, at 2:00 PM, Ryosuke Niwa rn...@webkit.org wrote:

 Maciej, Ojan, and I had a further conversation about this matter off the list, 
 and we've concluded that we should drop support for the undoscope content 
 attribute altogether. So we're just going to do that and let authors use 
 iframes to have multiple undo managers.
 
 I can keep it around in the spec if other browser vendors are so inclined, 
 but I'm somewhat skeptical that we can get the required two independent 
 implementations given how much trouble we've had.
 
 - Ryosuke
 
 On Mon, Aug 20, 2012 at 9:52 PM, Ryosuke Niwa rn...@webkit.org wrote:
 Greetings all,
 
 We've been implementing undo manager in WebKit, and we've found out that 
 allowing live undo manager on a detached undo scope host is a terrible idea.
 
 e.g. say you have a subtree like the following (A has children B and C, and B 
 has a child D):
 
 A
 +- B
 |  +- D
 +- C
 where A is the undo scope host. If we then detach B from A, and then insert A 
 under D, all automatic transactions in A's undo manager are broken and may 
 create a cyclic reference graph because nodes touched in automatic 
 transactions need to be kept alive for undo/redo.
 
 If there is no objection, I'm changing the spec to disallow live undo manager 
 on detached nodes so that scripts can't move the host around inside a 
 document or between documents; i.e. when an undo scope host is removed from 
 its parent, its undo manager must be disconnected and a new undo manager be 
 created for the node.
 
 Alternatively, we can turn all automatic transactions in the undo manager 
 into no-ops but I'd prefer disconnecting the undo manager altogether to make 
 the behavior simple.
 
 Best,
 Ryosuke Niwa
 Software Engineer
 Google Inc.
 
 
 



Re: Proposal for Cascading Attribute Sheets - like CSS, but for attributes!

2012-08-22 Thread Maciej Stachowiak

On Aug 21, 2012, at 1:59 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 21, 2012 at 1:37 PM, Brian Kardell bkard...@gmail.com wrote:
 On Tue, Aug 21, 2012 at 4:32 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 Correct.  If we applied CAS on attribute changes, we'd have... problems.
 
 Because you could do something like:
 
 .foo[x=123] { x: 234; }
 .foo[x=234] { x: 123; }
 
 ?
 
 Precisely.  Any way around this problem pulls in a lot more complexity
 that I don't think is worthwhile.

I suspect it's actually pretty simple to fix. Ban attribute selectors, and 
forbid setting the class or id attributes using this language.


I have mixed feelings about this proposal overall, but I think it's a little 
weird to use CSS property syntax instead of markup-like attribute syntax to set 
attributes. I think this makes the syntax confusingly similar to CSS even 
though proposed behavior is quite different. Something like this might make 
more sense:

img.placeholder {
    src="/images/1x1.gif"
    alt=""
}

In other words, make attribute setting look like attribute syntax.

I also think the proposed semi-dynamic behavior of applying only at DOM 
insertion time but otherwise being static is super confusing.

And finally, I'm skeptical whether the convenience here is worth adding a whole 
new language to the Web Platform technology stack. It is literally just 
convenience / syntactic sugar. I'm not sure that rises to the high bar needed 
to add an additional language to the Web Platform.

Regards,
Maciej




Re: [UndoManager] Disallowing live UndoManager on detached nodes

2012-08-22 Thread Maciej Stachowiak

On Aug 22, 2012, at 6:53 PM, Ojan Vafai o...@chromium.org wrote:

 On Wed, Aug 22, 2012 at 6:49 PM, Ryosuke Niwa rn...@webkit.org wrote:
 On Wed, Aug 22, 2012 at 5:55 PM, Glenn Maynard gl...@zewt.org wrote:
 On Wed, Aug 22, 2012 at 7:36 PM, Maciej Stachowiak m...@apple.com wrote:
 Ryosuke also raised the possibility of multiple text fields having separate 
 UndoManagers. On Mac, most apps wipe the undo queue when you change text 
 field focus. WebKit preserves a single undo queue across text fields, so that 
 tabbing out does not kill your ability to undo. I don't know of any app where 
 you get separate switchable persistent undo queues. Things are similar on iOS.
 
 Think of the use-case of a threaded email client where you can reply to any 
 message in the thread. If it shows your composing mails inline (e.g. as gmail 
 does), the most common user expectation IMO is that each email gets its own 
 undo stack. If you undo the whole stack in one email you wouldn't expect the 
 next undo to start undoing stuff in another composing mail. In either case, 
 since there's a simple workaround (seamless iframes), I don't think we need 
 the added complexity of the attribute.

Depends on the user and their platform of choice. On the Mac I think it's 
pretty much never the case that changing focus within a window changes your 
undo stack; it either has a shared one or wipes undo history on focus switch. 
So if GMail forced that, users would probably be surprised. I can imagine a use 
case for having an API that allows multiple undo stacks on platforms where they 
are appropriate, but merges to a single undo stack on platforms where they are 
not. However, I suspect an API that could handle this automatically would be 
pretty hairy. So maybe we should handle the basic single-undo-stack use case 
first and then think about complexifying it.

  
 Firefox in Windows has a separate undo list for each input.  I would find a 
 single undo list strange.
 
 Internet Explorer and WebKit don't.
 
 While we're probably all biased to think that what we're used to is the best 
 behavior, it's important to design our API so that implementors need not 
 violate platform conventions. In this case, it might mean that whether a text 
 field has its own undo manager by default depends on the platform convention.
 
 Also, another option is that we could allow shadow DOMs to have their own 
 undo stack. So, you can make a control that has its own undo stack if you 
 want.

Again, I think it's not right to leave this purely up to the web page. That 
will lead to web apps that match their developer's platform of choice but which 
don't seem quite right elsewhere.


BTW, I don't think the API should impose any requirements on how browsers 
handle undo for their built-in form controls. I have not read the spec closely 
enough to know if that is the case.


Regards,
Maciej



URL spec parameter-related methods use parameter in a way inconsistent with the URI RFC

2012-05-24 Thread Maciej Stachowiak

The current draft URL spec has a number of Parameter-related methods 
(getParameterNames, getParameterValues, hasParameter, getParameter, 
setParameter, addParameter, removeParameter, clearParameters)[1]. Apparently 
these methods refer to key-value pairs in the query part of the URL as 
parameters. However, the term parameter is used by the URI RFC[2] to refer 
to something else, a semicolon-delimited part of a path (which I think is 
nearly obsolete in modern use; I am not sure what it is for). I understand that 
for legacy reasons, much of the URL interface cannot be consistent with 
RFC-official terminology. But it seems like a bad idea to use the same term for 
a different piece of the URL, worse than using the same term for a different 
part. At least call it something like query parameters to disambiguate.
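
To illustrate the distinction with an invented example URL:

http://example.com/files;version=2/report?name=value

Here version=2 is a parameter in the RFC's sense (a semicolon-delimited part 
of a path segment), while name=value is the query key-value pair that these 
methods actually operate on.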

Another point of feedback on the parameter-related methods: they seem to form a 
dictionary-style interface, and it seems inelegant to have all these different 
methods giving a dictionary-style interface to something that is a piece of the 
URL, rather than something that is the URL.

One possible way to solve both these problems:

interface URL {
    attribute StringMultiMap queryParameters;
};

interface StringMultiMap {
    readonly attribute sequence<DOMString> keys;
    sequence<DOMString> getAll(DOMString name);
    boolean contains(DOMString name);
    DOMString? get(DOMString name);
    void set(DOMString name, DOMString value);
    void add(DOMString name, DOMString value);
    void remove(DOMString name);
    void clear();
};

The StringMultiMap interface could be reusable for other, similar key-value 
list contexts.
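
A hypothetical usage sketch, assuming a URL object url exposing the interface 
above:

url.queryParameters.set("page", "2");      // replaces any existing value
url.queryParameters.add("tag", "draft");   // appends another value
url.queryParameters.getAll("tag");         // e.g. ["news", "draft"]
url.queryParameters.contains("page");      // true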

Or else use an appropriate dictionary type from ES if one is ever provided.

Regards,
Maciej


[1] 
http://dvcs.w3.org/hg/url/raw-file/tip/Overview.html#dom-url-getparameternames
[2] http://www.ietf.org/rfc/rfc2396.txt



Re: Shrinking existing libraries as a goal

2012-05-18 Thread Maciej Stachowiak

On May 17, 2012, at 10:58 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, May 17, 2012 at 3:21 PM, Yehuda Katz wyc...@gmail.com wrote:
 I am working on it. I was just getting some feedback on the general idea
 before I sunk a bunch of time in it.
 
 For what it's worth, I definitely support this idea too on a general
 level. However as others have pointed out, the devil's in the details,
 so looking forward to those :)
 
 Of course, ideal are proposals which not just shrinks existing
 libraries, but also helps people that aren't using libraries at all
 but rather uses the DOM directly.

I also agree that providing functionality which can help reduce the size of JS 
libraries is a good goal (though one of many). And also that the merits of 
specific proposals depend on the details. One aspect of this that can be 
challenging is finding functionality that will allow a broad range of libraries 
to shrink, rather than only one or a few.

 - Maciej




Re: [websockets] Moving Web Sockets back to LCWD; is 15210 a showstopper?

2012-05-08 Thread Maciej Stachowiak

I think it would be reasonable to defer the feature requested in 15210 to a 
future version of Web Sockets API. It would also be reasonable to include it if 
anyone feels strongly. Was a reason cited for why 15210 should be considered 
critical? I could not find one in the minutes.

Cheers,
Maciej


On May 3, 2012, at 3:41 PM, Arthur Barstow art.bars...@nokia.com wrote:

 During WebApps' May 2 discussion about the Web Sockets API CR, four Sockets 
 API bugs were identified as high priority to fix: 16157, 16708, 16703 and 
 15210. Immediately after that discussion, Hixie checked in fixes for 16157, 
 16708 and 16703, and these changes will require the spec going back to LC.
 
 Since 15210 remains open, before I start a CfC for a new LC, I would like 
 some feedback on whether the new LC should be blocked until 15210 is fixed, 
 or if we should move toward a new LC without the fix (and thus consider 15210 
 for the next version of the spec). If you have any comments, please send them 
 by May 10.
 
 -AB
 
 [Mins] http://www.w3.org/2012/05/02-webapps-minutes.html#item08
 [CR] http://www.w3.org/TR/2011/CR-websockets-20111208/
 [Bugz] http://tinyurl.com/Bugz-Web-Socket-API
 [15210] https://www.w3.org/Bugs/Public/show_bug.cgi?id=15210
 




Re: Element.create(): a proposal for more convenient element creation

2011-08-02 Thread Maciej Stachowiak

On Aug 1, 2011, at 8:36 PM, João Eiras wrote:

 On , Ian Hickson i...@hixie.ch wrote:
 
 On Mon, 1 Aug 2011, Ryosuke Niwa wrote:
 On Mon, Aug 1, 2011 at 6:33 PM, Maciej Stachowiak m...@apple.com wrote:
 
 In an IRC discussion with Ian Hickson and Tab Atkins, we came up with
 the following idea for convenient element creation:
 
 Element.create(tagName, attributeMap, children…)
 
 Can we alternatively extend document.createElement?  Or was this
 intentionally avoided to associate new elements with documents?
 
 We could, but I'd much rather have the shorter name, personally. Having
 the name be so long really makes that API unusable.
 
 
 However, Nodes need an ownerDocument, and that needs to be supplied, even if 
 optionally. Doing document.createElement implies the document; Element.create 
 does not.

The intent is that it supplies the Window's current Document. It's true that 
sometimes you need to make a node for another Document, sometimes even one that 
lacks a namespace, but that's a sufficiently specialized case that it does not 
really need a convenience shortcut.

Regards,
Maciej




Re: Element.create(): a proposal for more convenient element creation

2011-08-02 Thread Maciej Stachowiak

On Aug 1, 2011, at 8:43 PM, Tab Atkins Jr. wrote:

 On Mon, Aug 1, 2011 at 7:05 PM, Charles Pritchard ch...@jumis.com wrote:
 Can we have it 'inherit' a parent namespace, and have chaining properties?
 
 Element.create('div').create('svg').create('g').create('rect', {title: 'An 
 svg rectangle in an HTML div'});
 
 Ooh, so .create is defined both on Element (defaults to HTML
 namespace, just creates an element) and on Element.prototype (defaults
 to namespace of the element, inserts as a child)?  That's pretty
 interesting.  Presumably the new element gets inserted as a last child
 of the parent.
 
 I like it.

With just the old proposal you could get the same effect like this:

Element.create('div', {}, Element.create('svg', {}, Element.create('g', {}, 
Element.create('rect', {title: 'An svg rectangle in an HTML div'}))))

Chaining .create() is certainly less verbose. Doesn't work as well for 
inserting multiple children into an element though.

Regards,
Maciej




Re: CORS/UMP to become joint WebApps and WebAppSec joint deliverable

2011-08-02 Thread Maciej Stachowiak

On Aug 2, 2011, at 4:10 AM, Anne van Kesteren wrote:

 On Tue, 02 Aug 2011 12:53:49 +0200, Thomas Roessler t...@w3.org wrote:
 Well, groups can decide to stop working on a deliverable without having to 
 recharter; further, we've had separate groups work on joint deliverables in 
 the past.  In practical terms, the minimum that webapps needs to do is to 
 give its consent to publication and transition decisions; that can easily be 
 done through a call for consensus.  I trust that Art will help to nudge 
 discussions over to the WebAppSec group.
 
 None of that requires charter language beyond what's there already, and none 
 of it requires a rechartering of webapps.
 
 Can we at least make it so that public-webapps@w3.org stays the list for 
 technical discussion on CORS? We already switched mailing lists once (twice 
 if you count going from the initial proposal on public-web...@w3.org to 
 public-appform...@w3.org) and I would like to avoid doing it again. Getting 
 feedback is hard enough as it is, requiring all relevant people to subscribe 
 to yet another list would be bad.
 
 If that is not possible I think I would prefer CORS and From-Origin to stay 
 in the WebApps WG.

At Apple we have a somewhat lengthy internal process for joining new Working 
Groups, so if feedback has to go to a new WG's list, you will likely miss out 
on Apple feedback for at least a few months.

In addition to this, I'd personally prefer to have discussion remain on 
public-webapps because we've managed to gather all the stakeholders here, and 
gathering them again will just add disruption and delay. Perhaps WebAppSec 
could be focused on new deliverables instead of taking over a deliverable that 
is relatively on track as it is. This could include a hypothetical CORS 2, or 
even production of the CORS test suite.

Regards,
Maciej




Re: CORS/UMP to become joint WebApps and WebAppSec joint deliverable

2011-08-01 Thread Maciej Stachowiak

On Jul 15, 2011, at 7:51 AM, Thomas Roessler wrote:

 On Jul 15, 2011, at 16:47 , Anne van Kesteren wrote:
 
 On Fri, 15 Jul 2011 14:43:13 +0200, Arthur Barstow art.bars...@nokia.com 
 wrote:
 As indicated a year ago [1] and again at the end of last month [2], the 
 proposal to create a new Web Application Security WG has moved forward with 
 a formal AC review now underway and ending August 19.
 
 The proposed charter includes making CORS and UMP a joint deliverable 
 between the WebApps and WebAppSec WGs:
 
 [[
 http://www.w3.org/2011/07/appsecwg-charter.html
 
 Secure Cross-Domain Resource Sharing - Advance existing recommendations 
 specifying mechanisms necessary for secure mashup applications, including 
 the Cross-Origin Request Sharing (CORS) and Uniform Messaging Policy (UMP) 
 Such recommendations will be harmonized and published as joint work with 
 the W3C Web Applications Working Group.
 ]]
 
 Does this mean twice as much email? 
 
 Hope not -- the intent is that, for all intents and purposes, webappsec takes 
 over the deliverable, but webapps needs to agree to formal publications.
 
 Joint deliverable seems even worse than moving it.
 
 The goal of making this a joint deliverable is to preserve the patent 
 commitments out of webapps.  This was a concern that came up when we proposed 
 moving the document over to webappsec instead of making it a joint one.

Perhaps the charter should be edited to make this clear, so that AC reps will 
understand that Web Apps would no longer have operational control over these 
deliverables. (And incidentally, I don't see how that can validly be done 
without amending the Web Apps WG charter.)

Regards,
Maciej




Re: From-Origin FPWD

2011-08-01 Thread Maciej Stachowiak

On Jul 31, 2011, at 5:52 PM, Bjoern Hoehrmann wrote:

 * Anne van Kesteren wrote:
  http://www.w3.org/TR/from-origin/
 
 The proposed `From-Origin` header conveys a subset of the information
 that is already available through the Referer header.

From-Origin is a response header and Referer is a request header, so this 
statement would be irrelevant even if true. Also, it is not true. Referer 
indicates the resource responsible for generating the request; From-Origin 
lists the sites that are allowed embedding access to the resource.

I think you may be confusing the From-Origin header with the Origin header.

Regards,
Maciej




Re: From-Origin FPWD

2011-08-01 Thread Maciej Stachowiak

On Aug 1, 2011, at 10:29 AM, Hill, Brad wrote:

 The ability to do all of these things server-side, with referrer checking, 
 has been universally available for fifteen years.  (RFC 1945)
  
 In every one of the use cases below, From-Origin is a worse solution than 
 referrer checking.  What is the benefit?  Why should I choose From-Origin?  
 Why should we expect it to become universally deployed where referrer 
 checking is not?

The From-Origin design has two advantages over server-side Referer checking:

1) The Referer header is stripped by intermediaries, often enough that sites 
targeting a wide user base must be prepared for the fact that it may not be 
present. This limits the effectiveness of checking it.

2) In many static hosting environments, it is easier to add a fixed response 
header than to add server-side logic to check Referer. It also enables better 
caching by intermediaries, as the response would not require a Vary: Referer 
rule. It's quite common to serve resources such as images or CSS from a 
dedicated host that only serves static resources and does not execute 
application logic.
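
A purely illustrative contrast (Node-style handler; the names here are 
assumptions for the sketch, not part of the proposal):

// (a) Server-side Referer checking: per-request logic, plus a Vary rule.
function serveImage(req, res) {
  var referer = req.headers["referer"];    // may have been stripped en route
  if (referer && referer.indexOf("https://example.com/") !== 0) {
    res.writeHead(403);
    res.end();
    return;
  }
  res.setHeader("Vary", "Referer");        // needed for correct caching
  res.end(imageBytes);                     // imageBytes loaded elsewhere
}

// (b) From-Origin: one fixed header on an otherwise static response:
//     From-Origin: https://example.com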

For these reasons, despite the availability of server-side Referer checking, 
many Web developers would find a solution like From-Origin convenient and 
helpful.

Regards,
Maciej



Element.create(): a proposal for more convenient element creation

2011-08-01 Thread Maciej Stachowiak

In an IRC discussion with Ian Hickson and Tab Atkins, we came up with the 
following idea for convenient element creation:

Element.create(tagName, attributeMap, children…)

   Creates an element with the specified tag, attributes, and children.

   tagName - tag name as a string; by default it does smart selection of the SVG, 
HTML or MathML namespace. Authors can also use an html:, svg:, or mathml: prefix 
to override these defaults. (And further, you can use xmlns in the attribute map 
to use a custom namespace.)
   attributeMap - JS object-as-dictionary or whatever dictionary type is 
appropriate to the language, or null for no attributes
   children… - variadic parameter; can include nodes, strings, or arrays. 
Strings are converted to text nodes. Arrays are unpacked and treated as lists 
of nodes/strings. Array support is for cases where you want to have a call site 
that may take a variable-length list, with possible prefix and suffix.

Examples:

Element.create("a", {href: "http://google.com/"}, "Google")

Element.create("p", null, 
"Please consider these instructions ",
Element.create("em", {class: "intense"}, "very"),
" carefully")

Element.create('svg:a', {href: 'example.html'}, 'Click Me! Yay bad link text!');

Element.create('select', {multi: ''}, optionArray);

The default namespace mapping would be HTML, with possibly SVG or MathML 
elements that don't localName-collide with HTML elements mapping to their 
default namespace.
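
A hedged sketch of how that default mapping might look (an illustration of the 
idea only, not spec text):

function chooseNamespace(tag) {
  var namespaces = {
    html: "http://www.w3.org/1999/xhtml",
    svg: "http://www.w3.org/2000/svg",
    mathml: "http://www.w3.org/1998/Math/MathML"
  };
  var match = /^(html|svg|mathml):(.*)$/.exec(tag);
  if (match)
    return { namespaceURI: namespaces[match[1]], localName: match[2] };
  // Unprefixed names default to HTML; names that exist only in SVG or
  // MathML could map to their own namespace, per the note above.
  return { namespaceURI: namespaces.html, localName: tag };
}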


Why Element.create() instead of new Element()? It's a factory method. In 
general it returns instances of many different classes. new Foo() carries a 
strong expectation of returning a direct instance of Foo, not one of several 
subclasses.

We could also add new Text('foo') for the cases where you want to create a text 
node without an element around it.

Regards,
Maciej



Re: Publishing From-Origin Proposal as FPWD

2011-06-30 Thread Maciej Stachowiak

On Jun 30, 2011, at 7:22 AM, Anne van Kesteren wrote:

 Hi hi,
 
 Is there anyone who has objections against publishing 
 http://dvcs.w3.org/hg/from-origin/raw-file/tip/Overview.html as a FPWD. The 
 idea is mainly to gather more feedback to see if there is any interest in 
 taking this forward.
 
 (Added public-web-security because of the potential for doing this in CSP 
 instead. Though that would require a slight change of scope for CSP, which 
 I'm not sure is actually desirable.)

I approve of publishing this as FPWD.

I also don't think it makes sense to tie this to CSP.

Regards,
Maciej




Re: Component Model: Landing Experimental Shadow DOM API in WebKit

2011-06-30 Thread Maciej Stachowiak

On Jun 29, 2011, at 9:08 AM, Dimitri Glazkov wrote:

 Hi Folks!
 
 With use cases (http://wiki.whatwg.org/wiki/Component_Model_Use_Cases)

So I looked at this list of use cases. It seems to me almost none of these are 
met by the proposal at http://dglazkov.github.com/component-model/dom.html.

Can you give a list of use cases that are actually intended to be addressed by 
this proposal?

(I would be glad to explain in more detail why the requirements aren't 
satisfied in case it isn't obvious; the main issues being that the proposal on 
the table can't handle multiple bindings, can't be attached to form controls, 
and lacks proper 
(type 2) encapsulation).

These use cases are also, honestly, rather vague. In light of this, it's very 
hard to evaluate the proposal, since it has no obvious relationship to its 
supporting use cases.

Could we please get the following to be able to evaluate this component 
proposal:

- List of use cases, ideally backed up with very concrete examples, not just 
vague high-level statements
- Identification of which use cases the proposal even intends to address, and 
which will possibly be addressed later
- Explanation of how the proposal satisfies the use cases it is intended to 
address
- For bonus points, explanation of how XBL2 fails to meet the stated use cases

Regards,
Maciej




Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Maciej Stachowiak

On Jun 30, 2011, at 10:57 AM, Dimitri Glazkov wrote:

 Hi Maciej!
 
 First off, I really appreciate your willingness to get into the mix of
 things. It's a hard problem and I welcome any help we can get to solve
 it.
 
 I also very much liked your outline of encapsulation and I would like
 to start using the terminology you introduced.
 
 I am even flattered to see the proposal you outlined, because it's
 similar to the one we originally considered as part of the first
 iteration of the API
 (https://raw.github.com/dglazkov/component-model/cbb28714ada37ddbaf49b3b2b24569b5b5e4ccb9/dom.html)
 or even earlier versions
 (https://github.com/dglazkov/component-model/blob/ed6011596a0213fc1eb9f4a12544bb7ddd4f4894/api-idl.txt)
 
 We did remove them however, and opted for the simplest possible API,
 which effectively only exposes the shadow DOM part of the component
 model (see my breakdown here
 http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/1345.html).
 
 One of the things to keep in mind is that the proposal outlined in
 http://dglazkov.github.com/component-model/dom.html is by no means a
 complete component model API. It's just the smallest subset that can
 already be useful in addressing some of the use cases listed in
 http://wiki.whatwg.org/wiki/Component_Model_Use_Cases.
 
 It seems obvious that it is better to have a few small, closely related
 useful bits that could be combined into a bigger picture rather than
 one large monolithic feature that can't be teased apart.

The problem is that some pervasive properties (encapsulation, security, etc) 
can't be added after the fact to a system that doesn't have them designed in.

 
 As for addressing encapsulation concerns, one of the simplest things
 we could do is to introduce a flag on the ShadowRoot (we can discuss the
 default value) which, if set, prohibits access to it with the
 element.shadow property.

Why is that better than my proposal? I believe all the benefits I listed for my 
proposal over yours still apply to this new proposal. Can you either rebut 
those stated benefits, or tell me what benefits this version has over mine?



Regards,
Maciej




Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Maciej Stachowiak

On Jun 30, 2011, at 1:03 PM, Dimitri Glazkov wrote:

 Maciej, as promised on #whatwg, here's a more thorough review of your
 proposal. I am in agreement in the first parts of your email, so I am
 going to skip those.
 
 == Are there other limitations created by the lack of encapsulation? ==
 
 My understanding is yes, there are some serious limitations:
 
 (1) It won't be possible (according to Dmitri) to attach a binding to an 
 object that has a native shadow DOM in the implementation (e.g. form 
 controls). That's because there can only be one shadow root, and form 
 controls have already used it internally and made it private. This seems 
 like a huge limitation. The ability to attach bindings/components to form 
 elements is potentially a huge win - authors can use the correct semantic 
 element instead of div soup, but still have the total control over look and 
 feel from a custom script-based implementation.
 
 (2) Attaching more than one binding with this approach is a huge hazard. 
 You'll either inadvertently blow away the previous, or won't be able to 
 attach more than one, or if your coding is sloppy, may end up mangling both 
 of them.
 
 I think these two limitations are intrinsic to the approach, not incidental.
 
 I would like to frame this problem as multiple-vs-single shadow tree
 per element.
 
 Encapsulation is achievable with single shadow tree per element by
 removing access via webkitShadow. You can discover whether a tree
 exists (by the fact that an exception is thrown when you attempt to
 set webkitShadow), but that's hardly breaking encapsulation.
 
 The issues you've described above are indeed real -- if you view
 adding new behavior to elements as a process of binding, that is,
 something added to existing elements, possibly more than once. If we
 decide that this is the correct way to view attaching behavior, we
 definitely need to fix this.
 
 I attempted to articulate a different view here
 http://lists.w3.org/Archives/Public/public-webapps/2011JanMar/0941.html.
 Here, adding new behavior to elements means creating a sub-class of an
 element. This should be a very familiar programming concept, probably
 more understood than the decorator or mixin-like binding approach.

How would your subclass idea resolve the two problems above?

 
 For the key use case of UI widgets, sub-classing is very natural. I
 take a div, and sub-class it into a hovercard
 (http://blog.twitter.com/2010/02/flying-around-with-hovercards.html).
 I rarely bind a hovercard behavior to some random element -- not just
 because I typically don't need to, but also because I expect a certain
 behavior from the base element from which to build on. Binding a
 hovercard to an element that doesn't display its children (like img or
 input) is useless, since I want to append child nodes to display that
 user info.
 
 I could then make superhovercard by extending the hovercard. The
 single shadow DOM tree works perfectly in this case, because you
 either:
 1) inherit the tree of the subclass and add behavior;
 2) override it.
 
 In cases where you truly need a decorator, use composition. Once we
 have the basics going, we may contemplate concepts like inherited
 (http://dev.w3.org/2006/xbl2/#the-inherited-element) to make
 sub-classing more convenient.
 
 Sub-classing as a programming model is well-understood, and easy to grasp.
 
 On the other hand, the decorators are less known and certainly carry
 hidden pains. How do you resolve API conflicts (two bindings have two
 properties/functions by the same name)? As a developer, how do you
 ensure a stable order of bindings (bindings competing for the z-index
 and depending on the order of they are initialized, for example)?

I think decorators have valid use cases. For example, let's say I want to make 
a component that extracts microformat or microdata marked up content from an 
element and present hover UI to allow handy access to it. For example, it could 
extract addresses and offer map links. I would want this to work on any 
element, even if the element already has an active behavior implemented by a 
component. I should not have to subclass every type of element I may want to 
apply this to. It's especially problematic if you have to subclass even 
different kinds of built in elements. Do I need separate subclasses for div, 
span, address, section p, and whatever other kind of element I imagine this 
applying to? That doesn't seem so great.

You are correct that figuring out how multiple bindings work is tricky. But 
even if we choose not to do it, making components truly encapsulated does not 
make it any harder to have a one-binding-only model with no inheritance.


 Notice that this scheme is not significantly more complex to use, spec or 
 implement than the shadow/shadowHost proposal. And it provides a number of 
 advantages:
 
 A) True encapsulation is possible, indeed it is the easy default path. The 
 component provider has to go out of its way to 

Re: Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-30 Thread Maciej Stachowiak

On Jun 30, 2011, at 2:07 PM, Dimitri Glazkov wrote:

 On Thu, Jun 30, 2011 at 1:32 PM, Maciej Stachowiak m...@apple.com wrote:
 
 On Jun 30, 2011, at 1:03 PM, Dimitri Glazkov wrote:
 
 
 In the case of extending elements with native shadow DOM, you have to
 use composition or have something like inherited, where you nest the
 native shadow tree in your own.

Why should a Web developer need to know or care which HTML elements have a 
native shadow DOM to be able to attach components to them? Is this actually 
something we want to specify? Would we specify exactly what the native shadow 
DOM is for each element to make it possible to inherit them? This seems like it 
would lock in a lot of implementation details of form controls and so strikes 
me as a bad direction.

 
 In the case of attaching multiple bindings -- you just can't.
 That's the difference between inheritance and mixins :)

OK, so your proposal would be unable to address my microformat decorator sample 
use case at all, no matter how it was modified. It would also not be able to 
handle both a Web page and a browser extension attaching behavior to the same 
element via components at the same time. Those seem like major limitations.

 
 To make further progress, I would like to concentrate on resolving
 these two issues:
 
 1) should we use object inheritance (one shadow subtree) or mixins
 (multiple shadow subtrees)?
 
 I think it's possible to partially table this issue. If mixins are required, 
 then raw access to the shadow tree is not viable. But using inheritance / 
 single binding is possible with either proposal.
 
 I think that changes a lot of nomenclature though, right? You don't
 have bindings with inheritance. It's just you or your sub-class.
 Also, element.bindComponent doesn't make much sense if you can only
 inherit the relationship.

You can call it attachComponent if you want. Or setComponent. I think we can 
make the way of attaching to a native element different from the way you 
inherit from another component. I don't really see how element.shadow = 
whatever is a better fit for inheritance than 
element.bindComponent(whatever).

Still, I think this is diving too far into the details where we are not even 
clear on the use cases.

 
 
 2) do we need webkitShadow or similar accessor to shadow subtree(s)?
 
 This question is a helpful one. I haven't seen any reason articulated for 
 why such an accessor is required. The fact that it's not present in other 
 similar technologies seems like proof that it is not required.
 
 Yes, I will work on use cases. Though this concept is certainly
 present in other technologies. Just take a look at Silverlight and its
 LogicalTreeHelper
 (http://msdn.microsoft.com/en-us/library/ms753391.aspx).

Is there anything that Silverlight can do that Mozilla's XBL, sXBL, and HTC 
can't, as a result of this choice?

 
 
 
 
 I think these are all resolved by supplying use cases and rationale. Right?
 
 If so, I think we need a real list of use cases to be addressed. The one 
 provided seems to bear no relationship to your original proposal (though I 
 believe my rough sketch satisfies more of them as-is and is more obviously 
 extensible to satisfying more of them).
 
 Did you mean the hovercard? I bet I can write a pretty simple bit of
 code that would usefully consume the API from my proposal.

I meant the wiki list of use cases.

For concrete use cases, the most valuable kind would be examples from real Web 
sites, including the URL of the original, a description of how it works, and 
the code it uses to make that happen. Made-up examples can be illustrative but 
won't help us sort out questions of "what are Web authors really doing and what 
do they need?", which seem to come up a lot in this discussion.

Regards,
Maciej








Component Models and Encapsulation (was Re: Component Model: Landing Experimental Shadow DOM API in WebKit)

2011-06-29 Thread Maciej Stachowiak


I am not a fan of this API because I don't think it provides sufficient 
encapsulation. The words encapsulation and isolation have been used in 
different ways in this discussion, so I will start with an outline of different 
possible senses of encapsulation that could apply here.

== Different kinds of encapsulation == 

1) Encapsulation against accidental exposure - DOM Nodes from the shadow tree 
are not leaked via pre-existing generic APIs - for example, events flowing out 
of a shadow tree don't expose shadow nodes as the event target.

2) Encapsulation against deliberate access - no API is provided which lets code 
outside the component poke at the shadow DOM. Only internals that the component 
chooses to expose are exposed.

3) Inverse encapsulation - no API is provided which lets code inside the 
component see content from the page embedding it (this would have the effect of 
something like sandboxed iframes or Caja).

4) Isolation for security purposes - it is strongly guaranteed that there is no 
way for code outside the component to violate its confidentiality or integrity.

5) Inverse isolation for security purposes - it is strongly guaranteed that 
there is no way for code inside the component to violate the confidentiality or 
integrity of the embedding page.


I believe the proposed API has property 1, but not properties 2, 3 or 4. The 
webkitShadow IDL attribute violates property #2; I assume it is obvious why the 
others do not hold.

I am not greatly interested in 3 or 4, but I believe #2 is important for a 
component model.


== Why is encapsulation (type 2) important for components? ==

I believe type 2 encapsulation is important, because it allows components to be 
more maintainable, reusable and robust. Type 1 encapsulation keeps components 
from breaking the containing page accidentally, and can keep the containing 
page from breaking the component. If the shadow DOM is exposed, then you have 
the following risks:

(1) A page using the component starts poking at the shadow DOM because it can - 
perhaps in a rarely used code path.
(2) The component is updated, unaware that the page is poking at its guts.
(3) Page adopts new version of component.
(4) Page breaks.
(5) Page author blames component author or rolls back to old version.

This is not good. Information hiding and hiding of implementation details are 
key aspects of encapsulation, and are good software engineering practice. 
Dmitri has argued that pages today do a version of components with no 
encapsulation whatsoever, because many are written by monolithic teams that 
control the whole stack. This does not strike me as a good argument. 
Encapsulation can help teams maintain internal interfaces as they grow, and can 
improve reusability of components to the point where maybe sites aren't quite 
so monolithically developed.

Furthermore, consider what has happened with JavaScript. While the DOM has no 
good mechanism for encapsulation, JavaScript offers a choice. Object properties 
are not very encapsulated at all; by default anyone can read or write them. But 
local variables in a closure are fully encapsulated. It's more and more 
considered good practice in JavaScript to build objects based on closures to 
hide implementation details. This is the case even though the closure approach 
is more awkward to code, and may hurt performance. For ECMAScript Harmony, a 
form of object that provides true encapsulation is being considered.
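
A minimal illustration of that closure pattern:

function makeCounter() {
  var count = 0;                       // fully encapsulated in the closure
  return {
    increment: function () { count++; },
    value: function () { return count; }
  };
}

var counter = makeCounter();
counter.increment();
counter.value();   // 1
counter.count;     // undefined - the internal state is simply not exposed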

I don't want us to make the same mistake with DOM components that in retrospect 
I think was made with JavaScript objects. Let's provide good encapsulation out 
of the gate.

And it's important to keep in mind here that this form of encapsulation is 
*not* meant as a security measure; it is meant as a technique for robustness 
and good software engineering.


== Are there use cases for breaking type 2 encapsulation against the will of 
the component? ==

I'm not aware of any. I asked Dmitri to explain these use cases on IRC and he 
didn't have any specific ones in mind, just said that exposing the shadow DOM 
directly is the simplest thing that could possibly work and so is easy to 
prototype and implement. I think there are other starter approaches that are 
easy to implement but provide stronger encapsulation.


== Are there other limitations created by the lack of encapsulation? ==

My understanding is yes, there are some serious limitations:

(1) It won't be possible (according to Dmitri) to attach a binding to an object 
that has a native shadow DOM in the implementation (e.g. form controls). That's 
because there can only be one shadow root, and form controls have already used 
it internally and made it private. This seems like a huge limitation. The 
ability to attach bindings/components to form elements is potentially a huge 
win - authors can use the correct semantic element instead of div soup, but 
still have the total control over look and feel from a custom script-based 

Re: Model-driven Views

2011-04-29 Thread Maciej Stachowiak

On Apr 28, 2011, at 5:46 AM, Alex Russell wrote:

 On Thu, Apr 28, 2011 at 12:09 PM, Maciej Stachowiak m...@apple.com wrote:
 
 On Apr 28, 2011, at 2:33 AM, Jonas Sicking wrote:
 
 
 I agree with much of this. However it's hard to judge without a bit
 more meat on it. Do you have any ideas for what such primitives would
 look like?
 
 That's best discussed in the context of Rafael explaining what limitations 
 prevent his proposal from working as well as it could purely as a JS library.
 
 The goal for this work is explicitly *not* to leave things to
 libraries -- I'd like for that not to creep into the discussion as an
 assumption or a pre-req.

I introduce this not as a pre-req or assumption but rather as my view of the 
best approach to addressing templating use cases, at least as a first step. I 
would also like it not to be a pre-req that templating must be addressed by a 
monolithic solution. But I am willing to hear out arguments for how it is 
better.


 Libraries are expensive, slow, and lead to a tower-of-babel problem.

That is all potentially true. But the tower-of-babel problem already exists in 
this area. Adding a new solution won't make the existing solutions disappear. 
The best way to mitigate the costs you describe is to provide primitives that 
enable the existing solutions to improve their quality of implementation.

 On the other hand, good layering and the
 ability to explain current behavior in terms of fewer, smaller
 primitives is desirable, if only to allow libraries to play whatever
 role they need to when the high-level MDV system doesn't meet some
 particular need.

That is a reasonable line of thinking. But in addition to modularity, I would 
also suggest a particular ordering - first add the right primitives to enable 
efficient, convenient DOM-based templating, then look for libraries to adopt it 
and/or promulgate new libraries, and only then standardize the high-level bits 
if they turn out to be high-value at that point. I had many particular 
supporting arguments for this approach, which your comments do not address.

 
 The one specific thing I recall from a previous discussion of this proposal 
 is that a way is needed to have a section of the DOM that is inactive - 
 doesn't execute scripts, load anything, play media, etc - so that your 
 template pattern can form a DOM but does not have side effects until the 
 template is instantiated.
 
 Right. The contents of the template element are in that inactive state.
 
 This specific concept has already been discussed on the list, and it seems 
 like it would be very much reusable for other DOM-based templating systems, 
 if it wasn't tied to a specific model of template instantiation and updates.
 
 Having it be a separately addressable primitive sounds like a good
 thing...perhaps as some new Element type?

I'm glad we agree on this aspect.

I'm not sure what you mean by new Element type, but nothing prevents us from 
simply defining a new ordinary element (HTML element or otherwise) that has 
this semantic. Note that HTML elements generally already have the desired 
inactive behavior in viewless documents (as created by createDocument or 
XMLHttpRequest), so an element that introduces such behavior should be quite 
modest in terms of spec and implementation burden.
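
As a sketch of the intended inertness (using template as a hypothetical name 
for such an element):

  // Content parsed inside the inactive element does not load or run:
  var t = document.createElement('template');
  t.innerHTML = '<img src="/large-photo.png">';
  // No network request yet; the img sits in an inert fragment.

  // Side effects begin only when the content is instantiated:
  document.body.appendChild(t.content.cloneNode(true));
  // Now the image actually loads.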

Regards,
Maciej







Re: Model-driven Views

2011-04-28 Thread Maciej Stachowiak

On Apr 27, 2011, at 6:46 PM, Rafael Weinstein wrote:

 
 
 
 What do you think?
 
 
 - Is this something you'd like to be implemented in the browsers,
 
 Yes.
 
  and if yes, why? What would be the reasons to not just use script
  libraries (like your prototype).
 
 FAQ item also coming for this.

Having heard Rafael's spiel for this previously, I believe there are some 
things that templating engines want to do, which are hard to do efficiently and 
conveniently using the existing Web platform.

However, I think it would be better to add primitives to the Web platform that 
could be used by the many templating libraries that already exist, at least as 
a first step:

- There is a lot of code built using many of the existing templating solutions. 
If we provide primitives that let those libraries become more efficient, that 
is a greater immediate payoff than creating a new templating system, where Web 
apps would have to be rewritten to take advantage.

- It seems somewhat hubristic to assume that a newly invented templating 
library is so superior to all the already existing solutions that we should 
encode its particular design choices into the Web platform immediately.

- This new templating library doesn't have enough real apps built on it yet to 
know if it is a good solution to author problems.

- Creating APIs is best done incrementally. API is forever, on the Web.

- Looking at the history of querySelector(), I come to the following 
conclusion: when there are already a lot of library-based solutions to a 
problem, the best approach is to provide technology that can be used inside 
those libraries to improve them; this is more valuable than creating an API 
with a primary goal of direct use. querySelector gets used a lot more via 
popular JavaScript libraries than directly, and should have paid more attention 
to that use case in the first place.
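
To sketch the pattern I have in mind (the fallback engine here is 
hypothetical):

  // A library keeps its own API and edge-case handling, while the native
  // primitive does the heavy lifting where available:
  function query(selector, root) {
    root = root || document;
    if (root.querySelectorAll) {
      return Array.prototype.slice.call(root.querySelectorAll(selector));
    }
    return legacySelectorEngine(selector, root); // hypothetical fallback
  }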

Perhaps there are novel arguments that will dissuade me from this line of 
thinking, but these are my tentative thoughts.

Regards,
Maciej




Re: Model-driven Views

2011-04-28 Thread Maciej Stachowiak

On Apr 28, 2011, at 2:33 AM, Jonas Sicking wrote:

 On Thu, Apr 28, 2011 at 2:02 AM, Maciej Stachowiak m...@apple.com wrote:
 
 On Apr 27, 2011, at 6:46 PM, Rafael Weinstein wrote:
 
 
 
 
 What do you think?
 
 
 - Is this something you'd like to be implemented in the browsers,
 
 Yes.
 
  and if yes, why? What would be the reasons to not just use script
  libraries (like your prototype).
 
 FAQ item also coming for this.
 
 Having heard Rafael's spiel for this previously, I believe there are some 
 things that templating engines want to do, which are hard to do efficiently 
 and conveniently using the existing Web platform.
 
 However, I think it would be better to add primitives to the Web platform 
 that could be used by the many templating libraries that already exist, at 
 least as a first step:
 
 - There is a lot of code built using many of the existing templating 
 solutions. If we provide primitives that let those libraries become more 
 efficient, that is a greater immediate payoff than creating a new templating 
 system, where Web apps would have to be rewritten to take advantage.
 
 - It seems somewhat hubristic to assume that a newly invented templating 
 library is so superior to all the already existing solutions that we should 
 encode its particular design choices into the Web platform immediately.
 
 - This new templating library doesn't have enough real apps built on it yet 
 to know if it is a good solution to author problems.
 
 - Creating APIs is best done incrementally. API is forever, on the Web.
 
 - Looking at the history of querySelector(), I come to the following 
 conclusion: when there are already a lot of library-based solutions to a 
 problem, the best approach is to provide technology that can be used inside 
 those libraries to improve them; this is more valuable than creating an API 
 with a primary goal of direct use. querySelector gets used a lot more via 
 popular JavaScript libraries than directly, and should have paid more 
 attention to that use case in the first place.
 
 Perhaps there are novel arguments that will dissuade me from this line of 
 thinking, but these are my tentative thoughts.
 
 I agree with much of this. However it's hard to judge without a bit
 more meat on it. Do you have any ideas for what such primitives would
 look like?

That's best discussed in the context of Rafael explaining what limitations 
prevent his proposal from working as well as it could purely as a JS library. 

The one specific thing I recall from a previous discussion of this proposal is 
that a way is needed to have a section of the DOM that is inactive - doesn't 
execute scripts, load anything, play media, etc - so that your template pattern 
can form a DOM but does not have side effects until the template is 
instantiated. This specific concept has already been discussed on the list, and 
it seems like it would be very much reusable for other DOM-based templating 
systems, if it wasn't tied to a specific model of template instantiation and 
updates.

Regards,
Maciej




Re: publish a new Working Draft of DOM Core; comment deadline March 2

2011-02-28 Thread Maciej Stachowiak

On Feb 24, 2011, at 5:21 PM, Doug Schepers wrote:

 Hi, Anne-
 
 I object to publishing a Working Draft of the DOM Core spec that includes DOM 
 Events.
 
 Introducing conflicting specifications that cover the same materials 
 dramatically harms interoperability, and the idea of competing 
 specifications is an anti-pattern when it comes to standardization.
 
 If there are changes that you want to the DOM3 Events spec, and if you get 
 the support of the browser vendors to make those changes, then I am happy to 
 change the spec; I'm not married to the spec as it exists, but that is the 
 result of the last few years of discussing it with the browser vendors 
 and users.  Please simply raise the individual issues on the 
 www-dom mailing list for discussion.  So far, I've seen no support on the 
 list for adding events to DOM Core.
 
 Finally, at TPAC, when we discussed working on DOM Core and DOM 3 Events in 
 parallel, we did not agree to adding events to DOM Core; in fact, we agreed 
 to exactly the opposite: you wanted to move mutation events into DOM Core in 
 a thinly-veiled attempt to remove them completely (rather than simply 
 deprecate them as is done in DOM3 Events), and all the browser vendors 
 disagreed with that.  Claiming otherwise is simply an attempt to rewrite 
 history.
 
 So, in summary: please remove section 4, Events, from DOM Core before 
 publishing it as a Working Draft, for now.  After serious discussion, if the 
 group agrees, we can always add them back later, but I would prefer changing 
 DOM3 Events to seeing conflicting specifications.

I recall that we discussed putting core event support into DOM Core, so that it 
could be a unified Web-compatible successor to both DOM Level 3 Core and DOM 
Level 3 Events. Many specific reasons were given why it's better to define 
events together with the core instead of separately. I don't think we had 
agreement to leave events out of DOM Core. 

I believe what implementors present at TPAC agreed to is that we do not like 
mutation events and want them to die in a fire.

I can't recall the details beyond that; I would have to check the minutes.

For what it's worth, I (still) think it makes sense to define DOM Core and the 
central parts of DOM Events together (not necessarily the individual event 
names and interfaces though). They are not really logically separate.

Regards,
Maciej




Re: publish a new Working Draft of DOM Core; comment deadline March 2

2011-02-28 Thread Maciej Stachowiak

On Feb 26, 2011, at 7:15 AM, Doug Schepers wrote:

 
 I will remove my objection to publishing DOM Core if: 1) conflicts (rather than 
 extensions) are removed from the draft, or reconciled with changes in DOM3 
 Events; and 2) for those changes that have broad consensus, we can integrate 
 them into DOM3 Events, which means that the changes should be sent as 
 comments on DOM3 Events, to be discussed by the group and their current 
 status determined.

What conflicts or contradictions exist currently? Does anyone have a list?

Regards,
Maciej




Re: Cross-Origin Resource Embedding Restrictions

2011-02-28 Thread Maciej Stachowiak

For what it's worth, I think this is a useful draft and a useful technology. 
Hotlinking prevention is of considerable interest to Web developers, and doing 
it via server-side Referer checks is inconvenient and error-prone. I hope we 
can fit it into Web Apps WG, or if not, find another good home for it at the W3C.
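
For comparison, the server-side approach this would replace looks roughly like 
the following (an Express-style sketch; the paths and names are illustrative):

  // Hotlinking prevention via Referer checking, the error-prone way:
  app.use('/images', function (req, res, next) {
    var referer = req.get('Referer');
    // Fragile: Referer is often absent (privacy settings, bookmarks,
    // some proxies), so the server must guess what to do in that case.
    if (referer && referer.indexOf('https://example.com/') !== 0) {
      return res.status(403).end();
    }
    next();
  });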

One thing I am not totally clear on is how this would fit into CSP. A big focus 
for CSP is to enable site X to have a policy that prevents it from accidentally 
including scripts from site Y, and things of that nature. In other words, site 
X voluntarily limits its own embedding capabilities. But the desired 
feature is kind of the opposite of that. I think it would be confusing to 
stretch CSP to this use case, much as it would have been confusing to reuse 
CORS for this purpose.

Regards,
Maciej

On Feb 28, 2011, at 11:35 PM, Anne van Kesteren wrote:

 Hi,
 
 The WebFonts WG is looking for a way to prevent cross-origin embedding of 
 fonts as certain font vendors want to license their fonts with such a 
 restriction. Some people think CORS is appropriate for this, some don't. Here 
 is some background material:
 
 http://weblogs.mozillazine.org/roc/archives/2011/02/distinguishing.html
 http://annevankesteren.nl/2011/02/web-platform-consistency
 http://lists.w3.org/Archives/Public/public-webfonts-wg/2011Feb/0066.html
 
 
 More generally, having a way to prevent cross-origin embedding of resources 
 can be useful. In addition to license enforcement it can help with:
 
 * Bandwidth theft
 * Clickjacking
 * Privacy leakage
 
 To that effect I wrote up a draft that complements CORS. Rather than enabling 
 sharing of resources, it allows for denying the sharing of resources:
 
 http://dvcs.w3.org/hg/from-origin/raw-file/tip/Overview.html
 
 And although it might end up being part of the Content Security Policy work, I 
 think it would be useful to publish a Working Draft of this work to gather 
 more input, committing us to nothing.
 
 What do you think?
 
 Kind regards,
 
 
 -- 
 Anne van Kesteren
 http://annevankesteren.nl/
 




Re: CfC: publish a new Working Draft of DOM Core; comment deadline March 2

2011-02-24 Thread Maciej Stachowiak

I support this publication.

 - Maciej

On Feb 23, 2011, at 8:20 AM, Arthur Barstow wrote:

 Anne and Ms2ger (representing Mozilla Foundation) have continued to work on 
 the DOM Core spec and they propose publishing a new Working Draft of the spec:
 
   http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html
 
 As such, this is a Call for Consensus (CfC) to publish a new WD of DOM Core. 
 If you have any comments or concerns about this proposal, please send them to 
 public-webapps by March 2 at the latest.
 
 As with all of our CfCs, positive response is preferred and encouraged and 
 silence will be assumed to be agreement with the proposal.
 
 -Art Barstow
 
 



Re: XBL2: First Thoughts and Use Cases

2010-12-15 Thread Maciej Stachowiak

On Dec 15, 2010, at 11:14 AM, Boris Zbarsky wrote:

 
 
 At least in Gecko's case, we still use XBL1 in this way, and those design
 goals would apply to XBL2 from our point of view.  It sounds like you have
 entirely different design goals, right?
 
 Sounds like it.
 
 OK, so given contradictory design goals, where do we go from here?

Are they really contradictory? It sounds like Tab doesn't care about the use 
case where you want hundreds or thousands of instances without undue memory 
use, since he's looking to replace technologies that already don't support 
this. But it doesn't seem like these use cases are fundamentally incompatible.

Personally, I think it would be a huge win if XBL2-based components could be 
more scalable than ones written in pure JavaScript using vanilla DOM calls. 
That way, XBL2 could enable new kinds of applications and reduce memory use of 
existing applications, rather than just providing convenience and bridging, as 
Tab seems to envision.

Regards,
Maciej




Re: Structured clone in WebStorage

2010-12-02 Thread Maciej Stachowiak

On Dec 2, 2010, at 5:45 AM, Arthur Barstow wrote:

 On Nov/29/2010 9:59 AM, ext Adrian Bateman wrote:
 On Wednesday, November 24, 2010 3:01 AM, Jeremy Orlow wrote:
 For over a year now, the WebStorage spec has stipulated that
 Local/SessionStorage store and retrieve objects per the structured clone
 algorithm rather than strings.  And yet there isn't a single implementation
 who's implemented this.  I've talked to people in the know from several of
 the other major browsers and, although no one is super against implementing
 it (including us), no one has it on any of their (even internal)
 roadmaps.  It's just not a high enough priority for anyone at the moment.
 I feel pretty strongly that we should _at least_ put in some non-normative
 note that no browser vendor is currently planning on implementing this
 feature.  Or, better yet, just remove it from the spec until support starts
 emerging.
 I agree. We have no plans to support this in the near future either. At the
 very least, I think this should be noted as a feature at risk in the Call
 for Implementations [1].
 I don't have a strong preference for removing this feature or marking it as a 
 Feature At Risk when the Candidate is published.
 
 It would be good to get feedback from other implementers (Maciej?, Jonas?, 
 Anne?). If no one plans to implement it, perhaps it should just be removed.

We think this feature would be straightforward to implement in Safari/WebKit, 
and we think it is a useful feature. We would like to implement it at some 
point. I can't give a specific timeline.
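
To spell out the difference (a sketch; the structured-clone variant is the 
unimplemented behavior under discussion):

  // What engines ship today: values are coerced to strings, so authors
  // round-trip objects through JSON by hand.
  localStorage.setItem('prefs', JSON.stringify({ theme: 'dark' }));
  var prefs = JSON.parse(localStorage.getItem('prefs'));

  // What the spec stipulates (structured clone): the object itself
  // would round-trip with no manual serialization.
  // localStorage.setItem('prefs', { theme: 'dark' });
  // localStorage.getItem('prefs').theme  // 'dark'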

Regards,
Maciej




Re: Structured clone in WebStorage

2010-12-02 Thread Maciej Stachowiak

On Dec 2, 2010, at 10:41 AM, Jeremy Orlow wrote:

 On Thu, Dec 2, 2010 at 6:29 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Dec 2, 2010 at 5:45 AM, Arthur Barstow art.bars...@nokia.com wrote:
  On Nov/29/2010 9:59 AM, ext Adrian Bateman wrote:
 
  On Wednesday, November 24, 2010 3:01 AM, Jeremy Orlow wrote:
 
  For over a year now, the WebStorage spec has stipulated that
  Local/SessionStorage store and retrieve objects per the structured clone
  algorithm rather than strings.  And yet there isn't a single
  implementation
  who's implemented this.  I've talked to people in the know from several
  of
  the other major browsers and, although no one is super against
  implementing
  it (including us), no one has it on any of their (even internal)
  roadmaps.  It's just not a high enough priority for anyone at the moment.
  I feel pretty strongly that we should _at least_ put in some
  non-normative
  note that no browser vendor is currently planning on implementing this
  feature.  Or, better yet, just remove it from the spec until support
  starts
  emerging.
 
  I agree. We have no plans to support this in the near future either. At
  the
  very least, I think this should be noted as a feature at risk in the
  Call
  for Implementations [1].
 
  I don't have a strong preference for removing this feature or marking it as
  a Feature At Risk when the Candidate is published.
 
  It would be good to get feedback from other implementers (Maciej?, Jonas?,
  Anne?). If no one plans to implement it, perhaps it should just be removed.
 
 I personally would like to see it implemented in Firefox (and other
 browsers), but I don't feel super strongly. It's something that we
 likely will be discussing in a few weeks here at Mozilla.
 
 My understanding is that many people across many browsers have thought it was 
 a cool idea and would have been happy to have seen it implemented.  But no 
 one has done so.
 
 Which is why I think we should _at least_ add a non-normative note stating 
 the situation to the spec.  Once it's being implemented then, by all means, 
 we can remove it.  But who knows how much longer it'll be before anyone 
 actually implements it.

I don't think it is necessary for specs to include non-normative notes about 
the current implementation status of particular features.

I would be ok with marking the feature at risk if it still lacks 
implementations by the time Web Storage goes to CR.

Regards,
Maciej



Re: CfC: FPWD of Web Messaging; deadline November 13

2010-11-07 Thread Maciej Stachowiak

On Nov 6, 2010, at 3:04 PM, Ian Hickson wrote:

 On Sat, 6 Nov 2010, Arthur Barstow wrote:
 
 Ian, All - during WebApps' November 1 gathering, participants expressed 
 an interest in publishing a First Public Working Draft of Web 
 Messaging [1] and this is a CfC to do so:
 
  http://dev.w3.org/html5/postmsg/
 
 This CfC satisfies the group's requirement to record the group's 
 decision to request advancement.
 
 I'd rather not add another document to the list of documents for which I 
 have to maintain separate W3C headers and footers at this time (especially 
 given that I'm behind on taking the other drafts I'm editing to LC). The 
 text in the spec really belongs in the HTML spec anyway and is already 
 published by the WHATWG in the HTML spec there, and is already getting 
 ample review and maintenance there, so I don't think it's especially 
 pressing to publish it as a separate doc on the TR/ page. (The contents of 
 the doc have already gone through FPWD at the W3C, so there's not even a 
 patent policy reason to do it.)

Once HTML5 goes to Last Call, then the relevant scope of the patent policy will 
be the LCWD, not the FPWD. At that point, there will be a strong patent policy 
reason to have an FPWD of this material.

Regards,
Maciej




Re: CfC: FPWD of Web Messaging; deadline November 13

2010-11-06 Thread Maciej Stachowiak

I favor publication of Web Messaging.

Regards,
Maciej

On Nov 6, 2010, at 12:48 PM, Arthur Barstow wrote:

 Ian, All - during WebApps' November 1 gathering, participants expressed an 
 interest in publishing a First Public Working Draft of Web Messaging [1] and 
 this is a CfC to do so:
 
   http://dev.w3.org/html5/postmsg/
 
 This CfC satisfies the group's requirement to record the group's  decision 
 to request advancement.
 
 By publishing this FPWD, the group sends a signal to the community to  begin 
 reviewing the document. The FPWD reflects where the group is on this spec at 
 the time of publication; it does not necessarily mean there is consensus on 
 the spec's contents.
 
 As with all of our CfCs, positive response is preferred and encouraged and 
 silence will be assumed to be assent.
 
 The deadline for comments is November 13.
 
 -Art Barstow
 
 [1] http://www.w3.org/2010/11/01-webapps-minutes.html#item04
 
  Original Message 
 Subject:  ACTION-598: Start a CfC to publish a FPWD of Web Messaging (Web 
 Applications Working Group)
 Date: Mon, 1 Nov 2010 11:35:29 +0100
 From: ext Web Applications Working Group Issue Tracker sysbot+trac...@w3.org
 Reply-To: Web Applications Working Group WG public-webapps@w3.org
 To:   Barstow Art (Nokia-CIC/Boston) art.bars...@nokia.com
 ACTION-598: Start a CfC to publish a FPWD of Web Messaging  (Web Applications 
 Working Group)
 
 http://www.w3.org/2008/webapps/track/actions/598
 
 On: Arthur Barstow
 Due: 2010-11-08
 



Re: [XHR2] HTTP Trailers

2010-10-31 Thread Maciej Stachowiak

On Oct 26, 2010, at 12:02 PM, Julian Reschke wrote:

 On 26.10.2010 12:12, Anne van Kesteren wrote:
 ...
 If they were exposed via getResponseHeader() you would have the
 potential for clashes so that does not seem like a good idea.
 ...
 
 The clashes would be the same as for any repeating header, right?

Pre-existing XHR code might get confused if the results from 
getResponseHeader() change in the course of loading more of the stream. Seems 
safer to keep it separate, if it is useful to pass this info at all. Also, 
header fields like Content-Type are probably not going to have their usual 
effect if sent as trailers. So it's good to distinguish for that reason too.
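
A sketch of the hazard (the endpoint is hypothetical):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/stream');
  xhr.onreadystatechange = function () {
    if (xhr.readyState >= 2) {
      // If trailers were merged into the regular header list, this call
      // could return different values at HEADERS_RECEIVED and at DONE
      // for the same field name, confusing pre-existing code.
      console.log(xhr.getResponseHeader('Content-Type'));
    }
  };
  xhr.send();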

Regards,
Maciej



Re: XHR responseArrayBuffer attribute: suggestion to replace asBlob with responseType

2010-10-28 Thread Maciej Stachowiak

On Oct 27, 2010, at 5:36 PM, Boris Zbarsky wrote:

 
 But both approaches would reliably throw exceptions if a client got things 
 wrong.
 
 See, there's the thing.  Neither approach is all that reliable (even to the 
 point of throwing sometimes but not others for identical code), and access is 
 more prone to issues where which code the exception is thrown in is not 
 consistent (including being timing-dependent), if multiple listeners are 
 involved.
 
 Do people really think that action at a distance situations where pulling 
 slightly and innocuously on one bit of a system perturbs other parts of the 
 system in fatal ways are acceptable for the web platform? They're the sort of 
 things that one avoids as much as possible in other systems, but this thread 
 is all about proposing such behaviors for the web platform...

I don't think that kind of approach is good design. When designing APIs 
(especially for a platform as widely used as the Web), it's better to design 
them with fewer possible ways to use them wrong. Making a subtle mistake 
impossible by design is better than throwing an exception when you make that 
mistake.

I realize memory use is a concern and it's definitely easy to use too much 
memory in all sorts of ways. But a fragile API is an even bigger problem, in my 
opinion.
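
As a sketch of the mistake-proof shape (roughly the responseType design under 
discussion):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/data.bin');
  // The desired representation is declared once, up front, rather than
  // negotiated among competing lazily-converted accessors.
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    var buf = xhr.response;  // consistently an ArrayBuffer
    console.log(new Uint8Array(buf).byteLength);
  };
  xhr.send();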

Regards,
Maciej



