Re: Custom Elements: createdCallback cloning
On 07/13/2015 09:22 AM, Anne van Kesteren wrote:
> On Sun, Jul 12, 2015 at 9:32 PM, Olli Pettay <o...@pettay.fi> wrote:
>> Well, this printing case would just clone the final flattened tree
>> without the original document knowing any cloning happened. (Scripts
>> aren't supposed to run in Gecko's static clone documents, which print
>> preview on Linux and Windows, and printing, use.) If one needs a
>> special DOM tree for printing, the beforeprint event should be used
>> to modify the DOM.
>
> Sure, but you'd lose some stuff, e.g. canvas, and presumably custom
> elements if they require copying some state, due to the cloning.
> (Unless it's doing more than just cloning.)

Clone-for-printing takes a snapshot of canvas and animated images etc. And what state from a custom element would be needed in a static clone document? If the state is there in the original document and it somehow affects layout, it should be copied (well, not :focus/:active and such).

Anyhow, I see clone-for-printing very much as an implementation detail and wouldn't be too worried about it here. There is enough to worry about with plain element.cloneNode(true) or selection/range handling.

-Olli
Re: Custom Elements: createdCallback cloning
On Jul 12, 2015, at 11:30 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> On Mon, Jul 13, 2015 at 1:10 AM, Dominic Cooney <domin...@google.com> wrote:
>> Yes. I am trying to interpret this in the context of the es-discuss
>> thread you linked. I'm not sure I understand the problem with private
>> state, actually. Private state is allocated for DOM wrappers in
>> Chromium today (like Gecko), including Custom Elements; it's not a
>> problem. DOM wrapper creation is controlled by the UA, which can
>> arrange for allocating the slots.
>
> Sure, but this assumes elements will be backed by something other than
> JavaScript forever. Or at the very least that custom elements will
> always be able to do less than built-in elements.
>
>> Is there a plan for author classes to be able to have private state
>> or something?
>
> Yes, as discussed in that es-discuss thread.
>
>> Thanks. I can understand how editing and Range.cloneContents would
>> use cloning. How is it relevant that Range is depended on by
>> Selection? Selection may delete things but it does not clone them.
>> Editing operations operate on selections, but maybe I'm mistaken
>> about that?
>
> Either way, you've got the problem.

Editing operations use cloning heavily. As counter-intuitive as it sounds, deleting a range of text also involves cloning elements in some cases.

- R. Niwa
Re: Custom Elements: createdCallback cloning
On Sun, Jul 12, 2015 at 9:32 PM, Olli Pettay <o...@pettay.fi> wrote:
> Well, this printing case would just clone the final flattened tree
> without the original document knowing any cloning happened. (Scripts
> aren't supposed to run in Gecko's static clone documents, which print
> preview on Linux and Windows, and printing, use.) If one needs a
> special DOM tree for printing, the beforeprint event should be used to
> modify the DOM.

Sure, but you'd lose some stuff, e.g. canvas, and presumably custom elements if they require copying some state, due to the cloning. (Unless it's doing more than just cloning.)

-- https://annevankesteren.nl/
Re: Custom Elements: createdCallback cloning
On Mon, Jul 13, 2015 at 1:10 AM, Dominic Cooney <domin...@google.com> wrote:
> Yes. I am trying to interpret this in the context of the es-discuss
> thread you linked. I'm not sure I understand the problem with private
> state, actually. Private state is allocated for DOM wrappers in
> Chromium today (like Gecko), including Custom Elements; it's not a
> problem. DOM wrapper creation is controlled by the UA, which can
> arrange for allocating the slots.

Sure, but this assumes elements will be backed by something other than JavaScript forever. Or at the very least that custom elements will always be able to do less than built-in elements.

> Is there a plan for author classes to be able to have private state or
> something?

Yes, as discussed in that es-discuss thread.

> Thanks. I can understand how editing and Range.cloneContents would use
> cloning. How is it relevant that Range is depended on by Selection?
> Selection may delete things but it does not clone them. Editing
> operations operate on selections, but maybe I'm mistaken about that?

Either way, you've got the problem.

>> That during cloning certain DOM operations cease to function,
>> basically.
>
> This sounds interesting; it may even be useful for authors to be able
> to assert that between two points they did not modify the DOM.

Short of rewriting ranges and editing, that seems like the only viable alternative to prototype swizzling, provided you're okay with seeing upgrades as a distinct problem.

-- https://annevankesteren.nl/
Re: Custom Elements: createdCallback cloning
On 07/12/2015 08:09 PM, Anne van Kesteren wrote:
> On Fri, Jul 10, 2015 at 10:11 AM, Dominic Cooney <domin...@google.com> wrote:
>> I think the most important question here, though, is not constructors
>> or prototype swizzling.
>
> I guess that depends on what you want to enable. If you want to
> recreate existing elements in terms of Custom Elements, you need
> private state.
>
>> - Progressive Enhancement. The author can write more things in markup
>> and present them while loading definitions asynchronously. Unlike
>> progressive enhancement by finding and replacing nodes in the tree,
>> prototype swizzling means that the author is free to detach a
>> subtree, do a setTimeout, and reattach it without worrying whether
>> the definition was registered in the interim.
>
> How does this not result in the same issues we see with FOUC? It seems
> rather problematic for the user to be able to interact with components
> that do not actually work, but I might be missing things.
>
>> - Fewer (no?) complications with parsing and cloning. Prototype
>> swizzling makes it possible to decouple constructing the tree,
>> allocating the wrapper, and running Custom Element initialization.
>> For example, if you have a Custom Element in Chromium that does not
>> have a createdCallback, we don't actually allocate its wrapper until
>> it's touched (like any Element). But it would not be possible to
>> distinguish whether a user-provided constructor is trivial and needs
>> this.
>
> True true.
>
>> Could you share a list of things that use the cloning algorithm?
>
> In the DOM specification a heavy user is ranges. In turn, selection
> heavily depends upon ranges. Which brings us to editing operations
> such as cut and copy. None of those algorithms anticipate the DOM
> changing under them. (Though perhaps as long as mutation events are
> still supported there are some corner cases there, though the level of
> support for those varies.)
>
> In Gecko, printing also clones the tree and definitely does not expect
> that to have side effects.

Well, this printing case would just clone the final flattened tree without the original document knowing any cloning happened. (Scripts aren't supposed to run in Gecko's static clone documents, which print preview on Linux and Windows, and printing, use.) If one needs a special DOM tree for printing, the beforeprint event should be used to modify the DOM.

> Note that this would break with prototype swizzling too. Or at least
> you'd get a less pretty page when printing...
>
>> What do you mean by mode switch?
>
> That during cloning certain DOM operations cease to function,
> basically.
Re: Custom Elements: createdCallback cloning
On Mon, Jul 13, 2015 at 4:32 AM, Olli Pettay <o...@pettay.fi> wrote:
> On 07/12/2015 08:09 PM, Anne van Kesteren wrote:
>> On Fri, Jul 10, 2015 at 10:11 AM, Dominic Cooney <domin...@google.com> wrote:
>>> I think the most important question here, though, is not
>>> constructors or prototype swizzling.
>>
>> I guess that depends on what you want to enable. If you want to
>> recreate existing elements in terms of Custom Elements, you need
>> private state.

Yes. I am trying to interpret this in the context of the es-discuss thread you linked. I'm not sure I understand the problem with private state, actually. Private state is allocated for DOM wrappers in Chromium today (like Gecko), including Custom Elements; it's not a problem. DOM wrapper creation is controlled by the UA, which can arrange for allocating the slots.

Is there a plan for author classes to be able to have private state or something?

>>> - Progressive Enhancement. The author can write more things in
>>> markup and present them while loading definitions asynchronously.
>>> Unlike progressive enhancement by finding and replacing nodes in
>>> the tree, prototype swizzling means that the author is free to
>>> detach a subtree, do a setTimeout, and reattach it without worrying
>>> whether the definition was registered in the interim.
>>
>> How does this not result in the same issues we see with FOUC? It
>> seems rather problematic for the user to be able to interact with
>> components that do not actually work, but I might be missing things.

Although you mention FOUC, this isn't the same as FOUC, because the component can be styled (particularly with :unresolved). But as you go on to mention, there is the question of interactivity. Here are my observations:

- In some cases, the user may not care that the component is briefly not interactive. For example, have you noticed that the GitHub "3 days ago" labels briefly don't have their definitions?

- In some cases, the author can continue to use things like onclick attributes to provide some fallback interactivity.

- In the majority of cases, you have to ask what's better. Is it better to have some presentation (like an :unresolved rule showing a loading spinner, maybe with animated transitions to the interactive state) with the page scrollable and other things interactive, or a blank page blocked on definitions?

>>> - Fewer (no?) complications with parsing and cloning. Prototype
>>> swizzling makes it possible to decouple constructing the tree,
>>> allocating the wrapper, and running Custom Element initialization.
>>> For example, if you have a Custom Element in Chromium that does not
>>> have a createdCallback, we don't actually allocate its wrapper
>>> until it's touched (like any Element). But it would not be possible
>>> to distinguish whether a user-provided constructor is trivial and
>>> needs this.
>>
>> True true.
>>
>>> Could you share a list of things that use the cloning algorithm?
>>
>> In the DOM specification a heavy user is ranges. In turn, selection
>> heavily depends upon ranges. Which brings us to editing operations
>> such as cut and copy. None of those algorithms anticipate the DOM
>> changing under them. (Though perhaps as long as mutation events are
>> still supported there are some corner cases there, though the level
>> of support for those varies.)

Thanks. I can understand how editing and Range.cloneContents would use cloning. How is it relevant that Range is depended on by Selection? Selection may delete things but it does not clone them.

>> In Gecko, printing also clones the tree and definitely does not
>> expect that to have side effects.
>
> Well, this printing case would just clone the final flattened tree
> without the original document knowing any cloning happened. (Scripts
> aren't supposed to run in Gecko's static clone documents, which print
> preview on Linux and Windows, and printing, use.) If one needs a
> special DOM tree for printing, the beforeprint event should be used
> to modify the DOM.
>
>> Note that this would break with prototype swizzling too. Or at least
>> you'd get a less pretty page when printing...
>>
>>> What do you mean by mode switch?
>>
>> That during cloning certain DOM operations cease to function,
>> basically.

This sounds interesting; it may even be useful for authors to be able to assert that between two points they did not modify the DOM.
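The ":unresolved rule showing a loading spinner" idea mentioned above could be sketched roughly as follows. This is a hedged sketch: `x-widget` is a hypothetical element name, and `:unresolved` is the pseudo-class from the Custom Elements specification of this era (it was later replaced by `:defined` in Custom Elements v1):

```css
/* Placeholder presentation while the definition is still loading.
   :unresolved matches custom elements whose definition has not been
   registered yet (Custom Elements v0). */
x-widget:unresolved {
  opacity: 0.5;
}

/* Once the element is upgraded, :unresolved stops matching and the
   element transitions into its interactive presentation. */
x-widget {
  opacity: 1;
  transition: opacity 0.3s;
}
```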
Re: Custom Elements: createdCallback cloning
On Fri, Jul 10, 2015 at 10:11 AM, Dominic Cooney <domin...@google.com> wrote:
> I think the most important question here, though, is not constructors
> or prototype swizzling.

I guess that depends on what you want to enable. If you want to recreate existing elements in terms of Custom Elements, you need private state.

> - Progressive Enhancement. The author can write more things in markup
> and present them while loading definitions asynchronously. Unlike
> progressive enhancement by finding and replacing nodes in the tree,
> prototype swizzling means that the author is free to detach a subtree,
> do a setTimeout, and reattach it without worrying whether the
> definition was registered in the interim.

How does this not result in the same issues we see with FOUC? It seems rather problematic for the user to be able to interact with components that do not actually work, but I might be missing things.

> - Fewer (no?) complications with parsing and cloning. Prototype
> swizzling makes it possible to decouple constructing the tree,
> allocating the wrapper, and running Custom Element initialization. For
> example, if you have a Custom Element in Chromium that does not have a
> createdCallback, we don't actually allocate its wrapper until it's
> touched (like any Element). But it would not be possible to
> distinguish whether a user-provided constructor is trivial and needs
> this.

True true.

> Could you share a list of things that use the cloning algorithm?

In the DOM specification a heavy user is ranges. In turn, selection heavily depends upon ranges. Which brings us to editing operations such as cut and copy. None of those algorithms anticipate the DOM changing under them. (Though perhaps as long as mutation events are still supported there are some corner cases there, though the level of support for those varies.)

In Gecko, printing also clones the tree and definitely does not expect that to have side effects. Note that this would break with prototype swizzling too. Or at least you'd get a less pretty page when printing...

> What do you mean by mode switch?

That during cloning certain DOM operations cease to function, basically.

-- https://annevankesteren.nl/
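The prototype-swizzling upgrade model debated throughout this thread can be sketched with plain objects, outside any real DOM. This is only an illustration: `FakeHTMLElement` and `XGreeting` are made-up names, not spec APIs; the point is that the object is created first and initialized later, in place, rather than being constructed by an author-provided constructor.

```javascript
// A sketch (plain objects, no real DOM) of prototype swizzling:
// the object exists before its definition, and is upgraded in place.
class FakeHTMLElement {}

// The parser creates the "element" before any definition is registered.
const el = new FakeHTMLElement();

// The definition arrives later, e.g. from an asynchronously loaded script.
class XGreeting extends FakeHTMLElement {
  createdCallback() { this.greeting = "hello"; }
}

// Upgrade: swap the prototype of the existing object, then run the
// initialization callback on it. No new object is allocated.
Object.setPrototypeOf(el, XGreeting.prototype);
el.createdCallback();

console.log(el instanceof XGreeting); // true
console.log(el.greeting);             // "hello"
```

This is exactly why cloning is awkward in this model: a clone produced mid-algorithm either gets upgraded (running author code at an unexpected time) or stays un-upgraded until later.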
Re: Custom Elements: createdCallback cloning
On Thu, Jul 2, 2015 at 4:05 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> In the interest of moving forward I tried to more seriously consider
> Dmitry's approach. Based on the es-discuss discussion
> https://esdiscuss.org/topic/will-any-new-features-be-tied-to-constructors
> it seems likely new JavaScript features (such as private state) will
> be tied to object creation. This makes the prototype-swizzling design
> even less appealing, in my opinion.

Ironically this is a minor reason that Chromium preferred prototype swizzling to blessing author-provided constructors: when allocating JavaScript objects to wrap DOM objects (wrappers), Chromium adds extra internal space to hold the address of the corresponding C++ object. (V8 calls this extra internal space mechanism an "Internal Field": https://code.google.com/p/chromium/codesearch#chromium/src/v8/include/v8.hq=v8.h%20include/vsq=package:chromiumtype=csl=4660) This is efficient (the space is accessed by offset; the slots don't have names like properties do) and private (this mechanism is ancient--it predates symbols--and is inaccessible to JavaScript, because the space has no name and is separate from how properties are stored).

Because Custom Elements did not change how wrappers were allocated, Custom Element wrappers in Chromium continue to have the necessary extra storage space. Because the Custom Element constructor was controlled by the user agent, we could ensure that it used the same mechanism for allocating wrappers. We can't do that with a user-provided constructor, because V8 has already allocated the object (for this) by the time the constructor starts to run. Any solution in this space may force Chromium to allocate an additional object with the extra space.

This may not be a disaster because, first, generational GC makes allocating lots of short-lived objects relatively cheap; and second, like Objective-C, JavaScript allows constructors to return a different object (and my understanding is that doing this in an ES6 constructor effectively sets this). But it is a bit gross.

Around summer 2012? 2013? I experimented with using a Symbol-like thing (which V8 calls Hidden Fields) to store the pointer to the C++ object on Custom Element wrappers in Chromium. (So those wrappers had a different shape to most element wrappers.) We felt this design was not feasible because it was slower to access, made the objects larger, and complicated Chromium's DOM bindings, which now needed to fall back to checking for this property.

I think the most important question here, though, is not constructors or prototype swizzling. It's whether elements created before a definition is available get enlivened with the definition when it is available. I don't like prototype swizzling, but I do like what it lets us do:

- Progressive Enhancement. The author can write more things in markup and present them while loading definitions asynchronously. Unlike progressive enhancement by finding and replacing nodes in the tree, prototype swizzling means that the author is free to detach a subtree, do a setTimeout, and reattach it without worrying whether the definition was registered in the interim.

- Fewer (no?) complications with parsing and cloning. Prototype swizzling makes it possible to decouple constructing the tree, allocating the wrapper, and running Custom Element initialization. For example, if you have a Custom Element in Chromium that does not have a createdCallback, we don't actually allocate its wrapper until it's touched (like any Element). But it would not be possible to distinguish whether a user-provided constructor is trivial and needs this.

It's a tradeoff. There are definite disadvantages to the current specification; prototype swizzling is one of them. Not sure if that background is useful...

> Meanwhile, I have not made much progress on the cloning question. As
> Domenic pointed out, that would also either require prototype
> swizzling or invoking the constructor; there's not really a third way.
> I guess for that to work cloneNode() and various editing operations
> would have to become resilient against JavaScript executing in the
> middle of them,

Could you share a list of things that use the cloning algorithm?

> something that has caused (is causing?) us a ton of headaches with
> mutation events. (Or the alternative, have some kind of mode switch
> for the DOM, which is similarly a large undertaking.)

What do you mean by mode switch?

> Not sure what to do now :-/
>
> -- https://annevankesteren.nl/
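The es-discuss point quoted above, that new JavaScript features such as private state would be tied to object creation, can be illustrated with a small sketch. Note this uses private class fields, which were standardized years after this 2015 thread (ES2022); the names are illustrative. A private field is installed only when the constructor runs, so an object that merely had its prototype swapped never receives the slot:

```javascript
// Private state is allocated at construction time, not via the prototype.
class Counter {
  #count = 0;                           // slot exists only on constructed objects
  increment() { return ++this.#count; }
}

const constructed = new Counter();
console.log(constructed.increment());  // 1: construction installed #count

// "Upgrading" a plain object by prototype swizzling does not install #count:
const swizzled = Object.setPrototypeOf({}, Counter.prototype);
let failed = false;
try {
  swizzled.increment();                // reads #count, which was never installed
} catch (e) {
  failed = e instanceof TypeError;
}
console.log(failed); // true
```

This is the sense in which prototype swizzling lets upgraded elements "do less" than constructed ones: any per-instance state the definition relies on has to be attached some other way (e.g. a WeakMap) rather than by construction.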