Re: Why is querySelector much slower?

2015-04-27 Thread Glen Huang
But if I do getElementsByClassName()[0], and the LiveNodeList is immediately 
garbage collectable, then if I change the DOM, Blink won't traverse ancestors, 
and thus there's no penalty for DOM mutation?

> On Apr 28, 2015, at 2:28 PM, Elliott Sprehn  wrote:
> 
> 
> 
> On Mon, Apr 27, 2015 at 11:13 PM, Glen Huang wrote:
On second thought, if the list returned by getElementsByClassName() is lazily 
populated, as Boris says, it shouldn't be a problem. The list is only updated 
when you access that list again.
> 
> The invalidation is what makes your code slower. Specifically any time you 
> mutate the tree, and you have live node lists, we traverse ancestors to mark 
> them as needing to be updated.
> 
> Blink (and likely other browsers) will eventually garbage collect the 
> LiveNodeList and then your DOM mutations will get faster again.
>  
> 
>> On Apr 28, 2015, at 2:08 PM, Glen Huang wrote:
>> 
>>> Live node lists make all dom mutation slower
>>> 
>> Haven't thought about this before. Thank you for pointing it out. So if I 
>> use, for example, lots of getElementsByClassName() in the code, I'm actually 
>> slowing down all DOM-mutating APIs?
> 
> 



Re: Why is querySelector much slower?

2015-04-27 Thread Elliott Sprehn
On Mon, Apr 27, 2015 at 11:13 PM, Glen Huang  wrote:

> On second thought, if the list returned by getElementsByClassName() is
> lazily populated, as Boris says, it shouldn't be a problem. The list is only
> updated when you access that list again.
>

The invalidation is what makes your code slower. Specifically any time you
mutate the tree, and you have live node lists, we traverse ancestors to
mark them as needing to be updated.

Blink (and likely other browsers) will eventually garbage collect the
LiveNodeList and then your DOM mutations will get faster again.
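The invalidation walk described above can be sketched as a toy model in plain JavaScript. This is only an illustrative simplification (ToyNode and its fields are invented for the sketch, not Blink's real data structures): each mutation walks up the ancestor chain and dirties any registered live lists, so the cost disappears once no live lists remain.

```javascript
// Toy model of live-list invalidation (an illustrative sketch, NOT Blink's
// real bookkeeping): each node tracks live lists rooted at it, and every
// mutation walks ancestors to mark those lists as needing a lazy rebuild.
class ToyNode {
  constructor(name) {
    this.name = name;
    this.parent = null;
    this.children = [];
    this.liveLists = []; // live lists whose root is this node
  }
  appendChild(child) {
    child.parent = this;
    this.children.push(child);
    // The invalidation walk: visit every ancestor and dirty the live
    // lists registered there. This is the per-mutation cost.
    for (let n = this; n !== null; n = n.parent) {
      for (const list of n.liveLists) list.dirty = true;
    }
  }
  getElementsByToyClass(cls) {
    const list = { cls, dirty: false }; // pretend it was just populated
    this.liveLists.push(list);
    return list;
  }
}

const root = new ToyNode('root');
const section = new ToyNode('section');
root.appendChild(section);

const list = root.getElementsByToyClass('item');
section.appendChild(new ToyNode('div')); // ancestor walk marks `list` dirty
console.log(list.dirty); // true

// Simulate the list being garbage collected: once unregistered, the same
// mutation has nothing left to invalidate, so mutations get cheap again.
root.liveLists.length = 0;
```

Once the registry is empty, the ancestor walk finds nothing to dirty, which is why collecting the LiveNodeList makes mutations fast again.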


>
> On Apr 28, 2015, at 2:08 PM, Glen Huang  wrote:
>
> Live node lists make all dom mutation slower
>
> Haven't thought about this before. Thank you for pointing it out. So if I
> use, for example, lots of getElementsByClassName() in the code, I'm actually
> slowing down all DOM-mutating APIs?
>
>
>


Re: Why is querySelector much slower?

2015-04-27 Thread Glen Huang
On second thought, if the list returned by getElementsByClassName() is lazily 
populated, as Boris says, it shouldn't be a problem. The list is only updated 
when you access that list again.

> On Apr 28, 2015, at 2:08 PM, Glen Huang  wrote:
> 
>> Live node lists make all dom mutation slower
>> 
> Haven't thought about this before. Thank you for pointing it out. So if I
> use, for example, lots of getElementsByClassName() in the code, I'm actually
> slowing down all DOM-mutating APIs?



Re: Why is querySelector much slower?

2015-04-27 Thread Glen Huang
> Live node lists make all dom mutation slower
> 
Haven't thought about this before. Thank you for pointing it out. So if I use, 
for example, lots of getElementsByClassName() in the code, I'm actually slowing 
down all DOM-mutating APIs?

Re: Why is querySelector much slower?

2015-04-27 Thread Glen Huang
Wow, it's now super clear. Thanks for the detailed explanation.

Just a quick follow-up question to quench my curiosity: if I do "list[1]" and 
no one has ever asked the list for any element, Gecko will find the first two 
matching elements and store them in the list; if I then immediately do 
"list[0]", is the first element returned without walking the DOM (assuming 
there are at least two matching elements)?

> querySelector("foo") and getElementsByTagName("foo")[0] can return different 
> nodes

Still a bit confused about this. If the premise is that the selector only 
contains characters allowed in a tag name, how can they return different nodes? 
Maybe I missed something. Unless you mean querySelector(":foo") and 
getElementsByTagName(":foo")[0] can return different results, which is obvious.

If, by parsing the passed selector (or looking up the cached parsed selectors), 
you know it only contains a tag name, why is it a bit harder to optimize? You 
just look up the (tagname, root) hash table, no?

> In practice this hasn't come up as a bottleneck in anything we've profiled so 
> far

I'm probably prematurely optimizing my code, but I nevertheless learned 
something quite valuable by asking. Thanks for answering. :)


Re: Directory Upload Proposal

2015-04-27 Thread Arun Ranganathan
On Fri, Apr 24, 2015 at 2:28 AM, Ali Alabbas  wrote:
>
>
> If there is sufficient interest, I would like to work on this within the
> scope of the WebApps working group.
>
>
And I'll help with the FileSystem API bit, ensuring that the "full" spec [3]
has bits about the Directory Upload proposal (outside of the sandbox).
There's an HTML WG bit, too, as Jonas and Anne v K point out. I'm not sure
what the best way to tackle that is, but I think a bug on HTML plus nudging
over IRC about attribute names will go a fair distance.

-- A*


[Bug 28577] New: [XMLHttpRequest] Throwing NetworkError on open() call for some kind of simple errors

2015-04-27 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28577

Bug ID: 28577
   Summary: [XMLHttpRequest] Throwing NetworkError on open() call
for some kind of simple errors
   Product: WebAppsWG
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P2
 Component: XHR
  Assignee: ann...@annevk.nl
  Reporter: tyosh...@google.com
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org

It seems the WebAppSec WG attempted to make xhr.open() throw NetworkError for
some errors that can be detected synchronously, such as mixed content. What's
the status of that?

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Directory Upload Proposal

2015-04-27 Thread Jonas Sicking
On Thu, Apr 23, 2015 at 5:45 PM, Anne van Kesteren  wrote:
> On Thu, Apr 23, 2015 at 12:28 PM, Ali Alabbas  wrote:
>> If there is sufficient interest, I would like to work on this within the 
>> scope of the WebApps working group.
>
> It seems somewhat better to just file a bug against the HTML Standard
> since this also affects the processing model of e.g. .files.
> Which I think was the original proposal for how to address this...
> Just expose all the files in .files and expose the relative
> paths, but I guess that might be a bit too synchronous...

Yeah. Recursively enumerating the selected directory (or directories)
can be a potentially very lengthy process, so it's something the page
might want to display progress UI for while it's happening. We looked at
various ways of doing this in [1], but ultimately all of them felt
clunky and not as flexible as allowing the page to enumerate the
directory tree itself. This way pages could even save time on
enumeration by displaying UI that allows the user to select which
sub-directories to traverse.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=846931

/ Jonas
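The page-driven enumeration Jonas describes can be sketched in plain JavaScript. This is only an illustration of the control flow: the names `enumerateFiles` and `shouldDescend`, and the plain-object stand-in for a directory, are all hypothetical, not part of any proposal.

```javascript
// Sketch of page-driven directory enumeration (all names hypothetical).
// A plain-object tree stands in for a picked directory; `shouldDescend`
// lets the page skip whole subtrees, which is the flexibility mentioned
// above -- the page, not the platform, decides how deep to go.
function* enumerateFiles(dir, shouldDescend, path = '') {
  for (const [name, entry] of Object.entries(dir)) {
    const full = path + '/' + name;
    if (typeof entry === 'object') {
      // A sub-directory: only recurse if the page asks for it.
      if (shouldDescend(full)) yield* enumerateFiles(entry, shouldDescend, full);
    } else {
      yield full; // a file
    }
  }
}

const picked = {
  'photos': { 'a.jpg': 1, 'raw': { 'a.cr2': 1 } },
  'notes.txt': 1,
};

// Enumerate everything except the raw photos subtree:
const files = [...enumerateFiles(picked, p => p !== '/photos/raw')];
console.log(files); // ['/photos/a.jpg', '/notes.txt']
```

Because enumeration is a generator, the page can interleave progress UI updates between yielded entries instead of blocking on one deep synchronous walk.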



Re: Inheritance Model for Shadow DOM Revisited

2015-04-27 Thread Hayato Ito
I'm aware that our consensus is to defer this until v2. Don't worry. :)

The feature of "<shadow> as function" supports *subclassing*. That's
exactly the motivation for which I once introduced it in the spec (and implemented
it in Blink).
I think Jan Miksovsky, co-author of Apple's proposal, knows that well.

The reason I reverted it from the spec (and from Blink) [1] is the technical
difficulty of implementing it, though I've not proved that it's impossible to
implement.

[1] https://codereview.chromium.org/137993003


On Tue, Apr 28, 2015 at 1:33 PM Ryosuke Niwa  wrote:

> Note: Our current consensus is to defer this until v2.
>
> > On Apr 27, 2015, at 9:09 PM, Hayato Ito  wrote:
> >
> > For the record, I, as a spec editor, still think "Shadow Root hosts yet
> another Shadow Root" is the best idea among all ideas I've ever seen, with
> a "<shadow> as function", because it can explain everything in a unified
> way using a single tree of trees, without bringing yet another complexity
> such as multiple templates.
> >
> > Please see
> https://github.com/w3c/webcomponents/wiki/Multiple-Shadow-Roots-as-%22a-Shadow-Root-hosts-another-Shadow-Root%22
>
> That's a great mental model for multiple generations of shadow DOM but it
> doesn't solve any of the problems with API itself.  Like I've repeatedly
> stated in the past, the problem is the order of transclusion.  Quoting from
> [1],
>
> The `<shadow>` element is optimized for wrapping a base class, not filling
> it in. In practice, no subclass ever wants to wrap their base class with
> additional user interface elements. A subclass is a specialization of a
> base class, and specialization of UI generally means adding specialized
> elements in the middle of a component, not wrapping new elements outside
> some inherited core.
>
> In the three component libraries [1] described above, the only cases where
> a subclass uses `<shadow>` is if the subclass wants to add additional
> styling. That is, a subclass wants to override base class styling, and can
> do so via:
>
>   ```
>   <template>
>     <style>
>       subclass styles go here
>     </style>
>     <shadow></shadow>
>   </template>
>   ```
>
> One rare exception is `core-menu` [3], which does add some components in a
> wrapper around a `<shadow>`. However, even in that case, the components in
> question are instances of ``, a component which defines
> keyboard shortcuts. That is, the component is not using this wrapper
> ability to add visible user interface elements, so the general point stands.
>
> As with the above point, the fact that no practical component has need for
> this ability to wrap an older shadow tree suggests the design is solving a
> problem that does not, in fact, exist in practice.
>
>
> [1]
> https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution
> [2] Polymer’s core- elements, Polymer’s paper- elements, and the Basic Web
> Components’ collection of basic- elements
> [3]
> https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2FPolymer%2Fcore-menu%2Fblob%2Fmaster%2Fcore-menu.html&sa=D&sntz=1&usg=AFQjCNH0Rv14ENbplb6VYWFh8CsfVo9m_A
>
> - R. Niwa
>
>


[Bug 23726] Integration between XMLHttpRequest and Streams API

2015-04-27 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=23726

Takeshi Yoshino  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |WONTFIX

--- Comment #2 from Takeshi Yoshino  ---
Integration of Streams and XMLHttpRequest has been discontinued, as the WG
roughly agreed on a feature freeze of XMLHttpRequest.

Integration with the Fetch API is happening at
https://github.com/yutakahirano/fetch-with-streams.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Directory Upload Proposal

2015-04-27 Thread Jonas Sicking
On Thu, Apr 23, 2015 at 12:28 PM, Ali Alabbas  wrote:
> Hello WebApps Group,

Hi Ali,

Yay! It's great to see a formal proposal for this! Definitely something
that Mozilla is very interested in working on.

> If there is sufficient interest, I would like to work on this within the 
> scope of the WebApps working group.

I personally will stay out of WG politics. I think the proposal will
receive more of the needed attention and review in this WG than in the
HTML WG, though I'm not sure whether W3C policies dictate that this be
done in the HTML WG.

> [4] Proposal: http://internetexplorer.github.io/directory-upload/proposal.html

So, some specific feedback on the proposal.

First off, I don't think you can use the name "dir" for the new
attribute since that's already used for setting rtl/ltr direction.
Simply renaming the attribute to something else should fix this.

Second, rather than adding a .directory attribute, I think that we
should simply add any selected directories to the .files list. My
experience is that having a direct mapping between what the user does,
and what we expose to the webpage, generally results in less developer
confusion and/or annoyance.

My understanding is that the current proposal is mainly so that if we in
the future add something like Directory.enumerateDeep(), that would
automatically enable deep enumeration through all user options. However,
that could always be solved by adding an
HTMLInputElement.enumerateFilesDeep() function.

/ Jonas



Re: Why is querySelector much slower?

2015-04-27 Thread Boris Zbarsky

On 4/27/15 11:27 PM, Glen Huang wrote:

> When you say "var node = list[0];" walks the DOM until the first item is
> found, do you mean it only happens under the condition that some previous
> code has changed the DOM structure?


Or that no one has ever asked the list for its [0] element before, yes.


> If not, the returned list object will be marked as up-to-date, and
> accessing the first element is very cheap?


In Gecko, yes.


> I ask because in the first paragraph you said the returned list and the
> returned first element are probably precomputed.


In the case of the microbenchmark where you just ask for it repeatedly 
without changing the DOM, yes.



> After the UA has parsed the HTML, it caches a hash table of elements with
> class names (also all elements with ids, all elements with tag names, etc.,
> in different hash tables), keyed under the class names.


At least in Gecko, that's not how it works.

There _is_ a hashtable mapping ids to element lists used for 
getElementById that is populated eagerly.


There is also a hashtable mapping the pair (class string, root) to an 
element list that's used by getElementsByClassName and is populated 
lazily.  Likewise, there is a hashtable mapping the pair (tagname, root) 
to an element list that's used by getElementsByTagName; this one is also 
populated lazily.


> When getElementsByClassName() is called, and the DOM hasn't been
> modified, it simply creates a list of elements with that class name from
> the hash table.


No, it just gets the list pointer from the hashtable (if any) and 
returns it.  That is, getElementsByClassName("foo") === 
getElementsByClassName("foo") tests true.  If there is no list pointer 
in the hashtable, an empty list is created, stored in the hashtable, and 
returned.



> When the first element is accessed from that list, and the DOM still isn't
> modified, the element is returned directly.


If the list has computed it before.  If not, it walks the DOM until it 
finds its first element, then adds that one element to the list and 
returns it.
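That lazy fill can be sketched with a cursor into the walk. A flat array stands in for the DOM traversal, and the names (`makeLazyList`, `walkSteps`) are invented for the sketch:

```javascript
// Sketch of Gecko-style lazy population (simplified): the list keeps a
// cursor into the tree walk; list.item(i) only extends the walk far enough
// to reach index i, and earlier hits are answered from the cache.
function makeLazyList(elements /* stands in for a DOM walk */, matches) {
  const cache = [];
  let cursor = 0;
  let walkSteps = 0; // instrumentation: how far the "DOM walk" has gone
  return {
    item(i) {
      while (cache.length <= i && cursor < elements.length) {
        walkSteps++;
        const el = elements[cursor++];
        if (matches(el)) cache.push(el);
      }
      return cache[i];
    },
    get walkSteps() { return walkSteps; },
  };
}

const list = makeLazyList(['p', 'foo', 'div', 'foo', 'foo'], t => t === 'foo');
list.item(1);                // walks until the second match
console.log(list.walkSteps); // 4 -- stopped right after the second 'foo'
list.item(0);                // answered from the cache
console.log(list.walkSteps); // still 4 -- no extra DOM walking
```

This matches the `list[1]` then `list[0]` scenario asked about earlier in the thread: the second access is free because the first walk already cached both elements.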



> The hash table is kept in sync with the DOM when it's modified.


The id hashtable is.  The class/tagname hashtables aren't kept in sync 
per se, since they're lazily populated anyway, but the lists stored in 
them may need to be marked dirty.



> And if the DOM is changed after the list is returned but before it's accessed


Or before the list is returned.  The order really doesn't matter here; 
what matters is whether the DOM is changed after the previous access, if 
any.



> Why can't querySelector benefit from these hash tables?


It could, somewhat.  querySelector with an id selector does in fact 
benefit from the id hashtable.  For the specific case of querySelector 
with a class selector, we _could_ internally try to optimize a bit, 
especially for the class case (the tag name case is a bit harder 
because, for example, querySelector("foo") and 
getElementsByTagName("foo")[0] can return different nodes depending on 
the value of the string "foo" and whether it contains any ':' characters 
and whatnot).



> I currently feel the urge to optimize it myself by overriding it with a
> custom function which will parse the passed selector, and if it's a simple
> selector like "div", ".class", or "#id", call the corresponding
> getElement*() function instead.


Then you'll end up with incorrect behavior in some cases, which you may 
of course not care about.
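The user-land fast path proposed above can be sketched as follows, with the caveat about incorrect behavior baked in: anything containing ':' or other non-name characters falls through to the real slow path, and even then the rewrite is not guaranteed equivalent to querySelector in all edge cases. The stub document exists only so the dispatch can be exercised outside a browser; all names here are illustrative.

```javascript
// Hypothetical user-land fast path (a sketch; per the warning above, NOT
// exactly equivalent to querySelector in all edge cases).
const SIMPLE = /^([#.]?)([a-zA-Z][\w-]*)$/; // "#id", ".class", or bare tag

function fastQuery(doc, selector) {
  const m = SIMPLE.exec(selector);
  if (m) {
    const [, prefix, name] = m;
    if (prefix === '#') return doc.getElementById(name);
    if (prefix === '.') return doc.getElementsByClassName(name)[0] || null;
    return doc.getElementsByTagName(name)[0] || null;
  }
  return doc.querySelector(selector); // complex selectors: real slow path
}

// Minimal stand-in for a document, just to exercise the dispatch:
const stubDoc = {
  getElementById: id => ({ id }),
  getElementsByClassName: cls => [{ cls }],
  getElementsByTagName: tag => [{ tag }],
  querySelector: sel => ({ slowPathFor: sel }),
};
console.log(fastQuery(stubDoc, '#app').id);             // 'app'
console.log(fastQuery(stubDoc, '.item').cls);           // 'item'
console.log(fastQuery(stubDoc, 'div > p').slowPathFor); // 'div > p'
```

Note how selectors like ":foo" fail the regex and take the slow path, which sidesteps the tag-name ambiguity mentioned earlier in this message.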



> Why can't UAs perform this for us?


To some extent they could.  In practice this hasn't come up as a 
bottleneck in anything we've profiled so far, so people have avoided 
adding what seems like unnecessary complexity, but if there's a 
real-life example (not a microbenchmark) where this is being a problem 
that would certainly help a lot with getting this sort of thing on the 
radar.



> If my mental model is correct


It's not quite.


> The only price it pays is parsing the selector.


Not even that, possibly; at least Gecko has a cache of parsed selectors.


> Is it because authors don't use querySelector often enough that UAs aren't
> interested in optimizing it?


Or more precisely don't use it in ways that make it the performance 
bottleneck.


-Boris



[Bug 28522] [Shadow] Cascading for trees of no-inner/outer and no-younger/older relationship

2015-04-27 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28522

Koji Ishii  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||r...@opera.com
 Resolution|--- |WONTFIX

--- Comment #4 from Koji Ishii  ---
This is no longer an issue because multiple shadow roots and piercing
combinators were both removed.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



[Bug 28552] [Shadow]: Shadow DOM v1

2015-04-27 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28552
Bug 28552 depends on bug 28522, which changed state.

Bug 28522 Summary: [Shadow] Cascading for trees of no-inner/outer and 
no-younger/older relationship
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28522

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |WONTFIX

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Exposing structured clone as an API?

2015-04-27 Thread Elliott Sprehn
On Apr 24, 2015 3:16 PM, "Joshua Bell"  wrote:
>
> It seems like the OP's intent is just to deep-copy an object. Something
like the OP's tweet... or this, which we use in some tests:
>
> function structuredClone(o) {
>   return new Promise(function(resolve) {
>     var mc = new MessageChannel();
>     mc.port2.onmessage = function(e) { resolve(e.data); };
>     mc.port1.postMessage(o);
>   });
> }
>
> ... but synchronous, which is fine, since the implicit
serialization/deserialization needs to be synchronous anyway.
>
> If we're not dragging in the notion of extensibility, is there
complication?  I'm pretty sure this would be about a two line function in
Blink. That said, without being able to extend it, is it really interesting
to developers?

The two-line function won't be very fast, since it'll first serialize into a
big byte array: structured clone is designed for sending objects across
threads/processes. It also means going through the runtime API, which is
slower.

That was my point: exposing this naively just exposes the slow path to
developers, since a handwritten deep clone will likely be much faster.
Developers shouldn't be using structured clone for general deep cloning.
TC39 should expose an @@clone callback developers can override for all
objects.

IndexedDB has a similar situation: there's a comparison function in there
that seems super useful since it can compare arrays, but in reality you
shouldn't use it for general-purpose code. JS should instead add an array
compare function, or a general compare function.
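The @@clone idea can be sketched with a user-defined Symbol. To be clear, no such well-known symbol exists today; `cloneSym`, `deepClone`, and `Point` are all invented for illustration, and the clone handles only plain objects and arrays.

```javascript
// Sketch of the @@clone idea: a symbol-keyed hook lets each class control
// its own copy, something structured clone cannot do (it drops prototypes).
const cloneSym = Symbol('clone'); // hypothetical stand-in for @@clone

function deepClone(value) {
  if (value === null || typeof value !== 'object') return value;
  if (typeof value[cloneSym] === 'function') return value[cloneSym]();
  if (Array.isArray(value)) return value.map(deepClone);
  const out = {}; // plain objects only; Dates, Maps, etc. are out of scope
  for (const k of Object.keys(value)) out[k] = deepClone(value[k]);
  return out;
}

class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  [cloneSym]() { return new Point(this.x, this.y); } // class controls its copy
}

const original = { p: new Point(1, 2), tags: ['a'] };
const copy = deepClone(original);
console.log(copy.p instanceof Point); // true -- structured clone would lose this
```

The hook is what makes this "extensible" in the sense discussed above: a handwritten walk plus an overridable callback, rather than serializing through the engine's clone machinery.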

>
>
>
> On Fri, Apr 24, 2015 at 2:05 PM, Anne van Kesteren 
wrote:
>>
>> On Fri, Apr 24, 2015 at 2:08 AM, Robin Berjon  wrote:
>> > Does this have to be any more complicated than adding a toClone()
convention
>> > matching the ones we already have?
>>
>> Yes, much more complicated. This does not work at all. You need
>> something to serialize the object so you can transport it to another
>> (isolated) global.
>>
>>
>> --
>> https://annevankesteren.nl/
>>
>


Inheritance Model for Shadow DOM Revisited

2015-04-27 Thread Ryosuke Niwa
Note: Our current consensus is to defer this until v2.

> On Apr 27, 2015, at 9:09 PM, Hayato Ito  wrote:
> 
> For the record, I, as a spec editor, still think "Shadow Root hosts yet 
> another Shadow Root" is the best idea among all ideas I've ever seen, with a 
> "<shadow> as function", because it can explain everything in a unified way 
> using a single tree of trees, without bringing yet another complexity such as 
> multiple templates.
> 
> Please see 
> https://github.com/w3c/webcomponents/wiki/Multiple-Shadow-Roots-as-%22a-Shadow-Root-hosts-another-Shadow-Root%22

That's a great mental model for multiple generations of shadow DOM but it 
doesn't solve any of the problems with API itself.  Like I've repeatedly stated 
in the past, the problem is the order of transclusion.  Quoting from [1],

The `<shadow>` element is optimized for wrapping a base class, not filling it 
in. In practice, no subclass ever wants to wrap their base class with 
additional user interface elements. A subclass is a specialization of a base 
class, and specialization of UI generally means adding specialized elements in 
the middle of a component, not wrapping new elements outside some inherited 
core.

In the three component libraries [1] described above, the only cases where a 
subclass uses `<shadow>` is if the subclass wants to add additional styling. 
That is, a subclass wants to override base class styling, and can do so via:

  ```
  <template>
    <style>
      subclass styles go here
    </style>
    <shadow></shadow>
  </template>
  ```

One rare exception is `core-menu` [3], which does add some components in a 
wrapper around a `<shadow>`. However, even in that case, the components in 
question are instances of ``, a component which defines 
keyboard shortcuts. That is, the component is not using this wrapper ability to 
add visible user interface elements, so the general point stands.

As with the above point, the fact that no practical component has need for this 
ability to wrap an older shadow tree suggests the design is solving a problem 
that does not, in fact, exist in practice.


[1] 
https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution
[2] Polymer’s core- elements, Polymer’s paper- elements, and the Basic Web 
Components’ collection of basic- elements
[3] 
https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2FPolymer%2Fcore-menu%2Fblob%2Fmaster%2Fcore-menu.html&sa=D&sntz=1&usg=AFQjCNH0Rv14ENbplb6VYWFh8CsfVo9m_A

- R. Niwa




Re: Why is querySelector much slower?

2015-04-27 Thread Elliott Sprehn
Live node lists make all DOM mutation slower, so while it might look faster
in your benchmark, it's actually slower elsewhere (e.g. appendChild).

Do you have a real application where you see querySelector as the
bottleneck?
On Apr 27, 2015 5:32 PM, "Glen Huang"  wrote:

> I wonder why querySelector can't get the same optimization: If the passed
> selector is a simple selector like ".class", do exactly as
> getElementsByClassName('class')[0] does?
>
> > On Apr 28, 2015, at 10:51 AM, Ryosuke Niwa  wrote:
> >
> >
> >> On Apr 27, 2015, at 7:04 PM, Jonas Sicking  wrote:
> >>
> >> On Mon, Apr 27, 2015 at 1:57 AM, Glen Huang 
> wrote:
> >>> Intuitively, querySelector('.class') only needs to find the first
> >>> matching node, whereas getElementsByClassName('class')[0] needs to find
> >>> all matching nodes and then return the first. The former should be a lot
> >>> quicker than the latter. Why is that not the case?
> >>
> >> I can't speak for other browsers, but Gecko-based browsers only search
> >> the DOM until the first hit for getElementsByClassName('class')[0].
> >> I'm not sure why you say that it must scan for all hits.
> >
> > WebKit (and, AFAIK, Blink) has the same optimization. It's a very
> important optimization.
> >
> > - R. Niwa
> >
>
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Hayato Ito
For the record, I, as a spec editor, still think "Shadow Root hosts yet
another Shadow Root" is the best idea among all ideas I've ever seen, with
a "<shadow> as function", because it can explain everything in a unified
way using a single tree of trees, without bringing yet another complexity
such as multiple templates.

Please see
https://github.com/w3c/webcomponents/wiki/Multiple-Shadow-Roots-as-%22a-Shadow-Root-hosts-another-Shadow-Root%22




On Tue, Apr 28, 2015 at 12:51 PM Ryosuke Niwa  wrote:

>
> > On Apr 27, 2015, at 12:25 AM, Justin Fagnani 
> wrote:
> >
> > On Sun, Apr 26, 2015 at 11:05 PM, Anne van Kesteren 
> wrote:
> >> On Sat, Apr 25, 2015 at 10:49 PM, Ryosuke Niwa  wrote:
> >> > If we wanted to allow a non-direct-child descendant (e.g. a grandchild
> >> > node) of the host to be distributed, then we'd also need an O(m)
> >> > algorithm where m is the number of nodes under the host element.  It
> >> > might be okay to carry over the current restriction that only direct
> >> > children of a shadow host can be distributed into insertion points, but
> >> > I can't think of a good reason why such a restriction is desirable.
> >
> > The main reason is that you know that only a direct parent of a node can
> distribute it. Otherwise any ancestor could distribute a node, and in
> addition to probably being confusing and fragile, you have to define who
> wins when multiple ancestors try to.
> >
> > There are cases where you really want to group elements logically by one
> tree structure and visually by another, like tabs. I think an alternative
> approach to distributing arbitrary descendants would be to see if nodes can
> cooperate on distribution so that a node could pass its direct children to
> another node's insertion point. The direct child restriction would still be
> there, so you always know who's responsible, but you can get the same
> effect as distributing descendants for cooperating sets of elements.
>
> That's an interesting approach. Ted and I discussed this design, and it
> seems workable with Anne's `distribute` callback approach (= the second
> approach in my proposal).
>
> Conceptually, we ask each child of a shadow host for the list of
> distributable nodes under that child (including itself). For a normal node
> without a shadow root, it'll simply return itself along with all the
> distribution candidates returned by its children. For a node with a shadow
> root, we ask its implementation. The recursive algorithm can be written as
> follows in pseudo code:
>
> ```
> NodeList distributionList(Node n):
>   if n has shadowRoot:
>     return <ask n's implementation>  (1)
>   else:
>     list = [n]
>     for each child in n:
>       list += distributionList(child)
>     return list
> ```
>
> Now, if we adopted the `distribute` callback approach, one obvious mechanism
> to do (1) is to call `distribute` on n and return whatever it didn't
> distribute as a list. Another obvious approach is to simply return [n] to
> avoid the mess of n later deciding to distribute a new node.
>
> >> So you mean that we'd turn distributionList into a subtree? I.e. you
> >> can pass all descendants of a host element to add()? I remember Yehuda
> >> making the point that this was desirable to him.
> >>
> >> The other thing I would like to explore is what an API would look like
> >> that does the subclassing as well. Even though we deferred that to v2
> >> I got the impression talking to some folks after the meeting that
> >> there might be more common ground than I thought.
> >
> > I really don't think the platform needs to do anything to support
> subclassing since it can be done so easily at the library level now that
> multiple generations of shadow roots are gone. As long as a subclass and
> base class can cooperate to produce a single shadow root with insertion
> points, the platform doesn't need to know how they did it.
>
> I think we should eventually add native declarative inheritance support
> for all of this.
>
> One thing that worries me about the `distribute` callback approach (a.k.a.
> Anne's approach) is that it bakes the distribution algorithm into the
> platform without us having thoroughly studied how subclassing will be done
> upfront.
>
> Mozilla tried to solve this problem with XBL, and they seem to think what
> they have isn't really great. Google has spent multiple years working on
> this problem, but they have come around to saying their solution, multiple
> generations of shadow DOM, may not be as great as they thought it would be.
> Given that, I'm quite terrified of making the same mistake in spec'ing how
> distribution works and later regretting it.
>
> In that regard, the first approach w/o distribution has an advantage of
> letting Web developers experiment with the bare minimum and try out which
> distribution algorithms and mechanisms work best.
>
> - R. Niwa
>
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 12:25 AM, Justin Fagnani  wrote:
> 
> On Sun, Apr 26, 2015 at 11:05 PM, Anne van Kesteren  wrote:
>> On Sat, Apr 25, 2015 at 10:49 PM, Ryosuke Niwa  wrote:
>> > If we wanted to allow a non-direct-child descendant (e.g. a grandchild node)
>> > of the host to be distributed, then we'd also need an O(m) algorithm where m is
>> > the number of nodes under the host element.  It might be okay to carry over the
>> > current restriction that only direct children of a shadow host can be distributed
>> > into insertion points, but I can't think of a good reason why such a
>> > restriction is desirable.
> 
> The main reason is that you know that only a direct parent of a node can 
> distribute it. Otherwise any ancestor could distribute a node, and in 
> addition to probably being confusing and fragile, you have to define who wins 
> when multiple ancestors try to.
> 
> There are cases where you really want to group elements logically by one tree 
> structure and visually by another, like tabs. I think an alternative approach 
> to distributing arbitrary descendants would be to see if nodes can cooperate 
> on distribution so that a node could pass its direct children to another 
> node's insertion point. The direct child restriction would still be there, so 
> you always know who's responsible, but you can get the same effect as 
> distributing descendants for cooperating sets of elements.

That's an interesting approach. Ted and I discussed this design, and it seems 
workable with Anne's `distribute` callback approach (= the second approach in 
my proposal).

Conceptually, we ask each child of a shadow host for the list of distributable 
nodes under that child (including itself). For a normal node without a shadow 
root, it'll simply return itself along with all the distribution candidates 
returned by its children. For a node with a shadow root, we ask its 
implementation. The recursive algorithm can be written as follows in pseudo code:

```
NodeList distributionList(Node n):
  if n has shadowRoot:
    return <ask n's implementation>  (1)
  else:
    list = [n]
    for each child in n:
      list += distributionList(child)
    return list
```

Now, if we adopted the `distribute` callback approach, one obvious mechanism to do 
(1) is to call `distribute` on n and return whatever it didn't distribute as a 
list. Another obvious approach is to simply return [n] to avoid the mess of n 
later deciding to distribute a new node.
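For illustration, the pseudocode above can be rendered as runnable JavaScript (an assumption-laden sketch: `shadowRoot.distribute()` is an invented stand-in for "ask the node's implementation", and here it returns whatever the component declined to distribute):

```javascript
// Runnable rendering of the distributionList pseudocode. Plain objects with
// `children` and an optional `shadowRoot` stand in for DOM nodes.
function distributionList(n) {
  if (n.shadowRoot) {
    // Ask the component; whatever it declines to distribute bubbles up.
    return n.shadowRoot.distribute(n.children);
  }
  let list = [n];
  for (const child of n.children) {
    list = list.concat(distributionList(child)); // recurse on each child
  }
  return list;
}

const leaf = { name: 'leaf', children: [] };
const host = {
  name: 'host',
  children: [leaf],
  shadowRoot: { distribute: candidates => [] }, // distributes everything
};
const treeRoot = { name: 'root', children: [host], shadowRoot: null };

// The host consumed its whole subtree, so only the root remains distributable:
console.log(distributionList(treeRoot).map(n => n.name)); // logs ['root']
```

A component that declines some candidates would instead return them from `distribute`, and they would surface as distributable further up, matching the "(1)" mechanism discussed in the text.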

>> So you mean that we'd turn distributionList into a subtree? I.e. you
>> can pass all descendants of a host element to add()? I remember Yehuda
>> making the point that this was desirable to him.
>> 
>> The other thing I would like to explore is what an API would look like
>> that does the subclassing as well. Even though we deferred that to v2
>> I got the impression talking to some folks after the meeting that
>> there might be more common ground than I thought.
> 
> I really don't think the platform needs to do anything to support subclassing 
> since it can be done so easily at the library level now that multiple 
> generations of shadow roots are gone. As long as a subclass and base class 
> can cooperate to produce a single shadow root with insertion points, the 
> platform doesn't need to know how they did it.

I think we should eventually add native declarative inheritance support for all 
of this.

One thing that worries me about the `distribute` callback approach (a.k.a. 
Anne's approach) is that it bakes the distribution algorithm into the platform 
without us having thoroughly studied how subclassing will be done upfront.

Mozilla tried to solve this problem with XBL, and they seem to think what they 
have isn't really great. Google has spent multiple years working on this 
problem, but they have come around to saying their solution, multiple 
generations of shadow DOM, may not be as great as they thought it would be. 
Given that, I'm quite terrified of making the same mistake in spec'ing how 
distribution works and later regretting it.

In that regard, the first approach w/o distribution has the advantage of letting 
Web developers experiment with the bare minimum and try out which distribution 
algorithms and mechanisms work best.

- R. Niwa




Re: Why is querySelector much slower?

2015-04-27 Thread Glen Huang
I wonder why querySelector can't get the same optimization: if the passed 
selector is a simple selector like ".class", do exactly what 
getElementsByClassName('class')[0] does?

> On Apr 28, 2015, at 10:51 AM, Ryosuke Niwa  wrote:
> 
> 
>> On Apr 27, 2015, at 7:04 PM, Jonas Sicking  wrote:
>> 
>> On Mon, Apr 27, 2015 at 1:57 AM, Glen Huang  wrote:
>>> Intuitively, querySelector('.class') only needs to find the first matching
>>> node, whereas getElementsByClassName('.class')[0] needs to find all matching
>>> nodes and then return the first. The former should be a lot quicker than the
>>> latter. Why is that not the case?
>> 
>> I can't speak for other browsers, but Gecko-based browsers only search
>> the DOM until the first hit for getElementsByClassName('class')[0].
>> I'm not sure why you say that it must scan for all hits.
> 
> WebKit (and, AFAIK, Blink) has the same optimization. It's a very important 
> optimization.
> 
> - R. Niwa
> 




Re: Why is querySelector much slower?

2015-04-27 Thread Glen Huang
Thank you for the sample code. It's very helpful.

When you say "var node = list[0];" walks the DOM until the first item is found, 
do you mean it only happens under the condition that some previous code has 
changed the DOM structure? If not, the returned list object will be marked as 
up-to-date, and accessing the first element is very cheap? I ask because in the 
first paragraph you said the returned list and its first element are 
probably precomputed.

Also, this is my mental model after reading your explanation, I wonder if 
that's correct:

After the UA has parsed the HTML, it caches a hash table of elements with class 
names (and likewise all elements with ids, all elements with tag names, etc., in 
different hash tables), keyed under the class names. When getElementsByClassName() 
is called, and the DOM hasn't been modified, it simply creates a list of elements 
with that class name from the hash table. When the first element is accessed from 
that list, and the DOM still isn't modified, the element is returned directly.

The hash table is kept in sync with the DOM when it's modified. And if the DOM 
is changed after the list is returned but before it's accessed, the list will 
be marked as dirty, and accessing its elements will walk the DOM (and mark the 
list as partially updated after that).

Is this description correct?

And the final question:

Why can't querySelector benefit from these hash tables? I currently feel the 
urge to optimize it myself by overriding it with a custom function which will 
parse the passed selector, and if it's a simple selector like "div", ".class", 
"#id", call the corresponding getElement*() function instead. Why can't UAs 
perform this for us?

If my mental model is correct, it's simpler than getElement*() from a UA's 
point of view. It simply needs to look up the first matching element from the 
hash table and return it; no need to return a list and mark it as clean or 
dirty any more. The only price it pays is parsing the selector.

Is it because authors don't use querySelector often enough that UAs aren't 
interested in optimizing it?
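The override described above might look something like the sketch below. The function names are mine, not a real API; the selector grammar handled here is deliberately minimal, and anything else falls back to the native querySelector:

```javascript
// Sketch of hand-optimizing querySelector for simple selectors.
// classifySimpleSelector is a pure helper; fastQuerySelector (a hypothetical
// name) dispatches "#id", ".class" and "tag" to the getElement*() fast paths
// and falls back to the native call for everything else.
function classifySimpleSelector(sel) {
  let m;
  if ((m = /^#([\w-]+)$/.exec(sel))) return { kind: 'id', value: m[1] };
  if ((m = /^\.([\w-]+)$/.exec(sel))) return { kind: 'class', value: m[1] };
  if (/^[a-zA-Z][\w-]*$/.test(sel)) return { kind: 'tag', value: sel };
  return null; // compound or complex selector: no fast path
}

function fastQuerySelector(root, sel) {
  const simple = classifySimpleSelector(sel);
  if (!simple) return root.querySelector(sel);
  switch (simple.kind) {
    case 'id':
      // getElementById only exists on Document, so guard for element roots.
      return root.getElementById
        ? root.getElementById(simple.value)
        : root.querySelector(sel);
    case 'class':
      return root.getElementsByClassName(simple.value)[0] || null;
    case 'tag':
      return root.getElementsByTagName(simple.value)[0] || null;
  }
}
```

Whether this actually beats the native call depends on the engine; as discussed in this thread, engines already short-circuit getElementsBy*()[0]-style access.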

> On Apr 27, 2015, at 9:51 PM, Boris Zbarsky  wrote:
> 
> On 4/27/15 4:57 AM, Glen Huang wrote:
>> Intuitively, querySelector('.class') only needs to find the first
>> matching node, whereas getElementsByClassName('.class')[0] needs to find
>> all matching nodes
> 
> Not true; see below.
> 
>> and then return the first. The former should be a lot
>> quicker than the latter. Why is that not the case?
>> 
>> See http://jsperf.com/queryselectorall-vs-getelementsbytagname/119 for
>> the test
> 
> All getElementsByClassName(".foo") has to do in a microbenchmark like this is 
> look up a cached list (probably a single hashtable lookup) and return its 
> first element (likewise precomputed, unless you're modifying the DOM in ways 
> that would affect the list).  It doesn't have to walk the tree at all.
> 
> querySelector(".foo"), on the other hand, probably walks the tree at the 
> moment in implementations.
> 
> Also, back to the "not true" above: since the list returned by getElementsBy* 
> is live and periodically needs to be recomputed anyway, and since grabbing 
> just its first element is a common usage pattern, Gecko's implementation is 
> actually lazy (see https://bugzilla.mozilla.org/show_bug.cgi?id=104603#c0 for 
> the motivation): it will only walk as much of the DOM as needed to reply to 
> the query being made.  So for example:
> 
>  // Creates a list object, doesn't do any walking of the DOM, marks
>  // object as dirty and returns it.
>  var list = document.getElementsByClassName(".foo");
> 
>  // Walks the DOM until it finds the first element of the list, marks
>  // the list as "partially updated", and returns that first element.
>  var node = list[0];
> 
>  // Marks the list as dirty again, since the set of nodes it matches
>  // has changed
>  document.documentElement.className = "foo";
> 
> I can't speak for what other UAs do here, but the assumption that 
> getElementsByClassName('.class')[0] needs to find all matching nodes is just 
> not true in Gecko.
> 
> -Boris
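The lazy behavior described above can be modeled with a toy "lazy live list": created dirty, walked only as far as the query requires, and re-dirtied on mutation. The document model (`{elements, liveLists}`) here is invented for illustration; real engines implement this in C++ with ancestor invalidation marks:

```javascript
// Toy model of a lazy LiveNodeList. This is not how any engine's internals
// are actually shaped; it only demonstrates the access pattern.
class LazyClassList {
  constructor(doc, className) {
    this.doc = doc;
    this.className = className;
    this.cache = [];   // matches found so far
    this.scanned = 0;  // how far into doc.elements we've walked
    doc.liveLists.push(this); // registered so mutations can invalidate us
  }
  invalidate() {       // called on DOM mutation: list becomes dirty again
    this.cache = [];
    this.scanned = 0;
  }
  item(i) {
    // Walk lazily: only until the i-th match is found or the doc is exhausted.
    while (this.cache.length <= i && this.scanned < this.doc.elements.length) {
      const el = this.doc.elements[this.scanned++];
      if (el.classes.has(this.className)) this.cache.push(el);
    }
    return this.cache[i] ?? null;
  }
}
```

Asking for item(0) stops at the first match, which is why getElementsByClassName('foo')[0] needn't scan the whole document.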




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 7:32 PM, Steve Orvell  wrote:
>> 
>> Perhaps we need to make childrenChanged optionally get called when 
>> attributes of child nodes are changed just like the way you can configure 
>> mutation observers to optionally monitor attribute changes.
> 
> Wow, let me summarize if I can. Let's say we have (a) a custom elements 
> synchronous callback `childrenChanged` that can see child adds/removes and 
> child attribute mutations, (b) the first option in the proposed api here 
> https://gist.github.com/rniwa/2f14588926e1a11c65d3, (c) user element code 
> that wires everything together correctly. Then, unless I am mistaken, we have 
> enough power to implement something like the currently spec'd declarative 
> `select` mechanism or the proposed `slot` mechanism without any change to 
> user's expectations about when information in the dom can be queried.

Right. The sticking point is that it's like re-introducing mutation events all 
over again if we don't do it carefully.

> Do the implementors think all of that is feasible?

I think something along this line should be feasible to implement, but the 
performance impact of firing so many events may warrant going back to 
micro-task timing and thinking of an alternative solution for the consistency.

> Possible corner case: If a <content> is added to a shadowRoot, this should 
> probably invalidate the distribution and redo everything. To maintain a 
> synchronous mental model, the <content> mutation in the shadowRoot subtree 
> needs to be seen synchronously. This is not possible with the tools mentioned 
> above, but it seems like a reasonable requirement that the shadowRoot author 
> can be aware of this change since the author is causing it to happen.

Alternatively, an insertion point could start empty, and the author could move 
stuff into it after running. We can also add `removeAll` on HTMLContentElement 
or 'resetDistribution' on ShadowRoot to remove all distributed nodes from a 
given insertion point or all insertion points associated with a shadow root.

- R. Niwa




Re: Why is querySelector much slower?

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 7:04 PM, Jonas Sicking  wrote:
> 
> On Mon, Apr 27, 2015 at 1:57 AM, Glen Huang  wrote:
>> Intuitively, querySelector('.class') only needs to find the first matching
>> node, whereas getElementsByClassName('.class')[0] needs to find all matching
>> nodes and then return the first. The former should be a lot quicker than the
>> latter. Why is that not the case?
> 
> I can't speak for other browsers, but Gecko-based browsers only search
> the DOM until the first hit for getElementsByClassName('class')[0].
> I'm not sure why you say that it must scan for all hits.

WebKit (and, AFAIK, Blink) has the same optimization. It's a very important 
optimization.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Steve Orvell
>
> Perhaps we need to make childrenChanged optionally get called when
> attributes of child nodes are changed just like the way you can configure
> mutation observers to optionally monitor attribute changes.


Wow, let me summarize if I can. Let's say we have (a) a custom elements
synchronous callback `childrenChanged` that can see child adds/removes and
child attribute mutations, (b) the first option in the proposed api here
https://gist.github.com/rniwa/2f14588926e1a11c65d3, (c) user element code
that wires everything together correctly. Then, unless I am mistaken, we
have enough power to implement something like the currently spec'd
declarative `select` mechanism or the proposed `slot` mechanism without any
change to user's expectations about when information in the dom can be
queried.

Do the implementors think all of that is feasible?

Possible corner case: If a <content> is added to a shadowRoot, this should
probably invalidate the distribution and redo everything. To maintain a
synchronous mental model, the <content> mutation in the shadowRoot subtree
needs to be seen synchronously. This is not possible with the tools
mentioned above, but it seems like a reasonable requirement that the
shadowRoot author can be aware of this change since the author is causing
it to happen.


On Mon, Apr 27, 2015 at 7:01 PM, Ryosuke Niwa  wrote:

>
> > On Apr 27, 2015, at 5:43 PM, Steve Orvell  wrote:
> >>
> >> That might be an acceptable mode of operations. If you wanted to
> synchronously update your insertion points, rely on custom element's
> lifecycle callbacks and you can only support direct children for
> distribution.
> >
> > That's interesting, thanks for working through it. Given a
> `childrenChanged` callback, I think your first proposal
> `<content>.insertAt` and `<content>.remove` best supports a synchronous
> mental model. As you note, re-distribution is then the element author's
> responsibility. This would be done by listening to the synchronous
> `distributionChanged` event. That seems straightforward.
> >
> > Mutations that are not captured in childrenChanged that can affect
> distribution would still be a problem, however. Given:
> >
> > <div id="host">
> >   <div id="child"></div>
> > </div>
> >
> > child.setAttribute('slot', 'a');
> > host.offsetHeight;
> >
> > Again, we are guaranteed that parent's offsetHeight includes any
> contribution that adding the slot attribute caused (e.g. via a
> #child[slot=a] rule)
> >
> > If the `host` is a custom element that uses distribution, would it be
> possible to have this same guarantee?
> >
> > <x-foo id="host">
> >   <div id="child"></div>
> > </x-foo>
> >
> > child.setAttribute('slot', 'a');
> > host.offsetHeight;
>
> That's a good point. Perhaps we need to make childrenChanged optionally
> get called when attributes of child nodes are changed just like the way you
> can configure mutation observers to optionally monitor attribute changes.
>
> - R. Niwa
>
>


Re: Exposing structured clone as an API?

2015-04-27 Thread Jonas Sicking
On Thu, Apr 23, 2015 at 6:31 PM, Kyle Huey  wrote:
> On Thu, Apr 23, 2015 at 6:06 PM, Boris Zbarsky  wrote:
>> On 4/23/15 6:34 PM, Elliott Sprehn wrote:
>>>
>>> Have you benchmarked this? I think you're better off just writing your
>>> own clone library.
>>
>>
>> That requires having a list of all objects browsers consider clonable and
>> having ways of cloning them all, right?  Maintaining such a library is
>> likely to be a somewhat demanding undertaking as new clonable objects are
>> added...
>>
>> -Boris
>>
>
> Today it's not demanding, it's not even possible.  e.g. how do you
> duplicate a FileList object?

We should just fix [1] and get rid of the FileList interface. Are
there more interfaces this applies to?

[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=23682

/ Jonas



Re: Why is querySelector much slower?

2015-04-27 Thread Jonas Sicking
On Mon, Apr 27, 2015 at 1:57 AM, Glen Huang  wrote:
> Intuitively, querySelector('.class') only needs to find the first matching
> node, whereas getElementsByClassName('.class')[0] needs to find all matching
> nodes and then return the first. The former should be a lot quicker than the
> latter. Why is that not the case?

I can't speak for other browsers, but Gecko-based browsers only search
the DOM until the first hit for getElementsByClassName('class')[0].
I'm not sure why you say that it must scan for all hits.

/ Jonas



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 5:43 PM, Steve Orvell  wrote:
>> 
>> That might be an acceptable mode of operations. If you wanted to 
>> synchronously update your insertion points, rely on custom element's 
>> lifecycle callbacks and you can only support direct children for 
>> distribution. 
> 
> That's interesting, thanks for working through it. Given a `childrenChanged` 
> callback, I think your first proposal `<content>.insertAt` and 
> `<content>.remove` best supports a synchronous mental model. As you note, 
> re-distribution is then the element author's responsibility. This would be 
> done by listening to the synchronous `distributionChanged` event. That seems 
> straightforward.
> 
> Mutations that are not captured in childrenChanged that can affect 
> distribution would still be a problem, however. Given:
> 
> <div id="host">
>   <div id="child"></div>
> </div>
> 
> child.setAttribute('slot', 'a');
> host.offsetHeight;
> 
> Again, we are guaranteed that parent's offsetHeight includes any contribution 
> that adding the slot attribute caused (e.g. via a #child[slot=a] rule)
> 
> If the `host` is a custom element that uses distribution, would it be 
> possible to have this same guarantee?
> 
> <x-foo id="host">
>   <div id="child"></div>
> </x-foo>
> 
> child.setAttribute('slot', 'a');
> host.offsetHeight;

That's a good point. Perhaps we need to make childrenChanged optionally get 
called when attributes of child nodes are changed just like the way you can 
configure mutation observers to optionally monitor attribute changes.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Steve Orvell
>
> That might be an acceptable mode of operations. If you wanted to
> synchronously update your insertion points, rely on custom element's
> lifecycle callbacks and you can only support direct children for
> distribution.


That's interesting, thanks for working through it. Given a
`childrenChanged` callback, I think your first proposal
`<content>.insertAt` and `<content>.remove` best supports a synchronous
mental model. As you note, re-distribution is then the element author's
responsibility. This would be done by listening to the synchronous
`distributionChanged` event. That seems straightforward.

Mutations that are not captured in childrenChanged that can affect
distribution would still be a problem, however. Given:

<div id="host">
  <div id="child"></div>
</div>

child.setAttribute('slot', 'a');
host.offsetHeight;

Again, we are guaranteed that parent's offsetHeight includes any
contribution that adding the slot attribute caused (e.g. via a
#child[slot=a] rule)

If the `host` is a custom element that uses distribution, would it be
possible to have this same guarantee?

<x-foo id="host">
  <div id="child"></div>
</x-foo>

child.setAttribute('slot', 'a');
host.offsetHeight;








On Mon, Apr 27, 2015 at 4:55 PM, Ryosuke Niwa  wrote:

>
> > On Apr 27, 2015, at 4:41 PM, Steve Orvell  wrote:
> >
> >> Again, the timing was deferred in [1] and [2] so it really depends on
> when each component decides to distribute.
> >
> > I want to be able to create an element <x-foo> that acts like other dom
> elements. This element uses Shadow DOM and distribution to encapsulate its
> details.
> >
> > Let's imagine a 3rd party user named Bob that uses <x-foo> and <div>.
> Bob knows he can call div.appendChild(element) and then immediately ask
> div.offsetHeight and know that this height includes whatever the added
> element should contribute to the div's height. Bob expects to be able to do
> this with the <x-foo> element also since it is just another element from
> his perspective.
> >
> > How can I, the author of <x-foo>, craft my element such that I don't
> violate Bob's expectations? Does your proposal support this?
>
> In order to support this use case, the author of x-foo must use some
> mechanism to observe changes to x-foo's child nodes and invoke
> `distribute` synchronously.  This will become possible, for example, if we
> added childrenChanged lifecycle callback to custom elements.
>
> That might be an acceptable mode of operations. If you wanted to
> synchronously update your insertion points, rely on custom element's
> lifecycle callbacks and you can only support direct children for
> distribution. Alternatively, if you wanted to support to distribute a
> non-direct-child descendent, just use mutation observers to do it at the
> end of a micro task.
>
> - R. Niwa
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 4:41 PM, Steve Orvell  wrote:
> 
>> Again, the timing was deferred in [1] and [2] so it really depends on when 
>> each component decides to distribute.
> 
> I want to be able to create an element <x-foo> that acts like other dom 
> elements. This element uses Shadow DOM and distribution to encapsulate its 
> details.
> 
> Let's imagine a 3rd party user named Bob that uses <x-foo> and <div>. Bob 
> knows he can call div.appendChild(element) and then immediately ask 
> div.offsetHeight and know that this height includes whatever the added 
> element should contribute to the div's height. Bob expects to be able to do 
> this with the <x-foo> element also since it is just another element from his 
> perspective.
> 
> How can I, the author of <x-foo>, craft my element such that I don't violate 
> Bob's expectations? Does your proposal support this?

In order to support this use case, the author of x-foo must use some mechanism 
to observe changes to x-foo's child nodes and invoke `distribute` 
synchronously.  This will become possible, for example, if we added a 
childrenChanged lifecycle callback to custom elements.

That might be an acceptable mode of operations. If you wanted to synchronously 
update your insertion points, rely on custom element's lifecycle callbacks and 
you can only support direct children for distribution. Alternatively, if you 
wanted to support distributing a non-direct-child descendant, just use 
mutation observers to do it at the end of a microtask.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Steve Orvell
>
> Again, the timing was deferred in [1] and [2] so it really depends on when
> each component decides to distribute.


I want to be able to create an element <x-foo> that acts like other dom
elements. This element uses Shadow DOM and distribution to encapsulate its
details.

Let's imagine a 3rd party user named Bob that uses <x-foo> and <div>. Bob
knows he can call div.appendChild(element) and then immediately ask
div.offsetHeight and know that this height includes whatever the added
element should contribute to the div's height. Bob expects to be able to do
this with the <x-foo> element also since it is just another element from
his perspective.

How can I, the author of <x-foo>, craft my element such that I don't
violate Bob's expectations? Does your proposal support this?

On Mon, Apr 27, 2015 at 3:42 PM, Ryosuke Niwa  wrote:

>
> > On Apr 27, 2015, at 3:15 PM, Steve Orvell  wrote:
> >
> > IMO, the appeal of this proposal is that it's a small change to the
> current spec and avoids changing user expectations about the state of the
> dom and can explain the two declarative proposals for distribution.
> >
> >> It seems like with this API, we’d have to make O(n^k) calls where n is
> the number of distribution candidates and k is the number of insertion
> points, and that’s bad.  Or am I misunderstanding your design?
> >
> > I think you've understood the proposed design. As you noted, the cost is
> actually O(n*k). In our use cases, k is generally very small.
>
> I don't think we want to introduce an O(nk) algorithm. Pretty much every
> browser optimization we implement these days removes O(n^2) algorithms
> in favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because
> we can't even theoretically optimize it away.
>
> >> Do you mean instead that we synchronously invoke this algorithm when a
> child node is inserted or removed from the host?  If so, that’ll impose
> unacceptable runtime cost for DOM mutations.
> >> I think the only timing UA can support by default will be at the end of
> micro task or at UA-code / user-code boundary as done for custom element
> lifecycle callbacks at the moment.
> > Running this callback at the UA-code/user-code boundary seems like it
> would be fine. Running the more complicated "distribute all the nodes"
> proposals at this time would obviously not be feasible. The notion here is
> that since we're processing only a single node at a time, this can be done
> after an atomic dom action.
>
> Indeed, running such an algorithm each time a node is inserted or removed
> will be quite expensive.
>
> >> “always correct” is a somewhat stronger statement than I would make here
> since during UA calls these shouldDistributeToInsertionPoint callbacks,
> we'll certainly see transient offsetHeight values.
> >
> > Yes, you're right about that. Specifically it would be bad to try to
> read `offsetHeight` in this callback and this would be an anti-pattern. If
> that's not good enough, perhaps we can explore actually not working
> directly with the node but instead the subset of information necessary to
> be able to decide on distribution.
>
> I'm not necessarily saying that it's not good enough.  I'm just saying
> that it is possible to observe such a state even with this API.
>
> > Can you explain, under the initial proposal, how a user can ask an
> element's dimensions and get the post-distribution answer? With current dom
> api's I can be sure that if I do parent.appendChild(child) and then
> parent.offsetWidth, the answer takes child into account. I'm looking to
> understand how we don't violate this expectation when parent distributes.
> Or if we violate this expectation, what is the proposed right way to ask
> this question?
>
> You don't get that guarantee in the design we discussed on Friday [1] [2].
> In fact, we basically deferred the timing issue to other APIs that observe
> DOM changes, namely mutation observers and custom elements lifecycle
> callbacks. Each component uses those APIs to call distribute().
>
> > In addition to rendering information about a node, distribution also
> affects the flow of events. So a similar question: when is it safe to call
> child.dispatchEvent such that if parent distributes elements to its
> shadowRoot, elements in the shadowRoot will see the event?
>
> Again, the timing was deferred in [1] and [2] so it really depends on when
> each component decides to distribute.
>
> - R. Niwa
>
> [1] https://gist.github.com/rniwa/2f14588926e1a11c65d3
> [2] https://gist.github.com/annevk/e9e61801fcfb251389ef
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Tab Atkins Jr.
On Mon, Apr 27, 2015 at 4:06 PM, Tab Atkins Jr.  wrote:
> On Mon, Apr 27, 2015 at 3:42 PM, Ryosuke Niwa  wrote:
>>> On Apr 27, 2015, at 3:15 PM, Steve Orvell  wrote:
>>> IMO, the appeal of this proposal is that it's a small change to the current 
>>> spec and avoids changing user expectations about the state of the dom and 
>>> can explain the two declarative proposals for distribution.
>>>
 It seems like with this API, we’d have to make O(n^k) calls where n is the 
 number of distribution candidates and k is the number of insertion points, 
 and that’s bad.  Or am I misunderstanding your design?
>>>
>>> I think you've understood the proposed design. As you noted, the cost is 
>>> actually O(n*k). In our use cases, k is generally very small.
>>
>> I don't think we want to introduce an O(nk) algorithm. Pretty much every 
>> browser optimization we implement these days removes O(n^2) algorithms 
>> in favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because 
>> we can't even theoretically optimize it away.
>
> You're aware, obviously, that O(n^2) is a far different beast than
> O(nk).  If k is generally small, which it is, O(nk) is basically just
> O(n) with a constant factor applied.

To make it clear: I'm not trying to troll Ryosuke here.

He argued that we don't want to add new O(n^2) algorithms if we can
help it, and that we prefer O(n).  (Uncontroversial.)

He then further said that an O(nk) algorithm is sufficiently close to
O(n^2) that he'd similarly like to avoid it.  I'm trying to
reiterate/expand on Steve's message here, that the k value in question
is usually very small, relative to the value of n, so in practice this
O(nk) is more similar to O(n) than O(n^2), and Ryosuke's aversion to
new O(n^2) algorithms may be mistargeted here.

~TJ



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Tab Atkins Jr.
On Mon, Apr 27, 2015 at 3:42 PM, Ryosuke Niwa  wrote:
>> On Apr 27, 2015, at 3:15 PM, Steve Orvell  wrote:
>> IMO, the appeal of this proposal is that it's a small change to the current 
>> spec and avoids changing user expectations about the state of the dom and 
>> can explain the two declarative proposals for distribution.
>>
>>> It seems like with this API, we’d have to make O(n^k) calls where n is the 
>>> number of distribution candidates and k is the number of insertion points, 
>>> and that’s bad.  Or am I misunderstanding your design?
>>
>> I think you've understood the proposed design. As you noted, the cost is 
>> actually O(n*k). In our use cases, k is generally very small.
>
> I don't think we want to introduce an O(nk) algorithm. Pretty much every browser 
> optimization we implement these days removes O(n^2) algorithms in 
> favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because we can't 
> even theoretically optimize it away.

You're aware, obviously, that O(n^2) is a far different beast than
O(nk).  If k is generally small, which it is, O(nk) is basically just
O(n) with a constant factor applied.

~TJ



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 3:31 PM, Hayato Ito  wrote:
> 
> I think there are a lot of user operations where distribution must be updated 
> before returning a meaningful result synchronously.
> Unless the distribution result is correctly updated, users would get stale 
> results.

Indeed.

> For example:
> - element.offsetWidth: Style resolution requires distribution. We must 
> update distribution, if it's dirty, before calculating offsetWidth 
> synchronously.
> - event dispatching: event path requires distribution because it needs a 
> composed tree.
> 
> Are the current HTML/DOM specs rich enough to explain the timing of when the 
> imperative APIs should run in these cases?

It certainly doesn't tell us when style resolution happens. In the case of 
event dispatching, it's impossible even in theory unless we somehow disallow 
event dispatching within our `distribute` callbacks, since we can dispatch new 
events within the callbacks to decide where a given node gets distributed. 
Given that, I don't think we should even try to make such a guarantee.

We could, however, guarantee some weaker level of consistency for user code 
outside of `distribute` callbacks. For example, I can think of three levels 
(weakest to strongest) of self-consistent invariants:
1. every node is distributed to at most one insertion point.
2. all first-order distributions are up-to-date (redistribution may happen 
later).
3. all distributions are up-to-date.

> For me, the imperative APIs for distribution sounds very similar to the 
> imperative APIs for style resolution. The difficulties of both problems might 
> be similar.

We certainly don't want to (in fact, we'll object to) spec the timing for style 
resolution, or even what style resolution means.

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 3:15 PM, Steve Orvell  wrote:
> 
> IMO, the appeal of this proposal is that it's a small change to the current 
> spec and avoids changing user expectations about the state of the dom and can 
> explain the two declarative proposals for distribution.
> 
>> It seems like with this API, we’d have to make O(n^k) calls where n is the 
>> number of distribution candidates and k is the number of insertion points, 
>> and that’s bad.  Or am I misunderstanding your design?
> 
> I think you've understood the proposed design. As you noted, the cost is 
> actually O(n*k). In our use cases, k is generally very small.

I don't think we want to introduce an O(nk) algorithm. Pretty much every browser 
optimization we implement these days removes O(n^2) algorithms in 
favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because we can't 
even theoretically optimize it away.
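Concretely, the per-candidate callback shape under discussion implies a nested loop like the following. The name `shouldDistributeToInsertionPoint` is taken from the thread, but the function signature and the node/insertion-point object shapes are my own illustrative assumptions, not a shipped API:

```javascript
// Sketch of the per-candidate callback design being debated: every
// distribution candidate is tested against every insertion point, which is
// where the O(n*k) callback invocations come from.
function distribute(candidates, insertionPoints, shouldDistributeToInsertionPoint) {
  const assignment = new Map();
  for (const point of insertionPoints) assignment.set(point, []);
  for (const node of candidates) {            // n candidates...
    for (const point of insertionPoints) {    // ...times k insertion points
      if (shouldDistributeToInsertionPoint(node, point)) {
        assignment.get(point).push(node);
        break; // a node is distributed to at most one insertion point
      }
    }
  }
  return assignment;
}
```

Since the callback is opaque user code, the loop itself cannot be skipped or memoized by the engine, which is the optimization concern raised above.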

>> Do you mean instead that we synchronously invoke this algorithm when a child 
>> node is inserted or removed from the host?  If so, that’ll impose 
>> unacceptable runtime cost for DOM mutations.
>> I think the only timing UA can support by default will be at the end of 
>> micro task or at UA-code / user-code boundary as done for custom element 
>> lifecycle callbacks at the moment.
> Running this callback at the UA-code/user-code boundary seems like it would 
> be fine. Running the more complicated "distribute all the nodes" proposals at 
> this time would obviously not be feasible. The notion here is that since 
> we're processing only a single node at a time, this can be done after an 
> atomic dom action.

Indeed, running such an algorithm each time a node is inserted or removed will be 
quite expensive.

>> “always correct” is a somewhat stronger statement than I would make here 
>> since during UA calls these shouldDistributeToInsertionPoint callbacks, 
>> we'll certainly see transient offsetHeight values.
> 
> Yes, you're right about that. Specifically it would be bad to try to read 
> `offsetHeight` in this callback and this would be an anti-pattern. If that's 
> not good enough, perhaps we can explore actually not working directly with 
> the node but instead the subset of information necessary to be able to decide 
> on distribution.

I'm not necessarily saying that it's not good enough.  I'm just saying that it 
is possible to observe such a state even with this API.

> Can you explain, under the initial proposal, how a user can ask an element's 
> dimensions and get the post-distribution answer? With current dom api's I can 
> be sure that if I do parent.appendChild(child) and then parent.offsetWidth, 
> the answer takes child into account. I'm looking to understand how we don't 
> violate this expectation when parent distributes. Or if we violate this 
> expectation, what is the proposed right way to ask this question?

You don't get that guarantee in the design we discussed on Friday [1] [2]. In 
fact, we basically deferred the timing issue to other APIs that observe DOM 
changes, namely mutation observers and custom elements lifecycle callbacks. 
Each component uses those APIs to call distribute().

> In addition to rendering information about a node, distribution also affects 
> the flow of events. So a similar question: when is it safe to call 
> child.dispatchEvent such that if parent distributes elements to its 
> shadowRoot, elements in the shadowRoot will see the event?

Again, the timing was deferred in [1] and [2] so it really depends on when each 
component decides to distribute.

- R. Niwa

[1] https://gist.github.com/rniwa/2f14588926e1a11c65d3
[2] https://gist.github.com/annevk/e9e61801fcfb251389ef




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Hayato Ito
I think there are a lot of user operations where distribution must be
updated before a meaningful result can be returned synchronously.
Unless the distribution result is correctly updated, users would see a stale
result.

For example:
- element.offsetWidth:  Style resolution requires distribution. We must
update distribution, if it's dirty, before calculating offsetWidth
synchronously.
- event dispatching: event path requires distribution because it needs a
composed tree.

Are the current HTML/DOM specs rich enough to explain the timing when
the imperative APIs should be run in these cases?

For me, the imperative APIs for distribution sounds very similar to the
imperative APIs for style resolution. The difficulties of both problems
might be similar.
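Hayato's lazy-update pattern can be sketched as follows. This is a rough simulation with plain objects, not real DOM: `offsetWidthOf` stands in for the engine's layout query, and the "flush" is just a counter, but it shows the property being argued for — a dirty distribution is updated before the synchronous result is returned, and a clean one is not recomputed.

```javascript
// Hypothetical sketch: a layout query forces a pending (dirty) distribution
// to be flushed first, so the caller never observes a stale result.
let distributionDirty = true;
let flushCount = 0;

function updateDistributionIfDirty() {
  if (!distributionDirty) return;
  flushCount += 1;               // a real engine would redo distribution here
  distributionDirty = false;
}

function offsetWidthOf(element) {
  updateDistributionIfDirty();   // style resolution requires distribution
  return element.width;          // stand-in for the real layout computation
}

const el = { width: 120 };
offsetWidthOf(el);   // flushes the dirty distribution
offsetWidthOf(el);   // already clean: no second flush
console.log(flushCount); // 1
```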





On Tue, Apr 28, 2015 at 7:18 AM Steve Orvell  wrote:

> IMO, the appeal of this proposal is that it's a small change to the
> current spec and avoids changing user expectations about the state of the
> dom and can explain the two declarative proposals for distribution.
>
>
>> It seems like with this API, we’d have to make O(n^k) calls where n is
>> the number of distribution candidates and k is the number of insertion
>> points, and that’s bad.  Or am I misunderstanding your design?
>
>
> I think you've understood the proposed design. As you noted, the cost is
> actually O(n*k). In our use cases, k is generally very small.
>
> Do you mean instead that we synchronously invoke this algorithm when a
>> child node is inserted or removed from the host?  If so, that’ll impose
>> unacceptable runtime cost for DOM mutations.
>> I think the only timing UA can support by default will be at the end of
>> micro task or at UA-code / user-code boundary as done for custom element
>> lifecycle callbacks at the moment.
>
>
> Running this callback at the UA-code/user-code boundary seems like it
> would be fine. Running the more complicated "distribute all the nodes"
> proposals at this time would obviously not be feasible. The notion here is
> that since we're processing only a single node at a time, this can be done
> after an atomic dom action.
>
> “always correct” is somewhat stronger statement than I would state here
>> since during UA calls these shouldDistributeToInsertionPoint callbacks,
>> we'll certainly see transient offsetHeight values.
>
>
> Yes, you're right about that. Specifically it would be bad to try to read
> `offsetHeight` in this callback and this would be an anti-pattern. If
> that's not good enough, perhaps we can explore actually not working
> directly with the node but instead the subset of information necessary to
> be able to decide on distribution.
>
> Can you explain, under the initial proposal, how a user can ask an
> element's dimensions and get the post-distribution answer? With current
> dom api's I can be sure that if I do parent.appendChild(child) and then
> parent.offsetWidth, the answer takes child into account. I'm looking to
> understand how we don't violate this expectation when parent distributes.
> Or if we violate this expectation, what is the proposed right way to ask
> this question?
>
> In addition to rendering information about a node, distribution also
> affects the flow of events. So a similar question: when is it safe to call 
> child.dispatchEvent such that if parent distributes elements to its
> shadowRoot, elements in the shadowRoot will see the event?
>
> On Mon, Apr 27, 2015 at 1:45 PM, Ryosuke Niwa  wrote:
>
>>
>> On Apr 27, 2015, at 11:47 AM, Steve Orvell  wrote:
>>
>> Here's a minimal and hopefully simple proposal that we can flesh out if
>> this seems like an interesting api direction:
>>
>>
>> https://gist.github.com/sorvell/e201c25ec39480be66aa
>>
>>
>> It seems like with this API, we’d have to make O(n^k) calls where n is
>> the number of distribution candidates and k is the number of insertion
>> points, and that’s bad.  Or am I misunderstanding your design?
>>
>>
>> We keep the currently spec'd distribution algorithm/timing but remove
>> `select` in favor of an explicit selection callback.
>>
>>
>> What do you mean by keeping the currently spec’ed timing?  We certainly
>> can’t do it at “style resolution time” because style resolution is an
>> implementation detail that we shouldn’t expose to the Web just like GC and
>> its timing is an implementation detail in JS.  Besides that, avoiding style
>> resolution is a very important optimization and spec’ing when it happens
>> will prevent us from optimizing it away in the future.
>>
>> Do you mean instead that we synchronously invoke this algorithm when a
>> child node is inserted or removed from the host?  If so, that’ll impose
>> unacceptable runtime cost for DOM mutations.
>>
>> I think the only timing UA can support by default will be at the end of
>> micro task or at UA-code / user-code boundary as done for custom element
>> lifecycle callbacks at the moment.
>>
>> The user simply returns true if the node should be distributed to the
>> given insertion point.

Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Hayato Ito
Could you clarify what you are trying to achieve? If we don't support it,
everything would be weird.

I guess you are proposing an alternative to the current pool population
algorithm and pool distribution algorithm.
I'd appreciate it if you could explain the expected results using those
algorithms.



On Tue, Apr 28, 2015 at 6:58 AM Ryosuke Niwa  wrote:

> On Apr 27, 2015, at 2:38 PM, Hayato Ito  wrote:
>
> On Tue, Apr 28, 2015 at 6:18 AM Ryosuke Niwa  wrote:
>
>>
>> > On Apr 26, 2015, at 6:11 PM, Hayato Ito  wrote:
>> >
>> > I think Polymer folks will answer the use case of re-distribution.
>> >
>> > So let me just show a good analogy so that everyone can understand
>> intuitively what re-distribution *means*.
>> > Let me use a pseudo language and define XComponent's constructor as
>> follows:
>> >
>> > XComponents::XComponents(Title text, Icon icon) {
>> >   this.text = text;
>> >   this.button = new XButton(icon);
>> >   ...
>> > }
>> >
>> > Here, |icon| is *re-distributed*.
>> >
>> > In the HTML world, this corresponds to the following:
>> >
>> > The usage of the <x-component> element:
>> >   <x-component>
>> >     <my-title>Hello World</my-title>
>> >     <my-icon>My Icon</my-icon>
>> >   </x-component>
>> >
>> > XComponent's shadow tree is:
>> >
>> >   <shadow-root>
>> >     <content select="my-title"></content>   <!-- (1) -->
>> >     <x-button>
>> >       <content select="my-icon"></content>  <!-- (2) -->
>> >     </x-button>
>> >   </shadow-root>
>>
>> I have a question as to whether x-button then has to select which nodes
>> to use or not.  In this particular example at least, x-button will put
>> every node distributed into (2) into a single insertion point in its shadow
>> DOM.
>>
>> If we don't have to support filtering of nodes at re-distribution time,
>> then the whole discussion of re-distribution is almost a moot because we
>> can just treat a content element like any other element that gets
>> distributed along with its distributed nodes.
>>
>>
> x-button can select.
> You might want to take a look at the distribution algorithm [1], where
> the behavior is well defined.
>
>
> I know we can in the current spec but should we support it?  What are
> concrete use cases in which x-button or other components need to select
> nodes in nested shadow DOM case?
>
> - R. Niwa
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Steve Orvell
IMO, the appeal of this proposal is that it's a small change to the current
spec and avoids changing user expectations about the state of the dom and
can explain the two declarative proposals for distribution.


> It seems like with this API, we’d have to make O(n^k) calls where n is the
> number of distribution candidates and k is the number of insertion points,
> and that’s bad.  Or am I misunderstanding your design?


I think you've understood the proposed design. As you noted, the cost is
actually O(n*k). In our use cases, k is generally very small.

Do you mean instead that we synchronously invoke this algorithm when a
> child node is inserted or removed from the host?  If so, that’ll impose
> unacceptable runtime cost for DOM mutations.
> I think the only timing UA can support by default will be at the end of
> micro task or at UA-code / user-code boundary as done for custom element
> lifecycle callbacks at the moment.


Running this callback at the UA-code/user-code boundary seems like it would
be fine. Running the more complicated "distribute all the nodes" proposals
at this time would obviously not be feasible. The notion here is that since
we're processing only a single node at a time, this can be done after an
atomic dom action.

“always correct” is somewhat stronger statement than I would state here
> since during UA calls these shouldDistributeToInsertionPoint callbacks,
> we'll certainly see transient offsetHeight values.


Yes, you're right about that. Specifically it would be bad to try to read
`offsetHeight` in this callback and this would be an anti-pattern. If
that's not good enough, perhaps we can explore actually not working
directly with the node but instead the subset of information necessary to
be able to decide on distribution.

Can you explain, under the initial proposal, how a user can ask an
element's dimensions and get the post-distribution answer? With current dom
api's I can be sure that if I do parent.appendChild(child) and then
parent.offsetWidth, the answer takes child into account. I'm looking to
understand how we don't violate this expectation when parent distributes.
Or if we violate this expectation, what is the proposed right way to ask
this question?

In addition to rendering information about a node, distribution also
affects the flow of events. So a similar question: when is it safe to call
child.dispatchEvent such that if parent distributes elements to its
shadowRoot, elements in the shadowRoot will see the event?

On Mon, Apr 27, 2015 at 1:45 PM, Ryosuke Niwa  wrote:

>
> On Apr 27, 2015, at 11:47 AM, Steve Orvell  wrote:
>
> Here's a minimal and hopefully simple proposal that we can flesh out if
> this seems like an interesting api direction:
>
>
> https://gist.github.com/sorvell/e201c25ec39480be66aa
>
>
> It seems like with this API, we’d have to make O(n^k) calls where n is the
> number of distribution candidates and k is the number of insertion points,
> and that’s bad.  Or am I misunderstanding your design?
>
>
> We keep the currently spec'd distribution algorithm/timing but remove
> `select` in favor of an explicit selection callback.
>
>
> What do you mean by keeping the currently spec’ed timing?  We certainly
> can’t do it at “style resolution time” because style resolution is an
> implementation detail that we shouldn’t expose to the Web just like GC and
> its timing is an implementation detail in JS.  Besides that, avoiding style
> resolution is a very important optimization and spec’ing when it happens
> will prevent us from optimizing it away in the future.
>
> Do you mean instead that we synchronously invoke this algorithm when a
> child node is inserted or removed from the host?  If so, that’ll impose
> unacceptable runtime cost for DOM mutations.
>
> I think the only timing UA can support by default will be at the end of
> micro task or at UA-code / user-code boundary as done for custom element
> lifecycle callbacks at the moment.
>
> The user simply returns true if the node should be distributed to the
> given insertion point.
>
> Advantages:
>  * the callback can be synchronous-ish because it acts only on a specific
> node when possible. Distribution then won't break existing expectations
> since `offsetHeight` is always correct.
>
>
> “always correct” is somewhat stronger statement than I would state here
> since during UA calls these shouldDistributeToInsertionPoint callbacks,
> we'll certainly see transient offsetHeight values.
>
> - R. Niwa
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 2:38 PM, Hayato Ito  wrote:
> 
> On Tue, Apr 28, 2015 at 6:18 AM Ryosuke Niwa wrote:
> 
> > On Apr 26, 2015, at 6:11 PM, Hayato Ito wrote:
> >
> > I think Polymer folks will answer the use case of re-distribution.
> >
> > So let me just show a good analogy so that everyone can understand 
> > intuitively what re-distribution *means*.
> > Let me use a pseudo language and define XComponent's constructor as follows:
> >
> > XComponents::XComponents(Title text, Icon icon) {
> >   this.text = text;
> >   this.button = new XButton(icon);
> >   ...
> > }
> >
> > Here, |icon| is *re-distributed*.
> >
> > In the HTML world, this corresponds to the following:
> >
> > The usage of the <x-component> element:
> >   <x-component>
> >     <my-title>Hello World</my-title>
> >     <my-icon>My Icon</my-icon>
> >   </x-component>
> >
> > XComponent's shadow tree is:
> >
> >   <shadow-root>
> >     <content select="my-title"></content>   <!-- (1) -->
> >     <x-button>
> >       <content select="my-icon"></content>  <!-- (2) -->
> >     </x-button>
> >   </shadow-root>
> 
> I have a question as to whether x-button then has to select which nodes to 
> use or not.  In this particular example at least, x-button will put every 
> node distributed into (2) into a single insertion point in its shadow DOM.
> 
> If we don't have to support filtering of nodes at re-distribution time, then 
> the whole discussion of re-distribution is almost a moot because we can just 
> treat a content element like any other element that gets distributed along 
> with its distributed nodes.
> 
> 
> x-button can select.
> You might want to take a look at the distribution algorithm [1], where the 
> behavior is well defined.

I know we can in the current spec but should we support it?  What are concrete 
use cases in which x-button or other components need to select nodes in nested 
shadow DOM case?

- R. Niwa



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 1:45 PM, Ryosuke Niwa  wrote:
> 
> 
>> On Apr 27, 2015, at 11:47 AM, Steve Orvell wrote:
>> 
>> Here's a minimal and hopefully simple proposal that we can flesh out if this 
>> seems like an interesting api direction:
>> 
>> https://gist.github.com/sorvell/e201c25ec39480be66aa 
>> 
> 
> It seems like with this API, we’d have to make O(n^k)

I meant to say O(nk).  Sorry, I'm still waking up :(

> calls where n is the number of distribution candidates and k is the number of 
> insertion points, and that’s bad.  Or am I misunderstanding your design?
> 
>> 
>> We keep the currently spec'd distribution algorithm/timing but remove 
>> `select` in favor of an explicit selection callback.
> 
> What do you mean by keeping the currently spec’ed timing?  We certainly can’t 
> do it at “style resolution time” because style resolution is an 
> implementation detail that we shouldn’t expose to the Web just like GC and 
> its timing is an implementation detail in JS.  Besides that, avoiding style 
> resolution is a very important optimization and spec’ing when it happens 
> will prevent us from optimizing it away in the future.
> 
> Do you mean instead that we synchronously invoke this algorithm when a child 
> node is inserted or removed from the host?  If so, that’ll impose 
> unacceptable runtime cost for DOM mutations.
> 
> I think the only timing UA can support by default will be at the end of micro 
> task or at UA-code / user-code boundary as done for custom element lifecycle 
> callbacks at the moment.
> 
>> The user simply returns true if the node should be distributed to the given 
>> insertion point.
>> 
>> Advantages:
>>  * the callback can be synchronous-ish because it acts only on a specific 
>> node when possible. Distribution then won't break existing expectations 
>> since `offsetHeight` is always correct.
> 
> “always correct” is somewhat stronger statement than I would state here since 
> during UA calls these shouldDistributeToInsertionPoint callbacks, we'll 
> certainly see transient offsetHeight values.
> 
> - R. Niwa
> 



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Hayato Ito
On Tue, Apr 28, 2015 at 6:18 AM Ryosuke Niwa  wrote:

>
> > On Apr 26, 2015, at 6:11 PM, Hayato Ito  wrote:
> >
> > I think Polymer folks will answer the use case of re-distribution.
> >
> > So let me just show a good analogy so that everyone can understand
> intuitively what re-distribution *means*.
> > Let me use a pseudo language and define XComponent's constructor as
> follows:
> >
> > XComponents::XComponents(Title text, Icon icon) {
> >   this.text = text;
> >   this.button = new XButton(icon);
> >   ...
> > }
> >
> > Here, |icon| is *re-distributed*.
> >
> > In the HTML world, this corresponds to the following:
> >
> > The usage of the <x-component> element:
> >   <x-component>
> >     <my-title>Hello World</my-title>
> >     <my-icon>My Icon</my-icon>
> >   </x-component>
> >
> > XComponent's shadow tree is:
> >
> >   <shadow-root>
> >     <content select="my-title"></content>   <!-- (1) -->
> >     <x-button>
> >       <content select="my-icon"></content>  <!-- (2) -->
> >     </x-button>
> >   </shadow-root>
>
> I have a question as to whether x-button then has to select which nodes to
> use or not.  In this particular example at least, x-button will put every
> node distributed into (2) into a single insertion point in its shadow DOM.
>
> If we don't have to support filtering of nodes at re-distribution time,
> then the whole discussion of re-distribution is almost a moot because we
> can just treat a content element like any other element that gets
> distributed along with its distributed nodes.
>
>
x-button can select.
You might want to take a look at the distribution algorithm [1], where the
behavior is well defined.

[1]: http://w3c.github.io/webcomponents/spec/shadow/#distribution-algorithms

In short, the distributed nodes of <content select="my-icon"> will be the
next candidates of nodes from where insertion points in the shadow tree
that <x-button> hosts can select.
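This two-pass pool computation can be sketched roughly as follows. It is a deliberately simplified model using plain objects and tag-based matching (the real algorithm in [1] also handles ordering, fallback content, and select expressions), but it shows how the outer tree's distribution result becomes the candidate pool for the inner tree.

```javascript
// Rough model of re-distribution: the outer tree's insertion point selects
// from the host's children; whatever it receives becomes the pool for
// insertion points in the inner (x-button's) shadow tree.
function distributeToContent(pool, select) {
  // select === null models a <content> with no select='' (matches everything).
  return pool.filter(node => select === null || node.tag === select);
}

const hostChildren = [{ tag: 'my-title' }, { tag: 'my-icon' }];

// Pass 1: the outer shadow tree's insertion point that x-button wraps.
const passedToXButton = distributeToContent(hostChildren, 'my-icon');

// Pass 2: x-button's own shadow tree re-distributes those same nodes.
const finalDistribution = distributeToContent(passedToXButton, null);

console.log(finalDistribution.map(n => n.tag)); // [ 'my-icon' ]
```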




> - R. Niwa
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 26, 2015, at 6:11 PM, Hayato Ito  wrote:
> 
> I think Polymer folks will answer the use case of re-distribution.
> 
> So let me just show a good analogy so that everyone can understand 
> intuitively what re-distribution *means*.
> Let me use a pseudo language and define XComponent's constructor as follows:
> 
> XComponents::XComponents(Title text, Icon icon) {
>   this.text = text;
>   this.button = new XButton(icon);
>   ...
> }
> 
> Here, |icon| is *re-distributed*.
> 
> In the HTML world, this corresponds to the following:
> 
> The usage of the <x-component> element:
>   <x-component>
>     <my-title>Hello World</my-title>
>     <my-icon>My Icon</my-icon>
>   </x-component>
> 
> XComponent's shadow tree is:
> 
>   <shadow-root>
>     <content select="my-title"></content>   <!-- (1) -->
>     <x-button>
>       <content select="my-icon"></content>  <!-- (2) -->
>     </x-button>
>   </shadow-root>

I have a question as to whether x-button then has to select which nodes to use 
or not.  In this particular example at least, x-button will put every node 
distributed into (2) into a single insertion point in its shadow DOM.

If we don't have to support filtering of nodes at re-distribution time, then 
the whole discussion of re-distribution is almost a moot because we can just 
treat a content element like any other element that gets distributed along with 
its distributed nodes.

- R. Niwa




Re: :host pseudo-class

2015-04-27 Thread Tab Atkins Jr.
On Sat, Apr 25, 2015 at 9:32 AM, Anne van Kesteren  wrote:
> I don't understand why :host is a pseudo-class rather than a
> pseudo-element. My mental model of a pseudo-class is that it allows
> you to match an element based on a boolean internal slot of that
> element. :host is not that since e.g. * does not match :host as I
> understand it. That seems super weird. Why not just use ::host?
>
> Copying WebApps since this affects everyone caring about Shadow DOM.

Pseudo-elements are things that aren't DOM elements, but are created
by Selectors for the purpose of CSS to act like elements.

The host element is a real DOM element.  It just has special selection
behavior from inside its own shadow root, for practical reasons: there
are good use-cases for being able to style your host, but also a lot
for *not* doing so, and so mixing the host into the normal set of
elements leads to a large risk of accidentally selecting the host.
This is particularly true for things like class selectors; since the
*user* of the component is the one that controls what classes/etc are
set on the host element, it's very plausible that a class used inside
the shadow root for internal purposes could accidentally collide with
one used by the outer page for something completely different, and
cause unintentional styling issues.

Making the host element present in the shadow tree, but featureless
save for the :host and :host-context() pseudo-classes, was the
compromise that satisfies all of the use-cases adequately.

It's possible we could change how we define the concept of
"pseudo-element" so that it can sometimes refer to real elements that
just aren't ordinarily accessible, but I'm not sure that's necessary
or desirable at the moment.

On Sun, Apr 26, 2015 at 8:37 PM, L. David Baron  wrote:
> We haven't really used (in the sense of shipping across browsers)
> pseudo-elements before for things that are both tree-like (i.e., not
> ::first-letter, ::first-line, or ::selection) and not leaves of the
> tree.  (Gecko doesn't implement any pseudo-elements that can have
> other selectors to their right.  I'm not sure if other engines
> have.)
>
> I'd be a little worried about ease of implementation, and doing so
> without disabling a bunch of selector-related optimizations that
> we'd rather have.
>
> At some point we probably do want to have this sort of
> pseudo-element, but it's certainly adding an additional dependency
> on to this spec.

The ::shadow and ::content pseudo-elements are this way (tree-like,
and not leaves).  We implement them in Blink currently, at least to
some extent.  (Not sure if it's just selector tricks, or if we do it
"properly" so that, for example, inheritance works.)

On Mon, Apr 27, 2015 at 1:06 AM, Anne van Kesteren  wrote:
> Thanks, that example has another confusing bit, ::content. As far as I
> can tell ::content is not actually an element that ends up in the
> tree. It would make more sense for that to be a named-combinator of
> sorts. (And given ::content allowing selectors on the right hand, it's
> now yet more unclear why :host is not ::host.)

It's a (pseudo-)element in the tree, it's just required to not
generate a box.  Having ::content (and ::shadow) be pseudo-elements
lets you do a few useful things: you can use other combinators (child
*or* descendant, depending on what you need) and you can set inherited
properties to cascade down to all the children (especially useful for,
for example, setting 'color' of direct text node children, which can
appear in a shadow root or in a <content> with no select='', and can't
be targeted by a selector otherwise).  I did originally use
combinators for this, but they're less useful for the reasons just
listed.

(This was explicitly discussed in a telcon, when I noted that
sometimes you want to select the "top-level" things in a shadow tree
or distribution list, and sometimes all the things.  I had proposed
two versions of each combinator, or an argument to a named combinator
(like /shadow >/ versus /shadow >>/), but someone else (I think it was
fantasai?) suggested using a pseudo-element instead, and it turned out
to be a pretty good suggestion.)

~TJ



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 26, 2015, at 11:05 PM, Anne van Kesteren  wrote:
> 
> On Sat, Apr 25, 2015 at 10:49 PM, Ryosuke Niwa  wrote:
>> One major drawback of this API is computing insertionList is expensive
>> because we'd have to either (where n is the number of nodes in the shadow
>> DOM):
>> 
>> Maintain an ordered list of insertion points, which results in O(n)
>> algorithm to run whenever a content element is inserted or removed.
>> Lazily compute the ordered list of insertion points when `distribute`
>> callback is about to get called in O(n).
> 
> The alternative is not exposing it and letting developers get hold of
> the slots. The rationale for letting the browser do it is because you
> need the slots either way and the browser should be able to optimize
> better.

I don’t think that’s true.  If you’re creating a custom element, you’re pretty 
much in control of what goes into your shadow DOM.  If I’m writing any kind of 
component that creates a shadow DOM, I’d just keep references to all my 
insertion points instead of querying them each time I need to distribute nodes.

Another important use case to consider is adding insertion points given the 
list of nodes to distribute.  For example, you may want to “wrap” each node you 
distribute in an element.  That requires the component author to know the 
number of nodes to distribute upfront and then dynamically create as many 
insertion points as needed.
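For illustration, the "wrap each node" use case might look like this under such an API. The object model here is entirely hypothetical: the wrapper and insertion-point records are stand-ins for an element and a content element, not real DOM interfaces.

```javascript
// Hypothetical sketch: one insertion point per distributed node, so each
// node can be wrapped (e.g. in an <li>). Requires knowing the node count
// up front, which is the point being made above.
function wrapEachNode(nodesToDistribute) {
  return nodesToDistribute.map(node => ({
    tag: 'li',                               // the wrapper element
    children: [{ distributed: [node] }],     // stand-in for a <content> point
  }));
}

const wrappers = wrapEachNode([{ id: 'a' }, { id: 'b' }, { id: 'c' }]);
console.log(wrappers.length); // 3 — as many insertion points as nodes
```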

>> If we wanted to allow non-direct child descendent (e.g. grand child node) of
>> the host to be distributed, then we'd also need an O(m) algorithm where m is
>> the number of nodes under the host element.  It might be okay to keep the
>> current restriction that only direct children of the shadow host can be
>> distributed into insertion points but I can't think of a good reason as to why such a
>> restriction is desirable.
> 
> So you mean that we'd turn distributionList into a subtree? I.e. you
> can pass all descendants of a host element to add()? I remember Yehuda
> making the point that this was desirable to him.

Consider a table-chart component which converts a table element into a chart with 
each column represented as a line graph in the chart. The user of this 
component will wrap a regular table element with a table-chart element to 
construct a shadow DOM:

```html
<table-chart>
  <table>
    ...
      <td>253 ± 5</td>
    ...
  </table>
</table-chart>
```

For people who like the is attribute on custom elements, pretend it's
```html
<table is="table-chart">
  ...
    <td>253 ± 5</td>
  ...
</table>
```

Now, suppose I wanted to show a tooltip with the value in the chart. One 
obvious way to accomplish this would be distributing the td corresponding to 
the currently selected point into the tooltip. But this requires us allowing 
non-direct child nodes to be distributed.


> The other thing I would like to explore is what an API would look like
> that does the subclassing as well. Even though we deferred that to v2
> I got the impression talking to some folks after the meeting that
> there might be more common ground than I thought.

For the slot approach, we can model the act of filling a slot as if attaching a 
shadow root to the slot and the slot content going into the shadow DOM for both 
content distribution and filling of slots by subclasses.

Now we can do this in either of the following two strategies:
1. Superclass wants to see a list of slot contents from subclasses.
2. Each subclass "overrides" previous distribution done by superclass by 
inspecting insertion points in the shadow DOM and modifying them as needed.
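Strategy 2 could be sketched roughly like this. The object model is hypothetical (insertion points are plain records, and the superclass/subclass policies are made up for the example), but it shows the "override" shape: the superclass distributes first, then the subclass inspects the resulting insertion points and rewrites them.

```javascript
// Rough sketch of strategy 2: subclass overrides the superclass's distribution.
function superclassDistribute(children) {
  // The superclass sends everything to a single insertion point.
  return [{ owner: 'super', distributed: [...children] }];
}

function subclassOverride(insertionPoints, children) {
  // The subclass claims 'my-icon' nodes for its own insertion point and
  // leaves the rest where the superclass put them.
  const icons = children.filter(n => n.tag === 'my-icon');
  for (const ip of insertionPoints) {
    ip.distributed = ip.distributed.filter(n => n.tag !== 'my-icon');
  }
  return [...insertionPoints, { owner: 'sub', distributed: icons }];
}

const children = [{ tag: 'my-title' }, { tag: 'my-icon' }];
const points = subclassOverride(superclassDistribute(children), children);
console.log(points.map(p => [p.owner, p.distributed.length]));
// [ [ 'super', 1 ], [ 'sub', 1 ] ]
```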

- R. Niwa




Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Ryosuke Niwa

> On Apr 27, 2015, at 11:47 AM, Steve Orvell  wrote:
> 
> Here's a minimal and hopefully simple proposal that we can flesh out if this 
> seems like an interesting api direction:
> 
> https://gist.github.com/sorvell/e201c25ec39480be66aa 
> 

It seems like with this API, we’d have to make O(n^k) calls where n is the 
number of distribution candidates and k is the number of insertion points, and 
that’s bad.  Or am I misunderstanding your design?

> 
> We keep the currently spec'd distribution algorithm/timing but remove 
> `select` in favor of an explicit selection callback.

What do you mean by keeping the currently spec’ed timing?  We certainly can’t 
do it at “style resolution time” because style resolution is an implementation 
detail that we shouldn’t expose to the Web just like GC and its timing is an 
implementation detail in JS.  Besides that, avoiding style resolution is a very 
important optimization and spec’ing when it happens will prevent us from 
optimizing it away in the future.

Do you mean instead that we synchronously invoke this algorithm when a child 
node is inserted or removed from the host?  If so, that’ll impose unacceptable 
runtime cost for DOM mutations.

I think the only timing UA can support by default will be at the end of micro 
task or at UA-code / user-code boundary as done for custom element lifecycle 
callbacks at the moment.

> The user simply returns true if the node should be distributed to the given 
> insertion point.
> 
> Advantages:
>  * the callback can be synchronous-ish because it acts only on a specific 
> node when possible. Distribution then won't break existing expectations since 
> `offsetHeight` is always correct.

“always correct” is somewhat stronger statement than I would state here since 
during UA calls these shouldDistributeToInsertionPoint callbacks, we'll 
certainly see transient offsetHeight values.

- R. Niwa



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Steve Orvell
Here's a minimal and hopefully simple proposal that we can flesh out if
this seems like an interesting api direction:

https://gist.github.com/sorvell/e201c25ec39480be66aa

We keep the currently spec'd distribution algorithm/timing but remove
`select` in favor of an explicit selection callback. The user simply
returns true if the node should be distributed to the given insertion point.

Advantages:
 * the callback can be synchronous-ish because it acts only on a specific
node when possible. Distribution then won't break existing expectations
since `offsetHeight` is always correct.
 * can implement either the currently spec'd `select` mechanism or the
proposed `slot` mechanism
 * can easily evolve to support distribution to isolated roots by using a
pure function that gets read only node 'proxies' as arguments.

Disadvantages:
 * cannot re-order the distribution
 * cannot distribute sub-elements
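A rough sketch of how a UA-side distribution pass might drive such a callback follows. The data model is plain objects rather than DOM nodes, `shouldDistribute` stands in for the proposed shouldDistributeToInsertionPoint, and "first matching insertion point wins" is an assumption of this sketch, not something the proposal pins down.

```javascript
// Sketch of the proposed callback-driven distribution: the UA asks the user
// callback about each (candidate, insertion point) pair — O(n·k) calls.
function distributeWithCallback(candidates, insertionPoints, shouldDistribute) {
  const result = new Map(insertionPoints.map(ip => [ip, []]));
  for (const node of candidates) {          // n candidates ...
    for (const ip of insertionPoints) {     // ... times k insertion points
      if (shouldDistribute(node, ip)) {     // user returns true/false
        result.get(ip).push(node);
        break;                              // first matching point wins
      }
    }
  }
  return result;
}

// Example: emulate `select`-style matching purely through the callback.
const points = [{ select: 'icon' }, { select: 'title' }];
const nodes = [{ tag: 'title' }, { tag: 'icon' }, { tag: 'caption' }];
const dist = distributeWithCallback(nodes, points,
  (node, ip) => node.tag === ip.select);
console.log(dist.get(points[0]).length); // 1 — only the icon matched
```

Note that the un-matched `caption` node simply ends up distributed nowhere, which mirrors how a non-matching child falls out of the composed tree.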

On Sat, Apr 25, 2015 at 1:58 PM, Ryosuke Niwa  wrote:

>
> > On Apr 25, 2015, at 1:17 PM, Olli Pettay  wrote:
> >
> > On 04/25/2015 09:28 AM, Anne van Kesteren wrote:
> >> On Sat, Apr 25, 2015 at 12:17 AM, Ryosuke Niwa  wrote:
> >>> In today's F2F, I've got an action item to come up with a concrete
> workable
> >>> proposal for imperative API.  I had a great chat about this afterwards
> with
> >>> various people who attended F2F and here's a summary.  I'll continue
> to work
> >>> with Dimitri & Erik to work out details in the coming months (our
> deadline
> >>> is July 13th).
> >>>
> >>> https://gist.github.com/rniwa/2f14588926e1a11c65d3
> >>
> >> I thought we came up with something somewhat simpler that didn't
> >> require adding an event or adding remove() for that matter:
> >>
> >>   https://gist.github.com/annevk/e9e61801fcfb251389ef
> >
> >
> > That is pretty much exactly how I was thinking the imperative API to
> work.
> > (well, assuming the errors in the example are fixed)
> >
> > An example explaining how this all works in case of nested shadow trees
> would be good.
> > I assume the more nested shadow tree just may get some nodes, which were
> already distributed, in the distributionList.
>
> Right, that was the design we discussed.
>
> > How does distribute() behave? Does it end up invoking distribution
> in all the nested shadow roots or only in the callee?
>
> Yes, that's the only reason we need distribute() in the first place.  If
> we didn't have to care about redistribution, simply exposing methods to
> insert/remove distributed nodes on a content element would be sufficient.
>
> > Should distribute callback be called automatically at the end of the
> microtask if there has been relevant[1] DOM mutations since the last
> > manual call to distribute()? That would make the API a bit simpler to
> use, if one wouldn't have to use MutationObservers.
>
> That's a possibility.  It could be an option to specify as well.  But
> there might be components that are not interested in updating distributed
> nodes for the sake of performance for example.  I'm not certain forcing
> everyone to always update distributed nodes is necessarily desirable given
> the lack of experience with an imperative API for distributing nodes.
>
> > [1] Assuming we want to distribute only direct children, then any child
> list change or any attribute change in the children
> > might cause distribution() automatically.
>
> I think that's a big if now that we've gotten rid of "select" attribute
> and multiple generations of shadow DOM.  As far as I could recall, one of
> the reasons we only supported distributing direct children was so that we
> could implement "select" attribute and multiple generations of shadow DOM.
>  If we wanted, we could always impose such a restriction in a declarative
> syntax and inheritance mechanism we add in v2 since those v2 APIs are
> supposed to build on top of this imperative API.
>
> Another big if is whether we even need to let each shadow DOM select nodes
> to redistribute.  If we don't need to support filtering distributed nodes
> in insertion points for re-distribution (i.e. we either distribute
> everything under a given content element or nothing), then we don't need
> all of this redistribution mechanism baked into the browser and the model
> where we just have insert/remove on content element will work.
>
> - R. Niwa
>
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Olli Pettay

On 04/27/2015 02:11 AM, Hayato Ito wrote:

I think Polymer folks will answer the use case of re-distribution.



I wasn't questioning the need for re-distribution. I was questioning the need 
to distribute grandchildren etc -
and even more, I was wondering what kind of algorithm would be sane in that 
case.

And passing arbitrary elements that are neither in the document nor in a 
shadow DOM to be distributed would be hard too.




So let me just show a good analogy so that everyone can understand intuitively 
what re-distribution *means*.
Let me use a pseudo language and define XComponent's constructor as follows:

XComponents::XComponents(Title text, Icon icon) {
   this.text = text;
   this.button = new XButton(icon);
   ...
}

Here, |icon| is *re-distributed*.

In the HTML world, this corresponds to the following:

The usage of  element:
   
 Hello World
 My Icon
   

XComponent's shadow tree is:

   
 
 
   

Re-distribution enables the constructor of XComponent to pass the given 
parameter on to another component's constructor, XButton's constructor.
If we don't have re-distribution, XComponent can't create XButton using the 
dynamic information.

XComponents::XComponents(Title text, Icon icon) {
   this.text = text;
   // this.button = new xbutton(icon);  // We can't! We don't have redistribution!
   this.button = new xbutton("icon.png");  // XComponent has to do "hard-coding".
   // Please allow me to pass |icon| to x-button!
   ...
}
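Hayato's analogy can also be written as runnable JavaScript. The sketch below is a plain-object toy model (the class names and the `distributedNodes` property are made up for illustration, not the real Shadow DOM API): the outer component forwards the node it received into the inner component's insertion point.

```javascript
// Toy model of re-distribution: the outer component forwards a node it
// received ("icon") into an inner component's insertion point, instead of
// hard-coding the inner component's content.
class XButton {
  constructor(icon) {
    this.distributedNodes = [icon]; // inner component's insertion point
  }
}

class XComponent {
  constructor(text, icon) {
    this.text = text;
    this.button = new XButton(icon); // |icon| is re-distributed here
  }
}

const icon = { tag: 'my-icon' };
const component = new XComponent('Hello World', icon);

// The very same node ends up distributed into the inner component:
console.log(component.button.distributedNodes[0] === icon); // true
```

Without re-distribution, `XComponent` would have to construct the inner button's content itself, losing the dynamic information the caller provided.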


On Sun, Apr 26, 2015 at 12:23 PM Olli Pettay <o...@pettay.fi> wrote:

On 04/25/2015 01:58 PM, Ryosuke Niwa wrote:
 >
 >> On Apr 25, 2015, at 1:17 PM, Olli Pettay <o...@pettay.fi> wrote:
 >>
 >> On 04/25/2015 09:28 AM, Anne van Kesteren wrote:
 >>> On Sat, Apr 25, 2015 at 12:17 AM, Ryosuke Niwa <rn...@apple.com> wrote:
  In today's F2F, I've got an action item to come up with a concrete 
workable proposal for imperative API.  I had a great chat about this
  afterwards with various people who attended F2F and here's a summary.  
I'll continue to work with Dimitri & Erik to work out details in the
  coming months (our deadline is July 13th).
 
  https://gist.github.com/rniwa/2f14588926e1a11c65d3
 >>>
 >>> I thought we came up with something somewhat simpler that didn't 
require adding an event or adding remove() for that matter:
 >>>
 >>> https://gist.github.com/annevk/e9e61801fcfb251389ef
 >>
 >>
 >> That is pretty much exactly how I was thinking the imperative API to 
work. (well, assuming errors in the example fixed)
 >>
 >> An example explaining how this all works in case of nested shadow trees 
would be good. I assume the more nested shadow tree just may get some
 >> nodes, which were already distributed, in the distributionList.
 >
 > Right, that was the design we discussed.
 >
 >> How does the distribute() behave? Does it end up invoking distribution 
in all the nested shadow roots or only in the callee?
 >
 > Yes, that's the only reason we need distribute() in the first place.  If 
we didn't have to care about redistribution, simply exposing methods to
 > insert/remove distributed nodes on content element is sufficient.
 >
 >> Should distribute callback be called automatically at the end of the 
microtask if there has been relevant[1] DOM mutations since the last manual
 >> call to distribute()? That would make the API a bit simpler to use, if 
one wouldn't have to use MutationObservers.
 >
 > That's a possibility.  It could be an option to specify as well.  But 
there might be components that are not interested in updating distributed
 > nodes for the sake of performance for example.  I'm not certain forcing 
everyone to always update distributed nodes is necessarily desirable given
 > the lack of experience with an imperative API for distributing nodes.
 >
 >> [1] Assuming we want to distribute only direct children, then any child 
list change or any attribute change in the children might cause
 >> distribution() automatically.
 >
 > I think that's a big if now that we've gotten rid of "select" attribute 
and multiple generations of shadow DOM.

It is not clear to me at all how you would handle the case when a node has 
several ancestors with shadow trees, and each of those wants to distribute
the node to some insertion point.
Also, what is the use case for distributing non-direct descendants?




 >  As far as I could recall, one of
 > the reasons we only supported distributing direct children was so that we could 
implement "select" attribute and multiple generations of shadow
 > DOM.   If we wanted, we could always impose such a restriction in a 
declarative syntax and inheritance mechanism we add in v2 since those v2 APIs
 > are supposed to build on top of this imperative API.
 >
 > Another big if is whether we even need to let each shadow DOM select nodes
 > to redistribute.  If we don't need to support filtering distributed nodes
 > in insertion points for re-distribution (i.e. we either distribute
 > everything under a given content element or nothing), then we don't need
 > all of this redistribution mechanism baked into the browser and the model
 > where we just have insert/remove on content element will work.

Re: PSA: publishing new WD of Push API on April 30

2015-04-27 Thread Michael van Ouwerkerk
Looks good to me. While we are continuing work on data payload encryption
and alignment with the IETF Web Push Protocol, it makes sense to refresh
the WD snapshot.

Regards,

Michael


On Mon, Apr 27, 2015 at 2:39 PM, Arthur Barstow 
wrote:

> This is an announcement of the intent to publish a new WD of Push API on
> April 30 using the following document as the basis:
>
>   
>
> (The sequence diagram is not found when the above document is loaded but
> the diagram will be available in the WD version.)
>
> If anyone has any major concerns with this proposal, please speak up
> immediately; otherwise the WD will be published as proposed.
>
> -Thanks, ArtB
>
>


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Anne van Kesteren
On Mon, Apr 27, 2015 at 3:41 PM, Matthew Robb  wrote:
> I know this isn't the biggest deal but I think naming the function
> distribute is highly suggestive, why not just expose this as
> `childListChangedCallback` ?

Because that doesn't match the actual semantics. The callback is
invoked once distribute() is invoked by the web developer or
distribute() has been invoked on a composed ancestor ShadowRoot and
all composed ancestor ShadowRoots have already had their callback
run. (Note that the distribute callback and the distribute method are
different things.)

Since the distribute callback is in charge of distribution it does in
fact make sense to call it such I think.
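Since the distribute callback and the distribute() method are easy to conflate, here is a toy model (plain objects; the class and method names are assumptions, not a specced API) of the ordering described above: invoking distribute() runs a root's own callback before the callbacks of its composed descendant shadow roots.

```javascript
// Hypothetical model: distribute() invoked on a shadow root runs that
// root's distribute callback first, then the callbacks of composed
// descendant shadow roots, so ancestors always go before descendants.
const callbackOrder = [];

class ShadowRootModel {
  constructor(name, nestedRoots = []) {
    this.name = name;
    this.nestedRoots = nestedRoots;
  }
  distribute() {
    callbackOrder.push(this.name);      // this root's callback runs
    for (const root of this.nestedRoots) {
      root.distribute();                // then the nested roots' callbacks
    }
  }
}

const inner = new ShadowRootModel('inner');
const outer = new ShadowRootModel('outer', [inner]);
outer.distribute();

console.log(callbackOrder.join(' -> ')); // outer -> inner
```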


-- 
https://annevankesteren.nl/



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Matthew Robb
I know this isn't the biggest deal but I think naming the function
distribute is highly suggestive, why not just expose this as
`childListChangedCallback` ?


- Matthew Robb

On Mon, Apr 27, 2015 at 4:34 AM, Anne van Kesteren  wrote:

> On Mon, Apr 27, 2015 at 10:23 AM, Justin Fagnani
>  wrote:
> > A separate hidden tree per class sounds very much like multiple
> generations
> > of shadow trees, and we just killed that...
>
> We "killed" it for v1, not indefinitely. As I already said, based on
> my post-meeting conversations it might not have been as contentious as
> I thought. It's mostly the specifics. I haven't quite wrapped my head
> around those specifics, but the way Gecko implemented  (which
> does not match the specification or Chrome) seemed to be very similar
> to what Apple wanted.
>
>
> --
> https://annevankesteren.nl/
>
>


PSA: publishing new WD of Push API on April 30

2015-04-27 Thread Arthur Barstow
This is an announcement of the intent to publish a new WD of Push API on 
April 30 using the following document as the basis:


  

(The sequence diagram is not found when the above document is loaded but 
the diagram will be available in the WD version.)


If anyone has any major concerns with this proposal, please speak up 
immediately; otherwise the WD will be published as proposed.


-Thanks, ArtB



Re: Limiting the scope of user permissions

2015-04-27 Thread Frederik Braun
You can tackle bits of this problem by assigning different subdomains
either per-permission set or per-client.

AFAIU, the settings should be stored independently.

With more and more permissions coming to the web, the numbers of
combinations might grow exponentially though.

On 27.04.2015 12:50, Andy Earnshaw wrote:
> I work for an adserving company, where many third-party creatives are
> served from the same CDN domain.  One of the things we're starting to
> see now is more use of APIs that require permissions, such as
> Geolocation and, since the recent Chrome 42 release, Push Notifications.
> 
> These APIs are great, though I'm particularly a fan of the idea of ads
> trying to "re-engage with users" (the words of one of our clients
> wanting to use these APIs) via push notifications, it's a bit of a scary
> thought.  A user's acceptance or disallowing of permissions presents a
> new problem for us, though.  One third-party creative might be
> uninteresting enough to a specific user such that the user immediately
> chooses to disallow permissions for a specific API, resulting in that
> user never being prompted for those permissions again even if another
> creative they see interests them enough that they would otherwise allow
> them.  Conversely, a user might give their permissions for one creative,
> unwittingly giving the same permissions to all third-party creatives
> from that point on.
> 
> A specific example:
> 
> 1. User visits a website with an ad placement served from our CDN.
> 2. User interacts with the ad, which wants to show him offers from local
> businesses close to him, asking to locate him via the Geolocation API.
> 3. User disallows access to Geolocation API and stops interacting with
> the ad.
> 4. Later, user visits another website with a different ad placement
> served from the same CDN.
> 5. User interacts with the ad, which shows him a map with all the
> advertiser's retail stores as markers on the map.
> 6. User repeatedly clicks the "locator" button on the map with
> frustration, but it has no effect because he already denied permissions
> to the Geolocation API earlier.
> 
> I hope this serves as an appropriate example for the problem.  Similar
> cases could occur with, for example, sites like CodePen or JSFiddle,
> where demonstrations cannot rely on user prompting to actually work. 
> Basically, we need a way of protecting advertisers from each other,
> perhaps by scoping permissions to an origin and path instead of just the
> origin.  I'm not sure how this would work, the best idea I can come up
> with is using a custom HTTP header, for example:
> 
> X-Permissions-Scope: path | host
> 
> I realise that this is outside the scope of the specification and that
> the spec only goes as far as making a recommendation[1] to use the
> origin of the document or worker when making security decisions, but
> this seemed like the best place to start a discussion about it.
> 
> [1]:https://w3c.github.io/permissions/#h-status-of-a-permission




Re: Limiting the scope of user permissions

2015-04-27 Thread Anne van Kesteren
On Mon, Apr 27, 2015 at 12:50 PM, Andy Earnshaw  wrote:
> I realise that this is outside the scope of the specification and that the
> spec only goes as far as making a recommendation to use the origin of the
> document or worker when making security decisions, but this seemed like the
> best place to start a discussion about it.

You probably want to contribute your use cases here:

  https://github.com/w3c/webappsec/issues/206


-- 
https://annevankesteren.nl/



Limiting the scope of user permissions

2015-04-27 Thread Andy Earnshaw
I work for an adserving company, where many third-party creatives are
served from the same CDN domain.  One of the things we're starting to see
now is more use of APIs that require permissions, such as Geolocation and,
since the recent Chrome 42 release, Push Notifications.

These APIs are great, though I'm particularly a fan of the idea of ads
trying to "re-engage with users" (the words of one of our clients wanting
to use these APIs) via push notifications, it's a bit of a scary thought.
A user's acceptance or disallowing of permissions presents a new problem
for us, though.  One third-party creative might be uninteresting enough to
a specific user such that the user immediately chooses to disallow
permissions for a specific API, resulting in that user never being prompted
for those permissions again even if another creative they see interests
them enough that they would otherwise allow them.  Conversely, a user might
give their permissions for one creative, unwittingly giving the same
permissions to all third-party creatives from that point on.

A specific example:

1. User visits a website with an ad placement served from our CDN.
2. User interacts with the ad, which wants to show him offers from local
businesses close to him, asking to locate him via the Geolocation API.
3. User disallows access to Geolocation API and stops interacting with the
ad.
4. Later, user visits another website with a different ad placement served
from the same CDN.
5. User interacts with the ad, which shows him a map with all the
advertiser's retail stores as markers on the map.
6. User repeatedly clicks the "locator" button on the map with frustration,
but it has no effect because he already denied permissions to the
Geolocation API earlier.

I hope this serves as an appropriate example for the problem.  Similar
cases could occur with, for example, sites like CodePen or JSFiddle, where
demonstrations cannot rely on user prompting to actually work.  Basically,
we need a way of protecting advertisers from each other, perhaps by scoping
permissions to an origin and path instead of just the origin.  I'm not sure
how this would work, the best idea I can come up with is using a custom
HTTP header, for example:

X-Permissions-Scope: path | host

I realise that this is outside the scope of the specification and that the
spec only goes as far as making a recommendation[1] to use the origin of
the document or worker when making security decisions, but this seemed like
the best place to start a discussion about it.

[1]:https://w3c.github.io/permissions/#h-status-of-a-permission
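The origin-plus-path idea can be sketched as the key under which a user agent might store permission decisions. Everything here is hypothetical (the header and its values are this thread's proposal, not an implemented mechanism):

```javascript
// Hypothetical sketch: compute the key under which a permission decision
// would be stored, honoring a (proposed, not real) X-Permissions-Scope
// header value of either "host" or "path".
function permissionKey(url, scope) {
  const u = new URL(url);
  return scope === 'path' ? u.origin + u.pathname : u.origin;
}

// Under host scoping, two creatives on the same CDN share one decision:
const a = permissionKey('https://cdn.example/creativeA/ad.html', 'host');
const b = permissionKey('https://cdn.example/creativeB/ad.html', 'host');
console.log(a === b); // true

// Under path scoping, each creative gets its own decision:
const c = permissionKey('https://cdn.example/creativeA/ad.html', 'path');
const d = permissionKey('https://cdn.example/creativeB/ad.html', 'path');
console.log(c === d); // false
```

Under path scoping, the user who denied Geolocation to creative A in the example above would still be prompted by creative B.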


RE: Why is querySelector much slower?

2015-04-27 Thread François REMY
Not sure this is a public-webapps issue, but I'm pretty sure the reason is that 
the return values of << getElement*By*(...) >> are cached by the browser, which 
means that at some point in the loop you end up not doing the work at all, while 
you probably do it every single time for << querySelector >>, which cannot 
return a cached result.



From: curvedm...@gmail.com
Date: Mon, 27 Apr 2015 16:57:23 +0800
To: public-webapps@w3.org
Subject: Why is querySelector much slower?

Intuitively, querySelector('.class') only needs to find the first matching 
node, whereas getElementsByClassName('class')[0] needs to find all matching 
nodes and then return the first. The former should be a lot quicker than the 
latter. Why isn't that the case?
See http://jsperf.com/queryselectorall-vs-getelementsbytagname/119 for the test
I know querySelectorAll is slow because of the static nature of the returned 
NodeList, but this shouldn't be an issue for querySelector. 
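The caching François describes can be illustrated with a simplified model in plain JavaScript. This is a toy (not real engine internals): the live list lazily populates a cached match list and only rescans after an invalidation, while the query-style lookup scans on every call.

```javascript
// Toy model (not real engine internals) of why a live, lazily-updated
// collection can beat a fresh query in a hot loop: the live list caches
// its matches and only rescans after a mutation invalidates the cache.
class LiveListModel {
  constructor(nodes, className) {
    this.nodes = nodes;        // stand-in for the document
    this.className = className;
    this.cache = null;         // null means "needs repopulating"
    this.scans = 0;
  }
  invalidate() { this.cache = null; }   // would be called on DOM mutation
  item(i) {
    if (this.cache === null) {          // lazy (re)population
      this.scans++;
      this.cache = this.nodes.filter(n => n.className === this.className);
    }
    return this.cache[i];
  }
}

let queryScans = 0;
function querySelectorModel(nodes, className) {
  queryScans++;                         // no cache: every call scans again
  for (const n of nodes) if (n.className === className) return n;
  return null;
}

const doc = [{ className: 'a' }, { className: 'b' }];
const live = new LiveListModel(doc, 'b');
for (let i = 0; i < 1000; i++) live.item(0);
for (let i = 0; i < 1000; i++) querySelectorModel(doc, 'b');
console.log(live.scans, queryScans); // 1 1000
```

The flip side, discussed elsewhere in this thread, is that keeping the cache valid means every DOM mutation has to invalidate live lists, which has its own cost.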
 

Re: Exposing structured clone as an API?

2015-04-27 Thread Anne van Kesteren
On Sat, Apr 25, 2015 at 12:41 AM, Brendan Eich  wrote:
> Step where you need to, to avoid falling over :-P.

Fair. I filed

  https://www.w3.org/Bugs/Public/show_bug.cgi?id=28566

on adding a structured clone API.


> The problems with generalized/extensible clone are clear but we have
> structured clone already. It is based on a hardcoded type-case statement. It
> could be improved a bit without trying to solve all possible problems, IMHO.

Dmitry and I worked a bit on

  https://github.com/dslomov-chromium/ecmascript-structured-clone

at some point to clean up things and integrate it with ECMAScript, but
it hasn't really gone anywhere so far.
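Until a dedicated API exists, one way to reach the structured clone algorithm from script is to round-trip a value through a MessageChannel, whose postMessage already performs a structured clone. A sketch (the function name is made up, and the clone arrives asynchronously):

```javascript
// Sketch: clone a value with the structured clone algorithm by posting it
// through a MessageChannel; ev.data delivered on the other port is a
// structured clone of the posted value.
function structuredCloneViaChannel(value) {
  return new Promise((resolve) => {
    const { port1, port2 } = new MessageChannel();
    port2.onmessage = (ev) => {
      port1.close();
      port2.close();
      resolve(ev.data);
    };
    port1.postMessage(value);
  });
}

structuredCloneViaChannel({ nested: { list: [1, 2, 3] } }).then((clone) => {
  console.log(clone.nested.list[2]); // same data, but a distinct object graph
});
```

The asynchrony (and the inability to customize cloning of host objects) is part of why a dedicated, synchronous API keeps coming up.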


-- 
https://annevankesteren.nl/



Why is querySelector much slower?

2015-04-27 Thread Glen Huang
Intuitively, querySelector('.class') only needs to find the first matching 
node, whereas getElementsByClassName('class')[0] needs to find all matching 
nodes and then return the first. The former should be a lot quicker than the 
latter. Why isn't that the case?

See http://jsperf.com/queryselectorall-vs-getelementsbytagname/119 
 for the test

I know querySelectorAll is slow because of the static nature of the returned 
NodeList, but this shouldn't be an issue for querySelector.

Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Anne van Kesteren
On Mon, Apr 27, 2015 at 10:23 AM, Justin Fagnani
 wrote:
> A separate hidden tree per class sounds very much like multiple generations
> of shadow trees, and we just killed that...

We "killed" it for v1, not indefinitely. As I already said, based on
my post-meeting conversations it might not have been as contentious as
I thought. It's mostly the specifics. I haven't quite wrapped my head
around those specifics, but the way Gecko implemented  (which
does not match the specification or Chrome) seemed to be very similar
to what Apple wanted.


-- 
https://annevankesteren.nl/



[Bug 28564] New: [Shadow]: Event model

2015-04-27 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28564

Bug ID: 28564
   Summary: [Shadow]: Event model
   Product: WebAppsWG
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P2
 Component: Component Model
  Assignee: dglaz...@chromium.org
  Reporter: ann...@annevk.nl
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org
Depends on: 20017, 20247, 23887, 25458, 26892, 28558, 28560
Blocks: 28552

This is a bug to figure out the overall changes needed to the DOM event
dispatch algorithm for shadow DOM.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Justin Fagnani
On Mon, Apr 27, 2015 at 1:01 AM, Anne van Kesteren  wrote:

> On Mon, Apr 27, 2015 at 9:25 AM, Justin Fagnani
>  wrote:
> > I really don't think the platform needs to do anything to support
> > subclassing since it can be done so easily at the library level now that
> > multiple generations of shadow roots are gone. As long as a subclass and
> > base class can cooperate to produce a single shadow root with insertion
> > points, the platform doesn't need to know how they did it.
>
> So a) this is only if they cooperate


In reality, base and subclass are going to have to cooperate. There's no
style or DOM isolation between the two anymore, and lifecycle callbacks,
templating, and data binding already make them pretty entangled.


> and the superclass does not want
> to keep its tree and distribution logic hidden


A separate hidden tree per class sounds very much like multiple generations
of shadow trees, and we just killed that... This is one of my concerns
about the inheritance part of the slots proposal: it appeared to give new
significance to  tags which essentially turn them into multiple
shadow roots, just without the style isolation.


> and b) if we want to
> eventually add declarative functionality we'll need to explain it
> somehow. Seems better that we know upfront how that will work.
>

I think this is a case where the frameworks would lead and the platform, if
it ever decided to, could integrate the best approach - much like data
binding.

I imagine that frameworks will create declarative forms of distribution and
template inheritance that work something like the current system, or the
slots proposal (or other template systems with inheritance, like Jinja). I
don't think a platform-based solution would even be faster in the common
case, because the frameworks can pre-compute the concrete template
(including distribution points and bindings) from the entire inheritance
hierarchy up front, and stamp out the same thing per instance.

Cheers,
  Justin



>
> --
> https://annevankesteren.nl/
>


Re: :host pseudo-class

2015-04-27 Thread Anne van Kesteren
On Mon, Apr 27, 2015 at 9:03 AM, Rune Lillesveen  wrote:
> On Mon, Apr 27, 2015 at 6:22 AM, Anne van Kesteren  wrote:
>> Would they match against elements in the host's tree or the shadow
>> tree? I don't see anything in the specification about this.
>
> They would match elements in the shadow tree.
>
> A typical use case is to style elements in the shadow tree based on
> its host's attributes:
>
> 
> 
> 
>
> In the shadow tree for :
>
> 
> :host([disabled]) ::content custom-child { color: #444 }
> :host([disabled]) input { border-color: #ccc }
> 
> 
> 

Thanks, that example has another confusing bit, ::content. As far as I
can tell ::content is not actually an element that ends up in the
tree. It would make more sense for that to be a named-combinator of
sorts. (And given that ::content allows selectors on its right-hand side, it's
now even more unclear why :host is not ::host.)


-- 
https://annevankesteren.nl/



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Anne van Kesteren
On Mon, Apr 27, 2015 at 9:25 AM, Justin Fagnani
 wrote:
> I really don't think the platform needs to do anything to support
> subclassing since it can be done so easily at the library level now that
> multiple generations of shadow roots are gone. As long as a subclass and
> base class can cooperate to produce a single shadow root with insertion
> points, the platform doesn't need to know how they did it.

So a) this is only if they cooperate and the superclass does not want
to keep its tree and distribution logic hidden and b) if we want to
eventually add declarative functionality we'll need to explain it
somehow. Seems better that we know upfront how that will work.


-- 
https://annevankesteren.nl/



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-27 Thread Justin Fagnani
On Sun, Apr 26, 2015 at 11:05 PM, Anne van Kesteren 
wrote:

> On Sat, Apr 25, 2015 at 10:49 PM, Ryosuke Niwa  wrote:
> > One major drawback of this API is computing insertionList is expensive
> > because we'd have to either (where n is the number of nodes in the shadow
> > DOM):
> >
> > Maintain an ordered list of insertion points, which results in O(n)
> > algorithm to run whenever a content element is inserted or removed.
>

I don't expect shadow roots to be modified that much. We certainly don't
see it now, though the imperative API opens up some new possibilities like
calculating a grouping of child nodes and generating a  tag per
group, or even generating a  tag per child to perform decoration.
I still think those would be very rare cases.


> > Lazily compute the ordered list of insertion points when `distribute`
> > callback is about to get called in O(n).
>
> The alternative is not exposing it and letting developers get hold of
> the slots. The rationale for letting the browser do it is because you
> need the slots either way and the browser should be able to optimize
> better.
>
>
 > > If we wanted to allow a non-direct-child descendant (e.g. a grandchild
 > node) of
 > > the host to be distributed, then we'd also need an O(m) algorithm where m is
 > > the number of nodes under the host element.  It might be okay to carry on the
> > current restraint that only direct child of shadow host can be
> distributed
> > into insertion points but I can't think of a good reason as to why such a
> > restriction is desirable.
>

The main reason is that you know that only a direct parent of a node can
distribute it. Otherwise any ancestor could distribute a node, and in
addition to probably being confusing and fragile, you have to define who
wins when multiple ancestors try to.

There are cases where you really want to group element logically by one
tree structure and visually by another, like tabs. I think an alternative
approach to distributing arbitrary descendants would be to see if nodes can
cooperate on distribution so that a node could pass its direct children to
another node's insertion point. The direct child restriction would still be
there, so you always know who's responsible, but you can get the same
effect as distributing descendants for a cooperating sets of elements.


> So you mean that we'd turn distributionList into a subtree? I.e. you
> can pass all descendants of a host element to add()? I remember Yehuda
> making the point that this was desirable to him.
>
> The other thing I would like to explore is what an API would look like
> that does the subclassing as well. Even though we deferred that to v2
> I got the impression talking to some folks after the meeting that
> there might be more common ground than I thought.
>

I really don't think the platform needs to do anything to support
subclassing since it can be done so easily at the library level now that
multiple generations of shadow roots are gone. As long as a subclass and
base class can cooperate to produce a single shadow root with insertion
points, the platform doesn't need to know how they did it.

Cheers,
  Justin



> As for the points before about mutation observers. I kind of like just
> having distribute() for v1 since it allows maximum flexibility. I
> would be okay with having an option that is either optin or optout
> that does the observing automatically, though I guess if we move from
> children to descendants that gets more expensive.
>
>
> --
> https://annevankesteren.nl/
>
>


Re: :host pseudo-class

2015-04-27 Thread Rune Lillesveen
On Mon, Apr 27, 2015 at 6:22 AM, Anne van Kesteren  wrote:
> On Mon, Apr 27, 2015 at 5:56 AM, L. David Baron  wrote:
>> For :host it's less interesting, but I thought a major use of
>> :host() and :host-context() is to be able to write selectors that
>> have combinators to the right of :host() or :host-context().
>
> Would they match against elements in the host's tree or the shadow
> tree? I don't see anything in the specification about this.

They would match elements in the shadow tree.

A typical use case is to style elements in the shadow tree based on
its host's attributes:





In the shadow tree for :


:host([disabled]) ::content custom-child { color: #444 }
:host([disabled]) input { border-color: #ccc }




-- 
Rune Lillesveen