Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Elliott Sprehn
A distribute callback means running script any time we update distribution,
which happens inside the style update phase (or the event path computation
phase, ...), where we cannot run script. We could run script in another
scripting context, as is being considered for custom layout and paint, but
that has a different API shape since you'd register a separate .js file as
the custom distributor, like

(document || shadowRoot).registerCustomDistributor({src: 'distributor.js'});

I also don't believe we should support distributing any arbitrary
descendant, that has a large complexity cost and doesn't feel like
simplification. It makes computing style and generating boxes much more
complicated.

A synchronous childrenChanged callback has similar issues with when it's
safe to run script; we'd have to defer its execution in a number of
situations, and it feels like a duplication of MutationObservers, which
were specifically designed to operate in batch for better performance and
fewer footguns (e.g. a naive childrenChanged-based distributor will be O(n^2)).
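To make the footgun concrete, here is a minimal cost-model sketch (not real DOM code) contrasting a hypothetical per-mutation childrenChanged handler, which naively re-distributes all current children on every insertion, with a MutationObserver-style batch that sees all insertions at once:

```python
def naive_insertions(n):
    # childrenChanged fires once per insertion; a naive handler
    # re-scans every child present so far on each firing
    ops = 0
    for inserted_so_far in range(1, n + 1):
        ops += inserted_so_far  # full redistribution pass
    return ops

def batched_insertions(n):
    # a batched observer delivers all n insertions together,
    # so one distribution pass over n children suffices
    return n

# inserting 100 children one by one:
assert naive_insertions(100) == 100 * 101 // 2   # ~n^2/2 work
assert batched_insertions(100) == 100            # linear work
```

The quadratic blow-up comes purely from the callback granularity, not from the distribution logic itself.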


On Mon, Apr 27, 2015 at 8:48 PM, Ryosuke Niwa rn...@apple.com wrote:


  On Apr 27, 2015, at 12:25 AM, Justin Fagnani justinfagn...@google.com
 wrote:
 
  On Sun, Apr 26, 2015 at 11:05 PM, Anne van Kesteren ann...@annevk.nl
 wrote:
  On Sat, Apr 25, 2015 at 10:49 PM, Ryosuke Niwa rn...@apple.com wrote:
   If we wanted to allow a non-direct-child descendant (e.g. a grandchild
  node) of the host to be distributed, then we'd also need an O(m)
  algorithm where m is the number of nodes under the host element. It
  might be okay to keep the current constraint that only direct children
  of the shadow host can be distributed into insertion points, but I
  can't think of a good reason why such a restriction is desirable.
 
  The main reason is that you know that only a direct parent of a node can
 distribute it. Otherwise any ancestor could distribute a node, and in
 addition to probably being confusing and fragile, you have to define who
 wins when multiple ancestors try to.
 
  There are cases where you really want to group elements logically by one
 tree structure and visually by another, like tabs. I think an alternative
 approach to distributing arbitrary descendants would be to see if nodes can
 cooperate on distribution so that a node could pass its direct children to
 another node's insertion point. The direct child restriction would still be
 there, so you always know who's responsible, but you can get the same
 effect as distributing descendants for a cooperating set of elements.

 That's an interesting approach. Ted and I discussed this design, and it
 seems workable with Anne's `distribute` callback approach (= the second
 approach in my proposal).

 Conceptually, we ask each child of a shadow host for the list of
 distributable nodes under that child (including itself). For a normal node
 without a shadow root, it'll simply return itself along with all the
 distribution candidates returned by its children. For a node with a shadow
 root, we ask its implementation. The recursive algorithm can be written as
 follows in pseudocode:

 ```
 NodeList distributionList(Node n):
   if n has shadowRoot:
     return ask n for the list of distributable nodes under n (1)
   else:
     list = [n]
     for each child in n:
       list += distributionList(child)
     return list
 ```

 Now, if we adopted `distribute` callback approach, one obvious mechanism
 to do (1) is to call `distribute` on n and return whatever it didn't
 distribute as a list. Another obvious approach is to simply return [n] to
 avoid the mess of n later deciding to distribute a new node.
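As a sanity check, the recursion above can be modeled outside the DOM. This is a sketch only: `Node` is a hypothetical plain class, and its shadowed variant implements the second policy just described, answering with only itself:

```python
class Node:
    def __init__(self, name, children=None, shadow=False):
        self.name = name
        self.children = children or []
        self.shadow = shadow  # whether n "has shadowRoot"

    def distributable(self):
        # step (1): a shadowed node answers for itself; the simplest
        # policy is to offer only the node, hiding its internals
        return [self]

def distribution_list(n):
    if n.shadow:
        return n.distributable()
    result = [n]
    for child in n.children:
        result += distribution_list(child)
    return result

# the shadow host's children are the roots of the recursion
grandchild = Node("grandchild")
widget = Node("widget", [grandchild], shadow=True)
plain = Node("plain", [Node("leaf")])
candidates = []
for child in [widget, plain]:
    candidates += distribution_list(child)
names = [n.name for n in candidates]  # widget hides its grandchild
```

Note how the shadowed `widget` keeps its grandchild out of the candidate list, while the plain subtree contributes all of its nodes.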

  So you mean that we'd turn distributionList into a subtree? I.e. you
  can pass all descendants of a host element to add()? I remember Yehuda
  making the point that this was desirable to him.
 
  The other thing I would like to explore is what an API would look like
  that does the subclassing as well. Even though we deferred that to v2
  I got the impression talking to some folks after the meeting that
  there might be more common ground than I thought.
 
  I really don't think the platform needs to do anything to support
 subclassing since it can be done so easily at the library level now that
 multiple generations of shadow roots are gone. As long as a subclass and
 base class can cooperate to produce a single shadow root with insertion
 points, the platform doesn't need to know how they did it.

 I think we should eventually add native declarative inheritance support
 for all of this.

 One thing that worries me about the `distribute` callback approach (a.k.a.
 Anne's approach) is that it bakes distribution algorithm into the platform
 without us having thoroughly studied how subclassing will be done upfront.

 Mozilla tried to solve this problem with XBL, and they seem to think what
 they have isn't really great. Google has spent multiple years working on
 this problem but has come around to say their solution, 

Re: Directory Upload Proposal

2015-04-28 Thread Jonas Sicking
On Tue, Apr 28, 2015 at 4:28 PM, Travis Leithead
travis.leith...@microsoft.com wrote:
 Aaron opened an issue for this on GitHub [1] and I agree that it is a 
 problem and we should definitely rename it to something else! One option 
 might be to change dir to directory, but we would need a different name for 
 directory (the attribute that gets back the virtual root holding the 
 selected files and folders).

 I wonder, is it necessary to have a separate dir/directory attribute from 
 multiple? Adding a new DOM attribute will allow for feature detecting this 
 change. UAs can handle the presentation of a separate directory picker if 
 necessary--why force this distinction on the web developer?

We need the dir/directory attribute in order for pages to indicate that
they can handle Directory objects, no matter where/how we expose those
Directory objects.

/ Jonas



RE: Directory Upload Proposal

2015-04-28 Thread Ali Alabbas
On Tue, Apr 28, 2015 at 4:15 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

On Tue, Apr 28, 2015 at 3:53 PM, Ryan Seddon seddon.r...@gmail.com wrote:
 To enable developers to build future interoperable solutions, we've 
 drafted a proposal [4], with the helpful feedback of Mozilla and 
 Google, that focuses strictly on providing the mechanisms necessary 
 to enable directory uploads.

 The use of the dir attribute seems odd since I can already apply dir=rtl
 to an input to change the text direction.

Good catch; that's a fatal naming clash, and needs to be corrected.
The obvious one is to just expand out the name to directory.

~TJ

Aaron opened an issue for this on GitHub [1] and I agree that it is a problem 
and we should definitely rename it to something else! One option might be to 
change dir to directory, but we would need a different name for directory (the 
attribute that gets back the virtual root holding the selected files and 
folders).

[1] https://github.com/InternetExplorer/directory-upload/issues/1


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Justin Fagnani
On Tue, Apr 28, 2015 at 4:32 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Apr 27, 2015, at 4:23 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Apr 27, 2015 at 4:06 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:

 On Mon, Apr 27, 2015 at 3:42 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Apr 27, 2015, at 3:15 PM, Steve Orvell sorv...@google.com wrote:
 IMO, the appeal of this proposal is that it's a small change to the
 current spec and avoids changing user expectations about the state of the
 dom and can explain the two declarative proposals for distribution.

 It seems like with this API, we’d have to make O(n^k) calls where n is the
 number of distribution candidates and k is the number of insertion points,
 and that’s bad.  Or am I misunderstanding your design?


 I think you've understood the proposed design. As you noted, the cost is
 actually O(n*k). In our use cases, k is generally very small.


 I don't think we want to introduce an O(nk) algorithm. Pretty much every
 browser optimization we implement these days is removing O(n^2) algorithms
 in favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because
 we can't even theoretically optimize it away.


 You're aware, obviously, that O(n^2) is a far different beast than
 O(nk).  If k is generally small, which it is, O(nk) is basically just
 O(n) with a constant factor applied.


 To make it clear: I'm not trying to troll Ryosuke here.

 He argued that we don't want to add new O(n^2) algorithms if we can
 help it, and that we prefer O(n).  (Uncontroversial.)

 He then further said that an O(nk) algorithm is sufficiently close to
 O(n^2) that he'd similarly like to avoid it.  I'm trying to
 reiterate/expand on Steve's message here, that the k value in question
 is usually very small, relative to the value of n, so in practice this
 O(nk) is more similar to O(n) than O(n^2), and Ryosuke's aversion to
 new O(n^2) algorithms may be mistargeted here.
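The disagreement can be made concrete with a toy cost model (an illustration, not how any engine actually accounts for this): assume every distribution candidate is tested against every insertion point once.

```python
def matching_cost(n, k):
    # hypothetical cost model: n candidates, each tested
    # against k insertion points
    return n * k

# Tab's point: with k fixed and small, the cost is linear in n...
assert matching_cost(2000, 3) == 2 * matching_cost(1000, 3)

# ...Ryosuke's point: if k grows with n (e.g. one insertion point
# created per distributed node), the same formula turns quadratic
assert matching_cost(2000, 2000) == 4 * matching_cost(1000, 1000)
```

Both readings of O(nk) are correct; which one applies depends on whether authors keep k small or let it scale with the content.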


 Thanks for the clarification. Just as Justin pointed out [1], one of the
 most important use cases of an imperative API is to dynamically insert as
 many insertion points as needed to wrap each distributed node.  In such a
 use case, this algorithm DOES result in O(n^2).


I think I said it was a possibility opened by an imperative API, but I
thought it would be very rare (as will be any modification of the shadow
root in the distribution callback). I think that accomplishing decoration
by inserting an insertion point per distributed node is probably a
degenerate case and it would be better if we supported decoration, but that
seems like a v2+ type feature.

-Justin



 In fact, it could even result in O(n^3) behavior depending on how we spec
 it, if the user code dynamically inserted insertion points one by one and
 the UA invoked this callback function for each insertion point and each
 node.  If we didn't, then the author would need a mechanism to let the UA
 know that the condition by which insertion points select a node has changed
 and it needs to re-distribute all the nodes again.
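The invalidation mechanism mentioned at the end is commonly implemented as a dirty flag with lazy recomputation. Here is a minimal sketch (hypothetical `Distributor` class, not a proposed API) showing how N invalidations collapse into a single redistribution pass:

```python
class Distributor:
    """Sketch: batch redistribution behind a dirty flag, so many
    invalidations trigger one pass instead of one pass each."""
    def __init__(self, candidates):
        self.candidates = candidates
        self.dirty = True
        self.passes = 0       # how many O(n) passes we actually ran
        self.result = []

    def invalidate(self):
        self.dirty = True     # cheap: just mark, don't recompute

    def distributed(self):
        if self.dirty:
            self.passes += 1  # one O(n) redistribution pass
            # toy selection condition: keep even-numbered candidates
            self.result = [c for c in self.candidates if c % 2 == 0]
            self.dirty = False
        return self.result

d = Distributor(range(10))
for _ in range(5):
    d.invalidate()            # five condition changes...
out = d.distributed()         # ...but only one redistribution
```

Whether the platform or the author owns this flag is exactly the design question Ryosuke raises.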

 - R. Niwa

 [1]
 https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0325.html




Re: Directory Upload Proposal

2015-04-28 Thread Ryan Seddon
 To enable developers to build future interoperable solutions, we've
 drafted a proposal [4], with the helpful feedback of Mozilla and Google,
 that focuses strictly on providing the mechanisms necessary to enable
 directory uploads.


The use of the dir attribute seems odd since I can already apply dir=rtl
to an input to change the text direction.


RE: Directory Upload Proposal

2015-04-28 Thread Travis Leithead
 Second, rather than adding a .directory attribute, I think that we should 
 simply add any selected directories to the .files list. My experience is 
 that having a direct mapping between what the user does, and what we expose 
 to the webpage, generally results in less developer confusion and/or 
 annoyance.

I like this consolidation, but Ali's concern (and one I share) is that legacy 
code using .files will not expect to encounter new Directory objects in the 
list and will likely break unless the Directory object maintains a 
backwards-compatible File-like appearance.

In the proposed model, the directory would be a virtual wrapper around any 
existing selected files, and could wholly replace .files, while providing a 
nice extension point for additional behavior later.

I have a concern about revealing the user's directory names to the server, and 
suggested anonymizing the names, but it seems that having directory path names 
flow through to the server intact is an important scenario for file-syncing, 
which anonymizing might break.

-Original Message-
From: Jonas Sicking [mailto:jo...@sicking.cc] 
Sent: Monday, April 27, 2015 9:45 PM
To: Ali Alabbas
Cc: Web Applications Working Group WG
Subject: Re: Directory Upload Proposal

On Thu, Apr 23, 2015 at 12:28 PM, Ali Alabbas a...@microsoft.com wrote:
 Hello WebApps Group,

Hi Ali,

Yay! This is great to see a formal proposal for! Definitely something that 
mozilla is very interested in working on.

 If there is sufficient interest, I would like to work on this within the 
 scope of the WebApps working group.

I personally will stay out of WG politics. But I think the proposal will 
receive more of the needed attention and review in this WG than in the HTML WG. 
But I'm not sure if W3C policies dictate that this is done in the HTML WG.

 [4] Proposal: 
 http://internetexplorer.github.io/directory-upload/proposal.html

So, some specific feedback on the proposal.

First off, I don't think you can use the name dir for the new attribute since 
that's already used for setting rtl/ltr direction.
Simply renaming the attribute to something else should fix this.



My understanding is that the current proposal is mainly so that if we in the 
future add something like Directory.enumerateDeep(), that that would 
automatically enable deep enumeration through all user options.
However that could always be solved by adding a
HTMLInputElement.enumerateFilesDeep() function.

/ Jonas




Re: Directory Upload Proposal

2015-04-28 Thread Jonas Sicking
On Mon, Apr 27, 2015 at 9:45 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Thu, Apr 23, 2015 at 12:28 PM, Ali Alabbas a...@microsoft.com wrote:
 Hello WebApps Group,

 Hi Ali,

 Yay! This is great to see a formal proposal for! Definitely something
 that mozilla is very interested in working on.

 If there is sufficient interest, I would like to work on this within the 
 scope of the WebApps working group.

 I personally will stay out of WG politics. But I think the proposal
 will receive more of the needed attention and review in this WG than
 in the HTML WG. But I'm not sure if W3C policies dictate that this is
 done in the HTML WG.

 [4] Proposal: 
 http://internetexplorer.github.io/directory-upload/proposal.html

 So, some specific feedback on the proposal.

 First off, I don't think you can use the name dir for the new
 attribute since that's already used for setting rtl/ltr direction.
 Simply renaming the attribute to something else should fix this.

 Second, rather than adding a .directory attribute, I think that we
 should simply add any selected directories to the .files list. My
 experience is that having a direct mapping between what the user does,
 and what we expose to the webpage, generally results in less developer
 confusion and/or annoyance.

 My understanding is that the current proposal is mainly so that if we
 in the future add something like Directory.enumerateDeep(), that that
 would automatically enable deep enumeration through all user options.
 However that could always be solved by adding a
 HTMLInputElement.enumerateFilesDeep() function.

Oh, there's another thing missing that I missed. We also need some
function, similar to .click(), which allows a webpage to
programmatically bring up a directory picker. This is needed on
platforms like Windows and Linux which use separate platform widgets
for picking a directory and picking a file. Many websites hide the
default browser provided input type=file UI and then call .click()
when the user clicks the website UI.

A tricky question is what to do on platforms that don't have a
separate directory picker (like OSX) or which doesn't have a concept
of directories (most mobile platforms). We could either make those UAs
on those platforms not have the separate .clickDirectoryPicker()
function (or whatever we'll call it), or make them have it but just do
the same as .click().

/ Jonas



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Ryosuke Niwa
On Apr 27, 2015, at 4:23 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Apr 27, 2015 at 4:06 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Apr 27, 2015 at 3:42 PM, Ryosuke Niwa rn...@apple.com wrote:
 On Apr 27, 2015, at 3:15 PM, Steve Orvell sorv...@google.com wrote:
 IMO, the appeal of this proposal is that it's a small change to the 
 current spec and avoids changing user expectations about the state of the 
 dom and can explain the two declarative proposals for distribution.
 
 It seems like with this API, we’d have to make O(n^k) calls where n is 
 the number of distribution candidates and k is the number of insertion 
 points, and that’s bad.  Or am I misunderstanding your design?
 
 I think you've understood the proposed design. As you noted, the cost is 
 actually O(n*k). In our use cases, k is generally very small.
 
 I don't think we want to introduce an O(nk) algorithm. Pretty much every 
 browser optimization we implement these days is removing O(n^2) algorithms 
 in favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because 
 we can't even theoretically optimize it away.
 
 You're aware, obviously, that O(n^2) is a far different beast than
 O(nk).  If k is generally small, which it is, O(nk) is basically just
 O(n) with a constant factor applied.
 
 To make it clear: I'm not trying to troll Ryosuke here.
 
 He argued that we don't want to add new O(n^2) algorithms if we can
 help it, and that we prefer O(n).  (Uncontroversial.)
 
 He then further said that an O(nk) algorithm is sufficiently close to
 O(n^2) that he'd similarly like to avoid it.  I'm trying to
 reiterate/expand on Steve's message here, that the k value in question
 is usually very small, relative to the value of n, so in practice this
 O(nk) is more similar to O(n) than O(n^2), and Ryosuke's aversion to
 new O(n^2) algorithms may be mistargeted here.

Thanks for the clarification. Just as Justin pointed out [1], one of the most 
important use cases of an imperative API is to dynamically insert as many 
insertion points as needed to wrap each distributed node.  In such a use case, 
this algorithm DOES result in O(n^2).

In fact, it could even result in O(n^3) behavior depending on how we spec it, 
if the user code dynamically inserted insertion points one by one and the UA 
invoked this callback function for each insertion point and each node.  If we 
didn't, then the author would need a mechanism to let the UA know that the 
condition by which insertion points select a node has changed and it needs to 
re-distribute all the nodes again.

- R. Niwa

[1] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0325.html 



Re: Directory Upload Proposal

2015-04-28 Thread Tab Atkins Jr.
On Tue, Apr 28, 2015 at 3:53 PM, Ryan Seddon seddon.r...@gmail.com wrote:
 To enable developers to build future interoperable solutions, we've
 drafted a proposal [4], with the helpful feedback of Mozilla and Google,
 that focuses strictly on providing the mechanisms necessary to enable
 directory uploads.

 The use of the dir attribute seems odd since I can already apply dir=rtl
 to an input to change the text direction.

Good catch; that's a fatal naming clash, and needs to be corrected.
The obvious one is to just expand out the name to directory.

~TJ



Re: Directory Upload Proposal

2015-04-28 Thread Jonas Sicking
On Tue, Apr 28, 2015 at 4:26 PM, Travis Leithead
travis.leith...@microsoft.com wrote:
 Second, rather than adding a .directory attribute, I think that we should 
 simply add any selected directories to the .files list. My experience is 
 that having a direct mapping between what the user does, and what we expose 
 to the webpage, generally results in less developer confusion and/or 
 annoyance.

 I like this consolidation, but Ali's concern (and one I share) is that legacy 
 code using .files will not expect to encounter new Directory objects in the 
 list and will likely break unless the Directory object maintains a 
 backwards-compatible File-like appearance.

Legacy pages won't be setting the directory attribute.

In fact, this is the whole purpose of the directory attribute: to
enable pages to signal "I can handle the user picking directories."

 I have a concern about revealing the user's directory names to the server, 
 and suggested anonymizing the names, but it seems that having directory path 
 names flow through to the server intact is an important scenario for 
 file-syncing, which anonymizing might break.

I agree that this is a concern, though one separate from what API we use.

I do think it's fine to expose the name of the directory that the user
picks. It doesn't seem very different from the fact that we expose the
filenames of the files that the user picks.

/ Jonas



[selectors-api] How to mark TR version as obsolete/superseded? [Was: Re: Obsolete Document]

2015-04-28 Thread Arthur Barstow

On 3/26/15 8:30 AM, Gulfaraz Yasin wrote:

Hi

It has come to my notice that the following document

http://www.w3.org/TR/selectors-api/#resolving-namespaces

is obsolete.


Hi Gulfaraz,

Thanks for your e-mail and sorry for the slow reply.


I was directed to its page from one of StackOverflow's answers, and 
after following up a bit I've been informed that the above document is 
obsolete.



Yes, this is true.


It would be very helpful if there was a notice on the page that 
informed its visitors of the same.



Yes, I agree. I think the principle of least surprise implies the 
document at w3.org/TR/selectors-api/ should be gutted of all technical 
content and a reference to the DOM spec [DOM] (which supersedes Selectors 
API) should be added (as well as a clear statement that work on selectors-api 
has stopped and its features/APIs are superseded by [DOM]). However, I 
suspect the consortium's publication processes might not permit that.


Xiaoqian, Yves - can we do as I suggest above? If not, what is your 
recommendation re making sure people understand work on selectors-api 
has stopped and it is superseded by [DOM]?


-Thanks, AB

[DOM] http://www.w3.org/TR/dom/





Re: Web Storage Rec errata?

2015-04-28 Thread Arthur Barstow

On 4/21/15 5:39 AM, Kostiainen, Anssi wrote:

Hi,

Is there a plan to publish an errata to sync the Web Storage Rec [1] with the 
latest? I counted 8 commits cherry picked into the Editor's Draft since Rec [2].

If no errata publication is planned, I'd expect the Rec to clearly indicate its 
status.



Hi Anssi,

Re the priority of this issue, is this mostly a truth and beauty 
process-type request, or is this issue actually creating a problem(s)? 
(If the latter, I would appreciate it if you would please provide some 
additional context.)


The main thing blocking the publication of errata is a commitment from 
someone to actually do the work. I also think Ian's automatic push of 
commits from the WHATWG version of Web Storage to [2] was stopped a long 
time ago so there could be additional changes to be considered, and the 
totality of changes could include normative changes. Did you check for 
these later changes?


If you, or anyone else, would like to help with this effort, that would 
be great. (If it would be helpful, we could create a new webstorage repo 
under github/w3c/, work on the errata in that repo and redirect the 
CVS-backed errata document to the new repo.)


Personally, I think putting errata in a separate file, as opposed to 
putting changes directly into [1], is mostly make-work and fails the 
principle of least surprise. However, I think the consortium's various 
processes preclude us from doing what I consider the right thing.


-Thanks, ArtB



Thanks,

-Anssi

[1]http://www.w3.org/TR/webstorage/
[2]http://dev.w3.org/cvsweb/html5/webstorage/Overview.html





RE: Directory Upload Proposal

2015-04-28 Thread Travis Leithead
 Aaron opened an issue for this on GitHub [1] and I agree that it is a 
 problem and we should definitely rename it to something else! One option 
 might be to change dir to directory, but we would need a different name for 
 directory (the attribute that gets back the virtual root holding the 
 selected files and folders).

I wonder, is it necessary to have a separate dir/directory attribute from 
multiple? Adding a new DOM attribute will allow for feature detecting this 
change. UAs can handle the presentation of a separate directory picker if 
necessary--why force this distinction on the web developer?

-Original Message-
From: Ali Alabbas [mailto:a...@microsoft.com] 
Sent: Tuesday, April 28, 2015 4:21 PM
To: Tab Atkins Jr.; Ryan Seddon
Cc: Web Applications Working Group WG
Subject: RE: Directory Upload Proposal

On Tue, Apr 28, 2015 at 4:15 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

On Tue, Apr 28, 2015 at 3:53 PM, Ryan Seddon seddon.r...@gmail.com wrote:
 To enable developers to build future interoperable solutions, we've 
 drafted a proposal [4], with the helpful feedback of Mozilla and 
 Google, that focuses strictly on providing the mechanisms necessary 
 to enable directory uploads.

 The use of the dir attribute seems odd since I can already apply dir=rtl
 to an input to change the text direction.

Good catch; that's a fatal naming clash, and needs to be corrected.
The obvious one is to just expand out the name to directory.

~TJ

Aaron opened an issue for this on GitHub [1] and I agree that it is a problem 
and we should definitely rename it to something else! One option might be to 
change dir to directory, but we would need a different name for directory (the 
attribute that gets back the virtual root holding the selected files and 
folders).

[1] https://github.com/InternetExplorer/directory-upload/issues/1


Re: Why is querySelector much slower?

2015-04-28 Thread Boris Zbarsky

On 4/28/15 2:44 AM, Glen Huang wrote:

But if I do getElementsByClassName()[0], and the LiveNodeList is immediately
garbage collectable


Then it will only be collected the next time GC happens.  Which might 
not be for a while.


-Boris



Re: Why is querySelector much slower?

2015-04-28 Thread Boris Zbarsky

On 4/28/15 1:58 AM, Glen Huang wrote:

Just a quick follow-up question to quench my curiosity: if I do list[1] and no 
one has ever asked the list for any element, Gecko will find the first two 
matching elements and store them in the list; if I then immediately do list[0], 
is the first element returned without walking the DOM (assuming there are at 
least two matching elements)?


Yes, exactly.
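The lazy-population behavior Boris confirms can be sketched with a toy class (purely illustrative; Gecko's actual implementation is C++ and far more involved): indexing walks the underlying sequence only far enough to answer, caching matches as it goes.

```python
class LazyNodeList:
    """Sketch of a lazily-populated live list: list[i] pulls matches
    from the tree only until index i is cached."""
    def __init__(self, source, predicate):
        self._matches = (x for x in source if predicate(x))
        self._cache = []
        self.pulled = 0  # how many matches we pulled from the "tree"

    def __getitem__(self, i):
        while len(self._cache) <= i:
            self._cache.append(next(self._matches))
            self.pulled += 1
        return self._cache[i]

dom = ["div", "span", "div", "p", "div"]
lst = LazyNodeList(dom, lambda tag: tag == "div")
first_ask = lst[1]   # pulls the first two matches
second_ask = lst[0]  # served from cache, no further walking
```

After `lst[1]`, exactly two matches have been materialized; `lst[0]` is then answered from the cache without touching the tree again, matching Boris's description.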


querySelector(foo) and getElementsByTagName(foo)[0] can return different 
nodes


Still a bit confused regarding this. If the premise is the selector only 
contains characters allowed in a tag name


Then in that case I _think_ those are now equivalent, though I'd have to 
check it carefully.  They didn't use to be back when 
getElementsByTagName was defined as matching on qualified name, not 
localname...


-Boris



Re: Why is querySelector much slower?

2015-04-28 Thread Boris Zbarsky

On 4/28/15 2:13 AM, Glen Huang wrote:

On second thought, if the list returned by getElementsByClass() is lazy
populated as Boris says, it shouldn't be a problem. The list is only
updated when you access that list again.


Mutations have to check whether the list is marked dirty already or not.

This is not too bad if you only have a few lists around, but if you have 
several thousand it can start to hurt.
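Why several thousand lists hurt can be shown with a toy model (illustrative only, not Gecko internals): every mutation has to visit each outstanding live list to mark it dirty, so per-mutation cost scales with the number of lists.

```python
class Document:
    """Sketch: a mutation marks every outstanding live list dirty,
    so mutation cost grows with the number of live lists held."""
    def __init__(self):
        self.live_lists = []
        self.mark_ops = 0

    def create_live_list(self):
        lst = {"dirty": False}
        self.live_lists.append(lst)
        return lst

    def mutate(self):
        for lst in self.live_lists:
            if not lst["dirty"]:       # already-dirty lists are cheap
                lst["dirty"] = True
                self.mark_ops += 1

doc = Document()
for _ in range(1000):
    doc.create_live_list()
doc.mutate()  # one DOM change touches all 1000 lists
```

With a handful of lists the marking is noise; with thousands, every single DOM mutation pays for all of them.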


-Boris



[Bug 28353] [Shadow]: Use a parent/child relationship in the composed tree for some elements, i.e. ol/li

2015-04-28 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28353

Koji Ishii kojii...@gmail.com changed:

   What|Removed |Added

 Status|REOPENED|RESOLVED
 Resolution|--- |FIXED

--- Comment #6 from Koji Ishii kojii...@gmail.com ---
Ah, sorry, I was thinking of table DOM operations, but that was not the original
topic. The rendering of tables is well defined with display and anonymous
boxes, so I agree that it should work.

So what you meant was that anything that has a mapping to CSS display should
work, but details does not? I did not read that from your sentence at first,
but it makes sense now. Thank you (and sorry) for explaining twice.

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
Wow, this is pure gold. Thank you so much for such a thorough explanation; you 
even took the trouble to actually implement optimizations to make sure the 
numbers are right. I'm so grateful for the work you put into this just to 
answer my question. How do I accept your answer here? ;)

 So what you're seeing is that the benchmark claims the operation is performed 
 in 1-2 clock cycles

I never thought about relating ops/sec numbers to clock cycles. Thanks for the 
tip.

 So what this getElementById benchmark measures is how fast a loop counter can 
 be decremented from some starting value to 0.

This makes so much sense now.

 because of the proxy machinery involved on the JS engine side

Do you mean the cost introduced by passing a C++ object into ecmascript world?

 In this case, those all seem to have about the same cost;

I now see why querySelector has some extra work to do.

 But for real-life testcases algorithmic complexity can often be much more 
 important.

Yes. But I suddenly find microbenchmarks to be a wonderful conversation 
starter. ;)

Thanks again for all the explanations, I'm motivated by them to actually dig 
into the engine source code to discover things myself next time (probably not 
easy, but should be rewarding). :)


Re: Why is querySelector much slower?

2015-04-28 Thread Boris Zbarsky

On 4/28/15 2:59 AM, Glen Huang wrote:

Looking at the microbenchmark again, for Gecko, getElementById is around 300x 
faster than querySelector('#id'), and even getElementsByClassName is faster 
than it.


This is why one should not write microbenchmarks.  ;)  Or at least if 
one does, examine the results very carefully.


The numbers you see for the getElementById benchmark there are on the 
order of 2e9 operations per second, yes?  And modern desktop/laptop CPUs 
are clocked somewhere in the 2-4 GHz range.  So what you're seeing is 
that the benchmark claims the operation is performed in 1-2 clock 
cycles.  This should seem unlikely if you think the operation involves a 
hashtable lookup!
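The back-of-the-envelope arithmetic here is worth spelling out. Assuming a 3 GHz clock (any figure in the 2-4 GHz range Boris cites gives the same conclusion):

```python
ops_per_sec = 2e9        # throughput reported by the microbenchmark
clock_hz = 3e9           # assumed ~3 GHz desktop CPU
cycles_per_op = clock_hz / ops_per_sec
# 1.5 cycles per "getElementById call" is implausibly fast for a
# hashtable lookup, so the call must have been optimized away
```

A real hashtable lookup costs tens of cycles at minimum, so a 1-2 cycle result is the signature of dead-code elimination, not of a fast DOM.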


What's happening there is that Gecko happens to know at JIT compile time 
in this microbenchmark:


1)  The bareword lookup is going to end up at the global, because there 
is nothing on the scope chain that would shadow the document name.
2)  The global has an own property named document whose getter is 
side-effect-free.
3)  The return value of the document property has only been observed 
to be a Document.
4)  Looking up getElementById on the return value of the document 
property has consistently found it on Document.prototype.

5)  Document.prototype.getElementById is known to be side-effect-free.
6)  The return value of getElementById is not used (assigned to a 
function-local variable that is then not used).


The upshot of all that is that with a few guards both the 
getElementById call and the document get can be dead-code 
eliminated here.  And even if you stored the value somewhere persistent, 
they could both still be loop-hoisted in this case.  So what this 
getElementById benchmark measures is how fast a loop counter can be 
decremented from some starting value to 0.  It happens that this can be 
done in about 1-2 clock cycles per loop iteration.


OK, so what about querySelector("#id") vs getElementsByClassName?

In the former case, loop-hoisting and dead code elimination are 
disallowed because querySelector can throw.  That means that you can't 
eliminate it, and you can't move it past other things that might have 
observable side effects (like the counter increment).  Arguably this is 
a misfeature in the design of querySelector.


In the latter case, loop-hoisting or dead code elimination can't happen 
because Gecko doesn't know enough about what [0] will do, so it assumes the 
worst: that it can have side effects that affect what the document getter 
returns as well as what the getElementsByClassName() call returns.


So there are no shortcuts here; you have to actually do the calls. What 
do those calls do?


querySelector does a hashtable lookup for the selector to find a parsed 
selector.  Then it sets up some state that's needed for selector 
matching.  Then it detects that the selector's right-hand-most bit has a 
simple ID selector and does a fast path that involves looking up that id 
in the hashtable and then comparing the selector to the elements that 
are returned until one of them matches.


getElementsByClassName has to do a hashtable lookup on the class name, 
then return the result.  Then it has to do the [0] (which is actually 
surprisingly expensive, by the way, because of the proxy machinery 
involved on the JS engine side).


So we _could_ make querySelector faster here by adding another special 
case for selectors that are _just_ an id, as opposed to the existing 
optimization (which works for "#foo > #bar" and similar as well).  And 
of course the new special case would only work the way you want for 
document.querySelector, not element.querySelector; the latter needs to 
check that your result is a descendant of the element anyway.  It's a 
tradeoff between complexity of implementation (which has its own 
maintenance _and_ performance costs) and real-life use cases.


Lastly, I'd like to put numbers to this.  On this particular testcase, 
the querySelector("#list") call takes about 100ns on my hardware: about 
300 CPU cycles.  We could add that other set of special-casing and get 
it down to 70ns (I just checked by implementing it, so this is not a 
random guess).  At that point you've got two hashtable lookups (which we 
could try to make faster, perhaps), the logic to detect that the 
optimization can be done at all (which is not that trivial; our selector 
representation requires a bunch of checks to ensure that it's just an id 
selector), and whatever work is involved in the binding layer.  In this 
case, those all seem to have about the same cost; about 17-18ns (50 CPU 
cycles) each.


So is your use case one where the difference between querySelector 
costing 100ns and it costing 70ns actually makes a difference?
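
For what it's worth, the special case described above can be approximated in userland. In this sketch, `fastQuery` is a made-up helper and the regex for a "simple id selector" is my assumption:

```javascript
// Userland version of the discussed fast path (illustrative sketch).
// Only a bare "#id" selector on a Document is routed to getElementById;
// element.querySelector must keep its descendant check, so anything else
// falls through to the generic path.
const SIMPLE_ID = /^#([A-Za-z_][\w-]*)$/;

function fastQuery(root, selector) {
  const m = SIMPLE_ID.exec(selector);
  if (m && root.nodeType === 9 /* DOCUMENT_NODE */) {
    return root.getElementById(m[1]);
  }
  return root.querySelector(selector);
}
```

Whether this wins anything is the same tradeoff: the check for "is this just an id selector" is itself not free.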



It doesn't look like it benefits much from an eagerly populated hash table?


It benefits a good bit for non-toy documents where avoiding walking the 
entire DOM is the important part of the optimization.  Again, 
microbenchmarks mostly serve to highlight the 

Re: Inheritance Model for Shadow DOM Revisited

2015-04-28 Thread Ryosuke Niwa

 On Apr 27, 2015, at 9:50 PM, Hayato Ito hay...@chromium.org wrote:
 
 The feature of "shadow as function" supports *subclassing*. That's exactly 
 the motivation for which I introduced it once in the spec (and implemented 
 it in blink). I think Jan Miksovsky, co-author of Apple's proposal, knows 
 that well.

We're (and consequently I'm) fully aware of that feature/proposal, and we still 
don't think it adequately addresses the needs of subclassing.

The problem with "shadow as function" is that the superclass implicitly 
selects nodes based on a CSS selector, so unless the nodes a subclass wants to 
insert match exactly what the author of the superclass considered, the subclass 
won't be able to override it. e.g. if the superclass had an insertion point 
with select="input.foo", then it's not possible for a subclass to then override 
it with, for example, an input element wrapped in a span.
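
To make the objection concrete, here is a toy model of the two policies over plain objects (illustrative only, not the spec's distribution algorithm): a selector-based insertion point rejects the subclass's wrapped input, while a named slot admits it.

```javascript
// Toy model of two distribution policies (not the spec algorithm).
// The subclass wants to distribute <span slot="input"><input class="foo"></span>.
const wrapped = { tag: "span", classes: [], slot: "input" };

// Policy 1: superclass insertion point with select="input.foo".
// The top-level node itself must match the selector, so the wrapper fails.
function matchesSelectorPolicy(node) {
  return node.tag === "input" && node.classes.includes("foo");
}

// Policy 2: a named slot. Only the slot name matters, not the element shape.
function matchesNamedSlotPolicy(node, slotName) {
  return node.slot === slotName;
}

matchesSelectorPolicy(wrapped);           // false: the span is not input.foo
matchesNamedSlotPolicy(wrapped, "input"); // true: the name matches
```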

 The reason I reverted it from the spec (and the blink), [1], is a technical 
 difficulty to implement, though I've not proved that it's impossible to 
 implement.

I'm not even arguing about the implementation difficulty. I'm saying that the 
semantics is inadequate for subclassing.

- R. Niwa




=[xhr]

2015-04-28 Thread Ken Nelson
RE async: false being deprecated

There's still occasionally a need for a call from client javascript back to
server and wait on results. Example: an inline call from client javascript
to PHP on server to authenticate an override password as part of a
client-side operation. The client-side experience could be managed with a
sane timeout param - e.g. return false if no response after X seconds (or ms).

Thanks


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Ryosuke Niwa
I've updated the gist to reflect the discussion so far:
https://gist.github.com/rniwa/2f14588926e1a11c65d3 
https://gist.github.com/rniwa/2f14588926e1a11c65d3

Please leave a comment if I missed anything.

- R. Niwa



IndieUI Teleconference Agenda; 29 April at 21:00Z

2015-04-28 Thread Janina Sajka

Cross-posting as is usual ...

What:   IndieUI Task Force Teleconference
When:   Wednesday 29 April
 2:00 PM  San Francisco -- U.S. Pacific Time    (PDT: UTC -7)
 4:00 PM  Austin -- U.S. Central Time           (CDT: UTC -5)
 5:00 PM  Boston -- U.S. Eastern Time           (EDT: UTC -4)
10:00 PM  London -- British Summer Time         (BST: UTC +1)
11:00 PM  Paris -- Central European Summer Time (CEST: UTC +2)
 5:00 AM  Beijing -- China Standard Time        (Thursday, 30 April; CST: UTC +8)
 6:00 AM  Tokyo -- Japan Standard Time          (Thursday, 30 April; JST: UTC +9)
Where:  W3C Teleconference--See Below

* Time of day conversions

Please verify the correct time of this meeting in your time zone using
the Fixed Time Clock at:

http://timeanddate.com/worldclock/fixedtime.html?msg=IndieUI+Teleconference&iso=20150429T1700&p1=43&ah=1

** Preliminary Agenda for IndieUI Task Force Teleconference 29 April 2015

Meeting: IndieUI Task Force Teleconference
Chair:  Janina_Sajka
agenda+ preview agenda with items from two minutes
agenda+ Editors' Reports; Heartbeat Publications Update
agenda+ Future of IndieUI Work (Continued)
agenda+  Other Business
agenda+ Be Done

Resource: Teleconference Minutes
http://www.w3.org/2015/04/15-indie-ui-minutes.html

Resource: First Survey Results
https://www.w3.org/2002/09/wbs/54997/201503_planning/results

Resource: Second Survey Results
https://www.w3.org/2002/09/wbs/54997/201504_fate/results

Resource: Schema.org meta data mapping to Indie UI User context
https://docs.google.com/spreadsheets/d/1pb92piOlud5sXQadXYnbmtp9LCut26gv8ku-qqZTwec/edit#gid=0

Resource: Web Apps Editing TF
Editing Explainer:  http://w3c.github.io/editing-explainer/
User Intentions:
http://w3c.github.io/editing-explainer/commands-explainer.html

Resource: For Reference
Home Page:  http://www.w3.org/WAI/IndieUI/
Email Archive:  http://lists.w3.org/Archives/Public/public-indie-ui/

Resource: Teleconference Logistics
Dial the Zakim bridge using either SIP or the PSTN.
PSTN: +1.617.761.6200 (This is a U.S. number).
SIP: za...@voip.w3.org
You should be prompted for a pass code,
This is
46343#
(INDIE#)

Alternatively, bypass the Zakim prompts and SIP directly into our
teleconference.
SIP: 0046...@voip.w3.org

Instructions for connecting using SIP:
http://www.w3.org/2006/tools/wiki/Zakim-SIP
Place for users to contribute additional VoIP tips.
http://www.w3.org/2006/tools/wiki/Zakim-SIP-tips

IRC: server: irc.w3.org, channel: #indie-ui.

During the conference you can manage your participation with Zakim
commands as follows:
   61# to mute yourself
   60# to unMute yourself
   41# to raise your hand (enter speaking queue)
   40# to lower your hand (exit speaking queue)

The system acknowledges these commands with a rapid, three-tone
confirmation.  Mobile phone users especially should use the Zakim mute
command if their phone doesn't have its own mute function.  But the
hand-raising function is a good idea for anyone not using IRC.

* IRC access

An IRC channel will be available. The server is
irc.w3.org,
The port number is 6665 (Note this is not the normal default) and
The channel is #indie-ui.

* Some helpful Scribing and Participation Tips
http://www.w3.org/WAI/PF/wiki/Teleconference_cheat_sheet

For more on the IRC setup and the robots we use for agenda and speaker
queuing and for posting the log to the web, see:

- For RRSAgent, that captures and posts the log with special attention
to action items:
http://www.w3.org/2002/03/RRSAgent

- For Zakim, the IRC interface to the bridge manager, that will
maintain speaker and agenda queues:
http://www.w3.org/2001/12/zakim-irc-bot

- For a Web gateway to IRC you can use if your network administrators
forbid IRC, see:
http://www.w3.org/2001/01/cgi-irc

- For more on W3C use of IRC see:
http://www.w3.org/Project/IRC/

--

Janina Sajka,   Phone:  +1.443.300.2200
sip:jan...@asterisk.rednote.net
Email:  jan...@rednote.net

The Linux Foundation
Chair, Open Accessibility:  http://a11y.org

The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Chair, Protocols & Formats  http://www.w3.org/wai/pf
IndieUI http://www.w3.org/WAI/IndieUI/



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Ryosuke Niwa

 On Apr 28, 2015, at 1:04 PM, Elliott Sprehn espr...@chromium.org wrote:
 
 A distribute callback means running script any time we update distribution, 
 which is inside the style update phase (or event path computation phase, ...) 
 which is not a location we can run script.

That's not what Anne and the rest of us are proposing. That idea only came up 
in Steve's proposal [1] that kept the current timing of distribution.

 I also don't believe we should support distributing any arbitrary descendant, 
 that has a large complexity cost and doesn't feel like simplification. It 
 makes computing style and generating boxes much more complicated.

That certainly is a trade off. See a use case I outlined in [2].

 A synchronous childrenChanged callback has similar issues with when it's safe 
 to run script, we'd have to defer it's execution in a number of situations, 
 and it feels like a duplication of MutationObservers which specifically were 
 designed to operate in batch for better performance and fewer footguns (ex. a 
 naive childrenChanged based distributor will be n^2).

Since the current proposal is to add it as a custom element's lifecycle 
callback (i.e. we invoke it when we cross UA code / user code boundary), this 
shouldn't be an issue. If it is indeed an issue, then we have a problem with a 
lifecycle callback that gets triggered when an attribute value is modified.

In general, I don't think we can address Steve's need to make the consistency 
guarantee [3] without running some script either synchronously or as a 
lifecycle callback in the world of an imperative API.

- R. Niwa

[1] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0342.html
[2] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0344.html
[3] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0357.html




Re: =[xhr]

2015-04-28 Thread Tab Atkins Jr.
On Tue, Apr 28, 2015 at 7:51 AM, Ken Nelson k...@pure3interactive.com wrote:
 RE async: false being deprecated

 There's still occasionally a need for a call from client javascript back to
 server and wait on results. Example: an inline call from client javascript
 to PHP on server to authenticate an override password as part of a
 client-side operation. The client-side experience could be managed with a
 sane timeout param - eg return false if no response after X seconds (or ms).

Nothing prevents you from waiting on an XHR to return before
continuing.  Doing it with async operations is slightly more complex
than blocking with a sync operation, is all.
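
To illustrate that for the original use case, here is a hedged sketch (the endpoint and callback shape are made up) of the password check done with an async XHR plus the built-in timeout, rather than async: false:

```javascript
// Async replacement for the sync auth round-trip (endpoint is made up).
// xhr.timeout gives the "return false if no response after X ms" behavior.
function authenticateOverride(password, timeoutMs, callback) {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "/auth/override-check", true); // true = async
  xhr.timeout = timeoutMs;
  xhr.onload = () => callback(xhr.status === 200);
  xhr.onerror = () => callback(false);
  xhr.ontimeout = () => callback(false); // no response within timeoutMs
  xhr.send(password);
}

// Usage: gate the client-side operation on the callback instead of a
// blocking return value, e.g.
// authenticateOverride(input.value, 5000, (ok) => { if (ok) proceed(); });
```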

~TJ



Re: Inheritance Model for Shadow DOM Revisited

2015-04-28 Thread Hayato Ito
Could you help me to understand what "implicitly" means here?

In this particular case, you might want to blame the super class's author
and tell the author, "Please use <content select=".input-foo"> so that a
subclass can override it with an arbitrary element with class="input-foo".

Could you give me a concrete example which "content slot" can support, but
"shadow as function" can't support?


On Wed, Apr 29, 2015 at 2:09 AM Ryosuke Niwa rn...@apple.com wrote:


  On Apr 27, 2015, at 9:50 PM, Hayato Ito hay...@chromium.org wrote:
 
  The feature of shadow as function supports *subclassing*. That's
 exactly the motivation I've introduced it once in the spec (and implemented
 it in blink). I think Jan Miksovsky, co-author of Apple's proposal, knows
 well that.

 We're (and consequently I'm) fully aware of that feature/proposal, and we
 still don't think it adequately addresses the needs of subclassing.

 The problem with "shadow as function" is that the superclass implicitly
 selects nodes based on a CSS selector, so unless the nodes a subclass wants
 to insert match exactly what the author of the superclass considered, the
 subclass won't be able to override it. e.g. if the superclass had an
 insertion point with select="input.foo", then it's not possible for a
 subclass to then override it with, for example, an input element wrapped in
 a span.

  The reason I reverted it from the spec (and the blink), [1], is a
 technical difficulty to implement, though I've not proved that it's
 impossible to implement.

 I'm not even arguing about the implementation difficulty. I'm saying that
 the semantics is inadequate for subclassing.

 - R. Niwa




RE: [components] Isolated Imports and Foreign Custom Elements

2015-04-28 Thread Jonathan Bond-Caron
On Thu Apr 23 02:58 PM, Maciej Stachowiak wrote:

 https://github.com/w3c/webcomponents/wiki/Isolated-Imports-Proposal
 
 I welcome comments on whether this approach makes sense.

Security rules are unclear, but I love this approach.

https://lists.w3.org/Archives/Public/public-webapps/2014JulSep/0024.html
(2) Ability to have the script associated with the component run in a separate 
“world”

An alternative syntax:
<link rel="loader" space="isolation-name"
      href="http://other-server.example.com/component-library.html">

Fits in nicely with ES realms/spaces/worlds/add your definition.




[Bug 28579] New: [Shadow]:

2015-04-28 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=28579

Bug ID: 28579
   Summary: [Shadow]:
   Product: WebAppsWG
   Version: unspecified
  Hardware: PC
OS: Windows NT
Status: NEW
  Severity: minor
  Priority: P2
 Component: Component Model
  Assignee: dglaz...@chromium.org
  Reporter: ty...@tylerlubeck.com
QA Contact: public-webapps-bugzi...@w3.org
CC: m...@w3.org, public-webapps@w3.org
Blocks: 14978

"Any element can host zero of one associated node tree, called shadow tree."


Should this say "Any element can host zero or one associated node trees, called
a shadow tree"?

-- 
You are receiving this mail because:
You are on the CC list for the bug.



Re: Inheritance Model for Shadow DOM Revisited

2015-04-28 Thread Ryosuke Niwa

 On Wed, Apr 29, 2015 at 2:09 AM Ryosuke Niwa rn...@apple.com wrote:
 
  On Apr 27, 2015, at 9:50 PM, Hayato Ito hay...@chromium.org wrote:
 
  The feature of shadow as function supports *subclassing*. That's 
  exactly the motivation I've introduced it once in the spec (and 
  implemented it in blink). I think Jan Miksovsky, co-author of Apple's 
  proposal, knows well that.
 
 We're (and consequently I'm) fully aware of that feature/proposal, and we 
 still don't think it adequately addresses the needs of subclassing.
 
 The problem with "shadow as function" is that the superclass implicitly 
 selects nodes based on a CSS selector, so unless the nodes a subclass wants 
 to insert match exactly what the author of the superclass considered, the 
 subclass won't be able to override it. e.g. if the superclass had an 
 insertion point with select="input.foo", then it's not possible for a 
 subclass to then override it with, for example, an input element wrapped in 
 a span.
 
  The reason I reverted it from the spec (and the blink), [1], is a 
  technical difficulty to implement, though I've not proved that it's 
  impossible to implement.
 
 I'm not even arguing about the implementation difficulty. I'm saying that 
 the semantics is inadequate for subclassing.

 On Apr 28, 2015, at 10:34 AM, Hayato Ito hay...@chromium.org wrote:
 
  Could you help me to understand what "implicitly" means here?

I mean that the superclass’s insertion points use a CSS selector to select nodes 
to distribute. As a result, unless the subclass can supply exactly the kinds of 
nodes that match the CSS selector, it won’t be able to override the contents of 
the insertion point.

 In this particular case, you might want to blame the super class's author and 
 tell the author, "Please use <content select=".input-foo"> so that a subclass 
 can override it with an arbitrary element with class="input-foo".

The problem is that it may not be possible to coordinate across a class hierarchy 
like that if the superclass was defined in a third-party library. With the 
named-slot approach, the superclass only specifies the name of a slot, so the 
subclass will be able to override it with whatever element it supplies as needed.

 Could you give me a concrete example which "content slot" can support, but 
 "shadow as function" can't support?

The problem isn’t so much that "content slot" can do something "shadow as 
function" can’t support. It’s that "shadow as function" promotes 
over-specification of what elements can get into its insertion points by 
virtue of using a CSS selector.

Now, it's possible that we could encourage authors to always use a class name 
in the select attribute to support this use case. But then why are we adding a 
capability that we then discourage authors from using?

- R. Niwa




Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
Wow, it's now super clear. Thanks for the detailed explanation.

Just a quick follow-up question to quench my curiosity: if I do list[1] and 
no one has ever asked the list for any element, Gecko will find the first two 
matching elements and store them in the list; if I then immediately do 
list[0], the first element is returned without walking the DOM (assuming 
there are at least two matching elements)? 

 querySelector("foo") and getElementsByTagName("foo")[0] can return different 
 nodes

Still a bit confused regarding this. If the premise is that the selector only 
contains characters allowed in a tag name, how can they return different nodes? 
Maybe I missed something. Unless you mean querySelector(":foo") and 
getElementsByTagName(":foo")[0] can return different results, which is obvious.

If by parsing the passed selector (or looking up the cached parsed selectors) you 
know it only contains a tag name, why is it a bit harder to optimize? You just 
look up the (tagname, root) hash table, no?

 In practice this hasn't come up as a bottleneck in anything we've profiled so 
 far

I'm probably prematurely optimizing my code. But nevertheless learned something 
quite valuable by asking. Thanks for answering it. :)


Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
 querySelector with an id selector does in fact benefit from the id hashtable

Looking at the microbenchmark again, for Gecko, getElementById is around 300x 
faster than querySelector('#id'), and even getElementsByClassName is faster 
than it. It doesn't look like it benefits much from an eagerly populated hash 
table?

P.S it's very interesting to see Gecko is around 100x faster than others when 
it comes to the performance of getElementById. It probably does something 
unusual?


Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
 Live node lists make all dom mutation slower
 
Haven't thought about this before. Thank you for pointing it out. So if I use, 
for example, lots of getElementsByClass() in the code, I'm actually slowing 
down all DOM mutating APIs?

Re: Why is querySelector much slower?

2015-04-28 Thread Elliott Sprehn
On Mon, Apr 27, 2015 at 11:13 PM, Glen Huang curvedm...@gmail.com wrote:

 On second thought, if the list returned by getElementsByClass() is lazy
 populated as Boris says, it shouldn't be a problem. The list is only
 updated when you access that list again.


The invalidation is what makes your code slower. Specifically any time you
mutate the tree, and you have live node lists, we traverse ancestors to
mark them as needing to be updated.

Blink (and likely other browsers) will eventually garbage collect the
LiveNodeList and then your DOM mutations will get faster again.
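
The cost described here is a property of live collections specifically. A sketch of the observable difference between a live HTMLCollection and a static NodeList snapshot (this assumes a browser DOM, so it is wrapped in a function that takes the document as a parameter):

```javascript
// Live vs. static collections (assumes a browser DOM is passed in;
// nothing runs at load time).
function demoLiveVsStatic(doc) {
  const live = doc.getElementsByClassName("item"); // live HTMLCollection
  const snap = doc.querySelectorAll(".item");      // static NodeList

  const el = doc.createElement("div");
  el.className = "item";
  doc.body.appendChild(el);

  // The live collection reflects the mutation; the snapshot does not.
  // While `live` stays reachable, every tree mutation pays the ancestor
  // invalidation walk; once it is collected, that overhead disappears.
  return { liveLength: live.length, snapLength: snap.length };
}
```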



 On Apr 28, 2015, at 2:08 PM, Glen Huang curvedm...@gmail.com wrote:

 Live node lists make all dom mutation slower

 Haven't thought about this before. Thank you for pointing it out. So if I
 use, for example, lots of getElementsByClass() in the code, I'm actually
 slowing down all DOM mutating APIs?





Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
But if I do getElementsByClass()[0], and the LiveNodeList is immediately garbage 
collectable, then if I change the DOM, Blink won't traverse ancestors, thus no 
penalty for DOM mutation?

 On Apr 28, 2015, at 2:28 PM, Elliott Sprehn espr...@chromium.org wrote:
 
 
 
 On Mon, Apr 27, 2015 at 11:13 PM, Glen Huang curvedm...@gmail.com wrote:
 On second thought, if the list returned by getElementsByClass() is lazy 
 populated as Boris says, it shouldn't be a problem. The list is only updated 
 when you access that list again.
 
 The invalidation is what makes your code slower. Specifically any time you 
 mutate the tree, and you have live node lists, we traverse ancestors to mark 
 them as needing to be updated.
 
 Blink (and likely other browsers) will eventually garbage collect the 
 LiveNodeList and then your DOM mutations will get faster again.
  
 
  On Apr 28, 2015, at 2:08 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Live node lists make all dom mutation slower
 
 Haven't thought about this before. Thank you for pointing it out. So if I 
 use, for example, lots of getElementsByClass() in the code, I'm actually 
 slowing down all DOM mutating APIs?
 
 



Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
On second thought, if the list returned by getElementsByClass() is lazy 
populated as Boris says, it shouldn't be a problem. The list is only updated 
when you access that list again.

 On Apr 28, 2015, at 2:08 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Live node lists make all dom mutation slower
 
 Haven't thought about this before. Thank you for pointing it out. So if I 
 use, for example, lots of getElementsByClass() in the code, I'm actually 
 slowing down all DOM mutating APIs?