Re: Is polyfilling future web APIs a good idea?

2015-08-11 Thread Glen Huang
Awesome. Now I think I understand the full picture you described.

When trying to offer a feature that is still being specced, prefix the specced 
APIs, and once the spec is stable, alias the prefixed APIs under their unprefixed 
names in browsers that don't ship them natively. Is that correct?
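
In code, is it something like this sketch (using the hypothetical foo() API from 
earlier in the thread)?

```
// While the spec is in flux: ship only the prefixed name.
HTMLElement.prototype._foo = function () {
  // experimental implementation of the proposed behavior
};

// Once the spec is stable: fill the unprefixed name where it's missing,
// so both old (_foo) and new (foo) call sites keep working.
if (!HTMLElement.prototype.foo) {
  HTMLElement.prototype.foo = HTMLElement.prototype._foo;
}

// And point the prefixed name at the (possibly native) unprefixed one,
// so existing _foo callers gain the native implementation where available.
HTMLElement.prototype._foo = HTMLElement.prototype.foo;
```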

 On Aug 10, 2015, at 9:33 PM, Brian Kardell bkard...@gmail.com wrote:
 
 On Aug 6, 2015 11:05 PM, Glen Huang curvedm...@gmail.com wrote:
 
   This assumes you'll match
 
 That's a good point. I agree that for most APIs it's probably better to simply 
 use the polyfill code in all browsers. But some APIs have extra benefits 
 that might not be polyfillable. For example, the native version of Web 
 Animations runs on the compositor thread, so Google's Web Animations 
 polyfill uses the native APIs when they exist and only provides the polyfill 
 when they don't.
 
 Right, and if ABC Co. shipped software a few years ago, before that was 
 available, they probably used jQuery animations, and if they haven't touched 
 their site since then it still works - in fact, it very likely works much 
 better, because the trend for performance is always improving.  If they came 
 back later and used the animations polyfill, it takes advantage of some 
 additional stuff on the compositor thread in some browsers.  If this is 
 really a polyfill, because it is a settled standard, then as browsers implement 
 it they should automatically get the same boosts, because we can ensure future 
 interop - no harm, no foul.  You couldn't, though, have a mix of all of those 
 worlds - we couldn't take old jQuery code, reinvent how it's 
 expressed/what it's capable of on the way to standards, and somehow 
 automatically fill the old code - the best you can do there is improve general 
 performance.  If there are underlying tools in some browsers that help you 
 solve a prollyfill better, you can use them in the same way - but you can't 
 really prognosticate that that's exactly how it's going to come out the other 
 end when the standard ships.
 
   But it's not deprecated in browsers that don't support it
 
 Probably I still don't quite understand how prollyfills work.
 
 
 I don't feel like there are standard best practices worked out here - I'm 
 giving you my own perspective as someone who has spent an inordinate amount 
 of time thinking about this, I am not speaking for anyone else here - your 
 mileage may vary - but I'm advocating that it's worth developing some.
 
 
 Let's say the prollyfill offered node._foo(), one browser shipped 
 experimental node.foo(), users ignored that, and used our polyfilled 
 version. Everyone was happy. Then other browsers come on board, and this API 
 becomes stable. Now, what should the prollyfill do?
 
  Should it still ship node._foo() and expect users to use that when most 
  browsers ship node.foo() and this API has a precise definition?
 
 Or should it deprecate node._foo(), polyfill node.foo() for browsers that 
 still don't support it, and encourage users to switch to node.foo()?
 
 Of course when it is a standard, released and interoperable - it's a polyfill... 
 For the polyfill maker, I expect at that point they would simply lose the 
 underscore and, perhaps to make it easy for authors, just make _foo 
 an alias of .foo (which was my example one-liner)...  It's entirely up to 
 authors whether they will even go back and update their code and imports and 
 how they will do so -- their old code will continue to work just fine.  But a 
 lot of authors will be happy to gain perf if there is no rewrite involved 
 (i.e., if they happened to use a version that is ultimately compatible with the 
 final standard), and there's not really a penalty in using an aliased name 
 for a method, so that's the approach I would likely use or at least document. 
   
 
  Or it should do something else?
 
 I don't quite understand the one-liner you gave:
 
  HTMLElement.prototype.foo = HTMLElement.prototype._foo;
 
 Why would you want to overwrite a native API with a polyfill? How does that 
 work for users? Users can choose either the native API or the prollyfill's 
 prefixed version, and they will both use the polyfill?
 
 You wouldn't want to overwrite a native API; I'm not suggesting that -- I'm 
 suggesting that if you have a prollyfill implementation already which happens 
 to match the final spec interoperably, you can both keep your existing uses 
 working and polyfill with the one-liner above for implementations that don't 
 support it.  I guess I thought that much was implied, sorry for the confusion, 
 but it's meant to be in an if or conditional of some kind. It could be as 
 simple as adding this to the end of your file at this point (i.e., when it is 
 actually a polyfill):
 
 HTMLElement.prototype.foo = HTMLElement.prototype.foo || 
 HTMLElement.prototype._foo;
 
 The important part here is that ideas for _foo can compete because it is up 
 to authors to decide what ._foo should look like because they import a 
 specific

Re: Is polyfilling future web APIs a good idea?

2015-08-06 Thread Glen Huang
 This assumes you'll match

That's a good point. I agree that for most APIs it's probably better to simply use 
the polyfill code in all browsers. But some APIs have extra benefits that 
might not be polyfillable. For example, the native version of Web Animations 
runs on the compositor thread, so Google's Web Animations polyfill uses the native 
APIs when they exist and only provides the polyfill when they don't.
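
The detection pattern would be roughly this sketch (Element.prototype.animate is 
the native Web Animations entry point; runFallbackAnimation is a hypothetical 
stand-in for the polyfill's main-thread implementation, not real code from 
Google's polyfill):

```
// Use the native, compositor-backed implementation when present, and only
// install the JS fallback when it isn't.
if (typeof Element.prototype.animate !== "function") {
  Element.prototype.animate = function (keyframes, options) {
    // Main-thread fallback: can't benefit from the compositor thread
    // the way the native version does.
    return runFallbackAnimation(this, keyframes, options);
  };
}
```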

 But it's not deprecated in browsers that don't support it

Probably I still don't quite understand how prollyfills work.

Let's say the prollyfill offered node._foo(), one browser shipped experimental 
node.foo(), users ignored that, and used our polyfilled version. Everyone was 
happy. Then other browsers come on board, and this API becomes stable. Now, what 
should the prollyfill do?

Should it still ship node._foo() and expect users to use that when most 
browsers ship node.foo() and this API has a precise definition?

Or should it deprecate node._foo(), polyfill node.foo() for browsers that still 
don't support it, and encourage users to switch to node.foo()?

Or it should do something else?

I don't quite understand the one-liner you gave:

HTMLElement.prototype.foo = HTMLElement.prototype._foo;

Why would you want to overwrite a native API with a polyfill? How does that work 
for users? Users can choose either the native API or the prollyfill's prefixed 
version, and they will both use the polyfill?

 On Aug 7, 2015, at 7:07 AM, Brian Kardell bkard...@gmail.com wrote:
 
 On Thu, Aug 6, 2015 at 6:50 PM, Glen Huang curvedm...@gmail.com wrote:
 @William @Matthew
 
 Ah, thanks. Now I think prollyfill is prolly a good name. :)
 
 @Brian
 
 Actually, I had this pattern in mind:
 
 When no browsers ship the API:
 
 ```
 if (HTMLElement.prototype.foo) {
   HTMLElement.prototype._foo = HTMLElement.prototype.foo;
 } else {
   HTMLElement.prototype._foo = polyfill;
 }
 ```
 
 This assumes you'll match, which - again, depending on how far along you are -
 might be a big bet... Personally, I wouldn't use that myself if
 writing something -- it seems a lot like when people simply provided N
 versions of the same prefixed properties instead of just one; it has the
 potential to go awry... No one can actually vary, because they've done
 the equivalent of shipping the unprefixed thing inadvertently -
 intending it to be an experiment, but it wasn't.
 
 
 When at least two browsers ship this API:
 
 ```
 if (!HTMLElement.prototype.foo) {
   HTMLElement.prototype.foo = polyfill;
 }
 HTMLElement.prototype._foo = function() {
   console.warn("deprecated");
   return this.foo();
 };
 ```
 
 But it's not deprecated in browsers that don't support it; it's a
 polyfill at that point, and aside from the console.warn (which again,
 in this case seems incorrect in the message at least) it should
 generally be identical to the one-liner I gave before - the prototype
 for _foo is the polyfill version.
 
 
 
 -- 
 Brian Kardell :: @briankardell :: hitchjs.com




Re: Is polyfilling future web APIs a good idea?

2015-08-06 Thread Glen Huang
@William @Matthew

Ah, thanks. Now I think prollyfill is prolly a good name. :)

@Brian

Actually, I had this pattern in mind:

When no browsers ship the API:

```
if (HTMLElement.prototype.foo) {
  HTMLElement.prototype._foo = HTMLElement.prototype.foo;
} else {
  HTMLElement.prototype._foo = polyfill;
}
```

When at least two browsers ship this API:

```
if (!HTMLElement.prototype.foo) {
  HTMLElement.prototype.foo = polyfill;
}
HTMLElement.prototype._foo = function() {
  console.warn("deprecated");
  return this.foo();
};
```



Re: Is polyfilling future web APIs a good idea?

2015-08-05 Thread Glen Huang
Thanks for the detailed explanation.

The only thing I'm not sure I understand is the pattern you described:

```
HTMLElement.prototype.foo = HTMLElement.prototype._foo;
```

I had this pattern in mind when you talked about prollyfills:

```
HTMLElement.prototype._foo = function() {
  if (HTMLElement.prototype.foo) return this.foo();
  return polyfill();
};
```

And users are expected to use it like html._foo(). My concern was that when most 
browsers ship HTMLElement.prototype.foo, users might want to change html._foo() 
to html.foo() so they can use the native version, and the prollyfill is expected 
to release a new version with

```
if (!HTMLElement.prototype.foo) {
  HTMLElement.prototype.foo = function() {
return polyfill();
  };
}
```

I was saying that changing html._foo() to html.foo() isn't that different from 
changing foo(html) to html.foo().

Where does HTMLElement.prototype.foo = HTMLElement.prototype._foo fit in the 
picture?

BTW, just curious, how did you come up with the name "prollyfill" :) ? Why 
add the R there?






Re: Is polyfilling future web APIs a good idea?

2015-08-04 Thread Glen Huang
On second thought, what's the difference between prollyfills and libraries that 
expose web APIs in a functional style (e.g., node1._replaceWith(node2) vs 
replaceWith(node2, node1))? Or in a wrapper style like jQuery does? Prefixing 
APIs doesn't seem to be that different from using custom APIs. You might say 
the prefixing approach resembles native APIs more closely, but when changing 
your code to use native APIs, modifying one character or several doesn't really 
make much difference (they are the same if you find & replace), as long as you 
have to modify the code.


Re: Is polyfilling future web APIs a good idea?

2015-08-03 Thread Glen Huang
Brian,

Prollyfills seem pragmatic. But what about when the logic of an API changes, 
but not the name? The node.replaceWith() API, for example, is about to be 
revamped to cover some edge cases. If the prollyfill exposed 
node._replaceWith(), what should it do when the new node.replaceWith() comes? 
Expose node._replaceWith2()? That doesn't seem to scale.

But I do see the benefit of prefixing in prollyfills. node.replaceWith() used 
to be node.replace(). If we had exposed _replace() earlier, we could swap the 
underlying function to node.replaceWith() when we release a new version, and 
old code would immediately benefit from the new API. But over time, prollyfills 
are going to accumulate a lot of obsolete APIs. Do you think we should use semver 
to introduce breaking changes? Or should these obsolete APIs always be there?
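
The swap I have in mind is something like this sketch (replacePolyfill is a 
hypothetical fallback implementation, not real code from any library):

```
// v1 shipped a prefixed _replace() based on a guess at the spec.
// A later release keeps the same entry point but delegates to the
// renamed native replaceWith() where it exists.
Element.prototype._replace = function (...nodes) {
  if (Element.prototype.replaceWith) {
    return this.replaceWith(...nodes);
  }
  return replacePolyfill(this, nodes);
};
```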

And if we are going this route, I think we need the blessing of the WG. They have 
to promise they will never design an API that starts with the prefix we used.

Let's say we write a prollyfill for the node.replace API. So our lib exposes 
node._replace
 On Aug 3, 2015, at 10:16 AM, Brian Kardell bkard...@gmail.com wrote:
 
 On Sun, Aug 2, 2015 at 9:39 PM, Glen Huang curvedm...@gmail.com wrote:
 I'm pretty obsessed with all kinds of web specs, and invest heavily in tools 
 based on future specs. I was discussing with Tab the other day about whether 
 he thinks using a css preprocessor that desugars future css is a good idea. 
 His answer was surprisingly (at least to me) negative, and recommended sass. 
 His arguments were that
 
 1. the grammar is in flux and can change
 2. css might never offer some constructs used in sass, or will only do so with 
 very low priority.
 
 I think these are good points, and it reduced my enthusiasm for future spec 
 based css preprocessors. But this got me thinking about polyfills for future 
 web APIs. Are they equally not recommended? Likewise, the APIs might change, 
 and for DOM operations we should rely on React, because the native DOM might 
 never offer such declarative APIs, or will only do so with very low priority. Do 
 polyfills like WebReflection's DOM4 look promising? For new projects, should 
 I stick with polyfills that only offer compatibility with older browsers, 
 and for future spec features only use libraries that offer similar features 
 but invent their own APIs, or should I track future specs and use these 
 unstable polyfills?
 
 I'm torn on this subject. Would like to be enlightened.
 [snip]
 
 TL;DR: Yes, I think they are good - really good actually, with some
 best practices.
 
 CSS is a slightly different beast at the moment because it is not
 (yet) extensible, but let's pretend for a moment that it is so that a
 uniform answer works ok...
 
 This was why I and others advocated defining the idea of/using the
 term prollyfill as opposed to a polyfill.  With a polyfill you are
 filling in gaps and cracks in browser support for an established
 standard, with a prollyfill you might be charting some new waters.  In
 a sense, you're taking a guess.  If history is any indicator then the
 chances that it will ultimately ship that way without change is very
 small until it really ships in two interoperable browsers that way.
 There's more to it than slight semantics too I think:  Polyfill was
 originally defined as above and now for many developers the
 expectation is that this is what it's doing.  In other words, it's
 just providing a fill for something which will ultimately be native,
 therefore won't change.  Except, as we are discussing, this might not
 be so.  Personally, I think this matters in a big way because so much
 depends on people understanding things:  If users had understood and
 respected vendor-prefixed CSS for use as intended, for example, they
 wouldn't have been much of a problem -- but they were.  Users didn't
 understand that and things shipped natively, so vendors had to adjust
 - things got messy.
 
 Debates about this took up a lot of email space in early extensible
 web cg lists - my own take remains unchanged, mileage may vary:
 
 It is my opinion that when possible, we should 'prefix' prollyfilled
 APIs - this could be something as simple as an underscore in DOM APIs
 or a --property in CSS, etc.  Hopefully this makes it obvious that
 it is not native and is subject to change, but that isn't the reason
 to do it.  The reason to do it is the one above:  it *may* actually
 change so you shouldn't mislead people to think otherwise - it
 potentially affects a lot.  For example, if something gets very
 popular masquerading as native, but no one will actually implement it
 natively without changes - then they are stuck having to deal with
 shitty compromises in standards to keep the web from breaking.  Also,
 what happens when devs sell a standard with the promise that it's
 going to be native and then we rip that rug out from underneath them.
 
 For me then, following a nice pattern where authors opt in and provide
 whether or not to prefix

Is polyfilling future web APIs a good idea?

2015-08-02 Thread Glen Huang
I'm pretty obsessed with all kinds of web specs, and invest heavily in tools 
based on future specs. I was discussing with Tab the other day about whether he 
thinks using a css preprocessor that desugars future css is a good idea. His 
answer was surprisingly (at least to me) negative, and recommended sass. His 
arguments were that

1. the grammar is in flux and can change
2. css might never offer some constructs used in sass, or will only do so with 
very low priority.

I think these are good points, and it reduced my enthusiasm for future spec 
based css preprocessors. But this got me thinking about polyfills for future 
web APIs. Are they equally not recommended? Likewise, the APIs might change, 
and for DOM operations we should rely on React, because the native DOM might 
never offer such declarative APIs, or will only do so with very low priority. Do 
polyfills like WebReflection's DOM4 look promising? For new projects, should I 
stick with polyfills that only offer compatibility with older browsers, and 
for future spec features only use libraries that offer similar features but 
invent their own APIs, or should I track future specs and use these unstable 
polyfills?

I'm torn on this subject. Would like to be enlightened.

My obsession with future-spec-based tools doesn't come out of nowhere. 
CoffeeScript used to offer sugar for ES5. But then ES2015 caught up, and 
CoffeeScript now looks obsolete, since its user base is likely to migrate to ES 
over time. The same goes for GSAP vs Web Animations. So I have this feeling that 
technologies without the blessing of specs/browser vendors are likely to be 
abandoned eventually. So instead of investing in custom-designed APIs, I feel 
it's more sustainable to bet on spec APIs. What's your take on this topic?

P.S. I called out some projects. I by no means slight these projects or 
their authors in any way. The projects usually offer some useful higher 
abstractions, and the authors are all extremely talented and I respect them a 
lot. This is more from a user's point of view, and about how they should choose 
which technologies to use.


Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
Wow, this is pure gold. Thank you so much for such a thorough explanation; you 
even took the trouble to actually implement optimizations to make sure the 
numbers are right. I'm so grateful for the work you put into this just to 
answer my question. How do I accept your answer here? ;)

 So what you're seeing is that the benchmark claims the operation is performed 
 in 1-2 clock cycles

I never thought about relating ops/sec numbers to clock cycles. Thanks for the 
tip.

 So what this getElementById benchmark measures is how fast a loop counter can 
 be decremented from some starting value to 0.

This makes so much sense now.

 because of the proxy machinery involved on the JS engine side

Do you mean the cost introduced by passing a C++ object into the ECMAScript world?

 In this case, those all seem to have about the same cost;

I now see why querySelector has some extra work to do.

 But for real-life testcases algorithmic complexity can often be much more 
 important.

Yes. But I suddenly find microbenchmarks to be a wonderful conversation 
starter. ;)

Thanks again for all the explanations, I'm motivated by them to actually dig 
into the engine source code to discover things myself next time (probably not 
easy, but should be rewarding). :)


Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
Wow, it's now super clear. Thanks for the detailed explanation.

Just a quick follow-up question to quench my curiosity: if I do list[1] and 
no one has ever asked the list for any element, Gecko will find the first two 
matching elements and store them in the list; if I then immediately do 
list[0], the first element is returned without walking the DOM (assuming 
there are at least two matching elements)?

 querySelector("foo") and getElementsByTagName("foo")[0] can return different 
 nodes

Still a bit confused regarding this. If the premise is that the selector only 
contains characters allowed in a tag name, how can they return different nodes? 
Maybe I missed something? Unless you mean querySelector(":foo") and 
getElementsByTagName(":foo")[0] can return different results, which is obvious.

If by parsing the passed selector (or looking up the cached parsed selectors) you 
know it only contains a tag name, why is it a bit harder to optimize? You just 
look up the (tagname, root) hash table, no?

 In practice this hasn't come up as a bottleneck in anything we've profiled so 
 far

I'm probably prematurely optimizing my code. But I nevertheless learned something 
quite valuable by asking. Thanks for answering. :)


Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
 querySelector with an id selector does in fact benefit from the id hashtable

Looking at the microbenchmark again: for Gecko, getElementById is around 300x 
faster than querySelector('#id'), and even getElementsByClassName is faster 
than it. It doesn't look like it benefits much from an eagerly populated hash 
table?

P.S. It's very interesting to see that Gecko is around 100x faster than the 
others when it comes to the performance of getElementById. It probably does 
something unusual?


Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
 Live node lists make all dom mutation slower
 
Haven't thought about this before. Thank you for pointing it out. So if I use, 
for example, lots of getElementsByClassName() in my code, am I actually slowing 
down all DOM-mutating APIs?

Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
But if I do getElementsByClassName()[0], and the LiveNodeList is immediately 
garbage-collectable, then if I change the DOM, Blink won't traverse ancestors, 
and thus there's no penalty for DOM mutation?

 On Apr 28, 2015, at 2:28 PM, Elliott Sprehn espr...@chromium.org wrote:
 
 
 
 On Mon, Apr 27, 2015 at 11:13 PM, Glen Huang curvedm...@gmail.com wrote:
 On second thought, if the list returned by getElementsByClassName() is lazily 
 populated as Boris says, it shouldn't be a problem. The list is only updated 
 when you access that list again.
 
 The invalidation is what makes your code slower. Specifically any time you 
 mutate the tree, and you have live node lists, we traverse ancestors to mark 
 them as needing to be updated.
 
 Blink (and likely other browsers) will eventually garbage collect the 
 LiveNodeList and then your DOM mutations will get faster again.
  
 
 On Apr 28, 2015, at 2:08 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Live node lists make all dom mutation slower
 
 Haven't thought about this before. Thank you for pointing it out. So if I 
 use, for example, lots of getElementsByClassName() in my code, am I actually 
 slowing down all DOM-mutating APIs?
 
 



Re: Why is querySelector much slower?

2015-04-28 Thread Glen Huang
On second thought, if the list returned by getElementsByClassName() is lazily 
populated as Boris says, it shouldn't be a problem. The list is only updated 
when you access that list again.

 On Apr 28, 2015, at 2:08 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Live node lists make all dom mutation slower
 
 Haven't thought about this before. Thank you for pointing it out. So if I 
 use, for example, lots of getElementsByClassName() in my code, am I actually 
 slowing down all DOM-mutating APIs?



Why is querySelector much slower?

2015-04-27 Thread Glen Huang
Intuitively, querySelector('.class') only needs to find the first matching 
node, whereas getElementsByClassName('class')[0] needs to find all matching 
nodes and then return the first. The former should be a lot quicker than the 
latter. Why is that not the case?

See http://jsperf.com/queryselectorall-vs-getelementsbytagname/119 for the test.

I know querySelectorAll is slow because of the static nature of the returned 
NodeList, but this shouldn't be an issue for querySelector.

Re: Why is querySelector much slower?

2015-04-27 Thread Glen Huang
Thank you for the sample code. It's very helpful.

When you say "var node = list[0]; walks the DOM until the first item is found", 
do you mean this only happens when some previous code has changed the DOM 
structure? If not, the returned list object will be marked as up-to-date, and 
accessing the first element is very cheap? I ask because in the first paragraph 
you said the returned list and the returned first element are probably 
precomputed.

Also, this is my mental model after reading your explanation, I wonder if 
that's correct:

After the UA has parsed the HTML, it caches a hash table of elements with class 
names (also all elements with ids, all elements with tag names, etc., in 
different hash tables), keyed under the class names. When getElementsByClassName() 
is called and the DOM hasn't been modified, it simply creates a list of elements 
with that class name from the hash table. When the first element is accessed from 
that list, and the DOM still isn't modified, the element is returned directly.

The hash table is kept in sync with the DOM when it's modified. And if the DOM 
is changed after the list is returned but before it's accessed, the list will 
be marked as dirty, and accessing its elements will walk the DOM (and mark the 
list as partially updated after that).

Is this description correct?

And the final question:

Why can't querySelector benefit from these hash tables? I currently feel the 
urge to optimize it myself by overriding it with a custom function that 
parses the passed selector and, if it's a simple selector like "div", ".class", 
or "#id", calls the corresponding getElement*() function instead. Why can't UAs 
perform this for us?
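
A rough sketch of the userland fast path I mean (handling only single-token 
selectors and falling back to the native method for everything else):

```
var nativeQS = Document.prototype.querySelector;
Document.prototype.querySelector = function (selector) {
  // Route "div", ".class", and "#id" to the specialized getters.
  var m = /^([#.]?)([\w-]+)$/.exec(selector);
  if (m) {
    if (m[1] === "#") return this.getElementById(m[2]);
    if (m[1] === ".") return this.getElementsByClassName(m[2])[0] || null;
    return this.getElementsByTagName(m[2])[0] || null;
  }
  // Anything more complex goes through the real selector engine.
  return nativeQS.call(this, selector);
};
```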

If my mental model is correct, it's simpler than getElement*() from a UA's 
point of view. It simply needs to look up the first matching element in the 
hash table and return it; no need to return a list and mark it as clean or 
dirty any more. The only price it pays is parsing the selector.

Is it because authors don't use querySelector often enough that UAs aren't 
interested in optimizing it?

 On Apr 27, 2015, at 9:51 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 
 On 4/27/15 4:57 AM, Glen Huang wrote:
 Intuitively, querySelector('.class') only needs to find the first
 matching node, whereas getElementsByClassName('class')[0] needs to find
 all matching nodes
 
 Not true; see below.
 
 and then return the first. The former should be a lot
 quicker than the latter. Why that's not the case?
 
 See http://jsperf.com/queryselectorall-vs-getelementsbytagname/119 for
 the test
 
 All getElementsByClassName("foo") has to do in a microbenchmark like this is 
 look up a cached list (probably a single hashtable lookup) and return its 
 first element (likewise precomputed, unless you're modifying the DOM in ways 
 that would affect the list).  It doesn't have to walk the tree at all.
 
 querySelector(".foo"), on the other hand, probably walks the tree at the 
 moment in implementations.
 
 Also, back to the "not true" above: since the list returned by getElementsBy* 
 is live and periodically needs to be recomputed anyway, and since grabbing 
 just its first element is a common usage pattern, Gecko's implementation is 
 actually lazy (see https://bugzilla.mozilla.org/show_bug.cgi?id=104603#c0 for 
 the motivation): it will only walk as much of the DOM as needed to reply to 
 the query being made.  So for example:
 
  // Creates a list object, doesn't do any walking of the DOM, marks the
  // object as dirty and returns it.
  var list = document.getElementsByClassName("foo");
 
  // Walks the DOM until it finds the first element of the list, marks
  // the list as partially updated, and returns that first element.
  var node = list[0];
 
  // Marks the list as dirty again, since the set of nodes it matches
  // has changed
  document.documentElement.className = "foo";
 
 I can't speak for other UAs here, but the assumption that 
 getElementsByClassName('class')[0] needs to find all matching nodes is just 
 not true in Gecko.
 
 -Boris




Re: Why is querySelector much slower?

2015-04-27 Thread Glen Huang
I wonder why querySelector can't get the same optimization: if the passed 
selector is a simple selector like ".class", do exactly what 
getElementsByClassName('class')[0] does?

 On Apr 28, 2015, at 10:51 AM, Ryosuke Niwa rn...@apple.com wrote:
 
 
 On Apr 27, 2015, at 7:04 PM, Jonas Sicking jo...@sicking.cc wrote:
 
 On Mon, Apr 27, 2015 at 1:57 AM, Glen Huang curvedm...@gmail.com wrote:
 Intuitively, querySelector('.class') only needs to find the first matching
 node, whereas getElementsByClassName('class')[0] needs to find all matching
 nodes and then return the first. The former should be a lot quicker than the
 latter. Why is that not the case?
 
 I can't speak for other browsers, but Gecko-based browsers only search
 the DOM until the first hit for getElementsByClassName('class')[0].
 I'm not sure why you say that it must scan for all hits.
 
 WebKit (and, AFAIK, Blink) has the same optimization. It's a very important 
 optimization.
 
 - R. Niwa
 




Re: JSON imports?

2015-04-21 Thread Glen Huang
I just checked the HTML living standard; all it says about prefetch is:

 The prefetch keyword indicates that preemptively fetching and caching the 
 specified resource is likely to be beneficial

This is pretty vague, and I sense the caching mechanism used by one vendor is 
not guaranteed to be the same as another's.

<script type="application/json" src="foo.json"></script>

It is very easy to understand how this works, and it's much more succinct than 
prefetching and then refetching the same URL (all you want is to load the JSON 
file asap, but you need to spread the logic across two places). Why not support 
it directly? The only downside I can see is that it should probably honor CORS 
headers, thus making it work a bit differently from a vanilla script, but 
that's something that should be easy to grasp too.
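
For comparison, this is roughly what authors have to write today (a sketch; it 
assumes the fetch() API and the inline/external elements from my earlier mail):

```
// Inline JSON data: parse the script element's text content by hand.
var bar = JSON.parse(document.getElementById("bar").textContent);

// External JSON data: a separate request for the same URL, which is only
// cheap if it actually hits the prefetch cache (no guarantee, as above).
fetch("foo.json")
  .then(function (response) { return response.json(); })
  .then(function (foo) { /* use foo */ });
```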

 On Apr 19, 2015, at 5:15 PM, Elliott Sprehn espr...@chromium.org wrote:
 
 I'd hope with prefetch that we'd keep the data around in the memory cache 
 waiting for the request.
 
 On Apr 18, 2015 7:07 AM, Glen Huang curvedm...@gmail.com wrote:
 Didn't know about this trick. Thanks.
 
 But I guess you have to make sure the file being prefetched has a long 
 cache time set in its HTTP headers? Otherwise, when it's fetched, the file is 
 going to be downloaded again?
 
 What if you don't have control over the JSON file's HTTP headers?
 
 On Apr 18, 2015, at 10:12 AM, Elliott Sprehn espr...@chromium.org wrote:
 
 <link rel=prefetch> does that for you.
 
 On Apr 17, 2015 7:08 PM, Glen Huang curvedm...@gmail.com wrote:
 One benefit is that browsers can start downloading it asap, instead of 
 waiting until the fetch code is executed (which could itself be in a separate 
 file).
 
 On Apr 18, 2015, at 8:41 AM, Elliott Sprehn espr...@chromium.org wrote:
 
 
 
 On Fri, Apr 17, 2015 at 6:33 AM, Glen Huang curvedm...@gmail.com wrote:
 A basic feature like this shouldn't rely on a custom solution. However, it 
 does mean that if browsers implement this, it's easily polyfillable.
 
 What does this get you over fetch()? Imports run scripts and enforce 
 ordering and deduplication. Importing JSON doesn't really make much sense.
 
 
 On Apr 17, 2015, at 9:23 PM, Wilson Page wilsonp...@me.com wrote:
 
 Sounds like something you could write yourself with custom elements. Yay 
 extensible web :)
 
 On Fri, Apr 17, 2015 at 1:32 PM, Matthew Robb matthewwr...@gmail.com wrote:
 I like the idea of this. It reminds me of polymer's core-ajax component.
 
 On Apr 16, 2015 11:39 PM, Glen Huang curvedm...@gmail.com wrote:
 Inspired by HTML imports, can we add JSON imports too?
 
 ```html
 <script type="application/json" src="foo.json" id="foo"></script>
 <script type="application/json" id="bar">
 { "foo": "bar" }
 </script>
 ```
 
 ```js
 document.getElementById("foo").json // or whatever
 document.getElementById("bar").json
 ```
 
 
 
 
 
 



Re: JSON imports?

2015-04-18 Thread Glen Huang
Didn't know about this trick. Thanks.

But I guess you have to make sure the file being prefetched has a long 
cache time set in its HTTP headers? Otherwise, when it's fetched, the file is 
going to be downloaded again?

What if you don't have control over the JSON file's HTTP headers?

 On Apr 18, 2015, at 10:12 AM, Elliott Sprehn espr...@chromium.org wrote:
 
 <link rel=prefetch> does that for you.
 
 On Apr 17, 2015 7:08 PM, Glen Huang curvedm...@gmail.com wrote:
 One benefit is that browsers can start downloading it asap, instead of 
 waiting until the fetch code is executed (which could itself be in a separate 
 file).
 
 On Apr 18, 2015, at 8:41 AM, Elliott Sprehn espr...@chromium.org wrote:
 
 On Fri, Apr 17, 2015 at 6:33 AM, Glen Huang curvedm...@gmail.com wrote:
 A basic feature like this shouldn't rely on a custom solution. However, it 
 does mean that if browsers implement this, it's easily polyfillable.
 
 What does this get you over fetch()? Imports run scripts and enforce 
 ordering and deduplication. Importing JSON doesn't really make much sense.
 
 
 On Apr 17, 2015, at 9:23 PM, Wilson Page wilsonp...@me.com wrote:
 
 Sounds like something you could write yourself with custom elements. Yay 
 extensible web :)
 
 On Fri, Apr 17, 2015 at 1:32 PM, Matthew Robb matthewwr...@gmail.com wrote:
 I like the idea of this. It reminds me of polymer's core-ajax component.
 
 On Apr 16, 2015 11:39 PM, Glen Huang curvedm...@gmail.com wrote:
 Inspired by HTML imports, can we add JSON imports too?
 
 ```html
 <script type="application/json" src="foo.json" id="foo"></script>
 <script type="application/json" id="bar">
 { "foo": "bar" }
 </script>
 ```
 
 ```js
 document.getElementById("foo").json // or whatever
 document.getElementById("bar").json
 ```
 
 
 
 
 



Re: JSON imports?

2015-04-17 Thread Glen Huang
A basic feature like this shouldn't rely on a custom solution. However, it does 
mean that if browsers implement this, it's easily polyfillable.

 On Apr 17, 2015, at 9:23 PM, Wilson Page wilsonp...@me.com wrote:
 
 Sounds like something you could write yourself with custom elements. Yay 
 extensible web :)
 
 On Fri, Apr 17, 2015 at 1:32 PM, Matthew Robb matthewwr...@gmail.com wrote:
 I like the idea of this. It reminds me of polymer's core-ajax component.
 
 On Apr 16, 2015 11:39 PM, Glen Huang curvedm...@gmail.com wrote:
 Inspired by HTML imports, can we add JSON imports too?
 
 ```html
 <script type="application/json" src="foo.json" id="foo"></script>
 <script type="application/json" id="bar">
 { "foo": "bar" }
 </script>
 ```
 
 ```js
 document.getElementById("foo").json // or whatever
 document.getElementById("bar").json
 ```
 
 



Re: JSON imports?

2015-04-17 Thread Glen Huang
One benefit is that browsers can start downloading it asap, instead of waiting 
until the fetch code is executed (which could itself be in a separate file).

 On Apr 18, 2015, at 8:41 AM, Elliott Sprehn espr...@chromium.org wrote:
 
 
 
 On Fri, Apr 17, 2015 at 6:33 AM, Glen Huang curvedm...@gmail.com wrote:
 A basic feature like this shouldn't rely on a custom solution. However, it does 
 mean that if browsers implement this, it's easily polyfillable.
 
 What does this get you over fetch()? Imports run scripts and enforce 
 ordering and deduplication. Importing JSON doesn't really make much sense.
 
 
 On Apr 17, 2015, at 9:23 PM, Wilson Page wilsonp...@me.com wrote:
 
 Sounds like something you could write yourself with custom elements. Yay 
 extensible web :)
 
 On Fri, Apr 17, 2015 at 1:32 PM, Matthew Robb matthewwr...@gmail.com wrote:
 I like the idea of this. It reminds me of polymer's core-ajax component.
 
 On Apr 16, 2015 11:39 PM, Glen Huang curvedm...@gmail.com wrote:
 Inspired by HTML imports, can we add JSON imports too?
 
 ```html
 <script type="application/json" src="foo.json" id="foo"></script>
 <script type="application/json" id="bar">
 { "foo": "bar" }
 </script>
 ```
 
 ```js
 document.getElementById("foo").json // or whatever
 document.getElementById("bar").json
 ```
 
 
 
 



JSON imports?

2015-04-16 Thread Glen Huang
Inspired by HTML imports, can we add JSON imports too?

```html
<script type="application/json" src="foo.json" id="foo"></script>
<script type="application/json" id="bar">
{ "foo": "bar" }
</script>
```

```js
document.getElementById("foo").json // or whatever
document.getElementById("bar").json
```
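
A userland approximation of the proposed accessor might look like this sketch 
(inline scripts only; the src case would additionally need a fetch plus the 
CORS handling discussed later in this thread):

```js
// Expose a .json getter on script elements that parses the element's
// JSON payload on access.
Object.defineProperty(HTMLScriptElement.prototype, "json", {
  get: function () {
    if (this.type !== "application/json") return undefined;
    return JSON.parse(this.textContent);
  }
});
```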



[IndexedDB] When is an error event dispatched at a transaction?

2015-02-05 Thread Glen Huang
The IDBTransaction interface exposes an onerror event handler. I wonder when 
that handler gets called? The "steps for aborting a transaction" algorithm 
dispatches error events at the requests of the transaction, but never at the 
transaction itself; only an abort event is dispatched, if I understand the spec 
correctly.

If that is true, why expose the onerror event handler on the IDBTransaction 
interface?


Re: [IndexedDB] When is an error event dispatched at a transaction?

2015-02-05 Thread Glen Huang
Darn it, I forgot they bubble. Thank you for the detailed explanation.

 On Feb 6, 2015, at 1:59 AM, Joshua Bell jsb...@google.com wrote:
 
 On Thu, Feb 5, 2015 at 12:58 PM, Glen Huang curvedm...@gmail.com wrote:
 The IDBTransaction interface exposes an onerror event handler. I wonder when 
 that handler gets called? The "steps for aborting a transaction" algorithm 
 dispatches error events at the requests of the transaction, but never at the 
 transaction itself; only an abort event is dispatched, if I understand the 
 spec correctly.
 
 If that is true, why expose the onerror event handler on the IDBTransaction 
 interface?
 
 
 In the steps, 3.3.12 "Fire an error event" says: "The event bubbles and is 
 cancelable. The propagation path for the event is the transaction's 
 connection, then transaction and finally request." Which is to say: if 
 cancelBubble() is not called, the event will bubble from the request to the 
 transaction to the connection.
 
 A common use case is to attach an error handler on the transaction or 
 database connection to e.g. log errors back to the server, rather than having 
 to attach such a handler to every request.
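 
 For example (a sketch; the database name and logging helper are made up):
 
 ```
 var open = indexedDB.open("mydb");
 open.onsuccess = function () {
   var db = open.result;
   // One handler on the connection sees errors from every request,
   // via the request -> transaction -> connection propagation path.
   db.onerror = function (event) {
     logToServer(event.target.error); // event.target is the failed request
   };
 };
 ```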
 



Re: oldNode.replaceWith(...collection) edge case

2015-01-27 Thread Glen Huang
That before/after/replaceWith behave the same in this case is just a side effect 
of the DOM trying to be less surprising and more symmetrical for the curious 
ones. I doubt most people are even aware they behave the same in this case. 
Whenever the use cases come up, I believe most people will just use replaceWith.

 On Jan 27, 2015, at 8:51 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Thu, Jan 22, 2015 at 11:43 AM, Jonas Sicking jo...@sicking.cc wrote:
 In general I agree that it feels unintuitive that you can't replace a node
 with a collection which includes the node itself. So the extra line or two
 of code seems worth it.
 
 You don't think it's weird that before/after/replaceWith all end up
 doing the same for that scenario? Perhaps it's okay...
 
 
 -- 
 https://annevankesteren.nl/




Re: oldNode.replaceWith(...collection) edge case

2015-01-21 Thread Glen Huang
I have two more things to add.

1. One more reason why the context node should be allowed as an argument.

These work as intended:

node.before(node)
node.after(node)
node.replaceWith(node)

By passing an additional argument, they suddenly fail:

node.before(node, another)
node.after(node, another)
node.replaceWith(node, another)

This doesn’t feel right.

2. Algorithm performance

The performance hit is based on the assumption that an implementation will 
follow the spec algorithm closely and use a document fragment when multiple 
arguments are passed. In reality, I believe an implementation can totally get 
away without that and insert the arguments directly, as long as it can make sure 
the final DOM structure is correct, mutation records are queued, ranges are 
modified, etc. Nothing in the algorithm dictates the use of a document 
fragment.

For example, for the before() method, I can imagine it could be implemented 
like this (in pseudo code, ignoring mutation records, ranges, etc.):

ensure context node validity
set prevInserted to null
for node from last argument to first argument (if the argument is a document 
fragment, from its last child to its first child):
    if node is the context node:
        if prevInserted is not null:
            insert node before it
            set prevInserted to node
        else:
            set prevInserted to node
            continue
    else:
        insert node before the context node
        set prevInserted to node

This algorithm shouldn’t slow normal operations down, and I wonder if the spec 
could use an algorithm like this and not using document fragment.

 On Jan 21, 2015, at 5:50 PM, Glen Huang curvedm...@gmail.com wrote:
 
 @Boris @Simon
 
 From the jsperf, it looks like Blink is indeed making document fragments 
 obsolete. Running the tests in WebKit, it's still way faster with a document 
 fragment, but I believe the gap will narrow in the future. (I really think 
 authors shouldn't have to rely on it to get good performance.)
 
 Thank you for debunking the myth for me. :)
 
 Now, let's go back to the original topic. I brought up document fragments 
 because I wanted to argue that by disallowing passing the context node as an 
 argument, authors would be unable to find an equally performant solution. You 
 guys tell me that's not the case. I agree, and I will drop that.
 
 But I still don’t feel disallowing that is the right way to go, for two 
 reasons:
 
 1. Passing context node as an argument does have a meaningful result, and 
 practical use cases.
 2. Although the use cases might not come up often, these are native DOM 
 methods, and everybody is expected to use them. Given the huge user base, the 
 use cases might not be that rare either.
 
 I see the argument against it is probably that it may slow down normal 
 operations too. But is that really true? The key here is to find the correct 
 insertion point after the macro action. Although in the spec algorithm, using 
 a transient node seems to be the most performant way (and it slows down 
 normal operations), I doubt it’s impossible for an actual implementation to 
 optimize that.
 
 This isn't some feature that can be disallowed now and allowed in the future. 
 And by disallowing it, I think it qualifies as a gotcha when you do need it.
 
 But that's just my personal feeling. If it turns out it's really not that 
 cheap to implement, I'd be happy to correctly use before() and after() and 
 not root for something that will slow everybody down. :)
 
 On Jan 21, 2015, at 4:52 PM, Simon Pieters sim...@opera.com wrote:
 
 On Wed, 21 Jan 2015 00:45:32 +0100, Glen Huang curvedm...@gmail.com wrote:
 
 Ah, thank you for letting me know.
 
 I vaguely remember document fragments were introduced just to reduce reflows. 
 Looks like this best practice is obsolete now? (I remember myself wondering 
 why browsers couldn't optimize that back then.) Many people still suggest it 
 though, including Google 
 (https://developers.google.com/speed/articles/javascript-dom, the 
 "DocumentFragment DOM Generation" section), and you can find more by 
 googling "why use document fragment".
 
 I think that article is a bit misguided. Changing a class does trigger a 
 reflow, but it doesn't force a reflow while the script is running (maybe it 
 does in old browsers). Asking for layout information does force a reflow.
 
 I think document fragments have been faster in several browsers and maybe 
 still are, but in Blink at least it appears that the different methods are 
 getting about equally fast. It probably depends on how you do it, though. This 
 jsperf might be interesting:
 
 http://jsperf.com/appendchild-vs-documentfragment-vs-innerhtml/81
 
 So to recap, when you have the need to pass the context node as an argument 
 along with other nodes, just use before() and after() to insert these other 
 nodes

Re: oldNode.replaceWith(...collection) edge case

2015-01-21 Thread Glen Huang
@Boris @Simon

From the jsperf, it looks like Blink is indeed making document fragments 
obsolete. Running the tests in WebKit, it's still way faster with a document 
fragment, but I believe the gap will narrow in the future. (I really think 
authors shouldn't have to rely on it to get good performance.)

Thank you for debunking the myth for me. :)

Now, let's go back to the original topic. I brought up document fragments 
because I wanted to argue that by disallowing passing the context node as an 
argument, authors would be unable to find an equally performant solution. You 
guys tell me that's not the case. I agree, and I will drop that.

But I still don’t feel disallowing that is the right way to go, for two reasons:

1. Passing context node as an argument does have a meaningful result, and 
practical use cases.
2. Although the use cases might not come up often, these are native DOM 
methods, and everybody is expected to use them. Given the huge user base, the 
use cases might not be that rare either.

I see the argument against it is probably that it may slow down normal 
operations too. But is that really true? The key here is to find the correct 
insertion point after the macro action. Although in the spec algorithm, using a 
transient node seems to be the most performant way (and it slows down normal 
operations), I doubt it’s impossible for an actual implementation to optimize 
that.

This isn't some feature that can be disallowed now and allowed in the future. 
And by disallowing it, I think it qualifies as a gotcha when you do need it.

But that's just my personal feeling. If it turns out it's really not that 
cheap to implement, I'd be happy to correctly use before() and after() and not 
root for something that will slow everybody down. :)

 On Jan 21, 2015, at 4:52 PM, Simon Pieters sim...@opera.com wrote:
 
 On Wed, 21 Jan 2015 00:45:32 +0100, Glen Huang curvedm...@gmail.com wrote:
 
 Ah, thank you for letting me know.
 
 I vaguely remember document fragments were introduced just to reduce reflows. 
 Looks like this best practice is obsolete now? (I remember myself wondering 
 why browsers couldn't optimize that back then.) Many people still suggest it 
 though, including Google 
 (https://developers.google.com/speed/articles/javascript-dom, the 
 "DocumentFragment DOM Generation" section), and you can find more by 
 googling "why use document fragment".
 
 I think that article is a bit misguided. Changing a class does trigger a 
 reflow, but it doesn't force a reflow while the script is running (maybe it 
 does in old browsers). Asking for layout information does force a reflow.
 
 I think document fragments have been faster in several browsers and maybe 
 still are, but in Blink at least it appears that the different methods are 
 getting about equally fast. It probably depends on how you do it, though. This 
 jsperf might be interesting:
 
 http://jsperf.com/appendchild-vs-documentfragment-vs-innerhtml/81
 
 So to recap, when you have the need to pass the context node as an argument 
 along with other nodes, just use before() and after() to insert these other 
 nodes? And even inserting them one by one is fine?
 
 Yeah.
 
 -- 
 Simon Pieters
 Opera Software




Re: oldNode.replaceWith(...collection) edge case

2015-01-20 Thread Glen Huang
jQuery doesn't support that, for performance and code-size reasons: 

http://bugs.jquery.com/ticket/14380
https://github.com/jquery/jquery/pull/1276#issuecomment-24526014

Both reasons shouldn’t be a problem with the native DOM.

 On Jan 20, 2015, at 6:39 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Sun, Jan 18, 2015 at 4:40 AM, Glen Huang curvedm...@gmail.com wrote:
 To generalize the use case, when you have a bunch of nodes, some of which 
 need to be inserted before a node, and some of which after it, you are 
 likely to want `replaceWith` to accept the context node as an argument.
 
 This sounds somewhat reasonable, but I haven't been able to reproduce
 this in existing libraries. E.g. in jQuery
 
  $("div").replaceWith([$("div"), "<b>test</b>"])
 
 ends up as just <b>test</b>...
 
 
 -- 
 https://annevankesteren.nl/



Re: oldNode.replaceWith(...collection) edge case

2015-01-20 Thread Glen Huang
I wonder what the correct method should be? For the example I gave in the 
previous mail, it looks like I have to either create two fragments (and compute 
which nodes go into which fragment) and insert them before or after the node 
(two reflows), or implement the transient-node algorithm myself (but with no 
ability to suppress observers, and also three reflows: insert fake node, pull 
out context node, insert fragment; I guess if browsers implemented it natively, 
they could reduce it to just one reflow?). Neither sounds very optimal.

 On Jan 20, 2015, at 9:34 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Tue, Jan 20, 2015 at 2:22 PM, Glen Huang curvedm...@gmail.com wrote:
 jQuery doesn't support that, for performance and code-size reasons:
 
 http://bugs.jquery.com/ticket/14380
 https://github.com/jquery/jquery/pull/1276#issuecomment-24526014
 
 Both reasons shouldn't be a problem with the native DOM.
 
 They are. I also realized that if we did this we would have to make
 before(), after(), and replaceWith() identical when passed the context
 object. Not allowing the context object and requiring usage of the
 correct method seems simpler.
 
 
 -- 
 https://annevankesteren.nl/




Re: oldNode.replaceWith(...collection) edge case

2015-01-20 Thread Glen Huang
Ah, thank you for letting me know.

I vaguely remember document fragments were introduced just to reduce reflows. 
Looks like this best practice is obsolete now? (I remember myself wondering why 
browsers couldn't optimize that back then.) Many people still suggest it though, 
including Google (https://developers.google.com/speed/articles/javascript-dom, 
the "DocumentFragment DOM Generation" section), and you can find more by 
googling "why use document fragment".

So to recap, when you have the need to pass the context node as an argument 
along with other nodes, just use before() and after() to insert these other 
nodes? And even inserting them one by one is fine?

 On Jan 20, 2015, at 11:57 PM, Simon Pieters sim...@opera.com wrote:
 
 On Tue, 20 Jan 2015 15:00:41 +0100, Glen Huang curvedm...@gmail.com wrote:
 
 I wonder what the correct method should be? For the example I gave in the 
 previous mail, it looks like I have to either create two fragments (and 
 compute which nodes go into which fragment) and insert them before or after 
 the node (two reflows), or implement the transient-node algorithm myself 
 (but with no ability to suppress observers, and also three reflows: insert 
 fake node, pull out context node, insert fragment; I guess if browsers 
 implemented it natively, they could reduce it to just one reflow?). Neither 
 sounds very optimal.
 
 In all cases it would be just one reflow after the script has finished, 
 unless you force a reflow by asking for layout information (e.g. offsetTop) 
 between the mutations.
 
 -- 
 Simon Pieters
 Opera Software



Re: oldNode.replaceWith(...collection) edge case

2015-01-17 Thread Glen Huang
Oh crap. Just realized saving the index won't work if the context node's previous 
siblings are passed as arguments. Looks like inserting a transient node is still 
the best way.

 On Jan 18, 2015, at 11:40 AM, Glen Huang curvedm...@gmail.com wrote:
 
 To generalize the use case, when you have a bunch of nodes, some of which 
 need to be inserted before a node and some of which after it, you are likely 
 to want `replaceWith` to accept the context node as an argument.
 
 I just realized another algorithm: before running the macro, save the context 
 node's index and its parent, and after running it, pre-insert the nodes into 
 the parent before the parent's index'th child (which could be null). No 
 transient node involved and no recursive finding.
 
 Hope you can reconsider whether this edge case should be accepted.
 
 On Jan 16, 2015, at 5:04 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Oh, right. Trying to be smart and it just proved otherwise. :P
 
 I don't really see a good reason to complicate the algorithm for this 
 scenario, personally.
 
 This edge case may seem absurd at first sight. Let me provide a use case:
 
 Imagine you have this simple site
 
 ```
 <ul>
   <li><a href="blog.html">Blog</a></li>
   <li><a href="about.html">About</a></li>
   <li><a href="contact.html">Contact</a></li>
 </ul>
 <main>About page content</main>
 ```
 
 You are currently at the About page. What you are trying to do is that when 
 the user clicks a nav link, the corresponding page is fetched via ajax and 
 inserted before or after the current main element, depending on whether the 
 clicked nav link comes before or after the current nav link.
 
 So when the page is first loaded, you first loop over the nav links to 
 create empty mains for placeholder purposes.
 
 ```
 <ul>
   <li><a href="blog.html">Blog</a></li>
   <li><a href="about.html">About</a></li>
   <li><a href="contact.html">Contact</a></li>
 </ul>
 <main></main>
 <main>About page content</main>
 <main></main>
 ```
 
 How do you do that? Well, ideally, you should be able to just do (in pseudo 
 code):
 
 ```
 currentMain = get the main element
 links = get all a elements
 mains = []
 
 for i, link in links
     if link is current link
         mains[i] = currentMain
     else
         mains[i] = clone currentMain shallowly
 
 currentMain.replaceWith(...mains)
 ```
 
 This way you are inserting nodes in batch, and not having to deal with 
 choosing insertBefore or appendChild.
 
 Without `replaceWith` supporting it, in order to do batch insertions (nav 
 links could be a large list; imagine a very long list of TOC links), you are 
 forced to manually do the steps I mentioned in the first mail.
 
 On Jan 16, 2015, at 4:22 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Fri, Jan 16, 2015 at 8:47 AM, Glen Huang curvedm...@gmail.com wrote:
 Another way to do this is that, in the mutation method macro, we prevent 
 `oldNode` from being added to the doc frag, and after that, insert the doc 
 frag before `oldNode` and finally remove `oldNode`. No recursive finding of 
 the next sibling is needed this way.
 
 But then d2 would no longer be present?
 
 I don't really see a good reason to complicate the algorithm for this
 scenario, personally.
 
 
 -- 
 https://annevankesteren.nl/
 
 




Re: oldNode.replaceWith(...collection) edge case

2015-01-17 Thread Glen Huang
To generalize the use case, when you have a bunch of nodes, some of which need 
to be inserted before a node and some of which after it, you are likely to 
want `replaceWith` to accept the context node as an argument.

I just realized another algorithm: before running the macro, save the context 
node's index and its parent, and after running it, pre-insert the nodes into the 
parent before the parent's index'th child (which could be null). No transient 
node involved and no recursive finding.

Hope you can reconsider whether this edge case should be accepted.

 On Jan 16, 2015, at 5:04 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Oh, right. Trying to be smart and it just proved otherwise. :P
 
 I don't really see a good reason to complicate the algorithm for this 
 scenario, personally.
 
 This edge case may seem absurd at first sight. Let me provide a use case:
 
 Imagine you have this simple site
 
 ```
 <ul>
   <li><a href="blog.html">Blog</a></li>
   <li><a href="about.html">About</a></li>
   <li><a href="contact.html">Contact</a></li>
 </ul>
 <main>About page content</main>
 ```
 
 You are currently at the About page. What you are trying to do is that when 
 the user clicks a nav link, the corresponding page is fetched via ajax and 
 inserted before or after the current main element, depending on whether the 
 clicked nav link comes before or after the current nav link.
 
 So when the page is first loaded, you first loop over the nav links to create 
 empty mains for placeholder purposes.
 
 ```
 <ul>
   <li><a href="blog.html">Blog</a></li>
   <li><a href="about.html">About</a></li>
   <li><a href="contact.html">Contact</a></li>
 </ul>
 <main></main>
 <main>About page content</main>
 <main></main>
 ```
 
 How do you do that? Well, ideally, you should be able to just do (in pseudo 
 code):
 
 ```
 currentMain = get the main element
 links = get all a elements
 mains = []
 
 for i, link in links
     if link is current link
         mains[i] = currentMain
     else
         mains[i] = clone currentMain shallowly
 
 currentMain.replaceWith(...mains)
 ```
 
 This way you are inserting nodes in batch, and not having to deal with 
 choosing insertBefore or appendChild.
 
 Without `replaceWith` supporting it, in order to do batch insertions (nav 
 links could be a large list; imagine a very long list of TOC links), you are 
 forced to manually do the steps I mentioned in the first mail.
 
 On Jan 16, 2015, at 4:22 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Fri, Jan 16, 2015 at 8:47 AM, Glen Huang curvedm...@gmail.com wrote:
 Another way to do this is that, in the mutation method macro, we prevent 
 `oldNode` from being added to the doc frag, and after that, insert the doc 
 frag before `oldNode` and finally remove `oldNode`. No recursive finding of 
 the next sibling is needed this way.
 
 But then d2 would no longer be present?
 
 I don't really see a good reason to complicate the algorithm for this
 scenario, personally.
 
 
 -- 
 https://annevankesteren.nl/
 




Re: oldNode.replaceWith(...collection) edge case

2015-01-16 Thread Glen Huang
Oh, right. Trying to be smart and it just proved otherwise. :P

 I don't really see a good reason to complicate the algorithm for this 
 scenario, personally.

This edge case may seem absurd at first sight. Let me provide a use case:

Imagine you have this simple site

```
<ul>
  <li><a href="blog.html">Blog</a></li>
  <li><a href="about.html">About</a></li>
  <li><a href="contact.html">Contact</a></li>
</ul>
<main>About page content</main>
```

You are currently at the About page. What you are trying to do is that when the 
user clicks a nav link, the corresponding page is fetched via ajax and 
inserted before or after the current main element, depending on whether the 
clicked nav link comes before or after the current nav link.

So when the page is first loaded, you first loop over the nav links to create 
empty mains for placeholder purposes.

```
<ul>
  <li><a href="blog.html">Blog</a></li>
  <li><a href="about.html">About</a></li>
  <li><a href="contact.html">Contact</a></li>
</ul>
<main></main>
<main>About page content</main>
<main></main>
```

How do you do that? Well, ideally, you should be able to just do (in pseudo 
code):

```
currentMain = get the main element
links = get all a elements
mains = []

for i, link in links
    if link is current link
        mains[i] = currentMain
    else
        mains[i] = clone currentMain shallowly

currentMain.replaceWith(...mains)
```

This way you are inserting nodes in batch, and not having to deal with choosing 
insertBefore or appendChild.

Without `replaceWith` supporting it, in order to do batch insertions (nav links 
could be a large list; imagine a very long list of TOC links), you are forced to 
manually do the steps I mentioned in the first mail.

 On Jan 16, 2015, at 4:22 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Fri, Jan 16, 2015 at 8:47 AM, Glen Huang curvedm...@gmail.com wrote:
 Another way to do this is that, in the mutation method macro, we prevent 
 `oldNode` from being added to the doc frag, and after that, insert the doc 
 frag before `oldNode` and finally remove `oldNode`. No recursive finding of 
 the next sibling is needed this way.
 
 But then d2 would no longer be present?
 
 I don't really see a good reason to complicate the algorithm for this
 scenario, personally.
 
 
 -- 
 https://annevankesteren.nl/




Re: oldNode.replaceWith(...collection) edge case

2015-01-16 Thread Glen Huang
Here is another try:

How about this: before executing the mutation method macro, we insert a transient 
node after `oldNode`, suppressing observers. Then we run the mutation method 
macro, pre-insert `node` before the transient node, and finally remove the 
transient node, again suppressing observers.

Idea comes from Andrea's DOM4 library: 
https://github.com/WebReflection/dom4/commit/ffc8cbdf88fa98627dd82cf11084a0660b9bbfc0
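
A userland sketch of that approach, using the proposed before()/after()/remove() 
methods themselves (a native implementation could additionally suppress mutation 
observers, which we can't; the function name is made up):

```
function replaceWithAllowingSelf(oldNode, ...nodes) {
  var marker = document.createTextNode(""); // transient node
  oldNode.after(marker);                    // remember the insertion point
  if (nodes.indexOf(oldNode) === -1) {
    oldNode.remove();                       // plain replaceWith semantics
  }
  marker.before(...nodes);                  // moves oldNode too if it's listed
  marker.remove();                          // clean up the transient node
}
```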

 On Jan 16, 2015, at 4:22 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Fri, Jan 16, 2015 at 8:47 AM, Glen Huang curvedm...@gmail.com wrote:
 Another way to do this is that, in the mutation method macro, we prevent 
 `oldNode` from being added to the doc frag, and after that, insert the doc 
 frag before `oldNode` and finally remove `oldNode`. No recursive finding of 
 the next sibling is needed this way.
 
 But then d2 would no longer be present?
 
 I don't really see a good reason to complicate the algorithm for this
 scenario, personally.
 
 
 -- 
 https://annevankesteren.nl/



oldNode.replaceWith(...collection) edge case

2015-01-15 Thread Glen Huang
Currently, for `oldNode.replaceWith(...collection)`, if `collection` is an array 
of multiple nodes and `oldNode` is in `collection`, then after the mutation 
method macro `oldNode` lives in a doc frag. So in the replace algorithm, 
`parent` is the doc frag and `node` is also the doc frag, and a 
`HierarchyRequestError` is thrown.

I wonder if an error really should be thrown in this case? Intuitively, 
`collection` should be inserted before `oldNode`’s original next sibling.

For example:

```
<div id="d1"></div>
<div id="d2"></div>
<div id="d3"></div>
<div id="d4"></div>
```

Imagine `oldNode` is #d2 and `collection` is [#d1, #d2, #d4]; executing 
`oldNode.replaceWith(...collection)` should give

```
<div id="d1"></div>
<div id="d2"></div>
<div id="d4"></div>
<div id="d3"></div>
```

Instead of throwing an error.

To make this work, before executing the mutation method macro, `oldNode`'s 
parent should be saved. Its next sibling should also be saved, but the next 
sibling needs to be found recursively if it happens to be in `collection` too.

So, if I'm not wrong, this edge case could work in principle. I'm not sure if 
there is any interest in allowing this?
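
A userland sketch of that bookkeeping (the function name is hypothetical, and 
`collection` is a plain array that may contain `oldNode`):

```
function replaceWithAllowingSelf(oldNode, collection) {
  var parent = oldNode.parentNode;
  // The reference point is the first following sibling that is NOT about
  // to be re-inserted, found by walking past siblings in the collection.
  var ref = oldNode.nextSibling;
  while (ref && collection.indexOf(ref) !== -1) {
    ref = ref.nextSibling;
  }
  // Inserting moves nodes that are already in the tree, oldNode included.
  for (var i = 0; i < collection.length; i++) {
    parent.insertBefore(collection[i], ref); // ref may be null (append)
  }
  if (collection.indexOf(oldNode) === -1) {
    parent.removeChild(oldNode); // plain replaceWith semantics
  }
}
```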


Re: oldNode.replaceWith(...collection) edge case

2015-01-15 Thread Glen Huang
Another way to do this: in the mutation method macro, prevent `oldNode` from 
being added to the doc frag, and after that, insert the doc frag before 
`oldNode` and finally remove `oldNode`. No recursive finding of the next sibling 
is needed this way.

 On Jan 16, 2015, at 1:37 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Currently, for `oldNode.replaceWith(...collection)`, if `collection` is an 
 array of multiple nodes and `oldNode` is in `collection`, then after the 
 mutation method macro `oldNode` lives in a doc frag. So in the replace 
 algorithm, `parent` is the doc frag and `node` is also the doc frag, and a 
 `HierarchyRequestError` is thrown.
 
 I wonder if an error really should be thrown in this case? Intuitively, 
 `collection` should be inserted before `oldNode`’s original next sibling.
 
 For example:
 
 ```
 <div id="d1"></div>
 <div id="d2"></div>
 <div id="d3"></div>
 <div id="d4"></div>
 ```
 
 Imagine `oldNode` is #d2 and `collection` is [#d1, #d2, #d4]; executing 
 `oldNode.replaceWith(...collection)` should give
 
 ```
 <div id="d1"></div>
 <div id="d2"></div>
 <div id="d4"></div>
 <div id="d3"></div>
 ```
 
 Instead of throwing an error.
 
 To make this work, before executing the mutation method macro, `oldNode`'s 
 parent should be saved. Its next sibling should also be saved, but the next 
 sibling needs to be found recursively if it happens to be in `collection` too.
 
 So, if I'm not wrong, this edge case could work in principle. I'm not sure if 
 there is any interest in allowing this?