Re: [Selection] Support of Multiple Selection (was: Should selection.getRangeAt return a clone or a reference?)

2015-01-17 Thread Olivier Forget
I'd be interested in hearing more about what didn't work with that API by
both devs who tried to make use of it and the implementors too.

For the record: web developers don't usually take advantage of additional
functionality that is provided by only one browser, or implemented in
differing, unpolished ways by different browsers. When possible we take the
lowest-common-denominator approach to offer a consistent experience from
browser to browser, and to avoid spending resources writing code that only
a subset of users will be able to use anyway.

What I'm saying is that the fact that few devs worked with multiple ranges may
not be a reflection of the quality of the API, but rather a sign that, because
it wasn't implemented across browsers, using it wasn't worthwhile from a
cost-benefit point of view.

And no, I'm not saying the API is great either; just that saying developers
won't do it is not really fair to anybody.


On Sat Jan 17 2015 at 8:55:43 AM Aryeh Gregor a...@aryeh.name wrote:

 On Mon, Jan 12, 2015 at 9:59 PM, Ben Peters ben.pet...@microsoft.com
 wrote:
  Multiple selection is an important feature in the future. Table columns are
  important, but we also need to think about BIDI. Depending on who you talk
  to, BIDI should support selection in document order or layout order. Layout
  order is not possible without multi-selection.
 
 
 
  I do not believe “everyone wants to kill it” is accurate. I agree with
  Olivier that it’s crucial to a full-featured editor. We don’t want to make
  sites implement this themselves.

 If implementers are interested, then that's fine by me.  I was
 summarizing the result of a previous discussion or two I was aware of,
 and the current implementation reality.  However, I think thought
 should go into an API that supports non-contiguous selections without
 making authors try to handle the non-contiguous case specially,
 because they won't.  Exposing a list of selected nodes/parts of
 CharacterData nodes is a possibility that has occurred to me -- like
 returning a list of SelectedNodes, where SelectedNode has .node,
 .start, and .end properties, and .start and .end are null unless it's
 a partially-selected CharacterData node, and no node is in the list if
 it has an ancestor in the list.  So fo[o<b>bar<i>baz</i></b>]quuz
 would expose [{node: "foo", start: 2, end: 3}, {node: <b>, start:
 null, end: null}, {node: "quuz", start: 0, end: 0}] as the selected
 nodes.  Then authors would use it by iterating over the selected
 nodes, and non-contiguous selections would be handled automatically.
 I once thought over some use-cases and concluded that a lot of them
 would Just Work for non-contiguous selections this way -- although
 doubtless some cases would still break.

 (Obvious disadvantages of this approach include a)
 authors will still continue using the old API, and b) calculating the
 list might be somewhat expensive.  (a) might be mitigated by the fact
 that it's easier to use for some applications, particularly
 editing-related ones -- it saves you from having to walk through the
 range yourself.)

 I certainly agree that non-contiguous selection is a good feature to
 have!  But as far as I'm aware, in Gecko's implementation experience,
 multiple Ranges per Selection has proven to be a bad way to expose
 them to authors.  Ehsan could tell you more.



Re: oldNode.replaceWith(...collection) edge case

2015-01-17 Thread Glen Huang
Oh crap. Just realized saving the index won’t work if the context node’s
previous siblings are passed as arguments. Looks like inserting a transient
node is still the best way.
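A quick way to see the failure mode (this is an array simulation of a parent's child list, not real DOM; the node names are made up for illustration):

```javascript
// Simulate parent.childNodes as an array of strings.
// Scenario: b.replaceWith(a, b), where "a" is b's previous sibling.
// Per spec the result should be ["a", "b", "c"], but the saved-index
// algorithm goes wrong: moving "a" into the fragment shifts the
// remaining children, so the saved index is stale by insertion time.
let children = ["a", "b", "c"];

const savedIndex = children.indexOf("b"); // 1
// The mutation-method macro moves the argument nodes into a fragment,
// removing them (including the context node) from the tree:
const frag = ["a", "b"];
children = children.filter((n) => !frag.includes(n)); // ["c"]
// Pre-insert the fragment at the saved -- now stale -- index:
children.splice(savedIndex, 0, ...frag);

console.log(children); // ["c", "a", "b"], not the expected ["a", "b", "c"]
```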

 On Jan 18, 2015, at 11:40 AM, Glen Huang curvedm...@gmail.com wrote:
 
 To generalize the use case: when you have a bunch of nodes, some of which 
 need to be inserted before a node, and some of which after it, you are likely 
 to want `replaceWith` to accept the context node as an argument.
 
 I just realized another algorithm: before running the macro, save the context 
 node’s index and its parent, and after running it, pre-insert the node into 
 the parent before the parent’s index’th child (which could be null). No 
 transient node is involved and no recursive finding is needed.
 
 Hope you can reconsider if this edge case should be accepted.
 
 On Jan 16, 2015, at 5:04 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Oh, right. Trying to be smart and it just proved otherwise. :P
 
 I don't really see a good reason to complicate the algorithm for this 
 scenario, personally.
 
 This edge case may seem absurd at first sight. Let me provide a use case:
 
 Imagine you have this simple site
 
 ```
 <ul>
   <li><a href="blog.html">Blog</a></li>
   <li><a href="about.html">About</a></li>
   <li><a href="contact.html">Contact</a></li>
 </ul>
 <main>About page content</main>
 ```
 
 You are currently at the about page. What you want is this: when the user 
 clicks a nav link, the corresponding page is fetched via ajax and inserted 
 before or after the current main element, depending on whether the clicked 
 nav link comes before or after the current nav link.
 
 So when the page is first loaded, you first loop over the nav links to 
 create empty mains for placeholder purposes.
 
 ```
 <ul>
   <li><a href="blog.html">Blog</a></li>
   <li><a href="about.html">About</a></li>
   <li><a href="contact.html">Contact</a></li>
 </ul>
 <main></main>
 <main>About page content</main>
 <main></main>
 ```
 
 How do you do that? Well, ideally, you should be able to just do (in pseudo 
 code):
 
 ```
 currentMain = get the main element
 links = get all a elements
 mains = []
 
 for i, link in links
   if link is current link
     mains[i] = currentMain
   else
     mains[i] = clone currentMain shallowly
 
 currentMain.replaceWith(...mains)
 ```
 
 This way you are inserting nodes in batch, and not having to deal with 
 choosing insertBefore or appendChild.
 
 Without `replaceWith` supporting it, in order to do batch insertions (the nav 
 links could be a large list; imagine a very long TOC), you are forced to 
 manually do the steps I mentioned in the first mail.
 
 On Jan 16, 2015, at 4:22 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Fri, Jan 16, 2015 at 8:47 AM, Glen Huang curvedm...@gmail.com wrote:
 Another way to do this: in the mutation method macro, prevent `oldNode` 
 from being added to the doc frag, and after that, insert the doc frag 
 before `oldNode`, and finally remove `oldNode`. No recursive finding of the 
 next sibling is needed this way.
 
 But then d2 would no longer be present?
 
 I don't really see a good reason to complicate the algorithm for this
 scenario, personally.
 
 
 -- 
 https://annevankesteren.nl/
 
 




Re: oldNode.replaceWith(...collection) edge case

2015-01-17 Thread Glen Huang
To generalize the use case: when you have a bunch of nodes, some of which need 
to be inserted before a node, and some of which after it, you are likely to 
want `replaceWith` to accept the context node as an argument.

I just realized another algorithm: before running the macro, save the context 
node’s index and its parent, and after running it, pre-insert the node into the 
parent before the parent’s index’th child (which could be null). No transient 
node is involved and no recursive finding is needed.
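For the common case (arguments that are not previous siblings of the context node) the saved-index idea does work; here is a rough array simulation of a parent's child list, with made-up node names (not real DOM):

```javascript
// Simulate parent.childNodes as an array of strings.
// Scenario: b.replaceWith(x, y), where "x" and "y" are new nodes.
let children = ["a", "b", "c"];

const savedIndex = children.indexOf("b"); // 1
// The macro collects the arguments into a fragment; for simplicity the
// context node "b" is removed from the tree in the same step:
const frag = ["x", "y"];
children = children.filter((n) => n !== "b"); // ["a", "c"]
// Pre-insert the fragment before the saved index'th child:
children.splice(savedIndex, 0, ...frag);

console.log(children); // ["a", "x", "y", "c"] -- "b" replaced in place
```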

Hope you can reconsider if this edge case should be accepted.

 On Jan 16, 2015, at 5:04 PM, Glen Huang curvedm...@gmail.com wrote:
 
 Oh, right. Trying to be smart and it just proved otherwise. :P
 
 I don't really see a good reason to complicate the algorithm for this 
 scenario, personally.
 
 This edge case may seem absurd at first sight. Let me provide a use case:
 
 Imagine you have this simple site
 
 ```
 <ul>
   <li><a href="blog.html">Blog</a></li>
   <li><a href="about.html">About</a></li>
   <li><a href="contact.html">Contact</a></li>
 </ul>
 <main>About page content</main>
 ```
 
 You are currently at the about page. What you want is this: when the user 
 clicks a nav link, the corresponding page is fetched via ajax and inserted 
 before or after the current main element, depending on whether the clicked 
 nav link comes before or after the current nav link.
 
 So when the page is first loaded, you first loop over the nav links to create 
 empty mains for placeholder purposes.
 
 ```
 <ul>
   <li><a href="blog.html">Blog</a></li>
   <li><a href="about.html">About</a></li>
   <li><a href="contact.html">Contact</a></li>
 </ul>
 <main></main>
 <main>About page content</main>
 <main></main>
 ```
 
 How do you do that? Well, ideally, you should be able to just do (in pseudo 
 code):
 
 ```
 currentMain = get the main element
 links = get all a elements
 mains = []
 
 for i, link in links
   if link is current link
     mains[i] = currentMain
   else
     mains[i] = clone currentMain shallowly
 
 currentMain.replaceWith(...mains)
 ```
 
 This way you are inserting nodes in batch, and not having to deal with 
 choosing insertBefore or appendChild.
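A concrete sketch of the pseudocode above, with the placeholder logic factored into a plain function so it can run outside the DOM (`buildMains` is an illustrative name, not an existing API):

```javascript
// Given the hrefs of the nav links, the current page's href, the live
// main element, and a way to clone it shallowly, build the array to
// pass to replaceWith(): the live element stays in its own slot and
// every other slot gets an empty placeholder.
function buildMains(linkHrefs, currentHref, currentMain, cloneMain) {
  return linkHrefs.map((href) =>
    href === currentHref ? currentMain : cloneMain()
  );
}

// In a browser this would be used roughly as:
//   const main = document.querySelector("main");
//   const hrefs = [...document.querySelectorAll("ul a")]
//     .map((a) => a.getAttribute("href"));
//   const mains = buildMains(hrefs, "about.html", main,
//                            () => main.cloneNode(false));
//   main.replaceWith(...mains); // needs replaceWith to accept the context node
console.log(buildMains(
  ["blog.html", "about.html", "contact.html"],
  "about.html", "MAIN", () => "PLACEHOLDER"
));
// ["PLACEHOLDER", "MAIN", "PLACEHOLDER"]
```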
 
 Without `replaceWith` supporting it, in order to do batch insertions (the nav 
 links could be a large list; imagine a very long TOC), you are forced to 
 manually do the steps I mentioned in the first mail.
 
 On Jan 16, 2015, at 4:22 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Fri, Jan 16, 2015 at 8:47 AM, Glen Huang curvedm...@gmail.com wrote:
 Another way to do this: in the mutation method macro, prevent `oldNode` 
 from being added to the doc frag, and after that, insert the doc frag 
 before `oldNode`, and finally remove `oldNode`. No recursive finding of the 
 next sibling is needed this way.
 
 But then d2 would no longer be present?
 
 I don't really see a good reason to complicate the algorithm for this
 scenario, personally.
 
 
 -- 
 https://annevankesteren.nl/
 




Re: Minimum viable custom elements

2015-01-17 Thread Ryosuke Niwa
On Jan 16, 2015, at 4:07 PM, Dimitri Glazkov dglaz...@google.com wrote:
 On Fri, Jan 16, 2015 at 1:14 PM, Ryosuke Niwa rn...@apple.com wrote:
 
 On Jan 15, 2015, at 7:25 PM, Dimitri Glazkov dglaz...@google.com wrote:
 On Thu, Jan 15, 2015 at 6:43 PM, Ryosuke Niwa rn...@apple.com wrote:
 When an author imports a ES6 module, we don't create a fake object which 
 gets resolved later by rewriting its prototype.
 
 These are two completely different things, right? In one situation, you are 
 dealing with HTML Parser and blocking thereof. In another, there's no such 
 concern. 
 
 How are they different at all?  I have a hard time understanding how 
 differentiating DOM objects from other ES6 built-ins fits your stated 
 design goal of explaining the Web platform.
 
 ... because DOM objects are not the ES6 built-ins?
 
 If we are implementing the HTML parser as well as the entire DOM in 
 JavaScript, why wouldn't we just use constructors to create DOM nodes?
 
 I feel like we're walking in circles at this point.

Indeed.

 It's pretty safe to say that we're years away from being able to implement 
 the HTML parser and the entire DOM in JS. Even then, time-shifted callbacks (or 
 similar write-barrier-style abstraction) still make sense. The JS that 
 implements the parser or DOM may desire to run the custom elements JS code 
 only in certain places (see Mutation Events - Mutation Observers).

If implementers so desire, they could certainly do that.  There is nothing that 
prevents UAs from running any scripts at any timing as long as the observed 
behavior is interoperable.  For example, an implementation could construct a 
queue of nodes as a form of a detached DOM tree, and then call constructors on 
those objects after manually detaching them; the nodes could be inserted again 
once the elements are constructed.  Such an approach allows implementing Jonas' 
proposal with a very small set of changes to existing HTML parser 
implementations.

 Let me repeat and extend what I said earlier in the thread. In the world 
 where we have non-blocking scripts and HTML parser that yields, upgrades are 
 a performance/ergonomics primitive.
 
 With upgrades, the non-blocking script could execute during a yield, register 
 elements, and keep on going. The chunk of html that was parsed previously 
 will upgrade, and the chunk that hasn't yet been parsed will start queueing 
 callbacks. The developer does not need to care about the timing of 
 registration. From their perspective, the order of callbacks will be the same.

Unless I'm missing something, scripts that use JavaScript APIs exposed by those 
custom elements cannot run until those custom elements are upgraded.  Again, 
this is not a problem that should be solved for only custom elements.  This is 
a generic script and resource dependency problem for which an extensive list of 
use cases has been identified: 
https://lists.w3.org/Archives/Public/public-whatwg-archive/2014Aug/0177.html

 Without upgrades, you as a developer are highly incentivized to reduce the 
 amount of html after your non-blocking script, because you need to wait until 
 the doc has parsed completely before you can sensibly run the script.

There are a lot of high-profile websites that run scripts before the document 
has finished parsing.  Furthermore, if that were already the case, how does 
introducing the mechanism for elements to asynchronously upgrade help authors 
at all?  If anything, it adds another dimension to the already complex problem 
space developers have to face today.

 This is what's happening today, as evidenced by most frameworks moving away 
 from using HTML Parser as anything but script bootstrapping device, and 
 relying on imperative tree construction. And in that world, upgrades are 
 useless -- but so is HTML. And eventually, DOM.

Correlation doesn't imply causation.  The way I see it, a lot of Web apps use 
the HTML parser as a bootstrapping mechanism today because they need to get 
data out of JSON fetched from services rather than embedded in HTML.  
Also, if the needs of developers obsolete HTML and DOM, then so be it.  Perhaps 
the future of the Web apps is a bundle of ES6 modules.

- R. Niwa



Re: [Selection] Support of Multiple Selection (was: Should selection.getRangeAt return a clone or a reference?)

2015-01-17 Thread Aryeh Gregor
On Mon, Jan 12, 2015 at 9:59 PM, Ben Peters ben.pet...@microsoft.com wrote:
 Multiple selection is an important feature in the future. Table columns are
 important, but we also need to think about BIDI. Depending on who you talk
 to, BIDI should support selection in document order or layout order. Layout
 order is not possible without multi-selection.



 I do not believe “everyone wants to kill it” is accurate. I agree with
 Olivier that it’s crucial to a full-featured editor. We don’t want to make
 sites implement this themselves.

If implementers are interested, then that's fine by me.  I was
summarizing the result of a previous discussion or two I was aware of,
and the current implementation reality.  However, I think thought
should go into an API that supports non-contiguous selections without
making authors try to handle the non-contiguous case specially,
because they won't.  Exposing a list of selected nodes/parts of
CharacterData nodes is a possibility that has occurred to me -- like
returning a list of SelectedNodes, where SelectedNode has .node,
.start, and .end properties, and .start and .end are null unless it's
a partially-selected CharacterData node, and no node is in the list if
it has an ancestor in the list.  So fo[o<b>bar<i>baz</i></b>]quuz
would expose [{node: "foo", start: 2, end: 3}, {node: <b>, start:
null, end: null}, {node: "quuz", start: 0, end: 0}] as the selected
nodes.  Then authors would use it by iterating over the selected
nodes, and non-contiguous selections would be handled automatically.
I once thought over some use-cases and concluded that a lot of them
would Just Work for non-contiguous selections this way -- although
doubtless some cases would still break.
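A hedged sketch of how author code might consume such a list; `describeSelectedNodes` is a hypothetical name, the SelectedNode records follow the shape proposed above, and plain objects stand in for DOM nodes:

```javascript
// Iterate a list of SelectedNode records ({node, start, end}) and
// describe each one. Non-contiguous selections need no special casing:
// they simply contribute more entries to the same list.
function describeSelectedNodes(selectedNodes) {
  return selectedNodes.map(({ node, start, end }) =>
    start !== null
      ? `partial: "${node.data.slice(start, end)}"` // partially-selected CharacterData
      : `whole: ${node.nodeName}`                   // fully-selected node
  );
}

// Using the example from the mail, with stand-in objects:
console.log(describeSelectedNodes([
  { node: { data: "foo" }, start: 2, end: 3 },
  { node: { nodeName: "B" }, start: null, end: null },
  { node: { data: "quuz" }, start: 0, end: 0 },
]));
// ['partial: "o"', 'whole: B', 'partial: ""']
```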

(Obvious disadvantages of this approach include a)
authors will still continue using the old API, and b) calculating the
list might be somewhat expensive.  (a) might be mitigated by the fact
that it's easier to use for some applications, particularly
editing-related ones -- it saves you from having to walk through the
range yourself.)

I certainly agree that non-contiguous selection is a good feature to
have!  But as far as I'm aware, in Gecko's implementation experience,
multiple Ranges per Selection has proven to be a bad way to expose
them to authors.  Ehsan could tell you more.



Re: [Selection] Should selection.getRangeAt return a clone or a reference?

2015-01-17 Thread Aryeh Gregor
I just said it in the other thread, but just to clarify in this thread
too: I think non-contiguous selections are a great feature.  I think
exposing them to authors as multiple Ranges in a Selection has proven
not to be a good way to do it, because authors basically without
exception just ignore any ranges beyond the first.  When writing the
Selection code, I reviewed a decent amount of author code, and all of
it (I don't think I found an exception) just did .getRangeAt(0) and
ignored the rest.  Gecko has found that they misused the code
internally as well, as Ehsan demonstrated to me once.  If we want
non-contiguous selections to work in author code that's not specially
written to accommodate them, we should think of a different API,
perhaps the one I suggested in the other thread.
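The pattern in question, sketched against a stub Selection-like object (real author code would receive an actual `Selection`; the stub here is only for illustration):

```javascript
// What reviewed author code almost always does: take the first range
// and silently drop the rest of a non-contiguous selection.
function firstRangeOnly(sel) {
  return sel.getRangeAt(0);
}

// What multi-range-aware code would have to do instead, using the
// standard Selection members rangeCount and getRangeAt(index):
function allRanges(sel) {
  const ranges = [];
  for (let i = 0; i < sel.rangeCount; i++) ranges.push(sel.getRangeAt(i));
  return ranges;
}

// Stub with two "ranges", standing in for a non-contiguous selection:
const sel = { rangeCount: 2, getRangeAt: (i) => `range${i}` };
console.log(firstRangeOnly(sel)); // "range0"
console.log(allRanges(sel));      // ["range0", "range1"]
```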

Also, to clarify, my initial selection spec accommodated multiple
ranges.  I deliberately removed support when it looked like no one
wanted to support the feature:
https://dvcs.w3.org/hg/editing/rev/b1598801692d.  Speccing it is not
the problem.  The bug was here, where I say that Ehsan and Ryosuke
agreed with it (at a face-to-face meeting we had at Mozilla Toronto):
http://www.w3.org/Bugs/Public/show_bug.cgi?id=13975

On Wed, Jan 14, 2015 at 6:14 PM, Mats Palmgren m...@mozilla.com wrote:
 On 01/09/2015 12:40 PM, Aryeh Gregor wrote:

 The advantage of the IE/Gecko behavior is you can alter the selection
 using Range methods.  The advantage of the WebKit/Blink behavior is
 you can restrict the ranges in the selection in some sane fashion,
 e.g., not letting them be in detached nodes.

 It would be easy to change Gecko to ignore addRange() calls if the
 range start/end node is detached.  We could easily do the same for
 range.setStart*/setEnd* for ranges that are in the Selection.
 Would that address your concern about detached nodes?

I think so, yes, but it would mean making Range methods behave
differently depending on whether the range is in a selection.  Is that
really sane?

What are the reasons to return a clone anyway?  Is it important to be
able to call (mutating) Range methods on a Selection?  If we really
want authors to have convenience methods like setStartBefore() on
Selection, we could add them to Selection.