RE: [XHR]

2016-03-19 Thread Elliott Sprehn
Can we get an IDL definition too? You shouldn't need to read the algorithm
to know the return types.
On Mar 17, 2016 12:09 PM, "Domenic Denicola"  wrote:

> From: Gomer Thomas [mailto:go...@gomert-consulting.com]
>
> > I looked at the Streams specification, and it seems pretty immature and
> underspecified. I’m not sure it is usable by someone who doesn’t already
> know how it is supposed to work before reading the specification. How many
> of the major web browsers are supporting it?
>
> Thanks for the feedback. Streams is intended to be a lower-level primitive
> used by other specifications, primarily. By reading it you're supposed to
> learn how to implement your own streams from basic underlying source APIs.
>
> > (1) The constructor of the ReadableStream object is “defined” by
> > Constructor (underlyingSource = { }, {size, highWaterMark = 1 } = { } )
> > The “specification” states that the underlyingSource object “can”
> implement various methods, but it does not say anything about how to create
> or identify a particular underlyingSource
>
> As you noticed, specific underlying sources are left to other places.
> Those could be other specs, like Fetch:
>
> https://fetch.spec.whatwg.org/#concept-construct-readablestream
>
> or it could be used by authors directly:
>
> https://streams.spec.whatwg.org/#example-rs-push-no-backpressure
>
> > In my case I want to receive a stream from a remote HTTP server. What do
> I put in for the underlyingSource?
>
> This is similar to asking the question "I want to create a promise for an
> animation. What do I put in the `new Promise(...)` constructor?" In other
> words, a ReadableStream is a data type that can stream anything, and the
> actual capability needs to be supplied by your code. Fetch supplies one
> underlying source, for HTTP responses.
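As an illustration (a minimal sketch with made-up chunk values, not taken from the thread), author code supplies the capability by passing an underlyingSource object whose start() method receives a controller for enqueuing chunks:

```javascript
// Sketch: an author-supplied underlying source. The chunk values are
// illustrative; start() receives a controller used to push chunks.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("hello");
    controller.enqueue("world");
    controller.close(); // no more chunks
  }
});

// Consumers acquire a reader (locking the stream) and pull chunks from it.
const reader = stream.getReader();
```

This runs as-is in modern browsers and in Node.js 18+, where ReadableStream is a global.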
>
> > Also, what does the “highWaterMark” parameter mean? The “specification”
> says it is part of the queuing strategy object, but it does not say what it
> does.
>
> Hmm, I think the links (if you follow them) are fairly clear.
> https://streams.spec.whatwg.org/#queuing-strategy. Do you have any
> suggestions on how to make it clearer?
>
> > Is it the maximum number of bytes of unread data in the Stream? If so,
> it should say so.
>
> Close; it is the maximum number of bytes before a backpressure signal is
> sent. But, that is already exactly what the above link (which was found by
> clicking the links "queuing strategy" in the constructor definition) says,
> so I am not sure what you are asking for.
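Concretely (an illustrative sketch, assuming a strategy whose size() reports a flat 1 per chunk rather than bytes): controller.desiredSize counts down from highWaterMark as unread chunks queue up, and reaching zero is the backpressure signal.

```javascript
// Sketch: highWaterMark is the queued total size at which backpressure is
// signaled; size() assigns each chunk a size (here a flat 1 per chunk).
let controller;
const stream = new ReadableStream(
  { start(c) { controller = c; } },
  { highWaterMark: 2, size: () => 1 }
);

controller.enqueue("a"); // desiredSize drops from 2 to 1
controller.enqueue("b"); // desiredSize hits 0: backpressure is signaled
```

A byte stream would instead return chunk.byteLength from size(), making highWaterMark effectively a byte count, which matches the "maximum number of bytes" reading discussed above.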
>
> > If the “size” parameter is omitted, is the underlyingSource free to send
> chunks of any size, including variable sizes?
>
> Upon re-reading, I agree it's not 100% clear that the size() function maps
> to "The queuing strategy assigns a size to each chunk". However, the
> behavior of how the stream uses the size() function is defined in a lot of
> detail if you follow the spec. I agree maybe it could use some more
> non-normative notes explaining, and will work to add some, but in the end
> if you really want to understand what happens you need to either read the
> spec's algorithms or wait for someone to write an in-depth tutorial
> somewhere like MDN.
>
> > (2) The ReadableStream class has a “getReader()” method, but the
> specification gives no hint as to the data type that this method returns. I
> suspect that it is an object of the ReadableStreamReader class, but if so
> it would be nice if the “specification” said so.
>
> This is actually normatively defined if you click the link in the step
> "Return AcquireReadableStreamReader(this)," whose first line tells you what
> it constructs (indeed, a ReadableStreamReader).
>
>


Re: [custom-elements] Invoking lifecycle callbacks before invoking author scripts

2016-02-26 Thread Elliott Sprehn
On Fri, Feb 26, 2016 at 3:31 PM, Ryosuke Niwa <rn...@apple.com> wrote:

>
> > On Feb 26, 2016, at 3:22 PM, Elliott Sprehn <espr...@chromium.org>
> wrote:
> >
> >
> >
> > On Fri, Feb 26, 2016 at 3:09 PM, Ryosuke Niwa <rn...@apple.com> wrote:
> >>
> >> > On Feb 24, 2016, at 9:06 PM, Elliott Sprehn <espr...@chromium.org>
> wrote:
> >> >
> >> > Can you give a code example of how this happens?
> >>
> >> For example, execCommand('Delete') would result in sequentially
> deleting nodes as needed.
> >> During this compound operation, unload events may fire on iframes that
> got deleted by this operation.
> >>
> >> I would like components to be notified that they got
> removed/disconnected from the document
> >> before such an event is getting fired.
> >>
> >
> > I'd rather not do that, all the sync script inside editing operations is
> a bug, and you shouldn't depend on the state of the world around you in
> there anyway since all browsers disagree (ex. not all of them fire the
> event sync).
>
> I don't think that's a bug given Safari, Chrome, and Gecko all fire the
> unload event before finishing the delete operation.  It's an interoperable
> behavior, which should be spec'ed.
>

Firefox's behavior of when to fire unload definitely doesn't match Chrome
or Safari, but maybe it does in this one instance. I don't think it's worth
trying to get consistency there, though; unload is largely a bug. We should
add a new event and get people to stop using it.


>
> Anyway, this was just an easy example I could come up with.  There are
> many other examples that involve DOM mutation events if you'd prefer seeing
> those instead.
>

I'm not interested in making using mutation events easier.


>
> The fact of the matter is that we don't live in the future, and it's
> better for the API to be consistent in this imperfect world than for it to
> have weird edge cases.  As a matter of fact, if you end up being able to
> kill those sync events in the future, this will become a non-issue, since
> end-of-nano-task as you (Google) proposed will happen before dispatching
> of any event.
>
> As things stand, however, we should dispatch lifecycle callbacks before
> dispatching these (legacy but compat mandating) events.
>

I disagree. Mutation events are poorly specced and not interoperably
implemented across browsers. I don't think we should run nanotasks down
there.

- E


Re: [custom-elements] Invoking lifecycle callbacks before invoking author scripts

2016-02-26 Thread Elliott Sprehn
On Fri, Feb 26, 2016 at 3:09 PM, Ryosuke Niwa <rn...@apple.com> wrote:

>
> > On Feb 24, 2016, at 9:06 PM, Elliott Sprehn <espr...@chromium.org>
> wrote:
> >
> > Can you give a code example of how this happens?
>
> For example, execCommand('Delete') would result in sequentially deleting
> nodes as needed.
> During this compound operation, unload events may fire on iframes that got
> deleted by this operation.
>
> I would like components to be notified that they got removed/disconnected
> from the document
> before such an event is getting fired.
>
>
I'd rather not do that, all the sync script inside editing operations is a
bug, and you shouldn't depend on the state of the world around you in there
anyway since all browsers disagree (ex. not all of them fire the event
sync).

- E


Re: [custom-elements] Invoking lifecycle callbacks before invoking author scripts

2016-02-24 Thread Elliott Sprehn
Can you give a code example of how this happens?
On Feb 24, 2016 8:30 PM, "Ryosuke Niwa"  wrote:

>
> > On Feb 23, 2016, at 1:16 AM, Anne van Kesteren  wrote:
> >
> > On Tue, Feb 23, 2016 at 5:26 AM, Ryosuke Niwa  wrote:
> >> Hi,
> >>
> >> We propose to change the lifecycle callback to be fired both before
> invoking author scripts (e.g. for dispatching events) and before returning
> to author scripts.
> >>
> >> Without this change, event listeners that call custom elements' methods
> would end up seeing inconsistent states during compound DOM operation such
> as Range.extractContents and editing operations, and we would like to avoid
> that as much as possible.
> >
> > These are the events we wanted to try and delay to dispatch around the
> > same time lifecycle callbacks are supposed to be called?
>
> Yeah, I'm talking about focus, unload, etc... and DOM mutation events.
> It's possible that we can make all those events async in the future but
> that's not the current state of the world, and we would like to keep the
> custom elements' states consistent for authors.
>
> - R. Niwa
>
>
>


Re: Meeting date, january

2015-12-03 Thread Elliott Sprehn
Great, let's do the 25th then. :)

On Wed, Dec 2, 2015 at 1:09 PM, Travis Leithead <
travis.leith...@microsoft.com> wrote:

> 25th works for me.
>
> -Original Message-
> From: Domenic Denicola [mailto:d...@domenic.me]
> Sent: Tuesday, December 1, 2015 8:32 AM
> To: Chaals McCathie Nevile ; 'public-webapps WG' <
> public-webapps@w3.org>; Léonie Watson 
> Cc: Anne van Kesteren 
> Subject: RE: Meeting date, january
>
> From: Chaals McCathie Nevile [mailto:cha...@yandex-team.ru]
>
> > Yes, likewise for me. Anne, Olli specifically called you out as
> > someone we should ask. I am assuming most people are OK either way,
> > having heard no loud screaming except for Elliot...
>
> I would be pretty heartbroken if we met without Elliott. So let's please
> do the 25th.
>


Re: Meeting date, january

2015-11-25 Thread Elliott Sprehn
CSSWG and Houdini are in Sydney starting on the 30th, which means I
couldn't go to both, which is unfortunate. I'd prefer the 25th. :)

On Wed, Nov 25, 2015 at 5:54 PM, Chaals McCathie Nevile <
cha...@yandex-team.ru> wrote:

> Hi,
>
> it appears that there are some people who may not be able to attend a
> meeting on the 29th - although Apple has generously offered to host that
> day.
>
> Is there anyone who would only be able to attend if we moved the meeting
> to the 25th?
> Conversely, would that shift cause problems for anyone (e.g. bought
> inflexible tickets, another clash, …)
>
> By default, we won't move the meeting, but if there are a number of people
> affected and it makes sense, we could do so. If so, I'd like to make that
> decision early next week.
>
> cheers
>
> Chaals
>
> --
> Charles McCathie Nevile - web standards - CTO Office, Yandex
>  cha...@yandex-team.ru - - - Find more at http://yandex.com
>
>


Re: Callback when an event handler has been added to a custom element

2015-11-06 Thread Elliott Sprehn
On Fri, Nov 6, 2015 at 5:12 PM, Domenic Denicola  wrote:

> In general I would be cautious about this kind of API. Events are not
> expected to have side effects, and adding listeners should not cause an
> (observable) action. See e.g.
> https://dom.spec.whatwg.org/#action-versus-occurance which tries to
> explain this in some detail. A better design in your case would probably be
> to have a specific method on the custom element which "starts" it (and thus
> starts its associated message port).
>
> As such I don't think we should add such a capability to the custom
> element API (or elsewhere in the platform). Although it is possible to use
> such callbacks for "good" (using them only to perform unobservable
> optimizations, like lazy initialization), it is way too easy to use them
> for "evil" (causing observable effects that would better be allocated to
> dedicated action-causing methods).
>
>
I agree, this doesn't seem like something authors should do. Your element
should use the attachedCallback to "start" doing something, and the
detachedCallback to stop. You can also have an explicit start() API so
callers could make it begin before it's been inserted into the page. Events
are passive notifications of the operation of the element.
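A sketch of that pattern (illustrative names; in a browser this would extend HTMLElement and be registered as a custom element, which is omitted here to keep the sketch self-contained):

```javascript
// Sketch: work starts on attach and stops on detach, with an explicit
// start() so callers can begin before the element is inserted.
class TickerElement {
  constructor() {
    this.running = false;
  }
  start() { this.running = true; }     // explicit API, usable pre-insertion
  stop() { this.running = false; }
  attachedCallback() { this.start(); } // invoked when inserted into a document
  detachedCallback() { this.stop(); }  // invoked when removed
}
```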

- E


Re: Shadow DOM and SVG use elements

2015-10-23 Thread Elliott Sprehn
On Fri, Oct 23, 2015 at 12:42 PM, Travis Leithead <
travis.leith...@microsoft.com> wrote:

> Well, since SVG 'use' is mostly about replicating the composed tree
> anyway, it seems that it should probably render the composed tree--e.g.,
> this seems natural, because use would "replicate" the host element, which
> would then render its shadow DOM.


The current implementation of <use> in Blink (and WebKit IIRC) is to
literally cloneNode the referenced content into a ShadowRoot off the <use>
element. Cloning the composed tree would change the way selectors in the
cloned tree match by changing the shape and order of the tree. It also
means potentially cloning cousin elements of the used element, which is
somewhat surprising. I'd be inclined to say all shadow roots inside the
used element are ignored, and all slots are inert.


> The interactivity behaviors associated with the shadow dom is an
> interesting question though.. today you are expected to attach event
> handlers to the ElementInstance (
> http://www.w3.org/TR/SVG/struct.html#InterfaceSVGElementInstance) which
> is the DOM representation of the "replicated" tree--I'm not sure what this
> would look like for Elements with an attached shadow.
>

The instance tree was removed in SVG2, I think; we certainly removed it
from Blink.

- E


Custom elements backing swap proposal

2015-10-23 Thread Elliott Sprehn
I've been thinking about ways to make custom elements violate the
consistency principle less often and had a pretty awesome idea recently.

Unfortunately I won't be at TPAC, but I'd like to discuss this idea in
person. Can we setup a custom element discussion later in the year?

The current "synchronous in the parser" model doesn't feel good to me
because cloneNode() and upgrades are still async, and I fear other things
(ex. parser in editing, innerHTML) may need to be as well. So, while we've
covered up the inconsistent state of the element in one place (the parser),
we've left it to be a surprise in the others, which seems worse than just
always being async. This led me to a crazy idea that would get us
consistency between all these cases:

What if we use a different pimpl object (C++ implementation object) when
running the constructor, and then move the children/shadows and attributes
over after the fact? This element can be reused (as an implementation
detail), and any attempt to append it to another element would throw.

An example algorithm for created callback is:
https://gist.github.com/esprehn/505303896aa97cd8b33e

In an engine you could imagine doing this by creating a new C++ Element
with the same tag name, having the JS wrapper point to it, running the
constructor, merging the new C++ element data with the original C++ element
data, and then associating the wrapper back to the original C++ element.
Note that during this time the original C++ element, and the temp one both
point to the same wrapper, but the wrapper only points to the temp one.
This ensures that any held references to MutationRecords/Events or other
objects will also observe the temporary state change.

This is similar to Maciej’s idea of removing the elements from the tree and
removing their attributes before running the callback, then adding them
back after, but it avoids having to actually remove the elements and update
the associated tree state.

There are lots of details to work out here, but I think this could be
workable. The hardest part is making the swapping step robust in the
engine; for example, the internal Attr objects would need to handle the
ownerElement.

- E


Re: Call for Consensus: Publish First Public Working Draft of FindText API, respond by 14 October

2015-10-06 Thread Elliott Sprehn
How does this work with shadow dom? Range is not very friendly to that.
On Oct 6, 2015 4:35 PM, "Frederick Hirsch"  wrote:

> This is a call for consensus (CfC) to publish a First Public Working Draft
> (FPWD) of FindText API; deadline 14 October (1 week)
>
> This FindText API is joint deliverable of the WebApps WG and Web
> Annotation WG (listed as "Robust Anchoring" in the charters [1], [2]).
>
> This is a Call for Consensus (CfC) to publish a FPWD of the FindText API,
> using the following Editor's Draft as the basis:
>
>  http://w3c.github.io/findtext/
>
> "This specification describes an API for finding ranges of text in a
> document or part of a document, using a variety of selection criteria."
>
> This API was presented to the WebApps WG last TPAC under a different name,
> and with a fairly different design; it was modified to fit the feedback
> from that meeting and from others, including narrowing of scope, the
> introduction of Promises, and exposing low-level functionality in the
> spirit of the Extensible Web.
>
> The specification is largely based on the Annotator JavaScript library's
> "robust anchoring" code, and a standalone polyfill is under development.
> Feedback from possible implementers is especially welcome.
>
> This CfC satisfies the group's requirement to "record the groups' decision
> to request advancement".
>
> By publishing this FPWD, the group sends a signal to the community to
> begin reviewing the document. The FPWD reflects where the groups are on
> this spec at the time of publication; it does _not_ necessarily mean there
> is consensus on the spec's contents and the specification may be updated.
>
> If you have any comments or concerns about this CfC, please reply to this
> e-mail by 14 October at the latest. Positive response is preferred and
> encouraged; even a +1 will do. Silence will be considered as agreement
> with the proposal.
>
> regards, Frederick & Rob
>
> Frederick Hirsch, Rob Sanderson
>
> Co-Chairs, W3C Web Annotation WG
>
> www.fjhirsch.com
> @fjhirsch
>
> [1] http://www.w3.org/2014/06/webapps-charter.html#deliverables
>
> [2] http://www.w3.org/annotation/charter/#scope
>
>
>
>
>
>
>


Re: [shadow-dom] ::before/after on shadow hosts

2015-06-30 Thread Elliott Sprehn
On Wed, Jul 1, 2015 at 12:08 AM, Hayato Ito hay...@chromium.org wrote:

  ::before and ::after are basically *siblings* of the shadow host,

 That's not a correct sentence. ::before and ::after shouldn't be
 siblings of the shadow host.
 I just wanted to say that #2 is the desired behavior.


Indeed they're children, immediately before and immediately after the
composed children of an element.

fwiw this is also how it must work to avoid breaking the web or
implementing special cases. input (and textarea) have a ShadowRoot, and
input::before and input::after are both common ways to add decorations to
input elements. I broke this once in WebKit and we found all that content.

- E


Re: Clipboard API: remove dangerous formats from mandatory data types

2015-06-11 Thread Elliott Sprehn
On Thu, Jun 11, 2015 at 10:51 AM, Wez w...@google.com wrote:

 Hallvord,

 The proposal isn't to remove support for copying/pasting images, but to
 restrict web content from placing compressed image data in one of these
 formats on the clipboard directly - there's no issue with content pasting
 raw pixels from a canvas, for example, since scope for abusing that to
 compromise the recipient is extremely limited by comparison to JPEG, PNG or
 GIF.

 The UA is still at liberty to synthesize these formats itself, based on
 the raw imagery provided by the content, to populate the clipboard with
 formats that other applications want.



I don't think the clipboard should forbid inserting image data; there are
so many ways to compromise desktop software. ex. pasting text/html into
Mail.app might even do it. This API shouldn't be trying to prevent that.

- E


Re: Writing spec algorithms in ES6?

2015-06-11 Thread Elliott Sprehn
I've seen this in some specs, and I found the JS code quite difficult to
understand. There's so much subtle behavior you can do, and it's easy to be
too fancy.

In the example in the color spec, why does undefined become 0 but not null?
Also the properties are actually doubles so there's missing type coercion
in that pseudo code I think.

On Thu, Jun 11, 2015 at 1:50 PM, Erik Arvidsson a...@google.com wrote:

 Dare I say ecma-speak?

 (Maybe I got stockholm-syndrome?)

 On Thu, Jun 11, 2015 at 4:47 PM, Adam Klein ad...@chromium.org wrote:
  On Thu, Jun 11, 2015 at 1:32 PM, Dimitri Glazkov dglaz...@google.com
  wrote:
 
  Folks,
 
  Many specs nowadays opt for a more imperative method of expressing
  normative requirements, and using algorithms. For example, both HTML
 and DOM
  spec do the run following steps list that looks a lot like
 pseudocode, and
  the Web components specs use their own flavor of prose-pseudo-code.
 
  I wonder if it would be good if the pseudo-code were actually ES6, with
  comments where needed?
 
  I noticed that the CSS Color Module Level 4 actually does this, and it
  seems pretty nice:
  http://dev.w3.org/csswg/css-color/#dom-rgbcolor-rgbcolorcolor
 
  WDYT?
 
 
  I love the idea of specifying algorithms in something other than English.
  But I'm afraid that ECMAScript is not a good language for this purpose,
 for
  the same reasons Boris cites in his response (which arrived as I was
 typing
  this).
 
  - Adam



 --
 erik




Re: [webcomponents] How about let's go with slots?

2015-05-19 Thread Elliott Sprehn
On Tue, May 19, 2015 at 10:09 AM, Domenic Denicola d...@domenic.me wrote:

 From: Elliott Sprehn [mailto:espr...@chromium.org]

  Given the widget ui-collapsible that expects a ui-collapsible-header
 in the content model, with slots I can write:
 
  <ui-collapsible>
    <my-header-v1 slot="ui-collapsible-header"> ... </my-header-v1>
  </ui-collapsible>

  <ui-collapsible>
    <my-header-v2 slot="ui-collapsible-header"> ... </my-header-v2>
  </ui-collapsible>
 
  within the same application. It also means the library can ship with an
 implementation of the header widget, but you can replace it with your own.
 This is identical to the common usage today in polymer apps where you
 annotate your own element with classes. There's no restriction on the type
 of the input.

 I see. Thanks for explaining.

 I think this model you cite Polymer using is different from what HTML
 normally does, which is why it was confusing to me. In HTML the insertion
 point tags (e.g. <summary> or <li> or <option>) act as dumb containers.
 This was reinforced by the examples in the proposal, which use <div
 content-slot=...> with the div being a clear dumb container. You cannot
 replace them with your own choice of container and have things still work.


<li> is actually just a layout mode; you can make anything a list item by
doing display: list-item. There's no special content model for <ul>.
<details> and <select> are examples of selection based on tag name.

In general these dumb containers have proven harmful, though, because they
insert extra boxes around your content and are not configurable, which is
annoying for authors. They're the reason we keep coming back to trying to
figure out how to customize <option> instead of just letting you replace it
with your own widget we tell to layout/paint at a specific size like other
platforms would.

ex. Authors don't like the disclosure triangle that <summary> inserts and
want to change it. They'd much rather do <div slot="summary"> and do
whatever they want instead.

- E


Re: [webcomponents] How about let's go with slots?

2015-05-18 Thread Elliott Sprehn
I'd like this API to stay simple for v1 and support only named slots and
not tag names. I believe we can explain what details does with the
imperative API in v2.

On Mon, May 18, 2015 at 5:11 PM, Justin Fagnani justinfagn...@google.com
wrote:



 On Mon, May 18, 2015 at 4:58 PM, Philip Walton phi...@philipwalton.com
 wrote:

 Pardon my question if this has been discussed elsewhere, but it's not
 clear from my reading of the slots proposal whether they would be allowed
 to target elements that are not direct children of the component.

 I believe that with the `select` attribute this was implicitly required,
 because only compound selectors were supported (i.e. no child or descendant
 combinators) [1].


 I think the actual issue is that you might have fights over who gets to
 redistribute an element. Given

 <my-el-1>
   <my-el-2>
     <div content-slot="foo"></div>
   </my-el-2>
 </my-el-1>

 If both my-el-1 and my-el-2 have "foo" slots, who wins? What if the winner
 by whatever rules adds a clashing slot name in a future update?

 I mentioned this in the Imperative API thread, but I think the least
 surprising way forward for distributing non-children is to allow nodes to
 cooperate on distribution, so an element could send its distributed nodes
 to an ancestor:
 https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0325.html




 Would named slots be able to target elements farther down in the tree?


 [1]
 http://w3c.github.io/webcomponents/spec/shadow/#dfn-content-element-select





Re: [webcomponents] How about let's go with slots?

2015-05-18 Thread Elliott Sprehn
On Mon, May 18, 2015 at 6:34 PM, Domenic Denicola d...@domenic.me wrote:

 From: Justin Fagnani [mailto:justinfagn...@google.com]

  They're not equivalent, because any element can have the right
 content-slot value, but with tag names, only one (or maybe N) names would
 be supported.

 Hmm, I don't understand, and fear we might be talking past each other. Can
 you give an example where content-slot works but tag names do not? For
 example
 https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution.md#proposal-part-1-syntax-for-named-insertion-points
 gets translated from

 <combo-box>
   <icon></icon>
   <dropdown>
     … Choices go here …
   </dropdown>
 </combo-box>

 Your stated sentence doesn't make much sense to me; you can have multiple
 elements with the same tag name. Literally, just take any example you can
 write as <x content-slot="y"> ... </x> and replace it with <y> ... </y>.


Given the widget <ui-collapsible> that expects a <ui-collapsible-header> in
the content model, with slots I can write:

<ui-collapsible>
  <my-header-v1 slot="ui-collapsible-header"> ... </my-header-v1>
</ui-collapsible>

<ui-collapsible>
  <my-header-v2 slot="ui-collapsible-header"> ... </my-header-v2>
</ui-collapsible>

within the same application. It also means the library can ship with an
implementation of the header widget, but you can replace it with your own.
This is identical to the common usage today in polymer apps where you
annotate your own element with classes. There's no restriction on the type
of the input.

With tag names I must write:

<ui-collapsible>
  <ui-collapsible-header> ... </ui-collapsible-header>
</ui-collapsible>

which means I can't replace the header with any widget I choose; I must use
that custom element. This is identical to using a tag name with content
select, and it restricts the type of input. There's no way to have both an
implementation in the library and one in your application, or multiple
implementations.

- E


Re: [webcomponents] How about let's go with slots?

2015-05-18 Thread Elliott Sprehn
On Mon, May 18, 2015 at 6:24 PM, Justin Fagnani justinfagn...@google.com
wrote:



 On Mon, May 18, 2015 at 6:13 PM, Domenic Denicola d...@domenic.me wrote:

  In case it wasn't clear, named slots vs. tag names is purely a bikeshed
 color (but an important one, in the syntax is UI sense). None of the
 details of how the proposal works change at all.

 They're not equivalent, because any element can have the right
 content-slot value, but with tag names, only one (or maybe N) names would
 be supported.


Indeed they're not the same, and supporting both requires coming up with a
syntax to allow both when doing reprojection or selection, which rapidly
converges on @select.

We should only support a single selection type for v1: either tag names or
content-slot.

 If you already knew that but still prefer content-slot attributes, then I
 guess we just disagree. But it wasn't clear.

 I'm saying we should pick a single kind, not both. Our customers should
decide which one.

(btw the platform doesn't use dashes in attribute names, so this is either
slot or contentslot when we add it, I'd suggest slot).

- E


Re: Custom Elements: insert/remove callbacks

2015-05-09 Thread Elliott Sprehn
On May 9, 2015 9:41 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Fri, May 8, 2015 at 2:50 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 5/8/15 1:42 AM, Elliott Sprehn wrote:
  That actually seems pretty similar to what we have, ours is in the form
  of:
 
  Node#insertedInto(Node insertionPoint)
  Node#removedFrom(Node insertionPoint)
 
  To be clear, ours is also in the form of two methods
  (BindToTree/UnbindFromTree) that take various arguments.

 The DOM only has insert/remove hooks:

   https://dom.spec.whatwg.org/#concept-node-insert-ext
   https://dom.spec.whatwg.org/#concept-node-remove-ext

 So that seems clearly wrong (in the specification)... Are descendants
 notified in tree order?

Yes, and anything that can run script is notified in a second pass. So for
example if you create a script, put it in a subtree, then append the
subtree, the script runs after all insertedInto notifications have been
sent to the subtree.

- E


Re: Custom Elements: is=

2015-05-08 Thread Elliott Sprehn
On Fri, May 8, 2015 at 12:56 PM, Travis Leithead 
travis.leith...@microsoft.com wrote:

 The 'is' attribute is only a declarative marker; it's the indicator that
 the native element has a [potential] custom prototype and hierarchy, right?

 I don't mean to drudge up past history and decisions already laid to rest,
 but if subclassing native elements is a good compromise until we get to the
 underlying behaviors that make up native HTML elements, why should we limit
 registerElement to hyphenated custom element names?


This doesn't work: the parser needs to allocate the right C++ object
associated with the tag name. There's no way to do upgrades if we allow you
to register any tag name you want. It was also disliked by Hixie because it
encourages using up the namespace, so the spec would then need to invent
weirder names to work around ones that were already in widespread use.



 In other words, why not simplify by:
 1. Allow any localName to be used by registerElement. (This would imply
 the HTML namespace by default; we can later add registerElementNS if needed
 :)
 2.  Drop the 'extends' member from the ElementRegistrationOptions
 dictionary.

 With this simplification, serializing elements wouldn't include any sign
 that they are 'customized' in any way (as is done with 'is' today). I don't
 see this as a problem, since web devs today can already do this, but
 without the help of the parser.

 It always seemed weird to me that 'prototype' of
 ElementRegistrationOptions can inherit from anything (including null), and
 be completely disassociated from the localName provided in 'extends'.


I think this should probably throw if you inherit from the wrong thing.

- E


Re: Custom Elements: insert/remove callbacks

2015-05-07 Thread Elliott Sprehn
On Thu, May 7, 2015 at 10:44 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Fri, May 8, 2015 at 7:42 AM, Elliott Sprehn espr...@chromium.org
 wrote:
  That actually seems pretty similar to what we have, ours is in the form
 of:
 
  Node#insertedInto(Node insertionPoint)
  Node#removedFrom(Node insertionPoint)
 
  where insertionPoint is the ancestor in the tree where a connection was
  added or removed which may be arbitrarily far up the ancestor chain. From
  that you can figure out all the cases Boris is describing.

 Cool. So maybe the DOM specification needs to be updated to have that
 model and we should expose that as low-level hook to web developers.


We'd consider adding a new MutationObserver type, but we'd prefer not to
add any more tree mutation callbacks. Anything you can do with those you
can do with an ancestorChanged record type and takeRecords().

- E


Re: Custom Elements: insert/remove callbacks

2015-05-07 Thread Elliott Sprehn
On Thu, May 7, 2015 at 10:24 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, May 7, 2015 at 10:14 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  In Gecko, yes.  The set of hooks Gecko builtin elements have today is,
  effectively:
 
  1)  This element used to not have a parent and now does.
  2)  This element has an ancestor that used to not have a parent and now
  does.
  3)  This element used to have a a parent and now does not.
  4)  This element has an ancestor that used to have a parent and
  now does not.

 So that is more granular than what Dominic said Chrome has. I wonder
 why there's a difference. Normally at the low-level things are pretty
 close (or have a difference like linked list vs array).


That actually seems pretty similar to what we have, ours is in the form of:

Node#insertedInto(Node insertionPoint)
Node#removedFrom(Node insertionPoint)

where insertionPoint is the ancestor in the tree where a connection was
added or removed which may be arbitrarily far up the ancestor chain. From
that you can figure out all the cases Boris is describing.

- E


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-05-07 Thread Elliott Sprehn
On Wed, May 6, 2015 at 11:08 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, May 7, 2015 at 6:02 AM, Hayato Ito hay...@chromium.org wrote:
  I'm saying:
  - Composed tree is related with CSS.
  - Node distribution should be considered as a part of style concept.

 Right, I think Ryosuke and I simply disagree with that assessment. CSS
 operates on the composed tree (and forms a render tree from it).
 Events operate on the composed tree. Selection operates on the
 composed tree (likely, we haven't discussed this much).


Selection operates on the render tree. The current selection API is
(completely) busted for modern apps, and a new one is needed that's based
around layout. Flexbox with order, positioned objects, distributions, grid:
none of them work with the DOM-based API.

- E


Re: Shadow DOM: state of the distribution API

2015-05-06 Thread Elliott Sprehn
The '3' proposal is what the Houdini effort is already researching for custom
style/layout/paint. I don't think it's acceptable to make all usage of
Shadow DOM break when used with libraries that read layout information
today, i.e. offsetTop must work. I also don't think it's acceptable to
introduce new synchronous hooks and promote n^2 churn in the distribution.

Distribution is an async batched operation that can happen in a separate
scripting context. There's no issue with re-entrancy there, and it allows
us to define a nice functional style API that lets you rebuild what we have
today (and more).

shadowRoot.registerCustomDistributor('src.js');

src.js:

distributeNodes = function(Array<Candidate> candidates,
Array<InsertionPoint> insertionPoints) {
  // For each candidate add it to the insertionPoint you want.
};

Candidates are objects of:

{
  tagName: string,
  attributes: Map<string, string>
}

InsertionPoints are objects of:

{
  attributes: Map<string, string>,
  add: function(candidate) { ... }
}

That allows you to rebuild <content select>, implement pseudo classes in
<content select> like :first-child and :nth-child which were originally in
the spec but were removed, and also allows you to implement new features
like <content order={n}> so you can order the distribution process instead
of it being in tree order. This also means the browser can distribute on
another thread. :)
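To make the shape of this proposal concrete, here is a sketch of a distributor written against the registerCustomDistributor() interface described above. Everything here follows the proposal, not any shipped API: the candidate and insertion-point object shapes are the ones given in the message, and the <content select> emulation is only one possible policy.

```javascript
// Sketch of a custom distributor under the proposed (non-standard) API.
// It emulates <content select="tagname"> matching using the plain-object
// shapes from the proposal: candidates expose tagName and an attributes
// Map; insertion points expose an attributes Map and an add() function.
function distributeNodes(candidates, insertionPoints) {
  for (const candidate of candidates) {
    // The first insertion point whose "select" attribute matches the
    // candidate's tag name wins; a point without "select" is a catch-all.
    const target = insertionPoints.find(function (point) {
      const select = point.attributes.get('select');
      return !select || select.toUpperCase() === candidate.tagName;
    });
    if (target) target.add(candidate);
  }
}
```

Because the distributor only ever sees these plain descriptions of nodes, it could run in a separate scripting context, or on another thread, exactly as the message suggests.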

On Wed, May 6, 2015 at 1:14 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, May 6, 2015 at 11:07 AM, Anne van Kesteren ann...@annevk.nl
 wrote:
  On Wed, May 6, 2015 at 7:57 PM, Jonas Sicking jo...@sicking.cc wrote:
  Has at-end-of-microtask been debated rather than 1/2? Synchronous
  always has the downside that the developer has to deal with
  reentrancy.
 
  1/2 are triggered by the component author.

 So component author code isn't triggered when a webpage does
 element.appendChild() if 'element' is a custom element?

 FWIW, I am by no means trying to diminish the task that adding async
 layout APIs would be. Though it might not be that different in size
 compared to the '3' proposal.

 / Jonas




Re: Custom Elements: is=

2015-05-06 Thread Elliott Sprehn
Removing this breaks several use cases for which there's currently no
alternative:

https://chromium.googlesource.com/infra/infra/+/master/appengine/chromium_rietveld/new_static/common/cr-action.html
https://chromium.googlesource.com/infra/infra/+/master/appengine/chromium_rietveld/new_static/common/cr-button.html

where you want to hook into the focus, tab navigation, and action behavior
in the browser. For example, links unfocus in some browsers after you
release the click. We also currently don't have a facility to be focusable
without having a tab index.


On Wed, May 6, 2015 at 9:59 AM, Alice Boxhall aboxh...@google.com wrote:



 On Wed, May 6, 2015 at 8:33 AM, Anne van Kesteren ann...@annevk.nl
 wrote:

 On Wed, May 6, 2015 at 4:46 PM, Léonie Watson lwat...@paciellogroup.com
 wrote:
  My understanding is that sub-classing would give us the accessibility
 inheritance we were hoping is= would provide. Apologies if I've missed it
 somewhere obvious, but is there any information/detail about the proposed
 sub-classing feature available anywhere?

 It should fall out of figuring out HTML as Custom Elements. Apart from
 styling which I think we can solve independently to some extent that's
 the big elephant in the room.


 As I see it there are two problems which is= could conceivably solve,
 which seem to be being conflated:
 - providing access to capabilities which so far only native elements have
 (roles not captured by ARIA, forms behaviour, etc)
 - allowing re-use of the bundled complete set of behaviours captured in
 each element in the platform (what is focusable, what keyboard interactions
 work, what happens on mobile vs. desktop, what semantic values are exposed
 - essentially everything required in the HTML specs for that particular
 element)

 A solution to the first problem I agree should fall out of the HTML as
 custom elements effort.

 The second is the one I'm more concerned about falling off the radar: when
 using a native button, you can be reasonably certain that it will adhere
 to the HTML spec in behaviour; when using an x-button, you only have the
 reputation of the custom element vendor to give you any sort of guarantee
 about behaviour, and it could regress at any time.

 I definitely acknowledge is= may not be the ideal solution to the latter
 problem - it definitely has some holes in it, especially when you start
 adding author shadow roots to things - but I think it does have potential.
 I'd really like to be convinced that we either have a reasonable
 alternative solution, or that it's not worth worrying about.



Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-05-04 Thread Elliott Sprehn
On Thu, Apr 30, 2015 at 6:22 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Apr 30, 2015 at 3:05 PM, Hayato Ito hay...@chromium.org wrote:
  That's the exactly intended behavior in the current spec.
  The timing of distribution is not observable.

 Right, but you can synchronously observe whether something is
 distributed. The combination of those two things coupled with us not
 wanting to introduce new synchronous mutation observers is what
 creates problems for an imperative API.


 So if we want an imperative API we need to make a tradeoff. Do we care
 about offsetTop et al or do we care about microtask-based mutation
 observers? I'm inclined to think we care more about the latter, but
 the gist I put forward takes a position on neither and leaves it up to
 web developers when they want to distribute (if at all).


We don't need to pick from either of those choices. We can solve this
problem by running the distribution code in a separate scripting context
with a restricted (distribution specific) API as is being discussed for
other extension points in the platform.

One thing to consider here is that we very much consider distribution a
style concept. It's about computing who you inherit style from and where
you should be in the box tree. It just so happens it's also leveraged in
event dispatch (like pointer-events). It happens asynchronously from
DOM mutation as needed just like style and reflow though.

We don't want synchronous reflow inside appendChild because it means
authors would have to be very careful when mutating the DOM to avoid extra
churn. Distribution is the same way, we want it async so the browser can
batch the work and only distribute when the result is actually needed.

In our code if you look at the very few places we update distribution
explicitly:

3 event related
3 shadow dom JS api
9 style (one of these is flushing style)
1 query selector (for ::content and :host-context)

And all other places where distribution wants to be updated are because we
flush style (or layout) because what that caller really wanted to know was
something about the rendering.

- E


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-30 Thread Elliott Sprehn
On Thu, Apr 30, 2015 at 8:57 PM, Ryosuke Niwa rn...@apple.com wrote:

 ...
 
  The return value of (2) is the same in either case. There is no
 observable difference. No interop issue.
 
  Please file a bug for the spec with a concrete example if you can find a
 observable difference due to the lazy-evaluation of the distribution.

 The problem isn't so much that the current shadow DOM specification has an
 interop issue because what we're talking here, as the thread title clearly
 communicates, is the imperative API for node distribution, which doesn't
 exist in the current specification.

 In particular, invoking user code at the timing specified in section 3.4
 which states if any condition which affects the distribution result
 changes, the distribution result must be updated before any use of the
 distribution result introduces a new interoperability issue because
 before any use of the distribution result is implementation dependent.
 e.g. element.offsetTop may or not may use the distribution result depending
 on UA.  Furthermore, it's undesirable to precisely spec this since doing so
 will impose a serious limitation on what UAs could optimize in the future.


element.offsetTop must use the distribution result; there's no way to know
what your style is without computing your distribution. This isn't any
different than getComputedStyle(...).color needing to flush style, or
getBoundingClientRect() needing to flush layout.

Distribution is about computing who your parent and siblings are in the box
tree, and where you should inherit your style from. Doing it lazily is not
going to be any worse in terms of interop than defining new properties that
depend on style.

- E


Re: :host pseudo-class

2015-04-30 Thread Elliott Sprehn
On Thu, Apr 30, 2015 at 10:25 PM, Anne van Kesteren ann...@annevk.nl
wrote:

 ...

  My problem is not with the ability to address the host element, but by
  addressing it through a pseudo-class, which has so far only been used
  for matching elements in the tree that have a particular internal
  slot.
 
  I don't understand what distinction you're trying to draw here.  Can
  you elaborate?

 A pseudo-class selector is like a class selector. You match an element
 based on a particular trait it has. Your suggestion for :host()
 however is to make it match an element that cannot otherwise be
 matched. That's vastly different semantics.


That's still true if you use ::host: what is the thing on the left hand
side that the ::host lives on? I'm not aware of any pseudo element that's
not connected to another element such that you couldn't write {thing}::pseudo.

- E


Re: Imperative API for Node Distribution in Shadow DOM (Revisited)

2015-04-28 Thread Elliott Sprehn
A distribute callback means running script any time we update distribution,
which is inside the style update phase (or event path computation phase,
...) which is not a location we can run script. We could run script in
another scripting context like is being considered for custom layout and
paint though, but that has a different API shape since you'd register a
separate .js file as the custom distributor, like:

(document || shadowRoot).registerCustomDistributor({src: 'distributor.js'});

I also don't believe we should support distributing any arbitrary
descendant, that has a large complexity cost and doesn't feel like
simplification. It makes computing style and generating boxes much more
complicated.

A synchronous childrenChanged callback has similar issues with when it's
safe to run script, we'd have to defer its execution in a number of
situations, and it feels like a duplication of MutationObservers which
specifically were designed to operate in batch for better performance and
fewer footguns (ex. a naive childrenChanged based distributor will be n^2).


On Mon, Apr 27, 2015 at 8:48 PM, Ryosuke Niwa rn...@apple.com wrote:


  On Apr 27, 2015, at 12:25 AM, Justin Fagnani justinfagn...@google.com
 wrote:
 
  On Sun, Apr 26, 2015 at 11:05 PM, Anne van Kesteren ann...@annevk.nl
 wrote:
  On Sat, Apr 25, 2015 at 10:49 PM, Ryosuke Niwa rn...@apple.com wrote:
   If we wanted to allow non-direct child descendent (e.g. grand child
 node) of
   the host to be distributed, then we'd also need O(m) algorithm where
 m is
    the number of nodes under the host element.  It might be okay to carry on
 the
   current restraint that only direct child of shadow host can be
 distributed
   into insertion points but I can't think of a good reason as to why
 such a
   restriction is desirable.
 
  The main reason is that you know that only a direct parent of a node can
 distribute it. Otherwise any ancestor could distribute a node, and in
 addition to probably being confusing and fragile, you have to define who
 wins when multiple ancestors try to.
 
  There are cases where you really want to group element logically by one
 tree structure and visually by another, like tabs. I think an alternative
 approach to distributing arbitrary descendants would be to see if nodes can
 cooperate on distribution so that a node could pass its direct children to
 another node's insertion point. The direct child restriction would still be
 there, so you always know who's responsible, but you can get the same
 effect as distributing descendants for a cooperating sets of elements.

 That's an interesting approach. Ted and I discussed this design, and it
 seems workable with Anne's `distribute` callback approach (= the second
 approach in my proposal).

 Conceptually, we ask each child of a shadow host for the list of distributable
 nodes under that child (including itself). For a normal node without a
 shadow root, it'll simply return itself along with all the distribution
 candidates returned by its children. For a node with a shadow root, we ask its
 implementation. The recursive algorithm can be written as follows in pseudo
 code:

 ```
 NodeList distributionList(Node n):
   if n has shadowRoot:
     return ask n the list of distributable nodes under n (1)
   else:
     list = [n]
     for each child in n:
       list += distributionList(child)
     return list
 ```
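The recursive algorithm above can be rendered as runnable JavaScript over plain objects (a sketch: real DOM nodes and the "ask its implementation" step at (1) are stubbed out by simply returning [n] for a shadow host, the second option mentioned below):

```javascript
// Runnable sketch of the distributionList() pseudocode above, using plain
// objects ({ shadowRoot: bool, children: [...] }) in place of DOM nodes.
// For a node with a shadow root we take the simpler option of returning
// just [n], rather than consulting that node's own distributor.
function distributionList(n) {
  if (n.shadowRoot) return [n];
  let list = [n];
  for (const child of n.children) {
    list = list.concat(distributionList(child));
  }
  return list;
}
```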

 Now, if we adopted `distribute` callback approach, one obvious mechanism
 to do (1) is to call `distribute` on n and return whatever it didn't
 distribute as a list. Another obvious approach is to simply return [n] to
 avoid the mess of n later deciding to distribute a new node.

  So you mean that we'd turn distributionList into a subtree? I.e. you
  can pass all descendants of a host element to add()? I remember Yehuda
  making the point that this was desirable to him.
 
  The other thing I would like to explore is what an API would look like
  that does the subclassing as well. Even though we deferred that to v2
  I got the impression talking to some folks after the meeting that
  there might be more common ground than I thought.
 
  I really don't think the platform needs to do anything to support
 subclassing since it can be done so easily at the library level now that
 multiple generations of shadow roots are gone. As long as a subclass and
 base class can cooperate to produce a single shadow root with insertion
 points, the platform doesn't need to know how they did it.

 I think we should eventually add native declarative inheritance support
 for all of this.

 One thing that worries me about the `distribute` callback approach (a.k.a.
 Anne's approach) is that it bakes distribution algorithm into the platform
 without us having thoroughly studied how subclassing will be done upfront.

 Mozilla tried to solve this problem with XBL, and they seem to think what
 they have isn't really great. Google has spent multiple years working on
 this problem but they come around to say their solution, 

Re: Why is querySelector much slower?

2015-04-28 Thread Elliott Sprehn
On Mon, Apr 27, 2015 at 11:13 PM, Glen Huang curvedm...@gmail.com wrote:

 On second thought, if the list returned by getElementsByClass() is lazy
 populated as Boris says, it shouldn't be a problem. The list is only
 updated when you access that list again.


The invalidation is what makes your code slower. Specifically, any time you
mutate the tree while live node lists exist, we traverse ancestors to mark
them as needing to be updated.

Blink (and likely other browsers) will eventually garbage collect the
LiveNodeList and then your DOM mutations will get faster again.
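The live vs. static distinction being discussed can be seen in a short sketch (browser-only, since it needs a DOM; it illustrates the semantics, not the engine-internal invalidation cost itself):

```javascript
// Browser-only sketch of live vs. static node lists.
const div = document.createElement('div');
div.innerHTML = '<span class="item"></span>';

const live = div.getElementsByClassName('item');   // live HTMLCollection
const snapshot = div.querySelectorAll('.item');    // static NodeList

div.appendChild(Object.assign(document.createElement('span'),
                              { className: 'item' }));

// The live list reflects the mutation; the snapshot does not. While the
// live list is reachable, every tree mutation also pays the bookkeeping
// cost of marking it (and caches on ancestors) as needing an update.
console.log(live.length);      // 2
console.log(snapshot.length);  // 1
```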



 On Apr 28, 2015, at 2:08 PM, Glen Huang curvedm...@gmail.com wrote:

 Live node lists make all dom mutation slower

 Haven't thought about this before. Thank you for pointing it out. So if I
 use, for example, lots of getElementsByClass() in the code, I'm actually
 slowing down all DOM mutating APIs?





Re: Exposing structured clone as an API?

2015-04-27 Thread Elliott Sprehn
On Apr 24, 2015 3:16 PM, Joshua Bell jsb...@google.com wrote:

 It seems like the OP's intent is just to deep-copy an object. Something
like the OP's tweet... or this, which we use in some tests:

 function structuredClone(o) {
 return new Promise(function(resolve) {
 var mc = new MessageChannel();
 mc.port2.onmessage = function(e) { resolve(e.data); };
 mc.port1.postMessage(o);
 });
 }

 ... but synchronous, which is fine, since the implicit
serialization/deserialization needs to be synchronous anyway.

 If we're not dragging in the notion of extensibility, is there
complication?  I'm pretty sure this would be about a two line function in
Blink. That said, without being able to extend it, is it really interesting
to developers?

The two line function won't be very fast: it'll serialize into a big byte
array first, since structured clone is designed for sending objects across
threads/processes. It also means going through the runtime API, which is
slower.

That was my point, exposing this naively is just exposing the slow path to
developers since a handwritten deep clone will likely be much faster.
Developers shouldn't be using structured clone for general deep cloning.
TC39 should expose an @@clone callback developers can override for all
objects.

Indexeddb has a similar situation, there's a comparison function in there
that seems super useful since it can compare arrays, but in reality you
shouldn't use it for general purpose code. JS should instead add an array
compare function, or a general compare function.
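The kind of hand-written general compare function alluded to here is straightforward to sketch. This is an illustrative assumption, not IndexedDB's actual key ordering (which also orders across types like dates and binary); it handles numbers, strings, and nested arrays:

```javascript
// Minimal hand-written deep compare for values made of numbers, strings
// and (nested) arrays, returning -1 / 0 / 1. A sketch of the general
// compare function the message suggests JS itself should provide; it
// deliberately does not reproduce IndexedDB's full key ordering rules.
function compareKeys(a, b) {
  const aIsArray = Array.isArray(a);
  const bIsArray = Array.isArray(b);
  if (aIsArray && bIsArray) {
    const len = Math.min(a.length, b.length);
    for (let i = 0; i < len; i++) {
      const c = compareKeys(a[i], b[i]);
      if (c !== 0) return c;
    }
    // When one array is a prefix of the other, the shorter sorts first.
    return a.length === b.length ? 0 : (a.length < b.length ? -1 : 1);
  }
  if (aIsArray !== bIsArray) return aIsArray ? 1 : -1; // arrays sort last
  return a === b ? 0 : (a < b ? -1 : 1);
}
```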




 On Fri, Apr 24, 2015 at 2:05 PM, Anne van Kesteren ann...@annevk.nl
wrote:

 On Fri, Apr 24, 2015 at 2:08 AM, Robin Berjon ro...@w3.org wrote:
  Does this have to be any more complicated than adding a toClone()
convention
  matching the ones we already have?

 Yes, much more complicated. This does not work at all. You need
 something to serialize the object so you can transport it to another
 (isolated) global.


 --
 https://annevankesteren.nl/




Re: Why is querySelector much slower?

2015-04-27 Thread Elliott Sprehn
Live node lists make all dom mutation slower, so while it might look faster
in your benchmark it's actually slower elsewhere (ex. appendChild).

Do you have a real application where you see querySelector as the
bottleneck?
On Apr 27, 2015 5:32 PM, Glen Huang curvedm...@gmail.com wrote:

 I wonder why querySelector can't get the same optimization: If the passed
 selector is a simple selector like .class, do exactly as
 getElementsByClassName('class')[0] does?

  On Apr 28, 2015, at 10:51 AM, Ryosuke Niwa rn...@apple.com wrote:
 
 
  On Apr 27, 2015, at 7:04 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Mon, Apr 27, 2015 at 1:57 AM, Glen Huang curvedm...@gmail.com
 wrote:
  Intuitively, querySelector('.class') only needs to find the first
 matching
  node, whereas getElementsByClassName('.class')[0] needs to find all
 matching
  nodes and then return the first. The former should be a lot quicker
 than the
  latter. Why that's not the case?
 
  I can't speak for other browsers, but Gecko-based browsers only search
  the DOM until the first hit for getElementsByClassName('class')[0].
  I'm not sure why you say that it must scan for all hits.
 
  WebKit (and, AFAIK, Blink) has the same optimization. It's a very
 important optimization.
 
  - R. Niwa
 





Re: Exposing structured clone as an API?

2015-04-23 Thread Elliott Sprehn
The way many browsers implement this isn't going to be particularly fast.
It serializes the objects to a byte sequence so it can be transferred to
another thread or process and then inflates the objects on the other side.

Have you benchmarked this? I think you're better off just writing your own
clone library.
On Apr 23, 2015 12:30 PM, Martin Thomson martin.thom...@gmail.com wrote:

 On 23 April 2015 at 15:02, Ted Mielczarek t...@mozilla.com wrote:
  Has anyone ever proposed exposing the structured clone algorithm
 directly as an API?

 If you didn't just do so, I will :)

  1. https://twitter.com/TedMielczarek/status/591315580277391360

 Looking at your jsfiddle, here's a way to turn that into something useful.

 +Object.prototype.clone = Object.prototype.clone || function() {
 - function clone(x) {
     return new Promise(function (resolve, reject) {
       window.addEventListener('message', function(e) {
         resolve(e.data);
       });
 +     window.postMessage(this, '*');
 -     window.postMessage(x, '*');
     });
   }

 But are we in the wrong place to have that discussion?




Re: JSON imports?

2015-04-19 Thread Elliott Sprehn
I'd hope with prefetch that we'd keep the data around in the memory cache
waiting for the request.
On Apr 18, 2015 7:07 AM, Glen Huang curvedm...@gmail.com wrote:

 Didn't know about this trick. Thanks.

 But I guess you have to make sure the file being prefetched must have a
 long cache time set in the http header? Otherwise when it's fetched, the
 file is going to be downloaded again?

 What if you don't have control over the json file's http header?

 On Apr 18, 2015, at 10:12 AM, Elliott Sprehn espr...@chromium.org wrote:

 <link rel=prefetch> does that for you.
 On Apr 17, 2015 7:08 PM, Glen Huang curvedm...@gmail.com wrote:

 One benefit is that browsers can start downloading it asap, instead of
 waiting until the fetch code is executed (which could itself be in a
 separate file).

 On Apr 18, 2015, at 8:41 AM, Elliott Sprehn espr...@chromium.org wrote:



 On Fri, Apr 17, 2015 at 6:33 AM, Glen Huang curvedm...@gmail.com wrote:

 Basic feature like this shouldn't rely on a custom solution. However, it
 does mean that if browsers implement this, it's easily polyfillable.


 What does this get you over fetch()? Imports run scripts and enforce
 ordering and deduplication. Importing JSON doesn't really make much sense.


 On Apr 17, 2015, at 9:23 PM, Wilson Page wilsonp...@me.com wrote:

 Sounds like something you could write yourself with a custom-elements.
 Yay extensible web :)

 On Fri, Apr 17, 2015 at 1:32 PM, Matthew Robb matthewwr...@gmail.com
 wrote:

 I like the idea of this. It reminds me of polymer's core-ajax component.
 On Apr 16, 2015 11:39 PM, Glen Huang curvedm...@gmail.com wrote:

 Inspired by HTML imports, can we add JSON imports too?

 ```html
 <script type="application/json" src="foo.json" id="foo"></script>
 <script type="application/json" id="bar">
 { "foo": "bar" }
 </script>
 ```

 ```js
 document.getElementById("foo").json // or whatever
 document.getElementById("bar").json
 ```









Re: Privileged context features and JavaScript

2015-04-17 Thread Elliott Sprehn
For us it's preferable not to do that, because you can then create a static
heap snapshot at compile time and memcpy it to start JS contexts faster.
On Apr 17, 2015 12:03 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/17/15 2:52 AM, Boris Zbarsky wrote:

 If that preference is toggled, we in fact remove the API entirely, so
 that 'geolocation' in navigator tests false.


 Oh, I meant to mention: this is more web-compatible than having the API
 entrypoints throw, because it can be object-detected.  Of course we could
 have made the API entrypoints just always reject the request instead, I
 guess; removing the API altogether was somewhat simpler to do.

 -Boris





Re: JSON imports?

2015-04-17 Thread Elliott Sprehn
<link rel=prefetch> does that for you.
On Apr 17, 2015 7:08 PM, Glen Huang curvedm...@gmail.com wrote:

 One benefit is that browsers can start downloading it asap, instead of
 waiting until the fetch code is executed (which could itself be in a
 separate file).

 On Apr 18, 2015, at 8:41 AM, Elliott Sprehn espr...@chromium.org wrote:



 On Fri, Apr 17, 2015 at 6:33 AM, Glen Huang curvedm...@gmail.com wrote:

 Basic feature like this shouldn't rely on a custom solution. However, it
 does mean that if browsers implement this, it's easily polyfillable.


 What does this get you over fetch()? Imports run scripts and enforce
 ordering and deduplication. Importing JSON doesn't really make much sense.


 On Apr 17, 2015, at 9:23 PM, Wilson Page wilsonp...@me.com wrote:

 Sounds like something you could write yourself with a custom-elements.
 Yay extensible web :)

 On Fri, Apr 17, 2015 at 1:32 PM, Matthew Robb matthewwr...@gmail.com
 wrote:

 I like the idea of this. It reminds me of polymer's core-ajax component.
 On Apr 16, 2015 11:39 PM, Glen Huang curvedm...@gmail.com wrote:

 Inspired by HTML imports, can we add JSON imports too?

 ```html
 <script type="application/json" src="foo.json" id="foo"></script>
 <script type="application/json" id="bar">
 { "foo": "bar" }
 </script>
 ```

 ```js
 document.getElementById("foo").json // or whatever
 document.getElementById("bar").json
 ```








Re: JSON imports?

2015-04-17 Thread Elliott Sprehn
On Fri, Apr 17, 2015 at 6:33 AM, Glen Huang curvedm...@gmail.com wrote:

 Basic feature like this shouldn't rely on a custom solution. However, it
 does mean that if browsers implement this, it's easily polyfillable.


What does this get you over fetch()? Imports run scripts and enforce
ordering and deduplication. Importing JSON doesn't really make much sense.
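The fetch()-based alternative being argued for fits in a few lines. A sketch (loadJSON is a hypothetical helper name, and the error handling is deliberately minimal):

```javascript
// Loading JSON with fetch() instead of an import mechanism: no parser
// integration needed, just a small helper returning a Promise for the
// parsed object.
function loadJSON(url) {
  return fetch(url).then(function (response) {
    if (!response.ok) throw new Error('HTTP ' + response.status + ' for ' + url);
    return response.json();
  });
}
```

Usage would look like `loadJSON('foo.json').then(function (data) { ... })`, matching the `foo.json` example below.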


 On Apr 17, 2015, at 9:23 PM, Wilson Page wilsonp...@me.com wrote:

 Sounds like something you could write yourself with a custom-elements. Yay
 extensible web :)

 On Fri, Apr 17, 2015 at 1:32 PM, Matthew Robb matthewwr...@gmail.com
 wrote:

 I like the idea of this. It reminds me of polymer's core-ajax component.
 On Apr 16, 2015 11:39 PM, Glen Huang curvedm...@gmail.com wrote:

 Inspired by HTML imports, can we add JSON imports too?

 ```html
 <script type="application/json" src="foo.json" id="foo"></script>
 <script type="application/json" id="bar">
 { "foo": "bar" }
 </script>
 ```

 ```js
 document.getElementById("foo").json // or whatever
 document.getElementById("bar").json
 ```






Re: [Imports] Considering imperative HTML imports?

2015-04-16 Thread Elliott Sprehn
On Wed, Apr 15, 2015 at 9:37 PM, Travis Leithead 
travis.leith...@microsoft.com wrote:

  Was an imperative form of HTML imports already considered? E.g., the
 following springs to mind:

   Promise<Document> importDocument(DOMString url);



 I was thinking about Worker’s importScripts(DOMString… urls), and the
 above seems like a nice related corollary.


We did consider this; I think there's still a proposal for an imperative
document.import(url) => Promise API. The major advantage of the declarative
approach is that the browser can fetch the entire import tree and even
start tokenizing on a background thread without ever running any script.

- E


Re: [Shadow] Q: Removable shadows (and an idea for lightweight shadows)?

2015-03-26 Thread Elliott Sprehn
On Thu, Mar 26, 2015 at 11:36 AM, Travis Leithead 
travis.leith...@microsoft.com wrote:

  From: Justin Fagnani [mailto:justinfagn...@google.com]
  Elements expose this “shadow node list” via APIs that are very similar
 to
  existing node list management, e.g., appendShadowChild(),
 insertShadowBefore(),
  removeShadowChild(), replaceShadowChild(), shadowChildren[],
 shadowChildNodes[].
 
 This part seems like a big step back to me. Shadow roots being actual
 nodes means
 that existing code and knowledge work against them.

 existing code and knowledge work against them -- I'm not sure you
 understood correctly.
 Nodes in the shadow child list wouldn't show up in the childNodes list,
 nor in any of the
 node traversal APIs (e.g., not visible to qSA, nextSibling,
 previousSibling, children, childNodes,
 etc.)

 Trivially speaking, if you wanted to hide two divs that implement a stack
 panel and have some
 element render it, you'd just do:
 element.appendShadowChild(document.createElement('div'))
 element.appendShadowChild(document.createElement('div'))

 Those divs would not be discoverable by any traditional DOM APIs (they
 would now be on the
 shadow side), and the only way to see/use them would be to use the new
 element.shadowChildren
 collection.

 But perhaps I'm misunderstanding your point.

 The API surface that you'd have to duplicate with shadow*() methods would
 be quite large.

 That's true. Actually, I think the list above is probably about it.


So if I want to query down into those children I need to do
element.shadowFirstChild.querySelectorAll or
shadowFirstChild.getElementById? That requires looking at all siblings in
the shadowChildList, so I suppose you'd want shadowQuerySelector,
shadowGetElementById, etc? You also need to duplicate elementFromPoint
(FromRect, etc.) down to Element/Text or add special shadow* versions since
right now they only exist on Document and ShadowRoot.

I have to admit I have an allergic reaction to having an element like <div
id=foo> and then doing element.parentNode.querySelector('#foo') != div.

Another fundamental requirement of Shadow DOM is that you never
accidentally fall out or fall into a shadow and must always take an
explicit step to get there. Having shadow node's parentNode be the host
breaks that.

We could make the parentNode be null like ShadowRoot of today, but you're
still stuck adding API duplication or writing code to iterate the
shadowChildren list.

- E


Re: [Shadow] Q: Removable shadows (and an idea for lightweight shadows)?

2015-03-26 Thread Elliott Sprehn
On Thu, Mar 26, 2015 at 1:38 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Mar 26, 2015, at 1:23 PM, Travis Leithead 
 travis.leith...@microsoft.com wrote:

  You make a series of excellent points.



 In the sense that you have a new set of nodes to manage holistically, then
 having some sort of “document” container does makes sense for that (a
 ShadowRoot) in order to place all your search/navigation APIs.



 You got me thinking though—getElementById is currently not available on
 ShadowRoot right? Does that API in the host’s document find IDs in the
 shadow? I presume not given the guidelines you mentioned. I wonder what
 other APIs from Document are desired?


 I thought getElementById existed in ShadowRoot at some point but the
 latest Blink code doesn't have it.  It looks like Blink has
 querySelector/querySelectorAll via ParentNode:


 https://chromium.googlesource.com/chromium/blink/+/master/Source/core/dom/shadow/ShadowRoot.idl

 https://chromium.googlesource.com/chromium/blink/+/master/Source/core/dom/ParentNode.idl


The spec changed,
https://dom.spec.whatwg.org/#interface-nonelementparentnode

ShadowRoot is a DocumentFragment and DocumentFragment implements
NonElementParentNode.

- E


Re: Standardising canvas-driven background images

2015-02-21 Thread Elliott Sprehn
On Fri, Feb 20, 2015 at 11:08 AM, Matthew Robb matthewwr...@gmail.com
wrote:

 I can atest that this feature helped me to dramatically reduce the drag on
 http://arena.net. The section header backgrounds are using canvas
 elements to avoid touching the DOM during scroll events.


Can you give an example where touching the DOM was too slow? It's great to
get those into benchmarks so we can make it fast. You shouldn't have to
work around the DOM.


 I would really like to see this feature finished and fully standardized. I
 will say I prefer being able to use any arbitrary element as the background
 of another element (-moz-element() ) but I understand that is probably less
 likely.

 In any case +1 this!


 - Matthew Robb

 On Fri, Feb 20, 2015 at 10:51 AM, Ashley Gullen ash...@scirra.com wrote:

 Forgive me if I've missed past discussion on this feature but I need it
 so I'm wondering what the status of it is. (Ref:
 https://www.webkit.org/blog/176/css-canvas-drawing/ and
 http://updates.html5rocks.com/2012/12/Canvas-driven-background-images,
 also known as -webkit-canvas() or -moz-element())

 The use case I have for it is this: we are building a large web app that
 could end up dealing with thousands of dynamically generated icons since it
 deals with large user-generated projects. The most efficient way to deal
 with this many small images is to basically sprite sheet them on to a
 canvas 2d context. For example a 512x512 canvas would have room for a grid
 of 256 different 32x32 icons. (These are drawn scaled down from
 user-generated content, so they are not known at the time the app loads and
 so a normal image cannot be used.) To display an icon, a 32x32 div sets its
 background image to the canvas at an offset, like a normal CSS sprite sheet
 but with a canvas.
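
The offset arithmetic this describes is simple; a minimal sketch, using the grid dimensions from the example above (`iconOffset` is a name invented here for illustration):

```javascript
// Map an icon index to its background-position offset in a sprite
// sheet of 32x32 icons packed into a 512x512 canvas (16 per row).
const ICON_SIZE = 32;
const SHEET_SIZE = 512;
const ICONS_PER_ROW = SHEET_SIZE / ICON_SIZE; // 16

function iconOffset(index) {
  const col = index % ICONS_PER_ROW;
  const row = Math.floor(index / ICONS_PER_ROW);
  // CSS background-position uses negative offsets to shift the
  // sheet so the desired tile lands at the element's origin.
  return { x: -col * ICON_SIZE, y: -row * ICON_SIZE };
}

// e.g. icon 17 sits in column 1, row 1:
// iconOffset(17) → { x: -32, y: -32 }
```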

 -webkit-canvas solves this, but I immediately ran into bugs (in Chrome
 updating the canvas does not always redraw the background image), and as
 far as I can tell it has an uncertain future so I'm wary of depending on
 it. The workarounds are:
 - toDataURL() - synchronous so will jank the main thread, data URL
 inflation (+30% size), general insanity of dumping a huge string into CSS
 properties
 - toBlob() - asynchronous which raises complexity problems (needs a way
 of firing events to all dependent icons to update them; updating them
 requires DOM/style changes; needs to handle awkward cases like the canvas
 changing while toBlob() is processing; needs to be carefully scheduled to
 avoid thrashing toBlob() if changes being made regularly e.g. as network
 requests complete). I also assume this uses more memory, since it
 effectively requires creating a separate image the same size which is
 stored in addition to the canvas.

 In comparison, being able to put a canvas in a background image solves
 this elegantly: there is no need to convert the canvas or update the DOM as
 it changes, and it seems the memory overhead would be lower. It also opens
 up other use cases such as animated backgrounds.

 I see there may be security concerns around -moz-element() since it can
 use any DOM content. This does not appear to be necessary or even useful
 (what use cases is arbitrary DOM content for?). If video is desirable, then
 video can already be rendered to canvases, so -webkit-canvas still covers
 that.

 Therefore I would like to propose standardising this feature based off
 the -webkit-canvas() implementation.

 Ashley Gullen
 Scirra.com





Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Elliott Sprehn
On Tuesday, February 10, 2015, Marc Fawzi marc.fa...@gmail.com wrote:

 Here is a really bad idea:

 Launch an async xhr and monitor its readyState in a while loop and don't
 exit the loop till it has finished.

 Easier than writing charged emails. Less drain on the soul


This won't work: state changes are async, and long-running while loops
trigger the hung-script dialog, which means we'll probably just kill your
page.

The main thread of your web app is the UI thread; you shouldn't be doing I/O
there (or anything else expensive). Some other application platforms will
even flash the whole screen or kill your process if you do that, to warn
you're doing something awful.
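
The point about asynchronous state changes can be demonstrated without XHR at all: callbacks queued asynchronously never run while synchronous code is still on the stack, so a busy-wait loop can never observe the change. A minimal sketch (no real network involved; a microtask stands in for the readystatechange event):

```javascript
// Simulate an "async readyState change": the callback is queued as a
// microtask, which cannot run until the current synchronous code
// (including any while loop) has finished.
const log = [];
Promise.resolve().then(() => log.push('readystatechange'));

// Stand-in for a bounded busy-wait "monitoring" loop.
for (let i = 0; i < 3; i++) {
  log.push(`poll ${i}`);
}

// Captured while still synchronous: the callback has not fired yet.
const observedDuringLoop = log.includes('readystatechange');
// observedDuringLoop is false — a real while (xhr.readyState !== 4) {}
// loop would therefore spin forever (until the hung-script dialog).
```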



 Sent from my iPhone

  On Feb 10, 2015, at 8:48 AM, Michaela Merz michaela.m...@hermetos.com wrote:
 
  No argument in regard to the problems that might arise from using sync
  calls.  But it is IMHO not the job of the browser developers to decide
  who can use what, when and why. It is up the guys (or gals) coding a
  web site to select an appropriate AJAX call to get the job done.
 
  Once again: Please remember that it is your job to make my (and
  countless other web developers) life easier and to give us more
  choices, more possibilities to do cool stuff. We appreciate your work.
   But most of us don't need hard-coded education in regard to the way we
  think that web-apps and -services should be created.
 
  m.
 
  On 02/10/2015 08:47 AM, Ashley Gullen wrote:
  I am on the side that synchronous AJAX should definitely be
  deprecated, except in web workers where sync stuff is OK.
 
   Especially on the modern web, there are two really good alternatives:
   - write your code in a web worker, where synchronous calls don't hang
     the browser
   - write async code, which doesn't hang the browser
 
  With modern tools like Promises and the new Fetch API, I can't
  think of any reason to write a synchronous AJAX request on the main
  thread, when an async one could have been written instead with
  probably little extra effort.
 
  Alas, existing codebases rely on it, so it cannot be removed
  easily. But I can't see why anyone would argue that it's a good
  design principle to make possibly seconds-long synchronous calls on
  the UI thread.
 
 
 
 
  On 9 February 2015 at 19:33, George Calvert
   george.calv...@loudthink.com wrote:
 
  I third Michaela and Gregg.
 
 
  It is the app and site developers' job to decide whether the user
  should wait on the server — not the standard's and, 99.9% of the
  time, not the browser's either.
 
 
  I agree a well-designed site avoids synchronous calls.  BUT —
  there still are plenty of real-world cases where the best choice is
  having the user wait: Like when subsequent options depend on the
  server's reply.  Or more nuanced, app/content-specific cases where
  rewinding after an earlier transaction fails is detrimental to the
  overall UX or simply impractical to code.
 
 
  Let's focus our energies elsewhere — dispensing with browser
  warnings that tell me what I already know and with deprecating
  features that are well-entrenched and, on occasion, incredibly
  useful.
 
 
  Thanks, George Calvert
 
 




Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Elliott Sprehn
On Tuesday, February 10, 2015, Marc Fawzi marc.fa...@gmail.com wrote:

 If readyState is async then have set a variable in the readyState change
 callback and monitor that variable in a while loop :D

 What am I missing?


The event fires asynchronously, at the time we update the property. There is no way
to synchronously observe an XHR.



  On Tue, Feb 10, 2015 at 9:44 AM, Elliott Sprehn espr...@chromium.org wrote:



  On Tuesday, February 10, 2015, Marc Fawzi marc.fa...@gmail.com wrote:

 Here is a really bad idea:

 Launch an async xhr and monitor its readyState in a while loop and don't
 exit the loop till it has finished.

 Easier than writing charged emails. Less drain on the soul


  This won't work: state changes are async, and long-running while loops
  trigger the hung-script dialog, which means we'll probably just kill your
  page.

  The main thread of your web app is the UI thread; you shouldn't be doing
  I/O there (or anything else expensive). Some other application platforms
  will even flash the whole screen or kill your process if you do that, to
  warn you're doing something awful.




 Sent from my iPhone

  On Feb 10, 2015, at 8:48 AM, Michaela Merz michaela.m...@hermetos.com
 wrote:
 
  No argument in regard to the problems that might arise from using sync
  calls.  But it is IMHO not the job of the browser developers to decide
  who can use what, when and why. It is up the guys (or gals) coding a
  web site to select an appropriate AJAX call to get the job done.
 
  Once again: Please remember that it is your job to make my (and
  countless other web developers) life easier and to give us more
  choices, more possibilities to do cool stuff. We appreciate your work.
   But most of us don't need hard-coded education in regard to the way we
  think that web-apps and -services should be created.
 
  m.
 
  On 02/10/2015 08:47 AM, Ashley Gullen wrote:
  I am on the side that synchronous AJAX should definitely be
  deprecated, except in web workers where sync stuff is OK.
 
   Especially on the modern web, there are two really good alternatives:
   - write your code in a web worker, where synchronous calls don't hang
     the browser
   - write async code, which doesn't hang the browser
 
  With modern tools like Promises and the new Fetch API, I can't
  think of any reason to write a synchronous AJAX request on the main
  thread, when an async one could have been written instead with
  probably little extra effort.
 
  Alas, existing codebases rely on it, so it cannot be removed
  easily. But I can't see why anyone would argue that it's a good
  design principle to make possibly seconds-long synchronous calls on
  the UI thread.
 
 
 
 
  On 9 February 2015 at 19:33, George Calvert
  george.calv...@loudthink.com
  mailto:george.calv...@loudthink.com wrote:
 
  I third Michaela and Gregg.
 
 
  It is the app and site developers' job to decide whether the user
  should wait on the server — not the standard's and, 99.9% of the
  time, not the browser's either.
 
 
  I agree a well-designed site avoids synchronous calls.  BUT —
  there still are plenty of real-world cases where the best choice is
  having the user wait: Like when subsequent options depend on the
  server's reply.  Or more nuanced, app/content-specific cases where
  rewinding after an earlier transaction fails is detrimental to the
  overall UX or simply impractical to code.
 
 
  Let's focus our energies elsewhere — dispensing with browser
  warnings that tell me what I already know and with deprecating
  features that are well-entrenched and, on occasion, incredibly
  useful.
 
 
  Thanks, George Calvert
 
 





Re: Minimum viable custom elements

2015-01-29 Thread Elliott Sprehn
On Fri, Jan 30, 2015 at 3:52 AM, Brian Kardell bkard...@gmail.com wrote:



 On Thu, Jan 29, 2015 at 10:33 AM, Bruce Lawson bru...@opera.com wrote:

 On 29 January 2015 at 14:54, Steve Faulkner faulkner.st...@gmail.com
 wrote:
  I think being able to extend existing elements has potential value to
  developers far beyond accessibility (it just so happens that
 accessibility
  is helped a lot by re-use of existing HTML features.)

 I agree with everything Steve has said about accessibility. Extending
 existing elements also gives us progressive enhancement potential.

 Try https://rawgit.com/alice/web-components-demos/master/index.html in
 Safari or IE. The second column isn't functional because it's using
 brand new custom elements. The first column loses the web componenty
 sparkles but remains functional because it extends existing HTML
 elements.

 There's a similar story with Opera Mini, which is used by at least
 250m people (and another potential 100m transitioning on Microsoft
 feature phones) because of its proxy architecture.

  Like Steve, I've no particular affection (or enmity) towards the
  <input type=radio is=luscious-radio> syntax. But I'd like to know,
 if it's dropped, how progressive enhancement can be achieved so we
 don't lock out users of browsers that don't have web components
 capabilities, JavaScript disabled or proxy browsers. If there is a
 concrete plan, please point me to it. If there isn't, it's
 irresponsible to drop a method that we can see working in the example
 above with nothing else to replace it.

 I also have a niggling worry that this may affect the uptake of web
 components. When I led a dev team for a large UK legal site, there's
 absolutely no way we could have used a technology that was
 non-functional in older/proxy browsers.

 bruce


 Humor me for a moment while I recap some historical arguments/play devil's
 advocate here.

 One conceptual problem I've always had with the is= form is that it adds
 some amount of ambiguity for authors and makes it plausible to author
 nonsense.  It's similar to the problem of aria being bolt-on with
 mix-and-match attributes.  With the imperative form of extending you wind
 up with a tag name that definitely is defined as subclassing something:
 super-button 'inherits' from HTMLButtonElement, and I'll explain how it's
 different.  With the declarative attribute form you basically have to
 manage 3 things: ANY tag, the base class, and the final definition.  This
 means it's possible to write things like <iframe is=button>, which likely
 won't work.  Further, you can then proceed to define something which is
 clearly none of the above.


The is= attribute only works on the element it was registered against, so
<iframe is=button> does nothing unless button was registered as a type
extension of iframe. I don't see that as any more error-prone than writing
paper-buton instead of paper-button.

Also fwiw most share buttons on the web are actually iframes, so <iframe
is=facebook-button> makes total sense.
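
That lookup rule can be sketched with a toy registry standing in for document.register (the names `register` and `lookupUpgrade` are invented here, not the real API):

```javascript
// Toy model of type-extension lookup: a definition registered with
// { extends: 'iframe' } only upgrades elements whose local name is
// 'iframe'. On any other tag, is="..." is inert.
const registry = new Map(); // key: "localName/type"

function register(type, options) {
  registry.set(`${options.extends}/${type}`, options.prototype);
}

function lookupUpgrade(localName, isValue) {
  return registry.get(`${localName}/${isValue}`); // undefined if inert
}

register('facebook-button', { extends: 'iframe', prototype: { share() {} } });

lookupUpgrade('iframe', 'facebook-button'); // found — upgrades
lookupUpgrade('div', 'facebook-button');    // undefined — is= does nothing
```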

- E


Re: Custom element design with ES6 classes and Element constructors

2015-01-27 Thread Elliott Sprehn
On Thursday, January 15, 2015, Domenic Denicola d...@domenic.me wrote:

 Just to clarify, this argument for symbols is not dependent on modules.
 Restated, the comparison is between:

 ```js
 class MyButton extends HTMLElement {
   createdCallback() {}
 }
 ```

 vs.

 ```js
 class MyButton extends HTMLElement {
   [Element.create]() {}
 }
 ```


This doesn't save you anything: classes can have statics, and statics
inherit, so .create will cause issues with name conflicts anyway.

We should probably introduce a new namespace if we want to do this.



  We're already doing some crude namespacing with *Callback. I'd expect
 that as soon as the first iteration of Custom Elements is out, people will
 copy the *Callback style in user code.

 This is a powerful point that I definitely agree with. I would not be
 terribly surprised to find some library on the web already that asks you to
 create custom elements but encourages you supply a few more
 library-specific hooks with -Callback suffixes.




Re: Custom element design with ES6 classes and Element constructors

2015-01-27 Thread Elliott Sprehn
On Tuesday, January 27, 2015, Domenic Denicola d...@domenic.me wrote:

 It does. If a framework says “use clonedCallback and we will implementing
 cloning for you,” we cannot add a clonedCallback with our own semantics.

 Whereas, if a framework says “use [Framework.cloned] and we will implement
 cloning for you,” we’re in the clear.

 Better yet! If a framework is a bad citizen and says “we did
 Element.cloned = Symbol() for you; now use [Element.cloned] and we will
 implement cloning for you,” we are still in the clear, since the original
 Element.cloned we supply with the browser is not === to the Element.cloned
 supplied by the framework.

 This last is not at all possible with string-valued properties, since the
 string “clonedCallback” is the same no matter who supplies it.


Perhaps, but that logically boils down to “never use string properties,
ever, just in case some library conflicts with a different meaning.” We'd
have $[jQuery.find](...) and so on for plugins.

Or, more concretely: isn't the new DOM Element#find() method going to
conflict with my polymer-database's find() method? So why not make that
[Element.find] so polymer never conflicts?
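
Domenic's collision argument can be restated in a few lines of plain JS (the `cloned` symbols here are illustrative stand-ins, not real platform properties):

```javascript
// Why symbol-keyed hooks cannot collide: two symbols are never equal,
// even with the same description, while two identical strings are.
const browserCloned = Symbol('cloned');   // stand-in for Element.cloned
const frameworkCloned = Symbol('cloned'); // a bad citizen's replacement

console.assert(browserCloned !== frameworkCloned);
console.assert('clonedCallback' === 'clonedCallback'); // strings collide

// A component keyed on the browser's symbol is untouched by the
// framework's hook, and vice versa:
const component = {
  [browserCloned]() { return 'browser semantics'; },
  [frameworkCloned]() { return 'framework semantics'; },
};
component[browserCloned](); // 'browser semantics'
```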

- E


Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)

2014-02-14 Thread Elliott Sprehn
On Fri, Feb 14, 2014 at 5:17 PM, Alex Russell slightly...@google.comwrote:

 On Fri, Feb 14, 2014 at 3:56 PM, Ryosuke Niwa rn...@apple.com wrote:

 [...]

  We all agree it's not a security boundary and you can go through great
 lengths to get into the ShadowRoot if you really wanted, all we've done by
 not exposing it is make sure that users include some crazy
 jquery-make-shadows-visible.js library so they can build tools like Google
 Feedback or use a new framework or polyfill.


 I don’t think Google Feedback is a compelling use case since all
 components on Google properties could simply expose “shadow” property
 themselves.


  So you've written off the massive coordination costs of adding a uniform
  “shadow” property to all code across all of Google and, on that basis,
  have suggested there
 isn't really a problem? ISTM that it would be a multi-month (year?) project
 to go patch every project in google3 and then wait for them to all deploy
 new code.

 Perhaps you can imagine a simpler/faster way to do it that doesn't include
 getting owners-LGTMs from nearly every part of google3 and submitting tests
 in nearly every part of the tree??



Please also note that Google Feedback's screenshot technology works fine on
many non-Google web pages and is used in situations that are not on
Google-controlled properties. If we're going to ask the entire web to expose
.shadow by convention so that things like Google Feedback or Readability can
work, we might as well just expose it in the platform.

- E


Re: [webcomponents]: Allowing text children of ShadowRoot is a bad time

2014-01-08 Thread Elliott Sprehn
On Tue, Jan 7, 2014 at 2:59 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Jan 7, 2014 at 2:42 PM, Elliott Sprehn espr...@gmail.com wrote:
  On Tue, Oct 29, 2013 at 4:20 AM, Anne van Kesteren ann...@annevk.nl
 wrote:
  On Tue, Oct 29, 2013 at 7:34 AM, Simon Pieters sim...@opera.com
 wrote:
   On Tue, 29 Oct 2013 00:54:05 +0100, Anne van Kesteren 
 ann...@annevk.nl
   wrote:
   We are considering not throwing in XML.
  
   Only on getting innerHTML, though, right?
 
  Oh I missed that. In that case throwing if you include text nodes for
  ShadowRoot nodes is not too bad. And would match what happens if you
  append a DocumentFragment that contains them which is the same
  operation. Sounds good.
 
 
   I've been pondering this more recently and I think we want to just
   silently drop the Text nodes instead. If you do
   shadowRoot.appendChild(template.content) and the author did:
  
   <template>
     <div>header</div>
     <div>content</div>
   </template>
 
  We're going to throw an exception for all the Text between the elements
  which is not really what the author wanted (or realized they were doing).
 
  If dropping them is too gross we might want to just consider this a lost
  cause and warn authors away from putting text in there due to the issues
 I
  outlined in my original email.

 Alternately: silently drop whitespace, but still throw on significant text?


And have textNode.textContent or nodeValue throw an exception if you try to
make it into a non-whitespace node? That could work.
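
The hybrid rule under discussion — silently drop whitespace, throw on significant text — can be sketched outside the DOM, with plain objects standing in for nodes (`filterShadowChildren` is a name invented for illustration):

```javascript
// Sketch of "drop whitespace, throw on significant text", applied to
// the child list of a <template> (modeled as plain objects, not real
// DOM nodes).
function filterShadowChildren(children) {
  return children.filter(child => {
    if (child.type !== 'text') return true;      // elements pass through
    if (/^\s*$/.test(child.data)) return false;  // drop inter-element whitespace
    throw new TypeError('Significant text is not allowed in a ShadowRoot');
  });
}

const templateContents = [
  { type: 'text', data: '\n  ' },
  { type: 'element', tag: 'div' },
  { type: 'text', data: '\n  ' },
  { type: 'element', tag: 'div' },
  { type: 'text', data: '\n' },
];

filterShadowChildren(templateContents); // → just the two <div>s
```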

- E


Re: [webcomponents]: Allowing text children of ShadowRoot is a bad time

2014-01-07 Thread Elliott Sprehn
On Tue, Oct 29, 2013 at 4:20 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Oct 29, 2013 at 7:34 AM, Simon Pieters sim...@opera.com wrote:
  On Tue, 29 Oct 2013 00:54:05 +0100, Anne van Kesteren ann...@annevk.nl
  wrote:
  We are considering not throwing in XML.
 
  Only on getting innerHTML, though, right?

 Oh I missed that. In that case throwing if you include text nodes for
 ShadowRoot nodes is not too bad. And would match what happens if you
 append a DocumentFragment that contains them which is the same
 operation. Sounds good.


I've been pondering this more recently and I think we want to just silently
drop the Text nodes instead. If you do
shadowRoot.appendChild(template.content) and the author did:

<template>
  <div>header</div>
  <div>content</div>
</template>

We're going to throw an exception for all the Text between the elements
which is not really what the author wanted (or realized they were doing).

If dropping them is too gross we might want to just consider this a lost
cause and warn authors away from putting text in there due to the issues I
outlined in my original email.

- E


Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-13 Thread Elliott Sprehn
On Fri, Dec 13, 2013 at 1:16 AM, Maciej Stachowiak m...@apple.com wrote:


 On Dec 9, 2013, at 11:13 AM, Scott Miles sjmi...@google.com wrote:

 ...


 If the shadow root is optionally automatically generated, it should
 probably be passed to the createdCallback (or constructor) rather than made
 a property named shadowRoot. That makes it possible to pass a different
 shadow root to the base class than to the derived class, thus solving the
 problem.


Why generate it at all then? Since you're going to need to do
super(this.createShadowRoot()) for each super call, we only save the call
to createShadowRoot() on the base class, with a loss of flexibility or an
increase in configuration options (e.g. being able to turn off the auto
creation). Instead we should just let authors make these decisions.



 Using an object property named shadowRoot would be a bad idea in any
 case since it automatically breaks encapsulation. There needs to be a
 private way to store the shadow root, either using ES6 symbols, or some new
 mechanism specific to custom elements.


We discussed this many months ago on this list, but there's no way to
create real encapsulation like that because authors will just override
Element.prototype.createShadowRoot or Node.prototype.appendChild and hijack
your ShadowRoot. Making it private is a lie, and leads to false security
assumptions from authors. We had several teams here attempt to use
ShadowRoot as a security boundary, which it just can't be, because you're
sharing the same JS global prototypes.


 As it is, there's no way for ES5 custom elements to have private storage,
 which seems like a problem. They can't even use the closure approach,
 because the constructor is not called and the methods are expected to be on
 the prototype. (I guess you could create per-instance copies of the methods
 closing over the private data in the created callback, but that would
 preclude prototype monkeypatching of the sort built-in HTML elements allow.)


I'm not sure what you mean. How would calling the constructor help you?
Private storage can be had with an ES6 WeakMap (often polyfilled as a
SideTable); making it inconvenient to get the ShadowRoot from an Element
just gives false assumptions about how private it is.
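
A minimal sketch of WeakMap-backed private storage (the class and property names are invented here; this hides state from casual enumeration but, per the point above, is not a security boundary):

```javascript
// Private per-element storage via WeakMap: the shadow root (or any
// private state) is reachable only through the map, and entries are
// garbage-collected along with their elements.
const privateState = new WeakMap();

class MyElement {             // stand-in for a custom element class
  createdCallback() {
    privateState.set(this, { shadowRoot: '<shadow>' }); // illustrative value
  }
  get debugShadow() {
    return privateState.get(this).shadowRoot;
  }
}

const el = new MyElement();
el.createdCallback();
el.debugShadow; // '<shadow>' — but no enumerable own property leaks it
```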

- E


Re: [webcomponents] Inheritance in Custom Elements (Was Proposal for Cross Origin Use Case and Declarative Syntax)

2013-12-10 Thread Elliott Sprehn
On Tue, Dec 10, 2013 at 8:00 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Tue, Dec 10, 2013 at 3:54 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 12/10/13 10:34 AM, Anne van Kesteren wrote:
  E.g. the dialog's close() method won't work as defined
  right now on a subclass of HTMLDialogElement.
 
  Why not?
 
  I assumed that actual ES6 subclassing, complete with invoking the right
  superclass @@create, would in fact produce an object for which this would
  work correctly.  At least any situation that doesn't lead to that is a
  UA/spec bug.

 Well for one because the specification at the moment talks about a
 dialog element and does not consider the case where it may have been
 subclassed. The pending dialog stack is also for dialog elements
 only, not exposed in any way, etc. The way the platform is set up at
 the moment is very defensive and not very extensible.


When extending native elements like that you use type extensions, so it'd
be <dialog is=my-subclass> and the tagName is still DIALOG. Registering
something that extends HTMLDialogElement but isn't a type extension of
dialog does not work, in the same way that setting __proto__ =
HTMLDivElement.prototype doesn't magically make you into a div today.

- E


Re: [webcomponents] Inheritance in Custom Elements (Was Proposal for Cross Origin Use Case and Declarative Syntax)

2013-12-09 Thread Elliott Sprehn
On Mon, Dec 9, 2013 at 5:50 PM, Brendan Eich bren...@secure.meer.netwrote:

 Ryosuke Niwa wrote:

 As for the social endorse button, I've never seen a popular SNS share
 buttons implemented using HTML button elements; most of them add their own
 DOM to add icons, etc...


 Right you are. And there's a deeper reason why like (endorse, LOL)
 buttons use iframes: the ability to set 3rd party cookies.


They don't actually need to be able to set cookies, but they do need to be
able to _read_ them. For example some widgets show your username and your
face next to the button so it'd say: Like this as +Elliott Sprehn, or
Leave a comment as Elliott.

- E


Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-06 Thread Elliott Sprehn
On Thu, Dec 5, 2013 at 5:37 PM, Ryosuke Niwa rn...@apple.com wrote:

 Hi,

 Given that many important/natural use cases of custom elements involve
 shadow DOM,
 can we add a flag to auto-create shadow DOM for custom elements?

  In particular, can we add a template as the third argument to
  document.register so that when a custom element is instantiated, the
  specified template is automatically cloned and inserted into a shadow
  DOM of the custom element.


Can you explain the advantage of this? It saves one line of code:

this.createShadowRoot().appendChild(document.importNode(template.content, true));

I don't see how pushing that one line down into the browser is helping
anyone. Web components are part of the extensible web concept where we
provide a minimal subset of features necessary for opinionated frameworks
to build things on top. Supporting template in document.register is
easily done in script, so I believe it's better left to developers as
they're better at building frameworks than we are.

In either case, that's something we can always add later so it shouldn't
stand in the way of the current spec.

- E


Re: [HTML Imports]: Sync, async, -ish?

2013-12-04 Thread Elliott Sprehn
On Tue, Dec 3, 2013 at 2:22 PM, Bryan McQuade bmcqu...@google.com wrote:

 Second question: should *rendering* of page content block on the load of
  <link rel=import>?

 Steve Souders wrote another nice post about this topic:
  http://www.stevesouders.com/blog/2013/11/26/performance-and-custom-elements/
  which I recommend reading (read the comments too).

 We should start by looking to how the web platform behaves today. All
 browsers I am aware of block rendering on the load of pending stylesheets.
 Note that blocking rendering (blocking of render tree construction and
 painting content to the screen) is different from blocking parsing.
 Browsers do not block HTML parsing (DOM construction) on stylesheets. Nor
  should they block DOM construction on loading of <link rel=import>s.


Correct. Imports are even better than the main document though:
document.write doesn't work, and we allow the parser to run ahead and not
block on script inside an import. That means putting <script
src=slow-resource.js></script> in your import will not slow down the
parser as it processes the import tree. It'll continue building the
documents of imports while the resource loads.


 The reason to block rendering while a custom element's load is pending is
 to prevent a flash of unstyled content/reflow of content on the page. At a
 high level:


We are not blocking rendering on custom elements. We're waiting for a
critical resource to load because it may contain style or script.


 1. if there are no custom elements in the DOM, [...]
 2. if there are custom elements in the DOM [...]


We should not be focusing on custom elements here. Imports and custom
elements are separate but related features and neither depends on the
other. Lots of things could be inside the import. Custom elements don't
block anything.


 However, if the positioning of the custom element changes as a result of
 its being upgraded, i.e. its x/y/width/height changes, [...]


 So how do we strike a balance between fast rendering and avoiding
 reflow/content moving around on the screen? In the case of custom elements
 there are a few things that browsers can do:

  1. if there are no custom elements in the DOM, then even if a load of a
 custom element import is pending, then there is no reason to block
 rendering of the rest of the page, since the custom element's load can't
 impact the styling of content in the DOM.


This is not correct. Imports are not just for custom elements, they're a
generic system for importing stuff. You can put a <style> in the import
and it will apply to the main page.


 2. if there are custom elements in the DOM and a custom element import's
 load is pending, *and* it can be determined by the rendering engine that
 the load/upgrade of that element will not cause its *position* to change
  (e.g. that element has a style="width: x; height: y" attribute, or
  style="display: none", etc. - we'll have to overlook !important styles for
 this...) then we should not block rendering of the rest of the page, since
 the content other than the custom element will not change position as a
 result of the custom element loading.


This is a heuristic that browsers could apply, yes. I don't believe there
is a spec for browsers waiting to paint on remote stylesheets, though; in
fact they probably should paint if the sheet is taking forever to load.



 However, if there are custom elements in the DOM whose position isn't
 specified in an inline style, and a custom element import load is pending,
 then rendering (not parsing!) should block until that custom element has
 finished loading, to prevent a FOUC/reflow.


The parser does not block on rel=import and document.write() is disabled
down there for this reason. There's no predictability for where it's going
to write since the parser could be anywhere when your import finally loads.



 If we take this approach, then developers have two ways to prevent the
 load of custom elements from blocking rendering:
 ...



 So, I propose that, similar to pending stylesheets, the load of custom
 elements should block rendering


We can't block anything on custom elements, but we certainly can on
imports. I think we should as well and then add an async attribute to allow
an author to declare the import as non-critical.


 . With custom elements, however, we can be a bit smarter than with
 stylesheets: rendering should block only in cases where the upgrade of a
 custom element in the DOM might cause a reflow, moving other page content
 on the screen.


There is no way to do this. A document might contain custom elements that
are not upgraded until hours later when the author decides to call
document.register. In general imports are separate from custom elements as
well, we block on them because they're effectively a set of packaged
style and script blocks which would have blocked painting had they been
inside the main document.

- E


Re: [webcomponents] Proposal for Cross Origin Use Case and Declarative Syntax

2013-11-12 Thread Elliott Sprehn
On Tue, Nov 12, 2013 at 12:45 AM, Ryosuke Niwa rn...@apple.com wrote:

 [...]


- Script in the import is executed in the context of the window that
contains the importing document. So window.document refers to the main
page document. This has two useful corollaries:
   - functions defined in an import end up on window.
   - you don't have to do anything crazy like append the import's
   script blocks to the main page. Again, script gets executed.

 What we’re proposing is to execute the script in the imported document so
 the only real argument is the point that “functions defined in an imported
 end up on window” (of the host document).

 I think that’s a bad thing.  We don’t want imported documents to start
  polluting the global scope without the user explicitly importing them.
  e.g. import X in Python doesn’t automatically import stuff inside the
  module into your global scope.  To do that, you explicitly say
  “from X import *”.  Similarly, “using namespace std” is discouraged in C++.

 I don’t think the argument that “this is how external scripts and
 stylesheets work” flies either, because the whole point of web components is
 about improving the modularity and reusability of the Web.


What you're proposing breaks a primary use case of:

<link rel=import href=//apis.google.com/jquery-ui.html>

Authors don't want to list every single component from jQuery UI in the
import directive, and they don't want the jQuery UI logic to be in a
different global object. They want to be able to import jQuery UI and have
it transitively import jQuery thus providing $ in the window in addition to
all the widgets and their API. ex. body.appendChild(new
JQUIPanel()).showPanel().

Note also that using a different global produces craziness like Array being
different or the prototypes of nodes being different. You definitely don't
want that for the same origin or CORS use case.


 Fortunately, there is already a boundary that we built that might be just
 the right fit for this problem: the shadow DOM boundary. A while back, we
 had lunch with Mozilla security researchers who were interested in
 harnessing the power of Shadow DOM, and Elliott (cc'd) came up with a
 pretty nifty proposal called the DOMWorker. I nagged him and he is
 hopefully going to post it on public-webapps. I am pretty sure that his
 proposal can address your use case and not cripple the rest of the spec in
 the process.


 Assuming you’re referring to
 https://docs.google.com/document/d/1V7ci1-lBTY6AJxgN99aCMwjZKCjKv1v3y_7WLtcgM00/edit,
 the security model of our proposal is very similar.  All we’re doing is
 using a HTML-imported document instead of a worker to isolate the
 cross-origin component.

 Since we don’t want to run the cross-origin component on a separate
 thread, I don’t think worker is a good model for cross-origin components.


A DOMWorker doesn't run on another thread; see the Note in the introduction.

- E


Re: [webcomponents] Proposal for Cross Origin Use Case and Declarative Syntax

2013-11-11 Thread Elliott Sprehn
On Mon, Nov 11, 2013 at 1:33 AM, Ryosuke Niwa rn...@apple.com wrote:

 [...] we’re open to creating a proxy/fake element subclass which is not
 visible in the global scope and identical to HTMLKnownElement in its
 prototype chain in the host document as well.


Can you clarify why it can't be visible in the global scope? Why can't I do
document.body.appendChild(new FBLikeButton()) or
document.body.firstElementChild instanceof FBLikeButton?

-E


Re: [webcomponents]: Allowing text children of ShadowRoot is a bad time

2013-10-28 Thread Elliott Sprehn
On Thu, Oct 24, 2013 at 1:29 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thursday, October 24, 2013, Dimitri Glazkov wrote:

  Woke up in the middle of the night and realized that throwing breaks
 ShadowRoot.innerHTML (or we'll have to add new rules to hoist/drop text
 nodes in parsing), which sounds bad.


 innerHTML would end up re-throwing the same exception, unless you
 special-cased parsing.  innerHTML throwing is somewhat unexpected though.


We don't really need to special case parsing. innerHTML works by parsing
into a DocumentFragment and then copying the nodes over from there, so we
can just silently drop them in the copy step or throw an exception.

Note that innerHTML can already throw an exception in XHTML/SVG documents
if the content is not well formed. Admittedly leaving some of the content
appended and throwing is somewhat confusing, but I think that's fine given
that once you get the text in there the API is full of sadness.

As a counterpoint, appendChild(documentType) would throw an exception while
innerHTML silently drops if you do innerHTML = "<!DOCTYPE html>" (bogus
comment IIRC).

So perhaps dropping the text to avoid having authors deal with the
exception is best. I think we should do that.
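
A sketch of the silently-drop option (plain objects stand in for DOM nodes; copyIntoShadowRoot is a hypothetical name, not a spec algorithm): parse into a fragment, then skip text nodes in the copy step.

```javascript
// Plain objects stand in for DOM nodes: nodeType 3 is a Text node,
// nodeType 1 is an Element, mirroring the real DOM constants.
function copyIntoShadowRoot(fragmentChildren, shadowRoot) {
  for (const node of fragmentChildren) {
    if (node.nodeType === 3) continue; // silently drop direct text children
    shadowRoot.children.push(node);
  }
  return shadowRoot;
}

const parsed = [
  { nodeType: 3, data: 'stray text' }, // would be dropped
  { nodeType: 1, tagName: 'DIV' },     // kept
];
const root = copyIntoShadowRoot(parsed, { children: [] });
console.log(root.children.length); // → 1: only the element survives
```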

- E


Re: [webcomponents]: Allowing text children of ShadowRoot is a bad time

2013-10-09 Thread Elliott Sprehn
On Tue, Oct 8, 2013 at 11:04 PM, Hayato Ito hay...@chromium.org wrote:

 Good points. All you pointed out make sense to me.

 But I am wondering what we should do for these issues:

 A). Discourage developers to use direct text children of ShadowRoot.
 B). Disallow direct text children of ShadowRoot in the Shadow DOM spec.
 C). Find a nice way to style direct text children of ShadowRoot.

 Did you mean B?


I did mean B. ShadowRoot is very similar to Document which also disallows
direct Text children. All of the APIs we're putting on ShadowRoot are also
on Document so I think it makes sense for them to behave the same as well.

shadowRoot.appendChild(new Text()) should probably throw an exception.
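
A sketch of what such a pre-insertion check might look like (ensureValidChild is a hypothetical helper, mirroring the HierarchyRequestError that Document already throws for direct text children):

```javascript
// Hypothetical pre-insertion check: ShadowRoot, like Document, would
// reject direct Text children with a HierarchyRequestError.
function ensureValidChild(parentType, child) {
  const rejectsText = parentType === 'Document' || parentType === 'ShadowRoot';
  if (rejectsText && child.nodeType === 3 /* TEXT_NODE */) {
    throw new Error('HierarchyRequestError');
  }
}

ensureValidChild('ShadowRoot', { nodeType: 1 }); // element child: allowed
try {
  ensureValidChild('ShadowRoot', { nodeType: 3 }); // text child: rejected
} catch (e) {
  console.log(e.message); // → "HierarchyRequestError"
}
```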



 On Wed, Oct 9, 2013 at 2:46 AM, Elliott Sprehn espr...@gmail.com wrote:

 Direct text children of ShadowRoot are full of sadness:

 1) You can't call getComputedStyle on them since that's only allowed for
 Elements, and the old trick of parentNode doesn't work since that's a
 ShadowRoot. ShadowRoot doesn't expose a host property so I can't get
 outside to find the host style that's inherited either. If the ShadowRoot
 has resetStyleInheritance set then the text uses a root default style,
 but I have no way to get that as well.

 2) There's no way to set the style of the Text. Normally I can do
 parentNode.style.color = ...; but since ShadowRoot has no style property I
 have no way to influence the text of the ShadowRoot without dynamically
 changing a style element.

 3) You can't use elementFromPoint(). It returns null since
 ShadowRoot.elementFromPoint should always return an element in that scope,
 but there is no element in that scope. This means you have no sensible way
 to do a hit test of the text in the shadow root.

 - E




 --
 Hayato



[webcomponents]: Allowing text children of ShadowRoot is a bad time

2013-10-08 Thread Elliott Sprehn
Direct text children of ShadowRoot are full of sadness:

1) You can't call getComputedStyle on them since that's only allowed for
Elements, and the old trick of parentNode doesn't work since that's a
ShadowRoot. ShadowRoot doesn't expose a host property so I can't get
outside to find the host style that's inherited either. If the ShadowRoot
has resetStyleInheritance set then the text uses a root default style,
but I have no way to get that as well.

2) There's no way to set the style of the Text. Normally I can do
parentNode.style.color = ...; but since ShadowRoot has no style property I
have no way to influence the text of the ShadowRoot without dynamically
changing a style element.

3) You can't use elementFromPoint(). It returns null since
ShadowRoot.elementFromPoint should always return an element in that scope,
but there is no element in that scope. This means you have no sensible way
to do a hit test of the text in the shadow root.

- E


Re: IndexedDB events: misconception?

2013-04-22 Thread Elliott Sprehn
On Mon, Apr 22, 2013 at 12:32 PM, Alec Flett alecfl...@chromium.org wrote:

 On Mon, Apr 22, 2013 at 9:56 AM, Michaël Rouges 
 michael.rou...@gmail.com wrote:


 Hum ... thank you for this answer, but ...

 Are you sure there is no possibility that the application is completed
 before adding events?

 I find it hard to perceive how it couldn't happen.


 Just to close the loop on this concern: the reason there is no possibility
 is that this is part of the IndexedDB specification - all browsers must
 guarantee this behavior to have a working IndexedDB; in fact, the rest of
 IndexedDB itself would be unusable if this guarantee were not met.

 Stuff like this can feel a little awkward if you're used to dealing in a
 multi-threaded world, but this API is fairly normal for a web API, at least
 in this respect. In fact XHR is the outlier here in requiring a specific
 xhrrequest.send() call.


Yeah, there's a bunch of APIs like this: EventSource, Notification,
IndexedDB's stuff, ...
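
The run-to-completion guarantee these APIs rely on can be illustrated without IndexedDB at all; here a plain object and a microtask stand in for the open request and its event delivery:

```javascript
// A Promise stands in for indexedDB.open(): the success event can only be
// delivered in a later microtask, after the current script has run to
// completion, so a handler attached "late" can never be missed.
const request = { onsuccess: null };
let delivered = null;

Promise.resolve().then(() => {
  // Delivery happens here, strictly after the script below has finished.
  if (request.onsuccess) request.onsuccess('opened');
});

// Attached after the operation "started", yet guaranteed to fire:
request.onsuccess = (result) => { delivered = result; };

console.log(delivered);                        // → null: nothing fires synchronously
queueMicrotask(() => console.log(delivered));  // → "opened"
```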

- E


Re: [PointerLock] Should there be onpointerlockchange/onpointerlockerror properties defined in the spec

2013-04-12 Thread Elliott Sprehn
I'm not sure it makes sense to use futures exclusively. As a library
author you want to know when the page transitions into full-screen mode
even if you didn't invoke the requestFullScreen call.

Futures are also not extensible. With an event we could always tack on more
information in the future. With Future<boolean> we're stuck with that
boolean forever and can't add new information.

It seems like there's a bit too much Future worshiping going on right now.


On Thu, Apr 11, 2013 at 10:48 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Apr 11, 2013 at 6:38 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  Isn't the thing that matters whether _sites_ have it unprefixed?

 I don't know. I believe the browsers don't match each other exactly
 either so there's probably some wiggle room. I suspect transitioning
 at this point to be hard whatever we do, which is probably why it has
 not happened yet.

 Also, we dispatch fullscreenchange to more than just the requesting
 document. We could still get rid of fullscreenerror though and have
 the simple non-iframe case be much more friendly towards developers.


 --
 http://annevankesteren.nl/




Re: [editing] Comments on WebKit addRange implementation

2013-04-12 Thread Elliott Sprehn
On Fri, Apr 5, 2013 at 4:48 PM, Nathan Vander Wilt nate-li...@calftrail.com
 wrote:

 The comments on
 https://dvcs.w3.org/hg/editing/raw-file/tip/editing.html#dom-selection-addrange
  say
 Chrome 15 dev seems to ignore addRange() if there's already a range.

 In case it's helpful, I wanted to note that this isn't quite the case. The
 WebKit implementation is here:

 http://trac.webkit.org/browser/trunk/Source/WebCore/page/DOMSelection.cpp#L385

 What that code does, if the selection has not already been cleared via
 e.g. `.removeAllRanges()`, is set the selection to the *intersection* of
 the old selection and range being added.

 Why? I have no idea. Union or extension I could see; intersection just
 seems bizarre. Hopefully this information is useful, though — even if it
 seems really hard to reconcile spec-wise with other implementations. If
 WebKit/Blink can't be fixed, maybe the behaviour of `addRange(range)` is
 simply undefined if `rangeCount` is 1…


WebKit does use the union; that's what the VisiblePosition stuff is doing.

- E


Re: [dom-futures] Making ProgressFuture use real events

2013-04-12 Thread Elliott Sprehn
On Wed, Apr 3, 2013 at 11:45 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Apr 3, 2013 at 10:43 AM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  The ProgressFuture strawman at
  https://github.com/slightlyoff/DOMFuture/blob/master/ProgressFuture.idl
 
  augments a future with an analog of progress events.
 
  Why isn't this just using actual events?  That is, make ProgressFuture
  an eventTarget as well, which fires progress events.
 
  I'm fine with the existing API existing as it is, where convenience
  APIs are provided both for firing (a 'progress' method on the
  resolver) and listening to (a 'progress' method on the future)
  progress events.  It just seems weird to implement events without them
  being actual Events.

 Define seems weird.

 Using Events as a way to do callbacks has many advantages when using
 them on Nodes. However they provide much less value when used on
 standalone objects like XMLHttpRequest and a Future object. With
 using a callback it means that we can both provide a more convenient
 registration syntax:


Sure they do: they allow future extensibility. If we had used Futures for
everything from the start the world would be quite different since we
couldn't add any new information to the notifications.

All the Future<boolean> stuff going on right now is just going to bite us
later when we want to add more features to some notification. That's fine
in a JS app, where you can just refactor, but it isn't nice on the web, where
we can never change an API once it's shipped.



 doSomeAsyncThing(...).progress(showProgress).then(processResult,
 handleError);

 Additionally the actual progress handler gets a cleaner API. Instead
 of having an Event object that has a bunch of irrelevant stuff on it,
 like .stopPropagation(), .preventDefault() and .bubbling, the caller
 just gets the relevant data.

 We've come to use Events as a general callback mechanism. However
 that's never been where they shine. Using Events here just causes more
 edge cases to define for us, likely more code for implementations, and
 more and less convenient API for authors.


When I first saw the Future stuff go by it seemed it was trying to address
one-off notifications, but now we're busy shoving it down the throat of
every API, even ones that are sending repeated notifications.

For example requestAnimationFrame _could_ be a Future<int> since it's only
called once, but now we want to pass a bunch of information into it from the
compositor and can't, since there's no extensibility mechanism.

What's the long term evolution plan for all this? How do we extend the APIs
in the future since we don't have an Event object to add properties to?
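
A sketch of the extensibility difference (all names here are illustrative): an event-style listener receives an object that can grow new fields later, while a Future<boolean> pins the contract to a bare value:

```javascript
// Event-style: the listener receives an object, so new fields can ship
// later without breaking listeners that only read the old ones.
function fireProgress(listener) {
  listener({ loaded: 10, total: 100 /* tomorrow: maybe a .rate field too */ });
}

let loaded;
fireProgress((e) => { loaded = e.loaded; }); // old code ignores unknown fields
console.log(loaded); // → 10

// Future<boolean>-style: the resolved value is a bare boolean, so there is
// nowhere to attach new information without changing the type for everyone.
Promise.resolve(true).then((ok) => console.log(ok)); // → true, and that's all
```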

- E


Re: [webcomponents] Adjusting offsetParent, offsetTop, offsetLeft properties in Shadow DOM

2013-03-28 Thread Elliott Sprehn
On Mon, Mar 25, 2013 at 2:48 AM, Dominic Cooney domin...@chromium.org wrote:

 On Sun, Mar 24, 2013 at 3:50 PM, Elliott Sprehn espr...@gmail.com wrote:


 On Mon, Mar 18, 2013 at 4:48 AM, Dominic Cooney domin...@chromium.org wrote:

 ...

 I think the offset{Parent, Top, Left} properties should be adjusted.
 This means that in the above example, b.offsetParent would be body and
 b.offsetLeft would be silently adjusted to accumulate an offset of 10px
 from c. I think this makes sense because typical uses of offsetParent and
 offsetLeft, etc. are used to calculate the position of one element in the
 coordinate space of another element, and adjusting these properties to work
 this way will mean code that naively implements this use case will continue
 to work.

 This behavior is unfortunately slightly lossy: If the author had #c and
 wanted to calculate the position of #b in the coordinate space of #c, they
 will need to do some calculation to work it out via body. But presumably
 a script of this nature is aware of the existence of Shadow DOM.

 The question of what to do for offset* properties across a shadow
 boundary when the shadow *is* traversable is a vexing one. In this case
 there is no node disclosed that you could not find anyway using
 .shadowRoot, etc. tree walking. From that point of view it seems acceptable
 for offsetParent to return an offsetParent inside the (traversable) shadow.


 This seems like correct behavior. We should walk up to find a traversable
 parent and then offsetLeft/offsetTop should be relative to those.

 (Note: in WebKit this is trivial since offsetLeft and offsetTop both call
 offsetParent internally and then compute their value from it)



 On the other hand, this violates the lower-boundary encapsulation of the
 Shadow DOM spec. This means that pages that are using traversable shadows,
 but relying on convention (ie don't use new properties like .shadowRoot)
 to get the encapsulation benefits of Shadow DOM, now have to audit the
 offsetParent property. It also means you need to have two ways of dealing
 with offsetParent in both user agents and author scripts. So for simplicity
 and consistency I think it makes sense to treat both traversable and
 non-traversable shadows uniformly.


 I disagree with this


 Which part?

 Returning an element inside Shadow DOM in an attribute of a node outside
 Shadow DOM violates lower boundary encapsulation.


Yes, it's sad that you can fall into a shadow by mistake here where all
the other APIs were designed to prevent that.


 If offsetParent returns an element inside traversable Shadow DOM, pages
 that are using traversable shadows but relying on convention to get
 encapsulation benefits will have to audit uses of the offsetParent property.

 If offsetParent returns an element inside traversable Shadow DOM (but not
 non-traversable Shadow DOM), there are two ways of dealing with
 offsetParent in the user agent.

 If offsetParent returns an element inside traversable Shadow DOM (but not
 non-traversable Shadow DOM), there are two ways of dealing with
 offsetParent in author scripts.


What you're proposing doesn't reduce the issues. There's still two cases,
you're just offloading all the complexity into author code by making them
walk up the tree and call getComputedStyle everywhere.



 It makes sense to treat both traversable and non-traversable shadows
 uniformly.


I disagree with this statement. By virtue of what a non-traversable
shadow is, we need to treat it specially all over the place.




 since it means offsetParent returns a nonsensical value for elements in,
 or projected into, traversable shadow roots as it traverses all the way up
 into the main page until it's not inside a ShadowRoot anymore.


 In what way is that nonsensical? The return value makes sense at the level
 of abstraction the code calling offsetParent is working at.


var rect = node.offsetParent.getBoundingClientRect();
node.style.top = computePosition(rect);
node.style.left = computePosition(rect);

Since you're walking all the way out of the shadow into the main page
you're going to get a nonsense result here. More importantly, since all the
apps we've seen built using custom elements so far have <x-app> and the
entire app down there, you're effectively saying that offsetParent should
return <body> for nearly every element in a Toolkit app, and that the
feature becomes totally useless.




 offsetParent is very useful to find your positioned parent, and you're
 crippling that feature and making authors use distributedParent +
 getComputedStyle() repeatedly which is considerably more expensive.


 What are those use cases, except finding the position of an element
 relative to another element, which I think is not excessively complicated
 by what I am proposing here?


Not complicated, just very expensive. getComputedStyle allocates a new
object on every invocation and does string parsing. Unfortunately it seems
jQuery already does this:
https://github.com/jquery

Re: [webcomponents] Adjusting offsetParent, offsetTop, offsetLeft properties in Shadow DOM

2013-03-28 Thread Elliott Sprehn
On Wed, Mar 27, 2013 at 2:02 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 Scott Miles wrote:

  This is a thorny problem, but my initial reaction is that you
  threaded the needle appropriately. I don't see how we avoid some
  lossiness in this situation.

 Note that if you're using offsetWith/Height/Top/Bottom you already lose,
 because they return integers.

 I think we should be doing what we can to discourage use of these broken
 APIs, for what it's worth, instead of worrying how we can extend their
 already-incorrect behavior to cover more cases well.


That's fair. How do you feel about getPositionedAncestor(bool
includingTraversableShadows)? This would solve the primary use cases for
offsetParent, and also mean jQuery wouldn't need to call getComputedStyle
all the way up the tree.

Walking the box tree is super fast in C++, and isPositioned is a bitfield
check. Doing this in JS with getComputedStyle is quite a lot more expensive
(allocate CSSComputedStyleDeclaration, parse the property name, return the
position value, compare the string).
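
A sketch of the author-side walk that a native accessor would replace (findPositionedAncestor is a hypothetical helper; plain objects stand in for elements and their computed style):

```javascript
// What author code must do today: walk parents, asking for the computed
// "position" at every level. Each getComputedStyle call allocates a style
// object and compares strings, which is the cost a native
// getPositionedAncestor() would avoid with a bitfield check.
function findPositionedAncestor(node, getComputedStyle) {
  let el = node.parent;
  while (el) {
    if (getComputedStyle(el).position !== 'static') return el;
    el = el.parent;
  }
  return null;
}

// Minimal stand-ins for elements and getComputedStyle:
const body  = { name: 'body',  parent: null,  position: 'static' };
const panel = { name: 'panel', parent: body,  position: 'relative' };
const leaf  = { name: 'leaf',  parent: panel, position: 'static' };
const gcs = (el) => ({ position: el.position }); // allocates per call, like the real one

console.log(findPositionedAncestor(leaf, gcs).name); // → "panel"
```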

- E


Re: [webcomponents] Adjusting offsetParent, offsetTop, offsetLeft properties in Shadow DOM

2013-03-24 Thread Elliott Sprehn
On Mon, Mar 18, 2013 at 4:48 AM, Dominic Cooney domin...@chromium.org wrote:

 ...

 I think the offset{Parent, Top, Left} properties should be adjusted. This
 means that in the above example, b.offsetParent would be body and
 b.offsetLeft would be silently adjusted to accumulate an offset of 10px
 from c. I think this makes sense because typical uses of offsetParent and
 offsetLeft, etc. are used to calculate the position of one element in the
 coordinate space of another element, and adjusting these properties to work
 this way will mean code that naively implements this use case will continue
 to work.

 This behavior is unfortunately slightly lossy: If the author had #c and
 wanted to calculate the position of #b in the coordinate space of #c, they
 will need to do some calculation to work it out via body. But presumably
 a script of this nature is aware of the existence of Shadow DOM.

 The question of what to do for offset* properties across a shadow boundary
 when the shadow *is* traversable is a vexing one. In this case there is no
 node disclosed that you could not find anyway using .shadowRoot, etc. tree
 walking. From that point of view it seems acceptable for offsetParent to
 return an offsetParent inside the (traversable) shadow.


This seems like correct behavior. We should walk up to find a traversable
parent and then offsetLeft/offsetTop should be relative to those.

(Note: in WebKit this is trivial since offsetLeft and offsetTop both call
offsetParent internally and then compute their value from it)



 On the other hand, this violates the lower-boundary encapsulation of the
 Shadow DOM spec. This means that pages that are using traversable shadows,
 but relying on convention (ie don't use new properties like .shadowRoot)
 to get the encapsulation benefits of Shadow DOM, now have to audit the
 offsetParent property. It also means you need to have two ways of dealing
 with offsetParent in both user agents and author scripts. So for simplicity
 and consistency I think it makes sense to treat both traversable and
 non-traversable shadows uniformly.


I disagree with this since it means offsetParent returns a nonsensical
value for elements in, or projected into, traversable shadow roots as it
traverses all the way up into the main page until it's not inside a
ShadowRoot anymore.

offsetParent is very useful to find your positioned parent, and you're
crippling that feature and making authors use distributedParent +
getComputedStyle() repeatedly which is considerably more expensive.

- E


Re: [webcomponents]: Making link rel=components produce DocumentFragments

2013-03-18 Thread Elliott Sprehn
On Mon, Mar 18, 2013 at 9:19 AM, Dimitri Glazkov dglaz...@google.com wrote:


 On Sun, Mar 17, 2013 at 1:46 PM, Elliott Sprehn espr...@gmail.com wrote:


 I'd rather like it if the spec said the component document is a document
 that's always in standards mode and has no children and then the contents
 of the component were put into a DocumentFragment.


 Should it bother us that depending on the implementation, one document
 could be shared among all component fragments or not?


That seems like an advantage to me. We can have the spec require unique
documents for now if people want, but using DocumentFragment at least lets
us decide to share in the future. Using a Document (or subclass) would
prevent us from ever making that optimization.

I think the simplicity argument is more important. Document has tons of
APIs on it that are not useful, DocumentFragment is much more focused.

- E


Re: [webcomponents]: Making link rel=components produce DocumentFragments

2013-03-17 Thread Elliott Sprehn
On Sat, Mar 16, 2013 at 2:29 PM, Dimitri Glazkov dglaz...@google.com wrote:

 On Thu, Mar 14, 2013 at 8:09 PM, Dominic Cooney domin...@google.com wrote:

 On Fri, Mar 15, 2013 at 9:43 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Here's one scenario where keeping components Documents might be a good
 idea. Suppose you just built a multi-threaded parser into your renderer
 engine, and you would like to hook it up to start loading multiple
 components in parallel. How difficult will it be for you to do this if they
 were all just DocumentFragments in the same document?


 Given that having components be parsed in the same document complicates
 the specification, complicates the implementation (for example resolving
 relative resources), might threaten some optimizations (multi-threaded
 parsing), and gives a benefit that authors could achieve using tools to
 crunch multiple component files into one, I propose that:

 Each resource is loaded in its own document.

 What about the type of the Component's content attribute? Should that be
 DocumentFragment or Document?


 Might as well be Document, then. Why create an extra DocumentFragment,
 right?


Because it simplifies the interface for interacting with things, and means
we can possibly share the Document between some components that are same
origin in the future.

ex. Where is the component body? document.documentElement? document.body?
Why are we creating a <body> and <head> on this thing? Do I need to add a
<!DOCTYPE html>?

I'd rather like it if the spec said the component document is a document
that's always in standards mode and has no children and then the contents
of the component were put into a DocumentFragment.

- E


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Elliott Sprehn
On Wed, Mar 6, 2013 at 5:36 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 3/6/13 5:05 PM, Dimitri Glazkov wrote:

 * attributeChangedCallback -- synchronously called when an attribute
 of an element is added, removed, or modified


 Synchronously in what sense?  Why are mutation observers not sufficient
 here?


 * insertedCallback -- asynchronously called when an element is added
 to document tree (TBD: is this called when parser is constructing the
 tree?)


 Again, why is this not doable with mutation observers?


inserted and removed can probably be end-of-microtask, but
attributeChanged definitely needs to be synchronous to model the behavior
of <input type> where changing it from X to Y has an immediate effect on
the APIs available (like stepUp).
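
A sketch of why the synchronous timing matters (a plain object stands in for the real input element; names are illustrative):

```javascript
// A plain object stands in for the real <input>: flipping the "type"
// attribute must immediately change which APIs work, which is why the
// attribute-changed callback cannot be deferred to end of microtask.
function makeInput() {
  const input = {
    type: 'text',
    setAttribute(name, value) {
      if (name === 'type') input.type = value; // synchronous, nothing queued
    },
    stepUp() {
      if (input.type !== 'range' && input.type !== 'number') {
        throw new Error('InvalidStateError'); // as the real element behaves
      }
      return 'stepped';
    },
  };
  return input;
}

const input = makeInput();
try { input.stepUp(); } catch (e) { console.log(e.message); } // → "InvalidStateError"
input.setAttribute('type', 'range');
console.log(input.stepUp()); // → "stepped": the change took effect immediately
```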

MutationObservers are not sufficient because childList mutations are about
children, but you want to observe when *yourself* is added or removed from
the Document tree. There's also no "inserted into document" and "removed
from document" mutation records, and since ShadowRoot has no host
property there's also no way to walk up to the root to find out if you're
actually in the document. (Dimitri should fix this... I hope).

The ready callback should probably also be synchronous (but at least it
happens in script invocation of the new operator, or after tree building),
since you want your widget to be usable immediately.

- E


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-11 Thread Elliott Sprehn
On Mon, Mar 11, 2013 at 2:32 PM, Daniel Buchner dan...@mozilla.com wrote:

 inserted and removed can probably be end-of-microtask, but
 attributeChanged definitely needs to be synchronous to model the behavior
 of <input type> where changing it from X to Y has an immediate effect on
 the APIs available (like stepUp).

 Actually, I disagree. Attribute changes need not be assessed synchronously,
 as long as they are evaluated before critical points, such as before paint
 (think requestAnimationFrame timing). Can you provide a common, real-world
 example of where queued timing would not work?



Yes, I already gave one. Where you go from <input type=text> to <input
type=range> and then stepUp() suddenly starts working.

I guess we could force people to use properties here, but that doesn't
model how the platform itself works.

An even more common example is <iframe> src. Setting a different @src value
synchronously navigates the frame. Also, inserting an iframe into the page
synchronously loads an about:blank document.

Neither of theses cases are explained by the end-of-microtask behavior
you're describing.

- E


Re: [webcomponents]: First stab at the Web Components spec

2013-03-11 Thread Elliott Sprehn
On Mon, Mar 11, 2013 at 2:45 PM, Philip Walton phi...@philipwalton.com wrote:

 Personally, I had no objection to rel=component. It's similar in
 usage to rel=stylesheet in the fact that it's descriptive of what you're
 linking to.

 On the other hand, rel=include is very broad. It could just as easily
 apply to a stylesheet as a Web component, and may limit the usefulness of
 the term if/when future rel values are introduced.

 (p.s. I'm new to this list and haven't read through all the previous
 discussions on Web components. Feel free to disregard this comment if I'm
 rehashing old topics)



+1, I like rel=component; "document" or "include" doesn't make sense.

- E


Re: [webcomponents]: First stab at the Web Components spec

2013-03-11 Thread Elliott Sprehn
On Mon, Mar 11, 2013 at 4:39 PM, Scott Miles sjmi...@google.com wrote:

 My issue is that the target of this link will not in general be an atomic
 thing like a 'component' or a 'module'. It's a carrier for resources and
 links to other resources. IMO this is one of the great strengths of this
 proposal.

 For this reason, when it was rel=components (plural) there was no
 problem for me.

 Having said all that, I'm not particularly up in arms about this issue.
 The name will bend to the object in the fullness of time. :)



I guess that doesn't bother me because rel=stylesheet isn't just one
stylesheet either, you can @import lots of them down there. :)

Similarly when I think of a component I don't think of one custom widget,
I think of lots of logically related bundled things.

- E


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-07 Thread Elliott Sprehn
On Thu, Mar 7, 2013 at 9:55 AM, Bronislav Klučka 
bronislav.klu...@bauglir.com wrote:


 ...

 I do not mean to sound cocky here, but I'd really like to know how many
 people here are used to languages that can separate internals and
 externals, because if you are simply not used to it, you simply cannot see
 the benefits and all goes to I'm used to play with internals of controls,


I think you'll find everyone in this discussion has used a wide variety of
systems from XUL to Cocoa to Swing to MFC and many more.

I think it's important to note that all these native platforms support
walking the hierarchy as well.

Cocoa has [NSView subviews], Windows has FindWindowEx/EnumChildWindows,
Swing has getComponents(), ...

I'm struggling to think of a widely used UI platform that _doesn't_ give
you access. Sure, there's encapsulation, and Shadow DOM has that too, but they
all still give you an accessor to get down into the components.

...

 From my JS/HTML control experience?
 * I want all my tables to look a certain way - boom, jQuery datepicker breaks
 down, tinyMCE breaks down
 * I want all my tables to have an option for exporting data - boom, jQuery
 datepicker breaks down, tinyMCE breaks down
 * I switch from content-box to border-box - pretty much every 3rd-party
 control breaks down
 * I want to autogenerate a table of contents (page menu links) from headings
 in the article, f*ck, some stupid plugin gets involved
 that's like the last week's experience
 ...


Private shadows are not necessary to address any of the issues you cite.
Indeed all of these issues are already fixed with the current design by way
of scoped styles, resetting style inheritance, and shadows being separate
trees you don't accidentally fall into.

I think this is really the compelling argument. We solved the major issues
already, and none of the other very successful platforms (ex. Cocoa,
Android, etc.) needs to be so heavy handed as to prevent you from walking
the tree if you choose.

- E


Re: [webcomponents]: What callbacks do custom elements need?

2013-03-06 Thread Elliott Sprehn
On Wed, Mar 6, 2013 at 2:05 PM, Dimitri Glazkov dglaz...@google.com wrote:

 ...
 * insertedCallback -- asynchronously called when an element is added
 to document tree (TBD: is this called when parser is constructing the
 tree?)

 * removedCallback -- asynchronously called when an element is removed
 from the document tree (except the situations when the document is
 destroyed)


The inserted and removed callbacks need to happen in
batches asynchronously, specifically if I have a tree:

  A
   \
    B
   / \
  C   D

If B and D are custom elements and I remove A, the removal should
happen first, and then B and D should be notified after the removal and all
cleanup has happened. We don't want to reinvent mutation events here.

insertedCallback should happen the same way. We should fully construct the
tree first and then all of them get called in a big batch at the end.
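A rough sketch of that batching order, with plain objects standing in for DOM nodes (the field names and `removedCallback` shape here are illustrative, not the spec's):

```javascript
// Sketch: detach the whole subtree first, then notify custom elements
// in one batch after all cleanup is done. Plain objects, not real DOM.

const log = [];

function makeNode(name, custom, children = []) {
  const node = { name, custom, children, parent: null,
                 removedCallback() { log.push(name); } };
  children.forEach(c => { c.parent = node; });
  return node;
}

// The tree from the message: A -> B -> (C, D); B and D are "custom".
const tree = makeNode('A', false,
  [makeNode('B', true, [makeNode('C', false), makeNode('D', true)])]);

function removeSubtree(root) {
  const pending = [];
  // Phase 1: detach everything, collecting custom elements as we go.
  (function detach(node) {
    node.children.forEach(detach);
    node.parent = null;
    if (node.custom) pending.push(node);
  })(root);
  // Phase 2: only now, after removal has fully completed, notify in a batch.
  pending.forEach(n => n.removedCallback());
}

removeSubtree(tree);
// log now holds ['D', 'B']: callbacks ran only after the removal finished.
```

The point of the two phases is that no callback can observe a half-detached tree.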

- E


[shadow-dom] Counters and list item counting

2013-02-19 Thread Elliott Sprehn
Currently in Webkit list item counting is done on the render tree, but we
are looking at making it use the DOM instead so that ordered lists work
properly in regions. This raises an interesting question about if they
should use the composed shadow tree, or the original tree.

ex.

<x-widget>
<ol>
  <li>
  <li>
</ol>
</x-widget>

inside x-widget:

<div>
  <content select="li:last-child">
</div>

What's the count on that projected list item?

This also raises questions of how counters interact with shadows. Should
counters work on the projected DOM or the original DOM?

We're leaning towards the original DOM since otherwise counters become
difficult to work with when they're reprojected deeper and deeper down a
component hierarchy.
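A toy illustration of the two candidate answers for the projected item (the names mirror the example above; this is not a counting implementation):

```javascript
// Toy sketch: the same <li> gets a different number depending on which
// tree you count against. Names here are illustrative stand-ins.

// In the original tree, the <ol> has two <li> children.
const originalSiblings = ['li-1', 'li-2'];
// In the composed tree, <content select="li:last-child"> projects only one.
const composedSiblings = ['li-2'];

// Against the original DOM the projected item is number 2;
// against the composed tree it is number 1.
const countInOriginal = originalSiblings.indexOf('li-2') + 1; // 2
const countInComposed = composedSiblings.indexOf('li-2') + 1; // 1
```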

- E


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-29 Thread Elliott Sprehn
On Wed, Nov 28, 2012 at 2:51 PM, Maciej Stachowiak m...@apple.com wrote:


 Does this support the previously discussed mechanism of allowing either
 public or private components? I'm not able to tell from the referenced
 sections.


Can you explain the use case for wanting private shadows that are not
isolated?

- E


Re: [webcomponents]: Changing API from constructable ShadowRoot to factory-like

2012-11-09 Thread Elliott Sprehn
Sounds good to me. :)



On Fri, Nov 9, 2012 at 12:30 PM, Maciej Stachowiak m...@apple.com wrote:


 I think a factory function is better here for the reasons Dimitri stated.
 But I also agree that an addFoo function returning a new object seems
 strange. I think that createShadowRoot may be better than either option.

  - Maciej

 On Nov 8, 2012, at 11:42 AM, Erik Arvidsson a...@chromium.org wrote:

  addShadowRoot seem wrong to me to. Usually add* methods takes an
  argument of something that is supposed to be added to the context
  object.
 
  If we are going with a factory function I think that createShadowRoot
  is the right name even though create methods have a lot of bad history
  in the DOM APIs.
 
  On Thu, Nov 8, 2012 at 1:01 PM, Elliott Sprehn espr...@google.com
 wrote:
  True, though that's actually one character longer, probably two with
 normal
  formatting ;P
 
  new ShadowRoot(element,{
  element.addShadowRoot({
 
  I'm more concerned about the constructor with irreversible side effects
 of
  course.
 
  - E
 
 
  On Thu, Nov 8, 2012 at 9:57 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
 
  That _is_ pretty nice, but we can add this as a second argument to the
  constructor, as well:
 
  root = new ShadowRoot(element, {
   applyAuthorSheets: false,
   resetStyleInheritance: true
  });
 
  At this point, the stakes are primarily in aesthetics... Which makes
  the whole question so much more difficult to address objectively.
 
  :DG
 
  On Thu, Nov 8, 2012 at 9:54 AM, Elliott Sprehn espr...@google.com
 wrote:
  The real sugar I think is in the dictionary version of addShadowRoot:
 
  root = element.addShadowRoot({
   applyAuthorSheets: false,
   resetStyleInheritance: true
  })
 
 
  On Thu, Nov 8, 2012 at 9:49 AM, Dimitri Glazkov dglaz...@google.com
  wrote:
 
  Sure. Here's a simple example without getting into traversable shadow
  trees (those are still being discussed in a different thread):
 
  A1) Using constructable ShadowRoot:
 
  var element = document.querySelector('div#foo');
  // let's add a shadow root to element
  var shadowRoot = new ShadowRoot(element);
  // do work with it..
  shadowRoot.applyAuthorSheets = false;
  shadowRoot.appendChild(myDocumentFragment);
 
  A2) Using addShadowRoot:
 
  var element = document.querySelector('div#foo');
  // let's add a shadow root to element
  var shadowRoot = element.addShadowRoot();
  // do work with it..
  shadowRoot.applyAuthorSheets = false;
  shadowRoot.appendChild(myDocumentFragment);
 
  Now with traversable shadow trees:
 
  B1) Using constructable ShadowRoot:
 
  var element = document.querySelector('div#foo');
  alert(element.shadowRoot); // null
  var root = new ShadowRoot(element);
  alert(root === element.shadowRoot); // true
  var root2 = new ShadowRoot(element);
  alert(root === element.shadowRoot); // false
  alert(root2 === element.shadowRoot); // true
 
  B2) Using addShadowRoot:
 
  var element = document.querySelector('div#foo');
  alert(element.shadowRoot); // null
  var root = element.addShadowRoot();
  alert(root === element.shadowRoot); // true
  var root2 = element.addShadowRoot();
  alert(root === element.shadowRoot); // false
  alert(root2 === element.shadowRoot); // true
 
  :DG
 
  On Thu, Nov 8, 2012 at 9:42 AM, Maciej Stachowiak m...@apple.com
  wrote:
 
  Could you please provide equivalent code examples using both
  versions?
 
  Cheers,
  Maciej
 
  On Nov 7, 2012, at 10:36 AM, Dimitri Glazkov dglaz...@google.com
  wrote:
 
  Folks,
 
  Throughout the year-long (whoa!) history of the Shadow DOM spec,
  various people commented on how odd the constructable ShadowRoot
  pattern was:
 
  var root = new ShadowRoot(host); // both creates an instance *and*
  makes an association between this instance and host.
 
  People (I cc'd most of them) noted various quirks, from the
  side-effectey constructor to relatively uncommon style of the API.
 
  I once was of the strong opinion that having a nice, constructable
  object has better ergonomics and would overcome the mentioned code
  smells.
 
  But... As we're discussing traversable shadows and the possibility
  of
  having Element.shadowRoot, the idea of changing to a factory
 pattern
  now looks more appealing:
 
  var element = document.querySelector('div#foo');
  alert(element.shadowRoot); // null
  var root = element.addShadowRoot({ resetStyleInheritance: true });
  alert(root === element.shadowRoot); // true
  var root2 = element.addShadowRoot();
  alert(root === element.shadowRoot); // false
  alert(root2 === element.shadowRoot); // true
 
  You gotta admit this looks very consistent and natural relative to
  how
  DOM APIs work today.
 
  We could still keep constructable object syntax as alternative
  method
  or ditch it altogether and make calling constructor throw an
  exception.
 
  What do you think, folks? In the spirit of last night's events,
  let's
  vote:
 
  1) element.addShadowRoot rocks! Let's make it the One True Way!
  2) Keep ShadowRoot constructable! Factories stink!
  3) Let's have both!
  4) element.addShadowRoot, but ONLY if we have traversable shadow trees
  5) Kodos.

Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-08 Thread Elliott Sprehn
Traversable shadows are a requirement for a number of things like:

- Generic page level libraries and polyfills that need to transform DOM
nodes
- Generic event handling libraries (ex. Pointer events)
- Creating screenshots of the page by rendering every node to a canvas (ex.
Google Feedback)
- Creating awesome bookmarklets like Readability

In our discussions with widget authors we'd either end up making shadows
exposed by convention on almost all widget libraries under a common name as
authors expect to be able to drop in libraries, polyfills and tools like
Feedback, or we'd end up with awful hacks like overriding ShadowRoot or
document.createElement.

querySelector and friends will still stop at these boundaries, so you would
never accidentally fall down into a ShadowRoot. That means I doubt
you'll see widgets broken as Boris suggests, because people aren't
going to accidentally modify the inside of your widget.

I'd also hate to prevent future innovation like Google Feedback which has
turned out to be a critical component for Google product success. I can't
share specific numbers, but it's had a very high impact and being able to
be dropped into existing pages and just work was fundamental to that. While
perhaps we can eventually solve that use case better, who knows what future
ideas people will come up with.

- E


On Tue, Nov 6, 2012 at 3:44 PM, Dimitri Glazkov dglaz...@chromium.orgwrote:

 On Thu, Nov 1, 2012 at 9:02 AM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 11/1/12 7:41 AM, Tab Atkins Jr. wrote:
 
  There was no good *reason* to be private by default
 
 
  Yes, there was.  It makes it much simpler to author non-buggy components.
  Most component authors don't really contemplate how their code will behave
  if someone violates the invariants they're depending on in their shadow
  DOMs.  We've run into this again and again with XBL.

  So pretty much any component that has a shadow DOM people can mess with but
  doesn't explicitly consider that it can happen is likely to be very broken.
  Depending on what exactly it does, the brokenness can be more or less
  benign, ranging from "doesn't render right" to "leaks private user data to
  the world".
 
 
  As a general rule, we should favor being public over
  being private unless there's a good privacy or security reason to be
  private.
 
 
  As a general rule we should be making it as easy as possible to write
  non-buggy code, while still allowing flexibility.  In my opinion.

 This has been my concern as well.

 The story that made me sway is the elementFromPoint story. It goes
 like this: we had an engineer come by and ask to add elementFromPoint
 to ShadowRoot API.

 ... this is a short story with a happy ending
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=18912), since
 ShadowRoot hasn't shipped anywhere yet. However, imagine all browsers
 ship Shadow DOM (oh glorious time), and there's a new cool DOM thing
 that we haven't thought of yet. Without ability to get into shadow
 trees and polyfill, we'll quickly see people throw nasty hacks at the
 problem, like they always do (see one that Dominic suggested here:
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=15409#c5). And that
 seems like a bad smell.

 I am both excited and terrified.

 Excited, because discovering Angelina Farro's talk
 (http://www.youtube.com/watch?v=JNjnv-Gcpnw) makes me realize that
 this Web Components thing is starting to catch on.

 Terrified, because we gotta get this right. The Web is traditionally
 very monkey-patchey and pliable and our strides to make the boundaries
 hard will just breed perversion.

 Anyhow. Elliott has made several passionate arguments for traversable
 shadow trees in person. Maybe he'll have a chance to chime in here.


 :DG



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-08 Thread Elliott Sprehn
On Thu, Nov 1, 2012 at 6:43 AM, Maciej Stachowiak m...@apple.com wrote:


 On Nov 1, 2012, at 12:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 ...
 
  For example, being able to re-render the page manually via DOM
  inspection and custom canvas painting code.  Google Feedback does
  this, for example.  If shadows are only exposed when the component
  author thinks about it, and then only by convention, this means that
  most components will be un-renderable by tools like this.

 As Adam Barth often points out, in general it's not safe to paint pieces
 of a webpage into canvas without security/privacy risk. How does Google
 Feedback deal with non-same-origin images or videos or iframes, or with
 visited link coloring, to cite a few examples? Does it just not handle
 those things?


We don't handle visited link coloring as there's no way to get that from JS.

For images we proxy all images and do the actual drawing to the canvas in a
nested iframe that's on the same domain as the proxy.

For cross domain iframes we have a JS API that the frame can include that
handles a special postMessage which serializes the entire page and then
unserializes on the other side for rendering. Thankfully this case is
extremely rare, unlike web components, where it turns out you end up with
almost the entire page down in some component or another (ex. x-panel,
x-conversation-view …). This of course requires you to have control of
the cross origin page.

For an architectural overview of Google Feedback's JS HTML rendering engine
you can look at this presentation, slides 6 and 10 explain the image proxy:

http://www.elliottsprehn.com/preso/fluentconf/

- E


Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2012-11-08 Thread Elliott Sprehn
On Thu, Nov 8, 2012 at 8:13 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/8/12 1:45 AM, Elliott Sprehn wrote:

 That means that I
 doubt you'll get widgets being broken as Boris suggests because people
 aren't going to accidentally modify the inside of your widget.


 The problems start when people _maliciously_ modify the inside of your
 widget.  Again, with XBL you don't get to accidentally modify the insides
 of anonymous content (shadow) trees.  But there were all sorts of attack
 scenarios where people could modify them at all.


If you're worried about malicious attacks on your widget, shadows being
private is not enough. You need a whole new scripting context. I can
override all the String and Array methods, DOM prototype methods,
document.createElement, document.implementation methods, MutationObserver
etc. or even the ShadowRoot constructor with the current API and still
likely capture the inside of your component. This is JavaScript after all.
:)

You're much better off using a public shadow and then putting your whole
widget in a cross domain iframe to get a new scripting context instead of
depending on the false security of a private shadow.
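A minimal sketch of that attack, assuming a same-context attacker that runs first (`ShadowRootLike` is a stand-in class, not the real ShadowRoot API):

```javascript
// Sketch: within one scripting context, privacy by convention can be
// defeated by patching globals before the component runs.
// "ShadowRootLike" is an illustrative stand-in, not the real ShadowRoot.

const captured = [];

class ShadowRootLike {
  constructor(host) { this.host = host; }
}

// Attacker code runs first and wraps the constructor:
const RealShadowRoot = ShadowRootLike;
function PatchedShadowRoot(host) {
  const root = new RealShadowRoot(host);
  captured.push(root); // attacker keeps a reference to the "private" root
  return root;         // constructor functions may return an explicit object
}

// Component code later creates its shadow via the (patched) global:
const widgetRoot = new PatchedShadowRoot({ tag: 'x-widget' });
// The component believed widgetRoot was private; the attacker now has it.
```

The same trick works against any global the component touches, which is why a separate scripting context (e.g. a cross-domain iframe) is the only real isolation boundary.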



  I'd also hate to prevent future innovation like Google Feedback which
 has turned out to be a critical component for Google product success.


 I would like to understand more here.  How does preventing touching the
 shadow tree by default prevent something like Google Feedback?


Google Feedback is an HTML rendering engine written in JS. To render the
document you need access to every DOM node so you can draw it to a canvas.
In the world of web components much, or often all, of your web application
ends up inside of a component. We can imagine Gmail is something like:

<x-toolbar></x-toolbar>
<x-panel>
  <x-label-sidebar></x-label-sidebar>
  <x-conversation></x-conversation>
</x-panel>

Google Feedback would be unable to access the private shadow tree where
the actual content of the page is, so your screenshot would be blank.
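A rough sketch of why the renderer needs traversable shadows, with plain objects standing in for DOM nodes (the `shadowRoot`/`children` fields are illustrative; `shadowRoot` is non-null only when the component exposes it):

```javascript
// Sketch: a page-level walker (like a JS renderer) can only draw what
// it can reach. Plain objects stand in for DOM nodes.

function collectRenderable(node, out = []) {
  out.push(node.tag);
  if (node.shadowRoot) collectRenderable(node.shadowRoot, out);
  (node.children || []).forEach(c => collectRenderable(c, out));
  return out;
}

const page = {
  tag: 'x-panel',
  shadowRoot: {
    tag: '#shadow-root',
    children: [{
      tag: 'x-conversation',
      shadowRoot: null, // private: its real content is invisible to the walk
      children: [],
    }],
  },
  children: [],
};

const seen = collectRenderable(page);
// seen: ['x-panel', '#shadow-root', 'x-conversation'] — with a private
// shadow on x-conversation, everything inside it is missing from the walk,
// so that region of the rendered screenshot would be blank.
```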

Today Google Feedback just works on most pages on the web and can be
activated through a bookmarklet on any website, even ones that Google does
not control. In the future this wouldn't be possible if shadows were
private by default and authors didn't consider all future library and
widget integrations.

For more information about Google Feedback see my recent architecture
presentation:
http://elliottsprehn.com/preso/fluentconf/

Another example is Readability:
http://www.readability.com/bookmarklets

Once the articles on news websites are actually just <x-news-article
articleId={bindingForArticleId}></x-news-article> and load from the model
into their shadow, they become hidden from bookmarklets that wish to
traverse down into them, making future innovations like Readability difficult
without super hacks.

- E


Re: [webcomponents]: Changing API from constructable ShadowRoot to factory-like

2012-11-08 Thread Elliott Sprehn
The real sugar I think is in the dictionary version of addShadowRoot:

root = element.addShadowRoot({
  applyAuthorSheets: false,
  resetStyleInheritance: true
})


On Thu, Nov 8, 2012 at 9:49 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Sure. Here's a simple example without getting into traversable shadow
 trees (those are still being discussed in a different thread):

 A1) Using constructable ShadowRoot:

 var element = document.querySelector('div#foo');
 // let's add a shadow root to element
 var shadowRoot = new ShadowRoot(element);
 // do work with it..
 shadowRoot.applyAuthorSheets = false;
 shadowRoot.appendChild(myDocumentFragment);

 A2) Using addShadowRoot:

 var element = document.querySelector('div#foo');
 // let's add a shadow root to element
 var shadowRoot = element.addShadowRoot();
 // do work with it..
 shadowRoot.applyAuthorSheets = false;
 shadowRoot.appendChild(myDocumentFragment);

 Now with traversable shadow trees:

 B1) Using constructable ShadowRoot:

 var element = document.querySelector('div#foo');
 alert(element.shadowRoot); // null
 var root = new ShadowRoot(element);
 alert(root === element.shadowRoot); // true
 var root2 = new ShadowRoot(element);
 alert(root === element.shadowRoot); // false
 alert(root2 === element.shadowRoot); // true

 B2) Using addShadowRoot:

 var element = document.querySelector('div#foo');
 alert(element.shadowRoot); // null
 var root = element.addShadowRoot();
 alert(root === element.shadowRoot); // true
 var root2 = element.addShadowRoot();
 alert(root === element.shadowRoot); // false
 alert(root2 === element.shadowRoot); // true

 :DG

 On Thu, Nov 8, 2012 at 9:42 AM, Maciej Stachowiak m...@apple.com wrote:
 
  Could you please provide equivalent code examples using both versions?
 
  Cheers,
  Maciej
 
  On Nov 7, 2012, at 10:36 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
 
  Folks,
 
  Throughout the year-long (whoa!) history of the Shadow DOM spec,
  various people commented on how odd the constructable ShadowRoot
  pattern was:
 
  var root = new ShadowRoot(host); // both creates an instance *and*
  makes an association between this instance and host.
 
  People (I cc'd most of them) noted various quirks, from the
  side-effectey constructor to relatively uncommon style of the API.
 
  I once was of the strong opinion that having a nice, constructable
  object has better ergonomics and would overcome the mentioned code
  smells.
 
  But... As we're discussing traversable shadows and the possibility of
  having Element.shadowRoot, the idea of changing to a factory pattern
  now looks more appealing:
 
  var element = document.querySelector('div#foo');
  alert(element.shadowRoot); // null
  var root = element.addShadowRoot({ resetStyleInheritance: true });
  alert(root === element.shadowRoot); // true
  var root2 = element.addShadowRoot();
  alert(root === element.shadowRoot); // false
  alert(root2 === element.shadowRoot); // true
 
  You gotta admit this looks very consistent and natural relative to how
  DOM APIs work today.
 
  We could still keep constructable object syntax as alternative method
  or ditch it altogether and make calling constructor throw an
  exception.
 
  What do you think, folks? In the spirit of last night's events, let's
 vote:
 
  1) element.addShadowRoot rocks! Let's make it the One True Way!
  2) Keep ShadowRoot constructable! Factories stink!
  3) Let's have both!
  4) element.addShadowRoot, but ONLY if we have traversable shadow trees
  5) Kodos.
 
  :DG
 
  P.S. I would like to retain the atomic quality of the operation:
  instantiate+associate in one go. There's a whole forest of problems
  awaits those who contemplate detached shadow roots.
 
 



Re: [webcomponents]: Changing API from constructable ShadowRoot to factory-like

2012-11-08 Thread Elliott Sprehn
True, though that's actually one character longer, probably two with normal
formatting ;P

new ShadowRoot(element,{
element.addShadowRoot({

I'm more concerned about the constructor with irreversible side effects of
course.

- E


On Thu, Nov 8, 2012 at 9:57 AM, Dimitri Glazkov dglaz...@google.com wrote:

 That _is_ pretty nice, but we can add this as a second argument to the
 constructor, as well:

 root = new ShadowRoot(element, {
   applyAuthorSheets: false,
   resetStyleInheritance: true
 });

 At this point, the stakes are primarily in aesthetics... Which makes
 the whole question so much more difficult to address objectively.

 :DG

 On Thu, Nov 8, 2012 at 9:54 AM, Elliott Sprehn espr...@google.com wrote:
  The real sugar I think is in the dictionary version of addShadowRoot:
 
  root = element.addShadowRoot({
applyAuthorSheets: false,
resetStyleInheritance: true
  })
 
 
  On Thu, Nov 8, 2012 at 9:49 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
 
  Sure. Here's a simple example without getting into traversable shadow
  trees (those are still being discussed in a different thread):
 
  A1) Using constructable ShadowRoot:
 
  var element = document.querySelector('div#foo');
  // let's add a shadow root to element
  var shadowRoot = new ShadowRoot(element);
  // do work with it..
  shadowRoot.applyAuthorSheets = false;
  shadowRoot.appendChild(myDocumentFragment);
 
  A2) Using addShadowRoot:
 
  var element = document.querySelector('div#foo');
  // let's add a shadow root to element
  var shadowRoot = element.addShadowRoot();
  // do work with it..
  shadowRoot.applyAuthorSheets = false;
  shadowRoot.appendChild(myDocumentFragment);
 
  Now with traversable shadow trees:
 
  B1) Using constructable ShadowRoot:
 
  var element = document.querySelector('div#foo');
  alert(element.shadowRoot); // null
  var root = new ShadowRoot(element);
  alert(root === element.shadowRoot); // true
  var root2 = new ShadowRoot(element);
  alert(root === element.shadowRoot); // false
  alert(root2 === element.shadowRoot); // true
 
  B2) Using addShadowRoot:
 
  var element = document.querySelector('div#foo');
  alert(element.shadowRoot); // null
  var root = element.addShadowRoot();
  alert(root === element.shadowRoot); // true
  var root2 = element.addShadowRoot();
  alert(root === element.shadowRoot); // false
  alert(root2 === element.shadowRoot); // true
 
  :DG
 
  On Thu, Nov 8, 2012 at 9:42 AM, Maciej Stachowiak m...@apple.com
 wrote:
  
   Could you please provide equivalent code examples using both versions?
  
   Cheers,
   Maciej
  
   On Nov 7, 2012, at 10:36 AM, Dimitri Glazkov dglaz...@google.com
   wrote:
  
   Folks,
  
   Throughout the year-long (whoa!) history of the Shadow DOM spec,
   various people commented on how odd the constructable ShadowRoot
   pattern was:
  
   var root = new ShadowRoot(host); // both creates an instance *and*
   makes an association between this instance and host.
  
   People (I cc'd most of them) noted various quirks, from the
   side-effectey constructor to relatively uncommon style of the API.
  
   I once was of the strong opinion that having a nice, constructable
   object has better ergonomics and would overcome the mentioned code
   smells.
  
   But... As we're discussing traversable shadows and the possibility of
   having Element.shadowRoot, the idea of changing to a factory pattern
   now looks more appealing:
  
   var element = document.querySelector('div#foo');
   alert(element.shadowRoot); // null
   var root = element.addShadowRoot({ resetStyleInheritance: true });
   alert(root === element.shadowRoot); // true
   var root2 = element.addShadowRoot();
   alert(root === element.shadowRoot); // false
   alert(root2 === element.shadowRoot); // true
  
   You gotta admit this looks very consistent and natural relative to
 how
   DOM APIs work today.
  
   We could still keep constructable object syntax as alternative method
   or ditch it altogether and make calling constructor throw an
   exception.
  
   What do you think, folks? In the spirit of last night's events, let's
   vote:
  
   1) element.addShadowRoot rocks! Let's make it the One True Way!
   2) Keep ShadowRoot constructable! Factories stink!
   3) Let's have both!
   4) element.addShadowRoot, but ONLY if we have traversable shadow
 trees
   5) Kodos.
  
   :DG
  
   P.S. I would like to retain the atomic quality of the operation:
   instantiate+associate in one go. There's a whole forest of problems
   awaits those who contemplate detached shadow roots.
  
  
 
 



Re: [webcomponents]: Changing API from constructable ShadowRoot to factory-like

2012-11-07 Thread Elliott Sprehn
I'm for 1); having a constructor with side effects is confusing and
inconsistent with the platform (and other languages).



On Wed, Nov 7, 2012 at 10:36 AM, Dimitri Glazkov dglaz...@google.comwrote:

 Folks,

 Throughout the year-long (whoa!) history of the Shadow DOM spec,
 various people commented on how odd the constructable ShadowRoot
 pattern was:

 var root = new ShadowRoot(host); // both creates an instance *and*
 makes an association between this instance and host.

 People (I cc'd most of them) noted various quirks, from the
 side-effectey constructor to relatively uncommon style of the API.

 I once was of the strong opinion that having a nice, constructable
 object has better ergonomics and would overcome the mentioned code
 smells.

 But... As we're discussing traversable shadows and the possibility of
 having Element.shadowRoot, the idea of changing to a factory pattern
 now looks more appealing:

 var element = document.querySelector('div#foo');
 alert(element.shadowRoot); // null
 var root = element.addShadowRoot({ resetStyleInheritance: true });
 alert(root === element.shadowRoot); // true
 var root2 = element.addShadowRoot();
 alert(root === element.shadowRoot); // false
 alert(root2 === element.shadowRoot); // true

 You gotta admit this looks very consistent and natural relative to how
 DOM APIs work today.

 We could still keep constructable object syntax as alternative method
 or ditch it altogether and make calling constructor throw an
 exception.

 What do you think, folks? In the spirit of last night's events, let's vote:

 1) element.addShadowRoot rocks! Let's make it the One True Way!
 2) Keep ShadowRoot constructable! Factories stink!
 3) Let's have both!
 4) element.addShadowRoot, but ONLY if we have traversable shadow trees
 5) Kodos.

 :DG

 P.S. I would like to retain the atomic quality of the operation:
 instantiate+associate in one go. There's a whole forest of problems
 awaits those who contemplate detached shadow roots.




Re: Should MutationObservers be able to observe work done by the HTML parser?

2012-09-04 Thread Elliott Sprehn
On Mon, Sep 3, 2012 at 8:45 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Sep 3, 2012 at 1:24 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Aug 30, 2012 at 2:18 PM, Jonas Sicking jo...@sicking.cc wrote:
 ...
...

 But I'm also not very worried about small differences in
 implementations here as long as everyone maintains the invariants that
 mutation observers set out to hold. Pages can't depend on an exact set
 of mutations happening anyway due to the stochastic nature of parsing.


It concerns me to require consistent mutation records everywhere except
in parsing. While the batches of mutation records might differ based on
when the parser yields, the same records should still be generated from
parsing the same document in each browser, especially if you're
following the HTML5 parser algorithm...
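The invariant being argued for can be sketched like this (record strings here are illustrative, not real MutationRecords):

```javascript
// Sketch: batch boundaries may differ with when the parser yields, but
// concatenating the delivered batches should give the same record
// stream for the same document in every browser.

function deliverInBatches(records, yieldPoints) {
  const batches = [];
  let start = 0;
  for (const end of [...yieldPoints, records.length]) {
    batches.push(records.slice(start, end));
    start = end;
  }
  return batches;
}

const records = ['added <head>', 'added <body>', 'added <p>', 'characterData'];
const earlyYield = deliverInBatches(records, [1]);    // parser yielded once, early
const lateYields = deliverInBatches(records, [2, 3]); // parser yielded twice, later

// Different batching, same flattened stream:
const sameStream =
  JSON.stringify(earlyYield.flat()) === JSON.stringify(lateYields.flat());
// sameStream === true
```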

- E



Re: Making template play nice with XML and tags-and-text

2012-08-08 Thread Elliott Sprehn
On Sun, Aug 5, 2012 at 7:00 AM, Henri Sivonen hsivo...@iki.fi wrote:

 On Wed, Jul 18, 2012 at 11:35 PM, Adam Barth w...@adambarth.com wrote:
  On Wed, Jul 18, 2012 at 11:29 AM, Adam Klein ad...@chromium.org wrote:
 
  On Wed, Jul 18, 2012 at 9:19 AM, Adam Barth w...@adambarth.com wrote:
 
  Inspired by a conversation with hsivonen in #whatwg, I spend some time
  thinking about how we would design template for an XML world.  One
 idea I
  had was to put the elements inside the template into a namespace other
 than
  http://www.w3.org/1999/xhtml.

 On the face of things, this seems a lot less scary than the wormhole
 model. I think this merits further exploration! Thank you!


This proposal seems worse than wormhole parsing because the interface of
the template nodes is not HTMLElement, unless we're assuming it's a
different but identical namespace?

For instance, it's super weird if <img src=x> is missing the .src property
because it's not in the HTML namespace, but suddenly when it's cloned for
instantiation it's back in the HTML namespace and has the src property.

- E


Re: [webcomponents] HTML Parsing and the template element

2012-06-26 Thread Elliott Sprehn
Silly question, but why not specify the template element as if its contents
were PCDATA, and the document fragment is the value? Then this whole
thing isn't really any different from a textarea.

On Tue, Jun 26, 2012 at 8:25 AM, Rafael Weinstein rafa...@google.comwrote:

 I think I'm not understanding the implications of your argument.

 You're making a principled argument about future pitfalls. Can you
 help me get my head around it by way of example?

 Perhaps:
 -pitfalls developers fall into
 -further dangerous points along the slippery slope you think this
 opens up (you mentioned pandoras box)


 On Fri, Jun 15, 2012 at 4:04 AM, Henri Sivonen hsivo...@iki.fi wrote:
  On Thu, Jun 14, 2012 at 11:48 PM, Ian Hickson i...@hixie.ch wrote:
  Does anyone object to me adding template, content, and shadow to
  the HTML parser spec next week?
 
  I don't object to adding them if they create normal child elements in
  the DOM. I do object if template has a null firstChild and the new
  property that leads to a fragment that belongs to a different owner
  document.
 
  (My non-objection to creating normal children in the DOM should not be
  read as a commitment to support templates Gecko.)
 
 
  --
  Henri Sivonen
  hsivo...@iki.fi
  http://hsivonen.iki.fi/
 




Re: [webcomponents] HTML Parsing and the template element

2012-06-26 Thread Elliott Sprehn
Hmm, I might be in agreement with Henri then. Having all these parallel
trees in the DOM is getting kind of out of control. Now there's the shadow
DOM trees on every node, and also this nested tree of document fragments
from template. There's a lot of simplicity in the DOM design that's lost
from these two changes.

On Tue, Jun 26, 2012 at 1:19 PM, Tab Atkins Jr. jackalm...@gmail.comwrote:

 On Tue, Jun 26, 2012 at 1:03 PM, Elliott Sprehn espr...@gmail.com wrote:
  Silly question but why not specify the template element as if it's
 contents
  were PCDATA, and the document fragment is the value. Then this whole
 thing
  isn't really any different than a textarea.

 Because you can't nest textarea inside of itself, but we want
 templates to be nestable.

 ~TJ



Re: [webcomponents] HTML Parsing and the template element

2012-06-26 Thread Elliott Sprehn
On Fri, Jun 15, 2012 at 4:04 AM, Henri Sivonen hsivo...@iki.fi wrote:

 On Thu, Jun 14, 2012 at 11:48 PM, Ian Hickson i...@hixie.ch wrote:
  Does anyone object to me adding template, content, and shadow to
  the HTML parser spec next week?

 I don't object to adding them if they create normal child elements in
 the DOM.


If we go this route, how does template iterate work when the array is
empty? Could you give some detail on what you'd like the behavior to be
for iterating over [], [oneThing] and [oneThing, twoThings]?

- E


Re: Browser Payments API proposal

2012-06-19 Thread Elliott Sprehn
I'm not sure this is a problem worth solving in the platform. In 5-10 years
I doubt we'll be typing our card numbers into pages. You'll tap your phone
to your laptop or use some kind of payment service like paypal/wallet/etc.

There are so many security/privacy issues with exposing your payment
information behind an infobar to any page that requests it.

On Tue, Jun 19, 2012 at 10:15 AM, Yaar Schnitman y...@chromium.org wrote:

 Nice idea Alex!

 I have done some work on this in the past, but it didn't go very far. A
 few tips:
 1. As long as many users don't have this, websites would still have to do
 form-based credit-card forms. But browsers and extensions are getting
 pretty good at auto-filling these forms. So you have a tough competition
 from the entrenched technology and there are ways websites can help the
 auto-complete work even better (e.g. proper element names).

 2. The permissions dialog needs to be more visible and proactive. Users
 (even advanced ones) often miss the permissions prompts.

 3. Requiring the user to type a security code / pin every time you give a
 site your credit card info might increase awareness and security.

 4. Can we do something that doesn't require scripting? Maybe a new tag?
 The motivation for that is embedding one click payments in emails where
 scripting is disabled.

 5. Minor things: How to deal with multiple credit cards? What if a site
 only supports AmEx but not Visa?


 On Sun, Jun 17, 2012 at 5:34 AM, Arthur Barstow art.bars...@nokia.com wrote:

 On 6/16/12 8:16 PM, ext Alex MacCaw wrote:

 The blog article link has changed to:
 http://blog.alexmaccaw.com/preview/Pc1LYBw4xDT95OPWZGihod7z8WhrnfAdXMjQxMDg3MTc5NDIaXNjA1p


 Alex - perhaps this API will be of interest to the Web Payments Community
 Group http://www.w3.org/community/webpayments/.
 -AB





Re: [selectors-api] Consider backporting find() behavior to querySelector()

2012-06-19 Thread Elliott Sprehn
On Tue, Jun 19, 2012 at 1:38 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 ...
 This is not a good argument.  qSA is used often enough, and has a long
 enough name, that the name is actually a pretty significant
 misfeature.  This is a pretty core API, and both it and its precursors
 (getElementByID, etc.) are very commonly renamed by libraries
 precisely because you need a very short name for such a commonly used
 function.


Why does it need a short name? If the name is too long to type, that's an
argument for better IDEs. Otherwise you end up with names like strncpy just
to save typing, and gzip already eliminates the file-size concern.

I'm in agreement with Marat that find() is not as clear as most DOM APIs
usually are. findBySelector() makes much more sense.

- Elliott



Re: [DOM4] Mutation algorithm imposed order on document children

2012-06-12 Thread Elliott Sprehn
Okay, I'll use that one. Both the editor's draft and the referenced one are
the same in this respect, though.

On Tue, Jun 12, 2012 at 5:15 AM, Arthur Barstow art.bars...@nokia.com wrote:

 Elliott, All - please use the www-...@w3.org list for DOM4 discussions:
 http://lists.w3.org/Archives/Public/www-dom/

 (Elliott, since that spec is still in the draft phase, you should probably
 use the latest Editor's Draft
 http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html
 instead of the version in w3.org/TR/)




Re: [DOM4] Mutation algorithm imposed order on document children

2012-06-12 Thread Elliott Sprehn
On Mon, Jun 11, 2012 at 9:17 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 6/11/12 7:39 PM, Elliott Sprehn wrote:

 After discussing this with some other contributors there were questions
 on why we're enforcing the order of the document child nodes.


 Because otherwise serialization of the result would be ... very broken?


Inserting doctype nodes has no effect on the mode of the document, though,
so it's already possible to produce a broken serialization (one in the
wrong mode). For instance, you can remove the doctype node and then
serialize, or swap it for a different doctype and then serialize.
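
As a hypothetical console sketch of that point (browser-only; the DOMParser/XMLSerializer usage is my illustration, not from the thread):

```js
// Parse a standards-mode document, then remove its doctype node.
const doc = new DOMParser().parseFromString(
    '<!DOCTYPE html><html><body></body></html>', 'text/html');
doc.removeChild(doc.doctype);

// The mode was fixed at parse time, so removing the doctype doesn't change it:
console.log(doc.compatMode); // "CSS1Compat" (standards mode)

// But the serialization now has no doctype, so re-parsing it would
// produce a quirks-mode document: a serialization in the "wrong mode".
console.log(new XMLSerializer().serializeToString(doc));
```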



  Can we leave the behavior when your document is out of order unspecified?


 You mean allow UAs to throw or not as they wish?  That seems like a pretty
 bad idea, honestly.  We should require that the insertion be allowed (and
 then specify what DOM it produces) or require that it throw.


In practice I don't think anyone inserts these in the wrong order (or
inserts doctypes at all, since they have no effect). If you wanted to
dynamically create a document you'd do it with document.write('<!DOCTYPE
html>') and then replaceChild the root element which was created for you.
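
A hypothetical sketch of that pattern (browser-only; I substitute document.implementation.createHTMLDocument for document.write to keep the example self-contained):

```js
// createHTMLDocument writes the '<!DOCTYPE html>' boilerplate for you,
// yielding a standards-mode document with a doctype and an <html> root.
const doc = document.implementation.createHTMLDocument('');

// Then replaceChild the root element that was created for you:
const root = doc.createElement('html');
root.appendChild(doc.createElement('body'));
doc.replaceChild(root, doc.documentElement);
```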

Implementing this ordering restriction requires changing the append and
replace methods substantially in WebKit for a case I'm not sure developers
realize exists.

- Elliott


[DOM4] Mutation algorithm imposed order on document children

2012-06-11 Thread Elliott Sprehn
I'm working on places where WebKit doesn't follow the DOM4 mutation
algorithm, and one of the bugs is not throwing an exception when a doctype
node is inserted after an element in a document (or other permutations of
the same situation).

https://bugs.webkit.org/show_bug.cgi?id=88682
http://www.w3.org/TR/domcore/#mutation-algorithms

After discussing this with some other contributors, there were questions
about why we're enforcing the order of the document's child nodes,
specifically since inserting a doctype node doesn't actually change the
document's doctype, so this situation is very unlikely (possibly
nonexistent) in the wild. Not implementing this keeps the code simpler for
a case that developers likely never see.
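
For reference, the ordering case under discussion looks like this hypothetical browser-console sketch (per the DOM4 mutation algorithms, the insertion is expected to throw):

```js
const impl = document.implementation;
const doc = impl.createHTMLDocument('');
doc.removeChild(doc.doctype); // drop the existing doctype first

// Appending a doctype when the document already has an element child
// would place it *after* the element, which the pre-insertion checks reject:
try {
  doc.appendChild(impl.createDocumentType('html', '', ''));
} catch (e) {
  console.log(e.name); // "HierarchyRequestError"
}
```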

Can we leave the behavior when your document is out of order unspecified?

- Elliott