RE: [gamepad] New feature proposals: pose, touchpad, vibration

2016-04-25 Thread Domenic Denicola
From: Brandon Jones [mailto:bajo...@google.com]

>   readonly attribute Float32Array? position;

Just as a quick API surface comment, Float32Array? is not a very good way of 
representing a vector (or quaternion). You want either separate x/y/z/w 
properties, or to use a DOMPoint(ReadOnly).

Web VR seems to have changed from DOMPointReadOnly to Float32Array? for some 
reason. Maybe they can explain? It seems like a big downgrade, especially since 
Float32Arrays are mutable.
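To illustrate the mutability concern, here's a minimal sketch (plain JS, not the real Gamepad or WebVR interfaces; `pose` and `point` are made-up names, and a frozen object stands in for the browser-only DOMPointReadOnly):

```javascript
// A Float32Array-valued position lets any consumer silently mutate
// what reads as sensor output:
const pose = { position: new Float32Array([1, 2, 3]) };
pose.position[0] = 999;
console.log(pose.position[0]); // 999

// An immutable point type rejects such writes:
const point = Object.freeze({ x: 1, y: 2, z: 3 });
try { point.x = 999; } catch (e) { /* throws in strict mode */ }
console.log(point.x); // 1
```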

> First, I'm proposing that we add an optional "pose" object to the Gamepad 
> interface that would expose all the information necessary to track a 
> controller with 3-degree-of-freedom or 6-degree-of-freedom tracking 
> capabilities. The primary motivator for supporting this information is to 
> allow devices like HTC Vive controllers or Oculus Touch to be exposed, but 
> the same structure could also expose motion and orientation data for devices 
> such as Wii, PS3, and PS4 controllers, as well as anything else that had a 
> gyroscope and/or accelerometer.
>
> ...
>
> The proposed pose interface largely mirrors the 
> http://mozvr.github.io/webvr-spec/#interface-vrpose interface from the WebVR 
> spec, but drops the timestamp since that's already provided on the Gamepad 
> interface itself. The VRPose interface could conceivably be used directly, 
> but I'm assuming that we'd rather not have a dependency on a separate 
> work-in-progress spec.

It seems to me like the Web VR spec is very specifically designed to support 
these use cases. Why are you interested in creating a competing extension to 
the Gamepad API, instead of working with the Web VR folks on their interface? 
We certainly shouldn't have both on the platform, so you need to either talk to 
them about moving portions of their spec into the Gamepad API, or go help them 
work on their spec.

In practice the only non-political difference will be whether it's on 
navigator.getGamepads() or navigator.getVRDisplays(). I don't have enough 
game/VR-developer experience to tell which of these would be more intuitive for 
developers.

Maybe you are already collaborating, judging by 
https://github.com/MozVR/webvr-spec/issues/31. It's not really clear.


RE: [Custom Elements] They are globals.

2016-04-11 Thread Domenic Denicola
I think you are being misled by a superficial similarity with React's JSX.

JSX's `<Foo/>` desugars to `React.createElement(Foo)`, which creates an 
element with some of its behavior derived from the `Foo` class, found in 
JavaScript lexical scope. The `Foo` token has no impact on the DOM tree.

Custom elements' `<x-foo>` is completely unlike that. In that case, `x-foo` is 
a tag name, and a full participant in the DOM tree structure. It affects CSS 
selector matching, APIs like querySelector and getElementsByTagName, and more. 
It's not just a div.

As Ryosuke notes, it's very hard to imagine how "scoped tag names" would work. 
Both for implementation reasons, like the HTMLElement constructor, but also for 
cases like CSS or the DOM APIs, which operate fundamentally on a global mapping.



RE: [Web Components] Editor for Custom Elements

2016-04-07 Thread Domenic Denicola
From: Philippe Le Hegaret [mailto:p...@w3.org] 

> But I hope you realize that coming in the W3C community, working with them 
> for while, and then take things away to continue the work elsewhere is 
> received as not working in good faith with the W3C community. This is not a 
> judgment of whether it was the right technical decision to do or not but 
> rather a pure social judgment. People in W3C working groups don't expect to 
> be told to go somewhere else after they contributed for a while. Now, if 
> you're interested in figuring out a way to solve this, I'm sure plenty of 
> folks in the W3C community, myself included, would be interested in finding a 
> way.

Yeah, I agree it is socially awkward that this work on a monkeypatch spec was 
started outside the standards community whose specs it was monkeypatching. That 
was in fact one of the original impetuses for Anne's famous "Monkey patch" 
post. [1] Given that as a starting point for this effort, this move was an 
inevitable outcome---as we've already seen with `<template>`, the first 
successful web components spec.

Setting aside the social judgement, from a technical point of view the path is 
clear. So although I regret the social downsides, it's unavoidable that if we 
want to as a larger community produce technically excellent specifications, we 
need to be able to accept this social issue and continue with the work of 
making the web platform better, without territorial concerns.

[1]: https://annevankesteren.nl/2014/02/monkey-patch


RE: [Web Components] Editor for Custom Elements

2016-04-06 Thread Domenic Denicola
From: Léonie Watson [mailto:t...@tink.uk]

> Domenic Denicola briefly stepped into the role, but regretfully he has since
> declined to work within the W3C community [2].

That is not at all an accurate description of what has happened. I've very much 
enjoyed working with the W3C community on the w3c/webcomponents issue tracker, 
and think the collaboration there has been a success. It was a very helpful 
"incubation phase" for custom elements.

Now that the incubation is largely complete, they can graduate to the HTML and 
DOM Standards, as was done with `<template>` before. 

I intend to continue editing custom elements, in the upstream standards that 
the concepts patch. The first half of this effort is 
https://github.com/whatwg/dom/pull/204, and the second half will follow later 
today on https://github.com/whatwg/html. I hope I will continue to be welcome 
in the W3C community, and especially the issue tracker at 
https://github.com/w3c/webcomponents/issues, where several remaining ideas are 
still being incubated.



RE: Telecon / meeting on first week of April for Web Components

2016-04-05 Thread Domenic Denicola
From: Jan Miksovsky [mailto:jan@component.kitchen] 

> As a reminder: the proposed agenda for the meeting is to go through the 
> “needs consensus” items posted at 
> https://github.com/w3c/webcomponents/issues?q=is:issue+is:open+label:%22needs+consensus%22.
>  It sounds like it would be good to resolve pending Shadow DOM issues before 
> tackling Custom Elements issues.

I spent some time organizing these "needs consensus" issues into a 
category/priority list. Let's try to go in this order (which interleaves SD and 
CE):

## Definitely needs discussion

* [SD] https://github.com/w3c/webcomponents/issues/308: should we use `display: 
contents`
* [CE] https://github.com/w3c/webcomponents/issues/417: caching vs. late 
binding of lifecycle callbacks

## Discussion would be good; still contentious/tricky

* [SD] https://github.com/w3c/webcomponents/issues/477: 
`document.currentScript` censoring
* [SD] https://github.com/w3c/webcomponents/issues/355: use CSS containment 
features by default
* [CE] https://github.com/w3c/webcomponents/issues/186: integrating callback 
invocation with IDL and editing operations

## New feature proposals

* [CE] https://github.com/w3c/webcomponents/issues/468: provide a mechanism for 
adding default/"UA" styles to a custom element
* [SD] https://github.com/w3c/webcomponents/issues/288: `slotchange` event

## Discussion would be good, but there's likely a default consensus based on 
GitHub discussions so far

* [SD] https://github.com/w3c/webcomponents/issues/59: parse `` like 
``
* [Both] https://github.com/w3c/webcomponents/issues/449: restrict to secure 
contexts
* [CE] https://github.com/w3c/webcomponents/issues/474: define order for 
attributeChangedCallback invocations

## Bikeshedding; probably does not need telecon time

* [SD] https://github.com/w3c/webcomponents/issues/451: rename getAssignedNodes 
and assignedSlot
* [CE] https://github.com/w3c/webcomponents/issues/434: rename "custom tag" and 
"type extension"

See you all in a few!


RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Gomer Thomas [mailto:go...@gomert-consulting.com]


>   [GT] It would be good to say this in the specification, and 
> reference
> some sample source APIs. (This is an example of what I meant when I said it
> is very difficult to read the specification unless one already knows how it is
> supposed to work.)

Hmm, I think that is pretty clear in https://streams.spec.whatwg.org/#intro. Do 
you have any ideas on how to make it clearer?

>   [GT] I did follow the link before I sent in my questions. In 
> section 2.5 it
> says "The queuing strategy assigns a size to each chunk, and compares the
> total size of all chunks in the queue to a specified number, known as the high
> water mark. The resulting difference, high water mark minus total size, is
> used to determine the desired size to fill the stream’s queue." It appears
> that this is incorrect. It does not seem to jibe with the default value and 
> the
> examples. As far as I can tell from the default value and the examples, the
> high water mark is not the total size of all chunks in the queue. It is the
> number of chunks in the queue.

It is both, because in these cases "size" is measured to be 1 for all chunks by 
default. If you supply a different definition of size, by passing a size() 
method, as Fetch implementations do, then you will get a difference.
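That difference is directly observable via `desiredSize`, which reports high water mark minus total queued size. A sketch (my own variable names; runnable where the web streams classes are global, e.g. Node 18+):

```javascript
// With the default size of 1 per chunk, the high water mark counts
// chunks; with a custom size(), it counts whatever size() measures.
const counts = [];
new ReadableStream(
  {
    start(controller) {
      counts.push(controller.desiredSize); // 3: queue empty
      controller.enqueue("a");
      counts.push(controller.desiredSize); // 2: default chunk size is 1
    },
  },
  { highWaterMark: 3 }
);

const bytes = [];
new ReadableStream(
  {
    start(controller) {
      controller.enqueue(new Uint8Array(10));
      bytes.push(controller.desiredSize); // 1014: 1024 - 10
    },
  },
  { highWaterMark: 1024, size: (chunk) => chunk.byteLength }
);

console.log(counts, bytes); // [ 3, 2 ] [ 1014 ]
```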

>[GT] My original question was directed at how an application can issue 
> an
> XMLHttpRequest() call and retrieve the results piecewise as they arrive,
> rather than waiting for the entire response to arrive. It looks like Streams
> might meet this need, but it would take quite a lot of study to figure out how
> to make this solution work, and the actual code would be pretty complex. I
> would also not be able to use this approach as a mature technology in a
> cross-browser environment for quite a while -- years? I think we will need to
> implement a non-standard solution based on WebSocket messages for now.
> We can then revisit the issue later. Thanks again for your help.

Well, you can be the judge of how complex. 
https://fetch.spec.whatwg.org/#fetch-api, 
https://googlechrome.github.io/samples/fetch-api/fetch-response-stream.html, 
and https://jakearchibald.com/2016/streams-ftw/ can give you some more help and 
examples.

I agree that it might be a while for this to arrive cross-browser. I know it's 
in active development in WebKit, and Mozilla was hoping to begin work soon, but 
indeed for today's apps you're probably better off with a custom solution based 
on web sockets, if you control the server as well as the client.
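For reference, the Fetch + Streams pattern for consuming a response piecewise looks roughly like this. To stay self-contained the Response below wraps a local stream rather than a network fetch, and `readPiecewise` is my own name; in real use you'd start from `const response = await fetch(url)`:

```javascript
const enc = new TextEncoder();
const body = new ReadableStream({
  start(controller) {
    controller.enqueue(enc.encode("chunk one, "));
    controller.enqueue(enc.encode("chunk two"));
    controller.close();
  },
});

async function readPiecewise(response) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let received = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    // Each chunk can be processed here as it arrives, instead of
    // waiting for the whole response.
    received += decoder.decode(value, { stream: true });
  }
  return received;
}

const result = readPiecewise(new Response(body));
result.then((text) => console.log(text)); // chunk one, chunk two
```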


RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Elliott Sprehn [mailto:espr...@chromium.org] 

> Can we get an idl definition too? You shouldn't need to read the algorithm to 
> know the return types.

Streams, like promises/maps/sets, are not specced or implemented using the IDL 
type system. (Regardless, Web IDL return types are only documentation.)



RE: [XHR]

2016-03-19 Thread Domenic Denicola
From: Gomer Thomas [mailto:go...@gomert-consulting.com] 

> I looked at the Streams specification, and it seems pretty immature and 
> underspecified. I’m not sure it is usable by someone who doesn’t already know 
> how it is supposed to work before reading the specification. How many of the 
> major web browsers are supporting it?

Thanks for the feedback. Streams is intended to be a lower-level primitive used 
by other specifications, primarily. By reading it you're supposed to learn how 
to implement your own streams from basic underlying source APIs.

> (1) The constructor of the ReadableStream object is “defined” by 
> Constructor (underlyingSource = { }, {size, highWaterMark = 1 } = { } )
> The “specification” states that the underlyingSource object “can” implement 
> various methods, but it does not say anything about how to create or identify 
> a particular underlyingSource

As you noticed, specific underlying sources are left to other places. Those 
could be other specs, like Fetch:

https://fetch.spec.whatwg.org/#concept-construct-readablestream

or they could be supplied by authors directly:

https://streams.spec.whatwg.org/#example-rs-push-no-backpressure

> In my case I want to receive a stream from a remote HTTP server. What do I 
> put in for the underlyingSource?

This is similar to asking the question "I want to create a promise for an 
animation. What do I put in the `new Promise(...)` constructor?" In other 
words, a ReadableStream is a data type that can stream anything, and the actual 
capability needs to be supplied by your code. Fetch supplies one underlying 
source, for HTTP responses.
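A minimal author-supplied underlying source might look like this (illustrative only; the pull-based source and all names are made up):

```javascript
// The stream type is generic; this pull() source is what gives it data.
const queue = ["alpha", "beta", "gamma"];

const readable = new ReadableStream({
  pull(controller) {
    // Called whenever the stream wants another chunk.
    if (queue.length === 0) controller.close();
    else controller.enqueue(queue.shift());
  },
});

const collected = (async () => {
  const reader = readable.getReader();
  const out = [];
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    out.push(value);
  }
  return out;
})();

collected.then((out) => console.log(out.join(", "))); // alpha, beta, gamma
```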

> Also, what does the “highWaterMark” parameter mean? The “specification” says 
> it is part of the queuing strategy object, but it does not say what it does.

Hmm, I think the links (if you follow them) are fairly clear. 
https://streams.spec.whatwg.org/#queuing-strategy. Do you have any suggestions 
on how to make it clearer?

> Is it the maximum number of bytes of unread data in the Stream? If so, it 
> should say so.

Close; it is the maximum number of bytes before a backpressure signal is sent. 
But, that is already exactly what the above link (which was found by clicking 
the links "queuing strategy" in the constructor definition) says, so I am not 
sure what you are asking for.

> If the “size” parameter is omitted, is the underlyingSource free to send 
> chunks of any size, including variable sizes?

Upon re-reading, I agree it's not 100% clear that the size() function maps to 
"The queuing strategy assigns a size to each chunk". However, the behavior of 
how the stream uses the size() function is defined in a lot of detail if you 
follow the spec. I agree maybe it could use some more non-normative notes 
explaining, and will work to add some, but in the end if you really want to 
understand what happens you need to either read the spec's algorithms or wait 
for someone to write an in-depth tutorial somewhere like MDN.

> (2) The ReadableStream class has a “getReader()” method, but the 
> specification gives no hint as to the data type that this method returns. I 
> suspect that it is an object of the ReadableStreamReader class, but if so it 
> would be nice if the “specification” said so.

This is actually normatively defined if you click the link in the step "Return 
AcquireReadableStreamReader(this)," whose first line tells you what it 
constructs (indeed, a ReadableStreamReader).



RE: Custom elements contentious bits

2015-12-28 Thread Domenic Denicola
From: Brian Kardell [mailto:bkard...@gmail.com] 

> I'd really like to understand where things really are with async/sync/almost 
> sync - does anyone have more notes or would they be willing to provide more 
> explanation? I've read the linked contentious bit and I'm still not sure 
> that I understand.

The question is essentially about at what time the custom element callbacks are 
being called. There are three possible times:

1. Synchronously, i.e. from within the middle of the DOM Standard's relevant 
algorithms/from within browser C++ code, we call out to JavaScript. There is 
then no queuing.
2. "Nanotask" timing, i.e. right before exiting C++ code and returning back to 
JS, we call out to queued JS.
3. Microtask timing, i.e. we put the callbacks on the standard microtask queue, 
letting any JS run to completion before running the microtasks.

The difference between 1 and 2 is very subtle and shows up only rarely, in 
complicated algorithms like cloning or editing. 3 is what you would more 
traditionally think of as "async".
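The three timings can be modeled with plain functions (an illustrative sketch, not the spec's machinery; `browserWork` stands in for a DOM algorithm that triggers a custom element callback):

```javascript
function run(timing) {
  const log = [];
  function browserWork(callback) {
    log.push("work:start");
    if (timing === "sync") callback();     // 1. mid-algorithm
    log.push("work:end");
    if (timing === "nanotask") callback(); // 2. just before returning to author JS
    if (timing === "microtask") Promise.resolve().then(callback); // 3. queued
  }
  browserWork(() => log.push("callback"));
  log.push("other JS"); // author code that runs after the DOM operation
  return log;
}

console.log(run("sync").join(", "));     // work:start, callback, work:end, other JS
console.log(run("nanotask").join(", ")); // work:start, work:end, callback, other JS
const micro = run("microtask");
queueMicrotask(() => console.log(micro.join(", ")));
// work:start, work:end, other JS, callback
```

Note how the nanotask callback still runs before any other author JS, which is why (2) is not really async in any meaningful sense.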

> I can say, for whatever it is worth, that given some significant time now 
> (we're a few years in) with web component polyfills at this point I do see 
> more clearly the desire for sync.  It's unintuitive at some level in a world 
> where we (including me) tend to really want to make things async but if I am 
> being completely honest, I've found an increasing number of times where this 
> is actually a little nightmarish to deal with and I feel almost like perhaps 
> this might be something of a "least worst" choice.  Perhaps there's some 
> really good ideas that I just haven't thought of/stumbled across yet but I 
> can tell you that for sure a whole lot of frameworks and even apps have their 
> own lifecycles in which they reason about things and the async nature makes 
> this very hard in those cases.

I'm not really sure what you're talking about here. Is it something besides 
Chrome's web components, perhaps some unrelated framework or polyfill? Chrome's 
web components have always used (2), which is not really async in any 
meaningful sense.

> Shadow DOM will definitely help address a whole lot of my cases because it'll 
> hide one end of things, but I can definitely see cases where even that 
> doesn't help if I need to actually coordinate.  I don't know if it is really 
> something to panic about but I feel like it's worth bringing up while there 
> are discussions going on.  The declarative nature and the seeming agreement 
> to adopt web-component _looking_ tags even in situations where they are not 
> exactly web components makes it easy enough to have mutually agreeing 
> "enough" implementations of things.  For example, I currently have a few 
> custom elements for which I have both a "native" definition and an angular 
> directive so that designers I know who write HTML and CSS can learn a 
> slightly improved vocabulary, say what they mean and quickly get a page setup 
> while app engineers can then simply make sure they wire up the right 
> implementation for the final product.  This wasn't my first choice:  I tried 
> going purely native but problems like the one described above created way too 
> much contention, more code, pitfalls and performance issues.  In the end it 
> was much simpler to have two for now and reap a significant portion of the 
> benefit if not the whole thing.

This is a bit off-topic but...

As you point out, the main benefit of custom elements is not that you can use 
custom tag names in your markup---that is easily accomplished today. The 
benefits are being able to use the DOM API on the resulting tree 
(document.querySelector("my-custom-el").myCustomMethod()) and, most 
importantly, lifecycle callbacks. If you want lifecycle callbacks with a 
third-party framework, you must let that framework control your page lifecycle. 
The things Angular does to ensure this are horrendous, e.g. monkeypatching 
every DOM method so that it can know when certain lifecycle events happen. 
Letting the browser simply call out to author-supplied callbacks when it's 
already doing work is going to be much nicer.

> Anywho... I'm really curious to understand where this stands atm or where 
> various companies disagree if they do.

Since nobody is proposing 3, where we stand now is that there seems to be a 
disagreement between 1 and 2. I don't really understand the advantages of 1 
over 2, but I do believe Apple still prefers 1. It would be great to hear more.


RE: [WebIDL] T[] migration

2015-12-18 Thread Domenic Denicola
From: Simon Pieters [mailto:sim...@opera.com] 

> Note that it requires liveness. Does that work for a frozen array?

Frozen array instances are frozen and cannot change. However, you can have the 
property that returns them start returning a new frozen array. The spec needs 
to track when these new instances are created.

> Maybe this particular API should be a method instead that returns a 
> sequence?

That does seem potentially better... Either could work, I think?
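A plain-JS model of the pattern (illustrative; the `Gamepads`/`connect` names are made up, and `Object.freeze` stands in for Web IDL's FrozenArray semantics):

```javascript
// Each snapshot is immutable; the accessor swaps in a new frozen
// instance when the underlying data changes.
class Gamepads {
  #current = Object.freeze([]);

  get devices() {
    return this.#current; // same frozen instance until something changes
  }

  connect(device) {
    // A change produces a *new* frozen array; old snapshots never mutate.
    this.#current = Object.freeze([...this.#current, device]);
  }
}

const g = new Gamepads();
const before = g.devices;
g.connect("pad-1");
console.log(before.length);              // 0: the old snapshot is unchanged
console.log(g.devices.length);           // 1
console.log(Object.isFrozen(g.devices)); // true
console.log(before === g.devices);       // false: identity changed
```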


RE: Meeting date, january

2015-12-01 Thread Domenic Denicola
From: Chaals McCathie Nevile [mailto:cha...@yandex-team.ru]

> Yes, likewise for me. Anne, Olli specifically called you out as someone we
> should ask. I am assuming most people are OK either way, having heard no
> loud screaming except for Elliot...

I would be pretty heartbroken if we met without Elliott. So let's please do the 
25th.


RE: Callback when an event handler has been added to a custom element

2015-11-07 Thread Domenic Denicola
From: Mitar [mailto:mmi...@gmail.com] 

> Hm, but message port API itself has such a side-effect:

I think that is just a very bad API. The platform is unfortunately full of bad 
APIs :). In particular, a difference between two different ways of adding event 
listeners is not something authors ever think about.

But regardless, if that's all you want, you could do it easily by declaring 
your custom element class like so:

class MyCustomElement extends HTMLElement {
  set onmessage(handler) {
    this.addEventListener("message", handler);
    this.start();
  }
  // ...
}

> For me this feels like leaking internal implementation details to the outside.

I strongly disagree. I would instead say it feels like getting rid of magical 
implicit I/O behavior, in favor of making the code say what it does and do what 
it says.



RE: Callback when an event handler has been added to a custom element

2015-11-06 Thread Domenic Denicola
In general I would be cautious about this kind of API. Events are not expected 
to have side effects, and adding listeners should not cause an (observable) 
action. See e.g. https://dom.spec.whatwg.org/#action-versus-occurance which 
tries to explain this in some detail. A better design in your case would 
probably be to have a specific method on the custom element which "starts" it 
(and thus starts its associated message port). 

As such I don't think we should add such a capability to the custom element API 
(or elsewhere in the platform). Although it is possible to use such callbacks 
for "good" (using them only to perform unobservable optimizations, like lazy 
initialization), it is way too easy to use them for "evil" (causing observable 
effects that would better be allocated to dedicated action-causing methods).
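The distinction can be sketched with plain EventTargets (illustrative only; `BadPort`/`GoodPort` are made-up stand-ins, not real platform classes):

```javascript
// Adding a listener should not cause an observable action.
class BadPort extends EventTarget {
  started = false;
  addEventListener(type, listener, options) {
    super.addEventListener(type, listener, options);
    if (type === "message") this.started = true; // observable side effect!
  }
}

// Better: a dedicated method says what it does and does what it says.
class GoodPort extends EventTarget {
  started = false;
  start() { this.started = true; }
}

const bad = new BadPort();
bad.addEventListener("message", () => {});
console.log(bad.started); // true: merely listening changed state

const good = new GoodPort();
good.addEventListener("message", () => {});
console.log(good.started); // false: nothing happens until good.start()
```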



RE: [web-animations] Should computedTiming return a live object?

2015-10-02 Thread Domenic Denicola
Anne's questions are interesting and worth answering. For example, which of 
these properties are typically held in memory already, versus which would 
require some kind of computation---the former usually are better as properties, 
and the latter as methods.

But setting aside the deeper issues he alludes to, my gut instinct is that 
option 1 is pretty reasonable. 



RE: Indexed DB + Promises

2015-09-29 Thread Domenic Denicola
This seems ... reasonable, and quite possibly the best we can do. It has 
several notable rough edges:

- The need to remember to use .promise, instead of just having functions whose 
return values you can await directly
- The two-stage error paths (exceptions + rejections), necessitating 
async/await to make it palatable
- The waitUntil/IIAFE pattern in the incrementSlowly example, instead of a more 
natural `const t = await openTransaction(); try { await useTransaction(t); } 
finally { t.close(); }` structure

I guess part of the question is, does this add enough value, or will authors 
still prefer wrapper libraries, which can afford to throw away backward 
compatibility in order to avoid these ergonomic problems? From that 
perspective, the addition of waitUntil or a similar primitive to allow better 
control over transaction lifecycle is crucial, since it will enable better 
wrapper libraries. But the .promise and .complete properties end up feeling 
like halfway measures, compared to the usability gains a wrapper can achieve. 
Maybe they are still worthwhile though, despite their flaws. You probably have 
a better sense of what authors have been asking for here than I do.

Minor usability suggestions:

- Maybe tx.waitUntil could return tx.complete? That would shorten the 
incrementSlowly example a bit.
- .promise is a pretty generic name. For operations, .ready or .complete or 
.done might be nicer. (Although nothing sticks out as perfect.) For cursors, 
I'd suggest something like .next or .nextReady or similar.

From: Joshua Bell [mailto:jsb...@google.com] 
Sent: Monday, September 28, 2015 13:44
To: public-webapps@w3.org
Subject: Indexed DB + Promises

One of the top requests[1] we've received for future iterations of Indexed DB 
is integration with ES Promises. While this initially seems straightforward 
("aren't requests just promises?") the devil is in the details - events vs. 
microtasks, exceptions vs. rejections, automatic commits, etc.

After some noodling and some very helpful initial feedback, I've got what I 
think is a minimal proposal for incrementally evolving (i.e. not replacing) the 
Indexed DB API with some promise-friendly affordances, written up here:

https://github.com/inexorabletash/indexeddb-promises

I'd appreciate feedback from the WebApps community either here or in that 
repo's issue tracker.

[1] https://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures



RE: Normative references to Workers.

2015-09-21 Thread Domenic Denicola
From: Xiaoqian Wu [mailto:xiaoq...@w3.org] 

> If the spec is still changing frequently, indeed it isn't a good idea to 
> publish another CR… but the WebApps WG needs to clearly tell the community 
> that the 2012 CR should be considered obsolete. 
>
> I’d suggest that we publish a WD for Workers, which adapts to the current 
> changes and revise the 2012 CR. The community is encouraged to refer to 
> either the WHATWG version or the new WD. 

The best way to accomplish this may be to do the same as has been done with 
other specs, and either redirect to the original source document (e.g. as has 
been done with Fullscreen) or replace the WD and CR with NOTEs directing 
visitors to the source document.


RE: PSA: publish WD of "WebIDL Level 1"

2015-08-31 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com]

> For our internal documentation purposes, I'd prefer having a permalink to a
> document that never changes.
> 
> Let's say we implement some feature based on Web IDL published as of
> today.  I'm going to refer that in my source code commit message.  Future
> readers of my code have no idea what I was implementing when they look at
> my commit message in five years if it refers to the living standard that
> changes over time.

I agree this is an important use case. Fortunately it is covered by commit 
snapshot URLs. E.g. 
https://rawgit.com/heycam/webidl/a90316a16f639aaa3531208fc0451a1b79a35a7d/index.html

This is better than expecting that the one time a snapshot happens ("v1"), it 
happens to align with what you're implementing at the time.

(BTW: for streams I have made this concept first-class; at 
https://streams.spec.whatwg.org/ you can click the "Snapshot as of this commit" 
link to get 
https://streams.spec.whatwg.org/commit-snapshots/5bd0ab1af09153fd72745516dadc27103e84043c/.)


RE: Custom elements Constructor-Dmitry baseline proposal

2015-08-21 Thread Domenic Denicola
From: Maciej Stachowiak [mailto:m...@apple.com]


> On Aug 17, 2015, at 3:19 PM, Domenic Denicola d...@domenic.me wrote:
>
>> - Parser-created custom elements and upgraded custom elements will
>> have their constructor and attributeChange callbacks called at a time when all
>> their children and attributes are already present, but
>
> Did you change that relative to the spec? Previously, parser-created custom
> elements would have their constructor called at a time when an
> unpredictable number of their children were present.

You're right that I didn't get this quite spelled out. Fortunately, given your 
below suggestion, I would have had to rewrite it anyway :).

>> - Elements created via new XCustomElement() or
>> document.createElement("x-custom-element") will have their constructor
>> run at a time when no children or attributes are present.
>
> If you really got it down to two states, I like reducing the number of world
> states, but I would prefer to put parser-created custom elements in this
> second bucket. They should have their constructor called while they have no
> children and attributes, instead of when they have all of them.

This seems reasonable and doable. Here's my try at making it work:

- Diff from previous revision: 
https://github.com/w3c/webcomponents/pull/297/files?short_path=876522d#diff-876522df93719f9da9871064880af5d2
- All together: 
https://github.com/domenic/webcomponents/blob/constructor-dmitry-revisions/proposals/Constructor-Dmitry.md

> If any of this happens, an upgradeable element will be stuck in the
> pre-upgrade state for a possibly unboundedly long amount of time.

This seems totally fine to me though. You're basically saying that you don't 
upgrade elements that are left open. Not a big deal. You have to actually have 
a full `<x-custom-foo>...</x-custom-foo>` before you get a proper custom 
element. Just `<x-custom-foo>...` alone does not cut it.

> Against this, we have the proposal to forcibly put elements in a naked state
> before calling the constructor for upgrade, then restore them. That has the
> known bad property of forcing iframe children to reload. I owe the group
> thoughts on whether we can avoid that problem.
>
> ...
>
> I will read the document more closely soon, to see if it seems like a
> reasonable baseline. I agree it would be fine to start with something good
> that uses the constructor and has two possible states for constructor
> invocation, then see if we can get it down to one.

Awesome! And yeah, the intent of this was exactly to provide such a baseline, 
and see if we could come up with a good solution for reducing the two states 
down to one using trickery such as that you mention. As a summary, after the 
above changes we have these two states:

- Upgraded custom elements, or elements created via other algorithms that are 
constructing a larger tree (such as cloneNode), will have their constructor and 
attributeChange callbacks called at a time when all their children and 
attributes are already present.
- Elements created via new XCustomElement() or 
document.createElement("x-custom-element") or via the parser will have their 
constructor run at a time when no children or attributes are present.

Note that we could move elements created via other algorithms that are 
constructing a larger tree into the latter bucket too, at the cost of 
synchronous script execution during all such algorithms. But the UA appetite 
for that was... mixed, and I don't think we should do so. It seems fine to 
leave their fate the same as upgrades, whichever way it shakes out.


Custom elements Constructor-Dmitry baseline proposal

2015-08-17 Thread Domenic Denicola
In 
https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Constructor-Dmitry.md
 I’ve written up in some detail what I consider to be the current 
state-of-the-art in custom elements proposals. That is, if we take the current 
spec, and modify it in ways that everyone agrees are good ideas, we end up with 
the Constructor-Dmitry proposal.

The changes, in descending order of importance, are:

- Don't generate new classes as return values from registerElement, i.e. don't 
treat the second argument as a dumb { prototype } property bag. (This is the 
original Dmitry proposal.)
- Allow the use of ES2015 constructors directly, instead of createdCallback. 
(This uses the constructor-call trick we discovered at the F2F.)
- Use symbols instead of strings for custom element callbacks.
- Fire attributeChanged and attached callbacks during parsing/upgrading

Those of you at the F2F may remember me saying something like "If only we knew 
about the constructor call trick before this meeting, I think we would have had 
consensus!" This document outlines what I think the consensus would have looked 
like, perhaps modulo some quibbling about replacing or supplementing 
attached/detached with different callbacks.

So my main intent in writing this up is to provide a starting point that we can 
all use, to talk about potential modifications. In particular, at the F2F there 
was a lot of contention over the consistent world view issue, which is still 
present in the proposal:

- Parser-created custom elements and upgraded custom elements will have their 
constructor and attributeChange callbacks called at a time when all their 
children and attributes are already present, but
- Elements created via new XCustomElement() or 
document.createElement("x-custom-element") will have their constructor run at a 
time when no children or attributes are present.

If we still think that this is a showstopper to consensus (do we!?) then I hope 
we can use this proposal as a baseline from which to derive additive solutions. 
Alternately, maybe you all will read this proposal and be amazed at how great 
it is already, and agree we can move on, leaving the "consistent world view" 
issue aside as an edge case that shouldn't prevent cross-browser consensus on 
custom elements :)


RE: W3C's version of XMLHttpRequest should be abandoned

2015-08-06 Thread Domenic Denicola
From: Hallvord Reiar Michaelsen Steen [mailto:hst...@mozilla.com] 

> I still like the idea of having a stable spec documenting the interoperable 
> behaviour of XHR by a given point in time - but I haven't been able to 
> prioritise it and neither, apparently, have the other two editors.

Thankfully, such snapshots already exist :). See for example [1] whose date 
(2014-05-21) matches the date of [2] (2014-05-26). A version viewable 
in-browser is at [3], although eventually we might want to put in a modicum of 
work to host it on xhr.spec.whatwg.org and display an appropriate warning 
banner, similar to what Streams does with [4].

[1]: 
https://github.com/whatwg/xhr/blob/848b22e99f36cb4a4481b77c382a1fd484ddf737/Overview.html
[2]: https://dvcs.w3.org/hg/xhr/raw-file/default/Overview.html
[3]: 
https://rawgit.com/whatwg/xhr/848b22e99f36cb4a4481b77c382a1fd484ddf737/Overview.html
[4]: 
https://streams.spec.whatwg.org/commit-snapshots/db28a0dcbb81a9bb1c9642f25364f33dcae0bb49/



RE: Apple's updated feedback on Custom Elements and Shadow DOM

2015-07-21 Thread Domenic Denicola
From: Maciej Stachowiak [mailto:m...@apple.com] 

> Does that sound right to you?
>
> If so, it is not much more appealing than prototype swizzling to us, since 
> our biggest concern is allowing natural use of ES6 classes.

Got it, thanks. So it really does sound like it comes down to

class XFoo extends HTMLElement {
  constructor() {
    super();
    // init code here
  }
}

vs.

class XFoo extends HTMLElement {
  [Element.created]() {
    // init code here
  }
}

which I guess we covered in the past at 
https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0283.html as 
being a general instance of the inversion of control design pattern, which I 
still don't really understand Apple's objection to. I suppose we can leave that 
for tomorrow.



RE: Apple's updated feedback on Custom Elements and Shadow DOM

2015-07-20 Thread Domenic Denicola
Thanks very much for your feedback Maciej! I know we'll be talking a lot more 
tomorrow, but one point in particular confused me:

From: Maciej Stachowiak [mailto:m...@apple.com] 

> 4. Specifically, we don't really like the Optional Upgrades, Optional 
> Constructors proposal (seems like it's the worst of both worlds in terms of 
> complexity and weirdness) or the Parser-Created Classes proposal (not clear 
> how this even solves the problem).

Specifically with regard to the latter, what is unclear about how it solves the 
problem? It completely gets rid of upgrades, which I thought you would be in 
favor of.

The former is, as you noted, a compromise solution that brings in the best of 
both worlds (from some perspectives) and the worst of them (from others).


RE: The key custom elements question: custom constructors?

2015-07-17 Thread Domenic Denicola
From: Anne van Kesteren [mailto:ann...@annevk.nl] 

> // What about
> document.body.innerHTML = "[512 KiB of normal HTML] <x-foo></x-foo>";
> // ? does the HTML make it in, or does the operation fail atomically, or 
> // something else?
>
> It fails atomically, based on the definition of innerHTML.

What if that 512 KiB of HTML contains `<img src="foo.png">`? Following 
definitions, I assume we fire off the network request?

What if it contains an `<x-baz></x-baz>` where XBaz's constructor does 
`document.body.innerHTML = "<p>Hello</p>"`? Can the parsing algorithm deal with 
the (following the definitions, required) re-entrancy?



RE: alternate view on constructors for custom elements

2015-07-17 Thread Domenic Denicola
From: Travis Leithead [mailto:travis.leith...@microsoft.com] 

> Something magical happens here. The use of super() is supposed to call the 
> constructor of the HTMLElement class—but that’s not a normal JS class. It 
> doesn’t have a defined constructor() method [yet?].

Yep. We'd need to define one; it's absolutely required. ES2015 classes that 
don't call super() in their constructor simply aren't allowed, which means 
inheriting from a constructor that throws (like HTMLElement currently does) is 
impossible.

https://github.com/domenic/element-constructors is a start at this.
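
That constraint is easy to observe in any ES2015 engine; in this sketch a plain base class stands in for HTMLElement:

```javascript
// A derived-class constructor that never calls super() cannot produce an
// instance: touching `this` (or implicitly returning it) throws a
// ReferenceError at construction time.
class Base {}

class NoSuper extends Base {
  constructor() {
    this.x = 1; // no super() call before touching `this`
  }
}

let error;
try {
  new NoSuper();
} catch (e) {
  error = e;
}
console.log(error instanceof ReferenceError); // true
```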

> I’m trying to rationalize the custom elements previous design with the use of 
> constructors. Basically, I think it should still be a two-stage creation 
> model:
> 1. Native [internal] element is created with appropriate tag name, 
> attributes, etc.
> 2. JS constructor is called and provided the instance (as ‘this’)
>
> #1 is triggered by the parser, or by a native constructor function. That 
> constructor function could either be provided separately like it is returned 
> from registerElement today, or in some other way (replacing the original 
> constructor?). Since replacing the original constructor sounds weird and 
> probably violates a bunch of JS invariants, I’ll assume sticking with the 
> original model. 
>
> This makes it much safer for implementations, since the native custom element 
> can always be safely created first, before running JS code. It also means 
> there’s no magic super() at work—which seems to leave too much control up to 
> author code to get right.

Ah! This is a big mistake!! You simply cannot do this split with ES2015 classes.

ES2015 classes are predicated on the idea that construction is an atomic 
operation, consisting of both allocation and initialization in one step. As 
such, class constructors cannot be called, only `new`ed. You cannot apply them 
with an already-created `this`, as if they were methods. This is fundamental to 
the class design; any other model needs to fall back to simply functions, not 
classes.
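
A quick demonstration of that atomicity, again with a plain class standing in for HTMLElement:

```javascript
// Class constructors reject two-stage use: you cannot allocate an object
// first and then "initialize" it by calling the constructor as a function.
class El {
  constructor() {
    this.ready = true;
  }
}

const preallocated = Object.create(El.prototype);
let threw = false;
try {
  El.call(preallocated); // TypeError: class constructor cannot be invoked without `new`
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
console.log(preallocated.ready); // undefined — initialization never ran
```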

Indeed, the currently-specced design had a two-stage model---allocation/base 
initialization by the UA-generated constructor returned by 
document.registerElement, and author-controlled initialization by 
createdCallback(). This separation, as you point out, makes things safer and 
easier to specify. And it is absolutely impossible to achieve if you insist on 
custom constructors, because in that case all allocation and initialization 
must happen together atomically. If we allow custom constructors, the 
allocation and initialization must both happen with a synchronous call to the 
custom constructor (which itself must call super()).

I take it when you said in [1]:

> I've discussed this issue with some of Edge's key parser developers. From a 
> technical ground, we do not have a problem with stopping the parser to 
> callout to author code in order to run a constructor, either during parsing 
> or cloning. For example, in parsing, I would expect that the callout happens 
> after initial instance creation, but before the target node is attached to 
> the DOM tree by the parser.

you were not aware of this? Maybe now you better understand my follow-up 
question in [2],

> Can you expand on this more? In particular I am confused on how initial 
> instance creation can happen without calling the constructor.

[1]: https://lists.w3.org/Archives/Public/public-webapps/2015JulSep/0161.html
[2]: https://lists.w3.org/Archives/Public/public-webapps/2015JulSep/0162.html

> Basic example of what I’m thinking:
>
> class XFooStartup extends HTMLElement {
>   constructor(val1, val2) {
>     this.prop = val1;
>     this.prop2 = val2;

This is invalid at runtime, since it fails to call super().

>   }
> }
> window.XFoo = document.registerElement(‘x-foo’, XFooStartup);

Why is XFoo different from XFooStartup? If I define a method in XFooStartup, 
does it exist in XFoo?

> // (1)
> var x1 = new XFooStartup(“first”, “second”);
> // (2)
> var x2 = new XFoo(“first”, “second”);
>
> Calling (1) does not create a custom element. Extending from HTMLElement is 
> not magical, it’s just a prototype inheritance, as can be done today. super() 
> would do whatever super() does when the super class has no defined method to 
> invoke.

(It would throw, in other words.)

> x1 is a regular JS object with a .prop and .prop2.

This can't be true. Either x1 doesn't exist, because you didn't call super() 
and so `new XFooStartup` threw. Or x1 called super(), and it is no longer an 
ordinary JS object, because by calling super() (which is basically shorthand 
for `this = new super()`) you have made sure that it is a true 
allocated-and-initialized-by-the-UA HTMLElement.

> Calling (2) runs the platform-provided constructor function which internally 
> inits a new HTMLElement in the C++ (or could be Element if in XML document?). 
> Then the platform immediately (synchronously) invokes the provided 
> constructor function as if:

Two new custom elements ideas

2015-07-17 Thread Domenic Denicola
Hi all,

Over the last few days I’ve worked on two new potential ideas for custom 
elements, hoping to shake things up with new possibilities. These are both 
largely geared around how we react to the key custom elements question [1].

https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Optional-Upgrades-Optional-Constructors.md
 : this proposal assumes we come out in favor of running author code during 
parsing/cloning/editing/etc. It allows component authors to choose between 
using constructors, thus disallowing their components to be used with 
server-side rendering/progressive enhancement, and using a 
createdCallback-style two-stage initialization, which will then allow 
progressive enhancement. It is meant explicitly as a compromise proposal, 
similar in spirit to the { mode: open/closed } proposal, since we know 
different parties have different values and the way forward may be to simply 
accommodate both value systems and let developers light a path.

https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Parser-Created-Constructors.md
 : this proposal assumes we do not achieve consensus to run author code during 
parsing/cloning/editing/etc. It recognizes that, if we disallow this, we cannot 
allow custom constructors, and then tries to make the best of that world. In 
particular, it is an alternative to the “Dmitry” proposal, designed to entirely 
avoid the dreaded proto-swizzling, while still having many of the same 
benefits. If you scroll to the bottom, you'll note how it also leaves the door 
open for future custom constructors, if we decide that it's something that we 
want in the future, but simply cannot afford to specify or implement right now 
due to how hard that is. In this sense it's meant somewhat as a bridging 
proposal, similar in spirit to the slots proposal, which falls short of the 
ideal imperative distribution API but will probably work for most developers 
anyway.

These are largely meant to get ideas moving, and to avoid polarizing the 
discussion into two camps. As I noted in [2], there are several degrees of 
freedom here; the key custom elements question is distinct from upgrades, which 
is distinct from ES2015 class syntax, which is distinct from constructor vs. 
created lifecycle hook, etc. The possibility space is pretty varied, and we 
have multiple tools in our toolbox to help arrive at a resolution that everyone 
finds agreeable.

Comments are of course welcome, and if you have time to read these before the 
F2F that would be really appreciated.

Thanks,
-Domenic

[1]: https://lists.w3.org/Archives/Public/public-webapps/2015JulSep/0159.html
[2]: https://lists.w3.org/Archives/Public/public-webapps/2015JulSep/0162.html



RE: alternate view on constructors for custom elements

2015-07-17 Thread Domenic Denicola
From: Travis Leithead [mailto:travis.leith...@microsoft.com]

> if super() is absolutely required for a constructor in a class
> that extends something, is there a requirement about when in the
> constructor method it be invoked? Must it always be the first call? Can it be
> later on, say at the end of the function?

It must be before `this` is referenced.
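
In other words, super() need not be textually first; it only has to run before `this` is touched. A sketch with plain classes:

```javascript
class Base {
  constructor() {
    this.base = true;
  }
}

class LateSuper extends Base {
  constructor() {
    // Work that does not touch `this` may happen before super()...
    const prepared = "computed before super()";
    super(); // ...which is legal, since `this` was never referenced above
    this.note = prepared;
  }
}

console.log(new LateSuper().base); // true
```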

> Basically, like this (reverting to non-class syntax):

Yeah, and at that point you might as well make it a method on the actual class, 
and give it a nice name, like, say, createdCallback ;). Or, make it a function 
that takes the element as a parameter, and keep it separate from the class, 
perhaps in a hooks object ;).



RE: [WebIDL] T[] migration

2015-07-16 Thread Domenic Denicola
So in terms of concrete updates, we'd need to fix

- https://html.spec.whatwg.org/
- https://w3c.github.io/webrtc-pc/
- http://dev.w3.org/csswg/cssom/ (sigh, still no https?)

The other documents mentioned are either obsolete or forks of (sections of) the 
first. Once the LS/EDs are fixed, then we can let The Process take over and 
worry about the copy-and-pasting/errata/etc., but getting the LS/EDs right is 
the important part for implementers.

Note that FrozenArray<T> is not a drop-in replacement for T[]. In particular, 
T[] is read-only by authors but mutable by UAs, whereas FrozenArray<T> is 
immutable. This might make things trickier.
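
The difference is observable with ordinary frozen arrays; this sketch uses Object.freeze to model FrozenArray<T> semantics:

```javascript
// A FrozenArray<T> reflects as a frozen JS array: neither authors *nor* the
// UA can mutate it in place. To "change" it, a whole new array must be minted.
const frozen = Object.freeze(["a", "b"]);

let threw = false;
try {
  frozen.push("c"); // TypeError: the array is not extensible
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true

// The only way to update: replace the array wholesale.
const next = Object.freeze([...frozen, "c"]);
console.log(next.length); // 3
```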

---

From: Travis Leithead [mailto:travis.leith...@microsoft.com] 
Sent: Thursday, July 16, 2015 11:45
To: public-webapps; Ian Hickson
Subject: [WebIDL] T[] migration

Hey folks, 

Now that WebIDL has added FrozenArray and dropped T[], it’s time to switch 
over! On the other hand, there are a number of specs that have already gone to 
Rec that used the old syntax.

Recommendations:
• HTML5
• Web Messaging

Other references:
• CSS OM
• Web Sockets
• WebRTC

Legacy/Deprecated references:
• TypedArrays (replaced by ES2015)
• Web Intents

Thoughts on what to do about this? Should we consider keeping T[] in WebIDL, 
but having it map to FrozenArray? Should we issue errata to those Recs?


RE: The key custom elements question: custom constructors?

2015-07-16 Thread Domenic Denicola
From: Anne van Kesteren [mailto:ann...@annevk.nl]

> I think the problem is that nobody has yet tried to figure out what invariants
> that would break and how we could solve them. I'm not too worried about
> the parser as it already has script synchronization, but cloneNode(), ranges,
> and editing, do seem problematic. If there is a clear processing model,
> Mozilla might be fine with running JavaScript during those operations.

Even if it can be specced/implemented, should it? I.e., why would this be OK 
where MutationEvents are not?



RE: The key custom elements question: custom constructors?

2015-07-16 Thread Domenic Denicola
From: Olli Pettay [mailto:o...@pettay.fi]

> That is too strongly said, at least if you refer to my email (where I expressed
> my opinions, but as usually, others from Mozilla may have different opinions).
> I said I'd prefer if we could avoid that [Running author code during
> cloneNode(true)].
>
> And my worry is largely in the spec level.
> It would be also a bit sad to reintroduce some of the issues MutationEvents
> have to the platform, now that we're finally getting rid of those events

Ah OK, thanks. Is there any way to get a consensus from Mozilla as a whole, 
preferably ahead of the F2F?



RE: The key custom elements question: custom constructors?

2015-07-16 Thread Domenic Denicola
From: Jonas Sicking [mailto:jo...@sicking.cc]

> Like Anne says, if it was better defined when the callbacks should happen,
> and that it was defined that they all happen after all internal datastructures
> had been updated, but before the API call returns, then that would have
> been much easier to implement.

Right, but that's not actually possible with custom constructors. In 
particular, you need to insert the elements into the tree *after* calling out 
to author code that constructs them. What you mention is more like the 
currently-specced custom elements lifecycle callback approach.

Or am I misunderstanding? 

> This is a problem inherent with synchronous callbacks and I can't think of a
> way to improve specifications or implementations to help here. It's entirely
> the responsibility of web authors to deal with this complexity.

Well, specifications could just not allow synchronous callbacks of this sort, 
which is kind of what we're discussing in this thread. That would help avoid 
the horrors of

class XFoo extends HTMLElement {
  constructor(stuff) {
    super();

    // Set up some default content that happens to use another custom element
    this.innerHTML = `<x-bar><p>${stuff}</p></x-bar>`;

    // All foos should also appear in a list off on the side!
    // Let's take care of that automatically for any consumers!
    document.querySelector("#list-of-foos").appendChild(this.cloneNode(true));
  }
}

which seems like a well-meaning thing that authors could do, without knowing 
what they've unleashed.


RE: The key custom elements question: custom constructors?

2015-07-16 Thread Domenic Denicola
I have a related question: what happens if the constructor throws? Example:

<!DOCTYPE html>
<script>
"use strict";

window.throwingMode = true;

class XFoo extends HTMLElement {
  constructor() {
    if (window.throwingMode) {
      throw new Error("uh-oh!");
    }
  }
}

document.registerElement("x-foo", XFoo);
</script>

<x-foo></x-foo>

<script>
"use strict";

// What does the DOM tree look like here? Is an <x-foo> present in some form?
// HTMLUnknownElement maybe? Just removed from existence?

// This will presumably throw:
document.body.innerHTML = "<x-foo></x-foo>";
// But will it wipe out body first?

// What about
document.body.innerHTML = "[512 KiB of normal HTML] <x-foo></x-foo>";
// ? does the HTML make it in, or does the operation fail atomically, or
// something else?


// Now let's try something weirder.
// Assume <x-bar> / XBar is a well-behaved custom element.

window.throwingMode = false;
const el = document.createElement("div");
el.innerHTML = "<p>a</p><x-bar></x-bar><x-foo>b</x-foo><p>b</p><x-bar></x-bar>";

window.throwingMode = true;
el.cloneNode(true); // this will throw, presumably...
// ... but does the XBar constructor run or not?
// ... if so, how many times?
</script>

> -Original Message-
> From: Domenic Denicola [mailto:d...@domenic.me]
> Sent: Wednesday, July 15, 2015 20:45
> To: public-webapps
> Subject: The key custom elements question: custom constructors?
>
> Hi all,
>
> Ahead of next week's F2F, I'm trying to pull together some clarifying and
> stage-setting materials, proposals, lists of open issues, etc. In the end, they
> all get blocked on one key question:
>
> **Is it OK to run author code during parsing/cloning/editing/printing (in
> Gecko)/etc.?**
>
> If we allow custom elements to have custom constructors, then those must
> run in order to create properly-allocated instances of those elements; there
> is simply no other way to create those objects. You can shuffle the timing
> around a bit: e.g., while cloning a tree, you could either run the constructors
> at the normal times, or try to do something like almost-synchronous
> constructors [1] where you run them after constructing a skeleton of the
> cloned tree, but before inserting them into the tree. But the fact remains
> that if custom elements have custom constructors, those custom
> constructors must run in the middle of all those operations.
>
> We've danced around this question many times. But I think we need a clear
> answer from the various implementers involved before we can continue. In
> particular, I'm not interested in whether the implementers think it's
> technically feasible. I'd like to know whether they think it's something we
> should standardize.
>
> I'm hoping we can settle this on-list over the next day or two so that we all
> come to the meeting with a clear starting point. Thanks very much, and
> looking forward to your replies,
>
> -Domenic
>
> [1]: https://lists.w3.org/Archives/Public/public-webapps/2014JanMar/0098.html



RE: URL bugs and next steps

2015-06-16 Thread Domenic Denicola
[+Sebastian]

From: Anne van Kesteren [mailto:ann...@annevk.nl]

> the state of the specification and testsuite.

Worth pointing out, since I guess it hasn't been publicized beyond IRC: as part 
of the jsdom project [1] (which is a hobby of mine), Sebastian has been working 
on a reference implementation of the URL Standard that follows the spec fairly 
exactly [2]. 

Notably, this has allowed us to recover coverage numbers [3] for the 
web-platform-tests test suite. Currently they are not so great, at ~70% of the 
reference implementation, and thus presumably ~70% of the specification, 
covered by tests. We plan to work on expanding this to 100%, and to contribute 
those tests back to web-platform-tests as we go.

This of course becomes even more powerful when combined with Sam's tooling for 
comparing cross-browser (and indeed cross-platform) results of the test suite.

From there of course we fall back to the usual pattern, of evaluating UA 
compatibility vs. the spec, and fixing instances where the spec is misaligned 
with reality. But with 100% coverage we should be in a better starting position.

[1]: https://github.com/tmpvar/jsdom
[2]: https://github.com/jsdom/whatwg-url
[3]: https://github.com/jsdom/whatwg-url/issues/8#issuecomment-109705181



RE: Writing spec algorithms in ES6?

2015-06-11 Thread Domenic Denicola
Some previous discussion: [1], especially [2].

In general I think this is a reasonable thing, but it requires a decent bit 
more infrastructure to do things “safely”. For example, consider the definition 
[3]. It's generic in its arguments, which I think is nice (but does not fit 
with Web IDL---whatever). However, it's susceptible to author code overriding 
Array.prototype.join. Similarly, [4] relies upon the author-modifiable Math.max 
and Math.min, not to mention the author-modifiable Math binding.

I go into these issues in a bit more detail in [5], although that's from an 
implementation perspective. Regardless, it at least opens with another good 
example.

If we wanted to make this feasible---and I think that would be a good thing---I 
think at a minimum we'd need:

- JS bindings for all the ES abstract operations, so you could do e.g. 
max(...) or DefinePropertyOrThrow(...) instead of Math.max(...) and 
Object.defineProperty(...)
- Some way of expressing private state in ES. (Maybe WeakMaps are sufficient, 
semantically, but syntactically they are awkward.)
- Some kind of Record type, as well as some kind of List type, corresponding to 
those in the ES spec. Trying to use just objects or arrays will fail in the 
face of modifications to Object.prototype and Array.prototype. (Boris goes over 
this in [2])
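
The Array.prototype.join hazard mentioned above is easy to reproduce in plain JavaScript; `serializeNaive` and `serializeRobust` are invented names for illustration:

```javascript
// A "spec algorithm" written naively in JS is at the mercy of author patches:
function serializeNaive(parts) {
  return parts.join(", ");
}

// A robust version captures the built-in up front, before author code runs:
const join = Function.prototype.call.bind(Array.prototype.join);
function serializeRobust(parts) {
  return join(parts, ", ");
}

// Author code overrides the prototype method...
const original = Array.prototype.join;
Array.prototype.join = function () { return "pwned"; };

console.log(serializeNaive(["a", "b"]));  // "pwned"
console.log(serializeRobust(["a", "b"])); // "a, b"

Array.prototype.join = original; // restore the built-in
```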

It's a decent bit of work...

[1]: 
https://esdiscuss.org/topic/for-of-loops-iteratorclose-and-the-rest-of-the-iterations-in-the-spec
[2]: 
https://esdiscuss.org/topic/for-of-loops-iteratorclose-and-the-rest-of-the-iterations-in-the-spec#content-26
[3]: http://dev.w3.org/csswg/css-color/#dom-rgbcolor-stringifiers
[4]: http://dev.w3.org/csswg/css-color/#dom-hslcolor-hslcolorrgb
[5]: 
https://docs.google.com/document/d/1AT5-T0aHGp7Lt29vPWFr2-qG8r3l9CByyvKwEuA8Ec0/edit#heading=h.9yixony1a18r


RE: [webcomponents] How about let's go with slots?

2015-05-19 Thread Domenic Denicola
From: Elliott Sprehn [mailto:espr...@chromium.org] 

> Given the widget ui-collapsible that expects a ui-collapsible-header in the 
> content model, with slots I can write:
>
> <ui-collapsible>
>   <my-header-v1 slot="ui-collapsible-header" ... />...
> </ui-collapsible>
>
> <ui-collapsible>
>   <my-header-v2 slot="ui-collapsible-header" ... />...
> </ui-collapsible>
>
> within the same application. It also means the library can ship with an 
> implementation of the header widget, but you can replace it with your own. 
> This is identical to the common usage today in polymer apps where you 
> annotate your own element with classes. There's no restriction on the type of 
> the input.

I see. Thanks for explaining.
 
I think this model you cite Polymer using is different from what HTML normally 
does, which is why it was confusing to me. In HTML the insertion point tags 
(e.g. <summary> or <li> or <option>) act as dumb containers. This was 
reinforced by the examples in the proposal, which use <div content-slot=""> 
with the <div> being a clear dumb container. You cannot replace them with your 
own choice of container and have things still work.

As such, I think it would make more sense to write

<ui-collapsible>
  <ui-collapsible-header>
    <my-header-v1></my-header-v1>
  </ui-collapsible-header>
</ui-collapsible>

<ui-collapsible>
  <ui-collapsible-header>
    <my-header-v2></my-header-v2>
  </ui-collapsible-header>
</ui-collapsible>

if we were trying to match HTML. This would be easier to refactor later, e.g. to

<ui-collapsible>
  <ui-collapsible-header>
    <my-icon></my-icon>
    Header for Foo
    <a href="https://example.com/help-for-foo">(?)</a>
  </ui-collapsible-header>
</ui-collapsible>

However, I understand how this departs from the Polymer/select= model now. 
Polymer/select= allows anything to stand in and be distributed to the 
appropriate destination location, whereas HTML only allows certain tag names. I 
am not sure which the original proposal from Jan, Ryosuke, and Ted *intended*, 
although I agree that as presented it leans toward Polymer/select=.


RE: [webcomponents] How about let's go with slots?

2015-05-19 Thread Domenic Denicola
From: Dimitri Glazkov [mailto:dglaz...@google.com] 

> Not sure what you mean by Polymer model.

I was referring to Elliott's "This is identical to the common usage today in 
polymer apps where you annotate your own element with classes."

> When we have custom elements, the assumption of dumb containers simply goes 
> out of the window.

I don't think it has to, as I showed in my message.



RE: [webcomponents] How about let's go with slots?

2015-05-18 Thread Domenic Denicola
From: Dimitri Glazkov [mailto:dglaz...@google.com] 

> What do you think, folks?

Was there a writeup that explained how slots did not have the same 
performance/timing problems as select=? I remember Alex and I were pretty 
convinced they did at the F2F, but I think you became convinced they did not 
... did anyone capture that?

My only other contribution is that I sincerely hope we can use tag names 
instead of the content-slot attribute, i.e. <dropdown> instead of <div 
content-slot="dropdown">. Although slots cannot fully emulate native elements 
in this manner (e.g. <select>/<optgroup>/<option>), they would at least get 
syntactically closer, and would in some cases match up (e.g. 
<details>/<summary>). I think it would be a shame to start proliferating markup 
in the <div content-slot="dropdown"> vein if we eventually want to get to a 
place where shadow DOM can be used to emulate native elements, which do not use 
this pattern.


RE: [webcomponents] How about let's go with slots?

2015-05-18 Thread Domenic Denicola
I was thinking opposed. I don’t see any reason to invent two ways to do the 
same thing.

If we do support content-slot then I think we should allow <details><div 
content-slot="summary">...</div>...</details> and a few others.



Re: [webcomponents] How about let's go with slots?

2015-05-18 Thread Domenic Denicola
In case it wasn't clear, named slots vs. tag names is purely a bikeshed color 
(but an important one, in the "syntax is UI" sense). None of the details of how 
the proposal works change at all.

If you already knew that but still prefer content-slot attributes, then I guess 
we just disagree. But it wasn't clear.


From: Elliott Sprehn
Sent: Monday, May 18, 21:03
Subject: Re: [webcomponents] How about let's go with slots?
To: Justin Fagnani
Cc: Philip Walton, Domenic Denicola, Daniel Freedman, Dimitri Glazkov, Scott 
Miles, Ryosuke Niwa, Edward O'Connor, Anne van Kesteren, Travis Leithead, 
Maciej Stachowiak, Arron Eicholz, Alex Russell, public-webapps

I'd like this API to stay simple for v1 and support only named slots and not 
tag names. I believe we can explain what details does with the imperative API 
in v2.

On Mon, May 18, 2015 at 5:11 PM, Justin Fagnani justinfagn...@google.com wrote:

On Mon, May 18, 2015 at 4:58 PM, Philip Walton phi...@philipwalton.com wrote:

Pardon my question if this has been discussed elsewhere, but it's not clear 
from my reading of the slots proposal whether they would be allowed to target 
elements that are not direct children of the component.

I believe with the `select` attribute this was implicitly required, because 
only compound selectors were supported (i.e. no child or descendant 
combinators) [1].

I think the actual issue is that you might have fights over who gets to 
redistribute an element. Given

<my-el-1>
  <my-el-2>
    <div content-slot="foo"></div>
  </my-el-2>
</my-el-1>

If both <my-el-1> and <my-el-2> have foo slots, who wins? What if the winner by 
whatever rules adds a clashing slot name in a future update?

I mentioned this in the Imperative API thread, but I think the least surprising 
way forward for distributing non-children is to allow nodes to cooperate on 
distribution, so an element could send its distributed nodes to an ancestor:  
https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0325.html



Would named slots be able to target elements farther down in the tree?


[1]  http://w3c.github.io/webcomponents/spec/shadow/#dfn-content-element-select





RE: Custom Elements: is=

2015-05-08 Thread Domenic Denicola
From: Travis Leithead [mailto:travis.leith...@microsoft.com] 

> It always seemed weird to me that 'prototype' of ElementRegistrationOptions 
> can inherit from anything (including null), and be completely disassociated 
> from the localName provided in 'extends'.

Yes, the current spec is completely borked when it comes to classes and how it 
just treats { prototype } as an options object. I think there is wide agreement 
to fix that.

The solution that maintains the least delta from the current spec is outlined 
in https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0230.html, 
coupled with the work at https://github.com/domenic/element-constructors (see 
especially the registry). You could imagine other solutions that allow 
author-supplied constructors (instead of just inheriting the default behavior 
from HTMLElement and delegating to this.createdCallback() or 
this[Element.create]()) but those require running such author-supplied 
constructors during parsing and during cloning, which has its own issues.



Re: Making ARIA and native HTML play better together

2015-05-07 Thread Domenic Denicola
From: Anne van Kesteren ann...@annevk.nl

> On Thu, May 7, 2015 at 9:02 AM, Steve Faulkner faulkner.st...@gmail.com 
> wrote:
>> Currently ARIA does not do this stuff AFAIK.
>
> Correct. ARIA only exposes strings to AT. We could maybe make it do more, 
> once we understand what more means, which is basically figuring out HTML as 
> Custom Elements...

These are my thoughts as well. The proposal seems nice as a convenient way to 
get a given bundle of behaviors. But we *really* need to stop considering these 
roles as atomic, and instead break them down into what they really mean.

In other words, I want to explain the button behavior as something like:

- Default-focusable
- Activatable with certain key commands
- Announced by AT as a button

and then I want to be able to apply any of these abilities (or others like 
them) to any given custom element. Once we have these lower-level primitives 
we'll be in a much better place.



RE: [components] Isolated Imports and Foreign Custom Elements

2015-05-01 Thread Domenic Denicola
> alert(weirdArray.__proto__ == localArray.__proto__)

This alerts false in IE, Firefox, and Chrome.



RE: Proposal for changes to manage Shadow DOM content distribution

2015-04-22 Thread Domenic Denicola
Between content-slot-specified slots, attribute-specified slots, element-named 
slots, and everything-else-slots, we're now in a weird place where we've 
reinvented a micro-language with some, but not all, of the power of CSS 
selectors. Is adding a new micro-language to the web platform worth helping 
implementers avoid the complexity of implementing CSS selector matching in this 
context?



RE: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Domenic Denicola
This distinction between user permission and general permission is key, I think.

For example, I could naively imagine something like the browser auto-granting 
permission if the requested remoteAddress is equal to the IP address of the 
origin executing the API. Possibly with a pre-flight request that checks e.g. 
/.well-known/tcp-udp-permission-port-remotePort on that origin for a header 
to ensure the server is cooperative. (But I am sure there are security people 
standing by to tell me how this is very naive...) The async permissions style 
is flexible enough to allow any such techniques to come into play.

-Original Message-
From: Nilsson, Claes1 [mailto:claes1.nils...@sonymobile.com] 
Sent: Wednesday, April 1, 2015 09:58
To: 'Anne van Kesteren'
Cc: public-sysa...@w3.org; public-webapps; Device APIs Working Group; Domenic 
Denicola; slightly...@chromium.org; yass...@gmail.com
Subject: RE: [W3C TCP and UDP Socket API]: Status and home for this 
specification

Hi Anne,

This is a misunderstanding that probably stems from my use of the word 
"permission", which people associate with user permission. User permissions 
are absolutely not enough to provide access to this API. However, work is 
ongoing in the Web App Sec WG that may provide basis for a security model for 
this API. Please read section 4, 
http://www.w3.org/2012/sysapps/tcp-udp-sockets/#security-and-privacy-considerations.
 

I am trying to determine whether a TCP and UDP Socket API can be standardized 
given the changed assumption, i.e. that there will be no W3C Web System 
Applications.

BR
  Claes


Claes Nilsson
Master Engineer - Web Research
Advanced Application Lab, Technology

Sony Mobile Communications
Tel: +46 70 55 66 878
claes1.nils...@sonymobile.com

sonymobile.com



 -Original Message-
 From: Anne van Kesteren [mailto:ann...@annevk.nl]
 Sent: den 1 april 2015 11:58
 To: Nilsson, Claes1
 Cc: public-sysa...@w3.org; public-webapps; Device APIs Working Group; 
 Domenic Denicola; slightly...@chromium.org; yass...@gmail.com
 Subject: Re: [W3C TCP and UDP Socket API]: Status and home for this 
 specification
 
 On Wed, Apr 1, 2015 at 11:22 AM, Nilsson, Claes1 
 claes1.nils...@sonymobile.com wrote:
  A webapp could for example request permission to create a TCP
 connection to a certain host.
 
 That does not seem like an acceptable solution. Deferring this to the 
 user puts the user at undue risk as they cannot reason about this 
 question without a detailed understanding of networking.
 
 The best path forward here would still be standardizing some kind of 
 public proxy protocol developers could employ:
 
   https://annevankesteren.nl/2015/03/public-internet-proxy
 
 
 --
 https://annevankesteren.nl/



RE: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Domenic Denicola
I think it's OK for different browsers to experiment with different 
non-interoperable conditions under which they fulfill or reject the permissions 
promise. That's already true for most permissions grants today.



RE: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Domenic Denicola
From: Boris Zbarsky [mailto:bzbar...@mit.edu]

 This particular example sets off alarm bells for me because of virtual hosting.

Eek! Yeah, OK, I think it's best I refrain from trying to come up with specific 
examples. Let's forget I said anything...

 As in, this seems like precisely the sort of thing that one browser might
 experiment with, another consider an XSS security bug, and then we have
 content that depends on a particular browser, no?

My argument is that it's not materially different from existing permissions 
APIs. Sometimes the promise is rejected, sometimes it isn't. (Note that either 
outcome could happen without the user ever seeing a prompt.) The code works in 
every browser---some just follow the denied code path, and some follow the 
accepted code path. That's fine: web pages already need to handle that.



RE: [W3C TCP and UDP Socket API]: Status and home for this specification

2015-04-01 Thread Domenic Denicola
From: Jonas Sicking [mailto:jo...@sicking.cc]

 I agree with Anne. What Domenic describes sounds like something similar to
 CORS. I.e. a network protocol which lets a server indicate that it trusts a 
 given
 party.

I think my point would have been stronger without the /.well-known protocol 
thingy. Removing that:

Do you think it's acceptable for browsers to experiment with e.g. auto-granting 
permission if the requested remoteAddress is equal to the IP address of the 
origin executing the API? Does that seem like current permission API conditions 
(i.e. not standardized), or more like CORS (standardized)?
 
 However, in my experience the use case for the TCPSocket and UDPSocket
 APIs is to connect to existing hardware and software systems. Like printers or
 mail servers. Server-side opt-in is generally not possible for them.

Right. My thrown-out-there idea was really just meant as an example of a 
potential experiment browsers could independently run on their own (like they 
do with other permissions today). It's not a proposal for the ultimate security 
model for this API.



RE: Minimum viable custom elements

2015-02-04 Thread Domenic Denicola
I hope others can address the question of why custom element callbacks are 
useful, and meet the bar of being a feature we should add to the web platform 
(with all the UA-coordination that requires). I just wanted to interject into 
this input discussion.

In IRC Anne and I were briefly discussing how type= is the is= of Web 
Applications 1.0. That is, <input type=date> is similar to <img is=x-gif>---it 
has a reasonable fallback behavior, but in reality it is a completely 
different control than the local name indicates.

For input type this is accomplished with an ever-growing base class with an 
internal mode switch that makes its various properties/methods meaningful, 
whereas for is= you get a tidier inheritance hierarchy with the applicable 
properties and methods confined to the specific element and not impacting all 
others with the same local name. I.e., is= is type= done right.

The fact that type= exists and doesn't fit with the is= paradigm is 
unfortunate. But I don't think we'd be better off trying to somehow generalize 
the type= paradigm, or pretend it is any good. What is our advice to authors, 
then---they should modify HTMLImageElement.prototype with x-gif-related 
properties and methods that only work if type=x-gif is present? That seems 
to be the road that this thread is heading down.

You could imagine plans to try to rationalize type= on top of is= (or, 
perhaps better, deprecate input type= in favor of control is= or 
something). But those seem pretty speculative.
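For illustration, the structural difference between the two paradigms can be sketched outside the DOM. The class names below (ModeSwitchInput, BasicInput, DateInput) are made up; they stand in for HTMLInputElement's internal mode switch versus a per-name subclass.

```javascript
// The type= paradigm: one ever-growing class with an internal mode
// switch; mode-specific members sit on every instance regardless.
class ModeSwitchInput {
  constructor(type) { this.type = type; }
  get valueAsDate() {
    // Only meaningful in one mode; dead weight in all the others.
    if (this.type !== 'date') return null;
    return new Date(0);
  }
}

// The is= paradigm: a tidier hierarchy where date-specific members
// live only on the date-specific subclass.
class BasicInput {}
class DateInput extends BasicInput {
  get valueAsDate() { return new Date(0); }
}

const text = new ModeSwitchInput('text');
const date = new DateInput();
```

In the mode-switch shape, every input carries valueAsDate whether or not it means anything; in the subclass shape, only DateInput does.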



RE: Custom element design with ES6 classes and Element constructors

2015-01-27 Thread Domenic Denicola
From: Elliott Sprehn [mailto:espr...@google.com] 

 Perhaps, but that logically boils down to "never use string properties ever 
 just in case some library conflicts with a different meaning." We'd have 
 $[jQuery.find](...) and so on for plugins.

Nah, it boils down to "don't use string properties for meta-programming hooks."

 Or more concretely isn't the new DOM Element#find() method going to conflict 
 with my polymer-database's find() method? So why not make that 
 [Element.find] so polymer never conflicts?

You can overwrite methods on your prototype with no problem. The issue comes 
when the browser makes assumptions about what user-supplied methods (i.e., 
metaprogramming hooks) are supposed to behave like. 

More concretely, if there was browser code that *called* 
arbitraryElement.find(), then we'd be in a lot more trouble. But as-is we're 
not.

(BTW find() was renamed to query(), IIRC because of conflicts with 
HTMLSelectElement or something?)


RE: Better focus support for Shadow DOM

2015-01-21 Thread Domenic Denicola
Thanks Takayoshi! This new version looks great to me. It would allow people to 
create custom elements with the same focus capabilities as native elements, 
including both the simple cases (like <custom-a>) and the more complicated ones 
with a shadow DOM (like <custom-input type=date>). Very exciting stuff!

I hope others are as enthused as I am :)

From: Takayoshi Kochi (河内 隆仁) [mailto:ko...@google.com]
Sent: Wednesday, January 21, 2015 02:41
To: public-webapps
Subject: Re: Better focus support for Shadow DOM

Hi,

After a conversation with Domenic Denicola, I changed the title and the whole 
story of solving this issue, though the result is not very different. Instead 
of adding a delegatesFocus property, we added isTabStop() to expose the "tab 
focusable" flag explained in the HTML5 spec.

https://docs.google.com/document/d/1k93Ez6yNSyWQDtGjdJJqTBPmljk9l2WS3JTe5OHHB50/edit?usp=sharing

Any comments/suggestions welcome.

On Wed, Jan 14, 2015 at 2:27 PM, Takayoshi Kochi (河内 隆仁) 
ko...@google.commailto:ko...@google.com wrote:
Hi,

For shadow DOMs which have multiple focusable fields under the host,
the current behavior of tab navigation order gets somewhat weird
when you want to specify tabindex explicitly.

This is the doc to introduce a new attribute delegatesFocus to resolve the 
issue.
https://docs.google.com/document/d/1k93Ez6yNSyWQDtGjdJJqTBPmljk9l2WS3JTe5OHHB50/edit?usp=sharing

Any comments are welcome!
--
Takayoshi Kochi



--
Takayoshi Kochi


RE: Minimum viable custom elements

2015-01-16 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com] 

 However, nobody has suggested a design that satisfies both of our 
 requirements: using ES6 constructor for element initialization

Hi Ryosuke,

Could you say more about why this is a requirement? In particular, why you 
require that developers type

```js
class MyElement extends HTMLElement {
  constructor(htmlElementConstructorOptions, ...extraArgs) {
    super(htmlElementConstructorOptions);
    // initialization code here, potentially using extraArgs for non-parser cases
  }
}
```

instead of them typing

```js
class MyElement extends HTMLElement {
  [Element.create](...extraArgs) {
    // initialization code here, potentially using extraArgs for non-parser cases
  }
}
```

? This kind of inversion-of-control pattern is, as I've tried to point out, 
fairly common in UI frameworks and in programming in general. "Don't call me, 
I'll call you" is the catchphrase, explained in 
https://en.wikipedia.org/wiki/Hollywood_principle. As the article says:

 It is a useful paradigm that assists in the development of code with high 
 cohesion and low coupling that is easier to debug, maintain and test. ... 
 Most beginners are first introduced to programming from a diametrically 
 opposed viewpoint. ... [But] It would be much more elegant if the programmer 
 could concentrate on the application [...] and leave the parts common to 
 every application to something else.

If this is a formal objection-level complaint, I'm motivated to understand why 
you guys don't think this software engineering best-practice applies to custom 
elements. It seems like a textbook example of where inversion-of-control 
applies.

(BTW I really recommend that Wikipedia article as reading for anyone 
interested; it ties together in one place a lot of wisdom about object-oriented 
design that I've had to absorb in bits and pieces throughout the years. I wish 
I'd seen it earlier.)



RE: Defining a constructor for Element and friends

2015-01-16 Thread Domenic Denicola
From: Anne van Kesteren [mailto:ann...@annevk.nl] 

 How can that work if the custom element constructor needs to look in the 
 registry to find its name? Pick a name at random?

Nah, it just automatically starts acting like HTMLQuoteElement: the localName 
option becomes required. See 

https://github.com/domenic/element-constructors/blob/5e6e00bb2bb525f04c8c796e467f103c8aa0bcf7/element-constructors.js#L229-L233

https://github.com/domenic/element-constructors/blob/5e6e00bb2bb525f04c8c796e467f103c8aa0bcf7/element-constructors.js#L51-L54



RE: Custom element design with ES6 classes and Element constructors

2015-01-15 Thread Domenic Denicola
Just to clarify, this argument for symbols is not dependent on modules. 
Restated, the comparison is between:

```js
class MyButton extends HTMLElement {
  createdCallback() {}
}
```

vs.

```js
class MyButton extends HTMLElement {
  [Element.create]() {}
}
```

 We're already doing some crude namespacing with *Callback. I'd expect that as 
 soon as the first iteration of Custom Elements is out, people will copy the 
 *Callback style in user code.

This is a powerful point that I definitely agree with. I would not be terribly 
surprised to find some library on the web already that asks you to create 
custom elements but encourages you to supply a few more library-specific hooks 
with -Callback suffixes.



RE: Minimum viable custom elements

2015-01-15 Thread Domenic Denicola
From: Dimitri Glazkov [mailto:dglaz...@google.com] 

 Why is "Not having identity at creation-time is currently a mismatch with the 
 rest of the platform" a problem? Why does it all have to be consistent across 
 the board? Are there any other platform objects that are created by the HTML 
 parser or a similar device?

In IRC we've been discussing how I don't think there's actually any 
(observable) mismatch with the rest of the platform if we are careful.

In particular, if we make it a rule that when parsing a given fragment, 
createdCallbacks run in registration order (not, say, in parse order), I think 
it works. Because then the built-in elements, which are (conceptually) 
registered first, go through the __proto__-munge + createdCallback() process 
first. Then, when the createdCallback() for any user-defined elements runs, all 
existing elements look to be already upgraded.

If we want createdCallbacks to run in parse order (which does seem preferable, 
although I'd be curious to hear concrete arguments why), then the only 
deviation required between custom and built-in elements is privileging the 
native elements with some ability to jump to the front of the queue. Which seems 
pretty reasonable given that we already have lots of cases where custom 
elements do things sorta-async and native elements need to do things more 
synchronously. 
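A toy simulation of the ordering argument (no real DOM; the element names are illustrative): callbacks are queued per parsed element but run in registration order, so the conceptually earlier-registered built-in finishes upgrading before any user-defined callback observes it.

```javascript
// Built-ins are (conceptually) registered before user elements.
const registrationOrder = ['built-in-p', 'x-user'];

// Parse order happens to put the custom element first.
const parsed = [
  { name: 'x-user' },
  { name: 'built-in-p' },
];

const upgraded = [];
for (const name of registrationOrder) {
  for (const el of parsed) {
    if (el.name === name) {
      el.upgraded = true;      // __proto__-munge + createdCallback()
      upgraded.push(name);
    }
  }
}
// By the time x-user's callback runs, its built-in sibling is already
// upgraded, so no mismatch is observable from user code.
```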



RE: Minimum viable custom elements

2015-01-15 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com] 

 Unfortunately for developers, native syntax for inheritance in Stink 2.0 
 cannot be used to subclass views in Odour.

The native syntax for inheritance can definitely be used! You just can't 
override the constructor, since constructing a view is a very delicate 
operation. Instead, you can provide some code that runs during the constructor, 
by overriding a specific method.

I wouldn't call this incompatible, any more than saying that ASP.NET Page 
classes are incompatible with C# classes:

http://www.4guysfromrolla.com/articles/041305-1.aspx

(you define Page_Load instead of the constructor). I imagine I could find many 
other frameworks (perhaps ones written for Stink developers?) where when you 
use a framework and derive from a framework-provided base class, you can't 
override the framework's methods or constructors directly, but instead have to 
override provided hooks.

 If ES6 classes' constructor doesn't fundamentally work with custom elements, 
 then why don't we change the design of ES6 classes?

We would essentially be saying that the design of ES6 classes should be built 
to support one particular construction pattern (two-stage construction), over 
any others. Why not design it to support three-stage construction? There are 
surely libraries that have more than two phases of boot-up.

One way to think about it is that there is a base constructor for all classes 
(corresponding, in spec language, to the definition of [[Construct]]), that in 
the newest TC39 design does the simplest thing possible. The DOM needs a more 
complicated setup. Shouldn't that be the DOM's responsibility to encode into 
*its* base constructor?

 Saying that TC39 doesn't have time is like saying we can't make a spec 
 change because WG has already decided to move the spec into REC by the end of 
 the year in W3C.

That, I certainly agree with. Any process reasons brought up are bogus. But the 
technical arguments for the simplest base constructor possible in the 
language are pretty sound. They're in fact reasons that motivated TC39 to go 
into a last-minute redesign marathon over the holiday weeks, to get *away* from 
the more complicated base constructor that contained a two-stage 
allocation/initialization split.



RE: Minimum viable custom elements

2015-01-15 Thread Domenic Denicola
Steve's concerns are best illustrated with a more complicated element like 
button. He did a great pull request to the custom elements spec that 
contrasts all the work you have to do with <taco-button> vs. <button 
is="tequila-button">:

https://w3c.github.io/webcomponents/spec/custom/#custom-tag-example vs. 
https://w3c.github.io/webcomponents/spec/custom/#type-extension-example

The summary is that you *can* duplicate *some* of the semantics and 
accessibility properties of a built-in element when doing custom tags, but it's 
quite arduous. (And, it has minor undesirable side effects, such as new DOM 
attributes which can be overwritten, whereas native role semantics are baked 
in.)

Additionally, in some cases you *can't* duplicate the semantics and 
accessibility:

https://github.com/domenic/html-as-custom-elements/blob/master/docs/accessibility.md#incomplete-mitigation-strategies

An easy example is that you can never get a screen reader to announce 
<custom-p> as a paragraph, while it will happily do so for <p is="custom-p">. 
This is because there is no ARIA role for paragraphs that you could set in the 
createdCallback of your CustomP.

However, this second point is IMO just a gap in the capabilities of ARIA that 
should be addressed. If we could assume it will be addressed on the same 
timeline as custom elements being implemented (seems ... not impossible), that 
still leaves the concern about having to duplicate all the functionality of a 
button, e.g. keyboard support, focus support, reaction to the presence/absence 
of the disabled attribute, etc.

-Original Message-
From: Edward O'Connor [mailto:eocon...@apple.com] 
Sent: Thursday, January 15, 2015 18:33
To: WebApps WG
Subject: Re: Minimum viable custom elements

Hi all,

Steve wrote:

 [I]t also does not address subclassing normal elements. Again, while 
 that seems desirable

 Given that subclassing normal elements is the easiest and most robust 
 method (for developers) of implementing semantics[1] and interaction 
 support necessary for accessibility I would suggest it is undesirable 
 to punt on it.

Apologies in advance, Steve, if I'm missing something obvious. I probably am.

I've been writing an article about turtles and I've gotten to the point that 
six levels of headings aren't enough. I want to use a seventh-level heading 
element in this article, but HTML only has h1–6. Currently, without custom 
elements, I can do this:

<div role="heading" aria-level="7">Cuora amboinensis, the southeast Asian box 
turtle</div>

Suppose instead that TedHaitchSeven is a subclass of HTMLElement and I've 
registered it as <ted-h7>. In its constructor or createdCallback or whatever, I 
add appropriate role and aria-level attributes. Now I can write this:

<ted-h7>Cuora amboinensis, the southeast Asian box turtle</ted-h7>

This is just as accessible as the div was, but is considerably more 
straightforward to use. So yay custom elements!

If I wanted to use is= to do this, I guess I could write:

<h1 is="ted-h7">Cuora amboinensis, the southeast Asian box turtle</h1>

How is this easier? How is this more robust?

I think maybe you could say this is more robust (if not easier) because, in a 
browser with JavaScript disabled, AT would see an h1. An h1 is at least a 
heading, if not one of the right level. But in such a browser the div example 
above is even better, because AT would see both that the element is a heading 
and it would also see the correct level.

OK, so let's work around the wrong-heading-level-when-JS-is-disabled
problem by explicitly overriding h1's implicit heading level:

<h1 is="ted-h7" aria-level="7">Cuora amboinensis, the southeast Asian box 
turtle</h1>

I guess this is OK, but seeing aria-level="7" on an h1 rubs me the wrong way 
even if it's not technically wrong, and I don't see how this is easier or more 
robust than the other options.


Thanks,
Ted



RE: Defining a constructor for Element and friends

2015-01-15 Thread Domenic Denicola
I've updated my element constructors sketch at 
https://github.com/domenic/element-constructors/blob/master/element-constructors.js
 with a design that means no subclasses of HTMLElement (including the built-in 
elements) need to override their constructor or [Symbol.species](). It also 
uses an options argument for the constructors so it is more extensible in the 
future (e.g. Yehuda's attributes (or was it properties?) argument).

It mostly doesn't delve into custom elements, but does contain a small sample 
to illustrate how they don't need to override the constructor or 
[Symbol.species] either, and how their constructor works exactly the same as 
that of e.g. HTMLParagraphElement.

One interesting thing that falls out of the design is that it's trivial to 
allow a custom element class to be registered for multiple names; it requires 
no more work on the part of the class author than writing a class that 
corresponds to a single name, and is painless to use for consumers.

-Original Message-
From: Domenic Denicola [mailto:d...@domenic.me] 
Sent: Friday, January 9, 2015 20:01
To: Anne van Kesteren; WebApps WG; www-...@w3.org
Subject: RE: Defining a constructor for Element and friends

OK, so I've thought about this a lot, and there was some discussion on an 
unfortunately-TC39-private thread that I want to put out in the open. In [1] I 
outlined some initial thoughts, but that was actually a thread on a different 
topic, and my thinking has evolved.
 
[1]: http://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0035.html

I was writing up my ideas in an email but it kind of snowballed into something 
bigger so now it's a repo: https://github.com/domenic/element-constructors

One primary concern of mine is the one you mention:

 whether it is acceptable to have an element whose name is "a", namespace is 
 the HTML namespace, and interface is Element

I do not really think this is acceptable, and furthermore I think it is 
avoidable.

In the private thread Boris suggested a design where you can do `new 
Element(localName, namespace, prefix)`. This seems necessary to explain how 
`createElementNS` works, so we do want that. He also suggested the following 
invariants:

1.  The localName and namespace of an element determine its set of internal 
slots.
2.  The return value of `new Foo` has `Foo.prototype` as the prototype.

I agree we should preserve these invariants, but added a few more to do with 
keeping the existing (localName, namespace) -> constructor links solid.

I've outlined the added invariants in the readme of the above repo. Other 
points of interest:

- Explainer for a very-recently-agreed-upon ES6 feature that helps support the 
design: 
https://github.com/domenic/element-constructors/blob/master/new-target-explainer.md
- Jump straight to the code: 
https://github.com/domenic/element-constructors/blob/master/element-constructors.js
- Jump straight to the examples of what works and what doesn't: 
https://github.com/domenic/element-constructors/blob/master/element-constructors.js#L194

One ugly point of my design is that the constructor signature is `new 
Element(localName, document, namespace, prefix)`, i.e. I require the document 
to be passed in. I am not sure this is necessary but am playing it safe until 
someone with better understanding tells me one way or the other. See 
https://github.com/domenic/element-constructors/issues/1 for that discussion.

---

As for how this applies to custom elements, in the private thread Boris asked: 

 what is the use case for producing something that extends HTMLImageElement 
 (and presumably has its internal slots?) but doesn't have img as the tag 
 name and hence will not have anything ever look at those internal slots?

Elsewhere on this thread or some related one, IIRC, he pointed out code that 
looks at the local name, finds "img", and casts to the C++ backing 
representation of HTMLImageElement. So from what I am gathering, in his view the 
parts of the platform that treat img elements specially currently work by 
checking explicitly that something has local name img (and HTML namespace).

From a naïve authoring point of view that seems suboptimal. I'd rather be able 
to do `class MyImg extends HTMLImageElement { constructor(document) { 
super(document); } }` and have MyImg instances treated specially by the 
platform in all the ways img currently is.

Or, for an easier example, I'd like to be able to do `class MyQ extends 
HTMLQuoteElement { constructor(document) { super(document); } }` and have `(new 
MyQ()).cite` actually work, instead of throwing a "cite getter incompatible 
with MyQ" error because I didn't get the HTMLQuoteElement internal slots.

The logical extension of this, then, is that if after that 
`document.registerElement` call I do `document.body.innerHTML = '<my-q 
cite="foo">blah</my-q>'` I'd really like to see 
`document.querySelector("my-q").cite` return `"foo"`.

However this idea that we'd like custom elements which inherit from

RE: Minimum viable custom elements

2015-01-14 Thread Domenic Denicola
From: Erik Arvidsson [mailto:a...@google.com] 

 I'm not sure how that is speced but in Blink we have an extended IDL 
 attribute called CustomElementCallbacks which means that we are going to call 
 the pending callbacks after the method/setter.

If I recall this is how Anne and others were saying that it *should* be 
specced, but the spec itself is a bit more vague, leaving it as

 When transitioning back from the user agent code to script, pop the element 
 queue from the processing stack and invoke callbacks in that queue.


RE: Minimum viable custom elements

2015-01-14 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com] 

 Let me restate the problem using an example.  Suppose we're parsing 
 <my-element></my-element><my-other-element></my-other-element>.

 Once the HTML is parsed, the DOM tree is constructed with two DOM elements.  
 Now we call the constructors on those elements.  Without loss of generality, 
 let's assume we're doing this in the tree order.

 We call the constructor of my-element first. However, inside this 
 constructor, you can access this.nextSibling after calling super().  What's 
 nextSibling in this case? An uninitialized my-other-element.

Thanks, that is very helpful. And I'd guess that with the current spec, it's an 
uninitialized my-other-element in the sense that its createdCallback has not 
been called, even though its constructor (which does nothing beside the normal 
HTMLElement/HTMLUnknownElement stuff) has indeed been called.



RE: Custom element design with ES6 classes and Element constructors

2015-01-14 Thread Domenic Denicola
I had a chat with Dmitry Lomov (V8 team/TC39, helped shape the new ES6 classes 
design, CC'ed). His perspective was helpful. He suggested a way of evolving the 
current createdCallback design that I think makes it more palatable, and allows 
us to avoid all of the teeth-gnashing we've been doing in this thread.

- Define a default constructor for HTMLElement that includes something like:

  ```js
  const createdCallback = this.createdCallback;
  if (typeof createdCallback === "function") {
    createdCallback.call(this);
  }
  ```

- Detect whether the constructor passed to document.registerElement is the 
default ES class constructor or not. (By default, classes generate a 
constructor like `constructor(...args) { super(...args); }`.) If it is not the 
default constructor, i.e. if an author tried to supply one, throw an error. 
This functionality doesn't currently exist in ES, but it exists in V8 and seems 
like a useful addition to ES (e.g. as `Reflect.isDefaultConstructor`).

- Define the HTMLElement constructor to be smart enough to work without any 
arguments. It could do this by e.g. looking up `new.target` in the registry. I 
can prototype this in https://github.com/domenic/element-constructors.
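No `Reflect.isDefaultConstructor` exists in the language today; a crude, source-text-based userland approximation — fragile (it would break on minified or exotic sources) and shown only to illustrate the idea — might look like:

```javascript
// Approximates the proposed Reflect.isDefaultConstructor: a class whose
// source text contains no explicit `constructor(...)` is using the
// implicit default one. This toString-based heuristic is NOT robust and
// is not how an engine-level check would work.
function looksLikeDefaultConstructor(cls) {
  return !/\bconstructor\s*\(/.test(Function.prototype.toString.call(cls));
}

class Base {}
class UsesDefault extends Base {}
class UsesCustom extends Base {
  constructor() { super(); }
}
```

With an engine-level primitive, document.registerElement could throw whenever an author tried to supply their own constructor.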

With these tweaks, the syntax for registering an element becomes:

```js
class MyEl extends HTMLElement {
  createdCallback() {
    // initialization code goes here
  }
}

document.registerElement("my-el", MyEl);
```

Note how we don't need to save the return value of `document.registerElement`. 
`new MyEl()` still works, since it just calls the default `HTMLElement` 
constructor. (It can get its tag name by looking up `new.target === MyEl` in 
the custom element registry.) And, `new MyEl()` is equivalent to the 
parse-then-upgrade dance, since parsing corresponds to the main body of the 
HTMLElement constructor, and upgrading corresponds to proto-munging plus 
calling `this.createdCallback()`.
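The whole flow can be simulated without a real DOM. FakeHTMLElement below stands in for HTMLElement, and the Map stands in for the custom element registry; this is a sketch of the proposal, not how any browser implements it.

```javascript
// constructor -> tag name; stand-in for the custom element registry.
const registry = new Map();

class FakeHTMLElement {
  constructor() {
    // The base constructor is smart enough to work with no arguments:
    // it looks up `new.target` in the registry to find its tag name.
    this.localName = registry.get(new.target) || 'unknown';
    // ...then runs the author-supplied hook, as in the default
    // HTMLElement constructor sketched above.
    if (typeof this.createdCallback === 'function') this.createdCallback();
  }
}

class MyEl extends FakeHTMLElement {
  createdCallback() { this.initialized = true; }
}

// Stand-in for document.registerElement('my-el', MyEl); note that no
// return value needs to be saved.
registry.set(MyEl, 'my-el');

const el = new MyEl();
```

`new MyEl()` here is exactly the parse-then-upgrade dance in miniature: the base constructor body plays the parser's role, and the createdCallback call plays the upgrade step.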

Compare this to the ideal syntax that we've been searching for a way to make 
work throughout this thread:

```js
class MyEl extends HTMLElement {
  constructor() {
    super();
    // initialization code goes here
  }
}

document.registerElement("my-el", MyEl);
```

It's arguably just as good. You can't use the normal constructor mechanism, 
which feels sad. But let's talk about that.

Whenever you're working within a framework, be it Rails or ASP.NET or Polymer 
or the DOM, sometimes the extension mechanism for the framework is to allow 
you to derive from a given base class. When you do so, there's a contract of 
what methods you implement, what methods you *don't* override, and so on. When 
doing this kind of base-class extension, you're implementing a specific 
protocol that the framework tells you to, and you don't have complete freedom. 
So, it seems pretty reasonable if you're working within a framework that says, 
"you must not override the constructor; that's my domain. Instead, I've 
provided a protocol for how you can do initialization." Especially for a 
framework as complicated and full of initialization issues as the DOM.

This design seems pretty nice to me. It means we can get upgrading (which, a 
few people have emphasized, is key in an ES6 modules world), we don't need to 
generate new constructors, and we can still use class syntax without it 
becoming just an indirect way of generating a `__proto__` to munge later. Via 
the isDefaultConstructor detection, we can protect people from the footgun of 
overriding the constructor.

On a final note, we could further bikeshed the name of createdCallback(), e.g. 
to either use a symbol or to be named something short and appealing like 
initialize() or create(), if that would make it even more appealing :)



RE: Adopting a Custom Element into Another Document

2015-01-14 Thread Domenic Denicola
From: Anne van Kesteren [mailto:ann...@annevk.nl] 

 Because it's very easy to move nodes from one tree to another and this 
 happens quite a bit through iframe and such. If the iframe then goes away 
 it would be a shame to have to leak it forever. This is all discussed to 
 great extent in the aforementioned bug.

It's no more or less easy than moving normal JS objects from one realm to 
another, and we don't `__proto__`-munge those.


RE: Minimum viable custom elements

2015-01-14 Thread Domenic Denicola
From: Anne van Kesteren [mailto:ann...@annevk.nl] 

> Could you explain how this works in more detail?

I haven't checked, but my impression was we could just use the same processing 
model the current spec uses for createdCallback, and use the constructor 
instead.



RE: Minimum viable custom elements

2015-01-14 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com] 

> See Boris' responses in another thread [1] and [2].  Jonas outlined how this 
> could work in the same thread [3]

Thanks for the references. But avoiding this problem is exactly what Arv and I 
were talking about.

The mechanism that createdCallback (and all the other custom element callbacks) 
operates via is to batch up all user code to run after the parser finishes, but 
before control returns to script. So in the example of setting innerHTML and 
then reading it on the next line, you let the parser run, *then* run the 
constructors, then run the next line.

Again, I might be missing something, but if you just do 
s/createdCallback/constructor in the existing spec, I think you get what we're 
describing.


RE: Defining a constructor for Element and friends

2015-01-13 Thread Domenic Denicola
From: Bjoern Hoehrmann [mailto:derhoe...@gmx.net] 

> I know that this is a major concern to you, but my impression is that few if 
> any other people regard that as anything more than nice to have, especially 
> if you equate explaining with having a public API for it.

How do you propose having a private constructor API?

How do you propose instances of the objects even existing at all, if there is 
no constructor that creates them?

This is one of those "only makes sense to a C++ programmer" things.



RE: Custom element design with ES6 classes and Element constructors

2015-01-13 Thread Domenic Denicola
From: Boris Zbarsky [mailto:bzbar...@mit.edu] 

> Hmm.  So given the current direction whereby ES6 constructors may not even be 
> [[Call]]-able at all, I'm not sure we have any great options here.  :(  
> Basically, ES6 is moving toward coupling allocation and initialization but 
> the upgrade scenario can't really be expressed by coupled alloc+init if it 
> preserves object identity, right?

Yes, that is my feeling exactly. The old @@create design was perfect for our 
purposes, since its two-stage allocation-then-initialization could be staged 
appropriately by doing allocation initially, then initialization upon 
upgrading. But the new coupled design defeats that idea.

>> I was hopeful that ES6 would give us a way out of this, but after thinking 
>> things through, I don't see any improvement at all. In particular it seems 
>> you're always going to have to have `var C2 = 
>> document.registerElement("my-el", C1)` giving `C2 !== C1`.
>
> This part is not immediately obvious to me.  Why does that have to be true?

Well, I was skipping several steps and making a few assumptions. Roughly, my 
thought process was that you want *some* constructor that corresponds to 
parser/document.createElement behavior. And, since as discussed it definitely 
can't be your own constructor, the browser will need to generate one for you. 
Thus, it'll generate C2, which is different from the C1 you passed in.

Even if you removed the assumption that having a (user-exposed) constructor 
that corresponds to parser behavior is useful, it doesn't fix the issue that 
the C1 constructor is useless.



RE: Custom element design with ES6 classes and Element constructors

2015-01-13 Thread Domenic Denicola
From: Gabor Krizsanits [mailto:gkrizsan...@mozilla.com] 

> Isn't there a chance to consider our use-case in the ES6 spec, then?

I kind of feel like I and others dropped the ball on this one. Until this 
thread I didn't realize how important the dual-stage allocation + 
initialization was, for upgrading in particular. So, we happily helped redesign 
away from the old ES6 dual-stage @@create design to a new ES6 coupled design, 
over the last few weeks.

The @@create design met with heavy implementer resistance from V8, for IMO 
valid reasons. But I think in the rush to fix it we forgot part of why it was 
done in the first place :(

> Yes, and it seems to me that we are trying to hack around the fact that ES6 
> classes are not compatible with what we are trying to do.
> ...
> And if there is no official way to do it people will start to try and hack 
> their way through in 1000 horrible ways...

The interesting thing is, ES5 classes also have the coupled allocation + 
initialization. So, the recent coupled ES6 class design is seen as a natural 
extension that fits well with ES5, while being more flexible and allowing 
subclassable built-ins.

One of the benefits of the new coupled ES6 class design over the coupled ES5 
design is that it exposes enough hooks so that you can, indeed, do such hacks. 
They might not even be so horrible.

For example, if you can just guarantee that everyone uses the constructor only 
for allocation, and puts their initialization code into an initialize() method 
that they call as the last line of their constructor, you can get an 
author-code version of the decoupled @@create design. You could even consider 
calling that initialize() method, say, createdCallback().
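A sketch of that convention in plain JS (class names and initialize() are of course just placeholders for whatever authors would actually write):

```js
// Constructors do allocation only, then delegate everything else to
// initialize() as their last step, so an upgrade can run just the
// initialization half on an already-allocated object.

class Base {
  constructor() {
    this.slots = { backing: "allocated" }; // allocation-ish work
    this.initialize(); // last line, by convention
  }
  initialize() {}
}

class Derived extends Base {
  initialize() {
    super.initialize();
    this.ready = true; // initialization, kept separable from allocation
  }
}

// Normal path: allocation + initialization in one go.
const a = new Derived();

// Upgrade path: take an object allocated earlier as a Base, swap its
// prototype, and run only the initialization half.
const b = new Base();
Object.setPrototypeOf(b, Derived.prototype);
b.initialize();

console.log(a.ready, b.ready); // true true
```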

Viewed from this perspective, the real benefit of the old ES6 @@create design 
was that it standardized exactly how that pattern would work: it would always 
be `new C(...args)` = `C.call(C[Symbol.create](), ...args)`. These days, if we 
were to invent our own pattern, so that e.g. `new C(...args)` = 
`C.prototype.createdCallback.call(new C(...args))`, it would only work for 
constructors whose definition we control, instead of all constructors ever. 
This is what leads to the idea (present in the current custom elements spec) of 
`document.registerElement` generating the constructor and ignoring any 
constructor that is passed in.



RE: Custom element design with ES6 classes and Element constructors

2015-01-13 Thread Domenic Denicola
From: Boris Zbarsky [mailto:bzbar...@mit.edu] 

> Just to be clear, this still didn't allow you to upgrade a <my-img> to be a 
> subclass of <img>, because that required a change in allocation, right?

Agreed. That needs to be done with <img is="my-img">, IMO. (Assuming the 
upgrading design doesn't get switched to DOM mutation, of course.)

Although! Briefly yesterday Arv mentioned that for Blink's DOM implementation 
there's no real difference in internal slots between <img> and <span>: both 
just have a single internal slot pointing to the C++ backing object. So in 
practice maybe it could. But, in theory the internal slots would be quite 
different between <img> and <span>, so I wouldn't really want to go down this 
road.

>> Well, I was skipping several steps and making a few assumptions. 
>> Roughly, my thought process was that you want *some* constructor that 
>> corresponds to parser/document.createElement behavior. And, since as 
>> discussed it definitely can't be your own constructor
>
> This is the part I'm not quite following.  Why can't it be your own 
> constructor?  Sorry for losing the thread of the argument here

No problem, I'm still skipping steps. There is an alternative design, which Arv 
outlined, which does allow the constructor to be the same.

The argument for different constructors is that: **assuming we want a design 
such that parsing-then-upgrading an element is the same as calling its 
constructor**, then we need the constructor to be split into two pieces: 
allocation, which happens on parse and is not overridden by author-defined 
subclasses, and initialization, which happens at upgrade time and is what 
authors define.

However, when authors design constructors with `class C1 extends HTMLElement { 
constructor(...) { ... } }`, their constructor will do both allocation and 
initialization. We can't separately call the initialization part of it at 
upgrade time without also allocating a new object. Thus, given a 
parse-then-upgrade scenario, we can essentially never call C1's constructor.

We *could* call some other method of C1 at upgrade time. Say, createdCallback. 
(Or upgradedCallback.) We could even generate a constructor, call it C2, that 
does HTMLElement allocation + calls createdCallback. That's what the current 
spec does.

In summary, the current spec design is:

 - parse-then-upgrade: HTMLElement allocation, then createdCallback.
 - parse an already-registered element: HTMLElement allocation, then 
createdCallback.
 - generated constructor: HTMLElement allocation, then createdCallback.
 - author-supplied constructor: ignored
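The generated-constructor piece of the summary above behaves roughly like this toy version (FakeHTMLElement standing in for real browser allocation; none of this is spec text):

```js
class FakeHTMLElement {
  constructor() { this.backing = {}; } // stands in for browser allocation
}

// What document.registerElement effectively hands back: a constructor that
// does HTMLElement allocation plus createdCallback, ignoring the body of
// the author-supplied constructor entirely.
function generateConstructor(proto) {
  function C2() {
    const el = Reflect.construct(FakeHTMLElement, [], C2); // allocate
    if (el.createdCallback) el.createdCallback();          // initialize
    return el;
  }
  C2.prototype = proto;
  return C2;
}

class C1 extends FakeHTMLElement {
  constructor() { super(); throw new Error("never runs"); }
  createdCallback() { this.created = true; }
}

const C2 = generateConstructor(C1.prototype);
const el = new C2();
console.log(el instanceof C1, el.created); // true true
```

Note that C1's own constructor body is unreachable here, which is exactly the "author-supplied constructor: ignored" line above.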

Arv's message had a different design, that does indeed give C1 === C2:

 - parse-then-upgrade: HTMLElement allocation, then upgradedCallback.
 - parse an already-registered element: call author-supplied constructor 
(requires some trickiness to avoid executing user code during parse, but can 
use the same tricks createdCallback uses today)
 - generated constructor = author-supplied constructor

This requires a bit of manual synchronization to ensure that parse-then-upgrade 
behaves the same as the constructor/already-registered element case, as he 
illustrates in his message. In other words, it *doesn't* assume we want 
parsing-then-upgrading to be the same as calling the constructor. That might be 
the right tradeoff. Yesterday I was convinced it was. Today I have written so 
many emails that I'm not sure what I think anymore.



RE: Defining a constructor for Element and friends

2015-01-13 Thread Domenic Denicola
From: Boris Zbarsky [mailto:bzbar...@mit.edu] 

>> Terminology: In what follows I use 'own-instances of X' to mean objects 
>> where obj.constructor === X,
>
> That doesn't make much sense to me as a useful test, since it's pretty simple 
> to produce, say, an HTMLParagraphElement instance on the web that has 
> whatever .constructor value you desire, right?  Unless the intent is to talk 
> about this in some setup where no one has messed with any of the objects or 
> something.
>
> I guess the intent here is that we want obj to have been constructed via X in 
> some sense?  Modulo whatever the story is for the things that have 
> NamedConstructors.

Right, I was being imprecise. I am not sure how to make it precise. Maybe 
something like "was created via `new X`", assuming `X` doesn't use 
return-override and we buy into the story where all instances are created via 
constructors (even those originating from the browser instead of the author).

> Anyway, modulo exactly what this definition should be, let's talk about the 
> proposed "the constructor of an element determines its set of internal slots" 
> invariant.  I'm OK with that if we include constructor arguments.  
> Otherwise, I don't see how it can make sense.  In particular, say someone 
> does:
>
>   var x = new Element("a", "http://www.w3.org/1999/xhtml")
>
> or whatever argument order we do.  Per invariant 1 in your document, this 
> should get the internal slots of an HTMLAnchorElement, right?  Per invariant 
> 2, x.constructor == Element, and in particular x.__proto__ == 
> Element.prototype.  So suddenly we have an HTMLAnchorElement as an 
> own-instance of Element, which I think violates your invariant 3.

The idea is that your above example throws, preserving the invariant. Since 
there is already an entry in the registry for ("a", HTML_NS), and it's not 
Element, the constructor fails. (Except when invoked as part of a super() chain 
from the actual HTMLAnchorElement constructor.) Details:

- 
https://github.com/domenic/element-constructors/blob/88dbec40494aefc03825e00ff1bfc8d5e3f02f1e/element-constructors.js#L62-L66
- 
https://github.com/domenic/element-constructors/blob/88dbec40494aefc03825e00ff1bfc8d5e3f02f1e/element-constructors.js#L211-L215
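The shape of the check, as a toy model (FakeElement/FakeAnchor are illustrative stand-ins; the real code is at the links above):

```js
const HTML_NS = "http://www.w3.org/1999/xhtml";
const registry = new Map(); // "localName|namespace" -> registered constructor

class FakeElement {
  constructor(localName, ns) {
    const registered = registry.get(`${localName}|${ns}`);
    // Throw unless we're the registered constructor, or running as part of
    // a super() chain from a subclass of it.
    if (registered && new.target !== registered &&
        !(new.target.prototype instanceof registered)) {
      throw new TypeError(`"${localName}" must be constructed via its registered class`);
    }
    this.localName = localName;
  }
}

class FakeAnchor extends FakeElement {
  constructor() { super("a", HTML_NS); }
}
registry.set(`a|${HTML_NS}`, FakeAnchor);

new FakeAnchor(); // fine: new.target is the registered class
let threw;
try {
  new FakeElement("a", HTML_NS); // throws: "a" belongs to FakeAnchor
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
```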

> Moving on to invariant 4, is that "instances" in terms of instanceof (which 
> can be almost tautologically true, given what Web IDL says about 
> [[HasInstance]] on interface objects), or in terms of what the proto chain 
> looks like, or something else?  In particular, the x defined above doesn't 
> have HTMLElement.prototype on its proto chain, but is instanceof 
> HTMLElement...

I was assuming non-exotic [[HasInstance]], but I agree it's ambiguous given 
that. I meant prototype chain. Probably I also implicitly meant internal slots.

> The one piece of terminology that I think we have so far that I understand is 
> what it means for an object to "implement an interface". 
> At least Web IDL has a normative requirement on such a thing being defined 
> (e.g. see http://heycam.github.io/webidl/#es-operations step 4 of the 
> behavior), presumably in terms of some sort of branding.

Heh, I don't really understand what that means; I indeed noticed that Web IDL 
uses it without defining it.

I too would guess that it's branding-related. Note that in any sensible scheme 
I can think of, subclass instances (for a subclass that calls super() in the 
constructor) also get the brands of their superclass.

> So it makes sense to me to talk about things implementing Element but not any 
> interface that has Element as an inherited interface.  That would correspond 
> to "is an Element but not any specific subclass of Element".  Could use a 
> shorter way of saying it, for sure.

I don't think this attempt at pinning down terminology works for user-defined 
subclasses of Element. E.g. as far as I can tell, `new (class X extends Element 
{})()` has only the Element brand but no other brand (since X didn't install 
any brands itself). But I would say that it's an own-instance of X instead of 
an own-instance of Element.


RE: Defining a constructor for Element and friends

2015-01-13 Thread Domenic Denicola
From: Boris Zbarsky [mailto:bzbar...@mit.edu] 

> But it also means that user-space code that has to create an HTML element 
> generically now has to go through document.createElement instead of being 
> able to do |new HTMLElement("a")|, right?

That seems totally fine to me though. The idea of a string-based factory for 
when you don't know what constructor you want to use has precedent all over 
software design.

> Those aren't the same thing at all, right?  The prototype chain has 
> absolutely nothing to do with internal slots, unless we're assuming some sort 
> of vanilla untouched state of the world.

Agreed. However, in a normal situation---where all constructors in the chain 
call super() appropriately, and nobody __proto__-munges, and so on---they 
should be the same. That's why I'm saying that implicitly it was probably also 
part of what I was thinking when writing that.

> Really, this idea of "primary interface" and your idea of "own-instance" seem 
> fairly similar, right?  Except that "primary interface" can only refer to Web 
> IDL interfaces, not user-defined subclasses... or something.

Yeah, that sounds about right. Honestly, "own-instance" was just my attempt at 
capturing a JavaScript concept that I work with pretty often ("this over here 
is a Foo"; "this over here is a Bar").



RE: Defining a constructor for Element and friends

2015-01-13 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com] 

> Shouldn't we throw in this case because the concrete type of "somename" is 
> HTMLUnknownElement?

Yes, that's exactly the current design. Hidden a bit:

https://github.com/domenic/element-constructors/blob/master/element-constructors.js#L4

This still leaves the potential hazard of someone doing `new 
HTMLUnknownElement("somename")` and their code breaking later once "somename" 
becomes a real tag... hopefully the "Unknown" is a bit more of a deterrent 
though?

(It'd be nice if HTMLElement weren't a global and you had to do `import 
HTMLElement from "html/parser-internals"` or something. Ah well.)



RE: Adopting a Custom Element into Another Document

2015-01-13 Thread Domenic Denicola
I imagine this has all been discussed before, but why do __proto__-munging when 
adopting cross document? That seems bizarre, and causes exactly these problems. 
When you put an object in a Map from another realm, it doesn't __proto__-munge 
it to that other realm's Object.prototype. Why is the tree data structure 
embodied by the DOM any different?

-Original Message-
From: Ryosuke Niwa [mailto:rn...@apple.com] 
Sent: Tuesday, January 13, 2015 14:37
To: Anne van Kesteren
Cc: Webapps WG; Boris Zbarsky
Subject: Re: Adopting a Custom Element into Another Document

On Jan 13, 2015, at 11:27 AM, Anne van Kesteren <ann...@annevk.nl> wrote:

> On Tue, Jan 13, 2015 at 8:15 PM, Ryosuke Niwa <rn...@apple.com> wrote:
>> By the same thing, do you mean that they will manually change 
>> __proto__ themselves?
>
> Yes.
>
>
>> Let's say we have MyElement that inherits from HTMLElement and we're 
>> adopting an instance of this element (let's call it myElement) from a 
>> document A to document B; MyElement is currently defined in A's 
>> global object.
>>
>> I have two questions:
>>
>> Why did we decide to let custom elements handle this themselves 
>> instead of doing it in the browser?
>
> With two realms it's not a given that both realms use the same custom 
> element registry. (As you acknowledge.)

Okay.

>> When myElement is adopted into B, there is no guarantee that 
>> MyElement is also defined in B.  Does that mean MyElement may need to 
>> create another class, let us call this MyElementB, in the B's context 
>> that inherits from B's HTMLElement?
>
> Well, if you control both realms I would assume you would give the same name.

I'm not sure I understand what you mean by that.  The name of the new class 
MyElementB isn't important.

Do you agree that the author has to create a new class in B if there is no 
definition of MyElement there?
But how do we even determine whether MyElement's defined in B?
And if we did, how do we get the class's prototype?

>> Didn't Boris say Gecko needs to do this synchronously in order to 
>> enforce their security policy?
>
> I don't think that's conclusive, though if it turns out that's the 
> case we would need different timing between normal and custom 
> elements, yes.

Great. Thanks for the clarification.

- R. Niwa




RE: Defining a constructor for Element and friends

2015-01-13 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com] 

> Or, we could always throw an exception in the constructor of 
> HTMLUnknownElement so that nobody could do it.  It would mean that libraries 
> and frameworks that do support custom elements without "-" would have to use 
> document.createElement but that might be a good thing since they wouldn't be 
> doing that in the first place.

That kind of breaks the design goal that we be able to explain how everything 
you see in the DOM was constructed. How did the parser (or 
document.createElement(NS)) create an HTMLUnknownElement, if the constructor 
for HTMLUnknownElement doesn't work?



RE: Custom element design with ES6 classes and Element constructors

2015-01-12 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com] 

> As we have repeatedly stated elsewhere in the mailing list, we support option 
> 1 since authors and frameworks can trivially implement 2 or choose to set 
> prototype without us baking the feature into the platform.

At first I was sympathetic toward option 1, but then I realized that with ES6 
modules all script loading becomes async, so it would be literally impossible 
to use custom elements in a .html file (unless your strategy was to wait for 
element registration, XHR the .html file into a string, then do 
`document.documentElement.innerHTML = theBigString`).

In other words, in an ES6 modules world, all custom elements are upgraded 
elements.


RE: Custom element design with ES6 classes and Element constructors

2015-01-12 Thread Domenic Denicola
From: Domenic Denicola [mailto:d...@domenic.me] 

> In other words, in an ES6 modules world, all custom elements are upgraded 
> elements.

Should be: "In other words, in an ES6 modules world, all custom elements 
__that appear in the initially-downloaded .html file__ are upgraded elements."


RE: Custom element design with ES6 classes and Element constructors

2015-01-12 Thread Domenic Denicola
From: Ryosuke Niwa [mailto:rn...@apple.com] 

> In that case, we can either delay the instantiation of those unknown elements 
> with "-" in their names until pending module loads are finished

Could you explain this in a bit more detail? I'm hoping there's some brilliant 
solution hidden here that I haven't been able to find yet.

For example, given

<my-el></my-el>
<script>
  window.theFirstChild = document.body.firstChild;
  console.log(window.theFirstChild);
  console.log(window.theFirstChild.method);
</script>
<script type="module" src="my-module.js"></script>


with my-module.js containing something like

document.registerElement("my-el", class MyEl extends HTMLElement {
  constructor() {
    super();
    console.log("constructed!");
  }
  method() { }
});
console.log(document.body.firstChild === window.theFirstChild);
console.log(document.body.method);


what happens, approximately?

> or go with option 2

There are a few classes of objections to option 2, approximately:

A. It would spam the DOM with mutations (in particular spamming any mutation 
observers)
B. It would invalidate any references to the object (e.g. the 
`window.theFirstChild !== document.body.firstChild` problem), which is 
problematic if you were e.g. using those as keys in a map.
C. What happens to any changes you made to the element? (E.g. attributes, event 
listeners, expando properties, ...)

I am not sure why A is a big deal, and C seems soluble (copy over most 
everything, maybe not expandos---probably just follow the behavior of 
cloneNode). B is the real problem though.

One crazy idea for solving B is to make every DOM element (or at least, every 
one generated via parsing a hyphenated or is="" element) into a proxy whose 
target can switch from e.g. `new HTMLUnknownElement()` to `new MyEl()` after 
upgrading. Like WindowProxy, basically. I haven't explored this in much detail 
because proxies are scary.
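For what it's worth, the mechanism could look something like this (a pure-JS sketch with placeholder classes, skating over all the hard Proxy-invariant details):

```js
// Hand out a proxy whose backing target can be swapped at upgrade time, so
// identity (the proxy) survives while the allocation changes underneath,
// WindowProxy-style.
function makeUpgradeable(initialTarget) {
  let target = initialTarget;
  const handler = {
    get(_, prop) {
      const value = Reflect.get(target, prop);
      return typeof value === "function" ? value.bind(target) : value;
    },
    set(_, prop, value) { return Reflect.set(target, prop, value); },
    has(_, prop) { return Reflect.has(target, prop); },
  };
  return {
    proxy: new Proxy(initialTarget, handler),
    upgrade(next) { Object.assign(next, target); target = next; },
  };
}

class FakeUnknownElement {}
class MyEl { method() { return "upgraded"; } }

const { proxy, upgrade } = makeUpgradeable(new FakeUnknownElement());
proxy.expando = 1;   // pre-upgrade changes land on the old target
upgrade(new MyEl()); // swap the backing object; identity is preserved
console.log(proxy.method(), proxy.expando); // "upgraded" 1
```

This addresses B (references to the proxy stay valid) and hints at an answer to C (the upgrade copies state across), at the cost of proxying every potentially-upgradeable element.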



RE: Custom element design with ES6 classes and Element constructors

2015-01-12 Thread Domenic Denicola
From: Tab Atkins Jr. [mailto:jackalm...@gmail.com] 

> Proto munging isn't even that big of a deal. It's the back-end stuff that's 
> kinda-proto but doesn't munge that's the problem.  This is potentially 
> fixable if we can migrate more elements out into JS space.

It really isn't though, at least, not without a two-stage process like empty 
constructor() with [[Construct]] semantics that can never be applied to 
upgraded elements + createdCallback() with [[Call]] semantics that can be 
applied to upgraded elements after having their __proto__ munged.


RE: Defining a constructor for Element and friends

2015-01-11 Thread Domenic Denicola
From: Boris Zbarsky [mailto:bzbar...@mit.edu] 

> That said, I do have one question already: what does the term "own-instances" 
> mean in that document?

Explained at the top:

> Terminology: In what follows I use 'own-instances of X' to mean objects where 
> obj.constructor === X, as distinct from 'instances of X' which means objects 
> for which obj instanceof X.

>> whether it is acceptable to have an element whose name is "a", 
>> namespace is the HTML namespace, and interface is Element
>
> I'd like to understand what you mean by "interface is Element" here, exactly.

I'm just quoting Anne :). My interpretation is that the (object representing 
the) element is an own-instance of Element.

>> From a naïve authoring point of view that seems suboptimal. I'd rather be 
>> able to do `class MyImg extends HTMLImageElement { constructor(document) { 
>> super(document); } }` and have MyImg instances treated specially by the 
>> platform in all the ways <img> currently is.
>
> I don't quite see the issue here.  Presumably the HTMLImageElement 
> constructor passes "img" as the localName to the HTMLElement constructor, so 
> your MyImg would get "img" as the localName, right?

Ah, right, of course. I was skipping a few steps and my steps were wrong, so my 
concern wasn't well-founded. My vision was that MyImg had local name "my-img" 
via some custom element stuff, as with the my-q example below. I agree that it 
works as stated, though.

>> Or, for an easier example, I'd like to be able to do `class MyQ extends 
>> HTMLQuoteElement { constructor(document) { super(document); } }` and have 
>> `(new MyQ()).cite` actually work, instead of throw a "cite getter 
>> incompatible with MyQ" error because I didn't get the HTMLQuoteElement 
>> internal slots.
>
> This should totally work, of course.  Why wouldn't it, exactly?  Given the 
> subclassing proposal on the table in ES6 right now, it would work splendidly, 
> since the HTMLQuoteElement constructor is what would perform the object 
> allocation and it would pass along "q" as the localName. 
> (Though actually, HTMLQuoteElement is excitingly complicated, because both 
> "q" and "blockquote" would use that constructor, so it would need to either 
> require one of those two strings be passed in, or default to "q" unless 
> "blockquote" is passed in or something.)

Right, so I should have actually written `class MyQ extends HTMLQuoteElement { 
constructor(document) { super("q", document); } }`. My repo's .js file covers 
the case of HTMLQuoteElement specifically to illustrate how to deal with 
classes that cover more than one local name.

And yeah, I agree it works again.
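To illustrate the multi-local-name dance (simulated classes, not the real DOM; my repo has the real version):

```js
class FakeHTMLElement {
  constructor(localName) { this.localName = localName; }
}

// A class covering two local names has to validate (or default) the one
// it's handed before passing it along.
class FakeHTMLQuoteElement extends FakeHTMLElement {
  constructor(localName = "q") {
    if (localName !== "q" && localName !== "blockquote") {
      throw new TypeError(`HTMLQuoteElement cannot have local name "${localName}"`);
    }
    super(localName);
  }
}

class MyQ extends FakeHTMLQuoteElement {
  constructor() { super("q"); }
}

console.log(new MyQ().localName); // "q"
console.log(new FakeHTMLQuoteElement("blockquote").localName); // "blockquote"
```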

The other stuff, which is really about custom elements, I'll spin off into a 
new thread.


Custom element design with ES6 classes and Element constructors

2015-01-11 Thread Domenic Denicola
This is a spinoff from the thread "Defining a constructor for Element and 
friends" at 
http://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0038.html, 
focused specifically on the design of custom elements in that world.

>> The logical extension of this, then, is that if after that 
>> `document.registerElement` call I do `document.body.innerHTML = '<my-q 
>> cite="foo">blah</my-q>'`
>
> Ah, here we go.  This is the part where the trouble starts, indeed.
>
> This is why custom elements currently uses <q is="my-q"> for creating custom 
> element subclasses of things that are more specific than HTMLElement.  Yes, 
> it's ugly.  But the alternative is at least a major rewrite of the HTML spec 
> and at least large parts of Gecko/WebKit/Blink. 
> :( I can't speak to whether Trident is doing a bunch of localName checks 
> internally.

So, at least as a thought experiment: what if we got rid of all the local name 
checks in implementations and the spec. I think then `<my-q>` could work, as 
long as it was done *after* `document.registerElement` calls.

However, I don't understand how to make it work for upgraded elements at all, 
i.e., in the case where `<my-q>` is parsed, and then later 
`document.registerElement("my-q", MyQ)` happens. You'd have to somehow graft 
the internal slots onto all MyQ instances after the fact, which is antithetical 
to the ES6 subclassing design and to how implementations work. __proto__ 
mutation doesn't suffice at all. Is there any way around this you could imagine?

Assuming that there isn't, I agree that any extension of an existing HTML 
element must be used with `is=` syntax, and there's really no way of fixing 
that. (So, as a side effect, the local name tests can stay. I know how 
seriously you were considering my suggestion to rewrite them all ;).)

This makes me realize that there are really two possible operations going on 
here:

- Register a custom element, i.e. a new tag name, whose corresponding 
constructor *must* derive *directly* from HTMLElement.
- Register an existing element extension, i.e. something to be used with 
is="", whose corresponding constructor can derive from some other constructor.

I'd envision this as

```js
document.registerElement("my-el", class MyEl extends HTMLElement {});
document.registerElementExtension("qq", class QQ extends HTMLQuoteElement {});
```

This would allow <my-el>, plus <q is="qq"> and <blockquote is="qq">, but not 
<qq> or <q is="my-el"> or <span is="qq">. (Also note that element extensions 
don't need to be hyphenated, and there's no need for "extends" since you can 
get the appropriate information from looking at the prototype chain of the 
passed constructor.) document.registerElement could even throw for things that 
don't directly extend HTMLElement. And document.registerElementExtension could 
throw for things which don't derive from constructors that are already in the 
registry. (BTW, as noted in the existing spec, for SVG you always want to use 
element extensions, not custom elements.)
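The prototype-chain lookup I have in mind is roughly this (toy classes and table; the real mapping would come from the platform):

```js
class FakeHTMLElement {}
class FakeHTMLQuoteElement extends FakeHTMLElement {}

// Which local names each built-in element class covers.
const localNamesFor = new Map([
  [FakeHTMLQuoteElement, ["q", "blockquote"]],
]);

// Infer the base element by walking up the constructor's prototype chain,
// instead of taking an explicit "extends" option.
function baseLocalNames(Ctor) {
  for (let C = Object.getPrototypeOf(Ctor); C; C = Object.getPrototypeOf(C)) {
    if (localNamesFor.has(C)) return localNamesFor.get(C);
  }
  throw new TypeError("extension must derive from a registered element class");
}

class QQ extends FakeHTMLQuoteElement {}
console.log(baseLocalNames(QQ)); // ["q", "blockquote"]
```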

---

The story is still pretty unsatisfactory, however. Consider the case where your 
document consists of `<my-el></my-el>`, and then later you do `class MyEl 
extends HTMLElement {}; document.registerElement("my-el", MyEl)`. (Note how I 
don't save the return value of document.registerElement.) When the parser 
encountered `<my-el>`, it called `new HTMLUnknownElement(...)`, allocating an 
HTMLUnknownElement. The current design says to `__proto__`-munge the element 
after the fact, i.e. `document.body.firstChild.__proto__ = MyEl.prototype`. But 
it never calls the `MyEl` constructor!

This is troubling in a couple ways, at least:

- It means that the code `class MyEl extends HTMLElement {}` is largely a lie. 
We are just using ES6 class syntax as a way of creating a new prototype object, 
and not as a way of creating a proper class. At least for upgrading purposes.
- It means that what you get when doing `new MyEl()` is different from what you 
got when parsing-then-upgrading `my-el/my-el`.
- If we decide for the parser and/or for document.createElement to use the 
custom element registry when deciding how to instantiate elements, as was my 
plan in [1], it means that you'll get different results before registering the 
element as you would post-upgrade.

[1]: 
https://github.com/domenic/element-constructors/blob/master/element-constructors.js#L166-L189

(The same problems apply with <q is="qq">, by the way. It needs to be upgraded 
from HTMLQuoteElement to QQ, but we can only `__proto__`-munge, not call the 
constructor.)

I guess the current design solves this all by saying that indeed, the use of 
ES6 class syntax is a lie; you are only using it to create a prototype object. 
And indeed, you can't have a proper constructor, so use this createdCallback 
thing, which we will use to *make* a proper constructor for you, which behaves 
essentially like an upgrade does. So you have to save the return value of 
document.registerElement, and ignore the original constructor from your 
so-called class.

RE: Custom element design with ES6 classes and Element constructors

2015-01-11 Thread Domenic Denicola
Following some old bug links indicates to me this has all been gone over many 
times before. In particular:

- https://www.w3.org/Bugs/Public/show_bug.cgi?id=20913 discussion of class 
syntax, prototypes, <my-el> vs. <q is="my-qq">, and more
- https://www.w3.org/Bugs/Public/show_bug.cgi?id=21063 and related about 
upgrading (at one time upgrades *did* mutate the DOM tree)

I think perhaps the only new information is that, in the old ES6 class design 
with @@create, there was a feasible path forward where document.registerElement 
mutated the @@create to allocate the appropriate backing store, and left the 
constructor's initialize behavior (approximately its [[Call]] behavior) 
alone. This would also work for upgrades, since you could just call the 
constructor on the upgraded element, as constructors in old-ES6 were not 
responsible for allocation. This would perform initialization on it (but leave 
allocation for the browser-generated @@create).

This was all somewhat handwavey, but I believe it was coherent, or close to it.

In contrast, the new ES6 subclassing designs have backed away from the 
allocation/initialization split, which seems like it torpedoes the idea of 
using user-authored constructors. Instead, we essentially have to re-invent the 
@@create/constructor split in user-space as constructor/createdCallback. That's 
a shame.

-Original Message-
From: Domenic Denicola [mailto:d...@domenic.me] 
Sent: Sunday, January 11, 2015 15:13
To: WebApps WG
Subject: Custom element design with ES6 classes and Element constructors

This is a spinoff from the thread Defining a constructor for Element and 
friends at 
http://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0038.html, 
focused specifically on the design of custom elements in that world.

>> The logical extension of this, then, is that if after that
>> `document.registerElement` call I do `document.body.innerHTML = '<my-q
>> cite="foo">blah</my-q>'`

> Ah, here we go. This is the part where the trouble starts, indeed.

> This is why custom elements currently uses <q is="my-q"> for creating custom
> element subclasses of things that are more specific than HTMLElement. Yes,
> it's ugly. But the alternative is at least a major rewrite of the HTML spec
> and at least large parts of Gecko/WebKit/Blink.
> :( I can't speak to whether Trident is doing a bunch of localName checks
> internally.

So, at least as a thought experiment: what if we got rid of all the local name 
checks in implementations and the spec? I think then `<my-q>` could work, as 
long as it was done *after* `document.registerElement` calls.

However, I don't understand how to make it work for upgraded elements at all, 
i.e., in the case where `<my-q>` is parsed, and then later 
`document.registerElement("my-q", MyQ)` happens. You'd have to somehow graft 
the internal slots onto all MyQ instances after the fact, which is antithetical 
to the ES6 subclassing design and to how implementations work. __proto__ 
mutation doesn't suffice at all. Is there any way around this you could imagine?

Assuming that there isn't, I agree that any extension of an existing HTML 
element must be used with `is=` syntax, and there's really no way of fixing 
that. (So, as a side effect, the local name tests can stay. I know how 
seriously you were considering my suggestion to rewrite them all ;).)

This makes me realize that there are really two possible operations going on 
here:

- Register a custom element, i.e. a new tag name, whose corresponding 
constructor *must* derive *directly* from HTMLElement.
- Register an existing element extension, i.e. something to be used with 
`is=""`, whose corresponding constructor can derive from some other constructor.

I'd envision this as

```js
document.registerElement("my-el", class MyEl extends HTMLElement {});
document.registerElementExtension("qq", class QQ extends HTMLQuoteElement {});
```

This would allow `<my-el>`, plus `<q is="qq">` and `<blockquote is="qq">`, but 
not `<qq>`, `<q is="my-el">`, or `<span is="qq">`. (Also note that element 
extensions don't need to be hyphenated, and there's no need for `extends` since 
you can get the appropriate information from looking at the prototype chain of 
the passed constructor.) document.registerElement could even throw for things 
that don't directly extend HTMLElement. And document.registerElementExtension 
could throw for things which don't derive from constructors that are already in 
the registry. (BTW, as noted in the existing spec, for SVG you always want to 
use element extensions, not custom elements.)
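A sketch of that prototype-chain lookup, with plain classes standing in for the real element interfaces (the names and the registry shape here are illustrative, not spec):

```javascript
// Sketch: derive what a registered constructor extends by walking its
// prototype chain, instead of requiring an explicit extends option.
// The two "built-in" classes below are stand-ins, not the real DOM ones.
class HTMLElement {}
class HTMLQuoteElement extends HTMLElement {}

// Maps built-in constructors to the local names they may extend via is="".
const builtins = new Map([
  [HTMLQuoteElement, ["q", "blockquote"]],
  [HTMLElement, []],
]);

function extendableNamesFor(ctor) {
  // Class declarations chain the constructors themselves, so walking
  // Object.getPrototypeOf(ctor) visits each superclass constructor.
  for (let c = Object.getPrototypeOf(ctor); c; c = Object.getPrototypeOf(c)) {
    if (builtins.has(c)) return builtins.get(c);
  }
  return null;
}

class QQ extends HTMLQuoteElement {}
console.log(extendableNamesFor(QQ)); // ["q", "blockquote"]
```

So registerElementExtension could work out for itself that QQ extends the quote element, with no `extends` option at all.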

---

The story is still pretty unsatisfactory, however. Consider the case where your 
document consists of `<my-el></my-el>`, and then later you do `class MyEl 
extends HTMLElement {}; document.registerElement("my-el", MyEl)`. (Note how I 
don't save the return value of document.registerElement.) When the parser 
encountered `<my-el>`, it called `new HTMLUnknownElement(...)`, allocating an 
HTMLUnknownElement. The current design says to `__proto__`-munge

RE: Defining a constructor for Element and friends

2015-01-09 Thread Domenic Denicola
OK, so I've thought about this a lot, and there was some discussion on an 
unfortunately-TC39-private thread that I want to put out in the open. In [1] I 
outlined some initial thoughts, but that was actually a thread on a different 
topic, and my thinking has evolved.
 
[1]: http://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0035.html

I was writing up my ideas in an email but it kind of snowballed into something 
bigger so now it's a repo: https://github.com/domenic/element-constructors

One primary concern of mine is the one you mention:

> whether it is acceptable to have an element whose name is "a", namespace is
> the HTML namespace, and interface is Element

I do not really think this is acceptable, and furthermore I think it is 
avoidable.

In the private thread Boris suggested a design where you can do `new 
Element(localName, namespace, prefix)`. This seems necessary to explain how 
`createElementNS` works, so we do want that. He also suggested the following 
invariants:

1.  The localName and namespace of an element determine its set of internal 
slots.
2.  The return value of `new Foo` has `Foo.prototype` as the prototype.

I agree we should preserve these invariants, but added a few more to do with 
keeping the existing (localName, namespace) -> constructor links solid.

I've outlined the added invariants in the readme of the above repo. Other 
points of interest:

- Explainer for a very-recently-agreed-upon ES6 feature that helps support the 
design: 
https://github.com/domenic/element-constructors/blob/master/new-target-explainer.md
- Jump straight to the code: 
https://github.com/domenic/element-constructors/blob/master/element-constructors.js
- Jump straight to the examples of what works and what doesn't: 
https://github.com/domenic/element-constructors/blob/master/element-constructors.js#L194

One ugly point of my design is that the constructor signature is `new 
Element(localName, document, namespace, prefix)`, i.e. I require the document 
to be passed in. I am not sure this is necessary but am playing it safe until 
someone with better understanding tells me one way or the other. See 
https://github.com/domenic/element-constructors/issues/1 for that discussion.

---

As for how this applies to custom elements, in the private thread Boris asked: 

> what is the use case for producing something that extends HTMLImageElement
> (and presumably has its internal slots?) but doesn't have "img" as the tag
> name and hence will not have anything ever look at those internal slots?

Elsewhere on this thread or some related one IIRC he pointed out code that 
looks at the local name, finds "img", and casts to the C++ backing 
representation of HTMLImageElement. So from what I am gathering, in his view 
the parts of the platform that treat img elements specially currently work by 
checking explicitly that something has local name "img" (and the HTML namespace).

From a naïve authoring point of view that seems suboptimal. I'd rather be able 
to do `class MyImg extends HTMLImageElement { constructor(document) { 
super(document); } }` and have MyImg instances treated specially by the 
platform in all the ways img currently is.

Or, for an easier example, I'd like to be able to do `class MyQ extends 
HTMLQuoteElement { constructor(document) { super(document); } }` and have `(new 
MyQ()).cite` actually work, instead of throwing a "cite getter incompatible 
with MyQ" error because I didn't get the HTMLQuoteElement internal slots.

The logical extension of this, then, is that if after that 
`document.registerElement` call I do `document.body.innerHTML = '<my-q 
cite="foo">blah</my-q>'`, I'd really like to see 
`document.querySelector("my-q").cite` return `"foo"`.

However this idea that we'd like custom elements which inherit from existing 
elements to have their internal slots ties in to the whole upgrading mess, 
which seems quite hard to work around. So maybe it is not a good idea? On the 
other hand upgrading might be borked no matter what, i.e. it might not be 
possible at all to make upgraded elements behave anything like 
parsed-from-scratch elements. (I am planning to think harder about the 
upgrading problem but I am not hopeful.)



RE: Custom element lifecycle callbacks

2015-01-09 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

> On Fri, Jan 9, 2015 at 3:30 AM, Adam Klein <ad...@chromium.org> wrote:
>> Do you have a proposal for where these symbols would be vended?
>
> My idea was to put them on Node or Element as statics, similar to the Symbol
> object design. The discussion on public-script-coord is this bug:

In an alternate reality where modules were specced and implemented before 
symbols, they should go there. (Both for Symbol and for Node/Element; the 
placement of them on Symbol was a somewhat-last-minute hack when we realized 
modules were lagging symbols substantially.)

But yes, in this reality somewhere in the Element/Node/HTMLElement hierarchy 
makes sense. (I'd guess Element?) We might not want to take it as precedent 
though. Or maybe we do, for consistency.


RE: ES6 and upgrading custom elements

2015-01-07 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

> That's why I tried to scope this thread to upgrading and not the script side.
>
> The main question is how you tie MyInputElement to something like <my-input>
> and have that actually work. It seems Dimitri's answer is that you don't, you
> use <input is="my-input"> in combination with a (delayed) prototype mutation
> and creation callback. And you use createElement("input", "my-input") or the
> constructor on the script side of things.

Oh my goodness, I'm sorry. I completely misunderstood what was meant by 
"upgrading". My fault entirely, as it's a well-defined custom elements term.

I see the problem now. I'll try to think on it harder and get back to you. In 
the meantime, apologies for derailing the thread.



RE: ES6 and upgrading custom elements

2015-01-06 Thread Domenic Denicola
This is all intimately tied to the still-ongoing how-to-subclass-builtins 
discussion that is unfortunately happening on a private TC39 thread. The 
essential idea, however, is that as long as you do

```js
class MyInputElement extends HTMLInputElement {
  constructor() {
    super(); // this is key
  }
}
```

then all instances of MyInputElement will get all internal slots and other 
exotic behavior of HTMLInputElement. The `super()` call is the signal that the 
object should be allocated and initialized using the logic of HTMLInputElement 
instead of Function.prototype (the default).

This will require defining a constructor for HTMLInputElement, of course, but 
it's pretty predictable you wouldn't be able to create subclasses of something 
without a constructor.

I am not sure exactly how this fits in with an ES6-era 
document.registerElement. This is the right time to start thinking about that, 
certainly.

Off the top of my head, ideally document.registerElement should just be a 
registration call, and should not actually need to modify the constructor or 
create a new constructor as it does today. The generated constructor in [1] 
does nothing special that the above constructor (which consists entirely of 
`super()`) could not be made to do automatically, as long as the Element 
constructor was made smart enough. (`super()` eventually will delegate up to 
the Element constructor.)

To explain that in more detail I need one more probable-ES6 concept, which is 
the new target. In the expression `var x = new MyInputElement()`, 
`MyInputElement` is the new target. Part of the ES6 subclassing story is that 
all superclass constructors will be passed an implicit parameter for the new 
target. So, when the `MyInputElement()` constructor calls `super()`, that calls 
the [[Construct]] internal method of HTMLInputElement, passing in the new 
target of MyInputElement. Similarly when HTMLInputElement in turn calls 
`super()`, that will call the [[Construct]] internal method of HTMLElement, 
still passing in the new target of MyInputElement. Etc. until Element also gets 
a new target of MyInputElement.
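The new-target plumbing can be demonstrated without any DOM at all; here a plain Map stands in for the custom element registry (all names are illustrative):

```javascript
// Sketch: new.target inside a base constructor is the constructor the
// user actually invoked with `new`, even when reached through a chain
// of super() calls. A Map plays the role of the element registry.
const registry = new Map();

class Base {
  constructor() {
    // If the outermost constructor was registered, we can find its
    // definition here, exactly as a smart Element constructor would.
    this.definition = registry.get(new.target) ?? "unregistered";
  }
}
class Middle extends Base {
  constructor() { super(); }
}
class Leaf extends Middle {
  constructor() { super(); }
}

registry.set(Leaf, "leaf definition");

console.log(new Leaf().definition);   // "leaf definition"
console.log(new Middle().definition); // "unregistered"
```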

With this in hand we can see how to define a smart-enough Element constructor. 
It would first look up the new target in the custom element 
registry. If it is there, it retrieves the definition for it, and can do all 
the work currently specified in [1]. That means that the actual constructor for 
`MyInputElement` can be as above, i.e. simply `super()`.

[1]: 
https://w3c.github.io/webcomponents/spec/custom/#dfn-custom-element-constructor-generation

> Conceptually, when I wrote it I'd imagined that the constructor will be
> called only when you explicitly invoke it (new FooElement...). When parsing
> or upgrading, the constructor would not be called. The createdCallback will
> be invoked in either case.

I do not think this makes any sense. All instances of a class can only be 
created through its constructor. createdCallback should be left behind as 
legacy IMO.

--

With all this in mind I do not think `extends` is necessary. I am less sure 
about this though, so take what follows with a grain of salt. But anyway, IMO 
you should just look at the inheritance chain for the constructor passed in to 
registerElement.

Also there is a pretty simple hack if you still want `extends`. Which is that 
the HTML standard should just define that `HTMLInputElement.extends = "input"`. 
Then in Dmitry's

```js
class X extends HTMLInputElement { /* ... */ }
X.extends = "input"; // additional line
document.register("x-input", X);
var xinput = new X;
```

the need for the additional line disappears since `X.extends === 
HTMLInputElement.extends === "input"`.
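That identity holds because static properties are inherited through the constructor chain, which stand-in classes (not the real DOM interfaces) show directly:

```javascript
// Sketch with stand-in classes: a static `extends` property set on the
// base constructor is visible on subclasses, because `class X extends B`
// makes B the prototype of the constructor X itself.
class HTMLInputElement {}
HTMLInputElement.extends = "input";

class X extends HTMLInputElement {}

console.log(X.extends === HTMLInputElement.extends); // true
console.log(X.extends); // "input"
```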

(I guess this breaks for classes that have multiple tag names :-/. But it is 
pretty weird that I can't define `class QQ extends HTMLQuoteElement {}` and 
then do both `<q is="x-qq">` and `<blockquote is="x-qq">`. You would think I 
could do both. More reason to dislike `extends`.)



RE: Publishing working draft of URL spec

2014-12-02 Thread Domenic Denicola
From: cha...@yandex-team.ru [mailto:cha...@yandex-team.ru] 

 There is no need for a CfC, per our Working Mode documents, so this is 
 announcement that we intend to publish a new Public Working Draft of the URL 
 spec, whose technical content will be based on what is found at 
 https://specs.webplatform.org/url/webspecs/develop/ and 
 https://url.spec.whatwg.org/

Which of these two? They are quite different.




RE: URL Spec WorkMode (was: PSA: Sam Ruby is co-Editor of URL spec)

2014-12-01 Thread Domenic Denicola
What we really need to do is get some popular library or website to take a 
dependency on mobile Chrome or mobile Safari's file URL parsing. *Then* we'd 
get interoperability, and quite quickly I'd imagine.

From: Jonas Sicking [mailto:jo...@sicking.cc]
Sent: 2014-12-01 22:07
To: Sam Ruby [mailto:ru...@intertwingly.net]
Cc: Webapps WG [mailto:public-webapps@w3.org]
Subject: Re: URL Spec WorkMode (was: PSA: Sam Ruby is co-Editor of URL spec)

Just in case I haven't formally said this elsewhere:

My personal feeling is that it's probably better to stay away from
speccing the behavior of file:// URLs.

There's very little incentive for browsers to align on how to handle
file:// handling. The complexities of different file system behaviors
on different platforms and different file system backends makes doing
comprehensive regression testing painful. And the value is pretty low
because there's almost no browser content that uses absolute file://
URLs.

I'm not sure if non-browser URL consuming software has different
incentives. Most software that loads resources from the local file
system use file paths, rather than file:// URLs. Though I'm sure there
are exceptions.

And it seems like file:// URLs add a significant chunk of complexity
to the spec. Complexity which might be for naught if implementations
don't implement them.

/ Jonas



On Mon, Dec 1, 2014 at 5:17 PM, Sam Ruby ru...@intertwingly.net wrote:
 On 11/18/2014 03:18 PM, Sam Ruby wrote:


 Meanwhile, I'm working to integrate the following first into the WHATWG
 version of the spec, and then through the WebApps process:

 http://intertwingly.net/projects/pegurl/url.html


 Integration is proceeding, current results can be seen here:

 https://specs.webplatform.org/url/webspecs/develop/

 It is no longer clear to me what "through the WebApps process" means. In an
 attempt to help define such, I'm making a proposal:

 https://github.com/webspecs/url/blob/develop/docs/workmode.md#preface

 At this point, I'm looking for general feedback.  I'm particularly
 interested in things I may have missed.  Pull requests welcome!

 Once discussion dies down, I'll try to get agreement between the URL
 editors, the WebApps co-chairs and W3C Legal.  If/when that is complete,
 this will go to W3C Management and whatever the WHATWG equivalent would be.

 - Sam Ruby




RE: =[xhr]

2014-11-24 Thread Domenic Denicola
From: Rui Prior [mailto:rpr...@dcc.fc.up.pt] 

 IMO, exposing such degree of (low level) control should be avoided.

I disagree on principle :). If we want true webapps we need to not be afraid to 
give them capabilities (like POSTing data to S3) that native apps have.

 In cases where the size of the body is known beforehand, Content-Length 
 should be generated automatically;  in cases where it is not, chunked 
 encoding should be used.

I agree this is a nice default. However it should be overridable for cases 
where you know the server in question doesn't support chunked encoding.


RE: =[xhr]

2014-11-24 Thread Domenic Denicola
From: Rui Prior [mailto:rpr...@dcc.fc.up.pt] 

> If you absolutely need to stream content whose length is unknown beforehand
> to a server not supporting chunked encoding, construct your web service so
> that it supports multiple POSTs (or whatever), one per piece of data to
> upload.

Unfortunately I don't control Amazon's services or servers :(


RE: =[xhr]

2014-11-18 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

 The only way I could imagine us doing this is by setting the Content-Length 
 header value through an option (not through Headers) and by having the 
 browser enforce the specified length somehow. It's not entirely clear how a 
 browser would go about that. Too many bytes could be addressed through a 
 transform stream I suppose, too few bytes... I guess that would just leave 
 the connection hanging. Not sure if that is particularly problematic.

I don't understand why the browser couldn't special-case the handling of 
`this.headers.get("Content-Length")`? I.e., why would a separate option be 
required? So for example the browser could stop sending any bytes past the 
number specified by reading the Content-Length header value. And if you 
prematurely close the request body stream before sending the specified number 
of bytes then the server just has to deal with it, as they normally do...

I still think we should just allow the developer full control over the 
Content-Length header if they've taken full control over the contents of the 
request body (by writing to its stream asynchronously and piecemeal). It gives 
no more power than using CURL. (Except the usual issues of ambient/cookie 
authority, but those seem orthogonal to Content-Length mismatch.)
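For the "stop sending any bytes past the number specified" idea, here is a sketch of what the browser-internal clamping could look like, using the WHATWG streams API (`clampToLength` is an invented name, not a proposed API):

```javascript
// Hypothetical sketch: truncate a request body stream at the declared
// Content-Length so nothing past that count ever goes on the wire.
// Uses the WHATWG streams API (global in modern browsers and Node).
function clampToLength(declaredLength) {
  let sent = 0;
  return new TransformStream({
    transform(chunk, controller) {
      const remaining = declaredLength - sent;
      if (remaining <= 0) return; // drop everything past the limit
      const slice =
        chunk.byteLength <= remaining ? chunk : chunk.slice(0, remaining);
      sent += slice.byteLength;
      controller.enqueue(slice);
    },
  });
}
```

A body that writes 150 bytes against a declared Content-Length of 100 would then put exactly 100 bytes on the wire; the "too few bytes" case still needs a separate answer.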



RE: =[xhr]

2014-11-18 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

 On Tue, Nov 18, 2014 at 12:50 PM, Takeshi Yoshino tyosh...@google.com wrote:
 How about padding the remaining bytes forcefully with e.g. 0x20 if the 
 WritableStream doesn't provide enough bytes to us?

 How would that work? At some point when the browser decides it wants to 
 terminate the fetch (e.g. due to timeout, tab being closed) it attempts to 
 transmit a bunch of useless bytes? What if the value is really large?

I think there are several different scenarios under consideration.

1. The author says Content-Length 100, writes 50 bytes, then closes the stream.
2. The author says Content-Length 100, writes 50 bytes, and never closes the 
stream.
3. The author says Content-Length 100, writes 150 bytes, then closes the stream.
4. The author says Content-Length 100, writes 150 bytes, and never closes the 
stream.

It would be helpful to know how most servers handle these. (Perhaps HTTP 
specifies a mandatory behavior.) My guess is that they are very capable of 
handling such situations. 2 in particular resembles a long-polling setup.

As for whether we consider this kind of thing an attack, instead of just a 
new capability, I'd love to get some security folks to weigh in. If they think 
it's indeed a bad idea, then we can discuss mitigation strategies; 3 and 4 are 
easily mitigatable, whereas 1 could be addressed by an idea like Takeshi's. I 
don't think mitigating 2 makes much sense as we can't know when the author 
intends to send more data.



RE: =[xhr]

2014-11-17 Thread Domenic Denicola
If I recall how Node.js does this, if you don’t provide a `Content-Length` 
header, it automatically sets `Transfer-Encoding: chunked` the moment you start 
writing to the body.

What do we think of that kind of behavior for fetch requests? My opinion is 
that it’s pretty convenient, but I am not sure I like the implicitness.

Examples, based on fetch-with-streams:

```js
// non-chunked, non-streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
    // implicit Content-Length (I assume)
  },
  body: "a string"
});

// non-chunked, streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
    "Content-Length": "10"
  },
  body(stream) {
    stream.write(new ArrayBuffer(5));
    setTimeout(() => stream.write(new ArrayBuffer(5)), 100);
    setTimeout(() => stream.close(), 200);
  }
});

// chunked, streaming
fetch("http://example.com/post-to-me", {
  method: "POST",
  headers: {
    // implicit Transfer-Encoding: chunked? Or require it explicitly?
  },
  body(stream) {
    stream.write(new ArrayBuffer(1024));
    setTimeout(() => stream.write(new ArrayBuffer(1024)), 100);
    setTimeout(() => stream.write(new ArrayBuffer(1024)), 200);
    setTimeout(() => stream.close(), 300);
  }
});
```


RE: =[xhr]

2014-11-17 Thread Domenic Denicola
From: Takeshi Yoshino [mailto:tyosh...@google.com] 

> On Tue, Nov 18, 2014 at 12:11 AM, Anne van Kesteren <ann...@annevk.nl> wrote:
>> On Mon, Nov 17, 2014 at 3:50 PM, Domenic Denicola <d...@domenic.me> wrote:
>>> What do we think of that kind of behavior for fetch requests?
>>
>> I'm not sure we want to give a potential hostile piece of script that much
>> control over what goes out. Being able to lie about Content-Length would be
>> a new feature that does not really seem desirable. Streaming should probably
>> imply chunked given that.
>
> Agreed.

That would be very sad. There are many servers that will not accept chunked 
upload (for example Amazon S3). This would mean web apps would be unable to do 
streaming upload to such servers.


RE: CfC: publish a WG Note of Fullscreen; deadline November 14

2014-11-08 Thread Domenic Denicola
From: Arthur Barstow [mailto:art.bars...@gmail.com] 

 OK, so I just checked in a patch that sets the Latest Editor's Draft points 
 to Anne's document 
 https://dvcs.w3.org/hg/fullscreen/raw-file/default/TR.html.

I think it would be ideal to change the label to e.g. "See Instead" or 
"Maintained Version" or "Replaced By". Framing the WHATWG as a source of 
Editor's Drafts for the W3C is unnecessarily combative.




RE: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-08 Thread Domenic Denicola
From: cha...@yandex-team.ru [mailto:cha...@yandex-team.ru] 

 That doesn't work with the way W3C manages its work and paper trails.

I guess I was just inspired by Mike Smith earlier saying something along the 
lines of "don't let past practice constrain your thinking as to what can be 
done in this case", and was hopeful we could come to the even-more-optimal 
solution.

In any case, maybe we could also add `<meta name="robots" content="noindex">` 
to this and previous drafts?



Re: CfC: publish WG Note of XHR Level 2; deadline November 14

2014-11-07 Thread Domenic Denicola




 On Nov 7, 2014, at 17:55, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Fri, Nov 7, 2014 at 5:46 PM, Arthur Barstow art.bars...@gmail.com wrote:
 https://dvcs.w3.org/hg/xhr/raw-file/default/TR/XHRL2-Note-2014-Nov.html
 
 Should this not include a reference to https://xhr.spec.whatwg.org/?

Or better yet, just be a redirect to it, as was done with WHATWG's DOM Parsing 
spec to the W3C one?





RE: [streams] Seeking status and plans [Was: [TPAC2014] Creating focused agenda]

2014-10-23 Thread Domenic Denicola
From: Arthur Barstow [mailto:art.bars...@gmail.com] 

 I think recent threads about Streams provided some useful information about 
 the status and plans for Streams. I also think it could be useful if some set 
 of you were available to answer questions raised at the meeting. Can any of 
 you commit some time to be available? If so, please let me know some time 
 slots you are available. My preference is Monday morning, if possible.

I'd be happy to call in at that time or another. Just let me know so I can put 
it in my calendar.



RE: Questions on the future of the XHR spec, W3C snapshot

2014-10-19 Thread Domenic Denicola
I just remembered another similar situation that occurred recently, and in my 
opinion was handled perfectly:

When it became clear that the WHATWG DOM Parsing and Serialization Standard was 
not being actively worked on, whereas the W3C version was, a redirect was 
installed so that going to https://domparsing.spec.whatwg.org/ redirected 
immediately to https://dvcs.w3.org/hg/innerhtml/raw-file/tip/index.html.

This kind of solution seems optimal to me because it removes any potential 
confusion from the picture. XHR in particular seems like a good opportunity for 
the W3C to reciprocate, since with both specs there's a pretty clear sense that 
we all want what's best for the web and nobody wants to have their own outdated 
copy just for the sake of owning it.

-Original Message-
From: Hallvord R. M. Steen [mailto:hst...@mozilla.com] 
Sent: Friday, October 17, 2014 20:19
To: public-webapps
Subject: [xhr] Questions on the future of the XHR spec, W3C snapshot

Apologies in advance that this thread will deal with something that's more in 
the realm of politics.

First, I'm writing as one of the W3C-appointed editors of the snapshot the 
WebApps WG presumably would like to release as the XMLHttpRequest 
recommendation, but I'm not speaking on behalf of all three editors, although 
we've discussed the topic a bit between us.

Secondly, we've all been through neverending threads about the merits of TR, 
spec stability W3C style versus living standard, spec freedom and reality 
alignment WHATWG style. I'd appreciate it if those who consider responding to 
this thread could be to-the-point and avoid the ideological swordsmanship as 
much as possible.

When accepting editor positions, we first believed that we could ship a TR of 
XHR relatively quickly. (I think of that fictive TR as "XHR 2" although W3C 
hasn't released any XHR 1, simply because CORS and the other more recent API 
changes feel like "version 2" stuff to me.) As editors, we expected to update 
it with a next version if and when there were new features or significant 
updates. However, the WHATWG version is now quite heavily refactored to be 
XHR+Fetch. It's no longer clear to me whether pushing forward to ship XHR2 
stand-alone is the right thing to do. However, leaving an increasingly 
outdated snapshot on the W3C side seems to be the worst outcome of this 
situation. Hence I'd like a little bit of discussion and input on how we should 
move on.

All options I can think of are:

a) Ship a TR based on the spec just *before* the big Fetch refactoring. The 
only reason to consider this option is *if* we want something that's sort of 
stand-alone, and not just a wrapper around another and still pretty dynamic 
spec. I think such a spec and the test suite would give implementors a pretty 
good reference (although some corner cases may not be sufficiently clarified to 
be compatible). Much of the refactoring work seems to have been just that - 
refactoring, more about pulling descriptions of some functionality into another 
document to make it more general and usable from other contexts, than about 
making changes that could be observed from JS - so presumably, if an 
implementor followed the TR 2.0 standard they would end up with a generally 
usable XHR implementation.

b) Ship a TR based on the newest WHATWG version, also snapshot and ship the 
Fetch spec to pretend there's something stable to refer to. This requires 
maintaining snapshots of both specs.

c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch spec 
throughout.

d) Abandon the WebApps snapshot altogether and leave this spec to WHATWG.

For a-c the editors should of course commit to updating snapshots and 
eventually probably release new TRs.

Is it even possible to have this discussion without seeding new W3C versus 
WHATWG ideology permathreads?

Input welcome!
-Hallvord



RE: Questions on the future of the XHR spec, W3C snapshot

2014-10-17 Thread Domenic Denicola
No need to make this a vs.; we're all friends here :).

FWIW previous specs which have needed to become abandoned because they were 
superseded by another spec have been re-published as NOTEs pointing to the 
source material. That is what I would advise for this case.

Examples:

- http://www.w3.org/TR/components-intro/
- https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html
- http://lists.w3.org/Archives/Public/www-style/2014Oct/0295.html (search for 
Fullscreen)

-Original Message-
From: Hallvord R. M. Steen [mailto:hst...@mozilla.com] 
Sent: Friday, October 17, 2014 20:19
To: public-webapps
Subject: [xhr] Questions on the future of the XHR spec, W3C snapshot

Apologies in advance that this thread will deal with something that's more in 
the realm of politics.

First, I'm writing as one of the W3C-appointed editors of the snapshot the 
WebApps WG presumably would like to release as the XMLHttpRequest 
recommendation, but I'm not speaking on behalf of all three editors, although 
we've discussed the topic a bit between us.

Secondly, we've all been through neverending threads about the merits of TR, 
spec stability W3C style versus living standard, spec freedom and reality 
alignment WHATWG style. I'd appreciate it if those who consider responding to 
this thread could be to-the-point and avoid the ideological swordsmanship as 
much as possible.

When accepting editor positions, we first believed that we could ship a TR of 
XHR relatively quickly. (I think of that fictive TR as "XHR 2" although W3C 
hasn't released any XHR 1, simply because CORS and the other more recent API 
changes feel like "version 2" stuff to me.) As editors, we expected to update 
it with a next version if and when there were new features or significant 
updates. However, the WHATWG version is now quite heavily refactored to be 
XHR+Fetch. It's no longer clear to me whether pushing forward to ship XHR2 
stand-alone is the right thing to do. However, leaving an increasingly 
outdated snapshot on the W3C side seems to be the worst outcome of this 
situation. Hence I'd like a little bit of discussion and input on how we should 
move on.

All options I can think of are:

a) Ship a TR based on the spec just *before* the big Fetch refactoring. The 
only reason to consider this option is *if* we want something that's sort of 
stand-alone, and not just a wrapper around another, still pretty dynamic 
spec. I think such a spec and the test suite would give implementors a pretty 
good reference (although some corner cases may not be sufficiently clarified to 
ensure compatibility). Much of the refactoring work seems to have been just 
that - refactoring, more about pulling descriptions of some functionality into 
another document to make it more general and usable from other contexts than 
about making changes that could be observed from JS - so presumably, if an 
implementor followed the TR 2.0 standard they would end up with a generally 
usable XHR implementation.

b) Ship a TR based on the newest WHATWG version, also snapshot and ship the 
Fetch spec to pretend there's something stable to refer to. This requires 
maintaining snapshots of both specs.

c) Ship a TR based on the newest WHATWG version, reference WHATWG's Fetch spec 
throughout.

d) Abandon the WebApps snapshot altogether and leave this spec to WHATWG.

For a-c the editors should of course commit to updating the snapshots and, 
eventually, probably releasing new TRs.

Is it even possible to have this discussion without seeding new W3C versus 
WHATWG ideology permathreads?

Input welcome!
-Hallvord



RE: [streams-api] Seeking status of the Streams API spec

2014-10-15 Thread Domenic Denicola
From: Paul Cotton [mailto:paul.cot...@microsoft.com] 

> Would it be feasible to resurrect this interface as a layer on top of [1] so 
> that W3C specifications like MSE that have a dependency on the Streams 
> interface are not broken?

The decision we came to in web apps some months ago was that the interfaces in 
that spec would disappear in favor of WritableStream, ReadableStream, and the 
rest. The idea of a single Stream interface was not sound; that was one of the 
factors driving the conversations that led to that decision.
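For context, a minimal sketch of the two primitives that replaced the single Stream interface: a source `ReadableStream` piped into a sink `WritableStream`. This is illustrative only, assuming a runtime with the standard WHATWG stream globals (a modern browser, or Node 18+); the `demo` helper is hypothetical.

```javascript
// Data flows from a ReadableStream source into a WritableStream sink via
// pipeTo(), instead of through a single read/write Stream object.
async function demo() {
  const chunks = [];
  const readable = new ReadableStream({
    start(controller) {
      controller.enqueue("hello");
      controller.enqueue("world");
      controller.close(); // no more chunks
    },
  });
  const writable = new WritableStream({
    write(chunk) { chunks.push(chunk); }, // the sink collects each chunk
  });
  await readable.pipeTo(writable); // resolves once the source is drained
  return chunks;
}

demo().then((chunks) => console.log(chunks)); // logs [ 'hello', 'world' ]
```

The split also lets transform steps be composed between the two ends, which the old single-interface design couldn't express cleanly.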



RE: Push API and Service Workers

2014-10-15 Thread Domenic Denicola
I'm not an expert either, but it seems to me that push without service workers 
(or some other means of background processing) is basically just server-sent 
events. That is, you could send push notifications to an active webpage over 
a server-sent events channel (or web socket, or long-polling...), which would 
allow it to display a toast notification with the push message.

So from my perspective, implementing the push API without service workers would 
be pretty pointless, as it would give no new capabilities.
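To make the equivalence above concrete, here is an illustrative parser for the `text/event-stream` wire format such a channel would carry; a real page would simply point an `EventSource` at the endpoint and show a `Notification` from its message handler. The `parseEventStream` helper and the sample payload are hypothetical, for illustration only.

```javascript
// Parse a text/event-stream body into { type, data } events.
// Events are separated by a blank line; "event:" sets the type and
// "data:" lines carry the payload (joined with newlines).
function parseEventStream(body) {
  const events = [];
  for (const block of body.split("\n\n")) {
    let type = "message"; // default event type per the SSE format
    const data = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) type = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    if (data.length) events.push({ type, data: data.join("\n") });
  }
  return events;
}

const body = "event: push\ndata: Hello from the server\n\n";
console.log(parseEventStream(body));
// → [ { type: 'push', data: 'Hello from the server' } ]
```

The point being: while the page is open, this channel already delivers everything a push message would; the new capability service workers add is delivery when no page is running.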





RE: [streams-api] Seeking status of the Streams API spec

2014-10-14 Thread Domenic Denicola
From: annevankeste...@gmail.com [mailto:annevankeste...@gmail.com] On Behalf Of 
Anne van Kesteren

> On Mon, Oct 13, 2014 at 10:00 PM, Paul Cotton paul.cot...@microsoft.com 
> wrote:
>> MSE is in CR and there are shipping implementations.
>
> Yes, but the Stream object is not shipping. Unless Microsoft has a prototype 
> it is in exactly 0 implementations. So MSE would have to get out of CR anyway 
> to get that removed (assuming it's following the two implementation rule).

The more interesting question is whether BufferSource is shipping (unprefixed), 
since ideally we would make BufferSource a WritableStream.


RE: [streams-api] Seeking status of the Streams API spec

2014-10-14 Thread Domenic Denicola
From: Domenic Denicola [mailto:dome...@domenicdenicola.com] 

> The more interesting question is whether BufferSource is shipping 
> (unprefixed), since ideally we would make BufferSource a WritableStream.

Sorry, SourceBuffer, not BufferSource---both in this message and the previous 
one.

