Re: Service Worker issues
> caches.open("blog - 2016-06-10 14:14:23 -0700").then(c => c.keys())
> Promise { <state>: "pending" }

Note that this test will *not* tell you whether c.keys() returns a promise; the .then() callback is allowed to return a non-promise, and .then() always returns a promise regardless. You have to log the return value of c.keys() directly. ~TJ
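A minimal illustration of the point, using plain Promise semantics (no Service Worker needed):

```javascript
// .then() always returns a promise, even when the callback returns a
// plain value - so inspecting the .then() result tells you nothing
// about what the callback itself produced.
const chained = Promise.resolve(1).then(v => v + 1); // callback returns a number
console.log(chained instanceof Promise); // true, regardless of the callback

// To learn what a method like c.keys() actually returns, log it directly:
//   caches.open(name).then(c => console.log(c.keys()));
```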
Re: [custom-elements] Prefix x- for custom elements like data- attributes
On Mon, Apr 25, 2016 at 10:06 AM, Bang Seongbeom wrote:
> It would be good to restrict custom element names to start with
> something like 'x-', for the sake of future standards. User-defined
> custom attributes, i.e. data attributes, are similarly restricted to
> names starting with 'data-', so we can easily define new standard
> attribute names ('aria-*', or anything except 'data-*').

We already have a similar restriction - custom element names must *contain* a dash. ~TJ
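A simplified sketch of that rule (the real spec grammar also restricts the allowed characters and reserves a few hyphenated names like "annotation-xml"; this helper is illustrative, not the normative check):

```javascript
// Simplified sketch: a custom element name must contain a hyphen,
// which keeps the hyphen-less namespace free for future built-ins.
function looksLikeCustomElementName(name) {
  return /^[a-z]/.test(name) && name.includes('-');
}

console.log(looksLikeCustomElementName('x-foo'));     // true
console.log(looksLikeCustomElementName('my-widget')); // true
console.log(looksLikeCustomElementName('widget'));    // false - reserved for the platform
```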
Re: [Custom Elements] Not requiring hyphens in names.
On Wed, Apr 13, 2016 at 12:33 PM, /#!/JoePea wrote:
> What if custom Elements simply override existing ones then?
>
> ```js
> shadowRoot.registerElement('div', MyElement)
> ```

That means we lose the lingua franca that HTML provides; two independent libraries can't ever depend on the core HTML elements, because the other library might have overridden some of them. Having a well-known common API is worthwhile.

(JS technically has this problem, but replacing built-ins, when it's done, is typically just to *expand* them. And once modules finally ship, we'll have a built-in module with pristine versions of all the built-ins, too.)

> If overriding native elements was documented, it'd be fine. By
> default, a blank document or shadow root has no elements registered,
> so would use the native DIV. But why not let the user define
> what a <div> is? There could optionally be a warning output to the
> console:
>
> ```
> Warning: DIV was overridden: /path/to/some/file.js:123:4
> ```

This means every website that overrides any built-in element will have never-ending console spam, which isn't great. ~TJ
Re: [Custom Elements] Not requiring hyphens in names.
On Wed, Apr 13, 2016 at 11:12 AM, /#!/JoePea wrote:
> I personally don't like this limitation. I think Custom Elements would
> be better if we could create elements that have names without hyphens,
> with the possible exception that we can't override the native elements.

This would prevent us from ever adding any new elements to the language, or at least require us to do real-world usage checks and avoid names that would break too many pages if we took them over. Requiring a dash is a minimal cost to element authors, and permanently avoids any clashes. This is similar to CSS requiring custom properties to start with a double-dash, like --foo. ~TJ
Re: [XHR]
On Wed, Mar 16, 2016 at 5:10 AM, Jonathan Garbee wrote:
> On Wed, Mar 16, 2016 at 7:10 AM, Hallvord Reiar Michaelsen Steen wrote:
>> On Tue, Mar 15, 2016 at 11:19 PM, Gomer Thomas wrote:
>>> According to IETF RFC 7230 all HTTP recipients "MUST be able to parse
>>> the chunked transfer coding". The logical interpretation of this is
>>> that whenever possible HTTP recipients should deliver the chunks to
>>> the application as they are received, rather than waiting for the
>>> entire response to be received before delivering anything.
>>>
>>> In the latest version this can only be done for "text" responses.
>>> For any other type of response, the "response" attribute returns
>>> "null" until the transmission is completed.
>>
>> How would you parse for example an incomplete JSON source to expose
>> an object? Or incomplete XML markup to create a document? Exposing
>> partial responses for text makes sense - for other types of data
>> perhaps not so much.
>
> If I understand correctly, streams [1] with fetch should solve this
> use-case.
>
> [1] https://streams.spec.whatwg.org/

No, streams do not solve the problem of "how do you present a partially-downloaded JSON object". They handle chunked data *better*, so they'll improve "text" response handling, but there's still the fundamental problem that an incomplete JSON or XML document can't, in general, be reasonably parsed into a result. Neither format is designed for streaming.

(This is annoying - it would be nice to have a streaming-friendly JSON format. There are some XML variants that are streaming-friendly, but not "normal" XML.) ~TJ
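A small demonstration of why chunk delivery alone doesn't help for JSON: the buffered text is simply unparseable until the final chunk closes every bracket (pure JS, no network; the chunk boundaries are arbitrary):

```javascript
// Chunks of a JSON response arrive incrementally, but JSON.parse()
// can't produce a partial result - it throws until the document is
// syntactically complete.
const chunks = ['{"items": [1, 2', ', 3]}']; // two arbitrary network chunks
let buffered = '';

for (const chunk of chunks) {
  buffered += chunk;
  try {
    console.log('parsed:', JSON.parse(buffered).items);
  } catch (e) {
    console.log('still incomplete:', JSON.stringify(buffered));
  }
}
// Logs "still incomplete" after the first chunk, then parsed: [ 1, 2, 3 ]
```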
Re: Shadow DOM and alternate stylesheets
On Tue, Dec 8, 2015 at 4:43 AM, Rune Lillesveen wrote:
> What should happen with the title attribute of style elements in
> Shadow DOM?
>
> In Blink you can currently select style elements in shadow trees based
> on the alternate stylesheet name set for the document. You can even set
> the preferred stylesheet using the title on style elements inside
> shadow trees. Perhaps the title attribute should be ignored on style
> elements in shadow trees?

I agree. ~TJ
Re: Call for Consensus: Publish First Public Working Draft of FindText API, respond by 14 October
On Tue, Oct 6, 2015 at 3:34 PM, Doug Schepers wrote:
> Hi, Eliott-
>
> Good question.
>
> I don't have a great answer yet, but this is something that will need
> to be worked out with Shadow DOM, not just for this spec, but for the
> Selection API and others, as well as for CSS, which has some Range-like
> styling.

CSS doesn't care about this, because it doesn't expose its selections to the wider DOM; it can freely style whatever it wants, including ranges that span into shadows. This is indeed equivalent to the problem that the generic Selection API has with Shadow DOM, though. ~TJ
Re: Indexed DB + Promises
On Tue, Sep 29, 2015 at 10:51 AM, Domenic Denicola wrote:
> I guess part of the question is, does this add enough value, or will
> authors still prefer wrapper libraries, which can afford to throw away
> backward compatibility in order to avoid these ergonomic problems?
> From that perspective, the addition of waitUntil or a similar
> primitive to allow better control over transaction lifecycle is
> crucial, since it will enable better wrapper libraries. But the
> .promise and .complete properties end up feeling like halfway
> measures, compared to the usability gains a wrapper can achieve. Maybe
> they are still worthwhile though, despite their flaws. You probably
> have a better sense of what authors have been asking for here than I
> do.

Remember that the *entire point* of IDB was to provide a "low-level" set of functionality, and then to add a sugar layer on top once authors had explored the space a bit and shown what would be most useful.

I'd prefer we kept with that approach, and defined a consistent, easy-to-use sugar layer that's just built with IDB primitives underneath, rather than trying to upgrade the IDB primitives into more usable forms that end up being inconsistent or difficult to use. ~TJ
Re: Indexed DB + Promises
On Wed, Sep 30, 2015 at 11:07 AM, Kyle Huey <m...@kylehuey.com> wrote:
> On Wed, Sep 30, 2015 at 10:50 AM, Tab Atkins Jr. <jackalm...@gmail.com> wrote:
>> On Tue, Sep 29, 2015 at 10:51 AM, Domenic Denicola <d...@domenic.me> wrote:
>>> I guess part of the question is, does this add enough value, or will
>>> authors still prefer wrapper libraries, which can afford to throw
>>> away backward compatibility in order to avoid these ergonomic
>>> problems? From that perspective, the addition of waitUntil or a
>>> similar primitive to allow better control over transaction lifecycle
>>> is crucial, since it will enable better wrapper libraries. But the
>>> .promise and .complete properties end up feeling like halfway
>>> measures, compared to the usability gains a wrapper can achieve.
>>> Maybe they are still worthwhile though, despite their flaws. You
>>> probably have a better sense of what authors have been asking for
>>> here than I do.
>>
>> Remember that the *entire point* of IDB was to provide a "low-level"
>> set of functionality, and then to add a sugar layer on top once
>> authors had explored the space a bit and shown what would be most
>> useful.
>>
>> I'd prefer we kept with that approach, and defined a consistent,
>> easy-to-use sugar layer that's just built with IDB primitives
>> underneath, rather than trying to upgrade the IDB primitives into
>> more usable forms that end up being inconsistent or difficult to use.
>
> At a bare minimum we need to actually specify how transaction
> lifetimes interact with tasks, microtasks, etc. Especially since the
> behavior differs between Gecko and Blink (or did, the last time I
> checked).
>
> waitUntil() alone is a pretty large change to IDB semantics. Somebody
> mentioned earlier that you can get this behavior today, which is true,
> but it requires you to continually issue "keep-alive" read requests to
> the transaction, so it's fairly obvious you aren't using it as
> intended.
Yeah, any necessary extensions to the underlying "bare" IDB semantics that need to be made to support the sugar layer are of course appropriate; they indicate an impedance mismatch that we need to address for usability. ~TJ
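The "sugar built on primitives" idea can be sketched in a few lines: wrap anything with the IDBRequest shape in a promise, leaving the primitives untouched. (This is a hypothetical illustration written against a stand-in object, not a proposed API; real wrappers also have to deal with transaction lifetime, which is exactly the impedance mismatch under discussion.)

```javascript
// Hypothetical sugar layer: wrap an IDB-style request (anything with
// onsuccess/onerror, .result, and .error) in a promise, without
// changing the underlying primitives.
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// With real IndexedDB this would read:
//   const value = await promisifyRequest(store.get(key));
```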
Re: Normative references to Workers.
On Tue, Sep 15, 2015 at 10:31 AM, Mike West wrote:
> The "Upgrade Insecure Requests" specification [1] references the WHATWG
> HTML spec for the "set up a worker environment settings object"
> algorithm [2], as the Web Workers Candidate Recommendation from May
> 2012 [3] substantially predates the entire concept of a "settings
> object", and because the WHATWG is the group where work on Workers
> seems to be being done.
>
> This referential choice was flagged during a discussion of
> transitioning the Upgrade spec to CR, where it was noted that the Web
> Workers editor's draft from May 2014 does contain the referenced
> concept [4].
>
> It seems appropriate, then, to bring the question to this group: does
> WebApps intend to update the Workers draft in TR? If so, is there a
> path forward to aligning the Workers document with the work that's
> happened over the last year and a half in WHATWG? Alternatively, does
> WebApps intend to drop work on Workers in favor of the WHATWG's
> document?

Agreed with Hixie; the WHATWG spec is the most recent normative version of that section, and should be referenced instead.

Remember, there's nothing wrong with referencing WHATWG specs. It will not delay or hamper your publication or Rec-track advancement, despite the occasional misinformed complaint from someone unaware of the policies. ~TJ
Re: PSA: publish WD of WebIDL Level 1
On Fri, Aug 7, 2015 at 9:23 AM, Travis Leithead <travis.leith...@microsoft.com> wrote:
> This is, at a minimum, incremental goodness. It's better than leaving
> the prior L1 published document around--which already tripped up a few
> folks on my team recently. I strongly +1 it.

There are alternatives! In particular, you can publish a gravestone revision. Bikeshed has boilerplate for this you can steal the wording of:

```
<details open class='annoying-warning'>
  <summary>This Document Is Obsolete and Has Been Replaced</summary>
  <p>
    This specification is obsolete and has been replaced by the document
    at <a href="[REPLACEDBY]">[REPLACEDBY]</a>. Do not attempt to
    implement this specification. Do not refer to this specification
    except as a historical artifact.
</details>
```

Just publish a new WD containing *only* that as the content, and you're golden. For bonus points, publish revisions of all the dated webidl1 specs, with that as an actual warning (no need to wipe out their contents). Look at the styling of the message on https://tabatkins.github.io/specs/respimg/ for a good example that makes it impossible to miss that you're looking at an obsolete spec. ~TJ
Re: PSA: publish WD of WebIDL Level 1
On Thu, Jul 30, 2015 at 7:29 AM, Arthur Barstow <art.bars...@gmail.com> wrote:
> Hi All,
>
> This is a heads-up re the intent to publish a Working Draft of WebIDL
> Level 1 (on or around August 4) using Yves' document as the basis and
> a new shortname of WebIDL-1:
>
> https://ylafon.github.io/webidl/publications/fpwd-20150730.html
>
> There is an open question about what should happen with TR/WebIDL/
> (which now is the 2012 Candidate Recommendation). One option is to
> serve it as WebIDL-1. Another option is to replace it with the latest
> version of Cameron's Editor's Draft. A third option is to make it some
> type of landing page the user can use to load the various versions.
> Feedback on these options is welcome and the default (if there are no
> non-resolvable issues) is to go with option #2 (Yves' preference).

The CSSWG always points the non-leveled URL to the latest spec. (That's #2, if I'm counting your options correctly.) ~TJ
Re: [shadow-dom] ::before/after on shadow hosts
All right, sounds pretty unanimous that #2 (current behavior) is what we should go with. I'll clarify the Scoping spec. Thanks! ~TJ
[shadow-dom] ::before/after on shadow hosts
I was recently pointed to this StackOverflow thread:

http://stackoverflow.com/questions/31094454/does-the-shadow-dom-replace-before-and-after/

which asks what happens to ::before and ::after on shadow hosts, as it's not clear from the specs. I had to admit that I hadn't thought of this corner-case, and it wasn't clear what the answer was! In particular, there seem to be two reasonable options:

1. ::before and ::after are *basically* children of the host element, so they get suppressed when the shadow contents are displayed.
2. ::before and ::after aren't *really* children of the host element, so they still show up before/after the shadow contents.

According to the SO thread (I haven't tested this myself), Firefox and Chrome both settled on #2. I'm fine to spec this in the Scoping module; I just wanted to be sure this was the answer we wanted. ~TJ
Re: Writing spec algorithms in ES6?
On Thu, Jun 11, 2015 at 1:41 PM, Boris Zbarsky <bzbar...@mit.edu> wrote:
> On 6/11/15 4:32 PM, Dimitri Glazkov wrote:
>> I noticed that the CSS Color Module Level 4 actually does this, and
>> it seems pretty nice:
>> http://dev.w3.org/csswg/css-color/#dom-rgbcolor-rgbcolorcolor
>
> I should note that the ES code there produces semantics that don't
> match the IDL in this spec (or is complete nonsense, depending on how
> literally you choose to read it).

Yes, the code currently there is... loose. There's a lot of problems trying to map IDL directly into ES without a lot of boilerplate.

> So there are basically at least two problems here:
>
> 1) You have to clearly delineate when you're working with JS values
> and when you're working with IDL values, to the extent that these are
> not the same thing.
>
> 2) You have to clearly delineate which bits of JS run in the page
> global and which bits run in some other clean global and which bits
> run in the page global but have access to some clean intrinsics
> somehow.
>
> I would actually prefer some sort of pseudocode that is _not_
> JS-looking, just so people don't accidentally screw this up.

I actually rather like using JS code for these; it's familiar and easy to read. But yeah, Domenic outlines some of the things we'd have to change before this was actually useful.

I wrote Color the way I did because writing math in prose is *incredibly* awkward, but writing it in ES-ese is *even more incredibly awkward*. So for now, I opted for the insufficient third alternative of JS, where we handwave and pretend all the obvious problems don't occur. ~TJ
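Problem (2) is easy to demonstrate: a hypothetical "spec algorithm" written as ordinary JS reaches intrinsics through the page's mutable globals, so page script can subvert it. (This toy example is mine, not from the thread.)

```javascript
// A hypothetical spec algorithm written as plain page-global JS.
// It calls Math.round through the (mutable) global object.
const specRound = values => values.map(v => Math.round(v));

console.log(specRound([1.4, 2.6])); // [ 1, 3 ] - looks fine...

Math.round = () => 999; // ...until page script tampers with an intrinsic

console.log(specRound([1.4, 2.6])); // [ 999, 999 ] - the "spec" now misbehaves
```

This is why spec-JS would need either a clean global or guaranteed access to pristine intrinsics, as the quoted message says.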
Re: Shadow DOM spec bugs will be migrated into GitHub issues
Note for the future (to you and editors of other specs in WebApps): before doing this kind of mass bug editing, please turn off the automatic email to public-webapps. If you can't do that yourself, Mike Smith can (at least, he's done it in the past). That prevents the mass flood of bugspam from clogging up people's inboxes. ^_^ ~TJ

On Tue, May 26, 2015 at 8:30 PM, Hayato Ito <hay...@google.com> wrote:
> PSA: I've finished the migration. All open bugs are now marked as
> MOVED with a link to the corresponding GitHub issue.
>
> On Mon, May 25, 2015 at 5:58 PM, Hayato Ito <hay...@google.com> wrote:
>> Regarding the Shadow DOM spec, more and more workflows are happening
>> [1] on the GitHub w3c/webcomponents repository recently. Therefore, I
>> am thinking about migrating the bugs of the Shadow DOM spec from
>> Bugzilla [2] to GitHub issues [3], as some other specs are already
>> doing.
>>
>> As an experiment, I've just migrated the existing open bugs on
>> Bugzilla to GitHub issues, with a tiny script I've written using the
>> GitHub APIs.
>>
>> Unless there is an objection to the migration, I am going to close
>> the existing open Shadow DOM spec bugs on Bugzilla, with a link to
>> the corresponding bug on GitHub issues. Please let me know if you
>> have a concern.
>>
>> [1] https://github.com/w3c/webcomponents/commits/gh-pages
>> [2] https://www.w3.org/Bugs/Public/showdependencytree.cgi?id=14978
>> [3] https://github.com/w3c/webcomponents/issues
Re: :host pseudo-class
On Tue, May 5, 2015 at 10:56 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> On Tue, May 5, 2015 at 8:39 PM, Tab Atkins Jr. <jackalm...@gmail.com> wrote:
>> It's certainly no weirder, imo, than having a pseudo-element that
>> doesn't actually live in any element's pseudo-tree, but instead just
>> lives in the normal DOM, but can only be selected by using a
>> pseudo-element selector with no LHS. Pseudo-elements are fucked,
>> unfortunately, but we have to live with their quirks, and those
>> quirks make them really bad for this particular case.
>
> Why? As was said before, pseudo-elements have to be attached to a real
> element.

Pseudo-element selectors have a built-in combinator; they're actually complex selectors all by themselves. (This isn't properly reflected in Selectors right now; I haven't made the edits to the data model that need to happen to make pseudo-elements work properly.) But the host element isn't attached to any of the elements in its shadow tree; it's a *parent* of all of them.

If we ignored this and let it attach to an element, which one? There's no single top-most element to privilege. If we attach to *all* of them, then we get the bizarre result that "#foo::host #foo" might actually select something. Having lots of elements share the same pseudo-element is also unprecedented currently.

Pseudo-elements also, because they're complex selectors, aren't usable everywhere that other selectors are. If you have a context that only accepts simple or compound selectors (as a filter, for instance), pseudo-elements aren't available.

So if we use :host, we have to invent a new concept to make it work (featurelessness). If we use ::host, we have to invent a new concept to make it work (new ways for pseudo-elements to exist and be targeted). I think the latter is weirder than the former. And again, from the perspective of the shadow tree, the host element is not part of its normal DOM. The shadow tree is its normal DOM.

> This is the same as ::-webkit-range-thumb. From the perspective of the
> light DOM, that element is not part of its normal DOM. But it is part
> of the composed DOM.

And again, it depends on what level of authority you're talking about. As far as the outer page is concerned, the input element is empty, and ::-webkit-range-thumb is a fictitious pseudo-element created solely by the platform. There's no real DOM underlying it, because the shadow DOM is fully sealed, so anything inside of it is dead. From the platform's perspective, sure, there's a real element under there. And the platform does get special powers that the page might not have. But the fact that input is implemented with shadow DOM is an undetectable implementation detail at the moment. ~TJ
Re: :host pseudo-class
On Mon, May 4, 2015 at 9:38 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> On Tue, May 5, 2015 at 2:08 AM, Tab Atkins Jr. <jackalm...@gmail.com> wrote:
>> On Thu, Apr 30, 2015 at 10:51 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
>>> But maybe you're right and the whole pseudo-class/pseudo-element
>>> distinction is rather meaningless. But at least pseudo-classes to
>>> date made some sense.
>>
>> I still don't understand what you find wrong with this. It's not that
>> :host() [can] match an element that cannot otherwise be matched, it's
>> that the host element is featureless, save for the ability to match
>> :host. (That's the definition of a featureless element - it's allowed
>> to specify particular things that can still match it.) In other
>> words, it's not :host that's magical, it's the host element itself
>> that's magical.
>
> So :host:hover would not work? I guess you would have to spell that
> :host(:hover)? Because although it does not have features, it has
> features inside the parenthesis?

Correct. The functional :host() pseudo-class can match the selector against the *real* host element that hosts the shadow tree (and :host-context() can do so for the entire shadow-piercing ancestor tree). But normal selectors inside a shadow tree only see the featureless version of the host element that lives inside of the shadow tree.

> Was this concept introduced for other scenarios or just for :host?
> Seems like a very weird rationalization.

Yeah, it was introduced to give the host element the selection behavior we wanted (I explained this in more detail in my first post in the thread). It's certainly no weirder, imo, than having a pseudo-element that doesn't actually live in any element's pseudo-tree, but instead just lives in the normal DOM, but can only be selected by using a pseudo-element selector with no LHS. Pseudo-elements are fucked, unfortunately, but we have to live with their quirks, and those quirks make them really bad for this particular case. ~TJ
Re: :host pseudo-class
On Mon, May 4, 2015 at 9:52 PM, Jonas Sicking <jo...@sicking.cc> wrote:
> On Sun, Apr 26, 2015 at 8:37 PM, L. David Baron <dba...@dbaron.org> wrote:
>> On Saturday 2015-04-25 09:32 -0700, Anne van Kesteren wrote:
>>> I don't understand why :host is a pseudo-class rather than a
>>> pseudo-element. My mental model of a pseudo-class is that it allows
>>> you to match an element based on a boolean internal slot of that
>>> element. :host is not that since e.g. * does not match :host as I
>>> understand it. That seems super weird. Why not just use ::host?
>>>
>>> Copying WebApps since this affects everyone caring about Shadow DOM.
>>
>> We haven't really used (in the sense of shipping across browsers)
>> pseudo-elements before for things that are both tree-like (i.e., not
>> ::first-letter, ::first-line, or ::selection) and not leaves of the
>> tree. (Gecko doesn't implement any pseudo-elements that can have
>> other selectors to their right. I'm not sure if other engines have.)
>> I'd be a little worried about ease of implementation, and doing so
>> without disabling a bunch of selector-related optimizations that we'd
>> rather have. At some point we probably do want to have this sort of
>> pseudo-element, but it's certainly adding an additional dependency on
>> to this spec.
>
> My understanding is that the question here isn't what is being
> matched, but rather what syntax to use for the selector. I.e. in both
> cases the thing that the selector is matching is the DocumentFragment
> which is the root of the shadow DOM.

As Anne said, no, the thing matched is the actual host element. But otherwise, yeah, we're just debating the syntax of how to select that (while obeying the constraints I outlined in my first post to this thread).

> If implementing :host is easier than ::host, then it seems like the
> implementation could always convert the pseudo-element into a
> pseudo-class at parse time. That should make the implementation the
> same other than in the parser. Though maybe the concern here is about
> parser complexity?

It's not about parser complexity. (dbaron did use that as an argument against ::host, but I'm not making that argument; Blink's parser has no problem with it.) It's about hitting the (admittedly complex) constraints sanely within the existing Selectors model. ~TJ
Re: Imperative API for Node Distribution in Shadow DOM (Revisited)
On Tue, May 5, 2015 at 11:20 AM, Ryosuke Niwa <rn...@apple.com> wrote:
> On May 4, 2015, at 10:20 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
>> On Tue, May 5, 2015 at 6:58 AM, Elliott Sprehn <espr...@chromium.org> wrote:
>>> We can solve this problem by running the distribution code in a
>>> separate scripting context with a restricted (distribution specific)
>>> API as is being discussed for other extension points in the
>>> platform.
>>
>> That seems like a lot of added complexity, but yeah, that would be an
>> option I suppose.
>
> Dimitri added something like this to the imperative API proposal page
> a couple of days ago.
>
>>> One thing to consider here is that we very much consider
>>> distribution a style concept. It's about computing who you inherit
>>> style from and where you should be in the box tree. It just so
>>> happens it's also leveraged in event dispatch too (like
>>> pointer-events). It happens asynchronously from DOM mutation as
>>> needed, just like style and reflow, though.
>>
>> I don't really see it that way. The render tree is still computed
>> from the composed tree. The composed tree is still a DOM tree, just
>> composed from various other trees. In the open case you can access it
>> synchronously through various APIs (e.g. if we keep that for
>> querySelector() selectors and also deepPath).
>
> I agree. I don't see any reason node distribution should be considered
> as a style concept. It's a DOM concept. There is no CSS involved here.

Yes there is. As Elliott stated in the elided parts of his quoted response above, most of the places where we update distribution are for CSS or related concerns:

# 3 event related
# 3 shadow dom JS api
# 9 style (one of these is flushing style)
# 1 query selector (for ::content and :host-context)

> I have issues with the argument that we should do it lazily. On one
> hand, if node distribution is so expensive that we need to do it
> lazily, then it's unacceptable to make event dispatching so much
> slower. On the other hand, if node distribution is fast, as it should
> be, then there is no reason we need to do it lazily.
>
> The problem is really the redistributions. If we instead had explicit
> insertion points under each shadow host, then we wouldn't really need
> redistributions at all, and node distribution can happen in O(1) per
> child change.

As repeatedly stated, redistribution appears to be a necessity for composition to work in all but the most trivial cases. ~TJ
Re: :host pseudo-class
On Thu, Apr 30, 2015 at 10:51 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> On Fri, May 1, 2015 at 7:39 AM, Elliott Sprehn <espr...@chromium.org> wrote:
>> That's still true if you use ::host, what is the thing on the left
>> hand side the ::host lives on? I'm not aware of any pseudo-element
>> that's not connected to another element such that you couldn't write
>> {thing}::pseudo.
>
> ::selection?

::selection has a host element. If you use it by itself it just means you're selecting *::selection.

> But maybe you're right and the whole pseudo-class/pseudo-element
> distinction is rather meaningless. But at least pseudo-classes to date
> made some sense.

I still don't understand what you find wrong with this. It's not that :host() [can] match an element that cannot otherwise be matched, it's that the host element is featureless, save for the ability to match :host. (That's the definition of a featureless element - it's allowed to specify particular things that can still match it.) In other words, it's not :host that's magical, it's the host element itself that's magical. ~TJ
Re: :host pseudo-class
On Thu, Apr 30, 2015 at 2:27 AM, Anne van Kesteren <ann...@annevk.nl> wrote:
> On Mon, Apr 27, 2015 at 11:14 PM, Tab Atkins Jr. <jackalm...@gmail.com> wrote:
>> Pseudo-elements are things that aren't DOM elements, but are created
>> by Selectors for the purpose of CSS to act like elements.
>
> That's not true for e.g. ::-webkit-slider-thumb as I already
> indicated.

Sure it is. <input type=range> has no children, and the shadow tree is sealed, so the fact that a shadow tree even exists is hidden from the DOM. As far as CSS is capable of discerning, there is no thumb element, so the pseudo-element makes sense.

The host element is a real DOM element. It just has special selection behavior from inside its own shadow root, for practical reasons: there are good use-cases for being able to style your host, but also a lot for *not* doing so, and so mixing the host into the normal set of elements leads to a large risk of accidentally selecting the host. This is particularly true for things like class selectors; since the *user* of the component is the one that controls what classes/etc. are set on the host element, it's very plausible that a class used inside the shadow root for internal purposes could accidentally collide with one used by the outer page for something completely different, and cause unintentional styling issues.

Making the host element present in the shadow tree, but featureless save for the :host and :host-context() pseudo-classes, was the compromise that satisfies all of the use-cases adequately.

> My problem is not with the ability to address the host element, but
> with addressing it through a pseudo-class, which has so far only been
> used for matching elements in the tree that have a particular internal
> slot.

I don't understand what distinction you're trying to draw here. Can you elaborate?

>> It's possible we could change how we define the concept of
>> pseudo-element so that it can sometimes refer to real elements that
>> just aren't ordinarily accessible, but I'm not sure that's necessary
>> or desirable at the moment.
>
> Well, it would for instance open up the possibility of using :host in
> the light tree to match elements that are host elements.

That's just a naming-collision thing. We can come up with a different name for either "has a shadow tree" or "is the host element of the current shadow tree"; it's just that right now, the latter has claimed the name :host. I certainly don't want to create both a pseudo-class and a pseudo-element with the same name if I can help it (or at least, not ones that refer to similar things); the distinction between pseudo-classes and pseudo-elements in most authors' minds is already tenuous. (Definitely not helped by the legacy :before/etc. syntax.) ~TJ
Re: Imperative API for Node Distribution in Shadow DOM (Revisited)
On Wed, Apr 29, 2015 at 4:15 PM, Dimitri Glazkov <dglaz...@google.com> wrote:
> On Mon, Apr 27, 2015 at 8:48 PM, Ryosuke Niwa <rn...@apple.com> wrote:
>> One thing that worries me about the `distribute` callback approach
>> (a.k.a. Anne's approach) is that it bakes the distribution algorithm
>> into the platform without us having thoroughly studied how
>> subclassing will be done upfront. Mozilla tried to solve this problem
>> with XBL, and they seem to think what they have isn't really great.
>> Google has spent multiple years working on this problem, but they've
>> come around to say their solution, multiple generations of shadow
>> DOM, may not be as great as they thought it would be. Given that, I'm
>> quite terrified of making the same mistake in spec'ing how
>> distribution works and later regretting it.
>
> At least the way I understand it, multiple shadow roots per element
> and distributions are largely orthogonal bits of machinery that solve
> largely orthogonal problems.

Yes. Distribution is mainly about making composition of components work seamlessly, so you can easily pass elements from your light DOM into some components you're using inside your shadow DOM. Without distribution, you're stuck with either:

* avoiding <content> entirely and literally moving the elements from the light DOM to your shadow tree (like, appendChild() the nodes themselves), which means the outer page no longer has access to the elements for their own styling or scripting purposes (this is terribad, obviously), or
* components having to be explicitly written with the expectation of being composed into other components, writing their own <content select> *to target the <content> elements of the outer shadow*, which is also extremely terribad.

Distribution makes composition *work*, in a fundamental way. Without it, you simply don't have the ability to use components inside of components except in special cases. ~TJ
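The basic distribution step can be sketched with a toy model, using plain objects as stand-ins for nodes and for `<content select>` insertion points. (This is an illustration of the idea only, not the spec's distribution algorithm, and it deliberately ignores redistribution.)

```javascript
// Toy sketch of distribution: each light-dom child of a host goes to
// the first insertion point whose `select` predicate matches it; an
// insertion point without a predicate acts as a catch-all.
function distribute(lightChildren, insertionPoints) {
  const distributed = new Map(insertionPoints.map(ip => [ip.name, []]));
  for (const child of lightChildren) {
    const target = insertionPoints.find(ip => !ip.select || ip.select(child));
    if (target) distributed.get(target.name).push(child);
  }
  return distributed;
}

// Stand-ins for <content select="h1"> and a catch-all <content>:
const points = [
  { name: 'titles', select: node => node.tag === 'h1' },
  { name: 'rest' },
];
const result = distribute(
  [{ tag: 'h1', text: 'Hi' }, { tag: 'p', text: 'Body' }],
  points
);
console.log(result.get('titles').length); // 1 - the h1
console.log(result.get('rest').length);   // 1 - the p
```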
Re: Imperative API for Node Distribution in Shadow DOM (Revisited)
On Wed, Apr 29, 2015 at 4:47 PM, Ryosuke Niwa rn...@apple.com wrote: On Apr 29, 2015, at 4:37 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Wed, Apr 29, 2015 at 4:15 PM, Dimitri Glazkov dglaz...@google.com wrote: On Mon, Apr 27, 2015 at 8:48 PM, Ryosuke Niwa rn...@apple.com wrote: One thing that worries me about the `distribute` callback approach (a.k.a. Anne's approach) is that it bakes distribution algorithm into the platform without us having thoroughly studied how subclassing will be done upfront. Mozilla tried to solve this problem with XBS, and they seem to think what they have isn't really great. Google has spent multiple years working on this problem but they come around to say their solution, multiple generations of shadow DOM, may not be as great as they thought it would be. Given that, I'm quite terrified of making the same mistake in spec'ing how distribution works and later regretting it. At least the way I understand it, multiple shadow roots per element and distributions are largely orthogonal bits of machinery that solve largely orthogonal problems. Yes. Distribution is mainly about making composition of components work seamlessly, so you can easily pass elements from your light dom into some components you're using inside your shadow dom. Without distribution, you're stuck with either: As I clarified my point in another email, neither I nor anyone else is questioning the value of the first-degree of node distribution from the light DOM into insertion points of a shadow DOM. What I'm questioning is the value of the capability to selectively re-distribute those nodes in a tree with nested shadow DOMs. * components have to be explicitly written with the expectation of being composed into other components, writing their own content select *to target the content elements of the outer shadow*, which is also extremely terribad. 
Could you give me a concrete use case in which such inspection of content elements in the light DOM is required without multiple generations of shadow DOM? In all the use cases I've studied without multiple generations of shadow DOM, none required the ability to filter nodes inside a content element. Distribution makes composition *work*, in a fundamental way. Without it, you simply don't have the ability to use components inside of components except in special cases. Could you give us a concrete example in which selective re-distribution of nodes is required? That'll settle this discussion/question altogether. I'll let a Polymer person provide a concrete example, as they're the ones that originally brought up redistribution and convinced us it was needed, but imagine literally any component that uses more than one <content> (so you can't get away with just distributing the <content> element itself), being used inside of some other component that wants to pass some of its light-dom children to the nested component. Without redistribution, you can only nest components (using one component inside the shadow dom of another) if you either provide contents directly to the nested component (no <content>) or the nested component only has a single distribution point in its own shadow. ~TJ
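To make the redistribution case concrete, here is a hypothetical markup sketch (all element names invented for illustration): an outer component whose shadow tree uses an inner component with two insertion points. Nodes from the outer component's light DOM distribute into the outer `<content>` elements and must then be *re*-distributed into whichever of the inner component's insertion points they match.

```html
<!-- Shadow tree of a hypothetical <outer-card> component: -->
<template>
  <inner-dialog>
    <!-- Nodes distributed here from <outer-card>'s light DOM must
         then be redistributed by <inner-dialog> below. -->
    <content select=".title"></content>
    <content></content>
  </inner-dialog>
</template>

<!-- Shadow tree of the nested <inner-dialog>, with two insertion
     points; without redistribution, nesting like this only works if
     the inner component has a single insertion point. -->
<template>
  <header><content select=".title"></content></header>
  <section><content></content></section>
</template>
```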
Re: Directory Upload Proposal
On Tue, Apr 28, 2015 at 3:53 PM, Ryan Seddon seddon.r...@gmail.com wrote: To enable developers to build future interoperable solutions, we've drafted a proposal [4], with the helpful feedback of Mozilla and Google, that focuses strictly on providing the mechanisms necessary to enable directory uploads. The use of the dir attribute seems odd since I can already apply dir=rtl to an input to change the text direction. Good catch; that's a fatal naming clash, and needs to be corrected. The obvious fix is to just expand the name out to directory. ~TJ
Re: =[xhr]
On Tue, Apr 28, 2015 at 7:51 AM, Ken Nelson k...@pure3interactive.com wrote: RE async: false being deprecated There's still occasionally a need for a call from client javascript back to server and wait on results. Example: an inline call from client javascript to PHP on server to authenticate an override password as part of a client-side operation. The client-side experience could be managed with a sane timeout param - eg return false if no response after X seconds (or ms). Nothing prevents you from waiting on an XHR to return before continuing. Doing it with async operations is slightly more complex than blocking with a sync operation, is all. ~TJ
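A minimal sketch of the pattern described above: waiting on an async call with a sane timeout instead of blocking with `async: false`. The helper names (`withTimeout`, `authenticateOverride`) are made up, and the server round-trip is simulated with a timer so the sketch is self-contained; in a real page the stand-in would be an async XHR or fetch to the PHP endpoint.

```javascript
// Resolve with the request's value, or reject if it takes longer than `ms`.
function withTimeout(promise, ms) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("timed out")), ms);
    promise.then(
      value => { clearTimeout(timer); resolve(value); },
      err   => { clearTimeout(timer); reject(err); });
  });
}

// Stand-in for the async call to the server-side password check.
function authenticateOverride(password) {
  return new Promise(resolve =>
    setTimeout(() => resolve(password === "secret"), 10));
}

// The client-side operation simply waits on the result before continuing,
// and treats "no response within the timeout" as a failed auth.
async function runOverride(password) {
  try {
    return await withTimeout(authenticateOverride(password), 1000);
  } catch (e) {
    return false;
  }
}
```

The user experience is the same as the sync version (the operation doesn't proceed until the server answers or the timeout fires), but the main thread stays responsive in the meantime.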
Re: :host pseudo-class
On Sat, Apr 25, 2015 at 9:32 AM, Anne van Kesteren ann...@annevk.nl wrote: I don't understand why :host is a pseudo-class rather than a pseudo-element. My mental model of a pseudo-class is that it allows you to match an element based on a boolean internal slot of that element. :host is not that since e.g. * does not match :host as I understand it. That seems super weird. Why not just use ::host? Copying WebApps since this affects everyone caring about Shadow DOM. Pseudo-elements are things that aren't DOM elements, but are created by Selectors for the purpose of CSS to act like elements. The host element is a real DOM element. It just has special selection behavior from inside its own shadow root, for practical reasons: there are good use-cases for being able to style your host, but also a lot for *not* doing so, and so mixing the host into the normal set of elements leads to a large risk of accidentally selecting the host. This is particularly true for things like class selectors; since the *user* of the component is the one that controls what classes/etc are set on the host element, it's very plausible that a class used inside the shadow root for internal purposes could accidentally collide with one used by the outer page for something completely different, and cause unintentional styling issues. Making the host element present in the shadow tree, but featureless save for the :host and :host-context() pseudo-classes, was the compromise that satisfies all of the use-cases adequately. It's possible we could change how we define the concept of pseudo-element so that it can sometimes refer to real elements that just aren't ordinarily accessible, but I'm not sure that's necessary or desirable at the moment. On Sun, Apr 26, 2015 at 8:37 PM, L. 
David Baron dba...@dbaron.org wrote: We haven't really used (in the sense of shipping across browsers) pseudo-elements before for things that are both tree-like (i.e., not ::first-letter, ::first-line, or ::selection) and not leaves of the tree. (Gecko doesn't implement any pseudo-elements that can have other selectors to their right. I'm not sure if other engines have.) I'd be a little worried about ease of implementation, and doing so without disabling a bunch of selector-related optimizations that we'd rather have. At some point we probably do want to have this sort of pseudo-element, but it's certainly adding an additional dependency onto this spec. The ::shadow and ::content pseudo-elements are this way (tree-like, and not leaves). We implement them in Blink currently, at least to some extent. (Not sure if it's just selector tricks, or if we do it properly so that, for example, inheritance works.) On Mon, Apr 27, 2015 at 1:06 AM, Anne van Kesteren ann...@annevk.nl wrote: Thanks, that example has another confusing bit, ::content. As far as I can tell ::content is not actually an element that ends up in the tree. It would make more sense for that to be a named combinator of sorts. (And given ::content allowing selectors on the right-hand side, it's now yet more unclear why :host is not ::host.) It's a (pseudo-)element in the tree, it's just required to not generate a box. Having ::content (and ::shadow) be pseudo-elements lets you do a few useful things: you can use other combinators (child *or* descendant, depending on what you need) and you can set inherited properties to cascade down to all the children (especially useful for setting, for example, 'color' on direct text node children, which can appear in a shadow root or in a <content> with no select='', and can't be targeted by a selector otherwise). I did originally use combinators for this, but they're less useful for the reasons just listed.
(This was explicitly discussed in a telcon, when I noted that sometimes you want to select the top-level things in a shadow tree or distribution list, and sometimes all the things. I had proposed two versions of each combinator, or an argument to a named combinator (like /shadow / versus /shadow /), but someone else (I think it was fantasai?) suggested using a pseudo-element instead, and it turned out to be a pretty good suggestion.) ~TJ
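A minimal stylesheet sketch of the selector features discussed above, as they would appear inside a shadow root's stylesheet (class names are made up; this follows the :host/::content design of that era, not a definitive reference):

```css
/* Matches the host element. Because the host is featureless inside its
   own shadow tree, a bare .warning selector here will NOT match it,
   even if the outer page put class="warning" on the host element. */
:host {
  display: block;
}

/* Conditional styling based on classes the *user* of the component
   set on the host element. */
:host(.collapsed) {
  display: none;
}

/* Style the host only when some ancestor of it matches. */
:host-context(.dark-theme) {
  color: white;
}

/* ::content reaches the light-DOM nodes distributed into a <content>
   insertion point; inherited properties set on ::content itself also
   cascade to bare text node children that no selector could target. */
::content > p {
  margin: 0;
}
```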
Re: Imperative API for Node Distribution in Shadow DOM (Revisited)
On Mon, Apr 27, 2015 at 4:06 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Mon, Apr 27, 2015 at 3:42 PM, Ryosuke Niwa rn...@apple.com wrote: On Apr 27, 2015, at 3:15 PM, Steve Orvell sorv...@google.com wrote: IMO, the appeal of this proposal is that it's a small change to the current spec and avoids changing user expectations about the state of the dom and can explain the two declarative proposals for distribution. It seems like with this API, we'd have to make O(n^k) calls where n is the number of distribution candidates and k is the number of insertion points, and that's bad. Or am I misunderstanding your design? I think you've understood the proposed design. As you noted, the cost is actually O(n*k). In our use cases, k is generally very small. I don't think we want to introduce an O(nk) algorithm. Pretty much every browser optimization we implement these days is removing O(n^2) algorithms in favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because we can't even theoretically optimize it away. You're aware, obviously, that O(n^2) is a far different beast than O(nk). If k is generally small, which it is, O(nk) is basically just O(n) with a constant factor applied. To make it clear: I'm not trying to troll Ryosuke here. He argued that we don't want to add new O(n^2) algorithms if we can help it, and that we prefer O(n). (Uncontroversial.) He then further said that an O(nk) algorithm is sufficiently close to O(n^2) that he'd similarly like to avoid it. I'm trying to reiterate/expand on Steve's message here: the k value in question is usually very small relative to the value of n, so in practice this O(nk) is more similar to O(n) than O(n^2), and Ryosuke's aversion to new O(n^2) algorithms may be mistargeted here. ~TJ
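To make the O(n*k) cost concrete, here is an illustrative sketch (not the proposed API) of a naive distribution pass that checks each of the n candidates against each of the k insertion points, counting the checks as it goes; `matches` is a stand-in predicate for "does this node satisfy this insertion point".

```javascript
// Assign each candidate node to the first insertion point it matches,
// counting candidate-vs-insertion-point checks along the way.
function distribute(candidates, insertionPoints, matches) {
  let checks = 0;
  const assignments = new Map();
  for (const node of candidates) {
    for (const point of insertionPoints) {
      checks++;
      if (matches(node, point)) {
        assignments.set(node, point);
        break; // first matching insertion point wins
      }
    }
  }
  return { assignments, checks };
}
```

With k fixed at, say, 3 insertion points, the number of checks is at most 3n: linear growth in n with a small constant factor, which is the point being made above about O(nk) versus O(n^2).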
Re: Imperative API for Node Distribution in Shadow DOM (Revisited)
On Mon, Apr 27, 2015 at 3:42 PM, Ryosuke Niwa rn...@apple.com wrote: On Apr 27, 2015, at 3:15 PM, Steve Orvell sorv...@google.com wrote: IMO, the appeal of this proposal is that it's a small change to the current spec and avoids changing user expectations about the state of the dom and can explain the two declarative proposals for distribution. It seems like with this API, we'd have to make O(n^k) calls where n is the number of distribution candidates and k is the number of insertion points, and that's bad. Or am I misunderstanding your design? I think you've understood the proposed design. As you noted, the cost is actually O(n*k). In our use cases, k is generally very small. I don't think we want to introduce an O(nk) algorithm. Pretty much every browser optimization we implement these days is removing O(n^2) algorithms in favor of O(n) algorithms. Hard-baking O(nk) behavior is bad because we can't even theoretically optimize it away. You're aware, obviously, that O(n^2) is a far different beast than O(nk). If k is generally small, which it is, O(nk) is basically just O(n) with a constant factor applied. ~TJ
Re: Proposal for changes to manage Shadow DOM content distribution
On Wed, Apr 22, 2015 at 4:40 PM, Jan Miksovsky jan@component.kitchen wrote: Hi Tab, Thanks for your feedback! A primary motivation for proposing names instead of CSS selectors to control distribution is to enable subclassing. We think it's important for a subclass to be able to override a base class insertion point. That seems easier to achieve with a name. It lets content insertion points behave like named DOM-valued component parameters that can be overridden by subclasses. To use an example, consider the page template component example at https://github.com/w3c/webcomponents/wiki/Shadow-DOM-Design-Constraints-In-Examples#page-templates. The image shows a page template for a large university web site. In this example, a base page template class defines a header slot. A university department wants to create a subclass of this template that partially populates some of the base class' slots. In this case, it may want to add the department name to the header slot, then redefine an insertion point with the name that lets an individual page in that department add additional text to the header. The physics department page template subclass could then write something like this (following the proposal's syntax):

```html
<template>
  <div content-slot="header">
    Physics Department
    <content slot="header"></content>
  </div>
</template>
```

If an instance of this page then says

```html
<physics-department-page>
  <header>Faculty</header>
</physics-department-page>
```

then the rendered result shows "Physics Department Faculty" in the base template header. This is analogous to what typical OOP languages enable when a subclass overrides a base class property. In such languages, the subclass simply defines a property with the same name as a base class property. The subclass' property implementation can invoke the base class property implementation if it wants. The model is fairly easy to understand and implement, because the properties are always identified by name.
A similar result could theoretically be achieved with CSS selectors, but the approach feels looser and a bit unwieldy, particularly if there are not rigid conventions about how the content select clauses are written. Assuming it were possible to reproject into a base class' shadow — and that's not actually possible today — you'd have to write something like:

```html
<template>
  <shadow>
    <div class="header">
      Physics Department
      <content select=".header"></content>
    </div>
  </shadow>
</template>
```

So that approach could be made to work, but to me at least, feels harder, especially if the base class is using complex CSS selectors. I'm not really seeing the complexity. In particular, I'm not seeing why content-slot/slot is easier than class/select. For all simple cases (class, attribute, tagname) it seems pretty much identical. More complex selectors, like :nth-child(), might make it a bit more difficult (as you have to match your markup to what the superclass is expecting), but that's probably normally okay, and in problematic cases is just because the superclass is being too clever. On the other hand, requiring content-slot in order to place things in anything but the default slot means you have to ugly up your markup quite a bit. It makes current UA-shadow-using elements, such as <details>, impossible to duplicate, and makes all uses of shadow DOM uglier and more verbose. For example, I think tagname is a very common thing to select on. <details> uses it, your fingers automatically used it before you corrected your example, a number of Polymer examples use it, etc. Requiring the use of <header content-slot="header"> is adding 20 characters to the element. A shadow-heavy page, such as some of the Polymer examples, would have a relatively large amount of its page weight being paid to this attribute scattered all over the place. As Justin said, this seems to be extremely over-engineered toward making subclass-and-reproject slightly more reliable, to the detriment of every other case.
Subclass-and-reproject works just fine with select='' unless the superclass's selector is too complex/specialized to easily satisfy in your subclass's preferred markup; in the common case of a tagname, class name, or attribute name, the two are identical. In return, content-slot makes the common case (just project into some shadow) more verbose for the user, and makes some cases, such as the date-range-combo-box illustrated by Daniel earlier in the thread (https://gist.github.com/azakus/676590eb4d5b07b94428), impossible. Overall, this feels like an over-optimization for implementation complexity (matching content-slot to slot is easier than matching an element to a selector) without fully considering the cost to the author and user, which is a strong inversion of the priority of constituencies. ~TJ
Re: Proposal for changes to manage Shadow DOM content distribution
On Wed, Apr 22, 2015 at 5:04 PM, Ryosuke Niwa rn...@apple.com wrote: I find it decidedly relevant given I'm pointing out that the attribute-specified slots Domenic mentioned aren't what you described, since the only venues in which attribute-specified slots came up are [1], [2], and [3]. We're DEFINITELY NOT interested in filling slots based on values of arbitrary attributes. [1] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0188.html [2] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0190.html [3] https://lists.w3.org/Archives/Public/public-webapps/2015AprJun/0195.html Apologies, I'd misread [1] and didn't realize it really was talking about projecting the value of an attribute into the content of a slot. (Though I'm confused by the vehemence of your denial here, given that in [2] you said you could imagine that such a feature could be easily added.) ~TJ
Re: Proposal for changes to manage Shadow DOM content distribution
This is literally reinventing Selectors at this point. The solution to "we don't think it's worth implementing *all* of Selectors" is to define a subset of supported Selectors, not to define a brand new mechanism that's equivalent to selectors but with a new syntax. On Wed, Apr 22, 2015 at 10:21 AM, Justin Fagnani justinfagn...@google.com wrote: Another technique I've seen used is compound selectors, which could be used to migrate from one selector to another while preserving backwards compatibility, or to provide some nice default distributions that are also accessible via a class or attribute (i.e., select="header, .header"). Could slots have multiple names to support something like this? On Wed, Apr 22, 2015 at 10:16 AM, Justin Fagnani justinfagn...@google.com wrote: On Tue, Apr 21, 2015 at 10:40 PM, Ryosuke Niwa rn...@apple.com wrote: On Apr 21, 2015, at 10:23 PM, Justin Fagnani justinfagn...@google.com wrote: I do want the ability to redirect distributed nodes into holes in the base template, so that part is welcome to me. However, my first reaction to the slot idea is that forcing users to add the content-slot attribute on children significantly impairs the DOM API surface area of custom elements. For the single-level distribution case, how is this different from <content select="[content-slot=name]"> except that <content select> can distribute based on features of the children that might already exist, like tag names or an attribute? At the conceptual level, they're equivalent. However, we didn't find the extra flexibility of using CSS selectors compelling, as we mentioned in our proposal [1]. I personally would like to see more power, especially positional selectors. Some components would be better off selecting their first child, rather than requiring a class.
[1] See points 3 and 4 in https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution#some-issues-with-the-current-shadow-dom-spec Point 4 is interesting, because unless I'm missing something (which could be!) it's incorrect. You can create selectors with :not() that exclude the content selectors that come after in document order. I would rewrite the example as:

```html
<template>
  <content select=".header"></content>
  <content select=":not(.footer)"></content>
  <content select=".footer"></content>
</template>
```

Cheers, Justin - R. Niwa
Re: Proposal for changes to manage Shadow DOM content distribution
On Wed, Apr 22, 2015 at 2:29 PM, Ryosuke Niwa rn...@apple.com wrote: On Apr 22, 2015, at 8:52 AM, Domenic Denicola d...@domenic.me wrote: Between content-slot-specified slots, attribute-specified slots, element-named slots, and everything-else-slots, we're now in a weird place where we've reinvented a micro-language with some, but not all, of the power of CSS selectors. Is adding a new micro-language to the web platform worth helping implementers avoid the complexity of implementing CSS selector matching in this context? I don't think mapping an attribute value to a slot is achievable with a content element with select attribute. <content select="[my-attr='the slot value']"> I don't think defining a slot based on an attribute value is something we'd like to support. That is *literally* what your proposal already is, except limited to only paying attention to the value of the content-slot attribute. ~TJ
Re: Proposal for changes to manage Shadow DOM content distribution
On Wed, Apr 22, 2015 at 2:53 PM, Ryosuke Niwa rn...@apple.com wrote: On Apr 22, 2015, at 2:38 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Wed, Apr 22, 2015 at 2:29 PM, Ryosuke Niwa rn...@apple.com wrote: On Apr 22, 2015, at 8:52 AM, Domenic Denicola d...@domenic.me wrote: Between content-slot-specified slots, attribute-specified slots, element-named slots, and everything-else-slots, we're now in a weird place where we've reinvented a micro-language with some, but not all, of the power of CSS selectors. Is adding a new micro-language to the web platform worth helping implementers avoid the complexity of implementing CSS selector matching in this context? I don't think mapping an attribute value to a slot is achievable with a content element with select attribute. <content select="[my-attr='the slot value']"> No. That's not what I'm talking about here. I'm talking about putting the attribute value into the insertion point in [1] [2] [3], not distributing an element based on an attribute value. Oh, interesting. That appears to be a complete non-sequitur, tho, as no one has asked for anything like that. It's *certainly* irrelevant as a response to the text you quoted. On Apr 22, 2015, at 2:38 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Wed, Apr 22, 2015 at 2:29 PM, Ryosuke Niwa rn...@apple.com wrote: I don't think defining a slot based on an attribute value is something we'd like to support. That is *literally* what your proposal already is, except limited to only paying attention to the value of the content-slot attribute. Distributing elements based on the value of a single well-scoped attribute versus an arbitrary attribute is A HUGE difference. Interesting. Why? And why do you think the difference is significant enough to justify such a limitation? You seem to be okay with distributing elements based on the *name* of an arbitrary attribute; can you justify why that is so much better than using the value, such that you're willing to allow one but not the other? ~TJ
Re: PSA: publishing new WD of File API on April 21
On Wed, Apr 15, 2015 at 7:23 AM, Arthur Barstow art.bars...@gmail.com wrote: * This spec is now using Github https://w3c.github.io/FileAPI/ and the ED is https://w3c.github.io/FileAPI/Overview.html. PRs are welcome and encouraged. (I think it would be useful if this spec used ReSpec and if anyone can help with that port, please do contact me.) This was actually already next on my list of specs to Bikeshed, as soon as I finish DOM (which I'm doing as I type this). WebIDL-heavy specs benefit a lot from being Bikeshedded, so all the IDL definitions get properly marked up for the linking database. ^_^ ~TJ
Re: PSA: publishing new WD of File API on April 21
On Wed, Apr 15, 2015 at 7:23 AM, Arthur Barstow art.bars...@gmail.com wrote: Hi All, A new Working Draft publication of File API is planned for April 21 using the following version as the basis: https://w3c.github.io/FileAPI/TR.html Note that this version appears to be based off the Overview-FAWD.xml file in the CVS repo, which hasn't been touched in 5 years. The file Overview-FA.xml is much more recent and appears to be what the current file at http://www.w3.org/TR/FileAPI/ is based on (note the relative positions of the FileList and Blob sections - in Overview-FA.xml and the current TR, FileList comes first). I suspect, then, that the file you're referencing is out-of-date and shouldn't be used. ~TJ
Re: PSA: publishing new WD of File API on April 21
On Wed, Apr 15, 2015 at 3:00 PM, Arthur Barstow art.bars...@gmail.com wrote: On 4/15/15 5:56 PM, Tab Atkins Jr. wrote: On Wed, Apr 15, 2015 at 7:23 AM, Arthur Barstow art.bars...@gmail.com wrote: https://w3c.github.io/FileAPI/TR.html Note that this version appears to be based off the Overview-FAWD.xml file in the CVS repo, which hasn't been touched in 5 years. The file Overview-FA.xml is much more recent and appears to be what the current file at http://www.w3.org/TR/FileAPI/ is based on (note the relative positions of the FileList and Blob sections - in Overview-FA.xml and the current TR, FileList comes first). I suspect, then, that the file you're referencing is out-of-date and shouldn't be used. I didn't use either of those files but Overview.html, as directed by Arun. (He told me he stopped editing the Overview-FA.xml file some time ago). Oh god, you're right, it looks like Arun has been directly editing the generated HTML since Jan 2013. Confusingly, there's a single commit to Overview-FA.xml in Nov 2014 which just updates the Prev/Current links in the header; the immediately preceding commit is from Jan 2013, though, while Overview.html has been edited repeatedly in that span. Ugh, working with the XML was a lot easier. Darn. Arun, buddy, I'm sorry you had to go through the pain of directly editing generated HTML. ~TJ
Re: PSA: publishing new WD of File API on April 21
On Wed, Apr 15, 2015 at 12:54 PM, Martin Thomson martin.thom...@gmail.com wrote: On 15 April 2015 at 07:26, Arthur Barstow art.bars...@gmail.com wrote: * This spec is now using Github https://w3c.github.io/FileAPI/ That repo is actually https://github.com/w3c/FileAPI/. Since the most obvious github.io link is currently broken, would it make sense to move Overview.html to index.html? Does the name Overview.html hold special meaning? No, it's just an older tradition for specs in some working groups. I also recommend using index.html as the generated file name (and am doing so in my Bikeshedding, which is now underway). ~TJ
Re: template namespace attribute proposal
On Wed, Mar 18, 2015 at 2:06 PM, Ryosuke Niwa rn...@apple.com wrote: On Mar 16, 2015, at 3:14 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: Karl Dubost said: The intersection seems to be: ['a', 'style', 'script', 'track', 'title', 'canvas', 'source', 'video', 'iframe', 'audio', 'font'] Whoops, sorry, forgot about title. We can resolve that conflict in favor of SVG; putting an html title into a template is, I suspect, exceedingly rare. That may be true, but the added complexity of title inside a template element being special (i.e. it's treated as SVG instead of HTML) may not be worth the effort. The HTML parser is already complex enough that using a simple rule of always falling back to HTML if element names are ambiguous may result in better overall developer ergonomics. Possibly, dunno. I could go either way. Consistently resolving ambiguity in one direction could be simpler, but so could resolving in favor of the massively-more-likely one. Spec/impl complexity isn't really of concern here; impls would need an explicit list of elements/namespaces anyway. The spec could possibly get away with a blanket statement, but that opens up the possibility of ambiguity, like image (which should clearly be SVG, despite the parser turning it into img for you). ~TJ
Re: template namespace attribute proposal
[Sorry for the reply-chain breaking; Gmail is being super weird about your message in particular, and won't let me reply directly to it. Some bug.] Karl Dubost said: The intersection seems to be: ['a', 'style', 'script', 'track', 'title', 'canvas', 'source', 'video', 'iframe', 'audio', 'font'] Whoops, sorry, forgot about title. We can resolve that conflict in favor of SVG; putting an html title into a template is, I suspect, exceedingly rare. track/canvas/source/video/iframe/audio are all being removed as the SVGWG switches to allowing HTML elements natively in SVG. ~TJ
Re: template namespace attribute proposal
On Fri, Mar 13, 2015 at 2:09 PM, Jonas Sicking jo...@sicking.cc wrote: Unless the SVG WG is willing to drop support for <script><![CDATA[...]]></script>. But that seems like it'd break a lot of content. Like, on the same line? Because I recall that sort of thing showing up in old HTML tutorials, with the CDATA parts on their own lines. ~TJ
Re: template namespace attribute proposal
On Thu, Mar 12, 2015 at 3:07 AM, Anne van Kesteren ann...@annevk.nl wrote: On Thu, Mar 12, 2015 at 4:32 AM, Benjamin Lesh bl...@netflix.com wrote: What are your thoughts on this idea? I think it would be more natural (HTML-parser-wise) if we special-cased SVG elements, similar to how e.g. table elements are special-cased today. A lot of template-parsing logic is set up so that things work without special effort. Absolutely. Forcing authors to write, or even *think* about, namespaces in HTML is a complete usability failure, and utterly unnecessary. The only conflicts in the namespaces are font (deprecated in SVG2), script and style (harmonizing with HTML so there's no difference), and a (attempting to harmonize API surface). If you just looked at the root element, skipping through <a>s, you could do the same magical mode selection we currently do for tr/etc. Ideally we could do this by just pulling SVG into the HTML namespace, which the SVGWG is comfortable with, but no implementors have felt like doing it yet. :/ ~TJ
Re: template namespace attribute proposal
On Fri, Mar 13, 2015 at 1:27 PM, Benjamin Lesh bl...@netflix.com wrote: I agree completely, Tab, but it's actually too late to stop forcing authors to think about namespaces; the fact I currently have to think about it is the source of this suggestion. You have to think about it today *because we've failed to do things correctly*. That doesn't mean we can't fix it so you can continue to blithely ignore namespaces, like you can otherwise do for everything except the createElement*() functions. The merging of namespaces is the ideal solution, no doubt, but it's probably not a realistic solution in the short or even medium term. It's almost the equivalent of punting. SVG and HTML differ too drastically to just combine them overnight, I suspect. Different types stored in properties, different APIs, etc. On the API level this is completely unproblematic. SVGElement is already a subclass of Element; the namespace really isn't a big deal. The compat issue is just with libraries that branch on the namespace for some reason, and that might be what kills this. Importantly, though, we don't have to merge namespaces to make <template> work correctly. SVG already resolved that <template> inside of SVG *should* be the HTMLTemplateElement, not a brand new svg:template element that acts identically. The <template> element itself already has special parsing rules that cause it to start in particular parser modes, to correctly handle things like <template><tr><td>foo</td></tr></template>; giving it some more rules to correctly handle <template><circle></circle></template> isn't difficult or unrealistic. With that, users of <template> can just ignore namespaces entirely, except for the corner case of <template><a>text</a></template>, which'll get interpreted as the html:a element rather than svg:a. Namespace unification closes this final gap; it's not needed for the rest. It would be far easier/quicker to add an attribute and deprecate it later than get the namespaces merged.
At the very least, it would immediately provide authors something they could polyfill to solve this issue. Deprecation doesn't remove things; everything we add, we should assume is permanent. "Immediately" is a funny term to use when discussing standards. Adding an attribute and having it be useful cross-browser isn't any faster than adding to the special cases for SVG-in-template. ~TJ
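A markup sketch of the authoring experience being argued over in this thread (illustrative only; the "proposed" behavior is the special-casing described above, not something any parser shipped at the time):

```html
<!-- Today: without special-casing, SVG content in a template must be
     wrapped in an explicit <svg> root to parse into the SVG namespace. -->
<template id="today">
  <svg><circle cx="10" cy="10" r="5"/></svg>
</template>

<!-- The behavior argued for above: look at the element name (here
     <circle>, which exists only in SVG) and pick the namespace
     automatically, the same way <template><tr>...</tr></template>
     already special-cases table parts. -->
<template id="proposed">
  <circle cx="10" cy="10" r="5"/>
</template>
```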
Re: template namespace attribute proposal
On Fri, Mar 13, 2015 at 1:48 PM, Jonas Sicking jo...@sicking.cc wrote: On Fri, Mar 13, 2015 at 1:16 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Thu, Mar 12, 2015 at 3:07 AM, Anne van Kesteren ann...@annevk.nl wrote: On Thu, Mar 12, 2015 at 4:32 AM, Benjamin Lesh bl...@netflix.com wrote: What are your thoughts on this idea? I think it would be more natural (HTML-parser-wise) if we special-cased SVG elements, similar to how e.g. table elements are special-cased today. A lot of template-parsing logic is set up so that things work without special effort. Absolutely. Forcing authors to write, or even *think* about, namespaces in HTML is a complete usability failure, and utterly unnecessary. The only conflicts in the namespaces are font (deprecated in SVG2), script and style (harmonizing with HTML so there's no difference), and a (attempting to harmonize API surface). Note that the contents of an HTML script parse vastly differently from an SVG script's. I don't recall if the same is true for style. So the parser sadly still needs to be able to tell an SVG script from an HTML one. I proposed aligning these so that parsing would be the same, but there was more opposition than interest back then. That was back then. The SVGWG is more interested in pursuing convergence now, per our last few F2Fs. ~TJ
Re: Standardising canvas-driven background images
On Fri, Feb 20, 2015 at 1:50 PM, Matthew Robb matthewwr...@gmail.com wrote: On Fri, Feb 20, 2015 at 2:25 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: Images level 4 If/when that spec reappears it would be great if you could reply to this thread with a link or something... Thanks! Here we are: http://dev.w3.org/csswg/css-images-4/#element-notation ~TJ
Re: Standardising canvas-driven background images
On Fri, Feb 20, 2015 at 7:51 AM, Ashley Gullen ash...@scirra.com wrote:
> Forgive me if I've missed past discussion on this feature but I need it so I'm wondering what the status of it is. (Ref: https://www.webkit.org/blog/176/css-canvas-drawing/ and http://updates.html5rocks.com/2012/12/Canvas-driven-background-images, also known as -webkit-canvas() or -moz-element())
>
> The use case I have for it is this: we are building a large web app that could end up dealing with thousands of dynamically generated icons since it deals with large user-generated projects. The most efficient way to deal with this many small images is to basically sprite sheet them on to a canvas 2d context. For example a 512x512 canvas would have room for a grid of 256 different 32x32 icons. (These are drawn scaled down from user-generated content, so they are not known at the time the app loads and so a normal image cannot be used.) To display an icon, a 32x32 div sets its background image to the canvas at an offset, like a normal CSS sprite sheet but with a canvas.
>
> -webkit-canvas solves this, but I immediately ran in to bugs (in Chrome updating the canvas does not always redraw the background image), and as far as I can tell it has an uncertain future so I'm wary of depending on it. The workarounds are:
>
> - toDataURL() - synchronous so will jank the main thread, data URL inflation (+30% size), general insanity of dumping a huge string in to CSS properties
> - toBlob() - asynchronous which raises complexity problems (needs a way of firing events to all dependent icons to update them; updating them requires DOM/style changes; needs to handle awkward cases like the canvas changing while toBlob() is processing; needs to be carefully scheduled to avoid thrashing toBlob() if changes being made regularly e.g. as network requests complete). I also assume this uses more memory, since it effectively requires creating a separate image the same size which is stored in addition to the canvas.
>
> In comparison being able to put a canvas in a background image solves this elegantly: there is no need to convert the canvas or update the DOM as it changes, and it seems the memory overhead would be lower. It also opens up other use cases such as animated backgrounds.
>
> I see there may be security concerns around -moz-element() since it can use any DOM content. This does not appear to be necessary or even useful (what use cases is arbitrary DOM content for?). If video is desirable, then video can already be rendered to canvases, so -webkit-canvas still covers that.
>
> Therefore I would like to propose standardising this feature based off the -webkit-canvas() implementation.

The correct standardized approach is the element() function, defined in Images level 4 (I'd link you, but I think I accidentally killed the spec; wait a bit). -moz-element() is a pre-spec implementation of this that mostly matches the spec.

There aren't any security bugs; this just lets you paint a part of the tree twice. Anything you can do to attack the image generated by element(), you can do to attack the DOM that element() is pointing to. ~TJ
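For concreteness, the sprite-sheet arithmetic Ashley describes (a 512x512 canvas holding a 16x16 grid of 32x32 icons) can be sketched in a few lines; the function name is hypothetical, and the element() usage in the comment is illustrative of the Images 4 proposal rather than a shipping API:

```javascript
// Sketch: compute the background-position offsets for icon #index on a
// sheetSize x sheetSize canvas sprite sheet of iconSize x iconSize icons.
function iconOffset(index, iconSize = 32, sheetSize = 512) {
  const perRow = sheetSize / iconSize;            // 16 icons per row
  const x = (index % perRow) * iconSize;
  const y = Math.floor(index / perRow) * iconSize;
  return { x, y };
}

// A 32x32 div showing icon #17 would then be styled along the lines of:
//   background: element(#sheet);
//   background-position: -32px -32px;
// exactly like a normal CSS sprite sheet, but backed by the live canvas.
```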
Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest
On Thu, Feb 12, 2015 at 1:45 PM, Marc Fawzi marc.fa...@gmail.com wrote:
> this backward compatibility stuff is making me think that the web is built upon the axiom that we will never start over and we must keep piling up new features and principles on top of the old ones

Yup.

> this has worked so far, miraculously and not without overhead, but I can only assume that it's at the cost of growing complexity in the browser codebase. I'm sure you have to manage a ton of code that has to do with old features and old ideas... how long can this be sustained? forever? what is the point in time where the business of retaining backward compatibility becomes a huge nightmare?

When someone comes up with something sufficiently better to be worth abandoning trillions of existing pages for (or duplicating the "read trillions of old pages" engine alongside the new one). ~TJ
Re: Are web components *seriously* not namespaced?
On Fri, Feb 6, 2015 at 8:12 AM, Kurt Cagle kurt.ca...@gmail.com wrote:
> Tab, I spend the vast majority of my time anymore in RDF-land, where namespaces actually make sense (I'm not going to argue on the XML use of namespaces - they are, agreed, ugly and complex). I know that when I've been at Balisage or any of the W3 confabs, the issue of namespaces ex-XML has been hotly debated, and many, many potential solutions proposed. Regardless, I do think that there is a very real need for namespaces in the general sense, if only as a way of being able to assert conceptual domain scope and to avoid collisions (div is the prototypical example here).

Yes, as I've said, I don't disagree that we'll want a namespacing mechanism at some point. It's just not needed at the moment, and we can safely introduce it in the future. ~TJ
Re: Are web components *seriously* not namespaced?
On Thu, Feb 5, 2015 at 3:57 PM, Benjamin Goering b...@livefyre.com wrote:
> Glad to see this. I was 'checking in' on the professional practicalities of custom elements earlier this week, and was pretty bummed when I couldn't use XHTML5 namespaces for my employer's organization.
>
> I build widgets all day. They run in inhospitable environments: websites I'm not in control of. They have so many globals I just can't even. I get planning, execution, and/or distribution friction when the standards prevent me from creating a truly universal web component that will work in all those environments.
>
> To Tab's point, I don't think that will prevent a 90%-sufficient solution, or one that is 99%-sufficient for some subset of the potential market. But I do agree with Kurt that eventually it seems like 'the right way'. It seems valuable today to at least standardize and have a spec for XHTML5 Custom Elements (e.g. <my-vendor:jquery/>). 1% of sites will actually use these in a way that fully validates against XHTML5. But at least web authors and developers will be using the web instead of Contrived JavaScript Embeds. With a vote of confidence (or better yet spec) on the consistency of XHTML5 Custom Elements, I see no reason why I couldn't in the interim use this, and sleep at night knowing it will eventually be the way the web actually works:
>
>   <html xmlns:my-vendor="https://html.my-vendor.com/elements">
>     <span is="my-vendor:jquery" />
>   </html>
>
> or
>
>   <div xmlns="https://html.my-vendor.com/elements">
>     <span is="jquery@~2.9" />
>     <span is="react@^1.3" />
>   </div>
>
> One of the cool things about this is: Let's say in that last example I need to switch vendors or change where in the cloud my elements come from (e.g. QA, Staging, Production). All I need to change is the xmlns URL in that one attribute.

Right now, those are invalid, and the document.register() call will throw an error due to incorrect characters.

Namespaces do not enable this. Switching the url of the script that defines the elements does. That can be done regardless of whether namespaces are used or not, regardless of whether the elements have the same name or different ones.

A namespace is literally nothing more than a convenience API over prefix-based uniquified names, so you can define a long and very-likely-unique prefix name without having to write it over and over again. It does not enable any new or unique programming models or abilities. foo:bar and foo-bar are identical, except that it's possible that foo is a label for a longer and more-likely-unique prefix. ~TJ
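Tab's point that a namespace is nothing more than prefix indirection can be sketched in a few lines of JS. Every name here (the "fv" prefix, its long expansion) is hypothetical:

```javascript
// Sketch: a "namespace" is just a short label for a long, likely-unique
// prefix. Expanding fv:rating to com-livefyre-widgets-rating is the entire
// mechanism; no new programming model is involved.
const ns = { fv: "com-livefyre-widgets" };  // hypothetical prefix mapping

function expand(name) {
  if (!name.includes(":")) return name;     // unprefixed names pass through
  const [prefix, local] = name.split(":");
  return ns[prefix] ? `${ns[prefix]}-${local}` : name;
}
```

Under this view, writing `fv:rating` and writing `com-livefyre-widgets-rating` directly are interchangeable, which is exactly the "a little more verbosity" trade-off described above.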
Re: Are web components *seriously* not namespaced?
On Thu, Feb 5, 2015 at 7:44 PM, Anne van Kesteren ann...@annevk.nl wrote:
> On Thu, Feb 5, 2015 at 2:15 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> Yes, real namespacing does eventually prove necessary as the population grows. That's fine. It's something that can be added organically as necessary; letting everything live in the null namespace first doesn't harm future namespacing efforts.
>
> (It shouldn't, but yet when XML added namespacing it did so in a way that was incompatible with XML itself (see the : and the horrible set of APIs in the DOM we ended up with as a result). And when XHTML came along it used a namespace whereas HTML did not (we later fixed that).)

Yes, I said can be added *organically*. It's always possible to shoot yourself in the foot, as XML Namespaces did, if you really try. ~TJ
Re: Are web components *seriously* not namespaced?
On Fri, Feb 6, 2015 at 12:48 AM, Glen glen...@gmail.com wrote:
> So in other words it *is* a case of "it's good enough". Web components are quite possibly the future of the web, and yet we're implementing them to be good enough in 90% of use cases? jQuery is JavaScript which means that it's different for various reasons:
> 1. It's less important to keep the names short.
> 2. It's possible to rename a plug-in if there is a conflict (e.g. @ http://stackoverflow.com/questions/11898992/conflict-between-two-jquery-plugins-with-same-function-name)
> 3. It's a library, not something built into the browser, which means that if jQuery decides to add some form of namespacing, it doesn't require a major specification and implementation by 5+ major browsers, etc.

Web Components are also JS. Any renaming you do in JS, you can do just as easily in HTML.

>> Complicating things further simply isn't all that necessary.
>
> Complicating it for the developer, or the implementers? I can't speak for the latter, but for developers, this would be an *optional* feature. If you don't have conflicts you don't *have* to alter an element's NS prefix, but specifying the prefix in your HTML would provide rich IDE functionality, so I don't see why anyone would *not* want to do this.

Again, namespaces are nothing more than an indirection mechanism for prefixes, so you can write a long and more-likely-unique prefix as a shorter prefix that you know is unique for your page. No functionality is enabled by namespaces that can't be done without them just as easily but with a little more verbosity.

>> It's something that can be added organically as necessary.
>
> Anne has already made a point about this, but also consider the fact that without real namespacing, we're forced to name based on *potential* conflicts. For example, in the ms- case, ms- may either already exist, or *potentially* exist and be useful, so I name my element mks- instead. Therefore I'm not able to give something the name that I want it to have, for fear of future conflicts.

Anne pointed out that XML Namespaces screwed this up, not that it's not easy to get right. You don't need to fear future conflicts. Googling for a name is often sufficient. You can change later if there is a conflict.

> Even just being able to optionally shorten a custom element's NS prefix can be useful. For example, if a vendor uses excalibur-grid, we can just change that to x-grid and things will be easier to type, cleaner, etc.
>
> Regarding XML, I never even mentioned XML in my initial post, so I'm not sure what all the fuss is about. This can be implemented in a way that supports both HTML *and* XHTML/XML, yet doesn't look at all like XML namespacing. The only important part is the use of URIs; I can see no better way of providing both a unique namespace, as well as an endpoint for gathering human- and machine-readable data about a set of custom elements. Is there something inherently wrong with this? Or is this just about people being too lazy to type a closing tag, because that can remain optional.

Most people who mention namespaces on the web are referring to XML Namespaces, so I assumed you were as well. Your suggestion is shaped exactly like XML Namespaces, with the use of urls as namespace, etc.

>> They [XML namespaces] have a number of terrible affordances
>> Most of them don't commit the same mistakes that XML namespaces did
>
> Such as?

A few are:

* URLs are not a good fit for namespaces. Humans make a number of assumptions about how urls can be changed (capitalization, trailing /, http vs https, www or not, etc) which are often true for real urls due to nice server software, but are not true for namespace urls, which are opaque strings.

* There's no consistency in the URL structure used: some namespaces end in a word, some in a slash, some in a hash, etc.

* You can't actually fetch namespace urls. Again, they're opaque strings, not urls, so there's no guarantee or expectation that there's anything useful on the other side, or that what is on the other side is parseable in any way. As a given XML namespace becomes more popular, fetching the namespace url constitutes a DDOS attack; the W3C, for example, has to employ sophisticated caching to prevent namespace url requests from taking down their website.

* URLs contain a bunch of extra typing baggage that doesn't serve to uniquify anything, just makes it longer to type. The "http://" prefix, for example, is identical for all namespaces (and if it's not, it's one more hurdle for authors to run into). Using a string with a higher information content is better for authors.

* Domain names don't mean much. For example, Dublin Core's namespace starts with "http://purl.org/", which is effectively meaningless.

* Similarly, path components often exist which are worthless and just lengthen the namespace for no uniquifying gain, such as the SVG namespace http://www.w3.org/2000/svg which contains /2000/ for some historical reason (it was minted
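The "namespace urls are opaque strings" point above is easy to demonstrate: comparison is exact string equality, so variants a human would treat as the same resource are entirely distinct namespaces. A small JS sketch:

```javascript
// Sketch: namespace "urls" are compared as opaque strings, so every one of
// these human-equivalent variants is a different namespace from the real
// SVG namespace.
const svgNS = "http://www.w3.org/2000/svg";

const variants = [
  "http://www.w3.org/2000/SVG",   // capitalization changed
  "http://www.w3.org/2000/svg/",  // trailing slash added
  "https://www.w3.org/2000/svg",  // scheme changed
];

const matches = variants.filter(v => v === svgNS).length; // none match
```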
Re: Shadow tree style isolation primitive
On Wed, Feb 4, 2015 at 11:56 PM, Olli Pettay o...@pettay.fi wrote:
> Why do we need shadow DOM (or something similar) at all if we expose it easily to the outside world. One could even now just require that elements in components in a web page have class=component, and then .component could be used as . Sure, it would require :not(.component) usage too. And from DOM APIs side one could easily implement filtering for the contents of components using small script libraries.

Aa;erlhas;dlgpasodifapsldikjf;

I keep hearing this kind of sentiment pop up, and I'm like, have you ever done serious webdev? I know a lot of browser devs haven't, and I don't know if you have or not, but this is the sort of thing that is plain as day if you have.

You don't need strong isolation primitives to do a lot of good. Simple composition helpers lift an *enormous* weight off the shoulders of web devs, and make whole classes of bugs obsolete. Shadow DOM is precisely that composition helper right now. In most contexts, you can't ever touch something inside of shadow DOM unless you're doing it on purpose, so there's no way to "friendly fire" (as Brian puts it).

Stronger isolation does solve some problems, sure. But trying to imply that those are the only problems we need to solve, because they're the only problems related to "explaining the current DOM", shows a serious lack of insight into the types of problems experienced by webdevs today, when developing complex webapps. There is no naming scheme that accomplishes this. There is no amount of discipline that will help. Devs are humans, and webpages are very complicated multi-party computer programs, and helping people organize and manage that complexity is an enormous win.

Existing Shadow DOM composition is a tailored solution to that. If it looks complex, it's because the platform is complex, and so there's a lot of interface to deal with. Its core, though, is just "what if this piece of the document was hidden from the rest of the document by default", and you can't cut away much of Shadow DOM without losing that entirely.

- Separate-but-related rant:

And like, the sort of mindset that can just throw out "Sure, it would require :not(.component) usage too." like it was just some simple thing, easy to implement and do, is amazing. You'd need to spam that EVERYWHERE, on NEARLY ALL of your selectors, and need to do it for EVERY COMPONENT. Heck, to do it correctly, you have to do it on EVERY COMPOUND SELECTOR *within* each selector. It's optimizing in the exact wrong direction; you have to explicitly say every time that you don't want to match against a finite set of components; missing it once, or adding a new component that you haven't expressed in your giant-list-of-exclusions-on-every-selector-in-your-page, means you've got a potential styling bug.

I, just, man. What. I'm unclear how to have a productive discussion when this is entertained as a serious suggestion. ~TJ
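To make the rant concrete, here is a sketch of what the ":not(.component)" approach would actually demand of a stylesheet; the selectors are invented for illustration:

```css
/* Every compound selector in every rule needs the exclusion, for every
   component on the page; a single omission is a potential styling bug. */
nav:not(.component) ul:not(.component) li:not(.component) { display: inline; }
article:not(.component) p:not(.component) a:not(.component) { color: blue; }
aside:not(.component) h2:not(.component) { font-size: 1.2em; }
/* ...and so on, repeated across essentially every selector in the page,
   updated every time a new component is introduced. */
```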
Re: Shadow tree style isolation primitive
On Thu, Feb 5, 2015 at 10:56 AM, Ryosuke Niwa rn...@apple.com wrote:
> On Feb 4, 2015, at 3:20 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> On Wed, Feb 4, 2015 at 11:56 PM, Olli Pettay o...@pettay.fi wrote:
>>> Why do we need shadow DOM (or something similar) at all if we expose it easily to the outside world. One could even now just require that elements in components in a web page have class=component, and then .component could be used as . Sure, it would require :not(.component) usage too. And from DOM APIs side one could easily implement filtering for the contents of components using small script libraries.
>>
>> Aa;erlhas;dlgpasodifapsldikjf; I keep hearing this kind of sentiment pop up, and I'm like, have you ever done serious webdev? I know a lot of browser devs haven't, and I don't know if you have or not, but this is the sort of thing that is plain as day if you have.
>
> That sounds rather demeaning and insulting [1]. public-webapps, or a mailing list of any W3C working group, isn't an appropriate forum to rant.

Most browser devs are not webdevs. This is not an insult nor is it demeaning. I've been a member of the W3C for many years, and problems with browser devs not understanding the issues of real webdevs on the ground have always been rampant. It's a large part of the reason I joined the W3C in the first place, so I could help develop CSS specs that were important for webdevs but being ignored by the browser devs in the group. If you find that demeaning, I'm sorry? It's not something you can ignore; it's always an elephant in the room when trying to solve problems that frustrate webdevs. It really is important to ensure that browser devs sit down and listen to people with real webdev experience (such as, for example, Brian Kardell).

>> You don't need strong isolation primitives to do a lot of good. Simple composition helpers lift an *enormous* weight off the shoulders of web devs, and make whole classes of bugs obsolete. Shadow DOM is precisely that composition helper right now. In most contexts, you can't ever touch something inside of shadow DOM unless you're doing it on purpose, so there's no way to "friendly fire" (as Brian puts it). Stronger isolation does solve some problems, sure. But trying to imply that those are the only problems we need to solve, because they're the only problems related to "explaining the current DOM", shows a serious lack of insight into the types of problems experienced by webdevs today, when developing complex webapps.
>
> While I agree those are problems worth solving, let us recognize and respect that different participants of the working group care about different use cases and are interested in solving different sets of problems.

That's precisely what I'm getting frustrated with: when people make suggestions that composition isn't important, and people can just use better selectors and some discipline to solve those problems, it makes it difficult to have productive conversations. Composition is a massive problem with today's webapps. Isolation is a minor problem that is important for some valuable use-cases, so it shouldn't be ignored, but neither should it be elevated to the sole important thing to be discussed here.

>> There is no naming scheme that accomplishes this. There is no amount of discipline that will help. Devs are humans, and webpages are very complicated multi-party computer programs, and helping people organize and manage that complexity is an enormous win. Existing Shadow DOM composition is a tailored solution to that. If it looks complex, it's because the platform is complex, and so there's a lot of interface to deal with. Its core, though, is just "what if this piece of the document was hidden from the rest of the document by default", and you can't cut away much of Shadow DOM without losing that entirely.
>>
>> ... Separate-but-related rant: And like, the sort of mindset that can just throw out "Sure, it would require :not(.component) usage too." like it was just some simple thing, easy to implement and do, is amazing. You'd need to spam that EVERYWHERE, on NEARLY ALL of your selectors, and need to do it for EVERY COMPONENT. Heck, to do it correctly, you have to do it on EVERY COMPOUND SELECTOR *within* each selector. It's optimizing in the exact wrong direction; you have to explicitly say every time that you don't want to match against a finite set of components; missing it once, or adding a new component that you haven't expressed in your giant-list-of-exclusions-on-every-selector-in-your-page, means you've got a potential styling bug.
>
> However, the use case we're talking here is multiple teams working on a single website potentially stepping on each other's toes; implying the enormous size of the entity working on the website. I have a hard time imagining that any organization of that scale to not have a server-side or pre-deployment build step for the website at which point
Re: Shadow tree style isolation primitive
On Thu, Feb 5, 2015 at 11:03 AM, Olli Pettay o...@pettay.fi wrote:
> On 02/05/2015 01:20 AM, Tab Atkins Jr. wrote:
>> You don't need strong isolation primitives to do a lot of good. Simple composition helpers lift an *enormous* weight off the shoulders of web devs, and make whole classes of bugs obsolete. Shadow DOM is precisely that composition helper right now. In most contexts, you can't ever touch something inside of shadow DOM unless you're doing it on purpose, so there's no way to "friendly fire" (as Brian puts it).
>
> If we want to just help with composition, then we can find simpler model than shadow DOM with its multiple shadow root per host and event handling oddities and what not. (and all the mess with is-in-doc is still something to be sorted out etc.)

Try to. ^_^

>> Stronger isolation does solve some problems, sure. But trying to imply that those are the only problems we need to solve,
>
> No one has tried to imply that. I don't know where you got that.

By your statements implying that composition issues can just be handled by better discipline and some selector modification, in the message I responded to earlier. I'm not sure how to interpret those statements if you didn't mean that composition wasn't worth solving. ~TJ
Re: Are web components *seriously* not namespaced?
On Thu, Feb 5, 2015 at 8:31 AM, Glen glen...@gmail.com wrote:
> I know I'm rather late to the party, but I've been doing a lot of reading lately about web components and related technologies, and the one thing that confounds me is the fact that web components appear not to have any real namespacing.

Prefix-based informal namespacing appears to be more than sufficient for 90%+ of use-cases. It works fine, for example, for the huge collection of jQuery widgets/extensions. Complicating things further simply isn't all that necessary.

We do plan to help solve it at some point, as Dimitri says, as there are some cases where real namespacing is useful. In particular, if you have a name that you can assume is globally unique with high confidence, you can actually share custom elements across documents. Within a single page, however, prefix-based informal namespaces are nearly always sufficient.

XML Namespaces are a pox on the platform, however, and they'll definitely not get reproduced in custom elements. They have a number of terrible affordances. ~TJ
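The "must contain a dash" naming rule that underpins this prefix convention can be sketched roughly as follows. This is a simplified, ASCII-only approximation of the check the HTML spec ended up with (the real rule admits many more Unicode characters), and the helper name is hypothetical:

```javascript
// Sketch of the custom element naming rule: lowercase start, at least one
// dash, and not one of the dash-containing names reserved by SVG/MathML.
const RESERVED = new Set([
  "annotation-xml", "color-profile", "font-face", "font-face-src",
  "font-face-uri", "font-face-format", "font-face-name", "missing-glyph",
]);

function isValidCustomElementName(name) {
  // first char a lowercase letter, a dash somewhere, simple chars elsewhere
  return /^[a-z][a-z0-9._]*-[a-z0-9._-]*$/.test(name) && !RESERVED.has(name);
}
```

The dash requirement is what keeps every future dash-less HTML element name (section, dialog, etc.) conflict-free, which is exactly why informal vendor prefixes like x-grid or excalibur-grid work without a registry.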
Re: Are web components *seriously* not namespaced?
On Thu, Feb 5, 2015 at 12:00 PM, Kurt Cagle kurt.ca...@gmail.com wrote:
> I predict that sometime around 2025, we will end up redefining namespaces because the number of jQuery-like components have ballooned into the millions, the web has descended once again into a sea of non-interoperability, and registries will, once again, have proven to be a bottleneck, as they have EVERY SINGLE TIME they have been implemented.

Yes, real namespacing does eventually prove necessary as the population grows. That's fine. It's something that can be added organically as necessary; letting everything live in the null namespace first doesn't harm future namespacing efforts.

> Of course, they won't be called namespaces, and they'll probably use a dash instead of a colon, and they definitely won't be XML based because everyone knows that XML is EVIL ... (sigh)!

There are more namespacing solutions in heaven and earth, Horatio, than are dreamt of in your XML. Most of them don't commit the same mistakes that XML namespaces did. ~TJ
Re: Shadow tree style isolation primitive
On Wed, Feb 4, 2015 at 11:36 PM, Olli Pettay o...@pettay.fi wrote:
> Why should even !important work if the component wants to use its own colors?

Because that's how !important usually works. If the author has progressed to the point of doing !important, we should assume that they know what they're doing and let it work.

At the end of the day, it should be possible for the outer page to have some way of styling any part of a non-sealed (a la inputs, etc) shadow DOM. Anyone claiming this isn't necessary should be sentenced to a year of doing web dev with jQuery components and hostile clients. ~TJ
Re: [Selection] Support of Multiple Selection (was: Should selection.getRangeAt return a clone or a reference?)
On Sat, Jan 24, 2015 at 9:35 AM, Aryeh Gregor a...@aryeh.name wrote:
> It's not just that it was only implemented by one UA. It's also that even in Firefox, multiple-range selections practically never occur. The only way for a user to create them is to either Ctrl-select multiple things, which practically nobody knows you can do; or select a table column, which is also extremely uncommon; or maybe some other obscure ways. In evidence of this fact, Gecko code doesn't handle them properly either. Ehsan might be able to provide more details on this if you're interested.

Though I believe browsers will soon have much more pressure to support multiple ranges as a matter of course, as increased design with Flexbox and Grid will mean that highlighting from one point to another, in the world of "a range is defined by two DOM endpoints and contains everything between them in DOM order", can mean highlighting random additional parts of the page that are completely unexpected. Switching to a model of visual highlighting for selections will require multi-range support. In other words, it'll switch from being a rare thing to much more common. ~TJ
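The Flexbox point can be sketched without a real DOM: with the CSS `order` property, visual order diverges from DOM order, so a single DOM range between two visually adjacent endpoints drags in content that isn't visually between them. A toy JS model (plain arrays, not the real Selection/Range APIs):

```javascript
// Four flex items in DOM order a, b, c, d, reordered visually via `order`.
const items = [
  { id: "a", order: 3 }, { id: "b", order: 1 },
  { id: "c", order: 4 }, { id: "d", order: 2 },
];

// Visual order is the sort by `order`: b, d, a, c.
const visual = [...items].sort((x, y) => x.order - y.order).map(i => i.id);

// A DOM range is everything between two endpoints *in DOM order*.
function domRange(ids, from, to) {
  const [i, j] = [ids.indexOf(from), ids.indexOf(to)].sort((x, y) => x - y);
  return ids.slice(i, j + 1);
}

// Selecting from "b" to "d" (adjacent on screen) as one DOM range also
// captures "c", which is nowhere near them visually.
const domOrder = items.map(i => i.id);
const selected = domRange(domOrder, "b", "d");
```

Representing the visually contiguous b-to-d highlight faithfully requires two ranges (one around "b", one around "d"), which is the multi-range pressure described above.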
Re: Minimum viable custom elements
On Thu, Jan 15, 2015 at 12:27 PM, Ryosuke Niwa rn...@apple.com wrote:
> On Jan 15, 2015, at 11:47 AM, Brian Kardell bkard...@gmail.com wrote:
>> Not to sidetrack the discussion but Steve Faulker made what I think was a valid observation and I haven't seen a response... Did I miss it?
>
> When and in which thread? Could you give us a pointer?

Earlier in *this* thread. ~TJ
Re: [Selection] Support of Multiple Selection (was: Should selection.getRangeAt return a clone or a reference?)
On Tue, Jan 13, 2015 at 2:05 PM, Mats Palmgren m...@mozilla.com wrote:
> On 01/12/2015 07:59 PM, Ben Peters wrote:
>> Multiple selection is an important feature in the future.
>
> Indeed, there are many important use cases for it. Here are some use cases that are implemented using multi-range selections in Gecko today:
> * visual selection of bidirectional text
> * selecting table columns
> * selecting multiple fragments of arbitrary content (just hold CTRL)
> * selection with disjoint unselected islands due to CSS user-select:none
> * mapping spell-checking errors
> * highlighting matched words for Find in Page
> etc

If we ever want to make selection of text in pages using Flexbox or Grid make sense, we'll need multiple selections, too. The assumption that you can accurately capture a visually-contiguous block of content with two DOM endpoints is rapidly becoming untenable. ~TJ
Re: Shadow tree style isolation primitive
On Fri, Jan 9, 2015 at 5:40 AM, Anne van Kesteren ann...@annevk.nl wrote:
> I'm wondering if it's feasible to provide developers with the primitive that the combination of Shadow DOM and CSS Scoping provides. Namely a way to isolate a subtree from selector matching (of document stylesheets, not necessarily user and user agent stylesheets) and requiring a special selector, such as >>>, to pierce through the boundary. This is a bit different from the `all` property as that just changes the values of all properties; it does not make a selector such as div no longer match.
>
> So to be clear, the idea is that if you have a tree such as
>
>   <section class=example>
>     <h1>Example</h1>
>     <div>
>       ...
>     </div>
>   </section>
>
> then a simple div selector would not match the innermost div if we isolated the section. Instead you would have to use section >>> div or some such. Or perhaps associate a set of selectors and style declarations with that subtree in some manner.

It's probably feasible, sure. But I'm not sure that it's necessary, or that browsers will go for it. Using a shadow root as the isolation boundary is *really convenient*, because it's a separate tree entirely; the fact that outside rules don't apply within it, and inside rules don't apply outside, falls out for free.

Let's assume we did it, though. We'd have to have some mechanism for defining an isolation boundary, and denoting whether rules were inside or outside the boundary. This sounds like an at-rule, like:

  @isolate .example {
    h1 { ... }
    div { ... }
  }

Now, a problem here is that you have a conflict between nesting isolated things and specifying isolation. Say you have foo and bar elements, both of which need to be isolated. You'd think you could just write:

  @isolate foo { ... }
  @isolate bar { ... }

But this won't work! If you have markup like <foo><bar>...</bar></foo>, the bar there is inside the foo's isolation boundary, so the @isolate rule can't find it. You'd need to *also* nest the @isolate bar rule (and all its styling rules) within the foo one, and vice versa. The effect of this on *three* mutually isolated components is, obviously, terrible; let's not even mention trying to use multiple modules together that weren't explicitly designed together.

Alternately, say that it does work: the @isolate selector pierces through isolation boundaries. Then you're still screwed, because if the outer page wants to isolate .example blocks, but within your component you use .example normally, without any isolation, whoops! Suddenly your .example blocks are isolated, too, and getting weird styles applied to them, while your own styles break since they can't cross the unexpected boundary.

Basically, trying to smuggle private state into a global declarative language is a bitch. So, CSS is out. We can't reasonably do this within the confines of CSS application. It needs to be handled at a different layer.

We could do it in HTML, potentially: some new global attribute that creates a styling boundary that prevents outside styling from targeting anything inside. Then you can just use standard <style scoped> to apply your own styles within the boundary; as long as the scoping root is inside the boundary, styling is allowed. But that means you have to add an attribute to every element that uses this styling boundary, and move your style info into inline scoped blocks. That's annoying. :/

Let's check out JS. If you can mark some elements as always being styling boundaries, then whenever they're constructed, whether manually or via the parser, they'll get the right mechanics automatically. And since this is JS, it shouldn't be too hard to say "always attach this stylesheet to the element whenever it gets created", or perhaps introduce some explicit ability to do this in the platform. This last one, though, is pretty much exactly Custom Elements, just with the children staying in the light tree rather than being moved into a shadow tree.

But keeping them in the light tree has complications; it means that everything in the platform needs to be made aware of the isolation boundary. Should qSA respect the isolation boundaries or not? Depends on what you're using it for. What about things that aren't CSS at all, like getElementsByTagName()? That's equivalent to a qSA with the same argument, but it's not a selector, per se. Manual tree-walking would also need to be made aware of this, or else you might accidentally descend into something that wants isolation. Shadow DOM at least gives an answer to all of these, by putting the elements in a separate tree. You don't need to think of every one individually, or deal with inconsistent design when someone forgets to spec their new tree-searching thing to respect the boundary.

So, do you still think it's worth it to try to subdivide the functionality further? I think it's packaged in a reasonable way at the moment. ~TJ
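The "everything in the platform needs to be made aware of the boundary" point can be illustrated with a toy tree walker over plain objects; the `isolated` flag and `crossBoundaries` parameter are hypothetical names, not real DOM API:

```javascript
// Toy sketch: when isolated subtrees stay in the light tree, every single
// tree-search API must make an explicit decision about the boundary.
function queryAll(node, tag, crossBoundaries = false, results = []) {
  for (const child of node.children || []) {
    if (child.tag === tag) results.push(child.id);
    // The special case that qSA, getElementsByTagName, and every manual
    // tree walk would each have to remember to implement:
    if (child.isolated && !crossBoundaries) continue;
    queryAll(child, tag, crossBoundaries, results);
  }
  return results;
}

const tree = { children: [
  { id: "outer-div", tag: "div", children: [] },
  { id: "widget", tag: "x-widget", isolated: true, children: [
      { id: "inner-div", tag: "div", children: [] },
  ]},
]};
```

With a shadow tree the inner div simply isn't in the searched tree at all, so no API needs this special case; that is the packaging argument being made above.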
Re: Shadow tree style isolation primitive
On Mon, Jan 12, 2015 at 2:14 PM, Ryosuke Niwa rn...@apple.com wrote:
> On Jan 12, 2015, at 1:28 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> Let's assume we did it, though. We'd have to have some mechanism for defining an isolation boundary, and denoting whether rules were inside or outside the boundary. This sounds like an at-rule, like:
>>
>> @isolate .example {
>>   h1 { ... }
>>   div { ... }
>> }
>>
>> Now, a problem here is that you have a conflict between nesting isolated things and specifying isolation. Say you have foo and bar elements, both of which need to be isolated. You'd think you could just write:
>>
>> @isolate foo { ... }
>> @isolate bar { ... }
>>
>> But this won't work! If you have markup like <foo><bar>...</bar></foo>, the bar there is inside the foo's isolation boundary, so the @isolate rule can't find it. You'd need to *also* nest the @isolate bar rule (and all its styling rules) within the foo one, and vice versa. The effect of this on *three* mutually isolated components is, obviously, terrible; let's not even mention trying to use multiple modules together that weren't explicitly designed together.
>>
>> Alternately, say that it does work - the @isolate selector pierces through isolation boundaries. Then you're still screwed, because if the outer page wants to isolate .example blocks, but within your component you use .example normally, without any isolation, whoops! Suddenly your .example blocks are isolated, too, and getting weird styles applied to them, while your own styles break since they can't cross the unexpected boundary.
>
> Another alternative. We can add a host language dependent mechanism such as an element or an attribute to end the current isolation, just like insertion points in a shadow DOM would. Better yet, we can provide this mechanism in CSS. e.g.
>
> @isolate foo integrates(bar) { ... }
> @isolate bar { ... }
>
> (I'm not proposing this exact syntax. We can certainly do better.)
Yeah, something like that would work, but it also means you need to account for all the things that might want to be isolated in your component. That's relatively clumsy.

>> Let's check out JS. If you can mark some elements as always being styling boundaries, then whenever they're constructed, whether manually or via the parser, they'll get the right mechanics automatically. And since this is JS, it shouldn't be too hard to say "always attach this stylesheet to the element whenever it gets created", or perhaps introduce some explicit ability to do this in the platform.
>
> There is a huge benefit in providing declarative alternative. There are many use cases in which style/selector isolations are desirable on an existing element such as section and article elements.

Sure, I agree. I'm just pointing out that trying to implement it with our existing declarative mechanisms is going to be at least somewhat clumsy and ugly. Having an explicit tree that delineates the isolation context makes things a little clearer, imo.

>> This last one, though, is pretty much exactly Custom Elements, just with the children staying in the light tree rather than being moved into a shadow tree.
>>
>> But keeping them in the light tree has complications; it means that everything in the platform needs to be made aware of the isolation boundary. Should qSA respect the isolation boundaries or not? Depends on what you're using it for. What about things that aren't CSS at all, like getElementsByTagName()? That's equivalent to a qSA with the same argument, but it's not a selector, per se. Manual tree-walking would also need to be made aware of this, or else you might accidentally descend into something that wants isolation.
>>
>> Shadow DOM at least gives an answer to all of these, by putting the elements in a separate tree. You don't need to think of every one individually, or deal with inconsistent design when someone forgets to spec their new tree-searching thing to respect the boundary.
> Let's not conflate style isolation with isolation of DOM subtrees. They're two distinct features. Even though I do agree it might be desirable to have both in many important use cases, there are use cases in which we don't need subtree isolations.

I'm not trying to, I'm pointing out that style isolation, as a concept, seamlessly blends into DOM isolation as you move across API surfaces. I don't think there's a clear and obvious point where you can draw the line and say "it only applies up to here, no further", except by going all the way to subtree isolation.

~TJ
Re: Shadow tree style isolation primitive
[oof, somehow your latest response flattened all of the quotes]

On Mon, Jan 12, 2015 at 4:18 PM, Ryosuke Niwa rn...@apple.com wrote:
> On Jan 12, 2015, at 4:10 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> ? I didn't mention DOM APIs. I'm referring back to the example you're replying to - if you use a bar element inside your foo component, and you know that bar has isolation styles, you have to specifically call that out inside your foo styling so that it (a) is shielded from your foo styles, and (b) is able to pick up the global definition for bar styles. This is relatively clumsy. Some of the other solutions attach the "I want to be isolated" information to the element itself more directly, so you don't have to worry about what you put inside of yourself.
>
> This is no more clumsy than defining an insertion points in shadow DOM. Or am I misunderstanding you?

Yeah. In Shadow DOM, you can just *use* the bar element, without having to think about it. If it happens to also use shadows to isolate its contents, that's irrelevant to you; you don't have to make a single change to your foo component in order to recognize that. That's nice composition, which isn't *strictly* necessary in any solution, but it's a really good thing.

>> I listed a number of APIs in the text you're responding to, all of which may or may not want to pay attention to style isolation, depending on the use-case. I'm not saying you necessarily need DOM isolation for any given use-case. I'm saying that there are a lot of APIs that query or walk the DOM, and whether they should pay attention to a style isolation boundary is a question without clear answers.
>
> I don't understand what you mean here. As far as I know, there are only two sensible options here:
>
> 1. Style isolation implies DOM subtree isolation in all DOM APIs
> 2. Style isolation doesn't affect DOM APIs at all
>
> Shadow DOM does 1. I'm suggesting that we need a mechanism to do 2.
> It's not terrible if we introduced @isolate to do 2 and also provided shadow DOM to do 1. In that world, shadow DOM is a syntax sugar around @isolate in the CSS land with DOM API implications.

I mean, those are two possible options. They're not the only ones. For example, you could say that all selectors pay attention to the isolation boundary, so qSA is affected. That's *a* consistent answer, and could be very reasonable - people often use qSA to do styling-related things, and having it respect the style boundaries makes sense there.

I'm saying there are multiple places you can draw the line. I think there's a nice defensible spot at the point you end up with when you do DOM isolation - everything that cares about the DOM tree (which includes CSS selectors, defined in terms of the DOM tree) gets locked out by default. Anywhere else has arguments for it, but I don't think any of them are particularly more compelling than any other.

~TJ
Re: Shadow tree style isolation primitive
On Mon, Jan 12, 2015 at 3:51 PM, Ryosuke Niwa rn...@apple.com wrote:
> On Jan 12, 2015, at 2:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> On Mon, Jan 12, 2015 at 2:14 PM, Ryosuke Niwa rn...@apple.com wrote:
>>> On Jan 12, 2015, at 1:28 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>>>> Let's assume we did it, though. We'd have to have some mechanism for defining an isolation boundary, and denoting whether rules were inside or outside the boundary. This sounds like an at-rule, like:
>>>>
>>>> @isolate .example {
>>>>   h1 { ... }
>>>>   div { ... }
>>>> }
>>>>
>>>> Now, a problem here is that you have a conflict between nesting isolated things and specifying isolation. Say you have foo and bar elements, both of which need to be isolated. You'd think you could just write:
>>>>
>>>> @isolate foo { ... }
>>>> @isolate bar { ... }
>>>>
>>>> But this won't work! If you have markup like <foo><bar>...</bar></foo>, the bar there is inside the foo's isolation boundary, so the @isolate rule can't find it. You'd need to *also* nest the @isolate bar rule (and all its styling rules) within the foo one, and vice versa. The effect of this on *three* mutually isolated components is, obviously, terrible; let's not even mention trying to use multiple modules together that weren't explicitly designed together.
>>>>
>>>> Alternately, say that it does work - the @isolate selector pierces through isolation boundaries. Then you're still screwed, because if the outer page wants to isolate .example blocks, but within your component you use .example normally, without any isolation, whoops! Suddenly your .example blocks are isolated, too, and getting weird styles applied to them, while your own styles break since they can't cross the unexpected boundary.
>>>
>>> Another alternative. We can add a host language dependent mechanism such as an element or an attribute to end the current isolation, just like insertion points in a shadow DOM would. Better yet, we can provide this mechanism in CSS. e.g.
>>>
>>> @isolate foo integrates(bar) { ... }
>>> @isolate bar { ... }
>>>
>>> (I'm not proposing this exact syntax. We can certainly do better.)
>>
>> Yeah, something like that would work, but it also means you need to account for all the things that might want to be isolated in your component. That's relatively clumsy.
>
> Examples? Are you talking about DOM APIs such as querySelectorAll and alike? Then, please refer to my other reply [1] in which I listed use cases that involve no author scripts.

? I didn't mention DOM APIs. I'm referring back to the example you're replying to - if you use a bar element inside your foo component, and you know that bar has isolation styles, you have to specifically call that out inside your foo styling so that it (a) is shielded from your foo styles, and (b) is able to pick up the global definition for bar styles. This is relatively clumsy. Some of the other solutions attach the "I want to be isolated" information to the element itself more directly, so you don't have to worry about what you put inside of yourself.

>>>> This last one, though, is pretty much exactly Custom Elements, just with the children staying in the light tree rather than being moved into a shadow tree.
>>>>
>>>> But keeping them in the light tree has complications; it means that everything in the platform needs to be made aware of the isolation boundary. Should qSA respect the isolation boundaries or not? Depends on what you're using it for. What about things that aren't CSS at all, like getElementsByTagName()? That's equivalent to a qSA with the same argument, but it's not a selector, per se. Manual tree-walking would also need to be made aware of this, or else you might accidentally descend into something that wants isolation.
>>>>
>>>> Shadow DOM at least gives an answer to all of these, by putting the elements in a separate tree. You don't need to think of every one individually, or deal with inconsistent design when someone forgets to spec their new tree-searching thing to respect the boundary.
>>>
>>> Let's not conflate style isolation with isolation of DOM subtrees. They're two distinct features. Even though I do agree it might be desirable to have both in many important use cases, there are use cases in which we don't need subtree isolations.
>>
>> I'm not trying to, I'm pointing out that style isolation, as a concept, seamlessly blends into DOM isolation as you move across API surfaces.
>
> I don't see any connection between the two. Many of the use cases I listed [1] require us to have DOM isolations.

I listed a number of APIs in the text you're responding to, all of which may or may not want to pay attention to style isolation, depending on the use-case. I'm not saying you necessarily need DOM isolation for any given use-case. I'm saying that there are a lot of APIs that query or walk the DOM, and whether they should pay attention to a style isolation boundary is a question without clear answers.

~TJ
Re: Shadow tree style isolation primitive
[ryosuke, your mail client keeps producing flattened replies. maybe send as plain-text, not HTML?]

On Mon, Jan 12, 2015 at 5:23 PM, Ryosuke Niwa rn...@apple.com wrote:
> On Jan 12, 2015, at 4:59 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> On Mon, Jan 12, 2015 at 4:18 PM, Ryosuke Niwa rn...@apple.com wrote:
>>> On Jan 12, 2015, at 4:10 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>>>> ? I didn't mention DOM APIs. I'm referring back to the example you're replying to - if you use a bar element inside your foo component, and you know that bar has isolation styles, you have to specifically call that out inside your foo styling so that it (a) is shielded from your foo styles, and (b) is able to pick up the global definition for bar styles. This is relatively clumsy. Some of the other solutions attach the "I want to be isolated" information to the element itself more directly, so you don't have to worry about what you put inside of yourself.
>>>
>>> This is no more clumsy than defining an insertion points in shadow DOM. Or am I misunderstanding you?
>>
>> Yeah. In Shadow DOM, you can just *use* the bar element, without having to think about it.
>
> I don't know what you mean by one doesn't have to think about it. The style applied on bar won't propagate into the shadow DOM by default [1] unless we use /deep/ or [2]

The style defined for bar *in bar's setup code* (that is, in a <style> contained inside bar's shadow tree) works automatically without you having to care about what bar is doing. bar is like a replaced element - it has its own rendering, and you can generally just leave it alone to do its thing.

In the previous examples, we weren't talking about defining styling for bars that are specifically inside of foos, just how to style bar generically, regardless of its context. Current shadow DOM makes that easy to do without requiring the different components to know about each other in any way; the declarative CSS mechanisms we were previously discussing did not.
>> I mean, those are two possible options. They're not the only ones. For example, you could say that all selectors pay attention to the isolation boundary, so qSA is affected. That's *a* consistent answer, and could be very reasonable - people often use qSA to do styling-related things, and having it respect the style boundaries makes sense there.
>>
>> I'm saying there are multiple places you can draw the line. I think there's a nice defensible spot at the point you end up with when you do DOM isolation - everything that cares about the DOM tree (which includes CSS selectors, defined in terms of the DOM tree) gets locked out by default. Anywhere else has arguments for it, but I don't think any of them are particularly more compelling than any other.
>
> What are other sensible alternatives? I agree there are other options but they aren't sensible as far as I'm concerned.
>
> [1] http://jsfiddle.net/seyL1vqn/
> [2] http://jsfiddle.net/seyL1vqn/1/

I listed several in the text you're responding to, and previous replies.

~TJ
Re: Shadow tree style isolation primitive
On Mon, Jan 12, 2015 at 5:59 PM, Ryosuke Niwa rn...@apple.com wrote:
> On Jan 12, 2015, at 5:41 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> [ryosuke, your mail client keeps producing flattened replies. maybe send as plain-text, not HTML?]
>
> Weird. I'm not seeing that at all on my end.

It's sending HTML-quoted stuff, which doesn't survive the flattening to plain-text that I and a lot of others do. Plain-text is more interoperable.

>> The style defined for bar *in bar's setup code* (that is, in a <style> contained inside bar's shadow tree) works automatically without you having to care about what bar is doing. bar is like a replaced element - it has its own rendering, and you can generally just leave it alone to do its thing.
>
> If that's the behavior we want, then we should simply make @isolate pierce through isolates. You previously mentioned that:
>
> On Jan 12, 2015, at 1:28 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> Alternately, say that it does work - the @isolate selector pierces through isolation boundaries. Then you're still screwed, because if the outer page wants to isolate .example blocks, but within your component you use .example normally, without any isolation, whoops! Suddenly your .example blocks are isolated, too, and getting weird styles applied to them, while your own styles break since they can't cross the unexpected boundary.
>
> But this same problem seems to exist in shadow DOM as well. We can't have a bar inside a foo behave differently from ones outside foo since all bar elements share the same implementation.

I agree, yes! But pay attention to precisely what I said: it's problematic to, for example, have a command to "isolate all class=example elements" pierce through isolation boundaries, because classes aren't expected to be unique in a page between components - it's very likely that you'll accidentally hit elements that aren't supposed to be isolated.
It's okay to have *element name* isolations pierce, though, because we expect all elements with a given tagname to be the same kind of thing (and Web Components in general is built on this assumption; we don't scope the tagnames in any way). But then we're not actually providing selectors to the isolate mechanism, we're just providing tagnames, and having that affect the global registry of tagnames. That's fine, it's just a different type of solution, with different contours, and it's much closer to normal web components stuff. (And thus it makes more sense to stick as close to web components as reasonable, to reduce the number of slightly-different concepts authors have to think about.) ~TJ
Re: Custom element design with ES6 classes and Element constructors
On Mon, Jan 12, 2015 at 5:16 AM, Anne van Kesteren ann...@annevk.nl wrote:
> On Sun, Jan 11, 2015 at 9:13 PM, Domenic Denicola d...@domenic.me wrote:
>> However, I don't understand how to make it work for upgraded elements at all
>
> Yes, upgrading is the problem. There's two strategies as far as I can tell to maintain a sane class design:
>
> 1) There is no upgrading. We synchronously invoke the correct constructor. I've been trying to figure out the drawbacks, but I can't find the set of mutation events problems that relates to this. One obvious drawback is needing to have all the code in place so you might need a way to delay the parser (return of synchronous script loading).

That's the issue - you have to have all custom element definitions loaded before any of your app is allowed to load, or else you'll have confusing errors where your elements just don't work, or work in racy conditions because you're racing an async script download against the download+parse of the rest of the document.

> 2) As you indicate, upgrading becomes replacing. This used to be the old model and got eventually killed through https://www.w3.org/Bugs/Public/show_bug.cgi?id=21063 though there's no clear summary as to why that happened. Issues seem to be: mutation observer spam, dangling references, attributes, event listeners.

Yeah, as you say, this is also likely to be racy and bug-prone - sometimes your events stick around (because the script that set them ran after the script that initialized the element) and sometimes they don't (because the race went the other way). Even in the lack of races, more non-obvious ordering constraints are confusing to authors.

> Forever prototype munging seems really broken too so we should really revisit these two approaches to custom elements I think.

Proto munging isn't even that big of a deal. It's the back-end stuff that's kinda-proto but doesn't munge that's the problem. This is potentially fixable if we can migrate more elements out into JS space.

~TJ
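The "dangling references" and "event listeners" hazards of strategy 2 can be simulated with plain objects. This is a toy model, not the real DOM or Custom Elements API: once an "upgrade" swaps in a new object, any reference captured before the swap, and any state hung off it, silently goes stale.

```javascript
// Simulation of "upgrading becomes replacing": the parser first creates
// a plain element, user code grabs a reference and attaches a listener,
// then the late-loading definition "upgrades" by swapping in a new
// object - leaving the old reference (and its listener) on an abandoned
// node. (PlainElement/MyFancyElement are stand-ins, not real DOM types.)
class PlainElement {
  constructor(tag) {
    this.tag = tag;
    this.listeners = [];
  }
  addEventListener(type, fn) {
    this.listeners.push([type, fn]);
  }
}
class MyFancyElement extends PlainElement {}

// Parser builds the tree before the definition has loaded:
const tree = { child: new PlainElement("my-fancy") };

// Early script captures a reference and registers a listener:
const earlyRef = tree.child;
earlyRef.addEventListener("click", () => {});

// Definition loads later; "upgrade" replaces the node in the tree:
tree.child = new MyFancyElement("my-fancy");

// The tree now holds the upgraded element, but the early reference and
// its listener are stranded on the abandoned object.
```

This is exactly the class of bug the thread describes as racy: whether the listener survives depends on whether the registering script ran before or after the definition loaded.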
Re: Shadow tree style isolation primitive
On Fri, Jan 9, 2015 at 8:08 AM, Dimitri Glazkov dglaz...@google.com wrote:
> Here's an attempt from 2012. This approach doesn't work (the trivial plumbing mentioned in the doc is actually highly non-trivial), but maybe it will give some insights to find a proper solution: https://docs.google.com/document/d/1x2CBgvlXOtCde-Ui-A7K63X1v1rPPuIcN2tCZcipBzk/edit?usp=sharing

tl;dr: Cramming a subtree into a TreeScope container and then hanging that off the DOM would do the job for free (because it bakes all that functionality in).
Re: Help with WebIDL v1?
On Mon, Dec 1, 2014 at 12:57 PM, Travis Leithead travis.leith...@microsoft.com wrote:
> At TPAC, I mentioned wanting to help move along WebIDL v1 to REC. Can you enumerate the next steps, and where I might be able to help? Thanks!

Is there any actual value in doing this, since v2 has many additions, improvements, and bug fixes over the contents of v1?

~TJ
Re: Broken links
On Sun, Nov 30, 2014 at 4:28 AM, Jakub Mareda jmar...@seznam.cz wrote:
> Hello, I'm investigating how to actually allow the user to copy image data from a web application. I have encountered broken links in the specification: http://dev.w3.org/2006/webapi/clipops/clipops.html#h2_apis-from-other-specifications
>
> Click setData and you'll be redirected to a page that doesn't seem to be relevant. I'd actually be quite happy if I could see how setData works for binary data.

I'm sorry you accidentally landed on that spec; it's an old and desperately obsolete document that is, unfortunately, not marked as such. You should be reading the latest version of that document, at https://html.spec.whatwg.org/#dnd.

~TJ
Re: CfC: publish a WG Note of Fullscreen; deadline November 14
On Sat, Nov 8, 2014 at 5:43 AM, Domenic Denicola d...@domenic.me wrote:
> From: Arthur Barstow [mailto:art.bars...@gmail.com]
>> OK, so I just checked in a patch that sets the Latest Editor's Draft to point to Anne's document https://dvcs.w3.org/hg/fullscreen/raw-file/default/TR.html.
>
> I think it would be ideal to change the label to e.g. "See Instead" or "Maintained Version" or "Replaced By". Framing the WHATWG as a source of "Editor's Drafts" for the W3C is unnecessarily combative.

I use a "replaced by" wording on specs I've moved elsewhere; see https://tabatkins.github.io/specs/css-color/ for an example.

~TJ
Re: [Imports]: Styleshet cascading order clarification
On Mon, Nov 3, 2014 at 7:28 AM, Gabor Krizsanits gkrizsan...@mozilla.com wrote:
> During our last meeting we all seemed to agree that defining/implementing order for style-sheets in imports is super hard (if possible) and will bring more pain than it's worth for the web (aka. let's not make an already over-complicated system twice as complicated for very little benefit). And the consensus was that we should just not allow global styles in imports. Some months have passed but I still don't see any update on the spec in this regard, so I'm just double checking that we're still planning to do this, or if anything has changed since then.

Out of curiosity, why is it hard? Without much background in the implementation matters, it doesn't seem that a <link rel=import> that contains a <link rel=stylesheet> should be any different than a <link rel=stylesheet> that contains an @import rule.

~TJ
Re: Push API and Service Workers
On Tue, Oct 21, 2014 at 7:25 AM, Erik Corry erikco...@google.com wrote:
> * Push doesn't actually need SW's ability to intercept network communications on behalf of a web page.
> * You can imagine a push-handling SW that does all sorts of complicated processing of notifications, downloading things to a local database, but does not cache/intercept a web page.
> * This ties into the discussion of whether it should be possible to register a SW without giving it a network-intercept namespace

As was discussed over in https://github.com/slightlyoff/ServiceWorker/issues/445#issuecomment-60304515 earlier today, you need a scope for all uses of SW, because you need to *request permission* on a *page*, not within a SW (so the user has appropriate context on whether to grant the permission or not), and the scope maps the page to the SW that the registration is for.

(The permission grant is actually per-origin, not per-scope/SW, but the registration itself is per-scope/SW, and it has to be done from within a page context because there *might* be a permission grant needed.)

~TJ
Re: Questions on the future of the XHR spec, W3C snapshot
On Fri, Oct 17, 2014 at 6:05 PM, Domenic Denicola dome...@domenicdenicola.com wrote:
> No need to make this a "vs."; we're all friends here :). FWIW previous specs which have needed to become abandoned because they were superseded by another spec have been re-published as NOTEs pointing to the source material. That is what I would advise for this case. Examples:
>
> - http://www.w3.org/TR/components-intro/
> - https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html
> - http://lists.w3.org/Archives/Public/www-style/2014Oct/0295.html (search for "Fullscreen")

CSS just did it for a bunch more specs, too, which had been hanging around since forever without any update or interest. This sounds like the best way to go.

~TJ
Re: [admin] Towards making ED boilerplates more useful and consistency
On Thu, Sep 4, 2014 at 5:43 AM, Arthur Barstow art.bars...@gmail.com wrote:
> Hi Editors, All,
>
> Speaking of ED boilerplate data ... do we want to try to get some consistency regarding boilerplate data in our EDs? We have quite a bit of variation now. For example, Clipboard and others are toward the more minimalist end of the spectrum: http://dev.w3.org/2006/webapi/clipops/clipops.html
>
> Whereas, the Manifest spec's boilerplate data is more thorough: http://w3c.github.io/manifest/
>
> I personally prefer the Manifest approach (especially a link to the spec's bugs/issues and the comment list). Should we try to get more consistency, and if so, what data should be the minimal recommended set?

We've found in the CSSWG that linking to issue tracking is indeed helpful, when the spec uses anything more than email and inline issues. We also provide feedback information; see Color http://dev.w3.org/csswg/css-color/ for one example.

~TJ
Re: {Spam?} Re: [xhr]
On Wed, Sep 3, 2014 at 12:45 PM, Glenn Maynard gl...@zewt.org wrote:
> My only issue is the wording: it doesn't make sense to have normative language saying you must not use this feature. This should be a non-normative note warning that this shouldn't be used, not a normative requirement telling people that they must not use it. (This is a more general problem--the use of normative language to describe authoring conformance criteria is generally confusing.)

This is indeed just that general problem that some people have with normative requirements on authors. I've got no problem with normatively requiring authors to do (or not do) things; the restrictions can then be checked in validators or linting tools, and give those tools a place to point to as justification.

~TJ
Re: XMLHttpRequest: uppercasing method names
On Tue, Aug 12, 2014 at 6:26 AM, Anne van Kesteren ann...@annevk.nl wrote:
> In https://github.com/slightlyoff/ServiceWorker/issues/120 the question came up whether we should perhaps always uppercase method names as that is what people seem to expect. mnot seemed okay with adding appropriate advice on the HTTP side. The alternative is that we stick with our current subset and make that consistent across APIs, and treat other method names as case-sensitive. I somewhat prefer always uppercasing, but that would require changes to XMLHttpRequest.

I prefer making them all case-insensitive, which I guess means always uppercasing. It's not a strong desire, but it seems silly to require a particular, unusual casing for this kind of thing.

~TJ
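For what it's worth, the compromise Fetch eventually specified matches the "current subset" approach Anne mentions: a short list of common methods is matched case-insensitively and byte-uppercased, while any other method is passed through case-sensitively. A sketch of that normalization rule (my paraphrase, not spec text):

```javascript
// Methods that get normalized (uppercased) when matched
// case-insensitively; everything else is left untouched.
const NORMALIZED_METHODS = ["DELETE", "GET", "HEAD", "OPTIONS", "POST", "PUT"];

function normalizeMethod(method) {
  const upper = method.toUpperCase();
  return NORMALIZED_METHODS.includes(upper) ? upper : method;
}
```

One well-known consequence of this rule is that `patch` is *not* normalized, so a lowercase PATCH request really does go out with a lowercase method name, which many servers reject.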
Re: =[xhr]
On Aug 1, 2014 8:16 AM, nmork_consult...@cusa.canon.com wrote:
> In this case, a freeze on all browser operations is desirable. The thread cannot be interrupted, and if it is interrupted (by browser closure or other such) then the transactions are immediately stopped and no update will occur (this is the most desirable outcome.)

Assuming you're handling transactions yourself, using async XHR has no effect on this. (The browser doesn't provide any transactions for you.) Async XHR doesn't continue after tab closure.

> Async is not desirable, since it gives control back to the browser and the user has a false impression that interaction may be ok or even desirable. In this case it is not, it is an all stop until complete requirement.

You can throw up a spinner to indicate that if you want, and get the same effect. The spinner solution lets you do more things, too, such as providing feedback or other information to the user. (Or just allowing hover effects to work - freezing the main thread restricts a *lot* of things.)

> I use both async and sync xmlhttprequests, and they both have their place. Please do not deprecate sync requests simply because you cannot imagine where they would be desirable. When they are needed, they are ABSOLUTELY needed and async requests are not only NOT desirable, but can lead to potentially disastrous results.

Sync XHR offers you literally nothing over async XHR besides a little bit of restrictive simplicity. There is absolutely no situation in which sync XHR is actually required.

~TJ
Re: =[xhr]
On Aug 1, 2014 8:39 AM, nmork_consult...@cusa.canon.com wrote:
> Spinner is not sufficient. All user activity must stop. They can take a coffee break if it takes too long. Browser must be frozen and locked down completely. No other options are desirable. All tabs, menus, etc. must be frozen. That is exactly the desired result.

By "spinner", I also meant freezing other parts of the page as necessary, or obscuring them so they can't be clicked. Asking to freeze the rest of the browser is unnecessary and extremely user-hostile, and we don't support allowing content to do that.

~TJ
Re: =[xhr]
On Aug 1, 2014 8:49 AM, nmork_consult...@cusa.canon.com wrote:
> Thank you for letting me know my input is not desired.

All input is definitely desired, but you seem to either not fully understand what async XHR does, or are ascribing greater functionality to sync XHR than it actually possesses. So far you have not described any problem for which sync XHR is actually required, and I'm fairly certain that such a problem does not exist.

~TJ
Re: =[xhr]
On Tue, Jul 29, 2014 at 1:41 PM, nmork_consult...@cusa.canon.com wrote:
> While debugging an intranet application using xmlHttpRequest recently, I got a message on the Firefox browser console: "Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience." This worries me, since many useful web browser features which are deprecated eventually disappear (e.g. CSS width specification in the col tag.)

This is definitely one of those things.

> I have an application which makes many http requests to make multiple large updates to database work tables, finally running a single SQL xmlHttpRequest to copy all work table data to the main data tables after all updates are successful.
>
> 1. The volume and size of the data is too large to be sent by a single request
> 2. Each subsequent request cannot be submitted until the prior request is completed SUCCESSFULLY or the database will be corrupted
> 3. The final SQL acts as the commit for the whole shebang and has its own BEGIN TRANSACTION and COMMIT/ROLLBACK for safety
>
> In this case, synchronous xmlHttpRequests are not only NOT deprecated, they are ABSOLUTELY ESSENTIAL to provide reliable database updating for the end user, and reliability is what the end user most desires, in addition to IMMEDIATE FEEDBACK whether the update succeeded or not.

None of what you have described requires a synchronous XHR; it can all be done with async XHR. You just wait to send the subsequent batches of data until the listener from the previous one informs you that it has succeeded. This is slightly more complicated than doing sync, but no slower (and possibly faster, if some of them can be done in parallel), and just as reliable. You get feedback exactly as quickly, modulo a millisecond or two of accumulated delay from waiting for your listener to reach the top of the message queue. Your users get a vastly better experience out of it, too.

Synchronous XHR freezes the JS main thread until it returns, which means that any interaction with the page is frozen too. (Users *might* be able to scroll, if the browser is doing scrolling on another thread, but that's about it.) Multiple large consecutive sync XHRs mean things are frozen for a noticeable amount of time, especially if the network is slow for whatever reason. Async XHR has none of this problem.

The last person on this list to assert that they absolutely needed sync XHR didn't seem to understand what async XHR was. (They seemed to think it was related to form submission; I don't know what you think it is.) It's exactly identical to sync XHR, but rather than freezing javascript until the response comes back, and giving you the result as a return value, you just have to register an event listener which'll get called when the response comes back, passing the result as an argument to your callback. Spreading your logic across callbacks is a little bit more complicated than writing sync code, but it's a necessary part of tons of JS APIs already, so if you're not familiar with how it works, you're gonna have to get familiar with it really soon. ^_^

> Also, unrelated, please bring back CSS width to the col tag. On very large data tables, this can reduce page downloads by megabytes, no matter how small you make your column class names.

Rather than putting classes on your tds, you can just use better selectors. If you need to set the width of the cells in the second column, you can just do `td:nth-child(2) { width: XXX; }`. Save yourself a couple megabytes. ^_^ (This isn't *precisely* reliable, because it doesn't know about rowspans/colspans, but you can often deal with that yourself. Selectors Level 4 adds an :nth-column() pseudo-class which is identical to :nth-child(), but only works on table cells and knows about rowspans and colspans, so it'll properly style everything in the desired column no matter how the table is structured.) ~TJ
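The batching pattern described above (send batch N+1 only from batch N's success listener) can be sketched like this. `sendAsync` is a hypothetical stand-in for an async XHR wrapper, not a real API; here it succeeds immediately so the sketch is self-contained.

```js
// Hedged sketch: `sendAsync` stands in for an async XHR wrapper. In a
// real page it would call xhr.send() and invoke `onSuccess` from the
// xhr's load event; here it "succeeds" immediately.
function sendAsync(batch, onSuccess, onError) {
  onSuccess({ ok: true, batch: batch });
}

// Send batches strictly one after another: each batch goes out only
// after the previous one reports success, so ordering is preserved
// and the work tables can't be corrupted by out-of-order writes.
function sendSequentially(batches, onAllDone, onError) {
  var confirmed = [];
  function next(i) {
    if (i >= batches.length) {
      onAllDone(confirmed); // safe point for the final "commit" request
      return;
    }
    sendAsync(batches[i], function (res) {
      confirmed.push(res.batch);
      next(i + 1); // only proceed once this batch succeeded
    }, onError);
  }
  next(0);
}

sendSequentially(['batch 1', 'batch 2'], function (results) {
  // all batches confirmed; issue the commit request here
});
```

With real XHRs the callbacks fire later, but the control flow is identical, and nothing blocks the main thread between batches.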
Re: =[xhr]
On Wed, Jul 23, 2014 at 4:49 PM, Paul Bellamy p...@appl.com.au wrote:
> In the specification for XMLHttpRequest you posted a "warning" about using async=false which indicates that it is the intention to eventually remove this feature due to "detrimental effects to the user experience" when in a document environment. I understand that synchronous events retrieving data can, if not managed properly in the code, cause delays to the flow of the parsing and display of the document. This may, if the programming practices are poor, be extrapolated to be "detrimental to the user's experience"; however, there are times when there is a need to have data retrieved and passed synchronously when dealing with applications.
>
> In business application development there will always be the situation of the client needing to manipulate the display based on actions that retrieve data or on previously retrieved data. In these cases it is necessary for the data retrieval to be synchronous. If the document/form has to be resubmitted in full each time a client-side action is taken or the client needs to retrieve data to decide what action to take, then the user experience is definitely affected detrimentally, as the entire document needs to be uploaded, downloaded, parsed and displayed again. Further, there is the unnecessary need to retain instances of variables describing the client-side environment on the server side; variables which are not necessary for processing and should be handled by the client.

This last paragraph suggests that you don't really understand what asynchronous XHR means. You appear to be equating it with submitting a form and loading a fresh page. Async XHR just means that .send() returns immediately, rather than pausing JS and waiting for the response to come back; the XHR object then fires an event on itself when the response comes back, which you have to listen for. ~TJ
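The distinction (send() returning immediately versus blocking) can be made concrete with a tiny mock. `FakeXHR` is invented for illustration and is not the real XMLHttpRequest; its `respond` method plays the role of the network delivering the response later.

```js
// Minimal mock of the async-XHR control flow. NOT the real
// XMLHttpRequest: `respond` simulates the server's reply arriving.
function FakeXHR() {
  this.loadListeners = [];
}
FakeXHR.prototype.addEventListener = function (type, cb) {
  if (type === 'load') this.loadListeners.push(cb);
};
FakeXHR.prototype.send = function () {
  // Returns immediately; the page stays responsive.
};
FakeXHR.prototype.respond = function (body) {
  // The "network" delivers the response; load listeners fire now.
  this.loadListeners.forEach(function (cb) {
    cb({ target: { responseText: body } });
  });
};

var xhr = new FakeXHR();
var result = null;
xhr.addEventListener('load', function (e) { result = e.target.responseText; });
xhr.send();
var resultBeforeRespond = result; // still null: send() did not block
xhr.respond('ok');                // listener fires; result is now 'ok'
```

No form submission, no page reload: the document stays exactly as it was, and your listener runs when the data arrives.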
Re: =[xhr]
On Sat, Jul 12, 2014 at 8:57 AM, Robert Hanson hans...@stolaf.edu wrote:
> Hello, I am the principal developer of Jmol, which has been successfully ported to JavaScript/HTML5 as JSmol. The following statement at http://xhr.spec.whatwg.org/ concerns me greatly:
>
> "Developers must not pass false for the async argument when the JavaScript global environment is a document environment as it has detrimental effects to the end user's experience. User agents are strongly encouraged to warn about such usage in developer tools and may experiment with throwing an InvalidAccessError exception when it occurs so the feature can eventually be removed from the platform."
>
> I do understand the overall desire to not load files synchronously. I have designed Jmol to use asynchronous file transfer whenever possible. However, this seems rather heavy-handed, as there are situations where synchronous AJAX file transfer is absolutely critical. (Or convince me that is not the case.)
>
> JSmol is using Java2Script, which is a highly effective port of Java to JavaScript. I have been able to reproduce all the thread-based behavior of Jmol in HTML5 using just the one JavaScript thread; however, a key component of this system is that Java classes (such as java.io.OutputStream.js, which is the JavaScript equivalent of java.io.OutputStream.java) must be loaded on the fly. For example, a call to x = new java.io.ByteOutputStream(b) must hold while java.io.ByteOutputStream.js is loaded, if this is the first call to instantiate that class.
>
> Q1) I don't see how that could possibly be done asynchronously. This could easily be called from a stack that is 50 levels deep. Am I missing something here? How would one restart an entire JavaScript stack asynchronously?
>
> Q2) Is there an alternative to the main thread involving AJAX still using synchronous transfer?
You're right that this isn't really possible, but that's intentional, as halting your program while you wait for a network fetch is a bad idea (particularly when it can happen unexpectedly, because the first call to a given API may be in different spots depending on user behavior or other non-determinism). This is why the module system being developed for Javascript doesn't do this, and requires code to explicitly ask for the module, rather than auto-loading. The built-in syntax just waits to execute the entire file until all dependencies are satisfied, while the Loader API instead operates in a traditional async style. ~TJ
Re: WebIDL Spec Status
On Wed, Jul 2, 2014 at 9:46 AM, Ryosuke Niwa rn...@apple.com wrote:
> There are other ways to mitigate these issues in addition to publishing every revision of a given specification. For example, spec authors could list and support every historical terminology and fragment ever introduced. We could even create some service/database to map such historical names to the new ones, explaining the difference.

I've been meaning to add a feature to Bikeshed to make it easier to specify the old id for a heading/dfn that changed its id for some reason, to help support this kind of thing. ~TJ
Re: WebIDL Spec Status
On Wed, Jul 2, 2014 at 11:10 AM, Domenic Denicola dome...@domenicdenicola.com wrote:
> From: Ian Hickson i...@hixie.ch
> I was going to link to the picture spec as my favorite example, but they seem to have made it less annoying (by moving it to the bottom instead of the middle), which is sad.

That's for consistency with the messages used elsewhere. Also, it means it's still *possible* to use the draft if you need to. That said, it's still a very annoying message, so you shouldn't worry about that. (It also fades the rest of the page, making it even harder to use unintentionally.) ~TJ
Re: Fallout of non-encapsulated shadow trees
On Tue, Jul 1, 2014 at 6:13 PM, Brendan Eich bren...@secure.meer.net wrote:
> Domenic Denicola wrote:
>> From: Brendan Eich [mailto:bren...@secure.meer.net]
>>> That is a false idol if it means no intermediate steps that explain some but not all of the platform.
>> Sure. But I don't think the proposed type 2 encapsulation explains any of the platform at all.
> Are you sure? Because Gecko has used XBL (1) to implement, e.g., input type=file, or so my aging memory says. That's good enough and it has shipped for years, unless I'm mistaken.

XBL is either type 3, or it's type 2 but weak/magical enough that it doesn't actually expose anything. Gecko does *not* today leak any internal details of input type=file, in the way that type 2 web components would leak; that would be a major security breach. (Leaking other elements would be something between a bug and a security breach, depending on the element.) ~TJ
Re: publish new WD of Shadow DOM on June 12
On Fri, Jun 6, 2014 at 3:14 PM, Domenic Denicola dome...@domenicdenicola.com wrote:
> From: Arthur Barstow [mailto:art.bars...@gmail.com]
>> Could you live with those short qualifications/clarifications?
> Definitely; I see the concern and am glad you caught that.

Yeah, sounds good. I've added an issue to Bikeshed (https://github.com/tabatkins/bikeshed/issues/174) to address this. ~TJ
Re: Fetch API
On Sat, May 31, 2014 at 11:06 PM, Domenic Denicola dome...@domenicdenicola.com wrote:
> - Named constructors scare me (I can't figure out how to make them work in JavaScript without breaking at least one of the normal invariants). I think a static factory method would make more sense for RedirectResponse.

What invariants are you concerned about? Using NamedConstructor is identical to doing:

```js
class Foo { ... }
let Bar = Foo;
// now I can do new Foo() or new Bar(), to the same effect.
```

~TJ
Re: Fetch API
On Sun, Jun 1, 2014 at 2:19 PM, Domenic Denicola dome...@domenicdenicola.com wrote:
> From: Tab Atkins Jr. [mailto:jackalm...@gmail.com]
>> Using NamedConstructor is identical to doing:
>>
>> ```js
>> class Foo { ... }
>> let Bar = Foo;
>> // now I can do new Foo() or new Bar(), to the same effect.
>> ```
> Not true, since the constructors take different arguments.

Ah right, I forgot that the arguments are different. This effectively makes it a constructor with overloads, where the function you call it with is taken as one of the arguments used for discriminating between overloads. Out of curiosity, would you be okay with it if it was just an overload? That is, if new Response(...) took either set of arguments for the ... part? It sounds like you would be.

> Instead it is equivalent to
>
> ```js
> class Response {
>   constructor(body, init) { ... }
>   ...
> }
>
> function RedirectResponse(url, status = 302) {
>   return new Response(???, ???);
> }
> RedirectResponse.prototype = Response.prototype;
> ```
>
>> What invariants are you concerned about?
> In particular, we have that
>
> ```js
> RedirectResponse.prototype.constructor !== RedirectResponse
> (new RedirectResponse(...)).constructor !== RedirectResponse
> // Also, omitting the `new` does not throw a `TypeError`, like it does for real constructors.
> ```
>
> and possibly a few others I am forgetting.

This is identically a problem with the case I gave, as Bar.prototype.constructor would be Foo, not Bar. It's possible that this is still a problem, it's just not unique to named constructors. ^_^

Since you suggested a static method, that suggests you're fine with Response.Redirect("http://example.com") giving a new Response object, right? It's just the fact that RedirectResponse has a .prototype pointing to Response.prototype that gives you pause? Presumably RedirectResponse being a subtype would also be acceptable, as its .prototype.constructor would be RedirectResponse? ~TJ
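The sketch in this exchange can be made runnable to see the broken invariants directly. `Resp` and `RedirectResp` are invented stand-ins, not the real Fetch API classes.

```js
// Stand-in for Domenic's sketch; not the real Fetch Response.
class Resp {
  constructor(body) { this.body = body; }
}

// "Named constructor" shim: builds a Resp with different arguments.
function RedirectResp(url, status = 302) {
  return new Resp('redirect to ' + url + ' (' + status + ')');
}
RedirectResp.prototype = Resp.prototype;

const r = new RedirectResp('http://example.com');
// r is a genuine Resp instance...
// ...but RedirectResp.prototype.constructor is Resp, not RedirectResp,
// and calling RedirectResp without `new` does not throw a TypeError:
const r2 = RedirectResp('http://example.com');
```

This is the same failure mode as the plain Foo/Bar alias: `.prototype.constructor` names the original class, so anything that discriminates on it sees `Resp` in both cases.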
Re: Fetch API
On Thu, May 29, 2014 at 8:10 AM, Tobie Langel tobie.lan...@gmail.com wrote:
> On Thu, May 29, 2014 at 4:58 PM, Marcos mar...@marcosc.com wrote:
>>     enum RequestMode { "same-origin", "tainted cross-origin", "CORS", "CORS-with-forced-preflight" };
>> I think these are badly named (even though they use the names used in HTML and Fetch). It's going to be annoying to type these out for developers. I would change them to:
>>     enum RequestMode { "same-origin", "cors", "cors-tainted", "cors-preflight" };
> I like those better. We want consistency with lowercasing or uppercasing cors/CORS in enums, though.

Yes. Lowercasing always, please. ~TJ
Re: Last Call for CSS Font Loading Module Level 3
On Tue, May 27, 2014 at 1:22 AM, Jonas Sicking jo...@sicking.cc wrote:
> I've provided this input through a few channels already, but I don't think the use of [SetClass] here is good (and in fact I've been arguing that SetClass should be removed from WebIDL).

Yes, there's an issue in the spec already saying that I need to move off of [SetClass] and do Set-fakery instead, until JS stops being a jerk and allows actual subclassing. I wanted to get the LC publication in before I had the time to make the change, but my intent is pretty clear. ^_^

> First off, you likely don't want to key the list of fonts on the FontFace object instance like the spec currently does. What it looks like you want here is a simple enumerable list of FontFace objects which are currently available to the document.
>
> Second, subclassing the ES6 Set class should mean that the following two calls are equivalent:
>
> ```js
> Set.prototype.add.call(myFontFaceSet, someFontFace);
> myFontFaceSet.add(someFontFace);
> ```
>
> However, I don't think the former would cause the rendering of the document to change, whereas the latter would. Hence I would strongly recommend coming up with a different solution than using SetClass.

Yes, all of these issues are related to the fact that Set and Map are currently broken in ES, and cannot be subclassed in any meaningful way.

> Separately, FontFace.loaded seems to fulfill the same purpose as FontFaceSet.ready(). I.e. both indicate that the object is done loading/parsing/applying its data. It seems more consistent if they had the same name, and if both were either an attribute or both were a function.

No, the two do completely different (but related) things. Why do you think they're identical? One fulfills when a *particular* FontFace object finishes loading, the other repeatedly fulfills whenever the set of loading fonts goes from non-zero to zero. ~TJ
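Jonas's point can be demonstrated with a small sketch. Note that ES6 `Set` is subclassable in today's engines, though it wasn't when this thread was written; `FontFaceSetLike` and its `renderUpdates` counter are invented for illustration.

```js
// Sketch: a Set subclass whose add() has a side effect (imagine it
// triggers document re-rendering, as FontFaceSet's would).
class FontFaceSetLike extends Set {
  constructor() {
    super();
    this.renderUpdates = 0;
  }
  add(face) {
    this.renderUpdates++; // stand-in for "update rendering"
    return super.add(face);
  }
}

const faces = new FontFaceSetLike();
faces.add('font-a');                      // goes through the override
Set.prototype.add.call(faces, 'font-b');  // bypasses it entirely
// Both fonts end up in the set, but only one "render update" happened,
// which is exactly the inconsistency Jonas objects to.
```

The two calls mutate the set identically but diverge on the side effect, so a spec that hangs rendering behavior off the subclass's `add()` can be silently bypassed.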
Re: Last Call for CSS Font Loading Module Level 3
On Tue, May 27, 2014 at 11:44 AM, Jonas Sicking jo...@sicking.cc wrote:
> On Tue, May 27, 2014 at 10:41 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>>> Separately, FontFace.loaded seems to fulfill the same purpose as FontFaceSet.ready(). I.e. both indicate that the object is done loading/parsing/applying its data. It seems more consistent if they had the same name, and if both were either an attribute or both were a function.
>> No, the two do completely different (but related) things. Why do you think they're identical? One fulfills when a *particular* FontFace object finishes loading, the other repeatedly fulfills whenever the set of loading fonts goes from non-zero to zero.
> Semantically they both indicate the async processing that this object was doing is done. Yes, in one instance it just signals that a given FontFace instance is ready to be used, in the other that the full FontFaceSet is ready. Putting the properties on different objects is enough to indicate that; the difference in name doesn't seem important?

The loaded/ready distinction exists elsewhere, too. Using .loaded for FontFaceSet is incorrect, since in many cases not all of the fonts in the set will be loaded.

> In general it would be nice if we started establishing a pattern of a .ready() method (or property) on various objects to indicate that they are ready to be used. Rather than authors knowing that they need to listen to load events on images, success events on IDBOpenRequests, the .loaded promise on FontFace objects, and the .ready() promise on FontFaceSets.

Yes, I'm actively working with Anne, Domenic, and others to help figure out the right patterns for this that we can extend to the rest of the platform. ~TJ
Re: Last Call for CSS Font Loading Module Level 3
On Tue, May 27, 2014 at 3:47 PM, Jonas Sicking jo...@sicking.cc wrote:
> On Tue, May 27, 2014 at 12:14 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>>> Semantically they both indicate the async processing that this object was doing is done. ...
>> The loaded/ready distinction exists elsewhere, too. Using .loaded for FontFaceSet is incorrect, since in many cases not all of the fonts in the set will be loaded.
> Sure, but would using .ready() for FontFace be wrong?

Depends on how we end up designing the loaded/ready duo. ~TJ
Re: [Bug 25376] - Web Components won't integrate without much testing
On Tue, May 20, 2014 at 8:41 PM, Axel Dahmen bril...@hotmail.com wrote:
> I got redirected here from a HTML5 discussion on an IFrame's SEAMLESS attribute: https://www.w3.org/Bugs/Public/show_bug.cgi?id=25376 Ian Hickson suggested to publish my findings here so the Web Components team may consider re-evaluating the draft and probably amending the spec.

Could you post your findings here? Digging through the bug thread, it appears you might be talking about this:

> - Web Components require a plethora of additional browser features and behaviours.
> - Web Components require loads of additional HTML, CSS and client script code for displaying content.
> - Web Components install complex concepts (e.g. decorators) by introducing unique, complex, opaque behaviours, abandoning the pure nature of presentation.
> - Web Components require special script event handling, so existing script code cannot be reused.
> - Web Components require special CSS handling, so existing CSS cannot be reused.
> - Web Components unnecessarily introduce a new clumsy "custom", or "undefined", element, leaving the path of presentation. Custom Elements could as easily be achieved using CSS classes, and querySelector() in ECMA Script.
> - The W3C DOM MutationObserver specification already provides functionality equivalent to insertedCallback()/readyCallback()/removeCallback().

Is this correct? Is this the full list of comments you wish to make? ~TJ
Re: [push-api] Identifying registrations
On Tue, May 13, 2014 at 1:08 AM, Martin Thomson martin.thom...@gmail.com wrote:
> The push API currently identifies a registration with a tuple:
>
> ```
> interface PushRegistration {
>   readonly attribute DOMString pushEndpoint;
>   readonly attribute DOMString pushRegistrationId;
> };
> ```
>
> It looks like both are used by the push server. Local methods seem to rely on the pushRegistrationId; the remote application server uses the pushEndpoint, though details are not currently specified [1]. In my experience, the pushEndpoint is a sufficiently unique identifier. Contingent on some conclusions on the protocol side, this could be defined as a URL and used as an identifier. That single identifier should suffice.

Using URLs as identifiers is an anti-pattern which we should have learned by now. In practice, multiple distinct URLs map to the same resource, and people understand this intuitively. The fact that those multiple distinct URLs are multiple distinct identifiers is unintuitive and hard to use correctly, as XML Namespaces has taught us well over the years. (For example, the presence/absence of a slash at the end of a URL is almost never relevant in real life, but you have to memorize which pattern is used by a particular URL-as-identifier, and there's no real-life consensus about which to use. Same with ordering of query params, http vs https, capitalization of domain name, etc. The hash is relevant as an identifier, but not as a URL. It's all terrible.) ~TJ
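A quick illustration of the URL-as-identifier problem; the endpoint URLs below are made up for the example.

```js
// Four spellings that, in practice, all name the same resource, yet
// compare as four distinct identifiers if URLs are used as opaque keys:
const urls = [
  'http://example.com/push',
  'http://example.com/push/',  // trailing slash
  'http://EXAMPLE.com/push',   // domain capitalization
  'https://example.com/push',  // scheme
];

const distinct = new Set(urls);
// distinct.size is 4, even though a human would call this one endpoint.
```

String comparison sees four keys where the deployment has one endpoint, which is exactly why treating the endpoint URL as the registration's identity is fragile.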
Re: Should events be preferably sent to the Window or the nearest object?
On Thu, Mar 20, 2014 at 8:33 AM, Ian Hickson i...@hixie.ch wrote:
> On Fri, 21 Mar 2014, Mounir Lamouri wrote:
>> I would love to gather the opinion of public-webapps on a discussion Hixie and I had for two different APIs recently: if an array |foo| can change, should the change event be fired on its parent or the window (its grandparent)? The two cases we discussed with Hixie were navigator.languages and screen.orientation, for which Hixie thinks the change events should be fired on the window so developers can do <body onlanguagechange=... onorientationchange=...>, but I feel that having the change event sent on the parent would make things more self-contained. I would love to hear people's opinion on this. (Note: sending an orientationchange event to the window object would have other implications because there is a proprietary API that does that, but this is entirely orthogonal.)
> To be clear, my opinion is just that we should be consistent. If the event is related to the Document, then we should fire it at the Document. If it's related to the <html> element, we should fire on the <html> element. Some objects are already EventTargets, including many new objects. I'm just saying that for existing objects that aren't EventTargets, and for which events have historically been sent to the Window, we should continue to send events to the Window, so that we don't end up sending some events to the object and some to the Window. In the case of Navigator, online and offline events go to the Window. In the case of Screen, resize and scroll events go to the Window. So it makes sense to put new events there too.

Agreed. The exact target isn't very important here, and so being consistent with legacy event firing for the same system is probably a good idea. ~TJ
Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)
On Mon, Mar 17, 2014 at 9:08 AM, Anne van Kesteren ann...@annevk.nl wrote:
> On Thu, Feb 13, 2014 at 2:34 PM, Boris Zbarsky bzbar...@mit.edu wrote:
>> On 2/13/14 5:35 AM, Anne van Kesteren wrote:
>>> Also, Type 2 can be used for built-in elements
>> Built-in elements need Type 4.
> Well, Chrome does not have Type 4, yet is implementing parts of their elements using shadow DOM constructs. So clearly something is possible with the current design.

We enforce that through C++ magic right now. There's a lot of details to figure out before it's something that can be exposed to JS. ~TJ
Re: Browser search API
On Thu, Mar 13, 2014 at 8:17 AM, Marcos Caceres w...@marcosc.com wrote:
> On March 12, 2014 at 7:16:53 PM, Mitar (mmi...@gmail.com) wrote:
>> There was no reply. :-(
> It usually takes a bit of time for Hixie to get around to all the emails (the volume of email on the WHATWG list + other priorities slow things down - but I've never seen him not respond to a proposal). Give him a few more weeks. If you don't hear back by the end of the month you can try to ping him directly.

Yes, Hixie's delay is usually a few months, unless it happens to be swept up as part of something earlier in the queue that he's dealing with, or someone asks him to prioritize. Don't worry, it'll get taken care of. ~TJ
Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)
On Fri, Feb 14, 2014 at 6:12 PM, Daniel Freedman dfre...@google.com wrote:
> The other hand of this argument is that components that wish to lock themselves down could write:
>
> ```js
> this.shadowRoot = undefined;
> ```
>
> Of course, this would not change the outcome of the Shadow Selector spec, which is why a flag for createShadowRoot or something would be necessary to configure the CSS engine (unless you're ok with having the existence of a property on some DOM object control CSS parsing rules).

There's nothing wrong with doing that, by the way. The Selectors data model is already based on the DOM, for DOM-based documents. I don't currently specify how you know when an element in the selectors tree has shadow trees, but I can easily say that it's whatever's reachable via the DOM properties in DOM-based documents. ~TJ
Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)
On Thu, Feb 13, 2014 at 2:35 AM, Anne van Kesteren ann...@annevk.nl wrote:
> On Thu, Feb 13, 2014 at 12:04 AM, Alex Russell slightly...@google.com wrote:
>> Until we can agree on this, Type 2 feels like an attractive nuisance and, on reflection, one that I think we should punt to compilers like caja in the interim. If toolkits need it, I'd like to understand those use-cases from experience.
> I think Maciej explains fairly well in http://lists.w3.org/Archives/Public/public-webapps/2011AprJun/1364.html why it's good to have. Also, Type 2 can be used for built-in elements, which I thought was one of the things we are trying to solve here.

Stay after class and write 100 times on the board: Type 2 is not a security boundary. ~TJ