Re: [webcomponents] How about let's go with slots?
I assume you mean to have tag names in addition to content-slot, and not as opposed to content-slot? On Mon, May 18, 2015 at 3:45 PM, Domenic Denicola d...@domenic.me wrote: From: Dimitri Glazkov [mailto:dglaz...@google.com] What do you think, folks? Was there a writeup that explained how slots did not have the same performance/timing problems as select=? I remember Alex and I were pretty convinced they did at the F2F, but I think you became convinced they did not ... did anyone capture that? My only other contribution is that I sincerely hope we can use tag names instead of the content-slot attribute, i.e. <dropdown> instead of <div content-slot="dropdown">. Although slots cannot fully emulate native elements in this manner (e.g. select/optgroup/option), they would at least get syntactically closer, and would in some cases match up (e.g. details/summary). I think it would be a shame to start proliferating markup in the <div content-slot="dropdown"> vein if we eventually want to get to a place where shadow DOM can be used to emulate native elements, which do not use this pattern.
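The named-slot idea being discussed can be sketched with plain objects rather than any real DOM API (the `contentSlot` field and `distribute` helper below are illustrative names, not proposed API): children carrying a slot name are routed to the matching slot, everything else falls into a default slot.

```javascript
// Sketch: slot-style distribution modeled with plain objects.
// A child whose contentSlot matches a declared slot goes there;
// unmatched children land in the default slot.
function distribute(children, slotNames) {
  const slots = { default: [] };
  for (const name of slotNames) slots[name] = [];
  for (const child of children) {
    const target =
      child.contentSlot && slots[child.contentSlot] ? child.contentSlot : 'default';
    slots[target].push(child);
  }
  return slots;
}

const slots = distribute(
  [{ tag: 'div', contentSlot: 'dropdown' }, { tag: 'span' }],
  ['dropdown']
);
console.log(slots.dropdown.length); // 1
console.log(slots.default.length);  // 1
```

Whether the slot name comes from an attribute (`content-slot="dropdown"`) or a tag name (`<dropdown>`) only changes where `contentSlot` is read from; the routing itself is the same.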
Re: Proposal for changes to manage Shadow DOM content distribution
Hi Ryosuke, I want to start by thanking you, Ted, and Jan for taking the time to make this proposal. I read through the proposal, and had a quick question about how redistribution should work with this slot concept. I created a quick date-range-combo-box example that would take two date inputs (start date and end date) and distribute them through the example date-combo-box, but I found myself stuck. I can't name the two date inputs with the same slot or they will end up in only one of the date-combo-box content elements, but date-combo-box only takes inputs with slot inputElement. How should this work? I drafted a quick gist to illustrate this: https://gist.github.com/azakus/676590eb4d5b07b94428 Thanks! On Tue, Apr 21, 2015 at 8:19 PM, Ryosuke Niwa rn...@apple.com wrote: Hi all, Following WebApps discussion last year [1] and earlier this year [2] about template transclusions and inheritance in shadow DOM, Jan Miksovsky at Component Kitchen, Ted O'Connor and I (Ryosuke Niwa) at Apple had a meeting where we came up with changes to the way shadow DOM distributes nodes to better fit real world use cases. After studying various real world uses of web component APIs as well as existing GUI frameworks, we noticed that selector-based node distribution as currently spec'ed doesn't address common use cases and the extra flexibility CSS selectors offer isn't needed in practice. Instead, we propose named insertion slots that could be filled with the contents in the original DOM as well as contents in subclasses. Because the proposal uses the same slot-filling mechanism for content distributions and inheritance transclusions, it eliminates the need for multiple shadow roots. Please take a look at our proposal at https://github.com/w3c/webcomponents/wiki/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution [1] https://lists.w3.org/Archives/Public/public-webapps/2014AprJun/0151.html [2] https://lists.w3.org/Archives/Public/public-webapps/2015JanMar/0611.html
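Daniel's redistribution problem can be sketched with plain objects (the `slot` fields and `distribute` helper are illustrative, not the proposal's API): both date inputs must claim the inner component's single slot name to be forwarded at all, so they collapse into one slot instead of staying separable.

```javascript
// Sketch of the redistribution conflict: the inner date-combo-box
// exposes one named slot, 'inputElement'. Distribution routes each
// child to the slot whose name it claims.
function distribute(children, slotNames) {
  const slots = {};
  for (const name of slotNames) slots[name] = [];
  for (const child of children) {
    if (slots[child.slot]) slots[child.slot].push(child);
  }
  return slots;
}

// date-range-combo-box wants start and end inputs kept apart, but both
// must be named 'inputElement' to reach the inner component at all:
const inner = distribute(
  [
    { tag: 'input', slot: 'inputElement', id: 'start' },
    { tag: 'input', slot: 'inputElement', id: 'end' },
  ],
  ['inputElement']
);
console.log(inner.inputElement.length); // 2 -- both pile into one slot
```

With names as the only routing key, there is no way for the outer component to send the two inputs to two different insertion points of the inner one, which is exactly the question the gist poses.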
Re: Mozilla and the Shadow DOM
Just to close the loop, filed https://github.com/webcomponents/webcomponentsjs/issues/289 to track the specific Polymer web component polyfill blocker. On Tue, Apr 14, 2015 at 5:38 AM, Anne van Kesteren ann...@annevk.nl wrote: On Wed, Apr 8, 2015 at 6:11 PM, Dimitri Glazkov dglaz...@google.com wrote: Thanks for the feedback! While the iron is hot I went ahead and created/updated bugs in the tracker. A problem I have with this approach is that with Shadow DOM (and maybe Web Components in general) there's a lot of open bugs. Of those bugs it's not at all clear which the editors plan on addressing. Which makes it harder to plan for us. Also, a point that I forgot to make in my initial email is that Polymer makes it rather hard for us to ship any part of Web Components without all the other parts (and in the manner that Chrome implemented the features): https://bugzilla.mozilla.org/show_bug.cgi?id=1107662 The linked bug is simply incorrect. Polymer depends on webcomponentsjs (https://github.com/webcomponents/webcomponentsjs) for browsers without all the Web Components specs, but each part is feature-detected, with separate checks for Custom Elements, HTML Imports, and Shadow DOM, as well as HTML Template, constructable URL, and MutationObserver. Chrome did not implement and ship all the specs at once, so Polymer has had to feature-detect from the start. -- https://annevankesteren.nl/
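The per-feature detection Daniel describes can be sketched as below. The specific checks are illustrative guesses roughly matching the era's APIs (`document.registerElement`, `createShadowRoot`, `link.import`); webcomponentsjs's actual tests may differ. The globals are guarded so the sketch also runs outside a browser, where every check is simply false.

```javascript
// Sketch: each Web Components spec is detected independently, so a
// browser that ships only some of them is only polyfilled for the rest.
function detectFeatures() {
  const doc = typeof document !== 'undefined' ? document : {};
  return {
    customElements: typeof doc.registerElement === 'function',
    htmlImports:
      typeof doc.createElement === 'function' &&
      'import' in doc.createElement('link'),
    shadowDOM:
      typeof Element !== 'undefined' &&
      'createShadowRoot' in Element.prototype,
    template: typeof HTMLTemplateElement !== 'undefined',
    mutationObserver: typeof MutationObserver !== 'undefined',
  };
}

const features = detectFeatures();
// A loader would then pull in a polyfill only for each flag that is false.
```

The point of the thread is exactly this independence: nothing here requires all five features to come from the same place.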
Re: Mozilla and the Shadow DOM
<!-- Whoops, my draft got cut off. --> We found some timing issues with polyfilled HTML Imports and native Custom Elements in Chrome that made us force the Custom Elements polyfill when HTML Imports is polyfilled. I don't remember the specifics, but I filed the github issue to either track down and resolve the issues, or provide a flag to override this behavior for Mozilla to use in testing. Sorry for the grumbly note, we've tried very hard to make sure Polymer !== Web Components in public discourse to keep people from conflating the two. On Tue, Apr 14, 2015 at 12:29 PM, Daniel Freedman dfre...@google.com wrote: Just to close the loop, filed https://github.com/webcomponents/webcomponentsjs/issues/289 to track the specific Polymer web component polyfill blocker. On Tue, Apr 14, 2015 at 5:38 AM, Anne van Kesteren ann...@annevk.nl wrote: On Wed, Apr 8, 2015 at 6:11 PM, Dimitri Glazkov dglaz...@google.com wrote: Thanks for the feedback! While the iron is hot I went ahead and created/updated bugs in the tracker. A problem I have with this approach is that with Shadow DOM (and maybe Web Components in general) there's a lot of open bugs. Of those bugs it's not at all clear which the editors plan on addressing. Which makes it harder to plan for us. Also, a point that I forgot to make in my initial email is that Polymer makes it rather hard for us to ship any part of Web Components without all the other parts (and in the manner that Chrome implemented the features): https://bugzilla.mozilla.org/show_bug.cgi?id=1107662 The linked bug is simply incorrect. Polymer depends on webcomponentsjs (https://github.com/webcomponents/webcomponentsjs) for browsers without all the Web Components specs, but each part is feature-detected, with separate checks for Custom Elements, HTML Imports, and Shadow DOM, as well as HTML Template, constructable URL, and MutationObserver. Chrome did not implement and ship all the specs at once, so Polymer has had to feature-detect from the start. 
-- https://annevankesteren.nl/
Re: [Shadow] Q: Removable shadows (and an idea for lightweight shadows)?
How would you style these shadow children? Would the main document CSS styles affect these children? On Thu, Mar 26, 2015 at 11:36 AM, Travis Leithead travis.leith...@microsoft.com wrote: From: Justin Fagnani [mailto:justinfagn...@google.com] Elements expose this “shadow node list” via APIs that are very similar to existing node list management, e.g., appendShadowChild(), insertShadowBefore(), removeShadowChild(), replaceShadowChild(), shadowChildren[], shadowChildNodes[]. This part seems like a big step back to me. Shadow roots being actual nodes means that existing code and knowledge work against them. "existing code and knowledge work against them" -- I'm not sure you understood correctly. Nodes in the shadow child list wouldn't show up in the childNodes list, nor in any of the node traversal APIs (e.g., not visible to qSA, nextSibling, previousSibling, children, childNodes, etc.). Trivially speaking, if you wanted to hide two divs that implement a stack panel and have some element render it, you'd just do: element.appendShadowChild(document.createElement('div')); element.appendShadowChild(document.createElement('div')); Those divs would not be discoverable by any traditional DOM APIs (they would now be on the shadow side), and the only way to see/use them would be to use the new element.shadowChildren collection. But perhaps I'm misunderstanding your point. The API surface that you'd have to duplicate with shadow*() methods would be quite large. That's true. Actually, I think the list above is probably about it.
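Travis's separate "shadow node list" can be modeled with a toy class (entirely hypothetical; nothing like this shipped) to make concrete what "hidden from normal traversal" means: the shadow children live in a list that child-based traversal never touches.

```javascript
// Toy model of the proposed shadow child list: two parallel lists on
// the same element, only one of which ordinary DOM traversal would see.
class FauxElement {
  constructor(tag) {
    this.tag = tag;
    this.childNodes = [];     // what children/childNodes/qSA-style code sees
    this.shadowChildren = []; // reachable only through the new shadow*() API
  }
  appendChild(node) { this.childNodes.push(node); }
  appendShadowChild(node) { this.shadowChildren.push(node); }
}

const el = new FauxElement('x-stack-panel');
el.appendShadowChild(new FauxElement('div'));
el.appendShadowChild(new FauxElement('div'));
console.log(el.childNodes.length);     // 0 -- hidden from normal traversal
console.log(el.shadowChildren.length); // 2 -- visible only via the new API
```

The styling question raised above is then exactly about which of the two lists document-level CSS selectors are allowed to match into.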
Re: [Custom] Custom elements and ARIA
Why not put the `implicitAria` role on the element's prototype? That way each instance can override with the attribute in a naive and natural manner: `el.role = 'link'`. This would necessitate some getter/setter logic for the aria properties to handle something like the details case with conditional states, but as long as the custom setters/getters can call the HTMLElement's role setters/getters, then I think we can keep the deep magic in the UA's bindings. On Thu, Aug 28, 2014 at 2:24 PM, Domenic Denicola dome...@domenicdenicola.com wrote: Thanks to all for their responses. The fact that I misread a bunch of authoring requirements as UA requirements made things a lot more complicated than they are in reality. I updated my ARIA summary [1] and illustrative scenarios [2] to reflect the actual spec/browser behavior. And, given the much-easier requirements, I was able to draft up a pretty simple solution: https://gist.github.com/domenic/8ae33f320b856a9aef43 I still need to investigate whether this kind of solution is feasible from an implementation perspective, but from an author perspective it seems natural and easy to use. Would love to hear what others think. [1]: https://gist.github.com/domenic/ae2331ee72b3847ce7f5 [2]: https://gist.github.com/domenic/bc8a36d9608d65bd7fa9 -Original Message- From: Domenic Denicola [mailto:dome...@domenicdenicola.com] Sent: Wednesday, August 27, 2014 19:43 To: public-webapps Subject: [Custom] Custom elements and ARIA TL;DR: we (Google) are trying to explain the platform with custom elements [1], and noticed they can't do ARIA as well as native elements. We would like to prototype a solution, ideally as a standardized API that we can let authors use too. (If that doesn't work, then we can instead add a non-web-exposed API that we can use inside Chrome, but that would be a shame.) A succinct statement of the problem is that we need some way to explain [3]. 
The rest of this mail explains the problem in great depth in the hopes other people are interested in designing a solution with me. Also, in the course of figuring all this out, I put together an intro to the relevant aspects of ARIA, which you might find useful, at [2]. ## The problem Right now, custom elements can manually add ARIA roles and stoperties (= states or properties) to themselves, by setting attributes on themselves. In practice, this kind of allows them to have default ARIA roles and stoperties, but they are fragile and exposed in a way that is incongruous with the capabilities of native elements in this regard. For example, if we were to implement `hr` as a custom element, we would attempt to give it the separator role by doing `this.setAttribute('role', 'separator')` in the `createdCallback`. However, if the author then did `document.querySelector('custom-hr').setAttribute('role', 'menuitem')`, assistive technology would reflect our `custom-hr` as a menu item, and not as a separator. So **unlike native elements, custom elements cannot have non-overridable semantics**. Furthermore, even if the author wasn't overriding the role attribute, there would still be a difference between `hr` and `custom-hr`. Namely, `document.querySelector('hr').getAttribute('role') === null`, whereas `document.querySelector('custom-hr').getAttribute('role') === 'separator'`. So **unlike native elements, custom elements cannot have default ARIA roles or stoperties without them being reflected in the DOM**. As another example, imagine trying to implement `button` as a custom element. To enforce the restriction to a role of either `button` or `menuitem`, the custom element implementation would need to use its `attributeChangedCallback` to revert changes that go outside those possibilities. And that of course only occurs at the end of the microtask, so in the meantime screen-readers are giving their users bad information. 
And even then, the experience of the attribute value being reverted for the custom element is not the same as that for a native element, where the attribute value stays the same but the true ARIA role as reflected to screenreaders remains `button`. So: **unlike native elements, custom elements cannot robustly restrict their ARIA roles to certain values**. Finally, consider how `details` synchronizes `aria-expanded` with the `open` attribute. To implement this with custom elements, you would use the `attributeChangedCallback` to set an `aria-expanded` attribute. But again, the author could change it via `setAttribute`, causing it to be out of sync. The takeaway here is that **unlike native elements, custom elements cannot reserve the management of certain stoperties for themselves**. In the end, trying to manage one's ARIA state via HTML attributes is fragile, and lacks the conceptual stratification and the resultant power of the internal state/mutable HTML attribute approach used by native elements. [3] illustrates more drastically the
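The "default role without attribute reflection" behavior the mail asks for can be simulated in plain JS (illustrative names; this is not the API from the linked gist): the default lives as internal state, `getAttribute('role')` stays null until the author sets something, and the computed role falls back to the default.

```javascript
// Sketch: a custom-hr whose default role is internal state, not an
// attribute. Mirrors how native <hr> maps to 'separator' without any
// role="" attribute appearing in the DOM.
class FauxCustomHr {
  constructor() {
    this._defaultRole = 'separator'; // internal, never reflected
    this._roleAttr = null;           // what getAttribute('role') reports
  }
  setAttribute(name, value) {
    if (name === 'role') this._roleAttr = value;
  }
  getAttribute(name) {
    return name === 'role' ? this._roleAttr : null;
  }
  // What assistive technology would be told:
  get computedRole() {
    return this._roleAttr !== null ? this._roleAttr : this._defaultRole;
  }
}

const hr = new FauxCustomHr();
console.log(hr.getAttribute('role')); // null -- default not in the DOM
console.log(hr.computedRole);         // 'separator'
hr.setAttribute('role', 'menuitem');
console.log(hr.computedRole);         // 'menuitem' -- author override wins
```

Note this sketch only addresses the "default without reflection" problem; making the role non-overridable or restricted to a set of values would need the UA-level hooks the mail argues custom elements lack.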
Re: [webcomponents] Encapsulation and defaulting to open vs closed (was in www-style)
On Fri, Feb 14, 2014 at 5:39 PM, Ryosuke Niwa rn...@apple.com wrote: On Feb 14, 2014, at 5:17 PM, Alex Russell slightly...@google.com wrote: On Fri, Feb 14, 2014 at 3:56 PM, Ryosuke Niwa rn...@apple.com wrote: On Feb 14, 2014, at 2:50 PM, Elliott Sprehn espr...@chromium.org wrote: On Fri, Feb 14, 2014 at 2:39 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 2/14/14 5:31 PM, Jonas Sicking wrote: Also, I think that the Type 2 encapsulation has the same characteristics. If the component author does things perfectly and doesn't depend on any outside code And never invokes any DOM methods on the nodes in the component's anonymous content. Which is a pretty strong restriction; I'm having a bit of trouble thinking of a useful component with this property. I think my biggest issue with Type-2 is that, unlike the languages it's trying to mimic that are cited for providing "private", it provides no backdoor for tools and frameworks to get at private state, and at the same time it doesn't add any security benefits. Except that JavaScript doesn’t have “private”. Right, it only has the stronger form (closures). I don’t think we have the stronger form, in that using any builtin objects and their functions would result in leaking information inside the closure. Ruby, Python, Java, C# and almost all other modern languages that provide a private facility for interfaces (as advocated by the Type-2 design) provide a backdoor through reflection to get at the variables and methods anyway. This allowed innovation like AOP, dependency injection, convention-based frameworks and more. So if we provide Type-2 I'd argue we _must_ provide some kind of escape hatch to still get into the ShadowRoot from script. I'm fine providing some kind of "don't let CSS styles enter me" feature, but hiding the shadowRoot property from the Element makes no sense. 
I don’t see how the above two sentences lead to a conclusion that we must provide an escape hatch to get the shadow root from script, given that such an escape hatch already exists if the component authors end up using builtin DOM functions. It's the difference between using legit methods and hacking around the platform. If it's desirable to allow continued access in these situations, why isn't .shadowRoot an acceptable speed bump? The point is that it’s NOT ALWAYS desirable to allow continued access. We're saying that components should have a choice. If it's not desirable, isn't the ability to get around the restriction *at all* a bug to be fixed (arguing, implicitly, that we should be investigating stronger primitives that Maciej and I were discussing to enable Type 4)? Are you also arguing that we should “fix” closures so that you can safely call builtin objects and their methods without leaking information? If not, I don’t see why we need to fix this problem only for web components. We all agree it's not a security boundary and you can go to great lengths to get into the ShadowRoot if you really wanted; all we've done by not exposing it is make sure that users include some crazy jquery-make-shadows-visible.js library so they can build tools like Google Feedback or use a new framework or polyfill. I don’t think Google Feedback is a compelling use case since all components on Google properties could simply expose a “shadow” property themselves. So you've written off the massive coordination costs of adding a uniform property to all code across all of Google and, on that basis, have suggested there isn't really a problem? ISTM that it would be a multi-month (year?) project to go patch every project in google3 and then wait for them all to deploy new code. On the other hand, Google representatives have previously argued that adding a template instantiation mechanism into the browser isn’t helping anyone, because framework authors would figure that out better than we can. 
I have a hard time understanding why anyone would come to the conclusion that forcing every single web component that uses a template to have: this.createShadowRoot().appendChild(document.importNode(template.content)); I don't understand how this pertains to encapsulation. Could you elaborate? is any less desirable than having components that want to expose shadowRoot write: this.shadowRoot = this.createShadowRoot(); The other hand of this argument is that components that wish to lock themselves down could write: this.shadowRoot = undefined; Of course, this would not change the outcome of the Shadow Selector spec, which is why a flag for createShadowRoot or something would be necessary to configure the CSS engine (unless you're ok with having the existence of a property on some DOM object control CSS parsing rules). (Also, your example would not handle multiple shadow roots correctly; here's one that would:) var sr = this.shadowRoot; var newSr = this.createShadowRoot(); newSr.olderShadowRoot = sr;
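The opt-in-exposure pattern being argued over can be sketched in plain JS (a hypothetical helper, not proposed API): the root is created behind a closure, and each component decides for itself whether to publish it as a property.

```javascript
// Sketch: encapsulation-by-convention. The shadow root stands in as a
// plain object held in a closure; a component that wants tooling access
// assigns it to a public shadowRoot property, one that doesn't, doesn't.
function createComponent({ exposeShadow }) {
  const shadowRoot = { nodes: [] }; // stands in for a real shadow root
  const component = {
    addToShadow(node) { shadowRoot.nodes.push(node); },
  };
  if (exposeShadow) component.shadowRoot = shadowRoot; // opt in
  return component;
}

const open = createComponent({ exposeShadow: true });
const closed = createComponent({ exposeShadow: false });
console.log(typeof open.shadowRoot);   // 'object'
console.log(typeof closed.shadowRoot); // 'undefined'
```

This captures both sides of the thread: the closed component is not secure (its methods still leak through builtins, as noted above), it merely declines to offer the convenient handle.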
Re: Controlling style of elements in Injection Points
I've updated your pen with the other minor syntax changes that have occurred in Chrome Canary: @host -> :host template.content.cloneNode(true) -> document.importNode(template.content) ::content p {} will always win over ::content {}, so I moved the black color to the style for p { } http://codepen.io/anon/pen/tcjeh On Tue, Dec 10, 2013 at 2:58 PM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Thank you all! Finally I understand how it works :) I made a small pen to illustrate this better http://codepen.io/dbugger/pen/Hyihd On 1 December 2013 23:35, Daniel Freedman dfre...@google.com wrote: ::content is behind the Experimental Web Platform Features chrome flag, along with the unprefixed createShadowRoot. On Fri, Nov 29, 2013 at 6:00 AM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: I have actually gotten :-webkit-distributed(p) to work, but as I read it has been deprecated and ::content p should work (but it doesn't). Do I need maybe to activate a flag? I'm using Chrome 31 under Ubuntu, in case it matters. On 28 November 2013 18:39, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Thank you :) But it is quite confusing to understand how it works. I have tried to update my example, but it still doesn't seem to work. Could you tell me, in my example, what would be the selector? I tried p::content, but as you see that doesn't seem to work. My use case example, again, is here: http://codepen.io/dbugger/pen/Hyihd On 28 November 2013 09:58, Hayato Ito hay...@google.com wrote: Yeah, Chrome has already implemented that. I've implemented that. :) On Thu, Nov 28, 2013 at 6:25 PM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Oh, interesting! Has this been implemented in any browser? On 28 November 2013 08:46, Hayato Ito hay...@google.com wrote: We can use '::content' pseudo elements if we want to apply styles to distributed nodes from a shadow tree. 
See http://w3c.github.io/webcomponents/spec/shadow/#content-pseudo-element On Thu, Nov 28, 2013 at 9:14 AM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Hello, I have been experimenting with Web Components and I have an issue, which I think I have represented fairly clearly here: (Tested on Chrome) http://codepen.io/dbugger/pen/Hyihd If I understood correctly, one of the points of web components is to control the presentation of our component. That is why we have the Shadow Boundary. BUT what happens with the elements that are represented with content? They are not considered part of the shadow, therefore the styles that the author decides to apply on them will invade the Web Component. Any thought on this? -- Enrique Moreno Tent, Web developer http://enriquemorenotent.com -- Hayato
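Put together, the fix the thread converges on looks roughly like the following shadow-root stylesheet. This is a sketch assuming the unprefixed `::content` syntax that was behind Chrome's Experimental Web Platform Features flag at the time; the selectors are illustrative, not taken from the pen.

```css
/* Inside the shadow root's <style>. ::content lets the component style
   the light-DOM nodes distributed through its <content> insertion
   point, so the component keeps the last word on their presentation. */
::content {
  color: gray;      /* baseline for all distributed nodes */
}

::content p {
  color: black;     /* more specific, so it wins over bare ::content
                       for distributed <p> elements */
}
```

Document-level author styles still reach distributed nodes (they remain in the light DOM), which is exactly the boundary question Enrique raises; `::content` gives the component a say, not exclusivity.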
Re: Controlling style of elements in Injection Points
And here's yet another version that should be usable in Stable Chrome and Canary: http://codepen.io/anon/pen/ybEch On Tue, Dec 10, 2013 at 4:08 PM, Daniel Freedman dfre...@google.com wrote: I've updated your pen with the other minor syntax changes that have occurred in Chrome Canary: @host -> :host template.content.cloneNode(true) -> document.importNode(template.content) ::content p {} will always win over ::content {}, so I moved the black color to the style for p { } http://codepen.io/anon/pen/tcjeh On Tue, Dec 10, 2013 at 2:58 PM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Thank you all! Finally I understand how it works :) I made a small pen to illustrate this better http://codepen.io/dbugger/pen/Hyihd On 1 December 2013 23:35, Daniel Freedman dfre...@google.com wrote: ::content is behind the Experimental Web Platform Features chrome flag, along with the unprefixed createShadowRoot. On Fri, Nov 29, 2013 at 6:00 AM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: I have actually gotten :-webkit-distributed(p) to work, but as I read it has been deprecated and ::content p should work (but it doesn't). Do I need maybe to activate a flag? I'm using Chrome 31 under Ubuntu, in case it matters. On 28 November 2013 18:39, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Thank you :) But it is quite confusing to understand how it works. I have tried to update my example, but it still doesn't seem to work. Could you tell me, in my example, what would be the selector? I tried p::content, but as you see that doesn't seem to work. My use case example, again, is here: http://codepen.io/dbugger/pen/Hyihd On 28 November 2013 09:58, Hayato Ito hay...@google.com wrote: Yeah, Chrome has already implemented that. I've implemented that. :) On Thu, Nov 28, 2013 at 6:25 PM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Oh, interesting! Has this been implemented in any browser? 
On 28 November 2013 08:46, Hayato Ito hay...@google.com wrote: We can use '::content' pseudo elements if we want to apply styles to distributed nodes from a shadow tree. See http://w3c.github.io/webcomponents/spec/shadow/#content-pseudo-element On Thu, Nov 28, 2013 at 9:14 AM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Hello, I have been experimenting with Web Components and I have an issue, which I think I have represented fairly clearly here: (Tested on Chrome) http://codepen.io/dbugger/pen/Hyihd If I understood correctly, one of the points of web components is to control the presentation of our component. That is why we have the Shadow Boundary. BUT what happens with the elements that are represented with content? They are not considered part of the shadow, therefore the styles that the author decides to apply on them will invade the Web Component. Any thought on this? -- Enrique Moreno Tent, Web developer http://enriquemorenotent.com -- Hayato
Re: [webcomponents] Auto-creating shadow DOM for custom elements
I've been thinking through the implications of this auto shadow proposal, and I'm glad people are seeing the utility of template, but I don't think this feature would see much use. Developers want data-binding, and the auto-cloning template does not give them a favorable timing model. They want to set those up before the ShadowDOM is stamped, on a per-instance level. If they were to use the automatic template, it would be far too late, and there could be unnecessary network requests or FOUC. To remove a bit of vagueness from this scenario, Polymer elements use data-binding in almost all cases. Event handlers, computed properties, MVC, everywhere. As such, no Polymer element would use the automatic template registration argument. I doubt that elements created with other libraries like Ember or Angular would make much use of it either. However, if some low-level data-binding primitives were introduced to the platform, there would be some real merit in an automatic template argument. There would have to be some modifications to the proposal, such as adding hooks for data-binding information to be given to the template instance, but I think those details can be discussed when such a data-binding spec arrives. Until data-binding primitives arise, I think this automatic template is a premature discussion. On Sat, Dec 7, 2013 at 8:33 PM, Rafael Weinstein rafa...@google.com wrote: On Sat, Dec 7, 2013 at 6:56 PM, Ryosuke Niwa rn...@apple.com wrote: On Dec 7, 2013, at 3:53 PM, Rafael Weinstein rafa...@google.com wrote: The issue is that being an element and having shadow DOM -- or any display DOM, for that matter -- are orthogonal concerns. There are lots of C++ HTML elements that have no display DOM. Polymer already has an even larger number. While that's true in browser implementations, there is very little authors can do with a plain element without any shadow content, since JavaScript can't implement its own style model (i.e. 
creating a custom frame object in Gecko / render object in WebKit/Blink) or paint code in JavaScript. If the only customization the author has to do is adding some CSS, then we don't need a custom element hook at all. I was thinking about elements whose purpose isn't presentational. For example, link or script in html, or polymer-ajax in polymer. It's true that mutation observers wouldn't run immediately after innerHTML if authors wanted to add some JS properties, but we could fix that issue in some other way; e.g. by delivering mutation records every time we run a parser. - R. Niwa
Re: Controlling style of elements in Injection Points
::content is behind the Experimental Web Platform Features chrome flag, along with the unprefixed createShadowRoot. On Fri, Nov 29, 2013 at 6:00 AM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: I have actually gotten :-webkit-distributed(p) to work, but as I read it has been deprecated and ::content p should work (but it doesn't). Do I need maybe to activate a flag? I'm using Chrome 31 under Ubuntu, in case it matters. On 28 November 2013 18:39, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Thank you :) But it is quite confusing to understand how it works. I have tried to update my example, but it still doesn't seem to work. Could you tell me, in my example, what would be the selector? I tried p::content, but as you see that doesn't seem to work. My use case example, again, is here: http://codepen.io/dbugger/pen/Hyihd On 28 November 2013 09:58, Hayato Ito hay...@google.com wrote: Yeah, Chrome has already implemented that. I've implemented that. :) On Thu, Nov 28, 2013 at 6:25 PM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Oh, interesting! Has this been implemented in any browser? On 28 November 2013 08:46, Hayato Ito hay...@google.com wrote: We can use '::content' pseudo elements if we want to apply styles to distributed nodes from a shadow tree. See http://w3c.github.io/webcomponents/spec/shadow/#content-pseudo-element On Thu, Nov 28, 2013 at 9:14 AM, Enrique Moreno Tent enriquemorenot...@gmail.com wrote: Hello, I have been experimenting with Web Components and I have an issue, which I think I have represented fairly clearly here: (Tested on Chrome) http://codepen.io/dbugger/pen/Hyihd If I understood correctly, one of the points of web components is to control the presentation of our component. That is why we have the Shadow Boundary. BUT what happens with the elements that are represented with content? 
They are not considered part of the shadow, therefore the styles that the author decides to apply on them will invade the Web Component. Any thought on this? -- Enrique Moreno Tent, Web developer http://enriquemorenotent.com -- Hayato
Re: [HTML Imports]: Sync, async, -ish?
I don't see this solution scaling at all. Imports are a tree. If you have any import that includes any other import, now the information about what tags to wait for has to be duplicated along every node in that tree. If a library author chooses to make any sort of all-in-one import to reduce network requests, they will have an absurdly huge list. For example: Brick <link rel="import" href="brick.html" elements="x-tag-appbar x-tag-calendar x-tag-core x-tag-deck x-tag-flipbox x-tag-layout x-tag-slidebox x-tag-slider x-tag-tabbar x-tag-toggle x-tag-tooltip"> or Polymer <link rel="import" href="components/polymer-elements.html" elements="polymer-ajax polymer-anchor-point polymer-animation polymer-collapse polymer-cookie polymer-dev polymer-elements polymer-expressions polymer-file polymer-flex-layout polymer-google-jsapi polymer-grid-layout polymer-jsonp polymer-key-helper polymer-layout polymer-list polymer-localstorage polymer-media-query polymer-meta polymer-mock-data polymer-overlay polymer-page polymer-scrub polymer-sectioned-list polymer-selection polymer-selector polymer-shared-lib polymer-signals polymer-stock polymer-ui-accordion polymer-ui-animated-pages polymer-ui-arrow polymer-ui-breadcrumbs polymer-ui-card polymer-ui-clock polymer-ui-collapsible polymer-ui-elements polymer-ui-field polymer-ui-icon polymer-ui-icon-button polymer-ui-line-chart polymer-ui-menu polymer-ui-menu-button polymer-ui-menu-item polymer-ui-nav-arrow polymer-ui-overlay polymer-ui-pages polymer-ui-ratings polymer-ui-scaffold polymer-ui-sidebar polymer-ui-sidebar-header polymer-ui-sidebar-menu polymer-ui-splitter polymer-ui-stock polymer-ui-submenu-item polymer-ui-tabs polymer-ui-theme-aware polymer-ui-toggle-button polymer-ui-toolbar polymer-ui-weather polymer-view-source-link"> On Thu, Nov 21, 2013 at 2:21 PM, Daniel Buchner dan...@mozilla.com wrote: Steve and I talked at the Chrome Dev Summit today and generated an idea that may align the stars for our async/sync needs: <link rel="import" elements="x-foo, x-bar" /> The idea is that imports are always treated as async, unless the developer opts in to blocking based on the presence of specific tags. If the parser finds custom elements in the page that match the user-defined elements tag names, it would block rendering until the associated link import has finished loading and registering the containing custom elements. Thoughts? - Daniel On Wed, Nov 20, 2013 at 11:19 AM, Daniel Buchner dan...@mozilla.com wrote: On Nov 20, 2013 11:07 AM, John J Barton johnjbar...@johnjbarton.com wrote: On Wed, Nov 20, 2013 at 10:41 AM, Daniel Buchner dan...@mozilla.com wrote: Dimitri: right on. The use of script-after-import is the forcing function in the blocking scenario, not imports. Yes. Let's not complicate the new APIs and burden the overwhelming use-case to service folks who intend to use the technology in alternate ways. I disagree, but happily the current API seems to handle the alternative just fine. The case Steve raised is covered and IMO correctly, now that you have pointed out that link supports the load event. His original example must block, and if he wants it not to block it's on him to hook the load event. For my bit, as long as the size of the components I include are not overly large, I want them to load before the first render and avoid getting FOUC'd or having to write a plethora of special CSS for the not-yet-upgraded custom element case. According to my understanding, you are likely to be disappointed: the components are loaded asynchronously, and on a slow network with a fast processor we will render page HTML before the component arrives. We should expect this to be the common case for the foreseeable future. There is, of course, the case of direct document.register() invocation from a script tag, which will/should block to ensure all elements in the original source are upgraded. My only point is that we need to be realistic - both cases are valid and there are good reasons for each. 
Might we be able to let imports load async, even when a script proceeds them, if we added a *per component type* upgrade event? (note: I'm not talking about a perf-destroying per component instance event) jjb Make the intended/majority case easy, and put the onus on the less common cases to think about more complex asset arrangement. - Daniel On Nov 20, 2013 10:22 AM, Dimitri Glazkov dglaz...@google.com wrote: John's commentary just triggered a thought in my head. We should stop saying that HTML Imports block rendering. Because in reality, they don't. It's the scripts that block rendering. Steve's argument is not about HTML Imports needing to be async. It's about supporting legacy content with HTML Imports. And I have a bit less sympathy for that argument. You can totally build fully asynchronous HTML Imports-based documents, if you follow these two simple rules: 1) Don't put scripts after imports in
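The elements= opt-in floated at the top of this thread could be modeled with a tiny predicate. A non-authoritative sketch (shouldBlockRendering and the attribute parsing are assumptions of mine; the elements attribute itself was a proposal here and was never standardized):

```javascript
// Given the set of custom-element tag names the parser has seen so far in the
// page, and an import's declared elements attribute value, decide whether that
// import should block rendering. No attribute means: stay fully async.
function shouldBlockRendering(seenTags, elementsAttr) {
  if (!elementsAttr) return false;                 // no opt-in: never block
  const declared = elementsAttr.trim().split(/[\s,]+/);
  return declared.some(tag => seenTags.has(tag));  // block only if a declared tag is used
}

// Example: a page using <x-foo> with an import declaring elements="x-foo, x-bar"
const seen = new Set(['x-foo', 'p', 'div']);
console.log(shouldBlockRendering(seen, 'x-foo, x-bar')); // true
console.log(shouldBlockRendering(seen, 'x-baz'));        // false
console.log(shouldBlockRendering(seen, null));           // false
```

This captures Daniel's duplication concern directly: an all-in-one import only works if its elements list enumerates every tag it might register.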
Re: [HTML Imports]: Sync, async, -ish?
On Wed, Nov 20, 2013 at 9:51 AM, John J Barton johnjbar...@johnjbarton.com wrote: Another alternative: First let's agree that Souders' example must block:

link rel=import href=import.php ...
div id=import-container/div
script
var link = document.querySelector('link[rel=import]');
var content = link.import.querySelector('#imported-content');
document.getElementById('import-container').appendChild(content.cloneNode(true));
/script

If we don't block on the script tag, then we have a race between the querySelector and the import; fail. To me the async solution is on the script, just like it is today with load events:

div id=import-container/div
script
var link = document.querySelector('link[rel=import]');
link.addEventListener('load', function() {
  var content = link.import.querySelector('#imported-content');
  document.getElementById('import-container').appendChild(content.cloneNode(true));
});
/script

Now the script still blocks, but it is short and just registers a handler. When the link loads, the handler is executed. Here the dependency is correctly set by the script tag. But I don't see a 'load' event in the Imports spec, http://www.w3.org/TR/html-imports/

The link element already has a generic load event in the HTML spec: http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#attr-link-crossorigin (I tried to get as close as possible to the actual line without overshooting). jjb

On Mon, Nov 18, 2013 at 1:40 PM, Dimitri Glazkov dglaz...@google.com wrote: 'Sup yo! There was a thought-provoking post by Steve Souders [1] this weekend that involved HTML Imports (yay!) and document.write (boo!), which triggered a Twitter conversation [2], which triggered some conversations with Arv and Alex, which finally erupted in this email. Today, HTML Imports loading behavior is very simply defined: they act like stylesheets. They load asynchronously, but block script from executing. Some peeps seem to frown on that and demand moar async.
I am going to claim that there are two distinct uses of link rel=import:

1) The import is the most important part of the document. Typically, this is when the import is the underlying framework that powers the app, and the app simply won't function without it. In this case, any more async will be all burden and no benefit.

2) The import is the least important part of the document. This is the +1 button case. The import is useful, but sure as hell doesn't need to take the rendering engine's attention from presenting this document to the user. In this case, async is sorely needed.

We should address both of these cases, and we don't right now -- which is a problem. Shoot-from-the-hip strawman:

* The default behavior stays as currently specified
* The async attribute on link makes the import load asynchronously
* Also, consider not blocking rendering when blocking script

This strawman is intentionally full of ... straw. Please provide a better strawman below: __ __ __ :DG

[1]: http://www.stevesouders.com/blog/2013/11/16/async-ads-with-html-imports/
[2]: https://twitter.com/codepo8/status/401752453944590336
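John's point earlier in this thread - that the link element already fires a generic load event - suggests a small helper for the "least important" case. A hedged sketch (whenImported is a hypothetical helper name of mine; link.import is from the HTML Imports spec; the stub below only mimics addEventListener so the wrapper can be exercised outside a browser):

```javascript
// Wrap the load/error events of a <link rel=import> in a promise, and only
// touch link.import once the import has actually finished loading.
function whenImported(link) {
  return new Promise((resolve, reject) => {
    link.addEventListener('load', () => resolve(link.import));
    link.addEventListener('error', reject);
  });
}

// Stub link that records handlers, standing in for a real <link> element.
const fakeLink = {
  import: { id: 'imported-content' },
  handlers: {},
  addEventListener(type, fn) { this.handlers[type] = fn; },
};
whenImported(fakeLink).then(doc => console.log(doc.id)); // logs "imported-content"
fakeLink.handlers.load(); // simulate the parser finishing the import
```

In a real page the usage would be whenImported(document.querySelector('link[rel=import]')).then(...), which keeps the inline script short and non-racy, exactly as John's second example does with a raw event handler.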
Re: [Shadow DOM] Simplifying level 1 of Shadow DOM
On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote: On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote: I'm concerned that if the spec shipped as you described, that it would not be useful enough to developers to bother using it at all.

I'm concerned that we can never ship this feature due to the performance penalties it imposes.

Without useful redistributions, authors can't use composition of web components very well without scripting. At that point, it's not much better than just leaving it all in the document tree.

I don't think having to inspect the light DOM manually is terrible

I'm surprised to hear you say this. The complexity of the DOM and CSS styling that modern web applications demand is mind-numbing. Having to create possibly hundreds of unique CSS selectors applied to possibly thousands of DOM nodes, hoping that no properties conflict and that no bizarre corner cases arise as nodes move in and out of the document - inspecting that DOM is a nightmare. Just looking at Twitter, a Tweet UI element is very complicated. It seems like they embed parts of the UI into data attributes (like data-expanded-footer). That to me looks like a prime candidate for placement in a ShadowRoot. The nested structure of it also suggests that they would benefit from node distribution through composition. That's why Shadow DOM is so important. It has the ability to scope complexity into things that normal web developers can understand, compose, and reuse.

, and we had been using shadow DOM to implement textarea, input, and other elements years before we introduced node redistributions.

Things like input and textarea are trivial compared to a YouTube video player, or a threaded email list with reply buttons and formatting toolbars. These are the real candidates for Shadow DOM: the UI controls that are complicated.

On May 1, 2013, at 8:57 AM, Tab Atkins Jr. jackalm...@gmail.com wrote: It's difficult to understand without working through examples yourself, but removing these abilities does not make Shadow DOM simpler, it just makes it much, much weaker.

It does make shadow DOM significantly simpler, at least in the areas we're concerned about. - R. Niwa
Re: [Shadow DOM] Simplifying level 1 of Shadow DOM
I'm concerned that if the spec shipped as you described, that it would not be useful enough to developers to bother using it at all. Without useful redistributions, authors can't use composition of web components very well without scripting. At that point, it's not much better than just leaving it all in the document tree.

I too would like to see the Web develop organically, bits at a time, but Shadow DOM fundamentally changes how the DOM works. This feels like, at the inception of the automobile, deciding that a gasoline engine is too complicated and instead selling customers a horse hitched to a body. We won't get good feedback on developer uptake if we take out the biggest game-changer.

On Tue, Apr 30, 2013 at 11:10 AM, Ryosuke Niwa rn...@apple.com wrote: I'm concerned that we're over-engineering here. I do understand adding reprojection significantly reduces the need to write author scripts, but I would like to see us implement the truly minimal set of features, have all browsers ship it, and see how authors, particularly those of high-profile websites such as Facebook and Twitter, use it in real life. Granted, I know people have talked to developers and got feedback, etc., but nothing beats use in production code. I'd like to see the Web platform improve organically by means of a positive feedback loop between browser vendors and Web developers. Take position: sticky for example. We saw web developers using JavaScript and CSS to emulate a particular mode of layout, so we added a new CSS value to natively support that. For reprojection, I don't think we have sufficient data points to tell how exactly authors are going to use shadow DOM or what they're going to create with it, and what percentage of common use cases reprojection addresses.

On Apr 30, 2013, at 10:52 AM, Erik Arvidsson a...@chromium.org wrote: The thing about reprojection is that it makes implementers' lives harder but it makes developers' lives easy. I'd rather have us do the hard work here.
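Since reprojection is the contested feature in this thread, a toy model may help pin down what it buys. A hedged sketch in plain objects (compose, the 'content'/'host' node shapes, and the element names are all invented for illustration, not spec API): a node distributed into one component's insertion point can be distributed again when that component forwards its children into a nested component's shadow tree.

```javascript
// Toy model of reprojection. A shadow "template" is an array of nodes;
// {type:'content'} marks an insertion point, {type:'host'} marks a nested
// component with its own shadow template. compose() fills insertion points
// with the host's light-DOM children; a nested host's children (which may be
// just-distributed nodes) are distributed again - that second hop is
// reprojection.
function compose(template, children) {
  return template.flatMap(node => {
    if (node.type === 'content') return children;
    if (node.type === 'host') {
      const resolved = compose(node.children, children); // fill <content> among the host's children first
      return [{ tag: node.tag, children: compose(node.shadow, resolved) }];
    }
    return node.children
      ? [{ ...node, children: compose(node.children, children) }]
      : [node];
  });
}

// x-inner wraps its distributed children in a span:
const innerShadow = [{ tag: 'span', children: [{ type: 'content' }] }];
// x-outer forwards its own light-DOM children into an <x-inner>:
const outerShadow = [
  { type: 'host', tag: 'x-inner', shadow: innerShadow,
    children: [{ type: 'content' }] },
];

const light = [{ tag: 'b' }];
console.log(JSON.stringify(compose(outerShadow, light)));
// → [{"tag":"x-inner","children":[{"tag":"span","children":[{"tag":"b"}]}]}]
```

The b element crosses two shadow boundaries without any author script; without reprojection, x-outer would need script to move its children into x-inner by hand, which is the composition gap Daniel is worried about.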
For the record, we have two independent implementations of the Shadow DOM spec, so that should debunk some of the myths that this is too hard to implement and maintain.

On Tue, Apr 30, 2013 at 10:37 AM, Ryosuke Niwa rn...@apple.com wrote: On Apr 25, 2013, at 2:42 PM, Edward O'Connor eocon...@apple.com wrote: First off, thanks to Dimitri and others for all the great work on Shadow DOM and the other pieces of Web Components. While I'm very enthusiastic about Shadow DOM in the abstract, I think things have gotten really complex, and I'd like to seriously propose that we simplify the feature for 1.0 and defer some complexity to the next level. I think we can address most of the use cases of shadow DOM while seriously reducing the complexity of the feature by making one change: what if we only allowed one insertion point in the shadow DOM? Having just one insertion point would let us push (most? all?) of this complexity off to level 2:

* distribution, getDistributedNodes(), etc.
* selector fragments matching criteria
* the /select/ combinator
* content select
* shadow ?
* reprojection

I'm in favor of removing all forms of redistribution except the one where the entire content is inserted at exactly one location. This will reduce the things authors can do, but it will considerably reduce the implementation complexity and eliminate almost all performance penalties. - R. Niwa -- erik
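To make the trade-off in Ted's proposal concrete, here is a toy comparison of the two distribution models: a single insertion point versus per-insertion-point select matching. Plain objects stand in for DOM nodes, and distributeSingle/distributeSelect are invented names for illustration, not spec API:

```javascript
// Level-1 simplification: every light-DOM child lands at the one <content>.
function distributeSingle(children) {
  return { content: children.slice() };
}

// Fuller model: children are matched against insertion points in tree order;
// the first matching point claims the child (unmatched children render nowhere).
function distributeSelect(children, insertionPoints) {
  const result = Object.fromEntries(insertionPoints.map(p => [p.name, []]));
  for (const child of children) {
    const point = insertionPoints.find(p => p.match(child));
    if (point) result[point.name].push(child);
  }
  return result;
}

const kids = [{ tag: 'h1' }, { tag: 'p' }, { tag: 'p' }];
console.log(distributeSingle(kids).content.length); // 3

const bySelect = distributeSelect(kids, [
  { name: 'title', match: n => n.tag === 'h1' },   // like <content select="h1">
  { name: 'rest',  match: () => true },            // like a bare <content>
]);
console.log(bySelect.title.length, bySelect.rest.length); // 1 2
```

The single-point model is plainly cheaper to implement and re-layout, while the select model is what lets a component place a heading and a body in different parts of its shadow tree, which is the expressiveness the thread is arguing over.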
Re: [webcomponents] linking using link rel=components href=...?
The spec you're looking for is https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/components/index.html which defines link rel=component href=...

On Fri, Mar 15, 2013 at 7:07 AM, Mike Kamermans niho...@gmail.com wrote: Hey all, I searched the archive at http://lists.w3.org/Archives/Public/public-webapps/ and checked out the https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html#definitions and https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html specs, but couldn't find anything about this in them: are there provisions in the spec (or is it in the works) to add a new link type so that if some person A defines a stack of useful templates, and person B wants to use those templates, they can include them on their own page using link rel=templates href=http://personA/templates.html (or rel=components, or some other relation name that makes sense for the role the link plays)? I was thinking about this in terms of using web components for something like Mozilla's Popcorn Maker, where it would be really cool if we could define all our components as templates, and then tell everyone this is our collection of templates, go grab popcorn.webmaker.org/templates.html if you want to use these on your own pages!. I really love the idea of web components, but it feels like being able to share them in the same way you can share .js or .css files would make them ridiculously powerful on the future web =) - Mike Pomax Kamermans, Mozilla Foundation
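The sharing pattern Mike describes maps directly onto the rel=component link type from the draft spec linked above. A hedged sketch of what person B's page might look like (the URL is Mike's hypothetical; rel=component comes from that draft, which was later reworked into rel=import in the HTML Imports spec):

```html
<!-- Person B's page, pulling in person A's shared component file.
     rel="component" is the draft-spec link type cited above; later drafts
     renamed this mechanism to rel="import". -->
<link rel="component" href="http://personA/templates.html">
```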