[webcomponents] How about let's go with slots?

2015-05-15 Thread Scott Miles
Polymer really wants Shadow DOM natively, and we think the `slot` proposal
can work, so maybe let's avoid blocking on design of an imperative API
(which we still should make in the long run).

As our entire stack is built on Web Components, the Polymer team is highly
motivated to help browser implementers come to agreement on a Shadow DOM
specification. Specifically, as authors of the `webcomponents-js`
polyfills, and more than one Shadow DOM shim, we are keenly aware of how
difficult Shadow DOM is to simulate without true native support.

I believe we are in general agreement with the implementers that an
imperative API, especially one that cleanly explains platform behavior, is
an ideal end point for Shadow DOM distribution. However, as has been
discussed at length, it’s likely that a proper imperative API is blocked on
other still-to-mature technologies. For this reason, we would like for the
working group to focus on writing the spec for the declarative `slot`
proposal [1]. We're happy to participate in the process.
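
For anyone who hasn't internalized the proposal, here is a minimal sketch
of how I read part 1 (`my-dialog` and the slot name are hypothetical; I'm
writing the insertion point as a `slot` element per the thread title, and
the `content-slot` attribute comes from the proposal):

<!-- in the page: children opt into named insertion points -->
<my-dialog>
  <span content-slot="title">Hello</span>
</my-dialog>

<!-- in my-dialog's shadow tree: a named insertion point -->
<h1><slot name="title"></slot></h1>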

[1]:
https://github.com/w3c/webcomponents/blob/gh-pages/proposals/Proposal-for-changes-to-manage-Shadow-DOM-content-distribution.md#proposal-part-1-syntax-for-named-insertion-points


Re: [webcomponents] How about let's go with slots?

2015-05-15 Thread Scott Miles
 How does it work for redistribution

We've done some investigation and think it can work.

 and the other downsides that have been brought up?

We have to tackle these deliberately, but mostly we think there is room for
consensus.

 You're okay with the "required to plaster content-slot='foo' all
over your page" requirement?

Me personally, this is the least palatable part of the `slot` proposal. But
if, after all the discussion is over, the consensus is that the pros
outweigh the cons, then yeah, it's not blocking from my perspective. For
sure, I'd at least like a shorter attribute name than `content-slot`, but
it seems like that bikeshedding can wait until later. ;)

Scott

On Fri, May 15, 2015 at 5:24 PM, Tab Atkins Jr. jackalm...@gmail.com
wrote:

 On Fri, May 15, 2015 at 4:58 PM, Scott Miles sjmi...@google.com wrote:
  Polymer really wants Shadow DOM natively, and we think the `slot`
 proposal
  can work, so maybe let's avoid blocking on design of an imperative API
  (which we still should make in the long run).

 How does it work for redistribution, and the other downsides that have
 been brought up?  Are you saying that those cases just aren't
 important enough to be blocking at the moment?

  You're okay with the "required to plaster content-slot='foo' all over
 your page" requirement?

 ~TJ



Re: [Imports]: Stylesheet cascading order clarification

2014-11-03 Thread Scott Miles
I know this is probably the wrong place/time to say this, but fwiw, a
primary use case for imports is replacing:

<script src="my-lib/my-lib.js"></script>
<!-- the script above might have some HTML in it, encoded as a string,
comment, or other hack -->
<!-- the script above may load additional dependencies via some elaborate
loader -->
<link rel="stylesheet" href="my-lib/my-lib.css">


with

<link rel="import" href="my-lib/my-lib.html">
<!-- html and transitive loading all taken care of by imports -->


Having the imported stylesheets apply to the main document is a big part of
the value here. If the stylesheets are for some other purpose, it's easy to
put them in a template, but the reverse is not true.

I realize implementation difficulty may trump ergonomics, but I wanted to
make sure this part was heard.
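
To make the asymmetry concrete, here is a minimal sketch (file names
hypothetical): a stylesheet meant for some other purpose can be made inert
by wrapping it in a template, but there is no equivalent opt-in for the
main-document case if imports stop applying stylesheets globally.

<!-- my-lib.html -->
<link rel="stylesheet" href="my-lib.css">   <!-- wanted: applies to the importer -->
<template>
  <link rel="stylesheet" href="widget.css"> <!-- inert until the template is stamped -->
</template>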

Scott


On Mon, Nov 3, 2014 at 10:12 AM, Tab Atkins Jr. jackalm...@gmail.com
wrote:

 On Mon, Nov 3, 2014 at 7:28 AM, Gabor  Krizsanits
 gkrizsan...@mozilla.com wrote:
  During our last meeting we all seemed to agree that defining/implementing
  order for style-sheets in imports is super hard (if possible) and will
  bring more pain than it's worth for the web (aka let's not make an
  already over-complicated system twice as complicated for very little
  benefit). And the consensus was that we should just not allow global
  styles in imports.

  Some months have passed but I still don't see any update on the spec in
  this regard, so I'm just double-checking that we're still planning to do
  this or whether anything has changed since then.

 Out of curiosity, why is it hard?  Without much background in the
 implementation matters, it doesn't seem that a link rel=import that
 contains a link rel=stylesheet should be any different than a link
 rel=stylesheet that contains an @import rule.

 ~TJ




Re: Relative URLs in Web Components

2014-10-05 Thread Scott Miles
 The URL is parsed again? That seems like something that should not
 happen. Are you copying the node perhaps?

There is no explicit copying, but I don't know if there is something
implicit happening when the element goes trans-document.

Sample code (assume index.html that imports import/import.html).

<!-- import/import.html -->

<img src="beaker.jpg">

<script>

  // node above in this import
  var img = document.currentScript.ownerDocument.querySelector('img');

  console.log(img.src, img.getAttribute('src'));
  // => 'import/beaker.jpg', 'beaker.jpg'

  // do this to freeze the src
  //img.src = img.src;

  // move node to window.document
  document.body.appendChild(img);

  console.log(img.src, img.getAttribute('src'));
  // => 'beaker.jpg', 'beaker.jpg'

  // img.src is 404

</script>


Live version at http://sjmiles.github.io/import-beaker/ (needs Chrome for
HTMLImports).

 neither of these solves the case for a script in an imported document
 [that would] require usage of the URL API to properly resolve base URLs
 first, which is not likely something you would think about.

Very true. For Polymer we took this hit and gave every element a
`resolvePath` fixup method. As you say, you'd never consider this without
documentation, and some lose a toe. Otoh, it may be a survivable
compromise. Anecdotally, it hasn't come up with heinous frequency, and the
solution seems to appease those who find it.
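
For the record, a sketch of the fixup idiom (the element, the `photo` node
id, and the image path are hypothetical; treat the signature as
illustrative rather than authoritative):

Polymer('x-gallery', {
  ready: function() {
    // resolvePath maps an import-relative path to a URL that is still
    // correct once the element lives in the main document
    this.$.photo.src = this.resolvePath('images/beaker.jpg');
  }
});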

 The lack of encapsulation is a major hassle.

I'm not sure what you mean; can you elaborate? If this is the root cause,
maybe we attack it there.

Thanks,
Scott

On Sun, Oct 5, 2014 at 1:09 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Sat, Oct 4, 2014 at 7:00 PM, Scott Miles sjmi...@google.com wrote:
  An issue is that a relative URL is only correct as long as the `img` (for
  example) is owned by the import. If you migrate the element to the main
  document, the path is now relative to the wrong base, and users are
  confused. One can do `img.src = img.src;` before migrating the node, and
  that will freeze the resolved path, but this is somewhat arcane.

 The URL is parsed again? That seems like something that should not
 happen. Are you copying the node perhaps?


  As Anne points out, this issue is worse when you start using templates.
 The
  `img` in the template isn't live and never has a resolved `src` property.
 
  If the use cases are:
 
  - migrating a sub-tree from an import to another document
  - migrating a sub-tree from a template to another document
 
  In both cases, users frequently want migrated elements to retain a base
 URL
  from the original document (except when they don't, for example when they
  are using anchor links, href="#foo" =/).

 This problem stems directly from components not having proper
 encapsulation.

 There's two ways this can be solved and neither seems particularly
 attractive:

 1) We parse imported documents in a special way. Either having
 elements parse their URLs at an earlier point or actually storing the
 parsed URL in the tree. Note that this would also require a special
 parsing mode for CSS and that we would have to parse CSS (not apply)
 even in template.

 2) Rather than document-scoped, we make base URLs node-scoped and
 provide a way to move a node around while preserving its base URL
 (node because at least Element and DocumentFragment would need this).
 The implication here is that URL parsing for every node becomes a
 more expensive operation due to tree traversal.

 And again, neither of these solves the case for a script in an
 imported document setting innerHTML, or fetching something. They would
 require usage of the URL API to properly resolve base URLs first which
 is not likely something you would think about.

 The lack of encapsulation is a major hassle.


 --
 https://annevankesteren.nl/



Re: Relative URLs in Web Components

2014-10-04 Thread Scott Miles
An issue is that a relative URL is only correct as long as the `img` (for
example) is owned by the import. If you migrate the element to the main
document, the path is now relative to the wrong base, and users are
confused. One can do `img.src = img.src;` before migrating the node, and
that will freeze the resolved path, but this is somewhat arcane.

As Anne points out, this issue is worse when you start using templates. The
`img` in the template isn't live and never has a resolved `src` property.

If the use cases are:

- migrating a sub-tree from an import to another document
- migrating a sub-tree from a template to another document

In both cases, users frequently want migrated elements to retain a base URL
from the original document (except when they don't, for example when they
are using anchor links, href="#foo" =/).

I've hand-waved definitions of 'migrating', 'base URL', and 'original
document', but I'm only trying to frame the (as Anne said, hard) problem.

Scott

On Sat, Oct 4, 2014 at 6:27 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Oct 2, 2014 at 3:09 PM, Anne van Kesteren ann...@annevk.nl
 wrote:
  This is a hard problem:
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=20976#c8

 I saw you commented on the bug, but I prefer keeping that bug focused
 on several other problems around base URLs so let's continue here. You
 gave this example:

 # Consider this document, located at
 # https://example.com/some-component/import-me.html:
 #
 #   <img src="foo">
 #
 # It would be nice if the import process would *somehow*
 # turn that into…
 #
 #   <img src="https://example.com/some-component/foo">
 #
 # …before inserting it into the parent document.

 As far as I can tell this particular example should already work. The
 base URL for that img element will be that of the document it is in,
 which is the import (at least per the algorithms in HTML Imports).
 What makes you think it would not work?

 The problem is with template as that isolates elements which will
 therefore not be processed and their associated URLs will therefore
 not parse, etc. Now we could perhaps add a special
 in-template-processing model for all elements that can have one or
 more associated URLs, or something along those lines, but it's not
 clear that's worth it.


 --
 https://annevankesteren.nl/




Re: [HTML Imports] What is the imagined work flow?

2014-05-21 Thread Scott Miles
Some of the ways Polymer team uses imports are as follows:

- aggregating <script src> and/or <link rel=stylesheet> elements into
functional units
- aggregating imports themselves into units
- expressing dependencies (N modules can each import jquery2-import.html
and I only get one copy of jquery)
- importing self-organizing databases via custom elements (e.g. core-meta
elements describe/provide metadata using monostate pattern)

Also, one of the first things Polymer does is register a custom-element
which itself provides a declarative interface to the custom element
machinery. Most other Polymer elements are then structured declaratively
(as HTML) which makes using imports highly convenient.

 would stick a style element in the imported document

You can do that, reference an external stylesheet, or place a (scoped)
style tag directly in the shadow-root.

E.g. using Polymer idiom

<polymer-element name="my-button" noscript>
<template>
<style>
  :host > div.someclass {
    color: aliceblue;
  }
</style>
<div class="someclass">my-button</div>
</template>
</polymer-element>


On Tue, May 20, 2014 at 10:08 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 Over here at mozilla we've been trying to understand how the HTML
 Imports spec is intended to be used.

 We have so far received descriptions of how the spec works. I.e. what
 happens when the various import related attributes are added to a
 <link rel=import>.

 However I'm curious to understand the expected developer work flows.

 Let me make a few guesses to clarify the type of description I'm looking
 for.

 Clearly imports are expected to be used together with web components.
 However so far web components are mainly an imperative API, and not a
 declarative thing. Any element registrations need to be created
 through JS calls, and all the shadow DOM inside a custom element needs
 to be created using JS calls and then inserted into the shadow root
 using script.

 At first glance it seems like a simple <script src=...> would then
 provide all that you need?

 However it might be tedious to create all elements using createElement
 and appendChild calls. A better work flow is to stick a script in a
 <link rel=import>ed document together with some template elements.
 Then clone the contents of those templates from the constructors of the
 custom elements.

 And in order to style the custom elements, you would stick a style
 element in the imported document which would have rules like

 my-button::shadow > div.someclass {
   color: aliceblue;
 }

 Is this an accurate description? Are there other reasons to stick
 non-script content in the HTML? Are there any examples out there of
 how HTML imports are intended to be used?

 / Jonas




Re: [HTML Imports] What is the imagined work flow?

2014-05-21 Thread Scott Miles
Sorry, but just a bit of follow up.

One may notice that the Web Components spec is imperative and assume that
declarative support is not important. But as it turns out, the notion of
using custom elements to bootstrap declarative syntaxes allows various
parties to experiment in the real world, as opposed to a working group
trying to resolve the trade-offs in an a priori spec.

I mention this, because although I used Polymer as an example (it's my
project after all), the fact is we hope people will use web-components like
this:

<link rel="import" href="sweet-button.html">
...
<sweet-button></sweet-button>

Is sweet-button implemented via Polymer? X-tags? Vanilla JavaScript? The user
doesn't need to know in order to use the resource.

Ideally, best-of-breed technology emerges and the option to standardize is
still available.



On Tue, May 20, 2014 at 11:56 PM, Scott Miles sjmi...@google.com wrote:

 Some of the ways Polymer team uses imports are as follows:

 - aggregating <script src> and/or <link rel=stylesheet> elements into
 functional units
 - aggregating imports themselves into units
 - expressing dependencies (N modules can each import jquery2-import.html
 and I only get one copy of jquery)
 - importing self-organizing databases via custom elements (e.g.
 core-meta elements describe/provide metadata using monostate pattern)

 Also, one of the first things Polymer does is register a custom-element
 which itself provides a declarative interface to the custom element
 machinery. Most other Polymer elements are then structured declaratively
 (as HTML) which makes using imports highly convenient.

  would stick a style element in the imported document

 You can do that, reference an external stylesheet, or place a (scoped)
 style tag directly in the shadow-root.

 E.g. using Polymer idiom

 <polymer-element name="my-button" noscript>
 <template>
 <style>
   :host > div.someclass {
     color: aliceblue;
   }
 </style>
 <div class="someclass">my-button</div>
 </template>
 </polymer-element>


 On Tue, May 20, 2014 at 10:08 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 Over here at mozilla we've been trying to understand how the HTML
 Imports spec is intended to be used.

 We have so far received descriptions of how the spec works. I.e. what
 happens when the various import related attributes are added to a
 <link rel=import>.

 However I'm curious to understand the expected developer work flows.

 Let me make a few guesses to clarify the type of description I'm looking
 for.

 Clearly imports are expected to be used together with web components.
 However so far web components are mainly an imperative API, and not a
 declarative thing. Any element registrations need to be created
 through JS calls, and all the shadow DOM inside a custom element needs
 to be created using JS calls and then inserted into the shadow root
 using script.

 At first glance it seems like a simple <script src=...> would then
 provide all that you need?

 However it might be tedious to create all elements using createElement
 and appendChild calls. A better work flow is to stick a script in a
 <link rel=import>ed document together with some template elements.
 Then clone the contents of those templates from the constructors of the
 custom elements.

 And in order to style the custom elements, you would stick a style
 element in the imported document which would have rules like

 my-button::shadow > div.someclass {
   color: aliceblue;
 }

 Is this an accurate description? Are there other reasons to stick
 non-script content in the HTML? Are there any examples out there of
 how HTML imports are intended to be used?

 / Jonas





Re: [Custom Elements] attributeChanged not sufficient?

2014-03-31 Thread Scott Miles
I certainly support some kind of per-element media query, or resize event,
if the well-known performance issues around these ideas can be resolved,
but otherwise Custom Elements don't have a lot to say about this problem.

 Typically, when using a plain JS API (and not a custom declarative
markup), users are used to calling a size synchronization routine should the
map viewport change.

This notion hasn't changed. In the absence of native resize signals,
applications or frameworks will need to manage this information themselves,
and broadcast custom signals (e.g. 'call a size synchronization routine').
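
A sketch of what such a custom signal might look like (the event name and
element selection are hypothetical):

// the application owns layout, so it tells interested elements to re-measure
function broadcastViewportChange() {
  var maps = document.querySelectorAll('my-map');
  for (var i = 0; i < maps.length; i++) {
    maps[i].dispatchEvent(new CustomEvent('viewport-changed'));
  }
}
window.addEventListener('resize', broadcastViewportChange);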

Fwiw, I believe this question is orthogonal to `attributeChanged` (or
attributes in general).

Scott


On Mon, Mar 31, 2014 at 4:20 AM, Ondřej Žára ondrej.z...@firma.seznam.cz wrote:

 Hi,

 let me introduce my Custom Element scenario: an interactive map element,
 powered by one of the well-known JS APIs (such as Google Maps API or so).

 Typically, the markup will be like

 <my-map lat="..." lon="..." zoom="..." controls>

 However, the underlying JS needs to know when this element's rendered size
 changes; the viewport needs to be filled with new map tiles and other geo
 data.

 Typically, when using a plain JS API (and not a custom declarative
 markup), users are used to calling a size synchronization routine should
 the map viewport change. This is no longer the case when a Custom Element
 is introduced (and scripting is replaced by declarative HTML).

 A user may insert a map element anywhere in the page (see
 http://api4.mapy.cz/ for reference), including a variable-width box in a
 sidebar or so. This means that the my-map element itself cannot determine
 when its own (rendered) size changes, as the attributeChanged callback only
 applies to own attributes.

 Is there some recommended way of dealing with this?


 Sincerely,
 Ondrej Zara



 --
 *RNDr. Ondřej Žára*
 Senior UI Programmer

 https://twitter.com/0ndras
 ondrej.z...@firma.seznam.cz
 http://www.seznam.cz/

 Seznam.cz, a.s., Radlická 3294/10, 150 00 Praha 5 http://mapy.cz/s/6rw4






Re: [custom-elements] :unresolved and :psych

2014-03-26 Thread Scott Miles
Yes, I agree with what R. Niwa says.

I believe there are many variations on what should happen during element
lifecycle, and the element itself is best positioned to make those choices.

`:unresolved` is special because it exists a priori, before the element has
any control.
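
Which is why pre-upgrade styling belongs to the page; a minimal sketch
(selector and styling are hypothetical):

my-map:unresolved {
  visibility: hidden; /* the element can't style itself before upgrade */
}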

Scott


On Wed, Mar 26, 2014 at 12:26 PM, Ryosuke Niwa rn...@apple.com wrote:

 Maybe the problem comes from not distinguishing elements being created and
 ready for API access versus elements being ready for interaction?

 I’d also imagine that the exact appearance of a custom element between the
 time the element is created and the time it is ready for interaction will
 depend on what the element does.   e.g. img behaves more or less like
 display:none at least until the dimension is available, and then updates
 the screen as the image is loaded.  iframe on the other hand will occupy
 the fixed size in accordance to its style from the beginning, and simply
 updates its content.

 Given that, I’m not certain adding another pseudo element in UA is the
 right approach here.  I suspect there could be multiple states between the
 time element is created and it’s ready for user interaction for some custom
 elements.  Custom pseudo, for example, seems like a more appealing solution
 in that regard.

 - R. Niwa

 On Mar 25, 2014, at 2:31 PM, Brian Kardell bkard...@gmail.com wrote:

 I'm working with several individuals of varying skillsets on using/making
 custom elements - we are using a way cut-back subset of what we think are
 the really stable just to get started but I had an observation/thought that
 I wanted to share with the list based on feedback/experience so far...

 It turns out that we have a lot of what I am going to call async
 components - things that involve calling 1 or more services during their
 creation in order to actually draw something useful on the screen.  These
 range from something simple like an RSS element (which, of course, has to
 fetch the feed) to complex wizards which have to consult a service to
 determine which view/step they are even on and then potentially additional
 request(s) to display that view in a good way.  In both of these cases I've
 seen confusion over the :unresolved pseudo-class.  Essentially, the created
 callback has happened so from the currently defined lifecycle state it's
 :resolved, but still not useful.  This can easily be messed up at both
 ends (assuming that the thing sticking markup in a page and the CSS that
 styles it are two ends) such that we get FOUC garbage between the time
 something is :resolved and when it is actually conceptually ready.  I
 realize that there are a number of ways to work around this and maybe do it
 properly such that this doesn't happen, but there are an infinitely
 greater number of ways to barf unhappy content into the screen before its
 time.  To everyone who I see look at this, it seems they conceptually
 associate :resolved with ok it's ready, and my thought is that isn't
 necessarily an insensible thing to think since there is clearly a
 pseudo-class about 'non-readiness' of some kind and nothing else that seems
 to address this.

 I see a few options, I think all of them can be seen as enhancements, not
 necessary to a v1 spec if it is going to hold up something important.   The
 first would be to let the created callback optionally return a promise - if
 returned we can delay :resolved until the promise is fulfilled.  The other
 is to introduce another pseudo like :loaded and let the author
 participate in that somehow, perhaps the same way (optionally return a
 promise from created).  Either way, it seems to me that if we had that, my
 folks would use that over the current definition of :resolved in a lot of
 cases.



 --
 Brian Kardell :: @briankardell :: hitchjs.com





Re: [HTML imports]: Imports and Content Security Policy

2014-01-30 Thread Scott Miles
I'm hoping there are some constraints we can impose on imports that would
allow them to contain inline scripts under CSP.

Failing that, we already have a tool ('vulcanizer') which can separate
scripts out of imports (and do the reverse as well).

Whether an import uses inline or external scripts is invisible to the
importer.
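
A before/after sketch of that separation (file names hypothetical; the
output shape is illustrative, not the tool's exact behavior):

<!-- before: x-foo.html with an inline script -->
<template>...</template>
<script>Polymer('x-foo', { /* ... */ });</script>

<!-- after: the script body is externalized, which a CSP can allow -->
<template>...</template>
<script src="x-foo.js"></script>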


On Wed, Jan 29, 2014 at 5:47 PM, Gabor Krizsanits
gkrizsan...@mozilla.com wrote:

 One more thing that little bit worries me, that the most common request
 when it comes to CSP is banning inline scripts. If all the imports obey the
 CSP of the master, which I think the only way to go, that also probably
 means that in most cases we can only use imports those do not have any
 inline scripting either... I think this should be mentioned in the spec.
 Since if you develop some huge library let's say, based on imports, and
 then no costumer can use it who also want to have CSP, because it's full of
 inline scripts, that would be quite annoying.





Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-13 Thread Scott Miles
You cannot pass the shadow root to the constructor, because each class in
the chain can have its own shadow root. This is the core of the problem.



On Fri, Dec 13, 2013 at 1:16 AM, Maciej Stachowiak m...@apple.com wrote:


 On Dec 9, 2013, at 11:13 AM, Scott Miles sjmi...@google.com wrote:

 Domenic Denicola a few messages back gave a highly cogent explanation of
 the exact line of thinking arrived at last time we went through all this
 material.

 I'm not wont to try to summarize it here, since he said it already better
 there. Perhaps the short version is: nobody knows what the 'standard use
 case' is yet.

 In previous adjudications, the straw that broke the camel's back was with
 respect to handling auto-generation with inheritance. Shadow roots may need
 to be generated for each entry in the inheritance chain. Having the system
 perform this task takes it out of the control of the user's code, which
 otherwise has the ability to modulate calls to super-class methods and manage
 this process.

 class XFoo {
   constructor_or_createdCallback() {
     // my shadowRoot was auto-generated
     this.doUsefulStuffLikeDatabinding(this.shadowRoot);
   }
 }

 class XBar extends XFoo {
   constructor_or_createdCallback() {
     super(); // uh-oh, super call operates on wrong shadowRoot
   }
 }


 If the shadow root is optionally automatically generated, it should
 probably be passed to the createdCallback (or constructor) rather than made
 a property named shadowRoot. That makes it possible to pass a different
 shadow root to the base class than to the derived class, thus solving the
 problem.

 Using an object property named shadowRoot would be a bad idea in any
 case since it automatically breaks encapsulation. There needs to be a
 private way to store the shadow root, either using ES6 symbols, or some new
 mechanism specific to custom elements. As it is, there's no way for ES5
 custom elements to have private storage, which seems like a problem. They
 can't even use the closure approach, because the constructor is not called
 and the methods are expected to be on the prototype. (I guess you could
 create per-instance copies of the methods closing over the private data in
 the created callback, but that would preclude prototype monkeypatching of
 the sort built-in HTML elements allow.)

 Regards,
 Maciej





Re: [custom elements] Improving the name of document.register()

2013-12-11 Thread Scott Miles
I also agree with Ted.

I prefer 'registerElement' because I'm used to the concept of registration
wrt custom elements, but I'm not grinding any axe.

Scott


On Wed, Dec 11, 2013 at 6:46 PM, Dominic Cooney domin...@google.com wrote:

 On Thu, Dec 12, 2013 at 5:17 AM, pira...@gmail.com pira...@gmail.com wrote:

 I have seen registerProtocolHandler() and it's being discused
 registerServiceWorker(). I believe registerElementDefinition() or
 registerCustomElement() could help to keep going on this path.

 Sent from my Samsung Galaxy Note II
 On 11/12/2013 21:10, Edward O'Connor eocon...@apple.com wrote:

 Hi,

 The name "register" is very generic and could mean practically anything.
 We need to adopt a name for document.register() that makes its purpose
 clear to authors looking to use custom elements or those reading someone
 else's code that makes use of custom elements.


 I support this proposal.


  Here are some ideas:

 document.defineElement()
 document.declareElement()
 document.registerElementDefinition()
 document.defineCustomElement()
 document.declareCustomElement()
 document.registerCustomElementDefinition()

 I like document.defineCustomElement() the most, but
 document.defineElement() also works for me if people think
 document.defineCustomElement() is too long.


 I think the method should be called registerElement, for these reasons:

 - It's more descriptive about the purpose of the method than just
 register.
 - It's not too verbose; it doesn't have any redundant part.
 - It's nicely parallel to registerProtocolHandler.

 If I had to pick from the list Ted suggested, I think defineElement is the
 best of that bunch and also an improvement over just register. It doesn't
 line up with registerProtocolHandler, but there's some poetry to
 defineElement/createElement.


 Ted

 P.S. Sorry for the bikeshedding. I really believe we can improve the
 name of this function to make its purpose clear.


 I searched for bugs on this and found none; I expect this was discussed
 but I can't find a mail thread about it. The naming of register is
 something that's been on my mind so thanks for bringing it up.

 Dominic



Re: [webcomponents] Auto-creating shadow DOM for custom elements

2013-12-09 Thread Scott Miles
Domenic Denicola a few messages back gave a highly cogent explanation of
the exact line of thinking arrived at last time we went through all this
material.

I'm not wont to try to summarize it here, since he said it already better
there. Perhaps the short version is: nobody knows what the 'standard use
case' is yet.

In previous adjudications, the straw that broke the camel's back was with
respect to handling auto-generation with inheritance. Shadow roots may need
to be generated for each entry in the inheritance chain. Having the system
perform this task takes it out of the control of the user's code, which
otherwise has the ability to modulate calls to super-class methods and manage
this process.

class XFoo {
  constructor_or_createdCallback() {
    // my shadowRoot was auto-generated
    this.doUsefulStuffLikeDatabinding(this.shadowRoot);
  }
}

class XBar extends XFoo {
  constructor_or_createdCallback() {
    super(); // uh-oh, super call operates on wrong shadowRoot
  }
}

Scott


On Mon, Dec 9, 2013 at 7:20 AM, Brian Kardell bkard...@gmail.com wrote:

 +public-nextweb _ i encourage folks there to check out
 public-webapps@w3.org as this conversation is deep and multi-forked and I
 am late to the party.

 On Dec 7, 2013 4:44 PM, Brendan Eich bren...@secure.meer.net wrote:
 
  What does polymer do? Cows are already treading paths.
 
  I still smell a chance to do better out of the gate (gave, thanks
 autospellcheck! lol). Call me picky. Knee-jerking about scenario solving (I
 think I taught Yehuda that one) doesn't help. Particular response, please.
 
  /be
 

 I think the most important part is to first ensure that we -can- explain
 the magic with core apis even if they are initially saltier than we'd all
 like.  When reasonable opportunities present themselves to improve
 developer ergonomics, we should take them - it doesn't preclude other
 opportunities for better flowers to bloom.

 The issues in this specific case in my mind surround the observation that
 it feels like it is attempting to bind several layers together which are in
 various states of done and conceptually what we have is more like a
 squirrel path than a cow path on this piece.  Without bindings or some kind
 of pattern for solving those use cases, template is a lesser thing - and I
 think we are far from that.  Templates aren't necessary for a useful
 document.register().  Shadow DOM isn't either but it's more obvious where
 the connections are and it seems considerably more stable.  There also
 isn't necessarily a 1:1 relationship of component to template, so we have
 to be careful there lest we add confusion.  Is this really a ShadowHost?

 I am not actually sure that the initial message in this thread really
 needs to have anything particular to the template element though, it looks
 like the optional third argument could be any Node - and that does actually
 seem to describe a useful and common pattern which we could easily explain
 in existing terms and it might be fruitful to think about that.



Re: [webcomponents] HTML Imports

2013-12-04 Thread Scott Miles
 seems a specification that seems really pushed/rushed

Since my team (Polymer) has been working with imports in practice for a
year-and-a-half (100% public and open-source, btw) this seems a strange
conclusion. But this is only my perspective, I'm still a standards n00b I
suppose.

In any case, I codified the concepts that our team has been espousing in a
document here:

https://docs.google.com/document/d/14qJlCgda7GX2_KKxYhj1EULmY_hqNH35wjqDgGSkkOo/edit#

The aim of this document was to address some of the questions around
pragmatic operation of the spec as we see it.

Scott

On Wed, Dec 4, 2013 at 4:32 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Wed, Dec 4, 2013 at 9:21 AM, Brian Di Palma off...@gmail.com wrote:
  I would say though that I get the feeling that Web Components seems a
  specification that seems really pushed/rushed and I worry that might
  lead to some poor design decisions whose side effects will be felt by
  developers in the future.

 I very much share this sentiment.


 --
 http://annevankesteren.nl/



Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread Scott Miles
 I love the idea of making HTML imports *not* block rendering as the
default behavior

So, for what it's worth, the Polymer team has the exact opposite desire. I
of course acknowledge use cases where imports are being used to enhance
existing pages, but the assertion that this is the primary use case is at
least arguable.

  It would be the web dev's responsibility to confirm that the import was
done loading

Our use cases almost always rely on imports to make our pages sane.
Requiring extra code to manage import readiness is a headache.
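
Concretely, the readiness bookkeeping looks something like this sketch
(the href is hypothetical; `link.import` is the imported document per the
current draft):

<link rel="import" href="app.html" id="app">
<script>
  var link = document.querySelector('#app');
  link.addEventListener('load', function() {
    // only now is it safe to pull content out of the import
    var doc = link.import;
    // ... stamp templates, register things, etc.
  });
</script>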

Dimitri's proposal above tries to be inclusive to both world views, which I
strongly support as both use-cases are valid.

Scott

On Mon, Nov 18, 2013 at 2:25 PM, Steve Souders soud...@google.com wrote:

 I love the idea of making HTML imports *not* block rendering as the
 default behavior. I believe this is what JJB is saying: make <link
 rel=import> NOT block script.

 This is essential because most web pages are likely to have a SCRIPT tag
 in the HEAD, thus the HTML import will block rendering of the entire page.
 While this behavior is the same as stylesheets, it's likely to be
 unexpected. Web devs know the stylesheet is needed for the entire page and
 thus the blocking behavior is more intuitive. But HTML imports don't affect
 the rest of the page - so the fact that an HTML import can block the entire
 page the same way as stylesheets is likely to surprise folks. I don't have
 data on this, but the reaction to my blog post reflects this surprise.

 Do we need to add a sync (aka blockScriptFromExecuting) attribute? I
 don't think so. It would be the web dev's responsibility to confirm that
 the import was done loading before trying to insert it into the document
 (using the import ready flag). Even better would be to train web devs to
 use the LINK's onload handler for that.

 -Steve





 On Mon, Nov 18, 2013 at 10:16 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Maybe Steve's example[1] could be on JS rather than on components:

 System.component("import.php", function(component) {
   var content = component.content;
   document.getElementById('import-container').appendChild(content.cloneNode(true));
 });

 Here we mimic System.load(jsId, success, error).  Then make <link> not
 block script: it's on JS to express the dependency correctly.

 jjb


 On Mon, Nov 18, 2013 at 1:40 PM, Dimitri Glazkov dglaz...@google.com wrote:

 'Sup yo!

 There was a thought-provoking post by Steve Souders [1] this weekend
 that involved HTML Imports (yay!) and document.write (boo!), which
 triggered a Twitter conversation [2], which triggered some conversations
 with Arv and Alex, which finally erupted in this email.

 Today, HTML Imports loading behavior is very simply defined: they act
 like stylesheets. They load asynchronously, but block script from
 executing. Some peeps seem to frown on that and demand moar async.

 I am going to claim that there are two distinct uses of <link
 rel=import>:

 1) The import is the most important part of the document. Typically,
 this is when the import is the underlying framework that powers the app,
 and the app simply won't function without it. In this case, any more async
 will be all burden and no benefit.

 2) The import is the least important part of the document. This is the +1
 button case. The import is useful, but sure as hell doesn't need to take
 rendering engine's attention from presenting this document to the user. In
 this case, async is sorely needed.

 We should address both of these cases, and we don't right now -- which
 is a problem.

 Shoot-from-the-hip Strawman:

 * The default behavior stays as currently specified
 * The async attribute on <link> makes the import load asynchronously
 * Also, consider not blocking rendering when blocking script

 This strawman is intentionally full of ... straw. Please provide a
 better strawman below:
 __
 __
 __

 :DG

 [1]:
 http://www.stevesouders.com/blog/2013/11/16/async-ads-with-html-imports/
 [2]: https://twitter.com/codepo8/status/401752453944590336






Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread Scott Miles
 I'll assert that the primary use case for JS interacting with HTML
components ought to be 'works well with JS modules'.

You can happily define modules in your imports; those two systems are
friends as far as I can tell. I've said this before, and I've yet to hear
the counter argument.
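
For example, nothing stops an import from registering a module instead of
touching globals; a sketch assuming an AMD-style loader (e.g. requirejs) is
already on the page (module name hypothetical):

<!-- widgets.html -->
<script>
define('widgets/format', [], function() {
  // exported without adding anything to window
  return { pad: function(n) { return n < 10 ? '0' + n : String(n); } };
});
</script>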

 But if you believe in modularity for Web Components then you should
believe in modularity for JS

Polymer team relies on Custom Elements for JS modularity. But again, this
is not mutually exclusive with JS modules, so I don't see the problem.

 Dimitri's proposal makes the async case much more difficult: you need
both the link tag with async attribute then again you need to express the
dependency with the clunky onload business

I believe you are making assumptions about the nature of link and async.
There are ways of avoiding this problem, but it begs the question: if we
allow expressing the dependency in JS, then why doesn't 'async' (or 'sync')
get us both what we want?

Scott

On Mon, Nov 18, 2013 at 2:58 PM, John J Barton
johnjbar...@johnjbarton.com wrote:




 On Mon, Nov 18, 2013 at 2:33 PM, Scott Miles sjmi...@google.com wrote:

  I love the idea of making HTML imports *not* block rendering as the
 default behavior

 So, for what it's worth, the Polymer team has the exact opposite
 desire. I of course acknowledge use cases where imports are being used to
 enhance existing pages, but the assertion that this is the primary use case
 is at least arguable.


 I'll assert that the primary use case for JS interacting with HTML
 components ought to be 'works well with JS modules'. Today, in the current
 state of HTML Import and JS modules, this sounds too hard. But if you
 believe in modularity for Web Components then you should believe in
 modularity for JS (or look at the Node ecosystem) and gee they ought to
 work great together.




   It would be the web dev's responsibility to confirm that the import
 was done loading

 Our use cases almost always rely on imports to make our pages sane.
 Requiring extra code to manage import readiness is a headache.


 I think your app would be overall even more sane if the dependencies were
 expressed directly where they are needed. Rather than loading components
 A,B,C,D then some JS that uses B,C,F, just load the JS and let it pull B,
 C, F.  No more checking back to the list of <link>s to compare to the JS
 needs.



 Dimitri's proposal above tries to be inclusive to both world views, which
 I strongly support as both use-cases are valid.


 Dimitri's proposal makes the async case much more difficult: you need both
 the <link> tag with async attribute, and then you also need to express the
 dependency with the clunky onload business. Expressing the dependency in JS
 avoids both of these issues.

 Just to point out: System.component()-ish need not be blocked by
 completing ES module details and my arguments only apply for JS dependent
 upon Web Components.




 Scott

 On Mon, Nov 18, 2013 at 2:25 PM, Steve Souders soud...@google.com wrote:

 I love the idea of making HTML imports *not* block rendering as the
 default behavior. I believe this is what JJB is saying: make <link
 rel=import> NOT block script.

 This is essential because most web pages are likely to have a SCRIPT tag
 in the HEAD, thus the HTML import will block rendering of the entire page.
 While this behavior is the same as stylesheets, it's likely to be
 unexpected. Web devs know the stylesheet is needed for the entire page and
 thus the blocking behavior is more intuitive. But HTML imports don't affect
 the rest of the page - so the fact that an HTML import can block the entire
 page the same way as stylesheets is likely to surprise folks. I don't have
 data on this, but the reaction to my blog post reflects this surprise.

 Do we need to add a sync (aka blockScriptFromExecuting) attribute? I
 don't think so. It would be the web dev's responsibility to confirm that
 the import was done loading before trying to insert it into the document
 (using the import ready flag). Even better would be to train web devs to
 use the LINK's onload handler for that.

 -Steve





 On Mon, Nov 18, 2013 at 10:16 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Maybe Steve's example[1] could be on JS rather than on components:

 System.component("import.php", function(component) {
   var content = component.content;
   document.getElementById('import-container').appendChild(content.cloneNode(true));
 });

 Here we mimic System.load(jsId, success, error).  Then make <link> not
 block script: it's on JS to express the dependency correctly.

 jjb


 On Mon, Nov 18, 2013 at 1:40 PM, Dimitri Glazkov 
 dglaz...@google.com wrote:

 'Sup yo!

 There was a thought-provoking post by Steve Souders [1] this weekend
 that involved HTML Imports (yay!) and document.write (boo!), which
 triggered a Twitter conversation [2], which triggered some conversations
 with Arv and Alex, which finally erupted in this email.

 Today, HTML Imports loading behavior

Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread Scott Miles
 Scott, is that because of what I said above (why polymer has the exact
opposite desire)?

Yes. Plus some salt from the KISS principle.

  I feel like it is maybe valuable to think that we should worry about
how you express [dependencies] in JS

I guess I thought ES6 modules already went through all these issues. I'm
happy to let modules be The Way for handling JS dependencies. Imports can
provide an entree to modules *and* be a vehicle for my other stuff.

Scott

On Mon, Nov 18, 2013 at 3:56 PM, Brian Kardell bkard...@gmail.com wrote:

 Mixed response here...

  I love the idea of making HTML imports *not* block rendering as the
 default behavior
 In terms of custom elements, this creates, as a standard, the dreaded FOUC
 problem about which a whole different group of people will be blogging and
 tweeting... Right?  I don't know that the current solution is entirely
 awesome, I'm just making sure we are discussing the same fact.  Also, links
 in the head and links in the body both work though the spec disallows the
 latter; it's de facto - the former blocks, the latter doesn't, I think.
  This creates some interesting situations for people that use something
 like a CMS where they don't get to own the head upfront.

  So, for what it's worth, the Polymer team has the exact opposite
 desire. I of course acknowledge use cases
  where imports are being used to enhance existing pages, but the
 assertion that this is the primary use case is  at least arguable.

 Scott, is that because of what I said above (why polymer has the exact
 opposite desire)?

   if we allow expressing the dependency in JS then why doesn't 'async'
 (or 'sync') get us both what we want?

 Just to kind of flip this on its head a bit - I feel like it is maybe
 valuable to think that we should worry about how you express it in JS
 *first* and worry about declarative sugar for one or more of those cases
 after.  I know it seems the boat has sailed on that just a little with
 imports, but nothing is really final, else I think we wouldn't be having this
 conversation in the first place.  Is it plausible to excavate an
 explanation for the imports magic and define a JS API and then see how we
 tweak that to solve all the things?





Re: [HTML Imports]: what scope to run in

2013-11-18 Thread Scott Miles
I've made similar comments on various threads, so sorry if you've heard
this song before, but here are some basic comments:

- The HTMLImports we've been working with so far is not primarily about JS;
it's about HTML.
- There are already various ways to modularize JS, including requirejs, other
libs, and of course, ES6 modules.
- Isolation of globals has definite use cases, but it also has costs, and is
more intervention than is required for first-party use cases. It's been
argued (convincingly, from my perspective) that isolation can be
successfully implemented via a separate opt-in mechanism.

Those are the principles that guide the design as it is now. You have lots
of interesting ideas there, but it feels like re-scoping the problem into a
declarative form of JS modules. I suggest that keeping HTMLImports as
primitive as possible is a virtue on almost all fronts.

Scott



On Mon, Nov 18, 2013 at 4:14 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 Largely independently from the thread that Dimitri just started on the
 sync/async/-ish nature of HTML imports I have a problem with how
 script execution in the imported document works.

 Right now it's defined that any script elements in the imported
 document are run in the scope of the window of the document linking to
 the import. I.e. the global object of the document that links to the
 import is used as global object of the running script.

 This is exactly how script elements have always worked in HTML.

 However this is a pretty terrible way of importing libraries.
 Basically the protocol becomes "here is my global, do whatever
 modifications you want to it in order to install yourself."

 This has several downsides:
 * Libraries can easily collide with each other by trying to insert
 themselves into the global using the same property name.
 * It means that the library is forced to hardcode the property name
 that it's accessed through, rather than allowing the page importing the
 library to control this.
 * It makes it harder for the library to expose multiple entry points
 since it multiplies the problems above.
 * It means that the library is more fragile since it doesn't know what
 the global object that it runs in looks like. I.e. it can't depend on
 the global object having or not having any particular properties.
 * Internal functions that the library does not want to expose require
 ugly anonymous-function tricks to create a hidden scope.

 Many platforms, including Node.js and ES6 introduces modules as a way
 to address these problems.

 It seems to me that we are repeating the same mistake again with HTML
 imports.

 Note that this is *not* about security. It's simply about making a
 more robust platform for libraries. This seems like a bad idea given
 that HTML imports essentially are libraries.

 At the very least, I would like to see a way to write your
 HTML-importable document as a module. So that it runs in a separate
 global and that the caller can access exported symbols and grab the
 ones that it wants.

 Though I would even be interested in having that be the default way of
 accessing HTML imports.

 I don't know exactly what the syntax would be. I could imagine something
 like

 In markup:
 <link rel="import" href="..." id="mylib">

 Once imported, in script:
 new $('mylib').import.MyCommentElement;
 $('mylib').import.doStuff(12);

 or

 In markup:
 <link rel="import" href="..." id="mylib" import="MyCommentElement doStuff">

 Once imported, in script:
 new MyCommentElement;
 doStuff(12);

 / Jonas




Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread Scott Miles
I believe the primary issue here is 'synchronous with respect to
rendering'. Seems like you ignored this issue. See Brian's post.

Scott


On Mon, Nov 18, 2013 at 5:47 PM, John J Barton
johnjbar...@johnjbarton.comwrote:




 On Mon, Nov 18, 2013 at 3:06 PM, Scott Miles sjmi...@google.com wrote:

  I'll assert that the primary use case for JS interacting with HTML
 components ought to be 'works well with JS modules'.

 You can happily define modules in your imports, those two systems are
 friends as far as I can tell. I've said this before, I've yet to hear the
 counter argument.


 Yes indeed. Dimitri was asking for a better solution, but I agree that
 both are feasible and compatible.



  But if you believe in modularity for Web Components then you should
 believe in modularity for JS

 Polymer team relies on Custom Elements for JS modularity. But again, this
 is not mutually exclusive with JS modules, so I don't see the problem.


 Steve's example concerns synchrony between <script> and <link
 rel='import'>. It would be helpful if you can outline how your modularity
 solution works for this case.




  Dimitri's proposal makes the async case much more difficult: you need
 both the link tag with async attribute then again you need to express the
 dependency with the clunky onload business

 I believe you are making assumptions about the nature of link and async.
 There are ways of avoiding this problem,


 Yes I am assuming Steve's example, so again your version would be
 interesting to see.


  but it begs the question: if we allow expressing the
 dependency in JS, then why doesn't 'async' (or 'sync') get us both what we
 want?


 I'm not arguing against any other solution that also works. I'm only
 suggesting a solution that always synchronizes just those blocks of JS that
 need order-of-execution and thus never needs 'sync' or 'async' and which
 leads us to unify the module story for the Web.

 jjb



 Scott

 On Mon, Nov 18, 2013 at 2:58 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Nov 18, 2013 at 2:33 PM, Scott Miles sjmi...@google.com wrote:

  I love the idea of making HTML imports *not* block rendering as the
 default behavior

 So, for what it's worth, the Polymer team has the exact opposite
 desire. I of course acknowledge use cases where imports are being used to
 enhance existing pages, but the assertion that this is the primary use case
 is at least arguable.


 I'll assert that the primary use case for JS interacting with HTML
 components ought to be 'works well with JS modules'. Today, in the current
 state of HTML Import and JS modules, this sounds too hard. But if you
 believe in modularity for Web Components then you should believe in
 modularity for JS (or look at the Node ecosystem) and gee they ought to
 work great together.




   It would be the web dev's responsibility to confirm that the import
 was done loading

 Our use cases almost always rely on imports to make our pages sane.
 Requiring extra code to manage import readiness is a headache.


 I think your app would be overall even more sane if the dependencies
 were expressed directly where they are needed. Rather than loading
 components A,B,C,D then some JS that uses B,C,F, just load the JS and let
 it pull B, C, F.  No more checking back to the list of <link>s to compare to
 the JS needs.



 Dimitri's proposal above tries to be inclusive to both world views,
 which I strongly support as both use-cases are valid.


 Dimitri's proposal makes the async case much more difficult: you need
 both the <link> tag with async attribute, and then you also need to express the
 dependency with the clunky onload business. Expressing the dependency in JS
 avoids both of these issues.

 Just to point out: System.component()-ish need not be blocked by
 completing ES module details and my arguments only apply for JS dependent
 upon Web Components.




 Scott

 On Mon, Nov 18, 2013 at 2:25 PM, Steve Souders soud...@google.com wrote:

 I love the idea of making HTML imports *not* block rendering as the
 default behavior. I believe this is what JJB is saying: make <link
 rel=import> NOT block script.

 This is essential because most web pages are likely to have a SCRIPT
 tag in the HEAD, thus the HTML import will block rendering of the entire
 page. While this behavior is the same as stylesheets, it's likely to be
 unexpected. Web devs know the stylesheet is needed for the entire page and
 thus the blocking behavior is more intuitive. But HTML imports don't 
 affect
 the rest of the page - so the fact that an HTML import can block the 
 entire
 page the same way as stylesheets is likely to surprise folks. I don't have
 data on this, but the reaction to my blog post reflects this surprise.

 Do we need to add a sync (aka blockScriptFromExecuting) attribute?
 I don't think so. It would be the web dev's responsibility to confirm that
 the import was done loading before trying to insert it into the document
 (using the import ready

Re: [webcomponents] Proposal for Cross Origin Use Case and Declarative Syntax

2013-11-12 Thread Scott Miles
 pollute the window object with $, with ES6 modules around the corner

The $ was just an example, the import could also happily define one or more
modules. This concept allows us to decouple scoping from imports.

Now, the import is only a vehicle, but it advances the state of the art by
also delivering canonical HTML and CSS (instead of requiring JavaScript to
load or encode additional resources). We right away have an efficient
method for draining some of the existing resource management swamp.

From there I can see paths to supporting opt-in isolation models, either
directly, or by delegating to an agent like DOMWorker.


On Tue, Nov 12, 2013 at 3:21 PM, Brian Di Palma off...@gmail.com wrote:

 I'm not sure I would want jQuery UI to pollute the window object with
 $; with ES6 modules around the corner it seems like a step backwards
 for imports to start polluting window objects with their libraries...

 On Tue, Nov 12, 2013 at 9:01 PM, Elliott Sprehn espr...@gmail.com wrote:
 
  On Tue, Nov 12, 2013 at 12:45 AM, Ryosuke Niwa rn...@apple.com wrote:
 
  [...]
 
  Script in the import is executed in the context of the window that
  contains the importing document. So window.document refers to the main
  page document. This has two useful corollaries:
 
  functions defined in an import end up on window.
  you don't have to do anything crazy like append the import's script
  blocks to the main page. Again, script gets executed.
 
  What we’re proposing is to execute the script in the imported document
 so
  the only real argument is the point that “functions defined in an
 imported
  end up on window” (of the host document).
 
  I think that’s a bad thing.  We don’t want imported documents to start
  polluting global scope without the user explicitly importing them.  e.g.
  import X in Python doesn’t automatically import stuff inside the
 module
  into your global scope.  To do that, you explicitly say “import * from
 X”.
  Similarly, “using std” is discouraged in C++.
 
  I don’t think the argument that this is how external script and
 stylesheet
  fly either because the whole point of web components is about improving
 the
  modularity and reusability of the Web.
 
 
  What you're proposing breaks a primary use case of:
 
  <link rel="import" href="//apis.google.com/jquery-ui.html">
 
  Authors don't want to list every single component from jQuery UI in the
  import directive, and they don't want the jQuery UI logic to be in a
  different global object. They want to be able to import jQuery UI and
 have
  it transitively import jQuery thus providing $ in the window in addition
 to
  all the widgets and their API. ex. body.appendChild(new
  JQUIPanel()).showPanel().
 
  Note also that using a different global produces craziness like Array
 being
  different or the prototypes of nodes being different. You definitely
 don't
  want that for the same origin or CORS use case.
 
 
  Fortunately, there is already a boundary that we built that might be
 just
  the right fit for this problem: the shadow DOM boundary. A while back,
 we
  had lunch with Mozilla security researchers who were interested in
  harnessing the power of Shadow DOM, and Elliott (cc'd) came up with a
 pretty
  nifty proposal called the DOMWorker. I nagged him and he is hopefully
 going
  to post it on public-webapps. I am pretty sure that his proposal can
 address
  your use case and not cripple the rest of the spec in the process.
 
 
  Assuming you’re referring to
 
 https://docs.google.com/document/d/1V7ci1-lBTY6AJxgN99aCMwjZKCjKv1v3y_7WLtcgM00/edit
 ,
  the security model of our proposal is very similar.  All we’re doing is
  using a HTML-imported document instead of a worker to isolate the
  cross-origin component.
 
  Since we don’t want to run the cross-origin component on a separate
  thread, I don’t think worker is a good model for cross-origin
 components.
 
 
  A DOMWorker doesn't run on another thread, see the Note in the
 introduction.
 
  - E
 




Re: [webcomponents] HTML Imports

2013-10-18 Thread Scott Miles
 they'll have to use a closure to capture the document that the template
lives in

Yes, this is true. But stamping of templates tends to be something custom
elements are really good at, so this particular use case doesn't come up
very often.

 Out of curiosity, what have the Polymer guys been using imports for?

1. Bundling resources. Imports can contain or chain to JS, CSS, or
additional HTML imports, so I have access to bundles in silos of
functionality instead of syntax.

2. Obscuring production details. I can import library.html and get an
entire dependency without knowing if it's an optimized build file or a
dependency tree of imports.

3. Relocatability. I can import elements.html and that package can
reference resources relative to itself.

4. Importing data as markup, where typically it's then the responsibility
of the importer to consume the data, not the import itself.

5. We would like to use imports for preloading images, depending on the
resolution of the 'view-in-import' discussion.

[sidebar] we tend to declare self-organizing custom elements for data and
then load them in imports. For example, many of our library elements have an
associated `metadata.html` file that contains `x-meta` elements with
various details. An interested object can make a blank x-meta element to
access the database, and the details are encapsulated inside the x-meta
implementation.

Scott

On Fri, Oct 18, 2013 at 3:37 PM, Blake Kaplan mrb...@gmail.com wrote:

 On Sun, Oct 6, 2013 at 9:38 AM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
   So you have <link href="blah.html"> in meh.html and blah.html is:
   <div id="test"></div>
   <script> /* how do I get to #test? */ </script>
   document.currentScript.ownerDocument.querySelector("#test") :)

 This only works for code running directly in the script. The current
 setup means that any time an author has something like:

  <template id="foo">...</template>
  <script>
  function cloneFoo() { /* get foo and return it. */ }
  </script>

 they'll have to use a closure to capture the document that the
 template lives in, which is rather surprising to me. Also, storing the
 document in a global variable is a footgun, because that global
 variable would potentially collide with another import trying to do
  the same thing. ES6 modules would help here, but they're a ways off.
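
A sketch of the closure workaround being described (names illustrative;
document.currentScript is live only while the import's script is running):

  var importDoc = document.currentScript.ownerDocument; // captured at load time

  function cloneFoo() {
    var tpl = importDoc.querySelector('#foo');       // the template above
    return document.importNode(tpl.content, true);   // clone for the main page
  }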

  I think the greatest impact here will be on developers. They have to
 start
  thinking in terms of multiple documents. We should ask Polymer people:
 they
  wrote a ton of code with Imports now and I bet they have opinions.

 Out of curiosity, what have the Polymer guys been using imports for?
 More than just declaring custom elements? I'm worried that we're
 coming up with a very generic feature with odd semantics that only
 make sense for one narrow use-case.
 --
 Blake Kaplan



Re: [webcomponents] HTML Imports

2013-10-09 Thread Scott Miles
On Mon, Oct 7, 2013 at 3:24 AM, James Graham ja...@hoppipolla.co.uk wrote:

 On 06/10/13 17:25, Dimitri Glazkov wrote:

  And, if the script is executed against the global/window object of
 the main document, can and should you be able to access the imported
 document?


 You can and you should. HTML Imports are effectively #include for the Web.


 Yes, that sounds like a good description of the problem :) It is rather
  noticeable that no one making programming languages today replicates the
 #include mechanism, and I think html-imports has some of the same design
 flaws that makes #include unpopular.

 I think authors will find it very hard to write code in an environment
 where simple functions like document.getElementById don't actually work on
 the document containing the script, but on some other document that they
 can't see.


It's true we are introducing something new, but this is actually one of The
Good Parts. Imports are not the main document, they are satellite to the
main document. The main document maintains primacy, but your imports can
act on it. So far, we haven't really had any problems with developers on
this point.


 It also seems that the design requires you to be super careful about
 having side effects; if the author happens to have a non-idempotent action
 in a document that is imported, then things will break in the relatively
 uncommon case where a single document is imported more than once.


Can you give an example of a non-idempotent, potentially breaking action?


 Overall it feels like html imports has been designed as an over general
 mechanism to address certain narrow use cases and, in so doing, has handed
 authors a footgun.


I guess I would instead suggest that the generality of HTML Imports is due to
the group attempting to find a virtuous primitive, instead of a special
case.

For just one issue, look at how much HTML ends up embedded in strings,
hidden in comments, or buried in other crazy hacks. We can import
(relocatable!) CSS and JS; why can we not import our most basic content?
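
For concreteness, a sketch of consuming imported markup under the draft,
where the imported document hangs off the link element as `link.import`
(the element id is illustrative):

  var link = document.querySelector('link[rel=import]');
  var node = link.import.querySelector('#announcement');
  document.body.appendChild(document.importNode(node, true));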


 Whilst I don't doubt it is usable by the highly competent people who are
 working at the bleeding edge on polyfilling components, the rest of the
 population can't be expected to understand the implemetation details that
 seem to have led the design in this direction.


We created polyfills not as an end in themselves, but as a way of making it
possible to test these concepts in the real world. The fact is that one of
my team's mandates is to (try to) ensure that what comes out of this
process is actually useful for end-users. We're certainly open to criticism
on this point (or any point!), but it's basically upside-down to assume we
are focused on the technology more than the usability.


 I think it would be useful to go right back to use cases here and work out
 if we can't design something better.


Welcome to the discussion, we are grateful for your participation! Let's
keep up the discussion. In particular, it would be very helpful if you
could fill in some details on the foot-gun as described above.

Thanks again,
Scott


Re: [webcomponents] HTML Imports

2013-10-06 Thread Scott Miles
 We should ask Polymer people: they wrote a ton of code with Imports now
and I bet they have opinions.

The Polymer team has successfully adopted/evolved the modality Dimitri
describes. Imported documents work roughly as #includes, and
`currentScript.ownerDocument` is interrogated if one needs to locate their
containing import from (non custom-element) script.

 I sincerely hope that when we get back to declarative form, we will be
able to write declarative custom element syntax as a custom element itself.
:)

Of course, this is exactly polymer-element, and because it is itself an
element it has easy access to the import tree.



On Sun, Oct 6, 2013 at 9:38 AM, Dimitri Glazkov dglaz...@chromium.org wrote:




 On Sun, Oct 6, 2013 at 9:21 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Sun, Oct 6, 2013 at 5:25 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Sun, Oct 6, 2013 at 6:26 AM, Angelina Fabbro 
 angelinafab...@gmail.com
  wrote:
  And, if the script is executed against the global/window object of the
  main document, can and should you be able to access the imported
 document?
 
  You can and you should. HTML Imports are effectively #include for the
 Web.

  So you have <link href="blah.html"> in meh.html and blah.html is:

  <div id="test"></div>
  <script> /* how do I get to #test? */ </script>


  document.currentScript.ownerDocument.querySelector("#test") :)


 Having thought a bit more about how declarative custom elements would
  work, that might not actually be much of a problem (assuming we go with
 Allen's model), but it seems somewhat worrying that the document the
 script elements are inserted in is not actually the one the scripts
 operate on.


 I think the greatest impact here will be on developers. They have to start
 thinking in terms of multiple documents. We should ask Polymer people: they
 wrote a ton of code with Imports now and I bet they have opinions.



  (The way I expect we'll do declarative custom elements is <element
  constructor="X"> combined with <script>class X extends HTMLElement {
  ... }</script>.)


 I sincerely hope that when we get back to declarative form, we will be
 able to write declarative custom element syntax as a custom element itself.
 :)

 :DG



Re: [webcomponents]: The Shadow Cat in the Hat Edition

2013-09-09 Thread Scott Miles
I'm one of the guinea people, for whatever biases that gives me. Fwiw and
IMO, Dimitri summarized our thinking better than our own brains did.

  finally ruined encapsulation?

As I see it the main Web Components system is based on soft encapsulation.
Each boundary is in force by default, but each one is also easily pierced
when needed.

E.g., shadow-roots are traversable, JS prototypes are mungeable (in
general). Ability to pierce CSS encapsulation (on purpose, doesn't happen
incidentally) allows us to do theming and other necessary customization
tasks without having to over-engineer.

It may be counter intuitive given the virtues of encapsulation, but IMO
this is a good design for a UI system.

As I understand there is work afoot to come up with (optional) 'sealed' or
'strongly encapsulated' components for other less laissez-faire uses. It
makes sense to me to have both extremes.

Scott


On Mon, Sep 9, 2013 at 4:32 PM, Dimitri Glazkov dglaz...@google.com wrote:

 This progress update is brought to you in part by the Sith Order:
 Sith: When The Light Side Just Ain't Cuttin' It.


 Part 1: Revenge of the :host

 Turns out, it's bad to be Super Man. After the Shadow DOM meetup,
 where we decided that shadow host could be matched by both outer and
 inner trees (
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0985.html,
 first point), we quickly coded this up in Blink and gave it to Mikey
 ...erm, the Polymer folks to chew on.

 The folks spat out that morsel right across the table
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=22980), and presented
 good arguments to justify their etiquette faux pas.

 For what it's worth, it would be fairly easy to make shadow tree rules
 match shadow host only when :host is present in a rule.

 Unfortunately, this would leave Tab (and other CSS WG folks) in a sad
 state, since addressing these arguments makes it harder to keep a
 straight face with the concept of a pseudo class in regard to :host.
 See discussion on bug for the gory details.

 As of now, we are in that angsty state of not knowing what to do next.
 Any ideas are appreciated. Note that there are some well-established
 concepts in CSS and inventing fewer new concepts is much preferred.
 Reuse, reduce, recycle.


 Part 2: Party ::part part

 Another possible wrinkle is the ::part pseudo element. After also
 chewing on ::part for a little while, our brave guinea pi.. erm,
 people also declared it to be tasting somewhat bitter.

 The best symptom can be seen here:
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=23162. When explained
 that you would just chain x-nurk::part(woot)::part(zorp)::part(bler)
 to cross each shadow tree boundary, the guinea people looked at me
 like this: _. Then they pointed out that this both:

 a) implies ever-growing part API for each component, which will
 quickly lead to the anti-pattern of developers simply declaring all
 elements in their shadow tree as parts, and

 b) just looks ugly and constipated.

 Agitated shouts of Y U NO LET US JUS DO EET were echoing across the
 San Francisco Bay, frightening America's Cup spectators.

 To calm the brave guinea people down, I showed them a magic trick. Out
 of my sleeve, I pulled out two new combinators: A hat (^) and a cat
 (^^).

 You would use them instead of ::part. The hat is generally equivalent
 to a descendant combinator, except it crosses 1 (one) shadow tree
 boundary (from shadow host to shadow root). The cat is similar, except
 it crosses any number of boundaries. So, to target bler in the
 previous part-y chain could be written as simply as
 x-nurk^^[part=bler] or x-nurk^^#bler if ids are used instead of
 part=bler attribute. Respectively, you would target woot as simply
 x-nurk^#woot.

 But wait there's more: you could use these new combinators in
 querySelector, I proclaimed! In the nascent shadow DOM code, we
  already started seeing the blood-curdling

 document.querySelector('x-nurk').shadowRoot.querySelector('#woot').shadowRoot.querySelector('#zorp')
 chains of hell -- a problem that these new combinators would solve.

  Think of them simply as general combinators that open shadow trees for
 selector traversal, just like Element.shadowRoot did for DOM
 traversal.

 The brave guinea people became content and reverted to their natural
 behaviors, but I then started worrying. Did I over-promise and finally
 ruined encapsulation? When will our styling woes finally converge into
 one solution?

 Luckily, I have you, my glorious WebApp-erators. Put on your thinking
 hats and help find one uniform solution. Something that fits well into
 CSS, doesn't add too many new moving parts, and keeps the brave guinea
 people at bay. That'll be the day.

 :DG



Re: [webcomponents]: The Shadow Cat in the Hat Edition

2013-09-09 Thread Scott Miles
 simply as general combinators that open shadow trees for
  selector traversal, just like Element.shadowRoot did for DOM
  traversal.

 You should be able to just do this with ::part as well.  Note, though,
 that this mixes up the questions of exposing a part for styling, and
 exposing it for script-based manipulation.  I was under the impression
 that HTML elements that exposed a native shadow DOM would expose their
 parts for styling, but were still black boxes for interaction
 purposes.  Has that changed?

 On Mon, Sep 9, 2013 at 5:29 PM, Scott Miles sjmi...@google.com wrote:
  finally ruined encapsulation?
 
  As I see it the main Web Components system is based on soft
 encapsulation.
  Each boundary is in force by default, but each one is also easily pierced
  when needed.
 
  E.g., shadow-roots are traversable, JS prototypes are mungeable (in
  general). Ability to pierce CSS encapsulation (on purpose, doesn't happen
  incidentally) allows us to do theming and other necessary customization
  tasks without having to over-engineer.

  I am okay with pierceable boundaries, but I'm still concerned about the
 pain that'll come from having *all* of your DOM exposed to all
 clients, meaning it becomes difficult/impossible to upgrade a
 component used by many people, since you have no clue what parts of
 your existing markup structure are being depended on by others.

 I'd greatly prefer to stick with the current plan of having to mark
 things to be exposed explicitly, but would be okay with a switch to
 toggle this to fully-open, like you can do with selectors today.  We
 could even build this into the existing switch that lets selectors
 match across the boundary, adding a partially-open value that keeps
 selectors from matching across naively, but allows matching when you
 explicitly pierce the boundary with ::part.

 ~TJ



Re: The JavaScript context of a custom element

2013-05-20 Thread Scott Miles
Custom elements have a closure to work in, as well as their own prototypes.
I don't believe ES6 modules add much in this regard (possibly I'm missing
something there).

Separate global scope is a bigger issue.

I believe there was general agreement to pursue (at some point) an opt-in
'isolated' mode for custom elements, where each element would have its own
global scope and access to a sealed version of the JS/DOM apis.
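
To make "separate global scope" concrete: same-origin frames already behave
this way, which is the analogy Aaron draws below. A rough sketch:

  var frame = document.createElement('iframe');
  document.body.appendChild(frame);
  var g = frame.contentWindow;            // a distinct global object
  console.log(g.Array === window.Array);  // false: separate built-ins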

Scott


On Mon, May 20, 2013 at 1:26 PM, John J Barton
johnjbar...@johnjbarton.com wrote:

 Aren't ES6 modules is a good-enough solution for this issue? They make
 global collision rare and likely to be what the author really needed.

 jjb


 On Mon, May 20, 2013 at 1:00 PM, Aaron Boodman a...@google.com wrote:

 Hello public-webapps,

 I have been following along with web components, and am really excited
 about the potential.

 However, I just realized that unlike the DOM and CSS, there is no real
 isolation for JavaScript in a custom element. In particular, the global
 scope is shared.

 This seems really unfortunate to me, and limits the ability of element
 authors to create robustly reusable components.

 I would like to suggest that custom elements have the ability to ask for
 a separate global scope for their JavaScript. This would be analogous to
 what happens today when you have multiple script-connected frames on the
 same origin.

 Has there been any thought along these lines in the past?

 Thanks,

 - a





Re: [webcomponents]: Declarative Custom Elements Take Umpteen, The Karate Kid Edition

2013-05-15 Thread Scott Miles
As long as there is a way to access the element from the script, I'm
good.


On Wed, May 15, 2013 at 11:31 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Despite little love from Scott for the mischievous walrus -- }); --
 proliferation across the Web, are there any other cries of horror that
 I should be listening to? I am hankering to write this as a spec
 draft. Yell now to stop me.

 :DG



Re: [webcomponents]: Declarative Custom Elements Take Umpteen, The Karate Kid Edition

2013-05-15 Thread Scott Miles
Since 'currentScript' is already spec'd (right?) that seems better.

I suppose my concern was about implementation, which is an orthogonal
problem to the specification.



On Wed, May 15, 2013 at 11:38 AM, Erik Arvidsson a...@chromium.org wrote:

 Walking the ancestors from document.currentScript is a start. Is that
 sufficient or should we add a document.currentElement?


 On Wed, May 15, 2013 at 2:34 PM, Scott Miles sjmi...@google.com wrote:

 As long as there is a way to access the element from the script, I'm
 good.


  On Wed, May 15, 2013 at 11:31 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Despite little love from Scott for the mischievous walrus -- }); --
 proliferation across the Web, are there any other cries of horror that
 I should be listening to? I am hankering to write this as a spec
 draft. Yell now to stop me.

 :DG





 --
 erik





Re: webcomponents: import instead of link

2013-05-14 Thread Scott Miles
I can't think of any reason I would want to be able to mess with an import
link ex-post-facto and have it do anything. I would also expect any
registrations to be final and have no particular connection to the link tag
itself.

Now, this may be tangential, but users definitely need a way of loading
imports dynamically. I believe the current gambit would be to inject a
fresh link tag into head, which seems like the long way around Hogan's barn.

I've been meaning to ask about the possibility of an imperative 'import'
method.
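
As a sketch, such a method might just wrap the link-injection gambit, using
the draft's `link.import` document and load/error events (the Promise
wrapper is illustrative):

  function loadImport(url) {
    return new Promise(function (resolve, reject) {
      var link = document.createElement('link');
      link.rel = 'import';
      link.href = url;
      link.onload = function () { resolve(link.import); };
      link.onerror = reject;
      document.head.appendChild(link);
    });
  }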


On Tue, May 14, 2013 at 12:53 PM, Jonas Sicking jo...@sicking.cc wrote:


 http://w3cmemes.tumblr.com/post/34633601085/grumpy-old-maciej-has-a-question-about-your-spec

 On Tue, May 14, 2013 at 12:42 PM, Dimitri Glazkov dglaz...@google.com
 wrote:
  On the second thought: why not make imports dynamic, just like
 stylesheets?
 
  :DG
 
  On Tue, May 14, 2013 at 11:29 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
  On Tue, May 14, 2013 at 9:35 AM, Anne van Kesteren ann...@annevk.nl
 wrote:
  On Tue, May 14, 2013 at 12:45 AM, Hajime Morrita morr...@google.com
 wrote:
   Just after I started prototyping HTML Imports on Blink, this idea came
  to my
   mind: Why not have <import> for HTML Imports?
  
   Because changing parsing for <head> is not done, basically.
  
   rel=import not being dynamic kinda sucks though. Maybe we should
   consider using <meta>? It has a bunch of uses that are non-dynamic.
  
   I used <link> primarily because most of the <link rel="stylesheet">
   plumbing seems to fit best with <link rel="import">.
  
   Interesting idea about <meta>...
 
  :DG
 




Re: webcomponents: import instead of link

2013-05-14 Thread Scott Miles
It's not clear to me why <link rel="import"> can't be dynamic. As long as
the previous document isn't somehow banished, I don't see the problem
(admittedly, looking through a keyhole).


On Tue, May 14, 2013 at 2:21 PM, Simon Pieters sim...@opera.com wrote:

 On Tue, 14 May 2013 23:13:13 +0200, Dimitri Glazkov dglaz...@chromium.org
 wrote:

  On Tue, May 14, 2013 at 2:08 PM, Simon Pieters sim...@opera.com wrote:

  I have proposed <script import="url"></script> instead of <link
 rel="import" href="url"> before.

  http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0009.html
  http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0024.html

 Benefits:

  * Components can execute script from an external resource, which <script
 src> can do as well, so that seems like a good fit in terms of security
 policy and expectations in Web sites and browsers.
  * <script src> is not dynamic, so making <script import> also not dynamic
 seems like a good fit.
  * <script> can appear in <head> without making changes to the HTML parser
 (in contrast with a new element).

 To pre-empt confusion shown last time I suggested this:

  * This is not <script src>.
  * This is not changing anything of the component itself.


 Both meta and script somewhat fail the taste test for me. I am not
 objecting, just alerting of the weakness of stomach.

  <link rel="import"> has near-perfect semantics. It fails in the
 implementation specifics (the dynamic nature).

 Both meta and script are mis-declarations. An HTML Import is
 neither script nor metadata.


 That seems to be an argument based on aesthetics. That's worth
  considering, of course, but I think it is a relatively weak argument. In
 particular I care about the first bullet point above. link is not capable
 of executing script from an external resource today. What are the
 implications if it suddenly gains that ability?


 --
 Simon Pieters
 Opera Software




Re: webcomponents: import instead of link

2013-05-14 Thread Scott Miles
I really didn't mean to suggest any particular name, just that an
imperative form should be provided or every lib will roll their own.


On Tue, May 14, 2013 at 1:45 PM, Rick Waldron waldron.r...@gmail.com wrote:




 On Tue, May 14, 2013 at 4:01 PM, Scott Miles sjmi...@google.com wrote:

 I can't think of any reason I would want to be able to mess with an
 import link ex-post-facto and have it do anything. I would also expect any
 registrations to be final and have no particular connection to the link tag
 itself.

 Now, this may be tangential, but users definitely need a way of loading
 imports dynamically. I believe the current gambit would be to inject a
 fresh link tag into head, which seems like the long way around Hogan's barn.

 I've been meaning to ask about the possibility of an imperative 'import'
 method.


 import is a FutureReservedWord that will be a Keyword as of ES6, so this
 import method would have to be a method of some platform object and not a
  method on the [[Global]] object.


 Rick




Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Scott Miles
 I'm concerned that we can never ship this feature due to the performance
penalties it imposes.

Can you be more explicit about the penalty to which you refer? I understand
there are concerns about whether the features can be made fast, but I
wasn't aware of an overall penalty on code that is not actually using said
features. Can you elucidate?

 It does make shadow DOM significantly simpler at least in the areas we're
concerned about.

Certainly there is no argument there. I believe the point that Tab was
making is that at some point it becomes so simple it's only useful for very
basic problems, and developers at large no longer care. This question is at
least worthy of discussion, yes?

Scott


On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:

 On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote:

  I'm concerned that if the spec shipped as you described, that it would
 not be useful enough to developers to bother using it at all.

 I'm concerned that we can never ship this feature due to the performance
 penalties it imposes.

  Without useful redistributions, authors can't use composition of web
 components very well without scripting.
  At that point, it's not much better than just leaving it all in the
 document tree.

 I don't think having to inspect the light DOM manually is terrible, and we
 had been using shadow DOM to implement textarea, input, and other elements
 years before we introduced node redistributions.

 On May 1, 2013, at 8:57 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

  It's difficult to understand without working through examples
  yourself, but removing these abilities does not make Shadow DOM
  simpler, it just makes it much, much weaker.

 It does make shadow DOM significantly simpler at least in the areas we're
 concerned about.

 - R. Niwa





Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Scott Miles
 Note that the interesting restriction isn't that it shouldn't regress 
 performance
for the web-at-large.

No argument, but afaict, the implication of R. Niwa's statement was in
fact that there was a penalty for these features merely existing.

 The restriction is that it shouldn't be slow when there is heavy usage
of Shadow DOM on the page.

Again, no argument. But as a developer happily coding away against Canary's
Shadow DOM implementation, it's hard for me to accept the prima facie
case that it must be simplified to achieve this goal.

Scott

P.S. No footguns!


On Wed, May 1, 2013 at 12:41 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, May 1, 2013 at 12:15 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:
  On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com
 wrote:
 
  I'm concerned that if the spec shipped as you described, that it would
 not be useful enough to developers to bother using it at all.
 
  I'm concerned that we can never ship this feature due to the
 performance penalties it imposes.
 
  Can you tell me more about this concern? I am pretty sure the current
  implementation in WebKit/Blink does not regress performance for the
  Web-at-large.

 Note that the interesting restriction isn't that it shouldn't regress
 performance for the web-at-large. The restriction is that it
 shouldn't be slow when there is heavy usage of Shadow DOM on the
 page.

 Otherwise we recreate one of the problems of Mutation Events. Gecko
 was able to make them not regress performance as long as they weren't
 used. But that meant that we had to go around telling everyone to not
 use them. And creating features and then telling people not to use
 them is a pretty boring exercise.

 Or, to put it another way: Don't create footguns.

 / Jonas




Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Scott Miles
Sorry it got lost in other messages, but fwiw, I also don't have a problem
with

 revisiting and even tightening selectors

Scott


On Wed, May 1, 2013 at 12:55 PM, Dimitri Glazkov dglaz...@chromium.org wrote:

 FWIW, I don't mind revisiting and even tightening selectors on
 insertion points. I don't want this to be a sticking point.

 :DG

 On Wed, May 1, 2013 at 12:46 PM, Scott Miles sjmi...@google.com wrote:
  Note that the interesting restriction isn't that it shouldn't regress
  performance for the web-at-large.
 
   No argument, but afaict, the implication of R. Niwa's statement was
 in
  fact that there was a penalty for these features merely existing.
 
  The restriction is that it shouldn't be slow when there is heavy usage
  of Shadow DOM on the page.
 
  Again, no argument. But as a developer happily coding away against
 Canary's
   Shadow DOM implementation, it's hard for me to accept the prima facie
  case that it must be simplified to achieve this goal.
 
  Scott
 
  P.S. No footguns!
 
 
  On Wed, May 1, 2013 at 12:41 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Wed, May 1, 2013 at 12:15 PM, Dimitri Glazkov dglaz...@chromium.org
 
  wrote:
   On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com
 wrote:
   On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com
   wrote:
  
   I'm concerned that if the spec shipped as you described, that it
 would
   not be useful enough to developers to bother using it at all.
  
   I'm concerned that we can never ship this feature due to the
   performance penalties it imposes.
  
   Can you tell me more about this concern? I am pretty sure the current
   implementation in WebKit/Blink does not regress performance for the
   Web-at-large.
 
  Note that the interesting restriction isn't that it shouldn't regress
  performance for the web-at-large. The restriction is that it
  shouldn't be slow when there is heavy usage of Shadow DOM on the
  page.
 
  Otherwise we recreate one of the problems of Mutation Events. Gecko
  was able to make them not regress performance as long as they weren't
  used. But that meant that we had to go around telling everyone to not
  use them. And creating features and then telling people not to use
  them is a pretty boring exercise.
 
  Or, to put it another way: Don't create footguns.
 
  / Jonas
 
 



Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-05-01 Thread Scott Miles
 I'm sure Web developers are happy to have more features but I don't want
to introduce a feature that imposes such a high maintenance cost without
knowing for sure that it's absolutely necessary.

You are not taking 'yes' for an answer. :) I don't really disagree with you
here.

With respect to my statement to which you refer, I'm only saying that we
haven't had a discussion about the costs or the features. The discussion
jumped straight to mitigation.



On Wed, May 1, 2013 at 9:45 PM, Ryosuke Niwa rn...@apple.com wrote:

 On May 1, 2013, at 12:46 PM, Scott Miles sjmi...@google.com wrote:

  Note that the interesting restriction isn't that it shouldn't regress 
  performance
 for the web-at-large.

  No argument, but afaict, the implication of R. Niwa's statement was in
 fact that there was a penalty for these features merely existing.


  Node redistributions restrict the kinds of performance optimizations we
 can implement and negatively affects our code maintainability.

  The restriction is that it shouldn't be slow when there is heavy
 usage of Shadow DOM on the page.

 Again, no argument. But as a developer happily coding away against
  Canary's Shadow DOM implementation, it's hard for me to accept the prima
 facie case that it must be simplified to achieve this goal.


 I'm sure Web developers are happy to have more features but I don't want
 to introduce a feature that imposes such a high maintenance cost without
  knowing for sure that it's absolutely necessary.

 On May 1, 2013, at 12:46 PM, Daniel Freedman dfre...@google.com wrote:

 I'm surprised to hear you say this. The complexity of the DOM and CSS
  styling that modern web applications demand is mind-numbing.
 Having to create possibly hundreds of unique CSS selectors applied to
 possibly thousands of DOM nodes, hoping that no properties conflict, and
 that no bizarre corner cases arise as nodes move in and out of the document.


 I'm not sure why you're talking about CSS selectors here because that
  problem has been solved by the scoped style element regardless of whether we
 have a shadow DOM or not.

 Just looking at Twitter, a Tweet UI element is very complicated.
 It seems like they embed parts of the UI into data attributes (like
 data-expanded-footer).
 That to me looks like a prime candidate for placement in a ShadowRoot.
 The nested structure of it also suggests that they would benefit from node
 distribution through composition.


  Whether Twitter uses data attributes or text nodes and custom elements is
  completely orthogonal to node redistributions. They can write one line of
  JavaScript to extract data out of either embedding mechanism.
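
For example, roughly (the selector here is illustrative):

  var footer = document.querySelector('.tweet [data-expanded-footer]');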

 That's why ShadowDOM is so important. It has the ability to scope
 complexity into things that normal web developers can understand, compose,
 and reuse.


 I'm not objecting to the usefulness of shadow DOM.  I'm objecting to the
 usefulness of node redistributions.

  Things like input and textarea are trivial compared to a YouTube video
 player, or a threaded email list with reply buttons and formatting toolbars.
 These are the real candidates for ShadowDOM: the UI controls that are
 complicated.


 FYI, input and textarea elements aren't trivial.


 On May 1, 2013, at 12:41 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, May 1, 2013 at 12:15 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:

 On Wed, May 1, 2013 at 11:49 AM, Ryosuke Niwa rn...@apple.com wrote:

 On Apr 30, 2013, at 12:07 PM, Daniel Freedman dfre...@google.com wrote:

 I'm concerned that if the spec shipped as you described, that it would not
 be useful enough to developers to bother using it at all.


 I'm concerned that we can never ship this feature due to the performance
 penalties it imposes.


 Can you tell me more about this concern? I am pretty sure the current
 implementation in WebKit/Blink does not regress performance for the
 Web-at-large.


 Note that the interesting restriction isn't that it shouldn't regress
 performance for the web-at-large. The restriction is that it
 shouldn't be slow when there is heavy usage of Shadow DOM on the
 page.


 Exactly.

 Otherwise we recreate one of the problems of Mutation Events. Gecko
 was able to make them not regress performance as long as they weren't
 used. But that meant that we had to go around telling everyone to not
 use them. And creating features and then telling people not to use
 them is a pretty boring exercise.


 Agreed.


 On May 1, 2013, at 12:37 PM, Jonas Sicking jo...@sicking.cc wrote:

  However, restrict the set of selectors such that only an element's
 intrinsic state affects which insertion point it is inserted in.


  Wouldn't that be confusing? How can an author tell which selector is allowed
 and which one isn't?

 That way when an element is inserted or modified, you don't have to
 worry about having to check any descendants or any siblings to see if
 the selectors that they match suddenly changed.


 Yeah, that's a much saner requirement.  However

Re: [Shadow DOM] Simplifying level 1 of Shadow DOM

2013-04-25 Thread Scott Miles
Hello,

This is an interesting suggestion. Here are some notes:

Reducing to one insertion point doesn't, in itself, eliminate

 distribution, getDistributedNodes(), etc
 <content select>

I assume you mean one insertion point that always selects everything.

Also, fwiw,

 <shadow>

we use for inheritance.

And

 * reprojection

we use for composition.

"We" in the above refers to my team, which is using ShadowDOM (in native and
polyfill forms) to make interesting custom elements.



On Thu, Apr 25, 2013 at 2:42 PM, Edward O'Connor eocon...@apple.com wrote:

 (Resent from correct email address)

 Hi,

 First off, thanks to Dimitri and others for all the great work on Shadow
 DOM and the other pieces of Web Components. While I'm very enthusiastic
 about Shadow DOM in the abstract, I think things have gotten really
 complex, and I'd like to seriously propose that we simplify the feature
 for 1.0, and defer some complexity to the next level.

 I think we can address most of the use cases of shadow DOM while
 seriously reducing the complexity of the feature by making one change:
 What if we only allowed one insertion point in the shadow DOM? Having
 just 1 insertion point would let us push (most? all?) of this complexity
 off to level 2:

 * distribution, getDistributedNodes(), etc.
  * selector fragments & matching criteria
  * /select/ combinator
  * <content select>
  * <shadow> ?
 * reprojection

 Notably, I don't think insertion point(s) get used (much or at all) in
 WebKit's internal shadow trees, so I don't think all of the above
 complexity is worth it right now. Baby Steps.[1]



 Ted

 1. The lost HTML design principle:
   http://www.w3.org/html/wg/wiki/DesignPrinciplesReview#Baby_Steps



Re: [webcomponents]: element Wars: A New Hope

2013-04-17 Thread Scott Miles
It probably goes without saying, but, as far as I know this is the best
idea on the table so far.

Couple notes:

 erhmahgerd: { writable: false, value: "BOOKS!" }

I don't know why we would use 'property definitions' there. If you let me
pass in an object, I can define the properties however I like.
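
That is, a sketch of the flexibility being asked for, assuming define()
accepted an arbitrary object (the API and names come from this thread's
proposal, not a shipped spec):

  // hand over a ready-made prototype instead of property descriptors
  var proto = Object.create(HTMLElement.prototype);
  proto.erhmahgerd = 'BOOKS!';
  HTMLElementElement.define('x-foo', proto);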

 When is the registration line

It's hard for me to see any way other than after all 'imports' and
'scripts' have been processed. At that point you would need to do an
upgrade step, and fire some kind of 'all done' event after that.

S


On Wed, Apr 17, 2013 at 3:16 PM, Dimitri Glazkov dglaz...@google.com wrote:

 Inspired by Allen's and Scott's ideas in the Benadryl thread, I dug
 into understanding what element actually represents.

 It seems that the problem arises when we attempt to make element
 _be_ the document.register invocation, since that draws the line of
 when the declaration comes to existence (the registration line) and
 imposes overly restrictive constraints on what we can do with it.

 What if instead, the mental model of element was a statement of
 intent? In other words, it says: Hey browser-thing, when the time is
 right, go ahead and register this custom element. kthxbai

 In this model, the proverbial registration line isn't drawn until
 later (more on that in a moment), which means that both element and
 script can contribute to defining the same custom element.

 With that in mind, we take Scott's/Allen's excellent idea and twist it
 up a bit. We invent a HTMLElementElement.define method (name TBD),
 which takes two arguments: a custom element name, and an object. I
 know folks will cringe, but I am thinking of an Object.create
 properties object:

 HTMLElementElement.define('x-foo', {
  erhmahgerd: { writable: false, value: "BOOKS!" }
 });

 When the registration line comes, the browser-thing matches element
 instances and supplied property objects by custom element names, uses
 them to create prototypes, and then calls document.register with
 respective custom element name and prototype as arguments.
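
Roughly, in pseudo-JavaScript (pendingDefinitions and the function name are
illustrative; document.register is the draft API of the era):

  var pendingDefinitions = {};  // filled in by HTMLElementElement.define

  function crossRegistrationLine(name) {
    var props = pendingDefinitions[name];
    var proto = Object.create(HTMLElement.prototype, props);
    document.register(name, { prototype: proto });
  }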

 We now have a working declarative syntax that doesn't hack script,
 is ES6-module-friendly, and still lets Scott build his tacos. Sounds
 like a win to me. I wonder how Object.create properties object and
 Class syntax could mesh better. I am sure ES6 Classes peeps will have
 ideas here.

 So... When is the registration line? Clearly, by the time the parser
 finishes with the document, we're too late.

 We have several choices. We could draw the line for an element when
 its corresponding </element> is seen in the document. This is not going to
 work for deferred scripts, but maybe that is ok.

 For elements that are imported, we have a nice delineation, since we
 explicitly process each import in order, so no problems there.

 What do you think?

 :DG



Re: [webcomponents]: element Wars: A New Hope

2013-04-17 Thread Scott Miles
The key concept is that, to avoid timing issues, neither processing
<element> nor evaluating <script>[function-to-be-named-later]</script> is
the terminal point for defining an element.

Rather, at some third quantum of time a combination of those things is
constructed, keyed on 'element name'.

Most of the rest is syntax, subject to bikeshedding when and if the main
idea has taken root.


On Wed, Apr 17, 2013 at 4:33 PM, Daniel Buchner dan...@mozilla.com wrote:

 So let me be *crystal clear*:

 If define() internally does this -- When the registration line comes,
 the browser-thing matches element instances and supplied property objects
 by custom element names, uses them to create prototypes, and then calls
 document.register with respective custom element name and prototype as
 arguments. - it's doing a hell-of-a-lot more than simply redirecting to
 Object.create - in fact, I was thinking it would need to do this:

- Retain all tagName-keyed property descriptors passed to it on a
common look-up object
- Interact with the portion of the system that handles assessment of
the registration line, and whether it has been crossed
- and if called sometime after the registration line has been
crossed, immediately invokes code that upgrades all in-DOM elements
matching the tagName provided

 I could be mistaken - but my interest is valid, because if true I would
 need to polyfill the above detailed items, vs writing something as simple
 and derpish as: HTMLElementElement.prototype.define = ...alias to
 Object.create...

 Dimitri, Scott can you let me know if that sounds right, for polyfill sake?

  On Wed, Apr 17, 2013 at 4:11 PM, Rick Waldron waldron.r...@gmail.com wrote:




  On Wed, Apr 17, 2013 at 6:59 PM, Daniel Buchner dan...@mozilla.com wrote:

 *This is just a repackaging of Object.defineProperties( target,
  PropertyDescriptors ) that's slightly less obvious because the target
 appears to be a string.
 *
 Is another difference that the 'x-foo' doesn't have to be 'known' yet?
 It seems to be a bit more than a repack of Object.defineProperties to me.


 I'm sorry if I was unclear, but my comments weren't subjective, nor was I
 looking for feedback.

 Looks like Dimitri agrees:
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0306.html

 Rick





  On Wed, Apr 17, 2013 at 3:53 PM, Rick Waldron waldron.r...@gmail.com wrote:




 On Wed, Apr 17, 2013 at 6:16 PM, Dimitri Glazkov 
  dglaz...@google.com wrote:

 Inspired by Allen's and Scott's ideas in the Benadryl thread, I dug
 into understanding what element actually represents.

 It seems that the problem arises when we attempt to make element
 _be_ the document.register invocation, since that draws the line of
 when the declaration comes to existence (the registration line) and
 imposes overly restrictive constraints on what we can do with it.

 What if instead, the mental model of element was a statement of
 intent? In other words, it says: Hey browser-thing, when the time is
 right, go ahead and register this custom element. kthxbai

 In this model, the proverbial registration line isn't drawn until
 later (more on that in a moment), which means that both element and
 script can contribute to defining the same custom element.

 With that in mind, we take Scott's/Allen's excellent idea and twist it
 up a bit. We invent a HTMLElementElement.define method (name TBD),
 which takes two arguments: a custom element name, and an object. I
 know folks will cringe, but I am thinking of an Object.create
 properties object:


  They are called Property Descriptors.




 HTMLElementElement.define('x-foo', {
  erhmahgerd: { writable: false, value: "BOOKS!" }
 });


 This is just a repackaging of Object.defineProperties( target,
  PropertyDescriptors ) that's slightly less obvious because the target
 appears to be a string.


 Rick





 When the registration line comes, the browser-thing matches element
 instances and supplied property objects by custom element names, uses
 them to create prototypes, and then calls document.register with
 respective custom element name and prototype as arguments.

 We now have a working declarative syntax that doesn't hack script,
 is ES6-module-friendly, and still lets Scott build his tacos. Sounds
 like a win to me. I wonder how Object.create properties object and
 Class syntax could mesh better. I am sure ES6 Classes peeps will have
 ideas here.

 So... When is the registration line? Clearly, by the time the parser
 finishes with the document, we're too late.

 We have several choices. We could draw the line for an element when
  its corresponding </element> is seen in the document. This is not going to
 work for deferred scripts, but maybe that is ok.

 For elements that are imported, we have a nice delineation, since we
 explicitly process each import in order, so no problems there.

 What do you think?

 :DG








Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Scott Miles
Again, 'readyCallback' exists because it's a Bad Idea to run user code
during parsing (tree construction). Ready-time is not the same as
construct-time.

This is the Pinocchio problem:
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html

Scott


On Mon, Apr 15, 2013 at 7:45 AM, Rick Waldron waldron.r...@gmail.com wrote:




 On Mon, Apr 15, 2013 at 8:57 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/14/13 5:35 PM, Rick Waldron wrote:

  I have a better understanding of the problem caused by these generated
 HTML*Element constructors: they aren't constructable.


 I'd like to understand what's meant here.  I have a good understanding of
 how these constructors work in Gecko+SpiderMonkey, but I'm not sure what
 the lacking bit is, other than the fact that they have to create JS objects
 that have special state associated with them, so can't work with an object
 created by the [[Construct]] of a typical function.

 Is that what you're referring to, or something else?


 Sorry, I should've been more specific. What I meant was that:

 new HTMLButtonElement();

 Doesn't construct an HTMLButtonElement, it throws with an illegal
 constructor in Chrome and HTMLButtonElement is not a constructor in
 Firefox (I'm sure this is the same across other browsers)

 Which of course means that this is not possible even today:

 function Smile() {
   HTMLButtonElement.call(this);
    this.textContent = ":)";
 }

 Smile.prototype = Object.create(HTMLButtonElement.prototype);


 Since this doesn't work, the prototype method named readyCallback was
 invented as a bolt-on stand-in for the actual [[Construct]]

 Hopefully that clarifies?

 Rick


  PS. A bit of trivia... A long time ago some users requested that
 jQuery facilitate a custom constructor; to make this work, John put the
 actual constructor code in a prototype method called init and set that
 method's prototype to jQuery's own prototype. The thing called
 readyCallback is similar. For those that are interested, I created a gist
 with a minimal illustration here: https://gist.github.com/rwldrn/5388544
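
For reference, the shape Rick describes, in the era's draft vocabulary
(document.register and readyCallback; later drafts renamed these):

  var proto = Object.create(HTMLElement.prototype);
  proto.readyCallback = function () {
    // runs after tree construction, standing in for [[Construct]]
    this.textContent = ':)';
  };
  document.register('x-smile', { prototype: proto });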







 -Boris





Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Scott Miles
 Why do the constructors of component instances run during component
loading?

I'm not sure what you are referring to. What does 'component loading' mean?

 Why not use standard events rather than callbacks?

This was discussed quite a bit, here is my off-the-cuff response. I may
have to do archaeology to get a better one.

Custom elements can inherit from custom elements. The callbacks are
convenient because (1) there is no question of 'who registers a listener'
(2) I can simply call my 'super' callback (or not) to get inherited
behavior.

IIRC, it is also advantageous for performance and for having control over
the timing of these calls.

Scott


On Mon, Apr 15, 2013 at 9:37 AM, John J Barton
johnjbar...@johnjbarton.comwrote:

 Why do the constructors of component instances run during component
 loading?

 Why not use standard events rather than callbacks?

 Thanks,
 jjb
 On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user code
 during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html

 Scott


  On Mon, Apr 15, 2013 at 7:45 AM, Rick Waldron waldron.r...@gmail.com wrote:




 On Mon, Apr 15, 2013 at 8:57 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/14/13 5:35 PM, Rick Waldron wrote:

  I have a better understanding of the problem caused by these generated
 HTML*Element constructors: they aren't constructable.


 I'd like to understand what's meant here.  I have a good understanding
 of how these constructors work in Gecko+SpiderMonkey, but I'm not sure what
 the lacking bit is, other than the fact that they have to create JS objects
 that have special state associated with them, so can't work with an object
 created by the [[Construct]] of a typical function.

 Is that what you're referring to, or something else?


 Sorry, I should've been more specific. What I meant was that:

 new HTMLButtonElement();

 Doesn't construct an HTMLButtonElement, it throws with an illegal
 constructor in Chrome and HTMLButtonElement is not a constructor in
 Firefox (I'm sure this is the same across other browsers)

 Which of course means that this is not possible even today:

 function Smile() {
   HTMLButtonElement.call(this);
    this.textContent = ":)";
 }

 Smile.prototype = Object.create(HTMLButtonElement.prototype);


 Since this doesn't work, the prototype method named readyCallback was
 invented as a bolt-on stand-in for the actual [[Construct]]

 Hopefully that clarifies?

 Rick


  PS. A bit of trivia... A long time ago some users requested that
 jQuery facilitate a custom constructor; to make this work, John put the
 actual constructor code in a prototype method called init and set that
 method's prototype to jQuery's own prototype. The thing called
 readyCallback is similar. For those that are interested, I created a gist
 with a minimal illustration here: https://gist.github.com/rwldrn/5388544







 -Boris






Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Scott Miles
 The callbacks are convenient because (1) there is no question of 'who
registers a listener' (2) I can simply call my 'super' callback (or not) to
get inherited behavior.

One minute later, these seem like bad reasons. I shouldn't have shot from
the hip, let me do some research.


On Mon, Apr 15, 2013 at 9:44 AM, Scott Miles sjmi...@google.com wrote:

  Why do the constructors of component instances run during component
 loading?

 I'm not sure what you are referring to. What does 'component loading' mean?

  Why not use standard events rather than callbacks?

 This was discussed quite a bit, here is my off-the-cuff response. I may
 have to do archaeology to get a better one.

 Custom elements can inherit from custom elements. The callbacks are
 convenient because (1) there is no question of 'who registers a listener'
 (2) I can simply call my 'super' callback (or not) to get inherited
 behavior.

 IIRC, it is also advantageous for performance and for having control over
  the timing of these calls.

 Scott


 On Mon, Apr 15, 2013 at 9:37 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Why do the constructors of component instances run during component
 loading?

 Why not use standard events rather than callbacks?

 Thanks,
 jjb
 On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user code
 during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html

 Scott


  On Mon, Apr 15, 2013 at 7:45 AM, Rick Waldron waldron.r...@gmail.com wrote:




  On Mon, Apr 15, 2013 at 8:57 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/14/13 5:35 PM, Rick Waldron wrote:

  I have a better understanding of the problem caused by these generated
 HTML*Element constructors: they aren't constructable.


 I'd like to understand what's meant here.  I have a good understanding
 of how these constructors work in Gecko+SpiderMonkey, but I'm not sure 
 what
 the lacking bit is, other than the fact that they have to create JS 
 objects
 that have special state associated with them, so can't work with an object
 created by the [[Construct]] of a typical function.

 Is that what you're referring to, or something else?


 Sorry, I should've been more specific. What I meant was that:

 new HTMLButtonElement();

 Doesn't construct an HTMLButtonElement, it throws with an illegal
 constructor in Chrome and HTMLButtonElement is not a constructor in
 Firefox (I'm sure this is the same across other browsers)

 Which of course means that this is not possible even today:

 function Smile() {
   HTMLButtonElement.call(this);
    this.textContent = ":)";
 }

 Smile.prototype = Object.create(HTMLButtonElement.prototype);


 Since this doesn't work, the prototype method named readyCallback was
 invented as a bolt-on stand-in for the actual [[Construct]]

 Hopefully that clarifies?

 Rick


  PS. A bit of trivia... A long time ago some users requested that
 jQuery facilitate a custom constructor; to make this work, John put the
 actual constructor code in a prototype method called init and set that
 method's prototype to jQuery's own prototype. The thing called
 readyCallback is similar. For those that are interested, I created a gist
 with a minimal illustration here:
 https://gist.github.com/rwldrn/5388544







 -Boris







Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Scott Miles
Sorry for the extra posts. I promise to slow down.

 Why not use standard events rather than callbacks?

I believe a better answer is that it was decided these callbacks had to
work synchronously relative to imperative construction.

So, I can do

  var xfoo = document.createElement('x-foo');
  xfoo.doImportantThing();


On Mon, Apr 15, 2013 at 9:46 AM, Scott Miles sjmi...@google.com wrote:

  The callbacks are convenient because (1) there is no question of 'who
 registers a listener' (2) I can simply call my 'super' callback (or not) to
 get inherited behavior.

 One minute later, these seem like bad reasons. I shouldn't have shot from
 the hip, let me do some research.


 On Mon, Apr 15, 2013 at 9:44 AM, Scott Miles sjmi...@google.com wrote:

  Why do the constructors of component instances run during component
 loading?

 I'm not sure what you are referring to. What does 'component loading'
 mean?

  Why not use standard events rather than callbacks?

 This was discussed quite a bit, here is my off-the-cuff response. I may
 have to do archaeology to get a better one.

 Custom elements can inherit from custom elements. The callbacks are
 convenient because (1) there is no question of 'who registers a listener'
 (2) I can simply call my 'super' callback (or not) to get inherited
 behavior.

 IIRC, it is also advantageous for performance and for having control over
  the timing of these calls.

 Scott


 On Mon, Apr 15, 2013 at 9:37 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Why do the constructors of component instances run during component
 loading?

 Why not use standard events rather than callbacks?

 Thanks,
 jjb
 On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user code
 during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html

 Scott


 On Mon, Apr 15, 2013 at 7:45 AM, Rick Waldron 
  waldron.r...@gmail.com wrote:




  On Mon, Apr 15, 2013 at 8:57 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/14/13 5:35 PM, Rick Waldron wrote:

  I have a better understanding of the problem caused by these generated
 HTML*Element constructors: they aren't constructable.


 I'd like to understand what's meant here.  I have a good
 understanding of how these constructors work in Gecko+SpiderMonkey, but 
 I'm
 not sure what the lacking bit is, other than the fact that they have to
 create JS objects that have special state associated with them, so can't
 work with an object created by the [[Construct]] of a typical function.

 Is that what you're referring to, or something else?


 Sorry, I should've been more specific. What I meant was that:

 new HTMLButtonElement();

 Doesn't construct an HTMLButtonElement, it throws with an illegal
 constructor in Chrome and HTMLButtonElement is not a constructor in
 Firefox (I'm sure this is the same across other browsers)

 Which of course means that this is not possible even today:

 function Smile() {
   HTMLButtonElement.call(this);
   this.textContent = ":)";
 }

 Smile.prototype = Object.create(HTMLButtonElement.prototype);


 Since this doesn't work, the prototype method named readyCallback
 was invented as a bolt-on stand-in for the actual [[Construct]]

 Hopefully that clarifies?

 Rick


 PS. A bit of trivial... A long time ago some users requested that
 jQuery facilitate a custom constructor; to make this work, John put the
 actual constructor code in a prototype method called init and set that
 method's prototype to jQuery's own prototype. The thing called
 readyCallback is similar. For those that are interested, I created a 
 gist
 with a minimal illustration here:
 https://gist.github.com/rwldrn/5388544







 -Boris








Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Scott Miles
Dimitri is trying to avoid 'block[ing] instance construction' because
instances can be in the main document markup.

The main document can have a bunch of markup for custom elements. If the
user has made element definitions a-priori to parsing that markup
(including inside link rel='import'), he expects those nodes to be 'born'
correctly.

Sidebar: running user's instance code while the parser is constructing the
tree is Bad(tm) so we already have deferred init code until immediately
after the parsing step. This is why I keep saying 'ready-time' is different
from 'construct-time'.

Today, I don't see how we can construct a custom element with the right
prototype at parse-time without blocking on imported scripts (which is
another side-effect of using script execution for defining prototype, btw.)

If we don't block, the parser has to construct some kind of placeholder
for each custom instance, and then we upgrade them in a second pass.
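
To make 'upgrade' concrete, a rough sketch of that second pass (the names
here are invented for illustration; mutating __proto__ is the only tool
script has for this today):

  // Sketch: the parser built placeholder elements for 'tagName'; once a
  // definition exists, swap in the real prototype and run deferred init.
  function upgradeInstances(tagName, definition) {
    var placeholders = document.querySelectorAll(tagName);
    for (var i = 0; i < placeholders.length; i++) {
      var el = placeholders[i];
      el.__proto__ = definition.prototype;
      if (el.readyCallback) {
        el.readyCallback(); // ready-time, distinct from construct-time
      }
    }
  }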



On Mon, Apr 15, 2013 at 9:54 AM, John J Barton
johnjbar...@johnjbarton.comwrote:




 On Mon, Apr 15, 2013 at 9:44 AM, Scott Miles sjmi...@google.com wrote:

  Why do the constructors of component instances run during component
 loading?

 I'm not sure what you are referring to. What does 'component loading'
 mean?

  Why not use standard events rather than callbacks?


 I'll read some of the doc you link below and re-ask.

  On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user code
 during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html


 ---

 Here's why:

 i) when we load component document, it blocks scripts just like a
 stylesheet 
 (http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts)

 ii) this is okay, since our constructors are generated (no user code)
 and most of the tree could be constructed while the component is
 loaded.

 iii) However, if we make constructors run at the time of tree
 construction, the tree construction gets blocked much sooner, which
 effectively makes component loading synchronous. Which is bad.

 

 Why do the constructors of component *instances*, which don't need to run
 until instances are created, need to block the load of component documents?

 Seems to me that you could dictate that script in components load async WRT 
 components but block instance construction.

 jjb







Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Scott Miles
  1) call 'init' when component instance tag is encountered, blocking
parsing,

Fwiw, it was said that calling user code from inside the Parser could
cause Armageddon, not just block the parser. I don't recall the details,
unfortunately.


On Mon, Apr 15, 2013 at 11:44 AM, John J Barton johnjbar...@johnjbarton.com
 wrote:




 On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles sjmi...@google.com wrote:

 Thank you for your patience. :)

 ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

 My 'x-foo' has an 'init' method that I wrote that has to execute before
 the instance is fully 'constructed'. Parser encounters an x-foo/x-foo
 and constructs it. My understanding is that calling 'init' from the parser
 at that point is a non-starter.


 I think the Pinocchio link makes the case that you have only three
 choices:
1) call 'init' when component instance tag is encountered, blocking
 parsing,
2) call 'init' later, causing reflows and losing the value of not
 blocking parsing,
3) don't allow 'init' at all, limiting components.

 So non-starter is just a vote against one of three Bad choices as far as
 I can tell. In other words, these are all non-starters ;-).


  But my original question concerns blocking component documents on their
 own script tag compilation. Maybe I misunderstood.

 I don't think imports (nee component documents) have any different
 semantics from the main document in this regard. The import document may
 have an x-foo instance in its markup, and element tags or link
 rel=import just like the main document.


 Indeed, however the relative order of the component's script tag
 processing and the component's tag element is all I was talking about.




 On Mon, Apr 15, 2013 at 11:23 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 10:38 AM, Scott Miles sjmi...@google.comwrote:

 Dimitri is trying to avoid 'block[ing] instance construction' because
 instances can be in the main document markup.


 Yes we sure hope so!



 The main document can have a bunch of markup for custom elements. If
 the user has made element definitions a-priori to parsing that markup
 (including inside link rel='import'), he expects those nodes to be 'born'
 correctly.


 Sure.




 Sidebar: running user's instance code while the parser is constructing
 the tree is Bad(tm) so we already have deferred init code until immediately
 after the parsing step. This is why I keep saying 'ready-time' is different
 from 'construct-time'.


 ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?



 Today, I don't see how we can construct a custom element with the right
 prototype at parse-time without blocking on imported scripts (which is
 another side-effect of using script execution for defining prototype, btw.)


 You must block creating instances of components until component
 documents are parsed and initialized.  Because of limitations in HTML DOM
 construction, you may have to block HTML parsing until instances of
 components are created. Thus I imagine that creating instances may block
 HTML parsing until component documents are parsed and initialized or the
 HTML parsing must have two passes as your Pinocchio link outlines.

 But my original question concerns blocking component documents on their
 own script tag compilation. Maybe I misunderstood.

 jjb





 On Mon, Apr 15, 2013 at 9:54 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 9:44 AM, Scott Miles sjmi...@google.comwrote:

  Why do the constructors of component instances run during
 component loading?

 I'm not sure what you are referring to. What does 'component loading'
 mean?

  Why not use standard events rather than callbacks?


 I'll read some of the doc you link below and re-ask.

  On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user
 code during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html


 ---

 Here's why:

 i) when we load component document, it blocks scripts just like a
 stylesheet 
 (http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts)

 ii) this is okay, since our constructors are generated (no user code)
 and most of the tree could be constructed while the component is
 loaded.

 iii) However, if we make constructors run at the time of tree
 construction, the tree construction gets blocked much sooner, which
 effectively makes component loading synchronous. Which is bad.

 

 Why do the constructors of component *instances*, which don't need to run
 until instances are created, need to block the load of component 
 documents?

 Seems to me

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Scott Miles
 What happens if the construction/initialization of the custom element
calls one of the element's member functions overridden by code in a
prototype?

IIRC it's not possible to override methods that will be called from inside
of builtins, so I don't believe this is an issue (unless we change the
playfield).

 How, as component author, do I ensure that my imperative set up code
runs and modifies my element DOM content before the user sees the
un-modified custom element declared in mark-up? (I'm cheating, since this
issue isn't specific to your prototype)

This is another can of worms. Right now we blanket solve this by waiting
for an 'all clear' event (also being discussed, 'DOMComponentsReady' or
something) and handling this appropriately for our application.
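
In practice that looks something like the sketch below (the event name is
still being discussed, so treat it as a placeholder; 'x-foo' and
'doImportantThing' are illustrative as before):

  // Sketch: defer work that needs fully-upgraded custom elements.
  document.addEventListener('DOMComponentsReady', function() {
    // every custom element in the document has been upgraded by now
    document.querySelector('x-foo').doImportantThing();
  });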


On Mon, Apr 15, 2013 at 1:46 PM, John J Barton
johnjbar...@johnjbarton.comwrote:

 What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 How, as component author, do I ensure that my imperative set up code runs
 and modifies my element DOM content before the user sees the un-modified
 custom element declared in mark-up? (I'm cheating, since this issue isn't
 specific to your prototype)


 On Mon, Apr 15, 2013 at 12:39 PM, Scott Miles sjmi...@google.com wrote:

 Sorry for beating this horse, because I don't like 'prototype' element
 any more than anybody else, but I can't help thinking if there was a way to
 express a prototype without script, 98% of this goes away.

 The parser can generate an object with the correct prototype, we can run
 init code directly after parsing, there are no 'this' issues or problems
 associating element with script.

 At least somebody explain why this is conceptually wrong.


 On Mon, Apr 15, 2013 at 11:52 AM, Scott Miles sjmi...@google.com wrote:

   1) call 'init' when component instance tag is encountered, blocking
 parsing,

 Fwiw, it was said that calling user code from inside the Parser could
 cause Armageddon, not just block the parser. I don't recall the details,
 unfortunately.


 On Mon, Apr 15, 2013 at 11:44 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles sjmi...@google.comwrote:

 Thank you for your patience. :)

 ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

 My 'x-foo' has an 'init' method that I wrote that has to execute
 before the instance is fully 'constructed'. Parser encounters an
 x-foo/x-foo and constructs it. My understanding is that calling 'init'
 from the parser at that point is a non-starter.


 I think the Pinocchio link makes the case that you have only three
 choices:
1) call 'init' when component instance tag is encountered, blocking
 parsing,
2) call 'init' later, causing reflows and losing the value of not
 blocking parsing,
3) don't allow 'init' at all, limiting components.

 So non-starter is just a vote against one of three Bad choices as far
 as I can tell. In other words, these are all non-starters ;-).


  But my original question concerns blocking component documents on
 their own script tag compilation. Maybe I misunderstood.

 I don't think imports (nee component documents) have any different
 semantics from the main document in this regard. The import document may
 have an x-foo instance in its markup, and element tags or link
 rel=import just like the main document.


 Indeed, however the relative order of the component's script tag
 processing and the component's tag element is all I was talking about.




 On Mon, Apr 15, 2013 at 11:23 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 10:38 AM, Scott Miles sjmi...@google.comwrote:

 Dimitri is trying to avoid 'block[ing] instance construction'
 because instances can be in the main document markup.


 Yes we sure hope so!



 The main document can have a bunch of markup for custom elements. If
 the user has made element definitions a-priori to parsing that markup
 (including inside link rel='import'), he expects those nodes to be 
 'born'
 correctly.


 Sure.




 Sidebar: running user's instance code while the parser is
 constructing the tree is Bad(tm) so we already have deferred init code
 until immediately after the parsing step. This is why I keep saying
 'ready-time' is different from 'construct-time'.


 ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?



 Today, I don't see how we can construct a custom element with the
 right prototype at parse-time without blocking on imported scripts 
 (which
 is another side-effect of using script execution for defining prototype,
 btw.)


 You must block creating instances of components until component
 documents are parsed and initialized.  Because of limitations in HTML DOM
 construction, you may have to block HTML parsing

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread Scott Miles
 So we can override some methods but not others, depending on the
implementation?

You can override methods that will be called from JS, but not from C++
(depending on platform).

 Gee, that's not very encouraging

I was trying to just say we have been aware of these issues too and there
are efforts going on here.

We are already building apps on these techniques and are exploring these
issues with developers. I'd rather not get into all those issues too on
this thread.

Rather, for the hard-core platform peeps here, I'd prefer to focus on some
semantics for document.register and element that don't cause hives (see
what I did there?).

Scott

On Mon, Apr 15, 2013 at 2:23 PM, John J Barton
johnjbar...@johnjbarton.comwrote:




 On Mon, Apr 15, 2013 at 2:01 PM, Scott Miles sjmi...@google.com wrote:

  What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 IIRC it's not possible to override methods that will be called from
 inside of builtins, so I don't believe this is an issue (unless we change
 the playfield).


 Ugh. So we can override some methods but not others, depending on the
 implementation?

 So really these methods are more like callbacks with a funky kind of
 registration. It's not like inheriting and overriding, it's like onLoad
 implemented with an inheritance-like wording.  An API user doesn't think
 like an object; rather, they ask the Internet some HowTo questions and get
 a recipe for a particular function override.

 Ok, I'm exaggerating, but I still think the emphasis on inheritance in the
 face of so much magic is a high tax on this problem.




  How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)

 This is another can of worms. Right now we blanket solve this by waiting
 for an 'all clear' event (also being discussed, 'DOMComponentsReady' or
 something) and handling this appropriately for our application.


 Gee, that's not very encouraging: this is the most important kind of issue
 for a developer, more so than whether the API is inheritance-like or not.





 On Mon, Apr 15, 2013 at 1:46 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)


 On Mon, Apr 15, 2013 at 12:39 PM, Scott Miles sjmi...@google.comwrote:

 Sorry for beating this horse, because I don't like 'prototype' element
 any more than anybody else, but I can't help thinking if there was a way to
 express a prototype without script, 98% of this goes away.

 The parser can generate an object with the correct prototype, we can
 run init code directly after parsing, there are no 'this' issues or
 problems associating element with script.

 At least somebody explain why this is conceptually wrong.


 On Mon, Apr 15, 2013 at 11:52 AM, Scott Miles sjmi...@google.comwrote:

   1) call 'init' when component instance tag is encountered, blocking
 parsing,

 Fwiw, it was said that calling user code from inside the Parser could
 cause Armageddon, not just block the parser. I don't recall the details,
 unfortunately.


 On Mon, Apr 15, 2013 at 11:44 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles sjmi...@google.comwrote:

 Thank you for your patience. :)

 ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

 My 'x-foo' has an 'init' method that I wrote that has to execute
 before the instance is fully 'constructed'. Parser encounters an
 x-foo/x-foo and constructs it. My understanding is that calling 
 'init'
 from the parser at that point is a non-starter.


 I think the Pinocchio link makes the case that you have only three
 choices:
1) call 'init' when component instance tag is encountered,
 blocking parsing,
2) call 'init' later, causing reflows and losing the value of not
 blocking parsing,
3) don't allow 'init' at all, limiting components.

 So non-starter is just a vote against one of three Bad choices as
 far as I can tell. In other words, these are all non-starters ;-).


  But my original question concerns blocking component documents on
 their own script tag compilation. Maybe I misunderstood.

 I don't think imports (nee component documents) have any different
 semantics from the main document in this regard. The import document may
 have an x-foo instance in its markup, and element tags

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-14 Thread Scott Miles
  the challenge with creating a normal constructor

Forgive me if my language is imprecise, but the basic notion is that in
general one cannot create a constructor that creates a DOM node because
(most? all?) browsers make under the hood mappings to internal code (C++
for Blink and Webkit). For example, note that HTMLElement and descendents
are not callable from JS.

Erik Arvidsson came up with a strategy for overcoming this in Blink, but to
my recollection Boris Zbarsky said this was a non-starter in Gecko.

Because of this constraint Dimitri's current system involves supplying only
a prototype to the system, which hands you back a generated constructor.

Wrt 'has-a' and 'is-a', at one point I polyfilled a system where the user
object has-a Element instead of is-a Element. This gets around the
constructor problem, but has some drawbacks: e.g. users want custom API on
the node (at instance time we populated the node with public API, a
per-instance cost). The fatal problem was that ultimately users rejected
the separation between the true element and the code they wrote (which
boils down to 'this !== Element instance' in the custom code).
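
For reference, a stripped-down sketch of that has-a arrangement ('XFoo' and
'doImportantThing' are invented for illustration), which shows where the
complaint comes from:

  // Sketch: the user object *has-a* element rather than *is-a* element.
  function XFoo() {
    this.element = document.createElement('div'); // the true element
    // public API must be copied onto the node, at per-instance cost
    this.element.doImportantThing = this.doImportantThing.bind(this);
  }

  XFoo.prototype.doImportantThing = function() {
    // inside user code, 'this' is the wrapper, not the element
    console.log(this instanceof HTMLElement); // false
  };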

HTH,
Scott

On Sun, Apr 14, 2013 at 6:11 AM, Brian Kardell bkard...@gmail.com wrote:

 Can Scott or Daniel or someone explain the challenge with creating a
 normal constructor that has been mentioned a few times (Scott mentioned
 has-a).  I get the feeling that several people are playing catch up on that
 challenge and the implications that are causing worry.  Until people have
 some shared understanding it is difficult to impossible to reach something
 acceptable all around.  Hard to solve the unknown problems.



Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-14 Thread Scott Miles
First of all, thanks for diving in on this Allen (and Rick and Blake et al).

 all built-in HTML*Element constructors are global so making app defined
custom elements also be global doesn't seem like it is introducing any new
ugliness

Yes, but that set of constructors is tightly controlled, and application
level custom elements will be the Wild West. I may have to give on this
one, but I really appreciated that document.register let me do my work
without having to name a global symbol.

 Here are four ways to avoid the subclassing problem for custom elements
 1)  Only allow instances of custom DOM elements to be instantiated
using document.createElement(x-foo).

Wearing web developer hat, I never make elements any other way than
createElement (or HTML), so this would be standard operating procedure, so
that's all good if we can get buy in.

 2, 3, 4

I believe these have been suggested in one form or another, but as I mentioned,
were determined to be non-starters for Gecko. I don't think we've heard
anything from IE team.


On Sun, Apr 14, 2013 at 11:28 AM, Allen Wirfs-Brock
al...@wirfs-brock.comwrote:


 On Apr 13, 2013, at 9:13 PM, Scott Miles wrote:

  I think if an element needs such custom behavior it should be required
 to use a constructor= attribute to associate an app provided constructor
 object with the element and

 I don't like this because it requires me to make a global symbol.
 document.register, as you pointed out does not. In the original scenario,
 the nesting of the script in the element provided the linkage between
 them (however it played out), I hate to lose that. If you think this is a
 bogus objection, please let me know (and I will take it seriously).


 Well, all built-in HTML*Element constructors are global so making app
 defined custom elements also be global doesn't seem like it is introducing
 any new ugliness.  Regardless, I believe when I first described
 constructor= is suggest that its value should be interpreted as a script
 expression.  In that case you can say things like:

 element name=x-foo constructor=appNamespace.FooElement
 /element

 element name=x-foo constructor=myExtensionRegistry.lookup('x-foo')
 /element

 etc.


  the constructor should be specified in a normal script bock using
 normal JS techniques.

 There is a practical problem that we cannot make a constructor using
 'normal' techniques that creates a DOM node, so sayeth the Gecko guys
 (iirc). There was a suggestion to make the custom objects have-a node
 instead of be-a node, which has many positives, but we tried that in
 polyfills and the users revolted against 'this !== my-element-instance'
 inside their class.


 What they are referring to is what in the TC39 world we call the
 built-ins  subclassing problem. The issue is that many built-in objects
 (for example arrays) and many host objects have special implementation
 specific object representations that gives them special runtime
 characteristics and that simply inheriting from their prototype isn't
 enough to give an object created by a subclass constructor those special
 characteristics.

 In TC39 we use the term exotic object for any object with such special
 characteristics.  In ES specs we talk about the [[Construct]] behavior of
 a function.  This is the protocol that is followed when the new operator
 is applied to a constructor function.  The normal default [[Construct]]
 behavior is to allocate a new normal object and then to call the
 constructor function to initialize the state of that object. Exotic objects
 typically have a different dispatch at a low level to a special
 [[Construct]] implementation that knows how to allocate and initialize the
 appropriate form of exotic object.

 Here are four ways to avoid the subclassing problem for custom elements:

 1)  Only allow instances of custom DOM elements to be instantiated using
 document.createElement(x-foo).  createElement would instantiate the
 appropriate implementation level exotic dom element object structure and
 then invoke the app provided constructor (with this bound to the new exotic
 instance) to initialize the instance.

 2)  Whenever a constructor is associated with an element, either via a
 constructor= attribute or via a call to document.register, the
 implementation would modify (if necessary) the [[Construct]] dispatch of
 the app provided constructor to first create an implementation-specific
 exotic DOM element object and then to call the provided constructor to do
 any app-specific initialization.  This would allow saying things like: new
 HTMLXFooElement() to instantiate custom elements

 3) Provide a new API that blesses an app provided constructor as a DOM
 element constructor.  Only blessed constructors would be allowed to be
 associated with an element.  The blessing process would be essentially
 the same as described for #2 above.

 4) ES6 includes specified behavior that eliminates the built-in
 subclassing problem and browsers will be implementing

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-14 Thread Scott Miles
Re: subclassing builtins, the problem we have is stated here:
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0266.html


On Sun, Apr 14, 2013 at 11:52 AM, Allen Wirfs-Brock
al...@wirfs-brock.comwrote:


 On Apr 14, 2013, at 10:49 AM, Scott Miles wrote:

   the challenge with creating a normal constructor

 Forgive me if my language is imprecise, but the basic notion is that in
 general one cannot create a constructor that creates a DOM node because
 (most? all?) browsers make under the hood mappings to internal code (C++
 for Blink and Webkit). For example, note that HTMLElement and descendents
 are not callable from JS.

 Erik Arvidsson came up with a strategy for overcoming this in Blink, but
 to my recollection Boris Zbarsky said this was a non-starter in Gecko.

 Because of this constraint Dimitri's current system involves supplying
 only a prototype to the system, which hands you back a generated
 constructor.


 I addressed this issue in a follow message.

 For background on the problem and general solution see
 http://wiki.ecmascript.org/lib/exe/fetch.php?id=meetings%3Ameeting_jan_29_2013&cache=cache&media=meetings:subclassing_builtins.pdf


 Also http://www.2ality.com/2013/03/subclassing-builtins-es6.html

 Allen





Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-14 Thread Scott Miles
 This is somehow ok because it's polyfillable?

The polyfill thing is a red-herring, let's not cloud the issue.

  The platforms need to make them constructable

Agreed, but the estimates are months or years to make it so, which is too
long to block these features.

 hack around the problem

Many good people have been trying to solve this over-constrained problem
for months. I don't think this is a fair characterization.

 bolt-on ready* callbacks

The 'callback' naming and semantics were the result of a long debate. Can
you make your objections clearer? Also, please refer to this thread
http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html for
more information about 'readyCallback' in particular.


On Sun, Apr 14, 2013 at 2:35 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Sun, Apr 14, 2013 at 3:46 PM, Daniel Buchner dan...@mozilla.comwrote:

  Here are four ways to avoid the subclassing problem for custom
 elements

  1)  Only allow instances of custom DOM elements to be instantiated
 using document.createElement(x-foo).

 Wearing web developer hat, I never make elements any other way than
 createElement (or HTML), so this would be standard operating procedure, so
 that's all good if we can get buy in.

 As long as the above supports all other DOM element creation vectors
 (innerHTML, outerHTML, etc), then this is fine. Practically speaking, if it
 so happened that custom elements could *never *be instantiated with
 constructors, developers on the web today wouldn't shed a tear, they use
 doc.createElement(), not constructors --
 https://docs.google.com/forms/d/16cNqHRe-7CFRHRVcFo94U6tIYnohEpj7NZhY02ejiXQ/viewanalytics

 -


  Alex Russell has been advocating that WebIDL should allow
 constructor-like interfaces

 Absolutely agree. But these are horns of this dilemma.

  #4 has been accepted for ES6 by all TC39 participants

 Yes, I believe this is a timing issue. I am told it will be a long time
 before #4 is practical.

 Yes, it will be a long time, especially for IE9 and 10 (read: never),
 which are support targets for custom element polyfills. Reliance on
 anything that is optional or future should be avoided for the custom
 element base case. Right now the polyfills for document.register(), and a
 few of the declarative proposals, can give developers these awesome APIs
 today - please, do not imperil this.


 After reading Scott Miles' post here
 http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0209.html,
 I have a better understanding of problem caused by these generated
 HTML*Element constructors: they aren't constructable. No amount of ES6
 subclass support will fix that problem. The platforms need to make them
 constructable—and that can't be polyfilled. I also now understand why such
 great lengths have been taken to hack around the problem, and the
 resulting solution with bolt-on ready* callbacks (that aren't really
 callbacks, just prototype methods that will be called after some turn of
 execution has initialized some element state) as stand-ins for the real
 constructor function. This is somehow ok because it's polyfillable?


 Rick





Re: [webcomponents]: de-duping in HTMLImports

2013-04-11 Thread Scott Miles
On Thu, Apr 11, 2013 at 12:33 AM, Angelina Fabbro
angelinafab...@gmail.comwrote:

  I don't believe it's *needed* exactly, but we imagined somebody wanting
 to import HTML, use it destructively, then import it again.

 That does sound totally crazy. Can you give an example as to what someone
 might want to do with this? Maybe it's not totally crazy and I'm just not
 being creative enough.


You have to assume some facts not in evidence, but imagine an import that
runs script and generates content based on the current time, or some other
dynamic. Then imagine a page injects a link tag, based on some event, to
import the latest content.


 Then I guess I need this spec'd :)

 I'd rather de-duping be a nice optimization performed by the user-agent
 and hidden from me entirely. Although, now I'm really curious about an
 argument for opting out of de-duping.


If there is no automatic de-duping then the author has to take care to
specifically avoid duplication in various cases. Therefore, it cannot be an
optimization, in the sense that it's not optional. It has to be required by
the spec or you cannot rely on it.
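
For example, here is the kind of reliance I mean (a sketch, assuming the
link.import accessor from the current Imports draft):

  // Two imports of the same URL: with de-duping required by the spec,
  // 'base.html' is fetched and parsed once, and both links should
  // resolve to the same imported document.
  var a = document.createElement('link');
  a.rel = 'import';
  a.href = 'base.html';
  var b = a.cloneNode();
  document.head.appendChild(a);
  document.head.appendChild(b);
  b.onload = function() {
    console.log(a.import === b.import); // expected: true
  };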



 On Wed, Apr 10, 2013 at 11:56 AM, Scott Miles sjmi...@google.com wrote:

  Interesting. Why do you need [attribute to opt-out of deduping]?

 I don't believe it's *needed* exactly, but we imagined somebody wanting
 to import HTML, use it destructively, then import it again.

 That may be totally crazy. :)

 Scott

 On Wed, Apr 10, 2013 at 11:50 AM, Dimitri Glazkov dglaz...@google.comwrote:

 On Tue, Apr 9, 2013 at 11:42 AM, Scott Miles sjmi...@google.com wrote:
  Duplicate fetching is not observable, but duplicate parsing and
 duplicate
  copies are observable.
 
  Preventing duplicate parsing and duplicate copies allows us to use
 'imports'
  without a secondary packaging mechanism. For example, I can load 100
  components that each import 'base.html' without issue. Without this
 feature,
  we would need to manage these dependencies somehow; either manually,
 via
  some kind of build tool, or with a packaging system.

 Then I guess I need this spec'd :)

 
  If import de-duping is possible, then ideally there would also be an
  attribute to opt-out.

 Interesting. Why do you need it?

 :DG






Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Scott Miles
So, what you quoted are thoughts I already deprecated myself in this
thread. :)

If you read a bit further, you'll see that I realized that shadow-root is
really part of the 'outer html' of the node and not the inner html.

 I think that is actually a feature, not a detriment and easily
explainable.

What is actually a feature? You mean that the shadow root is invisible to
innerHTML?

Yes, that's true. But without some special handling of Shadow DOM you get
into trouble when you start using innerHTML to serialize DOM into HTML and
transfer content from A to B. Or even from A back to itself.

Again, treating (non-intrinsic) Shadow DOM as outerHTML solves this problem
IMO.

Scott


On Wed, Apr 10, 2013 at 10:11 AM, Brian Kardell bkard...@gmail.com wrote:

 On Mon, Mar 18, 2013 at 5:05 PM, Scott Miles sjmi...@google.com wrote:
  I'm already on the record with A, but I have a question about
 'lossiness'.
 
  With my web developer hat on, I wonder why I can't say:
 
  div id=foo
shadowroot
  shadow stuff
/shadowroot
 
light stuff
 
  /div
 
 
  and then have the value of #foo.innerHTML still be
 
shadowroot
   shadow stuff
/shadowroot
 
lightstuff
 
  I understand that for DOM, there is a wormhole there and the reality of
 what
  this means is new and frightening; but as a developer it seems to be
  perfectly fine as a mental model.
 
  We web devs like to grossly oversimplify things. :)
 
  Scott

 I am also a Web developer and I find that proposal (showing in
 innerHTML) feels really wrong/unintuitive to me... I think that is
 actually a feature, not a detriment and easily explainable.

 I am in a) camp



Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Scott Miles
I don't see any reason why my document markup for some div should not be
serializable back to how I wrote it via innerHTML. That seems just plain
bad.

I hope you can take a look at what I'm saying about outerHTML. I believe at
least the concept there solves all cases.



On Wed, Apr 10, 2013 at 11:27 AM, Brian Kardell bkard...@gmail.com wrote:


 On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote:
 
  So, what you quoted are thoughts I already deprecated myself in this
 thread. :)
 
  If you read a bit further, you'll see that I realized that shadow-root is
 really part of the 'outer html' of the node and not the inner html.
 
 Yeah sorry, connectivity issue prevented me from seeing those until after
 i sent i guess.

   I think that is actually a feature, not a detriment and easily
 explainable.
 
  What is actually a feature? You mean that the shadow root is invisible
 to innerHTML?
 


 Yes.

  Yes, that's true. But without some special handling of Shadow DOM you
 get into trouble when you start using innerHTML to serialize DOM into HTML
 and transfer content from A to B. Or even from A back to itself.
 

 I think Dimitri's implication iii is actually intuitive - that is what I am
 saying... I do think that round-tripping via innerHTML would be lossy of
 declarative markup used to create the instances inside the shadow... to get
 that it feels like you'd need something else which I think he also
 provided/mentioned.

 Maybe I'm alone on this, but it's just sort of how I expected it to work
 all along... Already, roundtripping can differ from the original source; if
 you aren't careful this can bite you in the hind-quarters, but it is
 actually sensible.  Maybe I need to think about this a little deeper, but I
 see nothing at this stage to make me think that the proposal and
 implications are problematic.



Re: [webcomponents]: de-duping in HTMLImports

2013-04-10 Thread Scott Miles
 Interesting. Why do you need [attribute to opt-out of deduping]?

I don't believe it's *needed* exactly, but we imagined somebody wanting to
import HTML, use it destructively, then import it again.

That may be totally crazy. :)

Scott

On Wed, Apr 10, 2013 at 11:50 AM, Dimitri Glazkov dglaz...@google.comwrote:

 On Tue, Apr 9, 2013 at 11:42 AM, Scott Miles sjmi...@google.com wrote:
  Duplicate fetching is not observable, but duplicate parsing and duplicate
  copies are observable.
 
  Preventing duplicate parsing and duplicate copies allows us to use
 'imports'
  without a secondary packaging mechanism. For example, I can load 100
  components that each import 'base.html' without issue. Without this
 feature,
  we would need to manage these dependencies somehow; either manually, via
  some kind of build tool, or with a packaging system.

 Then I guess I need this spec'd :)

 
  If import de-duping is possible, then ideally there would also be an
  attribute to opt-out.

 Interesting. Why do you need it?

 :DG



Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Scott Miles
I think we all agree that node.innerHTML should not reveal node's
ShadowDOM, ever.

What I am arguing is that, if we have shadow-root element that you can
use to install shadow DOM into an arbitrary node, like this:

div
  shadow-root
Decoration -- content/content -- Decoration
  /shadow-root
  Light DOM
/div


Then, as we agree, innerHTML is

LightDOM


but outerHTML would be

div
  shadow-root
Decoration -- content/content -- Decoration
  /shadow-root
  Light DOM
/div


I'm suggesting this outerHTML only for 'non-intrinsic' shadow DOM, by which
I mean Shadow DOM that would never exist on a node unless you had
specifically put it there (as opposed to Shadow DOM intrinsic to a
particular element type).

With this inner/outer rule, all serialization cases I can think of work in
a sane fashion, no lossiness.
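
In script terms, the invariant is roughly (a sketch, using the div above):

  var foo = document.querySelector('div');
  console.log(foo.innerHTML); // light DOM only; shadow never leaks here
  console.log(foo.outerHTML); // includes the shadow-root markup, so the
                              // string parses back into an equivalent node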

Scott



On Wed, Apr 10, 2013 at 12:05 PM, Erik Arvidsson a...@chromium.org wrote:

 Maybe I'm missing something but we clearly don't want to include
 shadowroot in the general innerHTML getter case. If I implement
 input[type=range] using custom elements + shadow DOM I don't want innerHTML
 to show the internal guts.


 On Wed, Apr 10, 2013 at 2:30 PM, Scott Miles sjmi...@google.com wrote:

 I don't see any reason why my document markup for some div should not be
 serializable back to how I wrote it via innerHTML. That seems just plain
 bad.

 I hope you can take a look at what I'm saying about outerHTML. I believe
 at least the concept there solves all cases.



 On Wed, Apr 10, 2013 at 11:27 AM, Brian Kardell bkard...@gmail.comwrote:


 On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote:
 
  So, what you quoted are thoughts I already deprecated myself in this
 thread. :)
 
  If you read a bit further, you'll see that I realized that shadow-root is
 really part of the 'outer html' of the node and not the inner html.
 
 Yeah sorry, connectivity issue prevented me from seeing those until
 after i sent i guess.

   I think that is actually a feature, not a detriment and easily
 explainable.
 
  What is actually a feature? You mean that the shadow root is invisible
 to innerHTML?
 


 Yes.

  Yes, that's true. But without some special handling of Shadow DOM you
 get into trouble when you start using innerHTML to serialize DOM into HTML
 and transfer content from A to B. Or even from A back to itself.
 

 I think Dimitri's implication iii is actually intuitive - that is what I
 am saying... I do think that round-tripping via innerHTML would be lossy of
 declarative markup used to create the instances inside the shadow... to get
 that it feels like you'd need something else which I think he also
 provided/mentioned.

 Maybe I'm alone on this, but it's just sort of how I expected it to work
 all along... Already, roundtripping can differ from the original source; if
 you aren't careful this can bite you in the hind-quarters, but it is
 actually sensible.  Maybe I need to think about this a little deeper, but I
 see nothing at this stage to make me think that the proposal and
 implications are problematic.





 --
 erik





Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Scott Miles
Thank you for distilling all that down into digestible content (yum,
distillates).

A couple of notes:

The 'magic script' problem has been difficult to reconcile with template,
so there is willingness to continue to use element, but ideally without
nesting template. In other words, perhaps element can be a subtype of
template.

Where we really get into trouble is when we get into inheritance. I'm happy
to discuss this further, but I figure I will wait until people have had
time to think about your main content.

Scott


On Wed, Apr 10, 2013 at 11:47 AM, Dimitri Glazkov dglaz...@google.comwrote:

 Dear Webappsonites,

 There's been a ton of thinking on what the custom elements declarative
 syntax must look like. Here, I present something that has near-ideal
 developer ergonomics at the expense of terrible sins in other areas.
 Consider it to be a beacon, rather than a concrete proposal.

 First, let's cleanse your palate. Forget about the element element
 and what goes inside of it. Eat some parsley.

 == Templates Bound to Tags ==

 Instead, suppose you only have a template:

 template
 divYay!/div
 /template

 Templates are good for stamping things out, right? So let's invent a
 way to _bind_ a template to a _tag_. When the browser sees a tag to
 which the template is bound, it stamps the template out. Like so:

 1) Define a template and bind it to a tag name:

 template bindtotagname=my-yay
 divYay!/div
 /template

 2) Whenever my-yay is seen by the parser or
 createElement/NS(my-yay) is called, the template is stamped out to
 produce:

 my-yay
 divYay!/div
 /my-yay

 Cool! This is immediately useful for web developers. They can
 transform any markup into something they can use.

 Behind the scenes: the presence of bindtotagname triggers a call to
 document.register, and the argument is a browser-generated prototype
 object whose readyCallback takes the template and appends it to
 this.
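
 A sketch of that behind-the-scenes step ('bindTemplateToTag' is a made-up
 helper; document.register and template.content as currently drafted):

   function bindTemplateToTag(template, tagName) {
     var proto = Object.create(HTMLElement.prototype);
     proto.readyCallback = function() {
       // stamp the template out into each new instance
       this.appendChild(template.content.cloneNode(true));
     };
     document.register(tagName, {prototype: proto});
   }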

 == Organic Shadow Trees  ==

 But what if they also wanted to employ encapsulation boundaries,
 leaving initial markup structure intact? No problem, much-maligned
 shadowroot to the rescue:

 1) Define a template with a shadow tree and bind it to a tag name:

 template bindtotagname=my-yay
 shadowroot
 divYay!/div
 /shadowroot
 /template

 2) For each my-yay created, the template is stamped out to create a
 shadow root and populate it.

 Super-cool! Note how the developer doesn't have to know anything
 about Shadow DOM to build custom elements (er, template-bound tags).
 Shadow trees are just an option.

 Behind the scenes: exactly the same as the first scenario.

 == Declarative Meets Imperative ==

 Now, the developer wants to add some APIs to my-yay. Sure, no problem:

 template bindtotagname=my-yay
 shadowroot
 divYay!/div
 /shadowroot
 script runwhenbound
 // runs right after document.register is triggered
 this.register(ExactSyntaxTBD);
 /script
 /template

 So-cool-it-hurts! We built a fully functional custom element, taking
 small steps from an extremely simple concept to the full-blown thing.

 In the process, we also saw a completely decoupled shadow DOM from
 custom elements in both imperative and declarative forms, achieving
 singularity. Well, or at least a high degree of consistency.

 == Problems ==

 There are severe issues.

 The shadowroot is turning out to be super-magical.

 The bindtotagname attribute will need to be also magical, to be
 consistent with how document.register could be used.

 The stamping out, after clearly specified, may raise eyebrows and
 turn out to be unintuitive.

 Templates are supposed to be inert, but the whole script
 runwhenbound thing is strongly negating this. There's probably more
 that I can't remember now.

 == Plea ==

 However, I am hopeful that you smart folk will look at this, see the
 light, tweak the idea just a bit and hit the homerun. See the light,
 dammit!

 :DG



Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Scott Miles
input/video would have intrinsic Shadow DOM, so it would not ever be part
of outerHTML.

I don't have a precise way to differentiate intrinsic Shadow DOM from
non-intrinsic Shadow DOM, but my rule of thumb is this: 'node.outerHTML'
should produce markup that parses back into 'node' (assuming all
dependencies exist).


On Wed, Apr 10, 2013 at 12:15 PM, Erik Arvidsson a...@chromium.org wrote:

 Once again, how would this work for input/video?

 Are you suggesting that `createShadowRoot` behaves different than when you
 create the shadow root using markup?


 On Wed, Apr 10, 2013 at 3:11 PM, Scott Miles sjmi...@google.com wrote:

 I think we all agree that node.innerHTML should not reveal node's
 ShadowDOM, ever.

 What I am arguing is that, if we have shadow-root element that you can
 use to install shadow DOM into an arbitrary node, like this:

 div
   shadow-root
 Decoration -- content/content -- Decoration
    /shadow-root
   Light DOM
 /div


 Then, as we agree, innerHTML is

 LightDOM


 but outerHTML would be

 div
   shadow-root
 Decoration -- content/content -- Decoration
    /shadow-root
   Light DOM
 /div


 I'm suggesting this outerHTML only for 'non-intrinsic' shadow DOM, by
 which I mean Shadow DOM that would never exist on a node unless you had
 specifically put it there (as opposed to Shadow DOM intrinsic to a
 particular element type).

 With this inner/outer rule, all serialization cases I can think of work
 in a sane fashion, no lossiness.

 Scott



 On Wed, Apr 10, 2013 at 12:05 PM, Erik Arvidsson a...@chromium.orgwrote:

 Maybe I'm missing something but we clearly don't want to include
 shadowroot in the general innerHTML getter case. If I implement
 input[type=range] using custom elements + shadow DOM I don't want innerHTML
 to show the internal guts.


 On Wed, Apr 10, 2013 at 2:30 PM, Scott Miles sjmi...@google.com wrote:

 I don't see any reason why my document markup for some div should not
 be serializable back to how I wrote it via innerHTML. That seems just plain
 bad.

 I hope you can take a look at what I'm saying about outerHTML. I
 believe at least the concept there solves all cases.



 On Wed, Apr 10, 2013 at 11:27 AM, Brian Kardell bkard...@gmail.comwrote:


 On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote:
 
  So, what you quoted are thoughts I already deprecated myself in this
 thread. :)
 
  If you read a bit further, you'll see that I realized that shadow-root
 is really part of the 'outer html' of the node and not the inner html.
 
 Yeah sorry, connectivity issue prevented me from seeing those until
 after i sent i guess.

   I think that is actually a feature, not a detriment and easily
 explainable.
 
  What is actually a feature? You mean that the shadow root is
 invisible to innerHTML?
 


 Yes.

  Yes, that's true. But without some special handling of Shadow DOM
 you get into trouble when you start using innerHTML to serialize DOM into
 HTML and transfer content from A to B. Or even from A back to itself.
 

 I think Dimitri's implication iii is actually intuitive - that is what
 I am saying... I do think that round-tripping via innerHTML would be lossy
 of declarative markup used to create the instances inside the shadow... to
 get that it feels like you'd need something else which I think he also
 provided/mentioned.

 Maybe I'm alone on this, but it's just sort of how I expected it to
 work all along... Already, roundtripping can differ from the original
 source; if you aren't careful this can bite you in the hind-quarters, but
 it
 is actually sensible.  Maybe I need to think about this a little deeper,
 but I see nothing at this stage to make me think that the proposal and
 implications are problematic.





 --
 erik






 --
 erik





Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Scott Miles
No, strictly ergonomic. Less nesting and fewer characters (less nesting is
more important IMO).

I would also argue that there is less cognitive load on the author than the
more explicit factoring, but I believe this is subjective.

Scott


On Wed, Apr 10, 2013 at 12:36 PM, Rafael Weinstein rafa...@google.comwrote:

 On Wed, Apr 10, 2013 at 11:47 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
  Dear Webappsonites,
 
  There's been a ton of thinking on what the custom elements declarative
  syntax must look like. Here, I present something that has near-ideal
  developer ergonomics at the expense of terrible sins in other areas.
  Consider it to be a beacon, rather than a concrete proposal.
 
  First, let's cleanse your palate. Forget about the element element
  and what goes inside of it. Eat some parsley.
 
  == Templates Bound to Tags ==
 
  Instead, suppose you only have a template:
 
  template
  divYay!/div
  /template
 
  Templates are good for stamping things out, right? So let's invent a
  way to _bind_ a template to a _tag_. When the browser sees a tag to
  which the template is bound, it stamps the template out. Like so:
 
  1) Define a template and bind it to a tag name:
 
  template bindtotagname=my-yay
  divYay!/div
  /template
 
  2) Whenever my-yay is seen by the parser or
  createElement/NS(my-yay) is called, the template is stamped out to
  produce:
 
  my-yay
  divYay!/div
  /my-yay
 
  Cool! This is immediately useful for web developers. They can
  transform any markup into something they can use.
 
  Behind the scenes: the presence of bindtotagname triggers a call to
  document.register, and the argument is a browser-generated prototype
  object whose readyCallback takes the template and appends it to
  this.
 
  == Organic Shadow Trees  ==
 
  But what if they also wanted to employ encapsulation boundaries,
  leaving initial markup structure intact? No problem, much-maligned
  shadowroot to the rescue:
 
  1) Define a template with a shadow tree and bind it to a tag name:
 
  template bindtotagname=my-yay
  shadowroot
  divYay!/div
  /shadowroot
  /template
 
  2) For each my-yay created, the template is stamped out to create a
  shadow root and populate it.
 
  Super-cool! Note how the developer doesn't have to know anything
  about Shadow DOM to build custom elements (er, template-bound tags).
  Shadow trees are just an option.
 
  Behind the scenes: exactly the same as the first scenario.
 
  == Declarative Meets Imperative ==
 
  Now, the developer wants to add some APIs to my-yay. Sure, no problem:
 
  template bindtotagname=my-yay
  shadowroot
  divYay!/div
  /shadowroot
  script runwhenbound
  // runs right after document.register is triggered
  this.register(ExactSyntaxTBD);
  /script
  /template
 
  So-cool-it-hurts! We built a fully functional custom element, taking
  small steps from an extremely simple concept to the full-blown thing.
 
  In the process, we also saw a completely decoupled shadow DOM from
  custom elements in both imperative and declarative forms, achieving
  singularity. Well, or at least a high degree of consistency.
 
  == Problems ==
 
  There are severe issues.
 
  The shadowroot is turning out to be super-magical.
 
  The bindtotagname attribute will need to be also magical, to be
  consistent with how document.register could be used.
 
  The stamping out, after clearly specified, may raise eyebrows and
  turn out to be unintuitive.
 
  Templates are supposed to be inert, but the whole script
  runwhenbound thing is strongly negating this. There's probably more
  that I can't remember now.

 The following expresses the same semantics:

 element tagname=my-yay
   template
 shadowroot
   divYay!/div
 /shadowroot
   /template
   script runwhenbound
   /script
 /element

 I get that your proposal is fewer characters to type. Are there other
 advantages?

 
  == Plea ==
 
  However, I am hopeful that you smart folk will look at this, see the
  light, tweak the idea just a bit and hit the homerun. See the light,
  dammit!
 
  :DG



Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Scott Miles
I made an attempt to describe how these things can be non-lossy here:
https://gist.github.com/sjmiles/5358120


On Wed, Apr 10, 2013 at 12:19 PM, Scott Miles sjmi...@google.com wrote:

 input/video would have intrinsic Shadow DOM, so it would not ever be part
 of outerHTML.

 I don't have a precise way to differentiate intrinsic Shadow DOM from
 non-intrinsic Shadow DOM, but my rule of thumb is this: 'node.outerHTML'
 should produce markup that parses back into 'node' (assuming all
 dependencies exist).


 On Wed, Apr 10, 2013 at 12:15 PM, Erik Arvidsson a...@chromium.org wrote:

 Once again, how would this work for input/video?

 Are you suggesting that `createShadowRoot` behaves different than when
 you create the shadow root using markup?


 On Wed, Apr 10, 2013 at 3:11 PM, Scott Miles sjmi...@google.com wrote:

 I think we all agree that node.innerHTML should not reveal node's
 ShadowDOM, ever.

 What I am arguing is that, if we have shadow-root element that you can
 use to install shadow DOM into an arbitrary node, like this:

 div
   shadow-root
 Decoration -- content/content -- Decoration
    /shadow-root
   Light DOM
 /div


 Then, as we agree, innerHTML is

 LightDOM


 but outerHTML would be

 div
   shadow-root
 Decoration -- content/content -- Decoration
    /shadow-root
   Light DOM
 /div


 I'm suggesting this outerHTML only for 'non-intrinsic' shadow DOM, by
 which I mean Shadow DOM that would never exist on a node unless you had
 specifically put it there (as opposed to Shadow DOM intrinsic to a
 particular element type).

 With this inner/outer rule, all serialization cases I can think of work
 in a sane fashion, no lossiness.

 Scott



 On Wed, Apr 10, 2013 at 12:05 PM, Erik Arvidsson a...@chromium.orgwrote:

 Maybe I'm missing something but we clearly don't want to include
 shadowroot in the general innerHTML getter case. If I implement
 input[type=range] using custom elements + shadow DOM I don't want innerHTML
 to show the internal guts.


 On Wed, Apr 10, 2013 at 2:30 PM, Scott Miles sjmi...@google.comwrote:

 I don't see any reason why my document markup for some div should not
 be serializable back to how I wrote it via innerHTML. That seems just 
 plain
 bad.

 I hope you can take a look at what I'm saying about outerHTML. I
 believe at least the concept there solves all cases.



 On Wed, Apr 10, 2013 at 11:27 AM, Brian Kardell bkard...@gmail.comwrote:


 On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote:
 
  So, what you quoted are thoughts I already deprecated myself in
 this thread. :)
 
  If you read a bit further, you'll see that I realized that
 is really part of the 'outer html' of the node and not the inner html.
 
 Yeah sorry, connectivity issue prevented me from seeing those until
 after i sent i guess.

   I think that is actually a feature, not a detriment and easily
 explainable.
 
  What is actually a feature? You mean that the shadow root is
 invisible to innerHTML?
 


 Yes.

  Yes, that's true. But without some special handling of Shadow DOM
 you get into trouble when you start using innerHTML to serialize DOM into
 HTML and transfer content from A to B. Or even from A back to itself.
 

 I think Dimitri's implication iii is actually intuitive - that is what
 I am saying... I do think that round-tripping via innerHTML would be 
 lossy
 of declarative markup used to create the instances inside the shadow... 
 to
 get that it feels like you'd need something else which I think he also
 provided/mentioned.

 Maybe I'm alone on this, but it's just sort of how I expected it to
 work all along... Already, roundtripping can differ from the original
 source; if you aren't careful this can bite you in the hind-quarters, but
 it
 is actually sensible.  Maybe I need to think about this a little deeper,
 but I see nothing at this stage to make me think that the proposal and
 implications are problematic.





 --
 erik






 --
 erik






Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Scott Miles
Well ok, that's unfortunate; I do wish you gave me more than "no, that's
bad." Here are a couple more thoughts.

1. I suspect the loss-of-symmetry occurs because Dimitri has defined an
element that has markup as if it is a child, but it's not actually a child.
This is the 'wormhole' effect.

2. My proposal is consistent and functional. I haven't heard any other
proposal that isn't lossy or damaging to the standard methods of working
(not saying there isn't one, I just haven't seen any outline of it yet).

Scott


On Wed, Apr 10, 2013 at 1:53 PM, Erik Arvidsson a...@chromium.org wrote:

 For the record I'm opposed to what you are proposing. I don't like that
 you lose the symmetry between innerHTML and outerHTML.


 On Wed, Apr 10, 2013 at 4:34 PM, Scott Miles sjmi...@google.com wrote:

 I made an attempt to describe how these things can be non-lossy here:
 https://gist.github.com/sjmiles/5358120


 On Wed, Apr 10, 2013 at 12:19 PM, Scott Miles sjmi...@google.com wrote:

 input/video would have intrinsic Shadow DOM, so it would not ever be
 part of outerHTML.

 I don't have a precise way to differentiate intrinsic Shadow DOM from
 non-intrinsic Shadow DOM, but my rule of thumb is this: 'node.outerHTML'
 should produce markup that parses back into 'node' (assuming all
 dependencies exist).


 On Wed, Apr 10, 2013 at 12:15 PM, Erik Arvidsson a...@chromium.org wrote:

 Once again, how would this work for input/video?

 Are you suggesting that `createShadowRoot` behaves different than when
 you create the shadow root using markup?


 On Wed, Apr 10, 2013 at 3:11 PM, Scott Miles sjmi...@google.com wrote:

 I think we all agree that node.innerHTML should not reveal node's
 ShadowDOM, ever.

 What I am arguing is that, if we have shadow-root element that you
 can use to install shadow DOM into an arbitrary node, like this:

 <div>
   <shadow-root>
     Decoration -- <content></content> -- Decoration
   </shadow-root>
   Light DOM
 </div>


 Then, as we agree, innerHTML is

 LightDOM


 but outerHTML would be

 <div>
   <shadow-root>
     Decoration -- <content></content> -- Decoration
   </shadow-root>
   Light DOM
 </div>


 I'm suggesting this outerHTML only for 'non-intrinsic' shadow DOM, by
 which I mean Shadow DOM that would never exist on a node unless you had
 specifically put it there (as opposed to Shadow DOM intrinsic to a
 particular element type).

 With this inner/outer rule, all serialization cases I can think of
 work in a sane fashion, no lossiness.

 Scott



 On Wed, Apr 10, 2013 at 12:05 PM, Erik Arvidsson a...@chromium.org wrote:

 Maybe I'm missing something but we clearly don't want to include
 shadowroot in the general innerHTML getter case. If I implement
 input[type=range] using custom elements + shadow DOM I don't want 
 innerHTML
 to show the internal guts.


 On Wed, Apr 10, 2013 at 2:30 PM, Scott Miles sjmi...@google.com wrote:

 I don't see any reason why my document markup for some div should
 not be serializable back to how I wrote it via innerHTML. That seems 
 just
 plain bad.

 I hope you can take a look at what I'm saying about outerHTML. I
 believe at least the concept there solves all cases.



 On Wed, Apr 10, 2013 at 11:27 AM, Brian Kardell 
 bkard...@gmail.com wrote:


 On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote:
 
  So, what you quoted are thoughts I already deprecated myself in
 this thread. :)
 
  If you read a bit further, see that  I realized that
 shadow-root is really part of the 'outer html' of the node and not 
 the
 inner html.
 
 Yeah sorry, connectivity issue prevented me from seeing those until
 after i sent i guess.

   I think that is actually a feature, not a detriment and easily
 explainable.
 
  What is actually a feature? You mean that the shadow root is
 invisible to innerHTML?
 


 Yes.

  Yes, that's true. But without some special handling of Shadow DOM
 you get into trouble when you start using innerHTML to serialize DOM 
 into
 HTML and transfer content from A to B. Or even from A back to itself.
 

 I think Dimitri's implication iii is actually intuitive - that is what
 what I am saying... I do think that round-tripping via innerHTML would 
 be
 lossy of declarative markup used to create the instances inside the
 shadow... to get that it feels like you'd need something else which I 
 think
 he also provided/mentioned.

 Maybe I'm alone on this, but it's just sort of how I expected it to
 work all along... Already, roundtripping can differ from the original
 source. If you aren't careful this can bite you in the hind-quarters,
 but it
 is actually sensible.  Maybe I need to think about this a little 
 deeper,
 but I see nothing at this stage to make me think that the proposal and
 implications are problematic.





 --
 erik






 --
 erik







 --
 erik





Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Scott Miles
 mistake to make element registration a concern of template.

Raf: does that include making a new type 'element' which is a subtype of
'template', which is specifically given this concern?

 stamp out light DOM is a new semantic

This is true, it sort of appeared organically and we haven't wrestled with
it enough. We have discussed it though.

There are many libraries that use this kind of technique today, and it has
a lot of appeal because it's an easier way to on-ramp people into
web-components without hitting them over the head with shadow dom right
away.

There was originally some zeal that this would solve inheritance issues wrt
declarative shadow dom, but all it really does is shift the problems around.


On Wed, Apr 10, 2013 at 2:47 PM, Rafael Weinstein rafa...@google.com wrote:

 On Wed, Apr 10, 2013 at 2:45 PM, Rafael Weinstein rafa...@google.com
 wrote:
  FWIW, I think it's a design mistake to make element registration a
  concern of template.

 Sorry. I over-stated my conviction here. Let me walk that back: I'm
 not yet hearing sufficient justification for making element
 registration a concern of template.

 
  I'd be more persuaded by the developer ergonomics argument if this was
  a cost that was incurred with the usage of custom elements, but it's
  not. It's only incurred with the element definition.
 
  Separately, I may have missed it, but it seems to me that allowing
  custom elements to stamp out light DOM is a new semantic, that isn't
  obviously solving a problem which is either identified, or related to
  web components. Did I miss earlier discussion about this?
 
  On Wed, Apr 10, 2013 at 12:40 PM, Scott Miles sjmi...@google.com
 wrote:
  No, strictly ergonomic. Less nesting and less characters (less nesting
 is
  more important IMO).
 
  I would also argue that there is less cognitive load on the author than
 the
  more explicit factoring, but I believe this is subjective.
 
  Scott
 
 
  On Wed, Apr 10, 2013 at 12:36 PM, Rafael Weinstein rafa...@google.com
  wrote:
 
  On Wed, Apr 10, 2013 at 11:47 AM, Dimitri Glazkov dglaz...@google.com
 
  wrote:
   Dear Webappsonites,
  
   There's been a ton of thinking on what the custom elements
 declarative
   syntax must look like. Here, I present something has near-ideal
   developer ergonomics at the expense of terrible sins in other areas.
   Consider it to be beacon, rather than a concrete proposal.
  
   First, let's cleanse your palate. Forget about the element element
   and what goes inside of it. Eat some parsley.
  
   == Templates Bound to Tags ==
  
   Instead, suppose you only have a template:
  
    <template>
      <div>Yay!</div>
    </template>
  
   Templates are good for stamping things out, right? So let's invent a
   way to _bind_ a template to a _tag_. When the browser sees a tag to
   which the template is bound, it stamps the template out. Like so:
  
   1) Define a template and bind it to a tag name:
  
    <template bindtotagname="my-yay">
      <div>Yay!</div>
    </template>
  
   2) Whenever my-yay is seen by the parser or
    createElement/NS("my-yay") is called, the template is stamped out to
   produce:
  
    <my-yay>
      <div>Yay!</div>
    </my-yay>
  
   Cool! This is immediately useful for web developers. They can
   transform any markup into something they can use.
  
   Behind the scenes: the presence of boundtotagname triggers a call
 to
   document.register, and the argument is a browser-generated prototype
   object whose readyCallback takes the template and appends it to
   this.
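
    In code, that behind-the-scenes step is roughly (a sketch; exact
    registration syntax TBD, and `template` stands for the bound
    template element):

    var proto = Object.create(HTMLElement.prototype);
    proto.readyCallback = function() {
      // stamp the bound template's content into each new instance
      this.appendChild(template.content.cloneNode(true));
    };
    document.register('my-yay', {prototype: proto});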
  
   == Organic Shadow Trees  ==
  
   But what if they also wanted to employ encapsulation boundaries,
   leaving initial markup structure intact? No problem, much-maligned
   shadowroot to the rescue:
  
   1) Define a template with a shadow tree and bind it to a tag name:
  
    <template bindtotagname="my-yay">
      <shadowroot>
        <div>Yay!</div>
      </shadowroot>
    </template>
  
   2) For each my-yay created, the template is stamped out to create a
   shadow root and populate it.
  
   Super-cool! Note, how the developer doesn't have to know anything
   about Shadow DOM to build custom elements (er, template-bound tags).
   Shadow trees are just an option.
  
   Behind the scenes: exactly the same as the first scenario.
  
   == Declarative Meets Imperative ==
  
   Now, the developer wants to add some APIs to my-yay. Sure, no
 problem:
  
    <template bindtotagname="my-yay">
      <shadowroot>
        <div>Yay!</div>
      </shadowroot>
      <script runwhenbound>
        // runs right after document.register is triggered
        this.register(ExactSyntaxTBD);
      </script>
    </template>
  
   So-cool-it-hurts! We built a fully functional custom element, taking
   small steps from an extremely simple concept to the full-blown thing.
  
   In the process, we also saw a completely decoupled shadow DOM from
   custom elements in both imperative and declarative forms, achieving
    singularity. Well, or at least a high degree of consistency.

Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread Scott Miles
  how that specific script tag knows what its this value is

I think I'm probably not answering your question, but I believe the notion
was that that script tag is handled specially by element, so it's a
script* which only ever executes in the 'scope' of element.


On Wed, Apr 10, 2013 at 2:54 PM, Rick Waldron waldron.r...@gmail.com wrote:




 On Wed, Apr 10, 2013 at 5:35 PM, Daniel Buchner dan...@mozilla.com wrote:

 One thing I'm wondering re template elements and the association of a
 specific script with them, is what is it really doing for me? From what I
 see, not much. It seems the only thing it does, is allows you to have the
 generic, globally-scoped script run at a given time (via a new runwhen___
 attribute) and the implicit relationship created by inclusion within the
 template element itself - which is essentially no different than just
 setting a global delegate in any 'ol script tag on the page.


 I'd be interested in seeing a reasonable set of semantics defining how
 that specific script tag knows what its this value is; so far I
 understand that inside those script tags, |this| !== |window|.


 @Erik, what about |self|? That actually makes more sense and *almost* has
 a precedent in worker global scope


  Rick



Re: [webcomponents]: Re-imagining shadow root as Element

2013-04-10 Thread Scott Miles
I thought of another tack. I should have made it clear that I wasn't so
much making a proposal as trying to suggest that 'shadow-dom' markup
works without lossiness when considered part of outerHTML. The notion
itself doesn't seem to me to be particularly worthy of controversy, so I
suspect the issue is around notation.

As a strawman, pretend we defined a 'shadowroot' attribute, like this:

<div shadowroot="Decoration -- <content></content> -- Decoration">
  Light DOM
</div>

Treated this way, there is no confusion about inner and outerHTML, and
there is no lossiness when serializing. A video tag, e.g.,  has no
'shadowroot' attribute on it (unless user adds one) so there is no
confusion about intrinsic and extrinsic shadow-roots.

Given the strawman, I could reframe Dimitri's idea as: the shadowroot
attribute sucks for ergonomics, what if we just use a syntax where we
(cheat and) mark up the shadowroot as if it were a child node.

If we decide that's a bad idea, then so be it, but I suggest that's a
separate argument from my claim that there can be a clean lossless mental
model for shadowroot markup.


On Wed, Apr 10, 2013 at 2:04 PM, Scott Miles sjmi...@google.com wrote:

 Well ok, that's unfortunate, I do wish you gave me more than no, that's
 bad. Here are a couple more thoughts.

 1. I suspect the loss-of-symmetry occurs because Dimitri has defined an
 element that has markup as if it is a child, but it's not actually a child.
 This is the 'wormhole' effect.

 2. My proposal is consistent and functional. I haven't heard any other
 proposal that isn't lossy or damaging to the standard methods of working
 (not saying there isn't one, I just haven't seen any outline of it yet).

 Scott


 On Wed, Apr 10, 2013 at 1:53 PM, Erik Arvidsson a...@chromium.org wrote:

 For the record I'm opposed to what you are proposing. I don't like that
 you lose the symmetry between innerHTML and outerHTML.


 On Wed, Apr 10, 2013 at 4:34 PM, Scott Miles sjmi...@google.com wrote:

 I made an attempt to describe how these things can be non-lossy here:
 https://gist.github.com/sjmiles/5358120


 On Wed, Apr 10, 2013 at 12:19 PM, Scott Miles sjmi...@google.com wrote:

 input/video would have intrinsic Shadow DOM, so it would not ever be
 part of outerHTML.

 I don't have a precise way to differentiate intrinsic Shadow DOM from
 non-intrinsic Shadow DOM, but my rule of thumb is this: 'node.outerHTML'
 should produce markup that parses back into 'node' (assuming all
 dependencies exist).


 On Wed, Apr 10, 2013 at 12:15 PM, Erik Arvidsson a...@chromium.org wrote:

 Once again, how would this work for input/video?

 Are you suggesting that `createShadowRoot` behaves different than when
 you create the shadow root using markup?


 On Wed, Apr 10, 2013 at 3:11 PM, Scott Miles sjmi...@google.com wrote:

 I think we all agree that node.innerHTML should not reveal node's
 ShadowDOM, ever.

 What I am arguing is that, if we have shadow-root element that you
 can use to install shadow DOM into an arbitrary node, like this:

 <div>
   <shadow-root>
     Decoration -- <content></content> -- Decoration
   </shadow-root>
   Light DOM
 </div>


 Then, as we agree, innerHTML is

 LightDOM


 but outerHTML would be

 <div>
   <shadow-root>
     Decoration -- <content></content> -- Decoration
   </shadow-root>
   Light DOM
 </div>


 I'm suggesting this outerHTML only for 'non-intrinsic' shadow DOM, by
 which I mean Shadow DOM that would never exist on a node unless you had
 specifically put it there (as opposed to Shadow DOM intrinsic to a
 particular element type).

 With this inner/outer rule, all serialization cases I can think of
 work in a sane fashion, no lossiness.

 Scott



 On Wed, Apr 10, 2013 at 12:05 PM, Erik Arvidsson 
 a...@chromium.org wrote:

 Maybe I'm missing something but we clearly don't want to include
 shadowroot in the general innerHTML getter case. If I implement
 input[type=range] using custom elements + shadow DOM I don't want 
 innerHTML
 to show the internal guts.


 On Wed, Apr 10, 2013 at 2:30 PM, Scott Miles sjmi...@google.com wrote:

 I don't see any reason why my document markup for some div should
 not be serializable back to how I wrote it via innerHTML. That seems 
 just
 plain bad.

 I hope you can take a look at what I'm saying about outerHTML. I
 believe at least the concept there solves all cases.



 On Wed, Apr 10, 2013 at 11:27 AM, Brian Kardell bkard...@gmail.com
  wrote:


 On Apr 10, 2013 1:24 PM, Scott Miles sjmi...@google.com wrote:
 
  So, what you quoted are thoughts I already deprecated myself in
 this thread. :)
 
  If you read a bit further, see that  I realized that
 shadow-root is really part of the 'outer html' of the node and not 
 the
 inner html.
 
 Yeah sorry, connectivity issue prevented me from seeing those
 until after i sent i guess.

   I think that is actually a feature, not a detriment and
 easily explainable.
 
  What is actually a feature? You mean that the shadow root is
 invisible to innerHTML?

Re: [webcomponents]: de-duping in HTMLImports

2013-04-09 Thread Scott Miles
Duplicate fetching is not observable, but duplicate parsing and duplicate
copies are observable.

Preventing duplicate parsing and duplicate copies allows us to use
'imports' without a secondary packaging mechanism. For example, I can load
100 components that each import 'base.html' without issue. Without this
feature, we would need to manage these dependencies somehow; either
manually, via some kind of build tool, or with a packaging system.

If import de-duping is possible, then ideally there would also be an
attribute to opt-out.
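
For illustration, the shape of the de-duping in our polyfills is roughly
this (a sketch, not the polyfill's actual code; modern fetch used for
brevity). Imports are keyed on their resolved URL, so 100 components
importing 'base.html' share one fetch and one parse:

var importRegistry = {};

function loadImport(url) {
  var key = new URL(url, document.baseURI).href; // normalize for de-duping
  if (!(key in importRegistry)) {
    importRegistry[key] = fetch(key)
      .then(function(response) { return response.text(); })
      .then(function(html) {
        return new DOMParser().parseFromString(html, 'text/html');
      });
  }
  return importRegistry[key]; // duplicates get the same parsed document
}

An opt-out attribute would simply bypass the registry lookup.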

Scott


On Tue, Apr 9, 2013 at 11:08 AM, Dimitri Glazkov dglaz...@google.com wrote:

 The trick here is to figure out whether de-duping is observable by the
 author (other than as a performance gain). If it's not, it's a
 performance optimization by a user agent. If it is, it's a spec
 feature.

 :DG

 On Tue, Apr 9, 2013 at 10:53 AM, Scott Miles sjmi...@google.com wrote:
  When writing polyfills for HTMLImports/CustomElements, we included a
  de-duping mechanism, so that the same document/script/stylesheet is not
 (1)
  fetched twice from the network and (2) parsed twice.
 
  But these features are not in specification, and are not trivial as
 design
  decisions.
 
  WDYT?
 
  Scott
 



[webcomponents]: de-duping in HTMLImports

2013-04-09 Thread Scott Miles
When writing polyfills for HTMLImports/CustomElements, we included a
de-duping mechanism, so that the same document/script/stylesheet is not (1)
fetched twice from the network and (2) parsed twice.

But these features are not in specification, and are not trivial as design
decisions.

WDYT?

Scott


Re: [webcomponents] self-documenting component.html files

2013-04-05 Thread Scott Miles
attributeChangedCallback is provided by spec; I don't believe one needs
another avenue for observing attributes.

Mapping properties to attributes is non-trivial; that's where a higher
level abstraction (toolkit or x-tags, e.g.) comes in.
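
For example (a sketch; 'x-player' and its src handling are invented for
illustration):

var proto = Object.create(HTMLElement.prototype);
proto.attributeChangedCallback = function(name, oldValue, newValue) {
  if (name === 'src') {
    this.load(newValue); // react whenever the attribute changes
  }
};
proto.load = function(src) { /* fetch and play, for example */ };
document.register('x-player', {prototype: proto});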

Scott

On Apr 5, 2013, at 11:31 AM, Travis Leithead travis.leith...@microsoft.com
wrote:

  For the attribute changes, you can use MutationObservers, unless you need
to have the values updated synchronously, in which case, you can always
fallback to Mutation Events or hook the relevant APIs with ES5
defineProperty overrides…? Generally, I think all the tools you need for
notifications are probably already available.



*From:* Mike Kamermans [mailto:niho...@gmail.com niho...@gmail.com]
*Sent:* Friday, April 5, 2013 4:51 AM
*To:* public-webapps@w3.org
*Subject:* [webcomponents] self-documenting component.html files



Hi all,



a short while back I'd been working on a web components demo, with one
result being a components.html that also acted as its own documentation
(since as a components.html anything that isn't 'more components', script,
or element, gets ignored), which sparked a small discussion on how
self-documentation might be done at all. I've been trying to think of some
way to do this while staying within the custom element specification, but I
keep ending up with needing bits that aren't in the spec. So, let me just
list what I have, and perhaps some of the bits are useful enough for
further discussion, while some other bits can be shown to be silly, with
much better alternatives. This is what I come up with if the idea is to
make a custom element as self-descriptive as possible:
https://gist.github.com/Pomax/5304557



One obvious difference is that for attributes that you actually want to do
anything with (i.e., you're creating your own custom audio element, and
setting the src should reload the data and start autoplaying or something),
you want to be able to specify the getter/setter and events that will
occur. I didn't see anything in the webcomponents/custom element specs that
would currently allow for this. I did hear from Scott Miles that some work
had already been done, and that the custom element shim now already
supports an attributeChangedCallback function to do just this thing, but
that's a bit broader than specific get/set behaviour on attributes.
Consider my gist to be some thinking out loud =)



Also, out of the discussion on fully documenting vs. docstripped
(essentially the develop vs. production question): I'd make this something
that people who deploy their components for others to use are responsible
for in the same way they are responsible for full vs. minified javascript
libraries right now. If you only put up a fully-documented components.html
you're probably inconveniencing your audience, but having it available next
to a minified version is great for when people want to look something up -
they'll know where to go by simply removing the .min part in a CND'ed
components.html URL. So as long as the minification process is easily
performed, that should be enough (so my gist also contains a description of
what minification would look like)



- Mike Pomax Kamermans


Re: [webcomponents]: Naming the Baby

2013-03-27 Thread Scott Miles
The problem I'm trying to get at is that while a 'custom element' has a
chance of meeting your criteria 1-6, the thing on the other end of link
rel='to-be-named'... has no such qualifications. As designed, the target
of this link is basically arbitrary HTML.

This is why I'm struggling with link rel='component' ...

Scott


On Wed, Mar 27, 2013 at 10:20 AM, Angelina Fabbro
angelinafab...@gmail.com wrote:

 Just going to drop this in here for discussion. Let's try and get at what
 a just a component 'is':

 A gold-standard component:

 1. Should do one thing well
 2. Should contain all the necessary code to do that one thing (HTML, JS,
 CSS)
 3. Should be modular (and thus reusable)
 4. Should be encapsulated
 5. (Bonus) Should be as small as it can be

 I think it follows, then, that a 'web component' is software that fits all
 of these criteria, but for explicit use in the browser to build web
 applications. The tools provided - shadow DOM, custom elements etc. give
 developers tools to create web components. In the case of:

 <link rel="component" href="..">

 I would (as mentioned before) call this a 'component include' as I think
 this description is pretty apt.

 It is true that widgets and components are synonymous, but that has been
 that way for a couple of years now at least already. Widgets, components,
 modules - they're all interchangeable depending on who you talk to. We've
 stuck with 'components' to describe things so far. Let's not worry about
 the synonyms. So far, the developers I've introduced to this subject
 understood implicitly that they could build widgets with this stuff, all
 the while I used the term 'components'.

 Cheers,

 - A

 On Tue, Mar 26, 2013 at 10:58 PM, Scott Miles sjmi...@google.com wrote:

 Forgive me if I'm perseverating, but do you imagine 'component' that is
 included to be generic HTML content, and maybe some scripts or some custom
 elements?

 I'm curious what is it you envision when you say 'component', to test my
 previous assertion about this word.

 Scott


 On Tue, Mar 26, 2013 at 10:46 PM, Angelina Fabbro 
 angelinafab...@gmail.com wrote:

 'Component Include'

 'Component Include' describes what the markup is doing, and I like that
 a lot. The syntax is similar to including a stylesheet or a script and so
 this name should be evocative enough for even a novice to understand what
 is implied by it.

 - Angelina


 On Tue, Mar 26, 2013 at 4:19 PM, Scott Miles sjmi...@google.com wrote:

 Fwiw, my main concern is that for my team and for lots of other people
 I communicate with, 'component' is basically synonymous with 'custom
 element'. In that context, 'component' referring to
 chunk-of-web-resources-loaded-via-link is problematic, even if it's not
 wrong, per se.

 We never complained about this before because Dimitri always wrote the
 examples as link rel=components... (note the plural). When it was
 changed to link rel=component... was when the rain began.

 Scott


 On Tue, Mar 26, 2013 at 4:08 PM, Ryan Seddon seddon.r...@gmail.com wrote:

 I like the idea of package; it seems all-encompassing, which captures the
 requirements nicely. That or perhaps resource, but then resource seems
 singular.

 Or perhaps component-package so it is obvious that it's tied to web
 components?

 -Ryan


 On Tue, Mar 26, 2013 at 6:03 AM, Dimitri Glazkov 
 dglaz...@google.com wrote:

 Hello folks!

 It seems that we've had a bit of informal feedback on the Web
 Components as the name for the link rel=component spec (cc'd some
 of the feedbackers).

 So... these malcontents are suggesting that Web Components is more a
 of a general name for all the cool things we're inventing, and link
 rel=component should be called something more specific, having to do
 with enabling modularity and facilitating component dependency
 management that it actually does.

 I recognize the problem, but I don't have a good name. And I want to
 keep moving forward. So let's come up with a good one soon? As
 outlined in
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0742.html

 Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG









Re: [webcomponents] writing some pages that use webcomponents, and blogging along the way

2013-03-27 Thread Scott Miles
This is great stuff Mike, thanks for making it available. I think we are
all #facepalm at the notion of self-documenting component files, very
clever.

 making things that use components and custom elements is proving
extremely fun =)

Music to my ears.

Scott


On Tue, Mar 26, 2013 at 11:48 AM, Mike Kamermans niho...@gmail.com wrote:

 Hey all,

 I've been playing with web components and custom elements for a bit,
 blogging about my understanding of it at
 http://pomax.nihongoresources.com/index.php?entry=1364168314 and
 writing a demo for the Mozilla webmaker dev group to see what we can
 do with them, which is hosted at
 http://pomax.github.com/WebComponentDemo/

 This demo has a stack of custom elements that all tack onto a media
 element on the page, if there is one, with two pages, one with a media
 element, the other with an image instead, but identical code outside
 of that difference, using the components defined in
 http://pomax.github.com/WebComponentDemo/webmaker-components.html

 One thing we're wondering about how to play with is self-documenting
 components. Was there already work done on this, or has anyone else
 already played with that idea? Right now we've hardcoded the
 documentation as plain HTML, trying to come up with a nice way of
 autogenerating it by having some JS that checks whether the components
 were loaded as the document itself and if so, generate the
 documentation from the element definitions, but finding a clean way
 to include a general description as well as attribute documentation is
 tricky. If anyone has good ides for doing this, I'd be delighted to
 hear from you!

 Also, if there's anything on those pages that we did wrong, or that
 can be done better, I'd also love to hear from you. These things feel
 like game-changers, and making things that use components and custom
 elements is proving extremely fun =)

 - Mike Pomax Kamermans




Re: [webcomponents]: Naming the Baby

2013-03-26 Thread Scott Miles
Fwiw, my main concern is that for my team and for lots of other people I
communicate with, 'component' is basically synonymous with 'custom
element'. In that context, 'component' referring to
chunk-of-web-resources-loaded-via-link is problematic, even if it's not
wrong, per se.

We never complained about this before because Dimitri always wrote the
examples as link rel=components... (note the plural). When it was
changed to link rel=component... was when the rain began.

Scott


On Tue, Mar 26, 2013 at 4:08 PM, Ryan Seddon seddon.r...@gmail.com wrote:

 I like the idea of package; it seems all-encompassing, which captures the
 requirements nicely. That or perhaps resource, but then resource seems
 singular.

 Or perhaps component-package so it is obvious that it's tied to web
 components?

 -Ryan


 On Tue, Mar 26, 2013 at 6:03 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Hello folks!

 It seems that we've had a bit of informal feedback on the Web
 Components as the name for the link rel=component spec (cc'd some
 of the feedbackers).

 So... these malcontents are suggesting that Web Components is more a
 of a general name for all the cool things we're inventing, and link
 rel=component should be called something more specific, having to do
 with enabling modularity and facilitating component dependency
 management that it actually does.

 I recognize the problem, but I don't have a good name. And I want to
 keep moving forward. So let's come up with a good one soon? As
 outlined in
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0742.html

 Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG





Re: [webcomponents]: Naming the Baby

2013-03-26 Thread Scott Miles
Forgive me if I'm perseverating, but do you imagine 'component' that is
included to be generic HTML content, and maybe some scripts or some custom
elements?

I'm curious what it is you envision when you say 'component', to test my
previous assertion about this word.

Scott


On Tue, Mar 26, 2013 at 10:46 PM, Angelina Fabbro
angelinafab...@gmail.com wrote:

 'Component Include'

 'Component Include' describes what the markup is doing, and I like that a
 lot. The syntax is similar to including a stylesheet or a script and so
 this name should be evocative enough for even a novice to understand what
 is implied by it.

 - Angelina


 On Tue, Mar 26, 2013 at 4:19 PM, Scott Miles sjmi...@google.com wrote:

 Fwiw, my main concern is that for my team and for lots of other people I
 communicate with, 'component' is basically synonymous with 'custom
 element'. In that context, 'component' referring to
 chunk-of-web-resources-loaded-via-link is problematic, even if it's not
 wrong, per se.

 We never complained about this before because Dimitri always wrote the
 examples as link rel=components... (note the plural). When it was
 changed to link rel=component... was when the rain began.

 Scott


 On Tue, Mar 26, 2013 at 4:08 PM, Ryan Seddon seddon.r...@gmail.com wrote:

 I like the idea of package; it seems all-encompassing, which captures the
 requirements nicely. That or perhaps resource, but then resource seems
 singular.

 Or perhaps component-package so it is obvious that it's tied to web
 components?

 -Ryan


 On Tue, Mar 26, 2013 at 6:03 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Hello folks!

 It seems that we've had a bit of informal feedback on the Web
 Components as the name for the link rel=component spec (cc'd some
 of the feedbackers).

 So... these malcontents are suggesting that Web Components is more a
 of a general name for all the cool things we're inventing, and link
 rel=component should be called something more specific, having to do
 with enabling modularity and facilitating component dependency
 management that it actually does.

 I recognize the problem, but I don't have a good name. And I want to
 keep moving forward. So let's come up with a good one soon? As
 outlined in
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0742.html

 Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG







Re: [webcomponents] Adjusting offsetParent, offsetTop, offsetLeft properties in Shadow DOM

2013-03-23 Thread Scott Miles
Sorry for the late response, this is one of those bad cases where agreement
was expressed as silence.

This is a thorny problem, but my initial reaction is that you threaded the
needle appropriately. I don't see how we avoid some lossiness in this
situation.

Scott


On Mon, Mar 18, 2013 at 1:48 AM, Dominic Cooney domin...@chromium.org wrote:

 Summary: I think the Shadow DOM spec should specify how offset* properties
 are handled around shadows. Further, I think traversable and
 non-traversable shadows should be handled uniformly. The offsetParent
 property should return the first offsetParent at the same level of shadow
 as the receiver, or the document, to maintain lower-boundary encapsulation.
 And the offset{Top, Left} properties should be accumulated across skipped
 offsetParents.

 Problem:

 It seems the consensus is that there will be two kinds of shadows, ones
 that are exposed to the page through properties such as
 HTMLElement.shadowRoot, and ones that aren't [1]. The language is emerging
 but for now I will refer to these as traversable and non-traversable
 shadows respectively.

 In both cases, there's a question of how to handle HTMLElement.offset*
  properties, particularly offsetParent. [2]

 Let's talk about a specific example:

 <div id="a">
   <div id="b"></div>
 </div>

 {#a's ShadowRoot}
 <div id="c" style="position: relative; left: 10px;">
   <div id="d">
     <content></content>
   </div>
 </div>

 In this case, the positioned ancestor of #b is #c. What should the result
 of b.offsetParent be?

 If the ShadowRoot is not traversable it is clear that b.offsetParent
 should NOT be c. If it were, it would be very difficult to use
 non-traversable shadows that don't accidentally leak an internal node.
 (Especially when you consider that c could be a pseudo-element, and the
 author could set position: relative on the element that way.)

 Discussion:

 I think the offset{Parent, Top, Left} properties should be adjusted. This
 means that in the above example, b.offsetParent would be body and
 b.offsetLeft would be silently adjusted to accumulate an offset of 10px
 from c. I think this makes sense because typical uses of offsetParent and
 offsetLeft, etc. are used to calculate the position of one element in the
 coordinate space of another element, and adjusting these properties to work
 this way will mean code that naively implements this use case will continue
 to work.
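
 For instance, the common pattern this keeps working (sketch):

 function pageOffset(el) {
   var left = 0, top = 0;
   for (var e = el; e; e = e.offsetParent) {
     left += e.offsetLeft;
     top += e.offsetTop;
   }
   return {left: left, top: top};
 }
 // With the adjusted properties, pageOffset(b) still accounts for #c's
 // 10px, even though #c itself is never exposed.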

 This behavior is unfortunately slightly lossy: if the author had a reference to #c and
 wanted to calculate the position of #b in the coordinate space of #c, they
 will need to do some calculation to work it out via body. But presumably
 a script of this nature is aware of the existence of Shadow DOM.

 The question of what to do for offset* properties across a shadow boundary
 when the shadow *is* traversable is a vexing one. In this case there is no
 node disclosed that you could not find anyway using .shadowRoot, etc. tree
 walking. From that point of view it seems acceptable for offsetParent to
 return an offsetParent inside the (traversable) shadow.

 On the other hand, this violates the lower-boundary encapsulation of the
 Shadow DOM spec. This means that pages that are using traversable shadows,
 but relying on convention (ie don't use new properties like .shadowRoot)
 to get the encapsulation benefits of Shadow DOM, now have to audit the
 offsetParent property. It also means you need to have two ways of dealing
 with offsetParent in both user agents and author scripts. So for simplicity
 and consistency I think it makes sense to treat both traversable and
 non-traversable shadows uniformly.

 Dominic

 [1] Thread starts here: 
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0535.html
 [2] http://www.w3.org/TR/cssom-view/#offset-attributes


 http://goto.google.com/dc-email-sla



Re: [webcomponents] calling JS on custom element construction

2013-03-20 Thread Scott Miles
Sorry for the extra email, but I realize we didn't discuss 'constructor'.

Most user-agents today cannot construct an HTML element via a vanilla
constructor. For example,

new HTMLDivElement()
 TypeError: Illegal constructor

The problem is that element construction code typically does 'under the
hood' bindings. There are efforts afoot to bring browser internals closer
to JavaScript, but we are not there yet.

For this reason, the 'custom elements' work in general has real problems
with constructors. Ideally we want to be able to register a vanilla
JavaScript class with a tag name and be done with it, but it cannot be so.

In the meantime, the custom elements systems all _produce_ a constructor
for you, and the best you can do is define the prototype and do your
initialization tasks in special callbacks (aka _readyCallback_ as in my
previous email).

For example, you will see usage of document.register like this:

  XFoo = document.register("x-foo", {prototype: XFooPrototype});

Notice that the XFoo constructor is emitted by register, you don't get to
supply one.

Similarly, when you supply constructor=MyCustomElement attribute to
element, you are asking the system to emit the generated constructor into
a variable named MyCustomElement.
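
In other words, the declarative form desugars to something like this (a
sketch; names illustrative):

// <element name="x-foo" constructor="MyCustomElement"> ... </element>
// is roughly equivalent to:
window.MyCustomElement = document.register('x-foo', {
  prototype: Object.create(HTMLElement.prototype)
});
var el = new MyCustomElement(); // the generated constructor works with new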

There is more to talk about on this subject but I feel like I've been
long-winded already. Follow ups appreciated.

P.S. XFooPrototype shown above must extend HTMLElement.prototype (or a
descendent).

P.P.S. You can get fancy and do stuff like this, but you have to be careful

// potential foot-gun, will not actually be our constructor
XFoo = function() {
  this.textContent = "I'm an XFoo!";
};
XFoo.prototype = Object.create(HTMLElement.prototype);
XFoo.prototype.readyCallback = XFoo; // tricky! pretend we are using our ctor
// almost what we want, except we have to capture new XFoo on the left-hand-side
XFoo = document.register("x-foo", XFoo);




On Wed, Mar 20, 2013 at 10:35 AM, Scott Miles sjmi...@google.com wrote:

 The answer depends a bit on the particular implementation of
 HTMLElementElement (aka element) that you are using. The spec is behind
 the various discussions on this topic, so the implementations vary.

 Our main polyfill https://github.com/toolkitchen/CustomElements for 
 HTMLElementElement
 adds a method called *register* to HTMLElementElement that allows you to
 specify a custom prototype for your custom element. If you add a *
 readyCallback* method on that prototype, it will be called when your
 custom element is instanced.

 For example,

   <element name="x-foo">
     <script>
       this.register({
         prototype: {
           readyCallback: function() {
             this.textContent = 'Hello World';
           }
         }
       });
     </script>
   </element>

 Note that in this version of the polyfill template is not instanced for
 you at all, so you in fact need to do that yourself in your readyCallback.
 Specifically,

   <element name="x-foo">
     <template>
       Hello World
     </template>
     <script>
       var template = this.querySelector("template");
       this.register({
         prototype: {
           readyCallback: function() {
             // YMMV depending on your platform's interpretation of <template>
             this.innerHTML = template.innerHTML;
           }
         }
       });
     </script>
   </element>

 If you want to be free of all such worries, you can try the higher level
  code under the toolkit repository (https://github.com/toolkitchen/toolkit),
 but it's even more bleeding edge.

 HTH,
 Scott


 On Wed, Mar 20, 2013 at 9:46 AM, Mike Kamermans niho...@gmail.com wrote:

 Hey all,

 still playing with web components, I was wondering if there's a way to
 make a custom element trigger JS whenever an element is built either
 through JS calls or when used in the DOM. From the spec, combined with
 the toolkitchen polyfills, I can see that the element's script block
 runs once, when the element gets defined, so I figured I could use an
 explicit constructor instead and make things work that way:


 var MyCustomElement = function() { console.log("element built"); }

 <element name="my-custom-element" constructor="MyCustomElement">
   <template>
     <content></content>
   </template>
 </element>

 but this does not appear to call the MyCustomElement constructor when
 the element is built through the DOM by virtue of just being used on
 a page, and when called on the console with new MyCustomElement();
 I get the error TypeError: Object #<HTMLDivElement> has no method
 'instantiate'... If I use MyCustomElement.prototype = new
 HTMLDivElement() to try to set up a sensible prototype chain, I just
 get the error Uncaught TypeError: Illegal constructor.

 What's the recommended way to set up a custom element that runs some
 JS or calls a JS function any time such an element is built for use on
 the page?

 - Mike Pomax Kamermans





Re: [webcomponents]: Re-imagining shadow root as Element

2013-03-18 Thread Scott Miles
I'm already on the record with A, but I have a question about 'lossiness'.

With my web developer hat on, I wonder why I can't say:

<div id="foo">
  <shadowroot>
    shadow stuff
  </shadowroot>

  light stuff

</div>


and then have the value of #foo.innerHTML still be

  <shadowroot>
    shadow stuff
  </shadowroot>

  light stuff

I understand that for DOM, there is a wormhole there and the reality of
what this means is new and frightening; but as a developer it seems to be
perfectly fine as a mental model.

We web devs like to grossly oversimplify things. :)

Scott

On Mon, Mar 18, 2013 at 1:53 PM, Dimitri Glazkov dglaz...@google.com wrote:

 Last Friday, still energized after the productive Mozilla/Google
 meeting, a few of us (cc'd) dug into Shadow DOM. And boy, did that go
 south quickly! But let's start from the top.

 We puzzled over the the similarity of two seemingly disconnected problems:

 a) ShadowRoot is a DocumentFragment and not an Element, and
 b) there is no declarative way to specify shadow trees.

 The former is well-known (see

 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/thread.html#msg356
 ).

 The latter came into view very early as a philosophical problem
 (provide declarative syntax for new imperative APIs) and much later as
 a practical problem: many modern apps use a freeze-drying
 performance technique where they load as-rendered HTML content of a
 page on immediately (so that the user sees content immediately), and
 only later re-hydrate it with script. With shadow DOM, the lack of
 declarative syntax means that the content will not appear
 as-rendered until the script starts running, thus ruining the whole
 point of freeze-drying.

 We intentionally stayed away from the arguments like well, with
 custom elements, all of this happens without script. We did this
 precisely because we wanted to understand what all of this happens
 actually means.

 Trapped between these two problems, we caved in and birthed a new
 element. Let's call it shadowroot (Second Annual Naming Contest
 begins in 3.. 2.. ).

 This element _is_ the ShadowRoot. It's deliciously strange. When you
 do div.appendChild(document.createElement('shadowroot')), the DOM:

 0) opens a magic wormhole to the land of rainbows and unicorns (aka
 the Gates of Hell)
 1) adds shadowroot at the top of div's shadow tree stack

 This behavior has three implications:

 i) You can now have detached ShadowRoots. This is mostly harmless. In
 fact, being able to prepare ShadowRoot instances before adding them to
 a host seems like a good thing.

 ii) ShadowRoot never appears as a child of an element. This is desired
 original behavior.

 iii) Parsing HTML with shadowroot in it results in loss of data when
 round-tripping. This is hard to swallow, but one can explain it as a
 distinction between two trees: a document tree and a composed tree.
 When you invoke innerHTML, you get a document tree. When you invoke
 (yet to be invented) innerComposedHTML, you get composed tree.
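
 Sketch of that distinction (values illustrative), given markup
 <div id=host><shadowroot>shadow stuff</shadowroot>light stuff</div>:

 host.innerHTML;         // 'light stuff' -- document tree; the shadowroot
                         // was yanked into the shadow stack at parse time
 host.innerComposedHTML; // 'shadow stuff' -- the composed (rendered) tree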

 Alternatively, we could just make appendChild/insertBefore/etc. throw
 and make special rules for shadowroot in HTML parser.

 Pros:

 * The shadow root is now an Element with localName and defined DOM behavior
 * There's now a way to declare shadow trees in HTML
 * Just like DocumentFragment, neatly solves the problem of root being
 inserted in a tree somewhere

 Cons:

 * We're messing with how appendChild/insertBefore work

 What do you folks think?

 A. This is brilliant, I love it
 B. You have made your last mistake, RELEASE THE KRAKEN!
 C. I tried reading this, but Firefly reruns were on
 D. ___

 :DG



Re: [webcomponents]: Re-imagining shadow root as Element

2013-03-18 Thread Scott Miles
Ok, well obviously, there are times when you don't want the shadowroot to
be in innerHTML, so I was correct that I was grossly oversimplifying. I
guess this is where the second kind of innerHTML accessor comes in. Sorry!

It's still A though. :)


On Mon, Mar 18, 2013 at 2:05 PM, Scott Miles sjmi...@google.com wrote:

 I'm already on the record with A, but I have a question about 'lossiness'.

 With my web developer hat on, I wonder why I can't say:

 <div id="foo">
   <shadowroot>
     shadow stuff
   </shadowroot>

   light stuff

 </div>


 and then have the value of #foo.innerHTML still be

   <shadowroot>
     shadow stuff
   </shadowroot>

   light stuff

 I understand that for DOM, there is a wormhole there and the reality of
 what this means is new and frightening; but as a developer it seems to be
 perfectly fine as a mental model.

 We web devs like to grossly oversimplify things. :)

 Scott

 On Mon, Mar 18, 2013 at 1:53 PM, Dimitri Glazkov dglaz...@google.com wrote:

 Last Friday, still energized after the productive Mozilla/Google
 meeting, a few of us (cc'd) dug into Shadow DOM. And boy, did that go
 south quickly! But let's start from the top.

 We puzzled over the the similarity of two seemingly disconnected problems:

 a) ShadowRoot is a DocumentFragment and not an Element, and
 b) there is no declarative way to specify shadow trees.

 The former is well-known (see

 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/thread.html#msg356
 ).

 The latter came into view very early as a philosophical problem
 (provide declarative syntax for new imperative APIs) and much later as
 a practical problem: many modern apps use a freeze-drying
 performance technique where they load as-rendered HTML content of a
 page on immediately (so that the user sees content immediately), and
 only later re-hydrate it with script. With shadow DOM, the lack of
 declarative syntax means that the content will not appear
 as-rendered until the script starts running, thus ruining the whole
 point of freeze-drying.

 We intentionally stayed away from the arguments like well, with
 custom elements, all of this happens without script. We did this
 precisely because we wanted to understand what all of this happens
 actually means.

 Trapped between these two problems, we caved in and birthed a new
 element. Let's call it shadowroot (Second Annual Naming Contest
 begins in 3.. 2.. ).

 This element _is_ the ShadowRoot. It's deliciously strange. When you
 do div.appendChild(document.createElement('shadowroot')), the DOM:

 0) opens a magic wormhole to the land of rainbows and unicorns (aka
 the Gates of Hell)
 1) adds shadowroot at the top of div's shadow tree stack

 This behavior has three implications:

 i) You can now have detached ShadowRoots. This is mostly harmless. In
 fact, being able to prepare ShadowRoot instances before adding them to
 a host seems like a good thing.

 ii) ShadowRoot never appears as a child of an element. This is desired
 original behavior.

 iii) Parsing HTML with shadowroot in it results in loss of data when
 round-tripping. This is hard to swallow, but one can explain it as a
 distinction between two trees: a document tree and a composed tree.
 When you invoke innerHTML, you get a document tree. When you invoke
 (yet to be invented) innerComposedHTML, you get composed tree.

 Alternatively, we could just make appendChild/insertBefore/etc. throw
 and make special rules for shadowroot in HTML parser.

 Pros:

 * The shadow root is now an Element with localName and defined DOM
 behavior
 * There's now a way to declare shadow trees in HTML
 * Just like DocumentFragment, neatly solves the problem of root being
 inserted in a tree somewhere

 Cons:

 * We're messing with how appendChild/insertBefore work

 What do you folks think?

 A. This is brilliant, I love it
 B. You have made your last mistake, RELEASE THE KRAKEN!
 C. I tried reading this, but Firefly reruns were on
 D. ___

 :DG





Re: [webcomponents]: Re-imagining shadow root as Element

2013-03-18 Thread Scott Miles
Sorry if I'm clobbering this thread, I promise to stop after this, but I
solved my own mental model. Namely, I decided to treat shadowroot like
outerHTML.

If I define (pseudo):

<div id="A">
  <shadowroot>
    <span id="B">
      <shadowroot>
        ...

Then A.innerHTML == <span id="B"><shadowroot>...

I don't see A's shadowroot, because it's really part of its outer-ness.
It's part of what makes A, it's not part of A's content.

Now I can send A's innerHTML to B with no problem. Or roundtrip A's content
with no problem.

I realize I've broken several standard laws, but in any event it seems
consistent with itself.



On Mon, Mar 18, 2013 at 2:08 PM, Scott Miles sjmi...@google.com wrote:

 Ok, well obviously, there are times when you don't want the shadowroot
 to be in innerHTML, so I was correct that I was grossly oversimplifying. I
 guess this is where the second kind of innerHTML accessor comes in. Sorry!

 It's still A though. :)


 On Mon, Mar 18, 2013 at 2:05 PM, Scott Miles sjmi...@google.com wrote:

 I'm already on the record with A, but I have a question about 'lossiness'.

 With my web developer hat on, I wonder why I can't say:

 <div id="foo">
   <shadowroot>
     shadow stuff
   </shadowroot>

   light stuff

 </div>


 and then have the value of #foo.innerHTML still be

   <shadowroot>
     shadow stuff
   </shadowroot>

   light stuff

 I understand that for DOM, there is a wormhole there and the reality of
 what this means is new and frightening; but as a developer it seems to be
 perfectly fine as a mental model.

 We web devs like to grossly oversimplify things. :)

 Scott

 On Mon, Mar 18, 2013 at 1:53 PM, Dimitri Glazkov dglaz...@google.com wrote:

 Last Friday, still energized after the productive Mozilla/Google
 meeting, a few of us (cc'd) dug into Shadow DOM. And boy, did that go
 south quickly! But let's start from the top.

 We puzzled over the the similarity of two seemingly disconnected
 problems:

 a) ShadowRoot is a DocumentFragment and not an Element, and
 b) there is no declarative way to specify shadow trees.

 The former is well-known (see

 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/thread.html#msg356
 ).

 The latter came into view very early as a philosophical problem
 (provide declarative syntax for new imperative APIs) and much later as
 a practical problem: many modern apps use a freeze-drying
 performance technique where they load as-rendered HTML content of a
 page on immediately (so that the user sees content immediately), and
 only later re-hydrate it with script. With shadow DOM, the lack of
 declarative syntax means that the content will not appear
 as-rendered until the script starts running, thus ruining the whole
 point of freeze-drying.

 We intentionally stayed away from the arguments like well, with
 custom elements, all of this happens without script. We did this
 precisely because we wanted to understand what all of this happens
 actually means.

 Trapped between these two problems, we caved in and birthed a new
 element. Let's call it shadowroot (Second Annual Naming Contest
 begins in 3.. 2.. ).

 This element _is_ the ShadowRoot. It's deliciously strange. When you
 do div.appendChild(document.createElement('shadowroot')), the DOM:

 0) opens a magic wormhole to the land of rainbows and unicorns (aka
 the Gates of Hell)
 1) adds shadowroot at the top of div's shadow tree stack

 This behavior has three implications:

 i) You can now have detached ShadowRoots. This is mostly harmless. In
 fact, being able to prepare ShadowRoot instances before adding them to
 a host seems like a good thing.

 ii) ShadowRoot never appears as a child of an element. This is desired
 original behavior.

 iii) Parsing HTML with shadowroot in it results in loss of data when
 round-tripping. This is hard to swallow, but one can explain it as a
 distinction between two trees: a document tree and a composed tree.
 When you invoke innerHTML, you get a document tree. When you invoke
 (yet to be invented) innerComposedHTML, you get composed tree.

 Alternatively, we could just make appendChild/insertBefore/etc. throw
 and make special rules for shadowroot in HTML parser.

 Pros:

 * The shadow root is now an Element with localName and defined DOM
 behavior
 * There's now a way to declare shadow trees in HTML
 * Just like DocumentFragment, neatly solves the problem of root being
 inserted in a tree somewhere

 Cons:

 * We're messing with how appendChild/insertBefore work

 What do you folks think?

 A. This is brilliant, I love it
 B. You have made your last mistake, RELEASE THE KRAKEN!
 C. I tried reading this, but Firefly reruns were on
 D. ___

 :DG






Re: [webcomponents]: Making link rel=components produce DocumentFragments

2013-03-13 Thread Scott Miles
Developers will absolutely concat components together, often an entire
app's worth. They will also use them separately. This flexibility is one of
the great strengths of this simple concept.

As Dimitri mentioned, Web Components solves a great many of the loader
issues (both at development and production time) that currently plague
developers and have a fragmented solution space.

Fwiw I posted a bug to address some of the script and load-order questions:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=21229

Scott


On Wed, Mar 13, 2013 at 8:08 PM, Dominic Cooney domin...@google.com wrote:




 On Thu, Mar 14, 2013 at 5:14 AM, Dimitri Glazkov dglaz...@google.com wrote:




 On Tue, Mar 12, 2013 at 10:20 PM, Dominic Cooney domin...@google.com wrote:

 On Tue, Mar 12, 2013 at 8:13 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Hi folks!

 Just had a quick discussion with Elliott and he suggested that instead
 of building full-blown Documents, the link rel=components just makes
 DocumentFragments, just like template does.


 I am confused by what you are proposing here.

 Templates produce document fragments in the sense that the
 HTMLTemplateElement's content attribute is a DocumentFragment.

 On the other hand, templates use full-blown documents in the sense that
 the template contents owner is a document which does not have a browsing
 context. 
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/templates/index.html#definitions
 


 Looking at
 http://www.whatwg.org/specs/web-apps/current-work/multipage/dom.html#the-document-object
  and
 the bunch of APIs that will never be useful on a document without a
 browsing context, I think it's a pretty good idea. It should make also
 components more lightweight.


 If you are proposing to make the Component interface's content attribute
 a DocumentFragment, I think that is OK. It does not make the components any
 lighter, because component.content.ownerDocument will inevitably point to
 that other document.

 Could you provide a more specific proposal? I don't understand what
 you're proposing here.


 Just like in HTML templates, you don't have to have a distinct document
 for each component. They could share the same document. In fact, it might
 be nice for both templates and components to all share the same document.


 Now I understand.

 The concerns about resource resolution are tricky ones and having a
 separate document sounds straighforward.

 What is the plan for inline scripts in the linked file? Is it possible for
 components to have a top-level script which does some setup? If so,
 running script in a document fragment seems a bit weird.

 I think this in part depends on whether you think web apps will crunch
 their components into a few files or not, because the benefit of a shared
 document is limited if there are fewer components. There are reasons for
 developers to crunch components (minimize latency as with CSS spriting,
 script concatenating, and stylesheet merging) or not (it is an extra step;
 lazy loading wants multiple files anyway; SPDY will ameliorate the latency
 problem anyway; loading components from separate third-party sites).

 Dominic

 --
 Email SLA http://goto.google.com/dc-email-sla • Google+ https://plus.sandbox.google.com/111762620242974506845/posts
 Google+https://plus.sandbox.google.com/111762620242974506845/posts



Re: [webcomponents]: First stab at the Web Components spec

2013-03-11 Thread Scott Miles
My issue is that the target of this link will not in general be an atomic
thing like a 'component' or a 'module'. It's a carrier for resources and
links to other resources. IMO this is one of the great strengths of this
proposal.

For this reason, when it was rel=components (plural) there was no problem
for me.

Having said all that, I'm not particularly up in arms about this issue. The
name will bend to the object in the fullness of time. :)

S


On Mon, Mar 11, 2013 at 3:35 PM, Elliott Sprehn espr...@gmail.com wrote:


 On Mon, Mar 11, 2013 at 2:45 PM, Philip Walton phi...@philipwalton.com wrote:

 Personally, I had no objection to rel=component. It's similar in
 usage to rel=stylesheet in the fact that it's descriptive of what you're
 linking to.

 On the other hand, rel=include is very broad. It could just as easily
 apply to a stylesheet as a Web component, and may limit the usefulness of
 the term if/when future rel values are introduced.

 (p.s. I'm new to this list and haven't read through all the previous
 discussions on Web components. Feel free to disregard this comment if I'm
 rehashing old topics)



 +1, I like rel=component, document or include doesn't make sense.

 - E



Re: [webcomponents]: HTMLElementElement missing a primitive

2013-03-08 Thread Scott Miles
Mostly it's cognitive dissonance. It will be easy to trip over the fact
that both things involve a user-supplied prototype, but they are required
to be critically different objects.

Also it's hard for me to justify why this difference should exist. If the
idea is that element provides extra convenience, then why not make the
imperative form convenient? If it's important to be able to do your own
prototype marshaling, then won't this feature be missed in declarative form?

I'm wary of defanging the declarative form completely. But I guess I want
to break it down first before we build it up, if that makes any sense.

Scott



On Fri, Mar 8, 2013 at 9:55 AM, Erik Arvidsson a...@chromium.org wrote:

 If you have a tag name it is easy to get the prototype.

 var tmp = elementElement.ownerDocument.createElement(tagName);
 var prototype = Object.getPrototypeOf(tmp);

 On Fri, Mar 8, 2013 at 12:16 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Thu, Mar 7, 2013 at 2:35 PM, Scott Miles sjmi...@google.com wrote:
  Currently, if I document.register something, it's my job to supply a
  complete prototype.
 
  For HTMLElementElement on the other hand, I supply a tag name to
 extend, and
  the prototype containing the extensions, and the system works out the
  complete prototype.
 
  However, this ability of HTMLElementElement to construct a complete
  prototype from a tag-name is not provided by any imperative API.
 
  As I see it, there are three main choices:
 
  1. HTMLElementElement is recast as a declarative form of
 document.register,
  in which case it would have no 'extends' attribute, and you need to make
  your own (complete) prototype.
 
  2. We make a new API for 'construct prototype from a tag-name to extend
 and
  a set of extensions'.
 
  3. Make document.register work like HTMLElementElement does now (it
 takes a
  tag-name and partial prototype).
 
  4. Let declarative syntax be a superset of the imperative API.
 
  Can you help me understand why you feel that imperative and
  declarative approaches must mirror each other exactly?
 
  :DG
 



 --
 erik
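
A sketch of what option 2 might look like, building on the snippet above (the
helper name createExtendedPrototype is hypothetical, not a proposed API):

    // Hypothetical helper: build a complete prototype from a tag name
    // to extend plus a partial set of extensions.
    function createExtendedPrototype(tagName, extensions) {
      var tmp = document.createElement(tagName);
      var proto = Object.create(Object.getPrototypeOf(tmp));
      Object.getOwnPropertyNames(extensions).forEach(function(name) {
        Object.defineProperty(proto, name,
            Object.getOwnPropertyDescriptor(extensions, name));
      });
      return proto;
    }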



Re: [webcomponents]: HTMLElementElement missing a primitive

2013-03-08 Thread Scott Miles
I also want to keep ES6 classes in mind. Presumably in declarative form I
declare my class as if it extends nothing. Will 'super' still work in that
case?

Scott


On Fri, Mar 8, 2013 at 11:40 AM, Scott Miles sjmi...@google.com wrote:

 Mostly it's cognitive dissonance. It will be easy to trip over the fact
 that both things involve a user-supplied prototype, but they are required
 to be critically different objects.

 Also it's hard for me to justify why this difference should exist. If the
 idea is that element provides extra convenience, then why not make the
 imperative form convenient? If it's important to be able to do your own
 prototype marshaling, then won't this feature be missed in declarative form?

 I'm wary of defanging the declarative form completely. But I guess I want
 to break it down first before we build it up, if that makes any sense.

 Scott



 On Fri, Mar 8, 2013 at 9:55 AM, Erik Arvidsson a...@chromium.org wrote:

 If you have a tag name it is easy to get the prototype.

 var tmp = elementElement.ownerDocument.createElement(tagName);
 var prototype = Object.getPrototypeOf(tmp);

 On Fri, Mar 8, 2013 at 12:16 PM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  On Thu, Mar 7, 2013 at 2:35 PM, Scott Miles sjmi...@google.com wrote:
  Currently, if I document.register something, it's my job to supply a
  complete prototype.
 
  For HTMLElementElement on the other hand, I supply a tag name to
 extend, and
  the prototype containing the extensions, and the system works out the
  complete prototype.
 
  However, this ability of HTMLElementElement to construct a complete
  prototype from a tag-name is not provided by any imperative API.
 
  As I see it, there are three main choices:
 
  1. HTMLElementElement is recast as a declarative form of
 document.register,
  in which case it would have no 'extends' attribute, and you need to
 make
  your own (complete) prototype.
 
  2. We make a new API for 'construct prototype from a tag-name to
 extend and
  a set of extensions'.
 
  3. Make document.register work like HTMLElementElement does now (it
 takes a
  tag-name and partial prototype).
 
  4. Let declarative syntax be a superset of the imperative API.
 
  Can you help me understand why you feel that imperative and
  declarative approaches must mirror each other exactly?
 
  :DG
 



 --
 erik





Re: [webcomponents]: Custom element constructors are pinocchios

2013-03-08 Thread Scott Miles
IMO, there is no benefit to 'real' constructors other than technical
purity, which is no joke, but I hate to use that as a club to beat users
with.

This is strictly anecdotal, but I've played tricks with 'constructor'
before (in old Dojo) and there was much hand-wringing about it, but to my
knowledge there was never even one bug report (insert grain-of-salt here).

The main thing is to try to make sure 'instanceof' is sane.


On Fri, Mar 8, 2013 at 11:27 AM, Dimitri Glazkov dglaz...@google.comwrote:

 As I started work on the components spec, I realized something terrible:

 a) even if all HTML parsers could run script at any point when
 constructing tree, and

 b) even if all JS engines supported overriding [[Construct]] internal
 method on Function,

 c) we still can't make custom element constructors run exactly at the
 time of creating an element in all cases,

 d) unless we bring back element upgrade.

 Here's why:

 i) when we load component document, it blocks scripts just like a
 stylesheet (
 http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts
 )

 ii) this is okay, since our constructors are generated (no user code)
 and most of the tree could be constructed while the component is
 loaded.

 iii) However, if we make constructors run at the time of tree
 construction, the tree construction gets blocked much sooner, which
 effectively makes component loading synchronous. Which is bad.

 I see two ways out of this conundrum:

 1) Give up on custom element constructors ever meeting the Blue Fairy
 and becoming real boys, thus making them equivalent to readyCallback

 Pros:
 * Now that readyCallback and constructor are the same thing, we could
 probably avoid a dual-path API in document.register

 Cons:
 * constructors are not real (for example, when a constructor runs, the
 element is already in the tree, with all of the attributes set), so
 there is no pure instantiation phase for an element

 2) resurrect element upgrade

 Pros:
 * constructors are real

 Cons:
 * rejiggering document tree during upgrades will probably eat all (and
 then some!) performance benefits of asynchronous load

 WDYT?

 :DG
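
For concreteness, a sketch of option 1, where the callback stands in for a
real constructor (document.register and readyCallback shapes as discussed in
this thread; still in flux):

    var proto = Object.create(HTMLElement.prototype);
    proto.readyCallback = function() {
      // By the time this runs, the element can already be in the tree,
      // with attributes, children, and a parent -- the observable
      // difference from a real constructor.
      console.log(this.parentNode, this.attributes.length);
    };
    document.register('x-pinocchio', { prototype: proto });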



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-03-08 Thread Scott Miles
Fwiw, I'm still following this thread, but so far Scott G. has been saying
everything I would say (good on ya, brother :P).

 My understanding is that you have to explicitly ask to walk into the
shadow, so this wouldn't happen accidentally. Can someone please confirm or
deny this? 

Confirmed. The encapsulation barriers are there to prevent you from
stumbling into shadow.


On Fri, Mar 8, 2013 at 12:14 PM, Scott González scott.gonza...@gmail.comwrote:

 On Fri, Mar 8, 2013 at 12:03 AM, Bronislav Klučka 
 bronislav.klu...@bauglir.com wrote:

 On 7.3.2013 19:54, Scott González wrote:

 Who is killing anything?

 Hi, given
  http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0676.html
 I've misunderstood your point as advocating against Shadow altogether.


 Ok, good to know that this was mostly just a miscommunication.



 2nd is is practical: not having to care about the internals, so I do not
 break it by accident from outside. If the only way to work with internals
 is by explicit request for internals and then working with them, but
 without the ability to breach the barrier accidentally, without the
 explicit request directly on the shadow host, this concern is satisfied and
 yes, there will be no clashes except for control naming.


 My understanding is that you have to explicitly ask to walk into the
 shadow, so this wouldn't happen accidentally. Can someone please confirm or
 deny this?
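
To make the explicit step concrete, a sketch (the shadowRoot property name is
per the traversability proposal; illustrative only):

    var host = document.querySelector('x-widget');
    host.firstChild;            // ordinary traversal sees only light DOM
    var root = host.shadowRoot; // walking into shadow takes this explicit step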



Re: [webcomponents]: First stab at the Web Components spec

2013-03-08 Thread Scott Miles
Agree. Seems like Dimitri and Anne decided that these targets are
'document', did they not?

Scott


On Fri, Mar 8, 2013 at 1:12 PM, Bronislav Klučka 
bronislav.klu...@bauglir.com wrote:

 hi
 let's apply KISS here
 how about just
 rel=document
 or
 rel=htmldocument

 Brona


 On 8.3.2013 22:05, Dimitri Glazkov wrote:

 On Fri, Mar 8, 2013 at 12:30 PM, Steve Orvell sorv...@google.com wrote:

 Indeed. Unfortunately, using 'module' here could be confusing wrt ES6
 modules. Perhaps package is better?

 The name is difficult. My main point is that using components causes
 unnecessary confusion.

 I understand. Welcome to the 2013 Annual Naming Contest/bikeshed. Rules:

 1) must reflect the intent and convey the meaning.
 2) link type and name of the spec must match.
 3) no biting.

 :DG







[webcomponents]: HTMLElementElement missing a primitive

2013-03-07 Thread Scott Miles
Currently, if I document.register something, it's my job to supply a
complete prototype.

For HTMLElementElement on the other hand, I supply a tag name to extend,
and the prototype containing the extensions, and the system works out the
complete prototype.

However, this ability of HTMLElementElement to construct a complete
prototype from a tag-name is not provided by any imperative API.

As I see it, there are three main choices:

1. HTMLElementElement is recast as a declarative form of document.register,
in which case it would have no 'extends' attribute, and you need to make
your own (complete) prototype.

2. We make a new API for 'construct prototype from a tag-name to extend and
a set of extensions'.

3. Make document.register work like HTMLElementElement does now (it takes a
tag-name and partial prototype).

Am I making sense? WDYT?

Scott


Re: [webcomponents]: Moving custom element callbacks to prototype/instance

2013-03-06 Thread Scott Miles
I favor #2. It's much simpler. Simple is good.

Fwiw, I'm filtering these things through the idea that someday we will be
able to do:

document.register(x-foo, XFoo);

That's the ultimate goal IMO, and when I channel Alex Russell (without
permission). =P

Scott


On Wed, Mar 6, 2013 at 1:55 PM, Dimitri Glazkov dglaz...@google.com wrote:

 A few of browser/webdev folks got together and went (again!) over the
 custom elements design. One problem stuck out: handling of created
 callbacks (and other future callbacks, by induction) for derived
 custom elements.

 For example, if Raj defined a create callback for his foo-raj
 element, and Lucy later extended foo-raj to make a foo-lucy
 element. As spec'd today, Lucy has no obvious way of invoking Raj's
 create callback, other than Raj and Lucy coming up with some
 convention on how to collect and pass these callbacks.

 Rather than watch developers come up with multiple, subtly different
 such conventions, how can we, the browserfolk help? A couple of ideas:

 1) Somehow magically chain create callbacks. In Lucy's case,
 foo-lucy will call both Raj's and Lucy's callbacks.

 Pros:
 * Magic is exciting
 * Callbacks are tucked away safely in their own object, unexposed to
 the consumer of custom elements.

 Cons:
 * Magic of calling callbacks can't be controlled by the author. If
 Lucy wants to override Raj's callback (or call it in the middle of her
 callback), she can't.
 * We're somewhat reinventing either prototype inheritance or event
 listener model just for these callbacks.

 2) Get rid of a separate lifecycle object and just put the callbacks
 on the prototype object, similar to printCallback
 (
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Jan/0259.html
 )

 Pros:
 * We make prototype inheritance do the work for us. Lucy can do
 whatevs with Raj's callback.
 * No magic, no special callback interface.

 Cons:
 * The callbacks now hang out in the wind as prototype members. Foolish
 people can invoke them, inspectors show them, etc.

 I am leaning toward the second solution, but wanted to get your opinions.

 :DG
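
A sketch of how option 2 plays out for the Raj/Lucy example (element names
from the example above; the document.register shape is assumed):

    // Raj's element:
    var RajProto = Object.create(HTMLElement.prototype);
    RajProto.createdCallback = function() { /* Raj's setup */ };
    document.register('foo-raj', { prototype: RajProto });

    // Lucy extends it; plain prototype inheritance lets her call,
    // wrap, or skip Raj's callback as she sees fit:
    var LucyProto = Object.create(RajProto);
    LucyProto.createdCallback = function() {
      RajProto.createdCallback.call(this);
      /* Lucy's setup */
    };
    document.register('foo-lucy', { prototype: LucyProto });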



Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-02-25 Thread Scott Miles
I agree with Tab 100% on this.

You cannot accidentally stumble into ShadowDOM. You have to actively take
that step.

For one thing, I suggest that most of the time, the component code is
shipping w/your application, you are not depending on some resource that
will simply be upgraded out from under you.

For another thing, if I decide it's necessary to monkey-patch some
third-party code that's I'm using in my application, I'm generally pretty
upset if that code is privatized. It makes that unpleasant work much
harder. I need to ship ASAP, and maintenance concerns are secondary.

Either way the last thing I'm going to do is wily-nily update that code and
then blame the developer that my monkey-patch broke. Yes, someone could
complain in that scenario, but they have no leg to stand on.

Boris says the above has been a big problem at Mozilla. This confuses me.
Do developers not know that monkey-patching clearly private code is bad
for their maintenance? I don't see how this can be the library vendor's
problem (unless maybe it's an auto-update situation).

I suppose there is a moral hazard argument: if we make it possible, people
will overdo it. This is probably true, but IMO it's akin to saying chefs
should only use butter knives because they could cut themselves on the
sharp kind.

Lastly, only a subset of possible upgrades actually are transparent, not
affecting public API or behavior. Intersect that set of updates with
monkey-patchers who can't live without the update, and you are talking
about a relatively small affected class.

Scott


On Mon, Feb 25, 2013 at 9:54 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/25/13 12:38 PM, Tab Atkins Jr. wrote:

 Still, though, the private by default impulse is nearly always
 wrong


 That's an interesting claim.  Do you think that C++ classes should be
 public by default?  (Binary patching that can mess even with private
 members notwithstanding for now)


  and contrary to a lot of patterns on the web


 This is at least partly a historical artifact of two things:

 1)  The web was not originally designed for serious application
 development.

 2)  There is no way to do private by default right now, really.  There are
 some things you can try to do with closures and whatnot, but the shared
 global makes even those not exactly private.


  the current status quo, where shadow DOM is hidden from everything
 unless you're explicitly looking for it, is necessary for *lots* of
 useful and completely benign things.


 I think we may have different definitions of benign...


  If you want high integrity (not security - this is a much broader
 concept), it's expensive.  This is always true, because low-integrity
 things are *useful*, and people often try to reach for high-integrity
 without thinking through its downsides.


 I can assure you that I have thought through the downsides of
 high-integrity and low-integrity components, both.  Furthermore, we at
 Mozilla have a  lot of implementation experience with the low-integrity
 version.  It's been a constant battle against people monkeypatching things
 in ways that totally fail if you change the implementation at all, and I'm
 not sure why we should impose such a battle on component developers by
 default.

 -Boris




Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-02-25 Thread Scott Miles
Don't we have a situation where people can simply take your source and
change it regardless (barring legal imperatives, which are orthogonal in my
view)?

Given Boris' arguments, Big Marketshare can simply always mess up his
project and blame me and it's my fault. I don't accept it.

Btw, If Big Marketshare is so powerful, why haven't we already fixed
whatever thing he is monkey patching?

Also, Optimizely, et al, doesn't simply appear at random. Again, seems like
your argument is that some developer or user may take wanton step X to
break my stuff, and I must prevent it or it's my fault.

re: forced depend on where they got their tomatoes from and You cannot
accidentally stumble into ShadowDOM

The reason the latter keeps being mentioned is because of statements like
the former. Nobody is forcing anybody to break encapsulation. Seems to me
the demarcation is very clear.

 And all I have to do is to check the points where my app touches the
controls

Yes, transparent upgrades are great. No argument here. But If you had
monkey-patched your libraries, you wouldn't have this ability. You didn't,
so life is good.

  Can we go with options when creating Shadow dom?

My understanding is that we have this option and are only talking about
what would be the default.

Lastly, my point about upgrade statistics is only that the intersection of
the two sets is generally going to be smaller than the union of them. I
should not have qualified that difference. To be clear, the intersection I
posed included monkey-patchers that require the update, not simply
monkey-patchers.



On Mon, Feb 25, 2013 at 10:37 AM, Bronislav Klučka 
bronislav.klu...@bauglir.com wrote:


 On 25.2.2013 19:15, Scott Miles wrote:

 I agree with Tab 100% on this.

 You cannot accidentally stumble into ShadowDOM. You have to actively take
 that step.

  Sure, someone can actively take a step to access my shadow DOM, though I
  explicitly made it shadow, and in the next version of the control things
  will break.



 For one thing, I suggest that most of the time, the component code is
 shipping w/your application, you are not depending on some resource that
 will simply be upgraded out from under you.

 Sure, but as a desktop programmer I cannot tell how many times over the
 last decade I have upgraded my applications including 3rd party controls...
  And all I have to do is to check the points where my app touches the
 controls... Not caring about the rest, because the rest cannot be broken
 (well, can, under extreme circumstances)



 For another thing, if I decide it's necessary to monkey-patch some
 third-party code that's I'm using in my application, I'm generally pretty
 upset if that code is privatized. It makes that unpleasant work much
 harder. I need to ship ASAP, and maintenance concerns are secondary.

  Assuming of course you can legally do that... privacy of a 3rd party
  control has nothing to do with monkey-patching if you have the code... sure
  you cannot do that from outside of the control, but that makes no difference
  (the problem with a private clause is inheritance; protected is a better
  choice)


 I suppose there is a moral hazard argument: if we make it possible,
 people will overdo it. This is probably true, but IMO it's akin to saying
 chefs should only use butter knives because they could cut themselves on
 the sharp kind.

 Again, do we have to go with one choice here? Either or? Can we go with
 options when creating Shadow dom?



 Lastly, only a subset of possible upgrades actually are transparent, not
 affecting public API or behavior. Intersect that set of updates with
 monkey-patchers who can't live without the update, and you are talking
 about a relatively small affected class.

  Well.. your upgrades maybe...


 Scott


 B.




Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-02-25 Thread Scott Miles
  Can we go with options when creating Shadow dom?

 My understanding is that we have this option and are only talking about
what would be the default.

Bronislav correctly pointed out to me that this is a fact not in evidence.
We have discussed 'isolate' option, but it's not in any spec that we can
find (yet).
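
The shape under discussion would presumably be an option bag at creation
time; the following is purely illustrative, since as noted no 'isolate'
option exists in any spec:

    var host = document.querySelector('x-widget');
    var openRoot = host.createShadowRoot();
    var isolatedRoot = host.createShadowRoot({ isolate: true });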


On Mon, Feb 25, 2013 at 10:52 AM, Scott Miles sjmi...@google.com wrote:

 Don't we have a situation where people can simply take your source and
 change it regardless (barring legal imperatives, which are orthogonal in my
 view)?

 Given Boris' arguments, Big Marketshare can simply always mess up his
 project and blame me and it's my fault. I don't accept it.

 Btw, If Big Marketshare is so powerful, why haven't we already fixed
 whatever thing he is monkey patching?

 Also, Optimizely, et al, doesn't simply appear at random. Again, seems
  like your argument is that some developer or user may take wanton step X to
 break my stuff, and I must prevent it or it's my fault.

 re: forced depend on where they got their tomatoes from and You cannot
 accidentally stumble into ShadowDOM

 The reason the latter keeps being mentioned is because of statements like
 the former. Nobody is forcing anybody to break encapsulation. Seems to me
 the demarcation is very clear.


  And all I have to do is to check the points where my app touches the
 controls

 Yes, transparent upgrades are great. No argument here. But If you had
 monkey-patched your libraries, you wouldn't have this ability. You didn't,
 so life is good.

   Can we go with options when creating Shadow dom?

 My understanding is that we have this option and are only talking about
 what would be the default.

 Lastly, my point about upgrade statistics is only that the intersection of
 the two sets is generally going to be smaller than the union of them. I
 should not have qualified that difference. To be clear, the intersection I
 posed included monkey-patchers that require the update, not simply
 monkey-patchers.



 On Mon, Feb 25, 2013 at 10:37 AM, Bronislav Klučka 
 bronislav.klu...@bauglir.com wrote:


 On 25.2.2013 19:15, Scott Miles wrote:

 I agree with Tab 100% on this.

 You cannot accidentally stumble into ShadowDOM. You have to actively
 take that step.

  Sure, someone can actively take a step to access my shadow DOM, though I
  explicitly made it shadow, and in the next version of the control things
  will break.



 For one thing, I suggest that most of the time, the component code is
 shipping w/your application, you are not depending on some resource that
 will simply be upgraded out from under you.

 Sure, but as a desktop programmer I cannot tell how many times over the
 last decade I have upgraded my applications including 3rd party controls...
  And all I have to do is to check the points where my app touches the
 controls... Not caring about the rest, because the rest cannot be broken
 (well, can, under extreme circumstances)



 For another thing, if I decide it's necessary to monkey-patch some
 third-party code that's I'm using in my application, I'm generally pretty
 upset if that code is privatized. It makes that unpleasant work much
 harder. I need to ship ASAP, and maintenance concerns are secondary.

  Assuming of course you can legally do that... privacy of a 3rd party
  control has nothing to do with monkey-patching if you have the code... sure
  you cannot do that from outside of the control, but that makes no difference
  (the problem with a private clause is inheritance; protected is a better
  choice)


 I suppose there is a moral hazard argument: if we make it possible,
 people will overdo it. This is probably true, but IMO it's akin to saying
 chefs should only use butter knives because they could cut themselves on
 the sharp kind.

 Again, do we have to go with one choice here? Either or? Can we go with
 options when creating Shadow dom?



 Lastly, only a subset of possible upgrades actually are transparent, not
 affecting public API or behavior. Intersect that set of updates with
 monkey-patchers who can't live without the update, and you are talking
 about a relatively small affected class.

  Well.. your upgrades maybe...


 Scott


 B.





Re: [webcomponents]: Making Shadow DOM Subtrees Traversable

2013-02-25 Thread Scott Miles
On Mon, Feb 25, 2013 at 11:30 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/25/13 1:52 PM, Scott Miles wrote:

 Given Boris' arguments, Big Marketshare can simply always mess up his
 project and blame me and it's my fault.


 Scott,

 That's how it often works in the court of public opinion, yes.

 Your employer is not immune to this behavior.


  I don't accept it.


 That's nice.  So what?


  Btw, If Big Marketshare is so powerful, why haven't we already fixed
 whatever thing he is monkey patching?


 Because he hasn't bothered to tell us about it; just monkeypatched and
 shipped (not least because he didn't want to wait for us to fix it). Again,
 your employer is not immune to this behavior.


The good part is that in this forum I get to argue my own opinion, which I
would say is that of a (single) web developer.



  Also, Optimizely, et al, doesn't simply appear at random.


 Sure.  They get included by the page, but the page may not realize what
 all they then go and mess with.


  Again, seems
 like your argument is that some developer or user may take wanton stet X
 to break my stuff, and I must prevent it or it's my fault.


 I think you're trying to paint this black-or-white in a way that seems
 more about arguing strawmen than addressing the problem.

 When something breaks in app A due to a change in component B, the problem
 could be fixed in B, in A, neither, or both.

 What happens in practice typically depends on the specifics of the change
 and the specifics of who A and B are, what contracts they have signed, and
 how much market power they have.

 You may not like this.  _I_ don't like it.  But it's reality.


Ironically, I was trying to argue that these things are on a spectrum and
that it is in fact not black and white. Often the argument is, with
isolation, maintenance is free! and the alternative is chaos. Seems like
we both agree this is not true.




  re: forced depend on where they got their tomatoes from and You
 cannot accidentally stumble into ShadowDOM

 The reason the latter keeps being mentioned is because of statements
 like the former. Nobody is forcing anybody to break encapsulation. Seems
 to me the demarcation is very clear.


 My point is that people will break encapsulation without being forced to.
  A lot.  At least that's what my implementation experience with XBL leads
 me to believe.


This is the moral hazard argument, which is completely worth discussing.
Because it's about human nature, I believe there is no objective right
answer, but my position as a developer is that I'm annoyed when tools
prevent me from doing something I need to do because somebody else might
hurt themselves doing it.



  Lastly, my point about upgrade statistics is only that the intersection
 of the two sets is generally going to be smaller than the union of them.


 Sure.

 -Boris



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-20 Thread Scott Miles
Since many of these cases are 'semantic' elements, whose only raison d'être
(afaik) is having a particular localName, I'm not sure how we get around
this without being able to specify an 'extends' option.

document.register('fancy-header', {
  prototype: FancyHeaderPrototype,
  extends: 'header'
...



On Wed, Feb 20, 2013 at 9:54 AM, Dimitri Glazkov dglaz...@google.comwrote:

 It seems that there's some additional reasoning that needs to go into
 whether an element could be constructed as custom tag. Like in this
 case, it should work both as a custom tag and as a type extension (the
 is attr).

 :DG

 On Tue, Feb 19, 2013 at 10:13 PM, Daniel Buchner dan...@mozilla.com
 wrote:
  Nope, you're 100% right, I saw header and thought HTMLHeadingElement for
  some reason - so this seems like a valid concern. What are the
  mitigation/solution options we can present to developers for this case?
 
 
  Daniel J. Buchner
  Product Manager, Developer Ecosystem
  Mozilla Corporation
 
 
  On Tue, Feb 19, 2013 at 9:17 PM, Scott Miles sjmi...@google.com wrote:
 
  Perhaps I'm making a mistake, but there is no specific prototype for the
  native header element. 'header', 'footer', 'section', e.g., are all
  HTMLElement, so all I can do is
 
  FancyHeaderPrototype = Object.create(HTMLElement.prototype);
 
  Afaict, the 'headerness' cannot be expressed this way.
 
 
  On Tue, Feb 19, 2013 at 8:34 PM, Daniel Buchner dan...@mozilla.com
  wrote:
 
  Wait a sec, perhaps I've missed something, but in your example you
 never
  extend the actual native header element, was that on purpose? I was
 under
  the impression you still needed to inherit from it in the prototype
  creation/registration phase, is that not true?
 
  On Feb 19, 2013 8:26 PM, Scott Miles sjmi...@google.com wrote:
 
  Question: if I do
 
  FancyHeaderPrototype = Object.create(HTMLElement.prototype);
  document.register('fancy-header', {
prototype: FancyHeaderPrototype
  ...
 
  In this case, I intend to extend header. I expect my custom elements
  to look like header is=fancy-header, but how does the system know
 what
  localName to use? I believe the notion was that the localName would be
  inferred from the prototype, but there are various semantic tags that
 share
  prototypes, so it seems ambiguous in these cases.
 
  S
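
A fuller version of the registration sketched at the top of this thread, for
reference (the 'extends' option and constructor return are per the proposal
above; the appendChild usage is assumed):

    var FancyHeaderPrototype = Object.create(HTMLElement.prototype);
    var FancyHeader = document.register('fancy-header', {
      prototype: FancyHeaderPrototype,
      extends: 'header'
    });
    document.body.appendChild(new FancyHeader());
    // intended serialization: <header is="fancy-header">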



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-20 Thread Scott Miles
[I messed up and failed to reply-all a few messages back, see the quoted
text to pick up context]

 semantic is only important in markup

Hrm, ok. I'll have to think about that.

At any rate, I'm concerned that developers will not be able to predict what
kind of node they will get from a constructor. We had a rule that you get
one kind of node for 'custom' elements and another for extensions of known
elements. But now it's more complicated.

Scott

On Wed, Feb 20, 2013 at 10:39 AM, Dimitri Glazkov dglaz...@google.comwrote:

 On Wed, Feb 20, 2013 at 10:34 AM, Scott Miles sjmi...@google.com wrote:
  var FancyHeader = document.register('fancy-header', {prototype:
  FancyHeaderPrototype});
  document.appendChild(new FancyHeader());
 
  what I expect in my document:
 
  !-- better have localName 'header', because I specifically want to
  communicate that semantic --
  header is=fancy-header

 But semantic is only important in markup? If you're building this
 imperatively, there's really no semantics anymore. You're in a DOM
 tree.

 Now, a valid question would be: what if I wanted to serialize this DOM
 tree in a certain way? I don't have an answer to that.

 :DG



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-19 Thread Scott Miles
 I'd be a much happier camper if I didn't have to think about handling
different return values.

I agree, and If it were up to me, there would be just one API for
document.register.

However, the argument given for dividing the API is that it is improper to
have a function return a value that is only important on some platforms. If
that's the winning argument, then isn't it pathological to make the 'non
constructor-returning API' return a constructor?


On Mon, Feb 18, 2013 at 12:59 PM, Daniel Buchner dan...@mozilla.com wrote:

 I agree with your approach on staging the two specs for this, but the last
 part about returning a constructor in one circumstance and undefined in the
 other is something developers would rather not deal with (in my
 observation). If I'm a downstream consumer or library author who's going to
 wrap this function (or any function for that matter), I'd be a much happier
 camper if I didn't have to think about handling different return values. Is
 there a clear harm in returning a constructor reliably that would make us
 want to diverge from an expected and reliable return value? It seems to me
 that the unexpected return value will be far more annoying than a little
 less mental separation between the two invocation setups.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Mon, Feb 18, 2013 at 12:47 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Fri, Feb 15, 2013 at 8:42 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  I'm not sure I buy the idea that two ways of doing the same thing does
 not
  seem like a good approach - the web platform's imperative and
 declarative
  duality is, by nature, two-way. Having two methods or an option that
 takes
  multiple input types is not an empirical negative, you may argue it is
 an
  ugly pattern, but that is largely subjective.

 For what it's worth, I totally agree with Anne that two-prong API is a
 huge wart and I feel shame for proposing it. But I would rather feel
 shame than waiting for Godot.

 
  Is this an accurate summary of what we're looking at for possible
 solutions?
  If so, can we at least get a decision on whether or not _this_ route is
  acceptable?
 
  FOO_CONSTRUCTOR = document.register('x-foo', {
    prototype: ELEMENT_PROTOTYPE,
    lifecycle: {
      created: CALLBACK
    }
  });

 I will spec this first.

 
  FOO_CONSTRUCTOR = document.register('x-foo', {
    constructor: FOO_CONSTRUCTOR
  });
 

 When we have implementers who can handle it, I'll spec that.

 Eventually, we'll work to deprecate the first approach.

 One thing that Scott suggested recently is that the second API variant
 always returns undefined, to better separate the two APIs and their
 usage patterns.

 :DG





Re: [webcomponents]: Building HTML elements with custom elements

2013-02-19 Thread Scott Miles
I think you captured it well, thank you for distillation.

Perhaps one other COST of the localName issue is the question of
document.createElement.

document.createElement('x-button') creates button is='x-button', people
complain because the tag names do not match.
document.createElement('button').setAttribute('is', 'x-button') doesn't
work this way; 'is' is not a standard attribute (according to me).
document.createElement('button', 'x-button'), now I cannot encode my tag in
a single variable (i.e. document.createElement(someTag))
document.createElement('button/x-button'), I just made this up, but maybe
it would work.

Scott


On Tue, Feb 19, 2013 at 3:52 PM, Dimitri Glazkov dglaz...@chromium.orgwrote:

 Hi folks!

 Since the very early ages of Web Components, one of the use cases was
 implementing built-in HTML elements
 (
 http://www.w3.org/2008/webapps/wiki/Component_Model_Use_Cases#Built-in_HTML_Elements
 ).

 So, I spent a bit of time today trying to understand how our progress
 with custom elements aligns with that kooky idea of explaining the
 magic in the Web platform with existing primitives.

 Here are the three things where we've found problems and ended up with
 compromises. I don't think any of those are critically bad, but it's
 worth enumerating them here:

 1) For custom elements, the [[Construct]] internal method creates a
 platform object (https://www.w3.org/Bugs/Public/show_bug.cgi?id=20831)
 and eventually, this [[Construct]] special behavior disappears --
 that's when an HTML element becomes nothing more than just a JS
 object.

 PROBLEM: This is a lot of work for at least one JS engine to support
 overriding [[Construct]] method, and can't happen within a reasonable
 timeframe.

 COMPROMISE: Specify an API that produces a generated constructor
 (which creates a proper platform object), then later introduce the API
 that simply changes the [[Construct]] method, then deprecate the
 generated constructor API.

 COST: We may never get to the deprecation part, stuck with two
 slightly different API patterns for document.register.

 2) Custom element constructor runs at the time of parsing HTML, as the
 tree is constructed.

 PROBLEM: Several implementers let me know that allowing to run JS
 while parsing HTML is not something they can accommodate in a
 reasonable timeframe.

 COMPROMISE: Turn constructor into a callback, which runs in a
 microtask at some later time (like upon encountering /script).

 COST:  Constructing an element when building a tree != createElement.
 Also, there's an observable difference between the callback and the
 constructor. Since the constructor runs before element is inserted
 into a tree, it will not have any children or the parent. At the time
 the callback is invoked, the element will already be in the tree--and
 thus have children and the parent.

 3) Since the elements could derive from other existing elements, the
 localName should not be used for determining custom element's type
 (https://www.w3.org/Bugs/Public/show_bug.cgi?id=20913)

 PROBLEM: The localName checks are everywhere, from C++ code to
 extensions, to author code, and a lot of things will break if a custom
 element that is, for example, an HTMLButtonElement does not have
 localName of button. Addressing this issue head on seems
 intractable.

 COMPROMISE: Only allow custom tag syntax for elements that do not
 inherit from existing HTML or SVG elements.

 COST:  Existing HTML elements are forever stuck in type-extension
 world (
 https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/custom/index.html#dfn-type-extension
 ),
 which seems like another bit of magic.

 I think I got them all, but I could have missed things. Please look
 over and make noise if stuff looks wrong.

 :DG
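
Putting compromise 3 in concrete terms, a sketch of the type-extension world
described above (the two-argument createElement form is the one floated
earlier in this digest; all shapes are provisional):

    // Custom tag syntax only for elements that do not derive from
    // existing HTML or SVG elements:
    document.register('x-gizmo', {
      prototype: Object.create(HTMLElement.prototype)
    });

    // Extensions of built-ins stay in type-extension form:
    document.register('x-button', {
      prototype: Object.create(HTMLButtonElement.prototype)
    });
    var b = document.createElement('button', 'x-button');
    // serializes as <button is="x-button">, not <x-button>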




Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-19 Thread Scott Miles
Question: if I do

FancyHeaderPrototype = Object.create(HTMLElement.prototype);
document.register('fancy-header', {
  prototype: FancyHeaderPrototype
...

In this case, I intend to extend header. I expect my custom elements to
look like header is=fancy-header, but how does the system know what
localName to use? I believe the notion was that the localName would be
inferred from the prototype, but there are various semantic tags that share
prototypes, so it seems ambiguous in these cases.

S


On Tue, Feb 19, 2013 at 1:01 PM, Daniel Buchner dan...@mozilla.com wrote:

 What is the harm in returning the same constructor that is being input for
 this form of invocation? The output constructor is simply a pass-through of
 the input constructor, right?

  FOO_CONSTRUCTOR = document.register('x-foo', {
    constructor: FOO_CONSTRUCTOR
  });

 I guess this isn't a big deal though, I'll certainly defer to you all on
 the best course :)

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Tue, Feb 19, 2013 at 12:51 PM, Scott Miles sjmi...@google.com wrote:

  I'd be a much happier camper if I didn't have to think about handling
 different return values.

 I agree, and If it were up to me, there would be just one API for
 document.register.

 However, the argument given for dividing the API is that it is improper
 to have a function return a value that is only important on some platforms. 
 If
 that's the winning argument, then isn't it pathological to make the 'non
 constructor-returning API' return a constructor?


 On Mon, Feb 18, 2013 at 12:59 PM, Daniel Buchner dan...@mozilla.comwrote:

 I agree with your approach on staging the two specs for this, but the
 last part about returning a constructor in one circumstance and undefined
 in the other is something developers would rather not deal with (in my
 observation). If I'm a downstream consumer or library author who's going to
 wrap this function (or any function for that matter), I'd be a much happier
 camper if I didn't have to think about handling different return values. Is
 there a clear harm in returning a constructor reliably that would make us
 want to diverge from an expected and reliable return value? It seems to me
 that the unexpected return value will be far more annoying than a little
 less mental separation between the two invocation setups.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Mon, Feb 18, 2013 at 12:47 PM, Dimitri Glazkov 
 dglaz...@google.comwrote:

 On Fri, Feb 15, 2013 at 8:42 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  I'm not sure I buy the idea that two ways of doing the same thing
 does not
  seem like a good approach - the web platform's imperative and
 declarative
  duality is, by nature, two-way. Having two methods or an option that
 takes
  multiple input types is not an empirical negative, you may argue it
 is an
  ugly pattern, but that is largely subjective.

 For what it's worth, I totally agree with Anne that two-prong API is a
 huge wart and I feel shame for proposing it. But I would rather feel
 shame than waiting for Godot.

 
  Is this an accurate summary of what we're looking at for possible
 solutions?
  If so, can we at least get a decision on whether or not _this_ route
 is
  acceptable?
 
  FOO_CONSTRUCTOR = document.register('x-foo', {
    prototype: ELEMENT_PROTOTYPE,
    lifecycle: {
      created: CALLBACK
    }
  });

 I will spec this first.

 
  FOO_CONSTRUCTOR = document.register('x-foo', {
    constructor: FOO_CONSTRUCTOR
  });
 

 When we have implementers who can handle it, I'll spec that.

 Eventually, we'll work to deprecate the first approach.

 One thing that Scott suggested recently is that the second API variant
 always returns undefined, to better separate the two APIs and their
 usage patterns.

 :DG







Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-19 Thread Scott Miles
Perhaps I'm making a mistake, but there is no specific prototype for the
native header element. 'header', 'footer', 'section', e.g., are all
HTMLElement, so all I can do is

FancyHeaderPrototype = Object.create(HTMLElement.prototype);

Afaict, the 'headerness' cannot be expressed this way.


On Tue, Feb 19, 2013 at 8:34 PM, Daniel Buchner dan...@mozilla.com wrote:

 Wait a sec, perhaps I've missed something, but in your example you never
 extend the actual native header element, was that on purpose? I was under
 the impression you still needed to inherit from it in the prototype
 creation/registration phase, is that not true?
 On Feb 19, 2013 8:26 PM, Scott Miles sjmi...@google.com wrote:

 Question: if I do

 FancyHeaderPrototype = Object.create(HTMLElement.prototype);
 document.register('fancy-header', {
   prototype: FancyHeaderPrototype
 ...

 In this case, I intend to extend header. I expect my custom elements to
 look like header is=fancy-header, but how does the system know what
 localName to use? I believe the notion was that the localName would be
 inferred from the prototype, but there are various semantic tags that share
 prototypes, so it seems ambiguous in these cases.

 S


 On Tue, Feb 19, 2013 at 1:01 PM, Daniel Buchner dan...@mozilla.comwrote:

 What is the harm in returning the same constructor that is being input
 for this form of invocation? The output constructor is simply a
 pass-through of the input constructor, right?

  FOO_CONSTRUCTOR = document.register('x-foo', {
    constructor: FOO_CONSTRUCTOR
  });

 I guess this isn't a big deal though, I'll certainly defer to you all on
 the best course :)

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Tue, Feb 19, 2013 at 12:51 PM, Scott Miles sjmi...@google.comwrote:

  I'd be a much happier camper if I didn't have to think about
 handling different return values.

 I agree, and If it were up to me, there would be just one API for
 document.register.

 However, the argument given for dividing the API is that it is improper
 to have a function return a value that is only important on some 
 platforms. If
 that's the winning argument, then isn't it pathological to make the 'non
 constructor-returning API' return a constructor?


 On Mon, Feb 18, 2013 at 12:59 PM, Daniel Buchner dan...@mozilla.comwrote:

 I agree with your approach on staging the two specs for this, but the
 last part about returning a constructor in one circumstance and undefined
 in the other is something developers would rather not deal with (in my
 observation). If I'm a downstream consumer or library author who's going 
 to
 wrap this function (or any function for that matter), I'd be a much 
 happier
 camper if I didn't have to think about handling different return values. 
 Is
 there a clear harm in returning a constructor reliably that would make us
 want to diverge from an expected and reliable return value? It seems to me
 that the unexpected return value will be far more annoying than a little
 less mental separation between the two invocation setups.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Mon, Feb 18, 2013 at 12:47 PM, Dimitri Glazkov dglaz...@google.com
  wrote:

 On Fri, Feb 15, 2013 at 8:42 AM, Daniel Buchner dan...@mozilla.com
 wrote:
  I'm not sure I buy the idea that two ways of doing the same thing
 does not
  seem like a good approach - the web platform's imperative and
 declarative
  duality is, by nature, two-way. Having two methods or an option
 that takes
  multiple input types is not an empirical negative, you may argue it
 is an
  ugly pattern, but that is largely subjective.

 For what it's worth, I totally agree with Anne that two-prong API is a
 huge wart and I feel shame for proposing it. But I would rather feel
 shame than waiting for Godot.

 
  Is this an accurate summary of what we're looking at for possible
 solutions?
  If so, can we at least get a decision on whether or not _this_
 route is
  acceptable?
 
  FOO_CONSTRUCTOR = document.register('x-foo', {
    prototype: ELEMENT_PROTOTYPE,
    lifecycle: {
      created: CALLBACK
    }
  });

 I will spec this first.

 
  FOO_CONSTRUCTOR = document.register('x-foo', {
    constructor: FOO_CONSTRUCTOR
  });
 

 When we have implementers who can handle it, I'll spec that.

 Eventually, we'll work to deprecate the first approach.

 One thing that Scott suggested recently is that the second API variant
 always returns undefined, to better separate the two APIs and their
 usage patterns.

 :DG








Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Scott Miles
MyButton = document.register('x-button', {
  prototype: MyButton.prototype,
  lifecycle: {
    created: MyButton
  }
});

What's the benefit of allowing this syntax? I don't immediately see why you
couldn't just do it the other way.


On Thu, Feb 14, 2013 at 2:21 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Thu, Feb 14, 2013 at 5:15 PM, Erik Arvidsson a...@chromium.org wrote:

 Yeah, this post does not really talk about syntax. It comes after a
 discussion how we could use ES6 class syntax.

 The ES6 classes have the same semantics as provided in this thread using
 ES5.

 On Thu, Feb 14, 2013 at 5:10 PM, Rick Waldron waldron.r...@gmail.comwrote:


 On Thu, Feb 14, 2013 at 4:48 PM, Dimitri Glazkov dglaz...@google.comwrote:


 MyButton = document.register('x-button', {
   prototype: MyButton.prototype,
   lifecycle: {
     created: MyButton
   }
 });



 Does this actually mean that the second argument has a property called
 prototype that itself has a special meaning?


 This is just a dictionary.



 Is the re-assignment MyButton intentional? I see the original MyButton
 reference as the value of the created property, but then
 document.register's return value is assigned to the same identifier? Maybe
 this was a typo?


 document.register('x-button', {
   constructor: MyButton,
   ...
 });


 Same question as above, but re: constructor?


 Same answer here.

 I'm not happy with these names but I can't think of anything better.


 Fair enough, I trust your judgement here. Thanks for the follow up—always
 appreciated.

 Rick


 --
 erik





Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Scott Miles
Developers cannot call HTMLButtonElement, so whatever work it represents MUST
be done by the browser.

Perhaps the browser doesn't call that exact function, but in any event,
neither does any user code.

Note that we are specifically taking about built ins, not custom
constructors.

S


On Thu, Feb 14, 2013 at 2:45 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Thu, Feb 14, 2013 at 2:40 PM, Scott Miles sjmi...@google.com wrote:
  In all constructions the *actual* calling of HTMLButtonElement is done by
  the browser.

 No, this is not correct. It's the exact opposite :)

 In this compromise proposal, the browser isn't calling any of the
 constructors. Arv pointed out that since the invention of [[Create]]
 override, we don't really need them anyway -- they never do anything
 useful for existing HTML elements.

 For your custom elements, I can totally see your library/framework
 having a convention of calling the super constructor.

  I did confuse matters by not putting in the invocation of
  HTMLButtonElement.call.

 :DG



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Scott Miles
On Thu, Feb 14, 2013 at 2:48 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Thu, Feb 14, 2013 at 2:23 PM, Scott Miles sjmi...@google.com wrote:
  MyButton = document.register('x-button', {
    prototype: MyButton.prototype,
    lifecycle: {
      created: MyButton
    }
  });
 
  What's the benefit of allowing this syntax? I don't immediately see why
 you
  couldn't just do it the other way.

 Daniel answered the direct question, I think,


I must have missed that.


 but let me see if I
 understand the question hiding behind your question :)

 Why can't we just have one API, since these two are so close already?
 In other words, can we not just use constructor API and return a
 generated constructor?

 Do I get a cookie? :)

 :DG


Well, yes, here ya go: (o). But I must be missing something. You wouldn't
propose two APIs if they were equivalent, and I don't see how these are not
(in any meaningful way).


Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Scott Miles
Ok. Since you showed both returning constructors, I just assumed in both
cases the returned constructor would be different, if required by platform.

I guess my attitude is to say: always write it like this, MyThing =
document.register(...), because depending on your runtime scenario it may
return a different function.

Yes, it's not ideal, but then there is only one way to write it.


On Thu, Feb 14, 2013 at 3:16 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Thu, Feb 14, 2013 at 2:53 PM, Scott Miles sjmi...@google.com wrote:

  Well, yes, here ya go: (o). But I must be missing something. You wouldn't
  propose two APIs if they were equivalent, and I don't see how these are
 not
  (in any meaningful way).

 The only difference is that one spits out a generated constructor, and
 the other just returns a constructor unmodified (well, not in a
 detectable way). My thinking was that if we have both be one and the
 same API, we would have:

 1) problems writing specification in an interoperable way (if you can
 override [[Construct]] function, then do this...)

 2) problems with authors seeing different effects of the API on each
 browser (in Webcko, I get the same object as I passed in, maybe I
 don't need the return value, oh wait, why does it fail in Gekit?)

 Am I worrying about this too much?

 :DG
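
The single pattern being recommended above, as a sketch (the element name and
prototype are hypothetical):

    // Always capture the return value: depending on the runtime it may
    // be a generated constructor rather than the function you passed in.
    var MyThingPrototype = Object.create(HTMLElement.prototype);
    var MyThing = document.register('x-thing', { prototype: MyThingPrototype });
    document.body.appendChild(new MyThing());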



Re: Custom elements ES6/ES5 syntax compromise, was: document.register and ES6

2013-02-14 Thread Scott Miles
Is saying just do this and it will always work not good enough?

That part I'm not getting.


On Thu, Feb 14, 2013 at 3:30 PM, Daniel Buchner dan...@mozilla.com wrote:

  No, I believe this is *precisely* the thing to worry about - these nits
  and catch-case gotchas are the sort of things developers see in an emerging
  API/polyfill and say aw, that looks like a fractured, uncertain hassle,
  I'll just wait until it is native in all browsers -- we must avoid this
  at all costs; the web needs this *now*.

 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Thu, Feb 14, 2013 at 3:16 PM, Dimitri Glazkov dglaz...@google.comwrote:

 On Thu, Feb 14, 2013 at 2:53 PM, Scott Miles sjmi...@google.com wrote:

  Well, yes, here ya go: (o). But I must be missing something. You
 wouldn't
  propose two APIs if they were equivalent, and I don't see how these are
 not
  (in any meaningful way).

 The only difference is that one spits out a generated constructor, and
 the other just returns a constructor unmodified (well, not in a
 detectable way). My thinking was that if we have both be one and the
 same API, we would have:

 1) problems writing specification in an interoperable way (if you can
 override [[Construct]] function, then do this...)

 2) problems with authors seeing different effects of the API on each
 browser (in Webcko, I get the same object as I passed in, maybe I
 don't need the return value, oh wait, why does it fail in Gekit?)

 Am I worrying about this too much?

 :DG





Re: document.register and ES6

2013-02-08 Thread Scott Miles
The idea is supposed to be that 1 and 3 are only stopgaps until we get
'what we want'. In the future when you can derive a DOM element directly,
both bits of extra code can fall away. Was that clear? Does it change
anything in your mind?

If we go with 2, I believe it means nobody will ever use a custom element
without having to load a helper library first to make the nasty syntax go
away, which seems less than ideal. I donno, I'm not 100% either way.

Scott




On Fri, Feb 8, 2013 at 7:46 AM, Erik Arvidsson a...@chromium.org wrote:

 On Thu, Feb 7, 2013 at 11:51 PM, Scott Miles sjmi...@google.com wrote:

  P.P.S. Arv, do you have a preference from my three versions (or none of
 the
  above)?

 I prefer number 2. This is what we want for ES6 anyway. Both 1 and 3
 makes me have to repeat myself.

 --
 erik


