Re: [HTML Imports]: Sync, async, -ish?

2013-11-27 Thread John J Barton
I just can't help thinking this whole line of reasoning is all too
complicated to achieve wide adoption and thus impact.

The supposed power of declarative languages is the ability to reason from top
to bottom. Creating all of these exceptions causes the very problems being
discussed: FOUC occurs because HTML Import runs async even though it looks
like it is sync. Then we patch that up with e.g. elements and paint.

On the other hand, JS has allowed very sophisticated application loading to
be implemented. If the async HTML Import were done with JS, and if we added
(if needed) rendering control support to JS, then we would allow high-function
sites complete control of the loading sequence.

I think we should be asking: what can we do to have the best chance that
most sites will show reasonable default content while loading on mobile
networks? A complex solution with confusing order of operations is fine
for some sites; let them do it in JS. A declarative solution where default
content appears before high-function content seems more likely to succeed
for the rest. A complex declarative solution seems like the worst of both.
HTH,
jjb


On Wed, Nov 27, 2013 at 11:50 AM, Daniel Buchner dan...@mozilla.com wrote:

 Right on Dimitri, I couldn't agree more. It seems like an involved (but
 highly beneficial) pursuit - but heck, maybe we'll find an answer quickly,
 let's give it a shot!

 Alex, I completely agree that declarative features should play a huge role
 in the solution, and I love the power/granularity you're alluding to in
 your proposal. WARNING: the following may be completely lol-batshit-crazy,
 so be nice! (remember, I'm not really a CS person...I occasionally play
 one on TV). What if we created something like this:

  <head>
    <paint policy="blocking">  <!-- non-blocking would be the default policy -->
      <link rel="import" href="first-load-components.html" />
      <script>
        // Some script here that is required for initial setup of or interaction
        // with the custom elements imported from first-load-components.html
      </script>
    </paint>
  </head>

  <body>

    <section>
      <!-- content here is subject to default browser paint flow -->
    </section>

    <aside>
      <paint framerate="5">
        <!-- this content is essentially designated as low-priority,
             but framerate="5" could also be treated as a lower-bound target. -->
      </paint>
    </aside>

  </body>


 Here's what I intended in the example above:

- A paint element would allow devs to easily, and explicitly, wrap
multiple elements with their own paint settings. (you could also use
attributes, I suppose, but this way it is easy for someone new to the
code to Jump Right In™)
- If there was a paint element, we could build in a ton of tunable,
high-precision features that are easy to manipulate from all contexts

 I'm going to duck now - I anticipate things will soon be thrown at me.

 - Daniel


 On Wed, Nov 27, 2013 at 11:03 AM, Alex Russell slightly...@google.com wrote:

 On Wed, Nov 27, 2013 at 9:46 AM, Dimitri Glazkov dglaz...@google.com wrote:

 Stepping back a bit, I think we're struggling to ignore the elephant in
 the room. This elephant is the fact that there's no specification (or API)
 that defines (or provides facilities to control) when rendering happens.
 And for that matter, what rendering means.

 The original reason why script blocks execution until imports are
 loaded was not even related to rendering. It was a simple solution to an
 ordering problem -- if I am inside a script block, I am assured that any
 script before it had also run (whether it came from imports or not). It's
 the same reason why ES modules need a new HTML element (or script type at
 the very least).

 Blocking rendering was a side effect, since we simply took the
 plumbing from stylesheets.

 Then, events took a bewildering turn. Suddenly, this side effect turned
 into a feature/bug and now we're knee-deep in the sync-vs-async argument.
  And that's why all solutions look bad.

 With the elements attribute, we're letting the user of the import pick the
 poison they prefer (would you like your page to be slow, or would you rather
 it flash spastically?)

 With a sync or async attribute, we're faced with the enormous
 responsibility of predicting the right default for a new feature. Might
 as well flip a coin there.

 I say we call out the elephant.


 Agree entirely. Most any time we get into a situation where the UA can't
 do the right thing it's because we're trying to have a debate without all
 the information. There's a big role for us to play in setting defaults one
 way or the other, particularly when they have knock-on optimization
 effects, but that's something we know how to do.


 We need an API to control when things appear on screen. Especially, when
 things _first_ appear on screen.


 +1000!!!

 I'll take a stab at it. To prevent running afoul of existing heuristics
 in runtimes regarding paint, I suggest this be 

Re: [HTML Imports]: Sync, async, -ish?

2013-11-27 Thread John J Barton
What if:
<head>
...
<link rel="import" href="elements/pi-app.html">
...
</head>
<body>
...
<pi-app theme="polymer-ui-light-theme">
  <div class="app-loading"></div>
</pi-app>
...
was instead:
<pi-app import="elements/pi-app.html" theme="polymer-ui-light-theme">
  <div class="app-loading"></div>
</pi-app>
If I want to avoid FOUC, I precede this code with style that fills
.app-loading or that sets display:none on pi-app; then pi-app.html changes
the styles.
If I want to block script on pi-app, I use load events. If I want script to
block pi-app loading, I put that script before pi-app.
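A minimal sketch of the FOUC-avoidance idea above (the element, attribute, and class names come from the example; the specific styles are illustrative):

```html
<style>
  /* Hide the component until its import arrives and upgrades it... */
  pi-app { display: none; }
  /* ...or give the placeholder something presentable to show meanwhile. */
  .app-loading { min-height: 10em; background: #eee; }
</style>
<pi-app import="elements/pi-app.html" theme="polymer-ui-light-theme">
  <div class="app-loading"></div>
</pi-app>
<!-- pi-app.html, once loaded, overrides these styles, e.g.
     <style>pi-app { display: block; }</style> -->
```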

I'm just suggesting this as a dependency-driven model, where the
dependency is attached directly to the dependent rather than floating up in
a link. This is similar to JS modules, where the importer says import,
rather than the .html saying script.

(There are lots of HTML Import things that this may complicate; it's just a
suggestion of another point of view.)



On Wed, Nov 27, 2013 at 12:32 PM, Daniel Buchner dan...@mozilla.com wrote:

 JJB, this is precisely why the paint concept seemed like a good idea to
 me:

- Easy to use in just one or two places if needed, without a steep cliff
  - The choice shouldn't be: either put up with the browser's default
    render flow, or become a low-level, imperative, perf hacker
- Enables load/render/paint tuning of both graphical and non-visible,
  purely-functional elements
- Flexible enough to allow for complex cases, while being (relatively)
  easy to grok for beginners
- Doesn't require devs to juggle a mix of declarative, top-level
  settings, and imperative, per-element settings

 - Daniel

 On Wed, Nov 27, 2013 at 12:19 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 I just can't help thinking this whole line of reasoning is all too
 complicated to achieve wide adoption and thus impact.

 The supposed power of declarative languages is the ability to reason from top
 to bottom. Creating all of these exceptions causes the very problems being
 discussed: FOUC occurs because HTML Import runs async even though it looks
 like it is sync. Then we patch that up with e.g. elements and paint.

 On the other hand, JS has allowed very sophisticated application loading
 to be implemented. If the async HTML Import were done with JS, and if we
 added (if needed) rendering control support to JS, then we would allow
 high-function sites complete control of the loading sequence.

 I think we should be asking: what can we do to have the best chance that
 most sites will show reasonable default content while loading on mobile
 networks? A complex solution with confusing order of operations is fine
 for some sites; let them do it in JS. A declarative solution where default
 content appears before high-function content seems more likely to succeed
 for the rest. A complex declarative solution seems like the worst of both.
 HTH,
 jjb


 On Wed, Nov 27, 2013 at 11:50 AM, Daniel Buchner dan...@mozilla.com wrote:

 Right on Dimitri, I couldn't agree more. It seems like an involved (but
 highly beneficial) pursuit - but heck, maybe we'll find an answer quickly,
 let's give it a shot!

 Alex, I completely agree that declarative features should play a huge
 role in the solution, and I love the power/granularity you're alluding to
 in your proposal. WARNING: the following may be completely
 lol-batshit-crazy, so be nice! (remember, I'm not really a CS
 person...I occasionally play one on TV). What if we created something like
 this:

  <head>
    <paint policy="blocking">  <!-- non-blocking would be the default policy -->
      <link rel="import" href="first-load-components.html" />
      <script>
        // Some script here that is required for initial setup of or interaction
        // with the custom elements imported from first-load-components.html
      </script>
    </paint>
  </head>

  <body>

    <section>
      <!-- content here is subject to default browser paint flow -->
    </section>

    <aside>
      <paint framerate="5">
        <!-- this content is essentially designated as low-priority,
             but framerate="5" could also be treated as a lower-bound target. -->
      </paint>
    </aside>

  </body>


 Here's what I intended in the example above:

- A paint element would allow devs to easily, and explicitly, wrap
multiple elements with their own paint settings. (you could also use
attributes, I suppose, but this way it is easy for someone new to the
code to Jump Right In™)
- If there was a paint element, we could build in a ton of
tunable, high-precision features that are easy to manipulate from all
contexts

 I'm going to duck now - I anticipate things will soon be thrown at me.

 - Daniel


 On Wed, Nov 27, 2013 at 11:03 AM, Alex Russell 
 slightly...@google.com wrote:

 On Wed, Nov 27, 2013 at 9:46 AM, Dimitri Glazkov 
 dglaz...@google.com wrote:

 Stepping back a bit, I think we're struggling to ignore the elephant
 in the room. This elephant is the fact that there's no specification (or
 API) that defines

Re: [HTML Imports]: what scope to run in

2013-11-23 Thread John J Barton
On Sat, Nov 23, 2013 at 1:51 AM, Jonas Sicking jo...@sicking.cc wrote:


 It would technically be possible to define that script elements
 inside the imported documents also run inside a scope object the same
 way that modules do. This way imported documents would be less
 likely to pollute the global object of the importing page. This idea
 didn't seem very popular though. (I still like it :) ).


Running JS in a function scope to avoid implicit global creation is quite
popular: the Immediately Invoked Function Expression (IIFE) pattern. Since
that option is available, and since module should be available, we should
not redefine what script means in an import.
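As a concrete sketch of the IIFE pattern mentioned above (the names are illustrative):

```javascript
// IIFE: declarations inside the function expression stay private;
// only what we explicitly return escapes into the surrounding scope.
var counter = (function () {
  var count = 0;              // private state, no implicit global
  function increment() {      // private helper, also hidden
    count += 1;
    return count;
  }
  return { increment: increment };   // minimal public surface
})();

counter.increment();                 // 1
console.log(counter.increment());    // 2
console.log(typeof count);           // "undefined": nothing leaked out
```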



 One thing that we did discuss but that I think we never reached a
 conclusion on was if imported HTML documents need to block module
 tags in the main document. Otherwise there's a risk that named modules
 introduced by the imported HTML document won't be known at the time
 when name resolution happens in the main document. Whether this is a
 problem or not depends on how this name resolution works. I think this
 is still an outstanding question to resolve.


If we want HTML imports to be able to define named modules, then the ES
System loader must be able to load that module by name. IMO we cannot allow
named modules to be defined that cannot be loaded by name. Then the problem
you outline cannot happen: if the main document or any other ongoing HTML
Import or ES module loading needs a module it looks it up in the System and
blocks until the module is available.
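A toy sketch of the constraint described above (this is an illustration, not the actual ES System loader API; all the names here are hypothetical): every named module that gets defined is registered, so it can always be looked up by name.

```javascript
// Hypothetical registry illustrating "defined by name implies loadable by name".
const registry = new Map();

function defineModule(name, factory) {
  // Definition always registers: a named module can never exist unregistered.
  registry.set(name, { factory: factory, instance: undefined });
}

function loadModule(name) {
  const entry = registry.get(name);
  if (!entry) {
    throw new Error('module "' + name + '" is not registered');
  }
  if (entry.instance === undefined) {
    entry.instance = entry.factory();   // instantiate lazily, once
  }
  return entry.instance;
}

// An HTML Import defining a named module...
defineModule('widgets', function () { return { version: 1 }; });
// ...makes it resolvable from the main document or any other import:
console.log(loadModule('widgets').version);   // 1
```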


jjb


Re: [HTML Imports]: Sync, async, -ish?

2013-11-22 Thread John J Barton
I agree that we should allow developers to set 'sync' attribute on link
tags to block rendering until load. That will allow them to create sites
that appear to load slowly rather than render their standard HTML/CSS.

I think that the default should be the current solution and 'sync' should
be opt-in. Developers may choose:
   1. Do nothing. The site looks fine when it renders before the components
arrive.
   2. Add small static content fixes. The site looks fine after a few
simple HTML / CSS adjustments.
   3. Add 'sync', the site flashes too much, let it block.
This progression is the best for users.

jjb


On Thu, Nov 21, 2013 at 5:04 PM, Steve Souders soud...@google.com wrote:

 DanielF: You would only list the custom tags that should be treated as
 blocking. If *every* tag in Brick and Polymer should be blocking, then we
 have a really big issue because right now they're NOT-blocking and there's
 nothing in Web Components per se to specify a blocking behavior.

 JJB: Website owners aren't going to be happy with either situation:
   - If custom tags are async (backfilled) by default and the custom tag is
 a critical part of the page, subjecting users to a page that suddenly
 changes layout isn't good.
   - If custom tags (really HTML imports) are sync (block rendering) by
 default, then users stare at a blank screen during slow downloads.

 I believe we need to pick the best default while also giving developers
 the ability to choose what's best for them. Right now I don't see a way for
 a developer to choose to have a custom element block rendering, as opposed
 to be backfilled later. Do we think this is important? (I think so.) If so,
 what's a good way to let web devs make custom elements block?

 -Steve



 On Thu, Nov 21, 2013 at 3:07 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Ok, so my 2 cents: it's ok but it gives a very Web 1.0 solution. We had
 to invent AJAX so developers could control the user experience in the face
 of significant network delay. As I said earlier, most apps will turn this
 problem over to the design team rather than cause users to leave while the
 browser spins waiting for the page to render.


 On Thu, Nov 21, 2013 at 3:01 PM, Daniel Buchner dan...@mozilla.com wrote:

 Yes, that's the primary motivation. Getting FUC'd is going to be a
 non-starter for serious app developers. We were just thinking of ways to
 satisfy the use-case without undue burden.






Re: [HTML Imports]: Sync, async, -ish?

2013-11-22 Thread John J Barton
On Fri, Nov 22, 2013 at 8:22 AM, Daniel Buchner dan...@mozilla.com wrote:

 Personally I don't have any issues with this solution, it provides for the
 use-cases we face. Also, it isn't without precedent - you can opt for a
 sync XMLHttpRequest (not much different).

 The best part of an explicit 'sync' attribute is that we can now remove
 the block-if-a-script-comes-after-an-import condition, right Dimitri?

As far as I know, script already blocks rendering, and I don't think even
Dimitri can change that ;-)  Blocking script until HTML Import succeeds is
not needed, as we discussed earlier: scripts that want to run after an Import
already have an effective and well-known mechanism to delay execution:
listening for load events.
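The load-event mechanism referred to above, sketched (the href and the setup function are illustrative):

```html
<link rel="import" href="components.html" id="components">
<script>
  var link = document.querySelector('#components');
  // Defer work that depends on the import until it has finished loading.
  link.addEventListener('load', function () {
    setUpImportedElements();   // hypothetical setup function
  });
  link.addEventListener('error', function () {
    // The import failed to load; fall back or report.
  });
</script>
```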


 - Daniel
  On Nov 22, 2013 8:05 AM, John J Barton johnjbar...@johnjbarton.com
 wrote:

 I agree that we should allow developers to set 'sync' attribute on link
 tags to block rendering until load. That will allow them to create sites
 that appear to load slowly rather than render their standard HTML/CSS.

 I think that the default should be the current solution and 'sync' should
 be opt-in. Developers may choose:
1. Do nothing. The site looks fine when it renders before the
 components arrive.
2. Add small static content fixes. The site looks fine after a few
 simple HTML / CSS adjustments.
3. Add 'sync', the site flashes too much, let it block.
 This progression is the best for users.

 jjb


 On Thu, Nov 21, 2013 at 5:04 PM, Steve Souders soud...@google.com wrote:

 DanielF: You would only list the custom tags that should be treated as
 blocking. If *every* tag in Brick and Polymer should be blocking, then we
 have a really big issue because right now they're NOT-blocking and there's
 nothing in Web Components per se to specify a blocking behavior.

 JJB: Website owners aren't going to be happy with either situation:
   - If custom tags are async (backfilled) by default and the custom tag
 is a critical part of the page, subjecting users to a page that suddenly
 changes layout isn't good.
   - If custom tags (really HTML imports) are sync (block rendering) by
 default, then users stare at a blank screen during slow downloads.

 I believe we need to pick the best default while also giving developers
 the ability to choose what's best for them. Right now I don't see a way for
 a developer to choose to have a custom element block rendering, as opposed
 to be backfilled later. Do we think this is important? (I think so.) If so,
 what's a good way to let web devs make custom elements block?

 -Steve



 On Thu, Nov 21, 2013 at 3:07 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Ok, so my 2 cents: it's ok but it gives a very Web 1.0 solution. We had
 to invent AJAX so developers could control the user experience in the face
 of significant network delay. As I said earlier, most apps will turn this
 problem over to the design team rather than cause users to leave while the
 browser spins waiting for the page to render.


 On Thu, Nov 21, 2013 at 3:01 PM, Daniel Buchner dan...@mozilla.com wrote:

 Yes, that's the primary motivation. Getting FUC'd is going to be a
 non-starter for serious app developers. We were just thinking of ways to
 satisfy the use-case without undue burden.







Re: [HTML Imports]: Sync, async, -ish?

2013-11-21 Thread John J Barton
I guess this is designed to solve the flash of unstyled content problem by
blocking rendering of tags dependent upon unloaded custom elements?



On Thu, Nov 21, 2013 at 2:21 PM, Daniel Buchner dan...@mozilla.com wrote:

 Steve and I talked at the Chrome Dev Summit today and generated an idea
 that may align the stars for our async/sync needs:

 <link rel="import" elements="x-foo, x-bar" />

 The idea is that imports are always treated as async, unless the developer
 opts in to blocking based on the presence of specific tags. If the parser
 finds custom elements in the page whose tag names match those listed in
 elements, it would block rendering until the associated link import has
 finished loading and registering the contained custom elements.

 Thoughts?

 - Daniel


 On Wed, Nov 20, 2013 at 11:19 AM, Daniel Buchner dan...@mozilla.com wrote:


 On Nov 20, 2013 11:07 AM, John J Barton johnjbar...@johnjbarton.com
 wrote:
 
 
 
 
  On Wed, Nov 20, 2013 at 10:41 AM, Daniel Buchner dan...@mozilla.com
 wrote:
 
  Dimitri: right on.
 
  The use of script-after-import is the forcing function in the blocking
 scenario, not imports.
 
  Yes.
 
  Let's not complicate the new APIs and burden the overwhelming use-case
 to service folks who intend to use the technology in alternate ways.
 
  I disagree, but happily the current API seems to handle the
 alternative just fine. The case Steve raised is covered, and IMO correctly,
 now that you have pointed out that link supports a load event. His original
 example must block, and if he wants it not to block it's on him to hook the
 load event.
 
  For my bit, as long as the size of the components I include are not
 overly large, I want them to load before the first render and avoid getting
 FUCd or having to write a plethora of special CSS for the not-yet-upgraded
 custom element case.
 
  According to my understanding, you are likely to be disappointed: the
 components are loaded asynchronously, and on a slow network with a fast
 processor we will render page HTML before the component arrives.  We should
 expect this to be the common case for the foreseeable future.
 

 There is, of course, the case of direct document.register() invocation
 from a script tag, which will/should block to ensure all elements in
 original source are upgraded. My only point, is that we need to be
 realistic - both cases are valid and there are good reasons for each.

 Might we be able to let imports load async, even when a script follows
 them, if we added a *per component type* upgrade event? (note: I'm not
 talking about a perf-destroying per component instance event)

  jjb
 
  Make the intended/majority case easy, and put the onus on the less
 common cases to think about more complex asset arrangement.
 
  - Daniel
 
  On Nov 20, 2013 10:22 AM, Dimitri Glazkov dglaz...@google.com
 wrote:
 
  John's commentary just triggered a thought in my head. We should stop
 saying that HTML Imports block rendering. Because in reality, they don't.
 It's the scripts that block rendering.
 
  Steve's argument is not about HTML Imports needing to be async. It's
 about supporting legacy content with HTML Imports. And I have a bit less
 sympathy for that argument.
 
  You can totally build fully asynchronous HTML Imports-based
 documents, if you follow these two simple rules:
  1) Don't put scripts after imports in main document
  2) Use custom elements
 
  As an example:
 
  index.html:
  <link rel="import" href="my-ad.html">
  ...
  <my-ad></my-ad>
  ...
 
  my-ad.html:
  <script>
  document.register("my-ad", ... );
  ...
 
  There won't be any rendering blocked here. The page will render; then,
 when my-ad.html loads, it will upgrade the my-ad element to display the
 punch-the-monkey thing.
 
  :DG
 
 





Re: [HTML Imports]: Sync, async, -ish?

2013-11-21 Thread John J Barton
Ok, so my 2 cents: it's ok but it gives a very Web 1.0 solution. We had to
invent AJAX so developers could control the user experience in the face of
significant network delay. As I said earlier, most apps will turn this
problem over to the design team rather than cause users to leave while the
browser spins waiting for the page to render.


On Thu, Nov 21, 2013 at 3:01 PM, Daniel Buchner dan...@mozilla.com wrote:

 Yes, that's the primary motivation. Getting FUC'd is going to be a
 non-starter for serious app developers. We were just thinking of ways to
 satisfy the use-case without undue burden.



Re: [HTML Imports]: Sync, async, -ish?

2013-11-20 Thread John J Barton
On Wed, Nov 20, 2013 at 10:41 AM, Daniel Buchner dan...@mozilla.com wrote:

 Dimitri: right on.

 The use of script-after-import is the forcing function in the blocking
 scenario, not imports.

Yes.

 Let's not complicate the new APIs and burden the overwhelming use-case to
 service folks who intend to use the technology in alternate ways.

I disagree, but happily the current API seems to handle the alternative
just fine. The case Steve raised is covered, and IMO correctly, now that you
have pointed out that link supports a load event. His original example must
block, and if he wants it not to block it's on him to hook the load event.

 For my bit, as long as the size of the components I include are not overly
 large, I want them to load before the first render and avoid getting FUCd
 or having to write a plethora of special CSS for the not-yet-upgraded
 custom element case.

According to my understanding, you are likely to be disappointed: the
components are loaded asynchronously, and on a slow network with a fast
processor we will render page HTML before the component arrives.  We should
expect this to be the common case for the foreseeable future.

jjb

 Make the intended/majority case easy, and put the onus on the less common
 cases to think about more complex asset arrangement.

 - Daniel
  On Nov 20, 2013 10:22 AM, Dimitri Glazkov dglaz...@google.com wrote:

 John's commentary just triggered a thought in my head. We should stop
 saying that HTML Imports block rendering. Because in reality, they don't.
 It's the scripts that block rendering.

 Steve's argument is not about HTML Imports needing to be async. It's
 about supporting legacy content with HTML Imports. And I have a bit less
 sympathy for that argument.

 You can totally build fully asynchronous HTML Imports-based documents, if
 you follow these two simple rules:
 1) Don't put scripts after imports in main document
 2) Use custom elements

 As an example:

 index.html:
 <link rel="import" href="my-ad.html">
 ...
 <my-ad></my-ad>
 ...

 my-ad.html:
 <script>
 document.register("my-ad", ... );
 ...

 There won't be any rendering blocked here. The page will render; then,
 when my-ad.html loads, it will upgrade the my-ad element to display the
 punch-the-monkey thing.

 :DG




Re: [HTML Imports]: what scope to run in

2013-11-20 Thread John J Barton
On Wed, Nov 20, 2013 at 7:34 PM, Ryosuke Niwa rn...@apple.com wrote:

 On Nov 21, 2013, at 10:41 AM, Hajime Morrita morr...@google.com wrote:
  Seems like almost everyone agrees that we need a better way to
  modularize JavaScript, and ES6 modules are one of the most promising
  ways to go. And we also agree (I think) that we need a way to connect
  ES6 modules and the browser.
 
  What we don't agree on is what is the best way to do it. One option
  is to introduce a new primitive like jorendorff's module element.
  People are also seeing that HTML imports could be another option. So
  the conversation could be about which is better, or whether we need
  both or not.

 This is a nice summary.

  * Given above, HTML imports introduces an indirection with script
  src=... and will be slower than directly loading .js files.

 This is not the case when you're defining components/custom elements in
 the imported document,
 because you want the templates, styles, and inline scripts that define
 those custom elements in one HTML document.


And in this model, the 'inline script' can use module, meaning that the
JS is modular and relies on modular JS. In this way the two specifications
work together.

Earlier I suggested a way to combine these specifications from the other
direction, inventing a small extension to ES module-loader,
System.component(), where JS drives the load of an HTML import.
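The System.component() idea mentioned above might look roughly like this (only the name comes from the suggestion; the API shape, including the promise and the argument, is speculative):

```javascript
// Speculative sketch: JS drives the load of an HTML import.
System.component('elements/pi-app.html').then(function (importedDoc) {
  // The imported document is handed to script when it is ready,
  // so load order is controlled by the importer, not by markup order.
  var tpl = importedDoc.querySelector('template');
  // ... register elements, stamp templates, etc.
});
```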



  * HTML imports will work well with module-ish things, and it keeps
  the spec small, as it off-loads the module loading responsibility.
  This seems like good modularization of the feature.

 But authors have to opt-in to benefit from such modularization mechanisms.


As I argued for modularity support previously, I also think there is a
strong case for non-modular forms as well: early adoption, test cases,
simple pages, purely declarative markup sites, future innovations.
Enforcing modularization is likely, based on history, to lead to fewer
uses, far fewer than the opt-in a less rigid solution would attract.
Just consider JavaScript vs every other solution for Web scripting, or HTML
vs XML.



  HTML Imports make sense only if you need HTML fragments and/or
  stylesheets, but people need modularization regardless of whether they
  develop Web Components or plain JS pieces. I think the web standard
  should help both cases, and module or something similar serves better
  for that purpose.

 I'm fine with saying that link[rel=import] is a simple include and the
 module element is the way to include modularized HTML and JS files. That,
 however, raises a question as to whether we really need two very similar
 mechanisms to accomplish the same thing.


The module element remains a rumor to my knowledge, and this rumor has it
as a JS-only feature. The rest of the module specification is far along,
but again JS-only. The story for HTML is well represented...by the HTML
Import system.

The strength of the HTML Import story is declarative include of a complete
HTML sub-document, something we have never enjoyed as a common technology.
Its weakness in my mind is lack of advanced JS features like modules and
dynamic loading; these I see as great additions to be made by the JS
leaders.

I don't see two similar mechanisms for the same goal, but two cooperating
specifications with different strengths that combine nicely, at least on
paper; we need more empirical work with the combination to be sure.

HTH,
jjb


Re: [HTML Imports]: what scope to run in

2013-11-19 Thread John J Barton
On Tue, Nov 19, 2013 at 2:07 PM, Rick Waldron waldron.r...@gmail.com wrote:




 On Mon, Nov 18, 2013 at 7:14 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi All,

 Largely independently from the thread that Dimitri just started on the
 sync/async/-ish nature of HTML imports I have a problem with how
 script execution in the imported document works.

 Right now it's defined that any script elements in the imported
 document are run in the scope of the window of the document linking to
 the import. I.e. the global object of the document that links to the
 import is used as global object of the running script.

 This is exactly how script elements have always worked in HTML.

 However this is a pretty terrible way of importing libraries.
 Basically the protocol becomes here is my global, do whatever
 modifications you want to it in order to install yourself.

 This has several downsides:
 * Libraries can easily collide with each other by trying to insert
 themselves into the global using the same property name.
 * It means that the library is forced to hardcode the property name
 that it's accessed through, rather than allowing the page importing the
 library to control this.
 * It makes it harder for the library to expose multiple entry points
 since it multiplies the problems above.
 * It means that the library is more fragile since it doesn't know what
 the global object that it runs in looks like. I.e. it can't depend on
 the global object having or not having any particular properties.
 * Internal functions that the library does not want to expose require
 ugly anonymous-function tricks to create a hidden scope.

 Many platforms, including Node.js and ES6, introduce modules as a way
 to address these problems.

 It seems to me that we are repeating the same mistake again with HTML
 imports.

 Note that this is *not* about security. It's simply about making a
 more robust platform for libraries. This seems like a bad idea given
 that HTML imports essentially are libraries.

 At the very least, I would like to see a way to write your
 HTML-importable document as a module. So that it runs in a separate
 global


 This isn't how node modules or ES6 modules work. A module designed for use
 with node can define properties on the `global` (ie. the object whose bound
 identifier is the word global) and this is the same global object making
 the require(...) call. ES6 modules are evaluated in the same global scope
 from which they are imported.


However, ES6 modules do solve the downsides in Jonas' list. And ES6
modules create a scope, so variables and functions declared in a module but
not exported do not pollute the global object as a side effect of
declaration.
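A two-file sketch of that scoping behavior (the file and identifier names are illustrative):

```javascript
// lib.js -- an ES6 module: only what is exported is visible to importers.
let cache = new Map();          // module-scoped, never becomes a global
export function lookup(key) {
  return cache.get(key);
}

// main.js
import { lookup } from './lib.js';
// `cache` is not in scope here, and nothing was added to the global
// object; the module's internals stay private without any IIFE tricks.
```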

I think ES6 modules for HTML imports provide a good compromise between the
current HTML import design (no modules, just packaging) and total
iframe-like encapsulation (which has many practical and design issues).



 Rick





Re: [HTML Imports]: Sync, async, -ish?

2013-11-19 Thread John J Barton
I accidentally sent this to Scott only, then noticed when I realized I
needed to correct myself. First, a softer version of my message to Scott:

On Mon, Nov 18, 2013 at 5:53 PM, Scott Miles sjmi...@google.com wrote:

 I believe the primary issue here is 'synchronous with respect to
 rendering'. Seems like you ignored this issue. See Brian's post.


Yes, I agree that issue should be discussed. Rendering synchrony makes a
strong case against Dimitri's proposal (so now I switch my earlier
not-competing opinion). In my opinion, asynchronous component loading
driven by dependencies gives the best user experience and simplest dev
model. Second best is a synchronous model based on declaration order. Last
is an asynchronous "declarative" model (quotes because such solutions are
not declarative).

I guess you would agree that the best user experience occurs after the web
components are loaded. So let's get there the fastest possible way:
non-blocking asynchronous I/O.

After picking the fastest path, we get a bonus: we first render HTML5
content, anything our designers like: blank page, 'brought to you by ...',
etc. Thus we get control of the load-time UI.

The flash of unstyled content (FOUC) issue need not affect us because we
use web components with proper dependencies rather than a pile-of-div-s plus
some independent JS.

The synchronous solution takes longer to load and shows browser-defined
content in the meantime.

Therefore the dependency-driven asynchronous solution has better user
experience and a somewhat better dev model than declarative synchronous.

The declarative synchronous model has three important advantages: it is
simple to code, easy to reason about, and familiar. For the top- or
application-level on simple pages I think these advantages are important,
despite its weaker performance.

A declarative asynchronous solution (an async attribute on the link tag) can
be used to give the same user experience, but it loses on the development
model. It gives developers no help with load order and it creates confusion
by simulating imperative actions with declarative syntax.

FOUC is a sign of the failure of this kind of solution: the unstyled
content hits the rendering engine in the wrong order, before the JS that it
depends upon. If our dependency design is correct, we only deliver useful
content to the rendering engine.


 Scott


 On Mon, Nov 18, 2013 at 5:47 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Nov 18, 2013 at 3:06 PM, Scott Miles sjmi...@google.com wrote:

  I'll assert that the primary use case for JS interacting with HTML
 components ought to be 'works well with JS modules'.

 You can happily define modules in your imports, those two systems are
 friends as far as I can tell. I've said this before, I've yet to hear the
 counter argument.


 Yes indeed. Dimitri was asking for a better solution, but I agree that
 both are feasible and compatible.



  But if you believe in modularity for Web Components then you should
 believe in modularity for JS

 Polymer team relies on Custom Elements for JS modularity. But again,
 this is not mutually exclusive with JS modules, so I don't see the problem.


 Steve's example concerns synchrony between script and link
 rel='import'. It would be helpful if you can outline how your modularity
 solution works for this case.




  Dimitri's proposal makes the async case much more difficult: you need
 both the link tag with the async attribute and then you need to express the
 dependency with the clunky onload business

 I believe you are making assumptions about the nature of link and async.
 There are ways of avoiding this problem,


 Yes I am assuming Steve's example, so again your version would be
 interesting to see.


  but it begs the question, which is: if we allow Expressing the
 dependency in JS then why doesn't 'async' (or 'sync') get us both what we
 want?


 I'm not arguing against any other solution that also works. I'm only
 suggesting a solution that always synchronizes just those blocks of JS that
 need order-of-execution and thus never needs 'sync' or 'async' and which
 leads us to unify the module story for the Web.

 jjb



 Scott

 On Mon, Nov 18, 2013 at 2:58 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Nov 18, 2013 at 2:33 PM, Scott Miles sjmi...@google.com wrote:

  I love the idea of making HTML imports *not* block rendering as the
 default behavior

 So, for what it's worth, the Polymer team has the exact opposite
 desire. I of course acknowledge use cases where imports are being used to
 enhance existing pages, but the assertion that this is the primary use 
 case
 is at least arguable.


 I'll assert that the primary use case for JS interacting with HTML
 components ought to be 'works well with JS modules'. Today, in the current
 state of HTML Import and JS modules, this sounds too hard. But if you
 believe in modularity for Web Components then you should believe in
 modularity for JS (or look at the Node

Re: [HTML Imports]: Sync, async, -ish?

2013-11-19 Thread John J Barton
Now a correction:

On Tue, Nov 19, 2013 at 4:25 PM, John J Barton
johnjbar...@johnjbarton.com wrote:

  Last is an asynchronous "declarative" model (quotes because such solutions
 are not really declarative).

 Broadly I am advocating using ES6 modules with HTML imports. The
particular example I made up earlier was patterned after ES6 asynchronous
loading, here I repeat it:
<script>
System.component("import.php", function(component) {
  var content = component.content;
  document.getElementById('import-container').appendChild(content.cloneNode(true));
});
</script>

How does this differ from Dimitri's
<link rel="import" async href="/imports/heart.html">

Well not as much as I claimed before.

Both cases are parsed synchronously and cause subsequent loading. Both can
trigger module loading recursively: my made-up version by wiring ES6 module
loading to allow inputs to be HTML Imports, and Dimitri's version through
subimports.

The primary difference in starting the load operation is the callback. In
my made-up version the callback would follow the System.load() pattern from
ES6. In Dimitri's version you have to have a separate script tag with an
event handler and an event triggered by the import.
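The separate-script-and-event pattern in Dimitri's version might look like this (the `id`, container, and handler body are made up for illustration; the `import` property on the link element comes from the HTML Imports spec):

```html
<link id="heart" rel="import" async href="/imports/heart.html">
<script>
  // A second tag plus a handler is needed just to learn when the import
  // is usable; the dependency is never stated in the JS that uses it.
  document.getElementById('heart').addEventListener('load', function () {
    var doc = document.getElementById('heart').import;
    document.getElementById('import-container')
        .appendChild(doc.body.cloneNode(true));
  });
</script>
```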

If the application needs no callback, these two forms are a draw on all
counts.

So the crux of an ES6-compatible solution is a JS loader supporting
component loading. If the JS in an HTML import does not import any JS
modules, then asynchronous module loading works; we just don't get JS
modularity.

So I'm back to these don't compete. I think integrating ES6 modules with
HTML Imports can wait on ES6. The ES6 solution would be better for the
reasons I outlined previously, but everything is better in the future.

jjb


Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread John J Barton
Maybe Steve's example[1] could be on JS rather than on components:

System.component("import.php", function(component) {
  var content = component.content;
  document.getElementById('import-container').appendChild(content.cloneNode(true));
});

Here we mimic System.load(jsId, success, error).  Then make link not
block script: it's on JS to express the dependency correctly.

jjb


On Mon, Nov 18, 2013 at 1:40 PM, Dimitri Glazkov dglaz...@google.com wrote:

 'Sup yo!

 There was a thought-provoking post by Steve Souders [1] this weekend that
 involved HTML Imports (yay!) and document.write (boo!), which triggered a
 Twitter conversation [2], which triggered some conversations with Arv and
 Alex, which finally erupted in this email.

 Today, HTML Imports loading behavior is very simply defined: they act like
 stylesheets. They load asynchronously, but block script from executing.
 Some peeps seem to frown on that and demand moar async.

 I am going to claim that there are two distinct uses of link rel=import:

 1) The import is the most important part of the document. Typically, this
 is when the import is the underlying framework that powers the app, and the
 app simply won't function without it. In this case, any more async will be
 all burden and no benefit.

 2) The import is the least important of the document. This is the +1
 button case. The import is useful, but sure as hell doesn't need to take
 rendering engine's attention from presenting this document to the user. In
 this case, async is sorely needed.

 We should address both of these cases, and we don't right now -- which is
 a problem.

 Shoot-from-the-hip Strawman:

 * The default behavior stays currently specified
 * The async attribute on link makes import load asynchronously
 * Also, consider not blocking rendering when blocking script

 This strawman is intentionally full of ... straw. Please provide a better
 strawman below:
 __
 __
 __

 :DG

 [1]:
 http://www.stevesouders.com/blog/2013/11/16/async-ads-with-html-imports/
 [2]: https://twitter.com/codepo8/status/401752453944590336



Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread John J Barton
On Mon, Nov 18, 2013 at 2:33 PM, Scott Miles sjmi...@google.com wrote:

  I love the idea of making HTML imports *not* block rendering as the
 default behavior

 So, for what it's worth, the Polymer team has the exact opposite desire. I
 of course acknowledge use cases where imports are being used to enhance
 existing pages, but the assertion that this is the primary use case is at
 least arguable.


I'll assert that the primary use case for JS interacting with HTML
components ought to be 'works well with JS modules'. Today, in the current
state of HTML Import and JS modules, this sounds too hard. But if you
believe in modularity for Web Components then you should believe in
modularity for JS (or look at the Node ecosystem) and gee they ought to
work great together.




   It would be the web dev's responsibility to confirm that the import
 was done loading

 Our use cases almost always rely on imports to make our pages sane.
 Requiring extra code to manage import readiness is a headache.


I think your app would be overall even more sane if the dependencies were
expressed directly where they are needed. Rather than loading components
A, B, C, D and then some JS that uses B, C, F, just load the JS and let it
pull in B, C, F. No more cross-checking the list of link tags against what
the JS needs.
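A sketch of what dependency-driven loading buys: each piece of JS names its own dependencies and a tiny loader resolves them, so there is no separate list of link tags to keep in sync (a hypothetical registry with made-up names, not the real System.component proposal):

```javascript
// Minimal dependency-driven loader: define() registers a named piece with
// its dependencies; load() pulls dependencies first, then runs the factory.
const registry = new Map();   // name -> { deps, factory }
const loaded = new Map();     // name -> resolved value (cache)

function define(name, deps, factory) {
  registry.set(name, { deps, factory });
}

function load(name) {
  if (loaded.has(name)) return loaded.get(name);
  const { deps, factory } = registry.get(name);
  const value = factory(...deps.map((d) => load(d)));  // pull deps first
  loaded.set(name, value);
  return value;
}

// The JS that uses components B, C, F declares exactly those dependencies:
define("B", [], () => "componentB");
define("C", [], () => "componentC");
define("F", [], () => "componentF");
define("app", ["B", "C", "F"], (b, c, f) => [b, c, f].join("+"));

console.log(load("app")); // "componentB+componentC+componentF"
```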



 Dimitri's proposal above tries to be inclusive to both world views, which
 I strongly support as both use-cases are valid.


Dimitri's proposal makes the async case much more difficult: you need both
the link tag with the async attribute and then you need to express the
dependency with the clunky onload business. Expressing the dependency in JS
avoids both of these issues.

Just to point out: System.component()-ish need not be blocked by completing
ES module details and my arguments only apply for JS dependent upon Web
Components.




 Scott

 On Mon, Nov 18, 2013 at 2:25 PM, Steve Souders soud...@google.com wrote:

 I love the idea of making HTML imports *not* block rendering as the
 default behavior. I believe this is what JJB is saying: make link
 rel=import NOT block script.

 This is essential because most web pages are likely to have a SCRIPT tag
 in the HEAD, thus the HTML import will block rendering of the entire page.
 While this behavior is the same as stylesheets, it's likely to be
 unexpected. Web devs know the stylesheet is needed for the entire page and
 thus the blocking behavior is more intuitive. But HTML imports don't affect
 the rest of the page - so the fact that an HTML import can block the entire
 page the same way as stylesheets is likely to surprise folks. I don't have
 data on this, but the reaction to my blog post reflects this surprise.

 Do we need to add a sync (aka blockScriptFromExecuting) attribute? I
 don't think so. It would be the web dev's responsibility to confirm that
 the import was done loading before trying to insert it into the document
 (using the import ready flag). Even better would be to train web devs to
 use the LINK's onload handler for that.

 -Steve





 On Mon, Nov 18, 2013 at 10:16 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Maybe Steve's example[1] could be on JS rather than on components:

 System.component("import.php", function(component) {
   var content = component.content;
   document.getElementById('import-container').appendChild(content.cloneNode(true));
 });

 Here we mimic System.load(jsId, success, error).  Then make link not
 block script: it's on JS to express the dependency correctly.

 jjb


 On Mon, Nov 18, 2013 at 1:40 PM, Dimitri Glazkov dglaz...@google.com wrote:

 'Sup yo!

 There was a thought-provoking post by Steve Souders [1] this weekend
 that involved HTML Imports (yay!) and document.write (boo!), which
 triggered a Twitter conversation [2], which triggered some conversations
 with Arv and Alex, which finally erupted in this email.

 Today, HTML Imports loading behavior is very simply defined: they act
 like stylesheets. They load asynchronously, but block script from
 executing. Some peeps seem to frown on that and demand moar async.

 I am going to claim that there are two distinct uses of link
 rel=import:

 1) The import is the most important part of the document. Typically,
 this is when the import is the underlying framework that powers the app,
 and the app simply won't function without it. In this case, any more async
 will be all burden and no benefit.

 2) The import is the least important of the document. This is the +1
 button case. The import is useful, but sure as hell doesn't need to take
 rendering engine's attention from presenting this document to the user. In
 this case, async is sorely needed.

 We should address both of these cases, and we don't right now -- which
 is a problem.

 Shoot-from-the-hip Strawman:

 * The default behavior stays currently specified
 * The async attribute on link makes import load asynchronously
 * Also, consider not blocking rendering when blocking script

 This strawman is intentionally

Re: [HTML Imports]: Sync, async, -ish?

2013-11-18 Thread John J Barton
On Mon, Nov 18, 2013 at 3:06 PM, Scott Miles sjmi...@google.com wrote:

  I'll assert that the primary use case for JS interacting with HTML
 components ought to be 'works well with JS modules'.

 You can happily define modules in your imports, those two systems are
 friends as far as I can tell. I've said this before, I've yet to hear the
 counter argument.


Yes indeed. Dimitri was asking for a better solution, but I agree that both
are feasible and compatible.



  But if you believe in modularity for Web Components then you should
 believe in modularity for JS

 Polymer team relies on Custom Elements for JS modularity. But again, this
 is not mutually exclusive with JS modules, so I don't see the problem.


Steve's example concerns synchrony between script and link
rel='import'. It would be helpful if you can outline how your modularity
solution works for this case.




  Dimitri's proposal makes the async case much more difficult: you need
 both the link tag with the async attribute and then you need to express the
 dependency with the clunky onload business

 I believe you are making assumptions about the nature of link and async.
 There are ways of avoiding this problem,


Yes I am assuming Steve's example, so again your version would be
interesting to see.


 but it begs the question, which is: if we allow Expressing the dependency
 in JS then why doesn't 'async' (or 'sync') get us both what we want?


I'm not arguing against any other solution that also works. I'm only
suggesting a solution that always synchronizes just those blocks of JS that
need order-of-execution and thus never needs 'sync' or 'async' and which
leads us to unify the module story for the Web.

jjb



 Scott

 On Mon, Nov 18, 2013 at 2:58 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Nov 18, 2013 at 2:33 PM, Scott Miles sjmi...@google.com wrote:

  I love the idea of making HTML imports *not* block rendering as the
 default behavior

 So, for what it's worth, the Polymer team has the exact opposite
 desire. I of course acknowledge use cases where imports are being used to
 enhance existing pages, but the assertion that this is the primary use case
 is at least arguable.


 I'll assert that the primary use case for JS interacting with HTML
 components ought to be 'works well with JS modules'. Today, in the current
 state of HTML Import and JS modules, this sounds too hard. But if you
 believe in modularity for Web Components then you should believe in
 modularity for JS (or look at the Node ecosystem) and gee they ought to
 work great together.




   It would be the web dev's responsibility to confirm that the import
 was done loading

 Our use cases almost always rely on imports to make our pages sane.
 Requiring extra code to manage import readiness is a headache.


 I think your app would be overall even more sane if the dependencies were
 expressed directly where they are needed. Rather than loading components
 A, B, C, D and then some JS that uses B, C, F, just load the JS and let it
 pull in B, C, F. No more cross-checking the list of link tags against what
 the JS needs.



 Dimitri's proposal above tries to be inclusive to both world views,
 which I strongly support as both use-cases are valid.


  Dimitri's proposal makes the async case much more difficult: you need
 both the link tag with the async attribute and then you need to express the
 dependency with the clunky onload business. Expressing the dependency in JS
 avoids both of these issues.

 Just to point out: System.component()-ish need not be blocked by
 completing ES module details and my arguments only apply for JS dependent
 upon Web Components.




 Scott

 On Mon, Nov 18, 2013 at 2:25 PM, Steve Souders soud...@google.com wrote:

 I love the idea of making HTML imports *not* block rendering as the
 default behavior. I believe this is what JJB is saying: make link
 rel=import NOT block script.

 This is essential because most web pages are likely to have a SCRIPT
 tag in the HEAD, thus the HTML import will block rendering of the entire
 page. While this behavior is the same as stylesheets, it's likely to be
 unexpected. Web devs know the stylesheet is needed for the entire page and
 thus the blocking behavior is more intuitive. But HTML imports don't affect
 the rest of the page - so the fact that an HTML import can block the entire
 page the same way as stylesheets is likely to surprise folks. I don't have
 data on this, but the reaction to my blog post reflects this surprise.

 Do we need to add a sync (aka blockScriptFromExecuting) attribute?
 I don't think so. It would be the web dev's responsibility to confirm that
 the import was done loading before trying to insert it into the document
 (using the import ready flag). Even better would be to train web devs to
 use the LINK's onload handler for that.

 -Steve





 On Mon, Nov 18, 2013 at 10:16 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 Maybe Steve's example[1] could be on JS rather

Re: The JavaScript context of a custom element

2013-05-20 Thread John J Barton
Aren't ES6 modules a good-enough solution for this issue? They make
global collisions rare and likely to be what the author really needed.

jjb


On Mon, May 20, 2013 at 1:00 PM, Aaron Boodman a...@google.com wrote:

 Hello public-webapps,

 I have been following along with web components, and am really excited
 about the potential.

 However, I just realized that unlike the DOM and CSS, there is no real
 isolation for JavaScript in a custom element. In particular, the global
 scope is shared.

 This seems really unfortunate to me, and limits the ability of element
 authors to create robustly reusable components.

 I would like to suggest that custom elements have the ability to ask for a
 separate global scope for their JavaScript. This would be analogous to what
 happens today when you have multiple script-connected frames on the same
 origin.

 Has there been any thought along these lines in the past?

 Thanks,

 - a



Re: webcomponents: import instead of link

2013-05-16 Thread John J Barton
On Wed, May 15, 2013 at 11:03 AM, Dimitri Glazkov dglaz...@chromium.org wrote:

 On Wed, May 15, 2013 at 10:59 AM, Jonas Sicking jo...@sicking.cc wrote:
  On Wed, May 15, 2013 at 10:21 AM, John J Barton
  johnjbar...@johnjbarton.com wrote:
 
 
 
  On Tue, May 14, 2013 at 8:04 PM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  Apparently I wasn't clear enough before.
 
  We shouldn't add dynamically updating imports of components just
  because we're choosing to reuse link. We add dynamic imports if
  there are use cases.
 
  So far no-one has presented any use cases.
 
 
  Sorry if this is out-of-context, but as far as I can tell you are
 proposing
  that demand-loading of Web components for Web apps is not a valid
 use-case
  for components.
 
  That's not what I'm proposing. What I'm saying is that unloading of a
  component document is not a use case.


I agree that unloading a component document is not a high-priority use case.


 I.e. using link to point to
  URL A and wait for it to load the components in A. Then change the
  link to point to URL B and have it unload the components from A and
  instead load the components in B.
 
  This is how stylesheets work if you dynamically modify a link from
  pointing at A to pointing at B.


And the same logic could apply to web-components. However the consequences
need not be similar.

When you remove a stylesheet you remove rules from an active rule set. The
only reason is history. Historically we had poor JS control over CSS, so we
did not manipulate the CSS rule set. This meant 1) unloading the stylesheet
had some utility and 2) removing the corresponding rules made some sense.
If we had had a lot of JS operating on CSS, then we would consider CSS rule
removal with stylesheet unloading to be a horrible mistake.

Unloading web-components need not follow that path. Unloading could simply
mean un-registering the component so no new instances of that component
could be created. Would that be horrible? I don't think so. The next
attempt to create that component instance simply falls over.
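A minimal sketch of "unloading as un-registering" (a hypothetical registry, not any proposed API): existing instances keep working, and only the next creation attempt falls over:

```javascript
// Hypothetical component registry: unregistering only removes the
// constructor mapping; instances already created are untouched.
const components = new Map();

function registerComponent(tag, ctor) { components.set(tag, ctor); }
function unregisterComponent(tag) { components.delete(tag); }

function createComponent(tag) {
  const Ctor = components.get(tag);
  if (!Ctor) throw new Error("unknown component: <" + tag + ">");
  return new Ctor();
}

registerComponent("x-heart", class { beat() { return "thump"; } });
const heart = createComponent("x-heart");
unregisterComponent("x-heart");
console.log(heart.beat());     // "thump": the existing instance still works
// createComponent("x-heart") // the *next* attempt throws
```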


  I definitely agree there are use cases for at some point after a
  document has finished loading, loading components from url A, and
  again at a yet later point loading components from URL B.

 I think unloading components (unregistering custom elements, to be
 precise), is out of questions and never should be on the table. In
 fact, we have a separate table for that -- it's in a dark, scary place
 with eternal burning fire, where all bad ideas go after they die.


If web-components are meta objects for creating instances, then unloading
is a marginal-value idea. Else there is something more to understand...
What makes unloading apocalyptically bad?

jjb


 :DG



Re: Does JS bound to element need to inherit from HTMLElement?

2013-04-19 Thread John J Barton
On Thu, Apr 18, 2013 at 11:11 PM, Dominic Cooney domin...@google.com wrote:

 On Wed, Apr 17, 2013 at 12:01 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 I wonder if there may be a cultural difference involved in our
 different points of view. As a C++ developer I think your point of view
 makes a lot of sense.  As a JavaScript developer I find it puzzling.  Given
 a JS object I can override its value getter and add new properties
 operating on the object or inheriting from it.


  I'm not sure what precisely you mean by override its value getter, but
 if you mean define a property on the instance itself, that is an
 antipattern in JavaScript because it bloats every instance with an
 additional property, instead of just one additional property on the
 prototype object. It also makes further metaprogramming difficult, because
 there are n places to hook (n = number of instances) of uncertain location,
 compared to one place to hook (the prototype involved in the inheritance)
 with a discernable location (Constructor.prototype).

 I'm not sure what precisely you mean by inheriting from it, but if you
 mean put the DOM object on the prototype chain of a JavaScript object
 (apologies if that is not what you meant), that is problematic too.
 Depending on where the additional properties are defined it could have the
 same problems I outlined in the previous paragraph. I think it has the
 additional problem for implementations of making call sites using objects
 set up this way appear polymorphic, interfering with polymorphic inline
 caches.

 This is also a problem in that DOM operations stop working, for example:

 var x = document.createElement('div');
 var y = Object.create(x);
 y.appendChild(document.createElement('span'));

 will throw TypeError because the receiver is not a DOM object. I believe
 this is correct per Web IDL http://www.w3.org/TR/WebIDL/#es-operations
 4.4.7 step 2.


I meant monkey patching the prototype.  Particular implementations of host
objects may not allow that, which we could fix with various tradeoffs.
I'm only suggesting to keep an open mind to alternatives. Inheritance is a
fine technology but not always appropriate.

jjb



 Pre-ES6, the number of failure modes in both paths loom large. Anyone
 looking at the end result won't be able to tell the difference.


 If I understood the alternative approaches proposed, I think the
 differences are observable.


 Anyway the group seems keen on inheritance so I hope it works out.


 On Mon, Apr 15, 2013 at 11:24 PM, Dominic Cooney domin...@google.com wrote:

 On Sat, Apr 13, 2013 at 12:03 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 While I completely understand the beauty of having any JS object bound
 to an element inherit functions that make that object 'be an element',
 I'm unsure of the practical value.

 To me the critical relationship between the JS and the element is JS
 object access to its corresponding element instance without global
 operations. That is, no document.querySelector() must be required, because
 the result could depend upon the environment of the component instance.


 The critical issue to me is that there is a canonical object that script
 uses to interact with the element. With ad-hoc wrapping of elements in
 JavaScript, there are two objects (the native element wrapper provided by
 the UA and the object provided by the page author) which results in tedium
 at best (I did querySelector, now I need to do some other step to find the
 author's wrapper if it exists) and bugs at worst (the author's wrapper is
 trying to maintain some abstraction but that is violated by direct access
 to the native element wrapper.)


 Whether that access is through |this| is way down the list of critical
 issues for me. Given a reference to the element I guess I can do everything
 I want. In fact I believe the vast majority of the JS code used in
 components will never override HTMLElement operations for the same reason
 we rarely override Object operations.


 The Object interface is not terribly specific and mostly dedicated to
 metaprogramming the object model, so it is not surprising that it isn't
 heavily overridden.

 Elements are more specific so overriding their operations seems more
 useful. If I design a new kind of form input, it's very useful to hook
 HTMLInputElement.value to do some de/serialization and checking.

 Extending HTMLElement et al is not just about overriding methods. It is
 also to let the component author define new properties alongside existing
 ones, as most HTMLElement subtypes do alongside HTMLElement's existing
 properties and methods. And to enable authors to do this in a way
 consistent with the way the UA does it, so authors using Web Components
 don't need to be constantly observant that some particular functionality is
 provided by the UA and some particular functionality is provided by
 libraries.
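The value-hook idea above can be sketched with a plain accessor pair (in a browser this would live on an element subtype extending HTMLInputElement; a plain class keeps the sketch self-contained, and all names are made up):

```javascript
// Hooking `value` to do de/serialization and checking: reads return a
// rich object, writes validate and store a serialized form.
class DateInput {
  #raw = "";                                  // serialized backing store
  get value() { return new Date(this.#raw); } // deserialize on read
  set value(v) {
    const d = v instanceof Date ? v : new Date(v);
    if (isNaN(d)) throw new TypeError("not a date");  // checking
    this.#raw = d.toISOString();              // serialize on write
  }
}

const input = new DateInput();
input.value = "2013-04-19";
console.log(input.value instanceof Date); // true
```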


 So is the inheritance thing really worth the effort? It seems to
 complicate

Re: Does JS bound to element need to inherit from HTMLElement?

2013-04-16 Thread John J Barton
I wonder if there may be a cultural difference involved in our different
points of view. As a C++ developer I think your point of view makes a lot
of sense.  As a JavaScript developer I find it puzzling.  Given a JS object
I can override its value getter and add new properties operating on the
object or inheriting from it.  Pre-ES6, the number of failure modes in both
paths loom large. Anyone looking at the end result won't be able to tell
the difference.

Anyway the group seems keen on inheritance so I hope it works out.


On Mon, Apr 15, 2013 at 11:24 PM, Dominic Cooney domin...@google.com wrote:

 On Sat, Apr 13, 2013 at 12:03 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 While I completely understand the beauty of having any JS object bound to
 an element inherit functions that make that object 'be an element', I'm
 unsure of the practical value.

 To me the critical relationship between the JS and the element is JS
 object access to its corresponding element instance without global
 operations. That is, no document.querySelector() must be required, because
 the result could depend upon the environment of the component instance.


 The critical issue to me is that there is a canonical object that script
 uses to interact with the element. With ad-hoc wrapping of elements in
 JavaScript, there are two objects (the native element wrapper provided by
 the UA and the object provided by the page author) which results in tedium
 at best (I did querySelector, now I need to do some other step to find the
 author's wrapper if it exists) and bugs at worst (the author's wrapper is
 trying to maintain some abstraction but that is violated by direct access
 to the native element wrapper.)


 Whether that access is through |this| is way down the list of critical
 issues for me. Given a reference to the element I guess I can do everything
 I want. In fact I believe the vast majority of the JS code used in
 components will never override HTMLElement operations for the same reason
 we rarely override Object operations.


 The Object interface is not terribly specific and mostly dedicated to
 metaprogramming the object model, so it is not surprising that it isn't
 heavily overridden.

 Elements are more specific so overriding their operations seems more
 useful. If I design a new kind of form input, it's very useful to hook
 HTMLInputElement.value to do some de/serialization and checking.

 Extending HTMLElement et al is not just about overriding methods. It is
 also to let the component author define new properties alongside existing
 ones, as most HTMLElement subtypes do alongside HTMLElement's existing
 properties and methods. And to enable authors to do this in a way
 consistent with the way the UA does it, so authors using Web Components
 don't need to be constantly observant that some particular functionality is
 provided by the UA and some particular functionality is provided by
 libraries.


 So is the inheritance thing really worth the effort? It seems to
 complicate the component story as far as I can tell.


 I think it is worth the effort.

 --
 http://goto.google.com/dc-email-sla



Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread John J Barton
Why do the constructors of component instances run during component loading?

Why not use standard events rather than callbacks?

Thanks,
jjb
On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user code
 during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html

 Scott


 On Mon, Apr 15, 2013 at 7:45 AM, Rick Waldron waldron.r...@gmail.com wrote:




 On Mon, Apr 15, 2013 at 8:57 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/14/13 5:35 PM, Rick Waldron wrote:

 I have a better understanding of problem caused by these generated
 HTML*Element constructors: they aren't constructable.


 I'd like to understand what's meant here.  I have a good understanding
 of how these constructors work in Gecko+SpiderMonkey, but I'm not sure what
 the lacking bit is, other than the fact that they have to create JS objects
 that have special state associated with them, so can't work with an object
 created by the [[Construct]] of a typical function.

 Is that what you're referring to, or something else?


 Sorry, I should've been more specific. What I meant was that:

 new HTMLButtonElement();

 Doesn't construct an HTMLButtonElement; it throws with "Illegal
 constructor" in Chrome and "HTMLButtonElement is not a constructor" in
 Firefox (I'm sure this is the same across other browsers).

 Which of course means that this is not possible even today:

 function Smile() {
   HTMLButtonElement.call(this);
   this.textContent = ":)";
 }

 Smile.prototype = Object.create(HTMLButtonElement.prototype);


 Since this doesn't work, the prototype method named readyCallback was
 invented as a bolt-on stand-in for the actual [[Construct]]

 Hopefully that clarifies?

 Rick


 PS. A bit of trivial... A long time ago some users requested that
 jQuery facilitate a custom constructor; to make this work, John put the
 actual constructor code in a prototype method called init and set that
 method's prototype to jQuery's own prototype. The thing called
 readyCallback is similar. For those that are interested, I created a gist
 with a minimal illustration here: https://gist.github.com/rwldrn/5388544
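 For readers who skip the gist, a minimal sketch of the pattern Rick
 describes (illustrative names, not jQuery's actual code) looks like:

```javascript
// The "real" constructor logic lives in a prototype method (init), and
// init's prototype is pointed back at the main prototype, so instances
// created via `new init()` still look like instances of Widget.
function Widget(options) {
  return new Widget.prototype.init(options);
}
Widget.prototype = {
  constructor: Widget,
  init: function (options) {
    this.options = options || {};
    return this;
  },
  label: function () {
    return "widget:" + (this.options.name || "anon");
  }
};
Widget.prototype.init.prototype = Widget.prototype;

var w = Widget({ name: "ok" });   // callers don't need `new`
console.log(w instanceof Widget); // true
```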







 -Boris






Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread John J Barton
On Mon, Apr 15, 2013 at 9:44 AM, Scott Miles sjmi...@google.com wrote:

  Why do the constructors of component instances run during component
 loading?

 I'm not sure what you are referring to. What does 'component loading' mean?

  Why not use standard events rather than callbacks?


 I'll read some of the doc you link below and re-ask.

 On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user code
 during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html


---

Here's why:

i) when we load component document, it blocks scripts just like a
stylesheet 
(http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts)

ii) this is okay, since our constructors are generated (no user code)
and most of the tree could be constructed while the component is
loaded.

iii) However, if we make constructors run at the time of tree
construction, the tree construction gets blocked much sooner, which
effectively makes component loading synchronous. Which is bad.



Why do the constructors of component *instances*, which don't need to
run until instances are created, need to block the load of component
documents?

Seems to me that you could dictate that script in components load
async WRT components but block instance construction.

jjb


Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread John J Barton
On Mon, Apr 15, 2013 at 10:38 AM, Scott Miles sjmi...@google.com wrote:

 Dimitri is trying to avoid 'block[ing] instance construction' because
 instances can be in the main document markup.


Yes we sure hope so!



 The main document can have a bunch of markup for custom elements. If the
 user has made element definitions a-priori to parsing that markup
 (including inside link rel='import'), he expects those nodes to be 'born'
 correctly.


Sure.




 Sidebar: running user's instance code while the parser is constructing the
 tree is Bad(tm) so we already have deferred init code until immediately
 after the parsing step. This is why I keep saying 'ready-time' is different
 from 'construct-time'.


? user's instance code?  Do you mean: Running component instance
initialization during document construction is Bad?



 Today, I don't see how we can construct a custom element with the right
 prototype at parse-time without blocking on imported scripts (which is
 another side-effect of using script execution for defining prototype, btw.)


You must block creating instances of components until component documents
are parsed and initialized.  Because of limitations in HTML DOM
construction, you may have to block HTML parsing until instances of
components are created. Thus I imagine that creating instances may block
HTML parsing until component documents are parsed and initialized or the
HTML parsing must have two passes as your Pinocchio link outlines.

But my original question concerns blocking component documents on their own
script tag compilation. Maybe I misunderstood.

jjb





 On Mon, Apr 15, 2013 at 9:54 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 9:44 AM, Scott Miles sjmi...@google.com wrote:

  Why do the constructors of component instances run during component
 loading?

 I'm not sure what you are referring to. What does 'component loading'
 mean?

  Why not use standard events rather than callbacks?


 I'll some of the doc you link below and re-ask.

  On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user code
 during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html


 ---

 Here's why:

 i) when we load component document, it blocks scripts just like a
 stylesheet 
 (http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts)

 ii) this is okay, since our constructors are generated (no user code)
 and most of the tree could be constructed while the component is
 loaded.

 iii) However, if we make constructors run at the time of tree
 construction, the tree construction gets blocked much sooner, which
 effectively makes component loading synchronous. Which is bad.

 

 Why do the constructors of component *instances* which don't need to run 
 until instances are created, need to block the load of component documents?

 Seems to me that you could dictate that script in components load async 
 WRT components but block instance construction.

 jjb








Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread John J Barton
On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles sjmi...@google.com wrote:

 Thank you for your patience. :)

ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

 My 'x-foo' has an 'init' method that I wrote that has to execute before
 the instance is fully 'constructed'. Parser encounters an <x-foo></x-foo>
 and constructs it. My understanding is that calling 'init' from the parser
 at that point is a non-starter.


I think the Pinocchio link makes the case that you have only three choices:
   1) call 'init' when component instance tag is encountered, blocking
parsing,
   2) call 'init' later, causing reflows and losing the value of not
blocking parsing,
   3) don't allow 'init' at all, limiting components.

So non-starter is just a vote against one of three Bad choices as far as
I can tell. In other words, these are all non-starters ;-).


  But my original question concerns blocking component documents on their
 own script tag compilation. Maybe I misunderstood.

 I don't think imports (nee component documents) have any different
 semantics from the main document in this regard. The import document may
 have an x-foo instance in its markup, and <element> tags or <link
 rel="import"> just like the main document.


Indeed; however, the relative order of the component's script-tag processing
and the component's element tag is all I was talking about.




 On Mon, Apr 15, 2013 at 11:23 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 10:38 AM, Scott Miles sjmi...@google.com wrote:

 Dimitri is trying to avoid 'block[ing] instance construction' because
 instances can be in the main document markup.


 Yes we sure hope so!



 The main document can have a bunch of markup for custom elements. If the
 user has made element definitions a-priori to parsing that markup
 (including inside link rel='import'), he expects those nodes to be 'born'
 correctly.


 Sure.




 Sidebar: running user's instance code while the parser is constructing
 the tree is Bad(tm) so we already have deferred init code until immediately
 after the parsing step. This is why I keep saying 'ready-time' is different
 from 'construct-time'.


 ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?



 Today, I don't see how we can construct a custom element with the right
 prototype at parse-time without blocking on imported scripts (which is
 another side-effect of using script execution for defining prototype, btw.)


 You must block creating instances of components until component documents
 are parsed and initialized.  Because of limitations in HTML DOM
 construction, you may have to block HTML parsing until instances of
 components are created. Thus I imagine that creating instances may block
 HTML parsing until component documents are parsed and initialized or the
 HTML parsing must have two passes as your Pinocchio link outlines.

 But my original question concerns blocking component documents on their
 own script tag compilation. Maybe I misunderstood.

 jjb





 On Mon, Apr 15, 2013 at 9:54 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




  On Mon, Apr 15, 2013 at 9:44 AM, Scott Miles sjmi...@google.com wrote:

  Why do the constructors of component instances run during component
 loading?

 I'm not sure what you are referring to. What does 'component loading'
 mean?

  Why not use standard events rather than callbacks?


 I'll some of the doc you link below and re-ask.

  On Apr 15, 2013 9:04 AM, Scott Miles sjmi...@google.com wrote:

 Again, 'readyCallback' exists because it's a Bad Idea to run user
 code during parsing (tree construction). Ready-time is not the same as
 construct-time.

 This is the Pinocchio problem:
 http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0728.html


 ---

 Here's why:

 i) when we load component document, it blocks scripts just like a
 stylesheet 
 (http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#a-style-sheet-that-is-blocking-scripts)

 ii) this is okay, since our constructors are generated (no user code)
 and most of the tree could be constructed while the component is
 loaded.

 iii) However, if we make constructors run at the time of tree
 construction, the tree construction gets blocked much sooner, which
 effectively makes component loading synchronous. Which is bad.

 

 Why do the constructors of component *instances* which don't need to run 
 until instances are created, need to block the load of component documents?

 Seems to me that you could dictate that script in components load async 
 WRT components but block instance construction.

 jjb










Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread John J Barton
On Mon, Apr 15, 2013 at 2:01 PM, Scott Miles sjmi...@google.com wrote:

  What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 IIRC it's not possible to override methods that will be called from inside
 of builtins, so I don't believe this is an issue (unless we change the
 playfield).


Ugh. So we can override some methods but not others, depending on the
implementation?

So really these methods are more like callbacks with a funky kind of
registration. It's not like inheriting and overriding, it's like onLoad
implemented with inheritance-like wording.  An API user doesn't think
like an object; rather they ask the Internet some HowTo questions and get
a recipe for a particular function override.

Ok, I'm exaggerating, but I still think the emphasis on inheritance in the
face of so many caveats is a high tax on this problem.
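To make the distinction concrete, here is a plain-object simulation (no
DOM; the names are illustrative, not from any spec) of "registration by
well-known method name":

```javascript
// The "platform" (simulated by upgrade) looks up a callback by a
// well-known name on the prototype -- registration in all but name,
// spelled with inheritance-flavored syntax.
var proto = {
  readyCallback: function () { this.ready = true; }
};

function upgrade(instance) {
  // No addEventListener, no explicit registration call: the method's
  // mere existence under the agreed name *is* the registration.
  if (typeof instance.readyCallback === 'function') {
    instance.readyCallback();
  }
}

var el = Object.create(proto);
upgrade(el);
console.log(el.ready); // true
```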




  How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)

 This is another can of worms. Right now we blanket solve this by waiting
 for an 'all clear' event (also being discussed, 'DOMComponentsReady' or
 something) and handling this appropriately for our application.


Gee, that's not very encouraging: this is the most important kind of issue
for a developer, more so than whether the API is inheritance-like or not.





 On Mon, Apr 15, 2013 at 1:46 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 How, as component author, do I ensure that my imperative set up code runs
 and modifies my element DOM content before the user sees the un-modified
 custom element declared in mark-up? (I'm cheating, since this issue isn't
 specific to your prototype)


 On Mon, Apr 15, 2013 at 12:39 PM, Scott Miles sjmi...@google.com wrote:

 Sorry for beating this horse, because I don't like 'prototype' element
 anymore than anybody else, but I can't help thinking if there was a way to
 express a prototype without script 98% of this goes away.

 The parser can generate an object with the correct prototype, we can run
 init code directly after parsing, there are no 'this' issues or problems
 associating element with script.

 At least somebody explain why this is conceptually wrong.


  On Mon, Apr 15, 2013 at 11:52 AM, Scott Miles sjmi...@google.com wrote:

   1) call 'init' when component instance tag is encountered, blocking
 parsing,

 Fwiw, it was said that calling user code from inside the Parser could
 cause Armageddon, not just block the parser. I don't recall the details,
 unfortunately.


 On Mon, Apr 15, 2013 at 11:44 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




  On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles sjmi...@google.com wrote:

 Thank you for your patience. :)

 ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

 My 'x-foo' has an 'init' method that I wrote that has to execute
 before the instance is fully 'constructed'. Parser encounters an
 x-foo/x-foo and constructs it. My understanding is that calling 
 'init'
 from the parser at that point is a non-starter.


 I think the Pinocchio link makes the case that you have only three
 choices:
1) call 'init' when component instance tag is encountered, blocking
 parsing,
2) call 'init' later, causing reflows and losing the value of not
 blocking parsing,
3) don't allow 'init' at all, limiting components.

 So non-starter is just a vote against one of three Bad choices as
 far as I can tell. In other words, these are all non-starters ;-).


  But my original question concerns blocking component documents on
 their own script tag compilation. Maybe I misunderstood.

 I don't think imports (nee component documents) have any different
 semantics from the main document in this regard. The import document may
 have an x-foo instance in it's markup, and element tags or link
 rel=import just like the main document.


 Indeed, however the relative order of the component's script tag
 processing and the component's tag element is all I was talking about.




 On Mon, Apr 15, 2013 at 11:23 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




  On Mon, Apr 15, 2013 at 10:38 AM, Scott Miles sjmi...@google.com wrote:

 Dimitri is trying to avoid 'block[ing] instance construction'
 because instances can be in the main document markup.


 Yes we sure hope so!



 The main document can have a bunch of markup for custom elements.
 If the user has made element definitions a-priori to parsing that 
 markup
 (including inside link rel='import'), he expects those nodes to be 
 'born'
 correctly.


 Sure

Re: [webcomponents]: Of weird script elements and Benadryl

2013-04-15 Thread John J Barton
I think that rendering a placeholder (eg blank image) then filling it in
rather than blocking is good if done well (eg images with pre-allocated
space). Otherwise it's bad but less bad than blocking ;-).

But if you allow this implementation, then this whole discussion confuses
me even more. I'm thinking: If you don't need the custom constructors
during parsing, just wait for them to arrive, then call them. Something
else is going on I suppose, so I'm just wasting your time.


On Mon, Apr 15, 2013 at 2:42 PM, Daniel Buchner dan...@mozilla.com wrote:

 *
 *
 *Gee, that's not very encouraging: this is the most important kind of
 issue for a developer, more so than whether the API is inheritance-like or
 not.*

 IMO, the not-yet-upgraded case is nothing new, and developers will hardly
 be surprised. This nit is no different than if devs include a jQuery plugin
 script at the bottom of the body that 'upgrades' various elements on the
 page after render - basically, it's an unfortunate case of That's Just Life™


 Daniel J. Buchner
 Product Manager, Developer Ecosystem
 Mozilla Corporation


 On Mon, Apr 15, 2013 at 2:23 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




 On Mon, Apr 15, 2013 at 2:01 PM, Scott Miles sjmi...@google.com wrote:

  What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 IIRC it's not possible to override methods that will be called from
 inside of builtins, so I don't believe this is an issue (unless we change
 the playfield).


 Ugh. So we can override some methods but not others, depending on the
 implementation?

 So really these methods are more like callbacks with a funky kind of
 registration. It's not like inheriting and overriding, it's like onLoad
 implemented with an inheritance-like wording.  An API users doesn't think
 like an object, rather they ask the Internet some HowTo questions and get
 a recipe for a particular function override.

 Ok, I'm exaggerating, but I still think the emphasis on inheritance in
 the face of so me is a high tax on this problem.




  How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)

 This is another can of worms. Right now we blanket solve this by waiting
 for an 'all clear' event (also being discussed, 'DOMComponentsReady' or
 something) and handling this appropriately for our application.


 Gee, that's not very encouraging: this is the most important kind of
 issue for a developer, more so than whether the API is inheritance-like or
 not.





 On Mon, Apr 15, 2013 at 1:46 PM, John J Barton 
 johnjbar...@johnjbarton.com wrote:

 What happens if the construction/initialization of the custom element
 calls one of the element's member functions overridden by code in a
 prototype?

 How, as component author, do I ensure that my imperative set up code
 runs and modifies my element DOM content before the user sees the
 un-modified custom element declared in mark-up? (I'm cheating, since this
 issue isn't specific to your prototype)


  On Mon, Apr 15, 2013 at 12:39 PM, Scott Miles sjmi...@google.com wrote:

 Sorry for beating this horse, because I don't like 'prototype' element
 anymore than anybody else, but I can't help thinking if there was a way to
 express a prototype without script 98% of this goes away.

 The parser can generate an object with the correct prototype, we can
 run init code directly after parsing, there are no 'this' issues or
 problems associating element with script.

 At least somebody explain why this is conceptually wrong.


  On Mon, Apr 15, 2013 at 11:52 AM, Scott Miles sjmi...@google.com wrote:

   1) call 'init' when component instance tag is encountered,
 blocking parsing,

 Fwiw, it was said that calling user code from inside the Parser could
 cause Armageddon, not just block the parser. I don't recall the details,
 unfortunately.


 On Mon, Apr 15, 2013 at 11:44 AM, John J Barton 
 johnjbar...@johnjbarton.com wrote:




  On Mon, Apr 15, 2013 at 11:29 AM, Scott Miles sjmi...@google.com wrote:

 Thank you for your patience. :)

 ditto.




  ? user's instance code?  Do you mean: Running component instance
 initialization during document construction is Bad?

 My 'x-foo' has an 'init' method that I wrote that has to execute
 before the instance is fully 'constructed'. Parser encounters an
 x-foo/x-foo and constructs it. My understanding is that calling 
 'init'
 from the parser at that point is a non-starter.


 I think the Pinocchio link makes the case that you have only three
 choices:
1) call 'init' when component instance tag is encountered,
 blocking parsing,
2) call 'init' later, causing reflows and losing the value of not
 blocking parsing,
3) don't allow 'init' at all, limiting components.

 So

Does JS bound to element need to inherit from HTMLElement?

2013-04-12 Thread John J Barton
While I completely understand the beauty of having any JS object bound to
an element inherit functions that make that object 'be an element', I'm
unsure of the practical value.

To me the critical relationship between the JS and the element is JS object
access to its corresponding element instance without global operations.
That is, no document.querySelector() must be required, because the result
could depend upon the environment of the component instance.

Whether that access is through |this| is way down the list of critical
issues for me. Given a reference to the element I guess I can do everything
I want. In fact I believe the vast majority of the JS code used in
components will never override HTMLElement operations for the same reason
we rarely override Object operations.

So is the inheritance thing really worth the effort? It seems to complicate
the component story as far as I can tell.
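The alternative suggested above, where the JS object holds a reference to
its element rather than being the element, might look like this sketch
(illustrative only, not a proposed API):

```javascript
// The component object wraps its element and reaches it directly
// through the reference -- no document.querySelector(), and no
// HTMLElement inheritance required.
function Counter(element) {
  this.element = element;
  this.count = 0;
}
Counter.prototype.increment = function () {
  this.count += 1;
  this.element.textContent = String(this.count);
};

// Works against anything element-shaped, e.g. a stub for testing:
var fake = { textContent: "" };
var c = new Counter(fake);
c.increment();
console.log(fake.textContent); // "1"
```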

jjb


Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-11 Thread John J Barton
On Thu, Apr 11, 2013 at 7:57 AM, Erik Arvidsson a...@chromium.org wrote:

 The problem here is how do you register `My_yay` as the class that goes
 with the tag name `my_yay`. One option could be to use the completion
 value but it seems too magical/unreliable. It also does not scale well. I
 would like us to be able to put all custom component classes in a single
 js file:

 <element name="my-foo">
   ...
 </element>
 <element name="my-bar">
   ...
 </element>
 <element name="my-baz">
   ...
 </element>
 <script src="my-elements.js"></script>

 // my-elements.js
 someDocument.querySelector('[name=my-foo]').registerConstructor(MyFoo);
 someDocument.querySelector('[name=my-bar]').registerConstructor(MyBar);
 someDocument.querySelector('[name=my-baz]').registerConstructor(MyBaz);


 This calls out for a less verbose and more DRY API.


To me this seems to defeat modularity. Force me to organize all of MyFoo
JS/HTML/CSS together and separate from MyBar and MyBaz.  I already can
create a big messy pile. Help me clean up my room.

jjb


Re: [webcomponents]: Platonic form of custom elements declarative syntax

2013-04-10 Thread John J Barton
On Wed, Apr 10, 2013 at 3:30 PM, Daniel Buchner dan...@mozilla.com wrote:

 @John - in my opinion, <template bindtotagname="my-yay"> is the wrong
 direction. You should be declaring which *template* an *element* uses, not
 which element a template captures. Having templates latch onto element
 types from afar breaks the one-to-many case, prevents sane swapping of
 templates on a specific element node, and many other oddities.
 Alternatively, <element template="id-of-some-template"> is more flexible
 and the right 'direction' for such an association that suffers none of
 those issues, at least in my opinion. Feel free to disagree or set me
 straight if anything I said is not accurate :)


I don't have any opinion on this aspect, sorry. I was only offering a
modernization of an old and not-popular-among-purists way of connecting
scripts and elements.
jjb


Re: Shrinking existing libraries as a goal

2012-05-17 Thread John J Barton
On Thu, May 17, 2012 at 9:29 AM, Rick Waldron waldron.r...@gmail.com wrote:
 Consider the cowpath metaphor - web developers have made highways out of
 sticks, grass and mud - what we need is someone to pour the concrete.

I'm confused. Is the goal shorter load times (Yehuda) or better
developer ergonomics (Waldron)?

Of course *some* choices may do both. Some may not.

jjb




 Rick


 [1] http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-EventTarget



Re: Shrinking existing libraries as a goal

2012-05-17 Thread John J Barton
On Thu, May 17, 2012 at 10:10 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Thu, May 17, 2012 at 9:56 AM, John J Barton
 johnjbar...@johnjbarton.com wrote:
 On Thu, May 17, 2012 at 9:29 AM, Rick Waldron waldron.r...@gmail.com wrote:
 Consider the cowpath metaphor - web developers have made highways out of
 sticks, grass and mud - what we need is someone to pour the concrete.

 I'm confused. Is the goal shorter load times (Yehuda) or better
 developer ergonomics (Waldron)?

 Of course *some* choices may do both. Some may not.

 Libraries generally do three things: (1) patch over browser
 inconsistencies, (2) fix bad ergonomics in APIs, and (3) add new
 features*.

 #1 is just background noise; we can't do anything except write good
 specs, patch our browsers, and migrate users.

 #3 is the normal mode of operations here.  I'm sure there are plenty
 of features currently done purely in libraries that would benefit from
 being proposed here, like Promises, but I don't think we need to push
 too hard on this case.  It'll open itself up on its own, more or less.
  Still, something to pay attention to.

 #2 is the kicker, and I believe what Yehuda is mostly talking about.
 There's a *lot* of code in libraries which offers no new features,
 only a vastly more convenient syntax for existing features.  This is a
 large part of the reason why jQuery got so popular.  Fixing this both
 makes the web easier to program for and reduces library weight.

Yes! Fixing ergonomics of APIs has dramatically improved web
programming.  I'm convinced that concrete proposals vetted by major
library developers would be welcomed and have good traction. (Even
better would be a common shim library demonstrating the impact).

Measuring these changes by the number of bytes removed from downloads
seems 'nice to have' but should not be the goal IMO.

jjb


 * Yes, #3 is basically a subset of #2 since libraries aren't rewriting
 the JS engine, but there's a line you can draw between here's an
 existing feature, but with better syntax and here's a fundamentally
 new idea, which you could do before but only with extreme
 contortions.

 ~TJ



Re: Shrinking existing libraries as a goal

2012-05-16 Thread John J Barton
On Wed, May 16, 2012 at 9:53 AM, Dimitri Glazkov dglaz...@chromium.org wrote:
 I think it's a great idea. Shipping less code over the wire seems like
 a win from any perspective.

How about a cross-site secure (even pre-compiled) cache for JS
libraries as well?  We almost have this with CDN now, if it were
formally supported by standards then every site using a common library
would ship less code without compromises by the platform or libraries
wrt API.

jjb



Re: Synchronous postMessage for Workers?

2012-02-15 Thread John J Barton
On Tue, Feb 14, 2012 at 10:39 PM, Jonas Sicking jo...@sicking.cc wrote:
...
 The problem is when you have functions which call yieldUntil. I.e.
 when you have code like this:

 function doStuff() {
  yieldUntil(x);
 };

 now what looks like perfectly safe innocent code:

 function myFunction() {
  ... code here ...
  doStuff();
  ... more code ...
 }

 The myFunction code might look perfectly sane and safe. However since
 the call to doStuff spins the event loop, the two code snippets can
 see entirely different worlds.

 Put it another way, when you spin the event loop, not only does your
 code need to be prepared for anything happening. All functions up the
 call stack also has to. That makes it very hard to reason about any of
 your code, not just the code that calls yieldUntil.

This argument makes good sense, but can we make it more concrete and
thus clearer?

What I am fishing for is an example that more clearly shows the
yieldUntil() pattern is hard compared to a function call into a large
library or into the platform, and compared to a function call that
passes state from myFunction() into a closure. Can a function on
another event loop access the private (closure) state of myFunction()
in a way that the other two patterns cannot?
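For concreteness, here is a minimal simulation (mine, not from the thread)
of the re-entrancy hazard Jonas describes, with a hypothetical yieldUntil
modeled as draining a queue of pending events:

```javascript
// Simulated event-loop spin: yieldUntil drains queued "events" before
// returning, so state read before an innocent-looking call may be
// stale after it.
var shared = { value: 1 };
var queued = [function () { shared.value = 2; }];

function yieldUntil() {
  while (queued.length) queued.shift()(); // run pending events
}

function doStuff() {
  yieldUntil();
}

function myFunction() {
  var before = shared.value; // 1
  doStuff();                 // looks safe, but spins the loop
  var after = shared.value;  // 2 -- changed underneath us
  return before === after;
}

var unchanged = myFunction();
console.log(unchanged); // false
```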

jjb



Re: Synchronous postMessage for Workers?

2012-02-14 Thread John J Barton
On Tue, Feb 14, 2012 at 11:14 AM, David Bruant bruan...@gmail.com wrote:
 Le 14/02/2012 14:31, Arthur Barstow a écrit :

 Another addition will be promises.
 An already working example of promises can be found at
 https://github.com/kriskowal/q

Just to point out that promises are beyond the working example stage,
they are deployed in the major JS frameworks, eg:

http://dojotoolkit.org/reference-guide/dojo/Deferred.html
http://api.jquery.com/category/deferred-object/

The Q library is more like an exploration of implementation issues in
promises, trying to push them further.

jjb



Re: Synchronous postMessage for Workers?

2012-02-13 Thread John J Barton
On Mon, Feb 13, 2012 at 11:44 AM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Nov 2011, Joshua Bell wrote:

 Wouldn't it be lovely if the Worker script could simply make a
 synchronous call to fetch data from the Window?

 It wouldn't be so much a synchronous call, so much as a blocking get.
..
 Anyone object to me adding something like this? Are there any better
 solutions? Should we just tell authors to get used to the async style?

I guess the Q folks would say that remote promises provides another
solution. If promises are adopted by the platform, then the async
style gets much easier to work with.
https://github.com/kriskowal/q
(spec is somewhere on the es wiki)

In the Q model you would fetch data like:
  parentWindow.fetchData('myQueryString').then(  // resumes when the reply arrives
    function(data) {...},
    function(err) {...}
  );
Q has functions to join promises; q_comm adds remote promises.

I believe this can be done today with q_comm in workers.

Your signal/yieldUntil looks like what es-discuss calls generators.
I found them much harder to understand than promises, but then I come
from JS not python.

jjb



Re: connection ceremony for iframe postMessage communications

2012-02-13 Thread John J Barton
On Mon, Feb 13, 2012 at 12:57 PM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 10 Feb 2012, John J Barton wrote:
 
  Why would the connectivity part of this be the hard part?

 Because the existing information on cross-domain iframe communications
 is incomplete and written in terms few Web app developers understand,
 the browser implementations are new and the error messages they emit are
 puzzling. Solutions for same-domain cases don't work for cross-domain.
 Async communications is hard to debug.

 I agree with your described problems, but I don't see the link between
 them and adding yet more features to the platform.

Oh, sorry, I answered the question you asked, rather than the one you
wanted to ask ;-)

Why should the connectivity part be part of the platform?

To simplify and thus encourage cross-domain application development as
a natural and powerful extension of the Web.

Multiple incompatible connection sequences exist and will naturally
arise. One may eventually dominate. The process will be long and
painful; the advantages of competition seem low. By pursuing a
standard connection we accelerate the process.

A different push-back would make sense to me: it's not yet time. By
having this discussion we put the possibility on the table.  Already
I've learned how to use the MessageChannel solution and that
webintents could be another consumer of this connection layer.  Maybe
there should be no next steps for now.

jjb



 The solution to existing information on cross-domain iframe
 communications is incomplete is to add more information. The solution to
 existing information on cross-domain iframe communications is written in
 terms few Web app developers understand is to write new information. The
 solution to the browser implementations are new is to wait. The solution
 to the error messages they emit are puzzling is to file bugs on the
 browsers with suggestions for how to improve them. Etc.

 --
 Ian Hickson               U+1047E                )\._.,--,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: connection ceremony for iframe postMessage communications

2012-02-10 Thread John J Barton
On Thu, Feb 9, 2012 at 11:53 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 9 Feb 2012, John J Barton wrote:
 On Thu, Feb 9, 2012 at 4:42 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 9 Feb 2012, John J Barton wrote:
 
  However the solution has two significant problems:
    1. There is no way to know if portToOtherWindow is connected before
  you issue postMessage()
 
  Just have the target message you when it's ready.

 What I meant was just to do this on the receiving side (inside the
 iframe), after the onmessage handler has been set up (which we are
 assuming happens after the 'load' event for some reason):

   parent.postMessage('load', '*');

 That way you don't have to depend on the 'load' event, you can just wait
 for the message from the inner frame. Then when you get it, you know you
 can start sending..

The problem here is that the iframe may issue
parent.postMessage('load', '*') before the parent onmessage handler
has been set up. Modern apps no longer use single-point
synchronization. The parent window 'load' event has no time relation
to the onmessage handler setup, nor does the iframe load event.

Instead we have multiple synchronizations based on the dependency
relationships. This started with script loading but eventually all
content will be loaded this way.

In the past I've created synchronization in the parent by making the
iframe loading dependent upon the onmessage handler setup. But this
complicates and thus constrains the design.  A peer-to-peer or
symmetric solution would be better.


 And when you do send, you just send a message whose contents are just a
 single key saying what API endpoint you want, and a port, which you then
 use for all communication for that particular API call.

Just to clarify, I want to see the layer you just outlined be standard
so we can design iframe components and apps to mix and match. This can
be two simple layers on the current messaging: 1) the connection
ceremony, 2) the key/API format.
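The second layer (the key/API format) can be sketched concretely. The intro message names an API endpoint and carries a dedicated port; all traffic for that call then flows on the port. This is an illustrative sketch, not a proposed standard: synchronous stub ports keep it self-contained, where a real page would transfer one end of a MessageChannel via postMessage.

```javascript
// Sketch of the key/API layer: first message = { api, port }, then all
// conversation for that API happens on the private port. makePortPair()
// is a synchronous stand-in for a real MessageChannel's port1/port2.
function makePortPair() {
  const p1 = { onmessage: null, postMessage(d) { if (p2.onmessage) p2.onmessage({ data: d }); } };
  const p2 = { onmessage: null, postMessage(d) { if (p1.onmessage) p1.onmessage({ data: d }); } };
  return [p1, p2];
}

const [mine, theirs] = makePortPair();
const replies = [];

// "iframe" side: receives the intro, serves the named API on that port.
function handleIntro(msg) {
  if (msg.api === 'echo') {
    msg.port.onmessage = (e) => msg.port.postMessage('echo: ' + e.data);
  }
}

// container side: introduce the call, then converse on the private port
mine.onmessage = (e) => replies.push(e.data);
handleIntro({ api: 'echo', port: theirs });  // stands in for postMessage(key, '*', [port2])
mine.postMessage('hi');

console.log(replies);
```

The point of the per-call port is that the window-level handler never sees ordinary traffic, only introductions.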


 No races or anything.

Unfortunately for devs, the Web app world is becoming asynchronous.

jjb





Re: connection ceremony for iframe postMessage communications

2012-02-10 Thread John J Barton
On Fri, Feb 10, 2012 at 10:58 AM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 10 Feb 2012, John J Barton wrote:

 Just to clarify, I want to see the layer you just outlined be standard
 so we can design iframe components and apps to mix and match. This can
 be two simple layers on the current messaging: 1) the connection
 ceremony, 2) the key/API format.

 No reason for it to be standard, just define it as part of the protocol
 you are implementing over postMessage().

Ok, so I define it for my app. You write an iframe. You read my
definition and I can load your iframe. Yay!

Then Boris writes an app. He defines it for his app. You change your
iframe to deal with my app and Boris' app. Ok.

Then Mark Zuckerberg and Larry Page define apps. Soon you are spending
all of your money hiring devs to deal with connection protocols.

Then maybe you will have a reason for a standard?

jjb





Re: connection ceremony for iframe postMessage communications

2012-02-10 Thread John J Barton
On Fri, Feb 10, 2012 at 10:58 AM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 10 Feb 2012, John J Barton wrote:
 
  What I meant was just to do this on the receiving side (inside the
  iframe), after the onmessage handler has been set up (which we are
  assuming happens after the 'load' event for some reason):
 
    parent.postMessage('load', '*');
 
  That way you don't have to depend on the 'load' event, you can just
  wait for the message from the inner frame. Then when you get it, you
  know you can start sending..

 The problem here is that the iframe may issue parent.postMessage('load',
 '*') before the parent onmessage handler has been set up.

 You can always guarantee that you've set up your handler before you create
 the iframe. But, suppose that's somehow not possible. Then you just define
 ping as a message you can send to the inner frame, which the inner frame
 then responds to with the aforementioned load message.

 So now you have the following situations:

  - parent is set up first, then opens iframe:
    - iframe sends 'load' message when ready

  - parent opens iframe, then sets up communications, iframe is quicker:
    - iframe sends 'load' message when ready, but it gets missed
    - parent sends 'ping' message
    - iframe sends 'load' message in response

  - parent opens iframe, then sets up communications, parent is quicker:
    - parent sends 'ping' message, but it gets missed
    - iframe sends 'load' message when ready

 In all three cases, the first 'load' message that is received indicates
 that the communication system is ready.

Thanks. As a hint for the next person, it seems like the asymmetric
messages (parent 'ping', iframe 'load') are easier than symmetric ones
('hello'/'ack').

I think there are two more cases. Because the messages are all async,
the 'it gets missed' case can become 'it gets delayed'. That causes
additional messages.

  - parent opens iframe, then sets up communications, iframe is quicker:
   -  iframe sends 'load' message when ready, but it gets delayed
   - parent sends 'ping' message
   - parent get first 'load' message, responds with port
   - iframe sends 'load' message in response to 'ping'
 - parent opens iframe, then sets up communications, parent is quicker:
   - parent sends 'ping' message, but it gets delayed
   - iframe sends 'load' message when ready
   - iframe gets 'ping' message, sends 'load' message in response
   - parent gets 'load' message, responds with port

This does not change your conclusion (the first 'load' message
indicates ready), but I believe it does mean that a third message
(sending the port) is needed. The 'load' message cannot also send the
port.  It also affects when one removes the listener.
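The cases above can be exercised with a small deterministic sketch. A manual delivery queue stands in for async window.postMessage between parent and iframe, and a message delivered before the peer installs its handler is dropped (the "missed" case). All names here are illustrative, not part of any API.

```javascript
// Queue-based stand-in for window.postMessage between a parent page and
// its iframe. Messages are delivered in order when flush() runs; a message
// arriving before the peer has a handler is dropped ("it gets missed").
function makePair() {
  const queue = [];
  function endpoint() {
    const ep = { handler: null, peer: null,
                 send(msg) { queue.push([ep.peer, msg]); } };
    return ep;
  }
  const a = endpoint(), b = endpoint();
  a.peer = b; b.peer = a;
  const flush = () => {
    while (queue.length) {
      const [to, msg] = queue.shift();
      if (to.handler) to.handler(msg);   // no handler yet: message is missed
    }
  };
  return { a, b, flush };
}

// Case: parent opens iframe, then sets up communications; parent is quicker.
const { a: parent, b: iframe, flush } = makePair();
let ready = false;

parent.handler = (msg) => { if (msg === 'load') ready = true; };
parent.send('ping');   // iframe not listening yet, so this is missed
flush();

iframe.handler = (msg) => { if (msg === 'ping') iframe.send('load'); };
iframe.send('load');   // iframe announces readiness once it can listen
flush();

console.log(ready);    // the first 'load' received marks the channel ready
```

Reordering the two setup sections reproduces the other cases; in every ordering the first 'load' the parent receives is the ready signal.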


 (In practice it's usually much simpler than any of this because the parent
 can guarantee that it sets up its communications first, before the iframe
 is even created, and the child can guarantee that it sets up its
 communications before it finishes loading, so the parent can just use the
 regular 'load' event on the iframe and the child never needs to wait at
 all if it wants to start communicating first.)

Entangling communications setup with iframe 'load' just makes a
complicated problem harder, not simpler. If we can encapsulate the
above logic in a communications library then we don't have to involve
the UI or impact performance delaying load.

jjb



Re: connection ceremony for iframe postMessage communications

2012-02-10 Thread John J Barton
On Fri, Feb 10, 2012 at 1:37 PM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 10 Feb 2012, John J Barton wrote:
 On Fri, Feb 10, 2012 at 10:58 AM, Ian Hickson i...@hixie.ch wrote:
  On Fri, 10 Feb 2012, John J Barton wrote:
 
  Just to clarify, I want to see the layer you just outlined be
  standard so we can design iframe components and apps to mix and
  match. This can be two simple layers on the current messaging: 1) the
  connection ceremony, 2) the key/API format.
 
  No reason for it to be standard, just define it as part of the
  protocol you are implementing over postMessage().

 Ok, so I define it for my app. You write an iframe. You read my
 definition and I can load your iframe. Yay!

 Then Boris writes an app. He defines it for his app. You change your
 iframe to deal with my app and Boris' app. Ok.

 Then Mark Zuckerberg and Larry Page define apps. Soon you are spending
 all of your money hiring devs to deal with connection protocols.

 Then maybe you will have a reason for a standard?

 Why would the connectivity part of this be the hard part?

Because the existing information on cross-domain iframe communications
is incomplete and written in terms few Web app developers understand,
the browser implementations are new and the error messages they emit
are puzzling. Solutions for same-domain cases don't work for
cross-domain. Async communications is hard to debug.


  Each of these
 apps has an entire protocol you'll have to reimplement if you want to
 connect to it! The connectivity part is trivial in comparison.

Probably not, at least to start. Most of the early efforts are just
things like setting UI sizes or passing config strings.

In the long run the essential difference between a JS library widget
and an iframe widget will be asynchronous message passing. Libraries,
tools, and languages features are gearing up to help.


 I'm all for people standardising their postMessage() protocols, though,
 if they want to. Nothing is stopping people from doing so.

public-webapps would seem to be an appropriate venue to begin the
discussion, esp. since this is where the expertise on web messaging
exists.

jjb



xframe or iframe type='cross-domain'

2012-02-09 Thread John J Barton
I've been working with cross-domain iframes. This technology has a lot
of potential, but the current API is very difficult to use. Just
search the web for cross-domain iframe info and you can read how many
developers are confused.

I believe a simple change could make a huge difference. My suggestions
are related to 
http://www.whatwg.org/specs/web-apps/current-work/multipage/web-messaging.html

The current model for a cross-domain iframe is it's just a restricted
same-domain iframe. So both iframes have a contentWindow property as
their key API anchor. Sounds consistent and economical. But it's not,
because developer code written to process contentWindow references
cannot work with cross-domain iframe contentWindow objects.

As far as I can tell, a cross-domain iframe contentWindow has only one
valid property, postMessage(). By no stretch of anyone's imagination
is the object a window. Calling this thing we get a 'contentWindow'
is a mean lie to developers; it forces us into Exception-oriented
programming, where we try every recipe on the Web looking for
something that does not emit errors.

On the other hand, there is an important Web API focused on
postMessage() as outlined in the spec above. Generally (though not
exclusively) the spec refers to objects with postMessage() as ports.

Thus my proposal:
  1. create a spec-only base class element with the current properties
of iframe, except no contentWindow or contentDocument
  2. inherit iframe from the spec-only base class, add contentWindow
and contentDocument
  3. inherit a new element (eg xframe) or element type (eg iframe
type='cross-domain'), add property port
  4. Access to xframe.contentWindow would result in undefined
(yay! no funky errors)
  5. Access to iframe.port would result in 'undefined': developers now
have a simple test.
  6. xframe.port would have postMessage. I believe the port could in
fact be a MessagePort.
(http://www.whatwg.org/specs/web-apps/current-work/multipage/web-messaging.html#messageport)

I know that some may view this suggestion as trivial. I would just ask
you to talk to web app developers who have tried or considered using
cross-domain iframe messaging.

jjb



Re: xframe or iframe type='cross-domain'

2012-02-09 Thread John J Barton
On Thu, Feb 9, 2012 at 9:22 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 2/9/12 12:04 PM, John J Barton wrote:

 As far as I can tell, a cross-domain iframe contentWindow has only one
 valid property, postMessage(). By no stretch of anyone's imagination
 is the object a window. Calling this thing we get a contentWindow
 is a mean lie to developers; it forces us into Exception-oriented
 programming where we try every recipe on the Web looking for
 something that does not emit errors.


 So here's the thing.  If the element is called iframe it needs to have a
 contentWindow property.  The thing that cross-domain iframes return could be
 returned from some other property, but what should contentWindow then return
 for cross-domain iframes?

If we added 'port' to iframe and if we managed to change all of the
documentation to point devs toward iframe.port for messaging, then it
would be ok if contentWindow remained as it is now. I would expect
that the Web would quickly direct developers who have problems using
contentWindow towards using port.

We already have some emerging libraries for postMessage communications, eg
https://github.com/kriskowal/q-comm
My suggestion just helps devs attach the right thing to these
libraries: they need a 'port'.


 Of course using a different element name solves that problem.


   1. create a spec-only base class element with the current properties
 of iframe, except no contentWindow or contentDocument
   2. inherit iframe from the spec-only base class, add contentWindow
 and contentDocument
   3. inherit a new element (eg xframe) or element type (eg iframe
 type='cross-domain'), add property port


 It'd have to be a new element if it has a different API.

 The benefit is a cleaner API and not having to define what happens when the
 type changes.

 The drawback is that your fallback behavior in UAs without support for the
 new feature is quite different.  Is that a problem?  Developer feedback
 definitely needed there.

Extending the API on iframe would make fallback easy even though it
lacks elegance:
  if (iframe.port) {
    // modern browser
  } else {
    // we still deal with contentWindow carefully for old timers
  }

jjb



Re: xframe or iframe type='cross-domain'

2012-02-09 Thread John J Barton
On Thu, Feb 9, 2012 at 10:01 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 2/9/12 12:43 PM, John J Barton wrote:

 The drawback is that your fallback behavior in UAs without support for
 the
 new feature is quite different.  Is that a problem?  Developer feedback
 definitely needed there.


 Extending the API on iframe would make fallback easy even though it
 lacks elegance:
   if (iframe.port) {
      // modern browser
   } else {
     // we still deal with contentWindow carefully for old timers
   }


 The fallback issue I was talking about is that if you mint a new element
 called xframe then it wouldn't even load pages in an old browser.

Yes, sorry, I did understand that. Let me try again:

If, rather than using xframe, we simply add port to iframe then,
as you implied originally, there is no fall back problem. Fallback is
as above.

jjb



connection ceremony for iframe postMessage communications

2012-02-09 Thread John J Barton
Recently I've been working with iframe messaging. The postMessage
solution has a lot of advantages and good traction across iframes,
WebWorkers, and browser extensions, with lots of overlap with Web
Sockets.

However the technology has two significant problems.  First is the
contentWindow that is not a window confusion I discussed recently.
Second concerns the connection setup. I describe the second problem
here.

The basic communications solution is simple enough:
  window.addEventListener('message', handler, false);  // I'm listening!
  portToOtherWindow.postMessage(message, targetOrigin);  // I'm talking to you!

However the solution has two significant problems:
  1. There is no way to know if portToOtherWindow is connected before
you issue postMessage()
  2. All iframes send messages to the same handler.

The first problem arises because web apps are increasingly
asynchronous for load performance and other reasons.

This leads developers to look for events that will tell them about
'load' on iframes, and that leads them to try
iframe.contentWindow.addEventListener(). It works fine for same-domain
iframes, but fails for cross-domain.

The second problem arises because the handler is attached to the
window and not to an object related to the connection between the two
windows.

To work around these problems developers have to
  1. create a handshake message sequence, AND
  2. de-multiplex messages,
OR
  3. not use cross-domain iframes.

Notice that if multiple developers each create different handshake
and de-multiplexing solutions, then we end up with isolated
collections of compatible iframes, or we end up with
handshake-detection code in iframes.
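The de-multiplexing half of that busywork is a few lines every page must reinvent: route each 'message' event to a per-frame callback keyed on event.source. In the sketch below, plain objects stand in for contentWindow references so it runs anywhere; in a real page the shared handler would be registered with window.addEventListener('message', ...) and the keys would be iframe.contentWindow values.

```javascript
// De-multiplexing sketch: one shared 'message' handler routes events to a
// per-frame callback by comparing event.source against known windows.
const routes = new Map();          // source window -> per-frame handler

function sharedHandler(event) {
  const handler = routes.get(event.source);
  if (handler) handler(event.data);
  // else: unknown sender; ignore (or log while debugging)
}

const frameA = {}, frameB = {};    // stand-ins for two contentWindows
const log = [];
routes.set(frameA, (data) => log.push('A:' + data));
routes.set(frameB, (data) => log.push('B:' + data));

sharedHandler({ source: frameA, data: 'hello' });
sharedHandler({ source: frameB, data: 'world' });
sharedHandler({ source: {},     data: 'spam'  });  // unrouted sender dropped

console.log(log);
```

The handshake half is the harder part, since it has to tolerate either side starting first.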

To leverage an iframe component, a Web page needs to solve two hard
problems: 1) understand the API the component needs and 2) understand
the connection ceremony. The first part is fundamental to using the
component. The second part is just busy work.

I think we should have a standard solution to the connection problem
for cross-domain iframes.

Note that this problem is not shared by other uses of postMessage:
  1. WebWorkers: uses a port
  2. WebSockets: server always starts first, object is connection not window
  3. MessageChannel: object is connection not window.

Ideas?

jjb



Re: connection ceremony for iframe postMessage communications

2012-02-09 Thread John J Barton
On Thu, Feb 9, 2012 at 11:49 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 2/9/12 1:15 PM, John J Barton wrote:

 This leads developers to look for events that will tell them about
 'load' on iframes, and that leads them to try
 iframe.contentWindow.addEventListener(). It works fine for same-domain
 iframes, but fails for cross-domain.


 Adding a load listener to the iframe element itself should work for this,
 no?

I guess you mean: by issuing
  iframe.addEventListener('load', handler, false);
you get notified when the iframe load event has completed (but you
don't need to touch the contentWindow property).

This will work if the iframe ensures that it completes its connection
work before 'load'. This prevents the iframe from using async loading
for the scripts that create the connection and for any code that
handles messages from the parent. Which, in a typical iframe
component, would be all the code, since its main job is to provide
functionality for the parent.

In addition this solution requires that the above addEventListener be
attached after the iframe is inserted (so the iframe exists) but
before the parent's 'load' event (which is after the iframe load and
thus too late).

So I'd say it does not solve the original problem and it's hard to use
too. Other than that it's a fine idea ;-).

jjb



Re: connection ceremony for iframe postMessage communications

2012-02-09 Thread John J Barton
On Thu, Feb 9, 2012 at 11:49 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 That doesn't help with the second problem, of course

Ok here are some ideas, riffing off the web messaging doc

1. To the iframe element add:
readonly attribute MessagePort port;

'message' events from the iframe to the containing window (via
window.parent.postMessage) will be delivered to any port listener (as
well as the global window handler).
   This solves the multiplexing part, the container listens to a
per-iframe object.

'connect' event would be raised and delivered (synchronously) as soon
as the iframe issues window.parent.addEventListener('message'...) and
vice versa.
   This solves the async start-up part: each side waits for 'connect'
before issuing its first postMessage. The 'connect' for the
second-place racer triggers the first real message.

Pro: also solves the
cross-domain-iframes-don't-really-have-contentWindow problems I
discussed before.
   familiar addEventListener API, reuses MessagePort
   existing iframe code would just work

2. Have HTMLIFrameElement implement MessagePort.
  This is similar to #1 but the message port functions are attached to
the iframe element directly rather than to its port property.

Pro: resembles Worker
Con: resembles Worker.


3. To window add:
  [TreatNonCallableAsNull] attribute Function? onconnect;
The function would be called when the iframe issues
window.parent.addEventListener('message')

The onconnect event delivers a 'port'; the event.target would be the
iframe element
This solves the multiplexing problem: the container listens to a
per-iframe port object. Container can compare the event.target to its
iframes to decide which port is associated with which iframe.
This solves the async startup by causing the container to act like
a server: it must listen for connections early.
(Modeled on 
http://www.whatwg.org/specs/web-apps/current-work/multipage/workers.html#handler-sharedworkerglobalscope-onconnect,
since the parent window is shared by all of its enclosed iframes )

This second one seems like it solves the async problem by cheating.
Couldn't we just issue addEventListener('message',...) first in the
parent window? The reason 'connect' is better is that it is
out-of-band. If we use 'message' for setting up the connection, then
we must hold postMessage traffic until we get the first 'message'.
Thus the logic in the message handler must have two paths switching on
'first', exactly the problem we try to avoid. With 'connect', the
'message' handler just focuses on communications, not set up.

Pro: A bit more modern, as it follows SharedWorkers
   Seems like it could be expanded to inter-window communications
a la Web Intents
   Again it seems like the iframe code is all the same.

While I have experience with the iframe problems, I don't have
experience with the features I've cobbled together here. Any feedback?
If I had any hints about the issues involved in a real implementation
and standard, I'd work on simulating this with JS.
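None of these proposals exist in any browser. Purely as a thought experiment, proposal 1 might read like this in container code; a tiny stub fakes just enough of the hypothetical iframe.port object to make the sketch executable.

```javascript
// HYPOTHETICAL API: sketch of using proposal 1 (iframe.port plus a
// 'connect' event). makeStubPort() is a stand-in; _dispatch simulates
// the browser delivering events to the container's port.
function makeStubPort() {
  const listeners = { connect: [], message: [] };
  return {
    addEventListener(type, fn) { listeners[type].push(fn); },
    _dispatch(type, ev) { listeners[type].forEach((fn) => fn(ev)); }
  };
}

const iframe = { port: makeStubPort() };   // hypothetical iframe.port
const seen = [];

// Container code under the proposal: wait for 'connect', then just talk.
iframe.port.addEventListener('connect', () => seen.push('connected'));
iframe.port.addEventListener('message', (ev) => seen.push(ev.data));

// Simulate the iframe calling window.parent.addEventListener('message', ...),
// which per the proposal raises 'connect' on the container's port...
iframe.port._dispatch('connect', {});
// ...after which ordinary per-frame traffic arrives on the same port.
iframe.port._dispatch('message', { data: 'hello from iframe' });

console.log(seen);
```

Note how the 'message' handler contains no setup logic at all; the out-of-band 'connect' carries it, which is the point of the proposal.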

jjb



Re: connection ceremony for iframe postMessage communications

2012-02-09 Thread John J Barton
On Thu, Feb 9, 2012 at 4:42 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 9 Feb 2012, John J Barton wrote:

 However the solution has two significant problems:
   1. There is no way to know if portToOtherWindow is connected before
 you issue postMessage()

 Just have the target message you when it's ready.

Ah, ok, just to translate (in case anyone understood what I was
talking about before): there already exists an out-of-band
introduction system, the global postMessage(), which can be used to
set up the in-band channel by sending MessageChannel ports as
Transferables.

Let me see if I can understand this.
  Both sides create MessageChannel objects;
  both sides window.addEventListener('message', handler, false)
  both sides issue other.postMessage(..., *, [channel.port2]);

The second-place finisher in the race succeeds in posting its port2 to
the first-place racer. The first-place racer knows it 'won' because it
gets the port. But how does the second-place racer know it should use
channel.port1 rather than continue waiting? I guess the first-place
racer can send an ACK.

If yes, then this ACK message needs to be standard for cross-domain iframes.
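Under those assumptions the race can be sketched with a deterministic queue: each side listens, then offers a port; the side that actually receives an offer "won" and ACKs, so the other side knows to use its own channel.port1. The message shapes ('offer', 'ack') are invented here for illustration; in real code the offer would carry channel.port2 as a Transferable and the ACK would travel on the port itself.

```javascript
// Deterministic sketch of the symmetric MessageChannel introduction race.
// A manual queue stands in for window.postMessage; a message sent before
// the peer listens is dropped.
function makeWindows() {
  const queue = [];
  const win = () => {
    const w = { handler: null, peer: null,
                post(data) { queue.push([w.peer, { data }]); } };
    return w;
  };
  const a = win(), b = win();
  a.peer = b; b.peer = a;
  const flush = () => {
    while (queue.length) {
      const [to, ev] = queue.shift();
      if (to.handler) to.handler(ev);    // else: dropped
    }
  };
  return { a, b, flush };
}

const { a, b, flush } = makeWindows();
const result = {};

// Window A starts first: listen, then offer its port.
a.handler = (ev) => {
  if (ev.data.offer) {          // got B's port2: A "won" the race
    result.a = 'use-received-port';
    a.post({ ack: true });      // ACK so B stops waiting on its own offer
  }
};
a.post({ offer: 'A.port2' });   // B is not listening yet, so this is missed
flush();

// Window B starts second: listen, then offer its port.
b.handler = (ev) => {
  if (ev.data.ack) result.b = 'use-own-port1';
};
b.post({ offer: 'B.port2' });
flush();

console.log(result);
```

The asymmetric outcome (one side adopts the received port, the other keeps its own) is exactly why the ACK cannot be skipped.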

We also need the containing window's global introduction handler to
associate the given port with the correct iframe. The difficulty here
is that no property of event.source is available (similar, I suppose, to
iframe.contentWindow having nothing but errors to offer beyond
postMessage).  Experimentally
   event.source === other
is true in the handler. Is this given by the standard?



   2. All iframes send messages to the same handler.

 Pass a MessagePort to the target when you start a new conversation, and
 do the rest of the conversation on that.

Yes this part is cool.

jjb



Re: Adding Web Intents to the Webapps WG deliverables

2011-09-25 Thread John J Barton
On Thu, Sep 22, 2011 at 2:36 PM, Ian Hickson i...@hixie.ch wrote:
 There's no difference between two people coming up with the name foo and
 two people coming up with the name http://webintents.org/foo, unless
 you're saying you're confident that people won't use the prefix the spec
 uses for its verbs for their verbs.

I don't think this claim makes sense. As a developer I have no way to
know if 'foo' is used by anyone else on the Internet, but it would be
trivial to check http://webintents.org/foo.


 But this is a non-problem. In practice, we have plenty of examples of
 spaces where conflicts don't happen despite not having used long names
 such as URLs. For example:

  - rel= values in HTML
  - element names in HTML
  - MIME type names
  - scheme names

I believe all of these examples have one or more central name
controls.  The rel example in particular provides a counterexample to
using simple uncontrolled verbs:
http://microformats.org/wiki/existing-rel-values
Multiple naming authorities, layered on wiki, and still messy.



 A verb on its own will imply that it is a web intents verb managed by
 the webintents project and all the documentation for that will live
 under webintents, which means we would then need to think about
 standardisation and stewardship for the entire namespace.

 I don't see why. Just have a wiki page that people can list their verbs on
 and then point to their documentation.

A wiki is not comparable to the controlled naming systems in the four
examples you give above.  A wiki is a free-for-all that works great
when there is no money involved. A Web system involving 'share' along
with images, audio, and video will have money involved.

I think the intent names need a controlled namespace, either
centralized like your examples or decentralized as in the original
proposal. URLs need not be the format.  Note that Firefox extension
developers use the name@domain format for unique ids.

jjb



Re: Adding Web Intents to the Webapps WG deliverables

2011-09-22 Thread John J Barton
On Thu, Sep 22, 2011 at 5:22 PM, Charles Pritchard ch...@jumis.com wrote:
 I don't see why. Just have a wiki page that people can list their verbs on
 and then point to their documentation.

 I agree here. The standard is sufficient for stewardship.

Why won't I create a bot that fills the wiki with a dictionary's
worth of verbs pointing to my important intent: making money on
advertising? My own personal single-point of control. Oh, you think
your bot is faster than mine? We'll see about that! Oh, the wiki
domain owners stepped in? Set up a committee to approve changes?
Censorship! Politics! Slowness!

And so on.

jjb



Re: Component Model Update

2011-08-25 Thread John J Barton
On Thu, Aug 25, 2011 at 1:41 AM, Olli Pettay olli.pet...@helsinki.fiwrote:

 One thing missing is some kind of declarative way to define
 shadow trees, similar to XBL1's content.

 I think this omission is a big plus. XBL1 content is mysterious.  If a
dev tool wants to add support for building Components from declarative
markup, awesome. But the bizarre combo of XML, CSS, and JS in XBL1 is
poorly supported by tooling and thus is just a mess. Create a great JS
solution, then let tools build on that.

jjb


Re: Component Model Update

2011-08-24 Thread John J Barton
On Wed, Aug 24, 2011 at 2:30 PM, Dominic Cooney domin...@google.com wrote:

 On Thu, Aug 25, 2011 at 2:03 AM, Dimitri Glazkov dglaz...@chromium.org
 wrote:
  Yes, shadow DOM gives the author an extra lever to control visibility
  and hackability of their code. It's up to them to use this lever
  wisely.


Maybe I grew up on too much Web koolaid, but browsers should be giving all
extra levers to users. In real life, control in the hands of authors means
control in the hands of suits, and suits will always pick the hide all
setting.


 This is not without precedent. Just like authors who choose to
  use canvas to build their entire applications are shutting the door
  (intentionally or not) on extensions, I bet we'll also see these
  extremes with the Component Model.


In the case of canvas the reason is technical inferiority: the medium is
write-only. The Component Model has no such technical limit.


 However, I am also sure that a lot
  of authors will see value in retaining composability for extensions.
  If anything, shadow DOM can help authors draw proper composability
  boundaries and thus inform extensions developers where tweaking is ok
  and where may cause explosions.


Again, that's old school.

Independent of our different point of view on control, shadow DOM needs
debug APIs. So much the better if these are available to extensions.

jjb


Re: Component Model Update

2011-08-24 Thread John J Barton
On Wed, Aug 24, 2011 at 7:50 PM, Dimitri Glazkov dglaz...@chromium.orgwrote:


  Independent of our different point of view on control, shadow DOM needs
  debug APIs. So much the better if these are available to extensions.

 Let me see if I can capture this into a feature: user scripts may have
 access to shadow DOM subtrees. In terms of WebKit, when run in user
 script worlds, the Element has an extra accessor to spelunk down the
 shadow DOM.

 Is this what you're suggesting?


Yes. Encapsulation is good UI, not security. I want to ignore the subtree
normally but jump into the astral plane for special enlightenment.

XUL has such a mechanism, but I'd wish for less mystery. I spent many hours
trying to keep element inspection working on XUL. The API should aim to work
well with code designed for normal elements.

jjb



 :DG

 
  jjb
 



Re: Component Model Update

2011-08-24 Thread John J Barton
I'm still trying to digest this, but it seems pretty clear the 'confinement'
is the clear-scope thing I was asking about on es-discuss.  According to
that discussion, it needs to fit with the 'modules' effort in ECMAScript.
That seems to be where you are headed, but basing a new proposal
on another new proposal is ... well, I'll let you fill in the blank depending
on how you are feeling.

I guess the actual implementation of confined script evaluation would not be
difficult (Firefox can do it now, if you can get someone to explain it).
Getting the entire 'modules' effort out? I'm thinking that could be hard.

jjb


Re: Overview of behavior attachment as a general problem on the Web

2011-07-08 Thread John J. Barton

On 7/8/2011 1:18 PM, Dimitri Glazkov wrote:

As a background for the wider Component Model discussion, I put
together an overview of the general behavior attachment problem on the
Web:

http://wiki.whatwg.org/wiki/Behavior_Attachment

Please take a look. Comments, additions, and critique are appreciated.

:DG
First, I like the overview, I think it helps clear up a lot of issues. 
And it raises lots of questions, which is also good ;-).


I'm not quite connecting the dots. Behavior attachment is needed, your 
examples demonstrate that. You claim the missing facility is atomic 
component addition and proper encapsulation. Perhaps this is well known, 
but I think it would be helpful to explicitly explain why organized 
behavior attachment requires encapsulation. Actually I think a better 
approach is to explain why/how behavior attachment with encapsulation 
will be better, cheaper, faster.  A small example would be helpful 
(perhaps later in the document).


Your introduction highlights encapsulation. However, it seems to me that 
encapsulation is secondary to componentization: the critical step is to 
have a way to group HTML/CSS/JS into a unit that can be developed 
independently and then be used without reference to the implementation.  
Encapsulation in the OO sense adds constraints that enforce the 
boundaries.  It's great and maybe even critical, but not primary.


The examples sections are great, perhaps some experts will correct some 
details but your overall approach here is excellent.


The Behavior Attachment Methods section is also super, but at the end I 
was puzzled. I thought the Shadow DOM proposal only allowed one binding, 
and thus it would exclude exactly the Decorator pattern we need to 
compose multiple frameworks.  I understand how you can solve the Dojo or 
Sencha or jQuery problem better, but I don't see how you can solve the 
'and' version.


HTH,
jjb



Re: Mutation events replacement

2011-07-07 Thread John J Barton

Jonas Sicking wrote:

 We are definitely
short on use cases for mutation events in general which is a problem.
  
1. Graphical breakpoints. The user marks some DOM element or attribute 
to trigger break. The debugger inserts mutation listeners to watch for 
the event that causes that element/attribute to be created/modified. 
Then the debugger re-executes some code sequence and halts when the 
appropriate listener is entered. Placing the listeners high in the tree 
and analyzing all of the events is easier than trying to precisely add a 
listener since the tree will be modified during re-execution.


2. Graphical tracing. Recording all or part of the DOM creation. For 
visualization or analysis tools.  See for example Firebug's HTML panel 
with options Highlight Changes, Expand Changes, or Scroll Changes into View.


3. Client-side dynamic translation. Intercept mutations and replace or 
extend them. This could be for user tools like Scriptish or Stylish, dev 
tools that inject marks or code, or for re-engineering complex sites to 
use newer browser features.


jjb



Re: Mutation events replacement

2011-07-07 Thread John J Barton

Rafael Weinstein wrote:

On Thu, Jul 7, 2011 at 1:58 PM, Ryosuke Niwa rn...@webkit.org wrote:
  

On Thu, Jul 7, 2011 at 12:18 PM, Jonas Sicking jo...@sicking.cc wrote:


I don't think John J Barton's proposal to fire before mutation
notifications is doable.
  

I concur.  Being synchronous was one of the reasons why the existing DOM
mutation events don't work.  We shouldn't be adding yet another synchronous
event here.

However, my proposal need not be synchronous in the sense that is 
important here: 'before' mutation listeners need not be able to mutate, 
only cancel.  So it's not yet another synchronous event.  Developers 
would use their handler to build a new mutation event and fire it on the 
next turn: it's essentially asynchronous.

In short before spending more time on this, I'd like to see a
comprehensive proposal, including a description of the use cases it
solves and how it solves them. I strongly doubt that this approach is
practical.
  
There are lots of reasons why 'before' events may not be practical, 
including lack of enthusiasm on the part of the implementors.  You folks 
are the experts, I'm just trying to contribute another point of view. 
Thus I want to point out that for the critical issue of preventing 
mutation listeners from mutating, all you have to do to Jonas' algorithm 
is prepend:


0. If notifyingCallbacks is set to true, throw 
MutationNotAllowedInBeforeMutationCallbacks.


You don't have to do anything to create a read-only DOM API because you 
already track all possible DOM modifications.   The clean-up from the 
throw is similar to the cancel, and no different from any other clean-up 
you have to do if the mutation listener fails. 
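A minimal sketch of the guard described above, in JavaScript. All names here (notifyingCallbacks, the error name, the listener shape) are illustrative assumptions modeling the modified algorithm, not a real DOM implementation:

```javascript
// Sketch: a notifyingCallbacks guard that makes DOM writes inside
// 'before' mutation callbacks throw. All names are hypothetical.
let notifyingCallbacks = false;

function setAttributeWithBefore(node, name, value, beforeListeners) {
  // Step 0 from above: mutating while callbacks run is an error.
  if (notifyingCallbacks) {
    throw new Error("MutationNotAllowedInBeforeMutationCallbacks");
  }
  notifyingCallbacks = true;
  let cancelled = false;
  try {
    for (const cb of beforeListeners) {
      // Listeners may only cancel; returning false vetoes the mutation.
      if (cb({ node, name, value }) === false) cancelled = true;
    }
  } finally {
    notifyingCallbacks = false; // clean up even if a listener throws
  }
  if (!cancelled) node.attrs[name] = value;
  return !cancelled;
}
```

A listener that tries to mutate re-enters setAttributeWithBefore while the flag is set and gets the exception; a listener that returns false cancels the pending mutation. The clean-up after the throw is exactly the clean-up for the cancel path.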

This is of course not a comprehensive proposal.  I'm perfectly fine if 
you choose not to respond because you want to close off this discussion 
and I thank you for the replies so far.


jjb



Re: Mutation events replacement

2011-07-07 Thread John J Barton

Olli Pettay wrote:

On 07/08/2011 01:43 AM, John J Barton wrote:

Rafael Weinstein wrote:

On Thu, Jul 7, 2011 at 1:58 PM, Ryosuke Niwa rn...@webkit.org wrote:
On Thu, Jul 7, 2011 at 12:18 PM, Jonas Sicking jo...@sicking.cc 
wrote:

I don't think John J Barton's proposal to fire before mutation
notifications is doable.
I concur. Being synchronous was one of the reasons why the existing 
DOM
mutation events don't work. We shouldn't adding yet-another 
synchronous

event here.

However, my proposal need not be synchronous in the sense that is
important here: 'before' mutation listeners need not able to mutate,
only cancel. So it's not yet another synchronous event. Developers would
use their handler to build a new mutation event and fire it on the next
turn: it' s essentially asynchronous.

In short before spending more time on this, I'd like to see a
comprehensive proposal, including a description of the use cases it
solves and how it solves them. I strongly doubt that this approach is
practical.

There are lots of reasons why 'before' events may not be practical,
including lack of enthusiasm on the part of the implementors. You folks
are the experts, I'm just trying to contribute another point of view.
Thus I want to point out that for the critical issue of preventing
mutation listeners from mutating, all you have to do to Jonas' algorithm
is prepend:

0. If notifyingCallbacks is set to true, throw
MutationNotAllowedInBeforeMutationCallbacks.


I don't understand how this could really work.

Just as an example:
What if the mutation listener spins event loop which ends up
touching parser so that it tries to insert new content to the document.
I would like to learn what you mean here.  The only way I know how to 
suspend an event and spin a new one is via the debugger API.  Is that 
the case you are concerned with?

That mutation wouldn't be allowed. What should be done to that
data which the parser can't add to the document?

Discard, same as any exception, not a special case.

jjb









Re: Mutation events replacement

2011-07-07 Thread John J. Barton

On 7/7/2011 6:38 PM, Jonas Sicking wrote:

On Thu, Jul 7, 2011 at 5:23 PM, Rafael Weinsteinrafa...@google.com  wrote:

So yes, my proposal only solves the usecase outside mutation handlers.
However this is arguably better than never solving the use case as in
your proposal. I'm sure people will end up writing buggy code, but
ideally this will be found and fixed fairly easily as the behavior is
consistent. We are at least giving people the tools needed to
implement the synchronous behavior.

Ok. Thanks for clarifying. It's helpful to understand this.

I'm glad there's mostly common ground on the larger issue. The point
of contention is clearly whether accommodating some form of sync
mutation actions is a goal or non-goal.

Yup, that seems to be the case.

I think the main reason I'm arguing for allowing synchronous callbacks
is that I'm concerned that without them people are going to stick to
mutation events. If I was designing this feature from scratch, I'd be
much happier to use some sort of async callback. However given that we
need something that people can migrate to, and we don't really know
what they're using mutation events for, I'm more conservative.

/ Jonas
Hmm... you don't believe the use cases and info on how mutation events 
are being used that Dave and I have posted, and you don't have any 
alternatives.  Perhaps the conservative solution is to do nothing.


You might ask Prof. Jan Vitek if his infrastructure can give you any 
information on mutation event uses.  He may also know of other ways to 
get such answers.


jjb





Re: Mutation events replacement

2011-07-06 Thread John J. Barton

On 7/6/2011 5:38 AM, Boris Zbarsky wrote:

On 7/6/11 4:27 AM, Dave Raggett wrote:

How does that scale to the case where you set the observer on the
document or on a div element acting as a contained for content editable
content? If I am not mistaken you would have to keep a copy of the
document, or of that div element respectively, and keep it in sync with
all of the mutations, which sounds like a major performance hit, and
something you don't need to incur with the current DOM mutation events.


Oh, you _do_ incur a major performance hit with current mutation 
events if you watch attribute mutations, precisely due to the need to 
save the pre-mutation values.  You just push the performance hit off 
on the browser core.


And before you say that it can do this more efficiently, that's only 
true if you're interested in the previous value of _all_ attributes.  
I realize your particular use case is.  But lots of others are not. 
Unfortunately, the browser has no way to tell which attributes on 
which elements the mutation event really cares about, so all mutation 
event consumers take the same performance hit.  Which leads to the 
common recommendation to not use attribute modification mutation 
events at all, because, for example, they make your jQuery animations 
dog-slow.


Again, I realize this is not a problem for you because of your 
particular use case of mirroring the entire DOM.  But let's not 
pretend there's no performance hit now or that the performance hit 
with a different setup would always be more than what we have now.
This is another advantage of onModelChanging or 'before' events. All of 
the previous values are available for listeners and the task of 
selecting which ones to process is left to the listener.


jjb





Re: [WebIDL] Exceptions

2011-07-06 Thread John J. Barton

On 7/6/2011 6:06 PM, Allen Wirfs-Brock wrote:


I'd much prefer to see code that looks like:
  try {doSomeDOMStuff() }
  catch (e) {
 switch (e.name) {
  case NoNotificationAllowedError: ...; break;
  case HierarchyRequestError: ...; break;
  default: throw e
   }
  }
Any work with the DOM API (at least in Firefox) makes even this case 
impractical.  You'll be lucky if the exception has *any* meaningful content.
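A runnable version of the dispatch pattern quoted above; the error names are the strings from the snippet, compared as strings since exception `name` properties are string-valued (the handler bodies here are placeholders, not real recovery code):

```javascript
// Dispatch on an exception's `name`, as in the quoted snippet.
// The return values are placeholder handling for illustration.
function handleDOMError(e) {
  switch (e.name) {
    case "NoNotificationAllowedError":
      return "blocked notification";
    case "HierarchyRequestError":
      return "bad hierarchy";
    default:
      throw e; // unrecognized errors propagate
  }
}

// Usage: simulate a DOMException-like error object.
const err = new Error("node cannot be inserted at this point");
err.name = "HierarchyRequestError";
console.log(handleDOMError(err)); // "bad hierarchy"
```

Of course, the pattern only helps if the thrown exception actually carries a meaningful `name`, which is exactly the complaint above.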


jjb



Re: Mutation events replacement

2011-07-04 Thread John J. Barton

On 7/3/2011 10:26 AM, Ryosuke Niwa wrote:
On Sun, Jul 3, 2011 at 8:41 AM, John J. Barton 
johnjbar...@johnjbarton.com mailto:johnjbar...@johnjbarton.com wrote:


On 7/2/2011 8:50 PM, Boris Zbarsky wrote:

On 7/2/11 1:46 PM, John J. Barton wrote:

2) element transformation. The replacement fires after a
mutation.
Library or tools that want to transform the application
dynamically want
to get notification before the mutation. A common
solution then is
to bracket changes:
beforeChange or onModelChanging
afterChange or onModelChanged


This really only works if you trust the listeners.  The
browser core can't trust scripted listeners using Web APIs.

I don't understand what 'trust' means here.  I am not proposing
any change to the privileges of listeners. How can the browser
core trust an 'onModelChanged' listener but not an
'onModelChanging' listener?


If the user agent fires a modelChanging event, then the user agent 
must verify that the pre-condition of removal is still met after the 
event is fired.  This is extremely hard to do and very error-prone.
In the current proposal, the DOM API is manipulated while the 
onModelChange mutation listeners run. This manipulation ensures certain 
properties of the overall mutation process.  However the manipulation 
makes the API unreliable and the overall solution forces some use cases 
to adopt bizarre solutions.


I am not asking you to support onModelChanging with full DOM API access. 
I am asking you to take an open minded look at onModelChanging with 
manipulation of the API to maintain the pre-conditions you require.


Instead of surreptitiously changing the DOM API (to cause results to 
appear out of order), I am suggesting that the API be explicit. Rather 
than silently delaying the results of DOM mutations made in mutation 
event handlers, make the delay explicit. Force developers to delay 
mutations. Force developers to operate against your system the way you 
say they must, rather than pretending that they can mutate in mutation 
listeners.


Let's set the onModelChange/onModelChanging differences aside and focus 
just on the effective DOM API in mutation listeners. Your proposal is to 
change the DOM API experienced in listeners from the DOM API experienced 
outside of listeners.  The purpose is to control the order of DOM 
mutation.  You can achieve the same goal in other ways.


For example, consider WebWorkers. There we face a similar problem: the 
browser requires certain restrictions on the worker.  Direct DOM 
mutation must not be allowed. Developers have to work within these 
restrictions. Developing with WebWorkers is different from developing in 
web pages.  But the differences are clear and predictable.


In the present case, imagine that the mutation listeners have only one 
function call available: onNextTurn().   Their only option is to stage 
work based on the arguments to the listener. Within this model verifying 
preconditions is even easier than the current proposal *and* developers 
will have a reliable API.  This approach makes the properties of the 
platform explicit: no mutations within mutations.
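A sketch of that restriction, assuming a hypothetical onNextTurn() as the listener's only capability; runNextTurn() stands in for the browser scheduling a fresh event-loop turn (all names are invented for illustration):

```javascript
// Sketch: listeners can only stage work for the next turn; they get no
// way to mutate during the notification phase.
const staged = [];
function onNextTurn(fn) { staged.push(fn); }

function notifyListeners(listeners, record) {
  // Listeners run synchronously but can only call onNextTurn().
  for (const cb of listeners) cb(record, onNextTurn);
}

function runNextTurn() {
  // In a real implementation this would be a fresh event-loop turn.
  for (const fn of staged.splice(0)) fn();
}

// Usage: the listener stages a follow-up mutation instead of mutating.
const log = [];
notifyListeners(
  [(rec, later) => later(() => log.push("mutate " + rec.attr))],
  { attr: "class" });
console.log(log.length); // 0 -- nothing ran during notification
runNextTurn();
console.log(log[0]);     // "mutate class"
```

The point of the sketch is that nothing observable happens during the notification phase itself; all follow-on work is explicitly deferred.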


jjb


Re: Mutation events replacement

2011-07-04 Thread John J. Barton

On 7/3/2011 1:23 PM, Boris Zbarsky wrote:

On 7/3/11 2:43 PM, John J. Barton wrote:

I'm not sure what you're asking...  The whole point of the proposed 
model is that if someone tries to do a mutation the mutation _will_ 
happen and will complete.  _Then_ listeners, if any, will be notified. 
What are you worried about working or failing?


According to Olli, some functions in mutation listeners will fail. The 
list of functions is not specified directly, but is implicit in the 
algorithm: some function's actions become asynchronous.  This means one 
cannot write reliable code in mutation listeners and, worse, efforts to 
debug your failing code will fail. Code examples that work outside of 
mutation listeners will silently fail inside of mutation listeners.


I have experience with these kinds of mysteriously-failing APIs in 
browser extension systems and that is why I am advocating against their 
inclusion in Web facing systems. If these already exist in the various 
browser implementations of Mutation events listeners, that does not mean 
its replacement should perpetuate the problem.





Ok, that's good, whatever it takes. A DOM API that switches between
read-only and read-write would much better for developers than a DOM API
that partly switches to async.


Well, it sounds better to you.  I'm not sure it sounds better to 
developers.


If you think it's ok for assigning to a global variable to throw in a 
mutation listener, and that this is better than some delay in the 
listener firing (not actually async; Jonas' proposal doesn't really 
fire things async, if you note), then I suspect developers might 
disagree with you.


The issue is not the delay in the listener firing. The issue is the 
effectively broken API within the listeners. Some functions called in 
listeners do not work  the same way they do outside of listeners.


Developers want a complete DOM API in mutation listeners. They can't 
have it. So the only question is how to express the restrictions.  
Silently changing the behavior of the API is not a good choice in my 
opinion.





Consider the alternative. In this use case, the developer wants to
modify an execCommand. In the current replacement solution they have to
wait for the execCommand to take effect, then undo the execCommand and
redo it modified. Creating a good user experience may not be possible.


Quite honestly, that's the developer's problem.
No, it can't be the developer's problem because the current API does not 
allow the developer to fix the problem.  I want to make it the 
developer's problem. I want the developer to be able to reject 
operations before they commit, because the alternative is undo/redo.


Now the developer of course wants to push as much of the cost of this 
problem onto the UA as possible, and this makes sense: there are a lot 
fewer UAs than developers. 

This has nothing to do with my perspective.
...

You're missing at least the following options:

4. Restrict any APIs that have this sort of power so they're not 
usable by untrusted web pages (e.g. move them into browser extension 
systems).
If you can implement onModelChanging for extensions without crashing, 
then you can implement it for Web pages.
5. Accept that certain levels of the platform just can't be hooked, at 
least for the time being.

There is also:
6. Leave the current Mutation event system as is.


Again, I think trying to shoehorn all mutation consumers into the same 
API is a bad idea that gave us the current mutation events.  Some 
consumers just want to know things have changed and not much more than 
that.  Some want to know details of the changes.  Some want to rewrite 
parts of the browser on the fly.  It's not clear to me that the same 
API for all three sets of consumers is the right solution.
By restricting mutation listeners to explicitly avoid DOM mutation, the 
most sophisticated case is no different than the simple case. Then all 
three can be accommodated.


jjb



Re: Mutation events replacement

2011-07-04 Thread John J. Barton

On 7/4/2011 9:38 AM, Olli Pettay wrote:

On 07/04/2011 07:23 PM, John J. Barton wrote:

On 7/3/2011 1:23 PM, Boris Zbarsky wrote:

On 7/3/11 2:43 PM, John J. Barton wrote:

I'm not sure what you're asking... The whole point of the proposed
model is that if someone tries to do a mutation the mutation _will_
happen and will complete. _Then_ listeners, if any, will be notified.
What are you worried about working or failing?


According to Olli, some functions in mutation listeners will fail.

What? I don't understand this at all.

Sorry, it was actually in Jonas' post you quoted:


The only undesirable feature is that code that mutates the DOM from
inside a callback, say by calling setAttribute, can't rely on that by
the time that setAttribute returns, all callbacks have been notified.
This is unfortunately required if we want the second desirable
property listed above.



If I understand correctly, this description understates the problem.   
Isn't it the case that all DOM mutations will act differently in 
mutation listeners?  Code tested outside of mutation listeners can fail 
inside of mutation listeners, correct?


This is indeed an undesirable feature, and I don't believe it is 
necessary.  We can achieve the second desirable property, which is 
really about avoiding mutation in mutation listeners, in other ways. 
Instead of special-case asynchrony, make the asynchrony explicit.  
Developers will hate it, just like they hate XHR asynchrony, but it's 
better than unreliable programs.


 The

list of functions is not specified directly, but is implicit in the
algorithm: some function's actions become asynchronous.

No. Change notifications are queued, and the listeners handling the
queue will called at the time when the outermost DOM mutation is about
to return.
How can I reconcile your answer with the idea that setAttribute() does 
not work synchronously in my mutation listener? Can my mutation listener 
mutate the DOM and expect those mutations to act the way they do outside 
of the listeners?  My understanding is no: the API works differently 
inside of mutation listeners.


jjb



Re: Mutation events replacement

2011-07-04 Thread John J. Barton

On 7/4/2011 6:34 PM, Boris Zbarsky wrote:

On 7/4/11 12:09 PM, John J. Barton wrote:

In the current proposal, the DOM API is manipulated while the
onModelChange mutation listeners run.


Citation please?  I see nothing like that in the proposal.

http://www.mail-archive.com/public-webapps@w3.org/msg14008.html

The only undesirable feature is that code that mutates the DOM from
inside a callback, say by calling setAttribute, can't rely on that by
the time that setAttribute returns,





In the present case, imagine that the mutation listeners have only one
function call available: onNextTurn().


How do you ensure that given that the arguments include DOM nodes 
which then transitively allow you to reach all of the DOM's 
functionality?

By making all such calls fail.

jjb


-Boris





Re: Mutation events replacement

2011-07-04 Thread John J. Barton

On 7/4/2011 6:39 PM, Boris Zbarsky wrote:

On 7/4/11 12:23 PM, John J. Barton wrote:


By restricting mutation listeners to explicitly avoid DOM mutation, the
most sophisticated case is no different than the simple case. Then all
three can be accommodated.


If such a restriction were feasible, it might be worth looking into.  
It would involve not passing any DOM nodes to the mutation listener, I 
suspect.


All I am asking is a few minutes of reasonable consideration for 
alternatives before many thousands of person hours become invested in 
the proposed mutation events replacement.


jjb



Re: Mutation events replacement

2011-07-03 Thread John J. Barton

On 7/2/2011 12:36 PM, Ryosuke Niwa wrote:

On Sat, Jul 2, 2011 at 10:46 AM, John J. Barton
johnjbar...@johnjbarton.com  wrote:

1) break on mutation. In Firebug we add DOM mutation listeners to
implement graphical breakpoints. The replacement would work fine for
local, element observation breakpoints like add/remove attribute.
If my goal is to break on addition of elements with class=foo, then
I guess I have to listen for addChildlistChanged on all elements, and
add an additional addChildlistChanged listener for each new element?
So in general one would implement document observation by walking
the DOM and covering it with listeners?

I don't think we can support this use case in general.  We're trying
to avoid invoking scripts synchronously and your use case requires
exactly that.

Are you talking about the algorithm described by Olli and Jonas:
http://www.mail-archive.com/public-webapps@w3.org/msg14008.html
? If I set a breakpoint in a listener in their algorithm I would expect 
to halt the browser before the algorithm completed and especially before 
another DOM change can occur. (An entirely separate issue is how the 
debugger mutates the document).


The only part that is async in the Mutation Event replacement is further 
DOM events.

2) element transformation. The replacement fires after a mutation.
Library or tools that want to transform the application dynamically want
to get notification before the mutation.

Why do you need to get notified before the mutation?
Because developers want to take actions before the mutation.  Removing 
mutations after they are committed is a kludge we should try to avoid.

Again, this is
exactly the sort of usage that causes us headaches and what we want to
avoid because scripts can then modify DOM before we make mutations.

A common solution then is to bracket changes:
beforeChange or onModelChanging
afterChange or onModelChanged
Of course element transformation may want to prevent the current
change and replace it. Some changes are reversible so the observed
change can be countered with a remove (at unknown cost to performance
and user experience).

I don't think we can address your use case here.  Scripts'
intercepting and preventing mutations browser is about to make or
mutating DOM in the way browser doesn't expect is exactly what we want
to avoid.

The stated properties (goals) of the replacement are (rephrased):
  1. No script operation in the middle of mutation,
  2. Callbacks in order of mutation,
  3. No propagation chain.
Importantly, the current proposal has an undesirable feature:
  4. DOM modification in listeners is asynchronous.
Preventing mutation in a 'before' listener still allows all of the 
properties as far as I know.


A two-phase-commit like solution can have all of these properties.  
Copy Jonas' AttributeChanged algorithm from the page cited above. 
Replace ...ed with ...ing. Add a flag 'cancelChange' initially 
false.  Add step
8b. If cancelChange is true, abort these steps and the DOM 
mutation.  Fire event DOMMutationCanceled


The 'before' version would be more powerful.

The Mutation events replacement gains its advantages from property 3 and 
especially 4.  Whether you enter the callback algorithm before or after 
the mutation does not change this. In fact I think from the API point of 
view the 'before' version is clearer.  If the developer knows that the 
callback is 'before' the mutation, they are naturally set up to think 
about the meaning of additional DOM mutations.  The arguments in the 
callback are known to be in flight.


In fact we could go one step further and eliminate the undesirable 
feature #4: if notifyingCallbacks is true, all DOM write operations 
fail.  That way developers are never surprised by DOM functions that 
fail as in the current version. Instead, developers will explicitly have 
to cascade modifications via separate events. This would be a much less 
mysterious solution.  In some places Firefox already works in this 
mysterious way, and speaking from experience it is not fun to figure out 
why your function calls have no effect.


If you have both a before and an after event and both events prevent 
DOM writes, then the programming paradigm could be clear:
   before handlers cancel mutations and signal 'after' handlers to 
stage alternatives;
   after handlers stage responses to mutations or alternatives to 
canceled mutations.
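A toy sketch of that bracket, with invented names: 'before' handlers can only veto the pending change, 'after' handlers observe the outcome and may stage follow-up work. This models the proposal's shape, not any real DOM API:

```javascript
// Sketch of the before/after bracket (hypothetical API).
function applyMutation(target, change, before, after) {
  // 'before' handlers may only cancel, by returning false.
  const cancelled = before.some(cb => cb(change) === false);
  if (!cancelled) Object.assign(target, change);
  // 'after' handlers see the outcome and can stage alternatives.
  for (const cb of after) cb(change, cancelled);
  return !cancelled;
}

// Usage: a before handler vetoes one value; an after handler stages
// an alternative for canceled mutations.
const el = { cls: "old" };
const stagedAlternatives = [];
const before = [c => c.cls !== "forbidden"];
const after = [(c, cancelled) => { if (cancelled) stagedAlternatives.push(c); }];

applyMutation(el, { cls: "new" }, before, after);
console.log(el.cls); // "new"
applyMutation(el, { cls: "forbidden" }, before, after);
console.log(el.cls);                    // still "new"
console.log(stagedAlternatives.length); // 1 -- staged for a later turn
```

The canceled change never touches the target; the only record of it is what the after handler chose to stage.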


jjb



Re: Mutation events replacement

2011-07-02 Thread John J. Barton



Olli Pettay
Tue, 28 Jun 2011 04:32:14 -0700
These are *not* DOM-Event listeners. No DOM Events are created, there
are no capture phases or bubbling phases. Instead you register a
listener on the node you are interested in being notified about, and
will get a call after a mutation takes place.


The proposed model will be great for element observation, but, as far
as I understand it, document observation and especially
document transformation would not be supported.

Perhaps someone can outline how the replacement would solve two
use cases for DOM mutations:

1) break on mutation. In Firebug we add DOM mutation listeners to
implement graphical breakpoints. The replacement would work fine for
local, element observation breakpoints like add/remove attribute.
If my goal is to break on addition of elements with class=foo, then
I guess I have to listen for addChildlistChanged on all elements, and
add an additional addChildlistChanged listener for each new element?
So in general one would implement document observation by walking
the DOM and covering it with listeners?

2) element transformation. The replacement fires after a mutation.
Library or tools that want to transform the application dynamically want
to get notification before the mutation. A common solution then is
to bracket changes:
beforeChange or onModelChanging
afterChange or onModelChanged
Of course element transformation may want to prevent the current
change and replace it. Some changes are reversible so the observed
change can be countered with a remove (at unknown cost to performance
and user experience). But some changes are irreversible such as
script tag addition. So, for example, I can't see how to implement
JS pre-processing using only the after event. (This may not be
possible with the current mutation events either, but it is something
I want to do.)

Thanks,
jjb




Re: Storage 'length' and enumeration

2009-04-29 Thread John J. Barton




Ian Hickson wrote:

  On Tue, 28 Apr 2009, John J. Barton wrote:
  
  
And then afterwards the |length| is ? one? three?

  
  
One.

  
  
If I iterate
  for (var i = 0; i < sessionStore.length; i++) foo(i, sessionStore[i]);
what can I expect in foo()?
 (0, null), (1, null), (2, "2")
or
  (0, "2")?
or ?

  
  
(0, "2").

I reiterate my criticism: using a length property in this type is
inconsistent with JavaScript and with developers' expectations about
objects. Every time we use this object we will make pointless mistakes
because the type mimics arrays only partially, and we won't be able to
recall which part it imitates. A simple change from |length| to a
method call like getNumberOfItems() would prevent this coincidental
mimicry and make the standard better.

jjb





Re: Storage 'length' and enumeration

2009-04-29 Thread John J Barton




Ian Hickson wrote:

  On Wed, 29 Apr 2009, John J. Barton wrote:
  
  
I reiterate my criticism: using a length property in this type is 
inconsistent with JavaScript and with developers' expectations about 
objects. Every time we use this object we will make pointless mistakes 
because the type mimics arrays only partially and we won't be able to 
recall which part it imitates. A simple change from |length| to a method 
call like getNumberOfItems() would prevent this coincidental mimicry 
and make the standard better.

  
  
The Storage object works like an HTMLCollection object except that you can 
also add items, and except that indexing by number returns the key, not 
the value, since otherwise there'd be no way to know which keys were being 
returned. I agree that it's not like an Array, but just having a "length" 
property doesn't mean it works like an Array -- there are lots of host 
objects in the DOM with "length" properties.
  

Yes, and Firebug has to have special code for HTMLCollection because
this mistake was made in the past. Now we will have to have different
special code for Storage. Rather than modeling new APIs on old mistakes,
consider learning from past experience and take a direction that
developers will find less confusing. Pseudo-arrays with "except...
this and that" make APIs intricate and puzzling. A simpler and less
ambiguous approach would be better in my opinion.

jjb





Re: Storage 'length' and enumeration

2009-04-29 Thread John J Barton




Ian Hickson wrote:

  On Wed, 29 Apr 2009, John J Barton wrote:
  
  
Yes and Firebug has to have special code for HTMLCollection because this 
mistake was made in the past. Now we will have to have different special 
code for Storage. Rather than modeling new API on old mistakes, consider 
learning from the past experience and take a direction that developers 
will find less confusing.  Pseudo-arrays with "except... this and that" 
makes APIs intricate and puzzling.  A simpler and less ambiguous 
approach would be better in my opinion.

  
  
It's not an array or a pseudo-array. It's an enumerable JS host object.
  

So why call the property |length|? Wouldn't an enumerable JS host
object be just as fabulous with getNumberOfItems()?

But the part that has me confused is maybe just me being dumb. I just
have a hard time with:
 sessionStorage[2] = "foo"; // set the key |2| to value "foo".
then
 var x = sessionStorage[2]; // null? "foo"? 
 var y = sessionStorage[0]; // "2"
I'm thinking: why do I have to think so hard about this? It should be
just an associative array.


  
Firefox will have to have special code to implement Storage anyway; why is 
more special code to show it in Firebug a bad thing? In fact, it's 
probably a good thing, since for Storage you probably don't want to be 
showing the data in the debugger all the time anyway (since that has 
performance implications).

  

Firebug shows objects from Firefox to developers. The appropriate
display format for objects depends on the character of the objects and
the needs of developers. For example, arrays are shown in square
brackets with the first few entries, ["foo", "bar", ...]. HTML Elements
are shown with their nodeName and id if any. In this way developers
can quickly get an idea of the nature of the object, and perhaps drill
down for more information.

How many display formats should be created? One for every kind of
object is simply impractical. Even if time allowed creating formats
for all the built-in types, all the DOM types, and all the library types
(Prototype, jQuery, Dojo, ...), there would still be user types. So
you have to create categories of representations that cover the
important cases. Firebug has about thirty. 

Now given an object, how do we assign it to one of these thirty
representations? The only possibility is to examine the properties of
the object. Since the JavaScript built-in Array type certainly has a
|length|, and since any type with a length property is likely to be
designed to be array-like, the tests for representation use the
existence of a length property that is a finite number as part of the
test. 

Of course there are objects that are not arrays and yet have a length,
e.g. Strings. Firebug has a separate representation for those, to avoid
showing "this" as ["t", "h", "i", "s"].

Since Storage has a length, it originally appeared in Firebug as
[]. 

If your API were such that sessionStorage[i] gave the i-th entry, a
key/value pair, then the array representation would already work for
us.
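To illustrate that suggestion (this accessor is hypothetical, not part of
any spec): index i would yield the i-th key/value pair, so a generic
array representation could show Storage contents directly.

```javascript
// Hypothetical sketch (not in any spec) of the suggestion above: an
// accessor where index i yields the i-th key/value pair, which a
// generic array representation could display as-is.
function entryAt(storage, i) {
  const k = storage.key(i);
  return k === null ? null : [k, storage.getItem(k)];
}

// Stand-in for a Storage-like object, for illustration only.
const fake = {
  key: (i) => (i === 0 ? "2" : null),
  getItem: (k) => (String(k) === "2" ? "foo" : null),
};
console.log(entryAt(fake, 0)); // ["2", "foo"]
```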

I hope this makes the issues clearer.
jjb







Re: Storage 'length' and enumeration

2009-04-29 Thread John J Barton

Anne van Kesteren wrote:
On Wed, 29 Apr 2009 20:51:33 +0200, John J Barton 
johnjbar...@johnjbarton.com wrote:
Yes, and Firebug has to have special code for HTMLCollection because
this mistake was made in the past. Now we will have to have different
special code for Storage. Rather than modeling new API on old mistakes,
consider learning from past experience and take a direction that
developers will find less confusing. Pseudo-arrays with exceptions for
this and that make APIs intricate and puzzling. A simpler and less
ambiguous approach would be better in my opinion.


Is there any type of object that holds a collection that does not use 
.length? Seems a bit weird to break consistency here in my opinion.

Consistency is exactly not wanted, because it creates the impression of
an array-like access pattern where there is not one. sessionStorage[2]
is not the third item stored. Actually I don't know what it is; I'm
confused.


There are lots of online articles explaining JavaScript arrays versus
associative arrays (objects). Having a type which is an associative
array -- Storage -- share a property name with Array just makes matters
worse.


jjb



Re: Storage 'length' and enumeration

2009-04-28 Thread John J. Barton




Ian Hickson wrote:

  On Tue, 28 Apr 2009, John J Barton wrote:
  
  
Sorry, I don't follow what you mean. The loop is possible of course, but 
what should the result be? If I have a sessionStorage object |s| with 10 
items, the length will be 10. Should I expect |s[i]| for i=0,..., 9?  
If so what will be the result, keys? items? Can I set values, eg s[2] = 
"howdy"?

  
  
Keys:

# The object's indices of the supported indexed properties are the numbers 
# in the range zero to one less than the number of key/value pairs 
# currently present in the list associated with the object.

From the IDL:
# [IndexGetter] DOMString key(in unsigned long index);

# The key(n) method must return the name of the nth key in the list.
  

From the perspective of development tools (and perhaps thus
developers), this pseudo-array interface is unfortunate. Storage is
neither an array of entries nor an associative array. As such, tools
like Firebug will just show it as Object with a curious property
'length' which looks like a key but is not. I think a getLength()
function would be more true to the character of the type.
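For concreteness, here is a minimal mock of the quoted semantics. This is
illustrative only -- the real Storage object is host-provided -- but it
shows key(n) returning the nth key and length counting key/value pairs.

```javascript
// Minimal illustrative mock of the Storage interface quoted above.
// Not a browser implementation -- just enough to show key(n)/length.
class MockStorage {
  constructor() {
    this._keys = [];                       // insertion-ordered key list
    this._map = Object.create(null);       // key -> value
  }
  get length() { return this._keys.length; }
  key(n) {
    // key(n) returns the name of the nth key, not a value
    return (n >= 0 && n < this._keys.length) ? this._keys[n] : null;
  }
  setItem(k, v) {
    k = String(k);                         // keys are strings, so 2 becomes "2"
    if (!(k in this._map)) this._keys.push(k);
    this._map[k] = String(v);
  }
  getItem(k) {
    const v = this._map[String(k)];
    return v === undefined ? null : v;
  }
}

const store = new MockStorage();
store.setItem(2, "howdy");      // sets the key "2", as in the thread
console.log(store.length);      // 1
console.log(store.key(0));      // "2"
console.log(store.getItem(2));  // "howdy"
```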

  
See WebIDL for the definition of the [IndexGetter] syntax.
  

I could not figure out from the WebIDL what happens in this case:
 sessionStore[2] = "howdy"; // no other keys in sessionStore

I guess this does not work like JavaScript arrays or objects; rather, I
expect it fails?

jjb





Re: Storage 'length' and enumeration

2009-04-28 Thread John J. Barton




Ian Hickson wrote:

  On Tue, 28 Apr 2009, John J. Barton wrote:
  
  
I could not figure out from the WebIDL what happens in this case:
   sessionStore[2] = "howdy"; // no other keys in sessionStore

I guess this does not work like JavaScript arrays or objects; rather, I 
expect it fails?

  
  
It works, it just sets the key "2" to the value "howdy".
  

And then afterwards the |length| is what? One? Three?
If I iterate 
 for (var i = 0; i < sessionStore.length; i++) foo(i, sessionStore[i]);
what can I expect in foo()? 
 (0, null), (1, null), (2, "2")
or 
 (0, "2")?
or ?
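
For contrast, a loop that sticks to the explicit key()/getItem() methods
leaves no such ambiguity. A sketch, using a stand-in object since the
real sessionStorage is host-provided:

```javascript
// Sketch: iterating Storage via the explicit methods instead of
// indexed access. |s| is a stand-in for a Storage-like object holding
// one pair, key "2" -> "howdy"; in a browser this would be
// sessionStorage itself.
const s = {
  _data: { "2": "howdy" },
  get length() { return Object.keys(this._data).length; },
  key(n) { return Object.keys(this._data)[n] ?? null; },
  getItem(k) { return this._data[String(k)] ?? null; },
};

const pairs = [];
for (let i = 0; i < s.length; i++) {
  pairs.push([s.key(i), s.getItem(s.key(i))]);
}
console.log(pairs); // one entry: ["2", "howdy"]
```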

Thanks,
jjb