Re: [whatwg] Supporting feature tests of untestable features

2015-04-08 Thread Kyle Simpson
A lot of the untestable bugs have been around for a really, really long time, 
and are probably never going away. In fact, as we all know, as soon as a bug is 
around long enough and in enough browsers and enough people are working around 
that bug, it becomes a permanent feature of the web.

So to shrug off the concerns driving this thread as "bugs can be fixed" is 
either disingenuous or (at best) ignorant of the way the web really works. 
Sorry to be so blunt, but it's frustrating that our discussion would be 
derailed by rabbit trail stuff like that. The point is not whether this 
clipboard API has bugs or that canvas API doesn't or whatever.

Just because some examples discussed for illustration purposes are bug related 
doesn't mean that they're all bug related. There **are** untestable features, 
and this is an unhealthy pattern for the growth/maturity of the web platform.

For example:

1. font-smoothing
2. canvas anti-aliasing behavior (some of it is FT'able, but not all of it)
3. clamping of timers
4. preflight/prefetching/prerendering
5. various behaviors with CSS transforms (like when browsers have to optimize a 
scaling/translating behavior and that causes visual artifacts -- not a bug 
because they refuse to change it for perf reasons)
6. CSS word hyphenation quirks
7. ...

The point I'm making is there will always be features the browsers implement 
that won't have a nice clean API namespace or property to check for. And many 
or all of those will be things developers would like to detect the presence or 
absence of, so they can make different decisions about what to serve and how.

Philosophically, you may disagree that devs *should* want to test for such 
things, but that doesn't change the fact that they *do*. And right now, they do 
even worse stuff like parsing UA strings and looking features up in huge cached 
results tables.

Consider just how huge an impact stuff like caniuse data is having right now, 
given that its data is being baked into build-process tools like CSS 
preprocessors, JS transpilers, etc. Tens of millions of sites are implicitly 
relying not on real feature tests but on (imperfect) cached test data from 
manual tests, and then inference matching purely through UA parsing voodoo.
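
For illustration, the kind of inference matching I mean looks roughly like this 
(the support table and UA matching here are invented, purely illustrative):

var supportTable = {                        // cached, manually maintained data
  "Chrome/41":  { fontSmoothing: true },
  "Firefox/37": { fontSmoothing: false }
};
var uaKey = (navigator.userAgent.match(/(Chrome|Firefox)\/\d+/) || [])[0];
var hasFontSmoothing =
  !!(supportTable[uaKey] && supportTable[uaKey].fontSmoothing);

No actual feature test ever runs; the decision is only as good as the cached 
table and the UA sniffing.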

[whatwg] Supporting feature tests of untestable features

2015-03-31 Thread Kyle Simpson
There are features being added to the DOM/web platform, or at least under 
consideration, that do not have reasonable feature tests obvious/practical in 
their design. I consider this a problem, because every feature which authors 
(especially library authors, like me) rely on should be testable for presence, 
with a fallback available when it is absent.

Paul Irish did a round-up a while back of so-called "undetectables" here: 
https://github.com/Modernizr/Modernizr/wiki/Undetectables

I don't want to get off topic in the weeds and/or invite bikeshedding about 
individual hard to test features. So I just want to keep this discussion to a 
narrow request:

Can we add something like a feature test API (whatever it's called) where 
certain hard cases can be exposed as tests in some way?
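
Purely as a sketch of the *shape* such an API might take (nothing like this is 
specced; the name and method here are invented):

if (window.featureTest && window.featureTest("link-rel-preload")) {
  // the browser itself vouches for the behavior; rely on it
} else {
  // fall back to a different strategy
}

The important part isn't the name, it's that the browser answers the question 
directly, instead of authors inferring the answer indirectly.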

The main motivation for starting this thread is the new `link rel=preload` 
feature as described here: https://github.com/w3c/preload

Specifically, in this issue thread: https://github.com/w3c/preload/issues/7 I 
bring up the need for that feature to be testable, and observe that as 
currently designed, no such test is feasible. I believe that must be addressed, 
and it was suggested that perhaps a more general solution could be devised if 
we bring this to a wider discussion audience.

Re: [whatwg] Preloading and deferred loading of scripts and other resources

2014-08-23 Thread Kyle Simpson
 Surely our goal should be to make script loaders unnecessary.

There's unquestionably a lot of folks on this thread for whom that is their 
main concern. I think it's a mistake to assume that because they mostly seem to 
be working as browser developers (which strongly influences their perspective, 
I believe) that this is a universal goal.

In fact, I would say I have the opposite goal. I currently, and have always 
(for the 3-4 years this discussion has been languishing), wanted to make script 
loaders better and more capable, not get rid of them.

Two primary reasons:


1. A hand-authorable, markup-only (aka zero-script-loader) approach is, and 
always will be, limited. Limited to what? To the initial page load.

A significant portion of the discussion in the long and sordid history of this 
thread of discussion is indeed centered around browsers wanting to optimize 
markup loading using their pre-scanner, etc.

There's a strong implied assumption that zero-script-loader 
hand-authored-markup-only page-load-only script loading is the most important, 
and in fact the only important, concern. I have registered already many times 
my strong objection to this mindset.

Originally, my participation in the thread, and my many proposals and 
variations along the way, actually didn't really care nearly as much about this 
initial page-load case. But let me clarify: it's not that I don't care about 
initial page-load at all, but that I care *more* (much more, in fact) about 
enabling and making easy-to-use dynamic in-page on-demand loading use-cases.

The proposals I made for improving that usage also degrade to letting the same 
script loader logic handle the initial page-load using the same mechanisms.

IOW, my focus is on on-demand loading capabilities, via a script loader, and 
the initial-page-load script loading (via script loader) is a simple degraded 
base-case for the same capabilities.

The reverse is not true. The hand-authored-markup-only solutions being proposed 
largely don't care at all about the script loaders use cases, and certainly 
don't degrade in any reasonable way to improving the outlook for dynamic script 
loading (see the initial quote above). In fact, they sometimes make those use 
cases MUCH WORSE.

Why?

Because the markup-only solutions make lots of assumptions that don't apply to 
on-demand loading, such as availability of DOM elements, order of parsing, 
etc. Some of the concerns have analogs in script loading logic, but they're 
awkward and inefficient from the script loader point of view.

--
The majority of markup-focused solutions thus far proposed favor 
hand-authored-markup-only, and seem unconcerned by the fact that they will make 
the on-demand script loader use-cases harder.
--

For example:

var s1 = document.createElement("script");
s1.id = "some-id-1";
s1.src = "http://some.url.1";

var s2 = document.createElement("script");
s2.id = "some-id-2";
s2.src = "http://some.url.2";
s2.needs = s1.id;

var s3 = document.createElement("script");
s3.src = "http://some.url.3";
s3.needs = s1.id + "," + s2.id;

document.head.appendChild(s3); // append in reverse order to make sure `needs`
                               // is registered early enough
document.head.appendChild(s2);
document.head.appendChild(s1);

This is a super simple distillation of generated script loader logic to load 3 
scripts, where 2 needs 1, and 3 needs both 1 and 2. If you author JS code like 
this by hand, it seems somewhat tractable.

But, as the chains get longer (more scripts) and the dependencies get more 
complex, this becomes increasingly difficult (in fact, approaching 
impossible/impractical) to actually generate via a generalized script loader.

Why? Because the script loader has to know the entire list of scripts (IOW it 
needs to make its own internal queue/s) in this group, before it can reason 
about how to generate the ID's and wire up the attributes.

By contrast, most good script loaders right now that take advantage of the 
`async=false` internal browser loading queue can start loading in 1-2-3 order 
in a streaming fashion, not having to wait for all 3 to start. Why? Because the 
`async=false` loading queue that the browser provides implicitly handles the 
ordering.

The result? Current script loaders can load 1. Then later 2. Then later 3. And 
regardless of how long or short later is, or of how quickly things load, or 
if all 3 are indeed loaded at the same time -- in all these variations, the 
queue the browser provides just makes the loading/ordering work.
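
A minimal sketch of that pattern (existing behavior for dynamically inserted 
scripts; the URLs are placeholders):

var urls = ["http://some.url.1", "http://some.url.2", "http://some.url.3"];
urls.forEach(function(url){
  var s = document.createElement("script");
  s.async = false;              // opt into the ordered execution queue
  s.src = url;                  // downloads start immediately, in parallel
  document.head.appendChild(s); // execution still happens in insertion order
});

Each script can be appended the moment its URL is known; the browser's queue 
takes care of the ordering.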

The proposals being suggested actively do away with that assistance, in favor 
of making authors manage their own queues via chains of selectors, in markup.

Which leads to…


2. There is, intuitively, some threshold of complexity of markup beyond which 
most hand-authors will not go.

They may author:

<script src="http://some.url.1" async id="s1">
<script src="http://some.url.2" async needs="s1">

But, most probably would never author:

<script src="http://some.url.1" 

Re: [whatwg] Promise-vending loaded() ready() methods on various elements

2014-03-15 Thread Kyle Simpson
 As I noted above, what we need to know (and I guess we need to know this 
 from all browsers) if there's a *guarantee* of a-b-c execution order (even 
 if all 3 are executing async)
 
 I don't believe there is such a guarantee, unless the spec spells it out 
 explicitly.


The `async=false` stuff in the spec talks about dynamically loaded (not parser 
loaded) scripts going into a queue so they are downloaded in parallel, but 
executed in request-order from the queue.

So, in my aforementioned `execScript(..)` function, if it also set `s.async = 
false`, I believe that would opt all of the scripts into the async queue.

function execScript(l) {
  var s = document.createElement("script");
  s.async = false; // <-- insert this to get ordered-async
  s.src = l.href;
  document.head.appendChild(s);
  return s.loaded();
}

Even though all of them would, at that point, be strictly loading from cache, 
it should still have the effect of ensuring they execute strictly in a-b-c 
order, correct?



-


One downside to this is that there were use-cases where the single queue that 
this spec mechanism created was not sufficient, such as loading a group of 
scripts for widget A and another independent group of scripts for widget B, and 
not wanting A or B to block the other.

If all of those scripts were set with `async=false` and thus all put in that 
single queue, widget A's scripts could block widget B's scripts, which sorta 
fails that use-case.

However, it would probably only be a slight delay, as you wouldn't (in the 
previously mentioned code pattern) add the script elements to the DOM (and 
thus the queue) until after all the link rel=preload's had finished loading, 
so it would only be parsing/execution that blocked, not downloading.

Execution is already an implicit blocking, as the engine can only run one 
script at a time, so actually, it's just a concern of potential parsing 
blocking other parsing.

The question is whether `async=false` scripts in the queue can be parsed in 
parallel (unblocked from each other) on the background threads, as you said, or 
whether being in that async=false queue means that both parsing and execution 
happen in order, such that long parsing of widget A's scripts could block 
parsing of widget B's scripts?

However you slice it, I think it would cause *some* delays between widget A and 
B (aka, not totally independent), but it would in general be far less delay 
than what we have currently, which is that downloading blocks in that queue. So 
that seems like a big (if not complete) win. :)
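
For illustration, one way a loader could keep the two groups independent is to 
give each group its own ordering chain instead of sharing the single 
`async=false` queue. This reuses the preloadScript()/execScript() helpers 
sketched elsewhere in this thread, which themselves assume the proposed 
.loaded() promise:

function loadGroup(urls) {
  return Promise.all(urls.map(preloadScript)).then(function(links){
    return links.reduce(function(chain, link){
      return chain.then(function(){ return execScript(link); });
    }, Promise.resolve());
  });
}

loadGroup(["widgetA-1.js", "widgetA-2.js"]); // widget A's ordered chain
loadGroup(["widgetB-1.js", "widgetB-2.js"]); // widget B's, independent of A

Of course, that trades the browser's queue for manual ordering, which is 
exactly the ugliness this thread has been wrestling with.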



--Kyle








Re: [whatwg] Promise-vending loaded() ready() methods on various elements

2014-03-15 Thread Kyle Simpson
 I'd say the first syntax is a bit verbose for what I was dreaming 4 years
 ago when I started asking for a simple script preloading mechanism, but
 it's just this side of acceptable. If we have to take the second approach,
 I say that's unacceptably more verbose/complex and falls short of my
 opinion of everything we need for sane & versatile dependency loading.
 
 It's everything we need but perhaps not everything we desire.


If everything we need was the only standard, then the proposals we had 
discussed years ago should have been pushed through, years ago. Those 
discussions were bogged down because, as you say in the next sentence (quoted 
below), there were some on this list who insisted that mechanisms which 
couldn't be markup-only were insufficient, not just undesirable.

For my part, I hope our current discussion thread is signaling a shift from the 
previous dogmatism (on all sides).



 Last time we
 went round with script loading the proposal for high-level dependency
 declaration got weighed down by use-cases that should have been left to
 lower level primitives. These are the lower level bits, something higher
 level is still possible. For legacy-free project, modules + promises for
 non-script assets are the answer.

It was never a priority of *mine* to support zero-script-loader capability. I 
think, and always have, that script loaders are the place where logic for 
anything beyond straight-forward loading *should* occur.

But there was clearly a LOT of noise in the past whenever I pointed out 
advanced use-cases, and this noise led to suggestions about markup-only 
(zero-script-loader) mechanisms for handling all or most of the use-cases in a 
way that theoretically could eliminate any need for script loaders. Recall all 
the discussions of `depends='...'` attributes being able to annotate one 
script's dependencies on another script or scripts?

I was only trying to call out the fact that there had been a very clear 
intention in the past to get some silver-bullet solution that implies 
zero-script-loader, and what we're currently discussing (.loaded() promises) is 
NOT that.

If we are, at this point in the years-long arc of discussion, ready to set that 
requirement aside, and admit that markup capabilities (link rel=preload and 
script) are for the straight linear a-b-c use-cases, and that the more 
sophisticated use-cases (like dynamic loading, independent widgets, etc etc) 
require logic that belongs in some form of script loader (whether a lib or a 
set of inline boilerplate code) that handles all the promises-negotiation, then 
I'm fine with that approach. I say fine relatively, because I don't really 
like the ugliness if we have to ensure order ourselves (maybe we don't!?), but 
it's not a deal-breaker either way.



--Kyle






Re: [whatwg] Promise-vending loaded() ready() methods on various elements

2014-03-14 Thread Kyle Simpson
 This, along with Ilya's proposed link rel=preload (
 https://docs.google.com/a/google.com/document/d/1HeTVglnZHD_mGSaID1gUZPqLAa1lXWObV-Zkx6q_HF4/edit#heading=h.x6lyw2ttub69),
 and JavaScript modules (
 https://github.com/dherman/web-modules/blob/master/module-tag/explainer.md)
  gives us everything we need for sane & versatile dependency loading.


Is link rel=preload going to fire this "loaded" event after it finishes 
pre-loading but BEFORE it executes (or, rather, BEFORE because it doesn't 
execute them at all)? Because for script, the "load" event fires only after 
it has loaded AND executed, which is of course too late for many of the more 
advanced use-cases below.

If you want to dynamically *preload* scripts (that is, you don't have link 
rel=preload tags in your initial page markup) later on in the lifetime of the 
page, is the story basically like this?



function preloadScript(src) {
   var l = document.createElement("link");
   l.rel = "preload";
   l.href = src;
   document.head.appendChild(l);
   return l.loaded();
}

function execScript(l) {
   var s = document.createElement("script");
   s.src = l.href;
   document.head.appendChild(s);
   return s.loaded();
}

Promise.all([
   preloadScript("a.js"),
   preloadScript("b.js"),
   preloadScript("c.js")
])
.then(function(links){
   return Promise.all(links.map(execScript));
})
.then(function(){
   alert("All scripts loaded and executed");
});
   


So, if that's how we think this would work, we need to understand how the 
`execScript(..)` logic is going to be treated. Is creating a script element 
dynamically and inserting it going to make sure that it either:

  a. executes sync
  b. executes async, but a.js will *definitely* execute before b.js, which 
will *definitely* execute before c.js.

In other words, is there any possibility that it won't execute in order a -> 
b -> c in the above code? If so, do we have to be more complex, like this?



Promise.all([
   preloadScript("a.js"),
   preloadScript("b.js"),
   preloadScript("c.js")
])
.then(function(links){
   var chain;
   links.forEach(function(link){
      if (!chain) chain = execScript(link);
      else chain = chain.then(function(){ return execScript(link); });
   });
   return chain;
})
.then(function(){
   alert("All scripts loaded and executed");
});




--Kyle






Re: [whatwg] Promise-vending loaded() ready() methods on various elements

2014-03-14 Thread Kyle Simpson
 So, if that's how we think this would work, we need to understand how the
 `execScript(..)` logic is going to be treated. Is creating a script
 element dynamically and inserting it going to make sure that it either:
 
  a. executes sync
  b. executes async, but a.js will *definitely* execute before b.js,
 which will *definitely* execute before c.js.
 
 
 I'm hoping a, but you tell me. Do you know what browsers do with a fully
 cached script? Is there consistency there? If not, yeah, you'll have to
 create a chain.

Regardless of (a) or (b), the simpler Promise approach (my first snippet) is 
sufficient, if and only if a-b-c is the  *guaranteed* execution order. That's 
the important part. If there's a chance the browser might do b-a-c (even if 
all were equally ready in the cache), then the pattern goes fubar and the 
uglier second syntax is required.

I'd say the first syntax is a bit verbose for what I was dreaming 4 years ago 
when I started asking for a simple script preloading mechanism, but it's just 
this side of acceptable. If we have to take the second approach, I say that's 
unacceptably more verbose/complex and falls short of my opinion of everything 
we need for sane & versatile dependency loading.



 Do you know what browsers do with a fully cached script?
 
 <script src=url> is always executed async when inserted into the DOM, 
 cached or not.

Boris-

As I noted above, what we need to know (and I guess we need to know this from 
all browsers) is whether there's a *guarantee* of a-b-c execution order (even 
if all 3 are executing async) when they are added to the DOM in that order and 
all 3 are guaranteed preloaded first, by the link rel=preload tag usage. Is 
there ever a case where some other execution order than a-b-c would happen?




--Kyle







Re: [whatwg] Promise-vending loaded() ready() methods on various elements

2014-03-14 Thread Kyle Simpson
I'd also like to make the observation that putting link rel=preload.loaded() 
together with script.loaded(), and indeed needing a promise mechanism to wire 
it altogether, is a fair bit more complicated than the initial proposals for 
script preloading use-cases from the earlier threads over the last few years of 
this list.

For one, we're talking about twice as many DOM elements. For another, there's a 
seemingly implicit requirement that we have to get both ES6 promises and DOM 
promises to land for these suggested approaches to work. I don't know if that's 
already a guarantee or if there are some browsers which are possibly going to 
land DOM promises before ES6 promises? If so, ES6 promises become the long 
pole, which isn't ideal.

Lastly, I'd observe that many of the arguments against the original/previous 
script preloading proposals were heavily weighted towards "we don't like script 
loaders, we want to make them obsolete, we need simple enough (declarative 
markup) mechanisms for that stuff so as to make script loaders pointless"…

At one point the conversation shifted towards ServiceWorker as being the 
answer. When we explored the use cases, it was my impression there was still a 
fair amount of non-trivial code logic to perform these different loading cases 
with SW, which means for the general user to take advantage of such use-cases, 
they're almost certainly going to need some library to do it.

I can't imagine most end-users writing the previously-suggested ServiceWorker 
code, and I'm having a hard time believing that they'd alternatively be writing 
this newly suggested promises-based loading logic all over their pages. In 
either case, I think for many of the use-cases to be handled, most general 
users will need to use some script-loader lib.

So, if this .loaded() DOM promises thing isn't the silver bullet that gets us 
to "no script loader" utopia, then I don't see why it's demonstrably better 
than the simpler earlier proposals.

Moreover, the earlier proposals relied on the browser having logic built in to 
handle stuff like "download in parallel, execute serially", which made their 
interface (the surface area users needed to work with) much smaller than either 
the Promises here or the ServiceWorker before.

What you're implicitly suggesting with both sets of suggestions is, "let's make 
the user do the more complicated logic to wire things together, instead of the 
browser doing it." Why?

Why isn't putting preloading into existing script elements (whether exposed 
by events or by promises) better than splitting it out into a separate element 
(link rel=preload) and requiring a third mechanism (promises) to wire them up?



Is there any chance we could take a fresh look at the earlier proposals 
(putting both preloading and loading/exec into script), and perhaps freshen 
them up with promises instead of events?




--Kyle













Re: [whatwg] Proposal: HTTP Headers + sessionStorage stored session-ID

2013-12-06 Thread Kyle Simpson
 On Thu, 31 Oct 2013, Kyle Simpson wrote:
 
 Session cookies are preserved at the browser-level, which means they are 
 kept around for the lifetime of the browser instance. sessionStorage, 
 OTOH, is kept only for the lifetime of the tab. In many respects, this 
 makes sessionStorage more desirable for session-based tracking.
 
 2. As a consequence of #1, the most pertinent difference is 
 sessionStorage based session-IDs being attached to an individual tab 
 rather than the browser. This means if I open up two tabs to the same 
 site, and I use session cookies, then both tabs share the same session 
 (can be useful or can be very annoying).
 
 But with a sessionStorage based approach, the two tabs have two entirely 
 separate sessions and operate independently. They can share storage 
 through localStorage, if so desired, and even communicate with 
 StorageEvents. But they can be separate if they want by relying on 
 sessionStorage.
 
 In particular, #2 is a big win (IMO) for session-based architecture (as 
 well as UX) and I often now design my systems with this particular 
 behavior intentionally relied upon.
 
 I've filed this bug to track this problem:
 
   https://www.w3.org/Bugs/Public/show_bug.cgi?id=24024
 
 If any implementors want to implement this and thus would like this 
 specced, please do comment on the bug.


Thanks for filing, Ian.

For the sake of brevity of the list, I've tried to explain the motivations 
behind my proposal a little more clearly, both in that bug, and in this part of 
my recent blog post:

http://blog.getify.com/tale-of-two-ends/#proposal-hope

--

One further point I have not specifically called out in those previous posts: 
in some jurisdictions, like Europe, apparently cookies (even session cookies!?) 
are illegal unless you specifically declare that you're using them.

I don't know all the details on what that includes or not, but I've been told 
by a few people in Europe that (session) cookies are frowned upon, whereas 
tracking things in sessionStorage (which automatically is sandboxed to the host 
and automatically goes away after the tab closes) is more preferable and more 
legal. :)



--Kyle








Re: [whatwg] Proposal: HTTP Headers + sessionStorage stored session-ID

2013-10-31 Thread Kyle Simpson
 Why not just use cookies ? I feel they're sufficient to do what you need.
 Asked differently: in what way are cookies insufficient so that we need a new 
 different API/feature?

There are substantive differences between the behavior of session cookies vs. 
sessionStorage. Without re-arguing the whole case for why sessionStorage was a 
useful addition to the platform, a few observations:

1. Session cookies are preserved at the browser-level, which means they are 
kept around for the lifetime of the browser instance. sessionStorage, OTOH, is 
kept only for the lifetime of the tab. In many respects, this makes 
sessionStorage more desirable for session-based tracking. IOW, several 
use-cases I'm familiar with would prefer the semantics of sessionStorage 
expiration when they say "session", as opposed to the old way of session cookies.

2. As a consequence of #1, the most pertinent difference is sessionStorage 
based session-IDs being attached to an individual tab rather than the browser. 
This means if I open up two tabs to the same site, and I use session cookies, 
then both tabs share the same session (can be useful or can be very annoying).

But with a sessionStorage based approach, the two tabs have two entirely 
separate sessions and operate independently. They can share storage through 
localStorage, if so desired, and even communicate with StorageEvents. But they 
can be separate if they want by relying on sessionStorage.
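
A quick sketch of that behavior (existing APIs; the ID generation here is just 
illustrative):

// Each tab gets its own session ID, scoped to that tab's lifetime:
var sessID = sessionStorage.getItem("sessID");
if (!sessID) {
  sessID = String(Math.random()).slice(2); // illustrative only, not a real ID scheme
  sessionStorage.setItem("sessID", sessID);
}

// Opt-in sharing across tabs still works via localStorage + storage events:
window.addEventListener("storage", function(evt){
  if (evt.key === "sharedState") {
    // another same-origin tab updated the shared data
  }
}, false);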

In particular, #2 is a big win (IMO) for session-based architecture (as well as 
UX) and I often now design my systems with this particular behavior 
intentionally relied upon.


 I'm worried about your proposal as it reinvents a new sort of cookies with 
 the same issues of current cookies (consequences of ID theft, ability of 
 third-party tracking, XSRF, etc.).

I believe with a proper CSP all of those issues are mitigated. AFAICT there's 
no additional leakage vectors with sessionStorage (otherwise we'd have major 
complaints about that system). If you let code run via your trusted CSP, it can 
gather any info (cookies or not), but if you disallow something (JS from some 
untrusted location) via CSP, then it does not have any access to your 
sessionStorage.

Also, it's a "new sort of cookie" that has a very conservative expiration, 
which further mitigates downsides. As opposed to both cookies and localStorage 
(which can have nearly infinite expiration), a user doesn't have to do 
anything more to clear out sessionStorage than close the tab. They don't have to 
clear cookies, nor do they have to restart their whole browser experience.

So any kind of undesired tracking that may occur with sessionStorage approaches 
only stays around as long as the tab. To mitigate tracking of real cookies 
(and/or localStorage), users have to actually use wholly separate Private 
Browsing type windows.


 On designing sessions without cookies, I recommand reading 
 http://waterken.sourceforge.net/web-key/ (where everything uses URLs without 
 the need of HTTP headers at all).

I've read that article before. I'm well aware that you *can* design around 
headers with the URL.

But that's an orthogonal point. If you have decided to design your system with 
some sort of header-based transmission of data, for any of the variety of 
reasons people do that (not going to argue that point as it's not relevant 
here), we're comparing the behavior of session cookies to this sessionStorage 
cookie I've proposed.

--

Importantly, it's the user-experience that I'm getting at with my proposal. 
With session cookies, a user gets a response from the server directly which is 
session aware.

With sessionStorage session IDs, currently, the response from the server on 
first request (that is, a navigation event in the browser -- bookmark, address 
bar, foreign link click, etc) is always session-unaware, and then JS on the 
page/app has to kick in once the page is received, pull out the ID, and then 
perform some other actions to make the page session-aware.

Oftentimes, in SPA architecture, this means that the initial page request 
isn't terribly useful from the session-aware perspective, and instead 
requires not just code to run, but likely subsequent in-page requests to the 
server, including the session ID, to retrieve the session-aware data and 
display it on the page.

The UX can be harmed in this case because of the additional delay before a user 
has a complete page.

If SPAs dislike this, they have very little choice but to fall back to 
old-school session cookies so session-aware information is available on initial 
page request. But then they lose the benefits of sessionStorage listed above.




--Kyle








[whatwg] Proposal: HTTP Headers + sessionStorage stored session-ID

2013-10-30 Thread Kyle Simpson
I have put together a simple soft proposal for a pair of HTTP Request/Response 
headers that bridge to the browser's sessionStorage mechanism for session ID 
storage. It's basically to embrace the new SPA style architecture prevalent on 
the web, and the use of sessionStorage, instead of the old-school usage of 
session cookies.

Details:

https://gist.github.com/getify/7240515


TL;DR:


HTTP Response Header
Register-Session-ID: sessID

   --or--

Register-Session-ID: sessID=23kml2r2-aniwpkmsd-li24t-35n


Makes an entry in sessionStorage of the specified name, which entry can 
automatically be pulled (including its current value) from sessionStorage on 
each new page request (to same SOP target) and be sent along, as:


HTTP Request Header
Session-ID: 23kml2r2-aniwpkmsd-li24t-35n
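
For comparison, here's roughly what a page has to do by hand today to 
approximate that round-trip (sketch only; it only covers in-page XHR requests, 
not the initial navigation, which is exactly the gap the headers would close):

var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/data");                              // illustrative URL
var sessID = sessionStorage.getItem("sessID");
if (sessID) xhr.setRequestHeader("Session-ID", sessID);    // outbound header
xhr.onload = function(){
  var reg = xhr.getResponseHeader("Register-Session-ID");  // e.g. "sessID=23kml..."
  if (reg) {
    var parts = reg.split("=");
    sessionStorage.setItem(parts[0], parts[1] || "");
  }
};
xhr.send();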


Thoughts?



--Kyle







Re: [whatwg] Script preloading

2013-07-22 Thread Kyle Simpson
FWIW, I'd be much more in favor of Jonas' proposal, at this point, than the 
dependencies=.. proposal. The `noexecute/execute()` idea is conceptually pretty 
similar to the preload proposal I've been pushing. As far as I can tell from 
how Jonas describes it, it looks like it would fit most of the use-cases I've 
put forth and care about.

There's a few details to iron out, however:

1. Will it cause confusion for authors that the behavior of the `onload` event 
on script elements is modified from its current long-standing "loaded and 
executed" meaning to just "loaded" in the presence of the `noexecute` 
attribute/property? Seems like it could cause a fair bit of confusion.

That's why the preload proposal proposed another event (`onpreload`), to reduce 
the conflation/confusion. But it might be tenable for authors. I'm sure as a 
script loader author I can work around it just fine if I need to.
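
As I read Jonas' description (quoted at the bottom of this message), usage 
would look roughly like this -- hypothetical only, nothing here is implemented:

var s = document.createElement("script");
s.setAttribute("noexecute", "");  // proposed: fetch the script but don't execute it
s.src = "http://some.url.1";
s.onload = function(){
  // with noexecute, "load" would mean "fetched", not "fetched and executed"
};
document.head.appendChild(s);

// ...later, when the loader decides this script's turn has come:
s.execute();                      // proposed method; per Jonas, it throws
                                  // InvalidStateError if called before load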



2. What will the interaction be, if any different, with the `onerror` events? 
Script preloading notwithstanding, there's little cross-browser compatibility 
on the topic of just what exactly constitutes a script "error" event. Some 
browsers fire it for all of the common network error conditions (404, 5xx), 
others only some of them.

In particular, if we're just building off existing mechanisms and not inventing 
whole new ones, it would be nice, from an author perspective, if EITHER the 
`onload` or the `onerror` event ALWAYS fired, and never both, and never 
neither, and if it was consistent when each happened.

I asked a couple of years ago on this list for that exact thing to be clarified 
in the spec, and was told at that time that implementors were left with the 
discretion, which accounted for the disparity. I of course would renew my 
request to reverse that and land on some common spec.



3. The main reason people seem to favor the `dependencies=..` proposal or its 
variants is the idea of markup-only preloading, where the loading of a 
subsequent script tag or tags can implicitly be the signal to execute another 
previously loaded-and-waiting script. Not clear if/how Jonas' proposal could 
suit that view. Perhaps similar to how I proposed handling that desire in the 
preload proposal.



--Kyle




On Jul 22, 2013, at 4:00 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Jul 9, 2013 at 12:39 PM, Ian Hickson i...@hixie.ch wrote:
 The proposals I've seen so far for extending the spec's script preloading
 mechanisms fall into two categories:
 
 - provide some more control over the mechanisms already there, e.g.
   firing events at various times, adding attributes to make the script
   loading algorithm work differently, or adding methods to trigger
   particular parts of the algorithm under author control.
 
 - provide a layer above the current algorithm that provides strong
   semantics, but that doesn't have much impact on the loading algorithm
   itself.
 
 I'm very hesitant to do the first of these, because the algorithm is _so_
 complicated that adding anything else to it is just going to result in
 bugs in browsers. There comes a point where an algorithm just becomes so
 hard to accurately test that it's a lost cause.
 
 The second seems more feasible, though.
 
 FWIW, I don't really know what functionality you put in the first
 category, and what you put in the second.
 
 However, as an implementor, I definitely think that the current
 proposal is more complicated to implement than the proposal that I
 pushed for before. I.e. adding a noexecute attribute on the script
 element which causes the script element not to execute when it
 normally would. Instead it fires the load event when the script has
 been loaded and does nothing more.
 
 Once the page wants the script to execute, it would call a new
 .execute() function on the script which would cause the loaded script
 to execute. If the function is called before the load event has fired,
 an InvalidStateError exception would be thrown.
 
 I could absolutely believe that this is harder to specify than your
 proposal. I haven't looked at the spec in enough detail to know. But
 it's definitely easier to implement in at least Gecko. I'd be
 interested to hear what other implementors think. And implementations
 have a higher priority than spec writing in the hierarchy of
 constituents.
 
 I also think it's a simpler model for authors to understand.
 
 Now, even higher priority in the hierarchy of constituents are
 authors. So if your proposal above is written with the goal of
 creating something authors prefer over the noexecute proposal, then
 that definitely seems like the right goal. I haven't read enough of
 the feedback here to get a clear picture of if the proposal in this
 thread is considered better than noexecute.
 
 I could definitely see that the dependencies feature could be
 attractive if it indeed would let authors avoid manually scheduling
 scripts for execution. But as always when building high-level
 features, there's a risk that if they 

Re: [whatwg] Script preloading

2013-07-22 Thread Kyle Simpson
 Do you have a link to your preload proposal?

My main `script preload` proposal (skip the first section of this LONG email; 
the proposal starts at "Summary:" several paragraphs down):

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-July/039973.html

Then proposal slightly amended here:

http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-July/039974.html



 Either way, I agree about the concern with onload. I personally have a
 hard time telling if it'll be confusing.
 
 Having the load event anytime we are done with a network request also
 seems beneficial. Rather than having most APIs use load whereas this
 would use preload.

FWIW, I believe script loaders, at least as I envision them, would use BOTH the 
`onpreload` AND the `onload` events, since marking a script as 
ready-to-execute, however that's done, is unlikely to be a synchronous event, 
so the script loader would still want to listen for when it finished running 
(hopefully pretty soon after being told it's OK to run!? :)

I see the `onpreload` as similar in spirit to the non-standard and 
now-phased-out-as-of-IE11 `script.onreadystatechange` mechanism, which provided 
more detail about the status of a script as it progressed from request to 
completion. `onpreload` just, to me, fills the spot of what the word "load" 
semantically meant, but since everyone knows "load" in the context of scripts 
means "load and process/run", we can't lightly just change "load".

If `onpreload` seemed to imply something too far outside the normal state 
progression of a script, it could be called `onfinishload` or `onfinishrequest` 
or something like that. In the `onreadystatechange` world, they called that 
state "loaded", FWIW.



 Generally speaking "load" means "loaded and processed". The
 'noexecute' flag would change what the "and processed" piece includes.

Understood.

I still think, though, that it's problematic to overload "load", because as 
mentioned above, as a script loader, I'd want to know about both events 
("finished load" and "finished execution"), not just a binary switch between 
one or the other. If "load" fired twice, that seems (potentially) awfully 
confusing. If "load" fired early (at load finish), and there was no more event 
for "finished execution", script loaders would be quite hampered (at least as I 
envision architecting one) because they couldn't know if a script had really 
run yet or not, only that they asked it to be eligible to run.



 There are three opportunities to fire error stuff here:
 
 1. Failed network request
 2. Failed JS compilation
 3. Exception thrown from execution
 
 And there are two error reporting mechanisms in play
 
 A. Fire an error event on the script element.
 B. Fire the window.onerror callback (like an event, but not exactly the same).

Agreed. `window.onerror` serves case #3 fine. What we don't seem to have is 
consistent cross-browser behavior, or even a terribly well defined spec, for 
#1 and #2, especially #1. Various older browsers had different 
interpretations as to which network conditions constituted "load complete" or 
not.

Obviously, the 200 (and several other 2xx's) should be "success". And I'd think 
it would be obvious that any of the 4xx and 5xx codes were "error".

But perhaps it should just be: did the network request result in a non-empty 
payload?

Now, for #2, to my non-implementor eye, that seems pretty well definable too. 
But again, cross-browser mayhem was bad with this. IIRC, Opera fired the 
onerror in the case of bad compilation, but none of the others did. Or maybe it 
was vice versa. It's been a while since I looked specifically; I just recall 
for sure that there was inconsistency here.



 1 and 2 seems like they could behave exactly the same for noexecute
 scripts as for normal scripts.

Yes, as long as the spec makes it clear what the do's and don'ts here are, 
and everyone complies. :)



 I'm not sure if that includes firing
 both A and B in current browsers?

They do not seem to fire `window.onerror` in either #1 or #2 IIRC, but they do 
in #3.

If I could have it my way, I'd have all three errors firing on 
script.onerror, and not involve window.onerror. The reason is that 
`window.onerror` is notoriously hijacked by various RUM libraries and such, to 
do remote logging of errors, so a script loader attaching to window.onerror to 
catch errors is not terribly reliable in my experience.

But I'm fine with whatever combination gives reliable and non-overlapping and 
non-gaping error handling coverage. :)



 3 presumably only triggers B currently for normal scripts?

AFAICT, yes.



 Indeed. Though we're only talking about the A mechanism about, right?

Correct. Although in an ideal world (for me), B would be more reliable, as 
noted above.



 I.e. the following would cause both a load event to be fired on the
 script, and window.onerror to be triggered?
 
 <script src="data:text/plain,throw new Error();"></script>

Sure. Though as noted, script loaders may have limited utility of 
window.onerror when they run 

Re: [whatwg] Script preloading

2013-07-18 Thread Kyle Simpson
About a week ago, I presented a set of code comparing the script 
dependencies=.. approach to the script preload approach, as far as creating 
generalized script loaders.

There were a number of concerns presented in those code snippets, and the 
surrounding discussions. I asked for input on the code and the issues raised.

Later in the thread, I also identified issues with needing to more robustly 
handle error recovery and it was suggested that Navigation Controller was a 
good candidate for partnering with script loading for this task. I asked for 
some code input in that respect as well.

AFAICT, the thread basically went dormant at roughly the same time.

I'm sure people are busy with plenty of other things, so I'm not trying to be 
annoying or impatient. But I would certainly like for the thread not to die, 
but to instead make productive progress.

If you have any feedback on the code comparisons I posted earlier 
(https://gist.github.com/getify/5976429) please do feel free to share.

Thanks!



--Kyle







Re: [whatwg] Script preloading

2013-07-14 Thread Kyle Simpson
 So maybe a concept of optional dependency would be useful?

 1. not all dependencies are JS files, e.g. authors use plugins to load  
 template files, JSON, images, etc.

 2. not all dependencies are usefully satisfied immediately after their JS  
 file is loaded, e.g. some libraries may need asynchronous initialization.  

These are relevant statements of other use-cases that are common.

I just want to point out that my script preload proposal easily handles (via 
a script-based script loader, not markup-only) both of these use-case 
variations, no matter how deep you chase their rabbit holes.

In my proposal, the script loader (or module loader, or whatever) is fully in 
control of when the next waiting script is told that it's eligible to execute, 
so it can accept any arbitrary amount of complexity into its configuration for 
how to decide that it's OK to proceed from A to B.

For example, if, in the execution of A, A registers that it needs C, then B 
won't auto-run just because A finishes. If C is not a script, but is a template 
or stylesheet or series of images or Ajax call for some data or whatever... 
none of that complication affects the script preload mechanism whatsoever. It 
simply leaves B preloaded, patiently waiting around until it's told that its 
dependencies are fulfilled, and the script loader can wait as long as it needs 
to until A and anything A needs (C, C*) are loaded and processed.
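
To make that control flow concrete, a rough sketch (the helper names here are 
invented for illustration, not part of any proposal):

var extraDeps = [];
window.need = function(promise){ extraDeps.push(promise); }; // called from inside a.js

runScript("a.js")                    // invented helper: runs the preloaded A,
  .then(function(){                  // resolving once A has executed
    return Promise.all(extraDeps);   // wait on whatever A registered (C, C*, ...)
  })
  .then(function(){
    return runScript("b.js");        // only now is B told it's eligible to execute
  });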

In the script dependencies proposals and variations, we have to follow the 
rabbit trail of inventing micro-syntaxes or other attribute complexities to 
handle these markup expressions of dependency that are not as 
trivial/simplistic as a prior completed network connection and, if applicable, 
parsing and processing, as you're seeing in Kornel's two messages.

*
Of course, I don't share the same mindset of some here that any markup-only 
solution we could invent, no matter how complex or intricate or involved, is 
better than a script-based equivalent.
*

To me, markup should be able to handle the majority case(s), and minority cases 
(while important) should be handled in script-based loading. script preload 
(including the markup extension I suggested) does exactly that: it handles 
basic preload/dependency annotation with markup-only, and it provides the full 
preload-defer-execute mechanism to the script-based loader for handling 
*everything* else you can dream up (AND you can mix-n-match).

By contrast, the prior simple statement of script dependencies=... proposal 
(without Kornel's extensions) handles basic preload/dependency annotation in 
markup-only, but then the other more complex use-cases appear to have to be 
filled by pulling in and weaving together other (proposed) mechanisms like 
Navigation Controller.

I'm not saying that's a bad option, but we need to be able to compare what the 
script-based approach is going to look like in both scenarios, and see which 
one is more suitable/practical/desirable.



--Kyle












Re: [whatwg] Script preloading

2013-07-12 Thread Kyle Simpson
 Ok, and I'm saying they shouldn't be asking LABjs to handle it, they should 
 be asking the devtools teams at browser vendors to give them ways to deal 
 with it. You're not going to be able to pause execution for code, implement 
 future breakpoints, or debug root causes for this sort of thing well from 
 script. You can do SOMETHING, but not with the fidelity that devtools can.

I'm not sure why you keep focusing on this being a devtools-centric question, 
because I think you're missing the point.

The developers that are asking for these features from LABjs are NOT asking for 
the capability to debug what's going on with failed loads while testing their 
app in their development environment. IF what they're trying to do is diagnose 
and fix such failed loads while developing their app, then certainly that would 
squarely be a devtools type of task.

LABjs also has a debug build available, with certain extra tracing going on 
inside it, that aids developers in understanding, from its perspective, what is 
going on as things proceed through loading. That debug mode and whatever great 
devtools exist or will exist are all fantastic ways for developers to work on 
and fix problems.

***
But all that is ENTIRELY ORTHOGONAL to what the developers are actually 
presently asking from LABjs.
***

They are asking, repeatedly, for the ability to have logic deployed in their 
*production* builds, which sits in front of end users which have no knowledge 
of or relation to any developer tools in whatever browser they use. Certainly, 
these developers are not interested in whether or not their end-users happen to 
be in a browser that has devtools, because end users don't care about devtools, 
and the developers who do care aren't actually using the user's browser anyway.

At this point, whether or not a browser has certain devtools is entirely 
irrelevant to what the developer wants from LABjs.

What seems to be their mindset and internal narrative is this: 

OK, no matter how good we are at figuring out how to build a bug-free app, we 
rely on third-party external resources that we don't control. We cannot 
guarantee that our request for 'jquery.js' from the Google CDN will actually 
work. It should work. It usually works. But it doesn't always work. Sometimes 
Google goes down. Sometimes the DNS lookup fails. Sometimes a proxy server 
misbehaves. Sh$$ happens. SO! We'll just accept that fact. We look at our RUM 
logs and we see that about 2% of our users experience one of these dead page 
loads. But, hey, I've got an idea, how about we try to write code into our 
production code which detects when something like that happens, and tries to 
gracefully recover, if possible, to maybe reduce the 2% down to 1%. Yeah, 
that's a good idea. How do we do that? Oh, I know! We're already using a script 
loader. Let's have that script loader tell us when `script.onerror` fires, 
which tells us that a script load failed (right!?!?), and we'll just re-request 
it from a fallback location on our own CDN. Sounds like a plan. Can you file a 
feature request at LABjs for them to expose when `script.onerror` fires? It'd 
be great if it just could automatically re-try or fallback to another URL. 
Yeah, that sounds cool. Sure, will do.
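
The kind of fallback they're asking for looks roughly like this (sketch only; 
`script.onerror` is the existing event, with all its cross-browser 
inconsistency, and the URLs are just examples):

function loadWithFallback(primaryURL, fallbackURL) {
  var s = document.createElement("script");
  s.src = primaryURL;
  s.onerror = function(){              // fires on (some) failed loads
    var retry = document.createElement("script");
    retry.src = fallbackURL;           // e.g. a copy hosted on your own servers
    document.head.appendChild(retry);
  };
  document.head.appendChild(s);
}

loadWithFallback("//some.cdn.example.com/jquery.js", "/js/fallback/jquery.js");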

I understand clearly at this point that you don't agree with their mindset. I 
understand you think their desire is misguided. I admit sometimes I am 
skeptical of the efficacy of such efforts.

But respectfully speaking, your opinion is not the only one that matters here.

Who are we to tell some in-good-faith developer that they are objectively WRONG 
to hope their script loader could not only LOAD scripts but RELOAD or ALTLOAD 
scripts? Think about the conceptual and semantic parallel there. It's a pretty 
sensible expectation for most non-browser-author devs.



 We'll be able to do this from the Navigation Controller: 
 https://github.com/slightlyoff/NavigationController/blob/master/explainer.md

Never heard of it before. Thanks for the link.

But I don't see how the idea that this may (likely) happen (someday) 
automatically moots the discussion at hand. For you to fairly exercise some 
sort of veto vote over what we discuss, which it seems like you're trying to 
do, you've gotta come to the table with some real tangible thing that's 
standardized (or clearly headed that way) and ready to evaluate for fitness to 
the use-cases.

I glanced through (it's long, haven't digested it yet) and didn't immediately 
see a section called "RETRYING AND FALLBACK LOADING". :)

I don't see an MDN page entry for `Navigation Controller`. I can't find 
Navigation Controller on http://caniuse.com yet. So far, the only google 
result I've found for it is your writeup. So having only seen it for a few 
moments as of the writing of this email, and finding no other evidence about 
it, it's hard to judge if it's a valid alternative for the requested use-cases 
or not.

Would you be able to make a 

Re: [whatwg] Script preloading

2013-07-12 Thread Kyle Simpson
 (AT EYE-WATERING-LENGTH)

I'm sorry I'm too verbose on the list for everyone's taste. Every time I'm 
brief and make assumptions, I get accusations like Jake's repeated ones that 
I'm just asserting without reason.

FWIW, my exhaustion of this process is not about my eyes, but my fingers sure 
take a beating. :)



 Ok, I think I understand what you're saying now.

Happy we're on the same page there, then. :)



 It's not scepticism at that level that I'm expressing. Accepting everything 
 you just typed out (AT EYE-WATERING-LENGTH), changes to Ian's proposal are 
 still a poor place to attack the issue. The Navigation Controller can give 
 you everything you want here and more. It's the right hammer for this 
 particular nail, not dependency attributes.

I'm not arguing for dependency attributes, nor am I arguing that they should or 
should not do this or that. Jake and Ian are. I'm only trying to give them due 
consideration, through my prior posted code snippets (and others I'm working 
on) to examine what they do and do not afford as it relates to the use-cases I 
believe are important to consider.

But I've stated that I actually don't like the dependency attribute, for many 
reasons previously stated, and further reasons I hope to demonstrate through 
the code comparisons Jake suggested. That is an ongoing effort to examine and 
explore the pros-and-cons, and is done in good faith.

Again: my question (which remains unanswered), and the reason I stated the 
error/retry/fallback use case in detail, is whether or not the dependency 
attributes proposal, as put forth by Jake (and Ian), will, factually 
speaking, have any sensitivity to the error conditions (network load error, 
compile error, run-time error), or somewhere in between?

If `dependencies` IS sensitive, in any way, then it clearly overlaps Navigation 
Controller, right? That may or may not be a good thing. If OTOH it has no 
sensitivity whatsoever, and it will trigger a subsequent waiting resource 
regardless of ANY particular network condition or result, that is vital 
information to know, as well.

Either way, shouldn't we afford enough discussion here to discuss how little or 
how much, or if at all, that sensitivity would be? I certainly can't adequately 
judge the proposal in the absence of that information.

Isn't it fair enough for me to inquire as to the actual nature of their 
proposal, given that they've asserted in no uncertain terms that it is fully 
sufficient for all my use-cases? Aren't I entitled to examine that with the 
assistance and participation of this list?

None of that means I necessarily think that Navigation Controller is or is not 
ALSO suitable. I have no opinion on that yet. I stated the reasons why I have 
no opinion on it yet.

Presenting an alternate proposal for one or more use-cases, as you have done, 
is fine. But that doesn't mean that it answers the questions I have about the 
other proposal, by Jake and Ian. Those pre-existing questions are my concern, 
presently.



 We're working on implementing it in Chrome. So yes, both likely and soon.

I look forward to seeing more great documentation on it as we've all come to 
appreciate from the Chrome and Developer relations teams.



 It's unfair of you to expand the scope of a proposed feature to include your 
 pet issue when, logically, it can be separated. See what we both just did 
 there?

Expand the scope? I am not expanding the scope beyond that which I've pushed 
for over and over again for 3+ years, this thread just being the latest 
incarnation. That others (like you) may come to this list with a pre-conceived 
notion of what is and is not in scope doesn't mean that me stating (in 
response to repeated appeals from Jake and others) at length the use-cases *I* 
care about is actually expanding that scope.

Moreover, I am the originator, or at least one of the originators, of the 
proposed feature, so I hardly think it's fair for you to assert that I'm 
"changing the game". In reality, I'm clarifying the proposed feature as it relates to my 
viewpoint, as an interested party acting in good faith. I'm recounting the 
substantial prior art in the discussions and discovery and experimentation.

No, I'm not expanding the scope. You (and others) appear to want to be 
narrowing the scope that well pre-dates this thread.

My scope (as it always has been) put simply: I want (for all the reasons here 
and before) to have a silver bullet in script loading, which lets me load any 
number of scripts in parallel, and to the extent that is reasonable, be fully 
in control of what order they run in, if at all, responding to conditions AS 
THE SCRIPTS EXECUTE, not merely as they might have existed at the time of 
initial request. I want such a facility because I want to continue to have 
LABjs be a best-in-class fully-capable script loader that sets the standard for 
best-practice on-demand script loading.



 Better to ask how best to accomplish our goals, not 

Re: [whatwg] Script preloading

2013-07-12 Thread Kyle Simpson
(being as brief as I possibly can...)


 As per the existing outline, I don't see how it could have any sensitivity.

So, just to clarify, `script dependencies=…` waiting on some other script 
tag is ONLY waiting on that script tag loading to have some sort of positive 
network result, whether that be a 2xx, 3xx, 4xx, or 5xx, and it cares not 
whether the script in question actually loaded, nor whether it fired its 
`onerror` event? Do I have that correct?



 I think you missed the second sentence…

Did I miss some rhetorical levity? Sorry. :)



 That's only your ignorance speaking. There are examples in the repo which you 
 can use to extrapolate examples, or if you have a code snippet showing the problem,

I did show a code snippet with the problem already.

Specifically: 
https://gist.github.com/getify/5976429#file-ex2-jaffathecake-js-L54-L68

As I said, I only glanced at your long writeup on Navigation Controller. 
"Ignorance" is a tiny bit of a pejorative term for my lack of knowledge of a 
non-trivial document you just dropped onto the list right in the middle of lots 
of other discussion. But I'll take it in a pleasant light and agree, indeed, 
that I'm ignorant so far of how Navigation Controller can help.

Had it been clear and obvious to me in my initial glances at your document 
immediately how to address the code problem above, I certainly wouldn't have 
exposed such ignorance.

In any case…

 either Jake or I can show how NC would address it.

I would certainly appreciate input on that part of the code I highlighted. As I 
said…

 I look forward to you helping remedy that. :)



--Kyle






Re: [whatwg] Script preloading

2013-07-11 Thread Kyle Simpson
 How is this any different from the case today when script elements are 
 fetched and run in the situation where one 404's?

Right now, without any script loader, AFAICT, if A loads fine, B 404's or 
500's, and C loads fine, both A and C will run, and usually C will have lots of 
cascading errors because it assumes B ran when it clearly didn't.

What I'm saying is that quite a few developers have repeatedly asked for LABjs 
to provide some relief to that, because they would like to be able to have 
code-driven logic that tries to gracefully handle such an error. As I said, 
some developers have expressed the desire to have a script loader be able to 
re-try a failed script load a few times. Others have expressed the desire to 
have alternate fallback URL(s) if a script fails to load at its primary 
location.

The point is, normal script tags don't let devs do that AT ALL, and when they 
want to do such things, they hope that a script loader could give them that 
capability. Since LABjs currently relies on script elements in pretty much 
all cases, LABjs can't give them what they want.
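
To make concrete what these developers keep asking for, here's a rough sketch 
(not LABjs code; the function name and URLs are made up) of fallback-URL loading 
using plain dynamic script elements and their error events. Note that even this 
gives no way to decouple the fetch from the execution, which is the real gap 
under discussion:

function loadScriptWithFallbacks(urls, done) {
  var url = urls.shift();
  var s = document.createElement("script");
  s.onload = function () { done(null, url); };
  s.onerror = function () {
    // this URL failed (network error, 404, etc.); try the next candidate
    if (urls.length) loadScriptWithFallbacks(urls, done);
    else done(new Error("all candidate URLs failed"));
  };
  s.src = url;
  document.getElementsByTagName("head")[0].appendChild(s);
}

// hypothetical usage:
loadScriptWithFallbacks(
  ["//cdn.example.com/lib.js", "/local-copy/lib.js"],
  function (err, loadedFrom) { /* retry counters could hang off this too */ }
);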

As far as I'm concerned, this is absolutely a *candidate* for a perfect 
silver-bullet next-generation script loading mechanism that handles all the 
complex use-cases under discussion thus far.



 And why is the fix not a stop on first script error devtools option rather 
 than a change to the intrinsics for loading? This is the usual recourse for 
 most debuggers.

As stated, this isn't as much about developers doing things in dev-mode, it's 
about them wanting to have more robust loading logic in their production 
installations that is capable of doing things like script-load-retries or 
script-load-fallback-URLs.

Certainly developers asking LABjs for this don't care nearly as much whether 
other developers can effectively deal with the issue using their devtools as 
they care that their production website in front of end-users has the ability 
to respond more robustly, if they care that much in the first place.



 Or are you saying we should be able to detect (via HTTP status code? some 
 other mechanism?) that a script load has failed before we even attempt to 
 run the code which might depend on it?

I was suggesting that if we're inventing a new mechanism called `depends`, as Jake 
has suggested, it would be nice if that new mechanism were made sensitive to 
things like "did the script load successfully (non-4xx/5xx)?", and even better 
if it could also be sensitive to things like "the script loaded successfully, but 
was there an uncaught error thrown during its main execution?"

The more sensitive the mechanism is, the more capable it would be of handling 
the use-cases these developers care about.
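
For illustration only (this is a sketch of today's workaround, not a proposed 
API): about the closest a loader can currently get to "was there an uncaught 
error during its execution" is listening for the global error event and matching 
on the filename, which only works reliably for same-origin (or properly 
CORS-enabled) scripts:

function watchUncaughtErrorsFrom(scriptURL, onError) {
  function handler(evt) {
    // evt.filename is empty/masked for cross-origin scripts without CORS
    if (evt.filename && evt.filename.indexOf(scriptURL) !== -1) {
      onError(evt.error || evt.message);
    }
  }
  window.addEventListener("error", handler);
  return function stop() { window.removeEventListener("error", handler); };
}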



 I'm unsure how any of this is apropos to the debate at hand. Changes to this 
 proposal seem entirely the wrong place to be dealing with this sort of 
 failure/recovery issue.

Why so hostile? Isn't it quite apropos/germane to discuss HERE what real-world 
developers want to do (and cannot do currently!) in their code as it relates to 
script loading?

I exhaustively listed out 11 other use-cases of things I care about, as a 
script loader author/maintainer. Are none of those use-cases apropos? Then I 
noted an additional 12th use-case here that I may not personally care as much 
about, but dozens of times developers have filed issues against LABjs 
begging/insisting for.

It appears that some people care enough about production loading robustness 
that they go to extraordinary efforts in their code to detect and respond to 
such conditions. I felt like those many requests to LABjs (and I'm sure other 
script loaders get similar requests) were evidence enough that there's a valid 
use-case to consider, and I was just bringing it up for such consideration.



--Kyle




Re: [whatwg] Script preloading

2013-07-11 Thread Kyle Simpson
 I am interested to see how the above use-cases would be met in your
 counter proposal(s) to see if it would be simpler/faster. If LabJS is
 a requirement, it must be factored in as a unit of complexity and
 load-step.
 
 Please do this rather than declare anything to be insufficient without
 reasoning.

It's gonna take a lot of time to write proof-of-concept code for all the 
different nuances and use-cases I've brought up. I'm presenting 2 here. There's 
more to come.

Unfortunately, Jake made a number of simplifying assumptions, or simply missed 
various nuances, in his attempt to combine/reduce/restate my use-case list. I'm 
not ascribing any ill-will to that, but just pointing out that it's not nearly 
as easy as his list might suggest.

I've spent some time trying to put together a first set of code comparisons 
between the `<script preload>` proposal I've put forth and the `<link 
rel=subresource>` + `<script dependencies=..>` proposal Jake is advocating. 
This is by no means an exhaustive comparison, nor does it come close to covering 
the many issues I foresee, but it starts the discussion with actual code instead of 
theoretical concepts.



https://gist.github.com/getify/5976429


There's lots of code comments in there to explain intended semantics, 
outstanding questions/issues, etc.



Some observations/clarifications:

* ex1-getify.js is my attempt at creating a simple `loadScripts()` script 
loader that loads scripts in parallel, but executes them strictly in request 
order, serially.

  -- ONE KEY NOTE: my implementation accomplishes my use case #11 quite 
easily, in that it doesn't start executing any of the scripts in a group until 
ALL the scripts are finished preloading, thus minimizing any gaps/delays 
between them running. I'm able to do this easily because every script fires a 
preload event, so it's trivial to know when all such events have fired as my 
clue on when to start execution.
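
As a sketch of that pattern -- assuming the *proposed* preload/onpreload 
mechanism discussed in these threads, which no browser implements as such -- the 
"wait for all preload events, then start the execution cursor" logic is roughly:

function loadScripts(urls, done) {
  var remaining = urls.length;
  var scripts = urls.map(function (url) {
    var s = document.createElement("script");
    s.preload = true;             // proposed: fetch now, but suppress execution
    s.onpreload = function () {   // proposed: fires when the fetch has finished
      if (--remaining === 0) executeAll();
    };
    s.src = url;
    document.head.appendChild(s);
    return s;
  });
  function executeAll() {
    // every script is sitting fetched-but-unexecuted; release them in
    // request order so there are no network gaps between executions
    scripts.forEach(function (s) { s.preload = false; });
    if (done) done();
  }
}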


* ex1-jaffathecake.js is my attempt at doing something as close as possible 
using Jake's proposed way. I've asked for his feedback on this code, so until 
he has a chance to give feedback, take it with a grain of salt.

In any case, the code is certainly simpler, but it's missing a KEY COMPONENT: 
it's NOT able to guarantee what my script loader code guarantees. That is, a.js might 
run long before b.js runs, whereas in my implementation, there should be 
almost no gaps, because a.js doesn't run until b.js is loaded and ready to 
go.

Jake suggested a hack to address this use-case which is based on the idea of 
hiding the whole page while scripts load with gaps in between. This hack is not 
only terribly ugly, but it also misses a big point of my motivation for that 
use-case.

The point is NOT "can we hide stuff visually", it's "can we make sure stuff 
doesn't run until EVERYTHING is ready to run".

Moreover, as I note in the code comments, it is impossible/impractical for a 
generalized script loader to be able to determine all or parts of a page that 
it should hide, and under what conditions. The script loader is agnostic of 
what it's loading, and it certainly is supposed to be as unobtrusive to the 
hosting page as possible, so it's out of the question to consider that a script 
loader would go and do nuclear-level things like hiding the document element.

The key thing that's missing in Jake's proposal that's necessary to address 
this use-case is that there's no way to be notified that all the scripts have 
finished pre-loading. Instead, his approach obfuscates when things finish 
loading by simply letting the script element internally listen for loads of 
its dependencies.

This is what I mean when I keep saying chicken-and-the-egg, because I want to 
know everything's finished preloading BEFORE I start the execution cursor, but 
in Jake's world, I can't know stuff is finished loading until after I observe 
that it's running.


* ex2-getify.js is a more complex script loader, that takes into account the 
ability to have sub-groups of scripts, where within the sub-group, ASAP 
execution order is desired, and serial-request-order execution order is desired 
across the various sub-groups.

Simply stated: All of C, D, E, and F scripts load in parallel. When it comes to 
execution, C.js runs, then a sub-group of D.js and E.js will run, where 
within the sub-group, either D or E runs first, ASAP (they don't block each 
other), and then when both have run, finally, F.js executes.

This scenario is quite common: I load jquery.js, then I load 4 jquery plugins 
(which are independent), then I load my page's app.js code. jquery.js is 
the first to execute, then the 4 plugins run in ASAP order, then when they're 
all done, finally my app.js code executes (a short sketch of this follows below).
all done, finally my app.js code executes.

Also, this more complex script loader listens for `script.onerror` events, and 
if it detects one, it aborts any of the rest of the execution.

Any such error handling is trivial in my loader, because I am always fully in 
control over which script is loading at any given time.
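
For reference, here's roughly (from memory -- treat the exact API surface as 
approximate) how that scenario reads as a $LAB chain; the file names are 
placeholders:

$LAB
  .script("jquery.js").wait()          // must execute before the sub-group
  .script("plugin1.js")                // the plugins load in parallel and
  .script("plugin2.js")                //   execute ASAP relative to each other
  .script("plugin3.js")
  .script("plugin4.js").wait()
  .script("app.js")                    // only after everything above has run
  .wait(function () {
    // all done; an onerror-driven abort would short-circuit before this point
  });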


* 

Re: [whatwg] Script preloading

2013-07-10 Thread Kyle Simpson
 The IE4-10 technique is invisible to pre-parsers, if we're chasing 
 performance here it's not good enough.
 ...
 Also invisible to preloaders.

I personally don't care about scripts being discoverable by pre-parsers. I have 
done testing and am not convinced that something appearing earlier (in markup) 
leads to better performance than allowing my script loading logic to load 
things when I want, and just relying on the browser to do that as quickly as 
possible.

For instance, I've added annotations like `<link rel=prefetch>` for my scripts 
into the head of my document, and then done my normal script-based script 
loading as usual, and benchmarked whether having them in the markup somehow magically 
sped up the page. I saw no appreciable increase in average page load speed in 
my testing.

It's quite possible that this is because when I use script loading, generally 
speaking, I'm only loading the scripts I consider to be most critical for 
actual page load (not everything and the kitchen sink), so my script-based 
script loading during page-load usually is pretty darn quick. I defer the 
rest of my code that's not as critical until later (perhaps until when needed, 
strictly), which is something that markup alone doesn't let me do.

I like the fact that I can have my bootstrapper load.js file either at the 
very top (and thus it starts loading them nearly immediately) if the scripts I 
want to load are more important than the images and stylesheets in the markup, 
OR I can put my load.js file at the bottom of the markup and thus give a chance 
for other content to start loading slightly before my scripts start loading.

The fact that browsers are trying to second guess developers and look-ahead to 
find and prioritize certain resources is NOT something I consider a positive 
benefit that I'm eager to assist. I still come from a world where a developer 
ought to get to decide what's higher priority.

There's certainly a strong pre-disposition among a lot of developers to falling 
in love with declarative markup-only solutions. I share no such obsession, when 
it comes to script loading. I think script loading is far more complex than 
markup is ever going to be equipped to handle.

To be clear, I will not be satisfied with a markup-only approach. No matter how 
complex it is, it does not handle all the use-cases I care about. I feel like a 
broken record on these threads, because I keep talking about why markup-only is 
insufficient, and people keep trying to convince me that if they just make markup 
more and more complex, they'll eventually convince me that markup-only 
is superior. The more complex you make the markup-only proposals, the more I'm 
convinced (and self-validated) that markup is the wrong tool for the complex 
use-cases I care about.

---

All that having been said, I am not trying to block a solution that would BOTH 
serve those who have a subset of (simpler) use-cases which are markup-centric, 
and those of us who care about serving more complex use-cases via code. A 
solution that both camps can accept is better than either camp being happy to 
the exclusion of the other.

I would be fine if we went with a variation of Nicholas' proposal. Let me state 
that new proposal here:


**
Summary:

1. `preload` attribute on script tags in markup, `preload` property on script 
elements created by code. In either case, its presence tells the browser not to 
execute the script once it finishes loading.

2. `onpreload` event fired on any script which has `preload` attribute or 
property on it at the time its (pre)loading finishes (and execution is thus 
suppressed). Otherwise, not fired.

3. To execute a script that was preloaded in code, remove the `preload` 
attribute or property from the element, which signals to the browser that it's 
OK to execute it now. If you remove it before loading finishes, the browser 
acts as if it was never marked as preload and continues as normal. If you 
remove it after preloading finishes, the browser is free to execute that script 
ASAP now.

4. If you are doing markup-only loading, you signal to a preloaded script 
that it's eligible for execution by putting a matching selector to it into a 
`fulfills` attribute on another script element. If a script finishes loading 
and it's already been signaled by another `fulfills`, it will run right away. 
Otherwise, it'll wait until some script executes that has a matching `fulfills` 
attribute on it.
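
To make the proposal concrete, here's a sketch of both usage styles. This is 
illustrative only -- `preload`, `onpreload`, and `fulfills` are the proposed 
(not implemented) bits, and the file names/selectors are made up:

<!-- markup-only: social-widget.js is fetched but held; it becomes eligible
     to run once a script with a matching fulfills="" selector executes -->
<script id="social" src="social-widget.js" preload></script>
<script src="page-setup.js" fulfills="#social"></script>

<script>
// script-based: same idea, driven from code
var s = document.createElement("script");
s.preload = true;               // proposed: fetch, but don't auto-execute
s.onpreload = function () {
  // fetched; execution still suppressed until we say go
};
s.src = "on-demand-feature.js";
document.head.appendChild(s);

// ...later, whenever the app decides the moment is right:
s.preload = false;              // (or remove the attribute) -- the "go" signal
</script>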



Details:

The behavior of preload-but-don't-execute is controlled by the markup presence 
of a `preload` attribute on script tags (thus discoverable by pre-parsers), or 
a corresponding `preload` property in the script-based loading scenario. BOTH 
sides of the feature have to be implemented -- markup only is NOT enough for my 
needs.

That attribute/property being present/set would be the only thing that signals 
to the browser load this script, but DON'T auto-execute it until told to do 

Re: [whatwg] Script preloading

2013-07-10 Thread Kyle Simpson
 **
 Summary:
 
 1. `preload` attribute on script tags in markup, `preload` property on 
 script elements created by code. In either case, its presence tells the 
 browser not to execute the script once it finishes loading.
 
 2. `onpreload` event fired on any script which has `preload` attribute or 
 property on it at the time its (pre)loading finishes (and execution is thus 
 suppressed). Otherwise, not fired.
 
 3. To execute a script that was preloaded in code, remove the `preload` 
 attribute or property from the element, which signals to the browser that 
 it's OK to execute it now. If you remove it before loading finishes, the 
 browser acts as if it was never marked as preload and continues as normal. 
 If you remove it after preloading finishes, the browser is free to execute 
 that script ASAP now.
 
 4. If you are doing markup-only loading, you signal to a preloaded script 
 that it's eligible for execution by putting a matching selector to it into a 
 `fulfills` attribute on another script element. If a script finishes loading 
 and it's already been signaled by another `fulfills`, it will run right away. 
 Otherwise, it'll wait until some script executes that has a matching 
 `fulfills` attribute on it.

A variation on my proposal which gives a little more symmetry between 
markup-based loading and script-based loading:

Instead of removing the `preload` attribute/property to signal it's OK to 
execute now, I just set another property on it called `fulfilled`. That 
attribute/property, if present at the time of preload completion, means "it's OK, 
go ahead and execute", or if added after preload, says "go ahead and execute it 
now ASAP".

The symmetry is that in markup-only usage, I use `fulfills` on another script 
tag (because with markup only I can't modify another script element) to signal 
that the preloaded script element is now fulfilled. But in script-based 
loading, I can just signal the preloaded script element directly that it's 
fulfilled now by setting a `fulfilled` property on it.

I like that even a lot better than deleting/removing the `preload` 
attribute/property.




--Kyle






Re: [whatwg] Script preloading

2013-07-10 Thread Kyle Simpson
 Which of your use-cases have not been met? So far I've seen only "I want X, 
 Y, Z" but not what you need X, Y, Z to achieve that isn't covered by other 
 simpler proposals or existing features.

You know, I keep relying on the fact that the body of work on this topic for 
almost 3 years ought NOT have to be re-visited every few months when these 
threads wake from dormancy. I keep hoping that someone who really cares about 
actually addressing all the concerns, and not just some of them, will do the 
due diligence to look at all the previous stuff before criticizing me for not 
providing enough detail.

I've written nearly a book's worth on this over all the threads and sites and 
blog posts over the years. In fact, I think it's fair to say at this point that 
I've spent more time over the last 4+ years obsessing on script loading than 
any other developer, anywhere, ever.

I don't like the implication that I'm apparently just an impetuous little child 
demanding my way with no reasoning.

So, fine. Here it is. I'm going to state explicitly the use-cases I care about. 
This is nothing new. I am saying the same things I've been saying for 3 years. 
But sure, I'll say them, AGAIN, because now someone wants to hear them again. I 
doubt anyone is going to read this crazy long message and actually read all 
these, but I'll put them here nonetheless.

And I'm listing them here because they are not covered fully by any of the 
other proposals besides the 2 or 3 I keep pushing for. You may think they are 
covered, but I think the nuances prove that they aren't. The devil is always in 
the details. Or you may think my use-cases are irrelevant and so you dismiss 
them as unimportant. Guess there's nothing I can do about that.


-

1. Premise: I'm the author of a popular and widely used script loader. 
It's a general utility that's used in tens of thousands of different sites, 
under a myriad of conditions and in different ways, and in a huge swath of 
different browsers and devices. I need the ability inside this general utility 
to do consistent, 100% reliable, predictable script loading for sites, without 
making ANY assumptions about the site/markup/environment itself. I need to be 
as unintrusive as possible. It needs to be totally agnostic to where it's used.


2. Premise: I need a solution for script (pre)loading that works not JUST in 
markup at page-load time, but in on-demand scenarios long after page-load, 
where markup is irrelevant. Markup-only solutions that ignore on-demand loading 
are insufficient, because I have cases where I load stuff on-demand. Lots of 
cases. Bookmarklets, third-party widgets, on-demand loading of heavy resources 
that I only want to pay the download penalty for if the user actually goes to a 
part of the page that needs it (like a tab set, for instance). In fact, most of 
the code I write ends up in the on-demand world. That's why I care so much 
about it.


3. Premise: this is NOT just about deferring parsing. Some people have argued 
that parsing is the expensive part. Maybe it is (on mobile), maybe not. 
Frankly, I don't care. What I care about is deferring EXECUTION, not parsing 
(parsing can happen after-preload or before-execution, or anywhere in between, 
matters not to me). Why? Because there's still lots of legacy content on the 
web that has side-effects when it runs. I need a way to prevent those side 
effects through my script loading, NOT just hoping someday they rewrite their 
code so that it has no side effects upon execution.

NOTE: there ARE people who care about the expense of parsing. Gmail-mobile (at 
one point, anyway) was doing the /* here's my code */ comment-execute trick to 
defer parsing. So me not caring about it doesn't make it not an important 
use-case. Perhaps it IS something to consider. But it doesn't change any of my 
proposals or opinions -- it leaves the door open for the browser to decide when 
parsing is best.


4. Use-case: I am dynamically loading one of those social widgets that, upon 
load, automatically scans a page and renders social buttons. I need to be able 
to preload that script so it's ready to execute, but decide when I want it to 
run against the page. I don't want to wait for true on-demand loading, like 
when my user clicks a button, because of the loading delay that will be visible 
to the user, so I want to pre-load that script and have it waiting, ready at a 
moment's notice so I can say "it's OK to execute, do it now! now! now!"

There is no "this script has loaded, so run the other one" scenario here. It's 
some run-time environment condition, such as user-interaction, which needs to 
be the trigger. For instance, it might be when I finish using a templating 
engine client-side to render the markup that the social widget will search for 
and attach to. Or it might be like clicking to open a tab, which isn't rendered 
until made visible, and so we can't run the social widget code until that tab 
is rendered and 

Re: [whatwg] Script preloading

2013-07-10 Thread Kyle Simpson
 Pre-parsers can kick in before a page is actually opened, but script cannot 
 be executed. Let me dig up some numbers on the benefits of this and report 
 back. But logically, [parse html] -> [load script] is always going to be faster 
 than [parse html] -> [parse inline script] -> [execute inline script] -> [load 
 script]. And, I imagine, more bytes and more complex.

That they *can* do that is not sufficient proof that in the real-world, it 
actually does speed up page loads. In my testing, the way I do script loading, 
annotating scripts in the markup and then doing my normal script loading did 
not, on average, provide any noticeable speed up.

But anyway, this is a moot discussion, because I already conceded that if we 
want a solution that helps markup-only advocates, that's fine. `<script preload>` 
does that. Why are we going to keep arguing that point?

No matter how that plays out, it doesn't change the fact that I need a solution 
which works for on-demand loading long after page-load, where what was or 
wasn't in the markup during that magical not rendered yet state is completely 
irrelevant.

I feel like we're on a merry-go-round the last few years where someone says 
"but, look, this is better in markup at page-load" and then I say "whatever, 
fine, but it doesn't help on-demand" and then we go back around to "no no, this 
is better for page-load."

I don't consider this an either/or proposition. You have your thing you care 
about, I have my thing I care about. Both should be important.



 If it's something likely to be used later you're better off loading it with 
 the page. Waking up the radio on a mobile connection is slow and uses 
 battery. Having said that, there's nothing in either of the original 
 proposals that prevents adding scripts dynamically. It's the more complex 
 option if you want more complex behaviour.

This is a big over-simplification of the scenarios involved. Sure, in general, 
you want to minimize waking the radio back up, but if there's 10% of script 
code I need to render the page initially, and 90% of my code is needed later 
(or maybe, conditionally, never at all), it's not supportable performance-wise 
to say "well, just slow down the initial page render for that 90%."

We have the techniques of post-loading and on-demand loading for a reason, and 
there are cases where you can prove, through testing, that doing dynamic 
loading after page-load is objectively better than slowing down the initial 
page load. Not only *can* you prove it, I have proven it on my own sites. Time 
and again.

I'm sure there are ALSO cases where requesting everything at once is better. 
But there is no proof yet presented that always requesting everything all at 
once is always, unconditionally, the best option. If you have such evidence, 
and can prove that 100% of sites/apps which use on-demand loading are doing it 
wrong, please present that evidence.

Short of that evidence, I live in a world that accepts that sometimes one is 
better, sometimes the other is better. And I want a solution that equally 
empowers both sides of the coin, not just one.


-

I'm going to stop this email here, and reply with another reply regarding your 
request for use-cases detail. That one is going to be QUITE long.




--Kyle

Re: [whatwg] Script preloading

2013-07-09 Thread Kyle Simpson
This is a long and complicated topic with lots of history. Please bear with 
the length of my reply.


 It seems that people want something that:
 
 - Lets them download scripts but not execute them until needed.
 - Lets them have multiple interdependent scripts and have the browser
   manage their ordering.
 - Do all this without having to modify existing scripts.

I think it's important to note that the primary motivation here is performance. 
If all I cared about was serial loading (and the performance of that was 
irrelevant), I could, with today's mechanisms, load one script at a time, only 
when I was ready to execute it, and if there were multiple scripts, do so in 
the correct order.

The fly in the ointment is if I need to load multiple scripts at once, 
specifically if those come from different locations (one is jQuery from the 
CDN, another is a plugin from my server), and for performance reasons I want 
these scripts to load in parallel, but their execution order still matters.

Now, if THAT was the only concern, the new (well, 2 years old now) 
`async=false` (or, as I call it, ordered async) mechanism would be enough. 
But it's not.

Because there's only one ordered async queue, which means all such scripts 
have to go into the same bucket, and it's all or nothing for preserving 
execution order. What if I want to load jQuery, then I want (in ASAP order) for 
4 independent plugins to run?

Ordered async will run jQuery, then each plugin in request order, which might 
make one small plugin wait much longer than it needs to for an earlier plugin. 
Here, it's a performance issue because a plugin like a calendar widget might 
not appear and render as early as it could otherwise, because some other 
non-related plugin is coming from a slow location but was requested first.

Very quickly, ordered async starts to not be sufficient for various 
use-cases. It's like the 75% solution, but the 25% are still a concern, 
performance-wise.
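
For clarity, here's what "ordered async" looks like today with dynamically 
created script elements (this part is real, shipping behavior); the single-queue 
limitation described above falls directly out of it:

["jquery.js", "plugin1.js", "plugin2.js", "app.js"].forEach(function (url) {
  var s = document.createElement("script");
  s.async = false;   // ordered async: parallel fetch, execution in insertion order
  s.src = url;
  document.head.appendChild(s);
});
// Limitation: this is ONE queue. plugin1.js and plugin2.js can't be told
// "run ASAP relative to each other, but both after jquery.js" -- they will
// always run strictly in the order they were appended.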



 I must admit to not really understanding these requirements (script 
 execution can be made more or less free if they are designed to just 
 expose some functions, for example, and it's trivial to set up a script 
 dependency mechanism for scripts to run each other in order, and there's 
 no reason browsers can't parse scripts off the main thread, etc). But 
 since everyone else seems to think these are issues, let's ignore that.

In an ideal world, we could tell everyone "hey, there's a new mandate, rewrite 
all your code by XYZ deadline. Kthxbai." That's not how it works. The fact is 
that the web platform and the browsers move MUCH quicker than legacy content.

This is, and always has been, a question of whether you think we should just wait 
around for years and years until every script we care about (but don't control, 
since we didn't author it) gets rewritten, while web performance in this area 
simply goes unaddressed. I maintain still that we can 
and should fix the platform so we have more options than just "tough luck."



 The proposals I've seen so far for extending the spec's script preloading 
 mechanisms fall into two categories:
 
 - provide some more control over the mechanisms already there, e.g. 
   firing events at various times, adding attributes to make the script 
   loading algorithm work differently, or adding methods to trigger 
   particular parts of the algorithm under author control.
 
 - provide a layer above the current algorithm that provides strong 
   semantics, but that doesn't have much impact on the loading algorithm 
   itself.
 
 I'm very hesitant to do the first of these, because the algorithm is _so_ 
 complicated that adding anything else to it is just going to result in 
 bugs in browsers. There comes a point where an algorithm just becomes so 
 hard to accurately test that it's a lost cause.

Can you please elaborate on how either of the two prominent proposals that 
Nicholas Zakas and I detailed years ago here are insufficient in that they fall 
into your first category?

http://wiki.whatwg.org/wiki/Script_Execution_Control

My proposal was to standardize what IE4-10 did, which is to start loading a 
script even if it's not in the DOM, but not execute it until it's in the DOM. 
Then, you monitor an event to know if one or more scripts have finished this 
preloading, and then you can decide if and when and in what order to add the 
corresponding script elements to the DOM to allow execution to proceed.
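
In code, the IE4-10 behavior being described looks roughly like this (a sketch 
of the non-standard pattern, from memory):

function preloadIE(url, onFetched) {
  var s = document.createElement("script");
  s.onreadystatechange = function () {
    if (/loaded|complete/.test(s.readyState)) {   // fetched, but NOT executed
      s.onreadystatechange = null;
      onFetched(function execute() {
        // inserting the element is what releases execution
        document.getElementsByTagName("head")[0].appendChild(s);
      });
    }
  };
  s.src = url;   // in IE4-10, setting src starts the download off-DOM
  return s;
}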

The spec already suggests the core of this:

For performance reasons, user agents may start fetching the script as soon as 
the attribute is set, instead, in the hope that the element will be inserted 
into the document. Either way, once the element is inserted into the document, 
the load must have started. If the UA performs such prefetching, but the 
element is never inserted in the document, or the src attribute is dynamically 
changed, then the user agent will not execute the script, and the fetching 

Re: [whatwg] Script preloading

2013-07-09 Thread Kyle Simpson
 But I'd settle for anything, no matter how complex, as long as it actually 
 solves the many use cases. Your proposed option has potential, as long as 
 the missing event part is addressed.
 
 It seems to me that from an IE-perspective, the only missing piece is the 
 event itself.

Well, strictly speaking, IE4-10 had a suitable event already, as you surely 
know. Unfortunately, IE11 has currently removed those events, as 
they are non-standard. I wrote this blog post a few days ago begging the 
IE11 team to bring them back:

http://blog.getify.com/ie11-please-bring-real-script-preloading-back/

I know it was passed on to at least a few of the decision makers there, but 
I've not heard anything official in response yet. Any update? :) As it stands, 
the IE version of real preloading is in limbo and in danger of dying, as it's 
quite neutered without some event.

IF we could act quickly enough to standardize some preloading approach, even if 
that were different than how IE did it before, *maybe* it could make it into 
IE11 before final release? I dunno.



 Just one final side note on the above linked-to proposals (Zakas's and 
 mine). Over 2 years ago, I implemented feature-detects in LABjs script 
 loader for both of those proposals. Of course, the `readyState` one actually 
 works in IE4-10 and works beautifully I might add. In head-to-head loading 
 tests I've done from time to time, the IE real preloading mechanism often 
 beats out the good-but-not-great ordered async of the other modern 
 browsers.
 The `preload` one doesn't currently work of course (it's just dormant code 
 for now), but I thought it to be a sufficiently good enough proposal, and 
 likely enough to eventually happen, that I put in those few lines of code to 
 LABjs, as speculative future-proofing.
 
 The LABjs source code uses a feature-detect for the real preloading by 
 looking for the existence of the preload Boolean DOM attribute. After 
 thinking about it for a bit, I'm not sure I understand why that attribute is 
 necessary. 

I believe the reason that Nicholas suggested that the attribute needed to be 
there was two-fold:

1) he was concerned about the implicit nature of IE's behavior, which sort of 
indirectly preloads simply by developer non-action (not inserting into the 
DOM).

Adding a positive attribute to a tag to say "yes, I want this 
preload-and-defer-execution behavior" was certainly more explicit, and opt-in, 
and thus maybe more attractive, since it had perhaps less potential to create 
accidental problems for legacy content or developer ignorance.

2) it makes for a simple/effective feature-detect. :)


Whatever mechanism we do have, we need a feature detect for it, obviously. 
`"preload" in document.createElement("script")` is nice and clean and semantic.

The IE way, I detect by looking at the readyState and noticing its initial 
value, which was an IE only behavior. Opera was the only other browser to 
support `script.readyState` (but NOT support the actual preloading concept), 
but Opera's version of the property has a different initial value.

I asked an Opera developer specifically and he asserted that Opera would not 
ever have an occasion to change that initial value to the same as IE's unless 
they were also matching IE's preloading behavior. Thus, we avoided (tenuously, 
in the absence of standards) any false-positives on that feature detect.
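
Put together, the two feature detects look something like the following; the 
"uninitialized" initial value for IE is from memory and should be treated as an 
assumption rather than gospel:

var testScript = document.createElement("script");

// the explicit opt-in detect for the proposed attribute:
var supportsPreload = ("preload" in testScript);

// the IE-style real-preloading sniff, via readyState's initial value
// (believed to be "uninitialized" in IE, and something else in Opera):
var supportsReadyStatePreload =
  !!testScript.readyState && testScript.readyState === "uninitialized";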



 If I were to only introduce the event handler (onpreload) it seems to address 
 the use cases, but then your 2+ year old dormant code would stay dormant :( 

I'm not nearly as concerned about dormant code staying dormant forever. I 
made the judgement call back then that the extra ~100 gzip-bytes were worth 
it if a future day ever came that it just magically worked.

I'd love it, for LABjs' sake, if whatever was standardized was one of the two 
approaches, either one.

But even if we standardized a third option, and I had to change LABjs, that 
would be FAR BETTER in my mind than never addressing this use case at all, 
especially in light of IE11 sort of retreating on this topic (either 
intentionally or not).





--Kyle






Re: [whatwg] Script preloading

2013-07-09 Thread Kyle Simpson
 I have been wrestling pretty hard with script loading issues lately. I'd
 say that having the browser manage script interdependency is probably a
 bad and cumbersome way to solve these issues.

What do you mean by having the browser manage script interdependency? As far 
as I am aware, this thread and these feature requests are not about the browser 
managing script interdependencies… in fact, quite the opposite. What we're 
asking and hoping for is a facility that allows the app code to manage the 
dependencies and the loading order, while only relying on the browser to do the 
actual loading for us the way it always has.

The only part of that puzzle that's missing is a way to tell the browser to 
pause between finishing-loading and starting-executing. Asking for such a 
mechanism has really nothing to do with offloading dependency management to the 
browser to handle. It's empowering the app code to be more in control, not less.



 I think one of the reason that people may ask for this Interdependency
 feature,
 is due to the weakness of the platform they use,

Again, I think you may possibly be misunderstanding the goal of what's being 
asked for. But I'll agree with you that there's a weakness. The weakness, IMO, 
is in the web platform itself, in not giving us more fine-grained control over 
script loading. That is why we keep having this discussion every 6 months or so 
for the least several years. The use-cases never go away, and the hacks we use 
to deal with them never get any less ugly.



 And 'async', while good for independent scripts such as social media apis,
 is not really a good tools for dependency management.

Again, possibly a case of misunderstanding or missed context from previous 
conversations. When I bring up async, I'm not talking about `<script async>` in 
markup as you suggest, but actually `script.async = false` being set on 
dynamically created script elements (in code). async=false (aka ordered 
async) is a relatively new feature added to the platform about 2.5 years ago 
that gives us async parallel loading, but the browser automatically enforces 
execution order to be request order, instead of ASAP order.

async=false is actually a good feature. The problem is not the feature 
itself, but that it's only part of what we needed.



 My main issue against using external script loaders like LABjs and others,
 has always been that the browser must download a script first, before
 starting to download the dependencies. It presents a drawback already, for
 it delays the scripts by the script loader's latency and download time (at
 least for the first uncached page load), similarly to having scripts at the
 bottom.

For LABjs' part, I never suggest to people to load LABjs in a separate file. I 
suggest that people use a bootstrap type code file, which would be a single 
.js file they load with a single script tag in their markup, and it contains 
all the bare-minimum code necessary to bootstrap the page/app. It would include 
the code for LABjs, also the $LAB chains for loading other scripts, even basic 
event handling or other bootstrap type logic.

Moreover, the extra ~2k of gzip-bytes that LABjs costs (even if it's a 
separate file, but especially if it's included in another bootstrap file) is 
almost always made up for in savings by being able to take advantage of 
consistent parallel loading of scripts.

If your page only has 3-5k of JS on it, then you shouldn't use a script loader. 
But if you're like most sites, and you're loading 100-300k of JS, then a tiny 
2k of JS for loading optimization is not even a drop in the ocean.

Lastly, I advocate the techniques of deferring certain parts of your scripts to 
not loading during page-load at all, but instead post-loading or on-demand 
loading at later times. This amounts to needing to have the ability to do 
dynamic loading during the lifetime of the page. Markup alone is never enough 
for that. You have to have a script loader of some sort.

So, I advocate that a tiny but powerful script loader that you use for BOTH 
uses is a win. You use the same script loading techniques for page-load loading 
as you do for on-demand loading. Consistency of toolset here makes coding and 
maintenance easier in the long run.


 Why not simply load all such scripts early in the head with 'defer',

As you mention below, defer is horribly buggy and unreliable. The chances of 
IE8+ (not to mention IE6-8) being patched to have better defer are roundable to 
zero.



 'defer' in head scripts is actually a very good way to preserve script
 order with non-blocking scripts.

But it only works for external scripts (as you note below), and it only works 
for markup loading during page-load, and gives no answer for dynamic/on-demand 
loading later. As such, by design, it's insufficient for the use-cases 
presented.



 I would actually advocate to petition Microsoft to release patches for
 IE8, IE9 and IE10 for these particular stupid overlooked bugs


Re: [whatwg] Proposing: autoscroll event

2013-05-14 Thread Kyle Simpson
 when should autoscroll be called? only after a url with a hash is clicked in 
 the same page? when following a link to a : url#specific_hash? both cases?

I initially conceived it as only firing on initial page load for a URL#hash, so 
definitely that case. But I can also see how it might be useful to listen for 
such events even when clicking within the same document, so I'd certainly 
entertain that addition. :)


--Kyle





Re: [whatwg] Proposing: autoscroll event

2013-05-14 Thread Kyle Simpson
 So, to that end, what about just adjusting the spec for the hashchange
 event to also fire when the page is initially loaded if the URL contains a
 fragment identifier (hash)?  The other scenarios mentioned would already be
 covered by the hashchange event.

I think there's one critical thing that wouldn't be handled by this approach, 
and it was the primary reason I suggested a new event and not piggybacking on 
an existing one: I find the need to *cancel* (by calling `preventDefault()`) 
the auto-scroll behavior at least as compelling as the ability to listen for or 
artificially fire it. There have been times this automatic behavior has been 
quite annoying because of accidental ID/hash overlap. There's also times when 
it would be nice to, UX-wise, let the user decide if they want to scroll or not.

--Kyle




Re: [whatwg] Proposing: autoscroll event

2013-05-14 Thread Kyle Simpson
 There have been times this automatic behavior has been quite annoying
 because of accidential ID/hash overlap.
 
 Please explain how a document subresource can be “accidentally”
 referred to by a URL, or how a URL can be “accidental”. I do not understand it.

In my case, I ran across this once (not too long ago) where I unintentionally 
had chosen a URL #hash value that turned out to collide with a DOM element 
being added to some pages via a third-party widget. The weird 
somewhat-randomish scrolling was difficult to track down, and when I did, it 
made me wish I could have just suppressed the problem by canceling the scroll 
in those cases.



 You're using a hash to store information that is used by JS.  You also
 use ids on your page.  These can collide unintentionally, causing a
 scroll on page load.
 
 The simplest solution (by far) would be to stop storing “information
 that is used by JS” in a hash. Even Internet Explorer has pushState()
 these days: http://caniuse.com/history.

This isn't really about older browsers, per-se. Sure, some people still support 
IE = 8 (usually by using a shim like History.js which automatically switches 
back to the #hash method in older browsers). But if those browsers are 
currently broken (in their inability to cancel scrolling as a result of 
id/hash collision) then the new proposed event won't be in those older browsers 
and thus won't fix them. So, mentioning browser support is kind of a 
misdirected argument.

You *can* conveniently just say that all apps should switch to using fully 
semantic server-renderable URLs across the board, and thus the stuff they store 
in their History state URLs are canonical URLs that will work fine from the 
server. However, not all apps have been able to 100% make that jump yet.

I've worked on a couple that were hybrid, using some of the new semantic URLs 
but still representing some sub/intermediate navigation states (like in a 
shopping cart, for instance) in the #hash of the URL (that is, renderable ONLY 
on the client-side thus there's no server-side semantic URL to choose), even if 
they still stick that #hash URL into pushState History for backward/forward nav.

Also, bookmarkability and sharability (to these non-server-renderable states) 
basically sometimes prefers still using #hash'd URLs.

The point is, it's not terribly helpful for you to just redirect the 
conversation to an entirely orthogonal (and somewhat more complicated) topic of 
adoption of History API and semantic URLs, especially since this (auto-scroll 
cancelation) is only one part of the main proposal.

We're kind of off on that tangent here and it's probably not helpful to belabor 
it more. I just mentioned it as a side-note, really, earlier in the thread that I've 
had cases where a cancelable event would have been much easier.




[whatwg] Proposing: autoscroll event

2013-05-13 Thread Kyle Simpson
Increasingly, sites are doing client-side rendering at page load time, which is 
breaking the (useful) functionality of being able to have a #hash on a URL that 
auto-scrolls the page to make some element visible after page-load.

A perfect example of this problem is that most #hash URLs (as far as scrolling) 
are broken on gist.github and github when viewed in recent Firefox.

https://gist.github.com/getify/5558974

I am proposing that the browser throw a JS event right before it's about to try 
an auto-scroll to an element with the #id of the #hash in the URL (during a 
page's initial loading), called, for instance, "autoscroll". The purpose of this 
event is to simplify how a web app can detect and respond to a URL having a 
#hash on it that would normally create a scrolling behavior, even if the 
element isn't yet rendered for the window to scroll to. That gist shows how you 
could listen for the event, and store for later use which target-ID was going 
to be scrolled to, and manually scroll to it at a later time.

If you have an app that does client-side rendering where it can break 
auto-scrolling, but you want it to work properly, you can of course manually 
inspect the URL for a #hash at any point, but it's a bit awkward, especially if 
you are already relying entirely on event-driven architecture in the app, and 
you want to just detect and respond to events. This autoscroll event will 
normalize that handling.

Notice the polyfill code in the above gist shows that you can obviously detect 
it yourself, but it's awkward, and would be nice if it were just built-in.

Additionally, having it be a built-in event would allow an app to prevent the 
browser from doing unwanted auto-scrolling in a very simple and natural way, by 
just trapping the event and calling `preventDefault()`. Currently, there's not 
really a clean way to accomplish that, if you needed to.
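
A sketch of what handling the proposed event might look like (none of this 
exists in browsers today; the event name is the proposal itself and the handler 
details are illustrative):

var pendingHash;
window.addEventListener("autoscroll", function (evt) {
  // suppress the browser's own scroll, e.g. because client-side rendering
  // hasn't produced the target element yet
  evt.preventDefault();
  // stash the intended target and scroll to it manually later
  pendingHash = location.hash;
});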



--Kyle




[whatwg] Proposing: some companions to `requestAnimationFrame(..)`

2013-05-13 Thread Kyle Simpson
I'm proposing a couple of companion APIs to the already standardized 
`requestAnimationFrame(..)` API.


First: https://gist.github.com/getify/5130304

`requestEachAnimationFrame(..)` and `cancelEachAnimationFrame(..)`

This is the analog to `setInterval(..)`, in that it runs the handler 
automatically for every animation-frame, instead of requiring you to re-queue 
your function each time. Hopefully that could be made slightly more performant 
than the manual re-attachment, and since this is often a very tight loop where 
performance really does matter, that could be useful.

It will make animation loops, frame-rate detection, and other such things, a 
little easier, and possibly slightly more performant. The code linked above has 
the polyfill (aka prollyfill aka hopefull-fill) logic.
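
The gist has the actual code; a minimal sketch of the same idea (not the gist's 
exact implementation) is simply the automatic re-queueing wrapped up:

function requestEachAnimationFrame(fn) {
  var handle = { canceled: false };
  function tick(ts) {
    if (handle.canceled) return;
    fn(ts);
    requestAnimationFrame(tick);   // automatically re-queue for the next frame
  }
  requestAnimationFrame(tick);
  return handle;
}

function cancelEachAnimationFrame(handle) {
  handle.canceled = true;
}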


--


Second: https://gist.github.com/getify/3004342#file-naf-js

`requestNextAnimationFrame(..)` and `cancelNextAnimationFrame(..)`

`requestNextAnimationFrame(..)` queues up a function not for the current 
upcoming animation-frame, but for the next one. It can be accomplished by 
nesting one rAF call inside another, as the polyfill implies, but again, my 
presumption is that this sort of logic is not only more awkward but also 
possibly slightly less efficient than if it were built-in as I'm proposing.

Why would we need this? Well, there are some sorts of CSS-based tasks which end 
up getting automatically batched-together if they happen to be processed in the 
same rendering pass. For example: if you want to unhide an element (by setting 
display:block) and then tell the element to move via a CSS transition (say by 
adding a class to it). If you do both those tasks in the same rendering pass, 
then the transition won't occur, and the repaint will show the element at its 
final location. Bummer. So, I have to first unhide it in the current 
animation-frame, and then add the class for the transition to the *next* 
animation-frame.

Do that kind of thing enough times (which I have), and you start wishing there 
was a codified API for it, instead of my hack. Thus, my simple proposal here.
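
Again, the gist holds the real polyfill; a stripped-down sketch of the 
nested-rAF idea, applied to the unhide-then-transition example (the element ID 
and class name are made up):

function requestNextAnimationFrame(fn) {
  return requestAnimationFrame(function () {
    requestAnimationFrame(fn);   // queue for the frame AFTER the upcoming one
  });
}

var el = document.getElementById("panel");      // hypothetical element
requestAnimationFrame(function () {
  el.style.display = "block";                   // unhide in this frame
  requestNextAnimationFrame(function () {
    el.classList.add("slide-in");               // start the transition later,
  });                                           // so it isn't batched away
});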




--Kyle






Re: [whatwg] Deferring javascript download and execution until after onload

2012-12-08 Thread Kyle Simpson
Further evidence that the current state of the web is not friendly with respect 
to how browsers default to treating script loading/parsing/executing.

https://www.facebook.com/notes/facebook-engineering/under-the-hood-the-javascript-sdk-truly-asynchronous-loading/10151176218703920

The efforts that Facebook and Meebo are going to, to get around the blocking 
behavior of loading/parsing/execution of JavaScript, are astounding. This is 
just another example of the crazy hacks web apps go to so they can optimize 
this script loading process in the overall web performance picture. Talented 
development teams like Facebook and Meebo wouldn't be doing these crazy hacks 
if there wasn't a real need for the effects they achieve.

From where I stand, this is more solid evidence that we actually DO need more 
fine-grained control over a script load, so that the Facebook/Meebo technique 
wouldn't be needed. Instead, you could simply load whatever you wanted 
asynchronously in the background, in whatever order, and at whatever time, then 
choose when you want each preloaded script to be executed. In that way, they 
get to prevent side-effects on the DOMready/onload of a page without all these 
crazy hacks.

As I have been doing for 2 years now, I once again implore the decision makers 
of the web platform to recognize the validity and utility of this feature 
request. Let us preload scripts by separating (in some way) the download from 
the parse/execution phase. IE has had this simple feature for more than a 
decade, since IE4. That the web platform and other browsers haven't seen the 
value of this yet is dismaying.



--Kyle





Re: [whatwg] Proposal: Loading and executing script as quickly as possible using multipart/mixed

2012-12-04 Thread Kyle Simpson
 One suggestion is to add a state to the readyState mechanism like 
 chunkReady, where the event fires and includes in its event object 
 properties the numeric index, the //@sourceURL, the separator identifier, or 
 otherwise some sort of identifier by which the author can tell which chunk 
 executed.
 
 If the script author needs to manually designate the chunk boundaries,

I don't think a script author needs to manually designate the chunk boundaries. 
I can envision this sort of feature just being part of some low-level automated 
build process, which concats all .js files together into a big file, but 
separates out the chunks.


 can't the script authors insert a call to a function before each
 boundary? That is, why is it necessary for the UA to generate events?

Firstly, the exact same argument could be made for not needing the 
script.onload event, but I don't think that argument would fly very far. Do you 
similarly think we should remove script.onload and just force every script on 
the web to be modified so that script loaders can detect when a script finishes 
loading?

Secondly, you very well may be inserting scripts into the stream which you 
don't own or control, such as third-party plugins or utilities, so modifying 
them to make such calls wouldn't fit with the workflow of just including 
third-party scripts untouched.

You could make a system where you had a global function called 
`scriptLoaded(..)`, which when called essentially fired an event notification 
to anyone who's listening, and then insert `scriptLoaded("foo.js");` at the 
very end of each script part right before the separator. But the drawbacks of 
that should be obvious: having to create another user-land library for 
negotiating that stuff (which increases the overall script payload on the 
page), AND having to create another global namespace object.
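
For concreteness, that user-land workaround would look something like this 
(names are made up, and this is the approach being argued against, not a 
recommendation):

window.scriptLoaded = (function () {
  var listeners = [];
  function scriptLoaded(name) {
    // called from the end of each concatenated part, e.g. scriptLoaded("foo.js");
    listeners.forEach(function (fn) { fn(name); });
  }
  scriptLoaded.listen = function (fn) { listeners.push(fn); };
  return scriptLoaded;
})();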

Thirdly, forcing these notifications to be manually inserted into the stream 
not only makes the stream creation more complex (perhaps less automatable), but 
it then basically makes the front-end side of the equation not be able to do 
passive observation of the loading progress. Passive observation of events is 
an important technique for debug logging, performance monitoring, etc.



--Kyle





Re: [whatwg] Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Kyle Simpson
Adam-

 To load and execute a script as quickly as possible, the author would
 use the following markup:
 
 <script async src="path/to/script.js"></script>
 
 The HTTP server would then break script.js into chunks that are safe
 to execute sequentially and provide each chunk as a separate MIME part
 in a multipart/mixed response.

I like the spirit of this idea, but one concern I have is about the script load 
and readystate events. It seems that authors will want to know when each chunk 
has finished executing (in the same way they want to know that scripts 
themselves finish).

There's a contingent on this list which thinks that all script authors should 
change their code to never have side effects of execution, and should all 
instead be executable by having some other logic invoke them (aka module 
style coding). The reality is that a mixture of both types of approaches will 
be available on the web for any foreseeable future (well beyond the time when 
ES6 has provided first-class module support to all in-use browsers, so probably 
nearly a decade from now I'd think). So authors will likely want to be able to 
monitor when each chunk onloads.

One suggestion is to add a state to the readyState mechanism like 
chunkReady, where the event fires and includes in its event object properties 
the numeric index, the //@sourceURL, the separator identifier, or otherwise 
some sort of identifier by which the author can tell which chunk executed.
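
Purely as illustration of the suggestion (nothing here is an implemented API -- 
"chunkReady" and the event payload are the proposal itself):

var s = document.querySelector("script[async]");   // the multipart-loaded script
s.onreadystatechange = function (evt) {
  if (s.readyState === "chunkReady") {
    // hypothetical properties suggested above:
    console.log("chunk executed:", evt.chunkIndex, evt.sourceURL);
  }
};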



--Kyle





Re: [whatwg] Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Kyle Simpson
  I like the spirit of this idea, but one concern I have is about the script 
  load and readystate events. It seems that authors will want to know when 
  each chunk has finished executing (in the same way they want to know that 
  scripts themselves finish).
 
  Why? What would you do in such an event?
  ...
  Someone pointed out a use-case to me: a progress bar showing how far along 
  the page load is. You could do this without an event by just putting the 
  appropriate bit in each chunk of the script, but you couldn't do this if 
  you use defer instead async (i.e. you want a progress bar, but you 
  don't want the script to execute).


The same diverse sorts of things that authors currently use script#onload for… 
like initializing some code now that you know it's executed and ready to go, 
etc.

For instance, imagine you have a small plugin as the first chunk in a rather 
large file, and you'd like to run some logic to initialize and use that plugin 
as soon as possible, rather than waiting for all the chunks of the multi-part'd 
file to download and execute, so you'd listen for that very first `chunkReady` 
event, or whatever, and fire your code off then, which could/would be much 
earlier than if you waited until the very end of all chunks loading.

My assumption is that this feature, if added, would basically allow an author 
to treat scripts as separately loading items in development mode, thus having 
separate onload handlers as they might normally design, and then for 
production combining all scripts into a single, but multi-parted, concat file, 
and mapping those individual `onload` handlers directly to `chunkReady` 
handlers, one-to-one.

-

I know the code could itself be changed to simulate the same behavior. My bias 
(as I exposed in the previous message) is to see features designed that are 
easiest for existing code to take advantage of. If this feature were added in 
such a way that the only way people could really take advantage of it is if 
they had to rearchitect their code (as some on this list persistently suggest) 
so as to do their own wrapping of code and notifications of each chunk being 
finished, I see this feature as dying a niche death without much widespread 
usefulness.

But if we make it yet another tool in the web performance professional's 
toolbelt to take existing sites which load multiple files and give them a way 
to easily convert them (without any major code changes) over to loading fewer 
files in a multi-part fashion, I can see this feature being pretty useful.



--Kyle








Re: [whatwg] Deferring javascript download and execution until after onload

2012-11-28 Thread Kyle Simpson
Ian,

 The cost of parsing the script can be done async, even off the main thread 
 in theory, so it's a non-issue.

You have asserted many times that parsing is off the main thread, therefore it 
doesn't matter. That makes the giant (and I think faulty) assumption that the 
device in question has enough spare resources to give multiple threads, which 
not all do, and that all threads can run in parallel without hurting each other 
and/or the overall performance.

There have been a number of articles referencing issues where complex scripts 
do in fact take a non-trivial amount of parsing time on limited mobile devices. 
Even if such work wasn't strictly blocking the main UI thread, the fact that it 
may take a lot of processing (or memory) power from the device might very well 
mean that large parsing tasks could starve necessary resources, so even if it's 
on another thread, it's still very much an issue to consider.

I tried quick google searching just now to dig some specific articles up and 
failed to find what I remembered reading, but perhaps others on the list know 
what I'm talking about. I recall someone showing that jquery.js took over 1+ 
second to parse on a mobile device, but I can't remember the exact numbers. I 
remember being shocked at how crazy long jquery.js took to parse.

Here's something though that seems it's related: 
https://github.com/tolmasky/language/issues/18



Moreover, as has been mentioned many times in various threads (and you seem to 
gloss over repeatedly), the gmail mobile team passed down large amounts of 
javascript hidden in a javascript /* … */ comment, so as to prevent that 
large amount of code from being parsed, until later when the hit was 
acceptable, because they felt that the parsing and/or execution was non-trivial 
enough to unacceptably slow down their app's startup time.

IIRC, this was their post: 
http://googlecode.blogspot.com/2009/09/gmail-for-mobile-html5-series-reducing.html
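
The trick being referenced works roughly like this (a sketch of the common 
variant using a function body and toString(); the names are illustrative, not 
Gmail's actual code): the deferred code is shipped inside a comment so it isn't 
compiled at load time, then it's extracted and eval'd when the cost is acceptable.

function deferredModule() {/*
  // ...lots of JavaScript the page doesn't need yet...
*/}

function runDeferredModule() {
  var src = deferredModule.toString();
  src = src.slice(src.indexOf("/*") + 2, src.lastIndexOf("*/"));
  (0, eval)(src);   // pay the parse/execute cost only now, in global scope
}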

In any case, your assumption that parsing is a non-issue seems to fly in the 
face of not only some explicit evidence to the contrary, but there have also 
been some comments in various threads from browser vendor devs that lent 
credence to the fact that parsing could in fact be somewhat costly.

Do you have explicit evidence to the contrary that no such possible performance 
issues during parsing could possibly exist on any device or with any script in 
the wild? It would be nice if you could show such contrary evidence instead of 
dismissing or ignoring the arguments and evidence already out on the table.

The coming module feature (in ES6 or whatever) could make this worse, because 
AIUI, the parsing/compilation stage is where the browser will resolve static 
dependencies, so it's THAT stage which very well might be taking a lot longer 
while it fetches dependency resources.


--Kyle







Re: [whatwg] Deferring javascript download and execution until after onload

2012-11-28 Thread Kyle Simpson
Ian,

 The only cost there could be is the cost 
 of executing the script, and it's already trivial to offload that: just 
 put all the code in a function, then call the function when you're ready.

 It's already possible now to design scripts such that they don't run until 
 you call them, so you could already do this:
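(The kind of wrapping being referred to is roughly the following -- a minimal 
sketch, assuming you control the script's source; `initMyWidget` and 
`doExpensiveSetup` are made-up names:)

// Instead of doing the work at the top level of the file...
//   doExpensiveSetup();
// ...the file exposes it behind a function:
window.initMyWidget = function () {
  doExpensiveSetup();
};

// ...and the page decides when to pay the execution cost, e.g. after onload:
window.addEventListener("load", function () {
  setTimeout(window.initMyWidget, 0);
});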

You ask us not to make duplicate arguments because you say that it just clogs 
up this list and does nothing to change the outcome of your decision. I would 
like to ask that you not do the same. You have said this same thing no less 
than 10 times across various threads and communications. I really don't think 
anyone who's following this particular issue is unclear about your stance.

---

What it boils down to is: you feel the onus is on the developers of the 
scripts themselves to change their scripts so they are more 
performance-optimization friendly for those who use them.

There are a number of us who work in the performance optimization consulting 
arena, and when we consult with a site that's using a bunch of third-party 
scripts, most of which are not written the way you think they should be, those 
clients aren't happy when we have to tell them "sorry, your only option to 
optimize the performance is to make your own modifications to those 3rd-party 
scripts, and then self-host them, and then keep up with merging changes 
constantly"… or some other such impractical nonsense.

Your approach is like tail-wagging-the-dog: let's make sure performance stays 
less than optimal, so that eventually the designers of these scripts have to 
wake up and fix it.

Perhaps you want to drag this issue out long enough (been under discussion for 
almost 2 years now) that all those poorly designed scripts across the web are 
just eventually made obsolete or finally fixed without the web platform 
needing to address it. The rest of us, I think, would like to actually make 
performance gains and optimizations now. It will be years and years before most 
of the popular scripts on the web may be rewritten in the way you suggest. It's 
just a shame that performance has to continue to suffer until then.

Giving us a mechanism by which we could load existing scripts written and 
maintained by others, which we don't control, in a way that is more performant 
than what we can currently do, regardless of how that script is designed to 
self-execute or not, would be a very useful tool to us, despite you insisting 
it's not useful.

Also, some scripts, by nature of what they do or how they do it, will ALWAYS 
have to auto-execute. Consider the feature-tests that jQuery or Modernizr do 
automatically during their initialization. I think it would hurt both jQuery 
and Modernizr and others like them if users all of a sudden had to start 
calling a $.init() or something like that before they could use the script. The 
untold tens of thousands of sites and books which explain how to use these 
scripts would all be rendered completely inaccurate if such a major paradigm 
shift were to happen.


--Kyle





Re: [whatwg] readystatechange for SCRIPT

2011-09-10 Thread Kyle Simpson

Since nobody seems to object, I'm going to revert r6543 and make
onreadystatechange special.


"Since nobody seems to object"? You had this thread active with this 
suggestion for less than a day, and that's long enough to conclude that 
no one objects? Man, am I sorry I was away from my email yesterday. Sheesh.


So, can I clarify something? You have moved `onreadystatechange` and 
`readyState` off of the script element entirely, and onto the HTML 
element? If we have multiple scripts loading at the same time, how do you 
get notified of the different states of each script element, when there's 
only one property and one event handler?


--

In regards to all the concern about double-firing of load detection logic, 
IE9 added `onload` event firing on top of their existing script element 
`onreadystatechange` firing. That's been around now for 6 months (not to 
mention the year long platform-preview stage where content was tested in IE9 
relentlessly).


AFAIK, there've been no major compat problems with that. Why? Because most 
script loaders were already aware of a case (in Opera) where the load 
handler might be fired twice, and so were already doing the filtering with 
the loaded flag. LABjs has done exactly that for over 2 years now, as have 
almost all other script loaders since. This is hardly something new.
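For context, the filtering in question looks roughly like this (a sketch of 
the common pattern, not any particular loader's code; the URL is a 
placeholder):

var script = document.createElement("script");
var done = false;

script.onload = script.onreadystatechange = function () {
  var rs = script.readyState; // only set by IE/Opera-style implementations
  if (done || (rs && rs !== "loaded" && rs !== "complete")) return;
  done = true; // ignore the second firing, whichever event it comes from
  script.onload = script.onreadystatechange = null;
  // ... signal "this script has loaded", exactly once ...
};
script.src = "some-library.js";
document.getElementsByTagName("head")[0].appendChild(script);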


So, I'm not sure why we're rushing to fear these problems. A few years ago, 
maybe this was an issue, but I don't see how there's real evidence of 
current problems. Most script loaders are already immune to this problem.


--Kyle






Re: [whatwg] readystatechange for SCRIPT

2011-09-10 Thread Kyle Simpson
In regards to all the concern about double-firing of load detection logic, 
IE9 added `onload` event firing on top of their existing script element 
`onreadystatechange` firing. That's been around now for 6 months (not to 
mention the year long platform-preview stage where content was tested in 
IE9 relentlessly).


AFAIK, there've been no major compat problems with that. Why? Because most 
script loaders were already aware of a case (in Opera) where the load 
handler might be fired twice, and so were already doing the filtering with 
the loaded flag. LABjs has done exactly that for over 2 years now, as 
have almost all other script loaders since. This is hardly something new.


Furthermore, this problem only presents itself if a script loader listens 
for both the `onload` and the `onreadystatechange` events. Prior to IE9, 
Opera was the only one to fire both, and now IE9+ and Opera fire both, but 
in any case, script loaders that were concerned with working correctly 
cross-browser have had to, for several years now, either:


* listen only for one event or the other, but not both (some do this)
* listen for both events, and keep a flag to filter if the handler is 
double-fired (most do this)


In either case, those are reasonable and long-established well-known 
work-arounds for the double-firing. Any script loader logic which isn't 
currently doing one of those two things is already *years* behind the times 
(and thus has been ostensibly broken in Opera/IE9 for years), regardless of 
the proposed (and now reverted) change of spec'ing `onreadystatechange` for 
script elements that other browsers might have picked up on.


--Kyle






Re: [whatwg] readystatechange for SCRIPT

2011-09-10 Thread Kyle Simpson

So, can I clarify something? You have moved `onreadystatechange` and
`readyState` off of the script element entirely, and onto the HTML
element?


No.  They've been removed from elements (and windows) entirely.  They
remain on Document.


So, if I understand correctly, you've simply said there will be no 
`readyState` property progression for script elements?




In regards to all the concern about double-firing of load detection
logic, IE9 added both `onload` event firing to their existing script
element's `onreadystatechange` firing.


In all modes?


IE9 in standards mode fires both events. Check the test I posted earlier in 
the thread:


http://test.getify.com/test-script-onload-and-readystate.html



So, I'm not sure why we're rushing to fear these problems. A few years
ago, maybe this was an issue, but I don't see how there's real evidence
of current problems. Most script loaders are already immune to this
problem.


Opera pointed to a specific script loader in the Facebook API that is
not thus immune, as well as one in popcornjs.

Given an existence proof like that, most doesn't really cut it for me,
unfortunately.


Those faulty script loaders may indeed exist. My point is, they are already 
broken (or at least susceptible to breakage) in IE9 standards mode and in 
Opera, both of which fire both events for script elements. Given the fact 
that those script loaders are already broken before we even consider what we 
do with the spec or in other browsers, *they* are clearly faulty and should 
be fixed, not used as an indirect excuse to throw the baby out with the 
bathwater. Their breakage is orthogonal to this discussion because it 
predates this discussion, and it's neither helped nor harmed by either 
outcome of this thread's proposal.


Completely undoing/removing `readyState` from script elements doesn't 
actually do anything to address the existing fact that any script loader 
(such as those cited) which is not paying attention to the fact that both 
`onload` and `onreadystatechange` fire for a script element in 2 of the 5 
major browsers is a broken and faulty loader.


Those script loaders should be fixed regardless of what is decided in this 
thread -- that much should be clear. So if they are fixed, then they're a 
moot argument against considering a simple `readyState` mechanism for script 
elements.




Or put another way, I would not be willing to implement readyState on
scripts in Gecko as things stand, without a lot stronger data
supporting the fact that scripts no longer listen for both load and
readystatechange.


I think that's an improper standard. What should be asked is: will the 
proposed change break any existing scripts in new or worse ways than they 
already are? The answer is no. And OTOH, will it help some scripts? 
Apparently, yes (yandex).



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-05-30 Thread Kyle Simpson
[Apologies for being out of the loop on this thread thus far, as I was one 
of the main proponents of it earlier this year. I am now going to dive in 
and offer some feedback, both to Ian's comments as well as to others that 
have replied. I also apologize that this will be an exceedingly long message 
to address all that Ian brought up.]




Problem A:

On Tue, 8 Feb 2011, John Tamplin wrote:


simply parsing the downloaded script takes a lot of time and interferes
with user interaction with the UI, so as awkward as it seems,
downloading the script in a comment in the background, and then
evaluating it when needed does provide a better user experience on
mobile devices.

See
http://googlecode.blogspot.com/2009/09/gmail-for-mobile-html5-series-reducing.html
for the official blog post about this technique.


The problem here seems to boil down to we want our script-heavy page to
load fast without blocking UI, but browsers block the UI thread while
parsing after downloading but before executing.

[...]


There's a whole bunch of other comments later in this thread, as well as in 
the original threads, which seem to focus on the performance side of this 
proposal's justification. I think we've beat this horse to death a dozen 
times now, so I think belaboring it further is counter-productive.


But you must understand that the performance impact of execution/parsing was 
only PART (and in fact, in my mind, the smaller part) of the justification 
for wanting download and parse/execute to be separable.


However, *performance optimizations* as a general goal of web applications 
is much more broad than just the question of if a background thread can 
handle parsing of a script in a non-UI-blocking way.


For instance, the whole concept of dynamic script loading (loading multiple 
scripts in parallel, but executing them in order) is all about performance 
optimization. THAT is a much more compelling set of arguments for this 
feature being requested. So, *performance* is important, but the performance 
of parsing/execution is perhaps a little less important in the overall 
scheme of things.


This thread seems to be so easily side-tracked into the minutia of 
conjecturing about background thread parsing and different implementation 
details. I wish we could just take as a given that parsing/execution of a 
script are not zero-cost (though they may be smaller cost, depending on 
various things), and that ANY control that a web performance optimization 
expert can get in terms of when non-zero cost items happen is, in general, a 
good thing.




PROPOSALS

The simplest solution to problem A seems to be to have the browsers do the
script parsing on a background thread, rather than blocking the UI. This
requires no changes to the specification at all. It can be combined with
lazy downloading by inserting a script node when the script is needed;
basically, it is combining the downloading and parsing background
steps into one.


The thread also makes a lot of references to script async and how that 
seems to be the silver-bullet solution. The problem is two-fold:


1. `script async` only effectively tells a user-agent to make the loading of 
that script resource happen in parallel with other tasks, and that it can 
choose to execute that script at whatever point it feels is good. This means 
that the script can in fact still be executed before DOM-ready, or between 
DOM-ready and window.onload, or after window.onload. Thus, the script's 
execution affects the page's rendering in an intermittent way, depending on 
network speeds, etc.


`script defer`, on the other hand, tells the script to wait on its execution 
until the document has finished parsing (just before DOMContentLoaded).


2. `script async` only effectively delays a script if that script is 
completely self-contained and doesn't have any other dependencies (such as two 
scripts that both need to defer/async themselves). If you need to tell two or 
more dependent scripts to wait until later to execute, `script async` is not 
helpful, as (per spec) the execution order of the scripts is not guaranteed. 
Dynamically loading a script element and setting async=false will ensure 
execution order, but will not alleviate the problem that execution of the 
script (and its effects) may happen earlier than the author would like (such 
as during critical page-loading activities, animations, etc).
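(For illustration, the async=false technique mentioned in point 2 looks 
roughly like this; the URLs are placeholders:)

// Scripts download in parallel but execute in insertion order; the browser
// still picks the execution moment, which is the limitation described above.
var urls = [ "vendor.js", "plugin.js", "app.js" ];
for (var i = 0; i < urls.length; i++) {
  var s = document.createElement("script");
  s.src = urls[i];
  s.async = false; // preserve ordering for dependent scripts
  document.getElementsByTagName("head")[0].appendChild(s);
}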



There doesn't seem to be any need to treat them as separate steps for 
solving problem A.


I believe in the thread earlier in the year, it was (mostly) a consensus 
that while parsing and execution were separate, all that was really desired 
was to separate (aka, delay) execution from the loading, which had the side 
effect of providing a larger window of buffer between load-finished and 
script-executing in which parsing will occur. This allows the user-agent to 
defer parsing to a later time, perhaps even entirely deferred until the page 
asks for a script 

Re: [whatwg] Proposal for separating script downloads and execution

2011-05-30 Thread Kyle Simpson
Sorry for repetition, but we can already preload images and CSS and apply 
them to the page at an arbitrary point in time. Why wouldn't we want the 
same thing for JavaScript?


I think the question is whether you want _more_ than that for JavaScript.

For images, you can preload them and choose when they're shown, but
_cannot_ choose when they're decoded.

For CSS, you can preload it and choose when it's applied but _cannot_
generally choose when it's parsed.

For JS, you want to be able to preload it and control when it's executed
(in the sense that the side-effects it causes become visible).  The
question is whether control is also needed over exactly when
side-effect-free preprocessing of the script happens.


Boris-
The spirit of the proposals is NOT to directly control when parsing happens, 
but seeking relief from the current situation, where execution (and thus 
obviously parsing before it) must happen basically right after download 
finishes. To achieve deferral of execution until a desired time, the current 
UA tech forces an author to perform the download of a script at times when 
it's not necessarily most efficient (such as lazy-loading long after the 
keep-alive is past, or the mobile device's radio has powered down).


We simply want to separate download from execution, so that each can happen 
independently, when a performance optimization expert feels its best for 
them to happen.


Furthermore, if implementations, as a first pass, chose to still insist that 
parsing happened right after download, even though execution could be 
controlled and deferred until later, this would IMHO still solve the 
majority use-case (deferring execution) and would simply leave the 
minority use-case (performance of parsing) unaddressed.


BUT, and this is the big key, it would provide a clear future path for 
implementors to improve this performance, by them getting smarter about 
putting off parsing until some later time, because it would stretch out 
(possibly significantly) the time window in which a UA has to parse that 
script. Right now, a UA must parse it nearly immediately as its going to 
then execute it right away. In our proposals, the UA would often have a much 
larger window to find the more optimal time to decide.


And in the worst case scenario, where the author asks to execute a script 
before the UA has elected to passively do the parsing, we're no worse off 
than we are currently, because the UA would simply force the parse to happen 
right away, and then execute. Worst case: no worse; Best case: parsing 
happens at a more optimal/idle time.


We're not seeking to directly control when parsing happens, but seeking to 
indirectly affect it (for the positive) by making it so that it's possible 
(now or in the future) that UA's don't have to bog down a device (main UI 
thread or background thread) *right now* with parsing of a script if I as an 
author have clearly indicated (by not marking the script for immediate 
execution) that I intend to defer its execution until *later* (if ever).



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-05-30 Thread Kyle Simpson

This isn't practical if the contents of the script are not under the
author's direct control.  For example, an author that wanted to use jquery
would create a script tag with the src set to one of the popular jquery
mirrors (to maximize the chance of the resource being cached), but then have
no control over when the script is actually evaluated.

EXACTLY! That is the main crux of the argument against those who keep saying 
just modify the script to not have side-effects. That sounds great in 
theory, but it doesn't mean anything useful in practice on today's web with 
today's scripts. And that reality isn't gonna change any time in the 
foreseeable future. We need a better mechanism now, that recognizes the 
realities of the current ecosystem of scripts that pages load.



For this use case I think it would be handy to have a way to express "please
download this script but do not start evaluating it until I'm ready."  As a
straw man, what about using the disabled attribute?  When the load completes,
if the disabled attribute is set then the script is not evaluated until the
disabled attribute is unset.

While I like the spirit of what you're asking for, you're actually going to 
create more confusion by creating yet another proposal (#4 or #5 at this 
point) to deal with, but your proposal is not fundamentally different/better 
than the existing ones (in fact, it falls short). I really hope we can keep 
bikeshedding and distractions to a minimum, at this point in the discussion, 
as we already have lots of discussion baggage to wade through.


The `disabled` proposal is elegant in its simplicity, on the surface, and 
would likely cover the use-cases well, but it maps functionally almost 
identically to Nicholas' proposal (proposal #2). Where Nicholas' proposal is 
better, though, is that it covers the necessity of event-handling 
notification, whereas it's quite unclear how such a notification would happen 
within the semantics of a `disabled` script.


In any case, we've already treaded through the pluses and minuses of that 
proposal, so we'd be re-treading the same ground to explore the `disabled` 
proposal you put forth. I'd rather us focus our energy on helping to show 
Ian (and others) the necessity of a solution, rather than on arguing (right 
now) which solution is best.



--Kyle






Re: [whatwg] Proposal for separating script downloads and execution

2011-05-30 Thread Kyle Simpson

If browsers processed (parsed & compiled) scripts in a background thread
it would mitigate the problem, but not solve it. Suppose I have 100K of
JS I need right now to generate the DOM for the initial page, and I have
another 500K of JS that's only needed if the user clicks on FeatureX.
Assuming there's only one background thread, I want to prioritize the
first 100K of JS on that thread, and not have it blocked by the
unnecessary processing of the second script. Also, I only want to do the
processing on the second script if the user activates the feature. This
is important on mobile now to reduce power consumption but is also
important on desktops as CPUs become more power sensitive and JS
payloads grow.

Steve's made an excellent point here, which I have failed to as succinctly 
state thus far. What you first have to realize is that there are valid 
reasons why you'd want your code to download all at once up front 
(connection keep-alive, mobile radio power state, etc). But once all that 
code is downloaded, there are also valid reasons why some of that code is 
more important (to parse/execute) than other code. The current technology 
gives us no way to distinguish, and to ensure that the device spends its 
time parsing/executing the important stuff while putting off 
parsing/executing the less important stuff.


And to this use-case, the only suggestions thus far have been:
1. change your code so it doesn't have auto-execute side effects (not 
practical)
2. let the UA manage this with background threads (partially useful, but not 
wholly sufficient given our suggested use-cases)
3. wait to download the less important code until its needed (inefficient 
use of connections, etc)


We need a mechanism that allows an author to explicitly state what of the 
downloaded code it wants executed, and when. That's the only practical way 
to fully serve this performance use-case in *today's* current state of 
JavaScript code patterns. It's simply unacceptable that the only way to 
address this valid use-case (without code modification) is through various 
hacky tricks like cache preloading.
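(To be clear about what's meant by that, one common form of the 
cache-preloading hack looks roughly like the sketch below. It's fragile by 
design: it only helps if the server sends cache-friendly headers, and the 
exact fetch trick varies by browser.)

// Step 1: fetch the script in a way that doesn't execute it, just to get it
// into the HTTP cache (an object element is one commonly used vehicle):
function cachePreload(url) {
  var o = document.createElement("object");
  o.data = url;
  o.width = 0;
  o.height = 0;
  document.body.appendChild(o);
}

// Step 2: later, insert a real script element for the same URL and hope it's
// answered from cache, so the download cost isn't paid twice:
function executeScript(url) {
  var s = document.createElement("script");
  s.src = url;
  document.getElementsByTagName("head")[0].appendChild(s);
}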



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-05-30 Thread Kyle Simpson

I think there's a valid use case for downloading a script and not
evaluating it immediately.


I think we all agree on that.


Boris-
I wish that were true, at this point, but I'm not sure that it is. The tone 
I got from Ian's post (and even subsequent replies) was that he in fact does 
not see this as a valid use-case that we're asking to solve.


And I don't think he's the only one who still doesn't see the value.

If the conversation continues to devolve into rabbit trails about who 
processes what code on which thread in which UA implementation, we will keep 
missing the overall point, which is that authors want and need to download 
code at different times than that code executes, and they can't just freely 
self-host and modify that code to accomplish that -- they must have a more 
flexible loading mechanism that gives them the ability to separate loading 
from execution.


All other bikeshedding about performance nuances aside, this use-case 
remains valid and is unsolvable with unmodified scripts on today's web, 
without sub-optimal hacks like cache preloading.


--Kyle







Re: [whatwg] Proposal for separating script downloads and execution

2011-03-04 Thread Kyle Simpson
Can someone double-check that onreadystatechange does not actually work for
this in IE9 in standards mode?  IE9 seems to no longer fire
onreadystatechange when the script is not in the document.  (onerror is,
though, which I think is a spec violation.)


http://zewt.org/~glenn/test-script-preload-onreadystatechange/standards-mode.html
(onreadystatechange not fired)

http://zewt.org/~glenn/test-script-preload-onreadystatechange/quirks-mode.html
(onreadystatechange fired)


In IE9 RC1, both those tests fired the "onreadystatechange: loaded" alert. 
Isn't that expected behavior? What led you to believe it was broken?


--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-03-03 Thread Kyle Simpson
So, I'm just curious, what do the participants of this thread (and Ian, 
specifically) feel is the most productive next step? We have 3 or 4 
proposals that have all garnered various support. We've had at least one 
browser-developer informing us on concerns with various aspects of each.


I have, at several times throughout this thread, summarized both pros and 
cons for my proposal. I would still like to see the same fairly done for the 
other proposals, so we could get a succinct statement of where each stands. 
Without simple summaries, it feels like the confusion of this thread has 
just overtaken the process and made it more cumbersome than it's worth.


At one point in the thread, it felt like there was some trend toward 
convergence, which seemed promising for progress. However, at this point, it 
seems like the more discussion we have, the more divergence we find.


Is it time for this issue to be referred to some formal or informal working 
group committee to narrow the field? Is it time to just do some sort of 
democratic voting? Or are the disagreements over the 3-4 proposals simply 
too much for any progress to be made at this time?




--Kyle 



Re: [whatwg] Proposal for separating script downloads and execution

2011-03-03 Thread Kyle Simpson

So, I'm just curious, what do the participants of this thread (and Ian,
specifically) feel is the most productive next step? We have 3 or 4
proposals that have all garnered various support. We've had at least one
browser-developer informing us on concerns with various aspects of each.


As with all other feedback on the spec, at some point I will take all the
e-mail on the topic, read it, throw away any e-mail repeating
previously-made points or just stating support without reasoning, and then
reply, making any appropriate changes to the specification.


OK, then in fairness to posterity (including our future selves), I would 
highly suggest/request that the 3rd (and 4th) proposals recently discussed on 
this list get their own entries on this wiki page:


http://wiki.whatwg.org/wiki/Script_Execution_Control

While Ian may spend the time to wade through this whole complicated thread 
at some point, I think the benefit of the greater community deserves a 
clearer and more succinct summary of where we've come thus far.


--Kyle









HTH,
--
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-23 Thread Kyle Simpson

I don't understand why the preloading specifically would imply different
HTTP caching semantics than normal dynamic script loading?


It doesn't have to.  It's just that if preloading is easy to trigger by
accident and authors don't notice when they accidentally preload lots of
stuff then we may have a problem if we don't coalesce identical-object
(whatever that means) loads.

Normal script loading doesn't have the don't notice issue much,
because a typical script running is noticeable.


I'm curious if we could apply some limit to the number of scripts that 
will be simultaneously preloaded, at say 100 scripts for instance? A 
sufficiently high limit that almost all normal usages of this feature would 
never hit that limit, and yet small enough to prevent the run-away memory 
usage you're concerned about.


The way I'd see that limit working would be that if more scripts are 
requested to preload than the 100 limit, then all the rest will simply be 
blocked in a loading queue, waiting for the script elements to either be 
added and thus execute some from the preload queue, or abandoned/aborted 
(GC'd)... both of which would free up slots in the preload queue, letting 
the browser preload some more.


This would work conceptually very similar to how the simultaneous connect 
limits work right now... from the API perspective, there'd be no difference, 
but the browser would just throttle and delay any loads that go over this 
internal limit. In fact, a browser could probably be free to play with this 
limit a little bit depending on conditions like amount of available memory 
on the computer/device, etc. I don't see any reason that an author would 
need to know what the limits are, or control them, so long as the limit is 
never so low as to prevent the normal use-cases from operating as expected.


To be clear, I'm not saying that no site would ever need to load more than 
100 scripts. I know there are sites out there that do. But I'm saying that I 
don't know of any sites that would have a need to preload that many scripts. 
Script loaders could quite easily be set to begin executing the preload 
queue as soon as that localized part of the dependency graph is fulfilled, 
which could naturally keep the queue being emptied as more scripts are 
preloaded. It would be an extreme condition in which there truly was a 
dependency graph that required more than 100 dependencies in the cycle 
before the execution cursor could advance.


If 100 still seems too low, make it 500. Somewhere orders of magnitude lower 
than the run-away 10,000 scripts case... seems like it could mitigate the 
browser vendors' fears in this area. Thoughts?




--Kyle




Re: [whatwg] Proposal for separating script downloads and

2011-02-23 Thread Kyle Simpson
3. My (and Nicholas's previous) proposal: Script elements are fetched when
inserted into the DOM[1].  An event is dispatched when the script has been
fetched, eg. onfetch or onpreload[2].  A preload attribute is added;
when true, the script will not be executed when the fetch completes; call
script.execute() to run the script.



I strongly prefer this proposal to either of the other two, for what
it's worth.  Is the concern that this doesn't degrade as nicely in UAs
that don't support preload or something?  If not, what _are_ the
arguments against this proposal?  Links to existing discussion are fine
if this ground was already covered.



There are several concerns which, at various times, have been brought up 
about this variation of the proposal. As Glenn stated, this was Nicholas' 
original proposal, but given those questions and concerns, he has adjusted 
his proposal several times. The adjustments he's made to his proposal have 
generally been to converge it in the direction of my proposal, at least to 
some extent.


To briefly restate some of the issues with the original proposal (as 
compared to the alternatives):


1. Not only does IE already have the functionality of my proposal 
implemented, but the spec already has this exact wording in it. The spec 
already suggests that browsers could/should do exactly this preloading, when 
the src is set but the element is not yet appended to the DOM. Moreover, my 
proposal draws on existing precedent for `readyState` and 
`onreadystatechange`, and the way that Image preloading works.


Put plainly, the original proposal is much further from:  a) existing spec 
wording; AND b) existing browser implementation; AND c) existing precedent.


The goal (from my perspective) is to come up with the simplest proposal that 
serves the use-cases. Simplest being defined in this particular situation as 
the least amount of change to the spec, AND the least amount of change to 
the browser that has by far the slowest release cycle (IE).



2. The execute() API concept had several other questions that arose, such 
as:
 a) is execute() sync or async? what does this imply about if the script 
being executed itself calls execute() on other script elements, and so on?


 b) what does it imply about whether/when the event handler(s) would be 
fired? If it's synchronous, are the event handlers also synchronous or are 
they async? Are they fired before or after the execute() actually does the 
executing of the script element?


 c) does this run the risk of going afoul of the same issue that tripped up 
Firefox with their synchronous execution of inline script elements (that 
jQuery used for global-eval)?


 d) what are the semantics if you call execute() on a script element before 
it has finished preloading, or on a script that wasn't preloaded at all? Does 
this simply turn off the preloading execution-deferral flag? Or does it throw 
an error? Would those errors be synchronous (like an actual exception that 
aborts processing) or simply bubble to the script.onerror handler?


 e) what happens if a script's .text is modified before execute() is 
called? What if a script element is cloned before execute() is called? What 
if it's cloned after execute() is called?



3. If in the future we want to also support preloading of other resources, 
like stylesheets for instance, which of the proposals offers the best 
precedent for that? For instance, would it make sense to add a .execute() to 
the link element for applying a stylesheet that had been preloaded? Or 
would the preloading style from my proposal (or even Nicholas' current 
proposal variation) fit more cleanly?


In exploring these issues and questions (and others), some contradictory 
arguments were brought up. In the end, I think Nicholas found it easier to 
simplify his proposal rather than keep going down this rabbit hole. For the 
most part, my proposal doesn't seem to suffer the same complexity of most of 
these questions. And to the extent that some of the questions are 
applicable, those questions already exist and browsers already have answers 
for the normal dynamic script append semantics.


Again, I think the spirit we all share is to find the simplest proposal that 
gets the job done, and introducing a new .execute() concept raised more 
questions than it purported to solve.


BTW, I don't necessarily claim that above to be an exhaustive distillation 
of this entire thread as it related to Nicholas' original proposal, and the 
revival of it that Glenn has been pushing for -- I'm sure I missed a few 
points in my memory recall. But I do think it's at least illustrative of how 
the conversation got a lot more complicated as we started exploring how 
.execute() would actually work.




I sympathize with that, since they're aiming to improve the likelihood of
being implemented--but the precedent it's drawing on seems like a bad one,
which should be treated as a compatibility hack rather than a precedent 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-23 Thread Kyle Simpson

I'm curious if we could apply some limit to the number of scripts that
will be simultaneously preloaded, at say 100 scripts for instance?


I would be fine with that from an implementation standpoint; not sure
about the author-facing aspect of it.


As one of the concerned web-authors, I can't see having much trouble (again 
as long as the limit is high enough not to cause likely deadlock in the 99% 
use-case), nor caring, that the browser had some edge-case limits in place 
to prevent true run-away.


Just taking a swag, what would you say is the highest number that limit 
could be (by default) such that you'd still feel reasonably comfortable it 
protects against run-away? 100? 500? 1000? If the number is 10, this may be 
impractical. If the number is 100+, I think it's quite unlikely to affect 
any of the vast majority of use-cases.


Just as a random thought (perhaps too crazy/out-there)... if we were 
concerned about deadlock when the limit is reached... could this trigger a 
browser warning dialog to the user, like the ones some browsers show to warn 
about long-running scripts? Something like "This page is attempting to load 
but not use a large number of script resources. Do you want to allow this to 
continue?" I dunno, that's probably crazy. But again, I think it's probably 
rare enough that I wouldn't worry too much if that was the fail-safe at the 
end of the line.





The goal (from my perspective) is to come up with the simplest proposal
that serves the use-cases. Simplest being defined in this particular
situation as the least amount of change to the spec, AND the least
amount of change to the browser that has by far the slowest release
cycle (IE).


That last part is an important point, yes.


Some people on this thread I think don't give the pragmatism of IE's release 
cycle as much weight as I do. If that weren't a concern, I probably would 
have been more in favor of Nicholas' v2.1 (current) proposal, based on its 
cleaner semantics. Honestly, it's the IE question that causes me still to 
favor my own proposal. And that's not meant to be a knock on IE. It's just 
meant to recognize the reality of that situation, and try to dance around it 
and find some balance/compromise.


Right or wrong, IE9 is feature-complete, and any change to this mechanism is 
almost certainly targeted for IE10 or later, which could very well be 1-2 
years away (or more). While we may, in practice, still be a ways off from 
full browser compat on this topic, I'm still fighting for the path that at 
least seems shortest. I may be fighting the wrong battle, but that's my 
motivation thus far.




a) is execute() sync or async? what does this imply about if the script
being executed itself calls execute() on other script elements, and so 
on?


Fair enough.  Seems to me that execute() should act just like inserting
an inline script into the DOM does right now.  Browsers already have to
handle that; they could just reuse this code.


There are some additional complications, I think. For instance, if a 
script.execute() call is made, and inside that script, it calls .execute() 
on its own script element... what does that do?


But yes, I agree that, as I've said many times in this thread, the more we 
can reuse of existing code and/or semantics, the better.




b) what does it imply about whether/when the event handler(s) would be
fired? If it's synchronous, are the event handlers also synchronous or
are they async? Are they fired before or after the execute() actually
does the executing of the script element?


Which event handlers?


Well, script.onload for one. And for the IE crowd, also 
script.onreadystatechange. Oh, and the script.onerror handler needs to at 
least be considered as well. If both the execute and the event handlers are 
synchronous, I could see a situation where script.execute() causes a 
script.onerror to fire, which then calls another script.execute(), etc. I'm 
not specifically saying this type of thing is a problem, per se. I'm 
suggesting that there's quite a few questions that have to get thought about 
and spelled out. And we may not all agree completely on those answers, which 
is going to make getting .execute() even specified (much less implemented) 
quite a bit more involved.


If execute() were our only option, that would be the necessary task. If 
there's reasonable and simpler alternatives, which I think my (and Nicholas' 
current) proposals may very well be, I think the process should tend to 
favor that route, absent some strong reason (besides API preference) to 
force us down the more difficult path.




d) what are the semantics of if you call execute() on a script element
before it has finished preloading


Good question, yes.


In the case of DOM-append-to-execute, this question is moot. In the case of 
execute(), it's an open question without full consensus, thus far.




e) what happens if a script's .text is modified before execute() is
called? What if a script 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-23 Thread Kyle Simpson
Again, I think the spirit we all share is to find the simplest proposal 
that gets the job done, and introducing a new .execute() concept raised 
more questions than it purported to solve.


The last dozen or two messages were regarding your rabbit hole, which
raised serious issues.


Serious issues? As far as I know there was only one issue recently raised: 
what happens in the run-away case where someone preloads 10,000 scripts? 
First of all, I don't think that particularly qualifies as serious, 
because it's an extreme corner case at best. There was no evidence or 
argument for it being a valid use-case. It's clear that the concern is only 
for the accidental run-away.


Now, it IS obviously important to browser implementers, so we rightly 
explored it. But, with relatively little fuss/disagreement, the simple 
limit solution emerges at least at first glance as a feasible solution to 
that narrow specific problem. I haven't heard any tangible objections to why 
that wouldn't help ease the browser's concerns.


As for v1 of Nicholas' proposal, which you now champion, I have read the 
answers to (many of) the questions, but they aren't satisfactory as far as 
I'm concerned, so we can't exactly call those issues resolved. If you're 
trying to suggest that the questions about the original proposal are already 
fully resolved, I strongly disagree.


As you put it, that "rabbit hole" is neither as completely explored, nor 
anywhere near as shallow, as the single-question rabbit hole we just explored 
for my proposal.


Moreover, the spirit of a simple proposal is not just about how many 
questions a proposal raises, but the actual simplicity of the eventual 
solution itself. I maintain my proposal is the simplest of the (now 3) 
proposals being discussed, because it has fewer moving parts. As a script 
loader author, I favor an API that is simpler to code against (as long as 
it serves all the use-cases), and one that, where possible, borrows on 
precedent of existing code. Since pretty much all script loaders already 
deal with checking `readyState` and `onreadystatechange` (because IE9 
didn't support `onload`), it's easier to wire in a preloading solution based 
on *that code precedent* than it is to implement an entirely new paradigm 
with `preload=true`  or `execute()` kinds of semantics.


Thus I stand by my assertion that my proposal is more simple than v1 of 
Nicholas' proposal.



 c) does this run the risk of going afoul of the same issue that tripped 
of Firefox with their synchronous execution of inline script elements 
(that jQuery used for global-eval)?


I don't know how an opt-in API that doesn't yet exist and which nobody
is using can run afoul of existing code, so you'll need to be more
specific.


I don't know all the details myself, perhaps Boris or Henri (Mozilla) could 
shed more light. But I *do* know that what prompted Firefox to stop 
enforcing insertion-order on dynamic scripts (which led to breaking LABjs 
and the whole async=false proposal and adoption) back in the fall was 
specifically something to do with problems of guaranteeing (or not?) 
synchronous execution of inserted script elements. Apparently, this was 
causing issues for jQuery and their global eval.


All I'm raising is that this is a relevant issue, at least for Mozilla, 
which has been dealt with recently, and it's prudent that we check to make 
sure that specific sync or async execution of scripts (in Glenn's and 
Nicholas' supported proposals) isn't going to create problems for their 
existing mechanisms, which have already been the recent subject of some 
changes and problems.



I sympathize with that, since they're aiming to improve the likelihood of
being implemented--but the precedent it's drawing on seems like a bad one,
which should be treated as a compatibility hack rather than a precedent for
new APIs.


I strongly disagree with this characterization, based solely on the fact 
that the wording of the current spec already says to do exactly as I'm 
proposing. That's not a compatibility hack, that's further 
standardizing the wisdom that the spec writers already thought through 
and codified.


There's no need to load images that aren't in the DOM, since you can
simply add them to a hidden container in the document.  Loading images
that aren't in any document avoids breaking existing pages--a
compatibility hack.


I'm not basing my arguments for my proposal solely, or even remotely, on 
Image preloading. It's a side issue that Image preloading ALSO works this 
way -- nothing more than a tangential side note that it fits somewhat in 
consistency with how Image preloads, and that also CSS preloading with 
link *could* perhaps work the same way. If *that* is what derails support 
for my proposal, just forget they were even brought up.


What I'm really basing my argument on is the precedent of `readyState` in 
the XHR object (for event handling), the spec wording that specifically 
describes as a suggestion the 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-22 Thread Kyle Simpson

1)  If your script is no-cache, or max-age:0, does IE make a new
request for it for every script element?


For the most part this seems to be the case but there are two exceptions:
   a) Before a URL loads, if it's assigned to another script, only one
request is made.


OK, that would be a violation of the HTTP caching semantics.


Can you explain how, in more detail? In practice I haven't seen IE's 
behavior be a problem, but perhaps I'm not seeing the full context of the 
issue you're concerned with.




IE  9 may mitigate this to some degree by enforcing its standard
garbage collection rules. If only circular references to the script
element exist, IE will abort the network request and never fire the
readystatechange event.

(function(){
  var s = document.createElement('script');
  s.src = '...';
  s.onreadystatechange = function(){ addToDom(this); };
})();


Uh... In that situation I would expect the event handler to keep the 
script alive until the load finishes.  Anything else is just a bug that 
exposes GC timing to the web page.


I've said the same thing to Will before. I agree that a script having a 
circular reference to itself via the closure that's created when its handler 
is created and assigned... *should* have kept the item alive and not GC'd. I 
don't understand why IE GC's in this way.


In any case, for all intents and purposes, for someone to be using the 
preloading as we're suggesting (with either proposal), you'd have to keep 
around a reference to the script element anyway, so that you could later 
programmatically execute it. So, I think this GC quirk of IE would in 
practice mostly be avoided.



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-22 Thread Kyle Simpson

1) If your script is no-cache, or max-age:0, does IE make a new
request for it for every script element?


For the most part this seems to be the case but there are two
exceptions:
a) Before a URL loads, if it's assigned to another script, only one
request is made.


OK, that would be a violation of the HTTP caching semantics.


Can you explain how, in more detail? In practice I haven't seen IE's
behavior be a problem, but perhaps I'm not seeing the full context of
the issue you're concerned with.


If I have a response set to no-cache and you make two requests for it but 
only one of those actually hits the server, then you're clearly caching it 
in violation of the no-cache header.  Is that really that unclear?


Look above at what Will says... he says "before a URL loads" in (a). I 
interpreted that to mean that if I make two requests in rapid-fire 
succession, and the browser hasn't yet gotten the response headers (from the 
first request) to tell it not to cache, then it makes sense from an 
optimization standpoint that IE would see the two simultaneous URL requests 
as the same and assume to only load once instead of twice.


Again, maybe I'm missing something, but the way Will describes it sounds 
perfectly reasonable to me. It might be slightly on the aggressive side, but 
I don't see how that, as described, is violating the HTTP caching semantics. 
I don't see that those semantics imply that a browser must wait to fully 
receive response-headers from a first request before deciding what to do 
with a second request of the same URL.



Because it's the easy way to do it; we had to jump through some hoops in 
Gecko to make sure an async XHR stays alive until it fires its last 
readystate change event when no one is holding a ref to the XHR object.


Right, but in that case, the XHR object has a circular reference to itself 
via the closure of the handler function (assuming an anonymous or in-scope 
function was assigned). I was just saying that in the case of actual DOM 
elements, when a circular reference is created between the DOM element and a 
JS counterpart, through the closure of a handler assigned to the element, I 
assumed this was enough to avoid GC.


I recall in older IE days avoiding stuff like:

var script = document.createElement("script");
script.theobj = script;

Because this created a circular reference, and thus a memory-leak, if you 
didn't forcibly unset before unload the `theobj` reference to break the 
circular ref.





In any case, for all intents and purposes, for someone to be using the
preloading as we're suggesting (with either proposal), you'd have to
keep around a reference to the script element anyway, so that you could
later programmatically execute it.


Well... no.  You could grab the ref in the onreadystatechange handler.


In the most rudimentary of cases, and only assuming the `onreadystatechange` 
handler actually had a closure reference to the script element... it 
wouldn't if, say, you just assigned some outer/global-scope function to the 
`onreadystatechange` property, like:


function handle() {
  if (this.readyState == "loaded") { /* ... */ }
  // but no closure reference to the script element variable in here
}

(function(){
  var script = document.createElement("script");
  script.onreadystatechange = handle;
  script.src = "...";
  // append to DOM
})();

Also, there's a whole set of more advanced preloader functionality at stake 
for script loaders, for which it wouldn't suffice *even if* the only 
reference to a script element was via closure in the handler (and that was 
sufficient to avoid GC). For instance, a script loader that needs to load a 
dozen script files all in parallel, then execute some of them in a particular 
order and others in first-come-first-served order, can't just daisy-chain off 
the handlers; it needs to keep a reference to each script element so that it 
can execute each one in the proper order.


My point was, in practice, most advanced usages of preloading are in fact 
going to have to keep around the reference, thus the GC isn't going to be an 
issue. Only in the simple basic subset of the main proposal use-case would 
this GC bug arise. And it's easily worked around by keeping a ref in scope.



--Kyle






Re: [whatwg] Proposal for separating script downloads and execution

2011-02-22 Thread Kyle Simpson
But note that image loads very explicitly do NOT have HTTP semantics, last 
I checked.  In Gecko they coalesce very aggressively in a cache that sits 
in front of the network layer, including coalescing across documents, etc. 
This cache applies to both in-progress loads and completed loads (it's 
actually a cache of image objects).


This seems strange to me. Generated images (like in captchas, etc.) have to 
be common enough that the same "don't cache unless I say it's ok" semantics 
would apply just as much to JS as to images, right? What's the reasoning that 
says that JS is more likely to be dynamically created (and thus needs proper 
always-request semantics) where images do not have that need?



--Kyle






Re: [whatwg] Proposal for separating script downloads and execution

2011-02-22 Thread Kyle Simpson
This can cause the wrong image to show temporarily, until replaced by the 
right one (which I consider a bug; I think the cache needs to be less 
aggressive).


That approach is clearly not workable for scripts... ;)


No, clearly not. I think we're finally in agreement on something. :)


I think we need to refocus the thread. Boris, you've brought up issues of 
essentially:


1. Will keeping scripts around in memory that never get used lead to 
run-away memory usage?


2. Does the caching behavior of IE do incorrect things (that Mozilla would 
want to avoid)?



For #1, I think we've established this is probably true (for those rare 
corner cases). Perhaps a more sophisticated in-memory content-uniqueness 
cache could be constructed, but it may be more work than it's worth. To push 
the ball forward, in a rough (non-binding) estimate, do you think that 
Mozilla could be persuaded to agree to either of the two proposals, granted 
the potential corner case negative performance, *without* such a 
sophisticated in-memory cache to address some of those concerns? If not, 
would the feasibility of such a system make implementing this proposal 
unlikely? Or would it just be a pre-requisite that made the implementation 
of preloading somewhat more complicated than it appears on the surface?


For #2 (and several other related questions we've been exploring)... 
granted, it clearly seems that IE's implementation is not perfect (but is at 
least getting better as of IE9). But as with the above assertion/question 
about #1... if the correct thing is just to always follow HTTP semantics, 
and assume you have to request every URL until you get caching headers 
saying otherwise... isn't that still feasible within the constraints of 
either of the two main proposals? Granted that it would be diverging from 
IE's bugs in this area, but would it be workable to do so? If not, can you 
clearly articulate why you think the proposals could not fit with existing 
precedent on HTTP caching semantics?


Also, I want to go back to a question I asked earlier in this thread and I 
don't think I quite got a full answer to:


With respect to the HTTP caching semantics (or other related performance 
concerns), *other than the potential waste of unused scripts*, what 
additional concerns does preloading imply that the quite standard current 
practice of dynamically adding script elements to the DOM wouldn't imply? 
I'm trying to figure out why preloading presents additional 
challenges/risks that the current dynamic loading mechanisms don't.




--Kyle





Re: [whatwg] Proposal for separating script downloads and

2011-02-22 Thread Kyle Simpson

First of all, which two proposals are we talking about here?


1. Nicholas' proposal, which is currently to preload a script if its 
script element is marked with a `preload` attribute, before the setting of 
the `src` property. To execute the script, you add the script element to 
the DOM. To detect when the preload finishes, you listen to the `onpreload` 
event.


2. My proposal, which is (by and large) to standardize the functionality 
that IE already has, and that the spec already suggests: preloading happens 
when the `src` property is set before the script is added to the DOM. To 
execute the script, add it to the DOM. To detect when the preload finishes, 
listen for the `onreadystatechange` event to signal that the `readyState` 
property is "loaded".
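(A rough side-by-side sketch of how an author would use each, with 
placeholder URLs; proposal 1's attribute and event names are hypothetical, 
while proposal 2 mirrors what IE already does:)

// Proposal 1: mark the element, listen for onpreload, append to execute.
var s1 = document.createElement("script");
s1.preload = true;                       // hypothetical attribute
s1.onpreload = function () { /* fetched, not yet executed */ };
s1.src = "feature.js";
// later: document.getElementsByTagName("head")[0].appendChild(s1); // executes

// Proposal 2: setting src off-DOM starts the fetch; append to execute.
var s2 = document.createElement("script");
s2.onreadystatechange = function () {
  if (s2.readyState === "loaded") { /* fetched, not yet executed */ }
};
s2.src = "feature.js";
// later: document.getElementsByTagName("head")[0].appendChild(s2); // executes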




It would certainly make implementing it soon unlikely, if such a beastie
is needed.

I guess that's the crux of the question. Is such a mechanism needed to make 
either of those two proposals something palatable to a browser like Mozilla?





For #2 (and several other related questions we've been exploring)...
granted, it clearly seems that IE's implementation is not perfect (but
is at least getting better as of IE9). But as with the above
assertion/question about #1... if the correct thing is just to always
follow HTTP semantics


That's an excellent question.  Is that the correct thing?

For some things (e.g. stylesheets and images) browsers don't do this in
many cases (and the HTML5 spec in fact requires such behavior).  What
should the script behavior be?


Let me restate: I'm not purporting to know what the semantics should or 
should not be. I'm suggesting they should be, per browser, exactly the same 
as normal dynamic script loading, in each browser, already behaves. In other 
words, I've been operating under the assumption that neither proposal 
requires explicitly defining or changing the current HTTP caching semantics. 
I'm hoping that if this assumption is wrong, someone can help me understand 
why?


I don't understand why the preloading specifically would imply different 
HTTP caching semantics than normal dynamic script loading?




--Kyle




Re: [whatwg] Proposal for separating script downloads and execution

2011-02-17 Thread Kyle Simpson
 (so in IE preload would default to true while in FF it would default to 
false).


Let's be clear. In Nicholas' proposal, while the `preload` property may 
default to true or false, the property (I think confusingly misnamed) 
controls  a *behavior*, which is NOT binary true/false. The more useful way 
to think about this is about the default behavior in each browser, not the 
default property value.


You're suggesting that in IE, preload behavior would default to being 
forced, and in FF it would default to being optional (aka, not-forced). 
Regardless of the property's default value, it's confusing that if I set the 
`preload` property to false, I'm not turning off preloading, I'm just 
turning off the *forcing* of preloading.


Which presents the question... what should setting `preload=false` in IE do? 
Should it tell IE to relax its otherwise default-behavior of preloading (and 
perhaps not do it after all)? Or should IE just ignore setting 
`preload=false`?


If at least one browser gets to ignore setting it to false, then shouldn't 
all of them get that option? And if all of them get to ignore it, then why 
even have it be controllable?


This whole line of reasoning seems to move us further from full-compat cross 
browser. I don't like the direction that we're headed. We should be favoring 
convergence over divergence. We should only accept divergence if there's no 
other option. And I think there is another option.



--Kyle 



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-17 Thread Kyle Simpson
The problem with prefetching immediately on src set is that you have no 
idea when or whether the node will get inserted.  So you have to keep the 
data alive... for how long?  From a user's point of view, that's a memory 
leak pure and simple, if the node never gets inserted into the DOM.


"Memory leak" in the sense that the page is holding onto more memory than it 
*potentially* needs to. But not "memory leak" in the sense that this memory 
stays around after the page unloads/reloads, right? I dunno if I'd call that 
a "memory leak" as much as I'd call it "higher memory utilization", or 
maybe "potential memory waste".


How much memory does a 25k JavaScript file take up while sitting in this 
queue? Is it roughly 25k, or is it a lot more? Compared to the 100-300 MB of 
memory that a Firefox instance takes up on my computer, what percentage of 
that would be (or would be increased) if Firefox were also holding onto even 
a large amount (~200k) of not-yet-used JavaScript code in memory?


Also, we have to consider whether the intended usage of this feature, by 
developers, is to unnecessarily waste bandwidth and memory and never use the 
scripts, or if it's in good-faith to eventually use them. Does that mean 
there will never be any memory waste? No. But I don't think it'll be the 
norm, at least based on the interested parties in this discussion.


I'd venture to guess that right now, there's a pretty small amount of code 
out there which is creating script elements en masse but not appending them 
to the DOM. Can't imagine really what that use-case would be (sans the 
preloading we're discussing). The likelihood is that the majority of tools 
that would be doing this new technique would be doing so intentionally, with 
the clear intent to later use that script.



--Kyle 



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-17 Thread Kyle Simpson

I dunno if
I'd call that a memory leak as much as I'd call it a higher memory
utilization, or maybe potential memory waste.


Most users will call continuously increasing memory (which is what you'd 
get if a page creates script elements, sets src, and then doesn't insert 
them, perhaps by accident) a memory leak.


In my experience, the term "memory leak" (both in the IE sense and in the 
C-programming sense) is not about continuously increasing, even run-away 
memory usage, so much as it is that memory gets into a state where it cannot 
be re-claimed. I don't see you saying that this memory usage would be of 
that type, so I still don't think it's right to call it a "leak" as much as 
a "non-release (yet)". But that's probably just a side semantics issue. You 
say tomato, I say tomato.




or is it a lot more? Compared to the 100-300
MB of memory that a Firefox instance takes up on my computer, what
percentage of that would be (or would be increased) if Firefox were also
holding onto even a large amount (~200k) of not-yet-used JavaScript code
in memory?


My worries are cases where a page inadvertently makes you hold on to tens 
or hundreds of megabytes of js, not about the 200k case.


Do you have any example where hundreds of megabytes of JavaScript is being 
loaded onto pages? Even tens of megabytes seems quite extraordinary. I 
believe I recall reading somewhere that the average amount of JavaScript on 
the Alexa 200,000 is like 375k. I think the most I've ever personally seen 
is around 2-5 MB. Even if we factor in long-running pages (like Gmail, for 
instance), I can't fathom that during the course of the page lifetime, all 5 
MB of JavaScript code is being re-downloaded fresh, every few minutes or 
hours, in which case a run-away scenario of hundreds of megabytes might 
occur. Even then, it would only occur if that site were re-downloading their 
entire code again, over and over, but never using it. That would be bizarre, 
indeed.


And not only would it be bizarre if that were happening by design, but it'd 
be bizarre if it were just a random flaw in their software, one that doesn't 
bite them in other browsers yet which they somehow avoided in IE, 
where it should already be killing them.



I think you underestimate how often scripts just have bugs in them.  I'm 
not saying someone would create a few million nodes and then not insert 
them in the DOM because they're _trying_ to do something dumb.  But that 
sort of thing scripts do all the time.


You're assuming scripts mean to do everything they do.  That's not a good 
assumption, unfortunately.


Here's what I'm assuming: more than not, this feature will be used 
appropriately. That's not an exclusive thing that ignores mistakes or bugs. 
But the nature of what we're suggesting is not quite as likely to be 
accidentally happening as many other potential features we might discuss for 
HTML.


I haven't seen any examples of existing sites where the millions of script 
nodes phenomena is happening right now, which would be potential landmines 
for this newly suggested preloading functionality. The fear of it being 
theoretically possible seems much more intense than any evidence or logical 
reasoning for it being probable.


I also am on record as saying that I think it's a bad idea to avoid a useful 
(to some) feature for fears that others (probably the minority) will abuse 
or misuse it. That's why we have technical/performance evangelism. That's 
one way the open-source community is so vibrant. In fact, that's why Firefox 
was able to contact several different websites which broke as a result of 
the whole async=false thing, and re-educate them on how to properly 
proceed. To my knowledge, that process worked ok, and I think it's a decent 
model for going forward.


Unless someone can show that the majority of sites (or even just something 
greater than a tiny fraction) are going to choke. And if that can be shown, 
I welcome it. But I'd also be extremely curious as to how those same sites 
are (probably) surviving fine in IE, which has been doing this preloading 
for a decade or more. It's I guess possible, but I'd highly doubt, that 
there are sites which, only for IE, are intentionally avoiding preloading by 
not creating script nodes, whereas they happily queue up millions of nodes 
in all the other browsers.




--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-17 Thread Kyle Simpson

Do you have any example where hundreds of megabytes of JavaScript is
being loaded onto pages? Even tens of megabytes seems quite
extraordinary.


Think 10,000 script elements all pointing to the same 25KB script.  If 
you're forced to preload the script at src-set time, that's 25MB of data.


And if the argument is that the scripts can share the data, I don't see 
what guarantees that.


I don't know of any browsers which are set to download more than 8 parallel 
connections. I can't imagine that you'd have 10,000 separate downloads of 
the same resource. Depending on race conditions, you might get a few extra 
requests in before the item was cached (if it caches). But, once the item is 
in the cache, any additional identical requests for that element should be 
pulling from the cache (from a network-request-level, that is), right? If 
they request a script 10,000 times and the script doesn't cache, or its 
contents are all different, then yeah, that page's resource utilization is 
going to already be crazy abnormally high... and will potentially be 
exacerbated (memory-wise) by the preloading functionality in question.


The question becomes, can the browser create a unique in-memory cache 
entry for each distinct script contents' processing, such that each script 
element has a pointer to its appropriate copy of the ready-to-execute script 
contents, without duplication? I can't imagine the browser would need 
separate copies for identical script contents, but perhaps I'm missing 
something that prevents it from doing the uniqueness caching. I know very 
little about the inner workings of the browser, so I won't go any further in 
guessing there.
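Purely to illustrate the uniqueness-caching idea (this is not a claim about 
how any engine actually works internally), a loose sketch would be to key the 
stored, ready-to-execute copy by the resource's identity so that duplicates 
share one entry:

    // hypothetical sketch: one shared in-memory entry per distinct script,
    // with every script element just holding a reference to that entry
    var preloadCache = {};   // key: URL (or content hash) -> shared entry

    function getSharedEntry(url) {
      if (!preloadCache[url]) {
        preloadCache[url] = { url: url, contents: null, pending: true };
        // ...a single fetch would populate preloadCache[url].contents...
      }
      return preloadCache[url];   // 10,000 elements, still one stored copy
    }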


Even if they can't, what it means is that these edge cases with 10,000 script 
requests might get exponentially bad. But it still seems like the normal 
majority cases will perform roughly the same, if not better. Or am I missing 
something?



Doing that while obeying HTTP semantics might be pretty difficult if you 
don't have very low-level control over your network layer.


I'm not sure what you mean by HTTP semantics if it isn't about caching. 
But I don't think there'd be any reason that this proposal would be 
suggesting any different handling of resources (from a HTTP semantics, 
network-request-layer perspective) than is already true of script tags. Can 
you elaborate on how the current network-resource handling semantics of 
script tags would create HTTP related issues if preloading were happening?


I wonder how IE is handling this, seemingly without too many issues (since 
they've been doing it forever).


In other words... if I loop through and create 10,000 script elements (no 
DOM append), and that causes 10,000 (or so) requests for that resource... 
how is that different/worse than if I loop through and create 10,000 script 
elements that I append to the DOM? Won't they have roughly the same impact 
on HTTP-layer loading, caching, etc?



Sure.  That doesn't mean we shouldn't worry about the edge cases.  It 
might be we decide to ignore them, after careful consideration.  But we 
should consider them.


OK, fair enough. I agree it's worthwhile to consider them. Sorry if I 
overreacted too strongly to you bringing it up.



Yes, and I'm on record saying that I need to think about my users and 
protecting them from the minority of incompetent or malicious web 
developers.  We just have slightly different goals here.


Also a fair statement.


To my knowledge, that process worked ok, and I think it's a decent model 
for going forward.


Just to be clear, that process, on our end, was a huge engineering-time 
sink.  Several man-months were wasted on it.  We would very much like to 
avoid having to repeat that experience if at all possible.


It's a shame that it's being viewed as wasted. With respect to 
async=false, there was already going to be some site breakage that cropped 
up with Firefox's script-ordering change, regardless of whether I had jumped in 
to complain about the LABjs breakage and the proposal for async=false. 
And, while async=false took a lot of extra time to discuss and agree on, 
it sure seemed like it ended up being the solution to all those other sites 
problems. I'm obviously not on the Mozilla team, so perhaps I'm unaware of 
additional burdens that process placed on the team.


I do think it's fair to say that the magnitude of impact for changing script 
ordering semantics and changing script network-loading semantics are not 
really the same. But yes, I'm sure there would be a few sites who broke in 
some random way -- there always is.


Is it better to break a few sites so that many other sites can start 
improving performance? I dunno, but I tend to think it's worth considering 
that tradeoff. I've heard a similar argument made earlier in this thread.



Does IE obey HTTP semantics for the preloads?  Has anyone done some really 
careful testing of IE's actual behavior here?


I've done a lot of careful testing of IE's actual 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-15 Thread Kyle Simpson

Although I'm not aware of anyone wrapping a 250KB style-sheet in
comments, the pre-loading interface could seemingly be applied to any
number of elements.  Nicholas' original e-mail referenced a blog post
by Stoyan Stefanov which details a way to pre-fetch both scripts and
stylesheets.


It's true that many developers have created various tricks for dynamically 
loading stylesheets. Since the link element doesn't fire an event when the 
stylesheet finishes loading, they've resorted to a number of hacks, usually 
related to polling some DOM element's calculated style to see if the 
stylesheet has been applied yet.
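(For context, one common form of that hack looks roughly like this; the probe 
element, the property, and the expected value are all placeholders that the 
loaded stylesheet is assumed to set.)

    // rough sketch of the calculated-style polling hack for stylesheet loading
    var probe = document.getElementById("css-probe");   // placeholder element
    var poll = setInterval(function () {
      var style = window.getComputedStyle
        ? getComputedStyle(probe, null)
        : probe.currentStyle;                           // older-IE fallback
      if (style && style.marginLeft === "42px") {       // value the stylesheet applies
        clearInterval(poll);
        // stylesheet has been applied; run the dependent logic here
      }
    }, 50);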


However, I haven't seen nearly as many people who are wanting to preload 
stylesheets (that is, load them but not have them applied). That doesn't 
mean it's not a valid use-case (it very well might be), but I don't see 
there's nearly as much evidence of people doing that as there is for the 
current use-case under discussion (preloading scripts). I do see that 
there's a pretty common use-case where they want to be able to load a 
stylesheet dynamically, and be notified with a normative event when the 
stylesheet finishes, so they can execute some further JavaScript logic. But that's 
quite different from saying that they need to preload stylesheets.


If you're suggesting that we broaden the scope of this discussion to also 
include the use-cases for preloading of stylesheets, then I think that is 
not a good idea. I understand the desire to solve preloading in a 
*consistent* way (I will address that further in a moment) that would work 
for other resource types, but this discussion thread is already quite 
over-weighted with discussions just about scripts. Introducing stylesheets 
into the mix may very well cause the discussion to cross the tipping point 
into unmanageable.


Besides, if we're talking about adding stylesheets into the list of 
resources that should support preloading, why not open the conversation up 
to all types of media: images, video, audio, favicons, etc. I don't see why 
if we're going to broaden the scope of the discussion, we wouldn't just talk 
about all of those different containers' preloading mechanisms.


For the sake of discussion though, let's examine stylesheet preloading 
briefly: there's no reason that stylesheet preloading couldn't work exactly 
as I'm describing (my proposal) for script preloading. In fact, if we're 
looking at the broader context of resource preloading, there's even more 
precedent for doing it this way, when we consider that this is how Image 
preloading has worked for ages. Images are preloaded when the element's 
`src` is set, but are obviously not rendered until added to the DOM. If 
we're going for consistency, I'd say this is even more evidence for my 
proposal.
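(That familiar pattern, with a placeholder URL:)

    // classic image preloading: the fetch starts as soon as src is set;
    // nothing is rendered unless/until the element is added to the DOM
    var img = new Image();
    img.onload = function () { /* image is now cached and ready to display */ };
    img.src = "/sprites.png";   // placeholder URL; request begins here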




Requiring authors opt-into the behavior seems best at least in the
short term and readyState does not provide this mechanism.


I haven't seen any arguments which suggest that requiring an author to 
opt-in to preloading is necessary to avoid problems. Authors don't/can't opt 
into it in IE, nor does the spec currently give authors any way to opt in or 
out of the behavior, if the browser implements the current spec suggestion. 
Is there any evidence of any compat issues if a resource is preloaded? I've 
not seen any valid examples, only speculation about possible/theoretical 
issues.





Making RPC
or Ad calls can require disabling this functionality in IE and create
quite a kluge. (1)


I'm sorry, I don't understand this claim at all. Can you elaborate?



OTOH, with readystate, the
tendency will be to add logic for both preload and onload into a
single handler,


This is a completely specious argument. You have no evidence that the 
responsible few web authors (in the resource-loading toolset community) who 
are advocating for preloading are going to act irresponsibly and overload 
event handlers in a way that is going to lead to further breakage. And even 
if someone did that, it would easily and obviously break, and they'd be 
shown up for doing it wrongly, whereas the others of us in the 
resource-loading toolset community who do it correctly will be shown to have 
implemented it as intended.


I've said before, I think it's a bad idea to make decisions based upon 
speculation about how bad habits of some of the development community will 
abuse something. Perhaps that prevailing line of pessimistic reasoning was 
applicable 10 years ago, but I chose to believe optimistically that in 2011, 
as the community works much more closely (this thread is proof!) with the 
spec process, quality improvements to the HTML technology/implementation are 
achievable, which will be used responsibly by those who are informed, for 
greater good, than by those who are ill-informed and do it wrongly.




Most concerning, however, is that adopting readyState will undoubtedly
create compatibility issues.  It's quite common to register for both
onload and onreadystatechange, testing for the 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-14 Thread Kyle Simpson
You may be correct in that people may never want to set preload to false. 
You'll note that I put in my proposal that an alternate approach would be 
for preload to be set to true by default.


Since your proposal also says that setting `preload` to `false` wouldn't do 
anything except not *require* the preload (in other words, it wouldn't 
strictly prevent the preload), then what would be the use of someone being 
able to set it to `false`? In other words, what's the benefit of being able 
to tell the browser, "Preloading is not required, but you can still preload 
if you want to"? That seems basically like a moot no-op.


If preload is going to default to `true`, and setting it to `false` is 
really a moot functionality, we're almost back to my core proposal, except 
for the fact that having an explicit `preload` property gives an admittedly 
nicer feature-detect.




This would allow even easier feature detection...


Honestly, whether `preload` defaults to `false` or `true`, your 
feature-detect for your proposal can be more simplified (no use of `typeof`) 
like this:


if (script.preload === true)  /* or */  if (script.preload === false)


I think changing the behavior of dynamic script elements to match IE's isn't 
a bad idea, but...


Nicholas, I would still like to hear your thoughts/response on the core 
reason I'm pushing to **identically** match IE: that if we specify something 
that IE will have to change about their implementation, we're automatically 
pushing out the time-frame of when we might possibly get to full-compat on 
this issue, from say 4-8 months (reasonable for all other browsers to 
respond) to 1-2 years (the typical release-cycle for IE).


I have conceded that your v2.1 proposal is both more semantic and has a 
better feature-detect than my proposal. BUT, as is often the case, the 
pragmatics of how we can achieve full-compat sometimes outweigh the benefits 
of holding out for the more correct solution.


Given the convergence of proposals, with that point being really the last 
major sticking point, I think it's time to start talking in terms of the 
pragmatics. I believe this is a case where the pragmatics of existing 
implementation and spec wording have greater influence than the desire to 
create new precedent for the sake of correctness.




--Kyle




Re: [whatwg] Proposal for separating script downloads and execution

2011-02-13 Thread Kyle Simpson
I've compiled a WHATWG Wiki page detailing both Nicholas' most recent (and 
simplified) proposal (v2.1), as well as mine:


http://wiki.whatwg.org/wiki/Script_Execution_Order_Control

In essence, the two are somewhat converging, though are still distinct in 
important ways. Nicholas's proposal now includes relying on DOM appending to 
execute a script (instead of using a new `execute()` method), in agreement 
with my proposal.


But he relies on a new property `preload` to signal that preloading should 
happen before DOM append (instead of how it automatically happens in IE and 
in the Specification, currently). He also specifies a new event `onpreload`, 
whereas my proposal uses the existing precedent of the `readyState` property 
and `onreadystatechange` event firing.


I've stated before in this thread several reasons why I still prefer my 
proposal to the one Nicholas is advocating. I won't repeat those. But, while 
his changes and simplifications have greatly improved his solution to the 
point where many of my original concerns are almost moot, there's one 
fundamental point which I cannot move past.


My proposal seeks to codify what IE already does, and what's already a 
suggestion in the Specification. Since IE is the more slower-moving of the 
various browser vendors, I'm attempting to codify a solution that is more 
likely to see adoption soon. If we specify anything that requires changes 
by IE, while those changes are of course possible, the timing of them (in 
relation to IE9 RC -- feature-complete -- being recently released) will be 
in jeopardy of not happening any time soon (until at least IE9.1 or IE10, 
which could be a year or more off from now). My proposal accepts IE's 
current behavior without change, which in general may give us a quicker path 
to full implementation in all browsers.


Moreover, the strict reading of Nicholas' proposal is that a browser should 
not preload a script resource if the `preload` property is not set to 
`true`. This has two implications:


1. It contradicts the existing Specification performance-suggestion, which 
would of course need to be amended to fit; AND


2. More importantly, it requires that IE, to adhere to the strict behavior 
wording, must *change* their existing automatic pre-fetching, so that it not 
occur unless `preload` is true. Requiring IE to change their existing 
behavior in this way is likely to lead to one of two outcomes:


  a. IE agrees to pin the behavior on the `preload` property, but to reduce 
backwards-compat problems with their browser community's content, they insist on the 
default behavior being `preload=true`. If this happens, the spec should 
seriously consider aligning with that, because having different default 
behaviors in different browsers will only complicate the situation, where 
with this proposal we're trying to remove hacks and complication for simpler 
functionality.


  b. Or, IE will refuse to change their behavior to be dependent on 
`preload`, citing fears of backwards-compat problems (loss of performance), 
in which case we have failed to achieve compat cross-browser (a very 
important and stated goal of these proceedings).



Specifically related to (2), I would say that, barring some further input 
from the IE team to contradict my observations and assumptions, the more 
solid path forward to full cross-browser compat is to standardize on the 
existing IE behavior as my proposal suggests.




--Kyle



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-13 Thread Kyle Simpson

This change wasn't mentioned here, and introduces a lot of problems.

- script onerror is only dispatched for fetch errors, not syntax errors, 
which makes error detection harder.
- If the called script throws an exception, the synchronous execute() 
model allows the exception to be raised by execute().  With this model, 
they go straight to the browser and they're much harder to detect.


I can see why having `onerror` also fire for parsing (or even run-time) 
errors might be helpful, but I'd consider that orthogonal to this 
discussion. We don't have that now, so it's not necessarily a short-coming 
that this proposal doesn't get into the complications of that discussion. 
This proposal process should, I believe, be as simple and straightforward as 
possible, and not some comprehensive review/change of all of script's 
functional characteristics.


We could (and probably should) consider that kind of thing in a separate 
proposal. There are many things about the script element's events which are 
quirky and could benefit from some further clarification from the spec. This 
is part of my overall goal in addressing the various short-fallings, but I 
don't think we need to necessarily bog down this proposal with that 
additional line of argument.


In my opinion, especially because both main proposals now seem to rely on 
the normal browser script execution model, with a script element simply 
being added to the DOM, we shouldn't be concerned that some additional 
potential error checking that `execute()` might have given us is now gone... 
rather, we should just consider that as future discussion that needs to 
happen separate from this thread/proposal.



- The scripts won't be executed immediately if there are already any 
scripts on the list of scripts that will execute in order as soon as 
possible; they'll be deferred until it's their turn.


You seem to suggest this is a bad thing. I actually think it's a good thing 
that we're keeping script execution as much as possible in the existing 
architecture. There's lots of different reasons why the queues and behavior 
are set up the way they are, and I can say that I never intended this new 
add a script to DOM to execute suggestion was meant to imply some entirely 
different the browser must execute this now or else kind of model. That's 
a much more complicated road to go down, and one which I think we'll likely 
derail either of the two main proposals.



Moreover, the strict reading of Nicholas' proposal is that a browser 
should not preload a script resource if the `preload` property is not set 
to `true`. This has two implications:


Maybe this was changed since you sent this mail, but: "When preload is 
false, the user agent may download and execute the external script 
according to its normal behavior."  Setting preload to true requires 
preloading, but leaving it at false should change nothing.


Perhaps on my initial reading I missed that section (I apologize if so), or 
perhaps Nicholas added it later. Either way, it presents us with an 
interesting situation, one which I'm neither sure I support nor disagree 
with at the moment.


Basically, the suggestion is that `preload` is how a web author can force 
the browser from its hinted "you may preload" to "you must preload". I think 
this has the potential for confusion. It's like saying, "If I set a script 
element to `async`, it will definitely be asynchronous, but if I don't set 
it to `async`, then it may or may not be asynchronous; I'm just not sure." 
The same confusion would be true of `defer`, `disabled`, and a whole host of 
other attributes/properties on HTML elements that come to mind.


The strong precedent is that such boolean attributes convey the semantics of 
binary (on or off), not (on or maybe on). That's a strange new semantic 
precedent to introduce.


If we were to go the route of Nicholas' proposal, I think the name should be 
`forcePreload`, to signify that setting it to `false` doesn't mean "don't 
preload", it simply means "don't force the preload".



[1] Note that FF3.6 does execute a script immediately when it's inserted 
into the document, if the script is cached.  I'm pretty sure that's a bug. 
Whether due to a bugfix or simply being masked due to changes in cache 
behavior, it doesn't seem to happen in FF4.


I'm almost positive that what you've identified is what led Firefox to 
address the whole script order thing in the first place for FF4, which is 
what led to the cascade of changes, like async=false, etc. IIRC, there was 
some bug with jQuery's globalEval that precipitated them addressing the bug 
you point out. Check the Mozilla bug tracker for more info.





--Kyle



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Kyle Simpson
We've gone back and forth around implementation specifics, and now I'd 
like to get a general feeling on direction. It seems that enough people 
understand why a solution like this is important, both on the desktop and 
for mobile, so what are the next steps?


Are there changes I can make to my proposal that would make it easier to 
implement and therefore more likely to have someone take a stab at 
implementing?


Nicholas, if you're sticking with your original proposal of `noexecute` 
on script elements, then a mechanism should be specified by which it can 
be detected that the script has finished loading. As stated earlier, 
`onload` isn't sufficient, since it doesn't fire until after a script has 
finished (including execution). Are you proposing instead a new event, like 
`onloadingcomplete` or something of that nature?


Otherwise, the next most obvious candidate for an event, using existing 
precedent, would be the `readyState=loaded`, coupled with that event being 
fired by `onreadystatechange`, as happens currently in IE (similar to XHR).


Once we have some event mechanism to detect when the script finishes 
loading, then your original proposal breaks down to:


1. Add a `noexecute` property on dynamic script elements, default it to 
false, let it be settable to true.

2. Add an `execute()` function.

For `noexecute`, we need a clearer definition of whether this proposal makes 
it only a property on dynamic script elements, or also a boolean 
attribute on markup script elements. If the proposal includes the markup 
attribute, we need a clearer definition of the semantics of how that would 
be used. As stated, <script src="..." noexecute onload="this.execute()"> 
doesn't work (chicken-and-the-egg), so in place of that, what is a concrete 
example of how the `noexecute` boolean attribute in markup would be used and 
useful?


The `execute()` function needs further specification as to what happens if 
execute() is called too early, or on a script that already executed, or on a 
script that wasn't `noexecute`, as Will pointed out.




Is there a concrete alternate proposal that's worth building out instead?


Aside from the event system questions, which is required for either 
proposal, the concrete alternate proposal (from me) is simply:


1. Change the spec's suggested behavior of preloading before DOM-append to 
required behavior, modeled as it is implemented in IE.



As to whether this one is more worth building out than your original 
proposal, my support arguments are:


1. entirely uses existing precedent, both in wording in the spec and in IE's 
implementation.
2. requires fewer new additions (no extra function call), which means less 
complexity to work through semantically (see above questions about 
`execute()` semantics)



I haven't heard on this thread any serious discussion of other workable 
proposals besides those two. Correct me if I'm wrong.




Early on it seemed there was general consensus that changing the existing
MAY fetch-upon-src-assignment to MUST or SHOULD.


I'm not sure there's been consensus on this yet, but there's definitely been 
some strong support by several people. I'd say the two proposals are about 
even (maybe slightly in favor of `readyState`) in terms of vocalized support 
thus far.




Since that is only
tangential to this proposal, provides immediate benefit to existing code,
and can satisfy use cases that do not require feature-detection or 
strictly

synchronous execution.


I'm not sure what you mean by do not require feature-detection. I think 
it's clear that both proposals need feature-detection to be useful. In both 
cases, we're creating opt-in behavior, and you only want to opt-in to that 
behavior (and, by extension, *not* use some other method/fallback) if the 
behavior you want exists.


If I created several script elements, but don't attach them to the DOM, and 
I assume (without feature-testing) that they are being fetched, then without 
this feature they'll never load. So I'd definitely need to feature-test 
before making that assumption.


Conversely, with `noexecute`, I'd definitely want to feature-test that 
`noexecute` was going to in fact suppress execution, otherwise if I start 
loading several scripts and they don't enforce execution order (which spec 
says they shouldn't), then I've got race conditions.




I'm hopeful the change would generate activity around these bug reports.

https://bugs.webkit.org/show_bug.cgi?id=51650
https://bugzilla.mozilla.org/show_bug.cgi?id=621553


I think it's a mistake for those two bug reports not to make it clear that 
an event system for detecting the load is a must. Without the event system, 
a significant part of this use-case is impossible to achieve.



--Kyle






Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Kyle Simpson
Once again, the problem with changing how src works is that there's no way 
to feature detect this change. It's completely opaque to developers and 
therefore not helpful in solving the problem.


I still believe the feature-detect for my proposal is valid. It's obviously 
not ideal (often times feature-detects aren't), but I don't think we should 
suffer a more complicated solution just so we can get a slightly more 
graceful feature-detect, when the simpler solution has a functional 
feature-detect. So far, the feature-detect issue is the only thing I've 
heard Nicholas push back on with regards to my proposal.


To restate, the feature-detect for my proposal is:

(document.createElement("script").readyState == "uninitialized") // true 
only for IE, not for Opera or any others, currently


In fact, the precedent was already set (in the async=false 
proposal/discussion, which was officially adopted by the spec recently) for 
having a feature-detect that uses not only the presence of some property but 
its default value.


(document.createElement("script").async === true)

Many of the same reasonings I gave there for that type (slightly 
unconventional compared to previous ones) of feature detect are exactly the 
reasons I'm suggesting a similar pattern for `readyState`. Extending the 
feature-detect to use a property and its default value is a delicate way of 
balancing the need for a feature-detect without creating entirely new 
properties (more complexity) just so we can feature-detect.
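Putting that together, a loader would branch roughly like this (the fallback 
branch is just whatever technique the loader already uses today):

    // feature-detect for the readyState / early-src-fetch proposal, following
    // the async=false precedent of testing a property plus its default value
    var testEl = document.createElement("script");
    var canPreloadOnSrcSet = (testEl.readyState === "uninitialized");

    if (canPreloadOnSrcSet) {
      // set src now (starts the preload), append to the DOM later to execute
    } else {
      // fall back to the loader's existing loading technique
    }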


While it isn't as pretty-looking, in the current state of how the browsers 
have implemented things, it IS workable. The set of browsers and their 
current support for `readyState` is a known matrix. We know that only IE 
(and Opera) have it defined. And given the high visibility of this issue and 
our active evangelism efforts to the browser vendors, it's quite likely 
that all of them would understand that the `readyState` default value is the 
feature-detect hook for this part of the proposal.


The only wrinkle would have been Opera possibly changing the default value 
to "uninitialized" but not implementing the proposed underlying behavior. 
Thankfully, they already commented on this thread to indicate they would act 
in good faith to implement the full atomic nature of the proposal (not just 
part of it), so as to preserve the validity of the proposed feature-detect.


I know Nicholas has expressed reservations about that feature-detect. But I 
would say that there needs to be hard evidence of how it will break, not 
just premature fear that some browser vendor will go rogue on us and 
invalidate the expressed assumptions.




Summary of changes:
* Changed noexecute to preload
* No HTML markup usage
* No change to load event
* Introduction of preload event
* Removed mention of readyState

I'd appreciate hearing feedback on this revision from everyone.


Firstly, I like the changes Nicholas made to his proposal. I think `preload` 
and `onpreload` are definitely clearer than `noExecute` and whatever the 
`onfinishedloading` event would have had to be. I still think his proposal is more 
complicated (and thus faces a steeper uphill journey to spec acceptance and 
browser adoption) than `readyState` preloading, but it's definitely clearer 
and more semantic than the original version.


If we ended up deciding to go with Nicholas' proposal, I'd at least suggest 
that `.execute()` on a not-yet-loaded script should not throw an error, but 
should just remove/unset the `preload` flag, such that the script will just 
execute as normal when it finishes loading.


Also, I'd like someone (with better knowledge than I, perhaps Henri?) to 
consider/comment on the implications of Nicholas' statement that 
`.execute()` must be synchronous. I recall from the async=false 
discussions that there were several wrinkles with injected scripts executing 
synchronously (something to do with jQuery and their globalEval). We should 
definitely verify that this part of his proposal isn't setting us up for the 
same landmines that the async=false process had to tip-toe around.


For instance, if I call `.execute()` on a script element that is loaded and 
ready, and it's going to execute synchronously, what happens if the script 
logic itself calls other synchronous `.execute()` calls? And is the script's 
onload event (which fires after execution) also synchronous? I can see 
this leading to some difficult race conditions relating to how script 
loaders have to do cleanup (to prevent memory-leaks) by unsetting 
properties on script elements, etc.



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-10 Thread Kyle Simpson

For the purposes of this discussion, we are combining (but safely so, I
believe) execution and parsing, and saying that we want to be able to
defer the parse/execution phase of script loading. The reason it's
necessary to draw the distinction (and point out that parsing is the 
costly
bit) is to defuse the argument that the script author can simply change 
the

script to not execute itself until manually invoked at a later time.


There are multiple phases between receiving bytes on the wire and having
executed the code they represent. Parsing would seem unlikely to be the
main problem here (parsing is mainly checking for syntax errors while or
after removing the character encoding from the bytes received),


The Gmail mobile team did extensive research into this area and concluded 
that it was in fact the parsing that was the big slow-down in their case. 
From what I recall, they have a big file with nothing but function 
declarations in it (NO EXECUTIONS), and that file took a few seconds to 
execute (not actually execute any functions, but parse and declare those 
functions into the global space). On the other hand, if they wrapped all the 
code in /* .. */ comments, and had that single big comment parsed/executed 
by the engine, it went orders of magnitude faster (unsurprisingly).
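(A much-simplified sketch of that comment-wrapping technique; the real Gmail 
implementation surely differs, this just illustrates deferring the parse cost.)

    // code arrives wrapped in a comment, so the initial parse is nearly free
    var wrapped = "/* function hello(name) { return 'hi ' + name; } */";

    // later, when the code is actually needed, strip the comment markers and
    // evaluate, paying the full parse/declare cost only at that point
    function unwrapAndRun(source) {
      var body = source.replace(/^\s*\/\*/, "").replace(/\*\/\s*$/, "");
      (0, eval)(body);   // indirect eval evaluates in the global scope
    }

    unwrapAndRun(wrapped);   // hello() is now defined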


So, it strongly suggests that the parsing/interpretation of the code was in 
fact the culprit. There's nothing they could have really done to prevent 
less execution, since they weren't executing anything. It was merely the 
sheer number of function declarations being parsed and added to the 
environment that slowed everything down.


There's already sufficient confusion in this thread over what "execute" means. In 
the literal sense, as far as the JavaScript engine is concerned, we probably 
ARE talking about wanting to defer when the code itself (the function 
declarations) is executed. But we need to differentiate *that* execution 
(which is the problem) from later execution (which isn't the problem) via 
actual function call invocations. For the sake of this discussion, I've 
been referring to the first execution as "parsing" and the second 
execution as "execution".


I don't want us to derail this thread AGAIN with semantics arguments about 
what is and is not parsing or execution, and whether the problem is 
parsing or interpretation, or whatever you want to call that first pass 
where JavaScript code is run through the engine, even if no function calls 
were happening.


The real point is, THAT part (whatever it's called) is clearly what is so 
slow, and THAT part is what we're seeking to have control to defer. And THAT 
part won't benefit at all from telling a developer just redesign your 
code.




Anyway, I don't really see the problem with rewriting your code so you
have more control over when execution takes place,


Again, this is exactly the line of degenerative conversation that I was 
trying to preempt from happening. You're assuming (wrongly) that the code is 
unnecessarily executing function calls at the time of inclusion, when in 
reality it's not, and so that's not the problem.


It's not a question of if I can change code from automatically invoking a 
function to controlling that function call myself. It's a question of if I 
have any way to defer when the browser interprets a huge chunk of function 
declarations present in my source code. And the answer is, currently, I 
can't defer that step, whatever we call that step.



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-10 Thread Kyle Simpson

Testing this shows that IE9 doesn't fire a progress event for the
transition that is of interest for the use case. That is, when the script
transitions to loaded, there's no event. Once the script has been
evaluated, there is a (rather useless) progress event for the transition
to complete. The interesting transition to loaded can only be 
observed

by polling. Sigh. :-(

Demo: http://hsivonen.iki.fi/test/moz/script-readystate.html


You're correct about this not working in IE9b. But it would seem that it's 
a

regression, as I just checked in IE6-8, and it does indeed fire the
`onreadystatechange` event on the loaded state. I'm going to file a bug 
in

the IE9 feedback system to ask them to address that regression.

Here's my test: http://test.getify.com/ie-script-readystate/



UPDATE: IE9 RC1 came out today, and this regression is fixed. 
`readyState=loaded` does fire the `onreadystatechange` handler as expected. 
Good news for the support of `readyState` proposal, I think.




*HOWEVER*, in IE6-8 (and I would assume IE9 once they address that
regression), there's still a wrinkle with being able to rely on the 
loaded
readyState event. If the script is already in the cache, it appears that 
IE

does *not* fire the loaded readyState event. Obviously, this is quite
unfortunate, since it means that polling would still be a necessary piece 
of

the puzzle for IE.


Turns out I was completely wrong on this. `readyState=loaded` fires fine 
with cached items. My test code has a race-condition in it that was masking 
the correct behavior.


So, at this point, I can verify (at least in my tests), that `readyState` 
works fine (without polling) in at least IE6-IE9. This means polling is not 
necessary to support that functionality in IE. Good news on both fronts.



--Kyle






Re: [whatwg] Proposal for separating script downloads and execution

2011-02-10 Thread Kyle Simpson

The proposal is an optimization of these crude hacks. Authors using such
hacks are unlikely to stop using them because the optimization does not
work on deployed clients.

What will happen is that people using the proposed feature will intro-
duce subtle bugs in their code (like calling .execute() in some place
but not in another which works 99% of the time on the test systems but


First of all, you're making quite a few assumptions which YOU have no proof 
of. The people who are vocal on here asking for this feature are 
responsible, seasoned developers, who've been in the trenches of JavaScript 
and web development for many years. We're also authors and maintainers of 
publicly consumed and widely used tools (script loaders, etc), and we know 
exactly how to responsibly use the feature we are asking for. I can't speak 
to if other devs will possibly do it wrong, but there's PLENTY in both HTML 
and in JavaScript specs which can (and is, regularly) abused by ignorant 
developers. That something *can* be abused is not proof it will be, nor is 
it a reason to deny it to the people who clearly know how to use it 
correctly.


Secondly, and more importantly, as I've said several times already in this 
thread... **THIS IS NOT JUST ABOUT THE MOBILE PERFORMANCE** I'm not sure why 
some people in this thread insist on focusing on arguing that point (ad 
nauseum) to the exclusion of the other parts of the conversation. Combine 
that with the others who want to play semantics games over what we call 
something, and the bikeshedding is getting out of hand.


Talking about how deferring a script's execution can help mobile 
performance seemed like a simple way to illustrate a usage of the feature 
being requested, especially since there was hard evidence and established 
research done by a pretty well known/respected/intelligent group -- the 
Gmail Mobile team.


If we were to completely throw out the mobile performance use-case, and ONLY 
consider the others (of which I've documented several), could we get this 
conversation back on track instead of these side paths of argument over 
issues which don't really matter to the overall validity of the 
request/proposal?


Even if the mobile performance use-case were thrown out, I'd still be 
advocating for the other use-cases and requesting this functionality as a 
result. I think I can safely speak for Nicholas and Steve in my assertions 
that there are other valid reasons this functionality is important besides 
just deferring execution to avoid CPU-bottlenecking on mobile.




--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Kyle Simpson

Mighty conjecture, chap. Multithreading is even possible on
microcontrollers like the Atmel ATmega32 -- so why should a modern
operating system running on reasonable hardware not be able to do it?


On most mobile devices I've had exposure to developing for, 
multi-threading is not possible/available to me. The usual answer, for 
instance for the iPhone, is that true multi-threading will tend to cause 
serious drains on limited battery life, which degrades quality of 
user-experience and user satisfaction. That's the only anecdotal evidence 
that I have for how these engines may not be completely free to multi-thread 
as is being suggested.


In any case, you're still missing the point. The mobile OSes (and even the 
JavaScript engines) are of course free to improve their internal 
implementation details, but this HTML spec has only a modest ability 
to affect that. The hardware/mobile-OS vendors have dozens of different 
pressures that play into what they can and cannot implement, and how. Even 
if the HTML spec were to say "must process script execution in a separate 
thread from the rendering engine", the feasibility of that requirement may 
still be overshadowed by lots of factors completely out of the control of 
the specification group.


We can continue to debate what might be nice for mobile vendors to consider, 
but they aren't on this list and listening to us. Who IS on this list, and 
who IS interested, are developers who have real performance problems right 
now. And they are creating ever more complex hacks to get around these 
problems. And the spec has an opportunity to make a small-footprint change 
to give them some better options for that performance negotiation.


You're also ignoring the fact that there are several other documented 
use-cases for execution-deferral that are not related to mobile (or 
multi-threading) at all. That may be the 80% use-case for this proposal, but 
it's certainly not the only reason we want and need a feature like this.




Fun fact: I use mobile versions of some web sites, because they are much
quicker, even on the desktop. Sometimes a little minimalism can go a
long way.


We're not particularly talking about generalized web sites as much as we are 
talking about complex mobile web applications like Gmail. Even in their 
minimalism, the bare minimum experience they're willing to deliver is 
overloading the mobile browser and so they are resorting to crazy and 
brittle hacks.


In my opinion, when we see a trend toward developers having to hack around 
certain parts of the functionality that don't work the way they need it to 
(for real-world use-cases), then it's a good sign that we should consider 
helping them out. And suggesting that they just load less JavaScript is not 
really all that helpful for the population of applications that are most in 
need of this feature.




Counter-intuitive at first, but true: more complex code is not 
necessarily faster code. More options are more options to screw up.


We have a number of well-known and well-documented experts in the realm of 
page-load optimization and script loading functionality who are behind 
requests like is being discussed. If we can't trust them to do correctly 
with what we give them, then the whole system is broken and moot. The fact 
that some developers may misunderstand and improperly use some functionality 
should not prevent us from considering its usefulness to those who clearly 
know the right things to accomplish with it.


It also hasn't been shown with any degree of specificity just what the fear 
is of developers screwing up if we give them this functionality. Right now 
it's a bunch of conjecture about possible misunderstandings, something which 
should be easy to deal with through proper documentation, education, and 
evangelism. Why are we so afraid to let the right implementations of a 
functionality flourish and bubble to the top, drowning out the wrong 
implementations by those who are either ignorant or incompetent?



I'm losing track in the noise of what the fundamental disagreements 
are--if

there even are any.  I think the original proposal is a very good place to
start


The original proposal is in fact more focused on the markup-driven use-case 
than on the script-driven use-case. The original proposer, Nicholas, agreed 
in an earlier message that he's really more concerned with script-driven 
functionality than markup driven functionality. And I completely agree with 
that assertion.


In fact, I'd go so far as to say that the use-case for separating script 
loading from its parsing/execution phase (and thus being able to 
control/trigger when that phase occurs, later) is 99% driven by the 
script-loaders use-case. Script loaders by and large do not use markup 
semantics to accomplish their tasks (because most of them do not use 
document.write("<script>...</script>") to load scripts).


So, if we consider the spirit of the original proposal, we 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Kyle Simpson

Regardless, considering such things is way outside the scope of anything

that's going to be useful for web developers in the near-term dealing
with these use-cases.


Yes, but so is the proposal here, no?


No, I don't think so. A huge part of my point with the proposal is that it 
builds on existing spec wording AND it has browser implementation precedent 
from IE, and *some* stated support from Opera. That makes a solution a bit 
more tangible and foreseeable in the near future, as opposed to for instance 
saying that all mobile device JavaScript engines must be changed so that 
they take more advantage of multi-threading -- a task which could be years 
before realized.



Yes, but what makes you think that those very same sites will make good 
use of the functionality we're proposing here?


Fair enough, the offenders will probably keep on offending. EXCEPT that web 
performance advocates like myself (and Steve Souders, and many others) will 
have something tangible to take to them in performance evangelism efforts. 
Right now, if we try to get them to address their bad performance, it 
involves suggesting an extremely complex and convoluted set of brittle 
hacks, which they are rightly hesitant to consider.


It's a much easier sell if we can say, "look, here's this simple mechanism 
dedicated specifically to helping the problem your site has; would you 
consider it?"




Neither will the browser eagerly parsing.  ;)


What's VERY important to note: (perhaps) the most critical part of
user-experience satisfaction in web page interaction is the *initial*
page-load experience.


Really?  The pages I hate the most are the ones that make every single 
damn action slow.  I have had no real pageload issues (admittedly, on 
desktop) in a good long while, but pages that freeze up for a while 
doing sync XHR or computing digits of pi or whatever when you just try to 
use their menus are all over the place


There's lots and lots of research into how user-satisfaction in web pages 
and web applications is more driven by the initial page-load experience than 
any other factor (not exclusively, just the majority). Again, I refer you to the 
great work Steve Souders has done in this area. There's plenty of 
information about how, when sites speed up their page-load (and nothing 
else), user retention (and a whole host of other related positive 
user-satisfaction indicators) all go up, sometimes dramatically.




So if it's a tradeoff where I can get my page-load
to go much quicker on a mobile device (and get some useful content in
front of them quickly) in exchange for some lag later in the lifetime of
the page, that's a choice I (and many other devs) are likely to want to
make.


See, as a user that seems like the wrong tradeoff to me if done to the 
degree I see people doing it.


We can debate that point forever and never really come to a definitive 
consensus. I myself sometimes feel like this technique can be taken 
overboard and I'm not entirely behind all attempts to defer script 
execution. But nonetheless, there's provable validity to making some 
tradeoffs like that, and seeing user happiness go up. We're simply asking 
for the means to make those tradeoffs without costly/ugly hacks. That's all.


There's obviously an art here in balance. But the numbers clearly indicate 
that addressing page-load performance bottlenecks leads to huge gains in 
user-satisfaction.



Perhaps they just have different goals?  For example, completely 
hypothetically, the browser may have a goal of never taking more than 50ms 
to respond to a user action.  This is clearly a non-goal for web authors, 
right?


In fact, no. As I asserted in an earlier message in the thread, I believe 
the goals of the browsers (to be faster in page load) line up well with the 
goals of web authors (to reduce the amount of bounce traffic because of slow 
loading sites, especially on mobile).


Not all web authors care about performance (often they just care about bells 
& whistles). But there's a recent undeniable trend, and huge uptick, toward 
more awareness of web performance optimization issues and specifically on 
improving initial page-load experience.


Consider the Google algorithm change where they take page-load speed as a 
factor in ranking. Clearly, more and more web authors (and the businesses 
that drive their decision making) are seeing the benefits of 
performance-savvy websites, so I believe we'll see even more alignment of 
goals as we move forward.



Should a browser be prohibited from pursuing that goal even if it makes 
some particular sites somewhat slower to load initially (which is a big 
if, btw).


A browser should have some strong warnings against acting in a way that is 
counter to the expressed intent of a web author. If a web author is taking 
steps to more actively control the pipeline of resource loading and 
page-load performance, the browser should not try to second-guess that 
author and thwart their efforts.



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Kyle Simpson


You're also ignoring the fact that there are several other documented
use-cases for execution-deferral that are not related to mobile (or
multi-threading) at all. That may be the 80% use-case for this proposal,
but it's certainly not the only reason we want and need a feature like
this.


Could you list those issues or point me where these issues are documented?


Earlier in this thread: 
http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-February/030327.html



--Kyle 



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-08 Thread Kyle Simpson
I think we should do the readyState thing and put a note in the spec 
saying that implementors should be polite to authors and not implement the 
readyState property until they also implement the behavior that setting .src 
on a not-in-tree node starts the HTTP fetch (in order to make the behavior 
feature detectable from JS).


Adopting the readyState / early .src assignment mechanism has these 
benefits over the proposed alternative:
* Already (reportedly; I didn't test) works in IE. Always a plus over 
making up some new stuff.
* Authors already have to deal with IE, so the question of opting in 
doesn't arise.
* Sites already have to work when scripts haven't been fetched yet and 
when the scripts are already in the HTTP cache. Thus, starting the fetch 
earlier than before shouldn't cause breakage since the worst case is 
that the observable behavior becomes similar to the script already being 
in cache by the time the node is attached to the tree.
* img elements have started fetches upon .src setting since almost 
forever, so making scripts do the same makes the platform more 
self-consistent.

* noexecute when used in markup has a particularly bad degradation story.



**Very much** agree with Henri's assessment here.



I agree with Henri's analysis. Opera already has readyState (with value
always being 'loaded'), but we'd be careful to fix script prefetching and
readyState 'uninitialized' at the same time.


Awesome, that's very helpful to have cooperation from Opera like that! Thank 
you. :)




That is only the case if there is a readystatechange event. Is that so?


Yes, IE has had the `readystatechange` event on script elements for a very 
long time (pre IE6 at least). However, as noted in just a moment, it's not 
quite as reliable as one would hope. But it *is* there, and I think it makes 
a good candidate reference implementation (sans the quirks/bugs about to be 
discussed) for what the HTML specification could adopt as a requirement.



Testing this shows that IE9 doesn't fire a progress event for the 
transition that is of interest for the use case. That is, when the script 
transitions to "loaded", there's no event. Once the script has been 
evaluated, there is a (rather useless) progress event for the transition 
to "complete". The interesting transition to "loaded" can only be observed 
by polling. Sigh. :-(


Demo: http://hsivonen.iki.fi/test/moz/script-readystate.html


You're correct about this not working in IE9b. But it would seem that it's a 
regression, as I just checked in IE6-8, and it does indeed fire the 
`onreadystatechange` event on the loaded state. I'm going to file a bug in 
the IE9 feedback system to ask them to address that regression.


Here's my test: http://test.getify.com/ie-script-readystate/

*HOWEVER*, in IE6-8 (and I would assume IE9 once they address that 
regression), there's still a wrinkle with being able to rely on the loaded 
readyState event. If the script is already in the cache, it appears that IE 
does *not* fire the loaded readyState event. Obviously, this is quite 
unfortunate, since it means that polling would still be a necessary piece of 
the puzzle for IE.


But polling can be used as a fallback along with the onreadystatechange 
handler, with a longer interval like 100ms, and thus degrade nicely between 
IE and other browsers (if they eventually implement this feature... correctly).
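
Roughly, the combined approach looks like this (an illustrative sketch only, 
not production code; the file name, helper names, and the 100ms interval are 
all made up):

// preload a script in IE by setting `src` on an off-DOM script element,
// listening for readystatechange but also polling as a fallback for the
// cached-script case where the "loaded" event may never fire
var script = document.createElement("script");
var pollTimer;
var done = false;

function preloaded() {
  if (done) return;
  done = true;
  if (pollTimer) { clearInterval(pollTimer); pollTimer = null; }
  script.onreadystatechange = null;
  // the script is now fetched but NOT executed; append it to the DOM
  // later, whenever you actually want it to run
}

script.onreadystatechange = function () {
  if (script.readyState == "loaded" || script.readyState == "complete") {
    preloaded();
  }
};

script.src = "some-script.js"; // in IE, this starts the fetch immediately

// fallback: poll readyState in case the event never fires (cache quirk)
pollTimer = setInterval(function () {
  if (script.readyState == "loaded" || script.readyState == "complete") {
    preloaded();
  }
}, 100);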


Also, I am going to file a separate bug with IE9 Feedback to ask them to 
fire the loaded event when loading from cache. Perhaps they'll fix both 
the regression and this bug all in one fell swoop. I can dream, can't I?



Is there any reason to believe that sites set .src on scripts they don't 
intend to have fetched?


There's some reason to believe that there could be speculative fetching in 
some sites/apps (obviously IE only), where the fetching happens but the user 
never activates some part of the page (like a tab or widget) which needs the 
script, and so in some cases where sites have advanced techniques like this, 
there may be some waste. But I'd suggest that in almost all cases, the 
wasted load is the fault of the web author for being speculative rather 
than the fault of the browser. It's clearly an advanced technique that 
requires intentional opt-in. And since it only works in IE at the moment, I 
doubt there are very many sites doing it.




--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-08 Thread Kyle Simpson
Is there a specific problem with letting Web Workers handle this use 
case?

They should not interfere with the UI thread.


I think the primary reason why Web Workers is not useful is that it doesn't 
yet have widespread enough adoption to be useful to the script loader community.


http://caniuse.com/#feat=webworkers

To the best of my knowledge, it's not implemented in the mobile world at all 
(concurring with that chart). A main reason why the Gmail team experimented 
with the comment-trick for deferring parsing/execution of code was 
specifically for the mobile use-case, where Web Workers would not be 
helpful.


Also, there are no signs of Web Workers being added to IE9 (I guess we can 
hope, but I doubt it), so leaving the entire IE family out of the equation 
is not very useful or practical for the foreseeable future.



Note that in the blog they mention that on an iPhone 2.2 parse time was 2.6 
seconds for 200k of JS, compared to 240ms to just download it in a comment 
-- the mobile network isn't the issue, it is the JS parser in mobile 
browsers.


Yes, it's important to note that it's not even the *execution* of JavaScript 
code that's the particular issue, but rather the parsing of it (even if 
invocation of the functionality is deferred until later) that causes the 
biggest slowdown in most cases.


For the purposes of this discussion, we are combining (but safely so, I 
believe) execution and parsing, and saying that we want to be able to 
defer the parse/execution phase of script loading. The reason it's 
necessary to draw the distinction (and point out that parsing is the costly 
bit) is to defuse the argument that the script author can simply change the 
script to not execute itself until manually invoked at a later time. (that 
argument hasn't been heard here yet, but it's definitely present in many 
other forums where this line of discussion has occurred before)




Is there any reason to believe that sites set .src on scripts they don't
intend to have fetched?


I believe I misinterpreted this question in my previous post, so let me 
readdress it. The question is, are there sites which are setting the `src` 
property but NOT wanting the download to occur, which could be burdened if 
the proposed behavior were more widely adopted? I haven't run across any 
examples of such behavior. I can't imagine that it's very widespread, 
although it's conceivable that someone may have a very small complex niche 
case where the speculative download was undesirable.


HOWEVER, the spec already says that the user-agent may do this speculative 
downloading, so if there are any sites which are relying on that NOT 
happening, then they are playing a dangerous game already. If the spec never 
changed to say this was a required behavior, but several more browsers just 
decided to implement the suggestion as it's currently stated, those sites 
would be at no more practical risk than they are if we consider making it a 
spec requirement rather than a suggestion.



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-08 Thread Kyle Simpson

Isn't this just a quality of implementation issue?

No, frankly it isn't. No matter how good the implementation of the 
JavaScript engine on mobile, the mobile device will always be much more 
limited in processing power than a desktop browser environment. There are 
those who think that the mobile and desktop paradigms are not (or shouldn't 
be) fundamentally different in some respects, but those people are 
incorrect.


Mobile will always have special challenges that desktop may not face. And 
the faster that mobile devices get, the more complex the scripts that devs 
will want to shove down the pipes to run on it. You can't just wait around 
for some mythical future time when mobile processing power is not a limiting 
factor. The fact is, it's a limiting factor right now, and will be for any 
foreseeable future. And so we're trying to find ways to juggle the costly 
operations around to help mitigate the impact.


It's also tempting to just get mired down in this one use-case of mobile 
JavaScript parsing deferral. While this use-case is a great example of why 
controlling execution is important, there are plenty of other use-cases for 
loading a script ahead of time and not using the script (parsing/executing 
it) until later (or sometimes never).


--Kyle 



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-08 Thread Kyle Simpson

Can you list some of them?  Most of the ones I can think of are ultimately
different forms of the same optimization.


I would first refer you to the use-cases that Steve Souders has documented 
around his ControlJS library. His commentary on this topic is far more 
comprehensive than anything I can rattle off here.


http://www.stevesouders.com/blog/2010/12/15/controljs-part-1/

But, I'll take a stab at a couple of use-cases:

1. One use-case that I *am* quite familiar with is: script loaders (like 
mine, LABjs) have the need to be able to download multiple scripts in 
parallel (again, for performance optimization, but not just for mobile!), 
but it's quite common that some scripts have dependencies on each other. The 
problem is that scripts loaded dynamically are not guaranteed to execute in 
any particular order. A mechanism for loading files in parallel but 
controlling (or enforcing) their execution order, is necessary.


A recent proposal and accepted addition to the spec was my async=false 
proposal, which was also focused on this exact use-case. For async=false 
(a simplified solution to the majority use-case), what the web author can 
now opt into is that a group of scripts will be enforced to execute in the 
order that they are added to the DOM (like script tags in markup do). While 
this works for the 90% majority use-case, it doesn't cover all needs of 
script loaders.
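
For reference, the basic pattern async=false enables looks roughly like this 
(a simplified sketch; the file names are made up):

// load two interdependent scripts in parallel, but force them to execute
// in insertion order by opting out of the default async behavior for
// script-inserted scripts
var urls = ["jquery.js", "jquery.plugin.js"];
for (var i = 0; i < urls.length; i++) {
  var s = document.createElement("script");
  s.async = false;   // opt in to ordered execution
  s.src = urls[i];
  document.getElementsByTagName("head")[0].appendChild(s);
}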


For instance, say I have two groups of scripts (A.js, B.js, and C.js) 
and (D.js, E.js, and F.js). Within each group, the order must be 
maintained, but the two groups are completely independent. As async=false 
is currently implemented, you cannot isolate the two groups of scripts 
from affecting each other. The D,E,F group will be forced to wait 
for the A,B,C group to finish executing.


There are several permutations of that nature in script loading which would 
be enabled quite easily by the ability to explicitly control when a script 
executes.


2. Another plausible use-case that occurs to me is loading two overlapping 
plugins (two jQuery plugins, for instance). The author may have a simple 
calendar widget and a much more complex calendar widget, and the two may 
conflict or overlap in such a way that only one should be executed. But for 
speed of response, the author may want to preload both plugins and have 
them waiting on hand, and then, depending on what action the user takes (or 
the state of data from an Ajax request), decide at run-time which of 
the two plugins to execute.
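
A rough sketch of that pattern, using the preloading behavior under 
discussion (the file names and function are made up for illustration):

// preload both calendar plugins, executing neither one yet
var simple  = document.createElement("script");
var complex = document.createElement("script");
simple.src  = "calendar-simple.js";   // fetch may begin now...
complex.src = "calendar-complex.js";  // ...but no execution until appended

// later, at run-time, execute only the plugin we actually need
function useCalendar(needComplex) {
  var head = document.getElementsByTagName("head")[0];
  head.appendChild(needComplex ? complex : simple); // appending executes it
}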


Hopefully that illustrates a few other advanced use-cases (not specifically 
around mobile) which are enabled/assisted by controlling execution of 
JavaScript separate from its loading.




See
http://googlecode.blogspot.com/2009/09/gmail-for-mobile-html5-series-reducing.html
for the official blog post about this technique.

So, I think you should consider having download / parse / execute be
separate if you are going to go to the trouble to do anything.


Isn't this just a quality of implementation issue?


No, frankly it isn't. No matter how good the implementation of the
JavaScript engine on mobile, the mobile device will always be much more
limited in processing power than a desktop browser environment.


That's not what the question was about.


The context of the original assertion is clearly about optimizing things in 
mobile (like the Gmail mobile team did) by deferring parsing/execution of 
scripts from happening during initial page-load (when the mobile device's 
CPU is already taxed). Then the question is asked: is that just a quality 
of implementation issue?


And so I think my response is quite on target and germane. I'm asserting 
that the solution to the problem can't just be "the mobile implementation 
needs to be more efficient (higher quality)", because the issue is not 
really about the JavaScript engine, but the limitations of the device it's 
running on. That's what the Gmail mobile team found a way to work around. No 
matter how much better the JavaScript engine on mobile could be made, there 
would be a finite limit of CPU power available on the mobile device that is 
at least an order of magnitude less than on the desktop.


We're saying we need a feature to assist in working around such issues right 
now, not a debate about possible/mythical future optimizations to the engine, 
which do relatively nothing to help the use-case (performance) in current 
applications.




The thing is, if a browser is idle, why shouldn't it go ahead and parse
the script?


In most cases, a web author trying to second-guess a browser is not a 
fruitful endeavor. However, browsers are not always perfect in their 
behavior and decision making. If a web author needs to do something that 
they then observe is causing issues on a slow mobile device, in general, why 
shouldn't they have a little more control over how/when it happens?




That way when you want to execute it there's no sudden UI
pause as the 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-08 Thread Kyle Simpson

If that's what you were responding to, then I think your response is
simply incorrect.  There's nothing whatsoever that requires that a web 
browser freeze up its UI while parsing a script.  If it does so, it's a 
quality of implementation issue, pure and simple.


You don't need to be more efficient to avoid freezing the UI.  You just 
have to not do the parsing work in a single shot on the main thread.


I can't speak definitively as to how the JavaScript engine is implemented 
(and if the code is significantly different between mobile and desktop). But 
I can say that even if their code is substantially the same, I could still 
see it quite plausible that the device itself locks up (not the browser) if 
there's just simply too much going on, taxing its limited CPU power. Heck, I 
have times when my powerhouse desktop freezes for a brief moment when I have 
a lot going on. Exhausting the CPU is not a difficult thing to imagine 
happening on these tiny devices.


I can also see it quite plausible that mobile OS's are not as capable of 
taking advantage of multi-threading (maybe intentionally forbidden from it 
in many instances, for fear of battery life degradation). Perhaps it's 
simply not possible to multi-thread the parsing of JavaScript in parallel to 
the UI rendering. If that's the case (I really am completely guessing here), 
then it's not exactly a quality of implementation issue as far as the 
JavaScript engine is concerned, but more an issue of how the mobile OS is 
designed and integrated with the device hardware. Regardless, considering 
such things is way outside the scope of anything that's going to be useful 
for web developers in the near-term dealing with these use-cases.


Even if you're right and the fault really lies with the implementor of the 
JavaScript engine (or the OS), that's still a fruitless path for this 
discussion to go down. No matter how good the mobile JavaScript engine gets, 
I promise you sites will find a way to stuff too much JavaScript down the 
pipe at the beginning of page-load in such a way as to overload the device. 
That is a virtual certainty.



And I'm saying that I just don't want this feature getting in the way of 
browsers improving.  As long as it doesn't, it's fine by me.


I don't want to cause browsers to be less performant or hold them back from 
improving. I want to help developers have an option to increase performance 
in those cases where the browser's automatic processes to do so happens to 
fall short. I believe there must be a way to achieve both goals 
simultaneously.



Now you may be right that authors who really want to screw up like that 
will just do browser-sniffing hacks of various sorts and still screw up. 
But it's not clear to me that we need to make the barrier to shooting 
yourself in the foot lower as a result


That sounds more like a question of degree (how much we should expose to the 
developer, and how) than the principle (should we expose it). In any case, I 
don't see much evidence that suggests that allowing an author to opt-in to 
pausing the script processing between load and execute is going to lead to 
authors killing their page's performance. At worst, if the browser did defer 
parsing all the way until instructed to execute, the browser simply would 
have missed out on a potential opportunity to use some idle background time, 
yes, and the user might have to suffer a little bit. That's not going to 
cause the world to come crashing down, though.


What's VERY important to note: (perhaps) the most critical part of 
user-experience satisfaction in web page interaction is the *initial* 
page-load experience. So if it's a tradeoff where I can get my page-load to 
go much quicker on a mobile device (and get some useful content in front of 
them quickly) in exchange for some lag later in the lifetime of the page, 
that's a choice I (and many other devs) are likely to want to make. 
Regardless of wanting freedom of implementation, no browser/engine 
implementation should fight against/resist the efforts of a web author to 
streamline initial page-load performance.


Presumably, if an author is taking the extraordinary steps to wire up 
advanced functionality like deferred execution (especially negotiating that 
with several scripts), they are doing so intentionally to improve 
performance, and so if they ended up actually doing the reverse, and killing 
their performance to an unacceptable level, they'd see that quickly, and 
back-track. It'd be silly and unlikely to think they'd go to the extra 
trouble to actually worsen their performance compared to before.


Really, let's not always assume the worst about web authors. I believe in 
giving them appropriate tools to inspire them to do the best. If they do it 
wrongly and their users suffer, bad on them, not on the rest of us. That's 
not an excuse for recklessly poor implementation of features, but it IS a 
call for giving some benefit of the doubt from time to 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-03 Thread Kyle Simpson

One reason I like the noexecute proposal more than relying on
readyState is that noexecute can be used in markup. I.e. you can do
things like:

<html>
<head>
<script src="a.js" noexecute onload="..."></script>
<script src="b.js" noexecute onload="..."></script>
<script src="c.js" noexecute onload="..."></script>
</head>


Doesn't link rel=prefetch mostly address the use-case of 
load-but-don't-execute in markup? The reason script-inserted script elements 
need this capability is more advanced than any use-case for why you'd do so 
in markup. In other words, I can't imagine that a script loader would rely 
on adding script tags through markup (like with document.write() I guess?) 
rather than just using dynamic script elements.


For the sake of the argument though, I *can* see how the noexecute would 
be useful for *inline* script elements that you wanted to include in your 
markup. For instance, the gmail-mobile team does this by wrapping the inline 
script content in comments (and then later processing the code to execute 
it).


However, it's already possible to address that same use-case using existing 
behavior... by simply specifying a bogus `type` for the inline script 
element. Some JS templating solutions make use of this behavior, like:


<script type="template/foobar">
...
</script>

What `noexecute` *would* additionally bring to the use-case is the ability to 
directly execute the script block at a later time without having to process 
its contents manually. Currently, if you use the bogus type method, or the 
comment-wrapped-content method, you have to manually grab the content, 
process it, and re-inject it into a proper script element, which of course 
is slightly less performant.
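
For illustration, that manual processing step looks something like this (a 
sketch only; it assumes an inline block like the template/foobar example 
above, with a made-up id of "deferred-code"):

// grab the contents of the inline "bogus type" script block and re-inject
// it into a real script element so the browser finally parses/executes it
function executeInlineBlock(id) {
  var holder = document.getElementById(id);
  var code = holder.text || holder.textContent || "";
  var s = document.createElement("script");
  s.text = code;   // this extra copy/re-parse is what noexecute would avoid
  document.getElementsByTagName("head")[0].appendChild(s);
}

// e.g. executeInlineBlock("deferred-code");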


I'm not sure that the slight performance hit of this use-case is important 
(or impactful) enough though to define a whole new attribute and its 
semantics/complications.




Though of course, if people think that using readystate is enough then
we can flesh out that solution too. We'd have to require that UAs
start downloading the script as soon as .src is set, and that events
fire at reasonable points in time, like when the script has been
downloaded.


Yes, as I said earlier in the thread, I think we'd need to consider changing 
the "may" wording in the current spec language to "shall" or "will". And 
then we'd have to consider giving some basic framework language for an event 
mechanism. Technically, the preloading event mechanism isn't strictly 
necessary, but it's quite useful for several things you can't do without it, 
and so I really don't think it's worth adjusting the spec without also 
adding that part in.




I think that we couldn't use the 'load' event as that
might break existing pages which rely on 'load' not firing until the
script has executed.


Agreed, "load" is a binary one-time event, and thus not suitable for 
overloading for this purpose. readyState is much more suitable since it 
defines a progression of states. XHR already makes good use of defining such 
an event mechanism, and so there's precedent to draw from here. In the case 
of preloading for scripts, there are probably just two states necessary: 
"uninitialized" and "complete". I haven't seen any use-cases for which any 
intermediate states (like "loading") would be useful, as they are in XHR.




--Kyle 



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-03 Thread Kyle Simpson


I'm not sure why you are narrowing the scope to script loaders? (I
imagine you're referring to js-libraries which help with loading
scripts faster?)


Yes, script loaders like LABjs are the primary use-case that I'm concerned 
about in terms of giving the load-but-defer-execution behavior to. That 
doesn't mean it's the only use-case, just that it's (in my mind) the 
majority use-case for this feature.




My idea was that webservers would output the above markup directly,
avoiding the need to go through special libraries at all.


Yes, web servers could output markup like that. BUT, there'd still need to 
be some special library or code logic on the page that knew how and when to 
execute the scripts. So, in my mind, if you already need to have such logic 
in place for the execution, wrapping that logic into an existing script 
loader which is almost certainly going to use a dynamic script element 
(instead of script markup) makes natural sense.


I'm not saying the markup use-case is invalid, just that from my perspective 
it's less prevalent than the rise of all the different script loaders 
wanting to access this behavior on dynamic script elements, loaders like 
LABjs, RequireJS, HeadJS, ControlJS, and many others.


Yes, as I said earlier in the thread, I think we'd need to consider changing 
the "may" wording in the current spec language to "shall" or "will". And 
then we'd have to consider giving some basic framework language for an event 
mechanism. Technically, the preloading event mechanism isn't strictly 
necessary, but it's quite useful for several things you can't do without it, 
and so I really don't think it's worth adjusting the spec without also 
adding that part in.


I'm not quite sure I follow you here. What I was thinking was that we
say that implementations MUST (in the rfc 2119 sense) start loading
the script immediately.


This is the wording that's already in the spec:

---
For performance reasons, user agents may start fetching the script as soon 
as the attribute is set, instead, in the hope that the element will be 
inserted into the document. Either way, once the element is inserted into 
the document, the load must have started. If the UA performs such 
prefetching, but the element is never inserted in the document, or the src 
attribute is dynamically changed, then the user agent will not execute the 
script, and the fetching process will have been effectively wasted.

---

I was just saying that since this wording currently says "may", it's only 
taken as guidance and a suggestion. To make this a true requirement, we'd 
likely change "may" to "shall"/"will"/"must", right?




Agreed, "load" is a binary one-time event, and thus not suitable for 
overloading for this purpose. readyState is much more suitable since it 
defines a progression of states. XHR already makes good use of defining such 
an event mechanism, and so there's precedent to draw from here. In the case 
of preloading for scripts, there are probably just two states necessary: 
"uninitialized" and "complete". I haven't seen any use-cases for which any 
intermediate states (like "loading") would be useful, as they are in XHR.


Sure, but we'd also want to fire some event once the script has been
fully downloaded so that the page doesn't have to use a timer and poll
to figure out when the download is done.


I think we're in agreement here. At least I hope so. I think that the 
`onreadystatechange` event firing when the `readyState` property becomes 
"complete" is quite sufficient for an event to notify when the script 
finishes loading, correct? That's how the current IE implementation works.


I *do* see a possibility that an event for `readyState=complete` (fired when 
the script has finished *loading*) and an `onload` event (fired when the 
script has finished *parsing & executing*) could be a little confusing 
(name-wise) to some developers. I wish that it hadn't ever been called 
"onload", but for clarity's sake, instead called something like "onrun" or 
"ondone", etc. However, "load" has been for a long time commonly taken to 
mean "completely loaded and run" -- that boat sailed long ago. There'd 
probably be far too much compat breakage if we changed the semantics of the 
`onload` event now.


IMHO, `readyState=complete` (or `readyState=loaded`) will be fine for the 
true "loaded" event and `onload` can remain as-is for the "loaded and run" 
event.
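
Just to make the two-phase model concrete, here's a rough sketch of how I'd 
expect author code to look under this proposal (the state names are exactly 
the part still being debated, so treat them as placeholders, and the file 
name is made up):

var s = document.createElement("script");

// phase 1: fetched (preloaded), but not yet parsed/executed
s.onreadystatechange = function () {
  if (s.readyState == "loaded" || s.readyState == "complete") {
    // safe to decide when (or whether) to execute
  }
};

// phase 2: parsed and executed -- the existing meaning of "load" is kept
s.onload = function () {
  // script has actually run
};

s.src = "widget.js";  // start the preload
// ... later, when we actually want it to run:
document.getElementsByTagName("head")[0].appendChild(s);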



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-03 Thread Kyle Simpson

I don't think readyState as Kyle describes is an appropriate candidate 
mechanism because it's not an actual indicator that the functionality 
exists. The only thing you can really be sure of if readyState is 
uninitialized is that the script element supports readyState. The fact 
that the only browser supporting this presently is the same one that 
supports the desired behavior is a happy coincidence.


You are correct, it's a little bit of a weak feature-test (compared to other 
alternatives). But then again, consider something like the `defer` property 
(which not all browsers currently support yet). Since `defer` is clearly 
spelled out in the spec, we have to hope that any browser which does not 
currently support `defer` will, when it decides to add `defer`, add it 
correctly based on how the spec says it should work. If a browser 
chose to implement a feature (like `defer`) in direct willful violation of 
the spec in a way that broke feature-detection, this would be a quite 
unfortunate situation and the community could rightly cry out to that 
browser to come back into alignment with the spec and the norm.


The same would be true of my proposal of feature-testing `readyState` (and 
its initial value) on the script element. Since FF and Webkit do not 
currently have a `readyState` property on the script element, if we were to 
amend the spec now to say that a `readyState` property must be added to 
indicate the progress of the preloading (implying of course that the 
preloading itself must also be implemented), then we could also somewhat 
confidently assume that FF and Webkit would follow the spec's instructions 
for the functionality when/if they decide to add a `readyState` property 
down the road.


Is this a perfect guarantee? Absolutely not. But it's definitely within 
reason for advocacy and evangelism to the browser vendors that they properly 
implement atomic/related chunks of the spec and not pick-and-choose pieces 
or make willful violations of critical aspects. I'm not saying that problems 
couldn't arise, I'm just saying that the general likelihood is that browser 
vendors would implement things the way the spec says to, and the 
feature-test I propose would be viable.


So... happy coincidence? Yes. But if the spec acts quickly enough before 
any other browsers implement `readyState` in an incompatible way (that is, 
without the attached preloading behavior we're discussing), then there's a 
decent and clear path forward which will allow the feature-test to be 
reliable. And that happy coincidence may just be our saving grace.


The only wrinkle is Opera, which has a `readyState` property on the script 
tag already, but it's non-functioning. The other happy coincidence is that 
Opera at least has a different default value than IE (and what is being 
proposed), so the pragmatic feature-test including not only the presence of 
the `readyState` property but also its initial value is still viable. Again, 
we would have to hope/assume that Opera would not act contrary to the spec 
to change the behavior/default-value of its `readyState` until such a time 
as they were prepared to implement the whole atomic changeset of 
functionality being discussed in this thread. If the spec is clear and 
unambiguous in that regard, this is perfectly reasonable to request and 
expect of all the browser vendors.



---
Just to reiterate, it's not that I'm against the noexecute proposal 
Nicholas put forth. It's just that this readyState preloading 
functionality is already implemented as we want it to be in one browser, AND 
it's already a suggestion in the spec, so the path to getting it fully 
adopted as a spec requirement, and evangelizing it to other browser vendors, 
is cleaner and simpler than starting from scratch across the board.


More than anything, I support the readyState concept out of a pragmatic 
desire to see *something* reasonable and workable for this use-case that is 
feasible to be adopted sooner rather than much later. And the path of least 
resistance is usually the best path to take on such matters.


--Kyle






Re: [whatwg] Proposal for separating script downloads and execution

2011-02-01 Thread Kyle Simpson
The ability to separate download and execution is a trend that has not 
only emerged, but continues to be explored. There are problems with the 
previous solutions, the biggest of which (in the case of #1 and #2) is the 
reliance on the browser caching behavior which may, in some instances, lead 
to a double download of the same script. It would be preferable for a 
standardized approach to achieve these goals.


Absolutely agree with Nicholas that this is a necessary (but I think more 
advanced) use-case in script loading. It's *especially* useful for the 
mobile web, where CPU utilization (the parsing/execution of scripts) must be 
carefully managed. For instance, you might want to take advantage of loading 
a bunch of code all at once, while the mobile radio receiver is still on 
during the initial loading, but then choose to execute it piece by piece as 
the user needs it.


Of course, in the desktop world, it's useful as well, as script loaders can 
use this technique to load multiple files in parallel, but execute them in a 
desired order (for dependency's sake). FYI: for *that* particular use-case, a 
solution has already been discussed, and, as I understand it, Ian has agreed 
to add it to the spec. I'm referring to the "async=false" functionality 
proposal that's been discussed in various forums for the past few months, 
and is now implemented in FF4 and coming soon to Webkit.




Add a new attribute to the script called noexecute (for lack of a better 
term) that instructs the browser to download the script but do not execute 
it. Developers must manually execute the code by calling an execute() 
method on the script node.


I'm not particularly in favor of this proposal, mostly because the spec 
already has a mechanism listed in it (and indeed it's been implemented in IE 
since v4) for doing exactly this.


http://dev.w3.org/html5/spec/Overview.html#running-a-script

In step 12:
For performance reasons, user agents may start fetching the script as soon 
as the attribute is set, instead, in the hope that the element will be 
inserted into the document. Either way, once the element is inserted into 
the document, the load must have started. If the UA performs such 
prefetching, but the element is never inserted in the document, or the src 
attribute is dynamically changed, then the user agent will not execute the 
script, and the fetching process will have been effectively wasted.


In other words, you can begin downloading one or more scripts (but not 
executing them) by simply creating a script element dynamically and setting 
its `src` property. The script will not be executed (even if it finishes 
downloading) until the script element is added to the DOM. In this way, you 
can easily create several script elements (but not append them to the DOM), 
and then when you want to execute them, you simply append them to the DOM in 
the order you prefer.


IE goes one useful step further: it gives the script element a `readyState` 
property (and `onreadystatechange` event handling) that notifies the code of 
the state of this preloading. Why this is useful is that you may choose to 
wait until all scripts have finished loading before starting to execute 
them. Being notified of when they finish loading (but not executing) can be 
a very useful addition to this technique.
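
In code, the technique looks roughly like this (a sketch only; the file 
names are made up, and the readyState tracking is of course IE-specific):

// create the elements and set `src` -- per the spec suggestion, fetching
// may begin immediately, but nothing executes until the elements are
// appended to the document
var a = document.createElement("script");
var b = document.createElement("script");
a.src = "a.js";
b.src = "b.js";

// (optionally, in IE, watch readyState/onreadystatechange on each element
// to know when both have finished fetching before executing anything)

// ... later, execute in the desired order by appending in that order
var head = document.getElementsByTagName("head")[0];
head.appendChild(a);   // a.js executes first
head.appendChild(b);   // then b.js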


The wording in the spec lists this idea as "may". I suggest that the 
discussion Nicholas has proposed should shift to discussing whether the spec 
should:


1) change "may" to "shall" or "will" to move it from being a suggestion to 
being a directly specified thing (that way the other browsers besides IE 
have incentive to eventually include it)


2) consider also specifying (rudimentary/basic wording of course) a 
mechanism similar to or compatible with IE's existing `readyState` event 
emissions for the script tag, such that the progress of the preloading 
(script.src set but script not yet DOM appended) can be monitored if 
need-be.


The primary reason I'm in favor of this approach over the one Nicholas 
suggests is because it's already in the spec as a suggestion (less work to 
get it to fully specified) and because one browser has already implemented 
and proven the approach, a foundation upon which other browsers can move 
forward.



--Kyle 



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-01 Thread Kyle Simpson
The major issue I have with the way the spec is written is that there is 
no way to feature detect this capability. I'd like this behavior (which I 
agree is useful) to be more explicit so we can easily make use of it where 
available.


I agree, the spec doesn't make it clear in its current wording how this 
could be feature-tested. And (as you know, Nicholas), I'm a firm believer 
that any new functionality *must* be feature-testable rather than relying on 
browser inference or UA sniffing. :)


However, the current IE implementation (with the additional `readyState` 
property) does actually provide a feasible feature-test (in fact, I'm 
working on a new revision of LABjs to take advantage of this functionality 
for IE).


The feature-test for IE essentially looks for the presence of the 
`readyState` property on a newly created script element, and then also 
inspects its value, because in IE it always defaults to "uninitialized". 
The reason for also having to test the value is that Opera has had a 
present-but-non-functional `readyState` property on its script elements 
since about 9.2, with a default value of "loaded".
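
Roughly, the test looks like this (a sketch along the lines of what that 
LABjs revision does):

// pragmatic feature-test for usable script preloading: readyState must be
// present AND its initial value must be "uninitialized"
var testScript = document.createElement("script");
// if readyState is absent, the comparison sees undefined (not a match);
// Opera's non-functional readyState defaults to "loaded", so it also fails
var canPreload = testScript.readyState == "uninitialized";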


If the spec considers adding an event system to this mechanism similar to or 
compatible with IE's existing mechanism, I think this could be a valid 
approach to feature-testing, assuming the browsers all agree to play nicely. 
:)


--Kyle