Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread timeless
On Wed, Feb 9, 2011 at 9:46 AM, Glenn Maynard gl...@zewt.org wrote:
 - The scripts in comments hack would be unneeded.  That's an unpleasant
 hack, because it will both prevent browsers from caching compiled scripts,
 and prevent scripts from being compiled in the background.  Specifying a
 bogus file type also has these problems.

i don't think a script in comments prevents caching.

in the end at some point someone sends a string to a js engine for execution

the js engine is free to generate bytecode or native code, it's also
free to recognize that it has already parsed that given string and has
a copy of the corresponding bytecode/native code.

as for compiling in the background, again, nothing prevents this, if
you have idle cycles you're free to speculatively parse whatever you
like. it might not be a great idea, but it isn't forbidden. as a
quality of impl issue, an agent is free to recognize when a given site
pulls things out of comments and note to itself (or its peers) in the
future to speculatively parse comments for that site.


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread timeless
On Wed, Feb 9, 2011 at 12:08 PM, Alexandre Morgaut
alexandre.morg...@4d.com wrote:
 Another approach:
 The link tag is meant to support a prefetch value for the rel attribute
 asking to preemptively cache the resource:
  - http://blog.whatwg.org/the-road-to-html-5-link-relations#rel-prefetch
  - http://davidwalsh.name/html5-prefetch
 We can then write:
 <link rel="prefetch" type="text/javascript" src="myscript.js">
 let the <link> HTML element have an execute() method when the type attribute
 is one of the User-Agent supported Scripting Media Types:

   +--------------------------+--------------------------+
   | text/javascript          | text/ecmascript          |
   | text/javascript1.0       | text/javascript1.1       |
   | text/javascript1.2       | text/javascript1.3       |
   | text/javascript1.4       | text/javascript1.5       |
   | text/jscript             | text/livescript          |
   | text/x-javascript        | text/x-ecmascript        |
   | application/x-javascript | application/x-ecmascript |
   | application/javascript   | application/ecmascript   |
   +--------------------------+--------------------------+

 (source: RFC 4329 Scripting Media Types
 - http://www.rfc-editor.org/rfc/rfc4329.txt )
 let the execute property value be null otherwise
 Note 1:
 Glenn just told me: "I doubt it's possible to change an object's interface
 based on the current value of an attribute."
 So, the execute() method may exist every time and throw an Error like:
  "Wrong call, the resource is not executable"

 Note2
 The rel attribute can accept several values.
 - http://www.whatwg.org/specs/web-apps/current-work/multipage/semantics.html#attr-link-rel
 It may then accept an additional "script" value.
 The <link> tag could benefit from a getElementsByRelationship(relationName)
 method, similar to getElementsByClassName(className).
 This way, prefetched scripts could be more easily retrievable (like any other
 <link> tag).

i'm not sure you need an execute(), you might benefit from an event
listener to tell you if a resource has been prefetched. but this
general path seems less icky to me than most if not all of the other
paths suggested in this thread.
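
For illustration, a minimal sketch of that event-listener variant, assuming a
user agent fired a load event for prefetch links and offered some hypothetical,
non-standard way to run the fetched script later; neither is guaranteed by the
prefetch hint:

    // hypothetical: prefetch a script and get notified when it lands
    var link = document.createElement('link');
    link.rel = 'prefetch';                 // hint only; the UA may ignore it
    link.type = 'text/javascript';
    link.href = 'myscript.js';
    link.onload = function () {            // assumes load fires for prefetch
      // resource is in the cache; execution would still need a separate
      // mechanism (e.g. inserting a <script> with the same URL)
    };
    document.head.appendChild(link);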


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Alexandre Morgaut

 i'm not sure you need an execute(), you might benefit from an event
 listener to tell you if a resource has been prefetched. but this
 general path seems less icky to me than most if not all of the other
 paths suggested in this thread.

You would surely benefit from the onload event.

Actually this approach should already work in browsers supporting prefetch...

If the resource has already been fully loaded, the script requesting it will 
benefit from the cache.
But what if it is only partially loaded?
Is the User-Agent already smart enough to detect that there is already a 
request for this file and wait for it to be fully loaded into the cache?

Adding support for a script media type or a script relationship to the <link> tag:
- would make it possible for the browser to also pre-parse it;
- if an execute() method is provided, it could take advantage of this pre-parsing, 
while an eval() on a string wouldn't.

Another thing sorely missing from the <link> tag is a content property to 
access the content of the resource 
(which would be null if not loaded, as with the XHR API).

This way, loading many resources could be more declarative instead of using XHR 
for everything.



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Glenn Maynard
 On Wed, Feb 9, 2011 at 12:08 PM, Alexandre Morgaut
 alexandre.morg...@4d.com wrote:
  Another approach:
  The <link> tag is meant to support a prefetch value for the rel attribute
  asking to preemptively cache the resource:
   - http://blog.whatwg.org/the-road-to-html-5-link-relations#rel-prefetch
   - http://davidwalsh.name/html5-prefetch
  We can then write:
  <link rel="prefetch" type="text/javascript" src="myscript.js">
  let the <link> HTML element have an execute() method when the type attribute
  is one of the User-Agent supported Scripting Media Types:

Executing scripts out of a <link> seems very strange.

Prefetching can also be disabled by the user, heuristically disabled by the
browser, or downloaded at a lower priority.  There's no way to know in advance
whether that will happen--not just due to lack of an API to ask, but because
the browser can't always tell in advance.  Prefetching is a hint, whereas
script preloading shouldn't be; loaders must be able to know whether they
can load-without-executing or not.

--
Glenn Maynard


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Glenn Maynard
On Wed, Feb 9, 2011 at 2:46 AM, Glenn Maynard gl...@zewt.org wrote:

 - Just for comparison: <script src="path.js" noexecute
 onload="this.execute()"> seems roughly equivalent to <script async>, and
 like async, falls back on immediate loading if noexecute isn't supported.
 <script defer> could be implemented in terms of this as well.


Put more generally, noexecute is a straightforward generalization of the
script deferral established by <script defer>.  Defer delays execution until
the document is parsed, and noexecute delays execution until instructed.
These fit together in an obvious, consistent way.
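
As a markup-driven sketch of how the pieces compare (the noexecute attribute and
the execute() method are the proposal under discussion, not a shipping feature;
the file names and the onProfileClick handler are made up):

    <!-- runs as soon as it downloads -->
    <script src="analytics.js" async></script>

    <!-- runs after the document is parsed -->
    <script src="widgets.js" defer></script>

    <!-- proposed: downloads (and may be parsed) now, runs only when told to -->
    <script id="profile" src="profile.js" noexecute></script>
    <script>
      function onProfileClick() {
        document.getElementById("profile").execute();  // hypothetical method
      }
    </script>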

-- 
Glenn Maynard


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Boris Zbarsky

On 2/9/11 12:27 AM, Kyle Simpson wrote:

I can't speak definitively as to how the JavaScript engine is
implemented (and if the code is significantly different between mobile
and desktop).


In Gecko's case, it's identical (modulo the different JIT backends for 
ARM and x86 and x86-64, of course).



But I can say that even if their code is substantially the
same, I could still see it quite plausible that the device itself locks
up (not the browser) if there's just simply too much going on, taxing its
limited CPU power.


Yes, but that could just happen due to background tabs, workers, etc. 
That's not something a page author can sanely control.



I can also see it quite plausible that mobile OS's are not as capable of
taking advantage of multi-threading


That's not the case for the OSes Gecko is targeting on mobile, at least. 
 It's not the case on iOS, last I checked.  I can't speak to WP7 or 
Symbian; I don't have any experience with them.



Regardless, considering such things is way outside the scope of anything
that's going to be useful for web developers in the near-term dealing
with these use-cases.


Yes, but so is the proposal here, no?


Even if you're right and the fault really lies with the implementor of
the JavaScript engine (or the OS), that's still a fruitless path for
this discussion to go down. No matter how good the mobile JavaScript
engine gets, I promise you sites will find a way to stuff too much
JavaScript down the pipe at the beginning of page-load in such a way as
to overload the device. That is a virtual certainty.


Yes, but what makes you think that those very same sites will make good 
use of the functionality we're proposing here?



Now you may be right that authors who really want to screw up like
that will just do browser-sniffing hacks of various sorts and still
screw up. But it's not clear to me that we need to make the barrier to
shooting yourself in the foot lower as a result


That sounds more like a question of degree (how much we should expose to
the developer, and how) than the principle (should we expose it).


Yes, of course.  Degree is my only concern here.


In any case, I don't see much evidence that suggests that allowing an author to
opt-in to pausing the script processing between load and execute is
going to lead to authors killing their page's performance. At worst, if
the browser did defer parsing all the way until instructed to execute,
the browser simply would have missed out on a potential opportunity to
use some idle background time, yes, and the user will have to suffer a
little bit. That's not going to cause the world to come crashing down,
though.


Neither will the browser eagerly parsing.  ;)


What's VERY important to note: (perhaps) the most critical part of
user-experience satisfaction in web page interaction is the *initial*
page-load experience.


Really?  The pages I hate the most are the ones that make every single 
damn action slow.  I have had no real pageload issues (admittedly, on 
desktop) in a good long while, but pages that freeze up for a while 
doing sync XHR or computing digits of pi or whatever when you just try 
to use their menus are all over the place.



So if it's a tradeoff where I can get my page-load
to go much quicker on a mobile device (and get some useful content in
front of them quickly) in exchange for some lag later in the lifetime of
the page, that's a choice I (and many other devs) are likely to want to
make.


See, as a user that seems like the wrong tradeoff to me if done to the 
degree I see people doing it.



Regardless of wanting freedom of implementation, no browser/engine
implementation should fight against/resist the efforts of a web author
to streamline initial page-load performance.


Perhaps they just have different goals?  For example, completely 
hypothetically, the browser may have a goal of never taking more than 
50ms to respond to a user action.  This is clearly a non-goal for web 
authors, right?  Should a browser be prohibited from pursuing that goal 
even if it makes some particular sites somewhat slower to load initially 
(which is a big if, btw).



Presumably, if an author is taking the extraordinary steps to wire up
advanced functionality like deferred execution (especially negotiating
that with several scripts), they are doing so intentionally to improve
performance


My point is that they may or may not be improving performance in the 
process, depending on implementation details.  Unless they tested in the 
exact browser that's running right now, it's hard to tell.


I see this all the time in JS: people time some script in one browser, 
tweak it to be faster, and in the process often make it slower in other 
browsers that have different JIT heuristics or different bottlenecks.


I think you're assuming a uniformity to browser implementations that's 
simply not there.



and so if they ended up actually doing the reverse, and
killing their performance to an unacceptable 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Boris Zbarsky

On 2/9/11 2:19 AM, John Tamplin wrote:

I am not sure I understand why you are so opposed to providing a
mechanism for an application to tell the browser it would like the
parsing to not necessarily be performed immediately on a downloaded script.


I'm not opposed to that, as should be clear if you read what I actually 
said.


What I'm opposed to is a normative testable requirement that the browser 
parse or not parse the script at particular points in time, with that 
time being under page control.


-Boris


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Kyle Simpson


Mighty conjecture, chap. Multithreading is even possible on
microcontrollers like the Atmel ATmega32 - so why should a modern
operating system running on reasonable hardware not be able to do it?


In most mobile devices I've had exposure to developing for, 
multi-threading is not possible/available to me. The usual answer, for 
instance for the iPhone, is that true multi-threading will tend to cause 
serious drains on limited battery life, which degrades quality of 
user-experience and user satisfaction. That's the only anecdotal evidence 
that I have for how these engines may not be completely free to multi-thread 
as is being suggested.


In any case, you're still missing the point. The mobile OS's (and even the 
JavaScript engines) are of course free to improve their internal 
implementation details, but this HTML spec has only a modest ability 
to affect that. The hardware/mobile-OS vendors have dozens of different 
pressures that play into what they can and cannot implement, and how. Even 
if the HTML spec were to say "must process script execution in a separate 
thread from the rendering engine", the feasibility of that requirement may 
still be overshadowed by lots of factors completely out of the control of 
the specification group.


We can continue to debate what might be nice for mobile vendors to consider, 
but they aren't on this list and listening to us. Who IS on this list, and 
who IS interested, are developers who have real performance problems right 
now. And they are creating ever more complex hacks to get around these 
problems. And the spec has an opportunity to make a small-footprint change 
to give them some better options for that performance negotiation.


You're also ignoring the fact that there are several other documented 
use-cases for execution-deferral that are not related to mobile (or 
multi-threading) at all. That may be the 80% use-case for this proposal, but 
it's certainly not the only reason we want and need a feature like this.




Fun fact: I use mobile versions of some web sites, because they are much
quicker, even on the desktop. Sometimes a little minimalism can go a
long way.


We're not particularly talking about generalized web sites as much as we are 
talking about complex mobile web applications like Gmail. Even in their 
minimalism, the bare minimum experience they're willing to deliver is 
overloading the mobile browser and so they are resorting to crazy and 
brittle hacks.


In my opinion, when we see a trend toward developers having to hack around 
certain parts of the functionality that don't work the way they need it to 
(for real-world use-cases), then it's a good sign that we should consider 
helping them out. And suggesting that they just load less JavaScript is not 
really all that helpful for the population of applications that are most in 
need of this feature.




Counter-intuitive at first, but true: More complex code is not
necessarily faster code. More options are more options to screw up.


We have a number of well-known and well-documented experts in the realm of 
page-load optimization and script loading functionality who are behind 
requests like the one being discussed. If we can't trust them to do the right 
thing with what we give them, then the whole system is broken and moot. The fact 
that some developers may misunderstand and improperly use some functionality 
should not prevent us from considering its usefulness to those who clearly 
know the right things to accomplish with it.


It also hasn't been shown with any degree of specificity just what the fear 
is of developers screwing up if we give them this functionality. Right now 
it's a bunch of conjecture about possible misunderstandings, something which 
should be easy to deal with through proper documentation, education, and 
evangelism. Why are we so afraid to let the right implementations of a 
functionality flourish and bubble to the top, and drown out the wrong 
implementations of functionality by those who are either ignorant or 
incompetent?



I'm losing track in the noise of what the fundamental disagreements are--if 
there even are any.  I think the original proposal is a very good place to 
start


The original proposal is in fact more focused on the markup-driven use-case 
than on the script-driven use-case. The original proposer, Nicholas, agreed 
in an earlier message that he's really more concerned with script-driven 
functionality than markup driven functionality. And I completely agree with 
that assertion.


In fact, I'd go so far as to say that the use-case for separating script 
loading from its parsing/execution phase (and thus being able to 
control/trigger when that phase occurs, later) is 99% driven by the 
script-loader use-case. Script loaders by and large do not use markup 
semantics to accomplish their tasks (because most of them do not use 
document.write("<script>...</script>") to load scripts)
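
For context, the pattern script loaders typically use today (plain dynamic
insertion, with no way to defer execution; the file name is just an example):

    // create a script element from script logic rather than markup
    var s = document.createElement('script');
    s.src = 'module.js';
    s.async = true;                        // don't block HTML parsing
    s.onload = function () {
      // by the time load fires, the script has already executed
    };
    (document.head || document.getElementsByTagName('head')[0]).appendChild(s);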


So, if we consider the spirit of the original proposal, we 

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Kyle Simpson

 Regardless, considering such things is way outside the scope of anything

that's going to be useful for web developers in the near-term dealing
with these use-cases.


Yes, but so is the proposal here, no?


No, I don't think so. A huge part of my point with the proposal is that it 
builds on existing spec wording AND it has browser implementation precedent 
from IE, and *some* stated support from Opera. That makes a solution a bit 
more tangible and foreseeable in the near future, as opposed to for instance 
saying that all mobile device JavaScript engines must be changed so that 
they take more advantage of multi-threading -- a task which could take years 
to realize.



Yes, but what makes you think that those very same sites will make good 
use of the functionality we're proposing here?


Fair enough, the offenders will probably keep on offending. EXCEPT that web 
performance advocates like myself (and Steve Souders, and many others) will 
have something tangible to take to them in performance evangelism efforts. 
Right now, if we try to get them to address their bad performance, it 
involves suggesting an extremely complex and convoluted set of brittle 
hacks, which they are rightly hesitant to consider.


It's a much easier sell if we can say "look, here's this simple mechanism 
dedicated specifically to helping the problem your site has, would you 
consider it?"




Neither will the browser eagerly parsing.  ;)


What's VERY important to note: (perhaps) the most critical part of
user-experience satisfaction in web page interaction is the *initial*
page-load experience.


Really?  The pages I hate the most are the ones that make every single 
damn action slow.  I have had no real pageload issues (admittedly, on 
desktop) in a good long while, but pages that freeze up for a while 
doing sync XHR or computing digits of pi or whatever when you just try to 
use their menus are all over the place


There's lots and lots of research into how user satisfaction with web pages 
and web applications is driven more by the initial page-load experience than by 
any other factor (not exclusively, just in the majority of cases). Again, I refer you to the 
great work Steve Souders has done in this area. There's plenty of 
information about how when sites speed up their page-load (and nothing 
else), user retention (and a whole related host of other positive 
user-satisfaction indicators) all go up, sometimes dramatically.




So if it's a tradeoff where I can get my page-load
to go much quicker on a mobile device (and get some useful content in
front of them quickly) in exchange for some lag later in the lifetime of
the page, that's a choice I (and many other devs) are likely to want to
make.


See, as a user that seems like the wrong tradeoff to me if done to the 
degree I see people doing it.


We can debate that point forever and never really come to a definitive 
consensus. I myself sometimes feel like this technique can be taken 
overboard and I'm not entirely behind all attempts to defer script 
execution. But nonetheless, there's provable validity to making some 
tradeoffs like that, and seeing user happiness go up. We're simply asking 
for the means to make those tradeoffs without costly/ugly hacks. That's all.


There's obviously an art here in balance. But the numbers clearly indicate 
that addressing page-load performance bottlenecks leads to huge gains in 
user-satisfaction.



Perhaps they just have different goals?  For example, completely 
hypothetically, the browser may have a goal of never taking more than 50ms 
to respond to a user action.  This is clearly a non-goal for web authors, 
right?


In fact, no. As I asserted in an earlier message in the thread, I believe 
the goals of the browsers (to be faster in page load) line up well with the 
goals of web authors (to reduce the amount of bounce traffic because of slow 
loading sites, especially on mobile).


Not all web authors care about performance (often they just care about bells 
& whistles). But there's a recent, undeniable trend, and huge uptick, toward 
more awareness of web performance optimization issues and specifically on 
improving initial page-load experience.


Consider the Google algorithm change where they take page-load speed as a 
factor in ranking. Clearly, more and more web authors (and the businesses 
that drive their decision making) are seeing the benefits of 
performance-savvy websites, so I believe we'll see even more alignment of 
goals as we move forward.



Should a browser be prohibited from pursuing that goal even if it makes 
some particular sites somewhat slower to load initially (which is a big 
if, btw).


A browser should have some strong warnings against acting in a way that is 
counter to the expressed intent of a web author. If a web author is taking 
steps to more actively control the pipeline of resource loading and 
page-load performance, the browser should not try to second-guess that 
author and thwart their efforts.



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Nicholas Zakas
I had chatted with a few folks about using rel=prefetch, but there seem to be 
a lot of issues that would have to be resolved to get the behavior I'm after. 
Prefetching in this way is very passive, currently implemented as happening 
during user idle time, which is unpredictable (not to mention the issues Glenn 
mentioned below). 

I think Glenn summed this up correctly by saying that prefetching is a hint, and 
when you want to load a script you want it to happen. This isn't to say that 
you wouldn't want to prefetch a script, but I see that as more of a way to help 
the next page you'll navigate to by priming the cache vs. helping the currently 
loaded page. 

In any event, it seems that rel=prefetch would have to change a lot vs. the 
changes to the script element to allow the same behavior.

-N



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Boris Zbarsky

On 2/9/11 10:37 AM, Kyle Simpson wrote:

I think you're assuming a uniformity to browser implementations that's
simply not there.


No, I'm relying on the growing trend of more and more web authors being:
1) aware of performance issues, especially initial page-load performance
2) able to use more tools to measure these issues, and test them across
a broader range of user-agents
3) able to leverage more functionality that the spec and browsers give
to them, to have better optimization of their pages


Again, you're assuming that the optimization that needs to happen is 
the same for all browsers.



Assuming the browser does the parsing on the main thread, yes? What if
it doesn't?


Regardless of what thread the processing is on, if the parsing happens
during the critical few moments of page-load, and the device's
CPU/hardware is overwhelmed, it's going to be obvious that there's a
slowdown or freeze.


If the hardware is overwhelmed.  On the other hand, if it's a multicore 
chip (which is what mobile hardware is shipping with nowadays) and the 
page-load is gated on one core, there's no reason to not be parsing on 
another core...



The major problem in the mobile-performance part of the use-case is
around parsing. No one is suggesting here that the web author should have
a .parse() function where they deterministically control it and handcuff
the browser.


OK, good.


What we're suggesting is that we be able to directly
control execution, and in so doing, make an indirect hint to the browser
that it should also strongly consider deferring the parsing.


That sounds fine to me.

-Boris


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Nicholas Zakas
 What we're suggesting is that we be able to directly
 control execution, and in so doing, make an indirect hint to the browser
 that it should also strongly consider deferring the parsing.

That sounds fine to me.

Sorry for the confusion, that's exactly what I had in mind with the proposal 
initially by saying, "User agents may background parse or compile the script in 
preparation for execution but must not execute the code until instructed to do 
so." It should definitely be up to the browser to determine the optimal way to 
proceed.
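
For concreteness, the intended flow under that wording might look like the
following (noExecute and execute() are the proposed, non-standard names; the
URL and the element id are made up):

    // 1. The page asks for download without execution.
    var s = document.createElement('script');
    s.noExecute = true;            // proposed attribute/property
    s.src = 'heavy-feature.js';
    document.head.appendChild(s);  // UA fetches, and MAY parse/compile when idle

    // 2. Execution happens only when the page asks for it,
    //    e.g. the first time the feature is actually needed.
    document.getElementById('open-feature').onclick = function () {
      s.execute();                 // proposed method
    };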

-N


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Alexandre Morgaut

 The <link> tag is meant to support a prefetch value for the rel
 attribute asking to preemptively cache the resource:
 - http://blog.whatwg.org/the-road-to-html-5-link-relations#rel-prefetch
 - http://davidwalsh.name/html5-prefetch
 
 For <link rel="prefetch"> to address the use-case, some event mechanism would
 HAVE to be added to the <link> tag such that the finishing of that
 prefetching could be detected. But then what do we do once it's finished
 loading, even if we *do* add some <link> event mechanism to detect it?

We already had requirements for an event mechanism on <link> to detect when 
stylesheets were going to be applied.
These days I most often see an XHR request plus appending a <style> element 
used to work around that.
I don't like it much.

 .execute() would be a terrible idea for <link>, because it would essentially
 morph <link> into a <script> element at that point. Not only is this
 significantly more confusing to web authors for that type of behavioral
 overloading, but I'd guess that all the complicated semantics around the
 <script> element would then have to be duplicated into a <link> tag that is
 being executed.

Well, I admit that it can be confusing... 
As is, perhaps, the use of <style> for inline CSS vs <link> for external CSS.

 It's tempting to suggest that you would then just add a proper script
 element with the same URL to accomplish the execution of it. This suffers
 a similar fate to many of the hacky workarounds that currently exist: that
 it's based on the assumption that the resource was cached.

As there are nofollow and noreferrer types, links could take advantage of a 
nocache type.

Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Diego Perini
On Wed, Feb 9, 2011 at 6:57 PM, Alexandre Morgaut
alexandre.morg...@4d.com wrote:

 On Feb 9, 2011, at 4:40 PM, Nicholas Zakas wrote:

 I had chatted with a few folks about using rel=prefetch, but there seems to 
 be a lot of issues that would have to be resolved to get the behavior I'm 
 after. Prefetching in this way is very passive, currently implemented as 
 happening during user idle time, which is unpredictable (not to mention the 
 issues Glen mentioned below).

 I think you guys are perfectly right as prefetch is not meant to say that 
 the interface will need the resource ASAP


 I think Glen summed this up correct by saying that prefetching is a hint, and 
 when you want to load a script you want it to happen. This isn't to say that 
 you wouldn't want to prefetch a script, but I see that as more of a way to 
 help the next page you'll navigate to by priming the cache vs. helping the 
 currently loaded page.

 Good point

 In any event, it seems that rel=prefetch would have to change a lot vs. the 
 changes to the script element to allow the same behavior.

 Surely, if starting from the specific prefetch behavior


 I still think that using a more declarative way to define required HTTP 
 resources would be a gain and may work in your case

 The link type list is rich:
 - 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/links.html#linkTypes

 Adding a "required" relationship could be a more appropriate way.

 <link id="someScript" rel="required" type="text/javascript" src="someData.js">
 <link id="someData" rel="required" type="application/json" src="someData.json">
 <link id="aTemplate" rel="required" type="text/html" src="myTemplate.html">

 This would still need:
 - a content property on HTML <link> elements (which may also be useful for 
 accessing the raw CSS definition)

 Binding an execute(), run() or eval() method on <link> elements may be more 
 debatable, but it doesn't hurt me that much.
 This way, pre-parsing script resources would still be possible.


Completely agree; your proposal seems better, probably easier to
implement, and seems less prone to backward compatibility issues to me.

I believe it would be better to leave out the <script> tag and try to
obtain the same benefits by defining the correct rel on <link>
elements.

Having script access to the content of the HTTP resource is what is
really missing, and it seems to me it would cover most of the presented
needs.

Maybe not completely related to script loading, but bubbling all
"load" events up to the document like Opera does would also help
authors in determining the correct timings (when to execute/use the
resource once it is fully loaded).
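
To make the idea concrete, a sketch of the <link rel="required"> variant being
discussed (rel="required", the content property, execute(), and document-level
observation of load events are all proposed or non-standard; the ids and URLs
come from the example above):

    <link id="someData" rel="required" type="application/json" src="someData.json">
    <link id="someScript" rel="required" type="text/javascript" src="someData.js">

    <script>
      // Capture load events at the document so they can be observed centrally
      // (approximating the Opera-style document-level events mentioned above).
      document.addEventListener('load', function (e) {
        var el = e.target;
        if (el.id === 'someData') {
          var data = JSON.parse(el.content);   // "content" is the proposed property
        } else if (el.id === 'someScript') {
          el.execute();                        // proposed method
        }
      }, true);
    </script>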

--
Diego


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Nicholas Zakas
I had thought a bit about a new rel for links, but always got caught up on 
the execute() method and how inappropriate it was for any other content types. 
It seemed weird to be able to call such a method on <link> to execute a script 
when <script>'s sole job is to execute scripts. If <link> then becomes capable 
of executing scripts, do we need <script>? 

If you can't get agreement to add a method on <link>, then we're back to 
possibly having a double-download situation, where you include the script via 
<link> and then need to create a dynamic script node to point to the same URL.

In the end it seemed that keeping <script> as the sole executor of scripts 
would be more likely agreed upon than augmenting <link> to do the same. It 
still seems like there would be more changes necessary for a <link> approach 
than a <script> approach, and I'm not sure it addresses backwards compatibility 
any better.

Once again, I expect the common case to be the script loader case, where 
<script> elements are created using JavaScript. In that case, there is zero 
impact on backwards compatibility when feature testing is used. The only 
backwards compatibility issue is when you use noexecute in markup, but a 
similar issue would occur using <link> in that case.
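
For illustration, the double-download risk mentioned above: preloading via
<link> and executing via a second <script> element only avoids a second fetch
if the response happens to be cacheable (the URL is made up):

    // preload hint (may be ignored, and gives no execution control)
    var l = document.createElement('link');
    l.rel = 'prefetch';
    l.href = 'big-module.js';
    document.head.appendChild(l);

    // later, to execute: insert a <script> with the same URL and hope
    // the browser serves it from cache rather than re-downloading it
    var s = document.createElement('script');
    s.src = 'big-module.js';
    document.head.appendChild(s);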

-N




Re: [whatwg] Should script run if it comes from a HTML fragment?

2011-02-09 Thread Ian Hickson
On Thu, 11 Nov 2010, Ryosuke Niwa wrote:
 
 I'm working on the WebKit bug 12234 - Using createContextualFragment to 
 insert a script does not cause the script to execute 
 https://bugs.webkit.org/show_bug.cgi?id=12234. [...]

This thread pretty much resolved itself, but for the record:

* createContextualFragment() is here:
http://html5.org/specs/dom-parsing.html#dom-range-createcontextualfragment
  ...and re-enables scripts before returning them; the parser doesn't 
  execute them synchronously.

* innerHTML doesn't run scripts; they are inserted disabled (see the sketch 
  below).
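
A small sketch of the difference (spec behavior; at the time, WebKit did not yet
execute createContextualFragment scripts, per the bug above):

    // Script parsed by createContextualFragment: re-enabled, so it runs
    // once the fragment is inserted into the document.
    var range = document.createRange();
    range.selectNode(document.body);
    var frag = range.createContextualFragment(
      '<script>window.ranA = true;<\/script>');
    document.body.appendChild(frag);   // ranA becomes true

    // Script parsed via innerHTML: marked "already started", so it never
    // runs, even after insertion.
    var div = document.createElement('div');
    div.innerHTML = '<script>window.ranB = true;<\/script>';
    document.body.appendChild(div);    // ranB stays undefined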
 

On Sat, 13 Nov 2010, Henri Sivonen wrote:
 
 Hixie wrote the relevant part of the spec without taking 
 createContextualFragment into account, so you shouldn't read too much 
  into what the spec says now. Similar overlooking of a part of the platform 
  when considering the interactions has occurred a couple of times with how 
  HTML5 integrates with XSLT, too.

Indeed. I am not omniscient. :-)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


[whatwg] SharedWorkers and document discarded

2011-02-09 Thread Drew Wilson
Hi all,

Jonas brought up an interesting point regarding SharedWorkers in an
unrelated thread that I wanted to clarify here.

His contention is that the current SharedWorker spec specifies that the
lifecycle of a SharedWorker is currently tied to the GC behavior of the
underlying VM - specifically, that a SharedWorker is shut down after its last
parent document has been GC'd.

The relevant spec language is (from
http://www.whatwg.org/specs/web-workers/current-work/#the-worker's-lifetime
):

Whenever a Document d is added to the worker's Documents, the user agent
must, for each worker in the list of the worker's workers whose list of the
worker's Documents does not contain d, add d to q's WorkerGlobalScope owner's
list of the worker's Documents.

Whenever a Document object is discarded, it must be removed from the list of
the worker's Documents of each worker whose list contains that Document.
I'm not an expert on Document lifecycles, so I don't entirely understand
under which circumstances the spec requires that a Document object be
discarded. For example, if I have a top level window with a child iframe,
and that child iframe creates a SharedWorker, then reloads itself or
navigates, could that cause the original document to be discarded/suspended,
or does this depend on GC (whether script in the top level window maintains
a reference to the document javascript object)?

My understanding from previous discussions was that the only thing impacting
whether a document is discarded is whether the UA decided to keep it
suspended in the history cache - can javascript-level references also
prevent a document from being discarded?

-atw


Re: [whatwg] Removal of blocking script

2011-02-09 Thread Ian Hickson
On Mon, 15 Nov 2010, Juriy Zaytsev wrote:

 When removing [1] a long-loading script element from a document, 
 browsers seem to disagree on whether such removal should affect page 
 rendering. A simple test 
 (http://kangax.github.com/jstests/blocking_script_removal_test/) shows 
 that Opera (9.x - 11) and IE (5.5 - 9) immediately continue parsing the 
 document upon element removal. However, in Firefox (3-4) and Chrome (9) 
 the document parsing is blocked until the script is loaded or times out 
 (even when the actual element no longer exists in the document, has its 
 src reference an empty string, and there exist no references to it).
 
 Does the current draft explain what should happen in such a case, and if it 
 does, what is it? (I can't seem to find it.) The existing discrepancy 
 suggests that it's something worth codifying.
 
 [1] Where removing is done through scripting (say, via Node's 
 `removeChild` or analogous method).

The spec currently implies that the page should block for the full second, 
and that the script should still execute.

HTH,
-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] Google Feedback on the HTML5 media a11y specifications

2011-02-09 Thread Silvia Pfeiffer
On Sun, Jan 23, 2011 at 1:23 AM, Philip Jägenstedt phil...@opera.com wrote:
 On Fri, 14 Jan 2011 10:01:38 +0100, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:
 5. Markup changes

 [..]
 * Time markers: WebVTT time stamps follow no existing standard for
 time markers. Has the use of NPT as introduced by RTSP[5] for time
 markers been considered (in particular npt-hhmmss)?

 [5] http://www.ietf.org/rfc/rfc2326.txt

 Unfortunately, the hour component is not optional in NPT. Also, the decimal
 part of seconds is of arbitrary precision, which doesn't seem necessary.

Further discussions at Google indicate that it would be nice to make
more components optional. Can we have something like this:

  [[h*:]mm:]ss[.d[c[m]]]  |  s*[.d[c[m]]]

Examples (a rough regex sketch of this grammar follows the list):
23  = 23 seconds
23.2  = 23 sec, 2 decisec
1:23.45   = 1 min, 23 sec, 45 centisec
123.456  = 123 sec, 456 millisec
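
A loose sketch of that grammar as a JavaScript regular expression, written so
that it accepts the examples above (digit counts are kept flexible, since e.g.
"1:23.45" uses a single-digit minutes field; this is illustrative, not the
WebVTT spec):

    // optional hours, optional minutes, seconds, optional 1-3 fraction digits
    var timestamp = /^(?:(?:(\d+):)?(\d{1,2}):)?(\d+)(?:\.(\d{1,3}))?$/;

    timestamp.test('23');        // true
    timestamp.test('23.2');      // true
    timestamp.test('1:23.45');   // true
    timestamp.test('123.456');   // true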

Cheers,
Silvia.


Re: [whatwg] Google Feedback on the HTML5 media a11y specifications

2011-02-09 Thread Glenn Maynard
On Tue, Feb 8, 2011 at 9:57 PM, Silvia Pfeiffer
silviapfeiff...@gmail.comwrote:

 Even text can amount to a substantial amount of data. Compressed http
 delivery will help. Keeping the caption/subtitle tracks in separate
 files and only delivering those that a user really wants helps, too.
 But even then a caption file for a 2 hour video can be a fairly big
 file and we want them downloaded to the browser as quickly as
 possible, such that the video player is not held back from playback of
 the video through still downloading the captions. So, serving billions
 of caption files at as little latency as possible are both good
 arguments for keeping the format dense.


Without doing a lot of sampling, just looking at an SRT for an arbitrarily
chosen long movie (LOTR #1), it's 110k uncompressed, 45k deflated--that seems small
enough to not cause latency problems, as long as you're only downloading the
track you need.

Of course, I agree that it shouldn't be necessary to actually repeat
information for each cue--it makes authoring painful.

(Ouch.  This .SRT I downloaded at random uses commas for the decimal
separator.)



   Agreed. I'm happy for the previously suggested // at the line start
 to be comments, or, for that matter, # or ; or any other special
 character. I would prefer not to use /* since it implies a */ is
 required to end the comment. Similarly we should avoid <!-- and
 --> or anything else that requires a special comment end mark and
 more than one or two characters.


Comment end markers aren't a major burden in HTML and CSS.  Block comments
allow easily commenting out sets of cues, midsentence comments within a cue
(translator/editor comments), and (putting aside the "-->" token conflict,
which will go away if the timestamp separator is changed) <!-- ... --> can
be used without adding any new escapes.  All of the other suggestions would
also need to be escaped more frequently: // happens in URLs, and # and ;
occur in plain language.


-- 
Glenn Maynard


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Glenn Maynard
On Wed, Feb 9, 2011 at 10:06 AM, Kyle Simpson get...@gmail.com wrote:

 The original proposal is in fact more focused on the markup-driven use-case
 than on the script-driven use-case. The original proposer, Nicholas, agreed
 in an earlier message that he's really more concerned with script-driven
 functionality than markup driven functionality. And I completely agree with
 that assertion.

 In fact, I'd go so far as to say that the use-case for separating script
 loading from its parsing/execution phase (and thus being able to
 control/trigger when that phase occurs, later) is 99% driven by the
 script-loader use-case. Script loaders by and large do not use markup
 semantics to accomplish their tasks (because most of them do not use
 document.write("<script>...</script>") to load scripts)

 So, if we consider the spirit of the original proposal, we should examine
 it in the proper context (the vast majority use-case), which is script
 elements being created from script logic rather than markup.

 Given that proper context, the proposal becomes something like:

 1. Give a dynamic script element a noExecute property (a boolean
 property, defaults to false, can be set to true)
 2. Give a dynamic script element an execute() function which executes a
 script that has been deferred by the noExecute property.


This is precisely what I described.  (Obviously, the noexecute flag would
be exposed both as a DOM attribute and a script property.)
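
A sketch of how a script loader might wrap that API, falling back to normal
loading where it isn't implemented (noExecute/execute() are the proposed,
non-standard names; nothing here is a shipping interface):

    function preload(url) {
      var s = document.createElement('script');
      if ('noExecute' in s) {              // feature-test the proposal
        s.noExecute = true;
        s.src = url;
        document.head.appendChild(s);
        return function execute() { s.execute(); };
      }
      // fallback: no load/execute separation available; runs on load
      s.src = url;
      document.head.appendChild(s);
      return function execute() { /* already executed */ };
    }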

The problem with *that* phrasing of the proposal (compared to the
 readyState preloading I'm advocating) is:

 1. It asks for two new unprecedented additions to the script element
 specification. The other proposal asks to take the existing spec wording and
 change it from a "may" to a "must" (from suggestion to requirement).


It would take more than that.  It wouldn't make sense to put a "must"
requirement for "must begin loading data when the src attribute is set, even
if the script element has not been added to the document" inside a list of
steps that only happens when the script has been added to the document.

It would also require adding readyState to the script element spec; it's
currently only defined for document and media.  It would require specifying
onreadystatechange, which is only currently defined for document (and not
media, I believe).  The error event would need to be updated to reflect
the fact that it can fire when the script element isn't in the DOM tree if
the fetch fails; I think IE's behavior of firing that event in this case is
currently off-spec.

I'm not against this approach fundamentally; I'm just pointing out that it's
not a one-word s/may/must/ change.  I do believe noexecute is cleaner and
more powerful: it allows executing a fetched script synchronously, which
would make using this transparently behind a black-box script API easier.
With readyState, if you call a function and it needs an interface that isn't
yet loaded, it needs to trigger execution (by adding the script to the
document) and return with try again later, which is restrictive.  I'd also
expect most of the engine work needed to support noexecute is already
implemented for defer (a question for implementors, of course).
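
For comparison, the readyState-based pattern as script loaders have used it in
IE (behavior varies by version and cache state, so treat this as a sketch
rather than a guaranteed contract; the URL is made up):

    var s = document.createElement('script');
    s.onreadystatechange = function () {
      if (s.readyState === 'loaded' || s.readyState === 'complete') {
        s.onreadystatechange = null;
        // fetched (and possibly parsed); executing requires inserting it:
        // document.getElementsByTagName('head')[0].appendChild(s);
      }
    };
    s.src = 'deferred.js';   // IE starts downloading even while detached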

In any case, I'm not too worried about either approach; my main goal was
getting back to discussing interfaces, since it seemed like most of the
debate was tangential--the main relevant point seems to be guaranteeing
browsers retain the freedom to parse scripts at whatever point they want
(during load, after load, during idle time, or upon execution), which I
think everyone is strongly agreed on.

-- 
Glenn Maynard


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-09 Thread Kyle Simpson


You're also ignoring the fact that there are several other documented
use-cases for execution-deferral that are not related to mobile (or
multi-threading) at all. That maybe the 80% use-case for this proposal,
but it's certainly not the only reason we want and need a feature like
this.


Could you list those issues or point me where these issues are documented?


Earlier in this thread: 
http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-February/030327.html



--Kyle 



Re: [whatwg] Javascript: URLs as element attributes

2011-02-09 Thread Ian Hickson
On Mon, 15 Nov 2010, Boris Zbarsky wrote:
 On 11/15/10 8:15 PM, Ian Hickson wrote:
   Gecko's currently-intended behavior is to do what [the spec] 
   describes in all cases except:
   
  <iframe src="javascript:">
  <object data="javascript:">
  <embed src="javascript:">
  <applet code="javascript:">
  
  What does it do for those cases if it doesn't match the spec?
 
  For <iframe> the behavior in Gecko currently is different in terms of 
 what the URI of the result document of javascript: is set to.

How does it differ? As far as I can tell, it works the same as the spec 
says (the document.location is about:blank in the example above).


 For the others, I believe we execute them in the script environment of the
 owner document of the object/embed/applet, whereas the spec requires them to
 execute in a sandbox, as far as I can tell.

Ah, interesting. For <object>, this seems to be a unique feature of 
Firefox. Opera also executes the script in the context of the owner, but 
then ignores the results as far as I can tell. Other browsers don't seem 
to support javascript: in data= at all.

For <embed>, only Firefox does this (tested using window.alert). I didn't 
test further with <embed> since there doesn't seem to be a use case for 
this anyway.

I didn't test <applet>.


 Note that there is some confusion here in terms of browsing contexts and
 <object>, since <object> does expose a Document object sometimes (but not
 others) and does participate in session history sometimes, I believe...  So
 I'm not quite sure what behavior the spec calls for for <object>.

It's defined; see the section on the <object> element.


   For what it's worth, as I see it there are three possible behaviors for
   a javascript: URI (whether in an attribute value or elsewhere):
   
   1)  Don't run the script.
   2)  Run the script, but in a sandbox.
   3)  Run the script against some Window object (which one?)
   
   Defining which of these happens in which case would be good.  
   Again, Gecko's behavior is #2 by default (in all sorts of 
   situations; basically anywhere you can dereference a URI), with 
   exceptions made to do #3 in some cases.
  
  That's what the spec says currently.
 
 That doesn't agree with your comments about script src above...

Indeed, I misspoke. The spec actually defaults to not running the script, 
but in most circumstances of interest does #2, and in a number of other 
cases does #3 or does #1 explicitly even if it would otherwise do #2 or 
#3. It's complicated. :-)


On Thu, 25 Nov 2010, Philip Jägenstedt wrote:
 
 Based on this, unless there are corner-cases I've missed, it seems 
 unlikely that there's a large body of web content that depends on inline 
 javascript: URLs executing. My current plan is to try completely 
 blocking javascript: URLs in the contexts mentioned above. This seems to 
 be the simplest to implement and the fastest way to reach 
 interoperability. The alternative is to start executing javascript: URLs 
 in more contexts, which, even if sandboxed, doesn't seem particularly 
 useful.

There's a minor body of work on the Web that is based on using javascript: 
URLs to generate bitmaps, and I don't really see any harm with this.


 I'll keep you posted if there are any compatibility issues that come up 
 with this. Assuming (boldly) there is not, would there be support from 
 other browsers to move in this direction and change the spec to match? 

What the spec currently specs seems to be a reasonable compromise between 
security, compatibility needs based on what browsers do today, use cases, 
and consistency across the platform (usability), in that order.

Obviously if browsers implement something different, then I'll happily 
move the spec to match, but it would be sad to just close off these 
features without good reason.


On Tue, 30 Nov 2010, Boris Zbarsky wrote:
 
  At least in Gecko, the return value string is examined to see whether 
  all the charcode values are < 255.  If they are, then the string is 
  converted to a byte array by just dropping the high byte of every char.  
  So you can pretty easily generate image data this way.

  If any of the bytes are > 255, then the string is encoded as UTF-8 
  instead.

Hm. This currently isn't specced; the spec just assumes the return value 
is text/html string data and doesn't say what encoding to use. Is there a 
good way to test this in the context of an <iframe>, where all the 
browsers do something with javascript:?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] Javascript: URLs as element attributes

2011-02-09 Thread Boris Zbarsky

On 2/9/11 10:12 PM, Ian Hickson wrote:

On Mon, 15 Nov 2010, Boris Zbarsky wrote:

On 11/15/10 8:15 PM, Ian Hickson wrote:

Gecko's currently-intended behavior is to do what [the spec]
describes in all cases except:

<iframe src="javascript:">
<object data="javascript:">
<embed src="javascript:">
<applet code="javascript:">


What does it do for those cases if it doesn't match the spec?


For <iframe> the behavior in Gecko currently is different in terms of
what the URI of the result document of javascript: is set to.


How does it differ? As far as I can tell, it works the same as the spec
says (the document.location is about:blank in the example above).


The example above doesn't actually return a document from the 
javascript: URI; it was a shorthand for a generic javascript: URI that 
does do that.


Try this:

  data:text/html,<body onload="alert(window[0].location)"><iframe 
src="javascript:''">



Note that there is some confusion here in terms of browsing contexts and
<object>, since <object> does expose a Document object sometimes (but not
others) and does participate in session history sometimes, I believe...  So
I'm not quite sure what behavior the spec calls for for <object>.


It's defined; see the section on the <object> element.


I've read that section, in fact.  I couldn't make sense of what behavior 
it actually called for.  Has it changed recently (last few months) to 
become clearer such that rereading would be worthwhile?



At least in Gecko, the return value string is examined to see whether
all the charcode values are < 255.  If they are, then the string is
converted to a byte array by just dropping the high byte of every char.
So you can pretty easily generate image data this way.

If any of the bytes are > 255, then the string is encoded as UTF-8
instead.


Hm. This currently isn't specced; the spec just assumes the return value
is text/html string data and doesn't say what encoding to use. Is there a
good way to test this in the context of an <iframe>, where all the
browsers do something with javascript:?


<body onload="alert(window[0].document.characterSet)"><iframe 
src="javascript:'\u0400'">


(can't be a data: URI in webkit, for what it's worth; seems to fail 
same-origin checks).


If I load that from file://, it alerts "UTF-8" in Gecko, "ISO-8859-1" in the 
WebKit-based browsers I have here, and an empty string in Opera 11 (?).


You could also do things like generate a document that links to a 
stylesheet with no encoding information and see what encoding the sheet 
is treated as.


If the question was whether it's possible to tell by black-box testing 
what the return string is actually treated as, not just what 
characterSet the resulting document reports, I'd have to do some more 
thinking.


-Boris