Re: [whatwg] Script-related feedback

2013-07-24 Thread Ian Hickson
On Wed, 9 Jan 2013, Anne van Kesteren wrote:
 On Wed, Jan 9, 2013 at 9:32 PM, Ian Hickson i...@hixie.ch wrote:
  Advantages of putting this in JS over multipart:
 
   - it's backwards-compatible
   - it's easier to parse a static barrier than multipart/*'s wacky
 syntax.
   - it doesn't impact any of the current fetching logic, since it's
 still just one resource instead of introducing a layer in between
 script's logic and the JS logic.
   - it automatically works anywhere you can use JS, not just where HTTP is
 involved.
   - it can be shimmed more easily (if you trust the JS not to have
 arbitrary injection and be written with the shim in mind, especially).
   - it doesn't run into weird problems like what if a part has the wrong
 MIME type.
   - it's way easier to deploy (authors hate having to set MIME types).
   - it doesn't run into the problem that all UAs have historically ignored
 the MIME type of script.
 
 Adding magic meaning to certain JavaScript comments seems like a pretty 
 big downside though. Furthermore, multipart logic, however weird, is a 
 sunk cost both on consumer and producer side, whereas introducing 
 /*@BREAK*/ seems like a very steep uphill battle. And actually img is 
 a precedent for checking a MIME type before sniffing/executing and it 
 hasn't been much of a problem. (The problems there were mostly figuring 
 out how SVG should work.)

Yeah, but the multipart logic has pretty big disadvantages -- mainly the 
opposite of the advantages for a built-in feature:

 - not backwards compatible
 - not as simple to understand, use, implement, or spec
 - doesn't really work outside HTTP
 - harder to shim
 - more edge cases to define (e.g. what if the MIME types of the parts 
   change unexpectedly)
 - requires setting MIME types, which authors hate

I think JavaScript would be the logical place to support this. We don't 
use multipart/* logic to do incremental rendering of HTML, we don't use it 
for incremental rendering of images (only for animating them), why would 
we use it for incremental execution of script? I think scripts, just like 
image formats, HTML, XML, etc, should have built-in support for 
incremental processing.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Script-related feedback

2013-07-24 Thread Anne van Kesteren
On Wed, Jul 24, 2013 at 11:18 AM, Ian Hickson i...@hixie.ch wrote:
 Yeah, but the multipart logic has pretty big disadvantages -- mainly the
 opposite of the advantages for a built-in feature:

  - not backwards compatible
  - not as simple to understand, use, implement, or spec
  - doesn't really work outside HTTP
  - harder to shim
  - more edge cases to define (e.g. what if the MIME types of the parts
change unexpectedly)
  - requires setting MIME types, which authors hate

 I think JavaScript would be the logical place to support this. We don't
 use multipart/* logic to do incremental rendering of HTML, we don't use it
 for incremental rendering of images (only for animating them), why would
 we use it for incremental execution of script? I think scripts, just like
 image formats, HTML, XML, etc, should have built-in support for
 incremental processing.

Given module loaders, new features to control script loading from HTML
(in a parallel thread), and the lack of interest from implementers in this
feature since it was last discussed, we should probably hold off on
this.


-- 
http://annevankesteren.nl/


Re: [whatwg] Script-related feedback

2013-01-09 Thread Adam Barth
On Mon, Jan 7, 2013 at 7:51 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 7 Jan 2013, Adam Barth wrote:
  Why not just introduce a keyword or pragma to JavaScript that tells
  the user agent to act as if the end of the Program production had been
  reached, and that it should treat the remainder of the file as another
  Program?
 
  This could even be done in a backwards-compatible fashion by having
  the syntax to do this be something that down-level clients ignore,
  e.g.:
 
 /*@BREAK*/
 
  ...or some such.

 That approach is an in-band signal, which means it's vulnerable to
 injection attacks.

 If you can inject this, you can inject arbitrary code, so I don't see how
 this would be a problem.

 For example, consider a server that produces a JavaScript file of the
 following form:

 [...]
 var userData = "<?php echo sanitize($userData) ?>";
 [...]

 Currently, the rules for sanitizing user input are relatively
 straightforward (essentially, you just need to worry about a few special
 characters).

 Those simple rules would prevent anyone from inserting a pragma-like
 comment, too, so that's fine.

 However, if we implemented in-band signaling we might well break
 these sanitization algorithms.

 How? I'm not suggesting changing any JS syntax, just making existing JS
 syntax be used as a signal.

 If making a comment do this is too dodgy, make it something like this:

breakParsing();

 ...and for down-level support, define an explicit breakParsing function
 that does nothing. If someone can insert a function call into JS, you've
 definitely lost already.

Working through some examples, that seems really strange:

foo();
breakParsing();
bar();

In this case, breakParsing() works a bit like yield() in other
programming languages: first foo() executes, then the event loop
spins, then bar() executes.  However, if we wrap the code in an
anonymous function block (as would make sense for JavaScript):

(function() {
  foo();
  breakParsing();
  bar();
})();

Now I either get a parse error, if breakParsing() actually breaks
up the parsing, or breakParsing() does nothing, both of which are
surprising.  Worse, other seemingly trivial syntactic transformations
also break the magic:

foo();
breakParsing.call();
bar();

Now the JavaScript parser won't recognize the magic breakParsing();
production, and my script executes slowly.

I guess I don't understand the advantage of trying to cram this into
JavaScript syntax.  It's really got nothing to do with JavaScript as a
language and everything to do with providing an efficient way for web
sites to ask the browser to execute several JavaScript programs in
sequence.

HTTP already has an efficient mechanism for delivering several
JavaScript programs in sequence: multipart.  Given that img and
iframe already support multipart, it seems much simpler just to make
script support multipart as well.
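
As a rough sketch (not from the thread), a server that streams a script as
multipart/mixed chunks might look something like this in Node.js; the
boundary string, port, and chunk contents are invented for illustration,
and the consuming side (a script element that executes each part as it
arrives) is only the proposal under discussion:

  var http = require('http');

  http.createServer(function (req, res) {
    var boundary = 'scriptchunk';
    res.writeHead(200, {
      'Content-Type': 'multipart/mixed; boundary=' + boundary
    });

    var chunks = [
      'console.log("chunk 1: run as soon as it arrives");',
      'console.log("chunk 2: runs after chunk 1, while chunk 3 may still be in flight");',
      'console.log("chunk 3: lowest priority");'
    ];

    // Emit each part with a small delay to mimic incremental generation.
    chunks.forEach(function (body, i) {
      setTimeout(function () {
        res.write('--' + boundary + '\r\n' +
                  'Content-Type: application/javascript\r\n\r\n' +
                  body + '\r\n');
        if (i === chunks.length - 1) {
          res.end('--' + boundary + '--\r\n');   // closing boundary ends the response
        }
      }, i * 250);
    });
  }).listen(8080);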

Adam


Re: [whatwg] Script-related feedback

2013-01-09 Thread Ian Hickson
On Wed, 9 Jan 2013, Adam Barth wrote:
 
 Working through some examples, that seems really strange:
 
 foo();
 breakParsing();
 bar();
 
 In this case, breakParsing() works a bit like yield() in other
 programming languages: first foo() executes, then the event loop
 spins, then bar() executes.  However, if we wrap the code in an
 anonymous function block (as would make sense for JavaScript):
 
 (function() {
   foo();
   breakParsing();
   bar();
 })();
 
 Now I either get a parse error, if breakParsing() actually breaks up 
 the parsing, or breakParsing() does nothing, both of which are 
 surprising.

That's why I originally proposed it as a pragma comment (which I'm pretty 
sure would be just as safe, because anything that stops someone from 
escaping a string injection in any way will stop both identifiers and 
comments, and it seems highly unlikely that someone would go out of their 
way to let you escape a string literal and be allowed to inject a comment 
but not be allowed to inject a division or method call or whatnot).
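
To make the escaping argument concrete, here is a minimal sketch of a
typical string-literal sanitizer (the function name and exact escape set
are illustrative, not from the thread); with it in place, the pragma text
can only ever appear as inert string data:

  // Escaping backslashes, quotes and newlines keeps user input inside the
  // literal, so a "/*@BREAK*/" in the input never becomes a comment token.
  function sanitizeForJsString(input) {
    return String(input)
      .replace(/\\/g, '\\\\')   // escape backslashes first
      .replace(/"/g, '\\"')     // keep the closing quote intact
      .replace(/\r/g, '\\r')
      .replace(/\n/g, '\\n')
      .replace(/<\//g, '<\\/'); // also guards against a "</script>" breakout
  }

  // What the server would emit for malicious input:
  //   var userData = "x\"; /*@BREAK*/ evil(); //";
  // The pragma text sits inside a string literal, not a comment.
  console.log('var userData = "' + sanitizeForJsString('x"; /*@BREAK*/ evil(); //') + '";');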


 Worse, other seemingly trivial syntactic transformations also break the 
 magic:
 
 foo();
 breakParsing.call();
 bar();
 
 Now the JavaScript parser won't recognize the magic breakParsing(); 
 production, and my script executes slowly.

So let's not use something that looks like a method call -- I agree that 
isn't ergonomically or aesthetically pleasing.

/*@BREAK*/
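
Purely as an illustration of the proposal (the pragma was never
implemented), a chunked script might look like this; a supporting UA would
compile and run each chunk as a separate Program while later chunks are
still arriving, and a down-level UA would see only ordinary comments and
run the whole file as one Program:

  console.log('chunk 1: core setup that should run as soon as it arrives');

  /*@BREAK*/

  console.log('chunk 2: secondary features, parsed as a separate Program');

  /*@BREAK*/

  console.log('chunk 3: analytics and other low-priority work');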


 I guess I don't understand the advantage of trying to cram this into 
 JavaScript syntax.

Advantages of putting this in JS over multipart:

 - it's backwards-compatible
 - it's easier to parse a static barrier than multipart/*'s wacky 
   syntax.
 - it doesn't impact any of the current fetching logic, since it's 
   still just one resource instead of introducing a layer in between 
   script's logic and the JS logic.
 - it automatically works anywhere you can use JS, not just where HTTP is 
   involved.
 - it can be shimmed more easily (if you trust the JS not to have 
   arbitrary injection and be written with the shim in mind, especially).
 - it doesn't run into weird problems like what if a part has the wrong 
   MIME type.
 - it's way easier to deploy (authors hate having to set MIME types).
 - it doesn't run into the problem that all UAs have historically ignored 
   the MIME type of script.


 HTTP already has an efficient mechanism for delivering several 
 JavaScript programs in sequence: multipart.

"Efficient" isn't the word I would have used.


 Given that img and iframe already support multipart, it seems much 
 simpler just to make script support multipart as well.

Given how much pain multipart was to handle in img and iframe, 
avoiding it like the plague seems like the more appropriate lesson. :-)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Script-related feedback

2013-01-07 Thread Adam Barth
On Wed, Dec 19, 2012 at 2:27 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 3 Dec 2012, Adam Barth wrote:
 Currently, there are a number of ways to load a script from the network
 and execute it, but none of them will actually load and execute the
 script as fast as physically possible.  Consider the following markup:

 <script async src="path/to/script.js"></script>

 In this case, the user agent will wait until it receives the last byte
 of script.js from the network before executing the first byte of
 script.js.

 It had better, since JavaScript requires that syntax errors in the last
 byte prevent execution of the first byte.

 The main ingredient that we're missing is a way for the author to signal
 to the user agent which chunks of scripts are safe to execute in
 parallel with loading subsequent chunks from the network. Fortunately,
 the web platform already has a mechanism for breaking a single HTTP
 response body into chunks that are processed sequentially:
 multipart/mixed.

 For example, if an HTTP server provides a multipart/mixed response to a
 request for an image, the img element will display each part of the
 response in sequence, animating the image.  Similarly, if an HTTP server
 provides a multipart/mixed response to a request for an HTML document,
 the user agent will display each part of the response sequentially.

 One way to address this use case is to add multipart/mixed support to
 the script element.  Upon receiving a multipart/mixed response to a
 request for a script, the script element must execute each part of the
 response as they become available.  This behavior appears to be
 consistent with the definition of multipart/mixed
 http://tools.ietf.org/html/rfc2046#section-5.1.3.

 To load and execute a script as quickly as possible, the author would
 use the following markup:

 <script async src="path/to/script.js"></script>

 The HTTP server would then break script.js into chunks that are safe to
 execute sequentially and provide each chunk as a separate MIME part in a
 multipart/mixed response.

 This seems like an overly complicated way of solving this problem.

 Why not just introduce a keyword or pragma to JavaScript that tells the
 user agent to act as if the end of the Program production had been
 reached, and that it should treat the remainder of the file as another
 Program?

 This could even be done in a backwards-compatible fashion by having the
 syntax to do this be something that down-level clients ignore, e.g.:

/*@BREAK*/

 ...or some such.

That approach is an in-band signal, which means it's vulnerable to
injection attacks.  For example, consider a server that produces a
JavaScript file of the following form:

[...]
var userData = "<?php echo sanitize($userData) ?>";
[...]

Currently, the rules for sanitizing user input are relatively
straightforward (essentially, you just need to worry about a few
special characters).  However, if we implemented in-band signaling
we might well break these sanitization algorithms.

To make this secure, we'd probably want some sort of randomized
delimiter (perhaps declared via a pragma at the top of the file), but
then we would have just re-invented multipart/mixed.
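
A purely hypothetical sketch of that randomized-delimiter idea, just to
show why it converges on multipart boundaries (nothing here is an
implemented feature; the boundary value is made up):

  /*@CHUNKED boundary=7f3a91c2d8*/
  console.log('chunk 1');
  /*@7f3a91c2d8*/
  console.log('chunk 2');
  // An attacker who cannot read the response cannot guess the boundary,
  // which is exactly the property multipart/mixed boundaries already give us.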

Adam


Re: [whatwg] Script-related feedback

2013-01-07 Thread Ian Hickson
On Mon, 7 Jan 2013, Adam Barth wrote:
 
  Why not just introduce a keyword or pragma to JavaScript that tells 
  the user agent to act as if the end of the Program production had been 
  reached, and that it should treat the remainder of the file as another 
  Program?
 
  This could even be done in a backwards-compatible fashion by having 
  the syntax to do this be something that down-level clients ignore, 
  e.g.:
 
 /*@BREAK*/
 
  ...or some such.
 
 That approach is an in-band signal, which means it's vulnerable to 
 injection attacks.

If you can inject this, you can inject arbitrary code, so I don't see how 
this would be a problem.


 For example, consider a server that produces a JavaScript file of the 
 following form:
 
 [...]
 var userData = "<?php echo sanitize($userData) ?>";
 [...]
 
 Currently, the rules for sanitizing user input are relatively 
 straightforward (essentially, you just need to worry about a few special 
 characters).

Those simple rules would prevent anyone from inserting a pragma-like 
comment, too, so that's fine.


 However, if we implemented in-band signaling we might well break 
 these sanitization algorithms.

How? I'm not suggesting changing any JS syntax, just making existing JS 
syntax be used as a signal.

If making a comment do this is too dodgy, make it something like this:

   breakParsing();

...and for down-level support, define an explicit breakParsing function 
that does nothing. If someone can insert a function call into JS, you've 
definitely lost already.
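
A minimal sketch of that down-level story (breakParsing is only the
proposal under discussion, not an existing API; a browser context is
assumed):

  if (typeof breakParsing !== 'function') {
    window.breakParsing = function () { /* no-op where the signal is not understood */ };
  }

  console.log('first chunk');
  breakParsing();              // supporting UA: acts as a parse barrier between Programs
  console.log('second chunk'); // down-level UA: the call above was just a harmless no-op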

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Script-related feedback

2013-01-07 Thread Glenn Maynard
On Mon, Jan 7, 2013 at 7:20 PM, Adam Barth w...@adambarth.com wrote:

   This could even be done in a backwards-compatible fashion by having the
   syntax to do this be something that down-level clients ignore, e.g.:

  /*@BREAK*/

   ...or some such.

 That approach is an in-band signal, which means it's vulnerable to
 injection attacks.  For example, consider a server that produces a
 JavaScript file of the following form:

 [...]
 var userData = "<?php echo sanitize($userData) ?>";
 [...]

 Currently, the rules for sanitizing user input are relatively
 straightforward (essentially, you just need to worry about a few
 special characters).  However, if we implemented in-band signaling
 we might well break these sanitization algorithms.

 To make this secure, we'd probably want some sort of randomized
 delimiter (perhaps declared via a pragma at the top of the file), but
 then we would have just re-invented multipart/mixed.


The suggestion was the comment /*@BREAK*/, which the string literal
"/*@BREAK*/" wouldn't match, being a string token, not a comment, right?
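
In miniature (again, /*@BREAK*/ is only the proposed pragma):

  var s = "/*@BREAK*/";      // a ten-character string literal, not a comment
  /*@BREAK*/                 // an actual comment token -- the only form that could act as the pragma
  console.log(s, s.length);  // "/*@BREAK*/" 10 -- the text survives untouched inside the string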

-- 
Glenn Maynard


Re: [whatwg] Script-related feedback

2010-07-25 Thread Diego Perini
On Sun, Jul 25, 2010 at 3:25 AM, Steve Souders wha...@souders.org wrote:

  Defer doesn't achieve the desired behavior. The goal is "load this
 script after everything else in the page is done." Instead, defer'ed scripts
 get loaded immediately, thus stealing one of the few network connections
 from other (more important) resources.


If I recall correctly, defer was meant to relate to the script's execution,
not its loading, which is a totally different task.

I am not sure what I tried with Cuzillion is the correct way of testing
this specifically, but adding an in-line script block with a 2-second
execution time at the end of the body also delays the deferred script by 2
seconds. To me this means the defer attribute defers execution until all
other in-line scripts have finished running. Which, in turn, has nothing to
do with information about the loading process and its timing.


 Here's an example:

 http://stevesouders.com/cuzillion/?c0=hj1hfft0_0_fc1=bi1hfff2_0_fc2=bi1hfff2_0_fc3=bi1hfff2_0_fc4=bi1hfff2_0_fc5=bi1hfff2_0_fc6=bi1hfff2_0_fc7=bi1hfff2_0_fc8=bi1hfff2_0_ft=1280020727443

 Notice that although the script is defer it gets loaded before the
 images. (The load time of the script is displayed at the bottom and is
 typically 200-500ms. However, each image takes 2000ms to download. If the
 script was truly deferred it should be downloaded some time > 2000ms. Since
 it's loaded earlier, it steals a network connection and causes one of the
 images to download later.)

 -Steve



To effectively defer loading of scripts (lazy loading) one should use the
onload event; that at least ensures cross-browser functionality.
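
A minimal sketch of that lazy-loading pattern (the script URL is
hypothetical, and older IE would need attachEvent rather than
addEventListener):

  // Start the download only after window.onload has fired, so it cannot
  // compete with images or other resources for a connection.
  window.addEventListener('load', function () {
    var s = document.createElement('script');
    s.src = '/js/analytics.js';   // hypothetical script URL
    s.async = true;
    document.getElementsByTagName('head')[0].appendChild(s);
  }, false);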


Diego Perini



 On 7/23/2010 1:27 PM, Ian Hickson wrote:
  On Wed, 17 Mar 2010, Steve Souders wrote:
    Given that it is possible to do this from script, how common is it for
    people to do it from script? If it's very common, that would be a good
    data point encouraging us to do this sooner rather than later.

   6 of the top 10 US web sites load scripts after the load event: eBay,
   Facebook, Bing, MSN.com, MySpace, and Yahoo.

  Do we know why they do this rather than use defer=, and whether
  defer= would handle their use cases?

Re: [whatwg] Script-related feedback

2010-07-24 Thread Diego Perini
For my part, I never visit any of the six sites mentioned.

Though I know I am not representative in these numbers, I believe I recently
saw tests showing that defer currently has different implementations across
browsers, and different behavior depending on the script insertion point and,
again, depending on the browser.

Should we really trust defer= (at least in most recent browsers)?
Where is it more reliable cross-browser, in the head or the body section?


Diego Perini


On Fri, Jul 23, 2010 at 10:27 PM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 17 Mar 2010, Steve Souders wrote:
  
   Given that it is possible to do this from script, how common is it for
   people to do it from script? If it's very common, that would be a good
   data point encouraging us to do this sooner rather than later.
 
  6 of the top 10 US web sites load scripts after the load event: eBay,
  Facebook, Bing, MSN.com, MySpace, and Yahoo.

 Do we know why they do this rather than use defer=, and whether
  defer= would handle their use cases?

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] Script-related feedback

2010-07-24 Thread Steve Souders
Defer doesn't achieve the desired behavior. The goal is "load this 
script after everything else in the page is done." Instead, defer'ed 
scripts get loaded immediately, thus stealing one of the few network 
connections from other (more important) resources.


Here's an example:
http://stevesouders.com/cuzillion/?c0=hj1hfft0_0_fc1=bi1hfff2_0_fc2=bi1hfff2_0_fc3=bi1hfff2_0_fc4=bi1hfff2_0_fc5=bi1hfff2_0_fc6=bi1hfff2_0_fc7=bi1hfff2_0_fc8=bi1hfff2_0_ft=1280020727443


Notice that although the script is defer it gets loaded before the 
images. (The load time of the script is displayed at the bottom and is 
typically 200-500ms. However, each image takes 2000ms to download. If 
the script was truly deferred it should be downloaded some time > 
2000ms. Since it's loaded earlier, it steals a network connection and 
causes one of the images to download later.)


-Steve

On 7/23/2010 1:27 PM, Ian Hickson wrote:
 On Wed, 17 Mar 2010, Steve Souders wrote:
   Given that it is possible to do this from script, how common is it for
   people to do it from script? If it's very common, that would be a good
   data point encouraging us to do this sooner rather than later.

  6 of the top 10 US web sites load scripts after the load event: eBay,
  Facebook, Bing, MSN.com, MySpace, and Yahoo.

 Do we know why they do this rather than use defer=, and whether
 defer= would handle their use cases?


Re: [whatwg] Script-related feedback

2010-07-23 Thread Ian Hickson
On Wed, 17 Mar 2010, Steve Souders wrote:
 
  Given that it is possible to do this from script, how common is it for 
  people to do it from script? If it's very common, that would be a good 
  data point encouraging us to do this sooner rather than later.
 
 6 of the top 10 US web sites load scripts after the load event: eBay, 
 Facebook, Bing, MSN.com, MySpace, and Yahoo.

Do we know why they do this rather than use defer=, and whether 
defer= would handle their use cases?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Script-related feedback

2010-03-17 Thread Steve Souders

 Given that it is possible to do this from script, how common is it for
 people to do it from script? If it's very common, that would be a good
 data point encouraging us to do this sooner rather than later.


6 of the top 10 US web sites load scripts after the load event: eBay, 
Facebook, Bing, MSN.com, MySpace, and Yahoo.


-Steve


On 3/16/2010 5:05 PM, Ian Hickson wrote:
 On Tue, 3 Nov 2009, Brian Kuhn wrote:
  In section
  http://www.whatwg.org/specs/web-apps/current-work/#attr-script-async, it
  says:

  *Fetching an external script must delay the load event of the element's
  document until the task that is queued by the networking task source
  once the resource has been fetched (defined above) has been run.*

  Has any thought been put into changing this for async scripts?  It seems
  like it might be worthwhile to allow window.onload to fire while an
  async script is still downloading if everything else is done.

 On Fri, 6 Nov 2009, Brian Kuhn wrote:
  It seems to me that the purpose of async scripts is to get out of the
  way of user-visible functionality.  Many sites currently attach
  user-visible functionality to window.onload, so it would be great if
  async scripts at least had a way to not block that event.  It would help
  minimize the effect that secondary functionality like ads and web
  analytics has on the user experience.

 On Wed, 10 Feb 2010, Jonas Sicking wrote:
  I'm concerned that this is too big of a departure from how people are
  used to scripts behaving.

  If we do want to do something like this, one possibility would be to
  create a generic attribute that can go on things like img, link
  rel=stylesheet, script etc that make the resource not block the
  'load' event.

 On Thu, 11 Feb 2010, Steve Souders wrote:
  I just sent email last week proposing a POSTONLOAD attribute for
  scripts.

 On Thu, 11 Feb 2010, Jonas Sicking wrote:
  Though what we want here is a DONTDELAYLOAD attribute. I.e. we want
  load to start asap, but we don't want the load to hold up the load
  event if all other resources finish loading before this one.

 On Fri, 12 Feb 2010, Brian Kuhn wrote:
  Right.  Async scripts aren't really asynchronous if they block all the
  user-visible functionality that sites currently tie to window.onload.

  I don't know if we need another attribute, or if we just need to change
  the behavior for all async scripts.  But I think the best time to fix
  this is now; before too many UAs implement async.

 On Fri, 12 Feb 2010, Nicholas Zakas wrote:
  To me asynchronous fundamentally means doesn't block other things
  from happening, so if async currently does block the load event from
  firing then that seems very wrong to me.

 On Fri, 12 Feb 2010, Steve Souders wrote:
  ASYNC should not block the onload event. Thinking of the places where
  ASYNC will be used, they would not want onload to be blocked.

 On Sat, 13 Feb 2010, Darin Fisher wrote:
  I don't know... to me, asynchronous means completes later.
  Precedence: XMLHttpRequest.

 On Sat, 13 Feb 2010, Boris Zbarsky wrote:
  [...] my real worry about making any loads that don't block onload:
  would web developers expect them to?

 On Sat, 13 Feb 2010, Brian Kuhn wrote:
  FWIW, loading scripts asynchronously with the Script DOM Element
  approach does not block window.onload in IE.  In Chrome and Safari, the
  downloading blocks, but execution doesn't.  In Firefox and Opera,
  downloading and execution block.

  So, it's pretty hard to say what web developers would expect with async
  scripts.  I know that they will like having things like ads and
  analytics not block window.onload though.  At the very least, we need
  that ability to make that happen.

 On Sat, 13 Feb 2010, Jonas Sicking wrote:
  Yeah, my big concern is what developers expect. Having an explicit
  attribute for not blocking onload definitely follows the path of least
  surprise. Though having an explicit attribute does give Steve more
  things to evangelize, i.e. it'll probably lead to more pages firing
  onload later than they could.

 On Sat, 13 Feb 2010, Darin Fisher wrote:
  The thing is, almost all subresources load asynchronously.  The load
  event exists to tell us when those asynchronous loads have finished.
  So, I think it follows that an asynchronous resource load may reasonably
  block the load event.  (That's the point of the load event after all!)

 I've changed the spec to fire 'DOMContentLoaded' without waiting for the
 async scripts, so that if you need this you can just listen for that event
 instead of 'load'. 'load' still waits for all scripts. 'DOMContentLoaded'
 still waits for deferred scripts. As far as I can tell this handles all
 the above (still makes sense, still consistent with the way other 'load'
 events work, but still lets you do things without waiting).

 On Wed, 30 Dec 2009, David Bruant wrote:
  The 6.8.1 Client identification starts with an explanation dealing
  with browser-specific bugs and 

[whatwg] Script-related feedback

2010-03-16 Thread Ian Hickson
On Tue, 3 Nov 2009, Brian Kuhn wrote:

 In section 
 http://www.whatwg.org/specs/web-apps/current-work/#attr-script-async, it 
 says:
 
 *Fetching an external script must delay the load event of the element's 
 document until the task that is queued by the networking task source 
 once the resource has been fetched (defined above) has been run.*
 
 Has any thought been put into changing this for async scripts?  It seems 
 like it might be worthwhile to allow window.onload to fire while an 
 async script is still downloading if everything else is done.

On Fri, 6 Nov 2009, Brian Kuhn wrote:
 
 It seems to me that the purpose of async scripts is to get out of the 
 way of user-visible functionality.  Many sites currently attach 
 user-visible functionality to window.onload, so it would be great if 
 async scripts at least had a way to not block that event.  It would help 
 minimize the effect that secondary functionality like ads and web 
 analytics has on the user experience.

On Wed, 10 Feb 2010, Jonas Sicking wrote:
 
 I'm concerned that this is too big of a departure from how people are 
 used to scripts behaving.
 
 If we do want to do something like this, one possibility would be to 
 create a generic attribute that can go on things like img, link 
 rel=stylesheet, script etc that make the resource not block the 
 'load' event.

On Thu, 11 Feb 2010, Steve Souders wrote:

 I just sent email last week proposing a POSTONLOAD attribute for 
 scripts.

On Thu, 11 Feb 2010, Jonas Sicking wrote:

 Though what we want here is a DONTDELAYLOAD attribute. I.e. we want
 load to start asap, but we don't want the load to hold up the load
 event if all other resources finish loading before this one.

On Fri, 12 Feb 2010, Brian Kuhn wrote:

 Right.  Async scripts aren't really asynchronous if they block all the 
 user-visible functionality that sites currently tie to window.onload.
 
 I don't know if we need another attribute, or if we just need to change 
 the behavior for all async scripts.  But I think the best time to fix 
 this is now; before too many UAs implement async.

On Fri, 12 Feb 2010, Nicholas Zakas wrote:

 To me asynchronous fundamentally means doesn't block other things 
 from happening, so if async currently does block the load event from 
 firing then that seems very wrong to me.

On Fri, 12 Feb 2010, Steve Souders wrote:

 ASYNC should not block the onload event. Thinking of the places where 
 ASYNC will be used, they would not want onload to be blocked.

On Sat, 13 Feb 2010, Darin Fisher wrote:

 I don't know... to me, asynchronous means completes later.  
 Precedence: XMLHttpRequest.

On Sat, 13 Feb 2010, Boris Zbarsky wrote:

 [...] my real worry about making any loads that don't block onload: 
 would web developers expect them to?

On Sat, 13 Feb 2010, Brian Kuhn wrote:

 FWIW, loading scripts asynchronously with the Script DOM Element 
 approach does not block window.onload in IE.  In Chrome and Safari, the 
 downloading blocks, but execution doesn't.  In Firefox and Opera, 
 downloading and execution block.
 
 So, it's pretty hard to say what web developers would expect with async 
 scripts.  I know that they will like having things like ads and 
 analytics not block window.onload though.  At the very least, we need 
 that ability to make that happen.

On Sat, 13 Feb 2010, Jonas Sicking wrote:
 
 Yeah, my big concern is what developers expect. Having an explicit 
 attribute for not blocking onload definitely follows the path of least 
 surprise. Though having an explicit attribute does give Steve more 
 things to evangelize, i.e. it'll probably lead to more pages firing 
 onload later than they could.

On Sat, 13 Feb 2010, Darin Fisher wrote:

 The thing is, almost all subresources load asynchronously.  The load 
 event exists to tell us when those asynchronous loads have finished.  
 So, I think it follows that an asynchronous resource load may reasonably 
 block the load event.  (That's the point of the load event after all!)

I've changed the spec to fire 'DOMContentLoaded' without waiting for the 
async scripts, so that if you need this you can just listen for that event 
instead of 'load'. 'load' still waits for all scripts. 'DOMContentLoaded' 
still waits for deferred scripts. As far as I can tell this handles all 
the above (still makes sense, still consistent with the way other 'load' 
events work, but still lets you do things without waiting).
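
A small sketch of the resulting split (nothing here beyond the standard
events):

  // Work that must not wait for async scripts can hang off DOMContentLoaded;
  // 'load' still waits for every script (async included) and other subresources.
  document.addEventListener('DOMContentLoaded', function () {
    console.log('DOM parsed; async scripts may still be downloading');
  });
  window.addEventListener('load', function () {
    console.log('everything, including async scripts and images, has loaded');
  });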


On Wed, 30 Dec 2009, David Bruant wrote:
 
 The 6.8.1 Client identification starts with an explanation dealing 
 with browser-specific bugs and limitations (browser-specific features 
 are missing, aren't they?) that Web authors are forced to work around.

Browser-specific features should be feature-tested, not version-tested.
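
For example, a minimal feature test looks like this rather than parsing
navigator.userAgent:

  function handleClick() { /* ... */ }

  // Check for the capability itself instead of sniffing the browser version.
  if (document.addEventListener) {
    document.addEventListener('click', handleClick, false);
  } else if (document.attachEvent) {
    document.attachEvent('onclick', handleClick);   // older IE fallback
  }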


 A very interesting project dealing with these browser-specific
 implementations is TestSwarm: http://testswarm.com/
 
 As you may notice, the web browsers are classified this way :
 1) Operating system
 2) 

[whatwg] script-related feedback

2008-12-28 Thread Ian Hickson
On Thu, 21 Aug 2008, Jonas Sicking wrote:
 
 Here is the list of elements that we *don't* execute scripts inside of 
 in firefox:
 
 http://mxr.mozilla.org/mozilla-central/source/content/base/src/nsScriptElement.cpp#148
 
 i.e. iframe, noframes, noembed
 
 Everywhere else we do execute the script.
 
 The reason these elements ended up at the list is in bugs 
 https://bugzilla.mozilla.org/show_bug.cgi?id=5847 
 https://bugzilla.mozilla.org/show_bug.cgi?id=26669

On Thu, 21 Aug 2008, João Eiras wrote:
 
 I kind of agree with iframe and noembed, but noframes? noframes, IMO, 
 is fairly legitimate, because you can have scripts providing fallback, 
 or redirecting to another page.

On Thu, 21 Aug 2008, Jonas Sicking wrote:
 
 Yes, we would presumably run scripts in noframes if we didn't have 
 frame support. There is even a comment in the code that says that we 
 should not check for noscript if we ever add the ability to turn off 
 frame support.

On Fri, 22 Aug 2008, Simon Pieters wrote:
 
 iframe, noframes and noembed are parsed as CDATA elements
 
 http://software.hixie.ch/utilities/js/live-dom-viewer/?%3C!DOCTYPE%20html%3E%0D%0A%3Ciframe%3E%3Cscript%3Ealert(1)%3C%2Fscript%3E%3C%2Fiframe%3E
 
 so there can't be any script elements as children of those in text/html. 
 In Opera and WebKit, the script executes in
 
  data:text/xml,<iframe
  xmlns='http://www.w3.org/1999/xhtml'><script>alert(1)</script></iframe>
 
 and it hasn't caused us any problems AFAIK.

On Thu, 21 Aug 2008, Jonas Sicking wrote:
 
 Looks like firefox doesn't parse the contents of the iframe as markup 
 either, but rather treats it as CDATA. Which makes me wonder why we ever 
 look for iframes in the parent chain :)
 
 I suspect it's just remnants from when things worked differently, the 
 check was put in in 1999 :)
 
 But the effect is that even in XHTML, like the example you're providing 
 above, scripts in iframes don't execute. This was not intentional though 
 given that this code was put in in 1999, before we had xhtml support.

I have gone with the Safari/Opera behavior here rather than the Mozilla 
behavior. This means you can remove that check altogether, which should 
simplify the code a bit. :-)



On Thu, 30 Oct 2008, Keryx Web wrote:
 
 WebKit-based browsers happily try to parse scripts after the following tags:
 
  <script language="javascript1.6">
  <script language="javascript1.7">
 
 Even though neither Safari nor Chrome supports those JavaScript versions. 
 And it is not a matter of bugs, but of lacking implementations.
 
 No browser runs script specified with:
 
  type="text/ecmascript;version=2.0"
  type="application/ecmascript;version=2.0"
  type="text/ecmascript;version=3.0"
  type="application/ecmascript;version=3.0"
 
 A. Should not the spec mandate that a browser must support a certain 
 version of JavaScript if it tries to run it?

It already does require this (though it is defined the other way around, 
in that the requirement to run the script is only reached if the script 
type is supported).


 B. Should the spec mandate that a browser must run a script that it de 
 facto supports, e.g. ecmascript 3 in Firefox?

I don't understand what this means.



On Fri, 12 Dec 2008, Ojan Vafai wrote:
 
  I just went ahead and specced out the 'onbeforeunload' feature that 
  most browsers support today that handles this case.
 
 If we're going for matching what browsers do, there's a number of cases 
 (different in each browser) where the confirm doesn't popup. In Chrome, 
 for example, if the beforeunload handler takes too long, we kill it and 
 navigate away. Similarly, in Firefox, if the beforeunload handler hits 
 the limit for script execution and the user stops the script, the 
 beforeunload handler never fires.

I've added a section that allows these behaviors.


 Not sure what the right language for that is. But developers try to do 
 things like using beforeunload/unload to release locks, make server 
 requests, etc. and it's just not a very reliable thing to do in any 
 browser. It's really just useful for the quick prompt for the user as to 
 things like unsaved changes.

I haven't mentioned this in the spec, but I agree that it should be 
mentioned in the authoring guide.
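
A minimal sketch of the one use that is reliable, the quick unsaved-changes
prompt (hasUnsavedChanges is a hypothetical flag maintained elsewhere, e.g.
set when a form field changes):

  var hasUnsavedChanges = true;

  window.onbeforeunload = function (event) {
    if (!hasUnsavedChanges) return;            // let the navigation proceed silently
    var message = 'You have unsaved changes.';
    event.returnValue = message;               // legacy return channel some browsers use
    return message;                            // most browsers show their own generic prompt
  };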


On Fri, 12 Dec 2008, Martin Atkins wrote:
 
 Could browsers handle confirm() and friends in such a way that they only 
 block the contents of the tab, not the whole browser? In particular, the 
 close tab and close window features, ideally along with things such 
 as Back, Forward and Home should still be available.

It'd have to block any page in the same unit of related browsing contexts, 
but otherwise yes. Chrome does this, mostly. IE8 probably too.


 This does of course create some tricky interactions where onbeforeunload 
 is concerned. If I try to close the browser/tab and the page uses 
 onbeforeunload to create a confirmation prompt, how does this interact 
 with the confirmation prompt only being tab-modal?

The spec now says you can disable scripting at 

Re: [whatwg] script-related feedback

2008-12-28 Thread João Eiras

On , Ian Hickson i...@hixie.ch wrote:

 On Fri, 12 Dec 2008, Martin Atkins wrote:

  Could browsers handle confirm() and friends in such a way that they only
  block the contents of the tab, not the whole browser? In particular, the
  close tab and close window features, ideally along with things such
  as Back, Forward and Home should still be available.

 It'd have to block any page in the same unit of related browsing contexts,
 but otherwise yes. Chrome does this, mostly. IE8 probably too.

That is a browser issue. Chrome and IE8 don't have it because they are
multi-process. Opera never had the issue because processing is always
divided between the UI and webpages.

No spec should go to great lengths specifying how UI should behave and
look, although it can make recommendations.