Re: [whatwg] Making cross-origin <iframe seamless=""> (partly) usable

2012-12-03 Thread Mikko Rantalainen
Ian Hickson, 2012-12-01 04:57 (Europe/Helsinki):
 ...and Adam Barth posted some on the wiki:
 Expandable Advertisement: A publisher wishes to display an advertisement 
 that expands when the user interacts with the advertisement. Today, the 
 common practice is for the advertising network to run script in the 
 publisher's page that receives postMessage instructions to resize the 
 advertisement's iframe, but this requires that the publisher allow the 
 advertisement to run script in its page, potentially compromising the 
 publisher's security.
 
 It seems to me like the best solution is to have a new HTTP header, with 
 the four following values being allowed:
 
Seamless-Options: allow-shrink-wrap
Seamless-Options: allow-styling
Seamless-Options: allow-shrink-wrap allow-styling
Seamless-Options: allow-styling allow-shrink-wrap

Not that I fancy expandable advertisements, but I fail to see how
that is supposed to work with those headers. Basically I think that in
such a case, the host document should be able to specify something like
the following:

(1) I want to embed a seamless untrusted iframe here, and
(2) the iframe should have a maximum size of e.g. 480x240 pixels (or any size
set via CSS max-width/max-height). However, if the user interacts with the
iframe (I guess moving focus inside the iframe is enough), then
max-width and max-height are set to an expanded state (whatever that means).

Is it possible for the host document to detect that the focus is within a
cross-origin iframe? If yes, then all we need is a cross-origin seamless
iframe and a host-document script that increases the max-width and
max-height limitations for the seamless iframe.
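For illustration, a host-document script along these lines might do it. This is
only a rough sketch; it assumes the host window sees a blur event when focus
moves into the frame, and that the frame element has a known id:

   // Expand a seamless ad iframe when focus moves into it, and collapse it
   // again when focus returns to the host document.
   var adFrame = document.getElementById('ad'); // assumed <iframe id="ad" seamless src="...">

   window.addEventListener('blur', function () {
     // The host window loses focus when the user focuses the cross-origin frame.
     if (document.activeElement === adFrame) {
       adFrame.style.maxWidth = '960px';   // the "expanded state"
       adFrame.style.maxHeight = '480px';
     }
   }, false);

   window.addEventListener('focus', function () {
     // Focus came back to the host document: restore the collapsed limits.
     adFrame.style.maxWidth = '480px';
     adFrame.style.maxHeight = '240px';
   }, false);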

Does there need to be any support for expandable seamless iframes without
scripting?

-- 
Mikko



Re: [whatwg] Canvas in Workers

2012-12-03 Thread 社用
On Sat, Dec 1, 2012 at 2:44 AM, Ian Hickson i...@hixie.ch wrote:

 On Fri, 30 Nov 2012, Gregg Tavares (社用) wrote:
 
  on ImageBitmap should zero size canvases just work (and create a 0 sized
  ImageBitmap)?
 
  My personal preference is for APIs that just work with zero sizes so I
  don't have to write lots of special cases for handling zero.
 
  For example [1,2,3].slice(0,0) returns []. It doesn't throw.
  "abc".substring(0,0) returns "", it doesn't throw. fillRect(x, y, 0, 0)
  doesn't throw. etc...
 
  It just makes life a lot easier

 The main reason 0-sized canvases have always thrown in drawImage() is that
 I couldn't work out what you would paint, nor why you'd have a zero-sized
 canvas, and throwing seemed like it'd be the best way to help the author
 figure out where the problem was, rather than just ignoring the call and
 having the author scratch their head about why nothing was happening.

 If there's cases where you would legitimately end up with zero-sized
 canvases that you'd try to draw from, though, I'm happy to change it.


I don't see how zero sized canvases are any different than zero sized
arrays or empty strings. It's not a matter of use case. It's a matter of
not having to write checks everywhere for 0.  If I'm writing some app that
takes a user supplied size (say a photo editing app where the user can
select a rectangle and copy and paste), why do I want to have to check for
zero?

var x = Math.min(x1, x2);
var y = Math.min(y1, y2);
var width = Math.abs(x1 - x2);
var height = Math.abs(y1 - y2);

// Do something with rect defined by x,y,width,height

This seems no different from malloc(0) in C or the other cases I've
mentioned (array size 0 and empty string).  Lots of programming becomes
easier when size = 0 works. Maybe I'm animating

   function draw(time) {
     var scale = Math.sin(time) * 0.5 + 0.5;
     var width = realWidth * scale;
     var height = realHeight * scale;

     // do something with width, height
   }

Why do I want to have to check for zero and special case it?

You could argue that I'd have to check for negative values, but that's still
nicer than checking for 0:

   function draw(time) {
     var scale = Math.sin(time) * 0.5 + 0.5;
     var width = Math.max(0, realWidth * scale);
     var height = Math.max(0, realHeight * scale);

     // do something with width, height
   }

vs

   function draw(time) {
     var scale = Math.sin(time) * 0.5 + 0.5;
     var width = realWidth * scale;
     var height = realHeight * scale;

     if (width <= 0 || height <= 0) {
       // skip this step
     } else {
       // do something with width, height
     }
   }

I'm just making the case that it seems like 0 should always work. That includes
ImageBitmap, Canvas, and ImageData.
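
To make the canvas case concrete, here is the sort of guard I'd rather not have
to write (a minimal sketch assuming today's drawImage() behaviour for
zero-sized canvas sources):

   var src = document.createElement('canvas');
   src.width = 0;    // e.g. the user selected an empty rectangle
   src.height = 0;

   var ctx = document.createElement('canvas').getContext('2d');
   try {
     ctx.drawImage(src, 0, 0);  // throws today because the source canvas is zero-sized
   } catch (e) {
     // the special case this proposal would make unnecessary
   }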




 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



[whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Boris Zbarsky

Consider this testcase:

  var desc = Object.getOwnPropertyDescriptor(HTMLElement.prototype,
                                             "onscroll");
  desc.set.call(document.body, function() { alert(this); });

Is the listener added on the body, or the window?

The relevant parts of the spec are:

1)  onscroll is present on both HTMLElement.prototype and 
HTMLBodyElement.prototype.  This testcase explicitly invokes the setter 
for the former.


2)  The spec text at 
http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#handler-onscroll 
(scroll up; there is no way to link to the actual text) says:


  The following are the event handlers (and their corresponding event
  handler event types) that must be supported by all HTML elements
  other than body and frameset, as both content attributes and IDL
  attributes, and on Document objects, as IDL attributes

It's not clear to me what this means since the properties are on 
HTMLElement.prototype so they can be applied to all HTML elements.  What 
does this text mean in terms of the testcase above?


Basically, I can see three possible behaviors here.  Either the 
HTMLElement.prototype.onscroll setter behaves the same way on all 
elements (and hence the above adds the event handler on the body) or it 
behaves specially for the body element, forwarding to the window (and 
then we don't need HTMLBodyElement.prototype.onscroll), or it throws for 
the body element.  Which one is intended?


-Boris


Re: [whatwg] Making cross-origin <iframe seamless=""> (partly) usable

2012-12-03 Thread Adam Barth
On Fri, Nov 30, 2012 at 6:57 PM, Ian Hickson i...@hixie.ch wrote:
 On Sat, 26 May 2012, Adam Barth wrote:

 [CSP]

 CSP doesn't seem to include any features that would let you limit who is
 allowed to iframe you, so I don't think CSP as designed today provides a
 solution for the per-origin part. Could it be extended?

The current plan is for X-Frame-Options to become a CSP directive.
CSP is quite extensible.  The only hard restrictions are that new
directives conform to this grammar (yes, error handling for parsing is
also defined):

directive       = *WSP [ directive-name [ WSP directive-value ] ]
directive-name  = 1*( ALPHA / DIGIT / "-" )
directive-value = *( WSP / <VCHAR except ";" and ","> )
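
For instance, a hypothetical seamless-related directive would fit this grammar
(the name and value below are made up purely for illustration):

    allow-seamless allow-shrink-wrap allow-styling

i.e. a directive-name of allow-seamless followed by a directive-value of
allow-shrink-wrap allow-styling.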

Philosophically, the current plan is to use CSP for things that might
be called content restrictions, i.e., for restricting what a
document might otherwise be able to do.  As I wrote on the wiki [1],
it's not a great match for this use case because here we're loosening
restrictions rather than tightening them.  Of course, this philosophy
might evolve over time, so I wouldn't necessarily treat it as a
hard-and-fast rule.

 [X-Frame-Options]

 This doesn't let you choose on a per-origin basis whether you can be framed
 either (since you don't get an Origin header in the request, and the X-F-O
 header only gives a thumbs-up or thumbs-down in general).

 I'm dubious about extending X-F-O since it lacks a spec and so how exactly
 to change it in a backwards-compatible way is unclear and getting it
 wrong would be very dodgy.


 On Thu, 12 Apr 2012, Ojan Vafai wrote:

 we could add a special http header and/or meta tag for this, like
 x-frame-options, but for the child frame to define its relationship to
 the parent frame.

 Yeah.

 It seems to me like the best solution is to have a new HTTP header, with
 the four following values being allowed:

Seamless-Options: allow-shrink-wrap
Seamless-Options: allow-styling
Seamless-Options: allow-shrink-wrap allow-styling
Seamless-Options: allow-styling allow-shrink-wrap

 (Split on spaces, ignore unknown tokens.)

Assuming that these are order independent, it's slightly more
idiomatic for HTTP to use "," as a delimiter.

 Then for the per-origin control, we would extend CSP to have a flag for
 limiting who is allowed to embed you (subsuming X-Frame-Options, essentially).

That's already planned for CSP (e.g.,
http://dvcs.w3.org/hg/user-interface-safety/raw-file/tip/user-interface-safety.html#frame-options
is one current proposal).

 For the case of things that can be embedded by anyone but only seamlessly
 by paying clients, I would recommend putting the origin in the URL, and
 then limiting the embedding to that URL using CSP.

 Is this a viable direction?

Yeah, I can see how you ended up with an HTTP header.  I wonder if it
would make sense to align this stylistically with CORS.  For example:

Access-Control: allow-shrink-wrap, allow-styling

I guess it depends how costly you think it is to mint new HTTP headers
rather than having fewer, harder working headers.

Adam


[1] http://wiki.whatwg.org/wiki/AllowSeamless


Re: [whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Ian Hickson
On Mon, 3 Dec 2012, Boris Zbarsky wrote:

 Consider this testcase:
 
   var desc = Object.getOwnPropertyDescriptor(HTMLElement.prototype,
                                              "onscroll");
   desc.set.call(document.body, function() { alert(this); });
 
 Is the listener added on the body, or the window?
 
 The relevant parts of the spec are:
 
 1)  onscroll is present on both HTMLElement.prototype and
 HTMLBodyElement.prototype.  This testcase explicitly invokes the setter for
 the former.
 
 2)  The spec text at
 http://www.whatwg.org/specs/web-apps/current-work/multipage/webappapis.html#handler-onscroll
 (scroll up; there is no way to link to the actual text) says:
 
   The following are the event handlers (and their corresponding event
   handler event types) that must be supported by all HTML elements
   other than body and frameset, as both content attributes and IDL
   attributes, and on Document objects, as IDL attributes
 
 It's not clear to me what this means since the properties are on 
 HTMLElement.prototype so they can be applied to all HTML elements.  
 What does this text mean in terms of the testcase above?
 
 Basically, I can see three possible behaviors here.  Either the 
 HTMLElement.prototype.onscroll setter behaves the same way on all 
 elements (and hence the above adds the event handler on the body) or it 
 behaves specially for the body element, forwarding to the window (and 
 then we don't need HTMLBodyElement.prototype.onscroll), or it throws for 
 the body element. Which one is intended?

What do browsers do?

This should probably be defined in WebIDL. It relates also to:

   https://www.w3.org/Bugs/Public/show_bug.cgi?id=17201

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Boris Zbarsky

On 12/3/12 2:05 PM, Ian Hickson wrote:

What do browsers do?


WebKit and Opera don't put the property on the prototype at all, so the 
whole issue is not even testable there.  This obviously doesn't follow 
WebIDL, but that's not relevant here.


It looks like Gecko currently doesn't allow the onscroll setter gotten 
from HTMLElement.prototype to be invoked on things whose prototype is 
not exactly HTMLElement.prototype.  In particular, applying it to an 
HTMLBodyElement throws.  This is an artifact of this property being 
implemented via XPConnect, unlike a lot of other DOM properties; we're 
in the process of switching to WebIDL for the bindings here, which is 
why the question arose.


IE9 in IE9 standards mode seems to depend on the exact event handler. 
Specifically, assuming I didn't mess up my tests:


1)  For onload, onfocus, onblur it seems to forward the set
to the window even if it's invoked via the HTMLElement.prototype
setter.
2)  For onscroll, onerror it seems to never forward to the
window, no matter how you set it.

So in terms of compat, I claim there are no constraints here.  ;)
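
For reference, the sort of probe being described is roughly this (a sketch,
not the exact tests):

   var desc = Object.getOwnPropertyDescriptor(HTMLElement.prototype, "onscroll");
   if (!desc) {
     // WebKit/Opera at the time: the accessor isn't on the prototype at all.
   } else {
     try {
       desc.set.call(document.body, function () {});
       // Inspect where the handler ended up, e.g. compare window.onscroll
       // with what scrolling the body actually triggers.
     } catch (e) {
       // Gecko at the time: applying the setter to a body element throws.
     }
   }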


This should probably be defined in WebIDL.


You have IDL like this:

  interface Foo {
attribute EventHandler onscroll;
  };
  interface Bar : Foo {
attribute EventHandler onscroll;
  };

WebIDL already defines how this behaves: there are getters/setters on 
both Foo.prototype and Bar.prototype, and it's up to the spec prose to 
describe how those getters/setters actually behave.  That's really what's 
missing here, no?  Again, there are several possible behaviors; the 
question is which one we want for this particular case.



It relates also to:

https://www.w3.org/Bugs/Public/show_bug.cgi?id=17201


It's a similar situation, yes.  But in this case I don't see why you'd 
need an IDL annotation of any sort at all.  If you want the behavior to 
be the same, just don't define onscroll on Bar at all and define the one 
on Foo to special case the two Foo subclasses you care about here.  If 
you don't want it to be the same, the IDL annotation doesn't help you.


-Boris


Re: [whatwg] URL: URLQuery

2012-12-03 Thread Anne van Kesteren
On Sat, Dec 1, 2012 at 7:52 PM, Rick Waldron waldron.r...@gmail.com wrote:
 The definitive answer is that we discussed this at the TC39 in-person this
 past week and the collection mutation methods will return |this|. I will be
 publishing the meeting notes on Monday, and the next draft will reflect
 these changes.

Thanks Rick. I take it that is "Cascading this returns" from
https://github.com/rwldrn/tc39-notes/blob/master/es6/2012-11/nov-29.md
Why is the delete() method not included?

And is there an explain-it-like-I'm-five available for the conclusion?
Either that or an example where this would not apply might suffice. If
we want to adopt this policy throughout all API specifications it
would help if I knew what was going on.
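
For concreteness, the pattern as I understand it would allow chaining like the
following (the method names and the query object here are just assumed from
the URLQuery draft under discussion):

   // query: an URLQuery instance (assumed)
   query.append("a", "1")
        .append("b", "2")
        .set("c", "3");   // each mutation method returns the URLQuery itself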


-- 
http://annevankesteren.nl/


[whatwg] Proposal: Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Adam Barth
== Use case ==

Load and execute script as quickly as possible.

== Discussion ==

Currently, there are a number of ways to load a script from the
network and execute it, but none of them will actually load and
execute the script as fast as physically possible.  Consider the
following markup:

<script async src="path/to/script.js"></script>

In this case, the user agent will wait until it receives the last byte
of script.js from the network before executing the first byte of
script.js.  In principle, the user agent could finish executing
script.js sooner if it could overlap some of the execution time with
some of the network latency, for example by executing a chunk of the
script while waiting for the bytes for the next chunk to arrive from
the network.

Unfortunately, without additional information, the user agent doesn't
know where safe chunk boundaries are located.  Picking an arbitrary
byte boundary is likely to cause a syntax error, and even picking an
arbitrary JavaScript statement boundary will change the semantics of
the script.  The user agent needs some sort of signal from the author
to know where the safe chunk boundaries are located.
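
A tiny made-up example of why an arbitrary statement boundary is unsafe:

   // As one script, this works because the function declaration is hoisted:
   f();
   function f() { console.log("hi"); }

   // Split at the statement boundary and executed as two sequential chunks,
   // the first chunk throws a ReferenceError: f is not yet defined when the
   // call runs.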

== Workarounds ==

The simplest workaround is to break your script into several pieces:

<script async src="path/to/script-part1.js"></script>
<script async src="path/to/script-part2.js"></script>
<script async src="path/to/script-part3.js"></script>

Now, script-part1.js will execute before the user agent has received
the last byte of script-part3.js.  Unfortunately, this approach does
not make efficient use of the network.  Specifically, if the three
parts are retrieved from the network in parallel, then the user agent
might receive a byte from script-part3.js before receiving all the
bytes of script-part1.js, wasting network bandwidth (because the bytes
from script-part3.js are not useful until all of script-part1.js is
received and executed).

A more sophisticated workaround is to use an iframe element rather
than a script element to load the script:

<iframe src="path/to/script-in-markup.html"></iframe>

In this approach, script-in-markup.html is the following HTML:

<script>
[... text of script-part1.js ...]
</script>
<script>
[... text of script-part2.js ...]
</script>
<script>
[... text of script-part3.js ...]
</script>

Now the bytes of the script are retrieved from the network in the
proper order (making efficient use of bandwidth) and the user agent
can overlap execution of the script with network latency (because the
script tags delineate the safe chunks).

This approach is used in production web applications, including Gmail,
to load and execute script as quickly as possible.  If you inspect a
running copy of Gmail, you can find this frame---it's the one with ID
"js_frame".

Unfortunately, this approach has a number of disadvantages:

(1) Creating an extra iframe for loading JavaScript is not resource
efficient.  The user agent needs to create a number of extra data
structures and an extra JavaScript environment, which wastes time as
well as memory.

(2) Authors need to write their scripts with the understanding that
the primary callers of their code will do so from another frame.  For
example, the instanceof operator might not work as expected if they
ask whether an object from the caller (i.e., from the parent frame) is
an instance of a constructor from the callee's environment (i.e., from
the child frame).

(3) This approach requires the author who loads the script to use
different syntax than normally used for loading script.  For example,
this prevents this technique from being applied to the JavaScript
libraries that Google hosts (as described by
https://developers.google.com/speed/libraries/).

== Proposal ==

The script element should support multipart/mixed.

== Details ==

The main ingredient that we're missing is a way for the author to
signal to the user agent which chunks of scripts are safe to execute
in parallel with loading subsequent chunks from the network.
Fortunately, the web platform already has a mechanism for breaking a
single HTTP response body into chunks that are processed sequentially:
multipart/mixed.

For example, if an HTTP server provides a multipart/mixed response to
a request for an image, the img element will display each part of
the response in sequence, animating the image.  Similarly, if an HTTP
server provides a multipart/mixed response to a request for an HTML
document, the user agent will display each part of the response
sequentially.

One way to address this use case is to add multipart/mixed support to
the script element.  Upon receiving a multipart/mixed response to a
request for a script, the script element must execute each part of
the response as they become available.  This behavior appears to be
consistent with the definition of multipart/mixed
http://tools.ietf.org/html/rfc2046#section-5.1.3.

To load and execute a script as quickly as possible, the author would
use the following markup:

<script async src="path/to/script.js"></script>

The HTTP server would then break script.js into chunks that are safe
to execute sequentially and provide each chunk as a separate MIME part
in a multipart/mixed response.
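
For concreteness, the response might look something like this on the wire (the
boundary name and per-part headers here are purely illustrative):

   HTTP/1.1 200 OK
   Content-Type: multipart/mixed; boundary="chunk"

   --chunk
   Content-Type: text/javascript

   // ... first safe-to-execute chunk of script.js ...
   --chunk
   Content-Type: text/javascript

   // ... second chunk, executed as soon as it arrives ...
   --chunk--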

Re: [whatwg] Proposal: Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Anne van Kesteren
On Mon, Dec 3, 2012 at 10:14 PM, Adam Barth w...@adambarth.com wrote:
 The HTTP server would then break script.js into chunks that are safe
 to execute sequentially and provide each chunk as a separate MIME part
 in a multipart/mixed response.

Is it expected that SPDY will take much longer than getting this
supported in all browsers? Or am I missing how SPDY will not address
this problem?


-- 
http://annevankesteren.nl/


Re: [whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Cameron McCormack

On 4/12/12 6:31 AM, Boris Zbarsky wrote:

It's a similar situation, yes.  But in this case I don't see why you'd
need an IDL annotation of any sort at all.  If you want the behavior to
be the same, just don't define onscroll on Bar at all and define the one
on Foo to special case the two Foo subclasses you care about here.  If
you don't want it to be the same, the IDL annotation doesn't help you.


I agree.  But if we really do need a separate namedItem (for bug 
17201) on HTMLPropertiesCollection, then there is no harm in having it 
too, but I would have it not work on other HTMLCollection objects.


So I think my suggested solution for that bug is:

  * Have the definition of HTMLCollection.namedItem include a hook that
other specifications can override for descendant classes like
HTMLPropertiesCollection.

  * Do that overriding for HTMLPropertiesCollection.

  * Not define a distinct namedItem on HTMLPropertiesCollection.

I can see that if you did still include a namedItem on 
HTMLPropertiesCollection with its special behaviour, then you could save 
yourself effort by putting an extended attribute on HTMLCollection's one 
(which means delegate to the subclass) but I don't think it's really 
necessary.


(I will put the above in the bug.)


Re: [whatwg] Proposal: Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread 陈智昌
Unless I am misunderstanding, SPDY will not solve this problem. SPDY uses
prioritized multiplexing of streams. Generally speaking, a browser will map
a single resource request to a single stream, which would prevent chunked
processing by the browser without multipart/mixed. One could imagine
working around this by splitting the single resource into multiple
resources, and then relying on SPDY priorities to ensure sequential
delivery, but that is suboptimal due to having limited priority levels (4
in SPDY/2, 8 in SPDY/3), and many of them are already used to indicate
relative priority amongst resource types (
https://code.google.com/p/chromium/source/search?q=DetermineRequestPriority&origq=DetermineRequestPriority&btnG=Search+Trunk
).


On Mon, Dec 3, 2012 at 1:40 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, Dec 3, 2012 at 10:14 PM, Adam Barth w...@adambarth.com wrote:
  The HTTP server would then break script.js into chunks that are safe
  to execute sequentially and provide each chunk as a separate MIME part
  in a multipart/mixed response.

 Is it expected that SPDY will take much longer than getting this
 supported in all browsers? Or am I missing how SPDY will not address
 this problem?


 --
 http://annevankesteren.nl/



Re: [whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Ian Hickson
On Mon, 3 Dec 2012, Boris Zbarsky wrote:
 
 You have IDL like this:
 
   interface Foo {
 attribute EventHandler onscroll;
   };
   interface Bar : Foo {
 attribute EventHandler onscroll;
   };
 
 WebIDL already defines how this behaves: there are getters/setters on 
 both Foo.prototype and Bar.prototype, and it's up to the spec prose to 
 describe how those getters/setters actually behave.  That's really what's 
 missing here, no? Again, there are several possible behaviors; the 
 question is which one we want for this particular case.

I'd really like to have WebIDL define the behaviour rather than HTML, 
though.

Note that onerror has a different type on HTMLElement and HTMLBodyElement.


  It relatess also to:
  
  https://www.w3.org/Bugs/Public/show_bug.cgi?id=17201
 
 It's a similar situation, yes.  But in this case I don't see why you'd 
 need an IDL annotation of any sort at all.  If you want the behavior to 
 be the same, just don't define onscroll on Bar at all and define the one 
 on Foo to special case the two Foo subclasses you care about here.  If 
 you don't want it to be the same, the IDL annotation doesn't help you.

onscroll is a case where there's really no reason to use a different 
setter, agreed. So I've commented that out (and its similar friends). 
That still leaves onerror though.


On Tue, 4 Dec 2012, Cameron McCormack wrote:
 
 I agree.  But if we really do need a separate namedItem (for bug 17201) 
 on HTMLPropertiesCollection, then there is no harm in having it too, but 
 I would have it not work on other HTMLCollection objects.

And vice versa.


Per our IRC discussion just now, I think I would propose that when a 
method/setter/getter from a prototype of interface A is called against an 
object that is of an interface B (or one of B's descendants), where B is a 
subclass of A, and B defines its own method/getter/setter with the same 
name, then it should throw.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Cameron McCormack

On 4/12/12 11:33 AM, Ian Hickson wrote:

Per our IRC discussion just now, I think I would propose that when a
method/setter/getter from a prototype of interface A is called against an
object that is of an interface B (or one of B's descendants), where B is a
subclass of A, and B defines its own method/getter/setter with the same
name, then it should throw.


I filed https://www.w3.org/Bugs/Public/show_bug.cgi?id=20225 for that.


Re: [whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Boris Zbarsky

On 12/3/12 7:33 PM, Ian Hickson wrote:

Note that onerror has a different type on HTMLElement and HTMLBodyElement.


Yes, indeed.  That's the biggest problem with forwarding to Window for 
the HTMLElement.prototype case for onerror here: the types are different.



onscroll is a case where there's really no reason to use a different
setter, agreed. So I've commented that out (and it's similar friends).
That still leaves onerror though.


Indeed.  I would have no problem with just having 
HTMLElement.prototype.onerror's setter set an error handler on the body 
itself, like it would on any other HTML element, and likewise for the 
getter.



Per our IRC discussion just now, I think I would propose that when a
method/setter/getter from a prototype of interface A is called against an
object that is of an interface B (or one of B's descendants), where B is a
subclass of A, and B defines its own method/getter/setter with the same
name, then it should throw.


Hmm.  That, as phrased, is pretty complicated to implement in a 
performant way, if the two methods/getters/setters have the same 
signatures...


-Boris


Re: [whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Cameron McCormack

On 4/12/12 12:11 PM, Boris Zbarsky wrote:

Hmm.  That, as phrased, is pretty complicated to implement in a
performant way, if the two methods/getters/setters have the same
signatures...


Since I'm not terribly familiar with our generated bindings code, I'm 
not really sure what that would be.  Is there a phrasing that would not 
be so complicated but does the same thing? :)


Re: [whatwg] Specification unclear about how HTMLElement.prototype.onscroll's getter/setter should behave for body elements

2012-12-03 Thread Boris Zbarsky

On 12/3/12 8:16 PM, Cameron McCormack wrote:

On 4/12/12 12:11 PM, Boris Zbarsky wrote:

Hmm.  That, as phrased, is pretty complicated to implement in a
performant way, if the two methods/getters/setters have the same
signatures...


Since I'm not terribly familiar with our generated bindings code, I'm
not really sure what that would be.  Is there a phrasing that would not
be so complicated but does the same thing? :)


The problem is the functionality, not the phrasing.

I have to ask: are there languages or runtime systems that have that 
sort of behavior on method calls (as opposed to in method 
implementations in special cases where the operation is nonsensical)? 
It seems weird to be requiring this behavior, in general.


-Boris




Re: [whatwg] Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Kyle Simpson
Adam-

 To load and execute a script as quickly as possible, the author would
 use the following markup:
 
 <script async src="path/to/script.js"></script>
 
 The HTTP server would then break script.js into chunks that are safe
 to execute sequentially and provide each chunk as a separate MIME part
 in a multipart/mixed response.

I like the spirit of this idea, but one concern I have is about the script load 
and readystate events. It seems that authors will want to know when each chunk 
has finished executing (in the same way they want to know that scripts 
themselves finish).

There's a contingent on this list which thinks that all script authors should 
change their code to never have side effects of execution, and should all 
instead be executable by having some other logic invoke them (aka module 
style coding). The reality is that a mixture of both types of approaches will 
be available on the web for any foreseeable future (well beyond the time when 
ES6 has provided first-class module support to all in-use browsers, so probably 
nearly a decade from now I'd think). So authors will likely want to be able to 
monitor when each chunk onloads.

One suggestion is to add a state to the readyState mechanism, like 
"chunkReady", where the event fires and includes in its event object properties 
the numeric index, the //@sourceURL, the separator identifier, or otherwise 
some sort of identifier by which the author can tell which chunk executed.
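
Purely as a sketch (the event name and its properties are made up here, just to
match the suggestion above), usage might look like:

   var s = document.querySelector("script[async]");
   s.addEventListener("chunkready", function (e) {
     // e.chunkIndex / e.sourceURL: assumed properties identifying the chunk
     console.log("chunk " + e.chunkIndex + " (" + e.sourceURL + ") has executed");
   });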



--Kyle





Re: [whatwg] Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Ojan Vafai
On Mon, Dec 3, 2012 at 6:15 PM, Kyle Simpson get...@gmail.com wrote:

 Adam-

  To load and execute a script as quickly as possible, the author would
  use the following markup:
 
  <script async src="path/to/script.js"></script>
 
  The HTTP server would then break script.js into chunks that are safe
  to execute sequentially and provide each chunk as a separate MIME part
  in a multipart/mixed response.

 I like the spirit of this idea, but one concern I have is about the script
 load and readystate events. It seems that authors will want to know when
 each chunk has finished executing (in the same way they want to know that
 scripts themselves finish).


Why? What would you do in such an event?


 There's a contingent on this list which thinks that all script authors
 should change their code to never have side effects of execution, and
 should all instead be executable by having some other logic invoke them
 (aka module style coding). The reality is that a mixture of both types of
 approaches will be available on the web for any foreseeable future (well
 beyond the time when ES6 has provided first-class module support to all
 in-use browsers, so probably nearly a decade from now I'd think). So
 authors will likely want to be able to monitor when each chunk onloads.

 One suggestion is to added a state to the readyState mechanism like
 chunkReady, where the event fires and includes in its event object
 properties the numeric index, the //@sourceURL, the separator identifier,
 or otherwise some sort of identifier for which the author can tell which
 chunk executed.



 --Kyle






Re: [whatwg] Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Ojan Vafai
On Mon, Dec 3, 2012 at 6:35 PM, Ojan Vafai o...@chromium.org wrote:

 On Mon, Dec 3, 2012 at 6:15 PM, Kyle Simpson get...@gmail.com wrote:

 Adam-

  To load and execute a script as quickly as possible, the author would
  use the following markup:
 
  <script async src="path/to/script.js"></script>
 
  The HTTP server would then break script.js into chunks that are safe
  to execute sequentially and provide each chunk as a separate MIME part
  in a multipart/mixed response.

 I like the spirit of this idea, but one concern I have is about the
 script load and readystate events. It seems that authors will want to know
 when each chunk has finished executing (in the same way they want to know
 that scripts themselves finish).


 Why? What would you do in such an event?


Someone pointed out a use-case to me: a progress bar showing how far along
the page load is. You could do this without an event by just putting the
appropriate bit in each chunk of the script, but you couldn't do this if
you use defer instead of async (i.e. you want a progress bar, but you
don't want the script to execute).
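
(A sketch of that no-event approach for the async case, with made-up names: the
server would append something like the following to the end of each chunk it
serves.)

   // appended by the server at the end of, say, the 3rd of 10 chunks
   // (updateProgressBar is a hypothetical page-provided function)
   window.updateProgressBar && window.updateProgressBar(3 / 10);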



 There's a contingent on this list which thinks that all script authors
 should change their code to never have side effects of execution, and
 should all instead be executable by having some other logic invoke them
 (aka module style coding). The reality is that a mixture of both types of
 approaches will be available on the web for any foreseeable future (well
 beyond the time when ES6 has provided first-class module support to all
 in-use browsers, so probably nearly a decade from now I'd think). So
 authors will likely want to be able to monitor when each chunk onloads.

 One suggestion is to added a state to the readyState mechanism like
 chunkReady, where the event fires and includes in its event object
 properties the numeric index, the //@sourceURL, the separator identifier,
 or otherwise some sort of identifier for which the author can tell which
 chunk executed.



 --Kyle







Re: [whatwg] Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Kyle Simpson
  I like the spirit of this idea, but one concern I have is about the script 
  load and readystate events. It seems that authors will want to know when 
  each chunk has finished executing (in the same way they want to know that 
  scripts themselves finish).
 
  Why? What would you do in such an event?
  ...
  Someone pointed out a use-case to me: a progress bar showing how far along 
  the page load is. You could do this without an event by just putting the 
  appropriate bit in each chunk of the script, but you couldn't do this if 
  you use defer instead async (i.e. you want a progress bar, but you 
  don't want the script to execute).


The same diverse sorts of things that authors currently use script#onload for… 
like initializing some code now that you know it's executed and ready to go, 
etc.

For instance, imagine you have a small plugin as the first chunk in a rather 
large file, and you'd like to run some logic to initialize and use that plugin 
as soon as possible, rather than waiting for all the chunks of the multi-part'd 
file to download and execute. You'd listen for that very first `chunkReady` 
event, or whatever, and fire your code off then, which could be much 
earlier than if you waited until the very end of all chunks loading.

My assumption is that this feature, if added, would basically allow an author 
to treat scripts as separately loading items in development mode, thus having 
separate onload handlers as they might normally design, and then for 
production combining all scripts into a single, but multi-parted, concat file, 
and mapping those individual `onload` handlers directly to `chunkReady` 
handlers, one-to-one.

-

I know the code could itself be changed to simulate the same behavior. My bias 
(as I expressed in the previous message) is to see features designed that are 
easiest for existing code to take advantage of. If this feature were added in 
such a way that the only way people could really take advantage of it is if 
they had to rearchitect their code (as some on this list persistently suggest) 
so as to do its own wrapping of code and notifications of each chunk being 
finished, I see this feature as dying a niche death without much widespread 
usefulness.

But if we make it yet another tool in the web performance professional's 
toolbelt to take existing sites which load multiple files and give them a way 
to easily convert them (without any major code changes) over to loading fewer 
files in a multi-part fashion, I can see this feature being pretty useful.



--Kyle








Re: [whatwg] Proposal: Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Maciej Stachowiak


On Dec 3, 2012, at 2:11 PM, William Chan (陈智昌) willc...@chromium.org wrote:

 Unless I am misunderstanding, SPDY will not solve this problem. SPDY uses
 prioritized multiplexing of streams.

It seems to me like SPDY could make this case work better:

<script async src="path/to/script-part1.js"></script>
<script async src="path/to/script-part2.js"></script>
<script async src="path/to/script-part3.js"></script>

Specifically the individual script chunks could be ordered and prioritized such 
that all of script-part1.js transfers before any of script-part3.js. That's 
harder to do with HTTP because the scripts could be loading on wholly separate 
HTTP connections, while SPDY will use one connection to the server.

That being said, I do not know if SPDY will actually achieve this. Presumably 
it makes sense for it to serialize within a given priority level, at least a 
priority level that's likely to correspond to resources that are only 
atomically consumable, like scripts. But I don't know if SPDY implementations 
really do that.

 - Maciej


 Generally speaking, a browser will map
 a single resource request to a single stream, which would prevent chunked
 processing by the browser without multipart/mixed. One could imagine
 working around this by splitting the single resource into multiple
 resources, and then relying on SPDY priorities to ensure sequential
 delivery, but that is suboptimal due to having limited priority levels (4
 in SPDY/2, 8 in SPDY/3), and many of them are already used to indicate
 relative priority amongst resource types (
 https://code.google.com/p/chromium/source/search?q=DetermineRequestPriority&origq=DetermineRequestPriority&btnG=Search+Trunk
 ).
 
 
 On Mon, Dec 3, 2012 at 1:40 PM, Anne van Kesteren ann...@annevk.nl wrote:
 
 On Mon, Dec 3, 2012 at 10:14 PM, Adam Barth w...@adambarth.com wrote:
 The HTTP server would then break script.js into chunks that are safe
 to execute sequentially and provide each chunk as a separate MIME part
 in a multipart/mixed response.
 
 Is it expected that SPDY will take much longer than getting this
 supported in all browsers? Or am I missing how SPDY will not address
 this problem?
 
 
 --
 http://annevankesteren.nl/
 



Re: [whatwg] Proposal: Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Maciej Stachowiak

It might be good to use a custom MIME type instead of multipart/mixed. 
multipart/mixed can represent arbitrary heterogenous sequences of types, which 
is not the desired semantic here - you want a sequence of all text/javascript 
types. It also has a syntactic affordance for conveying a MIME type per chunk, 
which is unnecessary in this case. Since browsers will likely need custom logic 
for this case anyway, I think it might be better to have a multipart/javascript 
type. Note: if this feature is needed for other script types, let's say 
vbscript, you could mint distinct types like multipart/vbscript, or use a MIME 
parameter: multipart/script; type=text/javascript.
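
For illustration only (no such type is registered; the syntax is just a sketch
of this suggestion), a response might then announce itself as:

   Content-Type: multipart/script; type=text/javascript; boundary="chunk"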

On Dec 3, 2012, at 1:14 PM, Adam Barth w...@adambarth.com wrote:

 == Use case ==
 
 Load and execute script as quickly as possible.
 
 == Discussion ==
 
 Currently, there are a number of ways to load a script from the
 network and execute it, but none of them will actually load and
 execute the script as fast as physically possible.  Consider the
 following markup:
 
 <script async src="path/to/script.js"></script>
 
 In this case, the user agent will wait until it receives the last byte
 of script.js from the network before executing the first byte of
 script.js.  In principle, the user agent could finish executing
 script.js sooner if it could overlap some of the execution time with
 some of the network latency, for example by executing a chunk of the
 script while waiting for the bytes for the next chunk to arrive from
 the network.
 
 Unfortunately, without additional information, the user agent doesn't
 know where safe chunk boundaries are located.  Picking an arbitrary
 byte boundary is likely to cause a syntax error, and even picking an
 arbitrary JavaScript statement boundary will change the semantics of
 the script.  The user agent needs some sort of signal from the author
 to know where the safe chunk boundaries are located.
 
 == Workarounds ==
 
 The simplest work around is to break your script into several pieces:
 
 <script async src="path/to/script-part1.js"></script>
 <script async src="path/to/script-part2.js"></script>
 <script async src="path/to/script-part3.js"></script>
 
 Now, script-part1.js will execute before the user agent has received
 the last byte of script-part3.js.  Unfortunately, this approach does
 not make efficient use of the network.  Specifically, if the three
 parts are retrieved from the network in parallel, then the user agent
 might receive a byte from script-part3.js before receiving all the
 bytes of script-part1.js, wasting network bandwidth (because the bytes
 from script-part3.js are not useful until all of script-part1.js is
 received an executed).
 
 A more sophisticated workaround is to use an iframe element rather
 than a script element to load the script:
 
 <iframe src="path/to/script-in-markup.html"></iframe>
 
 In this approach, script-in-markup.html is the following HTML:
 
 <script>
 [... text of script-part1.js ...]
 </script>
 <script>
 [... text of script-part2.js ...]
 </script>
 <script>
 [... text of script-part3.js ...]
 </script>
 
 Now the bytes of the script are retrieved from the network in the
 proper order (making efficient use of bandwidth) and the user agent
 can overlap execution of the script with network latency (because the
 script tags delineate the safe chunks).
 
 This approach is used in production web applications, including Gmail,
 to load and execute script as quickly as possible.  If you inspect a
 running copy of Gmail, you can find this frame---it's the one with ID
 js_frame.
 
 Unfortunately, this approach as a number of disadvantages:
 
 (1) Creating an extra iframe for loading JavaScript is not resource
 efficient.  The user agent needs to create a number of extra data
 structures and an extra JavaScript environment, which wastes time as
 well as memory.
 
 (2) Authors need to write their scripts with the understanding that
 the primary callers of their code will do so from another frame.  For
 example, the instanceof operator might not work as expected if they
 ask whether an object from the caller (i.e., from the parent frame) is
 an instance of a constructor from the callee's environment (i.e., from
 the child frame).
 
 (3) This approach requires the author who loads the script to use
 different syntax than normally used for loading script.  For example,
 this prevents this technique from being applied to the JavaScript
 libraries that Google hosts (as described by
 https://developers.google.com/speed/libraries/).
 
 == Proposal ==
 
 The script element should support multipart/mixed.
 
 == Details ==
 
 The main ingredient that we're missing is a way for the author to
 signal to the user agent which chunks of scripts are safe to execute
 in parallel with loading subsequent chunks from the network.
 Fortunately, the web platform already has a mechanism for breaking a
 single HTTP response body into chunks that are processed sequentially:
 multipart/mixed.
 
 For example, if an HTTP server provides a multipart/mixed response to
 a request for an image, the img element will display each part of
 the response in sequence, animating the image.

Re: [whatwg] Proposal: Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread 陈智昌
I did not explain this in more detail because I did not want to hijack this
thread to discuss SPDY prioritization. But we've indeed considered
situations like you've mentioned. Please refer to
https://groups.google.com/d/topic/spdy-dev/-d9Auoun4HU/discussion. But your
insight that in many situations it's not better to interleave response
data, as would be done by separate TCP connections, is indeed spot on.

On Mon, Dec 3, 2012 at 9:57 PM, Maciej Stachowiak m...@apple.com wrote:



 On Dec 3, 2012, at 2:11 PM, William Chan (陈智昌) willc...@chromium.org
 wrote:

  Unless I am misunderstanding, SPDY will not solve this problem. SPDY uses
  prioritized multiplexing of streams.

 It seems to me like SPDY could make this case work better:


  <script async src="path/to/script-part1.js"></script>
  <script async src="path/to/script-part2.js"></script>
  <script async src="path/to/script-part3.js"></script>

 Specifically the individual script chunks could be ordered and prioritized
 such that all of script-part1.js transfers before any of script-part3.js.
 That's harder to do with HTTP because the scripts could be loading on
 wholly separate HTTP connections, while SPDY will use one connection to the
 server.


Just to be clear, you mean that script-part1.js completes before
script-part2.js which completes before script-part3.js, right?



 That being said, I do not know if SPDY will actually achieve this.
 Presumably it makes sense for it to serialize within a given priority
 level, at least a priority level that's likely to correspond to resources
 that are only atomically consumable, like scripts. But I don't know if SPDY
 implementations really do that.


It's more complicated than what you indicate here. We discuss this in our
SPDY/4 prioritization proposal and also in the aforementioned discussion
thread.

Also note that, even though Adam didn't mention it in his initial email,
correct prioritization is one motivation for this proposal, since it tries
to remove the necessity of using an iframe instead of a normal script,
since certain browsers (like Chromium) will assign different priorities
here:
https://code.google.com/p/chromium/source/search?q=DetermineRequestPriority&origq=DetermineRequestPriority&btnG=Search+Trunk.
I wrote up a more detailed discussion of this topic in
https://insouciant.org/tech/resource-prioritization-in-chromium/.


  - Maciej


  Generally speaking, a browser will map
  a single resource request to a single stream, which would prevent chunked
  processing by the browser without multipart/mixed. One could imagine
  working around this by splitting the single resource into multiple
  resources, and then relying on SPDY priorities to ensure sequential
  delivery, but that is suboptimal due to having limited priority levels (4
  in SPDY/2, 8 in SPDY/3), and many of them are already used to indicate
  relative priority amongst resource types (
 
 https://code.google.com/p/chromium/source/search?q=DetermineRequestPriority&origq=DetermineRequestPriority&btnG=Search+Trunk
  ).
 
 
  On Mon, Dec 3, 2012 at 1:40 PM, Anne van Kesteren ann...@annevk.nl
 wrote:
 
  On Mon, Dec 3, 2012 at 10:14 PM, Adam Barth w...@adambarth.com wrote:
  The HTTP server would then break script.js into chunks that are safe
  to execute sequentially and provide each chunk as a separate MIME part
  in a multipart/mixed response.
 
  Is it expected that SPDY will take much longer than getting this
  supported in all browsers? Or am I missing how SPDY will not address
  this problem?
 
 
  --
  http://annevankesteren.nl/
 




Re: [whatwg] Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Adam Barth
On Mon, Dec 3, 2012 at 6:15 PM, Kyle Simpson get...@gmail.com wrote:
 Adam-

 To load and execute a script as quickly as possible, the author would
 use the following markup:

 <script async src="path/to/script.js"></script>

 The HTTP server would then break script.js into chunks that are safe
 to execute sequentially and provide each chunk as a separate MIME part
 in a multipart/mixed response.

 I like the spirit of this idea, but one concern I have is about the script 
 load and readystate events. It seems that authors will want to know when each 
 chunk has finished executing (in the same way they want to know that scripts 
 themselves finish).

Sure, you could imagine firing progress events or other sorts of
events before or after each chunk executes.

Adam


Re: [whatwg] Proposal: Loading and executing script as quickly as possible using multipart/mixed

2012-12-03 Thread Adam Barth
On Mon, Dec 3, 2012 at 9:57 PM, Maciej Stachowiak m...@apple.com wrote:
 On Dec 3, 2012, at 2:11 PM, William Chan (陈智昌) willc...@chromium.org wrote:
 Unless I am misunderstanding, SPDY will not solve this problem. SPDY uses
 prioritized multiplexing of streams.

 It seems to me like SPDY could make this case work better:

 <script async src="path/to/script-part1.js"></script>
 <script async src="path/to/script-part2.js"></script>
 <script async src="path/to/script-part3.js"></script>

 Specifically the individual script chunks could be ordered and prioritized 
 such that all of script-part1.js transfers before any of script-part3.js. 
 That's harder to do with HTTP because the scripts could be loading on wholly 
 separate HTTP connections, while SPDY will use one connection to the server.

 That being said, I do not know if SPDY will actually achieve this. Presumably 
 it makes sense for it to serialize within a given priority level, at least a 
 priority level that's likely to correspond to resources that are only 
 atomically consumable, like scripts. But I don't know if SPDY implementations 
 really do that.

It also has disadvantage (3):

---8---
(3) This approach requires the author who loads the script to use
different syntax than normally used for loading script.  For example,
this prevents this technique from being applied to the JavaScript
libraries that Google hosts (as described by
https://developers.google.com/speed/libraries/).
---8---

Adam


 Generally speaking, a browser will map
 a single resource request to a single stream, which would prevent chunked
 processing by the browser without multipart/mixed. One could imagine
 working around this by splitting the single resource into multiple
 resources, and then relying on SPDY priorities to ensure sequential
 delivery, but that is suboptimal due to having limited priority levels (4
 in SPDY/2, 8 in SPDY/3), and many of them are already used to indicate
 relative priority amongst resource types (
 https://code.google.com/p/chromium/source/search?q=DetermineRequestPriority&origq=DetermineRequestPriority&btnG=Search+Trunk
 ).


 On Mon, Dec 3, 2012 at 1:40 PM, Anne van Kesteren ann...@annevk.nl wrote:

 On Mon, Dec 3, 2012 at 10:14 PM, Adam Barth w...@adambarth.com wrote:
 The HTTP server would then break script.js into chunks that are safe
 to execute sequentially and provide each chunk as a separate MIME part
 in a multipart/mixed response.

 Is it expected that SPDY will take much longer than getting this
 supported in all browsers? Or am I missing how SPDY will not address
 this problem?


 --
 http://annevankesteren.nl/




Re: [whatwg] [mimesniff] Sniffing archives

2012-12-03 Thread Julian Reschke

On 2012-11-29 20:25, Adam Barth wrote:

These are supported in Chrome.  That's what causes the download.  From


Can you elaborate about what you mean by "supported"? Chrome sniffs for 
the type, and then offers to download as a result of that sniffing? How 
is that different from not sniffing in the first place?



...your comment, it's not clear to me if you are correctly reverse
engineering existing user agents.  The techniques we used to create
this list originally are quite sophisticated and involved a massive
amount of data [1].  It would be a shame if you destroyed that work
because you didn't understand it.

Adam

[1] http://www.adambarth.com/papers/2009/barth-caballero-song.pdf
...


Understood; but on the other hand, if there's a chance to simplify things 
then it makes sense to discuss this, even if that would involve changing 
some of the implementations.


Best regards, Julian


Re: [whatwg] [mimesniff] Sniffing archives

2012-12-03 Thread Adam Barth
On Mon, Dec 3, 2012 at 12:39 PM, Julian Reschke julian.resc...@gmx.de wrote:
 On 2012-11-29 20:25, Adam Barth wrote:
 These are supported in Chrome.  That's what causes the download.  From

 Can you elaborate about what you mean by "supported"? Chrome sniffs for the
 type, and then offers to download as a result of that sniffing? How is that
 different from not sniffing in the first place?

They might otherwise be treated as a type that can be displayed
(rather than downloaded).  Also, some user agents treat downloads of
ZIP archives differently than other sorts of download (e.g., they
might offer to unzip them).

Adam


Re: [whatwg] [mimesniff] Sniffing archives

2012-12-03 Thread Julian Reschke

On 2012-12-04 08:40, Adam Barth wrote:

On Mon, Dec 3, 2012 at 12:39 PM, Julian Reschke julian.resc...@gmx.de wrote:

On 2012-11-29 20:25, Adam Barth wrote:

These are supported in Chrome.  That's what causes the download.  From


Can you elaborate about what you mean by "supported"? Chrome sniffs for the
type, and then offers to download as a result of that sniffing? How is that
different from not sniffing in the first place?


They might otherwise be treated as a type that can be displayed
(rather than downloaded).  Also, some user agents treat downloads of


Do you have an example for that case?


ZIP archives differently than other sorts of download (e.g., they
might offer to unzip them).


Out of curiosity: which?

Best regards, Julian