Re: [whatwg] Window id - a proposal to leverage session usage in web application

2010-02-10 Thread Martin Atkins

Sebastian Hennebrueder wrote:


thank you for the feedback. I hope that I have understood your point 
correctly. You are right that for JavaScript-based applications this can 
easily be solved with sessionStorage. All technologies built around 
Google Web Toolkit, Dojo, Echo, etc., which hold the state in the client 
and use a stateless server-side application, can use session storage to 
distinguish browser windows.


But there are many web technologies which hold the state on the server 
using the browser session: Ruby on Rails, JavaServer Faces, Wicket, 
Struts, and Tapestry, to name a few. Those technologies cannot make 
simple use of session storage. They are only aware of the browser 
session, which is a space shared by all browser windows. A window id 
would let them split the session into a per-browser-window scope.


Originally, when playing with the window id concept, I simulated a 
window id by storing it in sessionStorage and adding it, with the help 
of JavaScript, as a parameter to all links and as a hidden field in all 
forms. This works to some extent, but it pollutes the URL, is likely to 
cause problems with bookmarking, and there is a use case where it fails: 
opening a link in a new window. In that case the first request sends the 
wrong window id.





The server-side session management you describe is usually implemented 
via cookies, which as you note are scoped to a particular site and do 
not consider a particular window.


Cookies and sessionStorage are conceptually similar in that both of them 
are mechanisms to allow a site to store data on the client. 
sessionStorage sets the scope of this data to be a (site, window) tuple 
rather than just site.


So it seems like your use-case could also be addressed by providing an 
interface to sessionStorage that uses HTTP headers, allowing the server 
to use sessionStorage in the same way that cookies are used, without 
requiring client-side script (and thus without requiring the data to be 
set via an HTML page).


To emulate the server-side session mechanisms you describe, you'd simply 
use a single sessionStorage value containing a session id which gets set 
in response to any request that does not provide it.





Re: [whatwg] script postonload

2010-02-10 Thread Anne van Kesteren
On Mon, 08 Feb 2010 23:06:07 +0100, Steve Souders st...@souders.org  
wrote:
I'd like to propose the addition of a POSTONLOAD attribute to the SCRIPT  
tag.


The behavior would be similar to DEFER, but instead of delaying  
downloads until after parsing they would be delayed until after the  
window's load event. Similar to DEFER, this new attribute would ensure  
scripts were executed in the order they appear in the document, although  
it could be combined with ASYNC to have them execute as soon as the  
response is received.


Developers can do this now using JavaScript, but it's complex and  
error-prone. For example, how should the script be added to the document?  
People typically append to the 'head' element, but some pages don't have  
a 'head' element and some browsers don't create a default one. And  
'documentElement' doesn't work in all browsers either. The safest path  
I've seen is to append to ( head || body ). Whether or not everyone agrees  
this is best, it shows the complexity developers have to consider.


Adding this attribute would lower the bar, promoting this best practice  
for making web pages faster.
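The manual workaround described above can be sketched as follows (a minimal sketch; `insertScript` is a hypothetical helper name, and the `doc` parameter stands in for `document` so the insertion logic can be exercised outside a browser):

```javascript
// Append a script element to (head || body), as described above.
// The document is passed in as a parameter for testability.
function insertScript(doc, src) {
  var script = doc.createElement('script');
  script.src = src;
  // Some pages lack a head element, so fall back to body.
  var parent = doc.head || doc.body;
  parent.appendChild(script);
  return script;
}

// In a browser, the call would be deferred until after the load event:
// window.addEventListener('load', function () {
//   insertScript(document, 'analytics.js');
// });
```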


Which browsers do not create a head element? I thought we fixed our bug.  
Also, introducing new features mainly to work around existing bugs is  
generally not a good idea. We'd only increase the potential for  
interoperability issues.



--
Anne van Kesteren
http://annevankesteren.nl/


Re: [whatwg] comments on SCRIPT ASYNC and DEFER

2010-02-10 Thread Henri Sivonen
On Feb 8, 2010, at 23:54, Steve Souders wrote:

 It would be good to mention this optional behavior here, something along the 
 lines of browsers may want to do speculative parsing, but shouldn't create 
 DOM elements, etc. - only kick off HTTP requests.

FWIW, the HTML5 parser in Gecko (not on by default yet) does more than just 
kick off HTTP requests. However, what it does isn't supposed to be detectable 
by author-supplied scripts.

 4. If the element has a src attribute, [snip] the specified resource must 
 then be fetched, from the origin of the element's Document.
 If the script has DEFER, the request should not start until after parsing 
 is finished. Starting it earlier could block other (non-deferred) requests 
 due to a connection limit or limited bandwidth.

As I understand it, starting the request early is the whole point of 'defer'. 
Otherwise, the author could put those scripts at the end of the page.

-- 
Henri Sivonen
hsivo...@iki.fi
http://hsivonen.iki.fi/




[whatwg] Step base for input type=week

2010-02-10 Thread TAMURA, Kent

The default step base for type=week should be -259,200,000 (in milliseconds:
the beginning of 1970-W01) instead of 0.
If an implementation follows the current spec and an input element has no
min attribute, stepMismatch for the element can never be false, because the
step base is not aligned to the beginning of a week.
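The proposed value can be checked with a quick calculation (a sketch; it only confirms that -259,200,000 ms is three days before the epoch, landing on the Monday that begins 1970-W01):

```javascript
// 1970-01-01 (the Unix epoch) was a Thursday, so ISO week 1970-W01
// began three days earlier, on Monday 1969-12-29.
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const stepBase = -3 * MS_PER_DAY;

console.log(stepBase);                        // -259200000
console.log(new Date(stepBase).getUTCDay());  // 1, i.e. Monday
```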

--
TAMURA Kent
Software Engineer, Google





Re: [whatwg] Window id - a proposal to leverage session usage in web application

2010-02-10 Thread Sebastian Hennebrueder

Martin Atkins schrieb:

Sebastian Hennebrueder wrote:


thank you for the feedback. I hope that I have understood your point 
correctly. You are right that for JavaScript-based applications this 
can easily be solved with sessionStorage. All technologies built around 
Google Web Toolkit, Dojo, Echo, etc., which hold the state in the client 
and use a stateless server-side application, can use session storage to 
distinguish browser windows.


But there are many web technologies which hold the state on the server 
using the browser session: Ruby on Rails, JavaServer Faces, Wicket, 
Struts, and Tapestry, to name a few. Those technologies cannot make 
simple use of session storage. They are only aware of the browser 
session, which is a space shared by all browser windows. A window id 
would let them split the session into a per-browser-window scope.


Originally, when playing with the window id concept, I simulated a 
window id by storing it in sessionStorage and adding it, with the help 
of JavaScript, as a parameter to all links and as a hidden field in 
all forms. This works to some extent, but it pollutes the URL, is 
likely to cause problems with bookmarking, and there is a use case 
where it fails: opening a link in a new window. In that case the first 
request sends the wrong window id.





The server-side session management you describe is usually implemented 
via cookies, which as you note are scoped to a particular site and do 
not consider a particular window.


Cookies and sessionStorage are conceptually similar in that both of them 
are mechanisms to allow a site to store data on the client. 
sessionStorage sets the scope of this data to be a (site, window) tuple 
rather than just site.



I am aware of this and agree.
So it seems like your use-case could also be addressed by providing an 
interface to sessionStorage that uses HTTP headers, allowing the server 
to use sessionStorage in the same way that cookies are used, without 
requiring client-side script (and thus without requiring the data to be 
set via an HTML page).


To emulate the server-side session mechanisms you describe, you'd simply 
use a single sessionStorage value containing a session id which gets set 
in response to any request that does not provide it.




This sounds interesting and is probably much better aligned with the 
current sessionStorage/localStorage functionality. Just to make sure 
that I have understood you correctly:


Case a)
The user enters a fresh URL.
The server receives a request without a window-scoped sessionStorage id.
The server renders the page and adds a response header:
sessionStorage.myid: 3452345 // a random number
The browser reads the header and performs the equivalent of the JavaScript call
sessionStorage.myid = 3452345;

Case b)
A follow-up request from the browser includes the sessionStorage.myid value.
The server can read data from its window-scoped HTTP session:
// pseudo code on the server
id = request['sessionStorage.myid']
session = getSessionHashMap()
contextHashMap = session[id]
someValue = contextHashMap['someValueKey']

Case c)
Opening a link in a new browser window would behave like case a).
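Cases a) and b) can be sketched on the server side as follows (a minimal sketch: the SessionStorage-Myid header pair, the handle function, and the in-memory sessions map are all hypothetical, since no such header mechanism exists today):

```javascript
// Hypothetical server-side handling of a window-scoped session id.
// `req` and `res` stand in for a web framework's request/response
// objects; `sessions` maps window ids to per-window state.
const sessions = new Map();

function handle(req, res) {
  let id = req.headers['sessionstorage-myid'];
  if (!id) {
    // Case a): first request from this window -- assign a fresh id
    // and ask the browser to store it in sessionStorage.
    id = String(Math.floor(Math.random() * 1e9));
    res.headers['sessionstorage-myid'] = id;
  }
  // Case b): look up (or create) the per-window state.
  if (!sessions.has(id)) sessions.set(id, {});
  return sessions.get(id);
}
```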

- Possible issues to address:
The request headers might become too large. The browser should somehow 
be instructed to add only specific sessionStorage/localStorage values to 
the request. This could be a flag in the response header and/or a 
cookie-like approach.


A lot more security-related behaviour needs to be defined. I assume that 
it will follow roughly the same caveats as sessionStorage/localStorage 
on the client.


--
Best Regards / Viele Grüße

Sebastian Hennebrueder
-
Software Developer and Trainer for Hibernate / Java Persistence
http://www.laliluna.de




Re: [whatwg] [html5] r4685 - [e] (0) Add an example of forcing fallback from source.

2010-02-10 Thread Simon Pieters

On Wed, 10 Feb 2010 03:27:45 +0100, wha...@whatwg.org wrote:


Author: ianh
Date: 2010-02-09 18:27:42 -0800 (Tue, 09 Feb 2010)
New Revision: 4685

Modified:
   complete.html
   index
   source
Log:
[e] (0) Add an example of forcing fallback from source.

Modified: complete.html
===
--- complete.html   2010-02-10 02:14:17 UTC (rev 4684)
+++ complete.html   2010-02-10 02:27:42 UTC (rev 4685)
@@ -21749,8 +21749,32 @@
  /div
+  div class=example
+   pIf the author isn't sure if the user agents will all be able to
+   render the media resources provided, the author can listen to the
+   code title=event-errorerror/code event on the last
+   codea href=#the-source-elementsource/a/code element and  
trigger fallback behaviour:/p

+   prelt;video controls autoplaygt;
+ lt;source src='video.mp4' type='video/mp4; codecs=avc1.42E01E,  
mp4a.40.2'gt;

+ lt;source src='video.ogv' type='video/ogg; codecs=theora, vorbis'
+ onerror=fallback(parentNode)gt;
+ ...
+lt;/videogt;
+lt;scriptgt;
+ function fallback(video) {
+   // replace lt;videogt; with its contents
+   while (video.hasChildNodes())
+ video.parentNode.insertBefore(video.firstChild, video);
+   video.parentNode.removeChild(video);
+ }
+lt;/scriptgt;/pre


The script should probably be placed before the video, because it's possible  
that a UA will fire the error event before having parsed the script that  
defines the function.


Also, the script results in invalid HTML, since it ends up placing source  
elements outside of video.


--
Simon Pieters
Opera Software


Re: [whatwg] URN or protocol attribute

2010-02-10 Thread Simon Pieters
On Wed, 10 Feb 2010 08:55:56 +0100, Martin Atkins  
m...@degeneration.co.uk wrote:




Brett Zamir wrote:

Hi,
 Internet Explorer has an attribute on anchor elements for URNs:  
http://msdn.microsoft.com/en-us/library/ms534710%28VS.85%29.aspx



Internet Explorer supports a non-standard attribute on the A element  
called urn, which accepts a URN identifying some resource.


It is described in detail here:
http://msdn.microsoft.com/en-us/library/ms534710(VS.85).aspx

It is not apparent that this attribute causes any behavior in the  
browser itself. It is possible that this is exposed to browser  
extensions in some way to allow them to overload the behavior of a link  
which identifies a particular class of resource.


It does not seem that this attribute has achieved wide author adoption  
nor wide application support.


IE's .urn attribute is present on *all* elements, and is part of IE's  
namespaces. It's the equivalent of DOM's .namespaceURI.


--
Simon Pieters
Opera Software


Re: [whatwg] URN or protocol attribute

2010-02-10 Thread Thomas Broyer
On Wed, Feb 10, 2010 at 1:36 PM, Simon Pieters wrote:
 On Wed, 10 Feb 2010 08:55:56 +0100, Martin Atkins wrote:


 Brett Zamir wrote:

 Hi,
  Internet Explorer has an attribute on anchor elements for URNs:
 http://msdn.microsoft.com/en-us/library/ms534710%28VS.85%29.aspx


 Internet Explorer supports a non-standard attribute on the A element
 called urn, which accepts a URN identifying some resource.

 It is described in detail here:
 http://msdn.microsoft.com/en-us/library/ms534710(VS.85).aspx
[...]

 IE's .urn attribute is present on *all* elements, and is part of IE's
 namespaces. It's the equivalent of DOM's .namespaceURI.

Simon, you're confusing .urn with .tagUrn:
http://msdn.microsoft.com/en-us/library/ms534658(VS.85).aspx

-- 
Thomas Broyer
/tɔ.ma.bʁwa.je/


Re: [whatwg] Making cross-domain overlays more user-friendly

2010-02-10 Thread Scott González
The big disadvantage to this proposal is that it won't work until browsers
implement the functionality, which would discourage anyone from using it
since the fallback is that no overlay/infobar is presented. Rowan's
implementation will allow the overlay/infobar to be displayed, but would
keep the target page contained in an iframe. I would imagine that almost
everybody that wants an overlay/infobar would opt to use a method that
forces the overlay/infobar to be displayed, even if that means continuing
with their current implementations.


On Wed, Feb 10, 2010 at 2:00 AM, Martin Atkins m...@degeneration.co.ukwrote:

 Rowan Nairn wrote:

 Hi,

 In the spirit of paving some cow paths I'd like to put forward a
 proposal for a future version of HTML.  The behavior I'm addressing is
 sites that replace links to external content with a framed version of
 that content, along with their own overlay of information and links.
 I think with some small tweaks it's possible to create a better
 experience for the user in these situations.  I wasn't able to find
 any discussion of this use case in the archives.  Please excuse me if
 this has already been discussed.

  [snip details of proposal]

 Alternate proposal:

  * Add a new attribute on all hyperlink elements which accepts a URL as its
 value. Let's call this new attribute infobar for want of a better name for
 it.
 * When the user follows that link, create a view where the main browser
 window contains the page which was the href of the link, but where there is
 also a smaller docked view, perhaps along the top of the browser window,
 containing the page at the URL given in the infobar attribute.
  * Include a mandatory close button on the infobar panel.
  * Have this extra bar disappear if the user navigates away from the page
 or chooses the close button.
  * Have this extra bar disappear if the infobar document calls
 window.close.
  * Any links in the infobar document get loaded in the main browser window,
 which implicitly closes the infobar since this is a navigation in the main
 window.
  * If the infobar document causes navigation by a means other than a link,
 such as setting document.location, it is just closed.
  * The infobar document *may* use window.open to open other windows --
 subject to normal popup blocker restrictions -- if it needs to display some
 UI that does not fit into the infobar form factor.

 This proposal compromises the flexibility of UI in what you called the
 overlay document and what I called the infobar document. The most notable
 omission is that it does not allow the overlay site to choose how the
 infobar is displayed, since the main page is given precedence. It may be
 desirable to provide some means by which the infobar document can request a
 particular size, though the user-agent must impose restrictions on the size
 to prevent a situation where the information bar appears more prominent than
 the main document.

 This proposal, much like the ping attribute proposed previously, allows a
 user-agent to offer a feature whereby the infobar can be skipped altogether.
 It also causes the Referer header of the request to the main page to be the
 original linking page and not the infobar page.

 It still has the challenge that the UA must find a way to make it
 unambiguous that the infobar is *not* part of the main page, to prevent
 attacks such as including an infobar containing a login form which looks
 like it belongs to the main page when in fact it does not. One idea, which
 may be too draconian, is to prevent the infobar frame from accepting
 keyboard input altogether, and not allow any form elements within it to
 receive focus. The infobar document may open a new window using window.open
 if it needs to capture some information; that window will display the URL of
 the document in its location bar, allowing the user to see what site it's
 loaded from.

 However, it is likely that these restrictions will cause site implementers
 to continue with current practice in order to retain the flexibility they
 currently enjoy.





Re: [whatwg] video feedback

2010-02-10 Thread Brian Campbell
On Feb 9, 2010, at 9:03 PM, Ian Hickson wrote:

 On Sat, 31 Oct 2009, Brian Campbell wrote:
 
 As a multimedia developer, I am wondering about the purpose of the timeupdate
 event on media elements.
 
 Its primary use is keeping the UIs updated (specifically the timers and 
 the scrubber bars).
 
 
 On first glance, it would appear that this event would be useful for 
 synchronizing animations, bullets, captions, UI, and the like.
 
 Synchronising accompanying slides and animations won't work that well with 
 an event, since you can't guarantee the timing of the event or anything 
 like that. For anything where we want reliable synchronisation of multiple 
 media, I think we need a more serious solution -- either something like 
 SMIL, or the SMIL subset found in SVG, or some other solution.

Yes, but that doesn't exist at the moment, so our current choices are to use 
timeupdate or to use setInterval().

 At 4 timeupdate events per second, it isn't all that useful. I can 
 replace it with setInterval, at whatever rate I want, query the time, 
 and get the synchronization I need, but that makes the timeupdate event 
 seem to be redundant.
 
 The important thing with timeupdate is that it also fires whenever the 
 time changes in a significant way, e.g. immediately after a seek, or when 
 reaching the end of the resource, etc. Also, the user agent can start 
 lowering the rate in the face of high CPU load, which makes it more 
 user-friendly than setInterval().

I agree, it is important to be able to reduce the rate in the face of high CPU 
load, but as currently implemented in WebKit, if you use timeupdate to keep 
anything in sync with the video, it feels fairly laggy and jerky. This means 
that for higher quality synchronization, you need to use setInterval, which 
defeats the purpose of making timeupdate more user friendly.

Perhaps this is just a bug I should file to WebKit, as they are choosing an 
update interval at the extreme end of the allowed range for their default 
behavior; but I figured that it might make sense to mention a reasonable 
default value (such as 30 times per second, or once per frame displayed) in the 
spec, to give some guidance to browser vendors about what authors will be 
expecting.
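The kind of synchronization being discussed reduces to a pure lookup from the media's currentTime to the currently active cue, which a timeupdate (or setInterval) handler then applies (a sketch; the cue format and the findActiveCue helper are invented for illustration):

```javascript
// Given cues sorted by start time, return the index of the cue
// active at time t, or -1 if no cue has started yet.
function findActiveCue(cues, t) {
  let active = -1;
  for (let i = 0; i < cues.length; i++) {
    if (cues[i].start <= t) active = i;
    else break;
  }
  return active;
}

// In a browser, a timeupdate handler would apply the result:
// video.addEventListener('timeupdate', () => {
//   const i = findActiveCue(cues, video.currentTime);
//   if (i >= 0) showBullet(cues[i]);
// });
```

Because the handler re-derives the active cue from currentTime on every event, it stays correct across seeks and variable event rates, which is what makes the event-frequency question one of smoothness rather than correctness.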

 On Thu, 5 Nov 2009, Brian Campbell wrote:
 
 Would something like video firing events for every frame rendered 
 help you out?  This would help also fix the canvas over/under 
 painting issue and improve synchronization.
 
 Yes, this would be considerably better than what is currently specced.
 
 There surely is a better solution than copying data from the video 
 element to a canvas on every frame for whatever the problem that that 
 solves is. What is the actual use case where you'd do that?

This was not my use case (my use case was just synchronizing bullets, slide 
transitions, and animations to video), but an example I can think of is using 
this to composite video. Most (if not all) video formats supported by the 
video element in the various browsers do not store alpha channel information. 
In order to composite video against a dynamic background, authors may copy 
video data to a canvas, then set every pixel matching a given color to 
transparent.
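The color-keying step described above comes down to a per-pixel pass over RGBA data such as canvas getImageData() returns (a sketch; keyOut is a hypothetical helper, and the exact-match test would in practice need a tolerance to cope with compression artifacts):

```javascript
// Set alpha to 0 for every pixel matching the key color.
// `data` is a flat RGBA byte array, as in ImageData.data.
function keyOut(data, r, g, b) {
  for (let i = 0; i < data.length; i += 4) {
    if (data[i] === r && data[i + 1] === g && data[i + 2] === b) {
      data[i + 3] = 0; // fully transparent
    }
  }
  return data;
}

// In a browser, per frame:
// ctx.drawImage(video, 0, 0);
// const frame = ctx.getImageData(0, 0, w, h);
// keyOut(frame.data, 0, 255, 0);  // key out pure green
// ctx.putImageData(frame, 0, 0);
```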

This use case would clearly be better served by video formats that include 
alpha information, and implementations that support compositing video over 
other content, but given that we're having trouble finding any video format at 
all that the browsers can agree on, this seems to be a long way off, so 
stop-gap measures may be useful in the interim.

Compositing video over dynamic content is actually an extremely important use 
case for rich, interactive multimedia, which I would like to encourage browser 
vendors to implement, but I'm not even sure where to start, given the situation 
on formats and codecs. I believe I've seen this discussed for Theora, but it 
never went anywhere, and I don't have any idea how I'd even start getting 
involved in the MPEG standardization process.

 On Thu, 5 Nov 2009, Andrew Scherkus wrote:
 
 I'll see if we can do something for WebKit based browsers, because today 
 it literally is hardcoded to 250ms for all ports. 
 http://trac.webkit.org/browser/trunk/WebCore/html/HTMLMediaElement.cpp#L1254
 
 Maybe we'll end up firing events based on frame updates for video, and 
 something arbitrary for audio (as it is today).
 
 I strongly recommend making the ontimeupdate rate be sensitive to system 
 load, and no faster than one frame per second.

I'm assuming that you mean no faster than once per frame?

 On Fri, 6 Nov 2009, Philip Jägenstedt wrote:
 
 We've considered firing it for each frame, but there is one problem. If 
 people expect that it fires once per frame they will probably write 
 scripts which do frame-based animations by moving things n pixels per 
 frame or similar. Some animations are just easier to do this way, so 
 there's no reason to think that people won't do it. This will break 
 horribly if a browser is 

Re: [whatwg] Making cross-domain overlays more user-friendly

2010-02-10 Thread Rowan Nairn
Right.  With this type of proposal we lose the degrade gracefully
property which means implementors have to do twice the amount of work
or more.  I also think an attribute on hyperlinks is not the way to go
(at least not the only way).  Remember that the entity that is
providing the infobar will not necessarily have control over the
hyperlink that brought you there.  Many of these overlays are
triggered from a link on Twitter for example.

Rowan

2010/2/10 Scott González scott.gonza...@gmail.com:
 The big disadvantage to this proposal is that it won't work until browsers
 implement the functionality, which would discourage anyone from using it
 since the fallback is that no overlay/infobar is presented. Rowan's
 implementation will allow the overlay/infobar to be displayed, but would
 keep the target page contained in an iframe. I would imagine that almost
 everybody that wants an overlay/infobar would opt to use a method that
 forces the overlay/infobar to be displayed, even if that means continuing
 with their current implementations.


 On Wed, Feb 10, 2010 at 2:00 AM, Martin Atkins m...@degeneration.co.uk
 wrote:

 Rowan Nairn wrote:

 Hi,

 In the spirit of paving some cow paths I'd like to put forward a
 proposal for a future version of HTML.  The behavior I'm addressing is
 sites that replace links to external content with a framed version of
 that content, along with their own overlay of information and links.
 I think with some small tweaks it's possible to create a better
 experience for the user in these situations.  I wasn't able to find
 any discussion of this use case in the archives.  Please excuse me if
 this has already been discussed.

 [snip details of proposal]

 Alternate proposal:

  * Add a new attribute on all hyperlink elements which accepts a URL as
 its value. Let's call this new attribute infobar for want of a better name
 for it.
  * When the user follows that link, create a view where the main browser
 window contains the page which was the href of the link, but where there is
 also a smaller docked view, perhaps along the top of the browser window,
 containing the page at the URL given in the infobar attribute.
  * Include a mandatory close button on the infobar panel.
  * Have this extra bar disappear if the user navigates away from the page
 or chooses the close button.
  * Have this extra bar disappear if the infobar document calls
 window.close.
  * Any links in the infobar document get loaded in the main browser
 window, which implicitly closes the infobar since this is a navigation in
 the main window.
  * If the infobar document causes navigation by a means other than a link,
 such as setting document.location, it is just closed.
  * The infobar document *may* use window.open to open other windows --
 subject to normal popup blocker restrictions -- if it needs to display some
 UI that does not fit into the infobar form factor.

 This proposal compromises the flexibility of UI in what you called the
 overlay document and what I called the infobar document. The most notable
 omission is that it does not allow the overlay site to choose how the
 infobar is displayed, since the main page is given precedence. It may be
 desirable to provide some means by which the infobar document can request a
 particular size, though the user-agent must impose restrictions on the size
 to prevent a situation where the information bar appears more prominent than
 the main document.

 This proposal, much like the ping attribute proposed previously, allows
 a user-agent to offer a feature whereby the infobar can be skipped
 altogether. It also causes the Referer header of the request to the main
 page to be the original linking page and not the infobar page.

 It still has the challenge that the UA must find a way to make it
 unambiguous that the infobar is *not* part of the main page, to prevent
 attacks such as including an infobar containing a login form which looks
 like it belongs to the main page when in fact it does not. One idea, which
 may be too draconian, is to prevent the infobar frame from accepting
 keyboard input altogether, and not allow any form elements within it to
 receive focus. The infobar document may open a new window using window.open
 if it needs to capture some information; that window will display the URL of
 the document in its location bar, allowing the user to see what site it's
 loaded from.

 However, it is likely that these restrictions will cause site implementers
 to continue with current practice in order to retain the flexibility they
 currently enjoy.






Re: [whatwg] comments on SCRIPT ASYNC and DEFER

2010-02-10 Thread Steve Souders
In the current text, it says the resource must then be fetched. In my 
suggestion I say the request should not start until after parsing. Saying 
should instead of must leaves an opening for browsers that feel they can 
fetch immediately without negatively impacting performance.


-Steve

On 2/9/2010 6:39 PM, Boris Zbarsky wrote:

On 2/8/10 4:54 PM, Steve Souders wrote:

4. If the element has a src attribute, [snip] the specified resource
must then be fetched, from the origin of the element's Document.
If the script has DEFER, the request should not start until after
parsing is finished. Starting it earlier could block other
(non-deferred) requests due to a connection limit or limited bandwidth.


Shouldn't this be left up to a UA?  I can see a UA with high enough 
connection limits being willing to use some small number of those 
connections for deferred scripts even before parsing is done.  The 
alternative might end up being for the network to be completely idle 
while a bunch of parsing happens, followed by a flurry of deferred 
script loading activity.


-Boris




Re: [whatwg] video feedback

2010-02-10 Thread Eric Carlson

On Feb 10, 2010, at 8:01 AM, Brian Campbell wrote:

 On Feb 9, 2010, at 9:03 PM, Ian Hickson wrote:
 
 On Sat, 31 Oct 2009, Brian Campbell wrote:
 
 At 4 timeupdate events per second, it isn't all that useful. I can 
 replace it with setInterval, at whatever rate I want, query the time, 
 and get the synchronization I need, but that makes the timeupdate event 
 seem to be redundant.
 
 The important thing with timeupdate is that it also fires whenever the 
 time changes in a significant way, e.g. immediately after a seek, or when 
 reaching the end of the resource, etc. Also, the user agent can start 
 lowering the rate in the face of high CPU load, which makes it more 
 user-friendly than setInterval().
 
 I agree, it is important to be able to reduce the rate in the face of high 
 CPU load, but as currently implemented in WebKit, if you use timeupdate to 
 keep anything in sync with the video, it feels fairly laggy and jerky. This 
 means that for higher quality synchronization, you need to use setInterval, 
 which defeats the purpose of making timeupdate more user friendly.
 
 Perhaps this is just a bug I should file to WebKit, as they are choosing an 
 update interval at the extreme end of the allowed range for their default 
 behavior; but I figured that it might make sense to mention a reasonable 
 default value (such as 30 times per second, or once per frame displayed) in 
 the spec, to give some guidance to browser vendors about what authors will be 
 expecting.
 
  I disagree that 30 times per second is a reasonable default. I understand 
that it would be useful for what you want to do, but your use case is not 
typical. I think most pages won't listen for 'timeupdate' events at all, so 
instead of making every page incur the extra overhead of waking up, allocating, 
queueing, and firing an event 30 times per second, WebKit sticks with the 
minimum frequency the spec mandates, figuring that people like you who need 
something more can roll their own.


 On Thu, 5 Nov 2009, Brian Campbell wrote:
 
 Would something like video firing events for every frame rendered 
 help you out?  This would help also fix the canvas over/under 
 painting issue and improve synchronization.
 
 Yes, this would be considerably better than what is currently specced.
 
 There surely is a better solution than copying data from the video 
 element to a canvas on every frame for whatever the problem that that 
 solves is. What is the actual use case where you'd do that?
 
 This was not my use case (my use case was just synchronizing bullets, slide 
 transitions, and animations to video), but an example I can think of is using 
 this to composite video. Most (if not all) video formats supported by video 
 in the various browsers do not store alpha channel information. In order to 
 composite video against a dynamic background, authors may copy video data to 
 a canvas, then paint transparent to all pixels matching a given color.
 
 This use case would clearly be better served by video formats that include 
 alpha information, and implementations that support compositing video over 
 other content, but given that we're having trouble finding any video format 
 at all that the browsers can agree on, this seems to be a long way off, so 
 stop-gap measures may be useful in the interim.
 
 Compositing video over dynamic content is actually an extremely important use 
 case for rich, interactive multimedia, which I would like to encourage 
 browser vendors to implement, but I'm not even sure where to start, given the 
 situation on formats and codecs. I believe I've seen this discussed in 
 Theora, but never went anywhere, and I don't have any idea how I'd even start 
 getting involved in the MPEG standardization process.
 
  Have you actually tried this? Rendering video frames to a canvas and 
processing every pixel from script is *extremely* processor intensive; you are 
unlikely to get a reasonable frame rate.

  H.264 does support alpha (see the AVC spec, 2nd edition, section 7.3.2.1.2, 
Sequence parameter set extension), but we do not support it correctly in WebKit 
at the moment. *Please* file bugs against WebKit if you would like to see this 
properly supported. QuickTime movies support alpha for a number of video 
formats (e.g. PNG, Animation, Lossless), so you might give that a try.

eric


Re: [whatwg] comments on SCRIPT ASYNC and DEFER

2010-02-10 Thread Steve Souders

Two common scenarios where scripts aren't put at the bottom:
- Having talked to web devs across hundreds of companies it's often 
the case that they control a certain section of the page. Inserting 
content outside of that section requires changing so much 
infrastructure, they skip the optimization.
- 3rd parties have no control over where their snippet is placed in 
the content owner's page. Providing a snippet that contains DEFER will 
guarantee they don't block the main page's content.


-Steve

On 2/10/2010 1:31 AM, Henri Sivonen wrote:

On Feb 8, 2010, at 23:54, Steve Souders wrote:

   

It would be good to mention this optional behavior here, something along the 
lines of: browsers may want to do speculative parsing, but shouldn't create 
DOM elements, etc.; only kick off HTTP requests.
 

FWIW, the HTML5 parser in Gecko (not on by default yet) does more than just 
kick off HTTP requests. However, what it does isn't supposed to be detectable 
by author-supplied scripts.

   

4. If the element has a src attribute, [snip] the specified resource must then be 
fetched, from the origin of the element's Document.
 If the script has DEFER, the request should not start until after parsing 
is finished. Starting it earlier could block other (non-deferred) requests due 
to a connection limit or limited bandwidth.
 

As I understand it, starting the request early is the whole point of 'defer'. 
Otherwise, the author could put those scripts at the end of the page.

   


Re: [whatwg] script postonload

2010-02-10 Thread Steve Souders
Being able to replicate the behavior in JavaScript is not a valid reason 
to reject the proposal. For example, all the behavior of DEFER and ASYNC 
can be replicated using JavaScript and yet those attributes are also 
proposed. The point is to lower the bar to get wider adoption. Adding 
DEFER is significantly simpler than adding a load handler that 
dynamically appends a SCRIPT element to the DOM.


-Steve

On 2/9/2010 6:40 PM, Boris Zbarsky wrote:

On 2/8/10 5:06 PM, Steve Souders wrote:

The behavior would be similar to DEFER, but instead of delaying
downloads until after parsing they would be delayed until after the
window's load event.


Is this meant to be used for scripts one has no control over?  If not, 
then just making all the interesting parts of the script happen after 
the load event (which would also need to be defined) seems pretty 
simple, right?


-Boris


Re: [whatwg] video feedback

2010-02-10 Thread Boris Zbarsky

On 2/10/10 1:37 PM, Eric Carlson wrote:

   Have you actually tried this? Rendering video frames to a canvas and 
processing every pixel from script is *extremely* processor-intensive; you are 
unlikely to get a reasonable frame rate.


There's a demo that does just this at 
http://people.mozilla.com/~prouget/demos/green/green.xhtml


-Boris


Re: [whatwg] video feedback

2010-02-10 Thread Brian Campbell
On Feb 10, 2010, at 1:37 PM, Eric Carlson wrote:

 
 On Feb 10, 2010, at 8:01 AM, Brian Campbell wrote:
 
 On Feb 9, 2010, at 9:03 PM, Ian Hickson wrote:
 
 On Sat, 31 Oct 2009, Brian Campbell wrote:
 
 At 4 timeupdate events per second, it isn't all that useful. I can 
 replace it with setInterval, at whatever rate I want, query the time, 
 and get the synchronization I need, but that makes the timeupdate event 
 seem to be redundant.
 
 The important thing with timeupdate is that it also fires whenever the 
 time changes in a significant way, e.g. immediately after a seek, or when 
 reaching the end of the resource, etc. Also, the user agent can start 
 lowering the rate in the face of high CPU load, which makes it more 
 user-friendly than setInterval().
 
 I agree, it is important to be able to reduce the rate in the face of high 
 CPU load, but as currently implemented in WebKit, if you use timeupdate to 
 keep anything in sync with the video, it feels fairly laggy and jerky. This 
 means that for higher-quality synchronization you need to use setInterval, 
 which defeats the purpose of making timeupdate more user-friendly.
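
For what it's worth, a setInterval-based workaround can at least avoid redundant repaints by only updating when the polled time crosses a display threshold. A minimal sketch (the function name and the 0.1 s resolution are my assumptions, not any browser API):

```javascript
// True when curTime has crossed into a new display tick of the given
// resolution (e.g. 0.1 s for a tenths-of-a-second time readout).
function shouldRedraw(prevTime, curTime, resolution) {
  return Math.floor(curTime / resolution) !== Math.floor(prevTime / resolution);
}

// Browser usage would look roughly like:
//   var last = 0;
//   setInterval(function () {
//     if (shouldRedraw(last, video.currentTime, 0.1)) updateUI(video.currentTime);
//     last = video.currentTime;
//   }, 15);
```

This only reduces repaint work; it does nothing about the CPU-load adaptation that timeupdate is meant to provide.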
 
 Perhaps this is just a bug I should file to WebKit, as they are choosing an 
 update interval at the extreme end of the allowed range for their default 
 behavior; but I figured that it might make sense to mention a reasonable 
 default value (such as 30 times per second, or once per frame displayed) in 
 the spec, to give some guidance to browser vendors about what authors will 
 be expecting.
 
 I disagree that 30 times per second is a reasonable default. I understand 
 that it would be useful for what you want to do, but your use case is not 
 typical. I think most pages won't listen for 'timeupdate' events at all, so 
 instead of making every page incur the extra overhead of waking up, 
 allocating, queueing, and firing an event 30 times per second, WebKit sticks 
 with the minimum frequency the spec mandates, figuring that people like you 
 who need something more can roll their own.

Do browsers fire events for which there are no listeners? It seems like it 
would be easiest to just not fire these events if no one is listening to them.

And as Ian pointed out, just basic video UI can be better served by having at 
least 10 updates per second, if you want to show time at a resolution of tenths 
of a second.

 On Thu, 5 Nov 2009, Brian Campbell wrote:
 
 Would something like video firing events for every frame rendered 
 help you out?  This would help also fix the canvas over/under 
 painting issue and improve synchronization.
 
 Yes, this would be considerably better than what is currently specced.
 
  There surely is a better solution than copying data from the video 
  element to a canvas on every frame, whatever problem that is meant to 
  solve. What is the actual use case where you'd do that?
 
  This was not my use case (my use case was just synchronizing bullets, slide 
  transitions, and animations to video), but an example I can think of is 
  using this to composite video. Most (if not all) video formats supported by 
  the video element in the various browsers do not store alpha channel 
  information. In order to composite video against a dynamic background, 
  authors may copy video data to a canvas, then set every pixel matching a 
  given color to transparent.
 
 This use case would clearly be better served by video formats that include 
 alpha information, and implementations that support compositing video over 
 other content, but given that we're having trouble finding any video format 
 at all that the browsers can agree on, this seems to be a long way off, so 
 stop-gap measures may be useful in the interim.
 
  Compositing video over dynamic content is actually an extremely important 
  use case for rich, interactive multimedia, which I would like to encourage 
  browser vendors to implement, but I'm not even sure where to start, given 
  the situation on formats and codecs. I believe I've seen this discussed for 
  Theora, but it never went anywhere, and I don't have any idea how I'd even 
  start getting involved in the MPEG standardization process.
 
  Have you actually tried this? Rendering video frames to a canvas and 
  processing every pixel from script is *extremely* processor-intensive; you 
  are unlikely to get a reasonable frame rate. 

Mozilla has a demo of this working, in Firefox only:

https://developer.mozilla.org/samples/video/chroma-key/index.xhtml

But no, this isn't something I would consider to be production quality. But 
perhaps if the WebGL typed arrays catch on, and start being used in more 
places, you might be able to start doing this with reasonable performance.

  H.264 does support alpha (see the AVC spec, 2nd edition, section 7.3.2.1.2, 
  Sequence parameter set extension), but we do not support it correctly in 
  WebKit at the moment. *Please* file bugs against WebKit if you would like to 
  see this properly supported. QuickTime movies support alpha for 

Re: [whatwg] script postonload

2010-02-10 Thread Boris Zbarsky

On 2/10/10 1:55 PM, Steve Souders wrote:

Being able to replicate the behavior in JavaScript is not a valid reason
to reject the proposal.


No, but it _is_ a reason to carefully consider the complexity the 
proposal introduces against the possible benefits of the proposal and to 
perhaps examine the cases where content right now does this thing that 
we're presumably proposing to make easier.



For example, all the behavior of DEFER and ASYNC can be replicated using 
JavaScript


That's not the case, actually.  The behavior of DEFER (eager load start, 
deferred script execution, not blocking the parser or other scripts 
while loading) cannot in fact be replicated using JavaScript in a 
cross-browser manner.  The behavior of ASYNC (and in particular its 
allowing scripts to run in order other than DOM insertion order, 
combined with the eager loading it triggers, not blocking other scripts, 
and execution when the load is done) can't be replicated cross-browser 
either, unless I'm missing something.


 Adding DEFER is significantly simpler than adding a load handler that

dynamically appends a SCRIPT element to the DOM.


And has a different behavior from such a load handler.  On the other 
hand, a script that instead of having this structure:


  foo();
  bar();

has this structure:

  window.addEventListener("load", function() {
foo();
bar();
  }, false);

(and the IE version thereof) does in fact have behavior identical to 
your proposal, at least insofar as your proposal has a defined behavior 
at the moment.  And works in all browsers modulo the 
addEventListener/attachEvent difference.


-Boris


Re: [whatwg] video feedback

2010-02-10 Thread Boris Zbarsky

On 2/10/10 2:19 PM, Brian Campbell wrote:

Do browsers fire events for which there are no listeners?


It varies.  Gecko, for example, fires image load events no matter what 
but only fires mutation events if there are listeners.


-Boris


Re: [whatwg] script postonload

2010-02-10 Thread Jonas Sicking
On Wed, Feb 10, 2010 at 12:57 AM, Anne van Kesteren ann...@opera.com wrote:
 On Mon, 08 Feb 2010 23:06:07 +0100, Steve Souders st...@souders.org wrote:

 I'd like to propose the addition of a POSTONLOAD attribute to the SCRIPT
 tag.

 The behavior would be similar to DEFER, but instead of delaying downloads
 until after parsing they would be delayed until after the window's load
 event. Similar to DEFER, this new attribute would ensure scripts were
 executed in the order they appear in the document, although it could be
 combined with ASYNC to have them execute as soon as the response is
 received.

 Developers can do this now using JavaScript, but it's complex and
 error-prone. For example, how should the script be added to the document?
 People typically append to the 'head' element, but some pages don't have a
 'head' element and some browsers don't create a default one. And
 'documentElement' doesn't work in all browsers either. The safest path I've
 seen is to append to (head || body). Whether or not everyone agrees this is
 best, it reveals the complexity developers will have to consider.

 Adding this attribute would lower the bar, promoting this best practice for
 making web pages faster.
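
The append-to-(head || body) pattern Steve describes might be sketched as follows. The document object is passed in explicitly here only so the helper is testable outside a browser, and the exact fallback chain is an assumption about what "safest" means:

```javascript
// Create a script element for the given URL and append it to the first
// available insertion point: head, then body, then the root element.
function loadScriptAsync(src, doc) {
  var s = doc.createElement('script');
  s.src = src;
  s.async = true;
  var parent = doc.head || doc.body || doc.documentElement;
  parent.appendChild(s);
  return s;
}
```

In a real page you would call loadScriptAsync(url, document) from a load handler; the complexity Steve points at is exactly the fallback chain in the middle line.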

 Which browsers do not create a head element? I thought we fixed our bug.
 Also, introducing new features mainly to work around existing bugs is
 generally not a good idea. We'd only increase the potential for
 interoperability issues.

Also note that introducing a new feature X in order to work around
shortcomings in implementations in another feature Y doesn't really
make sense. What is to say that you'll get interoperability in X any
earlier than in Y?

In other words, what is to say that browsers will implement postonload
before they'll implement .head or .documentElement?

/ Jonas


Re: [whatwg] script postonload

2010-02-10 Thread Jonas Sicking
On Wed, Feb 10, 2010 at 11:26 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 For example, all the behavior of DEFER and ASYNC can be replicated using
 JavaScript

 That's not the case, actually.  The behavior of DEFER (eager load start,
 deferred script execution, not blocking the parser or other scripts while
 loading) cannot in fact be replicated using JavaScript in a cross-browser
 manner.  The behavior of ASYNC (and in particular its allowing scripts to
 run in order other than DOM insertion order, combined with the eager loading
 it triggers, not blocking other scripts, and execution when the load is
 done) can't be replicated cross-browser either, unless I'm missing
 something.

ASYNC can be implemented in most browsers, actually. In browsers other
than Firefox (and possibly Opera), creating an element using the DOM
and inserting it into a document gives the same behavior as ASYNC
scripts.

I'm planning on fixing this in Firefox for 3.7.

/ Jonas


Re: [whatwg] script postonload

2010-02-10 Thread Jonas Sicking
On Wed, Feb 10, 2010 at 11:40 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Feb 10, 2010 at 12:57 AM, Anne van Kesteren ann...@opera.com wrote:
 On Mon, 08 Feb 2010 23:06:07 +0100, Steve Souders st...@souders.org wrote:

 I'd like to propose the addition of a POSTONLOAD attribute to the SCRIPT
 tag.

 The behavior would be similar to DEFER, but instead of delaying downloads
 until after parsing they would be delayed until after the window's load
 event. Similar to DEFER, this new attribute would ensure scripts were
 executed in the order they appear in the document, although it could be
 combined with ASYNC to have them execute as soon as the response is
 received.

 Developers can do this now using JavaScript, but it's complex and
 error-prone. For example, how should the script be added to the document?
 People typically append to the 'head' element, but some pages don't have a
 'head' element and some browsers don't create a default one. And
 'documentElement' doesn't work in all browsers either. The safest path I've
 seen is to append to (head || body). Whether or not everyone agrees this is
 best, it reveals the complexity developers will have to consider.

 Adding this attribute would lower the bar, promoting this best practice for
 making web pages faster.

 Which browsers do not create a head element? I thought we fixed our bug.
 Also, introducing new features mainly to work around existing bugs is
 generally not a good idea. We'd only increase the potential for
 interoperability issues.

 Also note that introducing a new feature X in order to work around
 shortcomings in implementations in another feature Y doesn't really
 make sense. What is to say that you'll get interoperability in X any
 earlier than in Y?

 In other words, what is to say that browsers will implement postonload
 before they'll implement .head or .documentElement?

Erm.. sorry, you already said that. Never mind me, nothing to see
here. Carry on!

/ Jonas


Re: [whatwg] video feedback

2010-02-10 Thread Jonas Sicking
On Wed, Feb 10, 2010 at 11:29 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 2/10/10 2:19 PM, Brian Campbell wrote:

 Do browsers fire events for which there are no listeners?

 It varies.  Gecko, for example, fires image load events no matter what but
 only fires mutation events if there are listeners.

However checking for listeners has a non-trivial cost. You have to
walk the full parentNode chain and see if any of the parents has a
listener. This applies to both bubbling and non-bubbling events due to
the capture phase.
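
The cost Jonas describes can be sketched with plain objects standing in for DOM nodes; the `listeners` field here is an assumption of this sketch, not a real DOM property:

```javascript
// An event fired at `node` is observable from a capture-phase listener on
// any ancestor, so deciding whether dispatch can be skipped means walking
// the full parentNode chain, even for non-bubbling events.
function anyAncestorListens(node, type) {
  for (var n = node; n; n = n.parentNode) {
    if (n.listeners && n.listeners[type]) return true;
  }
  return false;
}
```

The walk is O(depth) per event, which is why doing it dozens of times per second for every media element is not free.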

Also, a feature which requires implementations to optimize for the
feature not being used seems like a questionable feature to me. We
want people to use the stuff we're creating; there's little point
otherwise.

/ Jonas


Re: [whatwg] script postonload

2010-02-10 Thread Boris Zbarsky

On 2/10/10 2:44 PM, Jonas Sicking wrote:

ASYNC can be implemented in most browsers actually.


Yes, I was pretty careful with my use of "in a cross-browser manner"... ;)

-Boris


Re: [whatwg] comments on SCRIPT ASYNC and DEFER

2010-02-10 Thread Jonas Sicking
On Wed, Feb 10, 2010 at 10:49 AM, Steve Souders wha...@souders.org wrote:
 Two common scenarios where scripts aren't put at the bottom:
    - Having talked to web devs across hundreds of companies it's often the
 case that they control a certain section of the page. Inserting content
 outside of that section requires changing so much infrastructure, they skip
 the optimization.
    - 3rd parties have no control over where their snippet is placed in the
 content owner's page. Providing a snippet that contains DEFER will
 guarantee they don't block the main page's content.

I think there are use cases both for wanting to start fetching as soon
as possible, and to start fetching at the end in order to avoid
blocking.

However defer is already implemented in IE and Firefox as a way to
start fetching as soon as possible, but only execute at the end, so
I'm reluctant to change it at this point.

Instead, if the use cases are strong enough, I think we need to
introduce another mechanism for delaying a script's load
until after the 'load' event has fired. I think it's an interesting
idea to add a 'postonload' attribute to all resources, such as
script, img and link rel=stylesheet (though maybe the name
could be better).

/ Jonas


Re: [whatwg] video feedback

2010-02-10 Thread Robert O'Callahan
On Thu, Feb 11, 2010 at 8:19 AM, Brian Campbell lam...@continuation.orgwrote:

 But no, this isn't something I would consider to be production quality. But
 perhaps if the WebGL typed arrays catch on, and start being used in more
 places, you might be able to start doing this with reasonable performance.


With WebGL you could do the chroma-key processing on the GPU, and
performance should be excellent. In fact you could probably prototype this
today in Firefox.
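
A sketch of what the GPU version might look like. The shader below is illustrative only (the uniform names and the tolerance test are my assumptions), and the surrounding WebGL plumbing is omitted: compiling and linking the program, uploading each video frame with texImage2D, and drawing a textured quad per frame.

```javascript
// Illustrative GLSL fragment shader source: key out pixels near uKeyColor
// by zeroing their alpha, leaving the rest of the frame untouched.
var chromaKeyFragmentShader = [
  'precision mediump float;',
  'uniform sampler2D uVideoFrame;',
  'uniform vec3 uKeyColor;',
  'uniform float uTolerance;',
  'varying vec2 vTexCoord;',
  'void main() {',
  '  vec4 c = texture2D(uVideoFrame, vTexCoord);',
  '  float d = distance(c.rgb, uKeyColor);',
  '  gl_FragColor = vec4(c.rgb, d < uTolerance ? 0.0 : c.a);',
  '}'
].join('\n');
```

The per-pixel work moves entirely onto the GPU, which is why performance should be far better than the getImageData loop discussed earlier in the thread.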

Rob
-- 
He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all. [Isaiah
53:5-6]


Re: [whatwg] video feedback

2010-02-10 Thread Gregory Maxwell
On Wed, Feb 10, 2010 at 4:37 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Thu, Feb 11, 2010 at 8:19 AM, Brian Campbell lam...@continuation.org
 wrote:

 But no, this isn't something I would consider to be production quality.
 But perhaps if the WebGL typed arrays catch on, and start being used in more
 places, you might be able to start doing this with reasonable performance.

 With WebGL you could do the chroma-key processing on the GPU, and
 performance should be excellent. In fact you could probably prototype this
 today in Firefox.

You're not going to get solid professional quality keying results just
by depending on a client side keying algorithm, even a computationally
expensive one, without the ability to perform manual fixups.

Being able to manipulate video data on the client is a powerful tool,
but it's not necessarily the right tool for every purpose.


Re: [whatwg] script postonload

2010-02-10 Thread Jonas Sicking
On Wed, Feb 10, 2010 at 12:14 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 2/10/10 2:44 PM, Jonas Sicking wrote:

 ASYNC can be implemented in most browsers actually.

 Yes, I was pretty careful with my use of "in a cross-browser manner"... ;)

However even if Firefox (and maybe Opera) had behaved the same as
webkit/IE, I still think that the async attribute would have been
useful to implement as it allows websites to use a much cleaner
syntax.

Ultimately it all comes down to how common we think a use case is and
the cost of not supporting that use case directly, vs. cost of forever
expanding and supporting the addition to the platform.

I think it would be interesting to discuss resource downloading in a
bit more generic way. I.e. ways to postpone the download of certain
resources, prioritization of various resources, etc. That way we can
probably increase the number of use cases supported, without
significantly increasing the added complexity to the platform.

/ Jonas


Re: [whatwg] video feedback

2010-02-10 Thread Silvia Pfeiffer
On Thu, Feb 11, 2010 at 3:01 AM, Brian Campbell lam...@continuation.org wrote:
 On Feb 9, 2010, at 9:03 PM, Ian Hickson wrote:

 On Sat, 7 Nov 2009, Silvia Pfeiffer wrote:

 I use timeupdate to register a callback that will update
 captions/subtitles.

 That's only a temporary situation, though, so it shouldn't inform our
 decision. We should in due course develop much better solutions for
 captions and time-synchronised animations.

 The problem is, due to the slow pace of standards and browser development, we 
 can sometimes be stuck with a temporary feature for many years. How long 
 until enough IE users support HTML6 (or whatever standard includes a 
 time-synchronization feature) for it to be usable? 10, 15 years?

Even when we have a standard means of associating captions/subtitles
with audio/video, we will still want to allow overriding the default
presentation of these and doing it all in JavaScript ourselves.

I have just been pointed to a cool lyrics demo at
http://svg-wow.org/audio/animated-lyrics.html which uses an audio file
and essentially a caption file to display the lyrics in sync in SVG.
The problem is that they are using setInterval and setTimeout on the
audio, and that breaks synchronisation for me, probably because loading
the audio over a long-distance connection takes longer than no time.

Honestly, you cannot use setInterval for synchronising with a/v. You
really need timeupdate.
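
A timeupdate-driven caption lookup can be as simple as the following sketch; the cue format is an assumption here, and a real caption file would first be parsed into it:

```javascript
// Return the index of the cue active at time t (seconds), or -1 if none.
// Cues are objects of the form { start: 0, end: 2, text: '...' }.
function activeCueIndex(cues, t) {
  for (var i = 0; i < cues.length; i++) {
    if (t >= cues[i].start && t < cues[i].end) return i;
  }
  return -1;
}

// Hooked up to a media element, driven by the element's own clock rather
// than a wall-clock timer:
//   audio.addEventListener('timeupdate', function () {
//     showCaption(activeCueIndex(cues, audio.currentTime));
//   }, false);
```

Because the lookup is keyed off currentTime, it stays correct across seeks and slow loads, which is exactly where the setInterval approach falls apart.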

Maybe one option for pages that need a higher event firing rate than
the default of the browser is to introduce a JavaScript API that lets
it be set to anything between once per frame (25Hz) and every 250ms
(4Hz)? I'm just wary of what it may do to the responsiveness of the
browser, and whether the browser could refuse if it knew it would kill
performance.

Cheers,
Silvia.


[whatwg] DOMContentLoaded and stylesheets

2010-02-10 Thread Mathias Schäfer
Hello everyone,

In a JavaScript tutorial, I wanted to explain what DOMContentLoaded
actually does. But the tests I made revealed that there isn't a
consistent behavior across browsers with regard to stylesheets. In fact,
it's a total mess. These are the results of my tests:

http://molily.de/weblog/domcontentloaded

Please have a quick look at these findings (you can skip the
introduction part). My questions are:

1. Am I right that HTML5 will standardize Opera's pure DOMContentLoaded
model, never waiting for stylesheets? My assumption is that this will
break compatibility with the current Gecko and Webkit implementations.

2. Does the HTML5 parser specify that external stylesheets defer
external script execution? As far as I understand the specs, it doesn't.
But according to my tests, that's how it is implemented in Gecko, WebKit
and Internet Explorer. Many scripts on the Web seem to rely on this
non-standard feature.

In Gecko and IE, the loading of stylesheets also defers the execution of
subsequent *inline* scripts. I haven't found a rule for that in the
HTML5 parsing algorithm either. Does it conform to the specs, is it
against the rules or a legitimate extension which is not covered by HTML5?

3. If I'm right in my conclusions, could HTML5 provide a solution for
these quirks? Does HTML5 require browsers to change their parsing
behavior with regard to stylesheets as well as their DOMContentLoaded
handling? Is it likely that the browser vendors will do so? What would
be a proper solution?

(By the way, I tried the experimental HTML5 parser in Gecko, but it
seems to apply the same rules as the standard parser.)

Regards,
Mathias


Re: [whatwg] DOMContentLoaded and stylesheets

2010-02-10 Thread Boris Zbarsky

On 2/10/10 6:55 PM, Mathias Schäfer wrote:

In a JavaScript tutorial, I wanted to explain what DOMContentLoaded
actually does.


It fires once the parser has consumed the entire input stream, such that 
you can rely on all the parser-created DOM nodes being present.  This is 
true in all implementations of DOMContentLoaded.  What is not consistent 
is the ordering of this event with the loading of various subresources.



1. Am I right that HTML5 will standardize Opera's pure DOMContentLoaded
model, never waiting for stylesheets? My assumption is that this will
break compatibility with the current Gecko and Webkit implementations.


Gecko currently does not wait on stylesheet loads to complete before 
firing DOMContentLoaded.  They might complete before the parser is done, 
or they might not.



2. Does the HTML5 parser specify that external stylesheets defer
external script execution? As far as I understand the specs, it doesn't.


http://www.whatwg.org/specs/web-apps/current-work/multipage/scripting-1.html#running-a-script 
step 8: the cases that talk about a style sheet blocking scripts 
specify this.


I really wish those steps had individual IDs, and so did the cases 
inside them.  It'd make it a lot easier to link to them!



In Gecko and IE, the loading of stylesheets also defers the execution of
subsequent *inline* scripts. I haven't found a rule for that in the
HTML5 parsing algorithm either.


See above.

-Boris


Re: [whatwg] should async scripts block the document's load event?

2010-02-10 Thread Jonas Sicking
On Fri, Nov 6, 2009 at 4:22 PM, Brian Kuhn bnk...@gmail.com wrote:
 No one has any thoughts on this?
 It seems to me that the purpose of async scripts is to get out of the way of
 user-visible functionality.  Many sites currently attach user-visible
 functionality to window.onload, so it would be great if async scripts at
 least had a way to not block that event.  It would help minimize the effect
 that secondary functionality like ads and web analytics has on the user
 experience.
 -Brian

I'm concerned that this is too big of a departure from how people are
used to scripts behaving.

If we do want to do something like this, one possibility would be to
create a generic attribute that can go on things like img, link
rel=stylesheet, script etc that make the resource not block the
'load' event.

/ Jonas


[whatwg] Parser-related feedback

2010-02-10 Thread Ian Hickson
On Thu, 29 Oct 2009, Matt Hall wrote:

 Prior to r4177, the matching of tag names for exiting the RCDATA/RAWTEXT 
 states was done as follows:

 ...and the next few characters do not match the tag name of the last 
 start tag token emitted (compared in an ASCII case-insensitive manner)
 
 However, the current revision doesn't include any comment on character 
 casing in its discussion of Appropriate End Tags.  Similarly, certain 
 tokenizer states require that you check the contents of the temporary 
 buffer against the string script but there is no indication of 
 whether or not to do this in a case-insensitive manner.

 In both cases, should this comparison be done in an ASCII 
 case-insensitive manner or not? It might be helpful to clarify the spec 
 in both places in either case.

On Thu, 29 Oct 2009, Geoffrey Sneddon wrote:
 
 It is already case-insensitive as you lowercase the characters when 
 creating the token name and when adding them to the buffer.

Indeed.


On Fri, 30 Oct 2009, Matt Hall wrote:

 When the script data state was added to the tokenizer, the tree 
 construction algorithm was updated to switch the tokenizer into this 
 state upon finding a start tag named script while in the in head 
 insertion mode (9.2.5.7). I see that a corresponding change was not made 
 to 9.5 about Parsing HTML Fragments as it still says to switch into 
 the RAWTEXT state upon finding a script tag. Does anyone know if this 
 difference is intentional, or did someone just forget to update the 
 fragment parsing case?

There's a comment now mentioning this explicitly. Is it ok?


On Tue, 10 Nov 2009, Kartikaya Gupta wrote:

 If you have a page like this:
 
 <!DOCTYPE HTML>
 <html><body>
 <font size=2 face=Verdana>
 <p align=left>Some text
 <font size=2 face=Verdana>
 <p align=left>Some text
 </body></html>
 
 according to the HTML5 parser rules, I believe this should create a DOM with 
 3 font elements that looks something like this:
 
 <!DOCTYPE HTML><HTML><HEAD></HEAD><BODY>
 <FONT face=Verdana size=2>
 <P align=left>Some text
 <FONT face=Verdana size=2>
 </FONT></P><P align=left><FONT size=2 face=Verdana>Some text
 
 </FONT></P></FONT></BODY></HTML>
 
 However, if you extend the original source with another <font>/<p> 
 combination, like so:
 
 <!DOCTYPE HTML>
 <html><body>
 <font size=2 face=Verdana>
 <p align=left>Some text
 <font size=2 face=Verdana>
 <p align=left>Some text
 <font size=2 face=Verdana>
 <p align=left>Some text
 </body></html>
 
 You end up with a DOM which has 6 font elements:
 
 <!DOCTYPE HTML><HTML><HEAD></HEAD><BODY>
 <FONT face=Verdana size=2>
 <P align=left>Some text
 <FONT face=Verdana size=2>
 </FONT></P><P align=left><FONT size=2 face=Verdana>Some text
 <FONT face=Verdana size=2>
 </FONT></FONT></P><P align=left><FONT face=Verdana size=2><FONT 
 size=2 face=Verdana>Some text
 
 </FONT></FONT></P></FONT></BODY></HTML>
 
 .. and so on. In general the number of font elements in the DOM grows 
 polynomially, with the result that pages like [1] and [2] end up with 
 hundreds of thousands of font elements. I haven't even been able to 
 successfully parse [3] with either our own HTML5 parser or the one at 
 validator.nu, it just gobbles up all available memory and asks for more.
 
 [1] http://www.miprepzone.com/past.asp?Category=%27news%27
 [2] http://info4.juridicas.unam.mx/ijure/tcfed/8.htm?s=
 [3] http://info4.juridicas.unam.mx/ijure/tcfed/1.htm?s=
 
 Is this behavior expected, or is it a bug in the spec? Obviously 
 shipping browsers don't demonstrate this behavior (nor does Firefox's 
 HTML5 parser - see bugzilla 525960) so I'm wondering if the spec could 
 be modified to not have this polynomial-growth behavior.

I haven't checked if the exact behaviour you describe is what the spec 
currently requires, but in general, there will always be cases where input 
has a disproportionate effect on output, because backwards-compatible fixup 
is basically constrained to very few possibilities, all of which have this 
behaviour in certain cases.

In practice, it's not a huge issue, because you have to cope with these 
cases even just to handle regular valid documents -- consider for example 
an infinite document whose body is just <font><font><font><font>... with 
no close tags. There are a number of pages on the Web that approximate 
this, for example:

   http://www.frikis.org/images/ascii/tux.html


On Tue, 24 Nov 2009, Daniel Glazman wrote:
 
 I think that insertAdjacentHTML as defined in current section 3.5.7 [1] 
 could be much cleaner and clearer if
 
 1 - Adjacent was dropped. It's useless. The name could be insertHTML.
 
 2. if the values were before, firstchild, lastchild, after
instead of the current beforebegin, afterbegin, beforeend and
afterend, which seem to me visually related to start and end tags
and not the element itself. Consistency with the existing DOM
phraseology seems to me useful.
 
 [1] 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/apis-in-html-documents.html#insertadjacenthtml%28%29

On Tue, 24 Nov 2009, Anne van Kesteren wrote:
 
 The problem is that it 

Re: [whatwg] History API, pushState(), and related feedback

2010-02-10 Thread Justin Lebar
 On Thu, Jan 14, 2010, Hixie...oh dear.

 On Tue, 18 Aug 2009, Justin Lebar wrote:
 (An attempt at describing how pushstate is supposed to be used.)

 That's not quite how I would describe it. It's more that each entry in the
 session history has a URL and optionally some data. The data can be used
 for two main purposes: first, storing a preparsed description of the state
 in the URL so that in the simple case you don't have to do the parsing
 (though you still need the parsing for handling URLs passed around by
 users, so it's only a minor optimisation), and second, so that you can
 store state that you wouldn't store in the URL, because it only applies to
 the current Document instance and you would have to reconstruct it if a
 new Document were opened.

 An example of the latter would be something like keeping track of the
 precise coordinate from which a popup div was made to animate, so that
 if the user goes back, it can be made to animate to the same location. Or
 alternatively, it could be used to keep a pointer into a cache of data
 that would be fetched from the server based on the information in the URL,
 so that when going back and forward, the information doesn't have to be
 fetched again.

 Basically, any information that you would not include in a URL describing
 the page, but which could be useful when going backwards and forwards in
 the history.
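The first purpose described above can be sketched as follows (a hypothetical example; parseStateFromURL and the page parameter are invented for illustration, not part of any spec):

```javascript
// Sketch of the "preparsed state" optimisation: the app parses its
// state out of the URL once, hands the result to pushState, and a
// popstate handler only re-parses when no cached state is available.
function parseStateFromURL(url) {
  const m = /[?&]page=(\d+)/.exec(url);
  return { page: m ? Number(m[1]) : 1 };
}

const url = "/photos?page=3";
const state = parseStateFromURL(url); // { page: 3 }

if (typeof history !== "undefined") {
  // In a browser: store the preparsed state alongside the URL.
  history.pushState(state, "", url);
  window.addEventListener("popstate", (e) => {
    // e.state is the cached object when available; fall back to
    // parsing for entries created by direct navigation.
    const s = e.state || parseStateFromURL(location.href);
    console.log(s.page);
  });
}
```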

Can we publish this somewhere?  This is crucial and not obvious.

 If the Document is not recoverable, then recovering the state object makes
 little sense, IMHO. We should not be encouraging a world in which the
 meaningful state of a page is described by more than its URL. However,
 it's a UA decision whether to enable this or not.

Yes, but we want to make sure we're making the right UA decision. :)

I approached this from a different angle: does it make sense to persist,
across session restores, the fact that two history entries with
(potentially) different URLs correspond to the same Document?  If
pushState is supposed to replace
using the hash to store data, then we should persist this fact across session
restores, right?  But then we have to also persist the state data; otherwise,
if the page used pushState with no URL argument, it wouldn't be able to
distinguish between the two states.

I think you have a strong argument above.  On the other hand, the fact that
history entries X and Y are in fact the same Document is itself page state
which isn't stored in the URL.

 On Tue, 5 Jan 2010, Justin Lebar wrote:

 I think this is correct.  A popstate event is always dispatched whenever
 a new session history entry is activated (6.10.3).

 Actually if multiple popstates are fired before 'load' fires, all but the
 last are discarded, and the last waits until after 'load' fires to be
 fired. But otherwise yes.

Oh, interesting.  I didn't even notice that popstate is async again.  Good to
know.

-Justin


Re: [whatwg] URN or protocol attribute

2010-02-10 Thread Brett Zamir

On 2/10/2010 3:55 PM, Martin Atkins wrote:


Brett Zamir wrote:

Hi,

Internet Explorer has an attribute on anchor elements for URNs: 
http://msdn.microsoft.com/en-us/library/ms534710%28VS.85%29.aspx


This has not caught on in other browsers, though I believe it could 
be a very powerful feature once it was supported with a UI that 
handled URNs (as with adding support for custom protocols).


Imagine, for example, a link like:

<a href="http://www.amazon.com/...(shortened)" 
urn="isbn:9210020251">United Nations charter</a>



[snip details]

I like what this proposal achieves, but I'm not sure it's the right 
solution.


Here's an attempt at stating what problem you're trying to solve 
without any specific implementation: (please correct me if I 
misunderstood)


 * Provide a means to link to or operate on a particular artifact 
without necessarily requiring that the artifact be retrieved from a 
specific location.


 * Provide graceful fallback to user-agents which do not have any 
specialized handling for a particular artifact.



Yes, exactly.
This is different to simply linking to a different URL scheme (for 
example, linking a mailto: URL in order to begin an email message 
without knowing the user's preferred email provider) because it 
provides a fallback behavior for situations where there is no handler 
available for a particular artifact.


Yes. But note also that the @protocol attribute could be used, for 
example, for already common protocols like mailto:, with the @href 
being http: . Or, to use a URN, the protocol could be urn:isbn:... and 
the @href could be http, etc.


Since 'href' also specifies a protocol, it might be clearer to call the 
proposed attribute something like @defaultProtocol.
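Under the proposed semantics, a UA might resolve such a link roughly like this (a speculative sketch; resolveLink and the registered-scheme set are hypothetical, not part of any spec):

```javascript
// Hypothetical resolution: prefer the @protocol/@defaultProtocol value
// when the UA has a handler for its scheme, else fall back to @href.
function resolveLink(href, preferred, registeredSchemes) {
  if (!preferred) return href;
  const scheme = preferred.slice(0, preferred.indexOf(":") + 1);
  return registeredSchemes.has(scheme) ? preferred : href;
}

const handlers = new Set(["mailto:"]);

// A mailto: handler is registered, so the preferred protocol wins:
resolveLink("http://webmail.example/compose", "mailto:user@example.com", handlers);
// → "mailto:user@example.com"

// No urn: handler is registered, so the plain @href is used:
resolveLink("http://example.com/book", "urn:isbn:9210020251", handlers);
// → "http://example.com/book"
```

Either attribute name would leave the graceful-fallback behavior identical; the question is only which spelling makes the precedence clearer to authors.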



== Example Use-cases ==

 * View a particular book, movie or other such product without 
favoring a particular vendor.


 * View a map of the location for a particular address, or directions to 
that address, without favoring a particular maps provider.


 * View a Spanish translation of some web document without favoring a 
particular translation provider.


 * Share a link/photo/etc with friends without favoring a particular 
publishing platform. (i.e. generalizing the "Tweet This" / "Share on 
Facebook" class of links)




Yes. This would of course depend on the protocols existing. For example, 
XMPP, as an open protocol, might work for your last example if those 
services were actually using XMPP. And your other examples would also be 
excellent use cases.



== Prior Art ==

=== Android OS Intents ===

The Android OS has a mechanism called Intents[1] which allows one 
application to describe an operation it needs to have performed without 
nominating a particular other application to perform it.


Intents are described in detail here:
http://developer.android.com/guide/topics/intents/intents-filters.html

An intent that does not identify a particular application consists of 
the following properties:


 * Action: a verb describing what needs to be done. For example: 
view, edit, choose, share, call.
 * Object: the URI of a particular thing that the action is to be done 
to. This is not specified for actions that apply only to a class of 
object, such as choose.
 * Object Type: the MIME type of the Object, or, if no particular 
Object is selected, a concrete or wildcard MIME type (e.g. image/*) 
describing the class of object that the action relates to.


A process called Intent Resolution is used to translate an abstract 
intent like the above into an explicit intent which nominates a 
particular handler.
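Intent Resolution can be sketched as a simple filter match (a toy model of the idea, not the Android API; the filter shapes below are assumptions):

```javascript
// Toy intent resolution: a handler matches when it declares the
// intent's action and a MIME type (possibly a wildcard like image/*)
// that covers the intent's object type.
function resolveIntent(intent, filters) {
  return filters.filter(f =>
    f.actions.includes(intent.action) &&
    (f.type === intent.type ||
     (f.type.endsWith("/*") &&
      intent.type.startsWith(f.type.slice(0, -1)))));
}

const filters = [
  { app: "email",   actions: ["SEND", "VIEW"], type: "image/*" },
  { app: "browser", actions: ["VIEW"],         type: "text/html" },
];

resolveIntent({ action: "SEND", type: "image/png" }, filters);
// only the "email" filter matches, so the UA could offer it in a chooser
```

When more than one filter matches, this is where the chooser UI described below comes in.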


Often when applications use intents, a UI is displayed which allows a 
user to choose one of several available applications that can perform 
the action. For example, the built-in photo gallery application 
provides a Share command on a photo. By default, this can be handled 
by applications such as the email client and the MMS application, but 
other applications can declare their support for intents of this type, 
thus allowing plug-in functionality such as sharing a photo on Facebook.




That's an interesting consideration.

I think some behaviors should necessarily be limited for links (as they 
are in HTTP, which disallows a link from making a POST or PUT request or 
uploading a form, without JavaScript at least), so that, e.g., spam links 
don't cause users to accidentally do things they didn't want to do. Side 
effects (like share, at least) should probably not occur unless, as in 
your Twitter/Facebook use cases, the link merely leads to a UI control 
confirming that you wanted to share.


Unlike URNs, a regular protocol could already handle the use cases you 
mention, and perhaps the Intents mechanism could itself be made into a 
protocol, e.g.:


android:intents;action=CALL;data=tel:123-555-1212

Being that experimentation here is fairly early on, and being that there 
may be too many types of fundamental actions/data/etc. to agree