Re: [whatwg] Onpopstate is Flawed

2011-02-11 Thread Justin Lebar
 I'm not sure I follow you here. My idea for option A is that you never
 get a popstate when doing the initial parsing of a page.

Okay, I may still have misunderstood, despite my best efforts!  :)

 Option B:
 Fire popstates as we currently do, with the caveat that you never
 fire a stale popstate -- that is, if any navigations or
 push/replaceStates have occurred since you queued the task to fire the
 popstate, don't fire it.

Is my option B clear?  It's also what the patch I have [1] does.

We might want to make popstate sync again, since otherwise you have
to schedule a task which synchronously checks that no state changes
have occurred, and dispatches popstate only if appropriate.
I know Olli has some thoughts on making popstate sync, and fwiw, FF
currently dispatches it synchronously.
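The stale-check described above can be sketched as a plain JavaScript simulation (the HistorySim name and the generation counter are illustrative, not any browser API): each queued popstate task remembers the generation it was queued at, and any push/replaceState or navigation bumps the generation so older tasks drop their event as stale.

```javascript
// Illustrative simulation of option B's stale-popstate suppression.
class HistorySim {
  constructor() {
    this.generation = 0; // bumped by any state change
    this.state = null;
    this.fired = [];     // popstates that actually dispatched
  }
  pushState(newState) {
    this.state = newState;
    this.generation++;   // invalidates any queued popstate tasks
  }
  queuePopstate(state) {
    const gen = this.generation;
    return () => { // the task the UA would schedule
      if (gen === this.generation) {
        this.fired.push(state); // still fresh: dispatch popstate
      }
      // otherwise: stale, silently dropped
    };
  }
}

const h = new HistorySim();
const task1 = h.queuePopstate('a');
h.pushState('b'); // state changed before the task ran...
task1();          // ...so this popstate is stale and is dropped
const task2 = h.queuePopstate('b');
task2();          // no intervening change: fires normally
```

Making popstate synchronous, as Olli suggests, would remove the need for this generation bookkeeping entirely.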

 The main problem with this proposal is that it's a big change from
 what the API is today. However it's only a change in the situation
 when the spec today calls for firing popstate during the initial page
 load. Something that it seems like pages don't deal with properly
 today anyway, at least in the case of facebook.

Given the adoption the feature has seen, I guess I'd favor a smaller
change.  In particular, option B above makes it possible to write
correct pages without ever reading the DOM current-state property --
it's there only as an optimization to allow pages to set their state
faster, so there's no rush to put it in Right Away.  In contrast, a
correct page with option A would have to check its state at some point
as it loads.

I guess I don't see why it's better to make a big change than a small
one, if they both work equally well.

-Justin

[1] Patch v4: https://bugzilla.mozilla.org/show_bug.cgi?id=615501

On Mon, Feb 7, 2011 at 5:07 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, Feb 6, 2011 at 10:18 AM, Justin Lebar justin.le...@gmail.com wrote:
 1) Fire popstates as we currently do, with the caveat that you never
 fire a stale popstate -- that is, if any navigations or
 push/replaceStates have occurred since you queued the task to fire the
 popstate, don't fire it.

 Proposal B has the advantage of requiring fewer changes.

 The more I think about this, the more I like this option.  It's a
 smaller change than option A (though again, we certainly could expose
 the state object through a DOM property separately from this
 proposal), and I think it would be sufficient to fix some sites which
 are currently broken.  (For instance, I've gotten Facebook to receive
 stale popstates and show me the wrong page just by clicking around
 quickly.)

 Furthermore, this avoids the edge case in option B of "you don't get
 a popstate on the initial load, but you do get a popstate if you're
 reloading from far enough back in the session history, or after a
 session restore."

 I'm not sure I follow you here. My idea for option A is that you never
 get a popstate when doing the initial parsing of a page. So if you're
 reloading from session restore or if you're going far back enough in
 history that you end up parsing a Document, you never get a popstate.

 You get a popstate when and only when you transition between two
 history entries while remaining on the same Document.

 So the basic code flow would be:

 Whenever creating a part of the UI (for example during page load or if
 called upon to render a new AJAX page), use document.currentState to
 decide what state to render.
 Whenever you receive a popstate, rerender UI as described by the popstate.

 So no edge cases that I can think of?

 The main problem with this proposal is that it's a big change from
 what the API is today. However it's only a change in the situation
 when the spec today calls for firing popstate during the initial page
 load. Something that it seems like pages don't deal with properly
 today anyway, at least in the case of facebook.

 I was concerned that pages might become confused when they don't get a
 popstate they were expecting -- for instance, if you pushState before
 the initial popstate, a page may never see a popstate event -- but I
 think this might not be such a big deal.  A call to push/replaceState
 would almost certainly be accompanied by code updating the DOM to the
 new state.  Popstate's main purpose is to tell me to update the DOM,
 so I don't think I'd be missing much by not getting it in that case.

 That was my thinking too FWIW.

 / Jonas



Re: [whatwg] Cryptographically strong random numbers

2011-02-11 Thread Adam Barth
Just to followup on this thread, I've landed this feature in WebKit.
I'm not sure whether it made it into tonight's nightly, but it should
be in a nightly shortly.  The IDL for the API is as follows:

interface Crypto {
 void getRandomValues(in ArrayBufferView array) raises(DOMException);
};

If the ArrayBufferView isn't a Uint8Array or if the user agent is
unable to obtain true randomness from the OS, getRandomValues throws
an exception (VALIDATION_ERR in the former case and NOT_SUPPORTED_ERR
in the latter case).

If the function doesn't throw an exception, the array is filled with
bytes obtained from a cryptographically strong PRNG seeded with true
randomness from the operating system.  Internally, WebKit uses RC4 as
the PRNG, but any cryptographically strong PRNG should work fine.

If there's interest, I can write up the above as a more formal
specification, but that seems like a bit of overkill given the
simplicity of the API.  Thanks for all your feedback.  It was quite
helpful.

Adam


On Fri, Feb 4, 2011 at 4:42 PM, Adam Barth w...@adambarth.com wrote:
 Several folks have asked for a cryptographically strong random number
 generator in WebKit.  Our first approach was to make Math.random
 cryptographically strong, but that approach has two major
 disadvantages:

 1) It's difficult for a web page to detect whether Math.random is
 actually cryptographically strong or whether it's a weak RNG.

 2) Math.random is used in a number of popular JavaScript benchmarks.
 Strengthening Math.random to be cryptographically strong would slow
 down these benchmarks.  Feel free to read this disadvantage as "folks
 who don't care about cryptographic strength don't want to pay the
 performance cost of cryptographic strength."

 Our second approach was to implement crypto.random, with the idea of
 matching Firefox.  Unfortunately, Firefox does not appear to implement
 crypto.random and instead just exposes a function that throws an
 exception.  Additionally, crypto.random returns a string, which isn't
 an ideal data type for randomness because we'd need to worry about
 strange Unicode issues.

 Our third approach is to add a new cryptographically strong PRNG to
 window.crypto (in the spirit of crypto.random) that returns floating
 point and integer random numbers:

 interface Crypto {
  Float32Array getRandomFloat32Array(in long length);
  Uint8Array getRandomUint8Array(in long length);
 };

 These APIs use the ArrayBuffer types that already exist to service
 APIs such as File and WebGL.  You can track the implementation of
 these APIs via this WebKit bug:

 https://bugs.webkit.org/show_bug.cgi?id=22049

 Please let me know if you have any feedback.

 Thanks,
 Adam



Re: [whatwg] Cryptographically strong random numbers

2011-02-11 Thread Glenn Maynard
On Fri, Feb 11, 2011 at 6:38 AM, Adam Barth w...@adambarth.com wrote:

 Just to followup on this thread, I've landed this feature in WebKit.
 I'm not sure whether it made it into tonight's nightly, but it should
 be in a nightly shortly.  The IDL for the API is as follows:

 interface Crypto {
  void getRandomValues(in ArrayBufferView array) raises(DOMException);
 };

 If the ArrayBufferView isn't a Uint8Array or if the user agent is
 unable to obtain true randomness from the OS, getRandomValues throws
 an exception (VALIDATION_ERR in the former case and NOT_SUPPORTED_ERR
 in the latter case).


Rather than raising NOT_SUPPORTED_ERR, would it be better to follow the
example from other specs and omit the function entirely if the feature is
disabled?  (Specifically: "When support for a feature is disabled (e.g. as
an emergency measure to mitigate a security problem, or to aid in
development, or for performance reasons), user agents must act as if they
had no support for the feature whatsoever, and as if the feature was not
mentioned in this specification.")

That's nicer: pages can check for support simply by testing whether the
function exists, rather than making a dummy call.  It also means there is
only one way to check support, since you'll need to check whether the
function exists anyway.

-- 
Glenn Maynard


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Nicholas Zakas
We've gone back and forth around implementation specifics, and now I'd like to 
get a general feeling on direction. It seems that enough people understand why 
a solution like this is important, both on the desktop and for mobile, so what 
are the next steps?

Are there changes I can make to my proposal that would make it easier to 
implement and therefore more likely to have someone take a stab at implementing?

Is there a concrete alternate proposal that's worth building out instead?

As I've said before, I believe this is an important feature in whatever 
incarnation may arise. If my solution isn't the right one, that's fine, but 
let's try to figure out what the right one is.

Thanks.

-N

-Original Message-
From: whatwg-boun...@lists.whatwg.org [mailto:whatwg-boun...@lists.whatwg.org] 
On Behalf Of timeless
Sent: Thursday, February 10, 2011 11:34 PM
To: Boris Zbarsky
Cc: whatwg@lists.whatwg.org; Bjoern Hoehrmann
Subject: Re: [whatwg] Proposal for separating script downloads and execution

On Fri, Feb 11, 2011 at 5:51 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 I don't think so.  If there is any parse or compilation or whatever you want
 to call it error, the script is never executed, so window.x is never
 defined.

oops, right, but i don't know that that complicates things much. you
just store a list of variables to pollute window with when </script>
should be applied, or the error to send to window.onerror at </script>
time.

 Also, I would fully expect it to be a web compat requirement that
 window.onerror not be triggered until the script would be evaluated.  If you
 parse/compile/whatever before that and hit an error, you'd need to save that
 and report it at the right time.

sure, but that's one error to store with a pending script; it's
cheaper than most other things. any attempt to speculatively compile
or compile on a thread will have to support queuing the window
assignments / onerror dispatch.

 Which means the that parse/compile/whatever process is currently not
 observable directly.  And that's a good thing!

agreed

 in theory, i believe a js engine could choose to discard all work it
 has done to validate syntax for x+y beyond saving those coordinates,
 and then either do the proper ast / bytecode / machine code generation
 lazily (or schedule to do it on a thread).

 Sure; a js engine could also not do any syntax validation at all, until it
 needs to run the script...

my operative assumption is that the scripts we're dealing with have
lots of functions which are declared but which will not be used in a
given page instance, which means that there /could/ be a win in not
performing a complete compile, both in time and space. obviously any
changes to the engines increase complexity, which is a loss of sorts.
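The store-the-error-and-report-it-at-the-right-time idea discussed above can be sketched in a few lines. This is illustrative only: the helper names are hypothetical, and `new Function` stands in for the engine's real compile step.

```javascript
// Compile eagerly, but defer error reporting to "execution time": a syntax
// error is stored with the pending script rather than surfaced immediately.
function prepareScript(source) {
  try {
    return { run: new Function(source), error: null };
  } catch (e) {
    return { run: null, error: e }; // saved for later onerror dispatch
  }
}

function executeScript(prepared, onerror) {
  if (prepared.error) {
    onerror(prepared.error); // reported only when the script would run
    return;
  }
  prepared.run();
}

const good = prepareScript('1 + 1;');
const bad = prepareScript('var x = ;'); // syntax error, caught at compile

let reported = null;
executeScript(bad, (e) => { reported = e; });
```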


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread James Graham

On 02/11/2011 04:40 PM, Nicholas Zakas wrote:

We've gone back and forth around implementation specifics, and now
I'd like to get a general feeling on direction. It seems that enough
people understand why a solution like this is important, both on the
desktop and for mobile, so what are the next steps?


I think the first step would be to produce some performance data to 
indicate the actual bottleneck(s) in different configurations (browsers, 
devices, scripts, etc.). Unless I missed something (quite possible, the 
thread has been long), the only data so far presented has been some 
hearsay about gmail on some unknown hardware/browser combination.


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Will Alexander
On Feb 11, 2011 12:31 PM, Will Alexander serverher...@gmail.com wrote:


 On Feb 11, 2011 10:41 AM, Nicholas Zakas nza...@yahoo-inc.com wrote:
 
  We've gone back and forth around implementation specifics, and now I'd
like to get a general feeling on direction. It seems that enough people
understand why a solution like this is important, both on the desktop and
for mobile, so what are the next steps?

Early on it seemed there was general consensus that the existing MAY
fetch-upon-src-assignment should become a MUST or SHOULD.  That change is
only tangential to this proposal, provides immediate benefit to existing
code, and can satisfy use cases that do not require feature-detection or
strictly synchronous execution.  If I am wrong in my assessment of the
consensus, does it make sense to consider that change outside of this
proposal?

  Are there changes I can make to my proposal that would make it easier to
implement and therefore more likely to have someone take a stab at
implementing?

I may have missed it, but what would execute() do if the url has not been
loaded?   Would it be similar to a synchronous XHR request, lead to ASAP
execution, or throw an error?

  Is there a concrete alternate proposal that's worth building out
instead?
 
If execute() must always be synchronous, then readystate is not applicable.
Otherwise, while it should be considered, it would probably take longer to
describe and has no corresponding markup semantics.

Glenn's point about noexecute being a natural extension of defer and async
is a good one; however, neither of those required changing onload semantics
or introducing a new event type.  Readystate, on the other hand, is already
a well-known concept. Moreover, if history is any indication, we'll
continue using it to implement deferred exec for a while.


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Will Alexander
On Feb 11, 2011 10:41 AM, Nicholas Zakas nza...@yahoo-inc.com wrote:

 We've gone back and forth around implementation specifics, and now I'd like 
 to get a general feeling on direction. It seems that enough people understand 
 why a solution like this is important, both on the desktop and for mobile, so 
 what are the next steps?

Early on it seemed there was general consensus that the existing MAY
fetch-upon-src-assignment should become a MUST or SHOULD.  That change
is only tangential to this proposal, provides immediate benefit to
existing code, and can satisfy use cases that do not require strictly
synchronous execution.

I'm hopeful the change would generate activity around these bug reports.

https://bugs.webkit.org/show_bug.cgi?id=51650
https://bugzilla.mozilla.org/show_bug.cgi?id=621553

If I am wrong in my assessment of the consensus, does it make sense to
consider that change outside of this proposal?

 Are there changes I can make to my proposal that would make it easier to 
 implement and therefore more likely to have someone take a stab at 
 implementing?

I may have missed it, but what would execute() do if the url has not
been loaded?   Would it be similar to a synchronous XHR request, lead
to ASAP execution, or throw an error?

 Is there a concrete alternate proposal that's worth building out instead?

If execute() must be synchronous, then readystate is not applicable.
Otherwise, while it should be considered, it would probably take
longer to describe and has no corresponding markup semantics.

Glenn's point about noexecute being a natural extension of defer and
async is a good one; however, neither of those required changing onload
semantics or introducing a new event type.  Readystate, on the other
hand, is already a well-known concept. Moreover, if history is any
indication, we'll continue using it to implement deferred exec for a
while.


Re: [whatwg] Cryptographically strong random numbers

2011-02-11 Thread Cedric Vivier
On Fri, Feb 11, 2011 at 19:38, Adam Barth w...@adambarth.com wrote:
 Just to followup on this thread, I've landed this feature in WebKit.
 I'm not sure whether it made it into tonight's nightly, but it should
 be in a nightly shortly.

Nice!


 interface Crypto {
  void getRandomValues(in ArrayBufferView array) raises(DOMException);
 };
 If the ArrayBufferView isn't a Uint8Array , getRandomValues throws
 an exception (VALIDATION_ERR

Is there a specific reason for this limitation?
Imho it should throw only for Float32Array and Float64Array, since
unbounded random floating-point numbers do not really make sense
(including because of NaN and +inf/-inf).
However, some use cases might prefer a signed Int8Array or any other
integer type, and it doesn't change anything for the implementation:
it just fills the ArrayBufferView's underlying ArrayBuffer with bytes.

For instance, in C you can get random 32-bit numbers directly like this
(though 'read' might very well fetch them one byte at a time from
/dev/random):

int32_t random_32bit_integers[32];
read(dev_random_fd, random_32bit_integers, sizeof(random_32bit_integers));


Regards,


Re: [whatwg] Onpopstate is Flawed

2011-02-11 Thread Jonas Sicking
On Fri, Feb 11, 2011 at 12:25 AM, Justin Lebar justin.le...@gmail.com wrote:
 I'm not sure I follow you here. My idea for option A is that you never
 get a popstate when doing the initial parsing of a page.

 Okay, I may still have misunderstood, despite my best efforts!  :)

 Option B:
 Fire popstates as we currently do, with the caveat that you never
 fire a stale popstate -- that is, if any navigations or
 push/replaceStates have occurred since you queued the task to fire the
 popstate, don't fire it.

 Is my option B clear?  It's also what the patch I have [1] does.

 We'd might want to make popstate sync again, since otherwise you have
 to schedule a task which synchronously checks if no state changes have
 occurred, and dispatches popstate only if appropriate.
 I know Olli has some thoughts on making popstate sync, and fwiw, FF
 currently dispatches it synchronously.

 The main problem with this proposal is that it's a big change from
 what the API is today. However it's only a change in the situation
 when the spec today calls for firing popstate during the initial page
 load. Something that it seems like pages don't deal with properly
 today anyway, at least in the case of facebook.

 Given the adoption the feature has seen, I guess I'd favor a smaller
 change.  In particular, the option B above makes it possible to write
 correct pages without ever reading the DOM current state property --
 it's there only as an optimization to allow pages to set their state
 faster, so no rush to put it in Right Away.  In contrast, a correct
 page with option A would have to check its state at some point as it
 loads.

 I guess I don't see why it's better to make a big change than a small
 one, if they both work equally well.

The problem with option B is that pages can't display correctly until
the load event fires, which can be quite late in the game what with
slow-loading images and ads. It means that if you're on a page which
uses state and you reload the page, you'll first see the page in a
state-less mode while it's loading, and at some point later (generally
when the last image finishes loading) it'll snap to the state it was
in when you pressed reload.

You'll get the same behavior going back to a state-using page which
has been kicked out of the fast-cache.

/ Jonas


Re: [whatwg] Processing the zoom level - MS extensions to window.screen

2011-02-11 Thread Ian Hickson

On Wed, 29 Dec 2010, Glenn Maynard wrote:
 On Wed, Dec 29, 2010 at 7:38 PM, Ian Hickson i...@hixie.ch wrote:
  Any UI that is based on being able to zoom content (e.g. maps is 
  another one) would presumably have in-page zoom separate from UA zoom, 
  but you'd still want to be able to change the UA zoom (changing the 
  CSS pixel size, essentially), since you would want to be able to zoom 
  the page UI itself.
 
 I hit this problem in a UI I worked on.  It rendered into a canvas the 
 size of the window, which can be zoomed and scrolled around.  At 100% 
 full page zoom this works well.  At 120% zoom, it creates a canvas 
 smaller than the window, which is then scaled back up by the browser, 
 resulting in a blurry image.  Full page zoom should work on the UI 
 around it--I didn't want to disable it entirely--but the canvas itself 
 should be created in display pixels, rather than CSS pixels.
 
 I didn't find any reasonable workaround.  All I can do is tell people 
 not to use full-page zoom.  Many users probably see a blurry image and 
 don't know why, since there's no way to detect full-page zoom in most 
 browsers to even hint the user about the problem.

That's a bug in the browser. If it knows it's going to be zooming up the 
canvas when it creates the backing store, it should be using a bigger 
backing store.


On Fri, 31 Dec 2010, Charles Pritchard wrote:
 
 My objections have been noted throughout the threads:
 
  It's not possible to discover the scaling of CSS pixels to actual 
  device pixels, with the current standard.
 
 Ian's response:
 
 This is by design. You shouldn't need to know the actual device pixel 
 depth, as far as I can tell. What's the use case?
 
 It's necessary to know CSS pixel scaling to match the backend bitmap 
 with the device.
 This is common, active practice on mobile devices:
  <canvas width=200 style="width: 100px">

It may be necessary to know the CSS pixel scaling to match the backend 
bitmap with the device today, but this is only because of bugs in the 
browsers. The solution isn't to add a feature to the spec, and then wait 
for the browsers to implement it, that lets you work around the bug in 
browsers. The solution is for the browsers to fix the bug instead.
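The "match the backing bitmap with the device" arithmetic being debated here amounts to scaling CSS-pixel dimensions by the CSS-to-device pixel ratio; a tiny sketch (the function name and the round-to-integer choice are illustrative, not from any spec):

```javascript
// Given a canvas's CSS size and a CSS-px -> device-px scale factor (what a
// page would read from something like window.devicePixelRatio), compute the
// backing-store size in device pixels. At scale 2, a 100 CSS px wide canvas
// gets a 200-pixel backing store -- the <canvas width=200 style="width:
// 100px"> pattern mentioned above.
function backingStoreSize(cssWidth, cssHeight, scale) {
  return {
    width: Math.round(cssWidth * scale),
    height: Math.round(cssHeight * scale),
  };
}

const size = backingStoreSize(100, 50, 2);
// size.width === 200, size.height === 100
```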


 I see Canvas and the scripting environment as a part of the graphics 
 layer, whereas it seems many on the list feel that the graphics layer 
 should not be handled by authors.

By graphics layer I meant CSS, media queries, and image decoders.

<canvas> is intended to be device-agnostic.


 My use case, regarding Google Books, is not about printing. It was 
 simply about using a computer screen, with the zoom level turned up. 
 That's it. If you go to Google books, and your zoom level is turned up, 
 the image displayed will be upscaled, with some possible blurriness. 
 This could be avoided, by simply exposing the CSS pixel ratios, so that 
 the image would match the correct scaling.

I went to books.google.com, opened up the first book in my library, and 
zoomed in, and it reflowed and rerendered the text to be quite crisp. I 
don't see any problem here. Images were similarly handled beautifully.

Could you elaborate on the steps to reproduce this problem?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Canvas feedback (various threads)

2011-02-11 Thread Ian Hickson
On Thu, 10 Feb 2011, Boris Zbarsky wrote:
 On 2/10/11 11:31 PM, Ian Hickson wrote:
  I think you had a typo in your test. As far as I can tell, all
  WebKit-based browsers act the same as Opera and Firefox 3 on this:
  
  
  http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Atry%20%7B%0A%20%20c.strokeStyle%20%3D%20'red'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%2080)%3B%0A%20%20c.strokeStyle%20%3D%20'transparent'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%20120)%3B%0A%7D%20finally%20%7B%0A%20%20c.restore()%3B%0A%7D%0A
 
 On that test, Safari 5.0.3 on Mac outputs red and transparent for 
 the two strings.

Huh. Interesting. I never test release browsers, totally missed this. :-)

Thanks.


   Which is less interop than it seems (due to Safari's behavior), and 
   about to disappear completely, since both IE9 and Firefox 4 will 
   ship with the "0" instead of "0.0"  :(
  
  Is there no chance to fix this in Firefox 4? It _is_ a regression. :-)
 
 At this point, probably not.  If it's not actively breaking websites 
 it's not being changed before final release.  If it is, we'd at least 
 think about it...

Well I don't really mind what we do at this point.

I'm assuming you're not suggesting changing the alpha=1 case (which is 
also different between CSS and canvas). Is that right?

I guess with IE9 and Firefox4 about to go to the 0 behaviour, and Safari 
still having 0 behaviour, and Opera and Chrome being the only ones doing 
what the spec says, we should move to 0...

I've changed the spec to make alpha<1 colours work like CSS, and I've 
marked this part of the spec as controversial so that people are aware 
that it could change again. I guess we'll look at the interop here again 
in a few months and see if it's any better. My apologies to Opera and 
WebKit for getting screwed by following the spec.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Cryptographically strong random numbers

2011-02-11 Thread Adam Barth
On Fri, Feb 11, 2011 at 4:32 AM, Glenn Maynard gl...@zewt.org wrote:
 On Fri, Feb 11, 2011 at 6:38 AM, Adam Barth w...@adambarth.com wrote:
 Just to followup on this thread, I've landed this feature in WebKit.
 I'm not sure whether it made it into tonight's nightly, but it should
 be in a nightly shortly.  The IDL for the API is as follows:

 interface Crypto {
  void getRandomValues(in ArrayBufferView array) raises(DOMException);
 };

 If the ArrayBufferView isn't a Uint8Array or if the user agent is
 unable to obtain true randomness from the OS, getRandomValues throws
 an exception (VALIDATION_ERR in the former case and NOT_SUPPORTED_ERR
 in the latter case).

 Rather than raising NOT_SUPPORTED_ERR, would it be better to follow the
 example from other specs: to omit the function entirely if the feature is
 disabled?  (Specifically, When support for a feature is disabled (e.g. as
 an emergency measure to mitigate a security problem, or to aid in
 development, or for performance reasons), user agents must act as if they
 had no support for the feature whatsoever, and as if the feature was not
 mentioned in this specification.)

 That's nicer for checking whether the function exists to check support.
 Otherwise, you have to make a dummy call to check support.  It also means
 you only need to check support in one way--since you'll need to check
 whether the function exists anyway.

In some cases, it's not possible to determine whether we'll be able to
get OS randomness until runtime.  For example, on Linux we might not
have permission to read /dev/urandom.  Not all JavaScript engines have
the ability to selectively disable DOM APIs at runtime.

On Fri, Feb 11, 2011 at 10:00 AM, Cedric Vivier cedr...@neonux.com wrote:
 On Fri, Feb 11, 2011 at 19:38, Adam Barth w...@adambarth.com wrote:
 Just to followup on this thread, I've landed this feature in WebKit.
 I'm not sure whether it made it into tonight's nightly, but it should
 be in a nightly shortly.

 Nice!

 interface Crypto {
  void getRandomValues(in ArrayBufferView array) raises(DOMException);
 };
 If the ArrayBufferView isn't a Uint8Array , getRandomValues throws
 an exception (VALIDATION_ERR

 Is there a specific reason for this limitation?
 Imho it should throw only for Float32Array and Float64Array since
 unbounded random floating numbers does not really make sense
 (including because of NaN and +inf -inf).
 However some use cases might prefer signed Int8Array or any other
 integer type and it doesn't change anything to the implementation :
 filling bytes to the ArrayBufferView's underlying ArrayBuffer).

 Like for instance you can do below in C to get random 32-bit numbers
 directly (though 'read' might very well get them one byte at a time
 from /dev/random) :
 int32_t random_32bit_integers[32];
 read(dev_random_fd, random_32bit_integers, sizeof(random_32bit_integers))

I went with a whitelist approach.  If there are other specific types
that you think we should whitelist, we can certainly do that.  Which
types, specifically, would you like to see supported?

Adam


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Kyle Simpson
We've gone back and forth around implementation specifics, and now I'd 
like to get a general feeling on direction. It seems that enough people 
understand why a solution like this is important, both on the desktop and 
for mobile, so what are the next steps?


Are there changes I can make to my proposal that would make it easier to 
implement and therefore more likely to have someone take a stab at 
implementing?


Nicholas, if you're sticking with your original proposal of `noexecute` 
on script elements, then a mechanism should be specified for detecting 
when the script finishes loading. As stated earlier, `onload` isn't 
sufficient, since it doesn't fire until after a script has finished 
(including execution). Are you proposing instead a new event, like 
onloadingcomplete or something of that nature?


Otherwise, the next most obvious candidate for an event, using existing 
precedent, would be the `readyState=loaded`, coupled with that event being 
fired by `onreadystatechange`, as happens currently in IE (similar to XHR).


Once we have some event mechanism to detect when the script finishes 
loading, then your original proposal breaks down to:


1. Add a `noexecute` property on dynamic script elements, default it to 
false, let it be settable to true.

2. Add an `execute()` function.

For `noexecute`, we need a clearer definition of whether this proposal 
covers only a property on dynamic script elements, or also a boolean 
attribute on markup script elements. If the proposal includes the markup 
attribute, we need a clearer definition of the semantics of how it would 
be used. As stated, <script src="..." noexecute onload="this.execute()"> 
doesn't work (chicken-and-the-egg), so in place of that, what is a concrete 
example of how the `noexecute` boolean attribute in markup would be used and 
useful?


The `execute()` function needs further specification as to what happens if 
execute() is called too early, or on a script that already executed, or on a 
script that wasn't `noexecute`, as Will pointed out.




Is there a concrete alternate proposal that's worth building out instead?


Aside from the event system questions, which is required for either 
proposal, the concrete alternate proposal (from me) is simply:


1. Change the suggested behavior of preloading before DOM-append to 
required behavior, modeled as it is implemented in IE.



As to whether this one is more worth building out than your original 
proposal, my support arguments are:


1. It entirely uses existing precedent, both in the wording of the spec and 
in IE's implementation.
2. It requires fewer new additions (no extra function call), which means less 
complexity to work through semantics on (see the above questions about 
`execute()` semantics).



I haven't heard on this thread any serious discussion of other workable 
proposals besides those two. Correct me if I'm wrong.




Early on it seemed there was general consensus that changing the existing
MAY fetch-upon-src-assignment to MUST or SHOULD.


I'm not sure there's been consensus on this yet, but there's definitely been 
some strong support by several people. I'd say the two proposals are about 
even (maybe slightly in favor of `readyState`) in terms of vocalized support 
thus far.




 Since that is only
 tangential to this proposal, provides immediate benefit to existing code,
 and can satisfy use cases that do not require feature-detection or strictly
 synchronous execution.


I'm not sure what you mean by "do not require feature-detection". I think 
it's clear that both proposals need feature-detection to be useful. In both 
cases, we're creating opt-in behavior, and you only want to opt-in to that 
behavior (and, by extension, *not* use some other method/fallback) if the 
behavior you want exists.


If I created several script elements, but don't attach them to the DOM, and 
I assume (without feature-testing) that they are being fetched, then without 
this feature they'll never load. So I'd definitely need to feature-test 
before making that assumption.


Conversely, with `noexecute`, I'd definitely want to feature-test that 
`noexecute` was going to in fact suppress execution, otherwise if I start 
loading several scripts and they don't enforce execution order (which spec 
says they shouldn't), then I've got race conditions.
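Kyle's point about opt-in behavior can be made concrete with a sketch. The property names below (`noexecute`, `execute`, and the IE-style `readyState`) are the proposed or vendor-specific names under discussion, not a settled API, so this is illustrative only:

```javascript
// Feature test for the `noexecute` proposal: only rely on execution
// suppression if the property and its companion method actually exist.
function supportsNoExecute(scriptEl) {
  return 'noexecute' in scriptEl && typeof scriptEl.execute === 'function';
}

// Feature test for the IE-style preload-on-src-assignment proposal:
// IE exposes readyState on script elements, so fetch progress can be
// observed before the element is inserted into the DOM.
function supportsPreloadOnSrcAssignment(scriptEl) {
  return 'readyState' in scriptEl;
}
```

In a real page each function would be handed `document.createElement('script')`; a negative result means the page must fall back to another loading strategy rather than assume the opt-in behavior exists.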




I'm hopeful the change would generate activity around these bug reports.

https://bugs.webkit.org/show_bug.cgi?id=51650
https://bugzilla.mozilla.org/show_bug.cgi?id=621553


I think it's a mistake for those two bug reports not to make it clear that 
an event system for detecting the load is a must. Without the event system, 
a significant part of this use-case is impossible to achieve.



--Kyle






Re: [whatwg] Canvas feedback (various threads)

2011-02-11 Thread Boris Zbarsky

On 2/11/11 3:34 PM, Ian Hickson wrote:

I'm assuming you're not suggesting changing the alpha=1 case (which is
also different between CSS and canvas). Is that right?


Sadly, probably right.  For the alpha=1 case, I would expect there to be 
sites depending on this...


-Boris


[whatwg] Microdata Feedback: A Server Side implementation of a Microdata Consumer library.

2011-02-11 Thread Emiliano Martinez Luque
Hi everybody, I originally intended to send this message to the
implementors list but seeing in the archives that there hasn't been
much activity there for the last couple of months, I'm sending this to
the general list. Well, basically I just wanted to announce that I've
just released ( http://github.com/emluque/MD_Extract ) a library for
server side Microdata consuming. There are some known issues (
particularly with non-ASCII-extending character encodings, also the
text extraction mechanism from a tree of nodes is very basic, etc. )
but I still felt it was sensible to release it to showcase the
possibilities of the Microdata specification.

I based the implementation on the Algorithm provided by the WhatWG but
there are some variations, the most notable one being that I'm
constructing an intermediate results data structure while traversing
the Html tree rather than storing them in a list and then sorting them
later in tree order as the spec says. I did take Tab's suggestion of
doing a first pass through the Html tree and storing a list of
references to elements with ids ( which was a great suggestion, it
makes the code way clearer and it completely changed the way I was
thinking about the problem ).

To test this:

1. Make sure you have PHP 5 with Tidy (
http://www.php.net/manual/en/tidy.installation.php ) and MB_String (
http://ar.php.net/manual/en/mbstring.installation.php ) support.
2. Download the folder, uncompress it and move it to an apache dir. (
or clone it from github: git clone
https://github.com/emluque/MD_Extract.git )
3. Access the /examples folder with your browser.

Other than that, it reports most common errors ( like an element
marked up with itemscope not having child nodes, or an img element
marked with itemprop and not having a src attribute ). I believe that
apart from the known issues, and thinking just about microdata syntax,
it's 100% compliant with the latest microdata spec (Though there might
be some edge cases I might not be considering).

I'm hoping that it gets tested; this time I made it so that all it
takes (other than having the appropriate configuration of PHP) is
downloading and uncompressing the folder. Please do, you will like it.
And please file any bug reports through the github interface or
through the contact form at my personal page at
http://www.metonymie.com .

Again thank you for a great spec,

-- 
Emiliano Martínez Luque
http://www.metonymie.com


Re: [whatwg] Cryptographically strong random numbers

2011-02-11 Thread Glenn Maynard
On Fri, Feb 11, 2011 at 3:40 PM, Adam Barth w...@adambarth.com wrote:

 In some cases, it's not possible to determine whether we'll be able to
 get OS randomness until runtime.  For example, on Linux, if we don't
 have permission to read /dev/urandom.


You can have an exception, eg. INTERNAL_ERR or RUNTIME_ERR, for cases where
the PRNG is normally expected to work but failed in a rare way at runtime.
That's always possible in theory (eg. a read() from /dev/urandom returns an
error), but is separate from feature testing since it can't be predicted,
and it should be exceptionally rare.

Not all JavaScript engines have the ability to selectively disable DOM APIs
 at runtime.


If that's a concern, then all of the specs with the text I mentioned will
have trouble.  I think either the convention of removing APIs at runtime
should be expected and depended on by the specs (and used as consistently as
is reasonable), or not used at all and those specs should be changed.

-- 
Glenn Maynard


Re: [whatwg] Application Cache for on-line sites

2011-02-11 Thread Michael Nordman
Waking this feature request up again as it's been requested multiple
times, I think the ability to utilize an appcache w/o having to have
the page added to it is the #1 appcache feature request that I've
heard.

* The Gmail mobile team has mentioned this.

* Here's a thread on a chromium.org mailing list where this feature is
requested: "How to instruct the main page to be not cached?"
https://groups.google.com/a/chromium.org/group/chromium-html5/browse_thread/thread/a254e2090510db39/916f3a8da40e34f8

* More recently this has been requested in the context of an
application that uses pushState to alter the url of the main page.

To keep this discussion distinct from others, I'm pulling in the few
comments that have been made on another thread.

hixie said...
 Why can't the pages just switch to a more AJAX-like model rather than
 having the main page still load over the network? The main page loading
 over the network is a big part of the page being slow.

and i replied...
 The premise of the feature request is that the main pages aren't
 cached at all.

 | I tried to use the HTML5 Application Cache to improve the performances
 | of on-line sites (all the tutorials on the web write only about usage
 | with off-line apps)

 As for "why can't the pages just switch", I can't speak for andrea,
 but i can guess that a redesign of that nature was out of scope and/or
 would conflict with other requirements around how the url address
 space of the app is defined.

Once you get past the "should this be a feature" question, there are
some questions to answer.

1) How does an author indicate which pages should be added to the
cache and which should not?

A few ideas...
a. `<html useManifest='x'>`
b. If the main resource has a no-store header, don't add it to the
cache, but do associate the document with the cache.
c. A new manifest section to define a prefix-matched namespace for these pages.

2) What sequence of events does a page that just uses the cache w/o
being added to it observe?

3) At what point do subresources in an existing appcache start getting
utilized by such pages? What if the appcache is stale? Do subresource
loads cause revalidation?
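To make the manifest-section idea concrete, here is a sketch of what such a manifest might look like. The section name `ALLOW-INTERCEPT:` is purely hypothetical, invented here to illustrate the prefix-matched-namespace idea; only `CACHE MANIFEST`, `CACHE:`, and `NETWORK:` are real appcache syntax:

```
CACHE MANIFEST
# Sketch only: ALLOW-INTERCEPT is a hypothetical section name for the
# prefix-matched namespace idea; it is not part of any spec.

CACHE:
scripts/app.js
styles/app.css
images/logo.png

# Hypothetical: pages whose URLs start with /articles/ may *use* this
# cache for their subresource loads without being added to it as
# master entries (so the PHP-generated HTML itself stays uncached).
ALLOW-INTERCEPT:
/articles/

NETWORK:
*
```

Under this sketch, a page at /articles/foo would load fresh over the network each time, but its script, style, and image subresources would be served from the appcache.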

On Mon, Dec 20, 2010 at 12:56 PM, Michael Nordman micha...@chromium.org wrote:
 This type of request (see forwarded message below) to utilize the
 application cache for subresource loads into documents that are not stored
 in the cache has come up several times now. The current feature set is very
 focused on the offline use case. Is it worth making additions such that a
 document that loads from a server can utilize the resources in an appcache?
 Today we have `<html manifest="manifestFile">`, which adds the document
 containing this tag to the appcache and associates that doc with that
 appcache such that subresource loads hit the appcache.
 Not a complete proposal, but...
 What if we had something along the lines of `<html
 useManifest="manifestFile">`, which would do the association of the doc with
 the appcache (so subresource loads hit the cache) but not add the document
 to the cache?

 -- Forwarded message --
 From: UVL andrea.do...@gmail.com
 Date: Sun, Dec 19, 2010 at 1:35 PM
 Subject: [chromium-html5] Application Cache for on-line sites
 To: Chromium HTML5 chromium-ht...@chromium.org


 I tried to use the HTML5 Application Cache to improve the performances
 of on-line sites (all the tutorials on the web write only about usage
 with off-line apps)

 I created the manifest listing all the js, css and images, and the
 performances were really exciting, until I found that even the page
 HTML was cached, despite it was not listed in the manifest. The pages
 of the site are in PHP, so I don't want them to be cached.

 From
 http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
 :
 "Authors are encouraged to include the main page in the manifest also,
 but in practice the page that referenced the manifest is automatically
 cached even if it isn't explicitly mentioned."

 Is there a way to have this automatic caching disabled?

 Note: I know that caching can be controlled via HTTP headers, but I
 just wanted to try this way as it looks quite reliable, clean and
 powerful.






Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Glenn Maynard
On Fri, Feb 11, 2011 at 12:57 PM, Will Alexander 
serverherder+wha...@gmail.com wrote:

  Are there changes I can make to my proposal that would make it easier to
 implement and therefore more likely to have someone take a stab at
 implementing?

 I may have missed it, but what would execute() do if the url has not
 been loaded?   Would it be similar to a synchronous XHR request, lead
 to ASAP execution, or throw error?


In my proposal (which was intended as a further refinement of Nicholas's),
execute would only be permitted once the network load completes, and throw
an exception if called before then.

  Is there a concrete alternate proposal that's worth building out instead?

 If execute() must be synchronous, then readystate is not applicable.
 Otherwise, while it should be considered, it would probably take
 longer to describe and has no corresponding markup semantics.


(I only suggested execute() being synchronous since it made sense in the
context of my proposal and it seemed like a useful, natural side-effect of
the rest of the proposal.)



 Glenn's point about noexecute  being a natural extension of defer and
 async is a good one, however neither of those required changing onload
 semantics or introducing a new event type.  Readystate on the other
 hand is already a well-known concept. Moreover, if history is any
 indication, we'll continue using it to implement deferred exec for
 awhile.


Note that I didn't mean to propose changing existing event semantics; I 
simply forgot that script elements already have an onload event that means 
something entirely different from the one described by Progress Events. The 
"finished loading from the network" event would need to have a different 
name.

-- 
Glenn Maynard


Re: [whatwg] Cryptographically strong random numbers

2011-02-11 Thread Adam Barth
On Fri, Feb 11, 2011 at 1:13 PM, Glenn Maynard gl...@zewt.org wrote:
 On Fri, Feb 11, 2011 at 3:40 PM, Adam Barth w...@adambarth.com wrote:
 In some cases, it's not possible to determine whether we'll be able to
 get OS randomness until runtime.  For example, on Linux, if we don't
 have permission to read /dev/urandom.

 You can have an exception, eg. INTERNAL_ERR or RUNTIME_ERR, for cases where
 the PRNG is normally expected to work but failed in a rare way at runtime.
 That's always possible in theory (eg. a read() from /dev/urandom returns an
 error), but is separate from feature testing since it can't be predicted,
 and it should be exceptionally rare.

 Not all JavaScript engines have the ability to selectively disable DOM
 APIs at runtime.

 If that's a concern, then all of the specs with the text I mentioned will
 have trouble.  I think either the convention of removing APIs at runtime
 should be expected and depended on by the specs (and used as consistently as
 is reasonable), or not used at all and those specs should be changed.

Regardless, the ability does not exist in JavaScriptCore.  If you'd
like to contribute a patch that makes it possible, I'm sure it would
be warmly received.

Adam


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Nicholas Zakas
Once again, the problem with changing how src works is that there's no way to 
feature-detect this change. It's completely opaque to developers and therefore 
not helpful in solving the problem.

Again, the reason I used readyState was for tracking the script's state. My doc 
states that if execute() is called on any script whose readyState is earlier 
than "loaded", then it throws an error; if it's called when readyState is 
"loaded", then the code is executed and the state is changed to "complete"; if 
it's called when readyState is "complete", nothing happens and the method 
returns false.
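As a rough executable model of these proposed semantics (not a real or proposed API — `PreloadedScript` and `finishLoading` are names invented here purely to make the state transitions concrete):

```javascript
// Minimal model of the proposal: execute() throws before "loaded",
// runs the code and moves to "complete" when "loaded", and returns
// false (doing nothing) once already "complete".
function PreloadedScript(run) {
  this.readyState = 'loading'; // network fetch in progress
  this._run = run;             // stands in for the downloaded script body
}

// Stands in for the network load completing.
PreloadedScript.prototype.finishLoading = function () {
  this.readyState = 'loaded';
};

PreloadedScript.prototype.execute = function () {
  if (this.readyState === 'loading') {
    throw new Error('InvalidState: script has not finished loading');
  }
  if (this.readyState === 'complete') {
    return false; // already executed; no-op per the proposal
  }
  this._run();
  this.readyState = 'complete';
  return true;
};
```

Writing it out this way makes the open questions visible: the "too early" and "already executed" branches are exactly the cases Will asked about.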

As I said before, I'm not married to all bits of this proposal. If there's some 
other way to achieve the same functionality, I'm all for it. The main goals are 
listed in the doc, and I'm happy to support any proposal that achieves all of 
them.

-N


-Original Message-
From: whatwg-boun...@lists.whatwg.org [mailto:whatwg-boun...@lists.whatwg.org] 
On Behalf Of Will Alexander
Sent: Friday, February 11, 2011 12:58 PM
To: whatwg@lists.whatwg.org
Subject: Re: [whatwg] Proposal for separating script downloads and execution

On Feb 11, 2011 10:41 AM, Nicholas Zakas nza...@yahoo-inc.com wrote:

 We've gone back and forth around implementation specifics, and now I'd like 
 to get a general feeling on direction. It seems that enough people understand 
 why a solution like this is important, both on the desktop and for mobile, so 
 what are the next steps?

Early on it seemed there was general consensus that changing the
existing MAY fetch-upon-src-assignment to MUST or SHOULD.  Since that
is only tangential to this proposal, provides immediate benefit to
existing code, and can satisfy use cases that do not require strictly
synchronous execution.

I'm hopeful the change would generate activity around these bug reports.

https://bugs.webkit.org/show_bug.cgi?id=51650
https://bugzilla.mozilla.org/show_bug.cgi?id=621553

If I am wrong in my assessment of the consensus, does it make sense to
consider that change outside of this proposal?

 Are there changes I can make to my proposal that would make it easier to 
 implement and therefore more likely to have someone take a stab at 
 implementing?

I may have missed it, but what would execute() do if the url has not
been loaded?   Would it be similar to a synchronous XHR request, lead
to ASAP execution, or throw error?

 Is there a concrete alternate proposal that's worth building out instead?

If execute() must be synchronous, then readystate is not applicable.
Otherwise, while it should be considered, it would probably take
longer to describe and has no corresponding markup semantics.

Glenn's point about noexecute  being a natural extension of defer and
async is a good one, however neither of those required changing onload
semantics or introducing a new event type.  Readystate on the other
hand is already a well-known concept. Moreover, if history is any
indication, we'll continue using it to implement deferred exec for
awhile.


Re: [whatwg] Reserved browsing context names

2011-02-11 Thread Ian Hickson
On Sun, 14 Nov 2010, Boris Zbarsky wrote:

 I was just looking at 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/browsers.html#the-rules-for-choosing-a-browsing-context-given-a-browsing-context-name
  
 and I noticed that there are some other magical (starting with '_') 
 browsing context names that Gecko, at least supports.
 
 These are _content and _main; they target the main browser rendering 
 area. I believe at least _main is supported in IE as well, or was at 
 some point according to the Gecko code comments (we added _main for IE 
 compat).  These are useful for UAs that allow a non-main rendering area 
 (e.g. a sidebar) to allow links in it to trigger the main rendering 
 area.
 
 I think it would be good to add one or both of these to the spec.

I tried testing this but I couldn't actually find a modern browser where 
there was a way to put content into a sidebar, so I'm not sure how to 
test it.

We do spec rel=sidebar, which in theory lets a link open in an 
auxiliary browsing context, so it would make some sense to have a feature 
to target back, if there is still a browser that lets you use auxiliary 
browsing contexts. Is there?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Processing the zoom level - MS extensions to window.screen

2011-02-11 Thread Glenn Maynard
On Fri, Feb 11, 2011 at 3:24 PM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 29 Dec 2010, Glenn Maynard wrote:
  I hit this problem in a UI I worked on.  It rendered into a canvas the
  size of the window, which can be zoomed and scrolled around.  At 100%
  full page zoom this works well.  At 120% zoom, it creates a canvas
  smaller than the window, which is then scaled back up by the browser,
  resulting in a blurry image.  Full page zoom should work on the UI
  around it--I didn't want to disable it entirely--but the canvas itself
  should be created in display pixels, rather than CSS pixels.
 
  I didn't find any reasonable workaround.  All I can do is tell people
  not to use full-page zoom.  Many users probably see a blurry image and
  don't know why, since there's no way to detect full-page zoom in most
  browsers to even hint the user about the problem.

 That's a bug in the browser. If it knows it's going to be zooming up the
 canvas when it creates the backing store, it should be using a bigger
 backing store.


It sounds like you're saying that, if the user's full-page zoom level is
110% and the page requests a 100x100 canvas, the browser should create a
110x110 backing store instead.  There are several problems with that:

- The full-zoom level can be changed by the user after the canvas is already
rendered.  If I load a page at 100%, the canvas renders at that resolution,
and then I change the full-zoom level to 110%, there's no way for the
browser to know this and use a bigger backing store in advance.
- The data would have to be downscaled to the exposed 100x100 resolution
when exported with ImageData.  This means that retrieving, modifying and
re-importing ImageData would be lossy.  Similarly, rendering a 100x100 image
into a canvas set to 100x100 would upscale the image, blurring it: the
developer should be able to expect that blitting a 100x100 image into a
100x100 canvas will be a 1:1 copy.
- If, rather than displaying it in the document at the full-zoom level, the
data is sent to the server, the results would be blurry.  For example, if I
create a 1000x1000 canvas (and the browser's backing store is actually
1100x1100), and I send the finished data to the server (at the exposed
1000x1000), the browser has to resample the final image, blurring it.

If that's not what you meant, could you clarify?
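The arithmetic behind the blurriness is simple, and a sketch makes the round-trip loss explicit. This assumes the page could somehow learn the zoom factor, which (as noted above) it generally cannot in most browsers:

```javascript
// A canvas sized in CSS pixels occupies cssSize * zoom device pixels.
// If the backing store is smaller than that, the browser must upscale
// it for display, which blurs; if it is larger (as Ian suggests), then
// ImageData exported at the CSS-pixel resolution must be downscaled,
// which is lossy on re-import.
function backingStoreFor(cssWidth, cssHeight, zoom) {
  return {
    width: Math.round(cssWidth * zoom),
    height: Math.round(cssHeight * zoom),
  };
}
```

For example, a 100x100 canvas at 110% zoom needs a 110x110 backing store to stay crisp on screen, yet getImageData would still expose only 100x100 samples: the two resolutions cannot both be authoritative, which is the core of the objection above.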

 I went to books.google.com, opened up the first book in my library, and
 zoomed in, and it reflowed and rerendered the text to be quite crisp. I
 don't see any problem here. Images were similiarly handled beautifully.

 Could you elaborate on the steps to reproduce this problem?


(I tried this, and text was blurry even when I zoomed using only that page's
built-in zoom mechanism; it seemed to be scaling the rendered page and not
rerendering text at all.  I figured some books might not be OCR'd so I tried
another couple books, but it still happened; then it somehow crashed FF3, so
I gave up.)

-- 
Glenn Maynard


Re: [whatwg] Javascript: URLs as element attributes

2011-02-11 Thread Charles Pritchard

On 2/10/2011 12:09 PM, whatwg-requ...@lists.whatwg.org wrote:

Date: Thu, 10 Feb 2011 13:43:11 -0500
From: Boris Zbarsky bzbar...@mit.edu
To: Adam Barth w...@adambarth.com

On 2/10/11 1:38 PM, Adam Barth wrote:

  The connection is that these features are unlikely to get implemented
  in WebKit anytime soon.  To the extent that we want the spec to
  reflect interoperable behavior across browsers, speccing things that
  aren't (and aren't likely to become) interoperable is a net loss.

That's fine; I just think that if you mean "Don't specify this because
we don't want to implement it and will refuse to do so" you should just
say that instead of making it sound like there are unspecified security
issues with the proposal.


Boris,
It's more often your group that makes a stand with merit-less refusals.
See devicePixelRatio and CSS scrollbar styling for an example of that.

So, sure, I can see why you'd assume other groups would do the same.


Adam,

Would you be willing to dig up the bug report on webkit that documented 
your attempts to satisfy javascript: urls in embedding?

I did a little bit of poking around, but didn't find it.

I agree that data-uris are much easier/preferable, but I'd still like to 
see where the conversation went on the webkit dev list and/or bug list.

-Charles


Re: [whatwg] Constraint validation feedback (various threads)

2011-02-11 Thread Ian Hickson
On Tue, 16 Nov 2010, Mounir Lamouri wrote:
  On Thu, 12 Aug 2010, Aryeh Gregor wrote:
  On Wed, Aug 11, 2010 at 6:03 PM, Ian Hickson i...@hixie.ch wrote:
  The script setting the value doesn't set the dirty flag. The only 
  way this could be a problem is if the user edits the control and 
  _then_ the script sets the value to an overlong value.
 
  
  value
  On getting, it must return the current value of the element. On
  setting, it must set the element's value to the new value, set the
  element's dirty value flag to true, and then invoke the value
  sanitization algorithm, if the element's type attribute's current
  state defines one.
  
  http://www.whatwg.org/specs/web-apps/current-work/#common-input-element-apis
 
  That seems to say that setting via .value will set the dirty flag 
  (although setting via .setAttribute() will not).  Am I mistaken?
  
  Hm, yes, you are correct.
  
  I've lost track of the context for this; does the above imply that 
  there is a change we need to make?
 
 Aryeh was worried about a page using maxlength to block users' input and 
 then a script setting the value to something else before submitting. Given 
 that maxlength used to have nothing to do with form validation and now can 
 block the form submission, it might break some websites.

 There is a LinkedIn form broken because of that: there are two fields 
 with a non-HTML5 placeholder (i.e. it's the value) which is set with 
 .value=something, but the fields have a maxlength set to 4 (it's a year). 
 With a checkbox checked, one of these fields will be hidden, with a value 
 greater than the maxlength, making the form always invalid.

Ok. It seems the best solution is to just remove the "suffering from being 
too long" state and simply require that authors not let users enter 
values longer than the maxlength. Right?
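For reference, a rough model of the check being debated. The `lastChangeWasByUser` property is an invented name standing in for a flag that records whether the user, rather than a script, produced the current value (the distinction the LinkedIn breakage above hinges on); this is an illustration, not the spec's algorithm:

```javascript
// Sketch: a control is "suffering from being too long" only if a
// maxlength is set, the current value came from user editing (not
// script assignment), and the value exceeds that maxlength.
function sufferingFromBeingTooLong(control) {
  if (control.maxLength < 0) return false;        // no maxlength set
  if (!control.lastChangeWasByUser) return false; // script-set values exempt
  return control.value.length > control.maxLength;
}
```

Under this model, a script assigning a long placeholder value would not make the form invalid, while a user typing five characters into a maxlength=4 field still would.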

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Constraint validation feedback (various threads)

2011-02-11 Thread Ian Hickson

Following up on the e-mail I just sent, which was on the same topic (I 
missed that there was more to the thread when replying to that one):

On Wed, 24 Nov 2010, Mounir Lamouri wrote:
 
 After Firefox 4, we would like to introduce a new flag that will let us 
 know if the element's value has been changed by the user (probably if 
 the _last_ change has been done by the user). The meaning of this flag 
 would be to fix the retro-compatibility issues. Then, an element would 
 suffer from being too long if this flag is true and its value length is 
 greater than the maxlength attribute value. In addition, users will be 
 able to enter text longer than the maxlength attribute for textarea (and 
 maybe input) elements. That way, we would be able to fix 
 retro-compatibility issues and provide a better experience to the users.

Why would you want to make it possible to enter long values? If there's a 
use case for it, I'm happy to address it, but if there isn't it's not 
clear that we should change this aspect of browser behaviour, since it's 
been pretty widely implemented for a long time.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Javascript: URLs as element attributes

2011-02-11 Thread Adam Barth
On Fri, Feb 11, 2011 at 2:20 PM, Charles Pritchard ch...@jumis.com wrote:
 On 2/10/2011 12:09 PM, whatwg-requ...@lists.whatwg.org wrote:

 Date: Thu, 10 Feb 2011 13:43:11 -0500
 From: Boris Zbarsky bzbar...@mit.edu
 To: Adam Barth w...@adambarth.com

 On 2/10/11 1:38 PM, Adam Barth wrote:

   The connection is that these features are unlikely to get implemented
   in WebKit anytime soon.  To the extent that we want the spec to
   reflect interoperable behavior across browsers, speccing things that
   aren't (and aren't likely to become) interoperable is a net loss.

  That's fine; I just think that if you mean "Don't specify this because
  we don't want to implement it and will refuse to do so" you should just
  say that instead of making it sound like there are unspecified security
  issues with the proposal.

 Boris,
 It's more often your group that makes a stand with merit-less refusals.
 See devicePixelRatio and CSS scrollbar styling for an example of that.

 So, sure, I can see why you'd assume other groups would do the same.


 Adam,

 Would you be willing to dig up the bug report on webkit that documented your
 attempts
 to satisfy javascript: urls in embedding?

 I did a little bit of poking around, but didn't find it.

https://bugs.webkit.org/show_bug.cgi?id=9706
https://bugs.webkit.org/show_bug.cgi?id=12408

This is the bug I was thinking about (although not all the discussion
was captured in the bug):
https://bugs.webkit.org/show_bug.cgi?id=16855

Most directly related is this bug, which unfortunately is marked
security-sensitive.  I've added Boris to the CC list of this bug, but
unfortunately I can't open it up to the public at the moment:

https://bugs.webkit.org/show_bug.cgi?id=41483

 I agree that data-uris are much easier/preferable, but I'd still like to see
 where the conversation went
 on the webkit dev list and/or bug list.

Hopefully the links above are helpful.  Not all the discussion is
captured in the bug database.  Some of it happens on mailing lists and
in IRC (as well as in person).

Adam


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Nicholas Zakas
Thanks Kyle, those comments were helpful. I've simplified and refined my 
proposal based on them and the others in this thread:
https://docs.google.com/document/d/1wLdTU3xPMKhBP0anS774Y4ZT2UQDqVhnQl3VnSceDJM/edit?hl=en&authkey=CJ6z2ZgO

Summary of changes:
* Changed noexecute to preload
* No HTML markup usage
* No change to load event
* Introduction of preload event
* Removed mention of readyState

I'd appreciate hearing feedback on this revision from everyone.

-N

-Original Message-
From: whatwg-boun...@lists.whatwg.org [mailto:whatwg-boun...@lists.whatwg.org] 
On Behalf Of Kyle Simpson
Sent: Friday, February 11, 2011 3:42 PM
To: whatwg@lists.whatwg.org
Cc: Will Alexander; Nicholas C. Zakas
Subject: Re: [whatwg] Proposal for separating script downloads and execution

 We've gone back and forth around implementation specifics, and now I'd 
 like to get a general feeling on direction. It seems that enough people 
 understand why a solution like this is important, both on the desktop and 
 for mobile, so what are the next steps?

 Are there changes I can make to my proposal that would make it easier to 
 implement and therefore more likely to have someone take a stab at 
 implementing?

Nicholas, if you're sticking with your original proposal of the `noexecute` 
on script elements, then a mechanism should be specified by which the event 
can be detected for when the script finishes loading. As stated earlier, 
`onload` isn't sufficient, since it doesn't fire until after a script has 
finished (including execution). Are you proposing instead a new event, like 
`onloadingcomplete` or something of that nature?

Otherwise, the next most obvious candidate for an event, using existing 
precedent, would be the `readyState=loaded`, coupled with that event being 
fired by `onreadystatechange`, as happens currently in IE (similar to XHR).

Once we have some event mechanism to detect when the script finishes 
loading, then your original proposal breaks down to:

1. Add a `noexecute` property on dynamic script elements, default it to 
false, let it be settable to true.
2. Add an `execute()` function.

For `noexecute`, we need a clearer definition of whether this proposal makes 
it only a property on dynamic script elements, or also a boolean 
attribute on markup script elements. If the proposal includes the markup 
attribute, we need a clearer definition of the semantics of how it would 
be used. As stated, `<script src="..." noexecute onload="this.execute()">` 
doesn't work (chicken-and-egg), so in place of that, what is a concrete 
example of how the `noexecute` boolean attribute in markup would be used and 
useful?

The `execute()` function needs further specification as to what happens if 
execute() is called too early, or on a script that already executed, or on a 
script that wasn't `noexecute`, as Will pointed out.


 Is there a concrete alternate proposal that's worth building out instead?

Aside from the questions about the event system (which is required for either 
proposal), the concrete alternate proposal (from me) is simply:

1. Change the suggested behavior of preloading before DOM-append to 
required behavior, modeled on how it is implemented in IE.


As to whether this one is more worth building out than your original 
proposal, my support arguments are:

1. It entirely uses existing precedent, both in the wording of the spec and in 
IE's implementation.
2. It requires fewer new additions (no extra function call), which means less 
complexity to work through semantics on (see the questions above about 
`execute()` semantics).


I haven't heard on this thread any serious discussion of other workable 
proposals besides those two. Correct me if I'm wrong.


 Early on it seemed there was general consensus that changing the existing
 MAY fetch-upon-src-assignment to MUST or SHOULD.

I'm not sure there's been consensus on this yet, but there's definitely been 
some strong support by several people. I'd say the two proposals are about 
even (maybe slightly in favor of `readyState`) in terms of vocalized support 
thus far.


 Since that is only
 tangential to this proposal, provides immediate benefit to existing code,
 and can satisfy use cases that do not require feature-detection or 
 strictly
 synchronous execution.

I'm not sure what you mean by "do not require feature-detection." I think 
it's clear that both proposals need feature-detection to be useful. In both 
cases, we're creating opt-in behavior, and you only want to opt in to that 
behavior (and, by extension, *not* use some other method/fallback) if the 
behavior you want exists.

If I created several script elements, but don't attach them to the DOM, and 
I assume (without feature-testing) that they are being fetched, then without 
this feature they'll never load. So I'd definitely need to feature-test 
before making that assumption.

Conversely, with `noexecute`, I'd definitely want to feature-test that 
`noexecute` was in fact going to suppress execution, [...]

Re: [whatwg] Reserved browsing context names

2011-02-11 Thread Bjartur Thorlacius
On 2/11/11, Ian Hickson i...@hixie.ch wrote:
 We do spec rel="sidebar", which in theory lets you make a link open in an
 auxiliary browsing context, so it would make some sense to have a feature
 to target back, if there is a browser that still lets you use auxiliary
 browsing contexts. Is there?

Well, Firefox 3.0 (IIRC) allows opening bookmarks in the sidebar.
Following links in the sidebar will navigate the main browsing
context, even if the target isn't explicitly set. I don't have access to
Firefox ATM for further testing.


Re: [whatwg] Application Cache for on-line sites

2011-02-11 Thread Jeremy Orlow
bcc chromium-html5

In addition to what Michael has cited, I've had many developers (at various
Google events) ask why we don't have some API like this as well.  I think
it's clear there's demand.

On Fri, Feb 11, 2011 at 1:14 PM, Michael Nordman micha...@google.com wrote:

 Waking this feature request up again as it's been requested multiple
 times, I think the ability to utilize an appcache w/o having to have
 the page added to it is the #1 appcache feature request that I've
 heard.

 * The Gmail mobile team has mentioned this.

 * Here's a thread on a chromium.org mailing list where this feature is
 requested: How to instruct the main page to be not cached?

 https://groups.google.com/a/chromium.org/group/chromium-html5/browse_thread/thread/a254e2090510db39/916f3a8da40e34f8

 * More recently this has been requested in the context of an
 application that uses pushState to alter the url of the main page.

 To keep this discussion distinct from others, I'm pulling in the few
 comments that have been made on another thread.

 hixie said...
  Why can't the pages just switch to a more AJAX-like model rather than
  having the main page still load over the network? The main page loading
  over the network is a big part of the page being slow.

 and i replied...
  The premise of the feature request is that the main pages aren't
  cached at all.
 
  | I tried to use the HTML5 Application Cache to improve the performances
  | of on-line sites (all the tutorials on the web write only about usage
  | with off-line apps)
 
  As for why can't the pages just switch, I can't speak for andrea,
  but i can guess that a redesign of that nature was out of scope and/or
  would conflict with other requirements around how the url address
  space of the app is defined.

 Once you get past the "should this be a feature" question, there are
 some questions to answer.

 1) How does an author indicate which pages should be added to the
 cache and which should not?

 A few ideas...
 a. <html useManifest='x'>
 b. If the main resource has a no-store header, don't add it to the
 cache, but do associate the document with the cache.
 c. A new manifest section to define a prefix-matched namespace for these
 pages.
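
[The new-manifest-section idea might look something like the sketch below. 
The `USE-ONLY:` section name is purely hypothetical, invented here for 
illustration; it is not existing or proposed manifest syntax.]

```
CACHE MANIFEST
CACHE:
app.js
app.css

# Hypothetical section (name invented for illustration): pages whose
# URLs match this prefix use the cache for subresource loads but are
# never themselves added to the cache.
USE-ONLY:
/pages/
```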

 2) What sequence of events does a page that just uses the cache w/o
 being added to it observe?

 3) At what point do subresources in an existing appcache start getting
 utilized by such pages? What if the appcache is stale? Do subresource
 loads cause revalidation?

 On Mon, Dec 20, 2010 at 12:56 PM, Michael Nordman micha...@chromium.org
 wrote:
  This type of request (see forwarded message below) to utilize the
  application cache for subresource loads into documents that are not
 stored
  in the cache has come up several times now. The current feature set is
 very
  focused on the offline use case. Is it worth making additions such that
 a
  document that loads from a server can utilize the resources in an
 appcache?
  Today we have <html manifest="manifestFile">, which adds the document
  containing this tag to the appcache and associates that doc with that
  appcache such that subresource loads hit the appcache.
  Not a complete proposal, but...
  What if we had something along the lines of <html useManifest="manifestFile">,
  which would do the association of the doc with the appcache (so
  subresource loads hit the cache) but not add the document to the cache?
 
  -- Forwarded message --
  From: UVL andrea.do...@gmail.com
  Date: Sun, Dec 19, 2010 at 1:35 PM
  Subject: [chromium-html5] Application Cache for on-line sites
  To: Chromium HTML5 chromium-ht...@chromium.org
 
 
  I tried to use the HTML5 Application Cache to improve the performances
  of on-line sites (all the tutorials on the web write only about usage
  with off-line apps)
 
  I created the manifest listing all the js, css and images, and the
  performances were really exciting, until I found that even the page
  HTML was cached, even though it was not listed in the manifest. The pages
  of the site are in PHP, so I don't want them to be cached.
 
  From
  http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
  :
  Authors are encouraged to include the main page in the manifest also,
  but in practice the page that referenced the manifest is automatically
  cached even if it isn't explicitly mentioned.
 
  Is there a way to have this automating caching disabled?
 
  Note: I know that caching can be controlled via HTTP headers, but I
  just wanted to try this way as it looks quite reliable, clean and
  powerful.
 



Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Kyle Simpson
Once again, the problem with changing how src works is that there's no way 
to feature detect this change. It's completely opaque to developers and 
therefore not helpful in solving the problem.


I still believe the feature-detect for my proposal is valid. It's obviously 
not ideal (oftentimes feature-detects aren't), but I don't think we should 
suffer a more complicated solution just so we can get a slightly more 
graceful feature-detect, when the simpler solution has a functional 
feature-detect. So far, the feature-detect issue is the only thing I've 
heard Nicholas push back on with regards to my proposal.


To restate, the feature-detect for my proposal is:

(document.createElement("script").readyState == "uninitialized") // true 
only for IE, not for Opera or any others, currently


In fact, the precedent was already set (in the async=false 
proposal/discussion, which was officially adopted by the spec recently) for 
having a feature-detect that uses not only the presence of some property but 
its default value.


(document.createElement("script").async === true)

Many of the same reasons I gave there for that type of feature detect 
(slightly unconventional compared to previous ones) are exactly the reasons 
I'm suggesting a similar pattern for `readyState`. Extending the 
feature-detect to use a property and its default value is a delicate way of 
balancing the need for a feature-detect without creating entirely new 
properties (more complexity) just so we can feature-detect.


While it isn't as pretty-looking, in the current state of how the browsers 
have implemented things, it IS workable. The set of browsers and their 
current support for `readyState` is a known matrix. We know that only IE 
(and Opera) have it defined. And given the high visibility of this issue and 
our active evangelism efforts to the browser vendors, it's quite likely that 
all of them would know that the `readyState` part of the proposal is the 
feature-detect.


The only wrinkle would have been Opera possibly changing the default value 
to "uninitialized" but not implementing the proposed underlying behavior. 
Thankfully, they already commented on this thread to indicate they would act 
in good faith to implement the full atomic nature of the proposal (not just 
part of it), so as to preserve the validity of the proposed feature-detect.


I know Nicholas has expressed reservations about that feature-detect. But I 
would say that there needs to be hard evidence of how it will break, not 
just premature fear that some browser vendor will go rogue on us and 
invalidate the expressed assumptions.




Summary of changes:
* Changed noexecute to preload
* No HTML markup usage
* No change to load event
* Introduction of preload event
* Removed mention of readyState

I'd appreciate hearing feedback on this revision from everyone.


Firstly, I like the changes Nicholas made to his proposal. I think preload 
and onpreload are definitely clearer than noExecute and whatever the 
onfinishedloading would have had to be. I still think his proposal is more 
complicated (and thus faces a steeper uphill journey to spec acceptance and 
browser adoption) than `readyState` preloading, but it's definitely clearer 
and more semantic than the original version.


If we ended up deciding to go with Nicholas' proposal, I'd at least suggest 
that `.execute()` on a not-yet-loaded script should not throw an error, but 
should just remove/unset the `preload` flag, such that the script will just 
execute as normal when it finishes loading.
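
[A toy model of the no-throw semantics suggested above; `PreloadScript` and 
its state names are invented for illustration, not a real script element. 
It only shows the state transitions under discussion.]

```javascript
// Toy state machine: execute() on a not-yet-loaded script clears the
// preload flag instead of throwing, so the script runs on arrival.
class PreloadScript {
  constructor() {
    this.preload = true;       // download without executing
    this.state = "loading";    // "loading" -> "loaded" -> "executed"
  }
  // Called by the "browser" when the bytes have arrived.
  finishLoad() {
    this.state = "loaded";
    // If preload was cleared while loading, execute as normal on load.
    if (!this.preload) this.run();
  }
  // Author-facing call: execute now if loaded; otherwise just unset the
  // preload flag (no error) so the script runs when loading finishes.
  execute() {
    if (this.state === "loaded") {
      this.run();
    } else if (this.state === "loading") {
      this.preload = false;
    }
    // execute() after execution is a no-op in this model
  }
  run() {
    if (this.state !== "executed") this.state = "executed";
  }
}
```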


Also, I'd like someone (with better knowledge than I, perhaps Henri?) to 
consider/comment on the implications of Nicholas' statement that 
`.execute()` must be synchronous. I recall from the async=false 
discussions that there were several wrinkles with injected scripts executing 
synchronously (something to do with jQuery and their globalEval). We should 
definitely verify that this part of his proposal isn't setting us up for the 
same landmines that the async=false process had to tip-toe around.


For instance, if I call `.execute()` on a script element that is loaded and 
ready, and it's going to execute synchronously, what happens if the script 
logic itself calls other synchronous `.execute()` calls? And is the script's 
onload event (which fires after execution) also synchronous? I can see 
this leading to some difficult race conditions relating to how script 
loaders have to do cleanup (to prevent memory-leaks) by unsetting 
properties on script elements, etc.



--Kyle





Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Will Alexander
On Feb 11, 2011 5:00 PM, Nicholas Zakas nza...@yahoo-inc.com wrote:

 Once again, the problem with changing how src works is that there's no way
to feature detect this change. It's completely opaque to developers and
therefore not helpful in solving the problem.

I completely agree with what you're saying as it relates to *this* feature.
My only point is, even absent feature testing, readystate, or an ondownload
event, this behavior is useful and benefits existing code.

Execution order management is one example.  Many of these loaders create and
then queue the script elements, using onload chaining to manage their
attachment.  Without the use of any prefetching hacks, scripts will load in
parallel in IE.  This is because IE implements the spec's current performance
suggestion, *not* because it also provides a readystate.  The loader does
not need to know whether a script is in cache before attaching; it only
needs to know the prerequisite libs have been loaded.  There are a number of
add'l use-cases that do not require that prefetching or ondownload be either
available or implemented.  Browsers that prefetch simply perform better
than those that do not.  Many authors are willing to accept this tradeoff,
and more will be prone to do so if this achieves wider adoption.  We have it
for Images, so why not scripts?

  will throw an error.

I don't think it's a stretch to see how an error might not always be
appropriate.  For many of those cases, however, opaque prefetching would be
perfectly acceptable.

  Consider the controljs example in which the menu code does not load until
it is clicked.  There's no requirement that it run synchronously so it is
acceptable for the script's execution to simply be scheduled in response to
the click event.   A non-prefetching browser would not be as performant
but would still work.


 As I said before, I'm not married to all bits of this proposal. If there's
some other way to achieve the same functionality, I'm all for it. The main
goals are listed in the doc, and I'm happy to support any proposal that
achieves all of them.


Just to reiterate, prefetching alone is clearly not the solution to your
problem statement.  I failed to make that clear before.  My point is only
that it is useful, does not require readystate to be so and it seems like an
easy change to the spec.

It also belongs in a separate discussion.  I should not have clouded this
thread with more slightly-related-but-mostly-off-topic fud.
 -N


 -Original Message-
 From: whatwg-boun...@lists.whatwg.org [mailto:
whatwg-boun...@lists.whatwg.org] On Behalf Of Will Alexander
 Sent: Friday, February 11, 2011 12:58 PM
 To: whatwg@lists.whatwg.org
 Subject: Re: [whatwg] Proposal for separating script downloads and
execution

 On Feb 11, 2011 10:41 AM, Nicholas Zakas nza...@yahoo-inc.com wrote:
 
  We've gone back and forth around implementation specifics, and now I'd
like to get a general feeling on direction. It seems that enough people
understand why a solution like this is important, both on the desktop and
for mobile, so what are the next steps?
 
 Early on it seemed there was general consensus that the existing
 MAY fetch-upon-src-assignment should become MUST or SHOULD, since that
 is only tangential to this proposal, provides immediate benefit to
 existing code, and can satisfy use cases that do not require strictly
 synchronous execution.

 I'm hopeful the change would generate activity around these bug reports.

 https://bugs.webkit.org/show_bug.cgi?id=51650
 https://bugzilla.mozilla.org/show_bug.cgi?id=621553

 If I am wrong in my assessment of the consensus, does it make sense to
 consider that change outside of this proposal?

  Are there changes I can make to my proposal that would make it easier to
implement and therefore more likely to have someone take a stab at
implementing?

 I may have missed it, but what would execute() do if the url has not
 been loaded?   Would it be similar to a synchronous XHR request, lead
 to ASAP execution, or throw an error?

  Is there a concrete alternate proposal that's worth building out
instead?

 If execute() must be synchronous, then readystate is not applicable.
 Otherwise, while it should be considered, it would probably take
 longer to describe and has no corresponding markup semantics.

 Glenn's point about noexecute  being a natural extension of defer and
 async is a good one, however neither of those required changing onload
 semantics or introducing a new event type.  Readystate on the other
 hand is already a well-known concept. Moreover, if history is any
 indication, we'll continue using it to implement deferred exec for
 awhile.


Re: [whatwg] Proposal for separating script downloads and execution

2011-02-11 Thread Glenn Maynard
Note that there's still concern that the feature in general hasn't been
justified properly.  In particular, the major real-world example used to
justify this is the Gmail scripts-in-comments hack, and I don't think we
actually know the complete justification for that.  We know it's for mobile
browsers, but it may be for older mobile browsers with much slower
Javascript parsers and not relevant for today's or future browsers (the ones
that would support this), even on mobile devices.

My justification is this: Javascript applications are bigger and more
complex than they used to be, and they'll only get bigger and yet more
complex.  Having codebases several megabytes in size in the future seems a
fair prediction.  Once we get to that point, having browsers parse all of
that at once, no matter how fast parsers are, seems unreasonable; we should
have a solid framework to allow modular codebases, as every other serious
application platform has.  It also seems like it may become very useful to
allow browsers to spend time (whether idle time or otherwise) not just on
parsing but on more expensive optimizations, and having a framework that
gives them access to scripts to do that in advance seems like a very good
idea.  (As timeless pointed out, it may be possible for browsers to work
around the hacks with hacks of their own, such as attempting to extract code
hidden in comments, but I don't think that's a sane way forward.)

Javascript applications generally aren't yet at that size, but I think it's
a fair prediction.  As it takes a long time for anything we're talking about
here to be implemented and deployed, I think it makes sense to not wait
until it actually becomes a problem.

To put forward an opposite argument: browsers caching parsed scripts might
address some of the performance question without any extra API.  Pages would
only have a longer load time the first time they were loaded; pulling a
parsed block of bytecode out of cache should be very fast.

Also, for what it's worth (not much), I ran a simple, very unscientific
benchmark, loading 40 MB of code in Chrome, a list of function f() { a();
b(); c(); d(); } functions.  It took about 6 seconds on my desktop, or
about 150ms per megabyte.  That suggests very weakly that on a current
parser on a desktop browser, a 5 MB application would take on the order of
750ms to load, assuming no parser caching.  I don't know how much of that is
parsing and how much is execution; I only mention it at all since I don't
think there have been any attempts at all so far to put numbers to the
performance question.
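
[In the same unscientific spirit, a sketch of how such a parse-cost 
measurement might be made with the `Function` constructor. The numbers, like 
Glenn's, are order-of-magnitude hints at best, and engines that lazily parse 
inner functions will understate the true cost.]

```javascript
// Generate roughly the requested amount of trivial function declarations
// and time how long the engine takes to compile (not invoke) the source.
function makeSource(megabytes) {
  const unit = "function f() { a(); b(); c(); d(); }\n";
  const copies = Math.ceil((megabytes * 1024 * 1024) / unit.length);
  return unit.repeat(copies);
}

function timeParse(src) {
  const t0 = Date.now();
  new Function(src);          // compiles the body without running it
  return Date.now() - t0;
}

const src = makeSource(1);
console.log("compiled", src.length, "bytes in", timeParse(src), "ms");
```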

On Fri, Feb 11, 2011 at 5:44 PM, Nicholas Zakas nza...@yahoo-inc.com wrote:

 Thanks Kyle, those comments were helpful. I've simplified and refined my
 proposal based on them and the others in this thread:

 https://docs.google.com/document/d/1wLdTU3xPMKhBP0anS774Y4ZT2UQDqVhnQl3VnSceDJM/edit?hl=enauthkey=CJ6z2ZgO

 Summary of changes:
 * Changed noexecute to preload
 * No HTML markup usage


It seems consistent to allow specifying it via markup, like defer and async,
so scripts can be preloaded in markup, but it's a minor point.  I suppose
handling this sanely would also require another attribute, indicating
whether onpreload has been called yet, so maybe it's not worth it.

* No change to load event
 * Introduction of preload event
 * Removed mention of readyState


It's hard to read your example, since the indentation was, I think, mangled
during the paste into the document.

I think the example code can be simplified a lot to demonstrate the API more
clearly.  I've attached a simplified version.  It also explicitly catches
exceptions from execute() and calls errorCallback, and demonstrates feature
checking (in a simpler way).

-- 
Glenn Maynard


Re: [whatwg] Onpopstate is Flawed

2011-02-11 Thread Justin Lebar
 The problem with option B is that pages can't display correctly until
 the load event fires, which can be quite late in the game what with
 slow loading images and ads. It means that if you're on a page which
 uses state, and reload the page, you'll first see the page in a
 state-less mode while it's loading, and at some point later (generally
 when the last image finishes loading) it'll snap to be in the state
 it was when you pressed reload.

 You'll get the same behavior going back to a state-using page which
 has been kicked out of the fast-cache.

But isn't this problem orthogonal to option B?  That is, we could
still add the DOM property to address this concern, right?

But at least with option B, one can write a correct page without
reading that property -- that is, pages won't have to change in order
to be as fast and correct as they currently are.
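
[Option B's "never fire a stale popstate" rule can be modeled as a small 
queue: a queued popstate task records a state-change counter at queue time 
and is dropped if any pushState/replaceState/navigation bumped the counter 
before the task ran. This is a toy model of the rule, not a real History 
implementation, and the names are invented for illustration.]

```javascript
// Toy model of option B: stale popstate tasks are silently dropped.
class PopStateQueue {
  constructor() {
    this.changes = 0;        // bumped on every pushState/replaceState/navigation
    this.tasks = [];
  }
  stateChanged() { this.changes++; }
  queuePopState(state) {
    this.tasks.push({ state, queuedAt: this.changes });
  }
  // Returns the popstate events that actually fire: only those queued
  // since the last state change survive.
  flush() {
    const fired = this.tasks.filter(t => t.queuedAt === this.changes);
    this.tasks = [];
    return fired.map(t => t.state);
  }
}
```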

-Justin


Re: [whatwg] Cryptographically strong random numbers

2011-02-11 Thread Cedric Vivier
On Sat, Feb 12, 2011 at 04:40, Adam Barth w...@adambarth.com wrote:
 Is there a specific reason for this limitation?
 Imho it should throw only for Float32Array and Float64Array since
 unbounded random floating-point numbers do not really make sense
 (including because of NaN and +inf/-inf).
 (...)
 I went with a whitelist approach.  If there are other specific types
 that you think we should whitelist, we can certainly do that.  What
 types, specifically, would you like to see supported?

All integer types can have use cases imo, so there is no reason to
impose an artificial limitation [1] except for sanity-checking (floats
do not make sense here), i.e.:
Int8Array, Uint8Array, Int16Array, Uint16Array, Int32Array, Uint32Array

Regards,

[1] : artificial because typed arrays can be 'cast' anyways.


Re: [whatwg] Workers feedback

2011-02-11 Thread Gregg Tavares (wrk)
On Fri, Feb 4, 2011 at 3:43 PM, Ian Hickson i...@hixie.ch wrote:

 On Sat, 16 Oct 2010, Samuel Ytterbrink wrote:
 
  *What is the problem you are trying to solve?*
  To create sophisticated single file webpages.

 That's maybe a bit vaguer than I was hoping for when asking the question.
 :-)

 Why does it have to be a single file? Would multipart MIME be acceptable?

 A single file is a solution, not a problem. What is the problem?


  [...] trying to build a more optimal standalone DAISY player (would be
  nice if i could rewrite it with web workers).

 Now that's a problem. :-)

 It seems like what you need is a package mechanism, not necessarily a way
 to run workers without an external script.


 On Fri, 15 Oct 2010, Jonas Sicking wrote:
 
  Allowing both blob URLs and data URLs for workers sounds like a great
  idea.

 I expect we'll add these in due course, probably around the same time we
 add cross-origin workers. (We didn't add them before because exactly how
 we do them depends on how we determine origins.)


 On Sat, 16 Oct 2010, Samuel Ytterbrink wrote:
 
  But then i got another problem: why is
  file:///some_directory_where_the_html_are/ not the same domain as
  file:///some_directory_where_the_html_are/child_directory_with_ajax_stuff/?
  I understand if it was not okay to go closer to root when doing ajax,
  e.g. file:///where_all_secrete_stuff_are/ or /../../.

 That's not a Web problem. I recommend contacting your browser vendor about
 it. (It's probably security-related.)


 On Thu, 30 Dec 2010, Glenn Maynard wrote:
  On Thu, Dec 30, 2010 at 7:11 PM, Ian Hickson i...@hixie.ch wrote:
  
   Unfortunately we can't really require immediate failure, since there'd
   be no way to test it or to prove that it wasn't implemented -- a user
   agent could always just say oh, it's just that we take a long time to
   launch the worker sometimes. (Performance can be another hardware
   limitation.)
 
  Preferably, if a Worker is successfully created, the worker thread
  starting must not block on user code taking certain actions, like
  closing other threads.

 How can you tell the difference between the thread takes 3 seconds to
 start and the thread waits for the user to close a thread, if it takes
 3 seconds for the user to close a thread?

 My point is from a black-box perspective, one can never firmly say that
 it's not just the browser being slow to start the thread. And we can't
 disallow the browser from being slow.


  That doesn't mean it needs to start immediately, but if I start a thread
  and then do nothing, it's very bad for the thread to sit in limbo
  forever because the browser expects me to take some action, without
  anything to tell me so.

 I don't disagree that it's bad. Hopefully browser vendors will agree and
 this problem will go away.


  If queuing is really necessary, please at least give us a way to query
  whether a worker is queued.

 It's queued if you asked it to start and it hasn't yet started.


 On Fri, 31 Dec 2010, Aryeh Gregor wrote:
 
  I've long thought that HTML5 should specify hardware limitations more
  precisely.

 We can't, because it depends on the hardware. For example, we can't say
 you must be able to allocate a 1GB string because the system might only
 have 500MB of storage.


  Clearly it can't cover all cases, and some sort of general escape clause
  will always be needed -- but in cases where limits are likely to be low
  enough that authors might run into them, the limit should really be
  standardized.

 It's not much of a standardised limit if there's still an escape clause.

 I'm happy to put recommendations in if we have data showing certain
 specific limits are needed for interop with real content.


   Unfortunately we can't really require immediate failure, since there'd
   be no way to test it or to prove that it wasn't implemented -- a user
   agent could always just say oh, it's just that we take a long time to
   launch the worker sometimes. (Performance can be another hardware
   limitation.)
 
  In principle this is so, but in practice it's not.  In real life, you
  can easily tell an algorithm that runs the first sixteen workers and
  then stalls any further ones until one of the early ones exit, from an
  algorithm that just takes a while to launch workers sometimes.  I think
  it would be entirely reasonable and would help interoperability in
  practice if HTML5 were to require that the UA must run all pending
  workers in some manner that doesn't allow starvation, and that if it
  can't do so, it must return an error rather than accepting a new worker.
  Failure to return an error should mean that the worker can be run soon,
  in a predictable timeframe, not maybe at some indefinite point in the
  future.

 All workers should run soon, not maybe in the future. Not running a
 worker should be an unusual circumstance. Errors that occur in unusual
 circumstances aren't errors that authors will check for.

 This discussion comes from Chrome 

Re: [whatwg] Workers feedback

2011-02-11 Thread Ian Hickson
On Fri, 11 Feb 2011, Gregg Tavares (wrk) wrote:
  On Fri, 7 Jan 2011, Berend-Jan Wever wrote:
  
   1) To give WebWorkers access to the DOM API so they can create their 
   own elements such as img, canvas, etc...?
 
  It's the API itself that isn't thread-safe, unfortunately.
 
 I didn't see the original thread but how is a WebWorker any different 
 from another webpage? Those run just fine in other threads and use the 
 DOM API.

Web pages do not run in a different thread.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


[whatwg] [html5] @formaction, @formenctype, @formmethod, @formnovalidate, @formtarget

2011-02-11 Thread Jens O. Meiert
I realize that the @formaction, @formenctype, @formmethod,
@formnovalidate, and @formtarget attributes on “input” and “button”
elements are not quite new to HTML 5 anymore, however would someone
mind sharing with me why we don’t just simply allow @action, @enctype,
@method, @novalidate, and @target to serve the exact same purpose?
Would any existing behavior of user agents stand in the way of this,
or is there any other kind of incompatibility (examples appreciated)?

Pending any oversight, allowing the same attribute names seems
straightforward, simple, and way easier to use.

-- 
Jens O. Meiert
http://meiert.com/en/


Re: [whatwg] Workers feedback

2011-02-11 Thread Gregg Tavares (wrk)
On Fri, Feb 11, 2011 at 5:45 PM, Ian Hickson i...@hixie.ch wrote:

 On Fri, 11 Feb 2011, Gregg Tavares (wrk) wrote:
   On Fri, 7 Jan 2011, Berend-Jan Wever wrote:
   
1) To give WebWorkers access to the DOM API so they can create their
own elements such as img, canvas, etc...?
  
   It's the API itself that isn't thread-safe, unfortunately.
 
  I didn't see the original thread but how is a WebWorker any different
  from another webpage? Those run just fine in other threads and use the
  DOM API.

 Web pages do not run in a different thread.


Oh, sorry. I meant they run in a different process. At least in some
browsers.






Re: [whatwg] Workers feedback

2011-02-11 Thread Drew Wilson
I'll mention that the Chrome team is experimenting with something like this
(as a Chrome extensions API) - certain extensions will be able to do:

window.open("my_bg_page.html", "name", "background");

...and the associated window will be opened offscreen. They share a process
with other pages under that domain which means they can't be used as a
worker (doing long-lived operations). But I agree, there's some value in
having the full set of page APIs available.

-atw

On Fri, Feb 11, 2011 at 5:58 PM, Gregg Tavares (wrk) g...@google.com wrote:

 On Fri, Feb 11, 2011 at 5:45 PM, Ian Hickson i...@hixie.ch wrote:

  On Fri, 11 Feb 2011, Gregg Tavares (wrk) wrote:
On Fri, 7 Jan 2011, Berend-Jan Wever wrote:

 1) To give WebWorkers access to the DOM API so they can create
 their
 own elements such as img, canvas, etc...?
   
It's the API itself that isn't thread-safe, unfortunately.
  
   I didn't see the original thread but how is a WebWorker any different
   from another webpage? Those run just fine in other threads and use the
   DOM API.
 
  Web pages do not run in a different thread.
 

 Oh, sorry. I meant they run in a different process. At least in some
 browsers.


 
 



Re: [whatwg] Reserved browsing context names

2011-02-11 Thread Boris Zbarsky

On 2/11/11 5:12 PM, Ian Hickson wrote:

I tried testing this but I couldn't actually find a modern browser where
there was a way to put content into a sidebar, so I'm not sure how to
test it.


Bookmark something in Firefox.  Open the Bookmarks menu and select the 
"Show All Bookmarks" option.  Select your bookmark, click the little 
expander twisty at the bottom of the right-hand pane, and check the 
"Load this bookmark in the sidebar" checkbox.


Then load the bookmark (might need to move it to a folder you can access 
via the bookmarks menu or whatnot).


Discoverable that's not, I agree.  ;)

Of course extensions can also load things in sidebars.

-Boris


Re: [whatwg] [html5] @formaction, @formenctype, @formmethod, @formnovalidate, @formtarget

2011-02-11 Thread Anne van Kesteren

On Sat, 12 Feb 2011 02:45:55 +0100, Jens O. Meiert j...@meiert.com wrote:

Pending any oversight, allowing the same attribute names seems
straight-forward, simple, and way easier to use.


It was that way before, but many pages were already using those attributes  
and expected the browser to not do anything with them.



--
Anne van Kesteren
http://annevankesteren.nl/