Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-23 Thread Darin Fisher
On Tue, Apr 17, 2012 at 9:12 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 4/17/12 6:32 PM, Darin Fisher wrote:

 In Chrome at least, getImageData() doesn't actually block to fetch pixels.
  The thread is only blocked when the first dereference of the pixel buffer
 occurs.


 How does that interact with paints that happen after the getImageData
 call?  Or is the point that you send off an async request for a pixel
 snapshot but don't block on it returning until someone tries to reach into
 the pixel buffer?


To answer your second question:  Yes.

I think the implication for the first question is that you would get back a
snapshot of what the pixel data should have been when you called
getImageData.

-Darin


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-23 Thread Darin Fisher
On Sun, Apr 22, 2012 at 6:03 PM, Maciej Stachowiak m...@apple.com wrote:


 On Apr 20, 2012, at 6:53 AM, Glenn Maynard wrote:

 On Thu, Apr 19, 2012 at 11:28 PM, Maciej Stachowiak m...@apple.com wrote:

 You could also address this by adding a way to be notified when the
 contents of an ImageData are available without blocking. That would work
 with both vanilla getImageData and the proposed getImageDataHD. It would
 also give the author the alternative of just blocking (e.g. if they know
 the buffer is small) or of sending the data off to a worker for processing.


 This would result in people writing poor code, based on incorrect
 assumptions.  It doesn't matter how big the buffer is; all that matters is
 how long the drawing calls before the getImageData take.  For example, if
 multiple canvases are being drawn to (eg. on other pages running in the
 same thread), they may share a single drawing queue.

 Any time you retrieve image data synchronously, and it happens to require
 a draw flush, you freeze the UI for all pages sharing that thread.  Why is
 that okay for people to do?  We should know better by now than to expose
 APIs that encourage people to block the UI thread, after spending so much
 time trying to fix that mistake in early APIs.

 (This should expose a synchronous API in workers if and when Canvas makes
 it there, of course, just like all other APIs.)


 All JavaScript that runs on the main thread has the potential to freeze
 the UI for all pages sharing that thread. One can imagine models that
 avoid this by design - for example, running all JavaScript on one or more
 threads separate from the UI thread. But from where we are today, it's not
 practical to apply such a solution. It's also not practical to make every
 API asynchronous - it's just too hard to code that way.

 In light of this, we need some sort of rule for what types of APIs should
 only be offered in asynchronous form on the main thread. Among the major
 browser vendors, there seems to be a consensus that this should at least
 include APIs that do any network or disk I/O. Network and disk are slow
 enough and unpredictable enough that an author could never correctly judge
 that it's safe to do synchronous I/O.

 Some feel that a call that reads from the GPU may also be in this category
 of intrinsically too slow/unpredictable. However, we are talking about
 operations with a much lower upper bound on their execution time. We're
 also talking about an operation that has existed in its synchronous form
 (getImageData) for several years, and we don't have evidence of the types
 of severe problems that, for instance, synchronous XHR has been known to
 cause. Indeed, the amount of trouble caused is low enough that no one has
 yet proposed or implemented an async version of this API.


The point is not whether the jank introduced by GPU readbacks rises to the
level of an emergency.  The point is that it can be costly, and it can
interfere greatly with keeping the main thread interactive.  If you assume a
goal of 60 FPS, then even smallish jank can be a killer.  It is common for
new GL programmers to call glGetError too often, for example, and that can
kill the performance of the app.  Of course this is nowhere near as bad as
synchronous XHR; it doesn't have to be at that level to be a problem.  In
other words, I think it is fair to focus on 60 FPS as a goal.

That said, I've come around to being OK with getImageDataHD.  As I wrote
recently, this is because it is possible to implement it in a non-blocking
fashion.  It can just queue up a readback.  It only becomes necessary to
block the calling thread when a pixel is dereferenced.  This affords
developers an opportunity to instead pass the ImageData off to a web worker
before dereferencing.  Hence, the main thread should not jank up.  This of
course requires developers to be very smart about what they are doing, and
browsers to be smart too.
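The hand-off pattern described here might look like the following sketch.  The worker wiring is shown in comments (the names and message shape are illustrative, not a real API); the pixel-crunching itself is a plain function, so any blocking readback is paid on the worker thread, which is the first to touch the buffer:

```javascript
// Sketch: keep the main thread jank-free by letting a worker be the first
// to dereference the pixel buffer.  Names and message shapes are
// illustrative only.
//
// --- main thread (hypothetical) ---
// const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
// worker.postMessage(imageData);          // no dereference happens here
// worker.onmessage = (e) => console.log('mean luminance:', e.data);
//
// --- worker side: the first read of data[i] is what may block, but only
// --- the worker thread pays for the readback.
function meanLuminance(data) {
  let sum = 0;
  for (let i = 0; i < data.length; i += 4) {
    // Rec. 601 luma from the RGBA bytes
    sum += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
  }
  return sum / (data.length / 4);
}
// onmessage = (e) => postMessage(meanLuminance(e.data.data));
```

The point of the sketch is only the ordering: the main thread forwards the ImageData without reading it, so the flush happens off the UI thread.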

I'm still sad that getImageData{HD} makes it easy for bad code in one web
page to screw over other web pages.  The argument that this is easy to do
anyway with long-running script is a cop-out.  We should guide developers
to do the right thing in this cooperatively multitasking system.

-Darin




 If adding an async version has not been an emergency so far, then I don't
 think it is critical enough to block adding scaled backing store support.
 Nor am I convinced that we need to deprecate or phase out the synchronous
 version. Perhaps future evidence will change the picture, but that's how it
 looks to me so far.

 Regards,
 Maciej




Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-17 Thread Darin Fisher
On Mon, Apr 16, 2012 at 4:05 PM, Darin Fisher da...@chromium.org wrote:



 On Mon, Apr 16, 2012 at 2:57 PM, Oliver Hunt oli...@apple.com wrote:


 On Apr 16, 2012, at 2:34 PM, Darin Fisher da...@chromium.org wrote:

  On Mon, Apr 16, 2012 at 1:39 PM, Oliver Hunt oli...@apple.com wrote:
 
 
  On Apr 16, 2012, at 1:12 PM, Darin Fisher da...@chromium.org wrote:
 
  Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
  more precise issue.
 
  On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt oli...@apple.com
 wrote:
 
  Could someone construct a demonstration of where the read back of the
  imagedata takes longer than a runloop cycle?
 
 
  I bet this would be fairly easy to demonstrate.
 
 
  Then by all means do :D
 
 
 
  Here's an example.
 
  Take http://ie.microsoft.com/testdrive/Performance/FishIETank/, and
 apply
  the following diff (changing the draw function):
 
  BEGIN DIFF
  --- fishie.htm.orig 2012-04-16 14:23:29.224864338 -0700
  +++ fishie.htm  2012-04-16 14:21:38.115489276 -0700
  @@ -177,10 +177,17 @@
  // Draw each fish
  for (var fishie in fish) {
  fish[fishie].swim();
  }
 
  +
  +if (window.read_back) {
  +var data = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
  +var x = data[0];  // force readback
  +}
  +
  +
 //draw fpsometer with the current number of fish
  fpsMeter.Draw(fish.length);
  }
 
  function Fish() {
  END DIFF
 
  Running on a Mac Pro, with Chrome 19 (WebKit @r111385), with 1000 fish,
 I
  get 60 FPS.  Setting read_back to true (using dev tools), drops it down
 to
  30 FPS.
 
  Using about:tracing (a tool built into Chrome), I can see that the read
  pixels call is taking ~15 milliseconds to complete.  The implied GL
 flush
  takes ~11 milliseconds.
 
  The page was sized to 1400 x 1000 pixels.

 How does that compare to going through the runloop -- how long does it
 take to get from that point to a timeout being called if you do var start =
 new Date; setTimeout(function() {console.log(new Date - start);}, 0);
 ?


 The answer is ~0 milliseconds.  I know this because without the
 getImageData call, the frame rate is 60 FPS.  The page calls the draw()
 function from an interval timer that has a period of 16.7 milliseconds.
  The trace indicates that nearly all of that budget is used up prior to the
 getImageData() call that I inserted.




 This also ignores the possibility that in requesting the data, I
 probably also want to do some processing on the data, so for the sake of
 simplicity how long does it take to subsequently iterate through every
 pixel and set it to 0?


 That adds about 44 milliseconds.  I would hope that developers would
 either perform this work in chunks or pass ImageData.data off to a web
 worker for processing.
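The "work in chunks" idea mentioned above can be sketched as follows: touch only a slice of the pixel buffer per time slice, yielding back to the event loop between slices so no single task blocks the main thread for long.  The helper name and chunk size here are illustrative, not a real API:

```javascript
// Sketch of chunked pixel processing.  `schedule` is how we yield between
// chunks: in a page it would be something like fn => setTimeout(fn, 0);
// a synchronous fn => fn() makes the sketch easy to test.
const CHUNK_SIZE = 64 * 1024; // bytes per slice; tune to your frame budget

function zeroPixelsInChunks(data, schedule, done) {
  let offset = 0;
  function step() {
    const end = Math.min(offset + CHUNK_SIZE, data.length);
    for (let i = offset; i < end; i++) data[i] = 0; // the per-chunk work
    offset = end;
    if (offset < data.length) schedule(step); // yield, then keep going
    else done(data);
  }
  schedule(step);
}
```

With `setTimeout` as the scheduler, each chunk runs as its own task, so input events and paints can interleave with the processing.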


^^^ This got me thinking...

In Chrome at least, getImageData() doesn't actually block to fetch pixels.
 The thread is only blocked when the first dereference of the pixel buffer
occurs.  I believe this is done so that a getImageData() followed by
putImageData() call will not need to block the calling thread.

The above suggests that making getImageData() asynchronous would not
actually provide any benefit for cases where the page does not dereference
the pixel buffer.  Another use case where this comes up is passing the
ImageData to a web worker.  If the web worker is the first to dereference
the ImageData, then only the web worker thread should block.

I think this becomes an argument for keeping getImageData() as is.  It
assumes that ImageData is just a handle, and we could find another way to
discourage dereferencing the pixel buffer on the UI thread.

Hmm...

-Darin







 Remember, the goal of making this asynchronous is to improve performance.
 The 11 ms of drawing does have to occur at some point; you're just hoping
 that by making things asynchronous you can mask that.  But I doubt you
 would see an actual improvement in wall-clock performance.


 The 11 ms of drawing occurs on a background thread.  Yes, that latency
 exists, but it doesn't have to block the main thread.

 Let me reiterate the point I made before.  There can be multiple web pages
 sharing the same main thread.  (Even in Chrome this can be true!)  Blocking
 one web page has the effect of blocking all web pages that share the same
 main thread.

 It is not nice for one web page to jank up the browser's main thread and
 as a result make other web pages unresponsive.




 I also realised something else that I had not previously considered -- if
 you're doing bitblit based sprite movement the complexity goes way up if
 this is asynchronous.


 I don't follow.  Can you clarify?

 Thanks,
 -Darin



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Wed, Mar 21, 2012 at 8:29 PM, Maciej Stachowiak m...@apple.com wrote:


 On Mar 20, 2012, at 12:00 PM, James Robinson wrote:

  If we are adding new APIs for manipulating the backing directly, can we
  make them asynchronous? This would allow for many optimization
  opportunities that are currently difficult or impossible.

 Neat idea to offer async backing store access. I'm not sure that we should
 tie this to backing store access at true backing store resolution vs at CSS
 pixel nominal resolution, because it will significantly raise the barrier
 to authors recoding their existing apps to take full advantage of higher
 resolutions. With Ted's proposal, all they would have to do is use the HD
 versions of calls and change their loops to read the bounds from the
 ImageData object instead of assuming. If we also forced the new calls to be
 async, then more extensive changes would be required.

 I hear you on the benefits of async calls, but I think it would be better
 to sell authors on their benefits separately.

 Cheers,
 Maciej



Carrots and Sticks.

Aren't we missing an opportunity here?  By giving web developers this easy
migration path, you're also giving up the opportunity to encourage them to
use a better API.  Asynchronous APIs are harder to use, and that's why we
need to encourage their adoption.  If you just give people a synchronous
version that accomplishes the same thing, then they will just use that,
even if doing so causes their app to perform poorly.

See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes that
didn't exist.  Note how we recently withdrew support for synchronous
ArrayBuffer access on XHR?  We did this precisely to discourage use of
synchronous mode XHR. Doing so actually broke some existing web pages.  The
pain was deemed worth it.

GPU readback of an HD buffer is going to suck.  Any use of this new API is
going to suck.

-Darin





 
  - James
  On Mar 20, 2012 10:29 AM, Edward O'Connor eocon...@apple.com
 wrote:
 
  Hi,
 
  Unfortunately, lots of canvas content (especially content which calls
  {create,get,put}ImageData methods) assumes that the canvas's backing
  store pixels correspond 1:1 to CSS pixels, even though the spec has been
  written to allow for the backing store to be at a different scale
  factor.
 
  Especially problematic is that developers have to round trip image data
  through a canvas in order to detect that a different scale factor is
  being used.
 
  I'd like to propose the addition of a backingStorePixelRatio property to
  the 2D context object. Just as window.devicePixelRatio expresses the
  ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would
  express the ratio of backing store pixels to CSS pixels. This allows
  developers to easily branch to handle different backing store scale
  factors.
 
  Additionally, I think the existing {create,get,put}ImageData API needs
  to be defined to be in terms of CSS pixels, since that's what existing
  content assumes. I propose the addition of a new set of methods for
  working directly with backing store image data. (New methods are easier
  to feature detect than adding optional arguments to the existing
  methods.) At the moment I'm calling these {create,get,put}ImageDataHD,
  but I'm not wedded to the names. (Nor do I want to bikeshed them.)
 
 
  Thanks for your consideration,
  Ted
 




Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 11:17 AM, Oliver Hunt oli...@apple.com wrote:


 On Apr 16, 2012, at 11:07 AM, Darin Fisher da...@chromium.org wrote:

 
  Carrots and Sticks.
 
  Aren't we missing an opportunity here?  By giving web developers this
 easy
  migration path, you're also giving up the opportunity to encourage them
 to
  use a better API.  Asynchronous APIs are harder to use, and that's why we
  need to encourage their adoption.  If you just give people a synchronous
  version that accomplishes the same thing, then they will just use that,
  even if doing so causes their app to perform poorly.
 
  See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes
 that
  didn't exist.  Note how we recently withdrew support for synchronous
  ArrayBuffer access on XHR?  We did this precisely to discourage use of
  synchronous mode XHR. Doing so actually broke some existing web pages.
  The
  pain was deemed worth it.
 
  GPU readback of a HD buffer is going to suck.  Any use of this new API is
  going to suck.
 
  -Darin
 

 Any use of imagedata I've seen assumes that it can avoid intermediate
 states in the canvas ever being visible.  If you make reading and writing
 the data asynchronous, you break that invariant and suddenly make things
 much harder for the user.


I agree with Charles Pritchard that it is only the reading of pixel data
that should be asynchronous.

I think developers could learn to cope with this new design just as they do
with other asynchronous facets of the platform.




 The reason we don't want IO synchronous is because IO can take a
 potentially unbound amount of time; if you're on a platform that makes a
 memcpy take similarly unbound time, I recommend that you work around it.


Of course, GPU readbacks do not compare to network IO.  However, if the
goal is to achieve smooth animations, then it is important that the main
thread not hitch for multiple animation frames.  GPU readbacks are
irregular in duration and can sometimes be quite expensive if the GPU
pipeline is heavily burdened.




 Anyway, the sensible approach to imagedata + hardware-backed canvas is to
 revert to a software-backed canvas, as once someone has used imagedata
 once, they're likely to do it again (and again, and again), so it is
 probably a win to just do everything in software at that point.  Presumably
 you could throw in heuristics to determine whether or not it's worth
 going back to the GPU at some point, but many of the common image data use
 cases will have awful perf if you try to keep them on the GPU 100% of the
 time.


I don't think it is OK if at application startup (or animation startup)
there is a big UI glitch as the system determines that it should not
GPU-back a canvas.  We have the opportunity now to design an API that does
not have that bug.

Why don't you want to take advantage of this opportunity?

-Darin





 
 
 
 
 
  - James
  On Mar 20, 2012 10:29 AM, Edward O'Connor eocon...@apple.com
  wrote:
 
  Hi,
 
  Unfortunately, lots of canvas content (especially content which
 calls
  {create,get,put}ImageData methods) assumes that the canvas's backing
  store pixels correspond 1:1 to CSS pixels, even though the spec has
 been
  written to allow for the backing store to be at a different scale
  factor.
 
  Especially problematic is that developers have to round trip image
 data
  through a canvas in order to detect that a different scale factor is
  being used.
 
  I'd like to propose the addition of a backingStorePixelRatio property
 to
  the 2D context object. Just as window.devicePixelRatio expresses the
  ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would
  express the ratio of backing store pixels to CSS pixels. This allows
  developers to easily branch to handle different backing store scale
  factors.
 
  Additionally, I think the existing {create,get,put}ImageData API needs
  to be defined to be in terms of CSS pixels, since that's what existing
  content assumes. I propose the addition of a new set of methods for
  working directly with backing store image data. (New methods are
 easier
  to feature detect than adding optional arguments to the existing
  methods.) At the moment I'm calling these {create,get,put}ImageDataHD,
  but I'm not wedded to the names. (Nor do I want to bikeshed them.)
 
 
  Thanks for your consideration,
  Ted
 
 
 




Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
more precise issue.

On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt oli...@apple.com wrote:

 Could someone construct a demonstration of where the read back of the
 imagedata takes longer than a runloop cycle?


I bet this would be fairly easy to demonstrate.


 You're asking for significant additional complexity for content authors,
 with a regression in general-case performance.  It would be good to see if
 it's possible to create an example, even if it's not something any sensible
 author would do, where there is a performance improvement.

 Remember, the application is only marginally better when it's not painting
 due to waiting for a runloop cycle than it is when blocked waiting on a
 graphics flush.


You can do a lot of other things during this time.  For example, you can
prepare the next animation frame.  You can run JavaScript garbage
collection.

Also, it is common for a browser thread to handle animations for multiple
windows.  If you have animations going in both windows, it would be nice
for those animations to update in parallel instead of being serialized.

-Darin




 Also, if the argument is wrt deferred rendering rather than GPU copyback,
 can we drop GPU related arguments from this thread?

 --Oliver

 On Apr 16, 2012, at 12:10 PM, Glenn Maynard gl...@zewt.org wrote:

 On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt oli...@apple.com wrote:

 I don't understand why adding a runloop cycle to any read seems like
 something that would introduce a much more noticeable delay than a memcpy.


 The use case is deferred rendering.  Canvas drawing calls don't need to
 complete synchronously (before the drawing call returns); they can be
 queued, so API calls return immediately and the actual draws can happen in
 a thread or on the GPU.  This is exactly like OpenGL's pipelining model
 (and might well be implemented using it, on some platforms).

 The problem is that if you have a bunch of that work pipelined, and you
 perform a synchronous readback, you have to flush the queue.  In OpenGL
 terms, you have to call glFinish().  That might take long enough to cause a
 visible UI hitch.  By making the readback asynchronous, you can defer the
 actual operation until the operations before it have been completed, so you
 avoid any such blocking in the UI thread.


  I also don't understand what makes reading from the GPU so expensive
 that adding a runloop cycle is necessary for good perf, but it's
 unnecessary for a write.


 It has nothing to do with how expensive the GPU read is, and everything to
 do with the need to flush the pipeline.  Writes don't need to do this; they
 simply queue, like any other drawing operation.

 --
 Glenn Maynard






Re: [whatwg] Proposal for non-modal versions of modal prompts

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 1:18 PM, Maciej Stachowiak m...@apple.com wrote:


 On Mar 29, 2012, at 1:10 AM, Darin Fisher wrote:



 On Wed, Mar 21, 2012 at 8:03 PM, Maciej Stachowiak m...@apple.com wrote:


 On Mar 21, 2012, at 7:54 PM, Maciej Stachowiak wrote:

 
  dialog will give a better user experience than even a non-modal
 version of window.confirm() or window.alert(). Dialogs that are fully
 in-page

 Oops, got cut off here. What I meant to say is something like dialogs
 that are fully in-page are the emerging standard for high-quality
 page-modal prompting.


 Non-blocking window.{alert,confirm,prompt} would most likely be rendered
 by UAs as in-page overlays / tab-scoped dialogs.  This is what we would do
 in Chrome, and it seems like others would do the same given the prevalence
 of the standard window.{alert,confirm,prompt} being implemented in a
 tab-scoped manner already by some browsers (albeit with bugs).

 I think people use alert, confirm and prompt in part because they are so
 easy to use.  People who choose window.{alert,confirm,prompt} probably
 don't care about loss of customization or else they would roll their own
 dialogs.

 Why not provide less sucky versions of those common dialogs?

 Benefit:  Less code for simple dialogs.
 Con:  Another web platform API to standardize.


 Con: Encourages poor HI design (since these stock dialogs should almost
 never be used).

 That being said, I find in-page UI less objectionable than a pop-up alert,
 but in that case I'm not sure it makes sense to overload the existing API.
 It would be better to make new methods so feature testing is possible. Even
 given all that, I'm not confident of the value add over dialog.


It seems like poor HI design is rather subjective.  Some might prefer the
OS-native look-and-feel of these simple dialogs.

Good point about feature testing.  I'd be OK with
async{Alert,Confirm,Prompt} or whatever name variant we prefer.

You don't see much value in the simplicity of having these methods be
provided by the platform?  It seems like dialog requires much more code
to set up.
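A hypothetical shape for one of these methods, to make the control flow concrete.  `asyncConfirm` is the name floated above, not a real API, and `showDialog` is an injected stand-in for whatever tab-scoped overlay the UA would render; the point is that the caller's script keeps running and the answer arrives via callback rather than a blocked return value:

```javascript
// Hypothetical sketch of a non-blocking confirm.  asyncConfirm is not a
// real API; showDialog stands in for the UA's in-page dialog rendering.
function asyncConfirm(message, callback, showDialog) {
  // Defer so the current task finishes before the dialog logic runs,
  // mirroring how a UA would always answer asynchronously.
  setTimeout(() => showDialog(message, callback), 0);
}
```

Feature testing falls out naturally: `if (window.asyncConfirm) { ... }`, which is the advantage of new methods over overloading the existing blocking ones.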

Regards,
-Darin


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 1:45 PM, Oliver Hunt oli...@apple.com wrote:


 On Apr 16, 2012, at 11:07 AM, Darin Fisher da...@chromium.org wrote:
 
  See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes
 that
  didn't exist.  Note how we recently withdrew support for synchronous
  ArrayBuffer access on XHR?  We did this precisely to discourage use of
  synchronous mode XHR. Doing so actually broke some existing web pages.
  The
  pain was deemed worth it.

 Yes, but the reason for this is very simple: synchronous IO can take a
 literally interminable amount of time, in which nothing else can happen.
  We're talking about something entirely client side, that is theoretically
 going to be done sufficiently quickly to update a frame.

 The IO case has a best case of hundreds of milliseconds, whereas that is
 likely to be close to the worst case on the graphics side.


Sorry, I did not make my point clear.  I did not intend to equate network
delays to graphics delays, as they are obviously not on the same order of
magnitude.  Let me try again.

We decided that we didn't like synchronous XHR.  We decided to withhold new
features from synchronous XHR.  I believe we did so in part to discourage
use of synchronous XHR and encourage use of asynchronous XHR.

I was suggesting that we have an opportunity to apply a similar approach to
canvas ImageData.

I have learned that it is not commonly accepted that reading ImageData can
be slow.  I had assumed otherwise.

-Darin


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 1:39 PM, Oliver Hunt oli...@apple.com wrote:


 On Apr 16, 2012, at 1:12 PM, Darin Fisher da...@chromium.org wrote:

 Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
 more precise issue.

 On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt oli...@apple.com wrote:

 Could someone construct a demonstration of where the read back of the
 imagedata takes longer than a runloop cycle?


 I bet this would be fairly easy to demonstrate.


 Then by all means do :D



Here's an example.

Take http://ie.microsoft.com/testdrive/Performance/FishIETank/, and apply
the following diff (changing the draw function):

BEGIN DIFF
--- fishie.htm.orig 2012-04-16 14:23:29.224864338 -0700
+++ fishie.htm  2012-04-16 14:21:38.115489276 -0700
@@ -177,10 +177,17 @@
 // Draw each fish
 for (var fishie in fish) {
 fish[fishie].swim();
 }

+
+if (window.read_back) {
+var data = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
+var x = data[0];  // force readback
+}
+
+
//draw fpsometer with the current number of fish
 fpsMeter.Draw(fish.length);
 }

 function Fish() {
END DIFF

Running on a Mac Pro, with Chrome 19 (WebKit @r111385), with 1000 fish, I
get 60 FPS.  Setting read_back to true (using dev tools), drops it down to
30 FPS.

Using about:tracing (a tool built into Chrome), I can see that the read
pixels call is taking ~15 milliseconds to complete.  The implied GL flush
takes ~11 milliseconds.

The page was sized to 1400 x 1000 pixels.

-Darin






 You're asking for significant additional complexity for content authors,
 with a regression in general-case performance.  It would be good to see if
 it's possible to create an example, even if it's not something any sensible
 author would do, where there is a performance improvement.

 Remember, the application is only marginally better when it's not
 painting due to waiting for a runloop cycle than it is when blocked waiting
 on a graphics flush.


 You can do a lot of other things during this time.  For example, you can
 prepare the next animation frame.  You can run JavaScript garbage
 collection.

 Also, it is common for a browser thread to handle animations for multiple
 windows.  If you have animations going in both windows, it would be nice
 for those animations to update in parallel instead of being serialized.


 None of which changes the fact that your actual developer now needs more
 complicated code, and has slower performance.  If I'm doing purely
 imagedata based code then there isn't anything to defer, and so all you're
 doing is adding runloop latency.  The other examples you give don't really
 apply either.

 Most imagedata-based code I've seen is not GC heavy.  If you're performing
 animations using CSS animations, etc., then I believe the browser is
 already able to hoist them onto another thread.  If you have animations in
 multiple windows, then Chrome doesn't have a problem because those windows
 are separate processes; and if you're not, then all you're doing is
 allowing one runloop of work (which may or may not be enough to get a paint
 done) before you start processing your ImageData.  I'm really not sure what
 it is that you're doing with your ImageData such that it takes so much less
 time than the canvas work, but it seems remarkable that there's some
 operation you can perform in JS over all the data returned that takes less
 time than the latency introduced by an async API.

 --Oliver


 -Darin




 Also, if the argument is wrt deferred rendering rather than GPU copyback,
 can we drop GPU related arguments from this thread?

 --Oliver

 On Apr 16, 2012, at 12:10 PM, Glenn Maynard gl...@zewt.org wrote:

 On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt oli...@apple.com wrote:

 I don't understand why adding a runloop cycle to any read seems like
 something that would introduce a much more noticeable delay than a memcpy.


 The use case is deferred rendering.  Canvas drawing calls don't need to
 complete synchronously (before the drawing call returns); they can be
 queued, so API calls return immediately and the actual draws can happen in
 a thread or on the GPU.  This is exactly like OpenGL's pipelining model
 (and might well be implemented using it, on some platforms).

 The problem is that if you have a bunch of that work pipelined, and you
 perform a synchronous readback, you have to flush the queue.  In OpenGL
 terms, you have to call glFinish().  That might take long enough to cause a
 visible UI hitch.  By making the readback asynchronous, you can defer the
 actual operation until the operations before it have been completed, so you
 avoid any such blocking in the UI thread.


  I also don't understand what makes reading from the GPU so expensive
 that adding a runloop cycle is necessary for good perf, but it's
 unnecessary for a write.


 It has nothing

Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 2:06 PM, Maciej Stachowiak m...@apple.com wrote:


 On Apr 16, 2012, at 12:10 PM, Glenn Maynard wrote:

 On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt oli...@apple.com wrote:

 I don't understand why adding a runloop cycle to any read seems like
 something that would introduce a much more noticeable delay than a memcpy.


 The use case is deferred rendering.  Canvas drawing calls don't need to
 complete synchronously (before the drawing call returns); they can be
 queued, so API calls return immediately and the actual draws can happen in
 a thread or on the GPU.  This is exactly like OpenGL's pipelining model
 (and might well be implemented using it, on some platforms).

 The problem is that if you have a bunch of that work pipelined, and you
 perform a synchronous readback, you have to flush the queue.  In OpenGL
 terms, you have to call glFinish().  That might take long enough to cause a
 visible UI hitch.  By making the readback asynchronous, you can defer the
 actual operation until the operations before it have been completed, so you
 avoid any such blocking in the UI thread.


  I also don't understand what makes reading from the GPU so expensive
 that adding a runloop cycle is necessary for good perf, but it's
 unnecessary for a write.


 It has nothing to do with how expensive the GPU read is, and everything to
 do with the need to flush the pipeline.  Writes don't need to do this; they
 simply queue, like any other drawing operation.


 Would the async version still require a flush and immediate readback if
 you do any drawing after the get call but before the data is returned?


I think it would not need to.  It would just return a snapshot of the state
of the canvas up to the point where the asyncGetImageData call was made.
 This makes sense if you consider both draw calls and asyncGetImageData
calls being put on the same work queue (without any change in their
respective order).
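
The queue-ordering point can be made concrete with a small simulation: if draw calls and readback requests land on one FIFO work queue, a readback observes exactly the draws issued before it. This is an illustrative sketch only; WorkQueue, asyncGetImageData, and colors-as-pixels are stand-ins, not browser internals.

```javascript
// Draw calls and readbacks share one FIFO queue, so a readback sees only
// the draws issued before it -- a snapshot of the canvas at call time.
class WorkQueue {
  constructor() { this.ops = []; this.pixels = []; }
  draw(color) { this.ops.push(() => this.pixels.push(color)); }
  asyncGetImageData(callback) {
    // Queued like any other op: no flush, no blocking of the caller.
    this.ops.push(() => callback(this.pixels.slice()));
  }
  flush() { for (const op of this.ops) op(); this.ops = []; }
}

const q = new WorkQueue();
let snapshot = null;
q.draw('red');
q.asyncGetImageData(data => { snapshot = data; });
q.draw('blue');        // issued after the readback request
q.flush();             // the background thread drains the queue in order
console.log(snapshot); // ['red'] -- 'blue' is not visible in the snapshot
```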

-Darin


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 2:57 PM, Oliver Hunt oli...@apple.com wrote:


 On Apr 16, 2012, at 2:34 PM, Darin Fisher da...@chromium.org wrote:

  On Mon, Apr 16, 2012 at 1:39 PM, Oliver Hunt oli...@apple.com wrote:
 
 
  On Apr 16, 2012, at 1:12 PM, Darin Fisher da...@chromium.org wrote:
 
  Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
  more precise issue.
 
  On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt oli...@apple.com wrote:
 
  Could someone construct a demonstration of where the read back of the
  imagedata takes longer than a runloop cycle?
 
 
  I bet this would be fairly easy to demonstrate.
 
 
  Then by all means do :D
 
 
 
  Here's an example.
 
  Take http://ie.microsoft.com/testdrive/Performance/FishIETank/, and
 apply
  the following diff (changing the draw function):
 
  BEGIN DIFF
  --- fishie.htm.orig 2012-04-16 14:23:29.224864338 -0700
  +++ fishie.htm  2012-04-16 14:21:38.115489276 -0700
  @@ -177,10 +177,17 @@
  // Draw each fish
  for (var fishie in fish) {
  fish[fishie].swim();
  }
 
  +
  +if (window.read_back) {
  +var data = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
  +var x = data[0];  // force readback
  +}
  +
  +
 //draw fpsometer with the current number of fish
  fpsMeter.Draw(fish.length);
  }
 
  function Fish() {
  END DIFF
 
  Running on a Mac Pro, with Chrome 19 (WebKit @r111385), with 1000 fish, I
  get 60 FPS.  Setting read_back to true (using dev tools), drops it down
 to
  30 FPS.
 
  Using about:tracing (a tool built into Chrome), I can see that the read
  pixels call is taking ~15 milliseconds to complete.  The implied GL flush
  takes ~11 milliseconds.
 
  The page was sized to 1400 x 1000 pixels.

 How does that compare to going through the runloop -- how long does it
 take to get from that point to a timeout being called if you do var start =
 new Date; setTimeout(function() {console.log(new Date - start);}, 0);
 ?


The answer is ~0 milliseconds.  I know this because without the
getImageData call, the frame rate is 60 FPS.  The page calls the draw()
function from an interval timer that has a period of 16.7 milliseconds.
 The trace indicates that nearly all of that budget is used up prior to the
getImageData() call that I inserted.




 This also ignores that possibility that in requesting the data, i probably
 also want to do some processing on the data, so for the sake of simplicity
 how long does it take to subsequently iterate through every pixel and set
 it to 0?


That adds about 44 milliseconds.  I would hope that developers would either
perform this work in chunks or pass ImageData.data off to a web worker for
processing.
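
A sketch of the chunked approach, so that no single task holds the main thread for long; a plain Uint8Array stands in for ImageData.data here, and the chunk size is arbitrary.

```javascript
// Process RGBA pixel data in chunks, yielding to the event loop between
// chunks so other work (painting, input handling) can run in between.
function processInChunks(data, chunkSize, perPixel, done) {
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, data.length);
    for (; i < end; i += 4) perPixel(data, i); // 4 bytes per RGBA pixel
    if (i < data.length) setTimeout(step, 0);  // yield, then continue
    else done();
  }
  step();
}

const pixels = new Uint8Array(16);             // 4 pixels, initially zero
processInChunks(pixels, 8, (d, i) => { d[i] = 255; }, () => {
  console.log(pixels[0], pixels[4]);           // 255 255
});
```

A worker-based variant would instead post ImageData.data to a Worker and receive the processed buffer back in a message event.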



 Remember the goal of making this asynchronous is to improve performance,
 so the 11ms of drawing does have to occur at some point, you're just hoping
 that by making things asynchronous you can mask that.  But I doubt you
 would see an actual improvement in wall clock performance.


The 11 ms of drawing occurs on a background thread.  Yes, that latency
exists, but it doesn't have to block the main thread.

Let me reiterate the point I made before.  There can be multiple web pages
sharing the same main thread.  (Even in Chrome this can be true!)  Blocking
one web page has the effect of blocking all web pages that share the same
main thread.

It is not nice for one web page to jank up the browser's main thread and as
a result make other web pages unresponsive.




 I also realised something else that I had not previously considered -- if
 you're doing bitblit based sprite movement the complexity goes way up if
 this is asynchronous.


I don't follow.  Can you clarify?

Thanks,
-Darin


Re: [whatwg] Proposal for non-modal versions of modal prompts

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 2:03 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Mon, Apr 16, 2012 at 1:52 PM, Darin Fisher da...@chromium.org wrote:
  On Mon, Apr 16, 2012 at 1:18 PM, Maciej Stachowiak m...@apple.com
 wrote:
  Con: Encourages poor HI design (since these stock dialogs should almost
  never be used).
 
  That being said, I find in-page UI less objectionable than a pop-up
 alert,
  but in that case I'm not sure it makes sense to overload the existing
 API.
  It would be better to make new methods so feature testing is possible.
 Even
  given all that, I'm not confident of the value add over dialog.
 
  It seems like poor HI design is rather subjective.  Some might prefer
 the
  OS-native look-and-feel of these simple dialogs.

 I think you'll have a hard time finding people who prefer that. ^_^

  Good point about feature testing.  I'd be OK with
  async{Alert,Confirm,Prompt} or whatever name variant we prefer.
 
  You don't see much value in the simplicity of having these methods be
  provided by the platform?  It seems like dialog requires much more code
  to setup.

 Hixie provided (in another thread) an example of the code required for
 dialog that was feature-equivalent to popping a prompt.  The
 difference is minimal.

 ~TJ



Oh, indeed he did.  Using form and input inside a dialog to create
simple dialogs is a nice idea.  I suppose the UA stylesheet could have some
extra rules to make that have a decent default rendering.  Hmm...

I'm starting to care a bit less about async{Alert,Confirm,Prompt}.
 Although,
it still bugs me that the path of least resistance for simple dialogs
will remain
good old thread-blocking modal alert/confirm/prompt :-(

-Darin


Re: [whatwg] keepalive attribute on iframe

2012-04-16 Thread Darin Fisher
Can you hide this behind adoptNode just as we did for magic iframe?  The
nice thing about adoptNode is that the browser gets told both the source and
destination parent nodes.  This way there is never a disconnected state.

So long as we unload when moving between documents, we should be pretty
safe as far as the issues which plagued magic iframe are concerned.

-Darin


On Thu, Apr 12, 2012 at 12:35 PM, Ojan Vafai o...@chromium.org wrote:

 We should add a keepalive attribute to iframes that prevents iframes from
 being unloaded/reloaded when removed from or appended to a document.
 Similarly, a disconnected iframe with keepalive should load. If the
 keepalive attribute is removed from a disconnected iframe, then it should
 unload.

 I'm not terribly happy with the name 'keepalive', but I can't think of
 anything better at the moment.

 As iframes increasingly become the standard way of achieving certain tasks
 (e.g. sandboxing), it's increasingly important to be able to move them
 around in the DOM. Right now, to achieve this sort of keepalive behavior,
 you have to keep the iframe always appended to the document and position it
 absolutely as the document changes.

 Ojan



Re: [whatwg] Proposal for non-modal versions of modal prompts

2012-03-29 Thread Darin Fisher
On Wed, Mar 21, 2012 at 8:03 PM, Maciej Stachowiak m...@apple.com wrote:


 On Mar 21, 2012, at 7:54 PM, Maciej Stachowiak wrote:

 
  dialog will give a better user experience than even a non-modal
 version of window.confirm() or window.alert(). Dialogs that are fully
 in-page

 Oops, got cut off here. What I meant to say is something like dialogs
 that are fully in-page are the emerging standard for high-quality
 page-modal prompting.


Non-blocking window.{alert,confirm,prompt} would most likely be rendered by
UAs as in-page overlays / tab-scoped dialogs.  This is what we would do in
Chrome, and it seems like others would do the same given the prevalence of
the standard window.{alert,confirm,prompt} being implemented in a
tab-scoped manner already by some browsers (albeit with bugs).

I think people use alert, confirm and prompt in part because they are so
easy to use.  People who choose window.{alert,confirm,prompt} probably
don't care about loss of customization or else they would roll their own
dialogs.

Why not provide less sucky versions of those common dialogs?

Benefit:  Less code for simple dialogs.
Con:  Another web platform API to standardize.

-Darin




 I should add that this could be partly for path-dependent reasons, and
 that if other technologies had been available, authors might not have
 resorted to in-page modality with overlays. But I think the key missing
 enabler was not asynchrony but rather the ability to fully control the UI,
 layout and available commands of the modal experience.

 
  alert() is mostly used either by sites with a low-quality user
 experience, or as a non-production debugging aid. In both cases, authors
 who care about the user experience will use dialog or a JS-implemented
 lightbox style dialog. And authors who do not care about user experience,
 or who are doing a quick debugging hack in non-production code, will use
 old-fashioned blocking alert/confirm/prompt. Thus, I am not sure there is
 really a meaningful audience for the non-blocking editions of these calls.
 
  Regards,
  Maciej
 
 
 
 
 




Re: [whatwg] Proposal for non-modal versions of modal prompts

2012-03-29 Thread Darin Fisher
On Thu, Mar 29, 2012 at 1:10 AM, Darin Fisher da...@chromium.org wrote:



 On Wed, Mar 21, 2012 at 8:03 PM, Maciej Stachowiak m...@apple.com wrote:


 On Mar 21, 2012, at 7:54 PM, Maciej Stachowiak wrote:

 
  dialog will give a better user experience than even a non-modal
 version of window.confirm() or window.alert(). Dialogs that are fully
 in-page

 Oops, got cut off here. What I meant to say is something like dialogs
 that are fully in-page are the emerging standard for high-quality
 page-modal prompting.


 Non-blocking window.{alert,confirm,prompt} would most likely be rendered
 by UAs as in-page overlays / tab-scoped dialogs.  This is what we would do
 in Chrome, and it seems like others would do the same given the prevalence
 of the standard window.{alert,confirm,prompt} being implemented in a
 tab-scoped manner already by some browsers (albeit with bugs).

 I think people use alert, confirm and prompt in part because they are so
 easy to use.  People who choose window.{alert,confirm,prompt} probably
 don't care about loss of customization or else they would roll their own
 dialogs.

 Why not provide less sucky versions of those common dialogs?

 Benefit:  Less code for simple dialogs.
 Con:  Another web platform API to standardize.

 -Darin



Also, there is a downside to the current convention of custom drawing modal
dialogs.  Web pages that mash-up content from varied sources would need to
have some convention for queuing up dialog requests.  Ideally, modal
dialogs should be shown in FIFO order rather than all at the same time.
 This seems like a tricky problem.  It seems like something the platform
could help with.  I believe the dialog proposal helps here.  I think
non-blocking alert, confirm and prompt helps in a similar vein.
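
One way the platform (or a library) could sequence such dialogs is a page-level FIFO queue, sketched below. showDialog is a hypothetical hook that renders a single in-page overlay and reports the user's answer; here a stub auto-confirms so the flow is visible.

```javascript
// Queue dialog requests and show them one at a time, in FIFO order,
// so mashed-up components never display overlapping prompts.
const dialogQueue = [];
let dialogShowing = false;

function asyncConfirm(message, showDialog, callback) {
  dialogQueue.push({ message, showDialog, callback });
  pumpDialogs();
}

function pumpDialogs() {
  if (dialogShowing || dialogQueue.length === 0) return;
  dialogShowing = true;
  const { message, showDialog, callback } = dialogQueue.shift();
  showDialog(message, result => {
    callback(result);
    dialogShowing = false;
    pumpDialogs(); // show the next queued dialog, if any
  });
}

// Stub UI that records what was shown and immediately confirms.
const shown = [];
const stubUI = (msg, reply) => { shown.push(msg); reply(true); };
asyncConfirm('Delete file?', stubUI, ok => console.log('first:', ok));
asyncConfirm('Really?', stubUI, ok => console.log('second:', ok));
console.log(shown); // both messages, in the order they were requested
```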

-Darin







 I should add that this could be partly for path-dependent reasons, and
 that if other technologies had been available, authors might not have
 resorted to in-page modality with overlays. But I think the key missing
  enabler was not asynchrony but rather the ability to fully control the UI,
 layout and available commands of the modal experience.

 
   alert() is mostly used either by sites with a low-quality user
  experience, or as a non-production debugging aid. In both cases, authors
 who care about the user experience will use dialog or a JS-implemented
 lightbox style dialog. And authors who do not care about user experience,
 or who are doing a quick debugging hack in non-production code, will use
 old-fashioned blocking alert/confirm/prompt. Thus, I am not sure there is
 really a meaningful audience for the non-blocking editions of these calls.
 
  Regards,
  Maciej
 
 
 
 
 





Re: [whatwg] Proposal for non-modal versions of modal prompts

2012-03-20 Thread Darin Fisher
On Tue, Mar 20, 2012 at 4:05 PM, Glenn Maynard gl...@zewt.org wrote:

 On Mon, Mar 19, 2012 at 3:38 PM, Jochen Eisinger joc...@chromium.org
 wrote:

  I'd like to put forward a proposal for extending the modal prompts
  (alert/confirm/prompt) with an optional callback parameter. If the
 optional
  callback parameter is present, the javascript execution would resume
  immediately. The callback will be invoked when the dialog that doesn't
 need
  to be browser modal now, is closed.
 

 I'm not sure this accomplishes anything.  It won't discourage people from
 using the blocking dialog calls, because generally the entire reason people
 use them is because the blocking is convenient.  People who don't need that
 are likely to just use any old dialog overlay script that they can style to
 match their page.


While it would be nice to completely discourage use of blocking alert()
calls,
I don't think that is really the goal here.  The goal is to provide a super
simple
non-blocking set of dialog calls.  The alternative requires a fair bit of
code to
construct an overlay, etc.

-Darin


Re: [whatwg] Fullscreen Update

2011-10-19 Thread Darin Fisher
On Tue, Oct 18, 2011 at 9:40 PM, Anne van Kesteren ann...@opera.com wrote:

 1) How much should UI-based and API-based fullscreen interact? To me it
 seems nice if pressing F11 would also give you fullscreenchange events and
 that Document.fullscreen would yield true. Why would you not want to give
 the same presentation via native activation and API-based activation? Of
 course when you activate it UI-wise, navigation should not exit it. For
 native video controls the case seems clearer that they should work using
 this API.


Agreed.  What should the target be for the fullscreenchange events in the
native activation case?  Should it be the documentElement or perhaps the
window?  Since the fullscreen attribute exists on Document instead of
Window, it seems like it might be odd to dispatch the fullscreenchange event
to the window.  However, in the native activation case, you could really
argue that it is the window that is being presented fullscreen and not the
document since fullscreen survives navigation.





 2) Chris brought forward the case of nesting. You have a fullscreen
 presentation (lets assume API-based activated for now) and in that
 presentation there's some video that the presenter wants to display
 fullscreen (lets assume the video player is a custom widget with API-based
 fullscreen activation for now). Once the presenter exits displaying the
 video fullscreen, the presentation should still be fullscreen.

 Initially this was brought up with the video being hosted in a separate
 descendant document, but the video could be in the same document as well.
 roc suggested a model that works when you have separate documents and it
 could be made to work for the single document case too, as long as the level
  of nesting is no larger than required for the presentation scenario
 mentioned above.

 Is that an acceptable limitation? Alternatively we could postpone the
 nested fullscreen scenario for now (i.e. make requestFullscreen fail if
 already fullscreen).


+1 for punting on the nested case.


-Darin


Re: [whatwg] Entering fullscreen when already in fullscreen mode [was: Fullscreen]

2011-10-18 Thread Darin Fisher
On Tue, Oct 18, 2011 at 7:24 AM, Glenn Maynard gl...@zewt.org wrote:

 On Tue, Oct 18, 2011 at 3:55 AM, Anne van Kesteren ann...@opera.com
 wrote:

  However, I just realized this does not work for the single document case.
  You have a video player website and you host your videos in video or
 maybe
  a div container. So your video player website is displayed fullscreen,
  because your users like the fullscreen application feel from their OS,
 but
  then they click to display one of those videos fullscreen and once they
 hit
  exit the video player website is also no longer displayed fullscreen.
 

 Do you mean the user-fullscreen mode that most browsers enter with F11?
 That's a separate piece of state entirely, since it affects the whole
 browser window, not individual tabs.  (You should still be able to
 enterFullscreen and exitFullscreen, to set and clear the fullscreen
 element.  It just wouldn't change the browser window's actual fullscreen
 status.)


In Chrome, the user-fullscreen mode you get when you press F11 places the
active tab into fullscreen mode.  It is interesting to wonder how this API
should interact with user-fullscreen mode.  However, maybe that is best left
to the UAs and shouldn't be covered by this spec.

-Darin


[whatwg] Entering fullscreen when already in fullscreen mode [was: Fullscreen]

2011-10-17 Thread Darin Fisher
Hi Anne,

Thanks for working on this spec!  I have more questions, but I'll just start
with one.  If enterFullscreen() is called when the browsing context is
already being displayed fullscreen, what should happen?  (It looks like
Safari 5.1 ignores the second call to webkitRequestFullScreen.)

I also find it curious that there is a bit of a dead-time between the
request to enter fullscreen and the fullscreenchange event (nit:
fullscreenchange instead of fullscreenchanged to be consistent, right?).
 It appears that JS cannot request to cancel out of fullscreen mode until
the fullscreenchange event is generated (i.e., until the fullscreen flag is
set).  It could cause pain for developers if there is no guaranteed response
to enterFullscreen().  Did my request succeed, did it fail?  What happened?
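
One way to guarantee a response would be to pair the request with its change/error events, along the lines of the sketch below. A plain EventTarget stands in for the document, and the 'fullscreenerror' event name is an assumption for illustration, not something the draft defines.

```javascript
// Resolve when fullscreen is entered, reject if the request fails, so the
// caller always gets a definite answer to an enter-fullscreen request.
function requestWithResponse(target, issueRequest) {
  return new Promise((resolve, reject) => {
    function cleanup() {
      target.removeEventListener('fullscreenchange', onChange);
      target.removeEventListener('fullscreenerror', onError);
    }
    const onChange = () => { cleanup(); resolve(); };
    const onError = () => { cleanup(); reject(new Error('request denied')); };
    target.addEventListener('fullscreenchange', onChange);
    target.addEventListener('fullscreenerror', onError);
    issueRequest();
  });
}

// Stub usage: the "request" succeeds by firing fullscreenchange.
const doc = new EventTarget();
let entered = false;
requestWithResponse(doc, () => {
  // a real page would call something like element.enterFullscreen() here
  doc.dispatchEvent(new Event('fullscreenchange'));
}).then(() => { entered = true; console.log('entered fullscreen'); });
```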

-Darin


On Fri, Oct 14, 2011 at 9:27 PM, Anne van Kesteren ann...@opera.com wrote:

 I wrote up a draft:

  http://dvcs.w3.org/hg/fullscreen/raw-file/tip/Overview.html

 Defining when exactly the fullscreen enabled flag is set for Document
 objects I will leave up to HTML. As well as defining the allowfullscreen
 attribute. Presumably it should be set for Document objects associated with
 the top-level browsing context and descendant browsing context as long as
 their browsing context container has the aforementioned attribute set.

 If we want to transition from fullscreen when navigating, HTML can define
 that as well, neatly integrated in the navigation section. The Model
 section of the Fullscreen specification has an appropriate hook.


 I have not added the key restrictions given earlier emails. Unfortunately
 there was not that much feedback on them, but maybe this draft will help on
 that front!


 I went with fullscreen rather than full screen as that seemed cleaner
 and easier to type. I also used enter and exit rather than request and
 cancel as they seemed somewhat nicer too. I'm less attached to this latter
 change though.


 --
 Anne van Kesteren
 http://annevankesteren.nl/



Re: [whatwg] Entering fullscreen when already in fullscreen mode [was: Fullscreen]

2011-10-17 Thread Darin Fisher
OK, I can't help myself.  One more question:

What should happen if the fullscreen browsing context is navigated?  What
happens if the document, containing the fullscreen element, is destroyed?
 Perhaps it should bounce out of fullscreen mode?

-Darin


On Mon, Oct 17, 2011 at 3:55 PM, Darin Fisher da...@chromium.org wrote:

 Hi Anne,

 Thanks for working on this spec!  I have more questions, but I'll just
 start with one.  If enterFullscreen() is called when the browsing context is
 already being displayed fullscreen, what should happen?  (It looks like
 Safari 5.1 ignores the second call to webkitRequestFullScreen.)

 I also find it curious that there is a bit of a dead-time between the
 request to enter fullscreen and the fullscreenchange event (nit:
 fullscreenchange instead of fullscreenchanged to be consistent, right?).
  It appears that JS cannot request to cancel out of fullscreen mode until
 the fullscreenchange event is generated (i.e., until the fullscreen flag is
 set).  It could cause pain for developers if there is no guaranteed response
 to enterFullscreen().  Did my request succeed, did it fail?  What happened?

 -Darin


  On Fri, Oct 14, 2011 at 9:27 PM, Anne van Kesteren ann...@opera.com wrote:

 I wrote up a draft:

  http://dvcs.w3.org/hg/fullscreen/raw-file/tip/Overview.html

 Defining when exactly the fullscreen enabled flag is set for Document
 objects I will leave up to HTML. As well as defining the allowfullscreen
 attribute. Presumably it should be set for Document objects associated with
 the top-level browsing context and descendant browsing context as long as
 their browsing context container has the aforementioned attribute set.

 If we want to transition from fullscreen when navigating, HTML can define
 that as well, neatly integrated in the navigation section. The Model
 section of the Fullscreen specification has an appropriate hook.


 I have not added the key restrictions given earlier emails. Unfortunately
 there was not that much feedback on them, but maybe this draft will help on
 that front!


 I went with fullscreen rather than full screen as that seemed cleaner
 and easier to type. I also used enter and exit rather than request and
 cancel as they seemed somewhat nicer too. I'm less attached to this latter
 change though.


 --
 Anne van Kesteren
 http://annevankesteren.nl/





Re: [whatwg] createObjectURL(stream) protocol issue

2011-08-12 Thread Darin Fisher
Putting implementation details aside, I agree that it is a bit unfortunate
to refer to a stream as a blob.  So far, blobs have always referred to
static, fixed-size things.

This function was originally named createBlobURL, but it was renamed
createObjectURL precisely because we imagined it being useful to pass things
that were not blobs to it.  It seems reasonable that passing a Foo object to
createObjectURL might mint a different URL type than what we would mint for
a Bar object.

It could also be the case that using blob: for referring to Blobs was
unfortunate.  Maybe we do not really need separate URL schemes for static,
fixed size things and streams.

Hmm...
-Darin



On Thu, Aug 11, 2011 at 2:13 AM, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) 
tom...@google.com wrote:

 Would it be possible to give the associated URL for a mediastream to have
 its own protocol, for example mediastream:, instead of the proposed blob:?

 window . URL . createObjectURL(stream)
 Mints a Blob URL to refer to the given MediaStream.


 This would tremendously help the implementation.

 Thanks in advance,
 Tommy


 --
 Tommy Widenflycht, Senior Software Engineer
 Google Sweden AB, Kungsbron 2, SE-11122 Stockholm, Sweden
 Org. nr. 556656-6880
 And yes, I have to include the above in every outgoing email according to
 EU
 law.



Re: [whatwg] a rel=attachment

2011-07-16 Thread Darin Fisher
rel=anything makes me sad as it will mean more UA sniffing.  The fallback
behavior of loading the href inline could be dangerous.
On Jul 15, 2011 5:38 PM, Tantek Çelik tan...@cs.stanford.edu wrote:


Re: [whatwg] a rel=attachment

2011-07-15 Thread Darin Fisher
On Fri, Jul 15, 2011 at 1:09 PM, Jonas Sicking jo...@sicking.cc wrote:

 2011/7/15 Ian Fette (イアンフェッティ) ife...@google.com:
  2011/7/15 Jonas Sicking jo...@sicking.cc
 
  2011/7/14 Ian Fette (イアンフェッティ) ife...@google.com:
   Many websites wish to offer a file for download, even though it could
   potentially be viewed inline (take images, PDFs, or word documents as
 an
   example). Traditionally the only way to achieve this is to set a
   content-disposition header. *However, sometimes it is not possible for
  the
   page author to have control over the response headers sent by the
   server.*(A related example is offline apps, which may wish to provide
   the user with
   a way to download a file stored locally using the filesystem API but
  again
   can't set any headers.) It would be nice to provide the page author
 with
  a
   client side mechanism to trigger a download.
  
   After mulling this over with some application developers who are
 trying
  to
   use this functionality, it seems like adding a rel attribute to the
 a
   tag would be a straightforward, minimally invasive way to address this
  use
   case. a rel=attachment href=blah.pdf would indicate that the browser
   should treat this link as if the response came with a
  content-disposition:
   attachment header, and offer to download/save the file for the user.
 
  We've discussed a different solution to the same problem at mozilla.
  The solution we discussed was allowing FileSaver to in addition to
  taking a blob argument, allow it to take a url argument.
 
  One concern which was brought up was the ability to cause the user to
  download a file from a third party site. I.e. this would allow
  evil.com to trick the user into downloading an email from the users
  webmail, or download a page from their bank which contains all their
  banking information. It might be easier to then trick the user into
  re-uploading the saved file to evil.com since from a user's
  perspective, it looked like the file came from evil.com
 
  Another possible attack goes something like:
  1. evil.com tricks the user into downloading sensitive data from
 bank.com
  2. evil.com then asks the user to download a html from evil.com and
  open the newly downloaded file
  3. the html file contains script which reads the contents from the
  file downloaded from bank.com and sends it back to evil.com
 
  Step 1 and 2 require the user to answer yes to a dialog displayed by
  the browser. However it's well known that users very often hit
  whichever button they suspect will make the dialog go away, rather
  than actually read the contents of the dialog.
  Step 3 again requires the user to answer yes to a dialog displayed
  by the browser in at least some browsers. Same caveat applies though.
 
  One very simple remedy to this would be to require CORS opt-in for
  cross-site downloads. For same-site downloads no special opt-in would
  be required of course.
 
  It's also possible that it would be ok to do this without any opt-ins
  since there are a good number of actions that the user has to take in
  all these scenarios. Definitely something that I'd be ok with
  discussing with our security team.
 
  Tentatively I would feel safer with the CORS option though. And again,
  for same-site downloads this isn't a problem at all, but I suspect
  that in many cases the file to be downloaded is hosted on a separate
  server.
 
  Oh, and I don't have strong opinions at this time on if rel=attachment
  or FileSaver or both should be the way to trigger this functionality.
 
  / Jonas
 
 
  I agree FileSaver is useful and has its place, but I don't think it
 negates
  the need for something like rel=attachment or download=filename. For one,
  FileSaver currently operates on blobs and as you mention would have to be
  modified to handle URLs or streams more generally. Second, it would force
  developers to use javascript links and/or set up click listeners and so
  forth, which could be annoying for users (losing the ability to copy the
 URL
  etc).

 As stated, I don't have a strong preference here. I suspect ultimately
 we'll end up wanting both a markup based and an API based solution
 here.

  I guess the interesting question is If the response would not have
  otherwise triggered a download, and the request is cross-origin, should
 that
  require CORS and personally I would say no, this is still a remote
 enough
  concern that I would not worry about it.

 Indeed, that is the interesting question.

 I know that I would personally feel a lot more comfortable if the site
 opted in to allowing downloads of the resource in question. But it's
 quite possible that I'm overly paranoid.

 Though one thing to keep in mind is sites that explicitly state that a
 resource should *not* reach the users disk. This is today often done
 using Cache-Control: no-store. Seems scary to allow such content to
 be saved based on a cross-site request.

 / Jonas



This security concern is very 

Re: [whatwg] a rel=attachment

2011-07-14 Thread Darin Fisher
On Thu, Jul 14, 2011 at 12:36 PM, Glenn Maynard gl...@zewt.org wrote:

 2011/7/14 Ian Fette (イアンフェッティ) ife...@google.com

  Many websites wish to offer a file for download, even though it could
  potentially be viewed inline (take images, PDFs, or word documents as an
  example). Traditionally the only way to achieve this is to set a
  content-disposition header. *However, sometimes it is not possible for
 the
 

 This has been raised a couple times:

 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-July/027455.html

  http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-April/031190.html
  (thread was derailed partway through)

 I've wanted this several times and I'm strongly in favor of it.


Yes, it seems very useful.




 After mulling this over with some application developers who are trying to
  use this functionality, it seems like adding a rel attribute to the a
  tag would be a straightforward, minimally invasive way to address this
 use
  case. a rel=attachment href=blah.pdf would indicate that the browser
 

 This isn't enough; the filename needs to be overridable as well, as it is
 with Content-Disposition.  My recommendation has been:

 a href=image.jpg download
 a href=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15.jpg download=picture.jpg

 where the first is equivalent to Content-Disposition: attachment, and the
 second is equivalent to Content-Disposition: attachment;
 filename=picture.jpg.


This is an interesting variation!  I like that it addresses the issue of
providing a name for the download.  Using the term download here is also
nice.

I know that there is also a proposal to add a FileSaver API.  I like that as
well, _but_ it is very nice to be able to simply decorate an anchor tag.  In
many cases, that will be a lot simpler for developers than using JavaScript
to construct a FileSaver.  I think it makes sense to implement both.

On the other thread, Michal Zalewski raised a concern about giving
client-side JS the power to name files.  It looks like that discussion did
not conclude, but I will note that even without the proposal here to name
the download, an attacker could still have control over the downloaded
filename.  They could either manufacture a file using the FileSystem API,
and then get a filesystem: URL to that file, or they could simply use a
server to produce an URL with a C-D header of their choosing.  It seems like
we are well past the point of trying to limit a web page author's ability to
influence the downloaded filename.  Fortunately, however, user agents can
protect the user from potentially harmful downloads.  Chrome for instance
asks the user to confirm the download of a EXE file before we ever write a
file to the filesystem with a .exe extension.

-Darin


Re: [whatwg] a rel=attachment

2011-07-14 Thread Darin Fisher
On Thu, Jul 14, 2011 at 1:32 PM, Tantek Çelik tan...@cs.stanford.edu wrote:

 2011/7/14 Darin Fisher da...@chromium.org:
  On Thu, Jul 14, 2011 at 12:36 PM, Glenn Maynard gl...@zewt.org wrote:
 
  2011/7/14 Ian Fette (イアンフェッティ) ife...@google.com
 
   Many websites wish to offer a file for download, even though it could
   potentially be viewed inline (take images, PDFs, or word documents as
 an
   example). Traditionally the only way to achieve this is to set a
   content-disposition header. *However, sometimes it is not possible for
  the
  
 
  This has been raised a couple times:
 
 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-July/027455.html
 
 
  http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-April/031190.html
   (thread was derailed partway through)
 
  I've wanted this several times and I'm strongly in favor of it.
 
 
  Yes, it seems very useful.

 Indeed, and has been pointed out, already specified (since 2005) and
 implemented as well for HTML:

 http://microformats.org/wiki/rel-enclosure

 re-using the enclosure term from the Atom format (thus minimal
 bikeshedding)


  After mulling this over with some application developers who are trying
 to
   use this functionality, it seems like adding a rel attribute to the
 a
   tag would be a straightforward, minimally invasive way to address this
  use
   case. <a rel=attachment href=blah.pdf> would indicate that the browser
  
 
  This isn't enough; the filename needs to be overridable as well, as it
 is
  with Content-Disposition.  My recommendation has been:
 
  <a href=image.jpg download>
  <a href=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15.jpg
 download=picture.jpg>
 
  where the first is equivalent to Content-Disposition: attachment, and
 the
  second is equivalent to Content-Disposition: attachment;
  filename=picture.jpg.
 
 
  This is an interesting variation!  I like that it addresses the issue of
  providing a name for the download.  Using the term download here is
 also
  nice.

 Agreed.

 I've captured the suggestion on a brainstorming page:

 http://microformats.org/wiki/rel-enclosure-brainstorming

 Feel free to contribute or iterate.

 Thanks,

 Tantek


Why do you feel it is important to specify rel=enclosure in addition to the
download attribute?

Thanks,
-Darin


Re: [whatwg] a rel=attachment

2011-07-14 Thread Darin Fisher
On Thu, Jul 14, 2011 at 1:53 PM, Glenn Maynard gl...@zewt.org wrote:

 2011/7/14 Darin Fisher da...@chromium.org

 I know that there is also a proposal to add a FileSaver API.  I like that
 as well, _but_ it is very nice to be able to simply decorate an anchor tag.
  In many cases, that will be a lot simpler for developers than using
 JavaScript to construct a FileSaver.  I think it makes sense to implement
 both.


 FileSaver is useful in its own right, but it's not a great fit for this.
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-April/031398.html

 That reminds me of something download=filename can't do: assign a filename
 while leaving it inline, so save as and other operations can have a
 specified filename.  That would require two separate properties.  One case
 I've come across is img, where I want to display an image, but provide a
 different filename for save-as.  Separating the filename would allow this to
  be applied generically to both links and inline resources: <img
  src=f1d2d2f924e986ac86fdf7b36c94bcdf32beec15.jpg filename=picture.jpg>.

 In that case, rel=enclosure would probably make sense.


Yeah, that makes a lot of sense.  I'm fine with using rel=enclosure too.



 On the other thread, Michal Zalewski raised a concern about giving
 client-side JS the power to name files.


 That subthread just seemed to be asking whether browsers should implement
 Content-Disposition, which didn't seem relevant--they already have, for
 years.

 Separately, there was a security question raised about the ability to serve
 a file off of a third-party site with a different filename than was
 intended.  For example, uploading a file which is both an executable trojan
 and a valid JPEG to an image hosting site, and linking to it externally,
 overriding its filename to .EXE.  The question there isn't about being able
 to serve executables--you can always do that--but being able to serve
 executables that appear to be from the image hosting site.  Arguably, it
 could 1: cause users to trust the file because it appears to be from a site
 they recognize, or 2: cause the site to be blamed for the trojan.

 I mention it so people don't have to scour the previous threads for it, but
 I think it's uncompelling.  It just seems like something UI designers would
 need to take into consideration.  (In my opinion, the trust and blame for
 saving a file to disk should be applied to the host *linking* the file, and
 not from the site hosting the file, which makes the above irrelevant.)


Agreed.  I suspect that users will associate a download with whatever host
they see in the location bar.

-Darin


[whatwg] requesting clarification for a navigate with replacement enabled case

2010-04-07 Thread Darin Fisher
Case #1:

var f = document.createElement("iframe");
f.src = "http://foo.com/";
document.body.appendChild(f);



Case #2:

var f = document.createElement("iframe");
document.body.appendChild(f);
f.src = "http://foo.com/";


My interpretation of section 4.8.2 is that in case #1 the iframe should be
navigated with replacement enabled, and in case #2 the iframe should be
navigated without replacement enabled.

I am basing that on the following passage:

Furthermore, if the process the iframe attributes algorithm was invoked for
 the first time for this element (i.e. as a result of the element being
 inserted into a document), then any navigation required of the user agent in
 that algorithm must be completed with replacement enabled.


That passage only specifies that in case #1 the navigation be completed with
replacement enabled.  It does not apply to the assignment of src in case #2.
 I assume that means that the spec would have the frame navigated without
replacement enabled.
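The observable difference between the two readings can be written out as a
toy model; `BrowsingContext` and its `history` array are illustrative
stand-ins for the spec's session-history machinery, not a real API:

```javascript
// Toy model (illustrative only) of an iframe's browsing context. It starts
// with an about:blank entry; navigating "with replacement enabled"
// overwrites the current entry instead of appending a new one.
class BrowsingContext {
  constructor() { this.history = ["about:blank"]; }
  navigate(url, { replace = false } = {}) {
    if (replace) this.history[this.history.length - 1] = url;
    else this.history.push(url);
  }
}

// Case #1: src is set before insertion, so the first "process the iframe
// attributes" run navigates with replacement enabled.
const case1 = new BrowsingContext();
case1.navigate("http://foo.com/", { replace: true });

// Case #2, under a literal reading of the spec text: src is assigned after
// insertion, so the navigation appends a second entry.
const case2 = new BrowsingContext();
case2.navigate("http://foo.com/", { replace: false });

console.log(case1.history.length); // 1 -- about:blank was replaced
console.log(case2.history.length); // 2 -- about:blank is still reachable
```

The extra entry in case #2 is exactly what a user would notice: the back
button would return to the blank frame.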

I raise this issue because I observe that Firefox and IE treat case #1 and
#2 the same with respect to whether or not replacement is enabled.  They
enable it for both navigations to http://foo.com/.  WebKit based browsers
(the latest stable Chrome and Safari) also happen to agree, but that's
because they always navigate with replacement enabled when the src attribute
is set, which is a bug.  WebKit nightlies behave differently, and that is
how I stumbled upon this issue.

I believe the spec for src assignment should match the spec for
location.assign:

When the assign(url) method is invoked, the UA must resolve the argument,
 relative to the entry script's base URL, and if that is successful, must
 navigate the browsing context to the specified url. *If the browsing
 context's session history contains only one Document, and that was the
 about:blank Document created when the browsing context was created, then the
 navigation must be done with replacement enabled.*


Agreed?
-Darin


Re: [whatwg] HTML Cookie API

2010-02-26 Thread Darin Fisher
On Fri, Feb 26, 2010 at 10:56 AM, Diogo Resende drese...@thinkdigital.pt wrote:



  What about something like:
 
  document.pushCookies(function () {
 // cookies have been pushed to the js process
 var x = document.getCookie(x);
 // whatever...
  });
 
 
  This seems similar to Adam's proposed document.getAllCookies.
 
 
  -Darin

 No. pushCookies would be a way of pushing cookies to the current js and
 then you could call getCookie several times without defining a callback.
 It would be almost like:

document.observe("cookieload", myAppLoad)


Right.  My point was that you could implement pushCookies on top of Adam's
API.
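As a sketch of that layering (names hypothetical; a mock array stands in
for the browser's cookie store, and `getAllCookies` mimics the proposed
asynchronous API):

```javascript
// Hypothetical sketch: a pushCookies-style layer built on top of an
// asynchronous getAllCookies(callback). Not a spec API.
function makeCookieDocument(store) {
  let cache = null;
  return {
    getAllCookies(cb) {               // mock of the proposed async API
      setTimeout(() => cb(store.slice()), 0);
    },
    pushCookies(cb) {                 // snapshot cookies into this JS process
      this.getAllCookies(cookies => { cache = cookies; cb(); });
    },
    getCookie(name) {                 // synchronous once pushed
      if (cache === null) throw new Error("call pushCookies first");
      const found = cache.find(c => c.name === name);
      return found ? found.value : null;
    },
  };
}

const doc = makeCookieDocument([{ name: "x", value: "42" }]);
doc.pushCookies(() => {
  console.log(doc.getCookie("x"));    // logs: 42
});
```

The snapshot also makes the cache-coherency trade-off explicit: `getCookie`
answers from the moment `pushCookies` completed, not from the live jar.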

-Darin


Re: [whatwg] HTML Cookie API

2010-02-26 Thread Darin Fisher
On Fri, Feb 26, 2010 at 12:04 PM, Diogo Resende drese...@thinkdigital.pt wrote:



  No. pushCookies would be a way of pushing cookies to the
  current js and
  then you could call getCookie several times without defining a
  callback.
  It would be almost like:
 
 document.observe("cookieload", myAppLoad)
 
 
  Right.  My point was that you could implement pushCookies on top of
  Adam's API.
 
 
  -Darin

 Agree. Just like you could implement Adam's API on top of current
 browsers cookies spec :P



No, I don't think that is possible.  Adam's spec reveals a lot of extra
information that document.cookie does not return.  For example, it exposes
domain and expiry information.

But, I think your point was that it would be possible to simulate an
asynchronous API on top of a synchronous one.  I agree that is possible, but
it would not perform very well.

Regards,
-Darin


Re: [whatwg] HTML Cookie API

2010-02-25 Thread Darin Fisher
On Thu, Feb 25, 2010 at 6:54 AM, Diogo Resende drese...@thinkdigital.pt wrote:

 On Wed, 2010-02-24 at 11:21 -0800, Darin Fisher wrote:
  For reference, reading document.cookie has measurable performance cost
  in Chromium since the cookie jar lives in a process separate from the
  process running JavaScript.  We could have minimized this cost by
  caching the cookies locally, but then there are cache coherency
  issues.
 
 
  I think the cookie APIs should have been asynchronous from the start.
   Whenever an API is backed by I/O, asynchronous should be the rule.
 
 
  -Darin

 What about something like:

 document.pushCookies(function () {
// cookies have been pushed to the js process
var x = document.getCookie("x");
// whatever...
 });


This seems similar to Adam's proposed document.getAllCookies.

-Darin


Re: [whatwg] HTML Cookie API

2010-02-24 Thread Darin Fisher
An explicit deleteCookie method might also be nice.
-Darin

On Tue, Feb 23, 2010 at 8:56 PM, Adam Barth w...@adambarth.com wrote:

 The document.cookie API is kind of terrible.  Web developers shouldn't
 have to parse a cookie-string or prepare a properly formatted
 set-cookie-string.  Here's a proposal for an HTML cookie API that
 isn't as terrible:


 https://docs.google.com/Doc?docid=0AZpchfQ5mBrEZGQ0cDh3YzRfMTRmdHFma21kMghl=en

 I'd like to propose we include this API in a future version of HTML.
 As always, feedback welcome.

 Adam



Re: [whatwg] HTML Cookie API

2010-02-24 Thread Darin Fisher
For reference, reading document.cookie has measurable performance cost in
Chromium since the cookie jar lives in a process separate from the process
running JavaScript.  We could have minimized this cost by caching the
cookies locally, but then there are cache coherency issues.

I think the cookie APIs should have been asynchronous from the start.
 Whenever an API is backed by I/O, asynchronous should be the rule.

-Darin


On Wed, Feb 24, 2010 at 11:11 AM, Nicholas Zakas nza...@yahoo-inc.com wrote:

 I like the idea of creating an easier way to deal with cookies (which is
 why I wrote the YUI Cookie utility way back when). The thing that seems to
 be missing in your proposed API is what I consider to be the most common use
 case: retrieving the value of a single cookie. There's not many times when I
 need to get every single cookie that's available on the page, but there are
 plenty of times when I want to check the value of a single cookie. Using
 your API, getting the value of a single cookie with a known name becomes:

document.getCookies(function(cookies) {
  for (var i = 0; i < cookies.length; ++i) {
  if (cookies[i].name == "my_cookie_name") {
  doSomething(cookies[i]);
  }
  }
});

 That seems like a lot of work just to retrieve a single cookie value.

 I'm also less-than-thrilled with this being asynchronous, as I think the
 use cases for cookies are vastly differently than those for databases and
 web storage. The world is already parsing cookies synchronously right now,
 it doesn't seem like asynchronicity buys much benefit, it just introduces an
 additional level of indirection.

 -Nicholas

 __
 Commander Lock: Damnit Morpheus, not everyone believes what you believe!
 Morpheus: My beliefs do not require them to.

 -Original Message-
 From: whatwg-boun...@lists.whatwg.org [mailto:
 whatwg-boun...@lists.whatwg.org] On Behalf Of Adam Barth
 Sent: Wednesday, February 24, 2010 8:47 AM
 To: Darin Fisher
 Cc: whatwg
 Subject: Re: [whatwg] HTML Cookie API

 Done.

 On Wed, Feb 24, 2010 at 12:29 AM, Darin Fisher da...@chromium.org wrote:
  An explicit deleteCookie method might also be nice.
  -Darin
 
  On Tue, Feb 23, 2010 at 8:56 PM, Adam Barth w...@adambarth.com wrote:
 
  The document.cookie API is kind of terrible.  Web developers shouldn't
  have to parse a cookie-string or prepare a properly formatted
  set-cookie-string.  Here's a proposal for an HTML cookie API that
  isn't as terrible:
 
 
 
 https://docs.google.com/Doc?docid=0AZpchfQ5mBrEZGQ0cDh3YzRfMTRmdHFma21kMghl=en
 
  I'd like to propose we include this API in a future version of HTML.
  As always, feedback welcome.
 
  Adam
 
 



Re: [whatwg] HTML Cookie API

2010-02-24 Thread Darin Fisher
On Wed, Feb 24, 2010 at 6:08 PM, Adam Barth w...@adambarth.com wrote:

 On Wed, Feb 24, 2010 at 5:00 PM, Nicholas Zakas nza...@yahoo-inc.com
 wrote:
  Even though there can be multiple cookies with the same name on a single
 document, this most frequently occurs due to error rather than intention.
 I've never received a YUI bug report about this occurrence though I have
 considered returning an array of values instead of just the first value in
 this case. I might just go do that now. :)
 
  My initial comment still remains: retrieving the value of a single named
 cookie seems to be a much more common use case than retrieving all cookies.
 You can choose to solve the duplicate cookie name issue in a number of ways,
 but not providing a way to access a cookie by name seems like a flaw in this
 design.

 Done.  I've made the API return the first cookie that matches the
 specified name.  If a web developer wants to get all the cookies, I've
 added a getAllCookies() API.

 Adam



Some other random comments:

1- Perhaps deleteCookie should also take an optional error callback.

2- Is it possible for setCookie to be used to set an http-only cookie?  That
could be an interesting use case.

-Darin


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-23 Thread Darin Fisher
On Mon, Feb 22, 2010 at 4:05 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 3:43 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Mon, Feb 22, 2010 at 11:10 PM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:
   I've talked with some other folks on WebKit (Maciej and Oliver) about
   having
   a canvas that is available to workers. They suggested some nice
   modifications to make it an offscreen canvas, which may be used in the
   Document or in a Worker.
 
  What is the use case for this? It seems like in most cases you'll want
  to display something on screen to the user, and so the difference
  comes down to shipping drawing commands across the pipe, vs. shipping
  the pixel data.
 
  Sometimes the commands take up a lot more CPU power than shipping the
  pixels.  Lets say you wanted to have a really rich map application that
  looked great, was highly interactive/fluid, but didn't use a lot of
  bandwidth.  Rendering different parts of the screen on different workers
  seems like a legit use.

 I admit to not being a graphics expert, but I would imagine you have
 to do quite a lot of drawing before
 1. Drawing on offscreen canvas
 2. Cloning the pixel data in order to ship it to a different thread
 3. Drawing the pixel data to the on-screen canvas


The pixel copies are not as expensive as you might imagine.  (You just
described how rendering works in Chrome.)  Step #1 can vastly dominate if
drawing is complex.

Imagine if it involved something as complicated and expensive as rendering a
web page.  Doing work that expensive on a background thread becomes
imperative to maintaining good responsiveness of the main UI thread of the
application, so the extra copies can be well worth the cost.
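The three steps Jonas lists can be simulated with typed arrays (a sketch
only; the buffers stand in for canvases, and the clone in step 2 stands in
for a structured-clone post to another thread):

```javascript
// Simulated pipeline: draw offscreen, clone pixels to "ship" them across
// threads, then hand them to the on-screen side. Step 1 dominates when the
// drawing is complex; step 2 is a flat copy.
const WIDTH = 256, HEIGHT = 256;

// Step 1: "draw" into an offscreen RGBA buffer.
function drawOffscreen() {
  const pixels = new Uint8ClampedArray(WIDTH * HEIGHT * 4);
  for (let i = 0; i < pixels.length; i += 4) {
    pixels[i] = 255;      // red channel
    pixels[i + 3] = 255;  // opaque alpha
  }
  return pixels;
}

// Step 2: clone the pixel data, as structured clone would when posting it
// to another thread.
function shipPixels(pixels) {
  return new Uint8ClampedArray(pixels);
}

// Step 3: the receiving side holds an independent copy to blit on screen.
const offscreen = drawOffscreen();
const onscreen = shipPixels(offscreen);
console.log(onscreen.length === offscreen.length,
            onscreen.buffer !== offscreen.buffer); // true true
```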

-Darin




 gets to be cheaper than

 1. Drawing to on-screen canvas.

  The other use case I can think of is doing image manipulation and then
  sending the result directly to the server, without ever displaying it
  to the user. However this is first of all not supported by the
  suggested API, and second I can't think of any image manipulation that
  you wouldn't want to display to the user except for scaling down a
  high resolution image. But that seems like a much simpler API than all
  of canvas. And again, not even this simple use case is supported by
  the current API.
 
  OK, so you solve this one problem.  Then soon enough someone wants to do
  something more than just scale an image.  So you you add another one off
  solution.  Then another.  Next thing you've essentially created canvas
  prime

 We've always started with use cases and then created APIs that
 fulfills those use cases, rather than come up with APIs and hope that
 that fulfills some future use case. That seems like a much wiser path
 here too.

  I'll note that there are a bunch of teams that want this behavior, though
 I
  can't remember exactly what for.

 But you're sure that it fulfills their requirements? ;-)

  At least some of it is simple image
  resizing type stuff.  Most of it is related to doing image manipulation
 work
  that the app is probably going to need soon (but isn't on the screen
  yet...and that we don't want to slow the main thread for).
  Really, if you use Picasa (or iPhoto or some other competitor) it really
  isn't hard to think of a lot of uses for this.  Even for non-photo Apps
  (like Bespin) I could totally see it being worth it to them to do some
  rendering off the main loop.

 For many of these things you want to display the image to the user at
 the same time as the

  To be honest, I think the applications are largely self
 evident...especially
  if you think about taking rich desktop apps and making them web apps.

 So Picasa and/or iPhoto use off-main-thread *drawing* (not image
 scaling) today?

   Are
  you sure that your negativity towards an offscreen canvas isn't simply
  being driven by implementation-related worries?

 Quite certain. I can promise to for every API suggested, that if there
 are no use cases included, and no one else asks, I will ask what the
 use case is.

 / Jonas



Re: [whatwg] should async scripts block the document's load event?

2010-02-13 Thread Darin Fisher
I don't know... to me, asynchronous means completes later.  Precedent:
 XMLHttpRequest.

The Mozilla network code uses the phrase "load background" to describe a
load that happens asynchronously in the background _and_ does not block
onload.  Perhaps not coincidentally, this mode is used to load background
images :-)

-Darin


On Fri, Feb 12, 2010 at 11:50 AM, Jonas Sicking jo...@sicking.cc wrote:

 It's a good point. Curious to hear what other people are thinking.

 / Jonas

 On Fri, Feb 12, 2010 at 10:10 AM, Nicholas Zakas nza...@yahoo-inc.com
 wrote:
  To me “asynchronous” fundamentally means “doesn’t block other things from
  happening,” so if async currently does block the load event from firing
 then
  that seems very wrong to me.
 
 
 
  -Nicholas
 
 
 
  __
 
  Commander Lock: Damnit Morpheus, not everyone believes what you
 believe!
 
  Morpheus: My beliefs do not require them to.
 
  
 
  From: whatwg-boun...@lists.whatwg.org
  [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of Brian Kuhn
  Sent: Friday, February 12, 2010 8:03 AM
  To: Jonas Sicking
  Cc: Steve Souders; WHAT Working Group
  Subject: Re: [whatwg] should async scripts block the document's load
 event?
 
 
 
  Right.  Async scripts aren't really asynchronous if they block all the
  user-visible functionality that sites currently tie to window.onload.
 
 
 
  I don't know if we need another attribute, or if we just need to change
 the
  behavior for all async scripts.  But I think the best time to fix this is
  now; before too many UAs implement async.
 
 
 
  -Brian
 
 
 
 
 
 
 
 
 
  On Thu, Feb 11, 2010 at 10:41 PM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  Though what we want here is a DONTDELAYLOAD attribute. I.e. we want
  load to start asap, but we don't want the load to hold up the load
  event if all other resources finish loading before this one.
 
  / Jonas
 
  On Thu, Feb 11, 2010 at 10:23 PM, Steve Souders wha...@souders.org
 wrote:
  I just sent email last week proposing a POSTONLOAD attribute for
 scripts.
 
  -Steve
 
  On 2/10/2010 5:18 PM, Jonas Sicking wrote:
 
  On Fri, Nov 6, 2009 at 4:22 PM, Brian Kuhnbnk...@gmail.com  wrote:
 
 
  No one has any thoughts on this?
  It seems to me that the purpose of async scripts is to get out of the
  way
  of
  user-visible functionality.  Many sites currently attach user-visible
  functionality to window.onload, so it would be great if async scripts
 at
  least had a way to not block that event.  It would help minimize the
  effect
  that secondary-functionality like ads and web analytics have on the
 user
  experience.
  -Brian
 
 
  I'm concerned that this is too big of a departure from how people are
  used to <script>s behaving.
 
  If we do want to do something like this, one possibility would be to
  create a generic attribute that can go on things like <img>, <link
  rel=stylesheet>, <script> etc. that make the resource not block the
  'load' event.
 
  / Jonas
 
 
 
 



Re: [whatwg] should async scripts block the document's load event?

2010-02-13 Thread Darin Fisher
The thing is, almost all subresources load asynchronously.  The load event
exists to tell us when those asynchronous loads have finished.  So, I think
it follows that an asynchronous resource load may reasonably block the load
event.  (That's the point of the load event, after all!)
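A toy model of that load-event barrier (invented names; `delaysLoad: false`
plays the role of the proposed dont-delay-load behavior):

```javascript
// Each subresource load increments a pending count; "load" fires when the
// count drains to zero. A resource marked delaysLoad: false never joins
// the barrier, so it cannot hold the event up.
class LoadBarrier {
  constructor(onLoad) {
    this.pending = 0;
    this.onLoad = onLoad;
    this.fired = false;
  }
  start({ delaysLoad = true } = {}) {
    if (delaysLoad) this.pending++;
    return () => {                    // invoke when the resource finishes
      if (delaysLoad && --this.pending === 0) this.fire();
    };
  }
  fire() {
    if (!this.fired) { this.fired = true; this.onLoad(); }
  }
}

let loaded = false;
const barrier = new LoadBarrier(() => { loaded = true; });
const imageDone = barrier.start();                      // blocks load
const asyncScriptDone = barrier.start({ delaysLoad: false });
imageDone();
console.log(loaded); // true: load fired without waiting for the script
asyncScriptDone();   // finishing later changes nothing
```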

-Darin


On Fri, Feb 12, 2010 at 10:10 AM, Nicholas Zakas nza...@yahoo-inc.com wrote:

  To me “asynchronous” fundamentally means “doesn’t block other things from
 happening,” so if async currently does block the load event from firing then
 that seems very wrong to me.



 -Nicholas



 __

 Commander Lock: Damnit Morpheus, not everyone believes what you believe!

 Morpheus: My beliefs do not require them to.
   --

 *From:* whatwg-boun...@lists.whatwg.org [mailto:
 whatwg-boun...@lists.whatwg.org] *On Behalf Of *Brian Kuhn
 *Sent:* Friday, February 12, 2010 8:03 AM
 *To:* Jonas Sicking
 *Cc:* Steve Souders; WHAT Working Group
 *Subject:* Re: [whatwg] should async scripts block the document's load
 event?



 Right.  Async scripts aren't really asynchronous if they block all the
 user-visible functionality that sites currently tie to window.onload.



 I don't know if we need another attribute, or if we just need to change the
 behavior for all async scripts.  But I think the best time to fix this is
 now; before too many UAs implement async.



 -Brian









 On Thu, Feb 11, 2010 at 10:41 PM, Jonas Sicking jo...@sicking.cc wrote:

 Though what we want here is a DONTDELAYLOAD attribute. I.e. we want
 load to start asap, but we don't want the load to hold up the load
 event if all other resources finish loading before this one.

 / Jonas


 On Thu, Feb 11, 2010 at 10:23 PM, Steve Souders wha...@souders.org
 wrote:
  I just sent email last week proposing a POSTONLOAD attribute for scripts.
 
  -Steve
 
  On 2/10/2010 5:18 PM, Jonas Sicking wrote:
 
  On Fri, Nov 6, 2009 at 4:22 PM, Brian Kuhnbnk...@gmail.com  wrote:
 
 
  No one has any thoughts on this?
  It seems to me that the purpose of async scripts is to get out of the
 way
  of
  user-visible functionality.  Many sites currently attach user-visible
  functionality to window.onload, so it would be great if async scripts
 at
  least had a way to not block that event.  It would help minimize the
  effect
  that secondary-functionality like ads and web analytics have on the
 user
  experience.
  -Brian
 
 
  I'm concerned that this is too big of a departure from how people are
  used to <script>s behaving.
 
  If we do want to do something like this, one possibility would be to
  create a generic attribute that can go on things like <img>, <link
  rel=stylesheet>, <script> etc. that make the resource not block the
  'load' event.
 
  / Jonas
 
 





Re: [whatwg] api for fullscreen()

2010-01-30 Thread Darin Fisher
On Thu, Jan 28, 2010 at 6:42 PM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Fri, Jan 29, 2010 at 12:51 PM, Simon Fraser s...@me.com wrote:

 We have been discussing a more general fullscreen API that lets you take
 the page fullscreen (perhaps with the ability to focus on a single element),
 as Maciej mentions. We have not decided on a final form for this API, nor
 have we resolved whether it's possible to do some nice transition between
 the two modes. We have talked at some length about the security issues.

 Input on what people would like from this API is welcome, as are ideas on
 how the transitions should work.


 1) Should be convenient for authors to make any element in a page display
 fullscreen
 2) Should support in-page activation UI for discoverability
 3) Should support changing the layout of the element when you enter/exit
 fullscreen mode. For example, authors probably want some controls to be
 fixed size while other content fills the screen.
 4) Should accommodate potential UA security concerns, e.g. by allowing the
 transition to fullscreen mode to happen asynchronously after the user has
 confirmed permission

 *** WARNING: totally half-baked proposal ahead! ***

 New API for all elements:
 void enterFullscreen(optional boolean enableKeys);
 void exitFullscreen();
 boolean attribute supportsFullscreen;
 boolean attribute displayingFullscreen;
 beginfullscreen and endfullscreen events

 While an element is fullscreen, the UA imposes CSS style position:fixed;
 left:0; top:0; right:0; bottom:0 on the element and aligns the viewport of
 its DOM window with the screen. Only the element and its children are
 rendered, as a single CSS stacking context.

 enterFullscreen always returns immediately. If fullscreen mode is currently
 supported and permitted, enterFullscreen dispatches a task that a) imposes
 the fullscreen style, b) fires the beginfullscreen event on the element and
 c) actually initiates fullscreen display of the element. The UA may
 asynchronously display confirmation UI and dispatch the task when the user
 has confirmed (or never).

 The enableKeys parameter to enterFullscreen is a hint to the UA that the
 application would like to be able to receive arbitrary keyboard input.
 Otherwise the UA is likely to disable alphanumeric keyboard input. If
 enableKeys is specified, the UA might require more severe confirmation UI.

 In principle a UA could support multiple elements in fullscreen mode at the
 same time (e.g., if the user has multiple screens).

 enterFullscreen would throw an exception if fullscreen was definitely not
 going to happen for this element due to not being supported or currently
 permitted, or if all screens are already occupied.


Note:  The if all screens are already occupied implies acquiring some
global lock before returning from this method.  That's not so great for a
multi-threaded UA.  I'd prefer if we just defined an asynchronous error
event that could be used to report rejections.
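A sketch of that alternative (everything here is mocked and hypothetical;
`fullscreenerror` is an invented event name, and a boolean stands in for
the screen-availability check):

```javascript
// enterFullscreen() returns immediately and never throws; success or
// rejection is reported later via events, so no global lock is needed
// on the calling thread.
function makeElement(screenAvailable) {
  const listeners = {};
  return {
    addEventListener(type, fn) {
      if (!listeners[type]) listeners[type] = [];
      listeners[type].push(fn);
    },
    dispatch(type) {
      (listeners[type] || []).forEach(fn => fn());
    },
    enterFullscreen() {
      setTimeout(() => {
        this.dispatch(screenAvailable ? "beginfullscreen"
                                      : "fullscreenerror");
      }, 0);
    },
  };
}

const el = makeElement(false);        // all screens already occupied
el.addEventListener("fullscreenerror", () => console.log("rejected"));
el.enterFullscreen();                 // no exception; failure arrives async
```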

-Darin



 supportsFullscreen returns false if it's impossible for this element to
 ever be shown fullscreen. It does not reveal whether permission will be
 granted.


 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]



Re: [whatwg] history.back()

2010-01-27 Thread Darin Fisher
On Wed, Jan 27, 2010 at 3:26 PM, Ian Hickson i...@hixie.ch wrote:


   Another is what should happen if a page goes back() past its fragment
   identifier entries, and then modifies the document or alerts the
   location? What location should it get? Which document should it
   mutate? (test 007)
  
   How about:
  
 location.hash = 'a';
 /* spin event loop */
 history.back();
 location.hash = 'b';
 history.forward();
 alert(location.hash);
 /* spin event loop */
 alert(location.hash);
 
  It would be nice if the navigation and history traversal algorithms did
  not proceed while the page is blocked on a modal alert.

 Sure, but what should alert?

 I guess you're saying we should have b and b here.


Yeah, exactly.





   How about:
  
 location.hash = 'x';
 location.hash = 'a';
 /* spin event loop */
 history.back();
 /* spin event loop */
 history.forward();
 location.hash = 'b';
 /* spin event loop */
 history.back();
 /* spin event loop */
 alert(location.hash);
  
   What does this alert? (test 010)

 For this I guess you are saying we should alert x?


Yes.





  I think it would be risky to make navigation to fragment identifiers
  asynchronously set Location.  All browsers do so synchronously today, so
  I wouldn't be surprised to find that it matters.

 Ok, but when should the session history be traversed? Synchronously or
 not?

 If you do:

   location.hash = 'a';
   location.hash = 'b';

 ...and then spin the event loop, then the user hits back, do you end up
 at a, or did a never get added to the history?


I think that location.hash = 'a' should synchronously add #a to the
session history, or at least it should appear to the web page that it was
added synchronously.

In the example you gave above,

location.hash = 'a'  // appends #a to session history
location.hash = 'b'  // appends #b to session history
spin the event loop  // not significant
user hits back  // queues a task on the event loop to traverse session
history back one step
spin the event loop  // #a is the current session history entry
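That sequence can be written out as a toy model (not spec text; the class
name and task queue are illustrative):

```javascript
// location.hash appends a session-history entry synchronously; the back
// button merely queues a traversal task that runs when the event loop
// spins.
class SessionHistory {
  constructor() {
    this.entries = ["#"];
    this.index = 0;
    this.tasks = [];
  }
  setHash(hash) {                     // synchronous append, like location.hash
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push(hash);
    this.index++;
  }
  back() {                            // queues a task; no traversal yet
    this.tasks.push(() => { if (this.index > 0) this.index--; });
  }
  spinEventLoop() {
    while (this.tasks.length) this.tasks.shift()();
  }
  get current() { return this.entries[this.index]; }
}

const h = new SessionHistory();
h.setHash("#a");                      // appends #a
h.setHash("#b");                      // appends #b
h.back();                             // user hits back: task queued
console.log(h.current);               // still "#b" until the loop spins
h.spinEventLoop();
console.log(h.current);               // now "#a"
```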






 If you do:

   history.back();
   location.hash = 'a';

 ...do you end up with a no-op (synchronously traverse history to #a while
 the script is running, then go back in a task), or do you end up at a
 different page (go back to the previous page in a task, then do nothing
 with the location.href because the task for traversing its history is
 tossed when you switch to another page)? Or something else?


Hmm, good question... I'm not sure how much this matters.

That said, I think it would be good for location.hash = 'a' to interrupt the
history.back() request.  The net result being that #a is appended to
session history, and the history.back() request is discarded.




 If location changes traverse synchronously, there doesn't seem to be any
 benefit to making history.back() asynchronous -- it's the same algorithm.


I don't follow this implication.  Can you clarify?

I'm trying to treat history.{back,forward,go} as a UI command to the
navigator: synthesize the user clicking on the corresponding back/forward
buttons.  UI actions typically do not get dispatched during JS execution
(ignoring window.showModalDialog of course).




   Should Location be set synchronously but with the session history
   actually being updated asynchronously using a task, so that .back()
   and .forward() calls get interleaved with the Location setter?
 
  I think this would be problematic in other cases.  Imagine this
  scenario:
 
  location=#a;
  pushState(b, b, #b);
  location=#c;  // generates a synchronous popstate event

 (I assume you mean a hashchange event, not popstate.)


Oops, yes.  This is a bad example.  Nevermind.




 We can have synchronous traversal with asynchronous event dispatch, so I
 don't think that's really a problem.


   Should document.open() synchronously clear the session history, or
   should it asynchronously queue a task and do it that way? Should we,
   instead of using tasks that could run much later (e.g. if the script
   has previously invoked a bunch of setTimeout(0)s), add a step to the
   event loop so that after each task, any history traversal that's been
   queued up gets processed immediately?
 
  non-FIFO queuing makes me nervous ;-)

 It's not clear to me exactly what you want specced. Changing the location
 without traversing makes me pretty nervous (more so than non-FIFO
 queueing), but I don't know if that's what you want.


I agree that we should not change the location without traversing history.

I'm arguing for making history.{back,forward,go} start out by dispatching a
task
to then run the history traversal algorithm.  So, history.back() would never
change
the location synchronously.

-Darin







 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just 

Re: [whatwg] history.back()

2010-01-22 Thread Darin Fisher
On Fri, Jan 22, 2010 at 1:13 AM, Maciej Stachowiak m...@apple.com wrote:


 On Jan 21, 2010, at 8:37 PM, Darin Fisher wrote:

 On Thu, Jan 21, 2010 at 7:15 PM, Maciej Stachowiak m...@apple.com wrote:


 I asked Brady (the Safari/WebKit engineer who implemented pushState())
 about this, and he told me he found that in the pushState case it sometimes
 made things easier from the authoring side for history.back() to be
 synchronous. But I don't have the details.


 Brady and I have been discussing this too.  I'm not convinced that
 synchronous history.back() makes things significantly better for content
 authors.  Indeed, I'm concerned that it makes things worse.

 Here's what I mean:  If history.go() sometimes results in the history
 traversal completing synchronously (hash change) and sometimes not
 (navigation required), then there is a loss of predictability for the
 programmer.  They have to deal with event dispatch sometimes happening in a
 re-entrant manner, but other times not.  A consistent model seems better to
 me.


 I don't really have strong feelings about this at present. But I think the
 most important deciding factor should be what is the best behavior for
 authors. It seems like the implementation details are something that can be
 worked out either way, so implementation challenge should be a tiebreaker at
 best.


That's fair, and I completely agree in principle.

The reality of the situation of course is that implementation challenges
matter a lot.  If implementation costs are high and the benefits of
modifying the code are relatively low, then we are likely to end up with
browsers having inconsistent behavior for some time.



 You are correct that consistency is good. However, whether a navigation to
 a different document is asynchronous can barely be observed - it's not like
 any code from the current document will run after the navigation anyway. For
 within-document navigations, I could imagine synchronous behavior might make
 some things easier to code - you could call history.back() to pop the state
 stack, and your code running afterwards could assume you'd successfully
 traversed to the new state. On the other hand, I could also imagine it's
 harder. And history.back is probably not used all that much in modern Web
 apps anyway. That's why I don't have a very good picture of which approach
 is better.


Isn't this what popstate and hashchange notifications are for?  Afterall,
the user could also press the back button, and the developer would probably
be concerned with that case too.

-Darin




 Regards,
 Maciej




Re: [whatwg] history.back()

2010-01-22 Thread Darin Fisher
On Fri, Jan 22, 2010 at 2:08 AM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 21 Jan 2010, Darin Fisher wrote:
 
  In WebKit, history.back() is currently implemented asynchronously.

 It's not clear to me what you mean by asynchronously.

 Do you mean that the events fire asynchronously? That the Location object
 is updated asynchronously? That the decision about whether the call is a
 noop or not is fired asynchronously? That the navigation, if one is
 necessary, is done asynchronously? Are we talking about same-frame, or
 cross-frame? Same-origin, or cross-origin? Traversal from one entry in one
 document to another entry in the same document, or in another document?


To clarify:

history.{back,forward,go} begin by scheduling a task on the current thread
to run later.  From that task, the history traversal algorithm is executed.





 I made some demos to test this out, and it seems that IE8 behaves
 differently whether it's cross-frame or same-frame. I'd really rather we
 define this in a way that is consistent for all ways of invoking the API.
 It does the Location changes synchronously if invoked in-page, and
 asynchronously if the traversal affects another page.


That's very interesting.




 For simple cases, Firefox consistently does the Location change
 synchronously. Opera (10.x on Windows), Safari (4 for Windows), and Chrome
 do it async. But complicated cases make these descriptions simplistic.

   http://www.hixie.ch/tests/adhoc/dom/level0/history/sync-vs-async/


  IE however appears to implement history.back() asynchronously in all
  cases just like newer versions of WebKit.

 That doesn't appear to be completely accurate.


I was only testing the cross frame case.  Thank you for testing more
thoroughly.





  From a web compat perspective, it seems wise to match the behavior of
  IE. It also has other benefits.
 
  Can we change the spec?

 Yes, but that won't make it async if the goal is to match IE.


 On Thu, 21 Jan 2010, Jonas Sicking wrote:
 
  Sounds good to me. Having all navigation be asynchronous I suspect would
  have implementations benefits in Gecko too.

 It would be a reasonably minor change to the spec. I'm happy to go either
 way on this. The problem is I don't know exactly what async vs sync
 really means in this context, since the algorithms are quite complicated.


 On Thu, 21 Jan 2010, Olli Pettay wrote:
 
  And still one thing to test and specify;
  if history.back()/forward() is asynchronous,
  does that mean that loading starts asynchronously,
  or that entries are added asynchronously to session history?
 
  What should happen if a page calls:
  history.back();
  history.forward();
 
  What if the page calls:
  history.back();
  history.go(-2);

 Indeed. These are the kinds of questions I am curious about.
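One plausible answer to Olli's two scenarios, under a fully task-queued model, is that each call simply queues a traversal delta and the deltas apply in FIFO order once the event loop turns. A sketch (all names illustrative, not a real API):

```javascript
// Toy task-queue model: each history call queues a delta; deltas apply in
// FIFO order when the loop drains, and out-of-range traversals are ignored
// (as for a real history.go() past either end of the list).
const tasks = [];

function makeHistory(entries, index) {
  const h = { entries, index };
  h.go = (delta) =>
    tasks.push(() => {
      const target = h.index + delta;
      if (target >= 0 && target < h.entries.length) h.index = target;
    });
  h.back = () => h.go(-1);
  h.forward = () => h.go(+1);
  return h;
}

function drain() {
  while (tasks.length > 0) tasks.shift()();
}

// Scenario 1: back(); forward() -- nets to no movement.
const h1 = makeHistory(["a", "b", "c", "d"], 3);
h1.back();
h1.forward();
drain(); // h1.index is 3 again: -1 then +1, applied in order

// Scenario 2: back(); go(-2) -- nets to three steps back.
const h2 = makeHistory(["a", "b", "c", "d"], 3);
h2.back();
h2.go(-2);
drain(); // h2.index is 0: -1 then -2, applied in order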

 Another is what should happen if a page goes back() past its fragment
 identifier entries, and then modifies the document or alerts the location?
 What location should it get? Which document should it mutate? (test 007)

 How about:

   location.hash = 'a';
   /* spin event loop */
   history.back();
   location.hash = 'b';
   history.forward();
   alert(location.hash);
   /* spin event loop */
   alert(location.hash);


It would be nice if the navigation and history traversal algorithms did not
proceed while the page is blocked on a modal alert.




 What does this alert? (test 008)

 How about:

   location.hash = 'x';
   location.hash = 'a';
   /* spin event loop */
   history.back();
   /* spin event loop */
   history.forward();
   location.hash = 'b';
   /* spin event loop */
   history.back();
   /* spin event loop */
   alert(location.hash);

 What does this alert? (test 010)


  And btw, some of the session history handling is anyway synchronous. Per
  the current HTML5 draft calling document.open() adds a new entry to
  session history immediately (IIRC, webkit is the only one which doesn't
  support this).

 Another example is navigating to a fragment identifier, which in all
 browsers I tested changes the Location object immediately also.


 As I see it these are the criteria that we have to consider here in making
 a decision, in order of importance:

  * Compatibility.
   It seems that browsers are quite inconsistent here, and so it's likely
   that we have some room to maneuver. Nobody has mentioned any
   particular bugs in sites caused by implementing this one way or
   another. I am not convinced that compatibility is a factor at this
   point.

  * Consistency for authors
   I think whatever solution we come up with we should make sure it is
   sane for authors. In this case, however, pretty much any model works,
   so this doesn't really help decide what is best, so long as we are
   consistent in how we specify and implement it.

  * Implementation concerns
   This may be the deciding factor, in particular due to the multiprocess
   session history issues Darin raised.

  * Specification sanity
   I think we can probably make any model work

[whatwg] history.back()

2010-01-21 Thread Darin Fisher
In WebKit, history.back() is currently implemented asynchronously.

However, it was not always this way.  Previously, if the back navigation
corresponded to a hash change, then the back navigation would complete
synchronously.  If the back navigation corresponded to a different document,
then it would be completed asynchronously.

The HTML5 spec currently calls for the old behavior of WebKit, which happens
to match the behavior of Gecko.  Because the spec is written this way, there
is movement in WebKit to change WebKit back.

IE however appears to implement history.back() asynchronously in all cases
just like newer versions of WebKit.

I actually think this is a better behavior to spec for a couple reasons:

1)  It allows for history.back() to behave consistently regardless of the
type of navigation.
2)  It allows for the back/forward list to be decoupled from the main thread
of the rendering engine.

This last point is quite relevant to Chrome since we store the back/forward
list in a separate process.  We do this since items in the back/forward list
may actually need to be rendered using different WebKit processes.
 (Navigating in the location bar is a hint that we can spawn a new process.)

We could copy the entire back/forward list to each process and replicate
state, but that seems excessive.  Instead, simply matching the
history.back() behavior of IE avoids the need to do so.

From a web compat perspective, it seems wise to match the behavior of IE.
 It also has other benefits.

Can we change the spec?

-Darin


Re: [whatwg] history.back()

2010-01-21 Thread Darin Fisher
On Thu, Jan 21, 2010 at 3:18 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 1/21/10 11:12 AM, Darin Fisher wrote:

 In WebKit, history.back() is currently implemented asynchronously.

 However, it was not always this way.  Previously, if the back navigation
 corresponded to a hash change, then the back navigation would complete
 synchronously.  If the back navigation corresponded to a different
 document, then it would be completed asynchronously.

 The HTML5 spec currently calls for the old behavior of WebKit, which
 happens to match the behavior of Gecko.  Because the spec is written
 this way, there is movement in WebKit to change WebKit back.

 IE however appears to implement history.back() asynchronously in all
 cases just like newer versions of WebKit.

 I actually think this is a better behavior to spec for a couple reasons:

 1)  It allows for history.back() to behave consistently regardless of
 the type of navigation.
 2)  It allows for the back/forward list to be decoupled from the main
 thread of the rendering engine.

 This last point is quite relevant to Chrome since we store the
 back/forward list in a separate process.  We do this since items in the
 back/forward list may actually need to be rendered using different
 WebKit processes.  (Navigating in the location bar is a hint that we can
 spawn a new process.)

 We could copy the entire back/forward list to each process and replicate
 state, but that seems excessive.  Instead, simply matching the
 history.back() behavior of IE avoids the need to do so.

  From a web compat perspective, it seems wise to match the behavior of
 IE.  It also has other benefits.

 Can we change the spec?

 -Darin



 Do you propose to make all history traversal async?
 back/forward/go/location.reload  ?



My proposal is to only make history.{back,forward,go} asynchronous.

I haven't carefully reviewed location.reload, but off hand, I think it
should behave similarly to location assignment.

My concern is really that the history traversal algorithm should not require
direct synchronous read access to session history entries other than the
current session history entry.

I think it would be best for history.{back,forward,go} to be asynchronous
to support an implementation that just sends an event all the way up to
the UI layer of a browser to synthesize a click of the corresponding
buttons.
That way an implementation can reuse most of the same code paths for
both user initiated history traversal as well as page initiated history
traversal.

-Darin


Re: [whatwg] history.back()

2010-01-21 Thread Darin Fisher
On Thu, Jan 21, 2010 at 7:15 PM, Maciej Stachowiak m...@apple.com wrote:


 On Jan 21, 2010, at 1:12 AM, Darin Fisher wrote:

  In WebKit, history.back() is currently implemented asynchronously.
 
  However, it was not always this way.  Previously, if the back navigation
 corresponded to a hash change, then the back navigation would complete
 synchronously.  If the back navigation corresponded to a different document,
 then it would be completed asynchronously.
 
  The HTML5 spec currently calls for the old behavior of WebKit, which
 happens to match the behavior of Gecko.  Because the spec is written this
 way, there is movement in WebKit to change WebKit back.
 
  IE however appears to implement history.back() asynchronously in all
 cases just like newer versions of WebKit.
 
  I actually think this is a better behavior to spec for a couple reasons:
 
  1)  It allows for history.back() to behave consistently regardless of the
 type of navigation.
  2)  It allows for the back/forward list to be decoupled from the main
 thread of the rendering engine.
 
  This last point is quite relevant to Chrome since we store the
 back/forward list in a separate process.  We do this since items in the
 back/forward list may actually need to be rendered using different WebKit
 processes.  (Navigating in the location bar is a hint that we can spawn a
 new process.)
 
  We could copy the entire back/forward list to each process and replicate
 state, but that seems excessive.  Instead, simply matching the
 history.back() behavior of IE avoids the need to do so.

 I don't have strong feelings either way on what the spec should require.
 But I don't see why this is excessive. You'd only have to store fragment
 navigations and pushState navigations, not the full back/forward list. It
 seems like a good idea anyway not to have to serialize state objects back
 and forth.


We have to serialize state objects back and forth regardless so that if a
renderer process crashes, we still have the session history.  This allows us
to reload the pages, and restore relevant session history state (e.g.,
scroll position, form field values, and now state objects).




 I asked Brady (the Safari/WebKit engineer who implemented pushState())
 about this, and he told me he found that in the pushState case it sometimes
 made things easier from the authoring side for history.back() to be
 synchronous. But I don't have the details.


Brady and I have been discussing this too.  I'm not convinced that
synchronous history.back() makes things significantly better for content
authors.  Indeed, I'm concerned that it makes things worse.

Here's what I mean:  If history.go() sometimes results in the history
traversal completing synchronously (hash change) and sometimes not
(navigation required), then there is a loss of predictability for the
programmer.  They have to deal with event dispatch sometimes happening in a
re-entrant manner, but other times not.  A consistent model seems better to
me.

-Darin




 
  From a web compat perspective, it seems wise to match the behavior of IE.
  It also has other benefits.
 
  Can we change the spec?


 Regards,
 Maciej



Re: [whatwg] history.back()

2010-01-21 Thread Darin Fisher
On Thu, Jan 21, 2010 at 7:17 PM, Brady Eidson beid...@apple.com wrote:


 On Jan 21, 2010, at 1:12 AM, Darin Fisher wrote:

  In WebKit, history.back() is currently implemented asynchronously.
 
  However, it was not always this way.  Previously, if the back navigation
 corresponded to a hash change, then the back navigation would complete
 synchronously.  If the back navigation corresponded to a different document,
 then it would be completed asynchronously.
 
  The HTML5 spec currently calls for the old behavior of WebKit, which
 happens to match the behavior of Gecko.  Because the spec is written this
 way, there is movement in WebKit to change WebKit back.
 
  IE however appears to implement history.back() asynchronously in all
 cases just like newer versions of WebKit.
 
  I actually think this is a better behavior to spec for a couple reasons:
 
  1)  It allows for history.back() to behave consistently regardless of the
 type of navigation.

 I agree it would make history.back() consistent regardless of the type of
 navigation.  I don't necessarily know what benefit this has.


Please see my note to Maciej regarding this.




  2)  It allows for the back/forward list to be decoupled from the main
 thread of the rendering engine.

 I've both brainstormed on my own and discussed this point with others who
 have done a lot of thought on how a multi-threaded or multi-processed WebKit
 may work in the future, and we all agree that synchronous history.go() does
 not make an MT/MP implementation difficult or impossible.


Yes, I can imagine some MT/MP implementations where that could be true.

However, as someone with a great deal of experience developing a
multi-process browser, I can tell you that this would be very expensive for
us to support.  We heavily leverage the fact that the session history list
is divorced from the rendering process / thread.  For example, session
history is modified outside of the rendering process.




  This last point is quite relevant to Chrome since we store the
 back/forward list in a separate process.  We do this since items in the
 back/forward list may actually need to be rendered using different WebKit
 processes.  (Navigating in the location bar is a hint that we can spawn a
 new process.)
 
  We could copy the entire back/forward list to each process and replicate
 state, but that seems excessive.  Instead, simply matching the
 history.back() behavior of IE avoids the need to do so.
 
  From a web compat perspective, it seems wise to match the behavior of IE.

 This is often the case, yes, but not always a good enough rationalization
 on its own.


Sure, and it is not the only reason.




  It also has other benefits.

  Can we change the spec?

 I've heard one concrete benefit and one theoretical benefit.
 Concrete - Chrome's particular multi-process model easily fits in with
 asynchronous history.go()
 Theoretical - Matching the behavior of IE here is more important to
 compatibility than matching Gecko and previous WebKit behavior.


There is also the consistency argument I made.  That is quite a concrete
benefit for programmers.





 To argue in favor of the current spec:

 One of the main drives for async behavior in modern programming and HTML 5
 specifically has been to remove the slow and unpredictable nature of
 blocking I/O from the equation.  Certainly that has been a motivating factor
 in a lot of feedback I have given, others at Apple have given, and many
 outside of Apple have given to the HTML 5 spec (including those who work on
 Chromium).


Yes, that is a huge motivation.  I believe there are other motivations.  For
example, asynchronous events provide a nice means to avoid re-entrancy and
the deeply nested stacks that come with them.  postMessage is asynchronous
for this reason.  The scroll event is spec'd to be asynchronous for similar
reasons.
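The re-entrancy point can be made concrete with a toy dispatcher (all names here are illustrative): a synchronous dispatcher re-enters the handler before the outer call returns, while a queued dispatcher runs every handler from a flat stack.

```javascript
// Toy illustration: synchronous dispatch nests handler frames on the
// stack; queued dispatch never does.
function makeEmitter(synchronous) {
  const queue = [];
  let depth = 0;
  const emitter = {
    handler: null,
    maxDepth: 0,
    fire() {
      if (synchronous) {
        depth += 1;
        emitter.maxDepth = Math.max(emitter.maxDepth, depth);
        emitter.handler(); // may call fire() again -> nested stack
        depth -= 1;
      } else {
        queue.push(emitter.handler); // dispatched later, never nested
      }
    },
    drain() {
      while (queue.length > 0) {
        depth += 1;
        emitter.maxDepth = Math.max(emitter.maxDepth, depth);
        queue.shift()();
        depth -= 1;
      }
    },
  };
  return emitter;
}

// A handler that re-fires the event twice more from inside itself.
let remaining;

remaining = 3;
const syncEmitter = makeEmitter(true);
syncEmitter.handler = () => { if (--remaining > 0) syncEmitter.fire(); };
syncEmitter.fire();
// syncEmitter.maxDepth === 3: three handler frames nested on the stack

remaining = 3;
const queuedEmitter = makeEmitter(false);
queuedEmitter.handler = () => { if (--remaining > 0) queuedEmitter.fire(); };
queuedEmitter.fire();
queuedEmitter.drain();
// queuedEmitter.maxDepth === 1: handlers ran one at a time from the queue
```

This is the same shape of argument used for making postMessage and the scroll event asynchronous: the author's handler can never find itself re-entered halfway through its own previous invocation.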




 The current synchronous traversals called for by the spec are explicitly
 the ones that - from the engine's standpoint - will never have i/o to block
 on.

 More generally, asynchronicity adds complexity to using and understanding
 APIs as well as predicting the side effects of a particular method call.
  Web developers rarely - if ever - have sympathy for the difficulties of the
 engineers who created the environment they are working in.

 It seems that when designing and presenting an API, synchronicity should be
 preferred unless there's an inherent performance or scalability problem with
 it.  And I just don't see that problem with the specified behavior of
 history.go().



Again, there is a consistency problem, a re-entrancy problem, etc.

From the performance and scalability angle, I also believe that we should
not design ourselves into a corner with APIs.  I for one am very interested
in a future where more elements of an application can be split off into
separate threads.  An iframe in a separate domain being a good example where
this kind of separation is desirable.  The more synchronous events

Re: [whatwg] HTMLCanvasElement.toFile()

2010-01-15 Thread Darin Fisher
On Thu, Jan 14, 2010 at 11:04 PM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 14 Jan 2010, Darin Fisher wrote:
  On Thu, Jan 14, 2010 at 12:10 PM, David Levin le...@google.com wrote:
  
   It seems like the method should be toBlob.
  
This doesn't address my concern that you won't know the mime type of
the blob returned.
  
    This makes a good case to move the readonly attribute DOMString type
    from File to Blob.
 
  I like this suggestion.  It seems reasonable for a Blob, which is just a
  handle to data, to have an associated media type.

 What type should a blob have if it is the result of slicing another file?


I had the same thought after sending this ;-)

A slicing operation that changes the size of the file should probably clear
the type field or set it to application/octet-stream.  Perhaps Blob.type
should be settable in some cases?
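For reference, the File API as eventually specified took roughly this route: a slice's type defaults to the empty string, and the caller may supply a contentType argument explicitly. A quick sketch using the WHATWG-style Blob that Node.js (18+) provides globally:

```javascript
// A slice does not inherit the source Blob's type; the caller must pass
// a contentType to slice() to opt back in.
const source = new Blob(["hello world"], { type: "text/plain" });

const untyped = source.slice(0, 5);              // type defaults to ""
const typed = source.slice(0, 5, "text/plain");  // caller opts back in

console.log(source.type);  // "text/plain"
console.log(untyped.type); // ""
console.log(typed.type);   // "text/plain"
```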

-Darin


Re: [whatwg] HTMLCanvasElement.toFile()

2010-01-14 Thread Darin Fisher
On Thu, Jan 14, 2010 at 12:10 PM, David Levin le...@google.com wrote:

 It seems like the method should be toBlob.

  This doesn't address my concern that you won't know the mime type of
  the blob returned.

 This makes a good case to move the readonly attribute DOMString
 type from File to Blob.

 dave



I like this suggestion.  It seems reasonable for a Blob, which is just a
handle to data, to have an associated media type.

-Darin


Re: [whatwg] question about the popstate event

2010-01-12 Thread Darin Fisher
Hi,

I've been discussing this issue with Brady Eidson over at
https://bugs.webkit.org/show_bug.cgi?id=33224,
and his interpretation appears to be different.  (I think he may have
convinced me too.)

I'd really like some help understanding how pushState is intended to work
and to see how that lines up
with the spec.

Also, assuming Brady is correct, then I wonder why pushState was designed
this way.  It seems strange
to me that entries in session history would disappear when you navigate away
from a document that used
pushState.

-Darin


On Tue, Jan 5, 2010 at 6:55 PM, Justin Lebar justin.le...@gmail.com wrote:

  From my reading of the spec, I would expect the following steps:
  5. Page A is loaded.
  6. The load event for Page A is dispatched.
  7. The popstate event for Page A is dispatched.

 I think this is correct.  A popstate event is always dispatched
 whenever a new session history entry is activated (6.10.3).

 -Justin

 On Tue, Jan 5, 2010 at 4:53 PM, Darin Fisher da...@chromium.org wrote:
  I'd like to make sure that I'm understanding the spec for pushState and
 the
  popstate event properly.
  Suppose, I have the following sequence of events:
  1. Page A is loaded.
  2. Page A calls pushState("foo", null).
  3. The user navigates to Page B.
  4. The user navigates back to Page A (clicks the back button once).
  Assuming the document of Page A was disposed upon navigation to Page B
  (i.e., that it was not preserved in a page cache), should a popstate
 event
  be generated as a result of step 4?
  From my reading of the spec, I would expect the following steps:
  5. Page A is loaded.
  6. The load event for Page A is dispatched.
  7. The popstate event for Page A is dispatched.
  Do I understand correctly?
  Thanks,
  -Darin



Re: [whatwg] question about the popstate event

2010-01-12 Thread Darin Fisher
On Tue, Jan 12, 2010 at 7:30 PM, Justin Lebar justin.le...@gmail.com wrote:

 If I'm understanding the bug correctly, Brady is suggesting not that a
 popstate event isn't fired when we navigate back to a document which
 has been unloaded from memory, but that the state object in that
 popstate event is null.

 As I understand it, the crux of his argument relates to the algorithm
 to update the session history with the new page [1]:

 2) If the navigation was initiated for entry update of an entry
 
1) Replace the entry being updated with a new entry representing
   the new resource and its Document object and related state.

 I think he's arguing that the set of related state that is copied to
 the new entry does not contain the state object.  His evidence for
 this is mostly textual: This state is referenced in other parts of the
 spec, and in those places, it's made clear that the state consists of
 scroll position and form fields:

 (From comment #4 at https://bugs.webkit.org/show_bug.cgi?id=33224)
  I believe state in this context is not referring to state objects,
 but
  rather persisted user state as set forth in 5.11.9 step 3:
  For example, some user agents might want to persist the scroll position,
 or
  the values of form controls.

 I think this is a good point from a textual perspective.


Ah, yes... agreed.



 But I think it's clear that we actually want to persist state objects
 across Document unloads.  If we didn't care about this use case, we
 could do away with state objects altogether.  A document could just
 call pushstate with no state variable and store its state objects in a
 global variable indexed by an identifier in the URL.  When the page
 receives a popstate, it checks its URL and grabs the relevant state
 object.  Simple.  (This doesn't handle multiple entries with the same
 URL, but hash navigation doesn't handle that either, so that's not a
 big problem.)

 My point is that state objects are pretty much useless unless you
 persist them after the document has been unloaded.


Right!  This is the very perspective I viewed pushState from, and it also
seems to me that the feature is far less useful if state objects are not
persisted after document unload.
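The persistence argument can be sketched with a toy model (all names illustrative): the session history lives outside any document, stores a serialized copy of each state object at push time, and can hand a fresh copy back to a newly created document after the original is destroyed. JSON stands in here for the structured clone algorithm.

```javascript
// Toy model: session history (outside any document) keeps a serialized
// copy of each state object, so it survives document teardown.
const sessionHistory = [];
let currentIndex = -1;

function pushState(state, url) {
  // Serialize at push time -- no live references into the document survive,
  // which mirrors why the spec takes a structured clone up front.
  sessionHistory.splice(currentIndex + 1);
  sessionHistory.push({ url, state: JSON.stringify(state) });
  currentIndex = sessionHistory.length - 1;
}

function traverseTo(index) {
  currentIndex = index;
  const entry = sessionHistory[currentIndex];
  // The popstate for the (possibly brand-new) document gets a fresh copy.
  return { url: entry.url, state: JSON.parse(entry.state) };
}

// Page A pushes a state object, the user navigates to page B, document A
// is destroyed, then the user goes back: the state object is still there.
pushState(null, "/a");             // initial entry for page A
pushState({ section: 2 }, "/a#2"); // pushState from page A
pushState(null, "/b");             // navigation to page B
// ... document A is unloaded here; only sessionHistory persists ...
const popstate = traverseTo(1);    // user clicks Back
// popstate.state is { section: 2 } even though document A was destroyed
```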

-Darin




 I also think the fact that we take the structured clone of a state
 object before saving it (and that structured clone forbids pointers to
 DOM objects and whatnot) indicates that the spec intended for state
 objects to stick around after document unload.  Otherwise, why bother
 making a restrictive copy?

 (It should go without saying that if you're saving state objects
 across document unloads, you should also be saving the has same
 document relationships between history entries.  That is, suppose
 history entry A calls pushstate and creates history entry B.  Some
 time later, the document for A and B is unloaded, then the user goes
 back to B, which is re-fetched into a fresh Document.  Then the user
 clicks back, activating A.  We should treat the activation of A from B
 as an activation between two entries with the same document, and not
 re-fetch A.)

 Where the spec needs to be clarified to support this, I think it
 should be.  But let's first agree that this is the right thing to do.

 -Justin

 [1]
 http://www.whatwg.org/specs/web-apps/current-work/multipage/history.html#update-the-session-history-with-the-new-page

 On Tue, Jan 12, 2010 at 3:54 PM, Darin Fisher da...@chromium.org wrote:
  Hi,
  I've been discussing this issue with Brady Eidson over
  at https://bugs.webkit.org/show_bug.cgi?id=33224,
  and his interpretation appears to be different.  (I think he may have
  convinced me too.)
  I'd really like some help understanding how pushState is intended to work
  and to see how that lines up
  with the spec.
  Also, assuming Brady is correct, then I wonder why pushState was designed
  this way.  It seems strange
  to me that entries in session history would disappear when you navigate
 away
  from a document that used
  pushState.
  -Darin
 
  On Tue, Jan 5, 2010 at 6:55 PM, Justin Lebar justin.le...@gmail.com
 wrote:
 
   From my reading of the spec, I would expect the following steps:
   5. Page A is loaded.
   6. The load event for Page A is dispatched.
   7. The popstate event for Page A is dispatched.
 
  I think this is correct.  A popstate event is always dispatched
  whenever a new session history entry is activated (6.10.3).
 
  -Justin
 
  On Tue, Jan 5, 2010 at 4:53 PM, Darin Fisher da...@chromium.org
 wrote:
   I'd like to make sure that I'm understanding the spec for pushState
 and
   the
   popstate event properly.
   Suppose, I have the following sequence of events:
   1. Page A is loaded.
   2. Page A calls pushState("foo", null).
   3. The user navigates to Page B.
   4. The user navigates back to Page A (clicks the back button once).
   Assuming the document of Page A was disposed upon navigation to Page B
   (i.e., that it was not preserved in a page cache

Re: [whatwg] using postMessage() to send to a newly-created window

2010-01-06 Thread Darin Fisher
On Wed, Jan 6, 2010 at 1:11 AM, Anne van Kesteren ann...@opera.com wrote:

 On Wed, 06 Jan 2010 06:30:17 +0100, Darin Fisher da...@chromium.org
 wrote:

 I suspect the postMessage would be dispatched in this case, but the event
 dispatch would probably go to the document at http://a/ instead of
 http://b/.


 This would fail as well because of the targetOrigin argument. (Unless that
 is "*" I guess, but can't you just check before invoking postMessage()
 anyway?)


It was given as "*" in the example.
-Darin


[whatwg] question about the popstate event

2010-01-05 Thread Darin Fisher
I'd like to make sure that I'm understanding the spec for pushState and the
popstate event properly.

Suppose, I have the following sequence of events:

1. Page A is loaded.
2. Page A calls pushState("foo", null).
3. The user navigates to Page B.
4. The user navigates back to Page A (clicks the back button once).

Assuming the document of Page A was disposed upon navigation to Page B
(i.e., that it was not preserved in a page cache), should a popstate event
be generated as a result of step 4?

From my reading of the spec, I would expect the following steps:

5. Page A is loaded.
6. The load event for Page A is dispatched.
7. The popstate event for Page A is dispatched.

Do I understand correctly?

Thanks,
-Darin
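A toy model of the four steps above (plain Node.js, no browser APIs; the class and names are invented for illustration, and document disposal/caching is ignored), under the reading that a popstate fires whenever a session-history entry is activated:

```javascript
// Minimal session-history model: pushState adds an entry sharing the current
// document; going back activates the previous entry and fires popstate with
// that entry's state object.
class SessionHistory {
  constructor() {
    this.entries = [{ url: '/A', state: null }];  // 1. Page A is loaded
    this.index = 0;
    this.events = [];
  }
  pushState(state, url) {
    this.entries.splice(this.index + 1);  // prune forward entries
    this.entries.push({ url, state });    // new entry shares A's document
    this.index += 1;
  }
  navigate(url) {                         // cross-document navigation
    this.entries.splice(this.index + 1);
    this.entries.push({ url, state: null });
    this.index += 1;
  }
  back() {                                // activate the previous entry
    this.index -= 1;
    const entry = this.entries[this.index];
    this.events.push({ type: 'popstate', state: entry.state });
    return entry;
  }
}

const h = new SessionHistory();
h.pushState('foo', '/A');   // 2. Page A calls pushState("foo", null)
h.navigate('/B');           // 3. the user navigates to Page B
const entry = h.back();     // 4. back button re-activates the pushState entry
console.log(entry.state);   // 'foo' -- the popstate event carries the state
```

The open question in the thread is whether this still holds when Page A's document was disposed and re-fetched between steps 3 and 4; the model simply assumes the entry (and its state) survives.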


Re: [whatwg] using postMessage() to send to a newly-created window

2010-01-05 Thread Darin Fisher
The window doesn't open synchronously, so you should have to wait for
http://x/ to load (or for its document to at least be created) before you
can start communicating with it.

Note: If you instead open about:blank you should be able to communicate
with it synchronously since about:blank is loaded synchronously.  It is
special-cased.

From the newly opened window, you could try posting a message to its opener.
 The opener could then handle that event and use it as a signal to know that
it can now begin communicating with the newly opened window.

I haven't tested any of this ;-)

-Darin
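A rough sketch of that handshake, modelled in plain Node.js with two objects standing in for the opener and child windows (all names are invented for illustration; real postMessage delivery is asynchronous, which this toy model ignores):

```javascript
// Tiny stand-in for a window: messages only reach handlers that have
// already been registered, mirroring why a message posted before the
// child's script runs is lost.
function makeWindow() {
  return {
    opener: null,
    handlers: [],
    addEventListener(type, fn) { if (type === 'message') this.handlers.push(fn); },
    postMessage(data /* , targetOrigin */) {
      this.handlers.forEach(fn => fn({ data }));
    },
  };
}

const opener = makeWindow();
const child = makeWindow();
child.opener = opener;

const log = [];

// 1. The opener posts immediately after "opening" the child: the child's
//    script has not run yet, so no handler exists and the message is lost.
child.postMessage('hello');

// 2. The child's script runs: register a handler, then signal readiness.
child.addEventListener('message', e => log.push('child got: ' + e.data));
opener.addEventListener('message', e => {
  if (e.data === 'ready') child.postMessage('hello again');  // safe now
});
child.opener.postMessage('ready');

console.log(log);  // [ 'child got: hello again' ]
```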


On Mon, Dec 21, 2009 at 7:24 PM, Dirk Pranke dpra...@chromium.org wrote:

 Hi all,

 In the course of testing something today, I attempted to create a
 window and immediately post a message to it, and was surprised that it
 didn't seem to work.

 E.g.:

 var w = window.open("http://x");
 w.postMessage("hello, world", "*");

 w never got the message - this seemed to be consistent across Safari,
 Chrome, and FF (all I had installed on my Mac at the time, so
 apologies to Opera, IE, and anyone else I've left out).

 Is this supposed to work? If not, is there a reliable way for the
 source to know when it is safe to send a message to the target? The
 only way I can think of is for the target to send a message back to
 the source, which only works if the target can get a reference to the
 source using window.opener, which may or may not be possible or
 desirable ...

 If this isn't supposed to work, can we state this explicitly in the spec?

 -- dirk



Re: [whatwg] using postMessage() to send to a newly-created window

2010-01-05 Thread Darin Fisher
It sounds tempting to say that the postMessage should be queued until the
newly opened window is loaded, but what point in time is that exactly?  Is
that after the load event is dispatched on the newly opened window?

Note: a newly opened window can begin communicating with its opener much
earlier (via inline script execution).

However, if we try to dispatch the postMessage events before the load event
then the newly opened window may not have registered its event handlers yet.
 (A future script tag may define the event handler.)  So, I think we would
have to delay until the load event for the semantics to be sane.

There is perhaps a more critical issue that we should consider.  What
happens if the named window already exists?

Consider this case:

window.open("http://a/", "foo");
...
var w = window.open("http://b/", "foo");
w.postMessage("bar", "*");

I suspect the postMessage would be dispatched in this case, but the event
dispatch would probably go to the document at http://a/ instead of http://b/.
 This is because the browser has no way of knowing if http://b/ will
actually be displayable content.  It could be of a MIME type that should
just be downloaded (in which case the indicated window is not navigated).

So, queuing is probably not a good idea.  Workers do not have this issue
since they cannot be navigated like a window.

-Darin


On Tue, Jan 5, 2010 at 8:29 PM, Dirk Pranke dpra...@chromium.org wrote:

 I understand the rationale, and the workaround you suggest does work,
 (I have tested it, in FF, Safari and Chrome). But, as Jonas mentioned,
 this isn't what we do with workers, and it feels counter-intuitive to
 me (I'm having trouble thinking of other async messaging models that
 require an application-level handshake like this before messaging can
 commence). Are there reasons other than implementation complexity (an
 okay reason) or backwards-compatibility (a better reason) not to have
 the post work in this case? Put differently, would anything break
 (other than a rather oddly written app that explicitly counted on this
 behavior) if this did work?

 As an alternative, would it be possible to create an onChildLoad()
 event in the parent so that the parent could reliably send a message
 without needing the child's cooperation? These seems only marginally
 better than having the child post to the parent, so it may not be
 worth it ...

 -- Dirk

 On Tue, Jan 5, 2010 at 5:00 PM, Darin Fisher da...@chromium.org wrote:
  The window doesn't open synchronously, so you should have to wait for
  http://x/ to load (or for its document to at least be created) before
 you
  can start communicating with it.
  Note: If you instead open about:blank you should be able to communicate
  with it synchronously since about:blank is loaded synchronously.  It is
  special-cased.
  From the newly opened window, you could try posting a message to its
 opener.
   The opener could then handle that event and use it as a signal to know
 that
  it can now begin communicating with the newly opened window.
  I haven't tested any of this ;-)
  -Darin
 
  On Mon, Dec 21, 2009 at 7:24 PM, Dirk Pranke dpra...@chromium.org
 wrote:
 
  Hi all,
 
  In the course of testing something today, I attempted to create a
  window and immediately post a message to it, and was surprised that it
  didn't seem to work.
 
  E.g.:
 
  var w = window.open("http://x");
  w.postMessage("hello, world", "*");
 
  w never got the message - this seemed to be consistent across Safari,
  Chrome, and FF (all I had installed on my Mac at the time, so
  apologies to Opera, IE, and anyone else I've left out).
 
  Is this supposed to work? If not, is there a reliable way for the
  source to know when it is safe to send a message to the target? The
  only way I can think of is for the target to send a message back to
  the source, which only works if the target can get a reference to the
  source using window.opener, which may or may not be possible or
  desirable ...
 
  If this isn't supposed to work, can we state this explicitly in the
 spec?
 
  -- dirk
 
 



Re: [whatwg] Question about pushState

2010-01-04 Thread Darin Fisher
As follow-up, I've filed these bugs:

http://www.w3.org/Bugs/Public/show_bug.cgi?id=8629
https://bugs.webkit.org/show_bug.cgi?id=33160

(Privately, Maciej Stachowiak told me that he supports changing WebKit's
pushState implementation to match Firefox, and so I have filed a bug against
the spec to get it updated to reflect what implementors are doing.)

-Darin


On Wed, Dec 16, 2009 at 11:51 AM, Darin Fisher da...@chromium.org wrote:

 [Apologies if this has been discussed before, but I couldn't find it in the
 archives.]

 Why does pushState only prune forward session history entries corresponding
 to the same document?  I would have expected it to behave like a reference
 fragment navigation, which prunes *all* forward session history entries.
  Reason: it seems strange when a navigation doesn't result in a disabled
 forward button in the browser UI, so an app developer may be unsatisfied
 using pushState in place of reference fragment navigations.

 Thoughts?
 -Darin



[whatwg] Question about pushState

2009-12-16 Thread Darin Fisher
[Apologies if this has been discussed before, but I couldn't find it in the
archives.]

Why does pushState only prune forward session history entries corresponding
to the same document?  I would have expected it to behave like a reference
fragment navigation, which prunes *all* forward session history entries.
 Reason: it seems strange when a navigation doesn't result in a disabled
forward button in the browser UI, so an app developer may be unsatisfied
using pushState in place of reference fragment navigations.

Thoughts?
-Darin


Re: [whatwg] Question about pushState

2009-12-16 Thread Darin Fisher
On Wed, Dec 16, 2009 at 12:06 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Dec 16, 2009 at 11:51 AM, Darin Fisher da...@chromium.org wrote:
  [Apologies if this has been discussed before, but I couldn't find it in
 the
  archives.]
  Why does pushState only prune forward session history entries
 corresponding
  to the same document?  I would have expected it to behave like a
 reference
  fragment navigation, which prunes *all* forward session history entries.
   Reason: it seems strange when a navigation doesn't result in a
 disabled
  forward button in the browser UI, so an app developer may be unsatisfied
  using pushState in place of reference fragment navigations.
  Thoughts?

 I agree. I *think* what you are suggesting is what the implementation
 that Justin Lebar has written for Firefox does.

 / Jonas



Hmm... the WebKit implementation appears to do as spec'd.

-Darin


Re: [whatwg] localStorage mutex - a solution?

2009-11-25 Thread Darin Fisher
On Wed, Nov 25, 2009 at 9:16 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/25/09 6:20 AM, Ian Hickson wrote:

- script calling a method implemented in native code on a host object

 ...

 If this is a MUST, this seems like a possible compat issue depending on
 whether the method is native or library-provided, at the very least. There's
 also been talk at least in Gecko of self-hosting some DOM methods in JS
 (e.g. getElementById), at which point they will no longer be implemented in
 native code.

 I'm not sure this is a fatal issue (and I haven't been following this
 thread closely enough in general to be sure of anything); just pointing out
 that it's an issue.

 -Boris



I had a similar thought as it pertains to Chrome.  I also worry about having
to do some storage mutex processing for every native call.  That seems
like unfortunate overhead.

-Darin


Re: [whatwg] localStorage feedback

2009-11-03 Thread Darin Fisher
On Mon, Nov 2, 2009 at 3:46 PM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Tue, Nov 3, 2009 at 6:36 AM, Darin Fisher da...@chromium.org wrote:

 1a) Given a page (domain A) containing an iframe (domain B), have the
 outer page navigate the inner frame to about:blank.  This navigation
 completes synchronously, and the unload handler for the iframe runs before
 the navigation request completes.  This is true of all browsers.

 1b) Suppose the inner page has a pending XMLHttpRequest when the outer
 frame navigates the inner frame.  The XHR's onabort handler would run before
 the navigation to about:blank completes.


 These are really the same problem: synchronous cross-domain about:blank
 navigation. If navigation to about:blank has to be synchronous, then I guess
 it needs to drop the lock, at least in the cross-domain case.


That's correct.  My point is simple:  Here is another case where nesting can
happen that hadn't been foreseen.  Trying to foresee all such issues is
difficult.

Will we just keep amending the spec each time we find such a possible case?

I think it is far saner to say that any nesting leads to unlocking the
storage mutex.  The spec can then list cases where this nesting might occur.




 2) Set a break point in the Mozilla JS debugger.  This runs a nested event
 loop each time you single step so that it can drive the rest of the browser
 UI.

 3) Install a Firefox extension that runs a nested event loop in response
 to an event generated by content.  I debugged many Firefox crashes resulting
 from extensions that do this kind of thing for various reasons.


 These are internal Mozilla issues and should not be allowed to influence
 the design of the Web platform. They'll probably change for multi-process
 anyway.


OK, but my point is that the spec should afford implementors with the
ability to unlock the storage mutex at other times for reasons not mentioned
in the spec.




 I'm not convinced.  Look at Google Maps and Street View.  Gmail uses more
 Flash now than it used to.


 For new features, sure. But are they reimplementing existing browser-based
 functionality to use plugins instead?


I think it is sufficient to just talk in the context of new features.  A JS
library or component grows a new feature that suddenly starts using a
plugin.  Now, API calls that were not supposed to touch plugins start
touching plugins, and the storage mutex gets dropped.






 What will you do for Gecko when it supports content processes?


 Implement the spec, I hope!


 It seems odd to me that this behavior was put into the spec without any
 implementation experience to guide it.  The only multi-process
 implementations that I know of do not have a storage mutex.


 Lots of things are in the spec without implementation experience. I think
 we have time to collect more experience on this issue with multi-process
 browsers and revise the spec in light of it.


OK.  Please note my objection to the storage mutex.

-Darin


Re: [whatwg] localStorage feedback

2009-11-02 Thread Darin Fisher
On Mon, Nov 2, 2009 at 1:28 AM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Sun, Nov 1, 2009 at 3:53 AM, Darin Fisher da...@chromium.org wrote:

 On Fri, Oct 30, 2009 at 1:36 AM, Robert O'Callahan 
  rob...@ocallahan.org wrote:

 On Fri, Oct 30, 2009 at 7:27 PM, Darin Fisher da...@chromium.org wrote:

 You are right that the conditions are specific, but I don't know that
 that is the
 exhaustive list.  Rather than defining unlock points, I would implement
 implicit
 unlocking by having any nested attempt to acquire a lock cause the
 existing lock
 to be dropped.  Nesting can happen in the cases you mention, but
 depending on
 the UA, it may happen for other reasons too.


 What reasons?

 If these reasons are situations where it's fundamentally difficult,
 impossible, or non-performant to follow the spec, we should change the spec.
 Otherwise this would just be a bug in the UA.


 My point is that it is difficult to ensure that all situations where
 nesting can occur are understood a priori and that the list doesn't change
 over time.  Because we are talking about multi-threading synchronization in
 a very complex system, I would much prefer a more isolated and less fragile
 solution.

 Unlock if yieldForStorageUpdates is called.
 Unlock when returning from script execution.
 Unlock if another attempt to lock occurs (any form of nesting).

 In the third case, I'd probably log something to the JS console to alert
 developers.

 I believe this simple implementation covers most of the cases enumerated
 in the spec, and it has the property of being easier to reason about and
 easier to support (more future proof).


 I think this would make the spec too dependent on implementation details.
 If your implementation needlessly or accidentally nests script execution
 --- e.g. by firing an event synchronously that should be, or could be,
 asynchronous --- then you'd still conform to your spec while the behaviour
 you present to authors gets quietly worse.

 If your description is (or can be, after suitable modifications) equivalent
 to what the spec currently says, but the equivalence is subtle (which it
 would be!), then I think they should *both* be in the spec, and the spec
 should say they're equivalent. Then if we find they're not equivalent, we
 clearly have a bug in the spec which must be fixed --- not carte blanche to
 proceed in an undesirable direction. It would be a sort of spec-level
 assertion.


I think the spec currently calls attention to only some situations that
could lead to nesting of implicitly acquired storage locks.

I previously described some other situations, which you and others indicated
should be treated as WebKit and IE bugs.  I didn't look very far to dig
those up.  After some more thought, I came up with these additional cases
that the spec doesn't cover:

1a) Given a page (domain A) containing an iframe (domain B), have the outer
page navigate the inner frame to about:blank.  This navigation completes
synchronously, and the unload handler for the iframe runs before the
navigation request completes.  This is true of all browsers.

1b) Suppose the inner page has a pending XMLHttpRequest when the outer frame
navigates the inner frame.  The XHR's onabort handler would run before the
navigation to about:blank completes.

2) Set a break point in the Mozilla JS debugger.  This runs a nested event
loop each time you single step so that it can drive the rest of the browser
UI.

3) Install a Firefox extension that runs a nested event loop in response to
an event generated by content.  I debugged many Firefox crashes resulting
from extensions that do this kind of thing for various reasons.








 For example, a JS library might evolve to use flash for something small
 (like
 storage or sound) that it previously didn't use when I first developed
 my code.
 Voila, now my storage lock is released out from under me.


 This example still sounds overly contrived to me. Nevertheless, it seems
 strange to say that because there might be a few unexpected race conditions,
 you have decided to allow a much larger set of unexpected race conditions.


 Why is it contrived?


 Because libraries tend to initially use plugins and move towards using core
 browser functionality, not the other way around. But even if these library
 issues aren't contrived, I don't see how they justify making things a lot
 more unpredictable for everyone.


I'm not convinced.  Look at Google Maps and Street View.  Gmail uses more
Flash now than it used to.  Wave uses Gears for a variety of little things.
 There's a cool video gadget that swaps between HTML5 video or Flash
depending on the browser and the target media.




 What will you do for Gecko when it supports content processes?


 Implement the spec, I hope!


It seems odd to me that this behavior was put into the spec without any
implementation experience to guide it.  The only multi-process
implementations that I know of do not have a storage mutex.

-Darin

Re: [whatwg] localStorage feedback

2009-10-31 Thread Darin Fisher
On Fri, Oct 30, 2009 at 1:36 AM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Fri, Oct 30, 2009 at 7:27 PM, Darin Fisher da...@chromium.org wrote:

 You are right that the conditions are specific, but I don't know that that
 is the
 exhaustive list.  Rather than defining unlock points, I would implement
 implicit
 unlocking by having any nested attempt to acquire a lock cause the
 existing lock
 to be dropped.  Nesting can happen in the cases you mention, but depending
 on
 the UA, it may happen for other reasons too.


 What reasons?

 If these reasons are situations where it's fundamentally difficult,
 impossible, or non-performant to follow the spec, we should change the spec.
 Otherwise this would just be a bug in the UA.


My point is that it is difficult to ensure that all situations where nesting
can occur are understood a priori and that the list doesn't change over time.
 Because we are talking about multi-threading synchronization in a very
complex system, I would much prefer a more isolated and less fragile
solution.

Unlock if yieldForStorageUpdates is called.
Unlock when returning from script execution.
Unlock if another attempt to lock occurs (any form of nesting).

In the third case, I'd probably log something to the JS console to alert
developers.

I believe this simple implementation covers most of the cases enumerated in
the spec, and it has the property of being easier to reason about and easier
to support (more future proof).
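As a rough illustration of the third rule, here is a toy model (plain Node.js; browsers obviously do this in native code, and the class below is invented for illustration) in which a nested acquisition drops the existing lock and records the console warning suggested above:

```javascript
// Toy storage-mutex model: any nested attempt to acquire while another
// owner holds the lock drops the existing lock (rather than deadlocking)
// and records a developer-visible warning.
class StorageMutex {
  constructor() { this.holder = null; this.warnings = []; }
  acquire(owner) {
    if (this.holder !== null && this.holder !== owner) {
      this.warnings.push('storage lock dropped: ' + this.holder +
                         ' preempted by ' + owner);
      this.holder = null;   // implicit unlock on nesting
    }
    this.holder = owner;
  }
  release(owner) { if (this.holder === owner) this.holder = null; }
}

const mutex = new StorageMutex();
mutex.acquire('page script');        // implicit lock on first storage access
mutex.acquire('plugin re-entry');    // nesting: a plugin call re-enters
console.log(mutex.warnings.length);  // 1 -- the page script's lock is gone
mutex.release('plugin re-entry');
```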




 For example, a JS library might evolve to use flash for something small
 (like
 storage or sound) that it previously didn't use when I first developed my
 code.
 Voila, now my storage lock is released out from under me.


 This example still sounds overly contrived to me. Nevertheless, it seems
 strange to say that because there might be a few unexpected race conditions,
 you have decided to allow a much larger set of unexpected race conditions.


Why is it contrived?  Many developers use high level toolkits to get their
work done (e.g., jquery, prototype, dojo, google maps api, etc.).  People
are often one step removed from working directly with the web platform APIs.
 They have no idea what all is going on under the covers of those libraries,
and that's a fine thing.

The idea of unlocking whenever there is nesting occurred to me when Jeremy
and I were discussing how to implement unlocking for all of the cases
enumerated in the spec.  It equates to a good number of places in the code
that are quite separated from one another.  It seems very fragile to ensure
that all of those cases continue to be hooked properly.  I think it is very
hard to test that we get it right now and in the future.

But, if we step back, we realize that the implicit unlocking is all about
dealing with nesting of locks.  So, I think it is _way_ better to just
unlock the existing lock if an attempt is made to acquire a nested lock.




 At this point, I'm not favoring implementing the storage mutex in Chrome.
  I
 don't think we will have it in our initial implementation of LocalStorage.
  I think
 web developers that care will have to find another way to manage locking,
 like
 using a Web Database transaction or coordinating with a Shared Worker.


 Have you considered just not implementing LocalStorage? If it's so
 difficult for authors to use correctly and to implement according to the
 spec, this seems like the best path to me.


I have definitely considered it.  I would of course prefer to drop
LocalStorage and focus on something better.  Chrome is unfortunately in a
difficult spot given that everyone else has implemented LocalStorage (though
not necessarily to spec).

So, we are currently on track to support this feature without locking.  In
the future, we might add locking.  I've also considered other solutions,
like copy-on-write, which could obviously lead to data loss, but at least it
would ensure stability/consistency within a script's execution.  I would like
it if the spec were open to such implementations.
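A sketch of what such a copy-on-write scheme might look like (a toy Node.js model; the API names are invented): each script runs against a private snapshot and commits its writes at the end, so reads stay stable, but concurrent commits can clobber each other, which is the data-loss trade-off acknowledged above.

```javascript
// Copy-on-write store model: snapshot() copies the backing state at script
// entry; commit() folds back only the keys that script wrote.
function makeStore() {
  let backing = {};
  return {
    snapshot() {
      const view = { ...backing };  // stable view for the whole "script"
      const writes = {};
      return {
        get: k => view[k],
        set: (k, v) => { view[k] = v; writes[k] = v; },
        commit: () => Object.assign(backing, writes),
      };
    },
  };
}

const store = makeStore();
const a = store.snapshot();      // script A starts
const b = store.snapshot();      // script B starts concurrently
a.set('counter', 1);
b.set('counter', 2);
a.commit();
b.commit();                      // last write wins: A's update is lost
const c = store.snapshot();
console.log(c.get('counter'));   // 2
```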

What will you do for Gecko when it supports content processes?

-Darin


Re: [whatwg] localStorage feedback

2009-10-30 Thread Darin Fisher
On Mon, Oct 12, 2009 at 7:07 PM, Ian Hickson i...@hixie.ch wrote:
...

  the problem here is that localStorage is a pile of global variables.
  we are trying to give people global variables without giving them tools
  to synchronize access to them.  the claim i've heard is that developers
  are not savy enough to use those tools properly.  i agree that
  developers tend to use tools without fully understanding them.  ok, but
  then why are we giving them global variables?

 The global variables have implicit locks such that you can build the tools
 for explicit locking on top of them:

   // run this first, in one script block
   var id = localStorage['last-id'] + 1;
   localStorage['last-id'] = id;
   localStorage['email-ready-' + id] = 0; // begin

   // these can run each in separate script blocks as desired
   localStorage['email-subject-' + id] = subject;
   localStorage['email-from-' + id] = from;
   localStorage['email-to-' + id] = to;
   localStorage['email-body-' + id] = body;

   // run this last
   localStorage['email-ready-' + id] = 1; // commit


Dividing up work like this into separate SCRIPT elements to scope the
locking seems really awkward to me.
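For what it's worth, the quoted begin/commit idiom can be exercised end-to-end; here is a minimal sketch using a plain object as a stand-in for localStorage (Node.js has none), with a reader function, invented for illustration, that only trusts committed records:

```javascript
// Stand-in for localStorage; key names follow the quoted idiom.
const localStorage = {};

// writer: allocate an id, mark the record un-ready, fill it in, then commit
const id = (Number(localStorage['last-id']) || 0) + 1;
localStorage['last-id'] = id;
localStorage['email-ready-' + id] = 0;            // begin
localStorage['email-subject-' + id] = 'hi';
localStorage['email-body-' + id] = 'hello';
localStorage['email-ready-' + id] = 1;            // commit

// reader: only trust records whose ready flag has been set
function readEmail(i) {
  if (localStorage['email-ready-' + i] !== 1) return null;  // uncommitted
  return {
    subject: localStorage['email-subject-' + i],
    body: localStorage['email-body-' + i],
  };
}

console.log(readEmail(id));      // { subject: 'hi', body: 'hello' }
console.log(readEmail(id + 1));  // null -- never began, so never committed
```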




 On Thu, 24 Sep 2009, Darin Fisher wrote:
 
  The current API exposes race conditions to the web.  The implicit
  dropping of the storage lock is that.  In Chrome, we'll have to drop an
  existing lock whenever a new lock is acquired.  That can happen due to a
  variety of really odd cases (usually related to nested loops or nested
  JS execution), which will be difficult for developers to predict,
  especially if they are relying on third-party JS libraries.
 
  This issue seems to be discounted for reasons I do not understand.

 You can only lose the lock in very specific conditions. Those conditions
 are rarely going to interact with code that actually does storage work in
 a way that relies on the lock:

  - changing document.domain
  - history.back(), .forward(), .go(n)
  - invoking a plugin
  - alert(), confirm(), prompt(), print()
  - showModalDialog()
  - yieldForStorageUpdates()

 I discussed this in more detail here:


 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-September/023059.html


You are right that the conditions are specific, but I don't know that that
is the
exhaustive list.  Rather than defining unlock points, I would implement
implicit
unlocking by having any nested attempt to acquire a lock cause the existing
lock
to be dropped.  Nesting can happen in the cases you mention, but depending
on
the UA, it may happen for other reasons too.

This combined with the fact that most people use JS libraries means that the
coder is not going to have an easy time knowing when these specific
conditions
are met.  I don't think defining a set of allowed unlock points is
sufficient to make
this API not be a minefield for users.

For example, a JS library might evolve to use flash for something small
(like
storage or sound) that it previously didn't use when I first developed my
code.
Voila, now my storage lock is released out from under me.

At this point, I'm not favoring implementing the storage mutex in Chrome.  I
don't think we will have it in our initial implementation of LocalStorage.
 I think
web developers that care will have to find another way to manage locking,
like
using a Web Database transaction or coordinating with a Shared Worker.

Sorry to be a grump about this, but a cross-process lock that lasts until JS
returns is just going to slow down the web.  It is a really bad idea for
that
reason.

-Darin



 On Tue, 8 Sep 2009, Chris Jones wrote:
 
  Can those in the first camp explain why mutex semantics is better than
  transaction semantics?  And why it's desirable to have one DB spec
  specify transaction semantics (Web Database) and a second specify
  mutex semantics (localStorage)?

 I don't think it's desirable. It's just what we have, though an accident
 of history.


 Where we're at: localStorage can't really change. It is what it is.

 We have a better proposal, Web Database, but not everybody wants to
 implement it.

 To move forward, I would recommend that someone come up with a storage
 proposal with the following characteristics:

  * All major browsers vendors are willing to implement it.
  * Compatible with workers.
  * Doesn't have any race conditions.
  * Doesn't involve a cross-process mutex that blocks interaction.
  * Stores structured data.
  * Can be queried in arbitrary ways.
  * Doesn't expose authors to locking primitives.

 Then we can replace Web Database with it and we can move on.

 I suggest that the right venue for this discussion would be the W3C Web
 Apps group, at public-weba...@w3.org.


 On Wed, 9 Sep 2009, Darin Fisher wrote:
 
  What about navigating an iframe to a reference fragment, which could
  trigger a scroll event?  The scrolling has to be done synchronously for
  compat with the web.

 You can only do that with same-domain pages, as far as I can tell.

 (Does

Re: [whatwg] Storage events

2009-10-18 Thread Darin Fisher
On Sat, Oct 17, 2009 at 11:58 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Sat, Oct 17, 2009 at 8:37 PM, Darin Fisher da...@chromium.org wrote:
  On Sat, Oct 17, 2009 at 8:20 PM, Ian Hickson i...@hixie.ch wrote:
  ...
 
  On Thu, 15 Oct 2009, Darin Fisher wrote:
  
   This is interesting since documentURI is a read/write property:
   http://www.w3.org/TR/DOM-Level-3-Core/core.html#Document3-documentURI
 
  I assume that is a mistake. Does anyone support documentURI? It seems
  completely redundant with document.URL.
 
 
  Gecko and WebKit appear to both support documentURI.  Only WebKit allows
 it
  to be modified.

 Huh? So WebKit effectively has one of the main features of pushState
 already? Does the URL-bar change? Does the referrer change for
 subsequent requests such as navigation? I'm guessing it doesn't hook
 the back-button the way that pushState does though.

 / Jonas



It appears to impact the baseURL for the document.

-Darin


Re: [whatwg] Storage events

2009-10-17 Thread Darin Fisher
On Sat, Oct 17, 2009 at 8:20 PM, Ian Hickson i...@hixie.ch wrote:
...

 On Thu, 15 Oct 2009, Darin Fisher wrote:
 
  This is interesting since documentURI is a read/write property:
  http://www.w3.org/TR/DOM-Level-3-Core/core.html#Document3-documentURI

 I assume that is a mistake. Does anyone support documentURI? It seems
 completely redundant with document.URL.


Gecko and WebKit appear to both support documentURI.  Only WebKit allows it
to be modified.
-Darin


Re: [whatwg] Structured clone algorithm on LocalStorage

2009-10-02 Thread Darin Fisher
On Fri, Oct 2, 2009 at 8:08 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Sep 30, 2009 at 10:11 PM, Darin Fisher da...@chromium.org wrote:

 On Tue, Sep 29, 2009 at 11:48 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Tue, Sep 29, 2009 at 12:19 AM, Darin Fisher da...@chromium.org
 wrote:
  On Thu, Sep 24, 2009 at 11:57 PM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Thu, Sep 24, 2009 at 9:04 PM, Darin Fisher da...@chromium.org
 wrote:
   On Thu, Sep 24, 2009 at 4:43 PM, Jonas Sicking jo...@sicking.cc
 wrote:
  
   On Thu, Sep 24, 2009 at 10:52 AM, Darin Fisher da...@chromium.org
 
   wrote:
On Thu, Sep 24, 2009 at 10:40 AM, Jonas Sicking jo...@sicking.cc
 
wrote:
   
On Thu, Sep 24, 2009 at 1:17 AM, Darin Fisher 
 da...@chromium.org
wrote:
 On Thu, Sep 24, 2009 at 12:20 AM, Jonas Sicking
 jo...@sicking.cc
 wrote:

 On Wed, Sep 23, 2009 at 10:19 PM, Darin Fisher
 da...@chromium.org
 wrote:
 
  ... snip ...
 
 
  multi-core is the future.  what's the opposite of
 fine-grained
  locking?
   it's not good ;-)
  the implicit locking mechanism as spec'd is super lame.
   implicitly
  unlocking under
  mysterious-to-the-developer circumstances!  how can that be
 a
  good
  thing?
  storage.setItem(y,
 
 function_involving_implicit_unlocking(storage.getItem(x)));

 I totally agree on all points. The current API has big
 imperfections.
 However I haven't seen any workable counter proposals so far,
 and
 I
 honestly don't believe there are any as long as our goals
 are:

 * Don't break existing users of the current implementations.
 * Don't expose race conditions to the web.
 * Don't rely on authors getting explicit locking mechanisms
 right.


 The current API exposes race conditions to the web.  The
 implicit
 dropping of the storage lock is that.  In Chrome, we'll have
 to
 drop
 an existing lock whenever a new lock is acquired.  That can
 happen
 due to a variety of really odd cases (usually related to
 nested
 loops
 or nested JS execution), which will be difficult for
 developers to
 predict, especially if they are relying on third-party JS
 libraries.
 This issue seems to be discounted for reasons I do not
 understand.
   
I don't believe we've heard about this before, so that would be
 the
reason it hasn't been taken into account.
   
 So you're saying that chrome would be unable to implement the
 current
storage mutex as specified in spec? I.e. one that is only
 released
at
the explicit points that the spec defines? That seems like a
 huge
problem.
   
No, no... my point is that to the application developer, those
explicit
points will appear quite implicit and mysterious.  This is why I
called
out third-party JS libraries.  One day, a function that you are
 using
might transition to scripting a plugin, which might cause a
 nested
loop, which could then force the lock to be released.  As a
programmer,
the unlocking is not explicit or predictable.
  
   Ah, indeed, this is a problem. However the unfortunate fact remains
   that so far no other workable solution has been proposed.
  
   OK, so we agree that the current solution doesn't meet the goals you
   stated above :-(
 
  Well, it addresses them as long as users are aware of the risk, and
  properly document whether their various library functions will release
  the lock or not. However I agree that it's unlikely that they will do
  so correctly.
 
  I thought the point of not having lock APIs was that users shouldn't
 have
  to understand locks ;-)  The issue I've raised here is super subtle.
  We
  have not succeeded in avoiding subtlety!

 I think we're mostly in agreement. What I'm not sure about is what you
 are proposing we do with localStorage? Remove it from the spec? Change
 the API? Something else?


 I'm glad we agree.

 I'm not sure what we should do.  It seems like there is a legacy API
  argument for sticking with the current proposal even though it is flawed
 and
 HTML5 is not yet final.  (It has also not been implemented by browsers for
 very long.)  Stated that way, it sounds like a weak argument for
 preserving
 the API as is, and we should just fix it to be better.

 My understanding is that removal is not a popular position.  However,
 given
 that more browsers are moving to be multi-process, I have to say that I'm
 a
 bit surprised there isn't more support for ditching the current
 localStorage
 API.


 You're preaching to the choir :) I'd recommend talking to apple and
 microsoft directly. I don't know what their plans are regarding all this.


Fair enough :-)
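The lost-update hazard being debated — an implicit lock release landing between a read and a dependent write — can be modeled outside the browser. Below is a toy sketch (plain JavaScript, not the real localStorage API): the `await` stands in for the point where the spec's storage mutex could be silently dropped, and one of two concurrent increments is lost.

```javascript
// Toy model of the implicit-unlock hazard: a shared store and two
// read-modify-write "increments". The await marks the point where the
// storage mutex could be silently dropped (plugin call, nested loop, ...).
const store = { counter: 0 };

function getItem(key) { return store[key]; }
function setItem(key, value) { store[key] = value; }

async function increment() {
  const value = getItem('counter'); // read while "holding the lock"
  await Promise.resolve();          // lock implicitly released here
  setItem('counter', value + 1);    // write based on a now-stale read
}

Promise.all([increment(), increment()]).then(() => {
  console.log(store.counter); // 1, not 2: one increment was lost
});
```

This is exactly the `storage.setItem(y, function_involving_implicit_unlocking(storage.getItem(x)))` shape quoted earlier in the thread: the API looks atomic but is not.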







 Moreover, there are other examples which have been discussed on
 the
list.  There are some DOM operations that can result in a frame
receiving
a DOM event synchronously.  That can result in a nesting of
 storage

Re: [whatwg] Structured clone algorithm on LocalStorage

2009-10-02 Thread Darin Fisher
On Fri, Oct 2, 2009 at 9:43 PM, Jonas Sicking jo...@sicking.cc wrote:

  Moreover, there are other examples which have been discussed on
 the
list.  There are some DOM operations that can result in a frame
receiving
a DOM event synchronously.  That can result in a nesting of
 storage
locks,
which can force us to have to implicitly unlock the outermost
 lock to
avoid
deadlocks.  Again, the programmer will have very poor
 visibility into
when
these things can happen.
  
   So far I don't think it has been shown that these events need to
 be
   synchronous. They all appear to be asynchronous in gecko, and in
 the
   case of different-origin frames, I'm not even sure there's a way
 for
   pages to detect if the event was fired asynchronously or not.
  
   IE and WebKit dispatch some of them synchronously.  It's hard to
 say
   which
  is correct or if it causes any web compat issues.  I'm also not
 sure that
   we
   have covered all of the cases.
 
  It still seems to me that it's extremely unlikely that pages depend
 on
  cross origin events to fire synchronously. I can't even think of a
 way
  to test if a browser dispatches these events synchronously or not.
 Can
  you?
 
  I agree that it seems uncommon.  Maybe there could be some odd app
 that
  does something after resizing an iframe that could be dependent on
 the
  event handler setting some data field.  This kind of thing is
 probably even
  less common in the cross-origin case.

 But how would you read that data field in the cross-origin frame? I
 think it might be possible, but extremely hard.


 Yeah.

 My concern is simply that I cannot prove that I don't have to worry
 about this
 problem.  Future web APIs might also inadvertently make matters worse.


 I agree it's not ideal, but at the same time I don't think that not
 allowing synchronous cross-origin APIs is a huge burden. You campaigned
 heavily against that when we were designing postMessage for wholly other
  reasons. I would imagine those reasons will hold true no matter what.


 Agreed.  That's a good point.  In that case, I was concerned about stack
 depth.  The same issue might apply here.  Hmm...


 As far as I can see it does.




 ...snip...


   Not quite sure I follow your proposal. How would you for example
   increase the value of a property by one without risking race
   conditions? Or keep two values in different properties in sync?
 I.e.
   so that if you update one always update the other, so that they
 never
   have different values.
  
   / Jonas
  
  
   Easy.  Just like with database, the transaction is the storage
 lock.
Any
   storage
   operations performed on that transaction are done atomically.
  However,
   all
   storage
   operations are asynchronous.  You basically string together
 asynchronous
   storage
   operations by using the same transaction for each.
   We could add methods to get/set multiple items at once to simplify
 life
   for
   the coder.
 
  I think I still don't understand your proposal, could you give some
  code examples?
 
 
 
  ripping off database:
  interface ValueStorage {
void transaction(in DOMString namespace, in
  ValueStorageTransactionCallback callback);
  };
  interface ValueStorageTransactionCallback {
void handleEvent(in ValueStorageTransaction transaction);
  };
  interface ValueStorageTransaction {
void readValue(in DOMString name, in ValueStorageReadCallback
 callback);
void writeValue(in DOMString name, in DOMString value);
  };
  interface ValueStorageReadCallback {
void handleEvent(in ValueStorageTransaction transaction, in
 DOMString
  value);
  };
  then, to use these interfaces, you could implement thread-safe
 increment:
  window.localStorage.transaction(slice, function(transaction) {
transaction.readValue(foo, function(transaction, fooValue) {
  transaction.writeValue(foo, ++fooValue);
})
  })
  to fetch multiple values, you could do this:
  var values = [];
  var numValues = 10;
  function readNextValue(transaction) {
if (values.length == numValues)
 return;  // done!
var index = values.length;
transaction.readValue(value + index, function(transaction, value)
 {
  values.push(value);
  readNextValue(transaction);
})
  }
  window.localStorage.transaction(slice, readNextValue);
  This has the property that all IO is non-blocking and the lock is
 held
  only
  for a very limited scope.  The programmer is however free to extend
 the
  life of the lock as needed.

 What do you mean by that the lock is held for only a very limited
 scope? You still want to prevent modifications for as long as the
 transaction is being used right? I.e. no modifications can happen
 between the read and the write in the first example, and between the
 different reads in the second.


 Yes.  I only meant that the programmer doesn't have to call a special
 function to close the transaction.  It closes by virtue of the last
 handleEvent
 call 
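The quoted ValueStorage/ValueStorageTransaction interfaces are enough to mock the proposal in-memory and check that the increment example is race-free. A sketch follows; the implementation details are assumptions (transactions serialized on a promise queue, a transaction closing when its last read callback returns, matching the "closes by virtue of the last handleEvent call" description above), not a real API.

```javascript
// In-memory mock of the quoted ValueStorage proposal. Not a real API:
// the queueing and close-on-last-callback behavior are assumptions.
class ValueStorageTransaction {
  constructor(store, onClose) {
    this.store = store;
    this.pending = 0;   // outstanding readValue callbacks
    this.onClose = onClose;
  }
  readValue(name, callback) {
    this.pending++;
    setTimeout(() => {  // reads complete asynchronously
      callback(this, this.store[name]);
      if (--this.pending === 0) this.onClose(); // last callback closes tx
    }, 0);
  }
  writeValue(name, value) { this.store[name] = value; }
}

class ValueStorage {
  constructor() { this.store = {}; this.queue = Promise.resolve(); }
  transaction(namespace, callback) {
    // Transactions run one at a time, so each sees a consistent store.
    this.queue = this.queue.then(() => new Promise(resolve => {
      const tx = new ValueStorageTransaction(this.store, resolve);
      callback(tx);
      if (tx.pending === 0) resolve(); // no reads scheduled: close now
    }));
  }
}

// The thread-safe increment from the quoted example:
const storage = new ValueStorage();
storage.store.foo = 0; // seed the backing store directly (mock only)
const bump = tx => tx.readValue('foo', (tx, v) => tx.writeValue('foo', v + 1));
storage.transaction('slice', bump);
storage.transaction('slice', bump);
storage.transaction('slice', tx =>
  tx.readValue('foo', (tx, v) => console.log(v))); // 2: no lost update
```

Because each transaction holds the (queued) lock across its read and write, two concurrent increments cannot interleave, unlike the synchronous getItem/setItem case.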

Re: [whatwg] Async scripts

2009-09-30 Thread Darin Fisher
On Wed, Sep 30, 2009 at 1:36 AM, Jonas Sicking jo...@sicking.cc wrote:

 There's two things that I don't understand about the 'async' attribute
 on script elements:

 First of all, why is the parser responsible for executing scripts on
 the list of scripts that will execute asynchronously, as defined by
 [1]? It would seem simpler to always perform the steps defined further
 down, no matter if the document is still being parsed or not. This is
 mostly an editorial issue, but actually seems to make a slight
 behavioral difference. Right now if a document contains two async
 scripts, the tokenizer must always run one step between the execution
 of the two. This doesn't seem like a particularly desirable, nor
 testable, behavior. It's also really painful to implement if the
 tokenizer is running on a separate thread. Same thing applies to the
 list of scripts that will execute as soon as possible.

 Second, why are async scripts forced to run in the order they appear
 in the markup? I thought the whole idea of the async attribute was to
 run the scripts as soon as possible, while still not blocking parsing.
 This leads to weird situations like if a document contains the
 following markup:

 !DOCTYPE html
 html
  head
title.../title
script src=make-tables-sortable.js/script
script src=analytics.js async/script
  /head
  body.../body
 /html

 In this example, if the first script is changed from being a normal
 script (as above), to being an async script, that could lead to the
 analytics script actually executing later.


Did you perhaps mean to say if both scripts are changed to being async?

If not, then I'm confused because you prefaced this example with why are
async
scripts forced to run in the order they appear in the markup?

I agree that normal scripts should not be deferred behind async scripts that
happen to be listed before the normal scripts.  I don't think that is the
intent
of the async script spec.

-Darin




 I thought the purpose of the async attribute was to avoid people
 having to do nasty DOM hacks in order to increase pageload
 performance, but this makes it seem like such hacks are still needed.

 What is the use case for the current behavior?

 [1]
 http://www.whatwg.org/specs/web-apps/current-work/?slow-browser#when-a-script-completes-loading

 / Jonas



Re: [whatwg] Async scripts

2009-09-30 Thread Darin Fisher
On Wed, Sep 30, 2009 at 9:59 PM, Darin Fisher da...@chromium.org wrote:

 On Wed, Sep 30, 2009 at 1:36 AM, Jonas Sicking jo...@sicking.cc wrote:

 There's two things that I don't understand about the 'async' attribute
 on script elements:

 First of all, why is the parser responsible for executing scripts on
 the list of scripts that will execute asynchronously, as defined by
 [1]? It would seem simpler to always perform the steps defined further
 down, no matter if the document is still being parsed or not. This is
 mostly an editorial issue, but actually seems to make a slight
 behavioral difference. Right now if a document contains two async
 scripts, the tokenizer must always run one step between the execution
 of the two. This doesn't seem like a particularly desirable, nor
 testable, behavior. It's also really painful to implement if the
 tokenizer is running on a separate thread. Same thing applies to the
 list of scripts that will execute as soon as possible.

 Second, why are async scripts forced to run in the order they appear
 in the markup? I thought the whole idea of the async attribute was to
 run the scripts as soon as possible, while still not blocking parsing.
 This leads to weird situations like if a document contains the
 following markup:

 !DOCTYPE html
 html
  head
title.../title
script src=make-tables-sortable.js/script
script src=analytics.js async/script
  /head
  body.../body
 /html

 In this example, if the first script is changed from being a normal
 script (as above), to being an async script, that could lead to the
 analytics script actually executing later.


 Did you perhaps mean to say if both scripts are changed to being async?

 If not, then I'm confused because you prefaced this example with why are
 async
 scripts forced to run in the order they appear in the markup?

 I agree that normal scripts should not be deferred behind async scripts
 that
 happen to be listed before the normal scripts.  I don't think that is the
 intent
 of the async script spec.

 -Darin


D'oh, ignore me.  I overlooked the async attribute on the second script
tag.

Anyways, I agree with you.  Forcing the scripts to run in the order they are
listed
seems to defeat the purpose of the async attribute.

I'm guessing it was spec'd this way to minimize problems that could occur if
there
were any dependencies between the scripts.

-Darin
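The two policies under discussion can be modeled with timers: run each script as soon as its own load completes, versus hold execution to markup order even when a later script finishes loading first. A toy simulation (not real script loading; the load times are invented, with analytics.js finishing first):

```javascript
// Toy simulation comparing "execute on load" with "execute in markup
// order" for the two scripts from Jonas's example.
const scripts = [
  { name: 'make-tables-sortable.js', loadMs: 30 },
  { name: 'analytics.js', loadMs: 5 },
];
const load = s => new Promise(resolve => setTimeout(() => resolve(s), s.loadMs));
const results = {};

async function asSoonAsPossible() {
  const executed = [];
  // Each script runs the moment it finishes loading.
  await Promise.all(scripts.map(s => load(s).then(t => executed.push(t.name))));
  return executed;
}

async function inMarkupOrder() {
  const executed = [];
  const loads = scripts.map(s => load(s)); // loads still proceed in parallel...
  for (const p of loads) {
    const s = await p;  // ...but execution waits for earlier-in-markup scripts
    executed.push(s.name);
  }
  return executed;
}

(async () => {
  results.asap = await asSoonAsPossible();
  results.ordered = await inMarkupOrder();
  console.log(results.asap[0]);    // analytics.js: ran as soon as it loaded
  console.log(results.ordered[0]); // make-tables-sortable.js: ordering forced
})();
```

Under the in-markup-order rule, the fast-loading analytics script is held back by the slow one, which is the delay the thread is objecting to.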






 I thought the purpose of the async attribute was to avoid people
 having to do nasty DOM hacks in order to increase pageload
 performance, but this makes it seem like such hacks are still needed.

 What is the use case for the current behavior?

 [1]
 http://www.whatwg.org/specs/web-apps/current-work/?slow-browser#when-a-script-completes-loading

 / Jonas





Re: [whatwg] Structured clone algorithm on LocalStorage

2009-09-29 Thread Darin Fisher
On Thu, Sep 24, 2009 at 11:57 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Sep 24, 2009 at 9:04 PM, Darin Fisher da...@chromium.org wrote:
  On Thu, Sep 24, 2009 at 4:43 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Thu, Sep 24, 2009 at 10:52 AM, Darin Fisher da...@chromium.org
 wrote:
   On Thu, Sep 24, 2009 at 10:40 AM, Jonas Sicking jo...@sicking.cc
   wrote:
  
   On Thu, Sep 24, 2009 at 1:17 AM, Darin Fisher da...@chromium.org
   wrote:
On Thu, Sep 24, 2009 at 12:20 AM, Jonas Sicking jo...@sicking.cc
wrote:
   
On Wed, Sep 23, 2009 at 10:19 PM, Darin Fisher 
 da...@chromium.org
wrote:


... snip ...


 multi-core is the future.  what's the opposite of fine-grained
 locking?
  it's not good ;-)
 the implicit locking mechanism as spec'd is super lame.
  implicitly
 unlocking under
 mysterious-to-the-developer circumstances!  how can that be a
 good
 thing?
 storage.setItem(y,
 function_involving_implicit_unlocking(storage.getItem(x)));
   
I totally agree on all points. The current API has big
imperfections.
However I haven't seen any workable counter proposals so far, and
 I
honestly don't believe there are any as long as our goals are:
   
* Don't break existing users of the current implementations.
* Don't expose race conditions to the web.
* Don't rely on authors getting explicit locking mechanisms right.
   
   
The current API exposes race conditions to the web.  The implicit
dropping of the storage lock is that.  In Chrome, we'll have to
 drop
an existing lock whenever a new lock is acquired.  That can happen
due to a variety of really odd cases (usually related to nested
 loops
or nested JS execution), which will be difficult for developers to
predict, especially if they are relying on third-party JS
 libraries.
This issue seems to be discounted for reasons I do not understand.
  
   I don't believe we've heard about this before, so that would be the
   reason it hasn't been taken into account.
  
   So you're saying that chrome would be unable to implement the current
   storage mutex as specified in spec? I.e. one that is only released at
   the explicit points that the spec defines? That seems like a huge
   problem.
  
   No, no... my point is that to the application developer, those
   explicit
   points will appear quite implicit and mysterious.  This is why I
 called
   out third-party JS libraries.  One day, a function that you are using
   might transition to scripting a plugin, which might cause a nested
   loop, which could then force the lock to be released.  As a
 programmer,
   the unlocking is not explicit or predictable.
 
  Ah, indeed, this is a problem. However the unfortunate fact remains
  that so far no other workable solution has been proposed.
 
  OK, so we agree that the current solution doesn't meet the goals you
  stated above :-(

 Well, it addresses them as long as users are aware of the risk, and
 properly document whether their various library functions will release
 the lock or not. However I agree that it's unlikely that they will do
 so correctly.


I thought the point of not having lock APIs was that users shouldn't have
to understand locks ;-)  The issue I've raised here is super subtle.  We
have not succeeded in avoiding subtlety!




   Moreover, there are other examples which have been discussed on the
   list.  There are some DOM operations that can result in a frame
   receiving
   a DOM event synchronously.  That can result in a nesting of storage
   locks,
   which can force us to have to implicitly unlock the outermost lock to
   avoid
   deadlocks.  Again, the programmer will have very poor visibility into
   when
   these things can happen.
 
  So far I don't think it has been shown that these events need to be
  synchronous. They all appear to be asynchronous in gecko, and in the
  case of different-origin frames, I'm not even sure there's a way for
  pages to detect if the event was fired asynchronously or not.
 
  IE and WebKit dispatch some of them synchronously.  It's hard to say
 which
  is correct or if it causes any web compat issues.  I'm also not sure that
 we
  have covered all of the cases.

 It still seems to me that it's extremely unlikely that pages depend on
 cross origin events to fire synchronously. I can't even think of a way
 to test if a browser dispatches these events synchronously or not. Can
 you?



I agree that it seems uncommon.  Maybe there could be some odd app that
does something after resizing an iframe that could be dependent on the
event handler setting some data field.  This kind of thing is probably even
less common in the cross-origin case.




  Our approach to implementing implicit locking (if we must) will be to
 detect
  nested locking, and simply unlock the first held lock to basically
 prevent
  nested locking.  This way we don't have to know all of the cases where
 this
  can happen.

 So

Re: [whatwg] Structured clone algorithm on LocalStorage

2009-09-24 Thread Darin Fisher
On Thu, Sep 24, 2009 at 12:20 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Sep 23, 2009 at 10:19 PM, Darin Fisher da...@chromium.org wrote:
 
 
  On Wed, Sep 23, 2009 at 8:10 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Wed, Sep 23, 2009 at 3:29 PM, Jeremy Orlow jor...@chromium.org
 wrote:
   On Wed, Sep 23, 2009 at 3:15 PM, Jonas Sicking jo...@sicking.cc
 wrote:
  
   On Wed, Sep 23, 2009 at 2:53 PM, Brett Cannon br...@python.org
 wrote:
On Wed, Sep 23, 2009 at 13:35, Jeremy Orlow jor...@chromium.org
wrote:
What are the use cases for wanting to store data beyond strings
 (and
what
can be serialized into strings) in LocalStorage?  I can't think of
any
that
outweigh the negatives:
1)  From previous threads, I think it's fair to say that we can
 all
agree
that LocalStorage is a regrettable API (mainly due to its
synchronous
nature).  If so, it seems that making it more powerful and thus
 more
attractive to developers is just asking for trouble.  After all,
 the
more
people use it, the more lock contention there'll be, and the more
browser UI
jank users will be sure to experience.  This will also be worse
because
it'll be easier for developers to store large objects in
LocalStorage.
2)  As far as I can tell, there's nowhere else in the spec where
you
have
to serialize structured clone(able) data to disk.  Given that
LocalStorage
is supposed to throw an exception if any ImageData is contained
 and
since
File and FileData objects are legal, it seems as though making
LocalStorage
handle structured clone data has a fairly high cost to
 implementors.
 Not to
mention that disallowing ImageData in only this one case is not
intuitive.
I think allowing structured clone(able) data in LocalStorage is a
big
mistake.  Enough so that, if SessionStorage and LocalStorage can't
diverge
on this issue, it'd be worth taking the power away from
SessionStorage.
J
   
Speaking from experience, I have been using localStorage in my PhD
thesis work w/o any real need for structured clones (I would have
used
Web Database but it isn't widely used yet and I was not sure if it
was
going to make the cut in the end). All it took to come close to
simulating structured clones now was to develop my own
 compatibility
wrapper for localStorage (http://realstorage.googlecode.com for
 those
who care) and add setJSONObject() and getJSONObject() methods on
 the
wrapper. Works w/o issue.
  
   Actually, this seems like a prime reason *to* add structured storage
   support. Obviously string data wasn't enough for you so you had to
   write extra code in order to work around that. If structured clones
   had been natively supported you both would have had to write less
   code, and the resulting algorithms would have been faster. Faster
    since the browser can serialize/parse to/from a binary internal
   format faster than to/from JSON through the JSON serializer/parser.
  
   Yes, but since LocalStorage is already widely deployed, authors are
   stuck
    with the structured clone-less version of LocalStorage for a very
   long
   time.  So the only way an app can store anything that can't be
 JSONified
   is
   to break backwards compatibility.
  
  
  
   On Wed, Sep 23, 2009 at 3:11 PM, Jonas Sicking jo...@sicking.cc
  wrote:
  
   On Wed, Sep 23, 2009 at 1:35 PM, Jeremy Orlow jor...@chromium.org
   wrote:
What are the use cases for wanting to store data beyond strings
 (and
what
can be serialized into strings) in LocalStorage?  I can't think of
any
that
outweigh the negatives:
1)  From previous threads, I think it's fair to say that we can all
agree
that LocalStorage is a regrettable API (mainly due to its
 synchronous
nature).  If so, it seems that making it more powerful and thus
 more
attractive to developers is just asking for trouble.  After all,
 the
more
people use it, the more lock contention there'll be, and the more
browser UI
jank users will be sure to experience.  This will also be worse
because
it'll be easier for developers to store large objects in
LocalStorage.
2)  As far as I can tell, there's nowhere else in the spec where
 you
have
to serialize structured clone(able) data to disk.  Given that
LocalStorage
is supposed to throw an exception if any ImageData is contained and
since
File and FileData objects are legal, it seems as though making
LocalStorage
handle structured clone data has a fairly high cost to
 implementors.
 Not to
mention that disallowing ImageData in only this one case is not
intuitive.
I think allowing structured clone(able) data in LocalStorage is a
 big
mistake.  Enough so that, if SessionStorage and LocalStorage can't
diverge
on this issue, it'd be worth

Re: [whatwg] Structured clone algorithm on LocalStorage

2009-09-24 Thread Darin Fisher
On Thu, Sep 24, 2009 at 1:17 AM, Darin Fisher da...@chromium.org wrote:

 On Thu, Sep 24, 2009 at 12:20 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Sep 23, 2009 at 10:19 PM, Darin Fisher da...@chromium.org
 wrote:
 
 
  On Wed, Sep 23, 2009 at 8:10 PM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Wed, Sep 23, 2009 at 3:29 PM, Jeremy Orlow jor...@chromium.org
 wrote:
   On Wed, Sep 23, 2009 at 3:15 PM, Jonas Sicking jo...@sicking.cc
 wrote:
  
   On Wed, Sep 23, 2009 at 2:53 PM, Brett Cannon br...@python.org
 wrote:
On Wed, Sep 23, 2009 at 13:35, Jeremy Orlow jor...@chromium.org
wrote:
What are the use cases for wanting to store data beyond strings
 (and
what
can be serialized into strings) in LocalStorage?  I can't think
 of
any
that
outweigh the negatives:
1)  From previous threads, I think it's fair to say that we can
 all
 agree
that LocalStorage is a regrettable API (mainly due to its
synchronous
nature).  If so, it seems that making it more powerful and thus
 more
attractive to developers is just asking for trouble.  After all,
 the
more
people use it, the more lock contention there'll be, and the more
browser UI
jank users will be sure to experience.  This will also be worse
because
it'll be easier for developers to store large objects in
LocalStorage.
2)  As far as I can tell, there's nowhere else in the spec where
you
have
to serialize structured clone(able) data to disk.  Given that
LocalStorage
is supposed to throw an exception if any ImageData is contained
 and
since
File and FileData objects are legal, it seems as though making
LocalStorage
handle structured clone data has a fairly high cost to
 implementors.
 Not to
mention that disallowing ImageData in only this one case is not
intuitive.
I think allowing structured clone(able) data in LocalStorage is a
big
mistake.  Enough so that, if SessionStorage and LocalStorage
 can't
diverge
on this issue, it'd be worth taking the power away from
SessionStorage.
J
   
Speaking from experience, I have been using localStorage in my PhD
thesis work w/o any real need for structured clones (I would have
used
Web Database but it isn't widely used yet and I was not sure if it
was
going to make the cut in the end). All it took to come close to
simulating structured clones now was to develop my own
 compatibility
wrapper for localStorage (http://realstorage.googlecode.com for
 those
who care) and add setJSONObject() and getJSONObject() methods on
 the
wrapper. Works w/o issue.
  
   Actually, this seems like a prime reason *to* add structured storage
   support. Obviously string data wasn't enough for you so you had to
   write extra code in order to work around that. If structured clones
   had been natively supported you both would have had to write less
   code, and the resulting algorithms would have been faster. Faster
    since the browser can serialize/parse to/from a binary internal
   format faster than to/from JSON through the JSON serializer/parser.
  
   Yes, but since LocalStorage is already widely deployed, authors are
   stuck
    with the structured clone-less version of LocalStorage for a very
   long
   time.  So the only way an app can store anything that can't be
 JSONified
   is
   to break backwards compatibility.
  
  
  
   On Wed, Sep 23, 2009 at 3:11 PM, Jonas Sicking jo...@sicking.cc
  wrote:
  
   On Wed, Sep 23, 2009 at 1:35 PM, Jeremy Orlow jor...@chromium.org
   wrote:
What are the use cases for wanting to store data beyond strings
 (and
what
can be serialized into strings) in LocalStorage?  I can't think of
any
that
outweigh the negatives:
1)  From previous threads, I think it's fair to say that we can
 all
 agree
that LocalStorage is a regrettable API (mainly due to its
 synchronous
nature).  If so, it seems that making it more powerful and thus
 more
attractive to developers is just asking for trouble.  After all,
 the
more
people use it, the more lock contention there'll be, and the more
browser UI
jank users will be sure to experience.  This will also be worse
because
it'll be easier for developers to store large objects in
 LocalStorage.
 2)  As far as I can tell, there's nowhere else in the spec where
 you
have
to serialize structured clone(able) data to disk.  Given that
LocalStorage
is supposed to throw an exception if any ImageData is contained
 and
since
File and FileData objects are legal, it seems as though making
LocalStorage
handle structured clone data has a fairly high cost to
 implementors.
 Not to
mention that disallowing ImageData in only this one case is not
intuitive.
I think allowing structured clone(able) data in LocalStorage is a
 big
mistake.  Enough so

Re: [whatwg] Structured clone algorithm on LocalStorage

2009-09-24 Thread Darin Fisher
On Thu, Sep 24, 2009 at 10:40 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Thu, Sep 24, 2009 at 1:17 AM, Darin Fisher da...@chromium.org wrote:
  On Thu, Sep 24, 2009 at 12:20 AM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Wed, Sep 23, 2009 at 10:19 PM, Darin Fisher da...@chromium.org
 wrote:
  
  
   On Wed, Sep 23, 2009 at 8:10 PM, Jonas Sicking jo...@sicking.cc
 wrote:
  
   On Wed, Sep 23, 2009 at 3:29 PM, Jeremy Orlow jor...@chromium.org
   wrote:
On Wed, Sep 23, 2009 at 3:15 PM, Jonas Sicking jo...@sicking.cc
wrote:
   
On Wed, Sep 23, 2009 at 2:53 PM, Brett Cannon br...@python.org
wrote:
 On Wed, Sep 23, 2009 at 13:35, Jeremy Orlow 
 jor...@chromium.org
 wrote:
 What are the use cases for wanting to store data beyond strings
 (and
 what
 can be serialized into strings) in LocalStorage?  I can't think
 of
 any
 that
 outweigh the negatives:
 1)  From previous threads, I think it's fair to say that we can
 all
  agree
 that LocalStorage is a regrettable API (mainly due to its
 synchronous
 nature).  If so, it seems that making it more powerful and thus
 more
 attractive to developers is just asking for trouble.  After
 all,
 the
 more
 people use it, the more lock contention there'll be, and the
 more
 browser UI
 jank users will be sure to experience.  This will also be worse
 because
 it'll be easier for developers to store large objects in
  LocalStorage.
  2)  As far as I can tell, there's nowhere else in the spec
 where
 you
 have
 to serialize structured clone(able) data to disk.  Given that
 LocalStorage
 is supposed to throw an exception if any ImageData is contained
 and
 since
 File and FileData objects are legal, it seems as though making
 LocalStorage
 handle structured clone data has a fairly high cost to
 implementors.
  Not to
 mention that disallowing ImageData in only this one case is not
 intuitive.
 I think allowing structured clone(able) data in LocalStorage is
 a
 big
 mistake.  Enough so that, if SessionStorage and LocalStorage
 can't
 diverge
 on this issue, it'd be worth taking the power away from
 SessionStorage.
 J

 Speaking from experience, I have been using localStorage in my
 PhD
 thesis work w/o any real need for structured clones (I would
 have
 used
 Web Database but it isn't widely used yet and I was not sure if
 it
 was
 going to make the cut in the end). All it took to come close to
 simulating structured clones now was to develop my own
 compatibility
 wrapper for localStorage (http://realstorage.googlecode.com for
 those
 who care) and add setJSONObject() and getJSONObject() methods on
 the
 wrapper. Works w/o issue.
   
Actually, this seems like a prime reason *to* add structured
 storage
support. Obviously string data wasn't enough for you so you had to
write extra code in order to work around that. If structured
 clones
had been natively supported you both would have had to write less
code, and the resulting algorithms would have been faster. Faster
 since the browser can serialize/parse to/from a binary internal
format faster than to/from JSON through the JSON
 serializer/parser.
   
Yes, but since LocalStorage is already widely deployed, authors are
stuck
 with the structured clone-less version of LocalStorage for a
 very
long
time.  So the only way an app can store anything that can't be
JSONified
is
to break backwards compatibility.
   
   
   
On Wed, Sep 23, 2009 at 3:11 PM, Jonas
Sicking jo...@sicking.cc wrote:
   
On Wed, Sep 23, 2009 at 1:35 PM, Jeremy Orlow 
 jor...@chromium.org
wrote:
 What are the use cases for wanting to store data beyond strings
 (and
 what
 can be serialized into strings) in LocalStorage?  I can't think
 of
 any
 that
 outweigh the negatives:
 1)  From previous threads, I think it's fair to say that we can
 all
 agreed
 that LocalStorage is a regrettable API (mainly due to its
 synchronous
 nature).  If so, it seems that making it more powerful and thus
 more
 attractive to developers is just asking for trouble.  After all,
 the
 more
 people use it, the more lock contention there'll be, and the
 more
 browser UI
 jank users will be sure to experience.  This will also be worse
 because
 it'll be easier for developers to store large objects in
 LoaclStorage.
 2)  As far as I can tell, there's no where else in the spec
 where
 you
 have
 to serialize structured clone(able) data to disk.  Given that
 LocalStorage
 is supposed to throw an exception if any ImageData is contained
 and
 since
 File and FileData objects are legal, it seems as though making

Re: [whatwg] Structured clone algorithm on LocalStorage

2009-09-24 Thread Darin Fisher
On Thu, Sep 24, 2009 at 9:28 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Fri, Sep 25, 2009 at 5:52 AM, Darin Fisher da...@chromium.org wrote:

 No, no... my point is that to the application developer, those explicit
 points will appear quite implicit and mysterious.  This is why I called
 out third-party JS libraries.  One day, a function that you are using
 might transition to scripting a plugin, which might cause a nested
 loop, which could then force the lock to be released.  As a programmer,
 the unlocking is not explicit or predictable.


 The unlocking around plugin calls is a problem, but it seems to me that any
 given library function is much more likely start with a plugin-based
 implementation and eventually switch to a non-plugin-based implementation
 than the other way around.


Suppose a library decides to add sound effects one day.  They'd probably use
Flash.



 Beyond plugins, I hope and expect that library functions don't suddenly add
 calls to alert(), showModalDialog() or synchronous XHR.

 Rob



Anyways, I will the first to admit that my argument is steeped in the
hypothetical, but when it comes to thread synchronization, it is important
to consider such unlikely cases.  I would greatly prefer a watertight
solution instead of just relying on probabilities.

-Darin


Re: [whatwg] Structured clone algorithm on LocalStorage

2009-09-23 Thread Darin Fisher
On Wed, Sep 23, 2009 at 8:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Sep 23, 2009 at 3:29 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Wed, Sep 23, 2009 at 3:15 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Wed, Sep 23, 2009 at 2:53 PM, Brett Cannon br...@python.org wrote:
   On Wed, Sep 23, 2009 at 13:35, Jeremy Orlow jor...@chromium.org
 wrote:
   What are the use cases for wanting to store data beyond strings (and
   what
   can be serialized into strings) in LocalStorage?  I can't think of
 any
   that
   outweigh the negatives:
   1)  From previous threads, I think it's fair to say that we can all
   agree
   that LocalStorage is a regrettable API (mainly due to its synchronous
   nature).  If so, it seems that making it more powerful and thus more
   attractive to developers is just asking for trouble.  After all, the
   more
   people use it, the more lock contention there'll be, and the more
   browser UI
   jank users will be sure to experience.  This will also be worse
 because
   it'll be easier for developers to store large objects in
 LocalStorage.
   2)  As far as I can tell, there's nowhere else in the spec where you
   have
   to serialize structured clone(able) data to disk.  Given that
   LocalStorage
   is supposed to throw an exception if any ImageData is contained and
   since
   File and FileData objects are legal, it seems as though making
   LocalStorage
   handle structured clone data has a fairly high cost to implementors.
Not to
   mention that disallowing ImageData in only this one case is not
   intuitive.
   I think allowing structured clone(able) data in LocalStorage is a big
   mistake.  Enough so that, if SessionStorage and LocalStorage can't
   diverge
   on this issue, it'd be worth taking the power away from
 SessionStorage.
   J
  
   Speaking from experience, I have been using localStorage in my PhD
   thesis work w/o any real need for structured clones (I would have used
   Web Database but it isn't widely used yet and I was not sure if it was
   going to make the cut in the end). All it took to come close to
   simulating structured clones now was to develop my own compatibility
   wrapper for localStorage (http://realstorage.googlecode.com for those
   who care) and add setJSONObject() and getJSONObject() methods on the
   wrapper. Works w/o issue.
 
  Actually, this seems like a prime reason *to* add structured storage
  support. Obviously string data wasn't enough for you so you had to
  write extra code in order to work around that. If structured clones
  had been natively supported you both would have had to write less
  code, and the resulting algorithms would have been faster. Faster
  since the browser can serialize/parser to/from a binary internal
  format faster than to/from JSON through the JSON serializer/parser.
 
  Yes, but since LocalStorage is already widely deployed, authors are stuck
  with the structured clone-less version of LocalStorage for a very
 long
  time.  So the only way an app can store anything that can't be JSONified
 is
  to break backwards compatibility.
 
 
 
  On Wed, Sep 23, 2009 at 3:11 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Wed, Sep 23, 2009 at 1:35 PM, Jeremy Orlow jor...@chromium.org
 wrote:
   What are the use cases for wanting to store data beyond strings (and
   what
   can be serialized into strings) in LocalStorage?  I can't think of any
   that
   outweigh the negatives:
   1)  From previous threads, I think it's fair to say that we can all
   agree
   that LocalStorage is a regrettable API (mainly due to its synchronous
   nature).  If so, it seems that making it more powerful and thus more
   attractive to developers is just asking for trouble.  After all, the
   more
   people use it, the more lock contention there'll be, and the more
   browser UI
   jank users will be sure to experience.  This will also be worse
 because
   it'll be easier for developers to store large objects in LocalStorage.
   2)  As far as I can tell, there's nowhere else in the spec where you
   have
   to serialize structured clone(able) data to disk.  Given that
   LocalStorage
   is supposed to throw an exception if any ImageData is contained and
   since
   File and FileData objects are legal, it seems as though making
   LocalStorage
   handle structured clone data has a fairly high cost to implementors.
Not to
   mention that disallowing ImageData in only this one case is not
   intuitive.
   I think allowing structured clone(able) data in LocalStorage is a big
   mistake.  Enough so that, if SessionStorage and LocalStorage can't
   diverge
   on this issue, it'd be worth taking the power away from
 SessionStorage.
 
 Despite localStorage's unfortunate locking contention problem, it's
  become quite a popular API. It's also very successful in terms of
  browser deployment since it's available in at least latest versions of
  IE, Safari, Firefox, and Chrome. Don't know about support in Opera?
 
  

Re: [whatwg] Application defined locks

2009-09-10 Thread Darin Fisher
On Thu, Sep 10, 2009 at 12:32 PM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 10, 2009, at 11:22 AM, Michael Nordman wrote:



 On Wed, Sep 9, 2009 at 7:55 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 2:38 PM, Michael Nordman micha...@google.comwrote:

 If this feature existed, we likely would have used it for offline Gmail
 to coordinate which instance of the app (page with gmail in it) should be
 responsible for sync'ing the local database with the mail service. In the
 absence of a feature like this, instead we used the local database itself to
 register which page was the 'syncagent'. This involved periodically updating
 the db by the syncagent, and periodic polling by the would be syncagents
 waiting to possibly take over. Much ugliness.
 var isSyncAgent = false;
 window.acquireFlag(syncAgency, function() { isSyncAgent = true; });

 Much nicer.


 How do you deal with the user closing the syncagent while other app
 instances remain open?


 In our db polling world... that was why the syncagent periodically updated
 the db... to say still alive... on close it would say i'm gone and on
 ugly exit, the others would notice the lack of still alives and fight
 about who was it next. A silly bunch of complexity for something so simple.

 In the acquireFlag world... wouldn't the page going away simply relinquish
 the flag?


 How would the pages that failed to acquire it before know that they should
 try to acquire it again? Presumably they would still have to poll (assuming
 the tryLock model).

 Regards,
 Maciej



In my proposed interface, you can wait asynchronously for the lock.  Just
call acquireLock with a second parameter, a closure that runs once you get
the lock.

-Darin
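The db-polling arrangement michaeln describes — the syncagent periodically writes a still-alive marker, and would-be agents poll for staleness — can be sketched as below. The key name and timeout are made up, the Map stands in for the shared database, and a manual clock replaces Date.now()/setInterval so the sketch is deterministic:

```javascript
const HEARTBEAT_KEY = "syncagent-heartbeat";  // hypothetical key name
const STALE_MS = 5000;                        // hypothetical staleness cutoff

const store = new Map();  // stand-in for the shared local database
let now = 0;              // manual clock standing in for Date.now()

// Called periodically by the current syncagent: "still alive".
function heartbeat() {
  store.set(HEARTBEAT_KEY, String(now));
}

// Called periodically by every would-be syncagent: take over only if the
// current agent's heartbeat has gone stale.
function tryBecomeSyncAgent() {
  const last = Number(store.get(HEARTBEAT_KEY) ?? -Infinity);
  if (now - last > STALE_MS) {
    heartbeat();
    return true;
  }
  return false;
}

const aBecame = tryBecomeSyncAgent();    // no heartbeat yet, so A wins
now = 3000;
const bTooEarly = tryBecomeSyncAgent();  // A's heartbeat is still fresh
now = 9001;                              // A crashed and stopped heartbeating
const bTakesOver = tryBecomeSyncAgent(); // B notices the stale heartbeat
```

Even this toy version shows the "silly bunch of complexity": two pollers whose checks interleave can both conclude the heartbeat is stale and both take over, which is the race a UA-managed flag would eliminate.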


Re: [whatwg] Application defined locks

2009-09-10 Thread Darin Fisher
On Thu, Sep 10, 2009 at 1:08 PM, Oliver Hunt oli...@apple.com wrote:


 On Sep 10, 2009, at 12:55 PM, Darin Fisher wrote:

 On Thu, Sep 10, 2009 at 12:32 PM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 10, 2009, at 11:22 AM, Michael Nordman wrote:



 On Wed, Sep 9, 2009 at 7:55 PM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 2:38 PM, Michael Nordman micha...@google.comwrote:

 If this feature existed, we likely would have used it for offline Gmail
 to coordinate which instance of the app (page with gmail in it) should be
 responsible for sync'ing the local database with the mail service. In the
 absence of a feature like this, instead we used the local database itself 
 to
 register which page was the 'syncagent'. This involved periodically 
 updating
 the db by the syncagent, and periodic polling by the would be syncagents
 waiting to possibly take over. Much ugliness.
 var isSyncAgent = false;
 window.acquireFlag(syncAgency, function() { isSyncAgent = true; });

 Much nicer.


 How do you deal with the user closing the syncagent while other app
 instances remain open?


 In our db polling world... that was why the syncagent periodically updated
 the db... to say still alive... on close it would say i'm gone and on
 ugly exit, the others would notice the lack of still alives and fight
 about who was it next. A silly bunch of complexity for something so simple.

 In the acquireFlag world... wouldn't the page going away simply relinquish
 the flag?


 How would the pages that failed to acquire it before know that they should
 try to acquire it again? Presumably they would still have to poll (assuming
 the tryLock model).

 Regards,
 Maciej



 In my proposed interface, you can wait asynchronously for the lock.  Just
 call acquireLock with a second parameter, a closure that runs once you get
 the lock.


 What if you don't want to wait asynchronously?  My reading of this is that
 you need two copies of the code, one that works synchronously, but you still
 need to keep the asynchronous model to deal with an inability to
 synchronously acquire the lock.  What am I missing?



Sounds like a problem that can be solved with a function.

The reason for the trylock support is to allow a programmer to easily do
nothing if they can't acquire the lock.  If you want to wait if you can't
acquire the lock, then using the second form of acquireLock, which takes a
function, is a good solution.

-Darin


Re: [whatwg] Application defined locks

2009-09-10 Thread Darin Fisher
On Thu, Sep 10, 2009 at 2:38 PM, James Robinson jam...@google.com wrote:



 On Thu, Sep 10, 2009 at 1:55 PM, Darin Fisher da...@chromium.org wrote:

 On Thu, Sep 10, 2009 at 1:08 PM, Oliver Hunt oli...@apple.com wrote:


 On Sep 10, 2009, at 12:55 PM, Darin Fisher wrote:

 On Thu, Sep 10, 2009 at 12:32 PM, Maciej Stachowiak m...@apple.comwrote:


 On Sep 10, 2009, at 11:22 AM, Michael Nordman wrote:



 On Wed, Sep 9, 2009 at 7:55 PM, Robert O'Callahan rob...@ocallahan.org
  wrote:

 On Thu, Sep 10, 2009 at 2:38 PM, Michael Nordman 
 micha...@google.comwrote:

 If this feature existed, we likely would have used it for offline
 Gmail to coordinate which instance of the app (page with gmail in it) 
 should
 be responsible for sync'ing the local database with the mail service. In 
 the
 absence of a feature like this, instead we used the local database 
 itself to
 register which page was the 'syncagent'. This involved periodically 
 updating
 the db by the syncagent, and periodic polling by the would be syncagents
 waiting to possibly take over. Much ugliness.
 var isSyncAgent = false;
 window.acquireFlag(syncAgency, function() { isSyncAgent = true; });

 Much nicer.


 How do you deal with the user closing the syncagent while other app
 instances remain open?


 In our db polling world... that was why the syncagent periodically
 updated the db... to say still alive... on close it would say i'm gone
 and on ugly exit, the others would notice the lack of still alives and
 fight about who was it next. A silly bunch of complexity for something so
 simple.

 In the acquireFlag world... wouldn't the page going away simply
 relinquish the flag?


 How would the pages that failed to acquire it before know that they
 should try to acquire it again? Presumably they would still have to poll
 (assuming the tryLock model).

 Regards,
 Maciej



 In my proposed interface, you can wait asynchronously for the lock.  Just
 call acquireLock with a second parameter, a closure that runs once you get
 the lock.


 What if you don't want to wait asynchronously?  My reading of this is
 that you need two copies of the code, one that works synchronously, but you
 still need to keep the asynchronous model to deal with an inability to
 synchronously acquire the lock.  What am I missing?



 Sounds like a problem that can be solved with a function.

 The reason for the trylock support is to allow a programmer to easily do
 nothing if they can't acquire the lock.  If you want to wait if you can't
 acquire the lock, then using the second form of acquireLock, which takes a
 function, is a good solution.


 I don't think there is much value in the first form of acquireLock() - only
 the second form really makes sense.  I also strongly feel that giving web
 developers access to locking mechanisms is a bad idea - it hasn't been a
 spectacular success in any other language.

 I think the useful semantics are equivalent to the following (being careful
 to avoid mentioning 'locks' or 'mutexes' explicit):  A script passes in a
 callback and a token.  The UA invokes the callback at some point in the
 future and provides the guarantee that no other callback with that token
 will be invoked in any context within the origin until the invoked callback
 returns.  Here's what I mean with an intentionally horrible name:

 window.runMeExclusively(callback, arbitrary string token);


This looks just like the acquireScopedLock method I proposed.




 An application developer could then put all of their logic that touches a
 particular shared resource behind a token.  It's also deadlock free so long
 as each callback terminates.

 Would this be sufficient?


This is sufficient for providing fine-grain locking for access to shared
resources.  It does not help you build long-lived locks, such as the one
offline gmail constructs from using the database API and timers (see the
post from michaeln).

I think there are good applications for setting a long-lived lock.  We can
try to make it hard for people to create those locks, but then the end
result will be suboptimal.  They'll still find a way to build them.



  If so, it is almost possible to implement it correctly in a JavaScript
 library using a shared worker per origin and postMessage, except that it is
 not currently possible to detect when a context goes away.


Right.  Maybe the answer is to add support to shared workers so that you can
know when an end point disappears.  Then, it would be fairly trivial to
implement a lock master from a shared worker that could either manage short
or long lived locks.  The synchronous trylock API would not be possible,
but that's fine.  I only included that in my proposal for convenience.

-Darin
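A lock master along these lines is small. The sketch below keeps the shared worker's bookkeeping but replaces postMessage with direct calls so the protocol is visible: portId stands in for a MessagePort, grant stands in for posting a "lock granted" message back, and portClosed() models the endpoint-disappearance notification that, as noted above, shared workers do not yet provide:

```javascript
// Bookkeeping a per-origin lock master inside a shared worker could keep.
class LockMaster {
  constructor() {
    this.held = new Map();     // lock name -> portId of the current holder
    this.waiters = new Map();  // lock name -> [{ portId, grant }]
  }
  acquire(portId, name, grant) {
    if (!this.held.has(name)) {
      this.held.set(name, portId);  // lock is free: grant immediately
      grant();
      return;
    }
    if (!this.waiters.has(name)) this.waiters.set(name, []);
    this.waiters.get(name).push({ portId, grant });
  }
  release(name) {
    const next = (this.waiters.get(name) ?? []).shift();
    if (next) {
      this.held.set(name, next.portId);  // hand off to the next waiter
      next.grant();
    } else {
      this.held.delete(name);
    }
  }
  // The missing primitive: called when a page's port disconnects, so its
  // locks -- including long-lived ones -- are reliably cleaned up.
  portClosed(portId) {
    for (const [name, holder] of [...this.held]) {
      if (holder === portId) this.release(name);
    }
  }
}

// Two pages contend for the long-lived "syncagent" lock; page 1 goes away.
const master = new LockMaster();
const log = [];
master.acquire("page-1", "syncagent", () => log.push("page-1 owns it"));
master.acquire("page-2", "syncagent", () => log.push("page-2 owns it"));
master.portClosed("page-1");  // disconnect detection hands the lock over
```

With disconnect detection in place, both the short-lived runMeExclusively() style and long-lived master-election locks fall out of the same queue.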


Re: [whatwg] Application defined locks

2009-09-10 Thread Darin Fisher
On Thu, Sep 10, 2009 at 4:59 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Fri, Sep 11, 2009 at 9:52 AM, Darin Fisher da...@chromium.org wrote:

 I think there are good applications for setting a long-lived lock.  We can
 try to make it hard for people to create those locks, but then the end
 result will be suboptimal.  They'll still find a way to build them.


 One use case is selecting a master instance of an app. I haven't really
 been following the global script thread, but doesn't that address this use
 case in a more direct way?


No it doesn't.  The global script would only be reachable by related
browsing contexts (similar to how window.open w/ a name works).  In a
multi-process browser, you don't want to _require_ script bindings to span
processes.

That's why I mentioned shared workers.  Because they are isolated and
communication is via string passing, it is possible for processes in
unrelated browsing contexts to communicate with the same shared workers.




 What other use-cases for long-lived locks are there?


This is a good question.  Most of the use cases I can imagine boil down to a
master/slave division of labor.

For example, if I write an app that does some batch asynchronous processing
(many setTimeout calls worth), then I can imagine setting a flag across the
entire job, so that other instances of my app know not to start another such
overlapping job until I'm finished.  In this example, I'm supposing that
storage is modified at each step such that guaranteeing storage consistency
within the scope of script evaluation is not enough.

-Darin


Re: [whatwg] Application defined locks

2009-09-10 Thread Darin Fisher
On Thu, Sep 10, 2009 at 5:28 PM, Darin Fisher da...@chromium.org wrote:

 On Thu, Sep 10, 2009 at 4:59 PM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Fri, Sep 11, 2009 at 9:52 AM, Darin Fisher da...@chromium.org wrote:

 I think there are good applications for setting a long-lived lock.  We
 can try to make it hard for people to create those locks, but then the end
 result will be suboptimal.  They'll still find a way to build them.


 One use case is selecting a master instance of an app. I haven't really
 been following the global script thread, but doesn't that address this use
 case in a more direct way?


 No it doesn't.  The global script would only be reachable by related
 browsing contexts (similar to how window.open w/ a name works).  In a
 multi-process browser, you don't want to _require_ script bindings to span
 processes.

 That's why I mentioned shared workers.  Because they are isolated and
 communication is via string passing, it is possible for processes in
 unrelated browsing contexts to communicate with the same shared workers.




 What other use-cases for long-lived locks are there?


 This is a good question.  Most of the use cases I can imagine boil down to
 a master/slave division of labor.

 For example, if I write an app that does some batch asynchronous processing
 (many setTimeout calls worth), then I can imagine setting a flag across the
 entire job, so that other instances of my app know not to start another such
 overlapping job until I'm finished.  In this example, I'm supposing that
 storage is modified at each step such that guaranteeing storage consistency
 within the scope of script evaluation is not enough.

 -Darin



Also, the other motivating factor for me is access to LocalStorage from
workers.  (I know it has been removed from the spec, but that is
unfortunate, no?)

By definition, workers are designed to be long lived, possibly doing long
stretches of computation, and being able to intermix reads and writes to
storage during that stretch of computation would be nice.

Moreover, it would be nice if a worker in domain A could effectively lock
part of the storage so that the portion of the app running on the main
thread could continue accessing the other parts of storage associated with
domain A.  The implicit storage mutex doesn't support this use case very
well.  You end up having to call the getStorageUpdates function periodically
(releasing the lock in the middle of computation!!).  That kind of thing is
really scary and hard to get right.  I cringe whenever I see someone
unlocking, calling out to foreign code, and then re-acquiring the lock.
 Why?  Because it means that existing variables, stack-based or otherwise,
that were previously consistent may have become inconsistent with global
data in storage due to having released the lock.  getStorageUpdates is
dangerous.  It is a big hammer that doesn't really fit the bill.

The alternative to getStorageUpdates in this case is to create another
domain on which to run the background worker just so that you can have an
independent slice of storage.  That seems really lame to me.  Why should
domain A have to jump through such hoops?

-Darin


[whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
The recent discussion about the storage mutex for Cookies and LocalStorage
got me thinking
Perhaps instead of trying to build implicit locking into those features, we
should give web apps the tools to manage exclusive access to shared
resources.



I imagine a simple lock API:

window.acquireLock(name)
window.releaseLock(name)

acquireLock works like pthread_mutex_trylock in that it is non-blocking.  It
returns true if you succeeded in acquiring the lock, else it returns false.
 releaseLock does as its name suggests: releases the lock so that others may
acquire it.

Any locks acquired would be automatically released when the DOM window is
destroyed or navigated cross origin.  This API could also be supported by
workers.

The name parameter is scoped to the origin of the page.  So, this locking
API only works between pages in the same origin.



We could also extend acquireLock to support an asynchronous callback when
the lock becomes available:

window.acquireLock(name, function() { /* lock acquired */ });

If the callback function is given, then upon lock acquisition, the callback
function would be invoked.  In this case, the return value of acquireLock is
true if the function was invoked or false if the function will be invoked
once the lock can be acquired.



Finally, there could be a helper for scoping lock acquisition:

window.acquireScopedLock(name, function() { /* lock acquired for this
scope only */ });



This lock API would provide developers with the ability to indicate that
their instance of the web app is the only one that should play with
LocalStorage.  Other instances could then know that they don't have
exclusive access and could take appropriate action.

This API seems like it could be used to allow LocalStorage to be usable from
workers.  Also, as we start developing other means of local storage (File
APIs), it seems like having to again invent a reasonable implicit locking
system will be a pain.  Perhaps it would just be better to develop something
explicit that application developers can use independent of the local
storage mechanism :-)



It may be the case that we want to only provide acquireScopedLock (or
something like it) to enforce fine grained locking, but I think that would
only force people to implement long-lived locks by setting a field in
LocalStorage.  That would then result in the locks not being managed by the
UA, which means that they cannot be reliably cleaned up when the page
closes.  I think it is very important that we provide facilities to guide
people away from building such ad-hoc locks on top of LocalStorage.  This is
why I like the explicit (non-blocking!) acquireLock and releaseLock methods.

-Darin
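To make the proposed semantics concrete, here is a single-context sketch of acquireLock/releaseLock/acquireScopedLock. The names come from the proposal above; the implementation is only illustrative, since a real UA would scope the locks to the origin, share them across pages and workers, and release them automatically on unload — none of which a page-level shim can do:

```javascript
// name -> queue of waiting callbacks; a present entry means "held".
const locks = new Map();

// Non-blocking, like pthread_mutex_trylock: returns true if acquired now.
// With a callback, a failed trylock instead queues the callback to run
// once the lock becomes available.
function acquireLock(name, callback) {
  if (!locks.has(name)) {
    locks.set(name, []);
    if (callback) callback();
    return true;
  }
  if (callback) locks.get(name).push(callback);
  return false;
}

function releaseLock(name) {
  const waiters = locks.get(name);
  if (!waiters) return;
  if (waiters.length > 0) {
    waiters.shift()();  // hand the lock to the next waiter
  } else {
    locks.delete(name);
  }
}

// Helper from the proposal: hold the lock only for the callback's scope.
function acquireScopedLock(name, callback) {
  acquireLock(name, () => {
    try { callback(); } finally { releaseLock(name); }
  });
}

// Usage: the first instance wins the "syncagency" flag, the second waits.
const log = [];
const gotIt = acquireLock("syncagency", () => log.push("A is syncagent"));
const queued = acquireLock("syncagency", () => log.push("B took over"));
releaseLock("syncagency");  // A navigates away; B's callback fires
acquireScopedLock("render", () => log.push("held only for this scope"));
```

Note how the two failure modes differ: plain acquireLock without a callback simply reports false and the caller does nothing, while the callback form turns contention into an asynchronous wait.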


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 11:08 AM, Aaron Boodman a...@google.com wrote:

 On Wed, Sep 9, 2009 at 10:55 AM, Darin Fisherda...@chromium.org wrote:
  I imagine a simple lock API:
  window.acquireLock(name)
  window.releaseLock(name)

 I do not think it is a good idea to allow long-lived (past a stack
 frame) locks on the types of things we've been discussing (local
 storage, databases, etc).

  This API seems like it could be used to allow LocalStorage to be usable
 from
  workers.  Also, as we start developing other means of local storage (File
  APIs), it seems like having to again invent a reasonable implicit locking
  system will be a pain.  Perhaps it would just be better to develop
 something
  explicit that application developers can use independent of the local
  storage mechanism :-)

 There would presumably have to be a separate name value for each API,
 though, right? So we're talking about the difference between:

 window.acquireLock(localStorage, function() {
 ...
 });

 and:

 window.acquireLocalStorage(function() {
 ...
 });

 It doesn't seem like much of a win for reusability IMO.


I wanted to leave it up to the app developer to choose the name so that they
could define how the lock is interpreted.

For example, they might want to partition the keyspace for local storage and
have separate locks for separate keys.  Or, they might want to have a single
lock that is inclusive of several storage mechanisms: LocalStorage and
FileAPI.

Besides, once we have an explicit locking API, why not just be generic and
give it a name divorced from LocalStorage or any kind of storage features
for that matter?  Locking can be useful to other applications that do not
even use local storage...




  It may be the case that we want to only provide acquireScopedLock (or
  something like it) to enforce fine grained locking, but I think that
 would
  only force people to implement long-lived locks by setting a field in
  LocalStorage.

 Do you have an example of a place where we want to allow long-lived locks?



It is important to think of these differently from normal mutexes that you
might program with in C++.  Maybe I should have used the term flag instead
of lock ;-)

You might use a long lived lock to indicate that you are the instance
responsible for X.  I can imagine applications where there could be a master
/ slave relationships between the instances.  One instance is the master and
the rest are the slaves.

If we only had fine grained locking, then we are saying that we want
simultaneous instances of the same web app to be able to stomp on each other's
data in LocalStorage.  Instead, a web app developer might want to disable
LocalStorage features in all but the first instance of their web app.  The
problem is that your state is held not just in LocalStorage but also in JS
variables, the DOM, and perhaps in session state held by the server.  It is
easy for LocalStorage to get corrupted even with proper fine-grained
locking.  You still need a way to have a big flag that says... hey, I'm
the one messing with LocalStorage.  A good example is the browser's profile
directory.  Chrome and Firefox both only allow one instance of the app per
profile.  A long-lived lock is held to support such behavior.

I suspect there is some overlap with my proposal and shared workers.
 Perhaps what I am trying to accomplish here could even be implemented on
top of shared workers, although using shared workers to achieve mutual
exclusion locks seems rather heavyweight.

-Darin


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 11:30 AM, Aaron Boodman a...@google.com wrote:

 On Wed, Sep 9, 2009 at 11:23 AM, Darin Fisherda...@chromium.org wrote:
  On Wed, Sep 9, 2009 at 11:08 AM, Aaron Boodman a...@google.com wrote:
  There would presumably have to be a separate name value for each API,
  though, right? So we're talking about the difference between:
 
  window.acquireLock(localStorage, function() {
  ...
  });
 
  and:
 
  window.acquireLocalStorage(function() {
  ...
  });
 
  It doesn't seem like much of a win for reusability IMO.
 
  I wanted to leave it up to the app developer to choose the name so that
 they
  could define how the lock is interpreted.
  For example, they might want to partition the keyspace for local storage
 and
  have separate locks for separate keys.  Or, they might want to have a
 single
  lock that is inclusive of several storage mechanisms: LocalStorage and
  FileAPI.
  Besides, once we have an explicit locking API, why not just be generic
 and
  give it a name divorced from LocalStorage or any kind of storage features
  for that matter?  Locking can be useful to other applications that do not
  even use local storage...

 I see.

 So you are suggesting that localStorage could have zero concurrency
 guarantees and it is simply up to the developer to arrange things
 themselves using this new primitive.


Yes, exactly. Sorry for not making this clear.  I believe implicit locking
for LocalStorage (and the implicit unlocking) is going to yield something
very confusing and hard to implement well.  The potential for deadlocks
when you fail to implicitly unlock properly scares me.

-Darin




 That is an interesting idea. You're right that it overlaps with the
 ideas that inspired shared workers, and the global script proposal.

 - a



Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 3:37 PM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 9, 2009, at 10:55 AM, Darin Fisher wrote:

  The recent discussion about the storage mutex for Cookies and LocalStorage
 got me thinking

 Perhaps instead of trying to build implicit locking into those features,
 we should give web apps the tools to manage exclusive access to shared
 resources.


 I'm really hesitant to expose explicit locking to the Web platform. Mutexes
 are incredibly hard to program with correctly, and we will surely end up
 with stuck locks, race conditions, livelocks, and so forth. For Workers I
 was happy that we managed to avoid any locking primitives by using a
 message-passing model, but explicit mutexes would ruin that.

  - Maciej



Note: I probably made a mistake calling these locks since they do not work
like normal mutexes.  You can only wait for one of these locks
asynchronously.  There is no synchronous blocking, which avoids most of the
problems you mention.  Also, the locks are cleared when the page is
destroyed or navigated to another domain, so you lose the problem of stuck
locks.

What motivated this was that I wanted the ability to simulate the database
transaction model.  Since we support that, we might as well support a
similar system that is independent of a particular storage mechanism.  Seems
reasonable to me.

Alternatively, if we had a way to set a value in local storage and read the
value that was there, then a web page could implement a flag to indicate
mutual exclusion. Someone interested in acquiring a flag could wait for a
storage event to indicate that the flag was cleared.  However, what is
missing is that there isn't a way for the flag to be auto-cleared when the
DOM window is closed or navigated to another domain.

-Darin
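The "set a value and read the value that was there" primitive is an atomic exchange; the sketch below uses the slightly stronger compare-and-swap form so a losing writer does not clobber the flag. compareExchange is hypothetical — no such localStorage method exists — and the Map stands in for storage whose read-modify-write the UA would make atomic:

```javascript
const store = new Map();  // stand-in for localStorage

// Hypothetical primitive: atomically read the current value and replace
// it with `value` only if it matched `expected`. In a UA this whole
// read-modify-write would happen under the storage mutex.
function compareExchange(key, expected, value) {
  const previous = store.has(key) ? store.get(key) : null;
  if (previous === expected) store.set(key, value);
  return previous;
}

// A page holds the flag only if it saw null (nobody had claimed it yet).
const firstWins = compareExchange("ls-flag", null, "page-1") === null;
const secondWins = compareExchange("ls-flag", null, "page-2") === null;
// page-1 keeps the flag; page-2 learns it lost without overwriting it
```

What the exchange alone cannot give you, as the post notes, is auto-clearing when the window closes: a crashed page leaves the flag set forever, which is why a UA-managed lock keeps coming up in this thread.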





 

 I imagine a simple lock API:

 window.acquireLock(name)
 window.releaseLock(name)

 acquireLock works like pthread_mutex_trylock in that it is non-blocking.
  It returns true if you succeeded in acquiring the lock, else it returns
 false.  releaseLock does as its name suggests: releases the lock so that
 others may acquire it.

 Any locks acquired would be automatically released when the DOM window is
 destroyed or navigated cross origin.  This API could also be supported by
 workers.

 The name parameter is scoped to the origin of the page.  So, this locking
 API only works between pages in the same origin.

 

 We could also extend acquireLock to support an asynchronous callback when
 the lock becomes available:

 window.acquireLock(name, function() { /* lock acquired */ });

 If the callback function is given, then upon lock acquisition, the
 callback function would be invoked.  In this case, the return value of
 acquireLock is true if the function was invoked or false if the function
 will be invoked once the lock can be acquired.

 

 Finally, there could be a helper for scoping lock acquisition:

 window.acquireScopedLock(name, function() { /* lock acquired for this
 scope only */ });

 

 This lock API would provide developers with the ability to indicate that
 their instance of the web app is the only one that should play with
 LocalStorage.  Other instances could then know that they don't have
 exclusive access and could take appropriate action.

 This API seems like it could be used to allow LocalStorage to be usable
 from workers.  Also, as we start developing other means of local storage
 (File APIs), it seems like having to again invent a reasonable implicit
 locking system will be a pain.  Perhaps it would just be better to develop
 something explicit that application developers can use independent of the
 local storage mechanism :-)

 

 It may be the case that we want to only provide acquireScopedLock (or
 something like it) to enforce fine grained locking, but I think that would
 only force people to implement long-lived locks by setting a field in
 LocalStorage.  That would then result in the locks not being managed by the
 UA, which means that they cannot be reliably cleaned up when the page
 closes.  I think it is very important that we provide facilities to guide
 people away from building such ad-hoc locks on top of LocalStorage.  This is
 why I like the explicit (non-blocking!) acquireLock and releaseLock methods.

 -Darin
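
For concreteness, here is one way the proposed semantics could behave,
modeled as a plain in-memory lock manager. Only the names acquireLock,
releaseLock, and acquireScopedLock come from the proposal above; the
implementation, the hand-off-on-release behavior, and the single-context
scope are assumptions (the actual proposal is per-origin, UA-managed, and
auto-releases locks on window destruction or cross-origin navigation).

```javascript
// name -> { held: boolean, waiters: [callback] }
const locks = new Map();

function getEntry(name) {
  if (!locks.has(name)) locks.set(name, { held: false, waiters: [] });
  return locks.get(name);
}

// Non-blocking, like pthread_mutex_trylock.  Returns true if the lock was
// acquired now (invoking the callback, if any); otherwise queues the
// callback and returns false -- matching the proposal's return-value rule.
function acquireLock(name, callback) {
  const entry = getEntry(name);
  if (!entry.held) {
    entry.held = true;
    if (callback) callback();
    return true;
  }
  if (callback) entry.waiters.push(callback);
  return false;
}

function releaseLock(name) {
  const entry = getEntry(name);
  const next = entry.waiters.shift();
  if (next) {
    next();              // hand the lock directly to the next waiter
  } else {
    entry.held = false;  // no one waiting: the lock becomes free
  }
}

// Helper that scopes lock acquisition to the callback's execution.
function acquireScopedLock(name, callback) {
  acquireLock(name, () => {
    try { callback(); } finally { releaseLock(name); }
  });
}
```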





Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 4:24 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 6:37 AM, Darin Fisher da...@chromium.org wrote:

 Yes, exactly. Sorry for not making this clear.  I believe implicit locking
 for LocalStorage (and the implicit unlocking) is going to yield something
 very confusing and hard to implement well.  The potential for deadlocks
 when you fail to implicitly unlock properly scares me.


 You mean when the browser implementation has a bug and fails to implicitly
 unlock?


What concerns me are the cases where synchronous events (e.g., resizing an
iframe) can cause script to execute in another domain.  As spec'd, there is
a potential deadlock with the storage mutex.  We must carefully unlock in
situations like this.  However, such unlocking will appear quite mysterious
to users, so much so that I question the value of the implicit storage
mutex.

That led me down this path of imagining a more explicit locking mechanism
that would give the app control over how local storage is protected.

I agree that explicit locking can be a big dangerous hammer, but that's why
I tried to soften it by removing blocking behavior.




 Giving Web authors the crappy race-prone and deadlock-prone locking
 programming model scares *me*.


Me too.  I don't believe that I'm proposing such an API.



 Yes, your acquireLock can't get you into a hard deadlock, strictly
 speaking, but you can still effectively deadlock your application by waiting
 for a lock to become available that never can.


Sure, but at least the thread of execution isn't blocked, and the user can
recover by closing the tab or what have you.  By the way, you can already
pretty much create my acquireLock / releaseLock API on top of SharedWorkers
today, but in a slightly crappier way.  Are SharedWorkers problematic
because of this?  I don't think so.
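
The SharedWorkers approximation Darin mentions can be sketched as a broker
that hands out named locks over message ports. `LockBroker` and its message
shapes are invented for illustration; a real version would wrap this logic
in a SharedWorker's `onconnect` handler (the "slightly crappier" part) and
would also drop a client's locks when its port goes away.

```javascript
// Broker logic a SharedWorker script could run.  "Port" here is any object
// with a postMessage() method; in a real SharedWorker these would be the
// MessagePorts received from connecting pages.
class LockBroker {
  constructor() {
    this.owners = new Map();   // lock name -> owning port
    this.waiters = new Map();  // lock name -> queue of waiting ports
  }

  handle(port, msg) {
    if (msg.type === "acquire") {
      if (!this.owners.has(msg.name)) {
        // Lock is free: grant it immediately.
        this.owners.set(msg.name, port);
        port.postMessage({ type: "granted", name: msg.name });
      } else {
        // Lock is busy: queue this client (non-blocking for the caller).
        const q = this.waiters.get(msg.name) || [];
        q.push(port);
        this.waiters.set(msg.name, q);
      }
    } else if (msg.type === "release" && this.owners.get(msg.name) === port) {
      // Only the owner may release; hand off to the next waiter, if any.
      const q = this.waiters.get(msg.name) || [];
      const next = q.shift();
      if (next) {
        this.owners.set(msg.name, next);
        next.postMessage({ type: "granted", name: msg.name });
      } else {
        this.owners.delete(msg.name);
      }
    }
  }
}
```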



 Also, how many authors will forget to test the result of acquireLock
 (because they're used to other locking APIs that block) and find that things
 are OK in their testing?


Perhaps the API could be different.  Perhaps the name lock is part of the
problem.




 I think we should be willing to accept a very high implementation burden on
 browser vendors in exchange for minimizing the burden on Web authors.


Yes, I wholeheartedly agree.  Note: my concern is that there is no good
implementation for the storage mutex.  Implicitly dropping it at weird times
is not a good answer.

-Darin


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 9:07 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 3:53 PM, Darin Fisher da...@chromium.org wrote:

 What concerns me are the cases where synchronous events (e.g., resizing an
 iframe) can cause script to execute in another domain.  As spec'd, there is
 a potential deadlock with the storage mutex.  We must carefully unlock in
 situations like this.  However, such unlocking will appear quite mysterious
 to users, so much so that I question the value of the implicit storage
 mutex.


 Right now I'm not sure how big a problem this actually is. The resize event
 for a document in a frame can surely be dispatched asynchronously so no
 unlocking is required. I would like to have a much better idea of how many
 places absolutely must release the storage mutex before deciding that
 approach is unworkable.

 Rob



What about navigating an iframe to a reference fragment, which could trigger
a scroll event?  The scrolling has to be done synchronously for compat with
the web.

-Darin


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 6:46 PM, Aaron Boodman a...@google.com wrote:

 On Wed, Sep 9, 2009 at 11:30 AM, Aaron Boodmana...@google.com wrote:
  I see.
 
  So you are suggesting the localStorage could have zero concurrency
  guarantees and it is simply up to the developer to arrange things
  themselves using this new primitive.
 
  That is an interesting idea. You're right that it overlaps with the
  ideas that inspired shared workers, and the global script proposal.

 Ok, after thinking about this for a day, I'm going to say I think this
 is a very cool idea, and a worthwhile addition, but I don't think it
 should substitute for having the local storage API work correctly by
 default.

 The web platform is supposed to work for developers of all experience
 levels. If we make local storage have no concurrency guarantees, it
 will seem like it works in the overwhelming majority of cases. It will
 work in all SELUAs (single-event-loop user agents), and it will only NOT
 work in MELUAs (multi-event-loop user agents) in cases that are basically
 impossible to test, let alone see during development.

 We have tried hard with the design of the web platform to avoid these
 sort of untestable non-deterministic scenarios, and I think it is to
 the overall value of the platform to continue this.

 Therefore, my position continues to be that to access local storage,
 there should be an API that asynchronously acquires exclusive access
 to storage.

 - a



Yeah, if you had to call an API that asynchronously acquires exclusive access
to storage, then I believe that would nicely address most of the issues.  It
is the solution we have for database transactions.

I say "most" because I'm not sure that it eliminates the need to drop the
storage mutex in the showModalDialog case.

If I call showModalDialog from within a database transaction, and then
showModalDialog tries to create another database transaction, should I
expect that the transaction can be started within the nested run loop of the
modal dialog?  If not, then it may cause the app to get confused and never
allow the dialog to be closed (e.g., perhaps the close action is predicated
on a database query).

Nested loops suck.  showModalDialog sucks :-)

-Darin


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 9:27 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Thu, Sep 10, 2009 at 1:13 PM, Darin Fisher da...@chromium.org wrote:

 On Wed, Sep 9, 2009 at 6:46 PM, Aaron Boodman a...@google.com wrote:

 On Wed, Sep 9, 2009 at 11:30 AM, Aaron Boodmana...@google.com wrote:
  I see.
 
  So you are suggesting the localStorage could have zero concurrency
  guarantees and it is simply up to the developer to arrange things
  themselves using this new primitive.
 
  That is an interesting idea. You're right that it overlaps with the
  ideas that inspired shared workers, and the global script proposal.

 Ok, after thinking about this for a day, I'm going to say I think this
 is a very cool idea, and a worthwhile addition, but I don't think it
 should substitute for having the local storage API work correctly by
 default.

 The web platform is supposed to work for developers of all experience
 levels. If we make local storage have no concurrency guarantees, it
 will seem like it works in the overwhelming majority of cases. It will
 work in all SELUAs, and it will only NOT work in MELUAs in cases that
 are basically impossible to test, let alone see during development.

 We have tried hard with the design of the web platform to avoid these
 sort of untestable non-deterministic scenarios, and I think it is to
 the overall value of the platform to continue this.

 Therefore, my position continues to be that to access local storage,
 there should be an API that asynchronously acquires exclusive access
 to storage.

 - a



 Yeah, if you had to call an API that asynchronously acquires exclusive
 access
 to storage, then I believe that would nicely address most of the issues.
  It is the
 solution we have for database transactions.

 I say most because I'm not sure that it eliminates the need to drop the
 storage
 mutex in the showModalDialog case.

 If I call showModalDialog from within a database transaction, and then
 showModalDialog
 tries to create another database transaction, should I expect that the
 transaction
 can be started within the nested run loop of the modal dialog?  If not,
 then it may cause
 the app to get confused and never allow the dialog to be closed (e.g.,
 perhaps the close
 action is predicated on a database query.)

 Nested loops suck.  showModalDialog sucks :-)


 We could just disallow showModalDialog and any other troublesome APIs like
 that during localStorage and database transactions.  Doing so seems better
 than silently dropping transactional semantics.



It may not be so easy to disallow showModalDialog.  Imagine if you script a
plugin inside the transaction, and before returning, the plugin scripts
another window, where it calls showModalDialog.  There could have been a
process hop there.

-Darin


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 9:28 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 4:11 PM, Darin Fisher da...@chromium.org wrote:

 On Wed, Sep 9, 2009 at 9:07 PM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 3:53 PM, Darin Fisher da...@chromium.orgwrote:

 What concerns me are the cases where synchronous events (e.g., resizing
 an iframe) can cause script to execute in another domain.  As spec'd, there
 is a potential deadlock with the storage mutex.  We must carefully unlock
 in situations like this.  However, such unlocking will appear quite
 mysterious to users, so much so that I question the value of the implicit
 storage mutex.


 Right now I'm not sure how big a problem this actually is. The resize
 event for a document in a frame can surely be dispatched asynchronously so
 no unlocking is required. I would like to have a much better idea of how
 many places absolutely must release the storage mutex before deciding that
 approach is unworkable.

 Rob


 What about navigating an iframe to a reference fragment, which could
 trigger a scroll event?  The scrolling has to be done synchronously for
 compat with the web.


 The scrolling itself may have to be synchronous, at least as far as
 updating scrollLeft/scrollTop if not visually ... but in this case the
 script execution in the frame would be an onscroll event handler, right?
 That's asynchronous in Gecko.


Interesting.  Gecko seems to be the odd man out there.  Both MSHTML and
WebKit dispatch the onscroll event handler synchronously.  Maybe my
assertion about that being important for web compat was overreaching.

At any rate, this should at least give us pause.  There could be other ways
in which script execution across domains could be nested :-/

-Darin


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 9:43 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 4:37 PM, Darin Fisher da...@chromium.org wrote:

  Imagine if you script a plugin inside the transaction, and before
 returning, the plugin scripts another window,


 I'm curious, how common is that anyway? Can we just tell plugins not to do
 that, and abort any plugin that tries?


I don't know.  Are you saying that a plugin should not be able to invoke a
function that may trigger showModalDialog?  The code that calls
showModalDialog may be far removed / unrelated to the plugin script.  It may
just be an unfortunate side effect of invoking a method on a DOM window.

-Darin


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 10:01 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 4:57 PM, Darin Fisher da...@chromium.org wrote:

 On Wed, Sep 9, 2009 at 9:43 PM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Thu, Sep 10, 2009 at 4:37 PM, Darin Fisher da...@chromium.orgwrote:

  Imagine if you script a plugin inside the transaction, and before
 returning, the plugin scripts another window,


 I'm curious, how common is that anyway? Can we just tell plugins not to
 do that, and abort any plugin that tries?


 I don't know.  Are you saying that a plugin should not be able to invoke a
 function that may trigger showModalDialog?  The code that calls
 showModalDialog may be far removed / unrelated to the plugin script.  It may
 just be an unfortunate side effect of invoking a method on a DOM window.


 No, I'm saying when a script in window A calls into a plugin, the plugin
 should not be allowed to synchronously call back out to script in window B.
 I realize that is currently allowed (i.e. not forbidden by anything in
 NPAPI), but do plugins actually do it in practice?



Yes, this is something that we have observed real plugins doing.  It is easy
for a plugin to have references to multiple windows.  They also like to
script the page in response to random NPP_ calls, like NPP_HandleEvent and
NPP_SetWindow, which perhaps is not too surprising.  NPP_HandleEvent
generally corresponds to input processing and painting for windowless
plugins, and NPP_SetWindow corresponds to a resize event.

-Darin


Re: [whatwg] Application defined locks

2009-09-09 Thread Darin Fisher
On Wed, Sep 9, 2009 at 10:03 PM, Aaron Boodman a...@google.com wrote:

 On Wed, Sep 9, 2009 at 9:13 PM, Darin Fisherda...@chromium.org wrote:
  If I call showModalDialog from within a database transaction, and then
  showModalDialog
  tries to create another database transaction, should I expect that the
  transaction
  can be started within the nested run loop of the modal dialog?

 By definition, in that case, the second transaction would not start
 until the dialog was closed.


Good, but



  If not, then it may cause
  the app to get confused and never allow the dialog to be closed (e.g.,
  perhaps the close
  action is predicated on a database query.)

 That is true, but it is an easily reproducible, deterministic
 application bug. It also doesn't destabilize the environment -- by
 making tabs or dialogs unclosable or whatever.


Well, the problem is that the creator of the transaction and the code
associated with the showModalDialog call may not be related.  The
showModalDialog code might normally be used outside the context of a
transaction, in which case the code would normally work fine.  However, if
triggered from within a transaction, the dialog would be stuck.

-Darin


Re: [whatwg] Storage mutex and cookies can lead to browser deadlock

2009-09-02 Thread Darin Fisher
On Tue, Sep 1, 2009 at 4:31 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Wed, Aug 26, 2009 at 3:24 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Wed, Aug 26, 2009 at 3:05 PM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Wed, Aug 26, 2009 at 2:54 PM, Jeremy Orlow jor...@chromium.orgwrote:

 Is there any data (or any way to collect the data) on how much of the
 web IE and Chrome's current behavior has broken?  Given that there hasn't
 been panic in the streets, I'm assuming approximately 0%?


 We previously had a lengthy discussion about this.

 If a site has a cookie race that causes a problem in IE/Chrome one in
 every 10,000 page loads, are you comfortable with that?


 I'm much more comfortable with that than the cost of a global mutex that
 all cookies and LocalStorage share.  There are other ways to come about this
 problem (like developer tools).

 I'm pretty sure Chromium has no intention of implementing a global storage
 mutex and putting all cookie access under it.  Has anyone heard anything
 (either way) from Microsoft?  Are there any browsers moving to a
 multi-event-loop (be it multi-threaded or multi-process) based model that
 intend to implement this?  If not, then it would seem like the spec is not
 grounded in reality.


 Does the silence mean that no one has any intention of implementing this?
  If so, maybe we should resign ourselves to a break in the single threaded
 illusion for cookies.  This doesn't seem too outlandish considering that web
 servers working with cookies will never have such a guarantee and given that
 we have no evidence of widespread breakage with IE 8 and Chrome.


IE 6 is also multi-process: you can poke at WinINet from any application and
change the cookies for IE.

-darin



 If we were to get rid of the storage mutex for cookie manipulation (as I
 believe we should) maybe we should re-examine it for local storage.  At a
 minimum, it could be implemented as a per-origin mutex.  But I feel strongly
 we should go further.  Why not have an asynchronous mechanism for atomic
 updates?  For example, if I wanted to write an ATM application, I would have
 to do the following:

 var accountDelta = /* something */;
 window.localStorage.executeAtomic(function() {
 localStorage.accountBalance = localStorage.accountBalance +
 accountDelta;
 });

 Alternatively, we could make it so that each statement is atomic, but that
 you have to use such a mechanism for anything more complicated. For example:

 localStorage.accountBalance = localStorage.accountBalance + accountDelta;
  // It's atomic, so no worries!
 var balance = localStorage.accountBalance;  /* Oh no!  This isn't safe
 since it's implemented via multiple statements... */
 localStorage.accountBalance = balance + accountDelta;  /* We should
 have used localStorage.executeAtomic! */

 Such ideas would definitely lighten lock contention and possibly eliminate
 the need for yieldForStorageUpdates (formerly getStorageUpdates).  Another
 major bonus is that it'd allow us to expose localStorage to workers again,
 which is one of the top complaints I've gotten when talking to web
 developers about localStorage.

 I know this is radical stuff, but the way things are speced currently just
 are not practical.

 J
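
Jeremy's executeAtomic idea above can be modeled as a queue that runs each
callback to completion before the next queued callback starts, which is what
makes the read-modify-write ATM example safe. `makeAtomicStore` and the
queue are illustrative assumptions, not the proposed spec; a real
implementation would have to serialize access across pages and processes,
not just within one script context as this sketch does.

```javascript
// In-memory model of a storage object with an executeAtomic method.
function makeAtomicStore() {
  const data = new Map();
  const queue = [];     // atomic blocks waiting to run
  let running = false;  // true while a block (or chain of blocks) runs
  const store = {
    getItem: (k) => (data.has(k) ? data.get(k) : null),
    setItem: (k, v) => { data.set(k, String(v)); },
    // Run fn with exclusive access.  If a block is already running
    // (e.g. fn was queued from inside another block), it is deferred
    // until the current block completes -- no interleaving.
    executeAtomic(fn) {
      queue.push(fn);
      if (running) return;
      running = true;
      while (queue.length) queue.shift()(store);
      running = false;
    },
  };
  return store;
}

// The ATM example from the thread, written against this model:
const store = makeAtomicStore();
store.setItem("accountBalance", "100");
const accountDelta = 25;
store.executeAtomic((s) => {
  s.setItem("accountBalance", Number(s.getItem("accountBalance")) + accountDelta);
});
```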



Re: [whatwg] Storage mutex

2009-08-26 Thread Darin Fisher
On Sun, Aug 23, 2009 at 11:33 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Sat, Aug 22, 2009 at 10:22 PM, Jeremy Orlow jor...@chromium.orgwrote:

 On Sat, Aug 22, 2009 at 5:54 AM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Wed, Aug 19, 2009 at 11:26 AM, Jeremy Orlow jor...@chromium.orgwrote:

 First of all, I was wondering why all user prompts are specified as
 must release the storage mutex (
 http://dev.w3.org/html5/spec/Overview.html#user-prompts).  Should this
 really say must instead of may?  IIRC (I couldn't find the original
 thread, unfortunately) this was added because of deadlock concerns.  It
 seems like there might be some UA implementation specific ways this could
 deadlock and there is the question of whether we'd want an alert() while
 holding the lock to block other execution requiring the lock, but I don't
 see why the language should be must.  For Chromium, I don't think we'll
 need to release the lock for any of these, unless there's some
 deadlock scenario I'm missing here.


 So if one page grabs the lock and then does an alert(), and another page
 in the same domain tries to get the lock, you're going to let the latter
 page hang until the user dismisses the alert in the first page?


 Yes.  And I agree this is sub-optimal, but shouldn't it be left up to the
 UAs what to do?  I feel like this is somewhat of an odd case to begin with
 since alerts lock up most (all?) browsers to a varying degrees even without
 using localStorage.


 That behaviour sounds worse than what Firefox currently does, where an
 alert disables input to all tabs in the window (which is already pretty
 bad), because it will make applications in visually unrelated tabs and
 windows hang.


You can have script connections that span multiple tabs in multiple windows,
so in order to preserve the run-to-completion semantics of JavaScript, it is
important that window.{alert,confirm,prompt,showModalDialog} be modal across
all windows in the browser.  This is why those APIs suck rocks, and we
should never create APIs like them again.





  Given that different UAs are probably going to have
 other scenarios where they have to drop the lock (some of them may even be
 purely implementational issues), should we add some way for us to notify
 scripts the lock was dropped?  A normal event isn't going to be of much 
 use,
 since it'll fire after the scripts execution ends (so the lock would have
 been dropped by then anyway).  A boolean doesn't seem super useful, but 
 it's
 better than nothing and could help debugging.  Maybe fire an exception?  
 Are
 there other options?


 A generation counter might be useful.


 Ooo, I like that idea.  When would the counter increment?  It'd be nice if
 it didn't increment if the page did something synchronous but no one else
 took the lock in the mean time.


 Defining no-one else may be difficult. I haven't thought this through, to
 be honest, but I think you could update the counter every time the storage
 mutex is released and the shared state was modified since the storage mutex
 was acquired. Reading the counter would acquire the storage mutex. You'd
 basically write

 var counter = window.storageMutexGenerationCounter;
 ... do lots of stuff ...
 if (window.storageMutexGenerationCounter != counter) {
   // abort, or refresh local state, or something
 }

 I'm not sure what you'd do if you discovered an undesired lock-drop,
 though. If you can't write something sensible instead of abort, or
 something, it's not worth doing.


Implementation-wise, the easiest thing to support is a boolean that becomes
true when the lock is released and false when the lock is acquired.  Trying
to update a counter based on modifications to the local storage backend
which may be happening on another thread seems like more effort than it is
worth.

But, what would you call this boolean?  storageMayHaveBeenUpdated? :-P

I'm struggling to find a good use case for this.




  But getStorageUpdates is still not the right name for it.  The only way
 that there'd be any updates to get is if, when you call the function,
 someone else takes the lock and makes some updates.  Maybe it should be
 yieldStorage (or yieldStorageMutex)?  In other words, maybe the name should
 imply that you're allowing concurrent updates to happen?


 I thought that's what getStorageUpdates implied :-).


The getStorageUpdates name seems pretty decent to me when considering it
from the perspective of the caller.  The caller is saying that they are OK
with being able to see changes made to the localStorage by other threads.
 This cleverly avoids the need to talk about locks, which seems like a good
thing.  It is okay for there to be no updates to storage.

-Darin



Re: [whatwg] Storage mutex

2009-08-26 Thread Darin Fisher
On Wed, Aug 26, 2009 at 1:27 AM, Jeremy Orlow jor...@chromium.org wrote:

 On Wed, Aug 26, 2009 at 12:51 AM, Darin Fisher da...@chromium.org wrote:

 On Sun, Aug 23, 2009 at 11:33 PM, Robert O'Callahan rob...@ocallahan.org
  wrote:

 That behaviour sounds worse than what Firefox currently does, where an
 alert disables input to all tabs in the window (which is already pretty
 bad), because it will make applications in visually unrelated tabs and
 windows hang.


 You can have script connections that span multiple tabs in multiple
 windows, so in order to preserve the run-to-completion semantics of
 JavaScript, it is important that
 window.{alert,confirm,prompt,showModalDialog} be modal across all windows in
 the browser.  This is why those APIs suck rocks, and we should never create
 APIs like them again.


 I don't understand your point here.  Are you saying that the current
 firefox behavior is not correct, that releasing the storage lock on these
 events is not correct, or something else?


I meant that the current Firefox behavior is technically incorrect.  No one
likes app-modal dialogs, but how else can you guarantee run-to-completion
semantics?  How else do you prevent other scripts from modifying your state
while you are stuck calling into window.alert()?




 Defining no-one else may be difficult. I haven't thought this through, to
 be honest, but I think you could update the counter every time the storage
 mutex is released and the shared state was modified since the storage mutex
 was acquired. Reading the counter would acquire the storage mutex. You'd
 basically write

 var counter = window.storageMutexGenerationCounter;
 ... do lots of stuff ...
 if (window.storageMutexGenerationCounter != counter) {
   // abort, or refresh local state, or something
 }

 I'm not sure what you'd do if you discovered an undesired lock-drop,
 though. If you can't write something sensible instead of abort, or
 something, it's not worth doing.


 Implementation-wise, the easiest thing to support is a boolean that
 becomes true when the lock is released and false when the lock is acquired.
  Trying to update a counter based on modifications to the local storage
 backend which may be happening on another thread seems like more effort than
 it is worth.


 Such a boolean could be useful, but I disagree with the assertion that the
 implementation is significantly more difficult.  I'm pretty sure both would
 be trivial in Chromium, for example.


incrementing a counter on each modification to the database would involve
some broadcasting of notifications to each renderer, or we'd need to store
the counter in shared memory.  either seems unfortunate given the debugging
use case.

if we only record the fact that getStorageUpdates (or equivalent) was
called, then it is just a local boolean in the renderer process.  that seems
substantially simpler to implement without performance penalty.
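A minimal sketch of the renderer-local boolean Darin describes, for illustration only: the names (createStorageLockTracker, getStorageUpdates, storageMayHaveBeenUpdated) come from the proposals in this thread and are not a real API, and the real flag would live in browser internals rather than page script.

```javascript
// Hypothetical sketch: a renderer-local flag recording that the storage
// mutex may have been released, so no cross-process counter or shared
// memory is needed. All names here follow the thread's proposals and
// are assumptions, not a shipping API.
function createStorageLockTracker() {
  let mayHaveBeenUpdated = false;
  return {
    // Called whenever the page explicitly yields the lock.
    getStorageUpdates() {
      mayHaveBeenUpdated = true; // just flip a local boolean, no IPC
    },
    // Called when the lock is (re)acquired on the next storage access.
    onLockAcquired() {
      mayHaveBeenUpdated = false;
    },
    get storageMayHaveBeenUpdated() {
      return mayHaveBeenUpdated;
    },
  };
}
```

The debugging use case then reduces to checking one local boolean after a run of script, which is the "substantially simpler" implementation the message argues for.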






  But, what would you call this boolean?  storageMayHaveBeenUpdated? :-P

 I'm struggling to find a good use case for this.


 None of the ones I already listed seemed interesting?  If nothing else, I
 would think debugability would be a key one.  If we're going to do something
 halfway magical, we should make it possible for developers to know it
 happened, right??

 The getStorageUpdates name seems pretty decent to me when considering it
 from the perspective of the caller.  The caller is saying that they are OK
 with being able to see changes made to the localStorage by other threads.
  This cleverly avoids the need to talk about locks, which seems like a good
 thing.  It is okay for there to be no updates to storage.


 So the use case I've had in my mind that maybe isn't clear is this:

 localStorage.getItem/setItem
 navigator.getStorageUpdates()
  localStorage.getItem/setItem

 In other words, no processing or anything between calls.

 If the act of calling getStorageUpdates gives the lock to everyone who's
 waiting for it before letting the caller get it again, then I guess I can
 buy this argument.


right, this ^^^ is what i meant.

-darin
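The calling pattern being agreed on above can be sketched as follows. Both localStorage and navigator.getStorageUpdates are mocked here (getStorageUpdates was a proposal on this list, not a shipping API), so only the shape of the calling code is meaningful:

```javascript
// Sketch of the proposed yield pattern, with mocked localStorage and
// navigator.getStorageUpdates (assumptions, not real APIs with these
// semantics in any shipping browser).
const store = new Map();
const localStorage = {
  getItem: (k) => (store.has(k) ? store.get(k) : null),
  setItem: (k, v) => store.set(k, String(v)),
};
const navigator = {
  // The yield point: while this call runs, other pages sharing the
  // storage area may acquire the mutex and commit their writes.
  getStorageUpdates() {
    store.set("visits", "42"); // simulate another page writing
  },
};

localStorage.setItem("visits", "1");           // script holds the storage mutex
navigator.getStorageUpdates();                 // explicitly yield the mutex
const visits = localStorage.getItem("visits"); // may now observe foreign writes
```

The key property, per the thread, is that the yield call hands the lock to every waiter before the caller reacquires it, so the second read may legitimately differ from the first.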


Re: [whatwg] Run to completion in the face of modal dialog boxes (WAS: Storage mutex)

2009-08-26 Thread Darin Fisher
On Wed, Aug 26, 2009 at 12:49 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Wed, Aug 26, 2009 at 11:17 AM, Darin Fisher da...@chromium.org wrote:

 On Wed, Aug 26, 2009 at 1:27 AM, Jeremy Orlow jor...@chromium.orgwrote:

 On Wed, Aug 26, 2009 at 12:51 AM, Darin Fisher da...@chromium.orgwrote:

  On Sun, Aug 23, 2009 at 11:33 PM, Robert O'Callahan 
 rob...@ocallahan.org wrote:

 That behaviour sounds worse than what Firefox currently does, where an
 alert disables input to all tabs in the window (which is already pretty
 bad), because it will make applications in visually unrelated tabs and
 windows hang.


 You can have script connections that span multiple tabs in multiple
 windows, so in order to preserve the run-to-completion semantics of
 JavaScript, it is important that
 window.{alert,confirm,prompt,showModalDialog} be modal across all windows 
 in
 the browser.  This is why those APIs suck rocks, and we should never create
 APIs like them again.


 I don't understand your point here.  Are you saying that the current
 firefox behavior is not correct, that releasing the storage lock on these
 events is not correct, or something else?


 I meant that the current Firefox behavior is technically incorrect.  No
 one likes app modal dialogs, but how else can you guarantee
 run-to-completion semantics? How else do you prevent other scripts from
 modifying your state while you are stuck calling into window.alert()?


 I don't know much about this issue, but it seems like something that should
 either be fixed in Firefox (and other browsers?) or changed in the spec.
  I'm interested to hear if others have thoughts on it.


Chrome and Safari both implement app-modal alerts.  Firefox and IE implement
window modal, which is clearly buggy, but of course the world hasn't
imploded.  I haven't tested Opera.

Personally, I would like to change Chrome to not put up app modal alerts.  I
think it is bad UI, but I'm not sure how to do so without also breaking the
contract that JavaScript execution appear single threaded.

-Darin


Re: [whatwg] Storage mutex

2009-08-26 Thread Darin Fisher
On Wed, Aug 26, 2009 at 12:49 AM, Darin Fisher da...@google.com wrote:

 On Sun, Aug 23, 2009 at 11:33 PM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Sat, Aug 22, 2009 at 10:22 PM, Jeremy Orlow jor...@chromium.orgwrote:


  But getStorageUpdates is still not the right name for it.  The only way
 that there'd be any updates to get is if, when you call the function,
 someone else takes the lock and makes some updates.  Maybe it should be
 yieldStorage (or yieldStorageMutex)?  In other words, maybe the name should
 imply that you're allowing concurrent updates to happen?


 I thought that's what getStorageUpdates implied :-).


 The getStorageUpdates name seems pretty decent to me when considering it
 from the perspective of the caller.  The caller is saying that they are OK
 with being able to see changes made to the localStorage by other threads.
  This cleverly avoids the need to talk about locks, which seems like a good
 thing.  It is okay for there to be no updates to storage.

 -Darin



What about allowStorageUpdates?

-Darin


Re: [whatwg] Run to completion in the face of modal dialog boxes (WAS: Storage mutex)

2009-08-26 Thread Darin Fisher
On Wed, Aug 26, 2009 at 12:54 PM, Darin Fisher da...@chromium.org wrote:

 On Wed, Aug 26, 2009 at 12:49 PM, Jeremy Orlow jor...@chromium.orgwrote:

 On Wed, Aug 26, 2009 at 11:17 AM, Darin Fisher da...@chromium.orgwrote:

 On Wed, Aug 26, 2009 at 1:27 AM, Jeremy Orlow jor...@chromium.orgwrote:

 On Wed, Aug 26, 2009 at 12:51 AM, Darin Fisher da...@chromium.orgwrote:

  On Sun, Aug 23, 2009 at 11:33 PM, Robert O'Callahan 
 rob...@ocallahan.org wrote:

 That behaviour sounds worse than what Firefox currently does, where an
 alert disables input to all tabs in the window (which is already pretty
 bad), because it will make applications in visually unrelated tabs and
 windows hang.


 You can have script connections that span multiple tabs in multiple
 windows, so in order to preserve the run-to-completion semantics of
 JavaScript, it is important that
 window.{alert,confirm,prompt,showModalDialog} be modal across all windows 
 in
 the browser.  This is why those APIs suck rocks, and we should never 
 create
 APIs like them again.


 I don't understand your point here.  Are you saying that the current
 firefox behavior is not correct, that releasing the storage lock on these
 events is not correct, or something else?


 I meant that the current Firefox behavior is technically incorrect.  No
 one likes app modal dialogs, but how else can you guarantee
 run-to-completion semantics? How else do you prevent other scripts from
 modifying your state while you are stuck calling into window.alert()?


 I don't know much about this issue, but it seems like something that
 should either be fixed in Firefox (and other browsers?) or changed in the
 spec.  I'm interested to hear if others have thoughts on it.


 Chrome and Safari both implement app-modal alerts.  Firefox and IE
 implement window modal, which is clearly buggy, but of course the world
 hasn't imploded.  I haven't tested Opera.

 Personally, I would like to change Chrome to not put up app modal alerts.
  I think it is bad UI, but I'm not sure how to do so without also breaking
 the contract that JavaScript execution appear single threaded.

 -Darin



Also, just for completeness, if you consider scoping an alert to a window,
then what happens when an alert is generated by another window?  If each
alert is implemented using a nested event loop, then closing the first alert
will not return execution control back to the page that called alert.

So, the user will be left with a dead browser window.  This is very similar
to the problem that exists with app modal alerts where one window is
inactive while another is showing an alert.

Without something like co-routines, I'm not sure how to solve this.

-Darin


Re: [whatwg] SharedWorkers and the name parameter

2009-08-18 Thread Darin Fisher
I agree.  Moreover, since a shared worker identified by a given name cannot
be navigated elsewhere, the name isn't all that synonymous with other
usages of names (e.g., window.open).  At the very least, it would seem
helpful to scope the name to the URL to avoid the name conflict issue.

-Darin
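The two scoping schemes in question can be sketched as key-construction functions for a shared-worker registry. Both function names here are illustrative assumptions, not real APIs; they just make the collision behavior concrete:

```javascript
// Sketch of the two identity schemes discussed in this thread.
// scopeNameToUrl is roughly what Darin suggests; scopeNameToOrigin is
// roughly what the spec at the time implied. Both build a lookup key
// for a hypothetical shared-worker registry.
function scopeNameToOrigin(origin, name) {
  // Any two pages on the same origin reusing a name collide, even if
  // they point the name at different scripts -- hence the awkward
  // "name already exists for a different url" error condition.
  return `${origin}|${name}`;
}

function scopeNameToUrl(url, name) {
  // Conflicts can only occur among users of the same script URL, so
  // independent parts of one origin need not coordinate names.
  return `${url}|${name}`;
}
```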




On Mon, Aug 17, 2009 at 3:53 PM, Michael Nordman micha...@google.comwrote:

 What purpose does the 'name' serve? It just seems unnecessary to have the
 notion of 'named' workers. They need to be identified, but the url, including
 the fragment part, could serve that purpose just fine without a separate
 'name'. The 'name' alone is not enough to identify the worker; the (url, name)
 pair is the identifier. Can the 'name' be used independently of the 'url' in
 any way?

 * From within a shared worker context, it is exposed in the global scope.
 This could inform the worker about what 'mode' to run in.  The location,
 including the fragment, is also exposed within a shared worker context; the
 fragment part could just as well serve this 'modality' purpose.

 * From the outside, it has to be provided as part of the identifier to
 create or connect to an shared worker. And there are awkward error
 conditions arising when a worker with 'name' already exists for a different
 'url'. The awkward error conditions would be eliminated if id == url.

 * Is 'name' visible to the web developer any place besides those two?


 On Mon, Aug 17, 2009 at 2:44 PM, Mike Shaver mike.sha...@gmail.comwrote:

 On Sat, Aug 15, 2009 at 8:29 PM, Jim Jewettjimjjew...@gmail.com wrote:
  Currently, SharedWorkers accept both a url parameter and a name
  parameter - the purpose is to let pages run multiple SharedWorkers
 using the
  same script resource without having to load separate resources from the
  server.
 
  [ request that name be scoped to the URL, rather than the entire
 origin,
  because not all parts of example.com can easily co-ordinate.]
 
  Would there be a problem with using URL fragments to distinguish the
 workers?
 
  Instead of:
  new SharedWorker("url.js", "name");
 
  Use
  new SharedWorker("url.js#name");
  and if you want a duplicate, call it
  new SharedWorker("url.js#name2");
 
  The normal semantics of fragments should prevent the repeated server
 fetch.

 I don't think that it's very natural for the name to be derived from
 the URL that way.  Ignoring that we're not really identifying a
 fragment, it seems much less self-documenting than a name parameter.
 I would certainly expect, from reading that syntax, for the #part to
 be calling out a sub-script (property or function or some such) rather
 than changing how the SharedWorker referencing it is named!

 Mike





Re: [whatwg] Issues with Web Sockets API

2009-06-26 Thread Darin Fisher
On Fri, Jun 26, 2009 at 9:46 AM, Drew Wilson atwil...@google.com wrote:


 On Fri, Jun 26, 2009 at 9:18 AM, James Robinson jam...@google.com wrote:

 However, users can't usefully check the readyState to see if the WebSocket
 is still open because there are not and cannot be any
 synchronization guarantees about when the WebSocket may close.


 Is this true? Based on our prior discussion surrounding cookies, it seems
 like as a general rule we try to keep state from changing dynamically while
 JS code is executing for exactly these reasons.



I think this is a very different beast.  The state of a network connection
may change asynchronously whether we like it or not.  Unlike the question of
who may access cookies or local storage, the state of the network connection
is not something we solely control.

-Darin
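The race James describes can be made concrete with a small mock. This is a sketch, not real WebSocket behavior: makeFlakySocket is an invented helper that models the connection dying between a readyState check and the send call, which is why the defensive pattern is to wrap send() and handle failure rather than trust readyState:

```javascript
// Sketch: a readyState check before send() cannot be reliable, because
// the network can close the connection at any moment. makeFlakySocket
// is a hypothetical mock that models the worst case deterministically.
const OPEN = 1, CLOSED = 3; // mirror WebSocket's readyState constants

function makeFlakySocket() {
  return {
    readyState: OPEN,
    send(msg) {
      // Model an asynchronous close landing just before this send is
      // processed, even though readyState looked OPEN a moment ago.
      this.readyState = CLOSED;
      throw new Error("INVALID_STATE_ERR: socket closed");
    },
  };
}

function safeSend(ws, msg) {
  try {
    ws.send(msg);
    return true;
  } catch (e) {
    return false; // queue for retry, surface an error to the app, etc.
  }
}
```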


Re: [whatwg] Issues with Web Sockets API

2009-06-26 Thread Darin Fisher
On Fri, Jun 26, 2009 at 3:16 PM, Drew Wilson atwil...@google.com wrote:



 On Fri, Jun 26, 2009 at 2:11 PM, James Robinson jam...@google.com wrote:



 Forcing applications to build their own send/ack functionality would be
 pretty tragic considering that WebSockets are built on top of TCP.

 - James


 Every time I've written a response/reply protocol on TCP I've needed to put
 in my own acks - how else do you know your message has been delivered to the
 remote app layer?


This seems especially true given that WebSocket connections may be proxied.

-Darin
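An application-level ack scheme of the kind Drew describes can be sketched as follows. This is an illustrative assumption, not a WebSocket feature: each outgoing message carries an id, and the peer echoes the id back once its app layer has consumed the message. Transport is abstracted as a plain callback so the logic is self-contained; over a real WebSocket, sendRaw would be something like `(frame) => ws.send(JSON.stringify(frame))`.

```javascript
// Sketch of an app-layer ack protocol over an unreliable-delivery
// message channel. createAckChannel and the frame shapes are invented
// for illustration.
function createAckChannel(sendRaw) {
  let nextId = 0;
  const pending = new Map(); // id -> callback fired when the peer acks

  return {
    send(payload, onAck) {
      const id = nextId++;
      pending.set(id, onAck);
      sendRaw({ type: "msg", id, payload });
    },
    // Feed every frame received from the peer into this method.
    receive(frame) {
      if (frame.type === "msg") {
        // The app layer has now consumed the message: confirm it.
        sendRaw({ type: "ack", id: frame.id });
      } else if (frame.type === "ack" && pending.has(frame.id)) {
        pending.get(frame.id)();
        pending.delete(frame.id);
      }
    },
  };
}
```

Because a proxy (or TCP itself) only guarantees delivery to the next hop, this end-to-end ack is what tells the sender the remote application actually processed the message.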




 One could argue that WebSockets should do this for you, but I like leaving
 this up to the app as it gives them more flexibility.

 -atw



