CfC: Proposal to add web packaging / asset compression

2012-02-14 Thread Paul Bakaus
Hi everybody,

This is a proposal to add a packaging format that is transparent to browsers 
to the charter. At Zynga, we have identified this as one of our most pressing 
issues. Developers want to be able to send a collection of assets to the 
browser in a single request, instead of hundreds.

Today, we misuse image and audio sprites, slicing them up again as base64 only 
to put them into weird caches. These are workarounds, and ugly ones at that. 
None of these workarounds is satisfying, whether in terms of robustness, 
performance or, simply, a sane API. Coincidentally, this is also one of the 
most pressing issues for WebGL: since WebGL games deal with a lot of assets, 
proper solutions must be found.

A ticket at Mozilla, describing the issue further, has been opened by us here: 
https://bugzilla.mozilla.org/show_bug.cgi?id=681967

Here's an actual code draft I attached to the ticket:


window.loadPackage('package.webpf', function() {
  var img = new Image();
  img.src = 'package.webpf/myImage.png';
});

Or alternatively, with a local storage system (I prefer option one):


window.loadPackage('package.webpf', function(files) {
  files[0].saveTo('myImage.png');
  var img = new Image();
  img.src = 'local://absolute path of url of site/myImage.png';
});

It's no big deal if the whole API looks entirely different when it's done. The 
format needs to handle delta updates well and must be cacheable. It needs to 
be transparent to the browser, and assets of uncompressed web packages need to 
be referenceable from CSS. I am aware this is a rather inconvenient addition 
to work on, but there is an immediate need.

This is also a call for implementors, testers and editors. I am unfortunately 
not experienced enough to handle any of those jobs.

Thanks, I'm looking forward to feedback!

Paul Bakaus
W3C Rep and Studio CTO, Zynga


Re: CfC: Proposal to add web packaging / asset compression

2012-02-14 Thread Charles Pritchard

On 2/14/2012 1:24 AM, Paul Bakaus wrote:

window.loadPackage('package.webpf', function() {
 var img = new Image();
 img.src = package.webpf/myImage.png;
})

Or alternatively, with a local storage system (I prefer option one):

window.loadPackage('package.webpf', function(files) {
 files[0].saveTo('myImage.png');
 var img = new Image();
 img.src = local://absolute path of url of site/myImage.png;
})



How about picking up FileSystem API semantics?

var img = new Image();
img.onload = myImageHandler;
var package = 'package.webpf';
var sprite = 'myImage.png';
window.requestFileSystem(package, 0, function(fs) {
  myPackagePolyfill(function(root) {
    img.src = root.getFile(sprite).toURL();
  }, fs.root, package);
});

Packages would be calculated with the temporary file system quota.

I'm fine with it being a different method name, but re-using toURL makes 
a lot of sense.


I've heard more than once about how we shouldn't be requiring more 
formats. So I'd leave the mount format undefined, and just throw an 
error if it's not supported.
This still brings the package into the browser cache; it can be 
re-requested via XHR + ArrayBuffer (or Blob), and manually parsed by a JS 
polyfill.


From a practical perspective:
Use uncompressed zip for packaging; it's trivial to support in JS these 
days.
Content is already compressed, typically, and deflate can be used in the 
request layer if the client-server relationship supports it.


Use the polyfill method to unpack the files into the temporary file 
system if it's supported.
And if it's not supported, your polyfill can still use createObjectURL 
and file slicing to handle business efficiently.
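
For concreteness, a minimal sketch of the approach described above: fetch the 
package once with XHR, then hand out object URLs for its entries via Blob 
slicing. parseZipEntries is a hypothetical helper (an uncompressed "stored" 
zip is simple enough to index in JS); this is only an illustration of the 
idea, not an actual polyfill.

function loadPackagePolyfill(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.responseType = 'arraybuffer';  // the package sits in the HTTP cache like any resource
  xhr.onload = function() {
    var buffer = xhr.response;
    // Hypothetical: returns [{name, offset, size, type}, ...] pointing at the
    // stored (uncompressed) data of each entry inside the zip.
    var entries = parseZipEntries(buffer);
    var packageBlob = new Blob([buffer]);
    var urls = {};
    entries.forEach(function(entry) {
      // Slice the entry's bytes out of the package and expose them as a URL.
      var blob = packageBlob.slice(entry.offset, entry.offset + entry.size, entry.type);
      urls[entry.name] = URL.createObjectURL(blob);
    });
    callback(urls);
  };
  xhr.send();
}

// Usage: the image comes straight out of the packaged bytes.
loadPackagePolyfill('package.zip', function(urls) {
  var img = new Image();
  img.src = urls['myImage.png'];
});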


Just try to clean up somewhere with a myPackagePolyfill.dispose(package);

-Charles


Re: Synchronous postMessage for Workers?

2012-02-14 Thread Arthur Barstow

On 2/14/12 2:02 AM, ext David Bruant wrote:

Le 13/02/2012 20:44, Ian Hickson a écrit :

Should we just tell authors to get used to the async style?

I think we should. More constructs are coming in ECMAScript. Things
related to language concurrency should probably be left to the core
language unless there is an extreme need (which there isn't as far as I
know).


David - if you have some recommended reading(s) re concurrency 
constructs that are relevant to this discussion and are coming to 
ECMAScript, please let me know.


(I tend to agree that if there isn't some real urgency here, we should be 
careful [with our specs].)


-Thanks, ArtB





Re: [FileAPI] createObjectURL isReusable proposal

2012-02-14 Thread Bronislav Klučka



On 14.2.2012 5:56, Jonas Sicking wrote:

On Thu, Feb 2, 2012 at 4:40 PM, Ian Hickson i...@hixie.ch wrote:

On Thu, 2 Feb 2012, Arun Ranganathan wrote:

2. Could we modify things so that img.src = blob is a reality? Mainly,
if we modify things for the *most common* use case, that could be useful
in mitigating some of our fears. Hixie, is this possible?

Anything's possible, but I think the pain here would far outweigh the
benefits. There would be some really hard questions to answer, too (e.g.
what would innerHTML return? If you copied such an image from a
contentEditable section and pasted it lower down the same section, would
it still have the image?).

We could define that it returns an empty src attribute, which would
break the copy/paste example. That's the same behavior you'd get with
someone revoking the URL upon load anyway.

/ Jonas



The point of a reusable Blob URL is compatibility with regular URLs; not 
having reusable URLs would create an unpleasant dichotomy in data 
manipulation...


Brona



Re: [FileAPI] createObjectURL isReusable proposal

2012-02-14 Thread Charles Pritchard

On 2/14/2012 5:35 AM, Bronislav Klučka wrote:



On 14.2.2012 5:56, Jonas Sicking wrote:

On Thu, Feb 2, 2012 at 4:40 PM, Ian Hickson i...@hixie.ch wrote:

On Thu, 2 Feb 2012, Arun Ranganathan wrote:

2. Could we modify things so that img.src = blob is a reality? Mainly,
if we modify things for the *most common* use case, that could be 
useful

in mitigating some of our fears. Hixie, is this possible?

Anything's possible, but I think the pain here would far outweigh the
benefits. There would be some really hard questions to answer, too 
(e.g.

what would innerHTML return? If you copied such an image from a
contentEditable section and pasted it lower down the same section, 
would

it still have the image?).

We could define that it returns an empty src attribute, which would
break the copy/paste example. That's the same behavior you'd get with
someone revoking the URL upon load anyway.

/ Jonas



The point of reusable Blob URL is the compatibility with regular URL, 
not having reusable URL would create unpleasant dichotomy in data 
manipulating...


What do you think of a global release mechanism? Such as 
URL.revokeAllObjectUrls();





Re: [FileAPI] createObjectURL isReusable proposal

2012-02-14 Thread Bronislav Klučka



On 14.2.2012 14:39, Charles Pritchard wrote:

On 2/14/2012 5:35 AM, Bronislav Klučka wrote:



On 14.2.2012 5:56, Jonas Sicking wrote:

On Thu, Feb 2, 2012 at 4:40 PM, Ian Hickson i...@hixie.ch wrote:

On Thu, 2 Feb 2012, Arun Ranganathan wrote:
2. Could we modify things so that img.src = blob is a reality? 
Mainly,
if we modify things for the *most common* use case, that could be 
useful

in mitigating some of our fears. Hixie, is this possible?

Anything's possible, but I think the pain here would far outweigh the
benefits. There would be some really hard questions to answer, too 
(e.g.

what would innerHTML return? If you copied such an image from a
contentEditable section and pasted it lower down the same section, 
would

it still have the image?).

We could define that it returns an empty src attribute, which would
break the copy/paste example. That's the same behavior you'd get with
someone revoking the URL upon load anyway.

/ Jonas



The point of reusable Blob URL is the compatibility with regular URL, 
not having reusable URL would create unpleasant dichotomy in data 
manipulating...


What do you think of a global release mechanism? Such as 
URL.revokeAllObjectUrls();


Sounds like a very interesting idea... it could clearly solve a lot of issues 
here (load everything you want at load time and then release it all at once). 
So +1.


But I would still leave some functionality for single-image manipulation; 
there can still be apps with a mixed approach (some images with reusable URLs 
{application data}, some images without {application UI}), or they may not 
even be images (images one time, but some file blob permanent).
I could also go with the reverse approach, with createObjectURL being 
oneTimeOnly by default:

createObjectURL(Blob aBlob, boolean? isPermanent)
instead of the current
createObjectURL(Blob aBlob, boolean? isOneTime)

The fact that the user would have to explicitly specify that such a URL is 
permanent should limit the "I forgot to release something somewhere" cases... 
and I think it would be easier to understand that an explicit request for 
permanent = an explicit release. It would break current implementations, sure, 
but if we are considering changes...
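
A userland approximation of both ideas (URL.revokeAllObjectUrls and the 
isPermanent flag are proposals, not existing API), built on the existing 
URL.createObjectURL / URL.revokeObjectURL, just to make the shape concrete:

var objectUrls = (function() {
  var tracked = [];
  return {
    create: function(blob, isPermanent) {
      var url = URL.createObjectURL(blob);
      if (!isPermanent) {
        tracked.push(url);  // only non-permanent URLs take part in the bulk release
      }
      return url;
    },
    revokeAll: function() {
      tracked.forEach(function(url) { URL.revokeObjectURL(url); });
      tracked = [];
    }
  };
})();

// var url = objectUrls.create(someBlob);        // released by revokeAll()
// var logo = objectUrls.create(logoBlob, true); // explicitly permanent, caller revokes it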


B.





Re: Synchronous postMessage for Workers?

2012-02-14 Thread Charles Pritchard

On 2/14/2012 5:31 AM, Arthur Barstow wrote:

On 2/14/12 2:02 AM, ext David Bruant wrote:

Le 13/02/2012 20:44, Ian Hickson a écrit :

Should we just tell authors to get used to the async style?

I think we should. More constructs are coming in ECMAScript. Things
related to language concurrency should probably be left to the core
language unless there is an extreme need (which there isn't as far as I
know).


David - if you have some recommended reading(s) re concurrency 
constructs that are relevant to this discussion and are coming to 
ECMAScript, please let me know.


(I tend to agree if there isn't some real urgency here, we should be 
careful [with our specs]).


We could still use some kind of synchronous semantic for passing 
gestures between frames.


This issue has popped up in various ways with Google Chrome extensions.

We can't have a user click on frame (a) and have frame (b) treat it as a 
gesture, for things like popup windows and the like.


This is of course intentional, for the purpose of security. But even when 
both frames want it to happen, it can't, as a side effect of the asynchronous 
nature of postMessage.






Re: [FileAPI] createObjectURL isReusable proposal

2012-02-14 Thread Glenn Maynard
2012/2/14 Bronislav Klučka bronislav.klu...@bauglir.com

 The point of reusable Blob URL is the compatibility with regular URL, not
 having reusable URL would create unpleasant dichotomy in data
 manipulating...


The point is avoiding the error-prone need to release resources by hand.

-- 
Glenn Maynard


Re: [FileAPI] createObjectURL isReusable proposal

2012-02-14 Thread Bronislav Klučka



On 14.2.2012 15:20, Glenn Maynard wrote:
2012/2/14 Bronislav Klučka bronislav.klu...@bauglir.com


The point of reusable Blob URL is the compatibility with regular
URL, not having reusable URL would create unpleasant dichotomy in
data manipulating...


The point is avoiding the error-prone need to release resources by hand.

--
Glenn Maynard



Yes, that is why we have this thread; I was talking about Blob URLs...
I'm trying to find a solution that would solve both (sure, I do not mind 
explicit release).
I do not want a solution where working with a set of images would require 
traversing all of them, somehow determining whether each image has a regular 
URL or a blob URL, and going through two different branches.
Suggestions like "we could define that it returns an empty src attribute" 
prohibit any further work with such an image... accessing it tells me 
nothing... is it a Blob image? Is it a new, empty image?


Brona



Re: Synchronous postMessage for Workers?

2012-02-14 Thread Jonas Sicking
On Mon, Feb 13, 2012 at 2:44 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 17 Nov 2011, Joshua Bell wrote:

 Wouldn't it be lovely if the Worker script could simply make a
 synchronous call to fetch data from the Window?

 It wouldn't be so much a synchronous call, so much as a blocking get.


 On Thu, 17 Nov 2011, Jonas Sicking wrote:

 We can only allow child workers to block on parent workers. Never the
 other way around.

 Indeed. And it would have to be limited to the built-in ports that workers
 have, because regular ports can be shunted all over the place and you
 could end up with a deadlock situation.

 It would be easy enough to add a blocking get on DedicatedWorkerGlobalScope.
 Is this something for which there's a lot of demand?

   // blocking get
   // no self.onmessage handler needed
   ...
   var message = self.getMessage();
   ...

 An alternative is to add continuations to the platform:

   // continuation
   // (this is not a formal proposal, just an illustration of the concept)
   var message;
   self.onmessage = function (event) {
     message = event;
     signal('got message'); // queues a task to resume from yieldUntil()
   };
   ...
   yieldUntil('got message');
   ...

 This would be a more general solution and would be applicable in many
 other parts of the platform. As we get more and more async APIs, I think
 it might be worth considering adding this.

 We could add it to HTML as an API rather than adding it to JS as a
 language construct. It would be relatively easy to define, if much harder
 to implement:

   yieldUntil(id) - spin the event loop until the signal() API unblocks
                    this script

   signal(id)     - unblock the script with the oldest invocation of
                    yieldUntil() called with the given ID, if any

 Given our definition of spin the event loop, this doesn't even result,
 per the spec, in nested event loops or anything like that. Note that per
 this definition, signal just queues a task to resume the earlier script,
 it is not your typical coroutine. That is:

   setTimeout(function () {
     console.log('2');
     signal('test');
     console.log('3');
   }, 1000);
   console.log('1');
   yieldUntil('test');
   console.log('4');

 ...logs 1, 2, 3, 4, not 1, 2, 4, 3.

 Anyone object to me adding something like this? Are there any better
 solutions? Should we just tell authors to get used to the async style?

Spinning the event loop is very bug prone. When calling the yieldUntil
function basically anything can happen, including re-entering the same
code. Protecting against the entire world changing under you is very
hard. Add to that that it's very racy. I.e. changes may or may not
happen under you depending on when exactly you call yieldUntil, and
which order events come in.

This is something we've been fighting in Gecko for a long time.
Code that spins the event loop has very often led to terrible bugs,
and so we've been trying to get rid of it as much as possible. And
that is despite our being used to very random things happening under us,
since we often call out into unknown/hostile code (JS code in a web page).

/ Jonas



[Bug 15927] [IndexedDB] Allowing . and in keys specified using keyPath

2012-02-14 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=15927

Odin Hørthe Omdal odi...@opera.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||WONTFIX

--- Comment #2 from Odin Hørthe Omdal odi...@opera.com 2012-02-14 16:35:30 
UTC ---
OK. Not for this version then.

It's a bit arbitrary to only allow valid JavaScript identifiers to be used as
indexes, but sometimes that's just the way the world works.

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.


Re: [CORS] Access-Control-Request-Method

2012-02-14 Thread Anne van Kesteren

On Thu, 22 Dec 2011 17:05:08 +0100, Boris Zbarsky bzbar...@mit.edu wrote:
No, what I mean is this.  Say we enter  
http://dvcs.w3.org/hg/cors/raw-file/tip/Overview.html#cross-origin-request  
with the following state:


* force preflight flag is true
* Request method is simple method
* No author request headers
* Empty preflight cache (not that this matters)

The spec says we should follow the "cross-origin request with preflight"  
algorithm.


Following that link, it says:

   Go to the next step if the following conditions are true:

 For request method there either is a method cache match or it is a
 simple method.

 For every header of author request headers there either is a header
 cache match for the field name or it is a simple header.

Since the method is a simple method and there are no author request  
headers, we skip the preflight and go on to the main request.


Now it's possible that I simply don't understand what this flag is  
_supposed_ to do or that I'm missing something


So the idea behind the force preflight flag is that there's a preflight  
request if upload event listeners are registered, because otherwise you  
can determine the existence of a server. Now the obvious way to fix CORS  
would be to add an additional condition in the text you quoted above,  
namely that the force preflight flag is unset; however, that would mean  
that caching is bypassed too.
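
For concreteness, a minimal illustration of that scenario (other.example.com 
stands in for any cross-origin endpoint): registering an upload event listener 
sets the force preflight flag, so an OPTIONS preflight is sent even though the 
POST below would otherwise qualify as a simple request.

var xhr = new XMLHttpRequest();
xhr.open('POST', 'https://other.example.com/upload', true);
xhr.upload.onprogress = function(e) {  // upload listener => preflight is forced
  console.log('sent ' + e.loaded + ' of ' + e.total + ' bytes');
};
xhr.send(new FormData(document.querySelector('form')));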


How is this implemented in practice?

Jonas, Adam, Odin, any ideas?


--
Anne van Kesteren
http://annevankesteren.nl/



Re: CfC: Proposal to add web packaging / asset compression

2012-02-14 Thread Dimitri Glazkov
Though I don't know what shape this will take, I think this is
definitely worth vigorous research and discussion.

Without trying to derail this effort, I am somewhat interested in how
this thinking can be applied to Web Components, since components may
want to be coupled with various assets.

:DG

On Tue, Feb 14, 2012 at 1:24 AM, Paul Bakaus pbak...@zynga.com wrote:
 Hi everybody,

 This is a proposal to add a packaging format transparent to browsers to the
 charter. At Zynga, we have identified this as one of our most pressuring
 issues. Developers want to be able to send a collection of assets to the
 browser through a single request, instead of hundreds.

 Today, we misuse image and audio sprites, slicing them again as base64 only
 to put them into weird caches. These are workarounds, and ugly ones, as
 well. None of the workarounds is satisfying, either in terms of robustness,
 performance or simply, a sane API. Coincidentally, this is also one of the
 most pressuring issues of WebGL. Since you are dealing with a lot of assets
 with WebGL games, proper solutions must be found.

 A ticket at Mozilla, describing the issue further, has been opened by us
 here: https://bugzilla.mozilla.org/show_bug.cgi?id=681967

 Here's an actual code draft I attached to the ticket:

 window.loadPackage('package.webpf', function() {
 var img = new Image();
 img.src = package.webpf/myImage.png;
 })


 Or alternatively, with a local storage system (I prefer option one):

 window.loadPackage('package.webpf', function(files) {
 files[0].saveTo('myImage.png');
 var img = new Image();
 img.src = local://absolute path of url of site/myImage.png;
 })


 No big deal if the whole API looks entirely different when it's done. The
 format needs to be able to handle delta updates well, and must be cacheable.
 It needs to be transparent to the browser, and assets of uncompressed web
 packages need to be able to be included from CSS. I am aware this is a more
 inconvenient addition to work on, but there is immediate need.

 This is also a call for implementors, testers and editors. I am
 unfortunately not experienced enough to handle any of those jobs.

 Thanks, I'm looking forward to feedback!

 Paul Bakaus
 W3C Rep and Studio CTO, Zynga



Re: CfC: Proposal to add web packaging / asset compression

2012-02-14 Thread Marcos Caceres

I have the itching feeling that a Community Group might be the right place to 
do the exploratory work. Once there is a solid proposal for standardization 
(and hopefully a prototype), it should be brought back here.

To start a community group:  
http://www.w3.org/community/

And since we are top-posting :)… I can see quite a few issues with the proposal 
below (e.g., it seems to rely on file extensions instead of MIME types, seems 
to break some REST principles, lacks error handling: what about file-not-found?), 
plus some nice-to-haves (instead of callbacks, progress events might be nicer, 
so you get notified as each file in the package becomes ready for use… also, a 
streamable format… I think the Moz guys already did a whole bunch of research 
into this last year and proposed it to the WHATWG… the W3C also did a whole 
bunch of work on this about 12 years ago, but I'm having a hard time finding 
the link to that work).
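
Purely as an illustration of the progress-event idea (PackageLoader and its 
events are made up for this sketch, not an existing or proposed API):

var loader = new PackageLoader('/assets/level1.pkg');  // served with a package MIME type
loader.onfileready = function(e) {
  // Fires once per file as it becomes usable, before the whole package arrives.
  if (e.file.name === 'myImage.png') {
    var img = new Image();
    img.src = e.file.url;
  }
};
loader.onprogress = function(e) {
  updateProgressBar(e.loaded / e.total);  // updateProgressBar is the page's own helper
};
loader.onerror = function(e) {
  console.error('package failed to load: ' + e.message);  // e.g. file not found
};
loader.load();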

Again, sorry for the lack of references; hopefully the right people will jump 
in and provide those.  
   
--  
Marcos Caceres


On Tuesday, February 14, 2012 at 6:12 PM, Dimitri Glazkov wrote:

 Though I don't know what shape this will take, I think this is
 definitely worth vigorous research and discussion.
  
 Without trying to derail this effort, I am somewhat interested in how
 this thinking can be applied to Web Components, since components may
 want to be coupled with various assets.
  
 :DG
  
 On Tue, Feb 14, 2012 at 1:24 AM, Paul Bakaus pbak...@zynga.com 
 (mailto:pbak...@zynga.com) wrote:
  Hi everybody,
   
  This is a proposal to add a packaging format transparent to browsers to the
  charter. At Zynga, we have identified this as one of our most pressuring
  issues. Developers want to be able to send a collection of assets to the
  browser through a single request, instead of hundreds.
   
  Today, we misuse image and audio sprites, slicing them again as base64 only
  to put them into weird caches. These are workarounds, and ugly ones, as
  well. None of the workarounds is satisfying, either in terms of robustness,
  performance or simply, a sane API. Coincidentally, this is also one of the
  most pressuring issues of WebGL. Since you are dealing with a lot of assets
  with WebGL games, proper solutions must be found.
   
  A ticket at Mozilla, describing the issue further, has been opened by us
  here: https://bugzilla.mozilla.org/show_bug.cgi?id=681967
   
  Here's an actual code draft I attached to the ticket:
   
  window.loadPackage('package.webpf', function() {
  var img = new Image();
  img.src = package.webpf/myImage.png;
  })
   
   
  Or alternatively, with a local storage system (I prefer option one):
   
  window.loadPackage('package.webpf', function(files) {
  files[0].saveTo('myImage.png');
  var img = new Image();
  img.src = local://absolute path of url of site/myImage.png;
  })
   
   
  No big deal if the whole API looks entirely different when it's done. The
  format needs to be able to handle delta updates well, and must be cacheable.
  It needs to be transparent to the browser, and assets of uncompressed web
  packages need to be able to be included from CSS. I am aware this is a more
  inconvenient addition to work on, but there is immediate need.
   
  This is also a call for implementors, testers and editors. I am
  unfortunately not experienced enough to handle any of those jobs.
   
  Thanks, I'm looking forward to feedback!
   
  Paul Bakaus
  W3C Rep and Studio CTO, Zynga
  






Re: CfC: Proposal to add web packaging / asset compression

2012-02-14 Thread Tab Atkins Jr.
On Tue, Feb 14, 2012 at 1:24 AM, Paul Bakaus pbak...@zynga.com wrote:
 Hi everybody,

 This is a proposal to add a packaging format transparent to browsers to the
 charter. At Zynga, we have identified this as one of our most pressuring
 issues. Developers want to be able to send a collection of assets to the
 browser through a single request, instead of hundreds.

 Today, we misuse image and audio sprites, slicing them again as base64 only
 to put them into weird caches. These are workarounds, and ugly ones, as
 well. None of the workarounds is satisfying, either in terms of robustness,
 performance or simply, a sane API. Coincidentally, this is also one of the
 most pressuring issues of WebGL. Since you are dealing with a lot of assets
 with WebGL games, proper solutions must be found.

I was once a believer in an approach like this, and supported previous
attempts at it like Mozilla's use of a zip + virtual paths.

Now, though, SPDY seems to be moving along nicely enough that we don't
really need to worry about this.  It's already supported in Chrome and
Firefox, and it lets you pull multiple assets in a single connection,
push assets that haven't yet been requested, and prioritize asset
retrieval.  I don't feel there's any real need to worry about asset
packaging formats anymore.

~TJ



Re: [FileAPI] createObjectURL isReusable proposal

2012-02-14 Thread Bronislav Klučka



On 14.2.2012 5:56, Jonas Sicking wrote:

On Thu, Feb 2, 2012 at 4:40 PM, Ian Hickson i...@hixie.ch wrote:

On Thu, 2 Feb 2012, Arun Ranganathan wrote:

2. Could we modify things so that img.src = blob is a reality? Mainly,
if we modify things for the *most common* use case, that could be useful
in mitigating some of our fears. Hixie, is this possible?

Anything's possible, but I think the pain here would far outweigh the
benefits. There would be some really hard questions to answer, too (e.g.
what would innerHTML return? If you copied such an image from a
contentEditable section and pasted it lower down the same section, would
it still have the image?).

We could define that it returns an empty src attribute, which would
break the copy/paste example. That's the same behavior you'd get with
someone revoking the URL upon load anyway.

/ Jonas



On the point of the reusability of a Blob URL and the actual existence of 
such a URL, we must also consider non-media usage: the a element, 
window.open, etc. I do see the idea of preparing data with JS and letting it 
be displayed/downloaded using those methods as quite useful and intuitive.


Brona



Re: Synchronous postMessage for Workers?

2012-02-14 Thread David Bruant
Le 14/02/2012 14:31, Arthur Barstow a écrit :
 On 2/14/12 2:02 AM, ext David Bruant wrote:
 Le 13/02/2012 20:44, Ian Hickson a écrit :
 Should we just tell authors to get used to the async style?
 I think we should. More constructs are coming in ECMAScript. Things
 related to language concurrency should probably be left to the core
 language unless there is an extreme need (which there isn't as far as I
 know).

 David - if you have some recommended reading(s) re concurrency
 constructs that are relevant to this discussion and are coming to
 ECMAScript, please let me know.
* http://wiki.ecmascript.org/doku.php?id=harmony:generators (with yield)
Yield has been in SpiderMonkey for quite some time. I'm not sure the
syntax in the spec (especially the function* functions) is implemented,
but there is a form of it anyway.
Unless 2012 ends because of some Mayan calendar issues, generators will
be part of ECMAScript 6.
** Task.js by Dave Herman, which uses generators. Built on top of
SpiderMonkey generators (so not the upcoming standard), but really worth
having a look at; a small sketch of that style appears below.
http://taskjs.org/
Related blog post:
http://blog.mozilla.com/dherman/2011/03/11/who-says-javascript-io-has-to-be-ugly/


* Concurrency: http://wiki.ecmascript.org/doku.php?id=strawman:concurrency
Probably one of the most ambitious and promising pieces of work. It did
not make the cut for ECMAScript 6, but I'm hopeful it will be part of ES7.
In a nutshell, this brings the event loop into ECMAScript itself. Even in
ES6, ECMAScript will have no concurrency mechanism whatsoever;
concurrency is currently defined in HTML5 (event loop, setTimeout, etc.).
Another addition will be promises.
An already working example of promises can be found at
https://github.com/kriskowal/q
** This proposal is championed by Mark S. Miller (added in copy).
The strawman on concurrency seems largely inspired by what is done in
the E programming language. I highly recommend reading Part III of
Mark's thesis on the topic: http://erights.org/talks/thesis/markm-thesis.pdf
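
A minimal sketch of the generator-driven style mentioned above (in the spirit 
of Task.js, not its actual API), using the proposed function* syntax: spawn() 
runs a generator, and each yielded promise suspends the task until it settles, 
so asynchronous code reads sequentially. fetchData and process are 
hypothetical helpers.

function spawn(generatorFn) {
  var gen = generatorFn();
  function step(result) {
    if (result.done) return;
    result.value.then(
      function(value) { step(gen.next(value)); },
      function(error) { step(gen.throw(error)); }
    );
  }
  step(gen.next());
}

spawn(function* () {
  var data = yield fetchData('myQueryString');  // reads like a blocking call, isn't one
  process(data);
});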

 (I tend to agree if there isn't some real urgency here, we should be
 careful [with our specs]).
I've been participating on es-discuss for more than a year now, following
progress in implementations, and things are moving. They are moving
in the right direction (browser vendors cooperate). Maybe slowly,
but they are moving.

I really think all topics that are low level (like concurrency) should
be left to ECMAScript now, or addressed first on es-discuss before
thinking of a DOM/WebApps API. Low-level things belong to ECMAScript, and
any API we think of should be a library built on top of that. That's the
way I see it, at least, and I'd be happy to discuss it if some disagree.

David



Re: Synchronous postMessage for Workers?

2012-02-14 Thread John J Barton
On Tue, Feb 14, 2012 at 11:14 AM, David Bruant bruan...@gmail.com wrote:
 Le 14/02/2012 14:31, Arthur Barstow a écrit :

 Another addition will be promises.
 An already working example of promises can be found at
 https://github.com/kriskowal/q

Just to point out that promises are beyond the working-example stage;
they are deployed in the major JS frameworks, e.g.:

http://dojotoolkit.org/reference-guide/dojo/Deferred.html
http://api.jquery.com/category/deferred-object/

The Q library is more like an exploration of implementation issues in
promises, trying to push them further.

jjb



[Bug 15987] New: [IndexedDB] Invalid dates should not be valid keys

2012-02-14 Thread bugzilla
https://www.w3.org/Bugs/Public/show_bug.cgi?id=15987

   Summary: [IndexedDB] Invalid dates should not be valid keys
   Product: WebAppsWG
   Version: unspecified
  Platform: All
OS/Version: All
Status: NEW
  Severity: normal
  Priority: P2
 Component: Indexed Database API
AssignedTo: dave.n...@w3.org
ReportedBy: jsb...@chromium.org
 QAContact: member-webapi-...@w3.org
CC: m...@w3.org, public-webapps@w3.org


Section 3.1.3 Keys:

... A value is said to be a valid key if it is one of the following types:
Array JavaScript objects [ECMA-262], DOMString [WEBIDL], Date [ECMA-262] or
float [WEBIDL]. ... Additionally, if the value is of type float, it is only a
valid key if it is not NaN. Conforming user agents must support all valid keys
as keys.

Just as NaN floats are not valid keys, invalid dates - i.e. dates where the
internal double (revealed by date.valueOf()) is NaN - should not be considered
valid keys, as these cannot be compared to other Date values chronologically.
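
A quick JavaScript illustration of why such a key cannot be ordered:

var d = new Date('not a date');   // an "Invalid Date"
d.valueOf();                      // NaN
d.getTime() === d.getTime();      // false - NaN is not equal even to itself
d < new Date();                   // false
d > new Date();                   // false, so no chronological ordering exists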

-- 
Configure bugmail: https://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are on the CC list for the bug.



Re: Synchronous postMessage for Workers?

2012-02-14 Thread Glenn Maynard
On Tue, Feb 14, 2012 at 10:07 AM, Jonas Sicking jo...@sicking.cc wrote:

 Spinning the event loop is very bug prone. When calling the yieldUntil
 function basically anything can happen, including re-entering the same
 code. Protecting against the entire world changing under you is very
 hard. Add to that that it's very racy. I.e. changes may or may not
 happen under you depending on when exactly you call yieldUntil, and
 which order events come in.


I don't think this:

function() {
f1();
yieldUntil(condition);
f2();
}

is any more racy than this:

function() {
f1();
waitForCondition(function() {
f2();
});
}

The order of events with yieldUntil depends on when you yield, but the same
is true when you return and continue what you're doing from a callback.  I
don't believe it's any more bug-prone than the equivalent callback-based
code, either.  I believe continuations are less error-prone than callbacks,
because they lead to code that's simpler and easier to understand.

An alternative way of defining it, rather than spinning the event loop,
might be to store the JS call stack (eg. setjmp), then to return undefined
to the original native caller.  When the condition is signalled, resume the
call stack (from a new native caller).  This would mean that if you yield
from an event handler (or any other callback), the event dispatch would
continue immediately, as if the event handler returned; when your code is
resumed, you'd no longer be inside event dispatch (eg. you can't call
stopPropagation on an event after yielding).  That makes sense to me, since
dispatching events shouldn't block, though it would probably surprise
people.

-- 
Glenn Maynard


Re: CfC: Proposal to add web packaging / asset compression

2012-02-14 Thread Yehuda Katz
I would agree with this. My initial thought when reading the proposal was
SPDY as well.

That said, there is ongoing discussion about improving the app-cache that
is also relevant[1]. I am also planning on opening a discussion about
programmatic control of a cache (probably not piggy-backed onto app-cache,
which has important atomicity guarantees and no programmatic control, but
possibly piggy-backed off of the File API). Between SPDY, improving app
cache semantics, and a clean way to programmatically store remote assets
that can be loaded via script, link and img tags, I think we have a
solution that does not require creating a new packaging format.

[1] https://www.w3.org/Bugs/Public/show_bug.cgi?id=14702

Yehuda Katz
(ph) 718.877.1325


On Tue, Feb 14, 2012 at 10:26 AM, Tab Atkins Jr. jackalm...@gmail.comwrote:

 On Tue, Feb 14, 2012 at 1:24 AM, Paul Bakaus pbak...@zynga.com wrote:
  Hi everybody,
 
  This is a proposal to add a packaging format transparent to browsers to
 the
  charter. At Zynga, we have identified this as one of our most pressuring
  issues. Developers want to be able to send a collection of assets to the
  browser through a single request, instead of hundreds.
 
  Today, we misuse image and audio sprites, slicing them again as base64
 only
  to put them into weird caches. These are workarounds, and ugly ones, as
  well. None of the workarounds is satisfying, either in terms of
 robustness,
  performance or simply, a sane API. Coincidentally, this is also one of
 the
  most pressuring issues of WebGL. Since you are dealing with a lot of
 assets
  with WebGL games, proper solutions must be found.

 I was once a believer in an approach like this, and supported previous
 attempts at it like Mozilla's use of a zip + virtual paths.

 Now, though, SPDY seems to be moving along nicely enough that we don't
 really need to worry about this.  It's already supported in Chrome and
 Firefox, and it lets you pull multiple assets in a single connection,
 push assets that haven't yet been requested, and prioritize asset
 retrieval.  I don't feel there's any real need to worry about asset
 packaging formats anymore.

 ~TJ




Elements and Blob

2012-02-14 Thread Bronislav Klučka

Hi,
regarding the current discussion about Blobs, URLs, etc., I'd like to make 
the following proposal:

every element would have the following additional methods/fields:

Blob function saveToBlob();
This would return a blob containing the element's data (e.g. for an img 
element that would be the image data, for a p element basically its innerHTML, 
for an input its current value, and for a script either the script element's 
content [if it exists] or an empty blob [if src is used]). CORS would apply 
here. Progress events are needed.


void function loadFromBlob(Blob blob);
This would load the blob's content as the element's content (e.g. an img 
element would display the image data in the blob [no changes to the src 
attribute], for a p element it would basically set the innerHTML, for an input 
it would serve as the value, and for a script it would load the data as script 
(and element content) and execute it [no changes to the src attribute]). The 
function should create no reference between the element and the blob, just 
load the blob data. Progress events are needed.


attribute Blob blob;
This would do the same as loadFromBlob, but would also create a reference 
between the element and the blob.


And why:
1/ saveToBlob would create easy access to any element's data. We are already 
talking about media elements (canvas, image); I see no point in limiting it. 
Do you want a blob from an image or a textarea? Just one function.

2/ loadFromBlob and blob could solve the current issue with createObjectURL 
(that functionality would remain as it is): no reference issues, intuitive 
usage.




Brona








Re: Elements and Blob

2012-02-14 Thread Charles Pritchard
This was covered and dismissed in an earlier form with element.saveData 
semantics in IE.

It's really unnecessary. We can cover the desired cases without this level of 
change.
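
For the img case in the proposal, a sketch of how existing APIs already get 
close (not a general element.saveToBlob/loadFromBlob replacement):

// Roughly "loadFromBlob": display a Blob in an img (this does go through src).
function showBlobInImage(img, blob) {
  var url = URL.createObjectURL(blob);
  img.onload = function() { URL.revokeObjectURL(url); };  // release once decoded
  img.src = url;
}

// Roughly "saveToBlob": read an image's pixels back out via canvas
// (same-origin/CORS tainting rules apply; toBlob may need a toDataURL-based polyfill).
function imageToBlob(img, callback) {
  var canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  canvas.getContext('2d').drawImage(img, 0, 0);
  canvas.toBlob(callback, 'image/png');
}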



On Feb 14, 2012, at 12:08 PM, Bronislav Klučka bronislav.klu...@bauglir.com 
wrote:

 Hi,
 regarding current discussion about Blobs, URL, etc. I'd like to have 
 following proposition:
 every element would have following additional methods/fields:
 
 Blob function saveToBlob();
 that would return a blob containing element data  (e.g. for img element that 
 would be image data, for p element that would be basically innerHTML, for 
 input it would be current value, for script it would be either script element 
 content [if exists], or it would be empty blob [if src is used]). With CORS 
 applied here. There are progress events needed.
 
 void function loadFromBlob(Blob blob);
 that would load the blob content as element content (e.g. for img element it 
 would display image data in that blob [no changes to src attribute], for p 
 element that would be basically innerHTML, for input it serve as value, for 
 script this would load data as script (and element content) and execute it 
 [no changes to src attribute]). Function should create no reference between 
 element and blob, just load blob data. There are progress events needed.
 
 attribute Blob blob;
 that would do the same as loadFromBlob, but it would also create reference 
 between element and blob
 
 
 and why that:
 1/ saveToBlob - would create easy access to any element data, we are already 
 talking about media elements (canvas, image), I see no point of limiting it. 
 Do you want blob from image or textarea? Just one function.
 
 2/ loadFromBlob, blob - could solve current issue with createObjectUrl (that 
 functionality would remain as it is): no reference issues, intuitive usage
 
 
 
 Brona
 
 
 
 
 
 



Re: Elements and Blob

2012-02-14 Thread Bronislav Klučka
Does anybody have a link / thread name for that discussion? I cannot 
find it (http://lists.w3.org/Archives/Public/public-webapps/),
and I really wonder about the reasons for the dismissal (it is technically 
not a change but an addition).


Brona

On 14.2.2012 21:58, Charles Pritchard wrote:

This was covered and dismissed in earlier form with element.saveData semantics 
in IE.

It's really unnecessary. We an cover the desired cases without this level of 
change.



On Feb 14, 2012, at 12:08 PM, Bronislav Klučka bronislav.klu...@bauglir.com 
wrote:


Hi,
regarding current discussion about Blobs, URL, etc. I'd like to have following 
proposition:
every element would have following additional methods/fields:

Blob function saveToBlob();
that would return a blob containing element data  (e.g. for img element that 
would be image data, for p element that would be basically innerHTML, for input 
it would be current value, for script it would be either script element content 
[if exists], or it would be empty blob [if src is used]). With CORS applied 
here. There are progress events needed.

void function loadFromBlob(Blob blob);
that would load the blob content as element content (e.g. for img element it 
would display image data in that blob [no changes to src attribute], for p 
element that would be basically innerHTML, for input it serve as value, for 
script this would load data as script (and element content) and execute it [no 
changes to src attribute]). Function should create no reference between element 
and blob, just load blob data. There are progress events needed.

attribute Blob blob;
that would do the same as loadFromBlob, but it would also create reference 
between element and blob


and why that:
1/ saveToBlob - would create easy access to any element data, we are already 
talking about media elements (canvas, image), I see no point of limiting it. Do 
you want blob from image or textarea? Just one function.

2/ loadFromBlob, blob - could solve current issue with createObjectUrl (that 
functionality would remain as it is): no reference issues, intuitive usage



Brona












Re: Synchronous postMessage for Workers?

2012-02-14 Thread Boris Zbarsky

On 2/14/12 2:52 PM, Glenn Maynard wrote:

I don't think this:

function() {
 f1();
 yieldUntil(condition);
 f2();
}

is any more racy than this:

function() {
 f1();
 waitForCondition(function() {
 f2();
 });
}


Let's say the function above is called f, and consider this code:

var inG = false;
function g() {
  inG = true;
  h(); /* This function does things differently depending on
  whether inG is true */
  f();
  inG = false;
}

g is fine with the second snippet above, but broken with the first one.

So while in practice the two are equally racy, writing correct code that 
calls into the second snippet is much easier than writing correct code 
that calls into the first one.  And in practice, a developer would 
perceive the first snippet as being less racy than the second one, 
because the race is not nearly as obvious.



I don't believe it's any more bug-prone than the equivalent
callback-based code, either.


See above.  This is not a hypothetical issue.  We have some experience 
with APIs that look like the yieldUntil one above, and in practice they 
cause serious code-maintainability problems, because everything up the 
call stack has to make sure that its data structures are in a consistent 
state before calling into functions.



I believe continuations are less error
prone than callbacks, because it leads to code that's simpler and easier
to understand.


It leads to code that _looks_ simpler and easier to understand.  Whether 
it _is_ that way is an interesting question.


-Boris



Re: CG for Speech JavaScript API

2012-02-14 Thread Olli Pettay

So, if I haven't made it clear before,
doing the initial standardization work in a CG sounds OK to me.
I do expect that there will be a WG eventually, but perhaps
a CG is a faster and more lightweight way to start - well, to continue from
what the XG did.

-Olli


On 01/31/2012 06:01 PM, Glen Shires wrote:

We at Google propose the formation of a new Community Group to pursue a
JavaScript Speech API. Specifically, we are proposing this Javascript
API [1], which enables web developers to incorporate speech recognition
and synthesis into their web pages, and supports the majority of
use-cases in the Speech Incubator Group's Final Report [2]. This API
enables developers to use scripting to generate text-to-speech output
and to use speech recognition as an input for forms, continuous
dictation and control. For this first specification, we believe
this simplified subset API will accelerate implementation,
interoperability testing, standardization and ultimately developer
adoption. However, in the spirit of consensus, we are willing to broaden
this subset API to include additional Javascript API features in the
Speech Incubator Final Report.

We believe that forming a Community Group has the following advantages:

- It’s quick, efficient and minimizes unnecessary process overhead.

- We believe it will allow us, as a group, to reach consensus in an
efficient manner.

- We hope it will expedite interoperable implementations in multiple
browsers. (A good example is the Web Media Text Tracks CG, where
multiple implementations are happening quickly.)

- We propose the CG will use public-webapps@w3.org as its mailing list to provide visibility
to a wider audience, with a balanced web-centric view for new JavaScript
APIs.  This arrangement has worked well for the HTML Editing API CG [3].
Contributions to the specification produced by the Speech API CG will be
governed by the Community Group CLA and the CG is responsible for
ensuring that all Contributions come from participants that have agreed
to the CG CLA.  We believe the response to the CfC [4] has shown
substantial interest and support by WebApps members.

- A CG provides an IPR environment that simplifies future transition to
standards track.

Google plans to supply an implementation and a test suite for this
specification, and will commit to serve as editor.  We hope that others
will support this CG as they had stated support for the similar WebApps
CfC. [4]

Bjorn Bringert
Satish Sampath
Glen Shires

[1]
http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
[2] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
[3] http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1402.html
[4] http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0315.html





Re: CG for Speech JavaScript API

2012-02-14 Thread Charles Pritchard
CG sounds good, but I agree that the technical aspects of speech are not a good 
match for the webapps mailing list.

This topic is heavy in linguistics; other work in web apps is not.

I'd like to feel free to explore IPA, code switching, grammar; voicexml, a 
whole group of topics not particularly relevant to the webapps mailing list.

I will certainly post threads to webapps when appropriate. The API hooks for 
webkitspeech; the applicability of speech to the IME API-- those I will post to 
webapps when they are mature.

But most of the issues around speech are not issues that I'd bring up in 
webapps.

-Charles



On Feb 14, 2012, at 2:45 PM, Olli Pettay olli.pet...@helsinki.fi wrote:

 So, if I haven't made it clear before,
 doing the initial standardization work in CG sounds ok to me.
 I do expect that there will be a WG eventually, but perhaps
 CG is a faster and more lightweight way to start - well continue from
 what XG did.
 
 -Olli
 
 
 On 01/31/2012 06:01 PM, Glen Shires wrote:
 We at Google propose the formation of a new Community Group to pursue a
 JavaScript Speech API. Specifically, we are proposing this Javascript
 API [1], which enables web developers to incorporate speech recognition
 and synthesis into their web pages, and supports the majority of
 use-cases in the Speech Incubator Group's Final Report [2]. This API
 enables developers to use scripting to generate text-to-speech output
 and to use speech recognition as an input for forms, continuous
 dictation and control. For this first specification, we believe
 this simplified subset API will accelerate implementation,
 interoperability testing, standardization and ultimately developer
 adoption. However, in the spirit of consensus, we are willing to broaden
 this subset API to include additional Javascript API features in the
 Speech Incubator Final Report.
 
 We believe that forming a Community Group has the following advantages:
 
 - It’s quick, efficient and minimizes unnecessary process overhead.
 
 - We believe it will allow us, as a group, to reach consensus in an
 efficient manner.
 
 - We hope it will expedite interoperable implementations in multiple
 browsers. (A good example is the Web Media Text Tracks CG, where
 multiple implementations are happening quickly.)
 
 - We propose the CG will use the public-webapps@w3.org
 mailto:public-webapps@w3.org as its mailing list to provide visibility
 to a wider audience, with a balanced web-centric view for new JavaScript
 APIs.  This arrangement has worked well for the HTML Editing API CG [3].
 Contributions to the specification produced by the Speech API CG will be
 governed by the Community Group CLA and the CG is responsible for
 ensuring that all Contributions come from participants that have agreed
 to the CG CLA.  We believe the response to the CfC [4] has shown
 substantial interest and support by WebApps members.
 
 - A CG provides an IPR environment that simplifies future transition to
 standards track.
 
 Google plans to supply an implementation and a test suite for this
 specification, and will commit to serve as editor.  We hope that others
 will support this CG as they had stated support for the similar WebApps
 CfC. [4]
 
 Bjorn Bringert
 Satish Sampath
 Glen Shires
 
 [1]
 http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
 [2] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
 [3] http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1402.html
 [4] http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0315.html
 
 



Re: CG for Speech JavaScript API

2012-02-14 Thread Charles Pritchard
AFAIK, the grammar part of the spec, along with speech strings for TTS, ought to 
be included. If it's a discussion about speech, it ought to be a full 
discussion.

It takes only minutes to route speech through translation and TTS services 
online.

Let's take a look at it.





On Feb 14, 2012, at 3:09 PM, b...@pettay.fi b...@pettay.fi wrote:

 On 02/15/2012 12:58 AM, Charles Pritchard wrote:
 CG sounds good, but I agree that the technical aspects of speech are
 not a good match for the webapps mailing list.
 
 This topic is heavy in linguistics; other work in web apps is not.
 
 I'd like to feel free to explore IPA, code switching, grammar;
 voicexml, a whole group of topics not particularly relevant to the
 webapps mailing list.
 CG won't be anything about VoiceXML or similar.
 
 
 
 I will certainly post threads to webapps when appropriate. The API
 hooks for webkitspeech;
 The CG will be about this. API for speech. Certainly it is very
 different from the current webkitspeech stuff, but still an API for speech.
 
 
 the applicability of speech to the IME API--
 those I will post to webapps when they are mature.
 
 But most of the issues around speech are not issues that I'd bring up
 in webapps.
 
 -Charles
 
 
 
On Feb 14, 2012, at 2:45 PM, Olli Pettay olli.pet...@helsinki.fi
 wrote:
 
 So, if I haven't made it clear before, doing the initial
 standardization work in CG sounds ok to me. I do expect that there
 will be a WG eventually, but perhaps CG is a faster and more
 lightweight way to start - well continue from what XG did.
 
 -Olli
 
 
 On 01/31/2012 06:01 PM, Glen Shires wrote:
 We at Google propose the formation of a new Community Group to
 pursue a JavaScript Speech API. Specifically, we are proposing
 this Javascript API [1], which enables web developers to
 incorporate speech recognition and synthesis into their web
 pages, and supports the majority of use-cases in the Speech
 Incubator Group's Final Report [2]. This API enables developers
 to use scripting to generate text-to-speech output and to use
 speech recognition as an input for forms, continuous dictation
 and control. For this first specification, we believe this
 simplified subset API will accelerate implementation,
 interoperability testing, standardization and ultimately
 developer adoption. However, in the spirit of consensus, we are
 willing to broaden this subset API to include additional
 Javascript API features in the Speech Incubator Final Report.
 
 We believe that forming a Community Group has the following
 advantages:
 
 - It’s quick, efficient and minimizes unnecessary process
 overhead.
 
 - We believe it will allow us, as a group, to reach consensus in
 an efficient manner.
 
 - We hope it will expedite interoperable implementations in
 multiple browsers. (A good example is the Web Media Text Tracks
 CG, where multiple implementations are happening quickly.)
 
 - We propose the CG will use the public-webapps@w3.org
 mailto:public-webapps@w3.org  as its mailing list to provide
 visibility to a wider audience, with a balanced web-centric view
 for new JavaScript APIs.  This arrangement has worked well for
 the HTML Editing API CG [3]. Contributions to the specification
 produced by the Speech API CG will be governed by the Community
 Group CLA and the CG is responsible for ensuring that all
 Contributions come from participants that have agreed to the CG
 CLA.  We believe the response to the CfC [4] has shown
 substantial interest and support by WebApps members.
 
 - A CG provides an IPR environment that simplifies future
 transition to standards track.
 
 Google plans to supply an implementation and a test suite for
 this specification, and will commit to serve as editor.  We hope
 that others will support this CG as they had stated support for
 the similar WebApps CfC. [4]
 
 Bjorn Bringert Satish Sampath Glen Shires
 
 [1]
 http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
 
 
 [2] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
 [3]
 http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1402.html
 
 
 [4] http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0315.html
 
 
 
 



Re: Revisiting Command Elements and Toolbars

2012-02-14 Thread Ian Hickson
On Mon, 28 Nov 2011, Ryosuke Niwa wrote:
 
1. Authors often want fine-grained control over the appearance of
toolbars; UAs automatically rendering them in canonical form will make it
harder.

For now, authors who don't want the automatic formatting can just get the 
current formatting using a <menu> without a type= attribute. They don't 
get any of the magic they would get with type=context.

Going forward, I think we should define a way to style <menu type=toolbar> 
controls, but I really don't know how to do that. Pseudos? Rely on the Web 
Component model? Special properties? I'm open to ideas here.


2. In many web apps, commands are involved and associated with multiple
UI components, toolbars, side panel, context menu, etc... commands being a
part of UI components doesn't represent this model well.

I've finally added the command= attribute to allow command elements to 
defer to another element.


3. Many commands make sense only in the context of some widget in a
page. E.g. on a CMS dashboard, bold command only makes sense inside a
WYSIWYG editor. There ought to be mechanism to scope commands.

Not sure what you mean. In what sense are commands not scoped?


4. Mixing UI-specific information such as hidden and checked with
more semantical information such as disabled or checked isn't clean.

hidden, disabled, and checked all seem semantical. I agree that 
icon should be in CSS; the CSS 'icon' property should really be the 
authoritative source here. However, people often have a particular icon 
per command/menu item/button/whatever so I think it makes some sense to 
put them together.


5. Some commands may need to have non-boolean values. e.g. consider
BackColor (as in execCommand), this command can't just be checked. It'll
have values such as white and #fff.

Yeah. I haven't added this yet. I expect we'll leverage the new form 
widgets for this kind of thing.


 Furthermore, it seems unfortunate that we already have a concept of 
 command in the editing API 
 http://dvcs.w3.org/hg/editing/raw-file/tip/editing.html and methods on 
 document such as execCommand, queryCommandState, etc... yet commands 
 defined by command elements and accessKey content attribute don't 
 interact with them at all. It'll be really nice if we could use 
 execCommand to run an arbitrary command defined on a page, or ask what 
 the value of command is by queryCommandValue.

I don't really see the link between the two. command is about the 
widgets. execCommand() is about the editor process. You would implement 
command actions in terms of execCommand() in an editor, but they are 
otherwise completely unrelated as far as I can tell.
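
For context, the editing-API side referred to above, wired to a toolbar button 
(the #bold button is a hypothetical element in the page):

var boldButton = document.querySelector('#bold');
boldButton.addEventListener('click', function() {
  document.execCommand('bold', false, null);  // acts on the current editable selection
  boldButton.classList.toggle('active', document.queryCommandState('bold'));
});

document.addEventListener('selectionchange', function() {
  // Keep the button in sync as the selection moves through the editable region.
  boldButton.classList.toggle('active', document.queryCommandState('bold'));
});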

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Synchronous postMessage for Workers?

2012-02-14 Thread Mark S. Miller
On Mon, Feb 13, 2012 at 12:08 PM, John J Barton johnjbar...@johnjbarton.com
 wrote:

 On Mon, Feb 13, 2012 at 11:44 AM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 17 Nov 2011, Joshua Bell wrote:
 
  Wouldn't it be lovely if the Worker script could simply make a
  synchronous call to fetch data from the Window?
 
  It wouldn't be so much a synchronous call, so much as a blocking get.
 ..
  Anyone object to me adding something like this? Are there any better
  solutions? Should we just tell authors to get used to the async style?

 I guess the Q folks would say that remote promises provides another
 solution. If promises are adopted by the platform, then the async
 style gets much easier to work with.
 https://github.com/kriskowal/q
 (spec is somewhere on the es wiki)


http://wiki.ecmascript.org/doku.php?id=strawman:concurrency

This spec doesn't quite correspond to Kris' implementation. We're still
working that out. John's "then" below corresponds to "when" on the wiki
page.



 In the Q model you would fetch data like:
  parentWindow.fetchData('myQueryString').then(  // block until reply.
function(data) {...},
function(err) {...}
  );
 Q has functions to join promises; q_comm add remote promises.

 I believe this can be done today with q_comm in workers.

 Your signal/yieldUntil looks like what es-discuss calls generators.
 I found them much harder to understand than promises, but then I come
 from JS not python.

 jjb




-- 
Cheers,
--MarkM


Re: Synchronous postMessage for Workers?

2012-02-14 Thread Mark S. Miller
On Tue, Feb 14, 2012 at 11:32 AM, John J Barton johnjbar...@johnjbarton.com
 wrote:

 On Tue, Feb 14, 2012 at 11:14 AM, David Bruant bruan...@gmail.com wrote:
  Le 14/02/2012 14:31, Arthur Barstow a écrit :

  Another addition will be promises.
  An already working example of promises can be found at
  https://github.com/kriskowal/q

 Just to point out that promises are beyond the working example stage,
 they are deployed in the major JS frameworks, eg:

 http://dojotoolkit.org/reference-guide/dojo/Deferred.html
 http://api.jquery.com/category/deferred-object/

 The Q library is more like an exploration of implementation issues in
 promises, trying to push them further.


Relevant to the thread here is that the Q library uses promises both for
local asynchrony and for asynchronous distributed messaging. The other
promise libraries (even though they derive from dojo - mochikit -
Twisted Python - E) only support local asynchrony.


-- 
Cheers,
--MarkM


Re: Synchronous postMessage for Workers?

2012-02-14 Thread Jonas Sicking
On Tue, Feb 14, 2012 at 2:52 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Feb 14, 2012 at 10:07 AM, Jonas Sicking jo...@sicking.cc wrote:

 Spinning the event loop is very bug prone. When calling the yieldUntil
 function basically anything can happen, including re-entering the same
 code. Protecting against the entire world changing under you is very
 hard. Add to that that it's very racy. I.e. changes may or may not
 happen under you depending on when exactly you call yieldUntil, and
 which order events come in.


 I don't think this:

 function() {
     f1();
     yieldUntil(condition);
     f2();
 }

 is any more racy than this:

 function() {
     f1();
     waitForCondition(function() {
         f2();
     });
 }

The problem is when you have functions which call yieldUntil. I.e.
when you have code like this:

function doStuff() {
  yieldUntil(x);
};

now what looks like perfectly safe innocent code:

function myFunction() {
  ... code here ...
  doStuff();
  ... more code ...
}

The myFunction code might look perfectly sane and safe. However since
the call to doStuff spins the event loop, the two code snippets can
see entirely different worlds.

Put another way: when you spin the event loop, not only does your
code need to be prepared for anything happening, all functions up the
call stack do too. That makes it very hard to reason about any of
your code, not just the code that calls yieldUntil.

/ Jonas



Re: [CORS] Access-Control-Request-Method

2012-02-14 Thread Jonas Sicking
On Tue, Feb 14, 2012 at 12:38 PM, Anne van Kesteren ann...@opera.com wrote:
 On Thu, 22 Dec 2011 17:05:08 +0100, Boris Zbarsky bzbar...@mit.edu wrote:

 No, what I mean is this.  Say we enter
 http://dvcs.w3.org/hg/cors/raw-file/tip/Overview.html#cross-origin-request
 with the following state:

 * force preflight flag is true
 * Request method is simple method
 * No author request headers
 * Empty preflight cache (not that this matters)

 The spec says we should follow the cross-origin request with preflight
 algorithm.

 Following that link, it says:

   Go to the next step if the following conditions are true:

     For request method there either is a method cache match or it is a
     simple method.

     For every header of author request headers there either is a header
     cache match for the field name or it is a simple header.

 Since the method is a simple method and there are no author request
 headers, we skip the preflight and go on to the main request.

 Now it's possible that I simply don't understand what this flag is
 _supposed_ to do or that I'm missing something


 So the idea behind the force preflight flag is that there's a preflight
 request if upload event listeners are registered, because otherwise you can
 determine the existence of a server. Now the obvious way to fix CORS would
 be to add an additional condition in the text you quoted above, namely that
 the force preflight flag is unset; however, that would mean that caching is
 bypassed too.

Just add the "force preflight flag is unset" condition only to the "is a
simple method" check. That way a cache hit still prevents a preflight
even if the force flag is set.

Note that a cache hit can only happen if a preflight check has been
successful *from the requesting origin*. So things should still be
safe.

At least that's how we have it implemented in Firefox.

/ Jonas