Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread Robin Berjon

On 07/03/2013 23:34 , Tobie Langel wrote:

In which case, isn't part of the solution to paginate your data, and
parse those pages separately?


Assuming you can modify the backend. Also, data doesn't necessarily have 
to get all that bulky before you notice on a somewhat sluggish device.



Even if an async API for JSON existed, wouldn't the perf bottleneck
then simply fall on whatever processing needs to be done afterwards?


But for that part you're in control of whether your processing is 
blocking or not.



Wouldn't some form of event-based API be more indicated? E.g.:

var parser = JSON.parser();

 parser.parse(src);
 parser.onparse = function(e) { doSomething(e.data); };

I'm not sure how that snippet would be different from a single callback API.

There could possibly be value in an event-based API if you could set it 
up with a filter, e.g. JSON.filtered("$.*").then(function (item) {}); 
which would call you for every item in the root object. Getting an event 
for every information item that the parser processes would likely flood 
you with events.
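As a concrete, purely hypothetical sketch of the granularity being described here (one callback per root item rather than one event per token), a filter-style wrapper could look like the following. The function name and callback shape are invented, and a real implementation would parse incrementally instead of delegating to the synchronous JSON.parse:

```javascript
// Hypothetical sketch: call back once per item of the root object,
// instead of once per parser token. A real implementation would parse
// incrementally; here we lean on the synchronous JSON.parse for brevity.
function parseRootItems(jsonText, onItem) {
  const root = JSON.parse(jsonText);
  for (const key of Object.keys(root)) {
    onItem(key, root[key]); // one callback per root item, not per token
  }
}
```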


Yet another option is a pull API. There's a lot of experience from the 
XML planet in APIs with specific performance characteristics. They would 
obviously be a lot simpler for JSON; I wonder how well that experience 
translates.
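For comparison, a pull API in the style that XML readers popularized might look like this sketch; the names are invented, and a real implementation would tokenize lazily rather than parse everything up front:

```javascript
// Hypothetical pull-API sketch: the caller asks for the next top-level
// item when it is ready for it, instead of being pushed events.
function makePullParser(jsonText) {
  const items = Object.entries(JSON.parse(jsonText)); // real impl: lazy
  let i = 0;
  return {
    hasNext: () => i < items.length,
    next: () => items[i++], // returns one [key, value] pair per pull
  };
}
```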


--
Robin Berjon - http://berjon.com/ - @robinberjon


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread Tobie Langel
On Friday, March 8, 2013 at 10:44 AM, Robin Berjon wrote:
 On 07/03/2013 23:34 , Tobie Langel wrote:
  Wouldn't some form of event-based API be more indicated? E.g.:
  
  var parser = JSON.parser();
  parser.parse(src);
  parser.onparse = function(e) { doSomething(e.data); };
 
 
 I'm not sure how that snippet would be different from a single callback API.
 
 There could possibly be value in an event-based API if you could set it 
 up with a filter, e.g. JSON.filtered("$.*").then(function (item) {}); 
 which would call you for every item in the root object. Getting an event 
 for every information item that the parser processes would likely flood 
 you with events.

Agreed, you need something higher-level than just JSON tokens. Which is why 
this can be very much app-specific, unless most of the use cases are to parse 
data of a format similar to [Object, Object, Object, ..., Object]. This could 
be special-cased so as to send each object to the event handler as it's parsed.
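The special case above — input shaped like [Object, Object, ..., Object] — can in fact be sketched in userland by scanning for top-level element boundaries and handing each element to a handler as soon as it closes. The API shape below is invented for illustration, and scalar elements are ignored for brevity:

```javascript
// Sketch: dispatch each top-level array element as soon as its closing
// brace/bracket is seen, tracking nesting depth and string state so
// braces inside strings or nested objects don't confuse the scan.
function parseArrayItems(text, onObject) {
  let depth = 0, start = -1, inString = false, escaped = false;
  for (let i = 0; i < text.length; i++) {
    const c = text[i];
    if (inString) {
      if (escaped) escaped = false;
      else if (c === "\\") escaped = true;
      else if (c === '"') inString = false;
      continue;
    }
    if (c === '"') inString = true;
    else if (c === "{" || c === "[") {
      if (depth === 1 && start === -1) start = i; // element begins
      depth++;
    } else if (c === "}" || c === "]") {
      depth--;
      if (depth === 1) { // element ended: parse and dispatch it
        onObject(JSON.parse(text.slice(start, i + 1)));
        start = -1;
      }
    }
  }
}
```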

--tobie


Re: [whatwg] Fetch: Origin header

2013-03-08 Thread Anne van Kesteren
On Thu, Mar 7, 2013 at 7:29 PM, Adam Barth w...@adambarth.com wrote:
 I don't have strong feelings one way or another.  Generally, I think
 it's a good idea if the presence of the Origin header isn't synonymous
 with the request being a CORS request because that could limit our
 ability to use the Origin header in the future.

Okay. So currently the mix of the Origin specification and the HTML
specification suggests you either do Origin: /origin/ or Origin:
null. However WebKit seems to do Origin: /origin/ or no header at
all (for the privacy-sensitive cases). Ian also mentioned that we
can not just put the Origin header into every outgoing request as that
breaks the interwebs (per research you did for Chrome I believe?).

What do you think we should end up requiring?


-- 
http://annevankesteren.nl/


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Bruant

Le 08/03/2013 02:01, Glenn Maynard a écrit :
If you're dealing with lots of data, you should be loading or creating 
the data in the worker in the first place, not creating it in the UI 
thread and then shuffling it off to a worker.

Exactly. That would be the proper way to handle a big amount of data.

David


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Rajchenbach-Teller
Let me answer your question about the scenario, before entering the
specifics of an API.

For the moment, the main use case I see for asynchronous serialization
of JSON is that of snapshotting the world without stopping it, for
backup purposes, e.g.:
a. saving the state of the current region in an open world RPG;
b. saving the state of an ongoing physics simulation;
c. saving the state of the browser itself in case of crash/power loss
(that's assuming a FirefoxOS-style browser implemented as a web
application);
d. backing up state and history of the browser itself to a server
(again, assuming that the browser is a web application).

Cases a., b. and d. are hypothetical but, I believe, realistic. Case c.
is very close to a scenario I am currently facing.

The natural course of action would be to do the following:
1. collect data to a JSON object (possibly a noop);
2. send the object to a worker;
3. apply some post-treatment to the object (possibly a noop);
4. write/upload the object.

Having an asynchronous JSON serialization to some Transferable form
would considerably simplify the task of implementing step 2 without
janking if the data ends up very heavy.
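A minimal sketch of what step 2 could look like under this proposal: a hypothetical stringify-to-Transferable. Only the function name and shape are invented here; TextEncoder does the UTF-8 encoding, and an ArrayBuffer is already a Transferable that postMessage can move instead of copy:

```javascript
// Hypothetical sketch: serialize an object into an ArrayBuffer so the
// result can be transferred to a worker rather than structured-cloned.
function stringifyToTransferable(obj) {
  const json = JSON.stringify(obj);         // the part proposed to be async
  const bytes = new TextEncoder().encode(json); // UTF-8 bytes
  return bytes.buffer;                      // ArrayBuffer is Transferable
  // usage sketch: worker.postMessage(buf, [buf]) — ownership moves, no copy
}
```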

Note that, in all the scenarios I have mentioned, it is generally
difficult for the author of the application to know ahead of time which
part of the JSON object will be heavy and should be transmitted through
an ad hoc protocol. In scenario c., for instance, it is quite frequent
that just one or two pages contain 90%+ of the data that needs to be
saved, in the form of form fields, or iframes, or Session Storage.

So far, I have discussed serializing JSON, not deserializing it, but I
believe that the symmetric scenarios also hold.

Best regards,
 David

On 3/7/13 11:34 PM, Tobie Langel wrote:
 I'd like to hear about the use cases a bit more. 
 
 Generally, structured data gets bulky because it contains more items, not 
 because items get bigger.
 
 In which case, isn't part of the solution to paginate your data, and parse 
 those pages separately?
 
 Even if an async API for JSON existed, wouldn't the perf bottleneck then 
 simply fall on whatever processing needs to be done afterwards?
 
 Wouldn't some form of event-based API be more indicated? E.g.:
 
 var parser = JSON.parser();
 parser.parse(src);
 parser.onparse = function(e) {
   doSomething(e.data);
 };
 
 And wouldn't this be highly dependent on how the data is structured, and thus 
 very much app-specific?
 
 --tobie 
 


-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Bruant

Le 07/03/2013 23:18, David Rajchenbach-Teller a écrit :

(Note: New on this list, please be gentle if I'm debating an
inappropriate issue in an inappropriate place.)

Actually, communicating large JSON objects between threads may cause
performance issues. I do not have the means to measure reception speed
simply (which would be used to implement asynchronous JSON.parse), but
it is easy to measure main thread blocks caused by sending (which would
be used to implement asynchronous JSON.stringify).

I have put together a small test here - warning, this may kill your browser:
http://yoric.github.com/Bugzilla-832664/

While there are considerable fluctuations, even inside one browser, on
my system, I witness janks that last 300ms to 3s.

Consequently, I am convinced that we need asynchronous variants of
JSON.{parse, stringify}.

I don't think this is necessary, as all the processing can be done in a 
worker (starting in the worker, even).
But if an async solution were to happen, I think it should go all the 
way, that is, changing the JSON.parse method so that it accepts not only 
a string but a stream of data.
Currently, one has to wait for the entire string before being able to 
parse it. That's a waste of time for big data, which is your use case 
(especially if waiting for data to come from the network), and probably 
a misuse of memory. With a stream, temporary strings can be thrown away.
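A rough sketch of the chunk-accepting JSON.parse being proposed; the class and method names are invented, and this naive version still buffers the whole string — which is exactly what a real streaming parser would avoid by tokenizing incrementally and discarding consumed input:

```javascript
// Hypothetical sketch of a streaming interface: callers push string
// chunks as they arrive from the network and get the parsed value at
// the end, without concatenating everything themselves first.
class StreamingJSONParser {
  constructor() { this.chunks = []; }
  write(chunk) { this.chunks.push(chunk); } // real impl: tokenize here
  end() { return JSON.parse(this.chunks.join("")); }
}
```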


David


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Rajchenbach-Teller
On 3/8/13 2:01 AM, Glenn Maynard wrote:
 (Not nitpicking, since I really wasn't sure what you meant at first, but
 I think you mean a JavaScript object.  There's no such thing as a JSON
 object.)

I meant a pure data structure, i.e. a JavaScript object without methods.
It was my understanding that "JSON object" was a common name for such
objects, but I am willing to use something else.

I believe I have just addressed your other points in post
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-March/039090.html .

Best regards,
 David

-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Rajchenbach-Teller
I fully agree that any asynchronous JSON [de]serialization should be
stream-based, not string-based.

Now, if the main heavy duty work is dealing with the large object, this
can certainly be kept on a worker thread. I suspect, however, that this
is not always feasible.

Consider, for instance, a browser implemented as a web application,
FirefoxOS-style. The data that needs to be collected to save its current
state is held in the DOM. For performance and consistency, it is not
practical to keep the DOM synchronized at all times with a worker
thread. Consequently, data needs to be collected on the main thread and
then sent to a worker thread.

Similarly, for a 3d game, until workers can perform some off-screen
WebGL, I suspect that a considerable amount of complex game data needs
to reside on the main thread, because sending the appropriate subsets
from a worker to the main thread on demand might not be reactive enough
for 60 fps. I have no experience with such complex games, though, so my
intuition could be wrong.

Best regards,
 David


On 3/8/13 11:53 AM, David Bruant wrote:
 I don't think this is necessary, as all the processing can be done in a
 worker (starting in the worker, even).
 But if an async solution were to happen, I think it should go all the
 way, that is, changing the JSON.parse method so that it accepts not only
 a string but a stream of data.
 Currently, one has to wait for the entire string before being able to
 parse it. That's a waste of time for big data, which is your use case
 (especially if waiting for data to come from the network), and probably
 a misuse of memory. With a stream, temporary strings can be thrown away.
 
 David


-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla


Re: [whatwg] HTML Specification update request: bug 20939

2013-03-08 Thread Bob Owen
Hi,

Has anyone had a chance to look at my proposals to improve the HTML
specification with regard to navigating browsing contexts while sandboxed?

Thanks,
Bob


 Date: Sat, 9 Feb 2013 16:26:12 +
 From: Bob Owen bobowenc...@gmail.com
 To: wha...@whatwg.org
 Subject: [whatwg] HTML Specification update request: bug 20939

 Hi,

 While discussing with Boris Zbarsky, some work I am doing on sandboxing for
 Mozilla, we realised that the sections in the HTML specification on
 browsing context security and browsing context names, could do with some
 clarification with regard to the effects of sandboxing.


 http://www.whatwg.org/specs/web-apps/current-work/multipage/browsers.html#security-nav

 http://www.whatwg.org/specs/web-apps/current-work/multipage/browsers.html#browsing-context-names

 The non-normative table, in the browsing context names section, does cover
 many of the effects.
 However, if the normative text in these two sections is read on its own,
 without knowledge of the sandboxing sections, then the reader could easily
 come away with an incorrect understanding.

 I have raised the following bug detailing all of this.

 https://www.w3.org/Bugs/Public/show_bug.cgi?id=20939

 The bug also contains some suggestions for new rows that could be added to
 the non-normative table.

 Thanks,
 Bob





Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Bruant

Le 08/03/2013 13:34, David Rajchenbach-Teller a écrit :

I fully agree that any asynchronous JSON [de]serialization should be
stream-based, not string-based.

Now, if the main heavy duty work is dealing with the large object, this
can certainly be kept on a worker thread. I suspect, however, that this
is not always feasible.

Consider, for instance, a browser implemented as a web application,
FirefoxOS-style. The data that needs to be collected to save its current
state is held in the DOM. For performance and consistency, it is not
practical to keep the DOM synchronized at all times with a worker
thread. Consequently, data needs to be collected on the main thread and
then sent to a worker thread.

I feel the data can be collected on the main thread in a Transferable 
(probably awkward, yet doable). This way, when the data needs to be 
transferred, the transfer is fast and heavy processing can happen in the 
worker.



Similarly, for a 3d game, until workers can perform some off-screen
WebGL
What if a cross-origin or sandbox iframe was actually a worker with a 
DOM? [1]

Not for today, I admit.
Today, canvas contexts can be transferred [2]. There is no 
implementation of that to my knowledge, but that's happening.



I suspect that a considerable amount of complex game data needs
to reside on the main thread, because sending the appropriate subsets
from a worker to the main thread on demand might not be reactive enough
for 60 fps. I have no experience with such complex games, though, so my
intuition could be wrong.
I share your intuition, but miss the relevant expertise too. Let's wait 
until people complain :-) And let's see how far a transferable 
CanvasProxy lets us go.


David

[1] 
https://groups.google.com/d/msg/mozilla.dev.servo/LQ46AtKp_t0/plqFfjLSER8J
[2] 
http://www.whatwg.org/specs/web-apps/current-work/multipage/common-dom-interfaces.html#transferable


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Rajchenbach-Teller
On 3/8/13 1:59 PM, David Bruant wrote:
 Consider, for instance, a browser implemented as a web application,
 FirefoxOS-style. The data that needs to be collected to save its current
 state is held in the DOM. For performance and consistency, it is not
 practical to keep the DOM synchronized at all times with a worker
 thread. Consequently, data needs to be collected on the main thread and
 then sent to a worker thread.
 I feel the data can be collected on the main thread in a Transferable
 (probably awkward, yet doable). This way, when the data needs to be
 transfered, the transfer is fast and heavy processing can happen in the
 worker.

Intuitively, this sounds like:
1. collect data to a JSON;
2. serialize JSON (hopefully asynchronously) to a Transferable (or
several Transferables).

If so, we are back to the problem of serializing JSON asynchronously to
something transferable. Possibly an iterator (or an asynchronous
iterator == a stream) of ByteArray, for instance.

The alternative would be to serialize to a stream while we are still
building the object. This sounds possible, although I suspect that the
API would be much more complex.
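The "iterator of ByteArray" idea can be sketched with a generator that yields the serialization in bounded UTF-8 chunks, letting a consumer interleave other work (or transfer each chunk) between pulls. The name and chunking policy are invented, and a real implementation would emit bytes while walking the object rather than stringifying up front:

```javascript
// Hypothetical sketch: yield the JSON serialization as a stream of
// Uint8Array chunks of bounded size, one chunk per pull.
function* stringifyChunks(obj, chunkSize = 64) {
  const json = JSON.stringify(obj); // real impl would emit incrementally
  const enc = new TextEncoder();
  for (let i = 0; i < json.length; i += chunkSize) {
    yield enc.encode(json.slice(i, i + chunkSize)); // one ByteArray per pull
  }
}
```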

 Similarly, for a 3d game, until workers can perform some off-screen
 WebGL
 What if a cross-origin or sandbox iframe was actually a worker with a
 DOM? [1]
 Not for today, I admit.
 Today, canvas contexts can be transferred [2]. There is no
 implementation of that to my knowledge, but that's happening.

Yes, I believe that, in time, this will solve many scenarios. Definitely
not the DOM-related scenario above, though.

 I suspect that a considerable amount of complex game data needs
 to reside on the main thread, because sending the appropriate subsets
 from a worker to the main thread on demand might not be reactive enough
 for 60 fps. I have no experience with such complex games, though, so my
 intuition could be wrong.
 I share your intuition, but miss the relevant expertise too. Let's wait
 until people complain :-) And let's see how far transferable CanvasProxy
 let us go.

Ok, let's just say that I won't use games as a running example until
people start complaining :) However, the DOM situation remains.

Cheers,
 David

-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread Glenn Maynard
On Fri, Mar 8, 2013 at 4:51 AM, David Rajchenbach-Teller 
dtel...@mozilla.com wrote:

 a. saving the state of the current region in an open world RPG;
 b. saving the state of an ongoing physics simulation;


These should live in a worker in the first place.

c. saving the state of the browser itself in case of crash/power loss
 (that's assuming a FirefoxOS-style browser implemented as a web
 application);


I don't understand this case.  Why would you implement a browser in a
browser?  That sounds like a weird novelty app, not a real use case.  Can
you explain this for people who don't know what FirefoxOS means?

d. backing up state and history of the browser itself to a server
 (again, assuming that the browser is a web application).


(This sounds identical to C.)

Similarly, for a 3d game, until workers can perform some off-screen
 WebGL, I suspect that a considerable amount of complex game data needs
 to reside on the main thread, because sending the appropriate subsets
 from a worker to the main thread on demand might not be reactive enough
 for 60 fps. I have no experience with such complex games, though, so my
 intuition could be wrong.


If so, we should be fixing the problems preventing workers from being used
fully, not adding workarounds to help people do computationally-expensive
work in the UI thread.

-- 
Glenn Maynard


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Bruant

Le 08/03/2013 15:29, David Rajchenbach-Teller a écrit :

On 3/8/13 1:59 PM, David Bruant wrote:

Consider, for instance, a browser implemented as a web application,
FirefoxOS-style. The data that needs to be collected to save its current
state is held in the DOM. For performance and consistency, it is not
practical to keep the DOM synchronized at all times with a worker
thread. Consequently, data needs to be collected on the main thread and
then sent to a worker thread.

I feel the data can be collected on the main thread in a Transferable
(probably awkward, yet doable). This way, when the data needs to be
transferred, the transfer is fast and heavy processing can happen in the
worker.

Intuitively, this sounds like:
1. collect data to a JSON;

I don't understand this sentence. Do you mean collect data in an object?
Just to be sure we use the same vocabulary:
When I say "object", I mean something described by ES5 - 8.6 [1], so 
basically a bag of properties (usually data properties) with an internal 
[[Prototype]], etc.
When I say "JSON", it's a shortcut for "JSON string", following the 
grammar defined at ES5 - 5.1.5 [2].
Given the vocabulary I use, one can collect data in an object (by adding 
own properties, most likely), then serialize it as a JSON string with a 
call to JSON.stringify, but one cannot collect data in/to "a JSON".



2. serialize JSON (hopefully asynchronously) to a Transferable (or
several Transferables).
Why not collect the data in a Transferable like an ArrayBuffer directly? 
It skips the additional serialization part. Writing a byte stream 
directly is a bit hardcore, I admit, but an object full of setters can 
give the impression of creating an object while actually filling an 
ArrayBuffer as a backend. I feel that could work efficiently.
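The "object full of setters" idea could be sketched like this: assignments look like ordinary property writes but land directly in an ArrayBuffer, so the buffer can later be transferred without a separate serialization step. The field layout below (one float64 slot each for "hp" and "x") is invented purely for illustration:

```javascript
// Sketch: accessors that read/write fixed offsets of an ArrayBuffer,
// so "filling the object" is really filling the transferable backend.
function makeBackedRecord(buffer) {
  const view = new DataView(buffer);
  return {
    set hp(v) { view.setFloat64(0, v); },
    get hp() { return view.getFloat64(0); },
    set x(v) { view.setFloat64(8, v); },
    get x() { return view.getFloat64(8); },
  };
}
```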


What data do you want to collect? Is it all at once, or are you 
building the object little by little? For a backup, and for FirefoxOS 
specifically, could a FileHandle [3] work? It's an async API to write to 
a file.


David

[1] http://es5.github.com/#x8.6
[2] http://es5.github.com/#x5.1.5
[3] https://developer.mozilla.org/en-US/docs/WebAPI/FileHandle_API


Re: [whatwg] HTML Specification update request: bug 20939

2013-03-08 Thread Ian Hickson
On Fri, 8 Mar 2013, Bob Owen wrote:
 
 Has anyone had a chance to look at my proposals to improve the HTML 
 specification with regard to navigating browsing contexts while 
 sandboxed?

Both your e-mail and your bug are on my list of feedback to process. Right 
now, I'm working on bugs primarily. I'll get there eventually. Sorry for 
the delay, I have a lot of backlog and a lot of it is really subtle 
complicated stuff that's taking a long time to carefully consider.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Fetch: Origin header

2013-03-08 Thread Adam Barth
On Fri, Mar 8, 2013 at 2:23 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Thu, Mar 7, 2013 at 7:29 PM, Adam Barth w...@adambarth.com wrote:
 I don't have strong feelings one way or another.  Generally, I think
 it's a good idea if the presence of the Origin header isn't synonymous
 with the request being a CORS request because that could limit our
 ability to use the Origin header in the future.

 Okay. So currently the mix of the Origin specification and the HTML
 specification suggests you either do Origin: /origin/ or Origin:
 null. However WebKit seems to do Origin: /origin/ or no header at
 all (for the privacy-sensitive cases). Ian also mentioned that we
 can not just put the Origin header into every outgoing request as that
 breaks the interwebs (per research you did for Chrome I believe?).

 What do you think we should end up requiring?

I would recommend including an Origin header in every non-GET request
(and, of course, in some GET requests because of CORS).

Adam


Re: [whatwg] Fetch: Origin header

2013-03-08 Thread Anne van Kesteren
On Fri, Mar 8, 2013 at 6:21 PM, Adam Barth w...@adambarth.com wrote:
 I would recommend including an Origin header in every non-GET request
 (and, of course, in some GET requests because of CORS).

That sounds fairly straightforward. Thanks!


-- 
http://annevankesteren.nl/


Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread David Rajchenbach-Teller
On 3/8/13 5:35 PM, David Bruant wrote:
 Intuitively, this sounds like:
 1. collect data to a JSON;
 I don't understand this sentence. Do you mean collect data in an object?

My bad. I sometimes write "JSON" for an object that may be stringified to
JSON format and parsed back without loss, i.e. a bag of [bags of]
non-function properties. So let's just say "object".

 2. serialize JSON (hopefully asynchronously) to a Transferable (or
 several Transferables).
 Why not collect the data in a Transferable like an ArrayBuffer directly?
 It skips the additional serialization part. Writing a byte stream
 directly is a bit hardcore, I admit, but an object full of setters can
 give the impression of creating an object while actually filling an
 ArrayBuffer as a backend. I feel that could work efficiently.

I suspect that this will quickly grow to either:
- an API for serializing an object to a Transferable or a stream of
Transferable; or
- a lower-level but equivalent API for doing the same, without having to
actually build the object.

For instance, how would you serialize something as simple as the following?

{
  name: "The One",
  hp: 1000,
  achievements: ["achiever", "overachiever", "extreme overachiever"]
   // Length of the list is unpredictable
}
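One possible (hand-rolled, purely illustrative) answer to that question is to length-prefix the variable-sized parts: each string is written as a uint32 byte length followed by its UTF-8 bytes, and the array as a uint32 count followed by its strings. Everything here is an invented layout, not an existing API:

```javascript
// Sketch: encode { name, hp, achievements } into a single Transferable
// ArrayBuffer using length prefixes for the unpredictable-length parts.
function encodeRecord(rec) {
  const enc = new TextEncoder();
  const parts = [];
  const pushStr = (s) => {
    const b = enc.encode(s);
    const len = new DataView(new ArrayBuffer(4));
    len.setUint32(0, b.length);            // uint32 byte-length prefix
    parts.push(new Uint8Array(len.buffer), b);
  };
  pushStr(rec.name);
  const hp = new DataView(new ArrayBuffer(8));
  hp.setFloat64(0, rec.hp);                // fixed-size numeric field
  parts.push(new Uint8Array(hp.buffer));
  const count = new DataView(new ArrayBuffer(4));
  count.setUint32(0, rec.achievements.length); // uint32 element count
  parts.push(new Uint8Array(count.buffer));
  for (const a of rec.achievements) pushStr(a);
  // concatenate into one Transferable ArrayBuffer
  const total = parts.reduce((n, p) => n + p.length, 0);
  const out = new Uint8Array(total);
  let off = 0;
  for (const p of parts) { out.set(p, off); off += p.length; }
  return out.buffer;
}
```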

 What are the data you want to collect? Is it all at once or are you
 building the object little by little? For a backup and for FirefoxOS
 specifically, could a FileHandle [3] work? It's an async API to write in
 a file.

Thanks for the suggestion. I am effectively working on refactoring the
storage of browser session data. Not for FirefoxOS, but for Firefox
Desktop, which gives me more architectural constraints but frees my hand
to extend the platform with additional non-web libraries.

Best regards,
 David



-- 
David Rajchenbach-Teller, PhD
 Performance Team, Mozilla


Re: [whatwg] Fetch: crossorigin=anonymous and XMLHttpRequest

2013-03-08 Thread Jonas Sicking
On Tue, Feb 26, 2013 at 3:35 AM, Anne van Kesteren ann...@annevk.nl wrote:
 There's an unfortunate mismatch currently. new
 XMLHttpRequest({anon:true}) will generate a request where a) origin is
 a globally unique identifier b) referrer source is the URL
 about:blank, and c) credentials are omitted. From those
 crossorigin=anonymous only does c. Can we still change
 crossorigin=anonymous to match the anonymous flag semantics of
 XMLHttpRequest or is it too late?

Why do we want the a) and b) behavior? That's not implemented in the
Gecko implementation of XHR({ anon: true }) (which precedes the spec
version, so I'm preemptively putting an end to complaints about us not
following the spec).

/ Jonas


Re: [whatwg] Enabling LCD Text and antialiasing in canvas

2013-03-08 Thread Stephen White
On Sat, Feb 23, 2013 at 6:48 AM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Sat, Feb 23, 2013 at 4:59 AM, Stephen White 
 senorbla...@chromium.orgwrote:

 On Thu, Feb 21, 2013 at 7:01 PM, Rik Cabanier caban...@gmail.com wrote:

 On Fri, Feb 22, 2013 at 10:33 AM, Robert O'Callahan 
 rob...@ocallahan.org wrote:

 I think a fully automatic solution that tries to use subpixel AA but is
 always able to render grayscale AA if needed is the way to go. Possibly
 with an author hint to suggest opting into a more expensive rendering path.


 Here are the problems I see with that approach:

 1)  In order to avoid a performance hit for existing content, it still
 requires a spec change (the hint)
 2)  Even with the hint, when the author knows they want LCD AA, they
 still incur a performance penalty of drawing to two buffers.
 3)  It still can't handle all cases, such as canvas - WebGL, which will
 have to remain grayscale-only, even when the author knows it would be safe
 for their application.


 I agree those are problems. All of the available options have problems.


Given that that's the case, I am going to move forward with the opaque
attribute, since I feel it is the lesser of all the evils presented thus
far.  Paying the cost of two buffers and double-rendering just isn't
palatable, IMHO.


  Also, what form should this authoring hint take?  Is it going to
 explicitly call out LCD AA?  In that case, how is it better than an opt-in
 canvas attribute?  If it doesn't explicitly call out LCD AA, but that's the
 only effect it has, what should it be called?


 Perhaps we could use text-rendering:optimizeLegibility on the canvas
 element.


We also might be over-thinking the danger that LCD AA poses.

Firefox/Linux and Firefox/Mac are both currently shipping with LCD AA
turned on unconditionally in canvas, and it's trivial to make them expose
color fringing.  WebKit nightlies (Safari build) seem do the same, although
Safari 6.0 doesn't.

Stephen


  I also have concerns that the knowledge of when it's safe to use the LCD
 AA buffer is going to spread throughout the browser codebase, even in areas
 which currently have no knowledge of canvas, in order to handle all the
 special cases.  This may just be an implementation detail (and may be
 avoidable, this is TBD), but it does have the potential to introduce
 dependencies or complicate implementation.


 Maybe.


 Maybe I'm missing something, but if we're going down the automatic road,
 why do we need a new function/attribute?  Why not simply detect when a
 canvas-sized fillRect() has been performed with an opaque fillStyle?  This
 would also allow optimization of existing content.


 I agree.

 Rob
 --
 Wrfhf pnyyrq gurz gbtrgure naq fnvq, “Lbh xabj gung gur ehyref bs gur
 Tragvyrf ybeq vg bire gurz, naq gurve uvtu bssvpvnyf rkrepvfr nhgubevgl
 bire gurz. Abg fb jvgu lbh. Vafgrnq, jubrire jnagf gb orpbzr terng nzbat
 lbh zhfg or lbhe freinag, naq jubrire jnagf gb or svefg zhfg or lbhe fynir
 — whfg nf gur Fba bs Zna qvq abg pbzr gb or freirq, ohg gb freir, naq gb
 tvir uvf yvsr nf n enafbz sbe znal.” [Znggurj 20:25-28]



Re: [whatwg] asynchronous JSON.parse

2013-03-08 Thread Glenn Maynard
On Thu, Mar 7, 2013 at 4:18 PM, David Rajchenbach-Teller 
dtel...@mozilla.com wrote:

 I have put together a small test here - warning, this may kill your
 browser:
http://yoric.github.com/Bugzilla-832664/


By the way, I'd recommend keeping sample benchmarks as minimal and concise
as possible.  It's always tempting to make things configurable and dynamic
and output lots of stats, but everyone interested in the results of your
benchmark needs to read the code, to verify it's correct.


On Fri, Mar 8, 2013 at 9:12 AM, David Rajchenbach-Teller 
dtel...@mozilla.com wrote:

 Ideally, yes. The question is whether this is actually feasible.

Also, once we have a worker thread that needs to react fast enough to
 provide sufficient data to the ui thread for animating at 60fps, this
 worker thread ends up being nearly as critical as the ui thread, in
 terms of jank.


I don't think making a call asynchronous is really going to help much, at
least for serialization.  You'd have to make a copy of the data
synchronously, before returning to the caller, in order to guarantee that
changes made after the call returns won't affect the result.  This would
probably be more expensive than the JSON serialization itself, since it
means allocating lots of objects instead of just appending to a string.

If it's possible to make that copy quickly, then that should be done for
postMessage itself, to make postMessage return quickly, instead of doing it
for a bunch of individual computationally-expensive APIs.
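The point above can be made concrete with a toy sketch: whatever an async stringify does later, it must capture the state synchronously before returning, so mutations made after the call do not leak into the result. Names here are invented for illustration, and the "snapshot" is itself a full serialization, which is why the async variant doesn't obviously save main-thread work:

```javascript
// Sketch: the synchronous snapshot an async JSON.stringify would need.
// Returning a closure stands in for "deliver the result later".
function snapshotThenStringify(obj) {
  // Step 1 (synchronous, before returning to the caller): capture state.
  const snapshot = JSON.stringify(obj); // same cost as stringify itself
  // Step 2 (could run later / elsewhere): the caller may now mutate obj
  // freely; the captured snapshot is unaffected.
  return () => snapshot;
}
```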

(Also, remember that returns quickly and does work asynchronously doesn't
mean the work goes away; the CPU time still has to be spent.  Serializing
the complete state of a large system while it's running and trying to
maintain 60 FPS doesn't sound like a good approach in the first place.)

Seriously?
 FirefoxOS [1, 2] is a mobile operating system in which all applications
 are written in JavaScript, HTML, CSS. This includes the browser itself.
 Given the number of companies involved in the venture, all over the
 world, I believe that this qualifies as real use case.


That doesn't sound like a good idea to me at all, but in any case that's a
system platform, not the Web.  APIs aren't typically added to the web to
support non-Web tasks.  For example, if there's something people want to do
in an iOS app using UIWebView, which doesn't come up on web pages, that
doesn't typically drive web APIs.  Platforms can add their own APIs for
their platform-specific needs.

-- 
Glenn Maynard