On Tue, 6 Aug 2013, Boris Zbarsky wrote:
On 8/6/13 5:58 PM, Ian Hickson wrote:
Parsing is easy to do on a separate worker, because it has no
dependencies -- you can do it all in isolation.
Sadly, that may not be the case.
Actual JS implementations have various thread-local data
On Thu, 7 Mar 2013, j...@mailb.org wrote:
Right now JSON.parse blocks the main loop; this becomes more and more of an
issue as JSON documents get bigger and are also used as a serialization
format to communicate with web workers.
I think it would make sense to have a Promise-based API for JSON
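A Promise-based API along these lines might look like the following sketch (parseAsync is a hypothetical name, not a proposed standard; note the caveat raised elsewhere in the thread that deferring the call does not by itself move the parsing work off the main thread):

```javascript
// Hypothetical promise-based wrapper around JSON.parse. This only defers
// the (still synchronous) parse to a later task; it does not make the
// parse itself run off the main thread.
function parseAsync(text) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      try {
        resolve(JSON.parse(text));
      } catch (err) {
        reject(err); // surface SyntaxError through the promise
      }
    }, 0);
  });
}

// Usage: parseAsync('{"a": 1}').then(obj => console.log(obj.a));
```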
On 8/6/13 5:58 PM, Ian Hickson wrote:
One could imagine an implementation strategy where the cloning is done on
the sending side, or even on a third thread altogether
The cloning needs to run to completion (in the sense of capturing an
immutable representation) before anyone can change the
On 08/03/2013 22:16, David Rajchenbach-Teller wrote:
On 3/8/13 5:35 PM, David Bruant wrote:
2. serialize JSON (hopefully asynchronously) to a Transferable (or
several Transferables).
Why not collect the data in a Transferable like an ArrayBuffer directly?
It skips the additional
On 07/03/2013 23:34, Tobie Langel wrote:
In which case, isn't part of the solution to paginate your data, and
parse those pages separately?
Assuming you can modify the backend. Also, data doesn't necessarily have
to get all that bulky before you notice on a somewhat sluggish device.
Even
On Friday, March 8, 2013 at 10:44 AM, Robin Berjon wrote:
On 07/03/2013 23:34, Tobie Langel wrote:
Wouldn't some form of event-based API be more indicated? E.g.:
var parser = JSON.parser();
parser.parse(src);
parser.onparse = function(e) { doSomething(e.data); };
I'm not sure how
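One possible semantics for the proposed shape, sketched with a plain factory function standing in for the nonexistent JSON.parser() (makeParser is a hypothetical helper; the parse still happens synchronously, just in a later task):

```javascript
// Sketch of the event-based shape proposed above. JSON.parser() does not
// exist; this fakes it with an object whose parse() completes in a later
// task and then fires onparse with the result.
function makeParser() {
  const parser = {
    onparse: null,
    onerror: null,
    parse(src) {
      setTimeout(() => {
        try {
          const data = JSON.parse(src); // still synchronous under the hood
          if (parser.onparse) parser.onparse({ data });
        } catch (err) {
          if (parser.onerror) parser.onerror(err);
        }
      }, 0);
    },
  };
  return parser;
}

// Mirrors the snippet above: assigning onparse after calling parse is
// safe here, because the event fires from a later task.
const parser = makeParser();
parser.parse('{"x": 42}');
parser.onparse = function (e) { console.log(e.data.x); };
```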
On 08/03/2013 02:01, Glenn Maynard wrote:
If you're dealing with lots of data, you should be loading or creating
the data in the worker in the first place, not creating it in the UI
thread and then shuffling it off to a worker.
Exactly. That would be the proper way to handle a big amount of
Let me answer your question about the scenario before getting into the
specifics of an API.
For the moment, the main use case I see for asynchronous serialization of
JSON is that of snapshotting the world without stopping it, for backup
purposes, e.g.:
a. saving the state of the current region in
On 07/03/2013 23:18, David Rajchenbach-Teller wrote:
(Note: New on this list, please be gentle if I'm debating an
inappropriate issue in an inappropriate place.)
Actually, communicating large JSON objects between threads may cause
performance issues. I do not have the means to measure
On 3/8/13 2:01 AM, Glenn Maynard wrote:
(Not nitpicking, since I really wasn't sure what you meant at first, but
I think you mean a JavaScript object. There's no such thing as a JSON
object.)
I meant a pure data structure, i.e. a JavaScript object without methods.
It was my understanding that
I fully agree that any asynchronous JSON [de]serialization should be
stream-based, not string-based.
Now, if the main heavy duty work is dealing with the large object, this
can certainly be kept on a worker thread. I suspect, however, that this
is not always feasible.
Consider, for instance, a
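For the array-of-items case, stream-based stringification can be sketched as a generator that emits bounded chunks instead of building one giant string (stringifyArrayChunks is a hypothetical helper, not an API from the thread):

```javascript
// Stream-based, not string-based: emit the serialization of an array of
// items as a sequence of small chunks. A consumer can write each chunk
// (to disk, to a worker) between turns of the event loop, so no single
// giant string is ever held.
function* stringifyArrayChunks(items) {
  yield "[";
  for (let i = 0; i < items.length; i++) {
    // Each item is small, so each JSON.stringify call is cheap.
    yield (i ? "," : "") + JSON.stringify(items[i]);
  }
  yield "]";
}

// Usage: for (const chunk of stringifyArrayChunks(bigArray)) write(chunk);
```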
On 08/03/2013 13:34, David Rajchenbach-Teller wrote:
I fully agree that any asynchronous JSON [de]serialization should be
stream-based, not string-based.
Now, if the main heavy duty work is dealing with the large object, this
can certainly be kept on a worker thread. I suspect, however, that
On 3/8/13 1:59 PM, David Bruant wrote:
Consider, for instance, a browser implemented as a web application,
FirefoxOS-style. The data that needs to be collected to save its current
state is held in the DOM. For performance and consistency, it is not
practical to keep the DOM synchronized at all
On Fri, Mar 8, 2013 at 4:51 AM, David Rajchenbach-Teller
dtel...@mozilla.com wrote:
a. saving the state of the current region in an open world RPG;
b. saving the state of an ongoing physics simulation;
These should live in a worker in the first place.
c. saving the state of the browser
On 08/03/2013 15:29, David Rajchenbach-Teller wrote:
On 3/8/13 1:59 PM, David Bruant wrote:
Consider, for instance, a browser implemented as a web application,
FirefoxOS-style. The data that needs to be collected to save its current
state is held in the DOM. For performance and consistency,
On 3/8/13 5:35 PM, David Bruant wrote:
Intuitively, this sounds like:
1. collect data to a JSON;
I don't understand this sentence. Do you mean collect data in an object?
My bad. I sometimes write "JSON" for an object that may be stringified to
JSON format and parsed back without loss, i.e. a bag of
On Thu, Mar 7, 2013 at 4:18 PM, David Rajchenbach-Teller
dtel...@mozilla.com wrote:
I have put together a small test here - warning, this may kill your
browser:
http://yoric.github.com/Bugzilla-832664/
By the way, I'd recommend keeping sample benchmarks as minimal and concise
as
Right now JSON.parse blocks the main loop; this becomes more and more of an
issue as JSON documents get bigger and are also used as a serialization
format to communicate with web workers.
To handle large JSON documents there is a need for an async JSON.parse,
something like:
JSON.parse(data,
(It's hard to talk to somebody called j, by the way. :)
On Thu, Mar 7, 2013 at 2:06 AM, j...@mailb.org wrote:
Right now JSON.parse blocks the main loop; this becomes more and more of an
issue as JSON documents get bigger
Just load the data you want to parse inside a worker, and perform the
The JSON object and its API are part of the ECMAScript language
specification which is standardized by Ecma/TC39, not whatwg.
Rick
On Thursday, March 7, 2013, wrote:
Right now JSON.parse blocks the main loop; this becomes more and more of an
issue as JSON documents get bigger and are also used
On Thu, Mar 7, 2013 at 9:29 AM, Rick Waldron waldron.r...@gmail.com wrote:
The JSON object and its API are part of the ECMAScript language
specification which is standardized by Ecma/TC39, not whatwg.
He's talking about an async interface to it, not the core parser. It's a
higher level of
On Thu, Mar 7, 2013 at 10:42 AM, Glenn Maynard gl...@zewt.org wrote:
On Thu, Mar 7, 2013 at 9:29 AM, Rick Waldron waldron.r...@gmail.com wrote:
The JSON object and its API are part of the ECMAScript language
specification which is standardized by Ecma/TC39, not whatwg.
He's talking about an
(Note: New on this list, please be gentle if I'm debating an
inappropriate issue in an inappropriate place.)
Actually, communicating large JSON objects between threads may cause
performance issues. I do not have the means to measure reception speed
simply (which would be used to implement
I'd like to hear about the use cases a bit more.
Generally, structured data gets bulky because it contains more items, not
because items get bigger.
In which case, isn't part of the solution to paginate your data, and parse
those pages separately?
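The pagination idea can be sketched as parsing each small page in its own turn of the event loop (parsePaged is a hypothetical helper, assuming each page arrives as a separate JSON array of items):

```javascript
// Paginate instead of one giant document: parse each page separately and
// yield to the event loop between pages, so the main thread never blocks
// for longer than one small parse.
async function parsePaged(pages) {
  const items = [];
  for (const page of pages) {
    items.push(...JSON.parse(page)); // each page is a small JSON array
    await new Promise(resolve => setTimeout(resolve, 0)); // yield to other tasks
  }
  return items;
}

// Usage: parsePaged(['[1,2]', '[3,4]']).then(all => console.log(all.length));
```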
Even if an async API for JSON existed,
On Thu, Mar 7, 2013 at 2:18 PM, David Rajchenbach-Teller
dtel...@mozilla.com wrote:
(Note: New on this list, please be gentle if I'm debating an
inappropriate issue in an inappropriate place.)
Actually, communicating large JSON objects between threads may cause
performance issues. I do not
It is.
However, to use Transferable objects for the purpose of implementing
asynchronous parse/stringify, one needs conversions of JSON objects
from/to Transferable objects. As it turns out, these conversions are
just variants on JSON parse/stringify, so we have not simplified the issue.
Note that I
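To make that conversion concrete: turning an object into a Transferable ArrayBuffer is essentially stringify plus an encoding pass, so the full serialization cost remains (toTransferable/fromTransferable are hypothetical helpers, not APIs from the thread):

```javascript
// Converting a plain object to/from a Transferable ArrayBuffer. Note that
// toTransferable still performs a complete JSON.stringify, which is
// exactly the blocking work an async API is meant to avoid.
function toTransferable(obj) {
  // TextEncoder yields a Uint8Array; its .buffer is an ArrayBuffer,
  // which is Transferable.
  return new TextEncoder().encode(JSON.stringify(obj)).buffer;
}

function fromTransferable(buffer) {
  return JSON.parse(new TextDecoder().decode(buffer));
}

// In a page one would then post it with a transfer list, e.g.
// worker.postMessage(buf, [buf]), moving the buffer without a copy.
```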
On Thu, Mar 7, 2013 at 4:18 PM, David Rajchenbach-Teller
dtel...@mozilla.com wrote:
(Note: New on this list, please be gentle if I'm debating an
inappropriate issue in an inappropriate place.)
(To my understanding of this list, it's completely acceptable to discuss
this here.)
Actually,