I tried this a long time ago with binary-to-JS conversion code written in C.
It didn't make for a huge improvement in total speed.



On Jan 27, 2013, at 11:37 PM, Jason Smith <[email protected]> wrote:

> Hey, Jan. This is a totally random and hypothetical idea:
> 
> Do you think there would be any speedup from using term_to_binary() and
> binary_to_term() instead of encoding through JSON? The view server would of
> course need to support that codec. I have already implemented encoding in
> erlang.js: https://github.com/iriscouch/erlang.js
> 
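> Concretely, I'm picturing something like this on the Erlang side (a rough
> sketch; jiffy and the toy document are just stand-ins for whatever the
> real code uses):
> 
>     %% a document as CouchDB-style Erlang JSON terms
>     Doc = {[{<<"_id">>, <<"mydoc">>}, {<<"value">>, 42}]},
> 
>     %% today: JSON in both directions, with JSON.parse()/JSON.stringify()
>     %% doing the matching work inside the JS view server
>     Json = jiffy:encode(Doc),
>     Doc  = jiffy:decode(Json),
> 
>     %% the idea: external term format instead, decoded on the JS side by
>     %% erlang.js rather than JSON.parse()
>     Bin = term_to_binary(Doc),
>     Doc = binary_to_term(Bin).
> 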
> My suspicion is that there would be little or no speedup. The Erlang side
> would get faster (term_to_binary is faster) but the JS side would get
> slower (custom decoding rather than JSON.parse()). The JS VM is slightly
> faster, so the net change would mostly reflect that trade-off.
> 
> But I thought I'd share the idea.
> 
> On Sun, Jan 27, 2013 at 12:50 PM, Jan Lehnardt <[email protected]> wrote:
> 
>> 
>> On Jan 27, 2013, at 13:22 , Alexander Shorin <[email protected]> wrote:
>> 
>>> On Sun, Jan 27, 2013 at 3:55 PM, Jason Smith <[email protected]> wrote:
>>>> 
>>>> * Very little difference between implementations (because stdio is the
>>>> bottleneck)
>>> 
>>> Why is stdio the bottleneck? I'm interested in the underlying reasons.
>> 
>> It is actually not stdio itself, but the serialisation from Erlang terms
>> to JSON to JS objects to JSON to Erlang terms.
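>> 
>> Schematically, each command is one full lap of that loop (just a sketch;
>> jiffy here stands in for the JSON codec on the Erlang side):
>> 
>>     Command = [<<"reset">>],
>>     Json = jiffy:encode(Command),    %% 1. Erlang term -> JSON text
>>     %% in couchjs: JSON.parse(line)  %% 2. JSON text   -> JS object
>>     %% in couchjs: JSON.stringify()  %% 3. JS value    -> JSON text
>>     Resp = <<"true">>,
>>     true = jiffy:decode(Resp).       %% 4. JSON text   -> Erlang term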
>> 
>> Cheers
>> Jan
>> --
>> 
>> 
>>> 
>>> In my experience, the protocol design doesn't allow view and query
>>> servers to work as fast as they could. For example, say we have 50 ddocs
>>> with validate functions. Each document save then executes between 100
>>> commands (50 resets + 50 ddoc validate_doc_update calls) and 150
>>> commands (plus ddoc cache commands), even though it would be possible
>>> to process them in bulk.
>>> 
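>>> To put numbers on it, one save today amounts to something like this
>>> (a sketch; the command shapes are from memory of the query server
>>> protocol and jiffy is just a stand-in, so treat both as assumptions):
>>> 
>>>     -module(validate_sketch).
>>>     -export([validate_all/5]).
>>> 
>>>     %% hypothetical helper: encode one command and write it to the view
>>>     %% server (stdout here, purely for illustration)
>>>     send(Cmd) -> io:format("~s~n", [jiffy:encode(Cmd)]).
>>> 
>>>     %% one document save: a reset plus a validate_doc_update call per
>>>     %% ddoc -- with 50 ddocs that is already 100 commands, before any
>>>     %% extra commands to populate the ddoc cache
>>>     validate_all(DDocIds, NewDoc, OldDoc, UserCtx, SecObj) ->
>>>         lists:foreach(fun(DDocId) ->
>>>             send([<<"reset">>]),
>>>             send([<<"ddoc">>, DDocId, [<<"validate_doc_update">>],
>>>                   [NewDoc, OldDoc, UserCtx, SecObj]])
>>>         end, DDocIds).
>>> 
>>>     %% a bulk variant could send all 50 validate calls in one round trip
>>> 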
>>> --
>>> ,,,^..^,,,
>> 
>> 
> 
> 
> -- 
> Iris Couch
