Thanks for the timing tips. In these tests I'm only interested in getting
ball-park figures for the big picture, but if I need a more accurate
timer I'll definitely keep microtime in mind. I am very interested in
exploring what you said about Crankshaft. Do you have some example code
showing this effect? Thanks, Simon


On Wed, Apr 23, 2014 at 1:18 PM, <[email protected]> wrote:

> Simon,
>
> A month ago I ran similar experiments and got results on the order of what
> you measured.  Two notes about this type of synthetic benchmark:
>
> 1. Use a high-resolution timer (e.g.: npm install microtime).  These
> results have suspicious times of the kind you get when you divide 1000000
> operations by a small integer.
>
> 2. Try a set of experiments that sweeps through a range of iteration
> counts (e.g.: powers of two from 1 to 1M).  After some number of
> iterations (about 32k in my experiments) your code is recompiled with
> Crankshaft, which has completely different execution characteristics for
> both JS and native addons (native calls are treated as deoptimizations by
> Crankshaft).  These results mix output from the two compilers.
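A minimal sketch of the sweep described in note 2, using only the built-in `process.hrtime` (the npm `microtime` module mentioned in note 1 would work as well); `work()` here is a hypothetical stand-in for whatever native call is being benchmarked:

```javascript
// Sweep iteration counts in powers of two and report ns/call for each.
// After enough iterations, V8's optimizing compiler (Crankshaft in
// 2014-era Node) typically recompiles the hot loop, so per-call times
// for large counts can differ sharply from small ones.
function work(a, b, c) {        // stand-in for the native call being timed
  return a + b + c;
}

function timePerCall(n) {
  var start = process.hrtime();          // [seconds, nanoseconds]
  var acc = 0;
  for (var i = 0; i < n; i++) acc += work(i, i, i);
  var d = process.hrtime(start);
  var ns = d[0] * 1e9 + d[1];
  if (acc === -1) console.log(acc);      // keep the loop from being elided
  return ns / n;
}

var results = [];
for (var n = 1; n <= 1 << 20; n <<= 1) { // 1 .. 1M, powers of two
  results.push({ iterations: n, nsPerCall: timePerCall(n) });
}
results.forEach(function (r) {
  console.log(r.iterations + '\t' + r.nsPerCall.toFixed(1) + ' ns/call');
});
```

On 2014-era Node the jump in per-call time typically appears once the loop crosses Crankshaft's hot-function threshold; modern V8 (TurboFan) has different thresholds, but the same sweep exposes them.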
>
>           -J
>
>
>
>
> On Wednesday, April 23, 2014 12:53:55 PM UTC-7, SimonHF wrote:
>
>> FYI here are some perf results that I got calling different types of
>> dummy C++ functions:
>>
>> * estimate 25000322 calls per second; object: n/a, input: n/a, output: n/a
>> * estimate 20000019 calls per second; object: unwrapped, input: n/a,
>> output: n/a
>> * estimate 13333240 calls per second; object: unwrapped, input: 3 ints,
>> output: n/a
>> * estimate 10000010 calls per second; object: unwrapped, input: 3 ints,
>> output: int
>> * estimate 7142827 calls per second; object: unwrapped, input: 3 ints,
>> output: 8 byte str
>> * estimate 1428573 calls per second; object: unwrapped, input: 3 ints,
>> output: 4KB byte str
>> * estimate 5405379 calls per second; object: unwrapped, input: 3 ints,
>> output: 4KB byte str external
>> * estimate 338983 calls per second; object: unwrapped, input: 3 ints,
>> output: 4KB byte buffer
>> * estimate 555556 calls per second; object: unwrapped, input: 3 ints,
>> output: 4KB byte buffer external
>>
>> So a dummy C++ function with no input, output, or object unwrapping can
>> be called about 25M times per second on my laptop. However, calling the
>> same function which unwraps its object can only be called 20M times per
>> second. Then add 3 input parameters and the same function can only be
>> called 13.3M times per second... etc. Then comes the interesting bit (for
>> me anyway): if the function returns a 4KB string then the calls per
>> second drop to 1.4M. However, using the String::NewExternal() method
>> results in a much better -- as expected -- per second count of 5.4M. The
>> disappointing figures are with node Buffer::New(); only 339K calls per
>> second for the non-zero-copy method, and only 555K calls per second for the
>> zero-copy version; about 10x slower than the String::NewExternal() method.
>>
>> Why is Buffer::New() so slow?
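The 4KB figures above come from the C++ addon, but the shape of the cost can be sketched in plain JS: allocating a fresh, zero-filled 4KB buffer per call (roughly what a copying Buffer::New implies) versus handing out the memory without the fill. This uses the modern `Buffer.alloc`/`Buffer.allocUnsafe` API (the 2014-era spelling was `new Buffer(n)`), and it is only an analogy, not a reproduction of the addon's code paths:

```javascript
// Compare per-call cost of a zero-filled 4KB Buffer allocation against
// an unfilled one. The fill/copy work, plus the JS wrapper object each
// Buffer needs, is overhead a plain string return does not pay.
var N = 100000;
function ns(d) { return d[0] * 1e9 + d[1]; }

var t0 = process.hrtime();
for (var i = 0; i < N; i++) Buffer.alloc(4096);        // allocate + zero-fill
var zeroed = ns(process.hrtime(t0)) / N;

var t1 = process.hrtime();
for (var j = 0; j < N; j++) Buffer.allocUnsafe(4096);  // allocate only
var unsafe = ns(process.hrtime(t1)) / N;

console.log('alloc (zeroed): ' + zeroed.toFixed(0) + ' ns/op');
console.log('allocUnsafe:    ' + unsafe.toFixed(0) + ' ns/op');
```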
>>
>> On Wednesday, April 23, 2014 10:55:02 AM UTC-7, SimonHF wrote:
>>
>>> Thanks for the info and the link. Looks very interesting. I will
>>> definitely take a look at ems.
>>>
>>> FYI here's what I have discovered so far:
>>>
>>> I created a native Node addon consisting of a function that does
>>> nothing. If JavaScript calls this vanilla function as quickly as possible
>>> it manages about 3 million calls per second. I guess this is the
>>> high-water mark.
>>>
>>> If I modify the function so that it returns a string (which has to be
>>> created and the string bytes copied into the new string object) then the
>>> calls per second drop substantially depending upon the length of the
>>> returned string.
>>>
>>> A way around this is to use the String::NewExternal() mechanism which
>>> provides a way to make an immutable external string inside v8.
>>>
>>> So far I have not managed to get Buffer to give the same kind of
>>> performance as String::NewExternal(). Performance seems to be about a third
>>> as good :-( Still experimenting.
>>>
>>> I'm also on the lookout for mutable objects, as Andreas suggested...
>>>
>>> Thanks,
>>> Simon
>>>
>>>
>>> On Wed, Apr 23, 2014 at 10:03 AM, <[email protected]> wrote:
>>>
>>>> Simon,
>>>>
>>>> The rationale behind Andreas' answer is that v8 implements a virtual
>>>> machine, and by definition the only way to move data into or out of it is
>>>> copy-in/copy-out through a v8 interface.  Using a native plug-in to
>>>> defeat the isolation of a v8 isolate will only break design assumptions
>>>> in v8.
>>>>
>>>> An off-heap buffer can be allocated and accessed from inside v8, but
>>>> referencing that memory from within a JS program requires the buffer
>>>> access methods (see the Buffer section of the Node.js v0.10.26 manual),
>>>> which limit you to scalar types.  In practice these operations copy the
>>>> data from the buffer to the v8 heap anyhow, so true zero-copy in v8 is
>>>> nearly impossible.
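The scalar access pattern described above looks like this from the JS side (written with the modern `Buffer.alloc`; 2014-era Node spelled it `new Buffer(n)`). Each read or write moves exactly one scalar between the off-heap store and the JS heap:

```javascript
// Accessing off-heap bytes from JS goes through per-scalar Buffer methods;
// every call copies one value between the external store and a JS number.
var buf = Buffer.alloc(16);              // off-heap backing store

buf.writeUInt32LE(0xdeadbeef, 0);        // copy-in: one 32-bit scalar
buf.writeDoubleLE(3.14159, 4);           // copy-in: one 64-bit float

var u = buf.readUInt32LE(0);             // copy-out: a JS number, not a view
var d = buf.readDoubleLE(4);

console.log(u.toString(16), d);          // → deadbeef 3.14159
```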
>>>>
>>>> I wrote a native Node addon (https://www.npmjs.org/package/ems) that
>>>> combines synchronization primitives with shared memory. It also depends
>>>> on copy-in/copy-out, and because it's a native plugin it deoptimizes code
>>>> that uses it.  Nevertheless, it's still capable of millions of atomic
>>>> updates per second, far better than is possible with messaging.
>>>>
>>>>              -J
>>>>
>>>>
>>>> On Tuesday, April 22, 2014 9:54:16 AM UTC-7, SimonHF wrote:
>>>>>
>>>>> For example, I can get a uint like this in a C++ function: uint32_t
>>>>> myuint32 = args[0]->Int32Value();
>>>>>
>>>>> But is it also possible to change the value somehow from C++ land, so
>>>>> that in JavaScript the variable passed into the function reflects the
>>>>> changed value?
>>>>>
>>>>> If this is possible with some C++ argument types and not others, then
>>>>> which types allow modification?
>>>>>
>>>>> Thanks.
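The answer hinges on JS argument semantics, which are visible without any C++: primitives are passed by value, so a callee (native or JS) cannot make the caller's variable reflect a new primitive value, while the *contents* of objects, Buffers, and typed arrays can be mutated in place. A sketch, where `bumpNumber`/`bumpFirst` are hypothetical stand-ins for native functions:

```javascript
// Primitives: the callee gets a copy, so the caller's variable is untouched.
function bumpNumber(n) { n += 1; return n; }
var x = 41;
bumpNumber(x);
// x is still 41

// Objects and typed arrays: the callee receives a reference, so writes to
// their *contents* are visible to the caller. This is the escape hatch a
// C++ addon can use too: mutate the fields of an object argument or the
// bytes of a Buffer instead of trying to reassign a primitive argument.
function bumpFirst(arr) { arr[0] += 1; }
var a = new Uint32Array([41]);
bumpFirst(a);
// a[0] is now 42
```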
>>>>>

-- 
-- 
v8-users mailing list
[email protected]
http://groups.google.com/group/v8-users
--- 
You received this message because you are subscribed to the Google Groups 
"v8-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
