The point of using TypedArrays is that you can externalize their underlying
ArrayBuffers (or create them externalized in the first place) and then
operate on their raw data, avoiding back-and-forth copying:

On Sun, Oct 30, 2016 at 6:27 AM, Samantha Krieger <[email protected]>
wrote:

> OK I gave it a shot, but I know I did it wrong...:
>
>
> void TypedSort(const FunctionCallbackInfo<Value>& args) {
>    Isolate* isolate = Isolate::GetCurrent();
>    HandleScope scope(isolate);
>
>    if (args.Length() < 1 || !args[0]->IsTypedArray()) {
>       isolate->ThrowException(
>             Exception::TypeError(
>                   String::NewFromUtf8(isolate,
>                         "First argument should be an array")));
>       return;
>    }
>
>    Handle<TypedArray> arr = Handle<Uint16Array>::Cast(args[0]);
>    int size = arr->Length();
>    double other_arr[size];
>    for (int i = 0; i < size; i++){
>       other_arr[i] = arr->Get(i)->NumberValue(); //presumably this should be 
> changed to work off the fact that this is a typed array... couldn't figure 
> out the right way of manipulating the api...
>    }
>
Not just the body of the loop should be changed; the entire loop can be
avoided.

>
>    qsort(other_arr, size, sizeof(other_arr[0]), compare);
>    Handle<Array> res = Array::New(isolate, size);
>    for (int i = 0; i < size; ++i) {
>       res->Set(i, Number::New(isolate, other_arr[i]));
>    }
>
This loop can be avoided too. Look for the terms "ArrayBuffer" and
"externalize" in V8's API.
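An untested sketch of what that might look like, assuming the 2016-era V8 headers (`ArrayBuffer::GetContents` was later superseded by `GetBackingStore`, so treat the method names as assumptions to verify against your v8.h, not drop-in code):

```cpp
// Untested sketch -- assumes 2016-era V8 (ArrayBuffer::GetContents).
// Argument checking as in the original function, elided here.
void TypedSort(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  HandleScope scope(isolate);

  Handle<Uint16Array> arr = Handle<Uint16Array>::Cast(args[0]);

  // Reach through to the underlying ArrayBuffer's raw memory.
  ArrayBuffer::Contents contents = arr->Buffer()->GetContents();
  uint16_t* data = reinterpret_cast<uint16_t*>(
      static_cast<char*>(contents.Data()) + arr->ByteOffset());
  size_t len = arr->ByteLength() / sizeof(uint16_t);

  // Sort in place: the JS-visible typed array aliases this memory,
  // so there is nothing to copy back and nothing to return.
  std::sort(data, data + len);
}
```

The JS caller just passes a Uint16Array and reads it back after the call; no result array needs to be built at all.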

>
>    args.GetReturnValue().Set(res);
>
> }
>
>
> Updated to this ^^, probably doing something wrong because I still find
> that this is slower by most accounts.
>
> I used benchmark.js to call both the initial and new functions (comparing
> to JS as well). Below are the results:
> C native sort x 252,557 ops/sec ±1.93% (70 runs sampled)
> C native type sort x 130,923 ops/sec ±2.80% (67 runs sampled)
> Javascript sort x 726,026 ops/sec ±1.37% (71 runs sampled)
> Fastest: Javascript sort
> Slowest: C native type sort
>
> So it looks like this new implementation is marginally faster than the
> previous one but still slower than JS, which leads me to believe that I'm
> doing it wrong. The results here are consistent with what I'm getting when
> using the 'shameful' simplified approach to benchmarking with Date.now():
>
> Typed C time: 2 msec
>  ----- Cost for sorting 1000 -------
> JS time: 21 msec
> C time: 23 msec
> Typed C time: 23 msec
>  ----- Cost for sorting 10000 -------
> JS time: 148 msec
> C time: 234 msec
> Typed C time: 208 msec
>  ----- Cost for sorting 100000 -------
> JS time: 1594 msec
> C time: 2406 msec
> Typed C time: 2122 msec
>
> Any tips/advice/insight? Is there a particular downside to running the
> same operations 1,000,000 times and diffing Date.now()?
>

What do you think benchmark.js is doing under the hood? One way to
microbenchmark is as potentially misleading as the other. That doesn't
mean the results are *necessarily* false, just that it's very easy to
accidentally measure benchmarking artifacts and, in consequence, waste time
and effort on "optimizations" that don't actually make a difference in
larger applications.


> I hear it's really frowned upon, but with differences of this magnitude I
> feel it's safe to say that C consistently underperforms JS in this
> circumstance with my naive implementation, and unfortunately so does my
> specific use of typed arrays through V8. Any info would be great. There
> are also a lot of flaws in this for a number of reasons that I won't get
> into right now.
>
> Thanks so much Ben and Jochen!
>
> On Wednesday, October 26, 2016 at 4:02:45 AM UTC-4, Ben Noordhuis wrote:
>>
>> On Wed, Oct 26, 2016 at 3:42 AM, Samantha Krieger <[email protected]>
>> wrote:
>> > OK I see. My bad on that assumption.
>> >
>> > The truth is I actually wanted to write a native node module to do a
>> lot of
>> > manipulation of dynamic JS objects (from a few to many thousand
>> properties
>> > of varying types). I wrote this example as a test because I started
>> noticing
>> > the overhead of marshaling objects in C++ was higher than I expected.
>> Are
>> > there any v8 data structures that would make ^^ task reasonably faster
>> in
>> > C++? Or would pulling the objects into v8 using the API just be
>> expensive?
>> > If so, what is the ideal situation that would improve performance by
>> > re-writing using v8? Fairly static types? That was the impression I got
>> from
>> > the few tutorials I found out there...
>>
>> Jochen mentioned typed arrays and that is really the best way to go if
>> you have to shift a lot of data from JS to C++ land or vice versa.
>> Node.js uses the same technique in many places.
>>

-- 
-- 
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev
--- 
You received this message because you are subscribed to the Google Groups 
"v8-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.