void TypedSort(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  HandleScope scope(isolate);
  if (args.Length() < 1 || !args[0]->IsTypedArray()) {
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate, "First argument should be an array")));
    return;
  }
  Handle<TypedArray> arr = Handle<TypedArray>::Cast(args[0]);
  uint32_t size = arr->Length();
  // A VLA (double other_arr[size]) is not standard C++; use a vector instead.
  std::vector<double> other_arr(size);
  for (uint32_t i = 0; i < size; i++) {
    // Each Get()/NumberValue() crosses the JS/C++ boundary once per element.
    other_arr[i] = arr->Get(i)->NumberValue();
  }
  // 'compare' is the qsort comparator defined elsewhere in the module.
  qsort(other_arr.data(), size, sizeof(double), compare);
  Handle<Array> res = Array::New(isolate, size);
  for (uint32_t i = 0; i < size; ++i) {
    res->Set(i, Number::New(isolate, other_arr[i]));
  }
  args.GetReturnValue().Set(res);
}
Updated to the above; I'm probably still doing something wrong, because it remains slower by most accounts.
I used benchmark.js to call both the initial and new functions (and compared them to plain JS). Results below:
C native sort x 252,557 ops/sec ±1.93% (70 runs sampled)
C native type sort x 130,923 ops/sec ±2.80% (67 runs sampled)
Javascript sort x 726,026 ops/sec ±1.37% (71 runs sampled)
Fastest: Javascript sort
Slowest: C native type sort
So it looks like this new implementation is actually slower, which leads me
to believe that I'm doing it wrong. That's consistent with what I'm getting
when using the 'shameful' simplified approach of benchmarking with
Date.now():
----- Cost for sorting 100 -------
JS time: 1 msec
C time: 1 msec
Typed C time: 1 msec
----- Cost for sorting 1000 -------
JS time: 2 msec
C time: 3 msec
Typed C time: 4 msec
----- Cost for sorting 10000 -------
JS time: 30 msec
C time: 46 msec
Typed C time: 49 msec
----- Cost for sorting 100000 -------
JS time: 341 msec
C time: 542 msec
Typed C time: 621 msec
Any tips/advice/insight? Is there a particular downside to running the
same operation 1,000,000 times and taking a diff with Date.now()? I hear
it's really frowned upon, but with differences of this magnitude I feel
it's safe to say C consistently underperforms JS in this circumstance with
my naive implementation, and unfortunately so does my specific use of
typed arrays via V8. Any info would be great.
Thanks so much Ben and Jochen!
On Wednesday, October 26, 2016 at 4:02:45 AM UTC-4, Ben Noordhuis wrote:
>
> On Wed, Oct 26, 2016 at 3:42 AM, Samantha Krieger <[email protected]> wrote:
> > OK I see. My bad on that assumption.
> >
> > The truth is I actually wanted to write a native node module to do a lot
> > of manipulation of dynamic JS objects (from a few to many thousand
> > properties of varying types). I wrote this example as a test because I
> > started noticing the overhead of marshaling objects in C++ was higher
> > than I expected. Are there any v8 data structures that would make ^^
> > task reasonably faster in C++? Or would pulling the objects into v8
> > using the API just be expensive? If so, what is the ideal situation
> > that would improve performance by re-writing using v8? Fairly static
> > types? That was the impression I got from the few tutorials I found
> > out there...
>
> Jochen mentioned typed arrays and that is really the best way to go if
> you have to shift a lot of data from JS to C++ land or vice versa.
> Node.js uses the same technique in many places.
>
--
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev
---
You received this message because you are subscribed to the Google Groups
"v8-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
For more options, visit https://groups.google.com/d/optout.