On Sat, Sep 17, 2011 at 2:34 PM, Andrea Giammarchi
<andrea.giammar...@gmail.com> wrote:
> First of all, thanks for the clarifications ... as I have said, the js-ctypes
> extension looked way too similar to what Brendan showed in his slides.
> In any case, there are still a couple of points that confuse me and yes, I
> will update my post as soon as these things are clear.
> For example ...
>
>>
>> 1. Binary Data, a spec for handling bit-level encodings of various
>> values.  This is related to Typed Arrays, another spec that's shipping
>> already in some browsers.
>
> precisely, these are indeed part of the benchmark code: Int32Array
> Also, I have explicitly slowed down the logic by creating a "classic" literal JS
> object on each loop iteration ... still, the performance is way too slow, so
> whatever js-ctypes has been created for, something went wrong, imo

The benchmark code you posted doesn't show how you use Int32Array, so
I can't say anything definitive about it, but again, Int32Array is
part of the Typed Arrays system, which is *not the same* as js-ctypes.
 They are for totally different things.
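
To make the distinction concrete, here is a rough, illustrative sketch of
what Typed Arrays are for (nothing js-ctypes-specific in it):

    // Typed Arrays: a plain JS view over a binary buffer; no FFI involved,
    // and engines can compile element access down to raw loads and stores.
    var buffer = new ArrayBuffer(1024 * 4);   // raw bytes
    var ints   = new Int32Array(buffer);      // 32-bit integer view of them
    ints[0] = 42;                             // coerced to an int32 on write

js-ctypes, by contrast, exists to describe C types so that privileged code
can call into a native library; there is a sketch of that further down.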

>> 2. js-ctypes, which is a foreign function interface for JS in Firefox.
>> This is used for calling host libraries using the native platform
>> (C-based) calling convention.  If you're coming from a python
>> background, it's very similar to, and named after, the 'ctypes'
>> library.
>
> Still way too similar to what JS.next is proposing ... but I am actually
> glad this js-ctypes "thingy" is not what JS.next is about.

No, this is *not similar at all* to what is proposed for ES.next.
js-ctypes is an FFI, which is not related at all to any ES.next
proposal.
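
For illustration only, a typical js-ctypes use looks roughly like this
(chrome-privileged Firefox code; the library name is platform-specific and
just an example):

    // Load the ctypes module (available to privileged code, not web pages).
    Components.utils.import("resource://gre/modules/ctypes.jsm");

    var libm = ctypes.open("libm.so.6");          // open an existing C library
    var pow  = libm.declare("pow",                // C symbol to bind
                            ctypes.default_abi,   // calling convention
                            ctypes.double,        // return type
                            ctypes.double,        // first argument
                            ctypes.double);       // second argument
    pow(2, 10);                                   // 1024 -- each call goes through libffi
    libm.close();

The point is calling *existing* native code, not making JS itself faster.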

>> 3. Cython, a restricted dialect of Python that compiles to C for
>> performance.
>
> And with TraceMonkey/tracer/whatever I would expect natively optimized JS
> structures and collections, since it's about defining static types *inside* the
> JS world, regardless of the fact that C will use them for its own purposes.

TraceMonkey does not add static types to JavaScript.  TraceMonkey is a
system for dynamic optimization of JavaScript code, based on its
runtime behavior.  Static types have nothing to do with it.
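
A trivial illustration of what "based on its runtime behavior" means --
nothing here is annotated; the types are discovered while the code runs:

    // No static types anywhere: the tracer records the hot loop, observes
    // that arr[i] has held numbers so far, and compiles a numeric fast path
    // guarded by checks that bail out if a different type ever shows up.
    function sum(arr) {
      var total = 0;
      for (var i = 0; i < arr.length; i++) {
        total += arr[i];
      }
      return total;
    }
    sum([1, 2, 3, 4]);   // warm-up runs are what feed the tracer its type info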

> Again, glad I have used the only way I had to test this stuff, and this is
> not what we are going to have in the JS world.

What you tested was not at all like what you thought you were testing,
or what you wrote about.

>> These are all totally different things.  Your blog post compares 2 and
>> 3, and then draws conclusions about 1.  If you want to benchmark
>> js-ctypes against something from Python, you should benchmark it
>> against ctypes, although they both use the same underlying library,
>> libffi, so the results probably won't be that different.
>
> If I compare Cython with js-ctypes I bet Cython will be faster ... then
> again, why?

Because Cython is a compiler from a restricted subset of Python,
annotated with static types, and js-ctypes is an FFI.  The point of
js-ctypes is to bind to an existing C library, which you do not do in
your blog post.

> But at this point I don't care much since this extension brings
> JS.next without JS.next optimizations

js-ctypes has nothing to do with anything in ES.next.

>> There is currently no high-performance implementation of the Binary
>> Data spec, so it can't be usefully benchmarked at this point.
>
> At which point will it be useful to benchmark them, then?
> Also, why has nobody given me an answer about the double memory/object
> allocation per typed instance?

I'm sure that when browsers ship optimized implementations of Binary
Data, that will be widely announced.
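
For reference, the surface of the Binary Data proposal looks roughly like
the sketch below (names are from the strawman as I understand it and may
still change):

    // Struct and array types describe packed binary layouts directly in JS;
    // uint32 etc. are the primitive type objects from the proposal.
    const Point2D = new StructType({ x: uint32, y: uint32 });
    const Segment = new ArrayType(Point2D, 2);

    var p = new Point2D({ x: 1, y: 2 });   // backed by packed binary storage
    p.x = 5;                               // writes coerce to the field's type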

>>
>>  Typed
>> Arrays does have high-performance implementations currently.  There's
>> also no analogue of Cython for JS, because the JS implementation
>> community has focused on making fast implementations of all of
>> JavaScript, rather than limiting the language.
>
> and this is good, indeed Int32Array is part of my benchmark

Int32Array is not for the same sort of thing as Cython.

>> Finally, your use of the Function constructor prevents many
>> optimizations -- don't do that in JavaScript code that you want to be
>> high-performance.
>
> NWMatcher is the fastest DOM engine out there and it's based on this trick
> ... I wonder why engines can optimize at runtime but cannot optimize
> runtime-created stuff ... in any case, check those functions: these can rarely
> be made faster in JagerMonkey, since only the necessary checks are performed,
> so that a tracer optimizes once rather than twice or more ... am I wrong?

DOM interaction will almost certainly dominate the performance of
anything like NWMatcher, so the performance of eval is unlikely to be
the issue here.

>>  There's an excellent overview of how to write
>> high-performance JavaScript code by David Mandelin here:
>>
>> http://blog.mozilla.com/dmandelin/2011/06/16/know-your-engines-at-oreilly-velocity-2011/
>
> this has been linked in the post itself, and I know those slides, and I have
> asked him questions personally ... questions whose answers I would really like
> to understand on this ML before I update my post, so that I can provide proper
> feedback about my doubts.

However, you don't seem to have fully read the slides.  As he says "Do
not use [eval] anywhere near performance sensitive code".  The
Function constructor is a version of |eval|.
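
For comparison, a minimal sketch of the difference:

    // Compiled from a string at runtime: like eval, the engine cannot see
    // this when the script is first compiled, and building it is itself
    // costly, so keep it out of performance-sensitive paths.
    var addDynamic = new Function("a", "b", "return a + b;");

    // Statically declared: fully visible to the JIT at parse time.
    function addStatic(a, b) {
      return a + b;
    }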

> Once again: why, once a struct has been defined, do we need to create its
> counterpart in native JS for each instance?
>
> new Point2D({x: 1, y: 2}) ... why?
> what's wrong with a *shimmable*
> new Point2D(x = 1, y = 2)
> so that no object has to be created?

You'd need a benchmark that shows that the object allocation you're
avoiding here is worth the lack of flexibility.
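
Something along these lines would be the kind of evidence needed (purely a
sketch: Point2D stands for the struct type in question, and the positional
form is your hypothetical shim, not anything in the proposal):

    // Compare the object-literal constructor with the proposed positional one.
    function viaLiteral(n) {
      var p;
      for (var i = 0; i < n; i++) {
        p = new Point2D({ x: i, y: i });   // allocates a throwaway descriptor
      }
      return p;
    }
    function viaPositional(n) {
      var p;
      for (var i = 0; i < n; i++) {
        p = new Point2D(i, i);             // the shimmable form you suggest
      }
      return p;
    }
    var t0 = Date.now(); viaLiteral(1e6);    var literalMs    = Date.now() - t0;
    var t1 = Date.now(); viaPositional(1e6); var positionalMs = Date.now() - t1;
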
-- 
sam th
sa...@ccs.neu.edu
