> I personally think that the Shootout is pretty good for comparing
> virtual machine and interpreter performance. This way you can see the
> cost of a call, a method dispatch, ... This can give you a general
> idea of what speed to expect from the VM and, more generally, the
> "cost" of using a VM versus using native code.
I think that's a good point. Indeed, the Shootout gives you an indication of how much of a performance difference there is between native code, JIT, virtual machine, and pure interpreter. Of course, even that comparison is flawed, because there are wildly differing implementations of a multitude of different languages, but, as you say, you can still observe that a JIT tends to be (a lot) faster than a plain virtual machine, which, in turn, tends to be (a lot) faster than a pure interpreter. Such a conclusion makes a whole lot more sense than concluding that, say, gcc generates faster code than ocamlopt because there is a 30% difference in score (on a limited number of flawed micro-benchmarks). In short: drawing conclusions from minor differences is ill-advised; drawing conclusions from large differences should be pretty safe.

> For languages compiled to native code, this is less interesting, since
> they can perform a wide range of optimizations for a given test, thus
> removing the whole purpose of the test.

Err...I think you're wrong there. Most optimizations, and the ones that matter most, can be done regardless of the compilation target (constant propagation, dead code elimination, SSA transformation, loop unrolling, ...). Even many of the optimizations one could perform on native machine code could be performed on virtual machine code. I also don't agree that a compiler that optimizes away certain operations removes the purpose of that test. Maybe you want to know exactly what an implementation does with useless code. If that isn't what you want to know, you should obviously write your test so that the operations you want to measure cannot be optimized away.
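To make that concrete, here is a minimal C sketch (my own invented example, not one of the Shootout programs). The first loop has no observable effect, so an optimizing compiler is allowed to delete it entirely; the second takes its iteration count from the command line and prints its result, so the work being measured cannot be folded away:

    #include <stdio.h>
    #include <stdlib.h>

    /* Dead code: sum is never used, so the compiler may delete the
     * whole loop -- timing this measures nothing. */
    static void dead_loop(long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++)
            sum += i;
    }

    /* The iteration count comes from outside the program and the
     * result is printed, so neither constant folding nor dead code
     * elimination can remove the additions we want to measure. */
    static long live_loop(long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++)
            sum += i;
        return sum;
    }

    int main(int argc, char **argv)
    {
        long n = argc > 1 ? atol(argv[1]) : 1000000;
        dead_loop(n);                  /* may compile to nothing */
        printf("%ld\n", live_loop(n)); /* keeps the work alive */
        return 0;
    }

(Passing N on the command line and checking the printed output is, if I remember correctly, how the Shootout programs are arranged anyway.)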
> What counts more to me is to know:
> - the "cost" of using a VM: CPU + memory
> - the high-level features that the language/VM offers

Of course, the Shootout doesn't give you a lot of data about this. To get a real handle on the cost of using a VM, you would need to compare a program compiled to native machine code against the same program (performing the exact same operations) running on a VM. I'm afraid the Shootout doesn't give any example of this (of course, you can still make more general observations, such as that the VM-based implementations seem to be slower by a factor of so-and-so than the ones that compile to native code).
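Such a comparison could look roughly like this; the workload, names, and numbers below are arbitrary choices of mine. You time the C version, port fib() line for line to the VM language, time that, and divide:

    #include <stdio.h>
    #include <time.h>

    /* Naive recursive Fibonacci: trivial to port line for line to a
     * VM language, and heavy on exactly what a VM tends to make more
     * expensive (function calls). */
    static long fib(long n)
    {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main(void)
    {
        clock_t start = clock();
        long result = fib(32);
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("fib(32) = %ld in %.3f s\n", result, seconds);
        /* The ratio against the VM port approximates the VM's CPU
         * cost; memory cost has to be measured separately, e.g. with
         * time(1) or top. */
        return 0;
    }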
As for high-level features: the Shootout doesn't really say anything about these at all. OK, there are benchmarks about threads and exceptions, but it's not clear what these actually measure. For example, there is a C program participating in the exception benchmark...but C doesn't have exceptions! And threads...are they a language or an OS feature? Is anything you care to mention a language or an OS feature? Or perhaps a library feature?
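(Presumably that C entry fakes exceptions with setjmp/longjmp, which is the usual workaround; whether timing that tells you anything about real exception support is exactly my point. A minimal sketch of the technique, with names I made up:)

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf catch_point;   /* where control returns on a "throw" */

    static void may_fail(int input)
    {
        if (input < 0)
            longjmp(catch_point, 1);   /* "throw": unwind to setjmp */
        printf("processed %d\n", input);
    }

    int main(void)
    {
        /* "try": setjmp returns 0 when called directly, and the value
         * passed to longjmp when a "throw" brings control back here. */
        if (setjmp(catch_point) == 0)
            may_fail(-1);
        else
            printf("caught an error\n");   /* "catch" */
        return 0;
    }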
> Neko was designed to be a good mix between lowering the cost as much
> as possible while keeping a simple design and enabling enough
> highlevel features.

Of course, all this is in the judgment of the author. :-) For example, what does "enough highlevel features" mean? What, even, is the purpose of high-level features in a language positioned as a target for compilation? Still, Neko's success shows that your judgments are appreciated, so I say congratulations and keep up the good work.

Regards,

Bob

---
You are in a maze of twisty little passages, all different.
