On Fri, 2006-05-19 at 13:12 -0700, therandthem wrote:

> I agree.  "Cheap concurrency" is also very expensive
> for some languages that can do it cheaper on their
> own.

Haskell, MLton, Erlang and Felix have high-speed
cooperative multi-tasking. Lua has coroutines.
Neko can't do it yet but probably will: more or
less it is equivalent to call/cc but better structured.
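To make the coroutine style concrete, here is a minimal sketch of cooperative multi-tasking using Python generators. This is an illustration only, not Neko or Felix code; the round-robin scheduler and the producer/consumer tasks are made up for the example.

```python
# Cooperative multitasking sketch: each task runs until it voluntarily
# yields control, and a trivial scheduler resumes tasks round-robin.

def producer(queue):
    for i in range(3):
        queue.append(i)
        yield  # hand control back to the scheduler

def consumer(queue, out):
    while len(out) < 3:
        while not queue:
            yield  # nothing to do yet; let someone else run
        out.append(queue.pop(0))
        yield

def run(tasks):
    # Round-robin scheduler: resume each task until all have finished.
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)
            tasks.append(task)
        except StopIteration:
            pass

queue, out = [], []
run([producer(queue), consumer(queue, out)])
print(out)  # -> [0, 1, 2]
```

The point is that no preemption or OS threads are involved: switching is just resuming a generator, which is why this style of concurrency can be so cheap.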

Of these, Felix is probably the fastest, but I don't
know. A sensible comparison is needed. We have a
Mickey Mouse web server now using synchronous threads
which should support 1 million connections on a PC,
easily solving the so-called C10K problem (but we don't
have enough machines to hammer the server).

You can be sure Nicolas is smarter than me and will
want Neko to be able to do this too :)

Conversely, since Felix is a programmable binary code generator,
it might cooperate with the Neko VM in some way. If I had to
choose a VM to use, I'd probably go for Neko.

> I had not thought too much about compiler
> optimizations.  You are right that better tests are in
> order.

The thing is, for compiled languages there is nothing
else. What else makes speed OTHER than optimisations?
This includes bytecode compilers too, of course.

That's the problem, really. At one point, Haskell
was lifting loop invariants .. the test specified
doing something 100 times and Haskell did it once.
Is that fair and reasonable?
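To show what loop-invariant lifting means here, this is a hand-written Python sketch of the transformation. The function and the count of 100 are invented for illustration; the point is that the "benchmark" repeats a pure computation, so an optimising compiler is entitled to do it once.

```python
# The same pure computation requested 100 times: an optimiser may
# hoist it out of the loop, since the result never changes.

def work(n):
    return sum(i * i for i in range(n))

def benchmark_as_specified(n):
    result = 0
    for _ in range(100):
        result = work(n)   # identical pure call every iteration
    return result

def benchmark_after_hoisting(n):
    result = work(n)       # invariant lifted out: computed once
    return result

assert benchmark_as_specified(1000) == benchmark_after_hoisting(1000)
```

Both versions return the same answer, so the transformation is legal, and the timing no longer measures what the benchmark author intended.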

So basically, Shootout tries to measure performance
but disallows optimisations -- the very thing it
is trying to measure.

> Java sits in the high middle on the ranking.  If Ruby
> and Spidermonkey are at the bottom, this seems about
> right.  

I don't know either of these, but I know Python.
Is Python slow? No way. Python is lightning fast.
It is unfair not to include startup and compile times for
scripting languages. It is unfair not to allow a Python
programmer to solve problems in a way that suits the language.
And it is unfair not to include tests for which scripting
languages perform well.

This would apply to Neko too. There is an ML compiler
written in Neko. It performs reasonably well.
Perhaps it can't compile huge programs, but it isn't
designed for that.

The Neko system is designed to allow high-performance
server-side scripting. Other systems, such as Ocaml,
are faster but have limitations which make them unsuitable:
for example, Ocaml bytecode can be loaded dynamically but it
cannot be *unloaded*. It also doesn't support multiprocessing.

So if you look at a set of benchmarks in the area for
which Neko was designed .. you'll probably find it
compares very well with other solutions.

It should .. because it was *designed to do so*.

[And if it doesn't .. Nicolas will fix it :]

> I still think Mickey Mouse benchmarks do have
> value,

Of course they do. I used the Shootout to measure
how my improvements to the Felix compiler were
coming along. I compared machine code Ocaml, C,
and Felix generated for the same test, and worked
on the compiler until Felix did better.

Having done that I found it also improved on other
tests too, but was still slow on some, so I focussed
on them, rinse and repeat.

I was working together with Brent Fulgham on this.
[He's the original admin of the project]

>  but better tests would be better.  The shootout
> establishes some *reasonable* expectations for a
> language based on its ranking.  I concede to
> everything else that you have brought up, but the
> shootout can establish that a language is not a total
> piece of junk (if it can get included). 

Not really. If you look at my own tests on Ackermann
performance:

http://felix.sourceforge.net/current/speed/en_flx_perf_0005.html

with these ranks:

Rankings for ack
    felix            15 176.28
    gnat             15 196.42
    gccopt           15 236.68
    felix            14  34.43
    gnat             14  44.30
    gccopt           14  48.83
    ocamlopt         14  81.98
    gcj              14 188.00
    gcc              14 230.98
    felix            13   5.64

you would conclude Haskell is junk (it is soooo slow on this
test it isn't even shown in the graph :)

You would also conclude Felix, C, and Ada are better than
the Ocaml optimising compiler.

But you don't know what this test is doing. I do, because I
have studied it intensely. What it actually measures for
all languages is how many words of stack are used in
the recursive function call. The test is heavily recursive,
so the stack use is proportional to this number, and
performance is entirely dominated by how fast the cache
spills.
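To see the stack pressure for yourself, here is the standard Ackermann function in Python with a depth counter bolted on. The depth tracking is my addition for illustration; it is not part of the Shootout test.

```python
# The Ackermann function, instrumented to record the maximum
# recursion depth reached -- the quantity that, per the argument
# above, actually dominates this benchmark's timing.

import sys

sys.setrecursionlimit(100000)  # Ackermann recurses very deeply

max_depth = 0

def ack(m, n, depth=0):
    global max_depth
    max_depth = max(max_depth, depth)
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1, depth + 1)
    return ack(m - 1, ack(m, n - 1, depth + 1), depth + 1)

print(ack(3, 3), max_depth)  # ack(3, 3) = 61
```

Even at these tiny arguments the call depth is substantial, and it grows explosively with n; a language's per-frame stack cost therefore dominates cache behaviour and hence the measured time.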

These results are not very important for most applications,
because even if they're doing recursion, such as a visitor
of tree nodes, the overall optimisation of the work
the visitor does dominates. 

Now, IF one looked at benchmarks this way and used them to
compare similar tools for what the test *actually* measures,
the benchmarks would be useful -- to compiler implementors
at least.

Maybe I can give another example. The Felix build system is
written entirely in Python, including a full-scale literate
programming tool. The performance of the LP tool matters
because on every change I make it has to re-extract the sources,
and typically this is up to 10 times slower than actually
compiling them.

There is also a dependence on Python which is a bit unpleasant,
and some problems with using it on a larger scale project like
this. So Neko is under consideration as a replacement.

What affects the choice is (a) the lack of ready-made libraries
and (b) the current dependence on the Boehm garbage collector.
The appeal is that otherwise Neko builds needing just
a C compiler.

Actually .. I don't care about the speed. It's the ability
to distribute it easily that matters most.

-- 
John Skaller <skaller at users dot sf dot net>
Felix, successor to C++: http://felix.sf.net


-- 
Neko : One VM to run them all
(http://nekovm.org)
