On Tue, Jun 10, 2003 at 04:59:50PM -0700, Robert Spier wrote:
> > >     mistral-jerome:/tmp > time python test.py
> > >     python test.py  2,59s user 0,00s system 100% cpu 2,582 total
> > >     mistral-jerome:/tmp > ocamlc -o tst test.ml; time ./tst
> > >     ./tst  0,14s user 0,00s system 106% cpu 0,131 total
> > >     mistral-jerome:/tmp > cat test.ml
> > >     let foo () = () in
> > >     for i = 1 to 1000000 do foo () done
> > 
> > That is impressive. I don't know OCaml, but do you think there is an
> > optimization being done, since foo takes no args, returns no value, and
> > has an empty body?
> 
> This also isn't a fair comparison.

Here are the results with ten times more iterations:

    mistral-jerome:/tmp > ocamlc -o tst test.ml; time ./tst
    ./tst  1,30s user 0,00s system 101% cpu 1,279 total
    mistral-jerome:/tmp > time python -O test.py
    python -O test.py  22,59s user 0,00s system 98% cpu 23,032 total
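
(test.py itself is not reproduced in the thread; assuming it simply mirrors
test.ml, it would look roughly like this sketch, with the ten million
iterations used for the run above:

    # Hypothetical reconstruction of test.py: call an empty function in a
    # loop, mirroring test.ml.  xrange is used since this thread predates
    # Python 3.
    def foo():
        pass

    for i in xrange(10000000):
        foo()
)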

> python is compiling test.py to bytecode during the 'time' call.
> 
> ocaml is explicitly compiling _to a raw executable_ (and optimizing at
> three different levels) outside the call.  There's no startup overhead
> for the binary.

The startup overhead seems to be rather low.

Precompiled Python code:
    mistral-jerome:/tmp > python -O -c \
      "import sys,py_compile;py_compile.compile(sys.argv[1],sys.argv[2])" \
      test.py test.pyc
    mistral-jerome:/tmp > time python test.pyc
    python test.pyc  22,93s user 0,00s system 100% cpu 22,926 total

On-the-fly compilation of the OCaml code:
    mistral-jerome:/tmp > time ocaml test.ml
    ocaml test.ml  1,33s user 0,01s system 100% cpu 1,336 total

> Also, (in general) for this kind of test you really need to run each a
> lot of times and average.  There are too many things going on on a
> typical system that could tweak things.

I ran the tests several times, and always got similar results.
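
For anyone who wants to automate that, here is a minimal sketch (hypothetical,
not the exact commands I ran) using Python's timeit module to repeat the loop
and average the timings:

    import timeit

    # Hypothetical sketch: run the empty-call loop several times and
    # average the results to smooth out run-to-run system noise.
    t = timeit.Timer(stmt="for i in range(1000000): foo()",
                     setup="def foo(): pass")
    runs = t.repeat(repeat=5, number=1)
    print("average: %.3f s" % (sum(runs) / len(runs)))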

-- Jerome
