Michael Hudson wrote:
Aurélien Campéas <[EMAIL PROTECTED]> writes:
So the same graph can exist in three states: raw Python, annotated
Python (possible whenever the raw Python matches RPython), low-level
stuff?


Yes.  Though I don't know if you can even form the flow graph of
unrestricted Python (not saying you can't, just it wasn't a design
goal).


Still confused I am, then.
I thought you could annotate the graph because you programmed in RPython, which enables type inference and thereby allows producing a low-level, C-friendly version of the graph ...

ok, it's not that important right now for me to understand it all anyway


Then, to translate unrestricted python, I have to work on the first
pass/state of the graph.


Yes.  But believe us on this one: ahead-of-time analysis on
unrestricted python is not going to lead to significant speedups.

But an unrestricted Python-to-Lisp translator may have other benefits than just getting code natively compiled for small performance gains. The ability to quickly develop arbitrary extensions to Python could be one beneficial side-effect, for instance.

Thing is, I don't know how feasible this is.  It's pretty hard,
without some kind of type inference, to translate, say, this Python:
  a + b
into anything significantly more efficient than this Common Lisp:
  (py:add a b)
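For concreteness, here is a hedged Python sketch of the kind of runtime dispatch a generic `py:add` has to perform when no static type information is available (the name and details are illustrative, not PyPy's or CPython's actual code):

```python
# Hypothetical sketch (not PyPy's actual code): the runtime dispatch
# a fully generic add must perform.  It roughly mirrors what
# CPython's BINARY_ADD does via __add__/__radd__.
def py_add(a, b):
    add = getattr(type(a), "__add__", None)
    if add is not None:
        result = add(a, b)
        if result is not NotImplemented:
            return result
    radd = getattr(type(b), "__radd__", None)
    if radd is not None:
        result = radd(b, a)
        if result is not NotImplemented:
            return result
    raise TypeError("unsupported operand types for +")
```

The point being: however you compile the *call* to this dispatcher, the dispatch itself still happens at runtime, which is why `(py:add a b)` gains so little from native compilation alone.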

The mere fact that it will be compiled will make it more efficient, I
guess. I mean, not on CLISP, but with a real Lisp compiler.

Not much.  The "interpreter overhead" for Python is usually estimated
at about 20%, though it obviously depends to some extent on what code
you're running.

Hmmmm, 20% relative to what? Is this an observed quantity from
benchmarking, say, C-translated RPython against CPython?


Well, it's very much a guesstimate, but what I was meaning to compare
was interpreted Python against compiled C code that executes the
same operations in the same order but without the overhead of
dispatching the opcodes (this is more or less what the now
more-or-less dead Python2C project did.  There's a reason it's
more-or-less dead).
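To illustrate what "dispatching the opcodes" means here, a toy sketch (not CPython's actual implementation): the per-iteration branching below is the interpreter overhead in question, and a Python2C-style translator removes it by emitting the handler code for each opcode directly, in order.

```python
# Toy bytecode interpreter (illustrative only).  The loop-and-branch
# dispatch is the "interpreter overhead" being estimated at ~20%;
# a Python2C-style translator would emit the handler bodies as
# straight-line C code instead.
def interpret(code):
    stack = []
    for op, arg in code:
        if op == "LOAD_CONST":
            stack.append(arg)
        elif op == "BINARY_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError("unknown opcode: %s" % op)
    return stack[-1]
```

Note that even in the unrolled version, each `BINARY_ADD` still pays the full generic-dispatch cost of the add itself, which is why the savings top out around that modest figure.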


Also, "on average" can, as you say, cover a lot of differences
depending on what we have to do.


Indeed!  If your computation is bound, say, by the speed of
multiplying really large integers, the interpreter overhead is roughly
zero.  If you're manipulating small ints, then it's probably quite
large.

Built-in ops like bignum arithmetic, or whatever else is implemented in C, are obviously fast. OTOH, I wonder whether some implementation choices in current CPython, and part of its slowness, came from balancing simplicity of the code against speed (Stackless could be an example of a faster implementation, couldn't it?). I remember having read about that in some distant past.



... but the more I think about it, it looks like CLOS might help with
these overloaded operators ...
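What the CLOS idea amounts to can be sketched in Python with a toy registry dispatching on the runtime types of *both* operands, roughly like `(defmethod py:add ((a integer) (b integer)) ...)` would in Common Lisp (real CLOS generic functions also handle inheritance and method combination, which this sketch ignores):

```python
# Toy sketch of CLOS-style multiple dispatch (illustrative only):
# implementations are selected on the types of both operands.
_methods = {}

def defmethod(ta, tb):
    """Register an implementation for the operand-type pair (ta, tb)."""
    def register(fn):
        _methods[(ta, tb)] = fn
        return fn
    return register

def generic_add(a, b):
    fn = _methods.get((type(a), type(b)))
    if fn is None:
        raise TypeError("no applicable method for generic_add")
    return fn(a, b)

@defmethod(int, int)
def _add_ints(a, b):
    return a + b

@defmethod(str, str)
def _add_strs(a, b):
    return a + b
```

The attraction is that a Lisp compiler can sometimes optimize generic-function dispatch, though the lookup itself is still runtime work.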


Maybe :) Please don't let me discourage you too much; I'd be genuinely
interested in what you find.  But please don't expect miracles.


Thanks, I am not discouraged and don't expect miracles :)
What pypy brings me right now is a nice way to enhance my {python,lisp}-fu at the same time. I am surely quite far from discovering anything not already known, but should that happen, I'd keep the pypy people informed.


Cheers,
Aurélien.

_______________________________________________
[email protected]
http://codespeak.net/mailman/listinfo/pypy-dev
