Aurélien Campéas <[EMAIL PROTECTED]> writes:

> Michael Hudson a écrit :
>> RPython is a subset of python with the main constraint being able to
>> do some level of static type analysis (for full python, the amount of
>> static analysis you can do is really very small; you can read Brett
>> Cannon's thesis about this:
>> http://www.ocf.berkeley.edu/~bac/thesis.pdf).
>> So, when translating, the code that implements the interpreter
>> (roughly interpreter/*.py, objspace/std/*.py) is imported and a flow
>> graph built from it.  This is then annotated (code in annotator/*.py),
>> a fairly hairy process.  This (for the C and LLVM backends, at least)
>> is then turned into a graph containing low level operations (like
>> "dereference this pointer").
>> Python is just the language all of the above happens to be
>> implemented in (and also the interpreter/*.py code is involved in
>> making a flow graph which includes itself, but this isn't that
>> important -- just confusing :).
>
> So the same graph can exist in three states: raw python, annotated
> python (possible whenever the raw python matches rpython), low-level
> stuff ?

Yes.  Though I don't know if you can even form the flow graph of
unrestricted Python (not saying you can't, just it wasn't a design
goal).

> Then, to translate unrestricted python, I have to work on the first
> pass/state of the graph.

Yes.  But believe us on this one: ahead-of-time analysis on
unrestricted python is not going to lead to significant speedups.
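To make the inference problem concrete, here's a small illustrative sketch (not actual PyPy code) of the difference an annotator faces; the function names are invented for the example:

```python
# A hypothetical illustration of why RPython is analysable: every
# variable can be assigned one static type.

def rpython_ok(n):
    # 'n' and 'total' are only ever ints, so an annotator can give
    # each variable a single static type and emit efficient code.
    total = 0
    for i in range(n):
        total += i
    return total

def full_python(flag):
    # Here 'x' is an int on one path and a str on the other; a static
    # annotator can only conclude "int or str", and inference degrades
    # to generic object handling.
    if flag:
        x = 1
    else:
        x = "one"
    return x

assert rpython_ok(5) == 10
assert full_python(True) == 1
```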

>>>>Thing is, I don't know how feasible this is.  It's pretty hard,
>>>>without some kind of type inference, to translate, say this Python:
>>>>    a + b
>>>>into anything significantly more efficient than this Common Lisp:
>>>>    (py:add a b)
>>>
>>>The mere fact that it will be compiled will make it more efficient I
>>>guess. I mean, not on CLISP, but with a real lisp compiler.
>> Not much.  The "interpreter overhead" for Python is usually
>> estimated at about 20%, though it obviously depends to some extent
>> on what code you're running.
>
> hmmmm, 20% related to what ? Is this an observed quantity from
> benchmarking, say, C-translated rpython vs CPython ?

Well, it's very much a guesstimate, but what I meant to compare was
interpreted Python against compiled C code that executes the same
operations in the same order, just without the overhead of dispatching
the opcodes (this is more or less what the now more-or-less dead
Python2C project did.  There's a reason it's dead).
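A toy sketch of what "removing dispatch overhead" means (the opcode names and both functions are invented for illustration, not anything Python2C actually emitted):

```python
# A toy bytecode loop versus the same work written out straight-line,
# roughly the transformation Python2C performed: same operations, same
# order, only the opcode fetch-and-dispatch removed.

def interpreted(ops, x):
    # Each iteration pays for fetching and dispatching an opcode
    # before doing any real work.
    for op in ops:
        if op == "INC":
            x = x + 1
        elif op == "DOUBLE":
            x = x * 2
    return x

def compiled(x):
    # The same two operations with the dispatch loop compiled away.
    x = x + 1
    x = x * 2
    return x

assert interpreted(["INC", "DOUBLE"], 3) == 8
assert compiled(3) == 8
```

Each "real" operation still does full generic Python work, which is why the win is only on the order of that ~20% dispatch cost.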

> Also, "on average" can, as you say, cover a lot of differences
> depending on what we have to do.

Indeed!  If your computation is bound, say, by the speed of
multiplying really large integers, the interpreter overhead is roughly
zero.  If you're manipulating small ints, then it's probably quite
large.
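You can see the effect directly: the per-opcode cost is fixed, so the more real work one operation does, the smaller the overhead is as a fraction (a rough sketch; exact ratios vary by machine):

```python
# One "multiply" on a huge integer does vastly more arithmetic per
# dispatched opcode than one on a small int, so the fixed interpreter
# cost shrinks relative to it.

import timeit

small = 7
big = 10 ** 10000  # an integer of roughly 33000 bits

t_small = timeit.timeit(lambda: small * small, number=1000)
t_big = timeit.timeit(lambda: big * big, number=1000)

# The big-int multiply spends far more time in actual arithmetic, so
# any per-operation dispatch cost is a much smaller fraction of it.
assert t_big > t_small
```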

>>>>And making type inference possible is what RPython is all about.
>>>
>>>Sure, but then, it is a restricted subset of Python, and I like python
>>>completely unrestricted ;)
>> Well, so do we all, but then you can't have type inference.
>> There is no simple answer to this.
>
> Has the possibility to extend python with optional type annotations
> been studied ? (I guess the pypy-dev archives could answer this)

Well, there's been a fair amount of hot air emitted about it, but no
actual work that I know about.
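For what it's worth, the idea could be prototyped today without changing the language at all; here's a purely hypothetical sketch (no such facility exists in Python or PyPy, and `types`/`add_ints` are invented names):

```python
# Hypothetical "optional type annotations" grafted onto Python with a
# decorator: the declarations are ignored at runtime, but an annotator
# could read them instead of having to infer everything.

def types(**sig):
    def decorate(fn):
        fn.type_hints = sig  # stashed where an analyser could find it
        return fn
    return decorate

@types(a=int, b=int, returns=int)
def add_ints(a, b):
    return a + b

# The function behaves exactly as before; only metadata was added.
assert add_ints(2, 3) == 5
assert add_ints.type_hints["a"] is int
```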

>>>I am not sure I would use CLOS at all, in fact (at least for a first
>>>attempt at producing a lisp backend).
>>  Fair enough.
>> 
>
> ... but the more I think about it, it looks like CLOS might help with
> these overloaded operators ...

Maybe :) Please don't let me discourage you too much; I'd be genuinely
interested in what you find.  But please don't expect miracles.
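To illustrate what CLOS would buy here, this is the generic-function idea transcribed into Python; everything below (`defmethod`, `add`, the registry) is a hypothetical sketch, not any backend's API, and it's much simplified compared to real CLOS method selection:

```python
# Multiple dispatch a la CLOS generic functions: pick the method based
# on the types of *both* arguments, which is exactly the shape of
# Python's overloaded binary operators.

_methods = {}

def defmethod(*arg_types):
    def register(fn):
        _methods[arg_types] = fn
        return fn
    return register

def add(a, b):
    # Walk both MROs so subclasses inherit methods -- a crude stand-in
    # for CLOS's class-precedence-based method selection.
    for ta in type(a).__mro__:
        for tb in type(b).__mro__:
            fn = _methods.get((ta, tb))
            if fn is not None:
                return fn(a, b)
    raise TypeError("no applicable method for add")

@defmethod(int, int)
def _add_ints(a, b):
    return a + b

@defmethod(str, str)
def _add_strs(a, b):
    return a + b

assert add(2, 3) == 5
assert add("py", "py") == "pypy"
```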

Cheers,
mwh

-- 
  <MFen> want to write my requirements for me?
  <radix> Sure!
  <radix> "show a dancing monkey in the about box"
                                                -- from Twisted.Quotes

_______________________________________________
[email protected]
http://codespeak.net/mailman/listinfo/pypy-dev
