Hi Samuele,
So LLVM comes under focus again; we should really give it a serious try at some point.
There are clearly two points of view that we can take on RPython. The first is that RPython is a kind of nice syntax over a C- or Java-like language (in this view integers are primitive, and type inference is used to fill in the missing type declarations). The alternative is to see RPython as semantically close to Python, with all variables containing objects, but with better performance obtained via optimization. The current annotation-and-Pyrex-or-Lisp approach follows the first view, but I feel like the second can give better results. Well, I guess you guessed :-)
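To make the first view concrete, here is a toy sketch of forward type inference over a list of operations (purely illustrative; the names and representation are invented and nothing like the real annotation code):

```python
# Toy forward type inference in the spirit of view 1: propagate known
# types through a sequence of operations, so missing declarations can
# be filled in.  (Hypothetical example, not actual PyPy code.)

def infer_types(ops, input_types):
    """ops is a list of (result_var, op_name, arg_vars) tuples."""
    types = dict(input_types)
    for result, op, args in ops:
        arg_types = [types[a] for a in args]
        if op == "add":
            # int + int stays int; anything else generalizes to "object"
            types[result] = "int" if arg_types == ["int", "int"] else "object"
        elif op == "lt":
            types[result] = "bool"
        else:
            types[result] = "object"
    return types

# roughly: def f(x): y = x + x; b = y < x   -- with x known to be an int
ops = [("y", "add", ["x", "x"]), ("b", "lt", ["y", "x"])]
print(infer_types(ops, {"x": "int"}))
# -> {'x': 'int', 'y': 'int', 'b': 'bool'}
```

In view 2 the same program would keep everything as objects and rely on later optimization instead of on these inferred declarations.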
I see, I just think that maybe we should try to get as far as possible with what we have constructed so far, for example even ignoring structure inlining for a first approximation. In Java, Lisp etc. it is not that relevant a problem, because Java arrays, for example, carry a length anyway and are heap-allocated.
One issue with approach 2 is that we would need to rewrite a fair chunk of Python's semantics themselves in the compiler that way.
I'm thinking that as long as we depend on/use CPython reference counting, ownership issues will crop up, and they will likely not mix easily with trying to be clever with structures.
In general I think that both 1 and 2 are likely worth pursuing, because of differences in target languages and in the characteristics of the produced code.
OTOH I'm not sure whether going at it full force with 2 is the best way to get the first prototype interpreter running as an extension in CPython, especially considering reference counting.
But you worked more on that, so you can best judge what is necessary.
I now tend to think that the "structure-inlining" problem is best attacked at a more general level, independently of RPython. It's quite possible that it could be done with an adaptation of the techniques that we already have in the annotation subdirectory. It would be simpler because we have fewer, more elementary types. It would be harder because we'd have to focus on the basic problem of tracking references more precisely. Trying to sketch the basics by putting together some equivalents of the SomeXxx classes and factories, it looks like it works... (but some more thinking is needed)
Yes, that is what I was thinking about: you need to track "types" for things like an object/structure coming from a specific creation point, or coming from reading a specific field, and track their propagation.
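A minimal sketch of what tracking creation points could look like, in the spirit of the SomeXxx annotation classes (the class and method names here are invented for illustration, not the real ones):

```python
# Hypothetical annotation tracking which allocation sites a structure
# value may come from.  Inlining a structure into its parent is only
# safe if every value reaching a given point has a single known origin.

class SomeStruct:
    """A value known to be a structure from some set of creation points."""
    def __init__(self, creation_points):
        self.creation_points = frozenset(creation_points)

    def merge(self, other):
        # Joining two annotations (e.g. at a control-flow merge) unions
        # their possible origins.
        return SomeStruct(self.creation_points | other.creation_points)

    def can_inline(self):
        # More than one possible creation point means we can no longer
        # attribute the structure to a single allocation to inline.
        return len(self.creation_points) == 1

a = SomeStruct({"alloc at line 10"})
b = SomeStruct({"alloc at line 42"})
print(a.can_inline())            # True: single origin
print(a.merge(b).can_inline())   # False: two possible origins
```

Reading a field would similarly introduce a fresh tracked origin, and propagation would run this merge over the flow graph until a fixpoint.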
Re function pointers: in Java we'd probably have to group functions into families (two functions are in the same family as soon as there is a single variable that could point to either one), and then replace function pointers by integers and use switches... Lots of one-method classes with a common interface look like a waste... But the integer trick seems independent of the optimization techniques. Did you have better tricks in mind that would require more care?
Jython right now uses the switch trick in some cases, but I suspect it is not friendly to JIT inlining, so a lot of inner classes may be preferable.
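For concreteness, here is the "function family" trick in toy form (sketched in Python; a real backend would emit the equivalent switch in Java or C, and the function names here are made up):

```python
# Toy version of replacing function pointers by small integers within a
# family, then dispatching through a switch.  (Illustrative only.)

def double(x):
    return 2 * x

def square(x):
    return x * x

# All functions that a given variable could point to form one family,
# numbered 0..n-1.
DOUBLE, SQUARE = 0, 1

def call_family(index, x):
    # Stands in for the generated switch statement in the target language.
    if index == DOUBLE:
        return double(x)
    elif index == SQUARE:
        return square(x)
    raise ValueError("unknown function index in family")

print(call_family(DOUBLE, 5))  # 10
print(call_family(SQUARE, 5))  # 25
```

The alternative being weighed above would instead emit one small class per function, all implementing a common interface, and let the JVM's JIT devirtualize and inline the calls.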
_______________________________________________
http://codespeak.net/mailman/listinfo/pypy-dev
