On Mon, Sep 27, 2010 at 21:58, Leonardo Santagada <[email protected]> wrote:
> On Mon, Sep 27, 2010 at 4:44 PM, Terrence Cole
> <[email protected]> wrote:
>> On Sun, 2010-09-26 at 23:57 -0700, Saravanan Shanmugham wrote:
>>> Well, I am happy to see that my interest in a general-purpose RPython
>>> is not as isolated as I was led to believe :-))
>>> Thx,
>>
>> What I wrote has apparently been widely misunderstood, so let me explain
>> what I mean in more detail.  What I want is _not_ RPython and it is
>> _not_ Shedskin.  What I want is not a compiler at all.  What I want is a
>> visual tool, for example, a plugin to an IDE.  This tool would perform
>> static analysis on a piece of python code.  Instead of generating code
>> with this information, it would mark up the python code in the text
>> display with colors, weights, etc in order to show properties from the
>> static analysis.  This would be something like semantic highlighting, as
>> opposed to syntax highlighting.
>>
>> I think it possible that this information would, if created and
>> presented in the correct way, represent the sort of optimizations that
>> pypy-c-jit -- a full python implementation, not a language subset --
>> would likely perform on the code if run.  Given this sort of feedback,
>> it would be much easier for a python coder to write code that works well
>> with the jit: for example, moving a declaration inside a loop to avoid
>> boxing, based on the information presented.
>>
>> Ideally, such a tool would perform instantaneous syntax highlighting
>> while editing and do full parsing and analysis in the background to
>> update the semantic highlighting as frequently as possible.  Obviously,
>> detailed static analysis will provide far more information than it would
>> be possible to display on the code at once, so I see this gui as having
>> several modes -- like predator vision -- that show different information
>> from the analysis.  Naturally, what those modes are will depend strongly
>> on the details of how pypy-c-jit works internally, what sort of
>> information can be sanely collected through static analysis, and,
>> naturally, user testing.
>>
>> I was somewhat baffled at first as to how what I wrote before was
>> interpreted as interest in a static python.  I think the disconnect here
>> is the assumption on many people's part that a static language will
>> always be faster than a dynamic one.  Given the existing tools that
>> provide basically no feedback from the compiler / interpreter / jitter,
>> this is inevitably true at the moment.  I foresee a future, however,
>> where better tools let us use the full power of a dynamic python AND let
>> us tighten up our code for speed to get the full advantages of jit
>> compilation as well.  I believe that in the end, this combination will
>> prove superior to any fully static compiler.
>
> This all looks interesting, and if you can plug that into emacs or
> textmate I would be really happy, but it is not what I want. I would
> settle for a tool that generates runtime information about what the
> jit is doing in a simple text format (json, yaml or something even
> simpler?) and a tool to visualize this, so you can easily optimize
> python programs to run on pypy. The biggest difference is that just
> collecting this info from the JIT appears to be much, much easier than
> somehow implementing a static processor for python code that does
> some form of analysis.

Have you looked at what the Azul Java VM supports for Java, in
particular RTPM (Real Time Performance Monitoring)?

Academic accounts are available, and from Cliff Click's presentations,
it seems to be a production-quality solution to this problem (for
Java), which could give interesting ideas. Azul's business is centered
exclusively on Java optimization at the JVM level, so while not
particularly famous, they are quite relevant.

See slide 28 of: www.azulsystems.com/events/vee_2009/2009_VEE.pdf for
some more details.
See also wiki.jvmlangsummit.com/pdf/36_Click_fastbcs.pdf, and its
account of JRuby's slowness (caused by unreliable performance
analysis tools).

Given that a JIT can beat static compilation only through forms of
profile-directed optimization, I also believe that the interesting
information should be obtained through logs from the JIT. A static
analyser can't do better than a static compiler - at least not
reliably.

_However_, static semantic highlighting might still be interesting:
while it does not help in understanding profile-directed optimizations
done by the JIT, it might help in understanding the consequences of
the execution model of the language itself, where that model has a
weird impact on performance.
E.g., for CPython, it might be very useful simply to highlight usages
of global variables, which require a dict lookup, as "bad", especially
in tight loops. OTOH, that kind of optimization should be done by a
JIT like PyPy, not by the programmer.
I believe that CALL_LIKELY_BUILTIN and hidden classes already allow
PyPy to fix the problem without changing the source code.
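As a concrete CPython illustration (independent of PyPy's JIT), here is
the lookup cost in question, together with the local-binding trick that
a semantic highlighter could suggest as the hand optimization: in
CPython, a global or builtin name is resolved through dict lookups on
every loop iteration, while a local name is a fast array access.

```python
import timeit

def use_global(n=100000):
    total = 0
    for i in range(n):
        total += abs(i)      # 'abs' resolved via globals then builtins, every iteration
    return total

def use_local(n=100000, _abs=abs):  # bind the builtin to a local name once
    total = 0
    for i in range(n):
        total += _abs(i)     # local name: a fast array lookup
    return total

# Both versions compute the same result; only the name lookups differ.
assert use_global(1000) == use_local(1000)

g = timeit.timeit(lambda: use_global(), number=20)
l = timeit.timeit(lambda: use_local(), number=20)
print("global lookup: %.3fs  local binding: %.3fs" % (g, l))
```

On CPython the local-binding version is typically measurably faster in
tight loops; on a JIT like PyPy the difference should mostly disappear,
which is exactly why such highlighting is about the execution model,
not about the JIT.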

The question then is: which kinds of constructs are unexpectedly slow
in Python, even with a good JIT?

Best regards
-- 
Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/
_______________________________________________
[email protected]
http://codespeak.net/mailman/listinfo/pypy-dev
