Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-18 Thread Collin Winter
On Sat, Feb 13, 2010 at 12:12 AM, Maciej Fijalkowski fij...@gmail.com wrote:
 I like this wording far more. It's at the very least far more precise.
 Those examples are fair enough (except the fact that PyPy is not 32bit
 x86 only, the JIT is).
[snip]
 "slower than US on some workloads" is true, while not really telling
 much to a potential reader. For any X and Y implementing the same
 language, "X is faster than Y on some workloads" is usually true.

 To be precise you would need to include the above table in the PEP,
 which is probably a bit too much, given that PEP is not about PyPy at
 all. I'm fine with any wording that is at least correct.

I've updated the language:
http://codereview.appspot.com/186247/diff2/9005:11001/11002. Thanks
for the clarifications.

Collin Winter
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-12 Thread Collin Winter
Hey Maciej,

On Thu, Feb 11, 2010 at 6:39 AM, Maciej Fijalkowski fij...@gmail.com wrote:
 Snippet from:

 http://codereview.appspot.com/186247/diff2/5014:8003/7002

 *PyPy*: PyPy [#pypy]_ has good performance on numerical code, but is
 slower than Unladen Swallow on non-numerical workloads. PyPy only
 supports 32-bit x86 code generation. It has poor support for CPython
 extension modules, making migration for large applications
 prohibitively expensive.

 That part at the very least has some sort of personal opinion
 ("prohibitively"),

Of course; difficulty is always in the eye of the person doing the
work. Simply put, PyPy is not a drop-in replacement for CPython: there
is no embedding API, much less the same one exported by CPython;
important libraries, such as MySQLdb and pycrypto, do not build
against PyPy; PyPy is 32-bit x86 only.

All of these problems can be overcome with enough time/effort/money,
but I think you'd agree that, if all I'm trying to do is speed up my
application, adding a new x86-64 backend or implementing support for
CPython extension modules is certainly north of "prohibitively
expensive". I stand by that wording. I'm willing to enumerate all of
PyPy's deficiencies in this regard in the PEP, rather than the current
vaguer wording, if you'd prefer.

 while the other part is not completely true: "slower
 than US on non-numerical workloads". Fancy providing a proof for that?
 I'm well aware that there are benchmarks on which PyPy is slower than
 CPython or US, however, I would like a bit more weighted opinion in
 the PEP.

Based on the benchmarks you're running at
http://codespeak.net:8099/plotsummary.html, PyPy is slower than
CPython on many non-numerical workloads, which Unladen Swallow is
faster than CPython at. Looking at the benchmarks there at which PyPy
is faster than CPython, they are primarily numerical; this was the
basis for the wording in the PEP.

My own recent benchmarking of PyPy and Unladen Swallow (both trunk;
PyPy wouldn't run some benchmarks):

| Benchmark| PyPy  | Unladen | Change  |
+==+===+=+=+
| ai   | 0.61  | 0.51|  1.1921x faster |
| django   | 0.68  | 0.8 |  1.1898x slower |
| float| 0.03  | 0.07|  2.7108x slower |
| html5lib | 20.04 | 16.42   |  1.2201x faster |
| pickle   | 17.7  | 1.09| 16.2465x faster |
| rietveld | 1.09  | 0.59|  1.8597x faster |
| slowpickle   | 0.43  | 0.56|  1.2956x slower |
| slowspitfire | 2.5   | 0.63|  3.9853x faster |
| slowunpickle | 0.26  | 0.27|  1.0585x slower |
| unpickle | 28.45 | 0.78| 36.6427x faster |
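
For reference, the "Change" column above is just the quotient of the two timings, expressed from Unladen Swallow's perspective. A minimal sketch (a hypothetical helper, not the actual perf.py harness; the published ratios were computed from unrounded timings, so the rounded figures in the table won't reproduce them exactly):

```python
def change(pypy_time, unladen_time):
    """Format two benchmark timings (seconds) as an 'Nx faster/slower'
    string from Unladen Swallow's point of view. Hypothetical helper,
    not part of the actual perf.py benchmark harness."""
    if unladen_time < pypy_time:
        return "%.4fx faster" % (pypy_time / unladen_time)
    return "%.4fx slower" % (unladen_time / pypy_time)

# e.g. the pickle row: PyPy 17.7s vs Unladen Swallow 1.09s
print(change(17.7, 1.09))  # 16.2385x faster (table shows 16.2465x, from unrounded timings)
```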

I'm happy to change the wording to "slower than US on some workloads".

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-12 Thread Nick Coghlan
Collin Winter wrote:
 Hey Maciej,
 
 On Thu, Feb 11, 2010 at 6:39 AM, Maciej Fijalkowski fij...@gmail.com wrote:
 Snippet from:

 http://codereview.appspot.com/186247/diff2/5014:8003/7002

 *PyPy*: PyPy [#pypy]_ has good performance on numerical code, but is
 slower than Unladen Swallow on non-numerical workloads. PyPy only
 supports 32-bit x86 code generation. It has poor support for CPython
 extension modules, making migration for large applications
 prohibitively expensive.

 That part at the very least has some sort of personal opinion
 ("prohibitively"),
 
 Of course; difficulty is always in the eye of the person doing the
 work. Simply put, PyPy is not a drop-in replacement for CPython: there
 is no embedding API, much less the same one exported by CPython;
 important libraries, such as MySQLdb and pycrypto, do not build
 against PyPy; PyPy is 32-bit x86 only.

I think pointing out at least these two restrictions explicitly would be
helpful (since they put some objective bounds on the meaning of
"prohibitive" in this context).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-11 Thread Maciej Fijalkowski
Snippet from:

http://codereview.appspot.com/186247/diff2/5014:8003/7002

*PyPy*: PyPy [#pypy]_ has good performance on numerical code, but is
slower than Unladen Swallow on non-numerical workloads. PyPy only
supports 32-bit x86 code generation. It has poor support for CPython
extension modules, making migration for large applications
prohibitively expensive.

That part at the very least has some sort of personal opinion
("prohibitively"), while the other part is not completely true: "slower
than US on non-numerical workloads". Fancy providing a proof for that?
I'm well aware that there are benchmarks on which PyPy is slower than
CPython or US, however, I would like a bit more weighted opinion in
the PEP.

Cheers,
fijal


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-10 Thread Brett Cannon
On Tue, Feb 9, 2010 at 14:47, Collin Winter collinwin...@google.com wrote:
 To follow up on some of the open issues:

 On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter collinwin...@google.com 
 wrote:
 [snip]
 Open Issues
 ===

 - *Code review policy for the ``py3k-jit`` branch.* How does the CPython
  community want us to proceed with respect to checkins on the ``py3k-jit``
  branch? Pre-commit reviews? Post-commit reviews?

  Unladen Swallow has enforced pre-commit reviews in our trunk, but we realize
  this may lead to long review/checkin cycles in a purely-volunteer
  organization. We would like a non-Google-affiliated member of the CPython
  development team to review our work for correctness and compatibility,
  but we realize this may not be possible for every commit.

 The feedback we've gotten so far is that at most, only larger, more
 critical commits should be sent for review, while most commits can
 just go into the branch. Is that broadly agreeable to python-dev?

 - *How to link LLVM.* Should we change LLVM to better support shared linking,
  and then use shared linking to link the parts of it we need into CPython?

 The consensus has been that we should link shared against LLVM.
 Jeffrey Yasskin is now working on this in upstream LLVM. We are
 tracking this at
 http://code.google.com/p/unladen-swallow/issues/detail?id=130 and
 http://llvm.org/PR3201.

 - *Prioritization of remaining issues.* We would like input from the CPython
  development team on how to prioritize the remaining issues in the Unladen
  Swallow codebase. Some issues like memory usage are obviously critical
  before merger with ``py3k``, but others may fall into a "nice to have"
  category that could be kept for resolution in a future CPython 3.x release.

 The big-ticket items here are what we expected: reducing memory usage
 and startup time. We also need to improve profiling options, both for
 oProfile and cProfile.

 - *Create a C++ style guide.* Should PEP 7 be extended to include C++, or
  should a separate C++ style PEP be created? Unladen Swallow maintains its own
  style guide [#us-styleguide]_, which may serve as a starting point; the
  Unladen Swallow style guide is based on both LLVM's [#llvm-styleguide]_ and
  Google's [#google-styleguide]_ C++ style guides.

 Any thoughts on a CPython C++ style guide? My personal preference
 would be to extend PEP 7 to cover C++ by taking elements from
 http://code.google.com/p/unladen-swallow/wiki/StyleGuide and the LLVM
 and Google style guides (which is how we've been developing Unladen
 Swallow). If that's broadly agreeable, Jeffrey and I will work on a
 patch to PEP 7.


I have found the Google C++ style guide good so I am fine with taking
ideas from that and adding them to PEP 7.

-Brett



 Thanks,
 Collin Winter



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-09 Thread Collin Winter
To follow up on some of the open issues:

On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter collinwin...@google.com wrote:
[snip]
 Open Issues
 ===

 - *Code review policy for the ``py3k-jit`` branch.* How does the CPython
  community want us to proceed with respect to checkins on the ``py3k-jit``
  branch? Pre-commit reviews? Post-commit reviews?

  Unladen Swallow has enforced pre-commit reviews in our trunk, but we realize
  this may lead to long review/checkin cycles in a purely-volunteer
  organization. We would like a non-Google-affiliated member of the CPython
  development team to review our work for correctness and compatibility, but we
  realize this may not be possible for every commit.

The feedback we've gotten so far is that at most, only larger, more
critical commits should be sent for review, while most commits can
just go into the branch. Is that broadly agreeable to python-dev?

 - *How to link LLVM.* Should we change LLVM to better support shared linking,
  and then use shared linking to link the parts of it we need into CPython?

The consensus has been that we should link shared against LLVM.
Jeffrey Yasskin is now working on this in upstream LLVM. We are
tracking this at
http://code.google.com/p/unladen-swallow/issues/detail?id=130 and
http://llvm.org/PR3201.

 - *Prioritization of remaining issues.* We would like input from the CPython
  development team on how to prioritize the remaining issues in the Unladen
  Swallow codebase. Some issues like memory usage are obviously critical before
  merger with ``py3k``, but others may fall into a "nice to have" category that
  could be kept for resolution in a future CPython 3.x release.

The big-ticket items here are what we expected: reducing memory usage
and startup time. We also need to improve profiling options, both for
oProfile and cProfile.

 - *Create a C++ style guide.* Should PEP 7 be extended to include C++, or
  should a separate C++ style PEP be created? Unladen Swallow maintains its own
  style guide [#us-styleguide]_, which may serve as a starting point; the
  Unladen Swallow style guide is based on both LLVM's [#llvm-styleguide]_ and
  Google's [#google-styleguide]_ C++ style guides.

Any thoughts on a CPython C++ style guide? My personal preference
would be to extend PEP 7 to cover C++ by taking elements from
http://code.google.com/p/unladen-swallow/wiki/StyleGuide and the LLVM
and Google style guides (which is how we've been developing Unladen
Swallow). If that's broadly agreeable, Jeffrey and I will work on a
patch to PEP 7.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-08 Thread Collin Winter
Hi Craig,

On Tue, Feb 2, 2010 at 4:42 PM, Craig Citro craigci...@gmail.com wrote:
 Done. The diff is at
 http://codereview.appspot.com/186247/diff2/5014:8003/7002. I listed
 Cython, Shedskin and a bunch of other alternatives to pure CPython.
 Some of that information is based on conversations I've had with the
 respective developers, and I'd appreciate corrections if I'm out of
 date.


 Well, it's a minor nit, but it might be more fair to say something
 like "Cython provides the biggest improvements once type annotations
 are added to the code". After all, Cython is more than happy to take
 arbitrary Python code as input -- it's just much more effective when
 it knows something about types. The code to make Cython handle
 closures has just been merged ... hopefully support for the full
 Python language isn't so far off. (Let me know if you want me to
 actually make a comment on Rietveld ...)

Indeed, you're quite right. I've corrected the description here:
http://codereview.appspot.com/186247/diff2/7005:9001/10001

 Now what's more interesting is whether or not U-S and Cython could
 play off one another -- take a Python program, run it with some
 generic input data under Unladen and record info about which
 functions are hot, and what types they tend to take, then let
 Cython/gcc -O3 have a go at these, and lather, rinse, repeat ... JIT
 compilation and static compilation obviously serve different purposes,
 but I'm curious if there aren't other interesting ways to take
 advantage of both.

Definitely! Someone approached me about possibly reusing the profile
data for a feedback-enhanced code coverage tool, which has interesting
potential, too. I've added a note about this under the Future Work
section: http://codereview.appspot.com/186247/diff2/9001:10002/9003

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-03 Thread M.-A. Lemburg
Reid Kleckner wrote:
 On Tue, Feb 2, 2010 at 8:57 PM, Collin Winter collinwin...@google.com wrote:
 Wouldn't it be possible to have the compiler approach work
 in three phases in order to reduce the memory footprint and
 startup time hit, ie.

  1. run an instrumented Python interpreter to collect all
the needed compiler information; write this information into
a .pys file (Python stats)

  2. create compiled versions of the code for various often
used code paths and type combinations by reading the
.pys file and generating an .so file as regular
Python extension module

  3. run an uninstrumented Python interpreter and let it
use the .so files instead of the .py ones

 In production, you'd then only use step 3 and avoid the
 overhead of steps 1 and 2.

 That is certainly a possibility if we are unable to reduce memory
 usage to a satisfactory level. I've added a "Contingency Plans"
 section to the PEP, including this option:
 http://codereview.appspot.com/186247/diff2/8004:7005/8006.
 
 This would be another good research problem for someone to take and
 run.  The trick is that you would need to add some kind of linking
 step to loading the .so.  Right now, we just collect PyObject*'s, and
 don't care whether they're statically allocated or user-defined
 objects.  If you wanted to pursue offline feedback directed
 compilation, you would need to write something that basically can map
 from the pointers in the feedback data to something like a Python
 dotted name import path, and then when you load the application, look
 up those names and rewrite the new pointers into the generated machine
 code.  It sounds a lot like writing a dynamic loader.  :)

You lost me there :-)

I am not familiar with how U-S actually implements the compilation
step and was thinking of it working at the functions/methods level
and based on input/output parameter type information.

Most Python functions and methods have unique names (when
combined with the module and class name), so these could
be used for the referencing and feedback writing.

The only cases where this doesn't work too well is dynamic
programming of the sort done in namedtuples: where you
dynamically create a class and then instantiate it.

Type information for basic types and their subclasses can
be had dynamically (there's also a basic type bitmap for
faster lookup) or in a less robust way by name.

For the feedback file the names should be a good reference.
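
The dotted-name keying suggested above can be sketched in a few lines. This is illustrative only: ``__qualname__`` did not exist in the Python of 2010, and neither Unladen Swallow nor CPython keys feedback this way.

```python
import json  # used only to demo the helper on a stdlib function

def feedback_key(func):
    """Build a stable 'module.Class.method' style reference for a function
    or method, usable as a key in an on-disk feedback file. Illustrative
    helper only; falls back to __name__ where __qualname__ is missing."""
    qualname = getattr(func, "__qualname__", func.__name__)
    return "%s.%s" % (func.__module__, qualname)

class Point:
    def norm(self):
        return 0.0

print(feedback_key(json.dumps))   # 'json.dumps'
print(feedback_key(Point.norm))   # e.g. '__main__.Point.norm'
```

As noted above, dynamically created classes (namedtuples and the like) are exactly where such name-based references break down.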

 It sounds like a huge amount of work, and we haven't approached it.
 On the other hand, it sounds like it might be rewarding.

Indeed. Perhaps this could be further investigated in a SoC
project ?!

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 03 2010)
 Python/Zope Consulting and Support ...http://www.egenix.com/
 mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
 mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-03 Thread Reid Kleckner
On Wed, Feb 3, 2010 at 6:51 AM, M.-A. Lemburg m...@egenix.com wrote:
 You lost me there :-)

 I am not familiar with how U-S actually implements the compilation
 step and was thinking of it working at the functions/methods level
 and based on input/output parameter type information.

Yes, but it's more like: for every attribute lookup, we ask "what was
the type of the object we did the lookup on?"  So, we simply take a
reference to obj->ob_type and stuff it in our feedback record, which
is limited to just three pointers.  Then when we generate code, we may
emit a guard that compares obj->ob_type in the compiled lookup to the
pointer we recorded.  We also need to place a weak reference to the
code object on the type so that when the type is mutated or deleted,
we invalidate the code, since its assumptions are invalid.
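
The mechanism described above can be sketched in pure Python (the real implementation records C-level ob_type pointers and emits machine-code guards via LLVM; every name below is invented for illustration):

```python
FEEDBACK_LIMIT = 3  # the feedback record holds at most three pointers

class SiteFeedback:
    """Records the types seen at one attribute-lookup site (sketch only;
    the real record stores raw obj->ob_type pointers in C)."""
    def __init__(self):
        self.types_seen = []

    def record(self, obj):
        tp = type(obj)
        if tp not in self.types_seen and len(self.types_seen) < FEEDBACK_LIMIT:
            self.types_seen.append(tp)

    def is_monomorphic(self):
        # A single observed type lets the JIT specialize the site.
        return len(self.types_seen) == 1

def guarded_lookup(obj, expected_type, fast_path, slow_path):
    """The guard the JIT emits: if the runtime type matches the recorded
    one, take the specialized path; otherwise fall back (in the real
    system, bail back to the interpreter)."""
    if type(obj) is expected_type:
        return fast_path(obj)
    return slow_path(obj)
```

Invalidation on type mutation (the weak-reference part) is omitted here; it is what keeps the guard's assumption sound.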

 Most Python functions and methods have unique names (when
 combined with the module and class name), so these could
 be used for the referencing and feedback writing.

Right, so when building the .so to load, you would probably want to
take all the feedback data and find these dotted names for the
pointers in the feedback data. If you find any pointers that can't be
reliably identified, you could drop them from the feedback and flag that
site as polymorphic (i.e., don't optimize this site). Then you generate
machine code from the feedback and stuff it in a .so, with special
relocation information.

When you load the .so into a fresh Python process with a different
address space layout, you try to recover the pointers to the PyObjects
mentioned in the relocation information and patch up the machine code
with the new pointers, which is very similar to the job of a linker.
If you can't find the name or things don't look right, you just drop
that piece of native code.
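
The record-names-then-relocate scheme can be sketched at the Python level (purely illustrative: the real job would rewrite pointers embedded in generated machine code, not return Python objects):

```python
import importlib

def symbolize(feedback_types):
    """Serialize recorded type objects as dotted names, like the
    'relocation information' described above. Returns None when a type
    can't be round-tripped by name, i.e. the site should be dropped."""
    names = []
    for tp in feedback_types:
        mod = importlib.import_module(tp.__module__)
        if getattr(mod, tp.__name__, None) is not tp:
            return None  # can't identify reliably: leave site unoptimized
        names.append("%s.%s" % (tp.__module__, tp.__name__))
    return names

def relocate(names):
    """In the fresh process, resolve each dotted name back to the (new)
    object, much as a dynamic loader resolves symbols at load time."""
    resolved = []
    for name in names:
        modname, _, attr = name.rpartition(".")
        resolved.append(getattr(importlib.import_module(modname), attr))
    return resolved
```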

 The only cases where this doesn't work too well is dynamic
 programming of the sort done in namedtuples: where you
 dynamically create a class and then instantiate it.

It might actually work if the namedtuple is instantiated at module
scope before loading the .so.

 Type information for basic types and their subclasses can
 be had dynamically (there's also a basic type bitmap for
 faster lookup) or in a less robust way by name.

If I understand you correctly, you're thinking about looking the types
up by name or bitmap in the machine code.  I think it would be best to
just do the lookup once at load time and patch the native code.

 It sounds like a huge amount of work, and we haven't approached it.
 On the other hand, it sounds like it might be rewarding.

 Indeed. Perhaps this could be further investigated in a SoC
 project ?!

Or maybe a thesis.  I'm really walking out on a limb, and this idea is
quite hypothetical.  :)

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread M.-A. Lemburg
Collin Winter wrote:
 On Mon, Feb 1, 2010 at 11:17 AM, M.-A. Lemburg m...@egenix.com wrote:
 Collin Winter wrote:
 I think this idea underestimates a) how deeply the current CPython VM
 is intertwined with the rest of the implementation, and b) the nature
 of the changes required by these separate VMs. For example, Unladen
 Swallow adds fields to the C-level structs for dicts, code objects and
 frame objects; how would those changes be pluggable? Stackless
 requires so many modifications that it is effectively a fork; how
 would those changes be pluggable?

 They wouldn't be pluggable. Such changes would have to be made
 in a more general way in order to serve more than just one VM.
 
 I believe these VMs would have little overlap. I cannot imagine that
 Unladen Swallow's needs have much in common with Stackless's, or with
 those of a hypothetical register machine to replace the current stack
 machine.
 
 Let's consider that last example in more detail: a register machine
 would require completely different bytecode. This would require
 replacing the bytecode compiler, the peephole optimizer, and the
 bytecode eval loop. The frame object would need to be changed to hold
 the registers and a new blockstack design; the code object would have
 to potentially hold a new bytecode layout.
 
 I suppose making all this pluggable would be possible, but I don't see
 the point. This kind of experimentation is ideal for a branch: go off,
 test your idea, report your findings, merge back. Let the branch be
 long-lived, if need be. The Mercurial migration will make all this
 easier.
 
 Getting the design right would certainly require a major effort, but it
 would also reduce the need to have several branches of C-based
 Python implementations.
 
 If such a restrictive plugin-based scheme had been available when we
 began Unladen Swallow, I do not doubt that we would have ignored it
 entirely. I do not like the idea of artificially tying the hands of
 people trying to make CPython faster. I do not see any part of Unladen
 Swallow that would have been made easier by such a scheme. If
 anything, it would have made our project more difficult.

I don't think that it has to be restrictive - much to the contrary,
it would provide a consistent API to those CPython internals and
also clarify the separation between the various parts. Something
which currently does not exist in CPython.

Note that it may be easier for you (and others) to just take
CPython and patch it as necessary. However, this doesn't relieve
you from the needed maintenance - which, I presume, is one of the
reasons why you are suggesting to merge U-S back into CPython ;-)

The same problem exists for all other branches, such as e.g.
Stackless. Now, why should we merge in your particular branch
and make it harder for those other teams ?

Instead of having 2-3 teams maintain complete branches, it would
be more efficient to just have them take care of their particular
VM implementation. Furthermore, we wouldn't need to decide
which VM variant to merge into the core, since all of them
would be equally usable.

The idea is based on a big picture perspective and focuses on long
term benefits for more than just one team of VM implementers.

BTW: I also doubt that Mercurial will make any of this easier.
It makes creating branches easier for non-committers, but the
problem of having to maintain the branches remains.

-- 
Marc-Andre Lemburg
eGenix.com



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread Nick Coghlan
M.-A. Lemburg wrote:
 BTW: I also doubt that Mercurial will make any of this easier.
 It makes creating branches easier for non-committers, but the
 problem of having to maintain the branches remains.

It greatly simplifies the process of syncing the branch with the main
line of development so yes, it should help with branch maintenance
(svnmerge is a pale shadow of what a true DVCS can handle). That aspect
is one of the DVCS selling points that also applies to core development.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread Collin Winter
Hey Dirkjan,

[Circling back to this part of the thread]

On Thu, Jan 21, 2010 at 1:37 PM, Dirkjan Ochtman dirk...@ochtman.nl wrote:
 On Thu, Jan 21, 2010 at 21:14, Collin Winter collinwin...@google.com wrote:
[snip]
 My quick take on Cython and Shedskin is that they are
 useful-but-limited workarounds for CPython's historically-poor
 performance. Shedskin, for example, does not support the entire Python
 language or standard library
 (http://shedskin.googlecode.com/files/shedskin-tutorial-0.3.html).

 Perfect, now put something like this in the PEP, please. ;)

Done. The diff is at
http://codereview.appspot.com/186247/diff2/5014:8003/7002. I listed
Cython, Shedskin and a bunch of other alternatives to pure CPython.
Some of that information is based on conversations I've had with the
respective developers, and I'd appreciate corrections if I'm out of
date.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread Craig Citro
 Done. The diff is at
 http://codereview.appspot.com/186247/diff2/5014:8003/7002. I listed
 Cython, Shedskin and a bunch of other alternatives to pure CPython.
 Some of that information is based on conversations I've had with the
 respective developers, and I'd appreciate corrections if I'm out of
 date.


Well, it's a minor nit, but it might be more fair to say something
like "Cython provides the biggest improvements once type annotations
are added to the code". After all, Cython is more than happy to take
arbitrary Python code as input -- it's just much more effective when
it knows something about types. The code to make Cython handle
closures has just been merged ... hopefully support for the full
Python language isn't so far off. (Let me know if you want me to
actually make a comment on Rietveld ...)

Now what's more interesting is whether or not U-S and Cython could
play off one another -- take a Python program, run it with some
generic input data under Unladen and record info about which
functions are hot, and what types they tend to take, then let
Cython/gcc -O3 have a go at these, and lather, rinse, repeat ... JIT
compilation and static compilation obviously serve different purposes,
but I'm curious if there aren't other interesting ways to take
advantage of both.

-cc


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread Collin Winter
Hey MA,

On Fri, Jan 29, 2010 at 11:14 AM, M.-A. Lemburg m...@egenix.com wrote:
 Collin Winter wrote:
 I added startup benchmarks for Mercurial and Bazaar yesterday
 (http://code.google.com/p/unladen-swallow/source/detail?r=1019) so we
 can use them as more macro-ish benchmarks, rather than merely starting
 the CPython binary over and over again. If you have ideas for better
 Mercurial/Bazaar startup scenarios, I'd love to hear them. The new
 hg_startup and bzr_startup benchmarks should give us some more data
 points for measuring improvements in startup time.

 One idea we had for improving startup time for apps like Mercurial was
 to allow the creation of hermetic Python binaries, with all
 necessary modules preloaded. This would be something like Smalltalk
 images. We haven't yet really fleshed out this idea, though.

 In Python you can do the same with the freeze.py utility. See

 http://www.egenix.com/www2002/python/mxCGIPython.html

 for an old project where we basically put the Python
 interpreter and stdlib into a single executable.

 We've recently revisited that project and created something
 we call pyrun. It fits Python 2.5 into a single executable
 and a set of shared modules (which for various reasons cannot
 be linked statically)... 12MB in total.

 If you load lots of modules from the stdlib this does provide
 a significant improvement over standard Python.

Good to know there are options. One feature we had in mind for a
system of this sort would be the ability to take advantage of the
limited/known set of modules in the image to optimize the application
further, similar to link-time optimizations in gcc/LLVM
(http://www.airs.com/blog/archives/100).

 Back to the PEP's proposal:

 Looking at the data you currently have, the negative results
 currently don't really look good in the light of the small
 performance improvements.

The JIT compiler we are offering is more than just its current
performance benefit. An interpreter loop will simply never be as fast
as machine code. An interpreter loop, no matter how well-optimized,
will hit a performance ceiling and before that ceiling will run into
diminishing returns. Machine code is a more versatile optimization
target, and as such, allows many optimizations that would be
impossible or prohibitively difficult in an interpreter.

Unladen Swallow offers a platform to extract increasing performance
for years to come. The current generation of modern, JIT-based
JavaScript engines are instructive in this regard: V8 (which I'm most
familiar with) delivers consistently improving performance
release-over-release (see the graphs at the top of
http://googleblog.blogspot.com/2009/09/google-chrome-after-year-sporting-new.html).
I'd like to see CPython be able to achieve the same thing, as the new
implementations of JavaScript and Ruby have been able to do.

We are aware that Unladen Swallow is not finished; that's why we're
not asking to go into py3k directly. Unladen Swallow's memory usage
will continue to decrease, and its performance will only go up. The
current state is not its permanent state; I'd hate to see the perfect
become the enemy of the good.

 Wouldn't it be possible to have the compiler approach work
 in three phases in order to reduce the memory footprint and
 startup time hit, ie.

  1. run an instrumented Python interpreter to collect all
    the needed compiler information; write this information into
    a .pys file (Python stats)

  2. create compiled versions of the code for various often
    used code paths and type combinations by reading the
    .pys file and generating an .so file as regular
    Python extension module

  3. run an uninstrumented Python interpreter and let it
    use the .so files instead of the .py ones

 In production, you'd then only use step 3 and avoid the
 overhead of steps 1 and 2.

That is certainly a possibility if we are unable to reduce memory
usage to a satisfactory level. I've added a Contingency Plans
section to the PEP, including this option:
http://codereview.appspot.com/186247/diff2/8004:7005/8006.
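
For illustration only, the quoted three-phase scheme could be simulated in miniature as below; the .pys format and every helper name here are stand-ins, not anything Unladen Swallow actually implements:

```python
import json
import tempfile
from pathlib import Path

def phase1_collect(workload, out=None):
    """Instrumented run: record which functions saw which argument types."""
    feedback = {}
    def traced(f, *args):
        feedback.setdefault(f.__name__, set()).update(
            type(a).__name__ for a in args)
        return f(*args)
    workload(traced)
    out = out or Path(tempfile.mkdtemp()) / "profile.pys"
    Path(out).write_text(
        json.dumps({k: sorted(v) for k, v in feedback.items()}))
    return out

def phase2_compile(pys_path):
    """Stand-in for emitting specialized native code into an .so:
    here we merely pick the call sites worth specializing."""
    stats = json.loads(Path(pys_path).read_text())
    return {name for name, types in stats.items() if types == ["int"]}

def workload(call):
    return call(lambda x: x * 2, 21)

hot = phase2_compile(phase1_collect(workload))
assert hot == {"<lambda>"}
```

Phase 3 would then load the precompiled artifacts instead of interpreting the .py sources.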

Thanks,
Collin Winter
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread Reid Kleckner
On Tue, Feb 2, 2010 at 8:57 PM, Collin Winter collinwin...@google.com wrote:
 Wouldn't it be possible to have the compiler approach work
 in three phases in order to reduce the memory footprint and
 startup time hit, ie.

  1. run an instrumented Python interpreter to collect all
    the needed compiler information; write this information into
    a .pys file (Python stats)

  2. create compiled versions of the code for various often
    used code paths and type combinations by reading the
    .pys file and generating an .so file as regular
    Python extension module

  3. run an uninstrumented Python interpreter and let it
    use the .so files instead of the .py ones

 In production, you'd then only use step 3 and avoid the
 overhead of steps 1 and 2.

 That is certainly a possibility if we are unable to reduce memory
 usage to a satisfactory level. I've added a Contingency Plans
 section to the PEP, including this option:
 http://codereview.appspot.com/186247/diff2/8004:7005/8006.

This would be another good research problem for someone to take and
run with.  The trick is that you would need to add some kind of linking
step to loading the .so.  Right now, we just collect PyObject*'s, and
don't care whether they're statically allocated or user-defined
objects.  If you wanted to pursue offline feedback directed
compilation, you would need to write something that basically can map
from the pointers in the feedback data to something like a Python
dotted name import path, and then when you load the application, look
up those names and rewrite the new pointers into the generated machine
code.  It sounds a lot like writing a dynamic loader.  :)

It sounds like a huge amount of work, and we haven't approached it.
On the other hand, it sounds like it might be rewarding.
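
The pointer-to-dotted-name mapping described above can be sketched in pure Python; the record/relink names are hypothetical, but they show why the load step resembles a dynamic loader:

```python
import importlib

FEEDBACK = {}  # (code name, call-site index) -> dotted type names observed

def record(code_name, site, obj):
    # Instrumented run: store an importable name, not a raw PyObject*.
    t = type(obj)
    FEEDBACK.setdefault((code_name, site), []).append(
        "%s.%s" % (t.__module__, t.__qualname__))

def relink(dotted):
    # In a later process, resolve the dotted name back to a live object,
    # whose (new) address could be patched into generated machine code.
    module, _, attr = dotted.rpartition(".")
    return getattr(importlib.import_module(module), attr)

record("f", 0, 3.5)
assert relink(FEEDBACK[("f", 0)][0]) is float
```

The hard cases are objects with no importable name (closures, instances), which is part of why this is a research problem.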

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread Dirkjan Ochtman
On Tue, Feb 2, 2010 at 23:54, Collin Winter collinwin...@google.com wrote:
 Done. The diff is at
 http://codereview.appspot.com/186247/diff2/5014:8003/7002. I listed
 Cython, Shedskin and a bunch of other alternatives to pure CPython.
 Some of that information is based on conversations I've had with the
 respective developers, and I'd appreciate corrections if I'm out of
 date.

Thanks, that's a very good list (and I think it makes for a useful
addition to the PEP).

Cheers,

Dirkjan


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-01 Thread M.-A. Lemburg
M.-A. Lemburg wrote:
 Collin Winter wrote:
 I added startup benchmarks for Mercurial and Bazaar yesterday
 (http://code.google.com/p/unladen-swallow/source/detail?r=1019) so we
 can use them as more macro-ish benchmarks, rather than merely starting
 the CPython binary over and over again. If you have ideas for better
 Mercurial/Bazaar startup scenarios, I'd love to hear them. The new
 hg_startup and bzr_startup benchmarks should give us some more data
 points for measuring improvements in startup time.

 One idea we had for improving startup time for apps like Mercurial was
 to allow the creation of hermetic Python binaries, with all
 necessary modules preloaded. This would be something like Smalltalk
 images. We haven't yet really fleshed out this idea, though.
 
 In Python you can do the same with the freeze.py utility. See
 
 http://www.egenix.com/www2002/python/mxCGIPython.html
 
 for an old project where we basically put the Python
 interpreter and stdlib into a single executable.
 
 We've recently revisited that project and created something
 we call pyrun. It fits Python 2.5 into a single executable
 and a set of shared modules (which for various reasons cannot
 be linked statically)... 12MB in total.
 
 If you load lots of modules from the stdlib this does provide
 a significant improvement over standard Python.
 
 Back to the PEP's proposal:
 
 Looking at the data you currently have, the negative results
 currently don't really look good in the light of the small
 performance improvements.
 
 Wouldn't it be possible to have the compiler approach work
 in three phases in order to reduce the memory footprint and
 startup time hit, ie.
 
  1. run an instrumented Python interpreter to collect all
 the needed compiler information; write this information into
 a .pys file (Python stats)
 
  2. create compiled versions of the code for various often
 used code paths and type combinations by reading the
 .pys file and generating an .so file as regular
 Python extension module
 
  3. run an uninstrumented Python interpreter and let it
 use the .so files instead of the .py ones
 
 In production, you'd then only use step 3 and avoid the
 overhead of steps 1 and 2.
 
 Moreover, the .so file approach
 would only load the code for code paths and type combinations
 actually used in a particular run of the Python code into
 memory and allow multiple Python processes to share it.
 
 As side effect, you'd probably also avoid the need to have
 C++ code in the production Python runtime - that is unless
 LLVM requires some kind of runtime support which is written
 in C++.

BTW: Some years ago we discussed the idea of pluggable VMs for
Python. Wouldn't U-S be a good motivation to revisit this idea ?

We could then have a VM based on byte code using a stack
machine, one based on word code using a register machine,
and perhaps one that uses the Stackless approach.

Each VM type could use the PEP 3147 approach to store
auxiliary files to store byte code, word code or machine
compiled code.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 01 2010)
 Python/Zope Consulting and Support ...http://www.egenix.com/
 mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
 mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-01 Thread Collin Winter
Hey MA,

On Mon, Feb 1, 2010 at 9:58 AM, M.-A. Lemburg m...@egenix.com wrote:
 BTW: Some years ago we discussed the idea of pluggable VMs for
 Python. Wouldn't U-S be a good motivation to revisit this idea ?

 We could then have a VM based on byte code using a stack
 machine, one based on word code using a register machine,
 and perhaps one that uses the Stackless approach.

What is the usecase for having pluggable VMs? Is the idea that, at
runtime, the user would select which virtual machine they want to run
their code under? How would the user make that determination
intelligently?

I think this idea underestimates a) how deeply the current CPython VM
is intertwined with the rest of the implementation, and b) the nature
of the changes required by these separate VMs. For example, Unladen
Swallow adds fields to the C-level structs for dicts, code objects and
frame objects; how would those changes be pluggable? Stackless
requires so many modifications that it is effectively a fork; how
would those changes be pluggable?

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-01 Thread M.-A. Lemburg
Collin Winter wrote:
 Hey MA,
 
 On Mon, Feb 1, 2010 at 9:58 AM, M.-A. Lemburg m...@egenix.com wrote:
 BTW: Some years ago we discussed the idea of pluggable VMs for
 Python. Wouldn't U-S be a good motivation to revisit this idea ?

 We could then have a VM based on byte code using a stack
 machine, one based on word code using a register machine,
 and perhaps one that uses the Stackless approach.
 
 What is the usecase for having pluggable VMs? Is the idea that, at
 runtime, the user would select which virtual machine they want to run
 their code under? How would the user make that determination
 intelligently?

The idea back then (IIRC) was to have a compile time option to select
one of a few available VMs, in order to more easily experiment with
new or optimized implementations such as e.g. a register based VM.

It should even be possible to factor out the VM into a DLL/SO which
is then selected and loaded via a command line option.

 I think this idea underestimates a) how deeply the current CPython VM
 is intertwined with the rest of the implementation, and b) the nature
 of the changes required by these separate VMs. For example, Unladen
 Swallow adds fields to the C-level structs for dicts, code objects and
 frame objects; how would those changes be pluggable? Stackless
 requires so many modifications that it is effectively a fork; how
 would those changes be pluggable?

They wouldn't be pluggable. Such changes would have to be made
in a more general way in order to serve more than just one VM.

Getting this right would certainly require a major effort, but it
would also reduce the need to maintain several branches of C-based
Python implementations.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 01 2010)
 Python/Zope Consulting and Support ...http://www.egenix.com/
 mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
 mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-01 Thread Collin Winter
On Mon, Feb 1, 2010 at 11:17 AM, M.-A. Lemburg m...@egenix.com wrote:
 Collin Winter wrote:
 I think this idea underestimates a) how deeply the current CPython VM
 is intertwined with the rest of the implementation, and b) the nature
 of the changes required by these separate VMs. For example, Unladen
 Swallow adds fields to the C-level structs for dicts, code objects and
 frame objects; how would those changes be pluggable? Stackless
 requires so many modifications that it is effectively a fork; how
 would those changes be pluggable?

 They wouldn't be pluggable. Such changes would have to be made
 in a more general way in order to serve more than just one VM.

I believe these VMs would have little overlap. I cannot imagine that
Unladen Swallow's needs have much in common with Stackless's, or with
those of a hypothetical register machine to replace the current stack
machine.

Let's consider that last example in more detail: a register machine
would require completely different bytecode. This would require
replacing the bytecode compiler, the peephole optimizer, and the
bytecode eval loop. The frame object would need to be changed to hold
the registers and a new blockstack design; the code object would have
to potentially hold a new bytecode layout.
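
The stack/register contrast is easy to see with the dis module; the register form shown in the comment is only a rough hypothetical encoding:

```python
import dis

def add(a, b):
    return a + b

# CPython's stack machine pushes operands, then applies the operator:
#   LOAD_FAST a; LOAD_FAST b; BINARY_ADD; RETURN_VALUE
# (exact opnames vary by version; 3.11+ uses BINARY_OP).
ops = [ins.opname for ins in dis.get_instructions(add)]
assert any(op.startswith("LOAD_FAST") for op in ops)

# A register machine would instead name operands inside one instruction,
# roughly: ADD r2, r0, r1; RETURN r2 -- fewer dispatches but wider opcodes,
# which is why frames and code objects would need a different layout.
```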

I suppose making all this pluggable would be possible, but I don't see
the point. This kind of experimentation is ideal for a branch: go off,
test your idea, report your findings, merge back. Let the branch be
long-lived, if need be. The Mercurial migration will make all this
easier.

 Getting this right would certainly require a major effort, but it
 would also reduce the need to maintain several branches of C-based
 Python implementations.

If such a restrictive plugin-based scheme had been available when we
began Unladen Swallow, I do not doubt that we would have ignored it
entirely. I do not like the idea of artificially tying the hands of
people trying to make CPython faster. I do not see any part of Unladen
Swallow that would have been made easier by such a scheme. If
anything, it would have made our project more difficult.

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-01 Thread Terry Reedy

On 2/1/2010 1:32 PM, Collin Winter wrote:

Hey MA,

On Mon, Feb 1, 2010 at 9:58 AM, M.-A. Lemburg m...@egenix.com wrote:

BTW: Some years ago we discussed the idea of pluggable VMs for
Python. Wouldn't U-S be a good motivation to revisit this idea ?

We could then have a VM based on byte code using a stack
machine, one based on word code using a register machine,
and perhaps one that uses the Stackless approach.


What is the usecase for having pluggable VMs?


Running an application full time on multiple machines


Is the idea that, at runtime, the user would select which virtual

  machine they want to run their code under?

From your comments below, I would presume the selection should be at
startup, from the command line, before building Python objects.



How would the user make that determination intelligently?


The same way people would determine whether to select JIT or not -- by 
testing space and time performance for their app, with test time 
proportioned to expected run time.


Terry Jan Reedy



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-01 Thread Cesare Di Mauro
2010/2/1 Collin Winter collinwin...@google.com

 I believe these VMs would have little overlap. I cannot imagine that
 Unladen Swallow's needs have much in common with Stackless's, or with
 those of a hypothetical register machine to replace the current stack
 machine.

 Let's consider that last example in more detail: a register machine
 would require completely different bytecode. This would require
 replacing the bytecode compiler, the peephole optimizer, and the
 bytecode eval loop. The frame object would need to be changed to hold
 the registers and a new blockstack design; the code object would have
 to potentially hold a new bytecode layout.

 I suppose making all this pluggable would be possible, but I don't see
 the point. This kind of experimentation is ideal for a branch: go off,
 test your idea, report your findings, merge back. Let the branch be
 long-lived, if need be. The Mercurial migration will make all this
 easier.

  Getting this right would certainly require a major effort, but it
  would also reduce the need to maintain several branches of C-based
  Python implementations.

 If such a restrictive plugin-based scheme had been available when we
 began Unladen Swallow, I do not doubt that we would have ignored it
 entirely. I do not like the idea of artificially tying the hands of
 people trying to make CPython faster. I do not see any part of Unladen
 Swallow that would have been made easier by such a scheme. If
 anything, it would have made our project more difficult.

  Collin Winter


I completely agree. Working with wpython I have changed a lot of code
ranging from the ASDL grammar to the eval loop, including some library
module and tests (primarily the Python-based parser and the disassembly
tools; module finder required work, too).
I haven't changed the Python objects or the object model (except in the
alpha release; then I dropped this invasive change), but I've added some
helper functions in object.c, dict.c, etc.

A pluggable VM isn't feasible because we are talking about a brand new
CPython (library included), to be chosen each time.

If approved, this model will greatly limit the optimizations that can be
implemented to make CPython run faster.

Cesare Di Mauro


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-30 Thread Paul Moore
On 29 January 2010 23:45, Martin v. Löwis mar...@v.loewis.de wrote:
 On Windows, would a C extension author be able to distribute a single
 binary (bdist_wininst/bdist_msi) which would be compatible with
 with-LLVM and without-LLVM builds of Python?

 When PEP 384 gets implemented, you not only get that, but you will also
 be able to use the same extension module for 3.2, 3.3, 3.4, etc, with
 or without U-S.

Ah! That's the point behind PEP 384! Sorry, I'd only skimmed that PEP
when it came up, and completely missed the implications.

In which case a HUGE +1 from me for PEP 384.

Paul.


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-30 Thread Cesare Di Mauro
I'm back with some tests that I made with the U-S test suite.

2010/1/30 Scott Dial scott+python-...@scottdial.com


 Cesare, just FYI, your Hg repository has lost the execute bits on some
 files (namely ./configure and ./Parser/asdl_c.py), so it does not
 quite build out-of-the-box.


Unfortunately, I haven't found a solution to this problem. If somebody
working with Windows and Mercurial (I use the TortoiseHg graphical client)
can help with this issue, I'll release wpython 1.1 final.


 I took the liberty of cloning your repo into my laptop's VirtualBox
 instance of Ubuntu. I ran the default performance tests from the U-S
 repo, with VirtualBox at highest priority. As a sanity check, I ran it
 against the U-S trunk. I think the numbers speak for themselves.

  --
 Scott Dial
 sc...@scottdial.com
 scod...@cs.indiana.edu


I downloaded the U-S test suite and ran some benchmarks on my machine.
Django and Spambayes tests didn't run:

Running django...
INFO:root:Running D:\Projects\wpython\wpython10_test\PCbuild\python
performance/bm_django.py -n 100
Traceback (most recent call last):
  File "perf.py", line 1938, in <module>
    main(sys.argv[1:])
  File "perf.py", line 1918, in main
    options)))
  File "perf.py", line 1193, in BM_Django
    return SimpleBenchmark(MeasureDjango, *args, **kwargs)
  File "perf.py", line 590, in SimpleBenchmark
    *args, **kwargs)
  File "perf.py", line 1189, in MeasureDjango
    return MeasureGeneric(python, options, bm_path, bm_env)
  File "perf.py", line 960, in MeasureGeneric
    inherit_env=options.inherit_env)
  File "perf.py", line 916, in CallAndCaptureOutput
    raise RuntimeError("Benchmark died: " + err)
RuntimeError: Benchmark died: Traceback (most recent call last):
  File "performance/bm_django.py", line 25, in <module>
    from django.template import Context, Template
ImportError: No module named template

Running spambayes...
INFO:root:Running D:\Projects\wpython\wpython10_test\PCbuild\python
performance/bm_spambayes.py -n 50
Traceback (most recent call last):
  File "perf.py", line 1938, in <module>
    main(sys.argv[1:])
  File "perf.py", line 1918, in main
    options)))
  File "perf.py", line 1666, in BM_spambayes
    return SimpleBenchmark(MeasureSpamBayes, *args, **kwargs)
  File "perf.py", line 590, in SimpleBenchmark
    *args, **kwargs)
  File "perf.py", line 1662, in MeasureSpamBayes
    return MeasureGeneric(python, options, bm_path, bm_env)
  File "perf.py", line 960, in MeasureGeneric
    inherit_env=options.inherit_env)
  File "perf.py", line 916, in CallAndCaptureOutput
    raise RuntimeError("Benchmark died: " + err)
RuntimeError: Benchmark died: Traceback (most recent call last):
  File "performance/bm_spambayes.py", line 18, in <module>
    from spambayes import hammie, mboxutils
ImportError: No module named spambayes

Anyway, I run all others with wpython 1.0 final:

C:\Temp\unladen-swallow-tests> C:\temp\Python-2.6.4\PCbuild\python perf.py -r
-b default,-django,-spambayes C:\temp\Python-2.6.4\PCbuild\python
D:\Projects\wpython\wpython10_test\PCbuild\python

Report on Windows Conan post2008Server 6.1.7600 x86 AMD64 Family 15 Model 12
Stepping 0, AuthenticAMD
Total CPU cores: 1

### 2to3 ###
Min: 43.408000 -> 38.528000: 1.1267x faster
Avg: 44.448600 -> 39.391000: 1.1284x faster
Significant (t=10.582185)
Stddev: 0.84415 -> 0.65538: 1.2880x smaller
Timeline: http://tinyurl.com/ybdwese

### nbody ###
Min: 1.124000 -> 1.109000: 1.0135x faster
Avg: 1.167630 -> 1.148190: 1.0169x faster
Not significant
Stddev: 0.09607 -> 0.09544: 1.0065x smaller
Timeline: http://tinyurl.com/yex7dfv

### slowpickle ###
Min: 1.237000 -> 1.067000: 1.1593x faster
Avg: 1.283800 -> 1.109070: 1.1575x faster
Significant (t=11.393574)
Stddev: 0.11086 -> 0.10596: 1.0462x smaller
Timeline: http://tinyurl.com/y8t5ess

### slowspitfire ###
Min: 2.079000 -> 1.928000: 1.0783x faster
Avg: 2.148920 -> 1.987540: 1.0812x faster
Significant (t=7.731224)
Stddev: 0.15384 -> 0.14108: 1.0904x smaller
Timeline: http://tinyurl.com/yzexcqa

### slowunpickle ###
Min: 0.617000 -> 0.568000: 1.0863x faster
Avg: 0.645420 -> 0.590790: 1.0925x faster
Significant (t=7.087322)
Stddev: 0.05478 -> 0.05422: 1.0103x smaller
Timeline: http://tinyurl.com/ycsoouq


I also made some tests with wpython 1.1, leaving bytecode peepholer enabled:

C:\Temp\unladen-swallow-tests> C:\temp\Python-2.6.4\PCbuild\python perf.py -r
-b default,-django,-spambayes C:\temp\Python-2.6.4\PCbuild\python
D:\Projects\wpython\wpython_test\PCbuild\python

Report on Windows Conan post2008Server 6.1.7600 x86 AMD64 Family 15 Model 12
Stepping 0, AuthenticAMD
Total CPU cores: 1

### 2to3 ###
Min: 43.454000 -> 39.912000: 1.0887x faster
Avg: 44.301000 -> 40.766800: 1.0867x faster
Significant (t=8.188533)
Stddev: 0.65325 -> 0.71041: 1.0875x larger
Timeline: http://tinyurl.com/ya5z9mg

### nbody ###
Min: 1.125000 -> 1.07: 1.0514x faster
Avg: 1.169270 -> 1.105530: 1.0577x faster
Significant (t=4.774702)
Stddev: 0.09655 -> 0.09219: 1.0473x smaller
Timeline: http://tinyurl.com/y8udjmk

### slowpickle ###
Min: 1.235000 -> 1.094000: 1.1289x faster
Avg: 1.275860 -> 1.132740: 

Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-30 Thread Brett Cannon
On Fri, Jan 29, 2010 at 15:04,  exar...@twistedmatrix.com wrote:
 On 10:47 pm, tjre...@udel.edu wrote:

 On 1/29/2010 4:19 PM, Collin Winter wrote:

 On Fri, Jan 29, 2010 at 7:22 AM, Nick Coghlan ncogh...@gmail.com wrote:

 Agreed. We originally switched Unladen Swallow to wordcode in our
 2009Q1 release, and saw a performance improvement from this across the
 board. We switched back to bytecode for the JIT compiler to make
 upstream merger easier. The Unladen Swallow benchmark suite should
 provided a thorough assessment of the impact of the wordcode ->
 bytecode switch. This would be complementary to a JIT compiler, rather
 than a replacement for it.

 I would note that the switch will introduce incompatibilities with
 libraries like Twisted. IIRC, Twisted has a traceback prettifier that
 removes its trampoline functions from the traceback, parsing CPython's
 bytecode in the process. If running under CPython, it assumes that the
 bytecode is as it expects. We broke this in Unladen's wordcode switch.
 I think parsing bytecode is a bad idea, but any switch to wordcode
 should be advertised widely.

 Several years ago, there was serious consideration of switching to a
 register-based VM, which would have been even more of a change. Since I
 learned 1.4, Guido has consistently insisted that the CPython VM is not part
 of the language definition and, as far as I know, he has rejected any
 bytecode hackery in the stdlib. While he is not one to, say, randomly permute
 the codes just to frustrate such hacks, I believe he has always considered
 VM details private and subject to change, and any usage thereof 'at one's own
 risk'.

 Language to such effect might be a useful addition to this page (amongst
 others, perhaps):

  http://docs.python.org/library/dis.html

 which very clearly and helpfully lays out quite a number of APIs which can
 be used to get pretty deep into the bytecode.  If all of this is subject to
 be discarded at the first sign that doing so might be beneficial for some
 reason, don't keep it a secret that people need to join python-dev to learn.


Can you file a bug and assign it to me?

-Brett


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread skip

Cesare I think that wpython as a proof-of-concept has done its work,
Cesare showing its potential.

If you haven't already, is there any chance you can run the Unladen Swallow
performance test suite and post the results?  The code is separate from U-S
and should work with wpython:

http://unladen-swallow.googlecode.com/svn/tests

-- 
Skip Montanaro - s...@pobox.com - http://www.smontanaro.net/


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Cesare Di Mauro
2010/1/29  s...@pobox.com


Cesare I think that wpython as a proof-of-concept has done its work,
Cesare showing its potential.

 If you haven't already, is there any chance you can run the Unladen Swallow
 performance test suite and post the results?  The code is separate from U-S
 and should work with wpython:

http://unladen-swallow.googlecode.com/svn/tests

 --
 Skip Montanaro - s...@pobox.com - http://www.smontanaro.net/


I work on a Windows machine, so I don't know if I can run the U-S test suite
on it (the first time I tried, it failed since U-S used a module available
on Unix machines only).

If it works now, I can provide results with wpython 1.0 final and the
current 1.1 I'm working on (which has additional optimizations; I've also
moved all peephole optimizer code on compile.c).

Anyway, Mart Sõmermaa provided some results
(http://www.mail-archive.com/python-dev@python.org/msg43294.html) based
on wpython 1.0 alpha (you can find the wpython 1.0 final here:
http://code.google.com/p/wpython2/downloads/list).

Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread skip

Cesare ... (you can find the wpython 1.0 final here
Cesare http://code.google.com/p/wpython2/downloads/list).

I tried downloading it.  Something about wpython10.7z and wpython10_fix.7z.
What's a 7z file?  What tool on my Mac will unpack that?  Can I build and
run wpython on my Mac or is it Windows only?

Thx,

Skip


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Michael Foord

On 29/01/2010 13:24, s...@pobox.com wrote:

 Cesare  ... (you can find the wpython 1.0 final here
 Cesare  http://code.google.com/p/wpython2/downloads/list).

I tried downloading it.  Something about wpython10.7z and wpython10_fix.7z.
What's a 7z file?  What tool on my Mac will unpack that?  Can I build and
run wpython on my Mac or is it Windows only?


7z (7zip) is a semi-popular compression format, and yes there are 
cross-platform tools to decompress them. A quick google should reveal them.


Michael


Thx,

Skip



--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog

READ CAREFULLY. By accepting and reading this email you agree, on behalf of 
your employer, to release me from all obligations and waivers arising from any 
and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, 
clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and 
acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your 
employer, its partners, licensors, agents and assigns, in perpetuity, without 
prejudice to my ongoing rights and privileges. You further represent that you 
have the authority to release me from any BOGUS AGREEMENTS on behalf of your 
employer.




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Cesare Di Mauro
2010/1/29  s...@pobox.com


Cesare ... (you can find the wpython 1.0 final here
Cesare http://code.google.com/p/wpython2/downloads/list).

 I tried downloading it.  Something about wpython10.7z and wpython10_fix.7z.
 What's a 7z file?  What tool on my Mac will unpack that?  Can I build and
 run wpython on my Mac or is it Windows only?

 Thx,

 Skip


You can find the 7-Zip tools here: http://www.7-zip.org/download.html.

If you use Mercurial, you can grab a local copy this way:

hg clone https://wpython10.wpython2.googlecode.com/hg/ wpython2-wpython10

Wpython is intended to run on any platform where CPython 2.6.4 runs.

Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Antoine Pitrou
Cesare Di Mauro cesare.di.mauro at gmail.com writes:
 
 If the python dev community is interested, I can work on a 3.x branch, porting
 all optimizations I made (and many others that I've planned to implement) one 
 step at a time, in order to carefully check and validate any change with
 expert people monitoring it.

We are certainly more interested in a 3.x branch than in a 2.x one ;-)
You can start by cloning http://code.python.org/hg/branches/py3k/

Or you could submit patches piecewise on http://bugs.python.org
I think the first step would be to switch to 16-bit bytecodes. It would be
uncontroversial (the increase in code size probably has no negative effect) and
would provide the foundation for all of your optimizations.

Are you going to PyCon?


Antoine.




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread skip

Cesare You can find 7-Zip tools here
Cesare http://www.7-zip.org/download.html.

Thanks.  Found a tool named 7za in MacPorts which I was able to install.

One strong suggestion for future releases: Please put a top-level directory
in your archives.  It is annoying to expect that, only to have an archive
expand into the current directory without creating a directory of its own.
I've been burned often enough that I always check before expanding source
archives from new (to me) sources, so no harm, no foul in this case.

Skip


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Cesare Di Mauro
2010/1/29 Antoine Pitrou solip...@pitrou.net

 Cesare Di Mauro cesare.di.mauro at gmail.com writes:
 
  If the python dev community is interested, I can work on a 3.x branch,
  porting all optimizations I made (and many others that I've planned to
  implement) one step at a time, in order to carefully check and validate
  any change with expert people monitoring it.

 We are certainly more interested in a 3.x branch than in a 2.x one ;-)
 You can start by cloning http://code.python.org/hg/branches/py3k/

 Or you could submit patches piecewise on http://bugs.python.org


I prefer to make a branch with Mercurial, which I found a comfortable tool.
:)


  I think the first step would be to switch to 16-bit bytecodes. It would be
  uncontroversial (the increase in code size probably has no negative effect)
  and would provide the foundation for all of your optimizations.


I agree. At the beginning I need to disable the peepholer, so performance is
best compared once all peephole optimizations have been ported to the
wordcode model.

I'll make the branch after I release wpython 1.1, which I'll do ASAP.


 Are you going to PyCon?

  Antoine.


No, I don't. But if there's a python-dev meeting, I can make a (long) jump.
Maybe it would be easier to talk about the superinstructions model there, and
I can show and comment on all the optimizations I made.

Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Cesare Di Mauro
2010/1/29  s...@pobox.com


 One strong suggestion for future releases: Please put a top-level directory
 in your archives.  It is annoying to expect that only to have an archive
 expand into the current directory without creating a directory of its own.
 I've been burned often enough that I always check before expanding source
 archives from new (to me) sources, so no harm, no foul in this case.

 Skip


You're right. Excuse me. I'll do it next time.

Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Nick Coghlan
Antoine Pitrou wrote:
 Or you could submit patches piecewise on http://bugs.python.org
 I think the first step would be to switch to 16-bit bytecodes. It would be
 uncontroversial (the increase in code size probably has no negative effect) 
 and
 would provide the foundation for all of your optimizations.

I wouldn't consider changing from bytecode to wordcode uncontroversial -
the potential to have an effect on cache hit ratios means it needs to be
benchmarked (the U-S performance tests should be helpful there).

It's the same basic problem where any change to the ceval loop can have
surprising performance effects due to the way it affects the compiled
switch statement's ability to fit into the cache, and other low-level
processor weirdness.

If there were an old-style map of the CPython code base, the whole area
would have 'ware, here be dragons' written over the top of it ;)

Cheers,
Nick.

P.S. Note that I'm not saying I'm fundamentally opposed to a change to
wordcode - just that it needs to be benchmarked fairly thoroughly first.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Antoine Pitrou
Nick Coghlan ncoghlan at gmail.com writes:
 
 Antoine Pitrou wrote:
  Or you could submit patches piecewise on http://bugs.python.org
  I think the first step would be to switch to 16-bit bytecodes. It 
would be
  uncontroversial (the increase in code size probably has no negative 
effect) and
  would provide the foundation for all of your optimizations.
 
 I wouldn't consider changing from bytecode to wordcode uncontroversial -
 the potential to have an effect on cache hit ratios means it needs to be
 benchmarked (the U-S performance tests should be helpful there).

Well I said /probably/ has no negative effect :-)

Actually, wordcode could allow accesses in the eval loop to be done on 
aligned words, so as to fetch operands in one step on little-endian CPUs 
(instead of recombining bytes manually).

The change would be uncontroversial, however, in that it wouldn't 
modify code complexity or increase the maintenance burden.

Of course, performance checks have to be part of the review.

 If there was an old style map of the CPython code base, the whole area
 would have 'ware, here be dragons written over the top of it ;)

Just FYI: weakrefs and memoryviews are guarded by a mantichore, and only
Benjamin can tame it.

Regards

Antoine.




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Cesare Di Mauro
2010/1/29 Nick Coghlan ncogh...@gmail.com

 I wouldn't consider changing from bytecode to wordcode uncontroversial -
 the potential to have an effect on cache hit ratios means it needs to be
 benchmarked (the U-S performance tests should be helpful there).


It's quite strange, but from the tests made it seems that wpython performs
better on older architectures (such as my Athlon64 socket 754), which have
fewer resources, such as smaller caches.

It'll be interesting to check how it works on more limited ISAs. I'm
especially curious about ARMs.


 It's the same basic problem where any changes to the ceval loop can have
 surprising performance effects due to the way they affect the compiled
 switch statements ability to fit into the cache and other low level
 processor weirdness.

 Cheers,
 Nick.


Sure, but consider that with wpython wordcodes require less space on
average. Also, fewer instructions are executed inside the ceval loop, thanks
to some natural instruction grouping.

For example, I recently introduced in wpython 1.1 a new opcode to handle
generator expressions more efficiently. It's mapped as a unary operator, so
it exposes interesting properties which I'll show you with an example.

def f(a):
return sum(x for x in a)

With CPython 2.6.4 it generates:

  0 LOAD_GLOBAL 0 (sum)
  3 LOAD_CONST 1 (code object genexpr at 00512EC8, file stdin, line 1)
  6 MAKE_FUNCTION 0
  9 LOAD_FAST 0 (a)
 12 GET_ITER
 13 CALL_FUNCTION 1
 16 CALL_FUNCTION 1
 19 RETURN_VALUE

With wpython 1.1:

0 LOAD_GLOBAL 0 (sum)
1 LOAD_CONST 1 (code object genexpr at 01F13208, file stdin, line 1)
2 MAKE_FUNCTION 0
3 FAST_BINOP get_generator a
5 QUICK_CALL_FUNCTION 1
6 RETURN_VALUE

The new opcode is GET_GENERATOR, which is equivalent (but more efficient,
using a faster internal function call) to:

GET_ITER
CALL_FUNCTION 1

The compiler initially generated the following opcodes:

LOAD_FAST 0 (a)
GET_GENERATOR

then the peepholer recognized the pattern UNARY(FAST), and produced the
single opcode:

FAST_BINOP get_generator a

In the end, the ceval loop executes a single instruction instead of three.
The wordcode requires 14 bytes to be stored instead of 20, so it will use 1
data cache line instead of 2 on CPUs with 16-byte cache lines.

The same grouping behavior happens with binary operators as well. Opcodes
aggregation is a natural and useful concept with the new wordcode structure.
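For comparison, the same function can be disassembled on a stock CPython with the `dis` module (a sketch; exact opcode names vary between CPython versions, and this is CPython's bytecode, not wpython's wordcode):

```python
import dis

def f(a):
    return sum(x for x in a)

# In CPython's bytecode the iterator setup (GET_ITER) and the calls stay
# separate instructions; wpython's GET_GENERATOR folds them together.
ops = [ins.opname for ins in dis.get_instructions(f)]
print(ops)
```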

Cheers,
Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Cesare Di Mauro
2010/1/29 Antoine Pitrou solip...@pitrou.net

 Actually, wordcode could allow accesses in the eval loop to be done on
 aligned words, so as to fetch operands in one step on little-endian CPUs
 (instead of recombining bytes manually).

  Regards

 Antoine.


I think that big-endian CPUs can get benefits too, since a single word load
operation is needed, followed by an instruction such as ROL #8 to adjust
the result (supposing that the compiler is smart enough to recognize the
pattern).

Using bytecodes, two loads are needed to retrieve the two bytes, plus some
shift and OR instructions to combine them into the correct word. Loads are
generally more expensive / limited.

All that not counting the operations needed to advance the instruction
pointer.
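The two decode styles can be sketched in Python with `struct` (the opcode and
oparg byte values below are arbitrary illustrations, not real opcodes):

```python
import struct

raw = bytes([0x64, 0x02])  # illustrative instruction: opcode 0x64, oparg 0x02

# Bytecode-style decode: two separate byte loads, recombined manually.
op_b, arg_b = raw[0], raw[1]

# Wordcode-style decode: one little-endian 16-bit load, split afterwards.
# On a little-endian CPU this maps to a single aligned fetch; a big-endian
# CPU would load the word and byte-swap (e.g. ROL #8) to get the same split.
word, = struct.unpack('<H', raw)
op_w, arg_w = word & 0xFF, word >> 8

assert (op_w, arg_w) == (op_b, arg_b)
```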

Regards,

Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread M.-A. Lemburg
Collin Winter wrote:
 I added startup benchmarks for Mercurial and Bazaar yesterday
 (http://code.google.com/p/unladen-swallow/source/detail?r=1019) so we
 can use them as more macro-ish benchmarks, rather than merely starting
 the CPython binary over and over again. If you have ideas for better
 Mercurial/Bazaar startup scenarios, I'd love to hear them. The new
 hg_startup and bzr_startup benchmarks should give us some more data
 points for measuring improvements in startup time.
 
 One idea we had for improving startup time for apps like Mercurial was
 to allow the creation of hermetic Python binaries, with all
 necessary modules preloaded. This would be something like Smalltalk
 images. We haven't yet really fleshed out this idea, though.

In Python you can do the same with the freeze.py utility. See

http://www.egenix.com/www2002/python/mxCGIPython.html

for an old project where we basically put the Python
interpreter and stdlib into a single executable.

We've recently revisited that project and created something
we call pyrun. It fits Python 2.5 into a single executable
and a set of shared modules (which for various reasons cannot
be linked statically)... 12MB in total.

If you load lots of modules from the stdlib this does provide
a significant improvement over standard Python.

Back to the PEP's proposal:

Looking at the data you currently have, the negative results
currently don't really look good in the light of the small
performance improvements.

Wouldn't it be possible to have the compiler approach work
in three phases in order to reduce the memory footprint and
startup time hit, i.e.

 1. run an instrumented Python interpreter to collect all
the needed compiler information; write this information into
a .pys file (Python stats)

 2. create compiled versions of the code for various often
used code paths and type combinations by reading the
.pys file and generating an .so file as regular
Python extension module

 3. run an uninstrumented Python interpreter and let it
use the .so files instead of the .py ones

In production, you'd then only use step 3 and avoid the
overhead of steps 1 and 2.

Moreover, the .so file approach
would only load the code for code paths and type combinations
actually used in a particular run of the Python code into
memory and allow multiple Python processes to share it.

As a side effect, you'd probably also avoid the need to have
C++ code in the production Python runtime - that is, unless
LLVM requires some kind of runtime support which is written
in C++.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jan 29 2010)
 Python/Zope Consulting and Support ...http://www.egenix.com/
 mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
 mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Collin Winter
On Fri, Jan 29, 2010 at 7:22 AM, Nick Coghlan ncogh...@gmail.com wrote:
 Antoine Pitrou wrote:
 Or you could submit patches piecewise on http://bugs.python.org
 I think the first step would be to switch to 16-bit bytecodes. It would be
 uncontroversial (the increase in code size probably has no negative effect) 
 and
 would provide the foundation for all of your optimizations.

 I wouldn't consider changing from bytecode to wordcode uncontroversial -
 the potential to have an effect on cache hit ratios means it needs to be
 benchmarked (the U-S performance tests should be helpful there).

 It's the same basic problem where any changes to the ceval loop can have
 surprising performance effects due to the way they affect the compiled
 switch statements ability to fit into the cache and other low level
 processor weirdness.

Agreed. We originally switched Unladen Swallow to wordcode in our
2009Q1 release, and saw a performance improvement from this across the
board. We switched back to bytecode for the JIT compiler to make
upstream merger easier. The Unladen Swallow benchmark suite should
provide a thorough assessment of the impact of the wordcode ->
bytecode switch. This would be complementary to a JIT compiler, rather
than a replacement for it.

I would note that the switch will introduce incompatibilities with
libraries like Twisted. IIRC, Twisted has a traceback prettifier that
removes its trampoline functions from the traceback, parsing CPython's
bytecode in the process. If running under CPython, it assumes that the
bytecode is as it expects. We broke this in Unladen's wordcode switch.
I think parsing bytecode is a bad idea, but any switch to wordcode
should be advertised widely.

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Terry Reedy

On 1/29/2010 4:19 PM, Collin Winter wrote:

On Fri, Jan 29, 2010 at 7:22 AM, Nick Coghlanncogh...@gmail.com  wrote:



Agreed. We originally switched Unladen Swallow to wordcode in our
2009Q1 release, and saw a performance improvement from this across the
board. We switched back to bytecode for the JIT compiler to make
upstream merger easier. The Unladen Swallow benchmark suite should
provided a thorough assessment of the impact of the wordcode -
bytecode switch. This would be complementary to a JIT compiler, rather
than a replacement for it.

I would note that the switch will introduce incompatibilities with
libraries like Twisted. IIRC, Twisted has a traceback prettifier that
removes its trampoline functions from the traceback, parsing CPython's
bytecode in the process. If running under CPython, it assumes that the
bytecode is as it expects. We broke this in Unladen's wordcode switch.
I think parsing bytecode is a bad idea, but any switch to wordcode
should be advertised widely.


Several years ago, there was serious consideration of switching to a 
register-based vm, which would have been even more of a change. Since I 
learned 1.4, Guido has consistently insisted that the CPython vm is not 
part of the language definition and, as far as I know, he has rejected 
any byte-code hackery in the stdlib. While he is not one to, say, 
randomly permute the codes just to frustrate such hacks, I believe he 
has always considered vm details private and subject to change and any 
usage thereof 'at one's own risk'.


tjr



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Collin Winter
Hey Terry,

On Fri, Jan 29, 2010 at 2:47 PM, Terry Reedy tjre...@udel.edu wrote:
 On 1/29/2010 4:19 PM, Collin Winter wrote:

 On Fri, Jan 29, 2010 at 7:22 AM, Nick Coghlanncogh...@gmail.com  wrote:

 Agreed. We originally switched Unladen Swallow to wordcode in our
 2009Q1 release, and saw a performance improvement from this across the
 board. We switched back to bytecode for the JIT compiler to make
 upstream merger easier. The Unladen Swallow benchmark suite should
 provided a thorough assessment of the impact of the wordcode -
 bytecode switch. This would be complementary to a JIT compiler, rather
 than a replacement for it.

 I would note that the switch will introduce incompatibilities with
 libraries like Twisted. IIRC, Twisted has a traceback prettifier that
 removes its trampoline functions from the traceback, parsing CPython's
 bytecode in the process. If running under CPython, it assumes that the
 bytecode is as it expects. We broke this in Unladen's wordcode switch.
 I think parsing bytecode is a bad idea, but any switch to wordcode
 should be advertised widely.

 Several years, there was serious consideration of switching to a
 registerbased vm, which would have been even more of a change. Since I
 learned 1.4, Guido has consistently insisted that the CPython vm is not part
 of the language definition and, as far as I know, he has rejected any
 byte-code hackery in the stdlib. While he is not one to, say, randomly
 permute the codes just to frustrate such hacks, I believe he has always
 considered vm details private and subject to change and any usage thereof
 'at one's own risk'.

No, I agree entirely: bytecode is an implementation detail that could
be changed at any time. But like reference counting, it's an
implementation detail that people have -- for better or worse -- come
to rely on. My only point was that a switch to wordcode should be
announced prominently in the release notes and not assumed to be
without impact on user code. That people are directly munging CPython
bytecode means that CPython should provide a better, more abstract way
to do the same thing that's more resistant to these kinds of changes.

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread exarkun

On 10:47 pm, tjre...@udel.edu wrote:

On 1/29/2010 4:19 PM, Collin Winter wrote:
On Fri, Jan 29, 2010 at 7:22 AM, Nick Coghlanncogh...@gmail.com 
wrote:



Agreed. We originally switched Unladen Swallow to wordcode in our
2009Q1 release, and saw a performance improvement from this across the
board. We switched back to bytecode for the JIT compiler to make
upstream merger easier. The Unladen Swallow benchmark suite should
provided a thorough assessment of the impact of the wordcode -
bytecode switch. This would be complementary to a JIT compiler, rather
than a replacement for it.

I would note that the switch will introduce incompatibilities with
libraries like Twisted. IIRC, Twisted has a traceback prettifier that
removes its trampoline functions from the traceback, parsing CPython's
bytecode in the process. If running under CPython, it assumes that the
bytecode is as it expects. We broke this in Unladen's wordcode switch.
I think parsing bytecode is a bad idea, but any switch to wordcode
should be advertised widely.


Several years, there was serious consideration of switching to a 
registerbased vm, which would have been even more of a change. Since I 
learned 1.4, Guido has consistently insisted that the CPython vm is not 
part of the language definition and, as far as I know, he has rejected 
any byte- code hackery in the stdlib. While he is not one to, say, 
randomly permute the codes just to frustrate such hacks, I believe he 
has always considered vm details private and subject to change and any 
usage thereof 'at one's own risk'.


Language to such effect might be a useful addition to this page (amongst 
others, perhaps):


 http://docs.python.org/library/dis.html

which very clearly and helpfully lays out quite a number of APIs which 
can be used to get pretty deep into the bytecode.  If all of this is 
subject to being discarded at the first sign that doing so might be 
beneficial for some reason, don't keep it a secret that people need to 
join python-dev to learn.


Jean-Paul


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread exarkun

On 10:55 pm, collinwin...@google.com wrote:


That people are directly munging CPython
bytecode means that CPython should provide a better, more abstract way
to do the same thing that's more resistant to these kinds of changes.


Yes, definitely!  Requesting a supported way to do the kind of 
introspection that you mentioned earlier (I think it's a little simpler 
than you recollect) has been on my todo list for a while.  The hold-up 
is just figuring out exactly what kind of introspection API would make 
sense.


It might be helpful to hear more about how the wordcode implementation 
differs from the bytecode implementation.  It's challenging to abstract 
from a single data point. :)


For what it's worth, I think this is the code in Twisted that Collin was 
originally referring to:


    # it is only really originating from
    # throwExceptionIntoGenerator if the bottom of the traceback
    # is a yield.
    # Pyrex and Cython extensions create traceback frames
    # with no co_code, but they can't yield so we know it's okay
    # to just return here.
    if ((not lastFrame.f_code.co_code) or
            lastFrame.f_code.co_code[lastTb.tb_lasti] != cls._yieldOpcode):
        return

And it's meant to be used in code like this:

def foo():
    try:
        yield
    except:
        # raise a new exception
        pass

def bar():
    g = foo()
    g.next()
    try:
        g.throw(...)
    except:
        # Code above is invoked to determine if the exception raised
        # was the one that was thrown into the generator or a different
        # one.
        pass
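A sketch of a more change-resistant version of this check (not Twisted's
actual code): resolve the traceback's `tb_lasti` to an opcode name through
the `dis` module, instead of indexing `co_code` by hand:

```python
import dis
import sys

def opname_at(tb):
    # Map the traceback's last instruction offset to an opcode name via
    # dis, rather than indexing f_code.co_code directly.
    for instr in dis.get_instructions(tb.tb_frame.f_code):
        if instr.offset == tb.tb_lasti:
            return instr.opname
    return None

try:
    1 / 0
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    name = opname_at(tb)
    print(name)
```

A caller could then compare `name` against "YIELD_VALUE" instead of a raw
byte value, which survives changes to the instruction encoding.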

Jean-Paul


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Martin v. Löwis
 On Windows, would a C extension author be able to distribute a single
 binary (bdist_wininst/bdist_msi) which would be compatible with
 with-LLVM and without-LLVM builds of Python?

When PEP 384 gets implemented, you not only get that, but you will also
be able to use the same extension module for 3.2, 3.3, 3.4, etc, with
or without U-S.

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Terry Reedy

On 1/29/2010 5:55 PM, Collin Winter wrote:

Hey Terry,

On Fri, Jan 29, 2010 at 2:47 PM, Terry Reedytjre...@udel.edu  wrote:


Several years, there was serious consideration of switching to a
registerbased vm, which would have been even more of a change. Since I
learned 1.4, Guido has consistently insisted that the CPython vm is not part
of the language definition and, as far as I know, he has rejected any
byte-code hackery in the stdlib. While he is not one to, say, randomly
permute the codes just to frustrate such hacks, I believe he has always
considered vm details private and subject to change and any usage thereof
'at one's own risk'.


No, I agree entirely: bytecode is an implementation detail that could
be changed at any time. But like reference counting, it's an
implementation detail that people have -- for better or worse -- come
to rely on. My only point was that a switch to wordcode should be
announced prominently in the release notes and not assumed to be
without impact on user code.


Of course. If it does not already, What's New could routinely have a 
section "Changes in the CPython Implementation" that would include any 
such change that might impact people. It could also include a warning not 
to depend on certain details even if they are mentioned.


tjr




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Terry Reedy

On 1/29/2010 6:45 PM, Martin v. Löwis wrote:

On Windows, would a C extension author be able to distribute a single
binary (bdist_wininst/bdist_msi) which would be compatible with
with-LLVM and without-LLVM builds of Python?


When PEP 384 gets implemented, you not only get that, but you will also
be able to use the same extension module for 3.2, 3.3, 3.4, etc, with
or without U-S.


Even if CPython changes the VC compiler version? In any case, greater 
usability of binaries would be really nice.


tjr




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Scott Dial
On 1/29/2010 8:43 AM, Cesare Di Mauro wrote:
 If you use Mercurial, you can grab a local copy this way:
 
 hg clone https://wpython10.wpython2.googlecode.com/hg/ wpython2-wpython10
 
 Wpython is intended to run on any platform where CPython 2.6.4 runs.
 

Cesare, just FYI, your Hg repository has lost the execute bits on some
files (namely ./configure and ./Parser/asdl_c.py), so it does not
quite build out-of-the-box.

I took the liberty of cloning your repo into my laptop's VirtualBox
instance of Ubuntu. I ran the default performance tests from the U-S
repo, with VirtualBox at highest priority. As a sanity check, I ran it
against the U-S trunk. I think the numbers speak for themselves.

$ ./perf.py -r -b default \
  ../python-2.6.4/python \
  ../wpython2-wpython10/python

Report on Linux narf-ubuntu 2.6.31-16-generic #53-Ubuntu SMP Tue Dec 8
04:01:29 UTC 2009 i686
Total CPU cores: 1

### 2to3 ###
Min: 21.629352 -> 20.893306: 1.0352x faster
Avg: 22.245390 -> 21.061316: 1.0562x faster
Significant (t=4.416683)
Stddev: 0.58810 -> 0.11618: 5.0620x smaller
Timeline: http://tinyurl.com/yawzt5z

### django ###
Min: 1.105662 -> 1.115090: 1.0085x slower
Avg: 1.117930 -> 1.131781: 1.0124x slower
Significant (t=-11.024923)
Stddev: 0.00729 -> 0.01023: 1.4027x larger
Timeline: http://tinyurl.com/ydzn6e6

### nbody ###
Min: 0.535204 -> 0.559320: 1.0451x slower
Avg: 0.558861 -> 0.572902: 1.0251x slower
Significant (t=-7.484374)
Stddev: 0.01309 -> 0.01344: 1.0272x larger
Timeline: http://tinyurl.com/ygjnh5x

### slowpickle ###
Min: 0.788558 -> 0.757067: 1.0416x faster
Avg: 0.799407 -> 0.774368: 1.0323x faster
Significant (t=12.772246)
Stddev: 0.00686 -> 0.01836: 2.6759x larger
Timeline: http://tinyurl.com/y8g3zjg

### slowspitfire ###
Min: 1.200616 -> 1.218915: 1.0152x slower
Avg: 1.229028 -> 1.255978: 1.0219x slower
Significant (t=-8.165772)
Stddev: 0.02133 -> 0.02519: 1.1808x larger
Timeline: http://tinyurl.com/y9gg2x5

### slowunpickle ###
Min: 0.355483 -> 0.347013: 1.0244x faster
Avg: 0.369828 -> 0.359714: 1.0281x faster
Significant (t=6.817449)
Stddev: 0.01008 -> 0.01089: 1.0804x larger
Timeline: http://tinyurl.com/ybf3qg9

### spambayes ###
Min: 0.316724 -> 0.314673: 1.0065x faster
Avg: 0.327262 -> 0.332370: 1.0156x slower
Significant (t=-3.690136)
Stddev: 0.00598 -> 0.01248: 2.0876x larger
Timeline: http://tinyurl.com/ydck59l

$ ./perf.py -r -b default \
  ../python-2.6.4/python \
  ../unladen-swallow/python

### 2to3 ###
Min: 24.833552 -> 24.433527: 1.0164x faster
Avg: 25.241577 -> 24.878355: 1.0146x faster
Not significant
Stddev: 0.39099 -> 0.28158: 1.3886x smaller
Timeline: http://tinyurl.com/yc7nm79

### django ###
Min: 1.153900 -> 0.892072: 1.2935x faster
Avg: 1.198777 -> 0.926776: 1.2935x faster
Significant (t=61.465586)
Stddev: 0.03914 -> 0.02065: 1.8949x smaller
Timeline: http://tinyurl.com/ykm6lnk

### nbody ###
Min: 0.541474 -> 0.307949: 1.7583x faster
Avg: 0.564526 -> 0.327311: 1.7247x faster
Significant (t=57.615664)
Stddev: 0.01784 -> 0.03711: 2.0798x larger
Timeline: http://tinyurl.com/ylmezw5

### slowpickle ###
Min: 0.832452 -> 0.607266: 1.3708x faster
Avg: 0.860438 -> 0.651645: 1.3204x faster
Significant (t=20.779110)
Stddev: 0.01559 -> 0.09926: 6.3665x larger
Timeline: http://tinyurl.com/yaktykw

### slowspitfire ###
Min: 1.204681 -> 1.038169: 1.1604x faster
Avg: 1.236843 -> 1.085254: 1.1397x faster
Significant (t=20.203736)
Stddev: 0.02417 -> 0.07103: 2.9391x larger
Timeline: http://tinyurl.com/ykgmop5

### slowunpickle ###
Min: 0.374148 -> 0.279743: 1.3375x faster
Avg: 0.398137 -> 0.315630: 1.2614x faster
Significant (t=16.069155)
Stddev: 0.01333 -> 0.04958: 3.7203x larger
Timeline: http://tinyurl.com/y9b5rza

### spambayes ###
Min: 0.330391 -> 0.302988: 1.0904x faster
Avg: 0.349153 -> 0.394819: 1.1308x slower
Not significant
Stddev: 0.01158 -> 0.35049: 30.2739x larger
Timeline: http://tinyurl.com/ylq8sef

-- 
Scott Dial
sc...@scottdial.com
scod...@cs.indiana.edu


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Nick Coghlan
Terry Reedy wrote:
 On 1/29/2010 6:45 PM, Martin v. Löwis wrote:
 When PEP 384 gets implemented, you not only get that, but you will also
 be able to use the same extension module for 3.2, 3.3, 3.4, etc, with
 or without U-S.
 
 Even if CPython changes VC compiler version? In any case, greater
 usability of binaries would be really nice.

Yep, one of the main goals of PEP 384 is to identify and cordon off all
of the APIs that are dependent on the version of the C runtime and
exclude them from the stable ABI.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Cesare Di Mauro
2010/1/30  exar...@twistedmatrix.com

 On 10:55 pm, collinwin...@google.com wrote:


 That people are directly munging CPython
 bytecode means that CPython should provide a better, more abstract way
 to do the same thing that's more resistant to these kinds of changes.


 It might be helpful to hear more about how the wordcode implementation
 differs from the bytecode implementation.  It's challenging to abstract from
 a single data point. :)

  Jean-Paul


The wordcode structure is simple. You can find information here:
http://wpython2.googlecode.com/files/Beyond%20Bytecode%20-%20A%20Wordcode-based%20Python.pdf
Slide 6 provides a description, slide 7 gives details about the
structure along with some examples, and slide 9 explains how I mapped
most bytecodes into 6 families.

However, wordcode internals can be complicated to pretty-print, because
they may carry a lot of information. You can take a look at opcode.h
(http://code.google.com/p/wpython2/source/browse/Include/opcode.h?repo=wpython10)
and dis.py
(http://code.google.com/p/wpython2/source/browse/Lib/dis.py?repo=wpython10,
function common_disassemble) to understand why this happens.

Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-29 Thread Cesare Di Mauro
2010/1/30 Scott Dial
scott+python-...@scottdial.com


 Cesare, just FYI, your Hg repository has lost the execute bits on some
 files (namely ./configure and ./Parser/asdl_c.py), so it does not
 quite build out-of-the-box.


That's probably because I worked on Windows. I have to address this issue.
Thanks.


 I took the liberty of cloning your repo into my laptop's VirtualBox
 instance of Ubuntu. I ran the default performance tests from the U-S
 repo, with VirtualBox at highest priority. As a sanity check, I ran it
 against the U-S trunk. I think the numbers speak for themselves.

  --
  Scott Dial


I see. I don't know why you got those numbers. Until now, what I have
seen is better performance on average with wpython.

In a previous mail, Collin stated that when they implemented wordcodes
in U-S they got benefits from the new opcode structure.

Maybe more tests on different hardware/platforms will help.

Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Tim Wintle
On Wed, 2010-01-27 at 10:25 -0800, Collin Winter wrote:
 On Wed, Jan 27, 2010 at 7:26 AM, William Dode w...@flibuste.net wrote:
  I imagine that startup time and memory was also critical for V8.
 
 Startup time and memory usage are arguably *more* critical for a
 Javascript implementation, since if you only spend a few milliseconds
 executing Javascript code, but your engine takes 10-20ms to startup,
 then you've lost. Also, a minimized memory profile is important if you
 plan to embed your JS engine on a mobile platform, for example, or you
 need to run in a heavily-multiprocessed browser on low-memory consumer
 desktops and netbooks.

(as a casual reader)

I'm not sure if this has been specifically mentioned before, but there
are certainly multiple uses for Python - and from the arguments I've
seen on-list, U-S appears mainly to appeal to one of them.

The three main uses I see everyday are:

 * simple configuration-style applications (I'm thinking of gnome
configuration options etc.)

 * shell-script replacements

 * Long running processes (web apps, large desktop applications etc)

(in addition, I know several people working on python applications for
mobile phones)

I think the performance/memory tradeoffs being discussed are fine for
the long-running / server apps (20MB on an 8GB machine is negligible) -
but I think the majority of applications probably fall into the other
categories - the extra startup time and memory usage each time a
server monitoring script / cronjob runs becomes noticeable if you have
enough of them.

 Among other reasons we chose LLVM, we didn't want to write code
 generators for each platform we were targeting. LLVM has done this for
 us. V8, on the other hand, has to implement a new code generator for
 each new platform they want to target. This is non-trivial work: it
 takes a long time, has a lot of finicky details, and it greatly
 increases the maintenance burden on the team.

I'm not involved in either, but as someone who started watching the
commit list on V8, I'd thoroughly agree that it would be completely
impractical for python-dev to work as the V8 team seem to. 

What appear to be minor patches very frequently have to be modified by
reviewers over technicalities, and I can't imagine that level of
intricate work happening efficiently without a large (funded) core team
focusing on code generation.

Personally, picking up the (current) python code, I was able to
comfortably start modifying it within a few hours - with the V8 codebase
I still haven't got a clue what's going on where, to be honest.

Tim Wintle



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread skip

Tim> I think the performance/memory tradeoffs being discussed are fine
Tim> for the long-running / server apps (20mb on a 8Gb machine is
Tim> negligable)

At work our apps' memory footprints are dominated by the Boost-wrapped C++
libraries we use.  100MB VM usage at run-time is pretty much the starting
point.  It just goes up from there.  We'd probably not notice an extra 20MB
if it was shared.

Skip



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Daniel Fetchinson
A question from someone writing C extension modules for python but not
involved in python-dev:

It has been said that compiling python with --without-llvm would not
include unladen swallow and would bypass llvm together with all C++.
Basically, as I understand it, --without-llvm gives the 'usual'
cpython we have today. Is this correct?

If this is correct, I still have one worry: since I wouldn't want to
touch the Python install most Linux distributions ship or most
Windows/Mac users install (or what MS/Apple ships), I will simply have
no choice but to work with the Python variant that is installed.

Is it anticipated that most linux distros and MS/Apple will ship the
python variant that comes with llvm/US? I suppose the goal of merging
llvm/US into python 3.x is this.

If this is the case then I, as a C extension author, will have no
choice but to work with a Python installation that includes llvm/US.
Which, as far as I understand it, means dealing with C++ issues. Is
this correct? Or would the same pure C extension module, compiled with
C-only compilers, work with both llvm-US-python and cpython?

Cheers,
Daniel

-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Nick Coghlan
Daniel Fetchinson wrote:
 If this is the case then I, as a C extension author, will have no
 choice than working with a python installation that includes llvm/US.
 Which, as far as I understand it, means dealing with C++ issues. Is
 this correct? Or the same pure C extension module compiled with C-only
 compilers would work with llvm-US-python and cpython?

As a C extension author you will be fine (the source and linker
interface will all still be C-only).

C++ extension authors may need to care about making the version of the
C++ runtime that they link against match that used internally by
CPython/LLVM (although I'm a little unclear on that point myself).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Stefan Behnel
Daniel Fetchinson, 28.01.2010 13:19:
 A question from someone writing C extension modules for python

I doubt that this will have any impact on C extension developers.


 If this is correct, I still have one worry: since I wouldn't want to
 touch the python install most linux distributions ship or most
 windows/mac users install (or what MS/Apple ships) I will simply have
 no choice than working with the python variant that is installed.
 
 Is it anticipated that most linux distros and MS/Apple will ship the
 python variant that comes with llvm/US? I suppose the goal of merging
 llvm/US into python 3.x is this.

Depends on the distro. My guess is that they will likely provide both as
separate packages (unless one turns out to be clearly 'better'), and
potentially even support their parallel installation. That's not
unprecedented, just think of different JVM implementations (or even just
different Python versions).


 If this is the case then I, as a C extension author, will have no
 choice than working with a python installation that includes llvm/US.
 Which, as far as I understand it, means dealing with C++ issues.

I don't think so. Replacing the eval loop has no impact on the C-API
commonly used by binary extensions. It may have an impact on programs that
embed the Python interpreter, but not the other way round.

Remember that you usually don't have to compile the Python interpreter
yourself. Once it's a binary, it doesn't really matter anymore in what
language(s) it was originally written.


 Or the same pure C extension module compiled with C-only
 compilers would work with llvm-US-python and cpython?

That's to be expected.

Stefan



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Daniel Fetchinson
 If this is the case then I, as a C extension author, will have no
 choice than working with a python installation that includes llvm/US.
 Which, as far as I understand it, means dealing with C++ issues. Is
 this correct? Or the same pure C extension module compiled with C-only
 compilers would work with llvm-US-python and cpython?

 As a C extension author you will be fine (the source and linker
 interface will all still be C-only).

Thanks, that is good to hear!

Cheers,
Daniel

 C++ extension authors may need to care about making the version of the
 C++ runtime that they link against match that used internally by
 CPython/LLVM (although I'm a little unclear on that point myself).


-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Daniel Fetchinson
 A question from someone writing C extension modules for python

 I doubt that this will have any impact on C extension developers.


 If this is correct, I still have one worry: since I wouldn't want to
 touch the python install most linux distributions ship or most
 windows/mac users install (or what MS/Apple ships) I will simply have
 no choice than working with the python variant that is installed.

 Is it anticipated that most linux distros and MS/Apple will ship the
 python variant that comes with llvm/US? I suppose the goal of merging
 llvm/US into python 3.x is this.

 Depends on the distro. My guess is that they will likely provide both as
 separate packages

Yes, it's clear that various packages will be available, but what I was
asking about is the default Python version that gets installed if a
user installs a vanilla version of a particular Linux distro. It's a
big difference for developers of C extension modules to say "just
install this module and go" versus "first download Python version
so-and-so and then install my module and go".

But as I understand from you and Nick, this will not be a problem for
C extension module authors.

 (unless one turns out to be clearly 'better'), and
 potentially even support their parallel installation. That's not
 unprecedented, just think of different JVM implementations (or even just
 different Python versions).


 If this is the case then I, as a C extension author, will have no
 choice than working with a python installation that includes llvm/US.
 Which, as far as I undestand it, means dealing with C++ issues.

 I don't think so. Replacing the eval loop has no impact on the C-API
 commonly used by binary extensions. It may have an impact on programs that
 embed the Python interpreter, but not the other way round.

 Remember that you usually don't have to compile the Python interpreter
 yourself. Once it's a binary, it doesn't really matter anymore in what
 language(s) it was originally written.

 Or the same pure C extension module compiled with C-only
 compilers would work with llvm-US-python and cpython?

 That's to be expected.

Okay, that's great, basically Nick confirmed the same thing.

Cheers,
Daniel

-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Paul Moore
On 28 January 2010 12:58, Daniel Fetchinson fetchin...@googlemail.com wrote:
 If this is the case then I, as a C extension author, will have no
 choice than working with a python installation that includes llvm/US.
 Which, as far as I understand it, means dealing with C++ issues. Is
 this correct? Or the same pure C extension module compiled with C-only
 compilers would work with llvm-US-python and cpython?

 As a C extension author you will be fine (the source and linker
 interface will all still be C-only).

 Thanks, that is good to hear!

So, just to extend the question a little (or reiterate, it may be that
this is already covered and I didn't fully understand):

On Windows, would a C extension author be able to distribute a single
binary (bdist_wininst/bdist_msi) which would be compatible with
with-LLVM and without-LLVM builds of Python?

Actually, if we assume that only a single Windows binary, presumably
with-LLVM, will be distributed on python.org, I'm probably being
over-cautious here, as distributing binaries compatible with the
python.org release should be sufficient. Nevertheless, I'd be
interested in the answer.

Paul


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Reid Kleckner
On Thu, Jan 28, 2010 at 1:14 PM, Paul Moore p.f.mo...@gmail.com wrote:
 So, just to extend the question a little (or reiterate, it may be that
 this is already covered and I didn't fully understand):

 On Windows, would a C extension author be able to distribute a single
 binary (bdist_wininst/bdist_msi) which would be compatible with
 with-LLVM and without-LLVM builds of Python?

 Actually, if we assume that only a single Windows binary, presumably
 with-LLVM, will be distributed on python.org, I'm probably being
 over-cautious here, as distributing binaries compatible with the
 python.org release should be sufficient. Nevertheless, I'd be
 interested in the answer.

We have broken ABI compatibility with Python 2.6, but unladen
--without-llvm should be ABI compatible with itself.  In the future we
would probably want to set up a buildbot to make sure we don't mess
this up.  One thing we have done is to add #ifdef'd attributes to
things like the code object, but so long as you don't touch those
attributes, you should be fine.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Cesare Di Mauro
Hi Collin,

Thanks for the useful links.

I think that superinstructions require a bit more work, because they
aren't just a rearrangement of opcode arguments. For example, in wpython
1.1 (which I'll release next month) I've introduced a CALL_SUB opcode to
handle all kinds of function types, so the 2 words pack together:
- the opcode (CALL_SUB);
- the function type and flags (normal, VAR, KW, procedure);
- the number of arguments;
- the number of keyword arguments.

Superinstructions aren't intended to be a simple drop-in replacement of
existing bytecodes. They can carry new ideas and implement them in a
versatile and efficient way.
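The four fields above could be laid out across two 16-bit words along these lines. This is a hypothetical sketch, not wpython's real encoding; the opcode number and flag bits are invented for illustration:

```python
# Hypothetical two-word CALL_SUB layout (illustrative values, not wpython's).
CALL_SUB = 0x40           # invented opcode number
FLAG_VAR, FLAG_KW = 1, 2  # invented function-type flag bits

def encode_call_sub(flags, nargs, nkwargs):
    # Word 1: opcode in the low byte, function type/flags in the high byte.
    # Word 2: positional-arg count in the low byte, keyword-arg count high.
    word1 = CALL_SUB | (flags << 8)
    word2 = nargs | (nkwargs << 8)
    return word1, word2

def decode_call_sub(word1, word2):
    assert word1 & 0xFF == CALL_SUB
    return word1 >> 8, word2 & 0xFF, word2 >> 8

w1, w2 = encode_call_sub(FLAG_VAR | FLAG_KW, 2, 1)
assert decode_call_sub(w1, w2) == (FLAG_VAR | FLAG_KW, 2, 1)
```

The design point is that one superinstruction dispatch replaces what classic bytecode expresses with several distinct CALL_FUNCTION* opcodes.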

Anyway, I don't plan to continue with wpython: 1.1 will be the last
version I'll release (although I initially planned 1.2 and 1.3 for this
year, and 2.0 for 2011), for several reasons.

2.7 is the last planned 2.x release, and once it reaches alpha state
there's no chance of introducing the wordcode model into it.

3.2 or later will be good candidates, but I don't want to make a new project
and fork again. Forking is a waste of time and resources (I spent over 1
year of my spare time just to prove an idea).

I think that wpython as a proof-of-concept has done its work, showing
its potential.

If the python-dev community is interested, I can work on a 3.x branch,
porting all the optimizations I made (and many others that I've planned
to implement) one step at a time, so that expert people can carefully
check and validate each change.

Cesare

2010/1/26 Collin Winter collinwin...@google.com

 Hi Cesare,

 On Tue, Jan 26, 2010 at 12:29 AM, Cesare Di Mauro
 cesare.di.ma...@gmail.com wrote:
  Hi Collin,
 
  One more question: is it easy to support more opcodes, or a different
 opcode
  structure, in Unladen Swallow project?

 I assume you're asking about integrating WPython. Yes, adding new
 opcodes to Unladen Swallow is still pretty easy. The PEP includes a
 section on this,

 http://www.python.org/dev/peps/pep-3146/#experimenting-with-changes-to-python-or-cpython-bytecode
 ,
 though it doesn't cover something more complex like converting from
 bytecode to wordcode, as a purely hypothetical example ;) Let me know
 if that section is unclear or needs more data.

 Converting from bytecode to wordcode should be relatively
 straightforward, assuming that the arrangement of opcode arguments is
 the main change. I believe the only real place you would need to
 update is the JIT compiler's bytecode iterator (see

 http://code.google.com/p/unladen-swallow/source/browse/trunk/Util/PyBytecodeIterator.cc
 ).
 Depending on the nature of the changes, the runtime feedback system
 might need to be updated, too, but it wouldn't be too difficult, and
 the changes should be localized.

 Collin Winter
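As context for the bytecode iterator Collin mentions in the quoted message, here is a toy sketch of walking a classic (pre-wordcode) CPython-style bytecode stream, where an instruction is one byte, or three bytes when the opcode takes a 16-bit little-endian argument. The threshold constant and opcode numbers below are illustrative, not taken from any real opcode.h:

```python
# Toy iterator over a classic variable-length bytecode stream.
HAVE_ARGUMENT = 90  # illustrative: opcodes >= this carry a 2-byte argument

def iter_bytecode(code):
    i = 0
    while i < len(code):
        op = code[i]
        if op >= HAVE_ARGUMENT:
            # Little-endian 16-bit argument in the next two bytes.
            arg = code[i + 1] | (code[i + 2] << 8)
            i += 3
        else:
            arg = None
            i += 1
        yield op, arg

# Synthetic stream: opcode 100 with argument 1, then two argument-less opcodes.
stream = bytes([100, 1, 0, 23, 83])
assert list(iter_bytecode(stream)) == [(100, 1), (23, None), (83, None)]
```

A wordcode stream removes the branch in the loop body: every instruction is the same width, which is exactly why the JIT-side iterator change Collin describes is straightforward.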



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread William Dode
Hi (as a simple user),

I'd like to know why you didn't follow the same approach as V8
JavaScript, or, conversely, why the V8 team didn't choose LLVM?

I imagine that startup time and memory was also critical for V8.

thanks


-- 
William Dodé - http://flibuste.net
Informaticien Indépendant



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread David Malcolm
On Thu, 2010-01-21 at 14:46 -0800, Jeffrey Yasskin wrote:
 On Thu, Jan 21, 2010 at 10:09 AM, Hanno Schlichting ha...@hannosch.eu wrote:
  I'm a relative outsider to core development (I'm just a Plone release
  manager), but'll allow myself a couple of questions. Feel free to
  ignore them, if you think they are not relevant at this point :-) I'd
  note that I'm generally enthusiastic and supportive of the proposal :)
  As a data point, I can add that all tests of Zope 2.12 / Plone 4.0 and
  their dependency set run fine under Unladen Swallow.
 
 Hi, thanks for checking that!
 
  On Wed, Jan 20, 2010 at 11:27 PM, Collin Winter collinwin...@google.com 
  wrote:
  We have chosen to reuse a set of existing compiler libraries called LLVM
  [#llvm]_ for code generation and code optimization.
 
  Would it be prudent to ask for more information about the llvm
  project? Especially in terms of its non-code related aspects. I can
  try to hunt down this information myself, but as a complete outsider
  to the llvm project this takes much longer, compared to someone who
  has interacted with the project as closely as you have.
 
  Questions like:
 

[snip]

  Managing LLVM Releases, C++ API Changes
  ---
 
  LLVM is released regularly every six months. This means that LLVM may be
  released two or three times during the course of development of a CPython 
  3.x
  release. Each LLVM release brings newer and more powerful optimizations,
  improved platform support and more sophisticated code generation.
 
  How does the support and maintenance policy of llvm releases look
  like? If a Python version is pegged to a specific llvm release, it
  needs to be able to rely on critical bug fixes and security fixes to
  be made for that release for a rather prolonged time. How does this
  match the llvm policies given their frequent time based releases?
 
 LLVM doesn't currently do dot releases. So, once 2.7 is released,
 it's very unlikely there would be a 2.6.1. They do make release
 branches, and they've said they're open to dot releases if someone
 else does them, so if we need a patch release for some issue we could
 make it ourselves. I recognize that's not ideal, but I also expect
 that we'll be able to work around LLVM bugs with changes in Python,
 rather than needing to change LLVM.

[snip]

(I don't think the following has specifically been asked yet, though
this thread has become large)

As a downstream distributor of Python, a major pain point for me is when
Python embeds a copy of a library's source code, rather than linking
against a system library (zlib, libffi and expat spring to mind): if
bugs (e.g. security issues) arise in a library, I have to go chasing
down all of the embedded copies of the library, rather than having
dynamic linking deal with it for me.

So I have some concerns about having a copy of LLVM embedded in Python's
source tree, which I believe other distributors of Python would echo;
our rough preference ordering is:

   dynamic linking > static linking > source code copy

I would like CPython to be faster, and if it means dynamically linking
against the system LLVM, that's probably OK (though I have some C++
concerns already discussed elsewhere in this thread).  If it means
statically linking, or worse, having a separate copy of the LLVM source
as an implementation detail of CPython, that would be painful.

I see that the U-S developers have run into issues in LLVM itself,
and fixed them (bravo!), and seem to have done a good job of sending
those fixes back to LLVM for inclusion. [1]

Some questions for the U-S devs:
  - will it be possible to dynamically link against the system LLVM?
(the PEP currently seems to speak of statically linking against it)
  - does the PEP anticipate that the Python source tree will start
embedding a copy of the LLVM source tree?
  - if so, what can be done to mitigate the risk of drift from upstream?
(this is the motivation for some of the following questions)
  - to what extent do you anticipate further changes needed in LLVM for
U-S? (given the work you've already put in, I expect the answer is
probably a lot, but we can't know what those will be yet)
  - do you anticipate all of these changes being accepted by the
upstream LLVM maintainers?
  - to what extent would these changes be likely to break API and/or ABI
compat with other users of LLVM (i.e. would a downstream distributor of
CPython be able to simply apply the necessary patches to the system LLVM
in order to track? if they did so, would it require a recompilation of
all of the other users of the system LLVM?)
  - if Python needed to make a dot-release of LLVM, would LLVM allow
Python to increment the SONAME version identifying the ABI within the
DSO (.so) files, and guarantee not to reuse that SONAME version? (so
that automated ABI dependency tracking in e.g. RPM can identify the ABI
incompatibilities without being stomped on by a future upstream LLVM
release)
  - 

Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread Collin Winter
Hi William,

On Wed, Jan 27, 2010 at 7:26 AM, William Dode w...@flibuste.net wrote:
 Hi (as a simple user),

 I'd like to know why you didn't followed the same way as V8 Javascript,
 or the opposite, why for V8 they didn't choose llvm ?

 I imagine that startup time and memory was also critical for V8.

Startup time and memory usage are arguably *more* critical for a
Javascript implementation, since if you only spend a few milliseconds
executing Javascript code, but your engine takes 10-20ms to startup,
then you've lost. Also, a minimized memory profile is important if you
plan to embed your JS engine on a mobile platform, for example, or you
need to run in a heavily-multiprocessed browser on low-memory consumer
desktops and netbooks.

Among other reasons we chose LLVM, we didn't want to write code
generators for each platform we were targeting. LLVM has done this for
us. V8, on the other hand, has to implement a new code generator for
each new platform they want to target. This is non-trivial work: it
takes a long time, has a lot of finicky details, and it greatly
increases the maintenance burden on the team. We felt that requiring
python-dev to understand code generation on multiple platforms was a
distraction from what python-dev is trying to do -- develop Python. V8
still doesn't have x86-64 code generation working on Windows
(http://code.google.com/p/v8/issues/detail?id=330), so I wouldn't
underestimate the time required for that kind of project.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread skip

David> As a downstream distributor of Python, a major pain point for me
David> is when Python embeds a copy of a library's source code, rather
David> than linking against a system library (zlib, libffi and expat
David> spring to mind): if bugs (e.g. security issues) arise in a
David> library, I have to go chasing down all of the embedded copies of
David> the library, rather than having dynamic linking deal with it for
David> me.

The Unladen Swallow developers can correct me if I'm wrong, but I believe
the Subversion checkout holds a copy of LLVM strictly to speed development.
If the U-S folks find and fix a bug in LLVM they can shoot the fix upstream,
apply it locally, then keep moving forward without waiting for a new release
of LLVM.  Support exists now for building with an external LLVM, and I would
expect that as Unladen Swallow moves out of active development into a more
stable phase of its existence that it will probably stop embedding LLVM.

Skip


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread William Dode
On 27-01-2010, Collin Winter wrote:
 Hi William,

 On Wed, Jan 27, 2010 at 7:26 AM, William Dode w...@flibuste.net wrote:
 Hi (as a simple user),

 I'd like to know why you didn't followed the same way as V8 Javascript,
 or the opposite, why for V8 they didn't choose llvm ?

 I imagine that startup time and memory was also critical for V8.

 Startup time and memory usage are arguably *more* critical for a
 Javascript implementation, since if you only spend a few milliseconds
 executing Javascript code, but your engine takes 10-20ms to startup,
 then you've lost. Also, a minimized memory profile is important if you
 plan to embed your JS engine on a mobile platform, for example, or you
 need to run in a heavily-multiprocessed browser on low-memory consumer
 desktops and netbooks.

 Among other reasons we chose LLVM, we didn't want to write code
 generators for each platform we were targeting. LLVM has done this for
 us. V8, on the other hand, has to implement a new code generator for
 each new platform they want to target. This is non-trivial work: it
 takes a long time, has a lot of finicky details, and it greatly
 increases the maintenance burden on the team. We felt that requiring
 python-dev to understand code generation on multiple platforms was a
 distraction from what python-dev is trying to do -- develop Python. V8
 still doesn't have x86-64 code generation working on Windows
 (http://code.google.com/p/v8/issues/detail?id=330), so I wouldn't
 underestimate the time required for that kind of project.

Thanks for this answer.

Are the startup time and memory consumption a limitation of LLVM that
its developers plan to resolve, or are they specific to the current
Python integration? I mean, is the work to correct this more on U-S or
on LLVM?

thanks (and sorry for my english !)

-- 
William Dodé - http://flibuste.net
Informaticien Indépendant



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread Collin Winter
Hi David,

On Wed, Jan 27, 2010 at 8:37 AM, David Malcolm dmalc...@redhat.com wrote:
[snip]
 As a downstream distributor of Python, a major pain point for me is when
 Python embeds a copy of a library's source code, rather than linking
 against a system library (zlib, libffi and expat spring to mind): if
 bugs (e.g. security issues) arise in a library, I have to go chasing
 down all of the embedded copies of the library, rather than having
 dynamic linking deal with it for me.

 So I have some concerns about having a copy of LLVM embedded in Python's
 source tree, which I believe other distributors of Python would echo;
 our rough preference ordering is:

   dynamic linking > static linking > source code copy

 I would like CPython to be faster, and if it means dynamically linking
 against the system LLVM, that's probably OK (though I have some C++
 concerns already discussed elsewhere in this thread).  If it means
 statically linking, or worse, having a separate copy of the LLVM source
 as an implementation detail of CPython, that would be painful.

We absolutely do not want CPython to include a copy of LLVM in its
source tree. Unladen Swallow has done this to make it easier to pick
up changes to LLVM's codebase as we make them, but this is not a
viable model for CPython's long-term development. As mentioned in
http://www.python.org/dev/peps/pep-3146/#managing-llvm-releases-c-api-changes,
one of our full-time engineers is tasked with fixing all critical
issues in LLVM before LLVM's 2.7 release so that CPython can simply
use that release.

We are currently statically linking against LLVM because that's what
LLVM best supports, but that's not set in stone. We can make LLVM
better support shared linking; it's one of the open issues as to
whether this is important to python-dev
(http://www.python.org/dev/peps/pep-3146/#open-issues), and it sounds
like the answer is Yes. Our priorities will adjust accordingly.

To answer the individual bullet points:

 I see that the u-s developers have run into issues in LLVM itself,
 and fixed them (bravo!), and seem to have done a good job of sending
 those fixes back to LLVM for inclusion. [1]

 Some questions for the U-S devs:
  - will it be possible to dynamically link against the system LLVM?
 (the PEP currently seems to speak of statically linking against it)

We currently link statically, but we will fix LLVM to better support
shared linking.

  - does the PEP anticipate that the Python source tree will start
 embedding a copy of the LLVM source tree?

No, that would be a terrible idea, and we do not endorse it.

  - to what extent do you anticipate further changes needed in LLVM for
 U-S? (given the work you've already put in, I expect the answer is
 probably a lot, but we can't know what those will be yet)

We have fixed all the critical issues that our testing has exposed and
have now moved on to "nice to have" items that will aid future
optimizations. LLVM 2.7 will probably be released in May, so we still
have time to fix LLVM as needed.

  - do you anticipate all of these changes being accepted by the
 upstream LLVM maintainers?

Three of the Unladen Swallow committers are also LLVM committers. Our
patches have historically been accepted enthusiastically by the LLVM
maintainers, and we believe this warm relationship will continue.

  - to what extent would these changes be likely to break API and/or ABI
 compat with other users of LLVM (i.e. would a downstream distributor of
 CPython be able to simply apply the necessary patches to the system LLVM
 in order to track? if they did so, would it require a recompilation of
 all of the other users of the system LLVM?)

As mentioned in
http://www.python.org/dev/peps/pep-3146/#managing-llvm-releases-c-api-changes,
every LLVM release introduces incompatibilities to the C++ API. Our
experience has been that these API changes are easily remedied. We
recommend that CPython depend on a single version of LLVM and not try
to track LLVM trunk.

  - if Python needed to make a dot-release of LLVM, would LLVM allow
 Python to increment the SONAME version identifying the ABI within the
 DSO (.so) files, and guarantee not to reuse that SONAME version? (so
 that automated ABI dependency tracking in e.g. RPM can identify the ABI
 incompatibilities without being stomped on by a future upstream LLVM
 release)

Nick Lewycky is better-positioned to answer that.

  - is your aim to minimize the difference between upstream LLVM and the
 U-S copy of LLVM?  (yes, probably)
  - will you publish/track the differences between upstream LLVM and the
 U-S copy of LLVM somewhere? I see that you currently do this here: [2],
 though it strikes me that the canonical representation of the LLVM
 you're using is in the embedded copy in Utils/llvm, rather than e.g. a
 recipe like "grab upstream release foo and apply this short list of
 patches" (a distributed SCM might help here, or a tool like jhbuild [3])

We generally use LLVM trunk verbatim, but will 

Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread Collin Winter
Hi William,

On Wed, Jan 27, 2010 at 11:02 AM, William Dode w...@flibuste.net wrote:
 The startup time and memory comsumption are a limitation of llvm that
 their developers plan to resolve or is it only specific to the current
 python integration ? I mean the work to correct this is more on U-S or
 on llvm ?

Part of it is LLVM, part of it is Unladen Swallow. LLVM is very
flexible, and there's a price for that. We have also found and fixed
several cases of quadratic memory usage in LLVM optimization passes,
and there may be more of those lurking around. On the Unladen Swallow
side, there are doubtless things we can do to improve our usage of
LLVM; http://code.google.com/p/unladen-swallow/issues/detail?id=68 has
most of our work on this, and there are still more ideas to implement.

Part of the issue is that Unladen Swallow is using LLVM's JIT
infrastructure in ways that it really hasn't been used before, and so
there's a fair amount of low-hanging fruit left in LLVM that no-one
has needed to pick yet.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread Jeffrey Yasskin
On Wed, Jan 27, 2010 at 11:16 AM, Collin Winter collinwin...@google.com wrote:
 We absolutely do not want CPython to include a copy of LLVM in its
 source tree. Unladen Swallow has done this to make it easier to pick
 up changes to LLVM's codebase as we make them, but this is not a
 viable model for CPython's long-term development. As mentioned in
 http://www.python.org/dev/peps/pep-3146/#managing-llvm-releases-c-api-changes,
 one of our full-time engineers is tasked with fixing all critical
 issues in LLVM before LLVM's 2.7 release so that CPython can simply
 use that release.

I'm now tracking my to-do list for LLVM 2.7 in
http://code.google.com/p/unladen-swallow/issues/detail?id=131.


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread David Malcolm
On Wed, 2010-01-27 at 11:34 -0800, Jeffrey Yasskin wrote:
 On Wed, Jan 27, 2010 at 11:16 AM, Collin Winter collinwin...@google.com 
 wrote:
  We absolutely do not want CPython to include a copy of LLVM in its
  source tree. Unladen Swallow has done this to make it easier to pick
  up changes to LLVM's codebase as we make them, but this is not a
  viable model for CPython's long-term development. As mentioned in
  http://www.python.org/dev/peps/pep-3146/#managing-llvm-releases-c-api-changes,
  one of our full-time engineers is tasked with fixing all critical
  issues in LLVM before LLVM's 2.7 release so that CPython can simply
  use that release.
 
 I'm now tracking my to-do list for LLVM 2.7 in
 http://code.google.com/p/unladen-swallow/issues/detail?id=131.

Many thanks for addressing these concerns!

Dave



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-27 Thread Nick Lewycky
On 27 January 2010 08:37, David Malcolm dmalc...@redhat.com wrote:

 On Thu, 2010-01-21 at 14:46 -0800, Jeffrey Yasskin wrote:
  On Thu, Jan 21, 2010 at 10:09 AM, Hanno Schlichting ha...@hannosch.eu
 wrote:
   I'm a relative outsider to core development (I'm just a Plone release
   manager), but'll allow myself a couple of questions. Feel free to
   ignore them, if you think they are not relevant at this point :-) I'd
   note that I'm generally enthusiastic and supportive of the proposal :)
   As a data point, I can add that all tests of Zope 2.12 / Plone 4.0 and
   their dependency set run fine under Unladen Swallow.
 
  Hi, thanks for checking that!
 
   On Wed, Jan 20, 2010 at 11:27 PM, Collin Winter 
 collinwin...@google.com wrote:
   We have chosen to reuse a set of existing compiler libraries called
 LLVM
   [#llvm]_ for code generation and code optimization.
  
   Would it be prudent to ask for more information about the llvm
   project? Especially in terms of its non-code related aspects. I can
   try to hunt down this information myself, but as a complete outsider
   to the llvm project this takes much longer, compared to someone who
   has interacted with the project as closely as you have.
  
   Questions like:
  

 [snip]

    Managing LLVM Releases, C++ API Changes
    ---------------------------------------

    LLVM is released regularly every six months. This means that LLVM may be
    released two or three times during the course of development of a CPython 3.x
    release. Each LLVM release brings newer and more powerful optimizations,
    improved platform support and more sophisticated code generation.
  
   How does the support and maintenance policy of llvm releases look
   like? If a Python version is pegged to a specific llvm release, it
   needs to be able to rely on critical bug fixes and security fixes to
   be made for that release for a rather prolonged time. How does this
   match the llvm policies given their frequent time based releases?
 
  LLVM doesn't currently do dot releases. So, once 2.7 is released,
  it's very unlikely there would be a 2.6.1. They do make release
  branches, and they've said they're open to dot releases if someone
  else does them, so if we need a patch release for some issue we could
  make it ourselves. I recognize that's not ideal, but I also expect
  that we'll be able to work around LLVM bugs with changes in Python,
  rather than needing to change LLVM.

 [snip]

 (I don't think the following has specifically been asked yet, though
 this thread has become large)

 As a downstream distributor of Python, a major pain point for me is when
 Python embeds a copy of a library's source code, rather than linking
 against a system library (zlib, libffi and expat spring to mind): if
 bugs (e.g. security issues) arise in a library, I have to go chasing
 down all of the embedded copies of the library, rather than having
 dynamic linking deal with it for me.

 So I have some concerns about having a copy of LLVM embedded in Python's
 source tree, which I believe other distributors of Python would echo;
 our rough preference ordering is:

   dynamic linking  static linking  source code copy

 I would like CPython to be faster, and if it means dynamically linking
 against the system LLVM, that's probably OK (though I have some C++
 concerns already discussed elsewhere in this thread).  If it means
 statically linking, or worse, having a separate copy of the LLVM source
 as an implementation detail of CPython, that would be painful.

 I see that the u-s developers have run into issues in LLVM itself,
 and fixed them (bravo!), and seem to have done a good job of sending
 those fixes back to LLVM for inclusion. [1]


Hi David, I'm unladen-swallow's resident llvm consultant, so I'll answer
the questions I can from an llvm upstream point of view.


 Some questions for the U-S devs:
  - will it be possible to dynamically link against the system LLVM?
 (the PEP currently seems to speak of statically linking against it)


Sadly, LLVM still only offers static libraries (there are two exceptions,
but they don't fit unladen-swallow). This is something we want, but
nobody's stepped up to make it happen yet.


  - does the PEP anticipate that the Python source tree will start
 embedding a copy of the LLVM source tree?
  - if so, what can be done to mitigate the risk of drift from upstream?
 (this is the motivation for some of the following questions)


Jeff and I are working to make sure that everything unladen-swallow needs is
in LLVM 2.7 when it releases.

 - to what extent do you anticipate further changes needed in LLVM for
 U-S? (given the work you've already put in, I expect the answer is
 probably a lot, but we can't know what those will be yet)


Unladen-swallow works with upstream LLVM now. The changes we're making
include refactoring interfaces and adding new optimization features which
Python would make use of in the future.

 - do you anticipate all of 

Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread Cesare Di Mauro
Hi Collin,

2010/1/25 Collin Winter collinwin...@google.com

 Hi Cesare,

 On Sat, Jan 23, 2010 at 1:09 PM, Cesare Di Mauro
 cesare.di.ma...@gmail.com wrote:
  Hi Collin
 
  IMO it'll be better to make the Unladen Swallow project a module, to be
  installed and used if needed, leaving to users the choice of having it
  or not. The same way psyco does, indeed.
  Nowadays it requires too much memory, longer loading time, and fat binaries
  for not-so-great performance. I know that some issues have been worked on,
  but I don't think that they'll show something comparable to the current
  CPython status.

 You're proposing that, even once the issues of memory usage and
 startup time are addressed, Unladen Swallow should still be an
  extension module? I don't see why.


Absolutely not, of course.


 You're assuming that these issues
 cannot be fixed, which I disagree with.


No, it's my belief, from what I've seen so far, that it'll be very difficult
to have a situation that is comparable to the current one, in the terms that
I've talked about (memory usage, load time, and binary size).

I hope to make a mistake. :)



 I think maintaining something like a JIT compiler out-of-line, as
 Psyco is, causes long-term maintainability problems. Such extension
 modules are forever playing catchup with the CPython code, depending
 on implementation details that the CPython developers are right to
  regard as open to change.


I agree (especially for psyco), but ceval.c has relatively stable code
(not mine, however :D).

It also limits what kind of optimizations
 you can implement or forces those optimizations to be implemented with
 workarounds that might be suboptimal or fragile. I'd recommend reading
 the Psyco codebase, if you haven't yet.


Optimizations are surely a point in favor of integrating the U.S. project
into the main core.

Psyco, as I said before, is quite a mess. It's hard to add new back-ends for
other architectures. It's a bit less difficult to keep it in sync with
opcode changes (except for big changes), and a port to Python 3.x may be
feasible (but I don't know if the effort makes sense).

As others have requested, we are working hard to minimize the impact
 of the JIT so that it can be turned off entirely at runtime. We have
 an active issue tracking our progress at
 http://code.google.com/p/unladen-swallow/issues/detail?id=123.


I see, thanks.


  Introducing C++ is a big step, also. Aside from the problems it can bring
  on some platforms, it means that C++ can now be used by CPython developers.

 Which platforms, specifically? What is it about C++ on those platforms
 that is problematic? Can you please provide details?


Others have talked about it.


  It doesn't make sense to force people to use C for everything but the JIT
  part. In the end, CPython could become a mix of C and C++ code, so a bit
  more difficult to understand and manage.

 Whether CPython should allow wider usage of C++ or whether developers
 should be force[d] to use C is not our decision, and is not part of
 this PEP. With the exception of Python/eval.c, we deliberately have
 not converted any CPython code to C++ so that if you're not working on
 the JIT, python-dev's workflow remains the same. Even within eval.cc,
 the only C++ parts are related to the JIT, and so disappear completely
 when configured with --without-llvm (or if you're not working on the
 JIT).

 In any case, developers can easily tell which language to use based on
 file extension. The compiler errors that would result from compiling
 C++ with a C compiler would be a good indication as well.


OK, if CPython can be compiled without using C++ at all, I retract what I
said.


  What I see is that LLVM is too big a project for the goal of having just a
  JIT-ed Python VM. It can surely be easier to use and integrate into CPython,
  but requires too many resources

 Which resources do you feel that LLVM would tax, machine resources or
 developer resources? Are you referring to the portions of LLVM used by
 Unladen Swallow, or the entire wider LLVM project, including the
 pieces Unladen Swallow doesn't use at runtime?


No, I'm referring to the portions of LLVM used by U.S..

Regarding resources, I was talking about memory, loading time, and binary
size (both with static and dynamic compilation).



  (on the contrary, Psyco demands few
  resources and gives very good performance, but seems to be a mess to
  manage and extend).

 This is not my experience. For the workloads I have experience with,
 Psyco doubles memory usage while only providing a 15-30% speed
 improvement. Psyco's benefits are not uniform.


I only ran computation-intensive (integer, float) tasks with Psyco, and
it worked fine.

I haven't run tests with the U.S. benchmark suite.



 Unladen Swallow has been designed to be much more maintainable and
 easier to extend and modify than Psyco: the compiler and its attendant
 optimizations are well-tested (see Lib/test/test_llvm.py, for 

Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread Cesare Di Mauro
Hi Collin,

One more question: is it easy to support more opcodes, or a different opcode
structure, in the Unladen Swallow project?

Thanks,
Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread Floris Bruynooghe
Hello Collin

On Mon, Jan 25, 2010 at 05:27:38PM -0800, Collin Winter wrote:
 On Mon, Jan 25, 2010 at 1:25 PM, Floris Bruynooghe
 floris.bruynoo...@gmail.com wrote:
  On Mon, Jan 25, 2010 at 10:14:35AM -0800, Collin Winter wrote:
  I'm working on a patch to completely remove all traces of C++ when
  configured with --without-llvm. It's a straightforward change, and
  should present no difficulties.
 
  Great to hear that, thanks for caring.
 
 This has now been resolved. As of
 http://code.google.com/p/unladen-swallow/source/detail?r=1036,
 ./configure --without-llvm has no dependency on libstdc++:

Great, thanks for the work.

Floris


-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread skip

Cesare ... but ceval.c has relatively stable code ...

I believe you are mistaken on several counts:

* The names of the functions in there have changed over time.

* The suite of byte code operations have changed dramatically over the
  past ten years or so.  

* The relationship between the code in ceval.c and the Python threading
  model has changed.

Any or all of these aspects of the virtual machine, as well I'm sure as many
other things I've missed would have to be tracked by any extension module
which hoped to supplant or augment its function in some way.

Skip


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread Cesare Di Mauro
Hi Skip

By "relatively stable code" I meant in recent years.

My experience with CPython is limited, of course.

Cesare

2010/1/26 s...@pobox.com


Cesare ... but ceval.c has relatively stable code ...

 I believe you are mistaken on several counts:

* The names of the functions in there have changed over time.

* The suite of byte code operations have changed dramatically over the
  past ten years or so.

* The relationship between the code in ceval.c and the Python threading
  model has changed.

 Any or all of these aspects of the virtual machine, as well I'm sure as
 many
 other things I've missed would have to be tracked by any extension module
 which hoped to supplant or augment its function in some way.

 Skip



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread Martin v. Löwis
 This
 may not be a problem in the LLVM code base, but it is the typical
 problem that C++ devs run into with initialization of objects with
 static storage duration.

This *really* doesn't have anything to do with U-S, but I'd like to
point out that standard C++ has a very clear semantics in this matter:
Any global object can be used in the translation unit where it is
defined after the point where it is defined. So if you arrange for all
accessors to a global object to occur after its definition, you don't
need to worry about initialization order at all.

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread Collin Winter
Hi Cesare,

On Tue, Jan 26, 2010 at 12:29 AM, Cesare Di Mauro
cesare.di.ma...@gmail.com wrote:
 Hi Collin,

 One more question: is it easy to support more opcodes, or a different opcode
 structure, in the Unladen Swallow project?

I assume you're asking about integrating WPython. Yes, adding new
opcodes to Unladen Swallow is still pretty easy. The PEP includes a
section on this,
http://www.python.org/dev/peps/pep-3146/#experimenting-with-changes-to-python-or-cpython-bytecode,
though it doesn't cover something more complex like converting from
bytecode to wordcode, as a purely hypothetical example ;) Let me know
if that section is unclear or needs more data.

Converting from bytecode to wordcode should be relatively
straightforward, assuming that the arrangement of opcode arguments is
the main change. I believe the only real place you would need to
update is the JIT compiler's bytecode iterator (see
http://code.google.com/p/unladen-swallow/source/browse/trunk/Util/PyBytecodeIterator.cc).
Depending on the nature of the changes, the runtime feedback system
might need to be updated, too, but it wouldn't be too difficult, and
the changes should be localized.
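
As a toy illustration of the bytecode-versus-wordcode difference (hypothetical opcodes and encodings; neither matches CPython's or WPython's real formats):

```python
HAVE_ARGUMENT = 90  # toy threshold: opcodes >= this carry a 2-byte argument

def iter_bytecode(code):
    """CPython-2.x-style stream: 1-byte opcode, optional little-endian
    2-byte argument, so instruction length varies per opcode."""
    i = 0
    while i < len(code):
        op = code[i]
        if op >= HAVE_ARGUMENT:
            arg = code[i + 1] | (code[i + 2] << 8)
            i += 3
        else:
            arg = None
            i += 1
        yield op, arg

def iter_wordcode(words):
    """Wordcode: every instruction is one fixed-width (opcode, arg) pair,
    so the iterator needs no per-opcode length bookkeeping."""
    for op, arg in words:
        yield op, (arg if op >= HAVE_ARGUMENT else None)

# The same toy program in both encodings decodes identically:
stream = bytes([9, 100, 1, 0, 9])   # no-arg op, op-with-arg(1), no-arg op
words = [(9, 0), (100, 1), (9, 0)]
assert list(iter_bytecode(stream)) == list(iter_wordcode(words))
```

The JIT's bytecode iterator corresponds to the byte-oriented half of this picture; swapping in a wordcode layout mostly deletes the variable-length decoding logic.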

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread Collin Winter
Hey Martin,

On Thu, Jan 21, 2010 at 2:25 PM, Martin v. Löwis mar...@v.loewis.de wrote:
 Reid Kleckner wrote:
 On Thu, Jan 21, 2010 at 4:34 PM, Martin v. Löwis mar...@v.loewis.de 
 wrote:
 How large is the LLVM shared library? One surprising data point is that the
 binary is much larger than some of the memory footprint measurements given in
 the PEP.
 Could it be that you need to strip the binary, or otherwise remove
 unneeded debug information?

 Python is always built with debug information (-g), at least it was in
 2.6.1 which unladen is based off of, and we've made sure to build LLVM
 the same way.  We had to muck with the LLVM build system to get it to
 include debugging information.  On my system, stripping the python
 binary takes it from 82 MB to 9.7 MB.  So yes, it contains extra debug
 info, which explains the footprint measurements.  The question is
 whether we want LLVM built with debug info or not.

 Ok, so if 70MB are debug information, I think a lot of the concerns are
 removed:
 - debug information doesn't consume any main memory, as it doesn't get
  mapped when the process is started.
 - debug information also doesn't take up space in the system
  distributions, as they distribute stripped binaries.

 As 10MB is still 10 times as large as a current Python binary, people
 will probably search for ways to reduce that further, or at least split
 it up into pieces.

70MB of the increase was indeed debug information. Since the Linux
distros that I checked ship stripped Python binaries, I've stripped
the Unladen Swallow binaries as well, and while the size increase is
still significant, it's not as large as it once was.

Stripped CPython 2.6.4: 1.3 MB
Stripped CPython 3.1.1: 1.4 MB
Stripped Unladen r1041: 12 MB

A 9x increase is better than a 20x increase, but it's not great,
either. There is still room to trim the set of LLVM libraries used by
Unladen Swallow, and we're continuing to investigate reducing on-disk
binary size (http://code.google.com/p/unladen-swallow/issues/detail?id=118
tracks this).
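
A quick sanity check on the arithmetic (the `size_mb` helper and probe path are illustrative only, not part of the measurement):

```python
import os

def size_mb(path):
    """On-disk size of a binary in MB."""
    return os.path.getsize(path) / (1024.0 * 1024.0)

# Using the stripped sizes quoted above rather than live binaries:
cpython_mb, unladen_mb = 1.3, 12.0
assert round(unladen_mb / cpython_mb) == 9   # the "9x increase" figure
```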

I've updated the PEP to reflect this configuration, since it's what
most users will pick up via their system package managers. The exact
change to the PEP wording is
http://codereview.appspot.com/186247/diff2/6001:6003/5002.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-26 Thread Terry Reedy

On 1/26/2010 6:17 PM, Collin Winter wrote:


70MB of the increase was indeed debug information. Since the Linux
distros that I checked ship stripped Python binaries, I've stripped
the Unladen Swallow binaries as well, and while the size increase is
still significant, it's not as large as it once was.

Stripped CPython 2.6.4: 1.3 MB
Stripped CPython 3.1.1: 1.4 MB
Stripped Unladen r1041: 12 MB

A 9x increase is better than a 20x increase, but it's not great,


For downloading and installing for my own use, this would not bother me 
too much, especially since I expect you will be able to eliminate more 
that you do not need. People who package, say 500K or less of python 
code and resource files, using py2exe or whatever, might feel 
differently, especially if the time benefit is trivial for the app. If 
they can get, if they want, a no-U.S. Windows binary, which you have 
apparently now made possible, they should be happy. But I will not 
volunteer Martin to make two binaries unless he is able to complete the 
automation of the process, or unless someone volunteers to help.


Terry Jan Reedy



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Collin Winter
Hi Floris,

On Sun, Jan 24, 2010 at 3:40 AM, Floris Bruynooghe
floris.bruynoo...@gmail.com wrote:
 On Sat, Jan 23, 2010 at 10:09:14PM +0100, Cesare Di Mauro wrote:
 Introducing C++ is a big step, also. Aside from the problems it can bring on some
 platforms, it means that C++ can now be used by CPython developers. It
 doesn't make sense to force people to use C for everything but the JIT part. In
 the end, CPython could become a mix of C and C++ code, so a bit more
 difficult to understand and manage.

 Introducing C++ is a big step, but I disagree that it means C++ should
 be allowed in the other CPython code.  C++ can be problematic on more
 obscure platforms (certainly when static initialisers are used) and
 being able to build a python without C++ (no JIT/LLVM) would be a huge
 benefit, effectively having the option to build an old-style CPython
 at compile time.  (This is why I asked about --without-llvm being able
 not to link with libstdc++).

I'm working on a patch to completely remove all traces of C++ when
configured with --without-llvm. It's a straightforward change, and
should present no difficulties.

For reference, what are these obscure platforms where static
initializers cause problems?

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Tres Seaver
Collin Winter wrote:

 For reference, what are these obscure platforms where static
 initializers cause problems?

It's been a long while since I had to deal with it, but the usual
suspects back in the day were HP-UX, AIX, and Solaris with non-GCC
compilers, as well as Windows when different VC RT libraries got into
the mix.


Tres.
--
===
Tres Seaver  +1 540-429-0999  tsea...@palladion.com
Palladion Software   Excellence by Design   http://palladion.com



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Collin Winter
Hi Cesare,

On Sat, Jan 23, 2010 at 1:09 PM, Cesare Di Mauro
cesare.di.ma...@gmail.com wrote:
 Hi Collin

 IMO it'll be better to make the Unladen Swallow project a module, to be
 installed and used if needed, leaving to users the choice of having it
 or not. The same way psyco does, indeed.
 Nowadays it requires too much memory, longer loading time, and fat binaries
 for not-so-great performance. I know that some issues have been worked on,
 but I don't think that they'll show something comparable to the current
 CPython status.

You're proposing that, even once the issues of memory usage and
startup time are addressed, Unladen Swallow should still be an
extension module? I don't see why. You're assuming that these issues
cannot be fixed, which I disagree with.

I think maintaining something like a JIT compiler out-of-line, as
Psyco is, causes long-term maintainability problems. Such extension
modules are forever playing catchup with the CPython code, depending
on implementation details that the CPython developers are right to
regard as open to change. It also limits what kind of optimizations
you can implement or forces those optimizations to be implemented with
workarounds that might be suboptimal or fragile. I'd recommend reading
the Psyco codebase, if you haven't yet.

As others have requested, we are working hard to minimize the impact
of the JIT so that it can be turned off entirely at runtime. We have
an active issue tracking our progress at
http://code.google.com/p/unladen-swallow/issues/detail?id=123.

 Introducing C++ is a big step, also. Aside from the problems it can bring on
 some platforms, it means that C++ can now be used by CPython developers.

Which platforms, specifically? What is it about C++ on those platforms
that is problematic? Can you please provide details?

 It
 doesn't make sense to force people to use C for everything but the JIT part.
 In the end, CPython could become a mix of C and C++ code, so a bit more
 difficult to understand and manage.

Whether CPython should allow wider usage of C++ or whether developers
should be "force[d] to use C" is not our decision, and is not part of
this PEP. With the exception of Python/eval.c, we deliberately have
not converted any CPython code to C++ so that if you're not working on
the JIT, python-dev's workflow remains the same. Even within eval.cc,
the only C++ parts are related to the JIT, and so disappear completely
when configured with --without-llvm (or if you're not working on the
JIT).

In any case, developers can easily tell which language to use based on
file extension. The compiler errors that would result from compiling
C++ with a C compiler would be a good indication as well.

 What I see is that LLVM is too big a project for the goal of having just a
 JIT-ed Python VM. It can surely be easier to use and integrate into CPython,
 but requires too many resources

Which resources do you feel that LLVM would tax, machine resources or
developer resources? Are you referring to the portions of LLVM used by
Unladen Swallow, or the entire wider LLVM project, including the
pieces Unladen Swallow doesn't use at runtime?

 (on the contrary, Psyco demands few
 resources and gives very good performance, but seems to be a mess to
 manage and extend).

This is not my experience. For the workloads I have experience with,
Psyco doubles memory usage while only providing a 15-30% speed
improvement. Psyco's benefits are not uniform.

Unladen Swallow has been designed to be much more maintainable and
easier to extend and modify than Psyco: the compiler and its attendant
optimizations are well-tested (see Lib/test/test_llvm.py, for one) and
well-documented (see Python/llvm_notes.txt for one). I think that the
project is bearing out the success of our design: Google's full-time
engineers are a small minority on the project at this point, and
almost all performance-improving patches are coming from non-Google
developers.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Jeffrey Yasskin
On Mon, Jan 25, 2010 at 10:44 AM, Tres Seaver tsea...@palladion.com wrote:

 Collin Winter wrote:

 For reference, what are these obscure platforms where static
 initializers cause problems?

 It's been a long while since I had to deal with it, but the usual
 suspects "back in the day" were HP-UX, AIX, and Solaris with non-GCC
 compilers, as well as Windows when different VC RT libraries got into
 the mix.

So then the question is, will this cause any problems we care about?
Do the problems still exist, or were they eliminated in the time
between "back in the day" and now? In what circumstances do static
initializers have problems? What problems do they have? Can the
obscure platforms work around the problems by configuring with
--without-llvm? If we eliminate static initializers in LLVM, are there
any other problems?

We really do need precise descriptions of the problems so we can avoid them.

Thanks,
Jeffrey


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Floris Bruynooghe
On Mon, Jan 25, 2010 at 10:14:35AM -0800, Collin Winter wrote:
 I'm working on a patch to completely remove all traces of C++ when
 configured with --without-llvm. It's a straightforward change, and
 should present no difficulties.

Great to hear that, thanks for caring.

 For reference, what are these obscure platforms where static
 initializers cause problems?

I've had serious trouble on AIX 5.3 TL 04 with a GCC toolchain
(apparently the IBM xlc toolchain is better for that instance).  The
problem seems to be that gcc stores the initialisation code in a
section (_GLOBAL__DI IIRC) which the system loader does not execute.
Although this involved dlopen() from a C main(), which U-S would
not need AFAIK; having a C++ main() might make the loader do the right
thing.  I must also note that on more recent versions (TL 07) this was
no problem at all.  But you don't always have the luxury of being able
to use recent OSes.


Regards
Floris

PS: For completeness sake this was trying to use the omniorbpy module
with Python 2.5.

-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Floris Bruynooghe
On Mon, Jan 25, 2010 at 11:48:56AM -0800, Jeffrey Yasskin wrote:
 On Mon, Jan 25, 2010 at 10:44 AM, Tres Seaver tsea...@palladion.com wrote:
  Collin Winter wrote:
 
  For reference, what are these obscure platforms where static
  initializers cause problems?
 
  It's been a long while since I had to deal with it, but the usual
  suspects "back in the day" were HP-UX, AIX, and Solaris with non-GCC
  compilers, as well as Windows when different VC RT libraries got into
  the mix.
 
 So then the question is, will this cause any problems we care about?
 Do the problems still exist, or were they eliminated in the time
 between "back in the day" and now? In what circumstances do static
 initializers have problems? What problems do they have? Can the
 obscure platforms work around the problems by configuring with
 --without-llvm? If we eliminate static initializers in LLVM, are there
 any other problems?

When Collin's patch is finished everything will be lovely since if
there's no C++ then there's no problem.  Since I was under the
impression that the JIT/LLVM can't emit machine code for the platforms
where these C++ problems would likely occur nothing would be lost.  So
trying to change LLVM to avoid static initialisers would not seem
like a good use of someone's time.

 We really do need precise descriptions of the problems so we can avoid them.

Sometimes these precise descriptions are hard to come by.  :-)


Regards
Floris

-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Martin v. Löwis
 We really do need precise descriptions of the problems so we can avoid them.

One family of problems is a platform's lack of initializer support in the
object file format; any system with traditional a.out (or b.out) is
vulnerable (COFF also is, IIRC).

The solution e.g. g++ came up with is to have the collect2 linker
replacement combine all such initializers into a synthesized function
__main; this function then gets magically called by main(), provided
that main() itself gets compiled by a C++ compiler. Python used to have
a ccpython.cc entry point to support such systems.

This machinery is known to fail in the following ways:
a) main() is not compiled with g++: static objects do not get constructed
b) code that gets linked into shared libraries (assuming the system
   supports them) does not get its initializers invoked.
c) compilation of main() with a C++ compiler, but then linking with ld
   results in an unresolved symbol __main.

Not sure whether U-S has any global C++ objects that need construction
(but I would be surprised if it didn't).

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Martin v. Löwis
Floris Bruynooghe wrote:
 Since I was under the
 impression that the JIT/LLVM can't emit machine code for the platforms
 where these C++ problems would likely occur nothing would be lost.

AFAICT, LLVM doesn't support Itanium or HPPA, and apparently not POWER,
either (although they do support PPC - not sure what that means for
POWER). So that rules out AIX and HP-UX as sources of problems (beyond
the problems that they cause already :-)

LLVM does support SPARC, so I'd be curious about reports that it worked
or didn't work on a certain Solaris release, with either SunPRO or the
gcc release that happened to be installed on the system (Solaris
installation sometimes feature fairly odd g++ versions).

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Tres Seaver

Jeffrey Yasskin wrote:
 On Mon, Jan 25, 2010 at 10:44 AM, Tres Seaver tsea...@palladion.com wrote:

 Collin Winter wrote:

 For reference, what are these obscure platforms where static
 initializers cause problems?
 It's been a long while since I had to deal with it, but the usual
 suspects "back in the day" were HP-UX, AIX, and Solaris with non-GCC
 compilers, as well as Windows when different VC RT libraries got into
 the mix.
 
 So then the question is, will this cause any problems we care about?
 Do the problems still exist, or were they eliminated in the time
 between "back in the day" and now? In what circumstances do static
 initializers have problems? What problems do they have? Can the
 obscure platforms work around the problems by configuring with
 --without-llvm? If we eliminate static initializers in LLVM, are there
 any other problems?
 
 We really do need precise descriptions of the problems so we can avoid them.

Yup, sorry:  I was trying to kick in the little I could remember, but it
has been eight years since I wrote / compiled C++ in anger (a decade of
work in it before that).  MvL's reply sounds *exactly* like the memories
I have, though.


Tres.
--
===
Tres Seaver  +1 540-429-0999  tsea...@palladion.com
Palladion Software   Excellence by Designhttp://palladion.com



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Jeffrey Yasskin
On Mon, Jan 25, 2010 at 1:50 PM, Martin v. Löwis mar...@v.loewis.de wrote:
 We really do need precise descriptions of the problems so we can avoid them.

 One family of problems is platform lack of initializer support in the
 object file format; any system with traditional a.out (or b.out) is
 vulnerable (also, COFF is, IIRC).

 The solution e.g. g++ came up with is to have the collect2 linker
 replacement combine all such initializers into a synthesized function
 __main; this function then gets magically called by main(), provided
 that main() itself gets compiled by a C++ compiler. Python used to have
 a ccpython.cc entry point to support such systems.

 This machinery is known to fail in the following ways:
 a) main() is not compiled with g++: static objects do not get constructed
 b) code that gets linked into shared libraries (assuming the system
   supports them) does not get its initializers invoked.
 c) compilation of main() with a C++ compiler, but then linking with ld
   results in an unresolved symbol __main.

Thank you for the details. I'm pretty confident that (a) and (c) will
not be a problem for the Unladen Swallow merge because we switched
python.c (which holds main()) and linking to use the C++ compiler when
LLVM's enabled at all:
http://code.google.com/p/unladen-swallow/source/browse/trunk/Makefile.pre.in.
Python already had some support for this through the LINKCC configure
variable, but it wasn't being used to compile main(), just to link.

(b) could be a problem if we depend on LLVM as a shared library on one
of these platforms (and, of course, if LLVM's JIT supports these
systems at all). The obvious answers are: 1) --without-llvm on these
systems, 2) link statically on these systems, 3) eliminate the static
constructors. There may also be less obvious answers.

 Not sure whether U-S has any global C++ objects that need construction
 (but I would be surprised if it didn't).

I checked with them, and LLVM would welcome patches to remove these if
we need to. They already have a llvm::ManagedStatic class
(http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Support/ManagedStatic.h?view=markup)
with no constructor that we can replace most of them with.

Jeffrey


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Collin Winter
Hey Floris,

On Mon, Jan 25, 2010 at 1:25 PM, Floris Bruynooghe
floris.bruynoo...@gmail.com wrote:
 On Mon, Jan 25, 2010 at 10:14:35AM -0800, Collin Winter wrote:
  I'm working on a patch to completely remove all traces of C++ when
  configured with --without-llvm. It's a straightforward change, and
 should present no difficulties.

 Great to hear that, thanks for caring.

This has now been resolved. As of
http://code.google.com/p/unladen-swallow/source/detail?r=1036,
./configure --without-llvm has no dependency on libstdc++:

Before: $ otool -L ./python.exe
./python.exe:
/usr/lib/libSystem.B.dylib
/usr/lib/libstdc++.6.dylib
/usr/lib/libgcc_s.1.dylib


After: $ otool -L ./python.exe
./python.exe:
/usr/lib/libSystem.B.dylib
/usr/lib/libgcc_s.1.dylib

I've explicitly noted this in the PEP (see
http://codereview.appspot.com/186247/diff2/2001:4001/5001).

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Meador Inge
 We really do need precise descriptions of the problems so we can avoid
them.

Initialization of objects with static storage duration typically gets a bad
rap for two main reasons: (1) each toolchain implements it differently
(but typically by storing initialization thunks in a table that is walked by
the C++ runtime before entry to 'main'), which may lead to subtle differences
when compiling for different platforms, and (2) there is no guaranteed
initialization order across translation unit boundaries.

(1) is just a fact of multi-platform development.  (2) is a bit more
interesting.  Consider two translation units 'a.cpp' and 'b.cpp':

   // a.cpp
   struct T { T() {} };
   ...
   static T obj1;

   // b.cpp
   struct S { S() {} };
   ...
   static S obj2;

When 'obj1' and 'obj2' get linked into the final image there are no
guarantees on whose constructor (T::T or S::S) will be called first.
Sometimes folks write code where this initialization order matters.  It may
cause strange behavior at run-time that is hard to pin down.  This may not
be a problem in the LLVM code base, but it is the typical problem that C++
devs run into with initialization of objects with static storage duration.

Also, related to reducing code size with C++: I was wondering whether or not
anyone has explored using the ability of some toolchains to remove unused
code and data.  In GCC this can be enabled by compiling with
'-ffunction-sections' and '-fdata-sections' and linking with
'--gc-sections'.  In MS VC++ you can compile with '/Gy' and link with
'/OPT'.  This feature can lead to size reductions sometimes with C++ due to
things like template instantiation causing multiple copies of the same
function to be linked in.  I played around with compiling CPython with this
(gcc + Darwin) and saw about a 200K size drop.  I want to try compiling all
of U-S (e.g. including LLVM) with these options next.
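The GCC variant of this could look roughly like the following Makefile fragment (a sketch built from the flags named above, not the actual U-S build configuration; note that --gc-sections is a linker option, so it is passed through the gcc driver with -Wl,):

```make
# Place each function and data object in its own section...
CFLAGS   += -ffunction-sections -fdata-sections
CXXFLAGS += -ffunction-sections -fdata-sections
# ...then let the linker discard the sections nothing references.
LDFLAGS  += -Wl,--gc-sections
# MS VC++ rough equivalent: compile with /Gy, link with /OPT:REF
```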

Thanks,

-- Meador


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Nick Coghlan
Jeffrey Yasskin wrote:
 (b) could be a problem if we depend on LLVM as a shared library on one
 of these platforms (and, of course, if LLVM's JIT supports these
 systems at all). The obvious answers are: 1) --without-llvm on these
 systems, 2) link statically on these systems, 3) eliminate the static
 constructors. There may also be less obvious answers.

Could the necessary initialisation be delayed until the Py_Initialize()
call? (although I guess that is just a particular implementation
strategy for option 3).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Nick Coghlan
Meador Inge wrote:
 When 'obj1' and 'obj2' get linked into the final image there are no
 guarantees on whose constructor (T::T or S::S) will be called first. 
 Sometimes folks write code where this initialization order matters.  It
 may cause strange behavior at run-time that is hard to pin down.  This
 may not be a problem in the LLVM code base, but it is the typical
 problem that C++ devs run into with initialization of objects with
 static storage duration.

Avoiding this problem is actually one of the original reasons for the
popularity of the singleton design pattern in C++. With instantiation on
first use, it helps ensure the constructors are all executed in the
right order. (There are other problems with the double-checked locking
required under certain aggressive compiler optimisation strategies, but
the static initialisation ordering problem occurs even when optimisation
is completely disabled).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Reid Kleckner
On Mon, Jan 25, 2010 at 9:05 PM, Meador Inge mead...@gmail.com wrote:
 Also related to reduced code size with C++ I was wondering whether or not
 anyone has explored using the ability of some toolchains to remove unused
 code and data?  In GCC this can be enabled by compiling with
 '-ffunction-sections' and '-fdata-sections' and linking with
 '--gc-sections'.  In MS VC++ you can compile with '/Gy' and link with
 '/OPT'.  This feature can lead to size reductions sometimes with C++ due to
 things like template instantiation causing multiple copies of the same
 function to be linked in.  I played around with compiling CPython with this
 (gcc + Darwin) and saw about a 200K size drop.  I want to try compiling all
 of U-S (e.g. including LLVM) with these options next.

I'm sure someone has looked at this before, but I was also considering
this the other day.  One catch is that C extension modules need to be
able to link against any symbol declared with the PyAPI_* macros, so
you're not allowed to delete PyAPI_DATA globals or any code reachable
from a PyAPI_FUNC.

Someone would need to modify the PyAPI_* macros to include something
like __attribute__((used)) with GCC and then tell the linker to strip
unreachable code.  Apple calls it dead stripping:
http://developer.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man1/ld.1.html

This seems to have a section on how to achieve the same effect with a
gnu toolchain:
http://utilitybase.com/article/show/2007/04/09/225/Size+does+matter:+Optimizing+with+size+in+mind+with+GCC

I would guess that we have a fair amount of unused LLVM code linked in
to unladen, so stripping it would reduce our size.  However, we can
only do that if we link LLVM statically.  If/When we dynamically link
against LLVM, we lose our ability to strip out unused symbols.  The
best we can do is only link with the libraries we use, which is what
we already do.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Martin v. Löwis
Nick Coghlan wrote:
 Jeffrey Yasskin wrote:
 (b) could be a problem if we depend on LLVM as a shared library on one
 of these platforms (and, of course, if LLVM's JIT supports these
 systems at all). The obvious answers are: 1) --without-llvm on these
 systems, 2) link statically on these systems, 3) eliminate the static
 constructors. There may also be less obvious answers.
 
 Could the necessary initialisation be delayed until the Py_Initialize()
 call? (although I guess that is just a particular implementation
 strategy for option 3).

Exactly. The convenience of constructors is that you don't need to know
what all your global objects are. If you arrange it so that you can trigger
initialization, you must have already eliminated them.

The question now is what state they carry: if some of them act as
dynamic containers, e.g. for machine code, it would surely be desirable
to have them constructed with Py_Initialize and destroyed with
Py_Finalize (so that the user sees the memory being released). If they
are merely global flags of some kind, they don't matter much.

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-24 Thread Floris Bruynooghe
On Sat, Jan 23, 2010 at 10:09:14PM +0100, Cesare Di Mauro wrote:
 Introducing C++ is a big step, also. Aside from the problems it can bring on
 some platforms, it means that C++ can now be used by CPython developers. It
 doesn't make sense to force people to use C for everything but the JIT part.
 In the end, CPython could become a mix of C and C++ code, so a bit more
 difficult to understand and manage.

Introducing C++ is a big step, but I disagree that it means C++ should
be allowed in the other CPython code.  C++ can be problematic on more
obscure platforms (certainly when static initialisers are used) and
being able to build a python without C++ (no JIT/LLVM) would be a huge
benefit, effectively having the option to build an old-style CPython
at compile time.  (This is why I asked about --without-llvm being able
not to link with libstdc++).

Regards
Floris

-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-24 Thread Cesare Di Mauro
2010/1/24 Floris Bruynooghe floris.bruynoo...@gmail.com

 Introducing C++ is a big step, but I disagree that it means C++ should
 be allowed in the other CPython code.  C++ can be problematic on more
 obscure platforms (certainly when static initialisers are used) and
 being able to build a python without C++ (no JIT/LLVM) would be a huge
 benefit, effectively having the option to build an old-style CPython
  at compile time.  (This is why I asked about --without-llvm being able
 not to link with libstdc++).

 Regards
 Floris


That's why I suggested the use of an external module, but if I have
understood correctly, ceval.c needs to be changed to use C++ for some parts.

If no C++ is required to compile the classic, non-JITted CPython, my thought
was wrong, of course.

Cesare


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-23 Thread Jake McGuire
On Fri, Jan 22, 2010 at 11:07 AM, Collin Winter collinwin...@google.com wrote:
 Hey Jake,

 On Thu, Jan 21, 2010 at 10:48 AM, Jake McGuire mcgu...@google.com wrote:
 On Thu, Jan 21, 2010 at 10:19 AM, Reid Kleckner r...@mit.edu wrote:
 On Thu, Jan 21, 2010 at 12:27 PM, Jake McGuire mcgu...@google.com wrote:
 On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter collinwin...@google.com 
 wrote:
 Profiling
 -

 Unladen Swallow integrates with oProfile 0.9.4 and newer [#oprofile]_ to 
 support
 assembly-level profiling on Linux systems. This means that oProfile will
 correctly symbolize JIT-compiled functions in its reports.

 Do the current python profiling tools (profile/cProfile/pstats) still
 work with Unladen Swallow?

 Sort of.  They disable the use of JITed code, so they don't quite work
 the way you would want them to.  Checking tstate->c_tracefunc every
 line generated too much code.  They still give you a rough idea of
 where your application hotspots are, though, which I think is
 acceptable.

 Hmm.  So cProfile doesn't break, but it causes code to run under a
 completely different execution model so the numbers it produces are
 not connected to reality?

 We've found the call graph and associated execution time information
 from cProfile to be extremely useful for understanding performance
 issues and tracking down regressions.  Giving that up would be a huge
 blow.

 FWIW, cProfile's call graph information is still perfectly accurate,
 but you're right: turning on cProfile does trigger execution under a
 different codepath. That's regrettable, but instrumentation-based
 profiling is always going to introduce skew into your numbers. That's
 why we opted to improve oProfile, since we believe sampling-based
 profiling to be a better model.

Sampling-based may be theoretically better, but we've gotten a lot of
mileage out of profile, hotshot and especially cProfile.  I know that
other people at Google have also used cProfile (backported to 2.4)
with great success.  The couple of times I tried to use oProfile it
was less illuminating than I'd hoped, but that could just be
inexperience.

 Profiling was problematic to support in machine code because in
 Python, you can turn profiling on from user code at arbitrary points.
 To correctly support that, we would need to add lots of hooks to the
 generated code to check whether profiling is enabled, and if so, call
 out to the profiler. Those "is profiling enabled now?" checks are
 (almost) always going to be false, which means we spend cycles for no
 real benefit.

Well, we put the ability to profile on demand to good use - in
particular by restricting profiling to one particular servlet (or a
subset of servlets) and by skipping the first few executions of that
servlet in a process to avoid startup noise.  All of this gets kicked
off by talking to the management process of our app server via http.

 Can YouTube use oProfile for profiling, or is instrumented profiling
 critical?

[snip]

I don't know that instrumented profiling is critical, but the level of
insight we have now is very important for keeping our site happy.
It seems like it'd be a fair bit of work to get oProfile to give us
the same level of insight, and it's not clear who would be motivated
to do that work.

 - Add the necessary profiling hooks to JITted code to better support
 cProfile, but add a command-line flag (something explicit like -O3)
 that removes the hooks and activates the current behaviour (or
 something even more restrictive, possibly).

This would be workable albeit suboptimal; as I said we start and stop
profiling on the fly, and while we currently fork a new process to do
this, that's only because we don't have a good arbitrary RPC mechanism
from parent to child.  Having to start up a new python process from
scratch would be a big step back.

 - Initially compile Python code without the hooks, but have a
 trip-wire set to detect the installation of profiling hooks. When
 profiling hooks are installed, purge all machine code from the system
 and recompile all hot functions to include the profiling hooks.

This would be the closest to the way we are doing things now.

If Unladen Swallow is sufficiently faster, we would probably make
oProfile work.  But if it's a marginal improvement, we'd be more
inclined to try for more incremental improvements (e.g. your excellent
cPickle work).

-jake


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-23 Thread Cesare Di Mauro
Hi Collin

IMO it would be better to make the Unladen Swallow project a module, to be
installed and used if needed, leaving users the choice of having it
or not. The same way psyco does, indeed.

Nowadays it requires too much memory, longer loading times, and fat binaries
for not-so-great performance. I know that some of these issues are being
worked on, but I don't think the results will be comparable to the current
CPython status.

Introducing C++ is a big step, also. Aside from the problems it can bring on
some platforms, it means that C++ can now be used by CPython developers. It
doesn't make sense to force people to use C for everything but the JIT part.
In the end, CPython could become a mix of C and C++ code, and so a bit more
difficult to understand and manage.

What I see is that LLVM is too big a project for the goal of having just a
JIT-ed Python VM. It can surely be easier to use and integrate into CPython,
but requires too many resources (on the contrary, Psyco demands few
resources and gives very good performance, but seems to be a mess to
manage and extend).

I know that a new, custom JIT is a hard project to work on, requiring a
long time, but the hurry to have something faster than the current CPython
can produce a mammoth that runs just a bit better.

Anyway, it seems that performance is a sensitive topic for the Python
community. I think that a lot can be done to squeeze out more speed, working
both on CPython internals and on the JIT side.

Best regards,
Cesare

2010/1/20 Collin Winter collinwin...@google.com

 Hello python-dev,

 I've just committed the initial draft of PEP 3146, proposing to merge
 Unladen Swallow into CPython's source tree and roadmap. The initial
 draft is included below. I've also uploaded the PEP to Rietveld at
 http://codereview.appspot.com/186247, where individual fine-grained
 updates will be tracked. Feel free to comment either in this thread or
 on the Rietveld issue. I'll post periodic summaries of the
 discussion-to-date.

 We're looking forward to discussing this with everyone.

 Thanks,
 Collin Winter

 [snip...]

