Re: [Python-Dev] Mailing List archive corruption?

2010-01-21 Thread Vinay Sajip
Barry Warsaw barry at python.org writes:

> WTF?  I think the archives were recently regenerated, so there's probably a
> fubar there.  CC'ing the postmasters.
 

Is someone still working on this? I see no updates coming in to the Python-dev
archive on mail.python.org, though I do see them on Gmane (for example, the PEP
3146 thread from yesterday).

Regards,

Vinay Sajip

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread M.-A. Lemburg
Michael Foord wrote:
> On 20/01/2010 21:37, M.-A. Lemburg wrote:
>> The only supported default encodings in Python are:
>>
>>    Python 2.x: ASCII
>>    Python 3.x: UTF-8
>
> Is this true? I thought the default encoding in Python 3 was platform
> specific (i.e. cp1252 on Windows). That means files written using the
> default encoding on one platform may not be read correctly on another
> platform. Slightly off topic for this discussion I realise.

Yes, the above is what Python implements.

However, the default encoding is only used internally when converting
8-bit strings to Unicode.

When it comes to I/O there are several other encodings which can get
used, e.g. stdin/out/err streams each have their own encoding
(just like arbitrary file objects), OS APIs use their own encoding,
etc.

If no encoding is given for these, Python will try to find a
reasonable default. Python 2.x and 3.x differ a lot in how this
is done.

As always: It's better not to rely on such defaults and explicitly
provide the encoding as parameter where possible.
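A minimal sketch of this advice on Python 3 (the file name here is purely illustrative): pass the encoding explicitly rather than relying on locale- or stream-dependent defaults.

```python
import os
import sys
import tempfile

# The interpreter-wide default encoding: UTF-8 on Python 3.
default = sys.getdefaultencoding()

# Prefer an explicit encoding parameter over whatever default
# Python would pick for this platform or locale.
path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("héllo")
with open(path, encoding="utf-8") as f:
    content = f.read()
```

With the encoding pinned to UTF-8 at both ends, the file round-trips identically on any platform.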

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jan 21 2010)
 Python/Zope Consulting and Support ...http://www.egenix.com/
 mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
 mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Michael Foord

On 21/01/2010 06:54, Gregory P. Smith wrote:

> +1
>
> My biggest concern is memory usage but it sounds like addressing that
> is already in your mind.  I don't so much mind an additional up front
> constant and per-line-of-code hit for instrumentation but leaks are
> unacceptable.  Any instrumentation data or jit caches should be
> managed (and tunable at run time when possible and it makes sense).
>
> I think having a run time flag (or environment variable for those who
> like that) to disable the use of JIT at python3 execution time would
> be a good idea.




Echoes my sentiments. The work done so far sounds like an awesome and 
necessary foundation for the long term goal of speeding up Python. Given 
the likely cost / benefit tradeoffs when unladen-swallow is merged, the 
ability to switch off the JIT at runtime would seem essential.


In fact off by default to start with seems reasonable given that many 
users (particularly Windows users) are likely to only have one version 
of Python installed at a time (so side by side with-JIT and without-JIT 
installs is not really acceptable as the standard solution). Messing 
with environment variables is user-unfriendly on Windows. Users can use 
site.py and / or environment variables for global switches.


All the best,


Michael Foord



> -gps
>
> disclaimer: I work for Google but not on unladen-swallow.  My
> motivation is to improve the future of CPython for the entire world in
> the long term.






--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog

READ CAREFULLY. By accepting and reading this email you agree, on behalf of your 
employer, to release me from all obligations and waivers arising from any and all 
NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, 
confidentiality, non-disclosure, non-compete and acceptable use policies (BOGUS 
AGREEMENTS) that I have entered into with your employer, its partners, licensors, 
agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. 
You further represent that you have the authority to release me from any BOGUS AGREEMENTS 
on behalf of your employer.




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Barry Warsaw
On Jan 20, 2010, at 11:05 PM, Jack Diederich wrote:

> Does disabling the LLVM change binary compatibility between modules
> targeted at the same version?  At tonight's Boston PIG we had some
> binary package maintainers but most people (including myself) only
> cared about source compatibility.  I assume linux distros care about
> binary compatibility _a lot_.

A few questions come to mind:

1. What are the implications for PEP 384 (Stable ABI) if U-S is added?

2. What effect does requiring C++ have on embedding applications across the
   set of platforms that Python currently supports?  In a previous
   life I had to integrate a C++ library with Python as an embedded language
   and had lots of problems on some OSes (IIRC Solaris and Windows) getting
   all the necessary components to link properly.

3. Will the U-S bits come with a roadmap to the code?  It seems like this is
   dropping a big black box of code on the Python developers, and I would want
   to reduce the learning curve as much as possible.

I'm generally +0 with the current performance improvements, I could certainly
be +1 if we get even more gains out of it.  I think there's a lot of issues
that need to be addressed, and the PEP process is the right way to do that.

-Barry





Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Barry Warsaw
On Jan 20, 2010, at 10:09 PM, Benjamin Peterson wrote:

>> Does disabling the LLVM change binary compatibility between modules
>> targeted at the same version?  At tonight's Boston PIG we had some
>> binary package maintainers but most people (including myself) only
>> cared about source compatibility.  I assume linux distros care about
>> binary compatibility _a lot_.
>
> We've traditionally broken binary compatibility between major releases
> anyway, so I don't see much of an issue there. However, this might
> change should PEP 384 be implemented.

Yep, and implementing PEP 384 is on my radar.

-Barry




Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread Michael Foord

On 20/01/2010 23:46, MRAB wrote:

Martin v. Löwis wrote:

The only supported default encodings in Python are:

Python 2.x: ASCII
Python 3.x: UTF-8

Is this true?


For 3.x: yes. However, the default encoding is much less relevant in
3.x, since Python will never implicitly use the default encoding, except
when some C module asks for a char*. In particular, ordering comparisons
between bytes and str always raise a TypeError.


I thought the default encoding in Python 3 was platform
specific (i.e. cp1252 on Windows).


Not at all. You are confusing this with the IO encoding of text
files, which indeed defaults to the locale encoding (and CP_ACP
on Windows specifically - which may or may not be cp1252).

The default encoding (i.e. the one you could theoretically set
with sys.setdefaultencoding) in 3.x is UTF-8.


It's UTF-8 precisely to avoid cross-platform encoding problems,
especially important now that 'normal' strings are Unicode.


Where the default *file system encoding* is used (i.e. text files are 
written or read without specifying an encoding) then there are still 
cross-platform issues. As I said, a file written on one platform may be 
unreadable on another platform.


The default encoding is only used for encoding strings to bytes when an 
encoding is not specified. I'm not sure when that can happen in Python 
3. (Most streams will have an encoding; is the default encoding used 
in preference to the stream encoding?)
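The behaviour under discussion can be illustrated with a short Python 3 sketch: the argument-less encode/decode default is UTF-8 regardless of platform, and mixing bytes and str never converts implicitly.

```python
# encode()/decode() with no argument use UTF-8 on Python 3,
# independent of the platform.
data = "é".encode()
assert data == b"\xc3\xa9"
assert data.decode() == "é"

# Unlike Python 2, Python 3 never converts implicitly when mixing
# bytes and str: an ordering comparison raises TypeError.
try:
    b"abc" < "abc"
    mixed_ok = True
except TypeError:
    mixed_ok = False
```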


Michael










Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread Michael Foord

On 21/01/2010 11:15, M.-A. Lemburg wrote:

Michael Foord wrote:
   

On 20/01/2010 21:37, M.-A. Lemburg wrote:
 

The only supported default encodings in Python are:

   Python 2.x: ASCII
   Python 3.x: UTF-8

   

Is this true? I thought the default encoding in Python 3 was platform
specific (i.e. cp1252 on Windows). That means files written using the
default encoding on one platform may not be read correctly on another
platform. Slightly off topic for this discussion I realise.
 

Yes, the above is what Python implements.

However, the default encoding is only used internally when converting
8-bit strings to Unicode.
   


Ok. I know when in Python 2 an implicit encode (or decode) can happen. 
When in Python 3 can implicit encodes happen?



When it comes to I/O there are several other encodings which can get
used, e.g. stdin/out/err streams each have their own encoding
(just like arbitrary file objects), OS APIs use their own encoding,
etc.

If no encoding is given for these, Python will try to find a
reasonable default. Python 2.x and 3.x differ a lot in how this
is done.

As always: It's better not to rely on such defaults and explicitly
provide the encoding as parameter where possible.

   
Sure. I do worry that developers will still rely on the default behavior, 
assuming that Python 3 fixes their encoding problems, and cause 
cross-platform issues. But oh well.


Michael





Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread M.-A. Lemburg
Michael Foord wrote:
 On 21/01/2010 11:15, M.-A. Lemburg wrote:
 Michael Foord wrote:
   
 On 20/01/2010 21:37, M.-A. Lemburg wrote:
 
 The only supported default encodings in Python are:

Python 2.x: ASCII
Python 3.x: UTF-8


 Is this true? I thought the default encoding in Python 3 was platform
 specific (i.e. cp1252 on Windows). That means files written using the
 default encoding on one platform may not be read correctly on another
 platform. Slightly off topic for this discussion I realise.
  
 Yes, the above is what Python implements.

 However, the default encoding is only used internally when converting
 8-bit strings to Unicode.

 
 Ok. I know when in Python 2 an implicit encode (or decode) can happen.
 When in Python 3 can implicit encodes happen?

When e.g. passing a Unicode object to a C API that uses the s#
parser marker.

 When it comes to I/O there are several other encodings which can get
 used, e.g. stdin/out/err streams each have their own encoding
 (just like arbitrary file objects), OS APIs use their own encoding,
 etc.

 If no encoding is given for these, Python will try to find a
 reasonable default. Python 2.x and 3.x differ a lot in how this
 is done.

 As always: It's better not to rely on such defaults and explicitly
 provide the encoding as parameter where possible.


 Sure. I do worry that developers will still rely on the default behavior
 assuming that Python 3 fixes their encoding problems and cause
 cross-platform issues. But oh well.

IMHO, it would be better not to give them that feeling and
instead have the default for I/O always be UTF-8 (for Python 3.x)
regardless of what the OS, device or locale says.

Developers could then customize these settings as necessary
in their applications or via Python environment variables.

Working around UnicodeDecodeErrors is almost always the wrong
approach. They need to be fixed in the code by deciding what
to do on a case-by-case basis.
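A sketch of the case-by-case approach described above (the input bytes and codec choice here are illustrative): let the strict decode surface the problem, then handle it deliberately at that call site.

```python
raw = b"caf\xe9"  # Latin-1 bytes; not valid UTF-8

# Decide what to do with undecodable input at each call site
# instead of changing an interpreter-wide default.
try:
    text = raw.decode("utf-8")  # strict: surface the problem
except UnicodeDecodeError:
    # Here we happen to know the source is Latin-1; another caller
    # might instead log the error, skip the record, or decode with
    # errors="replace".
    text = raw.decode("latin-1")
```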



Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread Michael Foord

On 21/01/2010 12:00, M.-A. Lemburg wrote:

Michael Foord wrote:
   

On 21/01/2010 11:15, M.-A. Lemburg wrote:
 

Michael Foord wrote:

   

On 20/01/2010 21:37, M.-A. Lemburg wrote:

 

The only supported default encodings in Python are:

Python 2.x: ASCII
Python 3.x: UTF-8


   

Is this true? I thought the default encoding in Python 3 was platform
specific (i.e. cp1252 on Windows). That means files written using the
default encoding on one platform may not be read correctly on another
platform. Slightly off topic for this discussion I realise.

 

Yes, the above is what Python implements.

However, the default encoding is only used internally when converting
8-bit strings to Unicode.

   

Ok. I know when in Python 2 an implicit encode (or decode) can happen.
When in Python 3 can implicit encodes happen?
 

When e.g. passing a Unicode object to a C API that uses the s#
parser marker.
   


Ah right, thanks.

   

When it comes to I/O there are several other encodings which can get
used, e.g. stdin/out/err streams each have their own encoding
(just like arbitrary file objects), OS APIs use their own encoding,
etc.

If no encoding is given for these, Python will try to find a
reasonable default. Python 2.x and 3.x differ a lot in how this
is done.

As always: It's better not to rely on such defaults and explicitly
provide the encoding as parameter where possible.


   

Sure. I do worry that developers will still rely on the default behavior
assuming that Python 3 fixes their encoding problems and cause
cross-platform issues. But oh well.
 

IMHO, it would be better not to give them that feeling and
instead have the default for I/O always be UTF-8 (for Python 3.x)
regardless of what the OS, device or locale says.

   


Well, I agree - but Python 3.1 is now out and uses the platform specific 
encoding as the default filesystem encoding. To change that now would be 
an incompatible change.


Michael


Developers could then customize these settings as necessary
in their applications or via Python environment variables.

Working around UnicodeDecodeErrors is almost always the wrong
approach. They need to be fixed in the code by deciding what
to do on a case-by-case basis.

   







Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread M.-A. Lemburg
Michael Foord wrote:
 As always: It's better not to rely on such defaults and explicitly
 provide the encoding as parameter where possible.



 Sure. I do worry that developers will still rely on the default behavior
 assuming that Python 3 fixes their encoding problems and cause
 cross-platform issues. But oh well.
  
 IMHO, it would be better not to give them that feeling and
 instead have the default for I/O always be UTF-8 (for Python 3.x)
 regardless of what the OS, device or locale says.


 
 Well, I agree - but Python 3.1 is now out and uses the platform specific
 encoding as the default filesystem encoding. To change that now would be
 an incompatible change.

True and that's why we have to educate developers not to
rely on those defaults.

 Michael
 
 Developers could then customize these settings as necessary
 in their applications or via Python environment variables.

 Working around UnicodeDecodeErrors is almost always the wrong
 approach. They need to be fixed in the code by deciding what
 to do on a case-by-case basis.



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Antoine Pitrou

Hello,

 If we have done any original work, it is by accident.

:-)

 The increased memory usage comes from a) LLVM code generation, analysis
 and optimization libraries; b) native code; c) memory usage issues or
 leaks in LLVM; d) data structures needed to optimize and generate
 machine code; e) as-yet uncategorized other sources.

Does the increase in memory occupation disappear when the JIT is disabled 
from the command-line?
Do you think LLVM might suffer from a lot of memory leaks?

 We seek guidance from the community on
 an acceptable level of increased memory usage.

I think a 10-20% increase would be acceptable.

 32-bit; gcc 4.0.3

 +-------------+---------------+---------------+----------------------+
 | Binary size | CPython 2.6.4 | CPython 3.1.1 | Unladen Swallow r988 |
 +=============+===============+===============+======================+
 | Release     | 3.8M          | 4.0M          | 74M                  |
 +-------------+---------------+---------------+----------------------+

This is positively humongous. Is there any way to shrink these numbers 
dramatically (I'm talking about the release builds)? Large executables or 
libraries may make people anxious about the interpreter's memory 
efficiency; and they will be a nuisance in many situations (think making 
standalone app bundles using py2exe or py2app).

   Unladen Swallow has enforced pre-commit reviews in our trunk, but we
   realize this may lead to long review/checkin cycles in a
   purely-volunteer organization. We would like a non-Google-affiliated
   member of the CPython development team to review our work for
   correctness and compatibility, but we realize this may not be possible
   for every commit.

Probably not... Perhaps you could post the most critical patches on 
rietveld and ask for review there?
I suppose the patches will be quite large anyway, so most of them will 
remain unreviewed.




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Floris Bruynooghe
On Wed, Jan 20, 2010 at 02:27:05PM -0800, Collin Winter wrote:
 Platform Support
 
[...]
 In order to support hardware and software platforms where LLVM's JIT
 does not work, Unladen Swallow provides a ``./configure
 --without-llvm`` option. This flag carves out any part of Unladen
 Swallow that depends on LLVM, yielding a Python binary that works
 and passes its tests, but has no performance advantages. This
 configuration is recommended for hardware unsupported by LLVM, or
 systems that care more about memory usage than performance.

I just compiled with the --without-llvm option and see that the
binary, while only an acceptable 4.1M, still links with libstdc++.  Is
it possible to completely get rid of the C++ dependency if this option
is used?  Introducing a C++ dependency on all platforms for no
additional benefit (with --without-llvm) seems like a bad tradeoff to
me.

Regards
Floris

-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org


Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Thomas Wouters
On Wed, Jan 13, 2010 at 19:51, Guido van Rossum gu...@python.org wrote:

 Please mail me topics you'd like to hear me talk about in my keynote
 at PyCon this year.


I'd like to hear you lay to rest that nonsense about you retiring :)

-- 
Thomas Wouters tho...@python.org

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!


Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Michael Foord

On 21/01/2010 15:03, Thomas Wouters wrote:



On Wed, Jan 13, 2010 at 19:51, Guido van Rossum gu...@python.org 
mailto:gu...@python.org wrote:


Please mail me topics you'd like to hear me talk about in my keynote
at PyCon this year.



How about something completely different... ?

Your history of Python stuff has been really interesting.



I'd like to hear you lay to rest that nonsense about you retiring :)


Well, ditto. :-)

All the best,

Michael












Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Kortatu
Hi!

For me, something about Unladen Swallow would be very interesting, along with
your opinion of JIT compilers.
Unfortunately I can't go to PyCon; will the keynote presentation be
uploaded?

Cheers.

2010/1/21 Michael Foord fuzzy...@voidspace.org.uk

  On 21/01/2010 15:03, Thomas Wouters wrote:



 On Wed, Jan 13, 2010 at 19:51, Guido van Rossum gu...@python.org wrote:

 Please mail me topics you'd like to hear me talk about in my keynote
 at PyCon this year.


 How about something completely different... ?

 Your history of Python stuff has been really interesting.


  I'd like to hear you lay to rest that nonsense about you retiring :


 Well, ditto. :-)

 All the best,

 Michael














Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Glyph Lefkowitz

On Jan 21, 2010, at 7:25 AM, Antoine Pitrou wrote:

 We seek guidance from the community on
 an acceptable level of increased memory usage.
 
 I think a 10-20% increase would be acceptable.

It would be hard for me to put an exact number on what I would find acceptable, 
but I was really hoping that we could get a *reduced* memory footprint in the 
long term.

My real concern here is not absolute memory usage, but usage for each 
additional Python process on a system; even if Python supported fast, GIL-free 
multithreading, I'd still prefer the additional isolation of multiprocess 
concurrency.  As it currently stands, starting cores+1 Python processes can 
start to really hurt, especially in many-core-low-RAM environments like the 
Playstation 3.

So, if memory usage went up by 20%, but per-interpreter overhead were decreased 
by more than that, I'd personally be happy.



Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Collin Winter
On Thu, Jan 21, 2010 at 8:37 AM, Kortatu glorybo...@gmail.com wrote:
 Hi!

 For me, could be very interesting something about Unladen Swallow, and your
 opinion about JIT compilers.

FWIW, there will be a separate talk about Unladen Swallow at PyCon. I
for one would like to hear Guido talk about something else :)

Collin

 2010/1/21 Michael Foord fuzzy...@voidspace.org.uk

 On 21/01/2010 15:03, Thomas Wouters wrote:

 On Wed, Jan 13, 2010 at 19:51, Guido van Rossum gu...@python.org wrote:

 Please mail me topics you'd like to hear me talk about in my keynote
 at PyCon this year.

 How about something completely different... ?

 Your history of Python stuff has been really interesting.


 I'd like to hear you lay to rest that nonsense about you retiring :

 Well, ditto. :-)

 All the best,

 Michael














Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
Hi Dirkjan,

On Wed, Jan 20, 2010 at 10:55 PM, Dirkjan Ochtman dirk...@ochtman.nl wrote:
 On Thu, Jan 21, 2010 at 02:56, Collin Winter collinwin...@google.com wrote:
 Agreed. We are actively working to improve the startup time penalty.
 We're interested in getting guidance from the CPython community as to
 what kind of a startup slow down would be sufficient in exchange for
 greater runtime performance.

 For some apps (like Mercurial, which I happen to sometimes hack on),
 increased startup time really sucks. We already have our demandimport
 code (I believe bzr has something similar) to try and delay imports,
 to prevent us spending time on imports we don't need. Maybe it would
 be possible to do something like that in u-s? It could possibly also
 keep track of the thorny issues, like imports where there's an except
 ImportError that can do fallbacks.
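[The demand-loading scheme described above can be sketched as a module proxy that defers the real import until first attribute access. This is a simplified illustration, not Mercurial's actual demandimport code; the `_DemandModule` and `demandimport` names are invented for this sketch.]

```python
import importlib
import types

class _DemandModule(types.ModuleType):
    """Placeholder module that performs the real import lazily."""

    def __init__(self, name):
        super().__init__(name)
        self._real = None

    def __getattr__(self, attr):
        # Only called for attributes not found normally, i.e. on the
        # first real use of the module; that is when we pay the import cost.
        if self._real is None:
            self._real = importlib.import_module(self.__name__)
        return getattr(self._real, attr)

def demandimport(name):
    """Return a proxy for `name`; importing is deferred until first use."""
    return _DemandModule(name)
```

A real implementation (like Mercurial's) also hooks `__import__` and handles the thorny cases such as `from x import y` and `except ImportError` fallbacks, which this sketch deliberately ignores.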

I added startup benchmarks for Mercurial and Bazaar yesterday
(http://code.google.com/p/unladen-swallow/source/detail?r=1019) so we
can use them as more macro-ish benchmarks, rather than merely starting
the CPython binary over and over again. If you have ideas for better
Mercurial/Bazaar startup scenarios, I'd love to hear them. The new
hg_startup and bzr_startup benchmarks should give us some more data
points for measuring improvements in startup time.
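[A startup benchmark of the "start the binary over and over" kind can be sketched as below. This is a generic illustration, not the actual unladen-swallow benchmark harness; the hg_startup/bzr_startup benchmarks time real tool invocations rather than `-c pass`.]

```python
import subprocess
import sys
import time

def startup_time(cmd, runs=5):
    """Return the best-of-N wall-clock time to run cmd to completion."""
    best = None
    for _ in range(runs):
        t0 = time.perf_counter()
        # Discard output; we only care how long the process takes to
        # start, do its (trivial) work, and exit.
        subprocess.run(cmd, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL, check=True)
        elapsed = time.perf_counter() - t0
        best = elapsed if best is None else min(best, elapsed)
    return best

# Bare interpreter startup; an hg/bzr benchmark would run e.g. ["hg", "version"].
interp = startup_time([sys.executable, "-c", "pass"], runs=3)
```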

One idea we had for improving startup time for apps like Mercurial was
to allow the creation of hermetic Python binaries, with all
necessary modules preloaded. This would be something like Smalltalk
images. We haven't yet really fleshed out this idea, though.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread A.M. Kuchling
On Wed, Jan 20, 2010 at 10:54:11PM -0800, Gregory P. Smith wrote:
 I think having a run time flag (or environment variable for those who like
 that) to disable the use of JIT at python3 execution time would be a good
 idea.

Another approach could be to compile two binaries, 'python' which is
smaller and doesn't contain the JIT, and jit-python, which is much
larger and contains LLVM.

--amk


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread John Arbash Meinel
Collin Winter wrote:
 Hi Dirkjan,
 
 On Wed, Jan 20, 2010 at 10:55 PM, Dirkjan Ochtman dirk...@ochtman.nl wrote:
 On Thu, Jan 21, 2010 at 02:56, Collin Winter collinwin...@google.com wrote:
 Agreed. We are actively working to improve the startup time penalty.
 We're interested in getting guidance from the CPython community as to
 what kind of a startup slow down would be sufficient in exchange for
 greater runtime performance.
 For some apps (like Mercurial, which I happen to sometimes hack on),
 increased startup time really sucks. We already have our demandimport
 code (I believe bzr has something similar) to try and delay imports,
 to prevent us spending time on imports we don't need. Maybe it would
 be possible to do something like that in u-s? It could possibly also
 keep track of the thorny issues, like imports where there's an except
 ImportError that can do fallbacks.
 
 I added startup benchmarks for Mercurial and Bazaar yesterday
 (http://code.google.com/p/unladen-swallow/source/detail?r=1019) so we
 can use them as more macro-ish benchmarks, rather than merely starting
 the CPython binary over and over again. If you have ideas for better
 Mercurial/Bazaar startup scenarios, I'd love to hear them. The new
 hg_startup and bzr_startup benchmarks should give us some more data
 points for measuring improvements in startup time.
 
 One idea we had for improving startup time for apps like Mercurial was
 to allow the creation of hermetic Python binaries, with all
 necessary modules preloaded. This would be something like Smalltalk
 images. We haven't yet really fleshed out this idea, though.
 
 Thanks,
 Collin Winter

There is freeze:
http://wiki.python.org/moin/Freeze

Which IIRC Robert Collins tried in the past, but didn't see a huge gain.
It at least tries to compile all of your python files to C files and
then build an executable out of that.

John
=:->


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 7:25 AM, Antoine Pitrou solip...@pitrou.net wrote:
 32-bit; gcc 4.0.3

 +-------------+---------------+---------------+----------------------+
 | Binary size | CPython 2.6.4 | CPython 3.1.1 | Unladen Swallow r988 |
 +=============+===============+===============+======================+
 | Release     | 3.8M          | 4.0M          | 74M                  |
 +-------------+---------------+---------------+----------------------+

 This is positively humongous. Is there any way to shrink these numbers
 dramatically (I'm talking about the release builds)? Large executables or
 libraries may make people anxious about the interpreter's memory
 efficiency; and they will be a nuisance in many situations (think making
 standalone app bundles using py2exe or py2app).

When we link against LLVM as a shared library, LLVM will still all be
loaded into memory, but it will be shared between all python
processes.

The size increase is a recent regression, and we used to be down
somewhere in the 20 MB range:
http://code.google.com/p/unladen-swallow/issues/detail?id=118

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Lennart Regebro
My 2 cents:

For environments like Plone that have a lot of code and use a lot of
memory, the current speed increase is probably not worth it if the
memory increases as much as it does now. In almost all cases it would
mean you need to skimp on data caching instead, which would probably
slow the sites down more than speed them up.

But of course, if in the future U-S would get that 4-5 times speedup,
doubling the memory would not be an issue, IMO.

-- 
Lennart Regebro: Python, Zope, Plone, Grok
http://regebro.wordpress.com/
+33 661 58 14 64


Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Lennart Regebro
Hmmm.

A list of favorite restaurants?

OK, more seriously: Your favourite python tools.

-- 
Lennart Regebro: Python, Zope, Plone, Grok
http://regebro.wordpress.com/
+33 661 58 14 64


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Hanno Schlichting
Hi,

I'm a relative outsider to core development (I'm just a Plone release
manager), but I'll allow myself a couple of questions. Feel free to
ignore them, if you think they are not relevant at this point :-) I'd
note that I'm generally enthusiastic and supportive of the proposal :)
As a data point, I can add that all tests of Zope 2.12 / Plone 4.0 and
their dependency set run fine under Unladen Swallow.

On Wed, Jan 20, 2010 at 11:27 PM, Collin Winter collinwin...@google.com wrote:
 We have chosen to reuse a set of existing compiler libraries called LLVM
 [#llvm]_ for code generation and code optimization.

Would it be prudent to ask for more information about the llvm
project? Especially in terms of its non-code related aspects. I can
try to hunt down this information myself, but as a complete outsider
to the llvm project this takes much longer, compared to someone who
has interacted with the project as closely as you have.

Questions like:

- Who holds the copyright, is there a foundation or is there a risk of
the project getting into trouble because of copyright issues?
- What licence is the code and the tools under, and what effect does
that have on the code generated by the JIT compiler?
- What's the bus factor of the project? Is it in practice dependent on
a single BDFL or a single company like Apple?
- What's the status of the documentation or web presence? Does the
project consist only of developers or has it been able to attract
people from other backgrounds?

On the code related side you described general enthusiasm and support
of the llvm community, but noted that some aspects of it aren't as
mature as others. You also noted that given Python's good test
coverage you have been able to find problems in llvm itself. This
raises some questions to me:

- Does the llvm project employ automated test driven development, are
there functioning nightly test runs?
- What do they do to ensure a good quality of their software, and how
many critical regressions have there been in final releases?
- Is there a clearly distinguished core and stable part of the project
that is separate from more experimental ideas?
- What's the status of their known bugs? Do important issues get
resolved in a reasonable time?
- Did they have security problems related to their codebase, have
those been handled in a professional manner?

There's probably more of these questions you tend to ask yourself
somewhat implicitly when trying to assess how mature and stable a
given open-source project is. My feeling is that llvm is still a
slightly new and up and coming project and it would be good to get a
better idea about how mature they are.

 Platform Support
 

 Unladen Swallow is inherently limited by the platform support provided by
 LLVM, especially LLVM's JIT compilation system [#llvm-hardware]_. LLVM's JIT
 has the best support on x86 and x86-64 systems, and these are the platforms
 where Unladen Swallow has received the most testing. We are confident in
 LLVM/Unladen Swallow's support for x86 and x86-64 hardware. PPC and ARM
 support exists, but is not widely used and may be buggy.

How does the platform support work with new versions of operating
systems? If for example a Windows 8 is released, is it likely that
llvm will need to be adjusted to support such a new version? What kind
of effect does this have on how a Python release can add support for
such a new version? Is there evidence from past releases that gives an
indication of the time it costs the llvm team to support a new
version? Do they only add support for this to their latest release or
are such changes backported to older releases?

The Windows support doesn't seem to be a specific focus of the
project. Are there core developers of llvm that run and support
Windows or is this left to outside contributors? Is the feature set
the same across different operating systems, or are there areas in
which they differ in a significant manner?

 Managing LLVM Releases, C++ API Changes
 ---

 LLVM is released regularly every six months. This means that LLVM may be
 released two or three times during the course of development of a CPython 3.x
 release. Each LLVM release brings newer and more powerful optimizations,
 improved platform support and more sophisticated code generation.

What does the support and maintenance policy of llvm releases look
like? If a Python version is pegged to a specific llvm release, it
needs to be able to rely on critical bug fixes and security fixes to
be made for that release for a rather prolonged time. How does this
match the llvm policies given their frequent time based releases?

Sorry if those questions aren't appropriate. It's what comes to my
mind when thinking about a new and quite heavy dependency :-)

Hanno

Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Antoine Pitrou
Reid Kleckner rnk at mit.edu writes:
 
  This is positively humongous. Is there any way to shrink these numbers
  dramatically (I'm talking about the release builds)? Large executables or
  libraries may make people anxious about the interpreter's memory
  efficiency; and they will be a nuisance in many situations (think making
  standalone app bundles using py2exe or py2app).
 
 When we link against LLVM as a shared library, LLVM will still all be
 loaded into memory, but it will be shared between all python
 processes.

How large is the LLVM shared library? One surprising data point is that the
binary is much larger than some of the memory footprint measurements given in
the PEP. Do those measurements ignore some of the space taken by
unladen-swallow? Or is it rather that the binary contains lots of never used
code?

Regards

Antoine.




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 9:35 AM, Floris Bruynooghe
floris.bruynoo...@gmail.com wrote:
 I just compiled with the --without-llvm option and see that the
 binary, while only an acceptable 4.1M, still links with libstdc++.  Is
 it possible to completely get rid of the C++ dependency if this option
 is used?  Introducing a C++ dependency on all platforms for no
 additional benefit (with --without-llvm) seems like a bad tradeoff to
 me.

There isn't (and shouldn't be) any real source-level dependency on
libstdc++ when LLVM is turned off.  However, the eval loop is now
compiled as C++, and that may be adding some hidden dependency
(exception handling code?).  The final binary is linked with $(CXX),
which adds an implicit -lstdc++, I think.  Someone just has to go and
track this down.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 12:27 PM, Jake McGuire mcgu...@google.com wrote:
 On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter collinwin...@google.com 
 wrote:
 Profiling
 -

 Unladen Swallow integrates with oProfile 0.9.4 and newer [#oprofile]_ to 
 support
 assembly-level profiling on Linux systems. This means that oProfile will
 correctly symbolize JIT-compiled functions in its reports.

 Do the current python profiling tools (profile/cProfile/pstats) still
 work with Unladen Swallow?

Sort of.  They disable the use of JITed code, so they don't quite work
the way you would want them to.  Checking tstate->c_tracefunc every
line generated too much code.  They still give you a rough idea of
where your application hotspots are, though, which I think is
acceptable.

oprofile is useful for figuring out if more time is being spent in
JITed code or with interpreter overhead.

Reid


Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread David Malcolm
On Thu, 2010-01-21 at 00:06 +0100, Martin v. Löwis wrote:
  Why only set an encoding on these streams when they're directly
  connected to a tty?
 
 If you are sending data to the terminal, you can be fairly certain
 that the locale's encoding should be used. It's a convenience feature
 for the interactive mode, so that Unicode strings print correctly.
 
 When sending data to a pipe or to a file, God knows what encoding
 should have been used. If it's any XML file (for example), using the
 locale's encoding would be incorrect, and the encoding declared
 in the XML declaration should be used (or UTF-8 if no declaration
 is included). If it's a HTTP socket, it really should be restricted
 to ASCII in the headers, and then to the content-type. And so on.
 
 So in general, the applications should arrange to set the encoding
 or encode themselves when they write to some output stream. If they
 fail to do so, it's a bug in the application, not in Python.
 
  I'll patch things to remove the isatty conditional if that's acceptable.
 
 It will make your Python release incompatible with everybody else's,
 and will probably lead to moji-bake. Otherwise, it's fine.

Thanks (everyone) for your feedback

It's clear that me unilaterally making this change is extremely
unpopular, so I'm no longer planning to do so: maintaining consistency
of behavior between different downstream distributions of CPython 2.* is
the most important concern here.

For reference I filed the tty patch as http://bugs.python.org/issue7745
(I don't seem to have rights to set it closed-rejected myself).


One of my concerns here is the change of behavior between Python
programs when run at a tty versus within a shell pipeline/cronjob/system
daemon/etc, which I know many people find to be a gotcha; I know many
developers who've been burned by this difference between
development/deployment (myself included).

I suspect I'm reinventing the wheel here, but one way of avoiding this
gotcha is to set PYTHONIOENCODING=ascii, to override the tty locale
setting.

Without this, these two cases have different behavior:
[da...@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"'
αβγ

[da...@brick ~]$ python -c 'print u"\u03b1\u03b2\u03b3"' | less
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position
0-2: ordinal not in range(128)


With PYTHONIOENCODING=ascii, the two cases have the same behavior:
[da...@brick ~]$ PYTHONIOENCODING=ascii python -c 'print u"\u03b1\u03b2\u03b3"'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position
0-2: ordinal not in range(128)

[da...@brick ~]$ PYTHONIOENCODING=ascii python -c 'print u"\u03b1\u03b2\u03b3"' | less
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position
0-2: ordinal not in range(128)

(this is with my site.py reset back to the svn default)
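[The underlying point can be shown with a small sketch: wrapping a byte stream with an explicit encoding, as recommended elsewhere in this thread, makes the output behavior independent of whether stdout is a tty. Python 3 style illustration; the `explicit_writer` helper name is invented here.]

```python
import io

def explicit_writer(byte_stream, encoding="utf-8"):
    """Wrap a binary stream with a fixed text encoding, tty or not."""
    return io.TextIOWrapper(byte_stream, encoding=encoding, errors="strict")

# The encoding is now chosen by the program, not guessed from the
# environment, so a tty and a pipe behave identically.
buf = io.BytesIO()
w = explicit_writer(buf)
w.write("\u03b1\u03b2\u03b3")
w.flush()
```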

So I think there's a case for suggesting that developers set
PYTHONIOENCODING=ascii in their environment, to ensure that attempts
to write unicode to a std stream using defaults will fail immediately
during the development cycle, rather than on deployment.   (Though,
alas, that will break the corresponding cases [1] for any python3
processes if they ever inherit that envvar).

Hope this is helpful
Dave

[1] in that this works:
[da...@brick ~]$ python3.1 -c 'print("\u03b1\u03b2\u03b3")'
αβγ

but this (naive and contrived) invocation of python3 from python2 fails:
[da...@brick ~]$ PYTHONIOENCODING=ascii python2.6 -c "import os;
os.system('python3.1 -c \'print(\"\u03b1\u03b2\u03b3\")\'')"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position
0-2: ordinal not in range(128)



Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Dirkjan Ochtman
On Wed, Jan 13, 2010 at 19:51, Guido van Rossum gu...@python.org wrote:
 Please mail me topics you'd like to hear me talk about in my keynote
 at PyCon this year.

Your thoughts on a "lean stdlib" (obviously not too lean) vs. a "fat
stdlib" might be interesting.

Cheers,

Dirkjan


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Jake McGuire
On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter collinwin...@google.com wrote:
 Profiling
 -

 Unladen Swallow integrates with oProfile 0.9.4 and newer [#oprofile]_ to 
 support
 assembly-level profiling on Linux systems. This means that oProfile will
 correctly symbolize JIT-compiled functions in its reports.

Do the current python profiling tools (profile/cProfile/pstats) still
work with Unladen Swallow?

-jake


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Jake McGuire
On Thu, Jan 21, 2010 at 10:19 AM, Reid Kleckner r...@mit.edu wrote:
 On Thu, Jan 21, 2010 at 12:27 PM, Jake McGuire mcgu...@google.com wrote:
 On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter collinwin...@google.com 
 wrote:
 Profiling
 -

 Unladen Swallow integrates with oProfile 0.9.4 and newer [#oprofile]_ to 
 support
 assembly-level profiling on Linux systems. This means that oProfile will
 correctly symbolize JIT-compiled functions in its reports.

 Do the current python profiling tools (profile/cProfile/pstats) still
 work with Unladen Swallow?

 Sort of.  They disable the use of JITed code, so they don't quite work
 the way you would want them to.  Checking tstate->c_tracefunc every
 line generated too much code.  They still give you a rough idea of
 where your application hotspots are, though, which I think is
 acceptable.

Hmm.  So cProfile doesn't break, but it causes code to run under a
completely different execution model so the numbers it produces are
not connected to reality?

We've found the call graph and associated execution time information
from cProfile to be extremely useful for understanding performance
issues and tracking down regressions.  Giving that up would be a huge
blow.
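[For reference, the cProfile workflow being discussed, whose numbers would be skewed if enabling the profiler forces interpreted-only execution, looks roughly like this:]

```python
import cProfile
import io
import pstats

def fib(n):
    # A deliberately recursive hotspot for the profiler to find.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

profiler = cProfile.Profile()
profiler.enable()
fib(18)
profiler.disable()

# Build the call-graph report; sorting by cumulative time surfaces
# the hotspots and caller/callee relationships Jake describes.
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(10)
```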

-jake


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Dirkjan Ochtman
On Thu, Jan 21, 2010 at 18:32, Collin Winter collinwin...@google.com wrote:
 I added startup benchmarks for Mercurial and Bazaar yesterday
 (http://code.google.com/p/unladen-swallow/source/detail?r=1019) so we
 can use them as more macro-ish benchmarks, rather than merely starting
 the CPython binary over and over again. If you have ideas for better
 Mercurial/Bazaar startup scenarios, I'd love to hear them. The new
 hg_startup and bzr_startup benchmarks should give us some more data
 points for measuring improvements in startup time.

Sounds good! I seem to remember from a while ago that you included the
Mercurial test suite in your performance tests, but maybe those were
the correctness tests rather than the performance tests (or maybe I'm
just mistaken). I didn't see any mention of that in the proto-PEP, in
any case.

 One idea we had for improving startup time for apps like Mercurial was
 to allow the creation of hermetic Python binaries, with all
 necessary modules preloaded. This would be something like Smalltalk
 images. We haven't yet really fleshed out this idea, though.

Yeah, that might be interesting. I think V8 can do something similar, right?

One problem we've had with using py2exe on Windows is that it makes it
kind of a pain to properly support our extension/hooks mechanism
(which pretty much relies on importing stuff), so that would have to
be fixed somehow (I think py2exe apps don't have anything outside
their library.zip in the PYTHONPATH, or something like that).

What I personally would consider interesting for the PEP is a (not too
big) section evaluating where other Python-performance efforts are at.
E.g. does it make sense to propose a u-s merge now when, by the time
3.3 (or whatever) is released, there'll be a very good PyPy that
sports memory usage competitive for embedded development (already does
right now, I think) and a good tracing JIT? Or when we can compile
Python using Cython, or Shedskin -- probably not as likely; but I
think it might be worth assessing the landscape a bit before this huge
change is implemented.

Cheers,

Dirkjan

P.S. Is there any chance of LLVM doing something like tracing JITs?
Those seem somewhat more promising to me (even though I understand
they're quite hard in the face of Python features like stack frames).


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Toshio Kuratomi
On Thu, Jan 21, 2010 at 12:25:59PM +, Antoine Pitrou wrote:
  We seek guidance from the community on
  an acceptable level of increased memory usage.
 
 I think a 10-20% increase would be acceptable.
 
I'm just a user of the core interpreter but the bottleneck in using python
in my environment has almost always been memory usage and almost never speed
of execution.  Note, though, that we run multiple python apps at a time so
anything that's shared between interpreters is less costly than anything that
must be unique.

Still, any growth in memory usage is painful since that's already the
limiting resource.

  32-bit; gcc 4.0.3
  
  +-------------+---------------+---------------+----------------------+
  | Binary size | CPython 2.6.4 | CPython 3.1.1 | Unladen Swallow r988 |
  +=============+===============+===============+======================+
  | Release     | 3.8M          | 4.0M          | 74M                  |
  +-------------+---------------+---------------+----------------------+
 
 This is positively humongous. Is there any way to shrink these numbers 
 dramatically (I'm talking about the release builds)? Large executables or 
 libraries may make people anxious about the interpreter's memory 
 efficiency; and they will be a nuisance in many situations (think making 
 standalone app bundles using py2exe or py2app).
 
Binary size has an impact on linux distributions (and people who make
embedded systems from those distributions).  This kind of growth would push
the interpreter out of the livecds that we make and prevent shipping other
useful software on our DVDs and other media.

Somebody suggested building two interpreters in another part of this thread
(jit-python and normal python).  That has both pros and cons in situations
like this:

1) For some programs that we want to use the jit-python binary if available,
we'd need to implement some method of detecting its presence and using it if
present.
2) We'd need to specify that both jit-python and python are able to run this
program successfully.
3) If the compiled modules (byte code or C compiled) are incompatible
between the two interpreters that will make us cry.  (we're already shipping
two versions of things for python2.x and python3.x, also shipping a version
for jit-python would be... suboptimal.)

-Toshio




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Antoine Pitrou
Dirkjan Ochtman dirkjan at ochtman.nl writes:
 
 What I personally would consider interesting for the PEP is a (not too
 big) section evaluating where other Python-performance efforts are at.
 E.g. does it make sense to propose a u-s merge now when, by the time
 3.3 (or whatever) is released, there'll be a very good PyPy that
 sports memory usage competitive for embedded development (already does
 right now, I think) and a good tracing JIT?

I think PyPy still targets (and is written in) Python 2.5.
Given PyPy's history, it also looks quite difficult to know when it will be
ready for production (or perhaps I'm too pessimistic).

Regards

Antoine.




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Toshio Kuratomi
On Thu, Jan 21, 2010 at 09:32:23AM -0800, Collin Winter wrote:
 Hi Dirkjan,
 
 On Wed, Jan 20, 2010 at 10:55 PM, Dirkjan Ochtman dirk...@ochtman.nl wrote:
  For some apps (like Mercurial, which I happen to sometimes hack on),
  increased startup time really sucks. We already have our demandimport
  code (I believe bzr has something similar) to try and delay imports,
  to prevent us spending time on imports we don't need. Maybe it would
  be possible to do something like that in u-s? It could possibly also
  keep track of the thorny issues, like imports where there's an except
  ImportError that can do fallbacks.
 
 I added startup benchmarks for Mercurial and Bazaar yesterday
 (http://code.google.com/p/unladen-swallow/source/detail?r=1019) so we
 can use them as more macro-ish benchmarks, rather than merely starting
 the CPython binary over and over again. If you have ideas for better
 Mercurial/Bazaar startup scenarios, I'd love to hear them. The new
 hg_startup and bzr_startup benchmarks should give us some more data
 points for measuring improvements in startup time.
 
This is great!

 One idea we had for improving startup time for apps like Mercurial was
 to allow the creation of hermetic Python binaries, with all
 necessary modules preloaded. This would be something like Smalltalk
 images. We haven't yet really fleshed out this idea, though.
 
Coming from a background building packages for a Linux distribution I'd like
to know what you're designing here.  There are many things that you could mean,
most of them having problems.  An image with all of the modules contained in
it is like a statically linked binary in the C world.  This can bloat our
livemedia installs, make memory usage go up if the modules were
sharable before and no longer are, cause pain in hunting down affected
packages and getting all of the rebuilt packages to users when a security
issue is found in a library, and, for some strange reason, encourages
application authors to bundle specific versions of libraries with their
apps.

So if you're going to look into this, please be careful and try to minimize
the tradeoffs that can occur.

Thanks,
-Toshio




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
Hey Dirkjan,

On Thu, Jan 21, 2010 at 11:16 AM, Dirkjan Ochtman dirk...@ochtman.nl wrote:
 On Thu, Jan 21, 2010 at 18:32, Collin Winter collinwin...@google.com wrote:
 I added startup benchmarks for Mercurial and Bazaar yesterday
 (http://code.google.com/p/unladen-swallow/source/detail?r=1019) so we
 can use them as more macro-ish benchmarks, rather than merely starting
 the CPython binary over and over again. If you have ideas for better
 Mercurial/Bazaar startup scenarios, I'd love to hear them. The new
 hg_startup and bzr_startup benchmarks should give us some more data
 points for measuring improvements in startup time.

 Sounds good! I seem to remember from a while ago that you included the
 Mercurial test suite in your performance tests, but maybe those were
 the correctness tests rather than the performance tests (or maybe I'm
 just mistaken). I didn't see any mention of that in the proto-PEP, in
 any case.

We used to run the Mercurial correctness tests at every revision, but
they were incredibly slow and a bit flaky under CPython 2.6. Bazaar's
tests were faster, but were flakier, so we ended up disabling them,
too. We only run these tests occasionally.

 One idea we had for improving startup time for apps like Mercurial was
 to allow the creation of hermetic Python binaries, with all
 necessary modules preloaded. This would be something like Smalltalk
 images. We haven't yet really fleshed out this idea, though.

 Yeah, that might be interesting. I think V8 can do something similar, right?

Correct; V8 loads a pre-compiled image of its builtins to reduce startup time.

 What I personally would consider interesting for the PEP is a (not too
 big) section evaluating where other Python-performance efforts are at.
 E.g. does it make sense to propose a u-s merge now when, by the time
 3.3 (or whatever) is released, there'll be a very good PyPy that
 sports memory usage competitive for embedded development (already does
 right now, I think) and a good tracing JIT? Or when we can compile
 Python using Cython, or Shedskin -- probably not as likely; but I
 think it might be worth assessing the landscape a bit before this huge
 change is implemented.

I can definitely work on that.
http://codespeak.net:8099/plotsummary.html should give you a quick
starting point for PyPy's performance. My reading of those graphs is
that it does very well on heavily-numerical workloads, but is much
slower than CPython on more diverse workloads. When I initially
benchmarked PyPy vs CPython last year, PyPy was 3-5x slower on
non-numerical workloads, and 60x slower on one benchmark (./perf.py -b
pickle,unpickle, IIRC).

My quick take on Cython and Shedskin is that they are
useful-but-limited workarounds for CPython's historically-poor
performance. Shedskin, for example, does not support the entire Python
language or standard library
(http://shedskin.googlecode.com/files/shedskin-tutorial-0.3.html).
Cython is a super-set of Python, and files annotated for maximum
Cython performance are no longer valid Python code, and will not run
on any other Python implementation. The advantage of using an
integrated JIT compiler is that we can support Python-as-specified,
without workarounds or changes in workflow. The compiler can observe
which parts of user code are static (or static-ish) and take advantage
of that, without the manual annotations needed by Cython. Cython is
good for writing extension modules without worrying about the details
of reference counting, etc, but I don't see it as an either-or
alternative for a JIT compiler.

 P.S. Is there any chance of LLVM doing something like tracing JITs?
 Those seem somewhat more promising to me (even though I understand
 they're quite hard in the face of Python features like stack frames).

Yes, you could implement a tracing JIT with LLVM. We chose a
function-at-a-time JIT because it would a) be an easy-to-implement
baseline to measure future improvement, and b) create much of the
infrastructure for a future tracing JIT. Implementing a tracing JIT
that crosses the C/Python boundary would be interesting.

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
Hey Greg,

On Wed, Jan 20, 2010 at 10:54 PM, Gregory P. Smith g...@krypto.org wrote:
 +1
 My biggest concern is memory usage but it sounds like addressing that is
 already in your mind.  I don't so much mind an additional up front constant
 and per-line-of-code hit for instrumentation but leaks are unacceptable.
  Any instrumentation data or jit caches should be managed (and tunable at
 run time when possible and it makes sense).

Reducing memory usage is a high priority. One thing being worked on
right now is to avoid collecting runtime data for functions that will
never be considered hot. That's one leak in the current
implementation.

 I think having a run time flag (or environment variable for those who like
 that) to disable the use of JIT at python3 execution time would be a good
 idea.

Yep, we already have a -j flag that supports "don't ever use the JIT"
(-j never), "use the JIT when you think you should" (-j whenhot), and
"always use the JIT" (-j always) options. I'll mention this in the
PEP (we'll clearly need to make this an -X option before merger).

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 3:14 PM, Collin Winter collinwin...@google.com wrote:
 P.S. Is there any chance of LLVM doing something like tracing JITs?
 Those seem somewhat more promising to me (even though I understand
 they're quite hard in the face of Python features like stack frames).

 Yes, you could implement a tracing JIT with LLVM. We chose a
 function-at-a-time JIT because it would a) be an easy-to-implement
 baseline to measure future improvement, and b) create much of the
 infrastructure for a future tracing JIT. Implementing a tracing JIT
 that crosses the C/Python boundary would be interesting.

I was thinking about this recently.  I think it would be a good 3
month project for someone.

Basically, we could turn off feedback recording until we decide to
start a trace at a loop header, at which point we switch to recording
everything, and compile the trace into a single stream of IR with a
bunch of guards and side exits.  The side exits could be indirect tail
calls to either a side exit handler, or a freshly compiled trace
starting at the opcode where the side exit occurred.  The default
handler would switch back to the interpreter, record the trace, kick
off compilation, and patch the indirect tail call target.
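
The scheme above can be sketched in miniature (hypothetical toy code, not
anything from the Unladen Swallow tree): a counter per loop header, a
recording pass once the header gets hot, and per-element type guards that
side-exit back to the interpreter when they fail.

```python
# Toy sketch of the trace-recording scheme described above. All names are
# made up; a real implementation would record opcodes and compile to IR.

HOT_THRESHOLD = 2

class LoopTracer:
    def __init__(self):
        self.counts = {}   # loop-header id -> hit count
        self.traces = {}   # loop-header id -> recorded (tag, guard type) list

    def run_loop(self, header_id, items, op):
        trace = self.traces.get(header_id)
        if trace is not None:
            return self.replay(trace, items, op)        # "compiled" path
        self.counts[header_id] = self.counts.get(header_id, 0) + 1
        if self.counts[header_id] >= HOT_THRESHOLD:
            # Hot: record everything seen on this pass, with type guards.
            self.traces[header_id] = [("apply", type(x)) for x in items]
        return [op(x) for x in items]                   # interpreter path

    def replay(self, trace, items, op):
        out = []
        for (tag, guard_type), x in zip(trace, items):
            if type(x) is not guard_type:               # guard failed:
                return [op(y) for y in items]           # side exit
            out.append(op(x))
        return out

tracer = LoopTracer()
double = lambda x: x * 2
r1 = tracer.run_loop("loop1", [1, 2, 3], double)  # interpreted, counted
r2 = tracer.run_loop("loop1", [1, 2, 3], double)  # hot: trace recorded
r3 = tracer.run_loop("loop1", [4, 5, 6], double)  # replayed through the trace
```

In the real design sketched above, the recorded trace would be compiled
machine code and the failing side exit would patch itself to a freshly
compiled trace rather than simply falling back.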

The only limitation with that approach is that you would have to do
extra work to propagate conditions like passed guards across the call
boundary, since we currently try to throw away as much LLVM IR as
possible after compilation to save memory.

So yes, I think it would be possible to implement a tracing JIT in the
future.  If people are really interested in that, I think the best way
to get there is to land unladen in py3k as described in the PEP and do
more perf work like this there and in branches on python.org, where it
can be supported by the wider Python developer community.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
Hey Barry,

On Thu, Jan 21, 2010 at 3:34 AM, Barry Warsaw ba...@python.org wrote:
 On Jan 20, 2010, at 11:05 PM, Jack Diederich wrote:

Does disabling the LLVM change binary compatibility between modules
targeted at the same version?  At tonight's Boston PIG we had some
binary package maintainers but most people (including myself) only
cared about source compatibility.    I assume linux distros care about
binary compatibility _a lot_.

 A few questions come to mind:

 1. What are the implications for PEP 384 (Stable ABI) if U-S is added?

PEP 384 looks to be incomplete at this writing, but reading the
section "Structures", it says


Only the following structures and structure fields are accessible to
applications:

- PyObject (ob_refcnt, ob_type)
- PyVarObject (ob_base, ob_size)
- Py_buffer (buf, obj, len, itemsize, readonly, ndim, shape, strides,
suboffsets, smalltable, internal)
- PyMethodDef (ml_name, ml_meth, ml_flags, ml_doc)
- PyMemberDef (name, type, offset, flags, doc)
- PyGetSetDef (name, get, set, doc, closure)


Of these, the only one we have changed is PyMethodDef, and then to add
two fields to the end of the structure. We have changed other types
(dicts and code come to mind), but I believe we have only appended
fields and not deleted or reordered existing fields. I don't believe
that introducing the Unladen Swallow JIT will make maintaining a
stable ABI per PEP 384 more difficult. We've been careful about not
exporting any C++ symbols via PyAPI_FUNC(), so I don't believe that
will be an issue either, but Jeffrey can comment more deeply on this
issue.
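
The layout point here is checkable mechanically: appending fields to the
end of a C struct leaves the offsets of the existing fields unchanged. A
small illustration using ctypes (made-up field names modelled loosely on
PyMethodDef, not the real CPython structs):

```python
import ctypes

# Hypothetical "v1" struct and a "v2" that only appends fields at the end.
class MethodDefV1(ctypes.Structure):
    _fields_ = [("ml_name", ctypes.c_char_p),
                ("ml_meth", ctypes.c_void_p),
                ("ml_flags", ctypes.c_int),
                ("ml_doc", ctypes.c_char_p)]

class MethodDefV2(ctypes.Structure):
    # Same layout plus two invented trailing fields.
    _fields_ = MethodDefV1._fields_ + [
        ("ml_feedback", ctypes.c_void_p),
        ("ml_hotness", ctypes.c_int)]

# Every pre-existing field keeps its offset, so code compiled against v1
# that only touches the documented fields still works against v2.
for name, _ in MethodDefV1._fields_:
    assert getattr(MethodDefV1, name).offset == getattr(MethodDefV2, name).offset
```

This is exactly the property that breaks if fields are reordered or
deleted instead of appended.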

If PEP 384 is accepted, I'd like it to include a testing strategy so
that we can be sure that we haven't accidentally broken ABI
compatibility. That testing should ideally be automated.

 2. What effect does requiring C++ have on the embedded applications across the
   set of platforms that Python is currently compatible on?  In a previous
   life I had to integrate a C++ library with Python as an embedded language
   and had lots of problems on some OSes (IIRC Solaris and Windows) getting
   all the necessary components to link properly.

To be clear, you're talking about embedding Python in a C/C++
application/library?

We have successfully integrated Unladen Swallow into a large C++
application that uses Python as an embedded scripting language. There
were no special issues or restrictions that I had to overcome to do
this. If you have any applications/libraries in particular that you'd
like me to test, I'd be happy to do that.

 3. Will the U-S bits come with a roadmap to the code?  It seems like this is
   dropping a big black box of code on the Python developers, and I would want
   to reduce the learning curve as much as possible.

Yes; there is 
http://code.google.com/p/unladen-swallow/source/browse/trunk/Python/llvm_notes.txt,
which goes into developer-level detail about various optimizations and
subsystems. We have other documentation in the Unladen Swallow wiki
that is being merged into llvm_notes.txt. Simply dropping this code
onto python-dev without a guide to it would be unacceptable.
llvm_notes.txt also details available instrumentation, useful to
CPython developers who are investigating performance changes.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Paul Moore
2010/1/20 Collin Winter collinwin...@google.com:
 Hello python-dev,

 I've just committed the initial draft of PEP 3146, proposing to merge
 Unladen Swallow into CPython's source tree and roadmap. The initial
 draft is included below. I've also uploaded the PEP to Rietveld at
 http://codereview.appspot.com/186247, where individual fine-grained
 updates will be tracked. Feel free to comment either in this thread or
 on the Rietveld issue. I'll post periodic summaries of the
 discussion-to-date.

 We're looking forward to discussing this with everyone.

I'll comment on a number of points here - I've read the thread but
it'd get too complex trying to quote specific items.

First of all, I'm generally in favour of this. While I don't have any
problems with Python's speed, I still think performance improvements
are a good thing, and the fact that Unladen Swallow opens up a whole
new range of possibilities, above and beyond any immediate performance
gains, means that Python should end up with a healthy chance of
ongoing performance improvements.

I'm concerned about the memory and startup time penalties. It's nice
that you're working on them - I'd like to see them remain a priority.
Ultimately a *lot* of people use Python for short-running transient
commands (not just adhoc scripts, think hg log) and startup time and
memory penalties can really hurt there.

Windows compatibility is a big deal to me. And IMHO, it's a great
strength of Python at the moment that it has solid Windows support. I
would be strongly *against* this PEP if it was going to be Unix or
Linux only. As it is, I have concerns that Windows could suffer from
the common "none of the developers use Windows, but we do our best"
problem. I'm hoping that having U-S integrated into the core will mean
that there will be more Windows developers able to contribute and
alleviate that problem.

One question - once Unladen Swallow is integrated, will Google's
support (in terms of dedicated developer time) remain? If not, I'd
rather see more of the potential gains realised before integration, as
otherwise it could be a long time before it happens. Ideally, I'd like
to see a commitment from Google - otherwise the cynic in me is
inclined to say no until the suggested speed benefits have
materialised and only then accept U-S for integration. Less cynically,
it's clear that there's quite a way to go before the key advertised
benefits of U-S are achieved, and I don't want the project to lose
steam before it gets there.

I suppose as far as the goals in the PEP go:

 - Approval for the overall concept of adding a just-in-time compiler to 
 CPython,
   following the design laid out below.

+1

 - Permission to continue working on the just-in-time compiler in the CPython
   source tree.

+1

 - Permission to eventually merge the just-in-time compiler into the ``py3k``
   branch once all blocking issues have been addressed.

Provisionally +1, subject to clarification on "eventually". I'd rather
that not only blocking issues be addressed, but also some (not
minimal) level of performance gain be achieved.

 - A pony.

I'll leave that to Guido :-)

Paul.


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Steve Steiner (listsin)

On Jan 21, 2010, at 3:20 PM, Collin Winter wrote:

 Hey Greg,
 
 On Wed, Jan 20, 2010 at 10:54 PM, Gregory P. Smith g...@krypto.org wrote:
 +1
 My biggest concern is memory usage but it sounds like addressing that is
 already in your mind.  I don't so much mind an additional up front constant
 and per-line-of-code hit for instrumentation but leaks are unacceptable.
  Any instrumentation data or jit caches should be managed (and tunable at
 run time when possible and it makes sense).
 
 Reducing memory usage is a high priority. One thing being worked on
 right now is to avoid collecting runtime data for functions that will
 never be considered hot. That's one leak in the current
 implementation.

Me, personally, I'd rather that you give me the profile information to make my 
own decisions, give me an @hot decorator to flag things that I want to be sped 
up, and let me switch the heat-profiling gymnastics out of the runtime when I 
don't want them.

That way, I can run a profile if I want to get the info to flag the things that 
are important, but a normal run doesn't waste a lot of time or energy doing 
something I don't want it to do during a regular run.

Ideally, I could pre-JIT as much as possible on compile so that I could 
precompile my whole app and pay the minimum JIT god's penalty at runtime.

Yes, sometimes I'd like to run on full automatic, but not often.  I run a 
*lot* of quick little scripts that do a few intense things once or in a tight 
loop.  I know where the hotspots are, and I want them compiled before they're 
*ever* run.  

99% of the time, I don't need a runtime babysitter, I need a performance boost 
in known places, right away and without any load or runtime penalty to go along 
with it.
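
The workflow being asked for here might look something like this (every
name below is hypothetical -- Unladen Swallow exposes no such API; this
only illustrates the proposal):

```python
# Hypothetical @hot decorator: flag known hotspots so a startup hook can
# compile exactly those, with no runtime heat profiling.

_HOT_FUNCTIONS = []

def hot(func):
    """Mark func as a known hotspot to be compiled before it is ever run."""
    _HOT_FUNCTIONS.append(func)
    func.__hot__ = True
    return func

def precompile_all(compile_fn):
    """At startup, hand every flagged function to the JIT (or a stub)."""
    for func in _HOT_FUNCTIONS:
        compile_fn(func)

@hot
def inner_loop(data):
    return sum(x * x for x in data)

compiled = []
precompile_all(compiled.append)   # stand-in for a real "compile now" call
```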

S



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Paul Moore
2010/1/21 Paul Moore p.f.mo...@gmail.com:
 2010/1/20 Collin Winter collinwin...@google.com:
 Hello python-dev,
[...]
 We're looking forward to discussing this with everyone.

 I'll comment on a number of points here - I've read the thread but
 it'd get too complex trying to quote specific items.
[...]

One thing I forgot to mention - thanks to the individuals involved in
Unladen Swallow. So far all of your comments in response to questions
raised on this thread encourage me enormously - it's obvious that
you're all strongly committed to making this work, and that in itself
is a huge plus.

Paul.


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Alex Gaynor
On Thu, Jan 21, 2010 at 3:00 PM, Steve Steiner (listsin)
list...@integrateddevcorp.com wrote:

 On Jan 21, 2010, at 3:20 PM, Collin Winter wrote:

 Hey Greg,

 On Wed, Jan 20, 2010 at 10:54 PM, Gregory P. Smith g...@krypto.org wrote:
 +1
 My biggest concern is memory usage but it sounds like addressing that is
 already in your mind.  I don't so much mind an additional up front constant
 and per-line-of-code hit for instrumentation but leaks are unacceptable.
  Any instrumentation data or jit caches should be managed (and tunable at
 run time when possible and it makes sense).

 Reducing memory usage is a high priority. One thing being worked on
 right now is to avoid collecting runtime data for functions that will
 never be considered hot. That's one leak in the current
 implementation.

 Me, personally, I'd rather that you  give me the profile information to make 
 my own decisions, give me an @hot decorator to flag things that I want to be 
 sped up, and let me switch the heat profiling  gymnastics out of the runtime 
 when I don't want them.

 That way, I can run a profile if I want to get the info to flag the things 
 that are important, but a normal run doesn't waste a lot of time or energy 
 doing something I don't want it to do during a regular run.

 Ideally, I could pre-JIT as much as possible on compile so that I could
 precompile my whole app and pay the minimum JIT god's penalty at runtime.

 Yes, sometimes I'd like to run on full automatic, but not often.  I run a 
 *lot* of quick little scripts that do a few intense things once or in a tight 
 loop.  I know where the hotspots are, and I want them compiled before they're 
 *ever* run.


Unfortunately that model doesn't work particularly well with a JIT.
The point of a JIT is that it can respond to runtime feedback, and
take advantage of run time data.  If you were to precompile it you'd
lose interpreter overhead, and nothing else, because you can't do
things like embed pointers to data in the assembly.
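
This point can be made concrete with a toy model (hypothetical code, with
closures standing in for generated machine code): a JIT can burn a value
observed at run time into the fast path behind a guard, and deoptimize to
the generic path when the guard fails -- something an ahead-of-time
compile of the same source cannot do.

```python
class Deoptimize(Exception):
    pass

def generic_add(x, y):
    return x + y                     # fully general path

def specialize_add(observed_y):
    embedded = observed_y            # runtime datum baked into the "code"
    def compiled(x, y):
        if y != embedded:            # guard: is the assumption still true?
            raise Deoptimize
        return x + embedded          # fast path uses the embedded value
    return compiled

def call(cache, x, y):
    try:
        return cache["compiled"](x, y)
    except Deoptimize:
        return generic_add(x, y)     # side exit to the generic path

cache = {"compiled": specialize_add(10)}   # suppose we observed y == 10
fast = call(cache, 1, 10)   # guard holds: fast path
slow = call(cache, 1, 7)    # guard fails: falls back to generic_add
```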

Alex

P.S.: Sorry to anyone who I personally sent that message to, stupid
reply to all not being the default...

 99% of the time, I don't need a runtime babysitter, I need a performance 
 boost in known places, right away and without any load or runtime penalty to 
 go along with it.

 S





-- 
I disapprove of what you say, but I will defend to the death your
right to say it. -- Voltaire
The people's good is the highest law. -- Cicero
Code can always be simpler than you think, but never as simple as you
want -- Me


Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread Martin v. Löwis
 Where the default *file system encoding* is used (i.e. text files are
 written or read without specifying an encoding)

I think you misunderstand the notion of the *file system encoding*.
It is *not* a file encoding, but the file *system* encoding, i.e.
the encoding for file *names*, not for file *content*.

It was used on Windows for Windows 95; it is not used anymore on Windows
(although it's still used on Unix).

I think there are way too many specific cases where Python 3 will encode
implicitly to get a complete list from memory. If you really are
after a complete list, you'll need to perform a thorough code review.
For a few examples where some kind of default encoding is applied,
consider XML and the dbm interfaces.
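
Several of these distinct defaults can be seen side by side in a running
Python 3 interpreter (the concrete values vary by platform and locale;
only sys.getdefaultencoding() is fixed):

```python
import locale
import sys

# Python 3 keeps several separate "default" encodings, matching the point
# above that the file *system* encoding only covers file *names*:
print(sys.getdefaultencoding())            # str<->bytes default ('utf-8')
print(sys.getfilesystemencoding())         # file *names* (os.fsencode/fsdecode)
print(locale.getpreferredencoding(False))  # open()'s default for file *content*
print(sys.stdout.encoding)                 # each stream has its own encoding
```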

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Martin v. Löwis
 1. What are the implications for PEP 384 (Stable ABI) if U-S is added?

I haven't studied U-S yet, but I'd hope that there might be no
implications. No basic API should change, and everything the JIT
compiler does should be well shielded from the object API (which PEP 384
deals with).

 2. What effect does requiring C++ have on the embedded applications across the
set of platforms that Python is currently compatible on?

I think this depends on the operating system. On systems with a single
C++ compiler, integration should be easy. On Windows, it may be that
the C++ runtime of LLVM can be encapsulated so well that nothing of it
surfaces into Python, allowing, in theory, to include a CRT of a
different C++ compiler as well (and not requiring any C++ for the host
application).

On systems with multiple C++ compilers (e.g. Solaris), things may get
tricky, and it may be required to use the same C++ compiler throughout
the entire application. Building Python without JIT then might be the
best option.

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Martin v. Löwis
Reid Kleckner wrote:
 On Thu, Jan 21, 2010 at 7:25 AM, Antoine Pitrou solip...@pitrou.net wrote:
 32-bit; gcc 4.0.3

 +-------------+---------------+---------------+----------------------+
 | Binary size | CPython 2.6.4 | CPython 3.1.1 | Unladen Swallow r988 |
 +=============+===============+===============+======================+
 | Release     | 3.8M          | 4.0M          | 74M                  |
 +-------------+---------------+---------------+----------------------+
 This is positively humongous. Is there any way to shrink these numbers
 dramatically (I'm talking about the release builds)? Large executables or
 libraries may make people anxious about the interpreter's memory
 efficiency; and they will be a nuisance in many situations (think making
 standalone app bundles using py2exe or py2app).
 
 When we link against LLVM as a shared library, LLVM will still all be
 loaded into memory, but it will be shared between all python
 processes.

Even if you don't, it will *still* be shared, since the operating system
will also share executables.

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Martin v. Löwis
 How large is the LLVM shared library? One surprising data point is that the
 binary is much larger than some of the memory footprint measurements given in
 the PEP.

Could it be that you need to strip the binary, or otherwise remove
unneeded debug information?

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Martin v. Löwis
 There is freeze:
 http://wiki.python.org/moin/Freeze
 
 Which IIRC Robert Collins tried in the past, but didn't see a huge gain.
 It at least tries to compile all of your python files to C files and
 then build an executable out of that.

"to C files" is a bit of an exaggeration, though. It embeds the byte
code into the executable. When loading the byte code, Python still has
to perform unmarshalling.
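
What freeze embeds can be shown in miniature: the marshalled byte code of
a module, which still has to be unmarshalled (and its top level executed)
at import time. A sketch using the marshal module directly:

```python
import marshal

# Compile a tiny module to a code object, then serialize it -- the bytes
# below are what freeze would embed in a C array in the executable.
source = "def greet(name):\n    return 'hello ' + name\n"
code = compile(source, "<frozen demo>", "exec")
frozen_bytes = marshal.dumps(code)

# At "import" time, the unmarshalling and top-level execution cost is
# still paid, which is why freeze alone doesn't buy much startup time.
namespace = {}
exec(marshal.loads(frozen_bytes), namespace)
result = namespace["greet"]("world")
```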

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Dirkjan Ochtman
Hey Collin,

Thanks for the good answers so far!

On Thu, Jan 21, 2010 at 21:14, Collin Winter collinwin...@google.com wrote:
 We used to run the Mercurial correctness tests at every revision, but
 they were incredibly slow and a bit flaky under CPython 2.6. Bazaar's
 tests were faster, but were flakier, so we ended up disabling them,
 too. We only run these tests occasionally.

Oh, how I know how slow the tests are. I think the flakiness should be
limited to only a few tests (notably the inotify ones), but I guess
you may have enough correctness tests going to not have to test
everything at every revision.

 I can definitely work on that.
 http://codespeak.net:8099/plotsummary.html should give you a quick
 starting point for PyPy's performance. My reading of those graphs is
 that it does very well on heavily-numerical workloads, but is much
 slower than CPython on more diverse workloads. When I initially
 benchmarked PyPy vs CPython last year, PyPy was 3-5x slower on
 non-numerical workloads, and 60x slower on one benchmark (./perf.py -b
 pickle,unpickle, IIRC).

Yes, it makes sense from what I've seen to suppose that PyPy so far
has been focused more on numeric tasks, rather than more
common/diverse usage. Nevertheless, I think the PyPy JIT has come a
long way in the past 6 months or so (there should be a PyPy 1.1
release relatively soon, I think), and I think it makes sense at least
to take this into account in a kind of landscape section.

 My quick take on Cython and Shedskin is that they are
 useful-but-limited workarounds for CPython's historically-poor
 performance. Shedskin, for example, does not support the entire Python
 language or standard library
 (http://shedskin.googlecode.com/files/shedskin-tutorial-0.3.html).

Perfect, now put something like this in the PEP, please. ;)

 Yes, you could implement a tracing JIT with LLVM. We chose a
 function-at-a-time JIT because it would a) be an easy-to-implement
 baseline to measure future improvement, and b) create much of the
 infrastructure for a future tracing JIT. Implementing a tracing JIT
 that crosses the C/Python boundary would be interesting.

Okay, this (combined with Reid's later post about it being a good
3-month project) sounds promising enough.

Cheers,

Dirkjan


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Martin v. Löwis
 On Wed, Jan 20, 2010 at 10:54:11PM -0800, Gregory P. Smith wrote:
 I think having a run time flag (or environment variable for those who like
 that) to disable the use of JIT at python3 execution time would be a good
 idea.
 
 Another approach could be to compile two binaries, 'python' which is
 smaller and doesn't contain the JIT, and jit-python, which is much
 larger and contains LLVM.

I wonder whether the JIT compiler could become a module, which, when
loaded, replaces a few function pointers (like the eval loop).
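
That idea can be modelled in a few lines (a toy Python stand-in; the real
hook would be a C-level function pointer inside the interpreter, and none
of these names exist in CPython):

```python
class Interp:
    def __init__(self):
        # The "function pointer": defaults to the plain eval loop.
        self.eval_frame = self.eval_default

    def eval_default(self, func, args):
        return func(*args)                 # plain interpreter path

    def load_jit(self):
        """Loading the hypothetical JIT module swaps the pointer."""
        calls = []
        def eval_jit(func, args):
            calls.append(func.__name__)    # pretend to profile/compile here
            return func(*args)
        self.eval_frame = eval_jit         # pointer replaced at load time
        self.jit_calls = calls

interp = Interp()
before = interp.eval_frame(abs, (-3,))     # goes through the default loop
interp.load_jit()
after = interp.eval_frame(abs, (-3,))      # now goes through the "JIT"
```

The appeal of this design is that a build without the JIT module simply
never swaps the pointer and pays nothing.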

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread John Arbash Meinel
Martin v. Löwis wrote:
 There is freeze:
 http://wiki.python.org/moin/Freeze

 Which IIRC Robert Collins tried in the past, but didn't see a huge gain.
 It at least tries to compile all of your python files to C files and
 then build an executable out of that.
 
 "to C files" is a bit of an exaggeration, though. It embeds the byte
 code into the executable. When loading the byte code, Python still has
 to perform unmarshalling.
 
 Regards,
 Martin
 

Sure, though it sounds quite similar to what they were mentioning with:
the creation of hermetic Python binaries, with all necessary modules
preloaded

My understanding was that because 'stuff' happens at import time, there
isn't a lot that you can do to improve startup time. I guess it depends
on what sort of state you could persist safely. And, of course, what you
could get away with for a library would probably be different than what
you could do with a standalone app.

John
=:-


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Martin v. Löwis
 Sure, though it sounds quite similar to what they were mentioning with:
 the creation of hermetic Python binaries, with all necessary modules
 preloaded

I wondered whether this hermetic binary would also include the result of
JIT compilation - if so, it would go beyond freeze, and contain actual
machine code.

Regards,
Martin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter collinwin...@google.com wrote:
[snip]
 Incremental builds, however, are significantly slower. The table below shows
 incremental rebuild times after touching ``Objects/listobject.c``.

 +-------------+---------------+---------------+----------------------+
 | Incr make   | CPython 2.6.4 | CPython 3.1.1 | Unladen Swallow r988 |
 +=============+===============+===============+======================+
 | Run 1       | 0m1.854s      | 0m1.456s      | 0m24.464s            |
 +-------------+---------------+---------------+----------------------+
 | Run 2       | 0m1.437s      | 0m1.442s      | 0m24.416s            |
 +-------------+---------------+---------------+----------------------+
 | Run 3       | 0m1.440s      | 0m1.425s      | 0m24.352s            |
 +-------------+---------------+---------------+----------------------+

http://code.google.com/p/unladen-swallow/source/detail?r=1015 has
significantly improved this situation. The new table of incremental
build times:

+-------------+---------------+---------------+-----------------------+
| Incr make   | CPython 2.6.4 | CPython 3.1.1 | Unladen Swallow r1024 |
+=============+===============+===============+=======================+
| Run 1       | 0m1.854s      | 0m1.456s      | 0m6.680s              |
+-------------+---------------+---------------+-----------------------+
| Run 2       | 0m1.437s      | 0m1.442s      | 0m5.310s              |
+-------------+---------------+---------------+-----------------------+
| Run 3       | 0m1.440s      | 0m1.425s      | 0m7.639s              |
+-------------+---------------+---------------+-----------------------+

The remaining increase is from statically linking LLVM into libpython.

PEP updated: http://codereview.appspot.com/186247/diff2/1:4/5

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 4:34 PM, Martin v. Löwis mar...@v.loewis.de wrote:
 How large is the LLVM shared library? One surprising data point is that the
 binary is much larger than some of the memory footprint measurements given in
 the PEP.

 Could it be that you need to strip the binary, or otherwise remove
 unneeded debug information?

Python is always built with debug information (-g), at least it was in
2.6.1 which unladen is based off of, and we've made sure to build LLVM
the same way.  We had to muck with the LLVM build system to get it to
include debugging information.  On my system, stripping the python
binary takes it from 82 MB to 9.7 MB.  So yes, it contains extra debug
info, which explains the footprint measurements.  The question is
whether we want LLVM built with debug info or not.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread David Malcolm
On Wed, 2010-01-20 at 14:27 -0800, Collin Winter wrote:

[snip]

 At a high level, the Unladen Swallow JIT compiler works by translating a
 function's CPython bytecode to platform-specific machine code, using data
 collected at runtime, as well as classical compiler optimizations, to improve
 the quality of the generated machine code. Because we only want to spend
 resources compiling Python code that will actually benefit the runtime of the
 program, an online heuristic is used to assess how hot a given function is.
 Once the hotness value for a function crosses a given threshold, it is
 selected for
 compilation and optimization. Until a function is judged hot, however, it runs
 in the standard CPython eval loop, which in Unladen Swallow has been
 instrumented to record interesting data about each bytecode executed. This
 runtime data is used to reduce the flexibility of the generated machine code,
 allowing us to optimize for the common case. For example, we collect data on
 
 - Whether a branch was taken/not taken. If a branch is never taken, we will
   not compile it to machine code.
 - Types used by operators. If we find that ``a + b`` is only ever adding
   integers, the generated machine code for that snippet will not support
   adding floats.
 - Functions called at each callsite. If we find that a particular ``foo()``
   callsite is always calling the same ``foo`` function, we can optimize the
    call or inline it away.
 
 Refer to [#us-llvm-notes]_ for a complete list of data points gathered and how
 they are used.
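For illustration, the promotion heuristic described in the quoted text can be
modeled with a toy counter. The class name, weights, and threshold below are
invented for this sketch; the real Unladen Swallow counters live in the
instrumented eval loop and are documented in llvm_notes.txt:

```python
# Toy model of a "hotness" heuristic (hypothetical numbers, not the
# actual Unladen Swallow implementation).
HOTNESS_THRESHOLD = 100000  # invented promotion threshold

class FunctionProfile:
    def __init__(self, name):
        self.name = name
        self.hotness = 0
        self.compiled = False

    def record_call(self, loop_iterations=0):
        # Both calls and loop iterations contribute to hotness, so
        # long-running loopy functions also get promoted eventually.
        self.hotness += 10 + loop_iterations
        if not self.compiled and self.hotness >= HOTNESS_THRESHOLD:
            self.compiled = True  # here the JIT compiler would kick in

profile = FunctionProfile("foo")
for _ in range(10000):
    profile.record_call()
assert profile.compiled  # 10,000 calls x 10 points crosses the threshold
```

Until `compiled` flips to True, the function would keep running in the
instrumented eval loop, gathering the branch/type/callsite data listed above.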

[snip]

To what extent would it be possible to (conditionally) use full
ahead-of-time compilation as well as JIT?

With my downstream distributor of Python hat on, I'm wondering if it
would be feasible to replace the current precompiled .pyc/.pyo files in
marshal format with .so/.dll files in platform-specific shared-library
format, so that the pre-compiled versions of the stdlib could be
memory-mapped and shared between all Python processes on a system.  This
ought to dramatically reduce the whole-system memory load of the various
Python processes, whilst giving a reduction in CPU usage.  Distributors
of Python could build these shared libraries as part of the packaging
process, so that e.g. all of the Fedora python3 rpm packages would
contain .so files for every .py  (and this could apply to packaged
add-ons as well, so that every module you import would typically be
pre-compiled); startup of a python process would then involve
shared-readonly mmap-ing these files (which would typically be already
paged in if you're doing a lot of Python).

Potentially part of the memory bloat you're seeing could be debug data;
if that's the case, then the debug information could be stripped from
those .so files and shipped in a debuginfo package, to be loaded on
demand by the debugger (we do something like this in Fedora with our
RPMs for regular shared libraries and binaries).

(I wonder if to do this well would require adding annotations to the
code with hints about types to expect, since you'd have to lose the
run-time instrumentation, I think).

I did some research into the benefits of mmap-ing the data in .pyc files
to try to share the immutable data between them.  Executive summary is
that a (rather modest) saving of about 200K of heap usage per python
process is possible that way (with a rewrite of PyStringObject), with
higher savings depending on how many modules you import;  see:
http://dmalcolm.livejournal.com/4183.html
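The sharing mechanism I'm relying on can be illustrated with a small
standalone sketch (this is not the PyStringObject rewrite itself): two
read-only mappings of the same file stand in for two Python processes mapping
a pre-compiled stdlib, and the OS shares the underlying physical pages
between them.

```python
import mmap
import os
import tempfile

# Two independent read-only mappings of one file: the kernel backs both
# with the same physical pages, so the data is paged in once system-wide.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"precompiled module data")
    path = f.name

fd1 = os.open(path, os.O_RDONLY)
fd2 = os.open(path, os.O_RDONLY)
m1 = mmap.mmap(fd1, 0, access=mmap.ACCESS_READ)
m2 = mmap.mmap(fd2, 0, access=mmap.ACCESS_READ)

view1 = bytes(m1[:])
view2 = bytes(m2[:])

m1.close(); m2.close()
os.close(fd1); os.close(fd2)
os.unlink(path)

assert view1 == view2 == b"precompiled module data"
```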

I'd expect to see this approach be more worthwhile when the in-memory
sizes of the modules get larger (hence this email).

[snip]

Hope this is helpful
Dave



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
Hi Paul,

On Thu, Jan 21, 2010 at 12:56 PM, Paul Moore p.f.mo...@gmail.com wrote:
 I'm concerned about the memory and startup time penalties. It's nice
 that you're working on them - I'd like to see them remain a priority.
 Ultimately a *lot* of people use Python for short-running transient
 commands (not just adhoc scripts, think hg log) and startup time and
 memory penalties can really hurt there.

I think final merger from the proposed py3k-jit branch into py3k
should block on reducing the startup and memory usage penalties.
Improving startup time is high on my list of priorities, and I think
there's a fair amount of low-hanging fruit there. I've just updated
http://code.google.com/p/unladen-swallow/issues/detail?id=64 with some
ideas that have recently come up to improve startup time, as well
results from the recently-added hg_startup and bzr_startup benchmarks.
I'll also update the PEP with these benchmark results, since they're
important to a lot of people.

 Windows compatibility is a big deal to me. And IMHO, it's a great
 strength of Python at the moment that it has solid Windows support. I
 would be strongly *against* this PEP if it was going to be Unix or
 Linux only. As it is, I have concerns that Windows could suffer from
 the common "none of the developers use Windows, but we do our best"
 problem. I'm hoping that having U-S integrated into the core will mean
 that there will be more Windows developers able to contribute and
 alleviate that problem.

One of our contributors, James Abbatiello (cc'd), has done a bang-up
job of making Unladen Swallow work on Windows. My understanding from
his last update is that Unladen Swallow works well on Windows, but he
can comment further as to the precise state of Windows support and any
remaining challenges faced on that platform, if any.

 One question - once Unladen Swallow is integrated, will Google's
 support (in terms of dedicated developer time) remain? If not, I'd
 rather see more of the potential gains realised before integration, as
 otherwise it could be a long time before it happens. Ideally, I'd like
 to see a commitment from Google - otherwise the cynic in me is
 inclined to say no until the suggested speed benefits have
 materialised and only then accept U-S for integration. Less cynically,
 it's clear that there's quite a way to go before the key advertised
 benefits of U-S are achieved, and I don't want the project to lose
 steam before it gets there.

While this decision is not mine, I don't believe our director would be
open to an open-ended commitment of full-time Google engineering
resources, though we still have another few quarters of engineering
time allocated to the Google team (myself, Jeffrey Yasskin). At this
point, the clear majority of Unladen Swallow's performance patches are
coming from the non-Google developers on the project (Jeffrey and I
are mostly working on infrastructure), and I believe that pattern will
continue once the py3k-jit branch is established.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Martin v. Löwis
Reid Kleckner wrote:
 On Thu, Jan 21, 2010 at 4:34 PM, Martin v. Löwis mar...@v.loewis.de wrote:
 How large is the LLVM shared library? One surprising data point is that the
  binary is much larger than some of the memory footprint measurements given
  in the PEP.
 Could it be that you need to strip the binary, or otherwise remove
 unneeded debug information?
 
 Python is always built with debug information (-g), at least it was in
 2.6.1 which unladen is based off of, and we've made sure to build LLVM
 the same way.  We had to muck with the LLVM build system to get it to
 include debugging information.  On my system, stripping the python
 binary takes it from 82 MB to 9.7 MB.  So yes, it contains extra debug
 info, which explains the footprint measurements.  The question is
 whether we want LLVM built with debug info or not.

Ok, so if 70MB are debug information, I think a lot of the concerns are
removed:
- debug information doesn't consume any main memory, as it doesn't get
  mapped when the process is started.
- debug information also doesn't take up space in the system
  distributions, as they distribute stripped binaries.

As 10MB is still 10 times as large as a current Python binary, people
will probably search for ways to reduce that further, or at least split
it up into pieces.

Regards,
Martin



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
On Thu, Jan 21, 2010 at 2:24 PM, Collin Winter collinwin...@google.com wrote:
 I'll also update the PEP with these benchmark results, since they're
 important to a lot of people.

Done; see http://codereview.appspot.com/186247/diff2/4:8/9 for the
wording change and new startup data.

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Martin v. Löwis
 With my downstream distributor of Python hat on, I'm wondering if it
 would be feasible to replace the current precompiled .pyc/.pyo files in
 marshal format with .so/.dll files in platform-specific shared-library
 format, so that the pre-compiled versions of the stdlib could be
 memory-mapped and shared between all Python processes on a system.

I don't think replacing the byte code will be feasible, at least not
without breaking compatibility (else f.func_code.co_code would stop
working).
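For reference, the introspection in question looks like this (a minimal
sketch; `f.func_code` is the Python 2 spelling, `f.__code__` the Python 3
one):

```python
# User code can inspect the raw bytecode of any function as a bytes
# object, so byte code cannot simply be replaced by native machine
# code without breaking this introspection.
def f(a, b):
    return a + b

code_obj = f.__code__          # f.func_code in Python 2
raw = code_obj.co_code         # the serialized bytecode itself
assert isinstance(raw, bytes)
assert len(raw) > 0
```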

I also think you are overestimating the potential for sharing: much
of what lives in pyc files are actual Python objects, which need to
be reference-counted; doing this in a shared fashion is not feasible.

Regards,
Martin



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Jeffrey Yasskin
On Thu, Jan 21, 2010 at 10:09 AM, Hanno Schlichting ha...@hannosch.eu wrote:
 I'm a relative outsider to core development (I'm just a Plone release
 manager), but'll allow myself a couple of questions. Feel free to
 ignore them, if you think they are not relevant at this point :-) I'd
 note that I'm generally enthusiastic and supportive of the proposal :)
 As a data point, I can add that all tests of Zope 2.12 / Plone 4.0 and
 their dependency set run fine under Unladen Swallow.

Hi, thanks for checking that!

 On Wed, Jan 20, 2010 at 11:27 PM, Collin Winter collinwin...@google.com 
 wrote:
 We have chosen to reuse a set of existing compiler libraries called LLVM
 [#llvm]_ for code generation and code optimization.

 Would it be prudent to ask for more information about the llvm
 project? Especially in terms of its non-code related aspects. I can
 try to hunt down this information myself, but as a complete outsider
 to the llvm project this takes much longer, compared to someone who
 has interacted with the project as closely as you have.

 Questions like:

 - Who holds the copyright, is there a foundation or is there a risk of
 the project getting into trouble because of copyright issues?

See http://llvm.org/docs/DeveloperPolicy.html#clp. The University of
Illinois holds the copyright.

 - What licence is the code and the tools under and what affect does
 that have on the code generated by the JIT compiler?

LLVM's under the University of Illinois/NCSA license, which is
BSD-like: http://www.opensource.org/licenses/UoI-NCSA.php or
http://llvm.org/releases/2.6/LICENSE.TXT. If I understand correctly,
code transformed by a compiler isn't a work derived from that
compiler, so the compiler's license has no bearing on the license of
the output code.

 - What's the bus factor of the project? Is it in practice dependent on
 a single BDFL or a single company like Apple?

I believe the project would survive either Chris Lattner or Apple
getting bored. Even though Apple provides much of the development
effort right now, LLVM has enough users (http://llvm.org/Users.html)
to pick up the slack if they need to. Adobe, Cray, Google, NVidia, and
Rapidmind all contribute patches back to LLVM. (And Unladen Swallow
isn't the only reason Google cares.)

 - What's the status of the documentation or web presence? Does the
 project consist only of developers or has it been able to attract
 people from other backgrounds?

The documentation's quite good: http://llvm.org/docs/. What other
backgrounds are you asking about? I don't think they have any
sculptors on the project. ;)

 On the code related side you described general enthusiasm and support
 of the llvm community, but noted that some aspect of it aren't as
 mature as others. You also noted that given Python's good test
 coverage you have been able to find problems in llvm itself. This
 raises some questions to me:

 - Does the llvm project employ automated test driven development, are
 there functioning nightly test runs?

Yep: http://google1.osuosl.org:8011/console. We've been adding tests
as we fix bugs so they don't regress. They also have a larger
nightly test suite that runs whole programs,
http://llvm.org/docs/TestingGuide.html#quicktestsuite, but it isn't
actually run every night.

 - What do they do to ensure a good quality of their software, how many
 critical regressions have their been in final releases?

In addition to the unit tests and nightly tests, they have beta and
release candidate phases during which they ask the community to test
out the new release with our projects.
Nick Lewycky (cc'ed) answered your second question with: "back in ...
1.7-ish? we had to do a 1.7.1 release within 24 hours due to an
accidental release with a critical regression." Since LLVM's now on
2.6, I take that as "not very many".

 - Is there a clearly distinguished core and stable part of the project
 that is separate from more experimental ideas?

Not really.

 - What's the status of their known bugs? Do important issues get
 resolved in a reasonable time?

I'm not entirely sure how to answer that. The bugs database is at
http://llvm.org/bugs, and they've tended to fix bugs I ran into pretty
quickly, when those bugs affected the static compiler. JIT bugs get
handled somewhat more slowly, but I've also been able to fix those
bugs myself fairly easily.

 - Did they have security problems related to their codebase, have
 those been handled in a professional manner?

I haven't been exposed to their resolution of any security problems,
so I can't really answer that either. Inside of Python, LLVM will be
pretty insulated from the outside world, so the risk exposure
shouldn't be _too_ great, but I'm not really qualified to evaluate
that. If a Python user could send arbitrary IR into LLVM, they could
probably exploit the process, but Unladen doesn't give users a way to
do that. Nick can talk more about this.

 There's probably more of these questions you tend to ask yourself
 somewhat 

Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread David Malcolm
On Thu, 2010-01-21 at 23:42 +0100, Martin v. Löwis wrote:
  With my downstream distributor of Python hat on, I'm wondering if it
  would be feasible to replace the current precompiled .pyc/.pyo files in
  marshal format with .so/.dll files in platform-specific shared-library
  format, so that the pre-compiled versions of the stdlib could be
  memory-mapped and shared between all Python processes on a system.
 
 I don't think replacing the byte code will be feasible, at least not
 without breaking compatibility (else f.func_code.co_code would stop
 working).

co_code would remain a PyObject* referencing a PyBytesObject instance.

 I also think you are overestimating the potential for sharing: much
 of what lives in pyc files are actual Python objects, which need to
 be reference-counted; doing this in a shared fashion is not feasible.
 
The struct PyObject instances themselves wouldn't be shared; my idea
(for 2.*) was to introduce a new ob_sstate value into PyStringObject
indicating a pointer into a shared memory area, so that this large
immutable data can be shared; something like this:

typedef struct {
    PyObject_VAR_HEAD
    long ob_shash;
    int ob_sstate;
    union {
        char ob_sval[1];
        char *ob_sdata;
    };

    ...
} PyStringObject;

In Py3k the ob_sstate has gone away (from PyBytesObject), so another
approach would be needed (e.g. add an indirection to PyBytesObject).





Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Scott Dial
On 1/21/2010 1:09 PM, Hanno Schlichting wrote:
 - Who holds the copyright, is there a foundation or is there a risk of
 the project getting into trouble because of copyright issues?
 - What licence is the code and the tools under and what affect does
 that have on the code generated by the JIT compiler?

http://llvm.org/docs/DeveloperPolicy.html#clp


We intend to keep LLVM perpetually open source and to use a liberal open
source license. The current license is the University of Illinois/NCSA
Open Source License[1], which boils down to this:

* You can freely distribute LLVM.
* You must retain the copyright notice if you redistribute LLVM.
* Binaries derived from LLVM must reproduce the copyright notice
(e.g. in an included readme file).
* You can't use our names to promote your LLVM derived products.
* There's no warranty on LLVM at all.

We believe this fosters the widest adoption of LLVM because it allows
commercial products to be derived from LLVM with few restrictions and
without a requirement for making any derived works also open source
(i.e. LLVM's license is not a copyleft license like the GPL). We
suggest that you read the License[1] if further clarification is needed.



We have no plans to change the license of LLVM. If you have questions or
comments about the license, please contact the LLVM Oversight Group[2].


[1] http://www.opensource.org/licenses/UoI-NCSA.php
[2] llvm-oversi...@cs.uiuc.edu

-- 
Scott Dial
sc...@scottdial.com
scod...@cs.indiana.edu


Re: [Python-Dev] Proposed downstream change to site.py in Fedora (sys.defaultencoding)

2010-01-21 Thread David Malcolm
On Thu, 2010-01-21 at 22:21 +0100, Martin v. Löwis wrote:
  Where the default *file system encoding* is used (i.e. text files are
  written or read without specifying an encoding)
 
 I think you misunderstand the notion of the *file system encoding*.
 It is *not* a file encoding, but the file *system* encoding, i.e.
 the encoding for file *names*, not for file *content*.
 
 It was used on Windows for Windows 95; it is not used anymore on Windows
 (although it's still used on Unix).
 
 I think there are way too many specific cases where Python 3 will encode
 implicitly to get a complete list from the memory. If you really are
 after a complete list, you'll need to perform a thorough code review.
 For a few examples where some kind of default encoding is applied,
 consider XML and the dbm interfaces.

Thanks for the clarification.

To add to the fun, libraries accessed via wrapper modules may have
their own ideas about filename encodings as well.  For example, GTK's
GLib library uses the environment variables G_FILENAME_ENCODING and
G_BROKEN_FILENAMES when converting between strings and OS calls [1].

Dave

[1] http://library.gnome.org/devel/glib/stable/glib-running.html
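A minimal sketch of the distinction being discussed here, using only the
stdlib: the file *system* encoding governs how str filenames become bytes for
OS calls, while the file *content* encoding is chosen independently per
stream.

```python
import sys

# The encoding Python uses to turn str filenames into bytes for the OS.
fs_enc = sys.getfilesystemencoding()

# Filename -> bytes conversion (roughly what the os.* layer does on Unix):
name = "example.txt"
encoded_name = name.encode(fs_enc)
assert encoded_name.decode(fs_enc) == name

# File *content* encoding is a separate, per-stream choice, e.g.:
#     open("data.txt", "w", encoding="utf-8")
# which is independent of fs_enc above.
```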



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
Hey Antoine,

On Thu, Jan 21, 2010 at 4:25 AM, Antoine Pitrou solip...@pitrou.net wrote:
 The increased memory usage comes from a) LLVM code generation, analysis
 and optimization libraries; b) native code; c) memory usage issues or
 leaks in LLVM; d) data structures needed to optimize and generate
 machine code; e) as-yet uncategorized other sources.

 Does the increase in memory occupation disappear when the JIT is disabled
 from the command-line?

It does not disappear, but it is significantly reduced. Running our
django benchmark against three different configurations gives me these
max memory usage numbers:

CPython 2.6.4: 8508 kb
Unladen Swallow default: 26768 kb
Unladen Swallow -j never: 15144 kb

-j never is Unladen Swallow's flag to disable JIT compilation.

As it stands right now, -j never gives a 1.76x reduction in memory
usage, but is still 1.77x larger than CPython. It occurs to me that
we're still doing a lot of LLVM-side initialization and setup that we
don't need to do under -j never. We're also collecting runtime
feedback in the eval loop, which is yet more memory usage. Optimizing
this mode has not yet been a priority for us, but it seems to be the
emerging consensus of python-dev that we need to give -j never some
more love. There's a lot of low-hanging fruit there.
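For reference, one simple way to collect peak-memory numbers like those above
on Unix is via `resource.getrusage`. This is not Unladen Swallow's actual
benchmark harness (which measures child processes), and note that
`ru_maxrss` is reported in kilobytes on Linux but bytes on macOS:

```python
import resource

# Peak resident set size of the current process so far.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
assert peak > 0  # units are platform-dependent (kB on Linux, bytes on macOS)
```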

I've added this information to
http://code.google.com/p/unladen-swallow/issues/detail?id=123, which
is our issue tracking -j never improvements.

 Do you think LLVM might suffer from a lot of memory leaks?

I don't know that it suffers from a lot of memory leaks, though we
have certainly observed and fixed quadratic memory usage in some of
the optimization passes. We've fixed all the memory leaks that
Google's internal heapchecker has found.

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
On Thu, Jan 21, 2010 at 10:14 AM, Reid Kleckner r...@mit.edu wrote:
 On Thu, Jan 21, 2010 at 9:35 AM, Floris Bruynooghe
 floris.bruynoo...@gmail.com wrote:
 I just compiled with the --without-llvm option and see that the
 binary, while only an acceptable 4.1M, still links with libstdc++.  Is
 it possible to completely get rid of the C++ dependency if this option
 is used?  Introducing a C++ dependency on all platforms for no
 additional benefit (with --without-llvm) seems like a bad tradeoff to
 me.

 There isn't (and shouldn't be) any real source-level dependency on
 libstdc++ when LLVM is turned off.  However, the eval loop is now
 compiled as C++, and that may be adding some hidden dependency
 (exception handling code?).  The final binary is linked with $(CXX),
 which adds an implicit -lstdc++, I think.  Someone just has to go and
 track this down.

We've opened http://code.google.com/p/unladen-swallow/issues/detail?id=124
to track this issue. It should be straight-forward to fix.

Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
Hey Glyph,

On Thu, Jan 21, 2010 at 9:11 AM, Glyph Lefkowitz
gl...@twistedmatrix.com wrote:
 It would be hard for me to put an exact number on what I would find 
 acceptable, but I was really hoping that we could get a *reduced* memory 
 footprint in the long term.

 My real concern here is not absolute memory usage, but usage for each 
 additional Python process on a system; even if Python supported fast, 
 GIL-free multithreading, I'd still prefer the additional isolation of 
 multiprocess concurrency.  As it currently stands, starting cores+1 Python 
 processes can start to really hurt, especially in many-core-low-RAM 
 environments like the Playstation 3.

 So, if memory usage went up by 20%, but per-interpreter overhead were 
 decreased by more than that, I'd personally be happy.

There's been a recent thread on our mailing list about a patch that
dramatically reduces the memory footprint of multiprocess concurrency
by separating reference counts from objects. We're looking at possibly
incorporating this work into Unladen Swallow, though I think it should
really go into upstream CPython first (since it's largely orthogonal
to the JIT work). You can see the thread here:
http://groups.google.com/group/unladen-swallow/browse_thread/thread/21d7248e8279b328/2343816abd1bd669

Thanks,
Collin Winter


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Barry Warsaw
On Jan 21, 2010, at 12:42 PM, Collin Winter wrote:

Hi Collin,

I don't believe that introducing the Unladen Swallow JIT will make
maintaining a stable ABI per PEP 384 more difficult. We've been careful about
not exporting any C++ symbols via PyAPI_FUNC(), so I don't believe that will
be an issue either, but Jeffrey can comment more deeply on this issue.

Cool.  Martin seems to agree that U-S will have little effect on PEP 384.

If PEP 384 is accepted, I'd like it to include a testing strategy so
that we can be sure that we haven't accidentally broken ABI
compatibility. That testing should ideally be automated.

Agreed.

To be clear, you're talking about embedding Python in a C/C++
application/library?

We have successfully integrated Unladen Swallow into a large C++
application that uses Python as an embedded scripting language. There
were no special issues or restrictions that I had to overcome to do
this. If you have any applications/libraries in particular that you'd
like me to test, I'd be happy to do that.

Martin's follow up reminds me what the issues with C++ here are.  They center
around which C++ compilers you use on which platforms.  Solaris, and to some
extent Windows IIRC, were the most problematic for the work I was doing 3+
years ago.  Usually, everything's fine if you're compiling all the code with
the same compiler.  The problem comes if you want to mix say C++ libraries
compiled with Sun's C++ compiler and Python compiled with g++.  Unlike the C
world, where it doesn't matter much, once C++ is in the mix you have to be
very careful about how all your libraries and core binary are compiled and
linked.

This might not be a show-stopper, but it will make things more difficult for
Python users, so we have to add this to the list of implications for including
C++ in Python's core.

Yes; there is 
http://code.google.com/p/unladen-swallow/source/browse/trunk/Python/llvm_notes.txt,

Excellent.  Thanks for the answers.
-Barry




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Barry Warsaw
On Jan 21, 2010, at 02:46 PM, Jeffrey Yasskin wrote:

LLVM's under the University of Illinois/NCSA license, which is
BSD-like: http://www.opensource.org/licenses/UoI-NCSA.php or
http://llvm.org/releases/2.6/LICENSE.TXT.

Cool.  This is a GPL compatible license, so it would presumably not change
Python's status there.

http://www.fsf.org/licensing/licenses/index_html#GPLCompatibleLicenses

-Barry




Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Jeffrey Yasskin
On Thu, Jan 21, 2010 at 3:49 PM, Barry Warsaw ba...@python.org wrote:
 Martin's follow up reminds me what the issues with C++ here are.  They center
 around which C++ compilers you use on which platforms.  Solaris, and to some
 extent Windows IIRC, were the most problematic for the work I was doing 3+
 years ago.  Usually, everything's fine if you're compiling all the code with
 the same compiler.  The problem comes if you want to mix say C++ libraries
 compiled with Sun's C++ compiler and Python compiled with g++.  Unlike the C
 world, where it doesn't matter much, once C++ is in the mix you have to be
 very careful about how all your libraries and core binary are compiled and
 linked.

I'm not an expert here, but this is why we've tried to avoid exposing
C++ functions into the public API. As long as the public API stays C,
we only require that LLVM and Python are compiled with the same
compiler, right? Extension modules, since they only call C API
functions, can be compiled with different compilers.

I could imagine a problem if Python+LLVM link in one libstdc++, and an
extension module links in a different one, even if no C++ objects are
passed across the boundary. Does that cause problems in practice? We'd
have the same problems as from having two different C runtimes in the
same application.

Jeffrey


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Barry Warsaw
On Jan 21, 2010, at 04:07 PM, Jeffrey Yasskin wrote:

I could imagine a problem if Python+LLVM link in one libstdc++, and an
extension module links in a different one, even if no C++ objects are
passed across the boundary. Does that cause problems in practice? We'd
have the same problems as from having two different C runtimes in the
same application.

Yes, I think this could cause problems.

-Barry




Re: [Python-Dev] newgil for python 2.5.4

2010-01-21 Thread Ross Cohen
Done:
http://bugs.python.org/issue7753

Porting to 2.7 was easier since it didn't involve putting in the
changesets listed in issue 4293.

The performance numbers in the bug are more accurate than the ones I
previously posted. Turns out the system Python is not a good baseline.
The improvement from this patch isn't as good, but it's still ~15% for the
4-thread case. I hear rumors that platforms other than Linux show much
better improvement from the new GIL.
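The kind of CPU-bound, multi-threaded workload typically used to measure GIL contention can be sketched as follows. This is a hypothetical micro-benchmark for illustration, not the actual suite behind the numbers in the bug:

```python
import threading
import time

def spin(n):
    # Pure-Python CPU-bound work; threads contend for the GIL here.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(nthreads, n=200_000):
    # Time nthreads threads each doing the same CPU-bound loop.
    threads = [threading.Thread(target=spin, args=(n,)) for _ in range(nthreads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

one = timed(1)
four = timed(4)
print(f"1 thread: {one:.3f}s  4 threads: {four:.3f}s  ratio: {four / one:.2f}")
```

Under the old GIL, the 4-thread case tends to cost noticeably more than four times the single-thread run because of lock churn between threads; the new GIL's improvement shows up as that ratio shrinking toward 4.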

Ross

On Wed, 20 Jan 2010 20:40:38 -0600
Benjamin Peterson benja...@python.org wrote:

 2010/1/20 Ross Cohen rco...@snurgle.org:
  Comments? Suggestions? I'm going to continue fixing this up, but was
  wondering if this could possibly make it into python 2.7.
 
 Yes, it could, but please post it to the tracker instead of attaching patches.
 
 
 
 -- 
 Regards,
 Benjamin
 


Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread skip

How about explaining why you're not going to give Collin a pony?

Skip



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
On Thu, Jan 21, 2010 at 12:20 PM, Collin Winter collinwin...@google.com wrote:
 Hey Greg,

 On Wed, Jan 20, 2010 at 10:54 PM, Gregory P. Smith g...@krypto.org wrote:
 I think having a run time flag (or environment variable for those who like
 that) to disable the use of JIT at python3 execution time would be a good
 idea.

 Yep, we already have a -j flag that supports don't ever use the JIT
 (-j never), use the JIT when you think you should (-j whenhot), and
 always use the JIT (-j always) options. I'll mention this in the
 PEP (we'll clearly need to make this an -X option before merger).

FYI, I just committed
http://code.google.com/p/unladen-swallow/source/detail?r=1027, which
dramatically improves the performance of Unladen Swallow when running
with `-j never`, making disabling the JIT at runtime more viable.
We're continuing to make progress minimizing the impact of the JIT
when running under `-j never`. Progress can be tracked at
http://code.google.com/p/unladen-swallow/issues/detail?id=123.

Collin Winter


Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Jesse Noller
On Thu, Jan 21, 2010 at 6:16 PM,  s...@pobox.com wrote:

 How about explaining why you're not going to give Collin a pony?

 Skip

You're on to something, but the question is:

1. How do we get a pony to Atlanta?
2. Later deliver it to Mountain View?
3. Get it to review patches?


Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Alex Gaynor
On Thu, Jan 21, 2010 at 9:19 PM, Jesse Noller jnol...@gmail.com wrote:
 On Thu, Jan 21, 2010 at 6:16 PM,  s...@pobox.com wrote:

 How about explaining why you're not going to give Collin a pony?

 Skip

 You're on to something, but the question is:

 1. How do we get a pony to Atlanta?
 2. Later deliver it to Mountain View?
 3. Get it to review patches?


A Pony reviewing patches?  That's absurd.  Clearly we should review
patches ourselves and pray that the Pony doesn't decide to smite us.

Alex

-- 
I disapprove of what you say, but I will defend to the death your
right to say it. -- Voltaire
The people's good is the highest law. -- Cicero
Code can always be simpler than you think, but never as simple as you
want -- Me


Re: [Python-Dev] PyCon Keynote

2010-01-21 Thread Tres Seaver
Jesse Noller wrote:
 On Thu, Jan 21, 2010 at 6:16 PM,  s...@pobox.com wrote:
 How about explaining why you're not going to give Collin a pony?

 Skip
 
 You're on to something, but the question is:
 
 1. How do we get a pony to Atlanta?
 2. Later deliver it to Mountain View?

Majyk Ponies are TSA-pre-approved. ;)

 3. Get it to review patches?

That one is easy:  we offer it five handfuls of hay for every patch it
reviews.


Tres.
--
===
Tres Seaver  +1 540-429-0999  tsea...@palladion.com
Palladion Software   Excellence by Designhttp://palladion.com



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Tres Seaver
Jake McGuire wrote:

 Hmm.  So cProfile doesn't break, but it causes code to run under a
 completely different execution model so the numbers it produces are
 not connected to reality?
 
 We've found the call graph and associated execution time information
 from cProfile to be extremely useful for understanding performance
 issues and tracking down regressions.  Giving that up would be a huge
 blow.

IIUC, optimizing your application using standard (non-JITed) profiling
tools would still be a win for the app when run under the JIT, because
you are going to be trimming code / using better algorithms, which will
tend to provide orthogonal speedups to anything the JIT does.  The
worst case would be that you hand-optimize the code to the point that the
JIT can't help any longer, kind of like writing libc syscalls in
assembler rather than C.
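For reference, the call-graph-plus-timing workflow Jake describes is the standard cProfile/pstats pattern; a minimal sketch (the `helper`/`work` functions are a made-up workload):

```python
import cProfile
import io
import pstats

def helper(n):
    return sum(i * i for i in range(n))

def work():
    return [helper(1000) for _ in range(100)]

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Print cumulative times plus callers, so the call graph (not just
# flat per-function times) is visible.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf).sort_stats("cumulative")
stats.print_stats(5)
stats.print_callers("helper")
print(buf.getvalue())
```

The JIT question in this thread is about whether the numbers such a run produces still describe the code's behavior once a different execution engine is underneath it.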


Tres.
--
===
Tres Seaver  +1 540-429-0999  tsea...@palladion.com
Palladion Software   Excellence by Designhttp://palladion.com



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 5:07 PM, David Malcolm dmalc...@redhat.com wrote:
 To what extent would it be possible to use (conditionally) use full
 ahead-of-time compilation as well as JIT?

It would be possible to do this, but it doesn't have nearly the same
benefits as JIT compilation, as Alex mentioned.  You could do a static
compilation of all code objects in a .pyc to LLVM IR and compile that
to a .so that you load at runtime, but it just eliminates the
interpreter overhead.  That is significant, and I think someone should
try it, but I think there are far more wins to be had using feedback.
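The "all code objects in a .pyc" part is concrete: a module compiles to one top-level code object, with every nested function and method reachable through `co_consts`, and that tree is exactly what a static compiler would have to walk and lower. A sketch of the traversal only (the LLVM lowering itself is out of scope):

```python
import types

source = """
def outer(x):
    def inner(y):
        return y * 2
    return inner(x) + 1

class C:
    def method(self):
        return 42
"""

def walk_code(code, depth=0):
    # Recursively visit every code object nested inside a module's
    # top-level code object via co_consts.
    print("  " * depth + code.co_name)
    for const in code.co_consts:
        if isinstance(const, types.CodeType):
            walk_code(const, depth + 1)

top = compile(source, "<example>", "exec")
walk_code(top)  # prints the module, outer, inner, C, and method code objects
```

An ahead-of-time compiler would emit native code for each node of this tree, which removes bytecode-dispatch overhead but, as noted above, gets none of the type feedback a JIT collects at runtime.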

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Chris Bergstresser
On Thu, Jan 21, 2010 at 9:49 PM, Tres Seaver tsea...@palladion.com wrote:
 IIUC, optimizing your application using standard (non-JITed) profiling
 tools would still be a win for the app when run under the JIT, because
 you are going to be trimming code / using better algorithms, which will
 tend to provide orthogonal speedups to anything the JIT does.  The
 worst case would be that you hand-optimize the code to the point that the
 JIT can't help any longer, kind of like writing libc syscalls in
 assembler rather than C.

   You'd hope.  I don't think it's quite that simple, though.
   The problem is code might have completely different hotspots with
the JIT than without it.  The worst case in this scenario would be
that some code takes 1 second to run function A and 30 seconds to run
function B without the JIT, but 30 seconds to run function A and 1
second to run function B with the JIT.  The profiler's telling you to
put all your effort into fixing function A, but you won't see any
significant performance gains no matter how often you change it.
   Generally, that's not going to be the case.  But the broader
point--that you've no longer got an especially good idea of what's
taking time to run in your program--is still very valid.

-- Chris


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Glyph Lefkowitz

On Jan 21, 2010, at 6:48 PM, Collin Winter wrote:

 Hey Glyph,

 There's been a recent thread on our mailing list about a patch that
 dramatically reduces the memory footprint of multiprocess concurrency
 by separating reference counts from objects. We're looking at possibly
 incorporating this work into Unladen Swallow, though I think it should
 really go into upstream CPython first (since it's largely orthogonal
 to the JIT work). You can see the thread here:
 http://groups.google.com/group/unladen-swallow/browse_thread/thread/21d7248e8279b328/2343816abd1bd669

AWESOME.

Thanks for the pointer.  I read through both of the threads but I didn't see 
any numbers on savings-per-multi-process.  Do you have any?

Keep up the good work,

-glyph



Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Collin Winter
On Thu, Jan 21, 2010 at 11:37 PM, Glyph Lefkowitz
gl...@twistedmatrix.com wrote:

 On Jan 21, 2010, at 6:48 PM, Collin Winter wrote:

 Hey Glyph,

 There's been a recent thread on our mailing list about a patch that
 dramatically reduces the memory footprint of multiprocess concurrency
 by separating reference counts from objects. We're looking at possibly
 incorporating this work into Unladen Swallow, though I think it should
 really go into upstream CPython first (since it's largely orthogonal
 to the JIT work). You can see the thread here:
 http://groups.google.com/group/unladen-swallow/browse_thread/thread/21d7248e8279b328/2343816abd1bd669

 AWESOME.
 Thanks for the pointer.  I read through both of the threads but I didn't see
 any numbers on savings-per-multi-process.  Do you have any?

The data I've seen comes from
http://groups.google.com/group/comp.lang.python/msg/c18b671f2c4fef9e:


This test code[1] consumes roughly 2G of RAM on an x86_64 with python
2.6.1, with the patch, it *should* use 2.3G of RAM (as specified by
its output), so you can see the footprint overhead... but better page
sharing makes it consume about 6 times less - roughly 400M... which is
the size of the dataset. Ie: near-optimal data sharing.
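The mechanism behind those numbers: after fork(), data pages are shared copy-on-write, but CPython stores the reference count inline in each object's header, so merely *taking a reference* to an object writes to its memory and dirties the whole page. A small illustration of that write-on-read behavior in stock CPython, using `sys.getrefcount`:

```python
import sys

blob = "x" * 4096  # a heap object whose refcount field lives next to its data

before = sys.getrefcount(blob)
alias = blob  # binding another name increments ob_refcnt: a write to the object
after = sys.getrefcount(blob)

print(before, after)  # 'after' is one higher than 'before'
```

With the patch's separated refcounts, that increment lands on a dedicated refcount page instead, so the data pages stay clean and shared across all the forked workers, which is why the quoted test's ~400M dataset is paid for only once.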


Collin Winter