Re: [Python-Dev] PEP 396, Module Version Numbers

2011-04-06 Thread John Arbash Meinel


...
> #. ``__version_info__`` SHOULD be of the format returned by PEP 386's
>``parse_version()`` function.

The only reference to parse_version in PEP 386 I could find was the
setuptools implementation which is pretty odd:

> 
> In other words, parse_version will return a tuple for each version string,
> that is compatible with StrictVersion but also accepts arbitrary versions
> and deals with them so they can be compared:
> 
> >>> from pkg_resources import parse_version as V
> >>> V('1.2')
> ('0001', '0002', '*final')
> >>> V('1.2b2')
> ('0001', '0002', '*b', '0002', '*final')
> >>> V('FunkyVersion')
> ('*funkyversion', '*final')
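
(For illustration only -- going by the tuples shown above -- those odd-looking
values do at least compare the way you would hope:)

>>> from pkg_resources import parse_version as V
>>> V('1.2b2') < V('1.2')        # '*b' sorts before '*final'
True
>>> sorted(['1.0', '1.2', '1.2b2'], key=V)
['1.0', '1.2b2', '1.2']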

bzrlib has certainly used 'version_info' as a tuple, with values such as:

version_info = (2, 4, 0, 'dev', 2)

and

version_info = (2, 4, 0, 'beta', 1)

and

version_info = (2, 3, 1, 'final', 0)

etc.

This mirrors what we can work out from Python's "sys.version_info".

The *really* nice bit is that you can do:

if sys.version_info >= (2, 6):
  # do stuff for python 2.6(.0) and beyond

Doing that as:

if sys.version_info >= ('2', '6'):

is pretty ugly.
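
A minimal illustration with plain tuples standing in for sys.version_info
(or bzrlib's version_info):

>>> (2, 6, 0, 'final', 0) >= (2, 6)
True
>>> (2, 5, 6, 'final', 0) >= (2, 6)
False
>>> (2, 4, 0, 'beta', 1) < (2, 4, 0, 'final', 0)
True

The short form works because tuples compare element by element, and a longer
tuple that matches on the common prefix sorts after the shorter one.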

John
=:->



Re: [Python-Dev] Buildbot status

2011-04-06 Thread Nick Coghlan
On Wed, Apr 6, 2011 at 12:05 AM, Antoine Pitrou  wrote:
>
> Hello,
>
> For the record, we have 9 stable buildbots, one of which is currently
> offline: 3 Windows, 2 OS X, 3 Linux and 1 Solaris.
> Paul Moore's XP buildbot is back in the stable stable.
> (http://www.python.org/dev/buildbot/all/waterfall?category=3.x.stable)

Huzzah!

Since it appears the intermittent failures affecting these platforms
have been dealt with, is it time to switch python-committers email
notifications back on for buildbot failures that turn the stable bots
red?

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Antoine Pitrou
On Tue, 5 Apr 2011 12:57:13 -0700
Raymond Hettinger  wrote:
> 
> * I would like to see a restriction on the use of
>   the concrete C API such that it is *only* used
>   when an exact type match has been found or created
>   (i.e. if someone writes PyList_New(), then it
>   is okay to use PyList_SetItem()).

That should be qualified.
For example, not being able to use PyUnicode_AS_STRING in some
performance-critical code (such as the io lib) would be a large
impediment.

Regards

Antoine.




Re: [Python-Dev] Buildbot status

2011-04-06 Thread Antoine Pitrou
On Wednesday 06 April 2011 at 23:55 +1000, Nick Coghlan wrote:
> On Wed, Apr 6, 2011 at 12:05 AM, Antoine Pitrou  wrote:
> >
> > Hello,
> >
> > For the record, we have 9 stable buildbots, one of which is currently
> > offline: 3 Windows, 2 OS X, 3 Linux and 1 Solaris.
> > Paul Moore's XP buildbot is back in the stable stable.
> > (http://www.python.org/dev/buildbot/all/waterfall?category=3.x.stable)
> 
> Huzzah!
> 
> Since it appears the intermittent failures affecting these platforms
> have been dealt with, is it time to switch python-committers email
> notifications back on for buildbot failures that turn the stable bots
> red?

They have not been "dealt with" (not all of them anyway), you are just
lucky that they are all green at the moment :)

Regards

Antoine.




Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Nick Coghlan
On Wed, Apr 6, 2011 at 5:57 AM, Raymond Hettinger
 wrote:
> [Brett]
>> This PEP requires that in these instances that both
>> the Python and C code must be semantically identical
>
> Are you talking about the guaranteed semantics
> promised by the docs or are you talking about
> every possible implementation detail?
>
> ISTM that even with pure python code, we get problems
> with people relying on implementation specific details.

Indeed.

Argument handling is certainly a tricky one - getting positional only
arguments requires a bit of a hack in pure Python code (accepting
*args and unpacking the arguments manually), but it comes reasonably
naturally when parsing arguments directly using the C API.
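
(A rough sketch of the *args hack -- the function name and message are made
up purely for illustration:)

def scale(*args):
    # emulate a positional-only signature in pure Python
    if len(args) != 2:
        raise TypeError('scale() takes exactly 2 positional arguments '
                        '(%d given)' % len(args))
    value, factor = args
    return value * factor

scale(3, 4)                 # 12
scale(value=3, factor=4)    # TypeError, just as the C version would complain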

Another example where these questions will arise (this time going
the other way) is that I would like to see a pure-Python version of
partial added back into functools, with the C version becoming an
accelerated override for it.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Nick Coghlan
On Wed, Apr 6, 2011 at 11:59 PM, Antoine Pitrou  wrote:
> On Tue, 5 Apr 2011 12:57:13 -0700
> Raymond Hettinger  wrote:
>>
>> * I would like to see a restriction on the use of
>>   the concrete C API such that it is *only* used
>>   when an exact type match has been found or created
>>   (i.e. if someone writes PyList_New(), then it
>>   is okay to use PyList_SetItem()).
>
> That should be qualified.
> For example, not being able to use PyUnicode_AS_STRING in some
> performance-critical code (such as the io lib) would be a large
> impediment.

Str/unicode/bytes are really an exception to most rules when it comes
to duck-typing. There's so much code out there that only works with
"real" strings, nobody is surprised when an API doesn't accept string
look-alikes. (There aren't any standard ABCs for those interfaces, and
I haven't really encountered anyone clamouring for them, either).

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Michael Foord

On 05/04/2011 20:57, Raymond Hettinger wrote:

[snip...]
[Brett]

(sorry, Raymond, for picking on heapq, but it
was what bit the PyPy people most recently =).

No worries, it wasn't even my code.  Someone
donated it.  There was a discussion on python-dev
and collective agreement to allow it to have
semantic differences that would let it run faster.
IIRC, the final call was made by Uncle Timmy.



The major problem that pypy had with heapq, aside from semantic 
differences, was (is?) that if you run the tests against the pure-Python 
version (without the C accelerator) then tests *fail*. This means they 
have to patch the CPython tests in order to be able to use the pure 
Python version.


Ensuring that tests run against both (even if there are some unavoidable 
differences like exception types with the tests allowing for both or 
skipping some tests) would at least prevent this happening.
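
Something along these lines (a sketch, not the actual CPython test layout)
would exercise both implementations with a single set of test bodies:

import unittest
from test.support import import_fresh_module

py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
c_heapq = import_fresh_module('heapq', fresh=['_heapq'])

class HeapqTests:
    # mixin: not collected on its own, only via the concrete subclasses
    module = None

    def test_push_pop(self):
        heap = []
        for value in (3, 1, 2):
            self.module.heappush(heap, value)
        self.assertEqual(self.module.heappop(heap), 1)

class PyHeapqTests(HeapqTests, unittest.TestCase):
    module = py_heapq

class CHeapqTests(HeapqTests, unittest.TestCase):
    module = c_heapq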


All the best,

Michael


That being said, I would like to see a broader set
of examples rather than extrapolating from
a single piece of 7+ year-old code.  It is purely
algorithmic, so it really just represents the
simplest case.  It would be much more interesting
to discuss what should be done with
future C implementations for threading, decimal,
OrderedDict, or some existing non-trivial C
accelerators like that for JSON or XML.

Brett, thanks for bringing the issue up.
I've been bugged for a good while about
issues like overbroad use of the concrete C API.


Raymond




--
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html



Re: [Python-Dev] PEP 396, Module Version Numbers

2011-04-06 Thread Nick Coghlan
On Wed, Apr 6, 2011 at 6:22 AM, Glenn Linderman  wrote:
> With more standardization of versions, should the version module be promoted
> to stdlib directly?

When Tarek lands "packaging" (i.e. what distutils2 becomes in the
Python 3.3 stdlib), the standardised version handling will come with
it.

> On 4/5/2011 11:52 AM, Barry Warsaw wrote:
>
> DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')
>
> __version__ = pkgutil.get_distribution('elle').metadata['version']
>
> The RE as given won't match alpha, beta, rc, dev, and post suffixes that are
> discussed in PEP 386.

Indeed, I really don't like the RE suggestion - better to tell people
to just move the version info into the static config file and use
pkgutil to make it available as shown. That solves the build time vs
install time problem as well.

> Nor will it match the code shown and quoted for the alternative distutils2
> case.
>
>
> Other comments:
>
> Are there issues for finding and loading multiple versions of the same
> module?

No, you simply can't do it. Python's import semantics are already
overly complicated even without opening that particular can of worms.

> Should it be possible to determine a version before loading a module?  If
> yes, the version module would have to be able to find and parse version
> strings in any of the many places this PEP suggests they could be... so that
> would be somewhat complex, but the complexity shouldn't be used to change
> the answer... but if the answer is yes, it might encourage fewer variant
> cases to be supported for acceptable version definition locations for this
> PEP.

Yep, this is why the version information should be in the setup.cfg
file, and hence available via pkgutil without loading the module
first.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Buildbot status

2011-04-06 Thread Nick Coghlan
On Thu, Apr 7, 2011 at 12:01 AM, Antoine Pitrou  wrote:
> On Wednesday 06 April 2011 at 23:55 +1000, Nick Coghlan wrote:
>> Since it appears the intermittent failures affecting these platforms
>> have been dealt with, is it time to switch python-committers email
>> notifications back on for buildbot failures that turn the stable bots
>> red?
>
> They have not been "dealt with" (not all of them anyway), you are just
> lucky that they are all green at the moment :)

Ah, 'twas mere unfounded optimism, then. We'll get there one day :)

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Antoine Pitrou
On Wed, 06 Apr 2011 15:17:05 +0100
Michael Foord  wrote:
> On 05/04/2011 20:57, Raymond Hettinger wrote:
> > [snip...]
> > [Brett]
> >> (sorry, Raymond, for picking on heapq, but it
> >> was what bit the PyPy people most recently =).
> > No worries, it wasn't even my code.  Someone
> > donated it.  There was a discussion on python-dev
> > and collective agreement to allow it to have
> > semantic differences that would let it run faster.
> > IIRC, the final call was made by Uncle Timmy.
> >
> 
> The major problem that pypy had with heapq, aside from semantic 
> differences, was (is?) that if you run the tests against the pure-Python 
> version (without the C accelerator) then tests *fail*. This means they 
> have to patch the CPython tests in order to be able to use the pure 
> Python version.

Was the test patch contributed back?

Regards

Antoine.




Re: [Python-Dev] Buildbot status

2011-04-06 Thread Brian Curtin
On Tue, Apr 5, 2011 at 09:05, Antoine Pitrou  wrote:

>
> Hello,
>
> For the record, we have 9 stable buildbots, one of which is currently
> offline: 3 Windows, 2 OS X, 3 Linux and 1 Solaris.
> Paul Moore's XP buildbot is back in the stable stable.
> (http://www.python.org/dev/buildbot/all/waterfall?category=3.x.stable)
>
> We also have a new 64-bit FreeBSD 8.2 buildbot donated and managed by
> Stefan Krah.
> (http://www.python.org/dev/buildbot/all/buildslaves/krah-freebsd)
>
> Regards
>
> Antoine.


Apologies to anyone hoping to see Windows Server 2008 in this list...or
maybe you Linux guys are laughing :)

That build slave has had more problems than I've had time to deal with, so
it's resting for now.


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Nick Coghlan
On Thu, Apr 7, 2011 at 1:03 AM, James Y Knight  wrote:
> Perhaps the argument handling for C functions ought to be enhanced to work 
> like python's argument handling, instead of trying to hack it the other way 
> around?

Oh, definitely. It is just that you pretty much have to use the *args
hack when providing Python versions of C functions that accept both
positional-only arguments and arbitrary keyword arguments.

For "ordinary" calls, simply switching to PyArg_ParseTupleAndKeywords
over other alternatives basically deals with the problem.
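
(For the pure Python side, the shape of that hack looks roughly like this --
an illustrative stand-in, not the proposed functools code:)

def make_partial(*args, **keywords):
    # the callable is taken positionally so a keyword literally named
    # 'func' can still travel in **keywords untouched
    if not args:
        raise TypeError('make_partial expected at least 1 argument, got 0')
    func, preset = args[0], args[1:]
    def wrapper(*more_args, **more_keywords):
        kw = dict(keywords)
        kw.update(more_keywords)
        return func(*(preset + more_args), **kw)
    return wrapper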

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread James Y Knight

On Apr 6, 2011, at 10:08 AM, Nick Coghlan wrote:

> On Wed, Apr 6, 2011 at 5:57 AM, Raymond Hettinger
>  wrote:
>> [Brett]
>>> This PEP requires that in these instances that both
>>> the Python and C code must be semantically identical
>> 
>> Are you talking about the guaranteed semantics
>> promised by the docs or are you talking about
>> every possible implementation detail?
>> 
>> ISTM that even with pure python code, we get problems
>> with people relying on implementation specific details.
> 
> Indeed.
> 
> Argument handling is certainly a tricky one - getting positional only
> arguments requires a bit of a hack in pure Python code (accepting
> *args and unpacking the arguments manually), but it comes reasonably
> naturally when parsing arguments directly using the C API.

Perhaps the argument handling for C functions ought to be enhanced to work like 
python's argument handling, instead of trying to hack it the other way around?

James


Re: [Python-Dev] Supporting Visual Studio 2010

2011-04-06 Thread Éric Araujo

On 06/04/2011 03:39, [email protected] wrote:
> On 5 Apr, 07:58 pm, [email protected] wrote:
>>> Does this mean new versions of distutils let you build_ext with any C
>>> compiler, instead of enforcing the same compiler as it has done
>>> previously?
>>
>> No, it doesn't. distutils was considered frozen, and changes to it to
>> better support the ABI were rejected.
>
> How about distutils2 then?

If there isn’t already an open bug about that, it would be welcome.

Regards


[Python-Dev] [GSoC] Developing a benchmark suite (for Python 3.x)

2011-04-06 Thread DasIch
Hello Guys,
I would like to present my proposal for the Google Summer of Code,
concerning the idea of porting the benchmarks to Python 3.x for
speed.pypy.org. I think I have successfully integrated the feedback I
got from prior discussions on the topic and I would like to hear your
opinion.

Abstract
===

As of now there are several benchmark suites used by Python
implementations, PyPy[1] uses the benchmarks developed for the Unladen
Swallow[2] project as well as several other benchmarks they
implemented on their own, CPython[3] uses the Unladen Swallow
benchmarks and several "crap benchmarks used for historical
reasons"[4].

This makes comparisons unnecessarily hard and causes confusion. As a
solution to this problem I propose merging the existing benchmarks -
at least those considered worth having - into a single benchmark suite
which can be shared by all implementations and ported to Python 3.x.

Milestones
==========

The project can be divided into several milestones:

1. Definition of the benchmark suite. This will entail contacting
developers of Python implementations (CPython, PyPy, IronPython and
Jython), via discussion on the appropriate mailing lists. This might
be achievable as part of this proposal.

2. Implementing the benchmark suite. Based on the prior agreed upon
definition, the suite will be implemented, which means that the
benchmarks will be merged into a single mercurial repository on
Bitbucket[5].

3. Porting the suite to Python 3.x. The suite will be ported to 3.x
using 2to3[6], as far as possible. The usage of 2to3 will make it
easier to make changes to the repository, especially for those still
focusing on 2.x. It is to be expected that some benchmarks cannot be
ported due to dependencies which are not available on Python 3.x.
Those will be ignored by this project to be ported at a later time,
when the necessary requirements are met.

Start of Program (May 24)
==

Before the coding (milestones 2 and 3) can begin, it is necessary to
agree upon a set of benchmarks everyone is happy with, as described.

Midterm Evaluation (July 12)
===

During the midterm I want to finish the second milestone, and before
the evaluation I want to start on the third milestone.

Final Evaluation (Aug 16)
=

In this period the benchmark suite will be ported. If everything works
out perfectly I will even have some time left; if there are problems I
have a buffer here.

Probably Asked Questions
==

Why not use one of the existing benchmark suites for porting?

The effort will be wasted if there is no good base to build upon;
creating a new benchmark suite based upon the existing ones ensures
that there is one.

Why not use Git/Bazaar/...?

Mercurial is used by CPython, PyPy and is fairly well known and used
in the Python community. This ensures easy accessibility for everyone.

What will happen with the Repository after GSoC/How will access to the
repository be handled?

I propose to give administrative rights to one or two representatives
of each project. Those will provide other developers with write
access.

Communication
=

Communication of the progress will be done via Twitter[7] and my
blog[8]; if desired I can also send an email with the contents of the
blog post to the mailing lists of the implementations. Furthermore I
am usually quick to answer via IRC (DasIch on freenode), Twitter or
e-mail ([email protected]) if anyone has any questions.

Contact to the mentor can be established via the means mentioned above
or via Skype.

About Me


My name is Daniel Neuhäuser; I am 19 years old and currently a student
at the Bergstadt-Gymnasium Lüdenscheid[9]. I started programming (with
Python) about 4 years ago and became a member of the Pocoo Team[10]
after successfully participating in the Google Summer of Code last
year, during which I ported Sphinx[11] to Python 3.x and implemented
an algorithm to diff abstract syntax trees to preserve comments and
translated strings, which has been used by the other GSoC projects
targeting Sphinx.

.. [1]: https://bitbucket.org/pypy/benchmarks/src
.. [2]: http://code.google.com/p/unladen-swallow/
.. [3]: http://hg.python.org/benchmarks/file/tip/performance
.. [4]: http://hg.python.org/benchmarks/file/62e754c57a7f/performance/README
.. [5]: http://bitbucket.org/
.. [6]: http://docs.python.org/library/2to3.html
.. [7]: http://twitter.com/#!/DasIch
.. [8]: http://dasdasich.blogspot.com/
.. [9]: http://bergstadt-gymnasium.de/
.. [10]: http://www.pocoo.org/team/#daniel-neuhauser
.. [11]: http://sphinx.pocoo.org/

P.S.: I would like to get in touch with the IronPython developers as
well; unfortunately I was not able to find a mailing list or IRC
channel. Is there anybody who can send me in the right direction?

Re: [Python-Dev] clarification: subset vs equality Re: [Python-checkins] peps: Draft of PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Brett Cannon
On Tue, Apr 5, 2011 at 06:10, Jim Jewett  wrote:

> On 4/4/11, brett.cannon  wrote:
> >   Draft of PEP 399: Pure Python/C Accelerator Module Compatibility
> > Requirements
>
> > +Abstract
> > +
> > +
> > +The Python standard library under CPython contains various instances
> > +of modules implemented in both pure Python and C. This PEP requires
> > +that in these instances that both the Python and C code *must* be
> > +semantically identical (except in cases where implementation details
> > +of a VM prevents it entirely). It is also required that new C-based
> > +modules lacking a pure Python equivalent implementation get special
> > +permissions to be added to the standard library.
>
> I think it is worth stating explicitly that the C version can even be
> a strict subset.  It is OK for the accelerated C code to rely on the
> common python version; it is just the reverse that is not OK.
>

I thought that was obvious, but I went ahead and tweaked the abstract and
rationale to make this more explicit.


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Maciej Fijalkowski
> No worries, it wasn't even my code.  Someone
> donated it.  There was a discussion on python-dev
> and collective agreement to allow it to have
> semantic differences that would let it run faster.
> IIRC, the final call was made by Uncle Timmy.
>

The bug link is here:

http://bugs.python.org/issue3051

I think this PEP is precisely targeting this:

"I saw no need to complicate the pure python code for this."

If you complicate the C code for this, then please complicate the
Python code for it as well, since the difference is breaking stuff.

And this:

"FWIW, the C code is not guaranteed to be exactly the same in terms of
implementation details, only the published API should be the same.
And, for this module, a decision was made for the C code to support
only lists even though the pure python version supports any sequence."

The idea of the PEP is for C code to be guaranteed to be the same as
Python where it matters to people.

Cheers,
fijal


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Brett Cannon
On Tue, Apr 5, 2011 at 12:57, Raymond Hettinger  wrote:

> [Brett]
> > This PEP requires that in these instances that both
> > the Python and C code must be semantically identical
>
> Are you talking about the guaranteed semantics
> promised by the docs or are you talking about
> every possible implementation detail?
>
> ISTM that even with pure python code, we get problems
> with people relying on implementation specific details.
>
> * Two functions accept a sequence, but one accesses
>  it using __len__ and __getitem__ while the other
>  uses __iter__.   (This is like the Spam example
>  in the PEP).
>

That's a consistency problem in all of our C code and not unique to Python/C
modules.
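
To make that kind of divergence concrete, here is an illustrative pair of
functions (not code from any stdlib module):

def total_by_index(seq):
    # relies on __len__ and __getitem__
    return sum(seq[i] for i in range(len(seq)))

def total_by_iter(seq):
    # relies on __iter__ only
    return sum(item for item in seq)

class Numbers:
    # iterable, but defines neither __len__ nor __getitem__
    def __iter__(self):
        return iter((1, 2, 3))

total_by_iter(Numbers())     # 6
total_by_index(Numbers())    # TypeError: object of type 'Numbers' has no len()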


>
> * Given pure python library code like:
>   if x < y: ...
>  I've seen people only implement __lt__
>  but not __gt__, making it impossible to
>  make even minor adjustments to the code such as:
>   if y > x:  ...
>

How is that an issue here? Because someone was lazy in the C code but not
the Python code? That is an issue as that is a difference in what methods
are provided.


>
> * We also suffer from inconsistency in choice of
>  exceptions (i.e. overly large sequence indices
>  raising either an IndexError, OverflowError, or
>  ValueError).
>

Once again, a general issue in our C code and not special to this PEP.


>
> With C code, I wonder if certain implementation
> differences go with the territory:
>
> * Concurrency issues are a common semantic difference.
>  For example, deque.pop() is atomic because the C
>  code holds the GIL but a pure python equivalent
>  would have to use locks to achieve same effect
>  (and even then might introduce liveness or deadlock
>  issues).
>

That's just a CPython-specific issue that will always be tough to work
around. Obviously we can do the best we can but since the other VMs don't
necessarily have the same concurrency guarantees per Python expression it is
near impossible to define.
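
Roughly what the pure Python side ends up looking like -- an illustrative
container, not a real deque replacement:

import threading

class LockedStack:
    # a pure Python container has to add its own locking to get the
    # atomicity the C version inherits from holding the GIL
    def __init__(self):
        self._data = []
        self._lock = threading.Lock()

    def push(self, item):
        with self._lock:
            self._data.append(item)

    def pop(self):
        with self._lock:
            return self._data.pop()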


>
> * Heapq is one of the rare examples of purely
>  algorithmic code.  Much of the code in CPython
>  accesses libraries (i.e. the math module),
>  interfaces with the OS, accesses binary data
>  structures, links to third-party tools (sqlite3
>  and Tkinter) or does something else that doesn't
>  have pure python equivalents (at least without
>  using C types).
>

Those C modules are outside the scope of the PEP.


>
> * The C API for parsing argument tuples and keywords
>  does not readily parallel the way the same are
>  written in Python.  And with iterators, the argument
>  checking in the C versions tends to happen when the
>  iterator is instantiated, but code written with
>  pure python generators doesn't have its setup and
>  checking section run until next() is called the
>  first time.
>
> * We've had a very difficult time bridging the gulf
>  between Python's infinite precision numbers
>  and C's fixed width numbers (for example, it took
>  years to get range() to handle values greater than
>  a word size).
>

I don't expect that to be an issue as this is a limitation in CPython that
the other VMs never run into. If anything it puts the other VMs at an
advantage when we rely on C code.


>
> * C code tends to be written in a way that takes
>  advantage of that language's features instead of
>  in a form that is a direct translation of pure
>  python.  For example, I think the work being done
>  on a C implementation of decimal has vastly different
>  internal structures and it would be a huge challenge
>  to make it semantically identical to the pure python
>  version with respect to its implementation details.
>  Likewise, a worthwhile C implementation of OrderedDict
>  can only achieve massive space savings by having
>  majorly different implementation details.
>
> Instead of expressing the wishful thought that C
> versions and pure Python versions are semantically
> identical with respect to implementation details,
> I would like to see more thought put into specific
> limitations on C coding techniques and general
> agreement on which implementation specific details
> should be guaranteed:
>
> * I would like to see a restriction on the use of
>  the concrete C API such that it is *only* used
>  when an exact type match has been found or created
>  (i.e. if someone writes PyList_New(), then it
>  is okay to use PyList_SetItem()).  See
>  http://bugs.python.org/issue10977 for a discussion
>  of what can go wrong.  The original json C
>  was an example of code that used the concrete
>  C API in a way that precluded pure python
>  subclasses of list and dict.
>

That's a general coding policy that is not special to this PEP.


>
> * I would like to see better consistency on when to
>  use OverflowError vs ValueError vs IndexError.
>
>
Once again, not specific to this PEP.


> * There should also be a discussion of whether the
>  possible exceptions should be a guaranteed part
>  of the API as it is in Java.  Because there were
>  no guarantees (i.e. ord(x) can r

Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Brett Cannon
On Tue, Apr 5, 2011 at 05:01, Nick Coghlan  wrote:

> On Tue, Apr 5, 2011 at 9:46 AM, Brett Cannon  wrote:
> > try:
> > c_heapq.heappop(Spam())
> > except TypeError:
> > # "heap argument must be a list"
> > pass
> >
> > try:
> > py_heapq.heappop(Spam())
> > except AttributeError:
> > # "'Spam' object has no attribute 'pop'"
> > pass
> >
> > This kind of divergence is a problem for users as they unwittingly
> > write code that is CPython-specific. This is also an issue for other
> > VM teams as they have to deal with bug reports from users thinking
> > that they incorrectly implemented the module when in fact it was
> > caused by an untested case.
>
> While I agree with the PEP in principle, I disagree with the way this
> example is written. Guido has stated in the past that code simply
> *cannot* rely on TypeError being consistently thrown instead of
> AttributeError (or vice-versa) when it comes to duck-typing. Code that
> cares which of the two is thrown is wrong.
>
> However, there actually *is* a significant semantic discrepancy in the
> heapq case, which is that py_heapq is duck-typed, while c_heapq is
> not:
>

That's true. I will re-word it to point that out. The example code still
shows it, I just didn't explicitly state that in the example.

-Brett


>
> >>> from test.support import import_fresh_module
> >>> c_heapq = import_fresh_module('heapq', fresh=['_heapq'])
> >>> py_heapq = import_fresh_module('heapq', blocked=['_heapq'])
> >>> from collections import UserList
> >>> class Seq(UserList): pass
> ...
> >>> c_heapq.heappop(UserList())
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: heap argument must be a list
> >>> py_heapq.heappop(UserList())
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
>  File "/home/ncoghlan/devel/py3k/Lib/heapq.py", line 140, in heappop
>    lastelt = heap.pop()    # raises appropriate IndexError if heap is empty
>  File "/home/ncoghlan/devel/py3k/Lib/collections/__init__.py", line 848, in pop
>    def pop(self, i=-1): return self.data.pop(i)
> IndexError: pop from empty list
>
> Cheers,
> Nick.
>
> P.S. The reason I was bugging Guido to answer the TypeError vs
> AttributeError question in the first place was to find out whether or
> not I needed to get rid of the following gross inconsistency in the
> behaviour of the with statement relative to other language constructs:
>
> >>> 1()
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: 'int' object is not callable
> >>> with 1: pass
> ...
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> AttributeError: 'int' object has no attribute '__exit__'
>
>
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   [email protected]   |   Brisbane, Australia
>


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Raymond Hettinger

On Apr 6, 2011, at 10:24 AM, Maciej Fijalkowski wrote:
> 
> "I saw no need to complicate the pure python code for this."
> 
> if you complicate the C code for this, then please as well complicate
> python code for this since it's breaking stuff.


Do you really need a PEP for this one extraordinary and weird case?
The code is long since gone (never in 3.x).  If you disagreed with
the closing of the bug report, just re-open it and a patch can go
into a 2.7 point release.  The downside is that it would not be a
pretty piece of python.


> And this:
> 
> "FWIW, the C code is not guaranteed to be exactly the same in terms of
> implementation details, only the published API should be the same.
> And, for this module, a decision was made for the C code to support
> only lists eventhough the pure python version supports any sequence."
> 
> The idea of the PEP is for C code to be guaranteed to be the same as
> Python where it matters to people.


That is a good goal.  Unfortunately, people can choose to rely on
all manner of implementation details (whether in C or pure Python).

If we want a pure python version of map() for example, the straight-forward
way doesn't work very well because "map(chr, 3)" raises a TypeError
right away in C code, but a python version using a generator wouldn't
raise until next() is called.  Would this be considered a detail that
matters to people?  If so, it means that all the pure python equivalents
for itertools would have to be written as classes, making them hard
to read and making them run slowly on all implementations except for PyPy.
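
A rough sketch of the two styles (neither is the real map(); the names are
made up):

def map_gen(func, iterable):
    # generator style: bad arguments only blow up on the first next()
    for item in iterable:
        yield func(item)

class map_eager:
    # class style: bad arguments blow up at instantiation, like the C builtin
    def __init__(self, func, iterable):
        self._func = func
        self._it = iter(iterable)      # map_eager(chr, 3) fails right here
    def __iter__(self):
        return self
    def __next__(self):
        return self._func(next(self._it))

map_gen(chr, 3)      # no exception yet
map_eager(chr, 3)    # TypeError: 'int' object is not iterable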

The original bug report you mentioned arose because a major
tool relied on the pure python heapq code comparing "not b <= a"
rather than the equivalent "a < b".  So this was an implementation
detail that mattered to someone, but it went *far* beyond any guaranteed
behaviors.

Tracebacks are another area where C code and pure python code
can't be identical.  This may or may not matter to someone.

The example in the PEP focused on which particular exception,
a TypeError or AttributeError, was raised in response to an
oddly constructed Spam() class.  I don't know that that was
forseeable or that there would have been a reasonable way
to eliminate the difference.  It does sound like the difference
mattered to someone though.

C code tends to use direct internal calls such as Py_SIZE(obj)
rather than doing a lookup using obj.__len__().  This is often
a detail that matters to people because it prevents them from
hooking the call to  __len__.   The C code has to take this 
approach in order to protect its internal invariants and not crash.
If the pure python code tried to emulate this, then every call to
len(self) would need to be replaced by self.__internal_len()
where __internal_len is the real length method and __len__
is made equal to it.
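
A small illustration of the visible effect, using a throwaway list subclass
(behaviour as observed in CPython):

class FakeLen(list):
    def __len__(self):
        return 0               # lie at the Python level

x = FakeLen([1, 2, 3])
len(x)     # 0 -- len() goes through the overridden __len__
x.pop()    # 3 -- list.pop() uses the internal size, ignoring the lie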

In C to Python translations, do we want locks to be required
so that atomicity behaviors are matched?  That property
likely matters to some users.

ISTM that every person who sets out to translate code from
C to Python or vice versa is already trying their best to make them
behave as similarly as possible.  That is always the goal.

However, the PEP seems to be raising the bar by insisting
on the code being functionally identical.  I think we should
make some decisions about what that really means; otherwise,
every piece of code will be in violation of the PEP for someone
choosing to rely on an implementation detail that isn't the same.

In my opinion, a good outcome of this discussion would be
a list of implementation details that we want to guarantee
and ones that we explicitly say that are allowed to vary.

I would also like to see strong guidance on the use of the
concrete C API which can make it impossible for client code
to use subclasses of builtin types (issue 10977).  That is
another area where differences will arise that will matter
to some users.


Raymond


P.S.  It would be great if the PEP were to supply a
complete, real-world example of code that is considered
to be identical.  A pure python version of map() could
serve as a good example, trying to make it model all 
the C behaviors as exactly as possible (argument handling,
choice of exceptions, length estimation and presizing, etc).










Re: [Python-Dev] Supporting Visual Studio 2010

2011-04-06 Thread Martin v. Löwis
On 06.04.2011 03:39, [email protected] wrote:
> On 5 Apr, 07:58 pm, [email protected] wrote:
>>> Does this mean new versions of distutils let you build_ext with any C
>>> compiler, instead of enforcing the same compiler as it has done
>>> previously?
>>
>> No, it doesn't. distutils was considered frozen, and changes to it to
>> better support the ABI were rejected.
> 
> How about distutils2 then?

That certainly will be changed to support the ABI better.

Regards,
Martin


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Terry Reedy

On 4/6/2011 1:24 PM, Maciej Fijalkowski wrote:

No worries, it wasn't even my code.  Someone
donated it.  There was a discussion on python-dev
and collective agreement to allow it to have
semantic differences that would let it run faster.
IIRC, the final call was made by Uncle Timmy.

...

And, for this module, a decision was made for the C code to support
only lists even though the pure python version supports any sequence."


I believe that at the time of that decision, the Python code was only 
intended for humans, like the Python (near) equivalents in the itertools 
docs to C-coded itertool functions. Now that we are aiming to have 
stdlib Python code be a reference implementation for all interpreters, 
that decision should be revisited. Either the C code should be 
generalized to sequences or the Python code specialized to lists, making 
sure the doc matches either way.



The idea of the PEP is for C code to be guaranteed to be the same as
Python where it matters to people.


--
Terry Jan Reedy



Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Stefan Krah
Brett Cannon  wrote:
> * We also suffer from inconsistency in choice of
>  exceptions (i.e. overly large sequence indices
>  raising either an IndexError, OverflowError, or
>  ValueError).
> 
> 
> Once again, a general issue in our C code and not special to this PEP.

Not only in the C code. I get the impression that exceptions are
sometimes handled somewhat arbitrarily. Example:


decimal.py encodes the rounding mode as strings. For a simple invalid
argument we have the following three cases:


# I would prefer a ValueError:
>>> Decimal("1").quantize(Decimal('2'), "this is not a rounding mode")
Decimal('1')

# I would prefer a ValueError:
>>> Decimal("1.111").quantize(Decimal('1e10'), "this is not a rounding mode")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/stefan/pydev/cpython/Lib/decimal.py", line 2494, in quantize
    ans = self._rescale(exp._exp, rounding)
  File "/home/stefan/pydev/cpython/Lib/decimal.py", line 2557, in _rescale
    this_function = getattr(self, self._pick_rounding_function[rounding])
KeyError: 'this is not a rounding mode'

# I would prefer a TypeError:
>>> Decimal("1.23456789").quantize(Decimal('1e-10'), ROUND_UP, "this is not a context")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/stefan/pydev/cpython/Lib/decimal.py", line 2478, in quantize
    if not (context.Etiny() <= exp._exp <= context.Emax):
AttributeError: 'str' object has no attribute 'Etiny'


cdecimal naturally encodes the rounding mode as integers and raises a
TypeError in all three cases. The context in cdecimal is a custom
type that translates the flag dictionaries to simple C integers.

This is extremely fast since the slow dictionaries are only updated
on actual accesses. In normal usage, there is no visible difference
to the decimal.py semantics, but there is no way that one could
use a custom context (why would one anyway?).


I think Raymond is right that these issues need to be addressed. Other
C modules will have similar discrepancies to their Python counterparts.


A start would be:

  1) Module constants (like ROUND_UP), should be treated as opaque. If
 a user relies on a specific type, he is on his own.

  2) If it is not expected that custom types will be used for a certain
 data structure, then a fixed type can be used.


For cdecimal, the context actually falls under the recently added subset
clause of the PEP, but 2) might be interesting for other modules.



Stefan Krah




Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Raymond Hettinger

On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
> Since people are taking my "semantically identical" point too strongly for 
> what I mean (there is a reason I said "except in cases
> where implementation details of a VM prevents [semantic equivalency] 
> entirely"), how about we change the requirement that C acceleration code must 
> pass the same test suite (sans C specific issues such as refcount tests or 
> word size) and adhere to the documented semantics the same? It should get us 
> the same result without ruffling so many feathers. And if the other VMs find 
> an inconsistency they can add a proper test and then we fix the code (as 
> would be the case regardless). And in instances where it is simply not 
> possible because of C limitations the test won't get written since the test 
> will never pass.

Does the whole PEP just boil down to "if a test is C specific, it should be 
marked as such"?

Anyone setting out to create equivalent code is already trying to make it as 
functionally equivalent as possible.   At some point, we should help 
implementers by thinking out what kinds of implementation details are 
guaranteed.


Raymond


P.S.  We also need a PEP 8 entry or somesuch giving specific advice about rich 
comparisons (i.e. never just supply one ordering method, always implement all 
six); otherwise, rich comparisons will be a never ending source of headaches.
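
(One way to keep that painless in pure Python is functools.total_ordering,
which fills in the remaining methods from __eq__ plus one ordering method --
spelling out all six by hand works just as well:)

from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, info):
        self.info = info
    def __eq__(self, other):
        return self.info == other.info
    def __lt__(self, other):
        return self.info < other.info

Version((2, 4)) > Version((2, 3))    # True; __gt__ was filled in automatically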


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Brett Cannon
On Wed, Apr 6, 2011 at 12:45, Raymond Hettinger  wrote:

>
> On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
> > Since people are taking my "semantically identical" point too strongly
> for what I mean (there is a reason I said "except in cases
> > where implementation details of a VM prevents [semantic equivalency]
> entirely"), how about we change the requirement that C acceleration code
> must pass the same test suite (sans C specific issues such as refcount tests
> or word size) and adhere to the documented semantics the same? It should get
> us the same result without ruffling so many feathers. And if the other VMs
> find an inconsistency they can add a proper test and then we fix the code
> (as would be the case regardless). And in instances where it is simply not
> possible because of C limitations the test won't get written since the test
> will never pass.
>
> Does the whole PEP just boil down to "if a test is C specific, it should be
> marked as such"?
>

How about the test suite needs to have 100% test coverage (or as close as
possible) on the pure Python version? That will guarantee that the C code
which passes that level of test detail is as semantically equivalent as
possible. It also allows the other VMs to write their own acceleration code
without falling into the same trap as CPython can.

There is also the part of the PEP strongly stating that any module that gets
added with no pure Python equivalent will be considered CPython-only and you
better have a damned good reason for it to be only in C from this point
forward.


>
> Anyone setting out to create equivalent code is already trying to make it
> as functionally equivalent as possible.   At some point, we should help
> implementers by thinking out what kinds of implementation details are
> guaranteed.
>

I suspect 100% test coverage will be as good of a metric as any without
bogging ourselves down with every minute detail of C code that could change
as time goes on.

If we want a more thorough definition of what C code should be trying to do
to be as compatible with Python practices as possible, that should go in a
doc in the devguide rather than this PEP.


>
>
> Raymond
>
>
> P.S.  We also need a PEP 8 entry or somesuch giving specific advice about
> rich comparisons (i.e. never just supply one ordering method, always
> implement all six); otherwise, rich comparisons will be a never ending
> source of headaches.



Fine by me, but I will let you handle that one.


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Stefan Behnel

James Y Knight, 06.04.2011 17:03:

On Apr 6, 2011, at 10:08 AM, Nick Coghlan wrote:

Argument handling is certainly a tricky one - getting positional only
arguments requires a bit of a hack in pure Python code (accepting
*args and unpacking the arguments manually), but it comes reasonably
naturally when parsing arguments directly using the C API.


Perhaps the argument handling for C functions ought to be enhanced to work like 
python's argument handling, instead of trying to hack it the other way around?


FWIW, Cython-implemented functions have full Python 3 semantics for 
argument unpacking but the generated code is usually faster (and sometimes 
much faster) than the commonly used C-API function calls because it is 
tightly adapted to the typed function signature.


Stefan



Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Antoine Pitrou
On Wed, 6 Apr 2011 13:22:09 -0700
Brett Cannon  wrote:
> On Wed, Apr 6, 2011 at 12:45, Raymond Hettinger  wrote:
> 
> >
> > On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
> > > Since people are taking my "semantically identical" point too strongly
> > for what I mean (there is a reason I said "except in cases
> > > where implementation details of a VM prevents [semantic equivalency]
> > entirely"), how about we change the requirement that C acceleration code
> > must pass the same test suite (sans C specific issues such as refcount tests
> > or word size) and adhere to the documented semantics the same? It should get
> > us the same result without ruffling so many feathers. And if the other VMs
> > find an inconsistency they can add a proper test and then we fix the code
> > (as would be the case regardless). And in instances where it is simply not
> > possible because of C limitations the test won't get written since the test
> > will never pass.
> >
> > Does the whole PEP just boil down to "if a test is C specific, it should be
> > marked as such"?
> >
> 
> How about the test suite needs to have 100% test coverage (or as close as
> possible) on the pure Python version?

Let's say "as good coverage as the C code has", instead ;)

Regards

Antoine.




Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread skip

Brett> How about the test suite needs to have 100% test coverage (or as
Brett> close as possible) on the pure Python version?

Works for me, but you will have to define what "100%" is fairly clearly.
100% of the lines get executed?  All the branches are taken?  Under what
circumstances might the 100% rule be relaxed?

Skip


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Raymond Hettinger

On Apr 6, 2011, at 1:22 PM, Brett Cannon wrote:

> 
> 
> On Wed, Apr 6, 2011 at 12:45, Raymond Hettinger  
> wrote:
> 
> On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
> > Since people are taking my "semantically identical" point too strongly for 
> > what I mean (there is a reason I said "except in cases
> > where implementation details of a VM prevents [semantic equivalency] 
> > entirely"), how about we change the requirement that C acceleration code 
> > must pass the same test suite (sans C specific issues such as refcount 
> > tests or word size) and adhere to the documented semantics the same? It 
> > should get us the same result without ruffling so many feathers. And if the 
> > other VMs find an inconsistency they can add a proper test and then we fix 
> > the code (as would be the case regardless). And in instances where it is 
> > simply not possible because of C limitations the test won't get written 
> > since the test will never pass.
> 
> Does the whole PEP just boil down to "if a test is C specific, it should be 
> marked as such"?
> 
> How about the test suite needs to have 100% test coverage (or as close as 
> possible) on the pure Python version? That will guarantee that the C code 
> which passes that level of test detail is as semantically equivalent as 
> possible. It also allows the other VMs to write their own acceleration code 
> without falling into the same trap as CPython can.

Sounds good.

> 
> There is also the part of the PEP strongly stating that any module that gets 
> added with no pure Python equivalent will be considered CPython-only and you 
> better have a damned good reason for it to be only in C from this point 
> forward.

That seems reasonable for purely algorithmic modules.  I presume if an xz 
compressor gets added, there won't be a requirement that it be coded in Python 
;-)

Also, I'm not sure the current wording of the PEP makes it clear that this is a 
going-forward requirement.  We don't want to set off an avalanche of new devs 
rewriting all the current C components (struct, math, cmath, bz2, defaultdict, 
arraymodule, sha1, mersenne twister, etc).

For the most part, I expect that people writing algorithmic C modules will 
start-off by writing a pure python version anyway, so this shouldn't be a big 
change to their development process.


> 
> P.S.  We also need a PEP 8 entry or somesuch giving specific advice about 
> rich comparisons (i.e. never just supply one ordering method, always 
> implement all six); otherwise, rich comparisons will be a never ending source 
> of headaches.
> 
> 
> Fine by me, but I will let you handle that one. 
> 

Done!



Raymond



Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Raymond Hettinger

On Apr 6, 2011, at 1:22 PM, Brett Cannon wrote:

> How about the test suite needs to have 100% test coverage (or as close as 
> possible) on the pure Python version? That will guarantee that the C code 
> which passes that level of test detail is as semantically equivalent as 
> possible. It also allows the other VMs to write their own acceleration code 
> without falling into the same trap as CPython can.

One other thought:  we should probably make a specific exception for pure 
python code using generators.  It is common for generators to defer argument 
checking until the next() method is called while the C equivalent makes the 
check immediately upon instantiation (i.e. map(chr, 3) raises TypeError 
immediately in C but a pure python generator won't raise until the generator is 
actually run).


Raymond 


[Python-Dev] Force build form

2011-04-06 Thread Antoine Pitrou

Hello,

For the record, I've tried to make the force build form clearer on the
buildbot Web UI. See e.g.:
http://www.python.org/dev/buildbot/all/builders/x86%20OpenIndiana%20custom

Regards

Antoine.




Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

2011-04-06 Thread Nick Coghlan
On Thu, Apr 7, 2011 at 6:22 AM, Brett Cannon  wrote:
> How about the test suite needs to have 100% test coverage (or as close as
> possible) on the pure Python version? That will guarantee that the C code
> which passes that level of test detail is as semantically equivalent as
> possible. It also allows the other VMs to write their own acceleration code
> without falling into the same trap as CPython can.

Independent of coverage numbers, C acceleration code should really be
tested with 3 kinds of arguments:
- builtin types
- subclasses of builtin types
- duck types

Those are (often) 2 or 3 different code paths in accelerated C code,
but will usually be a single path in the Python code.

(e.g. I'd be willing to bet that it is possible to get the Python
version of heapq to 100% coverage without testing the latter two
cases, since the Python code doesn't special-case list in any way)
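
A quick illustration of the three cases with heapq (assuming the accelerated
_heapq is installed; exact messages aside, only real lists are guaranteed to
take the fast path):

import heapq
from collections import UserList

class MyList(list):
    pass

for heap in ([3, 1, 2], MyList([3, 1, 2]), UserList([3, 1, 2])):
    try:
        heapq.heapify(heap)
        print(type(heap).__name__, heap[0])
    except TypeError as exc:
        # the C version rejects duck-typed sequences such as UserList
        print(type(heap).__name__, exc)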

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 396, Module Version Numbers

2011-04-06 Thread Glenn Linderman

On 4/6/2011 7:26 AM, Nick Coghlan wrote:

On Wed, Apr 6, 2011 at 6:22 AM, Glenn Linderman  wrote:

With more standardization of versions, should the version module be promoted
to stdlib directly?

When Tarek lands "packaging" (i.e. what distutils2 becomes in the
Python 3.3 stdlib), the standardised version handling will come with
it.


I thought that might be part of the answer :)  But that, and below, seem 
to indicate that use of "packaging" suddenly becomes a requirement for 
all modules that want to include versions.  The packaging of "version" 
inside a version of "packaging" implies more dependencies on a larger 
body of code for a simple function.




On 4/5/2011 11:52 AM, Barry Warsaw wrote:

 DEFAULT_VERSION_RE = re.compile(r'(?P<version>\d+\.\d(?:\.\d+)?)')

 __version__ = pkgutil.get_distribution('elle').metadata['version']

The RE as given won't match alpha, beta, rc, dev, and post suffixes that are
discussed in PEP 386.

Indeed, I really don't like the RE suggestion - better to tell people
to just move the version info into the static config file and use
pkgutil to make it available as shown. That solves the build time vs
install time problem as well.


Nor will it match the code shown and quoted for the alternative distutils2
case.


Other comments:

Are there issues for finding and loading multiple versions of the same
module?

No, you simply can't do it. Python's import semantics are already
overly complicated even without opening that particular can of worms.


OK, I just recalled some discussion about multiple coexisting versions 
in the past, not that it produced any conclusion that such should or 
would ever be implemented.



Should it be possible to determine a version before loading a module?  If
yes, the version module would have to be able to find a parse version
strings in any of the many places this PEP suggests they could be... so that
would be somewhat complex, but the complexity shouldn't be used to change
the answer... but if the answer is yes, it might encourage fewer variant
cases to be supported for acceptable version definition locations for this
PEP.

Yep, this is why the version information should be in the setup.cfg
file, and hence available via pkgutil without loading the module
first.


So, no support for single .py file modules, then?

If "packaging" truly is the only thing that knows the version of 
something, and "version" lives in "packaging", then perhaps packaging 
"__version__" as part of the module is inappropriate, and the API to 
obtain the version of a module should be inside "packaging" with the 
module (or its name) as a parameter, rather than asking the module, 
which may otherwise not need a dependency on the internals of 
"packaging" except to obtain its own version, which, it doesn't likely 
need for its own use anyway, except to report it.


Which is likely why Barry offered so many choices as to where the 
version of a package or module might live in the first place.


Perhaps a different technique would be that if packaging is in use, that 
it could somehow inject the version from setup.cfg into the module, 
either by tweaking the source as it gets packaged, or installed, or 
tweaking the module as/after it gets loaded (the latter still required 
some runtime dependency on code from the packaging system).  A line like 
the following in some designated-to-"packaging" source file could be 
replaced during packaging:


__version__ = "7.9.7" # replaced by "packaging"

could be used for source codes that use "packaging" which would replace 
it by the version from setup.cfg during the packaging process, whereas a 
module that doesn't use "packaging" would put in the real version, and 
avoid the special comment.  The reason the fake version would have a 
(redundant) number would be to satisfy dependencies during 
pre-"packaging" testing.  (The above would add a new parsing requirement 
to "version" for "" at the end.  Something different than "dev" so 
that development releases that still go through the packaging process 
are still different than developers' test code.  "packaging" should 
probably complain if the versions are numerically different and the 
version in the source file doesn't have "" or doesn't exactly match 
the version in setup.cfg, and if the special comment is not found.)


Caveat: I'm not 100% clear on when/how any of "distutils", "setuptools", 
or "packaging" are invoked (I was sort of waiting for the dust to settle 
on "packaging" to learn how to use this latest way of doing things), but 
based on historical experience with other systems, and expectations 
about how things "should" work, I would expect that a packaging system 
is something that should be used after a module is complete, to wrap it 
up for distribution and installation, but that the module itself should 
not have significant knowledge of or dependency on such a packaging 
system, so that when the module is invoked at runtime, it doesn't bring 
overhead

Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibiilty Requirements

2011-04-06 Thread Tres Seaver
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 04/06/2011 04:37 PM, Antoine Pitrou wrote:
> On Wed, 6 Apr 2011 13:22:09 -0700
> Brett Cannon  wrote:
>> On Wed, Apr 6, 2011 at 12:45, Raymond Hettinger wrote:
>>
>>>
>>> On Apr 6, 2011, at 10:39 AM, Brett Cannon wrote:
>>> Since people are taking my "semantically identical" point too strongly
>>> for what I mean (there is a reason I said "except in cases
>>> where implementation details of a VM prevents [semantic equivalency]
>>> entirely"), how about we change the requirement that C acceleration code
>>> must pass the same test suite (sans C specific issues such as refcount tests
>>> or word size) and adhere to the documented semantics the same? It should get
>>> us the same result without ruffling so many feathers. And if the other VMs
>>> find an inconsistency they can add a proper test and then we fix the code
>>> (as would be the case regardless). And in instances where it is simply not
>>> possible because of C limitations the test won't get written since the test
>>> will never pass.
>>>
>>> Does the whole PEP just boil down to "if a test is C specific, it should be
>>> marked as such"?
>>>
>>
>> How about the test suite needs to have 100% test coverage (or as close as
>> possible) on the pure Python version?
> 
> Let's say "as good coverage as the C code has", instead ;)

The point is to require that the *Python* version be the "reference
implementation", which means that the tests should be fully covering it
(especially for any non-grandfathered module).


Tres.
- -- 
===
Tres Seaver  +1 540-429-0999  [email protected]
Palladion Software   "Excellence by Design"http://palladion.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk2c48QACgkQ+gerLs4ltQ4p2ACgjds89LnzLnSEZOykwZKzqFVn
VVAAn10q1x74JOW2gi/DlYDgf9hkRCuv
=ee3b
-END PGP SIGNATURE-



Re: [Python-Dev] Force build form

2011-04-06 Thread Raymond Hettinger

On Apr 6, 2011, at 2:40 PM, Antoine Pitrou wrote:

> For the record, I've tried to make the force build form clearer on the
> buildbot Web UI. See e.g.:
> http://www.python.org/dev/buildbot/all/builders/x86%20OpenIndiana%20custom


Much improved.  Thanks.


Raymond



Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibiilty Requirements

2011-04-06 Thread R. David Murray
On Wed, 06 Apr 2011 18:05:57 -0400, Tres Seaver  wrote:
> On 04/06/2011 04:37 PM, Antoine Pitrou wrote:
> > On Wed, 6 Apr 2011 13:22:09 -0700 Brett Cannon  wrote:
> >> How about the test suite needs to have 100% test coverage (or as close as
> >> possible) on the pure Python version?
> > 
> > Let's say "as good coverage as the C code has", instead ;)
> 
> The point is to require that the *Python* version be the "reference
> implementation", which means that the tests should be fully covering it
> (especially for any non-grandfathered module).

There are two slightly different requirements covered by these two
suggested rules.  The Python one says "any test the Python package passes
the C version should also pass, and let's make sure we test all of the
Python code".  The C one says "any test that the C code passes the Python
code should also pass".   These are *almost* the same rule, but not quite.

Brett's point in asking for 100% coverage of the Python code is to make
sure the C implementation covers the same ground as the Python code.
Antoine's point in asking that the Python tests be at least as good as
the C tests is to make sure that the Python code covers the same ground
as the C code.  The former is most important for modules that are
getting new accelerator code, the latter for existing modules that
already have accelerators or are newly acquiring Python versions.

The PEP actually already contains the combined rule:  both the C and
the Python version must pass the *same* test suite (unless there are
virtual machine issues that simply can't be worked around).  I think
the thing that we are talking about adding to the PEP is that there
should be no untested features in *either* the Python or the C version,
insofar as we can make that happen (that is, we are testing also that
the feature sets are the same).  And that passing that comprehensive
test suite is the definition of compliance with the PEP, not abstract
arguments about semantics.  (We can argue about the semantics when we
discuss individual tests :)
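
For reference, a minimal sketch of that "one test suite, both versions"
arrangement, along the lines of what the PEP proposes (the class layout
here is illustrative):

import unittest
from test.support import import_fresh_module

py_heapq = import_fresh_module('heapq', blocked=['_heapq'])  # pure Python
c_heapq = import_fresh_module('heapq', fresh=['_heapq'])     # accelerated

class HeapTests:
    module = None  # filled in by the concrete classes below

    def test_push_pop_ordering(self):
        heap = []
        for value in [5, 1, 4, 2, 3]:
            self.module.heappush(heap, value)
        self.assertEqual([self.module.heappop(heap) for _ in range(5)],
                         [1, 2, 3, 4, 5])

class PyHeapTests(HeapTests, unittest.TestCase):
    module = py_heapq

class CHeapTests(HeapTests, unittest.TestCase):
    module = c_heapq

if __name__ == '__main__':
    unittest.main()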

100% branch coverage as measured by coverage.py is one good place to
start for building such a comprehensive test suite.  Existing tests
for C versions getting (or newly testing) Python code is another.
Bug reports from alternate VMs will presumably fill out the remainder.

--
R. David Murray   http://www.bitdance.com

PS: testing that Python code handles subclasses and duck typing is by
no means wasted effort; I've found some bugs in the email package using such
tests, and it is pure Python.


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibiilty Requirements

2011-04-06 Thread James Y Knight

On Apr 6, 2011, at 4:44 PM, [email protected] wrote:

>Brett> How about the test suite needs to have 100% test coverage (or as
>Brett> close as possible) on the pure Python version?
> 
> Works for me, but you will have to define what "100%" is fairly clearly.
> 100% of the lines get executed?  All the branches are taken?  Under what
> circumstances might the 100% rule be relaxed?

And...does that include all branches taken within the interpreter too? :)

E.g. check whether all possible exceptions are thrown in all possible places an 
exception could be thrown? (As per the exception compatibility subthread)

And what about all the possible crazy stuff you could do in callbacks back to 
user code (e.g. mutating arguments passed to the initial function, or 
installing a trace hook or...)?

Does use of the function as a class attribute need to be covered? (see previous 
discussion on differences in behavior due to descriptors).

Etcetc.

I'd love it if CPython C modules acted equivalently to python code, but there 
is almost an endless supply of differences...100% test coverage of the behavior 
seems completely infeasible if interpreted strictly; some explicit subset of 
all possible behavior needs to be defined for what users cannot reasonably 
depend on. (sys.settrace almost certainly belonging on that list :).)

James


Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibiilty Requirements

2011-04-06 Thread Terry Reedy

On 4/6/2011 2:54 PM, Terry Reedy wrote:


I believe that at the time of that decision, the Python [heapq] code was only
intended for humans, like the Python (near) equivalents in the itertools
docs to C-coded itertool functions. Now that we are aiming to have
stdlib Python code be a reference implementation for all interpreters,
that decision should be revisited.


OK so far.

> Either the C code should be generalized to sequences or
> the Python code specialized to lists, making sure the doc matches 
> either way.


After rereading the heapq doc and .py file and thinking some more, I 
retract this statement for the following reasons.


1. The heapq doc clearly states that a list is required. It leaves the 
behavior for other types undefined. Let it be so.


2. Both _heapq.c (or its actual name) and heapq.py meet (I presume) the 
documented requirements and pass (or would pass) a complete test suite 
based on using lists as heaps. In that regard, both are conformant and 
should be considered 'equivalent'.


3. _heapq.c is clearly optimized for speed. It allows a list subclass as 
input and will heapify such, but it ignores a custom __getitem__ (see the 
sketch after this list). My informal test on the result of 
random.shuffle(list(range(999))) shows that heapify is over 10x as fast 
as .sort(). Let it be so.


4. When I suggested changing heapq.py, I had forgotten that heapq.py 
defined several functions rather than a wrapper class with methods. I 
was thinking of putting a type check in .__init__, where it would be 
applied once per heap (and possibly bypassed), and could easily be 
removed. Instead every function would require a type check for every 
call. This would be too obnoxious to me. I love duck typing and held my 
nose a bit when suggesting a one-time type check.


5. Python already has an "extras allowed" principle. In other words, an 
implementation does not have to bother to enforce documented 
restrictions. For one example, Python 2 manuals restrict identifiers to 
ascii letters. CPython 2 (at least in recent versions) actually allows 
extended ascii letters, as in latin-1. For another, namespaces (globals 
and attribute namespaces), by their name, only need to map identifiers 
to objects. However, CPython uses general dicts rather than specialized 
string dicts with validity checks. People have exploited both loopholes. 
But those who have should not complain to us if such code fails on a 
different implementation that adheres to the doc.
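
Returning to point 3, an illustrative check of the __getitem__ behaviour
on CPython (LoggingList is just for demonstration):

import heapq

class LoggingList(list):
    def __getitem__(self, index):
        print("__getitem__", index)
        return super().__getitem__(index)

data = LoggingList([3, 1, 2])
heapq.heapify(data)   # the C accelerator prints nothing: it uses the
                      # concrete list API and bypasses __getitem__; the
                      # pure Python version would call it
print(list(data))     # the heap invariant holds either way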


I think the Language and Library references should start with something 
a bit more specific than at present:


"The Python x.y Language and Library References define the Python x.y 
language, its builtin objects, and standard library. Code written to 
these docs should run on any implementation that includes the features 
used. Code that exploits or depends on any implementation-specific 
feature or behavior may not be portable."


_x.c and x.py are separate implementations of module x. I think they 
should be subject to the same disclaimer.



Therefore, I currently think that the only change needed for heapq 
(assuming both versions pass complete tests as per the doc) is an 
explanation at the top of heapq.py that goes something like this:


"Heapq.py is a reference implementation of the heapq module for both 
humans and implementations that do not have an accelerated version. For 
CPython, most of the functions are replaced by much faster C-coded versions.


Heapq is documented to require a Python list as input to the heap 
functions. The C functions enforce this restriction. The Python versions 
do not, and should work with any mutable random-access sequence. Should 
you wish to run the Python code with CPython, copy this file, give it a 
new name, delete the following lines:


try:
    from _heapq import *
except ImportError:
    pass

make any other changes you wish, and do not expect the result to be 
portable."


--
Terry Jan Reedy



[Python-Dev] Code highlighting in tracker

2011-04-06 Thread anatoly techtonik
Is it a good idea to have code highlighting in tracker?

I'd like to gather independent, unbiased opinions for a little research
on Python development. Unfortunately, there is no way to create a
poll, but if you just say yes or no without reading all the other
comments, that would be fine. Thanks.
-- 
anatoly t.


Re: [Python-Dev] Code highlighting in tracker

2011-04-06 Thread Benjamin Peterson
2011/4/6 anatoly techtonik :
> Is it a good idea to have code highlighting in tracker?

Why would we need it?



-- 
Regards,
Benjamin


Re: [Python-Dev] PEP 396, Module Version Numbers

2011-04-06 Thread Nick Coghlan
On Thu, Apr 7, 2011 at 7:58 AM, Glenn Linderman  wrote:
> Perhaps a different technique would be that if packaging is in use, that it
> could somehow inject the version from setup.cfg into the module, either by
> tweaking the source as it gets packaged, or installed, or tweaking the
> module as/after it gets loaded (the latter still required some runtime
> dependency on code from the packaging system).  A line like the following in
> some designated-to-"packaging" source file could be replaced during
> packaging:
>
> __version__ = "7.9.7" # replaced by "packaging"

If you don't upload your module to PyPI, then you can do whatever you
want with your versioning info. If you *do* upload it to PyPI, then
part of doing so properly is to package it so that your metadata is
where other utilities expect it to be. At that point, you can move the
version info over to setup.cfg and add the code into the module to
read it from the metadata store.

The guidelines in 396 really only apply to distributed packages, so it
doesn't make sense to obfuscate by catering to esoteric use cases. If
private modules don't work with the standard tools, who is going to
care? The module author clearly doesn't, and they aren't distributing
it to anyone else. Once they *do* start distributing it, then their
new users will help bring them into line. Having the recommended
practice clearly documented just makes it easier for those users to
point new module distributors in the right direction.

(Also, tsk, tsk, Barry for including Standards track proposals in an
Informational PEP!)

Cheers,
Nick.

P.S. A nice coincidental progression: PEP 376, 386 and 396 are all
related to versioning and package metadata

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Force build form

2011-04-06 Thread Nick Coghlan
On Thu, Apr 7, 2011 at 7:40 AM, Antoine Pitrou  wrote:
> For the record, I've tried to make the force build form clearer on the
> buildbot Web UI. See e.g.:
> http://www.python.org/dev/buildbot/all/builders/x86%20OpenIndiana%20custom

Looks good - trying it out on my LHS precedence correction branch to
confirm I am using it correctly.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Code highlighting in tracker

2011-04-06 Thread Nick Coghlan
On Thu, Apr 7, 2011 at 1:37 PM, anatoly techtonik  wrote:
> Is it a good idea to have code highlighting in tracker?

The tracker doesn't display code. Only the code review tool and the
repository browser display code (and syntax highlighting is useful but
not essential for those use cases, just as it is useful but not
essential during actual coding).

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 396, Module Version Numbers

2011-04-06 Thread Glenn Linderman

On 4/6/2011 9:08 PM, Nick Coghlan wrote:

On Thu, Apr 7, 2011 at 7:58 AM, Glenn Linderman  wrote:

Perhaps a different technique would be that if packaging is in use, that it
could somehow inject the version from setup.cfg into the module, either by
tweaking the source as it gets packaged, or installed, or tweaking the
module as/after it gets loaded (the latter still required some runtime
dependency on code from the packaging system).  A line like the following in
some designated-to-"packaging" source file could be replaced during
packaging:

__version__ = "7.9.7" # replaced by "packaging"

If you don't upload your module to PyPI, then you can do whatever you
want with your versioning info. If you *do* upload it to PyPI, then
part of doing so properly is to package it so that your metadata is
where other utilities expect it to be. At that point, you can move the
version info over to setup.cfg and add the code into the module to
read it from the metadata store.


The PEP doesn't mention PyPI, and at present none of the modules there 
use "packaging" :)  So it wasn't obvious to me that the PEP applies only 
to PyPI, and I have used modules that were not available from PyPI yet 
were still distributed and packaged somehow (not using "packaging" clearly).


While there has been much effort (discussion by many) to make 
"packaging" useful to many, and that is probably a good thing, I still 
wonder why a packaging system should be loaded into applications when 
all the code has already been installed.  Or is the runtime of 
"packaging" packaged so that only a small amount of code has to be 
loaded to obtain "version" and "__version__"?  I don't recall that being 
discussed on this list, but maybe it has been on more focused lists, 
sorry for my ignorance... but I also read about embedded people 
complaining about how many files Python opens at start up, and see no 
need for a full packaging system to be loaded, just to do version checking.




The guidelines in 396 really only apply to distributed packages, so it
doesn't make sense to obfuscate by catering to esoteric use cases. If
prviate modules don't work with the standard tools, who is going to
care? The module author clearly doesn't, and they aren't distributing
it to anyone else. Once they *do* start distributing it, then their
new users will help bring them into line. Having the recommended
practice clearly documented just makes it easier for those users to
point new module distributors in the right direction.


Oh, I fully agree that there be a PEP with guidelines, and yesterday 
converted my private versioning system to conform with the names in the 
PEP, and the style of version string in the referenced PEP.  And I 
distribute my modules -- so far only in a private group, and so far as 
straight .py files... no use of "packaging".  And even if I never use 
"packaging", it seems like a good thing to conform to this PEP, if I 
can.  Version checking is useful.



(Also, tsk, tsk, Barry for including Standards track proposals in an
Informational PEP!)

Cheers,
Nick.

P.S. A nice coincidental progression: PEP 376, 386 and 396 are all
related to versioning and package metadata





Re: [Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibiilty Requirements

2011-04-06 Thread Stefan Behnel

Brett Cannon, 06.04.2011 19:40:

On Tue, Apr 5, 2011 at 05:01, Nick Coghlan wrote:

However, there actually *is* a significant semantic discrepancy in the
heapq case, which is that py_heapq is duck-typed, while c_heapq is
not:

TypeError: heap argument must be a list


That's true. I will re-word it to point that out. The example code still
shows it, I just didn't explicitly state that in the example.


Assuming there always is an "equivalent" Python implementation anyway, what 
about using that as a fallback for input types that the C implementation 
cannot deal with?


Or would it be a larger surprise for users if the code ran slower when 
passing in a custom type than if it throws an exception instead?
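
A rough sketch of that fallback pattern (c_func and py_func stand for an
accelerated function and its pure Python equivalent; this is not how the
stdlib is wired today):

def with_python_fallback(c_func, py_func):
    def wrapper(*args, **kwargs):
        try:
            return c_func(*args, **kwargs)
        except TypeError:
            # the C accelerator rejected the argument types; retry with
            # the duck-typed pure Python implementation
            return py_func(*args, **kwargs)
    return wrapper

One caveat: a blanket TypeError catch like this can also swallow genuine
errors (say, unorderable heap items), so the surprise may just move
somewhere less visible.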


Stefan



Re: [Python-Dev] PEP 396, Module Version Numbers

2011-04-06 Thread Nick Coghlan
On Thu, Apr 7, 2011 at 2:55 PM, Glenn Linderman  wrote:
> __version__ = "7.9.7" # replaced by "packaging"
>
> If you don't upload your module to PyPI, then you can do whatever you
> want with your versioning info. If you *do* upload it to PyPI, then
> part of doing so properly is to package it so that your metadata is
> where other utilities expect it to be. At that point, you can move the
> version info over to setup.cfg and add the code into the module to
> read it from the metadata store.
>
> The PEP doesn't mention PyPI, and at present none of the modules there use
> "packaging" :)

They all use distutils (or setuptools or distutils2) though, which is
what packaging replaces.

(Sorry for not making that clear - it's easy to forget which aspects
of these issues aren't common knowledge as yet)

> So it wasn't obvious to me that the PEP applies only to
> PyPI, and I have used modules that were not available from PyPI yet were
> still distributed and packaged somehow (not using "packaging" clearly).

packaging is the successor to the current distutils package.
Distribution via PyPI is the main reason to bother with creating a
correctly structured package - for internal distribution, people use
all sorts of ad hoc schemes (often just the packaging systems of their
internal target platforms). I'll grant that some people do use
properly structured packages for purely internal use, but I'd also be
willing to bet that they're the exception rather than the rule.

What I would like to see the PEP say is that if you don't *have* a
setup.cfg file, then go ahead and embed the version directly in your
Python source file. If you *do* have one, then put the version there
and retrieve it with "pkgutil" if you want to provide a __version__
attribute.
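
Concretely, something like this (get_distribution() is the metadata API
proposed in the PEP 376/396 discussions and quoted earlier in the thread,
not a function today's pkgutil provides, so treat the exact spelling as
tentative):

# setup.cfg carries the authoritative version, e.g.
#
#   [metadata]
#   name = elle
#   version = 7.9.7
#
# and the module reads it back from the installed metadata:
import pkgutil

try:
    __version__ = pkgutil.get_distribution('elle').metadata['version']
except AttributeError:
    # running on a Python without the proposed pkgutil API
    __version__ = 'unknown'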

Barry is welcome to make a feature request to allow that dependency to
go the other way, with the packaging system reading the version number
out of the source file, but such a suggestion doesn't belong in an
Informational PEP. If such a feature is ever accepted, then the
recommendation in the PEP could be updated.

> While there has been much effort (discussion by many) to make "packaging"
> useful to many, and that is probably a good thing, I still wonder why a
> packaging system should be loaded into applications when all the code has
> already been installed.  Or is the runtime of "packaging" packaged so that
> only a small amount of code has to be loaded to obtain "version" and
> "__version__"?  I don't recall that being discussed on this list, but maybe
> it has been on more focused lists, sorry for my ignorance... but I also read
> about embedded people complaining about how many files Python opens at start
> up, and see no need for a full packaging system to be loaded, just to do
> version checking.

pkgutil will be able to read the metadata - it is a top level standard
library module, *not* a submodule of distutils/packaging.

It may make sense for the version parsing support to be in pkgutil as
well, since PEP 345 calls for it to be stored as a string in the
package metadata, but it needs to be converted with NormalizedVersion
to be safe to use in arbitrary version range checks. That's Tarek's
call as to whether to provide it that way, or as a submodule of
packaging. As you say, the fact that distutils/packaging are usually
first on the chopping block when distros are looking to save space is
a strong point in favour of having that particular functionality
somewhere else.
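
A small example of the kind of check that needs NormalizedVersion rather
than plain strings (assuming distutils2, the current home of the PEP 386
reference implementation, is installed; the final import location under
"packaging" is still Tarek's call):

from distutils2.version import NormalizedVersion as V

assert '1.10' < '1.9'            # lexicographic string compare: wrong order
assert V('1.9') < V('1.10')      # NormalizedVersion compares numerically
assert V('1.2a1') < V('1.2b2') < V('1.2') < V('1.2.1')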

That said, I've seen people have problems because a Python 2.6
redistributor decided "contextlib" wasn't important and left it out,
so YMMV regardless of where the code ends up.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia