Re: [Python-Dev] test_itertools fails for trunk on x86 OS X machine

2006-09-22 Thread Tim Peters
[Neal Norwitz]
 It looks like %zd of a negative number is treated as an unsigned
 number on OS X, even though the man page says it should be signed.

 
 The z modifier, when applied to a d or i conversion, indicates that
 the argument is of a signed type equivalent in size to a size_t.
 

It's not just some man page ;-) -- this is required by the C99 standard,
which introduced the `z` length modifier.  It's the `d` or `i`
here that implies `signed`; `z` is only supposed to specify the width of
the integer type, and can also be applied to codes for unsigned
integer types, like %zu and %zx.

 The program below returns -123 on Linux and 4294967173 on OS X.

 n
 --
 #include <stdio.h>
 int main()
 {
     char buffer[256];
     if (sprintf(buffer, "%zd", (size_t)-123) < 0)
         return 1;
     printf("%s\n", buffer);
     return 0;
 }

Well, to be strictly anal, while the result of

(size_t)-123

is defined, the result of casting /that/ back to a signed type of the
same width is not defined.  Maybe your compiler was doing you a
favor ;-)
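The wraparound being discussed can be observed from Python via ctypes, which (unlike C, where converting an out-of-range value back to a signed type is implementation-defined) gives well-defined modular behavior. This is an illustrative aside, not part of the thread; `c_size_t`/`c_ssize_t` are real ctypes types:

```python
import ctypes

# (size_t)-123 is well defined in C: it wraps modulo 2**N.
bits = 8 * ctypes.sizeof(ctypes.c_size_t)
wrapped = ctypes.c_size_t(-123).value
print(wrapped)   # 2**bits - 123, i.e. 4294967173 on a 32-bit build

# Converting back to a signed type is implementation-defined in C;
# ctypes models the common two's-complement behavior.
print(ctypes.c_ssize_t(wrapped).value)   # -123
```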
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] test_itertools fails for trunk on x86 OS X machine

2006-09-22 Thread Neal Norwitz
On 9/21/06, Tim Peters [EMAIL PROTECTED] wrote:

 Well, to be strictly anal, while the result of

 (size_t)-123

 is defined, the result of casting /that/ back to a signed type of the
 same width is not defined.  Maybe your compiler was doing you a
 favor ;-)

I also tried with a cast to an ssize_t and replacing %zd with an %zi.
None of them make a difference; all return an unsigned value.  This is
with powerpc-apple-darwin8-gcc-4.0.0 (GCC) 4.0.0 20041026 (Apple
Computer, Inc. build 4061).  Although I would expect the issue is in
the std C library rather than the compiler.

Forcing PY_FORMAT_SIZE_T to be "l" instead of "z" fixes this problem.

BTW, this is the same issue on Mac OS X:

 >>> struct.pack('=b', -59)
__main__:1: DeprecationWarning: 'b' format requires 4294967168 <= number <= 127
'A'

n
--


Re: [Python-Dev] New relative import issue

2006-09-22 Thread Josiah Carlson

Phillip J. Eby [EMAIL PROTECTED] wrote:
 
 At 08:44 PM 9/21/2006 -0700, Josiah Carlson wrote:
 This can be implemented with a fairly simple package registry, contained
 within a (small) SQLite database (which is conveniently shipped in
 Python 2.5).  There can be a system-wide database that all users use as
 a base, with a user-defined package registry (per user) where the
 system-wide packages can be augmented.
 
 As far as I can tell, you're ignoring that per-user must *also* be 
 per-version, and per-application.  Each application or runtime environment 
 needs its own private set of information like this.

Having a different database per Python version is not significantly
different than having a different Python binary for each Python version. 
About the only (annoying) nit is that the systemwide database needs to
be easily accessible to the Python runtime, and is possibly volatile. 
Maybe a symlink in the same path as the actual Python binary on *nix,
and the file located next to the binary on Windows.

I didn't mention the following because I thought it would be superfluous,
but it seems that I should have stated it right out.  My thoughts were
that on startup, Python would first query the 'system' database, caching
its results in a dictionary, then query the user's listing, updating the
dictionary as necessary, then unload the databases.  On demand, when
code runs packages.register(), if both persist and systemwide are False,
it just updates the dictionary. If either are true, it opens up and
updates the relevant database.

With such a semantic, every time Python gets run, every instance gets
its own private set of paths, derived from the system database, user
database, and runtime-defined packages.
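The startup sequence described here (query the system database, cache it into a dictionary, layer the user database on top, then unload both) might look roughly like the following; the `packages` table layout, class name, and method signatures are all invented for illustration, and sqlite3 is used only because the proposal notes it ships with 2.5:

```python
import sqlite3

class PackageRegistry:
    """Hypothetical sketch: cache system + user databases into one dict."""

    def __init__(self, system_db, user_db):
        self.paths = {}
        for db in (system_db, user_db):   # user entries override system ones
            conn = sqlite3.connect(db)
            conn.execute(
                "CREATE TABLE IF NOT EXISTS packages "
                "(name TEXT PRIMARY KEY, path TEXT)")
            self.paths.update(conn.execute("SELECT name, path FROM packages"))
            conn.close()                  # databases unloaded after startup

    def register(self, name, path, persist=False, systemwide=False):
        self.paths[name] = path           # always update the dictionary
        if persist or systemwide:
            # would reopen and update the relevant database here
            pass
```

With `:memory:` databases this degenerates to an empty cache, which is enough to show the layering.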


 Next, putting the installation data inside a database instead of 
 per-installation-unit files presents problems of its own.  While some 
 system packaging tools allow install/uninstall scripts to run, they are 
 often frowned upon, and can be unintentionally bypassed.

This is easily remedied with a proper 'packages' implementation:

python -Mpackages name path

Note that Python could auto-insert standard library and site-packages
'packages' on startup (creating the initial dictionary, then the
systemwide, then the user, ...).


 These are just a few of the issues that come to mind.  Realistically 
 speaking, .pth files are currently the most effective mechanism we have, 
 and there actually isn't much that can be done to improve upon them.

Except that .pth files are only usable in certain (likely) system paths,
that the user may not have write access to.  There have previously been
proposals to add support for .pth files in the path of the run .py file,
but they don't seem to have gotten any support.


 What's more needed are better mechanisms for creating and managing Python 
 environments (to use a term coined by Ian Bicking and Jim Fulton over on 
 the distutils-sig), which are individual contexts in which Python 
 applications run.  Some current tools in development by Ian and Jim include:
 
 Anyway, system-wide and per-user environment information isn't nearly 
 sufficient to address the issues that people have when developing and 
 deploying multiple applications on a server, or even using multiple 
 applications on a client installation (e.g. somebody using both the 
 Enthought Python IDE and Chandler on the same machine).  These relatively 
 simple use cases rapidly demonstrate the inadequacy of system-wide or 
 per-user configuration of what packages are available.

It wouldn't be terribly difficult to add environment switching and
environment derivation (copying or linked, though copying would be
simpler).

packages.derive_environment(parent_environment)
packages.register(name, path, env=environment)
packages.use(environment)

It also wouldn't be terribly difficult to set up environments that
required certain packages...

packages.new_environment(environment, *required_packages, test=True)

To verify that the Python installation has the required packages, then
later...

packages.new_environment(environment, *required_packages, persist=True)
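A toy in-memory sketch of how those calls might hang together (every name here is hypothetical; persistence and linked environments are omitted):

```python
class Environments:
    """Hypothetical sketch of the packages.* environment API discussed above."""

    def __init__(self):
        self.envs = {"default": {}}   # environment name -> {package: path}
        self.current = "default"

    def register(self, name, path, env=None):
        self.envs[env or self.current][name] = path

    def derive_environment(self, parent, child):
        # copying rather than linking, as the simpler of the two options
        self.envs[child] = dict(self.envs[parent])

    def use(self, environment):
        self.current = environment

    def new_environment(self, environment, *required, test=False, persist=False):
        # verify the current installation has the required packages...
        missing = [p for p in required if p not in self.envs[self.current]]
        if missing:
            raise ImportError("missing packages: " + ", ".join(missing))
        if not test:
            # ...then actually create it (persist would also write a database)
            self.envs[environment] = {
                p: self.envs[self.current][p] for p in required}
```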


I believe that most of the concerns that you have brought up can be
addressed, and I think that it could be far nicer to deal with than the
current sys.path hackery. The system database location is a bit annoying,
but I lack the *nix experience to say where such a database could or
should be located.

 - Josiah



Re: [Python-Dev] list.discard? (Re: dict.discard)

2006-09-22 Thread Fredrik Lundh
Greg Ewing wrote:

 Actually I'd like this for lists. Often I find myself
 writing

   if x in somelist:
       somelist.remove(x)

 A single method for doing this would be handy, and
 more efficient.

there is a single method that does this, of course, but you have to sprinkle
some sugar on it:

try:
    somelist.remove(x)
except ValueError:
    pass

/F 
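Wrapped up as the helper Greg is asking for (`discard` is a hypothetical name, chosen by analogy with set.discard):

```python
def discard(somelist, x):
    """Remove x from somelist if present; do nothing otherwise."""
    try:
        somelist.remove(x)
    except ValueError:
        pass

items = [1, 2, 3]
discard(items, 2)
discard(items, 99)   # absent value: silently ignored
print(items)         # [1, 3]
```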





Re: [Python-Dev] test_itertools fails for trunk on x86 OS X machine

2006-09-22 Thread Ronald Oussoren
 
On Friday, September 22, 2006, at 08:38AM, Neal Norwitz [EMAIL PROTECTED] 
wrote:

On 9/21/06, Tim Peters [EMAIL PROTECTED] wrote:

 Well, to be strictly anal, while the result of

 (size_t)-123

 is defined, the result of casting /that/ back to a signed type of the
 same width is not defined.  Maybe your compiler was doing you a
 favor ;-)

I also tried with a cast to an ssize_t and replacing %zd with an %zi.
None of them make a difference; all return an unsigned value.  This is
with powerpc-apple-darwin8-gcc-4.0.0 (GCC) 4.0.0 20041026 (Apple
Computer, Inc. build 4061).  Although I would expect the issue is in
the std C library rather than the compiler.

Forcing PY_FORMAT_SIZE_T to be "l" instead of "z" fixes this problem.

BTW, this is the same issue on Mac OS X:

 >>> struct.pack('=b', -59)
__main__:1: DeprecationWarning: 'b' format requires 4294967168 <= number <= 127

Has anyone filed a bug at bugreport.apple.com about this (that is '%zd' not 
behaving as the documentation says it should behave)? I'll file a bug (as 
well), but the more people tell Apple about this the more likely it is that 
someone will fix this.

Ronald





Re: [Python-Dev] New relative import issue

2006-09-22 Thread Nick Coghlan
Brett Cannon wrote:
 But either way I will be messing with the import system in the 
 relatively near future.  If you want to help, Paul (or anyone else), 
 just send me an email and we can try to coordinate something (plan to do 
 the work in the sandbox as a separate thing from my security stuff).

Starting with pkgutil.get_loader and removing the current dependency on 
imp.find_module and imp.load_module would probably be a decent way to start.

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://www.boredomandlaziness.org


Re: [Python-Dev] [Python-checkins] release25-maint is UNFROZEN

2006-09-22 Thread Fred L. Drake, Jr.
On Thursday 21 September 2006 08:35, Armin Rigo wrote:
  Thanks for the hassle!  I've got another bit of it for you, though.  The
  frozen 2.5 documentation doesn't seem to be available on-line.  At
  least, the doc links from the release page point to the 'dev' 2.6a0
  version, and the URL following the common scheme -
  http://www.python.org/doc/2.5/ - doesn't work.

This should mostly be working now.  The page at www.python.org/doc/2.5/ 
isn't really right, but will do the trick.  Hopefully I'll be able to work 
out how these pages should be updated properly at the Arlington sprint this 
weekend, at which point I can update PEP 101 appropriately and make sure this 
gets done when releases are made.


  -Fred

-- 
Fred L. Drake, Jr.   fdrake at acm.org


[Python-Dev] Python network programming

2006-09-22 Thread Raja Rokkam
Hi,

I am currently doing my final year project "Secure Mobile Robot Management".  I have done the theoretical aspects of it till now, and am now thinking of coding it.  I would like to code in Python, but I am new to Python network programming.

Some of the features of my project are:

1. Each robot can send data to any other robot.
2. Each robot can receive data from any other robot.
3. Every robot has at least one other bot in its communication range.
4. The maximum size of a data packet is limited to 35 bytes.
5. Each mobile robot maintains a table with routes.
6. All the routes stored in the routing table include a field named life-time.
7. A route discovery process is initiated if there is no known route to another bot.
8. There is no server here.
9. Every bot should be able to process the data from other bots, and both multicast/unicast need to be supported.

Assume the environment is a gridded mesh and the bots are exploring the area.  They need to perform a set of tasks (assume finding some locations which are dangerous, or something like that).

My main concern is how to go about modifying the headers such that everything fits in 35 bytes.  I would like to know how to proceed, and whether there are any links or resources in this regard.
Thank You, Raja.


Re: [Python-Dev] Python network programming

2006-09-22 Thread Fredrik Lundh
Raja Rokkam wrote:

 I would like to code in Python, but I am new to Python network programming

wrong list: python-dev is for people who develop the python core, not people
who want to develop *in* python.

see

http://www.python.org/community/lists/

for a list of more appropriate forums.

cheers /F 





Re: [Python-Dev] New relative import issue

2006-09-22 Thread Phillip J. Eby
At 12:08 AM 9/22/2006 -0700, Josiah Carlson wrote:
Phillip J. Eby [EMAIL PROTECTED] wrote:
 
  At 08:44 PM 9/21/2006 -0700, Josiah Carlson wrote:
  This can be implemented with a fairly simple package registry, contained
  within a (small) SQLite database (which is conveniently shipped in
  Python 2.5).  There can be a system-wide database that all users use as
  a base, with a user-defined package registry (per user) where the
  system-wide packages can be augmented.
 
  As far as I can tell, you're ignoring that per-user must *also* be
  per-version, and per-application.  Each application or runtime environment
  needs its own private set of information like this.

Having a different database per Python version is not significantly
different than having a different Python binary for each Python version.

You misunderstood me: I mean that the per-user database must be able to 
store information for *different Python versions*.  Having a single 
per-user database without the ability to include configuration for more 
than one Python version (analogous to the current situation with the 
distutils per-user config file) is problematic.

In truth, a per-user configuration is just a special case of the real need: 
to have per-application environments.  In effect, a per-user environment is 
a fallback for not having an application environment, and the system 
environment is a fallback for not having a user environment.


About the only (annoying) nit is that the systemwide database needs to
be easily accessible to the Python runtime, and is possibly volatile.
Maybe a symlink in the same path as the actual Python binary on *nix,
and the file located next to the binary on Windows.

I didn't mention the following because I thought it would be superfluous,
but it seems that I should have stated it right out.  My thoughts were
that on startup, Python would first query the 'system' database, caching
its results in a dictionary, then query the user's listing, updating the
dictionary as necessary, then unload the databases.  On demand, when
code runs packages.register(), if both persist and systemwide are False,
it just updates the dictionary. If either are true, it opens up and
updates the relevant database.

Using a database as the primary mechanism for managing import locations 
simply isn't workable.  You might as well suggest that each environment 
consist of a single large zipfile containing the packages in question: this 
would actually be *more* practical (and fast!) in terms of Python startup, 
and is no different from having a database with respect to the need for 
installation and uninstallation to modify a central file!

I'm not proposing we do that -- I'm just pointing out why using an actual 
database isn't really workable, considering that it has all of the 
disadvantages of a big zipfile, and none of the advantages (like speed, 
having code already written that supports it, etc.)


This is easily remedied with a proper 'packages' implementation:

 python -Mpackages name path

Note that Python could auto-insert standard library and site-packages
'packages' on startup (creating the initial dictionary, then the
systemwide, then the user, ...).

I presume here you're suggesting a way to select a runtime environment from 
the command line, which would certainly be a good idea.


  These are just a few of the issues that come to mind.  Realistically
  speaking, .pth files are currently the most effective mechanism we have,
  and there actually isn't much that can be done to improve upon them.

Except that .pth files are only usable in certain (likely) system paths,
that the user may not have write access to.  There have previously been
proposals to add support for .pth files in the path of the run .py file,
but they don't seem to have gotten any support.

Setuptools works around this by installing an enhancement for the 'site' 
module that extends .pth support to include all PYTHONPATH 
directories.  The enhancement delegates to the original site module after 
recording data about sys.path that the site module destroys at startup.
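For reference, the .pth handling that the site module (and setuptools' wrapper around it) performs is roughly: each non-comment line is either a path to append or, if it starts with "import", a statement to execute. A simplified sketch (not the real site.addpackage, which has more error handling):

```python
import os

def addpackage(sitedir, pth_name, known_paths):
    """Simplified sketch of site.addpackage-style .pth processing."""
    new_paths = []
    with open(os.path.join(sitedir, pth_name)) as f:
        for line in f:
            line = line.rstrip()
            if not line or line.startswith("#"):
                continue                    # blank lines and comments
            if line.startswith(("import ", "import\t")):
                exec(line)                  # executable .pth line
                continue
            path = os.path.join(sitedir, line)
            if os.path.exists(path) and path not in known_paths:
                new_paths.append(path)      # only existing, unseen dirs
                known_paths.add(path)
    return new_paths
```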



I believe that most of the concerns that you have brought up can be
addressed,

Well, as I said, I've already dealt with them, using .pth files, for the 
use cases I care about.  Ian Bicking and Jim Fulton have also gone farther 
with work on tools to create environments with greater isolation or more 
fixed version linkages than what setuptools does.  (Setuptools-generated 
environments dynamically select requirements based on available versions at 
runtime, while Ian and Jim's tools create environments whose inter-package 
linkages are frozen at installation time.)


and I think that it could be far nicer to deal with than the
current sys.path hackery.

I'm not sure of that, since I don't yet know how your approach would deal 
with namespace packages, which are distributed in pieces and assembled 
later.  For example, many PEAK and Zope distributions live in the peak.* 

[Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Michael Foord
Hello all,

I have a suggestion for a new Python built in function: 'flatten'.

This would (as if it needs explanation) take a single sequence, where 
each element can be a sequence (or iterable ?) nested to an arbitrary 
depth. It would return a flattened list. A useful restriction could be 
that it wouldn't expand strings :-)

I've needed this several times, and recently twice at work. There are 
several implementations in the Python cookbook. When I posted on my blog 
recently asking for one liners to flatten a list of lists (only 1 level 
of nesting), I had 26 responses, several of them saying it was a problem 
they had encountered before.

There are also numerous  places on the web bewailing the lack of this as 
a built-in. All of this points to the fact that it is something that 
would be appreciated as a built in.

There is an implementation already in Tkinter :

from _tkinter import _flatten as flatten

There are several different possible approaches in pure Python, but is 
this an idea that has legs ?
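For concreteness, here is one of the several possible pure-Python approaches: a recursive generator that expands lists and tuples but leaves strings whole (the restriction suggested above). The name and the choice of which types count as "nested" are policy decisions, not anything blessed:

```python
def flatten(iterable):
    """Yield the leaves of an arbitrarily nested structure of lists/tuples."""
    for item in iterable:
        if isinstance(item, (list, tuple)):  # policy: only these nest
            for leaf in flatten(item):
                yield leaf
        else:
            yield item

print(list(flatten([1, [2, [3, "abc"]], (4,)])))  # [1, 2, 3, 'abc', 4]
```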

All the best,


Michael Foord
http://www.voidspace.org.uk/python/index.shtml





[Python-Dev] Relative import bug?

2006-09-22 Thread Thomas Heller
Consider a package containing these files:

a/__init__.py
a/b/__init__.py
a/b/x.py
a/b/y.py

If x.py contains this:


from ..b import y
import a.b.x
from ..b import x


Python trunk and Python 2.5 both complain:

Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] on 
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import a.b.x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "a\b\x.py", line 2, in <module>
    from ..b import x
ImportError: cannot import name x


A bug?

Thomas



Re: [Python-Dev] Relative import bug?

2006-09-22 Thread Phillip J. Eby
At 08:10 PM 9/22/2006 +0200, Thomas Heller wrote:
Consider a package containing these files:

a/__init__.py
a/b/__init__.py
a/b/x.py
a/b/y.py

If x.py contains this:


from ..b import y
import a.b.x
from ..b import x


Python trunk and Python 2.5 both complain:

Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] 
on win32
Type "help", "copyright", "credits" or "license" for more information.
 >>> import a.b.x
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "a\b\x.py", line 2, in <module>
     from ..b import x
ImportError: cannot import name x
 

A bug?

If it is, it has nothing to do with relative importing per se.  Note that 
changing it to "from a.b import x" produces the exact same error.

This looks like a standard circular import bug.  What's happening is that 
the first import doesn't set a.b.x = x until after a.b.x is fully 
imported.  But subsequent "import a.b.x" statements don't set it either, 
because they are satisfied by finding 'a.b.x' in sys.modules.  So, when the 
'from ... import x' runs, it tries to get the 'x' attribute of 'a.b' 
(whether it gets a.b relatively or absolutely), and fails.

If you make the last import be "import a.b.x as x", you'll get a better 
error message:

Traceback (most recent call last):
   File "<string>", line 1, in <module>
   File "a/b/x.py", line 3, in <module>
     import a.b.x as x
AttributeError: 'module' object has no attribute 'x'

But the entire issue is a bug that exists in Python 2.4, and possibly prior 
versions as well.



[Python-Dev] Pep 353: Py_ssize_t advice

2006-09-22 Thread David Abrahams

Pep 353 advises the use of this incantation:

  #if PY_VERSION_HEX < 0x0205
  typedef int Py_ssize_t;
  #define PY_SSIZE_T_MAX INT_MAX
  #define PY_SSIZE_T_MIN INT_MIN
  #endif

I just wanted to point out that this advice could lead to library
header collisions when multiple 3rd parties decide to follow it.  I
suggest it be changed to something like:

  #if PY_VERSION_HEX < 0x0205 && !defined(PY_SSIZE_T_MIN)
  typedef int Py_ssize_t;
  #define PY_SSIZE_T_MAX INT_MAX
  #define PY_SSIZE_T_MIN INT_MIN
  #endif

(C++ allows restating of typedefs; if C allows it, that should be
something like):

  #if PY_VERSION_HEX < 0x0205
  typedef int Py_ssize_t;
  # if !defined(PY_SSIZE_T_MIN)
  #  define PY_SSIZE_T_MAX INT_MAX
  #  define PY_SSIZE_T_MIN INT_MIN
  # endif
  #endif
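For context, PY_VERSION_HEX is the same value exposed to Python code as sys.hexversion, packed one byte per component (major, minor, micro, then release level and serial), so a full 2.5.0 boundary is written 0x02050000. A quick decoding sketch (`decode_hexversion` is a made-up helper, not a real API):

```python
import sys

def decode_hexversion(h):
    """Unpack a PY_VERSION_HEX / sys.hexversion value into (major, minor, micro)."""
    return ((h >> 24) & 0xFF, (h >> 16) & 0xFF, (h >> 8) & 0xFF)

print(decode_hexversion(0x02050000))   # (2, 5, 0), i.e. Python 2.5.0 final
print(decode_hexversion(sys.hexversion))
```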

You may say that library developers should know better, but I just had
an argument with a very bright guy who didn't get it at first.

Thanks, and HTH.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com



Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Josiah Carlson

Michael Foord [EMAIL PROTECTED] wrote:
 
 Hello all,
 
 I have a suggestion for a new Python built in function: 'flatten'.

This has been brought up many times.  I'm -1 on its inclusion, if only
because it's a fairly simple 9-line function (at least the trivial
version I came up with), and not all X-line functions should be in the
standard library.  Also, while I have had need for such a function in
the past, I have found that I haven't needed it in a few years.


 - Josiah



Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Brett Cannon
On 9/22/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Michael> There are several different possible approaches in pure Python,
 Michael> but is this an idea that has legs ?

 Why not add it to itertools?  Then, if you need a true list, just call
 list() on the returned iterator.

Yeah, this is a better solution.  flatten() just doesn't scream "built-in!" to me.

-Brett


Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Bob Ippolito
On 9/22/06, Josiah Carlson [EMAIL PROTECTED] wrote:

 Michael Foord [EMAIL PROTECTED] wrote:
 
  Hello all,
 
  I have a suggestion for a new Python built in function: 'flatten'.

 This has been brought up many times.  I'm -1 on its inclusion, if only
 because it's a fairly simple 9-line function (at least the trivial
 version I came up with), and not all X-line functions should be in the
 standard library.  Also, while I have had need for such a function in
 the past, I have found that I haven't needed it in a few years.

I think instead of adding a flatten function perhaps we should think
about adding something like Erlang's iolist support. The idea is
that methods like writelines should be able to take nested iterators
and consume any object they find that implements the buffer protocol.

-bob
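A sketch of what such an iolist-style writelines could do: recurse into nested iterables and write anything string/bytes-like directly. `write_iolist` is an illustrative name, not a real API, and the full buffer protocol is reduced to a bytes/str check for brevity:

```python
def write_iolist(write, data):
    """Recursively consume nested iterables, writing the leaf chunks."""
    if isinstance(data, (bytes, str)):   # stand-in for the buffer protocol
        write(data)
    else:
        for item in data:
            write_iolist(write, item)

chunks = []
write_iolist(chunks.append, ["GET ", ["/index.html", [" HTTP/1.0\r\n"]], "\r\n"])
print("".join(chunks))   # GET /index.html HTTP/1.0 followed by a blank line
```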


Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Brian Harring
On Fri, Sep 22, 2006 at 12:05:19PM -0700, Bob Ippolito wrote:
 On 9/22/06, Josiah Carlson [EMAIL PROTECTED] wrote:
 
  Michael Foord [EMAIL PROTECTED] wrote:
  
   Hello all,
  
   I have a suggestion for a new Python built in function: 'flatten'.
 
  This has been brought up many times.  I'm -1 on its inclusion, if only
  because it's a fairly simple 9-line function (at least the trivial
  version I came up with), and not all X-line functions should be in the
  standard library.  Also, while I have had need for such a function in
  the past, I have found that I haven't needed it in a few years.
 
 I think instead of adding a flatten function perhaps we should think
 about adding something like Erlang's iolist support. The idea is
 that methods like writelines should be able to take nested iterators
 and consume any object they find that implements the buffer protocol.

Which is no different than just passing in a generator/iterator that 
does flattening.

Don't much see the point in gumming up the file protocol with this 
special casing; still will have requests for a flattener elsewhere.

If flattening were added, it should definitely be a general object, not a 
special case in one method, in my opinion.
~harring




Re: [Python-Dev] Relative import bug?

2006-09-22 Thread Thomas Heller
Phillip J. Eby schrieb:
 At 08:10 PM 9/22/2006 +0200, Thomas Heller wrote:
If x.py contains this:


from ..b import y
import a.b.x
from ..b import x

...
ImportError: cannot import name x
 

A bug?
 
 If it is, it has nothing to do with relative importing per se.  Note that 
 changing it to from a.b import x produces the exact same error.
 
 This looks like a standard circular import bug.

Of course.  Thanks.

Thomas



Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread glyph
On Fri, 22 Sep 2006 18:43:42 +0100, Michael Foord [EMAIL PROTECTED] wrote:

I have a suggestion for a new Python built in function: 'flatten'.

This seems superficially like a good idea, but I think adding it to Python 
anywhere would do a lot more harm than good.  I can see that consensus is 
already strongly against a builtin, but I think it would be bad to add to 
itertools too.

Flattening always *seems* to be a trivial and obvious operation.  "I just need 
something that takes a group of deeply structured data and turns it into a 
group of shallowly structured data."  Everyone that has this requirement 
assumes that their list of implicit requirements for flattening is the 
obviously correct one.

This wouldn't be a problem except that everyone has a different idea of those 
requirements :).

Here are a few issues.

What do you do when you encounter a dict?  You can treat it as its keys(), its 
values(), or its items().

What do you do when you encounter an iterable object?

What order do you flatten set()s in?  (and, ha ha, do you Set the same?)

How are user-defined flattening behaviors registered?  Is it a new special 
method, a registration API?

How do you pass information about the flattening in progress to the 
user-defined behaviors?

If you do something special to iterables, do you special-case strings?  Why or 
why not?

What do you do if you encounter a function?  This is kind of a trick question, 
since Nevow's flattener *calls* functions as it encounters them, then treats 
the *result* of calling them as further input.

If you don't think that functions are special, what about *generator* 
functions?  How do you tell the difference?  What about functions that return 
generators but aren't themselves generators?  What about functions that return 
non-generator iterators?  What about pre-generated generator objects (if you 
don't want to treat iterables as special, are generators special?).

Do you produce the output as a structured list or an iterator that works 
incrementally?

Also, at least Nevow uses flatten to mean "serialize to bytes", not "produce 
a flat list", and I imagine at least a few other web frameworks do as well.  
That starts to get into encoding issues.

If you make a decision one way or another on any of these questions of policy, 
you are going to make flatten() useless to a significant portion of its 
potential userbase.  The only difference between having it in the standard 
library and not is that if it's there, they'll spend an hour being confused by 
the weird way that it's dealing with <insert your favorite data type here> 
rather than just doing the obvious thing, and they'll take a minute to write 
the 10-line function that they need.  Without the standard library, they'll 
skip to step 2 and save a lot of time.

I would love to see a unified API that figured out all of these problems, and 
put them together into a (non-stdlib) library that anyone interested could use 
for a few years to work the kinks out.  Although it might be nice to have a 
simple flatten interface, I don't think that it would ever be simple enough 
to stick into a builtin; it would just be the default instance of the 
IncrementalDestructuringProcess class with the most popular (as determined by 
polling users of the library after a year or so) 
IncrementalDestructuringTypePolicy.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New relative import issue

2006-09-22 Thread Josiah Carlson

Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 12:08 AM 9/22/2006 -0700, Josiah Carlson wrote:
 Phillip J. Eby [EMAIL PROTECTED] wrote:
   At 08:44 PM 9/21/2006 -0700, Josiah Carlson wrote:
[snip]
 You misunderstood me: I mean that the per-user database must be able to 
 store information for *different Python versions*.  Having a single 
 per-user database without the ability to include configuration for more 
than one Python version (analogous to the current situation with the 
 distutils per-user config file) is problematic.

Just like having different systemwide databases for each Python version
makes sense, why wouldn't we have different user databases for each
Python version?  Something like ~/.python_packages.2.6 and
~/.python_packages.3.0

Also, by separating out the files per Python version, we can also
guarantee database compatibility for any fixed Python series (2.5.x, etc.).
I don't know if the internal organization of SQLite databases changes
between revisions in a backwards compatible way, so this may not
actually be a concern (it is with bsddb).


 In truth, a per-user configuration is just a special case of the real need: 
 to have per-application environments.  In effect, a per-user environment is 
 a fallback for not having an application environment, and the system 
 environment is a fallback for not having a user environment.

I think you are mostly correct.  The reason you are not completely
correct is that if I were to install psyco, and I want all applications
that could use it to use it (they guard the psyco import with a
try/except), I merely need to register the package in the systemwide (or
user) package registry.  No need to muck about with each environment I
(or my installed applications) have defined, it just works.  Is it a
fallback?  Sure, but I prefer to call them convenient defaults.


 I didn't mention the following because I thought it would be superfluous,
 but it seems that I should have stated it right out.  My thoughts were
 that on startup, Python would first query the 'system' database, caching
 its results in a dictionary, then query the user's listing, updating the
 dictionary as necessary, then unload the databases.  On demand, when
 code runs packages.register(), if both persist and systemwide are False,
 it just updates the dictionary. If either are true, it opens up and
 updates the relevant database.
 
 Using a database as the primary mechanism for managing import locations 
 simply isn't workable.

Why?  Remember that this database isn't anything other than a
persistence mechanism that has pre-built locking semantics for
multi-process opening, reading, writing, and closing.  Given proper
cross-platform locking, we could use any persistence mechanism as a
replacement; miniconf, Pickle, marshal; whatever.


 You might as well suggest that each environment 
 consist of a single large zipfile containing the packages in question: this 
 would actually be *more* practical (and fast!) in terms of Python startup, 
 and is no different from having a database with respect to the need for 
 installation and uninstallation to modify a central file!

We should remember the sizes of databases that (I expect) will be
common: we are talking about maybe 30k if a user has installed every
package in PyPI.  And after the initial query, everything will be stored
in a dictionary or dictionary-like object, offering faster query times
than even a zip file (though loading the module/package from disk won't
have its performance improved).


 I'm not proposing we do that -- I'm just pointing out why using an actual 
 database isn't really workable, considering that it has all of the 
 disadvantages of a big zipfile, and none of the advantages (like speed, 
 having code already written that supports it, etc.)

SQLite is pretty fast.  And for startup, we are really only performing a
single query per database: "SELECT * FROM package_registry".  It will end
up reading the entire database, but these databases will be generally
small, perhaps a few dozen rows, maybe a few thousand if we have set up
a bunch of installation-time application environments.
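As a concrete sketch of the startup cost being discussed — one query per database, results cached in a dict — the read could amount to something like the following. The table name and two-column schema are assumptions for illustration, not part of any actual proposal:

```python
import sqlite3

def load_registry(db_path):
    """Read an entire (hypothetical) package registry into a dict.

    One query against the database; the connection is closed
    immediately afterwards, and all later lookups hit the
    in-memory dict, as described in the message above.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT name, path FROM package_registry")
        return dict(rows)  # {package name: install path}
    finally:
        conn.close()
```

For a registry of a few dozen to a few thousand rows, this read is cheap; the real cost comparison (as noted later in the thread) is against importing the sqlite3 module itself.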


 This is easily remedied with a proper 'packages' implementation:
 
  python -Mpackages name path
 
 Note that Python could auto-insert standard library and site-packages
 'packages' on startup (creating the initial dictionary, then the
 systemwide, then the user, ...).
 
 I presume here you're suggesting a way to select a runtime environment from 
 the command line, which would certainly be a good idea.

Actually, I'm offering a way of *registering* a package with the
repository from the command line.  I'm of the opinion that setting the
environment via command line for the subsequent Python runs is a bad
idea, but then again, I have been using wxPython's wxversion method for
a while to select which wxPython installation I want to use, and find
things like:

import wxversion
wxversion.ensureMinimal('2.6-unicode', 

Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Bob Ippolito
On 9/22/06, Brian Harring [EMAIL PROTECTED] wrote:
 On Fri, Sep 22, 2006 at 12:05:19PM -0700, Bob Ippolito wrote:
  On 9/22/06, Josiah Carlson [EMAIL PROTECTED] wrote:
  
   Michael Foord [EMAIL PROTECTED] wrote:
   
Hello all,
   
I have a suggestion for a new Python built in function: 'flatten'.
  
   This has been brought up many times.  I'm -1 on its inclusion, if only
   because it's a fairly simple 9-line function (at least the trivial
   version I came up with), and not all X-line functions should be in the
   standard library.  Also, while I have had need for such a function in
   the past, I have found that I haven't needed it in a few years.
 
  I think instead of adding a flatten function perhaps we should think
  about adding something like Erlang's iolist support. The idea is
  that methods like writelines should be able to take nested iterators
  and consume any object they find that implements the buffer protocol.

 Which is no different than just passing in a generator/iterator that
 does flattening.

 Don't much see the point in gumming up the file protocol with this
 special casing; still will have requests for a flattener elsewhere.

 If flattening was added, should definitely be a general obj, not a
 special casing in one method in my opinion.

I disagree, the reason for iolist is performance and convenience; the
required indirection of having to explicitly call a flattener function
removes some optimization potential and makes it less convenient to
use.

While there certainly should be a general mechanism available to
perform the task (easily accessible from C), the user would be better
served by not having to explicitly call itertools.iterbuffers every
time they want to write recursive iterables of stuff.
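A rough Python rendering of the behaviour Bob describes — a writelines-style call that consumes nested iterables and writes any byte string it finds, with no intermediate flat list — might look like this (the name and signature are illustrative, not a proposed API):

```python
def write_iolist(fileobj, data):
    """Recursively write nested iterables of byte strings to fileobj.

    Sketch of an Erlang-iolist-like writer: byte strings are written
    directly; anything else is assumed to be an iterable containing
    more of the same.  No intermediate flat list is ever built.
    """
    if isinstance(data, (bytes, bytearray)):
        fileobj.write(data)
    else:
        for chunk in data:
            write_iolist(fileobj, chunk)
```

The flattening happens inside the write path itself, which is where the claimed convenience and optimization potential come from.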

-bob
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Michael Foord
[EMAIL PROTECTED] wrote:
 On Fri, 22 Sep 2006 18:43:42 +0100, Michael Foord [EMAIL PROTECTED] wrote:

   
 I have a suggestion for a new Python built in function: 'flatten'.
 

 This seems superficially like a good idea, but I think adding it to Python 
 anywhere would do a lot more harm than good.  I can see that consensus is 
 already strongly against a builtin, but I think it would be bad to add to 
 itertools too.

 Flattening always *seems* to be a trivial and obvious operation: "I just 
 need something that takes a group of deeply structured data and turns it into 
 a group of shallowly structured data."  Everyone that has this requirement 
 assumes that their list of implicit requirements for flattening is the 
 obviously correct one.

 This wouldn't be a problem except that everyone has a different idea of those 
 requirements :).

 Here are a few issues.

 What do you do when you encounter a dict?  You can treat it as its keys(), 
 its values(), or its items().

 What do you do when you encounter an iterable object?

 What order do you flatten set()s in?  (and, ha ha, do you Set the same?)

 How are user-defined flattening behaviors registered?  Is it a new special 
 method, a registration API?

 How do you pass information about the flattening in progress to the 
 user-defined behaviors?

 If you do something special to iterables, do you special-case strings?  Why 
 or why not?

   
If you consume iterables and only special-case strings, then none of 
the issues you raise above seem to be a problem.

Sets and dictionaries are both iterable.

If it's not iterable it's an element.

I'd prefer to see this as a built-in, since lots of people seem to want it.
IMHO, having it in itertools is a good compromise.

 What do you do if you encounter a function?  This is kind of a trick 
 question, since Nevow's flattener *calls* functions as it encounters them, 
 then treats the *result* of calling them as further input.
   
That doesn't sound like what anyone would normally expect.


 If you don't think that functions are special, what about *generator* 
 functions?  How do you tell the difference?  What about functions that return 
 generators but aren't themselves generators?  What about functions that 
 return non-generator iterators?  What about pre-generated generator objects 
 (if you don't want to treat iterables as special, are generators special?).

   
What does the list constructor do with these? Do the same.

 Do you produce the output as a structured list or an iterator that works 
 incrementally?
   
Either would be fine. I had in mind a list, but converting an iterator 
into a list is trivial.

 Also, at least Nevow uses "flatten" to mean "serialize to bytes", not 
 "produce a flat list", and I imagine at least a few other web frameworks do 
 as well.  That starts to get into encoding issues.

   
Not a use of the term I've come across. On the other hand I've heard of 
flatten in the context of nested data-structures many times.

 If you make a decision one way or another on any of these questions of 
 policy, you are going to make flatten() useless to a significant portion of 
 its potential userbase.  The only difference between having it in the 
 standard library and not is that if it's there, they'll spend an hour being 
 confused by the weird way that it's dealing with <insert your favorite data 
 type here> rather than just doing the obvious thing, and they'll take a 
 minute to write the 10-line function that they need.  Without the standard 
 library, they'll skip to step 2 and save a lot of time.
   
I think that you're over-complicating it and that the term "flatten" is 
really fairly straightforward.  Especially if it's clearly documented in 
terms of consuming iterables.

All the best,


Michael Foord
http://www.voidspace.org.uk


 I would love to see a unified API that figured out all of these problems, and 
 put them together into a (non-stdlib) library that anyone interested could 
 use for a few years to work the kinks out.  Although it might be nice to have 
 a simple flatten interface, I don't think that it would ever be simple 
 enough to stick into a builtin; it would just be the default instance of the 
 IncrementalDestructuringProcess class with the most popular (as determined by 
 polling users of the library after a year or so) 
 IncrementalDestructuringTypePolicy.
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


   




___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 

Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Josiah Carlson

Bob Ippolito [EMAIL PROTECTED] wrote:
 On 9/22/06, Brian Harring [EMAIL PROTECTED] wrote:
  On Fri, Sep 22, 2006 at 12:05:19PM -0700, Bob Ippolito wrote:
   I think instead of adding a flatten function perhaps we should think
   about adding something like Erlang's iolist support. The idea is
   that methods like writelines should be able to take nested iterators
   and consume any object they find that implements the buffer protocol.
 
  Which is no different than just passing in a generator/iterator that
  does flattening.
 
  Don't much see the point in gumming up the file protocol with this
  special casing; still will have requests for a flattener elsewhere.
 
  If flattening was added, should definitely be a general obj, not a
  special casing in one method in my opinion.
 
 I disagree, the reason for iolist is performance and convenience; the
 required indirection of having to explicitly call a flattener function
 removes some optimization potential and makes it less convenient to
 use.

Sorry Bob, but I disagree.  In the few times where I've needed to 'write
a list of buffers to a file handle', I find that iterating over the
buffers to be sufficient.  And honestly, in all of my time dealing
with socket and file IO, I've never needed to write a list of iterators
of buffers.  Not to say that YAGNI, but I'd like to see an example where
1) it was being used in the wild, and 2) where it would be a measurable
speedup.

 - Josiah

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Pep 353: Py_ssize_t advice

2006-09-22 Thread Martin v. Löwis
David Abrahams schrieb:
   #if PY_VERSION_HEX < 0x02050000
   typedef int Py_ssize_t;
   #define PY_SSIZE_T_MAX INT_MAX
   #define PY_SSIZE_T_MIN INT_MIN
   #endif
 
 I just wanted to point out that this advice could lead to library
 header collisions when multiple 3rd parties decide to follow it.  I
 suggest it be changed to something like:
 
   #if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)

Strictly speaking, this shouldn't be necessary. C allows redefinition
of an object-like macro if the replacement list is identical (for
some definition of "identical" which applies if the fragment is
copied literally from the PEP).

So I assume you had a non-identical replacement list? Can you share
what alternative definition you were using?

In any case, I still think this is good practice, so I added it
to the PEP.

 (C++ allows restating of typedefs; if C allows it, that should be
 something like):

C also allows this; yet, our advice would be that these three
names always get defined together - if that is followed, having
a single guard macro should suffice. PY_SSIZE_T_MIN, as you propose,
should be sufficient.

Regards,
Martin

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Raymond Hettinger
[Michael Foord]
I have a suggestion for a new Python built in function: 'flatten'.
 ...
 There are several different possible approaches in pure Python, 
 but is this an idea that has legs ?

No legs.

It has been discussed ad nauseam on comp.lang.python.  People seem to
enjoy writing their own versions of flatten more than finding legitimate
use cases that don't already have trivial solutions.

A general-purpose flattener needs some way to be told what is atomic and
what can be further subdivided.  Also, it is not obvious how the algorithm
should be extended to cover inputs with tree-like data structures with
data at nodes as well as the leaves (preorder, postorder, inorder
traversal, etc.)
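To illustrate that ambiguity: for a tree carrying data at interior nodes as well as leaves, "flatten" could reasonably mean any of several traversals. A minimal sketch (the tuple-based tree representation is just for illustration):

```python
def preorder(tree):
    """Yield each node's value before its children's values."""
    value, children = tree
    yield value
    for child in children:
        for v in preorder(child):
            yield v

def postorder(tree):
    """Yield each node's value after its children's values."""
    value, children = tree
    for child in children:
        for v in postorder(child):
            yield v
    yield value
```

Both produce a "flat" sequence from the same tree, in different orders — and a builtin flatten() would have to pick one.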

I say use your favorite cookbook approach and leave it out of the
language.


Raymond
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] GCC patch for catching errors in PyArg_ParseTuple

2006-09-22 Thread Martin v. Löwis
I wrote a patch for the GCC trunk to add an
__attribute__((format(PyArg_ParseTuple, 2, 3)))
declaration to functions (this specific declaration
should go to PyArg_ParseTuple only).

With that patch, parameter types are compared with the string parameter
(if that's a literal), and errors are reported if there is a type
mismatch (provided -Wformat is given).

I'll post more about this patch in the near future, and commit
some bug fixes I found with it, but here is the patch, in
a publish-early fashion.

There is little chance that this can go into GCC (as it is too
specific), so it likely needs to be maintained separately.
It was written for the current trunk, but hopefully applies
to most recent releases.

Regards,
Martin

Index: c-format.c
===
--- c-format.c  (revision 113267)
+++ c-format.c  (working copy)
@@ -62,7 +62,11 @@
   gcc_cdiag_format_type,
   gcc_cxxdiag_format_type, gcc_gfc_format_type,
   scanf_format_type, strftime_format_type,
-  strfmon_format_type, format_type_error = -1};
+  strfmon_format_type, 
+#define python_first PyArg_ParseTuple_type
+#define python_last _PyArg_ParseTuple_SizeT_type
+  PyArg_ParseTuple_type, _PyArg_ParseTuple_SizeT_type,
+  format_type_error = -1};
 
 typedef struct function_format_info
 {
@@ -759,6 +763,12 @@
 strfmon_flag_specs, strfmon_flag_pairs,
 FMT_FLAG_ARG_CONVERT, 'w', '#', 'p', 0, 'L',
 NULL, NULL
+  },
+  { "PyArg_ParseTuple", NULL, NULL, NULL, NULL, NULL, NULL, 
+0, 0, 0, 0, 0, 0, NULL, NULL
+  },
+  { "_PyArg_ParseTuple_SizeT", NULL, NULL, NULL, NULL, NULL, NULL, 
+0, 0, 0, 0, 0, 0, NULL, NULL
   }
 };
 
@@ -813,6 +823,10 @@
const char *, int, tree,
unsigned HOST_WIDE_INT);
 
+static void check_format_info_python (format_check_results *,
+ function_format_info *,
+ const char *, int, tree, int);
+
 static void init_dollar_format_checking (int, tree);
 static int maybe_read_dollar_number (const char **, int,
 tree, tree *, const format_kind_info *);
@@ -1414,8 +1428,12 @@
  will decrement it if it finds there are extra arguments, but this way
  need not adjust it for every return.  */
   res->number_other++;
-  check_format_info_main (res, info, format_chars, format_length,
- params, arg_num);
+  if (info->format_type >= python_first && info->format_type <= python_last)
+check_format_info_python (res, info, format_chars, format_length,
+ params, arg_num);
+  else
+check_format_info_main (res, info, format_chars, format_length,
+   params, arg_num);
 }
 
 
@@ -2102,7 +2120,309 @@
 }
 }
 
+static tree
+lookup_type (const char* ident)
+{
+  tree result = maybe_get_identifier (ident);
+  if (!result)
+{
+  error ("%s is not defined as a type", ident);
+  return NULL;
+}
+  result = identifier_global_value (result);
+  if (!result || TREE_CODE (result) != TYPE_DECL)
+{
+  error ("%s is not defined as a type", ident);
+  return NULL;
+}
+  result = DECL_ORIGINAL_TYPE (result);
+  gcc_assert (result);
+  return result;
+}
 
+static int
+is_object (tree type, int indirections)
+{
+  static tree PyObject = NULL;
+  static tree ob_refcnt = NULL;
+  static tree ob_next = NULL;
+  tree name;
+
+  if (!PyObject)
+{
+  ob_refcnt = get_identifier ("_ob_refcnt");
+  ob_next = get_identifier ("_ob_next");
+  PyObject = lookup_type ("PyObject");
+  if (!PyObject) return 0;
+}
+
+  while (indirections--)
+{
+  if (TREE_CODE (type) != POINTER_TYPE)
+   return 0;
+  type = TREE_TYPE (type);
+}
+
+  /* type should be PyObject */
+  if (lang_hooks.types_compatible_p (type, PyObject))
+return 1;
+  /* might be a derived PyObject */
+  if (TREE_CODE (type) != RECORD_TYPE)
+return 0;
+  name = DECL_NAME (TYPE_FIELDS (type));
+  return name == ob_refcnt || name == ob_next;
+}
+
+static void
+check_format_info_python (format_check_results *ARG_UNUSED(res),
+ function_format_info *info, 
+ const char *format_chars,
+ int format_length, tree params,
+ int arg_num)
+{
+  static tree PyTypeObject_ptr = NULL;
+  static tree Py_ssize_t = NULL;
+  static tree Py_UNICODE = NULL;
+  static tree Py_complex = NULL;
+  int parens = 0;
+  tree type = NULL;
+  tree cur_param, cur_type;
+  int is_writing = 1;
+  int advance_fmt;
+  int first = 1;
+  /* If the wanted type is a pointer type, we need
+ to strip off all indirections, or else
+ char const* will not compare as compatible with
+ char*. */
+  int indirections;
+
+  if (!PyTypeObject_ptr)
+{
+  

Re: [Python-Dev] New relative import issue

2006-09-22 Thread Phillip J. Eby
At 12:42 PM 9/22/2006 -0700, Josiah Carlson wrote:
  You might as well suggest that each environment
  consist of a single large zipfile containing the packages in question: 
 this
  would actually be *more* practical (and fast!) in terms of Python startup,
  and is no different from having a database with respect to the need for
  installation and uninstallation to modify a central file!

We should remember the sizes of databases that (I expect) will be
common: we are talking about maybe 30k if a user has installed every
package in PyPI.  And after the initial query, everything will be stored
in a dictionary or dictionary-like object, offering faster query times
than even a zip file

Measure it.  Be sure to include the time to import SQLite vs. the time to 
import the zipimport module.


SQLite is pretty fast.  And for startup, we are really only performing a
single query per database: "SELECT * FROM package_registry".  It will end
up reading the entire database, but these databases will be generally
small, perhaps a few dozen rows, maybe a few thousand if we have set up
a bunch of installation-time application environments.

Again, seriously, compare this against a zipfile.  You'll find that there's 
absolutely no comparison between reading this and reading a zipfile central 
directory -- which also results in an in-memory cache that can then be used 
to seek() directly to the module.
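The zipfile behaviour Phillip refers to can be seen with the stdlib zipfile module: one read of the central directory yields an in-memory index, and subsequent reads seek directly to the named member. A small self-contained sketch (the "environment" zipfile here is a stand-in built in memory):

```python
import io
import zipfile

def build_demo_environment():
    """Create a tiny in-memory 'environment' zipfile (illustrative stand-in)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("pkg/__init__.py", "VERSION = '1.0'\n")
    return buf.getvalue()

def zip_index(zip_bytes):
    """One pass over the central directory -> in-memory name-to-info index.

    Later reads use the cached offsets to seek straight to a member,
    which is the property being compared against a database query.
    """
    zf = zipfile.ZipFile(io.BytesIO(zip_bytes))
    return zf, dict((info.filename, info) for info in zf.infolist())
```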


Actually, I'm offering a way of *registering* a package with the
repository from the command line.  I'm of the opinion that setting the
environment via command line for the subsequent Python runs is a bad
idea, but then again, I have been using wxPython's wxversion method for
a while to select which wxPython installation I want to use, and find
things like:

 import wxversion
 wxversion.ensureMinimal('2.6-unicode', optionsRequired=True)

To be exactly the amount of control I want, where I want it.

Well, that's already easy to do for arbitrary packages and arbitrary 
versions with setuptools.  Eggs installed in multi-version mode are added 
to sys.path at runtime if/when they are requested.


With a package registry (perhaps as I have been describing, perhaps
something different), all of the disparate ways of choosing a version of
a library during import can be removed in favor of a single mechanism.
This single mechanism could handle things like the wxPython
'ensureMinimal', perhaps even 'ensure exact' or 'use latest'.

This discussion is mostly making me realize that sys.path is exactly the 
right thing to have, and that the only thing that actually need fixing is 
universal .pth support, and maybe some utility functions for better 
sys.path manipulation within .pth files.  I suggest that there is no way an 
arbitrary registry implementation is going to be faster than reading 
lines from a text file.
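For comparison, the text-file mechanism being defended really is about this simple. Below is a stripped-down sketch of the kind of .pth processing site.py performs — the real thing also executes lines beginning with "import " and de-duplicates against sys.path, which this toy deliberately omits:

```python
import os

def read_pth(directory):
    """Collect the path entries contributed by .pth files in a directory.

    Each non-blank, non-comment line of a .pth file names a
    subdirectory to add if it exists.  Lines starting with
    'import ' (executed by real site.py) are skipped here.
    """
    entries = []
    for name in sorted(os.listdir(directory)):
        if not name.endswith(".pth"):
            continue
        with open(os.path.join(directory, name)) as f:
            for line in f:
                line = line.rstrip()
                if not line or line.startswith("#") or line.startswith("import "):
                    continue
                candidate = os.path.join(directory, line)
                if os.path.isdir(candidate):
                    entries.append(candidate)
    return entries
```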


  Setuptools works around this by installing an enhancement for the 'site'
  module that extends .pth support to include all PYTHONPATH
  directories.  The enhancement delegates to the original site module after
  recording data about sys.path that the site module destroys at startup.

But wasn't there a recent discussion describing how keeping persistent
environment variables is a PITA both during install and runtime?

Yes, exactly.


Extending .pth files to PYTHONPATH seems to me like a hack meant to work
around the fact that Python doesn't have a package registry.  And really,
all of the current sys.path + .pth + PYTHONPATH stuff could be subsumed
into a *single* mechanism.

Sure -- I suggest that the single mechanism is none other than 
*sys.path*, with .pth files, PYTHONPATH, and a new command-line option 
merely being ways to set it.

All of the discussion that's taken place here has sufficed at this point to 
convince me that sys.path isn't broken at all, and doesn't need 
fixing.  Some tweaks to 'site' and maybe a new command-line option will 
suffice to clean everything up quite nicely.

I say this because all of the version and dependency management things that 
people are asking about can already be achieved by setuptools, so clearly 
the underlying machinery is fine.  It wasn't until this message of yours 
that I realized that you are trying to solve a bunch of problems that are 
quite solvable within the existing machinery.  I was mainly interested in 
cleaning up the final awkwardness that's effectively caused by lack of .pth 
support for the startup script directory.


  I'm not sure of that, since I don't yet know how your approach would deal
  with namespace packages, which are distributed in pieces and assembled
  later.  For example, many PEAK and Zope distributions live in the peak.*
  and zope.* package namespaces, but are installed separately, and glued
  together via __path__ changes (see the pkgutil docs).

 packages.register('zope', '/path/to/zope')

And if the installation path is different:

 packages.register('zope.subpackage', '/different/path/to/subpackage/')

Otherwise the 

Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread Bob Ippolito
On 9/22/06, Josiah Carlson [EMAIL PROTECTED] wrote:

 Bob Ippolito [EMAIL PROTECTED] wrote:
  On 9/22/06, Brian Harring [EMAIL PROTECTED] wrote:
   On Fri, Sep 22, 2006 at 12:05:19PM -0700, Bob Ippolito wrote:
I think instead of adding a flatten function perhaps we should think
about adding something like Erlang's iolist support. The idea is
that methods like writelines should be able to take nested iterators
and consume any object they find that implements the buffer protocol.
  
   Which is no different than just passing in a generator/iterator that
   does flattening.
  
   Don't much see the point in gumming up the file protocol with this
   special casing; still will have requests for a flattener elsewhere.
  
   If flattening was added, should definitely be a general obj, not a
   special casing in one method in my opinion.
 
  I disagree, the reason for iolist is performance and convenience; the
  required indirection of having to explicitly call a flattener function
  removes some optimization potential and makes it less convenient to
  use.

 Sorry Bob, but I disagree.  In the few times where I've needed to 'write
 a list of buffers to a file handle', I find that iterating over the
 buffers to be sufficient.  And honestly, in all of my time dealing
 with socket and file IO, I've never needed to write a list of iterators
 of buffers.  Not to say that YAGNI, but I'd like to see an example where
 1) it was being used in the wild, and 2) where it would be a measurable
 speedup.

The primary use for this is structured data, mostly file formats,
where you can't write the beginning until you have a bunch of
information about the entire structure such as the number of items or
the count of bytes when serialized. An efficient way to do that is
just to build a bunch of nested lists that you can use to calculate
the size (iolist_size(...) in Erlang) instead of having to write a
visitor that constructs a new flat list or writes to StringIO first. I
suppose in the most common case, for performance reasons, you would
want to restrict this to sequences only (as in PySequence_Fast)
because iolist_size(...) should be non-destructive (or else it has to
flatten into a new list anyway).
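A sketch of the size computation Bob mentions — counting the serialized length of a nested structure without flattening it, restricted to concrete sequences so it stays non-destructive (iolist_size is the Erlang name; the Python rendering here is illustrative):

```python
def iolist_size(data):
    """Total byte count of a nested sequence of byte strings.

    Accepts only bytes and lists/tuples of them: iterating a
    generator here would consume it, so the restriction keeps
    the size pass non-destructive, as discussed above.
    """
    if isinstance(data, (bytes, bytearray)):
        return len(data)
    if isinstance(data, (list, tuple)):
        return sum(iolist_size(item) for item in data)
    raise TypeError("expected bytes or a list/tuple of them")
```

This is the piece that lets a file-format writer emit a length header before walking the same structure a second time to write the payload.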

I've definitely done this before in Python, most recently here:
http://svn.red-bean.com/bob/flashticle/trunk/flashticle/

The flatten function in this case is flashticle.util.iter_only, and
it's used in flashticle.actions, flashticle.amf, flashticle.flv,
flashticle.swf, and flashticle.remoting.

-bob
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Pep 353: Py_ssize_t advice

2006-09-22 Thread David Abrahams
Martin v. Löwis [EMAIL PROTECTED] writes:

 David Abrahams schrieb:
   #if PY_VERSION_HEX < 0x02050000
   typedef int Py_ssize_t;
   #define PY_SSIZE_T_MAX INT_MAX
   #define PY_SSIZE_T_MIN INT_MIN
   #endif
 
 I just wanted to point out that this advice could lead to library
 header collisions when multiple 3rd parties decide to follow it.  I
 suggest it be changed to something like:
 
   #if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)

 Strictly speaking, this shouldn't be necessary. C allows redefinition
 of an object-like macro if the replacement list is identical (for
 some definition of "identical" which applies if the fragment is
 copied literally from the PEP).

 So I assume you had a non-identical replacement list? 

No:

a. I didn't actually experience a collision; I only anticipated it

b. We were using C++, which IIRC does not allow such redefinition

c. anyway you'll get a nasty warning, which for some people will be
   just as bad as an error

 Can you share what alternative definition you were using?

 In any case, I still think this is good practice, so I added it
 to the PEP.

Thanks,

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] GCC patch for catching errors in PyArg_ParseTuple

2006-09-22 Thread Giovanni Bajo
Martin v. Löwis wrote:

 I'll post more about this patch in the near future, and commit
 some bug fixes I found with it, but here is the patch, in
 a publish-early fashion.

 There is little chance that this can go into GCC (as it is too
 specific), so it likely needs to be maintained separately.
 It was written for the current trunk, but hopefully applies
 to most recent releases.

A way not to maintain this patch forever would be to devise a way to make
format syntax pluggable / scriptable. There have been previous discussions
on the GCC mailing lists.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Typo.pl scan of Python 2.5 source code

2006-09-22 Thread Johnny Lee





Hello,

My name is Johnny Lee. I have developed a *ahem* perl script which scans
C/C++ source files for typos. I ran the typo.pl script on the released
Python 2.5 source code. The scan took about two minutes and produced ~340
typos. After spending about 13 minutes weeding out the obvious false
positives, 149 typos remain.

One of the pros/cons of the script is that it doesn't need to be integrated
into the build process to work. It just searches for files with typical
C/C++ source code file extensions and scans them. The downside is that if
the source file is not included in the build process, then the script is
scanning an irrelevant file. Unless you aid the script via some parameters,
it will scan all the code, even stuff inside #ifdef's that wouldn't
normally be compiled.

You can access the list of typos from http://www.geocities.com/typopl/typoscan.htm
The Perl 1999 paper can be read at http://www.geocities.com/typopl/index.htm

I've mapped the Python memory-related calls PyMem_Alloc, PyMem_Realloc,
etc. to the same behaviour as the C std library malloc, realloc, etc.,
since Include\pymem.h seems to map them to those calls. If that assumption
is not valid, then you can ignore typos that involve those PyMem_XXX calls.

The Python 2.5 typos can be classified into 7 types.

1) if (X = 0)
Assignment within an if statement. Typically a false positive, but
sometimes it catches something valid. In Python's case, the one typo is:
if (status = ERROR_MORE_DATA)
but the previous code statement returns an error code into the status
variable.

2) realloc overwrite src if NULL, i.e. p = realloc(p, new_size);
If realloc() fails, it will return NULL. If you assign the return value to
the same variable you passed into realloc, then you've overwritten the
variable and possibly leaked the memory that the variable pointed to.

3) if (CreateFileMapping == IHV)
On Win32, the CreateFileMapping() API will return NULL on failure, not
INVALID_HANDLE_VALUE. The Python code does not check for NULL though.

4) if ((X!=0) || (X!=1))
The problem with code of this type is that it doesn't work. In the Python
case, we have in a large if statement:
quotetabs && ((data[in]!='\t')||(data[in]!=' '))
Now if data[in] == '\t', then it will fail the first data[in] comparison,
but it will pass the second data[in] comparison. Typically you want "&&",
not "||".

5) using API result w/no check
There are several APIs that should be checked for success before using the
returned ptrs/cookies, i.e. malloc, realloc, and fopen among others.

6) XX;;
Just being anal here. Two semicolons in a row. Second one is extraneous.

7) extraneous test for non-NULL ptr
Several memory calls that free memory accept NULL ptrs. So testing for NULL
before calling them is redundant and wastes code space. Now some codepaths
may be time-critical, but probably not all, and smaller code usually helps.

If you have any questions, comments, feel free to email. I hope this scan
is useful.

Thanks for your time,
J


Re: [Python-Dev] New relative import issue

2006-09-22 Thread Josiah Carlson

Phillip J. Eby [EMAIL PROTECTED] wrote:
 At 12:42 PM 9/22/2006 -0700, Josiah Carlson wrote:
[snip]
 Measure it.  Be sure to include the time to import SQLite vs. the time to 
 import the zipimport module.
[snip]
 Again, seriously, compare this against a zipfile.  You'll find that there's 
 absolutely no comparison between reading this and reading a zipfile central 
 directory -- which also results in an in-memory cache that can then be used 
 to seek() directly to the module.

They are not directly comparable.  The registry of packages can do more
than zipimport in terms of package naming and hierarchy, but it's not an
importer; it's a conceptual replacement of sys.path.  I have already
stated that the actual imports from this registry won't be any faster,
as it will still need to read modules/packages from disk *after* it has
decided on a list of paths to check for the package/module.  Further,
whether we use SQLite, or any one of a number of other persistence
mechanisms, such a choice should depend on a few things (speed being one
of them, though maybe not the *only* consideration).  Perhaps even a zip
file whose 'files' are named with the desired package hierarchy, and
whose contents are something like:

import imp
globals().update(imp.load_XXX(...).__dict__)
del imp


 Actually, I'm offering a way of *registering* a package with the
 repository from the command line.  I'm of the opinion that setting the
 environment via command line for the subsequent Python runs is a bad
 idea, but then again, I have been using wxPython's wxversion method for
 a while to select which wxPython installation I want to use, and find
 things like:
 
  import wxversion
  wxversion.ensureMinimal('2.6-unicode', optionsRequired=True)
 
 To be exactly the amount of control I want, where I want it.
 
 Well, that's already easy to do for arbitrary packages and arbitrary 
 versions with setuptools.  Eggs installed in multi-version mode are added 
 to sys.path at runtime if/when they are requested.

Why do we have to use eggs or setuptools to get a feature that
*arguably* should have existed a decade ago in core Python?

The core functionality I'm talking about is:

packages.register(name, path, env=None, system=False, persist=False)
#system==True implies persist==True

packages.copy_env(fr_env, to_env)
packages.use_env(env)

packages.check(name, version=None)

packages.use(name, version)

With those 5 functions and a few tricks, we can replace all user-level .pth
and PYTHONPATH use, and sys.path manipulation done in other 3rd party
packages (setuptools, etc.) are easily handled and supported.


 With a package registry (perhaps as I have been describing, perhaps
 something different), all of the disparate ways of choosing a version of
 a library during import can be removed in favor of a single mechanism.
 This single mechanism could handle things like the wxPython
 'ensureMinimal', perhaps even 'ensure exact' or 'use latest'.
 
 This discussion is mostly making me realize that sys.path is exactly the 
 right thing to have, and that the only thing that actually need fixing is 
 universal .pth support, and maybe some utility functions for better 
 sys.path manipulation within .pth files.  I suggest that there is no way an 
 arbitrary registry implementation is going to be faster than reading 
 lines from a text file.
 
   Setuptools works around this by installing an enhancement for the 'site'
   module that extends .pth support to include all PYTHONPATH
   directories.  The enhancement delegates to the original site module after
   recording data about sys.path that the site module destroys at startup.
 
 But wasn't there a recent discussion describing how keeping persistant
 environment variables is a PITA both during install and runtime?
 
 Yes, exactly.

You have confused me, because not only have you just said we use
PYTHONPATH as a solution, but you have just acknowledged that using
PYTHONPATH is not reasonable as a solution.  You have also just said
that we need to add features to .pth support so that it is more usable.

So, sys.path is exactly the right thing to have, but we need to add
more features to make it better.

Ok, here's a sample .pth file if we are willing to make it better (in my
opinion):

zope,/path/to/zope,3.2.1,netserver
zope.subpackage,/path/to/subpackage,.1.1,netserver

That's a CSV file with rows defining packages, and columns in order:
package name, path to package, version, and a semicolon-separated list
of environments that this package is available in (a leading semicolon,
or a double semicolon says that it is available when no environment is
specified).

With a base sys.path, a dictionary of environment -> packages created
from .pth files, and a simple function, one can generally develop an
applicable sys.path on demand to some choose_environment() call.
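A minimal sketch of how such a CSV-style .pth file could be turned into an applicable sys.path on demand. The file format and column order are taken from the proposal above; the function names (`load_registry`, `choose_environment`) and the interpretation of an empty environment field are hypothetical, not any existing API:

```python
import csv
import io

def load_registry(pth_text):
    """Parse rows of: package name, path, version, ;-separated environments."""
    registry = []
    for name, path, version, envs in csv.reader(io.StringIO(pth_text)):
        # A leading or doubled semicolon yields an empty entry, meaning
        # "available when no environment is specified".
        registry.append((name, path, version, envs.split(';')))
    return registry

def choose_environment(registry, base_path, env=None):
    """Build a sys.path for one environment from the base path plus registry."""
    extra = [path for name, path, version, envs in registry
             if env in envs or '' in envs]
    return list(base_path) + extra

PTH = """\
zope,/path/to/zope,3.2.1,netserver
zope.subpackage,/path/to/subpackage,.1.1,netserver
"""

reg = load_registry(PTH)
print(choose_environment(reg, ['/usr/lib/python'], env='netserver'))
```

The point of the sketch is only that the lookup stays a cheap in-memory operation: the .pth file is read once, and each `choose_environment()` call is a list comprehension.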

This is, effectively, a variant of what I was suggesting, only with
a different persistence mechanism.

Re: [Python-Dev] Suggestion for a new built-in - flatten

2006-09-22 Thread glyph
On Fri, 22 Sep 2006 20:55:18 +0100, Michael Foord [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:
On Fri, 22 Sep 2006 18:43:42 +0100, Michael Foord 
[EMAIL PROTECTED] wrote:

This wouldn't be a problem except that everyone has a different idea of 
those requirements :).

You didn't really address this, and it was my main point.  In fact, you more or 
less made my point for me.  You just assume that the type of application you 
have in mind right now is the only one that wants to use a flatten function, 
and dismiss out of hand any uses that I might have in mind.

If you consume iterables, and only special case strings - then none of the 
issues you raise above seem to be a problem.

You have just made two major policy decisions about the flattener without 
presenting a specific use case or set of use cases it is meant to be restricted 
to.

For example, you suggest special casing strings.  Why?  Your guideline 
otherwise is to follow what the iter() or list() functions do.  What about 
user-defined classes which subclass str and implement __iter__?

Sets and dictionaries are both iterable.

If it's not iterable it's an element.

I'd prefer to see this as a built-in, lots of people seem to want it. IMHO

Can you give specific examples?  The only significant use of a flattener I'm 
intimately familiar with (Nevow) works absolutely nothing like what you 
described.

Having it in itertools is a good compromise.

No need to compromise with me.  I am not in a position to reject your change.  
No particular reason for me to make any concessions either: I'm simply trying 
to communicate the fact that I think this is a terrible idea, not come to an 
agreement with you about how progress might be made.  Absolutely no changes on 
this front are A-OK by me :).

You have made a case for the fact that, perhaps, you should have a utility 
library which you use in all your projects could use for consistency and to 
avoid repeating yourself, since you have a clearly defined need for what a 
flattener should do.  I haven't read anything that indicates there's a good 
reason for this function to be in the standard library.  What are the use cases?

It's definitely better for the core language to define lots of basic types so 
that you can say something in a library like returns a dict mapping strings to 
ints without having a huge argument about what dict and string and int 
mean.  What's the benefit to having everyone flatten things the same way, 
though?  Flattening really isn't that common of an operation, and in the cases 
where it's needed, a unified approach would only help if you had two 
flattenable data-structures from different libraries which needed to be 
combined.  I can't say I've ever seen a case where that would happen, let alone 
for it to be common enough that there should be something in the core language 
to support it.

What do you do if you encounter a function?  This is kind of a trick 
question, since Nevow's flattener *calls* functions as it encounters 
them, then treats the *result* of calling them as further input.

Sounds like not what anyone would normally expect.

Of course not.  My point is that there is nothing that anyone would normally 
expect from a flattener except a few basic common features.  Bob's use-case is 
completely different from yours, for example: he's talking about flattening to 
support high-performance I/O.

What does the list constructor do with these ? Do the same.

>>> list('hello')
['h', 'e', 'l', 'l', 'o']

What more can I say?
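For concreteness, Michael's stated rules (consume anything iterable, special-case strings, treat everything else as an element) can be sketched in a few lines. This is one possible reading of the proposal, not an agreed-upon semantics:

```python
def flatten(iterable):
    """Recursively yield leaf elements, treating strings as atoms."""
    for item in iterable:
        if isinstance(item, str):
            yield item  # special case: don't descend into 'h', 'e', 'l', ...
            continue
        try:
            it = iter(item)
        except TypeError:
            yield item  # not iterable: it's an element
        else:
            for leaf in flatten(it):
                yield leaf

print(list(flatten([1, [2, 'hello', (3, {4})], 5])))
# [1, 2, 'hello', 3, 4, 5]
```

Note that even this tiny sketch has to pick an answer to glyph's question about str subclasses that implement __iter__: the isinstance() test wins, so they stay atomic, which may or may not be what a given application wants.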

Do you produce the output as a structured list or an iterator that works 
incrementally?

Either would be fine. I had in mind a list, but converting an iterator into 
a list is trivial.

There are applications where this makes a big difference.  Bob, for example, 
suggested that this should only work on structures that support the 
PySequence_Fast operations.

Also, at least Nevow uses flatten to mean serialize to bytes, not 
produce a flat list, and I imagine at least a few other web frameworks do 
as well.  That starts to get into encoding issues.

Not a use of the term I've come across. On the other hand I've heard of 
flatten in the context of nested data-structures many times.

Nevertheless the only respondent even mildly in favor of your proposal so far 
also mentions flattening sequences of bytes, although not quite as directly.

I think that you're over complicating it and that the term flatten is really 
fairly straightforward. Especially if it's clearly documented in terms of 
consuming iterables.

And I think that you're over-simplifying.  If you can demonstrate that there is 
really a broad consensus that this sort of thing is useful in a wide variety of 
applications, then sure, I wouldn't complain too much.  But I've spent a LOT of 
time thinking about what flattening is, and several applications that I've 
worked on have very different ideas about how it should work, and I see very 
little benefit to unifying them.  That's just the 

Re: [Python-Dev] list.discard? (Re: dict.discard)

2006-09-22 Thread Greg Ewing
[EMAIL PROTECTED] wrote:

 It's obvious for sets and dictionaries that there is only one thing to
 discard and that after the operation you're guaranteed the key no longer
 exists.  Would you want the same semantics for lists or the semantics of
 list.remove where it only removes the first instance?

In my use cases I usually know that there is either
zero or one occurrences in the list.

But maybe it would be more useful to have a remove_all()
method, whose behaviour with zero occurrences would just
be a special case.

Or maybe remove() should just do nothing if the item is
not found. I don't think I've ever found getting an exception
from it to be useful, and I've often found it a nuisance.
What experiences have others had with it?
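Both behaviours under discussion are easy to state as helper functions; the names `remove_all` and `discard` are hypothetical here, since list has no such methods today:

```python
def remove_all(lst, item):
    """Remove every occurrence of item; zero occurrences is just a special case."""
    lst[:] = [x for x in lst if x != item]

def discard(lst, item):
    """Like list.remove, but silently do nothing if item is absent."""
    try:
        lst.remove(item)
    except ValueError:
        pass

data = [1, 2, 3, 2]
remove_all(data, 2)
discard(data, 99)   # absent item: no exception raised
print(data)         # [1, 3]
```

For the zero-or-one-occurrence use case Greg describes, the two helpers behave identically.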

--
Greg


Re: [Python-Dev] Typo.pl scan of Python 2.5 source code

2006-09-22 Thread Neal Norwitz
On 9/22/06, Johnny Lee [EMAIL PROTECTED] wrote:

 Hello,
 My name is Johnny Lee. I have developed a *ahem* perl script which scans
 C/C++ source files for typos.

Hi Johnny.

Thanks for running your script, even if it is written in Perl and run
on Windows. :-)

 The Python 2.5 typos can be classified into 7 types.

 2) realloc overwrite src if NULL, i.e. p = realloc(p, new_size);
 If realloc() fails, it will return NULL. If you assign the return value to
 the same variable you passed into realloc,
 then you've overwritten the variable and possibly leaked the memory that the
 variable pointed to.

A bunch of these warnings were accurate and a bunch were not.  There
were 2 reasons for the false positives.  1) The pointer was aliased,
thus not lost, 2) On failure, we exited (Parser/*.c)
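The unsafe and safe idioms can be mimicked with a toy stand-in for realloc that, like the C function, returns None (NULL) on failure while the old block stays valid. This is a sketch of the pattern only, not of any CPython code:

```python
def realloc(block, size, fail=False):
    """Toy model of C realloc: returns a grown block, or None on failure."""
    if fail:
        return None          # the old block is still valid (and must be freed)
    return block + [0] * (size - len(block))

# Unsafe idiom: p = realloc(p, n). On failure the only reference to the
# old block is overwritten with None, so in C it could never be freed.
p = [1, 2, 3]
p = realloc(p, 6, fail=True)
assert p is None             # original block now unreachable: a leak in C

# Safe idiom: assign to a temporary and test it before overwriting p.
p = [1, 2, 3]
q = realloc(p, 6, fail=True)
if q is not None:
    p = q
assert p == [1, 2, 3]        # old block kept; caller can still free or retry
```

The aliasing exception Neal mentions is exactly the safe idiom: if another variable still points at the old block, overwriting one reference loses nothing.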

 4) if ((X!=0) || (X!=1))

These 2 cases occurred in binascii.  I have no idea if the warning is
right or the code is.
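The arithmetic behind the case-4 warning is at least mechanical: for any value, (x != 0) || (x != 1) is always true, since x cannot equal 0 and 1 at once, while the "&&" form really does exclude both values. A quick exhaustive check over a small range (whether the binascii code *intends* something else is a separate question):

```python
# (x != 0) or (x != 1) can never be false: no x equals both 0 and 1.
always_true = all((x != 0) or (x != 1) for x in range(-5, 6))

# The 'and' form is the one that actually excludes both values.
excludes_both = [x for x in range(-5, 6) if (x != 0) and (x != 1)]

print(always_true)                            # True
print(0 in excludes_both, 1 in excludes_both) # False False
```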

 6) XX;;
 Just being anal here. Two semicolons in a row. Second one is extraneous.

I already checked in a fix for these on HEAD.  Hard for even me to
screw up those fixes. :-)

 7) extraneous test for non-NULL ptr
 Several memory calls that free memory accept NULL ptrs.
 So testing for NULL before calling them is redundant and wastes code space.
 Now some codepaths may be time-critical, but probably not all, and smaller
 code usually helps.

I ignored these as I'm not certain all the platforms we run on accept
free(NULL).

Below is my categorization of the warnings except #7.  Hopefully
someone will fix all the real problems in the first batch.

Thanks again!

n
--

# Problems
Objects\fileobject.c (338): realloc overwrite src if NULL; 17:
file->f_setbuf=(char*)PyMem_Realloc(file->f_setbuf,bufsize)
Objects\fileobject.c (342): using PyMem_Realloc result w/no check
30: setvbuf(file->f_fp, file->f_setbuf, type, bufsize);
[file->f_setbuf]
Objects\listobject.c (2619):using PyMem_MALLOC result w/no check
30: garbage[i] = selfitems[cur]; [garbage]
Parser\myreadline.c (144):  realloc overwrite src if NULL; 17:
p=(char*)PyMem_REALLOC(p,n+incr)
Modules\_csv.c (564):   realloc overwrite src if NULL; 17:
self->field=PyMem_Realloc(self->field,self->field_size)
Modules\_localemodule.c (366):  realloc overwrite src if NULL; 17:
buf=PyMem_Realloc(buf,n2)
Modules\_randommodule.c (290):  realloc overwrite src if NULL; 17:
key=(unsigned long*)PyMem_Realloc(key,bigger*sizeof(*key))
Modules\arraymodule.c (1675):   realloc overwrite src if NULL; 17:
self->ob_item=(char*)PyMem_REALLOC(self->ob_item,itemsize*self->ob_size)
Modules\cPickle.c (536):realloc overwrite src if NULL; 17:
self->buf=(char*)realloc(self->buf,n)
Modules\cPickle.c (592):realloc overwrite src if NULL; 17:
self->buf=(char*)realloc(self->buf,bigger)
Modules\cPickle.c (4369):   realloc overwrite src if NULL; 17:
self->marks=(int*)realloc(self->marks,s*sizeof(int))
Modules\cStringIO.c (344):  realloc overwrite src if NULL; 17:
self->buf=(char*)realloc(self->buf,self->buf_size)
Modules\cStringIO.c (380):  realloc overwrite src if NULL; 17:
oself->buf=(char*)realloc(oself->buf,oself->buf_size)
Modules\_ctypes\_ctypes.c (2209):   using PyMem_Malloc result w/no
check 30: memset(obj->b_ptr, 0, dict->size); [obj->b_ptr]
Modules\_ctypes\callproc.c (1472):  using PyMem_Malloc result w/no
check 30: strcpy(conversion_mode_encoding, coding);
[conversion_mode_encoding]
Modules\_ctypes\callproc.c (1478):  using PyMem_Malloc result w/no
check 30: strcpy(conversion_mode_errors, mode);
[conversion_mode_errors]
Modules\_ctypes\stgdict.c (362):using PyMem_Malloc result w/no
check 30: memset(stgdict->ffi_type_pointer.elements, 0,
[stgdict->ffi_type_pointer.elements]
Modules\_ctypes\stgdict.c (376):using PyMem_Malloc result w/no
check 30: memset(stgdict->ffi_type_pointer.elements, 0,
[stgdict->ffi_type_pointer.elements]

# No idea if the code or tool is right.
Modules\binascii.c (1161)
Modules\binascii.c (1231)

# Platform specific files.  I didn't review and won't fix without testing.
Python\thread_lwp.h (107):  using malloc result w/no check 30:
lock->lock_locked = 0; [lock]
Python\thread_os2.h (141):  using malloc result w/no check 30:
(long)sem)); [sem]
Python\thread_os2.h (155):  using malloc result w/no check 30:
lock->is_set = 0; [lock]
Python\thread_pth.h (133):  using malloc result w/no check 30:
memset((void *)lock, '\0', sizeof(pth_lock)); [lock]
Python\thread_solaris.h (48):   using malloc result w/no check 30:
funcarg->func = func; [funcarg]
Python\thread_solaris.h (133):  using malloc result w/no check 30:
if(mutex_init(lock,USYNC_THREAD,0)) [lock]

# Who cares about these modules.
Modules\almodule.c:182
Modules\svmodule.c:547

# Not a problem.
Parser\firstsets.c (76)
Parser\grammar.c (40)
Parser\grammar.c (59)
Parser\grammar.c (83)
Parser\grammar.c (102)
Parser\node.c (95)
Parser\pgen.c (52)
Parser\pgen.c (69)
Parser\pgen.c (126)
Parser\pgen.c (438)
Parser\pgen.c (462)
Parser\tokenizer.c (797)

Re: [Python-Dev] Pep 353: Py_ssize_t advice

2006-09-22 Thread Martin v. Löwis
David Abrahams schrieb:
 b. We were using C++, which IIRC does not allow such redefinition

You remember incorrectly. 16.3/2 (cpp.replace) says

# An identifier currently defined as a macro without use of lparen (an
# object-like macro) may be redefined by another #define preprocessing
# directive provided that the second definition is an object-like macro
# definition and the two replacement lists are identical, otherwise the
# program is ill-formed.

 c. anyway you'll get a nasty warning, which for some people will be 
 just as bad as an error

Try for yourself. You get the warning only if the redefinition is not
identical to the original definition (or an object-like macro is
redefined as a function-like macro or vice versa).

Regards,
Martin


Re: [Python-Dev] GCC patch for catching errors in PyArg_ParseTuple

2006-09-22 Thread Martin v. Löwis
Giovanni Bajo schrieb:
 A way not to maintain this patch forever would be to devise a way to make
 format syntax pluggable / scriptable. There have been previous discussions
 on the GCC mailing lists.

Perhaps. I very much doubt that this can or will be done, in a way that
would support PyArg_ParseTuple. It's probably easier to replace
PyArg_ParseTuple with something that can be statically checked by any
compiler.

Regards,
Martin