Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-04-12 Thread Giovanni Bajo
On Mon, 30 Mar 2009 20:34:21 +0200, Christian Heimes wrote:

 Hello Alexander!
 
 Alexander Neundorf wrote:
 This of course depends on the definition of as good as ;-) Well, I
 have met Windows-only developers which use CMake because it is able to
 generate project files for different versions of Visual Studio, and
 praise it for that.
 
 So far I haven't heard any complaints about or feature requests for the
 project files. ;)

In fact, I have had one.

I asked to move all those big CJK codecs out of python2x.dll, because 
they were so big that they made self-contained distributions (built with 
py2exe/PyInstaller and the like) far larger than would normally be 
required.

The reply was that it would be inconvenient to do so, because the build 
system is maintained by hand and it's hard to create project files for 
each third-party module.

Were those project files generated automatically, moving an external 
module into or out of python2x.dll would be a one-line switch in 
CMakeLists.txt (or similar).
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com



Re: [Python-Dev] Evaluated cmake as an autoconf replacement

2009-04-12 Thread Giovanni Bajo
On Fri, 10 Apr 2009 11:49:04 +1000, Neil Hodgson wrote:

This means that generated Visual Studio project files will not work
 for other people unless a particular absolute build location is
 specified for everyone which will not suit most. Each person that wants
 to build Python will have to run cmake before starting Visual Studio
 thus increasing the prerequisites.

Given that we're already stuck with whatever Visual Studio version the 
Python maintainers decided to use, I don't see this as a problem: there 
is already a far larger and more invasive dependency in place.

CMake is readily available on all platforms, and it can be installed in a 
couple of seconds.
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com



Re: [Python-Dev] PEP 372 -- Adding an ordered dictionary to collections ready for pronouncement

2009-03-02 Thread Giovanni Bajo
On Mon, 02 Mar 2009 14:36:32 -0800, Raymond Hettinger wrote:

 [Nick Coghlan]
 The examples in the PEP used 'odict' (until recently), but the patch
 was for OrderedDict.
 
 As an experiment, try walking down the hall asking a few programmers who
 aren't in this conversation what they think collections.odict() is?
 Is it a class or function?  What does it do?  Can the English as second
 language folks guess what the o stands for?  Is it a builtin or pure
 python?  My guess is that the experiment will be informative.

Just today I was talking about ordered dicts with a colleague (who is 
learning Python right now). His first thought was of a dictionary that, 
when iterated, returns its keys in sorted order.

I believe he was partly misled by his knowledge of C++. C++ has always 
had std::map, which returns sorted data upon iteration (it's a binary 
tree); they're now adding std::unordered_map (and std::unordered_set), to 
be implemented with a hash table. So, if you come from C++, it's easy to 
mistake the meaning of an ordered dict.

This said, I don't have a specific suggestion, but I would stay with 
lowercase-only for symmetry with defaultdict.
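
For the record, a minimal sketch of the distinction (using the 
OrderedDict name and semantics that the patch proposes; the class is not 
in collections yet as I write this):

    from collections import OrderedDict

    d = OrderedDict()
    d['banana'] = 3
    d['apple'] = 1
    d['cherry'] = 2

    print(list(d))      # insertion order: ['banana', 'apple', 'cherry']
    print(sorted(d))    # what my colleague expected: ['apple', 'banana', 'cherry']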
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com



Re: [Python-Dev] PEP 374 (DVCS) now in reST

2009-01-26 Thread Giovanni Bajo
On Mon, 26 Jan 2009 10:31:55 -0800, Guido van Rossum wrote:

 On Mon, Jan 26, 2009 at 8:08 AM, Paul Hummer p...@eventuallyanyway.com
 wrote:
 At a previous employer, we had this same discussion about switching to
 a DVCS, and the time and cost required to learn the new tool.  We
 switched to bzr, and while there were days where someone got lost in
 the DVCS, the overall advantages with merging allowed that cost to be
 offset by the fact that merging was so cheap (and we merged a lot).

 That's a big consideration to be made when you're considering a DVCS. 
 Merges in SVN and CVS can be painful, where merging well is a core
 feature of any DVCS.
 
 I hear you. I for one have been frustrated (now that you mention it) by
 the inability to track changes across merges. We do lots of merges from
 the trunk into the py3k branch, and the merge revisions in the branch
 quotes the full text of the changes merged from the trunk, but not the
 list of affected files for each revision merged. Since merges typically
 combine a lot of revisions, it is very painful to find out more about a
 particular change to a file when that change came from such a merge --
 often even after reading through the entire list of descriptions you
 still have no idea which merged revision is responsible for a particular
 change. Assuming this problem does not exist in DVCS, that would be a
 huge bonus from switching to a DVCS!

Well, not only does it not exist by design in any DVCS, but I have even 
better news: it no longer exists in Subversion 1.5 either. You just need 
to upgrade your SVN server to 1.5, migrate your merge history from the 
svnmerge format to the new builtin format (using the official script), 
and you're done: say hello to -g/--use-merge-history, to be used with 
svn log and svn blame.

This is a good writeup of the new features:
http://chestofbooks.com/computers/revision-control/subversion-svn/Merge-Sensitive-Logs-And-Annotations-Branchmerge-Advanced-Lo.html
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com



Re: [Python-Dev] __del__ and tp_dealloc in the IO lib

2009-01-23 Thread Giovanni Bajo
On gio, 2009-01-22 at 18:42 -0800, Guido van Rossum wrote:
 On Thu, Jan 22, 2009 at 5:22 PM, Giovanni Bajo ra...@develer.com wrote:
  CPython will always use reference counting and thus have a simple and
  clear GC criteria that can be exploited to simplify the code.
 
 Believe this at your own peril.
 
 Once, CPython didn't have GC at all (apart from refcounting). Now it
 does. There are GC techniques that delay DECREF operations until it's
 more convenient. If someone finds a way to exploit that technique to
 save 10% of execution time it would only be right to start using it.
 
 You *can* assume that objects that are no longer referenced will
 *eventually* be GC'ed, and that GC'ing a file means flushing its
 buffer and closing its file descriptor. You *cannot* assume that
 objects are *immediately* GC'ed. This is already not always true in
 CPython for many different reasons, like objects involved in cycles,
 weak references,

I don't understand what you mean by weak references delaying object
deallocation. I'm probably missing something here...

  or tracebacks saved with exceptions, or perhaps
 profiling/debugging hooks. If we found a good reason to introduce file
 objects into some kind of cycle or weak reference dict, I could see
 file objects getting delayed reclamation even without changes in GC
 implementation.

That would break so much code that I doubt that, in practice, you could
slip it into a release. Besides, being able to write simpler code like
for L in open("foo.txt") is in itself a good reason *not* to put file
objects in cycles; so you would probably need more than one good reason
to change this. OK, not *you*, because of your BDFL powers ;), but
anyone else would surely face great opposition.
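
For concreteness, this is the kind of simplification at stake (a sketch, 
not anyone's real code): the short form relies on CPython's reference 
counting to close the file promptly, while the long form is what 
implementation-independent code has to look like:

    def count_lines_short(path):
        # file closed promptly only thanks to reference counting
        return sum(1 for line in open(path))

    def count_lines_portable(path):
        f = open(path)
        try:
            return sum(1 for line in f)
        finally:
            f.close()   # guaranteed on any Python implementation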

The fact that file objects are collected and closed immediately in all
reasonable use cases (and even in the case of exceptions, which you
mention, things get better with the new semantics of the except clause)
is a *good* property of Python. I regularly see people *happy* about it.

I fail to understand why many Python developers are so fierce in pushing
the idea of cross-interpreter compatibility (something that simply does
*not* exist in the real world for applications), or in warning about
rainy days in the future when this would stop working in CPython. I
would strongly prefer that CPython settle on (= document) using
reference counting and immediate destruction, so that people can stop
making their everyday code more complex for no benefit. You would be
losing no more than an open door that nobody has entered in 20 years,
and people would only benefit from it.
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com




Re: [Python-Dev] __del__ and tp_dealloc in the IO lib

2009-01-23 Thread Giovanni Bajo

On 1/23/2009 4:27 PM, Guido van Rossum wrote:

On Fri, Jan 23, 2009 at 2:57 AM, Giovanni Bajo ra...@develer.com wrote:

I miss to understand why many Python developers are so fierce in trying
to push the idea of cross-python compatibility (which is something that
does simply *not* exist in real world for applications) or to warn about
rainy days in the future when this would stop working in CPython. I
would strongly prefer that CPython would settle on (= document) using
reference counting and immediate destruction so that people can stop
making their everyday code more complex with no benefit. You will be
losing no more than an open door that nobody has entered in 20 years,
and people would only benefit from it.


You are so very wrong, my son. CPython's implementation strategy
*will* evolve. Several groups are hard at work trying to make a faster
Python interpreter, and when they succeed, everyone, including you,
will want to use their version (or their version may simply *be* the
new CPython).


I'm basing my assumption on 19 years of CPython history. Please correct 
me if I'm wrong, but the only thing that changed is that the cyclic GC 
was added so that reference loops are now collected; nothing changed for 
objects outside cycles. And everybody (including you, IIRC) has always 
agreed that it would be very, very hard to eradicate reference counting 
from CPython and all the existing extensions; so hard that it is 
probably more convenient to start a different interpreter implementation.



Plus, you'd be surprised how many people might want to port existing
code (and that may include code that uses C extensions, many of which
are also ported) to Jython or IronPython.


I would love to be surprised, in fact!

Since I fail to see any business strategy behind such a port, I don't 
see it happening very often in industry (and even less in the open 
source community, where there are also political issues between those 
versions of Python, I would say). I have also never met someone who 
wanted to make a cross-interpreter Python application, nor read about 
someone with a reasonable use case for wanting to do that, besides geek 
fun; which is why I came to this conclusion, though I obviously have 
access to only a little information compared to other people in here.


On the other hand, I see people using IronPython so that they can access 
the .NET framework (which can't be ported to other Python 
implementations), or Jython so that they can blend with existing Java 
programs. And those are perfectly good use cases for the existence of 
such interpreters, but not arguments for the merits of writing 
cross-interpreter portable code.


I would be pleased if you (or others) could point me to real-world use 
cases of this cross-interpreter portability.



Your mistake sounds more like nobody will ever want to run this on
Windows, so I won't have to use the os.path module and other
short-sighted ideas. While you may be right in the short run, it may
also be the death penalty for a piece of genius code that is found to
be unportable.


And in fact, I don't defensively write cross-OS portable code. Python is 
great in that *most* of what you naturally write is portable; which 
means that, the day you need it, it's a breeze to port your code 
(assuming you have also picked the correct extensions, which I always 
try to do). But that does not mean that I have to waste time today on 
something I don't need.



And, by the way, for line in open(filename): ... will continue to
work. It may just not close the file right away. This is a forgivable
sin in a small program that opens a few files only. It only becomes a
problem when this is itself inside a loop that iterates over many
filenames -- you could run out of file descriptors.


I do understand this, but I'm sure you realize there are other similar 
examples where the side effects are far worse. Maybe you don't care, 
since you simply decided to declare that code *wrong*. But I'm unsure 
the community will kindly accept such a deep change in behaviour, 
especially within the existing 2.x or 3.x release lines.

--
Giovanni Bajo
Develer S.r.l.
http://www.develer.com


Re: [Python-Dev] [Python-3000] 2.6.1 and 3.0

2008-11-27 Thread Giovanni Bajo
On gio, 2008-11-27 at 00:29 +0100, Martin v. Löwis wrote:
  So, deducing from your reply, this merge module is a thing that allows
  to install the CRT (and other shared components)? 
 
 Correct. More generally, a merge module is something like an MSI
 library (.a). It includes a set of files and snippets of an installation
 procedure for them.

OK. One question: why doesn't the CRT get installed as regular files 
next to the Python executable? That's how I usually ship it, but maybe 
Python has some special need.

  Another option is to contact the Advanced Installer vendor and ask for a
  free license for the Python Software Foundation. This would mean that
  everybody in the world would still be able to build an installer without
  CRT, and only PSF would build the official one with CRT bundled. I
  personally don't see this as a show-stopper (does anyone ever build
  the .msi besides Martin?).
 
 I personally don't have any interest to spend any time on an alternative
 technology. The current technology works fine for me, and I understand
 it fully. Everybody in the world is able to build an installer today,
 also. However, I won't stop anybody else from working a switch to a
 different technology, either.

I proposed an alternative because I read you saying: "The tricky part 
really is when it breaks (which it does more often than not), in which 
case you need to understand msi.py, for which you need to understand 
MSI." Which means that maybe everybody *has the tools* to build an 
installer today, but only a few people have the knowledge required to 
really do releases on Windows.

So I believe that switching to an alternative that doesn't require a 
full understanding of MSI and msi.py would probably lower the barrier 
and allow more people to help you out.
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com




Re: [Python-Dev] [Python-3000] 2.6.1 and 3.0

2008-11-26 Thread Giovanni Bajo
On mer, 2008-11-26 at 21:03 +0100, Martin v. Löwis wrote:

  I'm sure the 
  Python Software Foundation would easily get a free license of one of the 
  good commercial MSI installer generators.
 
 Can you recommend a specific one?

I've had good results with Advanced Installer:
http://www.advancedinstaller.com/feats-list.html

It supports 64-bit packages, and it uses an XML file as input. It 
supports Vista and UAC, per-user and per-machine installs, registry 
modification, environment variables, upgrades/downgrades/side-by-side 
installs, online installs. And it's free as in beer. The commercial 
version has many more features, but I don't think Python needs them.

But the basic idea is that this tool totally abstracts away the MSI 
details. I know *nothing* about MSI, yet I'm fully able to use this tool 
and produce installers with more features than those I see in Python's 
installer.
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com




Re: [Python-Dev] [Python-3000] 2.6.1 and 3.0

2008-11-26 Thread Giovanni Bajo
On mer, 2008-11-26 at 23:38 +0100, Martin v. Löwis wrote:
  Merge Modules into your installation
  Create self-contained MSI packages, by including and configuring the
  required merge modules.
 
 Right. Still, if people want to go this route (I personally don't),
 I think it would be useful to build an installer from the free edition.
 You can then run Tools/msi/merge.py, which adds the CRT merge module
 into the MSI file (mostly as-is, except for discarding the ALLUSERS
 property from that merge module). Alternatively, for testing, you can
 just assume that the CRT is already installed.

So, deducing from your reply, this merge module is a thing that lets one 
install the CRT (and other shared components)? I quickly googled, but 
I'm not really familiar with the MSI jargon, so I'm not sure I understood.

 When we then have a script that generates a mostly-complete installer,
 I'm sure Giovanni would be happy to add support for the CRT merge
 module to see how the tool fares (my expectation is that it breaks,
 as I assume it just doesn't deal with the embedded ALLUSERS property
 correctly - merge.py really uses a bad hack here).

Another option is to contact the Advanced Installer vendor and ask for a
free license for the Python Software Foundation. This would mean that
everybody in the world would still be able to build an installer without
CRT, and only PSF would build the official one with CRT bundled. I
personally don't see this as a show-stopper (does anyone ever build
the .msi besides Martin?).
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com




Re: [Python-Dev] [Python-3000] 2.6.1 and 3.0

2008-11-26 Thread Giovanni Bajo
On mer, 2008-11-26 at 22:54 +0100, Martin v. Löwis wrote:
  I've had good results with Advanced Installer:
  http://www.advancedinstaller.com/feats-list.html
 
 So how much effort would it be to create a Python installer?
 Could you kindly provide one?

In my case, the biggest effort would be finding out what needs to go 
into the installer. If you can give me a pointer to where the current 
build process reads the complete list of files to put within the .msi 
(and their relative destination paths), I can try to build a simple test 
installer, on which we can start doing some evaluation.
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com




Re: [Python-Dev] Fwd: Removal of GIL through refcounting removal.

2008-11-02 Thread Giovanni Bajo
On Sun, 02 Nov 2008 10:21:26 +1000, Nick Coghlan wrote:

 Maciej Fijalkowski wrote:
 ...
 
 We know it is the plan for PyPy to work in this way, and also that
 Jython and Ironpython works like that (using the host vm's GC), so it
 seems to be somehow agreeable with the python semantics (perhaps not
 really with __del__ but they are not really nice anyway).


 PyPy has a semi-advanced generational moving gc these days. It's not as
 well tuned as the JVM's, but it works quite well. Regarding semantic
 changes, there is a couple which as far as I remember are details which
 you should not rely on anyway (At least some of the below applies also
 for Jython/IronPython):
 
 * __del__ methods are not called immediately when object is not in a
 cycle
 
 * all objects are collected, which means objects in cycles are broken
 in arbitrary order (gc.garbage is always empty), but normal ordering is
 preserved.
 
 * id's don't always resemble address of object in memory
 
 * resurrected objects have __del__ called only once in total.
 
 Yep, I'm pretty sure those are all even explicitly documented as
 implementation details of CPython (PEP 343's with statement was largely
 added as a cross-implementation way of guaranteeing prompt cleanup of
 resources like file handles, without relying on CPython's __del__
 semantics or writing your own try/finally statements everywhere).

Though there is a fair difference between explicitly documented as 
implementation details and the real-world code in which programmers 
have learned to save lines of code by relying on reference-counting 
semantics.

[[ My 0.2: it would be a great loss if we lost reference-counting 
semantics (e.g. objects deallocated as soon as they go out of scope). I 
would trade that for a noticeable speed increase, of course, but my own 
experience with standard GCs from other languages has been less than 
stellar. ]]
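
A tiny sketch of the semantics I mean (the class is made up):

    class Holder(object):
        def __init__(self, name):
            self.name = name
        def __del__(self):
            print('finalizing %s' % self.name)

    h = Holder('a')
    h = Holder('b')   # CPython: prints "finalizing a" right here;
                      # a tracing GC may delay it arbitrarily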
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com



[Python-Dev] Troubles with Roundup

2008-06-16 Thread Giovanni Bajo

Hello,

I'm trying to log into the tracker, but it gives me "invalid login" 
even after multiple password resets. I can't submit a proper bug report 
because... I can't log in :)


Who can I contact privately, to avoid spamming this list?

Thanks!
--
Giovanni Bajo
Develer S.r.l.
http://www.develer.com


Re: [Python-Dev] Warn about mktemp once again?

2008-04-29 Thread Giovanni Bajo

On 4/29/2008 2:18 PM, Nick Coghlan wrote:


 Same here. In fact, is there a good reason to have mkstemp() return
 the fd (except backward compatibility)?

 Except for backwards compatibility: is there a good reason to keep
 os.mkstemp at all?


Greg Ewing's use-case is one I've also had at times - ie. as a
convenience function for creating a somewhat temporary file that is
randomly named, but persists beyond the closing of the file.  If the
function doesn't stay in os it doesn't make any difference to me
though :-)


As of 2.6, Greg's use case is addressed by the new 'delete' parameter on 
tempfile.NamedTemporaryFile.


Then I personally don't have any objection to the removal of os.mkstemp.

While we're at it, a common pattern I use is creating a temporary file 
to atomically replace another file: I create a named temporary file in 
the same directory as the file I want to replace; write data into it; 
close it; and move it (a POSIX move: rename with silent overwrite) over 
the destination file. AFAIK, this allows an atomic file replacement on 
most filesystems.


I believe this is a common, useful pattern that could be handled by the 
tempfile module (especially since the final rename step requires a 
little care to be truly multiplatform).
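
A minimal sketch of what I mean (names are mine; it assumes a POSIX 
rename, which is exactly the part that needs care on Windows, where 
os.rename refuses to overwrite):

    import os
    import tempfile

    def atomic_write(path, data):
        dirname = os.path.dirname(os.path.abspath(path))
        fd, tmppath = tempfile.mkstemp(dir=dirname)  # same filesystem as target
        try:
            f = os.fdopen(fd, 'w')
            f.write(data)
            f.close()
            os.rename(tmppath, path)   # atomic on POSIX filesystems
        except Exception:
            os.unlink(tmppath)
            raise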

--
Giovanni Bajo
Develer S.r.l.
http://www.develer.com


Re: [Python-Dev] Warn about mktemp once again?

2008-04-28 Thread Giovanni Bajo
On Tue, 29 Apr 2008 17:15:11 +1200, Greg Ewing wrote:

 [EMAIL PROTECTED] wrote:
 Guido> Have we documented the alternatives well enough?
 
 I suppose we could document explicitly how to use mkstemp() in place of
 mktemp(), but the difference in return value is fairly modest:
 
 I'd like to see a variation of mkstemp() that returns a file object
 instead of a file descriptor, since that's what you really want most of
 the time. At least I always end up calling fdopen on it.

Same here. In fact, is there a good reason to have mkstemp() return the 
fd (except backward compatibility)?
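
That is, the boilerplate in question is always the same couple of lines 
(a trivial sketch):

    import os
    import tempfile

    fd, path = tempfile.mkstemp()
    f = os.fdopen(fd, 'w')   # the file object you really wanted
    f.write('hello\n')
    f.close()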
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com



Re: [Python-Dev] Subversion branch merging

2007-07-13 Thread Giovanni Bajo
On 13/07/2007 14.23, Steve Holden wrote:

 I can't speak to how easily any of these cross over to the windows
 platform, although none of them seem to be overly windows friendly
 (YMMV).  But I presume this would be one of the key problems facing a
 distributed versioning system by the python community.

 We can probably assume that none of the Linux kernel team are developing 
 on Windows. There is probably a group with relevant experience 
 somewhere. I'd Google for it, but I expect that the results would be 
 dominated by British assertions that you have to be a stupid git to run 
 Windows.

git doesn't support Windows in a way that Windows users would find 
reasonable. In fact, the only ones saying that it already supports 
Windows are non-Windows users.

hg has much more mature Windows support. In fact, I didn't face any 
major problems using it under Windows (even in the details: e.g., it 
supports case-insensitive filesystems).

I can't speak of bzr.
-- 
Giovanni Bajo



Re: [Python-Dev] Building Python with CMake

2007-07-13 Thread Giovanni Bajo
On 13/07/2007 20.53, Facundo Batista wrote:

 as I wrote in my previous email, I'm currently porting Python to some more
 unusual platforms, namely to a super computer
 (http://www.research.ibm.com/bluegene/) and a tiny embedded operating system
 (http://ecos.sourceware.org), which have more or less surprisingly quite
 similar properties.
 
 Sorry, missed the previous mail. Have two questions for you:
 
 - Why?

Because it would be a single unified build system, instead of the two 
build systems we have now (UNIX and Windows).

Also, it would be much easier to maintain, because Visual Studio 
projects would be generated from a simple description, while right now 
if you want to change something you need to go through the hassle of 
defining it within the Visual Studio GUI.

Consider for instance if you wanted to change the Windows build so that 
a builtin module is compiled as an external .pyd instead. Right now, you 
need to go through the hassle of manually defining a new project, 
setting all the include/library dependencies correctly, etc. etc. With 
CMake or a similar tool, it would be a matter of a couple of one-line 
textual changes.

[ I'll also note that ease of maintenance for developers is the #1 
reason for having a 2.1 MB python25.dll under Windows, which I would 
really love to reduce. ]
-- 
Giovanni Bajo



Re: [Python-Dev] itertools addition: getitem()

2007-07-10 Thread Giovanni Bajo
On 09/07/2007 21.23, Walter Dörwald wrote:

 >>> from ll.xist import parsers, xfind
 >>> from ll.xist.ns import html
 >>> e = parsers.parseURL("http://www.python.org", tidy=True)
 >>> print e.walknode(html.h2 & xfind.hasclass("news"))[-1]
 Google Adds Python Support to Google Calendar Developer's Guide
 
 
 Get the first comment line from a python file:
 
 >>> getitem((line for line in open("Lib/codecs.py") if
 ... line.startswith("#")), 0)
 '### Registry and builtin stateless codec functions\n'
 
 
 Create a new unused identifier:
 
 >>> def candidates(base):
 ...     yield base
 ...     for suffix in count(2):
 ...         yield "%s%d" % (base, suffix)
 ...
 >>> usedids = set(("foo", "bar"))
 >>> getitem((i for i in candidates("foo") if i not in usedids), 0)
 'foo2'

You keep posting examples where you call your getitem() function with 0 
or -1 as the index.

getitem(it, 0) already exists, and it's spelled it.next(). getitem(it, 
-1) might be useful in fact, and it might be spelled last(it) (or 
it.last()). Then one may want to add first() for symmetry, but that's it:

first(i for i in candidates("foo") if i not in usedids)
last(line for line in open("Lib/codecs.py") if line[0] == '#')

Are there real-world use cases for getitem(it, n) with n not in (0, -1)? 
I share Raymond's feelings on this. And by the way, if you wonder, I 
have exactly the same feelings about islice... :)
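
A minimal sketch of the two helpers, just to fix the idea (Python 2 
iterator protocol):

    def first(iterable):
        # same as getitem(it, 0): just pull the first item
        return iter(iterable).next()

    def last(iterable):
        # same as getitem(it, -1): exhaust the iterable, keep the last item
        item = marker = object()
        for item in iterable:
            pass
        if item is marker:
            raise ValueError('last() of empty iterable')
        return item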
-- 
Giovanni Bajo




Re: [Python-Dev] Py2.6 buildouts to the set API

2007-05-19 Thread Giovanni Bajo
On 19/05/2007 3.34, Raymond Hettinger wrote:

 * Make sets listenable for changes (proposed by Jason Wells):
 
 s = set(mydata)
 def callback(s):
  print 'Set %d now has %d items' % (id(s), len(s))
 s.listeners.append(callback)
 s.add(existing_element)   # no callback
 s.add(new_element)# callback

-1 because I can't see why sets are so special (compared to other 
containers or objects) as to deserve a builtin implementation of the 
observer pattern.

In fact, in my experience, real-world uses of this pattern often require 
more attention to detail (e.g.: does the set keep a strong or a weak 
reference to the callback? What if I need to make several 
*transactional* modifications in a row, and thus would like my callback 
to be called only once at the end?).
-- 
Giovanni Bajo



Re: [Python-Dev] svn logs

2007-05-09 Thread Giovanni Bajo
On 08/05/2007 19.37, Neal Norwitz wrote:

 Part of the problem might be that we are using an old version of svn
 (1.1) AFAIK.  IIRC these operations were sped up in later versions.

Yes they were. If that's the case, then probably the server should be updated.
-- 
Giovanni Bajo



Re: [Python-Dev] New Super PEP

2007-05-02 Thread Giovanni Bajo
On 29/04/2007 17.04, Guido van Rossum wrote:

 This is only a halfway fix to DRY, and it really only fixes the less
 important half. The important problem with super is that it
 encourages people to write incorrect code by requiring that you
 explicitly specify an argument list. Since calling super with any
 arguments other than the exact same arguments you have received is
 nearly always wrong, requiring that the arglist be specified is an
 attractive nuisance.
 
 Nearly always wrong? You must be kidding. There are tons of reasons to
 call your super method with modified arguments. E.g. clipping,
 transforming, ...

Really?
http://fuhm.net/super-harmful/

I don't believe there are really so many. I would object to forcing 
super to *only* be able to pass unmodified arguments. But if it had an 
alternative syntax for that (à la Dylan's next-method), I would surely 
use it often enough to make it worthwhile.
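
To fix the terminology, a sketch of the two cases (the classes are made 
up):

    class Canvas(object):
        def draw(self, x, y):
            print('drawing at (%s, %s)' % (x, y))

    class ClippedCanvas(Canvas):
        def draw(self, x, y):
            # modified arguments: Guido's clipping/transforming case
            x = min(max(x, 0), 100)
            y = min(max(y, 0), 100)
            super(ClippedCanvas, self).draw(x, y)

    class LoggedCanvas(Canvas):
        def draw(self, *args, **kwargs):
            # unchanged arguments: the boilerplate-heavy common case
            print('draw called')
            super(LoggedCanvas, self).draw(*args, **kwargs)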
-- 
Giovanni Bajo



Re: [Python-Dev] New Super PEP

2007-05-02 Thread Giovanni Bajo
On 02/05/2007 12.00, Christian Tanzer wrote:

 Nearly always wrong? You must be kidding. There are tons of reasons to
 call your super method with modified arguments. E.g. clipping,
 transforming, ...

 Really?
 http://fuhm.net/super-harmful/
 
 Hmmm.
 
 I've just counted more than 1600 usages of `super` in my
 sandbox. And all my tests pass.

And you don't follow any of the guidelines reported in that article? And 
you never ran into any of those problems? I find that hard to believe.

The fact that your code *works* is of little importance, since the 
article is mostly about the maintenance of existing code using super 
(and the suggestions it proposes are specifically about making code that 
uses super less fragile under refactoring).
-- 
Giovanni Bajo



Re: [Python-Dev] Status of thread cancellation

2007-03-16 Thread Giovanni Bajo
On 16/03/2007 1.06, Greg Ewing wrote:

 Can you suggest any use-cases for thread termination which will *not* 
 result in a completely broken and unpredictable heap after the thread 
 has died?
 
 Suppose you have a GUI and you want to launch a
 long-running computation without blocking the
 user interface. You don't know how long it will
 take, so you want the user to be able to cancel
 it if he gets bored.
 
 There's no single place in the code where you
 could put in a check for cancellation. Sprinkling
 such checks all over the place would be tedious,
 or even impossible if large amounts of time are
 spent in calls to a third-party library that
 wasn't designed for such things.
 
 Interaction with the rest of the program is
 extremely limited -- some data is passed in,
 it churns away, and some data is returned. It
 doesn't matter what happens to its internal
 state if it gets interrupted, as it's all going
 to be thrown away.
 
 In that situation, it doesn't seem unreasonable
 to me to want to be able to just kill the thread.
 I don't see how it could do any more harm than
 using KeyboardInterrupt to kill a program,
 because that's all it is -- a subprogram running
 inside your main program.
 
 How would you handle this situation?

It's really simple: don't use threads, use processes!

Spawn an external process which does the calculation, pass data to it 
through a pipe/socket/named pipe/XML-RPC/whatever, and read the data 
back from it when it's done. If you need to kill it, just kill it, at 
any time: the OS will clean up after it.

After many years of working with these issues, I came to the personal 
conclusion of avoiding threads as much as possible. Threads are 
processes with shared memory, but in many real-world use cases I have 
faced, there is really only a very small chunk of memory that is shared, 
and Python makes it incredibly easy to marshal data to a process (pickle 
or whatever). So in many cases there's really little excuse for going 
mad with threads.
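
A sketch with today's standard library (multiprocessing is scheduled for 
2.6; before that, the same shape works with a spawned interpreter and a 
pipe):

    from multiprocessing import Process, Queue

    def long_computation(data, results):
        results.put(sum(x * x for x in data))   # churn away, report back

    if __name__ == '__main__':
        results = Queue()
        p = Process(target=long_computation, args=(range(10 ** 6), results))
        p.start()
        p.join(timeout=5.0)
        if p.is_alive():
            p.terminate()   # the "just kill it" case: the OS cleans up
        else:
            print(results.get())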
-- 
Giovanni Bajo



Re: [Python-Dev] Encouraging developers

2007-03-05 Thread Giovanni Bajo
On 05/03/2007 20.30, Phil Thompson wrote:

 1. Don't suggest to people that, in order to get their patch reviewed, they 
 should review other patches. The level of knowledge required to put together 
 a patch is much less than that required to know if a patch is the right one.

+1000.

 2. Publically identify the core developers and their areas of expertise and 
 responsibility (ie. which parts of the source tree they own).

I think this should be pushed to its extreme consequences for the 
standard library. Patching the standard library requires *much less* 
knowledge than patching the core. Basically, almost any Python developer 
in the wild can quickly learn a module and start patching it in a few 
days/weeks -- still, the stdlib is a total mess of outdated and broken 
modules.

My suggestion is:

  - keep a public list of official maintainers for each and every 
package/module in the standard library (if any; otherwise explicitly 
mark the module as unmaintained).
  - if there's no maintainer for a module, the *first* volunteer can 
become it.
  - *any* patch to the stdlib which follows the proper guidelines (has a 
test, doesn't break compatibility, etc.) *must* be applied *unless* the 
maintainer objects within X days (if a maintainer exists... otherwise it 
just goes in).

 4. Acceptance by core developers that only half the job is developing the 
 core - the other half is mentoring potential future core developers.

Acceptance that any patch is better than no patch. There are many 
capable Python programmers out there, and there are many, many patches 
to the stdlib which don't even require a good programmer to write.
-- 
Giovanni Bajo



Re: [Python-Dev] Encouraging developers

2007-03-05 Thread Giovanni Bajo
On 05/03/2007 19.46, A.M. Kuchling wrote:

 At PyCon, there was general agreement that exposing a read-only
 Bazaar/Mercurial/git/whatever version of the repository wouldn't be
 too much effort, and might make things easier for external people
 developing patches.  

I really believe this is just a red herring, pushed by some SCM wonk. The 
problem with patch submission has absolutely *nothing* to do with tools. Do we 
have any evidence that new developers are getting frustrated because they 
can't handle their patches well enough with the current tools?
-- 
Giovanni Bajo



Re: [Python-Dev] Making builtins more efficient

2007-02-21 Thread Giovanni Bajo
On 20/02/2007 16.07, Steven Elliott wrote:

 I'm finally getting back into this.  I'd like to take one more shot at
 it with a revised version of what I proposed before.  
 
 For those of you that did not see the original thread it was about ways
 that accessing builtins could be more efficient.  It's a bit much to
 summarize again now, but you should be able to find it in the archive
 with this subject and a date of 2006-03-08.  

Are you aware of this patch, which is still awaiting review?
https://sourceforge.net/tracker/?func=detail&atid=305470&aid=1616125&group_id=5470
-- 
Giovanni Bajo



Re: [Python-Dev] Py2.6 ideas

2007-02-15 Thread Giovanni Bajo
On 15/02/2007 20.59, Raymond Hettinger wrote:

 * Add a pure python named_tuple class to the collections module.  I've been 
 using the class for about a year and found that it greatly improves the 
 usability of tuples as records. 
 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/500261

+1 from me too. I've been using a class with the same name and semantics 
(though an inferior implementation) for almost two years now, with great 
benefits.

As suggested in the cookbook comments, please consider changing the 
semantics of the generated constructor so that it accepts a single 
iterable positional argument (or keyword arguments). This matches 
tuple() (and the other containers) in behaviour, and makes it easier to 
replace existing uses with named tuples.
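
A sketch of the two construction conventions, using the recipe's 
namedtuple (the _make spelling below is how iterable-based construction 
eventually looks in 2.6; treat it as illustrative):

    from collections import namedtuple   # once it lands in 2.6

    Point = namedtuple('Point', 'x y')
    p1 = Point(1, 2)            # field-per-argument constructor
    p2 = Point._make([1, 2])    # construction from a single iterable
    assert p1 == p2 == (1, 2)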
-- 
Giovanni Bajo



Re: [Python-Dev] New syntax for 'dynamic' attribute access

2007-02-13 Thread Giovanni Bajo
On 13/02/2007 7.39, Martin v. Löwis wrote:

 And again. Apparently, people favor hasattr over catching 
 AttributeError. I'm not sure why this is - 

Because the code becomes longer, unless you want to mask other exceptions:


 name = 'http_error_%d' % errcode
-if hasattr(self, name):
-    method = self.(name)
+try:
+    method = self.(name)
+except AttributeError:
+    pass
+else:
     if data is None:
         result = method(url, fp, errcode, errmsg, headers)
     else:
         result = method(url, fp, errcode, errmsg, headers, data)
     if result: return result
 return self.http_error_default(url, fp, errcode, errmsg, headers)
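
For comparison, a self-contained sketch of the getattr()-with-default 
spelling, which already gives the same brevity without any new syntax:

    class Handler(object):
        def http_error_404(self, url):
            return 'not found: %s' % url

        def dispatch(self, errcode, url):
            method = getattr(self, 'http_error_%d' % errcode, None)
            if method is not None:
                return method(url)
            return 'unhandled error %d' % errcode

    h = Handler()
    print(h.dispatch(404, 'http://example.com'))   # not found: ...
    print(h.dispatch(500, 'http://example.com'))   # unhandled error 500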

-- 
Giovanni Bajo



Re: [Python-Dev] New syntax for 'dynamic' attribute access

2007-02-13 Thread Giovanni Bajo
On 13/02/2007 5.33, Maric Michaud wrote:

 I really dislike the .[ or .( or .{ operators.
 Just on my mail editor the two expressions
 
 a.[b]
 
 and
 
 a,[b]
 
 are quite hard to differentiate while completely unrelated.

I'll propose a new color for this bikeshed:

a.[[b]]

  handlers = chain.get(kind, ())
  for handler in handlers:
  func = handler.[[meth_name]]
  result = func(*args)
  if result is not None:
  return result

A little heavy on the eye, but it seems to be exactly what people want 
and can't find in the .[] syntax.
-- 
Giovanni Bajo



Re: [Python-Dev] Python's C interface for types

2007-01-26 Thread Giovanni Bajo
On 26/01/2007 17.03, Thomas Wouters wrote:

 How critical is the 'numeric' property of the nb_hash function?  I
 can certainly honour it, but is it worth it?
 
 [...]
 There's no strict requirement that 
 equal objects must have equal hashes, 

Uh? I thought that was the *only* strict requirement of hash. In fact, 
the docs agree:


__hash__(self)

Called for the key object for dictionary operations, and by the built-in 
function hash(). Should return a 32-bit integer usable as a hash value 
for dictionary operations. The only required property is that objects 
which compare equal have the same hash value; [...]


I personally consider it *very* important that hash(5.0) == hash(5) (and 
that 5.0 == 5, of course).
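
The property is easy to check directly, and mixed-type dict keys depend 
on it:

    assert 5.0 == 5
    assert hash(5.0) == hash(5)

    d = {5: 'int'}
    d[5.0] = 'float'    # same slot: equal objects, equal hashes
    assert len(d) == 1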
-- 
Giovanni Bajo



Re: [Python-Dev] [Python-checkins] buildbot failure in amd64 gentoo 2.5

2007-01-23 Thread Giovanni Bajo
On 23/01/2007 10.20, Brian Warner wrote:

 Do I miss something here, or is the buildbot hit by spammers now?
 It looks like it is. If that continues, we have to disable the web
 triggers.
 
 Good grief. If anyone has any bright ideas about simple ways to change that
 form to make it less vulnerable to the spambots, I'd be happy to incorporate
 them into Buildbot.

I'd throw a CAPTCHA in. There are even some written in Python.
-- 
Giovanni Bajo



Re: [Python-Dev] file(file)

2007-01-13 Thread Giovanni Bajo
On 13/01/2007 1.37, Brett Cannon wrote:

 For security reasons I might be asking for file's constructor to be
 removed from the type for Python source code at some point (it can be
 relocated to an extension module if desired).  

Isn't this the wrong list, then? I can't see how that could ever happen 
in the 2.x series...
-- 
Giovanni Bajo



[Python-Dev] fpectl: does a better implementation make sense?

2006-11-30 Thread Giovanni Bajo
Hello,

I spent my last couple of hours reading several past threads about 
fpectl. If I'm correct:

1) fpectl is scheduled for deletion in 2.6.
2) The biggest problem is that the C standard says it's undefined 
behaviour to return from a SIGFPE handler. Thus, it's impossible to trap 
floating-point exceptions and convert them to Python exceptions in a way 
that really works.
3) Moreover, the current implementation of the PyFPE_* macros (turning 
the traps on/off + setjmp) is pretty slow, so they're off by default.

Now, I would like Python to raise exceptions (FloatingPointError) 
whenever an Inf or NaN is computed or used in calculations (which, to 
the best of my little understanding of 754, basically means that I want 
all FPU errors to be detected and handled). I am not arguing that this 
should be the default behaviour; I'm just saying that I would like there 
to be a way to enable it (even if only at Python compile time, in fact).

I read that Tim Peters suggested several times to rewrite fpectl so that 
it does not use traps/signals at all, but just checks the FPU status 
bits to see if there was an FPU error. Basically, the PyFPE BEGIN macro 
would clear the FPU bits, and the STOP macro would check for FPU errors 
and raise an appropriate exception if needed.

Is this suggestion still valid, or have people changed their minds 
meanwhile? Would such a rewrite of fpectl (or a new module with a 
different name) be accepted?
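
For reference, this is what enabling the existing module looks like, on 
an interpreter compiled with --with-fpectl (without that build option 
the overflow below surfaces as a plain OverflowError instead):

    import fpectl
    import math

    fpectl.turnon_sigfpe()
    try:
        math.exp(1000)             # overflow inside the C library
    except FloatingPointError:
        print('overflow trapped')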
-- 
Giovanni Bajo



[Python-Dev] Summer of Code: zipfile?

2006-11-12 Thread Giovanni Bajo
Hello,

wasn't there a project about the zipfile module in the Summer of Code? How did
it go?

Giovanni Bajo



Re: [Python-Dev] Importing .pyc in -O mode and vice versa

2006-11-06 Thread Giovanni Bajo
Armin Rigo wrote:

 Typical example: someone in the project removes a .py file, and checks
 in this change; someone else does an 'svn up', which kills the .py in
 his working copy, but not the .pyc.  These stale .pyc's cause pain,
 e.g.
 by shadowing the real module (further down sys.path), or simply by
 preventing the project's developers from realizing that they forgot to
 fix some imports.  We regularly had obscure problems that went away as
 soon as we deleted all .pyc files around, but I cannot comment more on
 that because we never really investigated.

This is exactly why I always use this module:

== nobarepyc.py ==
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import ihooks
import os

class _NoBarePycHooks(ihooks.Hooks):
    def load_compiled(self, name, filename, *args, **kwargs):
        # Refuse to import a .pyc that has no matching .py source file.
        sourcefn = os.path.splitext(filename)[0] + ".py"
        if not os.path.isfile(sourcefn):
            raise ImportError('forbidden import of bare .pyc file: %r'
                              % filename)
        return ihooks.Hooks.load_compiled(self, name, filename,
                                          *args, **kwargs)

ihooks.ModuleImporter(ihooks.ModuleLoader(_NoBarePycHooks())).install()
== /nobarepyc.py ==

Just import it before importing anything else (or in site.py if you prefer)
and you'll be done.

Ah, it doesn't work with zipimports...
-- 
Giovanni Bajo



Re: [Python-Dev] Importing .pyc in -O mode and vice versa

2006-11-06 Thread Giovanni Bajo
Martin v. Löwis wrote:

 Why not only import *.pyc files and no longer use *.pyo files.

 It is simpler to have one compiled python file extension.
 PYC files can contain optimized python byte code and normal byte
 code.

 So what would you do with the -O option of the interpreter?

I just had an idea: we could have only .pyc files, and *no* way to 
identify whether specific optimizations (-O, -OO, --only-strip-docstrings, 
whatever) were performed on them or not. So, if you regularly run 
different Python applications with different optimization settings, 
you'll end up with .pyc files containing bytecode that was generated 
with mixed optimization settings. It doesn't really matter in most 
cases, after all.

Then we add a single command-line option (e.g. -I) which means: ignore 
*every* .pyc file out there, and regenerate them as needed. So, the few 
times you really care that a certain application is run with a specific 
setting, you can use "python -I -OO app.py".

And that's all.
-- 
Giovanni Bajo



Re: [Python-Dev] Proposal: No more standard library additions

2006-10-13 Thread Giovanni Bajo
Antoine wrote:

 The standard library is not about easeness of installation. It is
 about having
 a consistent fixed codebase to work with. I don't want to go
 Perl/CPAN, where you have 3-4 alternatives to do thing A which will
 never interoperate
 with whatever you chose among the 3-4 alternatives to do thing B.

 Currently in Python:
 http://docs.python.org/lib/module-xml.dom.html
 http://docs.python.org/lib/module-xml.dom.minidom.html
 http://docs.python.org/lib/module-xml.sax.html
 http://docs.python.org/lib/module-xml.parsers.expat.html
 http://docs.python.org/lib/module-xml.etree.ElementTree.html

 The problem of consistent fixed codebase is that standards get
 higher, so eventually those old stable modules lose popularity in
 favor of newer, better modules.

Those are different paradigms of doing XML. For instance, the standard 
library was missing a pythonic library for XML processing, and several 
arose. ElementTree (fortunately) won and joined the standard 
distribution. This should alleviate the need for other libraries in the 
future.

Instead of looking at what we have inside, look outside. There are 
dozens of different pythonic XML libraries. I have fought in the past 
with programs that required large XML frameworks, which in turn had to 
be downloaded, built, installed, and *understood* in order to make the 
required modifications to the programs themselves. This slowed down my 
own development, and caused infinite headaches because of version 
incompatibilities (A requires the XML library B, but only versions < 1.2; 
otherwise you can use A 2.0, which needs Python 2.4+, and then you can 
use the latest B; etc. etc., repeat and complicate ad libitum). A single 
version number (that of Python) and a large fixed set of libraries 
anybody can use is a *strong* PLUS.

Then there is the opposite phenomenon, which is interesting as well. I 
have met many Perl programmers who simply reinvented their own little 
wheel every time. They were mostly system administrators, so they *knew* 
very well what hell dependency chains are for both programmers and 
users. Thus, since Perl does not have a standard library, they simply 
did not import *any* module. This way the program is easier to ship, 
distribute and use, but it's harder to code, read and fix, and contains 
unnecessary duplication of everybody else's scripts. Need to send an 
e-mail? Why use a library? Just paste chunks of cut-and-pasted mail 
headers (with MIME, etc.) and do some basic string substitution; and the 
SMTP protocol is easy, just open a socket and dump some strings to it; 
or you can use 'sendmail', which is available on any UNIX (and there 
goes portability, just because they did not want to evaluate and choose 
one of the 6 Perl SMTP libraries... and rightfully so!).

 Therefore, you have to obsolete old stuff if you want there to be
 only One Obvious Way To Do It.

I'm totally in favor of obsoletion and removal of old cruft from the standard
library.
I'm totally against *not* having a standard library.

Giovanni Bajo



[Python-Dev] [py3k] Re: Proposal: No more standard library additions

2006-10-13 Thread Giovanni Bajo
I apologize; this should have gone to [EMAIL PROTECTED]


Re: [Python-Dev] Why spawnvp not implemented on Windows?

2006-10-13 Thread Giovanni Bajo
Alexey Borzenkov wrote:

 Oh! Wow! I just simply didn't know of its existence (I'm pretty much
 new to Python). Both distutils and SCons (I was looking inside them
 because they are major build systems and surely had to execute
 compilers somehow) invented their own method of searching the path,
 and seeing that created the delusion that inventing custom workarounds
 was the only way... Sorry... x_x

SCons is still compatible with Python 1.5. Distutils was written in the 
1.5-1.6 timeframe; it has been updated since, but it is basically 
unmaintained at this point (if you exclude the setuptools stuff, which 
is its disputed maintenance/evolution).

subprocess was introduced in Python 2.4.
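
For the record, the subprocess replacement for os.spawnvp-style calls is 
short, with the PATH search handled for you (a sketch):

    import subprocess

    # list arguments, no shell quoting issues; the executable is
    # looked up on PATH by the module/OS
    retcode = subprocess.call(['python', '-c', 'print 42'])
    print('exited with %d' % retcode)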
-- 
Giovanni Bajo



Re: [Python-Dev] Python 2.5 performance

2006-10-12 Thread Giovanni Bajo
Kristján V. Jónsson wrote:

 This is an improvement of another 3.5 %.
 In all, we have a performance increase of more than 10%.
 Granted, this is from a single set of runs, but I think we should
 start considering making PCBuild8 a supported build.

Kristján, I wonder if the performance improvement comes from ceval.c only
(or maybe a few other selected files). Is it possible to somehow link the
PGO-optimized ceval.obj into the VS2003 project?
-- 
Giovanni Bajo



Re: [Python-Dev] PEP 355 status

2006-09-30 Thread Giovanni Bajo
Guido van Rossum wrote:

 OK. Pronouncement: PEP 355 is dead. The authors (or the PEP editor)
 can update the PEP.

 I'm looking forward to a new PEP.

It would be terrific if you gave us some clue about what is wrong with 
PEP 355, so that the next guy does not waste his time. For instance, I 
find PEP 355 incredibly good for my own path manipulation (much cleaner 
and more concise than the awful os.path + os + shutil + stat mix), and I 
have trouble understanding what is *so* wrong with it.

You said it's an amalgam of unrelated functionality, but you didn't say 
what exactly is unrelated, in your view.
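
For concreteness, this is the kind of mix I mean -- a routine task that 
today needs four modules (a sketch of the status quo, not of the PEP's 
API):

    import os
    import shutil
    import stat

    def backup_readonly(src, dstdir):
        if not os.path.isdir(dstdir):
            os.makedirs(dstdir)
        dst = os.path.join(dstdir, os.path.basename(src))
        shutil.copyfile(src, dst)
        os.chmod(dst, stat.S_IREAD)   # make the backup read-only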

Giovanni Bajo



Re: [Python-Dev] New relative import issue

2006-09-23 Thread Giovanni Bajo
Armin Rigo wrote:

 This doesn't match my experience, which is that sys.path hackery is
 required in any project that is larger than one directory, but is not
 itself a library.  [...]

myapp/
    main.py
    a/
        __init__.py
        b.py
        test_b.py
    c/
        __init__.py

 This theoretical example shows main.py (the main entry point) at the
 root of the package directories - it is the only place where it can be
 if it needs to import the packages a and c.  The module a.b can import
 c, too (and this is not bad design - think about c as a package
 regrouping utilities that make sense for the whole application).  But
 then the testing script test_b.py cannot import the whole application
 any more.  Imports of a or c will fail, and even a relative import of
 b will crash when b tries to import c.  The only way I can think of
 is to insert the root directory in sys.path from within test_b.py,
 and then use absolute imports.

This also matches my experience, but I never used sys.path hackery for this
kind of thing. I either set PYTHONPATH while I work on myapp (which I
consider not such a big trouble after all, and surely much less invasive than
adding specific sys.path-tweaking code into all the tests), or, even
more simply, I run the test from the myapp main directory (manually typing
python a/test_b.py).

There is also another possibility, which is having a smarter test framework
where you can specify substrings of test names. I don't know py.test in detail,
but in my own framework I can say something like ./run_tests.py PAT, which
basically means recursively discover and run all files named test_NAME,
where PAT is a substring of NAME.
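
For illustration, a minimal sketch of what such a run_tests.py can look
like (names and details are made up; it is not my actual framework):

#!/usr/bin/env python
import os, sys

def main(pattern):
    # Recursively discover files named test_*.py and run those whose
    # name contains the given substring.
    for root, dirs, files in os.walk("."):
        for fn in files:
            if fn.startswith("test_") and fn.endswith(".py") and pattern in fn:
                os.system("%s %s" % (sys.executable, os.path.join(root, fn)))

if __name__ == "__main__":
    main(sys.argv[1:] and sys.argv[1] or "")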

 (For example, to support this way of organizing applications, the 'py'
 lib provides a call py.magic.autopath() that can be dropped at the
 start of test_b.py.  It hacks sys.path by guessing the real root
 according to how many levels of __init__.py there are...)

Since I consider this more of an environmental problem, I would not find
any kind of solution at the single-module level satisfying (and even less so
one requiring so much guesswork as this one).

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Minipython

2006-09-23 Thread Giovanni Bajo
Milan Krcmar wrote:

 Current (2.5) stripped and gzipped (I am going to use a compressed
 filesystem) CPython binary, compiled with defaults on a i386/glibc
 Linux, results in 500 KiB of flash. How to make the Python
 interpreter even smaller?

In my experience, the biggest gain can be obtained by dropping the rarely-used
CJK codecs (for Asian languages). That should sum up to almost 800K
(uncompressed), IIRC. After that, I once had to strip down the binary even
more, and found out (by guesswork and inspection of map files) that there is no
other low-hanging fruit. By carefully selecting which modules to link in, I was
able to cut another 300K or so, but nothing really incredible. I would
also suggest -ffunction-sections in these cases, but you might already know
that.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Removing __del__

2006-09-23 Thread Giovanni Bajo
Marcin 'Qrczak' Kowalczyk wrote:

 1) There's a way to destruct the handle BEFORE __del__ is called,
 which would require killing the weakref / deregistering the
 finalization hook.

 Weakrefs should have a method which runs their callback and
 unregisters them.

 2) The objects required in the destructor can be mutated / changed
 during the lifetime of the instance. For instance, a class that
 wraps Win32 FindFirstFirst/FindFirstNext and support transparent
 directory recursion needs something similar.

 Listing files with transparent directory recursion can be implemented
 in terms of listing files of a given directory, such that a finalizer
 is only used with the low level object.

 Another example is a class which creates named temporary files
 and needs to remove them on finalization. It might need to create
 several different temporary files (say, self.handle is the filename
 in that case)[1], so the filename needed in the destructor changes
 during the lifetime of the instance.

 Again: move the finalizer to a single temporary file object, and refer
 to such object instead of a raw handle.

Yes, I know Python is Turing-complete even without __del__, but that is not my
point. The fact that we can enhance weakrefs and find a very complicated way to
solve the problems which __del__ solves easily right now does not make things
different. People are still proposing to drop a feature which users perceive
as easy, and replace it with a complicated set of workarounds which are
prone to mistakes, more verbose, and hard to learn and maintain.

I'm totally in favor of the general idea of dropping rarely used features (like
__var in the other thread). I just can't see how dropping __del__ makes things
easier, while it surely makes life a lot harder for the legitimate users of it.
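
Just to give an idea of the verbosity involved, here is a sketch of the
weakref-based replacement for a two-line __del__ (my own rendition of the
proposed workaround):

import os, weakref

_live_refs = set()

class NamedTempFile(object):
    # Removes its file when the instance is collected, without __del__.
    def __init__(self, filename):
        self.filename = filename
        def cleanup(ref, filename=filename):
            # Must not reference self, or the object would never die.
            _live_refs.discard(ref)
            try:
                os.remove(filename)
            except OSError:
                pass
        # The weakref must be kept alive elsewhere (here, a module-level
        # registry): a weakref dying together with its referent does not
        # fire its callback.
        _live_refs.add(weakref.ref(self, cleanup))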

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] GCC patch for catching errors in PyArg_ParseTuple

2006-09-22 Thread Giovanni Bajo
Martin v. Löwis wrote:

 I'll post more about this patch in the near future, and commit
 some bug fixes I found with it, but here is the patch, in
 a publish-early fashion.

 There is little chance that this can go into GCC (as it is too
 specific), so it likely needs to be maintained separately.
 It was written for the current trunk, but hopefully applies
 to most recent releases.

A way not to maintain this patch forever would be to devise a way to make
format syntax pluggable / scriptable. There have been previous discussions
on the GCC mailing lists.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New relative import issue

2006-09-21 Thread Giovanni Bajo
Oleg Broytmann wrote:

 There really shouldn't be
 any such thing as sys.path -- the view that any
 given module has of the package namespace should
 depend only on where it is

I do not understand this. Can you show an example? Imagine I have
 two servers, Linux and FreeBSD, and on Linux python is in /usr/bin,
 home is /home/phd, on BSD these are /usr/local/bin and /usr/home/phd.
 I have some modules in site-packages and some modules in
 $HOME/lib/python. How can I move programs from one server to the
 other without rewriting them (how can I not to put full paths to
 modules)? I use PYTHONPATH manipulation - its enough to write a shell
 script that starts daemons once and use it for many years. How can I
 do this without sys.path?!

My idea (and interpretation of Greg's statement) is that a module/package
should be able to live with either relative imports within itself, or fully
absolute imports. No sys.path *hackery* should ever be necessary to access
modules in sibling namespaces. Either it's an absolute import, or a relative
(internal) import. A sibling import is a symptom of wrong design of the
packages.

This is how I usually design my packages at least. There might be valid use
cases for doing sys.path hackery, but I have yet to find them.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Testsuite fails on Windows if a space is in the path

2006-09-18 Thread Giovanni Bajo
Martin v. Löwis wrote:

 People are well-advised to accept the installer's default directory.

 That's very true, but difficult to communicate. Too many people
 actually
 complain about that, and some even bring reasonable arguments (such
 as the ACL in c:\ being too permissive for a software installation).

Besides, it won't be allowed in Vista with the default user permissions.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Unicode Imports

2006-09-08 Thread Giovanni Bajo
Guido van Rossum [EMAIL PROTECTED] wrote:

 IMO it's the call of the release managers. Board members ought to
 trust the release managers and not apply undue pressure.


+1, but I would love to see a more formal definition of what a bugfix is,
which would reduce the ambiguous cases, and thus reduce the number of times the
release managers are called to pronounce.

Other projects, for instance, describe point releases as open for regression
fixes only, which means that a patch, to be eligible for a point release, must
fix a regression (something which used to work before, and doesn't anymore).

Regressions are important because they affect people wanting to upgrade Python.
If something never worked before (like this unicode path thingie), surely
existing Python users are not affected by the bug (or they have already
workarounds in place), so that NOT having the bug fixed in a point release is
not a problem.

Anyway, I'm not pushing for this specific policy (even if I like it): I'm just
suggesting Release Managers to more formally define what should and what should
not go in a point release.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Error while building 2.5rc1 pythoncore_pgo on VC8

2006-09-04 Thread Giovanni Bajo
Fredrik Lundh wrote:

 That error mentioned in that post was in pythoncore module.
 My error is while compiling pythoncore_pgo module.
 
 iirc, that's a partially experimental alternative build for playing
 with performance guided optimizations.  are you sure you need 
 that module ?

Oh yes, it's a 30% improvement in pystone, for free.
-- 
Giovanni Bajo
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dicts are broken Was: unicode hell/mixing str and unicode as dictionary keys

2006-08-05 Thread Giovanni Bajo
Bob Ippolito [EMAIL PROTECTED] wrote:

 Well it's not recomended to mix strings and unicode in the dictionaries
 but if we mix for example integer and float we have the same thing. It
 doesn't raise exception but still it is not expected behavior for me:
 d = { 1.0: 10, 2.0: 20 }
 then if i somewhere later do:
 d[1] = 100
 d[2] = 200
 to have here all floats in d.keys(). May be this is not a best example.

 There is a strong difference. Python is moving towards unifying number
 types in a way (see the true division issue): the idea is that, all in
 all, the user shouldn't really care what type a number is, as long as he
 knows it's a number. On the other hand, unicode and str are going to
 diverge more and more.

 Well, not really. True division makes int/int return float instead of
 an int. You really do have to care if you have an int or a float most
 of the time, they're very different semantically.

Then I'd ask why Python goes through hoops to make sure that hash(1.0) ==
hash(1) in the first place.
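
For the record, the behaviour in question:

>>> hash(1) == hash(1.0)
True
>>> d = {1.0: 10}
>>> d[1] = 100     # the value is replaced, but the key stays the float
>>> d
{1.0: 100}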

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dicts are broken Was: unicode hell/mixing str and unicode as dictionary keys

2006-08-04 Thread Giovanni Bajo
Paul Colomiets [EMAIL PROTECTED] wrote:

 Well it's not recomended to mix strings and unicode in the
 dictionaries
 but if we mix for example integer and float we have the same thing. It
 doesn't raise exception but still it is not expected behavior for me:
   d = { 1.0: 10, 2.0: 20 }
 then if i somewhere later do:
   d[1] = 100
   d[2] = 200
 to have here all floats in d.keys(). May be this is not a best
 example.

There is a strong difference. Python is moving towards unifying number types in
a way (see the true division issue): the idea is that, all in all, the user
shouldn't really care what type a number is, as long as he knows it's a number.
On the other hand, unicode and str are going to diverge more and more.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Document performance requirements?

2006-07-23 Thread Giovanni Bajo
Armin Rigo wrote:

 I think that O-wise the current CPython situation should be documented
 as a minimal requirement for implementations of the language, with
 just one exception: the well-documented don't rely on this hack in
 2.4 to make repeated 'str += str' amortized linear, for which the 2.3
 quadratic behavior is considered compliant enough.

 I suppose that allowing implementations to provide better algorithmic
 complexities than required is fine, although I can think of some
 problems with that (e.g. nice and efficient user code that would
 perform horribly badly on CPython).

I'm not sure big-O tells the whole truth. For instance, do we want to allow
an implementation to use a hash table as the underlying type for a list? It
would match the big-O requirements, but would still be slower than a plain
array because of the higher implementation overhead (a higher constant factor).

And if this is allowed, I would like to find in CPython tutorials and
documentation a simple statement like: to implement the list and match its
requirements, CPython chose a simple array as the underlying data structure.
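
For instance, such a statement would pin down observable behaviour like
this (a quick sketch using timeit):

from timeit import Timer

# With a plain array underneath, appending at the end is amortized O(1)...
print Timer("lst.append(0)", "lst = []").timeit(100000)
# ...while inserting at the front has to shift everything: O(n) per call.
print Timer("lst.insert(0, 0)", "lst = []").timeit(100000)
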
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.4, VS 2005 Profile Guided Optmization

2006-07-23 Thread Giovanni Bajo
Trent Nelson wrote:

 Has anyone else built Python with Visual Studio 2005 and played around
 with Profile Guided Optimization?

Yes, there was some work at the recent Need for Speed sprint. Python 2.5 has
a PCBuild8 directory (for VS 2005) with a specific project for PGO.

 Results were interesting, an average speedup of around 33% was
 noticeable:

Yes, they are.

 Is there any motivation in the Win32 Python dev camp to switch from
 VC6 to VS 2005?

I think Martin decided to keep VC71 (Visual Studio .NET 2003) for another
release cycle. Given the impressive results of VC8 with PGO, and the fact
that Visual Studio Express 2005 is free forever, I would hope as well for
the decision to be reconsidered.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Document performance requirements?

2006-07-21 Thread Giovanni Bajo
Jason Orendorff wrote:

 However, I'm also struggling to think of a case other than list vs
 deque where the choice of a builtin or standard library data
 structure would be dictated by big-O() concerns.

 OK, but that doesn't mean the information is unimportant.  +1 on
 making this something of a priority.  People looking for this info
 should find it in the obvious place.  Some are unobvious. (How fast is
 dict.__eq__ on average? Worst case?)

I also found out that most people tend to think of Python's lists as a
magical data structure optimized for many operations (like a rope or
something complex like that). Documenting that it's just a bare vector
(std::vector in C++) would be of great help.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] new security doc using object-capabilities

2006-07-20 Thread Giovanni Bajo
Nick Maclaren wrote:

 This recipe for safe_eval:
 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/496746
 which is otherwise very cute, does not handle this case as well: it
 tries to catch and interrupt long-running operations through a
 secondary thread, but fails on a single long operation because the
 GIL is not released and the alarm thread does not get its chance to
 run.

 Grin :-)

 You have put your finger on the Great Myth of such virtualisations,
 which applies to the system-level ones and even to the hardware-level
 ones.  In practice, there is always some request that a sandbox can
 make to the hypervisor that can lock out or otherwise affect other
 sandboxes.

 The key is, of course, to admit that and to specify what is and is
 not properly virtualised, so that the consequences can at least be
 analysed.

I agree, and in fact Brett's work on a proper security model is greatly
welcome. It's just that we mere mortals need to use eval() *now*, and that
recipe is good enough for many practical uses. If you can't win, you can at
least lose with dignity :)
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] new security doc using object-capabilities

2006-07-20 Thread Giovanni Bajo
Brett Cannon wrote:


http://svn.python.org/view/python/branches/bcannon-sandboxing/securing_python.txt?rev=50717&view=log
 .

 How do you plan to handle CPU-hogs? Stuff like execution of a
 gigantic integer multiplication.


 I don't.  =)  Protecting the CPU is damn hard to do in any form of
 portable fashion.  And even getting it to work on an OS you do know
 the details of leads to probably an interrupt  implementation and
 that doesn't sound fun.

I think the trick used by the safe_eval recipe (a separate thread which
interrupts the script through thread.interrupt_main()) shows that, in most
cases, it's possible to make sure that an embedded script does not take too
long to execute. Do you agree that this usage case (allow me to timeout an
embedded script) is something which would be a very good start in the right
direction?

Now, I wonder: in a restricted execution environment such as that depicted
in your document, how many different ways are there to make the Python
interpreter enter a long calculation loop which does not release the GIL? I
can think of bignum*bignum, bignum**bignum or similar mathematical
operations, but there are really only a few. If we could make those release the
GIL (or poll some kind of watchdog used to abort them, pretty much like they
normally poll CTRL+C), then the same trick used by the recipe could be used.
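
For clarity, this is my own condensed sketch of the recipe's watchdog trick
(to be called from the main thread; as said above, it cannot break a single
long GIL-holding operation):

import thread, threading

def eval_with_timeout(expr, glob, loc, timeout=5.0):
    # A secondary thread raises KeyboardInterrupt in the main thread if
    # the evaluation takes longer than `timeout` seconds (modulo a small
    # race if the timer fires just as eval returns).
    timer = threading.Timer(timeout, thread.interrupt_main)
    timer.start()
    try:
        return eval(expr, glob, loc)
    finally:
        timer.cancel()
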
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Strategy for converting the decimal module to C

2006-07-18 Thread Giovanni Bajo
Tim Peters wrote:

 Changing the user-visible API is a hard egg to
 swallow, and it's unfortunate that the Python code used a dict to hold
 flags to begin with.  The dict doesn't just record whether an
 exception has occurred, it also counts how many times the exception
 occurred.  It's possible that someone, somewhere, has latched on to
 that as a feature.

Especially since it was a documented one:

>>> import decimal
>>> help(decimal.Context)
Help on class Context in module decimal:

class Context(__builtin__.object)
 |  Contains the context for a Decimal instance.
[...]
 |  flags  - When an exception is caused, flags[exception] is incremented.
 |   (Whether or not the trap_enabler is set)
 |   Should be reset by user of Decimal instance.
[...]

-- 
Giovanni Bajo
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Dynamic module namspaces

2006-07-15 Thread Giovanni Bajo
Johan Dahlin wrote:

 My point is that I consider this to be a valid use case, the amount of
 saved memory is significan, and I could not find another way of doing
 it and still keep the gtk interface (import gtk; gtk.Button) to still be
 backwards compatible.

You may want to have a look at SIP/PyQt. They implement the full Qt
interface, which is rather large, but import time is blazingly fast and
memory occupation grows by only 4-5 MB at import time. The trick is that
methods are somehow generated dynamically at their first usage (but dir()
and introspection still work...).

SIP is free and generic, btw; you may want to consider it as a tool.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Community buildbots

2006-07-14 Thread Giovanni Bajo
Greg Ewing [EMAIL PROTECTED] wrote:

 from __future__ import new_classes exists, but the syntax is
 different:
 
 __metaclass__ = type
 
 Although it's not a very obvious spelling,
 particularly to the casual reader who may not be
 familiar with the intricacies of classes and
 metaclasses. I don't think it would hurt to have
 it available as a __future__ import as well.
 
 There's also the advantage that all of a
 module's future assumptions could then be
 documented uniformly in one place, i.e. in
 a __future__ import at the top.

+1.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Community buildbots (was Re: User's complaints)

2006-07-14 Thread Giovanni Bajo
Neal Norwitz [EMAIL PROTECTED] wrote:

 a longer beta period gives *external* developers more time to catch
 up, and results in less work for the end users.

 This is the part I don't get.  For the external developers, if they
 care about compatibility, why aren't they testing periodically,
 regardless of alpha/beta releases?

Because it is a cost, and I don't want to waste my time if the trunk is
unstable or whatnot. There is no official statement of the kind "all the real
development is done in branches; the trunk is always very stable, feel free to
grab it". Thus, we (external developers) assume that it's better to wait for
the first alpha or beta before starting any testing. Personally, I won't
touch something before the first beta: there are already enough known bugs when
the alphas are shipped that I prefer to wait a little bit more.

In my case, the beta period is too short. It takes a stroke of luck for me not
to be deep into some milestone or release cycle for my own software during the
beta cycle. It might well happen that I barely have time to notice a Python
beta is out, but can't afford to spend any time on it because of time
constraints. By the time I'm done with my own deadlines, Python final is
already out. But for the sake of the argument and the rest of this mail, let's
assume I have tons of spare time to dedicate to Python 2.5 beta testing.

My applications have several external dependencies (I wouldn't classify those
as many, but still around 6-7 libraries on average). For each of those
libraries, there won't be a Py2.5-ready RPM or Windows installer to grab.
Most of the time, I have to download the source package and try to
build it. This also means hunting down and fixing Py2.5-related bugs in all
those libraries *before* I can even boot my own application (which is what I
would care about). Then I occasionally hit a core Python bug while doing so,
which is good, but I have to sit and wait. Some libraries have very complex
build processes which are still distutils-based, but might require many other
external C/C++ libraries, which need to be fetched and compiled just to set up
the build environment.

Alternatively, I could cross my fingers and wait for all the maintainers of the
external libraries to have spare time and dedicate it to the Python 2.5
upgrade. If I'm lucky, by the time RC1 is out, most of them might have binary
packages available for download, or at least have their (unstable) CVS/SVN
trunk fixed for Python 2.5 (which means that I'll have to fetch that unstable
version and basically perform a forced upgrade of the library, which might
trigger another class of compatibility/feature/regression bugs in my
application, not related at all to Python 2.5 by itself, but still needing to
be dealt with).

So I think it's useless to ask people to rush testing beta releases: it takes
months to get the community synched, and it always will. It's also useless to
point the finger at external developers if they don't test Python and there is
a bug in a .0 release. Bugs happen. It's software that evolves. My own
suggestion is that it's useless to stall a release for months to give external
developers time to fix things. I think it's much better to release early,
release often, and just get the damn release out of the door. Usually, it's at
that point that the whole community starts working on it, and discovers bugs.
And so what? For production usage, .0 releases of libraries (let alone of the
interpreter of the language) are a no-go for all software, and I have known
that for a long time already. I don't ship an application of mine with a Python
.0 release no matter what, not even if the whole testsuite passes and
everything seems to work. The benefits don't outweigh the risks, so I'll
wait for the .1 or .2 release anyway. It's *very* fine by me if the .0 release
is actually the first official beta release for the community: the core
developers say "we're done", and external developers really start getting
things going.

If you really care about .0 stability from a purely commercial point of view
(it's bad advertising if many people complain that a major release is broken,
etc. etc.), I might suggest you coordinate with a small group of selected
external developers maintaining the larger external packages (Twisted and
others). You could put a list of those packages in the release PEP as
showstoppers for the release: you should not get a .0 out if those packages are
broken. I think this could help smooth out the process. If important external
libraries work, many applications relying on them will *mostly* work as well. I
personally don't think it's such a big problem if one has to fix a couple of
things in a 100K-line application to adjust it to the new .0 release, even if
it's actually because of a bug in Python itself.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com

Re: [Python-Dev] Community buildbots (was Re: User's complaints)

2006-07-14 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

 Greg> Maybe there could be an unstable release phase that lasts for a
 Greg> whole release cycle. So you'd first release version 2.n as
 Greg> unstable, and keep 2.(n-1) as the current stable release. Then
 Greg> when 2.(n+1) is ready, 2.n would become stable and 2.(n+1) would
 Greg> become the new unstable.

 In GCC don't they do an odd (stable)/even (unstable) release
 schedule?  Same for Linux kernels?  Would that help?

No. Linux kernel releases are more aggressive because all the patches are
mostly developed in different branches/repositories and get merged when they
are already well tested and integrated. Linus can merge
literally hundreds of patches daily into his *stable* tree, and do releases
from it even weekly, because most destabilizing work is done in
large branches carried on for months before it is even evaluated for
merging; or because patches settled in the -mm tree for months.
Linus' tree is kind of a release branch, with the difference that he is the
BDFL and does what he wants with his tree :) To keep this in perspective,
remember also that they don't have *any* kind of testsuite (nor a debugger,
if I might say).

GCC has a more old-fashioned release process, where the trunk evolves
through 3 stages: Stage 1 is open for all kinds of changes (e.g. from simple
polishing/refactoring to merging of large branches containing work of
several man-years). Stage 2 is still open for new features, but not for big
merges. Stage 3 is feature-frozen, bug-fixing only. Then the trunk is
branched into the new release branch, and the trunk goes back to Stage 1.
Nominally, a stage lasts 3 months, but Stage 3 often stretches up to 6
months.

The release branches are open *only* for regression fixes (that is, fixes
that correct things that used to work in previous releases but do not work
anymore). Any regression fix (with a corresponding Bugzilla entry, where
it's marked and confirmed as a regression) is applied to the trunk *and* the
open release branches where the regression exists. For convoluted or
dangerous regression fixes, maintainers usually prefer to wait a week for
the patch to settle down on the trunk before applying it to the release
branches. The release manager pulls dot releases from the release branch.
Usually, the first release (.0) happens 3-4 months *after* the release
branch was created, that is, after several months of regression-only patches
being merged to it (while new work is being done on the trunk in parallel,
in its aggressive Stage 1).

The 3-stage work in the trunk is streamlined by the release manager. At the
beginning of Stage 1, a detailed technical list of on-going projects (new
features) is presented to the mailing list, explaining the current status of
the work and its ETA, and the release manager then publishes a work-plan for
Stages 1 and 2, telling which projects will be merged when. This avoids
multiple large projects hitting the trunk at the same time, which would cause
many headaches to all the other developers. The work-plan is managed and
updated in the GCC Wiki (which is off-line right now, but I'll post a link as
an example when it's back).
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Community buildbots (was Re: User's complaints)

2006-07-13 Thread Giovanni Bajo
Barry Warsaw wrote:

 OTOH, a more formal gamma phase would allow us to say
 absolutely no changes are allowed now unless it's to fix backward
 compatibility.  No more sneaking in new sys functions or types
 module constants wink during the gamma phase.

This is pretty common in project management. For instance, GCC has a rather
complex 4-stage release process, whose last phase (beginning at the point
the release is branched in SVN) allows commits only for regression fixes.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Community buildbots

2006-07-13 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

 (Aside: IMHO, the sooner we can drop old-style classes entirely, the
 better. That is one bumpy Python upgrade process that I will be _very_
 happy to do.)

I think Python should have a couple more future imports. from __future__
import new_classes and from __future__ import unicode_literals would be
really welcome, and would smooth the Py3k migration process.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Switch and static, redux

2006-07-07 Thread Giovanni Bajo
Guido van Rossum [EMAIL PROTECTED] wrote:

 So, my proposal is to give up on static, accept PEP 3103 with the
 following options:
   - Syntax alternative 2+B (unindented cases, 'case in ...' for
 multiple cases).
   - Semantics option 3 (def-time freezing)


I know it's only a bikeshed issue here, but wouldn't it be the first case where
a statement ending with : does not introduce an indented suite? Is it really
worth creating this non-orthogonality in the language?

IMHO, if we went for indented cases, we could teach editors to indent cases
*only* 1 or 2 spaces. That would preserve the orthogonality of the language,
and avoid consuming too much horizontal space.

Or, what about optionally indented cases? That is, allow both forms as correct
syntax. It would make it just a matter of style at that point (and Python will
finally have its first religious wars over indentation AT LAST! :)

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Explicit Lexical Scoping (pre-PEP?)

2006-07-04 Thread Giovanni Bajo
Talin wrote:

 This is sort of a re-do of an earlier proposal which seems to have
 gotten lost in the shuffle of the larger debate.

 I propose to create a new type of scoping rule, which I will call
 explicit lexical scoping, that will co-exist with the current
 implicit scoping rule that exists in Python today.

Interesting. What if for-loops implicitly used my on the iteration
variable? That would solve the binding problem we were discussing and make
lambdas Do The Right Thing(TM) when used in loops.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 2.5 and beyond

2006-07-01 Thread Giovanni Bajo
Andrew Koenig wrote:

 Suppose I write

 x = []
 for i in range(10):
 x.append(lambda:i)
 print [f() for f in x]

 This example will print [9, 9, 9, 9, 9, 9, 9, 9, 9, 9], which I think
 is wildly unintuitive.

That is my point: to me, it's counter-intuitive just like the infamous
"except NameError, TypeError". I believe that names in
lambdas/nested-functions referring to local names in the outer scope should
really be bound at function definition time (much like default arguments
are).

 What surprises me even more is that if I try to define such a variable
 explicitly, it still doesn't work:

 x = []
 for i in range(10):
 j = i
 x.append(lambda:j)
 print [f() for f in x]

 This example still prints [9, 9, 9, 9, 9, 9, 9, 9, 9, 9].  If I
 understand the reason correctly, it is because even though j is
 defined only in the body of the loop, loop bodies are not scopes, so
 the variable's definition is hoisted out into the surrounding
 function scope.

Yes. And by itself, I like this fact because it's very handy in many cases.
It's also handy that the iteration variable of the for loop is
accessible after the loop has terminated (in fact, this specific
behaviour is already listed among the won't-change items for Py3k).

 On the other hand, I think that
 the subtle pitfalls that come from allowing for variables to leak
 into the surrounding scopes are much harder to deal with and
 understand than would be the consequences of restricting their scopes
 as outlined above.

As I said, to me there's nothing wrong with the way Python variables leak
out of the suites; or, in other words, with the fact that Python has only
two namespaces, the function-local and the global namespace. What I don't
like is that the lookup of a lambda's names is fully deferred to execution
time. This behaviour is already not fully followed for local variables in
functions, since:

>>> y = 0
>>> def foo():
...     print y
...     y = 2
...
>>> foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in foo
UnboundLocalError: local variable 'y' referenced before assignment

which means that Python users *already* know that a variable is not really
looked up only at run-time, but there's something going on even at
function definition time. I wouldn't see anything wrong if lambdas (or nested
scopes) did the same for names provably coming from the outer scope.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 2.5 and beyond

2006-06-30 Thread Giovanni Bajo
Tim Peters [EMAIL PROTECTED] wrote:

 ...
 Incidentally, I think that lexical scoping would also deal with the
 problem
 that people often encounter in which they have to write things like
 lambda
 x=x: where one would think lambda x: would suffice.

 They _shouldn't_ encounter that at all anymore.  For example,

 >>> def f(x):
 ...     return lambda: x+1
 >>> f(3)()
 4

 works fine in modern Pythons.

Yes but:

>>> a = []
>>> for i in range(10):
...     a.append(lambda: i)
...
>>> print [x() for x in a]
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]

This subtle semantic of lambda is quite confusing, and still forces people to
use the i=i trick.
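
For completeness, the trick itself:

>>> a = []
>>> for i in range(10):
...     a.append(lambda i=i: i)   # the default argument binds *now*
...
>>> print [x() for x in a]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]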

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 2.5 and beyond

2006-06-30 Thread Giovanni Bajo
[Giovanni Bajo]
 Yes but:

 >>> a = []
 >>> for i in range(10):
 ...     a.append(lambda: i)
 ...
 >>> print [x() for x in a]
 [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]

 This subtle semantic of lambda is quite confusing, and still forces people to
 use the i=i trick.

[Tim Peters]
 So stay away from excruciating abuses of lexical scoping you don't
 understand <wink>.  What do you expect `i` to refer to?  "Oh, it should
 guess that I didn't really mean to defer evaluation of the lambda body
 at all, but instead evaluate the lambda body at the time I define the
 lambda and then synthesize some other function that captures the
 specific outer bindings in effect at lambda-definition time" doesn't
 really cut it.

I think I understand what happens, I just don't know whether this can be
fixed or not. Unless you are saying that the above behaviour is not only a
complex side-effect of the way things are, but the way things should be. Do you
agree that it would be ideal if the above code generated range(10) instead of
[9]*10, or do you believe that the current behaviour is more sound (and if so,
why)?

As for actually implementing this change of semantics: given that `i` is a
local variable in the outer scope (assuming it's all within a function),
couldn't Python early-bind it, by realizing that `i` is not an argument of
the lambda but a local of the outer scope? At
worst, couldn't Python do the i=i trick by itself when it sees that `i` is a
local in the outer scope? Right now I can't think off-hand of a case in which
this would break things.

[Tim Peters]
 This isn't typical use for lambda,


Yes, maybe it's not the most used idiom and Andrew wasn't referring to this,
but it happens quite often to me (where 'often' means 'many times' among my
rare usages of lambda).

For instance, in GUI code, it's common to do things like:

for b in self.buttons:
    self.setEventCallback(b, "clicked",
        lambda: self.label.setText("I pressed button %r" % b))

... which of course won't work, as written above.
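
What one actually has to write is the usual default-argument workaround
(same hypothetical setEventCallback API as above):

for b in self.buttons:
    self.setEventCallback(b, "clicked",
        lambda b=b: self.label.setText("I pressed button %r" % b))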

Giovanni Bajo


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 328 and PEP 338, redux

2006-06-29 Thread Giovanni Bajo
Nick Coghlan wrote:

 Writing modules that use the approach but want to work with both 2.5
 and 2.6 becomes a little more annoying - such modules have to finish
 with the coda:

 if __name__ == '__main__':
     import sys
     if sys.version_info < (2, 6):
         sys.exit(__main__(sys.argv))

Actually, this should be enough:

if __name__ == '__main__':
    import sys
    sys.exit(__main__(sys.argv))

and it will still work for the python -mpackage.module case which we're
discussing. The if suite can be dropped when you no longer need pre-2.6
compatibility.

 The interpreter would also have to be careful to ensure that a
 __main__ variable in the globals isn't the result of a module doing
 import __main__.

Is there a real-world use case for import __main__? Otherwise, I say screw it :)

 Another downside I've discovered recently is that calling sys.exit()
 prevents the use of a post-mortem debugging session triggered by -i
 or PYTHONINSPECT. sys.exit() crashes out of the entire process, so
 the post-mortem interactive session never even gets started.

In fact, this is an *upside* of implementing the __main__ PEP, because the
call to sys.exit() is not needed in that case. All of my Python programs
right now need a sys.exit() *because* the __main__ PEP was not implemented.

 The only real upside I can see to PEP 299 is that main is a
 function is more familiar to people coming from languages like C
 where you can't have run-time code at the top level of a module.
 Python's a scripting language though, and having run-time logic at
 the top level of a script is perfectly normal!

My personal argument is that if __name__ == '__main__' is totally
counter-intuitive and unpythonic. My memory proves the point: after many years,
I still have to think a couple of seconds before remembering whether I
should use __file__, __name__ or __main__ and where to put the damn quotes.
The fact that you're comparing a variable name and a string literal which
seem very similar (both with the double underscore syntax) is totally
confusing at best.

Also, try teaching it to a beginner and he will go "huh, wtf". To fully
understand it, you must understand how import exactly works (that is, the
fact that importing a module equals evaluating all of its statements one by
one). A function called __main__ which is magically invoked by Python
itself is much, much easier to grasp. A different, clearer spelling for the
if condition (like: if not __imported__) would help as well.
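
That is, hypothetically, something like this (valid syntax today; the
automatic invocation by the interpreter is of course the hypothetical part):

def __main__(argv):
    # The interpreter itself would call this when the module is run as
    # a script, and never on a plain import.
    print "Hello from", argv[0]
    return 0
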
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 328 and PEP 338, redux

2006-06-29 Thread Giovanni Bajo
Guido van Rossum wrote:

 Real-world usage case for import __main__? Otherwise, I say screw it
 :) [...] My personal argument is that if __name__ == '__main__' is
 totally counter-intuitive and unpythonic. My memory proves the point:
 after many years, I still have to think a couple of seconds before
 remembering whether I should use __file__, __name__ or __main__ and
 where to put the damn quotes. The fact that you're comparing a
 variable name and a string literal which seems very similar (both
 with the double underscore syntax) is totally confusing at best.

 Also, try teaching it to a beginner and he will go huh wtf. To
 fully understand it, you must understand how import exactly works
 (that is, the fact that importing a module equals evaluating all of
 its statement one by one). A function called __main__ which is
 magically invoked by the python itself is much much easier to grasp.
 A different, clearer spelling for the if condition (like: if not
 __imported__) would help as well.

 You need to watch your attitude, and try to present better arguments
 than I don't like it.

Sorry for the attitude. I thought I brought arguments against if __name__:
- Harder to learn for beginners (requires deeper understanding of import
workings)
- Harder to remember (a string literal compared to a name with the same
naming convention)
- Often requires explicit sys.exit() which breaks python -i
- Broken by -mpkg.mod, and we ended up with another __magic_name__ just
because of this.
- (new) Defining a main() function is already the preferred style for
reusability, so __main__ would encourage the preferred style.

If you believe that these arguments collapse to "I don't like it", then no,
I don't have any arguments.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 328 and PEP 338, redux

2006-06-28 Thread Giovanni Bajo
Guido van Rossum wrote:

 This is where I wonder why the def __main__() PEP was rejected in
 the first place. It would have solved this problem as well.

 Could this be reconsidered for Py3k?

 You have a point.

AFAICT, there's nothing preventing it from being added in 2.6. It won't
break existing code using the if __name__ == '__main__' paradigm.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 328 and PEP 338, redux

2006-06-27 Thread Giovanni Bajo
Phillip J. Eby wrote:

 Actually, maybe we *do* want to, for this usage.

 Note that until Python 2.5, it was not possible to do python -m
 nested.module, so this change merely prevents *existing* modules from
 being run this way -- when they could not have been before!

 So, such modules would require a minor change to run under -m.  Is this
 actually a problem, or is it a new feature?

This is where I wonder why the def __main__() PEP was rejected in the
first place. It would have solved this problem as well.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Moving the ctypes repository to python.org

2006-06-23 Thread Giovanni Bajo
Thomas Heller wrote:

 Is it possible to take the CVS repository files (they can be accessed
 with rsync), and import that, preserving the whole history, into SVN?

Yes:
http://cvs2svn.tigris.org/

You just need a maintainer of the Python SVN repository to load the dump
files this tool generates.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 2.4 extensions require VC 7.1?

2006-06-16 Thread Giovanni Bajo
Bill Janssen [EMAIL PROTECTED] wrote:

 I'm trying to build a Python extension, and Python 2.4 insists on the MS
 Visual C++ compiler version 7.1, which is included with the MS VC++ 2003
 toolkit.  This toolkit is no longer available for download from
 Microsoft (superseded by the 2005 version), so I'm stuck.

 This seems sub-optimal.  I'm afraid I don't follow the Windows track
 closely; has this been fixed for 2.5, or should it be reported as a
 bug?


It was discussed before, and the agreement was to use VS 2003 for another cycle
(i.e. 2.5). But VS 2003 no longer being available for download is an important
new fact, and we might want to rediscuss the issue.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] UUID module

2006-06-12 Thread Giovanni Bajo
Ka-Ping Yee [EMAIL PROTECTED] wrote:

 for dir in ['', r'c:\windows\system32', r'c:\winnt\system32']:

 Can we get rid of these absolute paths? Something like this should
 suffice:

 >>> from ctypes import *
 >>> buf = create_string_buffer(4096)
 >>> windll.kernel32.GetSystemDirectoryA(buf, 4096)
 17
 >>> buf.value.decode("mbcs")
 u'C:\\WINNT\\system32'

 I'd like to, but i don't want to use a method for finding the system
 directory that depends on ctypes.

Why?

 Is there a more general way?

GetSystemDirectory() is the official way to find the system directory. You can
either access it through ctypes or through pywin32, but I guess we're moving to
ctypes for this kind of stuff, since it's bundled in 2.5. I don't know of any
Python sys/os method to get at that directory. Another thing that you
might do is to drop those absolute system directories altogether. After all,
ipconfig should always be in the path.

As a last note, you are parsing ipconfig output assuming an English Windows
installation. My Italian Windows 2000 has localized output.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance (was Re: Please stopchanging wsgiref on the trunk)

2006-06-12 Thread Giovanni Bajo
Guido van Rossum wrote:

 I personally think that, going forward, external maintainers should
 not be granted privileges such as are being granted by PEP 360, and
 an inclusion of a package in the Python tree should be considered a
 fork for all practical purposes. If an external developer is not
 okay with such an arrangement, they shouldn't contribute.

 This is going to make it tougher to get good contributions, where
 good means has existing users and a maintainer committed to
 supporting them.

 To which I say, fine. From the Python core maintainers' POV, more
 standard library code is just more of a maintenance burden. Maybe we
 should get serious about slimming down the core distribution and
 having a separate group of people maintain sumo bundles containing
 Python and lots of other stuff.

-1000.

One of Python's biggest strengths, and one that I personally rely on a lot,
is the large *standard* library. It means that you can write scripts and
programs that will run on any Python installation out there, no matter how
many eggs were downloaded before, no matter whether the Internet connection
is available or not, no matter if the user has privileges to install
extensions, even if the SourceForge mirror is down, even if SourceForge
changed their HTML and now the magic code can't grok it anymore, etc etc
etc.

If Python were to lose this standard library in favor of several different
distributions, users could not sensibly write a program anymore without
incurring the risk of using packages not available to some users. Perl has
this problem with CPAN, with system administrators going through hoops to
write admin scripts that do not rely on any external package, just because
you can't be sure whether a package is installed or not; this leads to code
duplication (a local re-implementation of what the external package already
provides, since the package can't be reliably used), and to bugs (since the
local copy of the functionality is likely to be buggier than the widespread
implementation in the external package).

Let's not get into this mess, please. I think we just need a smoother way to
maintain the standard library, not an agreement to remove it, just because
we cannot find a way to maintain it properly. The fact that there hundreds
of unreviewed patches to the standard library made by wannabe contributors
is a blatant sign that something *can* be improved.
-- 
Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Source control tools

2006-06-12 Thread Giovanni Bajo
Thomas Wouters [EMAIL PROTECTED] wrote:

 It would be an important move towards world peace, if it didn't inspire
 whole new SCM-holy-wars :-)  I have a fair bit of experience with different
 SCMs (VC, source control tools, however you want to call them) so I'll take
 this opportunity to toss up some observations. Not that switching to another
 SCM will really solve the issues you are referring to, but I happen to think
 switching to another SCM is a great idea :)

Would you like to show me a comprehensive list of which procedures need to be
improved, wrt the current workflow with SVN?

My own experience is that SVN is pretty bare-bones: it provides powerful
low-level primitives, but basically no facilities to abstract these primitives
into higher-level concepts. Basically, it makes many things possible, but few
convenient. I believe that tools like svnmerge show that it is indeed possible
to build upon SVN to construct higher-level layers, but there's quite some work
to do.

I would like also to note that SVK is sexy :)

 The real reason I think we should consider other SCMs is because I fear
 what the history will look like when 3.0 is done. In Subversion, merges of
 branches don't preserve history -- you have to do it yourself. Given the
 way Subversion works, I don't think that'll really change;

Actually, Subversion is growing merge-tracking facilities, but it's a long way
from what you'd like to happen with the Py3k history (and I don't think that's
achievable whatsoever).

 Git, the 'low level SCM' developed for the Linux kernel, is
 incredibly fast in normal operation and does branches quite well. I
 suspect much of its speed is lost on non-Linux platforms,
 though, and I don't know  how well it works on Windows ;)

The higher-level part of GIT is written in a combination of bash and perl, with
heavy usage of textutils, coreutils and whatnot. It's basically unportable
by design to native Windows. I guess the only sane approach is to use it under
Cygwin. IMO this is a big no-go for real Windows developers.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] External Package Maintenance (was Re: Please stopchanging wsgiref on the trunk)

2006-06-12 Thread Giovanni Bajo
Phillip J. Eby [EMAIL PROTECTED] wrote:

 Control isn't the issue; it's ensuring that fixes don't get lost or
 reverted from either the external version or the stdlib version. Control
 is merely a means to that end.  If we can accomplish that via some other
 means (e.g. an Externals/ subtree), I'm all for it.  (Actually, perhaps
 Packages/ would be a better name, since the point is that they're packages
 that are maintained for separate distribution for older Python versions.
 They're really *not* external any more, they just get snapshotted for
 release.)

IMO, the better way is exactly the one you depicted: move the official
development tree into this Externals/ dir *within* Python's repository. Off
that, you can have your own branch for experimental work, from which you can
extract your own releases, and merge changes back and forth much more simply
(since, residing in the same repository, you can use svnmerge-like features
to find out modifications and whatnot).

Maintaining an external repository seems like a larger effort, and probably not
worth it.

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] UUID module

2006-06-11 Thread Giovanni Bajo
Ka-Ping Yee [EMAIL PROTECTED] wrote:

 Quite a few people have expressed interest in having UUID
 functionality in the standard library, and previously on this
 list some suggested possibly using the uuid.py module i wrote:

 http://zesty.ca/python/uuid.py


Some comments on the code:

 for dir in ['', r'c:\windows\system32', r'c:\winnt\system32']:

Can we get rid of these absolute paths? Something like this should suffice:

>>> from ctypes import *
>>> buf = create_string_buffer(4096)
>>> windll.kernel32.GetSystemDirectoryA(buf, 4096)
17
>>> buf.value.decode("mbcs")
u'C:\\WINNT\\system32'


  for function in functions:
      try:
          _node = function()
      except:
          continue

This also hides typos and whatnot. I guess it's better if each function catches
its own exceptions, and either returns None or raises a common exception (like
a class _GetNodeError(RuntimeError)) which is then caught.
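
A sketch of what I mean (the actual parsing is elided):

import os

class _GetNodeError(RuntimeError):
    pass

def _ipconfig_getnode():
    # Each helper catches only the failures it expects and maps them
    # onto one well-known exception, so a typo inside the helper is no
    # longer silently swallowed by the caller.
    try:
        output = os.popen("ipconfig /all").read()
    except (IOError, OSError):
        raise _GetNodeError
    # ... parse `output` for the MAC address here, raising
    # _GetNodeError when it cannot be found ...
    raise _GetNodeError

for function in [_ipconfig_getnode]:
    try:
        _node = function()
        break
    except _GetNodeError:
        continue
else:
    _node = None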

Giovanni Bajo

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Subversion repository question - back up to older versions

2006-06-09 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

  I have three Python branches, trunk, release23-maint and
  release24-maint.  In the (for example) release24-maint, what
 svn up  command would I use to get to the 2.4.2 version?

  Tim> First question:

  Tim>     cd to the root of your release24-maint checkout, then
  Tim>     svn switch svn+ssh://[EMAIL PROTECTED]/python/tags/r242

 How is that different than noting that r242 corresponds to revision
 39619
 and executing:

 svn up -r 39619


If you realize that each file/directory in Subversion is uniquely identified by
a two-dimensional coordinate system [url, revision] (given a checkout, you can
use svn info to get its coordinates), then we can say that svn up -r 39619
keeps the url unchanged and changes the revision to whatever number you
specified. In other words, you get the state that URL was in at that time. For
instance, if you execute it within the trunk working copy, you will get the
trunk as it was at the moment 2.4.2 was released.

On the other hand, svn switch moves the url: it basically moves your
checkout from [whatever_url, whatever_rev] to [url_specified, HEAD],
downloading the minimal set of diffs to do so. Given that /tags/r242 is a tag,
it means that any revision is good, since nobody is going to commit into that
directory (it will stay unchanged forever). So [/tags/r242, HEAD] is the same
of any other [/tags/r242, REVNUM]  (assuming of course that /tags/r242 was
already created at the time of REVNUM).

So basically you want svn switch to [/tags/r242, HEAD] if you don't plan on
doing modifications, while you want [/branches/release24-maint, HEAD] if you
want to work on the 2.4 branch. Going to [/branches/release24-maint, 39619]
does not really serve many purposes: you have to find out and write 39619
manually, you still can't do commits since of course you want to work on the
top of the branch, and you get less meaningful information if you later run
svn info on the working copy (as you're probably going to forget what
[/branches/release24-maint, 39619] means pretty soon, while [/tags/r242, NNN]
is more clear).
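
To make the difference concrete (the repository root is abbreviated with ...;
these are just the commands discussed above, not a recommended workflow):

    # track the branch tip, ready for commits:
    svn switch svn+ssh://.../python/branches/release24-maint
    # or pin the checkout to the frozen 2.4.2 tag:
    svn switch svn+ssh://.../python/tags/r242
    # same URL, older revision -- useful mostly for archaeology:
    svn up -r 39619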

Giovanni Bajo



Re: [Python-Dev] Subversion repository question - back up to older versions

2006-06-09 Thread Giovanni Bajo
[EMAIL PROTECTED] wrote:

 Giovanni If you realize that each file/directory in Subversion is
 Giovanni uniquely identified by a 2-space coordinate system [url,
 Giovanni revision] ...

 Thanks, I found this very helpful.  I found it so helpful that I
 added a question to the dev faq with this as the answer.  Hope you
 don't mind... It should show up on

 http://www.python.org/dev/faq/

 as question 3.23 in a few minutes.

Sure, I'm glad to help. You may want to revise it a little since it wasn't
meant to be read out of context...
-- 
Giovanni Bajo



Re: [Python-Dev] Proposal for a new itertools function: iwindow

2006-05-27 Thread Giovanni Bajo
Aahz [EMAIL PROTECTED] wrote:

 Some open question remain:
 - should iwindow return lists or tuples?
 - what happens if the length of the iterable is smaller than the
 window size, and no padding is specified? Is this an error? Should
 the generator return no value at all or one window that is too small?

 You should probably try this idea out on comp.lang.python; if there is
 approval, you'll probably need to write a PEP because of these issues.
 Note that my guess is that the complexity of the issues relative to
 the benefit means that BDFL will probably veto it.


A PEP for adding a simple generator to a library of generators??

The function does look useful to me. It's useful even if it returns a tuple
instead of a list, or if it doesn't support padding. It's still better than
nothing, and can still be successfully used even if it requires some smallish
wrapper to achieve the exact functionality. I know I have been implementing
something similar very often. Since when do we need a full PEP process,
nitpicking the small details to death, just to add a simple function?
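
For what it's worth, a sketch of such a window generator (returns tuples, no
padding, and it yields nothing when the iterable is shorter than the window --
one possible answer to the open questions above):

    from itertools import islice

    def iwindow(iterable, size):
        it = iter(iterable)
        window = tuple(islice(it, size))
        if len(window) == size:
            yield window
        for elem in it:
            window = window[1:] + (elem,)
            yield window

    # list(iwindow('abcde', 3)) == [('a','b','c'), ('b','c','d'), ('c','d','e')]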

Giovanni Bajo



Re: [Python-Dev] New string method - splitquoted

2006-05-18 Thread Giovanni Bajo
Heiko Wundram [EMAIL PROTECTED] wrote:

 Don't get me wrong, I personally find this functionality very, very
 interesting (I'm +0.5 on adding it in some way or another),
 especially as a
 part of the standard library (not necessarily as an extension to
 .split()).


It's already there. It's called shlex.split(), and it follows the semantics of a
standard UNIX shell, including escaping and other things.

>>> import shlex
>>> shlex.split(r"Hey I\'m a 'bad guy' for you")
['Hey', "I'm", 'a', 'bad guy', 'for', 'you']

Giovanni Bajo



Re: [Python-Dev] New string method - splitquoted

2006-05-18 Thread Giovanni Bajo
Heiko Wundram [EMAIL PROTECTED] wrote:

 Don't get me wrong, I personally find this functionality very, very
 interesting (I'm +0.5 on adding it in some way or another),
 especially as a
 part of the standard library (not necessarily as an extension to
 .split()).

 It's already there. It's called shlex.split(), and follows the
 semantic of a standard UNIX shell, including escaping and other
 things.

 I knew about *nix shell escaping, but that isn't necessarily what I
 find in input I have to process (although generally it's what you
 see, yeah). That's why I said that it would be interesting to have a
 generalized method, sort of like the csv module but only for string
 interpretation, which takes a dialect, and parses a string for the
 specified dialect.

 Remember, there also escaping by doubling the end of string marker
 (for example, 'this is not a single argument'.split() should be
 parsed as ['this','is','not','a',]), and I know programs that
 use exactly this format for file storage.

I never met this one. Anyway, I don't think it's harder than:

>>> def mysplit(s):
...     """Allow double quotes to escape a quote"""
...     return shlex.split(s.replace(r'""', r'\"'))
...
>>> mysplit('This is not"" a single argument')
['This', 'is', 'not"', 'a', 'single', 'argument']


 Maybe, one could simply export the function the csv module uses to
 parse the actual data fields as a more prominent method, which
 accepts keyword arguments, instead of a Dialect-derived class.


I think you're over-generalizing a very simple problem. I believe that
str.split, shlex.split, and some simple variation like the one above (maybe
using regular expressions to do the substitution if you have slightly more
complex cases) can handle 99.99% of the splitting cases. They surely handle
100% of those I myself had to parse.

I believe the standard library already covers common usage. There will surely
be cases where a custom lexer/splitter will have to be written, but that's life
:)

Giovanni Bajo



Re: [Python-Dev] New string method - splitquoted

2006-05-18 Thread Giovanni Bajo
Dave Cinege wrote:

 It's already there. It's called shlex.split(), and follows the
 semantic of a standard UNIX shell, including escaping and other
 things.

 Not quite. As I said in my other post, simple is the idea for this,
 just like the split method itself.  (no escaping, etc.just
 recognizing delimiters as an exception to the split seperatation)

And what's the actual problem? You either have a syntax which does not
support escaping or one that does. If it can't be escaped, there won't be
any weird characters in the way, and shlex.split() will do it. If it does
support escaping in a decent way, you can either use shlex.split() directly
or modify the string beforehand (like I've shown in the other message). In any
case, you get your job done.

Do you have any real-world case where you are still not able to split a
string? And if you do, are there really so many of them as to warrant a place in
the standard library? As I said before, I think that split() and shlex.split()
cover the majority of real-world usage cases.

 shlex.split() does not let one choose the separator
 or use a maxsplit

Real-world use case? Show me what you need to parse, and I assume this weird
format is generated by a program you have not written yourself (or you could
just change it to generate a more standard and simple format!)

 , nor is it a pure method to strings.

This is a totally different problem. It doesn't make it less useful, nor does it
provide a need for adding a new method to strings.
-- 
Giovanni Bajo



Re: [Python-Dev] Alternative path suggestion

2006-05-06 Thread Giovanni Bajo
Greg Ewing [EMAIL PROTECTED] wrote:

 So I suggest splitting the internal data into 'path elements
 separated by os.sep', 'name elements separated by os.extsep'

 What bothers me about that is that in many systems
 there isn't any formal notion of an extension,
 just a convention used by some applications.

 Just because I have a . in my filename doesn't
 necessarily mean I intend what follows to be
 treated as an extension.

This is up to the application to find out or deduce. Python still needs proper
support for this highly diffused convention. I would say that *most* of
the files on my own hard disk follow this convention (let's see: *.html, *.py,
*.sh, *.cpp, *.c, *.h, *.conf, *.jpg, *.mp3, *.png, *.txt, *.zip, *.tar, *.gz,
*.pl, *.odt, *.mid, *.so... hey, I can't find a file which doesn't follow
this convention). Even if you can have a Python file without an extension, it
doesn't mean that an application should manually extract "foo" out of "foo.py"
just because you could also name it only "foo".

 assert pth.basepath == HOMEDIR
 assert pth.dirparts == ('foo', 'bar')
 assert pth.nameparts == ('baz', 'tar', 'gz')

 What if one of the dirparts contains a character
 happening to match os.extsep? When you do
 pth2 = pth[:-1], does it suddenly get split up
 into multiple nameparts, even though it's actually
 naming a directory rather than a file?

 (This is not hypothetical -- it's a common convention
 in some unix systems to use names like spam.d for
 directories of configuration files.)

Yes. And there is absolutely nothing wrong with it. Do you have in mind any
real-world case in which an algorithm is broken by the fact that splitext()
doesn't magically guess what's an extension and what is not? People coding
Python applications *know* that splitext() will split the string using the
rightmost dot (if any). They *know* it doesn't do anything more than that; and
that it does that on any path string, even those where the rightmost dot
doesn't indicate a real extension. They know it's not magical, in that it can't
try to guess whether it is a real extension or not, and so it does the simple,
clear, mechanical thing: it splits on the rightmost dot.
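
For instance, here is how the current os.path.splitext behaves (interactive
session shown purely for illustration):

    >>> import os.path
    >>> os.path.splitext('archive.tar.gz')
    ('archive.tar', '.gz')
    >>> os.path.splitext('spam.d')
    ('spam', '.d')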

And even if they know this limitation (if you want to call it so, I call it
clear, consistent behaviour which applies to a not-always-consistently-used
convention), the function is still useful.

Giovanni Bajo



Re: [Python-Dev] Visual studio 2005 express now free

2006-04-21 Thread Giovanni Bajo
Martin v. Löwis wrote:

 - Paul Moore has contributed a Python build procedure for the
   free version of the 2003 compiler. This one is without IDE,
   but still, it should allow people without a VS 2003 license
   to work on Python itself; it should also be possible to develop
   extensions with that compiler (although I haven't verified
   that distutils would pick that up correctly).

It's been possible to compile distutils extensions with the VS 2003 toolkit
for far longer than it has been possible to compile Python itself with it:
http://www.vrplumber.com/programming/mstoolkit/

In fact, it would be great if the patches provided here were reviewed and
integrated into the official Python distutils.
-- 
Giovanni Bajo



Re: [Python-Dev] setuptools in the stdlib

2006-04-19 Thread Giovanni Bajo
Phillip J. Eby [EMAIL PROTECTED] wrote:

 If so, can't we have some kind of versioning
 system?

 We do: import setuptools.  We could perhaps rename it to import
 distutils2 if you prefer, but it would mean essentially the same
 thing.  :)


I believe the naming is important, though. I'd rather it be called distutils2,
or "from distutils.core import setup2", or something like that. setuptools *is*
a new version of distutils, so it shouldn't have a different name.

Then, about new commands: why should I need to do "import distutils2" to run,
e.g., "setup.py develop"? That doesn't break backward compatibility.

Giovanni Bajo



Re: [Python-Dev] adding Construct to the standard library?

2006-04-18 Thread Giovanni Bajo
tomer filiba [EMAIL PROTECTED] wrote:

 the point is -- ctypes can define C types. not the TCP/IP stack.
 Construct can do both. It's a superset of ctypes' typing mechanism.
 but of course both have the right to *coexist* --
 ctypes is oriented at interop with dlls, and provides the mechanisms
 needed for that.
 Construct is about data structures of all sorts and kinds.

 ctypes is a very helpful library as a builtin, and so is Construct.
 the two don't compete on a spot in the stdlib.


I don't agree. Both ctypes and Construct provide a way to describe a
binary-packed structure in Python terms: and this is an overlap of
functionality. When I first saw Construct, the thing that crossed my mind was:
hey, yet another syntax to describe a binary-packed structure in Python.
ctypes uses its description to interoperate with native libraries, while
Construct uses its own to interoperate with binary protocols. I didn't see a
good reason why you shouldn't extend ctypes so as to provide the features it is
currently missing. It looks like it could be easily extended to do so.

Giovanni Bajo



Re: [Python-Dev] setuptools in the stdlib

2006-04-18 Thread Giovanni Bajo
Fred L. Drake, Jr. [EMAIL PROTECTED] wrote:

   So, I'm not too pleased by insinuations that setuptools is
  anything other  than a Python community project.

 I've no doubt about that at all, FWIW.  I think you've put a lot of
 effort into discussing it with the community, and applaud you for
 that as well as your implementation efforts.


I agree, but I have a question for Phil: why can't many of the setuptools
features simply be integrated into the existing distutils?

I have been fighting with distutils for quite some time and have had to
monkey-patch it somehow to fit my needs. I later discovered that setuptools
included many of those fixes already (let alone the new features). I actually
welcome all those setuptools fixes, in line with the "Just Works(TM)" principle
with which I totally agree.

But why can't setuptools be gradually merged into distutils, instead of being
kept as a separate package? Let's take a real example: setuptools' sdist is
much enhanced, has integration with CVS/SVN, uses MANIFEST in a way that
really works, etc. Why can't it be merged into the original distutils? Is it
just for backward compatibility? If so, can't we have some kind of versioning
system?

Giovanni Bajo



Re: [Python-Dev] elementtree in stdlib

2006-04-07 Thread Giovanni Bajo
Greg Ewing [EMAIL PROTECTED] wrote:

 try:
     import xml.etree.ElementTree as ET  # in python >= 2.5
 except ImportError:
     ... etc ad nauseam
 
 For situations like this I've thought it might
 be handy to be able to say
 
import xml.etree.ElementTree or cElementTree or \
  elementtree.ElementTree or lxml.etree as ET


Astonishingly cute. +1.
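
(For reference, the spelled-out fallback chain being replaced typically looks
something like this -- a sketch using the module locations mentioned above:)

    try:
        import xml.etree.ElementTree as ET            # stdlib, python >= 2.5
    except ImportError:
        try:
            import cElementTree as ET                 # external C implementation
        except ImportError:
            try:
                import elementtree.ElementTree as ET  # original package
            except ImportError:
                import lxml.etree as ET               # ElementTree-compatible API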

Giovanni Bajo



Re: [Python-Dev] Discussing the Great Library Reorganization

2006-03-29 Thread Giovanni Bajo
Anthony Baxter [EMAIL PROTECTED] wrote:

 I don't have a problem with reorganising the standard library, but
 what's the motivation for moving everything under a new root? Is it
 just to allow people to unambigiously get hold of something from the
 stdlib, rather than following the normal search path? Doesn't the
 absolute/relative import PEP solve this problem?


I don't think so. For instance, if I have a package called "db" in my
application (which I import with absolute imports from other packages), I might
have problems with the newly added "db.sqlite" package in Python 2.5. In fact,
I guess my "db" will shadow the stdlib one, making it impossible to access. A
unique prefix for the stdlib would solve this.

Giovanni Bajo



Re: [Python-Dev] I'm not getting email from SF when assignedabug/patch

2006-03-28 Thread Giovanni Bajo
Just van Rossum [EMAIL PROTECTED] wrote:

 http://www.edgewall.com/trac/

 It is based on python and has a very good svn integration.

 We started using it recently and so far it's working really well. I love
 the svn (and wiki!) integration. However, I have no idea how well it
 scales to a project the size of Python.

Having extensively used both Trac and Bugzilla, let me say that the ticket
tracker in Trac is a child's-play version of Bugzilla. It might be enough for
Python, though, if SF was enough till now. I thought that a large project
like Python required something more advanced. Anyway, I'll shut up as I see
there is a committee for this decision.

The integration between tickets/svn/wiki in Trac is cute though, even if,
after a while, you really wish that mailman parsed that syntax as well
(and maybe your MUA too :)
-- 
Giovanni Bajo



Re: [Python-Dev] Changing -Q to warn for 2.5?

2006-03-26 Thread Giovanni Bajo
Neal Norwitz [EMAIL PROTECTED] wrote:

 
 The -Q command line option takes a string argument that can take four
 values: old, warn, warnall, or new.  The default is old in
 Python 2.2 but will change to warn in later 2.x versions.
 

 I'm not sure this is worth in 2.x.  If we aren't going to change it,
 we should update the PEP.  OTOH, people ask how they can find integer
 division in their code.  Even though they can use the flag themselves,
 I wonder if anyone who wants help finding integer division really uses
 the flag.

-1, gratuitous breakage. There's nothing really wrong or dangerous about the
old semantics, they just won't be the ones used by Python 3.0. While it's nice
to have an option to help forward porting, I don't think we should force it.

Giovanni Bajo



Re: [Python-Dev] GeneratorExit inheriting from Exception

2006-03-19 Thread Giovanni Bajo
Nick Coghlan [EMAIL PROTECTED] wrote:

 Rather than trying to change course midstream, I *like* the fact that
 the PEP 352 hierarchy introduces BaseException to bring the language
 itself into line with what people have already been taught. Breaking
 things in Py3k is all well and good, but breaking them gratuitously
 is something else entirely :)

I really like this too, but Barry's proposal doesn't really *break* anything.
Existing Python code that creates a FooBar(Exception) and then catches it with
either "except FooBar:" or "except Exception, e:" + a check for FooBar will
still work as expected. At *worst*, it would be catching too much, like
SystemExit or GeneratorExit, which are still pretty uncommon exceptions.

OTOH, I also understand that people have been told that deriving from Exception
is the right thing to do forever now.

Giovanni Bajo



Re: [Python-Dev] Python 2.5 Schedule

2006-03-18 Thread Giovanni Bajo
Raymond Hettinger [EMAIL PROTECTED] wrote:

 They include [...] the str.partition function,

Where is the current version of this patch?

After going through the archives again, I have an additional suggestion which I
didn't find already mentioned in the discussion. What about:

str.partition()  ->  (left, right or None)
str.rpartition()  ->  (left or None, right)

which means:

"foo:bar".partition(":") -> ("foo", "bar")
"foo:bar".rpartition(":") -> ("foo", "bar")
"foo:".partition(":") -> ("foo", "")
"foo:".rpartition(":") -> ("foo", "")
":foo".partition(":") -> ("", "foo")
":foo".rpartition(":") -> ("", "foo")
"foo".partition(":") -> ("foo", None)
"foo".rpartition(":") -> (None, "foo")

Notice that None-checking can be done as a way to know whether the separator
was found. I mentally went through the diff here
(http://mail.python.org/pipermail/python-dev/2005-August/055781.html) and
found out that most (all?) of the usages of '_' disappear with this semantic.
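
For clarity, a sketch of the proposed semantic as a hypothetical pure-Python
equivalent (not the actual patch):

    def partition(s, sep):
        i = s.find(sep)
        if i < 0:
            return (s, None)
        return (s[:i], s[i + len(sep):])

    def rpartition(s, sep):
        i = s.rfind(sep)
        if i < 0:
            return (None, s)
        return (s[:i], s[i + len(sep):])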
-- 
Giovanni Bajo



Re: [Python-Dev] GeneratorExit inheriting from Exception

2006-03-18 Thread Giovanni Bajo
Georg Brandl [EMAIL PROTECTED] wrote:

 Exception
 +- KeyboardInterrupt
 +- GeneratorExit
 +- SystemExit
 +- StopIteration
 +- Error
 |  +- ImportError
 |  +- (etc.)
 +- Warning
    +- UserWarning
    +- (etc.)

 Cool! That's so far the clearest solution. An additional bonus is that
 except statements look nicer:

 except:        # still catches all Exceptions, just like except Exception:

 except Error:  # is what you normally should do

+1 on the general idea; I just don't like that a bare "except:" would again be
the wrong thing to do: part of the PEP 352 idea was that people writing
"except:" out of ignorance would still not cause their program to intercept
KeyboardInterrupt or StopIteration.

Unless this new proposal also includes changing the meaning of "except:" to
"except Error". Also, under this new proposal, we could even remove
Exception from the builtins namespace in Py3k. It's almost always wrong to
use it, and if you really really need it, it's spelled exceptions.Exception.
-- 
Giovanni Bajo



Re: [Python-Dev] GeneratorExit inheriting from Exception

2006-03-18 Thread Giovanni Bajo
Barry Warsaw wrote:

 Unless this new proposal also includes changing the meaning of
 except: to except Error.

 It's worth debating.  OT1H, it's a semantic difference for Python 2.x
 (although +1 on the idea for Py3K).

I was speaking of Py3K here, yes.

 Going along with that, maybe the interpreter should do something
 different when an Exception that's not an Error reaches the top
 (e.g. not print a traceback if KeyboardInterrupt is seen -- 
 we usually just catch that, print Interrupted and exit).

SystemExit is already special-cased, as far as I can tell. KeyboardInterrupt
could in fact be special-cased as well (I saw many Python newbies -- but
otherwise experienced -- being disgusted at first when they interrupt their
code with CTRL+C: they expect the program to exit almost silently).

 Also, under this new proposal, we could even remove
 Exception from the builtins namespace in Py3k. It's almost
 always wrong to use it, and if you really really need it, it's
 spelled exceptions.Exception.

 I'm not sure I'd go as far as hiding Exception, since I don't think the
 penalty is that great and it makes it easier to document.

The situation (in Py3k) I was thinking of is when people see this code:

except:
    # something

and want to change it so as to get a name for the exception object. I *think*
many could get confused and write:

except Exception, e:
    # something

which changes the meaning. It sounds correct, but it's wrong. Of course, it's
easy to argue that Exception is just that, and people actually meant Error.
In a way, the current PEP 352 is superior here because it makes it harder to
do the bad thing by giving it a complex name (BaseException).

Giovanni Bajo



Re: [Python-Dev] libbzip2 version?

2006-03-11 Thread Giovanni Bajo
Martin v. Löwis [EMAIL PROTECTED] wrote:

 On bzip2, I wonder whether
 2.4 should also update to the newer library;

+1, I seem to remember exploits involving corrupted data fed to the bz2
decompressor.

Giovanni Bajo



Re: [Python-Dev] ctypes is in SVN now.

2006-03-09 Thread Giovanni Bajo
Thomas Heller [EMAIL PROTECTED] wrote:

 Missing are .vcproj files for Windows, both for the _ctypes.pyd extension
 and the _ctypes_test.pyd extension needed for testing.  IIRC, Martin
 promised to create them - is this offer still valid? I could do that
 myself, but only for x86, while other .pyd files also have build settings
 for Itanium and x86_64.  (I find it always very painful to click through
 the settings dialog in MSVC - isn't there any other way to create these
 files?)


I discussed with Martin a bit about the opportunity of generating .vcproj
files with a script-driven tool. I'm going to try and set up something as
soon as I find some spare time.

The goal of this work would exactly fit your need: make the creation of
extension modules trivially easy (just as easy as adding the modules to the
main python.dll). My personal goal, in fact, is to move many of those
builtin extension modules out of python.dll and into their own .pyd files,
where they'd belong (were it not for this technical annoyance of being forced
to use the settings dialog in MSVC).
-- 
Giovanni Bajo



Re: [Python-Dev] ctypes is in SVN now.

2006-03-09 Thread Giovanni Bajo
Thomas Heller [EMAIL PROTECTED] wrote:

 I discussed with Martin a bit about the opportunity of generating .vcproj
 files with a script-driven tool. I'm going to try and setup something as
 soon as I find some spare time.

 Ideally this would be integrated with distutils because the setup-script
 has most of the information that's needed.

 OTOH, extensions could be built with distutils even for core Python, and
 even on Windows.  For extensions that are *not* builtin or not included
with
 Python itself distutils works good enough.

I fear this is an orthogonal change. Alas, using distutils to build core
extensions is not something I'm ready to tackle at the moment.

I was just thinking of something much more naive, like using a free tool to
build .vcproj/.sln files (see www.cmake.org for instance) from a script
description. With such a tool, it would be very easy to build a .pyd file
around a self-contained .c file (like _heapqmodule.c, etc.): it would mostly
be a couple of line changes in the script file, followed by re-running the
tool executable to regenerate the vcproj/sln. OTOH, both the tool executable
(cmake.exe in this example) and the current version of the generated
vcproj/sln files would be committed in SVN under PCbuild, so as to have a
minimal impact on developer habits.
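
To give an idea, the per-module description could be as small as this
hypothetical sketch in CMake syntax (the PYTHON_* variables are illustrative,
not an actual PCbuild setup):

    # build _heapq.pyd out of the self-contained _heapqmodule.c
    add_library(_heapq MODULE _heapqmodule.c)
    include_directories(${PYTHON_INCLUDE_DIR})
    target_link_libraries(_heapq ${PYTHON_IMPORT_LIB})
    set_target_properties(_heapq PROPERTIES PREFIX "" SUFFIX ".pyd")

Moving a module in or out of python.dll would then be a matter of touching a
couple of lines in this description and re-running the tool.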
-- 
Giovanni Bajo


