[Python-Dev] 2.3 BRANCH FREEZE imminent!
As those of you playing along at home with python-checkins will know, we're going to be cutting a 2.3.5c1 shortly. Can people not in the set of the normal release team (you know the drill) please hold off on checkins to the branch from about UTC, 26th January (in about 12 hours time). After that, we'll have a one-week delay from release candidate to the final 2.3.5 - until then, please be ultra-conservative with checkins to the 2.3 branch (unless you're also volunteering to cut an emergency 2.3.6 *wink*).

Assuming nothing horrible goes wrong, this will be the final release of Python 2.3. The next bugfix release will be 2.4.1, in a couple of months.

(As usual - any questions, comments or whatever, let me know via email, or #python-dev on irc.freenode.net)

Anthony

--
Anthony Baxter [EMAIL PROTECTED]
It's never too late to have a happy childhood.

___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Strange segfault in Python threads and linux kernel 2.6
On Wednesday 26 January 2005 01:01, Donovan Baarda wrote:

[Donovan] In this case it turns out to be "don't do exec() in a thread", because what you exec can have all its signals masked. That turns out to be a hell of a lot of things; popen, os.system, etc. They all only work OK in a threaded application if what you are exec'ing doesn't use any signals.

[Anthony] Yep. You just have to be aware of it. We do a bit of this at work, and we either spool via a database table, or a directory full of spool files.

[Donovan] Actually, I've noticed that Zope often has a sorta-zombie process which it spawns. I wonder if this is a stuck thread waiting for some signal...

[Anthony] Quite likely.

--
Anthony Baxter [EMAIL PROTECTED]
It's never too late to have a happy childhood.
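The spooling workaround Anthony describes can be sketched as below: worker threads never exec children themselves, they hand the request to the main thread, so the child is spawned with a normal signal mask. This is a hedged modern-Python illustration - the queue-based hand-off, the `worker` function, and the `echo` command are illustrative choices, not the original 2005 code (which predates the `subprocess` module):

```python
import queue
import subprocess
import threading

# Worker threads never spawn children directly; they spool the request
# to the main thread via a queue, so the exec happens with normal masks.
jobs = queue.Queue()

def worker(cmd):
    jobs.put(cmd)    # spool the request instead of exec'ing in the thread
    jobs.put(None)   # sentinel: this worker has no more jobs

t = threading.Thread(target=worker, args=(["echo", "hello"],))
t.start()

results = []
while True:
    cmd = jobs.get()                 # main thread does the actual spawn
    if cmd is None:
        break
    proc = subprocess.run(cmd, capture_output=True, text=True)
    results.append(proc.stdout.strip())
t.join()
```

The same shape works with a spool directory or database table in place of the in-process queue, which is what you'd want across processes.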
RE: [Python-Dev] state of 2.4 final release
[Anthony Baxter] I didn't see any replies to the last post, so I'll ask again with a better subject line - as I said last time, I'm not aware of anyone having done a fix for the issue Tim identified ( http://www.python.org/sf/1069160 ). So, my question is: is this important enough to delay a 2.4 final for?

[Tim] Not according to me; I said before I'd be happy if everyone pretended I hadn't filed that report until a month after 2.4 final was released.

Any chance of this getting fixed before 2.4.1 goes out in February?

Raymond
Re: [Python-Dev] state of 2.4 final release
[Anthony Baxter] I didn't see any replies to the last post, so I'll ask again with a better subject line - as I said last time, I'm not aware of anyone having done a fix for the issue Tim identified ( http://www.python.org/sf/1069160 ). So, my question is: is this important enough to delay a 2.4 final for?

[Tim] Not according to me; I said before I'd be happy if everyone pretended I hadn't filed that report until a month after 2.4 final was released.

[Raymond Hettinger] Any chance of this getting fixed before 2.4.1 goes out in February?

It probably won't be fixed by me. It would be better if a Unix-head volunteered to repair it, because the most likely kind of thread race (explained in the bug report) has proven impossible to provoke on Windows (short of carefully inserting sleeps into Python's C code) any of the times this bug has been reported in the past. The same kind of bug has appeared several times in different parts of Python's threading code -- holding the GIL is not sufficient protection against concurrent mutation of the tstate chain, for reasons explained in the bug report. A fix is very simple (also explained in the bug report) -- acquire the damn mutex, don't trust to luck.
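Tim's point is about CPython's C-level tstate chain, but the principle - holding the GIL is no substitute for an explicit mutex around a compound mutation - translates to any shared state. A generic Python sketch (names are illustrative, not the bug's actual code): an unguarded read-modify-write can interleave between threads, while taking the lock makes the whole update atomic.

```python
import threading

counter = 0                   # stands in for shared interpreter state
lock = threading.Lock()       # the mutex: acquire it, don't trust to luck

def bump(n):
    global counter
    for _ in range(n):
        with lock:            # guard the read-modify-write as one unit
            counter += 1      # load, add, store - not atomic on its own

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the `with lock:` the final count can come up short on some interpreter versions, which is exactly the "trust to luck" failure mode.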
[Python-Dev] Re: Allowing slicing of iterators
Guido van Rossum wrote:

[quoting Nick Coghlan's proposal] As a trivial example, here's how to skip the head of a zero-numbered list: for i, item in enumerate("ABCDEF")[1:]: print i, item ... Is this idea a non-starter, or should I spend my holiday on Wednesday finishing it off and writing the documentation and tests for it?

[Guido] Since we already have the islice iterator, what's the point? readability?

I don't have to import seqtools to work with traditional sequences, so why should I have to import itertools to be able to use the goodies in there? Better leave that to the compiler.

/F
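For reference, the spelling that works today goes through itertools. This sketch shows the islice equivalent of the proposed `enumerate(...)[1:]` syntax:

```python
from itertools import islice

# Proposed syntax (not valid Python): enumerate("ABCDEF")[1:]
# Working spelling via itertools:
pairs = list(islice(enumerate("ABCDEF"), 1, None))
# Skips index 0, so the first pair yielded is (1, 'B').
```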
Re: [Python-Dev] Allowing slicing of iterators
[me] Since we already have the islice iterator, what's the point?

[Nick] I'd like to see iterators become as easy to work with as lists are. At the moment, anything that returns an iterator forces you to use the relatively cumbersome itertools.islice mechanism, rather than Python's native slice syntax.

Sorry. Still -1. I read your defense, and I'm not convinced. Even Fredrik's support didn't convince me. Iterators are for single sequential access. It's a feature that you have to import itertools (or at least that you have to invoke its special operations) -- iterators are not sequences and shouldn't be confused with such.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Allowing slicing of iterators
Nick Coghlan wrote:

[Nick] In the example below (printing the first 3 items of a sequence), the fact that sorted() produces a new iterable list, while reversed() produces an iterator over the original list *should* be an irrelevant implementation detail from the programmer's point of view.

You have to be aware on some level of whether or not you're using a list when you use slice notation -- what would you do for iterators when given a negative step index? Presumably it would have to raise an exception, where doing so with lists would not...

Steve

--
You can wordify anything if you just verb it.
--- Bucky Katt, Get Fuzzy
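Steve's point can be demonstrated directly: a real list handles a negative step by walking backwards, while itertools.islice (the closest iterator equivalent) rejects a negative step at call time, since an iterator can't seek backwards:

```python
from itertools import islice

# A list accepts a negative step:
rev = list(range(5))[::-1]            # walks backwards: 4, 3, 2, 1, 0

# An iterator cannot seek backwards, so islice refuses negative steps:
try:
    islice(iter(range(5)), None, None, -1)
    raised = False
except ValueError:
    raised = True
```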
Re: [Python-Dev] Allowing slicing of iterators
Nick Coghlan [EMAIL PROTECTED] wrote:

[Guido] Since we already have the islice iterator, what's the point?

[Nick] I'd like to see iterators become as easy to work with as lists are. At the moment, anything that returns an iterator forces you to use the relatively cumbersome itertools.islice mechanism, rather than Python's native slice syntax.

If you want to use full sequence slicing semantics, then make yourself a list or tuple. I promise it will take less typing than itertools.islice() (at least in the trivial case of list(iterable)). Using language syntax to pretend that an arbitrary iterable is a list or tuple may well lead to unexpected behavior, whether that behavior is data loss or a caching of results. Which behavior is desirable is generally application-specific, and I don't believe that Python should make that assumption for the user or developer.

- Josiah
RE: [Python-Dev] state of 2.4 final release
[Anthony Baxter] I'm not aware of anyone having done a fix for the issue Tim identified ( http://www.python.org/sf/1069160 )

[Raymond Hettinger] Any chance of this getting fixed before 2.4.1 goes out in February?

[Timbot] It probably won't be fixed by me. It would be better if a Unix-head volunteered to repair it, because the most likely kind of thread race (explained in the bug report) has proven impossible to provoke on Windows (short of carefully inserting sleeps into Python's C code) any of the times this bug has been reported in the past. The same kind of bug has appeared several times in different parts of Python's threading code -- holding the GIL is not sufficient protection against concurrent mutation of the tstate chain, for reasons explained in the bug report. A fix is very simple (also explained in the bug report) -- acquire the damn mutex, don't trust to luck.

Hey Unix-heads. Any takers?

Raymond
Re: [Python-Dev] Allowing slicing of iterators
Raymond Hettinger [EMAIL PROTECTED] wrote:

[Raymond] FWIW, someone (Bengt Richter perhaps) once suggested syntactic support differentiated from sequences but less awkward than a call to itertools.islice(). itertools.islice(someseq, lo, hi) would be rendered as someseq'[lo:hi].

Just to make sure I'm reading this right, the difference between sequence slicing and iterator slicing is a single quote? IMVHO, that's pretty hard to read... If we're really looking for a builtin, wouldn't it be better to go the route of getattr/setattr and have something like getslice that could operate on both lists and iterators? Then getslice(lst, lo, hi) would just be an alias for lst[lo:hi], and getslice(itr, lo, hi) would just be an alias for itertools.islice(itr, lo, hi).

Steve

--
You can wordify anything if you just verb it.
--- Bucky Katt, Get Fuzzy
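A hypothetical `getslice()` along these lines might dispatch on whether the object supports slice syntax. The name, the dispatch-by-exception approach, and the list-returning fallback are all illustrative assumptions, not a real builtin:

```python
from itertools import islice

def getslice(obj, lo, hi):
    """Hypothetical helper: native slicing for sequences, islice otherwise."""
    try:
        return obj[lo:hi]                  # real sequences take slice syntax
    except TypeError:                      # bare iterators are not subscriptable
        return list(islice(obj, lo, hi))
```

One wrinkle such a helper would expose: negative indices work for the sequence branch but would raise ValueError in the islice branch, echoing the objection raised elsewhere in this thread.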
[Python-Dev] Deprecating modules (python-dev summary for early Dec, 2004)
[python-dev summary] It was also agreed that deleting deprecated modules was not needed; it breaks code, and disk space is cheap. It seems that no longer listing documentation and adding a deprecation warning is what is needed to properly deprecate a module. By no longer listing documentation, new programmers will not use the code since they won't know about it.[*] And adding the warning will let old users know that they should be using something else.

[* Unless they try to maintain old code. Hopefully, they know to find the documentation at python.org.]

Would it make sense to add an attic (or even deprecated) directory to the end of sys.path, and move old modules there? This would make the search for non-deprecated modules a bit faster, and would make it easier to verify that new code isn't depending (perhaps indirectly) on any deprecated features.

New programmers may just browse the list of files for names that look right. They're more likely to take the first (possibly false) hit if the list is long. I'm not the only one who ended up using markupbase for that reason.

Also note that some shouldn't-be-used modules don't (yet?) raise a deprecation warning. For instance, I'm pretty sure regex_syntax and reconvert are both fairly useless without the deprecated regex, but they aren't deprecated on their own -- so they show up as tempting choices in a list of library files. (Though reconvert does something other than I expected, based on the name.) I understand not bothering to repeat the deprecation for someone who is using them correctly, but it would be nice to move them to an attic.

Bastion and rexec should probably also raise DeprecationWarnings, if that becomes the right way to mark them deprecated. (They import fine; they just don't work -- which could be interpreted as merely an "XXX not done yet" comment.)
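The "adding a deprecation warning" step amounts to a single top-level line in the module. A minimal sketch - the module names `spam` and `eggs` are placeholders, and the body is wrapped in a function here only so the warning can be captured and checked:

```python
import warnings

def _deprecated_module_body():
    # The one-liner a deprecated module puts at top level, so the
    # warning fires once at import time:
    warnings.warn("the spam module is deprecated; use eggs instead",
                  DeprecationWarning, stacklevel=2)

# Capture the warning to show it actually fires:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _deprecated_module_body()
```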
-jJ
Re: [Python-Dev] __str__ vs. __unicode__
[M.-A. Lemburg] [...] __str__ and __unicode__ as well as the other hooks were specifically added for the type constructors to use. However, these were added at a time when sub-classing of types was not possible, so it's time now to reconsider whether this functionality should be extended to sub-classes as well.

[Walter Dörwald] So can we reach consensus on this, or do we need a BDFL pronouncement?

[M.-A. Lemburg] I don't have a clear picture of what the consensus currently looks like :-) If we're going for a solution that implements the hook awareness for all __typename__ hooks, I'd be +1 on that. If we only touch the __unicode__ case, we'd only be creating yet another special case. I'd vote -0 on that. [...]

Here's the patch that implements this for int/long/float/unicode: http://www.python.org/sf/1109424

Note that complex already did the right thing. For int/long/float this is implemented in the following way: converting an instance of a subclass to the base class is done in the appropriate slot of the type (i.e. intobject.c::int_int() etc.) instead of in PyNumber_Int()/PyNumber_Long()/PyNumber_Float(). It's still possible for a conversion method to return an instance of a subclass of int/long/float.

Bye,
Walter Dörwald
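The behavior the patch implements for int - the type's conversion slot returning the exact base type when handed a subclass instance - can be illustrated with an int subclass in today's Python (the `Inches` class is a made-up example):

```python
class Inches(int):
    """An int subclass; int() on an instance returns a plain int."""

x = Inches(12)
plain = int(x)
# 'plain' has the same value but is an exact int, not an Inches -
# the constructor converts the subclass back to the base type.
```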
Re: [Python-Dev] Deprecating modules (python-dev summary for early Dec, 2004)
On Tue, 25 Jan 2005 16:21:34 -0600, Skip Montanaro [EMAIL PROTECTED] wrote:

[Jim] Would it make sense to add an attic (or even deprecated) directory to the end of sys.path, and move old modules there? This would make the search for non-deprecated modules a bit faster, and would make it easier to verify that new code isn't depending (perhaps indirectly) on any deprecated features.

[Skip] That's what lib-old is for. All people have to do is append it to sys.path to get access to its contents:

That seems to be for obsolete modules. Should deprecated modules be moved there as well? I had proposed a middle ground, where they were moved to a separate directory, but that directory was (by default) included on the search path. Moving deprecated modules to lib-old (not on the search path at all) seems to risk breaking code.

-jJ
Re: [Python-Dev] Speed up function calls
On Tue, 25 Jan 2005 06:42:57 -0500, Raymond Hettinger [EMAIL PROTECTED] wrote:

[Neal] I tested a method I changed from METH_O to METH_ARGS and could not measure a difference.

[Raymond] Something is probably wrong with the measurements. The new call does much more work than METH_O or METH_NOARGS. Those two common and essential cases cannot be faster and are likely slower on at least some compilers and some machines. If some timing shows differently, then it is likely a mirage (falling into an unsustainable local minimum).

I tested w/chr() which Martin pointed out is broken in my patch. I just tested with len('') and got these results (again on opteron):

# without patch
[EMAIL PROTECTED] clean $ ./python ./Lib/timeit.py -v "len('')"
10 loops -> 8.11e-06 secs
100 loops -> 6.7e-05 secs
1000 loops -> 0.000635 secs
10000 loops -> 0.00733 secs
100000 loops -> 0.0634 secs
1000000 loops -> 0.652 secs
raw times: 0.654 0.652 0.654
1000000 loops, best of 3: 0.652 usec per loop

# with patch
[EMAIL PROTECTED] src $ ./python ./Lib/timeit.py -v "len('')"
10 loops -> 9.06e-06 secs
100 loops -> 7.01e-05 secs
1000 loops -> 0.000692 secs
10000 loops -> 0.00693 secs
100000 loops -> 0.0708 secs
1000000 loops -> 0.703 secs
raw times: 0.712 0.714 0.713
1000000 loops, best of 3: 0.712 usec per loop

So with the patch METH_O is .06 usec slower. I'd like to discuss this later after I explain a bit more about the direction I'm headed. I agree that METH_O and METH_NOARGS are near optimal wrt performance. But if we could have one METH_UNPACKED instead of 3 METH_*, I think that would be a win.

[Neal] A benefit would be to consolidate METH_O, METH_NOARGS, and METH_VARARGS into a single case. This should make code simpler all around (IMO).

[Raymond] Will backwards compatibility allow those cases to be eliminated? It would be a bummer if most existing extensions could not compile with Py2.5. Also, METH_VARARGS will likely have to hang around unless a way can be found to handle more than nine arguments.

Sorry, I meant eliminated w/3.0.
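The timing runs quoted above used the timeit command line; the same measurement through the module API looks like this (absolute numbers are machine-dependent - the post reports ~0.65 usec per call for len('') on an Opteron):

```python
import timeit

# Time len('') per call, taking the best of three repeats as the
# command-line tool does.
t = timeit.Timer("len('')")
n = 100000
best = min(t.repeat(repeat=3, number=n)) / n   # seconds per call
```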
METH_O couldn't be eliminated, but METH_NOARGS actually could, since min/max args would be initialized to 0, so #define METH_NOARGS METH_UNPACKED would work. But I'm not proposing that, unless there is consensus that it's ok.

[Raymond] This patch appears to be taking on a life of its own and is being applied more broadly than is necessary or wise. The patch is extensive and introduces a new C API that cannot be taken back later, so we ought to be careful with it.

I agree we should be careful. But it's all experimentation right now. The reason to modify METH_O and METH_NOARGS is to verify direction and various effects. It's not necessarily meant to be integrated.

[Raymond] That being said, I really like the concept. I just worry that many of the stated benefits won't materialize: * having to keep the old versions for backwards compatibility, * being slower than METH_O and METH_NOARGS, * not handling more than nine arguments,

There are very few functions I've found that take more than 2 arguments. Should 9 be lower, higher? I don't have a good feel. From what I've seen, 5 may be more reasonable as far as catching 90% of the cases.

[Raymond] * separating function signature info from the function itself,

I haven't really seen any discussion on this point. I think Raymond pointed out this isn't really much different today with METH_NOARGS and METH_KEYWORD. METH_O too, if you consider how the arg is used even though the signature is still the same.

[Raymond] * the time to initialize all the argument variables to NULL,

See below how this could be fixed.

[Raymond] * somewhat unattractive case stmt code for building the c function call.

This is the python test coverage: http://coverage.livinglogic.de/coverage/web/selectEntry.do?template=2850&entryToSelect=182530

Note that VARARGS is over 3 times as likely as METH_O or METH_NOARGS. Plus we could get rid of a couple of if statements. So far it seems there aren't any specific problems with the approach. There are simply concerns.
I'm not sure it would be best to modify this patch over many iterations and then make one huge checkin. I also don't want to lose the changes or the results. Perhaps I should make a branch for this work? It's easy to abandon it or take only the pieces we want if it should ever see the light of day.

Here's some thinking out loud. Raymond mentioned some of the warts of the current patch. In particular, all nine argument variables are initialized each time, and there's a switch on the number of arguments. Ultimately, I think we can speed things up more by having 9 different opcodes, i.e., one for each # of arguments: CALL_FUNCTION_0, CALL_FUNCTION_1, ... (9 is still arbitrary and subject to change). Then we would have N little functions, each with the exact # of parameters. Each would still need a switch to call the C function because there may be optional parameters. Ultimately, it's possible the code would be small enough to stick
Re: [Python-Dev] Speed up function calls
Neal Norwitz wrote:

[Neal] So far it seems there aren't any specific problems with the approach. There are simply concerns. I'm not sure it would be best to modify this patch over many iterations and then make one huge checkin. I also don't want to lose the changes or the results. Perhaps I should make a branch for this work? It's easy to abandon it or take only the pieces we want if it should ever see the light of day.

A branch would seem the best way to allow other people to contribute to the experiment. I'll also note that this mechanism should make it easier to write C functions which are easily used both from Python and as direct entries in a C API.

Cheers,
Nick.

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
--- http://boredomandlaziness.skystorm.net
Re: [Python-Dev] Allowing slicing of iterators
Steven Bethard wrote:

[Steven] If we're really looking for a builtin, wouldn't it be better to go the route of getattr/setattr and have something like getslice that could operate on both lists and iterators?

Such a builtin should probably be getitem() rather than getslice() (since getitem(iterable, slice(start, stop, step)) covers the getslice() case). However, I don't really see the point of this, since "from itertools import islice" is nearly as good as such a builtin. More importantly, I don't see how this alters Guido's basic criticism that slicing a list and slicing an iterator represent fundamentally different concepts. (I.e., if itr[x] is unacceptable, I don't see how changing the spelling to getitem(itr, x) could make it any more acceptable.)

If slicing is taken as representing random access to a data structure (which seems to be Guido's view), then using it to represent sequential access to an item in or region of an iterator is not appropriate. I'm not sure how compatible that viewpoint is with wanting Python 3k to be as heavily iterator-based as 2.x is list-based, but that's an issue for the future.

For myself, I don't attach such specific semantics to slicing (I see it as highly dependent on the type of object being sliced), and consider it obvious syntactic sugar for the itertools islice operation. As mentioned in my previous message, I also think the iterator/iterable distinction should be able to be ignored as much as possible, and the lack of syntactic support for working with iterators is the major factor that throws the distinction into a programmer's face. It currently makes the fact that some builtins return lists and others iterators somewhat inconvenient. Those arguments have already failed to persuade Guido though, so I guess the idea is dead for the moment (unless/until someone comes up with a convincing argument that I haven't thought of).
Given Guido's lack of enthusiasm for *this* idea though, I'm not even going to venture into the realms of + on iterators defaulting to itertools.chain or * to itertools.repeat.

Cheers,
Nick.

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
--- http://boredomandlaziness.skystorm.net
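Nick's aside that getitem(iterable, slice(start, stop, step)) subsumes getslice() can be checked with operator.getitem, at least for sequences (a bare iterator would still need islice):

```python
from operator import getitem

data = list(range(10))
# Passing an explicit slice object to getitem is exactly data[2:8:2]:
chunk = getitem(data, slice(2, 8, 2))
```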
Re: [Python-Dev] Speed up function calls
Neal Norwitz [EMAIL PROTECTED] writes:

[Raymond] * not handling more than nine arguments,

[Neal] There are very few functions I've found that take more than 2 arguments. Should 9 be lower, higher? I don't have a good feel. From what I've seen, 5 may be more reasonable as far as catching 90% of the cases.

Five is probably conservative. http://mail.python.org/pipermail/python-dev/2004-February/042847.html

-- KBK