Re: [Python-Dev] repeated keyword arguments
Jeff Hall wrote:
> That's all fine and good but in this case there may be "stealth errors".

That is fully understood, in all of its consequences.

Regards,
Martin

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] urllib, multipart/form-data encoding and file uploads
> I didn't see any recent discussion about this so I thought I'd ask
> here: do you think this would make a good addition to the new urllib
> package?

Just in case that isn't clear: any such change must be delayed for 2.7/3.1. That is not to say that you couldn't start implementing it now, of course.

Regards,
Martin
Re: [Python-Dev] repeated keyword arguments
On Jun 28, 12:56 am, "Guido van Rossum" <[EMAIL PROTECTED]> wrote:
> No, it could just be a harmless typo in a long argument list.

To elaborate on this point a little: I came across this error when I ported my code to 2.4. I used the optparse class, which takes tens of kwargs, and it turned out I'd given the same parameter twice (the infamous copy-paste problem).

So on the one hand, it was a harmless typo (because the latest instance was taken). On the other hand, it's a good thing I tested it on 2.4, or I'd never have noticed the repeated argument, which might have led to weird runtime errors (if the order of the params was changed).

I'd be in favor of fixing this in 2.5, just to eliminate possibly hard-to-debug runtime errors. Since it's a syntax error, it would be noticed early, when the code is first run/imported, and it wouldn't require the original author of the code to fix.

-tomer
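To illustrate the class of mistake under discussion (a sketch with a made-up call, not tomer's actual optparse code): a repeated keyword in a long argument list is caught at compile time in interpreters that reject it, before any code runs.

```python
# A call with an explicitly repeated keyword argument.  Interpreters
# that check for this reject it when the source is compiled; older
# ones silently kept one of the two values.
source = "parser.add_option('-v', dest='verbose', default=False, dest='verbose')"

try:
    compile(source, "<example>", "eval")
    outcome = "accepted"
except SyntaxError:
    outcome = "rejected"

print(outcome)  # "rejected" on interpreters with the check (Python 3, for instance)
```

Because the check happens at compile time, it fires as soon as the module is imported, which is exactly the "early-noticed" property tomer describes.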
Re: [Python-Dev] repeated keyword arguments
Guido van Rossum wrote:
> In such cases I think it's better not to introduce new exceptions in
> point-point releases.

Perhaps they should be backported to the maintenance branch as warnings? Then users can decide on a case-by-case basis if they want to make that particular warning trigger an exception.

Cheers,
Nick.

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
http://www.boredomandlaziness.org
Re: [Python-Dev] repeated keyword arguments
> i'd be in favor of fixing this in 2.5, just to eliminate possibly hard-
> to-debug runtime errors. since it's a syntax error, it would be early-
> noticed when the code is first run/imported, and it wouldn't require
> the original author of the code to fix.

As release manager for Python 2.5, I'd like to support Guido's position: the risk of breaking existing code is just not worth it. Developers who made such a mistake will find out when they port the code to 2.6; there is no value whatsoever in end-users finding out about minor bugs in software they didn't even know was written in Python.

Regards,
Martin
Re: [Python-Dev] repeated keyword arguments
> Perhaps they should be backported to the maintenance as warnings? Then
> users can decide on a case-by-case basis if they want to make that
> particularly warning trigger an exception.

No. There will likely be one more 2.5 release. This entire issue never came up in the lifetime of 2.5, so it can't be so serious as to justify annoying end-users with a warning they don't know how to deal with.

Regards,
Martin
[Python-Dev] Python 2.6 SSL
Hi,

I have used the SSL lib to wrap sockets in existing client/server code, but I find that the max send size is 16K. In other words, I send 20K, the write call returns 20K, but the receiving end only gets 16K.

I remove the wrapper and everything works again.

I even modified the unit test to send >16K, but got the same result.

Is there a patch for this?

I hope this is the correct place to report this...

Regards

Roger
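The symptom is consistent with TLS's 16 KiB record-size limit: a single write on a wrapped socket may transmit only part of a larger buffer, so the caller has to check the return value and retry. A minimal sketch of that retry loop (the function name is illustrative, not a stdlib API):

```python
def send_all(sock, data):
    """Keep calling send() until the whole buffer has gone out.

    Works for any socket-like object whose send() may accept fewer
    bytes than offered -- e.g. an SSL-wrapped socket constrained by
    the 16 KiB TLS record size.
    """
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise RuntimeError("connection closed mid-send")
        total += sent
    return total
```

Note that the bug as reported is slightly different: the write call claimed to have sent 20K while only 16K arrived, which a retry loop cannot fix and which does sound like something to report on the tracker.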
[Python-Dev] Cycle collection enhancement idea
I see why a cycle that has multiple objects with a __del__ method is a problem. Once you call __del__ on one of the objects, it's no longer usable by the others, and it's not clear which order is correct.

My question regards the case where a cycle of objects has only one object with a __del__. I think a correct strategy for collecting the entire cycle is the same one used on a single object. On a single object Python uses:

1. Temporarily revive the object
2. Call __del__
3. Unrevive the object (if (--refcount == 0) then we're done; otherwise it was resurrected)

We can apply this to the whole cycle:

1. Temporarily revive the entire cycle (each of its objects)
2. Call __del__
3. Unrevive the objects of the entire cycle (each of its objects)

Step 1 will allow __del__ to run safely. Since there is only one __del__ in the cycle, there is no danger of its references disappearing from "under its feet". (Some code restructuring will probably be necessary, because of assumptions that are hard-coded into slot_tp_del and subtype_dealloc.)

I believe this enhancement is important, because:

A. When using existing code, you do not control whether its objects have a __del__. In my experience, a majority of these cases have only a single __del__-containing object in their cycles.

B. Python's exit cleanup calls __del__ in the wrong order, and Python's runtime is full of cycles (each global is a cycle, including the class objects themselves: class->dict->function->func_globals). These cycles very often have only one __del__ method.

Some examples of the problem posed by B:
http://www.google.com/search?q=ignored+%22%27NoneType%27+object+has+no+attribute%22+%22__del__+of%22&btnG=Search

Ugly workarounds exist even in the standard library (subprocess):
    def __del__(self, sys=sys):
Example:

    import os

    class RunningFile(object):
        filename = '/tmp/running'
        def __init__(self):
            open(self.filename, 'wb')
        def __del__(self):
            os.unlink(self.filename)

    running_file = RunningFile()

The running_file object is in a cycle as described above [as well as the RunningFile class itself]. When Python exits, it could call running_file.__del__() and then collect the cycle. But Python does the wrong thing here, and gets rid of the globals before calling __del__:

    Exception exceptions.AttributeError: "'NoneType' object has no
    attribute 'unlink'" in <bound method RunningFile.__del__ of
    <__main__.RunningFile object at 0x7f9655eb92d0>> ignored

I believe applying the above enhancement would solve these problems.
Re: [Python-Dev] Cycle collection enhancement idea
Eyal Lotem wrote:
> Example:
>
> import os
> class RunningFile(object):
>     filename = '/tmp/running'
>     def __init__(self):
>         open(self.filename, 'wb')
>     def __del__(self):
>         os.unlink(self.filename)
> running_file = RunningFile()
>
> The deller object is in a cycle as described above [as well as the
> Deller class itself]. When Python exits, it could call
> deller.__del__() and then collect the cycle. But Python does the wrong
> thing here, and gets rid of the globals before calling __del__:
> Exception exceptions.AttributeError: "'NoneType' object has no
> attribute 'unlink'" ignored

I don't know what you're trying to get at with this example. There isn't any cyclic GC involved at all, just reference counting. And before the module globals are cleared, running_file is still referenced, so calling its __del__ method early would be an outright error in the interpreter (as far as I know, getting __del__ methods to run is one of the *reasons* for clearing the module globals).

It's a fact of Python development: __del__ methods cannot safely reference module globals, because those globals may be gone by the time that method is invoked.

Cheers,
Nick.

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
http://www.boredomandlaziness.org
Re: [Python-Dev] Python 2.6 SSL
Roger, can you show us the relevant code?

On Sat, Jun 28, 2008 at 5:59 AM, Roger wenham <[EMAIL PROTECTED]> wrote:
> HI,
>
> I have used the SSL lib to wrap sockets in existing client server code,
> but I find that the max send size is 16K. In other words I send 20K, the
> write call returns 20K but the receiving end only gets 16K.
>
> I remove the wraper and everything worksagain.
>
> I even modified the unit test to send >16K, but got the same result.
>
> Is therea patch for this ?
>
> I hope this the correct place to report this...
>
> Regards
>
> Roger

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Cycle collection enhancement idea
> Example:
>
> import os
> class RunningFile(object):
>     filename = '/tmp/running'
>     def __init__(self):
>         open(self.filename, 'wb')
>     def __del__(self):
>         os.unlink(self.filename)
> running_file = RunningFile()
>
> The deller object is in a cycle as described above [as well as the
> Deller class itself].

I think you are mistaken here. The RunningFile instance in the above code is *not* part of a cycle. It doesn't have any instance variables (i.e. its __dict__ is empty), and it only refers to its class, which (AFAICT) doesn't refer back to the instance.

> When Python exits, it could call
> deller.__del__() and then collect the cycle. But Python does the wrong
> thing here, and gets rid of the globals before calling __del__:
> Exception exceptions.AttributeError: "'NoneType' object has no
> attribute 'unlink'" in <__main__.RunningFile object at 0x7f9655eb92d0>> ignored

This is a different issue. For shutdown, Python doesn't rely on cyclic garbage collection (only). Instead, all modules get forcefully cleared, causing this problem.

> I believe applying the above enhancement would solve these problems.

No, it wouldn't. To work around the real problem in your case, put everything that the destructor uses into an instance or class attribute:

    class RunningFile(object):
        filename = '/tmp/running'
        _unlink = os.unlink
        def __init__(self):
            open(self.filename, 'wb')
        def __del__(self):
            self._unlink(self.filename)

Regards,
Martin
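The default-argument variant mentioned earlier in the thread (cf. subprocess's "def __del__(self, sys=sys):") achieves the same thing by a different route. A sketch, reworking the thread's RunningFile example to take a path so it is testable:

```python
import os

class RunningFile(object):
    """Marker-file example from the thread, reworked for illustration."""

    def __init__(self, filename):
        self.filename = filename
        open(self.filename, 'wb').close()

    # Binding os.unlink as a parameter default captures it when the
    # class body executes, so __del__ still works even after module
    # globals (including the name "os") are cleared at shutdown.
    def __del__(self, unlink=os.unlink):
        unlink(self.filename)
```

The same trick works for any global a destructor needs; the cost is a slightly odd-looking signature.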
Re: [Python-Dev] repeated keyword arguments
On Sat, Jun 28, 2008 at 1:30 AM, tomer filiba <[EMAIL PROTECTED]> wrote:
> On Jun 28, 12:56 am, "Guido van Rossum" <[EMAIL PROTECTED]> wrote:
>> No, it could just be a harmless typo in a long argument list.
>
> to elaborate on this point a little, i came across this error when i
> ported my code to 2.4. i used the optparse class which takes 10's of
> kwargs, and it turned out i'd given the same parameter twice (the
> infamous copy-paste problem).
>
> so on the one hand, it was a harmless typo (because the latest
> instance was taken). on the other hand, it's a good thing i tested it
> on 2.4, or i'd never notice the repeated argument, which may have led
> to weird runtime errors (if the order of the params was changed).
>
> i'd be in favor of fixing this in 2.5, just to eliminate possibly hard-
> to-debug runtime errors. since it's a syntax error, it would be early-
> noticed when the code is first run/imported, and it wouldn't require
> the original author of the code to fix.

But your anecdote is exactly why I don't want it fixed in 2.5. Your code was working; the typo was harmless. I don't want upgrades from 2.5.2 to 2.5.3 (or any x.y.z to x.y.(z+1)) to break code that was working before the upgrade! This is a principle we've adopted for such point-point upgrades for a long time.

Also note that it took a long time before this was first reported, so it's not exactly an important or frequent problem.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Python 2.6 SSL
> I hope this the correct place to report this...

Hi, Roger. Please file a bug report at http://bugs.python.org/, and assign it to me. Please attach a patch for the change you made to the unit test suite to send >16K. Thanks!

Bill
Re: [Python-Dev] GC Proposal
Adam Olsen <[EMAIL PROTECTED]> writes:
>
> We need two counters: one is the total number of traceable objects
> (those we would inspect if we did a full collection) and a number of
> "pending" trace operations. Every time an object is moved into the
> last generation, we increase "pending" by two - once for itself and
> once for an older object. Once pending equals the total number of
> traceable objects we do a full collection (and reset "pending" to 0).

It sounds rather similar to Martin's proposal, except with different coefficients and slightly different definitions (but the "total number of traceable objects" should be roughly equal to the number of objects in the oldest generation, and the "number of pending trace operations" roughly equal to the number of survivor objects after a collection of the middle generation).

Am I missing something?
Re: [Python-Dev] GC Proposal
> It sounds rather similar to Martin's proposal, except with different
> coefficients and slightly different definitions (but the "total number
> of traceable objects" should be roughly equal to the number of objects
> in the oldest generation, and the "number of pending trace operations"
> roughly equal to the number of survivor objects after a collection of
> the middle generation).
>
> Am I missing something?

I think that's an accurate description. The major difference is the factor, and making it 100% (in my terminology) might hold up collections for too long, in some cases. Whether or not 10% is the "best" factor, I don't know (it most likely is not).

Regards,
Martin
Re: [Python-Dev] GC Proposal
On Sat, Jun 28, 2008 at 12:47 PM, Antoine Pitrou <[EMAIL PROTECTED]> wrote:
> Adam Olsen <[EMAIL PROTECTED]> writes:
>>
>> We need two counters: one is the total number of traceable objects
>> (those we would inspect if we did a full collection) and a number of
>> "pending" trace operations. Every time an object is moved into the
>> last generation, we increase "pending" by two - once for itself and
>> once for an older object. Once pending equals the total number of
>> traceable objects we do a full collection (and reset "pending" to 0).
>
> It sounds rather similar to Martin's proposal, except with different
> coefficients and slightly different definitions (but the "total number
> of traceable objects" should be roughly equal to the number of objects
> in the oldest generation, and the "number of pending trace operations"
> roughly equal to the number of survivor objects after a collection of
> the middle generation).

The effect is similar for the "batch allocation" case, but opposite for the "long-running program" case. Which is preferred is debatable. If we had an incremental GC mine wouldn't have any bad cases, just the constant overhead. However, lacking an incremental GC, and since refcounting GC is sufficient for most cases, we might prefer to save overhead and avoid the pauses rather than handle the "long-running program" case.

My proposal can be made equivalent to Martin's proposal by removing all of its pending traces when an untraced object is deleted. We could even change this at runtime, by adding a counter for pending objects.

Come to think of it, I think Martin's proposal needs to be implemented as mine. He wants the middle generation to be 10% larger than the oldest generation, but to find out the size you need to either iterate it (reintroducing the original problem), or keep some counters. With counters, his middle generation size is my "pending traces".

> Am I missing something?

Actually, I was. I lost track of Martin's thread when preparing my idea. Doh!
--
Adam Olsen, aka Rhamphoryncus
Re: [Python-Dev] GC Proposal
> The effect is similar for the "batch allocation" case, but opposite
> for the "long-running program" case.

I don't understand. Where is the difference?

> My proposal can be made equivalent to Martin's proposal by removing
> all of its pending traces when an untraced object is deleted. We
> could even change this at runtime, by adding a counter for pending
> objects.

What is a "pending trace"?

> Come to think of it, I think Martin's proposal needs to be implemented
> as mine. He wants the middle generation to be 10% larger than the
> oldest generation

Not exactly: 10% of the oldest generation, not 10% larger. So if the oldest generation has 1,000,000 objects, I want to collect when the survivors from the middle generation reach 100,000, not when they reach 1,100,000.

> but to find out the size you need to either iterate
> it (reintroducing the original problem), or keep some counters. With
> counters, his middle generation size is my "pending traces".

Yes, I have two counters: one for the number of objects in the oldest generation (established at the last collection, and unchanged afterwards), and one for the number of survivors from the middle collection (increased every time objects move to the oldest generation).

So it seems there are minor differences (such as whether a counter for the total number of traceable objects is maintained, which you seem to be suggesting), but otherwise, I still think the proposals are essentially the same.

Regards,
Martin
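Martin's two-counter rule can be sketched in a few lines (a hypothetical model for illustration, not the actual gcmodule.c code): a full collection triggers once the survivors promoted from the middle generation reach 10% of the oldest generation's size.

```python
class FullCollectionPolicy(object):
    """Sketch of the two-counter trigger described in the message above."""

    def __init__(self, factor=0.1):
        self.factor = factor
        self.oldest = 0     # objects in the oldest generation, fixed at the last full collection
        self.survivors = 0  # objects promoted from the middle generation since then

    def on_promote(self, count):
        """Record survivors of a middle-generation collection.

        Returns True when a full collection is due.
        """
        self.survivors += count
        return self.survivors >= self.factor * self.oldest

    def on_full_collection(self, oldest_size):
        """Reset the counters after a full collection."""
        self.oldest = oldest_size
        self.survivors = 0
```

With oldest = 1,000,000 this fires once 100,000 survivors have accumulated, matching the numbers in the message.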
Re: [Python-Dev] [Python-checkins] r64424 - in python/trunk: Include/object.h Lib/test/test_sys.py Misc/NEWS Objects/intobject.c Objects/longobject.c Objects/typeobject.c Python/bltinmodule.c
From: "Mark Dickinson" <[EMAIL PROTECTED]>
> There's one other major difference between the C99 notation and the
> current patch: the C99 notation includes a (hexa)decimal point. The
> advantages of this include:
>
> - the exponent gives a rough idea of the magnitude of the number, and
> - the exponent doesn't vary with changes to the least significant bits
>   of the float.

Is everyone agreed on a tohex/fromhex pair using the C99 notation as recommended in 754R?

Are you thinking of math module functions or a method and classmethod on floats?

Raymond
Re: [Python-Dev] Cycle collection enhancement idea
Nick Coghlan wrote:
> It's a fact of Python development: __del__ methods cannot safely
> reference module globals, because those globals may be gone by the time
> that method is invoked.

Speaking of this, has there been any more thought given to the idea of dropping the module clearing and just relying on cyclic GC?

--
Greg
Re: [Python-Dev] [Python-checkins] r64424 - in python/trunk: Include/object.h Lib/test/test_sys.py Misc/NEWS Objects/intobject.c Objects/longobject.c Objects/typeobject.c Python/bltinmodule.c
On Sat, Jun 28, 2008 at 4:46 PM, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> From: "Mark Dickinson" <[EMAIL PROTECTED]>
>>
>> There's one other major difference between the C99 notation and the
>> current patch: the C99 notation includes a (hexa)decimal point. The
>> advantages of this include:
>>
>> - the exponent gives a rough idea of the magnitude of the number, and
>> - the exponent doesn't vary with changes to the least significant bits
>>   of the float.
>
> Is everyone agreed on a tohex/fromhex pair using the C99 notation as
> recommended in 754R?

Dunno about everyone, but I'm +1 on that.

> Are you thinking of math module functions or as a method and classmethod on
> floats?

I'd prefer math module functions.

Alex
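For reference, the pair under discussion ultimately shipped in Python 2.6 as float methods rather than math functions: float.hex() and the classmethod float.fromhex(), using the C99-style notation Mark describes (one hex digit before the point, a hex fraction, and a binary exponent introduced by "p"):

```python
# C99-style hex float notation round-trips exactly: no decimal
# rounding is involved, so fromhex(hex(x)) recovers x bit-for-bit.
x = 0.1
s = x.hex()
print(s)                      # 0x1.999999999999ap-4
assert float.fromhex(s) == x  # exact round-trip
assert (1.0).hex() == '0x1.0000000000000p+0'
```

Note how the exponent ("p-4") immediately shows the magnitude, and tweaking the least significant fraction digits leaves it unchanged, which are exactly the two advantages Mark lists.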
Re: [Python-Dev] GC Proposal
On Sat, Jun 28, 2008 at 2:42 PM, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
>> The effect is similar for the "batch allocation" case, but opposite
>> for the "long-running program" case.
>
> I don't understand. Where is the difference?
>
>> My proposal can be made equivalent to Martin's proposal by removing
>> all of its pending traces when an untraced object is deleted. We
>> could even change this at runtime, by adding a counter for pending
>> objects.
>
> What is a "pending trace"?
>
>> Come to think of it, I think Martin's proposal needs to be implemented
>> as mine. He wants the middle generation to be 10% larger than the
>> oldest generation
>
> Not exactly: 10% of the oldest generation, not 10% larger. So if the
> oldest generation has 1,000,000 objects, I want to collect when the
> survivors from the middle generation reach 100,000, not when they reach
> 1,100,000.
>
>> but to find out the size you need to either iterate
>> it (reintroducing the original problem), or keep some counters. With
>> counters, his middle generation size is my "pending traces".
>
> Yes, I have two counters: one for the number of objects in the oldest
> generation (established at the last collection, and unchanged
> afterwards), and one for the number of survivors from the middle
> collection (increased every time objects move to the oldest
> generation).
>
> So it seems there are minor difference (such as whether a counter
> for the total number of traceable objects is maintained, which you
> seem to be suggesting), but otherwise, I still think the proposals
> are essentially the same.

They are definitely quite close to equivalent. The terminology doesn't quite match up, so let me rearrange things and compare:

    old * 0.1 <= survivors    # Martin
    old <= survivors * 2.0    # Adam

Looks about equivalent, but "survivors" may mean two different things depending on whether deleted survivors are removed from the count or not.
Splitting that up, we get this form:

    old <= survivors * 2.0 + deleted * 1.0

The deleted count ensures stable memory loads will still eventually cause full collections. Since our GC isn't incremental/concurrent/realtime, we probably don't want the full collection pauses except from big bursts, which is trivially done by making the deleted factor 0.0. My original proposal assumed a non-zero deleted factor, while yours (and the existing codebase) assumed it would be zero - this is how our two proposals differed.

(My "pending traces" is merely a running total of survivors * 2.0 + deleted * 1.0. It looks much easier to keep separate counts though.)

--
Adam Olsen, aka Rhamphoryncus
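The generalized trigger condition above can be written down directly (a sketch; the parameter names are illustrative):

```python
# Generalized full-collection trigger from the message above.
# survivor_factor=2.0, deleted_factor=1.0 gives Adam's rule;
# survivor_factor=10.0, deleted_factor=0.0 recovers Martin's
# rule (old * 0.1 <= survivors).
def should_full_collect(old, survivors, deleted,
                        survivor_factor=2.0, deleted_factor=1.0):
    """Return True when a full collection is due."""
    return old <= survivors * survivor_factor + deleted * deleted_factor
```

With deleted_factor > 0, a stable-sized program that keeps allocating and freeing still accumulates pressure toward a full collection; with deleted_factor = 0.0, only net growth of the survivor count can trigger one.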
Re: [Python-Dev] Cycle collection enhancement idea
On Sat, Jun 28, 2008 at 5:39 PM, Greg Ewing <[EMAIL PROTECTED]> wrote:
> Nick Coghlan wrote:
>
>> It's a fact of Python development: __del__ methods cannot safely
>> reference module globals, because those globals may be gone by the
>> time that method is invoked.
>
> Speaking of this, has there been any more thought given
> to the idea of dropping the module clearing and just
> relying on cyclic GC?

No, but it is an intriguing thought nevertheless. The module clearing causes nothing but trouble...

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] GC Proposal
> Looks about equivalent, but "survivors" may mean two different things
> depending on if it removes deleted survivors or not. Splitting that
> up, we get this form:
>
> old <= survivors * 2.0 + deleted * 1.0

What precisely would be the "deleted" count? If it counts deallocations, is it relevant what generation the deallocated object was from? If so, how do you determine the generation? If not, wouldn't

    while 1:
        x = []

trigger a full garbage collection fairly quickly?

Regards,
Martin
