Re: [Python-Dev] How to make _sre.c compile w/ C++?
[EMAIL PROTECTED] wrote:
> if (b == 1) {
> -    literal = sre_literal_template(ptr, n);
> +    literal = sre_literal_template((SRE_CHAR *)ptr, n);
> } else {
> #if defined(HAVE_UNICODE)
> -    literal = sre_uliteral_template(ptr, n);
> +    literal = sre_uliteral_template((Py_UNICODE *)ptr, n);
> #endif
>
> ../Modules/_sre.c: In function 'PyObject* pattern_subx(PatternObject*,
> PyObject*, PyObject*, int, int)':
> ../Modules/_sre.c:2287: error: cannot convert 'Py_UNICODE*' to 'unsigned
> char*' for argument '1' to 'int sre_literal_template(unsigned char*, int)'
>
> During the 16-bit pass, SRE_CHAR expands to Py_UNICODE, so the call to
> sre_literal_template is incorrect. Any ideas how to fix things?

sre_literal_template doesn't take SRE_CHAR*, but unsigned char*. So just cast to that.

Regards,
Martin

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] remote debugging with pdb
On Mon, 17 Apr 2006, "Martin v. Löwis" wrote:
>> There is a patch on SourceForge
>>     python.org/sf/721464
>> which allows pdb to read/write from/to arbitrary file objects. Would it
>> answer some of your concerns (eg remote debugging)?
>>
>> I guess, I could revive it if anyone thinks that it's worthwhile...
>
> I just looked at it, and yes, it's a good idea.

Ok, I'll look into it and submit as a new SF item (probably within 2-3 weeks)...

A question though: the patch will touch the code in many places and so is likely to break other pdb patches which are in SF (e.g. 1393667 (restart patch by rockyb) and 1267629 ('until' patch by me)). Any chance of getting those accepted/rejected before the remote debugging patch?

Thanks
Ilya
Re: [Python-Dev] refleaks & test_tcl & threads
[Thomas Wouters]
> ...
> One remaining issue with refleakhunting on my machine is that test_tcl can't
> stand being run twice. Even without -R, this makes Python hang while waiting
> for a mutex in the second run through test_tcl:
>
>     ...trunk $ ./python -E -tt Lib/test/regrtest test_tcl test_tcl
>
> Attaching gdb to the hung process shows this unenlightening trace:
>
>     #0 0x2b7d6629514b in __lll_mutex_lock_wait () from /lib/libpthread.so.0
>     #1 0x2b7d6639a280 in completed.4801 () from /lib/libpthread.so.0
>     #2 0x0004 in ?? ()
>     #3 0x2b7d66291dca in pthread_mutex_lock () from /lib/libpthread.so.0
>     #4 0x in ?? ()
>
> The process has one other thread, which is stuck here:
>
>     #0 0x2b7d667f14d6 in __select_nocancel () from /lib/libc.so.6
>     #1 0x2b7d67512d8c in Tcl_WaitForEvent () from /usr/lib/libtcl8.4.so.0
>     #2 0x2b7d66290b1c in start_thread () from /lib/libpthread.so.0
>     #3 0x2b7d667f8962 in clone () from /lib/libc.so.6
>     #4 0x in ?? ()
>
> It smells like test_tcl or Tkinter is doing something wrong with regards to
> threads. I can reproduce this on a few machines, but all of them run newish
> linux kernels with newish glibc's and newish tcl/tk. At least in kernel/libc
> land, various thread related things changed of late. I don't have access to
> other machines with tcl/tk right now, but I wonder if anyone can reproduce
> this in different situations.

FYI, there's no problem running test_tcl with -R on WinXP Pro SP2 from current trunk:

    C:\Code\python\PCbuild>python_d -E -tt ../lib/test/regrtest.py -R:: test_tcl test_tcl
    beginning 9 repetitions
    123456789
    .
    1 test OK.
    [27306 refs]

That's using Tcl/Tk 8.4.12 (I don't know what newish means in Tcl-land these days).
Re: [Python-Dev] Gentoo failures - it's blaming me...
[EMAIL PROTECTED]
> I'm on the blame list for the current gentoo buildbot failures. I promise I
> ran "make test" before checking anything in. I don't see where the changes
> I checked in would have caused the reported test failures, but I'm
> investigating. If anyone has any suggestions, let me know.

A URL really helps. I'm guessing this one:

    http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/546/step-test/0

If so, Phillip was also in the blamelist, and he's since fixed the test_pyclbr problem he introduced.

BTW, thanks for taking the blamelist seriously! It's usually very helpful.
Re: [Python-Dev] How to make _sre.c compile w/ C++?
On Tuesday 18 April 2006 11:27, [EMAIL PROTECTED] wrote:
> During the 16-bit pass, SRE_CHAR expands to Py_UNICODE, so the call
> to sre_literal_template is incorrect. Any ideas how to fix things?

I thought (but haven't had time to test) that making getstring return a union that's either SRE_CHAR* or Py_UNICODE* would make the problem go away.

--
Anthony Baxter <[EMAIL PROTECTED]>
It's never too late to have a happy childhood.
[Python-Dev] Gentoo failures - it's blaming me...
I'm on the blame list for the current gentoo buildbot failures. I promise I ran "make test" before checking anything in. I don't see where the changes I checked in would have caused the reported test failures, but I'm investigating. If anyone has any suggestions, let me know.

Skip
[Python-Dev] How to make _sre.c compile w/ C++?
I checked in a number of minor changes this evening to correct various problems compiling Python with a C++ compiler, in my case Apple's version of g++ 4.0. I'm stuck on Modules/_sre.c though. After applying this change:

    Index: Modules/_sre.c
    ===================================================================
    --- Modules/_sre.c  (revision 45497)
    +++ Modules/_sre.c  (working copy)
    @@ -2284,10 +2284,10 @@
         ptr = getstring(ptemplate, &n, &b);
         if (ptr) {
             if (b == 1) {
    -            literal = sre_literal_template(ptr, n);
    +            literal = sre_literal_template((SRE_CHAR *)ptr, n);
             } else {
     #if defined(HAVE_UNICODE)
    -            literal = sre_uliteral_template(ptr, n);
    +            literal = sre_uliteral_template((Py_UNICODE *)ptr, n);
     #endif
             }
         } else {

I am left with this error:

    ../Modules/_sre.c: In function 'PyObject* pattern_subx(PatternObject*, PyObject*, PyObject*, int, int)':
    ../Modules/_sre.c:2287: error: cannot convert 'Py_UNICODE*' to 'unsigned char*' for argument '1' to 'int sre_literal_template(unsigned char*, int)'

During the 16-bit pass, SRE_CHAR expands to Py_UNICODE, so the call to sre_literal_template is incorrect. Any ideas how to fix things? As clever as the two-pass compilation thing is, I must admit it confuses me.

Thx,
Skip
Re: [Python-Dev] PEP 359: The "make" Statement
On 4/17/06, Ian Bicking <[EMAIL PROTECTED]> wrote:
> Steven Bethard wrote:
> > This PEP proposes a generalization of the class-declaration syntax,
> > the ``make`` statement. The proposed syntax and semantics parallel
> > the syntax for class definition, and so::
> >
> >     make <callable> <name> <tuple>:
> >         <block>
>
> I can't really see any use case for <tuple>.

FWIW, I've been thinking of the tuple as the "*args" and the block as the "**kwargs". But certainly any function can be written to take all keyword arguments.

> In particular, you could always choose to implement this:
>
>     make Foo someobj(stuff): ...
>
> like:
>
>     make Foo(stuff) someobj: ...
[snip]
> and so moving to this might feel a bit better:
>
>     make someobj Foo(stuff): ...

Just to clarify, you mean translating:

    make <callable> <name> <tuple>:
        <block>

into the assignment::

    <name> = <callable>("<name>", <tuple>, <namespace>)

? Looks okay to me. I'm only hesitant because on c.l.py I got a pretty strong push for maintaining compatibility with the class statement.

> With that in mind, I think __call__ might be the wrong method to call on
> the builder. For instance, if you were actually going to implement
> prototypes on this, you wouldn't want to steal all uses of __call__ just
> for the cloning machinery. So __make__ would be nicer. Personally this
> would also let people using older constructs (like a plain
> __call__(**kw)) to keep that in addition to supporting this new construct.

Yeah, I guess the real question here is, do we expect that types will want to support both normal creation and creation using the make statement? If the answer is yes, then we definitely need to introduce a __make__ slot.

Steve
--
Grammar am for people who can't think for myself.
  --- Bucky Katt, Get Fuzzy
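The translation being discussed was never implemented, but it can be emulated today with plain function calls. A minimal sketch, assuming the PEP's draft three-argument protocol `<name> = <callable>("<name>", <tuple>, <namespace>)`; the `make` helper and `body` function are hypothetical names, not part of any proposal:

```python
# Emulate 'make <callable> <name> <tuple>: <block>' by hand.  The block
# becomes a function that fills a namespace dict; the callable then gets
# the name, the tuple, and that namespace -- exactly type()'s signature.

def make(callable_, name, args, block):
    ns = {}
    block(ns)                        # running the block fills the namespace
    return callable_(name, args, ns)

def body(ns):                        # stands in for the indented <block>
    ns["x"] = 1
    ns["greet"] = lambda self: "hello"

# type() already accepts (name, bases, namespace), so the analogue of
# 'make type Foo (object,): ...' works directly:
Foo = make(type, "Foo", (object,), body)

assert Foo.__name__ == "Foo"
assert Foo().x == 1
assert Foo().greet() == "hello"
```

This also shows why the class statement is the special case: any callable with that three-argument signature could serve as the builder.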
Re: [Python-Dev] PEP 359: The "make" Statement
On 4/17/06, Russell E. Owen <[EMAIL PROTECTED]> wrote:
> At some point folks were discussing use cases of "make" where it was
> important to preserve the order in which items were added to the
> namespace.
>
> I'd like to suggest adding an implementation of an ordered dictionary to
> standard python (e.g. as a library or built in type). It's inherently
> useful, and in this case it could be used by "make".

Not to argue against adding an ordered dictionary somewhere to Python, but for the moment I've been convinced that it's not worth the complication to allow the dict in which the make-statement's block is executed to be customized. The original use case had been XML-building, and there are much nicer solutions using the with-statement than there would be with the make-statement.

If you think you have a better (non-XML) use case though, I'm willing to reconsider it. The next update of the PEP will discuss this in more detail, but search c.l.py for some of the discussion.

Steve
--
Grammar am for people who can't think for myself.
  --- Bucky Katt, Get Fuzzy
[Python-Dev] possible fix for recursive __call__ segfault
Bug 532646 is a check for recursive __call__ methods where it is just set to an instance of the same class::

    class A:
        pass

    A.__call__ = A()
    a = A()
    try:
        a()  # This should not segfault
    except RuntimeError:
        pass
    else:
        raise TestFailed, "how could this not have overflowed the stack?"

Turns out this was never handled for new-style classes and thus goes back to 2.4 at least. I don't know if this is a good solution or not, but I came up with this as a quick fix::

    Index: Objects/typeobject.c
    ===================================================================
    --- Objects/typeobject.c    (revision 45499)
    +++ Objects/typeobject.c    (working copy)
    @@ -4585,6 +4585,11 @@
         if (meth == NULL)
             return NULL;
    +    if (meth == self) {
    +        PyErr_SetString(PyExc_RuntimeError,
    +                        "recursive __call__ definition");
    +        return NULL;
    +    }
         res = PyObject_Call(meth, args, kwds);
         Py_DECREF(meth);
         return res;

Of course SF is down (can't wait until the summer when I can do more tracker work) so I can't post there at the moment. But does anyone think there is a better solution to this without some counter somewhere to keep track how far one goes down fetching __call__ attributes?

-Brett
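For reference, the same reproduction can be run safely on interpreters where the C-level recursion guard covers this path; a sketch assuming a modern CPython, where the endless `__call__` lookup trips the recursion limit instead of blowing the C stack:

```python
# The pattern from the bug report: a class whose __call__ is an instance
# of the same class, so every call just looks up another __call__.
class A:
    pass

A.__call__ = A()        # calling an A instance now recurses through __call__
a = A()

try:
    a()
except RuntimeError:    # RecursionError today, a RuntimeError subclass
    result = "guarded"

assert result == "guarded"
```

The guard works because each C-level slot call enters the interpreter's recursion check, so the depth counter Brett wonders about effectively already exists in `Py_EnterRecursiveCall`.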
Re: [Python-Dev] PEP 359: The "make" Statement
At some point folks were discussing use cases of "make" where it was important to preserve the order in which items were added to the namespace.

I'd like to suggest adding an implementation of an ordered dictionary to standard python (e.g. as a library or built in type). It's inherently useful, and in this case it could be used by "make".

-- Russell
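A minimal sketch of what such an insertion-ordered mapping might look like; this is a hypothetical illustration, not a concrete design proposal, keeping a side list of keys in first-insertion order:

```python
# Hypothetical ordered dictionary: dict plus a key list recording
# first-insertion order.  Only the methods needed for the sketch.
class OrderedDict(dict):
    def __init__(self):
        dict.__init__(self)
        self._order = []

    def __setitem__(self, key, value):
        if key not in self:
            self._order.append(key)
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self._order.remove(key)

    def keys(self):
        return list(self._order)

    def items(self):
        return [(k, self[k]) for k in self._order]

d = OrderedDict()
d["z"] = 1
d["a"] = 2
d["z"] = 3                      # overwriting keeps the original position
assert d.keys() == ["z", "a"]
assert d.items() == [("z", 3), ("a", 2)]
```

For the "make" use case, the statement's block would simply execute with an instance of such a mapping as its namespace, preserving definition order.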
Re: [Python-Dev] PEP 359: The "make" Statement
Steven Bethard wrote:
> This PEP proposes a generalization of the class-declaration syntax,
> the ``make`` statement. The proposed syntax and semantics parallel
> the syntax for class definition, and so::
>
>     make <callable> <name> <tuple>:
>         <block>

I can't really see any use case for <tuple>. In particular, you could always choose to implement this:

    make Foo someobj(stuff): ...

like:

    make Foo(stuff) someobj: ...

I don't think I'd naturally use the tuple position for anything, and so it's an arbitrary and usually empty position in the call, just to support type() which already has its own syntax. So maybe it makes less sense to copy the class/metaclass arguments so closely, and so moving to this might feel a bit better:

    make someobj Foo(stuff): ...

And actually it reminds me more of class statements, which are in the form "keyword name(things_you_build_from)". Which then obviously leads to more parenthesis:

    make someobj(Foo(stuff)): ...

Except I don't know what "make someobj(A, B)" would mean, so maybe the parenthesis are uncalled for. I prefer the look of the statement without parenthesis anyway.

Really, to me this syntax feels like support for a more prototype-based construct. And many of the class-abusing metaclasses I've used have really looked similar to prototypes. The "class" statement is caught up in a bunch of very class-like semantics, and a more explicit/manual technique of creating objects opens up lots of potential.

With that in mind, I think __call__ might be the wrong method to call on the builder. For instance, if you were actually going to implement prototypes on this, you wouldn't want to steal all uses of __call__ just for the cloning machinery. So __make__ would be nicer. Personally this would also let people using older constructs (like a plain __call__(**kw)) keep that in addition to supporting this new construct.
--
Ian Bicking / [EMAIL PROTECTED] / http://blog.ianbicking.org
Re: [Python-Dev] pdb segfaults in 2.5 trunk?
Tim Peters wrote: >> I might see if I can work up a patch over the easter long weekend if >> no one beats me to it. What files should I be looking at (it would >> be my first C-level python patch)? Blegh - my parents came to visit ... Tim Delaney ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Returning -1 from function with unsigned long type
Tim> Explicitly casting -1 is both the obvious and best way, and is Tim> guaranteed to "work as intended" by the standards. Thanks. I'll fix 'em. Skip ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] posix_confstr seems wrong
Fred> Looks like a bug to me. It should be set just before confstr() is Fred> called. Thanks. I'll fix, test and check in... Skip ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Returning -1 from function with unsigned long type
[EMAIL PROTECTED]
> I'm fiddling with the "compile Python w/ C++" stuff and came across a number
> of places where a function is defined as returning unsigned long or unsigned
> long long but returns -1. For example, see PyInt_AsUnsignedLongMask.
> What's the correct fix for that, return ~0 (assuming twos-complement
> arithmetic), cast -1 to unsigned long?

Explicitly casting -1 is both the obvious and best way, and is guaranteed to "work as intended" by the standards.

> Or does the API need to be changed somehow?

Well, it's ubiquitous in Python that C API calls returning any kind of integer return -1 (and arrange to make PyErr_Occurred() return true) in case of error. This is clumsy when the integer returned is of an unsigned type, but it _is_ C we're talking about ;-)
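The guarantee Tim cites — that converting -1 to an unsigned type yields that type's maximum value — can be spot-checked from Python with ctypes; a sketch (the exact bit width is platform-dependent):

```python
import ctypes

# C guarantees that converting -1 to an unsigned type wraps modulo 2**N,
# giving the all-bits-set maximum -- i.e. (unsigned long)-1 == ULONG_MAX.
nbits = ctypes.sizeof(ctypes.c_ulong) * 8          # 32 or 64, per platform
assert ctypes.c_ulong(-1).value == 2 ** nbits - 1

# ~0 produces the same bit pattern on two's-complement hardware, but the
# cast of -1 is what the standard guarantees regardless of representation,
# which is why it's the preferred spelling.
assert ctypes.c_ulong(~0).value == ctypes.c_ulong(-1).value
```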
Re: [Python-Dev] posix_confstr seems wrong
On Monday 17 April 2006 17:39, [EMAIL PROTECTED] wrote:
> 1. Why is errno being set to 0?

The C APIs don't promise to clear errno on input; you have to do that yourself.

> 2. Why is errno's value then tested to see if it's not zero?
>
> Looks like this has been that way since December 1999 when Fred added it.

Looks like a bug to me. It should be set just before confstr() is called.

-Fred

--
Fred L. Drake, Jr.
[Python-Dev] posix_confstr seems wrong
More C++ stuff... According to the man page on my Mac:

    If the call to confstr() is not successful, -1 is returned and errno
    is set appropriately.

but the code in posix_confstr looks like:

    if (PyArg_ParseTuple(args, "O&:confstr", conv_confstr_confname, &name)) {
        int len = confstr(name, buffer, sizeof(buffer));

        errno = 0;
        if (len == 0) {
            if (errno != 0)
                posix_error();
            else
                result = PyString_FromString("");
        }
        ...

1. Why is errno being set to 0?
2. Why is errno's value then tested to see if it's not zero?

Looks like this has been that way since December 1999 when Fred added it.

Skip
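The correct ordering — clear errno immediately *before* the call, then consult it only when the call reports failure — can be sketched from Python with ctypes. Assumptions here: a Linux/glibc system where `ctypes.CDLL(None)` exposes libc and `_CS_PATH` has the value 0 (both platform-specific):

```python
import ctypes

# POSIX-only sketch; _CS_PATH = 0 is glibc's value for that constant.
libc = ctypes.CDLL(None, use_errno=True)
libc.confstr.argtypes = [ctypes.c_int, ctypes.c_char_p, ctypes.c_size_t]
libc.confstr.restype = ctypes.c_size_t

_CS_PATH = 0
buf = ctypes.create_string_buffer(1024)

ctypes.set_errno(0)                       # clear errno BEFORE the call
n = libc.confstr(_CS_PATH, buf, len(buf))
if n == 0:
    err = ctypes.get_errno()              # meaningful only because we cleared it
    if err:
        raise OSError(err, "confstr failed")
    value = ""                            # name valid but has no value
else:
    value = buf.value.decode()
```

With errno cleared after the call instead, as in the posted snippet, the error branch could never distinguish "failed" from "no value".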
[Python-Dev] adding Construct to the standard library?
hello folks,

after several people (several > 10) contacted me and said "IMHO 'construct' is a good candidate for stdlib", i thought i should give it a try. of course i'm not saying it should be included right now, but in 6 months time, or such a timeframe (aiming at python 2.6? some 2.5.x release?)

a little intro: "Construct" (http://pyconstruct.sourceforge.net/) is a library for declaratively defining data structures at the bit-level. these constructs can be used to parse raw data into objects, or build objects into raw data. you can see a couple of examples at http://pyconstruct.wikispaces.com/examples

being "data structures" they are not limited to simple structures -- they can be linked lists, for example, or an entire ELF32 file, with sections and pointers (included in the distribution). currently i'm writing a parser of ext2 file systems, to allow inspecting file systems without mounting.

why include Construct?
* the struct module is very nice, but very limited and non-pythonic as well
* pure python (no platform/security issues)
* lots of people need to parse and build binary data structures, it's not an esoteric library
* license: public domain
* quite a large user base for such a short time (proves the need of the community)
* easy to use and extend (follows the componentization pattern)
* declarative: you don't need to write executable code for most cases

why not:
* the code is (very) young. stable and all, but less than a month on the loose.
* new features may still be added / existing ones may be changed in a non-backwards-compatible manner

so why am i saying this now, instead of waiting a few months for it to mature? well, i wanted to get feedback. those of you who have seen/used the library, please tell me what you think:
* is it suitable for a standard library?
* what more features would you want?
* any changes you think are necessary?

i'm starting this now, in order to have a mature version in the (near) future.
thanks,
-tomer
[Python-Dev] Returning -1 from function with unsigned long type
I'm fiddling with the "compile Python w/ C++" stuff and came across a number of places where a function is defined as returning unsigned long or unsigned long long but returns -1. For example, see PyInt_AsUnsignedLongMask. What's the correct fix for that, return ~0 (assuming twos-complement arithmetic), cast -1 to unsigned long? Or does the API need to be changed somehow?

Skip
Re: [Python-Dev] refleaks & test_tcl & threads
[Thomas Wouters]
>> test_threading_local is not entirely consistent, but it looks a lot more
>> reliable on my box than on Neal's automated mails:
>>
>> test_threading_local
>> beginning 11 repetitions
>> 12345678901
>> ...
>> test_threading_local leaked [34, 34, 34, 34, 34, 26, 26, 22, 34] references

[also Thomas]
> This is caused by _threading_local.local's __del__ method, or rather the
> fact that it's part of a closure enclosing threading.enumerate. Fixing the
> inner __del__ to call enumerate (which is 'threading.enumerate') directly,
> rather than through the cellvar 'threading_enumerate', makes the leak go
> away. The fact that the leakage is inconsistent is actually easily
> explained: the 'local' instances are stored on the 'currentThread' object
> indexed by 'key', and keys sometimes get reused (since they're basically
> id(self)), so sometimes an old reference is overwritten. It doesn't quite
> explain why using the cellvar causes the cycle, nor does it explain why
> gc.garbage remains empty. I guess some Thread objects linger in threading's
> _active or _limbo dicts, but I have no idea why having a cellvar in the
> cycle matters; they seem to be participating in GC just fine, and I cannot
> reproduce the leak with a simpler test.
>
> And on top of that, I'm not sure *why* _threading_local.local is doing the
> song and dance to get a cellvar. If the global 'enumerate' (which is
> threading.enumerate) disappears, it will be because Python is cleaning up.
> Even if we had a need to clean up the local dict at that time (which I don't
> believe we do), using a cellvar doesn't guarantee anything more than using a
> global name.

The

    threading_enumerate = enumerate

line creates a persistent local variable at module import time, which (unlike a global name) can't get "None'd out" at shutdown time. BTW, it's very easy to miss that this line _is_ executed at module import time, and is executed only once over the life of the interpreter; more on that below.
> Chances are very good that the 'threading' module has also been
> cleared, meaning that while we still have a reference to
> threading.enumerate, it cannot use the three globals it uses (_active,
> _limbo, _active_limbo_lock.) All in all, I think matters improve
> significantly if it just deals with the NameError it'll get at cleanup
> (which it already does.)

Well, you missed something obvious :-): the code is so clever now that its __del__ doesn't actually do anything. In outline:

    """
    ...  # Threading import is at end

    class local(_localbase):
        ...
        def __del__():
            threading_enumerate = enumerate
            ...
            def __del__(self):
                try:
                    threads = list(threading_enumerate())
                except:
                    # if enumerate fails, as it seems to do during
                    # shutdown, we'll skip cleanup under the assumption
                    # that there is nothing to clean up
                    return
                ...
            return __del__
        __del__ = __del__()

    from threading import currentThread, enumerate, RLock
    """

Lots of questions pop to mind, from why the import is at the bottom of the file, to why it's doing this seemingly bizarre nested-__del__ dance. I don't have good answers to any of them <0.1 wink>, but this is the bottom line: at the time

    threading_enumerate = enumerate

is executed, `enumerate` is bound to __builtin__.enumerate, not to threading.enumerate. That line is executed during the _execution_ of the "class local" statement, the first time the module is imported, and the import at the bottom of the file has not been executed by that time. So there is no global `enumerate` at the time, but there is one in __builtin__ so that's the one it captures. As a result, "the real" __del__'s

    threads = list(threading_enumerate())

line _always_ raises an exception, namely the TypeError "enumerate() takes exactly 1 argument (0 given)", which is swallowed by the bare "except:", and __del__ then returns having accomplished nothing.
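The capture semantics described above can be sketched in miniature. Here `snapshot` is a hypothetical stand-in for the code run during the "class local" statement, and the late assignment to `enumerate` stands in for the import at the bottom of the module (`builtins` being Python 3's name for 2.x's `__builtin__`):

```python
import builtins   # __builtin__ in the Python 2.x of this thread

# snapshot() runs once, at "import" time, BEFORE the module-bottom
# 'from threading import enumerate' equivalent has executed.
def snapshot():
    threading_enumerate = enumerate       # resolved NOW: no module global
    def inner():                          # yet, so this grabs the builtin
        return threading_enumerate
    return inner

inner = snapshot()

# This later rebinding plays the role of the import at the bottom of
# _threading_local.py -- too late to affect what was already captured:
enumerate = lambda: "threading.enumerate stand-in"

assert inner() is builtins.enumerate      # captured the builtin
assert inner() is not enumerate           # not the later rebinding

# Calling the captured builtin with no arguments fails, just like the
# TypeError that __del__'s bare 'except:' silently swallows:
try:
    list(inner()())
except TypeError:
    swallowed = True
assert swallowed
```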
Of course that's the real reason it never cleans anything up now -- while virtually any way of rewriting it causes it to get the _intended_ threading.enumerate, and then the leaks stop. I'll check one of those in.
Re: [Python-Dev] [ python-Patches-790710 ] breakpoint command lists in pdb
Martin v. Löwis wrote:
> Grégoire Dooms wrote:
>> What should I do to get it reviewed further ? (perhaps just this :
>> posting to python-dev :-)
>
> It didn't help that much, except for keeping your mail in my inbox.
>
> In any case, I went back to it and checked it in.

Thanks for taking the time to review it and include it. Now I won't have to apply it by myself each time I upgrade my Python install.

Keep up the good work,

Best,
--
Grégoire
Re: [Python-Dev] windows buildbot failures
Tim Peters wrote:
> No, what's surprising is that it keeps running _forever_. This isn't
> Unix, and, e.g., a defunct child process doesn't sit around waiting
> for its parent to reap it. Why doesn't the leftover python_d.exe
> complete running the test suite, and then go away all by itself? It
> doesn't, no matter how long you wait. That's the mystery to me.

True. But I find that not too surprising: something deadlocks. A perfect deadlock aims to hold until the heat death of the universe; most of them only hold until reboot, or even just process termination.

Now, as to *why* it deadlocks: that's indeed a mystery. But hey: it's Windows, so processes just do get stuck. It took them years to make sure the system continues running in such a case.

> I suppose it's possible that killing cmd.exe actually did work, but
> the buildbot code misreports the outcome, and python_d.exe "runs
> forever" because it's blocked waiting on some resource (console I/O
> handle?) it inherited from its (no longer there) parent process.

It can't be that simple. Python's stdout should indeed be inherited from cmd.exe, but that, in turn, should have obtained it from buildbot. So even though cmd.exe closes its handle, Python's handle should still be fine. If buildbot then closes the other end of the pipe, Python should get ERROR_BROKEN_PIPE.

The only deadlock I can see here is when buildbot does *not* close the pipe, but stops reading from it. In that case, Python's WriteFile would block. If that happens, it would be useful to attach with a debugger to find out where Python got stuck.

Regards,
Martin
[Python-Dev] fat binaries for OSX
Hi,

I've uploaded 3 patches that form the core of the python24-fat tree that Bob Ippolito and I have been maintaining for a while. With these patches one can build fat/universal binaries for python that run natively on OSX 10.3 and later. I'd like to merge these patches to the trunk, but would like some review. I'm especially unhappy with the code duplication in patch 1471925, but don't know how to solve that.

* Patch 1471883: --enable-universalsdk on Mac OS X
  This patch introduces a --enable-universalsdk flag for configure and the
  required changes to the build system to get this to work. When this flag
  is used Python is built as a universal (aka fat) binary.

* Patch 1471761: test for broken poll at runtime
  This patch moves the HAVE_BROKEN_POLL test from configure-time to runtime.
  With this patch we can have a single binary on OSX that works on OSX 10.3.9
  or later while having select.poll available on those versions of the OS
  that have a functioning version of poll().

* Patch 1471925: Weak linking support for OSX
  This patch adds weak linking support to the posix, time and socket modules.
  That is, the existence of a number of functions is tested for at runtime
  (on OSX only). With this patch one can use a python binary that was built
  on OSX 10.4 on OSX 10.3 systems, without losing access to APIs that were
  introduced in 10.4 on OSX 10.4 systems.

Ronald
Re: [Python-Dev] windows buildbot failures
[Tim]
>> ...
>> 2. The buildbot code tries to kill the process itself. It appears (to judge
>>    from the buildbot messages) that this never works on Windows.
>>
>> 3. For reasons that are still unknown, python_d.exe keeps running,
>>    and forever.

[Martin]
> It's actually not too surprising that python_d.exe keeps running.

No, what's surprising is that it keeps running _forever_. This isn't Unix, and, e.g., a defunct child process doesn't sit around waiting for its parent to reap it. Why doesn't the leftover python_d.exe complete running the test suite, and then go away all by itself? It doesn't, no matter how long you wait. That's the mystery to me.

> The buildbot has a process handle for the cmd.exe process that runs
> test.bat. python_d.exe is only a child process of that process. So killing
> cmd.exe wouldn't help, even if it worked.

I suppose it's possible that killing cmd.exe actually did work, but the buildbot code misreports the outcome, and python_d.exe "runs forever" because it's blocked waiting on some resource (console I/O handle?) it inherited from its (no longer there) parent process.
Re: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()?
Ronald Oussoren wrote:
> A couple of lines down it says:
> "The pointer returned by readdir() points to data which may be
> overwritten by another call to readdir() on the same directory
> stream. This data is not overwritten by another call to readdir() on
> a different directory stream."
>
> This explicitly says that implementations cannot use a static dirent
> structure.

Ah, right. I read over this several times, and still managed to miss that point. Thanks.

>> Of course, the most natural implementation associates the storage
>> for the result with the DIR*, so it's probably not a real problem...
>
> If this were a problem on some platform I'd expect it to be so
> ancient that it doesn't offer readdir_r either.

Sure - I would have just removed Py_BEGIN_ALLOW_THREADS on systems which don't have readdir_r. But this is now unnecessary.

Regards,
Martin
Re: [Python-Dev] problem installing current cvs - TabError
Anthony Baxter wrote:
> There's a script Tools/scripts/reindent.py - put it somewhere on your
> PATH and run it before checkin, like "reindent.py -r Lib". It means Tim
> or I don't have to run it for you.

As I kept forgetting what the name, location, and command line options of that script are, I have now added a reindent makefile target.

Regards,
Martin
Re: [Python-Dev] FishEye on Python CVS Repository
Peter Moore wrote:
> I'm responsible for setting up free FishEye hosting for community
> projects. As a long time python user I of course added Python up
> front. You can see it here:
>
> http://fisheye.cenqua.com/viewrep/python/

Can you please move that to the subversion repository (http://svn.python.org/projects/python), or, failing that, remove that entry? The CVS repository is no longer used.

Regards,
Martin
Re: [Python-Dev] [ python-Patches-790710 ] breakpoint command lists in pdb
Grégoire Dooms wrote:
> What should I do to get it reviewed further ? (perhaps just this :
> posting to python-dev :-)

It didn't help that much, except for keeping your mail in my inbox. In any case, I went back to it and checked it in.

Regards,
Martin
Re: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()?
On 17-apr-2006, at 20:50, Martin v. Löwis wrote:
> Ronald Oussoren wrote:
>> AFAIK readdir is only unsafe when multiple threads use the same
>> DIR* at the same time. The spec[1] seems to agree with me.
>> [1]: http://www.opengroup.org/onlinepubs/009695399/functions/readdir.html
>
> What specific sentence makes you think so? I see
>
> "The readdir() interface need not be reentrant."
>
> which seems to allow for an implementation that returns a static
> struct dirent.

A couple of lines down it says:

"The pointer returned by readdir() points to data which may be overwritten by another call to readdir() on the same directory stream. This data is not overwritten by another call to readdir() on a different directory stream."

This explicitly says that implementations cannot use a static dirent structure.

> Of course, the most natural implementation associates the storage
> for the result with the DIR*, so it's probably not a real problem...

If this were a problem on some platform I'd expect it to be so ancient that it doesn't offer readdir_r either.

Ronald
Re: [Python-Dev] windows buildbot failures
> OTOH, we could just as well check in an executable that
> does all that, e.g. like the one in

I did something like this: I checked in the source of a kill_python.exe application which looks at all running processes and tries to kill python_d.exe. After several rounds of experimentation, this was now able to unstick Trent's build slave.

Regards,
Martin
Re: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()?
Ronald Oussoren wrote:
> AFAIK readdir is only unsafe when multiple threads use the same DIR* at
> the same time. The spec[1] seems to agree with me.
> [1]: http://www.opengroup.org/onlinepubs/009695399/functions/readdir.html

What specific sentence makes you think so? I see

"The readdir() interface need not be reentrant."

which seems to allow for an implementation that returns a static struct dirent.

Of course, the most natural implementation associates the storage for the result with the DIR*, so it's probably not a real problem...

Regards,
Martin
Re: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()?
On 17-apr-2006, at 18:19, Martin v. Löwis wrote:
> Currently, the readdir() call releases the GIL. I believe
> this is not thread-safe, because readdir() does not need
> to be re-entrant; we should use readdir_r where available
> to get a thread-safe version.
>
> Comments?

AFAIK readdir is only unsafe when multiple threads use the same DIR* at the same time. The spec[1] seems to agree with me. It seems to me that this means the implementation of listdir in posixmodule.c doesn't need to be changed.

Ronald

[1]: http://www.opengroup.org/onlinepubs/009695399/functions/readdir.html
Re: [Python-Dev] FYI: more clues re: tee+generator leak
At 12:53 PM 4/17/2006 -0400, Phillip J. Eby wrote:
> By the way, the above cycle will leak even if the generator is never
> iterated even once; it's quite simple to set up. I'm testing this using
> -R:: on test_generators, and hacking on the _fib function and friends.

Follow-up note: it's possible to create the same leak with this code:

    l = []
    a, b = tee(l)
    l.append(b)

which -R:: reports as leaking 4 references. If you "l.append(a)" instead of 'b', there is no leak. This showed that the problem was actually in the itertools module, as no generators are involved here.

After staring at tee_copy until my eyes bled, I accidentally scrolled such that tee_new was on the screen at the same time, and noticed that tee_copy was missing a call to PyObject_GC_Track(). So then I fixed everything up and tried to check it in, only to find that Thomas Wouters had already found and fixed this yesterday.

The moral of the story? Always catch up on the Python-checkins list before trying to track down cycle leaks. :)
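The minimal reproducer above can be checked directly on a current CPython, where the missing PyObject_GC_Track call has long since been fixed. A small sketch (the variable names here are mine, not from the original report):

```python
import gc
from itertools import tee

# The cycle from the message: l -> b -> tee_dataobject -> iter(l) -> l.
l = []
a, b = tee(l)        # both tee objects share one tee_dataobject
l.append(b)          # closes the cycle through the shared dataobject

# Drop every external reference; only the cycle keeps the objects alive.
del l, a, b
collected = gc.collect()
assert collected > 0  # the collector finds and frees the cycle -- no leak
```

On a fixed interpreter the tee objects are GC-tracked, so gc.collect() reclaims the cycle instead of leaking it on every round.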
Re: [Python-Dev] PyObject_REPR()
If PyObject_REPR changes or gets renamed in Py2.5, I suggest modifying the implementation so that it returns a newly allocated C pointer rather than one embedded in an inaccessible (unfreeable) PyStringObject. Roughly:

    r = PyObject_Repr(o);
    if (r == NULL)
        return NULL;
    s1 = PyString_AS_STRING(r);
    s2 = strdup(s1);
    Py_DECREF(r);
    return s2;

The benefits are:

* it won't throw off leak checking (no Python objects get leaked)
* the leak is slightly smaller (only the allocated string)
* if the caller cares about memory, they have the option of freeing the returned pointer
* error-checking is still possible.

Neal Norwitz wrote:
> Ok, then how about prefixing with _, adding a comment saying in big,
> bold letters: FOR DEBUGGING PURPOSES ONLY, THIS LEAKS, and only
> defining it in a debug build?
>
> n
> --
> On 4/11/06, Jeremy Hylton <[EMAIL PROTECTED]> wrote:
>> It's intended as an internal debugging API. I find it very convenient
>> for adding with a printf() here and there, which is how it got added
>> long ago. It should really have a comment mentioning that it leaks
>> the repr object, and starting with an _ wouldn't be bad either.
>>
>> Jeremy
>>
>> On 4/11/06, Neal Norwitz <[EMAIL PROTECTED]> wrote:
>>> On 4/11/06, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
>>>> It strikes me that it should not be used, or maybe renamed to
>>>> _PyObject_REPR. Should removing or renaming it be done in 2.5
>>>> or in Py3K?
>>>>
>>>> Since it is intrinsically buggy, I would support removal in Py2.5
>>> +1 on removal. Google only turned up a handful of uses that I saw.
[Python-Dev] FYI: more clues re: tee+generator leak
I've been fiddling a bit with test_generators this morning, and have found that a stripped-down version of the fibonacci test only leaks if the generator has a reference to a *copied* tee object. It doesn't matter whether the copied tee object is the second result from tee(), or if you just create a single tee object and use its __copy__() method; the leak only occurs if the cycle is:

    geniter -> frame -> ... -> copied_tee -> tdo ---+
       ^                                            |
       +--------------------------------------------+

The "..." is to indicate that the frame may reference the object directly as a local variable, or via a cell. I've tried it both ways and it still leaks. Replacing "copied_tee" with an uncopied tee object does *not* leak.

I have no idea what this means, although I've been staring at the relevant itertools code for some time now. It doesn't appear that the traverse functions are skipping anything.

By the way, the above cycle will leak even if the generator is never iterated even once; it's quite simple to set up. I'm testing this using -R:: on test_generators, and hacking on the _fib function and friends.
[Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()?
Currently, the readdir() call releases the GIL. I believe this is not thread-safe, because readdir() does not need to be re-entrant; we should use readdir_r where available to get a thread-safe version.

Comments?

Regards,
Martin
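Whatever the C-level resolution, the property user code relies on can be illustrated from Python: each os.listdir() call opens its own directory stream internally, so concurrent calls (during which the GIL is released around readdir()) do not stomp on each other. A small sketch, with all names my own:

```python
import os
import threading

results = []
lock = threading.Lock()

def scan(path="."):
    # Each listdir() call uses an independent DIR* internally, so
    # concurrent calls never share a directory stream.
    names = sorted(os.listdir(path))
    with lock:
        results.append(names)

threads = [threading.Thread(target=scan) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread saw a consistent, identical view of the directory.
assert len(results) == 4
assert all(r == results[0] for r in results)
```

This only demonstrates the per-call-stream behavior the spec guarantees; it says nothing about a hypothetical platform whose readdir() returned a static struct dirent.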
Re: [Python-Dev] remote debugging with pdb
Ilya Sandler wrote:
> There is a patch on SourceForge
>
> python.org/sf/721464
>
> which allows pdb to read/write from/to arbitrary file objects. Would it
> answer some of your concerns (eg remote debugging)?
>
> The patch probably will not apply to the current code, but I guess, I
> could revive it if anyone thinks that it's worthwhile...
>
> What do you think?

I just looked at it, and yes, it's a good idea. As you say, the patch is currently out of date. It is probably easiest to redo it from scratch; if you do, please use print redirections instead of self.file.write.

Regards,
Martin
Re: [Python-Dev] [C++-sig] GCC version compatibility
David Abrahams wrote:
> I just wanted to write to encourage some Python developers to look at
> (and accept!) Christoph's patch. This is really crucial for smooth
> interoperability between C++ and Python.

I did, and accepted the patch. If there is anything left to be done, please submit another patch.

Regards,
Martin
Re: [Python-Dev] windows buildbot failures
Tim Peters wrote:
> 2. The buildbot code tries to kill the process itself. It appears (to
> judge from the buildbot messages) that this never works on Windows.
>
> 3. For reasons that are still unknown, python_d.exe keeps running,
> and forever.

It's actually not too surprising that python_d.exe keeps running. The buildbot has a process handle for the cmd.exe process that runs test.bat. python_d.exe is only a child process of that process. So killing cmd.exe wouldn't help, even if it worked.

Regards,
Martin
Re: [Python-Dev] windows buildbot failures
Neal Norwitz wrote:
> If the patch won't fix the problem, is there something else we can do
> to ensure the python DLL is no longer used regardless of whether the
> previous test passed or not?

Rebooting the machine will help, and might be the only cure. It's Windows, after all :-( Of course, we shouldn't do that, and even if it was ok to reboot "remotely", the buildbot likely wouldn't come back automatically.

> If we can get the process handle, can we call
> subprocess.TerminateProcess()?

You get the process handle either from CreateProcess (which buildbot did, so we can't get the handle), or from OpenProcess. For OpenProcess, we need a process id. One way to get that is through Process32First/Process32Next. These would provide the executable path, so it should be easy to find out which one is a python_d.exe binary. None of these functions is exposed through subprocess, so this is no option. In addition, I believe that buildbot *tries* to use TerminateProcess. The code is twisted, though, so it is hard to tell what actually happens.

Of course, it would be possible to do this all in VisualBasic, so we could check in a vbscript file, similar to the one in

http://support.microsoft.com/kb/q187913/

OTOH, we could just as well check in an executable that does all that, e.g. like the one in

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/perfmon/base/enumerating_all_modules_for_a_process.asp

Regards,
Martin
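For what it's worth, the later standard-library analogue of this is Popen.terminate() (added in Python 2.6), which wraps TerminateProcess() on Windows. A hedged sketch of killing a stuck child by its own handle, rather than by killing its parent shell:

```python
import subprocess
import sys

# Spawn a child interpreter that would otherwise run "forever",
# standing in for a stuck python_d.exe.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(600)"])
assert child.poll() is None   # still running

# terminate() calls TerminateProcess() on the child's handle on Windows
# (SIGTERM on POSIX) -- no need to go through the parent cmd.exe at all.
child.terminate()
child.wait()
assert child.returncode is not None
```

This sidesteps the problem described above only because Popen holds the child's handle directly; finding an orphaned python_d.exe by name would still require the Process32First/Process32Next enumeration Martin describes.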
Re: [Python-Dev] Py_Finalize does not release all memory, not even closely
Tim Peters wrote:
> Putting a collection call inside an initialize/finalize loop isn't
> doing it late, it's doing it early. If we can't collect cyclic trash
> after Py_Initialize(), that would be a showstopper for apps embedding
> Python "in a loop"! There's either nothing to fear here, or Python
> has a very bad bug.

Right. I did that, and it collects 308 objects after the first call in the second "round" of Py_Initialize/Py_Finalize, and then no additional objects. However, I don't think that helps much: Py_Finalize will call PyGC_Collect() anyway, and before any counts are made.

> Are you thinking of this comment?:

Yes; I was assuming you suggested to enable that block of code.

> I wrote that, and think it's pretty clear: after PyImport_Cleanup(),
> so little of the interpreter still exists that _any_ problem while
> running Python code has a way of turning into a fatal problem.

Right. I still haven't tried it, but it might be that, after a plain Py_Initialize/Py_Finalize sequence, no such problems will occur, and that it would be safe to call it in this specific case.

> Could you check in the code you're using?

I had to modify code in ways that shouldn't be checked in, e.g. by putting API calls into _Py_PrintReferenceAddresses, even though the comment says it doesn't call any API. When I get to clean this up, I'll check it in.

With some debugging, I now found a "leak" that contributes quite a few of these garbage objects: each round of Py_Initialize/Py_Finalize will leave a CodecInfo type behind. I think it comes from this block of code:

    /* Note that as of Python 2.2, heap-allocated type objects
     * can go away, but this code requires that they stay alive
     * until program exit.  That's why we're careful with
     * refcounts here.  type_list gets a new reference to tp,
     * while ownership of the reference type_list used to hold
     * (if any) was transferred to tp->tp_next in the line above.
     * tp is thus effectively immortal after this.
     */
    Py_INCREF(tp);

so that this "leak" would only exist if COUNT_ALLOCS is defined. I would guess that even more of the leaking type objects (16 per round) can be attributed to this. This completely obstructs measurements, and could well explain why the number of leaked objects is so much higher when COUNT_ALLOCS is defined.

OTOH, I can see why "this code requires that they stay alive". Any ideas on how to solve this dilemma? Perhaps the type_list could be a list of weak references, so that the types do have a chance to go away when the last instance disappears?

Regards,
Martin
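The weak-reference idea at the end can be sketched at the Python level (the names here are mine, not the actual COUNT_ALLOCS code): if type_list held weak references instead of owning references, a heap type could be collected once nothing else keeps it alive.

```python
import gc
import weakref

type_list = []  # would hold weakrefs instead of owning references

class HeapType:          # stands in for a heap-allocated type object
    pass

type_list.append(weakref.ref(HeapType))
assert type_list[0]() is HeapType   # type still reachable while referenced

del HeapType             # drop the last strong reference
gc.collect()             # heap types sit in reference cycles
assert type_list[0]() is None       # the weakref let the type go away
```

The trade-off, of course, is that the per-type allocation counts the list exists to report would vanish along with the type, which is presumably why the original code pins the types until exit.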
[Python-Dev] Summer of Code preparation
We've only got a short time to get set up for Google's Summer of Code. We need to start identifying mentors and collecting ideas for students to implement. We have the SimpleTodo list (http://wiki.python.org/moin/SimpleTodo), but nothing on the SoC page yet (http://wiki.python.org/moin/SummerOfCode).

I can help manage the process from inside Google, but I need help gathering mentors and ideas. I'm not certain of the process, but if you are interested in being a mentor, send me an email. I will try to find all the necessary info and post here again tomorrow.

Pass the word! I hope all mentors from last year will return again this year. Can someone take ownership of drumming up mentors and ideas? We also need to spread the word to c.l.p and beyond.

Thanks,
n