pytest-timeout 0.4
Hi,

I'm pleased to announce the availability of pytest-timeout 0.4: https://pypi.python.org/pypi/pytest-timeout/0.4

pytest-timeout is a plugin for the py.test testing framework which interrupts hanging tests after a timeout and shows stack traces for all threads at that point. This can greatly ease debugging of certain issues, especially when running tests on a continuous integration server.

This release adds support for using pytest-timeout in conjunction with the --pdb option of py.test: when a test fails and py.test drops you into an interactive pdb session, pytest-timeout no longer times out the test. Additionally this release fixes a bug where a hang in the teardown of a session-scoped fixture would not be caught by pytest-timeout.

Regards,
Floris

-- Debian GNU/Linux -- The Power of Freedom www.debian.org | www.gnu.org | www.kernel.org -- https://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations/
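For context, the timeout such a plugin enforces is typically configured per test run; a minimal sketch using pytest-timeout's documented timeout setting (the value is illustrative):

```ini
; pytest.ini -- fail any test that runs longer than 300 seconds
[pytest]
timeout = 300
```

The same limit can be given on the command line as py.test --timeout=300.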
[issue15617] FAIL: test_create_connection (test.test_socket.NetworkConnectionNoServer)
Floris Bruynooghe added the comment: Oops, I've kicked the bruynooghe-solaris-csw buildslave and it should now be building again. A bit disappointed that buildbot/twisted doesn't reconnect automatically though. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15617 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue20671] test_create_at_shutdown_with_encoding() of test_io hangs on SPARC Solaris 10 OpenCSW 3.x
Floris Bruynooghe added the comment: Turns out that the timeout is configured in the buildmaster's master.cfg, which Antoine Pitrou has kindly taken care of. It should also run tests with a bit more parallelism now, which will hopefully reduce the 10h runtime a bit, but it remains a slow box. -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20671 ___
[issue15643] Support OpenCSW in setup.py
New submission from Floris Bruynooghe: This patch proposes to add out-of-the-box support for building against OpenCSW libraries on Solaris. It makes building all the extension modules a lot simpler since the CSW repositories provide almost all required libraries. The order of preference is /usr/local, then /opt/csw, which should prefer libraries manually installed by the admin. -- components: Build files: csw_setup.py.diff keywords: patch messages: 168156 nosy: flub priority: normal severity: normal status: open title: Support OpenCSW in setup.py type: behavior versions: Python 3.3, Python 3.4 Added file: http://bugs.python.org/file26789/csw_setup.py.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15643 ___
[issue15617] FAIL: test_create_connection (test.test_socket.NetworkConnectionNoServer)
Floris Bruynooghe added the comment: I have no issue with changing the buildhost's zone configuration if that's the right thing to do. Just one more option: is widening the expected errno in the test a valid thing to do? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15617 ___
[issue15617] FAIL: test_create_connection (test.test_socket.NetworkConnectionNoServer)
New submission from Floris Bruynooghe: The SPARC Solaris 10 OpenCSW 3.x builder fails with:

==
FAIL: test_create_connection (test.test_socket.NetworkConnectionNoServer)
--
Traceback (most recent call last):
  File "/export/home/buildbot/buildarea/3.x.bruynooghe-solaris-csw/build/Lib/test/test_socket.py", line 4101, in test_create_connection
    self.assertEqual(cm.exception.errno, errno.ECONNREFUSED)
AssertionError: 128 != 146

Here 128 is ENETUNREACH. I think the issue here is that socket.create_connection iterates over the result of socket.getaddrinfo('localhost', port, 0, SOCK_STREAM), which returns [(2, 2, 0, '', ('127.0.0.1', 0)), (26, 2, 0, '', ('::1', 0, 0, 0))] on this host. The first result is tried and returns ECONNREFUSED, but then the second address is tried and this returns ENETUNREACH because this host has no IPv6 network configured. And create_connection() raises the last exception it received. If getaddrinfo() is called with the AI_ADDRCONFIG flag then it will only return the IPv4 version of localhost. -- components: Tests messages: 167867 nosy: flub priority: normal severity: normal status: open title: FAIL: test_create_connection (test.test_socket.NetworkConnectionNoServer) type: behavior versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15617 ___
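The effect of the flag can be seen directly; this is a sketch only, since the exact tuples returned depend on the host's /etc/hosts and configured interfaces:

```python
import socket

# Without AI_ADDRCONFIG, getaddrinfo() may return both an IPv4 and an
# IPv6 result for "localhost", even when no IPv6 network is configured.
plain = socket.getaddrinfo("localhost", 0, 0, socket.SOCK_STREAM)

# With AI_ADDRCONFIG, address families for which the host has no
# configured address are filtered out of the result.
filtered = socket.getaddrinfo("localhost", 0, 0, socket.SOCK_STREAM,
                              0, socket.AI_ADDRCONFIG)

for family, type_, proto, canonname, sockaddr in filtered:
    print(family, sockaddr)
```

Each entry is the usual (family, type, proto, canonname, sockaddr) 5-tuple that create_connection() iterates over.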
[issue15617] FAIL: test_create_connection (test.test_socket.NetworkConnectionNoServer)
Floris Bruynooghe added the comment: It was my understanding that this is what the AI_ADDRCONFIG flag is for; if you don't use it you have no such guarantee. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15617 ___
[issue15617] FAIL: test_create_connection (test.test_socket.NetworkConnectionNoServer)
Floris Bruynooghe added the comment: I think this is influenced by what you have in /etc/hosts. On my laptop I also have the IPv6 loopback as well as an IPv6 link-local address on eth0, but I have both 127.0.0.1 and ::1 in /etc/hosts as localhost. With that configuration I get the same getaddrinfo results as on the Solaris host (which, btw, has the same /etc/hosts configuration for localhost, i.e. both IPv4 and IPv6). Basically I don't think loopback and link-local addresses count as configured addresses for getaddrinfo. Btw, removing the ::1 localhost line from /etc/hosts on the Solaris host does fix the issue and gives the same results you show, but I don't think this is correct. My Linux laptop behaves exactly the same as the Solaris host here. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15617 ___
[issue15589] Bus error on Debian sparc
Floris Bruynooghe added the comment: Running on Solaris 10 (T1000, OpenCSW toolchain, gcc 4.6.3) I also get a bus error, with added coredump:

$ ./python Lib/test/regrtest.py
== CPython 3.3.0b1 (default:67a994d5657d, Aug 8 2012, 21:43:48) [GCC 4.6.3]
== Solaris-2.10-sun4v-sparc-32bit big-endian
== /export/home/flub/python/cpython/build/test_python_7320
Testing with flags: sys.flags(debug=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0, verbose=0, bytes_warning=0, quiet=0, hash_randomization=1)
[ 1/369] test_grammar
[ 2/369] test_opcodes
[ 3/369] test_dict
[ 4/369] test_builtin
[ 5/369] test_exceptions
test test_exceptions failed -- Traceback (most recent call last):
  File "/export/home/flub/python/cpython/Lib/test/test_exceptions.py", line 432, in testChainingDescriptors
    self.assertTrue(e.__suppress_context__)
AssertionError: False is not true
[ 6/369/1] test_types
[ 7/369/1] test_unittest
[ 8/369/1] test_doctest
[ 9/369/1] test_doctest2
[ 10/369/1] test_support
[ 11/369/1] test___all__
[ 12/369/1] test___future__
[ 13/369/1] test__locale
[ 14/369/1] test__osx_support
[ 15/369/1] test_abc
[ 16/369/1] test_abstract_numbers
[ 17/369/1] test_aifc
[ 18/369/1] test_argparse
[ 19/369/1] test_array
[ 20/369/1] test_ast
[ 21/369/1] test_asynchat
[ 22/369/1] test_asyncore
[ 23/369/1] test_atexit
[ 24/369/1] test_audioop
[ 25/369/1] test_augassign
[ 26/369/1] test_base64
[ 27/369/1] test_bigaddrspace
[ 28/369/1] test_bigmem
[ 29/369/1] test_binascii
[ 30/369/1] test_binhex
[ 31/369/1] test_binop
[ 32/369/1] test_bisect
[ 33/369/1] test_bool
[ 34/369/1] test_buffer
[ 35/369/1] test_bufio
[ 36/369/1] test_bytes
[ 37/369/1] test_bz2
[ 38/369/1] test_calendar
[ 39/369/1] test_call
[ 40/369/1] test_capi
Fatal Python error: Bus error

Current thread 0x0001:
  File "/export/home/flub/python/cpython/Lib/test/test_capi.py", line 264 in test_skipitem
  File "/export/home/flub/python/cpython/Lib/unittest/case.py", line 385 in _executeTestPart
  File "/export/home/flub/python/cpython/Lib/unittest/case.py", line 440 in run
  File "/export/home/flub/python/cpython/Lib/unittest/case.py", line 492 in __call__
  File "/export/home/flub/python/cpython/Lib/unittest/suite.py", line 105 in run
  File "/export/home/flub/python/cpython/Lib/unittest/suite.py", line 67 in __call__
  File "/export/home/flub/python/cpython/Lib/unittest/suite.py", line 105 in run
  File "/export/home/flub/python/cpython/Lib/unittest/suite.py", line 67 in __call__
  File "/export/home/flub/python/cpython/Lib/test/support.py", line 1312 in run
  File "/export/home/flub/python/cpython/Lib/test/support.py", line 1413 in _run_suite
  File "/export/home/flub/python/cpython/Lib/test/support.py", line 1447 in run_unittest
  File "/export/home/flub/python/cpython/Lib/test/test_capi.py", line 290 in test_main
  File "Lib/test/regrtest.py", line 1219 in runtest_inner
  File "Lib/test/regrtest.py", line 941 in runtest
  File "Lib/test/regrtest.py", line 714 in main
  File "Lib/test/regrtest.py", line 1810 in <module>
Bus Error (core dumped)

Not sure if this should be tracked in the same issue or not? -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15589 ___
[issue15589] Bus error on Debian sparc
Floris Bruynooghe added the comment: I compiled with a simple ./configure which I think is what you mean (it defaults to -O3). But when executing your test it doesn't give a bus error. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15589 ___
[issue15589] Bus error on Debian sparc
Floris Bruynooghe added the comment: I think I can confirm this fixes the BusError. The test suite got past test_capi on my machine as well. Unfortunately I killed the ssh session by accident before the testsuite completed so I had to restart it. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15589 ___
[issue15589] Bus error on Debian sparc
Floris Bruynooghe added the comment: I can now confirm the whole testsuite runs, so the BusError part seems fixed on my host:

329 tests OK.
7 tests failed:
    test_cmd_line test_exceptions test_ipaddress test_os test_raise test_socket test_traceback
1 test altered the execution environment:
    test_site
32 tests skipped:
    test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses test_dbm_gnu test_epoll test_gdb test_kqueue test_lzma test_msilib test_ossaudiodev test_pep277 test_readline test_smtpnet test_socketserver test_sqlite test_ssl test_startfile test_tcl test_timeout test_tk test_ttk_guionly test_ttk_textonly test_unicode_file test_urllib2net test_urllibnet test_winreg test_winsound test_xmlrpc_net test_zipfile64
8 skips unexpected on sunos5:
    test_lzma test_readline test_smtpnet test_ssl test_tcl test_tk test_ttk_guionly test_ttk_textonly

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15589 ___
[issue8881] socket.getaddrinfo() should return named tuples
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: Attached is a patch for this; I've also changed the version to 3.4 since this is a feature and therefore probably too late to go into 3.3. Please let me know if anything is inadequate. -- keywords: +patch versions: +Python 3.4 -Python 3.3 Added file: http://bugs.python.org/file26287/getaddrinfo.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8881 ___
[issue14290] Importing script as module causes ImportError with pickle.load
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: Hi, I think this is a usage error; if not, you should try to provide a test case with both files for this. Pickle needs to be able to import the module which contains the classes under the same name as the original module. That means pickling an instance of a class defined in a script will not work unless it is the same script which did the pickling. The object is probably pickled under the name __main__.YourClass, and when you import it in another script it will be objectScript.YourClass, hence pickle is unable to find the class for the object you are trying to unpickle. -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14290 ___
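The mechanism can be seen by inspecting the pickle stream; YourClass here is just a stand-in for the class defined in the original script:

```python
import pickle

class YourClass:  # stands in for the class defined in the user's script
    pass

obj = YourClass()
data = pickle.dumps(obj)

# The stream stores the class by module and qualified name, not the
# class definition itself; unpickling re-imports that module and looks
# the name up there, which is why the module must be importable under
# the same name on the unpickling side.
print(YourClass.__module__, YourClass.__qualname__)
```

If the class was defined in a script run directly, __module__ is "__main__", and another script importing the same file sees it under a different module name, which is exactly the failure described above.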
[issue8881] socket.getaddrinfo() should return named tuples
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: I think the part which could possibly be a problem is addressed in http://hg.python.org/cpython/rev/384f73a104e9/. Bearing in mind that direct usage for string interpolation is a pretty strange use for the result of getaddrinfo. -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8881 ___
[issue1975] signals not always delivered to main thread, since other threads have the signal unmasked
Changes by Floris Bruynooghe floris.bruynoo...@gmail.com: -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1975 ___
Re: Need some IPC pointers
I'm surprised no one has mentioned zeromq as a transport yet. It provides scaling from in-process (between threads) to inter-process and remote machines in a fairly transparent way. It's obviously not the Python stdlib and, as with any system, there are downsides too. Regards, Floris -- http://mail.python.org/mailman/listinfo/python-list
[issue13338] Not all enumerations used in _Py_ANNOTATE_MEMORY_ORDER
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: Apologies for not attaching a patch, I thought it was pretty trivial. Attached it now. -- keywords: +patch Added file: http://bugs.python.org/file23616/pyatomic.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13338 ___
[issue13338] Not all enumerations used in _Py_ANNOTATE_MEMORY_ORDER
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: Hi, When compiling using gcc and -Werror=switch-enum the compilation fails, e.g. while compiling an extension module:

In file included from /usr/include/python3.2mu/Python.h:52:0,
                 from src/util.c:27:
/usr/include/python3.2mu/pyatomic.h: In function ‘_Py_ANNOTATE_MEMORY_ORDER’:
/usr/include/python3.2mu/pyatomic.h:61:5: error: enumeration value ‘_Py_memory_order_relaxed’ not handled in switch [-Werror=switch-enum]
/usr/include/python3.2mu/pyatomic.h:61:5: error: enumeration value ‘_Py_memory_order_acquire’ not handled in switch [-Werror=switch-enum]
/usr/include/python3.2mu/pyatomic.h:70:5: error: enumeration value ‘_Py_memory_order_relaxed’ not handled in switch [-Werror=switch-enum]
/usr/include/python3.2mu/pyatomic.h:70:5: error: enumeration value ‘_Py_memory_order_release’ not handled in switch [-Werror=switch-enum]

This could be easily resolved without any drawbacks by simply listing the missing enumeration items together with the default. And that would enable extensions to be built using -Werror=switch-enum again. Regards, Floris -- components: Interpreter Core messages: 146993 nosy: flub priority: normal severity: normal status: open title: Not all enumerations used in _Py_ANNOTATE_MEMORY_ORDER type: compile error versions: Python 3.2, Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13338 ___
[issue12419] Add ident parameter to SysLogHandler
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: It would be nice if the SysLogHandler also accepted an ident parameter, in line with the syslog.openlog() function. This simply prepends the string passed in as ident to each log message; currently this needs to be implemented with a log filter which modifies the record. -- components: Library (Lib) messages: 139260 nosy: flub priority: normal severity: normal status: open title: Add ident parameter to SysLogHandler type: feature request versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12419 ___
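The filter-based workaround mentioned above can be sketched like this; a StreamHandler stands in for SysLogHandler so the sketch is self-contained, and the IdentFilter name is illustrative:

```python
import io
import logging

class IdentFilter(logging.Filter):
    """Prepend a fixed ident string to every record's message."""

    def __init__(self, ident):
        super().__init__()
        self.ident = ident

    def filter(self, record):
        # getMessage() interpolates any %-args, so clear them afterwards.
        record.msg = self.ident + record.getMessage()
        record.args = None
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)  # a SysLogHandler in real use
logger = logging.getLogger("ident-demo")
logger.addHandler(handler)
logger.propagate = False
logger.addFilter(IdentFilter("myapp: "))

logger.warning("disk %s is full", "/dev/sda1")
print(stream.getvalue())  # "myapp: disk /dev/sda1 is full"
```

An ident parameter on the handler itself would make this boilerplate unnecessary, which is what this issue requests.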
[issue12419] Add ident parameter to SysLogHandler
Changes by Floris Bruynooghe floris.bruynoo...@gmail.com: -- nosy: +vinay.sajip ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12419 ___
[issue12419] Add ident parameter to SysLogHandler
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: That was quick, thanks! -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue12419 ___
Condition.wait(timeout) oddities
Hi all,

I'm a little confused about the corner cases of Condition.wait() with a timeout parameter in the threading module.

When looking at the code, the first thing that I don't quite get is that the timeout should never work as far as I understand it: .wait() always needs to return while holding the lock, therefore it does an .acquire() on the lock in a finally clause, thus pretty much ignoring the timeout value.

The second issue is that while looking around for this I found two bug reports: http://bugs.python.org/issue1175933 and http://bugs.python.org/issue10218. Both propose to add a return value indicating whether the .wait() timed out or not, similar to the other .wait() methods in threading. However the first was rejected after some (seemingly inconclusive) discussion, while the latter had minimal discussion and was accepted without reference to the earlier attempt. Not sure if this was a process oversight or what, but it does leave the situation confusing. But regardless, I don't understand how the return value can be used currently: yes, you did time out, but you're still promised to hold the lock thanks to the .acquire() call on the lock in the finally block.

In my small brain I just can't figure out how Condition.wait() can both respect a timeout parameter and the promise to hold the lock on return. It seems to me that the only way to handle the timeout is to raise an exception rather than return a value, because when you get an exception you can break the promise of holding the lock. But maybe I'm missing something important or obvious, so I'd be happy to be enlightened!

Regards,
Floris
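For reference, on Pythons that include the issue 10218 change, the behaviour being discussed can be observed directly: wait() returns False on timeout, and the lock really is reacquired before it returns:

```python
import threading

cond = threading.Condition()
with cond:
    # Nobody will ever notify this condition, so the call times out.
    # wait() reacquires the lock before returning and (since issue
    # 10218) returns False to signal the timeout.
    notified = cond.wait(timeout=0.1)
print(notified)  # False

# The with-block released the reacquired lock, so it is free again:
assert cond.acquire(blocking=False)
cond.release()
```

So both properties hold at once: the timeout is respected, and the lock is held on return; the caller just has to release it as usual.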
Re: Condition.wait(timeout) oddities
On Monday, 23 May 2011 17:32:19 UTC, Chris Torek wrote:

> In article 94d1d127-b423-4bd4...@glegroupsg2000goo.googlegroups.com Floris Bruynooghe comp.lan...@googlegroups.com wrote:
>> I'm a little confused about the corner cases of Condition.wait() with a timeout parameter in the threading module. When looking at the code the first thing that I don't quite get is that the timeout should never work as far as I understand it. .wait() always needs to return while holding the lock, therefore it does an .acquire() on the lock in a finally clause. Thus pretty much ignoring the timeout value.
>
> It does not do a straight acquire, it uses self._acquire_restore(), which for a condition variable, does instead:
>
>     self.__block.acquire()
>     self.__count = count
>     self.__owner = owner
>
> (assuming that you did not override the lock argument or passed in a threading.RLock() object as the lock), due to this bit of code in _Condition.__init__():
>
>     # If the lock defines _release_save() and/or _acquire_restore(),
>     # these override the default implementations (which just call
>     # release() and acquire() on the lock).  Ditto for _is_owned().
>     [snippage]
>     try:
>         self._acquire_restore = lock._acquire_restore
>     except AttributeError:
>         pass

Ah, I missed this bit in the __init__() and the fact that RLock provides the _acquire_restore() and _release_save(). I was wondering why they jumped around via self._acquire_restore() and self._release_save(); it seemed rather a lot of undocumented effort for custom locks.

> That is, the lock it holds is the one on the blocking lock (the __block of the underlying RLock), which is the same one you had to hold in the first place to call the .wait() function. To put it another way, the lock that .wait() waits for is a new lock allocated for the duration of the .wait() operation.

That makes more sense now. I knew that really, just never quite realised until you wrote this here so clearly. Thanks.
So essentially the condition's lock is only meant to lock the internal state of the condition and is not meant to be held for long periods outside of that, as .wait() calls would not be able to return. My confusion started from looking at queue.Queue, which replaces the lock with a regular lock and uses it to lock the Queue's resource. I guess the Queue's mutex satisfies the requirement of never being held for a long time.

>> The second issue is that while looking around for this I found two bug reports: http://bugs.python.org/issue1175933 and http://bugs.python.org/issue10218. Both are proposing to add a return value indicating whether the .wait() timed out or not similar to the other .wait() methods in threading. However the first was rejected after some (seemingly inconclusive) discussion.
>
> Tim Peters' reply seemed pretty conclusive to me. :-)

Which is why I'm surprised that it now does.

Cheers,
Floris
[issue3526] Customized malloc implementation on SunOS and AIX
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment:

On 29 April 2011 17:16, Antoine Pitrou rep...@bugs.python.org wrote:

>> Yes, I was probably not clear: when --with-dlmalloc is activated, PyMem_MALLOC/PyMem_Malloc will call dlmalloc, PyMem_REALLOC/PyMem_Realloc will call dlrealloc and PyMem_FREE/PyMem_Free will call dlfree, while calls to malloc/free/realloc will use the platform implementation.
>
> I'm not sure why you would want that. If dlmalloc is clearly superior, why not use it for all allocations inside the application (not only Python ones)?

For the same reason that extension modules can choose between PyMem_Malloc and plain malloc (or whatever else). Python has never forced its malloc on extension modules, why should it now?

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue3526 ___
[issue3526] Customized malloc implementation on SunOS and AIX
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment:

>> So by using dlmalloc on SunOS and AIX you would get the same level of performance for memory operations that you already probably can appreciate on Linux systems.
>
> Yes, but with the above trick, you can do that without patching python nor your app. I mean, if you start embedding malloc in python, why stop there, and not embed the whole glibc ;-) Note that I realize this won't solve the problem for other AIX users (if there are any left :-), but since this patch doesn't seem to be gaining adhesion, I'm just proposing an alternative that I find cleaner, simpler and easier to maintain.

This trick is hard to find however, and I don't think it serves Solaris and AIX users very well (and sadly IBM keeps pushing AIX, so yes, it's used more than I'd like :-( ). So how about a --with-dlmalloc=path/to/dlmalloc.c? This way the dlmalloc code does not live inside Python and doesn't need to be maintained by Python, but Python still supports the code and can easily be built using it. Add a note in the README for AIX and Solaris and I think this would be a lot friendlier to users. This is similar to how Python uses e.g. openssl to provide optional extra functionality/performance.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue3526 ___
Re: Twisted and txJSON-RPC
On Sunday, April 11, 2010 5:04:49 PM UTC+1, writeson wrote:

> I get an error message: error: docs/PRELUDE.txt: No such file or directory

The setup.py code is trying to be too clever and the released package is missing files it requires. The easiest way to fix it is to simply get the latest code from the VCS, which contains all the required files. That's what I did anyway:

    bzr branch lp:txjsonrpc

Then just do your usual favourite incantation of python setup.py install --magic-options-to-make-setuptools-sane-for-you.

Regards,
Floris
[issue5672] Implement a way to change the python process name
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: There are actually a few implementations on PyPI, just search for prctl. At least one of them is pretty decent IIRC, but I can't remember which one I looked at in detail before. Anyway, they would certainly be a reasonable starting point for Python inclusion. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5672 ___
[issue3526] Customized malloc implementation on SunOS and AIX
Changes by Floris Bruynooghe floris.bruynoo...@gmail.com: -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue3526 ___
[issue9912] Fail when vsvarsall.bat produces stderr
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: It would have saved me a lot of time if msvc9compiler failed when executing the vsvarsall.bat file produced any output on stderr. The attached patch does this and fails when I try to compile from within a Cygwin environment. I've also tested this from the normal Windows command prompt and there building does succeed with this patch applied. -- assignee: tarek components: Distutils files: msvc9.diff keywords: patch messages: 117067 nosy: eric.araujo, flub, tarek priority: normal severity: normal status: open title: Fail when vsvarsall.bat produces stderr type: feature request versions: Python 3.2, Python 3.3 Added file: http://bugs.python.org/file18950/msvc9.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9912 ___
[issue9912] Fail when vsvarsall.bat produces stderr
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: I'm aware of that, but my limited testing showed that in this case that doesn't happen. However, if it is considered too brittle to just plain fail as soon as there's stderr output, how about using distutils' log facility to log the stderr at a reasonable level (warning?)? That way at least you'll be able to see something useful when you get a failure with a strange-looking and far less meaningful traceback a few lines lower. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9912 ___
[issue9912] Fail when vsvarsall.bat produces stderr
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: msvc9_log.diff does log stderr at warning level when it occurs. -- Added file: http://bugs.python.org/file18961/msvc9_log.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9912 ___
[issue9608] Re-phrase best way of using exceptions in doanddont.rst
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: The description of how to best use exceptions is slightly confusing and led me to believe there was an issue when using open() as a context manager. The main issue is that the wording seems to suggest that the example above it is the best, rather than the very last one. Attached is a patch which uses a slightly different wording which IMHO makes it clearer that the with-statement is the preferred method and does not introduce subtle bugs. -- assignee: d...@python components: Documentation files: doandont.diff keywords: patch messages: 113949 nosy: d...@python, flub priority: normal severity: normal status: open title: Re-phrase best way of using exceptions in doanddont.rst type: feature request versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3 Added file: http://bugs.python.org/file18538/doandont.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9608 ___
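The pattern in question, sketched here for reference: the with-statement closes the file even when the body raises, with none of the subtle ordering bugs a hand-written try/finally can have:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)  # mkstemp returns an open fd we don't need here

# Preferred: the context manager guarantees the file is closed, even
# if the body raises.
with open(path, "w") as f:
    f.write("hello")

# The try/finally spelling it replaces; note that open() must happen
# *before* the try, or a failed open would make f.close() blow up.
f = open(path)
try:
    data = f.read()
finally:
    f.close()

os.unlink(path)
print(data)  # "hello"
```

Both forms are correct here; the point of the doc change is that the with-statement is the preferred one.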
Re: How to read source code of python?
On Jun 10, 8:55 am, Thomas Jollans tho...@jollans.com wrote:
> On 06/10/2010 07:25 AM, Qijing Li wrote:
>> Thanks for your reply. I'm trying to understand the Python language deeply and use it efficiently. For example: how does the in operator work on a list? Is the running time O(n)? If my list is sorted, what would the running time be?

Taking this example, you know you want the in operator, which you somehow need to know is implemented by the __contains__ protocol (you can find this in the expressions section of the Language Reference). Now you can either know what objects look like in C (follow the Extending and Embedding tutorial, specifically the Defining New Types section) and therefore know you need to look at the sq_contains slot of the PySequenceMethods structure. Or you could just locate the list object in Objects/listobject.c (which you can easily find by looking at the source tree) and search for contains. Both ways will lead you pretty quickly to the list_contains() function in Objects/listobject.c. And now you just need to know the C API (again in the docs) to be able to read it (even if you don't, that's a pretty straightforward function to read).

Hope that helps,
Floris
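The same protocol is available to Python classes; a toy sketch (the class name is illustrative) showing that the in operator dispatches to __contains__, here with the O(log n) membership test a sorted list allows:

```python
import bisect

class SortedSeq:
    """Toy container keeping its items sorted (illustrative only)."""

    def __init__(self, items):
        self._items = sorted(items)

    def __contains__(self, value):
        # The "in" operator calls this method; bisect gives O(log n)
        # membership on the sorted backing list, versus the O(n) scan
        # a plain list's sq_contains performs.
        i = bisect.bisect_left(self._items, value)
        return i < len(self._items) and self._items[i] == value

seq = SortedSeq([3, 1, 2])
print(2 in seq, 5 in seq)  # True False
```

This is the Python-level counterpart of the sq_contains slot mentioned above.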
[issue8906] Document TestCase attributes in class docstring
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: The unittest.TestCase class has some public attributes: failureException, longMessage and maxDiff. They each have a description in a comment, but I think it would be good if that description got moved into the class docstring so that it would be found using help(). -- components: Library (Lib) messages: 107132 nosy: ezio.melotti, flub priority: normal severity: normal status: open title: Document TestCase attributes in class docstring type: feature request versions: Python 2.7, Python 3.2 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8906 ___
[issue8351] Suppress large diffs in unittest.TestCase.assertSequenceEqual()
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: This patch adds the ability to suppress large diffs in the failure message of TestCase.assertSequenceEqual(). The maximum size of the diff is customisable as a new keyword parameter with hopefully a sensible default. -- components: Library (Lib) files: case_seq.diff keywords: patch messages: 102653 nosy: flub severity: normal status: open title: Suppress large diffs in unittest.TestCase.assertSequenceEqual() type: feature request versions: Python 3.3 Added file: http://bugs.python.org/file16831/case_seq.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8351 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: converting a timezone-less datetime to seconds since the epoch
On Apr 7, 9:57 am, Chris Withers ch...@simplistix.co.uk wrote: Chris Rebert wrote: To convert from struct_time in ***UTC*** to seconds since the epoch use calendar.timegm() ...and really, wtf is timegm doing in calendar rather than in time? ;-) You're not alone in finding this strange: http://bugs.python.org/issue6280 (the short apologetic reason is that timegm is written in Python rather than in C) Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
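For reference, the round-trip in question; time.mktime() would be wrong here because it interprets the struct_time as local time, while calendar.timegm() is its UTC counterpart:

```python
import calendar
import time

seconds = 1234567890
utc = time.gmtime(seconds)               # epoch seconds -> UTC struct_time
assert calendar.timegm(utc) == seconds   # and back again, losslessly
```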
Re: Queue peek?
On Mar 2, 6:18 pm, Raymond Hettinger pyt...@rcn.com wrote: On Mar 2, 8:29 am, Veloz michaelve...@gmail.com wrote: Hi all I'm looking for a queue that I can use with multiprocessing, which has a peek method. I've seen some discussion about queue.peek but don't see anything in the docs about it. Does python have a queue class with peek semantics? Am curious about your use case? Why peek at something that could be gone by the time you want to use it. val = q.peek() if something_i_want(val): v2 = q.get() # this could be different than val Wouldn't it be better to just get() the value and return if you don't need it? val = q.peek() if not something_i_want(val): q.put(val) What I have found myself wanting when thinking of this pattern is a q.put_at_front_of_queue(val) method. I've never actually used this because of not having such a method. Not that it's that much of an issue as I've never been completely stuck and usually found a way to solve whatever I was trying to do without peeking, which could be argued as a better design in the first place. I was just wondering if other people ever missed the q.put_at_front_of_queue() method or if it is just me. Regards Floris PS: assuming val = q.get() on the first line -- http://mail.python.org/mailman/listinfo/python-list
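The wished-for put_at_front_of_queue() can be sketched as a small wrapper. PutBackQueue is hypothetical and only covers the single-process queue module; a multiprocessing version would additionally need shared state and locking:

```python
import queue

class PutBackQueue:
    """Hypothetical sketch: a queue whose consumer can push an item back."""

    def __init__(self):
        self._q = queue.Queue()
        self._front = []              # items put back; served first, LIFO

    def put(self, item):
        self._q.put(item)

    def get(self):
        if self._front:
            return self._front.pop()
        return self._q.get()

    def put_at_front_of_queue(self, item):
        self._front.append(item)

q = PutBackQueue()
q.put(1)
q.put(2)
val = q.get()                 # val = 1, inspected and found unwanted...
q.put_at_front_of_queue(val)  # ...so put it back at the front
assert q.get() == 1           # the put-back item comes out first again
assert q.get() == 2
```

Unlike peek(), this keeps the get-then-decide logic race-free for the single consumer doing it, which is the property the thread is after.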
Ad hoc lists vs ad hoc tuples
One thing I often wonder is which is better when you just need a throwaway sequence: a list or a tuple? E.g.: if foo in ['some', 'random', 'strings']: ... if [bool1, bool2, bool3].count(True) != 1: ... (The last one only works with tuples since Python 2.6, when tuples gained the count() method) Is a list or tuple better or more efficient in these situations? Regards Floris PS: This is inspired by some of the space-efficiency comments from the list.pop(0) discussion. -- http://mail.python.org/mailman/listinfo/python-list
Re: Ad hoc lists vs ad hoc tuples
On Jan 27, 10:15 pm, Terry Reedy tjre...@udel.edu wrote: On 1/27/2010 12:32 PM, Antoine Pitrou wrote: Le Wed, 27 Jan 2010 02:20:53 -0800, Floris Bruynooghe a écrit : Is a list or tuple better or more efficient in these situations? Tuples are faster to allocate (they are allocated in one single step) and quite a bit smaller too. Thanks for all the answers! This is what I was expecting but it's nice to see it confirmed. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
Re: Install script under a different name
On Dec 5, 1:52 am, Lie Ryan lie.1...@gmail.com wrote: on linux/unix, you need to add the proper #! line to the top of any executable scripts and of course set the executable bit permission (chmod +x scriptname). In linux/unix there is no need to have the .py extension for a file to be recognized as python script (i.e. just remove it). The #! line will even get replaced by the interpreter used during installation, so you can safely write #!/usr/bin/env python in your development copy and get #!/usr/bin/python when users install it. -- http://mail.python.org/mailman/listinfo/python-list
Re: question about subprocess and shells
On Dec 4, 9:38 pm, Ross Boylan r...@biostat.ucsf.edu wrote: If one uses subprocess.Popen(args, ..., shell=True, ...) When args finishes execution, does the shell terminate? Either way seems problematic. Essentially this is executing /bin/sh -c args so if you're unsure as to the behaviour just try it on your command line. Basically once the pipeline in args has finished the shell has nothing more to do and will return itself (and the return code of the shell depends on the return code of the pipeline executed, which is normally the return code of the last process executed). Of course when I say pipeline it could also be a single command, a command list, or any other valid shell input. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
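This is easy to check from Python itself; the snippet below assumes a POSIX /bin/sh with the true and false utilities:

```python
import subprocess

# shell=True hands the string to /bin/sh -c; the shell exits when the
# pipeline is done and its status is that of the *last* command.
proc = subprocess.Popen("true | false", shell=True)
proc.wait()
assert proc.returncode == 1   # `false` ran last

proc = subprocess.Popen("false | true", shell=True)
proc.wait()
assert proc.returncode == 0   # `true` ran last; the earlier failure is masked
```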
Re: Bored.
On Nov 30, 11:52 pm, Stef Mientki stef.mien...@gmail.com wrote: Well I thought that after 2 years you would know every detail of a language ;-) Ouch, I must be especially stupid then! ;-) Floris -- http://mail.python.org/mailman/listinfo/python-list
[issue7407] Minor Queue doc improvement
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: The documentation of the queue module (Queue in 2.x) does not mention that the constructors have a default argument of 0 for maxsize. The trivial patch adds this (patch against py3k trunk). -- assignee: georg.brandl components: Documentation files: queue.diff keywords: patch messages: 95806 nosy: flub, georg.brandl severity: normal status: open title: Minor Queue doc improvement type: behavior versions: Python 2.6, Python 2.7, Python 3.0, Python 3.1, Python 3.2 Added file: http://bugs.python.org/file15413/queue.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7407 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
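The behaviour the patch documents, shown with the Python 3 queue module:

```python
import queue

q = queue.Queue()         # maxsize defaults to 0, meaning "unbounded"
assert q.maxsize == 0
for i in range(10000):
    q.put(i)              # never blocks: no bound is ever reached
assert q.qsize() == 10000
```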
Re: how to create a pip package
On Nov 10, 2:30 pm, Phlip phlip2...@gmail.com wrote: On Nov 10, 1:54 am, Wolodja Wentland wentl...@cl.uni-heidelberg.de wrote: http://docs.python.org/library/distutils.html#module-distutils http://packages.python.org/distribute/ ktx... now some utterly retarded questions to prevent false starts. the distutils page starts with from distutils.core import setup. but a sample project on github, presumably a pippable project, starts with: from setuptools import setup, find_packages (and it also has ez_setup()) I don't foresee my project growing larger than one* file any time soon. 'pip freeze' prob'ly won't write a setup.py. What is the absolute simplest setup.py to stick my project on the PYTHONPATH, and be done with it? Do what the distutils page says; setuptools tries to extend distutils with many fancy things, but if you don't need those (and it sounds like you don't) then sticking to distutils is better, as you only require the Python stdlib. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
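An absolutely minimal setup.py of the kind the distutils page describes might look as follows; the project and module names are placeholders, and note that distutils itself was removed from the stdlib in Python 3.12 (setuptools provides a drop-in setup() there):

```python
# setup.py -- minimal single-module project; names are placeholders.
from distutils.core import setup

setup(
    name='myscript',
    version='0.1',
    py_modules=['myscript'],   # installs myscript.py onto sys.path
)
```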
[issue5672] Implement a way to change the python process name
Changes by Floris Bruynooghe floris.bruynoo...@gmail.com: -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5672 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: Best Way to Handle All Exceptions
On Jul 13, 2:26 pm, seldan24 selda...@gmail.com wrote: The first example: from ftplib import FTP try: ftp = FTP(ftp_host) ftp.login(ftp_user, ftp_pass) except Exception, err: print err *If* you really do want to catch *all* exceptions (as mentioned already it is usually better to catch specific exceptions) this is the way to do it. To know why you should look at the class hierarchy on http://docs.python.org/library/exceptions.html. The reason is that you almost never want to be catching SystemExit, KeyboardInterrupt etc.; catching them will give you trouble at some point (unless you really know what you're doing, but then I would suggest you list them explicitly instead of using the bare except statement). While it is true that you could raise an object that is not a subclass of Exception it is very bad practice and you should never do that. And I haven't seen an external module in the wild that does that in years, and the stdlib will always play nice. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
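A small demonstration of the difference: SystemExit derives from BaseException directly, so except Exception lets it propagate where a bare except would have swallowed it (risky() is a made-up example function):

```python
def risky():
    raise SystemExit(2)      # e.g. something called sys.exit(2)

caught = None
try:
    try:
        risky()
    except Exception as err:          # does NOT swallow SystemExit
        caught = ('exception', err)
except SystemExit as err:
    caught = ('system-exit', err.code)

assert caught == ('system-exit', 2)   # it sailed past "except Exception"
```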
Re: Where does setuptools live?
On Jul 4, 4:50 pm, David Wilson d...@botanicus.net wrote: I'm trying to create a patch for a diabolical issue I keep running into, but I can't seem to find the setuptools repository. Is it this one? http://svn.python.org/view/sandbox/trunk/setuptools/ It is, see http://mail.python.org/pipermail/distutils-sig/2009-July/012374.html It's seen no changes in 9 months. It's setuptools... I'm sure you can find many flamefests on distutils-sig about this. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
[issue6405] Redundant redeclarations in descrobject.h
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: There are redundant redeclarations for PyGetSetDescr_Type and PyMemberDescr_Type in descrobject.h. This is an issue when compiling an extension module with the -Wredundant-decls flag:

In file included from /usr/local/include/python3.1/Python.h:98, from src/util.c:27:
/usr/local/include/python3.1/descrobject.h:76: error: redundant redeclaration of ‘PyGetSetDescr_Type’
/usr/local/include/python3.1/descrobject.h:71: error: previous declaration of ‘PyGetSetDescr_Type’ was here
/usr/local/include/python3.1/descrobject.h:77: error: redundant redeclaration of ‘PyMemberDescr_Type’
/usr/local/include/python3.1/descrobject.h:72: error: previous declaration of ‘PyMemberDescr_Type’ was here
error: command 'gcc' failed with exit status 1

The patch is trivial. -- components: Extension Modules files: descrobject.diff keywords: patch messages: 90047 nosy: flub severity: normal status: open title: Redundant redeclarations in descrobject.h type: compile error versions: Python 3.1, Python 3.2 Added file: http://bugs.python.org/file14435/descrobject.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6405 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue6336] nb_divide missing in docs
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: http://docs.python.org/c-api/typeobj.html#number-object-structures is missing the entry for nb_divide, this is confusing. -- assignee: georg.brandl components: Documentation messages: 89664 nosy: flub, georg.brandl severity: normal status: open title: nb_divide missing in docs type: behavior versions: Python 2.6, Python 2.7 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6336 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: multi-thread python interpreters and c++ program
On Jun 9, 6:50 am, myopc my...@aaa.com wrote: I am running a c++ program (boost python) which creates many python interpreters, and each runs a python script that uses multi-threading (threading). When the c++ main program exits, I want to shut down the python interpreters, but it crashed. Your threads are daemonic, so you could be seeing http://bugs.python.org/issue1856 You'll have to check your stack in a debugger to know. But as said this can be avoided by making the threads finish themselves and joining them. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
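The suggested alternative, making the threads finish themselves and joining them, sketched in pure Python:

```python
import threading

stop = threading.Event()
results = []

def worker():
    while not stop.is_set():      # do work until asked to stop
        stop.wait(0.01)
    results.append('clean exit')  # orderly shutdown, no daemon thread

t = threading.Thread(target=worker)   # daemon is left False on purpose
t.start()
stop.set()                            # ask the thread to finish itself
t.join()                              # and wait for it before exiting
assert results == ['clean exit']
```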
[issue5201] Using LDFLAGS='-rpath=\$$LIB:/some/other/path' ./configure breaks the build
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: Hi What's the status of this? I haven't seen a commit message regarding this. Cheers -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5201 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1856] shutdown (exit) can hang or segfault with daemon threads running
Changes by Floris Bruynooghe floris.bruynoo...@gmail.com: -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1856 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5201] Using LDFLAGS='-rpath=\$$LIB:/some/other/path' ./configure breaks the build
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: Oh, sorry about the super() that is why the ar test failed then. Sorry, I got a little confused by the conflicting update on that file while working on this patch and must have merged it badly. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5201 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5998] Add __bool__ to threading.Event and multiprocessing.Event
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: I think it would allow for more pythonic code if the threading.Event and multiprocessing.Event classes had the __bool__ special method. This would allow doing if e: ... instead of if e.is_set(): This could be backported to 2.x really easily by just renaming __bool__ to __nonzero__. See also the thread starting here: http://mail.python.org/pipermail/python-ideas/2009-May/004617.html -- components: Library (Lib) files: event.diff keywords: patch messages: 87587 nosy: flub severity: normal status: open title: Add __bool__ to threading.Event and multiprocessing.Event type: feature request versions: Python 2.7, Python 3.2 Added file: http://bugs.python.org/file13959/event.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5998 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
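A Python 3 sketch of the behaviour the patch would give the real classes (on 2.x the special method would be spelled __nonzero__, as the submission notes):

```python
import threading

class BoolEvent(threading.Event):
    def __bool__(self):
        return self.is_set()

e = BoolEvent()
assert not e          # not set: falsy
e.set()
assert e              # set: truthy -- "if e:" now reads naturally
```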
[issue5201] Using LDFLAGS='-rpath=\$$LIB:/some/other/path' ./configure breaks the build
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: The updated patch inserts the single $ when needed. I've checked this on compiling python, stdlib extension modules and custom extension modules and this gives the correct results in all cases. -- Added file: http://bugs.python.org/file13962/makevars2.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5201 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5991] Add non-command help topics to help completion of cmd.Cmd
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: The cmd.Cmd module has a default complete_help() method which will complete all existing commands (methods starting with do_). It would be useful to complete all existing help topics too by default, i.e. all methods starting with help_. The attached patch does this. -- components: Library (Lib) files: cmd.diff keywords: patch messages: 87557 nosy: flub severity: normal status: open title: Add non-command help topics to help completion of cmd.Cmd type: feature request versions: Python 2.7 Added file: http://bugs.python.org/file13954/cmd.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5991 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
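A quick illustration of what the patch targets; Shell is a made-up example class, and whether 'intro' is offered depends on whether this behaviour is present in your Python version:

```python
import cmd

class Shell(cmd.Cmd):
    def do_greet(self, arg):          # a command: completes after "help "
        print('hello')

    def help_intro(self):             # a help topic with no matching command
        print('an introductory help topic')

shell = Shell()
# Arguments mimic what readline passes: (text, line, begidx, endidx).
completions = shell.complete_help('', 'help ', 5, 5)
assert 'greet' in completions         # commands are always completed
print('intro' in completions)         # True once help_* topics are included
```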
[issue5941] customize_compiler broken
Changes by Floris Bruynooghe floris.bruynoo...@gmail.com: -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5941 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5900] Ensure RUNPATH is added to extension modules with RPATH if GNU ld is used
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: I'm not convinced that would help much. The GNULD variable in the makefile is for when the default linker is used. If you change that by using LDSHARED then you're probably not going to be using --rpath but LDFLAGS to configure it the way you want. If anything, maybe using configure/Makefile to detect if GNU ld is used is wrong just for the case they use LDSHARED (I didn't think of this before), since then they can use LDSHARED and --rpath and get mysterious failures. But it seems a lot more complicated to do: LDSHARED can be set to something like cc -shared (the default), in which case we can't use -V and assume it's a non-GNU ld if we don't get GNU back. So we'd have to try and detect if LDSHARED is set to a compiler or a linker, then try to find which linker gets invoked etc. A lot more complicated and way more possibilities than I can test. I'd argue that when someone uses LDSHARED they should be using LDFLAGS instead of --rpath; they obviously know what they are doing. --rpath is there if you want to use the environment Python was compiled in to build an extension module with an RPATH/RUNPATH in. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5900 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5201] Using LDFLAGS='-rpath=\$$LIB:/some/other/path' ./configure breaks the build
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: The attached patch does fix this issue. Concerning the specific example of LDFLAGS used here, there is still an issue with LDFLAGS being ignored by the build for the shared modules, but that is another issue. -- keywords: +patch Added file: http://bugs.python.org/file13845/makevars.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5201 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5201] Using LDFLAGS='-rpath=\$$LIB:/some/other/path' ./configure breaks the build
Floris Bruynooghe floris.bruynoo...@gmail.com added the comment: Hmm, the patch isn't quite right yet. When a $$ is present in the makefile .parse_makefile() needs to return a single $. I'm not sure yet what needs to happen with the \ for the shell escape. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5201 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5900] Ensure RUNPATH is added to extension modules with RPATH if GNU ld is used
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: The build_ext command does accept a handy --rpath option to encode an RPATH in the built extension modules. However RPATH is superseded by RUNPATH, since the former can not be overridden by the LD_LIBRARY_PATH environment variable, while the latter can. While most linkers will add a RUNPATH automatically when you ask for an RPATH, GNU ld does not do this. Therefore this patch detects if GNU ld is used and if so uses the --enable-new-dtags option, which will add the RUNPATH. -- assignee: tarek components: Distutils files: runpath.diff keywords: patch messages: 86924 nosy: flub, tarek severity: normal status: open title: Ensure RUNPATH is added to extension modules with RPATH if GNU ld is used type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file13833/runpath.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5900 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5854] logging module's __all__ attribute not in sync with documentation
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: The logging module in Python 2.6 has started to use the __all__ attribute. However it does not list all the symbols that are described in the documentation. Most notably the getLogger function is not in the __all__ list, but there are others like addLevelName, getLoggerClass, setLoggerClass, ... This does break code that does from logging import * which suddenly can't use getLogger etc. anymore. -- components: Library (Lib) messages: 86653 nosy: flub severity: normal status: open title: logging module's __all__ attribute not in sync with documentation type: behavior versions: Python 2.6 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5854 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5726] ld_so_aix does exit successfully even in case of failure
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: ld_so_aix is used to invoke the linker correctly on AIX. However when the linking fails the script happily returns 0 and a Makefile using it will assume all went well. See the trivial patch attached. -- components: Build files: ld_so_aix.diff keywords: patch messages: 85807 nosy: flub severity: normal status: open title: ld_so_aix does exit successfully even in case of failure type: compile error versions: Python 2.5, Python 2.6, Python 2.7, Python 3.0, Python 3.1 Added file: http://bugs.python.org/file13661/ld_so_aix.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5726 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: Python C API String Memory Consumption
On Apr 7, 2:10 pm, John Machin sjmac...@lexicon.net wrote: On Apr 7, 9:19 pm, MRAB goo...@mrabarnett.plus.com wrote: k3xji wrote: Interestingly, I changed the malloc()/free() usage to the PyMem_xx APIs and the problem was resolved. However, I really cannot understand why the first version does not work. Here is the latest code that has no problems at all:

static PyObject *
penc(PyObject *self, PyObject *args)
{
    PyObject *result = NULL;
    unsigned char *s = NULL;
    unsigned char *buf = NULL;
    unsigned int v, len, i = 0;

    if (!PyArg_ParseTuple(args, "s#", &s, &len))
        return NULL;
    buf = (unsigned char *) PyMem_Malloc(len);
    if (buf == NULL) {
        PyErr_NoMemory();
        return NULL;
    }
    /* string manipulation. */
    result = PyString_FromStringAndSize((char *)buf, len);
    PyMem_Free(buf);
    return result;
}

I assume you're doing a memcpy() somewhere in there... This is also safer than your first version since the Python string can contain an embedded \0 and the strdup() of the first version would not copy that. But maybe you're sure your input doesn't have NULs in it, so it might be fine. In general I'd say don't mix your memory allocators. I don't know whether CPython implements PyMem_Malloc using malloc, The fantastic manual (http://docs.python.org/c-api/memory.html#overview) says: the C allocator and the Python memory manager ... implement different algorithms and operate on different heaps. but it's better to stick with CPython's memory allocators when writing for CPython. for the reasons given in the last paragraph of the above reference. That document explicitly says you're allowed to use malloc() and free() in extensions. There is nothing wrong with allocating things on different heaps; I've done and seen it many times and never had trouble. Why the original problem occurred I don't understand either though. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP 3143: Standard daemon process library
On Mar 21, 11:06 pm, Ben Finney ben+pyt...@benfinney.id.au wrote: Floris Bruynooghe floris.bruynoo...@gmail.com writes: Had a quick look at the PEP and it looks very nice IMHO. Thank you. I hope you can try the implementation and report feedback on that too. One of the things that might be interesting is keeping file descriptors from the logging module open by default. Hmm. I see that this would be a good idea, but it raises the question of how to manage the set of file handles that should not be closed on becoming a daemon. So far, the logic of closing the file descriptors is a little complex: * Close all open file descriptors. This excludes those listed in the `files_preserve` attribute, and those that correspond to the `stdin`, `stdout`, or `stderr` attributes. Extending that by saying “… and also any file descriptors for ``logging.FileHandler`` objects” starts to make the description too complex. I have a strong instinct that if the description is complex, the design might be bad. Can you suggest an alternative API that will ensure that all file descriptors get closed *except* those that should not be closed? Not an answer yet, but I'll try to find time in the next few days to play with this and tell you what I think. logging.FileHandler would be too narrow in any case I think. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
Re: distutils compiler flags for extension modules
On Mar 20, 9:48 am, Christian Meesters meest...@gmx.de wrote: as I got no answers with the previous question (subject: disabling compiler flags in distutils), I thought I should ask the question in a different way: Is there an option to set the compiler flags for a C/C++ extension in distutils? There is the extra_compile_args option in the Extension class, yet this only offers a way to give additional flags, but I'd like to have 'total' control over the compile args. Any hint? You can subclass the build_ext class and override .finalize_options() to do something like:

    def finalize_options(self):
        build_ext.finalize_options(self)
        for ext in self.extensions:
            # fiddle with ext.extra_compile_args
            ...

And if that isn't enough you can modify the compiler (with some flags) by overriding .build_extension() and modifying self.compiler using its .set_executables() method (file:///usr/share/doc/python2.5/html/dist/module-distutils.ccompiler.html#l2h-37) before calling build_ext.build_extension(). Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
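A slightly fuller sketch of such a subclass; the flags are examples only, and note that distutils was removed from the stdlib in Python 3.12 (setuptools ships compatible replacements):

```python
from distutils.command.build_ext import build_ext

class BuildExtTotalControl(build_ext):
    """Sketch: take complete control of the compile arguments."""

    def finalize_options(self):
        build_ext.finalize_options(self)
        for ext in self.extensions or []:
            # Replace rather than extend: whatever flags were
            # configured elsewhere are discarded.
            ext.extra_compile_args = ['-O0', '-g']   # example flags

# Wired up in setup.py via:
#   setup(..., cmdclass={'build_ext': BuildExtTotalControl})
```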
Re: PEP 3143: Standard daemon process library (was: Writing a well-behaved daemon)
On Mar 20, 9:58 am, Ben Finney ben+pyt...@benfinney.id.au wrote: Ben Finney b...@benfinney.id.au writes: Writing a Python program to become a Unix daemon is relatively well-documented: there's a recipe for detaching the process and running in its own process group. However, there's much more to a Unix daemon than simply detaching. […] My searches for such functionality haven't borne much fruit though. Apart from scattered recipes, none of which cover all the essentials (let alone the optional features) of 'daemon', I can't find anything that could be relied upon. This is surprising, since I'd expect this in Python's standard library. I've submitted PEP 3143 URL:http://www.python.org/dev/peps/pep-3143/ to meet this need, and have re-worked an existing library into a new ‘python-daemon’ URL:http://pypi.python.org/pypi/python-daemon/ library, the reference implementation. Now I need wider testing and scrutiny of the implementation and specification. Had a quick look at the PEP and it looks very nice IMHO. One of the things that might be interesting is keeping file descriptors from the logging module open by default. So that you can setup your loggers before you daemonise --I do this so that I can complain on stdout if that gives trouble-- and are still able to use them once you've daemonised. I haven't looked at how feasable this is yet so it might be difficult, but useful anyway. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
[issue1785] inspect gets broken by some descriptors
Changes by Floris Bruynooghe floris.bruynoo...@gmail.com: -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1785 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: memory recycling/garbage collecting problem
On Feb 17, 5:31 am, Chris Rebert c...@rebertia.com wrote: My understanding is that for efficiency purposes Python hangs on to the extra memory even after the object has been GC-ed and doesn't give it back to the OS right away. Even if Python did free() the space no longer used by its own memory allocator (PyMem_Malloc(), PyMem_Free() & Co), the OS usually doesn't return this space to the global free memory pool but instead leaves it assigned to the process, again for performance reasons. Only when the OS is running out of memory will it go and reclaim the free()d memory of processes. There might be a way to force your OS to do so earlier manually if you really want, but I'm not sure how you'd do that. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
Re: Pythonic way to determine if a string is a number
On Feb 16, 12:05 am, Mel mwil...@the-wire.com wrote: Christian Heimes wrote: Roy Smith wrote: They make sense when you need to recover from any error that may occur, possibly as the last resort after catching and dealing with more specific exceptions. In an unattended embedded system (think Mars Rover), the top-level code might well be: while 1: try: main() except: reset() Do you really want to except SystemExit, KeyboardInterrupt, MemoryError and SyntaxError? Exactly. A normal program should never do anything more comprehensive than try: some_function () except StandardError: some_handling () Hmm, most places advocate or even outright recommend deriving your own exceptions from Exception and not from StandardError. So maybe your catch-all should be Exception? In that case you would be catching warnings though; no idea what influence that has on the warning system. Regards Floris PS: Does anybody know why StopIteration derives from Exception while GeneratorExit derives from BaseException? This could be as annoying/confusing as Warning. -- http://mail.python.org/mailman/listinfo/python-list
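The class relationships under discussion are easy to verify directly (Python 3 shown, where StandardError no longer exists):

```python
assert issubclass(Warning, Exception)            # "except Exception" catches warnings
assert issubclass(StopIteration, Exception)
assert issubclass(GeneratorExit, BaseException)
assert not issubclass(GeneratorExit, Exception)  # the quirk from the PS
assert not issubclass(KeyboardInterrupt, Exception)
```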
Re: Pythonic way to determine if a string is a number
On Feb 16, 7:09 am, Python Nutter pythonnut...@gmail.com wrote: silly me, forgot to mention build a set from digits + '.' and use that for testing. `.' is locale dependent. Some locales might use `,' instead, and maybe there's even more out there that I don't know of. So developing this yourself from scratch seems dangerous; let it bubble down to libc, which should handle it correctly. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
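One way to let the conversion itself decide, instead of hand-building a digit set; note that Python's own float() always accepts '.' regardless of locale, and locale.atof() would be the route to honour a locale's ',':

```python
def is_number(s):
    """Classify s by attempting the conversion rather than testing characters."""
    try:
        float(s)              # accepts '3.14', '-2e10', 'inf', ...
    except ValueError:
        return False
    return True

assert is_number('3.14')
assert is_number('-2e10')
assert not is_number('3.14.15')
assert not is_number('abc')
```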
[issue5201] Using LDFLAGS='-rpath=\$$LIB:/some/other/path' ./configure breaks the build
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: When specifying an RPATH with -rpath or -R you can use the special tokens `$LIB' and `$ORIGIN', which the runtime linker interprets as the normal search path and as relative to the current sofile, respectively. To get these correctly to the gcc command line you need to specify this in LDFLAGS as `\$$LIB' to work around escapes of both the makefile and shell, so in the Python Makefile this will appear somewhere as (this is on one line): CONFIG_ARGS= '--prefix=/opt/example.com/python25' 'LDFLAGS=-Wl,-rpath=\$$LIB:/opt/example.com/lib,--enable-new-dtags' This works for compiling the main python binary. But when the extension modules get compiled distutils chokes on this. distutils.sysconfig.parse_makefile() thinks that any value of a variable that contains `$' in it refers to another variable in the makefile. It will fail to find the value and CONFIG_ARGS will not be defined. This then fails in setup.py for the _ctypes extension:

    if not '--with-system-ffi' in sysconfig.get_config_var("CONFIG_ARGS"):
        return

where `None' is returned instead of a list by .get_config_var(). It seems that distutils.sysconfig.parse_makefile() needs to understand more of the makefile syntax to deal with this. -- assignee: tarek components: Distutils messages: 81538 nosy: flub, tarek severity: normal status: open title: Using LDFLAGS='-rpath=\$$LIB:/some/other/path' ./configure breaks the build type: compile error versions: Python 2.5, Python 2.6, Python 2.7, Python 3.0 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5201 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: x64 speed
On Feb 4, 10:14 am, Robin Becker ro...@reportlab.com wrote: [rpt...@localhost tests]$ time python25 runAll.py . . -- Ran 193 tests in 27.841s OK real 0m28.150s user 0m26.606s sys 0m0.917s [rpt...@localhost tests]$ magical how the total python time is less than the real time. Not really: Python was still running after it printed the test timing. So it's only natural that the wall time Python prints for just the tests is smaller than the wall time `time' prints for the entire python process. The same applies at startup: some work is done in Python before the test timer starts. Regards Floris
[issue4908] adding a get_metadata in distutils
Changes by Floris Bruynooghe floris.bruynoo...@gmail.com: -- nosy: +flub ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4908
Overriding base class methods in the C API
Hello I've been trying to figure out how to override methods of a class in the C API. For Python code you can just redefine the method in your subclass, but setting tp_methods on the type object does not seem to have any influence. Anyone know of a trick I am missing? Cheers Floris
[issue4950] Redundant declaration in pyerrors.h
New submission from Floris Bruynooghe floris.bruynoo...@gmail.com: When compiling with -Wredundant-decls gcc spots a redundant declaration:

f...@laurie:sandbox$ cat test.c
#include <Python.h>
#include <stdio.h>

int main(void)
{
    printf("hello\n");
    return 0;
}
f...@laurie:sandbox$ gcc -I /usr/local/include/python3.0/ -Wredundant-decls test.c
In file included from /usr/local/include/python3.0/Python.h:102, from test.c:1:
/usr/local/include/python3.0/pyerrors.h:155: warning: redundant redeclaration of ‘PyExc_BufferError’
/usr/local/include/python3.0/pyerrors.h:147: warning: previous declaration of ‘PyExc_BufferError’ was here
f...@laurie:sandbox$

This is annoying since when developing extension modules I usually use -Werror on top of -Wredundant-decls (among others). Regards Floris -- components: Extension Modules messages: 79870 nosy: flub severity: normal status: open title: Redundant declaration in pyerrors.h type: compile error versions: Python 3.0, Python 3.1 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4950
Using exceptions in defined in an extension module inside another extension module
Hello If I have an extension module and want to use an exception across several object files, I can declare it as `extern PyObject *PyExc_FooError' in those files, define it (without the extern) in the module itself, and initialise it in the PyMODINIT_FUNC using PyErr_NewException(). What I can't work out, however, is how to raise this exception from another extension module. Just declaring it extern doesn't work, even if I make sure the first module -the one that creates the exception- gets loaded first. Because the symbol is defined in the first extension module the dynamic linker can't find it: it only seems to look in the main python executable for symbols used in dlopened .so files. Does anyone have an idea of how you can do this? Thanks Floris
Re: Using exceptions in defined in an extension module inside another extension module
Christian Heimes wrote: Floris Bruynooghe schrieb: What I can't work out however is how to then be able to raise this exception in another extension module. Just defining it as extern doesn't work, even if I make sure the first module -that creates the exception- gets loaded first. Because the symbol is defined in the first extension module the dynamic linker can't find it as it only seems to look in the main python executable for symbols used in dlloaded sofiles. Does anyone have an idea of how you can do this? The answer is so obvious that you are going to bang your head against the next wall. You have to do exactly the same as you'd do with a pure Python module: import it. :) Well, I hope the wall hurts as much as my head... Great tip, at first I wasn't looking forward to importing the module in every function where I wanted the exceptions. But then I realised they are global variables anyway so I could have them as such again and just assign them in the module init function. Thanks Floris -- http://mail.python.org/mailman/listinfo/python-list
Re: C API and memory allocation
On Dec 18, 6:43 am, Stefan Behnel stefan...@behnel.de wrote: Floris Bruynooghe wrote: I'm slightly confused about some memory allocations in the C API. If you want to reduce the number of things you have to get your head around, learn Cython instead of the raw C-API. It's basically Python, does all the reference counting for you and also reduces the amount of memory handling you have to care about. http://cython.org/ Sure that is a good choice in some cases. Not in my case currently though, it would mean another build dependency on all our build hosts and I'm just (trying to) stop an existing extension module from leaking memory, no way I'm going to re-write that from scratch. But interesting discussion though, thanks! Floris
C API and memory allocation
Hi I'm slightly confused about some memory allocations in the C API. Take the first example in the documentation:

static PyObject *
spam_system(PyObject *self, PyObject *args)
{
    const char *command;
    int sts;

    if (!PyArg_ParseTuple(args, "s", &command))
        return NULL;
    sts = system(command);
    return Py_BuildValue("i", sts);
}

What I'm confused about is the memory usage of command. As far as I understand the compiler provides space for the size of the pointer, as sizeof(command) would indicate. So I'm assuming PyArg_ParseTuple() must allocate new memory for the returned string. However there is nothing in the API that provides for freeing that allocated memory again. So does this application leak memory then? Or am I misunderstanding something fundamental? Regards Floris
Re: C API and memory allocation
Hello again On Dec 17, 11:06 pm, Floris Bruynooghe floris.bruynoo...@gmail.com wrote: So I'm assuming PyArg_ParseTuple() must allocate new memory for the returned string. However there is nothing in the API that provides for freeing that allocated memory again. I've dug a little deeper into this and found that PyArg_ParseTuple (and friends) end up using PyString_AS_STRING() (Python/getargs.c:793), which according to the documentation returns a pointer to the internal buffer of the string and not a copy, and that because of this you should not attempt to free this buffer. But how can Python know how long to keep that buffer object in memory? When the reference count of the string object goes to zero the object can be deallocated, I thought, and then your pointer will point to something different all of a sudden. Does this mean you always have to keep a reference to the original objects when you've extracted information from them with the PyArg_Parse*() functions? (At least while you want to hang on to that information.) Regards Floris
[issue4483] Error to build _dbm module during make
Floris Bruynooghe [EMAIL PROTECTED] added the comment: Hi, I'd like to confirm that Skip's last patch fixes the issue. Hope it gets included soon! Thanks -- nosy: +flub ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue4483
Re: C Module question
Hi On Nov 10, 11:11 am, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: 1. How can I pass a file-like object into the C part? The PyArg_* functions can convert objects to all sorts of types, but not FILE*. Parse it as a generic PyObject object (format string of "O" in PyArg_*), check the type and cast it. Or use "O!" as the format string and the typechecking is done for you; the only thing left is casting it. See http://docs.python.org/c-api/arg.html and http://docs.python.org/c-api/file.html for exact details. (2 is answered already...) Regards Floris
Re: C Module question
On Nov 10, 1:18 pm, Floris Bruynooghe [EMAIL PROTECTED] wrote: On Nov 10, 11:11 am, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: 1. How can I pass a file-like object into the C part? The PyArg_* functions can convert objects to all sorts of types, but not FILE*. Parse it as a generic PyObject object (format string of "O" in PyArg_*), check the type and cast it. Or use "O!" as the format string and the typechecking is done for you; the only thing left is casting it. See http://docs.python.org/c-api/arg.html and http://docs.python.org/c-api/file.html for exact details. Sorry, I probably should have mentioned you want to cast the object to PyFileObject and then use the PyFile_AsFile() function to get the FILE* handle. Floris
Re: Logging thread with Queue and multiple threads to log messages
Hi On Nov 9, 8:28 pm, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: I am trying to put up a queue (through a logging thread) so that all worker threads can ask it to log messages. There is no need to do anything like this: the logging module is thread safe and you can happily just create loggers in a thread and use them; you can even use loggers that were created outside of the thread. We use logging from threads all the time and it works flawlessly. As mentioned by Vinay in your other thread, the problem you have is that your main thread exits before the worker threads, which makes atexit run the exit handlers; logging registers the logging.shutdown() function, which flushes all logging buffers and closes all handlers (i.e. closes the logfile). So, as said before, simply calling .join() on all the worker threads inside the main thread will solve your problem. You might get away with making your threads daemonic, but I can't guarantee you won't run into race conditions in that case. If you want to be really evil you could get into muddling with atexit... Regards Floris
Re: Module clarification
On Jul 28, 9:54 am, Hussein B [EMAIL PROTECTED] wrote: Hi. I'm a Java guy and I'm playing around Python these days... In Java, we organize our classes into packages and then jarring the packages into JAR files. What are modules in Python? An importable or runnable (i.e. script) collection of classes, functions, variables etc... What is the equivalent of modules in Java? Don't know. Not even sure if it exists, but my Java is old and never been great. Please correct me if I'm wrong: I saved my Python code under the file Wow.py Wow.py is now a module and I can use it in other Python code: import Wow Indeed, you can now access things defined in Wow as Wow.foo Regards Floris
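A small runnable illustration of the Wow.py example above (the file contents and names here are hypothetical): any .py file on sys.path is importable as a module, and its top-level names become attributes:

```python
import os
import sys
import tempfile

# A module is simply a .py file; importing it binds the names defined at
# its top level.  Create a Wow.py somewhere importable and import it.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "Wow.py"), "w") as f:
    f.write("def greet(name):\n    return 'Hello, ' + name\n\nANSWER = 42\n")

sys.path.insert(0, tmpdir)  # make the directory importable
import Wow

print(Wow.greet("world"))  # Hello, world
print(Wow.ANSWER)          # 42
```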
lxml validation and xpath id function
Hi I'm trying to use the .xpath('id(foo)') function on an lxml tree but can't get it to work. Given the following XML:

<root><child id="foo"/></root>

And its XMLSchema:

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
  <xs:element name="root">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="child"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="child">
    <xs:complexType>
      <xs:attribute name="id" use="required" type="xs:ID"/>
    </xs:complexType>
  </xs:element>
</xs:schema>

Or in the more readable, compact RelaxNG form:

element root { element child { attribute id { xsd:ID } } }

Now I'm trying to parse the XML and use the .xpath() method to find the <child/> element using the id XPath function:

>>> from lxml import etree
>>> schema_root = etree.parse(file('schema.xsd'))
>>> schema = etree.XMLSchema(schema_root)
>>> parser = etree.XMLParser(schema=schema)
>>> root = etree.fromstring('<root><child id="foo"/></root>', parser)
>>> root.xpath('id(foo)')
[]

I was expecting to get the <child/> element with that last statement (well, inside a list that is), but instead I just get an empty list. Is there anything obvious I'm doing wrong? As far as I can see the lxml documentation says this should work. Cheers Floris
Context manager for files vs garbage collection
Hi I was wondering when it is worthwhile to use context managers for files. Consider this example:

def foo():
    t = False
    for line in file('/tmp/foo'):
        if line.startswith('bar'):
            t = True
            break
    return t

What would the benefit of using a context manager be here, if any? E.g.:

def foo():
    t = False
    with file('/tmp/foo') as f:
        for line in f:
            if line.startswith('bar'):
                t = True
                break
    return t

Personally I can't really see why the second case would be much better, I've got used to just relying on the garbage collector... :-) But what is the use case of a file as a context manager then? Is it only useful if your file object is large and will stay in scope for a long time? Regards Floris
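One concrete difference, sketched below with a throwaway file (using the builtin open(); file() is the 2.x spelling): the with block closes the file deterministically at block exit, even on break, return or an exception, whereas the first version relies on CPython's prompt reference-counting collection, which other implementations such as Jython do not guarantee:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "foo.txt")
with open(path, "w") as f:
    f.write("bar one\nbaz two\n")

# The context manager guarantees the file is closed as soon as the
# block exits -- by fall-through, return, break or an exception --
# instead of whenever the garbage collector gets around to it.
def starts_with_bar(path):
    with open(path) as f:
        for line in f:
            if line.startswith("bar"):
                return True  # the file is still closed on early return
    return False

print(starts_with_bar(path))  # True
```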
Re: Running commands on cisco routers using python
On May 19, 4:18 pm, SPJ [EMAIL PROTECTED] wrote: Is it possible to run specific commands on cisco router using Python? I have to run command show access-list on few hundred cisco routers and get the dump into a file. Please let me know if it is feasible and the best way to achieve this. There's no way I'd think about doing this in python. The best tool for the task is just shell IMHO: [EMAIL PROTECTED]:~$ ssh mercury show access-lists Welcome to mercury [EMAIL PROTECTED]'s password: Standard IP access list 1 10 permit any (265350 matches) Standard IP access list 23 10 permit 192.168.2.0, wildcard bits 0.0.0.255 (2 matches) Extended IP access list 100 10 deny ip any 192.168.0.0 0.0.255.255 log-input (8576 matches) 20 permit ip any any (743438 matches)Connection to mercury closed by remote host. [EMAIL PROTECTED]:~$ You could plug in expect to solve the password thing. Search for ssh expect for that (and ignore suggestions about public keys, I haven't found yet how to use those on cisco).
Re: How to kill Python interpreter from the command line?
On May 9, 11:19 am, [EMAIL PROTECTED] wrote: Thanks for the replies. On May 8, 5:50 pm, Jean-Paul Calderone [EMAIL PROTECTED] wrote: Ctrl+C often works with Python, but as with any language, it's possible to write a program which will not respond to it. You can use Ctrl+\ instead (Ctrl+C sends SIGINT which can be masked or otherwise ignored, Ctrl+\ sends SIGQUIT which typically isn't) Yes, thank you, this seems to work. :) I did some more testing and found out that the problem seems to be thread-related. If I have a single-threaded program, then Ctrl+C usually works, but if I have threads, it is usually ignored. For instance, the below program does not respond to Ctrl+C (but it does die when issued Ctrl+\):

import threading

def loop():
    while True:
        pass

threading.Thread(target=loop, args=()).start()

Your thread needs to be daemonised for this to work. Otherwise your main thread will be waiting for your created thread to finish. E.g.:

thread = threading.Thread(target=loop)
thread.setDaemon(True)
thread.start()

But now it will exit immediately as soon as your main thread has nothing to do anymore (which is right after .start() in this case), so plug in another infinite loop at the end of this:

while True:
    time.sleep(10)

And now your threaded app will stop when using C-c.
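The same fix in one runnable sketch, using the modern daemon= keyword argument (equivalent to the setDaemon() call shown above):

```python
import threading
import time

def loop():
    # a worker that never returns
    while True:
        time.sleep(0.01)

# With a daemon worker only the main thread keeps the process alive,
# and a sleep() in the main loop gives SIGINT a chance to be delivered
# to the main thread as KeyboardInterrupt.
t = threading.Thread(target=loop, daemon=True)
t.start()

print(t.daemon)  # True
```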
Re: @x.setter property implementation
On Apr 11, 10:16 am, Floris Bruynooghe [EMAIL PROTECTED] wrote: On Apr 10, 5:09 pm, Arnaud Delobelle [EMAIL PROTECTED] wrote: On Apr 10, 3:37 pm, Floris Bruynooghe [EMAIL PROTECTED] wrote: On Apr 7, 2:19 pm, Andrii V. Mishkovskyi [EMAIL PROTECTED] wrote: 2008/4/7, Floris Bruynooghe [EMAIL PROTECTED]: Have been grepping all over the place and failed to find it. I found the test module for them, but that doesn't get me very far... I think you should take a look at 'descrobject.c' file in 'Objects' directory. Thanks, I found it! So after some looking around here was my implementation: class myproperty(property): def setter(self, func): self.fset = func But that doesn't work since fset is a read only attribute (and all of this is implemented in C). So I've settled with the (nearly) original proposal from Guido on python-dev: def propset(prop): assert isinstance(prop, property) @functools.wraps def helper(func): return property(prop.fget, func, prop.fdel, prop.__doc__) return helper The downside of this is that upgrade from 2.5 to 2.6 will require code changes, I was trying to minimise those to just removing an import statement. Regards Floris Here's an implementation of prop.setter in pure python 2.6, but using sys._getframe, and the only test performed is the one below :) import sys def find_key(mapping, searchval): for key, val in mapping.iteritems(): if val == searchval: return key _property = property class property(property): def setter(self, fset): cls_ns = sys._getframe(1).f_locals propname = find_key(cls_ns, self) # if not propname: there's a problem! cls_ns[propname] = property(self.fget, fset, self.fdel, self.__doc__) return fset # getter and deleter can be defined the same way! # Example --- class Foo(object): @property def bar(self): return self._bar @bar.setter def setbar(self, x): self._bar = '%s' % x # Interactive test - foo = Foo() foo.bar = 3 foo.bar '3' foo.bar = 'oeufs' foo.bar 'oeufs' Having fun'ly yours, Neat! 
Unfortunately both this one and the one I posted before work when I try them out on the command line but both fail when I try to use them in a module. And I just can't figure out why. This in more detail: Imagine mod.py:

import sys

_property = property

class property(property):
    """Python 2.6/3.0 style property"""
    def setter(self, fset):
        cls_ns = sys._getframe(1).f_locals
        for k, v in cls_ns.iteritems():
            if v == self:
                propname = k
                break
        cls_ns[propname] = property(self.fget, fset, self.fdel, self.__doc__)
        return fset

class Foo(object):
    @property
    def x(self):
        return self._x
    @x.setter
    def x(self, v):
        self._x = v + 1

Now enter the interpreter:

>>> import mod
>>> f = mod.Foo()
>>> f.x = 4
>>> f.x
4

I don't feel like giving up on this now, so close...
Re: @x.setter property implementation
On Apr 10, 5:09 pm, Arnaud Delobelle [EMAIL PROTECTED] wrote: On Apr 10, 3:37 pm, Floris Bruynooghe [EMAIL PROTECTED] wrote: On Apr 7, 2:19 pm, Andrii V. Mishkovskyi [EMAIL PROTECTED] wrote: 2008/4/7, Floris Bruynooghe [EMAIL PROTECTED]: Have been grepping all over the place and failed to find it. I found the test module for them, but that doesn't get me very far... I think you should take a look at 'descrobject.c' file in 'Objects' directory. Thanks, I found it! So after some looking around here was my implementation: class myproperty(property): def setter(self, func): self.fset = func But that doesn't work since fset is a read only attribute (and all of this is implemented in C). So I've settled with the (nearly) original proposal from Guido on python-dev: def propset(prop): assert isinstance(prop, property) @functools.wraps def helper(func): return property(prop.fget, func, prop.fdel, prop.__doc__) return helper The downside of this is that upgrade from 2.5 to 2.6 will require code changes, I was trying to minimise those to just removing an import statement. Regards Floris Here's an implementation of prop.setter in pure python 2.6, but using sys._getframe, and the only test performed is the one below :) import sys def find_key(mapping, searchval): for key, val in mapping.iteritems(): if val == searchval: return key _property = property class property(property): def setter(self, fset): cls_ns = sys._getframe(1).f_locals propname = find_key(cls_ns, self) # if not propname: there's a problem! cls_ns[propname] = property(self.fget, fset, self.fdel, self.__doc__) return fset # getter and deleter can be defined the same way! # Example --- class Foo(object): @property def bar(self): return self._bar @bar.setter def setbar(self, x): self._bar = '%s' % x # Interactive test - foo = Foo() foo.bar = 3 foo.bar '3' foo.bar = 'oeufs' foo.bar 'oeufs' Having fun'ly yours, Neat! 
Unfortunately both this one and the one I posted before work when I try them out on the command line but both fail when I try to use them in a module. And I just can't figure out why. Floris
Re: @x.setter property implementation
Oh, that was a good hint! See inline On Apr 11, 12:02 pm, Arnaud Delobelle [EMAIL PROTECTED] wrote: On Apr 11, 11:19 am, Floris Bruynooghe [EMAIL PROTECTED] wrote: [...] Unfortunately both this one and the one I posted before work when I try them out on the command line but both fail when I try to use them in a module. And I just can't figure out why. This in more detail: Imagine mod.py: import sys _property = property class property(property): Python 2.6/3.0 style property def setter(self, fset): cls_ns = sys._getframe(1).f_locals for k, v in cls_ns.iteritems(): if v == self: propname = k break cls_ns[propname] = property(self.fget, fset, self.fdel, self.__doc__) return fset

return cls_ns[propname]

And then it works as I tried originally!

class Foo(object): @property def x(self): return self._x @x.setter def x(self, v):

^ Don't call this 'x', it will override the property; change it to 'setx' and everything will work. The same probably goes for your own 'propset' decorator function.

self._x = v + 1 Now enter the interpreter: import mod f = mod.Foo() f.x = 4 f.x 4 I don't feel like giving up on this now, so close... -- Arnaud
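A minimal sketch of the propset approach discussed in this thread, written so it runs on a modern interpreter (I drop the @functools.wraps line from the original snippet, which was applied incorrectly there): the helper returns a fresh property combining the existing getter with the decorated setter, so rebinding the class attribute does the work and the read-only fset slot is never touched:

```python
def propset(prop):
    # Return a decorator that builds a new property combining the
    # existing getter with the decorated setter.  Rebinding the class
    # attribute (instead of mutating prop) side-steps the read-only
    # fset attribute of the built-in property type.
    def helper(func):
        return property(prop.fget, func, prop.fdel, prop.__doc__)
    return helper

class Foo(object):
    @property
    def x(self):
        return self._x

    @propset(x)          # same name 'x': the old property is replaced
    def x(self, v):
        self._x = v + 1

f = Foo()
f.x = 4
print(f.x)  # 5
```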
Re: @x.setter property implementation
On Apr 7, 2:19 pm, Andrii V. Mishkovskyi [EMAIL PROTECTED] wrote: 2008/4/7, Floris Bruynooghe [EMAIL PROTECTED]: Have been grepping all over the place and failed to find it. I found the test module for them, but that doesn't get me very far... I think you should take a look at 'descrobject.c' file in 'Objects' directory. Thanks, I found it! So after some looking around here was my implementation: class myproperty(property): def setter(self, func): self.fset = func But that doesn't work since fset is a read only attribute (and all of this is implemented in C). So I've settled with the (nearly) original proposal from Guido on python-dev: def propset(prop): assert isinstance(prop, property) @functools.wraps def helper(func): return property(prop.fget, func, prop.fdel, prop.__doc__) return helper The downside of this is that upgrade from 2.5 to 2.6 will require code changes, I was trying to minimise those to just removing an import statement. Regards Floris
Re: @x.setter property implementation
On Apr 6, 6:41 pm, Daniel Fetchinson [EMAIL PROTECTED] wrote: I found out about the new methods on properties, .setter() and .deleter(), in python 2.6. Obviously that's a very tempting syntax and I don't want to wait for 2.6... It would seem this can be implemented entirely in python code, and I have seen hints in this direction. So before I go and try to invent this myself does anyone know if there is an official implementation of this somewhere that we can steal until we move to 2.6? The 2.6 source? Have been grepping all over the place and failed to find it. I found the test module for them, but that doesn't get me very far...
@x.setter property implementation
Hello I found out about the new methods on properties, .setter() and .deleter(), in python 2.6. Obviously that's a very tempting syntax and I don't want to wait for 2.6... It would seem this can be implemented entirely in python code, and I have seen hints in this direction. So before I go and try to invent this myself does anyone know if there is an official implementation of this somewhere that we can steal until we move to 2.6? Cheers Floris
Ignoring windows registry PythonPath subkeys
Hi We basically want the same as the OP in [1], i.e. when python starts up we don't want to load *any* sys.path entries from the registry, including subkeys of the PythonPath key. The result of that thread seems to be to edit PC/getpathp.c[2] and recompile. This isn't that much of a problem since we're compiling python anyway, but is that really still the only way? Surely this isn't such an outlandish requirement? Regards Floris [1] http://groups.google.com/group/comp.lang.python/browse_frm/thread/4df87ffb23ac0c78/1b47f905eb3f990a?lnk=gstq=sys.path+registry#1b47f905eb3f990a [2] By looking at getpathp.c it seems just commenting out the two calls to getpythonregpath(), for machinepath and userpath should work in most cases.
Re: Any fancy grep utility replacements out there?
On Mar 19, 2:44 am, Peter Wang [EMAIL PROTECTED] wrote: On Mar 18, 5:16 pm, Robert Kern [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: So I need to recursively grep a bunch of gzipped files. This can't be easily done with grep, rgrep or zgrep. (I'm sure given the right pipeline including using the find command it could be done, but it seems like a hassle.) So I figured I'd find a fancy next generation grep tool. Thirty minutes of searching later I find a bunch in Perl, and even one in Ruby. But I can't find anything that interesting or up to date for Python. Does anyone know of something? I have a grep-like utility I call grin. I wrote it mostly to recursively grep SVN source trees while ignoring the garbage under the .svn/ directories and more or less do exactly what I need most frequently without configuration. It could easily be extended to open gzip files with GzipFile. https://svn.enthought.com/svn/sandbox/grin/trunk/ Let me know if you have any requests. And don't forget: Colorized output! :) I tried to find something similar a while ago and found ack[1]. I do realise it's written in perl but it does the job nicely. Never needed to search in zipfiles though, just unzipping them in /tmp would always work... I'll check out grin this afternoon! Floris [1] http://petdance.com/ack/
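For the original problem -- recursively grepping gzipped files -- here is a short self-contained sketch (the function name zgrep_r is my own invention) using only gzip, os.walk and re from the standard library:

```python
import gzip
import os
import re
import tempfile

def zgrep_r(pattern, top):
    """Recursively grep gzipped files: yield (path, lineno, line) matches."""
    rx = re.compile(pattern)
    for dirpath, _dirs, files in os.walk(top):
        for name in files:
            if not name.endswith(".gz"):
                continue
            path = os.path.join(dirpath, name)
            # 'rt' decompresses and decodes to text on the fly
            with gzip.open(path, "rt", errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if rx.search(line):
                        yield path, lineno, line.rstrip("\n")

# Demo: create a nested gzipped file and search it.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "sub"))
with gzip.open(os.path.join(base, "sub", "log.gz"), "wt") as f:
    f.write("nothing here\nerror: disk on fire\n")

for path, lineno, line in zgrep_r(r"error:", base):
    print(lineno, line)  # 2 error: disk on fire
```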
Re: How to send a var to stdin of an external software
On Mar 14, 11:37 am, Benjamin Watine [EMAIL PROTECTED] wrote: Bryan Olson a écrit : I wrote: [...] Pipe loops are tricky business. Popular solutions are to make either the input or output stream a disk file, or to create another thread (or process) to be an active reader or writer. Or asynchronous I/O. On Unix-like systems, you can select() on the underlying file descriptors. (MS-Windows async mechanisms are not as well exposed by the Python standard library.) Hi Bryan Thank you so much for your advice. You're right, I just made a test with a 10 MB input stream, and it hangs exactly like you said (on cat.stdin.write(myStdin))... I don't want to use disk files. In reality, this script was previously done in bash using disk files, but I had problems with that solution (the files weren't always cleared, and sometimes I've found a part of the previous input at the end of the next input). That's why I want to use python, just to not use disk files. Could you give me more information / examples about the two solutions you've proposed (thread or asynchronous I/O)? The source code of the subprocess module shows how to do it with select IIRC. Look at the implementation of the communicate() method. Regards Floris
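A sketch of the recommended route: instead of calling cat.stdin.write() yourself, let subprocess's communicate() feed stdin and drain stdout concurrently (internally it uses threads or select), so neither pipe buffer can fill up and deadlock even on multi-megabyte inputs. This assumes a Unix `cat` binary is available:

```python
import subprocess

def pipe_through(cmd, data):
    # communicate() writes `data` to the child's stdin while reading its
    # stdout, so a large input cannot block forever the way a plain
    # blocking stdin.write() does once the pipe buffers fill up.
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(data)
    return out

payload = b"x" * (10 * 1024 * 1024)   # the 10 MB case that hung before
result = pipe_through(["cat"], payload)
print(len(result))  # 10485760
```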