[issue2389] Array pickling exposes internal memory representation of elements
Alexandre Vassalotti [EMAIL PROTECTED] added the comment: I'm all in for a standardized representation of array's pickles (with width and endianness preserved). However, for that to happen, we will need either to change array's constructor to support at least a byte-order specification (like struct) or to add built-in support for array in the pickle module (which could be done without modifying the pickle protocol). ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2389 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2389] Array pickling exposes internal memory representation of elements
Martin v. Löwis [EMAIL PROTECTED] added the comment: I think changing the array constructor is fairly easy: just pick a set of codes that are defined to be platform-neutral (i.e. for each size, two codes, one for each endianness). For example, the control characters (\0..\x1F) could be used in the following way:

char, signed byte, unsigned byte: c, b, B (no big/little distinction)
sint16: 1, 2
uint16: 3, 4
sint32: 5, 6
uint32: 7, 8
sint64: 9, 10
uint64: 11, 12
float: 13, 14
double: 15, 16
UCS-2: 17, 18
UCS-4: 19, 20

In the above scheme, even codes are little-endian and odd codes are big-endian. Converting the codes to native codes could be table-driven. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2389 ___
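The table-driven conversion Martin describes can be sketched in Python. The code names ("i4<", "i4>") and the PORTABLE_CODES table below are illustrative stand-ins, not the control characters proposed above:

```python
import struct

# Hypothetical platform-neutral codes: each entry maps to a struct format
# character plus an explicit endianness prefix.
PORTABLE_CODES = {
    "i4<": ("i", "<"),  # sint32, little-endian
    "i4>": ("i", ">"),  # sint32, big-endian
}

def to_native(code, payload):
    """Unpack a portable payload into native Python ints, table-driven."""
    fmt_char, prefix = PORTABLE_CODES[code]
    count = len(payload) // struct.calcsize(prefix + fmt_char)
    return list(struct.unpack("%s%d%s" % (prefix, count, fmt_char), payload))

print(to_native("i4<", struct.pack("<3i", 1, 2, 3)))  # [1, 2, 3]
```

Because the endianness lives in the code rather than in the platform, the same pickle payload decodes identically everywhere.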
[issue2756] urllib2 add_header fails with existing unredirected_header
Senthil [EMAIL PROTECTED] added the comment: The submitted patch has problems. It does not correctly solve this issue (which I first want to confirm is an issue at all, after understanding the logic behind unredirected_headers). My explanation of this issue and comments on the patch are here: http://urllib-gsoc.blogspot.com/2008/08/issue2756-urllib2-addheader-fails-with.html Now, coming back to the current issue. We see that the addition of unredirected_hdrs takes place in the do_request_ call of AbstractHTTPHandler, and it adds the unredirected_hdrs based on certain conditions; for example, when Content-Type is not in the headers, it adds the unredirected header ('Content-Type', 'application/x-www-form-urlencoded'). The value of Content-Type is hardcoded here, but the other header values are not hardcoded and are taken from the request headers only. The question here is: when the request contains the Content-Type header with an updated value, why is it not supposed to change the unredirected_header to the updated value? (Same for the other request header items.) John J Lee can perhaps help us understand more. If it is supposed to change, then the following (rough) snippet in do_request_:

for key in request.headers:
    request.add_unredirected_header(key, request.get_header(key))

should update it; there is no need to change the add_header and add_unredirected_header methods as proposed by the patch. Based on our conclusion, I shall provide the updated patch (if required). Thanks, Senthil ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2756 ___
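The shadowing Senthil describes can be observed directly with today's urllib.request, which kept urllib2's two-dictionary design (a sketch; no network access is made):

```python
from urllib.request import Request

req = Request("http://example.com", data=b"x=1")
req.add_unredirected_header("Content-Type", "application/x-www-form-urlencoded")
# A later add_header() updates only `headers`, never `unredirected_hdrs`:
req.add_header("Content-Type", "text/plain")

print(req.unredirected_hdrs["Content-type"])  # application/x-www-form-urlencoded
print(req.headers["Content-type"])            # text/plain
# get_header() prefers `headers`, so the stale copy merely lurks underneath:
print(req.get_header("Content-type"))         # text/plain
```

Note that Request capitalizes header names on storage, hence the "Content-type" keys.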
[issue2756] urllib2 add_header fails with existing unredirected_header
Senthil [EMAIL PROTECTED] added the comment: The problem with the patch was: the attached patch modifies the add_header() and add_unredirected_header() methods to remove the existing headers of the same name alternately in headers and unredirected_hdrs. What we observe is that the unredirected_hdrs item is removed during the add_header() call and is never added back/updated in the unredirected_hdrs. Let us discuss the points mentioned in my previous post. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2756 ___
[issue2065] trunk version does not compile with vs8 and vc6
Hirokazu Yamamoto [EMAIL PROTECTED] added the comment: Can I close this entry? ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2065 ___
[issue2065] trunk version does not compile with vs8 and vc6
Martin v. Löwis [EMAIL PROTECTED] added the comment: Sure. Feel free to commit any further changes to these build files directly. -- status: open -> closed ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2065 ___
[issue3439] math.frexp and obtaining the bit size of a large integer
Mark Dickinson [EMAIL PROTECTED] added the comment: With the patch, the following code causes a non-keyboard-interruptible interpreter hang.

>>> from sys import maxint
>>> (-maxint-1).numbits()
[... interpreter hang ...]

The culprit is, of course, the statement

if (n < 0) n = -n;

in int_numbits: LONG_MIN is negated to itself (this may even be undefined behaviour according to the C standards). The patch also needs documentation, and that documentation should clearly spell out what happens for zero and for negative numbers. It's not at all clear that everyone will expect (0).numbits() to be 0, though I agree that this is probably the most useful definition in practice. One could make a case for (0).numbits() raising ValueError: for some algorithms, what one wants is an integer k such that 2**(k-1) <= abs(n) < 2**k; when n == 0, no such integer exists. Other than those two things, I think the patch looks fine. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3439 ___
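For reference, the intended semantics (with the zero convention Mark mentions) are easy to pin down in pure Python; `numbits` here is a hypothetical stand-in for the C implementation under discussion:

```python
def numbits(n):
    """Reference semantics: for n != 0, the k with 2**(k-1) <= abs(n) < 2**k;
    0 for n == 0 (the convention discussed above, not the ValueError variant)."""
    n = abs(n)  # Python ints are arbitrary precision, so abs() is always exact
    k = 0
    while n:
        n >>= 1
        k += 1
    return k

print(numbits(0), numbits(1), numbits(-(2**31)))  # 0 1 32
```

Unlike the buggy C path, `numbits(-(2**31))` poses no overflow problem here, which is exactly why the pure-Python version makes a good oracle for tests.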
[issue3439] math.frexp and obtaining the bit size of a large integer
Mark Dickinson [EMAIL PROTECTED] added the comment: One possible fix would be to compute the absolute value of n as an unsigned long. I *think* the following is portable and avoids any undefined behaviour coming from signed arithmetic overflow.

unsigned long absn;
if (n < 0)
    absn = 1 + (unsigned long)(-1-n);
else
    absn = (unsigned long)n;

Might this work? Perhaps it would also be worth changing the tests in test_int from e.g.

self.assertEqual((-a).numbits(), i+1)

to

self.assertEqual(int(-a).numbits(), i+1)

This would have caught the -LONG_MAX error. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3439 ___
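The identity behind that C expression can be sanity-checked from Python by mimicking 32-bit unsigned wraparound with a mask (a check of the trick itself, not of the patch):

```python
BITS = 32
MASK = (1 << BITS) - 1          # simulate C unsigned long arithmetic
LONG_MIN = -(1 << (BITS - 1))

def abs_as_unsigned(n):
    # Mirror the C code: (-1 - n) never overflows for negative n,
    # unlike -n, which is undefined behaviour at LONG_MIN.
    if n < 0:
        return (1 + ((-1 - n) & MASK)) & MASK
    return n & MASK

print(abs_as_unsigned(LONG_MIN))  # 2147483648, i.e. abs(LONG_MIN), no overflow
```

For n = LONG_MIN, -1 - n equals LONG_MAX (representable), and adding 1 in unsigned arithmetic lands exactly on 2**31.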
[issue3545] Python turning off assertions (Windows)
Anders Bensryd [EMAIL PROTECTED] added the comment: We started using Python 2.5.2 recently and a few developers have complained that they do not get any assertions anymore, so yes, we do use _ASSERT() and _ASSERTE(), but after a brief look it seems as if we mainly use assert(). The developer using _ASSERT() cannot remember why this was necessary, and the tests I have made today show that we could probably move to assert() everywhere. A more interesting aspect is that we have recently moved to the more secure CRT routines (strcpy_s etc.) and tests have shown issues with these if we turn off assertions:

int prevCrtReportMode = _CrtSetReportMode(_CRT_ASSERT, 0);
char str[8];
strcpy_s(str, "123456789");

With assertions turned on, I get an assertion dialog saying "Buffer is too small", which is what I expect and want. With assertions turned off (as in the example above), I get a dialog saying "Microsoft Visual Studio C Runtime Library has detected a fatal error in crt.exe". The stack is still useful and we can find the cause of the error, so it is not a serious problem for us since we will continue to turn on assertions after Py_Initialize(). I have not yet seen any examples where there are erroneous assertions. Anyway, you have made your point and I really do not want to take up any more of your time. I respect your opinion and at least I have forced you to think about this. We have a workaround that works for us, so I am OK with closing this issue. Many thanks! ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3545 ___
[issue2819] Full precision summation
Mark Dickinson [EMAIL PROTECTED] added the comment: Here's a patch, in final form, that replaces fsum with an lsum-based implementation. In brief, the accumulated sum-so-far is represented in the form huge_integer * 2**(smallest_possible_exponent), and the huge_integer is stored in base 2**30, with a signed-digit representation (digits in the range [-2**29, 2**29)). What are the chances of getting this in before next week's beta? I did toy with a base 2**52 version, with digits stored as doubles. It's attractive for a couple of reasons: (1) each 53-bit double straddles exactly two digits, which makes the inner loop more predictable and removes some branches, and (2) one can make some optimizations (e.g. being sloppy about transferring single-bit carries to the next digit up) based on the assumption that the input is unlikely to have more than 2**51 summands. The result was slightly faster on OS X, and slower on Linux; the final rounding code also became a little more complicated (as a result of not being able to do bit operations on a double easily), and making sure that things work for non-IEEE doubles is a bit of a pain. So in the end I abandoned this approach. Added file: http://bugs.python.org/file11108/fsum11.patch ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2819 ___
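What the patch must guarantee (a correctly rounded sum of the inputs) can be stated as a slow pure-Python oracle. This checks the contract via exact rational arithmetic; it is not the signed-digit C algorithm itself:

```python
import math
from fractions import Fraction

def exact_float_sum(xs):
    # Sum exactly in rationals (floats convert to Fraction losslessly),
    # then round once to the nearest float: the answer fsum must produce.
    return float(sum(map(Fraction, xs)))

vals = [1e16, 1.0, -1e16]
print(sum(vals))              # 0.0 -- naive left-to-right summation loses the 1.0
print(math.fsum(vals))        # 1.0
print(exact_float_sum(vals))  # 1.0
```

The naive sum drops the 1.0 because the spacing between adjacent doubles near 1e16 is 2.0; a full-precision sum keeps it.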
[issue3551] multiprocessing.Pipe terminates with ERROR_NO_SYSTEM_RESOURCES if large data is sent (win2000)
New submission from Hirokazu Yamamoto [EMAIL PROTECTED]: I noticed that sometimes regrtest.py fails in test_multiprocessing.py (test_connection) on win2000. I could not reproduce the error by invoking test_multiprocessing alone, but I finally could by increasing 'really_big_msg' to 32MB or more. I attached code that reproduces it. I don't know why this happens yet. -- components: Library (Lib), Windows files: reproduce.py messages: 71119 nosy: ocean-city severity: normal status: open title: multiprocessing.Pipe terminates with ERROR_NO_SYSTEM_RESOURCES if large data is sent (win2000) versions: Python 2.6 Added file: http://bugs.python.org/file11109/reproduce.py ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3551 ___
[issue3551] multiprocessing.Pipe terminates with ERROR_NO_SYSTEM_RESOURCES if large data is sent (win2000)
Hirokazu Yamamoto [EMAIL PROTECTED] added the comment: This is the traceback when running reproducable.py.

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "e:\python-dev\trunk\lib\multiprocessing\forking.py", line 341, in main
    prepare(preparation_data)
  File "e:\python-dev\trunk\lib\multiprocessing\forking.py", line 456, in prepare
    '__parents_main__', file, path_name, etc
  File "reproducable.py", line 20, in <module>
    conn.send_bytes(really_big_msg)
IOError: [Errno 1450] Insufficient system resources exist to complete the requested service.

___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3551 ___
[issue2819] Full precision summation
Changes by Mark Dickinson [EMAIL PROTECTED]: Removed file: http://bugs.python.org/file10988/fsum7.patch ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2819 ___
[issue2819] Full precision summation
Changes by Mark Dickinson [EMAIL PROTECTED]: Removed file: http://bugs.python.org/file11008/fsum8.patch ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2819 ___
[issue2819] Full precision summation
Changes by Mark Dickinson [EMAIL PROTECTED]: Removed file: http://bugs.python.org/file11014/fsum10.patch ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2819 ___
[issue3300] urllib.quote and unquote - Unicode issues
Matt Giuca [EMAIL PROTECTED] added the comment: Ah, cheers Antoine, for the tip on using defaultdict (I was confused as to how I could access the key just by passing a default_factory, as the manual suggests). ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3300 ___
[issue3550] Socket Python 3k Documentation mistake OR Unicode string is not supported with socket.send
Georg Brandl [EMAIL PROTECTED] added the comment: Thanks, fixed the docs to refer to bytes objects in r65674. -- resolution: -> fixed status: open -> closed ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3550 ___
[issue3546] Missing linebreak in ext.doctest output
Georg Brandl [EMAIL PROTECTED] added the comment: Thanks, applied in r65675. -- resolution: -> fixed status: open -> closed ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3546 ___
[issue3300] urllib.quote and unquote - Unicode issues
Matt Giuca [EMAIL PROTECTED] added the comment: OK, I implemented the defaultdict solution. I got curious, so I ran some rough speed tests, using the following code.

import random, urllib.parse
for i in range(0, 100000):
    str = ''.join(chr(random.randint(0, 0x10FFFF)) for _ in range(50))
    quoted = urllib.parse.quote(str)

Time to quote 100,000 random strings of 50 characters. (Ran each test twice, worst case printed.)

HEAD, chars in range(0, 0x110000): 1m44.80
HEAD, chars in range(0, 256): 25.0s
patch9, chars in range(0, 0x110000): 35.3s
patch9, chars in range(0, 256): 27.4s
New, chars in range(0, 0x110000): 31.4s
New, chars in range(0, 256): 25.3s

HEAD is the current py3k head. Patch 9 is my previous patch (before implementing defaultdict), and New is after implementing defaultdict. Interesting. defaultdict didn't really make much of an improvement. You can see the big help the cache itself makes, though (my code caches all chars, whereas HEAD just caches ASCII chars, which is why HEAD is so slow on the full-repertoire test). Other than that, differences are fairly negligible. However, I'll keep the defaultdict code; I quite like it, speedy or not (it is slightly faster). ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3300 ___
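The defaultdict idea reads roughly like this (a sketch of the approach; the class shape and the safe-character set are illustrative, not the patch's exact code):

```python
from collections import defaultdict
from string import ascii_letters, digits

ALWAYS_SAFE = ascii_letters + digits + "_.-~"

class Quoter(defaultdict):
    """Lazily built cache from byte value to quoted string.

    __missing__ computes an entry on first use; subsequent lookups are
    plain dict hits, which is where the speedup over re-encoding comes from.
    """
    def __init__(self, safe):
        super().__init__()
        self.safe = ALWAYS_SAFE + safe
    def __missing__(self, b):
        res = chr(b) if chr(b) in self.safe else "%{:02X}".format(b)
        self[b] = res
        return res

q = Quoter(safe="/")
print("".join(q[b] for b in "a b/c".encode("utf-8")))  # a%20b/c
```

Because every input byte is cached after its first appearance, quoting many similar strings mostly amortizes to dict lookups.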
[issue3551] multiprocessing.Pipe terminates with ERROR_NO_SYSTEM_RESOURCES if large data is sent (win2000)
Hirokazu Yamamoto [EMAIL PROTECTED] added the comment: After googling, ERROR_NO_SYSTEM_RESOURCES seems to happen when one I/O is too large. And in Modules/_multiprocessing/pipe_connection.c, conn_send_string is implemented with a single WriteFile() call. Maybe this should be divided into reasonably sized chunks for several WriteFile() calls? ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3551 ___
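The chunking idea could be sketched like this; the 16MB cap and the write callback are assumptions for illustration (the failing size observed above was 32MB):

```python
CHUNK = 16 * 1024 * 1024  # hypothetical cap below the observed 32MB failure

def send_in_chunks(write, data, chunk=CHUNK):
    """Split one oversized write into several bounded ones.

    `write` stands in for a WriteFile wrapper and must return the number
    of bytes it accepted.
    """
    sent = 0
    while sent < len(data):
        sent += write(data[sent:sent + chunk])
    return sent

# Exercise it against an in-memory sink standing in for the pipe:
sink = bytearray()
def fake_write(b):
    sink.extend(b)
    return len(b)

print(send_in_chunks(fake_write, b"x" * 100, chunk=32))  # 100
```

The loop also naturally handles short writes, since it advances by whatever the write call reports.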
[issue3300] urllib.quote and unquote - Unicode issues
Antoine Pitrou [EMAIL PROTECTED] added the comment: Hello Matt, "OK, I implemented the defaultdict solution. I got curious, so I ran some rough speed tests, using the following code.

import random, urllib.parse
for i in range(0, 100000):
    str = ''.join(chr(random.randint(0, 0x10FFFF)) for _ in range(50))
    quoted = urllib.parse.quote(str)"

I think if you move the line defining str out of the loop, relative timings should change quite a bit. Chances are that the random functions are not very fast, since they are written in pure Python. Or you can create an inner loop around the call to quote(), for example to repeat it 100 times. cheers, Antoine. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3300 ___
[issue2466] os.path.ismount doesn't work for mounts the user doesn't have permission to see
Changes by Ross Burton [EMAIL PROTECTED]: -- title: os.path.ismount doesn't work for NTFS mounts -> os.path.ismount doesn't work for mounts the user doesn't have permission to see versions: +Python 2.5 -Python 2.4 ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2466 ___
[issue3545] Python turning off assertions (Windows)
Martin v. Löwis [EMAIL PROTECTED] added the comment: "I have not yet seen any examples where there are erroneous assertions." Please take a look at the code in signalmodule.c. The MS CRT asserts that the signal number is supported (i.e. among a fixed list of signal numbers), even though C99, 7.14.1.1p8 says that the library shall return SIG_ERR, and set errno to a positive value, if the request cannot be honored. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3545 ___
[issue3545] Python turning off assertions (Windows)
Changes by Martin v. Löwis [EMAIL PROTECTED]: -- resolution: -> wont fix status: open -> closed ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3545 ___
[issue2466] os.path.ismount doesn't work for mounts the user doesn't have permission to see
Antoine Pitrou [EMAIL PROTECTED] added the comment: If ismount() used os.path.dirname() instead of appending '..', then this wouldn't happen. But it may change the function's result if the argument is a symlink to something (a directory or a mount point) on another filesystem. That should be verified before making a decision. -- nosy: +pitrou priority: -> normal versions: +Python 2.6, Python 3.0 -Python 2.5 ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue2466 ___
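The dirname() variant might be sketched like this; `ismount_via_parent` is a hypothetical name, and the realpath() call is one way of addressing the symlink caveat raised above:

```python
import os
import os.path

def ismount_via_parent(path):
    """Sketch of ismount() that stats dirname() of the resolved path
    instead of appending '..', so it need not descend into a mount the
    user cannot read."""
    try:
        s1 = os.lstat(path)
        s2 = os.lstat(os.path.dirname(os.path.realpath(path)))
    except OSError:
        return False
    if s1.st_dev != s2.st_dev:
        return True                  # parent sits on a different filesystem
    return s1.st_ino == s2.st_ino    # same inode as parent: a filesystem root

print(ismount_via_parent("/"))  # True on POSIX systems
```

Resolving symlinks first means the parent examined is the parent of the real target, which is what the '..'-based code effectively compares against.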
[issue3552] uuid - exception on uuid3/uuid5
New submission from Matt Giuca [EMAIL PROTECTED]: The test suite breaks on Lib/test/test_uuid.py, as of r65661. This is because uuid3 and uuid5 now raise exceptions. TypeError: new() argument 1 must be bytes or read-only buffer, not bytearray The problem is due to the changes in the way s# now expects a read-only buffer in PyArg_ParseTupleAndKeywords (which was changed in r65661). A rundown of the problem: Lib/uuid.py:553 (in uuid.uuid3):

hash = md5(namespace.bytes + bytes(name, "utf-8")).digest()

namespace.bytes is a bytearray, so the argument to md5 is a bytearray. Modules/md5module.c:517 (in _md5.md5.new):

if (!PyArg_ParseTupleAndKeywords(args, kwdict, "|s#:new", kwlist,

Using s# now requires a read-only buffer, so this raises a TypeError. The same goes for uuid5 (which calls _sha1.sha1, and has exactly the same problem). The commit log for r65661 suggests changing some s# into s* (which allows readable buffers). I don't understand the ramifications here (some problem with threading), and when I made that change, it segfaulted, so I'll leave well enough alone. But for someone who knows more about what they're doing, that may be a more root-of-the-problem fix. In the meantime, I propose this simple patch to fix uuid: I think namespace.bytes should actually return bytes, not a bytearray, so I'm modifying it to return bytes. Related issue: http://bugs.python.org/issue3139 Patch for r65675. Commit log: Fixed TypeError raised by uuid.uuid3 and uuid.uuid5 when passing a bytearray to hash functions. Now namespace.bytes returns bytes instead of a bytearray.
-- components: Library (Lib) files: uuid.patch keywords: patch messages: 71129 nosy: mgiuca severity: normal status: open title: uuid - exception on uuid3/uuid5 type: compile error versions: Python 3.0 Added file: http://bugs.python.org/file0/uuid.patch ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3552 ___
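The computation uuid3 performs can be checked with the stdlib alone; note that in current Pythons namespace.bytes already returns immutable bytes, so the bytes() call below is just a safeguard mirroring the patch's intent:

```python
import hashlib
import uuid

# uuid3 is defined as md5(namespace.bytes + name encoded as UTF-8),
# truncated to 16 bytes with the version field set; recompute it by hand.
ns = uuid.NAMESPACE_DNS
digest = hashlib.md5(bytes(ns.bytes) + "python.org".encode("utf-8")).digest()
expected = uuid.UUID(bytes=digest[:16], version=3)
print(uuid.uuid3(ns, "python.org") == expected)  # True
```

Feeding md5() an immutable bytes object is exactly what sidesteps the s# TypeError described above.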
[issue3300] urllib.quote and unquote - Unicode issues
Matt Giuca [EMAIL PROTECTED] added the comment: New patch (patch10). Details on the Rietveld review tracker (http://codereview.appspot.com/2827). Another update on the remaining outstanding issues.

Resolved issues since last time:
- Should unquote accept a bytes/bytearray as well as a str? No. But see below.
- Lib/email/utils.py: Should encode_rfc2231 with charset=None accept strings with non-ASCII characters, and just encode them to UTF-8? Implemented Antoine's fix (or 'ascii').
- Should quote accept safe characters outside the ASCII range (thereby potentially producing invalid URIs)? No.

New issues: unquote_to_bytes doesn't cope well with non-ASCII characters (currently encodes as UTF-8; not a lot we can do, since this is a str-to-bytes operation). However, we can allow it to accept bytes as input (while unquote does not), and it preserves the bytes precisely. Discussion at http://codereview.appspot.com/2827/diff/82/84, line 265. I have *implemented* that suggestion, so unquote_to_bytes now accepts either bytes or str, while unquote accepts only str. No changes need to be made unless there is disagreement on that decision. I also emailed Barry Warsaw about the email/utils.py patch (because we weren't sure exactly what that code was doing). However, I'm sure that this patch isn't breaking anything there, because I call unquote with encoding="latin-1", which is the same behaviour as the current head. That's all the issues I have left over in this patch. Attaching patch 10 (for revision 65675). Commit log for patch 10: Fix for issue 3300. urllib.parse.unquote: Added encoding and errors optional arguments, allowing the caller to determine the decoding of percent-encoded octets. As per RFC 3986, the default is "utf-8" (previously implicitly decoded as ISO-8859-1). Fixed a bug in which mixed-case hex digits (such as %aF) weren't being decoded at all.
urllib.parse.quote: Added encoding and errors optional arguments, allowing the caller to determine the encoding of non-ASCII characters before they are percent-encoded. The default is "utf-8" (previously characters in range(128, 256) were encoded as ISO-8859-1, and characters above that as UTF-8). Characters/bytes above 128 are no longer allowed to be "safe". Now allows either bytes or strings. Optimised Quoter; now inherits defaultdict. Added functions urllib.parse.quote_from_bytes and urllib.parse.unquote_to_bytes. All quote/unquote functions are now exported from the module. Doc/library/urllib.parse.rst: Updated docs on quote and unquote to reflect the new interface; added quote_from_bytes and unquote_to_bytes. Lib/test/test_urllib.py: Added many new test cases testing encoding and decoding Unicode strings with various encodings, as well as testing the new functions. Lib/test/test_http_cookiejar.py, Lib/test/test_cgi.py, Lib/test/test_wsgiref.py: Updated and added test cases to deal with UTF-8-encoded URIs. Lib/email/utils.py: Calls urllib.parse.quote and urllib.parse.unquote with encoding="latin-1", to preserve existing behaviour (which the email module depends upon). Added file: http://bugs.python.org/file1/parse.py.patch10 ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3300 ___
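The interface described in this commit log matches what urllib.parse provides in released Python 3, so its behaviour can be demonstrated directly:

```python
from urllib.parse import quote, unquote, quote_from_bytes, unquote_to_bytes

print(quote("café"))                          # caf%C3%A9  (UTF-8 default)
print(unquote("caf%C3%A9"))                   # café
print(unquote("caf%c3%a9"))                   # café  (mixed-case hex digits work)
print(quote("café", encoding="latin-1"))      # caf%E9
print(unquote("caf%E9", encoding="latin-1"))  # café
# The bytes-level functions round-trip octets exactly:
print(unquote_to_bytes("caf%E9"))             # b'caf\xe9'
print(quote_from_bytes(b"caf\xe9"))           # caf%E9
```

Callers that need the old implicit Latin-1 behaviour simply pass encoding="latin-1" explicitly, as the email module does above.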
[issue3300] urllib.quote and unquote - Unicode issues
Matt Giuca [EMAIL PROTECTED] added the comment: Antoine: "I think if you move the line defining str out of the loop, relative timings should change quite a bit. Chances are that the random functions are not very fast, since they are written in pure Python." Well, I wanted to test throwing lots of different URIs at it, to exercise the caching behaviour. You're right though; probably only a small % of the time is spent on calling quote. Oh well, the defaultdict implementation is in patch10 anyway :) It cleans Quoter up somewhat, so it's a good thing regardless. Thanks for your help. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3300 ___
[issue3553] 2to3 -l doesn't work when installed in /opt
New submission from STINNER Victor [EMAIL PROTECTED]: I just installed Python 3.0b2 in /opt/py3k and tried the 2to3 tool:

$ /opt/py3k/bin/2to3 -l
Available transformations for the -f/--fix option:
Traceback (most recent call last):
  File "/opt/py3k/bin/2to3", line 5, in <module>
    sys.exit(refactor.main("lib2to3/fixes"))
  File "/opt/py3k/lib/python3.0/lib2to3/refactor.py", line 69, in main
    for fixname in get_all_fix_names(fixer_dir):
  File "/opt/py3k/lib/python3.0/lib2to3/refactor.py", line 102, in get_all_fix_names
    names = os.listdir(fixer_dir)
OSError: [Errno 2] No such file or directory: 'lib2to3/fixes'

fixer_dir is the relative directory name lib2to3/fixes; it should be an absolute path. Example (ugly code copied from RefactoringTool.get_fixers()) to get the full path in refactor.py (main function):

if not os.path.isabs(fixer_dir):
    fixer_pkg = fixer_dir.replace(os.path.sep, ".")
    if os.path.altsep:
        fixer_pkg = fixer_pkg.replace(os.path.altsep, ".")
    mod = __import__(fixer_pkg, {}, {}, ["*"])
    fixer_dir = os.path.dirname(mod.__file__)

-- assignee: collinwinter components: 2to3 (2.x to 3.0 conversion tool) messages: 71132 nosy: collinwinter, haypo severity: normal status: open title: 2to3 -l doesn't work when installed in /opt versions: Python 3.0 ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3553 ___
[issue3552] uuid - exception on uuid3/uuid5
Martin v. Löwis [EMAIL PROTECTED] added the comment: I couldn't reproduce the problem (and apparently, many of the buildbots can't, either). It depends on whether you have OpenSSL available, i.e. whether hashlib can be built. I explicitly disabled use of OpenSSL on my system, and have now committed a fix as r65676. -- nosy: +loewis resolution: -> fixed status: open -> closed ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3552 ___
[issue3553] 2to3 -l doesn't work when installed in /opt
Changes by STINNER Victor [EMAIL PROTECTED]: -- keywords: +patch Added file: http://bugs.python.org/file2/2to3_fixer_dir.patch ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3553 ___
[issue3139] bytearrays are not thread safe
Martin v. Löwis [EMAIL PROTECTED] added the comment: "I'm a bit confused. In the PyBuffer_Release implementation you committed, there is no DECREF at all." Oops, I meant to make the reference owned by the Py_buffer, but actually forgot to implement that. Fixed in r65677, r65678. Now, when I try to merge that into the 3k branch, test_unicode terribly leaks memory :-( It's really frustrating. Regards, Martin ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue3554] ctypes.wstring_at and string_at call Python API without the GIL
New submission from Kevin Watters [EMAIL PROTECTED]: In Lib/ctypes/__init__.py, the wstring_at and string_at functions are declared with CFUNCTYPE. This means that in Modules/_ctypes/callproc.c, when the functions are invoked, Py_UNBLOCK_THREADS and Py_BLOCK_THREADS surround the call. But string_at and wstring_at call PyString_FromString and PyUnicode_FromWideChar, respectively. The solution (I think) is to declare the functions with PYFUNCTYPE instead, so that callproc.c doesn't release the GIL when calling them. -- assignee: theller components: ctypes messages: 71135 nosy: kevinwatters, theller severity: normal status: open title: ctypes.wstring_at and string_at call Python API without the GIL type: crash versions: Python 2.5, Python 2.6, Python 2.7, Python 3.0, Python 3.1 ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3554 ___
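The two prototype factories can at least be shown from pure Python via callbacks. This is only an API illustration (for callbacks, ctypes manages the GIL either way; the bug above concerns calls going in the other direction):

```python
import ctypes

# CFUNCTYPE prototypes release the GIL around foreign calls; PYFUNCTYPE
# prototypes keep it held, which is what string_at/wstring_at need, since
# they invoke Python C-API functions internally.
CPROTO = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)
PYPROTO = ctypes.PYFUNCTYPE(ctypes.c_int, ctypes.c_int)

def double(x):
    return x * 2

c_cb = CPROTO(double)    # suitable for pure C work
py_cb = PYPROTO(double)  # required once Python objects are touched inside
print(c_cb(21), py_cb(21))  # 42 42
```

Both prototypes produce callable wrappers; the difference is purely in whether the surrounding call site drops the GIL.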
[issue3139] bytearrays are not thread safe
Antoine Pitrou [EMAIL PROTECTED] added the comment: On Thursday, 14 August 2008, at 16:13, Martin v. Löwis wrote: "Now, when I try to merge that into the 3k branch, test_unicode terribly leaks memory :-( It's really frustrating." If you don't have the time to work on it anymore, you can post the current patch here and I'll take a try. Regards, Antoine. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue1424152] urllib/urllib2: HTTPS over (Squid) Proxy fails
Andrew Trick [EMAIL PROTECTED] added the comment: Mercurial will not work for anyone in a large company without this fix. I appreciate the patch, but hope it's released soon. I did try the patch with Mercurial, but now I'm getting a different error. I'm not sure if it's related to the same bug: abort: error: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol -- nosy: +AndrewTrick ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue1424152 ___
[issue1424152] urllib/urllib2: HTTPS over (Squid) Proxy fails
Martin Wilck [EMAIL PROTECTED] added the comment: I am not in my office. I'll be back on August 25, 2008. In urgent cases, please contact: Peter Pols [EMAIL PROTECTED] or Gerhard Wichert [EMAIL PROTECTED] Best regards Martin Wilck Added file: http://bugs.python.org/file3/unnamed ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue1424152 ___
[issue1424152] urllib/urllib2: HTTPS over (Squid) Proxy fails
Changes by Antoine Pitrou [EMAIL PROTECTED]:
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue1424152 ___
[issue1424152] urllib/urllib2: HTTPS over (Squid) Proxy fails
Changes by Antoine Pitrou [EMAIL PROTECTED]: Removed file: http://bugs.python.org/file3/unnamed
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue1424152 ___
[issue3139] bytearrays are not thread safe
Martin v. Löwis [EMAIL PROTECTED] added the comment: The patch is really trivial, and attached. Added file: http://bugs.python.org/file4/refcount.diff
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue3139] bytearrays are not thread safe
Antoine Pitrou [EMAIL PROTECTED] added the comment: On Thursday 14 August 2008 at 18:52, Martin v. Löwis wrote:
> The patch is really trivial, and attached.
> Added file: http://bugs.python.org/file4/refcount.diff
By the way, even without that patch, there is a memory leak:

Python 3.0b2+ (py3k, Aug 14 2008, 20:49:19)
[GCC 4.3.1 20080626 (prerelease)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys, codecs
>>> b = bytearray()
>>> sys.getrefcount(b)
2
>>> codecs.ascii_decode(memoryview(b))
('', 0)
>>> sys.getrefcount(b)
3

___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue3555] Regression: nested exceptions crash (Cannot recover from stack overflow)
New submission from Daniel Diniz [EMAIL PROTECTED]: The following code works[1] on trunk and 2.5.1, but crashes with "Fatal Python error: Cannot recover from stack overflow." on py3k as of rev 65676:

##
# Python 3.0b2+ (py3k:65676, Aug 14 2008, 14:37:38)
# [GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
import sys

def overflower():
    try:
        return overflower()
    except:
        return sys.exc_info()

def f():
    try:
        return f()
    except:
        return overflower()

f()
##

Catching RuntimeError crashes, letting it be raised avoids the crash. Adding "finally: return overflower()" along with a non-RuntimeError-catching except also gives a Fatal Python error. A smaller test case for hitting the overflow in py3k would be "def f(): [...] except: return f()", but that hangs in a (desirable?) infinite loop in 2.5 and trunk. [1] Works as in doesn't crash, but both the code above and the infinite loop hit issue2548 when run on a debug build of trunk. Calling overflower() alone in trunk hits the undetected error discussed in that issue, but works fine in py3k. -- components: Interpreter Core messages: 71141 nosy: ajaksu2 severity: normal status: open title: Regression: nested exceptions crash (Cannot recover from stack overflow) type: crash versions: Python 3.0
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3555 ___
[issue3139] bytearrays are not thread safe
Martin v. Löwis [EMAIL PROTECTED] added the comment:
> By the way, even without that patch, there is a memory leak:
With the patch, this memory leak goes away. Regards, Martin
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue3554] ctypes.wstring_at and string_at call Python API without the GIL
Thomas Heller [EMAIL PROTECTED] added the comment: Good catch! Indeed, when PyString_FromString or PyUnicode_FromWideChar fail, Python crashes with "Fatal Python error: PyThreadState_Get: no current thread" in a debug build, and an access violation in a release build (tested on Windows). Also, your patch suggestion is absolutely correct and fixes the problem.
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3554 ___
[issue3476] BufferedWriter not thread-safe
Antoine Pitrou [EMAIL PROTECTED] added the comment: Here is a new patch which simply wraps the current BufferedWriter methods with a lock. It has a test case, and Amaury's example works fine too. Martin, do you think it's fine? (as for BufferedReader, I don't see the use cases for multithreaded reading) -- assignee: - pitrou Added file: http://bugs.python.org/file5/bufferedwriter3.patch
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3476 ___
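The approach the patch takes — serializing each buffered-object method on a per-object lock — can be sketched in pure Python. This is an illustrative stand-in, not the patch itself: the real change is inside io.BufferedWriter's C/Python implementation, and the names `synchronized` and `LockedWriter` are invented here.

```python
import io
import threading
from functools import wraps

def synchronized(method):
    """Serialize calls to a method on the instance's reentrant lock."""
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        with self._lock:
            return method(self, *args, **kwargs)
    return wrapper

class LockedWriter:
    """Hypothetical lock-protected writer, in the spirit of the patch."""
    def __init__(self, raw):
        self._raw = raw
        # RLock, so one locked method may safely call another (e.g. a
        # write() that triggers an internal flush()).
        self._lock = threading.RLock()

    @synchronized
    def write(self, data):
        return self._raw.write(data)

    @synchronized
    def flush(self):
        return self._raw.flush()
```

With this shape, concurrent writers serialize on `_lock`, so the interleaved buffer updates seen in Amaury's example cannot occur.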
[issue3139] bytearrays are not thread safe
Antoine Pitrou [EMAIL PROTECTED] added the comment: On Thursday 14 August 2008 at 19:06, Martin v. Löwis wrote:
> > By the way, even without that patch, there is a memory leak:
> With the patch, this memory leak goes away.
But now:

30
>>> m = memoryview(b)
>>> sys.getrefcount(b)
32
>>> del m
>>> sys.getrefcount(b)
31

___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue3139] bytearrays are not thread safe
Antoine Pitrou [EMAIL PROTECTED] added the comment: Sorry, the roundup e-mail interface ate some lines of code:

>>> b = b''
>>> sys.getrefcount(b)
30
>>> m = memoryview(b)
>>> sys.getrefcount(b)
32
>>> del m
>>> sys.getrefcount(b)
31

It doesn't happen with bytearrays, so I suspect it's that you only DECREF when the releasebuffer method exists:

>>> b = bytearray()
>>> sys.getrefcount(b)
2
>>> m = memoryview(b)
>>> sys.getrefcount(b)
4
>>> del m
>>> sys.getrefcount(b)
2

___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue3555] Regression: nested exceptions crash (Cannot recover from stack overflow)
Changes by Guido van Rossum [EMAIL PROTECTED]: -- nosy: +gvanrossum
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3555 ___
[issue1432] Strange behavior of urlparse.urljoin
Facundo Batista [EMAIL PROTECTED] added the comment: Committed in revs 65679 and 65680. Thank you all!! -- resolution: - fixed status: open - closed
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue1432 ___
[issue3555] Regression: nested exceptions crash (Cannot recover from stack overflow)
Antoine Pitrou [EMAIL PROTECTED] added the comment: I'm no expert in recursion checking inside the Python interpreter, but looking at the code for _Py_CheckRecursiveCall(), I don't think it is a bug but a feature. Here is how I understand it. When the recursion level exceeds the normal recursion limit (let's call the latter N), a RuntimeError is raised and the normal recursion check is temporarily disabled (by setting tstate->overflowed) so that Python can run some recovery code (e.g. an except statement with a function call to log the problem), and another recursion check is put in place that is triggered at N+50. When the latter check triggers, the interpreter prints the aforementioned Py_FatalError and bails out. This is actually what happens in your example: when the normal recursion limit is hit and a RuntimeError is raised, you immediately catch the exception and run into a second infinite loop while the normal recursion check is temporarily disabled: the N+50 check then does its job. Here is a simpler way to showcase this behaviour, without any nested exceptions:

def f():
    try:
        return f()
    except:
        pass
    f()

f()

Can someone else comment on this? -- nosy: +pitrou
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3555 ___
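The two-threshold scheme Antoine describes can be emulated in pure Python. This is a sketch, not CPython's code: `LIMIT`, `MARGIN`, `call`, and the explicit depth bookkeeping are all illustrative stand-ins for what ceval and `_Py_CheckRecursiveCall()` do in C.

```python
LIMIT = 100    # plays the role of the normal recursion limit N
MARGIN = 50    # extra headroom for recovery code (the N+50 check)

depth = 0
overflowed = False  # plays the role of tstate->overflowed

def check_recursive_call():
    """Emulation of _Py_CheckRecursiveCall(): soft limit, then hard limit."""
    global overflowed
    if overflowed:
        # Normal check disabled; only the fatal N+50 check remains.
        if depth > LIMIT + MARGIN:
            raise SystemExit("Fatal: cannot recover from stack overflow")
        return
    if depth > LIMIT:
        overflowed = True
        raise RuntimeError("maximum recursion depth exceeded")

def call(func):
    """Call func with depth accounting, as the interpreter does per frame."""
    global depth, overflowed
    depth += 1
    try:
        check_recursive_call()
        return func()
    finally:
        depth -= 1
        if depth < LIMIT:
            overflowed = False  # back to safety: re-arm the normal check
```

Under this model, recovery code that stays shallow catches the RuntimeError normally, while recovery code that recurses again (as in the examples above) climbs into the N+50 band and hits the fatal check.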
[issue3476] BufferedWriter not thread-safe
Martin v. Löwis [EMAIL PROTECTED] added the comment: The patch looks fine (as far as it goes). I do think the same should be done to the reader: IO libraries typically provide a promise that concurrent threads can read, and will get the complete stream in an overlapped manner (i.e. each input byte goes to exactly one thread - no input byte gets lost, and no input byte is delivered to multiple threads). I don't think this is currently the case: two threads reading simultaneously may very well read the same bytes twice, and then, subsequently, skip bytes (i.e. when both increment _read_pos, but both see the original value of pos)
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3476 ___
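The overlapped-read promise Martin describes — every input byte delivered to exactly one thread, none lost, none duplicated — is what a lock around the position update and the read buys. A sketch under that assumption; `LockedReader` is a hypothetical stand-in for the fixed BufferedReader, not its actual code:

```python
import io
import threading

class LockedReader:
    """Hypothetical lock-protected reader: bytes go to exactly one thread."""
    def __init__(self, raw):
        self._raw = raw
        self._lock = threading.Lock()

    def read(self, n):
        # Position advance and data fetch are atomic under the lock, so
        # two threads can never both see the same read position.
        with self._lock:
            return self._raw.read(n)

data = bytes(range(256)) * 64
reader = LockedReader(io.BytesIO(data))
chunks = []
chunks_lock = threading.Lock()

def drain():
    while True:
        chunk = reader.read(17)
        if not chunk:
            break
        with chunks_lock:
            chunks.append(chunk)

threads = [threading.Thread(target=drain) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Afterwards the chunks, taken together, contain every input byte exactly once, though which thread got which chunk is not fixed; without the lock, the duplicated-then-skipped pattern Martin outlines becomes possible.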
[issue3556] test_raiseMemError consumes an insane amount of memory
New submission from Martin v. Löwis [EMAIL PROTECTED]: It appears that test_unicode::test_raiseMemError was meant to produce a MemoryError. Unfortunately, on my machine (Linux 2.6.25, 32-bit processor, 1GiB main memory, plenty of swap), the allocation *succeeds*, and then brings the machine to a near halt, trying to fill that memory with data. IMO, the patch should be rewritten to either reliably produce a MemoryError (why not allocate sys.maxsize characters, or sys.maxsize//2?), or else it should be removed. -- assignee: amaury.forgeotdarc messages: 71150 nosy: amaury.forgeotdarc, loewis severity: normal status: open title: test_raiseMemError consumes an insane amount of memory versions: Python 2.6, Python 3.0
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3556 ___
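Martin's suggested fix can be sketched directly: asking for sys.maxsize characters is refused by the allocator up front, so the test fails fast instead of thrashing through swap. The helper name is illustrative, and OverflowError is caught as a hedge for builds where the size computation overflows first.

```python
import sys

def raises_memory_error():
    """Try to build an impossibly large string; expect an immediate failure."""
    try:
        "x" * sys.maxsize  # the allocator refuses before touching memory
        return False
    except (MemoryError, OverflowError):
        return True
```

This is the reliability property the test needs: the failure comes from the size check, not from actually exhausting RAM.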
[issue600362] relocate cgi.parse_qs() into urlparse
Senthil [EMAIL PROTECTED] added the comment: Hi Facundo, this issue/comments somehow escaped my notice initially. I have addressed your comments in the new set of patches. 1) The previous patch's Docs had issues. Updated the Docs patch. 2) Included a message in cgi.py about parse_qs and parse_qsl being present for backward compatibility. 3) The reason the py26 version of the patch has the quote function from urllib is to avoid a circular reference: urllib imports urlparse for the urljoin method, so the only way for us to use quote is to have that portion of code in the patch as well. Please have a look at the patches. As this request has been present for a long time (since 2002-08-26!), is it possible to include this change in b3? Thanks, Senthil Added file: http://bugs.python.org/file6/issue600362-py26-v2.diff
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue600362 ___
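From the caller's side, the relocation looks like this under the py3k module layout (shown with the module path the 3.0 reorganization uses; the query string below is just an example):

```python
# Formerly cgi.parse_qs / cgi.parse_qsl; relocated next to the other
# URL helpers.
from urllib.parse import parse_qs, parse_qsl

query = "name=Senthil&lang=en&lang=ta"

# parse_qs collects repeated keys into lists...
assert parse_qs(query) == {"name": ["Senthil"], "lang": ["en", "ta"]}

# ...while parse_qsl preserves the original pair order.
assert parse_qsl(query) == [("name", "Senthil"),
                            ("lang", "en"),
                            ("lang", "ta")]
```

The cgi versions remain as thin aliases for backward compatibility, as noted in point 2 above.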
[issue600362] relocate cgi.parse_qs() into urlparse
Changes by Senthil [EMAIL PROTECTED]: Added file: http://bugs.python.org/file7/issue600362-py3k-v2.diff
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue600362 ___
[issue600362] relocate cgi.parse_qs() into urlparse
Changes by Senthil [EMAIL PROTECTED]: Removed file: http://bugs.python.org/file10771/issue600362-py26.diff
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue600362 ___
[issue600362] relocate cgi.parse_qs() into urlparse
Changes by Senthil [EMAIL PROTECTED]: Removed file: http://bugs.python.org/file10772/issue600362-py3k.diff
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue600362 ___
[issue3139] bytearrays are not thread safe
Martin v. Löwis [EMAIL PROTECTED] added the comment:
> It doesn't happen with bytearrays, so I suspect it's that you only DECREF when the releasebuffer method exists:
Thanks, that was indeed the problem; this is now fixed in r65683 and r65685. My initial observation that test_unicode leaks a lot of memory was incorrect - it's rather that test_raiseMemError consumes all my memory (probably without leaking). test_unicode still leaks 6 references each time; one reference is leaked whenever a SyntaxError is raised. I'm not sure though whether this was caused by this patch, so I'll close this issue as fixed. Any further improvements should be done through separate patches (without my involvement, most likely).
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue3139] bytearrays are not thread safe
Changes by Martin v. Löwis [EMAIL PROTECTED]: -- resolution: - fixed status: open - closed
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3139 ___
[issue3476] BufferedWriter not thread-safe
Antoine Pitrou [EMAIL PROTECTED] added the comment: Both BufferedReader and BufferedWriter are now fixed in r65686. Perhaps someone wants to open a separate issue for TextIOWrapper... -- resolution: - fixed status: open - closed
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3476 ___
[issue3555] Regression: nested exceptions crash (Cannot recover from stack overflow)
Daniel Diniz [EMAIL PROTECTED] added the comment: Antoine, thanks for your analysis. I still believe this is a regression for the case described, but take my opinion with a grain of salt :)
> looking at the code for _Py_CheckRecursiveCall(), I don't think it is a bug but a feature.
It does seem to be working as designed; if that is a desirable behavior then this issue should be closed.
> This is actually what happens in your example: when the normal recursion limit is hit and a RuntimeError is raised, you immediately catch the exception and run into a second infinite loop while the normal recursion check is temporarily disabled: the N+50 check then does its job.
Except that it wasn't an infinite loop in 2.5 and isn't in trunk: it terminates on overflower's except. That's why I think this is a regression. Besides being different behavior, it seems weird to bail out on a recursion issue instead of dealing with it. Your showcase is a better way of getting an infinite loop in trunk than the one I mentioned, but AFAIK we are more comfortable with infinite loops than with fatal errors.
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3555 ___
[issue3555] Regression: nested exceptions crash (Cannot recover from stack overflow)
Antoine Pitrou [EMAIL PROTECTED] added the comment:
> Except that it wasn't an infinite loop in 2.5 and isn't in trunk: it terminates on overflower's except.
That's because the stated behaviour is only implemented in 3.0 and not in 2.x. I'm not sure what motivated it, but you are the first one to complain about it. If you think it is a regression, I think you should open a thread on the python-dev mailing-list about it.
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3555 ___
[issue3552] uuid - exception on uuid3/uuid5
Matt Giuca [EMAIL PROTECTED] added the comment: So are you saying that if I had libopenssl (or whatever the name is) installed and linked with Python, it would bypass the use of _md5 and _sha1, and call the hash functions in libopenssl instead? And all the buildbots _do_ have it linked? That would indicate that the bots _aren't_ testing the code in _md5 and _sha1 at all. Perhaps one should be made to?
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3552 ___
[issue3557] Segfault in sha1
New submission from Matt Giuca [EMAIL PROTECTED]: Continuing the discussion from Issue 3552 (http://bugs.python.org/issue3552). r65676 makes changes to Modules/md5module.c and Modules/sha1module.c, to allow them to read mutable buffers. There's a segfault in sha1module if given 0 arguments, e.g.:

>>> import _sha1
>>> _sha1.sha1()
Segmentation fault

Docs here suggest this should be OK: http://docs.python.org/dev/3.0/library/hashlib.html This crashes on the Lib/test/test_hmac.py test case, but apparently (according to Martin on issue 3552) none of the build bots see it because they use libopenssl and completely bypass the _md5 and _sha1 modules. Also there are no direct test cases for either of these modules. This is because new code in r65676 doesn't initialise a pointer to NULL. Fixed in patch (as well as replaced tab with spaces for consistency, in both modules). I strongly recommend that a) a build bot be made to use _md5 and _sha1 instead of OpenSSL (or they aren't running that code at all), AND/OR b) direct test cases be written for _md5 and _sha1. Commit log: Fixed crash on _sha1.sha1(), with no arguments, due to not initialising pointer. Normalised indentation in md5module.c and sha1module.c. -- components: Interpreter Core files: sha1.patch keywords: patch messages: 71157 nosy: mgiuca severity: normal status: open title: Segfault in sha1 type: crash versions: Python 3.0 Added file: http://bugs.python.org/file8/sha1.patch
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3557 ___
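Recommendation (b) — direct, known-answer tests for the hash constructors — can be sketched as below. hashlib is used here rather than _sha1 because the private C modules may be absent on an OpenSSL-linked build, which is exactly the blind spot described above; the helper name is illustrative.

```python
import hashlib

def check_sha1(constructor):
    """Exercise the zero-argument constructor path that crashed,
    then feed data separately and return the digest."""
    h = constructor()        # no initial data: this is the call that segfaulted
    h.update(b"abc")
    return h.hexdigest()

# Known-answer check against the standard SHA-1 test vector for "abc".
assert check_sha1(hashlib.sha1) == "a9993e364706816aba3e25717850c26c9cd0d89d"
```

Run against a build that selects the _sha1 backend, a test of this shape would have caught the crash before it reached test_hmac.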
[issue3514] pickle segfault with infinite loop in __getattr__
Alexandre Vassalotti [EMAIL PROTECTED] added the comment: Committed fix in r65689. Thanks! -- resolution: - fixed status: open - closed
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3514 ___
[issue3385] cPickle to pickle conversion in py3k missing methods
Alexandre Vassalotti [EMAIL PROTECTED] added the comment: I ran into a few problems while trying to fix this issue. First, does someone know how to add class attributes on extension types? It sounds like I will need either some tp_dict hacking or a Pickler subclass. Second, which methods of Pickler should be made public? I know save_reduce() is needed, but would it be worthwhile to expose more? In the recipe Amaury linked (which abuses Pickler IMHO), save_global(), save_dict(), write() and memoize() are used. Exposing all save_* methods is out of question for now as none were written to be used standalone. Third, should Pickler allows pickling support for built-in types (e.g., int, dict, tuple, etc) to be overridden? Currently, pickle.py allows it. However, I am not sure if it is a good idea to copy this feature in _pickle.c.
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3385 ___
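The per-type override Alexandre is weighing can be sketched without exposing any save_* methods, via a pickler-local dispatch table mapping a type to its reduction function — the hook the pickle module documents for exactly this. `Point` and `reduce_point` below are illustrative, not from the recipe:

```python
import io
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # Standard reduce form: (callable, args) recreates the object on load.
    return (Point, (p.x, p.y))

buf = io.BytesIO()
pickler = pickle.Pickler(buf)
pickler.dispatch_table = {Point: reduce_point}  # per-pickler override
pickler.dump(Point(1, 2))

restored = pickle.loads(buf.getvalue())
```

Because the table lives on the pickler instance rather than on the class, this sidesteps both the tp_dict hacking and the subclassing questions raised above.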
[issue3558] Operator precedence misdocumented
New submission from Terry J. Reedy [EMAIL PROTECTED]: Reference/Expressions/Primaries: "Primaries represent the most tightly bound operations of the language. Their syntax is: primary ::= atom | attributeref | subscription | slicing | call" This (along with the fact that all sections after the 'call' doc follow in order of decreasing precedence) implies to me that atom is highest and call is lowest of this group. Certainly, attributeref seems higher than those that follow, as ob.attr[x] and ob.attr(x) are executed as (ob.attr)[x] and (ob.attr)(x), not as ob.(attr[x]) or ob.(attr(x)) (both taken literally are syntax errors). (Michael Tobis gave an example today on c.l.p showing this.) But the Summary of precedence at the chapter end lists attributeref to call as low to high. I think these should be reversed. -- assignee: georg.brandl components: Documentation messages: 71160 nosy: georg.brandl, tjreedy severity: normal status: open title: Operator precedence misdocumented versions: Python 2.5, Python 2.6, Python 3.0
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3558 ___
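The grouping Terry describes can be checked at the interpreter in a couple of lines (the class `Ob` is hypothetical, made up for the demonstration):

```python
class Ob:
    attr = [10, 20, 30]
    def method(self, x):
        return x * 2

ob = Ob()

# Attribute access binds more tightly than subscription or call:
assert ob.attr[1] == (ob.attr)[1] == 20
assert ob.method(3) == (ob.method)(3) == 6
# The alternative groupings ob.(attr[1]) and ob.(method(3)) are
# syntax errors, as noted above.
```

This confirms the behavior; the open question is only whether the chapter-end summary table lists the primaries in the right order.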
[issue3559] Pasted \n not same as typed \n
New submission from Terry J. Reedy [EMAIL PROTECTED]: WinXP, 3.0b2, but I suspect earlier versions as well, since reported on c.l.p. If I paste '1+2\n' into the command window interpreter, it responds with '3' and a new prompt. In IDLE, the pasted \n is ignored and a typed \n is required to trigger execution. As a consequence, pasting multiple statements does not work; anything after the first is ignored. If this cannot be changed, following "Paste -- Insert system-wide clipboard into window" with "(The shell will only recognize one statement)" would at least document the limitation. -- components: IDLE messages: 71161 nosy: tjreedy severity: normal status: open title: Pasted \n not same as typed \n type: behavior versions: Python 3.0
___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue3559 ___