[issue14074] argparse allows nargs>1 for positional arguments but doesn't allow metavar to be a tuple
paul j3 added the comment: This patch fixes both help and error formatting. A module level '_format_metavars' does the formatting for both. I have tried several alternatives, including using the 'usage' style. There is similarity between this fix and that for issue 16468 (custom choices), though I don't think they conflict. In both cases, code needed to format the usage or help is also needed to help format error messages. Issue 9849 (better nargs warning) is another case where error checking in the parser depends on the formatter. In the long run we may want to refactor these issues. -- Added file: http://bugs.python.org/file35049/issue14074_1.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14074 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
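For context, the tuple-metavar feature being fixed here can be sketched on an optional argument, where it is documented to work (the `demo`/`--move` names are illustrative):

```python
import argparse

# Each slot of an nargs=2 option can be named individually via a tuple metavar.
parser = argparse.ArgumentParser(prog='demo')
parser.add_argument('--move', nargs=2, metavar=('FROM', 'TO'))

usage = parser.format_usage()
assert 'FROM TO' in usage  # usage line reads: demo [-h] [--move FROM TO]
```

The bug is that the same spelling on a positional argument used to break the help and error formatting, which is what the patch addresses.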
[issue11709] help-method crashes if sys.stdin is None
Jessica McKellar added the comment: Thanks for reporting this, palm.kevin, and thanks for the patch, amaury.forgeotdarc. First, just to be explicit, here's a short reproducer:

import sys
sys.stdin = None
help(1)

(Note that to get to the isatty check you need to provide an argument, and it has to be something that has help, so `help()` and `help(a)` don't exercise this code path.) Also, here is where sys.stdin can be set to None: http://hg.python.org/cpython/file/dbceba88b96e/Python/pythonrun.c#l1201 The provided patch fixes the above test case; instead of erroring out with the traceback in the original bug report, the plain pager is used and the help message is printed to stdout. Anyone on the nosy list interested in writing some tests? -- nosy: +jesstess stage: patch review -> test needed versions: +Python 3.5 -Python 3.2 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11709 ___
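A sketch of the fixed behaviour (this assumes a Python with the patch applied): with sys.stdin set to None, help() falls back to the plain pager and writes to stdout rather than raising.

```python
import contextlib
import io
import sys

saved_stdin, buf = sys.stdin, io.StringIO()
try:
    sys.stdin = None                      # simulate an embedded interpreter
    with contextlib.redirect_stdout(buf):
        help(1)                           # needs an argument to hit the pager
finally:
    sys.stdin = saved_stdin               # always restore the real stdin

assert 'int' in buf.getvalue()            # the int documentation was printed
```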
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: I read again some remarks about alignment; it was suggested to provide allocators returning an address aligned to a requested alignment. This topic was already discussed in #18835. The alignment issue is really orthogonal to the calloc one, so IMO it shouldn't be discussed here. (And FWIW I don't think we should expose those: alignment only matters either for concurrency or for SIMD instructions, and I don't think we should try to standardize this kind of API; it's way too special-purpose, and then we'd have to think about huge pages, etc. Whereas calloc is a simple and immediately useful addition, not only for Numpy but also for CPython.) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue19940] ssl.cert_time_to_seconds() returns wrong results if local timezone is not UTC
akira added the comment: Here's a new patch with a simplified ssl.cert_time_to_seconds() implementation that brings strptime() back. The behaviour is changed:

- accept both %e and %d strftime formats for days, as the strptime-based implementation did before
- return an integer instead of a float (the input date has no fractions of a second)

I've added more tests. Please review. -- Added file: http://bugs.python.org/file35050/ssl_cert_time_to_seconds-462470859e57.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19940 ___
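The behavioural point of the fix can be checked without a real certificate: the "GMT" suffix means the timestamp must be computed in UTC, independent of the local timezone (the date string below is illustrative):

```python
import calendar
import ssl

# cert_time_to_seconds() must agree with a UTC conversion (calendar.timegm),
# not with time.mktime(), which interprets the tuple in local time.
ts = ssl.cert_time_to_seconds("Jan  5 09:34:43 2018 GMT")
expected = calendar.timegm((2018, 1, 5, 9, 34, 43, 0, 0, 0))
assert ts == expected
```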
[issue19940] ssl.cert_time_to_seconds() returns wrong results if local timezone is not UTC
akira added the comment: Replaced IndexError with ValueError in the patch, because tuple.index raises ValueError. -- Added file: http://bugs.python.org/file35051/ssl_cert_time_to_seconds-ps5.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19940 ___
[issue21090] File read silently stops after EIO I/O error
STINNER Victor added the comment: 2014-04-27 5:26 GMT+02:00, ivank:

> (I see the `IOError: [Errno 5] Input/output error` exception now.)

Can you please run your test under strace to see the system calls? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21090 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment: 2014-04-27 10:30 GMT+02:00, Charles-François Natali:

> I read again some remarks about alignment; it was suggested to provide allocators returning an address aligned to a requested alignment. This topic was already discussed in #18835. [...]

This issue was opened to be able to use tracemalloc on numpy. I would like to make sure that calloc is enough for numpy: I would prefer to change the malloc API only once. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue19950] Document that unittest.TestCase.__init__ is called once per test
Claudiu.Popa added the comment: In the Python 3 docs there is a hint in the documentation for `loadTestsFromModule`: "This method searches *module* for classes derived from TestCase and creates an instance of the class for each test method defined for the class." The phrase "with a fixture per test" from the Python 2 docs is gone, though. It would be nice if the same explanation from loadTestsFromModule could be applied to the TestCase documentation. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19950 ___
[issue20642] Enhance deepcopy-ing for tuples
Claudiu.Popa added the comment: Ping? The change is clear, has the same semantics, and it's a little bit faster. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20642 ___
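For context, the tuple path in copy.deepcopy has an observable identity property that any change must preserve; a quick sketch:

```python
import copy

# If deep-copying every element yields the identical objects back, deepcopy
# returns the original tuple instead of allocating a new one.
t = (1, 'a', (2, 3))
assert copy.deepcopy(t) is t

# A tuple holding a mutable element is genuinely copied, element included.
u = ([1], 2)
v = copy.deepcopy(u)
assert v == u and v is not u and v[0] is not u[0]
```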
[issue18039] dbm.open(..., flag=n) does not work and does not give a warning
Claudiu.Popa added the comment: Can anyone review this patch? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18039 ___
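For reference, the behaviour the patch nails down, shown with dbm.dumb (this assumes the fixed semantics, where flag 'n' always creates a new, empty database):

```python
import dbm.dumb
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'db')

    db = dbm.dumb.open(path, 'n')   # 'n': create a new, empty database
    db[b'key'] = b'value'
    db.close()

    db = dbm.dumb.open(path, 'n')   # 'n' again: existing data is discarded
    remaining = len(db)
    db.close()

assert remaining == 0
```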
[issue18615] sndhdr.whathdr could return a namedtuple
Claudiu.Popa added the comment: Ping. :) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18615 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment:

> This issue was opened to be able to use tracemalloc on numpy. I would like to make sure that calloc is enough for numpy. I would prefer to change the malloc API only once.

Then please at least rename the issue. Also, I don't see why everything should be done at once: calloc support is a self-contained change, which is useful outside of numpy. Enhanced tracemalloc support for numpy certainly belongs in another issue. Regarding the *Calloc functions: how about we provide a sane API instead of reproducing the cumbersome C API? I mean, why not expose:

PyAPI_FUNC(void *) PyMem_Calloc(size_t size);

instead of

PyAPI_FUNC(void *) PyMem_Calloc(size_t nelem, size_t elsize);

AFAICT, the two arguments are purely historical (the form was used when malloc() didn't guarantee suitable alignment, and it has the advantage of performing an overflow check when doing the multiplication, but in our code we always check for that anyway). See:
https://groups.google.com/forum/#!topic/comp.lang.c/jZbiyuYqjB4
http://stackoverflow.com/questions/4083916/two-arguments-to-calloc
and http://www.eglibc.org/cgi-bin/viewvc.cgi/trunk/libc/malloc/malloc.c?view=markup to check that calloc(nelem, elsize) is implemented as calloc(nelem * elsize). I'm also concerned about the change to _PyObject_GC_Malloc(): it now calls calloc() instead of malloc(), so it can definitely be slower. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
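The overflow check that the two-argument form buys is easy to illustrate; this is a hypothetical Python model of the C-level logic, with `size_max` standing in for SIZE_MAX:

```python
def checked_alloc_size(nelem, elsize, size_max=2**64 - 1):
    # calloc(nelem, elsize) must fail cleanly if the product overflows size_t;
    # a single-argument calloc(size) would push this check onto every caller.
    if elsize != 0 and nelem > size_max // elsize:
        raise OverflowError('nelem * elsize overflows size_t')
    return nelem * elsize

assert checked_alloc_size(10, 8) == 80

overflowed = False
try:
    checked_alloc_size(2**63, 4)   # 2**63 * 4 does not fit in 64-bit size_t
except OverflowError:
    overflowed = True
assert overflowed
```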
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: Note to numpy devs: it would be great if some of you followed the python-dev mailing list (I know it can be quite volume intensive, but maybe simple filters could help keep the noise down): you guys have definitely both expertise and real-life applications that could be very valuable in helping us design the best possible public/private APIs. It's always great to have downstream experts/end-users! -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment: I wrote a short microbenchmark allocating objects using my benchmark.py script. It looks like the operation (None,) * N is slower with calloc-3.patch, but it's unclear how many times slower it is. I don't understand why only this operation has a different speed. Do you have ideas for other benchmarks? Using the timeit module:

$ ./python.orig -m timeit '(None,) * 10**5'
1000 loops, best of 3: 357 usec per loop
$ ./python.calloc -m timeit '(None,) * 10**5'
1000 loops, best of 3: 698 usec per loop

But with different parameters, the difference is lower:

$ ./python.orig -m timeit -r 20 -n '1000' '(None,) * 10**5'
1000 loops, best of 20: 362 usec per loop
$ ./python.calloc -m timeit -r 20 -n '1000' '(None,) * 10**5'
1000 loops, best of 20: 392 usec per loop

Results of bench_alloc.py:

Common platform:
CFLAGS: -Wno-unused-result -Werror=declaration-after-statement -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
CPU model: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
Python unicode implementation: PEP 393
Timer info: namespace(adjustable=False, implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, resolution=1e-09)
Timer: time.perf_counter
SCM: hg revision=462470859e57+ branch=default date=2014-04-26 19:01 -0400
Platform: Linux-3.13.8-200.fc20.x86_64-x86_64-with-fedora-20-Heisenbug
Bits: int=32, long=64, long long=64, size_t=64, void*=64

Platform of campaign orig:
Timer precision: 42 ns
Date: 2014-04-27 12:27:26
Python version: 3.5.0a0 (default:462470859e57, Apr 27 2014, 11:52:55) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]

Platform of campaign calloc:
Timer precision: 45 ns
Date: 2014-04-27 12:29:10
Python version: 3.5.0a0 (default:462470859e57+, Apr 27 2014, 12:04:57) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]

------------------------------------+--------------+--------------
Tests                               | orig         | calloc
------------------------------------+--------------+--------------
object()                            | 61 ns (*)    | 62 ns
b'A' * 10                           | 55 ns (*)    | 51 ns (-7%)
b'A' * 10**3                        | 99 ns (*)    | 94 ns
b'A' * 10**6                        | 37.5 us (*)  | 36.6 us
'A' * 10                            | 62 ns (*)    | 58 ns (-7%)
'A' * 10**3                         | 107 ns (*)   | 104 ns
'A' * 10**6                         | 37 us (*)    | 36.6 us
'A' * 10**8                         | 16.2 ms (*)  | 16.4 ms
decode 10 null bytes from ASCII     | 253 ns (*)   | 248 ns
decode 10**3 null bytes from ASCII  | 359 ns (*)   | 357 ns
decode 10**6 null bytes from ASCII  | 78.8 us (*)  | 78.7 us
decode 10**8 null bytes from ASCII  | 26.2 ms (*)  | 25.9 ms
(None,) * 10**0                     | 30 ns (*)    | 30 ns
(None,) * 10**1                     | 78 ns (*)    | 77 ns
(None,) * 10**2                     | 427 ns (*)   | 460 ns (+8%)
(None,) * 10**3                     | 3.5 us (*)   | 3.7 us (+6%)
(None,) * 10**4                     | 34.7 us (*)  | 37.2 us (+7%)
(None,) * 10**5                     | 357 us (*)   | 390 us (+9%)
(None,) * 10**6                     | 3.86 ms (*)  | 4.43 ms (+15%)
(None,) * 10**7                     | 50.4 ms (*)  | 50.3 ms
(None,) * 10**8                     | 505 ms (*)   | 504 ms
([None] * 10)[1:-1]                 | 121 ns (*)   | 120 ns
([None] * 10**3)[1:-1]              | 3.57 us (*)  | 3.57 us
([None] * 10**6)[1:-1]              | 4.61 ms (*)  | 4.59 ms
([None] * 10**8)[1:-1]              | 585 ms (*)   | 582 ms
------------------------------------+--------------+--------------
Total                               | 1.19 sec (*) | 1.19 sec
------------------------------------+--------------+--------------

-- Added file: http://bugs.python.org/file35052/bench_alloc.py ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
Antoine Pitrou added the comment:

> Regarding the *Calloc functions: how about we provide a sane API instead of reproducing the cumbersome C API?

Isn't the point of reproducing the C API to allow quickly switching from calloc() to PyObject_Calloc()? (Besides, it seems the OpenBSD guys like the two-argument form :-)) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
Stefan Krah added the comment: Just to add another data point, I don't find the calloc() API cumbersome. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue20230] structseq types should expose _fields
Changes by Stefan Krah stefan-use...@bytereef.org: -- nosy: +skrah ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20230 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment: It looks like calloc-3.patch is wrong: it modifies _PyObject_GC_Malloc() to fill the newly allocated buffer with zeros, but _PyObject_GC_Malloc() is not only called by PyType_GenericAlloc(): it is also used by _PyObject_GC_New() and _PyObject_GC_NewVar(). The patch is maybe a little bit slower because it writes zeros twice. calloc.patch adds PyObject* _PyObject_GC_Calloc(size_t); and doesn't have this issue. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue1820] Enhance Object/structseq.c to match namedtuple and tuple api
Changes by Stefan Krah stefan-use...@bytereef.org: -- nosy: +skrah ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1820 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
Stefan Krah added the comment: Actually, I think we have to match the C API: for instance, in Modules/_decimal/_decimal.c:5527 the libmpdec allocators are set to the Python allocators. So I'd need to do:

mpd_callocfunc = PyMem_Calloc;

I suppose that's a common use case. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21362] concurrent.futures does not validate that max_workers is proper
New submission from Claudiu.Popa: Due to some bad math on my side, I passed max_workers=0 to concurrent.futures.ThreadPoolExecutor. This didn't fail properly, but hung. The same behaviour occurs in ProcessPoolExecutor, but this time it fails internally with something like this:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Python34\lib\threading.py", line 921, in _bootstrap_inner
    self.run()
  File "C:\Python34\lib\threading.py", line 869, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python34\lib\concurrent\futures\process.py", line 225, in _queue_management_worker
    assert sentinels
AssertionError

The attached patch checks whether *max_workers* is <= 0 and raises ValueError if so. -- components: Library (Lib) files: futures_max_workers.patch keywords: patch messages: 217258 nosy: Claudiu.Popa, bquinlan priority: normal severity: normal status: open title: concurrent.futures does not validate that max_workers is proper type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file35053/futures_max_workers.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21362 ___
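With validation along the lines of the patch (and as released versions of Python now behave), the failure becomes immediate and explicit:

```python
from concurrent.futures import ThreadPoolExecutor

error = None
try:
    ThreadPoolExecutor(max_workers=0)   # fails fast instead of hanging
except ValueError as exc:
    error = exc

assert isinstance(error, ValueError)
```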
[issue21362] concurrent.futures does not validate that max_workers is proper
Claudiu.Popa added the comment: For instance, multiprocessing behaves like this:

>>> multiprocessing.Pool(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\multiprocessing\context.py", line 118, in Pool
    context=self.get_context())
  File "C:\Python34\lib\multiprocessing\pool.py", line 157, in __init__
    raise ValueError("Number of processes must be at least 1")
ValueError: Number of processes must be at least 1

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21362 ___
[issue21363] io.TextIOWrapper always closes wrapped files
New submission from Armin Ronacher: I'm trying to write some code that fixes a misconfigured sys.stdin on a case-by-case basis, but unfortunately I cannot use TextIOWrapper for this because it always closes the underlying file:

>>> import io
>>> sys.stdin.encoding
'ANSI_X3.4-1968'
>>> stdin = sys.stdin
>>> correct_stdin = io.TextIOWrapper(stdin.buffer, 'utf-8')
>>> correct_stdin.readline()
foobar
'foobar\n'
>>> del correct_stdin
>>> stdin.readline()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: I/O operation on closed file.

Ideally there would be a way to disable this behavior. -- messages: 217260 nosy: aronacher priority: normal severity: normal status: open title: io.TextIOWrapper always closes wrapped files ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21363 ___
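One workaround that already exists is to detach() the wrapper before it is finalized, so the underlying stream survives; a sketch with BytesIO standing in for sys.stdin.buffer:

```python
import io

raw = io.BytesIO(b'foobar\n')                      # stand-in for stdin.buffer
wrapper = io.TextIOWrapper(raw, encoding='utf-8')
line = wrapper.readline()
wrapper.detach()     # disconnect so finalizing the wrapper won't close raw
del wrapper

assert line == 'foobar\n'
assert not raw.closed
```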
[issue16104] Compileall script: add option to use multiple cores
Claudiu.Popa added the comment: Added a new patch with improvements suggested by Jim. Thanks! I removed the special handling of processes=1, because it can still be useful: having a background worker which processes the files received from _walk_dir. Also, the patch checks that compile_dir receives a positive *processes* value, and otherwise raises a ValueError. As a side note, I just found that ProcessPoolExecutor / ThreadPoolExecutor don't verify the value of processes, leading to certain types of errors (see issue21362 for more details). Jim, the default for processes is still None, meaning "do not use multiple processes", because the purpose of ProcessPoolExecutor makes it easy for it to treat processes=None as os.cpu_count(). Here we want the user to be explicit about wanting multiple processes or not. -- Added file: http://bugs.python.org/file35054/issue16104_9.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16104 ___
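The end state of this work can be sketched against the API as it eventually shipped; note the committed parameter is named `workers` rather than the patch's `processes` (treat that name as an assumption of this sketch):

```python
import compileall
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, 'mod.py'), 'w') as f:
        f.write('x = 1\n')
    # workers=2 byte-compiles the tree using a pool of worker processes
    ok = compileall.compile_dir(d, quiet=1, workers=2)

assert ok   # compile_dir reports success for the whole tree
```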
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment:

> It looks like calloc-3.patch is wrong: it modifies _PyObject_GC_Malloc() to fill the newly allocated buffer with zeros, but _PyObject_GC_Malloc() is not only called by PyType_GenericAlloc(): it is also used by _PyObject_GC_New() and _PyObject_GC_NewVar(). The patch is maybe a little bit slower because it writes zeros twice.

Exactly (sorry, I thought you'd already seen that, otherwise I could have told you!)

> Actually, I think we have to match the C API: for instance, in Modules/_decimal/_decimal.c:5527 the libmpdec allocators are set to the Python allocators.

Hmm, OK then. I didn't know we were plugging our allocators into external libraries: that's indeed a very good reason to keep the same prototype. But I still find this API cumbersome: calloc is exactly like malloc except for the zeroing, so the prototype could be simpler (a quick look at Victor's patch shows a lot of calloc(1, n), which is a sign something's wrong). Maybe it's just me ;-)

Otherwise, a random thought: by changing PyType_GenericAlloc() from malloc() + memset(0) to calloc(), there could be a subtle side effect. If a given type relies on the 0-setting (which is documented), and doesn't do any other work on the allocated area behind the scenes (think of a mmap-like object), we could lose our capacity to detect MemoryError, and run into segfaults instead. If code creates many such objects which basically just do calloc(), then on operating systems with memory overcommitting (such as Linux) the calloc() allocations will pretty much always succeed, but will segfault when the page is first written to under low memory. I don't think such use cases are common: I would expect most types to use tp_alloc(type, 0) and then use an internal additional pointer for the allocations they need, or to write to the allocated memory area right after allocation, but it's something to keep in mind. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue16104] Compileall script: add option to use multiple cores
Changes by Steven D'Aprano steve+pyt...@pearwood.info: -- nosy: -steven.daprano ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16104 ___
[issue21225] io.py: Improve docstrings for classes
Berker Peksag added the comment: Can this be closed? (Or does it need a backport to 2.7? http://hg.python.org/cpython/file/2.7/Lib/io.py#l69) -- nosy: +berker.peksag ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21225 ___
[issue16104] Compileall script: add option to use multiple cores
Claudiu.Popa added the comment: Added a new patch with fixes proposed by Berker Peksag. Thanks for the review. Hopefully this is the last iteration of this patch. -- Added file: http://bugs.python.org/file35055/issue16104_10.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16104 ___
[issue21090] File read silently stops after EIO I/O error
Charles-François Natali added the comment: I'm with Antoine, it's likely a glibc bug. We already had a similar issue with fwrite(): http://bugs.python.org/issue17976 -- nosy: +neologix ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21090 ___
[issue21090] File read silently stops after EIO I/O error
Antoine Pitrou added the comment: ivank, if you know some C, perhaps you could write a trivial program that does an fopen() followed by an fread() of 131072 bytes, and see if the fread() errors out. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21090 ___
[issue21361] Add how to run a single test case to the devguide
Roundup Robot added the comment: New changeset 6b912c90de72 by Benjamin Peterson in branch 'default': say how to run one test (closes #21361) http://hg.python.org/devguide/rev/6b912c90de72 -- nosy: +python-dev resolution: -> fixed stage: patch review -> resolved status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21361 ___
[issue21349] crash in winreg SetValueEx with memoryview
Tim Golden added the comment: Committed. Thanks for the patch. -- resolution: -> fixed stage: -> resolved status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21349 ___
[issue21320] dict() allows keyword expansion with integer keys, e.g. dict(**{5:'v'})
Eric V. Smith added the comment: I agree with Raymond: this isn't a practical problem worth solving. If it's causing an actual problem, please re-open this issue and give a use case. Thanks. -- nosy: +eric.smith resolution: -> wont fix status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21320 ___
[issue21364] Documentation Recommends Broken Pattern
New submission from Armin Ronacher: The documentation currently recommends replacing sys.stdin with a binary stream: https://docs.python.org/3/library/sys.html#sys.stdin This sounds like a bad idea because it will break pretty much everything in Python in the process. As an example:

>>> import sys
>>> sys.stdin = sys.stdin.detach()
>>> input('Test: ')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: '_io.BufferedReader' object has no attribute 'errors'
>>> sys.stdout = sys.stdout.detach()
>>> print('Hello World!')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' does not support the buffer interface

-- messages: 217270 nosy: aronacher priority: normal severity: normal status: open title: Documentation Recommends Broken Pattern ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21364 ___
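A pattern that keeps the text API working is to rewrap the binary buffer in a new TextIOWrapper rather than replacing the text stream with a binary one; in this sketch a BytesIO stands in for the real sys.stdout.buffer:

```python
import io

raw = io.BytesIO()   # stand-in for the real sys.stdout.buffer
text = io.TextIOWrapper(raw, encoding='utf-8', line_buffering=True)

print('Hello World!', file=text)   # str-based I/O keeps working
text.flush()

assert raw.getvalue() == b'Hello World!\n'
```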
[issue9291] mimetypes initialization fails on Windows because of non-Latin characters in registry
Roundup Robot added the comment: New changeset 18cfc2a42772 by Tim Golden in branch '2.7': Issue #9291 Do not attempt to re-encode mimetype data read from registry in ANSI mode. Initial patches by Dmitry Jemerov and Vladimir Iofik http://hg.python.org/cpython/rev/18cfc2a42772 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9291 ___
[issue21365] asyncio.Task reference misses the most important fact about it, related info spread around intros and example commentary instead
New submission from Paul Sokolovsky: It caused me a big surprise that an asyncio.Task object automatically schedules itself in the main loop for execution upon creation (i.e. in its constructor). Nowhere in the main reference part of section 18.5.2.4. Task (https://docs.python.org/3.5/library/asyncio-task.html#task) does it mention that fact. Vice versa, it explicitly says that a Task is merely "A coroutine object wrapped in a Future", which surely sets grounds for surprise when one finds that a Task is not just a coroutine wrapped in a Future, but exhibits extra behavior unexpected of a plain Future. The docs cursorily mention this property of Task outside the main reference section for it. Specifically:

1) 18.5.2.1. Coroutines, end of the intro section: "In the case of a coroutine object, there are two basic ways to start it running: call yield from coroutine from another coroutine (assuming the other coroutine is already running!), or convert it to a Task." I would argue that this is way too cursory and doesn't put strong enough emphasis on the property of self-scheduling to catch the attention of a novice or casual reader. For example, my reading of the passage above is: "... or convert it to a Task, to schedule it in a loop [explicitly], because a coroutine can't be scheduled in a loop directly, but a Task can be."

2) The very end of subsection 18.5.2.4.1. Example: Parallel execution of tasks, a short line squeezed between a colored example block and a new section heading, a place where some users will miss it outright: "A task is automatically scheduled for execution when it is created."

Based on the case study above, I would like to propose:

1) In section 18.5.2.4. Task, in the class description, make unmissable the fact that instantiating an object schedules it. For example, after "A coroutine object wrapped in a Future. Subclass of Future." append the following: "Instantiating an object of this class automatically schedules it to be run in the event loop specified by the 'loop' parameter (or the default event loop)."

2) Ideally, update the two excerpts above to convey more explicit information and be more visible (for case 2, for example, move the note before the example, not after it). -- assignee: docs@python components: Documentation messages: 217272 nosy: docs@python, pfalcon priority: normal severity: normal status: open title: asyncio.Task reference misses the most important fact about it, related info spread around intros and example commentary instead type: enhancement versions: Python 3.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21365 ___
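The surprising property itself is easy to observe; this sketch uses the modern asyncio.run for brevity (the 3.5-era docs would spell it with loop.run_until_complete):

```python
import asyncio

started = False

async def work():
    global started
    started = True
    return 42

async def main():
    task = asyncio.ensure_future(work())  # scheduled at creation, no .start()
    await asyncio.sleep(0)                # yield to the event loop once
    assert started                        # ...and the task has already run
    return await task

result = asyncio.run(main())
assert result == 42
```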
[issue9291] mimetypes initialization fails on Windows because of non-Latin characters in registry
Roundup Robot added the comment: New changeset 0c8a7299c7e3 by Tim Golden in branch '2.7': Issue #9291 Add ACKS NEWS http://hg.python.org/cpython/rev/0c8a7299c7e3 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9291 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment:

> And http://www.eglibc.org/cgi-bin/viewvc.cgi/trunk/libc/malloc/malloc.c?view=markup to check that calloc(nelem, elsize) is implemented as calloc(nelem * elsize)

__libc_calloc() starts with a check on integer overflow. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21365] asyncio.Task reference misses the most important fact about it, related info spread around intros and example commentary instead
Paul Sokolovsky added the comment: Based on discussion https://groups.google.com/forum/#!topic/python-tulip/zfMQIUcIR-0 . That discussion actually questions the grounds of such Task behavior, pointing to it as a violation of the "Explicit is better than implicit" principle and as behavior inconsistent with similar objects in the Python stdlib (threads, processes, etc.). This ticket however assumes that there are very good reasons for such behavior, and/or that it should just be accepted as an API idiosyncrasy which is too late to fix, and merely tries to make sure that the docs are not the culprit for mismatching user expectations. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21365 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: __libc_calloc() starts with a check on integer overflow. Yes, see my previous message: AFAICT, the two arguments are purely historical (it was used when malloc() didn't guarantee suitable alignment, and has the advantage of performing overflow check when doing the multiplication, but in our code we always check for it anyway). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21366] Document that return in finally overwrites prev value
New submission from Jon Brandvein:

def foo():
    try:
        return 1
    finally:
        return 2

print(foo())  # 2

I've seen this peculiar case discussed on a few blogs lately, but was unable to find confirmation that this behavior is defined. In the try/finally section of Doc/reference/compound_stmts.rst, immediately after the sentence beginning "When a return, break, or continue statement is executed", I propose adding something to the effect of: "A return statement in a finally clause overrides the value of any return statement executed in the try suite." This wording also handles the case of nested try/finally blocks. -- assignee: docs@python components: Documentation messages: 217277 nosy: brandjon, docs@python priority: normal severity: normal status: open title: Document that return in finally overwrites prev value type: behavior ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21366 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
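The nested case mentioned at the end can be illustrated with a hypothetical example (not from the report): the outermost finally clause has the last word.

```python
def nested():
    try:
        try:
            return 1
        finally:
            return 2  # overrides the inner try's return
    finally:
        return 3      # overrides everything above

print(nested())  # prints 3
```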
[issue16104] Compileall script: add option to use multiple cores
Changes by Claudiu.Popa pcmantic...@gmail.com: Added file: http://bugs.python.org/file35056/issue16104_11.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16104 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21363] io.TextIOWrapper always closes wrapped files
eryksun added the comment: It works if you detach the buffer beforehand:

>>> import io, sys
>>> stdin = sys.stdin
>>> stdin.flush()
>>> correct_stdin = io.TextIOWrapper(stdin.buffer, 'utf-8')
>>> correct_stdin.readline()
foobar
'foobar\n'
>>> correct_stdin.detach()
<_io.BufferedReader name='<stdin>'>
>>> del correct_stdin
>>> stdin.readline()
foobar
'foobar\n'

-- nosy: +eryksun ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21363 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21342] multiprocessing RLock and Lock raise incorrect exceptions when releasing an unlocked lock.
Charles-François Natali added the comment: Thanks for the patch. That's IMO a good change, but I would only apply it to default, and not backport it. -- nosy: +neologix ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21342 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21365] asyncio.Task reference misses the most important fact about it, related info spread around intros and example commentary instead
Changes by Berker Peksag berker.pek...@gmail.com: -- nosy: +giampaolo.rodola, gvanrossum, haypo, pitrou, yselivanov ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21365 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21362] concurrent.futures does not validate that max_workers is proper
Claudiu.Popa added the comment: Attached patch with improvements suggested by Charles-François Natali. Thank you for the review. -- Added file: http://bugs.python.org/file35057/issue21362.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21362 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
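On a build with the validation in place, the failure mode looks roughly like this (the exact exception message is an assumption, not quoted from the patch):

```python
from concurrent.futures import ThreadPoolExecutor

# With max_workers validated, a non-positive worker count fails fast at
# construction time instead of misbehaving later.
try:
    ThreadPoolExecutor(max_workers=0)
except ValueError as exc:
    print("rejected:", exc)
```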
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment: list: items are allocated in a second memory block. PyList_New() uses memset(0) to set all items to NULL. tuple: header and items are stored in a single structure (PyTupleObject), in a single memory block. PyTuple_New() fills the items with NULL (so null bytes are written a second time). Something can be optimized here. dict: header, keys and values are stored in 3 different memory blocks. It may be interesting to use calloc() to allocate keys and values. Initialization of keys and values to NULL uses a dummy loop; I expect that memset(0) would be faster. Anyway, I expect that all items of builtin containers (tuple, list, dict, etc.) end up set to non-NULL values, so the lazy initialization to zeros may be useless for them. It means that benchmarking builtin containers should not show any speedup. Something else (numpy?) should be used to see an interesting speedup. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21305] PEP 466: update os.urandom
Charles-François Natali added the comment: Like Antoine, I'm really skeptical about the backport: honestly, this change doesn't bring much in a normal application. To run into the number of open file descriptors limit (the scalability aspect), one would need *many* concurrent threads reading from /dev/urandom. As for the performance aspect, I have a hard time believing that the overhead of the extra open() + close() syscalls is significant in a realistic workload. If reading from /dev/urandom becomes a bottleneck, this means that you're depleting your entropy pool anyway, so you're in for some potential trouble... "There is a reason we don't backport new features!" Couldn't agree more. This whole "let's backport security enhancements" idea sounds scary to me. -- nosy: +neologix ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21305 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment: Because if a code creates many such objects which basically just do calloc(), on operating systems with memory overommitting (such as Linux), the calloc() allocations will pretty much always succeed, but will segfault when the page is first written to in case of low memory. Overcommit leads to segmentation fault when there is no more memory, but I don't see how calloc() is worse then malloc()+memset(0). It will crash in both cases, no? In my experience (embedded device with low memory), programs crash because they don't check the result of malloc() (return NULL on allocation failure), not because of overcommit. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Nathaniel Smith added the comment: @Charles-François: I think your worries about calloc and overcommit are unjustified. First, calloc and malloc+memset actually behave the same way here -- with a large allocation and overcommit enabled, malloc and calloc will both go ahead and return the large allocation, and then the actual out-of-memory (OOM) event won't occur until the memory is accessed. In the malloc+memset case this access will occur immediately after the malloc, during the memset -- but this is still too late for us to detect the malloc failure. Second, OOM does not cause segfaults on any system I know. On Linux it wakes up the OOM killer, which shoots some random (possibly guilty) process in the head. The actual program which triggered the OOM is quite likely to escape unscathed. In practice, the *only* cases where you can get a MemoryError on modern systems are (a) if the user has turned overcommit off, (b) you're on a tiny embedded system that doesn't have overcommit, (c) if you run out of virtual address space. None of these cases are affected by the differences between malloc and calloc. Regarding the calloc API: it's a wart, but it seems like a pretty unavoidable wart at this point, and the API compatibility argument is strong. I think we should just keep the two argument form and live with it... -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue20962] Rather modest chunk size in gzip.GzipFile
Charles-François Natali added the comment: William, thanks for the benchmarks. Unfortunately this type of benchmark depends on the hardware (disk, SSD, memory bandwidth, etc.). So I'd suggest, instead of using a hardcoded value, to simply reuse io.DEFAULT_BUFFER_SIZE. That way, if some day we decide to change it, all user code will benefit from the change. -- nosy: +neologix ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20962 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
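The suggestion can be sketched as follows (copy_stream is a hypothetical helper for illustration, not gzip's actual code):

```python
import io

def copy_stream(src, dst, chunk_size=io.DEFAULT_BUFFER_SIZE):
    """Copy src to dst in io.DEFAULT_BUFFER_SIZE chunks, so that a
    future change to the io default benefits this code automatically."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)

src = io.BytesIO(b"x" * 100000)
dst = io.BytesIO()
copy_stream(src, dst)
print(len(dst.getvalue()))  # prints 100000
```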
[issue21364] Documentation Recommends Broken Pattern
Changes by Florent Xicluna florent.xicl...@gmail.com: -- components: +IO nosy: +benjamin.peterson, flox, hynek, pitrou, stutzbach type: - behavior versions: +Python 3.3, Python 3.4, Python 3.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21364 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21362] concurrent.futures does not validate that max_workers is proper
Changes by Florent Xicluna florent.xicl...@gmail.com: -- nosy: +flox stage: - patch review versions: +Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21362 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21364] Documentation Recommends Broken Pattern
Antoine Pitrou added the comment: Initial introduction is 59cb9c074e09. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21364 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21362] concurrent.futures does not validate that max_workers is proper
Changes by Claudiu.Popa pcmantic...@gmail.com: Added file: http://bugs.python.org/file35058/issue21362_1.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21362 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18314] Have os.unlink remove junction points
Roundup Robot added the comment: New changeset 17df50df62c7 by Tim Golden in branch 'default': Issue #18314 os.unlink will now remove junction points on Windows. Patch by Kim Gräsman. http://hg.python.org/cpython/rev/17df50df62c7 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18314 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18314] Have os.unlink remove junction points
Roundup Robot added the comment: New changeset 4b97092aa4bd by Tim Golden in branch 'default': Issue #18314 Add NEWS item. http://hg.python.org/cpython/rev/4b97092aa4bd -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18314 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21367] multiprocessing.JoinableQueue requires new kwarg
New submission from Lee Clemens: Not mentioned (at least not specifically) in the release notes, multiprocessing.JoinableQueue now requires a 'ctx' keyword argument: def __init__(self, maxsize=0, *, ctx): This causes an application calling JoinableQueue(), which works with 3.3.2 (my single test), to fail with 3.4.0: TypeError: __init__() missing 1 required keyword-only argument: 'ctx' The documentation is also incorrect: https://docs.python.org/3.4/library/multiprocessing.html#multiprocessing.JoinableQueue -- components: Interpreter Core messages: 217289 nosy: s...@leeclemens.net priority: normal severity: normal status: open title: multiprocessing.JoinableQueue requires new kwarg type: compile error versions: Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21367 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
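The distinction, as far as I can tell, is between the documented module-level factory (which still works, because it supplies the context itself) and instantiating the class from multiprocessing.queues directly. A sketch, assuming Python 3.4+:

```python
import multiprocessing
import multiprocessing.queues

# Documented entry point: the factory fills in the context for you.
q1 = multiprocessing.JoinableQueue()

# Direct instantiation from multiprocessing.queues now requires an
# explicit context via the keyword-only 'ctx' argument.
ctx = multiprocessing.get_context()
q2 = multiprocessing.queues.JoinableQueue(ctx=ctx)

q2.put("item")
print(q2.get())  # prints item
q2.task_done()
```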
[issue21367] multiprocessing.JoinableQueue requires new kwarg
Lee Clemens added the comment: Same issue (ctx keyword) occurs with multiprocessing.queues.SimpleQueue -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21367 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: @Charles-François: I think your worries about calloc and overcommit are unjustified. First, calloc and malloc+memset actually behave the same way here -- with a large allocation and overcommit enabled, malloc and calloc will both go ahead and return the large allocation, and then the actual out-of-memory (OOM) event won't occur until the memory is accessed. In the malloc+memset case this access will occur immediately after the malloc, during the memset -- but this is still too late for us to detect the malloc failure. Not really: what you describe only holds for a single object. But if you allocate let's say 1000 such objects at once: - in the malloc + memset case, the committed pages are progressively accessed (i.e. the pages for object N are accessed before the memory is allocated for object N+1), so they will be counted not only as committed, but also as active (for example the RSS will increase gradually): so at some point, even though by default the Linux VM subsystem is really lenient toward overcommitting, you'll likely have malloc/mmap return NULL because of this - in the calloc() case, all the memory is first committed, but not touched: the kernel will likely happily overcommit all of this. Only when you start progressively accessing the pages will the OOM kick in. Second, OOM does not cause segfaults on any system I know. On Linux it wakes up the OOM killer, which shoots some random (possibly guilty) process in the head. The actual program which triggered the OOM is quite likely to escape unscathed. Ah, did I say segfault? Sorry, I of course meant that the process will get nuked by the OOM killer. In practice, the *only* cases where you can get a MemoryError on modern systems are (a) if the user has turned overcommit off, (b) you're on a tiny embedded system that doesn't have overcommit, (c) if you run out of virtual address space. None of these cases are affected by the differences between malloc and calloc. 
That's a common misconception: provided that the memory allocated is accessed progressively (see above point), you'll often get ENOMEM, even with overcommitting:

$ /sbin/sysctl -a | grep overcommit
vm.nr_overcommit_hugepages = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
$ cat /tmp/test.py
l = []
with open('/proc/self/status') as f:
    try:
        for i in range(5000):
            l.append(i)
    except MemoryError:
        for line in f:
            if 'VmPeak' in line:
                print(line)
        raise
$ python /tmp/test.py
VmPeak: 720460 kB

Traceback (most recent call last):
  File "/tmp/test.py", line 7, in <module>
    l.append(i)
MemoryError

I have a 32-bit machine, but the process definitely has more than 720MB of address space ;-) If your statement were true, this would mean that it's almost impossible to get ENOMEM with overcommitting on a 64-bit machine, which is - fortunately - not true. Just try python -c "[i for i in range(large value)]" on a 64-bit machine, I'll bet you'll get a MemoryError (ENOMEM). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21367] multiprocessing.JoinableQueue requires new kwarg
Changes by Claudiu.Popa pcmantic...@gmail.com: -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21367 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21305] PEP 466: update os.urandom
Nick Coghlan added the comment: Yep, it's scary indeed, but such a long lived feature release is a novel situation that may require some adjustments to our risk management. However, we can still decide to defer some of the changes until 2.7.8, even though the notion of backporting them has been approved in principle. For 2.7.7, we should probably focus on the more essential SSL enhancements. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21305 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18314] Have os.unlink remove junction points
Tim Golden added the comment: Backed out the commits after all the Windows buildbots broke. Need to look further. (No problems on a Win7 or Ubuntu build here). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18314 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Antoine Pitrou added the comment: Just try python -c [i for i in range(large value)] on a 64-bit machine, I'll bet you'll get a MemoryError (ENOMEM). Hmm, I get an OOM kill here. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Nathaniel Smith added the comment: On my laptop (x86-64, Linux 3.13, 12 GB RAM):

$ python3 -c "[i for i in range(9)]"
zsh: killed     python3 -c "[i for i in range(9)]"
$ dmesg | tail -n 2
[404714.401901] Out of memory: Kill process 10752 (python3) score 687 or sacrifice child
[404714.401903] Killed process 10752 (python3) total-vm:17061508kB, anon-rss:10559004kB, file-rss:52kB

And your test.py produces the same result. Are you sure you don't have a ulimit set on address space? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21340] Possible concurrency bug in asyncio, AttributeError in tasks.py
Roundup Robot added the comment: New changeset d42d3d3f9c41 by Guido van Rossum in branch '3.4': asyncio: Be careful accessing instance variables in __del__ (closes #21340). http://hg.python.org/cpython/rev/d42d3d3f9c41 New changeset 0cb436c6f082 by Guido van Rossum in branch 'default': Merge 3.4 - default: asyncio: Be careful accessing instance variables in __del__ (closes #21340). http://hg.python.org/cpython/rev/0cb436c6f082 -- nosy: +python-dev resolution: - fixed stage: - resolved status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21340 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: "And your test.py produces the same result. Are you sure you don't have a ulimit set on address space?" Yep, I'm sure:

$ ulimit -v
unlimited

It's probably due to the exponential over-allocation used by the array (to guarantee amortized constant cost). How about: python -c "b = bytes('x' * large)" -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: Dammit, read: python -c "b'x' * (2**48)" -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5001] Remove assertion-based checking in multiprocessing
Jessica McKellar added the comment: Thanks for the patches, vladris! I've reviewed the latest version, and it addresses all of Antoine's review feedback. Ezio left some additional feedback (http://bugs.python.org/review/5001/#ps3407) which still needs to be addressed. -- nosy: +jesstess ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5001 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19977] Use surrogateescape error handler for sys.stdin and sys.stdout on UNIX for the C locale
Nick Coghlan added the comment: Additional environments where the system misreports the encoding to use (courtesy of Armin Ronacher and Graham Dumpleton on Twitter): upstart, Salt, mod_wsgi. Note that for more complex applications (e.g. integrated web UIs, socket servers, sending email), round-tripping to the standard streams won't be enough - what we really need is a better source of truth as to the real system encoding when POSIX-compliant systems provide incorrect configuration data to the interpreter. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19977 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
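For context, the round-tripping referred to here relies on the surrogateescape error handler, which maps undecodable bytes to lone surrogates on decode and back to the original bytes on encode:

```python
# Undecodable bytes survive a decode/encode round trip with
# surrogateescape: b'\xff' becomes the lone surrogate '\udcff'.
data = b"abc\xff"
text = data.decode("ascii", "surrogateescape")
print(text[-1] == "\udcff")                             # prints True
print(text.encode("ascii", "surrogateescape") == data)  # prints True
```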
[issue1820] Enhance Object/structseq.c to match namedtuple and tuple api
Stefan Krah added the comment: 1. _asdict() returns a normal dictionary. I don't know if this is what is required. Good question. I don't think we can import OrderedDict from collections because of the impact on startup time (_collections_abc was created to avoid the issue for MutableMapping). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1820 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Nathaniel Smith added the comment: Right, python3 -c "b'x' * (2 ** 48)" does give an instant MemoryError for me. So I was wrong about it being the VM limit indeed. The documentation on this is terrible! But, if I'm reading this right: http://lxr.free-electrons.com/source/mm/util.c#L434 the actual rules are:

overcommit mode 1: allocating a VM range always succeeds.

overcommit mode 2: (Slightly simplified) You can allocate total VM ranges up to (swap + RAM * overcommit_ratio), and overcommit_ratio is 50% by default. So that's a bit odd, but whatever. This is still entirely a limit on VM size.

overcommit mode 0 (guess, the default): when allocating a VM range, the kernel imagines what would happen if you immediately used all those pages. If that would put you OOM, then we fall back to mode 2 rules. If that would *not* put you OOM, then the allocation unconditionally succeeds.

So yeah, touching pages can affect whether a later malloc returns ENOMEM. I'm not sure any of this actually matters in the Python case though :-). There's still no reason to go touching pages pre-emptively just in case we might write to them later -- all that does is increase the interpreter's memory footprint, which can't help anything. If people are worried about overcommit, then they should turn off overcommit, not try to disable it on a piece-by-piece basis by getting individual programs to touch memory before they need it. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: Alright, it bothered me so I wrote a small C test case (attached), which calls malloc in a loop, and can call memset upon the allocated block right after allocation:

$ gcc -o /tmp/test /tmp/test.c; /tmp/test
malloc() returned NULL after 3050MB
$ gcc -DDO_MEMSET -o /tmp/test /tmp/test.c; /tmp/test
malloc() returned NULL after 2130MB

Without memset, the kernel happily allocates until we reach the 3GB user address space limit. With memset, it bails out way before. I don't know what this'll give on 64-bit, but I assume one should get comparable results. I would guess that the reason why the Python list allocation fails is the exponential allocation scheme: since memory is allocated in large chunks before being used, the kernel happily overallocates. With a more progressive allocation+usage, it should return ENOMEM at some point. Anyway, that's probably off-topic! -- Added file: http://bugs.python.org/file35059/test.c ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE (10*1024*1024)

int main(int argc, char *argv[])
{
    unsigned long size = 0;
    char *p;

    while ((p = malloc(BLOCK_SIZE)) != NULL) {
#ifdef DO_MEMSET
        memset(p, 0, BLOCK_SIZE);
#endif
        size += BLOCK_SIZE;
    }
    printf("malloc() returned NULL after %luMB\n", size / (1024*1024));
    exit(EXIT_SUCCESS);
}

___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: So yeah, touching pages can affect whether a later malloc returns ENOMEM. I'm not sure any of this actually matters in the Python case though :-). There's still no reason to go touching pages pre-emptively just in case we might write to them later -- all that does is increase the interpreter's memory footprint, which can't help anything. If people are worried about overcommit, then they should turn off overcommit, not try and disable it on a piece-by-piece basis by trying to get individual programs to memory before they need it. Absolutely: that's why I'm really in favor of exposing calloc, this could definitely help many workloads. Victor, did you run any non-trivial benchmark, like pybench Co? As I said, I'm not expecting any improvement, I just want to make sure there's not hidden regression somewhere (like the one for GC-tracked objects above). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Antoine Pitrou added the comment:

$ gcc -o /tmp/test /tmp/test.c; /tmp/test
malloc() returned NULL after 3050MB
$ gcc -DDO_MEMSET -o /tmp/test /tmp/test.c; /tmp/test
malloc() returned NULL after 2130MB

Without memset, the kernel happily allocates until we reach the 3GB user address space limit. With memset, it bails out way before. I don't know what this'll give on 64-bit, but I assume one should get comparable result.

Both OOM here (3.11.0-20-generic, 64-bit, Ubuntu). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Stefan Krah added the comment: This is probably off-topic, but I think people who want reliable MemoryErrors can use limits, e.g. via djb's softlimit (daemontools):

$ softlimit -m 1 ./python
Python 3.5.0a0 (default:462470859e57+, Apr 27 2014, 19:34:06)
[GCC 4.7.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> [i for i in range(999)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
MemoryError

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: Both OOM here (3.11.0-20-generic, 64-bit, Ubuntu). Hm... What's /proc/sys/vm/overcommit_memory ? If it's set to 0, then the kernel will always overcommit. If you set it to 2, normally you'd definitely get ENOMEM (which is IMO much nicer than getting nuked by the OOM killer, especially because, like in real life, there's often collateral damage ;-) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: Hm... What's /proc/sys/vm/overcommit_memory ? If it's set to 0, then the kernel will always overcommit. I meant 1 (damn, I need sleep). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Antoine Pitrou added the comment: "Hm... What's /proc/sys/vm/overcommit_memory ? If it's set to 0, then the kernel will always overcommit." Ah, indeed. "If you set it to 2, normally you'd definitely get ENOMEM" You're right, but with weird results:

$ gcc -o /tmp/test test.c; /tmp/test
malloc() returned NULL after 600MB
$ gcc -DDO_MEMSET -o /tmp/test test.c; /tmp/test
malloc() returned NULL after 600MB

(I'm supposed to have gigabytes free?!) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21233] Add *Calloc functions to CPython memory allocation API
Charles-François Natali added the comment: Hm... What's /proc/sys/vm/overcommit_memory ? If it's set to 0, then the kernel will always overcommit. Ah, indeed. See above, I mistyped: 0 is the default (which is already quite optimistic), 1 is always. If you set it to 2, normally you'd definitely get ENOMEM You're right, but with weird results:

$ gcc -o /tmp/test test.c; /tmp/test
malloc() returned NULL after 600MB
$ gcc -DDO_MEMSET -o /tmp/test test.c; /tmp/test
malloc() returned NULL after 600MB

(I'm supposed to have gigabytes free?!) The formula is RAM * vm.overcommit_ratio / 100 + swap. So if you don't have swap, or a low overcommit_ratio, it could explain why it returns so early. Or maybe you have some processes with a lot of mapped-yet-unused memory (chromium is one of those, for example). Anyway, it's really a mess! -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
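The formula above is easy to sanity-check against the CommitLimit value reported in /proc/meminfo; a toy version (the function name and the sample RAM/swap figures are made up for the example — real kernels also subtract hugetlb pages before applying the ratio):

```python
def commit_limit_kb(mem_total_kb, swap_total_kb, overcommit_ratio=50):
    # vm.overcommit_memory == 2: CommitLimit = RAM * overcommit_ratio/100 + swap
    return mem_total_kb * overcommit_ratio // 100 + swap_total_kb

# 8 GiB of RAM, 2 GiB of swap, the default vm.overcommit_ratio of 50:
print(commit_limit_kb(8 * 1024**2, 2 * 1024**2))  # 6291456 kB, i.e. 6 GiB
```

With no swap and the default ratio of 50, a machine can only commit half its RAM, which is exactly the "early NULL" behaviour observed above.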
[issue18314] Have os.unlink remove junction points
Kim Gräsman added the comment: Thanks for pushing this forward! Do you have links to the failing bots I could take a look at? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18314 ___
[issue18314] Have os.unlink remove junction points
Tim Golden added the comment: Here are a couple: http://buildbot.python.org/all/builders/AMD64%20Windows7%20SP1%203.x/builds/4423 http://buildbot.python.org/all/builders/x86%20Windows7%203.x/builds/8288 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18314 ___
[issue21368] Check for systemd locale on startup if current locale is set to POSIX
New submission from Nick Coghlan: Issue 19977 added surrogateescape to the fallback settings for the standard streams if Python 3 appears to be running under the POSIX locale (which Python 3 currently reads as setting a default encoding of ASCII, which is almost certainly wrong on any modern Linux system). If a modern Linux system is using systemd as the process manager, then there will likely be a /etc/locale.conf file providing settings like LANG - due to problematic requirements in the POSIX specification, this file (when available) is likely to be a better source of truth regarding the system encoding than the environment where the interpreter process is started, at least when the latter is claiming ASCII as the default encoding. See http://www.freedesktop.org/software/systemd/man/locale.conf.html for more details. -- components: Interpreter Core messages: 217313 nosy: ncoghlan priority: normal severity: normal status: open title: Check for systemd locale on startup if current locale is set to POSIX versions: Python 3.4, Python 3.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21368 ___
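A minimal sketch of what reading that file could look like. parse_locale_conf is a hypothetical helper, not proposed API; a real implementation would only consult the file when the environment reports the POSIX/C locale:

```python
def parse_locale_conf(text):
    """Parse systemd's locale.conf KEY=VALUE format
    (blank lines and # comments skipped, optional double quotes stripped)."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, sep, value = line.partition('=')
        if sep:
            settings[key.strip()] = value.strip().strip('"')
    return settings

print(parse_locale_conf('LANG=de_DE.UTF-8\n# comment\nLC_MESSAGES="en_US.UTF-8"\n'))
# {'LANG': 'de_DE.UTF-8', 'LC_MESSAGES': 'en_US.UTF-8'}
```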
[issue21362] concurrent.futures does not validate that max_workers is proper
Changes by Charles-François Natali cf.nat...@gmail.com: -- stage: patch review -> commit review ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21362 ___
[issue19977] Use surrogateescape error handler for sys.stdin and sys.stdout on UNIX for the C locale
Nick Coghlan added the comment: Issue 21368 now suggests looking for /etc/locale.conf before falling back to ASCII+surrogateescape. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19977 ___
[issue19977] Use surrogateescape error handler for sys.stdin and sys.stdout on UNIX for the C locale
Antoine Pitrou added the comment: We should not overcomplicate this. I suggest that we simply use utf-8 under the C locale. -- versions: +Python 3.5 -Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19977 ___
[issue13330] Attempt full test coverage of LocaleTextCalendar.formatweekday
Jessica McKellar added the comment: Thanks for working to increase our test coverage, Sean.Fleming! Looking at the current coverage, there is one line in LocaleTextCalendar.formatweekday without coverage: http://hg.python.org/cpython/file/e159cb0d955b/Lib/calendar.py#l519. You add some additional tests, but they are mostly testing some very literal aspects of the implementation rather than the purpose of the function. For example:

+self.assertRaises(IndexError, calendar.LocaleTextCalendar(locale='').formatweekday, 7, 1)

It's true that this will raise an IndexError, but formatweekday isn't supposed to be called with these values. I've added some tests that add coverage for the line that didn't have coverage, while focusing on the purpose of the function, namely to provide an appropriate day name when constrained to various widths.

* The patch passes the full test suite
* The patch passes `make patchcheck`
* The patch results in full coverage for LocaleTextCalendar.formatweekday

Coverage results, before and after:

$ ./python.exe ../coveragepy/ run --pylib --source=calendar Lib/test/regrtest.py test_calendar
[1/1] test_calendar
1 test OK.
nitefly:cpython jesstess$ ./python.exe ../coveragepy/ report --show-missing
Name           Stmts   Miss  Cover   Missing
Lib/calendar     375     54    86%   511, 519, 541, 608-699, 703
$ patch -p1 < issue13330.patch
patching file Lib/test/test_calendar.py
patching file Misc/ACKS
$ ./python.exe ../coveragepy/ run --pylib --source=calendar Lib/test/regrtest.py test_calendar
[1/1] test_calendar
1 test OK.
nitefly:cpython jesstess$ ./python.exe ../coveragepy/ report --show-missing
Name           Stmts   Miss  Cover   Missing
Lib/calendar     375     53    86%   511, 541, 608-699, 703

(519 was the one line without coverage inside LocaleTextCalendar.formatweekday) -- nosy: +jesstess versions: +Python 3.5 -Python 3.3 Added file: http://bugs.python.org/file35060/issue13330.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13330 ___
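For reference, the behaviour under test picks an abbreviated or full day name depending on the requested width. Shown here with TextCalendar (LocaleTextCalendar behaves the same under a chosen locale; output assumes the C/English locale):

```python
import calendar

cal = calendar.TextCalendar()
# Narrow widths use the abbreviated day name, truncated to fit;
# widths >= 9 switch to the full day name, centered in the field.
print(cal.formatweekday(0, 3))   # 'Mon' (in the C locale)
print(cal.formatweekday(0, 10))  # 'Monday' centered in a 10-char field
```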
[issue19977] Use surrogateescape error handler for sys.stdin and sys.stdout on UNIX for the C locale
Nick Coghlan added the comment: If you can convince Stephen Turnbull that's a good idea, sure. It's probably more likely to be the right thing than ASCII or ASCII + surrogateescape, but in the absence of hard data, he's in a better position than we are to judge the likely impact of that, at least in Japan. I'm also going to hunt around on freedesktop.org to see if there's anything more general there on the topic of encodings. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19977 ___
[issue18314] Have os.unlink remove junction points
Kim Gräsman added the comment: Thanks! At first I suspected 32 vs 64 bit, but the failing bots cover both... One thing that stands out to me as risky is the memcmp() against \\??\\, which could overrun a short src_path buffer. But I don't think that would fail here. I must have made some mistake with the REPARSE_DATA_BUFFER, but I can't see anything off hand. What are our debugging options? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue18314 ___
[issue20265] Bring Windows docs up to date
Kathleen Weaver added the comment: Latest update -- Added file: http://bugs.python.org/file35061/mywork.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20265 ___
[issue21369] Extended modes for tarfile.TarFile()
New submission from Sworddragon: tarfile.open() optionally supports a compression method in its mode argument, in the form 'filemode[:compression]', but tarfile.TarFile() only supports 'a', 'r' and 'w'. Is there a special reason that tarfile.TarFile() doesn't directly support an optional compression method? Otherwise it would be nice if it could be used directly with tarfile.TarFile() too. -- components: Library (Lib) messages: 217320 nosy: Sworddragon priority: normal severity: normal status: open title: Extended modes for tarfile.TarFile() type: enhancement versions: Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21369 ___
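The difference described above is easy to demonstrate in memory: tarfile.open() accepts the combined mode string, while instantiating TarFile directly rejects it with a ValueError:

```python
import io
import tarfile

# tarfile.open() understands the combined 'filemode[:compression]' mode:
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w:gz') as tar:
    payload = b'hello'
    info = tarfile.TarInfo(name='hello.txt')
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
print(buf.getvalue()[:2] == b'\x1f\x8b')  # gzip magic bytes -> True

# The TarFile constructor itself rejects the compression suffix:
try:
    tarfile.TarFile(fileobj=io.BytesIO(), mode='w:gz')
except ValueError as exc:
    print('TarFile rejected it:', exc)
```

The compression handling lives in the open() classmethod (which dispatches to gzopen/bz2open and wraps the file object), which is presumably why the plain constructor never grew the extended modes.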
[issue21370] segfault from simple traceback.format_exc call
New submission from John Rusnak: Launch python3.3 and then: import traceback tracebacke.format_exc() Sometimes a long trace about a missing attribute is produced; on a subsequent, or sometimes the first, call the python executable segfaults. I see this behavior in an app as well when calling format_exc() under a real exception condition. -- messages: 217321 nosy: John.Rusnak priority: normal severity: normal status: open title: segfault from simple traceback.format_exc call versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21370 ___
[issue20951] SSLSocket.send() returns 0 for non-blocking socket
Nikolaus Rath added the comment: As discussed on python-dev, here is a patch that changes the behavior of send() and sendall() to raise SSLWant* exceptions instead of returning zero. -- Added file: http://bugs.python.org/file35062/issue20951_r2.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20951 ___
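With that change, non-blocking callers wait for readiness when the exceptions are raised instead of special-casing a zero return. A rough sketch; send_all_nonblocking is an illustrative helper, not part of the patch:

```python
import select
import ssl

def send_all_nonblocking(ssl_sock, data):
    """Illustrative helper: retry send() on a non-blocking SSL socket,
    waiting for socket readiness when the SSL layer raises SSLWant*
    instead of returning 0."""
    view = memoryview(data)
    while view:
        try:
            sent = ssl_sock.send(view)
            view = view[sent:]
        except ssl.SSLWantWriteError:
            select.select([], [ssl_sock], [])   # wait until writable
        except ssl.SSLWantReadError:
            select.select([ssl_sock], [], [])   # renegotiation wants a read

# Both exceptions already exist as SSLError subclasses (since Python 3.3):
print(issubclass(ssl.SSLWantWriteError, ssl.SSLError))  # True
```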
[issue21233] Add *Calloc functions to CPython memory allocation API
Changes by STINNER Victor victor.stin...@gmail.com: Added file: http://bugs.python.org/file35064/use_calloc.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment: I split my patch into two parts:

- calloc-4.patch: add new Calloc functions, including _PyObject_GC_Calloc()
- use_calloc.patch: patch types (bytes, dict, list, set, tuple, etc.) and various modules to use calloc

I reverted my changes on _PyObject_GC_Malloc() and added _PyObject_GC_Calloc(); the performance regressions are gone. Creating a large tuple is a little bit (8%) faster. But the real speedup is in building a large bytes string of null bytes:

$ ./python.orig -m timeit 'bytes(50*1024*1024)'
100 loops, best of 3: 5.7 msec per loop
$ ./python.calloc -m timeit 'bytes(50*1024*1024)'
10 loops, best of 3: 4.12 usec per loop

On Linux, no memory is allocated, even if you read the bytes content. RSS is almost unchanged. Ok, now the real use case where it becomes faster: I implemented the same optimization for bytearray.

$ ./python.orig -m timeit 'bytearray(50*1024*1024)'
100 loops, best of 3: 6.33 msec per loop
$ ./python.calloc -m timeit 'bytearray(50*1024*1024)'
10 loops, best of 3: 4.09 usec per loop

If you overallocate a bytearray and only write a few bytes, the bytes at the end of the bytearray will not be allocated (at least on Linux).

Result of bench_alloc.py comparing original Python to patched Python (calloc-4.patch + use_calloc.patch):

Common platform:
SCM: hg revision=4b97092aa4bd+ tag=tip branch=default date=2014-04-27 18:02 +0100
Timer info: namespace(adjustable=False, implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, resolution=1e-09)
Python unicode implementation: PEP 393
CFLAGS: -Wno-unused-result -Werror=declaration-after-statement -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
Bits: int=32, long=64, long long=64, size_t=64, void*=64
Timer: time.perf_counter
CPU model: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
Platform: Linux-3.13.9-200.fc20.x86_64-x86_64-with-fedora-20-Heisenbug

Platform of campaign orig:
Timer precision: 42 ns
Date: 2014-04-28 00:27:19
Python version: 3.5.0a0 (default:4b97092aa4bd, Apr 28 2014, 00:24:03) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]

Platform of campaign calloc:
Timer precision: 54 ns
Date: 2014-04-28 00:28:35
Python version: 3.5.0a0 (default:4b97092aa4bd+, Apr 28 2014, 00:25:56) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]

------------------------------------+-------------+--------------
Tests                               | orig        | calloc
------------------------------------+-------------+--------------
object()                            | 61 ns (*)   | 71 ns (+16%)
b'A' * 10                           | 54 ns (*)   | 52 ns
b'A' * 10**3                        | 124 ns (*)  | 110 ns (-12%)
b'A' * 10**6                        | 38.4 us (*) | 38.5 us
'A' * 10                            | 59 ns (*)   | 62 ns
'A' * 10**3                         | 132 ns (*)  | 107 ns (-19%)
'A' * 10**6                         | 38.5 us (*) | 38.5 us
'A' * 10**8                         | 10.3 ms (*) | 10.6 ms
decode 10 null bytes from ASCII     | 264 ns (*)  | 263 ns
decode 10**3 null bytes from ASCII  | 403 ns (*)  | 379 ns (-6%)
decode 10**6 null bytes from ASCII  | 80.5 us (*) | 80.5 us
decode 10**8 null bytes from ASCII  | 17.7 ms (*) | 17.3 ms
(None,) * 10**0                     | 29 ns (*)   | 28 ns
(None,) * 10**1                     | 75 ns (*)   | 76 ns
(None,) * 10**2                     | 461 ns (*)  | 460 ns
(None,) * 10**3                     | 3.6 us (*)  | 3.57 us
(None,) * 10**4                     | 35.7 us (*) | 35.7 us
(None,) * 10**5                     | 364 us (*)  | 365 us
(None,) * 10**6                     | 4.12 ms (*) | 4.11 ms
(None,) * 10**7                     | 43.5 ms (*) | 40.3 ms (-7%)
(None,) * 10**8                     | 433 ms (*)  | 400 ms (-8%)
([None] * 10)[1:-1]                 | 121 ns (*)  | 134 ns (+11%)
([None] * 10**3)[1:-1]              | 3.62 us (*) | 3.61 us
([None] * 10**6)[1:-1]              | 4.24 ms (*) | 4.22 ms
([None] * 10**8)[1:-1]              | 440 ms (*)  | 402 ms (-9%)
------------------------------------+-------------+--------------
Total                               | 954 ms (*)  | 880 ms (-8%)
------------------------------------+-------------+--------------

-- Added file: http://bugs.python.org/file35063/calloc-4.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
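Whichever allocator is used underneath, the observable contract is the same: bytes(n) and bytearray(n) come back zero-initialized. The speedup comes purely from how the zeros are produced; a small check (the sizes are arbitrary):

```python
n = 1024 * 1024
zeroed = bytes(n)
# Same observable result either way; with calloc the kernel can hand
# back copy-on-write zero pages instead of memset-ing every byte.
print(zeroed == b'\x00' * n)      # True
print(bytearray(n) == bytes(n))   # True
```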
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment: bench_alloc2.py: updated benchmark script. I added bytes(n) and bytearray(n) tests and removed the test decoding from ASCII.

Common platform:
Timer: time.perf_counter
Timer info: namespace(adjustable=False, implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, resolution=1e-09)
Platform: Linux-3.13.9-200.fc20.x86_64-x86_64-with-fedora-20-Heisenbug
SCM: hg revision=4b97092aa4bd+ tag=tip branch=default date=2014-04-27 18:02 +0100
Python unicode implementation: PEP 393
Bits: int=32, long=64, long long=64, size_t=64, void*=64
CFLAGS: -Wno-unused-result -Werror=declaration-after-statement -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
CPU model: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz

Platform of campaign orig:
Date: 2014-04-28 01:11:49
Timer precision: 39 ns
Python version: 3.5.0a0 (default:4b97092aa4bd, Apr 28 2014, 01:02:01) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]

Platform of campaign calloc:
Date: 2014-04-28 01:12:29
Timer precision: 44 ns
Python version: 3.5.0a0 (default:4b97092aa4bd+, Apr 28 2014, 01:06:54) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]

------------------------+-------------+------------------
Tests                   | orig        | calloc
------------------------+-------------+------------------
object()                | 62 ns (*)   | 72 ns (+16%)
b'A' * 10               | 53 ns (*)   | 52 ns
b'A' * 10**3            | 96 ns (*)   | 110 ns (+15%)
b'A' * 10**6            | 38.5 us (*) | 38.6 us
'A' * 10                | 59 ns (*)   | 61 ns
'A' * 10**3             | 105 ns (*)  | 108 ns
'A' * 10**6             | 38.6 us (*) | 38.6 us
'A' * 10**8             | 10.3 ms (*) | 10.4 ms
(None,) * 10**0         | 29 ns (*)   | 29 ns
(None,) * 10**1         | 75 ns (*)   | 76 ns
(None,) * 10**2         | 432 ns (*)  | 461 ns (+7%)
(None,) * 10**3         | 3.58 us (*) | 3.6 us
(None,) * 10**4         | 35.8 us (*) | 35.7 us
(None,) * 10**5         | 365 us (*)  | 365 us
(None,) * 10**6         | 4.1 ms (*)  | 4.13 ms
(None,) * 10**7         | 43.6 ms (*) | 40.3 ms (-8%)
(None,) * 10**8         | 433 ms (*)  | 401 ms (-7%)
([None] * 10)[1:-1]     | 122 ns (*)  | 134 ns (+10%)
([None] * 10**3)[1:-1]  | 3.6 us (*)  | 3.62 us
([None] * 10**6)[1:-1]  | 4.22 ms (*) | 4.2 ms
([None] * 10**8)[1:-1]  | 441 ms (*)  | 402 ms (-9%)
bytes(10)               | 137 ns (*)  | 136 ns
bytes(10**3)            | 181 ns (*)  | 191 ns (+5%)
bytes(10**6)            | 38.7 us (*) | 39.2 us
bytes(10**8)            | 10.3 ms (*) | 4.36 us (-100%)
bytearray(10)           | 138 ns (*)  | 153 ns (+11%)
bytearray(10**3)        | 184 ns (*)  | 211 ns (+14%)
bytearray(10**6)        | 38.7 us (*) | 39.3 us
bytearray(10**8)        | 10.3 ms (*) | 4.32 us (-100%)
------------------------+-------------+------------------
Total                   | 957 ms (*)  | 862 ms (-10%)
------------------------+-------------+------------------

-- Added file: http://bugs.python.org/file35065/bench_alloc2.py ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
Antoine Pitrou added the comment: Common platform: Timer: time.perf_counter Timer info: namespace(adjustable=False, implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, resolution=1e-09) Platform: Linux-3.13.9-200.fc20.x86_64-x86_64-with-fedora-20-Heisenbug ^ Are you sure this is a good platform for performance reports? :) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21233] Add *Calloc functions to CPython memory allocation API
STINNER Victor added the comment: Are you sure this is a good platform for performance reports? :) Don't hesitate to rerun my benchmark on other platforms. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21233 ___
[issue21340] Possible concurrency bug in asyncio, AttributeError in tasks.py
STINNER Victor added the comment: Why not use try/except AttributeError? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21340 ___
[issue21368] Check for systemd locale on startup if current locale is set to POSIX
STINNER Victor added the comment: I don't think that Python should read such a configuration file. If you consider that something is wrong here, please report the issue to the C library. -- nosy: +haypo ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21368 ___