[issue26039] More flexibility in zipfile interface

2016-02-04 Thread Thomas Kluyver

Thomas Kluyver added the comment:

Is there anything more I should be doing with either of these patches? I think 
I've incorporated all review comments I've seen. Thanks!

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Florin Papa

Florin Papa added the comment:

I ran perf.py with calibration enabled, and there is no difference in stability
compared to the unpatched version.

With patch:

python perf.py -b json_dump_v2 -v --csv=out1.csv --affinity=2 ../cpython/python 
../cpython/python
INFO:root:Automatically selected timer: perf_counter
[1/1] json_dump_v2...
Calibrating
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 1 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 2 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 4 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 8 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 16 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 1 -l 32 --timer perf_counter`
Calibrating => num_runs=10, num_loops=32 (0.50 sec < 0.87 sec)
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 10 -l 32 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 10 -l 32 --timer perf_counter`

Report on Linux centos 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 
2015 x86_64 x86_64
Total CPU cores: 18

### json_dump_v2 ###
Min: 0.877497 -> 0.886482: 1.01x slower   <--
Avg: 0.878150 -> 0.888351: 1.01x slower
Not significant
Stddev: 0.00054 -> 0.00106: 1.9481x larger


Without patch:

python perf.py -b json_dump_v2 -v --csv=out1.csv --affinity=2 ../cpython/python 
../cpython/python
INFO:root:Automatically selected timer: perf_counter
[1/1] json_dump_v2...
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 50 --timer perf_counter`
INFO:root:Running `taskset --cpu-list 2 ../cpython/python 
performance/bm_json_v2.py -n 50 --timer perf_counter`

Report on Linux centos 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 
2015 x86_64 x86_64
Total CPU cores: 18

### json_dump_v2 ###
Min: 2.755514 -> 2.764131: 1.00x slower <-- (almost) same as above
Avg: 2.766546 -> 2.775587: 1.00x slower
Not significant
Stddev: 0.00538 -> 0.00382: 1.4069x smaller

--




[issue22107] tempfile module misinterprets access denied error on Windows

2016-02-04 Thread Thomas Kluyver

Thomas Kluyver added the comment:

This issue was closed, but I believe the originally reported bug was not fixed: 
trying to create a temporary file in a directory where you don't have write 
permission hangs for a long time before failing with a misleading 
FileExistsError, rather than failing immediately with PermissionError.

I've just run into this on Python 3.5.1 while trying to use tempfile to check 
if a directory is writable - which I'm doing precisely because os.access() 
isn't useful on Windows!

I find it hard to believe that there is no way to distinguish a failure because 
the name is already used for a subdirectory from a failure because we don't 
have permission to create a file.
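A minimal sketch of the kind of writability probe described above (the function
name and exception handling here are illustrative, not from the report):

```python
import tempfile

def dir_is_writable(path):
    """Probe writability by actually creating a temporary file there,
    since os.access() is not reliable on Windows."""
    try:
        with tempfile.TemporaryFile(dir=path):
            return True
    except PermissionError:
        return False
    except FileExistsError:
        # On the affected Windows setups the missing permission surfaces
        # as a (misleading) FileExistsError after a long retry loop.
        return False
```

On POSIX systems a permission failure raises PermissionError immediately; the
report is precisely that Windows maps it to FileExistsError instead.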

--
nosy: +takluyver




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

fastint2.patch adds small regression for string multiplication:

$ ./python -m timeit -s "x = 'x'" -- "x*2; x*2; x*2; x*2; x*2; x*2; x*2; x*2; 
x*2; x*2; "
Unpatched:  1.46 usec per loop
Patched:1.54 usec per loop

Here is an alternative patch. It just uses existing specialized functions for 
integers: long_add, long_sub and long_mul. It doesn't add regression for above 
example with string multiplication, and it looks faster than fastint2.patch for 
integer multiplication.

$ ./python -m timeit -s "x = 12345" -- "x*2; x*2; x*2; x*2; x*2; x*2; x*2; x*2; 
x*2; x*2; "
Unpatched:  0.887 usec per loop
fastint2.patch: 0.841 usec per loop
fastint_alt.patch:  0.804 usec per loop

--
Added file: http://bugs.python.org/file41801/fastint_alt.patch




[issue26252] Add an example to importlib docs on setting up an importer

2016-02-04 Thread Maciej Szulik

Changes by Maciej Szulik :


--
nosy: +maciej.szulik




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Victor, this is a very interesting write-up, thank you.

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

> STINNER Victor added the comment:
> I modified Stefan's telco.py to remove all I/O from the hot code: the 
> benchmark is now really CPU-bound. I also modified telco.py to run the 
> benchmark 5 times. One run takes around 2.6 seconds.
> 

Nice. telco.py is an ad-hoc script from the original decimal.py sandbox,
I missed that it called "infil.read(8)". :)

> And *NOW* using my isolated CPU physical cores #2 and #3 (Linux CPUs 2, 3, 6 
> and 7), still on the heavily loaded system:
> ---
> $ taskset -c 2,3,6,7 python3 telco_haypo.py full 
> 
> Elapsed time: 2.57948748662
> Elapsed time: 2.582796103536
> Elapsed time: 2.5811954810001225
> Elapsed time: 2.578203360887
> Elapsed time: 2.57237063649

Great.  I'll try that out in the weekend.

--




[issue26269] zipfile should call lstat instead of stat if available

2016-02-04 Thread Anish Shah

Changes by Anish Shah :


--
nosy: +anish.shah




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

Stefan: "In my experience it is very hard to get stable benchmark results with 
Python.  Even long running benchmarks on an empty machine vary: (...)"

tl;dr We *can* tune the Linux kernel to avoid most of the system noise when 
running benchmarks.

I modified Stefan's telco.py to remove all I/O from the hot code: the benchmark 
is now really CPU-bound. I also modified telco.py to run the benchmark 5 times. 
One run takes around 2.6 seconds.

I also added the following lines to check the CPU affinity and the number of 
context switches:

os.system("grep -E -i 'cpu|ctx' /proc/%s/status" % os.getpid())

Well, see attached telco_haypo.py for the full script.

I used my system_load.py script to get a system load >= 5.0. Without taskset, 
the benchmark result changes completely: at least 5 seconds. Well, it's not 
really surprising; it's well known that benchmark results depend on the system load.


*BUT* I have a great kernel called Linux which has cool features called "CPU 
isolation" and "no HZ" (tickless kernel). On my Fedora 23, the kernel is 
compiled with CONFIG_NO_HZ=y and CONFIG_NO_HZ_FULL=y.

haypo@smithers$ lscpu --extended
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ
0   0    0      0    0:0:0:0       yes    5900,  1600,
1   0    0      1    1:1:1:0       yes    5900,  1600,
2   0    0      2    2:2:2:0       yes    5900,  1600,
3   0    0      3    3:3:3:0       yes    5900,  1600,
4   0    0      0    0:0:0:0       yes    5900,  1600,
5   0    0      1    1:1:1:0       yes    5900,  1600,
6   0    0      2    2:2:2:0       yes    5900,  1600,
7   0    0      3    3:3:3:0       yes    5900,  1600,

My CPU is on a single socket, has 4 physical cores, but Linux gets 8 cores 
because of hyper threading.


I modified the Linux command line during the boot in GRUB to add: 
isolcpus=2,3,6,7 nohz_full=2,3,6,7. Then I forced the CPU frequency to 
performance to avoid hiccups:

# for id in 2 3 6 7; do echo performance > cpu$id/cpufreq/scaling_governor; 
done 

Check the config with:

$ cat /sys/devices/system/cpu/isolated
2-3,6-7
$ cat /sys/devices/system/cpu/nohz_full
2-3,6-7
$ cat /sys/devices/system/cpu/cpu[2367]/cpufreq/scaling_governor
performance
performance
performance
performance


Ok now with this kernel config but still without taskset on an idle system:
---
Elapsed time: 2.66008842437
Elapsed time: 2.592753862844
Elapsed time: 2.613568236813
Elapsed time: 2.581926057324
Elapsed time: 2.599129409322

Cpus_allowed:   33
Cpus_allowed_list:  0-1,4-5
voluntary_ctxt_switches:1
nonvoluntary_ctxt_switches: 21
---

With system load >= 5.0:
---
Elapsed time: 5.348448917415
Elapsed time: 5.33679747233
Elapsed time: 5.18741368792
Elapsed time: 5.2412202058
Elapsed time: 5.1020124644

Cpus_allowed_list:  0-1,4-5
voluntary_ctxt_switches:1
nonvoluntary_ctxt_switches: 1597
---

And *NOW* using my isolated CPU physical cores #2 and #3 (Linux CPUs 2, 3, 6 
and 7), still on the heavily loaded system:
---
$ taskset -c 2,3,6,7 python3 telco_haypo.py full 

Elapsed time: 2.57948748662
Elapsed time: 2.582796103536
Elapsed time: 2.5811954810001225
Elapsed time: 2.578203360887
Elapsed time: 2.57237063649

Cpus_allowed:   cc
Cpus_allowed_list:  2-3,6-7
voluntary_ctxt_switches:2
nonvoluntary_ctxt_switches: 16
---

Numbers look *more* stable than the numbers of the first test without taskset 
on an idle system! You can see that the number of context switches is very low 
(total: 18).

Example of a second run:
---
haypo@smithers$ taskset -c 2,3,6,7 python3 telco_haypo.py full 

Elapsed time: 2.538398498999868
Elapsed time: 2.54471196891
Elapsed time: 2.532367733904
Elapsed time: 2.53625264783
Elapsed time: 2.52574818205

Cpus_allowed:   cc
Cpus_allowed_list:  2-3,6-7
voluntary_ctxt_switches:2
nonvoluntary_ctxt_switches: 15
---

Third run:
---
haypo@smithers$ taskset -c 2,3,6,7 python3 telco_haypo.py full 

Elapsed time: 2.581917293605
Elapsed time: 2.578302425365
Elapsed time: 2.57849358701
Elapsed time: 2.577419851588
Elapsed time: 2.577214899445

Cpus_allowed:   cc
Cpus_allowed_list:  2-3,6-7
voluntary_ctxt_switches:2
nonvoluntary_ctxt_switches: 15
---

Well, it's not perfect, but it looks much more stable than timings without the 
specific kernel config or CPU pinning.

Statistics on the 15 timings of the 3 runs with tuning on a heavily loaded 
system:

>>> times
[2.57948748662, 2.582796103536, 2.5811954810001225, 2.578203360887, 
2.57237063649, 2.538398498999868, 2.54471196891, 2.532367733904, 
2.53625264783, 2.52574818205, 

[issue26110] Speedup method calls 1.2x

2016-02-04 Thread Maciej Szulik

Changes by Maciej Szulik :


--
nosy: +maciej.szulik




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

I prefer fastint_alt.patch design, it's simpler. I added a comment on the 
review.

My numbers, best of 5 timeit runs:

$ ./python -m timeit -s "x = 12345" -- "x*2; x*2; x*2; x*2; x*2; x*2; x*2; x*2; 
x*2; x*2; "

* original: 299 ns
* fastint2.patch: 282 ns (-17 ns, -6%)
* fastint_alt.patch: 267 ns (-32 ns, -11%)

--




[issue26286] dis module: coroutine opcode documentation clarity

2016-02-04 Thread Jim Jewett

New submission from Jim Jewett:

https://docs.python.org/3/library/dis.html includes a section describing the 
various opcodes.

Current documentation: """
Coroutine opcodes

GET_AWAITABLE
Implements TOS = get_awaitable(TOS), where get_awaitable(o) returns o if o is a 
coroutine object or a generator object with the CO_ITERABLE_COROUTINE flag, or 
resolves o.__await__.

GET_AITER
Implements TOS = get_awaitable(TOS.__aiter__()). See GET_AWAITABLE for details 
about get_awaitable

GET_ANEXT
Implements PUSH(get_awaitable(TOS.__anext__())). See GET_AWAITABLE for details 
about get_awaitable

BEFORE_ASYNC_WITH
Resolves __aenter__ and __aexit__ from the object on top of the stack. Pushes 
__aexit__ and result of __aenter__() to the stack.

SETUP_ASYNC_WITH
Creates a new frame object.
"""

(1)  There is a PUSH macro in ceval.c, but no PUSH bytecode.  I spent a few 
minutes trying to figure out what a PUSH command was, and how the GET_ANEXT 
differed from 
TOS = get_awaitable(TOS.__anext__())
which would match the bytecodes right above it.

After looking at ceval.c, I think GET_ANEXT is the only such bytecode to leave 
the original TOS in place, but I'm not certain about that.  Please be explicit. 
 (Unless they are the same, in which case, please use the same wording.)
 
(2)  The coroutine bytecode instructions should have a "New in 3.5" marker, as 
GET_YIELD_FROM_ITER has.  It might make sense to just place the marker under the 
Coroutine opcodes section header and say it applies to all of them, instead of 
marking each individual opcode.  

(3)  The GET_AITER and GET_ANEXT descriptions do not show the final period.  
Opcodes such as INPLACE_LSHIFT also end with a code quote, but still include a 
(not-marked-as-code) final period.

(4)  Why does SETUP_ASYNC_WITH talk about frames?  Is there actually a python 
frame involved, or is this another bytecode "block", similar to that used for 
except and finally?
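For reference, the opcodes in question can be inspected directly; this is an
illustrative snippet (the exact disassembly differs between versions, and some
of these opcodes were later renamed):

```python
import dis

# Compile a tiny 'async with' function and disassemble it to see which
# coroutine opcodes (e.g. BEFORE_ASYNC_WITH / SETUP_ASYNC_WITH) the
# compiler actually emits on the running interpreter.
source = """
async def f(cm):
    async with cm:
        pass
"""
module = compile(source, "<example>", "exec")
for const in module.co_consts:
    if hasattr(const, "co_code"):  # the code object for f()
        dis.dis(const)
```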

--
assignee: yselivanov
components: Documentation
messages: 259595
nosy: Jim.Jewett, yselivanov
priority: normal
severity: normal
stage: needs patch
status: open
title: dis module: coroutine opcode documentation clarity
versions: Python 3.5




[issue23551] IDLE to provide menu link to PIP gui.

2016-02-04 Thread Upendra Kumar

Upendra Kumar added the comment:

I am trying to make a Tk-based GUI for the pip package manager. In reference to 
msg256736, I am confused about the discovery method mentioned. Is there any way 
already implemented to detect the Python versions installed system-wide?

Moreover, how should non-standard Python installations be handled? I think in 
that case it would be very difficult to detect the Python versions installed on 
the user's system. In addition, different tools used for installing Python 
generally end up installing it in different folders or paths. 

Therefore, what functionality should I try to implement first? Can anyone 
please advise?

--
nosy: +upendra-k14




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Le 04/02/2016 21:42, Alecsandru Patrascu a écrit :
> 
> To compress all of the above, the main reason for this speedup is the
> reduction of the code path length and having the useful functions
> close together, so that the CPU will be able to prefetch them in
> advance and use them instead of throwing them away because they are
> not used.

I'm expecting this patch to have an impact on executable or library
size, but not really on runtime performance, as the CPU instruction
cache only fetches whichever pieces of code are actually called.  In
other words, unused sections of code should remain cold wrt. the CPU
caches.  Apart from more or less random aliasing effects (and perhaps
TLB effects, but those should be very minor) I'm surprised that it has
positive performance effects.  But since you work at Intel, perhaps you
know things that I don't ;-)

Also any name starting with Py_ or _Py_ is an API that may be called by
third-party code, so it shouldn't be removed at all...

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

People should stop getting hung up about benchmarks numbers and instead should 
first think about what they are trying to *achieve*. FP performance in pure 
Python does not seem like an important goal in itself. Also, some benchmarks 
may show variations which are randomly correlated with a patch (e.g. because of 
different code placement by the compiler interfering with instruction cache 
wayness). It is important not to block a patch because some random benchmark on 
some random machine shows an unexpected slowdown.

That said, both of Serhiy's patches are probably ok IMO.

--




[issue23551] IDLE to provide menu link to PIP gui.

2016-02-04 Thread Terry J. Reedy

Terry J. Reedy added the comment:

I think an initial version of a pip gui need only install to the Python version 
running it.

The py launcher must discover some version of 'all' Python installs to choose, 
for instance, the latest 3.x version.  I do not know the details, nor which 
system py.exe runs on.  I was suggesting looking into the details after a first 
version.

A few days ago Steve Dower reported on pydev list how PSF installs on Windows 
register themselves in the registry (the keys used).  He also proposed a 
standard convention for other distributions to register, if they wish to be 
discovered by other apps, in a way that does not interfere with the entries for 
PSF installations.

Upendra, are you an intended GSOC student or simply a volunteer?  I am asking 
because, in the absence of submissions in nearly a year, I proposed on 
core-mentorship that this might be a good GSOC project.  I will not reserve 
this for GSOC if someone else is actually going to submit something.  But I 
also do not want to withdraw the idea unless someone is.

--




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

> Also any name starting with Py_ or _Py_ is an API that may be called by 
> third-party code, so it shouldn't be removed at all...

Right. You cannot remove the following functions, they are part of the
public C API (Include/pymem.h).

/usr/bin/ld: Removing unused section '.text.PyMem_RawMalloc' in file
'Objects/obmalloc.o'
/usr/bin/ld: Removing unused section '.text.PyMem_RawCalloc' in file
'Objects/obmalloc.o'
/usr/bin/ld: Removing unused section '.text.PyMem_RawRealloc' in file
'Objects/obmalloc.o'
/usr/bin/ld: Removing unused section '.text.PyMem_RawFree' in file
'Objects/obmalloc.o'

--




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

I’m not sure if resize() should change the len(). Dustin, why do you think it 
should? Not all ctypes objects even implement len(). The len() of 10 seems 
embedded in the class of the return value:

>>> b


Also, how would this affect create_unicode_buffer(), if the buffer is resized 
to a non-multiple of sizeof(c_wchar)?

Gedai: I’m not that familiar with the ctypes internals, but it looks like 
__len__() is implemented on the Array base class:

>>> type(b).__len__


Maybe look for the implementation of this method: 
.
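To illustrate the behaviour under discussion (a sketch, not from the report):
ctypes.resize() grows the underlying memory block, which sizeof() reflects,
while len() keeps reporting the _length_ baked into the array type:

```python
import ctypes

buf = ctypes.create_string_buffer(10)
print(len(buf), ctypes.sizeof(buf))   # 10 10

ctypes.resize(buf, 20)                # grow the instance's memory block
print(len(buf), ctypes.sizeof(buf))   # len() still 10, sizeof() now 20
```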

--
nosy: +martin.panter




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Alecsandru Patrascu

Alecsandru Patrascu added the comment:

Sure, I attached them as files because they have a lot of lines for posting 
here (~90 in total).

The linker offers the possibility to show what piece of data/functions was 
removed, but I intentionally omitted it in order not to clutter the build 
trace. If you think it will be useful for the user to see it, I can add them to 
the patch also.

--
Added file: http://bugs.python.org/file41808/gc-removed-cpython2.txt




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Alecsandru Patrascu

Changes by Alecsandru Patrascu :


Added file: http://bugs.python.org/file41809/gc-removed-cpython3.txt




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

> I'm pretty sure that optimizing lists (and tuples?) is a great idea.

I think it's a good idea indeed.

> It would also be nice to optimize [-1] lookup

How is that different from the above? :)

--
nosy: +pitrou




[issue20160] broken ctypes calling convention on MSVC / 64-bit Windows (large structs)

2016-02-04 Thread Mark Lawrence

Changes by Mark Lawrence :


--
nosy:  -BreamoreBoy




[issue26291] Floating-point arithmetic

2016-02-04 Thread good.bad

New submission from good.bad:

print(1 - 0.8)
0.19999999999999996
print(1 - 0.2)
0.8

why not 0.2?
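(This is standard binary floating-point rounding, not a bug: 0.8 cannot be
represented exactly in base 2, and the stored value can be inspected. A quick
demonstration:)

```python
from decimal import Decimal

# 0.8 is stored as the nearest binary64 double, slightly above 0.8,
# so 1 - 0.8 lands slightly below 0.2.
print(Decimal(0.8))  # 0.8000000000000000444089209850062616169452667236328125
print(1 - 0.8)       # 0.19999999999999996
print(1 - 0.2)       # 0.8 (the result's nearest double displays as 0.8)
```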

--
messages: 259622
nosy: goodbad
priority: normal
severity: normal
status: open
title: Floating-point arithmetic
versions: Python 3.5




[issue1927] raw_input behavior incorrect if readline not enabled

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

Okay, I see. To clarify, it is Python that sets up Gnu Readline for stdout: 
. The 
problem is whichever way we go, we will have to change some part of the 
behaviour to make it internally consistent. I think my patch is the minimal 
change required.

--





[issue22847] Improve method cache efficiency

2016-02-04 Thread Benjamin Peterson

Benjamin Peterson added the comment:

I suppose we've backported scarier things.

--
resolution:  -> fixed
status: open -> closed




[issue1927] raw_input behavior incorrect if readline not enabled

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

This is rather an objection.

If Gnu Readline is configured for stdout, why does bash output to stderr? We 
should investigate what exactly bash and other popular readline-using programs 
do, and implement the same in Python.

Changing the documentation is usually a less drastic change than changing 
behavior.

--




[issue17446] doctest test finder doesnt find line numbers of properties

2016-02-04 Thread Michael Cuthbert

Michael Cuthbert added the comment:

The test looks great to me.  Does anyone on nosy know the proper way to request 
a patch review?

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Hi Yury,

> I'm not sure how to respond to that. Every performance aspect *is*
> important.

Performance is not a religion (not any more than security or any other
matter).  It is not helpful to brandish results on benchmarks which have
no relevance to real-world applications.

It helps to define what we should achieve and why we want to achieve it.
 Once you start asking "why", the prospect of speeding up FP
computations in the eval loop starts becoming dubious.

> numpy isn't shipped with CPython, not everyone uses it.

That's not the point. *People doing FP-heavy computations* should use
Numpy or any of the packages that can make FP-heavy computations faster
(Numba, Cython, Pythran, etc.).

You should use the right tool for the job.  There is no need to
micro-optimize a hammer for driving screws when you could use a
screwdriver instead.  Lists or tuples of Python float objects are an
awful representation for what should be vectorized native data.  They
eat more memory in addition to being massively slower (they will also be
slower to serialize from/to disk, etc.).

"Not using" Numpy when you would benefit from it is silly.
Numpy is not only massively faster on array-wide tasks, it also makes it
easier to write high-level, readable, reusable code instead of writing
loops and iterating by hand.  Because it has been designed explicitly
for such use cases (which the Python core was not, despite the existence
of the colorsys module ;-)).  It also gives you access to a large
ecosystem of third-party modules implementing various domain-specific
operations, actively maintained by experts in the field.

Really, the mindset of "people shouldn't need to use Numpy, they can do
FP computations in the interpreter loop" is counter-productive.  I
understand that it's seductive to think that Python core should stand on
its own, but it's also a dangerous fallacy.

You *should* advocate people use Numpy for FP computations.  It's an
excellent library, and it's currently a major selling point for Python.
Anyone doing FP-heavy computations with Python should learn to use
Numpy, even if they only use it from time to time.  Downplaying its
importance, and pretending core Python is sufficient, is not helpful.

> It also harms Python 3 adoption a little bit, since many benchmarks
> are still slower. Some of them are FP related.

The Python 3 migration is happening already. There is no need to worry
about it... Even the diehard 3.x haters have stopped talking of
releasing a 2.8 ;-)

> In any case, I think that if we can optimize something - we should.

That's not true. Some optimizations add maintenance overhead for no real
benefit. Some may even hinder performance as they add conditional
branches in a critical path (increasing the load on the CPU's branch
predictors and making them potentially less efficient).

Some optimizations are obviously good, like the method call optimization
which caters to real-world use cases (and, by the way, kudos for that...
you are doing much better than all previous attempts ;-)). But some are
solutions waiting for a problem to solve.

--




[issue1927] raw_input behavior incorrect if readline not enabled

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

Serhiy, was your comment an objection to changing away from stderr, or was that 
just an observation that Python’s design is inconsistent with the rest of the 
world?

--




[issue12923] test_urllib fails in refleak mode

2016-02-04 Thread Roundup Robot

Roundup Robot added the comment:

New changeset eb69070e5382 by Martin Panter in branch '3.5':
Issue #12923: Reset FancyURLopener's redirect counter even on exception
https://hg.python.org/cpython/rev/eb69070e5382

New changeset a8aa7944c5a8 by Martin Panter in branch '2.7':
Issue #12923: Reset FancyURLopener's redirect counter even on exception
https://hg.python.org/cpython/rev/a8aa7944c5a8

New changeset d3be5c4507b4 by Martin Panter in branch 'default':
Issue #12923: Merge FancyURLopener fix from 3.5
https://hg.python.org/cpython/rev/d3be5c4507b4

--
nosy: +python-dev




[issue26287] Core dump in f-string with lambda and format specification

2016-02-04 Thread Petr Viktorin

New submission from Petr Viktorin:

Evaluating the expression f"{(lambda: 0):x}" crashes Python.

$ ./python
Python 3.6.0a0 (default, Feb  5 2016, 02:14:48) 
[GCC 5.3.1 20151207 (Red Hat 5.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> f"{(lambda: 0):x}"  
Fatal Python error: Python/ceval.c:3576 object at 0x7f6b42f21338 has negative 
ref count -2604246222170760230
Traceback (most recent call last):
  File "", line 1, in 
TypeError: non-empty format string passed to object.__format__
Aborted (core dumped)

--
messages: 259609
nosy: encukou, eric.smith
priority: normal
severity: normal
status: open
title: Core dump in f-string with lambda and format specification
versions: Python 3.6




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

tl;dr   I'm attaching a new patch - fastint4 -- the fastest of them all. It 
incorporates Serhiy's suggestion to export long/float functions and use them.  
I think it's reasonably complete -- please review it, and let's get it 
committed.

== Benchmarks ==

spectral_norm (fastint_alt)-> 1.07x faster
spectral_norm (fastintfloat)   -> 1.08x faster
spectral_norm (fastint3.patch) -> 1.29x faster
spectral_norm (fastint4.patch) -> 1.16x faster

spectral_norm (fastint**.patch)-> 1.31x faster
nbody (fastint**.patch)-> 1.16x faster

Where:
- fastint3 - is my previous patch that nobody likes (it inlined a lot of logic 
from longobject/floatobject)

- fastint4 - is the patch I'm attaching and ideally want to commit

- fastint** - is a modification of fastint4.  This is very interesting -- I 
started to profile different approaches, and found two bottlenecks, that really 
made Serhiy's and my other patches slower than fastint3.  What I found is that 
PyLong_AsDouble can be significantly optimized, and PyLong_FloorDiv is super 
inefficient.

PyLong_AsDouble can be sped up several times if we add a fastpath for 1-digit 
longs:

// longobject.c: PyLong_AsDouble
if (PyLong_CheckExact(v) && Py_ABS(Py_SIZE(v)) <= 1) {
    /* fast path; a single digit always fits in a double */
    return (double)MEDIUM_VALUE((PyLongObject *)v);
}
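For reference, whether an int counts as "single digit" depends on the build's digit width, which Python exposes via sys.int_info. A quick probe (plain Python, not part of the patch):

```python
import sys

# CPython stores ints as arrays of fixed-width "digits"; sys.int_info
# reports the width of one digit (30 bits on typical 64-bit builds).
bits = sys.int_info.bits_per_digit
print(bits)

# Values whose absolute value is below 2**bits fit in a single digit,
# which is exactly the case the fast path above targets.
largest_single_digit = 2**bits - 1
print(largest_single_digit.bit_length() <= bits)  # True
```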


PyLong_FloorDiv (fastint4 adds it) can be specialized for single digits, which 
gives it a tremendous boost.

With those too optimizations, fastint4 becomes as fast as fastint3.  I'll 
create separate issues for PyLong_AsDouble and FloorDiv.

== Micro-benchmarks ==

Floats + ints:  -m timeit -s "x=2" "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + 
(x+0.1)/(x-0.1)*2 + (x+10)*(x-30)"

2.7:  0.42 (usec)
3.5:  0.619
fastint_alt   0.619
fastintfloat: 0.52
fastint3: 0.289
fastint4: 0.51
fastint**:0.314

===

Ints:  -m timeit -s "x=2" "x + 10 + x * 20 - x // 3 + x* 10 + 20 -x"

2.7:  0.151 (usec)
3.5:  0.19
fastint_alt:  0.136
fastintfloat: 0.135
fastint3: 0.135
fastint4: 0.122
fastint**:0.122


P.S. I have another variant of fastint4 that uses fast_* functions in ceval 
loop, instead of a big macro.  Its performance is slightly worse than with the 
macro.

--
Added file: http://bugs.python.org/file41811/fastint4.patch




[issue12923] test_urllib fails in refleak mode

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

One extra change I made to test_redirect_limit_independent() was to stop 
relying on _urlopener being created before we call urlopen(). As a consequence, 
in the Python 3 tests I made a wrapper around FancyURLopener to suppress the 
deprecation warning.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

> People should stop getting hung up about benchmarks numbers and instead 
> should first think about what they are trying to *achieve*. FP performance in 
> pure Python does not seem like an important goal in itself.

I'm not sure how to respond to that.  Every performance aspect *is* important.  
numpy isn't shipped with CPython, not everyone uses it.  In one of my programs 
I used colorsys extensively -- did I need to rewrite it using numpy?  Probably 
I could, but that was a simple shell script without any dependencies.

It also harms Python 3 adoption a little bit, since many benchmarks are still 
slower.  Some of them are FP related.

In any case, I think that if we can optimize something - we should.


> Also, some benchmarks may show variations which are randomly correlated with 
> a patch (e.g. before of different code placement by the compiler interfering 
> with instruction cache wayness). 

30-50% speed improvement is not a variation.  It's just that a lot less code 
gets executed if we inline some operations.


> It is important not to block a patch because some random benchmark on some 
> random machine shows an unexpected slowdown.

Nothing is blocked atm, we're just discussing various approaches.


> That said, both of Serhiy's patches are probably ok IMO.

Current Serhiy's patches are incomplete.  In any case, I've been doing some 
research and will post another message shortly.

--




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Zach Byrne

Zach Byrne added the comment:

Ok, I've started on the instrumenting, thanks for that head start, that would 
have taken me a while to figure out where to call the stats dump function from. 
Fun fact: BINARY_SUBSCR is called 717 times just starting python.

--




[issue26287] Core dump in f-string with lambda and format specification

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

I had to recompile with “--with-pydebug” to get the crash. I know f-strings 
don’t support the lambda syntax very well, but I can also make it crash without 
using lambda:

>>> f"{ {1: 2}:x}"
Fatal Python error: Python/ceval.c:3576 object at 0x7fa32ab030c8 has negative 
ref count -1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: non-empty format string passed to object.__format__
Aborted (core dumped)

--
components: +Interpreter Core
nosy: +martin.panter
type:  -> crash




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Martin Panter

Martin Panter added the comment:

I can’t really comment on the patch, but I’m a bit worried that this is not the 
purpose of the b_length field.

--
stage:  -> patch review




[issue26288] Optimize PyLong_AsDouble for single-digit longs

2016-02-04 Thread Yury Selivanov

New submission from Yury Selivanov:

The attached patch drastically speeds up PyLong_AsDouble for single digit longs:


-m timeit -s "x=2" "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + (x+0.1)/(x-0.1)*2 + 
(x+10)*(x-30)"

with patch: 0.414
without: 0.612

spectral_norm: 1.05x faster. The results are even better when paired with the 
patch from issue #21955.
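The same micro-benchmark can be reproduced from Python with the timeit module rather than the command line (a sketch; absolute numbers will differ by machine):

```python
import timeit

setup = "x = 2"
stmt = ("x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + "
        "(x+0.1)/(x-0.1)*2 + (x+10)*(x-30)")

# repeat() returns one total time per run; take the minimum,
# as perf.py's "Min" line does.
best = min(timeit.repeat(stmt, setup, number=100_000, repeat=3))
print(f"best of 3: {best:.4f}s per 100k iterations")
```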

--
components: Interpreter Core
files: as_double.patch
keywords: patch
messages: 259615
nosy: haypo, pitrou, serhiy.storchaka, yselivanov
priority: normal
severity: normal
status: open
title: Optimize PyLong_AsDouble for single-digit longs
versions: Python 3.6
Added file: http://bugs.python.org/file41812/as_double.patch




[issue26289] Optimize floor division for ints

2016-02-04 Thread Yury Selivanov

New submission from Yury Selivanov:

The attached patch optimizes floor division for ints.

### spectral_norm ###
Min: 0.319087 -> 0.289172: 1.10x faster
Avg: 0.322564 -> 0.294319: 1.10x faster
Significant (t=21.71)
Stddev: 0.00249 -> 0.01277: 5.1180x larger


-m timeit -s "x=22331" "x//2;x//3;x//4;x//5;x//6;x//7;x//8;x/99;x//100;"

with patch: 0.298
without:0.515

--
components: Interpreter Core
files: floor_div.patch
keywords: patch
messages: 259617
nosy: haypo, pitrou, serhiy.storchaka, yselivanov
priority: normal
severity: normal
stage: patch review
status: open
title: Optimize floor division for ints
type: performance
versions: Python 3.6
Added file: http://bugs.python.org/file41813/floor_div.patch




[issue26290] fileinput and 'for line in sys.stdin' do strange mockery of input buffering

2016-02-04 Thread Don Hatch

New submission from Don Hatch:

Iterating over input using either 'for line in fileinput.input():'
or 'for line in sys.stdin:' has the following unexpected behavior:
no matter how many lines of input the process reads, the loop body is not
entered until either (1) at least 8193 chars have been read and at least one of
them was a newline, or (2) EOF is read (i.e. the read() system call returns
zero bytes).

The behavior I expect instead is what
"for line in iter(sys.stdin.readline, ''):" does: that is, the loop body is
entered for the first time as soon as a newline or EOF is read.
Furthermore strace reveals that this well-behaved alternative code does
sensible input buffering, in the sense that the underlying system call being
made is read(0,buf,8192), thereby allowing it to get as many characters as are
available on input, up to 8192 of them, to be buffered and used in subsequent
loop iterations.  This is familiar and sensible behavior, and is what I think
of as "input buffering".

I anticipate there will be responses to this bug report of the form "this is
documented behavior; the fileinput and sys.stdin iterators do input buffering".
To that, I say: no, these iterators' unfriendly behavior is *not* input
buffering in any useful sense; my impression is that someone may have
implemented what they thought the words "input buffering" meant, but if so,
they really botched it.

This bug is most noticeable and harmful when using a filter written in python
to filter the output of an ongoing process that may have long pauses between
lines of output; e.g. running "tail -f" on a log file.  In this case, the
python filter spends a lot of time in a state where it is paused without
reason, having read many input lines that it has not yet processed.

If there is any suspicion that the delayed output is due to the previous
program in the pipeline buffering its output instead, strace can be used on the
python filter process to confirm that its input lines are in fact being read in
a timely manner.  This is certainly true if the previous process in the
pipeline is "tail -f", at least on my ubuntu linux system.

To demonstrate the bug, run each of the following from the bash command line.
This was observed using bash 4.3.11(1), python 2.7.6, and python 3.4.3,
on ubuntu 14.04 linux.

--
{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import fileinput,sys\nfor 
line in fileinput.input(): sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import sys\nfor line in 
sys.stdin: sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python2.7 -c $'import sys\nfor line in 
iter(sys.stdin.readline, ""): sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import fileinput,sys\nfor 
line in fileinput.input(): sys.stdout.write("line: "+line)'
# result (BAD): pauses for 1 second, prints the three lines, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import sys\nfor line in 
sys.stdin: sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to 
prompt

{ echo a;echo b;echo c;sleep 1;} | python3.4 -c $'import sys\nfor line in 
iter(sys.stdin.readline, ""): sys.stdout.write("line: "+line)'
# result (GOOD): prints the three lines, pauses for 1 second, returns to 
prompt
--

Notice the 'for line in sys.stdin:' behavior is apparently fixed in python 3.4.
So the matrix of behavior observed above can be summarized as follows:

                                           2.7   3.4
for line in fileinput.input():             BAD   BAD
for line in sys.stdin:                     BAD   GOOD
for line in iter(sys.stdin.readline, ""):  GOOD  GOOD

Note that adding '-u' to the python args makes no difference in behavior, in
any of the above 6 command lines.

Finally, if I insert "strace -T" before "python" in each of the 6 command lines
above, it confirms that the python process is reading the 3 lines of input
immediately in all cases, in a single read(..., ..., 4096 or 8192) which seems
reasonable.
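The well-behaved readline idiom can be wrapped in a small generator helper (a sketch, not part of the stdlib):

```python
import io
import sys

def unbuffered_lines(stream=None):
    """Yield each line as soon as its newline (or EOF) is read.

    Wraps the well-behaved `iter(stream.readline, '')` idiom shown above;
    it avoids the large read-ahead that delays `for line in sys.stdin`.
    """
    if stream is None:
        stream = sys.stdin
    return iter(stream.readline, '')

# Demonstrate with an in-memory stream:
for line in unbuffered_lines(io.StringIO("a\nb\n")):
    sys.stdout.write("line: " + line)
```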

--
components: Library (Lib)
messages: 259619
nosy: Don Hatch
priority: normal
severity: normal
status: open
title: fileinput and 'for line in sys.stdin' do strange mockery of input 
buffering
type: behavior
versions: Python 2.7, Python 3.4


[issue17446] doctest test finder doesnt find line numbers of properties

2016-02-04 Thread Emanuel Barry

Emanuel Barry added the comment:

Left a comment on Rietveld. I don't have time right now to check the test, but 
I suspect you tested it before submitting the patch, so it should probably be 
fine.

--
nosy: +ebarry
stage:  -> patch review




[issue17446] doctest test finder doesnt find line numbers of properties

2016-02-04 Thread Timo Furrer

Timo Furrer added the comment:

Yes, I've tested it.

--




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Tamás Bence Gedai

Tamás Bence Gedai added the comment:

I've added a patch that solves the problem with the built-in len. Even if it 
turns out that this functionality is not needed, it was quite a challenge to 
track down the issue, and I've learned a lot. :)

Here are some functions that I looked through; they might be useful for someone 
who'd like to look into this issue.

https://github.com/python/cpython/blob/master/Python/bltinmodule.c#L1443
static PyObject *
builtin_len(PyModuleDef *module, PyObject *obj)
/*[clinic end generated code: output=8e5837b6f81d915b input=bc55598da9e9c9b5]*/
{
Py_ssize_t res;

res = PyObject_Size(obj);
if (res < 0 && PyErr_Occurred())
return NULL;
return PyLong_FromSsize_t(res);
}

https://github.com/python/cpython/blob/master/Objects/abstract.c#L42
Py_ssize_t
PyObject_Size(PyObject *o)
{
/*...*/
m = o->ob_type->tp_as_sequence;
if (m && m->sq_length)
return m->sq_length(o);
/*...*/
}

https://github.com/python/cpython/blob/master/Modules/_ctypes/_ctypes.c#L4449
static PySequenceMethods Array_as_sequence = {
Array_length,   /* sq_length; */
/*...*/
};

https://github.com/python/cpython/blob/master/Modules/_ctypes/_ctypes.c#L4442
static Py_ssize_t
Array_length(PyObject *myself)
{
CDataObject *self = (CDataObject *)myself;
return self->b_length;
}

--
keywords: +patch
Added file: http://bugs.python.org/file41810/resize.patch




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

Antoine, FWIW I agree on most of your points :)  And yes, numpy, scipy, numba, 
etc rock.

Please take a look at my fastint4.patch.  All tests pass, no performance 
regressions, no crazy inlining of floating point exceptions etc.  And yet we 
have a nice improvement for both ints and floats.

--




[issue26291] Floating-point arithmetic

2016-02-04 Thread Emanuel Barry

Emanuel Barry added the comment:

This is due to how floating point numbers are handled under the hood. See 
http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm 
and https://docs.python.org/3/tutorial/floatingpoint.html for some useful reading 
about why Python behaves like this regarding floating point numbers. Both of 
these links state that this isn't a bug in Python, rightly so as it isn't.
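For illustration, the classic example and two exact alternatives:

```python
from decimal import Decimal
from fractions import Fraction

# Binary floats cannot represent 0.1 exactly, so tiny errors surface:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# When exact decimal results matter, use decimal or fractions instead:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))     # True
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```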

--
nosy: +ebarry
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue20160] broken ctypes calling convention on MSVC / 64-bit Windows (large structs)

2016-02-04 Thread Christoph Sarnowski

Changes by Christoph Sarnowski:


--
nosy: +Christoph Sarnowski




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

> Why not combine my patch and Serhiy's?  First we check if left & right are 
> both longs.  Then we check if they are unicode (for +).  And then we have a 
> fastpath for floats.

See my comment on Serhiy's patch. Maybe we can start by checking that the type 
of both operands is the same, and then use PyLong_CheckExact and 
PyUnicode_CheckExact.

Using such a design, we may add a _PyFloat_Add(). But the next question is then 
the overhead on the "slow" path, which requires a benchmark too! For example, 
use a subtype of int.

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Florin Papa

Florin Papa added the comment:

I was also talking about the variance/deviation of the mean value
displayed by perf.py, sorry if I was unclear. The perf.py output in my
previous message showed little difference between the patched and
non-patched version. I will also try increasing the number of
runs to see if there is any change.

The CPU isolation feature is a great finding, thank you.

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

tl;dr I'm disappointed. According to the statistics module, running the 
bm_regex_v8.py benchmark more times with more iterations makes the benchmark 
more unstable... I expected the opposite...


Patch version 2:

* patch also performance/bm_pickle.py
* change min_time from 100 ms to 500 ms with --fast
* compute the number of runs using a maximum time, maximum time change with 
--fast and --rigorous

+if options.rigorous:
+min_time = 1.0
+max_time = 100.0  # 100 runs
+elif options.fast:
+min_time = 0.5
+max_time = 25.0   # 50 runs
+else:
+min_time = 0.5
+max_time = 50.0   # 100 runs
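The time-based calibration can be sketched as follows (a hypothetical helper, not the patch itself): double the iteration count until one run exceeds min_time, producing the num_loops = 1, 2, 4, 8, ... sequence seen in the calibration logs.

```python
import time

def calibrate_loops(bench, min_time=0.5):
    """Return the smallest power-of-two loop count whose run takes min_time.

    `bench(loops)` must execute the workload `loops` times.
    """
    num_loops = 1
    while True:
        start = time.perf_counter()
        bench(num_loops)
        elapsed = time.perf_counter() - start
        if elapsed >= min_time:
            return num_loops
        num_loops *= 2

# Example with a fake workload: each loop sleeps ~1 ms.
loops = calibrate_loops(lambda n: time.sleep(0.001 * n), min_time=0.01)
print(loops)  # a power of two; exact value is timing-dependent
```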


To measure the stability of perf.py, I pinned perf.py to CPU cores which are 
isolated of the system using Linux "isolcpus" kernel parameter. I also forced 
the CPU frequency governor to "performance" and enabled "no HZ full" on these 
cores.

I ran perf.py 5 times on regex_v8.


Calibration (original => patched):

* --fast: 1 iteration x 5 runs => 16 iterations x 50 runs
* (no option): 1 iteration x 50 runs => 16 iterations x 100 runs


Approximated duration of the benchmark (original => patched):

* --fast: 7 sec => 7 min 34 sec
* (no option): 30 sec => 14 min 35 sec

(I made a mistake, so I was unable to get the exact duration.)

Hum, maybe timings are not well chosen because the benchmark is really slow 
(minutes vs seconds) :-/


Standard deviation, --fast:

* (python2) 0.00071 (1.2%, mean=0.05961) => 0.01059 (1.1%, mean=0.96723)
* (python3) 0.00068 (1.5%, mean=0.04494) => 0.05925 (8.0%, mean=0.74248)
* (faster) 0.02986 (2.2%, mean=1.32750) => 0.09083 (6.9%, mean=1.31000)

Standard deviation, (no option):

* (python2) 0.00072 (1.2%, mean=0.05957) => 0.00874 (0.9%, mean=0.97028)
* (python3) 0.00053 (1.2%, mean=0.04477) => 0.00966 (1.3%, mean=0.72680)
* (faster) 0.02739 (2.1%, mean=1.33000) => 0.02608 (2.0%, mean=1.33600)

Variance, --fast:

* (python2) 0.0 (0.001%, mean=0.05961) => 0.9 (0.009%, mean=0.96723)
* (python3) 0.0 (0.001%, mean=0.04494) => 0.00281 (0.378%, mean=0.74248)
* (faster) 0.00067 (0.050%, mean=1.32750) => 0.00660 (0.504%, mean=1.31000)

Variance, (no option):

* (python2) 0.0 (0.001%, mean=0.05957) => 0.6 (0.006%, mean=0.97028)
* (python3) 0.0 (0.001%, mean=0.04477) => 0.7 (0.010%, mean=0.72680)
* (faster) 0.00060 (0.045%, mean=1.33000) => 0.00054 (0.041%, mean=1.33600)

Legend:

* (python2) are timings of python2 ran by perf.py (of the "Min" line)
* (python3) are timings of python3 ran by perf.py (of the "Min" line)
* (faster) are the "1.34x" numbers of "faster" or "slower" of the "Min" line
* percentages are: value * 100 / mean

It's not easy to compare these values since the number of iterations is very 
different (1 => 16) and so timings are very different (ex: 0.059 sec => 0.950 
sec). I guess that it's ok to compare percentages.


I used the stability.py script, attached to this issue, to compute deviation 
and variance from the "Min" line of the 5 runs. The script takes the output of 
perf.py as input.

I'm not sure that 5 runs are enough to compute statistics.
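Computing those deviation figures from a list of "Min" timings is a few lines with the statistics module (a sketch of what a script like stability.py might do; the sample values are the python3 timings from the unpatched runs in the raw data):

```python
import statistics

# python3 "Min" timings from five runs of the original perf.py (no option)
timings = [0.044762, 0.045689, 0.044587, 0.044364, 0.044464]

mean = statistics.mean(timings)
stdev = statistics.stdev(timings)        # sample standard deviation
variance = statistics.variance(timings)  # stdev squared
print(f"mean={mean:.5f}  stdev={stdev:.5f} ({100 * stdev / mean:.2f}%)")
```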

--

Raw data.

Original perf.py.

$ grep ^Min original.fast 
Min: 0.059236 -> 0.045948: 1.29x faster
Min: 0.059005 -> 0.044654: 1.32x faster
Min: 0.059601 -> 0.044547: 1.34x faster
Min: 0.060605 -> 0.044600: 1.36x faster

$ grep ^Min original
Min: 0.060479 -> 0.044762: 1.35x faster
Min: 0.059002 -> 0.045689: 1.29x faster
Min: 0.058991 -> 0.044587: 1.32x faster
Min: 0.060231 -> 0.044364: 1.36x faster
Min: 0.059165 -> 0.044464: 1.33x faster

Patched perf.py.

$ grep ^Min patched.fast 
Min: 0.950717 -> 0.711018: 1.34x faster
Min: 0.968413 -> 0.730810: 1.33x faster
Min: 0.976092 -> 0.847388: 1.15x faster
Min: 0.964355 -> 0.711083: 1.36x faster
Min: 0.976573 -> 0.712081: 1.37x faster

$ grep ^Min patched
Min: 0.968810 -> 0.729109: 1.33x faster
Min: 0.973615 -> 0.731308: 1.33x faster
Min: 0.974215 -> 0.734259: 1.33x faster
Min: 0.978781 -> 0.709915: 1.38x faster
Min: 0.955977 -> 0.729387: 1.31x faster

$ grep ^Calibration patched.fast 
Calibration: num_runs=50, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 36.4 sec)
Calibration: num_runs=50, num_loops=16 (0.75 sec per run > min_time 0.50 sec, 
estimated total: 37.3 sec)
Calibration: num_runs=50, num_loops=16 (0.75 sec per run > min_time 0.50 sec, 
estimated total: 37.4 sec)
Calibration: num_runs=50, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 36.6 sec)
Calibration: num_runs=50, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 36.7 sec)

$ grep ^Calibration patched
Calibration: num_runs=100, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 73.0 sec)
Calibration: num_runs=100, num_loops=16 (0.75 sec per run > min_time 0.50 sec, 
estimated total: 75.3 sec)
Calibration: num_runs=100, num_loops=16 (0.73 sec per run > min_time 0.50 sec, 
estimated total: 73.2 sec)
Calibration: 

[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

Changes by STINNER Victor:


Added file: http://bugs.python.org/file41804/perf_calibration-2.patch




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

> I agree with Marc-Andre, people doing FP-heavy math in Python use Numpy 
> (possibly with Numba, Cython or any other additional library). 
> Micro-optimizing floating-point operations in the eval loop makes little 
> sense IMO.

I disagree.

30% faster floats (sic!) is a serious improvement that shouldn't just be 
discarded.  Many applications have floating point calculations one way or 
another, but don't use numpy because it's overkill.

Python 2 is much faster than Python 3 on any kind of numeric calculations.  
This point is being frequently brought up in every python2 vs 3 debate.  I 
think it's unacceptable.


> * the ceval loop may no longer fit into the CPU cache on
>   systems with small cache sizes, since the compiler will likely
>   inline all the fast_*() functions (I guess it would be possible
>   to simply eliminate all fast paths using a compile time
>   flag)

That's speculation.  It may still fit.  Or it never really fitted; it's 
already huge.  I tested the patch on an 8 year old desktop CPU: no performance 
degradation on our benchmarks.

### raytrace ###
Avg: 1.858527 -> 1.652754: 1.12x faster

### nbody ###
Avg: 0.310281 -> 0.285179: 1.09x faster

### float ###
Avg: 0.392169 -> 0.358989: 1.09x faster

### chaos ###
Avg: 0.355519 -> 0.326400: 1.09x faster

### spectral_norm ###
Avg: 0.377147 -> 0.303928: 1.24x faster

### telco ###
Avg: 0.012845 -> 0.013006: 1.01x slower

The last benchmark (telco) is especially interesting.  It uses decimals for 
calculation, that means that it uses overloaded numeric operators.  Still no 
significant performance degradation.

> * maintenance will get more difficult

Fast path for floats is easy to understand for every core dev that works with 
ceval, there is no rocket science there (if you want rocket science that is 
hard to maintain look at generators/yield from).  If you don't like inlining 
floating point calculations, we can make float_add, float_sub, float_div, and 
float_mul exported and use them in ceval.

Why not combine my patch and Serhiy's?  First we check if left & right are both 
longs.  Then we check if they are unicode (for +).  And then we have a fastpath 
for floats.

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

On 04/02/2016 15:18, Yury Selivanov wrote:
>
> But it is faster. That's visible on many benchmarks. Even simple
> timeit oneliners can show that. Probably it's because such
> benchmarks usually combine floats and ints, i.e. "2 * smth" instead of
> "2.0 * smth".

So it's not about FP-related calculations anymore. It's about ints
having become slower ;-)

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

>> But it is faster. That's visible on many benchmarks. Even simple
>> timeit oneliners can show that. Probably it's because such
>> benchmarks usually combine floats and ints, i.e. "2 * smth" instead of
>> "2.0 * smth".
>
> So it's not about FP-related calculations anymore. It's about ints
> having become slower ;-)

I should have written 2 * smth_float vs 2.0 * smth_float

--
nosy: +Yury.Selivanov




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

Attaching another approach -- fastint5.patch.

Similar to what fastint4.patch does, but doesn't export any new APIs.  Instead, 
similarly to abstract.c, it uses type slots directly.

--
Added file: http://bugs.python.org/file41815/fastint5.patch




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

Looks like we want to specialize it for lists, tuples, and dicts; as expected.  
Not so sure about [-1], but I suggest benchmarking it anyway ;)

--




[issue26290] fileinput and 'for line in sys.stdin' do strange mockery of input buffering

2016-02-04 Thread Don Hatch

Don Hatch added the comment:

Possibly related to http://bugs.python.org/issue1633941 .
Note that the matrix of GOOD and BAD versions and input methods is
exactly the same for this bug as for that one.  To verify: run
each of the 6 python commands I mentioned on its own, being sure to type
at least one line of input ending in newline before hitting ctrl-D -- if it 
exits after one ctrl-D it's GOOD; having to type a second ctrl-D is BAD.

--




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Looks like the statistics vary too much from test to test. Could you please 
collect the statistics for a run of all the tests?

--




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Zach Byrne

Zach Byrne added the comment:

I'll put together something comprehensive in a bit, but here's a quick preview:

$ ./python
Python 3.6.0a0 (default, Feb  4 2016, 20:08:03) 
[GCC 4.6.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
Total BINARY_SUBSCR calls: 726
List BINARY_SUBSCR calls: 36
Tuple BINARY_SUBSCR calls: 103
Dict BINARY_SUBSCR calls: 227
Unicode BINARY_SUBSCR calls: 288
Bytes BINARY_SUBSCR calls: 68
[-1] BINARY_SUBSCR calls: 0

$ python bm_elementtree.py -n 100 --timer perf_counter
...[snip]...
Total BINARY_SUBSCR calls: 1078533
List BINARY_SUBSCR calls: 513
Tuple BINARY_SUBSCR calls: 1322
Dict BINARY_SUBSCR calls: 1063075
Unicode BINARY_SUBSCR calls: 13150
Bytes BINARY_SUBSCR calls: 248
[-1] BINARY_SUBSCR calls: 0

Lib/test$ ../../python -m unittest discover
...[snip]...^C <== I got bored waiting
KeyboardInterrupt
Total BINARY_SUBSCR calls:  4732885
List BINARY_SUBSCR calls:   1418730
Tuple BINARY_SUBSCR calls:  1300717
Dict BINARY_SUBSCR calls:   1151766
Unicode BINARY_SUBSCR calls: 409924
Bytes BINARY_SUBSCR calls:   363029
[-1] BINARY_SUBSCR calls: 26623

So dict seems to be the winner here

--
keywords: +patch
Added file: http://bugs.python.org/file41814/26280_stats.diff




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Eryk Sun

Eryk Sun added the comment:

You can't reassign the array object's __class__, and you can't modify the array 
type itself, so I think modifying the internal b_length field of the object is 
a confused result. 

Even if you ignore this confusion, it's still not as simple as using the size 
in bytes as the length. That's what b_size is for, after all. The array length 
is the new size divided by the element size, which you can get from 
PyType_stgdict(dict->proto)->size. Also, you'd have to ensure it's only 
updating b_length for arrays, i.e. ArrayObject_Check(obj), since it makes no 
sense to modify the length of a simple type, struct, union, or [function] 
pointer.

However, I don't think this result is worth the confusion. ctypes buffers can 
grow, but arrays have a fixed size by design. There are already ways to access 
a resized buffer. For example, you can use the from_buffer method to create an 
instance of a new array type that has the desired length, or you can 
dereference a pointer for the new array type. So I'm inclined to close this 
issue as "not a bug".

Note: 
Be careful when resizing buffers that are shared across objects. Say you resize 
array "a" and share it as array "b" using from_buffer or a pointer dereference. 
Then later you resize "a" again. The underlying realloc might change the base 
address of the buffer, while "b" still uses the old address. For example:

>>> a = (ctypes.c_int * 5)(*range(5))
>>> ctypes.resize(a, 4 * 10)
>>> b = ctypes.POINTER(ctypes.c_int * 10)(a)[0]
>>> ctypes.addressof(a)
20136704
>>> ctypes.addressof(b)
20136704
>>> b._b_needsfree_ # b doesn't own the buffer
0
>>> b[:] # the reallocation is not zeroed
[0, 1, 2, 3, 4, 0, 0, 32736, 48, 0]

Here's the problem to keep in mind:

>>> ctypes.resize(a, 4 * 1000)
>>> ctypes.addressof(a)
22077952
>>> ctypes.addressof(b)
20136704
>>> b[:] # garbage; maybe a segfault
[1771815800, 32736, 1633841761, 540763495, 1652121965,
 543236197, 1718052211, 1953264993, 10, 0]
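A sketch of the safer pattern implied by the note above: re-derive the wider view (here via the same pointer-dereference trick) after every resize, so no view is left holding a stale base address:

```python
import ctypes

# After *every* resize, rebuild the wider view instead of reusing an old
# one, so it never caches a base address freed by a later realloc.
a = (ctypes.c_int * 5)(*range(5))
ctypes.resize(a, ctypes.sizeof(ctypes.c_int) * 1000)
b = ctypes.POINTER(ctypes.c_int * 10)(a)[0]
assert ctypes.addressof(a) == ctypes.addressof(b)

ctypes.resize(a, ctypes.sizeof(ctypes.c_int) * 2000)
b = ctypes.POINTER(ctypes.c_int * 10)(a)[0]  # rebuilt after the resize
assert ctypes.addressof(a) == ctypes.addressof(b)
assert b[:5] == [0, 1, 2, 3, 4]  # realloc preserves the original elements
```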

--
nosy: +eryksun
type:  -> enhancement




[issue26290] fileinput and 'for line in sys.stdin' do strange mockery of input buffering

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

For fileinput see issue15068.

--
nosy: +serhiy.storchaka




[issue26280] ceval: Optimize [] operation similarly to CPython 2.7

2016-02-04 Thread Zach Byrne

Zach Byrne added the comment:

One thing I forgot to do was count slices.

--




[issue1633941] for line in sys.stdin: doesn't notice EOF the first time

2016-02-04 Thread Don Hatch

Don Hatch added the comment:

I've reported the unfriendly input withholding that several people have
observed and mentioned here as a separate bug: 
http://bugs.python.org/issue26290 . The symptom is different but I suspect it 
has exactly the same underlying cause (incorrect use of stdio) and fix that 
Ralph Corderoy has described clearly here.

--
nosy: +Don Hatch




[issue22847] Improve method cache efficiency

2016-02-04 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 6357d851029d by Antoine Pitrou in branch '2.7':
Issue #22847: Improve method cache efficiency.
https://hg.python.org/cpython/rev/6357d851029d

--




[issue26269] zipfile should call lstat instead of stat if available

2016-02-04 Thread SilentGhost

Changes by SilentGhost :


--
components: +Extension Modules
nosy: +alanmcintyre, serhiy.storchaka, twouters
versions:  -Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

In my experience it is very hard to get stable benchmark results with Python.  
Even long running benchmarks on an empty machine vary:

wget http://www.bytereef.org/software/mpdecimal/benchmarks/telco.py
wget http://speleotrove.com/decimal/expon180-1e6b.zip
unzip expon180-1e6b.zip

taskset -c 0 ./python telco.py full


$ taskset -c 0 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 7.16255
$ taskset -c 0 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 6.982884
$ taskset -c 0 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 7.0953491
$ taskset -c 0 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']

--
nosy: +skrah




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

I've cut off the highest result in the previous message:

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 7.304339
$ taskset -c 0 ./python telco.py full

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

> taskset -c 0 ./python telco.py full

Did you see that I just merged Florin's patch to add --affinity parameter to 
perf.py? :-)

You may isolate some CPU cores using the kernel command-line parameter 
isolcpus=xxx. I don't think that core #0 is the best choice, since the kernel 
may prefer it for other work (e.g. interrupt handling).

It would be nice to collect "tricks" to get most reliable benchmark results. 
Maybe in perf.py README page? Or a wiki page? Whatever? :-)

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

I agree with Marc-Andre, people doing FP-heavy math in Python use Numpy 
(possibly with Numba, Cython or any other additional library). Micro-optimizing 
floating-point operations in the eval loop makes little sense IMO.

The point of optimizing integers is that they are used for many purposes, not 
only "math" (e.g. indexing).

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

For an older project (Fusil the fuzzer), I wrote a short function to sleep 
until the system load is lower than a threshold. I had to write such function 
to reduce the noise when the system is heavily loaded. I wrote that to be able 
to run a very long task (it takes at least 1 hour, but may run for multiple 
days!) on my desktop and continue to use my desktop for various other tasks.

On Linux, we can use the "cpu xxx xxx xxx ..." line of /proc/stat to get the 
system load.

My code to read the system load:
https://bitbucket.org/haypo/fusil/src/32ddc281219cd90c1ad12a3ee4ce212bea1c3e0f/fusil/linux/cpu_load.py?at=default=file-view-default#cpu_load.py-51

My code to wait until the system load is lower than a threshold:
https://bitbucket.org/haypo/fusil/src/32ddc281219cd90c1ad12a3ee4ce212bea1c3e0f/fusil/system_calm.py?at=default=file-view-default#system_calm.py-5

--

I also write a script to do the opposite :-) A script to stress the system to 
have a system load higher or equal than a minimum load:

https://bitbucket.org/haypo/misc/src/3fd3993413b128b37e945690865ea2c5ef48c446/bin/system_load.py?at=default=file-view-default

This script helped to me reproduce sporadic failures like timeouts which only 
occur when the system is highly loaded.
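A minimal sketch of the first idea, reading the aggregate "cpu" line from /proc/stat and waiting until the busy fraction drops below a threshold (Linux-only; field layout per proc(5); function names here are illustrative, not Fusil's API):

```python
import time

def cpu_busy_fraction(interval=0.5):
    """Fraction of time the CPUs were busy over `interval` seconds,
    from two samples of the aggregate 'cpu' line in /proc/stat."""
    def sample():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        return fields[3] + fields[4], sum(fields)  # idle + iowait, total
    idle1, total1 = sample()
    time.sleep(interval)
    idle2, total2 = sample()
    dt = total2 - total1
    return 1.0 - (idle2 - idle1) / dt if dt else 0.0

def wait_until_calm(threshold=0.2, interval=0.5):
    # Sleep until the system load drops below the threshold before
    # starting a benchmark run, to reduce noise from other processes.
    while cpu_busy_fraction(interval) >= threshold:
        pass
```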

--




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

Core 1 fluctuates even more (My machine only has 2 cores):

$ taskset -c 1 ./python telco.py full

Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 6.783009
Control totals:
Actual   ['1004737.58', '57628.30', '25042.17']
Expected ['1004737.58', '57628.30', '25042.17']
Elapsed time: 7.335563
$ taskset -c 1 ./python telco.py full


I have some of the same concerns as Serhiy. There's a lot of statistics going 
on in the benchmark suite -- is it really possible to separate that cleanly 
from the actual runtime of the benchmarks?

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

> I agree with Marc-Andre, people doing FP-heavy math in Python use Numpy 
> (possibly with Numba, Cython or any other additional library). 
> Micro-optimizing floating-point operations in the eval loop makes little 
> sense IMO.

Oh wait, I maybe misunderstood Marc-Andre comment. If the question is only on 
float: I'm ok to drop the fast-path for float. By the way, I would prefer to 
see PyLong_CheckExact() in the main loop, and only call fast_mul() if both 
operands are Python int.

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

I mean, if you run the benchmark 10 times and the unpatched result is always 
between 11.3 and 12.0 for floats while the patched result is
between 12.3 and 12.9, for me the situation is clear.

--




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

I'm surprised about the speedups. Is there a logical reason for them?

--
nosy: +pitrou




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread STINNER Victor

STINNER Victor added the comment:

Florin Papa added the comment:
> I ran perf to use calibration and there is no difference in stability
> compared to the unpatched version.

Sorry, what are you calling "stability"? For me, stability means that
you run the same benchmark 3, 5 or 10 times, and the results must be
as close as possible: see variance and standard deviation of my
previous message.

I'm not talking of variance/deviation of the N runs of bm_xxx.py
scripts, but variance/deviation of the mean value displayed by
perf.py.

perf_calibration.patch is a proof-of-concept. I changed the number of
runs from 50 to 10 to test my patch more easily. You should modify the
patch to keep 50 runs if you want to compare the stability.

By the way, --fast/--rigorous options should not only change the
minimum duration of a single run to calibrate the number of loops, but
they should also change the "maximum" duration of perf.py by using
different number of runs.
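The run-to-run spread in question can be quantified with the stdlib statistics module, e.g. over the four telco elapsed times Stefan posted earlier in the thread:

```python
import statistics

# Elapsed times (seconds) from Stefan's `taskset -c 0` telco.py runs.
times = [7.16255, 6.982884, 7.0953491, 7.304339]
mean = statistics.mean(times)
stdev = statistics.stdev(times)
print(f"mean={mean:.3f}s  stdev={stdev:.3f}s  ({100 * stdev / mean:.1f}% of mean)")
```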

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

On 04/02/2016 14:54, Yury Selivanov wrote:
> 
> 30% faster floats (sic!) is a serious improvement, that shouldn't
> just be discarded. Many applications have floating point calculations one way
> or another, but don't use numpy because it's an overkill.

Can you give any example of such an application and how they would
actually benefit from "faster floats"? I'm curious why anyone who wants
fast FP calculations would use pure Python with CPython...

Discarding Numpy because it's "overkill" sounds misguided to me.
That's like discarding asyncio because it's "less overkill" to write
your own select() loop. It's often far more productive to use the
established, robust, optimized library rather than tweak your own
low-level code.

(and Numpy is easier to learn than asyncio ;-))

I'm not violently opposing the patch, but I think maintenance effort
devoted to such micro-optimizations is a bit wasted. And once you add
such a micro-optimization, trying to remove it often faces a barrage of
FUD about making Python slower, even if the micro-optimization is
practically worthless.

> Python 2 is much faster than Python 3 on any kind of numeric
> calculations.

Actually, it shouldn't really be faster on FP calculations, since the
float object hasn't changed (as opposed to int/long). So I'm skeptical
of FP-heavy code that would have been made slower by Python 3 (unless
there's also integer handling in that, e.g. indexing).

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

> But the next question is then the overhead on the "slow" path, which
> requires a benchmark too! For example, use a subtype of int.

telco is such a benchmark (although it's very unstable).  It uses decimals 
extensively.  I've tested it many times on three different CPUs, and it doesn't 
seem to become any slower.


> Discarding Numpy because it's "overkill" sounds misguided to me.
> That's like discarding asyncio because it's "less overkill" to write
> your own select() loop. It's often far more productive to use the
> established, robust, optimized library rather than tweak your own
> low-level code.

Don't get me wrong, numpy is simply amazing!  But if you have a 100,000-line 
application that happens to have a few FP-related calculations here and 
there, you won't use numpy (unless you had experience with it before).

My opinion on this: numeric operations in Python (and any general purpose 
language) should be as fast as we can make them.


> Python 2 is much faster than Python 3 on any kind of numeric
> calculations.

> Actually, it shouldn't really be faster on FP calculations, since the
> float object hasn't changed (as opposed to int/long). So I'm skeptical
> of FP-heavy code that would have been made slower by Python 3 (unless
> there's also integer handling in that, e.g. indexing).

But it is faster.  That's visible on many benchmarks.  Even simple timeit 
oneliners can show that.  Probably it's because that such benchmarks usually 
combine floats and ints, i.e. "2 * smth" instead of "2.0 * smth".
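The int/float mixing described here ("2 * smth" instead of "2.0 * smth") is easy to time with timeit (a sketch; absolute numbers vary by machine):

```python
import timeit

# Multiplying a float by an int literal vs. a float literal: the former
# first tries the int operand's multiply, which returns NotImplemented
# for a float argument before the float path is taken.
setup = "x = 1.5"
t_int = min(timeit.repeat("2 * x", setup=setup, number=1_000_000, repeat=5))
t_float = min(timeit.repeat("2.0 * x", setup=setup, number=1_000_000, repeat=5))
print(f"2 * x: {t_int:.4f}s   2.0 * x: {t_float:.4f}s")
```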

--




[issue26284] Fix telco benchmark

2016-02-04 Thread Stefan Krah

New submission from Stefan Krah:

The telco benchmark is unstable. It needs some of Victor's changes from #26275 
and probably a larger data set:

http://speleotrove.com/decimal/expon180-1e6b.zip is too big for
_pydecimal, but the one that is used is probably too small for
_decimal.

--
components: Benchmarks
messages: 259569
nosy: brett.cannon, haypo, pitrou, skrah, yselivanov
priority: normal
severity: normal
stage: needs patch
status: open
title: Fix telco benchmark
type: behavior
versions: Python 3.6




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

> 
> Stefan Krah added the comment:
> 
> It's instructive to run ./python Modules/_decimal/tests/bench.py (Hit Ctrl-C 
> after the first cdecimal result, 5 repetitions or so).
> 
> fastint2.patch speeds up floats enormously and slows down decimal by 6%.
> fastint_alt.patch slows down float *and* decimal (5% or so).
> 
> Overall the status quo isn't that bad, but I concede that float benchmarks 
> like that are useful for PR.
> 

Thanks Stefan! I'll update my patch to include Serhiy's ideas. The fact that 
fastint_alt slows down floats *and* decimals is not acceptable.

I'm all for keeping cpython and ceval loop simple, but we should not pass on 
opportunities to improve some things in a significant way.

--




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Alecsandru Patrascu

Changes by Alecsandru Patrascu :


Added file: http://bugs.python.org/file41806/cpython3-deadcode-v01.patch




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Alecsandru Patrascu

New submission from Alecsandru Patrascu:

Hi all,

This is Alecsandru from the Dynamic Scripting Languages Optimization Team at 
Intel Corporation. I would like to submit a patch that enables garbage 
collection of unused input sections from the CPython2 and CPython3 binaries, by 
using the "--gc-sections" linker flag, which decides which input sections are 
used by examining symbols and relocations. In order for this to work, GCC must 
place each function or data item into its own section in the output file, thus 
dedicated flags are used. With this technique, an average of 1% is gained in 
both interpreters, with a few small regressions.

Steps:
==
1. Get the CPython source codes
hg clone https://hg.python.org/cpython cpython
cd cpython
hg update 2.7 (for CPython2)

2. Build the binary
a) Default:
./configure
make

b) Unused input sections patch
Copy the attached patch files
hg import --no-commit cpython3-deadcode-v01.patch (for CPython3)
hg import --no-commit cpython2-deadcode-v01.patch (for CPython2)
./configure
make


Hardware and OS Configuration
=
Hardware:   Intel XEON (Haswell-EP) 18 Cores

BIOS settings:  Intel Turbo Boost Technology: false
Hyper-Threading: false  

OS: Ubuntu 14.04.3 LTS Server

OS configuration:   Address Space Layout Randomization (ASLR) disabled to
                    reduce run-to-run variation by
                    echo 0 > /proc/sys/kernel/randomize_va_space
                    CPU frequency set fixed at 2.6GHz

GCC version:GCC version 4.9.2

Benchmark:  Grand Unified Python Benchmark from 
https://hg.python.org/benchmarks/


Measurements and Results

CPython2 and CPython3 sample results, measured using GUPB on a Haswell 
platform, can be viewed in Table 1 and 2. On the first column (Benchmark) you 
can see the benchmark name and on the second (%S) the speedup compared with the 
default version; a higher value is better.

Table 1. CPython3 results:
Benchmark             %S
------------------------
telco                 11
etree_parse            7
call_simple            6
etree_iterparse        5
regex_v8               4
meteor_contest         3
etree_process          3
call_method_unknown    3
json_dump_v2           3
formatted_logging      2
hexiom2                2
chaos                  2
richards               2
django_v3              2
nbody                  2
etree_generate         2
pickle_list            1
go                     1
nqueens                1
call_method            1
mako_v2                1
raytrace               1
chameleon_v2           1
silent_logging         0
fastunpickle           0
2to3                   0
float                  0
regex_effbot           0
pidigits               0
json_load              0
simple_logging         0
normal_startup         0
startup_nosite         0
fastpickle             0
tornado_http           0
regex_compile          0
fannkuch               0
spectral_norm          0
pickle_dict            0
unpickle_list          0
call_method_slots      0
pathlib               -2
unpack_sequence       -2

Table 2. CPython2 results:
Benchmark             %S
------------------------
simple_logging         4
formatted_logging      3
slowpickle             2
silent_logging         2
pickle_dict            1
chameleon_v2           1
hg_startup             1
pickle_list            1
call_method_unknown    1
pidigits               1
regex_effbot           1
regex_v8               1
html5lib               0
normal_startup         0
regex_compile          0
etree_parse            0
spambayes              0
html5lib_warmup        0
unpack_sequence        0
richards               0
rietveld               0
startup_nosite         0
raytrace               0
etree_iterparse        0
json_dump_v2           0
fastpickle             0
slowspitfire           0
slowunpickle           0
call_simple            0
float                  0
2to3                   0
bzr_startup            0
json_load              0
hexiom2                0
chaos                  0
unpickle_list          0
call_method_slots      0
tornado_http           0
fastunpickle           0
etree_process          0
spectral_norm          0
meteor_contest         0
pybench                0
go                     0
etree_generate         0
mako_v2                0
django_v3              0
fannkuch               0
nbody                  0
nqueens                0
telco                 -1
call_method           -2
pathlib               -3

Thank you,
Alecsandru

--
components: Build
files: cpython2-deadcode-v01.patch
keywords: patch
messages: 259572
nosy: alecsandru.patrascu
priority: normal
severity: normal
status: open
title: Garbage collection of unused input sections from CPython binaries
type: performance
versions: Python 2.7, Python 3.6
Added file: http://bugs.python.org/file41805/cpython2-deadcode-v01.patch


[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread SilentGhost

Changes by SilentGhost :


--
nosy: +benjamin.peterson, brett.cannon, haypo, skrah, yselivanov




[issue25660] tabs don't work correctly in python repl

2016-02-04 Thread Berker Peksag

Berker Peksag added the comment:

Windows issue is a blocker so I suggest reverting 64417e7a1760. We could apply 
the same fix in readline.c for 3.5.x but it would probably be a bit hackish.

--




[issue26186] LazyLoader rejecting use of SourceFileLoader

2016-02-04 Thread Brett Cannon

Brett Cannon added the comment:

My current plan is to simply remove the check in 3.5 -- the docs say it's 
ignored so I don't think it will hurt anything -- and add the warning I 
proposed in 3.6. What do you think, Antoine?

--




[issue26186] LazyLoader rejecting use of SourceFileLoader

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

By the way does this mean the LazyLoader won't work with ExtensionFileLoader? 
That would reduce its usefulness quite a bit.

--




[issue26186] LazyLoader rejecting use of SourceFileLoader

2016-02-04 Thread Brett Cannon

Brett Cannon added the comment:

You're right, it won't work with extension modules based on how 
ExtensionFileLoader is structured.

--




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

I thought this was the usual telco benchmark instability, but with the patch 
_decimal *does* seem to be faster in other areas, too.

--




[issue26186] LazyLoader rejecting use of SourceFileLoader

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Just stumbled on this very issue while trying to use LazyLoader.

--
nosy: +pitrou




[issue26283] zipfile can not handle the path build by os.path.join()

2016-02-04 Thread SilentGhost

Changes by SilentGhost :


--
nosy: +alanmcintyre, serhiy.storchaka, twouters
versions: +Python 3.6 -Python 3.4




[issue26271] freeze.py makefile uses the wrong flags variables

2016-02-04 Thread SilentGhost

Changes by SilentGhost :


--
nosy: +twouters




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

It is easy to extend fastint_alt.patch to support floats too. Here is a new patch.

> It's instructive to run ./python Modules/_decimal/tests/bench.py (Hit Ctrl-C 
> after the first cdecimal result, 5 repetitions or so).

Note that this benchmark is not very stable. I ran it a few times and the 
difference between runs was about 20%.

--
Added file: http://bugs.python.org/file41807/fastintfloat_alt.patch




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

I've never seen 20% fluctuation in that benchmark between runs. The benchmark 
is very stable if you take the average of 10 runs.

--




[issue21955] ceval.c: implement fast path for integers with a single digit

2016-02-04 Thread Stefan Krah

Stefan Krah added the comment:

It's instructive to run ./python Modules/_decimal/tests/bench.py (Hit Ctrl-C 
after the first cdecimal result, 5 repetitions or so).

fastint2.patch speeds up floats enormously and slows down decimal by 6%.
fastint_alt.patch slows down float *and* decimal (5% or so).

Overall the status quo isn't that bad, but I concede that float benchmarks like 
that are useful for PR.

--
nosy: +skrah




[issue21328] Resize doesn't change reported length on create_string_buffer()

2016-02-04 Thread Gedai Tamás Bence

Gedai Tamás Bence added the comment:

I found out that if you modify the resize function in Modules/_ctypes/callproc.c 
so that it sets obj->b_length to the new size, the len function returns the 
correct value.

To be able to provide a proper patch, I'd like to look into len's 
implementation, can someone tell me where to look for it?
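The mismatch under discussion is easy to demonstrate: ctypes.resize updates the size reported by sizeof, while len keeps returning the array length the buffer's type was created with:

```python
import ctypes

buf = ctypes.create_string_buffer(10)
assert len(buf) == 10 and ctypes.sizeof(buf) == 10

ctypes.resize(buf, 20)           # grows the underlying buffer (b_size)
assert ctypes.sizeof(buf) == 20  # sizeof reflects the resize...
assert len(buf) == 10            # ...but len still uses the fixed array length
```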

--
nosy: +beng94




[issue26275] perf.py: calibrate benchmarks using time, not using a fixed number of iterations

2016-02-04 Thread Brett Cannon

Brett Cannon added the comment:

What would happen if we shifted to counting the number of executions within a 
set amount of time instead of how fast a single execution occurred? I believe 
some JavaScript benchmarks started to do this about a decade ago when they 
realized CPUs were getting so fast that older benchmarks were completing too 
quickly to be reliably measured. This also would allow one to have a very 
strong notion of how long a benchmark run would take based on the number of 
iterations and what time length bucket a benchmark was placed in (i.e., for 
microbenchmarks we could say a second while for longer running benchmarks we 
can increase that threshold). And it won't hurt benchmark comparisons since we 
have always done relative comparisons rather than absolute ones.
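A sketch of the counting scheme described above (function names are illustrative): run the workload until a fixed time budget expires and report iterations per second, instead of timing a fixed iteration count:

```python
import time

def runs_per_second(func, budget=1.0):
    """Count completed calls of `func` within `budget` seconds and return
    the rate; faster code yields a higher count, and the total benchmark
    duration stays near the budget regardless of how fast the CPU is."""
    count = 0
    deadline = time.perf_counter() + budget
    while time.perf_counter() < deadline:
        func()
        count += 1
    return count / budget

rate = runs_per_second(lambda: sum(range(1000)), budget=0.2)
print(f"{rate:.0f} runs/sec")
```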

--




[issue26186] LazyLoader rejecting use of SourceFileLoader

2016-02-04 Thread Antoine Pitrou

Antoine Pitrou added the comment:

I don't know what the impact of the error is, but it seems like at least the 
default loader classes should be able to work with LazyLoader...
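For reference, the lazy-import recipe from the importlib documentation shows the intended pairing of LazyLoader with a default loader; a sketch using a pure-Python stdlib module (whose loader is SourceFileLoader), on a Python where the check does not reject it:

```python
import importlib.util
import sys

def lazy_import(name):
    """Wrap the found loader in LazyLoader so the module body runs only
    on first attribute access (the importlib docs' lazy-import recipe)."""
    spec = importlib.util.find_spec(name)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module

# colorsys is a pure-Python module, so its loader is SourceFileLoader.
colorsys = lazy_import("colorsys")
assert colorsys.rgb_to_hsv(1.0, 0.0, 0.0) == (0.0, 1.0, 1.0)  # triggers load
```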

--




[issue25660] tabs don't work correctly in python repl

2016-02-04 Thread Yury Selivanov

Yury Selivanov added the comment:

Everything should be OK now (both broken tests and using rlcompleter on 
Windows).  Please confirm.

--




[issue6953] readline documentation needs work

2016-02-04 Thread Berker Peksag

Berker Peksag added the comment:

Looks good to me. I left a comment on Rietveld.

--
nosy: +berker.peksag
versions:  -Python 3.4




[issue26133] asyncio: ugly error related to signal handlers at exit if the loop is not closed explicitly

2016-02-04 Thread Hans Lellelid

Hans Lellelid added the comment:

FWIW, I am experiencing the issue described here with Python 3.5.1.

--
nosy: +Hans Lellelid




[issue26285] Garbage collection of unused input sections from CPython binaries

2016-02-04 Thread Alecsandru Patrascu

Alecsandru Patrascu added the comment:

I realize now that I should have explained the background of this patch a bit 
more. I'll do that now, so that everyone is clear about the effect of these 
flags.

This issue was revealed after running the coverage target over various 
workloads, for both CPython2 and CPython3. After running, it can be observed 
that there are functions in the interpreter that are not called at all over the 
lifespan of the interpreter. Even more, these functions occupy space in the 
resulting binary file, and the CPU is forced to jump to longer offsets than it 
is required. Furthermore, for production level binaries, it is a good idea to 
remove these stubs, as they bring no benefit. Now, in order to do this, in the 
first step, every function or data item must exist in its own section (and the 
flags -ffunction-sections and -fdata-sections come to help in GCC). In the 
second step, the linker comes into play and because it has the entire picture 
of every piece of data or function, it is able to see if there are functions 
that are never called for the current build (and the flag --gc-sections come to 
help).

This functionality is not unique or new and are used by default in other 
interpreters, such as V8/Node.JS in their Release target, to achieve exactly 
the same goal. Another example for behind the scene usage of this functionality 
is the Microsoft's compiler, which does it automatically in their 
interprocedural optimization phase.

To compress all of the above, the main reason for this speedup is the reduction 
of the code path length and having the useful functions close together, so that 
the CPU will be able to prefetch them in advance and use them instead of 
throwing them away because they are not used.

--



