[issue25638] Verify the etree_parse and etree_iterparse benchmarks are working appropriately

2015-12-09 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 1fe904420c20 by Serhiy Storchaka in branch 'default':
Issue #25638: Optimized ElementTree parsing; it is now 10% faster.
https://hg.python.org/cpython/rev/1fe904420c20

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23948] Deprecate os.kill() on Windows

2015-12-09 Thread Eryk Sun

Eryk Sun added the comment:

The signal module switched to using an enum for signal values: 

>>> print(*map(repr, sorted(signal.Signals)), sep='\n')
We can use the console API when passed the CTRL_C_EVENT or CTRL_BREAK_EVENT 
enum. It may be easier to implement this version of os.kill in Python code 
instead of C. We can add the necessary WinAPI functions to the _winapi module.

BTW, I don't think it would be useful to implement the POSIX signal 0 test. 
Windows readily recycles process and thread IDs. These values are allocated out 
of the system's PspCidTable handle table. When a thread or process exits, the 
associated handle table entry is added to a freelist to be reused. So checking 
whether a process exists with a given PID is all but meaningless. Maybe the 
process you're interested in died and a new process was assigned the same ID.
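For contrast, the POSIX "signal 0" probe being argued against can be sketched as follows (a minimal illustration; `pid_exists` is a made-up helper name, not a stdlib function), and it is racy for exactly the PID-recycling reason described above:

```python
import os

def pid_exists(pid):
    """POSIX-only liveness probe via 'signal 0' (illustrative helper).

    Even when this returns True, the PID may already belong to a
    different, recycled process by the time you act on the answer.
    """
    try:
        os.kill(pid, 0)        # signal 0: nothing is sent, only an existence/permission check
    except ProcessLookupError:
        return False           # no such process
    except PermissionError:
        return True            # exists, but owned by another user
    return True

print(pid_exists(os.getpid()))
```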

To back that up, the following local kernel debugging session demonstrates that 
the kernel's PspCidTable is implemented as a handle table.

Python 3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37)
[MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.getpid()
3004
>>> os.system('kd -kl')

Microsoft (R) Windows Debugger Version 10.0.10240.9 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.

[...]

lkd> ;as pid 3004
lkd> ;as cidtab ((nt!_HANDLE_TABLE *)@@(poi(nt!PspCidTable)))
lkd> ;as tables ((void **)(${cidtab}->TableCode & ~0x11UI64))
lkd> ;as tabnum ((${pid} / 4) / 256)
lkd> ;as eindex ((${pid} / 4) % 256)
lkd> ;as hentry ((nt!_HANDLE_TABLE_ENTRY *)${tables}[${tabnum}] + ${eindex})
lkd> ;as ptrbit ((${hentry}->ObjectPointerBits << 4) | (0xUI64 << 48))
lkd> ;as eprobj ((nt!_EPROCESS *)${ptrbit})

eprobj evaluates to the kernel process object that's assigned to the handle 
table entry based on the given PID. To complete the circle, check that the 
process object really does have this PID:

lkd> ?? ${eprobj}->UniqueProcessId
void * 0x`0bbc
lkd> ?? 0xbbc
int 0n3004

and that the image filename is python.exe:

lkd> ?? (char *)${eprobj}->ImageFileName
char * 0xe001`647564c8
 "python.exe"

--
stage:  -> needs patch
versions: +Python 3.6 -Python 3.5




[issue25493] warnings.warn: wrong stacklevel causes import of local file "sys"

2015-12-09 Thread John Mark Vandenberg

John Mark Vandenberg added the comment:

It seems like there is already sufficient detection of invalid stack levels in 
warnings.warn, and one of the code paths does `module = ""` and later 
another does `filename = module`, so `filename` can be intentionally junk data, 
which will be passed to `linecache`.

I expect this could be satisfactorily resolved by warn() setting filename = 
'', and `formatwarning` not invoking linecache when the 
filename is '', '', etc., or at least ignoring the 
exception from linecache when the filename is .

Looking forward, why not let Python 3.6's warn() behave 'better' when the 
stacklevel is invalid?

e.g. it could raise ValueError (ouch, but 'correct'), or it could reset the 
stacklevel to 1 (a sensible fallback) and issue an auxiliary SyntaxWarning to 
inform everyone that the requested stacklevel was incorrect.
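The current fallback is easy to observe (a sketch; the exact fallback filename is a CPython implementation detail): when stacklevel runs past the real stack, the recorded warning's filename is junk data as far as linecache is concerned:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # stacklevel far deeper than the actual call stack
    warnings.warn("probe", stacklevel=999)

# CPython substitutes module "sys" when it runs out of frames, so the
# warning's filename is not a real file on disk.
print(caught[0].filename)
```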

--
nosy: +John.Mark.Vandenberg




[issue25788] fileinput.hook_encoded has no way to pass arguments to codecs

2015-12-09 Thread Swati Jaiswal

Swati Jaiswal added the comment:

I want to work on this issue. @lac, can you please help? I searched but 
couldn't find the related files. Where can I find the code for this?

--
nosy: +curioswati




[issue25274] sys.setrecursionlimit() must fail if the current recursion depth is higher than the new low-water mark

2015-12-09 Thread A. Jesse Jiryu Davis

Changes by A. Jesse Jiryu Davis :


--
nosy: +emptysquare




[issue5501] Update multiprocessing docs re: freeze_support

2015-12-09 Thread Camilla Montonen

Camilla Montonen added the comment:

Hi everyone!
It has been a while since this issue has been updated. 
The addition requested by Brandon Corfman is a simple clarification of what 
happens when the freeze_support function from multiprocessing is called on 
non-Windows platforms (if I understand correctly, the primary purpose of 
freeze_support is to help in creating Windows executables). 
Although there is already some documentation which suggests that no adverse 
effects will occur when running code that calls freeze_support on OS X and 
Unix, I added a sentence in the docs that will hopefully clarify this. 
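The behavior being documented can be checked directly (a minimal sketch): on non-Windows platforms, and on Windows when the program is not frozen, the call simply returns and execution continues:

```python
from multiprocessing import freeze_support

# No-op here: we are neither frozen nor (necessarily) on Windows, so
# execution just continues past the call.
freeze_support()
print("continued normally")
```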

Patch attached.

--
keywords: +patch
nosy: +Winterflower
Added file: http://bugs.python.org/file41278/issue5501-v1.patch




[issue25768] compileall functions do not document or test return values

2015-12-09 Thread Nicholas Chammas

Nicholas Chammas added the comment:

I've added the tests as we discussed. A couple of comments:

* I found it difficult to reuse the existing setUp() code so had to essentially 
repeat a bunch of very similar code to create "bad" files. Let me know if you 
think there is a better way to do this.
* I'm having trouble with the test for compile_path(). Specifically, it doesn't 
seem to actually use the value for skip_curdir. Do you understand why?
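For reference, the return-value behavior under test can be sketched like this (file names are made up; compile_file returns a true value on success and a false one on failure, and quiet=2 suppresses output):

```python
import compileall
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    good = os.path.join(d, "good.py")
    bad = os.path.join(d, "bad.py")
    with open(good, "w") as f:
        f.write("x = 1\n")
    with open(bad, "w") as f:
        f.write("def broken(:\n")    # deliberate SyntaxError

    ok = compileall.compile_file(good, quiet=2)
    failed = compileall.compile_file(bad, quiet=2)

print(bool(ok), bool(failed))
```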

--
Added file: http://bugs.python.org/file41277/compileall.patch




[issue25711] Rewrite zipimport from scratch

2015-12-09 Thread Rose Ames

Rose Ames added the comment:

Serhiy, how far along are you on this?  I have a wip from this summer that I 
could finish over the holidays.

--




[issue25831] dbm.gnu leaks file descriptors on .reorganize()

2015-12-09 Thread Isaac Schwabacher

Isaac Schwabacher added the comment:

Further searching reveals this as a dupe of #13947. Closing.

--
status: open -> closed




[issue25831] dbm.gnu leaks file descriptors on .reorganize()

2015-12-09 Thread Isaac Schwabacher

New submission from Isaac Schwabacher:

I found this because test_dbm_gnu fails on NFS; my initial thought was that the 
test was failing to close a file somewhere (similarly to #20876), but a little 
digging suggested that the problem is in dbm.gnu itself:

$ ./python
Python 3.5.1 (default, Dec  9 2015, 11:55:23) 
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dbm.gnu
>>> import subprocess
>>> db = dbm.gnu.open('foo', 'c')
>>> db.reorganize()
>>> db.close()
>>> subprocess.check_call(['lsof', 'foo'])
COMMAND PIDUSER  FD   TYPE DEVICE SIZE/OFFNODE NAME
python  2302377 schwabacher mem-W  REG   0,5298304 25833923756 foo
0

A quick look at _gdbmmodule.c makes clear that the problem is upstream, but 
their bug tracker has 9 total entries... The best bet might just be to skip the 
test on NFS.

--
components: Library (Lib)
messages: 256159
nosy: ischwabacher
priority: normal
severity: normal
status: open
title: dbm.gnu leaks file descriptors on .reorganize()
type: behavior
versions: Python 3.5




[issue25638] Verify the etree_parse and etree_iterparse benchmarks are working appropriately

2015-12-09 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Following patch speeds up ElementTree parsing (the result of the etree parse 
benchmark is improved by 10%). Actually it restores 2.7 code and avoids 
creating an empty dict for attributes if not needed.
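The shape of the measurement is roughly this (an illustrative micro-benchmark, not the official etree_parse harness); elements without attributes are where the empty attrib dict used to be allocated:

```python
import timeit
import xml.etree.ElementTree as ET

# Half the elements carry an attribute, half don't; the patch avoids
# allocating an empty attrib dict for the second half.
doc = "<root>" + "<item a='1'/>" * 1000 + "<item/>" * 1000 + "</root>"

root = ET.fromstring(doc)
elapsed = timeit.timeit(lambda: ET.fromstring(doc), number=50)
print(len(root), elapsed)
```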

--
stage:  -> patch review
Added file: http://bugs.python.org/file41276/etree_start_handler_no_attrib.patch




[issue25828] PyCode_Optimize() (peephole optimizer) doesn't handle KeyboardInterrupt correctly

2015-12-09 Thread Raymond Hettinger

Changes by Raymond Hettinger :


--
nosy: +pitrou




[issue25829] Mixing multiprocessing pool and subprocess may create zombie process, and cause program to hang.

2015-12-09 Thread R. David Murray

R. David Murray added the comment:

Oh, I think there is a solution, though: don't use fork, use spawn.
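A minimal sketch of that suggestion (function names are illustrative): the 'spawn' start method launches workers from a fresh interpreter instead of fork(), so they cannot pick up pipe descriptors from an in-flight Popen:

```python
import multiprocessing as mp

def square(x):
    return x * x

def run_demo():
    # Workers started via 'spawn' do not inherit the parent's open fds
    # the way fork()ed workers do.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        return pool.map(square, range(4))

if __name__ == "__main__":
    print(run_demo())
```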

--




[issue25829] Mixing multiprocessing pool and subprocess may create zombie process, and cause program to hang.

2015-12-09 Thread R. David Murray

R. David Murray added the comment:

Well, it sounds more like a problem of posix fork semantics.  What you need is 
to prevent workers from being spawned while the subprocess is running, which 
the multiprocessing API may not support (I'm not that familiar with it), and 
which might or might not work for your application in any case depending on 
what you are using each one for.

I'm not sure there's much Python can do to mitigate this problem, but I'll 
leave answering that to the experts :)

--
nosy: +gps, r.david.murray, sbt




[issue25825] AIX shared library extension modules installation broken

2015-12-09 Thread David Edelsohn

David Edelsohn added the comment:

$(prefix) and $(exec_prefix) result in the same path on AIX, so it does not 
matter in practice, although the semantics are different.

# Install prefix for architecture-dependent files
exec_prefix=${prefix}

python.exp is not architecture-dependent, although it is only useful on AIX 
targets.  It is essentially equivalent to a list of symbols with ELF global, 
non-hidden visibility.  It is less confusing if the list is co-located with the 
scripts that use it.

LIBPL is fine with me.  configure.ac and Makefile.pre.in must match.

--




[issue25830] _TypeAlias: Discrepancy between docstring and behavior

2015-12-09 Thread flying sheep

New submission from flying sheep:

_TypeAlias claims in its docstring that “It can be used in instance and 
subclass checks”, yet promptly contradicts itself if you try it: “Type aliases 
cannot be used with isinstance().”

It would be useful to either document (and therefore bless) type_impl, or make 
it actually work with isinstance() (probably by delegating to type_impl).

--
components: Library (Lib)
messages: 256154
nosy: flying sheep
priority: normal
severity: normal
status: open
title: _TypeAlias: Discrepancy between docstring and behavior
type: behavior
versions: Python 3.5, Python 3.6




[issue25787] Add an explanation what happens with subprocess parent and child processes when signals are sent

2015-12-09 Thread Martin Panter

Martin Panter added the comment:

I think Karl’s original report was mainly about normal signal handling in the 
long term in the child, i.e. behaviour after exec() succeeds.

Ian, your problem sounds more like a bug or unfortunate quirk of the window 
between fork() and exec(). Maybe it is possible to solve it by blocking or 
resetting signal handlers, or using posix_spawn() (Issue 20104).
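One mitigation along those lines can be sketched with preexec_fn (POSIX-only; reset_sigint is an illustrative name): reset the disposition in the child after fork() but before exec():

```python
import signal
import subprocess

def reset_sigint():
    # Runs in the child between fork() and exec(); an ignored SIGINT
    # would otherwise survive exec() and be inherited by the new program.
    signal.signal(signal.SIGINT, signal.SIG_DFL)

proc = subprocess.run(["true"], preexec_fn=reset_sigint)
print(proc.returncode)
```

Note that preexec_fn has its own caveats (it is unsafe in multithreaded parents), which is part of why posix_spawn() is attractive.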

--




[issue25823] Speed-up oparg decoding on little-endian machines

2015-12-09 Thread Mark Dickinson

Mark Dickinson added the comment:

Benchmarks would help here: if none of the patches gives a significant 
real-world speedup, then the whole UB discussion is moot.

--




[issue25823] Speed-up oparg decoding on little-endian machines

2015-12-09 Thread Mark Dickinson

Mark Dickinson added the comment:

> I think the following patch doesn't introduce undefined behavior.

Agreed. As I recall, the memcpy trick is a fairly standard way around this 
issue, for compilers that are smart enough to compile away the actual memcpy 
call.

> I don't know whether the patch can cause a performance regression on other 
> platforms or compilers.

Me neither.

--




[issue25829] Mixing multiprocessing pool and subprocess may create zombie process, and cause program to hang.

2015-12-09 Thread Ami Koren

New submission from Ami Koren:

Happens on Linux (Debian), Linux version 3.16.0-4-amd64 .
Seems like a multiprocessing issue.

When I use both a multiprocessing pool and subprocess somewhere in the same 
Python program, sometimes the subprocess becomes a 'zombie', and the parent 
waits for it forever.

Reproduce:
Run the attached script (I ran it on both Python 3.4 and 3.5) and wait (up to 
a minute on my computer). Eventually, the script will hang (wait forever).

After it hangs:
ps -ef | grep "hang_multiprocess\|ls"

You should see now the "[ls] " process - zombie.


Analysis:
Players:
- Parent process
- Subprocess Child - forked by the parent using subprocess.Popen()
- Handle_workers thread - multiprocessing thread responsible for verifying that 
all workers are OK, and creating them if not.
- Multiprocessing Worker - forked by multiprocessing, either in the 
handle_workers thread context or in the main thread context.

The problem, in a nutshell, is that the Handle_workers thread forks a Worker 
while the Subprocess Child is being created.
This causes one of the Child's pipes to be 'copied' into the Worker. When the 
Subprocess Child finishes, the pipe is still alive (in the Worker), so the 
Parent process waits forever for the pipe to close. The Child turns into a 
zombie because the Parent never reaches the communicate/wait line.

In more detail:
- The problematic line in subprocess is in subprocess.py->_execute_child(), 
before the 'while True:' loop, where the errpipe_read pipe is read.
- The entry point in multiprocessing is multiprocessing/pool.py->_handle_workers(). 
There the thread sleeps for 0.1 s and then tries to create (=fork) new workers.

The Handle_workers thread 'copies' errpipe_read into the forked Worker, hence 
the pipe never gets closed.

To me, it seems like a multiprocessing issue: forking from a thread in the 
multiprocessing module is the cause of this mess.

I'm a newbie at Python (first bug filed), so please be patient if I missed 
anything or jumped to unfounded conclusions.

--
components: Extension Modules
files: hang_multiprocess_subprocess.py
messages: 256150
nosy: amiko...@yahoo.com
priority: normal
severity: normal
status: open
title: Mixing multiprocessing pool and subprocess may create zombie process, 
and cause program to hang.
type: behavior
versions: Python 3.4, Python 3.5
Added file: http://bugs.python.org/file41275/hang_multiprocess_subprocess.py




[issue25823] Speed-up oparg decoding on little-endian machines

2015-12-09 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

I think the following patch doesn't introduce undefined behavior. With this 
patch GCC on 32-bit i386 Linux produces the same code as for a simple unsigned 
short read.

I don't know whether the benefit is worth such complication, or whether the 
patch can cause a performance regression on other platforms or compilers.
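For readers following along from Python, the layout being read is just a little-endian unsigned short after the opcode byte; the C patch's memcpy trick computes the same value as this sketch (byte values here are arbitrary):

```python
import struct

# One 3.x-era instruction: an opcode byte followed by a 2-byte
# little-endian oparg.
code = bytes([0x90, 0x34, 0x12])

# struct does the unaligned little-endian read that the C code makes
# safe via memcpy.
oparg = struct.unpack_from("<H", code, 1)[0]
print(hex(oparg))
```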

--
Added file: http://bugs.python.org/file41274/improve_arg_decoding3.diff




[issue20032] asyncio.Future.set_exception() creates a reference cycle

2015-12-09 Thread Jean-Louis Fuchs

Jean-Louis Fuchs added the comment:

Just to let you know I hit this problem in my code, simplified version:

https://gist.github.com/ganwell/ce3718e5119c6e7e9b3e

Of course it is only a problem because I am a ref-counting stickler.

--
nosy: +Jean-Louis Fuchs




[issue25787] Add an explanation what happens with subprocess parent and child processes when signals are sent

2015-12-09 Thread Ian Macartney

Ian Macartney added the comment:

I don't have much experience with what should and shouldn't be in the Python 
docs; however, I was recently bitten by a subtlety in signal/subprocess that 
might be worth documenting.

Anything written to stdout in a signal handler could end up in the stdout of a 
process returned by Popen, if stdout gets flushed in the signal handler during 
a critical part of the Popen call. Furthermore, any output buffered before 
calling Popen could also be flushed by the signal handler, showing up in the 
child process's stdout. The same goes for stderr.

On 2.7.7 at least, the window where this can happen is between lines 1268-1290 
in subprocess.py, where the child process has duplicated stdout, but hasn't yet 
called execvp.

This is a result of signal inheritance that caught me off guard, and wasn't 
clear to me by reading the documentation.
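The buffered-output duplication is easy to reproduce without signals at all (a POSIX-only sketch; a bare fork() stands in for the window inside Popen): anything sitting in the stdio buffer at fork() time is flushed by both processes:

```python
import os
import subprocess
import sys

# A child interpreter writes 'X' into its block-buffered (non-tty)
# stdout buffer, then forks; both parent and child flush the same
# buffered byte at exit, so it appears twice.
probe = ("import os, sys; sys.stdout.write('X'); "
         "pid = os.fork(); pid and os.waitpid(pid, 0)")
env = {k: v for k, v in os.environ.items() if k != "PYTHONUNBUFFERED"}
out = subprocess.run([sys.executable, "-c", probe],
                     capture_output=True, env=env).stdout
print(out)
```

Calling sys.stdout.flush() before the fork() in the probe makes the duplicate disappear, which is the mitigation for code that must write near a Popen call.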

--
nosy: +Ian Macartney
Added file: http://bugs.python.org/file41273/bug.py




[issue25823] Speed-up oparg decoding on little-endian machines

2015-12-09 Thread Mark Dickinson

Mark Dickinson added the comment:

The latest patch still replaces valid C with code that has undefined behaviour. 
That's not good! Introducing undefined behaviour is something we should avoid 
without a really good reason.

Benchmarks showing dramatic real-world speed improvements from this change 
might count as a good reason, of course. :-)

--




[issue25807] test_multiprocessing_fork.test_mymanager fails and hangs

2015-12-09 Thread SilentGhost

SilentGhost added the comment:

Hm, after doing "make clean" and rebuilding anew I'm not able to reproduce the 
bug myself. I guess I just had a stale version of the module still hanging 
around. Sorry for the mistaken report.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue25828] PyCode_Optimize() (peephole optimizer) doesn't handle KeyboardInterrupt correctly

2015-12-09 Thread STINNER Victor

STINNER Victor added the comment:

FYI.

> Peephole optimizer of Python 2.7 (...) only optimizes 2**100, but not 
> 2**(2**100).

This optimizer is simplistic: it can only replace "LOAD_CONST x; LOAD_CONST y; 
OPERATION" with the result, rewriting the bytecode as "NOP; NOP; NOP; NOP; 
LOAD_CONST z".

So "LOAD_CONST x; LOAD_CONST y; LOAD_CONST z; OPERATION1; OPERATION2" cannot be 
optimized. But it's enough to optimize 1+2+3 or 1*2*3 for example.

Python 3 peephole optimize does better thanks to a better design:
---
changeset:   68375:14205d0fee45
user:Antoine Pitrou 
date:Fri Mar 11 17:27:02 2011 +0100
files:   Lib/test/test_peepholer.py Misc/NEWS Python/peephole.c
description:
Issue #11244: The peephole optimizer is now able to constant-fold
arbitrarily complex expressions.  This also fixes a 3.2 regression where
operations involving negative numbers were not constant-folded.
---

It uses a stack for constants, so it's able to optimize more cases.
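The effect is visible from pure Python (a quick sketch): the whole chain is folded down to a single constant in the compiled code object, rather than stopping after the first pair:

```python
# After the stack-based folding, 1 + 2 + 3 compiles down to the single
# constant 6.
code = compile("1 + 2 + 3", "<demo>", "eval")
print(code.co_consts)
```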

--




[issue25828] PyCode_Optimize() (peephole optimizer) doesn't handle KeyboardInterrupt correctly

2015-12-09 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
components: +Interpreter Core
nosy: +serhiy.storchaka
stage:  -> needs patch




[issue25828] PyCode_Optimize() (peephole optimizer) doesn't handle KeyboardInterrupt correctly

2015-12-09 Thread STINNER Victor

New submission from STINNER Victor:

The peephole optimizer computes 2**(2**100), but if I press CTRL+C (the result 
will probably kill my memory anyway), I get an assertion error (with a Python 
compiled in debug mode).

$ ./python
>>> 2**(2**100)
^C
python: Python/ceval.c:1218: PyEval_EvalFrameEx: Assertion `!PyErr_Occurred()' 
failed.
Abandon (core dumped)

fold_binops_on_constants() returns 0 with an exception (KeyboardInterrupt) 
raised. The problem is in the caller which doesn't handle the exception 
properly:

if (h >= 0 &&
ISBASICBLOCK(blocks, h, i-h+1)  &&
fold_binops_on_constants(&codestr[i], consts, 
CONST_STACK_LASTN(2))) {
i -= 2;
memset(&codestr[h], NOP, i - h);
assert(codestr[i] == LOAD_CONST);
CONST_STACK_POP(2);
CONST_STACK_PUSH_OP(i);
}

There is probably the same error on fold_unaryops_on_constants().


Python 2.7 looks to behave correctly:

$ python2.7
>>> 2**(2**100)
^C
Traceback (most recent call last):
  File "", line 1, in 
KeyboardInterrupt


But in fact Python 2.7 is much worse :-D The peephole optimizer of Python 2.7 
clears *all* exceptions (!), and it only optimizes 2**100, not 2**(2**100). 
That's why the bug is not easily reproduced on Python 2.7. 
fold_binops_on_constants():

if (newconst == NULL) {
if(!PyErr_ExceptionMatches(PyExc_KeyboardInterrupt))
PyErr_Clear();
return 0;
}

--
messages: 256143
nosy: haypo
priority: normal
severity: normal
status: open
title: PyCode_Optimize() (peephole optimizer) doesn't handle KeyboardInterrupt 
correctly
type: crash
versions: Python 2.7, Python 3.5, Python 3.6
