[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

Yep, my current goal is to see if I can come up with a surgical fix that solves 
the established problem with the bad interaction between cells and trace 
functions without any unintended consequences for either CPython or other 
interpreters.

That means the only behaviours I actually *want* to change are those that are 
pretty clearly quirky at best, and outright bugs at worst:

- registering a trace function can result in closure state being reset 
inappropriately, even when none of the code involved accesses locals() or 
f_locals
- registering a trace function may lead to changes to locals() made outside the 
trace function nevertheless being written back to the actual frame state

Establishing a write-through proxy for cell references is definitely fine - 
allowing shared access to closure state is the whole reason we have cell 
objects in the first place.

The more complex case is with regular locals since:

- they used to be regular dictionaries in 1.x, but one of the early 2.x 
releases deliberately changed their semantics with the introduction of fast 
locals
- people *do* sometimes treat the result of locals() at function scope as a 
regular dictionary, and hence they don't always copy it before mutating it 
and/or returning a reference to it
- f_locals is accessible from outside the running function/generator/coroutine, 
so compilers can't just key off calls to locals() inside the function to decide 
whether or not they can see all changes to local variables
- looking for calls to locals() at compile time is dubious anyway, since the 
builtin may have been aliased under a different name (we do something like that 
for zero-arg super(), but that breaks far more obviously when the use of name 
aliasing prevents the compiler from detecting that you need a __class__ 
reference compiled in)
- trace functions nevertheless still need to be able to write their changes 
back to the function locals in order for debuggers to support value injection

My latest design concept for the trace proxy thus looks like this (I've been 
iterating on design ideas to try to reduce the potential memory impact arising 
from merely installing a trace function):

1. The core proxy behaviour would be akin to wrapping f_locals in 
types.MappingProxyType (and I expect the new proxy will be a subclass of that 
existing type, with the tentative name "_FunctionLocalsProxy")

2. The currently planned differences consist of the following:
- setting and deleting items is supported
- holding a reference back to the originating frame (to allow for lazy 
initialisation of the extra state only if the local variables are actually 
mutated through the proxy)
- when a cell variable is mutated through the proxy, the cell gets added to a 
lazily constructed mapping from names to cells (if it isn't already there), and 
the value in the cell is also modified
- when a local variable is mutated through the proxy, it gets added to a set of 
"pending writebacks"

The post-traceback frame update in the trampoline function would then write 
back only the locals registered in "pending writebacks" (i.e. only those 
changes made through the proxy, *not* any incidental changes made directly to 
the result of locals()), which would allow this change to reduce the potential 
local state manipulation side effects of registering a trace function.
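
To make that bookkeeping concrete, here is a minimal pure-Python sketch of the 
intended behaviour. It is only an illustration of the planned semantics: the real 
proxy would be implemented in C (tentatively as a MappingProxyType subclass), and 
the class name, the explicit `cells` parameter, and the snapshot/pending-writebacks 
attributes below are assumptions of the sketch, not the actual implementation.

class _FunctionLocalsProxySketch:
    """Illustration only: set/delete support, immediate cell write-through,
    and lazy "pending writebacks" bookkeeping for plain locals."""

    def __init__(self, frame, cells):
        # 'cells' stands in for the name -> cell mapping that the C level can
        # see; pure Python cannot extract cells from an arbitrary frame.
        self._frame = frame                      # back-reference to the frame
        self._snapshot = dict(frame.f_locals)    # behaves like a locals() read
        self._cells = cells
        self._pending_writebacks = set()

    def __getitem__(self, name):
        return self._snapshot[name]

    def __setitem__(self, name, value):
        self._snapshot[name] = value
        if name in self._cells:
            # Cell variables are written through immediately (shared state).
            self._cells[name].cell_contents = value   # writable in 3.7+
        else:
            # Plain locals are only written back after the trace call returns.
            self._pending_writebacks.add(name)

    def __delitem__(self, name):
        del self._snapshot[name]
        self._pending_writebacks.add(name)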

If actual implementation shows that this approach faces some technical hurdle 
that makes it infeasible in practice, then I agree it would make sense for us 
to look at alternatives with higher risks of unwanted side effects. However, 
based on what I learned while writing the first draft of PEP 558, I'm currently 
fairly optimistic I'll be able to make the idea work.

--




[issue31757] Tutorial: Fibonacci numbers start with 1, 1

2017-10-13 Thread Raymond Hettinger

Change by Raymond Hettinger :


--
keywords: +patch
pull_requests: +3967
stage:  -> patch review




[issue31672] string.Template should use re.ASCII flag

2017-10-13 Thread INADA Naoki

Change by INADA Naoki :


--
pull_requests: +3966




[issue31672] string.Template should use re.ASCII flag

2017-10-13 Thread INADA Naoki

INADA Naoki  added the comment:


New changeset 7060380d577690a40ebc201c0725076349e977cd by INADA Naoki in branch 
'3.6':
bpo-31672: Fix string.Template accidentally matched non-ASCII identifiers 
(GH-3872)
https://github.com/python/cpython/commit/7060380d577690a40ebc201c0725076349e977cd


--




[issue31757] Tutorial: Fibonacci numbers start with 1, 1

2017-10-13 Thread Raymond Hettinger

Raymond Hettinger  added the comment:

I updated the example in "First Steps Towards Programming" to match the one in 
"Defining Functions" (which is also shown on the home page at www.python.org).

--




[issue31757] Tutorial: Fibonacci numbers start with 1, 1

2017-10-13 Thread Raymond Hettinger

Change by Raymond Hettinger :


--
assignee: docs@python -> rhettinger
nosy: +rhettinger
versions: +Python 3.7 -Python 3.6




[issue31778] ast.literal_eval supports non-literals in Python 3

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

I'm marking this as a documentation issue for now, as the operators that 
literal_eval allows are solely those where constant folding support is needed 
to correctly handle complex and negative numbers (as noted in the original 
post):

```
>>> dis.dis("+1")
  1   0 LOAD_CONST   1 (1)
  2 RETURN_VALUE
>>> dis.dis("-1")
  1   0 LOAD_CONST   1 (-1)
  2 RETURN_VALUE
>>> dis.dis("1+1")
  1   0 LOAD_CONST   1 (2)
  2 RETURN_VALUE
>>> dis.dis("1+1j")
  1   0 LOAD_CONST   2 ((1+1j))
  2 RETURN_VALUE
>>> dis.dis("2017-10-10")
  1   0 LOAD_CONST   3 (1997)
  2 RETURN_VALUE
```

So the key promise that literal_eval makes is that it will not permit arbitrary 
code execution, but the docs should make it clearer that it *does* permit 
constant folding for addition and subtraction in order to handle the full range 
of numeric literals.

If folks want to ensure that the input string *doesn't* include a binary 
operator, then that currently needs to be checked separately with ast.parse:

```
>>> type(ast.parse("2+3").body[0].value) is ast.BinOp
True
>>> type(ast.parse("2017-10-10").body[0].value) is ast.BinOp
True
```

For 3.7+, that check could potentially be encapsulated as an 
"allow_folding=True" keyword-only parameter (where the default gives the 
current behaviour, while "allow_folding=False" disables processing of UnaryOp 
and BinOp), but the docs update is the more immediate need.
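
For illustration, the stricter behaviour could be layered on top of the current one 
today with a helper along these lines (a sketch only; the function name is made up 
and "allow_folding" does not exist yet):

```
import ast

def literal_eval_no_folding(text):
    # Sketch of the allow_folding=False idea: reject any UnaryOp/BinOp node
    # before delegating to ast.literal_eval().
    tree = ast.parse(text, mode='eval')
    for node in ast.walk(tree):
        if isinstance(node, (ast.UnaryOp, ast.BinOp)):
            raise ValueError('operators are not allowed: %r' % text)
    return ast.literal_eval(tree)
```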

--
assignee:  -> docs@python
components: +Documentation
nosy: +docs@python
versions: +Python 3.6




[issue31772] SourceLoader uses stale bytecode in case of equal mtime seconds

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

Aye, I think that check would make the most sense, since the bytecode 
invalidation check is "_r_long(raw_timestamp) != source_mtime" (to allow for 
things like version control operations that send source timestamps backwards).

A test for that could then just mock time.time() to make sure it returns a 
time matching the source mtime, and check that the bytecode isn't written.
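
Roughly along these lines (a sketch of just the mocking part; the actual loader 
call site is left as a comment since the test fixture isn't decided here):

import time
from unittest import mock

source_mtime = int(time.time())   # pretend the source file was written "now"

with mock.patch('time.time', return_value=float(source_mtime)):
    # The loader under test would be invoked here; with the proposed guard it
    # should then skip writing the bytecode file, which the test would assert.
    assert int(time.time()) == source_mtime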

--




[issue31742] Default to emitting FutureWarning for provisional APIs

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

OK, I'll head down the path of creating a new procedural PEP to supersede PEP 
411 (I'll try to get the locals() semantic clarification PEP out of the way 
first, though).

I'll make "Where to put the feature flags?" an open question, as my rationale 
for proposing __main__ was three-fold:

1. In regular scripts, it makes feature flags as easy to set as possible, since 
you can just do "use_provisional_interpreters = True" without any import at all
2. In applications, "import __main__; __main__.use_provisional_interpreters = True" 
isn't markedly more brittle as a process-global state storage location than any 
other module name (as long as the feature flag names are prefixed to minimise 
the risk of name collisions)
3. Using an existing always imported module makes the runtime cost of managing 
the feature flags as close to zero as possible

However, you'd also get most of those benefits with an even lower risk of name 
collisions by co-opting "sys" for the same purpose.

Silencing the warning via the feature flag:

import sys
sys.use_provisional_interpreters = True
import interpreters


Silencing the warning via the warnings module:

from warnings import filterwarnings
filterwarnings("ignore", module="interpreters", category=FutureWarning)
import interpreters

Emitting the warning:

import sys
_feature_flag = f"use_provisional_{__name__}"
if not getattr(sys, _feature_flag, False):
    import warnings
    _provisional_msg = (
        f"The {__name__} module is currently a provisional API - "
        f"see documentation for details. "
        f"Set 'sys.{_feature_flag} = True' before importing the API "
        f"to disable this warning."
    )
    warnings.warn(_provisional_msg, FutureWarning)

--




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Armin Rigo

Armin Rigo  added the comment:

Guido: you must be tired and forgot that locals() is a regular function :-)  
The compiler cannot recognize it reliably.  Moreover, if f_locals can be 
modified outside a tracing hook, then we have the same problem in a 
cross-function way, e.g. if function f1() calls function f2() which does 
sys._getframe(1).f_locals['foo'] = 42.
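
For concreteness, a minimal version of that cross-function case (on current 
CPython, without a trace function, the write to f_locals is not copied back 
into f1's fast locals):

import sys

def f2():
    sys._getframe(1).f_locals['foo'] = 42   # mutate the caller's f_locals dict

def f1():
    foo = 1
    f2()
    return foo

print(f1())   # prints 1 on current CPython: the mutation is silently dropped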

--




[issue31785] Move instruction code from ceval.c to a separate file

2017-10-13 Thread pdox

Change by pdox :


--
title: Move instruction code blocks to separate file -> Move instruction code 
from ceval.c to a separate file




[issue31785] Move instruction code blocks to separate file

2017-10-13 Thread pdox

New submission from pdox :

I'd like to move all instruction code (the code inside a TARGET(NAME) block) 
from Python/ceval.c to a new file, Python/instructions.h. The new file will 
contain only instruction bodies (prefixed by related helper functions/macros).

Eval-context macros (e.g. TARGET, DISPATCH, TOP, PUSH, POP, etc) will not be 
moved to instructions.h, but will be expected to be available (defined by the 
#includer).

ceval.c will define the eval-context macros in the same way, and #include 
"instructions.h", inside the body of _PyEval_EvalFrameDefault. The code emitted 
should remain exactly the same.

The benefit of this change is that it becomes easy to produce alternative 
implementations of EvalFrame which reuse the same instruction code, but with 
changes to the evaluation context or dispatch mechanism. In particular, after 
this change, I hope to experiment with adding a cross-platform 
subroutine-threading code evaluator (for performance testing).

--
components: Interpreter Core
messages: 304370
nosy: pdox
priority: normal
severity: normal
status: open
title: Move instruction code blocks to separate file
type: enhancement
versions: Python 3.7




[issue31622] Make threading.get_ident() return an opaque type

2017-10-13 Thread pdox

pdox  added the comment:

I don't see much enthusiasm or agreement here, so I'm closing for now.

--
resolution:  -> postponed
stage:  -> resolved
status: open -> closed




[issue31780] Using format spec ',x' displays incorrect error message

2017-10-13 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

The formatting part of PEP 515 was implemented in #27080.  Chris Angelico's 
initial patch https://bugs.python.org/file44152/underscores_decimal_only.patch 
was, as the name says, for decimal only, and added "or '_' " to the error 
message.  The next patch added other bases but neglected to remove the 
obsoleted addition.

Chris, can you do a PR?  I don't think any new test is needed.  I would also be 
inclined to skip the news blurb, but since Eric should review and merge, it is 
his call.

--
components: +Interpreter Core
nosy: +Rosuav, terry.reedy
stage:  -> needs patch
versions: +Python 3.7




[issue31778] ast.literal_eval supports non-literals in Python 3

2017-10-13 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

It has been some time since literal_eval literally only evaluated literals.  
'constant_eval' might be a better name now, with the proviso of 'safely, in 
reasonable time'.

>>> from ast import literal_eval as le
>>> le('(1,2,3)')
(1, 2, 3)
>>> le('(1,2, (3,4))')
(1, 2, (3, 4))

I believe there was once a time when a simple tuple would be evaluated, while a 
nested one would not be.

"It is not capable of evaluating arbitrarily complex expressions, for example 
involving operators or indexing."  I do not read this as prohibiting all 
operators, but rather as saying that not all will be accepted.

>>> le(2**2)
...
ValueError: malformed node or string: 4

Exponentiation of ints can take exponential time and can be used for denial of 
service attacks.

>>> le('2017-10-10')
1997

This is correct.  For '2017-10-10' to be a string representing a date, it must 
be quoted as a string in the code.

>>> le("'2017-10-10'")
'2017-10-10'

Rolling back previous enhancements would break existing code, so a deprecation 
period would be required.  But I would be inclined to instead update the doc to 
match the updated code better.  Let's see what others think.

--
nosy: +benjamin.peterson, brett.cannon, ncoghlan, terry.reedy, yselivanov
type: behavior -> enhancement
versions:  -Python 3.4, Python 3.5, Python 3.6




[issue31784] Add time.time_ns(): get time with nanosecond resolution

2017-10-13 Thread STINNER Victor

Change by STINNER Victor :


--
keywords: +patch
pull_requests: +3965
stage:  -> patch review




[issue31784] Add time.time_ns(): get time with nanosecond resolution

2017-10-13 Thread STINNER Victor

New submission from STINNER Victor :

time.time() returns the time as a float, but the conversion to float loses 
precision at nanosecond resolution.

I propose to add a new time.time_ns() function which returns time as an integer 
number of nanoseconds since epoch. It's similar to the st_mtime_ns field of 
os.stat_result which extended the old st_mtime field.

For the full rationale, see my thread on python-ideas:
[Python-ideas] Add time.time_ns(): system clock with nanosecond resolution
https://mail.python.org/pipermail/python-ideas/2017-October/047318.html
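
To illustrate the precision issue and the intended usage (the numbers are 
approximate, and time.time_ns() is the function being proposed here, so the 
last line is commented out):

import time

t = time.time()
# Around 2017 the epoch value is ~1.5e9 seconds; at that magnitude the spacing
# between adjacent doubles is roughly 2**-22 s, i.e. a few hundred nanoseconds,
# so genuine nanosecond information cannot survive the conversion to float.
print('%.9f' % t)

# Proposed API: an integer count of nanoseconds since the epoch, no rounding.
# t_ns = time.time_ns()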

--
components: Library (Lib)
messages: 304365
nosy: haypo
priority: normal
severity: normal
status: open
title: Add time.time_ns(): get time with nanosecond resolution
type: enhancement
versions: Python 3.7




[issue31765] BUG: System deadlocks performing big loop operations in python 3.5.4, windows 10

2017-10-13 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

Nikhil, I am closing this for now because I am at least 85% sure David is 
right.  To get any help, you need to reduce your code to the minimum needed to 
produce the symptoms and then include that with any question.  You would be 
told the same on Stackoverflow, which is another place to seek answers.

--
nosy: +terry.reedy
resolution:  -> later
stage:  -> resolved
status: open -> closed




[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2017-10-13 Thread Steven Barker

New submission from Steven Barker :

While investigating a Stack Overflow question (here: 
https://stackoverflow.com/q/46529767/1405065), I found that there may be a race 
condition in the cleanup code for concurrent.futures.ThreadPoolExecutor. The 
behavior in normal situations is fairly benign (the executor may run a few more 
jobs than you'd expect, but exits cleanly), but in rare situations it might 
lose track of a running thread and allow the interpreter to shut down while the 
thread is still trying to do work.

Here's an example that concisely demonstrates the situation where the issue 
can come up (it doesn't actually cause the race to go the wrong way on my 
system, but sets up the possibility for it to occur):


from threading import current_thread
from concurrent.futures import ThreadPoolExecutor
from time import sleep

pool = ThreadPoolExecutor(4)

def f(_):
    print(current_thread().name)
    future = pool.submit(sleep, 0.1)
    future.add_done_callback(f)

f(None)


The callback from completion of one job schedules another job, indefinitely.

When run in an interactive session, this code will print thread names forever. 
You should get "MainThread" once, followed by a bunch of 
"ThreadPoolExecutor-X_Y" names (often the same name will be repeated most of 
the time, due to the GIL I think, but in theory the work could rotate between 
threads). The main thread will return to the interactive REPL right away, so 
you can type in other stuff while the executor's worker threads are printing 
stuff in the background (I suggest running pool.shutdown() to make them stop). 
This is fine.

But if the example code is run as a script, you'll usually get "MainThread", 
followed by exactly four repeats of "ThreadPoolExecutor-0_0" (or fewer in the 
unlikely case that the race condition strikes you). That's the number of 
threads the ThreadPoolExecutor was limited to, but note that the thread name 
that gets printed will usually end with 0 every time (you don't get one output 
from each worker thread, just the same number of outputs as there are threads, 
all from the first thread). Why you get that number of outputs (instead of zero 
or one or an infinite number) was one part of the Stack Overflow question.

The answer turned out to be that after the main thread has queued up the first 
job in the ThreadPoolExecutor, it runs off the end of the script's code, so it 
starts shutting down the interpreter. The cleanup function _python_exit (in 
Lib/concurrent/futures/thread.py) gets run since it is registered with atexit, 
and it tries to signal the worker threads to shut down cleanly. However, the 
shutdown logic interacts oddly with an executor that's still spinning up its 
threads. It will only signal and join the threads that existed when it started 
running, not any new threads.

As it turns out, usually the newly spawned threads will shut themselves down 
immediately after they spawn, but as a side effect, the first worker thread 
carries on longer than expected, doing one additional job for each new thread 
that gets spawned and exiting itself only when the executor has a full set. 
This is why there are four outputs from the worker thread instead of some other 
number. But the exact behavior is dependent on the thread scheduling order, so 
there is a race condition.

You can demonstrate a change in behavior from different timing by putting a 
call to time.sleep at the very top of the _worker() function (delaying how 
quickly the new threads can get to the work queue). You should see the program 
behavior change to only print "ThreadPoolExecutor-0_0" once before exiting.

Let's go through the steps of the process:

1. The main thread runs f() and schedules a job (which adds a work item to the 
executor's work queue). The first worker thread is spawned by the executor to 
run the job, since it doesn't have any threads yet. The main thread also sets a 
callback on the future to run f() again.

2. The main thread exits f() and reaches the end of the script, so it begins 
the interpreter shutdown process, including calling atexit functions. One of 
those is _python_exit, which adds a reference to None to the executor's work 
queue. Note that the None is added *after* the job item from step 1 (since 
they're both done by the same main thread). It then calls join() on the worker 
thread spawned in step 1, waiting for it to exit. It won't try to join any 
other threads that spawn later, since they don't exist yet.

3. The first worker thread spawned by the executor in step 1 begins running and 
pops an item off the work queue. The first item is a real job, so it runs it. 
(The first parts of this step may be running in parallel with step 2, but 
completing the job will take much longer than step 2, so the rest of this step runs 
by itself after step 2 has finished.) Eventually the job ends and the callback 
function on the Future is called, which schedules another job (putting a job 
item 

[issue31757] Tutorial: Fibonacci numbers start with 1, 1

2017-10-13 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

I agree that we should be consistent -- with the current standard definition -- 
with the changes suggested above.  Heinrich, can you, and do you want to, 
submit a patch?  If so, please also sign the contributor agreement.  See 
https://www.python.org/psf/contrib/

The Fibonacci numbers start with 1 if there is no F(0), as in Fibonacci's 
rabbit model.  Before 0 was accepted as a number, the series had to start with 
F(1) = F(2) = 1.  See https://en.wikipedia.org/wiki/Fibonacci_number
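
For reference, the two conventions look roughly like this (paraphrased, not quoted 
verbatim from the tutorial):

# "Defining Functions" style, starting from F(0) = 0:
a, b = 0, 1
while a < 10:
    print(a, end=' ')   # 0 1 1 2 3 5 8
    a, b = b, a + b
print()

# The older "First Steps" loop printed b instead, so its output began 1, 1, 2, ...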

--
nosy: +terry.reedy




[issue14465] xml.etree.ElementTree: add feature to prettify XML output

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:

For the record, on 2015-04-02, bpo-23847 was marked as a duplicate of 
this issue.

--
nosy: +haypo




[issue31759] re wont recover nor fail on runaway regular expression

2017-10-13 Thread Matthew Barnett

Matthew Barnett  added the comment:

@Tim: the regex module includes some extra checks to reduce the chance of 
excessive backtracking. In the case of the OP's example, they seem to be 
working. However, it's difficult to know when adding such checks will help, and 
your example is one case where they are being done but aren't helping, with the 
result that it's slower.

--




[issue31754] Documented type of parameter 'itemsize' to PyBuffer_FillContiguousStrides is incorrect.

2017-10-13 Thread Mariatta Wijaya

Mariatta Wijaya  added the comment:

Welcome Aniket. Yes those two links are good starting points.
Please propose a PR with the fix.

--
nosy: +Mariatta




[issue31676] test.test_imp.ImportTests.test_load_source has side effects

2017-10-13 Thread STINNER Victor

Change by STINNER Victor :


--
versions: +Python 3.6




[issue30807] setitimer() can disable timer by mistake

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:


New changeset ef611c96eab0ab667ebb43fdf429b319f6d99890 by Victor Stinner in 
branch 'master':
bpo-30807: signal.setitimer() now uses _PyTime API (GH-3865)
https://github.com/python/cpython/commit/ef611c96eab0ab667ebb43fdf429b319f6d99890


--




[issue31676] test.test_imp.ImportTests.test_load_source has side effects

2017-10-13 Thread Roundup Robot

Change by Roundup Robot :


--
pull_requests: +3964




[issue31676] test.test_imp.ImportTests.test_load_source has side effects

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:


New changeset a505ecdc5013cd8f930aacc1ec4fb2afa62d3853 by Victor Stinner in 
branch 'master':
bpo-31676: Fix test_imp.test_load_source() side effect (#3871)
https://github.com/python/cpython/commit/a505ecdc5013cd8f930aacc1ec4fb2afa62d3853


--




[issue25588] Run test suite from IDLE idlelib.run subprocess

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:


New changeset 6234e9068332f61f935cf13fa5b1a924a99c28b2 by Victor Stinner (Miss 
Islington (bot)) in branch '3.6':
[3.6] bpo-25588: Fix regrtest when run inside IDLE (GH-3962) (#3987)
https://github.com/python/cpython/commit/6234e9068332f61f935cf13fa5b1a924a99c28b2


--




[issue31753] Unnecessary closure in ast.literal_eval

2017-10-13 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

On Win10, with the installed 3.7.0a1, the speedup is 7-8% (it is 'only' 5% on a 
repository debug build, which takes 5-6 times longer).

--
nosy: +terry.reedy




[issue31754] Documented type of parameter 'itemsize' to PyBuffer_FillContiguousStrides is incorrect.

2017-10-13 Thread Aniket Vyas

Aniket Vyas  added the comment:

Hello! I am new to the community and would love to start contributing 
here. Can I take up this bug? 

In order to do so I am going through the following links: 

https://docs.python.org/devguide/docquality.html
https://docs.python.org/devguide/pullrequest.html

Is the list exhaustive for this particular issue? 

Thanks!

--
nosy: +Aniket Vyas




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread STINNER Victor

Change by STINNER Victor :


--
nosy:  -haypo




[issue25588] Run test suite from IDLE idlelib.run subprocess

2017-10-13 Thread Roundup Robot

Change by Roundup Robot :


--
pull_requests: +3963




[issue25588] Run test suite from IDLE idlelib.run subprocess

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:


New changeset ccef823939d4ef602f2d8d13d0bfec29eda597a5 by Victor Stinner in 
branch 'master':
bpo-25588: Fix regrtest when run inside IDLE (#3962)
https://github.com/python/cpython/commit/ccef823939d4ef602f2d8d13d0bfec29eda597a5


--




[issue31705] test_sha256 from test_socket fails on ppc64le arch

2017-10-13 Thread Ryan C. Decker

Ryan C. Decker  added the comment:

I seem to be having this issue on CentOS 7.4 but running on x86_64 instead of 
ppc64le. I have attached an strace using version 4.17 (the latest version from 
scl) created as follows:

strace -s 128 -e trace=%network -o trace ./python -m test -v test_socket -m 
test_sha256


== CPython 3.6.3 (default, Oct 13 2017, 11:16:36) [GCC 4.8.5 20150623 (Red Hat 
4.8.5-16)]
== Linux-3.10.0-693.2.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core 
little-endian
== cwd: /home/ryan/Downloads/Python-3.6.3/build/test_python_4140
== CPU count: 8
== encodings: locale=UTF-8, FS=utf-8
Run tests sequentially
0:00:00 load avg: 0.13 [1/1] test_socket
test_sha256 (test.test_socket.LinuxKernelCryptoAPI) ... ERROR

==
ERROR: test_sha256 (test.test_socket.LinuxKernelCryptoAPI)
--
Traceback (most recent call last):
  File "/home/ryan/Downloads/Python-3.6.3/Lib/test/test_socket.py", line 5424, 
in test_sha256
op.sendall(b"abc")
OSError: [Errno 126] Required key not available

--
Ran 1 test in 0.001s

FAILED (errors=1)
test test_socket failed
test_socket failed

1 test failed:
test_socket

Total duration: 39 ms
Tests result: FAILURE

--
nosy: +Ryan C. Decker
Added file: https://bugs.python.org/file47221/trace_x86_64




[issue31782] Add a timeout to multiprocessing's Pool.join

2017-10-13 Thread Will Starms

Will Starms  added the comment:

A timeout alternative that raises TimeoutError

--
Added file: https://bugs.python.org/file47220/cpython_raise_timeout.patch




[issue31752] Assertion failure in timedelta() in case of bad __divmod__

2017-10-13 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

On Win10, the installed 3.5.4, 3.6.3, and 3.7.1a1 all raise SystemError.
The 3.6 and 3.7 repository debug builds raise AssertionError and show a Windows 
crash box.  After the patch, there is a silent crash.

--
nosy: +terry.reedy




[issue31781] crashes when calling methods of an uninitialized zipimport.zipimporter object

2017-10-13 Thread Oren Milman

Change by Oren Milman :


--
keywords: +patch
pull_requests: +3962
stage:  -> patch review




[issue31782] Add a timeout to multiprocessing's Pool.join

2017-10-13 Thread Will Starms

New submission from Will Starms :

Pool's join function currently (3.6.3) lacks a timeout, which can cause the 
managing thread to sleep indefinitely when a pool worker hangs or starts 
misbehaving. Adding a timeout allows the owning thread to attempt a join and, 
after the timeout, return to other tasks, such as monitoring worker health.

In my specific situation, I have a Pool running a task on a large set of files. 
If any single task fails, the whole operation is ruined and the pool should be 
terminated. A task can communicate with the main thread through error_callback, 
but if the thread has already called join, it can't check until join returns, 
after the Pool has finished all processing.

Attached is an incredibly simple patch to the current (3.6) cpython 
implementation that emulates threading.thread.join's behavior.
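
For reference, a rough way to approximate this with the current API is to perform 
the join in a helper thread (a sketch only, unrelated to the attached patch):

import threading
from multiprocessing import Pool
from time import sleep

if __name__ == '__main__':
    pool = Pool(4)
    pool.map_async(sleep, [0.1] * 8)
    pool.close()

    waiter = threading.Thread(target=pool.join)
    waiter.start()
    waiter.join(timeout=5.0)        # emulates the proposed Pool.join(timeout=...)
    if waiter.is_alive():
        # The join timed out: check worker health, terminate, etc.
        pool.terminate()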

--
components: Library (Lib)
files: cpython_timeout.patch
keywords: patch
messages: 304350
nosy: Will Starms
priority: normal
severity: normal
status: open
title: Add a timeout to multiprocessing's Pool.join
type: enhancement
versions: Python 3.6
Added file: https://bugs.python.org/file47219/cpython_timeout.patch




[issue31727] FTP_TLS errors when use certain subcommands

2017-10-13 Thread Terry J. Reedy

Change by Terry J. Reedy :


--
stage:  -> test needed
title: FTP_TLS errors when -> FTP_TLS errors when use certain subcommands
type:  -> behavior
versions: +Python 3.7




[issue31772] SourceLoader uses stale bytecode in case of equal mtime seconds

2017-10-13 Thread Brett Cannon

Brett Cannon  added the comment:

To make the proposal concrete, would you then change 
https://github.com/python/cpython/blob/master/Lib/importlib/_bootstrap_external.py#L785
to include a `source_mtime != int(time.time())` guard? I think that as long as 
that's the last check in the guard, since it has the highest performance cost, 
it should be okay (I don't think any platform makes calling the clock an 
expensive operation).
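
In other words, the guard would boil down to something like this (a sketch of the 
condition only; the helper name is made up and the real check would live inline in 
SourceLoader):

import time

def _ok_to_cache_bytecode(source_mtime):
    # Don't write the .pyc if the source was modified during the current second:
    # a later same-second edit could otherwise be masked by stale bytecode, since
    # the invalidation check only compares whole seconds.
    return source_mtime != int(time.time())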

And nothing personal about pro-actively closing this, Devin, but with over 6000 
issues in this issue tracker alone it's easier to ask forgiveness than 
permission when it comes to issues that at the outset don't look like a good 
idea. Otherwise I would have to track this issue personally in case you never 
responded again (which does happen), and that just becomes a maintenance burden 
on me when I have 3 other Python-related projects that I'm also actively 
working on issues with (ignoring any issues that I get involved with on this 
tracker).

--
resolution: wont fix -> 
stage: resolved -> test needed
status: closed -> open




[issue31742] Default to emitting FutureWarning for provisional APIs

2017-10-13 Thread Aaron Gallagher

Aaron Gallagher <_...@habnab.it> added the comment:

>Storing the marker attribute in __main__ [...]

Can I request please not using __main__ for this? setuptools console_scripts 
are very common, which is a case where __main__ will be a generated (i.e. not 
user-controllable) file. Making application code import __main__ to set the 
flag would be brittle.

--
nosy: +habnabit




[issue31742] Default to emitting FutureWarning for provisional APIs

2017-10-13 Thread Guido van Rossum

Guido van Rossum  added the comment:

I think this ought to be a new PEP which supersedes PEP 411. I think it should 
simply *offer* the option of using a feature flag rather than prescribing it. 
It could also explain that feature flags could cover an entire module or only 
the unstable parts. I would be happy to accept something like that.

I am still unclear as to how you are proposing to implement the "feature flag". 
Making this an attribute of `__main__` doesn't feel right (I've seen programs 
with different main entry points). However, I do think it could be global state, 
similar to what `warnings` does (maybe it should just be a warning).

I don't like the parallel with `__future__` imports because those are for 
*stable* APIs that use backwards incompatible syntax (typically a new keyword).

Thinking back on my experiences with asyncio and typing, I have a feeling that 
the provisional status was *mostly* used to introduce *new* APIs at bugfix 
releases. We were in general pretty careful with changes to the documented 
APIs, with some exceptions for where the design was broken, and we sometimes 
pushed backwards incompatibilities to feature releases (which get more vetting 
by users than bugfix releases). But in both cases the API surface was 
sufficiently large that we simply didn't know in which areas details would have 
to change in the future, and we didn't want to be stuck with backwards 
compatibility hacks long-term. (The worst thing is when the perfect name for an 
API is found to require an incompatible signature change -- if you solve it by 
using a different name, the API will forever look ugly or confusing or weird.)

I know there have been times for both asyncio and typing where we wished they 
weren't in the stdlib at all -- mostly because we had users stuck with a 
CPython version (e.g. 3.5.1) that was missing an important addition to the API. 
But all in all I think that for both libraries the advantages have well 
outweighed the disadvantages. And a required warning on import would have 
really bothered me.

--




[issue31781] crashes when calling methods of an uninitialized zipimport.zipimporter object

2017-10-13 Thread Oren Milman

New submission from Oren Milman :

The following code crashes:
import zipimport
zi = zipimport.zipimporter.__new__(zipimport.zipimporter)
zi.find_module('foo')

This is because get_module_info() (in Modules/zipimport.c) assumes that the
zipimporter object is initialized, so it assumes that `self->prefix` is not
NULL, and passes it to make_filename(), which crashes.

get_module_code() makes the same assumption, and 
zipimport_zipimporter_get_data_impl()
assumes that `self->archive` is not NULL, and passes it to 
PyUnicode_GET_LENGTH(),
which crashes.
Thus, every method of an uninitialized zipimporter object might crash.


I will open a PR to fix this soon.

--
components: Extension Modules
messages: 304346
nosy: Oren Milman
priority: normal
severity: normal
status: open
title: crashes when calling methods of an uninitialized zipimport.zipimporter 
object
type: crash
versions: Python 3.7




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Guido van Rossum

Guido van Rossum  added the comment:

Hm I may just be completely off here, but I thought that compilers could be 
allowed to recognize the use of locals() in a particular function and then 
disable JIT optimizations for that function. (In fact I thought we already had 
a rule like this but I can't find any language about it, but maybe I'm mistaken 
and we only have such an exception for sys._getframe() -- though it's not 
mentioned in the docs for that either.)

I do like Nathaniel's idea of making locals() a write-through proxy (and letting 
f_locals do the same thing). If this keeps the frame alive, well, too bad -- there 
are lots of other things that do this too, e.g. tracebacks.

Or what about a read-only proxy or a plain snapshot? The docs already say that 
it *shouldn't* be written and *may* not write through -- are we concerned that 
a lot of people depend on the actual runtime behavior rather than the 
documented behavior?

--




[issue28507] Regenerate ./configure on the default branch

2017-10-13 Thread Дилян Палаузов

Дилян Палаузов  added the comment:

For the record, on master runstatedir was added on 7th September 2017, removed 
on 5th September, added on 29 June, removed on 9th June, added on 14th April, 
removed on 6th December 2016 and 10th October in two branches, added on 13th 
September...

The history would be easier to understand and follow if it were clarified whether 
runstatedir belongs in configure, with everybody then sticking to the same rule.

Autoconf 2.69 does not insert runstatedir; autoconf.git has inserted it since 
12 September 2013.

--
nosy: +dilyan.palauzov




[issue31780] Using format spec ',x' displays incorrect error message

2017-10-13 Thread Ned Deily

Change by Ned Deily :


--
nosy: +eric.smith




[issue30156] PYTHONDUMPREFS segfaults on exit

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:

Note: if you care about namedtuple performance, Raymond Hettinger wrote that he 
would be interested in reusing the C structseq type. I measured that getting 
an attribute by name is faster with structseq than with the current property / 
cached-tuple hack.

--




[issue30156] PYTHONDUMPREFS segfaults on exit

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:

> Removing this micro-optimization makes attribute access in namedtuple more 
> than 1.5 times slower:
> Mean +- std dev: [python.default] 126 ns +- 4 ns -> [python] 200 ns +- 7 ns: 
> 1.58x slower (+58%)

I wrote PR 3985; it's only 20 ns slower (1.3x slower):

[ref] 80.4 ns +- 3.3 ns -> [fastcall] 103 ns +- 5 ns: 1.28x slower (+28%)

Maybe Python was optimized further in the meantime, or the slowdown is higher 
on your computer?

--




[issue30156] PYTHONDUMPREFS segfaults on exit

2017-10-13 Thread STINNER Victor

Change by STINNER Victor :


--
keywords: +patch
pull_requests: +3961




[issue30767] logging must check exc_info correctly

2017-10-13 Thread Vinay Sajip

Vinay Sajip  added the comment:

> I've triggered it which is why I looked for the problem and offered the 
> defensive patch.

That's why I asked for a small example which used logging as documented and 
demonstrated a problem. You haven't done that.

> As API writers you can NEVER assume your parameters are what you think they 
> should be and just blindly proceed.

I think you'll find that in Python usage in general and the Python stdlib in 
particular, exhaustive checking of input parameters is not the norm. I'm not 
saying that no error checking is ever done nor that it should never be done, 
but your "NEVER" doesn't seem to hold up, as the stdlib contains many 
counterexamples.

--




[issue31779] assertion failures and a crash when using an uninitialized struct.Struct object

2017-10-13 Thread Oren Milman

Change by Oren Milman :


--
keywords: +patch
pull_requests: +3960
stage:  -> patch review




[issue30767] logging must check exc_info correctly

2017-10-13 Thread Matthew Patton

Matthew Patton  added the comment:

I've triggered it which is why I looked for the problem and offered the 
defensive patch. As API writers you can NEVER assume your parameters are what 
you think they should be and just blindly proceed.

--




[issue31509] test_subprocess hangs randomly on AMD64 Windows10 3.x

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:

Ned: "I also have seen test_subprocess hangs on both macOS and on Debian Linux 
on both 3.6 and master, as recently as 3.6.3 and 3.7.0a1 but not with current 
heads.  After some experimenting and bisecting, I tracked the fix down to the 
mock os.waitpid fixes for bpo-31178 (git11045c9d8a and gitfae0512e58). So 
perhaps this issue can be closed."

Oh, that's very good news. Thanks!

--
resolution:  -> fixed
stage:  -> resolved
status: open -> closed




[issue31455] ElementTree.XMLParser() mishandles exceptions

2017-10-13 Thread Serhiy Storchaka

Serhiy Storchaka  added the comment:

It would be nice. But I see you already have opened issue31758 for reference 
leaks. I think that other problems can be solved in the same issue.

Would you mind backporting your patch to 2.7, Stefan, if that makes sense? 
Otherwise I'll just close this issue. Live exceptions don't cause a crash in 2.7.

--




[issue31732] Add TRACE level to the logging module

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:

Vinay:
> I feel that there is no need for a TRACE level in the stdlib

Ok, that's fine.

I was just passing along an idea from someone on IRC. Since I had the same idea 
once, I gave it a try, but I lost :-) I can easily survive without TRACE in the stdlib ;-)


Vinay:
> Victor says "we need to a 6th level since DEBUG might be too verbose" - but 
> as the PR is currently constituted, enabling the TRACE level would output 
> DEBUG messages too, and so be even more verbose than just DEBUG! In this case 
> I feel that, when considering the number of levels, "less is more".

Oh, I'm not sure that I explained my point correctly, since someone else on IRC 
told me that he misunderstood.

My point is that logs have different consumers who have different expectations 
and usages of the logs.

In my case, the customer may want to go up to the DEBUG level "by default" 
to collect more data on failures. Enabling TRACE would produce too many logs 
and should be restricted to the most tricky bugs where you cannot guess the bug 
only with the DEBUG level.

I tried to explain that if you are putting all debug traces at the DEBUG level, 
you may produce 10 MB of log per minute (arbitrary number for my explanation). 
But producing 10 MB per machine in a large datacenter made of thousands of 
servers can lead to practical storage issues: too many logs would fill the log 
partition too quickly, especially when logs are centralized.

The idea is to reduce debug traces to 10% at the DEBUG level, and put the 
remaining traces at the TRACE level. For example, you can imagine logging an 
exception message at DEBUG, but the traceback at the TRACE level. The traceback 
is likely to produce 10x more logs.

The TRACE level is only enabled on-demand for a short period of time on a few 
selected servers.

Technically, you can already do that with the INFO and DEBUG levels. But in OpenStack, 
these levels are already "busy" with enough messages and we needed a 6th level 
:-)

(I don't want to reopen the discussion, just to make sure that I was correctly 
understood ;-))
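
(For completeness, an application that wants this today can register its own level 
without any stdlib change; a minimal sketch, where the value 5 and the method name 
are arbitrary choices:)

import logging

TRACE = 5                         # below DEBUG (10): enabling TRACE also shows DEBUG
logging.addLevelName(TRACE, "TRACE")

def _trace(self, msg, *args, **kwargs):
    if self.isEnabledFor(TRACE):
        self._log(TRACE, msg, args, **kwargs)

logging.Logger.trace = _trace     # opt-in monkey-patch, for this sketch only

logging.basicConfig(level=TRACE)
logging.getLogger(__name__).trace("very detailed message")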

--




[issue28647] python --help: -u is misdocumented as binary mode

2017-10-13 Thread Berker Peksag

Berker Peksag  added the comment:

Modules/main.c and Python.man are the same in the 3.6 branch. We could backport the 
change in Doc/library/sys.rst from 7f580970836b0f6bc9c5db868d95bea81a3e1558, but 
I haven't done it yet since it needs to be manually backported.

--




[issue22674] RFE: Add signal.strsignal(): string describing a signal

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:

> 3. For the unknown signal, what should the description be? "Unknown 
> signal" like the C function returns, or None?

Hum, Linux returns "Unknown signal 12345". I propose to use this behaviour on 
all platforms (which provide strsignal()).

--
title: strsignal() missing from signal module -> RFE: Add signal.strsignal(): 
string describing a signal




[issue28647] python --help: -u is misdocumented as binary mode

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:

Thanks Berker for this nice documentation enhancement! It was required.

Do we need to update Python 3.6 documentation using the commit 
5f908005ce16b06d5af7b413264009c4b062f33c, or are we good? (sorry, I didn't 
check)

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue28647] python --help: -u is misdocumented as binary mode

2017-10-13 Thread Berker Peksag

Berker Peksag  added the comment:

Thank you for reviews, Serhiy and Victor.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue30404] Make stdout and stderr truly unbuffered when using -u option

2017-10-13 Thread Berker Peksag

Berker Peksag  added the comment:


New changeset 7f580970836b0f6bc9c5db868d95bea81a3e1558 by Berker Peksag in 
branch 'master':
bpo-28647: Update -u documentation after bpo-30404 (GH-3961)
https://github.com/python/cpython/commit/7f580970836b0f6bc9c5db868d95bea81a3e1558


--
nosy: +berker.peksag




[issue28647] python --help: -u is misdocumented as binary mode

2017-10-13 Thread Berker Peksag

Berker Peksag  added the comment:


New changeset 7f580970836b0f6bc9c5db868d95bea81a3e1558 by Berker Peksag in 
branch 'master':
bpo-28647: Update -u documentation after bpo-30404 (GH-3961)
https://github.com/python/cpython/commit/7f580970836b0f6bc9c5db868d95bea81a3e1558


--




[issue31780] Using format spec ',x' displays incorrect error message

2017-10-13 Thread FHTMitchell

New submission from FHTMitchell :

Minor issue. Using the format specs ',b', ',o' or ',x' raises the error

ValueError("Cannot specify ',' or '_' with 'x'.",)

(or equivalently for 'b' and 'o'). However, it is possible to use the format 
specs '_b', '_o' and '_x' in Python 3.6 due to PEP 515. 

The following test demonstrates this:

>>> i = 100000
>>> for base in 'box':
...     for sep in ',_':
...         try:
...             print(f'{i:{sep}{base}}')
...         except ValueError as err:
...             print(repr(err))
...
ValueError("Cannot specify ',' or '_' with 'b'.",)
1_1000_0110_1010_0000
ValueError("Cannot specify ',' or '_' with 'o'.",)
30_3240
ValueError("Cannot specify ',' or '_' with 'x'.",)
1_86a0

--
messages: 304330
nosy: FHTMitchell
priority: normal
severity: normal
status: open
title: Using format spec ',x' displays incorrect error message
type: behavior
versions: Python 3.6




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

I'll also note there's a simpler reason the namespace object exposed at the 
function level can't just be a write-through proxy for the underlying frame: 
references to frame.f_locals may outlive the frame backing it, at which point 
we really do want it to be a plain dictionary with no special behaviour, just 
as it is for regular execution frames. 

(Think "return locals()" as the last line in a helper function, as well as 
variants like "data = locals(); data.pop('some_key'); return data")
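
A tiny illustration of the pattern in question:

def helper():
    a, b = 1, 2
    return locals()     # callers expect an ordinary dict here

data = helper()         # helper's frame is gone by this point
data.pop('a')           # so it must keep behaving like a plain dictionary
print(data)             # {'b': 2}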

That means that no matter what, we need to snapshot the frame locals when 
frame.f_locals is requested. The question then becomes:

- when do we update the contents of cell references? (this is what's buggy 
right now when a trace function is installed)
- when do we update ordinary local variables? (this isn't broken, so we want to 
avoid changing it)

Providing write-through support *just* for cells should thus make it possible 
to fix the buggy interaction between cells and trace function, while minimising 
the risk of any unintended consequences affecting regular function locals.

--




[issue31779] assertion failures and a crash when using an uninitialized struct.Struct object

2017-10-13 Thread Oren Milman

New submission from Oren Milman :

The following code causes an assertion failure:
import _struct
struct_obj = _struct.Struct.__new__(_struct.Struct)
struct_obj.iter_unpack(b'foo')

This is because Struct_iter_unpack() (in Modules/_struct.c) assumes that
Struct.__init__() was called, and so it does `assert(self->s_codes != NULL);`.

The same happens in (almost) every method of Struct, and in s_get_format(), so
in all of them, too, we would get an assertion failure in case of an uninitialized
Struct object.
The exception is __sizeof__(), which doesn't have an `assert`, and simply
crashes while trying to iterate over `self->s_codes`.


I will open a PR to fix this soon.

--
components: Extension Modules
messages: 304328
nosy: Oren Milman
priority: normal
severity: normal
status: open
title: assertion failures and a crash when using an uninitialized struct.Struct 
object
type: crash
versions: Python 3.7




[issue31773] Rewrite _PyTime_GetWinPerfCounter() for _PyTime_t

2017-10-13 Thread STINNER Victor

STINNER Victor  added the comment:

I am reopening the issue since I found a solution that uses only integers in pytime.c for 
QueryPerformanceCounter() / QueryPerformanceFrequency() *and* prevents integer 
overflow.

--
resolution: fixed -> 
status: closed -> open




[issue31773] Rewrite _PyTime_GetWinPerfCounter() for _PyTime_t

2017-10-13 Thread STINNER Victor

Change by STINNER Victor :


--
pull_requests: +3959




[issue31761] regrtest: faulthandler.enable() fails with io.UnsupportedOperation: fileno when run from IDLE

2017-10-13 Thread Terry J. Reedy

Terry J. Reedy  added the comment:

After starting Python from a command line instead of an icon, importing/running 
autotest results in the same three failures.

--




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Armin Rigo

Armin Rigo  added the comment:

Thanks Nick for the clarification.  Yes, that's what I meant: supporting such 
code in simple JITs is a nightmare.  Perhaps more importantly, I am sure that 
if Python starts supporting random mutation of locals outside tracing hooks, 
then it would open the door to various hacks that are best not done at all, 
from a code quality point of view.

--




[issue31455] ElementTree.XMLParser() mishandles exceptions

2017-10-13 Thread Oren Milman

Oren Milman  added the comment:

Serhiy, in addition to the problems you mentioned with not calling __init__(), 
it seems
that calling every method of an uninitialized XMLParser object would crash.
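
A minimal sketch of the kind of crash described, assuming the C accelerator 
module is importable directly (the method and argument are arbitrary):

import _elementtree
parser = _elementtree.XMLParser.__new__(_elementtree.XMLParser)
parser.feed('<root/>')  # __init__() never ran, so the internal state is missing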

If you don't mind, I would be happy to open an issue to fix these crashes.

--
nosy: +Oren Milman




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

We're OK with the idea that installing a trace function might automatically 
turn off various compiler and interpreter managed optimisations (it's similar 
to how requesting or implying reliance on full frame support in other 
implementations can disable various optimisations). For trace hooks like 
coverage tools and debuggers, that's often downright desirable, since it makes 
the runtime behaviour correlate more directly with the source code.

What we're aiming to avoid is requiring that implementations make the assertion 
in "a = 1; locals()['a'] = 2; assert a == 2" pass at function scope, and if 
anything, we'd prefer to make it a requirement for that assertion to *fail*.
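
For reference, the behaviour being preserved looks like this at function scope 
in current CPython, with no trace function installed:

def demo():
    a = 1
    locals()['a'] = 2   # mutates a snapshot, not the frame's fast locals
    assert a == 1       # the write is not visible through the real variable
    return a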

Requiring locals to actually *be* a write-through proxy (for all locals, not 
just cell references) would revert Python's semantics to the state they were in 
before the "fast locals" concept was first introduced, and we have no intention 
of going back there.

--




[issue31758] various refleaks in _elementtree

2017-10-13 Thread Oren Milman

Oren Milman  added the comment:

Shame on me. I only now found out that Serhiy already mentioned most of the 
refleaks
in https://bugs.python.org/issue31455#msg302103.

--




[issue30767] logging must check exc_info correctly

2017-10-13 Thread Vinay Sajip

Vinay Sajip  added the comment:

Matthew Patton: you don't appear to have read the documentation correctly. The 
formatException() method's exc_info positional parameter is expected to be a 
normal exception tuple, not just any truthy value. This is clearly stated in 
the documentation for the method.

That is different to the logger.debug() etc. methods, where a truthy value can 
be provided for the exc_info keyword parameter.
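
To illustrate the distinction (the logger name and message are arbitrary):

import logging
import sys

logging.basicConfig()
logger = logging.getLogger("demo")
formatter = logging.Formatter()

try:
    1 / 0
except ZeroDivisionError:
    logger.error("failed", exc_info=True)             # a truthy value is fine here
    print(formatter.formatException(sys.exc_info()))  # this one needs the real tuple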

--




[issue31772] SourceLoader uses stale bytecode in case of equal mtime seconds

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

I wasn't clear on what you meant by "potentially in the future".

Now that I realise you meant "Defer refreshing the bytecode cache to the next 
import attempt if `int(source_mtime) == int(time.time())`, but still bypass it 
for the current import", then yes, I agree that would reliably resolve the 
problem, since all imports during the same second as the edit would bypass the 
cache without updating it, and the first subsequent import would refresh it with 
a timestamp that's guaranteed to be later than the source_mtime (not just equal).
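
A hedged sketch of that rule (importlib's real logic is more involved, and the 
helper name here is made up for illustration):

import time

def use_and_refresh_bytecode_cache(source_mtime):
    # If the source was modified during the current clock second, neither
    # trust the existing .pyc nor write a new one; a later import will
    # recompile and cache bytecode that is safely newer than the source.
    return int(source_mtime) != int(time.time())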

Brett, what do you think? Such an adjustment to the caching logic should have 
next to no impact in the typical case where `int(source_mtime) < 
int(time.time())`, while still making it even less likely that hot reloaders 
will accidentally cache stale bytecode.

--




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Nathaniel Smith

Nathaniel Smith  added the comment:

@arigo: But CPython is already committed to supporting writes to locals() at 
any moment, because at any moment you can set a trace function and in every 
proposal trace functions can reliably write to locals. So I don't think this is 
a serious obstacle for adding a JIT to CPython -- or at least, it doesn't add 
any new obstacles.

--




[issue31732] Add TRACE level to the logging module

2017-10-13 Thread Vinay Sajip

Vinay Sajip  added the comment:

As Raymond has said: though it might appear reasonable to add a TRACE level 
from the numerous examples that Victor has given, in practice it is hard enough 
to know when a particular level should be applied. Victor says "we need a 6th 
level since DEBUG might be too verbose" - but as the PR is currently 
constituted, enabling the TRACE level would output DEBUG messages too, and so 
be even more verbose than just DEBUG! In this case I feel that, when 
considering the number of levels, "less is more".

For specific applications different levels might be desirable, and the logging 
system makes this possible, though not at the level of convenience of having a 
trace() method on loggers. However, it's easy enough to provide your own 
subclass with that method, if you really need it that badly. Of course you can 
currently also do logger.log(TRACE, ...) without the need for any subclass or 
need to "patch the stdlib" (as per Victor's comment).

This is one of those areas where tastes differ - and it is IMO really just a 
matter of taste. The five levels we have presently are based on what was 
considered best practice when the logging module was added to Python, and it 
specifically eschewed adopting prior art where more levels were available (e.g. 
syslog). The documentation gives a clear rationale for when to use what level:

https://docs.python.org/2/howto/logging.html#when-to-use-logging

and this seems of reasonably universal applicability across projects.

Given that individual projects *can* provide additional levels according to 
their need, I feel that there is no need for a TRACE level in the stdlib; as 
Raymond has pointed out in msg304304, the current levels are easy to understand 
when to apply, and finer levels invariably lead to different opinions on when 
to use them, due to essentially being matters of taste.

--
resolution:  -> wont fix
stage: patch review -> resolved
status: open -> closed
type:  -> enhancement




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Armin Rigo

Armin Rigo  added the comment:

FWIW, a Psyco-level JIT compiler can support reads from locals() or f_locals, 
but writes are harder.  The need to support writes would likely become another 
hard step on the way towards adding some simple JIT support to CPython in the 
future, should you decide you ever want to go that way.  (It is not a problem 
for PyPy but PyPy is not a simple JIT.)  Well, I guess CPython is not ever 
going down that path anyway.

--




[issue30744] Local variable assignment is broken when combined with threads + tracing + closures

2017-10-13 Thread Nathaniel Smith

Nathaniel Smith  added the comment:

I guess I should say that I'm still confused about why we're coming up with 
such elaborate schemes here, instead of declaring that f_locals and locals() 
shall return a dict proxy so that from the user's point of view, they Always 
Just Work The Way Everyone Expects.

The arguments against that proposal I'm aware of are:

1) Implementing a full dict-like mapping object in C is tiresome. But your 
proposal above also requires doing this, so presumably that's not an issue.

2) We want to keep the current super-complicated and confusing locals() 
semantics, because we like making life difficult for alternative 
implementations (PyPy at least exactly copies all the weird details of how 
CPython's locals() works, which is why it inherited this bug), and by making 
the language more confusing we can encourage the use of linters and boost 
Python's Stackoverflow stats. ...I guess my bias against this argument is 
showing :-). But seriously, if we want to discourage writing to locals() then 
the way to do that is to formally deprecate it, not go out of our way to make 
it silently unreliable.

3) According to the language spec, all Python implementations have to support 
locals(), but only some of them have to support frame introspection, f_locals, 
debugging, and mutation of locals. But... I think this is a place where the 
language spec is out of touch with reality. I did a quick survey and AFAICT in 
practice, Python implementations either support *both* locals() and f_locals 
(CPython, PyPy, Jython, IronPython), or else they support *neither* locals() 
nor f_locals (MicroPython -- in fact MicroPython defines locals() to 
unconditionally return an empty dict). We could certainly document that 
supporting writes through locals() is a quality-of-implementation thing CPython 
provides, similar to the prompt destruction guarantees provided by refcounting. 
But I don't think implementing this is much of a burden -- if you have enough 
introspection metadata to get the list of locals and figure out where their 
values are stored in memory (which is the absolute minimum to implement 
locals()), then you probably also have enough metadata to write back to those same 
locations. Plus debugger support is obviously a priority for any serious 
full-fledged implementation.

So the original write-through proxy idea still seems like the best solution to 
me.

--




[issue31772] SourceLoader uses stale bytecode in case of equal mtime seconds

2017-10-13 Thread Devin Bayer

Devin Bayer  added the comment:

You can't demand that a hot loader react instantly, and there are other use cases, 
like generating modules programmatically.

What is your objection to my proposed solution, which behaves correctly in all 
cases?

If you are not importing modules immediately after writing them, there is no 
harm in skipping the cache if the file mtime differs from the current time.

If you are importing modules immediately, then you want to bypass the cache in 
that case, since it can't be relied upon.

--




[issue31672] string.Template should use re.ASCII flag

2017-10-13 Thread INADA Naoki

Change by INADA Naoki :


--
pull_requests: +3958




[issue31772] SourceLoader uses stale bytecode in case of equal mtime seconds

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

If there's no hot reloader forcing a reimport for every saved edit, hitting this 
problem in the first place is sufficiently unlikely that I'm not worried about 
that scenario. (The time required for a human to context switch from code editing 
to code execution and back sees to that.)

--




[issue31672] string.Template should use re.ASCII flag

2017-10-13 Thread INADA Naoki

INADA Naoki  added the comment:


New changeset b22273ec5d1992b0cbe078b887427ae9977dfb78 by INADA Naoki in branch 
'master':
bpo-31672: Fix string.Template accidentally matched non-ASCII identifiers 
(GH-3872)
https://github.com/python/cpython/commit/b22273ec5d1992b0cbe078b887427ae9977dfb78


--




[issue31772] SourceLoader uses stale bytecode in case of equal mtime seconds

2017-10-13 Thread Devin Bayer

Devin Bayer  added the comment:

That wouldn't always work either. If the source file is imported, then edited, 
then not reimported until the next second (or far in the future), the stale 
bytecode would still be used.

--




[issue31772] SourceLoader uses stale bytecode in case of equal mtime seconds

2017-10-13 Thread Nick Coghlan

Nick Coghlan  added the comment:

The problem with changing the bytecode format is that code other than the 
import machinery reads the bytecode headers, so when we change the format, we 
need to consider the impact on that code. (Even my multiplication proposal 
above suffers from that problem)

A freshness check that would avoid the extra stat call, while still making the 
import system skeptical of the validity of the bytecode cache for just-edited 
sources would be to also check the source mtime against the *current* time: if 
they're the same within the resolution of the bytecode format (i.e. 1 second), 
then compile the code again even if the bytecode headers claim it's already up 
to date.

That way hot reloaders would be sure to pick up multiple edits to the source 
file properly, and would reliably be left with the final version loaded, rather 
than the first version from the final second of edits.

--




[issue31772] SourceLoader uses stale bytecode in case of equal mtime seconds

2017-10-13 Thread Devin Bayer

Devin Bayer  added the comment:

Thanks for the support Nick. I think your proposed idea would still result in 
rare but confusing behavior, which is the type of surprise Python should avoid.

The hash-based pyc files don't seem like a solution to me, because they won't 
be enabled by default. And I think it's obvious the performance loss of doing 
so is unacceptable.

If changing the bytecode format is unacceptable, though it seems like the 
cleanest answer, the import machinery could just avoid caching bytecode when 
the int(mtime) of the source file is potentially in the future.

--
