Nathaniel Smith added the comment:
To make sure I understand correctly: your concern is that the event loop is not
implemented in C. So if you have this patch + an async CM that's implemented in
C + the async CM never *actually* yields to the event loop, then that will be
signal safe.
Nathaniel Smith added the comment:
For purposes of writing a test case, can you install a custom Python-level
signal handler, and make some assertion about where it runs? I.e., the calling
frame should be inside the __aexit__ body, not anywhere earlier.
Nathaniel Smith added the comment:
My reading of the man page is that if SSL_shutdown returns 0, this means that
it succeeded at doing the first phase of shutdown. If there are errors then
they should be ignored, because it actually succeeded.
If you want to then complete the second phase
Nathaniel Smith added the comment:
Adding Ned to CC in case he wants to comment on the utility of per-opcode
tracing from the perspective of coverage.py.
--
nosy: +nedbat, njs
___
Python tracker <rep...@bugs.python.org>
<http://bugs.p
Changes by Nathaniel Smith <n...@pobox.com>:
--
nosy: +njs
<http://bugs.python.org/issue29302>
Nathaniel Smith added the comment:
In bpo-30703 Antoine fixed signal handling so it doesn't use Py_AddPendingCall
anymore. At this point the only time the interpreter itself uses
Py_AddPendingCall is when there's an error writing to the signal_fd, which
should never happen in normal usage
Nathaniel Smith added the comment:
This would still provide value even if you have to do a GET_AWAITABLE in the
protected region: the most common case is that __aenter__ is a coroutine
function, which means that its __await__ is implemented in C and already
protected against interrupts.
Also
Nathaniel Smith added the comment:
Was this actually fixed, or did everyone just get tired and give up on the
original patch?
--
nosy: +njs
Nathaniel Smith added the comment:
Ugh, apparently this weird behavior is actually mandated by the RFC :-(.
RFC 3493:
The nodename and servname arguments are either null pointers or
pointers to null-terminated strings. One or both of these two
arguments must be a non-null pointer
Changes by Nathaniel Smith <n...@pobox.com>:
--
nosy: +giampaolo.rodola
<http://bugs.python.org/issue31198>
New submission from Nathaniel Smith:
socket.getaddrinfo accepts None as a port argument, and translates it into 0.
This is handy, because bind() understands 0 to mean "pick a port for me", so if
you want bind to pick a port for you, port=None is a slightly more
obvious
Changes by Nathaniel Smith <n...@pobox.com>:
--
pull_requests: +3043
<http://bugs.python.org/issue28414>
Nathaniel Smith added the comment:
> I haven't dug in deeply, but it sounds like we handle IDNs in CNs and SANs
> differently?
No -- Python's ssl module uses exactly the same hostname checking logic in both
cases, and it's equally broken regardless. But, since CAs do all kinds of weird
Nathaniel Smith added the comment:
@arigo: Technically we also need that the writes to memory are observed to
happen-before the write() call on the wakeup fd, which is not something that
Intel is going to make any guarantees about. But *probably* this is also safe
because the kernel has
Nathaniel Smith added the comment:
On further investigation (= a few hours staring at the ceiling last night), it
looks like there's another explanation for my particular bug... which is good,
because on further investigation (= a few hours squinting at google results) it
looks like
New submission from Nathaniel Smith:
Sometimes, CPython's signal handler (signalmodule.c:signal_handler) runs in a
different thread from the main thread. On Unix this is rare but it does happen
(esp. for SIGCHLD), and on Windows it happens 100% of the time. It turns out
that there is a subtle
Changes by Nathaniel Smith <n...@pobox.com>:
--
nosy: +njs
<http://bugs.python.org/issue14243>
Changes by Nathaniel Smith <n...@pobox.com>:
--
nosy: +njs
<http://bugs.python.org/issue16487>
Nathaniel Smith added the comment:
Then maybe simplest solution is to scale back the claim :-).
The important semantic change would be that right now, interrupt_main() is
documented to cause KeyboardInterrupt to be raised in the main thread. So if
you register a custom SIGINT handler
Changes by Nathaniel Smith <n...@pobox.com>:
--
resolution: -> fixed
stage: -> resolved
status: open -> closed
Nathaniel Smith added the comment:
> A real Ctrl+C executes the registered control handlers for the process.
Right, but it's *extremely* unusual for a python 3 program to have a control
handler registered directly via SetConsoleCtrlHandler. This isn't an API that
the interpreter u
Nathaniel Smith added the comment:
> I like it because it categorically eliminates the "tracing or not?" global
> state dependence when it comes to manipulation of the return value of
> locals() - manipulating that will either always affect the original execution
>
Nathaniel Smith added the comment:
Sorry, I meant bpo-21895.
<http://bugs.python.org/issue29926>
Nathaniel Smith added the comment:
In terms of general design cleanliness, I think it would be better to make
`interrupt_main` work reliably than to have IDLE add workarounds for its
unreliability.
AFAICT the ideal, minimal redundancy solution would be:
- interrupt_main just calls raise
Nathaniel Smith added the comment:
> Folks that actually *wanted* the old behaviour would then need to do either
> "sys._getframe().f_locals" or "inspect.currentframe().f_locals".
So by making locals() and f_locals have different semantics, we'd be adding yet
another
Nathaniel Smith added the comment:
Interesting idea! I'm not sure I fully understand how it would work though.
What would you do for the frames that don't use the fast array, and where
locals() currently returns the "real" namespace?
How are you imagining that the trace function
Nathaniel Smith added the comment:
It isn't obvious to me whether the write-through proxy idea is a good one on
net, but here's the rationale for why it might be.
Currently, the user-visible semantics of locals() and f_locals are a bit
complicated. AFAIK they aren't documented anywhere
Nathaniel Smith added the comment:
Some thoughts based on discussion with Armin in #pypy:
It turns out if you simply delete the LocalsToFast and FastToLocals calls in
call_trampoline, then the test suite still passes. I'm pretty sure that pdb
relies on this as a way to set local variables
New submission from Nathaniel Smith:
The attached script looks innocent, but gives wildly incorrect results on all
versions of CPython I've tested.
It does two things:
- spawns a thread which just loops, doing nothing
- in the main thread, repeatedly increments a variable 'x'
And most
Nathaniel Smith added the comment:
@Arek: It's great that you're testing your code against the latest 3.7
pre-release, because that helps give early warning of issues in CPython as it's
developed, which helps everyone. BUT, you really cannot use in-development
versions and expect
Nathaniel Smith added the comment:
It's because Julian thinks _PyTraceMalloc_Untrack is going to lose its
underscore in 3.7, so he made numpy assume that. Numpy issue, I'll see what we
can do.
Nathaniel Smith added the comment:
But, by that definition, like... every change is backwards incompatible.
I'm pretty confident that no-one was relying on this race condition. I can't
even figure out what that would mean.
Nathaniel Smith added the comment:
I think you mean it's backwards *compatible*? There's definitely no change in
APIs or ABIs or anything, all that changes is the order of some statements
inside Python's signal handler. (We used to we write to the magic wakeup fd and
then set a flag saying
Nathaniel Smith added the comment:
I guess now would be a good time to decide whether this should be backported to
3.6, with 3.6.2 coming out in a few days :-). (Or if not, then it can probably
be closed?)
Nathaniel Smith added the comment:
Looks good to me, thanks Serhiy.
<http://bugs.python.org/issue29943>
Changes by Nathaniel Smith <n...@pobox.com>:
--
pull_requests: +2065
<http://bugs.python.org/issue30594>
Nathaniel Smith added the comment:
Posted backports for 3.5 and 3.6. It looks like 2.7 is actually unaffected,
because it doesn't have IDNA support, so there's no failure path in between
when the reference is stored and when it's INCREFed.
--
versions: -Python 2.7
Changes by Nathaniel Smith <n...@pobox.com>:
--
pull_requests: +2057
<http://bugs.python.org/issue30594>
Changes by Nathaniel Smith <n...@pobox.com>:
--
pull_requests: +2058
<http://bugs.python.org/issue30594>
Changes by Nathaniel Smith <n...@pobox.com>:
--
nosy: +njs
<http://bugs.python.org/issue17305>
Nathaniel Smith added the comment:
If the SSL module followed the pattern of encoding all str to bytes at the
edges while leaving bytes alone, and used exclusively bytes internally (and in
this case by "bytes" I mean "bytes objects containing A-labels"), then it would
a
New submission from Nathaniel Smith:
If you pass a server_hostname= that fails IDNA decoding to
SSLContext.wrap_socket or SSLContext.wrap_bio, then the SSLContext object has a
spurious Py_DECREF called on it, eventually leading to segfaults.
Demo attached.
--
assignee
Changes by Nathaniel Smith <n...@pobox.com>:
--
pull_requests: +2056
<http://bugs.python.org/issue30594>
Nathaniel Smith added the comment:
I can think of two downsides to using __annotations__ for this:
1) Obviously contextlib itself isn't going to add any kind of annotation in any
versions before 3.7, but third-party projects might (like contextlib2, say).
And these projects have been known
Nathaniel Smith added the comment:
My understanding is that the major difference between a real traceback object
and a TracebackException object is that the latter is specialized for printing,
so it can be lighter weight (no pinning of frame objects in memory), but loses
some utility (can't
Nathaniel Smith added the comment:
Uh, please ignore the random second paste of the jinja2 URL in the middle of
the second to last paragraph.
New submission from Nathaniel Smith:
Currently, traceback objects don't expose any public constructor, are
immutable, and don't have a __dict__ or allow subclassing, which makes it
impossible to add extra annotations to them.
It would be nice if these limitations were lifted, because
Changes by Nathaniel Smith <n...@pobox.com>:
--
nosy: +njs
<http://bugs.python.org/issue30482>
New submission from Nathaniel Smith:
A common problem when working with async functions is to attempt to call them
but forget the 'await', which eventually leads to a 'Warning: coroutine ... was
never awaited' (possibly buried in the middle of a bunch of traceback shrapnel
caused by follow
Nathaniel Smith added the comment:
> Yes, whenever you touch frames you're disabling the JIT for the call site
> (and maybe for more call sites up the stack, idk). So it doesn't matter what
> you use, `f_func` or `f_locals`, the performance will suffer big time. Is
> tha
Nathaniel Smith added the comment:
On further thought, I think the way I'd write a test for this is:
(1) add a testing primitive that waits for N instructions and then injects a
SIGINT. Probably this would require tweaking the definition of
Py_MakePendingCalls like I described in my previous
Nathaniel Smith added the comment:
> I'm not sure I understand how `f_func` would help to better handle Control-C
> in Trio. Nathaniel, could you please elaborate on that?
Sure. The issue is that I need to mark certain frames as "protected" from
KeyboardInterrupt, in a wa
Nathaniel Smith added the comment:
Certainly which frame is being executed is an implementation detail, and I can
see an argument from that that we shouldn't have a frame introspection API at
all... but we do have one, and it has some pretty important use cases, like
traceback printing
Nathaniel Smith added the comment:
> If all you need is that with foo: pass guarantees that either both or neither
> of __enter__ and __exit__ are called, for C context managers, and only C
> context managers, then the fix is trivial.
It would be nice to have it for 'async with
Nathaniel Smith added the comment:
Debian testing, x86-64, with:
Python 3.5.3rc1 (default, Jan 3 2017, 04:40:57)
[GCC 6.3.0 20161229] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
>
Nathaniel Smith added the comment:
Right, fixing this bug alone can't make programs control-C safe in general. But
there exist techniques to make __enter__/__exit__ methods safe WRT control-C,
and if we fix this bug *and* apply those techniques then you can get some
meaningful guarantees
Nathaniel Smith added the comment:
I'd also like to make use of this in trio, as a way to get safer and less
complicated control-C handling without having to implement things in C.
(Exhaustive discussion:
https://vorpus.org/blog/control-c-handling-in-python-and-trio/)
@Nick: I understand
Nathaniel Smith added the comment:
Oddly, I expected to run into this with my code using SSLObject in trio [1],
but if I connect to python.org:443 and then 'await
trio_ssl_stream.do_handshake(); trio_ssl_stream.getpeercert()' it works just
fine ... even though when I run the sslbugs.py script
New submission from Nathaniel Smith:
The SSL_shutdown man page says that if it returns 0, and an SSL_ERROR_SYSCALL
is set, then SSL_ERROR_SYSCALL should be ignored - or at least I think that's
what it's trying to say. See the RETURN VALUES section. I think this means we
should only raise
Nathaniel Smith added the comment:
@Dima:
> @njs: to point out that usefulness of this module is not just wishful
> thinking. I just used it to locate, up to the line in a Python extension
> module written in C, a bug in Sagemath (that has perhaps 20 FPU-using
> extensions,
Nathaniel Smith added the comment:
> (BTW do you happen to know any tricks to force CPython to do an immediate
> PyErr_CheckSignals on Windows?)
Never mind on this... it looks like calling repr() on any object is sufficient.
Nathaniel Smith added the comment:
> While I suggest you to *not* use an event loop (wakeup fd pipe/socket handle
> with select) and signal.signal(), you are true that there is a race condition
> if you use select() with signal.signal() so I merged your change.
Unfortunately this is
Nathaniel Smith added the comment:
Another option you might want to consider is proposing to add a proper fpu
control flag setting/checking API to the math module.
Nathaniel Smith added the comment:
Also fixing the abi issues that started this, and probably making an argument
for why it makes sense for all of cpython's built-in float operations to check
the fpu flags, and to do so using a weird longjmp-based mechanism that only
some platforms support
Nathaniel Smith added the comment:
@Dima: are you volunteering to fix and maintain it? I can see why it's useful
to have some way to get at the fpu flags, but I don't see how fpectl
specifically helps with that issue, and fpectl has always been broken on x86-64
New submission from Nathaniel Smith:
sphinxcontrib-trio [1] does a few things; one of them is to enhance sphinx's
autodoc support by trying to sniff out the types of functions so that it can
automatically determine that something is, say, a generator, or an async
classmethod.
This runs
Nathaniel Smith added the comment:
@Jonathan: Even 3.6.1 was careful to retain compatibility with code built by
3.6.0. And your proposed 3.6.1-patched will generate binaries equivalent to the
ones 3.6.0 generates. So I don't think you need to worry; 3.6.2 is not going to
add a new and worse
Nathaniel Smith added the comment:
Looks interesting! What's the advantage over running the server and the test in
the same loop? The ability to use blocking operations in the tests, and to
re-use an expensive-to-start server over multiple tests? (I've mostly used
threads in tests to run
Nathaniel Smith added the comment:
I don't find it helpful to think of it as declaring 3.6.0 broken vs declaring
3.6.1 broken. 3.6.0 is definitely good in the sense that if you build a module
on it then it will import on both 3.6.0 and 3.6.1, and 3.6.1 is definitely good
in the sense
Nathaniel Smith added the comment:
More collateral damage: apparently the workaround that Pandas used for this bug
(#undef'ing PySlice_GetIndicesEx) broke PyPy, so now they need a workaround for
the workaround: https://github.com/pandas-dev/pandas/pull/16192
Recording this here partly
Nathaniel Smith added the comment:
Pillow also had broken wheels up on pypi for a while; they've now put out a bug
fix release that #undef's PySlice_GetIndicesEx, basically monkeypatching out
the bugfix to get back to the 3.6.0 behavior:
https://github.com/python-pillow/Pillow/issues/2479
Nathaniel Smith added the comment:
Apparently this also broke pyqt for multiple users; here's the maintainers at
conda-forge struggling to figure out the best workaround:
https://github.com/conda-forge/pyqt-feedstock/pull/25
I really think it would be good to fix this in 3.6 sooner rather
Nathaniel Smith added the comment:
Oh, I should also say that this isn't actually affecting me, I just figured
that once I was aware of the bug it was worth making a record here. Might be a
good starter bug for someone trying to get into CPython's internals
New submission from Nathaniel Smith:
As pointed out in this stackoverflow answer:
http://stackoverflow.com/a/43578450/
and since I seem to be collecting signal-handling bugs these days :-), there's
a race condition in how the interpreter uses _PyOS_SigintEvent to allow
control-C to break out
Changes by Nathaniel Smith <n...@pobox.com>:
--
title: If you forget to call do_handshake, then everything seems to work but
hostname is disabled -> If you forget to call do_handshake, then everything
seems to work but hostname checking is
New submission from Nathaniel Smith:
Basically what it says in the title... if you create an SSL object via
wrap_socket with do_handshake_on_connect=False, or via wrap_bio, and then
forget to call do_handshake and just go straight to sending and receiving data,
then the encrypted connection
Nathaniel Smith added the comment:
FYI: https://github.com/pandas-dev/pandas/pull/16066
Nathaniel Smith added the comment:
I haven't noticed the error in the wild because I don't use set_wakeup_fd on
Linux/MacOS, because of this issue :-). But on MacOS literally all it would
take is to receive two signals in quick succession, or to receive one signal at
a moment when someone has
New submission from Nathaniel Smith:
When a wakeup fd is registered via signal.set_wakeup_fd, then the C level
signal handler writes a byte to the wakeup fd on each signal received. If this
write fails, then it prints an error message to the console.
Some projects use the wakeup fd as a way
Nathaniel Smith added the comment:
The attached script wakeup-fd-racer.py fails consistently for me using cpython
3.6.0 on my windows 10 vm:
> python wakeup-fd-racer.py
Attempt 0: start
Attempt 0: FAILED, took 10.0160076 seconds
select_calls = 2
(It may help that the VM only has 1
Nathaniel Smith added the comment:
If it helps, notice that the SetEvent(sigint_event) call used to wake up the
main thread on windows is also performed unconditionally and after the call to
Py_AddPendingEvent. From the point of view of twisted/tornado/trio, this is
exactly the same
Nathaniel Smith added the comment:
I think the idea in c13ef6664998 wasn't so much that we wanted the wakeup fd to
be written to first, as that the way the code was written back then, the
presence of 'if (is_tripped) return;' meant that it wasn't getting written to
*at all* in some cases
Nathaniel Smith added the comment:
Err, libuv obviously doesn't use a Python-level signal handler. I just meant to
include them as another example of a library I checked that uses a self-pipe to
handle signals but relies on out-of-band information to transmit what the
actual signal
Nathaniel Smith added the comment:
Right. My claim would be that the PR I just submitted is the correct fix for
bpo-21645 as well.
The approach asyncio uses is very elegant, but unfortunately it assumes that
the wakeup fd has infinite buffer, which isn't true. If enough signals or other
Changes by Nathaniel Smith <n...@pobox.com>:
--
pull_requests: +1226
<http://bugs.python.org/issue30038>
Changes by Nathaniel Smith <n...@pobox.com>:
--
pull_requests: +1224
<http://bugs.python.org/issue30039>
New submission from Nathaniel Smith:
If we have a chain of generators/coroutines that are 'yield from'ing each
other, then resuming the stack works like:
- call send() on the outermost generator
- this enters _PyEval_EvalFrameDefault, which re-executes the YIELD_FROM opcode
- which calls send
New submission from Nathaniel Smith:
In trip_signal [1], the logic goes:
1) set the flag saying that this particular signal was tripped
2) write to the wakeup fd
3) set the global is_tripped flag saying "at least one signal was tripped", and
do Py_AddPendingCall (which sets some gl
Nathaniel Smith added the comment:
It looks like PyTorch got bitten by this:
https://discuss.pytorch.org/t/pyslice-adjustindices-error/1475/11
Nathaniel Smith added the comment:
> Can we consider 3.6.0 rather than 3.6.1 as broken release?
In the last week, pypi downloads were about evenly split between 3.6.0 and
3.6.1 (2269969 for "3.6.1", 1927189 for "3.6.0", and those two were ~2 orders
of magnitude more co
New submission from Nathaniel Smith:
You might hope the interpreter would enforce the invariant that for 'with' and
'async with' blocks, either '__(a)enter__' and '__(a)exit__' are both called,
or else neither of them is called. But it turns out that this is not true once
KeyboardInterrupt
Nathaniel Smith added the comment:
It does make sense to skip emitting a warning if there's no try or with block
active.
Don't we already have the ability to check for this, though, without any
extensions to the frame or code objects? That's what the public
PyGen_NeedsFinalizing does, right
New submission from Nathaniel Smith:
In the process of fixing issue 27867, a new function PySlice_AdjustIndices was
added, and PySlice_GetIndicesEx was converted into a macro that calls this new
function. The patch was backported to both the 3.5 and 3.6 branches, was
released in 3.6.1
Nathaniel Smith added the comment:
(oh, in case it wasn't obvious: the advantage of raise() over kill() and
pthread_kill() is that raise() works everywhere, including Windows, so it would
avoid platform specific logic. Or if you don't like raise() for some reason
then you can get the same
Nathaniel Smith added the comment:
If you want to trigger the standard signal handling logic, then raise(SIGINT)
is also an option. On unix it probably won't help because of issue 21895, but
using raise() here + fixing 21895 by using pthread_kill in the c level signal
handler would together
Nathaniel Smith added the comment:
Letting Python-level signal handlers run in arbitrary threads is an interesting
idea, but it's a non-trivial change to Python semantics that may well break
some programs (previously it was guaranteed that signal handlers couldn't race
with main thread code
Nathaniel Smith added the comment:
@haypo: okay, I looked things over for a third time and this time I found my
very silly error :-). So I'm now able to use set_wakeup_fd on Windows
(https://github.com/python-trio/trio/pull/108), but not on Unix
(https://github.com/python-trio/trio/issues
Nathaniel Smith added the comment:
@haypo: It's a socketpair. It works fine when I set up a toy test case using
set_wakeup_fd + select, and it works fine in my real code when I use CFFI
cleverness to register a signal handler that manually writes a byte to my
wakeup socket, but when I pass
Nathaniel Smith added the comment:
I don't really have a specific use case personally -- for trio, I haven't found
a way to make use of set_wakeup_fd because of various issues[1], but I'm also
not planning to use SIGCHLD, so this isn't very urgent.
In general set_wakeup_fd can be a workaround
Nathaniel Smith added the comment:
It turns out that this bug is more general than signal.pause, and has caused
problems for a few different people:
https://github.com/dabeaz/curio/issues/118#issuecomment-287735781
https://github.com/dabeaz/curio/issues/118#issuecomment-287798241
https