New submission from Tom Christie:
Raising an issue that's impacting us on `httpx`.
It appears that in some cases SSL unwrapping can cause `.wait_closed()` to hang
indefinitely.
Trio is particularly careful to work around this case, and has an extensive
comment on it:
https://github.com
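On the client side, a defensive pattern is to bound the wait with a timeout so a
stalled TLS shutdown can't hang forever. A minimal sketch, assuming you hold an
`asyncio.StreamWriter` (the helper name and the timeout value are illustrative,
not part of asyncio):

```python
import asyncio


async def close_transport(writer: asyncio.StreamWriter, timeout: float = 5.0) -> None:
    # `timeout` is an illustrative safeguard, not an asyncio API parameter.
    writer.close()
    try:
        # wait_closed() can hang indefinitely if the SSL unwrap never
        # completes, so cap how long we are willing to wait for it.
        await asyncio.wait_for(writer.wait_closed(), timeout)
    except asyncio.TimeoutError:
        # The close handshake never finished; give up rather than hang.
        pass
```

This doesn't fix the underlying issue, but it keeps a misbehaving peer from
wedging application shutdown.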
Tom Christie added the comment:
Right, and `requests` *does* provide both those styles.
The point, rather, is that *not* having closed the transport at the point of
exit shouldn't end up raising a hard error. It doesn't raise errors in
sync-land, and it shouldn't do so in async-land.
Tom Christie added the comment:
> From my understanding, the correct code should close all transports and wait
> for their connection_lost() callbacks before closing the loop.
Ideally, yes, although we should be able to expect that an SSL connection that
hasn't been gracefully
Tom Christie added the comment:
This appears somewhat related: https://bugs.python.org/issue34506
As it *also* logs exceptions occurring during `_fatal_error` and `_force_close`.
--
___
Python tracker
<https://bugs.python.org/issue36
New submission from Tom Christie :
If an asyncio SSL connection is left open (e.g. any kind of keep-alive
connection), then after closing the event loop, an exception will be raised...
Python (the snippet below is a minimal sketch; the exact host is incidental):
```
import asyncio
import ssl

import certifi


async def f():
    # Open a TLS connection and deliberately leave it open,
    # e.g. any long-lived keep-alive connection.
    ssl_context = ssl.create_default_context(cafile=certifi.where())
    await asyncio.open_connection("example.org", 443, ssl=ssl_context)


asyncio.get_event_loop().run_until_complete(f())
```
Tom Christie <t...@tomchristie.com> added the comment:
Refs: https://github.com/python/cpython/pull/6617
New submission from Tom Christie <t...@tomchristie.com>:
The `contextvars` documentation, at
https://docs.python.org/3.7/library/contextvars.html starts with the following:
"This module provides APIs to manage, store, and access non-local state."
I assume that must be a d
Tom Christie added the comment:
Confirming that I've also bumped into this for Python 3.5.
A docs update would seem to be the lowest-cost option to start with.
Right now `mimetypes.guess_extension()` isn't terribly useful, and it'd be
better to at least know that upfront.
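A quick check illustrates the ambiguity (which extension comes back has
historically depended on platform and Python version, so no particular value
should be relied on):

```python
import mimetypes

# Several extensions map to "text/plain", but guess_extension() can only
# return one of them, and which one has historically been unpredictable.
ext = mimetypes.guess_extension("text/plain")
print(ext)
```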
Tom Christie added the comment:
I believe the status of this should be reassessed, and that Python should
default to escaping '\u2028' and '\u2029'. *Strictly* speaking this isn't a
bug, and is per the JSON spec.
*However* this *is* a bug in the JSON spec - which *should* be a strict subset
Tom Christie added the comment:
> There is an explicit note in the documentation about incompatibility with
> JavaScript.
That may be, but we're still unnecessarily making for a poorer user experience.
There's no good reason why we shouldn't just treat \u2028 and \u2029 as control
characters
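A minimal sketch of the mismatch (the pre-ES2019 JavaScript grammar is the
relevant point of comparison here):

```python
import json

# With ensure_ascii=False, U+2028 (LINE SEPARATOR) passes through unescaped.
# That is valid JSON, but a SyntaxError inside a JavaScript string literal
# until ES2019 relaxed the grammar.
payload = json.dumps("\u2028", ensure_ascii=False)
print("\u2028" in payload)  # True: the raw separator lands in the output
```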
New submission from Tom Christie:
This is one of those behavioural issues that is a borderline bug.
The `separators` argument to `json.dumps()` behaves differently across Python 2
and 3.
* In Python 2 it should be provided as a bytestring, and can cause a
UnicodeDecodeError otherwise
Tom Christie added the comment:
Not too fussed if this is addressed or not, but I think this was closed a
little prematurely.
I don't think there's a problem under Python 3; that's entirely reasonable.
However under Python 2, `json.dumps()` will normally handle *either* bytes or
unicode
Tom Christie added the comment:
> But only if you use non-ascii in the binary input, in which case you get an
> encoding error, which is a correct error.
Kind of, except that this (python 2.7) works just fine:
data = {'snowman': '☃'}
json.dumps(data, ensure_ascii=False)
'{snowman
Tom Christie added the comment:
> So, as soon as (but only as soon as) you mix unicode with your non-ascii
> data, your program blows up.
Indeed. For context though, in my example of running into this, the unicode
literals used as separators weren't even in the same package as the non-ASCII binary
New submission from Tom Christie t...@tomchristie.com:
json.dumps() documentation is slightly incorrect.
http://docs.python.org/library/json.html#json.dumps
Reads:
If ensure_ascii is False, then the return value will be a unicode instance.
Should read:
If ensure_ascii is False