[issue47258] Python 3.10 hang at exit in drop_gil() (due to resource warning at exit?)

2022-04-08 Thread Richard Purdie


New submission from Richard Purdie :

We had a python hang at shutdown. The gdb python backtrace and C backtraces are 
below. It is hung in the COND_WAIT(gil->switch_cond, gil->switch_mutex) call in 
drop_gil().

handle_system_exit() -> Py_Exit() -> Py_FinalizeEx() -> PyGC_Collect() -> 
handle_weakrefs() -> drop_gil()

I think from the stack trace it may have been printing the warning:

sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper 
name='/home/pokybuild/yocto-worker/oe-selftest-fedora/build/build-st-1560250/bitbake-cookerdaemon.log'
 mode='a+' encoding='UTF-8'>

However, I'm not sure whether it was that warning or an attempt to show a 
different exception. Even if we do have a resource leak, it shouldn't hang!

(gdb) py-bt
Traceback (most recent call first):
  File "/usr/lib64/python3.10/weakref.py", line 106, in remove
def remove(wr, selfref=ref(self), _atomic_removal=_remove_dead_weakref):
  Garbage-collecting

#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, 
op=393, expected=0, futex_word=0x7f0f7bd54b20 <_PyRuntime+512>) at 
futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x7f0f7bd54b20 
<_PyRuntime+512>, expected=expected@entry=0, clockid=clockid@entry=0, 
abstime=abstime@entry=0x0, 
private=private@entry=0, cancel=cancel@entry=true) at futex-internal.c:87
#2  0x7f0f7b88979f in __GI___futex_abstimed_wait_cancelable64 
(futex_word=futex_word@entry=0x7f0f7bd54b20 <_PyRuntime+512>, 
expected=expected@entry=0, clockid=clockid@entry=0, 
abstime=abstime@entry=0x0, private=private@entry=0) at futex-internal.c:139
#3  0x7f0f7b88beb0 in __pthread_cond_wait_common (abstime=0x0, clockid=0, 
mutex=0x7f0f7bd54b28 <_PyRuntime+520>, cond=0x7f0f7bd54af8 <_PyRuntime+472>) at 
pthread_cond_wait.c:504
#4  ___pthread_cond_wait (cond=cond@entry=0x7f0f7bd54af8 <_PyRuntime+472>, 
mutex=mutex@entry=0x7f0f7bd54b28 <_PyRuntime+520>) at pthread_cond_wait.c:619
#5  0x7f0f7bb388d8 in drop_gil (ceval=0x7f0f7bd54a78 <_PyRuntime+344>, 
ceval2=, tstate=0x558744ef7c10)
at /usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/ceval_gil.h:182
#6  0x7f0f7bb223e8 in eval_frame_handle_pending (tstate=) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/ceval.c:1185
#7  _PyEval_EvalFrameDefault (tstate=, f=, 
throwflag=) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/ceval.c:1775
#8  0x7f0f7bb19600 in _PyEval_EvalFrame (throwflag=0, 
f=Frame 0x7f0f7a0c8a60, for file /usr/lib64/python3.10/weakref.py, line 
106, in remove (wr=, selfref=, _atomic_removal=), tstate=0x558744ef7c10)
at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Include/internal/pycore_ceval.h:46
#9  _PyEval_Vector (tstate=, con=, 
locals=, args=, argcount=1, kwnames=)
at /usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/ceval.c:5065
#10 0x7f0f7bb989a8 in _PyObject_VectorcallTstate (kwnames=0x0, 
nargsf=9223372036854775809, args=0x7fff8b815bc8, callable=, 
tstate=0x558744ef7c10) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Include/cpython/abstract.h:114
#11 PyObject_CallOneArg (func=, 
arg=) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Include/cpython/abstract.h:184
#12 0x7f0f7bb0fce1 in handle_weakrefs (old=0x558744edbd30, 
unreachable=0x7fff8b815c70) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Modules/gcmodule.c:887
#13 gc_collect_main (tstate=0x558744ef7c10, generation=2, 
n_collected=0x7fff8b815d50, n_uncollectable=0x7fff8b815d48, nofail=0)
at /usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Modules/gcmodule.c:1281
#14 0x7f0f7bb9194e in gc_collect_with_callback 
(tstate=tstate@entry=0x558744ef7c10, generation=generation@entry=2)
at /usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Modules/gcmodule.c:1413
#15 0x7f0f7bbc827e in PyGC_Collect () at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Modules/gcmodule.c:2099
#16 0x7f0f7bbc7bc2 in Py_FinalizeEx () at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/pylifecycle.c:1781
#17 0x7f0f7bbc7d7c in Py_Exit (sts=0) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/pylifecycle.c:2858
#18 0x7f0f7bbc4fbb in handle_system_exit () at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/pythonrun.c:775
#19 0x7f0f7bbc4f3d in _PyErr_PrintEx (set_sys_last_vars=1, 
tstate=0x558744ef7c10) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/pythonrun.c:785
#20 PyErr_PrintEx (set_sys_last_vars=1) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/pythonrun.c:880
#21 0x7f0f7bbbcece in PyErr_Print () at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/pythonrun.c:886
#22 _PyRun_SimpleFileObject (fp=, filename=, 
closeit=1, flags=0x7fff8b815f18) at 
/usr/src/debug/python3.10-3.10.4-1.fc35.x86_64/Python/pythonrun.c:462
#23 0x7f0f7bbbcc57 in _PyRun_AnyFileObject (fp=0x558744ed9370, 
filename='/home/pokybuild/yocto-worker/oe-selftest-fedora/build/bitbake/

[issue47139] pthread_sigmask needs SIG_BLOCK behaviour explaination

2022-04-05 Thread Richard Purdie


Richard Purdie  added the comment:

I think the python code implementing pthread_sigmask already does trigger 
interrupts if any have been queued before the function returns from blocking or 
unblocking.

The key subtlety which I initially missed is that if you have another thread in 
your python script, any interrupt it receives can be raised in the main thread 
whilst you're in the SIG_BLOCK section. This obviously isn't what you expect at 
all, as those interrupts are supposed to be blocked! It isn't really practical 
to try and SIG_BLOCK on all your individual threads.

What I'd wondered about is what you mention: specifically, checking in the 
python signal-raising code whether a signal is masked, with something like 
"pthread_sigmask(SIG_UNBLOCK, NULL /* set */, )", before raising it, and if it 
is blocked, just leaving it queued. The current code would then trigger the 
interrupts when the signal was unmasked. This would effectively only apply on 
the main thread, where all the signals/interrupts are raised.

This would certainly give the behaviour that would be expected from the calls 
and save everyone from implementing workarounds like the one I have. Due to the 
threads issue, I'm not sure SIG_BLOCK is actually useful in the real world with 
the current implementation, unfortunately.

Equally, if that isn't an acceptable fix, documenting it would definitely be 
good too.

--




[issue47195] importlib lock race issue in deadlock handling code

2022-04-02 Thread Richard Purdie


Richard Purdie  added the comment:

This is a production backtrace taken after I inserted code to print a traceback 
if tid was already in _blocking_on. It is being triggered by a warning about an 
unclosed asyncio event loop and confirms my theory about nested imports; in the 
production case I'd guess it is being triggered by gc, given the __del__.

  File 
"/home/pokybuild/yocto-worker/oe-selftest-fedora/build/meta/classes/base.bbclass",
 line 26, in oe_import
import oe.data
  File "", line 1024, in _find_and_load
  File "", line 171, in __enter__
  File 
"/home/pokybuild/yocto-worker/oe-selftest-fedora/build/bitbake/lib/bb/cooker.py",
 line 168, in acquire
return orig_acquire(self)
  File "", line 110, in acquire
  File "/usr/lib64/python3.10/asyncio/base_events.py", line 685, in __del__
_warn(f"unclosed event loop {self!r}", ResourceWarning, source=self)
  File "/usr/lib64/python3.10/warnings.py", line 112, in _showwarnmsg
_showwarnmsg_impl(msg)
  File "/usr/lib64/python3.10/warnings.py", line 28, in _showwarnmsg_impl
text = _formatwarnmsg(msg)
  File "/usr/lib64/python3.10/warnings.py", line 128, in _formatwarnmsg
return _formatwarnmsg_impl(msg)
  File "/usr/lib64/python3.10/warnings.py", line 56, in _formatwarnmsg_impl
import tracemalloc
  File "", line 1024, in _find_and_load
  File "", line 171, in __enter__
  File 
"/home/pokybuild/yocto-worker/oe-selftest-fedora/build/bitbake/lib/bb/cooker.py",
 line 167, in acquire
bb.warn("\n".join(traceback.format_stack()))

--




[issue47195] importlib lock race issue in deadlock handling code

2022-04-01 Thread Richard Purdie


New submission from Richard Purdie :

We've seen tracebacks in production like:

  File "", line 1004, in 
_find_and_load(name='oe.gpg_sign', import_=)
  File "", line 158, in 
_ModuleLockManager.__enter__()
  File "", line 110, in _ModuleLock.acquire()
 KeyError: 139622474778432

and

  File "", line 1004, in 
_find_and_load(name='oe.path', import_=)
  File "", line 158, in 
_ModuleLockManager.__enter__()
  File "", line 110, in _ModuleLock.acquire()
 KeyError: 140438942700992

I've attached a reproduction script which shows that if an import XXX is in 
progress and waiting at the wrong point when an interrupt arrives (in this case 
a signal) and triggers its own import YYY, _blocking_on[tid] in 
importlib/_bootstrap.py gets overwritten and lost, triggering the traceback we 
see above upon exit from the second import.

I'm using a signal handler here as the interrupt; I don't know what our 
production source is yet, but this reproducer proves it is possible.
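
For illustration, a rough sketch of the overwrite mechanism (this is not the 
attached testit2.py, and _blocking_on below is a simplified stand-in for the 
real importlib._bootstrap._blocking_on dict): the bookkeeping is keyed on the 
thread id alone, so a nested acquisition from the same thread, such as an 
import performed by a signal handler while another import is waiting, clobbers 
the outer entry and the outer cleanup then fails.

import threading

_blocking_on = {}  # simplified stand-in for importlib's per-thread dict

def fake_acquire(name, nested=None):
    tid = threading.get_ident()
    _blocking_on[tid] = name        # overwrites whatever the outer import stored
    try:
        if nested:                  # simulate an interrupt importing something else
            fake_acquire(nested)
    finally:
        del _blocking_on[tid]       # outer call fails: entry was already removed

fake_acquire("oe.data", nested="tracemalloc")   # raises KeyError(<thread id>)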

--
components: Interpreter Core
files: testit2.py
messages: 416517
nosy: rpurdie
priority: normal
severity: normal
status: open
title: importlib lock race issue in deadlock handling code
versions: Python 3.10
Added file: https://bugs.python.org/file50714/testit2.py




[issue47139] pthread_sigmask needs SIG_BLOCK behaviour explaination

2022-03-28 Thread Richard Purdie


New submission from Richard Purdie :

I've been struggling to get signal.pthread_sigmask to do what I expected it to 
do from the documentation. Having looked at the core python code handling 
signals, I now (think?!) I understand what is happening. It might be possible 
for python to improve the behaviour, or it might just be something to document; 
I'm not sure, but I thought I'd mention it.

I'd added pthread_sigmask(SIG_BLOCK, (SIGTERM,)) and 
pthread_sigmask(SIG_UNBLOCK, (SIGTERM,)) calls around a critical section I 
wanted to protect from the SIGTERM signal. I was still seeing SIGTERM inside 
that section. Using SIG_SETMASK to restore the mask instead of SIG_UNBLOCK 
behaves the same.

What I hadn't realised is that firstly python defers signals to a convenient 
point and secondly that signals are processed in the main thread regardless of 
the thread they arrived in. This means that I can see SIGTERM arrive in my 
critical section as one of my other threads created in the background by the 
core python libs helpfully handles it.  This makes SIG_BLOCK rather ineffective 
in any threaded code.

To work around it, I can add my own handlers and have them track whether a 
signal arrived, then handle any signals after my critical section by re-raising 
them. It is possible python itself could defer processing signals masked with 
SIG_BLOCK until they're unblocked. Alternatively, a note in the documentation 
warning of the pitfalls here might be helpful to save someone else from 
wondering what is going on!
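
For reference, a minimal sketch of that workaround (the sleep merely stands in 
for the real critical section):

import os
import signal
import time

deferred = []

def _defer(signum, frame):
    deferred.append(signum)            # just remember the signal for later

previous = signal.signal(signal.SIGTERM, _defer)
try:
    time.sleep(2)                      # stands in for the protected critical section
finally:
    signal.signal(signal.SIGTERM, previous)
    for signum in deferred:
        os.kill(os.getpid(), signum)   # re-deliver anything that arrived meanwhile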

--
components: Interpreter Core
messages: 416154
nosy: rpurdie
priority: normal
severity: normal
status: open
title: pthread_sigmask needs SIG_BLOCK behaviour explaination
versions: Python 3.10




[issue45943] kids10yrsap...@gmail.com

2021-11-30 Thread Qualyn Richard


Change by Qualyn Richard :


--
components: email
files: PSX_20210903_080553.jpg
nosy: barry, oktaine57, r.david.murray
priority: normal
severity: normal
status: open
title: kids10yrsap...@gmail.com
type: behavior
versions: Python 3.11
Added file: https://bugs.python.org/file50463/PSX_20210903_080553.jpg




[issue45936] collections.Counter drops key if value is 0 and updating using += operator

2021-11-29 Thread Richard Decal


New submission from Richard Decal :

In brief:

```
from collections import Counter
x = Counter({'a': 0, 'b': 1})
x.update(x)  # works: Counter({'a': 0, 'b': 2})
x += x  # expected: Counter({'a': 0, 'b': 3}) actual: Counter({'b': 3})
```

I expect `+=` and `.update()` to be synonymous. However, the += operator is 
deleting keys if the source Counter has a zero count to begin with:

```
x = Counter({'a': 1})
x += Counter({'a': 0})  # ok: Counter({'a': 1})

y = Counter({'a': 0})
y += y  # expected: Counter({'a': 0}) actual: Counter()
```

--
messages: 407348
nosy: crypdick
priority: normal
severity: normal
status: open
title: collections.Counter drops key if value is 0 and updating using += 
operator
type: behavior
versions: Python 3.8




[issue42738] subprocess: don't close all file descriptors by default (close_fds=False)

2021-10-26 Thread Richard Xia


Richard Xia  added the comment:

I'd like to provide another, non-performance-related use case for changing the 
default value of Popen's close_fds parameters back to False.

In some scenarios, a (non-Python) parent process may want its descendant 
processes to inherit a particular file descriptor and for each descendant 
process to pass that file descriptor on to its own children. In this scenario, 
a Python program may just be an intermediate script that calls out to multiple 
subprocesses, and closing the inheritable file descriptors by default would 
interfere with the parent process's ability to pass on that file descriptor to 
descendants.

As a concrete example, we have a (non-Python) build system and task runner that 
orchestrates many tasks to run in parallel. Some of those tasks end up invoking 
Python scripts that use subprocess.run() to run other programs. Our task runner 
intentionally passes an inheritable file descriptor that is unique to each task 
as a form of a keep-alive token; if the child processes continue to pass 
inheritable file descriptors to their children, then we can determine whether 
all of the processes spawned from a task have terminated by checking whether 
the last open handle to that file descriptor has been closed. This is 
particularly important when a process exits before its children, sometimes 
uncleanly due to being force killed by the system or by a user.

In our use case, Python's default value of close_fds=True interferes with our 
tracking scheme, since it prevents Python's subprocesses from inheriting that 
file descriptor, even though that file descriptor has intentionally been made 
inheritable.

While we are able to work around the issue by explicitly setting 
close_fds=False in as much of our Python code as possible, it's difficult to 
enforce this globally since we have many small Python scripts. We also have no 
control over any third party libraries that may possibly call Popen.

Regarding security, PEP 446 already makes it so that any files opened from 
within a Python program are non-inheritable by default, which I agree is a good 
default. One can make the argument that it's not Python's job to enforce a 
security policy on file descriptors that a Python process has inherited from a 
parent process, since Python cannot distinguish from descriptors that were 
accidentally or intentionally inherited.
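
For completeness, a minimal sketch of the explicit workaround (the keep-alive 
descriptor is simulated here with /dev/null; on POSIX, pass_fds keeps one 
specific descriptor open in the child even with close_fds=True, while 
close_fds=False preserves everything inheritable):

import os
import subprocess
import sys

keepalive_fd = os.open(os.devnull, os.O_RDONLY)   # stand-in for the inherited token fd
os.set_inheritable(keepalive_fd, True)

subprocess.run([sys.executable, '-c', 'print("child ran")'],
               pass_fds=(keepalive_fd,))          # selectively keep one fd open
subprocess.run([sys.executable, '-c', 'print("child ran")'],
               close_fds=False)                   # keep all inheritable fds open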

--
nosy: +richardxia




[issue37569] Complete your registration to Python tracker

2021-10-24 Thread Richard Hinerfeld


Richard Hinerfeld  added the comment:

I just get an error when  I visit the URL

On Sun, Oct 24, 2021 at 4:57 PM Python tracker 
wrote:

> To complete your registration of the user "rhinerfeld1" with
> Python tracker, please visit the following URL:
>
>
> https://bugs.python.org/?@action=confrego=MxJ6fZghVQdh3dhyE1fj8I7bFrmjfve9
>
>

--
nosy: +rhinerfeld1




[issue45601] test_tk and test_ttk_guionly fail with resource not availiable

2021-10-24 Thread Richard Hinerfeld


Richard Hinerfeld  added the comment:

running build_scripts
copying and adjusting /home/richard/Python-3.8.9/Tools/scripts/pydoc3 -> 
build/scripts-3.8
copying and adjusting /home/richard/Python-3.8.9/Tools/scripts/idle3 -> 
build/scripts-3.8
copying and adjusting /home/richard/Python-3.8.9/Tools/scripts/2to3 -> 
build/scripts-3.8
changing mode of build/scripts-3.8/pydoc3 from 644 to 755
changing mode of build/scripts-3.8/idle3 from 644 to 755
changing mode of build/scripts-3.8/2to3 from 644 to 755
renaming build/scripts-3.8/pydoc3 to build/scripts-3.8/pydoc3.8
renaming build/scripts-3.8/idle3 to build/scripts-3.8/idle3.8
renaming build/scripts-3.8/2to3 to build/scripts-3.8/2to3-3.8
./python  ./Tools/scripts/run_tests.py -v test_ttk_guionly
/home/richard/Python-3.8.9/python -u -W default -bb -E -m test -r -w -j 0 -u 
all,-largefile,-audio,-gui -v test_ttk_guionly
== CPython 3.8.9 (default, Oct 24 2021, 15:58:53) [GCC 10.2.1 20210110]
== Linux-5.10.0-9-amd64-x86_64-with-glibc2.29 little-endian
== cwd: /home/richard/Python-3.8.9/build/test_python_34348
== CPU count: 2
== encodings: locale=UTF-8, FS=utf-8
Using random seed 6980064
0:00:00 load avg: 0.32 Run tests in parallel using 4 child processes
0:00:00 load avg: 0.32 [1/1] test_ttk_guionly skipped (resource denied)
test_ttk_guionly skipped -- Use of the 'gui' resource not enabled

== Tests result: SUCCESS ==

1 test skipped:
test_ttk_guionly

Total duration: 957 ms
Tests result: SUCCESS

--




[issue45601] test_tk and test_ttk_guionly fail with resource not availiable

2021-10-24 Thread Richard Hinerfeld


New submission from Richard Hinerfeld :

Please note that test_tk and test_ttk_guionly fail when running testall
when compiling 3.8.9 python from source code.
Compiling on Linux Debian 64-bit bullseye 11.1.0 on a 2008 Mac Book.

--
components: Build
files: TestTK.txt
messages: 404942
nosy: rhinerfeld1
priority: normal
severity: normal
status: open
title: test_tk and test_ttk_guionly fail with resource not availiable
type: compile error
versions: Python 3.8
Added file: https://bugs.python.org/file50393/TestTK.txt




[issue5004] socket.getfqdn() doesn't cope properly with purely DNS-based setups

2021-10-22 Thread Richard van den Berg


Richard van den Berg  added the comment:

In that case Stijn Hope should create the PR since he wrote the patch. Anyone 
else could get in trouble for using his code without proper permission.

--




[issue45487] SSLEOFError regression with certain servers in Python 3.10

2021-10-22 Thread Richard

Richard  added the comment:

Never mind, I found the root cause after some debugging. Adding 
AES256-GCM-SHA384 to the cipher string resolved the issue.

And now I see that the release notes say this:

> The ssl module now has more secure default settings. Ciphers without forward 
> secrecy or SHA-1 MAC are disabled by default. Security level 2 prohibits weak 
> RSA, DH, and ECC keys with less than 112 bits of security. SSLContext 
> defaults to minimum protocol version TLS 1.2. Settings are based on Hynek 
> Schlawack’s research. (Contributed by Christian Heimes in bpo-43998.)
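
For anyone hitting the same thing, a sketch of the workaround (the exact cipher 
string is illustrative; set_ciphers() replaces the context's cipher list with 
the OpenSSL cipher-list string you give it):

```
import socket
import ssl

HOST, PORT = 'websocket-cs.vudu.com', 443

ctx = ssl.create_default_context()
ctx.set_ciphers('DEFAULT:AES256-GCM-SHA384')  # re-allow the cipher this server needs

with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as ssock:
        print('Connection successful')
```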

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue5004] socket.getfqdn() doesn't cope properly with purely DNS-based setups

2021-10-22 Thread Richard van den Berg


Richard van den Berg  added the comment:

Here is the updated patch. Is python5004-test.c enough as a test case?

--
Added file: https://bugs.python.org/file50390/python2.7-socket-getfqdn.patch




[issue5004] socket.getfqdn() doesn't cope properly with purely DNS-based setups

2021-10-22 Thread Richard van den Berg


Richard van den Berg  added the comment:

I just ran into this 12 year old issue. Can this be merged please?

--
nosy: +richard.security.consultant




[issue45487] SSLEOFError regression with certain servers in Python 3.10

2021-10-15 Thread Richard


Richard  added the comment:

Sorry, I mean it works fine with Python 3.9.2 from apt as well as Python 3.9.7 
from pyenv. But 3.10.0 and 3.11-dev from pyenv are broken.

--




[issue45487] SSLEOFError regression with certain servers in Python 3.10

2021-10-15 Thread Richard

Richard  added the comment:

Note that the same happens with pyenv-compiled Python 3.9.7 (same way as I 
compiled 3.10 and 3.11), to rule out issues with different installation methods:

```
❯ python3.9 -VV
Python 3.9.7 (default, Oct  8 2021, 10:30:22) 
[GCC 10.2.1 20210110]
```

--




[issue45487] SSLEOFError regression with certain servers in Python 3.10

2021-10-15 Thread Richard

New submission from Richard :

Starting in Python 3.10, TLS connections to certain servers (e.g. 
websocket-cs.vudu.com:443) are failing when it worked fine on Python 3.9 and 
earlier on the same system.


Minimal working example:

```
#!/usr/bin/env python3

import socket
import ssl

HOST = 'websocket-cs.vudu.com'
PORT = 443

sock = socket.create_connection((HOST, PORT))
ctx = ssl.create_default_context()
ssock = ctx.wrap_socket(sock, server_hostname=HOST)
print("Connection successful")
```


Output:
```
❯ python3.9 ssl_eof_test.py
Connection successful

❯ python3.10 ssl_eof_test.py
Traceback (most recent call last):
  File "/home/nyuszika7h/ssl_eof_test.py", line 11, in 
ssock = ctx.wrap_socket(sock, server_hostname=HOST)
  File "/home/nyuszika7h/.pyenv/versions/3.10.0/lib/python3.10/ssl.py", line 
512, in wrap_socket
return self.sslsocket_class._create(
  File "/home/nyuszika7h/.pyenv/versions/3.10.0/lib/python3.10/ssl.py", line 
1070, in _create
self.do_handshake()
  File "/home/nyuszika7h/.pyenv/versions/3.10.0/lib/python3.10/ssl.py", line 
1341, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:997)

❯ python3.11 ssl_eof_test.py
Traceback (most recent call last):
  File "/home/nyuszika7h/ssl_eof_test.py", line 11, in 
ssock = ctx.wrap_socket(sock, server_hostname=HOST)
^^^
  File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/ssl.py", line 
517, in wrap_socket
return self.sslsocket_class._create(
   ^
  File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/ssl.py", line 
1075, in _create
self.do_handshake()
^^^
  File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/ssl.py", line 
1346, in do_handshake
self._sslobj.do_handshake()
^^^
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:998)
```


System information:

```
❯ uname -a
Linux cadoth 5.10.0-8-amd64 #1 SMP Debian 5.10.46-5 (2021-09-23) x86_64 
GNU/Linux

❯ lsb_release -d
Description:Debian GNU/Linux 11 (bullseye)

❯ openssl version
OpenSSL 1.1.1k  25 Mar 2021

❯ python3.9 -VV
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110]

❯ python3.10 -VV
Python 3.10.0 (default, Oct  5 2021, 00:24:29) [GCC 10.2.1 20210110]

❯ python3.11 -VV
Python 3.11.0a1+ (heads/main:547d26aa08, Oct 15 2021, 17:35:52) [GCC 10.2.1 
20210110]

❯ python3.9 -c 'import ssl; print(ssl.OPENSSL_VERSION)'
OpenSSL 1.1.1k  25 Mar 2021

❯ python3.10 -c 'import ssl; print(ssl.OPENSSL_VERSION)'
OpenSSL 1.1.1k  25 Mar 2021

❯ python3.11 -c 'import ssl; print(ssl.OPENSSL_VERSION)'
OpenSSL 1.1.1k  25 Mar 2021
```

--
assignee: christian.heimes
components: SSL
messages: 404033
nosy: christian.heimes, nyuszika7h
priority: normal
severity: normal
status: open
title: SSLEOFError regression with certain servers in Python 3.10
type: behavior
versions: Python 3.10, Python 3.11




[issue24132] Direct sub-classing of pathlib.Path

2021-09-16 Thread Richard


Richard  added the comment:

I agree this would be nice. For now, I'm doing this as a hack:

class Path(type(pathlib.Path())):
...
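
Spelled out a little more (ensure_parent is a made-up method; the point is only 
that the subclass instantiates and behaves like a normal Path):

import pathlib

# Subclass the concrete class (PosixPath or WindowsPath) that pathlib.Path()
# actually instantiates on this platform; instances can then be created directly.
class Path(type(pathlib.Path())):

    def ensure_parent(self):
        self.parent.mkdir(parents=True, exist_ok=True)
        return self

p = Path('/tmp/example/file.txt')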

--
nosy: +nyuszika7h




[issue38222] pathlib Path objects should support __format__

2021-09-13 Thread Richard


Richard  added the comment:

Sorry, that should have been:

log_dir = Path('logs/{date}')

--




[issue38222] pathlib Path objects should support __format__

2021-09-13 Thread Richard


Richard  added the comment:

I would like for this to be reconsidered. Yes, you can use str(), but 
converting back and forth becomes really clunky:

log_dir = 'logs/{date}'
log_file = Path(str(log_dir).format(date=time.strftime('%Y-%m-%d'))) / 'log.txt'

--
nosy: +nyuszika7h




[issue45130] shlex.join() does not accept pathlib.Path objects

2021-09-07 Thread Richard


Richard  added the comment:

IMO comparing shlex.join() to str.join() is a mistake. Comparing it to 
subprocess.run() is more appropriate.

What do you mean by "proposal"? subprocess.run() already converts Path 
arguments to str since Python 3.6 (though IIRC this was broken on Windows until 
3.7 or so). It indeed does not convert int arguments, but like I said that's 
irrelevant here, you're the one who brought it up.

--




[issue45130] shlex.join() does not accept pathlib.Path objects

2021-09-07 Thread Richard


Richard  added the comment:

While it may be primarily intended to combine output from shlex.split() again, 
IMO it's useful for manually constructed command lines as well, for example 
displaying instructions to a user where a path may contain spaces and special 
characters and needs to be properly escaped.

As for converting int to str, since subprocess.run() does not do that, 
shlex.join() does not need to do so either. I never mentioned that, and while 
I could see that being useful as well, that would have to be a separate 
discussion.

There's more of a case for automatic conversion of Path objects, which are 
supposed to work seamlessly in most places where strings are accepted. Quirks 
like certain functions not accepting them, forcing a manual conversion to str, 
make pathlib a little annoying to use compared to os.path.
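
In the meantime, a sketch of the requested conversion done in a wrapper 
(join_cmd is a hypothetical helper, not part of the stdlib; it coerces 
path-like arguments with os.fspath() before quoting):

import os
import shlex
from pathlib import Path

def join_cmd(args):
    return shlex.join(os.fspath(a) if isinstance(a, os.PathLike) else a
                      for a in args)

print(join_cmd(['foo', Path('bar baz')]))   # foo 'bar baz'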

--




[issue45130] shlex.join() does not accept pathlib.Path objects

2021-09-07 Thread Richard


New submission from Richard :

When one of the items in the iterable passed to shlex.join() is a pathlib.Path 
object, it throws an exception saying it must be str or bytes. I believe it 
should accept Path objects just like other parts of the standard library such 
as subprocess.run() already do.


Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shlex
>>> from pathlib import Path
>>> shlex.join(['foo', Path('bar baz')])
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python3.9/shlex.py", line 320, in join
return ' '.join(quote(arg) for arg in split_command)
  File "/usr/lib/python3.9/shlex.py", line 320, in 
return ' '.join(quote(arg) for arg in split_command)
  File "/usr/lib/python3.9/shlex.py", line 329, in quote
if _find_unsafe(s) is None:
TypeError: expected string or bytes-like object
>>> shlex.join(['foo', str(Path('bar baz'))])
"foo 'bar baz'"


Python 3.11.0a0 (heads/main:fa15df77f0, Sep  7 2021, 18:22:35) [GCC 10.2.1 
20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shlex
>>> from pathlib import Path
>>> shlex.join(['foo', Path('bar baz')])
Traceback (most recent call last):
  File "", line 1, in 
  File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/shlex.py", 
line 320, in join
return ' '.join(quote(arg) for arg in split_command)
   ^
  File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/shlex.py", 
line 320, in 
return ' '.join(quote(arg) for arg in split_command)
^^
  File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/shlex.py", 
line 329, in quote
if _find_unsafe(s) is None:
   ^^^
TypeError: expected string or bytes-like object, got 'PosixPath'
>>> shlex.join(['foo', str(Path('bar baz'))])
"foo 'bar baz'"

--
components: Library (Lib)
messages: 401301
nosy: nyuszika7h
priority: normal
severity: normal
status: open
title: shlex.join() does not accept pathlib.Path objects
type: behavior
versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9




[issue45109] pipes seems designed for bytes but is str-only

2021-09-05 Thread Richard Tollerton


New submission from Richard Tollerton :

1. https://github.com/python/cpython/blob/3.9/Lib/pipes.py#L6

> Suppose you have some data that you want to convert to another format,
> such as from GIF image format to PPM image format.

2. https://docs.python.org/3.9/library/pipes.html

> Because the module uses /bin/sh command lines, a POSIX or compatible shell 
> for os.system() and os.popen() is required.

3. https://docs.python.org/3.9/library/os.html#os.popen

> The returned file object reads or writes text strings rather than bytes.


(1) and (3) are AFAIK mutually contradictory: you can't reasonably expect to 
shove GIFs down a str file object. I'm guessing that pipes is an API that never 
got its bytes API fleshed out?

My main interest in this is that I'm writing a large CSV to disk and wanted to 
pipe it through zstd first. And I wanted something like perl's open FILE, 
"|zstd -T0 -19 > out.txt.zst". But the CSV at present is all bytes. 
(Technically the content is all latin1 at the moment, so I may have a 
workaround, but I'm not 100% certain it will stay that way.)

What I'd like to see is for pipes.Template.open() to accept 'b' in flags, and 
for that to be handled in the usual way.
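
For what it's worth, the zstd case can be written today with subprocess, since 
Popen pipes carry bytes natively; the command, filename and row generator below 
are illustrative only, and zstd must be installed:

import subprocess

def csv_rows():                        # stand-in for the real CSV producer
    for i in range(1000):
        yield f'{i},value{i}\n'.encode('latin1')

with open('out.csv.zst', 'wb') as out, \
     subprocess.Popen(['zstd', '-T0', '-19'], stdin=subprocess.PIPE,
                      stdout=out) as proc:
    for chunk in csv_rows():
        proc.stdin.write(chunk)
    proc.stdin.close()                 # let zstd drain and exit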

--
components: Library (Lib)
messages: 401103
nosy: rtollert
priority: normal
severity: normal
status: open
title: pipes seems designed for bytes but is str-only
versions: Python 3.9




[issue42560] Improve Tkinter Documentation

2021-08-17 Thread Richard Sheridan


Change by Richard Sheridan :


--
nosy: +Richard Sheridan




[issue44401] const kwlist for PyArg_ParseTupleAndKeywords and PyArg_VaParseTupleAndKeywords

2021-06-11 Thread Richard


Change by Richard :


--
keywords: +patch
nosy: +immortalplants
nosy_count: 1.0 -> 2.0
pull_requests: +25274
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/26686




[issue44401] const kwlist for PyArg_ParseTupleAndKeywords and PyArg_VaParseTupleAndKeywords

2021-06-11 Thread Richard Barnes


New submission from Richard Barnes :

PyArg_ParseTupleAndKeywords and PyArg_VaParseTupleAndKeywords currently accept 
`kwlist` as `char **`; however, it is not modified by either function. 
Therefore, `const char **` might be better, since this allows calling code to 
take advantage of `const` safety.

--
components: C API
messages: 395674
nosy: r-barnes
priority: normal
severity: normal
status: open
title: const kwlist for PyArg_ParseTupleAndKeywords and 
PyArg_VaParseTupleAndKeywords
type: security




[issue44387] Not obvious that locale.LC_MESSAGES may not exist sometimes (e.g. on Windows)

2021-06-11 Thread Richard Mines


Richard Mines  added the comment:

If you need proof that it is possible for locale.LC_MESSAGES not to exist, I've 
attached a screenshot. It also shows that locale.LC_TIME may be equal to 5, 
which is the placeholder value used for locale.LC_MESSAGES when there is an 
ImportError:
https://github.com/python/cpython/blob/62f1d2b3d7dda99598d053e10b785c463fdcf591/Lib/locale.py#L57

OS: Windows 10 20H2
Python: 3.8.10
Exact link to get python: 
https://www.microsoft.com/ru-ru/p/python-38/9mssztt1n39l?activetab=pivot:overviewtab

--
Added file: https://bugs.python.org/file50102/lc_messages_not_exist.png




[issue44387] Not obvious that locale.LC_MESSAGES may not exist sometimes (e.g. on Windows)

2021-06-10 Thread Richard Mines


New submission from Richard Mines :

Documentation page:
https://docs.python.org/3/library/locale.html#locale.LC_MESSAGES

Code comment saying that locale.LC_MESSAGES doesn't exist sometimes:
https://github.com/python/cpython/blob/62f1d2b3d7dda99598d053e10b785c463fdcf591/Lib/locale.py#L25-L26

Code fragment showing that locale.LC_MESSAGES can be non-existent:
https://github.com/python/cpython/blob/62f1d2b3d7dda99598d053e10b785c463fdcf591/Lib/locale.py#L1747-L1752

Reading the documentation, it's not obvious that locale.LC_MESSAGES may not 
exist (e.g. Windows - Microsoft Store - Python 3.8).
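
Until that is documented, a defensive sketch (the fallback branch is just an 
example of what a caller might do):

import locale

LC_MESSAGES = getattr(locale, 'LC_MESSAGES', None)

if LC_MESSAGES is not None:
    locale.setlocale(LC_MESSAGES, '')
else:
    # No message-catalog category on this platform (e.g. some Windows builds);
    # fall back to whatever is appropriate for the application.
    pass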

--
assignee: docs@python
components: Documentation
messages: 395588
nosy: docs@python, richardmines91
priority: normal
severity: normal
status: open
title: Not obvious that locale.LC_MESSAGES may not exist sometimes (e.g. on 
Windows)
type: enhancement
versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 
3.9




[issue43946] unpickling a subclass of list fails when it implements its own extend method

2021-04-26 Thread Richard Levasseur


Richard Levasseur  added the comment:

Here's a self-contained repro:


```
import pickle

class MyList(list):
  def __init__(self, required, values):
self.required = required
super().__init__(values)

  def __getstate__(self):
return self.required

  def __setstate__(self, state):
self.required = state

  def extend(self, values):
assert self.required
super().extend(values)

mylist = MyList('foo', [1, 2])
pickled = pickle.dumps(mylist)
unpickled = pickle.loads(pickled)

print(mylist)

```

The above will raise an AttributeError when self.required is accessed in 
extend(). 

Oddly, defining a `__reduce__()` function that simply calls and returns 
`super().__reduce__()` seems to restore the previous behavior and things work 
again.

--
nosy: +richardlev




[issue43273] Mock `_mock_wraps` is undocumented and inconsistently named

2021-02-20 Thread Richard Wise


New submission from Richard Wise :

I am trying to use wraps to delegate a call to a decorated patch mock to 
another method. By examining the source code, I was able to achieve this using 
the (apparently undocumented) `Mock._mock_wraps` attribute instead of the 
`wraps` attribute which would be expected given the constructor parameter 
names. I find this behaviour very confusing and inconsistent. Can we either 
expose `Mock.wraps` attribute or document `_mock_wraps` accordingly?

Example:

import unittest
from unittest.mock import Mock, patch


class MockRepro(unittest.TestCase):

    @patch('foo')
    def test_side_effect(self, mock_foo):
        # Set side_effect in the constructor as per
        # https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock
        Mock(side_effect=[1, 2])
        # Or set it on the mock injected by the decorator
        mock_foo.side_effect = [1, 2]

    @patch('foo')
    def test_wraps(self, mock_foo):
        def wrapped_method():
            return 3
        # Set wraps in the constructor as per
        # https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock
        Mock(wraps=wrapped_method)
        # Or try to set it on the mock injected by the decorator
        mock_foo.wraps = wrapped_method  # This silently fails
        mock_foo._mock_wraps = wrapped_method  # Where does `_mock_wraps` come from?
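
For comparison, a sketch of what does work today (patch() forwards extra 
keyword arguments to the mock it constructs, so wraps can be given up front; 
os.getcwd is used only so the example is runnable):

import os
from unittest.mock import Mock, patch

def wrapped_method():
    return 3

m = Mock(wraps=wrapped_method)
assert m() == 3                 # the call is delegated to wrapped_method
assert m.call_count == 1        # and still recorded by the mock

with patch('os.getcwd', wraps=os.getcwd) as mock_getcwd:
    os.getcwd()                 # real value returned, call recorded
    mock_getcwd.assert_called_once()

This still does not address setting wraps after the mock has been created, 
which is what the report above is about.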

--
components: Library (Lib)
messages: 387397
nosy: Woodz
priority: normal
severity: normal
status: open
title: Mock `_mock_wraps` is undocumented and inconsistently named
versions: Python 3.8




[issue43237] datetime.__eq__ returns true when timezones don't match

2021-02-15 Thread Richard Wise


New submission from Richard Wise :

from datetime import datetime, timezone, timedelta

datetime_in_sgt = datetime(2021, 2, 16, 8, 0, 0, 
tzinfo=timezone(timedelta(hours=8)))
datetime_in_utc = datetime(2021, 2, 16, 0, 0, 0, tzinfo=timezone.utc)

print(datetime_in_sgt == datetime_in_utc)

Expected: False
Actual: True

Although these two datetimes represent the same instant on the timeline, they 
are not identical because they use different timezones. This means that when 
unit testing timezone handling, tests will incorrectly pass despite data being 
returned in UTC instead of the requested timezone, so we need to write code 
such as this:

# Timestamp comparison
self.assertEqual(datetime_in_sgt, datetime_in_utc)
# Timezone comparison
self.assertEqual(datetime_in_sgt.tzinfo, datetime_in_utc.tzinfo)

This is confusing and non-intuitive.

For examples of how other languages handle such comparison, can refer to: 
https://docs.oracle.com/javase/8/docs/api/java/time/ZonedDateTime.html#equals-java.lang.Object-
 and 
https://docs.oracle.com/javase/8/docs/api/java/time/Instant.html#equals-java.lang.Object-

--
components: Library (Lib)
messages: 387087
nosy: Woodz
priority: normal
severity: normal
status: open
title: datetime.__eq__ returns true when timezones don't match
versions: Python 3.8




[issue41629] __class__ not set defining 'X' as

2021-01-06 Thread Richard Neumann


Richard Neumann  added the comment:

I just stumbled across this issue trying to resolve this: 
https://bugs.python.org/issue42765?

While this fails:

from typing import NamedTuple


class Spamm(NamedTuple):

foo: int
bar: str

def __getitem__(self, index_or_key):
"""Returns the respective item."""
if isinstance(index_or_key, str):
try:
return getattr(self, index_or_key)
except AttributeError:
raise IndexError(index_or_key) from None

return super().__getitem__(index_or_key)

def keys(self):
return self._fields


def main():

spamm = Spamm(12, 'hello')
print(dir(spamm))
print(spamm._fields)
d = dict(spamm)
print(d)


if __name__ == '__main__':
main()


with

Traceback (most recent call last):
  File "/home/neumann/test.py", line 4, in 
class Spamm(NamedTuple):
RuntimeError: __class__ not set defining 'Spamm' as . 
Was __classcell__ propagated to type.__new__?


The following works:

from typing import NamedTuple


def _getitem(instance, index_or_key):
"""Returns the respective item."""

if isinstance(index_or_key, str):
try:
return getattr(instance, index_or_key)
except AttributeError:
raise IndexError(index_or_key) from None

return super().__getitem__(index_or_key)


def dicttuple(cls: tuple):
"""Extends a tuple class with methods for the dict constructor."""

cls.keys = lambda self: self._fields
cls.__getitem__ = _getitem
return cls


@dicttuple
class Spamm(NamedTuple):

foo: int
bar: str


def main():

spamm = Spamm(12, 'hello')
print(dir(spamm))
print(spamm._fields)
d = dict(spamm)
print(d)


if __name__ == '__main__':
main()


And produces:

['__add__', '__annotations__', '__class__', '__class_getitem__', 
'__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', 
'__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__gt__', 
'__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', 
'__lt__', '__module__', '__mul__', '__ne__', '__new__', '__orig_bases__', 
'__reduce__', '__reduce_ex__', '__repr__', '__rmul__', '__setattr__', 
'__sizeof__', '__slots__', '__str__', '__subclasshook__', '_asdict', 
'_field_defaults', '_fields', '_make', '_replace', 'bar', 'count', 'foo', 
'index', 'keys']
('foo', 'bar')
{'foo': 12, 'bar': 'hello'}


I am a bit baffled as to why it works when the method is injected via a decorator.

--
nosy: +conqp




[issue42765] Introduce new data model method __iter_items__

2021-01-06 Thread Richard Neumann


Richard Neumann  added the comment:

Okay, I found the solution. Not using super() works:

from typing import NamedTuple


class Spamm(NamedTuple):

foo: int
bar: str

def __getitem__(self, index_or_key):
if isinstance(index_or_key, str):
try:
return getattr(self, index_or_key)
except AttributeError:
raise KeyError(index_or_key) from None

return tuple.__getitem__(self, index_or_key)

def keys(self):
yield 'foo'
yield 'bar'


def main():

spamm = Spamm(12, 'hello')
print(spamm.__getitem__)
print(spamm.__getitem__(1))
d = dict(spamm)
print(d)


if __name__ == '__main__':
main()

Result:


hello
{'foo': 12, 'bar': 'hello'}

--




[issue42765] Introduce new data model method __iter_items__

2021-01-06 Thread Richard Neumann


Richard Neumann  added the comment:

Thank you all for your input.
I had a look at the aforementioned discussion and learned something new.
So I tried to implement the dict data model by implementing keys() and 
__getitem__() accordingly:

from typing import NamedTuple


class Spamm(NamedTuple):

foo: int
bar: str

def __getitem__(self, item):
if isinstance(item, str):
try:
return getattr(self, item)
except AttributeError:
raise KeyError(item) from None

return super().__getitem__(item)

def keys(self):
yield 'foo'
yield 'bar'


def main():

spamm = Spamm(12, 'hello')
print(spamm.__getitem__)
print(spamm.__getitem__(1))
d = dict(spamm)


if __name__ == '__main__':
main()


Unfortunately this will result in an error:

Traceback (most recent call last):
  File "/home/neumann/test.py", line 4, in 
class Spamm(NamedTuple):
RuntimeError: __class__ not set defining 'Spamm' as . 
Was __classcell__ propagated to type.__new__?

This seems to be caused by the __getitem__ implementation.
I found a corresponding issue here: https://bugs.python.org/issue41629
Can I assume that this is a pending bug and that I therefore cannot implement 
the desired behaviour until it is fixed?

--




[issue42768] super().__new__() of list expands arguments

2020-12-28 Thread Richard Neumann


Richard Neumann  added the comment:

I could have sworn that this worked before, but it was obviously me being 
tired at the end of the work day.
Thanks for pointing this out and sorry for the noise.

--




[issue42768] super().__new__() of list expands arguments

2020-12-28 Thread Richard Neumann


New submission from Richard Neumann :

When subclassing the built-in list, the invocation of super().__new__ will 
unexpectedly expand the passed arguments:

class MyTuple(tuple):

def __new__(cls, *items):
print(cls, items)
return super().__new__(cls, items)


class MyList(list):

def __new__(cls, *items):
print(cls, items)
return super().__new__(cls, items)


def main():

my_tuple = MyTuple(1, 2, 3, 'foo', 'bar')
print('My tuple:', my_tuple)
my_list = MyList(1, 2, 3, 'foo', 'bar')
print('My list:', my_list)


if __name__ == '__main__':
main()


Actual result:

 (1, 2, 3, 'foo', 'bar')
My tuple: (1, 2, 3, 'foo', 'bar')
 (1, 2, 3, 'foo', 'bar')
Traceback (most recent call last):
  File "/home/neumann/listbug.py", line 24, in 
main()
  File "/home/neumann/listbug.py", line 19, in main
my_list = MyList(1, 2, 3, 'foo', 'bar')
TypeError: list expected at most 1 argument, got 5


Expected:

 (1, 2, 3, 'foo', 'bar')
My tuple: (1, 2, 3, 'foo', 'bar')
 (1, 2, 3, 'foo', 'bar')
My list: [1, 2, 3, 'foo', 'bar']
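
For reference, a sketch of the same subclass arranged around the fact that 
list fills in its contents in __init__ rather than __new__ (which appears to be 
what the error above is pointing at):

class MyList(list):

    def __init__(self, *items):
        super().__init__(items)


my_list = MyList(1, 2, 3, 'foo', 'bar')
print('My list:', my_list)   # My list: [1, 2, 3, 'foo', 'bar']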

--
components: ctypes
messages: 383902
nosy: conqp
priority: normal
severity: normal
status: open
title: super().__new__() of list expands arguments
type: behavior
versions: Python 3.9




[issue42765] Introduce new data model method __iter_items__

2020-12-28 Thread Richard Neumann

New submission from Richard Neumann :

I have use cases in which I use named tuples to represent data sets, e.g:

class BasicStats(NamedTuple):
"""Basic statistics response packet."""

type: Type
session_id: BigEndianSignedInt32
motd: str
game_type: str
map: str
num_players: int
max_players: int
host_port: int
host_ip: IPAddressOrHostname

I want them to behave as intended, i.e. unpacking them should work as expected 
for a tuple:

type, session_id, motd, … = BasicStats(…)

I also want to be able to serialize them to a JSON-ish dict.
The NamedTuple has an _asdict method, that I could use.

json = BasicStats(…)._asdict()

But for the dict to be passed to JSON, I need to customize the dict 
representation, e.g. set host_ip to str(self.host_ip), since it might be a 
non-serializable ipaddress.IPv{4,6}Address. Doing this in a serialization hook 
of json.dumps() is a non-starter, since I cannot force the user to remember 
which types need to be converted in the several data structures.
Also, using _asdict() seems strange as an exposed API, since it's an underscore 
method and users hence might not be inclined to use it.

So what I did is to add a method to_json() to convert the named tuple into a 
JSON-ish dict:

def to_json(self) -> dict:
"""Returns a JSON-ish dict."""
return {
'type': self.type.value,
'session_id': self.session_id,
'motd': self.motd,
'game_type': self.game_type,
'map': self.map,
'num_players': self.num_players,
'max_players': self.max_players,
'host_port': self.host_port,
'host_ip': str(self.host_ip)
}

It would be nicer to have my type just return this appropriate dict when 
invoking dict(BasicStats(…)). This would require me to override the __iter__() 
method to yield key / value tuples for the dict.
However, this would break the natural behaviour of tuple unpacking as described 
above.

Hence, I propose to add a method __iter_items__(self) to the python data model 
with the following properties:

1) __iter_items__ is expected to return an iterator of 2-tuples representing 
key / value pairs.
2) the built-in function dict(), when called on an object, will attempt to 
create the object from __iter_items__ first and fall back to __iter__.

Alternative names could also be __items__ or __iter_dict__.
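
Just to make the intent concrete, the dict() side of the proposal could be 
emulated today with a helper along these lines (to_dict is hypothetical; 
nothing like it exists in the stdlib):

def to_dict(obj):
    iter_items = getattr(obj, '__iter_items__', None)
    if iter_items is not None:
        return dict(iter_items())   # preferred: explicit key/value pairs
    return dict(obj)                # fall back to normal iteration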

--
components: C API
messages: 383897
nosy: conqp
priority: normal
severity: normal
status: open
title: Introduce new data model method __iter_items__
type: enhancement
versions: Python 3.10




[issue25024] Allow passing "delete=False" to TemporaryDirectory

2020-12-06 Thread Richard


Richard  added the comment:

Sorry for reviving a 9-month-old issue, but IMO there was no good reason to 
reject this, especially when a patch was provided. Even if the context manager 
can be replaced with 3 lines of code, I still don't consider that very 
user-friendly.

My use case would be passing `delete=False` temporarily while debugging my 
script; it would be much simpler than using a wholly different, hacky method 
when the end goal is to change it back to `delete=True` once debugging is 
completed anyway.

What issues exactly does the addition of a simple option cause? I don't think 
something as trivial as this causes a maintenance burden, and you can't call it 
feature creep either when TemporaryDirectory doesn't have *any* other optional 
keyword arguments.

--
nosy: +nyuszika7h




[issue41891] asyncio.wait_for does not wait for task/future to be completed in all cases

2020-09-30 Thread Richard Kojedzinszky


Change by Richard Kojedzinszky :


--
keywords: +patch
pull_requests: +21487
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/22461




[issue41891] asyncio.wait_for does not wait for task/future to be completed in all cases

2020-09-30 Thread Richard Kojedzinszky


New submission from Richard Kojedzinszky :

This code should run without errors:

```
#!/usr/bin/env python

import asyncio

async def task1():
cv = asyncio.Condition()

async with cv:
await asyncio.wait_for(cv.wait(), 10)

async def main(loop):
task = loop.create_task(task1())

await asyncio.sleep(0)

task.cancel()

res = await asyncio.wait({task})

if __name__ == '__main__':
loop = asyncio.get_event_loop()

loop.run_until_complete(main(loop))
```

--
components: asyncio
messages: 377695
nosy: asvetlov, rkojedzinszky, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.wait_for does not wait for task/future to be completed in all 
cases
type: behavior
versions: Python 3.7, Python 3.8




[issue41795] Allow assignment in yield statement

2020-09-16 Thread Richard Neumann


Richard Neumann  added the comment:

Awesome, I didn't know that.
I tried it without the parens and it gave me a SyntaxError.
This can be closed then as it's obviously already implemented.
Let's get to refactoring.
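
For reference, the parenthesised form that works (Python 3.8+); the bare form 
"yield order := ..." is a SyntaxError:

def numbers():
    yield (first := 1)   # parentheses are required around the assignment expression
    yield first + 1

assert list(numbers()) == [1, 2]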

--




[issue41795] Allow assignment in yield statement

2020-09-16 Thread Richard Neumann


New submission from Richard Neumann :

I often write factory (deserialization) methods for ORM models for web 
application backends that produce a number of records (ORM model instances) of 
the model itself and of related database tables:

@classmethod
def from_json(cls, json):
"""Yields records from a JSON-ish dict."""
modules = json.pop('modules', None) or ()
order = super().from_json(json)
yield order

for module in modules:
yield OrderedModule(order=order, module=Module(module))

This yields the main record "order" and related records from OrderedModule, 
which have a foreign key to Order.
Thus I can save all records with:

for record in Order.from_json(json):
record.save()

Since I have several of those deserialization functions for multiple tables in 
multiple databases, it'd be nice to reduce the amount of code with some extra 
syntactic sugar, like:

@classmethod
def from_json(cls, json):
"""Yields records from a JSON-ish dict."""
modules = json.pop('modules', None) or ()
yield order = super().from_json(json)  # Assignment via "="

for module in modules:
yield OrderedModule(order=order, module=Module(module))

or:

@classmethod
def from_json(cls, json):
"""Yields records from a JSON-ish dict."""
modules = json.pop('modules', None) or ()
yield order := super().from_json(json)  # Assignment via ":="

for module in modules:
yield OrderedModule(order=order, module=Module(module))

I therefore propose allowing assignment of names in generator-like yield 
statements as described above.

--
messages: 376979
nosy: conqp
priority: normal
severity: normal
status: open
title: Allow assignment in yield statement
type: enhancement
versions: Python 3.10




[issue41714] multiprocessing.Queue deadlock

2020-09-04 Thread Richard Purdie


Richard Purdie  added the comment:

Even my hack of calling _writer.close() doesn't seem to be enough; it makes the 
problem rarer, but there is still an issue.
Basically, if you call cancel_join_thread() in one process, the queue is 
potentially totally broken in all other processes that may be using it. If, for 
example, another process has called join_thread() as it was exiting and has 
queued data at the same time as another process exits using 
cancel_join_thread() while holding the write lock, you'll deadlock on the 
processes now stuck in join_thread(), waiting for a lock they'll never get.
I suspect the answer is "don't use cancel_join_thread()" but perhaps the docs 
need a note to point out that if anything is already potentially exiting, it 
can deadlock? I'm not sure you can actually use the API safely unless you stop 
all users from exiting and synchronise that by other means?

--

___
Python tracker 
<https://bugs.python.org/issue41714>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41714] multiprocessing.Queue deadlock

2020-09-04 Thread Richard Purdie


Richard Purdie  added the comment:

I should also add that if we don't use cancel_join_thread() in the parser 
processes, things all work out OK. There is therefore seemingly something odd 
about the state that it leaves things in.
This issue doesn't occur every time; it's maybe 1 in 40 runs where we throw 
parsing errors, but I can brute-force reproduce it.

--

___
Python tracker 
<https://bugs.python.org/issue41714>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41714] multiprocessing.Queue deadlock

2020-09-04 Thread Richard Purdie


New submission from Richard Purdie :

We're having some problems with multiprocessing.Queue where the parent process 
ends up hanging with zombie children. The code is part of bitbake, the task 
execution engine behind OpenEmbedded/Yocto Project.

I've cut down our code to the pieces in question in the attached file. It 
doesn't give a runnable test case unfortunately but does at least show what 
we're doing. Basically, we have a set of items to parse, we create a set of 
multiprocessing.Process() processes to handle the parsing in parallel. Jobs are 
queued in one queue and results are fed back to the parent via another. There 
is a quit queue that takes sentinels to cause the subprocesses to quit.
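
A rough, hypothetical sketch of that structure (the real code is in the 
attached simplified.py; names and details here are illustrative only):

    import multiprocessing
    import queue

    def parser(jobs, results, quits):
        while True:
            try:
                quits.get_nowait()
                return                      # sentinel received, stop parsing
            except queue.Empty:
                pass
            try:
                item = jobs.get(timeout=0.25)
            except queue.Empty:
                continue
            results.put(('parsed', item))

    if __name__ == '__main__':
        jobs, results, quits = (multiprocessing.Queue() for _ in range(3))
        procs = [multiprocessing.Process(target=parser,
                                         args=(jobs, results, quits))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for item in range(10):
            jobs.put(item)
        for _ in procs:
            quits.put(None)                 # one sentinel per parser process
        for p in procs:
            p.join()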

If something fails to parse, shutdown with clean=False is called and the 
sentinels are sent. The Parser() process calls results.cancel_join_thread() on 
the results queue. We do this since we don't care about the results any more; 
we just want to ensure everything exits cleanly. This is where things go wrong. 
The Parser processes and their queues all turn into zombies. The parent process 
ends up stuck in self.result_queue.get(timeout=0.25) inside shutdown().

strace shows it has acquired the locks and is doing a read() on the os.pipe() 
it created. Unfortunately, since the parent still has a write channel open to 
the same pipe, it hangs indefinitely.

If I change the code to do:

    self.result_queue._writer.close()
    while True:
        try:
            self.result_queue.get(timeout=0.25)
        except (queue.Empty, EOFError):
            break

i.e. close the writer side of the pipe by poking at the queue internals, we 
don't see the hang. The .close() method would close both sides.

We create our own process pool since this code dates from python 2.x days and 
multiprocessing pools had issues back when we started using this. I'm sure it 
would be much better now but we're reluctant to change what has basically been 
working. We drain the queues since in some cases we have clean shutdowns where 
cancel_join_thread() hasn't been used and we don't want those cases to block.

My question is whether this is a known issue and whether there is some kind of 
API to close just the write side of the Queue to avoid problems like this?

--
components: Library (Lib)
files: simplified.py
messages: 376350
nosy: rpurdie
priority: normal
severity: normal
status: open
title: multiprocessing.Queue deadlock
type: crash
versions: Python 3.6
Added file: https://bugs.python.org/file49444/simplified.py

___
Python tracker 
<https://bugs.python.org/issue41714>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33479] Document tkinter and threads

2020-08-12 Thread Richard Sheridan


Change by Richard Sheridan :


--
nosy: +Richard Sheridan

___
Python tracker 
<https://bugs.python.org/issue33479>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39093] tkinter objects garbage collected from non-tkinter thread cause crash

2020-07-15 Thread Richard Sheridan


Richard Sheridan  added the comment:

I stumbled into this in another project and I want to +1 the uncommenting 
solution. The problem occurs on __del__ rather than specifically in the gc 
somewhere (it happens when refs drop to zero too), so I wouldn't worry too much 
about killing the garbage collector.

It also looks like fixing the python part would be about 3 lines of 
non-user-facing code with weakrefs. Are you sure that's no-go?

Would it be any help to roll this fix into https://bugs.python.org/issue41176 
and https://github.com/python/cpython/pull/21299 since we fixed the quit() docs 
there?
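
For context, a minimal sketch of the pattern being discussed (hypothetical, 
needs a display, and whether it actually crashes depends on how Tcl/Tk was 
built): a Variable created in the tkinter thread whose refcount drops to zero 
in another thread, so its __del__ runs there.

    import threading
    import tkinter

    root = tkinter.Tk()
    holder = [tkinter.StringVar(root)]

    def drop():
        holder.clear()    # __del__ runs here, in a non-tkinter thread

    t = threading.Thread(target=drop)
    t.start()
    t.join()
    root.destroy()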

--
nosy: +Richard Sheridan

___
Python tracker 
<https://bugs.python.org/issue39093>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41176] revise Tkinter mainloop dispatching flag behavior

2020-07-06 Thread Richard Sheridan


Richard Sheridan  added the comment:

I'm planning to write the long-awaited Tkinter Internals section of the docs. 
(https://github.com/python/cpython/blame/master/Doc/library/tk.rst#L48) I've 
spent too much time at this point to let it all go down the memory hole. 
Unfortunately, I don't know how ALL of the internals work. Is there someone 
else we could add to nosy that might be interested in writing some subsections?

Also, should this extended docs contribution be a new issue or rolled in with 
this one? Much abbreviated documentation of the new methods in this PR could be 
added to tkinter.rst. The new docs issue would be dependent on this issue since 
I won't be able to complete the docs until we have finished discussing what the 
future behavior of threads waiting for `dispatching` will be (error & poll vs 
Tcl_ConditionWait).

--

___
Python tracker 
<https://bugs.python.org/issue41176>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41176] revise Tkinter mainloop dispatching flag behavior

2020-07-05 Thread Richard Sheridan


Richard Sheridan  added the comment:

I'd like to consider one more possibility for future behavior that sort of came 
to mind while discussing the PR. In current behavior, it is possible to use 
`willdispatch` to trick `WaitForMainloop` into letting a thread pass through 
the timeout, where it will eventually wait on a `Tcl_ConditionWait` in 
`Tkapp_ThreadSend`. 

This could be very efficient default behavior, since no polling is required; 
the thread just goes when the loop comes up. Is it possible to make this a 
well-documented feature and default behavior of tkinter? Or would it be too 
surprising for new and existing users? It would be important to make sure that 
threads aren't silently getting lost in old programs and new users can figure 
out they need to call `mainloop`, `dooneevent`, or `update` when not on the REPL 
or the thread will hang.

--

___
Python tracker 
<https://bugs.python.org/issue41176>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41176] revise Tkinter mainloop dispatching flag behavior

2020-07-03 Thread Richard Sheridan


Change by Richard Sheridan :


--
keywords: +patch
pull_requests: +20448
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21299

___
Python tracker 
<https://bugs.python.org/issue41176>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41176] revise Tkinter mainloop dispatching flag behavior

2020-07-01 Thread Richard Sheridan


Richard Sheridan  added the comment:

Removing `WaitForMainloop` would surely break some existing programs, but
that's why I suggested deprecation instead of just removing it suddenly. We
could issue a RuntimeWarning if `WaitForMainloop` actually waits and tell
the client to take responsibility for the race condition they created.
(They may have no idea! What if their delay unexpectedly increases to 1.2
seconds?) Whether or not waiting gets deprecated, it would make sense to
make the sleep behavior configurable instead of hardcoded. I'll include
something along those lines in my PR.

On Wed, Jul 1, 2020 at 6:15 AM E. Paine  wrote:

>
> E. Paine  added the comment:
>
> I agree it would be helpful to expose an explicit way of telling if the
> mainloop was running but am not sure about removing `WaitForMainloop` as it
> could very easily break existing programs.
>
> If a program executes a tkinter method in a thread before the mainloop is
> executed, the method will wait because of the call to `WaitForMainloop`. In
> the example script this is done deliberately to demonstrate the behaviour
> but could be done accidentally if the main thread has to do something else
> before the mainloop (and after the thread has been created).
>
> I think the changes (whatever is concluded we should do) would be
> considered an 'enhancement', which would not be backported to 3.9 and
> before (I believe 'behaviour' is generally used for logic errors).
>
> I am very willing to help review a PR, however the people you really need
> to convince are Serhiy and/or Guilherme (I have added them to the nosy).
>
> --
> nosy: +epaine, gpolo, serhiy.storchaka
> versions:  -Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9
> Added file: https://bugs.python.org/file49283/waitmainloop.py
>
> ___
> Python tracker 
> <https://bugs.python.org/issue41176>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue41176>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41176] revise Tkinter mainloop dispatching flag behavior

2020-06-30 Thread Richard Sheridan


New submission from Richard Sheridan :

This could also be considered a "behavior" type issue.

`TkappObject` has a member `dispatching` that could usefully be exposed by a 
very simple read-only method for users to determine at runtime if the tkinter 
mainloop is running. Matplotlib and I'm sure other packages rely on fragile 
hacks 
(https://github.com/matplotlib/matplotlib/blob/a68562aa230e5895136120f5073dd01f124d728d/lib/matplotlib/cbook/__init__.py#L65-L71)
 to determine this state. I ran into this in 
https://github.com/matplotlib/matplotlib/pull/17802. All these projects would 
be more reliable with a new "dispatching()" method on the TkappObject, 
tkinter.Misc objects, and possibly the tkinter module itself.

Internally, `dispatching` is used to, yes, determine if the mainloop is 
running. However, this determination is always done within the 
`WaitForMainloop` function 
(https://github.com/python/cpython/blob/bd4a3f21454a6012f4353e2255837561fc9f0e6a/Modules/_tkinter.c#L363-L380),
 which waits up to 1 second for the mainloop to come up. Apparently, this 
function allows a thread to implicitly wait for the loop to come up by calling 
any `TkappObject` method. This is a bad design choice in my opinion, because if 
client code wants to start immediately and the loop is not started by mistake, 
there will be a meaningless, hard-to-diagnose delay of one second before 
crashing. Instead, if some client code in a thread needs to wait for the 
mainloop to run, it should explicitly poll `dispatching()` on its own. This 
waiting behavior should be deprecated and, after a deprecation cycle perhaps, 
all `WaitForMainloop()` statements should be converted to inline 
`self->dispatching`.
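
A sketch of the explicit polling suggested above, assuming the proposed 
dispatching() accessor existed (it does not today; the name and signature are 
hypothetical):

    import time

    def wait_for_mainloop(widget, timeout=1.0, interval=0.01):
        """Poll the proposed dispatching() flag instead of blocking in C."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if widget.dispatching():      # proposed API, not available yet
                return True
            time.sleep(interval)
        return False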

The correctness of the `dispatching` flag is undermined by the currently 
existing, undocumented `willdispatch` method, which simply sets 
`dispatching` to 1 unconditionally. It seems `willdispatch` was added 18 years ago to 
circumvent a bug building pydoc caused by `WaitForMainloop` not waiting long 
enough, as it tricks `WaitForMainloop` into... not waiting for the mainloop. 
This was in my opinion a bad choice in comparison to adding a dispatching flag: 
again, if some thread needs to wait for the mainloop, it should poll 
`dispatching()`, and avoid adding spurious 1 second waits. `willdispatch` 
currently has no references in CPython and most GitHub references are to 
Pycharm stubs for the CPython method. It should be deprecated and removed to 
preserve the correctness of `dispatching`.

Happy to make a PR about this, except I don't understand clinic at all, nor the 
specifics of deprecation cycles in CPython.

--
components: Tkinter
messages: 372722
nosy: Richard Sheridan
priority: normal
severity: normal
status: open
title: revise Tkinter mainloop dispatching flag behavior
type: enhancement
versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

___
Python tracker 
<https://bugs.python.org/issue41176>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40054] Allow formatted strings as docstrings

2020-03-24 Thread Richard Neumann


New submission from Richard Neumann :

Currently only plain strings can be used as docstrings, such as:


class Foo:
    """Spamm eggs."""

For dynamic class generation, it would be useful to allow format strings as 
docstrings as well:

doc = 'eggs'

class Foo:
    """Spamm {}.""".format(doc)

or:

doc = 'eggs'

class Foo:
    f"""Spamm {doc}."""

A current use case in which I realized that this feature was missing is:


class OAuth2ClientMixin(Model, ClientMixin):   # pylint: disable=R0904
    """An OAuth 2.0 client mixin for peewee models."""



    @classmethod
    def get_related_models(cls, model=Model):
        """Yields related models."""
        for mixin, backref in CLIENT_RELATED_MIXINS:
            yield cls._get_related_model(model, mixin, backref)

    @classmethod
    def _get_related_model(cls, model, mixin, backref):
        """Returns an implementation of the related model."""
        class ClientRelatedModel(model, mixin):
            f"""Implementation of {mixin.__name__}."""
            client = ForeignKeyField(
                cls, column_name='client', backref=backref,
                on_delete='CASCADE', on_update='CASCADE')

        return ClientRelatedModel

It actually *is* possible to dynamically set the docstring via the __doc__ 
attribute:

doc = 'eggs'

class Foo:
    pass

Foo.__doc__ = doc


Allowing format strings would IMHO be more obvious when reading the code, as 
the docstring would be set where a docstring is expected, i.e. below the class 
/ function definition.
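
A minimal sketch of that existing workaround applied to the use case above 
(the factory and names here are illustrative):

    def make_related_model(mixin_name):
        class ClientRelatedModel:
            pass

        ClientRelatedModel.__doc__ = f'Implementation of {mixin_name}.'
        return ClientRelatedModel

    print(make_related_model('OAuth2Token').__doc__)
    # Implementation of OAuth2Token.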

--
messages: 364934
nosy: conqp
priority: normal
severity: normal
status: open
title: Allow formatted strings as docstrings
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue40054>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39964] adding a string to a list works differently with x+='' compared to x=x+''

2020-03-14 Thread Richard King


New submission from Richard King :

x = ['a']

x += ' ' results in ['a',' ']

x = x + ' ' results in an exception:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "str") to list

It behaves the same in 2.7.15 and 3.7.2.
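
For context, the asymmetry comes from the operators involved: += on a list 
calls list.__iadd__, which behaves like list.extend() and accepts any iterable 
(a string included), while + calls list.__add__, which only accepts another 
list:

    x = ['a']
    x += ' '            # same as x.extend(' '): iterates over the string
    print(x)            # ['a', ' ']

    y = ['a']
    # y = y + ' '       # TypeError: can only concatenate list (not "str") to list
    y = y + [' ']       # concatenation requires another list
    print(y)            # ['a', ' ']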

--
components: Windows
messages: 364213
nosy: paul.moore, rickbking, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: adding a string to a list works differently with x+='' compared to x=x+''
type: behavior
versions: Python 2.7, Python 3.7

___
Python tracker 
<https://bugs.python.org/issue39964>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39686] add dump_json to ast module

2020-02-19 Thread Richard K


Richard K  added the comment:

> I don't think the clang argument holds because clang is a command-line tool 
> after all and it makes sense that it can produce several outputs while the 
> ast module is exposes APIs that you can further process inside the language. 
> Having json from the clang output will require more than one tool if clang 
> does not support it while doing it in Python only requires Python.

I see what you mean. I was just trying to illustrate that such a feature is 
desired by some. 

Perhaps 'Python only requires Python' means that Python _could_ be the first 
widely used language with such a superior meta-programming feature with respect 
to AST analysis/code generation. 


> > it appears that they do so in non-standard ways.

> Can you clarify what do you mean with that? 

By non-standard I mean that the resulting json does not follow the structure of 
the tree explicitly. For example with ast2json, '"_type": "Print"' includes a 
(somewhat misleading) key that is not represented in the actual AST. 

Example of ast2json output (example found here, 
https://github.com/YoloSwagTeam/ast2json#example),

    {
        "body": [
            {
                "_type": "Print",
                "nl": true,
                "col_offset": 0,
                "dest": null,
                "values": [
                    {
                        "s": "Hello World!",
                        "_type": "Str",
                        "lineno": 1,
                        "col_offset": 6
                    }
                ],
                "lineno": 1
            }
        ],
        "_type": "Module"
    }


> Just to clarify: ast.dump *will* fail with a more deph object as well, I am 
> not claiming that ast.dump will parse everything because of course suffers 
> the same problem.

Makes sense. As you mentioned, these are edge cases which I assume will not be 
an issue for those seeking to gain the benefits of 'ast.dump_json'

--

___
Python tracker 
<https://bugs.python.org/issue39686>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39686] add dump_json to ast module

2020-02-19 Thread Richard K


Richard K  added the comment:

Batuhan & Pablo thank you for your thoughts! Just wanted to reply to a few of 
the comments to clarify my position on the issue.


> IMHO this is not a feature that has a general usage. If you want, as far as I 
> can see, there are some packages for doing that in PyPI already. Also, the 
> patch looks small so you can just add this to the required project.


There seems to be movement towards a general usage. For instance, take a look 
at clang, in particular the flag '-ast-dump=json'.

$ clang -cc1 -ast-dump=json foo.cc


> ast.dump now can dump in pretty-printed way.

Indeed; however, there is not much more one can do with the output of 
ast.dump. With ast.dump_json one would benefit from programmer-centric 
functionality.

-- 

> Thanks, Richard for your proposal. I concur with Batuhan: I am -1 as well on 
> this addition. Echoing some of the same ideas, I think this is specialized 
> enough that does not make sense to have it in the standard library, 
> especially if a Pypi package already exists. 


After just browsing the PyPI package(s) you may be referring to, it appears 
that they do so in non-standard ways.


> Additionally, this is straightforward to implement for very simple cases but 
> PR18558 will fail for very generic ASTs if they are deep enough (it uses 
> recursion).


The implementation of ast.dump also uses recursion. I have tested ast.dump_json 
on sufficiently large source files and have not run into recursion depth 
exceeded issues.


Thanks again for your perspective!

--

___
Python tracker 
<https://bugs.python.org/issue39686>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39686] add dump_json to ast module

2020-02-18 Thread Richard K


Change by Richard K :


--
keywords: +patch
pull_requests: +17938
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/18558

___
Python tracker 
<https://bugs.python.org/issue39686>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39686] add dump_json to ast module

2020-02-18 Thread Richard K


New submission from Richard K :

Currently within the ast module, `dump` generates a string representation of 
the AST for example,

>>> ast.dump(node)
'Module(body=[], type_ignores=[])'


The proposed enhancement would provide a complementary function, `dump_json` as 
in a json representation of the ast. 
This would be useful for those who would like to benefit from the utilities of 
the json module for formatting, pretty-printing, and the like.  
It would also be useful for those who want to serialize the AST or export it in 
a form that can be consumed in another programming language.
A simplified example, 


>>> import ast
>>> node = ast.parse('')
>>> ast.dump_json(node)
{'Module': {'body': [], 'type_ignores': []}}


A simplified example of using `ast.dump_json` with the json module,

>>> import json
>>> json.dumps(ast.dump_json(node))
'{"Module": {"body": [], "type_ignores": []}}'
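
For comparison, a minimal sketch of this kind of conversion written against 
today's ast module (not the proposed ast.dump_json; edge cases such as bytes 
constants are ignored):

    import ast
    import json

    def to_jsonable(node):
        if isinstance(node, ast.AST):
            return {type(node).__name__: {name: to_jsonable(value)
                                          for name, value in ast.iter_fields(node)}}
        if isinstance(node, list):
            return [to_jsonable(item) for item in node]
        return node

    tree = ast.parse('')
    print(json.dumps(to_jsonable(tree)))
    # {"Module": {"body": [], "type_ignores": []}}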

--
components: Library (Lib)
messages: 362256
nosy: sparverius
priority: normal
severity: normal
status: open
title: add dump_json to ast module
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39686>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35108] inspect.getmembers passes exceptions from object's properties through

2020-02-01 Thread Richard Bruskiewich


Richard Bruskiewich  added the comment:

This "bug" is buzzing around my project head right now, interfering with the 
operation of the Python Fire CLI library when it attempts to interrogate the 
Python Pandas DataFrame using the inspect.getmembers() call. See 
https://github.com/pandas-dev/pandas/issues/31474 and 
https://github.com/pandas-dev/pandas/pull/31549.
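
A minimal sketch of the failure mode being described (the class here is 
hypothetical, standing in for the Pandas object):

    import inspect

    class Record:
        @property
        def values(self):
            raise NotImplementedError('not supported')

    inspect.getmembers(Record())   # the NotImplementedError propagates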

I have code that uses Fire and Pandas, but I have to "dumb it down" to use 
Pandas 0.24.3 rather than the latest 0.25.1, which raises a "NotImplementedError" 
that leaks out of the getmembers() call.  The Pandas people are passing the buck 
to you folks in the Python community.

This is terribly frustrating for us minions in the real world trying to 
implement real working software systems leveraging all these wonderful 
libraries (and the Python language).

When is this "bug" going to be fixed? Help!

--
nosy: +richardbruskiewich

___
Python tracker 
<https://bugs.python.org/issue35108>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38904] "signal only works in main thread" in main thread

2019-12-17 Thread Richard Warfield


Richard Warfield  added the comment:

I think so, yes.

On Wed, Dec 18, 2019 at 1:10 AM Eric Snow  wrote:

>
> Eric Snow  added the comment:
>
> So resolving issue39042 would be enough, particularly if we backported
> the change to 3.8?
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue38904>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue38904>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38904] "signal only works in main thread" in main thread

2019-12-15 Thread Richard Warfield


Richard Warfield  added the comment:

Thanks for looking into this.  Changing the behavior of the
"threading" module to be consistent with the runtime and "signal" module
would be sufficient, at least for my particular case.  If the "if
threading.current_thread() is threading.main_thread()" guard worked as
expected it would be clear how to resolve the problem.

On Sat, Dec 14, 2019 at 6:25 AM Eric Snow  wrote:

>
> Eric Snow  added the comment:
>
> Before 3.8, the "signal" module checked against the thread in which the
> module was initially loaded, treating that thread as the "main" thread.
> That same was true (and still is) for the "threading" module.  The problem
> for both modules is that the Python runtime may have actually been
> initialized in a different thread, which is the actual "main" thread.
>
> Since Python 3.8 we store the ID of the thread where the runtime is
> initialized and use that in the check the "signal" module does.  However,
> the "threading" module still uses the ID of the thread where it is first
> imported.  So your check against "threading.main_thread()" must be in code
> that isn't running in the same thread where you ran Py_Initialize().  It
> probably used to work because you imported "signal" and "threading" for the
> first time in the same thread.
>
> So what next?
>
> First, I've created issue39042 to address the current different meanings
> of "main thread".  That should resolve the discrepancy between the signal
> and threading modules.
>
> Second, what can we do to help embedders make sure they are running their
> code in the main thread (e.g. when setting signals)?  Is there any C-API we
> could add which would have helped you here?
>
> --
> nosy: +eric.snow, vstinner
>
> ___
> Python tracker 
> <https://bugs.python.org/issue38904>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue38904>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38904] "signal only works in main thread" in main thread

2019-11-23 Thread Richard Warfield


Richard Warfield  added the comment:

I should mention, this behavior is new in 3.8.0.  It did not occur in 3.7.x.

--

___
Python tracker 
<https://bugs.python.org/issue38904>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38904] "signal only works in main thread" in main thread

2019-11-23 Thread Richard Warfield


New submission from Richard Warfield :

I have an application (https://github.com/litxio/ptghci) using embedded Python, 
which needs to set a signal handler (and use the prompt-toolkit library which 
itself sets signal handlers).

My call to signal.signal is guarded by a check that we're running in the main 
thread:

if threading.current_thread() is threading.main_thread():
    print(threading.current_thread().name)
    signal.signal(signal.SIGINT, int_handler)

And the above indeed prints "MainThread".  But this raises an exception:

  File "app.py", line 45, in __init__
signal.signal(signal.SIGINT, int_handler)
  File "/usr/lib/python3.8/signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
ValueError: signal only works in main thread

This seems like something that should not happen.  

Now, I tried to generate a simple replicating example but thus far haven't been 
able to do so -- simply calling signal.signal from PyRun_SimpleString doesn't 
do the trick, even within a pthreads thread.

--
messages: 357390
nosy: Richard Warfield
priority: normal
severity: normal
status: open
title: "signal only works in main thread" in main thread
type: behavior
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue38904>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38612] some non-ascii characters link to same identifier/data

2019-10-28 Thread Richard Pausch


Change by Richard Pausch :


--
components: +Unicode
nosy: +ezio.melotti, vstinner

___
Python tracker 
<https://bugs.python.org/issue38612>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38612] some non-ascii characters link to same identifier/data

2019-10-28 Thread Richard Pausch

New submission from Richard Pausch :

The issue was first reported in 
https://github.com/ipython/ipython/issues/11918. 

Some non-ascii characters like φ (\u03c6) and ϕ (\u03d5) map/link to the same 
data/identifier. 

```python
ϕ = 1
φ = 2
print(ϕ) # results in 2 - should be 1
print(φ) # results in 2
```

It has so far been shown to occur both in python 3.6 and 3.7.
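
For reference, this follows from identifier normalization (PEP 3131): Python 
normalizes identifiers with NFKC, which folds GREEK PHI SYMBOL (U+03D5) into 
GREEK SMALL LETTER PHI (U+03C6), so both spellings name the same variable:

```python
import unicodedata
print(unicodedata.normalize('NFKC', '\u03d5') == '\u03c6')  # True
```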

--
messages: 355525
nosy: PrometheusPi
priority: normal
severity: normal
status: open
title: some non-ascii characters link to same identifier/data
type: behavior
versions: Python 3.6

___
Python tracker 
<https://bugs.python.org/issue38612>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37843] CGIHTTPRequestHandler does not take args.directory in constructor

2019-08-13 Thread Richard Jayne


New submission from Richard Jayne :

In Lib/http/server.py

if args.cgi:
    handler_class = CGIHTTPRequestHandler
else:
    handler_class = partial(SimpleHTTPRequestHandler,
                            directory=args.directory)

Notice that CGIHTTPRequestHandler does not accept directory=args.directory, and 
so the option does not work with the --cgi option.

--
components: Extension Modules
messages: 349585
nosy: rjayne
priority: normal
severity: normal
status: open
title: CGIHTTPRequestHandler does not take args.directory in constructor
type: enhancement
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue37843>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16535] json encoder unable to handle decimal

2019-08-08 Thread Richard Musil


Richard Musil  added the comment:

It looks like I am resurrecting an old item, but I have just been hit by this 
and was directed to this issue 
(https://mail.python.org/archives/list/python-id...@python.org/thread/WT6Z6YJDEZXKQ6OQLGAPB3OZ4OHCTPDU/)

I wonder if adding something similar to what `simplejson` uses (i.e. explicitly 
specifying in `json.dump(s)` how to serialize `decimal.Decimal`) could be 
acceptable.

Or, the other idea would be to expose a method on JSONEncoder which would 
accept "raw" textual output, i.e. a string (or even `bytes`), and would encode 
it without adding additional characters to it (as explained in my posts in the 
other threads).

As it seems right now, there is no way to serialize `decimal.Decimal` the same 
way it is deserialized, i.e. while preserving the (arbitrary) precision.
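
A short sketch of the asymmetry: loading can preserve the precision via 
parse_float, but there is no matching hook on the dumping side that emits a 
raw number:

    import json
    from decimal import Decimal

    value = json.loads('{"price": 1.10}', parse_float=Decimal)
    print(value)                                  # {'price': Decimal('1.10')}

    # json.dumps(value)                           # TypeError: Decimal is not serializable
    print(json.dumps({'price': float(value['price'])}))  # {"price": 1.1}   - precision lost
    print(json.dumps({'price': str(value['price'])}))    # {"price": "1.10"} - now a string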

--
nosy: +risa2000

___
Python tracker 
<https://bugs.python.org/issue16535>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37113] 'ß'.upper() should return 'ẞ'

2019-05-31 Thread Richard Neumann


Richard Neumann  added the comment:

See also: https://en.wikipedia.org/wiki/Capital_%E1%BA%9E

--

___
Python tracker 
<https://bugs.python.org/issue37113>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37113] 'ß'.upper() should return 'ẞ'

2019-05-31 Thread Richard Neumann

New submission from Richard Neumann :

Currently, calling the method .upper() on a string containing 'ß' will replace 
this character by 'SS'. It should, however, be replaced by 'ẞ'.
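
For reference, the current behaviour follows the default Unicode full case 
mapping:

    print('ß'.upper())   # 'SS'
    print('ẞ'.lower())   # 'ß'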

--
components: Unicode
messages: 344065
nosy: Richard Neumann, ezio.melotti, vstinner
priority: normal
severity: normal
status: open
title: 'ß'.upper() should return 'ẞ'
type: behavior
versions: Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue37113>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18078] threading.Condition to allow notify on a specific waiter

2019-01-24 Thread Richard Whitehead

Richard Whitehead  added the comment:

Thanks João.

We are working on a medical prototype, and I wouldn't want to rely on our own 
version of something so fundamental without having a thorough test harness for 
it, which would obviously be quite time-consuming, and something of a dead-end.

I've worked around the issue now (each system pushing to a queue has to be 
given a Condition to notify when it pushes, so that if a system listens on 
multiple queues it can give all the senders the same Condition), but it makes 
the architecture quite messy; just being able to wait on one of several 
Conditions would have been neater and less error-prone.
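
A rough sketch of that workaround (names hypothetical): every producer shares 
one Condition, so a consumer can wait for data on any of several queues.

    import threading
    from collections import deque

    wakeup = threading.Condition()
    command_queue, work_queue = deque(), deque()

    def push(q, item):
        with wakeup:
            q.append(item)
            wakeup.notify_all()

    def wait_for_any(*queues, timeout=None):
        with wakeup:
            wakeup.wait_for(lambda: any(queues), timeout)
            return [q.popleft() for q in queues if q]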

I suppose I expected to see this method because I'm familiar with the Windows 
API. But I checked and it is not present in the posix threading API, so there 
is some justification for peoples' reluctance to implement it in Python.

Thanks again,

Richard

--

___
Python tracker 
<https://bugs.python.org/issue18078>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue18078] threading.Condition to allow notify on a specific waiter

2019-01-23 Thread Richard Whitehead


Richard Whitehead  added the comment:

Condition.wait_for_any is still a desirable feature, e.g. to wait on multiple 
command queues, or a work queue and a command queue.
Is there any chance of pulling this into the latest version?

--
nosy: +richardnwhitehead

___
Python tracker 
<https://bugs.python.org/issue18078>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35342] email "default" policy raises exception iterating over unparseable date headers

2018-11-28 Thread Richard Brooksby


Change by Richard Brooksby :


--
versions: +Python 3.7

___
Python tracker 
<https://bugs.python.org/issue35342>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35342] email "default" policy raises exception iterating over unparseable date headers

2018-11-28 Thread Richard Brooksby


Change by Richard Brooksby :


--
versions: +Python 3.6 -Python 3.7

___
Python tracker 
<https://bugs.python.org/issue35342>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35342] email "default" policy raises exception iterating over unparseable date headers

2018-11-28 Thread Richard Brooksby


New submission from Richard Brooksby :

It is not possible to loop over the headers of a message with an unparseable 
date field using the "default" policy.  This means that a poison email can 
break email processing.

I expect to be able to process an email with an unparseable date field using 
the "default" policy.

$ python3 --version
Python 3.6.7
$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17) 
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import email
>>> import email.policy
>>> email.message_from_string('Date: not a parseable date', 
>>> policy=email.policy.default).items()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/email/message.py", line 460, in items
    for k, v in self._headers]
  File "/usr/lib/python3.6/email/message.py", line 460, in <listcomp>
    for k, v in self._headers]
  File "/usr/lib/python3.6/email/policy.py", line 162, in header_fetch_parse
    return self.header_factory(name, value)
  File "/usr/lib/python3.6/email/headerregistry.py", line 589, in __call__
    return self[name](name, value)
  File "/usr/lib/python3.6/email/headerregistry.py", line 197, in __new__
    cls.parse(value, kwds)
  File "/usr/lib/python3.6/email/headerregistry.py", line 306, in parse
    value = utils.parsedate_to_datetime(value)
  File "/usr/lib/python3.6/email/utils.py", line 210, in parsedate_to_datetime
    *dtuple, tz = _parsedate_tz(data)
TypeError: 'NoneType' object is not iterable
>>> 

Related: 
https://docs.python.org/3/library/email.headerregistry.html#email.headerregistry.DateHeader
 does not specify what happens to the datetime field if a date header cannot be 
parsed.

Related: 
https://docs.python.org/3/library/email.utils.html#email.utils.parsedate_to_datetime
 does not specify what happens if a date cannot be parsed.

Suggested tests: random fuzz testing of the contents of all email headers, 
especially those with parsers in the header registry.

--
components: email
messages: 330621
nosy: barry, r.david.murray, rptb1
priority: normal
severity: normal
status: open
title: email "default" policy raises exception iterating over unparseable date 
headers
type: behavior
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue35342>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34732] uuid returns version more than 5

2018-09-20 Thread Richard Neumann


Richard Neumann  added the comment:

I updated my pull request.
Since "_windll_getnode()" is only returning a (random?) node for a UUID, I 
circumevented the value checking by introducing a new keyword-only argument 
"strict" defaulting to "True", there being set to "False".

--

___
Python tracker 
<https://bugs.python.org/issue34732>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31958] UUID versions are not validated to lie in the documented range

2018-09-19 Thread Richard Neumann


Richard Neumann  added the comment:

@xtreak Indeed. It fails on _windll_getnode().

==
ERROR: test_windll_getnode (test.test_uuid.TestInternalsWithoutExtModule)
--
Traceback (most recent call last):
  File "C:\projects\cpython\lib\test\test_uuid.py", line 748, in 
test_windll_getnode
node = self.uuid._windll_getnode()
  File "C:\projects\cpython\lib\uuid.py", line 659, in _windll_getnode
return UUID(bytes=bytes_(_buffer.raw)).node
  File "C:\projects\cpython\lib\uuid.py", line 208, in __init__
raise ValueError('illegal version number')
ValueError: illegal version number
--

Apparently on Windows systems, there are UUIDs of type RFC_4122 being used 
which have versions not in 1..5, which actually makes them non-RFC 4122 
compliant.
Unfortunately I cannot investigate this further, since I do not have a windows 
machine available right now.

--
nosy: +conqp

___
Python tracker 
<https://bugs.python.org/issue31958>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34732] uuid returns version more than 5

2018-09-19 Thread Richard Neumann

Richard Neumann  added the comment:

Typos:
"For explicitely checking the version" → "For explicitely *setting* the 
version".
"on not 1<= verision 1<=5" → "on not 1 <= version <= 5".

--

___
Python tracker 
<https://bugs.python.org/issue34732>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34732] uuid returns version more than 5

2018-09-19 Thread Richard Neumann


Richard Neumann  added the comment:

@xtreak RFC 4122, section 4.1.3. specifies only versions 1 to 5.
For explicitely checking the version, there is already a test in UUID.__init__, 
raising a ValueError on not 1<= verision 1<=5.
I moved it to the bottom of __init__, i.e. after setting the "int" property, 
causing the test to run on the actual instance's property value.

--

___
Python tracker 
<https://bugs.python.org/issue34732>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34732] uuid returns version more than 5

2018-09-19 Thread Richard Neumann


Richard Neumann  added the comment:

I'm not sure whether the property method should be changed.
I think it'd be more appropriate to raise a value error upon __init__ in this 
case as it is done with other checks.

--
nosy: +conqp

___
Python tracker 
<https://bugs.python.org/issue34732>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34524] Format conversion documentation example don't match comment

2018-08-27 Thread Richard Evans


Change by Richard Evans :


--
resolution:  -> not a bug
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue34524>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34524] Format conversion documentation example don't match comment

2018-08-27 Thread Richard Evans


New submission from Richard Evans :

When reading the documentation for string formats I found that the conversion 
examples had comments that didn't match the example.

--
assignee: docs@python
components: Documentation
messages: 324207
nosy: Richard Evans, docs@python
priority: normal
severity: normal
status: open
title: Format conversion documentation example don't match comment
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue34524>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33450] unexpected EPROTOTYPE returned by sendto on MAC OSX

2018-05-09 Thread Richard C

New submission from Richard C <r...@racitup.com>:

The following exception is raised unexpectedly on macOS versions 10.13, 10.12 & 
10.11 at least. It appears to be macOS specific (works okay on Linux).

Further information can be found at the following links:
https://github.com/benoitc/gunicorn/issues/1487
http://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/

[2017-03-20 00:46:39 +0100] [79068] [ERROR] Socket error processing request.
Traceback (most recent call last):
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/workers/async.py",
 line 66, in handle
six.reraise(*sys.exc_info())
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/six.py",
 line 625, in reraise
raise value
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/workers/async.py",
 line 56, in handle
self.handle_request(listener_name, req, client, addr)
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/workers/ggevent.py",
 line 152, in handle_request
super(GeventWorker, self).handle_request(*args)
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/workers/async.py",
 line 129, in handle_request
six.reraise(*sys.exc_info())
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/six.py",
 line 625, in reraise
raise value
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/workers/async.py",
 line 115, in handle_request
resp.write(item)
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/http/wsgi.py",
 line 362, in write
util.write(self.sock, arg, self.chunked)
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gunicorn-19.7.0-py3.6.egg/gunicorn/util.py",
 line 321, in write
sock.sendall(data)
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gevent-1.2.1-py3.6-macosx-10.12-x86_64.egg/gevent/_socket3.py",
 line 418, in sendall
data_sent += self.send(data_memory[data_sent:], flags)
  File 
"/Users/ahmad/Projects/Side-Gigs/sa7beh-app/venv/lib/python3.6/site-packages/gevent-1.2.1-py3.6-macosx-10.12-x86_64.egg/gevent/_socket3.py",
 line 391, in send
return _socket.socket.send(self._sock, data, flags)
OSError: [Errno 41] Protocol wrong type for socket

--
components: IO, macOS
messages: 316328
nosy: ned.deily, racitup, ronaldoussoren
priority: normal
severity: normal
status: open
title: unexpected EPROTOTYPE returned by sendto on MAC OSX
type: behavior
versions: Python 3.5, Python 3.6

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue33450>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33028] tempfile.TemporaryDirectory incorrectly documented

2018-03-08 Thread Richard Neumann

New submission from Richard Neumann <r.neum...@homeinfo.de>:

The tempfile.TemporaryDirectory is incorrectly documented at 
https://docs.python.org/3.6/library/tempfile.html#tempfile.TemporaryDirectory.

It is described as a function, though it is actually a class (unlike 
tempfile.NamedTemporaryFile, which is a function).
The respective property "name" and method "cleanup" are only documented in the 
running text and not explicitly highlighted as they are for the properties and 
methods of e.g. TarFile 
(https://docs.python.org/3/library/tarfile.html#tarfile-objects).
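
A quick check of the distinction:

    import tempfile

    print(type(tempfile.TemporaryDirectory))   # <class 'type'>: a class
    print(type(tempfile.NamedTemporaryFile))   # <class 'function'>

    with tempfile.TemporaryDirectory() as name:
        print(name)                            # path of the temporary directory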

--
assignee: docs@python
components: Documentation
messages: 313431
nosy: Richard Neumann, docs@python
priority: normal
severity: normal
status: open
title: tempfile.TemporaryDirectory incorrectly documented
type: enhancement
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue33028>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue32664] Connector "|" missing between ImportError and LookupError

2018-01-25 Thread Richard Neumann

New submission from Richard Neumann <r.neum...@homeinfo.de>:

In the documentation of the built-in exceptions hierarchy, there is a "|" 
missing connecting ImportError and LookupError.

https://docs.python.org/3/library/exceptions.html#exception-hierarchy

From LookupError.__mro__ we can tell that it is actually derived from 
Exception, thus there should be a "|" connecting it to the hierarchy under 
Exception to emphasize that (like between ArithmeticError and AssertionError).
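
For reference:

    print(LookupError.__mro__)
    # (<class 'LookupError'>, <class 'Exception'>, <class 'BaseException'>, <class 'object'>)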

--
assignee: docs@python
components: Documentation
messages: 310666
nosy: Richard Neumann, docs@python
priority: normal
severity: normal
status: open
title: Connector "|"  missing between ImportError and LookupError
type: enhancement
versions: Python 3.6

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32664>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16487] Allow ssl certificates to be specified from memory rather than files.

2017-11-30 Thread Martin Richard

Martin Richard <mart...@martiusweb.net> added the comment:

FWIW, PyOpenSSL allows to load certificates and keys from a memory buffer and 
much more. It's also fairly easy to switch from ssl to PyOpenSSL.

It's probably a viable alternative in many cases.

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue16487>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31992] Make iteration over dict_items yield namedtuples

2017-11-10 Thread Richard Neumann

Richard Neumann <r.neum...@homeinfo.de> added the comment:

Maybe there is no need to sacrifice performance, if a new, optional keyword 
argument would be introduced to dict.items():

def items(self, named=False):
    if named:
        ...   # yield named tuples
    else:
        ...   # yield plain tuples as before

Currently I need to define a namedtuple everywhere I do this and starmap the 
dicts' items.

It'd be nice to have this option built-in or a new collections class like:

from collections import namedtuple
from itertools import starmap


DictItem = namedtuple('DictItem', ('key', 'value'))


class NamedDict(dict):
    """Dictionary that yields named tuples on item iterations."""

    def items(self):
        """Yields DictItem named tuples."""
        return starmap(DictItem, super().items())
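
A quick usage sketch of that class:

    d = NamedDict(a=[1, 2, 3], b=[1])
    for item in d.items():
        print(item.key, len(item.value))
    # a 3
    # b 1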

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31992>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31992] Make iteration over dict_items yield namedtuples

2017-11-09 Thread Richard Neumann

New submission from Richard Neumann <r.neum...@homeinfo.de>:

Currently, iterating over dict_items will yield plain tuples, where the first 
item will be the key and the second item will be the respective value.

This has some disadvantages when e.g. sorting dict items by value and key:

def sort_by_value_len(dictionary):
    return sorted(dictionary.items(),
                  key=lambda item: (len(item[1]), item[0]))

I find this index access extremely inelegant and unnecessarily hard to read.

If dict_items instead yielded namedtuples like

DictItem = namedtuple('DictItem', ('key', 'value'))

this would make constructs like

def sort_by_value_len(dictionary):
    return sorted(dictionary.items(),
                  key=lambda item: (len(item.value), item.key))

possible and increase code clarity a lot.
Also, namedtuples mimic the behaviour of plain tuples regarding unpacking and 
index access, so backward compatibility should be preserved.

--
components: Library (Lib)
messages: 305970
nosy: Richard Neumann
priority: normal
severity: normal
status: open
title: Make iteration over dict_items yield namedtuples
type: enhancement
versions: Python 3.8

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31992>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1566331] Bad behaviour in .obuf*

2017-10-11 Thread Richard Aplin

Richard Aplin <drt...@gmail.com> added the comment:

Hi there, yes, this is very much an issue on Arm Linux (e.g. Armbian). Calling 
any function that triggers a call to _ssize(..) - a function which is clearly 
intended to have no side-effects - instead resets the number of channels (and 
sample format?) by calling IOCTLs "SNDCTL_DSP_SETFMT" and "SNDCTL_DSP_CHANNELS" 
with arguments of zero as a way to query the current values. 

This doesn't work on many drivers; e.g. they take '0' as meaning 'mono' and 
switch to one channel. 

To repro:

    import ossaudiodev
    self.dsp = ossaudiodev.open("/dev/dsp1", "w")
    self.dsp.setfmt(ossaudiodev.AFMT_S16_LE)
    self.dsp.channels(2)  # <-- a later call that goes through _ssize resets this

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue1566331>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31737] Documentation renders incorrectly

2017-10-09 Thread Richard Gibson

New submission from Richard Gibson <richard.gib...@gmail.com>:

The content at docs.python.org seems to be inserting language-dependent "smart 
quotes" in code blocks, which mangles backslashes and sequences like `'''`. 
Observed at 
https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals
 , which renders

  longstring  ::=  “”’” longstringitem* “”’” | ‘”“”’ longstringitem* ‘”“”’

instead of

  longstring  ::=  "'''" longstringitem* "'''" | '"""' longstringitem* '"""'

and

  stringescapeseq ::=  “" 

instead of

  stringescapeseq ::=  "\" 

, and looks even worse in other languages:

  longstring  ::=   » »” » longstringitem*  » »” » | “ »«  »” 
longstringitem* “ »«  »”

  longstring  ::=  「」』」 longstringitem* 「」』」 | 『」「」』 longstringitem* 『」「」』


Running `make html` locally produces the desired output, so whatever's going on 
appears specific to the public site.

--
assignee: docs@python
components: Documentation
messages: 303988
nosy: docs@python, gibson042
priority: normal
severity: normal
status: open
title: Documentation renders incorrectly
versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31737>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue21622] ctypes.util incorrectly fails for libraries without DT_SONAME

2017-09-18 Thread Richard Eames

Changes by Richard Eames <rea...@asymmetricventures.com>:


--
nosy: +Richard Eames

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue21622>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31290] segfault on missing library symbol

2017-08-28 Thread Richard

New submission from Richard:

I'm building a Python library with a C++ component composed of a number of 
source .cpp files.

After some changes today, compiling and loading in Python3 resulted in a 
segfault:


Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mymodule
Segmentation fault (core dumped)


As you might imagine, this is not the funnest thing ever to debug.

Thankfully, when I compiled the module for Python2 and tried to load, a much 
more helpful and informative error message was displayed.


Python 2.7.13 (default, Jan 19 2017, 14:48:08) 
[GCC 6.3.0 20170118] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mymodule
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mymodule/__init__.py", line 1, in <module>
    import _mymodule
ImportError: ./_mymodule.so: undefined symbol: DBFGetRecordCount
>>> 


And indeed: in my setup.py file, where I had

setuptools.Extension(
    "_mymodule",
    glob.glob('src/*.cpp') + glob.glob('lib/mylib/*.cpp') +
        glob.glob('lib/mylib/sublib/*.c'),

I should have had 'lib/mylib/sublib/*.cpp'.
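
Spelled out, the corrected source list looks like this (a sketch reconstructed
from the description above; the directory layout and module name are mine):

import glob
import setuptools

ext = setuptools.Extension(
    "_mymodule",
    glob.glob('src/*.cpp')
    + glob.glob('lib/mylib/*.cpp')
    + glob.glob('lib/mylib/sublib/*.cpp'),  # was '*.c', so the C++ sources there
                                            # were never built, leaving the symbol unresolved
)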



I think the current behaviour can be improved:

1. Python3 should provide a helpful error message, like Python2 does.

2. setup.py should fail, or at least provide a warning, if the built extension
has missing symbols; see the sketch below. (Perhaps this suggestion should be
aimed at one of the setup utilities, but I'm not sure where to send it.)
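
As a stopgap for the second suggestion, a sketch of a post-build check
(assumes a Linux toolchain, where "ldd -r" forces relocation processing and
reports unresolved symbols without running any module init code, and
Python 3.7+ for capture_output):

import subprocess

result = subprocess.run(
    ["ldd", "-r", "./_mymodule.so"],
    capture_output=True, text=True,
)
report = result.stdout + result.stderr
if "undefined symbol" in report:
    raise SystemExit("unresolved symbols in ./_mymodule.so:\n" + report)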

--
messages: 300942
nosy: immortalplants
priority: normal
severity: normal
status: open
title: segfault on missing library symbol
versions: Python 2.7, Python 3.5

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue31290>
___



[issue28287] Refactor subprocess.Popen to let a subclass handle IO asynchronously

2017-08-25 Thread Martin Richard

Martin Richard added the comment:

Yes, the goal is to isolate the blocking IO in __init__ into other methods so 
Popen can be subclassed in asyncio.

The end goal is to ensure that when asyncio calls Popen(), it doesn't block the
process. In the context of asyncio there's no need to make Popen()'s I/O
non-blocking, as the I/O will be performed with the asyncio API (rather than
with the I/O methods provided by the Popen object).
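
For context, a minimal sketch (not part of the proposed patch) of how
asyncio-side code drives the subprocess I/O through the asyncio API rather
than Popen's blocking methods; assumes Python 3.7+ for asyncio.run():

import asyncio

async def run():
    proc = await asyncio.create_subprocess_exec(
        "echo", "hello",
        stdout=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()  # awaited; the event loop handles the pipe reads
    return out

print(asyncio.run(run()))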

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue28287>
___



[issue30609] Python 3.6.1 fails to generate 256 colors on Cygwin based 64-bit Windows 10

2017-06-18 Thread Richard S. Gordon

Richard S. Gordon added the comment:

FYI: Here is an update to my subsequent bug report to the Cygwin project team. 
You might find my answers to their questions useful in the future.

> Begin forwarded message:
> 
> From: "Richard S. Gordon" <rigo...@comcast.net <mailto:rigo...@comcast.net>>
> Subject: Re: Python 3.6.1 fails to generate 256 colors, with xterm-256color, 
> on Cygwin based 64-bit Windows 10.
> Date: June 12, 2017 at 3:21:22 AM EDT
> To: cyg...@cygwin.com <mailto:cyg...@cygwin.com>
> Cc: rigo...@comcast.net <mailto:rigo...@comcast.net>, 
> brian.ing...@systematicsw.ab.ca <mailto:brian.ing...@systematicsw.ab.ca>
> Reply-To: "Richard S. Gordon" <rigo...@comcast.net 
> <mailto:rigo...@comcast.net>>
> 
> Hello Cygwin & Brian Inglis,
> 
> I have not yet received the subject e-mail but did see a copy in the
> Cygwin Archive. I’ve reproduced it below to facilitate my reply. I used
> a different mailer to generate plain text, without pdf attachment.
> 
> My HTML formatted email was rejected.
> 
> Re: Python 3.6.1 fails to generate 256 colors, with xterm-256color, on Cygwin 
> based 64-bit Windows 10.
> 
> From: Brian Inglis 
> To: cygwin at cygwin dot com
> Date: Sun, 11 Jun 2017 09:44:09 -0600
> Subject: Re: Python 3.6.1 fails to generate 256 colors, with xterm-256color, 
> on Cygwin based 64-bit Windows 10.
> Authentication-results: sourceware.org <http://sourceware.org/>; auth=none
> References: <86daff59-6ea8-4288-9d7d-e3262988b...@comcast.net 
> <mailto:86daff59-6ea8-4288-9d7d-e3262988b...@comcast.net>>
> Reply-to: Brian dot Inglis at SystematicSw dot ab dot ca
> *** IMPORTANT ***
> Each test application is a small part of a software repository which contains:
> 
> Development Sandboxes (not requiring installation and registration with a 
> specific Python 2x or Python3x release).
> Site Packages (requiring installation and registration with a specific Python 
> 2x or Python3x release).
> My toolkit uses the Python curses API to interrogate the built-in features. 
> It overrides the built-in curses color palette only if curses allows the 
> change.
> 
> In order to verify or troubleshoot my Python 3.6.1 failure, it is necessary 
> to clone a copy of my toolkit repository on to your computer from its GitHub 
> repository.
> The errors you got when you tried to run one of my failing test applications 
> are the result of trying to run it without its associated built-in toolkit 
> libraries.
> 
> You can place the repository copy into any convenient location on your 
> system. If you work within one of its Developer Sandboxes, instead of 
> installing
> any of its Site Packages, you will be able to delete the entire repository 
> rather than those components which were installed and registered with 
> individual
> Python 2x or Python 3x releases.
> 
> Each Developer Sandbox automatically finds and uses its associated libraries.
> On 2017-06-11 08:18, Richard S. Gordon wrote:
> See how to make decent Cygwin problem reports at the link below my sig.
> 
>> 3. Python 3.6.1 generates 256 colors (65536-color pairs), with 
>> xterm-256color, on Cygwin based 64-bit Windows 10. However, the 
>> generated colors appear to be corrupted by overloading text
>> attribute with specified foreground and background colors.
> Could you please give some examples of what you expect to see and why,
> and what you actually see?

> NOTES:
> On left is 32-bit Python 3.6.1 which supports only 16 colors (per limitation 
> of 32-bit processor)
> On right is 64-bit Python 3.6.1 which supports 140 colors (per emulation of 
> 68 WxPython + 72 extra); besides wrong colors, notice the spurious underlines.
> Sample 32-bit and 64-bit Python 3.6.1.pdf
> 
> Which Windows console are you running the test in: mintty, cmd, …?
> Cygwin’s MINTTY, typically configured for 80 columns x 43-50 rows.
> 
> What are the results when you run it in another console?
> None available
> 
> Are you running a Windows Insider or some developer build?
> No
> That recently had a keyboard problem that was fixed a few days later.
> 
>> 6. Cygwin Problem Reporter's Test Case: This Cygwin problem can be 
>> demonstrated by running the Problem Reporter's 
>> test_tsWxColorPalette.py in Python 3x (developer-sandbox) which can 
>> be found in https://github.com/rigordo959/tsWxGTUI_PyVx_Repository 
>> <https://github.com/rigordo959/tsWxGTUI_PyVx_Repository>
> Could you please provide a direct link to a Simple Test Case program,
> again with examples of what you expect to see and what you actually see?
> I had to dig to find where you hid your test pr

[issue30609] Python 3.6.1 fails to generate 256 colors on Cygwin based 64-bit Windows 10

2017-06-11 Thread Richard S. Gordon

Richard S. Gordon added the comment:

> On Jun 9, 2017, at 4:57 PM, Masayuki Yamamoto <rep...@bugs.python.org> wrote:
> 
> 
> Masayuki Yamamoto added the comment:
> 
> @rigordo Are you using mintty? If I remember rightly, mintty hasn't been set 
> 256 colors after installation (at least in past release, I'm not sure 
> currently).
> 

Yes, I am using the Cygwin mintty console (typically configured for 80 columns
by 50 rows). On Linux, MacOS X, Solaris, Unix and Windows (with Cygwin)
platforms, I issue the appropriate console bash commands to change
(TERM=emulator) or reset (stty sane) the terminal emulation:

TERM=xterm
TERM=xterm-16color
TERM=xterm-88color
TERM=xterm-256color
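
For reference, a minimal check of what the curses binding reports under
whichever TERM value is exported (a sketch, independent of my toolkit):

import curses

def report(stdscr):
    # curses.wrapper() has already initialized colors; just read the limits
    return curses.COLORS, curses.COLOR_PAIRS

print(curses.wrapper(report))  # e.g. (256, 65536) under a working xterm-256color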

> --
> 
> ___
> Python tracker <rep...@bugs.python.org>
> <http://bugs.python.org/issue30609>
> ___

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30609>
___



[issue30609] Python 3.6.1 fails to generate 256 colors on Cygwin based 64-bit Windows 10

2017-06-11 Thread Richard S. Gordon

Richard S. Gordon added the comment:

> On Jun 9, 2017, at 5:41 PM, Terry J. Reedy <rep...@bugs.python.org> wrote:
> 
> 
> Terry J. Reedy added the comment:
> 
> Richard, when replying by email, please strip quoted text except for an 
> occasional line or two.  (See example of limited in-context quote below.)
> 
> A bug, for the purpose of this tracker, is a discrepancy between between the 
> docs written by the PSF's core development team and the code provided by the 
> same.  The PSF curses module is documented as unix-only.  In particular, 
> "This extension module is designed to match the API of ncurses, an 
> open-source curses library hosted on Linux and the BSD variants of Unix."  It 
> does not run on the PSF (python.org) Windows distribution, because there is 
> no C interface to a Windows implementation of curses.
> 

Cygwin is a Linux-like (Unix-compatible) command-line interface and run-time
environment plug-in for Windows. My cross-platform Python code does not use a
PSF Windows implementation. It only uses the low-level API of the standard
Python 2.x and Python 3.x curses module to emulate a subset of the wxPython
high-level API. It has been run successfully with the xterm (8-color) and
xterm-16color terminal emulators (including the ones provided with all Cygwin
releases since 2007). All platforms manifest the same failure when my software
attempts to use the xterm-256color terminal emulator:

PC-BSD 10.3 Unix
TrueOS (PC-BSD) 12.0 Unix
MacOS X 7.0-10.12.5 (Darwin & BSD Unix based)
Oracle OpenSolaris 11
OpenIndiana Hipster-1610 Solaris 1
CentOS Linux 7.2 & 7.3
Debian Linux 8.7.0 & 8.8.0
Fedora Linux 24 & 25
Scientific Linux 7.2 & 7.3
Windows  XP, 7, 8.1 and 10 (each with Cygwin plug-in)

I am reporting this issue to the PSF because I suspect that the standard Python
3.6.1 curses library has not been updated to support more than 16 colors on
64-bit platforms. None of my non-Windows 64-bit platforms currently incorporate
ncurses 6.0 or Python 3.6.1. I’m anxiously waiting for new releases of those
non-Windows operating systems.

>>>> import curses  # 64-bit 3.6.1 on Win 10 using python.org installer
> Traceback (most recent call last):
>   File "<pyshell#4>", line 1, in <module>
>     import curses
>   File "C:\Programs\Python36\lib\curses\__init__.py", line 13, in <module>
>     from _curses import *
> ModuleNotFoundError: No module named '_curses'
> 
> Anything Cygwin does to improve on this is their responsibility.
> 
>> how do you explain my success in running my wxPython emulation on all Cygwin 
>> releases since 2007
> 
> One or more people on the wxPython and/or Cygwin and/or other teams exerted 
> the effort to make this happen.

wxPython is a pixel-mode GUI. It does not use the character-mode curses or
ncurses libraries.

Cygwin provides both X11 pixel-mode graphics and ncurses-based character-mode 
graphics.
> 
> --
> nosy: +terry.reedy
> resolution:  -> third party
> stage: test needed -> resolved
> status: open -> closed
> 
> ___
> Python tracker <rep...@bugs.python.org>
> <http://bugs.python.org/issue30609>
> ___

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30609>
___


