[issue26577] inspect.getclosurevars returns incorrect variable when using class member with the same name as other variable

2021-12-18 Thread Ryan Fox


Ryan Fox  added the comment:

If you change the class member 'x' to a different name like 'y', then cv
doesn't include 'x', but does include an unbound 'y'.

In both cases, the function isn't referring to a global variable, just the
class member of that name. Besides the function itself, no other globals
are included in cv.globals.
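The behavior under discussion can be reproduced with a minimal sketch (class and variable names here are illustrative, not from the original report). `getclosurevars()` scans the names used by the function's code object, and an attribute name like `self.x` is indistinguishable there from a global reference, so a colliding module-level `x` gets reported:

```python
import inspect

x = 42  # hypothetical module-level name that collides with the attribute


class Foo:
    def __init__(self):
        self.x = "attribute"

    def method(self):
        return self.x  # an attribute access, not a global reference


# 'x' appears in method's co_names, so getclosurevars() matches it
# against __globals__ and reports the module-level 42, even though the
# function only ever touches self.x.
cv = inspect.getclosurevars(Foo.method)
print(cv.globals["x"])  # 42
```

If the module-level `x` is removed, the same name instead shows up in `cv.unbound`, matching the `y` behavior described above.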

On Sat, Dec 18, 2021 at 10:34 AM hongweipeng  wrote:

>
> hongweipeng  added the comment:
>
> Why is it expected that 'x' would not exist in cv.globals? I think it works
> normally; you can see `x` in `func.__globals__`.
>
> --
> nosy: +hongweipeng
>
> ___
> Python tracker 
> <https://bugs.python.org/issue26577>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue26577>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue44556] ctypes unittest crashes with libffi 3.4.2

2021-11-19 Thread Ryan May


Change by Ryan May :


--
nosy: +Ryan May

___
Python tracker 
<https://bugs.python.org/issue44556>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-26 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

> How is manually dispatched workflows different from just opening a PR to your 
> own fork? I do that from time to time in order to run the CI before opening a 
> PR against the CPython repo.

Here are a few thoughts on how it is different:
- They can be set up to take `inputs` that customize the behavior of the 
workflow run. Recently in another issue someone asked about getting the 
Windows MSI installers from a CI build run, but that workflow doesn't upload 
any artifacts; an input could be added that uploads the installers as 
artifacts in the cases where they would be useful, while keeping the default 
behavior of not uploading artifacts for PRs. Another input could enable 
additional debugging output or run extra tests to track down an issue
- Multiple builds of the same commit can be started; if there's a test that 
fails intermittently, you could queue up 10 runs of a workflow and come back 
later to see whether it is still happening, with a larger sample size
- The jobs/workflows run can be more targeted; if you just want to build the 
docs part of a larger change set, you don't need to run the workflow for 
compiling + running tests. If all you care about is a generated installer, only 
that workflow needs to get run (less likely to hit the max concurrent builds 
for your account if you have workflows running in other non-cpython 
repositories)
- Temporary PRs don't need closing to keep subsequent commits from running 
jobs if you don't care about their results, or after the PR gets merged in the 
upstream CPython repo
- May be marginally faster to trigger a workflow run than opening a PR (in 
terms of slower loading pages/tabs on the GitHub website)
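As a concrete sketch of the `inputs` idea in the first bullet, a `workflow_dispatch` trigger with a custom input might look like this (the input name and description are hypothetical, not taken from the CPython workflow files):

```yaml
on:
  # Allows the workflow to be run manually from the Actions tab of a fork.
  workflow_dispatch:
    inputs:
      upload-installers:
        description: 'Upload built installers as workflow artifacts'
        required: false
        default: 'false'
```

The workflow's job steps can then branch on `github.event.inputs.upload-installers` to decide whether to upload artifacts.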

--

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue44974] Warning about "Unknown child process pid" in test_asyncio

2021-08-26 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

I haven't gotten a chance to narrow it down much yet; it might be that it 
occurs more often on systems with a low core count or higher load -- a bit 
hard to tell with it being intermittent, though.

--

___
Python tracker 
<https://bugs.python.org/issue44974>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-23 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

> On the main request, provided the workflow_dispatch is only triggerable by 
> non-contributors in their own fork (without any of our tokens/etc.) then it's 
> fine by me. If it allows anyone to trigger CI builds against the main repo, 
> I'd rather not.

It should require write permissions in a repository to use the trigger, so 
they'll only be able to run workflows in the context of their fork: 
https://github.community/t/who-can-manually-trigger-a-workflow-using-workflow-dispatch/128592/4

I think you could also test this out by going to my fork and seeing if it lets 
you trigger the workflow: 
https://github.com/nightlark/cpython/actions/workflows/build.yml

--

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-23 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

Another observation, first-time contributors need a maintainer to approve the 
workflows for their PRs -- from the looks of it that isn't an instant process 
and could take a few days, so this also gives first-time contributors a way to 
check their changes against the jobs that will run as part of the required CI 
checks.

--

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-21 Thread Ryan Mast (nightlark)


Change by Ryan Mast (nightlark) :


--
nosy: +brett.cannon

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-21 Thread Ryan Mast (nightlark)


Change by Ryan Mast (nightlark) :


--
nosy: +steve.dower

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-21 Thread Ryan Mast (nightlark)


Change by Ryan Mast (nightlark) :


--
keywords: +patch
pull_requests: +26328
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/27873

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-21 Thread Ryan Mast (nightlark)


Change by Ryan Mast (nightlark) :


--
nosy: +FFY00

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-21 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

* The main constraint is that the `workflow_dispatch` trigger must be in 
the GHA workflow yaml files for the branch to be selectable as a target on the 
Actions tab when manually triggering a workflow. Having the change made to the 
main branch and backported would mean that new or rebased branches would have 
the trigger needed for manually running the CI job in forks.

--

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue44972] Add workflow_dispatch trigger for GitHub Actions jobs

2021-08-21 Thread Ryan Mast (nightlark)


New submission from Ryan Mast (nightlark) :

Adding a workflow_dispatch trigger for the GitHub Actions jobs makes it 
possible to run the GHA CI jobs for commits to branches in a fork without 
opening a "draft/WIP" PR to one of the main release branches. It also runs the 
SSL tests which normally get skipped for PRs.

The main constraint is that the `workflow_dispatch` trigger must be in the GHA 
workflow yaml files for a branch to be selectable as a target when manually 
triggering a workflow.

--
components: Build
messages: 400036
nosy: pablogsal, rmast, vstinner, zach.ware
priority: normal
severity: normal
status: open
title: Add workflow_dispatch trigger for GitHub Actions jobs
type: enhancement
versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44972>
___



[issue41322] unittest: deprecate test methods returning non-None values

2021-08-20 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

The new issue for the failing test is bpo-44968

--

___
Python tracker 
<https://bugs.python.org/issue41322>
___



[issue44968] Fix/remove test_subprocess_wait_no_same_group from test_asyncio tests

2021-08-20 Thread Ryan Mast (nightlark)


New submission from Ryan Mast (nightlark) :

A deprecation made in bpo-41322 uncovered issues with 
test_subprocess_wait_no_same_group in test_asyncio, which seems to have been 
broken for some time.

Reverting to a structure similar to the one before the refactoring in 
https://github.com/python/cpython/commit/658103f84ea860888f8dab9615281ea64fee31b9
 but using async/await avoids the deprecation error, though the test still 
might not be running correctly.

With the change I tried in 
https://github.com/python/cpython/commit/658103f84ea860888f8dab9615281ea64fee31b9
 there is a message about an `unknown child process`, which makes me think 
there could be some issues with the subprocess exiting prior to the refactoring 
~8 years ago.

--
components: Tests
messages: 400018
nosy: asvetlov, ezio.melotti, michael.foord, rbcollins, rmast, yselivanov
priority: normal
severity: normal
status: open
title: Fix/remove test_subprocess_wait_no_same_group from test_asyncio tests
versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44968>
___



[issue41322] unittest: deprecate test methods returning non-None values

2021-08-20 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

Sure, I'll open a new issue.

--

___
Python tracker 
<https://bugs.python.org/issue41322>
___



[issue41322] unittest: deprecate test methods returning non-None values

2021-08-20 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

Would rewriting the test so that it doesn't yield be the right fix?

```
def test_subprocess_wait_no_same_group(self):
    # start the new process in a new session
    connect = self.loop.subprocess_shell(
        functools.partial(MySubprocessProtocol, self.loop),
        'exit 7', stdin=None, stdout=None, stderr=None,
        start_new_session=True)
    _, proto = yield self.loop.run_until_complete(connect)
    self.assertIsInstance(proto, MySubprocessProtocol)
    self.loop.run_until_complete(proto.completed)
    self.assertEqual(7, proto.returncode)
```

to

```
def test_subprocess_wait_no_same_group(self):
    # start the new process in a new session
    connect = self.loop.subprocess_shell(
        functools.partial(MySubprocessProtocol, self.loop),
        'exit 7', stdin=None, stdout=None, stderr=None,
        start_new_session=True)
    transp, proto = self.loop.run_until_complete(connect)
    self.assertIsInstance(proto, MySubprocessProtocol)
    self.loop.run_until_complete(proto.completed)
    self.assertEqual(7, proto.returncode)
    transp.close()
```

--

___
Python tracker 
<https://bugs.python.org/issue41322>
___



[issue41322] unittest: deprecate test methods returning non-None values

2021-08-20 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

It looks like since GH-27748 got merged, `test_subprocess_wait_no_same_group` 
in `test_asyncio` has been failing for the Ubuntu SSL tests.

--
nosy: +rmast

___
Python tracker 
<https://bugs.python.org/issue41322>
___



[issue20041] TypeError when f_trace is None and tracing.

2021-08-17 Thread Ryan Mast (nightlark)


Change by Ryan Mast (nightlark) :


--
nosy: +rmast

___
Python tracker 
<https://bugs.python.org/issue20041>
___



[issue26545] [doc] os.walk is limited by python's recursion limit

2021-08-17 Thread Ryan Mast (nightlark)


Change by Ryan Mast (nightlark) :


--
nosy: +rmast

___
Python tracker 
<https://bugs.python.org/issue26545>
___



[issue15373] copy.copy() does not properly copy os.environment

2021-08-17 Thread Ryan Mast (nightlark)


Change by Ryan Mast (nightlark) :


--
nosy: +rmast

___
Python tracker 
<https://bugs.python.org/issue15373>
___



[issue44942] Add number pad enter bind to TK's simpleDialog

2021-08-17 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

I'm new to this system; if I'm understanding 
https://devguide.python.org/triaging/#nosy-list correctly, it looks like the 
people listed for `tkinter` should be added to the nosy list?

--
nosy: +gpolo, serhiy.storchaka

___
Python tracker 
<https://bugs.python.org/issue44942>
___



[issue44942] Add number pad enter bind to TK's simpleDialog

2021-08-17 Thread Ryan Mast (nightlark)


New submission from Ryan Mast (nightlark) :

Tk treats the number pad Enter and main Enter keys separately. The number pad 
Enter key should be bound to `self.ok` in simpleDialog's `Dialog` class so that 
both Enter keys have the same behavior.

A PR for this change has been submitted on GitHub by Electro707.

--
components: Tkinter
messages: 399816
nosy: rmast
priority: normal
pull_requests: 26272
severity: normal
status: open
title: Add number pad enter bind to TK's simpleDialog
type: behavior
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue44942>
___



[issue24905] Allow incremental I/O to blobs in sqlite3

2021-08-17 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

It looks like this has a PR that just needs rebasing, then it will be ready for 
another review.

--
nosy: +rmast

___
Python tracker 
<https://bugs.python.org/issue24905>
___



[issue33140] shutil.chown should not be defined in Windows

2021-08-17 Thread Ryan Mast (nightlark)


Ryan Mast (nightlark)  added the comment:

If this function were to be implemented on Windows, would the preferred 
approach be the one described in the initial message for this issue: creating 
a Windows `os.set_owner` function that uses the appropriate Windows API calls 
to change owner/group settings, and then using that for Windows in the 
`shutil.chown` function?

--
nosy: +rmast

___
Python tracker 
<https://bugs.python.org/issue33140>
___



[issue30924] RPM build doc_files needs files separated into separate lines

2021-08-17 Thread Ryan Mast


Ryan Mast  added the comment:

Should this be closed? It looks like the PR is only changing a file in 
distutils, and according to https://bugs.python.org/issue30925#msg386350 
distutils is deprecated and only release blocking issues will be considered.

--
nosy: +rmast

___
Python tracker 
<https://bugs.python.org/issue30924>
___



[issue44752] Tab completion executes @property getter function

2021-07-27 Thread Ryan Pecor


Ryan Pecor  added the comment:

Wow, that was quick and the code looks clean too! Thanks for fixing that up!

--

___
Python tracker 
<https://bugs.python.org/issue44752>
___



[issue44752] Tab completion executes @property getter function

2021-07-27 Thread Ryan Pecor


Ryan Pecor  added the comment:

Actually, hasattr() is documented as using getattr(), so that behavior is 
expected from getattr(), and I will not be creating a separate issue for it.

Now that I see hasattr() uses getattr(), it looks like the tab completion issue 
might not stem from line 155, but from line 180 
(https://github.com/python/cpython/blob/bb3e0c240bc60fe08d332ff5955d54197f79751c/Lib/rlcompleter.py#L180)
 where it calls getattr().

A possible fix to the tab completion issue might be to add to/expand the 
warning about evaluating arbitrary C code 
(https://github.com/python/cpython/blob/bb3e0c240bc60fe08d332ff5955d54197f79751c/Lib/rlcompleter.py#L145)
 when using tab to autocomplete.
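The side effect described above is easy to reproduce without readline at all; any attribute probe that goes through getattr() runs the property getter. The sketch below (class and attribute names mirror the example elsewhere in this thread, but the counter is added here for illustration) shows that even hasattr() executes the getter, since it is implemented by calling getattr() and checking for an exception:

```python
class Foo:
    def __init__(self, value):
        self.value = value
        self.getter_calls = 0

    @property
    def print_value(self):
        # Side effect: runs every time the attribute is merely probed.
        self.getter_calls += 1
        return f"Foo has a value of {self.value}"


bar = Foo(4)

# hasattr() calls getattr() under the hood, so the getter executes here
# even though we never "used" the attribute value.
hasattr(bar, "print_value")
print(bar.getter_calls)  # 1
```

This is why rlcompleter's completion probe triggers the getter: there is no way to ask "does this attribute exist?" for a property without running its code.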

--

___
Python tracker 
<https://bugs.python.org/issue44752>
___



[issue44752] Tab completion executes @property getter function

2021-07-27 Thread Ryan Pecor


Ryan Pecor  added the comment:

It looks to me like the issue is caused by the eval() in line 155 of the 
rlcompleter.py file 
(https://github.com/python/cpython/blob/bb3e0c240bc60fe08d332ff5955d54197f79751c/Lib/rlcompleter.py#L155)
 which runs the function in order to see if it runs or raises an exception.

I thought maybe replacing it with hasattr() might work, but it looks like the 
issue is repeated there as well!

>>> hasattr(bar, "print_value")
Foo has a value of 4
True

This goes over to the C side of things now 
(https://github.com/python/cpython/blob/196998e220d6ca030e5a1c8ad63fcaed8e049a98/Python/bltinmodule.c#L1162)
 and I'll put in another issue regarding that!

--

___
Python tracker 
<https://bugs.python.org/issue44752>
___



[issue44752] Tab completion executes @property getter function

2021-07-27 Thread Ryan Pecor


Ryan Pecor  added the comment:

I forgot to mention that I also added "~~~" to either side of the printed 
string every time it printed to help differentiate the printed string from 
commands that I typed into the interpreter.

--

___
Python tracker 
<https://bugs.python.org/issue44752>
___



[issue44752] Tab completion executes @property getter function

2021-07-27 Thread Ryan Pecor


New submission from Ryan Pecor :

After making a class using the @property decorator to implement a getter, using 
tab completion that matches the getter function name executes the function. 

See below for an example (line numbers added; the description after the 
example notes where the user presses the tab key):

1  >>> class Foo(object):
2  ... def __init__(self,value):
3  ... self.value = value
4  ... @property
5  ... def print_value(self):
6  ... print("Foo has a value of {}".format(self.value))
7  ... 
8  >>> bar = Foo(4)
9  >>> bar.~~~Foo has a value of 4~~~
10 ~~~Foo has a value of 4~~~
11 
12 bar.print_value  bar.value
13 >>> bar.p~~~Foo has a value of 4~~~
14 rint_value~~~Foo has a value of 4~~~
15 ~~~Foo has a value of 4~~~
16 
17 bar.print_value
18 >>> bar.value

I pressed tab after typing "bar." in line 9. It then printed the remainder of 
line 9 and moved the cursor to line 10. Pressing tab again prints line 10 and 
11 before finally showing the expected output on line 12. lines 13-17 follow 
the same steps, but after typing "bar.p" to show that it happens whenever you 
tab and it matches the getter. Line 18 shows a correct tab completion resulting 
from hitting tab after typing "bar.v" which does not match the getter function.

--
components: Interpreter Core
messages: 398323
nosy: RPecor
priority: normal
severity: normal
status: open
title: Tab completion executes @property getter function
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44752>
___



[issue28356] [easy doc] Document os.rename() behavior on Windows when src and dst are on different filesystems

2021-07-26 Thread Ryan Ozawa


Change by Ryan Ozawa :


--
pull_requests: +25912
stage: needs patch -> patch review
pull_request: https://github.com/python/cpython/pull/27376

___
Python tracker 
<https://bugs.python.org/issue28356>
___



[issue28356] [easy doc] Document os.rename() behavior on Windows when src and dst are on different filesystems

2021-07-25 Thread Ryan Ozawa


Ryan Ozawa  added the comment:

Hi all,

This is my first issue so feedback is welcome.

Following @vstinner 's suggestions:
> * os.rename() can fail if source and destination are on two different
file systems
> * Use shutil.move() to support move to a different directory

And from @eryksun :
> ... on Windows the "operation will fail if src and dst are on different 
> filesystems".

Just 2 short lines:

@@ -2292,6 +2292,8 @@ features:
    will fail with an :exc:`OSError` subclass in a number of cases:

    On Windows, if *dst* exists a :exc:`FileExistsError` is always raised.
+   The operation will fail if *src* and *dst* are on different filesystems. Use
+   :func:`shutil.move` to support moves to a different filesystem.


If there is nothing to change, I will make a PR soon.
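The behavioral difference the patch documents can be sketched as follows (the paths here are temporary directories, so both calls would succeed; the contrast only appears when `src` and `dst` sit on different filesystems, where `os.rename` can raise `OSError` but `shutil.move` falls back to copy-then-delete):

```python
import os
import shutil
import tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "data.txt")
with open(src, "w") as f:
    f.write("hello")

# os.rename() maps to a single rename operation and can fail across
# filesystem boundaries; shutil.move() detects that case and performs a
# copy followed by a delete instead.
dst = os.path.join(dst_dir, "data.txt")
shutil.move(src, dst)
print(os.path.exists(dst))  # True
```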

--
keywords: +patch
nosy: +rhyn0
Added file: https://bugs.python.org/file50180/os_rename_windows.patch

___
Python tracker 
<https://bugs.python.org/issue28356>
___



[issue44437] Add multimap() function similar to map(), but with multiprocessing functionality to the multiprocessing module

2021-06-16 Thread Ryan Rudes


Change by Ryan Rudes :


--
components: +Library (Lib) -Tkinter

___
Python tracker 
<https://bugs.python.org/issue44437>
___



[issue44437] Add multimap() function similar to map(), but with multiprocessing functionality to the multiprocessing module

2021-06-16 Thread Ryan Rudes


Change by Ryan Rudes :


--
keywords: +patch
pull_requests: +25347
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/26762

___
Python tracker 
<https://bugs.python.org/issue44437>
___



[issue44437] Add multimap() function similar to map(), but with multiprocessing functionality to the multiprocessing module

2021-06-16 Thread Ryan Rudes


Change by Ryan Rudes :


--
components: Tkinter
nosy: Ryan-Rudes
priority: normal
severity: normal
status: open
title: Add multimap() function similar to map(), but with multiprocessing 
functionality to the multiprocessing module
type: enhancement
versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44437>
___



[issue41299] Python3 threading.Event().wait time is twice as large as Python27

2021-06-14 Thread Ryan Hileman


Ryan Hileman  added the comment:

> Do you think that pytime.c has the bug? I don't think so.

No, a misaligned stack would be an issue in the caller or compiler, not 
pytime.c. I have hit a misaligned stack in practice, but it should be rare 
enough that checking at init only is sufficient.

> In theory yes, in practice we got zero error reports. So it sounds like it 
> cannot happen.
> I don't think that it's a good practice to start to add checks in all 
> functions using a clock "just in case" the clock might fail.

My read is that as long as we're not confident enough to remove those checks 
from pytime.c, a caller should assume they're reachable. If the pytime checks 
need to stay, adding a Windows only pytime init check to make sure that locks 
won't deadlock sounds fine to me.

--

___
Python tracker 
<https://bugs.python.org/issue41299>
___



[issue44328] time.monotonic() should use a different clock source on Windows

2021-06-14 Thread Ryan Hileman


Ryan Hileman  added the comment:

> It shouldn't behave drastically different just because the user closed the 
> laptop lid for an hour

I talked to someone who's been helping with the Go time APIs and it seems like 
that holds pretty well for interactive timeouts, but makes no sense for network 
related code. If you lost a network connection (with, say, a 30 second timeout) 
due to the lid being closed, you don't want to wait 30 seconds after opening 
the lid for the application to realize it needs to reconnect. (However there's 
probably no good way to design Python's locking system around both cases, so 
it's sufficient to say "lock timers won't advance during suspend" and make the 
application layer work around that on its own in the case of network code)

> Try changing EnterNonRecursiveMutex() to break out of the loop in this case

This does work, but unfortunately a little too well - in a single test I saw 
several instances where that approach returned _earlier_ than the timeout.

I assume the reason for this loop is the call can get interrupted with a "needs 
retry" state. If so, you'd still see 16ms of jitter anytime that happens as 
long as it's backed by a quantized time source.

--

___
Python tracker 
<https://bugs.python.org/issue44328>
___



[issue41299] Python3 threading.Event().wait time is twice as large as Python27

2021-06-14 Thread Ryan Hileman


Ryan Hileman  added the comment:

Perhaps the simplest initial fix would be to move that check down to 
PyThread__init_thread() in the same file. I'm not sure what the cpython 
convention for that kind of init error is, would it just be the same 
Py_FatalError block or is there a better pattern?

--

___
Python tracker 
<https://bugs.python.org/issue41299>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41299] Python3 threading.Event().wait time is twice as large as Python27

2021-06-14 Thread Ryan Hileman


Ryan Hileman  added the comment:

I agree with not throwing fatal errors, but that check is unlikely to actually 
be hit, and you removed the startup checks covering the underlying clocks here: 
https://github.com/python/cpython/commit/ae6cd7cfdab0599139002c526953d907696d9eef

I think if the time source is broken, a user would probably prefer an exception 
or fatal error to the deadlock they will otherwise get (as time not advancing 
means it's impossible to timeout), so it doesn't make sense to remove the check 
without doing something else about it.

There are three places win_perf_counter_frequency() can fail: 
https://github.com/python/cpython/blob/bb3e0c240bc60fe08d332ff5955d54197f79751c/Python/pytime.c#L974

I mention the QueryPerformanceFrequency error case here (stack misalignment): 
https://bugs.python.org/issue41299#msg395237

The other option, besides startup checks, would be to better parameterize the 
timer used (mentioned in bpo-44328): Prefer QueryUnbiasedInterruptTimePrecise 
if available (Win 10+ via GetProcAddress), then QPC, then 
QueryUnbiasedInterruptTime (which has the original bug wrt jitter but should 
never be used as QPC is unlikely to be broken).

--

___
Python tracker 
<https://bugs.python.org/issue41299>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue44328] time.monotonic() should use a different clock source on Windows

2021-06-13 Thread Ryan Hileman


Ryan Hileman  added the comment:

> The monotonic clock should thus be based on QueryUnbiasedInterruptTime

My primary complaint here is that Windows is the only major platform with a low 
resolution monotonic clock. Using QueryUnbiasedInterruptTime() on older OS 
versions wouldn't entirely help that, so we have the same consistency issue 
(just on a smaller scale). I would personally need to still use 
time.perf_counter() instead of time.monotonic() due to this, but I'm not 
totally against it.
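The clock-source and resolution differences being discussed can be inspected directly from Python; the implementation names and resolutions printed below vary by platform and Python version, so treat the output as informational rather than fixed:

```python
import time

# time.get_clock_info() exposes which OS primitive backs each clock
# and its advertised resolution, which is the crux of this discussion.
for name in ("monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    print(name, info.implementation, info.resolution, info.monotonic)
```

On the Windows versions discussed in this thread, `monotonic` reports `GetTickCount64()` with roughly 15.6 ms resolution while `perf_counter` reports `QueryPerformanceCounter()` with sub-microsecond resolution, which is the inconsistency at issue.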

> For consistency, an external deadline (e.g. for SIGINT support) should work 
> the same way.

Have there been any issues filed about the deadline behaviors across system 
suspend?

> which I presume includes most users of Python 3.9+

Seems like Windows 7 may need to be considered as well, as per vstinner's 
bpo-32592 mention?

> starting with Windows 8, WaitForSingleObject() and WaitForMultipleObjects() 
> exclude time when the system is suspended

Looks like Linux (CLOCK_MONOTONIC) and macOS (mach_absolute_time()) already 
don't track suspend time in time.monotonic(). I think that's enough to suggest 
that long-term Windows shouldn't either, but I don't know how to reconcile that 
with my desire for Windows not to be the only platform with low resolution 
monotonic time by default.

> then the change to use QueryPerformanceCounter() to resolve bpo-41299 should 
> be reverted. The deadline should instead be computed with 
> QueryUnbiasedInterruptTime()

I don't agree with this, as it would regress the fix. This is more of a topic 
for bpo-41299, but I tested QueryUnbiasedInterruptTime() and it exhibits the 
same 16ms jitter as GetTickCount64() (which I expected), so non-precise 
interrupt time can't solve this issue. I do think 
QueryUnbiasedInterruptTimePrecise() would be a good fit. I think making this 
particular timeout unbiased (which would be a new behavior) should be a lower 
priority than making it not jitter.

> For Windows we could implement the following clocks:

I think that list is great and making those enums work with clock_gettime on 
Windows sounds like a very clear improvement to the timing options available. 
Having the ability to query each clock source directly would also reduce the 
impact if time.monotonic() does not perfectly suit a specific application.
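For reference, this is roughly what the per-clock escape hatch already looks like on POSIX today; the Windows enums discussed above would be a hypothetical extension of the same interface (note `time.CLOCK_MONOTONIC` and `time.clock_gettime()` are currently POSIX-only):

```python
import time

# On POSIX, time.clock_gettime() already lets callers pick a clock source
# explicitly; the proposal would add analogous enums on Windows (e.g. a
# QPC-backed clock -- hypothetical, not in released CPython).
mono = time.clock_gettime(time.CLOCK_MONOTONIC)
info = time.get_clock_info("monotonic")
print(f"monotonic={mono:.6f}s resolution={info.resolution} impl={info.implementation}")
```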

---

I think my current positions after writing all of this are:

- I would probably be in support of a 3.11+ change for time.monotonic() to use 
QueryUnbiasedInterruptTime() pre-Windows 10, and dynamically use 
QueryUnbiasedInterruptTimePrecise() on Windows 10+. Ideally the Windows 
clock_gettime() code lands in the same release, so users can directly pick 
their time source if necessary. This approach also helps my goal of making 
time.monotonic()'s suspend behavior more uniform across platforms.

- Please don't revert bpo-41299 (especially the backports), as it does fix the 
issue and tracking suspend time is the same (not a regression) as the previous 
GetTickCount64() code. I think the lock timeouts should stick with QPC 
pre-Windows-10 to fix the jitter, but could use 
QueryUnbiasedInterruptTimePrecise() on Windows 10+ (which needs the same 
runtime check as the time.monotonic() change, thus could probably land in the 
same patch set).

- I'm honestly left with more questions than I started after diving into the 
GetSystemTimePreciseAsFileTime() rabbit hole. I assume it's not a catastrophic 
issue? Maybe it's a situation where adding the clock_gettime() enums would 
sufficiently help anyone who cares about the exact behavior during clock 
modification. I don't have strong opinions about it, besides it being a shame 
that Windows currently has lower precision timestamps in general. Could be 
worth doing a survey of other languages' choices, but any further discussion 
can probably go to bpo-19007.

--

___
Python tracker 
<https://bugs.python.org/issue44328>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue44328] time.monotonic() should use a different clock source on Windows

2021-06-12 Thread Ryan Hileman


Ryan Hileman  added the comment:

I think a lot of that is based on very outdated information. It's worth reading 
this article: 
https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps

I will repeat Microsoft's current recommendation (from that article):

> Windows has and will continue to invest in providing a reliable and efficient 
> performance counter. When you need time stamps with a resolution of 1 
> microsecond or better and you don't need the time stamps to be synchronized 
> to an external time reference, choose QueryPerformanceCounter, 
> KeQueryPerformanceCounter, or KeQueryInterruptTimePrecise. When you need 
> UTC-synchronized time stamps with a resolution of 1 microsecond or better, 
> choose GetSystemTimePreciseAsFileTime or KeQuerySystemTimePrecise.

(Based on that, it may also be worth replacing time.time()'s 
GetSystemTimeAsFileTime with GetSystemTimePreciseAsFileTime in CPython, as 
GetSystemTimePreciseAsFileTime is available in Windows 8 and newer)

PEP 418:

> It has a much higher resolution, but has lower long term precision than 
> GetTickCount() and timeGetTime() clocks. For example, it will drift compared 
> to the low precision clocks.

Microsoft on drift (from the article above):

> To reduce the adverse effects of this frequency offset error, recent versions 
> of Windows, particularly Windows 8, use multiple hardware timers to detect 
> the frequency offset and compensate for it to the extent possible. This 
> calibration process is performed when Windows is started.

Modern Windows also automatically detects and works around stoppable TSC, as 
well as several other issues:

> Some processors can vary the frequency of the TSC clock or stop the 
> advancement of the TSC register, which makes the TSC unsuitable for timing 
> purposes on these processors. These processors are said to have non-invariant 
> TSC registers. (Windows will automatically detect this, and select an 
> alternative time source for QPC).

It seems like Microsoft considers QPC to be a significantly better time source 
now, than when PEP 418 was written.

Another related conversation is whether Python can just expose all of the 
Windows clocks directly (through clock_gettime enums?), as that gives anyone 
who really wants full control over their timestamps a good escape hatch.

--




[issue44328] time.monotonic() should use a different clock source on Windows

2021-06-09 Thread Ryan Hileman


Ryan Hileman  added the comment:

Great information, thanks!

> Windows 10 also provides QueryInterruptTimePrecise(), which is a hybrid 
> solution. It uses the performance counter to interpolate a timestamp between 
> interrupts. I'd prefer to use this for time.monotonic() instead of QPC, if 
> it's available via GetProcAddress()

My personal vote is to use the currently most common clock source (QPC) for now 
for monotonic(), because it's the same across Windows versions and the most 
likely to produce portable monotonic timestamps between apps/languages on the 
same system. It's also the easiest patch, as there's already a code path for 
QPC.

(As someone building multi-app experiences around Python, I don't want to check 
the Windows version to see which time base Python is using. I'd feel better 
about switching to QITP() if/when Python drops Windows 8 support.)

A later extension of this idea (maybe behind a PEP) could be to survey the 
existing timers available on each platform, consider whether it's worth 
extending `time` to expose them all, and to unify across platforms the ones 
that are exposed (e.g. better formalize/document which clocks will advance 
while the machine is asleep on each platform).

--




[issue44328] time.monotonic() should use a different clock source on Windows

2021-06-06 Thread Ryan Hileman


Ryan Hileman  added the comment:

I found these two references:
- 
https://stackoverflow.com/questions/35601880/windows-timing-drift-of-performancecounter-c
- https://bugs.python.org/issue10278#msg143209

Which suggest QueryPerformanceCounter() may be bad because it can drift. 
However, these posts are fairly old and the StackOverflow post also says the 
drift is small on newer hardware / Windows.

Microsoft's current stance is that QueryPerformanceCounter() is good: 
https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps

> Guidance for acquiring time stamps
> Windows has and will continue to invest in providing a reliable and efficient 
> performance counter. When you need time stamps with a resolution of 1 
> microsecond or better and you don't need the time stamps to be synchronized 
> to an external time reference, choose QueryPerformanceCounter

I looked into how a few other languages provide monotonic time on Windows:

Golang seems to read the interrupt time (presumably equivalent to 
QueryInterruptTime) directly by address. 
https://github.com/golang/go/blob/a3868028ac8470d1ab7782614707bb90925e7fe3/src/runtime/sys_windows_amd64.s#L499

Rust uses QueryPerformanceCounter: 
https://github.com/rust-lang/rust/blob/38ec87c1885c62ed8c66320ad24c7e535535e4bd/library/std/src/time.rs#L91

V8 uses QueryPerformanceCounter after checking for old CPUs: 
https://github.com/v8/v8/blob/dc712da548c7fb433caed56af9a021d964952728/src/base/platform/time.cc#L672

Ruby uses QueryPerformanceCounter: 
https://github.com/ruby/ruby/blob/44cff500a0ad565952e84935bc98523c36a91b06/win32/win32.c#L4712

C# implements QueryPerformanceCounter on other platforms using CLOCK_MONOTONIC, 
indicating that they should be roughly equivalent: 
https://github.com/dotnet/runtime/blob/01b7e73cd378145264a7cb7a09365b41ed42b240/src/coreclr/pal/src/misc/time.cpp#L175

Swift originally used QueryPerformanceCounter, but switched to 
QueryUnbiasedInterruptTime() because they didn't want to count time the system 
spent asleep: 
https://github.com/apple/swift-corelibs-libdispatch/commit/766d64719cfdd07f97841092bec596669261a16f

--

Note that none of these languages use GetTickCount64(). Swift is an interesting 
counterpoint, and I noticed QueryUnbiasedInterruptTime() is available on 
Windows 8 while QueryInterruptTime() is new as of Windows 10. The "Unbiased" 
just refers to whether it advances during sleep.

I'm not actually sure whether time.monotonic() in Python counts time spent 
asleep, or whether that's desirable. Some kinds of timers using monotonic time 
should definitely freeze during sleep so they don't cause a flurry of activity 
on wake, but others definitely need to roughly track wall clock time, even 
during sleep.

Perhaps the long term answer would be to introduce separate "asleep" and 
"awake" monotonic clocks in Python, and possibly deprecate perf_counter() if 
it's redundant after this (as I think it's aliased to monotonic() on 
non-Windows platforms anyway).
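As a quick way to compare what each platform currently reports, `time.get_clock_info()` exposes the backing implementation and resolution of both clocks:

```python
import time

# Inspect which OS primitive backs each clock. On Windows (before any change
# here) "monotonic" reports GetTickCount64() with ~16ms resolution while
# "perf_counter" reports QueryPerformanceCounter(); on Linux/macOS the two are
# typically backed by the same source.
for name in ("monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    print(name, info.implementation, info.resolution, "adjustable:", info.adjustable)
```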

--
title: time.monotonic() should use QueryPerformanceCounter() on Windows -> 
time.monotonic() should use a different clock source on Windows




[issue41299] Python3 threading.Event().wait time is twice as large as Python27

2021-06-06 Thread Ryan Hileman


Ryan Hileman  added the comment:

Ok, I filed a PR for this. I used pytime's interface to avoid duplicating the 
QueryPerformanceFrequency() code.

I found a StackOverflow answer that says QueryPerformance functions will only 
fail if you pass in an unaligned pointer: https://stackoverflow.com/a/27258700

Per that, I used Py_FatalError to catch this case, as there is probably 
something wrong with the compiler at that point, and the other recovery options 
would be likely to result in incorrect program behavior (e.g. dead lock).

--




[issue41299] Python3 threading.Event().wait time is twice as large as Python27

2021-06-06 Thread Ryan Hileman


Change by Ryan Hileman :


--
keywords: +patch
pull_requests: +25157
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/26568




[issue44328] time.monotonic() should use QueryPerformanceCounter() on Windows

2021-06-06 Thread Ryan Hileman


New submission from Ryan Hileman :

Related to https://bugs.python.org/issue41299#msg395220

Presumably `time.monotonic()` on Windows historically used GetTickCount64() 
because QueryPerformanceCounter() could fail. However, that hasn't been the 
case since Windows XP: 
https://docs.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter

> On systems that run Windows XP or later, the function will always succeed and 
> will thus never return zero

I've run into issues with this when porting Python-based applications to 
Windows. On other platforms, time.monotonic() was a decent precision so I used 
it. When I ported to Windows, I had to replace all of my time.monotonic() calls 
with time.perf_counter(). I would pretty much never knowingly call 
time.monotonic() if I knew ahead of time it could be quantized to 16ms.

My opinion is that the GetTickCount64() monotonic time code in CPython should 
be removed entirely and only the QueryPerformanceCounter() path should be used.

I also think some of the failure checks could be removed from 
QueryPerformanceCounter() / QueryPerformanceFrequency(), as they're documented 
to never fail in modern Windows and CPython has been dropping support for older 
versions of Windows, but that's less of a firm opinion.

--
components: Library (Lib), Windows
messages: 395221
nosy: lunixbochs2, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: time.monotonic() should use QueryPerformanceCounter() on Windows
type: performance
versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9




[issue41299] Python3 threading.Event().wait time is twice as large as Python27

2021-06-06 Thread Ryan Hileman


Change by Ryan Hileman :


--
versions: +Python 3.10, Python 3.11, Python 3.9




[issue41299] Python3 threading.Event().wait time is twice as large as Python27

2021-06-06 Thread Ryan Hileman


Ryan Hileman  added the comment:

I just ran into this. GetTickCount64() is a bad choice even without improving 
the Windows timer resolution, as every mutex wait will have 16ms of jitter. 
Here are some lock.acquire(timeout=0.001) times measured with 
time.perf_counter():

elapsed=21.215ms
elapsed=30.960ms
elapsed=21.686ms
elapsed=30.998ms
elapsed=30.794ms

Here's the same lock.acquire(timeout=0.001) with CPython patched to use 
QueryPerformanceCounter() instead of GetTickCount64(). Notice this is less 
overhead than even the original post's Python 2.x times.

elapsed=9.554ms
elapsed=14.516ms
elapsed=13.985ms
elapsed=13.434ms
elapsed=13.724ms

Here's the QueryPerformanceCounter() test in a timeBeginPeriod(1) block:

elapsed=1.135ms
elapsed=1.204ms
elapsed=1.189ms
elapsed=1.052ms
elapsed=1.052ms

I'd like to submit a PR to fix the underlying issue by switching to 
QueryPerformanceCounter() in EnterNonRecursiveMutex().

QueryInterruptTime() is a bad candidate because it's only supported on Windows 
10, and CPython still supports Windows 8. Improvements based on 
QueryPerformanceCounter() can be backported to at least 3.8 (3.8 dropped 
Windows XP support, which was the last Windows version where 
QueryPerformanceCounter() could fail).

I checked and the only other use of GetTickCount64() seems to be in 
time.monotonic(). Honestly I would vote to change time.monotonic() to 
QueryPerformanceCounter() as well, as QueryPerformanceCounter() can no longer 
fail on any Windows newer than XP (which is no longer supported by Python), but 
that's probably a topic for a new BPO.

--
nosy: +lunixbochs2
versions:  -Python 3.10, Python 3.9




[issue43913] unittest module cleanup functions not run unless tearDownModule() is defined

2021-04-22 Thread Ryan Tarpine


New submission from Ryan Tarpine :

Functions registered with unittest.addModuleCleanup are not called unless the 
user defines tearDownModule in their test module.

This behavior is unexpected because functions registered with 
TestCase.addClassCleanup are called even if the user doesn't define 
tearDownClass, and similarly with addCleanup/tearDown.

The implementing code is basically the same for all 3 cases, the difference is 
that unittest.TestCase itself defines tearDown and tearDownClass; so even 
though doClassCleanups is only called if tearDownClass is defined, in practice 
it always is.

doModuleCleanups should be called even if tearDownModule is not defined.
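Until that happens, a workaround consistent with the analysis above is to define a no-op tearDownModule(). A minimal sketch (the module name "example_tests" and the helper wiring are just illustration, so the snippet works regardless of how it is executed):

```python
import sys
import types
import unittest

ran = []

def setUpModule():
    unittest.addModuleCleanup(ran.append, "cleaned")

def tearDownModule():
    # No-op workaround: on the affected 3.8 releases, merely defining
    # tearDownModule is what makes doModuleCleanups() run at all.
    pass

class Example(unittest.TestCase):
    def test_nothing(self):
        self.assertTrue(True)

# Package the pieces as a throwaway module, since unittest's module-fixture
# machinery looks fixtures up through sys.modules via the test's __module__.
mod = types.ModuleType("example_tests")
mod.setUpModule = setUpModule
mod.tearDownModule = tearDownModule
mod.Example = Example
Example.__module__ = "example_tests"
sys.modules["example_tests"] = mod

suite = unittest.defaultTestLoader.loadTestsFromModule(mod)
unittest.TextTestRunner(verbosity=0).run(suite)
print(ran)  # ['cleaned'] on versions where module cleanups run
```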

--
components: Library (Lib)
messages: 391619
nosy: rtarpine
priority: normal
severity: normal
status: open
title: unittest module cleanup functions not run unless tearDownModule() is 
defined
type: behavior
versions: Python 3.8




[issue43738] Clarify public name of curses.window

2021-04-05 Thread Ryan McCampbell


New submission from Ryan McCampbell :

Until 3.8 the curses window class was not directly available in code, but now 
it is available as `_curses.window`. This is not explicitly stated in the 
documentation (although it is consistent with how the method signatures are 
written). It is useful to have a public name for the type to aid IDEs with 
explicit type annotations, e.g.

@curses.wrapper
def main(stdscr: curses.window):
    stdscr.addstr(...)

See https://github.com/python/typeshed/pull/5180, which adds this name to type 
hints in the typeshed project.

This name should be more clearly documented so programmers can annotate the 
type without worrying that it may change (which will cause a runtime error 
unless it is quoted).

--
assignee: docs@python
components: Documentation
messages: 390266
nosy: docs@python, rmccampbell7
priority: normal
severity: normal
status: open
title: Clarify public name of curses.window
type: enhancement
versions: Python 3.10, Python 3.9




[issue43532] Add keyword-only fields to dataclasses

2021-03-17 Thread Ryan Hiebert


Change by Ryan Hiebert :


--
nosy: +ryanhiebert




[issue29687] smtplib does not support proxy

2021-03-17 Thread Ryan Hiebert


Ryan Hiebert  added the comment:

Thank you, Christian. It sounds like you believe that we should view the 
`_get_socket` method as a public interface? That at least makes it possible to 
use a proxy socket through an appropriate mechanism, which solves my use-case.

--




[issue29687] smtplib does not support proxy

2021-03-17 Thread Ryan Hiebert


Change by Ryan Hiebert :


--
nosy: +ryanhiebert




[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-02-22 Thread Ryan Hileman


Ryan Hileman  added the comment:

> Sounds good to me. We can deprecate RESTRICTED with no intention to 
> remove it, since it's documented.
> Do you want to prepare a PR for this?

In case you missed it, the attached PR 24182 as of commit d3e998b is based on 
the steps I listed - I moved all of the proposed audited properties over to a 
new AUDIT_READ flag that is much simpler.

--




[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-22 Thread Ryan Hileman


Ryan Hileman  added the comment:

Just updated the PR with another much simpler attempt, using a new READ_AUDIT 
flag (aliased to READ_RESTRICTED, and newtypes documentation updated).

I re-ran timings for the new build, and in all cases they match or slightly 
beat my previous reported timings.

--




[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-22 Thread Ryan Hileman


Ryan Hileman  added the comment:

I agree that READ_RESTRICTED would work, and I'm strongly in support of 
refactoring my patch around that kind of flag, as it simplifies it quite a bit 
and the if statement is already there.

However, using the seemingly legacy RESTRICTED flag names for audit is 
confusing in my opinion:

- The audit subsystem does something entirely different from the long 
deprecated "Restricted execution" feature (removed in 3.0?)
- Nothing in the stdlib uses RESTRICTED that I can see.
- The documentation for RESTRICTED flags (Doc/extending/newtypes.rst) doesn't 
say anything about the audit system for READ_RESTRICTED, and talks about 
restricted mode as though it still exists.
- RESTRICTED only supports __getattr__ (PY_WRITE_RESTRICTED does nothing at 
all, and there is no delattr equivalent). This doesn't actually matter for this 
patch, it's just confusing in the context of audit, as there are 
`object.__setattr__` and `object.__delattr__` audit points but no corresponding 
flags.

I think it could make sense to:
1. Alias READ_RESTRICTED to a new READ_AUDIT flag and use the latter instead, 
as it is clearer.
2. Update the newtype docs to mention READ_AUDIT and remove documentation for 
the unused RESTRICTED flags.
3. Deprecate the non-functional RESTRICTED flags if that's possible?
4. Only cross the setattr/delattr audit flag bridge if a future refactor calls 
for it.

--




[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-21 Thread Ryan Hileman


Ryan Hileman  added the comment:

How's this for maintainable?

https://github.com/lunixbochs/cpython/commit/2bf1cc93d19a49cbed09b45f7dbb00212229f0a1

--




[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-21 Thread Ryan Hileman


Ryan Hileman  added the comment:

My understanding as per the outline in PEP 551 as well as PEP 578, is that the 
audit system is meant primarily to observe the behavior of code rather than to 
have good sandbox coverage / directly prevent behavior.

I am using audit hooks to observe the behavior of third party Python, and I 
identified an indicator of shady behavior which includes code and frame object 
access (which includes sys._getframe(), and __code__, both of which are part of 
the original PEP 578).

I looked into it further and realized that CPython's auditing for those 
attributes/objects is superficial. I understand that auditing isn't perfect, 
and I'm not trying to change that. This patch just seems to me like a really 
basic and obvious extension of the existing __code__ and getframe audit points.



I ask that if your main hesitation is the impact of future audit hooks, we use 
this opportunity to establish a basic written precedent we can reference in the 
future about which kind of audit hook modifications are likely to be accepted 
without, say, another PEP.

One possible set of criteria:
 - The added hooks should be justified as functionally identical to something 
the existing PEP(s) suggested.
 - Performance should be measured and it should have very little impact on 
stdlib or general code.
 - The requester should be expected to justify the change, e.g. how it closes 
an obvious gap in an existing PEP 578 hook.

And my answers for those criteria:
 - These are functionally equivalent to the existing PEP 578 hooks for 
sys._getframe() and function.__code__ - they operate on similar types of 
objects and are used for accessing the exact same information.
 - Performance impact here appears to be only for debugging code, and 
performance impact on debugging code is infinitesimal when no audit hook is 
active.
 - I am auditing code for trivial usage of Python frames and code objects, and 
I can't do that sufficiently with the existing hooks (especially so now that 
I'm publicly documenting this gap).



If the primary complaint is maintenance burden, would it be preferable to add 
an attribute audit flag to PyMemberDef instead of using one-off PyGetSetDef 
functions? e.g.:

static PyMemberDef frame_memberlist[] = {
    {"f_code", T_OBJECT, OFF(f_code), READONLY|AUDIT_ACCESS},
};

That would definitely simplify the implementation.

If these additions aren't worth it, I would almost recommend removing or 
deprecating the existing __code__ and sys._getframe() audit hooks instead, as I 
find them to be not very useful without this patch.

--




[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-21 Thread Ryan Hileman


Ryan Hileman  added the comment:

My personal motivation is not to unilaterally prevent access to globals, but to 
close a simpler gap in the audit system that affects a currently deployed high 
performance production system (which is not trying to be a sandbox). I am also 
already using a C audit hook for my purposes.

If you are referencing vstinner's first message, please remember to read their 
follow up https://bugs.python.org/msg384988 where they seem to have changed 
their mind in support of the patch.

The audit attributes I'm chasing here are fairly small in scope, and 
overwhelmingly only used in debug code. I believe adding them is in the spirit 
of the original PEP. I have also done extensive testing and CPython C and 
stdlib code analysis as part of this effort.

If you agree with the original PEP authors that __code__ and sys._getframe() 
are worth auditing, then I believe this is a natural extension of that concept. 
My patch improves upon the PEP by increasing the audit coverage to every way I 
can see of getting a frame and code object from basic CPython types.

This is a simple patch with clear performance metrics. I don't see any reason 
to expand the scope of this in the future unless CPython adds another basic 
object type along the same lines (e.g. a new async function type, a new 
traceback type, or a new frame type).

--




[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-21 Thread Ryan Hileman


Ryan Hileman  added the comment:

I just found out that generator object variants have their own code attributes. 
I investigated the stdlib usage and it seems to be for debug / dis only, so 
adding these attributes shouldn't impact performance.

I updated the PR to now cover the following attributes:

PyTracebackObject.tb_frame
PyFrameObject.f_code
PyGenObject.gi_code
PyCoroObject.cr_code
PyAsyncGenObject.ag_code

I have also attached a `check_hooks.py` file which allows for quick visual 
inspection that all of the hooks are working (It prints each attribute name, 
then accesses it. Expected output is an AUDIT line after each attribute 
printed.)
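The attached check_hooks.py isn't reproduced here, but a minimal check along the same lines, using the sys._getframe audit event that already exists in released CPython (the attribute events above require the patch), might look like:

```python
import sys

seen = []

def hook(event, args):
    # Audit hooks must not raise; just record the event names of interest.
    if event in ("sys._getframe", "object.__getattr__"):
        seen.append(event)

sys.addaudithook(hook)  # note: hooks cannot be removed once installed
frame = sys._getframe()  # fires the "sys._getframe" audit event
print(seen)
```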

--
Added file: https://bugs.python.org/file49755/check_hooks.py




[issue42945] weakref.finalize documentation contradicts itself RE: finalizer callback or args referencing object

2021-01-16 Thread Ryan Heisler


Ryan Heisler  added the comment:

Perfect, thanks for your quick response. I was passing a bound method of obj as 
the func to `weakref.finalize(obj, func, /, *args, **kwargs)`. It slipped my 
mind that an instance variable like self.name and a bound method like 
self.clean_up, though they both belong to self, would be evaluated differently.

For anyone wondering, I was looking for weakref.proxy 
(https://docs.python.org/3/library/weakref.html#weakref.proxy) or 
weakref.WeakMethod 
(https://docs.python.org/3/library/weakref.html#weakref.WeakMethod)
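A short sketch of the distinction (Resource and close_by_name are made-up names for illustration):

```python
import weakref

class Resource:
    def __init__(self, name):
        self.name = name
    def close(self):
        print("closing", self.name)

def close_by_name(name):
    print("closing", name)

r = Resource("db")
# BAD: r.close is a bound method, so the finalizer would own a reference to r
# and r could never be collected:
#     weakref.finalize(r, r.close)

# OK: func is a free function and the arg is the plain string value of r.name,
# neither of which references r.
fin = weakref.finalize(r, close_by_name, r.name)
del r
print(fin.alive)  # False: r was collected and the finalizer already ran
```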

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue42945] weakref.finalize documentation contradicts itself RE: finalizer callback or args referencing object

2021-01-16 Thread Ryan Heisler


New submission from Ryan Heisler :

In the documentation for `weakref.finalize` 
(https://docs.python.org/3.9/library/weakref.html#weakref.finalize), it says:

"Note It is important to ensure that func, args and kwargs do not own any 
references to obj, either directly or indirectly, since otherwise obj will 
never be garbage collected. In particular, func should not be a bound method of 
obj."

However, at the bottom of the document, in the section called "Comparing 
finalizers with __del__() methods" 
(https://docs.python.org/3.8/library/weakref.html#comparing-finalizers-with-del-methods),
 the following code is part of an example of how to use `weakref.finalize`:

class TempDir:
    def __init__(self):
        self.name = tempfile.mkdtemp()
        self._finalizer = weakref.finalize(self, shutil.rmtree, self.name)

I believe this code violates the rule that func, args, and kwargs should not 
have a reference to obj. In the example, obj is the instance of TempDir, and 
one of the arguments to finalize's callback is an attribute of the same 
instance of TempDir.

I do not know how to fix this example code. I found it while trying to figure 
out how to use `weakref.finalize`.
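For what it's worth, the example doesn't appear to actually violate the rule: `self.name` is evaluated at registration time, so finalize stores the plain string, not a reference back to self. A sketch demonstrating that the object stays collectable under that reading:

```python
import os
import shutil
import tempfile
import weakref

class TempDir:
    def __init__(self):
        self.name = tempfile.mkdtemp()
        # shutil.rmtree is a free function and self.name evaluates *now* to a
        # plain str, so neither func nor args holds a reference back to self.
        self._finalizer = weakref.finalize(self, shutil.rmtree, self.name)

d = TempDir()
path = d.name
del d  # CPython collects d immediately; the finalizer removes the directory
print(os.path.exists(path))  # False
```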

--
assignee: docs@python
components: Documentation
messages: 385155
nosy: docs@python, ryan.a.heisler
priority: normal
severity: normal
status: open
title: weakref.finalize documentation contradicts itself RE: finalizer callback 
or args referencing object
versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9




[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-09 Thread Ryan Hileman


Ryan Hileman  added the comment:

PR submitted, waiting on CLA process.

I added documentation at the field sites, but the audit event table generation 
does not handle attributes or object.__getattr__ very well at all, so I'm not 
updating the audit table for now.

The `.. audit-event:: object.__getattr__ obj,name frame-objects` sphinx 
directive right now just inserts a canned string """Raises an :ref:`auditing 
event ` object.__getattr__ with arguments obj,name.""", which would 
need additional boilerplate to describe these attributes properly. It also only 
adds a footnote style link to the audit table under __getattr__, and even moves 
object.__getattribute__ from the first [1] link position to a later number 
which IMO is more confusing than not even linking them.

I think to make attributes look good in the table we would need a special 
sphinx directive for audited object.__getattr__ attributes, for example by 
modifying the template generator to fit each attribute on its own line under  
object.__getattr__ in the table.

For now I did not use the audit-event sphinx directive and manually inserted 
strings like this near the original attribute description in the docs: 
"""Accessing ``f_code`` raises an :ref:`auditing event ` 
``object.__getattr__`` with arguments ``obj`` and ``"f_code"``."""

I think audit table improvements should be handled in a separate issue, and by 
someone more familiar with that part of the doc generator, as cleaning it up 
looks like maybe a bigger scope than the current contribution.

--

___
Python tracker 
<https://bugs.python.org/issue42800>
___



[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-09 Thread Ryan Hileman


Change by Ryan Hileman :


--
keywords: +patch
pull_requests: +23010
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24182

___
Python tracker 
<https://bugs.python.org/issue42800>
___



[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-08 Thread Ryan Hileman


Ryan Hileman  added the comment:

Oops, by tb_code I meant traceback.tb_frame.f_code. So you can get to a frame 
from traceback.tb_frame (without triggering audit) or sys._getframe (which has 
an audit hook already), and you can get to __code__ from a frame via 
frame.f_code (without triggering audit).

Here's a patch for both frame.f_code and traceback.tb_frame:
https://github.com/lunixbochs/cpython/commit/2334a00c833874b7a2427e88abc9b51315bb010c

---

Benchmarks follow this section, made using the commit I linked (and the parent 
commit without the patch for comparison). My takeaways from playing around:

1. You probably shouldn't install a Python audit hook if you care about 
performance.
2. C audit hook performance impact shows up in microbenchmarking but only have 
a small impact on real workloads (see the traceback.format_tb benchmark at the 
end).
3. Performance impact of this change when you _don't_ have an audit hook 
installed is very small.
4. This seems to mostly impact debugging and test code. A quick check of the 
stdlib shows:
- traceback.tb_frame usage seems to be entirely in debugger, traceback, and 
testing code: https://github.com/python/cpython/search?l=Python&p=3&q=tb_frame
- frame.f_code primarily has similar debug use (dis, warnings, profiling, 
inspect): https://github.com/python/cpython/search?l=Python&p=3&q=f_code

Attached (c_audit_ext.zip) is the empty C audit hook I used for the benchmarks. 
`python3 setup.py build_ext` builds a `c_audit` module which registers an empty 
audit hook on import.


 frame.f_code object.__getattr__ audit hook

# Testing frame.f_code impact (no audit hook installed):
./python.exe -m timeit -s 'frame = sys._getframe()' -- 'frame.f_code'

with patch 2334a00c833874b7a2427e88abc9b51315bb010c
2000 loops, best of 5: 19.1 nsec per loop
2000 loops, best of 5: 18.7 nsec per loop
2000 loops, best of 5: 19.1 nsec per loop

without patch 2334a00c833874b7a2427e88abc9b51315bb010c
2000 loops, best of 5: 17 nsec per loop
2000 loops, best of 5: 16.7 nsec per loop
2000 loops, best of 5: 17 nsec per loop

# Testing frame.f_code impact (C audit hook installed):
python.exe -m timeit -s 'import c_audit; frame = sys._getframe()' -- 
'frame.f_code'

with patch 2334a00c833874b7a2427e88abc9b51315bb010c
500 loops, best of 5: 66.1 nsec per loop
500 loops, best of 5: 66.1 nsec per loop
500 loops, best of 5: 66.5 nsec per loop

without patch 2334a00c833874b7a2427e88abc9b51315bb010c
2000 loops, best of 5: 16.9 nsec per loop
2000 loops, best of 5: 16.9 nsec per loop
2000 loops, best of 5: 16.8 nsec per loop

# Testing frame.f_code impact (pure Python audit hook installed):
./python.exe -m timeit -s 'frame = sys._getframe(); sys.addaudithook(lambda *a: 
None)' -- 'frame.f_code'

with patch 2334a00c833874b7a2427e88abc9b51315bb010c
50 loops, best of 5: 1.02 usec per loop
50 loops, best of 5: 1.04 usec per loop
50 loops, best of 5: 1.02 usec per loop

without patch 2334a00c833874b7a2427e88abc9b51315bb010c
2000 loops, best of 5: 16.8 nsec per loop
2000 loops, best of 5: 17.1 nsec per loop
2000 loops, best of 5: 16.8 nsec per loop


 tb.tb_frame object.__getattr__ audit hook

# Testing tb.tb_frame impact (no audit hook installed)
./python.exe -m timeit -s "$(echo -e "try: a\nexcept Exception as e: tb = 
e.__traceback__")" -- 'tb.tb_frame'

with patch 2334a00c833874b7a2427e88abc9b51315bb010c
2000 loops, best of 5: 19.2 nsec per loop
2000 loops, best of 5: 18.9 nsec per loop
2000 loops, best of 5: 18.9 nsec per loop

without patch 2334a00c833874b7a2427e88abc9b51315bb010c
2000 loops, best of 5: 17 nsec per loop
2000 loops, best of 5: 16.7 nsec per loop
2000 loops, best of 5: 16.8 nsec per loop

# Testing tb.tb_frame impact (C audit hook installed)
./python.exe -m timeit -s "$(echo -e "import c_audit\ntry: a\nexcept Exception 
as e: tb = e.__traceback__")" -- 'tb.tb_frame'

with patch 2334a00c833874b7a2427e88abc9b51315bb010c
500 loops, best of 5: 64.8 nsec per loop
500 loops, best of 5: 64.8 nsec per loop
500 loops, best of 5: 64.8 nsec per loop

without patch 2334a00c833874b7a2427e88abc9b51315bb010c
2000 loops, best of 5: 16.7 nsec per loop
2000 loops, best of 5: 16.9 nsec per loop
2000 loops, best of 5: 16.9 nsec per loop

# Testing tb.tb_frame impact (pure Python audit hook installed)
./python.exe -m timeit -s "$(echo -e "sys.addaudithook(lambda *a: None)\ntry: 
a\nexcept Exception as e: tb = e.__traceback__")" -- 'tb.tb_frame'

with patch 2334a00c833874b7a2427e88abc9b51315bb010c
50 loops, best of 5: 1.04 usec per loop

[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-08 Thread Ryan Hileman


Ryan Hileman  added the comment:

I'm definitely not proposing to hook all of object.__getattr__, as my intuition 
says that would be very slow. I simply refer to "object.__getattr__" as the 
event name used by a couple of rare event audit hooks. This is how getting 
__code__ is emitted: 
https://github.com/python/cpython/blob/7301979b23406220510dd2c7934a21b41b647119/Objects/funcobject.c#L250

However, there's not much point in the sys._getframe and func.__code__ family 
of audit hooks right now as tracebacks expose the same information (and may 
even do so accidentally). I am personally interested in these hooks for non 
sandbox reasons in a production application that cares about perf, FWIW.

I think this would be implemented by extending the traceback object's getters 
to include tb_code and tb_frame: 
https://github.com/python/cpython/blob/7301979b23406220510dd2c7934a21b41b647119/Python/traceback.c#L156-L159

I project it won't have any noticeable perf impact (especially if the audit 
hook is written in C), as most reasons to inspect a traceback object will be 
exceptional and not in critical paths.

I'd be happy to write a proposed patch if that would help.

--

___
Python tracker 
<https://bugs.python.org/issue42800>
___



[issue42800] Traceback objects allow accessing frame objects without triggering audit hooks

2021-01-08 Thread Ryan Hileman


Ryan Hileman  added the comment:

traceback's `tb_code` attribute also allows you to bypass the 
`object.__getattr__` audit event for `__code__`.

Perhaps accessing a traceback object's `tb_code` and `tb_frame` should both 
raise an `object.__getattr__` event?

--
nosy: +lunixbochs2

___
Python tracker 
<https://bugs.python.org/issue42800>
___



[issue42542] weakref documentation does not fully describe proxies

2020-12-02 Thread Ryan Govostes


New submission from Ryan Govostes :

The documentation for weakref.proxy() does not describe how the proxy object 
behaves when the object it references is gone.

The apparent behavior is that it raises a ReferenceError when an attribute of 
the proxy object is accessed.

It would probably be a good idea to describe what the proxy object does in 
general, for those who are unfamiliar with the concept: attribute accesses on 
the proxy object are forwarded to the referenced object, if it exists.
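
A short sketch of both behaviors (names are illustrative):

```python
import gc
import weakref

class Target:
    def greet(self):
        return "hello"

t = Target()
p = weakref.proxy(t)
assert p.greet() == "hello"    # attribute access is forwarded to t

del t                          # drop the last strong reference
gc.collect()

try:
    p.greet()
    raised = False
except ReferenceError:         # referent is gone, the proxy is now "dead"
    raised = True
assert raised
```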

--
assignee: docs@python
components: Documentation
messages: 382319
nosy: docs@python, rgov
priority: normal
severity: normal
status: open
title: weakref documentation does not fully describe proxies
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue42542>
___



[issue41987] singledispatchmethod raises an error when relying on a forward declaration

2020-11-12 Thread Ryan Sobol

Ryan Sobol  added the comment:

Also, I'm a bit puzzled about something from the previously mentioned Integer 
class and its use of __future__.annotations. 

Why is it possible to declare an Integer return type for the add() method, but 
only possible to declare an "Integer" forward reference for the _() method?

--

___
Python tracker 
<https://bugs.python.org/issue41987>
___



[issue41987] singledispatchmethod raises an error when relying on a forward declaration

2020-11-12 Thread Ryan Sobol


Ryan Sobol  added the comment:

Does anyone know why the treatment of unresolved references was changed in 3.9?

--

___
Python tracker 
<https://bugs.python.org/issue41987>
___



[issue42203] Unexpected behaviour NameError: name 'open' is not defined

2020-10-30 Thread john ryan


john ryan  added the comment:

Thanks. That is understandable. I reported it in case it was helpful.

--

___
Python tracker 
<https://bugs.python.org/issue42203>
___



[issue42203] Unexpected behaviour NameError: name 'open' is not defined

2020-10-30 Thread john ryan


New submission from john ryan :

My test environment runs Ubuntu 18.04 in a virtualbox hosted on Windows 8.1 and 
Python executes within a venv running Python 3.9.0 (Python 3.9.0 (default, Oct 
26 2020, 09:02:51) 
[GCC 7.5.0] on linux

Running a test with unittest.IsolatedAsyncioTestCase my code hung so I hit 
ctrl-C twice to stop it and got the below traceback.

The really odd behaviour is that open is reported as not defined.

The behaviour is repeatable in my test, but I do not know how to produce a 
simple test case. I do not know what the issue is with my code.


 ^C^CTraceback (most recent call last):
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/async_case.py", 
line 158, in run
return super().run(result)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/case.py", line 
593, in run
self._callTestMethod(testMethod)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/async_case.py", 
line 65, in _callTestMethod
self._callMaybeAsync(method)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/async_case.py", 
line 88, in _callMaybeAsync
return self._asyncioTestLoop.run_until_complete(fut)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/asyncio/base_events.py", 
line 629, in run_until_complete
self.run_forever()
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/asyncio/base_events.py", 
line 596, in run_forever
self._run_once()
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/asyncio/base_events.py", 
line 1854, in _run_once
event_list = self._selector.select(timeout)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/selectors.py", line 469, 
in select
fd_event_list = self._selector.poll(timeout, max_ev)
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/runpy.py", line 197, in 
_run_module_as_main
return _run_code(code, main_globals, None,
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/runpy.py", line 87, in 
_run_code
exec(code, run_globals)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/__main__.py", 
line 18, in 
main(module=None)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/main.py", line 
101, in __init__
self.runTests()
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/main.py", line 
271, in runTests
self.result = testRunner.run(self.test)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/runner.py", 
line 176, in run
test(result)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/suite.py", line 
84, in __call__
return self.run(*args, **kwds)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/suite.py", line 
122, in run
test(result)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/suite.py", line 
84, in __call__
return self.run(*args, **kwds)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/suite.py", line 
122, in run
test(result)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/suite.py", line 
84, in __call__
return self.run(*args, **kwds)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/suite.py", line 
122, in run
test(result)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/case.py", line 
653, in __call__
return self.run(*args, **kwds)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/async_case.py", 
line 160, in run
self._tearDownAsyncioLoop()
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/unittest/async_case.py", 
line 126, in _tearDownAsyncioLoop
loop.run_until_complete(self._asyncioCallsQueue.join())
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/asyncio/base_events.py", 
line 629, in run_until_complete
self.run_forever()
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/asyncio/base_events.py", 
line 596, in run_forever
self._run_once()
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/asyncio/base_events.py", 
line 1854, in _run_once
event_list = self._selector.select(timeout)
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/selectors.py", line 469, 
in select
fd_event_list = self._selector.poll(timeout, max_ev)
KeyboardInterrupt
Exception ignored in: >
Traceback (most recent call last):
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/asyncio/base_events.py", 
line 1771, in call_exception_handler
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/logging/__init__.py", 
line 1463, in error
  File "/home/john/.pyenv/versions/3.9.0/lib/python3.9/lo

[issue41987] singledispatchmethod raises an error when relying on a forward declaration

2020-10-29 Thread Ryan Sobol


Ryan Sobol  added the comment:

It's worth pointing out that a similar error is produced for a 
forward-referenced return type of a registered method, but only for python3.9. 
For example:

from __future__ import annotations
from functools import singledispatchmethod


class Integer:
def __init__(self, value: int):
self.value = value

def __str__(self) -> str:
return str(self.value)

@singledispatchmethod
def add(self, other: object) -> Integer:
raise NotImplementedError(f"Unsupported type {type(other)}")

@add.register
def _(self, other: int) -> "Integer":
return Integer(self.value + other)


print(Integer(2).add(40))

This code runs without error in python3.8, and I am using this technique in 
code running in a production environment.

$ python3.8 --version
Python 3.8.6
$ python3.8 integer.py
42

However, this code throws a NameError in python3.9.

$ python3.9 --version
Python 3.9.0
$ python3.9 integer.py
Traceback (most recent call last):
  File "/Users/ryansobol/Downloads/integer.py", line 5, in 
class Integer:
  File "/Users/ryansobol/Downloads/integer.py", line 17, in Integer
def _(self, other: int) -> "Integer":
  File 
"/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/functools.py",
 line 909, in register
return self.dispatcher.register(cls, func=method)
  File 
"/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/functools.py",
 line 860, in register
argname, cls = next(iter(get_type_hints(func).items()))
  File 
"/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py",
 line 1386, in get_type_hints
value = _eval_type(value, globalns, localns)
  File 
"/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py",
 line 254, in _eval_type
return t._evaluate(globalns, localns, recursive_guard)
  File 
"/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py",
 line 497, in _evaluate
self.__forward_value__ = _eval_type(
  File 
"/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py",
 line 254, in _eval_type
return t._evaluate(globalns, localns, recursive_guard)
  File 
"/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py",
 line 493, in _evaluate
eval(self.__forward_code__, globalns, localns),
  File "", line 1, in 
NameError: name 'Integer' is not defined

I know that some may see this issue as a feature request for 3.10+. However, 
for me, it is a bug preventing my code from migrating to 3.9.
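
For what it's worth, one workaround on 3.9 (a sketch, not part of the original report) is to register the implementation after the class statement, once the name `Integer` can actually be resolved by `get_type_hints()`:

```python
from functools import singledispatchmethod


class Integer:
    def __init__(self, value: int):
        self.value = value

    def __str__(self) -> str:
        return str(self.value)

    @singledispatchmethod
    def add(self, other: object) -> "Integer":
        raise NotImplementedError(f"Unsupported type {type(other)}")


# Registered outside the class body, so the "Integer" forward reference
# resolves against the module globals, where Integer now exists.
@Integer.add.register
def _(self, other: int) -> "Integer":
    return Integer(self.value + other)


assert str(Integer(2).add(40)) == "42"
```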

--
nosy: +ryansobol

___
Python tracker 
<https://bugs.python.org/issue41987>
___



[issue42154] Bad proxy returned immediately after BaseManager server restarted

2020-10-26 Thread john ryan


New submission from john ryan :

I am building an application that is made up of several separate processes, 
where each process is a python program. They are all started by the supervisord 
utility and execute within a venv running Python 3.8.5 (default, Aug 13 2020, 
15:42:06) [GCC 7.5.0] on linux, under Ubuntu 18.04.

I am using a multiprocessing BaseManager to implement a repository of queues. 
Each process asks for a queue by name then uses put/get on that queue.

The application needs to be resilient so it must be possible to restart the 
respository process and have the various client processes re-connect to the 
queues hosted by it.

The problem I am getting is that the first call to `get_queue()` after 
restarting the BaseManager server process does not return a queue.

The sequence below shows some testing by hand. (My test environment runs Ubuntu 
in a virtualbox hosted on Windows 8.1)

Here I started the server in a different terminal then started python as below 
(both pythons in the same venv).

This works as expected with the first call to get_queue returning a queue.
```
(hydra_env) john@u1804-VirtualBox:~/sw/code/hydra$ python
Python 3.8.5 (default, Aug 13 2020, 15:42:06) 
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
... 
>>> QueueManager.register('get_queue')
>>> mgr = QueueManager(address=('localhost', 5), authkey=b'abracadabra' )
>>> mgr.connect()
>>> q = mgr.get_queue('name', 'src'); print(str(q))

>>> q = mgr.get_queue('name', 'src'); print(str(q))

```

Stop and restart the server to see the problem. The first call to get_queue 
seems to succeed but in fact it has failed as shown by the print(str...). The 
second call to get_queue succeeds.
```
>>> mgr.connect()
>>> q = mgr.get_queue('name', 'src'); print(str(q))

>>> q = mgr.get_queue('name', 'src'); print(str(q))

```

The server logs show it sent queues on all 4 calls
```
^C(hydra_env) john@u1804-VirtualBox:~/sw/code/hydra$ python 
../../trials/test_mgr.py 
starting
serving 
serving 
^C(hydra_env) john@u1804-VirtualBox:~/sw/code/hydra$ python 
../../trials/test_mgr.py 
starting
serving 
serving 
```

I get the same behaviour if I re-instantiate the local manager object

```
>>> mgr = QueueManager(address=('localhost', 5), authkey=b'abracadabra' )
>>> mgr.connect()
>>> q = mgr.get_queue('name', 'src'); print(str(q))

>>> q = mgr.get_queue('name', 'src'); print(str(q))

>>>
```

I even get the same behaviour if I just call `get_queue()` after restarting the 
server (ie without explicitly reconnecting).

I would have expected the first call to `get_queue()` to return a valid queue 
since neither it nor the call to `connect()` raised any kind of error.

It seems to me that there is some kind of state held that is the underlying 
cause of the issue. I did some investigating but I was not able to work out 
what was happening.

I found that it was possible to get into a state where a valid queue was never 
returned by `get_queue()` if an error had been raised by `get_nowait()` first.

Stop the server

```
>>> q.get_nowait()
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 2, in get_nowait
  File 
"/home/john/.pyenv/versions/3.8.5/lib/python3.8/multiprocessing/managers.py", 
line 835, in _callmethod
kind, result = conn.recv()
  File 
"/home/john/.pyenv/versions/3.8.5/lib/python3.8/multiprocessing/connection.py", 
line 250, in recv
buf = self._recv_bytes()
  File 
"/home/john/.pyenv/versions/3.8.5/lib/python3.8/multiprocessing/connection.py", 
line 414, in _recv_bytes
buf = self._recv(4)
  File 
"/home/john/.pyenv/versions/3.8.5/lib/python3.8/multiprocessing/connection.py", 
line 383, in _recv
raise EOFError
EOFError
```

Restart the server but do not call `get_queue()`

```
>>> q.get_nowait()
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 2, in get_nowait
  File 
"/home/john/.pyenv/versions/3.8.5/lib/python3.8/multiprocessing/managers.py", 
line 834, in _callmethod
conn.send((self._id, methodname, args, kwds))
  File 
"/home/john/.pyenv/versions/3.8.5/lib/python3.8/multiprocessing/connection.py", 
line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
  File 
"/home/john/.pyenv/versions/3.8.5/lib/python3.8/multiprocessing/connection.py", 
line 411, in _send_bytes
self._send(header + buf)
  File 
"/home/john/.pyenv/versions/3

[issue33129] Add kwarg-only option to dataclass

2020-10-17 Thread Ryan Hiebert


Change by Ryan Hiebert :


--
nosy: +ryanhiebert

___
Python tracker 
<https://bugs.python.org/issue33129>
___



[issue41080] re.sub treats * incorrectly?

2020-06-22 Thread Ryan Westlund


Ryan Westlund  added the comment:

Sorry, I forgot the pydoc docs don't have as much information as the online
docs.

On Mon, Jun 22, 2020 at 1:54 PM Ezio Melotti  wrote:

>
> Ezio Melotti  added the comment:
>
> This behavior was changed in 3.7: "Empty matches for the pattern are
> replaced only when not adjacent to a previous empty match, so sub('x*',
> '-', 'abxd') returns '-a-b--d-'." [0]
>
> See also bpo-32308 and bpo-25054.
>
>
> [0]: https://docs.python.org/3/library/re.html#re.sub
>
> --
> resolution:  -> not a bug
> stage:  -> resolved
> status: open -> closed
> superseder:  -> Replace empty matches adjacent to a previous non-empty
> match in re.sub()
>
> ___
> Python tracker 
> <https://bugs.python.org/issue41080>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue41080>
___



[issue41080] re.sub treats * incorrectly?

2020-06-22 Thread Ryan Westlund


New submission from Ryan Westlund :

```
>>> re.sub('a*', '-', 'a')
'--'
>>> re.sub('a*', '-', 'aa')
'--'
>>> re.sub('a*', '-', 'aaa')
'--'
```

Shouldn't it be returning one dash, not two, since the greedy quantifier will 
match all the a's? I understand why substituting on 'b' returns '-a-', but 
shouldn't this constitute only one match? In Python 2.7, it behaves as I expect:

```
>>> re.sub('a*', '-', 'a')
'-'
>>> re.sub('a*', '-', 'aa')
'-'
>>> re.sub('a*', '-', 'aaa')
'-'
```

The original case that led me to this was trying to normalize a path to end in 
one slash. I used `re.sub('/*$', '/', path)`, but a nonzero number of slashes 
came out as two.
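
A workaround for that use case (a sketch, not part of the report): require the match to be non-empty, so there is no trailing empty match to be replaced a second time:

```python
import re

def normalize(path):
    # '/+$' only matches a non-empty run of trailing slashes, so the
    # adjacent empty match that doubles the replacement never occurs.
    return re.sub('/+$', '', path) + '/'

assert normalize('a') == 'a/'
assert normalize('a/') == 'a/'
assert normalize('a///') == 'a/'
```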

--
components: Regular Expressions
messages: 372104
nosy: Yujiri, ezio.melotti, mrabarnett
priority: normal
severity: normal
status: open
title: re.sub treats * incorrectly?
type: behavior
versions: Python 3.10, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue41080>
___



[issue1222585] C++ compilation support for distutils

2020-06-18 Thread Ryan Schmidt


Ryan Schmidt  added the comment:

Christian, thanks for the pointer. I think you're right, I probably am actually 
wanting this to be fixed in setuptools, not distutils, since setuptools is what 
people are using today. Since setuptools is an offshoot of distutils, I had 
assumed that the developers of setuptools would take ownership of any remaining 
distutils bugs that affected setuptools but I guess not. I looked through the 
setuptools issue tracker and this was the closest existing bug I could find: 
https://github.com/pypa/setuptools/issues/1732

--

___
Python tracker 
<https://bugs.python.org/issue1222585>
___



[issue1222585] C++ compilation support for distutils

2020-06-16 Thread Ryan Schmidt


Ryan Schmidt  added the comment:

What needs to happen to get this 15-year-old bug fixed? It prevents C++ Python 
modules from being compiled in situations where the user needs to supply 
CXXFLAGS.

--
nosy: +ryandesign

___
Python tracker 
<https://bugs.python.org/issue1222585>
___



[issue40628] sockmodule.c: sock_connect vs negative errno values...

2020-05-14 Thread Ryan C. Gordon


New submission from Ryan C. Gordon :

(Forgive any obvious mistakes in this report, I'm almost illiterate with 
Python, doubly so with Python internals.)

In trying to get buildbot-worker running on Haiku ( https://haiku-os.org/ ), it 
runs into a situation where it tries to connect a non-blocking TCP socket, 
which correctly reports EINPROGRESS, and cpython/Modules/sockmodule.c's 
internal_connect() returns this error code to sock_connect() and 
sock_connect_ex().

Both of the sock_connect* functions will return NULL if the error code is 
negative, but on Haiku, all the errno values are negative (EINPROGRESS, for 
example, is -2147454940).

I _think_ what sock_connect is intending to do here...

res = internal_connect(s, SAS2SA(&addrbuf), addrlen, 1);
if (res < 0)
return NULL;

...is say "if we had a devastating and unexpected system error, give up 
immediately." Buildbot-worker seems to confirm this by throwing this exception 
in response:

  builtins.SystemError:  
returned NULL without setting an error

internal_connect returns -1 in those devastating-and-unexpected cases--namely 
when an exception is to be raised--and does not ever use that to otherwise 
signify a legit socket error. Linux and other systems don't otherwise fall into 
this "res < 0" condition because errno values are positive on those systems.

So I believe the correct fix here, in sock_connect() and sock_connect_ex(), is 
to check "if (res == -1)" instead of "res < 0" and let all other negative error 
codes carry on.
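
The distinction can be illustrated with a small sketch (written in Python for brevity; the constant is the value quoted above, and the stand-in function mirrors internal_connect()'s return contract, not the real C code):

```python
# On Haiku, errno values are negative; EINPROGRESS is quoted above as:
HAIKU_EINPROGRESS = -2147454940

def fake_internal_connect(fail_hard):
    # Stand-in for internal_connect(): returns -1 only when an exception
    # has been raised; otherwise returns the raw (possibly negative) errno.
    return -1 if fail_hard else HAIKU_EINPROGRESS

res = fake_internal_connect(False)
assert res < 0     # the current check: misreads a legitimate errno as fatal
assert res != -1   # the proposed check: lets the errno carry on

assert fake_internal_connect(True) == -1   # the real fatal case is still caught
```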

If this seems like the correct approach, I can assemble a pull request, but I 
don't know the full ramifications of this small change, so I thought I'd report 
it here first.

--ryan.

--
components: Extension Modules
messages: 368863
nosy: icculus
priority: normal
severity: normal
status: open
title: sockmodule.c: sock_connect vs negative errno values...
type: behavior
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue40628>
___



[issue8704] cgitb sends a bogus HTTP header if the app crashes before finishing headers

2020-03-14 Thread Ryan Tu


Ryan Tu  added the comment:

# Maybe not a good solution
I do not know whether we should delete the code in cgitb.py or adjust the 
configuration of Apache httpd. My solution is deleting some code (the tag-soup 
string returned by cgitb's reset()) as follows:
```
return '''<!--: spam
Content-Type: text/html

<body bgcolor="#f0f0f8"><font color="#f0f0f8" size="-5"> -->
<body bgcolor="#f0f0f8"><font color="#f0f0f8" size="-5"> --> -->
</font> </font> </font> </script> </object> </blockquote> </pre>
</table> </table> </table> </table> </table> </font> </font> </font>'''
```
Then it works very well and the page displays correctly. Does anyone know what 
the situation is with nginx?

--
nosy: +Ryan Tu
versions: +Python 3.8 -Python 2.7, Python 3.2, Python 3.3, Python 3.5

___
Python tracker 
<https://bugs.python.org/issue8704>
___



[issue18834] Add Clang to distutils to build C/C++ extensions

2020-03-06 Thread Ryan Gonzalez


Ryan Gonzalez  added the comment:

Oh my god this was still open? I think you can just use the CC variable, not 
sure what 6-years-younger-and-more-stupid me was thinking here. Sorry about the 
noise.

--
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue18834>
___



[issue34305] inspect.getsourcefile and inspect.getcomments do not work with decorated functions

2020-03-05 Thread Ryan McCampbell


Ryan McCampbell  added the comment:

This seems like a pretty straightforward fix. What's holding it up?

--
nosy: +rmccampbell7

___
Python tracker 
<https://bugs.python.org/issue34305>
___



[issue39503] [security][CVE-2020-8492] Denial of service in urllib.request.AbstractBasicAuthHandler

2020-03-04 Thread Ryan Ware


Change by Ryan Ware :


--
nosy: +ware

___
Python tracker 
<https://bugs.python.org/issue39503>
___



[issue38576] CVE-2019-18348: CRLF injection via the host part of the url passed to urlopen()

2020-02-28 Thread Ryan Ware


Change by Ryan Ware :


--
nosy: +ware

___
Python tracker 
<https://bugs.python.org/issue38576>
___



[issue38884] __import__ is not thread-safe on Python 3

2020-02-26 Thread Ryan Petrello


Ryan Petrello  added the comment:

I believe I'm also encountering some version of this bug while importing code 
from the kombu library within a multi-threaded context.

For what it's worth, I'm able to reproduce the reported deadlock 
(https://bugs.python.org/file48737/issue38884.zip) using python3.8 on RHEL8 and 
CentOS 8 builds.

More details about what I'm encountering here: 
https://github.com/ansible/awx/issues/5617
https://github.com/ansible/awx/issues/5617#issuecomment-591618205

My intended workaround for now is to just not use a thread (we'll see if it 
helps):

https://github.com/ansible/awx/pull/6093

--
nosy: +ryan.petrello

___
Python tracker 
<https://bugs.python.org/issue38884>
___



[issue39587] Mixin repr overrides Enum repr in some cases

2020-02-08 Thread Ryan McCampbell


New submission from Ryan McCampbell :

In Python 3.6 the following works:

class HexInt(int):
    def __repr__(self):
        return hex(self)

class MyEnum(HexInt, enum.Enum):
    A = 1
    B = 2
    C = 3

>>> MyEnum.A
<MyEnum.A: 0x1>

However in Python 3.7/8 it instead prints
>>> MyEnum.A
0x1

It uses HexInt's repr instead of Enum's. Looking at the enum.py module it seems 
that this occurs for mixin classes that don't define __new__ due to a change in 
the _get_mixins_ method. If I define a __new__ method on the HexInt class then 
the expected behavior occurs.
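The workaround mentioned above, sketched out (on 3.7/3.8 the explicit __new__ makes _get_mixins_ treat HexInt as the data type, restoring Enum's repr; enum internals have changed since, so newer versions may format members differently):

```python
import enum

class HexInt(int):
    # Explicit __new__ so _get_mixins_ identifies HexInt as the member
    # data type and Enum's own __repr__ is installed on the enum class.
    def __new__(cls, value):
        return super().__new__(cls, value)

    def __repr__(self):
        return hex(self)

class MyEnum(HexInt, enum.Enum):
    A = 1

r = repr(MyEnum.A)
print(r)  # the value is still rendered via HexInt, e.g. ...0x1...
```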

--
components: Library (Lib)
messages: 361635
nosy: rmccampbell7
priority: normal
severity: normal
status: open
title: Mixin repr overrides Enum repr in some cases
type: behavior
versions: Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue39587>
___



[issue20126] sched doesn't handle events added after scheduler starts

2020-01-19 Thread Ryan Govostes


Ryan Govostes  added the comment:

This absolutely should be documented. If adding an earlier event is not 
supported, then it should raise an exception. Appearing to enqueue the event 
but never triggering the callback is especially confusing. The behavior may be 
obvious to someone who has spent time developing this module or working with 
it, but not to someone who just wants to, e.g., build an alarm clock or 
calendar app.

--
nosy: +rgov

___
Python tracker 
<https://bugs.python.org/issue20126>
___



[issue39324] Add mimetype for extension .md (markdown)

2020-01-13 Thread Ryan Batchelder


Change by Ryan Batchelder :


--
keywords: +patch
pull_requests: +17398
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/17995

___
Python tracker 
<https://bugs.python.org/issue39324>
___



[issue39324] Add mimetype for extension .md (markdown)

2020-01-13 Thread Ryan Batchelder


New submission from Ryan Batchelder :

I would like to propose that markdown files ending in .md be mapped to the 
mimetype text/markdown in the mimetypes library. The type is registered here: 
https://www.iana.org/assignments/media-types/text/markdown
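Until the mapping ships in the stdlib, callers can register it themselves through the existing public mimetypes API:

```python
import mimetypes

# Register the IANA-registered type for .md ourselves until the
# stdlib mapping lands.
mimetypes.add_type('text/markdown', '.md')

guessed = mimetypes.guess_type('README.md')[0]
print(guessed)  # text/markdown
```

Note that add_type only affects the current process, which is why a stdlib default is still worthwhile.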

--
messages: 359931
nosy: Ryan Batchelder
priority: normal
severity: normal
status: open
title: Add mimetype for extension .md (markdown)
type: enhancement
versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39324>
___



[issue39292] syslog constants behind rfc

2020-01-11 Thread Ryan


Ryan  added the comment:

Thank you, this looks good. I'm pinned to 3.6, so while it won't work for me 
currently, maybe it will in a few years.
For clarity, and because I can't edit my original message: the RFC is 5424 (I 
had mistakenly said 5454, but you got it right).

--

___
Python tracker 
<https://bugs.python.org/issue39292>
___



[issue39292] syslog constants behind rfc

2020-01-10 Thread Ryan


New submission from Ryan :

When using the SysLogHandler 
(https://docs.python.org/3/library/logging.handlers.html#logging.handlers.SysLogHandler),
the supported facilities appear to be lagging the RFC (5454 ?), or at least 
what is supported in other mainstream languages. I specifically need 
LOG_AUDIT and LOG_NTP, but there are a couple of others. The syslog "openlog" 
function takes an int, but I'm not sure how to get an int through the Python 
SysLogHandler, because it's based on a static list of names and symbolic values.
Wikipedia (https://en.wikipedia.org/wiki/Syslog#Facility) suggests LOG_AUDIT 
and LOG_NTP are in the RFC.
This is my first ticket here, so hopefully this is the right place for it. 
Maybe there is a workaround, or some re-education is needed on my part...
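One possible workaround, sketched below: SysLogHandler's `facility` parameter accepts a plain integer, so facility codes that lack named constants can be passed numerically. The numeric values here follow the RFC 5424 facility table and are an assumption to verify against your syslog daemon's configuration:

```python
import logging.handlers

# RFC 5424 facility codes that SysLogHandler does not name (assumed
# values per the RFC table; verify against your syslog daemon).
LOG_NTP = 12    # NTP subsystem
LOG_AUDIT = 13  # log audit

handler = logging.handlers.SysLogHandler(
    address=('localhost', 514), facility=LOG_NTP)

# The syslog priority byte is (facility << 3) | severity:
pri = handler.encodePriority(LOG_NTP,
                             logging.handlers.SysLogHandler.LOG_INFO)
print(pri)  # 102
handler.close()
```

Since encodePriority just shifts and ORs integers, any facility number the daemon understands should pass through unchanged.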

--
components: Library (Lib)
messages: 359746
nosy: tryanunderw...@gmail.com
priority: normal
severity: normal
status: open
title: syslog constants behind rfc
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39292>
___



[issue39252] email.contentmanager.raw_data_manager bytes handler breaks on 7bit cte

2020-01-07 Thread Ryan McCampbell


New submission from Ryan McCampbell :

The email.contentmanager.set_bytes_content function, which handles bytes 
content for raw_data_manager, fails when passed cte="7bit" with an 
AttributeError: 'bytes' object has no attribute 'encode'. This is probably not 
a major use case, since bytes are generally not used for 7-bit data, but the 
failure is clearly not intentional.
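A minimal reproduction sketch (the except branch is what affected versions such as 3.8 hit; on fixed versions the set_content call should succeed):

```python
from email.message import EmailMessage

msg = EmailMessage()
try:
    # raw_data_manager routes bytes through set_bytes_content(); on
    # affected versions, asking for cte='7bit' raises
    # AttributeError: 'bytes' object has no attribute 'encode'.
    msg.set_content(b'plain ascii payload', maintype='application',
                    subtype='octet-stream', cte='7bit')
    outcome = 'no error; cte=%s' % msg['Content-Transfer-Encoding']
except AttributeError as exc:
    outcome = 'bug reproduced: %s' % exc
print(outcome)
```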

--
components: Library (Lib)
messages: 359555
nosy: rmccampbell7
priority: normal
severity: normal
status: open
title: email.contentmanager.raw_data_manager bytes handler breaks on 7bit cte
type: behavior
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue39252>
___



[issue38989] pip install selects 32 bit wheels for 64 bit python if vcvarsall.bat amd64_x86 in environment

2019-12-06 Thread Ryan Thornton


New submission from Ryan Thornton :

## Expected Behavior

pip install should download dependencies matching the architecture of the 
python executable being used.

## Actual Behavior

When calling pip install from a Visual Studio command prompt configured to 
cross compile from x64 to x86, pip installs wheels matching the architecture of 
Visual Studio's cross compile target (i.e. `VSCMD_ARG_TGT_ARCH=x86`) and not 
the architecture of python itself (x64).

This results in a broken installation of core libraries.

## Steps to Reproduce

System Details:
Windows 10 x64
Python 3.8 x64
Visual Studio 2017 15.9.14

Environment Details:
vcvarsall.bat amd64_x86

1. "C:\Program Files\Python38\python.exe" -mvenv "test"
2. cd test\Scripts
3. pip install cffi==1.13.2

Results in the following:

> Collecting cffi
>  Using cached 
> https://files.pythonhosted.org/packages/f8/26/5da5cafef77586e4f7a136b8a24bc81fd2cf1ecb71b6ec3998ffe78ea2cf/cffi-1.13.2-cp38-cp38-win32.whl

## Context

I think the regression was introduced here:
62dfd7d6fe11bfa0cd1d7376382c8e7b1275e38c

https://github.com/python/cpython/commit/62dfd7d6fe11bfa0cd1d7376382c8e7b1275e38c
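A quick diagnostic probe for the mismatch (a sketch; on the affected prompt, `vs_target` reads `x86` while the interpreter itself is 64-bit — which value wins in the platform computation depends on the commit referenced above):

```python
import os
import platform
import sysconfig

# Compare the interpreter's own bitness with the Visual Studio
# cross-compile target variable and the platform sysconfig reports.
report = {
    'interpreter': platform.architecture()[0],
    'vs_target': os.environ.get('VSCMD_ARG_TGT_ARCH'),
    'platform': sysconfig.get_platform(),
}
for key, value in report.items():
    print(key, '=', value)
```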

--
components: Distutils
messages: 357936
nosy: Ryan Thornton, dstufft, eric.araujo
priority: normal
severity: normal
status: open
title: pip install selects 32 bit wheels for 64 bit python if vcvarsall.bat 
amd64_x86 in environment
type: behavior
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue38989>
___



[issue38595] io.BufferedRWPair doc warning may need clarification

2019-10-25 Thread Ryan Govostes


Ryan Govostes  added the comment:

The origin of this warning involves interleaving read and write operations, and 
was added here: https://bugs.python.org/issue12213

I'm not sure if it applies to sockets, pipes, etc. though.

The pySerial documentation advises using io.BufferedRWPair(x, x) where x is a 
serial device. But this StackOverflow post reports that it is problematic, in 
line with the warning. (The author appears to be talking about Windows.)

https://stackoverflow.com/questions/24498048/python-io-modules-textiowrapper-or-buffererwpair-functions-are-not-playing-nice

--
nosy: +benjamin.peterson, pitrou, stutzbach

___
Python tracker 
<https://bugs.python.org/issue38595>
___



[issue38595] io.BufferedRWPair doc warning may need clarification

2019-10-25 Thread Ryan Govostes


New submission from Ryan Govostes :

The documentation for the io.BufferedRWPair class gives this warning:

> BufferedRWPair does not attempt to synchronize accesses to its underlying raw 
> streams. You should not pass it the same object as reader and writer; use 
> BufferedRandom instead.

I have a hard time understanding what this warning is trying to tell me.

1. What does it mean to "synchronize accesses"?

2. Why can't I pass the same object as reader and writer? The docstring in 
_pyio.py says, "This is typically used with a socket or two-way pipe."

3. How does BufferedRandom, which adds the seek() and tell() interfaces, 
address the issue of "synchroniz[ing] accesses"? Is synchronization automatic? 
What does this do for sockets or pipes which cannot seek?
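For context, a minimal sketch of the two classes the warning contrasts (my reading of the docs, not an authoritative answer to the questions above):

```python
import io

# Two distinct raw streams (e.g. the two halves of a serial link):
# BufferedRWPair keeps an independent buffer for each side.
reader = io.BytesIO(b'incoming data')
writer = io.BytesIO()
pair = io.BufferedRWPair(reader, writer)
pair.write(b'outgoing')
pair.flush()
received = pair.read(8)   # b'incoming'
sent = writer.getvalue()  # b'outgoing'

# One seekable stream used for both directions: BufferedRandom keeps a
# single buffer, so a write is visible to a later read at the same
# offset -- seemingly the "synchronized access" BufferedRWPair does
# not attempt.
buf = io.BufferedRandom(io.BytesIO(b'hello world'))
buf.write(b'HELLO')
buf.seek(0)
roundtrip = buf.read(5)   # b'HELLO'
print(received, sent, roundtrip)
```

This still leaves open what the recommendation means for sockets and pipes, which cannot seek and so cannot use BufferedRandom.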

--
assignee: docs@python
components: Documentation
messages: 355404
nosy: docs@python, rgov
priority: normal
severity: normal
status: open
title: io.BufferedRWPair doc warning may need clarification
type: behavior
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue38595>
___



[issue33725] Python crashes on macOS after fork with no exec

2019-10-16 Thread Ryan May


Change by Ryan May :


--
nosy: +Ryan May

___
Python tracker 
<https://bugs.python.org/issue33725>
___



[issue38217] argparse should support multiple types when nargs > 1

2019-09-18 Thread Ryan Govostes


New submission from Ryan Govostes :

argparse supports consuming multiple command-line arguments with nargs=2, etc. 
It converts them to the type given in the argument's type parameter.

argparse does not provide a good solution when the input arguments should be 
of different data types. For example, you cannot have an argument that expects 
a str followed by an int, '--set-age Bob 34'.

Ordinarily, the suggestion would be to split it into two arguments, like 
'--set-person Bob --set-age 34'.

However, this becomes awkward with an action such as 'append', where the 
command line arguments become tedious, like '--set-person Bob --set-age 34 
--set-person Alice --set-age 29', or confusing, as in '--set-person Bob 
--set-person Alice --set-age 34 --set-age 29'.

My proposal is to allow the 'type' parameter to accept a tuple of types:

p.add_argument('--set-age', nargs=2, type=(str, int))

Since 'nargs' is redundant, this could even be simplified to just:

p.add_argument('--set-age', type=(str, int))

The resulting parsed argument would then be a tuple of (str, int), as opposed 
to a list. If action='append', the result would be a list of such tuples.

This creates no backwards compatibility issue because tuple instances are not 
callable, so this was never valid code that did something else.

A further enhancement could be that when nargs='+' or '*' and a tuple of types 
is provided, the types are used round-robin: '--set-ages Bob 34 Alice 29'. An 
exception would be raised if it would create an incomplete tuple.

See here for other discussion and workarounds: 
https://stackoverflow.com/questions/16959101/python-argparse-how-to-have-nargs-2-with-type-str-and-type-int
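In the meantime, a custom Action can approximate the proposed behavior (a sketch; `typed_tuple` is a hypothetical helper name, not part of argparse):

```python
import argparse

def typed_tuple(*types):
    """Build an argparse Action that converts each of the nargs slots
    with the corresponding converter from `types`."""
    class TypedTupleAction(argparse.Action):
        def __call__(self, parser, namespace, values, option_string=None):
            if len(values) != len(types):
                parser.error('%s expects %d values'
                             % (option_string, len(types)))
            converted = tuple(t(v) for t, v in zip(types, values))
            # action='append'-style accumulation into a list of tuples
            items = list(getattr(namespace, self.dest) or [])
            items.append(converted)
            setattr(namespace, self.dest, items)
    return TypedTupleAction

p = argparse.ArgumentParser()
p.add_argument('--set-age', nargs=2, action=typed_tuple(str, int),
               metavar=('NAME', 'AGE'))
ns = p.parse_args(['--set-age', 'Bob', '34', '--set-age', 'Alice', '29'])
print(ns.set_age)  # [('Bob', 34), ('Alice', 29)]
```

This covers the append case, but a proper `type=(str, int)` would also get usage strings and error messages right for free.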

--
components: Library (Lib)
messages: 352741
nosy: rgov
priority: normal
severity: normal
status: open
title: argparse should support multiple types when nargs > 1
type: enhancement
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue38217>
___



[issue30491] Add a lightweight mechanism for detecting un-awaited coroutine objects

2019-09-08 Thread Ryan Hiebert


Change by Ryan Hiebert :


--
nosy: +ryanhiebert

___
Python tracker 
<https://bugs.python.org/issue30491>
___



[issue31387] asyncio should make it easy to enable cooperative SIGINT handling

2019-09-07 Thread Ryan Hiebert


Change by Ryan Hiebert :


--
nosy: +ryanhiebert

___
Python tracker 
<https://bugs.python.org/issue31387>
___


