hi

2023-03-04 Thread Tom
   Hello, I am French and I do not understand how I can get access to
   Python. Thank you for replying to me.

   BEST REGARDS, Lilian

    

   Sent from [1]Mail for Windows

    

References

   Visible links
   1. https://go.microsoft.com/fwlink/?LinkId=550986
-- 
https://mail.python.org/mailman/listinfo/python-list


pls donate acces

2023-03-01 Thread Tom
   I need to make a script to impress my friends

    

   Sent from [1]Mail for Windows

    

References

   Visible links
   1. https://go.microsoft.com/fwlink/?LinkId=550986


[issue9334] argparse does not accept options taking arguments beginning with dash (regression from optparse)

2021-12-26 Thread Tom Karzes


Tom Karzes  added the comment:

If it's going to be closed, it should at least be acknowledged that it *is* a 
fundamental design flaw, stemming from the misguided goal of trying (and 
necessarily failing) to allow options to be freely intermixed with positional 
arguments, which of course can't be done without dropping support for 
unrestricted string arguments.  This means that argparse does not, and 
apparently never will, support string argument values that begin with hyphens.  
Yes, I know that the equal sign can be used to handle *some* cases, but who 
wants to use equal signs to specify command-line options?  And there is no 
direct workaround for options that specify an nargs value of 2 or more.

Also, fixing this is *not* hard at all.  All that needs to be done is to add a 
keyword argument to the parser that tells it not to try to look ahead to find 
options, but to instead scan for them sequentially.  Just like any properly 
designed option parser does.  It's *easier* than trying to look ahead.

I have my own quick-and-dirty hack that approximates this, but it's ugly and 
disgusting, it only handles some cases, and I hate using it.  All the hack does 
is replace -- with a different prefix that I'm willing to avoid, then uses a 
different option prefix character so that the remaining strings starting with a 
single - are not seen as options.  This handles a single leading - in an option 
value.  It's not a general solution at all, but it's just barely good enough to 
solve my problem cases.
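The prefix-substitution hack described above can be sketched in a few lines (the `+` prefix and the `--opt`/`++opt` names are illustrative, not the author's actual code):

```python
import argparse

# With the default '-' prefix, argparse's look-ahead mistakes the
# values '-x' and '-y' for option names and rejects them.
parser = argparse.ArgumentParser()
parser.add_argument('--opt', nargs=2)
try:
    parser.parse_args(['--opt', '-x', '-y'])
except SystemExit:
    print("rejected: '-x' and '-y' were taken for options")

# Switching the option prefix to '+' means strings starting with a
# single '-' are never seen as options, so they pass through as values.
parser = argparse.ArgumentParser(prefix_chars='+')
parser.add_argument('++opt', nargs=2)
args = parser.parse_args(['++opt', '-x', '-y'])
print(args.opt)  # ['-x', '-y']
```

As the comment says, this trades away the conventional `-`/`--` syntax entirely, so it is a workaround rather than a fix.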

Yes, argparse has been successful, but the reason for that success is that it 
wasn't made for proficient users.  Rather, it was designed to cater to people 
who aren't very capable, and are perfectly happy to live with restricted string 
values if it means they can shuffle their command-line arguments willy-nilly 
and still have it recognize them, rather than stopping when it should and 
giving the appropriate error.  It trades precision for ease of use.  This 
creates a vacuum for highly capable users who don't want to give up precise 
argument processing for the sake of a feature that's of no use to them in the 
first place.

Don't get me wrong.  There are a lot of nice features that argparse added which 
I find very useful.  The problem is it also sacrifices core functionality, 
making its predecessor, optparse, preferable for some applications.  In those 
cases, there is no transition path from optparse to argparse, since argparse 
does not handle all of the cases that optparse does.  And since argparse does 
not subsume the functionality of optparse, I find the decision to deprecate 
optparse highly questionable.

With optparse deprecated, Python no longer supports POSIX-compliant 
command-line option processing.  How is that a sound decision?

--

___
Python tracker 
<https://bugs.python.org/issue9334>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45933] Illegal Instruction (Core Dumped)

2021-12-02 Thread Tom E


Tom E  added the comment:

Well, I updated the kernel to 5.15.0 and used my successful 3.9.9 build, and 
now it works.

--
resolution:  -> fixed
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue45933>
___



[issue45933] Illegal Instruction (Core Dumped)

2021-12-01 Thread Tom E


Tom E  added the comment:

It's definitely fine, because I've tried it without the CFLAGS and got the same 
error.

--

___
Python tracker 
<https://bugs.python.org/issue45933>
___



[issue45933] Illegal Instruction (Core Dumped)

2021-11-30 Thread Tom E


Tom E  added the comment:

Not yet... my configure flags are ./configure CFLAGS="-O3 -mtune=corei7-avx 
-march=corei7-avx" LDFLAGS="-L/usr/local/ssl/lib64" CPP=cpp CXX=g++ 
--with-openssl=/usr/local/ssl --enable-optimizations --enable-shared

--

___
Python tracker 
<https://bugs.python.org/issue45933>
___



[issue45933] Illegal Instruction (Core Dumped)

2021-11-29 Thread Tom E


New submission from Tom E :

When compiling CPython 3.10 on Ubuntu 22.04 with GCC 11.2.0, it compiles 
successfully, but trying to run it just gives Illegal Instruction (Core 
Dumped). But when I build 3.9.9 it's just fine... The CPU is an Intel Core 
i5-10400.

--
messages: 407330
nosy: guacaplushy
priority: normal
severity: normal
status: open
title: Illegal Instruction (Core Dumped)
type: crash
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue45933>
___



[issue28140] Give better errors for OS commands, like 'pip', in REPL, script

2021-11-28 Thread Tom Viner


Tom Viner  added the comment:

I've updated my pull request from 3 years ago. Fixed merge conflicts and 
addressed all comments. 

https://github.com/python/cpython/pull/8536

--

___
Python tracker 
<https://bugs.python.org/issue28140>
___



[issue45466] Simple curl/wget-like download functionality in urllib (like http offers server)

2021-11-11 Thread Tom Pohl


Change by Tom Pohl :


--
nosy:  -tom.pohl

___
Python tracker 
<https://bugs.python.org/issue45466>
___



[issue45466] Simple curl/wget-like download functionality in urllib (like http offers server)

2021-10-25 Thread Tom Pohl


Tom Pohl  added the comment:

Thanks, Terry, for the hint.

The idea got some support on python-ideas, so I thought it was worthwhile to 
open a PR. As a first-time contributor, I now have to wait for approval for the 
pipeline to run...

--

___
Python tracker 
<https://bugs.python.org/issue45466>
___



[issue9334] argparse does not accept options taking arguments beginning with dash (regression from optparse)

2021-10-18 Thread Tom Karzes


Tom Karzes  added the comment:

Is there *still* no fix for this?  I keep running into this bug.  People 
sometimes say "oh, it's no problem, just use = to associate the option value 
with the option name".  That is so sad.  It's basically saying "it can't be 
made to work the way it should, so instead use = to introduce your option 
values."  I should *never* have to use = to introduce an option value.

And besides, using = doesn't even handle all cases.  For example, suppose I 
have an option that takes two string arguments, i.e. type=str and nargs=2.  Now 
I want to specify "-x" and "-y" as the two string arguments, like this:

--opt -x -y

As far as I can tell, argparse simply cannot handle this, and there's no 
workaround.  Using = doesn't solve this case.
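Both points are easy to reproduce (the option names here are hypothetical):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--name')           # takes one value
parser.add_argument('--opt', nargs=2)   # takes two values

# '=' does attach a single leading-dash value to its option:
print(parser.parse_args(['--name=-v']).name)  # -v

# but there is no '=' spelling that feeds two dash-values to --opt:
try:
    parser.parse_args(['--opt=-x', '-y'])
except SystemExit:
    print('nargs=2 with dash values: still rejected')
```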

One more time:  All I want to do is disable the undesirable option look-ahead.  
It is utterly and completely useless to me.  I want sequential, unambiguous 
option parsing.  You know, the way the entire rest of the world does it.  All 
that's needed is something that tells argparse to disable its look-ahead 
heuristic and to simply do what it's told.  Scan left-to-right.  If the next 
string is a recognized option name, then treat it as an option and take its 
arguments from the strings that follow, regardless of what they look like.  
Rinse and repeat.  That is how correct option parsing is done.

All this look-ahead heuristic does is cater to confused beginners, at the cost 
of breaking it for experienced users who know exactly what they want and are 
frustrated that argparse won't let them specify it.

By the way, is there any supported, competing alternative to argparse?  It 
seems like argparse is never going to support option values that begin with 
hyphens, so at this point I'm looking for an alternative that I don't have to 
fight every time I want to allow option values that begin with hyphens.  Maybe 
it's time to create a new option parsing package that supports the most useful 
argparse features, but doesn't mistake option values for option names.  You 
know, something more like optparse, but with some added features.  It just 
needs to support strict left-to-right option parsing.

At this point, I'm thinking it may be time to bite the bullet and write my own 
option parsing package.  One that actually works, and can't be deprecated.  But 
it seems like such a waste of time.  It's hard to fathom why Python no longer 
provides a working option parser.

--

___
Python tracker 
<https://bugs.python.org/issue9334>
___



[issue45466] Simple curl/wget-like download functionality in urllib (like http offers server)

2021-10-14 Thread Tom Pohl


New submission from Tom Pohl :

In the context of building Docker images, it is often necessary to download 
files. If curl/wget are available, great, but slim images often don't include 
them.

urllib could provide a very simple download facility (just as http.server 
offers a simple server):

from urllib.request import urlopen
data = urlopen('https://.../install-poetry.py').read()
# print or save data

If there's some interest, I could open a PR.
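A rough sketch of the kind of helper being proposed (the name `download` and its signature are mine, not a settled API):

```python
from pathlib import Path
from urllib.request import urlopen

def download(url: str, dest: str) -> Path:
    """Fetch url and write the bytes to dest (illustrative helper)."""
    with urlopen(url) as resp:
        data = resp.read()
    out = Path(dest)
    out.write_bytes(data)
    return out
```

Invoked as e.g. `download('https://example.com/install-poetry.py', 'install-poetry.py')` (hypothetical URL).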

--
components: Library (Lib)
messages: 403888
nosy: tom.pohl
priority: normal
severity: normal
status: open
title: Simple curl/wget-like download functionality in urllib (like http offers 
server)
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue45466>
___



[issue20779] Add pathlib.chown method

2021-06-28 Thread Tom Cook


Tom Cook  added the comment:

+1 this.

I have a program that opens a UNIX socket as root for other processes to 
communicate with it.  I need to set the permissions to 0o775 and set the owner 
gid to a specific group so that members of that group can communicate with the 
process.  It's annoying to have to drop back to `os.chown(str(path), ...)` for 
this.
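The workaround being described might look like this (a Unix-only sketch; the helper name is mine, and note that `os.chown` actually accepts path-like objects, so the `str()` call isn't strictly required):

```python
import grp
import os
from pathlib import Path

def make_group_accessible(path: Path, group: str) -> None:
    # pathlib has no .chown(), so drop down to os.chown
    gid = grp.getgrnam(group).gr_gid
    os.chown(path, -1, gid)   # -1 leaves the owner uid unchanged
    path.chmod(0o775)
```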

--
nosy: +Tom Cook

___
Python tracker 
<https://bugs.python.org/issue20779>
___



[issue44342] enum with inherited type won't pickle

2021-06-07 Thread Tom Brown


New submission from Tom Brown :

The following script runs without error in 3.8.5 and raises an error in 3.8.6, 
3.9.5 and 3.10.0b1. 

Source:
```
import enum, pickle

class MyInt(int):
    pass
    # work-around: __reduce_ex__ = int.__reduce_ex__

class MyEnum(MyInt, enum.Enum):
    A = 1

pickle.dumps(MyEnum.A)
```

Error (same in 3.8.6, 3.9.5 and 3.10.0b1):
```
Traceback (most recent call last):
  File "/home/thecap/projects/covid-data-model/./enum-pickle.py", line 12, in <module>
    pickle.dumps(MyEnum.A)
  File "/home/thecap/.pyenv/versions/3.10.0b1/lib/python3.10/enum.py", line 83, in _break_on_call_reduce
    raise TypeError('%r cannot be pickled' % self)
```

Like https://bugs.python.org/issue41889 this seems to be related to the fix for 
https://bugs.python.org/issue39587 which changes member_type from int to MyInt. 
A work-around is in the comment above.

--
components: Library (Lib)
messages: 395300
nosy: Tom.Brown
priority: normal
severity: normal
status: open
title: enum with inherited type won't pickle
versions: Python 3.10, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44342>
___



[issue44075] Add a PEP578 audit hook for Asyncio loop stalls

2021-05-08 Thread Tom Forbes


Tom Forbes  added the comment:

Actually reacting to a stall would require something more, and that should 
probably be done at some point.

But this is purely about monitoring - in our use case we'd send a metric via 
statsd that would be used to correlate stalls against other service level 
metrics. This seems pretty critical when running a large number of asyncio 
applications in production because you can only currently _infer_ that a stall 
is happening, and it's hard to trace the cause across service boundaries. An 
event hook that was sent the loop and handle would be ideal for this.

--

___
Python tracker 
<https://bugs.python.org/issue44075>
___



[issue44075] Add a PEP578 audit hook for Asyncio loop stalls

2021-05-08 Thread Tom Forbes


Change by Tom Forbes :


--
keywords: +patch
pull_requests: +24642
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/25990

___
Python tracker 
<https://bugs.python.org/issue44075>
___



[issue44075] Add a PEP578 audit hook for Asyncio loop stalls

2021-05-08 Thread Tom Forbes


Tom Forbes  added the comment:

I don't see why we shouldn't use PEP 578 for this - the events provide rich 
monitoring information about what a Python process is "doing" with an easy, 
central way to register callbacks to receive these events and shovel them off 
to a monitoring solution.

Is there that much of a difference between monitoring the number of files, 
sockets, emails or even web browsers opened and the number of times an asyncio 
application has stalled?

The alternative would be to make the loop stalling some kind of hookable event, 
which just seems like reinventing `sys.audit()`.

--

___
Python tracker 
<https://bugs.python.org/issue44075>
___



[issue44075] Add a PEP578 audit hook for Asyncio loop stalls

2021-05-08 Thread Tom Forbes


New submission from Tom Forbes :

Detecting and monitoring loop stalls in a production asyncio application is 
more difficult than it could be.

Firstly you must enable debug mode for the entire loop, then you need to look 
for warnings emitted via the asyncio logger. This makes it hard to send loop 
stalls to monitoring systems via something like statsd.

Ideally asyncio callbacks would always be timed and an audit event always 
triggered if the duration passes a particular threshold. If debug mode is 
enabled, a warning is logged as before.
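A sketch of how such a hook might be consumed, assuming a hypothetical `asyncio.loop_stall` event name (no such event exists today; the proposal is precisely to add one):

```python
import sys

stalls = []

def on_audit(event, args):
    # PEP 578 hooks receive every audit event, so filter by name;
    # 'asyncio.loop_stall' is a hypothetical event for this sketch
    if event == 'asyncio.loop_stall':
        duration, = args
        stalls.append(duration)   # e.g. forward to statsd here

sys.addaudithook(on_audit)

# Simulate the event the proposal would have asyncio raise:
sys.audit('asyncio.loop_stall', 0.25)
print(stalls)  # [0.25]
```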

--
components: asyncio
messages: 393251
nosy: asvetlov, orf, yselivanov
priority: normal
severity: normal
status: open
title: Add a PEP578 audit hook for Asyncio loop stalls
type: enhancement
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue44075>
___



[issue43750] Undefined constant PACKET_MULTIHOST referred to in package socket

2021-04-06 Thread Tom Cook


New submission from Tom Cook :

The documentation for the `AF_PACKET` address family refers to 
`PACKET_MULTIHOST`.  I believe this should read `PACKET_MULTICAST`, which is 
defined on Linux systems (`PACKET_MULTIHOST` is not).

--
assignee: docs@python
components: Documentation
messages: 390345
nosy: Tom Cook, docs@python
priority: normal
severity: normal
status: open
title: Undefined constant PACKET_MULTIHOST referred to in package socket
type: enhancement
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43750>
___



[issue29672] `catch_warnings` context manager causes all warnings to be printed every time, even after exiting

2021-04-02 Thread Tom Aldcroft


Tom Aldcroft  added the comment:

I encountered this issue today and want to +1 getting some attention on this.

The disconnected nature of this issue makes it especially difficult to 
understand -- any package in the stack can change this hidden global variable 
`_filters_version` in the warnings module that then impacts the local behavior 
of warnings in the user script.

The only way I was able to finally understand that an update to an obscure 
dependency was breaking our regression testing was by reading the `warnings` 
source code and then monkey patching it to print diagnostic information.

Even a documentation update would be useful. This could explain not only 
`catch_warnings()`, but in general the unexpected feature that if any package 
anywhere in the stack sets a warning filter, then that globally resets whether 
a warning has been seen before (via the call to `_filters_mutated()`).
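The global reset being described can be demonstrated in a few lines: any call that mutates the filter list makes an already-seen warning fire again.

```python
import warnings

def emit():
    warnings.warn('deja vu', UserWarning)

with warnings.catch_warnings(record=True) as rec:
    warnings.simplefilter('default')    # 'default' = show once per location
    emit()
    emit()                              # suppressed: already seen
    assert len(rec) == 1
    warnings.filterwarnings('default')  # mutates filters -> resets "seen" state
    emit()                              # fires again
    assert len(rec) == 2
```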

--
nosy: +taldcroft
versions:  -Python 3.5, Python 3.6, Python 3.7

___
Python tracker 
<https://bugs.python.org/issue29672>
___



[issue43661] api-ms-win-core-path-l1-1.0.dll, redux of 40740 (which has since been closed)

2021-03-29 Thread Tom Kacvinsky


New submission from Tom Kacvinsky :

Even though bpo#40740 has been closed, I wanted to re-raise the issue as this 
affects me.  There are only two functions that come from this missing DLL:

PathCchCombineEx
PathCchCanonicalizeEx

Would there be a way of rewriting join/canonicalize in getpathp.c (each of 
which uses one of the above functions) to _not_ rely on functions from a 
missing DLL on Windows 7 SP1?  Or has the ship truly sailed on this matter?

--
components: C API
messages: 389727
nosy: tkacvinsky
priority: normal
severity: normal
status: open
title: api-ms-win-core-path-l1-1.0.dll, redux of 40740 (which has since been 
closed)
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43661>
___



[issue43292] xml.ElementTree iterparse filehandle left open

2021-02-23 Thread Tom Dougherty


Tom Dougherty  added the comment:

"erase all files"

--
nosy: +dtom9424
type: crash -> security

___
Python tracker 
<https://bugs.python.org/issue43292>
___



[issue42616] C Extensions on Darwin that link against libpython are likely to crash

2021-02-03 Thread Tom Birch


Change by Tom Birch :


--
components: +C API, Extension Modules

___
Python tracker 
<https://bugs.python.org/issue42616>
___



[issue42616] C Extensions on Darwin that link against libpython are likely to crash

2021-02-03 Thread Tom Birch


Tom Birch  added the comment:

Steve Dower: this issue is independent of distutils, reopening

--
components:  -Distutils
status: closed -> open

___
Python tracker 
<https://bugs.python.org/issue42616>
___



[issue42514] Relocatable framework for macOS

2021-01-21 Thread Tom Goddard


Change by Tom Goddard :


--
nosy: +tomgoddard

___
Python tracker 
<https://bugs.python.org/issue42514>
___



[issue36656] Add race-free os.link and os.symlink wrapper / helper

2020-12-29 Thread Tom Hale


Tom Hale  added the comment:

Related issue found in testing:

bpo-42778 Add follow_symlinks=True parameter to both os.path.samefile() and 
Path.samefile()

--

___
Python tracker 
<https://bugs.python.org/issue36656>
___



[issue42778] Add follow_symlinks=True parameter to both os.path.samefile() and Path.samefile()

2020-12-28 Thread Tom Hale


Tom Hale  added the comment:

In summary:

The underlying os.stat() takes a follow_symlinks=True parameter but it can't be 
set to False when trying to samefile() two symbolic links.

--
title: Add follow_symlinks=True to {os.path,Path}.samefile -> Add 
follow_symlinks=True parameter to both os.path.samefile() and Path.samefile()

___
Python tracker 
<https://bugs.python.org/issue42778>
___



[issue42778] Add follow_symlinks=True to {os.path,Path}.samefile

2020-12-28 Thread Tom Hale


New submission from Tom Hale :

The os.path and Path implementations of samefile() do not allow comparisons of 
symbolic links:

% mkdir empty && chdir empty
% ln -s non-existent broken
% ln broken lnbroken
% ls -i # Show inode numbers
19325632 broken@  19325632 lnbroken@
% # Yup, they are the same file... but...
% python -c 'import os; print(os.path.samefile("lnbroken", "broken", follow_symlinks=False))'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
TypeError: samefile() got an unexpected keyword argument 'follow_symlinks'
% python -c 'import os; print(os.path.samefile("lnbroken", "broken"))'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3.8/genericpath.py", line 100, in samefile
    s1 = os.stat(f1)
FileNotFoundError: [Errno 2] No such file or directory: 'lnbroken'
%

Both samefile()s use os.stat under the hood, but neither allow setting  
os.stat()'s `follow_symlinks` parameter.

https://docs.python.org/3/library/os.html#os.stat

https://docs.python.org/3/library/os.path.html#os.path.samefile

https://docs.python.org/3/library/pathlib.html#pathlib.Path.samefile
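What the requested behavior might look like as a wrapper (a sketch only, not the tracker's proposed implementation):

```python
import os

def samefile(f1, f2, *, follow_symlinks=True):
    # os.stat() already takes follow_symlinks; samefile() just doesn't expose it
    s1 = os.stat(f1, follow_symlinks=follow_symlinks)
    s2 = os.stat(f2, follow_symlinks=follow_symlinks)
    return os.path.samestat(s1, s2)
```

With `follow_symlinks=False`, the broken-symlink comparison in the transcript above would compare the links themselves instead of raising FileNotFoundError.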

--
components: Library (Lib)
messages: 383965
nosy: Tom Hale
priority: normal
severity: normal
status: open
title: Add follow_symlinks=True to {os.path,Path}.samefile
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue42778>
___



[issue42616] C Extensions on Darwin that link against libpython are likely to crash

2020-12-11 Thread Tom Birch


Tom Birch  added the comment:

> Does this affect unix-style builds with --enable-shared or framework builds?

I believe the answer is no, since in both those cases the `python` executable 
doesn't contain definitions for any of the libpython symbols. In my testing I 
was using a python binary from anaconda, where the `python` executable defines 
all the symbols found in libpython.

--

___
Python tracker 
<https://bugs.python.org/issue42616>
___



[issue42616] C Extensions on Darwin that link against libpython are likely to crash

2020-12-10 Thread Tom Birch


New submission from Tom Birch :

After https://github.com/python/cpython/pull/12946, there exists an issue on 
macOS due to the two-level namespace for symbol resolution. If a C extension 
links against libpython.dylib, all of its Python C API symbol dependencies will 
be bound to libpython. When the C extension is loaded into a process launched 
by running the `python` binary, there will be two active copies of the Python 
runtime. This can lead to crashes if objects from one runtime are used by the 
other.

https://developer.apple.com/library/archive/documentation/Porting/Conceptual/PortingUnix/compiling/compiling.html#//apple_ref/doc/uid/TP40002850-BCIHJBBF

See issue/test case here: https://github.com/PDAL/python/pull/76

--
components: C API, macOS
messages: 382841
nosy: froody, ned.deily, ronaldoussoren
priority: normal
severity: normal
status: open
title: C Extensions on Darwin that link against libpython are likely to crash
type: crash
versions: Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue42616>
___



[issue32830] tkinter documentation suggests "from tkinter import *", contradicting PEP8

2020-12-01 Thread Tom Middleton


Tom Middleton  added the comment:

While I agree that it shouldn't be imposed on existing code, changing the 
documentation isn't changing previous code; it is guiding future code. I think 
the documentation should have a caveat. I'm seeing a lot of new code using 
tkinter with a glob import, which clearly comes from the documentation, since 
the documentation suggests doing so. I think the glob import practice should be 
discouraged overall. Adding a disclaimer in the documentation at the reference 
to `from tkinter import *` would be sufficient, and a reference to PEP 8 would 
be good. I'd imagine that most people look up the docs and are not as familiar 
with the PEPs. Why can't something like that be added to the documentation?

--
nosy: +busfault

___
Python tracker 
<https://bugs.python.org/issue32830>
___



[issue42526] Exceptions in asyncio.Server callbacks are not retrievable

2020-12-01 Thread Tom


Tom  added the comment:

How do you suggest one might test code in a Server callback with asyncio?

Of course, I don't want any old exception to affect another client connection,
only an exception which is uncaught up to the handler coro. And I'm not
suggesting that it happen by default, only that it be possible.

With this, the behaviour would perfectly align with the asyncio.gather
functionality, and its 'return_exceptions' kwarg.

--

___
Python tracker 
<https://bugs.python.org/issue42526>
___



[issue42526] Exceptions in asyncio.Server callbacks are not retrievable

2020-12-01 Thread Tom


New submission from Tom :

Consider this program:

import asyncio

async def handler(r, w):
    raise RuntimeError

async def main():
    server = await asyncio.start_server(handler, host='localhost', port=1234)
    r, w = await asyncio.open_connection(host='localhost', port=1234)
    await server.serve_forever()
    server.close()

asyncio.run(main())

The RuntimeError is not retrievable via the serve_forever coroutine. To my
knowledge, there is no feature of the asyncio API which causes the server to
stop on an exception and retrieve it. I have also tried wrapping serve_forever
in a Task, and waiting on the coro with FIRST_EXCEPTION.

This severely complicates testing asyncio servers, since failing tests hang
forever if the failure occurs in a callback.

It should be possible to configure the server to end if a callback fails, e.g.
by a 'stop_on_error' kwarg to start_server (defaulting to False for
compatibility).

I know this isn't a technical problem, since AnyIO, which uses asyncio, does
this by default. This equivalent program ends after the exception:

import anyio

async def handler(client):
    raise RuntimeError

async def main():
    async with anyio.create_task_group() as tg:
        listener = await anyio.create_tcp_listener(local_host='localhost', local_port=1234)
        await tg.spawn(listener.serve, handler)
        async with await anyio.connect_tcp('localhost', 1234) as client:
            pass

anyio.run(main)

--
components: asyncio
messages: 382265
nosy: asvetlov, tmewett, yselivanov
priority: normal
severity: normal
status: open
title: Exceptions in asyncio.Server callbacks are not retrievable
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue42526>
___



[issue42396] Add a whatsnew entry about async contextlib.nullcontext

2020-11-17 Thread Tom Gringauz


Change by Tom Gringauz :


--
keywords: +patch
pull_requests: +22250
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/23357

___
Python tracker 
<https://bugs.python.org/issue42396>
___



[issue42396] Add a whatsnew entry about async contextlib.nullcontext

2020-11-17 Thread Tom Gringauz


Change by Tom Gringauz :


--
nosy: tomgrin10
priority: normal
severity: normal
status: open
title: Add a whatsnew entry about async contextlib.nullcontext

___
Python tracker 
<https://bugs.python.org/issue42396>
___



[issue42395] aclosing was not added to __all__ in contextlib

2020-11-17 Thread Tom Gringauz


Change by Tom Gringauz :


--
keywords: +patch
pull_requests: +22249
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/23356

___
Python tracker 
<https://bugs.python.org/issue42395>
___



[issue42395] aclosing was not added to __all__ in contextlib

2020-11-17 Thread Tom Gringauz


New submission from Tom Gringauz :

Related to this PR https://github.com/python/cpython/pull/21545

--
components: Library (Lib)
messages: 381296
nosy: tomgrin10
priority: normal
severity: normal
status: open
title: aclosing was not added to __all__ in contextlib

___
Python tracker 
<https://bugs.python.org/issue42395>
___



[issue36011] ssl - tls verify on Windows fails

2020-11-15 Thread Tom Kent


Tom Kent  added the comment:

Christian's message indicated that a workaround was possible by adding 
Mozilla's certs to the Windows cert store.

I'm sure there are sysadmins who will really hate this idea, but I've 
successfully implemented it in a Windows Docker image and wanted to document it 
here.

PowerShell commands (requires OpenSSL to be installed on the system):
```
cd $env:USERPROFILE;
Invoke-WebRequest https://curl.haxx.se/ca/cacert.pem -OutFile $env:USERPROFILE\cacert.pem;
$plaintext_pw = 'PASSWORD';
$secure_pw = ConvertTo-SecureString $plaintext_pw -AsPlainText -Force;
& 'C:\Program Files\OpenSSL-Win64\bin\openssl.exe' pkcs12 -export -nokeys -out certs.pfx -in cacert.pem -passout pass:$plaintext_pw;
Import-PfxCertificate -Password $secure_pw -CertStoreLocation Cert:\LocalMachine\Root -FilePath certs.pfx;
```

Once Mozilla's store is imported into the Microsoft trusted root store, Python 
has everything it needs to access files directly.
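On the Python side, per-process trust can also be configured without touching the system store at all, by pointing an `SSLContext` at a PEM bundle. A minimal sketch (the commented-out bundle path is a placeholder, not a real location):

```python
import ssl

# Client context that verifies server certificates; by default it consults
# the platform trust store (on Windows, the certs imported above).
ctx = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH)

# To trust an explicit PEM bundle instead of the system store, load it here.
# The path is a placeholder for wherever cacert.pem was downloaded.
# ctx.load_verify_locations(cafile=r"C:\path\to\cacert.pem")

print("verify_mode:", ctx.verify_mode.name)  # chain verification stays on
```

This avoids modifying machine-wide state, at the cost of every process having to opt in.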

--
nosy: +teeks99

___
Python tracker 
<https://bugs.python.org/issue36011>
___



[issue42252] Embeddable Python indicates that it uses PYTHONPATH

2020-11-03 Thread Tom Kent


Tom Kent  added the comment:

A couple things...

>> One possible use-case is to package it along with another program to use the 
>> interpreter.

> This is the primary use case. If you're doing something else with it, you're 
> probably misusing it :)

Interesting, I'd been expecting this was commonly used as the way to give 
access to python3X.dll. We actually do (or are trying to do) both from our 
installation.



I've been mostly focusing on `PYTHONPATH` because that's where I encountered 
the issue. Which if any of the other env variables are respected? 



Would there be an argument to add additional command line options that could be 
used as a more secure alternative to the env variables? A command line argument 
`-e` that is the opposite of `-E` and enables the usage of PYTHON* env? Maybe 
this doesn't make sense since you said it is the ._pth that causes this...just 
thinking aloud.

The two options you mention (modify ._pth and append to sys.path) aren't great 
because we 1) would prefer to use the un-modified python distro 2) don't own 
the scripts that we are embedding, they are from a 3rd party so modifications 
are complicated.

--

___
Python tracker 
<https://bugs.python.org/issue42252>
___



[issue42252] Embeddable Python indicates that it uses PYTHONPATH

2020-11-03 Thread Tom Kent


Tom Kent  added the comment:

I'm not sure I agree with that. One possible use-case is to package it along 
with another program to use the interpreter. In this case they could use the 
other program's native language features (e.g. .Net's Process.Start(), Win32 
API's CreateProcess(), Even Python's subprocess but why?, etc) to run 
`python.exe myscript.py`. 

In this case, the user may assume that adding something to the `PYTHONPATH` env 
variable, as most of the launching methods allow, would take hold. When this 
fails, the first attempt at debugging would be to try it interactively with the 
same command, then promptly look at python --help when that fails. 

Maybe a better question is why should the embeddable distribution's python.exe 
ignore env variables? Wouldn't it make more sense to depend on the user to add 
a `-E` if that is what they desire?

--

___
Python tracker 
<https://bugs.python.org/issue42252>
___



[issue42252] Embeddable Python indicates that it uses PYTHONPATH

2020-11-03 Thread Tom Kent

New submission from Tom Kent :

According to the documentation 
https://docs.python.org/3/using/windows.html#windows-embeddable

> When extracted, the embedded distribution is (almost) fully isolated 
> from the user’s system, including environment variables, system registry 
> settings, and installed packages

The embedded distribution should ignore the environment variables. 

This is echoed in this prior issue that thought `PYTHONPATH` not being 
respected was a bug:
https://bugs.python.org/issue28245

Regardless of the decision to respect environment variables, the message that 
is displayed when running the distribution's `python --help` needs to indicate 
how it will act. 

Currently, for the embedded distribution, which doesn't respect the env 
variables, there is a section in the output from running `python -help` that 
indicates:

```
Other environment variables:
PYTHONSTARTUP: file executed on interactive startup (no default)
PYTHONPATH   : ';'-separated list of directories prefixed to the
   default module search path.  The result is sys.path.
PYTHONHOME   : alternate <prefix> directory (or <prefix>;<exec_prefix>).
   The default module search path uses <prefix>\python{major}{minor}.
PYTHONPLATLIBDIR : override sys.platlibdir.
PYTHONCASEOK : ignore case in 'import' statements (Windows).
PYTHONUTF8: if set to 1, enable the UTF-8 mode.
PYTHONIOENCODING: Encoding[:errors] used for stdin/stdout/stderr.
PYTHONFAULTHANDLER: dump the Python traceback on fatal errors.
PYTHONHASHSEED: if this variable is set to 'random', a random value is used
   to seed the hashes of str and bytes objects.  It can also be set to an
   integer in the range [0,4294967295] to get hash values with a
   predictable seed.
PYTHONMALLOC: set the Python memory allocators and/or install debug hooks
   on Python memory allocators. Use PYTHONMALLOC=debug to install debug
   hooks.
PYTHONCOERCECLOCALE: if this variable is set to 0, it disables the locale
   coercion behavior. Use PYTHONCOERCECLOCALE=warn to request display of
   locale coercion and locale compatibility warnings on stderr.
PYTHONBREAKPOINT: if this variable is set to 0, it disables the default
   debugger. It can be set to the callable of your debugger of choice.
PYTHONDEVMODE: enable the development mode.
PYTHONPYCACHEPREFIX: root directory for bytecode cache (pyc) files.
```

This may lead users (it did lead this one) to assume that they are doing 
something wrong when, for example, the output of `sys.path` doesn't include 
items in `os.environ["PYTHONPATH"]`. 

Realizing that it may be difficult to achieve, the help output should match the 
state of what the interpreter will actually do if run.
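For what it's worth, a script can detect the actual behavior at runtime instead of trusting the help text. A sketch, assuming the ._pth file's isolation shows up in `sys.flags` the same way `-E` does:

```python
import sys

# Nonzero when PYTHON* environment variables are being ignored, e.g. under
# `python -E` (and, assumed here, under the embeddable distro's ._pth file).
ignoring = bool(sys.flags.ignore_environment)
print("PYTHON* environment variables ignored:", ignoring)
```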

--
components: Windows
messages: 380274
nosy: paul.moore, steve.dower, teeks99, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Embeddable Python indicates that it uses PYTHONPATH
versions: Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue42252>
___



Trying to Download PygameZero

2020-10-10 Thread Tom Hedge via Python-list
I am in an 8th grade coding class at the moment and my teacher asked me to 
download a script called pgzero. I cannot seem to download pgzero or pygame; 
when I try, it shows me an error message:  ERROR: Command errored out with exit 
status 1:     command: 'c:\program files\python39\python.exe' -c 'import sys, 
setuptools, tokenize; sys.argv[0] = 
'"'"'C:\\Users\\thedg\\AppData\\Local\\Temp\\pip-install-g9sb_mr0\\pygame\\setup.py'"'"';
 
__file__='"'"'C:\\Users\\thedg\\AppData\\Local\\Temp\\pip-install-g9sb_mr0\\pygame\\setup.py'"'"';f=getattr(tokenize,
 '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', 
'"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info 
--egg-base 'C:\Users\thedg\AppData\Local\Temp\pip-pip-egg-info-rre7x6r3'        
 cwd: C:\Users\thedg\AppData\Local\Temp\pip-install-g9sb_mr0\pygame\    
Complete output (17 lines):

    WARNING, No "Setup" File Exists, Running "buildconfig/config.py"
    Using WINDOWS configuration...

    Download prebuilts to "prebuilt_downloads" and copy to "./prebuilt-x64"? [Y/n]
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\thedg\AppData\Local\Temp\pip-install-g9sb_mr0\pygame\setup.py", line 194, in <module>
        buildconfig.config.main(AUTO_CONFIG)
      File "C:\Users\thedg\AppData\Local\Temp\pip-install-g9sb_mr0\pygame\buildconfig\config.py", line 210, in main
        deps = CFG.main(**kwds)
      File "C:\Users\thedg\AppData\Local\Temp\pip-install-g9sb_mr0\pygame\buildconfig\config_win.py", line 576, in main
        and download_win_prebuilt.ask(**download_kwargs):
      File "C:\Users\thedg\AppData\Local\Temp\pip-install-g9sb_mr0\pygame\buildconfig\download_win_prebuilt.py", line 302, in ask
        reply = raw_input(
    EOFError: EOF when reading a line
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Please let me know ASAP how to fix this. I've spent countless hours trying to 
fix this; I've tried changing my Path because that's what everyone told me. I 
have also tried some other means to fix this and none have been successful.
Please help me,

-- 
https://mail.python.org/mailman/listinfo/python-list


[issue41912] Long generator chain causes segmentation fault

2020-10-02 Thread Tom Karzes


Tom Karzes  added the comment:

Thanks Tim and Terry.  Stackless Python sounds interesting.  It's nice to know 
that others had the same idea I did, although I tend to shy away from exotic 
variants since they tend to be less well-supported.  Any chance that CPython 
will go stackless at some point in the future?

By the way, I saw that I can increase the process stack limit from within a 
Python app:

resource.setrlimit(resource.RLIMIT_STACK, (resource.RLIM_INFINITY,
   resource.RLIM_INFINITY))

I tried it, and it works (on my Linux system), but of course is unavailable on 
Windows systems.

--

___
Python tracker 
<https://bugs.python.org/issue41912>
___



[issue41912] Long generator chain causes segmentation fault

2020-10-02 Thread Tom Karzes


Tom Karzes  added the comment:

I tested this some more, and one thing became clear that I hadn't realized 
before:  This bug has nothing to do specifically with generators (as I had 
thought), but is in fact due purely to the recursion limit.

I created a recursive test program that doesn't use generators at all, and ran 
into the exact same problem (at only a slightly higher recursion level).  I am 
attaching that test case as recurse_bug4.py.  Here's a sample failing 
invocation:

% ./recurse_bug4.py 2
Segmentation fault (core dumped)
% 

On my system, the cutoff point for this one seems to be around 17600, roughly 
1100 higher than for the generator example.

What surprises me about this is that the Python implementation uses recursion to 
implement recursion in Python apps.  I would have thought that it would use 
heap memory for something like this, and as a result only require a fixed 
amount of stack.  That would clearly have major advantages, since it would 
decouple recursion limits in Python code from stack limits on the platform.

On the other hand, now that I understand that this is in fact the result of a 
system limit, I was *very* easily able to work around the problem!  I simply 
disabled the stack limit.  From csh:

% unlimit stacksize
%

Then:

% ./gen_bug3.py 10
10
%

and:

% ./recurse_bug4.py 10
10
%

So you were right, this was due solely to the default stack limit on my system, 
and not a bug in the Python implementation.  And it was also very easy to 
remove that limit.  Hopefully something similar can be done on Windows (which I 
know very little about).

Thank you for your help!
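Both workarounds can also be applied from inside the program. A sketch, assuming a Unix-like system for the `resource` part (Windows has no equivalent):

```python
import sys

try:
    import resource  # Unix-only: raise the soft stack limit to the hard limit
    _, hard = resource.getrlimit(resource.RLIMIT_STACK)
    resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))
except (ImportError, ValueError, OSError):
    pass  # Windows, or an environment that refuses the change

sys.setrecursionlimit(5000)  # modest bump; huge values can still exhaust the stack

def depth(n):
    # Recursive probe: returns how many frames it descended.
    return 0 if n == 0 else 1 + depth(n - 1)

print(depth(3000))  # deeper than the default limit of 1000
```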

--
Added file: https://bugs.python.org/file49487/recurse_bug4.py

___
Python tracker 
<https://bugs.python.org/issue41912>
___



[issue41912] Long generator chain causes segmentation fault

2020-10-02 Thread Tom Karzes


Tom Karzes  added the comment:

That is a good point, except I don't believe the value needed to expose this 
bug is a "too-high limit" (as the documentation calls it).  I set it to 100100 
for convenience, but in practice even a value of 17000 is more than enough to 
expose the bug on my system (it occurs at around 16500).  For my friend using 
Windows, a value as low as 4000 suffices, which I don't think anyone would 
argue is unreasonably high.

The default value of 1000 is extremely low, and is intended to catch recursion 
bugs in programs that are not expected to recurse very deeply.  For a program 
that genuinely needs recursion, I don't think a value of 2, or even 10, 
is unreasonable given today's typical memory sizes (and when I run my failing 
case, the memory usage is so low as to be inconsequential).  By my 
interpretation, these limits should be well within the range that Python can 
handle.

It seems likely to me that in this case, the problem isn't due to any kind of 
system limit, but is rather the result of a logical error in the implementation 
which is somehow exposed by this test.  Hopefully a developer will take 
advantage of this test case to fix what I believe is a serious bug.

--

___
Python tracker 
<https://bugs.python.org/issue41912>
___



[issue41912] Long generator chain causes segmentation fault

2020-10-02 Thread Tom Karzes


New submission from Tom Karzes :

If I create a sufficiently long chain of generators, I encounter a segmentation 
fault.  For example, the following works as expected:

% ./gen_bug3.py 1
1
%

But for sufficiently larger chain lengths, it seg faults:

% ./gen_bug3.py 2
Segmentation fault (core dumped)
%

and:

% ./gen_bug3.py 10
Segmentation fault (core dumped)
%

The exact point where it seg faults seems to vary slightly between different 
invocations of Python, but the range is very narrow for a given Python 
installation.  I believe the difference is due to slight variations in used 
memory upon startup.

I can't see any reason why this should happen, and in any case, if there is 
some limit that I'm exceeding, it should raise an exception rather than core 
dump.

I'm using:

3.6.9 (default, Jul 17 2020, 12:50:27)
[GCC 8.4.0]

on a 64-bit Ubuntu Linux system.

Additional info:  A friend of mine is running 3.7.9 on a Windows system.  In 
his case, the symptom is that the program produces no output for a sufficiently 
long generator chain (presumably it's silently crashing).  Additionally, he 
encounters the problem with much shorter generator chains than I do.  I suspect 
it's the same underlying problem.

--
components: Interpreter Core
files: gen_bug3.py
messages: 377826
nosy: karzes
priority: normal
severity: normal
status: open
title: Long generator chain causes segmentation fault
type: crash
versions: Python 3.6
Added file: https://bugs.python.org/file49486/gen_bug3.py

___
Python tracker 
<https://bugs.python.org/issue41912>
___



[issue41628] All unittest.mock autospec-generated methods are coroutine functions

2020-08-24 Thread Tom Most


Tom Most  added the comment:

Note that this can interact poorly with AsyncMock, introduced in Python 3.8, 
because a mock generated from a mock produces an object with async 
methods rather than regular ones. In Python 3.7, chaining mocks like this would 
produce regular methods. (This is how I discovered this issue.)

--

___
Python tracker 
<https://bugs.python.org/issue41628>
___



[issue41628] All unittest.mock autospec-generated methods are coroutine functions

2020-08-24 Thread Tom Most


New submission from Tom Most :

Given a class:

class Foo:
def bar(self):
pass

And an autospec'd mock of it:

foo_mock = mock.create_autospec(spec=Foo)

The result of `asyncio.iscoroutinefunction()` differs:

asyncio.iscoroutinefunction(Foo.bar)# -> False
asyncio.iscoroutinefunction(foo_mock.bar)   # -> True

This behavior is the same on Python 3.7 and 3.8.

I've attached a demonstration script, repro4.py. Here is the output on Python 
3.7.9:

$ python3.7 repro4.py 
baz is a coroutine function?False
Foo.bar is a coroutine function?False
foo_instance.bar is a coroutine function?   False
baz_mock is a coroutine function?   False
foo_mock.bar is a coroutine function?   True
foo_mock_instance.bar is a coroutine function?  True
foo_mock_mock.bar is a coroutine function?  False
foo_mock_mock_instance.bar is a coroutine function? False
foo_mock_instance.bar()  ->  
foo_mock_mock_instance.bar() ->  

And on Python 3.8.2:

python3.8 repro4.py 
baz is a coroutine function?False
Foo.bar is a coroutine function?False
foo_instance.bar is a coroutine function?   False
baz_mock is a coroutine function?   False
foo_mock.bar is a coroutine function?   True
foo_mock_instance.bar is a coroutine function?  True
foo_mock_mock.bar is a coroutine function?  True
foo_mock_mock_instance.bar is a coroutine function? False
foo_mock_instance.bar()  ->  
foo_mock_mock_instance.bar() ->  
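A trimmed, runnable version of the core check (only the un-mocked result is asserted below, since the mocked result is exactly what varies across versions):

```python
import asyncio
from unittest import mock

class Foo:
    def bar(self):
        pass

foo_mock = mock.create_autospec(spec=Foo)

# The real method is plainly synchronous...
print(asyncio.iscoroutinefunction(Foo.bar))      # False
# ...but the autospec'd attribute may report True on affected versions.
print(asyncio.iscoroutinefunction(foo_mock.bar))
```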

--
components: Library (Lib)
files: repro4.py
messages: 375862
nosy: twm
priority: normal
severity: normal
status: open
title: All unittest.mock autospec-generated methods are coroutine functions
type: behavior
versions: Python 3.7, Python 3.8
Added file: https://bugs.python.org/file49425/repro4.py

___
Python tracker 
<https://bugs.python.org/issue41628>
___



[issue33727] Server.wait_closed() doesn't always wait for its transports to finish

2020-08-14 Thread Tom


Tom  added the comment:

I ran into this while working on an asyncio application using
asyncio.start_server.

From the documentation, I expected the combination of `close` and `wait_closed`
to wait until all connection handlers have finished. Instead, handlers remain
running with open connections as background tasks. I wanted to be able to
"gracefully" close the server, with all processing done, so I could inspect some
results for a test case.

Could there be a method for this? One suggestion would be:

*   Clarify the current behaviour of `close` and `wait_closed`
(https://bugs.python.org/issue34852)
*   Add new coro `wait_finished` which waits until all handler tasks are done

I'm afraid I'm not familiar with low-level asyncio APIs like transports and
protocols, so I don't know how/if this fits in with those.
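A sketch of the suggested behavior built on the high-level streams API alone (the task-tracking set is the proposal above, not an existing asyncio feature):

```python
import asyncio

handler_tasks = set()  # track running handlers ourselves

async def handle(reader, writer):
    # Register this handler so a graceful shutdown can wait for it.
    task = asyncio.current_task()
    handler_tasks.add(task)
    try:
        data = await reader.read(100)  # echo one chunk
        writer.write(data)
        await writer.drain()
        writer.close()
        await writer.wait_closed()
    finally:
        handler_tasks.discard(task)

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    await writer.drain()
    echoed = await reader.read(100)
    writer.close()
    await writer.wait_closed()

    server.close()
    await server.wait_closed()                  # stops accepting; handlers may live on
    await asyncio.gather(*list(handler_tasks))  # wait for them explicitly
    return echoed

print(asyncio.run(main()))
```

Here the final `gather` plays the role of the proposed `wait_finished` coroutine.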

--
nosy: +tmewett

___
Python tracker 
<https://bugs.python.org/issue33727>
___



[issue41543] contextlib.nullcontext doesn't work with async context managers

2020-08-13 Thread Tom Gringauz


Change by Tom Gringauz :


--
keywords: +patch
pull_requests: +20995
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21870

___
Python tracker 
<https://bugs.python.org/issue41543>
___



[issue41543] contextlib.nullcontext doesn't work with async context managers

2020-08-13 Thread Tom Gringauz


New submission from Tom Gringauz :

`contextlib.nullcontext` cannot be used with async context managers, because 
it implements only `__enter__` and `__exit__`, and doesn't implement 
`__aenter__` and `__aexit__`.
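A minimal sketch of a null context manager that supports both protocols (the class name here is made up for illustration; it mirrors the behavior the patch adds to `nullcontext` itself):

```python
import asyncio

class NullcontextSketch:
    """No-op context manager usable with both `with` and `async with`."""

    def __init__(self, enter_result=None):
        self.enter_result = enter_result

    def __enter__(self):
        return self.enter_result

    def __exit__(self, *exc):
        return False  # don't swallow exceptions

    async def __aenter__(self):
        return self.enter_result

    async def __aexit__(self, *exc):
        return False

async def demo():
    async with NullcontextSketch(42) as value:
        return value

print(asyncio.run(demo()))  # 42
```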

--
components: Library (Lib), asyncio
messages: 375346
nosy: asvetlov, tomgrin10, yselivanov
priority: normal
severity: normal
status: open
title: contextlib.nullcontext doesn't work with async context managers
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue41543>
___



[issue41134] distutils.dir_util.copy_tree FileExistsError when updating symlinks

2020-07-16 Thread Tom Hale


Change by Tom Hale :


--
keywords: +patch
pull_requests: +20651
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/14464

___
Python tracker 
<https://bugs.python.org/issue41134>
___



[issue41280] lru_cache on 0-arity functions should default to maxsize=None

2020-07-11 Thread Tom Forbes


New submission from Tom Forbes :

`functools.lru_cache` has a maxsize=128 default for all functions.

If a function has no arguments then this maxsize default is redundant and 
should be set to `maxsize=None`:

```
@functools.lru_cache()
def function_with_no_args():
pass
```

Currently you need to add `maxsize=None` manually, and ensure that it is also 
updated if you alter the function to add arguments.
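For illustration, the behavior being asked for can already be opted into by hand (a sketch):

```python
import functools

calls = 0

@functools.lru_cache(maxsize=None)  # unbounded: no LRU bookkeeping needed
def answer():
    global calls
    calls += 1
    return 42

assert answer() == 42
assert answer() == 42
assert calls == 1  # computed once, served from the cache afterwards
print(answer.cache_info())
```

The proposal is for the decorator to pick `maxsize=None` automatically when the wrapped function takes no arguments.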

--
components: Library (Lib)
messages: 373542
nosy: Tom Forbes
priority: normal
severity: normal
status: open
title: lru_cache on 0-arity functions should default to maxsize=None
type: performance
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue41280>
___



[issue36656] Add race-free os.link and os.symlink wrapper / helper

2020-06-26 Thread Tom Hale


Tom Hale  added the comment:

Related:

bpo-41134 distutils.dir_util.copy_tree FileExistsError when updating symlinks


WIP update:

I am just about to ask for feedback on my proposed solution at 
core-mentors...@python.org

--
title: Please add race-free os.link and os.symlink wrapper / helper -> Add 
race-free os.link and os.symlink wrapper / helper

___
Python tracker 
<https://bugs.python.org/issue36656>
___



[issue41134] distutils.dir_util.copy_tree FileExistsError when updating symlinks

2020-06-26 Thread Tom Hale


New submission from Tom Hale :

Here is a minimal test case:

==
#!/bin/bash

cd /tmp || exit 1

dir=test-copy_tree
src=$dir/src
dst=$dir/dst

mkdir -p "$src"
touch "$src"/file
ln -s file "$src/symlink"

python -c "from distutils.dir_util import copy_tree;
copy_tree('$src', '$dst', preserve_symlinks=1, update=1);
copy_tree('$src', '$dst', preserve_symlinks=1, update=1);"

rm -r "$dir"

==

Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "/usr/lib/python3.8/distutils/dir_util.py", line 152, in copy_tree
os.symlink(link_dest, dst_name)
FileExistsError: [Errno 17] File exists: 'file' -> 'test-copy_tree/dst/symlink'

==


Related:
=

This issue will likely be resolved via:

bpo-36656 Add race-free os.link and os.symlink wrapper / helper

https://bugs.python.org/issue36656
(WIP under discussion at python-mentor)


Prior art:
===
https://stackoverflow.com/questions/53090360/python-distutils-copy-tree-fails-to-update-if-there-are-symlinks
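Until then, a common workaround is an overwrite-capable symlink helper. A sketch for POSIX systems (`force_symlink` is a hypothetical name; the temporary-name dance narrows, but does not eliminate, the race that bpo-36656 is about):

```python
import os
import tempfile

def force_symlink(target, link_name):
    """Create `link_name` -> `target`, replacing any existing file or link."""
    link_dir = os.path.dirname(link_name) or "."
    # Make the new link under a temporary name, then atomically rename it
    # over the destination (os.replace is atomic on POSIX). Note: the temp
    # name itself could in principle be reused between remove and symlink.
    fd, tmp = tempfile.mkstemp(dir=link_dir)
    os.close(fd)
    os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, link_name)

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "file"), "w").close()
    link = os.path.join(d, "symlink")
    force_symlink("file", link)
    force_symlink("file", link)  # second call no longer raises FileExistsError
    print(os.readlink(link))  # file
```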

--
messages: 372449
nosy: Tom Hale
priority: normal
severity: normal
status: open
title: distutils.dir_util.copy_tree FileExistsError when updating symlinks

___
Python tracker 
<https://bugs.python.org/issue41134>
___



[issue39758] StreamWriter.wait_closed() can hang indefinitely.

2020-02-26 Thread Tom Christie


New submission from Tom Christie :

Raising an issue that's impacting us on `httpx`.

It appears that in some cases SSL unwrapping can cause `.wait_closed()` to hang 
indefinitely.

Trio are particularly careful to work around this case, and have an extensive 
comment on it: 
https://github.com/python-trio/trio/blob/31e2ae866ad549f1927d45ce073d4f0ea9f12419/trio/_ssl.py#L779-L829
 

Originally raised via https://github.com/encode/httpx/issues/634

Tested on:

* Python 3.7.6
* Python 3.8.1

```
import asyncio
import ssl
import certifi

hostname = 'login.microsoftonline.com'
context = ssl.create_default_context()
context.load_verify_locations(cafile=certifi.where())

async def main():
reader, writer = await asyncio.open_connection(hostname, 443, ssl=context)
print('opened')
writer.close()
print('close started')
await writer.wait_closed()
print('close completed')

asyncio.run(main())
```
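Until the underlying hang is fixed, callers can at least bound the wait with `asyncio.wait_for`. A general mitigation sketch (not httpx's actual code; a never-completing future stands in for the hung `wait_closed()`):

```python
import asyncio

async def close_with_timeout(awaitable, timeout=5.0):
    # Bound a potentially-hanging close; give up after `timeout` seconds.
    try:
        await asyncio.wait_for(awaitable, timeout)
        return True
    except asyncio.TimeoutError:
        return False

async def main():
    never = asyncio.get_running_loop().create_future()  # stands in for a hung wait_closed()
    return await close_with_timeout(never, timeout=0.05)

print(asyncio.run(main()))  # False: the wait was abandoned after the timeout
```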

--
components: asyncio
messages: 362688
nosy: asvetlov, tomchristie, yselivanov
priority: normal
severity: normal
status: open
title: StreamWriter.wait_closed() can hang indefinitely.
type: behavior
versions: Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue39758>
___



[issue39671] Mention in docs that asyncio.FIRST_COMPLETED does not guarantee the completion of no more than one task

2020-02-18 Thread Tom Pohl


New submission from Tom Pohl :

Currently, the documentation of asyncio.wait gives the impression that using 
FIRST_COMPLETED guarantees the completion of no more than one task. In reality, 
the number of completed tasks after asyncio.wait can be larger than one.

While this behavior (exactly one completed task if no error or cancellation 
occurred) would ultimately be desirable, a sentence describing the current 
behavior would be helpful for new users of asyncio.
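A small demonstration of the current behavior (a sketch; with near-simultaneous completions, `done` can contain more than one task):

```python
import asyncio

async def main():
    tasks = [asyncio.create_task(asyncio.sleep(0)) for _ in range(3)]
    # All three complete in the same event-loop iteration, so FIRST_COMPLETED
    # may hand back several of them at once.
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return len(done)

print(asyncio.run(main()))  # often 3; never guaranteed to be exactly 1
```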

--
assignee: docs@python
components: Documentation
messages: 362181
nosy: docs@python, tom.pohl
priority: normal
severity: normal
status: open
title: Mention in docs that asyncio.FIRST_COMPLETED does not guarantee the 
completion of no more than one task
type: enhancement
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue39671>
___



[issue35108] inspect.getmembers passes exceptions from object's properties through

2020-02-03 Thread Tom Augspurger


Change by Tom Augspurger :


--
keywords: +patch
pull_requests: +17703
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/18330

___
Python tracker 
<https://bugs.python.org/issue35108>
___



[issue38646] Invalid check on the result of pthread_self() leads to libpython startup failure

2019-10-30 Thread Tom Lane


New submission from Tom Lane :

Assorted code in the Python core supposes that the result of pthread_self() 
cannot be equal to PYTHREAD_INVALID_THREAD_ID, ie (void *) -1.  If it is, you 
get a crash at interpreter startup.  Unfortunately, this supposition is 
directly contrary to the POSIX specification for pthread_self(), which defines 
no failure return value; and it is violated by NetBSD's implementation in some 
circumstances.  In particular, we (the Postgres project) are observing that 
libpython.so fails when dynamically loaded into a host executable that does not 
itself link libpthread.  NetBSD's code always returns -1 if libpthread was not 
present at main program start, as they do not support forking new threads in 
that case.  They assert (and I can't disagree) that their implementation 
conforms to POSIX.

A lazy man's solution might be to change PYTHREAD_INVALID_THREAD_ID to some 
other value like -3, but that's not fixing the core problem that you're 
violating POSIX by testing for any specific value at all.

Details and background info can be found in this email thread:

https://www.postgresql.org/message-id/flat/25662.1560896200%40sss.pgh.pa.us

--
components: Interpreter Core
messages: 355723
nosy: tgl
priority: normal
severity: normal
status: open
title: Invalid check on the result of pthread_self() leads to libpython startup 
failure
type: crash
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue38646>
___



[issue37915] Segfault in comparison between datetime.timezone.utc and pytz.utc

2019-08-22 Thread Tom Augspurger


Tom Augspurger  added the comment:

Thanks for debugging this Karthikeyan and for the quick fix Pablo!

--

___
Python tracker 
<https://bugs.python.org/issue37915>
___



[issue37915] Segfault in comparison between datetime.timezone.utc and pytz.utc

2019-08-22 Thread Tom Augspurger


New submission from Tom Augspurger :

The following crashes with Python 3.8b3

```
import sys
import pytz
import datetime

print(sys.version_info)
print(pytz.__version__)
print(datetime.timezone.utc == pytz.utc)
```

When run with `-X faulthandler`, I see

```
sys.version_info(major=3, minor=8, micro=0, releaselevel='beta', serial=3)
2019.2
Fatal Python error: Segmentation fault

Current thread 0x0001138dc5c0 (most recent call first):
  File "foo.py", line 8 in <module>
Segmentation fault: 11
```
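A reduced version without pytz (a sketch; on fixed versions, comparing `timezone.utc` with an arbitrary `tzinfo` subclass returns False instead of crashing):

```python
import datetime

class OtherUTC(datetime.tzinfo):
    # Minimal tzinfo stand-in for a third-party UTC implementation.
    def utcoffset(self, dt):
        return datetime.timedelta(0)
    def tzname(self, dt):
        return "UTC"
    def dst(self, dt):
        return None

# timezone.__eq__ returns NotImplemented for non-timezone operands, so the
# comparison falls back to identity and yields False rather than a crash.
print(datetime.timezone.utc == OtherUTC())  # False
```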

--
messages: 350184
nosy: tomaugspurger
priority: normal
severity: normal
status: open
title: Segfault in comparison between datetime.timezone.utc and pytz.utc
type: crash
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue37915>
___



Re: absolute path to a file

2019-08-21 Thread tom arnall
Thanks. Hope you found a solution to the problem.

On Tue, Aug 20, 2019, 2:51 AM Cameron Simpson  wrote:

> Please remember to CC the list.
>
> On 19Aug2019 22:06, Paul St George  wrote:
> >On 19/08/2019 14:16, Cameron Simpson wrote:
> [...]
> >>There's a remark on that web page I mentioned that suggests that the
> >>leading '//' indicates the filename is relative to the Blender model,
> >>so the context directory for the '//' is likely
> >>/Users/Lion/Desktop/test8.
> >
> >Yes. That makes sense. The reason I was testing with two images, one
> >at /Users/Lion/Desktop/test8/image01.tif and the other at
> >/Users/Lion/Desktop/images/image02.tif is that I cannot rely on images
> >being in the same folder as the Blender file.
> >
> >So, let's assume the context directory is /Users/Lion/Desktop/test8
> >and see how we get on below.
> [...]
> >>realpath needs a UNIX path. Your //image01.tif isn't a UNIX path, it
> >>is a special Blender path. First you need to convert it. By replacing
> >>'//' with the blend file's directory. Then you can call realpath. If
> >>you still need to.
> >
> >Understood. Now. Thanks!
> >>
> >>[...snip...]
> >
> >Did you just [...snip...] yourself?
>
> Yes. It keeps the surrounding context manageable. In this way you know
> to which text I am referring, without having to wade through paragraphs
> to guess what may be relevant.
>
> >>from os.path import dirname
> >>
> >># Get this from somewhere just hardwiring it for the example.
> >># Maybe from your 'n' object below?
> >>blend_file = '/Users/Lion/Desktop/test8/tifftest8.blend'
> >Is this setting a relative path?
> >>
> >>blender_image_file = n.image.filename
> >>
> >>unix_image_file = unblenderise(blender_image_file,
> dirname(blend_file))
> >>
> >>Now you have a UNIX path. If blend_file is an absolute path,
> >>unix_image_path will also be an absolute path. But if blend_file is
> >>a relative path (eg you opened up "tifftest8.blend") unix_image_path
> >>will be a relative path.
> >
> >Does unix_image_path = unix_image_file?
>
> Yeah, sorry,  my mistake.
>
> >Two possibilities here.
> >blend_file (and so unix_image_file) is an absolute path OR blend_file
> >(and so unix_image_file) is a relative path.
> >
> >I just want to check my understanding. If I supply the path to
> >blend_file then it is absolute, and if I ask Python to generate the
> >path to blend_file from within Blender it is relative. Have I got it?
>
> Not quite. What seems to be the situation is:
>
> You've got some object from Blender called "n.image", which has a
> ".file" attribute which is a Blender reference to the image file of the
> form "//image01.tif".
>
> I presume that Blender has enough state inside "n" or "n.image" to
> locate this in the real filesystem; maybe it has some link to the
> Blender model of your blend file, and thus knows the path to the blend
> file and since //image01.tif is a reference relative to the blend file,
> it can construct the UNIX path to the file.
>
> You want to know the UNIX pathname to the image file (maybe you want to
> pass it to some unrelated application to view the file or something).
>
> So you need to do what Blender would do if it needs a UNIX path (eg to
> open the file).
>
> The formula for that is dirname(path_to_blendfile) with n.image.file[2:]
> appended to it. So that's what we do with unblenderise(): if the
> filename is a "Blender relative name", rip off the "//" and prepend the
> blend file directory path. That gets you a UNIX path which you can hand
> to any function expecting a normal operating system pathname.
>
> Whether that is an absolute path or a relative path is entirely "does
> the resulting path start with a '/'"?
>
> An absolute path starts with a slash and is relative to the root of the
> filesystem. You can use such a path regardless of what your current
> working directory is, because it doesn't use the working directory.
>
> A relative path doesn't start with a slash and is relative to the
> current working directory. It only works if you're in the right working
> directory.
>
> _Because_ a relative path depends on the _your_ working directory,
> usually we pass around absolute paths if we need to tell something else
> about a file, because that will work regardless if what _their_ working
> directory may be.
>
> So, you may well want to turn a relative path into an absolute path...
>
> >If I decided not to supply the path and so ended up with a relative
> >UNIX path, I could now use realpath or abspath to find the absolute
> >path. Have I still got it?
>
> This is correct.
>
> Abspath may even call realpath to do its work, unsure.
>
> >It works very well. So thank you! I tested it with a Blend file that
> >had two images, one in the same folder as the Blend file and the other
> >was in a folder on the Desktop called 'images'.
> >
> >The initial results were:
> >Plane uses image01.tif saved at //image01.tif which is at
> >/Users/Lion/Desktop/test8/image01.tif
> >Plane 
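
The unblenderise() helper discussed in the thread above can be sketched as follows. This is a minimal sketch: the function name comes from the thread, and the realpath() step at the end is the optional relative-to-absolute conversion also discussed above.

```python
from os.path import dirname, isabs, join, realpath

def unblenderise(blender_path, blend_dir):
    """Convert a Blender '//'-relative path into a normal UNIX path."""
    if blender_path.startswith('//'):
        # '//' means "relative to the directory holding the .blend file"
        return join(blend_dir, blender_path[2:])
    return blender_path  # already a plain UNIX path

blend_file = '/Users/Lion/Desktop/test8/tifftest8.blend'
image = unblenderise('//image01.tif', dirname(blend_file))
# image is now '/Users/Lion/Desktop/test8/image01.tif'

# If blend_file had been a relative path, image would be relative too;
# realpath() turns it into an absolute path using the working directory.
if not isabs(image):
    image = realpath(image)
```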

[issue36656] Please add race-free os.link and os.symlink wrapper / helper

2019-06-29 Thread Tom Hale


Tom Hale  added the comment:

I've created a PR here:

https://github.com/python/cpython/pull/14464

Only shutil.symlink is currently implemented.

Feedback sought from Windows users.

@Michael.Felt please note that `overwrite=False` is the default.

@taleinat I hope that the new implementation addresses the infinite loop 
concern.

Please be both thorough and gentle - this is my first PR.

--

___
Python tracker 
<https://bugs.python.org/issue36656>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36656] Race conditions due to os.link and os.symlink POSIX specification

2019-06-03 Thread Tom Hale


Tom Hale  added the comment:

Serhiy wrote

> Detected problem is better than non-detected problem.

I agree. I also assert that no problem (via a shutil wrapper) is better than a 
detected problem which may not be handled.

While it's up to the programmer to handle exceptions, it's only systems 
programmers who will realise that there is a sometimes-occurring exception 
which should be handled.

So most people won't handle it. They don't know that they need to.

The shutil os.link and os.symlink wrappers under discussion on python-ideas 
prevent non-system-programmers from needing to know about the race-condition 
case (and correctly handling it).

If something is difficult to get right, then let's put it in shutil and save 
everyone reinventing the wheel.

3 specific cases in which an atomic link replacement would be useful are listed 
here:

https://code.activestate.com/lists/python-ideas/56195/

--

___
Python tracker 
<https://bugs.python.org/issue36656>



[issue36656] Race conditions due to os.link and os.symlink POSIX specification

2019-06-03 Thread Tom Hale


Change by Tom Hale :


--
title: Allow os.symlink(src, target, force=True) to prevent race conditions -> 
Race conditions due to os.link and os.symlink POSIX specification

___
Python tracker 
<https://bugs.python.org/issue36656>
___



[issue36709] Asyncio SSL keep-alive connections raise errors after loop close.

2019-05-30 Thread Tom Christie


Tom Christie  added the comment:

Right, and `requests` *does* provide both those styles.

The point more being that *not* having closed the transport at the point of 
exit shouldn't end up raising a hard error. It doesn't raise errors in 
sync-land, and it shouldn't do so in async-land.

Similarly, we wouldn't expect an open file resource to cause errors to be 
raised at the point of exit.

--

___
Python tracker 
<https://bugs.python.org/issue36709>
___



[issue36709] Asyncio SSL keep-alive connections raise errors after loop close.

2019-05-28 Thread Tom Christie


Tom Christie  added the comment:

> From my understanding, the correct code should close all transports and wait 
> for their connection_lost() callbacks before closing the loop.

Ideally, yes, although we should be able to expect that an SSL connection that 
hasn't been gracefully closed wouldn't loudly error on teardown like that.

In standard sync code, the equivalent would be running something like this...

```python
session = requests.Session()
session.get('https://example.com/')
```

We wouldn't expect a traceback to be raised on exiting. (Even though the user 
*hasn't* explicitly closed the session, and even though a keep alive SSL 
connection will be open at the point of exit.)
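
For contrast, the graceful pattern (close the transport and wait for connection_lost() before closing the loop) can be sketched like this. Assumptions: a plain-TCP local server stands in for the remote SSL endpoint, since the close/wait_closed pattern is the same either way, and StreamWriter.wait_closed() requires Python 3.7+.

```python
import asyncio

async def main():
    # Trivial local server standing in for the remote endpoint.
    async def handle(reader, writer):
        writer.close()

    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.close()              # begin graceful shutdown
    await writer.wait_closed()  # wait for connection_lost() to run

    server.close()
    await server.wait_closed()
    return 'clean shutdown'

result = asyncio.run(main())
print(result)
```

Because no transport is left open when the loop closes, no teardown errors are logged.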

--

___
Python tracker 
<https://bugs.python.org/issue36709>
___



[issue36656] Allow os.symlink(src, target, force=True) to prevent race conditions

2019-05-14 Thread Tom Hale


Change by Tom Hale :


--
type: security -> enhancement

___
Python tracker 
<https://bugs.python.org/issue36656>
___



[issue36656] Allow os.symlink(src, target, force=True) to prevent race conditions

2019-05-14 Thread Tom Hale


Tom Hale  added the comment:

Thanks Toshio Kuratomi, I raised it on the mailing list at:

https://code.activestate.com/lists/python-ideas/55992/

--

___
Python tracker 
<https://bugs.python.org/issue36656>
___



[issue36656] Allow os.symlink(src, target, force=True) to prevent race conditions

2019-05-03 Thread Tom Hale


Tom Hale  added the comment:

Yes, but by default (because of difficulty) people won't check for this case:

1. I delete existing symlink in order to recreate it
2. Attacker watching symlink finds it deleted and recreates it
3. I try to create symlink, and an unexpected exception is raised

--

___
Python tracker 
<https://bugs.python.org/issue36656>
___



[issue36709] Asyncio SSL keep-alive connections raise errors after loop close.

2019-04-24 Thread Tom Christie


Tom Christie  added the comment:

This appears somewhat related: https://bugs.python.org/issue34506

As it *also* logs exceptions occurring during `_fatal_error` and `_force_close`.

--

___
Python tracker 
<https://bugs.python.org/issue36709>
___



[issue36709] Asyncio SSL keep-alive connections raise errors after loop close.

2019-04-24 Thread Tom Christie


New submission from Tom Christie :

If an asyncio SSL connection is left open (eg. any kind of keep-alive 
connection) then after closing the event loop, an exception will be raised...

Python:

```
import asyncio
import ssl
import certifi


async def f():
ssl_context = ssl.create_default_context()
ssl_context.load_verify_locations(cafile=certifi.where())
await asyncio.open_connection('example.org', 443, ssl=ssl_context)


loop = asyncio.get_event_loop()
loop.run_until_complete(f())
loop.close()
```

Traceback:

```
$ python example.py 
Fatal write error on socket transport
protocol: 
transport: <_SelectorSocketTransport fd=8>
Traceback (most recent call last):
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py",
 line 868, in write
n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor
Fatal error on SSL transport
protocol: 
transport: <_SelectorSocketTransport closing fd=8>
Traceback (most recent call last):
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py",
 line 868, in write
n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/sslproto.py",
 line 676, in _process_write_backlog
self._transport.write(chunk)
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py",
 line 872, in write
self._fatal_error(exc, 'Fatal write error on socket transport')
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py",
 line 681, in _fatal_error
self._force_close(exc)
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py",
 line 693, in _force_close
self._loop.call_soon(self._call_connection_lost, exc)
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py",
 line 677, in call_soon
self._check_closed()
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py",
 line 469, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```

It looks to me like the original "OSError: [Errno 9] Bad file descriptor" 
probably shouldn't be raised in any case - if, when attempting to tear down the 
SSL connection, the socket has already been closed uncleanly, then we should 
probably pass silently.

Brought to my attention via: https://github.com/encode/httpcore/issues/16

--
assignee: christian.heimes
components: SSL, asyncio
messages: 340764
nosy: asvetlov, christian.heimes, tomchristie, yselivanov
priority: normal
severity: normal
status: open
title: Asyncio SSL keep-alive connections raise errors after loop close.
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue36709>
___



[issue36656] Allow os.symlink(src, target, force=True) to prevent race conditions

2019-04-19 Thread Tom Hale


Tom Hale  added the comment:

The most correct work-around I believe exists is:

(updates at: https://stackoverflow.com/a/55742015/5353461)

import os
import tempfile
import time

def symlink_force(target, link_name):
    '''
    Create a symbolic link pointing to target named link_name.
    Overwrite link_name if it exists.
    '''

    # os.replace may fail if files are on different filesystems.
    # Therefore, create the temporary link in the directory of link_name.
    link_dir = os.path.dirname(link_name)

    # os.symlink requires that the link name does NOT exist.
    # Avoid race condition of file creation between mktemp and symlink:
    while True:
        temp_pathname = tempfile.mktemp(suffix='.tmp',
                                        prefix='symlink_force_tmp-',
                                        dir=link_dir)
        try:
            os.symlink(target, temp_pathname)
            break  # Success, exit loop
        except FileExistsError:
            time.sleep(0.001)  # Prevent high load in pathological conditions
    os.replace(temp_pathname, link_name)

An unlikely race condition still remains: the symlink created at the 
randomly-named `temp_pathname` could be modified between creation and 
renaming/replacing it as the specified link name.

Suggestions for improvement welcome.

--
type:  -> security

___
Python tracker 
<https://bugs.python.org/issue36656>
___



[issue36656] Allow os.symlink(src, target, force=True) to prevent race conditions

2019-04-18 Thread Tom Hale


New submission from Tom Hale :

I cannot find a race-condition-free way to force overwrite an existing symlink.

os.symlink() requires that the link name does not already exist, meaning that 
it could be created via a race condition in either of the two workaround 
solutions that I've seen:

1. Unlink the existing symlink (it could be recreated by an attacker, causing 
the following symlink() to fail)

2. Create a new temporary symlink, then overwrite the target (the temp link 
could be changed between creation and replace)

The additional gotcha with the safer option (2) (safer because the attack 
filename is unknown) is that replace() may fail if the two files are on 
separate filesystems.

I suggest an additional `force=` argument to os.symlink(), defaulting to 
`False` for backward compatibility, but allowing atomic overwriting of a 
symlink when set to `True`.

I would be willing to look into a PR for this.

Prior art:  https://stackoverflow.com/a/55742015/5353461

--
messages: 340474
nosy: Tom Hale
priority: normal
severity: normal
status: open
title: Allow os.symlink(src, target, force=True) to prevent race conditions
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue36656>
___



[issue16177] Typing left parenthesis in IDLE causes intermittent Cocoa Tk crash on OS X

2019-04-11 Thread Tom Goddard


Tom Goddard  added the comment:

This Mac Tk bug was supposedly fixed in 2016 or 2017.  Details are in the 
following Tk ticket.

http://core.tcl.tk/tk/tktview/c84f660833546b1b84e7

The previous URL to the Tk ticket no longer works.  In case the above URL also 
goes bad, the id number for the Tk ticket is 

c84f660833546b1b84e7fd3aef930c2f17207461

--

___
Python tracker 
<https://bugs.python.org/issue16177>
___



[issue35627] multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1

2019-01-21 Thread Tom Wilson


Tom Wilson  added the comment:

In case this is a clue - the attached script "mp_hang2.py" adds a call to 
qsize() and uses only a single consumer. When I run it from the command line it 
does one of two things:


Option 1:

C:\TEMP\Py-3.7.2b-Venv\Scripts>.\python.exe 
"C:\Users\Tom.Wilson\Documents\Python-Bugs\mp_hang2.py"
Creating 1 consumers
Putting
Poisoning
Joining
Process Consumer-1:
Traceback (most recent call last):
  File 
"C:\Users\Tom.Wilson\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py",
 line 297, in _bootstrap
self.run()
  File "C:\Users\Tom.Wilson\Documents\Python-Bugs\mp_hang2.py", line 18, in run
print(f'Queue size: {self.task_queue.qsize()}')
  File 
"C:\Users\Tom.Wilson\AppData\Local\Programs\Python\Python37\lib\multiprocessing\queues.py",
 line 117, in qsize
return self._maxsize - self._sem._semlock._get_value()
PermissionError: [WinError 5] Access is denied


Option 2:

C:\TEMP\Py-3.7.2b-Venv\Scripts>.\python.exe 
"C:\Users\Tom.Wilson\Documents\Python-Bugs\mp_hang2.py"
Creating 1 consumers
Putting
Poisoning
Joining
Queue size: 2147483647
Getting task
 <<< Hangs here >>>


If I can provide anything else please let me know.

--
Added file: https://bugs.python.org/file48070/mp_hang2.py

___
Python tracker 
<https://bugs.python.org/issue35627>
___



[issue35627] multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1

2019-01-21 Thread Tom Wilson


Tom Wilson  added the comment:

Hi there. I get this behavior as well, although only in a venv.

                   Main       Virtual
v3.7.1:260ec2c36a  Completes  Completes
v3.7.2:9a3ffc0492  Completes  Hangs

Some other details of my setup:
 - Windows 10 Pro, Version 1803 (OS Build 17134.472)
 - Python platform is AMD64 on win32
 - Running from the command line (cmd.exe)
 - The virtual environment was created from the command line like this: 
 .\python -m venv c:\temp\Py-3.7.2b-Venv

--
nosy: +tom.wilson

___
Python tracker 
<https://bugs.python.org/issue35627>
___



[issue23078] unittest.mock patch autospec doesn't work on staticmethods

2018-11-26 Thread Tom Dalton

Tom Dalton  added the comment:

Here's a minimal example so my comment is not totally vacuous:

```
import unittest
from unittest import mock

class Foo:
@classmethod
def bar(cls, baz):
pass

class TestFoo(unittest.TestCase):
def test_bar(self):
with unittest.mock.patch.object(Foo, "bar", autospec=True):
Foo.bar()
```

->

```
~/› python -m unittest example.py 
E
======================================================================
ERROR: test_bar (example.TestFoo)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "example.py", line 14, in test_bar
    Foo.bar()
TypeError: 'NonCallableMagicMock' object is not callable

----------------------------------------------------------------------
Ran 1 test in 0.001s
```

--

___
Python tracker 
<https://bugs.python.org/issue23078>
___



[issue23078] unittest.mock patch autospec doesn't work on staticmethods

2018-11-26 Thread Tom Dalton


Tom Dalton  added the comment:

I've just come across this too, so would be great if the patch can be 
progressed.

--
nosy: +tom.dalton.fanduel

___
Python tracker 
<https://bugs.python.org/issue23078>
___



[issue12657] Cannot override JSON encoding of basic type subclasses

2018-11-15 Thread Tom Brown


Tom Brown  added the comment:

I found this work-around useful https://stackoverflow.com/a/32782927

--
nosy: +Tom.Brown

___
Python tracker 
<https://bugs.python.org/issue12657>
___



[issue34877] Inconsistent Behavior Of futures.ProcessPoolExecutor

2018-10-02 Thread Tom Ashley

New submission from Tom Ashley :

Not sure if this goes in core or modules.

There is an inconsistency in the output of the attached script. From the docs I 
read it's supposed to have the behavior of:

"If something happens to one of the worker processes to cause it to exit 
unexpectedly, the ProcessPoolExecutor is considered “broken” and will no longer 
schedule tasks."

That script is supposed to exemplify that. Instead, if I run the code several 
times, I get the following output:

(bot-LP2ewIkY) ⋊> ~/w/p/b/bot on master ⨯ python brokenPool.py  

 18:54:55
getting the pid for one worker
killing process 4373
submitting another task
could not start new tasks: A process in the process pool was terminated 
abruptly while the future was running or pending.
(bot-LP2ewIkY) ⋊> ~/w/p/b/bot on master ⨯ python brokenPool.py  

 18:54:56
getting the pid for one worker
killing process 4443
submitting another task
could not start new tasks: A process in the process pool was terminated 
abruptly while the future was running or pending.
(bot-LP2ewIkY) ⋊> ~/w/p/b/bot on master ⨯ python brokenPool.py  

 18:54:57
getting the pid for one worker
killing process 4514
submitting another task  <- (No exception thrown after this)


The exception isn't always thrown. This seems problematic to me. Related stack 
post: 
https://stackoverflow.com/questions/52617558/python-inconsistent-behavior-of-futures-processpoolexecutor

--
components: Interpreter Core
files: brokenPool.py
messages: 326922
nosy: TensorTom
priority: normal
severity: normal
status: open
title: Inconsistent Behavior Of futures.ProcessPoolExecutor
versions: Python 3.7
Added file: https://bugs.python.org/file47843/brokenPool.py

___
Python tracker 
<https://bugs.python.org/issue34877>
___



[issue34873] re.finditer behaviour in re.MULTILINE mode fails to match first 7 characters

2018-10-02 Thread Tom Dawes


Tom Dawes  added the comment:

Please ignore, re.finditer and REGEX.finditer aren't the same. I was passing 
re.MULTILINE (= 8) to endPos.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue34873>
___



[issue34873] re.finditer behaviour in re.MULTILINE mode fails to match first 7 characters

2018-10-02 Thread Tom Dawes


New submission from Tom Dawes :

re.finditer appears to fail to match within the first 7 characters in a string 
when re.MULTILINE is used:

>>> REGEX = re.compile("y")
>>> [list(m.start() for m in REGEX.finditer("{}y".format("x"*i), re.MULTILINE))
...  for i in range(10)]
[[], [], [], [], [], [], [], [], [8], [9]]

Without re.MULTILINE, this works fine:

>>> [list(m.start() for m in REGEX.finditer("{}y".format("x"*i)))
...  for i in range(10)]
[[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]

Passing re.MULTILINE to re.compile doesn't seem to have any effect.
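
As the later resolution notes, the pitfall is that a compiled pattern's finditer() takes a start position as its second positional argument, not flags - so re.MULTILINE (integer value 8) was silently used as a start offset. A minimal illustration:

```python
import re

pat = re.compile("y", re.MULTILINE)  # flags belong at compile time
s = "xxxy"

# Correct: flags given to compile(); the match is found at index 3.
assert [m.start() for m in pat.finditer(s)] == [3]

# Pitfall: re.MULTILINE (== 8) is taken as the pos argument, so scanning
# starts past the end of this short string and nothing matches.
assert [m.start() for m in re.compile("y").finditer(s, re.MULTILINE)] == []
```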

--
components: Regular Expressions
messages: 326911
nosy: ezio.melotti, mrabarnett, tdawes
priority: normal
severity: normal
status: open
title: re.finditer behaviour in re.MULTILINE mode fails to match first 7 
characters
type: behavior
versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue34873>
___



[issue34564] Tutorial Section 2.1 Windows Installation Path Correction

2018-09-02 Thread Tom Berry


New submission from Tom Berry :

The listed installation location is incorrect in the 02 Sep 18 release of the 
tutorial. It shows the default install path as C:\python36 instead of 
C:\Program Files\Python37. 

This may be related to an installer issue, as installing single-user places the 
program in an entirely different directory. Changing the installer to default 
back to C:\python37 would remove the need to differentiate Program Files (x86) 
for a 32-bit installer, as well as Program Files\ for the 64-bit. 

Corrected Tutorial attached reflecting change FROM C:\python36 TO C:\Program 
Files\Python37.

--
assignee: docs@python
components: Documentation
files: Tutorial_EDIT.pdf
messages: 324482
nosy: aperture, docs@python
priority: normal
severity: normal
status: open
title: Tutorial Section 2.1 Windows Installation Path Correction
type: enhancement
versions: Python 3.7
Added file: https://bugs.python.org/file47781/Tutorial_EDIT.pdf

___
Python tracker 
<https://bugs.python.org/issue34564>
___



ANN: Pandas 0.23.4 Released

2018-08-10 Thread Tom Augspurger
Hi all,

I'm happy to announce that pandas 0.23.4 has been released.

This is a minor bug-fix release in the 0.23.x series and includes some
regression fixes, bug fixes, and performance improvements. We recommend
that all users upgrade to this version.
See the full whatsnew
<https://pandas.pydata.org/pandas-docs/version/0.23.4/whatsnew.html> for a
list of all the changes.

The release can be installed with conda from the default channel and
conda-forge::

conda install pandas

Or via PyPI:

python -m pip install --upgrade pandas


A total of 4 people contributed to this release.  People with a "+" by their
names contributed a patch for the first time.

* Jeff Reback
* Tom Augspurger
* chris-b1
* h-vetinari
-- 
https://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


[issue28140] Attempt to give better errors for pip commands typed into the REPL

2018-07-28 Thread Tom Viner


Change by Tom Viner :


--
keywords: +patch
pull_requests: +8053
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue28140>
___



[issue28140] Attempt to give better errors for pip commands typed into the REPL

2018-07-28 Thread Tom Viner


Tom Viner  added the comment:

I am looking at this, as part of the EuroPython 2018 sprint.

--

___
Python tracker 
<https://bugs.python.org/issue28140>
___



ANN: Pandas 0.23.2 Released

2018-07-06 Thread Tom Augspurger
Hi all,

I'm happy to announce that pandas 0.23.2 has been released.

This is a minor bug-fix release in the 0.23.x series and includes some
regression fixes, bug fixes, and performance improvements. We recommend
that all users upgrade to this version.
See the full whatsnew
<https://pandas.pydata.org/pandas-docs/version/0.23.2/whatsnew.html> for a
list of all the changes.

The release can be installed with conda from the default channel and
conda-forge::

conda install pandas

Or via PyPI:

python -m pip install --upgrade pandas

A total of 17 people contributed to this release. People with a “+” by
their names contributed a patch for the first time.

   - David Krych
   - Jacopo Rota +
   - Jeff Reback
   - Jeremy Schendel
   - Joris Van den Bossche
   - Kalyan Gokhale
   - Matthew Roeschke
   - Michael Odintsov +
   - Ming Li
   - Pietro Battiston
   - Tom Augspurger
   - Uddeshya Singh
   - Vu Le +
   - alimcmaster1 +
   - david-liu-brattle-1 +
   - gfyoung
   - jbrockmendel


ANN: Pandas 0.23.1 Released

2018-06-13 Thread Tom Augspurger
Hi all,

I'm happy to announce that pandas 0.23.1 has been released.

This is a minor bug-fix release in the 0.23.x series and includes some
regression fixes, bug fixes, and performance improvements.
We recommend that all users upgrade to this version.
See the full whatsnew
<https://pandas.pydata.org/pandas-docs/version/0.23.1/whatsnew.html#v0-23-1>
for a list of all the changes.

The release can be installed with conda from the default channel and
conda-forge::

conda install pandas

Or via PyPI:

python -m pip install --upgrade pandas

A total of 30 people contributed patches to this release. People with a “+”
by their names contributed a patch for the first time.

   - Adam J. Stewart
   - Adam Kim +
   - Aly Sivji
   - Chalmer Lowe +
   - Damini Satya +
   - Dr. Irv
   - Gabe Fernando +
   - Giftlin Rajaiah
   - Jeff Reback
   - Jeremy Schendel +
   - Joris Van den Bossche
   - Kalyan Gokhale +
   - Kevin Sheppard
   - Matthew Roeschke
   - Max Kanter +
   - Ming Li
   - Pyry Kovanen +
   - Stefano Cianciulli
   - Tom Augspurger
   - Uddeshya Singh +
   - Wenhuan
   - William Ayd
   - chris-b1
   - gfyoung
   - h-vetinari
   - nprad +
   - ssikdar1 +
   - tmnhat2001
   - topper-123
   - zertrin +


ANN: Pandas 0.23.0 Released

2018-05-17 Thread Tom Augspurger
Hi all,

I'm happy to announce that pandas 0.23.0 has been released. This is a major
release from 0.22.0 and includes a number of API changes, new features,
enhancements, and performance improvements along with a large number of bug
fixes.

Tom
--

*What is it:*

pandas is a Python package providing fast, flexible, and expressive data
structures designed to make working with relational or labeled data both
easy and intuitive. It aims to be the fundamental high-level building block
for doing practical, real world data analysis in Python. Additionally, it
has the broader goal of becoming the most powerful and flexible open source
data analysis / manipulation tool available in any language.

*Highlights of the 0.23.0 release include:*

   - Round-trippable JSON format with 'table' orient
   
<https://pandas.pydata.org/pandas-docs/version/0.23.0/whatsnew.html#json-read-write-round-trippable-with-orient-table>
   - Instantiation from dicts respects order for Python 3.6+
   
<https://pandas.pydata.org/pandas-docs/version/0.23.0/whatsnew.html#instantation-from-dicts-preserves-dict-insertion-order-for-python-3-6>
   - Dependent column arguments for assign
   
<https://pandas.pydata.org/pandas-docs/version/0.23.0/whatsnew.html#assign-accepts-dependent-arguments>
   - Merging / sorting on a combination of columns and index levels
   
<https://pandas.pydata.org/pandas-docs/version/0.23.0/whatsnew.html#merging-on-a-combination-of-columns-and-index-levels>
   - Extending Pandas with custom types
   
<https://pandas.pydata.org/pandas-docs/version/0.23.0/whatsnew.html#extending-pandas-with-custom-types-experimental>
   - Excluding unobserved categories from groupby
   
<https://pandas.pydata.org/pandas-docs/version/0.23.0/whatsnew.html#categorical-groupers-has-gained-an-observed-keyword>

See the full whatsnew
<https://pandas.pydata.org/pandas-docs/version/0.23.0/whatsnew.html#v0-23-0>
for a list of all the changes.

*How to get it*:

Source tarballs and wheels are available on PyPI (thanks to Christoph
Gohlke for the windows wheels, and to Matthew Brett for setting up the
mac/linux wheels).

Conda packages are available on conda-forge and will be available on the
default channel shortly.

*Thanks to all the contributors:*

A total of 328 people contributed to this release. People with a “+” by
their names contributed a patch for the first time.

* Aaron Critchley
* AbdealiJK +
* Adam Hooper +
* Albert Villanova del Moral
* Alejandro Giacometti +
* Alejandro Hohmann +
* Alex Rychyk
* Alexander Buchkovsky
* Alexander Lenail +
* Alexander Michael Schade
* Aly Sivji +
* Andreas Költringer +
* Andrew
* Andrew Bui +
* András Novoszáth +
* Andy Craze +
* Andy R. Terrel
* Anh Le +
* Anil Kumar Pallekonda +
* Antoine Pitrou +
* Antonio Linde +
* Antonio Molina +
* Antonio Quinonez +
* Armin Varshokar +
* Artem Bogachev +
* Avi Sen +
* Azeez Oluwafemi +
* Ben Auffarth +
* Bernhard Thiel +
* Bhavesh Poddar +
* BielStela +
* Blair +
* Bob Haffner
* Brett Naul +
* Brock Mendel
* Bryce Guinta +
* Carlos Eduardo Moreira dos Santos +
* Carlos García Márquez +
* Carol Willing
* Cheuk Ting Ho +
* Chitrank Dixit +
* Chris
* Chris Burr +
* Chris Catalfo +
* Chris Mazzullo
* Christian Chwala +
* Cihan Ceyhan +
* Clemens Brunner
* Colin +
* Cornelius Riemenschneider
* Crystal Gong +
* DaanVanHauwermeiren
* Dan Dixey +
* Daniel Frank +
* Daniel Garrido +
* Daniel Sakuma +
* DataOmbudsman +
* Dave Hirschfeld
* Dave Lewis +
* David Adrián Cañones Castellano +
* David Arcos +
* David C Hall +
* David Fischer
* David Hoese +
* David Lutz +
* David Polo +
* David Stansby
* Dennis Kamau +
* Dillon Niederhut
* Dimitri +
* Dr. Irv
* Dror Atariah
* Eric Chea +
* Eric Kisslinger
* Eric O. LEBIGOT (EOL) +
* FAN-GOD +
* Fabian Retkowski +
* Fer Sar +
* Gabriel de Maeztu +
* Gianpaolo Macario +
* Giftlin Rajaiah
* Gilberto Olimpio +
* Gina +
* Gjelt +
* Graham Inggs +
* Grant Roch
* Grant Smith +
* Grzegorz Konefał +
* Guilherme Beltramini
* HagaiHargil +
* Hamish Pitkeathly +
* Hammad Mashkoor +
* Hannah Ferchland +
* Hans
* Haochen Wu +
* Hissashi Rocha +
* Iain Barr +
* Ibrahim Sharaf ElDen +
* Ignasi Fosch +
* Igor Conrado Alves de Lima +
* Igor Shelvinskyi +
* Imanflow +
* Ingolf Becker
* Israel Saeta Pérez
* Iva Koevska +
* Jakub Nowacki +
* Jan F-F +
* Jan Koch +
* Jan Werkmann
* Janelle Zoutkamp +
* Jason Bandlow +
* Jaume Bonet +
* Jay Alammar +
* Jeff Reback
* JennaVergeynst
* Jimmy Woo +
* Jing Qiang Goh +
* Joachim Wagner +
* Joan Martin Miralles +
* Joel Nothman
* Joeun Park +
* John Cant +
* Johnny Metz +
* Jon Mease
* Jonas Schulze +
* Jongwony +
* Jordi Contestí +
* Joris Van den Bossche
* José F. R. Fonseca +
* Jovixe +
* Julio Martinez +
* Jörg Döpfert
* KOBAYASHI Ittoku +
* Kate Surta +
* Kenneth +
* Kevin Kuhl
* Kevin Sheppard
* Krzysztof Chomski
* Ksenia +
* Ksenia Bobrova +
* Kunal Gosar +
* Kurtis Kerstein +
* Kyle Barron +
* Laksh Arora +
* Laurens Geffert +

[issue27987] obmalloc's 8-byte alignment causes undefined behavior

2018-04-27 Thread Tom Grigg

Change by Tom Grigg <tomegr...@gmail.com>:


--
nosy: +tgrigg

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue27987>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33374] generate-posix-vars failed when building Python 2.7.14 on Linux

2018-04-27 Thread Tom Grigg

Change by Tom Grigg <tomegr...@gmail.com>:


--
nosy: +fweimer

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue33374>
___



[issue33374] generate-posix-vars failed when building Python 2.7.14 on Linux

2018-04-27 Thread Tom Grigg

Tom Grigg <tomegr...@gmail.com> added the comment:

I believe this is caused by https://bugs.python.org/issue27987 in combination 
with GCC 8.

Florian Weimer proposed a patch which is included in the Fedora build:

https://src.fedoraproject.org/cgit/rpms/python2.git/tree/00293-fix-gc-alignment.patch

It would be nice if this was fixed for 2.7.15 so that it would build out of the 
box with GCC 8 on x86_64-linux-gnu.

--
nosy: +tgrigg

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue33374>
___



[issue33366] `contextvars` documentation incorrectly refers to "non-local state".

2018-04-27 Thread Tom Christie

Tom Christie <t...@tomchristie.com> added the comment:

Refs: https://github.com/python/cpython/pull/6617

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue33366>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue33366] `contextvars` documentation incorrectly refers to "non-local state".

2018-04-26 Thread Tom Christie

New submission from Tom Christie <t...@tomchristie.com>:

The `contextvars` documentation, at 
https://docs.python.org/3.7/library/contextvars.html starts with the following:

"This module provides APIs to manage, store, and access non-local state."

I assume that must be a documentation bug, right? The module isn't for managing 
non-local state, it's for managing state that *is* local.

I'd assume it ought to read...

"This module provides APIs to manage, store, and access context-local state."

(ie. for managing state that is transparently either thread-local or task-local 
depending on the current execution context.)
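
For reference, the behaviour being described — state local to the current
execution context rather than truly global — looks like this with the
documented `contextvars` API (variable names here are illustrative):

```python
import contextvars

# A context variable: each execution context carries its own value for it.
request_id = contextvars.ContextVar("request_id", default=None)

# Snapshot the current context *before* the variable is set ...
snapshot = contextvars.copy_context()

# ... then set it in the current context.
request_id.set("req-42")

print(request_id.get())              # current context -> req-42
print(snapshot.run(request_id.get))  # snapshot still sees the default -> None
```

The snapshot keeps its own view of the variable, which is exactly the
"context-local" behaviour the proposed wording refers to.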

--
assignee: docs@python
components: Documentation, asyncio
messages: 315792
nosy: asvetlov, docs@python, tomchristie, yselivanov
priority: normal
severity: normal
status: open
title: `contextvars` documentation incorrectly refers to "non-local state".
type: enhancement
versions: Python 3.7, Python 3.8

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue33366>
___



[issue10685] trace does not ignore --ignore-module

2018-04-13 Thread Tom Hines

Tom Hines <tomhi...@gmail.com> added the comment:

bump

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue10685>
___



Re: Writing a C extension - borrowed references

2018-03-20 Thread Tom Evans via Python-list
On Tue, Mar 20, 2018 at 5:36 PM, Rob Gaddi
<rgaddi@highlandtechnology.invalid> wrote:
> If all you're doing is a thin-wrapper around a C library, have you thought
> about just using ctypes?

Yep; the C library whose API I'm using uses macros to cast things to
the right structure, and (similar to Cython), as I already _have_ the
code, I wasn't particularly interested in working out how to convert
things like:

LASSO_SAMLP2_NAME_ID_POLICY(LASSO_SAMLP2_AUTHN_REQUEST(
LASSO_PROFILE(login)->request)->NameIDPolicy
)->Format = strdup(LASSO_SAML2_NAME_IDENTIFIER_FORMAT_PERSISTENT);

into ctypes compatible syntax, when I can simply adapt the working C
code to Python. :)
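
For illustration, spelling one of those macro chains out by hand in ctypes
looks roughly like this — the struct layouts below are hypothetical,
heavily simplified stand-ins for the real lasso types:

```python
import ctypes

FORMAT_PERSISTENT = b"urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"

# Hypothetical, heavily simplified stand-ins for the lasso structs (the
# real ones carry many more fields); the point is only that every
# LASSO_*() macro cast becomes an explicit ctypes.cast().
class NameIDPolicy(ctypes.Structure):
    _fields_ = [("Format", ctypes.c_char_p)]

class AuthnRequest(ctypes.Structure):
    _fields_ = [("NameIDPolicy", ctypes.c_void_p)]

policy = NameIDPolicy()
request = AuthnRequest(
    NameIDPolicy=ctypes.cast(ctypes.pointer(policy), ctypes.c_void_p).value
)

# C: LASSO_SAMLP2_NAME_ID_POLICY(...->NameIDPolicy)->Format = strdup(...);
ptr = ctypes.cast(request.NameIDPolicy, ctypes.POINTER(NameIDPolicy))
ptr.contents.Format = FORMAT_PERSISTENT
print(policy.Format)  # -> b'urn:oasis:names:tc:SAML:2.0:nameid-format:persistent'
```

Multiplied across every nested macro in the C snippet above, that is the
conversion overhead being described.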

Plus, there is the library static initialisation to manage, the issues
of distributing the C libraries if I do a C wrapper to call from
ctypes. This way, it can be distributed from our devpi very easily.

Cheers

Tom
-- 
https://mail.python.org/mailman/listinfo/python-list


[OT] Re: Style Q: Instance variables defined outside of __init__

2018-03-20 Thread Tom Evans via Python-list
On Tue, Mar 20, 2018 at 5:25 PM, Grant Edwards
<grant.b.edwa...@gmail.com> wrote:
> On 2018-03-20, Neil Cerutti <ne...@norwich.edu> wrote:
>
>> My automotive course will probaly divide cars into Automatic
>> Transmission, and Front Wheel Drive.
>
> I get your point: the two characteristics are, in theory, orthogonal.
> But, in the US, the two appear to be correlated.  ISTM that cars with
> manual transmissions are much more likely to be RWD than are
> automatics.

I find that slightly strange, I guess in the US, manual transmission
correlates to the non default option(?), along with RWD.

In Europe, RWD and automatic transmission are more expensive options
than FWD and manual transmissions, and most cars are FWD and manual.

At least most cars I can afford..

Cheers

Tom


Re: Writing a C extension - borrowed references

2018-03-20 Thread Tom Evans via Python-list
On Tue, Mar 20, 2018 at 4:38 PM, Chris Angelico <ros...@gmail.com> wrote:
> BTW, have you looked into Cython? It's smart enough to take care of a
> lot of this sort of thing for you.

I did a bit; this work is to replace our old python 2 SAML client,
which used python-lasso and python-libxml2, both packages that are
built as part of building the C library and thus an utter PITA to
package for different versions of python (something our Infra team
were extremely reluctant to do). When the latest (PITA to build)
version of python-lasso started interfering with python-xmlsec
(validating an invalid signature was causing segfaults), I got fed up
of fighting it.

I actually also maintain a C version of the same code, using the same
libraries, so porting those few segments of code to Python/C seemed
more expedient than rewriting in Cython. I'm not writing an API to
these libraries, just a few functions.

Cheers

Tom


Writing a C extension - borrowed references

2018-03-20 Thread Tom Evans via Python-list
Hi all

I'm writing my first C extension for Python here, and all is going
well. However, I was reading [1], and the author there is advocating
Py_INCREF'ing *every* borrowed reference.

Now, I get that if I do something to mutate and perhaps invalidate the
PyObject that was borrowed I can get unpredictable results - eg, by
getting a borrowed reference to a value from a dictionary, clearing
the dictionary and then expecting to use the borrowed reference.

However, if my code does not do things like that, is there a chance of
a borrowed reference being invalidated that is not related to my use
of the thing I borrowed it from? Is this cargo cult advice (sometimes
the gods need this structure, and so to please the gods it must be
everywhere), sensible belt and braces or needless overkill?

Cheers

Tom


[1] 
http://pythonextensionpatterns.readthedocs.io/en/latest/refcount.html#borrowed-references

