[Python-Dev] Re: Object deallocation during the finalization of Python program

2020-01-09 Thread Tim Peters
[Pau Freixes ]
> Recently I've been facing a really weird bug where a Python program
> was randomly segfaulting during the finalization, the program was
> using some C extensions via Cython.

There's nothing general that can be said that would help.  These
things require excruciating details to resolve.

It's generally the case that these things are caused by code missing
some aspect of correct use of Python's C API.  Which can be subtle!
It's very rare that such misuse is found in the core, but not so rare
in extensions.

Here's the most recent.  Be aware that it's a very long report, and -
indeed - is stuffed to the gills with excruciating details ;-)

https://bugs.python.org/issue38006

In that case a popular C extension wasn't really "playing by the
rules", but we changed CPython's cyclic garbage collector to prevent
it from segfaulting anyway.

So, one thing you didn't tell us:  which version of Python were you
using?  If not the most recent (3.8.1), try again with that (which
contains the patches discussed in the issue report above).

> Debugging the issue I realized that during the deallocation of one of
> the Python objects the deallocation function was trying to release a
> pointer that was surprisingly assigned to NULL. The pointer was at
> the same time held by another Python object that was an attribute of
> the Python object that had the deallocation function, something like
> this:
>
> ...
>
> So my question would be, could CPython deallocate the objects during
> the finalization step without considering the dependencies between
> objects?

No, it could not.  BUT.  But but but.  If C code isn't playing by all
the rules, it's possible that CPython doesn't _know_ what the
dependencies actually are.  CPython can only know about pointers that
the user's C API calls tell CPython about.  In the report above, the C
extension didn't tell CPython about some pointers in cycles at all.

Curiously, for most kinds of garbage collectors, failing to inform the
collector about pointers is massively & widely & very reliably
catastrophic.  If it doesn't see a pointer, it can conclude that a
live object is actually trash.  But CPython's cyclic collector follows
pointers to prove objects _are_ trash, not to prove that they're
alive.  So when CPython isn't told about a pointer, it usually "fails
soft", believing a trash object is actually alive instead.  That can
cause a memory leak, but usually not a crash.

But not always.  In the presence of weakrefs too, believing a trash
object is actually alive can cause some kinds of finalization to be
done "too late", which can cause a crash when refcounting discovers
that the object actually is trash.  No, you're not going to get a
simple example of that ;-)  Dig through the issue report above.
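For contrast, here is a minimal pure-Python sketch of the collector doing its job when it *is* told about every pointer (in pure Python the tp_traverse bookkeeping is automatic, so this always works):

```python
import gc
import weakref

class Node:
    pass

a, b = Node(), Node()
a.peer, b.peer = b, a    # a reference cycle: refcounts never drop to zero
probe = weakref.ref(a)
del a, b                 # plain refcounting alone cannot reclaim the pair
gc.collect()             # the cyclic collector follows the pointers and
                         # proves both objects are trash
assert probe() is None   # the weakref was cleared during collection
```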

> If this is not the right list for this kind of question, just let
> me know what would be the best place to ask it

Open an issue report instead?  Discussions won't solve it, alas.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WTH2LH3S52CN435B6I273BJ3QJW5GTES/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Should we pass tstate in the VECTORCALL calling convention before making it public?

2020-01-09 Thread Victor Stinner
On Thu, 9 Jan 2020 at 19:33, Steve Dower wrote:
> Requiring an _active_ Python thread (~GIL held) to only run on a single
> OS thread is fine, but tying it permanently to a single OS thread makes
> things very painful. (Of course, this isn't the only thing that makes it
> painful, but if we're going to make this a deliberate design of CPython
> then we are essentially excluding entire interesting classes of embedding.)

Do you have a use case where you reuse the same Python thread state
in different native threads?

Currently, the Python runtime expects to have a different Python
thread state per native thread. For example, PyGILState_Ensure()
creates one if there is no Python thread state associated with the
current thread.
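A rough pure-Python view of that expectation (this only observes native thread identifiers via threading.get_ident(); the PyThreadState mapping itself lives in C):

```python
import threading

idents = set()
ready = threading.Barrier(3)   # keep all three threads alive at once,
                               # so their identifiers cannot be recycled

def record():
    ready.wait()
    # each Python thread runs on its own native thread, so each call
    # sees a distinct native thread identifier
    idents.add(threading.get_ident())

threads = [threading.Thread(target=record) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(idents) == 3
```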

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Object deallocation during the finalization of Python program

2020-01-09 Thread Pau Freixes
Hi,

Recently I've been facing a really weird bug where a Python program
was randomly segfaulting during the finalization, the program was
using some C extensions via Cython.

Debugging the issue I realized that during the deallocation of one of
the Python objects the deallocation function was trying to release a
pointer that was surprisingly assigned to NULL. The pointer was at
the same time held by another Python object that was an attribute of
the Python object that had the deallocation function, something like
this:

cdef class Foo:
    cdef my_type *value        # raw C pointer owned by Foo

cdef class Bar:
    cdef Foo _foo

    def __cinit__(self):
        self._foo = Foo()
        self._foo.value = initialize()

    def __dealloc__(self):
        destroy(self._foo.value)

It seems that, randomly, the instance of Foo held by the Bar object was
deallocated by the CPython interpreter before Bar's deallocation, so
after Foo was deallocated - and the memory space of the instance zeroed -
the call to `destroy(self._foo.value)` was in fact passed a NULL
address as parameter, raising a segfault.

It was a surprise to me: if I'm not missing something, the
deallocation of the Foo instance happened even though there was still
an active reference held by the Bar object.

As a kind of double-check, I changed the program to make an explicit
`gc.collect()` call before the last line of the Python program. As a
result, I couldn't reproduce the segfault, which theoretically would
mean that objects were then deallocated "in order".
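One Python-level pattern that sidesteps this kind of ordering question is to register the cleanup with the raw handle captured up front, instead of reaching through an attribute at deallocation time; a sketch (the names Resource/destroy are illustrative stand-ins, not from the original program):

```python
import weakref

destroyed = []

class Resource:
    def __init__(self):
        self.handle = object()   # stand-in for the C pointer held by Foo

def destroy(handle):
    # stand-in for the C-level destroy(); 'handle' stays valid because
    # finalize() captured it directly, not through the Resource instance
    destroyed.append(handle)

res = Resource()
finalizer = weakref.finalize(res, destroy, res.handle)
del res                       # refcount hits zero; the finalizer runs now
assert not finalizer.alive
assert len(destroyed) == 1
```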

So my question would be, could CPython deallocate the objects during
the finalization step without considering the dependencies between
objects?

If this is not the right list for this kind of question, just let
me know what would be the best place to ask it.

Thanks in advance,
-- 
--pau


[Python-Dev] Re: Requesting PR review on locale module

2020-01-09 Thread Cédric Krier via Python-Dev
Hi,

Any chance to get this PR reviewed? It is blocking the resolution of
https://bugs.tryton.org/issue8574

Thanks

On 2019-11-18 06:36, Stéphane Wirtel wrote:
> Hi Cedric,
> 
> It’s my fault, I am going to check your PR for this week. I am really sorry 
> because I am busy with the preparation of my black belt in Karate and I have 
> a training every day. 
> 
> Have a nice day,
> 
> Stéphane 
> 
> > On 17 Nov 2019 at 14:06, Tal Einat wrote:
> > 
> > 
> > Hi Cédric,
> > 
> > Thanks for writing and sorry that you've experienced such a delay. Such 
> > occurrences are unfortunately rather common right now, though we're working 
> > on improving the situation.
> > 
> > Stéphane Wirtel assigned that PR to himself a couple of months ago,
> > but indeed hasn't followed up after his requested changes were
> > apparently addressed, despite a couple of "pings".
> > 
> > The experts index[1] lists Marc-Andre Lemburg as the expert for the locale 
> > module, so perhaps he'd be interested in taking a look.
> > 
> > I've CC-ed both Stéphane and Marc-Andre.
> > 
> > [1] https://devguide.python.org/experts/
> > 
> >> On Sun, Nov 17, 2019 at 2:06 PM Cédric Krier via Python-Dev 
> >>  wrote:
> >> Hi,
> >> 
> >> A few months ago, I submitted a PR [1] for the bpo issue [2] about locale.format
> >> casting Decimal into float. Does someone have some time to finish the review?
> >> This change is blocking a bugfix on Tryton [3].
> >> 
> >> 
> >> [1] https://github.com/python/cpython/pull/15275
> >> [2] https://bugs.python.org/issue34311
> >> [3] https://bugs.tryton.org/issue8574
> >> 
> >> 
> >> Thanks,
> >> -- 
> >> Cédric Krier - B2CK SPRL
> >> Email/Jabber: cedric.kr...@b2ck.com
> >> Tel: +32 472 54 46 59
> >> Website: http://www.b2ck.com/

-- 
Cédric Krier - B2CK SPRL
Email/Jabber: cedric.kr...@b2ck.com
Tel: +32 472 54 46 59
Website: https://www.b2ck.com/


[Python-Dev] Re: Should we pass tstate in the VECTORCALL calling convention before making it public?

2020-01-09 Thread Steve Dower

On 09Jan2020 0429, Mark Shannon wrote:
> There is a one-to-one correspondence between Python threads and O/S
> threads. So the threadstate can, and should, be stored in a thread local
> variable.

This is a major issue for embedders (and potentially asyncio), as it
prevents integration with existing thread models or event loops.

Requiring an _active_ Python thread (~GIL held) to only run on a single
OS thread is fine, but tying it permanently to a single OS thread makes
things very painful. (Of course, this isn't the only thing that makes it
painful, but if we're going to make this a deliberate design of CPython
then we are essentially excluding entire interesting classes of embedding.)

> Accessing thread local storage is fast. x86/64 uses the fs register to
> point to it, whereas ARM dedicates R15 (I think).

The register used is OS-specific. We do (and should) use the provided
TLS APIs, but these add overhead.

> Thread locals are not "global". Each sub-interpreter will have its own
> pool of threads. Each threadstate object should contain a pointer to its
> sub-interpreter.

I agree with this, but it's an argument for passing PyThreadState
explicitly, which seems to go against your previous point (unless you're
insisting that subinterpreters *must* be tied to specific and distinct
"physical" threads, in which case let's argue about that because I think
you're wrong :) ).


Cheers,
Steve


[Python-Dev] Re: Why doesn't venv also install python3*-config?

2020-01-09 Thread Xavier de Gaye




On 1/9/20 3:18 PM, Victor Stinner wrote:
> Is the "ARM64" python-config (shell script) executed on the x86-64 host?

A cross-compilation means that there is probably no build framework on the
target platform, and therefore the build configuration of the cross-compiled
Python is not very useful there.

> Which build commands rely on python-config? Is it to cross-compile 3rd
> party C extensions on the host?

It is useful when cross-compiling an application that embeds Python.
And I have a hunch that this may also be useful to fix the cross-compilation of
third-party extension modules :)

Xavier


[Python-Dev] Re: Why doesn't venv also install python3*-config?

2020-01-09 Thread Victor Stinner
On Thu, 9 Jan 2020 at 12:11, Xavier de Gaye wrote:
> The shell script python-config has been introduced by bpo issue 16235 named 
> "Add python-config.sh for use during cross compilation" in order "to behave 
> exactly the same as python-config.py except it doesn't depend on a Python 
> interpreter" as stated in the OP. So it does not make sense to say that one 
> may pick the wrong python-config script. Its purpose is to provide 
> cross-compilation configuration data independently of any Python interpreter 
> running natively on the build machine used to do the cross-compilation.

I'm thinking aloud to try to understand how it's supposed to work.

Let's say that I cross-compile Python on x86-64 targeting ARM64. The
build creates a ARM64 "python" program which cannot be executed on the
x86-64 host. A python-config script which contains the target
configuration is generated.

Is the "ARM64" python-config (shell script) executed on the x86-64 host?

Which build commands rely on python-config? Is it to cross-compile 3rd
party C extensions on the host?

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Should we pass tstate in the VECTORCALL calling convention before making it public?

2020-01-09 Thread Victor Stinner
On Thu, 9 Jan 2020 at 13:35, Mark Shannon wrote:
> Passing the thread state explicitly creates a new class of errors that
> was not there before.
> What happens if the tstate argument is NULL,

You will likely get a crash.

> or points to a different thread?

No idea what will happen. If it's a different subinterpreter, bad
things can happen I guess :-)

Well, I don't see this class of bug as a blocker issue. If you pass
invalid data, you get a crash. I'm not surprised by that.

In my previous email, I proposed to continue passing tstate implicitly
when you call a function using the public C API. Only Python internals
would pass tstate explicitly, and so only internals should be carefully
reviewed.


> There is a one-to-one correspondence between Python threads and O/S
> threads.

I'm not convinced of that :-) So far, it's unclear to me how the
transition from one interpreter to another happens with Python thread
states.

Let's say that the main thread spawns a thread A. The main thread and
the thread A are running in the main interpreter. Then thread A calls
_testcapi.run_in_subinterp() or _xxsubinterpreter.run_string() (or
something else to run code in a subinterpreter). A subinterpreter is
created and a new and different Python thread state is "attached" to
thread A. The thread A gets 2 Python thread states, one per
interpreter. It gets one or the other depending on which interpreter
the thread is currently running in...

We have to be careful to pass the proper thread state to internal C
functions while doing this dance between two interpreters in the same
thread. PyThreadState_Swap(tstate) must be called to set the current
Python thread state.

Full implementation of _testcapi.run_in_subinterp(), notice the
PyThreadState_Swap() dance:

/* To run some code in a sub-interpreter. */
static PyObject *
run_in_subinterp(PyObject *self, PyObject *args)
{
    const char *code;
    int r;
    PyThreadState *substate, *mainstate;

    if (!PyArg_ParseTuple(args, "s:run_in_subinterp",
                          &code))
        return NULL;

    mainstate = PyThreadState_Get();

    PyThreadState_Swap(NULL);

    substate = Py_NewInterpreter();
    if (substate == NULL) {
        /* Since no new thread state was created, there is no exception to
           propagate; raise a fresh one after swapping in the old thread
           state. */
        PyThreadState_Swap(mainstate);
        PyErr_SetString(PyExc_RuntimeError, "sub-interpreter creation failed");
        return NULL;
    }
    r = PyRun_SimpleString(code);
    Py_EndInterpreter(substate);

    PyThreadState_Swap(mainstate);

    return PyLong_FromLong(r);
}



> So the threadstate can, and should, be stored in a thread local
> variable.
> Accessing thread local storage is fast. x86/64 uses the fs register to
> point to it, whereas ARM dedicates R15 (I think).

*If* we managed to design subinterpreters in a way that a native
thread is always associated to the same Python interpreter, moving to
a thread local variable can make Python more efficient :-) It is
likely to be more efficient than the current atomic variable design.


> > I started to move more and more things from "globals" to "per
> > interpreter". For example, the garbage collector is now "per
> > interpreter" (lives in PyThreadState). Small integer singletons are
> > now also "per singleton": int("1") are now different objects in each
> > interpreter, whereas they were shared previously. Later, even "None"
> > singleton (and all other singletons) should be made "per interpreter".
> > Getting a "per interpreter" object requires to state from the Python
> > thread state: call _PyThreadState_GET(). Avoiding _PyThreadState_GET()
> > calls reduces any risk of making Python slower with incoming
> > subinterpreter changes.
>
> Thread locals are not "global". Each sub-interpreter will have its own
> pool of threads. Each threadstate object should contain a pointer to its
> sub-interpreter.

Getting the interpreter from a Python thread state is trivial: interp
= tstate->interp.

The problem is how to get the current Python thread state.
*Currently*, it's an atomic variable. But tomorrow, with multiple
interpreters running in parallel, I expect it to become more
expensive: something like first finding the interpreter running in the
current native thread, and then getting the Python thread state of this
interpreter. Or something like that. We may get more and more
indirections...

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: Differences in what lint (e.g., mypy, flake8, etc) require from new code and what Cpython uses/returns - e.g., strings with single/double quotes.

2020-01-09 Thread Michael
On 09/01/2020 13:16, Steven D'Aprano wrote:
> Hi Michael, and welcome!
>
>
> On Thu, Jan 09, 2020 at 11:37:33AM +0100, Michael wrote:
>
>> I am not questioning the demands of the lint checker - rather - I am
>> offering my services (aka time) to work through core to make CPython
>> pass its own (I assume) PEP recommendations or guidelines.
> That would create a lot of code churn for no good reason. The CPython 
> project doesn't encourage making style changes just for the sake of 
> changing the style.

Code churn is not my goal. Passing pypa/packaging CI (as Paul commented)
is only required by them - afaik. And a tool such as `black` can fix
these things automagically. But it got me thinking, as it is not the
first time I have been forced to make a style change because there is a
new tweak that can be turned on - yet ignores everything else. Further, this
is something I would expect to be extremely boring to someone wanting to
make functional improvements, rather than just "sweeping the floor".

So thank you for your clear responses! Discussion closed.

Best wishes for 2020!

>
> The most important part of PEP 8 is this:
>
> https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds
>
> Going through the entire stdlib creating bug reports for style issues is 
> not a productive use of anyone's time. There are large numbers of open 
> bug reports and documentation issues that are far more important.
>
> Regarding strings, PEP 8 doesn't recommend either single quotes or 
> double quotes for the std lib, except that doc strings should use triple 
> double-quotes. Each project or even each module is free to choose its 
> own rules.
>
> https://www.python.org/dev/peps/pep-0008/#string-quotes
>
>
>





[Python-Dev] Re: Should we pass tstate in the VECTORCALL calling convention before making it public?

2020-01-09 Thread Mark Shannon



On 08/01/2020 3:46 pm, Victor Stinner wrote:
> Hi,
>
> I started to modify Python internals to pass the Python thread state
> ("tstate") explicitly to internal C functions:
> https://vstinner.github.io/cpython-pass-tstate.html

Passing the thread state explicitly creates a new class of errors that
was not there before.
What happens if the tstate argument is NULL, or points to a different
thread?

> Until subinterpreters are fully implemented, it's unclear if
> getting the current Python thread state using the _PyThreadState_GET()
> macro (which uses an atomic read) will remain efficient. For example,
> right now, the "GIL state" API doesn't support subinterpreters: fixing
> this may require adding a lock somewhere, which may make
> _PyThreadState_GET() less efficient. Sorry, I don't have numbers,
> since I didn't experiment with implementing these changes yet: I was
> blocked by other issues. We can only guess at this point.

There is a one-to-one correspondence between Python threads and O/S
threads. So the threadstate can, and should, be stored in a thread local
variable.
Accessing thread local storage is fast. x86/64 uses the fs register to
point to it, whereas ARM dedicates R15 (I think).


Example code for x64 using C99:
https://godbolt.org/z/AZp9TS

C11 provides the performance of the static case with the advantages of
external linkage:


https://godbolt.org/z/Xth7vb
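The same idea is visible at the Python level with threading.local(), which gives each thread its own namespace (a Python-level analogue only; the C-level thread state is not implemented this way today):

```python
import threading

state = threading.local()    # each thread gets a private namespace

def worker(value, results):
    state.x = value          # visible only to the setting thread
    results[value] = state.x

results = {}
threads = [threading.Thread(target=worker, args=(i, results)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert results == {1: 1, 2: 2}
assert not hasattr(state, "x")   # the main thread never set its own 'x'
```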



> To me, it sounds like a good practice to pass the current Python
> thread state to internal C functions. It seems like most core
> developers agreed with that in my previous python-dev thread "Pass the
> Python thread state to internal C functions":

I don't :)

> https://mail.python.org/archives/list/python-dev@python.org/thread/PQBGECVGVYFTVDLBYURLCXA3T7IPEHHO/#Q4IPXMQIM5YRLZLHADUGSUT4ZLXQ6MYY
>
> The question is now if we should "propagate" tstate to function calls
> in the latest VECTORCALL calling convention (which is currently
> private). Petr Viktorin plans to make VECTORCALL APIs public in Python
> 3.9, as planned in PEP 590:
> https://bugs.python.org/issue39245

I say no. Passing the thread state was an option that I rejected when
designing PEP 590.

> I explicitly added Stefan Behnel in copy, since Cython should be
> directly impacted by such a change. Cython is the kind of project which
> may benefit from having tstate accessible directly.
>
> I started to move more and more things from "globals" to "per
> interpreter". For example, the garbage collector is now "per
> interpreter" (lives in PyThreadState). Small integer singletons are
> now also "per interpreter": int("1") are now different objects in each
> interpreter, whereas they were shared previously. Later, even the "None"
> singleton (and all other singletons) should be made "per interpreter".
> Getting a "per interpreter" object requires starting from the Python
> thread state: call _PyThreadState_GET(). Avoiding _PyThreadState_GET()
> calls reduces any risk of making Python slower with incoming
> subinterpreter changes.

Thread locals are not "global". Each sub-interpreter will have its own
pool of threads. Each threadstate object should contain a pointer to its
sub-interpreter.

> For the long term, the goal is that each subinterpreter has its own
> isolated world: no Python object should be shared, no state should be
> shared. The intent is to avoid any need of locking, to maximize
> performance when running interpreters in parallel. Obviously, each
> interpreter will have its own private GIL ;-) Py_INCREF() and
> Py_DECREF() would require slow atomic operations if Python objects are
> shared. If objects are not shared between interpreters, Py_INCREF()
> and Py_DECREF() can remain as fast as they are today. Any additional
> locking may kill performance.

I agree entirely, but this is IMO unrelated to passing the thread state
as an explicit extra argument.

> Victor




[Python-Dev] Re: Differences in what lint (e.g., mypy, flake8, etc) require from new code and what Cpython uses/returns - e.g., strings with single/double quotes.

2020-01-09 Thread Steven D'Aprano
Hi Michael, and welcome!


On Thu, Jan 09, 2020 at 11:37:33AM +0100, Michael wrote:

> I am not questioning the demands of the lint checker - rather - I am
> offering my services (aka time) to work through core to make CPython
> pass its own (I assume) PEP recommendations or guidelines.

That would create a lot of code churn for no good reason. The CPython 
project doesn't encourage making style changes just for the sake of 
changing the style.

The most important part of PEP 8 is this:

https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds

Going through the entire stdlib creating bug reports for style issues is 
not a productive use of anyone's time. There are large numbers of open 
bug reports and documentation issues that are far more important.

Regarding strings, PEP 8 doesn't recommend either single quotes or 
double quotes for the std lib, except that doc strings should use triple 
double-quotes. Each project or even each module is free to choose its 
own rules.

https://www.python.org/dev/peps/pep-0008/#string-quotes



-- 
Steven


[Python-Dev] Re: Differences in what lint (e.g., mypy, flake8, etc) require from new code and what Cpython uses/returns - e.g., strings with single/double quotes.

2020-01-09 Thread Paul Moore
On Thu, 9 Jan 2020 at 10:42, Michael  wrote:
> Last year I was struggling to get some code to pass CI in pypa/packaging.
> There were other issues, but one that surprised me most was learning to ALWAYS
> use double quotes (") to get the code to pass the lint check (type checking).
> Anything using single quotes (') as string delimiters was not accepted as a
> STRING type.

pypa/packaging is a different project from CPython, with different
standards and contribution guidelines. The insistence on double quoted
strings sounds like the project uses black for formatting, and indeed
the project documentation at
https://packaging.pypa.io/en/latest/development/submitting-patches/#code
confirms that.

You shouldn't need to manually correct all of the code you write in
that case - simply running black on any file you change should fix the
style to match the project standards (again, please note this is
*only* for the pypa/packaging project - you should check the
contribution guidelines for any other project you contribute to,
because their standards may be different).

Paul


[Python-Dev] Re: Why doesn't venv also install python3*-config?

2020-01-09 Thread Xavier de Gaye

On 1/8/20 5:53 PM, Victor Stinner wrote:
> You may get the wrong information if you pick the wrong python-config script :-(
>
> IMHO we should add a new module (problem: how should it be called? pyconfig?)

The shell script python-config has been introduced by bpo issue 16235, named "Add
python-config.sh for use during cross compilation", in order "to behave exactly the same
as python-config.py except it doesn't depend on a Python interpreter", as stated in the OP. So
it does not make sense to say that one may pick the wrong python-config script. Its purpose is to
provide cross-compilation configuration data independently of any Python interpreter running
natively on the build machine used to do the cross-compilation.

FWIW the shell script python-config is generated from Misc/python-config.sh by
configure and updated by the Makefile.

Xavier
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/UNIJPEYGZHDW2UGBK4JNI5TEDY3MDDPF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Differences in what lint (e.g., mypy, flake8, etc) require from new code and what Cpython uses/returns - e.g., strings with single/double quotes.

2020-01-09 Thread Michael
Hi all,

Last year I was struggling to get some code to pass CI in
pypa/packaging. There were other issues, but one that surprised me most
was learning to ALWAYS use double quotes (") to get the code to pass the
lint check (type checking). Anything using single quotes (') as string
delimiters was not accepted as a STRING type.

I am not questioning the demands of the lint checker - rather - I am
offering my services (aka time) to work through core to make CPython
pass its own (I assume) PEP recommendations or guidelines.

Again, this is not a bug-report - but an offer to assist with what I see
as an inconsistency.

If my assistance in this area is appreciated/desired I would appreciate
one or more core developers to assist me with managing the PR process.
Not for speed, but I would not want to burden just one core developer.

My approach would entail opening a number of related 'issues' for
different pieces: e.g., Lib, Modules, Python, Documentation where some
Lib|Modules|Python pieces might need to be individual 'issues' due to
size and/or complexity.

I do not see this as happening 'overnight'.

so-called simple things to fix:

In Documentation:

The return value is the result of the evaluated expression. Syntax
errors are reported as exceptions. Example:

>>> x = 1
>>> eval('x+1')
2

Change the text above to state eval("x+1") - assuming the lint process
would no longer accept eval('x+1') as proper typed syntax. And, then,
hoping this is still regarded as 'simple', make sure code such as
ctypes.util.find_library() is consistent, returning strings terminated
by double quotes rather than the single quotes as of now:

>>> import ctypes.util
>>> ctypes.util.find_library("c")
'libc.a(shr.o)'
>>> ctypes.util.find_library("ssl")
'libssl.a(libssl.so)'

Something more "complex" may be the list of names dir() returns:

>>> import sysconfig
>>> dir(sysconfig)
['_BASE_EXEC_PREFIX', '_BASE_PREFIX', '_CONFIG_VARS', '_EXEC_PREFIX',
'_INSTALL_SCHEMES', '_PREFIX', '_PROJECT_BASE', '_PYTHON_BUILD',
'_PY_VERSION', '_PY_VERSION_SHORT', '_PY_VERSION_SHORT_NO_DOT',
'_SCHEME_KEYS', '_USER_BASE', '__all__', '__builtins__', '__cached__',
'__doc__', '__file__', '__loader__', '__name__', '__package__',
'__spec__', '_expand_vars', '_extend_dict', '_generate_posix_vars',
'_get_default_scheme', '_get_sysconfigdata_name', '_getuserbase',
'_init_non_posix', '_init_posix', '_is_python_source_dir', '_main',
'_parse_makefile', '_print_dict', '_safe_realpath', '_subst_vars',
'_sys_home', 'get_config_h_filename', 'get_config_var',
'get_config_vars', 'get_makefile_filename', 'get_path',
'get_path_names', 'get_paths', 'get_platform', 'get_python_version',
'get_scheme_names', 'is_python_build', 'os', 'pardir',
'parse_config_h', 'realpath', 'scheme', 'sys']

Regards, Michael
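For what it's worth, the quote style is purely lexical; the interpreter produces identical objects either way, so such a sweep would change no runtime behavior:

```python
# Single- and double-quoted literals compile to the same string object.
assert 'x+1' == "x+1"

x = 1
assert eval('x+1') == eval("x+1") == 2
```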





[Python-Dev] Re: Why doesn't venv also install python3*-config?

2020-01-09 Thread Petr Viktorin




On 2020-01-08 17:53, Victor Stinner wrote:
> Hi,
>
> I dislike python-config for multiple reasons:
>
> You may get the wrong information if you pick the wrong python-config script :-(
>
> IMHO we should add a new module (problem: how should it be called?
> pyconfig?) which could be used using "python3 -m xxx ...". There is a
> similar discussion between "pip ..." and "python3 -m pip ..."
> commands: I understand that "python3 -m pip ..." is more reliable to
> know which Python will be picked, especially in a virtual environment.

Indeed. It's a bad idea to have separate executables that use/affect
"the Python" or are needed to work with "the Python", but don't
precisely specify "the Python". And `python -m ...` is the most sane way
out of the mess.

> There are two implementations: one in Python used on macOS, one in
> shell used on other platforms.
>
> Moreover, if it's implemented in the stdlib, it can be implemented in
> Python and it should be easier to maintain than a shell script. For
> example, python-config has no unit tests...

Could sysconfig be extended? Currently it doesn't respond to CLI flags;
it should be possible to add config-like ones, e.g.:

    python -m sysconfig --libs
    python -m sysconfig --help

Would that help?
I must admit I don't really understand python-config. Does anyone know
its purpose? History? Documentation? One thing I suspect is that it was
meant as a helper for pkg-config, not to be used alone.
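sysconfig already exposes the relevant data programmatically; a hypothetical `--libs`-style query (the flag does not exist today; `python -m sysconfig` currently just dumps everything) would amount to roughly:

```python
import sysconfig

# What a "--libs"-like flag might print (the flag name is hypothetical).
print(sysconfig.get_config_var("LIBS"))       # linker flags for embedding
                                              # (may be None on some platforms)
print(sysconfig.get_config_var("INCLUDEPY"))  # directory containing Python.h
```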
