Re: [Python-Dev] PEP 553; the interaction between $PYTHONBREAKPOINT and -E

2017-10-04 Thread Barry Warsaw
On Oct 4, 2017, at 21:52, Nick Coghlan  wrote:
> 
>> Unfortunately we probably won’t really get a good answer in practice until 
>> Python 3.7 is released, so maybe I just choose one and document that the 
>> behavior of PYTHONBREAKPOINT under -E is provisional for now.  If that’s 
>> acceptable, then I would just treat -E for PYTHONBREAKPOINT the same as all 
>> other environment variables, and we’ll see how it goes.
> 
> I'd be fine with this as the main reason I wanted PYTHONBREAKPOINT=0
> was for pre-merge CI systems, and those tend to have tightly
> controlled environment settings, so you don't need to rely on -E or -I
> when running your tests.
> 
> That said, it may also be worth considering a "-X nobreakpoints"
> option (and then -I could imply "-E -s -X nobreakpoints").

Thanks for the feedback Nick.  For now we’ll go with the standard behavior of 
-E and see how it goes.  We can always add a -X later.

Cheers,
-Barry



signature.asc
Description: Message signed with OpenPGP
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 553

2017-10-04 Thread Guido van Rossum
Yarko, there's one thing I don't understand. Maybe you can enlighten me.
Why would you prefer

breakpoint(x >= 1000)

over

if x >= 1000: breakpoint()

?

The latter seems unambiguous and requires less thinking all around. Is there
something in IPython that makes this impractical?

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 553; the interaction between $PYTHONBREAKPOINT and -E

2017-10-04 Thread Nick Coghlan
On 5 October 2017 at 10:28, Barry Warsaw  wrote:
>> """Special cases aren't special enough to break the rules."""
>>
>> People expect -E to disable envvar-driven overrides, so just treat it
>> like that and don't try to second-guess the user.
>
> And of course “Although practicality beats purity.” :)
>
> So while I agree that the consistency argument makes sense, does it make the 
> most practical sense?
>
> I’m not sure.  On the PR, Nick suggests even another option: treat -E as all 
> other environment variables, but then -I would be PYTHONBREAKPOINT=0.  Since 
> the documentation for -I says “(implies -E and -s)” that seems even more 
> special-case-y to me.

-I is inherently a special-case, since it's effectively our "system
Python mode", while we don't actually have a separate system Python
binary.

> Unfortunately we probably won’t really get a good answer in practice until 
> Python 3.7 is released, so maybe I just choose one and document that the 
> behavior of PYTHONBREAKPOINT under -E is provisional for now.  If that’s 
> acceptable, then I would just treat -E for PYTHONBREAKPOINT the same as all 
> other environment variables, and we’ll see how it goes.

I'd be fine with this as the main reason I wanted PYTHONBREAKPOINT=0
was for pre-merge CI systems, and those tend to have tightly
controlled environment settings, so you don't need to rely on -E or -I
when running your tests.

That said, it may also be worth considering a "-X nobreakpoints"
option (and then -I could imply "-E -s -X nobreakpoints").

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] PEP 554 v3 (new interpreters module)

2017-10-04 Thread Nick Coghlan
On 4 October 2017 at 23:51, Eric Snow  wrote:
> On Tue, Oct 3, 2017 at 11:36 PM, Nick Coghlan  wrote:
>> The problem relates to the fact that there aren't any memory barriers
>> around CPython's INCREF operations (they're implemented as an ordinary
>> C post-increment operation), so you can get the following scenario:
>>
>> * thread on CPU A has the sole reference (ob_refcnt=1)
>> * thread on CPU B acquires a new reference, but hasn't pushed the
>> updated ob_refcnt value back to the shared memory cache yet
>> * original thread on CPU A drops its reference, *thinks* the refcnt is
>> now zero, and deletes the object
>> * bad things now happen in CPU B as the thread running there tries to
>> use a deleted object :)
>
> I'm not clear on where we'd run into this problem with channels.
> Mirroring your scenario:
>
> * interpreter A (in thread on CPU A) INCREFs the object (the GIL is still 
> held)
> * interp A sends the object to the channel
> * interp B (in thread on CPU B) receives the object from the channel
> * the new reference is held until interp B DECREFs the object
>
> From what I see, at no point do we get a refcount of 0, such that
> there would be a race on the object being deleted.

Having the sending interpreter do the INCREF just changes the problem
to be a memory leak waiting to happen rather than an access-after-free
issue, since the problematic non-synchronised scenario then becomes:

* thread on CPU A has two references (ob_refcnt=2)
* it sends a reference to a thread on CPU B via a channel
* thread on CPU A releases its reference (ob_refcnt=1)
* updated ob_refcnt value hasn't made it back to the shared memory cache yet
* thread on CPU B releases its reference (ob_refcnt=1)
* both threads have released their reference, but the refcnt is still
1 -> object leaks!

We simply can't have INCREFs and DECREFs happening in different
threads without some way of ensuring cache coherency for *both*
operations - otherwise we risk either the refcount going to zero when
it shouldn't, or *not* going to zero when it should.

The current CPython implementation relies on the process global GIL
for that purpose, so none of these problems will show up until you
start trying to replace that with per-interpreter locks.

Free threaded reference counting relies on (expensive) atomic
increments & decrements.
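
The lost-update failure mode Nick describes can be sketched deterministically in plain Python by modelling each CPU's cache as a local copy of ob_refcnt (this is only an illustration of the race, not CPython's actual memory model):

```python
# Deterministic simulation of the non-synchronised scenario above:
# each "CPU" works on a locally cached copy of ob_refcnt and only
# writes it back to shared memory later.

shared = {"ob_refcnt": 2}          # thread A starts with two references

cache_a = shared["ob_refcnt"]      # CPU A loads 2 into its cache
cache_a -= 1                       # A drops its reference locally -> 1

cache_b = shared["ob_refcnt"]      # CPU B still sees the stale value 2
cache_b -= 1                       # B drops its reference locally -> 1

shared["ob_refcnt"] = cache_a      # caches flush back; last writer wins
shared["ob_refcnt"] = cache_b

# Both references were released, yet the count never reached zero:
assert shared["ob_refcnt"] == 1    # -> the object leaks
```

Swap the order of the two decrements for an access-after-free instead of a leak; either way, only cache-coherent (atomic or GIL-protected) updates avoid both outcomes.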

The cross-interpreter view proposal aims to allow per-interpreter GILs
without introducing atomic increments & decrements by instead relying
on the view itself to ensure that it's holding the right GIL for the
object whose refcount it's manipulating, and the receiving interpreter
explicitly closing the view when it's done with it.

So while CIVs wouldn't be as easy to use as regular object references:

1. They'd be no harder to use than memoryviews in general
2. They'd structurally ensure that regular object refcounts can still
rely on "protected by the GIL" semantics
3. They'd structurally ensure zero performance degradation for regular
object refcounts
4. By virtue of being memoryview based, they'd encourage the adoption
of interfaces and practices that can be adapted to multiple processes
through the use of techniques like shared memory regions and memory
mapped files (see
http://www.boost.org/doc/libs/1_54_0/doc/html/interprocess/sharedmemorybetweenprocesses.html
for some detailed explanations of how that works, and
https://arrow.apache.org/ for an example of ways tools like Pandas can
use that to enable zero-copy data sharing)

> The only problem I'm aware of (it dawned on me last night), is in the
> case that the interpreter that created the object gets deleted before
> the object does.  In that case we can't pass the deletion back to the
> original interpreter.  (I don't think this problem is necessarily
> exclusive to the solution I've proposed for Bytes.)

The cross-interpreter-view idea proposes to deal with that by having
the CIV hold a strong reference not only to the sending object (which
is already part of the regular memoryview semantics), but *also* to
the sending interpreter - that way, neither the sending object nor the
sending interpreter can go away until the receiving interpreter closes
the view.

The refcount-integrity-ensuring sequence of events becomes:

1. Sending interpreter submits the object to the channel
2. Channel creates a CIV with references to the sending interpreter &
sending object, and a view on the sending object's memory
3. Receiving interpreter gets the CIV from the channel
4. Receiving interpreter closes the CIV either explicitly or via
__del__ (the latter would emit ResourceWarning)
5. CIV switches execution back to the sending interpreter and releases
both the memory buffer and the reference to the sending object
6. CIV switches execution back to the receiving interpreter, and
releases its reference to the sending interpreter
7. Execution continues in the receiving interpreter
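
As a rough illustration of that lifecycle (the class name and attributes are invented for this sketch, and the interpreter-switching steps 5-6 are reduced to comments since real CIVs don't exist yet):

```python
import warnings

class CrossInterpreterView:
    """Hypothetical sketch of the CIV lifecycle described above."""

    def __init__(self, sending_interp, obj):
        self._interp = sending_interp    # strong ref to the sending interpreter
        self._obj = obj                  # strong ref to the sending object
        self._view = memoryview(obj)     # view on the sender's memory (step 2)

    def __enter__(self):
        return self._view

    def __exit__(self, *exc_info):
        self.close()

    def close(self):
        # Steps 5-6: (would switch to the sending interpreter here,)
        # release the buffer, then drop both strong references.
        if self._view is not None:
            self._view.release()
            self._view = None
            self._obj = None
            self._interp = None

    def __del__(self):
        if self._view is not None:       # step 4: implicit close is noisy
            warnings.warn("unclosed cross-interpreter view", ResourceWarning)
            self.close()

# Steps 1-4 from the receiving side: get the CIV, use it, close it.
civ = CrossInterpreterView("interpreter-A", b"payload")
with civ as view:
    received = bytes(view)
assert received == b"payload"
```

The key property mirrors point 2 above: neither `_obj` nor `_interp` can be released until `close()` runs, so all real refcount changes happen while the right interpreter is active.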

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

Re: [Python-Dev] PEP 553

2017-10-04 Thread Yarko Tymciurak
On Wed, Oct 4, 2017 at 7:50 PM, Barry Warsaw  wrote:

> On Oct 4, 2017, at 20:22, Yarko Tymciurak  wrote:
>
> > I've recently started using a simple conditional breakpoint in ipython,
> and wonder if  - in addition to Nick Coghlan's request for the env
> 'PYTHONBREAKPOINT'  (boolean?), it would make sense (I _think_ so) to add a
> condition parameter to the breakpoint() call.  This does raise several
> questions, but it seems that it could make for a simple unified way to
> conditionally call an arbitrary debugger.  What I found useful (in the
> context of ipython - but general enough) you can see in this gist:
> https://gist.github.com/yarko/bdaa9d3178a6db03e160fdbabb3a9885
> >
> > If PEP 553's breakpoint() were to follow this sort of interface (with
> "condition"), it raises a couple of questions:
> > - how would a missing (default) parameter be done?
> > - how would parameters to be passed to the debugger "of record" be
> passed in (named tuple? - sort of ugly)
> > - would PYTHONBREAKPOINT be a global switch (I think yes), vs a
> `condition` default.
> >
> > I have no dog in the fight, but to raise the possibility (?) of having
> PEP 553 implement simple conditional breakpoint processing.
>
> Thanks for bringing this up Yarko.  I think this could be done with the
> current specification for PEP 553 and an additional API from the various
> debuggers.  I don’t think it needs to be part of PEP 553 explicitly, given
> the additional complications you describe above.
>
> Remember that both built-in breakpoint() and sys.breakpointhook() accept
> *args, **kws, and it is left up to the actual debugger API to
> interpret/accept those additional arguments.  So let’s say you wanted to
> implement this behavior with pdb.  I think you could do something as simple
> as:
>
> def conditional_set_trace(*, condition=True):
>     if condition:
>         pdb.set_trace()
>
> sys.breakpointhook = conditional_set_trace
>
> Then in your code, you would just write:
>
> def foo(value):
>     breakpoint(condition=(value < 0))
>
> With the IPython gist you referenced, you wouldn’t even need that
> convenience function.  Just set 
> sys.breakpointhook=conditional_breakpoint.breakpoint_
> and voilà!
>
> You could also PYTHONBREAKPOINT=conditional_breakpoint.breakpoint_
> python3.7 … and it should Just Work.
>

Thanks, Barry - yes, I see: you're correct.  Thanks for the PEP!
- Yarko


>
> Cheers,
> -Barry
>
>


Re: [Python-Dev] PEP 553

2017-10-04 Thread Glenn Linderman

On 10/4/2017 5:22 PM, Yarko Tymciurak wrote:

Barry suggested I bring this up here.

It seems the right time to at least discuss this:

RE:  PEP 553 enabling / disabling breakpoints ---

I've recently started using a simple conditional breakpoint in 
ipython, and wonder if  - in addition to Nick Coghlan's request for 
the env 'PYTHONBREAKPOINT'  (boolean?), it would make sense (I _think_ 
so) to add a condition parameter to the breakpoint() call.  This does 
raise several questions, but it seems that it could make for a simple 
unified way to conditionally call an arbitrary debugger.  What I found 
useful (in the context of ipython - but general enough) you can see 
in this gist: 
https://gist.github.com/yarko/bdaa9d3178a6db03e160fdbabb3a9885


If PEP 553's breakpoint() were to follow this sort of interface (with 
"condition"), it raises a couple of questions:

- how would a missing (default) parameter be done?
- how would parameters to be passed to the debugger "of record" be 
passed in (named tuple? - sort of ugly)
- would PYTHONBREAKPOINT be a global switch (I think yes), vs a 
`condition` default.


I have no dog in the fight, but to raise the possibility (?) of having 
PEP 553 implement simple conditional breakpoint processing.


Any / all comments much appreciated.

breakpoint() already accepts arguments. Therefore no change to the PEP 
is needed to implement your suggestion. What you are suggesting is 
simply a convention among debuggers to handle a parameter named 
"condition" in a particular manner.


It seems to me that

if condition:
    breakpoint()

would be faster and clearer, but there is nothing to prevent a debugger 
from implementing your suggestion if it seems useful to the developers 
of the debugger. If it is useful enough to enough people, the users will 
clamor for other debuggers to implement it also.


Re: [Python-Dev] PEP 553

2017-10-04 Thread Barry Warsaw
On Oct 4, 2017, at 20:22, Yarko Tymciurak  wrote:

> I've recently started using a simple conditional breakpoint in ipython, and 
> wonder if  - in addition to Nick Coghlan's request for the env 
> 'PYTHONBREAKPOINT'  (boolean?), it would make sense (I _think_ so) to add a 
> condition parameter to the breakpoint() call.  This does raise several 
> questions, but it seems that it could make for a simple unified way to 
> conditionally call an arbitrary debugger.  What I found useful (in the 
> context of ipython - but general enough) you can see in this gist: 
> https://gist.github.com/yarko/bdaa9d3178a6db03e160fdbabb3a9885
> 
> If PEP 553's breakpoint() were to follow this sort of interface (with 
> "condition"), it raises a couple of questions:
> - how would a missing (default) parameter be done?
> - how would parameters to be passed to the debugger "of record" be passed in 
> (named tuple? - sort of ugly)
> - would PYTHONBREAKPOINT be a global switch (I think yes), vs a `condition` 
> default.
> 
> I have no dog in the fight, but to raise the possibility (?) of having PEP 
> 553 implement simple conditional breakpoint processing.

Thanks for bringing this up Yarko.  I think this could be done with the current 
specification for PEP 553 and an additional API from the various debuggers.  I 
don’t think it needs to be part of PEP 553 explicitly, given the additional 
complications you describe above.

Remember that both built-in breakpoint() and sys.breakpointhook() accept *args, 
**kws, and it is left up to the actual debugger API to interpret/accept those 
additional arguments.  So let’s say you wanted to implement this behavior with 
pdb.  I think you could do something as simple as:

def conditional_set_trace(*, condition=True):
    if condition:
        pdb.set_trace()

sys.breakpointhook = conditional_set_trace

Then in your code, you would just write:

def foo(value):
    breakpoint(condition=(value < 0))

With the IPython gist you referenced, you wouldn’t even need that convenience 
function.  Just set sys.breakpointhook=conditional_breakpoint.breakpoint_ and 
voilà!

You could also PYTHONBREAKPOINT=conditional_breakpoint.breakpoint_ python3.7 … 
and it should Just Work.
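
Putting Barry's pieces together, the whole flow can be exercised without entering a real debugger by substituting a recording stand-in for pdb.set_trace() (requires Python 3.7's breakpoint(); the hook and foo() below are illustrative, not part of any PEP 553 API):

```python
import sys

fired = []

def conditional_set_trace(*args, condition=True, **kwargs):
    # Stand-in for pdb.set_trace(): records that the debugger
    # would have been entered, instead of actually entering it.
    if condition:
        fired.append((args, kwargs))

sys.breakpointhook = conditional_set_trace

def foo(value):
    breakpoint(condition=(value < 0))

foo(10)     # condition is False: the hook swallows the call
foo(-1)     # condition is True: the "debugger" is entered once
assert len(fired) == 1

sys.breakpointhook = sys.__breakpointhook__   # restore the default hook
```

Note that because sys.breakpointhook is replaced outright here, the PYTHONBREAKPOINT environment variable is not consulted; it only governs the default hook.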

Cheers,
-Barry





Re: [Python-Dev] PEP 553; the interaction between $PYTHONBREAKPOINT and -E

2017-10-04 Thread Barry Warsaw
> """Special cases aren't special enough to break the rules."""
> 
> People expect -E to disable envvar-driven overrides, so just treat it
> like that and don't try to second-guess the user.

And of course “Although practicality beats purity.” :)

So while I agree that the consistency argument makes sense, does it make the 
most practical sense?

I’m not sure.  On the PR, Nick suggests even another option: treat -E as all 
other environment variables, but then -I would be PYTHONBREAKPOINT=0.  Since 
the documentation for -I says “(implies -E and -s)” that seems even more 
special-case-y to me.

“In the face of ambiguity, refuse the temptation to guess.”

I’m really not sure what the right answer is, including to *not* make 
PYTHONBREAKPOINT obey -E.

Unfortunately we probably won’t really get a good answer in practice until 
Python 3.7 is released, so maybe I just choose one and document that the 
behavior of PYTHONBREAKPOINT under -E is provisional for now.  If that’s 
acceptable, then I would just treat -E for PYTHONBREAKPOINT the same as all 
other environment variables, and we’ll see how it goes.

Cheers,
-Barry





Re: [Python-Dev] PEP 553

2017-10-04 Thread Yarko Tymciurak
Barry suggested I bring this up here.

It seems the right time to at least discuss this:

RE:  PEP 553 enabling / disabling breakpoints ---

I've recently started using a simple conditional breakpoint in ipython, and
wonder if  - in addition to Nick Coghlan's request for the env
'PYTHONBREAKPOINT'  (boolean?), it would make sense (I _think_ so) to add a
condition parameter to the breakpoint() call.  This does raise several
questions, but it seems that it could make for a simple unified way to
conditionally call an arbitrary debugger.  What I found useful (in the
context of ipython - but general enough) you can see in this gist:
https://gist.github.com/yarko/bdaa9d3178a6db03e160fdbabb3a9885

If PEP 553's breakpoint() were to follow this sort of interface (with
"condition"), it raises a couple of questions:
- how would a missing (default) parameter be done?
- how would parameters to be passed to the debugger "of record" be passed
in (named tuple? - sort of ugly)
- would PYTHONBREAKPOINT be a global switch (I think yes), vs a `condition`
default.

I have no dog in the fight, but to raise the possibility (?) of having PEP
553 implement simple conditional breakpoint processing.

Any / all comments much appreciated.

Regards,
Yarko


On Mon, Oct 2, 2017 at 7:06 PM, Barry Warsaw  wrote:

> On Oct 2, 2017, at 18:43, Guido van Rossum  wrote:
> >
> > OK. That then concludes the review of your PEP. It is now accepted!
> Congrats. I am looking forward to using the backport. :-)
>
> Yay, thanks!  We’ll see if I can sneak that backport past Ned. :)
>
> -Barry
>
>


Re: [Python-Dev] PEP 544

2017-10-04 Thread Steven D'Aprano
On Wed, Oct 04, 2017 at 03:56:14PM -0700, VERY ANONYMOUS wrote:
> i want to learn

Start by learning to communicate in full sentences. You want to learn 
what? Core development? Python? How to program? English?

This is not a mailing list for Python beginners.  Try the "tutor" or 
"python-list" mailing lists.


-- 
Steve


[Python-Dev] PEP 544

2017-10-04 Thread VERY ANONYMOUS
i want to learn


Re: [Python-Dev] PEP 553; the interaction between $PYTHONBREAKPOINT and -E

2017-10-04 Thread Guido van Rossum
Well that also makes sense.

On Wed, Oct 4, 2017 at 1:52 PM, Antoine Pitrou  wrote:

> On Wed, 4 Oct 2017 14:06:48 -0400
> Barry Warsaw  wrote:
> > Victor brings up a good question in his review of the PEP 553
> implementation.
> >
> > https://github.com/python/cpython/pull/3355
> > https://bugs.python.org/issue31353
> >
> > The question is whether $PYTHONBREAKPOINT should be ignored if -E is
> given?
> >
> > I think it makes sense for $PYTHONBREAKPOINT to be sensitive to -E, but
> in thinking about it some more, it might make better sense for the
> semantics to be that when -E is given, we treat it like PYTHONBREAKPOINT=0,
> i.e. disable the breakpoint, rather than fall back to the `pdb.set_trace`
> default.
>
> """Special cases aren't special enough to break the rules."""
>
> People expect -E to disable envvar-driven overrides, so just treat it
> like that and don't try to second-guess the user.
>
> Regards
>
> Antoine.
>
>



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 553; the interaction between $PYTHONBREAKPOINT and -E

2017-10-04 Thread Antoine Pitrou
On Wed, 4 Oct 2017 14:06:48 -0400
Barry Warsaw  wrote:
> Victor brings up a good question in his review of the PEP 553 implementation.
> 
> https://github.com/python/cpython/pull/3355
> https://bugs.python.org/issue31353
> 
> The question is whether $PYTHONBREAKPOINT should be ignored if -E is given?
> 
> I think it makes sense for $PYTHONBREAKPOINT to be sensitive to -E, but in 
> thinking about it some more, it might make better sense for the semantics to 
> be that when -E is given, we treat it like PYTHONBREAKPOINT=0, i.e. disable 
> the breakpoint, rather than fall back to the `pdb.set_trace` default.

"""Special cases aren't special enough to break the rules."""

People expect -E to disable envvar-driven overrides, so just treat it
like that and don't try to second-guess the user.

Regards

Antoine.




Re: [Python-Dev] PEP 553; the interaction between $PYTHONBREAKPOINT and -E

2017-10-04 Thread Guido van Rossum
Treating -E as PYTHONBREAKPOINT=0 makes sense.

On Wed, Oct 4, 2017 at 11:06 AM, Barry Warsaw  wrote:

> Victor brings up a good question in his review of the PEP 553
> implementation.
>
> https://github.com/python/cpython/pull/3355
> https://bugs.python.org/issue31353
>
> The question is whether $PYTHONBREAKPOINT should be ignored if -E is given?
>
> I think it makes sense for $PYTHONBREAKPOINT to be sensitive to -E, but in
> thinking about it some more, it might make better sense for the semantics
> to be that when -E is given, we treat it like PYTHONBREAKPOINT=0, i.e.
> disable the breakpoint, rather than fall back to the `pdb.set_trace` default.
>
> My thinking is this: -E is often used in production environments to
> prevent stray environment settings from affecting the Python process.  In
> those environments, you probably also want to prevent stray breakpoints
> from stopping the process, so it’s more helpful to disable breakpoint
> processing when -E is given rather than running pdb.set_trace().
>
> If you have a strong opinion either way, please follow up here, on the PR,
> or on the bug tracker.
>
> Cheers,
> -Barry
>
>


-- 
--Guido van Rossum (python.org/~guido)


[Python-Dev] PEP 553; the interaction between $PYTHONBREAKPOINT and -E

2017-10-04 Thread Barry Warsaw
Victor brings up a good question in his review of the PEP 553 implementation.

https://github.com/python/cpython/pull/3355
https://bugs.python.org/issue31353

The question is whether $PYTHONBREAKPOINT should be ignored if -E is given?

I think it makes sense for $PYTHONBREAKPOINT to be sensitive to -E, but in 
thinking about it some more, it might make better sense for the semantics to be 
that when -E is given, we treat it like PYTHONBREAKPOINT=0, i.e. disable the 
breakpoint, rather than fall back to the `pdb.set_trace` default.

My thinking is this: -E is often used in production environments to prevent 
stray environment settings from affecting the Python process.  In those 
environments, you probably also want to prevent stray breakpoints from stopping 
the process, so it’s more helpful to disable breakpoint processing when -E is 
given rather than running pdb.set_trace().

If you have a strong opinion either way, please follow up here, on the PR, or 
on the bug tracker.

Cheers,
-Barry





Re: [Python-Dev] Intention to accept PEP 552 soon (deterministic pyc files)

2017-10-04 Thread Benjamin Peterson


On Wed, Oct 4, 2017, at 07:14, Barry Warsaw wrote:
> On Oct 3, 2017, at 13:29, Benjamin Peterson  wrote:
> 
> > I'm not sure turning the implementation details of our internal formats
> > into APIs is the way to go.
> 
> I still think an API in the stdlib would be useful and appropriate, but
> it’s not like this couldn’t be done as a 3rd party module.

It might be helpful to enumerate the usecases for such an API. Perhaps a
narrow, specialized API could satisfy most needs in a supportable way.


Re: [Python-Dev] PEP 554 v3 (new interpreters module)

2017-10-04 Thread Antoine Pitrou
On Wed, 4 Oct 2017 17:50:33 +0200
Antoine Pitrou  wrote:
> On Mon, 2 Oct 2017 21:31:30 -0400
> Eric Snow  wrote:
> >   
> > > By contrast, if we allow an actual bytes object to be shared, then
> > > either every INCREF or DECREF on that bytes object becomes a
> > > synchronisation point, or else we end up needing some kind of
> > > secondary per-interpreter refcount where the interpreter doesn't drop
> > > its shared reference to the original object in its source interpreter
> > > until the internal refcount in the borrowing interpreter drops to
> > > zero.
> > 
> > There shouldn't be a need to synchronize on INCREF.  If both
> > interpreters have at least 1 reference then either one adding a
> > reference shouldn't be a problem.  
> 
> I'm not sure what Nick meant by "synchronization point", but at least
> you certainly need INCREF and DECREF to be atomic, which is a departure
> from today's Py_INCREF / Py_DECREF behaviour (and is significantly
> slower, even on high-level benchmarks).

To be clear, I'm writing this under the hypothesis of per-interpreter
GILs.  I'm not really interested in the per-process GIL case :-)

Regards

Antoine.




Re: [Python-Dev] Reorganize Python categories (Core, Library, ...)?

2017-10-04 Thread Antoine Pitrou
On Wed, 4 Oct 2017 09:39:21 -0400
Barry Warsaw  wrote:
> On Oct 4, 2017, at 05:52, Victor Stinner  wrote:
> 
> > My problem is that almost all changes go into "Library" category. When
> > I read long changelogs, it's sometimes hard to identify quickly the
> > context (ex: impacted modules) of a change.
> > 
> > It's also hard to find open bugs of a specific module on
> > bugs.python.org, since almost all bugs are in the very generic
> > "Library" category. Using full text returns "false positives".
> > 
> > It's hard to find categories generic enough to not only contain a
> > single item, but not contain too many items neither. Other ideas:
> > 
> > * XML: xml.doc, xml.etree, xml.parsers, xml.sax modules
> > * Import machinery: imp and importlib modules
> > * Typing: abc and typing modules  
> 
> I often run into the same problem.  If we’re going to split up the Library 
> section, then I think it makes sense to follow the top-level organization of 
> the library manual:
> 
> https://docs.python.org/3/library/index.html

I think I'd rather type the module name than have to look up the
proper category in the documentation.

IOW, the module name -> category mapping alluded to by Victor would
need to exist somewhere in programmatic (or machine-readable) form.

But then we might as well store the actual module name in the NEWS
files and do the mapping when generating the presentation :-)

Regards

Antoine.




Re: [Python-Dev] Reorganize Python categories (Core, Library, ...)?

2017-10-04 Thread Antoine Pitrou
On Wed, 4 Oct 2017 15:22:48 +0200
Victor Stinner  wrote:
> 2017-10-04 14:36 GMT+02:00 Antoine Pitrou :
> > If  there's a crash in socket.sendmsg() that affects mainly
> > multiprocessing, should it be in "Networking", "Security" or
> > "Parallelism"?  
> 
> bugs.python.org allows you to select zero or *multiple* categories :-)

I'm getting confused.  Are you talking about NEWS file categories or
bugs.python.org categories?

> >  If there's a bug where SSLSocket.recvinto() doesn't
> > accept some writable buffers, is it "Networking" or "Security"? etc.  
> 
> Usually, when we reach the final fix, it becomes much easier to pick
> the correct category.

There is no definite "correct category" when you're mixing different
classification schemes (what kind of bug it is --
bug/security/enhancement/etc. --, what functional domain it pertains
to -- networking/concurrency/etc. --, which stdlib API it affects).
That's the problem I was pointing to.

Regards

Antoine.


Re: [Python-Dev] Intention to accept PEP 552 soon (deterministic pyc files)

2017-10-04 Thread Antoine Pitrou
On Wed, 4 Oct 2017 10:14:22 -0400
Barry Warsaw  wrote:
> On Oct 3, 2017, at 13:29, Benjamin Peterson  wrote:
> 
> > I'm not sure turning the implementation details of our internal formats
> > into APIs is the way to go.  
> 
> I still think an API in the stdlib would be useful and appropriate, but it’s 
> not like this couldn’t be done as a 3rd party module.

It can also be an implementation-specific API for which we don't
guarantee anything in the future.  The consenting adults rule would
apply.

Regards

Antoine.




Re: [Python-Dev] PEP 554 v3 (new interpreters module)

2017-10-04 Thread Koos Zevenhoven
On Wed, Oct 4, 2017 at 4:51 PM, Eric Snow 
wrote:

> On Tue, Oct 3, 2017 at 11:36 PM, Nick Coghlan  wrote:
> > The problem relates to the fact that there aren't any memory barriers
> > around CPython's INCREF operations (they're implemented as an ordinary
> > C post-increment operation), so you can get the following scenario:
> >
> > * thread on CPU A has the sole reference (ob_refcnt=1)
> > * thread on CPU B acquires a new reference, but hasn't pushed the
> > updated ob_refcnt value back to the shared memory cache yet
> > * original thread on CPU A drops its reference, *thinks* the refcnt is
> > now zero, and deletes the object
> > * bad things now happen in CPU B as the thread running there tries to
> > use a deleted object :)
>
> I'm not clear on where we'd run into this problem with channels.
> Mirroring your scenario:
>
> * interpreter A (in thread on CPU A) INCREFs the object (the GIL is still
> held)
> * interp A sends the object to the channel
> * interp B (in thread on CPU B) receives the object from the channel
> * the new reference is held until interp B DECREFs the object
>
> From what I see, at no point do we get a refcount of 0, such that
> there would be a race on the object being deleted.
>
So what you're saying is that when Larry finishes the gilectomy,
subinterpreters will work GIL-free too? ;-)

––Koos

The only problem I'm aware of (it dawned on me last night), is in the
> case that the interpreter that created the object gets deleted before
> the object does.  In that case we can't pass the deletion back to the
> original interpreter.  (I don't think this problem is necessarily
> exclusive to the solution I've proposed for Bytes.)
>
> -eric



-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


Re: [Python-Dev] PEP 554 v3 (new interpreters module)

2017-10-04 Thread Antoine Pitrou
On Mon, 2 Oct 2017 21:31:30 -0400
Eric Snow  wrote:
> 
> > By contrast, if we allow an actual bytes object to be shared, then
> > either every INCREF or DECREF on that bytes object becomes a
> > synchronisation point, or else we end up needing some kind of
> > secondary per-interpreter refcount where the interpreter doesn't drop
> > its shared reference to the original object in its source interpreter
> > until the internal refcount in the borrowing interpreter drops to
> > zero.  
> 
> There shouldn't be a need to synchronize on INCREF.  If both
> interpreters have at least 1 reference then either one adding a
> reference shouldn't be a problem.

I'm not sure what Nick meant by "synchronization point", but at least
you certainly need INCREF and DECREF to be atomic, which is a departure
from today's Py_INCREF / Py_DECREF behaviour (and is significantly
slower, even on high-level benchmarks).
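Concretely, the hazard with a non-atomic INCREF is the classic lost update. Interleaving the load and store steps by hand in plain Python shows the ordering involved (a sketch of the interleaving only, not CPython's actual C code):

```python
# Simulate two CPUs incrementing a shared refcount without atomicity.
# Each increment is really load + add + store; if both loads happen
# before either store, one increment is lost.
refcnt = 1

a_loaded = refcnt      # CPU A loads ob_refcnt (sees 1)
b_loaded = refcnt      # CPU B loads before A stores (also sees 1)
refcnt = a_loaded + 1  # CPU A stores 2
refcnt = b_loaded + 1  # CPU B stores 2 -- A's increment is lost

print(refcnt)  # 2, even though two INCREFs happened; it should be 3
```

An atomic increment (or a lock around the read-modify-write) is exactly what rules this interleaving out, and what makes it slower than today's plain post-increment.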

Regards

Antoine.




Re: [Python-Dev] Intention to accept PEP 552 soon (deterministic pyc files)

2017-10-04 Thread Guido van Rossum
On Oct 3, 2017 9:55 AM, "Serhiy Storchaka"  wrote:

While PEP 552 is accepted, I would want to see some changes.

1. Increase the size of the constant part of the signature to at least 32
bits. Currently only the third and fourth bytes are constant, and they are
'\r\n', which often occurs in text files. The first two bytes can be
different in every Python version. This makes it hard for utilities like
file(1) to detect pyc files.

2. Split the "version" of pyc files into "major" and "minor" parts. Every
major version is incompatible with other major versions; the interpreter
accepts only one particular major version, and it can't be changed in a
bugfix release. But all minor versions inside the same major version are
forward and backward compatible. The interpreter should be able to execute
a pyc file with an arbitrary minor version, but it can use the minor
version of a pyc file to work around errors in older versions. The minor
version can be changed in a bugfix release. I hope this can help us with
issues like https://bugs.python.org/issue29537. Currently 3.5 supports two
magic numbers.

If we change the pyc format, it would be easy to make the above changes.
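Serhiy's first point is easy to check against the current header layout: of the four magic bytes at the start of a pyc file, only the trailing b'\r\n' is stable across versions. A stdlib-only sketch:

```python
import importlib.util
import os
import py_compile
import tempfile

# Compile a trivial module and inspect the first four bytes of the pyc.
with tempfile.TemporaryDirectory() as tmp:
    source = os.path.join(tmp, "m.py")
    with open(source, "w") as f:
        f.write("x = 1\n")
    pyc_path = py_compile.compile(source, cfile=os.path.join(tmp, "m.pyc"))
    with open(pyc_path, "rb") as f:
        magic = f.read(4)

# The whole word matches this interpreter's magic number...
assert magic == importlib.util.MAGIC_NUMBER
# ...but only the last two bytes are constant across Python versions.
assert magic[2:4] == b"\r\n"
```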


IIUC the PEP doesn't commit to any particular magic word format, so this
can be negotiated separately, on the tracker (unless there's a PEP
specifying the internal structure of the magic word?).

--Guido


Re: [Python-Dev] Intention to accept PEP 552 soon (deterministic pyc files)

2017-10-04 Thread Barry Warsaw
On Oct 3, 2017, at 13:29, Benjamin Peterson  wrote:

> I'm not sure turning the implementation details of our internal formats
> into APIs is the way to go.

I still think an API in the stdlib would be useful and appropriate, but it’s 
not like this couldn’t be done as a 3rd party module.

-Barry






Re: [Python-Dev] Inheritance vs composition in backcompat (PEP521)

2017-10-04 Thread Koos Zevenhoven
On Wed, Oct 4, 2017 at 4:04 PM, Nick Coghlan  wrote:

> On 4 October 2017 at 22:45, Koos Zevenhoven  wrote:
> > On Wed, Oct 4, 2017 at 3:33 PM, Nick Coghlan  wrote:
> >> That's not a backwards compatibility problem, because the only way to
> >> encounter it is to update your code to rely on the new extended
> >> protocol - your *existing* code will continue to work fine, since
> >> that, by definition, can't be relying on the new protocol extension.
> >>
> >
> > No, not all code is "your" code. Clearly this is not a well-known
> problem.
> > This is a backwards-compatibility problem for the author of the wrappeR,
> not
> > for the author of the wrappeD object.
>
> No, you're misusing the phrase "backwards compatibility", and
> confusing it with "feature enablement".
>
> Preserving backwards compatibility just means "existing code and
> functionality don't break". It has nothing to do with whether or not
> other support libraries and frameworks might need to change in order
> to enable full access to a new language feature.
>
>
It's not about full access to a new language feature. It's about the
wrappeR promising it can wrap any ​context manager, which it then no longer
can. If the __suspend__ and __resume__ methods are ignored, that is not
about "not having full access to a new feature" — that's broken code. The
error message you get (if any) may not contain any hint of what went wrong.
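For illustration, here is what a composition-based wrapper would need so that the new hooks aren't silently dropped. PEP 521 was never implemented, so __suspend__ and __resume__ are hypothetical names here; the sketch only shows the shape of the fix:

```python
class Wrapper:
    """Delegating context manager that also forwards the (hypothetical)
    PEP 521 __suspend__/__resume__ hooks when the wrapped CM has them."""

    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __enter__(self):
        print("Entering context")
        return self._wrapped.__enter__()

    def __exit__(self, exc_type, exc_value, traceback):
        result = self._wrapped.__exit__(exc_type, exc_value, traceback)
        print("Exited context")
        return result

    def __suspend__(self):
        # Delegate only if the wrapped CM actually implements the hook.
        suspend = getattr(self._wrapped, "__suspend__", None)
        if suspend is not None:
            return suspend()

    def __resume__(self):
        resume = getattr(self._wrapped, "__resume__", None)
        if resume is not None:
            return resume()
```

Note that plain attribute forwarding via __getattr__ would not help: the interpreter looks special methods up on the type, so each protocol extension needs another pair of hand-written delegating methods, which inheritance would have picked up for free.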

> Take the length hint protocol defined in PEP 424 for example: that
> extended the iterator protocol to include a new optional
> __length_hint__ method, such that container constructors can make a
> more reasonable guess as to how much space they should pre-allocate
> when being initialised from an iterator or iterable rather than
> another container.
>
>
This is slightly similar, but not really. Not using __length_hint__ does
not affect the correctness of code.


> That protocol means that many container wrappers break the
> optimisation. That's not a compatibility problem, it just means those
> wrappers don't support the feature, and it would potentially be a
> useful enhancement if they did.
>
>
Again, ignoring __length_hint__ does not lead to broken code, so that just
means the wrapper is as slow or as fast as it was before.

So I still think it's an issue for the author of the wrapper to fix––even
if just by documenting that the wrapper does not support the new protocol
members. But that would not be necessary if the wrapper uses inheritance.

(Of course there may be another reason to not use inheritance, but just
overriding two methods seems like a good case for inheritance.).
This discussion seems pretty pointless by now. It's true that *some* code
needs to change for this to be a problem. Updating only the Python version
does not break a codebase if libraries aren't updated, and even then,
breakage is not very likely, I suppose.

It all depends on the kind of change that is made. For __length_hint__, you
only risk not getting the performance improvement. For __suspend__ and
__resume__, there's a small chance of problems. For some other change, it
might be even riskier. But this is definitely not the most dangerous type
of compatibility issue.

––Koos


-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


Re: [Python-Dev] PEP 554 v3 (new interpreters module)

2017-10-04 Thread Eric Snow
On Tue, Oct 3, 2017 at 11:36 PM, Nick Coghlan  wrote:
> The problem relates to the fact that there aren't any memory barriers
> around CPython's INCREF operations (they're implemented as an ordinary
> C post-increment operation), so you can get the following scenario:
>
> * thread on CPU A has the sole reference (ob_refcnt=1)
> * thread on CPU B acquires a new reference, but hasn't pushed the
> updated ob_refcnt value back to the shared memory cache yet
> * original thread on CPU A drops its reference, *thinks* the refcnt is
> now zero, and deletes the object
> * bad things now happen in CPU B as the thread running there tries to
> use a deleted object :)

I'm not clear on where we'd run into this problem with channels.
Mirroring your scenario:

* interpreter A (in thread on CPU A) INCREFs the object (the GIL is still held)
* interp A sends the object to the channel
* interp B (in thread on CPU B) receives the object from the channel
* the new reference is held until interp B DECREFs the object

From what I see, at no point do we get a refcount of 0, such that
there would be a race on the object being deleted.

The only problem I'm aware of (it dawned on me last night), is in the
case that the interpreter that created the object gets deleted before
the object does.  In that case we can't pass the deletion back to the
original interpreter.  (I don't think this problem is necessarily
exclusive to the solution I've proposed for Bytes.)

-eric


Re: [Python-Dev] Reorganize Python categories (Core, Library, ...)?

2017-10-04 Thread Barry Warsaw
On Oct 4, 2017, at 05:52, Victor Stinner  wrote:

> My problem is that almost all changes go into "Library" category. When
> I read long changelogs, it's sometimes hard to identify quickly the
> context (ex: impacted modules) of a change.
> 
> It's also hard to find open bugs of a specific module on
> bugs.python.org, since almost all bugs are in the very generic
> "Library" category. Using full text returns "false positives".
> 
> It's hard to find categories that are generic enough to contain more than a
> single item, but not so broad that they contain too many. Other ideas:
> 
> * XML: xml.dom, xml.etree, xml.parsers, xml.sax modules
> * Import machinery: imp and importlib modules
> * Typing: abc and typing modules

I often run into the same problem.  If we’re going to split up the Library 
section, then I think it makes sense to follow the top-level organization of 
the library manual:

https://docs.python.org/3/library/index.html

That already provides a mapping from module to category, and for the most part 
it’s a taxonomy that makes sense and is time proven.

Cheers,
-Barry





Re: [Python-Dev] Reorganize Python categories (Core, Library, ...)?

2017-10-04 Thread Victor Stinner
2017-10-04 14:36 GMT+02:00 Antoine Pitrou :
> If  there's a crash in socket.sendmsg() that affects mainly
> multiprocessing, should it be in "Networking", "Security" or
> "Parallelism"?

bugs.python.org allows you to select zero or *multiple* categories :-)
It's common for the categories of a bug to evolve. For example, a buildbot
issue is first tagged as "Tests", but then moves into the correct
category once the problem is better understood.

>  If there's a bug where SSLSocket.recv_into() doesn't
> accept some writable buffers, is it "Networking" or "Security"? etc.

Usually, when we reach the final fix, it becomes much easier to pick
the correct category. Between Networking and Security, Security wins
since it's more important to list security fixes first in the
changelog.

> I agree with making the "Library" section finer-grained, but then
> shouldn't the subsection be simply the top-level module/package name?
> (e.g. "collections", "xml", "logging", "asyncio", "concurrent"...)

Yeah, that's another option. I don't know how to solve the problem, I
just listed the issues I have with the bug tracker and the changelog
:-)

> Also, perhaps the "blurb" tool can suggest a category depending on
> which stdlib files were modified, though there must be an easy way for
> the committer to override that choice.

This is why a mapping module name => category would help, yes.

> What is the problem with having a distinct category for each module?
> At worse, the logic which generates Docs from blurb files can merge
> some categories together if desired.  There's no problem with having a
> very fine-grained categorization *on disk*, since the presentation can
> be made different.  OTOH if the categorization is coarse-grained on
> disk (such is the case currently), the presentation layer can't
> recreate the information that was lost when committing.

Technically, blurb is currently limited to a single main category written
in the filename. Maybe blurb can evolve to store the modified module
(modules?) to infer the category? I don't know.

At least in the bug tracker, I would prefer to have the module name
*and* a distinct list of categories. As I wrote, while the analysis
makes progress, the module name can change, but also categories.
Yesterday, I analyzed a bug in test_cgitb. In fact, the bug was in
test_imp, something completely different :-) test_imp has side
effects, causing a bug in test_cgitb.

Victor


Re: [Python-Dev] Inheritance vs composition in backcompat (PEP521)

2017-10-04 Thread Nick Coghlan
On 4 October 2017 at 22:45, Koos Zevenhoven  wrote:
> On Wed, Oct 4, 2017 at 3:33 PM, Nick Coghlan  wrote:
>> That's not a backwards compatibility problem, because the only way to
>> encounter it is to update your code to rely on the new extended
>> protocol - your *existing* code will continue to work fine, since
>> that, by definition, can't be relying on the new protocol extension.
>>
>
> No, not all code is "your" code. Clearly this is not a well-known problem.
> This is a backwards-compatibility problem for the author of the wrappeR, not
> for the author of the wrappeD object.

No, you're misusing the phrase "backwards compatibility", and
confusing it with "feature enablement".

Preserving backwards compatibility just means "existing code and
functionality don't break". It has nothing to do with whether or not
other support libraries and frameworks might need to change in order
to enable full access to a new language feature.

Take the length hint protocol defined in PEP 424 for example: that
extended the iterator protocol to include a new optional
__length_hint__ method, such that container constructors can make a
more reasonable guess as to how much space they should pre-allocate
when being initialised from an iterator or iterable rather than
another container.

That protocol means that many container wrappers break the
optimisation. That's not a compatibility problem, it just means those
wrappers don't support the feature, and it would potentially be a
useful enhancement if they did.
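A wrapper can opt back in with a single extra method, and operator.length_hint() makes the forwarding safe even when the wrapped iterator provides no hint. A minimal sketch:

```python
import operator

class LoggingIterator:
    """Iterator wrapper that preserves the PEP 424 length hint."""

    def __init__(self, iterable):
        self._it = iter(iterable)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._it)

    def __length_hint__(self):
        # Forward the wrapped iterator's hint; 0 means "no information".
        return operator.length_hint(self._it, 0)

print(operator.length_hint(LoggingIterator(range(10))))  # 10
print(list(LoggingIterator("abc")))                      # ['a', 'b', 'c']
```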

Similarly, when context managers were added, folks needed to add
appropriate implementations of the protocol in order to be able to
actually make use of the feature. If a library didn't support it
natively, then they either needed to write their own context manager,
or else contribute an enhancement to that library.

This pattern applies whenever a new protocol is added or an existing
protocol is extended: whether or not you can actually rely on the new
feature will depend on whether or not all your dependencies also
support it.

The best case scenarios are those where we can enable a new feature in
a few key standard library APIs, and then most third party APIs will
transparently pick up the new behaviour (e.g. as we did for the fspath
protocol). However, even in situations like that, there may still be
other code that makes no longer correct assumptions, and blocks access
to the new feature (e.g. by including an explicit isinstance check).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Inheritance vs composition in backcompat (PEP521)

2017-10-04 Thread Koos Zevenhoven
On Wed, Oct 4, 2017 at 3:33 PM, Nick Coghlan  wrote:

> On 4 October 2017 at 20:22, Koos Zevenhoven  wrote:
> > On Wed, Oct 4, 2017 at 8:07 AM, Nick Coghlan  wrote:
> >>
> >> On 3 October 2017 at 03:13, Koos Zevenhoven  wrote:
> >> > Well, it's not completely unrelated to that. The problem I'm talking
> >> > about
> >> > is perhaps most easily seen from a simple context manager wrapper that
> >> > uses
> >> > composition instead of inheritance:
> >> >
> >> > class Wrapper:
> >> >     def __init__(self):
> >> >         self._wrapped = SomeContextManager()
> >> >
> >> >     def __enter__(self):
> >> >         print("Entering context")
> >> >         return self._wrapped.__enter__()
> >> >
> >> >     def __exit__(self, exc_type, exc_value, traceback):
> >> >         result = self._wrapped.__exit__(exc_type, exc_value, traceback)
> >> >         print("Exited context")
> >> >         return result
> >> >
> >> >
> >> > Now, if the wrapped contextmanager becomes a PEP 521 one with
> >> > __suspend__
> >> > and __resume__, the Wrapper class is broken, because it does not
> respect
> >> > __suspend__ and __resume__. So actually this is a backwards
> compatiblity
> >> > issue.
> >>
> >> This is a known problem, and one of the main reasons that having a
> >> truly transparent object proxy like
> >> https://wrapt.readthedocs.io/en/latest/wrappers.html#object-proxy as
> >> part of the standard library would be highly desirable.
> >>
> >
> > This is barely related to the problem I describe. The wrapper is not
> > supposed to pretend to *be* the underlying object. It's just supposed to
> > extend its functionality.
>
> If a wrapper *isn't* trying to act as a transparent object proxy, and
> is instead adapting it to a particular protocol, then yes, you'll need
> to update the wrapper when the protocol is extended.
>
>
Yes, but it still means that the change in the dependency (in this case a
standard Python protocol) breaks the wrapper code.

Remember that the wrappeR class and the wrappeD class can be implemented in
different libraries.



> That's not a backwards compatibility problem, because the only way to
> encounter it is to update your code to rely on the new extended
> protocol - your *existing* code will continue to work fine, since
> that, by definition, can't be relying on the new protocol extension.
>
>
No, not all code is "your" code. Clearly this is not a well-known problem.
This is a backwards-compatibility problem for the author of the wrappeR,
not for the author of the wrappeD object.

––Koos

-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


Re: [Python-Dev] Reorganize Python categories (Core, Library, ...)?

2017-10-04 Thread Antoine Pitrou
On Wed, 4 Oct 2017 11:52:32 +0200
Victor Stinner  wrote:
> 
> It's also hard to find open bugs of a specific module on
> bugs.python.org, since almost all bugs are in the very generic
> "Library" category. Using full text returns "false positives".
> 
> I would prefer to see more specific categories like:
> 
> * Buildbots: only issues specific to buildbots
> * Networking: socket, asyncio, asyncore, asynchat modules
> * Security: ssl module but also vulnerabilities in any other part of
> CPython -- we already added a Security category in NEWS/blurb
> * Parallelism: multiprocessing and concurrent.futures modules

This is mixing different taxonomies and will make things ambiguous.  If
there's a crash in socket.sendmsg() that affects mainly
multiprocessing, should it be in "Networking", "Security" or
"Parallelism"?  If there's a bug where SSLSocket.recv_into() doesn't
accept some writable buffers, is it "Networking" or "Security"? etc.

I agree with making the "Library" section finer-grained, but then
shouldn't the subsection be simply the top-level module/package name?
(e.g. "collections", "xml", "logging", "asyncio", "concurrent"...)

Also, perhaps the "blurb" tool can suggest a category depending on
which stdlib files were modified, though there must be an easy way for
the committer to override that choice.

> I don't think that we need a distinct category for each module. We can
> put many uncommon modules in a generic category.

What is the problem with having a distinct category for each module?
At worse, the logic which generates Docs from blurb files can merge
some categories together if desired.  There's no problem with having a
very fine-grained categorization *on disk*, since the presentation can
be made different.  OTOH if the categorization is coarse-grained on
disk (such is the case currently), the presentation layer can't
recreate the information that was lost when committing.

Regards

Antoine.




Re: [Python-Dev] Inheritance vs composition in backcompat (PEP521)

2017-10-04 Thread Nick Coghlan
On 4 October 2017 at 20:22, Koos Zevenhoven  wrote:
> On Wed, Oct 4, 2017 at 8:07 AM, Nick Coghlan  wrote:
>>
>> On 3 October 2017 at 03:13, Koos Zevenhoven  wrote:
>> > Well, it's not completely unrelated to that. The problem I'm talking
>> > about
>> > is perhaps most easily seen from a simple context manager wrapper that
>> > uses
>> > composition instead of inheritance:
>> >
>> > class Wrapper:
>> >     def __init__(self):
>> >         self._wrapped = SomeContextManager()
>> >
>> >     def __enter__(self):
>> >         print("Entering context")
>> >         return self._wrapped.__enter__()
>> >
>> >     def __exit__(self, exc_type, exc_value, traceback):
>> >         result = self._wrapped.__exit__(exc_type, exc_value, traceback)
>> >         print("Exited context")
>> >         return result
>> >
>> >
>> > Now, if the wrapped contextmanager becomes a PEP 521 one with
>> > __suspend__
>> > and __resume__, the Wrapper class is broken, because it does not respect
>> > __suspend__ and __resume__. So actually this is a backwards compatiblity
>> > issue.
>>
>> This is a known problem, and one of the main reasons that having a
>> truly transparent object proxy like
>> https://wrapt.readthedocs.io/en/latest/wrappers.html#object-proxy as
>> part of the standard library would be highly desirable.
>>
>
> This is barely related to the problem I describe. The wrapper is not
> supposed to pretend to *be* the underlying object. It's just supposed to
> extend its functionality.

If a wrapper *isn't* trying to act as a transparent object proxy, and
is instead adapting it to a particular protocol, then yes, you'll need
to update the wrapper when the protocol is extended.

That's not a backwards compatibility problem, because the only way to
encounter it is to update your code to rely on the new extended
protocol - your *existing* code will continue to work fine, since
that, by definition, can't be relying on the new protocol extension.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Reorganize Python categories (Core, Library, ...)?

2017-10-04 Thread Brian Curtin
On Wed, Oct 4, 2017 at 5:52 AM, Victor Stinner 
wrote:

> Hi,
>
> Python uses a few categories to group bugs (on bugs.python.org) and
> NEWS entries (in the Python changelog). List used by the blurb tool:
>
> #.. section: Security
> #.. section: Core and Builtins
> #.. section: Library
> #.. section: Documentation
> #.. section: Tests
> #.. section: Build
> #.. section: Windows
> #.. section: macOS
> #.. section: IDLE
> #.. section: Tools/Demos
> #.. section: C API
>
> My problem is that almost all changes go into "Library" category. When
> I read long changelogs, it's sometimes hard to identify quickly the
> context (ex: impacted modules) of a change.
>
> It's also hard to find open bugs of a specific module on
> bugs.python.org, since almost all bugs are in the very generic
> "Library" category. Using full text returns "false positives".
>
> I would prefer to see more specific categories like:
>
> * Buildbots: only issues specific to buildbots
>

I would expect anything listed under buildbot to be about infrastructure
changes related to the running of build machines.

I think what you're getting at are the bugs that appear on build machines
that weren't otherwise caught during the development of a recent change. In
the end those are still just bugs in code, so I'm not sure I would group
them at such a high level. Wouldn't this be a better use of the priority
field?


Re: [Python-Dev] Inheritance vs composition in backcompat (PEP521)

2017-10-04 Thread Koos Zevenhoven
On Wed, Oct 4, 2017 at 8:07 AM, Nick Coghlan  wrote:

> On 3 October 2017 at 03:13, Koos Zevenhoven  wrote:
> > Well, it's not completely unrelated to that. The problem I'm talking
> about
> > is perhaps most easily seen from a simple context manager wrapper that
> uses
> > composition instead of inheritance:
> >
> > class Wrapper:
> >     def __init__(self):
> >         self._wrapped = SomeContextManager()
> >
> >     def __enter__(self):
> >         print("Entering context")
> >         return self._wrapped.__enter__()
> >
> >     def __exit__(self, exc_type, exc_value, traceback):
> >         result = self._wrapped.__exit__(exc_type, exc_value, traceback)
> >         print("Exited context")
> >         return result
> >
> >
> > Now, if the wrapped contextmanager becomes a PEP 521 one with __suspend__
> > and __resume__, the Wrapper class is broken, because it does not respect
> > __suspend__ and __resume__. So actually this is a backwards compatiblity
> > issue.
>
> This is a known problem, and one of the main reasons that having a
> truly transparent object proxy like
> https://wrapt.readthedocs.io/en/latest/wrappers.html#object-proxy as
> part of the standard library would be highly desirable.
>
>
This is barely related to the problem I describe. The wrapper is not
supposed to pretend to *be* the underlying object. It's just supposed to
extend its functionality.

Maybe it's just me, but using a transparent object proxy for this sounds
like someone trying to avoid inheritance for no reason and at any cost.
Inheritance probably has faster method access, and makes it more obvious
what's going on:

def Wrapper(contextmanager):
    class Wrapper(type(contextmanager)):
        def __enter__(self):
            print("Entering context")
            return contextmanager.__enter__()

        def __exit__(self, exc_type, exc_value, traceback):
            result = contextmanager.__exit__(exc_type, exc_value, traceback)
            print("Exited context")
            return result
    return Wrapper()


A wrapper based on a transparent object proxy is just a non-transparent
replacement for inheritance. Its wrapper nature is non-transparent because
it pretends to `be` the original object, while it's actually a wrapper.

But an object cannot `be` another object as long as the `is` operator
doesn't return True. And any straightforward way to implement that would
add performance overhead for normal objects.
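A minimal forwarding proxy shows where the "half way" line sits: ordinary attribute access can be forwarded, but identity cannot, and neither can implicit special-method lookup. The Proxy class here is illustrative, not a real library:

```python
class Proxy:
    """Forward ordinary attribute access to a wrapped object."""

    def __init__(self, wrapped):
        object.__setattr__(self, "_wrapped", wrapped)

    def __getattr__(self, name):
        return getattr(object.__getattribute__(self, "_wrapped"), name)

target = [1, 2, 2, 3]
proxy = Proxy(target)

print(proxy.count(2))   # 2 -- method access is forwarded
print(proxy is target)  # False -- identity cannot be faked
# len(proxy) would still raise TypeError: special methods are looked up
# on the type, so a "transparent" proxy must define them all explicitly.
```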

I do remember sometimes wanting a transparent object proxy. But not for
normal wrappers. But I don't think I've gone as far as looking for a
library to do that, because it seems that you can only go half way anyway.

––Koos

-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


[Python-Dev] Reorganize Python categories (Core, Library, ...)?

2017-10-04 Thread Victor Stinner
Hi,

Python uses a few categories to group bugs (on bugs.python.org) and
NEWS entries (in the Python changelog). List used by the blurb tool:

#.. section: Security
#.. section: Core and Builtins
#.. section: Library
#.. section: Documentation
#.. section: Tests
#.. section: Build
#.. section: Windows
#.. section: macOS
#.. section: IDLE
#.. section: Tools/Demos
#.. section: C API

My problem is that almost all changes go into "Library" category. When
I read long changelogs, it's sometimes hard to identify quickly the
context (ex: impacted modules) of a change.

It's also hard to find open bugs of a specific module on
bugs.python.org, since almost all bugs are in the very generic
"Library" category. Using full text returns "false positives".

I would prefer to see more specific categories like:

* Buildbots: only issues specific to buildbots
* Networking: socket, asyncio, asyncore, asynchat modules
* Security: ssl module but also vulnerabilities in any other part of
CPython -- we already added a Security category in NEWS/blurb
* Parallelism: multiprocessing and concurrent.futures modules

It's hard to find categories that are generic enough to contain more than a
single item, but not so broad that they contain too many. Other ideas:

* XML: xml.dom, xml.etree, xml.parsers, xml.sax modules
* Import machinery: imp and importlib modules
* Typing: abc and typing modules

The best would be to have a mapping of a module name into a category,
and make sure that all modules have a category. We might try to count
the number of commits and NEWS entries of the last 12 months to decide
if a category has the correct size.

I don't think that we need a distinct category for each module. We can
put many uncommon modules in a generic category.

By the way, maybe we also need a new "module name" field in the bug
tracker. But then comes the question of normalizing module names. For
example, should "email.message" be normalized to "email"? Maybe store
"email.message" but use "email" for search, display the module in the
issue title, etc.
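As a sketch, such a mapping together with the normalization could look like this; the module names and categories below are illustrative only, not an agreed scheme:

```python
# Hypothetical module -> category mapping for NEWS/bug-tracker use.
CATEGORY_BY_MODULE = {
    "socket": "Networking",
    "asyncio": "Networking",
    "ssl": "Security",
    "multiprocessing": "Parallelism",
    "concurrent": "Parallelism",
}

def normalize(module_name):
    # "email.message" -> "email": keep only the top-level name.
    return module_name.partition(".")[0]

def category_for(module_name, default="Library"):
    return CATEGORY_BY_MODULE.get(normalize(module_name), default)

print(normalize("email.message"))          # email
print(category_for("concurrent.futures"))  # Parallelism
print(category_for("email.message"))       # Library
```

Storing the full dotted name and normalizing it only for search and display, as suggested above, keeps the more precise information on disk.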

Victor