Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Nick Coghlan
On 27 August 2017 at 03:23, Yury Selivanov  wrote:
> On Sat, Aug 26, 2017 at 1:23 PM, Ethan Furman  wrote:
>> On 08/26/2017 09:25 AM, Yury Selivanov wrote:
> The ContextVar.lookup() method *traverses the stack* until it finds the LC
> that has a value.  "get()" does not reflect this subtle semantic
> difference.
>>
>> A good point; however, ChainMap, which behaves similarly as far as lookup
>> goes, uses "get" and does not have a "lookup" method.  I think we lose more
>> than we gain by changing that method name.
>
> ChainMap is constrained to be a Mapping-like object, but I get your
> point.  Let's see what others say about the "lookup()".  It is kind of
> an experiment to try a name and see if it fits.

I don't think "we may want to add extra parameters" is a good reason
to omit a conventional `get()` method - I think it's a reason to offer
a separate API to handle use cases where the question of *where* the
var is set matters (for example, `my_var.is_set()` would indicate
whether or not `my_var.set()` has been called in the current logical
context without requiring a parameter check for normal lookups that
don't care).
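
For illustration, a toy sketch (not the PEP's implementation -- the LC
chain below is just a list of dicts invented for the example) of how a
conventional get() and a separate is_set() would divide the work:

    _lc_chain = [{}]   # stand-in for the PEP 550 logical-context chain

    class ContextVar:
        def __init__(self, default=None):
            self._default = default

        def set(self, value):
            _lc_chain[-1][self] = value        # write to the innermost LC

        def get(self):
            # Conventional lookup: traverse the chain, innermost first.
            for lc in reversed(_lc_chain):
                if self in lc:
                    return lc[self]
            return self._default

        def is_set(self):
            # Separate API for the "was it set?" question, so normal
            # lookups don't need an extra parameter.
            return any(self in lc for lc in _lc_chain)

    my_var = ContextVar(default=0)
    assert not my_var.is_set()
    my_var.set(42)
    assert my_var.get() == 42 and my_var.is_set()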

Cheers,
Nick.

P.S. And I say that as a reader who correctly guessed why you had
changed the method name in the current iteration of the proposal. I'm
sympathetic to those reasons, but I think sticking with the
conventional API will make this one easier to learn and use :)

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Pep 550 module

2017-08-29 Thread Nick Coghlan
On 28 August 2017 at 01:51, Jim J. Jewett  wrote:
> I think there is general consensus that this should go in a module other
> than sys. (At least a submodule.)
>
> The specific names are still To Be Determined, but I suspect seeing the
> functions and objects as part of a named module will affect what works.

Given the refocusing of the PEP on the context variable API, with the
other aspects introduced solely in service of making context variables
work as defined, my current suggestion would be to make it a hybrid
Python/C API using the "contextvars" + "_contextvars" naming
convention.

Then all that most end user applications defining context variables
would need is the single line:

from contextvars import new_context_var

_contextvars would contain the APIs that only the interpreter itself
can implement, while contextvars would provide a home for any pure
Python convenience APIs that could be shared across interpreter
implementations.
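
To sketch what that hybrid layout could look like (compare heapq/_heapq;
the convenience helper below is invented for the example, not part of
the proposal):

    # contextvars.py
    from _contextvars import new_context_var   # interpreter-provided core

    def new_context_var_with_default(default):
        # A pure-Python convenience API that any interpreter
        # implementation could share on top of the C-level core.
        var = new_context_var()
        var.set(default)
        return var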

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Nick Coghlan
On 29 August 2017 at 07:24, Yury Selivanov  wrote:
> This means that PEP 550 will have a caveat for async code: don't rely
> on context propagation up the call stack, unless you are writing
> __aenter__ and __aexit__ that are guaranteed to be called without
> being wrapped into a Task.

I'm not sure if it was Nathaniel or Stefan that raised it, but I liked
the analogy that compared wrapping a coroutine in a new Task to
submitting a synchronous function call to a concurrent.futures
executor: while the dispatch layer is able to pass down a snapshot of
the current execution context to the submitted function or wrapped
coroutine, the potential for concurrent execution of multiple
activities using that same context means the dispatcher *can't*
reliably pass back any changes that the child context makes.

For example, consider the following:

    def func():
        with make_executor() as executor:
            fut1 = executor.submit(other_func1)
            fut2 = executor.submit(other_func2)
            result1 = fut1.result()
            result2 = fut2.result()

    async def coro():
        fut1 = asyncio.ensure_future(other_coro1())
        fut2 = asyncio.ensure_future(other_coro2())
        result1 = await fut1
        result2 = await fut2

For both of these cases, it shouldn't matter which order we use to
wait for the results, or if we perform any other operations in
between, and the only way to be sure of that outcome is if the
dispatch operations (whether that's asyncio.ensure_future or
executor.submit) prevent reverse propagation of context changes from
the operation being dispatched.
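
As a rough sketch of those one-way semantics, with a plain dict standing
in for the execution context (this is the analogy, not the PEP's API):

    from concurrent.futures import ThreadPoolExecutor

    current_context = {"request_id": 1}

    def submit_with_context(executor, fn, *args):
        snapshot = dict(current_context)   # snapshot passed down at submit
        return executor.submit(fn, snapshot, *args)   # never merged back

    def other_func1(ctx):
        ctx["request_id"] = 99    # mutates only the child's copy
        return ctx["request_id"]

    with ThreadPoolExecutor() as executor:
        fut1 = submit_with_context(executor, other_func1)
        assert fut1.result() == 99
        assert current_context["request_id"] == 1   # caller unchanged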

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] [Security-sig] PEP 551: Security transparency in the Python runtime

2017-08-29 Thread Wes Turner
On Monday, August 28, 2017, Steve Dower  wrote:

> On 28Aug2017 1834, Gregory P. Smith wrote:
>
>> My gut feeling says that there are N interpreters available on just
>> about every bloated system image out there. Multiple pythons are often
>> among them; others we do not control will also continue to exist. I
>> expect a small initial payload can be created that, when executed, will
>> binary-patch the interpreter's memory to disable all auditing,
>> /potentially/ in a manner that cannot be audited itself (a challenge
>> guaranteed to be accepted).
>>
>
> Repeating the three main goals listed by the PEP:
> * preventing malicious use is valuable
> * detecting malicious use is important
> * detecting attempts to bypass detection is critical
>
> This case falls under the last one. Yes, it is possible to patch the
> interpreter (at least on systems that don't block that - Windows certainly
> can prevent this at the kernel level), but in order to do so you would
> have to trigger at least one very unusual event (e.g. any of the ctypes
> ones).
>
> Compared to the current state, where someone can patch your interpreter
> without you ever being able to tell, it counts as a victory.
>
> And yeah, if you have alternative interpreters out there that are not
> auditing, you're in just as much trouble. There are certainly sysadmins who
> do a good job of controlling this though - these changes enable *those*
> sysadmins to do a better job, it doesn't help the ones who don't invest in
> it.
>
>> If the goal is to audit attacks, and the above becomes the standard
>> attack payload boilerplate preceding the existing "use python to pull
>> down 'fun' stuff to execute" step, it seems to negate the usefulness.
>>
>
> You can audit all code before it executes and prevent it. Whether you use
> a local malware scanner or some sort of signature scan of log files doesn't
> matter, the change *enables* you to detect this behaviour. Right now it is
> impossible.
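
For concreteness, a minimal sketch of the hook mechanism PEP 551
proposes (sys.audit()/sys.addaudithook()) -- this API did not exist in
any released CPython at the time of this thread, and the event names
below are illustrative:

    import sys

    def log_hook(event, args):
        # Surface the "very unusual" events mentioned above (e.g. ctypes
        # use) so a log scanner or malware scanner can act on them.
        if event.startswith(("ctypes.", "import", "compile")):
            print("AUDIT: %s %r" % (event, args), file=sys.stderr)

    sys.addaudithook(log_hook)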


Wouldn't it be great to have the resources to source audit all code? (And
expect everyone to GPG sign all their commits someday.)

Many Linux packaging formats do have checksums of all files in a package:
{RPM, DEB,}

Python Wheel packages do have a manifest with SHA256 file checksums.
bdist_wheel.write_record():

https://bitbucket.org/pypa/wheel/src/5d49f8cf18679d1bc6f3e1e414a5df3a1e492644/wheel/bdist_wheel.py?at=default&fileviewer=file-view-default#bdist_wheel.py-436

Is there a tool for checking these manifest and file checksums and
signatures?
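
For the checksum half of that question, a minimal sketch (RECORD rows
have the form "path,sha256=<urlsafe-b64 digest, no padding>,size"; the
function name is invented here):

    import base64, csv, hashlib, zipfile

    def verify_record(wheel_path):
        with zipfile.ZipFile(wheel_path) as wf:
            record = next(n for n in wf.namelist()
                          if n.endswith(".dist-info/RECORD"))
            for row in csv.reader(wf.read(record).decode().splitlines()):
                if not row or not row[1]:
                    continue   # blank line, or RECORD's own hash-less entry
                path, hash_spec = row[0], row[1]
                algo, _, expected = hash_spec.partition("=")
                digest = hashlib.new(algo, wf.read(path)).digest()
                actual = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
                if actual != expected:
                    raise ValueError("hash mismatch: %s" % path)

Signature checking is a separate problem, as the rest of this message
discusses.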

Which keys can sign for which packages? IIUC, any key loaded into the local
keyring is currently valid for any package?

"ENH: GPG signatures, branch-environment map (GitFS/HgFS workflow)"
https://github.com/saltstack/salt/issues/12183

- links to GPG signing support in hg, git, os packaging systems

...

Setting and checking SELinux file context labels:

Someone else can explain how DEB handles semanage and chcon?

https://fedoraproject.org/wiki/PackagingDrafts/SELinux

RPM supports .te (type enforcement), .fc (file context), and .if SELinux
files with an `semodule` command.

RPM requires various combinations of the policycoreutils,
selinux-policy-targeted, selinux-policy-devel, and policycoreutils-python
packages.

Should setup.py (running with set fcontext (eg root)) just call chcon
itself; or is it much better to repack (signed) Python packages as e.g.
RPMs?

FWIW, Salt and Ansible do support setting and checking SELinux file
contexts:

salt.modules.selinux:

https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.selinux.html

https://github.com/saltstack/salt/blob/develop/salt/modules/selinux.py

Requires:
- cmds: semanage, setsebool, semodule
- pkgs: policycoreutils, policycoreutils-python,


Ansible sefcontext:

http://docs.ansible.com/ansible/latest/sefcontext_module.html

https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/sefcontext.py

Requires:
- pkgs: libselinux-python, policycoreutils-python

Does it make sense to require e.g. policycoreutils-python[-python] in
'spython'? ((1) Instead of wrapping `ls -Z` and `chcon` (2) in setup.py (3)
as root)?



> This audit layer seems like a defense against existing exploit kits
>> rather than future ones.
>>
>
> As a *defense*, yes, though if you use a safelist of your own code rather
> than a blocklist of malicious code, you can defend against all unexpected
> code. (For example, you could have a signed catalog of code on the machine
> that is used to verify all sources that are executed. Again - these
> features are for sysadmins who invest in this, it isn't free security for
> lazy ones.)
>
> Is that still useful from a defense in depth
>> point of view?
>>
>
> Detection is very useful for defense in depth. The new direction of
> malware scanners is heading towards behavioural detection (away from
> signature-based detection) because it's more future proof to detect
> unexpected actions than unexpected code.
>
> (If you think any of these explanations deserve to 

Re: [Python-Dev] Pep 550 module

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 6:53 AM, Nick Coghlan  wrote:
> On 28 August 2017 at 01:51, Jim J. Jewett  wrote:
>> I think there is general consensus that this should go in a module other
>> than sys. (At least a submodule.)
>>
>> The specific names are still To Be Determined, but I suspect seeing the
>> functions and objects as part of a named module will affect what works.
>
> Given the refocusing of the PEP on the context variable API, with the
> other aspects introduced solely in service of making context variables
> work as defined, my current suggestion would be to make it a hybrid
> Python/C API using the "contextvars" + "_contextvars" naming
> convention.
>
> Then all that most end user applications defining context variables
> would need is the single line:
>
> from contextvars import new_context_var

I like it!

+1 from me.

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 5:01 AM, Nick Coghlan  wrote:
[..]
> P.S. And I say that as a reader who correctly guessed why you had
> changed the method name in the current iteration of the proposal. I'm
> sympathetic to those reasons, but I think sticking with the
> conventional API will make this one easier to learn and use :)

Yeah, I agree.  We'll switch lookup -> get in the next iteration.

Guido's parallel with getattr/setattr/delattr is also useful. getattr
can also lookup the attribute in base classes, but we still call it
"get".

Yury


Re: [Python-Dev] Pep 550 module

2017-08-29 Thread Elvis Pranskevichus
On Tuesday, August 29, 2017 9:14:53 AM EDT Yury Selivanov wrote:
> On Tue, Aug 29, 2017 at 6:53 AM, Nick Coghlan wrote:
> > On 28 August 2017 at 01:51, Jim J. Jewett wrote:
> >> I think there is general consensus that this should go in a module
> >> other than sys. (At least a submodule.)
> >> 
> >> The specific names are still To Be Determined, but I suspect seeing
> >> the functions and objects as part of a named module will affect
> >> what works.> 
> > Given the refocusing of the PEP on the context variable API, with
> > the
> > other aspects introduced solely in service of making context
> > variables work as defined, my current suggestion would be to make
> > it a hybrid Python/C API using the "contextvars" + "_contextvars"
> > naming convention.
> > 
> > Then all that most end user applications defining context variables
> > would need is the single line:
> > 
> > from contextvars import new_context_var
> 
> I like it!
> 
> +1 from me.

+1


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Nick Coghlan
On 29 August 2017 at 23:18, Yury Selivanov  wrote:
> On Tue, Aug 29, 2017 at 5:01 AM, Nick Coghlan  wrote:
> [..]
>> P.S. And I say that as a reader who correctly guessed why you had
>> changed the method name in the current iteration of the proposal. I'm
>> sympathetic to those reasons, but I think sticking with the
>> conventional API will make this one easier to learn and use :)
>
> Yeah, I agree.  We'll switch lookup -> get in the next iteration.
>
> Guido's parallel with getattr/setattr/delattr is also useful. getattr
> can also lookup the attribute in base classes, but we still call it
> "get".

True, in many ways attribute inheritance is Python's original ChainMap
implementation :)

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Pep 550 module

2017-08-29 Thread Guido van Rossum
On Tue, Aug 29, 2017 at 6:46 AM, Elvis Pranskevichus wrote:

> On Tuesday, August 29, 2017 9:14:53 AM EDT Yury Selivanov wrote:
> > On Tue, Aug 29, 2017 at 6:53 AM, Nick Coghlan wrote:
> > > Given the refocusing of the PEP on the context variable API, with
> > > the
> > > other aspects introduced solely in service of making context
> > > variables work as defined, my current suggestion would be to make
> > > it a hybrid Python/C API using the "contextvars" + "_contextvars"
> > > naming convention.
> > >
> > > Then all that most end user applications defining context variables
> > > would need is the single line:
> > >
> > > from contextvars import new_context_var
> >
> > I like it!
> >
> > +1 from me.
>
> +1
>

OK, but does it have to look like a factory function? Can't it look like a
class? E.g.

from contextvars import ContextVar

my_parameter = ContextVar()

async def some_calculation():
    my_parameter.set(my_parameter.get() + 2)

    my_parameter.delete()

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] [Security-sig] PEP 551: Security transparency in the Python runtime

2017-08-29 Thread Steve Dower

On 29Aug2017 0614, Wes Turner wrote:
> Wouldn't it be great to have the resources to source audit all code?
> (And expect everyone to GPG sign all their commits someday.)


If you care this much, then you will find the resources to audit all the 
code manually after you've downloaded it and before you've deployed it 
(or delegate that trust/liability elsewhere). Plenty of larger companies 
do it, especially for their high value targets.



The rest of your email is highly platform-specific, and so while they 
are potential *uses* of this PEP, and I hope people will take the time 
to investigate them, they don't contribute to it in any way. None of 
these things will be added to or required by the core CPython release.


Cheers,
Steve



Re: [Python-Dev] [Security-sig] PEP 551: Security transparency in the Python runtime

2017-08-29 Thread Steve Dower

On 29Aug2017 0801, Steve Dower wrote:
> On 29Aug2017 0614, Wes Turner wrote:
>> Wouldn't it be great to have the resources to source audit all code?
>> (And expect everyone to GPG sign all their commits someday.)
>
> If you care this much, then you will find the resources to audit all the
> code manually after you've downloaded it and before you've deployed it
> (or delegate that trust/liability elsewhere). Plenty of larger companies
> do it, especially for their high value targets.


On re-reading it wasn't entirely clear, so just to clarify:

* above, "you" is meant as a generally inclusive term (i.e., not just 
Wes, unless Wes is also a sysadmin who is trying to carefully control 
his network :) )


* below, "you" is specifically the author of the email (i.e., Wes)

Cheers,
Steve

> The rest of your email is highly platform-specific, and so while they
> are potential *uses* of this PEP, and I hope people will take the time
> to investigate them, they don't contribute to it in any way. None of
> these things will be added to or required by the core CPython release.


Cheers,
Steve

Many Linux packaging formats do have checksums of all files in a 
package: {RPM, DEB,}


Python Wheel packages do have a manifest with SHA256 file checksums. 
bdist_wheel.write_record():


https://bitbucket.org/pypa/wheel/src/5d49f8cf18679d1bc6f3e1e414a5df3a1e492644/wheel/bdist_wheel.py?at=default&fileviewer=file-view-default#bdist_wheel.py-436 



Is there a tool for checking these manifest and file checksums and 
signatures?


Which keys can sign for which packages? IIUC, any key loaded into the 
local keyring is currently valid for any package?


"ENH: GPG signatures, branch-environment map (GitFS/HgFS workflow)"
https://github.com/saltstack/salt/issues/12183

- links to GPG signing support in hg, git, os packaging systems

...

Setting and checking SELinux file context labels:

Someone else can explain how DEB handles semanage and chcon?

https://fedoraproject.org/wiki/PackagingDrafts/SELinux

RPM supports .te (type enforcement), .fc (file context), and .if 
SELinux files with an `semodule` command.


RPM requires various combinations of the policycoreutils, 
selinux-policy-targeted, selinux-policy-devel, and 
  policycoreutils-python packages.


Should setup.py (running with set fcontext (eg root)) just call chcon 
itself; or is it much better to repack (signed) Python packages as 
e.g. RPMs?


FWIW, Salt and Ansible do support setting and checking SELinux file 
contexts:

salt.modules.selinux:

https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.selinux.html 



https://github.com/saltstack/salt/blob/develop/salt/modules/selinux.py

Requires:
- cmds: semanage, setsebool, semodule
- pkgs: policycoreutils, policycoreutils-python,


Ansible sefcontext:

http://docs.ansible.com/ansible/latest/sefcontext_module.html

https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/sefcontext.py 



Requires:
- pkgs: libselinux-python, policycoreutils-python

Does it make sense to require e.g. policycoreutils-python[-python] in 
'spython'? ((1) Instead of wrapping `ls -Z` and `chcon` (2) in 
setup.py (3) as root)?

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org


___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Security-sig] PEP 551: Security transparency in the Python runtime

2017-08-29 Thread Wes Turner
On Tuesday, August 29, 2017, Steve Dower  wrote:

> On 29Aug2017 0801, Steve Dower wrote:
>
>> On 29Aug2017 0614, Wes Turner wrote:
>>
>>> Wouldn't it be great to have the resources to source audit all code?
>>> (And expect everyone to GPG sign all their commits someday.)
>>>
>>
>> If you care this much, then you will find the resources to audit all the
>> code manually after you've downloaded it and before you've deployed it (or
>> delegate that trust/liability elsewhere). Plenty of larger companies do it,
>> especially for their high value targets.
>>
>
> On re-reading it wasn't entirely clear, so just to clarify:
>
> * above, "you" is meant as a generally inclusive term (i.e., not just Wes,
> unless Wes is also a sysadmin who is trying to carefully control his
> network :) )
>
> * below, "you" is specifically the author of the email (i.e., Wes)


avc: denied
<[{/end>

Thanks!




Re: [Python-Dev] Pep 550 module

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 10:47 AM, Guido van Rossum  wrote:
> On Tue, Aug 29, 2017 at 6:46 AM, Elvis Pranskevichus wrote:
>>
>> On Tuesday, August 29, 2017 9:14:53 AM EDT Yury Selivanov wrote:
>> > On Tue, Aug 29, 2017 at 6:53 AM, Nick Coghlan wrote:
>> > > Given the refocusing of the PEP on the context variable API, with
>> > > the
>> > > other aspects introduced solely in service of making context
>> > > variables work as defined, my current suggestion would be to make
>> > > it a hybrid Python/C API using the "contextvars" + "_contextvars"
>> > > naming convention.
>> > >
>> > > Then all that most end user applications defining context variables
>> > > would need is the single line:
>> > >
>> > > from contextvars import new_context_var
>> >
>> > I like it!
>> >
>> > +1 from me.
>>
>> +1
>
>
> OK, but does it have to look like a factory function? Can't it look like a
> class? E.g.
>
> from contextvars import ContextVar
>
> my_parameter = ContextVar()
>
> async def some_calculation():
>     my_parameter.set(my_parameter.get() + 2)
> 
>     my_parameter.delete()

I initially designed the API to be part of the sys module, which
doesn't have any classes and has only functions.

Having the ContextVar class exposed directly in the contextvars module
makes sense.  We can also replace new_execution_context() and
new_logical_context() with contextvars.ExecutionContext() and
contextvars.LogicalContext().
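
The resulting module surface would then look roughly like this (names
taken from this thread; nothing here is final):

    from contextvars import ContextVar, ExecutionContext, LogicalContext

    var = ContextVar()        # instead of contextvars.new_context_var()
    lc = LogicalContext()     # instead of contextvars.new_logical_context()
    ec = ExecutionContext()   # instead of contextvars.new_execution_context()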

Yury


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Antoine Pitrou
On Mon, 28 Aug 2017 17:24:29 -0400
Yury Selivanov  wrote:
> Long story short, I think we need to rollback our last decision to
> prohibit context propagation up the call stack in coroutines.  In PEP
> 550 v3 and earlier, the following snippet would work just fine:
> 
>    var = new_context_var()
> 
>    async def bar():
>        var.set(42)
> 
>    async def foo():
>        await bar()
>        assert var.get() == 42   # with previous PEP 550 semantics
> 
>    run_until_complete(foo())
> 
> But it would break if a user wrapped "await bar()" with "wait_for()":
> 
>    var = new_context_var()
> 
>    async def bar():
>        var.set(42)
> 
>    async def foo():
>        await wait_for(bar(), 1)
>        assert var.get() == 42  # AssertionError !!!
> 
>    run_until_complete(foo())
> 
[...]

> So I guess we have no other choice other than reverting this spec
> change for coroutines.  The very first example in this email should
> start working again.

What about the second one?  Why wouldn't the bar() coroutine inherit
the LC at the point it's instantiated (i.e. where the synchronous bar()
call is done)?

> This means that PEP 550 will have a caveat for async code: don't rely
> on context propagation up the call stack, unless you are writing
> __aenter__ and __aexit__ that are guaranteed to be called without
> being wrapped into a Task.

Hmm, sorry for being a bit slow, but I'm not sure what this
sentence implies.  How is the user supposed to know whether something
will be wrapped into a Task (short of being an expert in asyncio
internals perhaps)?

Actually, if you could whip up an example of what you mean here, it
would be helpful I think :-)

> Thus I propose to stop associating PEP 550 concepts with
> (dynamic) scoping.

Agreed that dynamic scoping is a red herring here.

Regards

Antoine.




Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 7:16 PM, Yury Selivanov  wrote:
> On Mon, Aug 28, 2017 at 6:56 PM, Greg Ewing wrote:
>> Yury Selivanov wrote:
>>>
>>> I'm saying that the following should not work:
>>>
>>> def nested_gen():
>>>     set_some_context()
>>>     yield
>>>
>>> def gen():
>>>     # some_context is not set
>>>     yield from nested_gen()
>>>     # use some_context ???
>>
>>
>> And I'm saying it *should* work, otherwise it breaks
>> one of the fundamental principles on which yield-from
>> is based, namely that 'yield from foo()' should behave
>> as far as possible as a generator equivalent of a
>> plain function call.
>>
>
> Consider the following generator:
>
>
>   def gen():
>       with decimal.context(...):
>           yield
>
>
> We don't want gen's context to leak to the outer scope -- that's one
> of the reasons why PEP 550 exists.  Even if we do this:
>
>  g = gen()
>  next(g)
>  # the decimal.context won't leak out of gen
>
> So a Python user would have a mental model: context set in generators
> doesn't leak.
>
> Now, let's consider a "broken" generator:
>
>   def gen():
>       decimal.context(...)
>       yield
>
> If we iterate gen() with next(), it still won't leak its context.  But
> if "yield from" has the semantics that you want -- "yield from" being
> just like a function call -- then calling
>
>  yield from gen()
>
> will corrupt the context of the caller.
>
> I simply want consistency.  It's easier for everybody to say that
> generators never leak their context changes to the outer scope,
> rather than saying that "generators can sometimes leak their context".

Adding to the above: there's a fundamental reason why we can't make
"yield from" transparent for EC modifications.

While we want "yield from" to have semantics close to a function call,
in some situations we simply can't. Because you can manually iterate a
generator and then 'yield from' it, you can have this weird
'partial-function-call' semantics.  For example:

    var = new_context_var()

    def gen():
        var.set(42)
        yield
        yield

Now, we can partially iterate the generator (1):

    def main():
        g = gen()
        next(g)

        # we don't want 'g' to leak its EC changes,
        # so var.get() is None here.
        assert var.get() is None

and then we can "yield from" it (2):

    def main():
        g = gen()
        next(g)

        # we don't want 'g' to leak its EC changes,
        # so var.get() is None here.
        assert var.get() is None

        yield from g
        # at this point it's too late for us to let var leak into
        # main().__logical_context__

For (1) we want the context change to be isolated.  For (2) you say
that the context change should propagate to the caller.  But it's
impossible: 'g' already has its own LC({var: 42}), and we can't merge
it with the LC of "main()".

"await" is fundamentally different, because it's not possible to
partially iterate the coroutine before awaiting it (asyncio will break
if you call "coro.send(None)" manually).

Yury


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 2:40 PM, Antoine Pitrou  wrote:
> On Mon, 28 Aug 2017 17:24:29 -0400
> Yury Selivanov  wrote:
>> Long story short, I think we need to rollback our last decision to
>> prohibit context propagation up the call stack in coroutines.  In PEP
>> 550 v3 and earlier, the following snippet would work just fine:
>>
>>    var = new_context_var()
>>
>>    async def bar():
>>        var.set(42)
>>
>>    async def foo():
>>        await bar()
>>        assert var.get() == 42   # with previous PEP 550 semantics
>>
>>    run_until_complete(foo())
>>
>> But it would break if a user wrapped "await bar()" with "wait_for()":
>>
>>    var = new_context_var()
>>
>>    async def bar():
>>        var.set(42)
>>
>>    async def foo():
>>        await wait_for(bar(), 1)
>>        assert var.get() == 42  # AssertionError !!!
>>
>>    run_until_complete(foo())
>>
> [...]
>
>> So I guess we have no other choice other than reverting this spec
>> change for coroutines.  The very first example in this email should
>> start working again.
>
> What about the second one?

Just to be clear: in the next revision of the PEP, the first example
will work without an AssertionError; second example will keep raising
an AssertionError.

> Why wouldn't the bar() coroutine inherit
> the LC at the point it's instantiated (i.e. where the synchronous bar()
> call is done)?

We want tasks to have their own isolated contexts.  When a task
is started, it runs its code in parallel with its "parent" task.  We want
each task to have its own isolated EC (OS thread/TLS vs
async task/EC analogy), otherwise the EC of "foo()" will be randomly
changed by the tasks it spawned.

wait_for() in the above example creates an asyncio.Task implicitly,
and that's why we don't see 'var' changed to '42' in foo().

This is a slightly complicated case, but it's addressable with good
documentation and recommended best practices.

>
>> This means that PEP 550 will have a caveat for async code: don't rely
>> on context propagation up the call stack, unless you are writing
>> __aenter__ and __aexit__ that are guaranteed to be called without
>> being wrapped into a Task.
>
> Hmm, sorry for being a bit slow, but I'm not sure what this
> sentence implies.  How is the user supposed to know whether something
> will be wrapped into a Task (short of being an expert in asyncio
> internals perhaps)?
>
> Actually, if could whip up an example of what you mean here, it would
> be helpful I think :-)

__aenter__ won't ever be wrapped in a task because it's called by
the interpreter.

    var = new_context_var()

    class MyAsyncCM:

        async def __aenter__(self):
            var.set(42)

    async with MyAsyncCM():
        assert var.get() == 42

The above snippet will always work as expected.

We'll update the PEP with thorough explanation of all these
nuances in the semantics.

Yury


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Antoine Pitrou


On 29/08/2017 at 21:18, Yury Selivanov wrote:
> On Tue, Aug 29, 2017 at 2:40 PM, Antoine Pitrou  wrote:
>> On Mon, 28 Aug 2017 17:24:29 -0400
>> Yury Selivanov  wrote:
>>> Long story short, I think we need to rollback our last decision to
>>> prohibit context propagation up the call stack in coroutines.  In PEP
>>> 550 v3 and earlier, the following snippet would work just fine:
>>>
>>>    var = new_context_var()
>>>
>>>    async def bar():
>>>        var.set(42)
>>>
>>>    async def foo():
>>>        await bar()
>>>        assert var.get() == 42   # with previous PEP 550 semantics
>>>
>>>    run_until_complete(foo())
>>>
>>> But it would break if a user wrapped "await bar()" with "wait_for()":
>>>
>>>    var = new_context_var()
>>>
>>>    async def bar():
>>>        var.set(42)
>>>
>>>    async def foo():
>>>        await wait_for(bar(), 1)
>>>        assert var.get() == 42  # AssertionError !!!
>>>
>>>    run_until_complete(foo())
>>>
>> [...]
> 
>> Why wouldn't the bar() coroutine inherit
>> the LC at the point it's instantiated (i.e. where the synchronous bar()
>> call is done)?
> 
> We want tasks to have their own isolated contexts.  When a task
> is started, it runs its code in parallel with its "parent" task.

I'm sorry, but I don't understand what it all means.

To pose the question differently: why is example #1 supposed to be
different, philosophically, than example #2?  Both spawn a coroutine,
both wait for its execution to end.  There is no reason that adding a
wait_for() intermediary (presumably because the user wants to add a
timeout) would significantly change the execution semantics of bar().

> wait_for() in the above example creates an asyncio.Task implicitly,
> and that's why we don't see 'var' changed to '42' in foo().

I don't understand why a non-obvious behaviour detail (the fact that
wait_for() creates an asyncio.Task implicitly) should translate into a
fundamental difference in observable behaviour.  I find it
counter-intuitive and error-prone.

>> This is a slightly complicated case, but it's addressable with good
>> documentation and recommended best practices.

It would be better addressed with consistent behaviour that doesn't rely
on specialist knowledge, though :-/

>>> This means that PEP 550 will have a caveat for async code: don't rely
>>> on context propagation up the call stack, unless you are writing
>>> __aenter__ and __aexit__ that are guaranteed to be called without
>>> being wrapped into a Task.
>>
>> Hmm, sorry for being a bit slow, but I'm not sure what this
>> sentence implies.  How is the user supposed to know whether something
>> will be wrapped into a Task (short of being an expert in asyncio
>> internals perhaps)?
>>
>> Actually, if you could whip up an example of what you mean here, it
>> would be helpful I think :-)
> 
> __aenter__ won't ever be wrapped in a task because it's called by
> the interpreter.
> 
> var = new_context_var()
> 
> class MyAsyncCM:
> 
>     async def __aenter__(self):
>         var.set(42)
> 
> async with MyAsyncCM():
>     assert var.get() == 42
> 
> The above snippet will always work as expected.

Uh... So I really don't understand what you meant above when you wrote:

"""
This means that PEP 550 will have a caveat for async code: don't rely
on context propagation up the call stack, unless you are writing
__aenter__ and __aexit__ that are guaranteed to be called without
being wrapped into a Task.
"""

To ask the question again: can you showcase how and where the "caveat"
applies?

Regards

Antoine.


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 3:32 PM, Antoine Pitrou  wrote:
>
>
> On 29/08/2017 at 21:18, Yury Selivanov wrote:
>> On Tue, Aug 29, 2017 at 2:40 PM, Antoine Pitrou  wrote:
>>> On Mon, 28 Aug 2017 17:24:29 -0400
>>> Yury Selivanov  wrote:
>>>> Long story short, I think we need to rollback our last decision to
>>>> prohibit context propagation up the call stack in coroutines.  In PEP
>>>> 550 v3 and earlier, the following snippet would work just fine:
>>>>
>>>>    var = new_context_var()
>>>>
>>>>    async def bar():
>>>>        var.set(42)
>>>>
>>>>    async def foo():
>>>>        await bar()
>>>>        assert var.get() == 42   # with previous PEP 550 semantics
>>>>
>>>>    run_until_complete(foo())
>>>>
>>>> But it would break if a user wrapped "await bar()" with "wait_for()":
>>>>
>>>>    var = new_context_var()
>>>>
>>>>    async def bar():
>>>>        var.set(42)
>>>>
>>>>    async def foo():
>>>>        await wait_for(bar(), 1)
>>>>        assert var.get() == 42  # AssertionError !!!
>>>>
>>>>    run_until_complete(foo())
>>>>
>>> [...]
>>
>>> Why wouldn't the bar() coroutine inherit
>>> the LC at the point it's instantiated (i.e. where the synchronous bar()
>>> call is done)?
>>
>> We want tasks to have their own isolated contexts.  When a task
>> is started, it runs its code in parallel with its "parent" task.
>
> I'm sorry, but I don't understand what it all means.
>
> To pose the question differently: why is example #1 supposed to be
> different, philosophically, than example #2?  Both spawn a coroutine,
> both wait for its execution to end.  There is no reason that adding a
> wait_for() intermediary (presumably because the user wants to add a
> timeout) would significantly change the execution semantics of bar().

I see your point. The currently published version of the PEP (v4)
fixes this by saying: each coroutine has its own LC. Therefore,
"var.set(42)" will not be visible to the code that calls "bar()".  And
therefore, "await wait_for(bar())" and "await bar()" work the same way
with regards to execution context semantics.

*Unfortunately*, while this fixes above examples to work the same way,
setting context vars in "__aenter__" stops working:

    class MyAsyncCM:

        async def __aenter__(self):
            var.set(42)

    async with MyAsyncCM():
        assert var.get() == 42

Because __aenter__ has its own LC, the code wrapped in "async with"
will not see the effect of "var.set(42)"!

This absolutely needs to be fixed, and the only way (that I know) it
can be fixed is to revert the "every coroutine has its own LC"
statement (going back to the semantics coroutines had in PEP 550 v2
and v3).

>
>> wait_for() in the above example creates an asyncio.Task implicitly,
>> and that's why we don't see 'var' changed to '42' in foo().
>
> I don't understand why a non-obvious behaviour detail (the fact that
> wait_for() creates an asyncio.Task implicitly) should translate into a
> fundamental difference in observable behaviour.  I find it
> counter-intuitive and error-prone.

"await bar()" and "await wait_for(bar())" are actually quite
different.  Let me illustrate with an example:

    b1 = bar()
    # bar() is not running yet
    await b1

    b2 = wait_for(bar())
    # bar() was wrapped into a Task and is already running
    await b2

Usually this difference is subtle, but in asyncio it's perfectly fine
to never await on b2, just let it run until it completes.  If you
don't "await b1" -- b1 simply will never run.

All in all, we can't say that "await bar()" and "await
wait_for(bar())" are equivalent.  The former runs bar() synchronously
within the coroutine that awaits it.  The latter runs bar() in a
completely separate and detached task in parallel to the coroutine
that spawned it.

>
>> This is a slightly complicated case, but it's addressable with good
>> documentation and recommended best practices.
>
> It would be better addressed with consistent behaviour that doesn't rely
> on specialist knowledge, though :-/

I agree. But I don't see any other solution that would solve the
problem *and* satisfy the following requirements:

1. Context variables set in "CM.__aenter__" and "CM.__aexit__" should
be visible to code that is wrapped in "async with CM()".

2. Tasks must have isolated contexts -- changes that coroutines do to
the EC in one Task, should not be visible to other Tasks.
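
Both requirements in a single sketch, written against the PEP's draft
API (new_context_var, wait_for), so it is illustrative rather than
runnable against any released stdlib:

    var = new_context_var()

    class CM:
        async def __aenter__(self):
            var.set(42)      # (1) must be visible inside the with-block

    async def child():
        var.set('child')     # (2) runs in its own Task via wait_for()

    async def parent():
        async with CM():
            assert var.get() == 42     # requirement (1) holds
            await wait_for(child(), 1)
            assert var.get() == 42     # requirement (2): the child's
                                       # change stayed in the child's EC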

Yury


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Nathaniel Smith
On Tue, Aug 29, 2017 at 12:32 PM, Antoine Pitrou  wrote:
>
>
> On 29/08/2017 at 21:18, Yury Selivanov wrote:
>> On Tue, Aug 29, 2017 at 2:40 PM, Antoine Pitrou  wrote:
>>> On Mon, 28 Aug 2017 17:24:29 -0400
>>> Yury Selivanov  wrote:
>>>> Long story short, I think we need to rollback our last decision to
>>>> prohibit context propagation up the call stack in coroutines.  In PEP
>>>> 550 v3 and earlier, the following snippet would work just fine:
>>>>
>>>>    var = new_context_var()
>>>>
>>>>    async def bar():
>>>>        var.set(42)
>>>>
>>>>    async def foo():
>>>>        await bar()
>>>>        assert var.get() == 42   # with previous PEP 550 semantics
>>>>
>>>>    run_until_complete(foo())
>>>>
>>>> But it would break if a user wrapped "await bar()" with "wait_for()":
>>>>
>>>>    var = new_context_var()
>>>>
>>>>    async def bar():
>>>>        var.set(42)
>>>>
>>>>    async def foo():
>>>>        await wait_for(bar(), 1)
>>>>        assert var.get() == 42  # AssertionError !!!
>>>>
>>>>    run_until_complete(foo())
>>>>
>>> [...]
>>
>>> Why wouldn't the bar() coroutine inherit
>>> the LC at the point it's instantiated (i.e. where the synchronous bar()
>>> call is done)?
>>
>> We want tasks to have their own isolated contexts.  When a task
>> is started, it runs its code in parallel with its "parent" task.
>
> I'm sorry, but I don't understand what it all means.
>
> To pose the question differently: why is example #1 supposed to be
> different, philosophically, than example #2?  Both spawn a coroutine,
> both wait for its execution to end.  There is no reason that adding a
> wait_for() intermediary (presumably because the user wants to add a
> timeout) would significantly change the execution semantics of bar().
>
>> wait_for() in the above example creates an asyncio.Task implicitly,
>> and that's why we don't see 'var' changed to '42' in foo().
>
> I don't understand why a non-obvious behaviour detail (the fact that
> wait_for() creates an asyncio.Task implicitly) should translate into a
> fundamental difference in observable behaviour.  I find it
> counter-intuitive and error-prone.

For better or worse, asyncio users generally need to be aware of the
distinction between coroutines/Tasks/Futures and which functions
create or return which -- it's essentially the same as the distinction
between running some code in the current thread versus spawning a new
thread to run it (and then possibly waiting for the result).

Mostly the docs tell you when a function converts a coroutine into a
Task, e.g. if you look at the docs for 'ensure_future' or 'wait_for'
or 'wait' they all say this explicitly. Or in some cases like 'gather'
and 'shield', it's implicit because they take arbitrary futures, and
creating a task is how you convert a coroutine into a future.

As a rule of thumb, I think it's accurate to say that any function
that takes a coroutine object as an argument always converts it into a
Task.
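
For instance, with plain stdlib asyncio (Python 3.5+; "child" and
"main" are names invented for the illustration):

    import asyncio

    async def child():
        return 42

    async def main():
        r1 = await child()                       # runs in the current task
        r2 = await asyncio.wait_for(child(), 1)  # takes a coroutine ->
                                                 # wraps it in a Task first
        t = asyncio.ensure_future(child())       # explicit coroutine -> Task
        r3 = await t
        return r1, r2, r3

    loop = asyncio.get_event_loop()
    assert loop.run_until_complete(main()) == (42, 42, 42)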

>> This is a slightly complicated case, but it's addressable with good
>> documentation and recommended best practices.
>
> It would be better addressed with consistent behaviour that doesn't rely
> on specialist knowledge, though :-/

This is the core of the Curio/Trio critique of asyncio: in asyncio,
operations that implicitly initiate concurrent execution are all over
the API. This is the root cause of asyncio's problems with buffering
and backpressure, it makes it hard to shut down properly (it's hard to
know when everything has finished running), it's related to the
"spooky cancellation at a distance" issue where cancelling one task
can cause another Task to get a cancelled exception, etc. If you use
the recommended "high level" API for streams, then AFAIK it's still
impossible to close your streams properly at shutdown (you can request
that a close happen "sometime soon", but you can't tell when it's
finished).

Obviously asyncio isn't going anywhere, so we should try to
solve/mitigate these issues where we can, but asyncio's API
fundamentally assumes that users will be very aware and careful about
which operations create which kinds of concurrency. So I sort of feel
like, if you can use asyncio at all, then you can handle wait_for
creating a new LC.

-n

[1] 
https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/#bug-3-closing-time

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Antoine Pitrou

On 29/08/2017 at 21:59, Yury Selivanov wrote:
> 
> This absolutely needs to be fixed, and the only way (that I know) it
> can be fixed is to revert the "every coroutine has its own LC"
> statement (going back to the semantics coroutines had in PEP 550 v2
> and v3).

I completely agree with this.  What I don't understand is why example #2
can't work the same.

> "await bar()" and "await wait_for(bar())" are actually quite
> different.  Let me illustrate with an example:
> 
> b1 = bar()
> # bar() is not running yet
> await b1
> 
> b2 = wait_for(bar())
> # bar() was wrapped into a Task and is being running right now
> await b2
> 
> Usually this difference is subtle, but in asyncio it's perfectly fine
> to never await on b2, just let it run until it completes.  If you
> don't "await b1" -- b1 simply will never run.

Perhaps... But still, why doesn't bar() inherit the LC *at the point
where it was instantiated* (i.e. foo()'s LC in the examples)?  The fact
that it's *later* passed to wait_for() shouldn't matter, right?  Or
should it?

Regards

Antoine.


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Nathaniel Smith
On Tue, Aug 29, 2017 at 12:59 PM, Yury Selivanov wrote:
> b2 = wait_for(bar())
> # bar() was wrapped into a Task and is already running
> await b2

Ah not quite. wait_for is itself implemented as a coroutine, so it
doesn't spawn off bar() into its own task until you await b2.

Though according to the docs you should pretend that you don't know
whether wait_for returns a coroutine or a Future, so what you said
would also be a conforming implementation.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 4:10 PM, Antoine Pitrou  wrote:
[..]
>> "await bar()" and "await wait_for(bar())" are actually quite
>> different.  Let me illustrate with an example:
>>
>> b1 = bar()
>> # bar() is not running yet
>> await b1
>>
>> b2 = wait_for(bar())
>> # bar() was wrapped into a Task and is already running
>> await b2
>>
>> Usually this difference is subtle, but in asyncio it's perfectly fine
>> to never await on b2, just let it run until it completes.  If you
>> don't "await b1" -- b1 simply will never run.
>
> Perhaps... But still, why doesn't bar() inherit the LC *at the point
> where it was instantiated* (i.e. foo()'s LC in the examples)?  The fact
> that it's *later* passed to wait_for() shouldn't matter, right?  Or
> should it?

bar() will inherit the lookup chain.  Two examples:

1)

    gvar = new_context_var()
    var = new_context_var()

    async def bar():
        # EC = [current_thread_LC_copy, Task_foo_LC]

        var.set(42)
        assert gvar.get() == ''

    async def foo():
        # EC = [current_thread_LC_copy, Task_foo_LC]

        gvar.set('')
        await bar()
        assert var.get() == 42   # with previous PEP 550 semantics
        assert gvar.get() == ''

    # EC = [current_thread_LC]
    run_until_complete(foo())   # Task_foo

2)

    gvar = new_context_var()
    var = new_context_var()

    async def bar():
        # EC = [current_thread_LC_copy, Task_foo_LC_copy, Task_wait_for_LC]

        var.set(42)
        assert gvar.get() == ''

    async def foo():
        # EC = [current_thread_LC_copy, Task_foo_LC]

        await wait_for(bar(), 1)   # bar() is wrapped into Task_wait_for
                                   # implicitly

        assert gvar.get() == ''   # OK
        assert var.get() == 42  # AssertionError !!!

    # EC = [current_thread_LC]
    run_until_complete(foo())   # Task_foo

The key difference:

In example (1), bar() will have the LC of the Task that runs foo().
Both "foo()" and "bar()" will *share* the same LC.  That's why foo()
will see changes made in bar().

In example (2), bar() will have the LC of wait_for() task, and foo()
will have a different LC.

Yury


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Antoine Pitrou

On 29/08/2017 at 22:20, Yury Selivanov wrote:
> 
> 2)
> 
>gvar = new_context_var()
>var = new_context_var()
> 
>    async def bar():
>        # EC = [current_thread_LC_copy, Task_foo_LC_copy, Task_wait_for_LC]

Ah, thanks!...  That explains things, though I don't expect most users
to spontaneously infer this and its consequences from the fact that they
used "wait_for()".

This seems actually even more problematic, because if bar() can mutate
Task_wait_for_LC, it may unwittingly affect wait_for() (assuming the
wait_for() implementation may some day use the EC for whatever purpose,
e.g. logging).

It seems framework code like wait_for() should have a way to override
the default behaviour and remove their own LCs from "child" coroutines'
lookup chains.  Perhaps the PEP already allows for this?

Regards

Antoine.


Re: [Python-Dev] PEP 550 v4: coroutine policy

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 4:33 PM, Antoine Pitrou  wrote:
>
> On 29/08/2017 at 22:20, Yury Selivanov wrote:
>>
>> 2)
>>
>>gvar = new_context_var()
>>var = new_context_var()
>>
>>    async def bar():
>>        # EC = [current_thread_LC_copy, Task_foo_LC_copy, Task_wait_for_LC]
>
> Ah, thanks!...  That explains things, though I don't expect most users
> to spontaneously infer this and its consequences from the fact that they
> used "wait_for()".

Yeah, we use "# EC=" comments in the PEP to explain how EC is
implemented for generators (in the Detailed Specification section),
and will now do the same for coroutines (in the next update).

>
> This seems actually even more problematic, because if bar() can mutate
> Task_wait_for_LC, it may unwillingly affect wait_for() (assuming the
> wait_for() implementation may some day use EC for whatever purpose, e.g.
> logging).

In general the pattern is to wrap the passed coroutine into a Task and
then attach some callbacks to it (or wrap the coroutine into another
coroutine).  So while I understand the concern, I can't immediately
come up with a realistic example...

>
> It seems framework code like wait_for() should have a way to override
> the default behaviour and remove their own LCs from "child" coroutines'
> lookup chains.  Perhaps the PEP already allows for this?

Yes, the PEP provides enough APIs to implement any semantics we want.
We might want to add "execution_context" kwarg to
"asyncio.create_task" to make this customization of EC easy for Tasks.

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Greg Ewing

Yury Selivanov wrote:
> Consider the following generator:
>
>   def gen():
>       with decimal.context(...):
>           yield
>
> We don't want gen's context to leak to the outer scope


That's understandable, but fixing that problem shouldn't come
at the expense of breaking the ability to refactor generator
code or async code without changing its semantics.

I'm not convinced that it has to, either. In this example,
the with-statement is the thing that should be establishing
a new nested context. Yielding and re-entering the generator
should only be swapping around between existing contexts.


> Now, let's consider a "broken" generator:
>
>   def gen():
>       decimal.context(...)
>       yield


The following non-generator code is "broken" in exactly
the same way:

   def foo():
       decimal.context(...)
       do_some_decimal_calculations()
       # Context has now been changed for the caller


I simply want consistency.


So do I! We just have different ideas about what consistency
means here.


It's easier for everybody to say that
generators never leak their context changes to the outer scope,
rather than saying that "generators can sometimes leak their context".


No, generators should *always* leak their context changes
to exactly the same extent that normal functions do. If you
don't want to leak a context change, you should use a with
statement.

What you seem to be suggesting is that generators shouldn't
leak context changes even when you *don't* use a with-statement.
If you're going to do that, you'd better make sure that the
same thing applies to regular functions, otherwise you've
introduced an inconsistency.

--
Greg

___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 5:45 PM, Greg Ewing  wrote:
[..]
> What you seem to be suggesting is that generators shouldn't
> leak context changes even when you *don't* use a with-statement.

Yes, generators shouldn't leak context changes regardless of what
changes the context inside them, or how:

  var = new_context_var()

  def gen():
      old_val = var.get()
      try:
          var.set('blah')
          yield
          yield
          yield
      finally:
          var.set(old_val)

with the above code, calling "next(gen())" and then abandoning the
generator would, without PEP 550, leak the state change and corrupt
the state of the caller.  The "finally" block (or a "with" block)
wouldn't help you here, because it never gets the chance to run.
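
A sketch of that failure mode, reusing the hypothetical "var" and
"gen()" from the example above:

    g = gen()
    next(g)   # runs gen() up to the first yield; var is set to 'blah'
    del g     # the generator is abandoned; its "finally" never runs

    # Without PEP 550 the caller would now observe var == 'blah',
    # i.e. the partial execution corrupted the caller's state.  With
    # PEP 550 the set() call stays isolated in gen()'s logical context.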

That's the problem the PEP fixes.  The EC interaction with generators
is explained here with a great detail:
https://www.python.org/dev/peps/pep-0550/#id4

We explain the motivation behind desiring a working context-local
solution for generators in the Rationale section:
https://www.python.org/dev/peps/pep-0550/#rationale

Basically half of the PEP is about isolating context in generators.

> If you're going to do that, you'd better make sure that the
> same thing applies to regular functions, otherwise you've
> introduced an inconsistency.

Regular functions cannot pause/resume their execution, so they can't
leak an inconsistent context change due to out of order or partial
execution.

PEP 550 positions itself as a replacement for TLS, and clearly defines
its semantics for regular functions in a single thread, regular
functions in multithreaded code, generators, and asynchronous code
(async/await).  Everything is specified in the High-level
Specification section.  I wouldn't call slightly differently defined
semantics for generators/coroutines/functions an "inconsistency" --
they just have a different EC semantics given how different they are
from each other.

Drawing a parallel between 'yield from' and function calls is
possible, but we shouldn't forget that you can 'yield from' a
half-iterated generator.
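
A quick illustration of the half-iterated case:

    def sub():
        yield 1
        yield 2
        yield 3

    def outer(g):
        yield from g       # g may already be partially consumed

    g = sub()
    next(g)                # consume one item before any delegation
    print(list(outer(g)))  # [2, 3]: delegation picks up mid-stream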

Yury
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Stefan Krah
On Tue, Aug 29, 2017 at 06:01:40PM -0400, Yury Selivanov wrote:
> PEP 550 positions itself as a replacement for TLS, and clearly defines
> its semantics for regular functions in a single thread, regular
> functions in multithreaded code, generators, and asynchronous code
> (async/await).  Everything is specified in the High-level
> Specification section.  I wouldn't call slightly differently defined
> semantics for generators/coroutines/functions an "inconsistency" --
> they just have a different EC semantics given how different they are
> from each other.

What I don't find so consistent is that the async universe is guarded
with async {def, for, with, ...}, but in this proposal regular context
managers and context setters implicitly adapt their behavior.

So, pedantically, having a language extension like

   async set(var, value)
   x = async get(var)

and making async-safe context managers explicit

   async with decimal.localcontext():
       ...


would feel more consistent.  I know generators are a problem, but even
allowing something like "async set" in generators would be a step up.



Stefan Krah



___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 7:06 PM, Stefan Krah  wrote:
> On Tue, Aug 29, 2017 at 06:01:40PM -0400, Yury Selivanov wrote:
>> PEP 550 positions itself as a replacement for TLS, and clearly defines
>> its semantics for regular functions in a single thread, regular
>> functions in multithreaded code, generators, and asynchronous code
>> (async/await).  Everything is specified in the High-level
>> Specification section.  I wouldn't call slightly differently defined
>> semantics for generators/coroutines/functions an "inconsistency" --
>> they just have a different EC semantics given how different they are
>> from each other.
>
> What I don't find so consistent is that the async universe is guarded
> with async {def, for, with, ...}, but in this proposal regular context
> managers and context setters implicitly adapt their behavior.
>
> So, pedantically, having a language extension like
>
>    async set(var, value)
>    x = async get(var)
>
> and making async-safe context managers explicit
>
>    async with decimal.localcontext():
>        ...
>
>
> would feel more consistent.  I know generators are a problem, but even
> allowing something like "async set" in generators would be a step up.

But regular context managers work just fine with asynchronous code.
Not all of them manage context-local state.  For example, you could have a
context manager to time how long the code wrapped into it executes:

  async def foo():
      with timing():
          await ...
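
A minimal sketch of what such a purely synchronous timing() helper
could look like (hypothetical, not part of the PEP or asyncio):

    import time
    from contextlib import contextmanager

    @contextmanager
    def timing():
        start = time.monotonic()
        try:
            yield
        finally:
            print('elapsed: %.3fs' % (time.monotonic() - start))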

We use asynchronous context managers only when they need to do
asynchronous operations in their __aenter__ and __aexit__ (like DB
transaction begin/rollback/commit).

Requiring "await" to set a value for context variable would force us
to write specialized async CMs for cases where a sync CM would do just
fine.  This in turn, would make it impossible to use some sync
libraries in async code.  But there's nothing wrong in using
numpy/numpy.errstate in a coroutine.  I want to be able to copy/paste
their examples into my async code and I'd expect it to just work --
that's the point of the PEP.
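
For instance, this kind of copy/paste should just work, since
numpy.errstate is a plain synchronous context manager:

    import numpy as np

    async def compute():
        # A sync CM inside a coroutine; no async variant required.
        with np.errstate(divide='ignore'):
            return np.log(np.array([0.0, 1.0]))  # no divide-by-zero warning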

async/await already requires libraries that involve IO to have
separate APIs.  Let's not make the situation worse by asking people to
use an asynchronous version of PEP 550 even though it's not really
needed.

Yury
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Greg Ewing

Yury Selivanov wrote:

While we want "yield from" to have semantics close to a function call,


That's not what I said! I said that "yield from foo()" should
have semantics close to a function call. If you separate the
"yield from" from the "foo()", then of course you can get
different behaviours.

But that's beside the point, because I'm not suggesting
that generators should behave differently depending on when
or if you use "yield from" on them.


For (1) we want the context change to be isolated.  For (2) you say
that the context change should propagate to the caller.


No, I'm saying that the context change should *always*
propagate to the caller, unless you do something explicit
within the generator to prevent it.

I have some ideas on what that something might be, which
I'll post later.

--
Greg
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 7:36 PM, Greg Ewing  wrote:
> Yury Selivanov wrote:
>>
>> While we want "yield from" to have semantics close to a function call,
>
>
> That's not what I said! I said that "yield from foo()" should
> have semantics close to a function call. If you separate the
> "yield from" from the "foo()", then of course you can get
> different behaviours.
>
> But that's beside the point, because I'm not suggesting
> that generators should behave differently depending on when
> or if you use "yield from" on them.

OK, that wasn't clear.

Yury
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Yury Selivanov
On Tue, Aug 29, 2017 at 7:36 PM, Greg Ewing  wrote:
[..]
>> For (1) we want the context change to be isolated.  For (2) you say
>> that the context change should propagate to the caller.
>
>
> No, I'm saying that the context change should *always*
> propagate to the caller, unless you do something explicit
> within the generator to prevent it.
>
> I have some ideas on what that something might be, which
> I'll post later.

BTW we already have mechanisms to always propagate context to the
caller -- just use threading.local() or a global variable.  PEP 550 is
for situations when you explicitly don't want to propagate the state.
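
A minimal sketch of that existing behaviour (plain threading.local, no
PEP 550 involved):

    import threading

    state = threading.local()

    def gen():
        state.value = 'set inside the generator'
        yield

    g = gen()
    next(g)
    print(state.value)   # the change propagated to the caller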

Anyways, I'm curious to hear your ideas.

Yury
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bpo-5001: More-informative multiprocessing error messages (#3079)

2017-08-29 Thread Serhiy Storchaka

30.08.17 01:52, Antoine Pitrou wrote:

https://github.com/python/cpython/commit/bd73e72b4a9f019be514954b1d40e64dc3a5e81c
commit: bd73e72b4a9f019be514954b1d40e64dc3a5e81c
branch: master
author: Allen W. Smith, Ph.D 
committer: Antoine Pitrou 
date: 2017-08-30T00:52:18+02:00
summary:

bpo-5001: More-informative multiprocessing error messages (#3079)

* Make error message more informative

Replace assertions in error-reporting code with more-informative version that 
doesn't cause confusion over where and what the error is.

* Additional clarification + get travis to check

* Change from SystemError to TypeError

As suggested in PR comment by @pitrou, changing from SystemError; TypeError 
appears appropriate.

* NEWS file installation; ACKS addition (will do my best to justify it by 
additional work)

* Making current AssertionErrors in multiprocessing more informative

* Blurb added re multiprocessing managers.py, queues.py cleanup

* Further multiprocessing cleanup - went through pool.py

* Fix two asserts in multiprocessing/util.py

* Most asserts in multiprocessing more informative

* Didn't save right version

* Further work on multiprocessing error messages

* Correct typo

* Correct typo v2

* Blasted colon... serves me right for trying to work on two things at once

* Simplify NEWS entry

* Update 2017-08-18-17-16-38.bpo-5001.gwnthq.rst

* Update 2017-08-18-17-16-38.bpo-5001.gwnthq.rst

OK, never mind.

* Corrected (thanks to pitrou) error messages for notify

* Remove extraneous backslash in docstring.


Please, please don't forget to edit commit messages before merging. An 
excessively verbose commit message will be kept in the repository 
forever and will harm future developers who read the history.


___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Nick Coghlan
On 30 August 2017 at 10:18, Yury Selivanov  wrote:
> On Tue, Aug 29, 2017 at 7:36 PM, Greg Ewing  
> wrote:
> [..]
>>> For (1) we want the context change to be isolated.  For (2) you say
>>> that the context change should propagate to the caller.
>>
>>
>> No, I'm saying that the context change should *always*
>> propagate to the caller, unless you do something explicit
>> within the generator to prevent it.
>>
>> I have some ideas on what that something might be, which
>> I'll post later.
>
> BTW we already have mechanisms to always propagate context to the
> caller -- just use threading.local() or a global variable.  PEP 550 is
> for situations when you explicitly don't want to propagate the state.

Writing an "update_parent_context" decorator is also trivial (and will
work for both sync and async generators):

import functools

def update_parent_context(gf):
    @functools.wraps(gf)
    def wrapper(*args, **kwds):
        gen = gf(*args, **kwds)
        gen.__logical_context__ = None
        return gen
    return wrapper

The PEP already covers that approach when it talks about the changes
to contextlib.contextmanager to get context changes to propagate
automatically.

With contextvars getting its own module, it would also be
straightforward to simply include that decorator as part of its API,
so folks won't need to write their own.

While I'm not sure how much practical use it will see, I do think it's
important to preserve the *ability* to transparently refactor
generators using yield from - I'm just OK with such a refactoring
becoming "yield from update_parent_context(subgen())" instead of the
current "yield from subgen()" (as I think *not* updating the parent
context is a better default than updating it).

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-29 Thread Nick Coghlan
On 30 August 2017 at 16:40, Nick Coghlan  wrote:
> Writing an "update_parent_context" decorator is also trivial (and will
> work for both sync and async generators):
>
> def update_parent_context(gf):
>     @functools.wraps(gf)
>     def wrapper(*args, **kwds):
>         gen = gf(*args, **kwds)
>         gen.__logical_context__ = None
>         return gen
>     return wrapper
[snip]
> While I'm not sure how much practical use it will see, I do think it's
> important to preserve the *ability* to transparently refactor
> generators using yield from - I'm just OK with such a refactoring
> becoming "yield from update_parent_context(subgen())" instead of the
> current "yield from subgen()" (as I think *not* updating the parent
> context is a better default than updating it).

Oops, I got mixed up between whether I thought this should be a
decorator or an explicitly called helper function. One option would be
to provide both:

import functools

def update_parent_context(gen):
    """Configure a generator-iterator to update its caller's
    context variables."""
    gen.__logical_context__ = None
    return gen

def updates_parent_context(gf):
    """Wrap a generator function's instances with update_parent_context."""
    @functools.wraps(gf)
    def wrapper(*args, **kwds):
        return update_parent_context(gf(*args, **kwds))
    return wrapper
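
A hedged usage sketch (assuming PEP 550 semantics and a context
variable "var" defined elsewhere):

    def subgen():
        var.set('blah')   # normally isolated under PEP 550
        yield

    def caller():
        # Opt in at the call site; alternatively decorate subgen with
        # @updates_parent_context to opt in at the definition site.
        yield from update_parent_context(subgen())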

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com