Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 9:50 PM, Guido van Rossum  wrote:
> On Mon, Aug 28, 2017 at 6:07 PM, Nathaniel Smith  wrote:
>>
>> The important difference between generators/coroutines and normal
>> function calls is that with normal function calls, the link between
>> the caller and callee is fixed for the entire lifetime of the inner
>> frame, so there's no way for the context to shift under your feet. If
>> all we had were normal function calls, then (green-) thread locals
>> using the save/restore trick would be enough to handle all the use
>> cases above -- it's only for generators/coroutines where the
>> save/restore trick breaks down. This means that pushing/popping LCs
>> when crossing into/out of a generator frame is the minimum needed to
>> get the desired semantics, and it keeps the LC stack small (important
>> since lookups can be O(n) in the worst case), and it minimizes the
>> backcompat breakage for operations like decimal.setcontext() where
>> people *do* expect to call it in a subroutine and have the effects be
>> visible in the caller.
>
>
> I like this way of looking at things. Does this have any bearing on
> asyncio.Task? To me those look more like threads than like generators. Or
> possibly they should inherit the lookup chain from the point when the Task
> was created, [..]

We explain why tasks have to inherit the lookup chain from the point
where they are created in the PEP (in the new High-level Specification
section):
https://www.python.org/dev/peps/pep-0550/#coroutines-and-asynchronous-tasks

In short, without inheriting the chain we couldn't transparently wrap
coroutines in tasks: for example, wrapping an awaited coroutine in
wait_for() would break the code.
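The capture-at-creation rule can be shown with a toy model (the names
set_var/get_var/create_task are invented for illustration and are not
PEP 550 API): a task snapshots the lookup chain at the moment it is
created, so wrapping it later does not change what it sees.

```python
# Toy model: a task inherits the logical-context chain at creation time.
# Illustrative only; none of these names are PEP 550 API.

_chain = [{}]  # stack of logical contexts, innermost last

def set_var(name, value):
    _chain[-1][name] = value

def get_var(name, default=None):
    for lc in reversed(_chain):  # chained lookup, innermost first
        if name in lc:
            return lc[name]
    return default

def create_task(fn):
    snapshot = [dict(lc) for lc in _chain]  # inherit chain at creation
    def run():
        global _chain
        saved, _chain = _chain, snapshot + [{}]
        try:
            return fn()
        finally:
            _chain = saved
    return run

set_var("request_id", 42)
task = create_task(lambda: get_var("request_id"))
set_var("request_id", 99)  # the creator's context changes later...
print(task())              # ...but the task still sees 42
```

Because the snapshot is taken when the task is created, later changes in
the creator's context (and any chain in place when the task actually
runs) have no effect on the task's lookups.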

In the latest version (v4) we made all coroutines have their own
Logical Context, which, as we discovered today, makes it impossible to
set context variables in __aenter__ coroutines.  This will be fixed in
the next version.

> FWIW we *could* have a policy that OS threads also
> inherit the lookup chain from their creator, but I doubt that's going to fly
> with backwards compatibility.

Backwards compatibility is indeed an issue.  Inheriting the chain for
threads would mean another difference between PEP 550 and
'threading.local()', which could cause backwards-incompatible behaviour
for decimal/numpy when they are updated to the new APIs.

For decimal, for example, we could use the following pattern to fall
back to the default decimal context for ECs (threads) that don't have
it set:

ctx = decimal_var.get(default=default_decimal_ctx)

We can also add an 'initializer' keyword-argument to 'new_context_var'
to specify a callable that will be used to give a default value to the
var.
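Both fallback mechanisms can be sketched with a toy stand-in for a
context variable ('new_context_var' and its proposed 'initializer'
keyword are under discussion, not an existing API; the class below is
purely illustrative):

```python
# Toy stand-in for a PEP 550 context variable with both fallbacks:
# an explicit get(default=...) and an 'initializer' callable.
# Illustrative only; not the proposed implementation.

_MISSING = object()

class ToyContextVar:
    def __init__(self, name, initializer=None):
        self.name = name
        self.initializer = initializer
        self._value = _MISSING  # per-EC storage in the real proposal

    def set(self, value):
        self._value = value

    def get(self, default=_MISSING):
        if self._value is not _MISSING:
            return self._value          # value set in this EC
        if default is not _MISSING:
            return default              # explicit fallback
        if self.initializer is not None:
            return self.initializer()   # lazily computed default
        raise LookupError(self.name)

DEFAULT_CTX = "default-decimal-context"  # hypothetical placeholder
decimal_var = ToyContextVar("decimal", initializer=lambda: DEFAULT_CTX)

print(decimal_var.get())                    # falls back to initializer
print(decimal_var.get(default="explicit"))  # explicit default wins
```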

Another issue is that with the current C API, we can only inherit the EC
for threads started with 'threading.Thread'. There's no reliable way
to inherit the chain if a thread was initialized by a C extension.

IMO, inheriting the lookup chain in threads makes sense when we use
them for pools, like concurrent.futures.ThreadPoolExecutor.  When
threads are used as long-running subprograms, inheriting the chain
should be an opt-in.

Yury
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Glenn Linderman

On 8/28/2017 6:50 PM, Guido van Rossum wrote:
FWIW we *could* have a policy that OS threads also inherit the lookup 
chain from their creator, but I doubt that's going to fly with 
backwards compatibility.


Since LC is new, how could such a policy affect backwards compatibility?

The obvious answer would be that some use cases that presently use
other mechanisms, and that "should" be ported to LC, would have to be
careful in how they do the port -- but discussion seems to indicate
that they would have to be careful in how they do the port anyway.


One of the most common examples is the decimal context. IIUC, each 
thread gets its initial decimal context from a global template, rather 
than inheriting from its parent thread. Porting decimal context to LC 
then, in the event of OS threads inheriting the lookup chain from their 
creator, would take extra work for compatibility: setting the decimal 
context from the global template (a step it must already take) rather 
than accepting the inheritance.  It might be appropriate that an updated 
version of decimal that uses LC would offer the option of inheriting the 
decimal context from the parent thread, or using the global template, as 
an enhancement.




Re: [Python-Dev] PEP 551: Security transparency in the Python runtime

2017-08-28 Thread Steve Dower

On 28Aug2017 1926, Chris Angelico wrote:

On Tue, Aug 29, 2017 at 12:23 PM, Steve Dower  wrote:

Check your line lengths, I think they may be too long? (Or maybe my mail
client is set too short?)



Yeah, not sure what's happened here. Are PEPs supposed to be 80? Or 72?


According to the emacs stanza at the end, 70. I don't know of any
non-emacs editors that respect that, though.


Ouch... it's hard enough to do a table in 80 characters. :(

Guess my first editing task is rewrapping everything...

Thanks (not really ;) ),
Steve


Re: [Python-Dev] PEP 551: Security transparency in the Python runtime

2017-08-28 Thread Steve Dower

On 28Aug2017 1834, Gregory P. Smith wrote:

My gut feeling says that there are N interpreters available on just
about every bloated system image out there. Multiple Pythons are often
among them; others we do not control will also continue to exist. I
expect a small initial payload can be created that when executed will
binary patch the interpreter's memory to disable all auditing,
/potentially/ in a manner that cannot be audited itself (a challenge
guaranteed to be accepted).


Repeating the three main goals listed by the PEP:
* preventing malicious use is valuable
* detecting malicious use is important
* detecting attempts to bypass detection is critical

This case falls under the last one. Yes, it is possible to patch the 
interpreter (at least on systems that don't block that - Windows 
certainly can prevent this at the kernel level), but in order to do so 
you would have to trigger at least one very unusual event (e.g. any of 
the ctypes ones).


Compared to the current state, where someone can patch your interpreter 
without you ever being able to tell, it counts as a victory.


And yeah, if you have alternative interpreters out there that are not 
auditing, you're in just as much trouble. There are certainly sysadmins 
who do a good job of controlling this, though - these changes enable 
*those* sysadmins to do a better job; they don't help the ones who 
don't invest in it.



If the goal is to audit attacks, and the above becomes standard
attack-payload boilerplate prepended to the existing "use python to
pull down 'fun' stuff to execute" step, it seems to negate the
usefulness.


You can audit all code before it executes and prevent it from running. 
Whether you use a local malware scanner or some sort of signature scan 
of log files doesn't matter; the change *enables* you to detect this 
behaviour. Right now it is impossible.



This audit layer seems like a defense against existing exploit kits
rather than future ones.


As a *defense*, yes, though if you use a safelist of your own code 
rather than a blocklist of malicious code, you can defend against all 
unexpected code. (For example, you could have a signed catalog of code 
on the machine that is used to verify all sources that are executed. 
Again - these features are for sysadmins who invest in this, it isn't 
free security for lazy ones.)
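The safelist idea can be sketched as a hook that only permits
compilation of code whose SHA-256 digest appears in a catalog. The
event shape and catalog are simplified assumptions for illustration;
sys.addaudithook()/sys.audit() are the APIs the PEP proposes.

```python
# Sketch of a "safelist" audit hook: reject compilation of any code
# whose digest is not in an approved catalog. Simplified assumptions
# throughout; in a hardened build the catalog would be signed and the
# hook registered via the PEP's proposed sys.addaudithook().
import hashlib

APPROVED_DIGESTS = {
    hashlib.sha256(b"print('hello')").hexdigest(),
}

def safelist_hook(event, args):
    if event == "compile":
        source = args[0]
        if isinstance(source, str):
            source = source.encode("utf-8")
        digest = hashlib.sha256(source).hexdigest()
        if digest not in APPROVED_DIGESTS:
            raise RuntimeError("blocked compile of unapproved code")

# Registration in a hardened build (not done here):
#   sys.addaudithook(safelist_hook)

safelist_hook("compile", (b"print('hello')", "<string>"))  # approved
try:
    safelist_hook("compile", (b"import os", "<string>"))
    blocked = False
except RuntimeError:
    blocked = True
print(blocked)  # True: unapproved code rejected
```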



Is that still useful from a defense in depth
point of view?


Detection is very useful for defense in depth. The new direction of 
malware scanners is heading towards behavioural detection (away from 
signature-based detection) because it's more future proof to detect 
unexpected actions than unexpected code.


(If you think any of these explanations deserve to be in the PEP text 
itself, please let me know so I can add it in. Or if you don't like 
them, let me know that too and I'll try again!)


Cheers,
Steve




Re: [Python-Dev] PEP 551: Security transparency in the Python runtime

2017-08-28 Thread Chris Angelico
On Tue, Aug 29, 2017 at 12:23 PM, Steve Dower  wrote:
>> Check your line lengths, I think they may be too long? (Or maybe my mail
>> client is set too short?)
>
>
> Yeah, not sure what's happened here. Are PEPs supposed to be 80? Or 72?

According to the emacs stanza at the end, 70. I don't know of any
non-emacs editors that respect that, though.

ChrisA


Re: [Python-Dev] PEP 551: Security transparency in the Python runtime

2017-08-28 Thread Steve Dower

On 28Aug2017 1815, Steven D'Aprano wrote:

Very nicely written. A few comments below.

On Mon, Aug 28, 2017 at 04:55:19PM -0700, Steve Dower wrote:

[...]

This PEP describes additions to the Python API and specific behaviors
for the
CPython implementation that make actions taken by the Python runtime
visible to
security and auditing tools. The goals in order of increasing importance

[...]

Check your line lengths, I think they may be too long? (Or maybe my mail
client is set too short?)


Yeah, not sure what's happened here. Are PEPs supposed to be 80? Or 72?


[...]

To summarize, defenders have a need to audit specific uses of Python in
order to
detect abnormal or malicious usage. Currently, the Python runtime does not
provide any ability to do this, which (anecdotally) has led to organizations
switching to other languages.


It would help if the PEP addressed the state of the art in other
languages.


It briefly mentions that PowerShell has integrated similar functionality 
(generally more invasive, but it's not meant as a full application 
language).


As far as I know, no other languages have done anything at this level - 
OS-level scripting (WSH, AppleScript) relies on OS-level auditing and 
doesn't try to do it within the language. This makes sense from the 
point of view of "my system made a network connection to x.y.z.w", but 
doesn't provide the correlating information necessary to see "this 
Python code downloaded from x.com made a network connection to ...".



[...]

For example, ``sys.addaudithook()`` and ``sys.audit()`` should exist but
may do
nothing. This allows code to make calls to ``sys.audit()`` without having to
test for existence, but it should not assume that its call will have any
effect.
(Including existence tests in security-critical code allows another
vector to
bypass auditing, so it is preferable that the function always exist.)


That suggests a timing attack to infer the existence of auditing.
A naive attempt:
[...]
This is probably too naive to work in real code, but the point is that
the attacker may be able to exploit timing differences in sys.audit and
related functions to infer whether or not auditing is enabled.


I mention later that timing attacks are possible to determine whether 
events are being audited. I'm not convinced that provides any usable 
information though - by the time you can test that you are being 
tracked, you've (a) already got arbitrary code executing, and (b) have 
already been caught :) (or at least triggered an event somewhere... may 
not have flagged any alerts yet, but there's a record)


Cheers,
Steve



Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Guido van Rossum
On Mon, Aug 28, 2017 at 6:07 PM, Nathaniel Smith  wrote:

> The important difference between generators/coroutines and normal
> function calls is that with normal function calls, the link between
> the caller and callee is fixed for the entire lifetime of the inner
> frame, so there's no way for the context to shift under your feet. If
> all we had were normal function calls, then (green-) thread locals
> using the save/restore trick would be enough to handle all the use
> cases above -- it's only for generators/coroutines where the
> save/restore trick breaks down. This means that pushing/popping LCs
> when crossing into/out of a generator frame is the minimum needed to
> get the desired semantics, and it keeps the LC stack small (important
> since lookups can be O(n) in the worst case), and it minimizes the
> backcompat breakage for operations like decimal.setcontext() where
> people *do* expect to call it in a subroutine and have the effects be
> visible in the caller.
>

I like this way of looking at things. Does this have any bearing on
asyncio.Task? To me those look more like threads than like generators. Or
possibly they should inherit the lookup chain from the point when the Task
was created, but not be affected at all by the lookup chain in place when
they are executed. FWIW we *could* have a policy that OS threads also
inherit the lookup chain from their creator, but I doubt that's going to
fly with backwards compatibility.

I guess my general (hurried, sorry) view is that we're at a good point
where we have a small number of mechanisms but are still debating policies
on how those mechanisms should be used. (The basic mechanism is chained
lookup and the policies are about how the chains are fit together for
various language/library constructs.)

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 551: Security transparency in the Python runtime

2017-08-28 Thread Gregory P. Smith
My gut feeling says that there are N interpreters available on just about
every bloated system image out there. Multiple Pythons are often among
them; others we do not control will also continue to exist. I expect a small
initial payload can be created that when executed will binary patch the
interpreter's memory to disable all auditing, *potentially* in a manner
that cannot be audited itself (a challenge guaranteed to be accepted).

If the goal is to audit attacks, and the above becomes standard
attack-payload boilerplate prepended to the existing "use python to pull
down 'fun' stuff to execute" step, it seems to negate the usefulness.

This audit layer seems like a defense against existing exploit kits rather
than future ones. Is that still useful from a defense in depth point of
view?

-gps

On Mon, Aug 28, 2017 at 6:24 PM Steven D'Aprano  wrote:

> Very nicely written. A few comments below.
>
> On Mon, Aug 28, 2017 at 04:55:19PM -0700, Steve Dower wrote:
>
> [...]
> > This PEP describes additions to the Python API and specific behaviors
> > for the
> > CPython implementation that make actions taken by the Python runtime
> > visible to
> > security and auditing tools. The goals in order of increasing importance
> [...]
>
> Check your line lengths, I think they may be too long? (Or maybe my mail
> client is set too short?)
>
>
> [...]
> > To summarize, defenders have a need to audit specific uses of Python in
> > order to
> > detect abnormal or malicious usage. Currently, the Python runtime does
> not
> > provide any ability to do this, which (anecdotally) has led to
> organizations
> > switching to other languages.
>
> It would help if the PEP addressed the state of the art in other
> languages.
>
>
> [...]
> > For example, ``sys.addaudithook()`` and ``sys.audit()`` should exist but
> > may do
> > nothing. This allows code to make calls to ``sys.audit()`` without
> having to
> > test for existence, but it should not assume that its call will have any
> > effect.
> > (Including existence tests in security-critical code allows another
> > vector to
> > bypass auditing, so it is preferable that the function always exist.)
>
> That suggests a timing attack to infer the existence of auditing.
> A naive attempt:
>
> from time import time
> f = lambda: None
> t = time()
> f()
> time_to_do_nothing = time() - t
> audit = sys.audit
> t = time()
> audit()
> time_to_do_audit = time() - t
> if time_to_do_audit <= time_to_do_nothing:
>     do_something_bad()
>
>
> This is probably too naive to work in real code, but the point is that
> the attacker may be able to exploit timing differences in sys.audit and
> related functions to infer whether or not auditing is enabled.
>
>
>
> --
> Steve


Re: [Python-Dev] PEP 551: Security transparency in the Python runtime

2017-08-28 Thread Steven D'Aprano
Very nicely written. A few comments below.

On Mon, Aug 28, 2017 at 04:55:19PM -0700, Steve Dower wrote:

[...]
> This PEP describes additions to the Python API and specific behaviors 
> for the
> CPython implementation that make actions taken by the Python runtime 
> visible to
> security and auditing tools. The goals in order of increasing importance 
[...]

Check your line lengths, I think they may be too long? (Or maybe my mail 
client is set too short?)


[...]
> To summarize, defenders have a need to audit specific uses of Python in 
> order to
> detect abnormal or malicious usage. Currently, the Python runtime does not
> provide any ability to do this, which (anecdotally) has led to organizations
> switching to other languages.

It would help if the PEP addressed the state of the art in other 
languages.


[...]
> For example, ``sys.addaudithook()`` and ``sys.audit()`` should exist but 
> may do
> nothing. This allows code to make calls to ``sys.audit()`` without having to
> test for existence, but it should not assume that its call will have any 
> effect.
> (Including existence tests in security-critical code allows another 
> vector to
> bypass auditing, so it is preferable that the function always exist.)

That suggests a timing attack to infer the existence of auditing. 
A naive attempt:

import sys
from time import time

f = lambda: None
t = time()
f()
time_to_do_nothing = time() - t
audit = sys.audit
t = time()
audit()
time_to_do_audit = time() - t
if time_to_do_audit <= time_to_do_nothing:
    do_something_bad()


This is probably too naive to work in real code, but the point is that 
the attacker may be able to exploit timing differences in sys.audit and 
related functions to infer whether or not auditing is enabled.



-- 
Steve


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Nathaniel Smith
On Mon, Aug 28, 2017 at 3:14 PM, Eric Snow  wrote:
> On Sat, Aug 26, 2017 at 3:09 PM, Nathaniel Smith  wrote:
>> You might be interested in these notes I wrote to motivate why we need
>> a chain of namespaces, and why simple "async task locals" aren't
>> sufficient:
>>
>> https://github.com/njsmith/pep-550-notes/blob/master/dynamic-scope.ipynb
>
> Thanks, Nathaniel!  That helped me understand the rationale, though
> I'm still unconvinced chained lookup is necessary for the stated goal
> of the PEP.
>
> (The rest of my reply is not specific to Nathaniel.)
>
> tl;dr Please:
>   * make the chained lookup aspect of the proposal more explicit (and
> distinct) in the beginning sections of the PEP (or drop chained
> lookup).
>   * explain why normal frames do not get to take advantage of chained
> lookup (or allow them to).
>
> 
>
> If I understood right, the problem is that we always want context vars
> resolved relative to the current frame and then to the caller's frame
> (and on up the call stack).  For generators, "caller" means the frame
> that resumed the generator.  Since we don't know what frame will
> resume the generator beforehand, we can't simply copy the current LC
> when a generator is created and bind it to the generator's frame.
>
> However, I'm still not convinced that's the semantics we need.  The
> key statement is "and then to the caller's frame (and on up the call
> stack)", i.e. chained lookup.  On the linked page Nathaniel explained
> the position (quite clearly, thank you) using sys.exc_info() as an
> example of async-local state.  I posit that that example isn't
> particularly representative of what we actually need.  Isn't the point
> of the PEP to provide an async-safe alternative to threading.local()?
>
> Any existing code using threading.local() would not expect any kind of
> chained lookup since threads don't have any.  So introducing chained
> lookup in the PEP is unnecessary and consequently not ideal since it
> introduces significant complexity.

There's a lot of Python code out there, and it's hard to know what it
all wants :-). But I don't think we should get hung up on matching
threading.local() -- no-one sits down and says "okay, what my users
want is for me to write some code that uses a thread-local", i.e.,
threading.local() is a mechanism, not an end-goal.

My hypothesis is that in most cases, when people reach for
threading.local(), it's because they have some "contextual" variable,
and they want to be able to do things like set it to a value that
affects all and only the code that runs inside a 'with' block. So far
the only way to approximate this in Python has been to use
threading.local(), but chained lookup would work even better.

As evidence for this hypothesis: something like chained lookup is
important for exc_info() [1] and for Trio's cancellation semantics,
and I'm pretty confident that it's what users naturally expect for use
cases like 'with decimal.localcontext(): ...' or 'with
numpy.errstate(...): ...'. And it works fine for cases like Flask's
request-locals that get set once near the top of a callstack and then
treated as read-only by most of the code.
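The 'with'-block behaviour described here can be sketched with a toy
context chain (illustrative names only, not PEP 550's API; for plain
function calls this is exactly the save/restore trick, which chained
lookup generalizes to generators):

```python
# Toy context chain: a 'with' block pushes a new logical context, so a
# setting is visible to all and only the code dynamically inside it.
# Illustrative names; not the PEP's API.
import contextlib

_stack = [{"prec": 28}]  # chain of contexts; defaults at the bottom

def lookup(name):
    for ctx in reversed(_stack):  # chained lookup, innermost first
        if name in ctx:
            return ctx[name]
    raise LookupError(name)

@contextlib.contextmanager
def localcontext(**settings):
    _stack.append(dict(settings))  # shadow, don't overwrite
    try:
        yield
    finally:
        _stack.pop()               # outer values reappear on exit

print(lookup("prec"))          # 28 (default)
with localcontext(prec=10):
    print(lookup("prec"))      # 10 inside the block
print(lookup("prec"))          # 28 again afterwards
```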

I'm not aware of any alternative to chained lookup that fulfills all
of these use cases -- are you? And I'm not aware of any use cases that
require something more than threading.local() but less than chained
lookup -- are you?

[1] I guess I should say something about including sys.exc_info() as
evidence that chained lookup is useful, given that CPython probably
won't share code between its PEP 550 implementation and its
sys.exc_info() implementation. I'm mostly citing it as evidence that
this is a real kind of need that can arise when writing programs -- if
it happens once, it'll probably happen again. But I can also imagine
that other implementations might want to share code here, and it's
certainly nice if the Python-the-language spec can just say
"exc_info() has semantics 'as if' it were implemented using PEP 550
storage" and leave it at that. Plus it's kind of rude for the
interpreter to claim semantics for itself that it won't let anyone
else implement :-).

> As the PEP is currently written, chained lookup is a key part of the
> proposal, though it does not explicitly express this.  I suppose this
> is where my confusion has been.
>
> At this point I think I understand one rationale for the chained
> lookup functionality; it takes advantage of the cooperative scheduling
> characteristics of generators, et al.  Unlike with threads, a
> programmer can know the context under which a generator will be
> resumed.  Thus it may be useful to the programmer to allow (or expect)
> the resumed generator to fall back to the calling context.  However,
> given the extra complexity involved, is there enough evidence that
> such capability is sufficiently useful?  Could chained lookup be
> addressed separately (in another PEP)?
>
> Also, wouldn't it be equally useful to support chained lookup fo

[Python-Dev] PEP 551: Security transparency in the Python runtime

2017-08-28 Thread Steve Dower

Hi python-dev,

Those of you who were at the PyCon US language summit this year (or who 
saw the coverage at https://lwn.net/Articles/723823/) may recall that I 
talked briefly about the ways Python is used by attackers to gain and/or 
retain access to systems on local networks.


I present here PEP 551, which proposes the core changes needed to 
CPython to allow sysadmins (or those responsible for defending their 
networks) to gain visibility into the behaviour of Python processes on 
their systems. It has already gone before security-sig, and has had a 
few significant enhancements as a result. There was also quite a 
positive reaction on Twitter after the first posting (I now have a 
significant number of infosec people watching what I do very 
carefully... :) )


Since the PEP should be self-describing, I'll leave context to its text. 
I believe it is ready for discussion, though there are three incomplete 
sections. Firstly, the list of audit locations is not yet complete, but 
is sufficient for the purposes of discussion (I expect to spend time at 
the upcoming dev sprints arguing about these in ridiculous detail).


Second, the performance analysis has not yet been completed with 
sufficient robustness to make a concrete statement about its impact. 
Preliminary tests show negligible impact in the normal case, and the 
"opted-in" case is the user's responsibility. This also relies somewhat 
on the list of hooks being complete and the implementation having 
stabilised.


Third, the section on recommendations is not settled. It is hard to 
recommend approaches in what is very much an evolving field, so I am 
constantly revising parts of this to keep it all restricted to those 
things enabled by or affected by this PEP. I am *not* trying to present 
a full guide on how to prevent attackers breaching your system :)


My current implementation is available at:
https://github.com/zooba/cpython/tree/sectrans

The github rendered version of this file is at:
https://github.com/python/peps/blob/master/pep-0551.rst

(The one on python.org will update at some point, but is a little behind 
the version in the repo.)


Cheers,
Steve

-

PEP: 551
Title: Security transparency in the Python runtime
Version: $Revision$
Last-Modified: $Date$
Author: Steve Dower 
Discussions-To: 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 23-Aug-2017
Python-Version: 3.7
Post-History: 24-Aug-2017 (security-sig)

Abstract


This PEP describes additions to the Python API and specific behaviors
for the CPython implementation that make actions taken by the Python
runtime visible to security and auditing tools. The goals in order of
increasing importance are to prevent malicious use of Python, to detect
and report on malicious use, and most importantly to detect attempts to
bypass detection. Most of the responsibility for implementation is
required from users, who must customize and build Python for their own
environment.

We propose two small sets of public APIs to enable users to reliably
build their copy of Python without having to modify the core runtime,
protecting future maintainability. We also discuss recommendations for
users to help them develop and configure their copy of Python.

Background
==

Software vulnerabilities are generally seen as bugs that enable remote
or elevated code execution. However, in our modern connected world, the
more dangerous vulnerabilities are those that enable advanced persistent
threats (APTs). APTs are achieved when an attacker is able to penetrate
a network, establish their software on one or more machines, and over
time extract data or intelligence. Some APTs may make themselves known
by maliciously damaging data (e.g., `WannaCrypt `_) or hardware (e.g.,
`Stuxnet `_).

Most attempt to hide their existence and avoid detection. APTs often use a
combination of traditional vulnerabilities, social engineering, phishing (or
spear-phishing), thorough network analysis, and an understanding of
misconfigured environments to establish themselves and do their work.

The first infected machines may not be the final target and may not
require special privileges. For example, an APT that is established as
a non-administrative user on a developer’s machine may have the ability
to spread to production machines through normal deployment channels. It
is common for APTs to persist on as many machines as possible, with
sheer weight of presence making them difficult to remove completely.

Whether an attacker is seeking to cause direct harm or hide their
tracks, the biggest barrier to detection is a lack of insight. System
administrators with large networks rely on distributed logs to u

Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 6:56 PM, Greg Ewing  wrote:
> Yury Selivanov wrote:
>>
>> I'm saying that the following should not work:
>>
>> def nested_gen():
>>     set_some_context()
>>     yield
>>
>> def gen():
>>     # some_context is not set
>>     yield from nested_gen()
>>     # use some_context ???
>
>
> And I'm saying it *should* work, otherwise it breaks
> one of the fundamental principles on which yield-from
> is based, namely that 'yield from foo()' should behave
> as far as possible as a generator equivalent of a
> plain function call.
>

Consider the following generator:


    def gen():
        with decimal.context(...):
            yield


We don't want gen's context to leak to the outer scope -- that's one
of the reasons why PEP 550 exists.  Even if we do this:

    g = gen()
    next(g)
    # the decimal.context won't leak out of gen

So a Python user would have a mental model: context set in generators
doesn't leak.

Now, let's consider a "broken" generator:

    def gen():
        decimal.context(...)
        yield

If we iterate gen() with next(), it still won't leak its context.  But
if "yield from" has the semantics that you want -- "yield from"
behaving just like a function call -- then

    yield from gen()

will corrupt the context of the caller.

I simply want consistency.  It's easier for everybody to say that
generators never leak their context changes to the outer scope, rather
than that "generators can sometimes leak their context".

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Greg Ewing

Yury Selivanov wrote:

I'm saying that the following should not work:

def nested_gen():
    set_some_context()
    yield

def gen():
    # some_context is not set
    yield from nested_gen()
    # use some_context ???


And I'm saying it *should* work, otherwise it breaks
one of the fundamental principles on which yield-from
is based, namely that 'yield from foo()' should behave
as far as possible as a generator equivalent of a
plain function call.

--
Greg


Re: [Python-Dev] [bpo-30421]: Pull request review

2017-08-28 Thread Terry Reedy

On 8/28/2017 4:43 PM, Mark Lawrence via Python-Dev wrote:

The bulk of the work on argparse in recent years has been done by 
paul.j3.  I have no idea whether or not he is classed as a core developer.


'He' is a CLA-signed contributor, but not a committer, with no GitHub 
name registered with bpo.


--
Terry Jan Reedy




Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 6:19 PM, Eric Snow  wrote:
> On Sat, Aug 26, 2017 at 10:31 AM, Yury Selivanov
>  wrote:
>> On Sat, Aug 26, 2017 at 9:33 AM, Sven R. Kunze  wrote:
>> [..]
>>> Why not the same interface as thread-local storage? This has been the
>>> question which bothered me from the beginning of PEP550. I don't understand
>>> what inventing a new way of access buys us here.
>>
>> This was covered at length in these threads:
>>
>> https://mail.python.org/pipermail/python-ideas/2017-August/046888.html
>> https://mail.python.org/pipermail/python-ideas/2017-August/046889.html
>
> FWIW, it would still be nice to have a simple replacement for the
> following under PEP 550:
>
> class Context(threading.local):
> ...
>
> Transitioning from there to PEP 550 is non-trivial.

And it should not be trivial, as PEP 550's semantics differ from TLS.
Using PEP 550 instead of TLS should be carefully evaluated.

Please also see this:
https://www.python.org/dev/peps/pep-0550/#replication-of-threading-local-interface

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 6:22 PM, Greg Ewing  wrote:
[..]
>> But almost nobody converts the code by simply slapping async/await on
>> top of it
>
>
> Maybe not, but it will also affect refactoring of code
> that is *already* using async/await, e.g. taking
>
>
>async def foobar():
>   # set decimal context
>   # use the decimal context we just set
>
> and refactoring it as above.

There's no code that already uses async/await and decimal context
managers/setters.  Any such code is broken right now, because decimal
context set in one coroutine affects them all.  Your example would
work only if foobar() is the only coroutine in your program.

>
> Given that one of the main motivations for yield-from
> (and subsequently async/await) was so that you *can*
> perform that kind of refactoring easily, that does
> indeed seem like a problem to me.

With the current PEP 550 semantics w.r.t. generators you still can
refactor them.  The following code would work as expected:

def nested_gen():
    # use some_context

def gen():
    with some_context():
        yield from nested_gen()

list(gen())

I'm saying that the following should not work:

def nested_gen():
    set_some_context()
    yield

def gen():
    # some_context is not set
    yield from nested_gen()
    # use some_context ???

list(gen())

IOW, any context set in generators should not leak to the caller,
ever.  This is the whole point of the PEP.
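The "never leaks" rule can be sketched with a toy chain of logical
contexts (a simulation of the intended semantics only -- ToyVar and the
explicit stack are invented for illustration, not the proposed API):

```python
# Toy model: a stack of logical contexts (dicts). A generator gets its
# own LC pushed while it runs; lookups chain outward, sets stay local.

class ToyVar:
    def __init__(self, stack):
        self._stack = stack

    def set(self, value):
        self._stack[-1][self] = value      # write to the topmost LC only

    def get(self, default=None):
        for lc in reversed(self._stack):   # chained lookup, inner to outer
            if self in lc:
                return lc[self]
        return default

stack = [{}]                   # the thread-level logical context
var = ToyVar(stack)

def gen():
    var.set('inner')           # lands in the generator's own LC
    yield
    assert var.get() == 'inner'

g = gen()
own_lc = {}                    # the LC the interpreter would give gen()
stack.append(own_lc)           # pushed when the generator resumes...
next(g)
stack.pop()                    # ...popped when it yields back

assert var.get() is None       # nothing leaked into the caller's context

stack.append(own_lc)           # resume: the generator still sees 'inner'
try:
    next(g)
except StopIteration:
    pass
stack.pop()
```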

As for async/await, see this:
https://mail.python.org/pipermail/python-dev/2017-August/149022.html

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Greg Ewing

Yury Selivanov wrote:

On Mon, Aug 28, 2017 at 1:33 PM, Stefan Krah  wrote:
[..]


* Context managers like decimal contexts, numpy.errstate, and
warnings.catch_warnings.


The decimal context works like this:

 1) There is a default context template (user settable).

 2) Whenever the first operation *in a new thread* occurs, the
thread-local context is initialized with a copy of the
template.


I don't find it very intuitive if setcontext() is somewhat local in
coroutines but they don't participate in some form of CLS.

You have to think about things like "what happens in a fresh thread
when a coroutine calls setcontext() before any other decimal operation
has taken place".



I'm sorry, I don't follow you here.

PEP 550 semantics:

setcontext() in regular code would set the context for the whole thread.

setcontext() in a coroutine/generator/async generator would set the
context for all the code it calls.



So perhaps Nathaniel is right that the PEP is not so useful for numpy
and decimal backwards compat.



Nathaniel's argument is pretty weak as I see it. He argues that some
people would take the following code:

def bar():
   # set decimal context

def foo():
   bar()
   # use the decimal context set in bar()

and blindly convert it to async/await:

async def bar():
   # set decimal context

async def foo():
   await bar()
   # use the decimal context set in bar()

And that it's a problem that it will stop working.

But almost nobody converts the code by simply slapping async/await on
top of it


Maybe not, but it will also affect refactoring of code
that is *already* using async/await, e.g. taking


    async def foobar():
        # set decimal context
        # use the decimal context we just set

and refactoring it as above.

Given that one of the main motivations for yield-from
(and subsequently async/await) was so that you *can*
perform that kind of refactoring easily, that does
indeed seem like a problem to me.

It seems to me that individual generators/coroutines
shouldn't automatically get a context of their own,
they should have to explicitly ask for one.

--
Greg




Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Eric Snow
On Sat, Aug 26, 2017 at 10:31 AM, Yury Selivanov
 wrote:
> On Sat, Aug 26, 2017 at 9:33 AM, Sven R. Kunze  wrote:
> [..]
>> Why not the same interface as thread-local storage? This has been the
>> question which bothered me from the beginning of PEP550. I don't understand
>> what inventing a new way of access buys us here.
>
> This was covered at length in these threads:
>
> https://mail.python.org/pipermail/python-ideas/2017-August/046888.html
> https://mail.python.org/pipermail/python-ideas/2017-August/046889.html

FWIW, it would still be nice to have a simple replacement for the
following under PEP 550:

class Context(threading.local):
...

Transitioning from there to PEP 550 is non-trivial.
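For reference, the threading.local pattern being replaced looks like
this (trace_id is a made-up field):

```python
import threading

# Each thread that touches ctx lazily gets its own independent
# attributes; this is the interface a PEP 550 analogue would replace.
class Context(threading.local):
    def __init__(self):
        self.trace_id = None

ctx = Context()
out = []

def worker():
    ctx.trace_id = 'thread-B'   # only visible in this thread
    out.append(ctx.trace_id)

ctx.trace_id = 'thread-A'
t = threading.Thread(target=worker)
t.start()
t.join()

assert out == ['thread-B']
assert ctx.trace_id == 'thread-A'   # main thread's value untouched
```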

-eric


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Eric Snow
On Sat, Aug 26, 2017 at 3:09 PM, Nathaniel Smith  wrote:
> You might be interested in these notes I wrote to motivate why we need
> a chain of namespaces, and why simple "async task locals" aren't
> sufficient:
>
> https://github.com/njsmith/pep-550-notes/blob/master/dynamic-scope.ipynb

Thanks, Nathaniel!  That helped me understand the rationale, though
I'm still unconvinced chained lookup is necessary for the stated goal
of the PEP.

(The rest of my reply is not specific to Nathaniel.)

tl;dr Please:
  * make the chained lookup aspect of the proposal more explicit (and
distinct) in the beginning sections of the PEP (or drop chained
lookup).
  * explain why normal frames do not get to take advantage of chained
lookup (or allow them to).



If I understood right, the problem is that we always want context vars
resolved relative to the current frame and then to the caller's frame
(and on up the call stack).  For generators, "caller" means the frame
that resumed the generator.  Since we don't know what frame will
resume the generator beforehand, we can't simply copy the current LC
when a generator is created and bind it to the generator's frame.

However, I'm still not convinced that's the semantics we need.  The
key statement is "and then to the caller's frame (and on up the call
stack)", i.e. chained lookup.  On the linked page Nathaniel explained
the position (quite clearly, thank you) using sys.exc_info() as an
example of async-local state.  I posit that that example isn't
particularly representative of what we actually need.  Isn't the point
of the PEP to provide an async-safe alternative to threading.local()?

Any existing code using threading.local() would not expect any kind of
chained lookup since threads don't have any.  So introducing chained
lookup in the PEP is unnecessary and consequently not ideal since it
introduces significant complexity.

As the PEP is currently written, chained lookup is a key part of the
proposal, though it does not explicitly express this.  I suppose this
is where my confusion has been.

At this point I think I understand one rationale for the chained
lookup functionality; it takes advantage of the cooperative scheduling
characteristics of generators, et al.  Unlike with threads, a
programmer can know the context under which a generator will be
resumed.  Thus it may be useful to the programmer to allow (or expect)
the resumed generator to fall back to the calling context.  However,
given the extra complexity involved, is there enough evidence that
such capability is sufficiently useful?  Could chained lookup be
addressed separately (in another PEP)?

Also, wouldn't it be equally useful to support chained lookup for
function calls?  Programmers have the same level of knowledge about
the context stack with function calls as with generators.  I would
expect evidence in favor of chained lookups for generators to also
favor the same for normal function calls.

-eric


[Python-Dev] PEP 550 v4: coroutine policy

2017-08-28 Thread Yury Selivanov
Long story short, I think we need to rollback our last decision to
prohibit context propagation up the call stack in coroutines.  In PEP
550 v3 and earlier, the following snippet would work just fine:

    var = new_context_var()

    async def bar():
        var.set(42)

    async def foo():
        await bar()
        assert var.get() == 42   # with previous PEP 550 semantics

    run_until_complete(foo())

But it would break if a user wrapped "await bar()" with "wait_for()":

    var = new_context_var()

    async def bar():
        var.set(42)

    async def foo():
        await wait_for(bar(), 1)
        assert var.get() == 42  # AssertionError !!!

    run_until_complete(foo())

Therefore, in the current (v4) version of the PEP, we made all
coroutines have their own LC (just like generators), which makes
both examples always raise an AssertionError.  This makes it easier
for async/await users to refactor their code: they simply cannot
propagate EC changes up the call stack, hence any coroutine can be
safely wrapped into a task.

Nathaniel and Stefan Krah argued on the mailing list that this change
in semantics makes the PEP harder to understand.  Essentially, context
changes propagate up the call stack for regular code, but not for
asynchronous.  For regular code the PEP behaves like TLS, but for
asynchronous it behaves like dynamic scoping.

IMO, on its own, this argument is not strong enough to rollback to the
older PEP 550 semantics, but I think I've discovered a stronger one:
asynchronous context managers.

With the current version (v4) of the PEP, it's not possible to set
context variables in __aenter__ and in
@contextlib.asynccontextmanager:

class Foo:

    async def __aenter__(self):
        context_var.set('aaa')  # won't be visible outside of __aenter__

So I guess we have no other choice other than reverting this spec
change for coroutines.  The very first example in this email should
start working again.

This means that PEP 550 will have a caveat for async code: don't rely
on context propagation up the call stack, unless you are writing
__aenter__ and __aexit__ that are guaranteed to be called without
being wrapped into a Task.

BTW, on the topic of dynamic scoping.  Context manager protocols (both
sync and async) are the fundamental reason why we couldn't implement
dynamic scoping in Python even if we wanted to.  With a classical
dynamic scoping in a functional language, __enter__ would have its own
scope, which the code in the 'with' block would never be able to
access.  Thus I propose to stop associating PEP 550 concepts with
(dynamic) scoping.

Thanks,
Yury


Re: [Python-Dev] [bpo-30421]: Pull request review

2017-08-28 Thread Mark Lawrence via Python-Dev

On 28/08/2017 17:46, Terry Reedy wrote:

On 8/28/2017 3:42 AM, Robert Schindler wrote:

Hello,

In May, I submitted a pull request that extends the functionality of
argparse.ArgumentParser.


The argparse maintainer, bethard (Peter Bethard), was not added to the 
nosy list.  And he does not seem to have been active lately -- his bpo 
profile does not list a github name.



To do so, I followed the steps described in the developers guide.

According to [1], I already pinged at GitHub but got no response. The
next step seems to be writing to this list.

I know that nobody is paid for reviewing submissions, but maybe it just
got overlooked?

You can find the pull request at [2].



[1] https://docs.python.org/devguide/pullrequest.html#reviewing
[2] https://github.com/python/cpython/pull/1698


Some core developer has to decide if the new feature should be added, 
and if so, what the API should be.  If Peter is not doing that, I don't 
know who will.  It is possible that the current design is intentional, 
rather than an oversight.  It does not make too much sense to review the 
implementation (the PR) until the design decisions are made.  In this 
case, the PR adds a feature not discussed on the bpo issue.




The bulk of the work on argparse in recent years has been done by 
paul.j3.  I have no idea whether or not he is classed as a core developer.


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence





Re: [Python-Dev] Pep 550 and None/masking

2017-08-28 Thread Guido van Rossum
On Mon, Aug 28, 2017 at 9:26 AM, Barry Warsaw  wrote:

> On Aug 28, 2017, at 11:50, Yury Selivanov  wrote:
>
> > For checking if a context variable has a value in the topmost LC, we
> > can add two new keyword arguments to the "ContextVar.lookup()" method:
> >
> >   ContextVar.lookup(*, default=None, topmost=False)
> >
> > If `topmost` is set to `True`, `lookup` will only check the topmost LC.
> >
> > For deleting a value from the topmost LC we can add a new
> > "ContextVar.delete()" method.
>
> +1
>
Yes, that's the only way. (Also forgive me for ever having proposed
lookup() -- I think we should go back to get(), set(), delete(). Things
will then be similar to getattr/setattr/delattr for class attributes.)
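A toy sketch of how that naming maps onto the getattr/setattr/delattr
analogy (the ContextVar class here is a single-slot stand-in invented
for illustration, not the draft API, which stores values in logical
contexts):

```python
class ContextVar:
    _MISSING = object()

    def __init__(self, name):
        self.name = name
        self._value = self._MISSING    # stands in for the topmost-LC slot

    def get(self, default=None):
        # like getattr(obj, name, default)
        return default if self._value is self._MISSING else self._value

    def set(self, value):
        # like setattr(obj, name, value)
        self._value = value

    def delete(self):
        # like delattr: raise if there is nothing to delete
        if self._value is self._MISSING:
            raise LookupError(self.name)
        self._value = self._MISSING

v = ContextVar('precision')
assert v.get() is None
v.set(42)
assert v.get() == 42
v.delete()
assert v.get('fallback') == 'fallback'
```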

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 1:33 PM, Stefan Krah  wrote:
[..]
>> * Context managers like decimal contexts, numpy.errstate, and
>> warnings.catch_warnings.
>
> The decimal context works like this:
>
>   1) There is a default context template (user settable).
>
>   2) Whenever the first operation *in a new thread* occurs, the
>  thread-local context is initialized with a copy of the
>  template.
>
>
> I don't find it very intuitive if setcontext() is somewhat local in
> coroutines but they don't participate in some form of CLS.
>
> You have to think about things like "what happens in a fresh thread
> when a coroutine calls setcontext() before any other decimal operation
> has taken place".

I'm sorry, I don't follow you here.

PEP 550 semantics:

setcontext() in regular code would set the context for the whole thread.

setcontext() in a coroutine/generator/async generator would set the
context for all the code it calls.

> So perhaps Nathaniel is right that the PEP is not so useful for numpy
> and decimal backwards compat.

Nathaniel's argument is pretty weak as I see it. He argues that some
people would take the following code:

def bar():
   # set decimal context

def foo():
   bar()
   # use the decimal context set in bar()

and blindly convert it to async/await:

async def bar():
   # set decimal context

async def foo():
   await bar()
   # use the decimal context set in bar()

And that it's a problem that it will stop working.

But almost nobody converts the code by simply slapping async/await on
top of it -- things don't work this way. It was never a goal for
async/await or asyncio, or even trio/curio.  Porting code to
async/await almost always requires a thoughtful rewrite.

In async/await, the above code is an *anti-pattern*.  It's super
fragile and can break by adding a timeout around "await bar".  There's
no workaround here.

Asynchronous code is fundamentally non-local and a more complex topic
on its own, with its own concepts: Asynchronous Tasks, timeouts,
cancellation, etc.  Fundamentally: "(synchronous code) !=
(asynchronous code) - (async/await)".

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Stefan Krah
On Mon, Aug 28, 2017 at 12:12:00PM -0400, Yury Selivanov wrote:
> On Mon, Aug 28, 2017 at 11:52 AM, Stefan Krah  wrote:
> [..]
> > But the state "leaks in" as per your previous example:
> >
> > async def bar():
> > # use decimal with context=ctx
> >
> > async def foo():
> >  decimal.setcontext(ctx)
> >  await bar()
> >
> >
> > IMHO it shouldn't with coroutine-local-storage (let's call it CLS). So,
> > as I see it, there's still some mixture between dynamic scoping and CLS
> > because in this example bar() is allowed to search the stack.
> 
> The whole proposal will then be mostly useless.  If we forget about
> the dynamic scoping (I don't know why it's being brought up all the
> time, TBH; nobody uses it, almost no language implements it)

Because a) it was brought up by proponents of the PEP early on python-ideas,
b) people desperately want a mental model of what is going on. :-)


> * Context managers like decimal contexts, numpy.errstate, and
> warnings.catch_warnings.

The decimal context works like this:

  1) There is a default context template (user settable).

  2) Whenever the first operation *in a new thread* occurs, the
 thread-local context is initialized with a copy of the
 template.
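Those two points can be shown with the real decimal API (the prec
values are arbitrary):

```python
import decimal
import threading

# 1) DefaultContext is the user-settable template.
decimal.DefaultContext.prec = 6

seen = []

def worker():
    # 2) The first decimal operation in a new thread initializes that
    # thread's context from a copy of the template.
    seen.append(decimal.getcontext().prec)
    decimal.getcontext().prec = 50    # stays thread-local

t = threading.Thread(target=worker)
t.start()
t.join()

assert seen == [6]
```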


I don't find it very intuitive if setcontext() is somewhat local in
coroutines but they don't participate in some form of CLS.

You have to think about things like "what happens in a fresh thread
when a coroutine calls setcontext() before any other decimal operation
has taken place".


So perhaps Nathaniel is right that the PEP is not so useful for numpy
and decimal backwards compat.


Stefan Krah





Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 12:43 PM, Ethan Furman  wrote:
> On 08/28/2017 09:12 AM, Yury Selivanov wrote:
>
>> If we forget about dynamic scoping (I don't know why it's being brought up
>> all the
>> time, TBH; nobody uses it, almost no language implements it)
>
>
> Probably because it's not lexical scoping, and possibly because it's
> possible for a function to be running with one EC on one call, and a
> different EC on the next -- hence, the EC it's using is dynamically
> determined.
>
> It seems to me the biggest difference between "true" dynamic scoping and
> what PEP 550 implements is the granularity: i.e. not every single function
> gets its own LC, just a select few: generators, async stuff, etc.
>
> Am I right?  (No CS degree here.)  If not, what are the differences?

Sounds right to me.

If PEP 550 was about adding true dynamic scoping, we couldn't use it
as a suitable context management solution for libraries like decimal.
For example, converting decimal/numpy to use new APIs would be a
totally backwards-incompatible change.

I still prefer using a "better TLS" analogy for PEP 550.  We'll likely
add a section summarizing differences between threading.local() and
new APIs (as suggested by Eric Snow).

Yury


Re: [Python-Dev] [bpo-30421]: Pull request review

2017-08-28 Thread Terry Reedy

On 8/28/2017 3:42 AM, Robert Schindler wrote:

Hello,

In May, I submitted a pull request that extends the functionality of
argparse.ArgumentParser.


The argparse maintainer, bethard (Peter Bethard), was not added to the 
nosy list.  And he does not seem to have been active lately -- his bpo 
profile does not list a github name.



To do so, I followed the steps described in the developers guide.

According to [1], I already pinged at GitHub but got no response. The
next step seems to be writing to this list.

I know that nobody is paid for reviewing submissions, but maybe it just
got overlooked?

You can find the pull request at [2].



[1] https://docs.python.org/devguide/pullrequest.html#reviewing
[2] https://github.com/python/cpython/pull/1698


Some core developer has to decide if the new feature should be added, 
and if so, what the API should be.  If Peter is not doing that, I don't 
know who will.  It is possible that the current design is intentional, 
rather than an oversight.  It does not make too much sense to review the 
implementation (the PR) until the design decisions are made.  In this 
case, the PR adds a feature not discussed on the bpo issue.


--
Terry Jan Reedy



Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Ethan Furman

On 08/28/2017 09:12 AM, Yury Selivanov wrote:


If we forget about dynamic scoping (I don't know why it's being brought up all 
the
time, TBH; nobody uses it, almost no language implements it)


Probably because it's not lexical scoping, and possibly because it's possible for a function to be running with one EC 
on one call, and a different EC on the next -- hence, the EC it's using is dynamically determined.


It seems to me the biggest difference between "true" dynamic scoping and what PEP 550 implements is the granularity: 
i.e. not every single function gets its own LC, just a select few: generators, async stuff, etc.


Am I right?  (No CS degree here.)  If not, what are the differences?

--
~Ethan~


Re: [Python-Dev] Pep 550 and None/masking

2017-08-28 Thread Barry Warsaw
On Aug 28, 2017, at 11:50, Yury Selivanov  wrote:

> For checking if a context variable has a value in the topmost LC, we
> can add two new keyword arguments to the "ContextVar.lookup()" method:
> 
>   ContextVar.lookup(*, default=None, topmost=False)
> 
> If `topmost` is set to `True`, `lookup` will only check the topmost LC.
> 
> For deleting a value from the topmost LC we can add a new
> "ContextVar.delete()" method.

+1

-Barry





Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 11:53 AM, Ivan Levkivskyi  wrote:
> A question appeared here about a simple mental model for PEP 550.
> It looks much clearer now than in the first version, but I would still like
> to clarify: can one say that PEP 550 just provides a more fine-grained version
> of threading.local() that works not only per thread, but even per coroutine
> within the same thread?


Simple model:

1. Values in the EC propagate down the call stack for both synchronous
and asynchronous code.

2. For regular functions/code EC works the same way as threading.local().
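The regular-code half of this model can be sketched with the stdlib
contextvars module that later grew out of this work (PEP 567 -- an
assumption here, since it postdates this thread, but it implements the
"propagates down, and behaves like threading.local() for regular code"
rule):

```python
import contextvars

var = contextvars.ContextVar('var', default='unset')

def callee():
    return var.get()            # sees whatever callers have set

def caller():
    var.set('set-by-caller')    # visible down the call stack...
    return callee()

assert caller() == 'set-by-caller'
assert var.get() == 'set-by-caller'   # ...and, in regular code, to the caller too
```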

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 11:52 AM, Stefan Krah  wrote:
[..]
> But the state "leaks in" as per your previous example:
>
> async def bar():
> # use decimal with context=ctx
>
> async def foo():
>  decimal.setcontext(ctx)
>  await bar()
>
>
> IMHO it shouldn't with coroutine-local-storage (let's call it CLS). So,
> as I see it, there's still some mixture between dynamic scoping and CLS
> because in this example bar() is allowed to search the stack.

The whole proposal will then be mostly useless.  If we forget about
the dynamic scoping (I don't know why it's being brought up all the
time, TBH; nobody uses it, almost no language implements it) the
current proposal is well balanced and solves multiple problems.  Three
points listed in the rationale section:

* Context managers like decimal contexts, numpy.errstate, and
warnings.catch_warnings.
* Request-related data, such as security tokens and request data in
web applications, language context for gettext etc.
* Profiling, tracing, and logging in large code bases.

Two of them require context propagation *down* the stack of
coroutines.  What the latest PEP 550 revision does is prohibit context
propagation *up* the stack in coroutines (a requirement to make
async code refactorable and easy to reason about).

Propagation of context "up" the stack in regular code is allowed with
threading.local(), and everybody is used to it.  Doing that for
coroutines doesn't work, because of the reasons covered here:
https://www.python.org/dev/peps/pep-0550/#coroutines-and-asynchronous-tasks

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Ivan Levkivskyi
A question appeared here about a simple mental model for PEP 550.
It looks much clearer now than in the first version, but I would still
like to clarify: can one say that PEP 550 just provides a more fine-grained
version of threading.local() that works not only per thread, but even per
coroutine within the same thread?

--
Ivan



On 28 August 2017 at 17:29, Yury Selivanov  wrote:

> On Mon, Aug 28, 2017 at 11:26 AM, Ethan Furman  wrote:
> > On 08/28/2017 04:19 AM, Stefan Krah wrote:
> >
> >> What about this?
> >>
> >> async def bar():
> >>  setcontext(Context(prec=1))
> >>  for i in range(10):
> >>  await asyncio.sleep(1)
> >>  yield i
> >>
> >> async def foo():
> >>  async for i in bar():
> >>  # ctx.prec=1?
> >>  print(Decimal(100) / 3)
> >
> >
> > If I understand correctly, ctx.prec is whatever the default is, because
> foo
> > comes before bar on the stack, and after the current value for i is
> grabbed
> > bar is no longer executing, and therefore no longer on the stack.  I hope
> > I'm right.  ;)
>
> You're right!
>
> Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Stefan Krah
On Mon, Aug 28, 2017 at 11:23:12AM -0400, Yury Selivanov wrote:
> On Mon, Aug 28, 2017 at 7:19 AM, Stefan Krah  wrote:
> > Okay, so if I understand this correctly we actually will not have dynamic
> > scoping for regular functions:  bar() has returned, so the new context
> > would not be found on the stack with proper dynamic scoping.
> 
> Correct. Although I would avoid associating PEP 550 with dynamic
> scoping entirely, as we never intended to implement it.

Good, I agree it does not make sense.


> [..]
> > What about this?
> >
> > async def bar():
> > setcontext(Context(prec=1))
> > for i in range(10):
> > await asyncio.sleep(1)
> > yield i
> >
> > async def foo():
> > async for i in bar():
> > # ctx.prec=1?
> > print(Decimal(100) / 3)
> >
> >
> >
> > I'm searching for some abstract model to reason about the scopes.
> 
> Whatever is set in coroutines, generators, and async generators does
> not leak out.  In the above example, "prec=1" will only be set inside
> "bar()", and "foo()" will not see that.  (Same will happen for a
> regular function and a generator).

But the state "leaks in" as per your previous example: 

async def bar():
    # use decimal with context=ctx

async def foo():
    decimal.setcontext(ctx)
    await bar()


IMHO it shouldn't with coroutine-local-storage (let's call it CLS). So,
as I see it, there's still some mixture between dynamic scoping and CLS
because in this example bar() is allowed to search the stack.


Stefan Krah





Re: [Python-Dev] Pep 550 and None/masking

2017-08-28 Thread Yury Selivanov
On Sun, Aug 27, 2017 at 4:01 PM, Nathaniel Smith  wrote:
> I believe that the current status is:
>
> - assigning None isn't treated specially – it does mask any underlying
> values (which I think is what we want)

Correct.

>
> - there is currently no way to "unmask"
>
> - but it's generally agreed that there should be a way to do that, at least
> in some cases, to handle the save/restore issue I raised. It's just that
> Yury & Elvis wanted to deal with restructuring the PEP first before doing
> more work on the api details.

Yes. I think it's a good time to start a discussion about this; I can
list a couple of ideas here.

For checking if a context variable has a value in the topmost LC, we
can add two new keyword arguments to the "ContextVar.lookup()" method:

   ContextVar.lookup(*, default=None, topmost=False)

If `topmost` is set to `True`, `lookup` will only check the topmost LC.

For deleting a value from the topmost LC we can add a new
"ContextVar.delete()" method.
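[Editor's note: the two proposals above can be made concrete with a toy
model. The chain of logical contexts (LCs) is represented as a list of
dicts, topmost last; `lookup(topmost=True)` and `delete()` follow the
semantics sketched in this message. This is a hypothetical sketch of the
draft API — none of these names shipped in this form.]

```python
# Toy model of the proposed PEP 550 masking/unmasking semantics.
# The chain is a list of logical contexts (dicts), topmost last.

class ContextVar:
    def __init__(self, name, chain):
        self.name = name
        self.chain = chain              # list of dicts acting as LCs

    def lookup(self, *, default=None, topmost=False):
        # topmost=True checks only the topmost LC, as proposed above.
        lcs = self.chain[-1:] if topmost else reversed(self.chain)
        for lc in lcs:
            if self.name in lc:
                return lc[self.name]
        return default

    def set(self, value):
        self.chain[-1][self.name] = value

    def delete(self):
        # Drop the value from the topmost LC only, "unmasking" any
        # value an outer LC may hold.
        self.chain[-1].pop(self.name, None)

chain = [{'v': 'outer'}, {}]            # outer LC already has a value
var = ContextVar('v', chain)
var.set('inner')                        # masks 'outer' in the topmost LC
masked = var.lookup()                   # 'inner'
var.delete()                            # unmask
unmasked = var.lookup()                 # 'outer' again
top_only = var.lookup(topmost=True)     # None: topmost LC holds no value
```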

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 11:26 AM, Ethan Furman  wrote:
> On 08/28/2017 04:19 AM, Stefan Krah wrote:
>
>> What about this?
>>
>> async def bar():
>>     setcontext(Context(prec=1))
>>     for i in range(10):
>>         await asyncio.sleep(1)
>>         yield i
>>
>> async def foo():
>>     async for i in bar():
>>         # ctx.prec=1?
>>         print(Decimal(100) / 3)
>
>
> If I understand correctly, ctx.prec is whatever the default is, because foo
> comes before bar on the stack, and after the current value for i is grabbed
> bar is no longer executing, and therefore no longer on the stack.  I hope
> I'm right.  ;)

You're right!

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Ethan Furman

On 08/28/2017 04:19 AM, Stefan Krah wrote:


What about this?

async def bar():
    setcontext(Context(prec=1))
    for i in range(10):
        await asyncio.sleep(1)
        yield i

async def foo():
    async for i in bar():
        # ctx.prec=1?
        print(Decimal(100) / 3)


If I understand correctly, ctx.prec is whatever the default is, because foo comes before bar on the stack, and after the 
current value for i is grabbed bar is no longer executing, and therefore no longer on the stack.  I hope I'm right.  ;)


--
~Ethan~


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Sat, Aug 26, 2017 at 4:45 PM, francismb  wrote:
[..]

> it's by design that the execution context for new threads to be empty or
> should it be possible to set it to some initial value? Like e.g:
>
>  var = new_context_var('init')
>
>  def sub():
>      assert var.lookup() == 'init'
>      var.set('sub')
>
>  def main():
>      var.set('main')
>
>      thread = threading.Thread(target=sub)
>      thread.start()
>      thread.join()
>
>      assert var.lookup() == 'main'


Yes, it's by design.

With PEP 550 APIs it's easy to subclass threading.Thread or
concurrent.futures.ThreadPoolExecutor to make them capture the EC.
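[Editor's note: PEP 550's own APIs never shipped, but the subclassing
idea can be sketched with the contextvars module that eventually landed
(PEP 567), which captures and runs execution contexts in much the same
way.]

```python
import contextvars
import threading

var = contextvars.ContextVar('var', default='empty')

class ContextThread(threading.Thread):
    """A Thread that runs its target inside a copy of the creator's EC."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._ctx = contextvars.copy_context()   # captured at creation time

    def run(self):
        self._ctx.run(super().run)               # run target under that EC

seen = []
var.set('main')

plain = threading.Thread(target=lambda: seen.append(var.get()))
plain.start(); plain.join()          # new threads start empty -> 'empty'

capturing = ContextThread(target=lambda: seen.append(var.get()))
capturing.start(); capturing.join()  # captured EC -> 'main'
```

The plain Thread illustrates the by-design behavior discussed above (new
threads see an empty context), while the subclass opts in to capture.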

Yury


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Yury Selivanov
On Mon, Aug 28, 2017 at 7:19 AM, Stefan Krah  wrote:
> On Sun, Aug 27, 2017 at 11:19:20AM -0400, Yury Selivanov wrote:
>> On Sun, Aug 27, 2017 at 6:08 AM, Stefan Krah  wrote:
>> > On Sat, Aug 26, 2017 at 04:13:24PM -0700, Nathaniel Smith wrote:
>> >> It's perfectly reasonable to have a script where you call
>> >> decimal.setcontext or np.seterr somewhere at the top to set the
>> >> defaults for the rest of the script.
>> >
>> > +100.  The only thing that makes sense for decimal is to change
>> > localcontext() to be automatically async-safe while preserving the
>> > rest of the semantics.
>>
>> TBH Nathaniel's argument isn't entirely correct.
>>
>> With the semantics defined in PEP 550 v4, you still can set decimal
>> context on top of your file, in your async functions etc.
>>
>> and this:
>>
>> def bar():
>>     decimal.setcontext(ctx)
>>
>> def foo():
>>     bar()
>>     # use decimal with context=ctx
>
> Okay, so if I understand this correctly we actually will not have dynamic
> scoping for regular functions:  bar() has returned, so the new context
> would not be found on the stack with proper dynamic scoping.

Correct. Although I would avoid associating PEP 550 with dynamic
scoping entirely, as we never intended to implement it.

[..]
> What about this?
>
> async def bar():
>     setcontext(Context(prec=1))
>     for i in range(10):
>         await asyncio.sleep(1)
>         yield i
>
> async def foo():
>     async for i in bar():
>         # ctx.prec=1?
>         print(Decimal(100) / 3)
>
>
>
> I'm searching for some abstract model to reason about the scopes.

Whatever is set in coroutines, generators, and async generators does
not leak out.  In the above example, "prec=1" will only be set inside
"bar()", and "foo()" will not see that.  (Same will happen for a
regular function and a generator).
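[Editor's note: the "does not leak out" rule can be modeled with an
explicit LC stack — a generator's own LC is pushed while its frame runs
and popped at every suspension. This is a hypothetical sketch of the
PEP's semantics, not a real API; `with_own_lc`, `lookup`, and `set_var`
are illustrative names.]

```python
# Toy model: the "thread" has a stack ("chain") of logical contexts.
# A wrapped generator gets its own LC, active only while its frame runs,
# so its writes never become visible to the caller.

chain = [{}]                      # outermost LC of the "thread"

def lookup(name, default=None):
    for lc in reversed(chain):    # topmost LC wins
        if name in lc:
            return lc[name]
    return default

def set_var(name, value):
    chain[-1][name] = value       # writes go to the topmost LC

def with_own_lc(genfunc):
    """Give the generator its own LC, pushed on resume, popped on suspend."""
    def wrapper(*args, **kwargs):
        gen = genfunc(*args, **kwargs)
        lc = {}
        def driven():
            while True:
                chain.append(lc)          # push on resume
                try:
                    item = next(gen)
                except StopIteration:
                    return
                finally:
                    chain.pop()           # pop on suspend/return
                yield item
        return driven()
    return wrapper

@with_own_lc
def bar():
    set_var('prec', 1)            # lands in bar()'s own LC
    assert lookup('prec') == 1    # visible while bar()'s frame runs
    for i in range(3):
        yield i

# The caller never observes prec=1 between resumptions:
results = [(i, lookup('prec')) for i in bar()]
```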

Yury


[Python-Dev] [bpo-30421]: Pull request review

2017-08-28 Thread Robert Schindler
Hello,

In May, I submitted a pull request that extends the functionality of
argparse.ArgumentParser.

To do so, I followed the steps described in the developers guide.

According to [1], I already pinged at GitHub but got no response. The
next step seems to be writing to this list.

I know that nobody is paid for reviewing submissions, but maybe it just
got overlooked?

You can find the pull request at [2].

Thanks in advance for any feedback.

Best regards
Robert

[1] https://docs.python.org/devguide/pullrequest.html#reviewing
[2] https://github.com/python/cpython/pull/1698




Re: [Python-Dev] Pep 550 module

2017-08-28 Thread Joao S. O. Bueno
Well, this talk may be a bit of bike-shedding, but

+1 for a separate module/submodule

And full -1 for something named

"dynscopevars"

That word is unreadable, barely mnemonic, and plain "ugly" (I know that
this is subjective, but it is just that :-) )

Why not just "execution_context"  or "sys.execution_context" ?

"from execution_context import Var, lookup, set, LogicalContext, run_with  "






On 27 August 2017 at 12:51, Jim J. Jewett  wrote:

> I think there is general consensus that this should go in a module other
> than sys. (At least a submodule.)
>
> The specific names are still To Be Determined, but I suspect seeing the
> functions and objects as part of a named module will affect what works.
>
> So I am requesting that the next iteration just pick a module name, and
> let us see how that looks.  E.g
>
> import dynscopevars
>
> user=dynscopevars.Var ("username")
>
> myscope=dynscopevars.get_current_scope()
>
> childscope=dynscopevars.Scope (parent=myscope,user="bob")
>
>
> -jJ


Re: [Python-Dev] PEP 550 v4

2017-08-28 Thread Stefan Krah
On Sun, Aug 27, 2017 at 11:19:20AM -0400, Yury Selivanov wrote:
> On Sun, Aug 27, 2017 at 6:08 AM, Stefan Krah  wrote:
> > On Sat, Aug 26, 2017 at 04:13:24PM -0700, Nathaniel Smith wrote:
> >> It's perfectly reasonable to have a script where you call
> >> decimal.setcontext or np.seterr somewhere at the top to set the
> >> defaults for the rest of the script.
> >
> > +100.  The only thing that makes sense for decimal is to change
> > localcontext() to be automatically async-safe while preserving the
> > rest of the semantics.
> 
> TBH Nathaniel's argument isn't entirely correct.
> 
> With the semantics defined in PEP 550 v4, you still can set decimal
> context on top of your file, in your async functions etc.
> 
> and this:
> 
> def bar():
>     decimal.setcontext(ctx)
>
> def foo():
>     bar()
>     # use decimal with context=ctx

Okay, so if I understand this correctly we actually will not have dynamic
scoping for regular functions:  bar() has returned, so the new context
would not be found on the stack with proper dynamic scoping.


> and this:
> 
> async def bar():
>     # use decimal with context=ctx
>
> async def foo():
>     decimal.setcontext(ctx)
>     await bar()
> 
> The only thing that will not work is this (ex1):
>
> async def bar():
>     decimal.setcontext(ctx)
>
> async def foo():
>     await bar()
>     # use decimal with context=ctx

Here we do have dynamic scoping.



> Speaking of (ex1), there's an example that didn't work in any PEP 550 version:
> 
> def bar():
>     decimal.setcontext(ctx)
>     yield
>
> async def foo():
>     list(bar())
>     # use decimal with context=ctx

What about this?

async def bar():
    setcontext(Context(prec=1))
    for i in range(10):
        await asyncio.sleep(1)
        yield i

async def foo():
    async for i in bar():
        # ctx.prec=1?
        print(Decimal(100) / 3)



I'm searching for some abstract model to reason about the scopes.



Stefan Krah



