Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Nick Coghlan
On 17 October 2017 at 15:02, Nick Coghlan  wrote:

> On 17 October 2017 at 14:31, Guido van Rossum  wrote:
>
>> No, that version just defers to magic in ContextVar.get/set, whereas what
>> I'd like to see is that the latter are just implemented in terms of
>> manipulating the mapping directly. The only operations for which speed
>> matters would be __getitem__ and __setitem__; most other methods just defer
>> to those. __delitem__ must also be a primitive, as must __iter__ and
>> __len__ -- but those don't need to be as speedy (however __delitem__ must
>> really work!).
>>
>
> To have the mapping API at the base of the design, we'd want to go back to
> using the ContextKey version of the API as the core primitive (to ensure we
> don't get name conflicts between different modules and packages), and then
> have ContextVar be a convenience wrapper that always accesses the currently
> active context:
>
> class ContextKey:
>     ...
>
> class ExecutionContext:
>     ...
>
> class ContextVar:
>     def __init__(self, name):
>         self._key = ContextKey(name)
>
>     def get(self):
>         return get_execution_context()[self._key]
>
>     def set(self, value):
>         get_execution_context()[self._key] = value
>
>     def delete(self):
>         del get_execution_context()[self._key]
>

Tangent: if we do go this way, it actually maps pretty nicely to the idea
of a "threading.ThreadVar" API that wraps threading.local():

import threading

class ThreadVar:
    def __init__(self, name):
        self._name = name
        self._storage = threading.local()

    def get(self):
        return self._storage.value

    def set(self, value):
        self._storage.value = value

    def delete(self):
        del self._storage.value

(Note: real implementations of either idea would need to pay more attention
to producing clear exception messages and instance representations)
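For example, a quick usage sketch (hypothetical, built on the ThreadVar
class above): every thread sees only the value it set itself:

import threading

counter = ThreadVar("counter")

def worker():
    # Each thread writes to its own threading.local slot
    counter.set(0)
    counter.set(counter.get() + 1)
    # Always prints 1, regardless of what the other threads do
    print(threading.current_thread().name, counter.get())

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()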

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Nick Coghlan
On 17 October 2017 at 14:31, Guido van Rossum  wrote:

> No, that version just defers to magic in ContextVar.get/set, whereas what
> I'd like to see is that the latter are just implemented in terms of
> manipulating the mapping directly. The only operations for which speed
> matters would be __getitem__ and __setitem__; most other methods just defer
> to those. __delitem__ must also be a primitive, as must __iter__ and
> __len__ -- but those don't need to be as speedy (however __delitem__ must
> really work!).
>

To have the mapping API at the base of the design, we'd want to go back to
using the ContextKey version of the API as the core primitive (to ensure we
don't get name conflicts between different modules and packages), and then
have ContextVar be a convenience wrapper that always accesses the currently
active context:

class ContextKey:
    ...

class ExecutionContext:
    ...

class ContextVar:
    def __init__(self, name):
        self._key = ContextKey(name)

    def get(self):
        return get_execution_context()[self._key]

    def set(self, value):
        get_execution_context()[self._key] = value

    def delete(self):
        del get_execution_context()[self._key]

While I'd defer to Yury on the technical feasibility, I'd expect that
version could probably be made to work *if* you were amenable to some of
the mapping methods on the execution context raising RuntimeError in order
to avoid locking ourselves in to particular design decisions before we're
ready to make them.

The reason I say that is because one of the biggest future-proofing
concerns when it comes to exposing a mapping as the lowest API layer is
that it makes the following code pattern possible:

ec = get_execution_context()
# Change to a different execution context
ec[key] = new_value

The appropriate semantics for that case (modifying a context that isn't the
currently active one) are *really* unclear, which is why PEP 550 structures
the API to prevent it (context variables can only manipulate the active
context, not arbitrary contexts).

However, even with a mapping at the lowest layer, a similar API constraint
could still be introduced via a runtime guard in the mutation methods:

    if get_execution_context() is not self:
        raise RuntimeError("Cannot modify an inactive execution context")

That way, to actually mutate a different context, you'd still have to
switch contexts, just as you have to switch threads in C if you want to
modify another thread's thread specific storage.
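Fleshing that guard out a little, a minimal sketch (assuming a
get_execution_context() accessor and a plain dict as the underlying
storage; not a proposed implementation):

import collections.abc

class GuardedExecutionContext(collections.abc.MutableMapping):
    # Sketch only: a mapping that refuses mutation while inactive
    def __init__(self):
        self._storage = {}

    def _check_active(self):
        if get_execution_context() is not self:
            raise RuntimeError("Cannot modify an inactive execution context")

    def __getitem__(self, key):
        return self._storage[key]

    def __setitem__(self, key, value):
        self._check_active()
        self._storage[key] = value

    def __delitem__(self, key):
        self._check_active()
        del self._storage[key]

    def __iter__(self):
        return iter(self._storage)

    def __len__(self):
        return len(self._storage)

Reads deliberately stay unguarded here, matching the idea that only the
mutation methods need the RuntimeError escape hatch.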

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Guido van Rossum
No, that version just defers to magic in ContextVar.get/set, whereas what
I'd like to see is that the latter are just implemented in terms of
manipulating the mapping directly. The only operations for which speed
matters would be __getitem__ and __setitem__; most other methods just defer
to those. __delitem__ must also be a primitive, as must __iter__ and
__len__ -- but those don't need to be as speedy (however __delitem__ must
really work!).

On Mon, Oct 16, 2017 at 9:09 PM, Nick Coghlan  wrote:

> On 17 October 2017 at 03:00, Guido van Rossum  wrote:
>
>> On Mon, Oct 16, 2017 at 9:11 AM, Yury Selivanov 
>> wrote:
>>
>>> > I agree, but I don't see how making the type a subtype (or duck type)
>>> of
>>> > MutableMapping prevents any of those strategies. (Maybe you were
>>> equating
>>> > MutableMapping with "subtype of dict"?)
>>>
>>> Question: why do we want EC objects to be mappings?  I'd rather make
>>> them opaque, which will result in less code and make it more
>>> future-proof.
>>>
>>
>> I'd rather have them mappings, since that's what they represent. It helps
>> users understand what's going on behind the scenes, just like modules,
>> classes and (most) instances have a `__dict__` that you can look at and (in
>> most cases) manipulate.
>>
>
> Perhaps rather than requiring that EC's *be* mappings, we could instead
> require that they expose a mapping API as their __dict__ attribute, similar
> to the way class dictionaries work?
>
> Then the latter could return a proxy that translated mapping operations
> into the appropriate method calls on the ContextVar being used as the key.
>
> Something like:
>
> class ExecutionContextProxy:
>     def __init__(self, ec):
>         self._ec = ec
>
>     # Omitted from the methods below: checking if this EC is the
>     # active EC, and implicitly switching to it if it isn't (for
>     # read ops) or complaining (for write ops)
>
>     # Individual operations call methods on the key itself
>     def __getitem__(self, key):
>         return key.get()
>     def __setitem__(self, key, value):
>         if not isinstance(key, ContextVar):
>             raise TypeError(
>                 "Execution context keys must be context variables")
>         key.set(value)
>     def __delitem__(self, key):
>         key.delete()
>
>     # The key set would be the context vars assigned in the active
>     # context
>     def __contains__(self, key):
>         # Note: PEP 550 currently calls the below method ec.vars(),
>         # but I just realised that's confusing, given that the
>         # vars() builtin returns a mapping
>         return key in self._ec.assigned_vars()
>     def __iter__(self):
>         return iter(self._ec.assigned_vars())
>     def keys(self):
>         return self._ec.assigned_vars()
>
>     # These are the simple iterator versions of values() and items()
>     # but they could be enhanced to return dynamic views instead
>     def values(self):
>         for k in self._ec.assigned_vars():
>             yield k.get()
>     def items(self):
>         for k in self._ec.assigned_vars():
>             yield (k, k.get())
>
> The nice thing about defining the mapping API as a wrapper around
> otherwise opaque interpreter internals is that it makes it clearer which
> operations are expected to matter for runtime performance (i.e. the ones
> handled by the ExecutionContext itself), and which are mainly being
> provided as intuition pumps for humans attempting to understand how
> execution contexts actually work (whether for debugging purposes, or simply
> out of curiosity).
>
> If there's a part of the mapping proxy API where we don't have a strong
> intuition about how it should work, then instead of attempting to guess
> suitable semantics, we can instead define it as raising RuntimeError for
> now, and then wait and see if the appropriate semantics become clearer over
> time.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Nick Coghlan
On 17 October 2017 at 03:00, Guido van Rossum  wrote:

> On Mon, Oct 16, 2017 at 9:11 AM, Yury Selivanov 
> wrote:
>
>> > I agree, but I don't see how making the type a subtype (or duck type) of
>> > MutableMapping prevents any of those strategies. (Maybe you were
>> equating
>> > MutableMapping with "subtype of dict"?)
>>
>> Question: why do we want EC objects to be mappings?  I'd rather make
>> them opaque, which will result in less code and make it more
>> future-proof.
>>
>
> I'd rather have them mappings, since that's what they represent. It helps
> users understand what's going on behind the scenes, just like modules,
> classes and (most) instances have a `__dict__` that you can look at and (in
> most cases) manipulate.
>

Perhaps rather than requiring that EC's *be* mappings, we could instead
require that they expose a mapping API as their __dict__ attribute, similar
to the way class dictionaries work?

Then the latter could return a proxy that translated mapping operations
into the appropriate method calls on the ContextVar being used as the key.

Something like:

class ExecutionContextProxy:
    def __init__(self, ec):
        self._ec = ec

    # Omitted from the methods below: checking if this EC is the
    # active EC, and implicitly switching to it if it isn't (for
    # read ops) or complaining (for write ops)

    # Individual operations call methods on the key itself
    def __getitem__(self, key):
        return key.get()
    def __setitem__(self, key, value):
        if not isinstance(key, ContextVar):
            raise TypeError(
                "Execution context keys must be context variables")
        key.set(value)
    def __delitem__(self, key):
        key.delete()

    # The key set would be the context vars assigned in the active
    # context
    def __contains__(self, key):
        # Note: PEP 550 currently calls the below method ec.vars(),
        # but I just realised that's confusing, given that the
        # vars() builtin returns a mapping
        return key in self._ec.assigned_vars()
    def __iter__(self):
        return iter(self._ec.assigned_vars())
    def keys(self):
        return self._ec.assigned_vars()

    # These are the simple iterator versions of values() and items()
    # but they could be enhanced to return dynamic views instead
    def values(self):
        for k in self._ec.assigned_vars():
            yield k.get()
    def items(self):
        for k in self._ec.assigned_vars():
            yield (k, k.get())

The nice thing about defining the mapping API as a wrapper around otherwise
opaque interpreter internals is that it makes it clearer which operations
are expected to matter for runtime performance (i.e. the ones handled by
the ExecutionContext itself), and which are mainly being provided as
intuition pumps for humans attempting to understand how execution contexts
actually work (whether for debugging purposes, or simply out of curiosity).

If there's a part of the mapping proxy API where we don't have a strong
intuition about how it should work, then instead of attempting to guess
suitable semantics, we can instead define it as raising RuntimeError for
now, and then wait and see if the appropriate semantics become clearer over
time.
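To illustrate, here's how user code might exercise that proxy (a
hypothetical sketch; assumes the proxy is what an EC exposes as its
mapping API, and reuses the ContextVar and get_execution_context names
from above):

timeout = ContextVar('timeout')
ec_map = ExecutionContextProxy(get_execution_context())

ec_map[timeout] = 30          # same effect as timeout.set(30)
assert ec_map[timeout] == 30  # same effect as timeout.get()
assert timeout in ec_map      # i.e. assigned in the active context

for var, value in ec_map.items():
    print(var, value)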

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Guido van Rossum
Hm. I really like the idea that you can implement and demonstrate all of
ContextVar by manipulating the underlying mapping. And compared to
the effort of implementing the HAMT itself (including all its edge cases),
implementing the mutable mapping API should be considered
recreational programming.

On Mon, Oct 16, 2017 at 5:57 PM, Nathaniel Smith  wrote:

> On Mon, Oct 16, 2017 at 8:49 AM, Guido van Rossum 
> wrote:
> > On Sun, Oct 15, 2017 at 10:26 PM, Nathaniel Smith  wrote:
> >>
> >> On Sun, Oct 15, 2017 at 10:10 PM, Guido van Rossum 
> >> wrote:
> >> > Yes, that's what I meant by "ignoring generators". And I'd like there
> to
> >> > be
> >> > a "current context" that's a per-thread MutableMapping with ContextVar
> >> > keys.
> >> > Maybe there's not much more to it apart from naming the APIs for
> getting
> >> > and
> >> > setting it? To be clear, I am fine with this being a specific subtype
> of
> >> > MutableMapping. But I don't see much benefit in making it more
> abstract
> >> > than
> >> > that.
> >>
> >> We don't need it to be abstract (it's fine to have a single concrete
> >> mapping type that we always use internally), but I think we do want it
> >> to be opaque (instead of exposing the MutableMapping interface, the
> >> only way to get/set specific values should be through the ContextVar
> >> interface). The advantages are:
> >>
> >> - This allows C level caching of values in ContextVar objects (in
> >> particular, funneling mutations through a limited API makes cache
> >> invalidation *much* easier)
> >
> >
> > Well the MutableMapping could still be a proxy or something that
> invalidates
> > the cache when mutated. That's why I said it should be a single concrete
> > mapping type. (It also doesn't have to derive from MutableMapping -- it's
> > sufficient for it to be a duck type for one, or perhaps some Python-level
> > code could `register()` it.)
>
> MutableMapping is just a really complicated interface -- you have to
> deal with iterator invalidation and popitem and implementing view
> classes and all that. It seems like a lot of code for a feature that
> no-one seems to worry about missing right now. (In fact, I suspect the
> extra code required to implement the full MutableMapping interface on
> top of a basic HAMT type is larger than the extra code to implement
> the current PEP 550 draft's chaining semantics on top of this proposal
> for a minimal PEP 550.)
>
> What do you think of something like:
>
> class Context:
>     def __init__(self, /, init: MutableMapping[ContextVar, object] = {}):
>         ...
>
>     def as_dict(self) -> Dict[ContextVar, object]:
>         "Returns a snapshot of the internal state."
>
>     def copy(self) -> Context:
>         "Equivalent to (but maybe faster than) Context(self.as_dict())."
>
> I like the idea of making it possible to set up arbitrary Contexts and
> introspect them, because sometimes you do need to debug weird issues
> or do some wacky stuff deep in the guts of a coroutine scheduler, but
> this would give us that without implementing MutableMapping's 17
> methods and 7 helper classes.
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org
>



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Ethan Furman

On 10/16/2017 05:29 PM, Nathaniel Smith wrote:

On Mon, Oct 16, 2017 at 11:12 AM, Ethan Furman wrote:



What would be really nice is to have attribute access like thread locals.
Instead of working with individual ContextVars you grab the LocalContext and
access the vars as attributes.  I don't recall reading in the PEP why this
is a bad idea.


You're mixing up levels -- the way threading.local objects work is
that there's one big dict that's hidden inside the interpreter (in the
ThreadState), and it holds a separate little dict for each
threading.local. The dict holding ContextVars is similar to the big
dict; a threading.local itself is like a ContextVar that holds a dict.
(And the reason it's this way is that it's easy to build either
version on top of the other, and we did some survey of threading.local
usage and the ContextVar style usage was simpler in the majority of
cases.)

For threading.local there's no way to get at the big dict at all from
Python; it's hidden inside the C APIs and threading internals. I'm
guessing you've never missed this :-). For ContextVars we can't hide
it that much, because async frameworks need to be able to swap the
current dict when switching tasks and clone it when starting a new
task, but those are the only absolutely necessary operations.


Ah, thank you.

--
~Ethan~



Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Nathaniel Smith
On Mon, Oct 16, 2017 at 8:49 AM, Guido van Rossum  wrote:
> On Sun, Oct 15, 2017 at 10:26 PM, Nathaniel Smith  wrote:
>>
>> On Sun, Oct 15, 2017 at 10:10 PM, Guido van Rossum 
>> wrote:
>> > Yes, that's what I meant by "ignoring generators". And I'd like there to
>> > be
>> > a "current context" that's a per-thread MutableMapping with ContextVar
>> > keys.
>> > Maybe there's not much more to it apart from naming the APIs for getting
>> > and
>> > setting it? To be clear, I am fine with this being a specific subtype of
>> > MutableMapping. But I don't see much benefit in making it more abstract
>> > than
>> > that.
>>
>> We don't need it to be abstract (it's fine to have a single concrete
>> mapping type that we always use internally), but I think we do want it
>> to be opaque (instead of exposing the MutableMapping interface, the
>> only way to get/set specific values should be through the ContextVar
>> interface). The advantages are:
>>
>> - This allows C level caching of values in ContextVar objects (in
>> particular, funneling mutations through a limited API makes cache
>> invalidation *much* easier)
>
>
> Well the MutableMapping could still be a proxy or something that invalidates
> the cache when mutated. That's why I said it should be a single concrete
> mapping type. (It also doesn't have to derive from MutableMapping -- it's
> sufficient for it to be a duck type for one, or perhaps some Python-level
> code could `register()` it.)

MutableMapping is just a really complicated interface -- you have to
deal with iterator invalidation and popitem and implementing view
classes and all that. It seems like a lot of code for a feature that
no-one seems to worry about missing right now. (In fact, I suspect the
extra code required to implement the full MutableMapping interface on
top of a basic HAMT type is larger than the extra code to implement
the current PEP 550 draft's chaining semantics on top of this proposal
for a minimal PEP 550.)

What do you think of something like:

class Context:
    def __init__(self, /, init: MutableMapping[ContextVar, object] = {}):
        ...

    def as_dict(self) -> Dict[ContextVar, object]:
        "Returns a snapshot of the internal state."

    def copy(self) -> Context:
        "Equivalent to (but maybe faster than) Context(self.as_dict())."

I like the idea of making it possible to set up arbitrary Contexts and
introspect them, because sometimes you do need to debug weird issues
or do some wacky stuff deep in the guts of a coroutine scheduler, but
this would give us that without implementing MutableMapping's 17
methods and 7 helper classes.
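For example, a debugging helper along those lines (a sketch assuming the
API above, plus a hypothetical ContextVar.name attribute):

def dump_context(ctx):
    # as_dict() returns a plain dict snapshot, safe to iterate and log
    for var, value in ctx.as_dict().items():
        print("{} = {!r}".format(var.name, value))

# e.g. while poking at a coroutine scheduler:
# dump_context(some_task_context)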

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Nathaniel Smith
On Mon, Oct 16, 2017 at 11:12 AM, Ethan Furman  wrote:
> What would be really nice is to have attribute access like thread locals.
> Instead of working with individual ContextVars you grab the LocalContext and
> access the vars as attributes.  I don't recall reading in the PEP why this
> is a bad idea.

You're mixing up levels -- the way threading.local objects work is
that there's one big dict that's hidden inside the interpreter (in the
ThreadState), and it holds a separate little dict for each
threading.local. The dict holding ContextVars is similar to the big
dict; a threading.local itself is like a ContextVar that holds a dict.
(And the reason it's this way is that it's easy to build either
version on top of the other, and we did some survey of threading.local
usage and the ContextVar style usage was simpler in the majority of
cases.)

For threading.local there's no way to get at the big dict at all from
Python; it's hidden inside the C APIs and threading internals. I'm
guessing you've never missed this :-). For ContextVars we can't hide
it that much, because async frameworks need to be able to swap the
current dict when switching tasks and clone it when starting a new
task, but those are the only absolutely necessary operations.
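To make the "build either version on top of the other" point concrete,
here's a rough sketch of a threading.local-style attribute object layered
on a single ContextVar holding a dict (hypothetical; assumes get() raises
LookupError when no value is set, as in the PEP 550 drafts):

class ContextLocal:
    # Sketch: attribute access backed by one ContextVar holding a dict
    def __init__(self):
        object.__setattr__(self, '_var', ContextVar('context_local'))

    def _dict(self):
        try:
            return self._var.get()
        except LookupError:
            d = {}
            self._var.set(d)
            return d

    def __getattr__(self, name):
        try:
            return self._dict()[name]
        except KeyError:
            raise AttributeError(name) from None

    def __setattr__(self, name, value):
        self._dict()[name] = value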

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Python-Dev] Compiling Python-3.6.3 fails two tests test_math and test_cmath

2017-10-16 Thread Tim Peters
[Richard Hinerfeld ]
> Compiling Python-3.6.3 on Linux fails two tests: test_math and test_cmath

Precisely which version of Linux?  The same failure has already been
reported on OpenBSD here:

https://bugs.python.org/issue31630


[Python-Dev] Compiling Python-3.6.3 fails two tests test_math and test_cmath

2017-10-16 Thread Richard Hinerfeld
Compiling Python-3.6.3 on Linux fails two tests: test_math and test_cmath
running build
running build_ext
The following modules found by detect_modules() in setup.py, have been
built by the Makefile instead, as configured by the Setup files:
atexit  pwd  time
running build_scripts
copying and adjusting /home/richard/Python-3.6.3/Tools/scripts/pydoc3 -> 
build/scripts-3.6
copying and adjusting /home/richard/Python-3.6.3/Tools/scripts/idle3 -> 
build/scripts-3.6
copying and adjusting /home/richard/Python-3.6.3/Tools/scripts/2to3 -> 
build/scripts-3.6
copying and adjusting /home/richard/Python-3.6.3/Tools/scripts/pyvenv -> 
build/scripts-3.6
changing mode of build/scripts-3.6/pydoc3 from 644 to 755
changing mode of build/scripts-3.6/idle3 from 644 to 755
changing mode of build/scripts-3.6/2to3 from 644 to 755
changing mode of build/scripts-3.6/pyvenv from 644 to 755
renaming build/scripts-3.6/pydoc3 to build/scripts-3.6/pydoc3.6
renaming build/scripts-3.6/idle3 to build/scripts-3.6/idle3.6
renaming build/scripts-3.6/2to3 to build/scripts-3.6/2to3-3.6
renaming build/scripts-3.6/pyvenv to build/scripts-3.6/pyvenv-3.6
./python  ./Tools/scripts/run_tests.py -v test_cmath
== CPython 3.6.3 (default, Oct 16 2017, 14:42:21) [GCC 4.7.2]
== Linux-3.2.0-4-686-pae-i686-with-debian-7.11 little-endian
== cwd: /home/richard/Python-3.6.3/build/test_python_10507
== CPU count: 1
== encodings: locale=UTF-8, FS=utf-8
Using random seed 5661358
Run tests in parallel using 3 child processes
0:00:01 load avg: 0.24 [1/1/1] test_cmath failed
testAtanSign (test.test_cmath.CMathTests) ... ok
testAtanhSign (test.test_cmath.CMathTests) ... ok
testTanhSign (test.test_cmath.CMathTests) ... ok
test_abs (test.test_cmath.CMathTests) ... ok
test_abs_overflows (test.test_cmath.CMathTests) ... ok
test_cmath_matches_math (test.test_cmath.CMathTests) ... ok
test_constants (test.test_cmath.CMathTests) ... ok
test_infinity_and_nan_constants (test.test_cmath.CMathTests) ... ok
test_input_type (test.test_cmath.CMathTests) ... ok
test_isfinite (test.test_cmath.CMathTests) ... ok
test_isinf (test.test_cmath.CMathTests) ... ok
test_isnan (test.test_cmath.CMathTests) ... ok
test_phase (test.test_cmath.CMathTests) ... ok
test_polar (test.test_cmath.CMathTests) ... ok
test_polar_errno (test.test_cmath.CMathTests) ... ok
test_rect (test.test_cmath.CMathTests) ... ok
test_specific_values (test.test_cmath.CMathTests) ... FAIL
test_user_object (test.test_cmath.CMathTests) ... ok
test_asymmetry (test.test_cmath.IsCloseTests) ... ok
test_complex_near_zero (test.test_cmath.IsCloseTests) ... ok
test_complex_values (test.test_cmath.IsCloseTests) ... ok
test_decimals (test.test_cmath.IsCloseTests) ... ok
test_eight_decimal_places (test.test_cmath.IsCloseTests) ... ok
test_fractions (test.test_cmath.IsCloseTests) ... ok
test_identical (test.test_cmath.IsCloseTests) ... ok
test_identical_infinite (test.test_cmath.IsCloseTests) ... ok
test_inf_ninf_nan (test.test_cmath.IsCloseTests) ... ok
test_integers (test.test_cmath.IsCloseTests) ... ok
test_near_zero (test.test_cmath.IsCloseTests) ... ok
test_negative_tolerances (test.test_cmath.IsCloseTests) ... ok
test_reject_complex_tolerances (test.test_cmath.IsCloseTests) ... ok
test_zero_tolerance (test.test_cmath.IsCloseTests) ... ok

======================================================================
FAIL: test_specific_values (test.test_cmath.CMathTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/richard/Python-3.6.3/Lib/test/test_cmath.py", line 418, in 
test_specific_values
msg=error_message)
  File "/home/richard/Python-3.6.3/Lib/test/test_cmath.py", line 149, in 
rAssertAlmostEqual
'{!r} and {!r} are not sufficiently close'.format(a, b))
AssertionError: tan0064: tan(complex(1.5707963267948961, 0.0))
Expected: complex(1978937966095219.0, 0.0)
Received: complex(1978945885716843.0, 0.0)
Received value insufficiently close to expected value.

----------------------------------------------------------------------
Ran 32 tests in 0.316s

FAILED (failures=1)

1 test failed:
test_cmath
Re-running failed tests in verbose mode
Re-running test 'test_cmath' in verbose mode
testAtanSign (test.test_cmath.CMathTests) ... ok
testAtanhSign (test.test_cmath.CMathTests) ... ok
testTanhSign (test.test_cmath.CMathTests) ... ok
test_abs (test.test_cmath.CMathTests) ... ok
test_abs_overflows (test.test_cmath.CMathTests) ... ok
test_cmath_matches_math (test.test_cmath.CMathTests) ... ok
test_constants (test.test_cmath.CMathTests) ... ok
test_infinity_and_nan_constants (test.test_cmath.CMathTests) ... ok
test_input_type (test.test_cmath.CMathTests) ... ok
test_isfinite (test.test_cmath.CMathTests) ... ok
test_isinf (test.test_cmath.CMathTests) ... ok
test_isnan (test.test_cmath.CMathTests) ... ok
test_phase (test.test_cmath.CMathTests) ... ok
test_polar (test.test_cmath.CMathTests) ... ok
test_polar_errno 

Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Ethan Furman

On 10/16/2017 09:11 AM, Yury Selivanov wrote:


Question: why do we want EC objects to be mappings?  I'd rather make
them opaque, which will result in less code and make it more
future-proof.

The key arguments for keeping ContextVar abstraction:

* Naturally avoids name clashes.

* Allows us to implement efficient caching.  This is important if we want
libraries like decimal/numpy to start using it.

* Abstracts away the actual implementation of the EC.  This is a
future-proof solution, with which we can enable EC support for
generators in the future.  We already know two possible solutions (PEP
550 v1, PEP 550 current), and ContextVar is a good enough abstraction
to support both of them.

IMO ContextVar.set() and ContextVar.get() are a simple and nice API to
work with the EC.  Most people (aside from framework authors) won't even
need to work with EC objects directly anyway.


Framework/library authors are users too.  Please don't make the interface 
unpleasant to use.

What would be really nice is to have attribute access like thread locals.  Instead of working with individual 
ContextVars you grab the LocalContext and access the vars as attributes.  I don't recall reading in the PEP why this is 
a bad idea.


--
~Ethan~


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Guido van Rossum
On Mon, Oct 16, 2017 at 9:53 AM, Yury Selivanov 
wrote:

> I think we can still implement context isolation in generators in
> later versions for ContextVars.  In 3.7, ContextVars will only support
> async tasks and threads.  Using them in generators will be
> *documented* as unsafe, as the context will "leak out".  Fixing
> generators in some later version of Python will then be a feature/bug
> fix.  I expect almost no backwards compatibility issue, same as I
> wouldn't expect them if we switched decimal to PEP 550 in 3.7.
>

Context also leaks into a generator. That's a feature too. Basically a
generator does not have its own context; in that respect it's no different
from a regular function call. The apparent difference is that it's possible
to call next() on a generator object from different contexts (that's always
been possible, in today's Python you can do this from multiple threads and
there's even protection against re-entering a generator frame that's
already active in another thread -- the GIL helps here of course).

I expect that any future (post-3.7) changes to how contexts work in
generators will have to support this as the default behavior, and to get
other behavior the generator will have to be marked or wrapped somehow.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Antoine Pitrou
On Mon, 16 Oct 2017 19:20:44 +0200
Victor Stinner  wrote:
> Oh, now I'm confused. I misunderstood your previous message. I understood
> that you changed your mind and didn't want to add process_time_ns().
> 
> Can you elaborate why you consider that time.process_time_ns() is needed,
> but not the nanosecond flavor of os.times() nor resource.getrusage()? These
> functions use the same or similar clock, no?

I didn't say they weren't needed, I said that we could restrict
ourselves to the time module for the time being if it makes things
easier.

But if you want to tackle all of them at once, go for it! :-)

Regards

Antoine.


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Victor Stinner
Oh, now I'm confused. I misunderstood your previous message. I understood
that you changed your mind and didn't want to add process_time_ns().

Can you elaborate why you consider that time.process_time_ns() is needed,
but not the nanosecond flavor of os.times() nor resource.getrusage()? These
functions use the same or similar clock, no?

Depending on platform, time.process_time() may be implemented with
resource.getrusage(), os.times() or something else.

Victor


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Guido van Rossum
On Mon, Oct 16, 2017 at 9:11 AM, Yury Selivanov 
wrote:

> On Mon, Oct 16, 2017 at 11:49 AM, Guido van Rossum 
> wrote:
> > On Sun, Oct 15, 2017 at 10:26 PM, Nathaniel Smith  wrote:
> >> We don't need it to be abstract (it's fine to have a single concrete
> >> mapping type that we always use internally), but I think we do want it
> >> to be opaque (instead of exposing the MutableMapping interface, the
> >> only way to get/set specific values should be through the ContextVar
> >> interface). The advantages are:
> >>
> >> - This allows C level caching of values in ContextVar objects (in
> >> particular, funneling mutations through a limited API makes cache
> >> invalidation *much* easier)
>
> > Well the MutableMapping could still be a proxy or something that
> invalidates
> > the cache when mutated. That's why I said it should be a single concrete
> > mapping type. (It also doesn't have to derive from MutableMapping -- it's
> > sufficient for it to be a duck type for one, or perhaps some Python-level
> > code could `register()` it.)
>
> Yeah, we can do a proxy.
>
> >> - It gives us flexibility to change the underlying data structure
> >> without breaking API, or for different implementations to make
> >> different choices -- in particular, it's not clear whether a dict or
> >> HAMT is better, and it's not clear whether a regular dict or
> >> WeakKeyDict is better.
>
> > I would keep it simple and stupid, but WeakKeyDict is a subtype of
> > MutableMapping, and I'm sure we can find a way to implement the full
> > MutableMapping interface on top of HAMT as well.
>
> Correct.
>
> >> The first point (caching) I think is the really compelling one: in
> >> practice decimal and numpy are already using tricky caching code to
> >> reduce the overhead of accessing the ThreadState dict, and this gets
> >> even trickier with context-local state which has more cache
> >> invalidation points, so if we don't do this in the interpreter then it
> >> could actually become a blocker for adoption. OTOH it's easy for the
> >> interpreter itself to do this caching, and it makes everyone faster.
>
> > I agree, but I don't see how making the type a subtype (or duck type) of
> > MutableMapping prevents any of those strategies. (Maybe you were equating
> > MutableMapping with "subtype of dict"?)
>
> Question: why do we want EC objects to be mappings?  I'd rather make
> them opaque, which will result in less code and make it more
> future-proof.
>

I'd rather have them mappings, since that's what they represent. It helps
users understand what's going on behind the scenes, just like modules,
classes and (most) instances have a `__dict__` that you can look at and (in
most cases) manipulate.


> The key arguments for keeping ContextVar abstraction:
>

To be clear, I do want to keep ContextVar!


> * Naturally avoids name clashes.
>
> * Allows us to implement efficient caching.  This is important if we want
> libraries like decimal/numpy to start using it.
>
> * Abstracts away the actual implementation of the EC.  This is a
> future-proof solution, with which we can enable EC support for
> generators in the future.  We already know two possible solutions (PEP
> 550 v1, PEP 550 current), and ContextVar is a good enough abstraction
> to support both of them.
>
> IMO ContextVar.set() and ContextVar.get() are a simple and nice API to
> work with the EC.  Most people (aside from framework authors) won't even
> need to work with EC objects directly anyway.
>

Sure. But (unlike you, it seems) I find it important that users can
understand their actions in terms of operations on the mapping representing
the context. Its type should be a specific class that inherits from
`MutableMapping[ContextVar, object]`.
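Concretely, that would let users confirm their mental model directly (a
sketch, with get_execution_context() as a hypothetical accessor):

from collections.abc import MutableMapping

ctx = get_execution_context()
assert isinstance(ctx, MutableMapping)

# ContextVar.set(value) is then just sugar for:
#     get_execution_context()[var] = value
for var, value in ctx.items():
    print(var, value)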

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Antoine Pitrou
On Mon, 16 Oct 2017 18:53:18 +0200
Victor Stinner  wrote:

> 2017-10-16 18:28 GMT+02:00 Antoine Pitrou :
> >> What do you think?  
> >
> > It sounds fine to me!  
> 
> Ok fine, I updated the PEP. Let's start simple with the few functions
> (5 "clock" functions) which are "obviously" impacted by the precision
> loss.

It should be 6 functions, right?


> 
> Victor



Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Ben Hoyt
Makes sense, thanks. -Ben

On Mon, Oct 16, 2017 at 12:28 PM, Victor Stinner 
wrote:

> 2017-10-16 18:14 GMT+02:00 Ben Hoyt :
> > Got it -- fair enough.
> >
> > We deploy so often where I work (a couple of times a week at least) that
> > 104 days seems like an eternity. But I can see where for a very stable
> > file server or something you might well run it that long without
> > deploying. Then again, why are you doing performance tuning on a "very
> > stable server"?
>
> I'm not sure what you mean by "performance *tuning*". My idea in
> the example is more to collect live performance metrics to make sure
> that everything is fine on your "very stable server". Send these
> metrics to your favorite time series database like Gnocchi, Graphite,
> Grafana or whatever.
>
> Victor
>


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Victor Stinner
2017-10-16 18:28 GMT+02:00 Antoine Pitrou :
>> What do you think?
>
> It sounds fine to me!

Ok fine, I updated the PEP. Let's start simple with the few functions
(5 "clock" functions) which are "obviously" impacted by the precision
loss.
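For illustration, a usage sketch of the new functions as specified in the
updated PEP (the _ns variants return int nanoseconds instead of float
seconds):

import time

t1 = time.perf_counter_ns()
sum(range(10**6))                 # some workload to measure
t2 = time.perf_counter_ns()
print("elapsed:", t2 - t1, "ns")  # exact integer difference, no rounding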

Victor


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Yury Selivanov
On Mon, Oct 16, 2017 at 7:44 AM, Nick Coghlan  wrote:
[..]
> So going down this path would lock in the *default* semantics for the
> interaction between context variables and generators as being the same as
> the interaction between thread locals and generators, but would still leave
> the door open to subsequently introducing an opt-in API like the
> "contextvars.iter_in_context" idea for cases where folks decided they wanted
> to do something different (like capturing the context at the point where
> iterator was created and then temporarily switching back to that on each
> iteration).

I think we can still implement context isolation in generators in
later versions for ContextVars.  In 3.7, ContextVars will only support
async tasks and threads.  Using them in generators will be
*documented* as unsafe, as the context will "leak out".  Fixing
generators in some later version of Python will then be a feature/bug
fix.  I expect almost no backwards compatibility issue, same as I
wouldn't expect them if we switched decimal to PEP 550 in 3.7.

Yury


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Victor Stinner
2017-10-16 18:14 GMT+02:00 Ben Hoyt :
> Got it -- fair enough.
>
> We deploy so often where I work (a couple of times a week at least) that 104
> days seems like an eternity. But I can see where for a very stable file
> server or something you might well run it that long without deploying. Then
> again, why are you doing performance tuning on a "very stable server"?

I'm not sure what you mean by "performance *tuning*". My idea in
the example is more to collect live performance metrics to make sure
that everything is fine on your "very stable server". Send these
metrics to your favorite time series database like Gnocchi, Graphite,
Grafana or whatever.

Victor


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Antoine Pitrou
On Mon, 16 Oct 2017 18:06:06 +0200
Victor Stinner  wrote:
> 2017-10-16 17:42 GMT+02:00 Antoine Pitrou :
> > Restricting this PEP to the time module would be fine with me.  
> 
> Maybe I should add a short sentence to keep the question open, but
> exclude it from the direct scope of the PEP? For example:
> 
> "New nanosecond flavor of these functions may be added later, if a
> concrete use case comes in."
> 
> What do you think?

It sounds fine to me!

Regards

Antoine.


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Yury Selivanov
On Mon, Oct 16, 2017 at 11:49 AM, Guido van Rossum  wrote:
> On Sun, Oct 15, 2017 at 10:26 PM, Nathaniel Smith  wrote:
>>
>> On Sun, Oct 15, 2017 at 10:10 PM, Guido van Rossum 
>> wrote:
>> > Yes, that's what I meant by "ignoring generators". And I'd like there to
>> > be
>> > a "current context" that's a per-thread MutableMapping with ContextVar
>> > keys.
>> > Maybe there's not much more to it apart from naming the APIs for getting
>> > and
>> > setting it? To be clear, I am fine with this being a specific subtype of
>> > MutableMapping. But I don't see much benefit in making it more abstract
>> > than
>> > that.
>>
>> We don't need it to be abstract (it's fine to have a single concrete
>> mapping type that we always use internally), but I think we do want it
>> to be opaque (instead of exposing the MutableMapping interface, the
>> only way to get/set specific values should be through the ContextVar
>> interface). The advantages are:
>>
>> - This allows C level caching of values in ContextVar objects (in
>> particular, funneling mutations through a limited API makes cache
>> invalidation *much* easier)
>
>
> Well the MutableMapping could still be a proxy or something that invalidates
> the cache when mutated. That's why I said it should be a single concrete
> mapping type. (It also doesn't have to derive from MutableMapping -- it's
> sufficient for it to be a duck type for one, or perhaps some Python-level
> code could `register()` it.)

Yeah, we can do a proxy.

>
>>
>> - It gives us flexibility to change the underlying data structure
>> without breaking API, or for different implementations to make
>> different choices -- in particular, it's not clear whether a dict or
>> HAMT is better, and it's not clear whether a regular dict or
>> WeakKeyDict is better.
>
>
> I would keep it simple and stupid, but WeakKeyDict is a subtype of
> MutableMapping, and I'm sure we can find a way to implement the full
> MutableMapping interface on top of HAMT as well.

Correct.

>
>>
>> The first point (caching) I think is the really compelling one: in
>> practice decimal and numpy are already using tricky caching code to
>> reduce the overhead of accessing the ThreadState dict, and this gets
>> even trickier with context-local state which has more cache
>> invalidation points, so if we don't do this in the interpreter then it
>> could actually become a blocker for adoption. OTOH it's easy for the
>> interpreter itself to do this caching, and it makes everyone faster.
>
>
> I agree, but I don't see how making the type a subtype (or duck type) of
> MutableMapping prevents any of those strategies. (Maybe you were equating
> MutableMapping with "subtype of dict"?)

Question: why do we want EC objects to be mappings?  I'd rather make
them opaque, which will result in less code and make it more
future-proof.

The key arguments for keeping ContextVar abstraction:

* Naturally avoids name clashes.

* Allows us to implement efficient caching.  This is important if we want
libraries like decimal/numpy to start using it.

* Abstracts away the actual implementation of the EC.  This is a
future-proof solution, with which we can enable EC support for
generators in the future.  We already know two possible solutions (PEP
550 v1, PEP 550 current), and ContextVar is a good enough abstraction
to support both of them.

IMO ContextVar.set() and ContextVar.get() are a simple and nice API to
work with the EC.  Most people (aside from framework authors) won't even
need to work with EC objects directly anyway.

Yury


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Ben Hoyt
Got it -- fair enough.

We deploy so often where I work (a couple of times a week at least) that
104 days seems like an eternity. But I can see where for a very stable file
server or something you might well run it that long without deploying. Then
again, why are you doing performance tuning on a "very stable server"?

-Ben

On Mon, Oct 16, 2017 at 11:58 AM, Guido van Rossum  wrote:

> On Mon, Oct 16, 2017 at 8:37 AM, Ben Hoyt  wrote:
>
>> I've read the examples you wrote here, but I'm struggling to see what the
>> real-life use cases are for this. When would you care about *both* very
>> long-running servers (104 days+) and nanosecond precision? I'm not saying
>> it could never happen, but would want to see real "experience reports" of
>> when this is needed.
>>
>
> A long-running server might still want to log precise *durations* of
> various events. (Durations of events are the bread and butter of server
> performance tuning.) And for this it might want to use the most precise
> clock available, which is perf_counter(). But if perf_counter()'s epoch is
> the start of the process, after 104 days it can no longer report ns
> precision due to float rounding (even though the internal counter does not
> lose ns).
>
> --
> --Guido van Rossum (python.org/~guido)
>


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Victor Stinner
2017-10-16 17:42 GMT+02:00 Antoine Pitrou :
> Restricting this PEP to the time module would be fine with me.

Maybe I should add a short sentence to keep the question open, but
exclude it from the direct scope of the PEP? For example:

"New nanosecond flavor of these functions may be added later, if a
concrete use case comes in."

What do you think?

Victor


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Victor Stinner
2017-10-16 17:37 GMT+02:00 Ben Hoyt :
> I've read the examples you wrote here, but I'm struggling to see what the
> real-life use cases are for this. When would you care about *both* very
> long-running servers (104 days+) and nanosecond precision? I'm not saying it
> could never happen, but would want to see real "experience reports" of when
> this is needed.

The second example doesn't depend on the system uptime nor on how long
the program has been running. You can hit the issue just after the system
finishes booting:

"Example 2: compare time with different resolution"
https://www.python.org/dev/peps/pep-0564/#example-2-compare-time-with-different-resolution

Victor


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Guido van Rossum
On Mon, Oct 16, 2017 at 8:37 AM, Ben Hoyt  wrote:

> I've read the examples you wrote here, but I'm struggling to see what the
> real-life use cases are for this. When would you care about *both* very
> long-running servers (104 days+) and nanosecond precision? I'm not saying
> it could never happen, but would want to see real "experience reports" of
> when this is needed.
>

A long-running server might still want to log precise *durations* of
various events. (Durations of events are the bread and butter of server
performance tuning.) And for this it might want to use the most precise
clock available, which is perf_counter(). But if perf_counter()'s epoch is
the start of the process, after 104 days it can no longer report ns
precision due to float rounding (even though the internal counter does not
lose ns).
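The arithmetic behind the 104 days: an IEEE 754 double has 53 bits of
mantissa, and 2**53 ns is about 104.25 days, so beyond that point two
timestamps one nanosecond apart collapse to the same float value:

>>> a = 2**53               # ns since the process started: ~104.25 days
>>> b = a + 1               # one nanosecond later
>>> a * 1e-9 == b * 1e-9    # converted to float seconds, the ns is gone
True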

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Antoine Pitrou
On Mon, 16 Oct 2017 17:23:15 +0200
Victor Stinner  wrote:
> 2017-10-16 17:06 GMT+02:00 Antoine Pitrou :
> >> This PEP adds five new functions to the ``time`` module:
> >>
> >> * ``time.clock_gettime_ns(clock_id)``
> >> * ``time.clock_settime_ns(clock_id, time: int)``
> >> * ``time.perf_counter_ns()``
> >> * ``time.monotonic_ns()``
> >> * ``time.time_ns()``  
> >
> > Why not ``time.process_time_ns()``?  
> 
> I only wrote my first email on python-ideas to ask this question, but
> I got no answer to this question, only proposals of other solutions to
> get time with nanosecond resolution. So I picked the simplest option:
> start simple, only add new clocks, and maybe add more "_ns" functions
> later.
> 
> If we add process_time_ns(), should we also add nanosecond resolution
> to other functions related to process or CPU time?

Restricting this PEP to the time module would be fine with me.

Regards

Antoine.


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Guido van Rossum
On Sun, Oct 15, 2017 at 10:26 PM, Nathaniel Smith  wrote:

> On Sun, Oct 15, 2017 at 10:10 PM, Guido van Rossum 
> wrote:
> > Yes, that's what I meant by "ignoring generators". And I'd like there to
> be
> > a "current context" that's a per-thread MutableMapping with ContextVar
> keys.
> > Maybe there's not much more to it apart from naming the APIs for getting
> and
> > setting it? To be clear, I am fine with this being a specific subtype of
> > MutableMapping. But I don't see much benefit in making it more abstract
> than
> > that.
>
> We don't need it to be abstract (it's fine to have a single concrete
> mapping type that we always use internally), but I think we do want it
> to be opaque (instead of exposing the MutableMapping interface, the
> only way to get/set specific values should be through the ContextVar
> interface). The advantages are:
>
> - This allows C level caching of values in ContextVar objects (in
> particular, funneling mutations through a limited API makes cache
> invalidation *much* easier)
>

Well the MutableMapping could still be a proxy or something that
invalidates the cache when mutated. That's why I said it should be a single
concrete mapping type. (It also doesn't have to derive from MutableMapping
-- it's sufficient for it to be a duck type for one, or perhaps some
Python-level code could `register()` it.)


> - It gives us flexibility to change the underlying data structure
> without breaking API, or for different implementations to make
> different choices -- in particular, it's not clear whether a dict or
> HAMT is better, and it's not clear whether a regular dict or
> WeakKeyDict is better.
>

I would keep it simple and stupid, but WeakKeyDict is a subtype of
MutableMapping, and I'm sure we can find a way to implement the full
MutableMapping interface on top of HAMT as well.


> The first point (caching) I think is the really compelling one: in
> practice decimal and numpy are already using tricky caching code to
> reduce the overhead of accessing the ThreadState dict, and this gets
> even trickier with context-local state which has more cache
> invalidation points, so if we don't do this in the interpreter then it
> could actually become a blocker for adoption. OTOH it's easy for the
> interpreter itself to do this caching, and it makes everyone faster.
>

I agree, but I don't see how making the type a subtype (or duck type) of
MutableMapping prevents any of those strategies. (Maybe you were equating
MutableMapping with "subtype of dict"?)
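For what it's worth, a minimal sketch of the version-counter caching idea
(hypothetical names; independent of whether the mapping interface is
exposed):

class ExecutionContext(dict):
    # Sketch only: every mutation bumps a version counter
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.version = 0

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.version += 1

    def __delitem__(self, key):
        super().__delitem__(key)
        self.version += 1

class ContextVar:
    def __init__(self, name):
        self.name = name
        self._cached_value = None
        self._cached_ctx = None
        self._cached_version = -1

    def get(self):
        ec = get_execution_context()   # hypothetical accessor
        if self._cached_ctx is not ec or self._cached_version != ec.version:
            self._cached_value = ec[self]   # the one slow lookup
            self._cached_ctx = ec
            self._cached_version = ec.version
        return self._cached_value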

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Ben Hoyt
I've read the examples you wrote here, but I'm struggling to see what the
real-life use cases are for this. When would you care about *both* very
long-running servers (104 days+) and nanosecond precision? I'm not saying
it could never happen, but would want to see real "experience reports" of
when this is needed.

-Ben

On Mon, Oct 16, 2017 at 9:50 AM, Victor Stinner 
wrote:

> I read again the discussions on python-ideas and noticed that I forgot
> to mention the "time_ns module" idea. I also added a section to give
> concrete examples of the precision loss.
>
> https://github.com/python/peps/commit/a4828def403913dbae7452b4f9b9d62a0c83a278
>
> Issues caused by precision loss
> -------------------------------
>
> Example 1: measure time delta
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> A server is running for longer than 104 days. A clock is read before
> and after running a function to measure its performance. This benchmark
> loses precision only because of the float type used by clocks, not
> because of the clock resolution.
>
> On Python microbenchmarks, it is common to see function calls taking
> less than 100 ns. A difference of a single nanosecond becomes
> significant.
>
> Example 2: compare time with different resolution
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> Two programs "A" and "B" are running on the same system, so they use the
> same system clock. The program A reads the system clock with nanosecond
> resolution and writes the timestamp with nanosecond resolution. The
> program B reads the timestamp with nanosecond resolution, but compares it
> to the system clock read with a worse resolution. To simplify the example,
> let's say that it reads the clock with second resolution. In that case,
> there is a window of 1 second during which the program B can see the
> timestamp written by A as "in the future".
>
> Nowadays, more and more databases and filesystems support storing time
> with nanosecond resolution.
>
> .. note::
>This issue was already fixed for file modification time by adding the
>``st_mtime_ns`` field to the ``os.stat()`` result, and by accepting
>nanoseconds in ``os.utime()``. This PEP proposes to generalize the
>fix.
>
> (...)
>
> Modify time.time() result type
> --
>
> It was proposed to modify ``time.time()`` to return a different float
> type with better precision.
>
> The PEP 410 proposed to use ``decimal.Decimal`` which already exists and
> supports arbitray precision, but it was rejected.  Apart
> ``decimal.Decimal``, no portable ``float`` type with better precision is
> currently available in Python.
>
> Changing the builtin Python ``float`` type is out of the scope of this
> PEP.
>
> Moreover, changing existing functions to return a new type introduces a
> risk of breaking backward compatibility, even if the new type is
> designed carefully.
>
> (...)
>
> New time_ns module
> ------------------
>
> Add a new ``time_ns`` module which contains the five new functions:
>
> * ``time_ns.clock_gettime(clock_id)``
> * ``time_ns.clock_settime(clock_id, time: int)``
> * ``time_ns.perf_counter()``
> * ``time_ns.monotonic()``
> * ``time_ns.time()``
>
> The first question is whether the ``time_ns`` module should expose
> exactly the same API (constants, functions, etc.) as the ``time``
> module. It can be painful to maintain two flavors of the ``time``
> module. How are users supposed to choose between these two modules?
>
> If tomorrow, other nanosecond variants are needed in the ``os`` module,
> will we have to add a new ``os_ns`` module as well? There are functions
> related to time in many modules: ``time``, ``os``, ``signal``,
> ``resource``, ``select``, etc.
>
> Another idea is to add a ``time.ns`` submodule or a nested-namespace to
> get the ``time.ns.time()`` syntax.
>
> Victor
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/benhoyt%40gmail.com
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Victor Stinner
2017-10-16 17:06 GMT+02:00 Antoine Pitrou :
>> This PEP adds five new functions to the ``time`` module:
>>
>> * ``time.clock_gettime_ns(clock_id)``
>> * ``time.clock_settime_ns(clock_id, time: int)``
>> * ``time.perf_counter_ns()``
>> * ``time.monotonic_ns()``
>> * ``time.time_ns()``
>
> Why not ``time.process_time_ns()``?

I wrote my first email on python-ideas precisely to ask this question,
but I got no answer to it, only proposals of other solutions to get
time with nanosecond resolution. So I picked the simplest option:
start simple, only add new clocks, and maybe add more "_ns" functions
later.

If we add process_time_ns(), should we also add nanosecond resolution
to other functions related to process or CPU time?

* Add "ru_utime_ns" and "ru_stime_ns" to the resource.struct_rusage
used by os.wait3(), os.wait4() and resource.getrusage()

* For os.times(): add os.times_ns()? For this one, I prefer to add a
new function rather than duplicating *all* fields of os.times_result,
since all of its fields store durations (the current float-based API is
sketched below)
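
For reference, a short sketch of the current float-based API (the
resource module is Unix-only; the "_ns" twins discussed above are
hypothetical at this point):

    import os
    import resource

    usage = resource.getrusage(resource.RUSAGE_SELF)
    print(usage.ru_utime, usage.ru_stime)   # CPU times, float seconds
    t = os.times()
    print(t.user, t.system)                 # also float seconds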

Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Antoine Pitrou

Hi,

On Mon, 16 Oct 2017 12:42:30 +0200
Victor Stinner  wrote:
> 
> ``time.time()`` returns seconds elapsed since the UNIX epoch: January
> 1st, 1970. This function loses precision since May 1970 (47 years ago)::

This is a funny sentence.  I doubt computers (Unix or not) had
nanosecond clocks in May 1970.

> This PEP adds five new functions to the ``time`` module:
> 
> * ``time.clock_gettime_ns(clock_id)``
> * ``time.clock_settime_ns(clock_id, time: int)``
> * ``time.perf_counter_ns()``
> * ``time.monotonic_ns()``
> * ``time.time_ns()``

Why not ``time.process_time_ns()``?

> Hardware clock with a resolution better than 1 nanosecond already
> exists. For example, the frequency of a CPU TSC clock is the CPU base
> frequency: the resolution is around 0.3 ns for a CPU running at 3
> GHz. Users who have access to such hardware and really need
> sub-nanosecond resolution can easyly extend Python for their needs.

Typo: easily.  But how easy is it?

> Such rare use cases don't justify designing the Python standard library
> to support sub-nanosecond resolution.

I suspect that assertion will be challenged at some point :-)
Though I agree with the ease of implementation argument (about int64_t
being wide enough for nanoseconds but not picoseconds).

Regards

Antoine.

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Preprocessing the CPython Source Tree

2017-10-16 Thread Paul Ross
I have implemented a C preprocessor written in Python which gives some
useful visualisations of source code, particularly macro usage:
https://github.com/paulross/cpip

I have been running this on the CPython source code and it occurs to me
that this might be useful to the python-dev community.

For example the Python dictionary source code is visualised here:
http://cpip.readthedocs.io/en/latest/_static/dictobject.c/index_dictobject.c_a3f5bfec1ed531371fb1a2bcdcb2e9c2.html

I found this really useful when I was getting a segfault during a
dictionary insert from my C code. The segfault was on this line:
http://cpip.readthedocs.io/en/latest/_static/dictobject.c/dictobject.c_a3f5bfec1ed531371fb1a2bcdcb2e9c2.html#1130
but it is hard to see what is going on with macros inside macros. If you
click on the link on the left end of the line it takes you to the full
expansion of the macros:
http://cpip.readthedocs.io/en/latest/_static/dictobject.c/dictobject.c.html#1130
as this is what the compiler and debugger see.

I could examine these values in GDB and figure out what was going on. I
could also figure out what that MAINTAIN_TRACKING macro was doing by
looking at the macros page generated by CPIP:
http://cpip.readthedocs.io/en/latest/_static/dictobject.c/macros_ref.html#_TUFJTlRBSU5fVFJBQ0tJTkdfMA__
and following those links.

I was wondering if it would be valuable to python-dev developers if this
tool was run regularly over the CPython source tree(s). A single source
tree takes about 12 CPU hours to process and generates 8GB of HTML/SVG. If
this is useful, then where should it be hosted?

Regards,

Paul Ross
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Victor Stinner
I read again the discussions on python-ideas and noticed that I forgot
to mention the "time_ns module" idea. I also added a section to give
concrete examples of the precision loss.

https://github.com/python/peps/commit/a4828def403913dbae7452b4f9b9d62a0c83a278

Issues caused by precision loss
-------------------------------

Example 1: measure time delta
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A server is running for longer than 104 days. A clock is read before
and after running a function to measure its performance. This benchmark
loses precision only because of the float type used by clocks, not
because of the clock resolution.

On Python microbenchmarks, it is common to see function calls taking
less than 100 ns. A difference of a single nanosecond becomes
significant.
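
A sketch of such a measurement, assuming the proposed
``time.perf_counter_ns()`` is available (the subtraction is then exact
integer arithmetic, with no float rounding)::

    import time

    start = time.perf_counter_ns()
    sum(range(1000))                  # code under test
    elapsed_ns = time.perf_counter_ns() - start
    print(elapsed_ns)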

Example 2: compare time with different resolution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Two programs "A" and "B" are running on the same system, so they use
the same system clock. The program A reads the system clock with
nanosecond resolution and writes the timestamp with nanosecond
resolution. The program B reads the timestamp with nanosecond
resolution, but compares it to the system clock read with a worse
resolution. To simplify the example, let's say that it reads the clock
with second resolution. In that case, there is a window of 1 second
during which the program B can see the timestamp written by A as "in
the future".

Nowadays, more and more databases and filesystems support storing time
with nanosecond resolution.

.. note::
   This issue was already fixed for file modification time by adding the
   ``st_mtime_ns`` field to the ``os.stat()`` result, and by accepting
   nanoseconds in ``os.utime()``. This PEP proposes to generalize the
   fix.

(...)

Modify time.time() result type
------------------------------

It was proposed to modify ``time.time()`` to return a different float
type with better precision.

The PEP 410 proposed to use ``decimal.Decimal`` which already exists and
supports arbitrary precision, but it was rejected.  Apart from
``decimal.Decimal``, no portable ``float`` type with better precision is
currently available in Python.

Changing the builtin Python ``float`` type is out of the scope of this
PEP.

Moreover, changing existing functions to return a new type introduces a
risk of breaking backward compatibility, even if the new type is
designed carefully.

(...)

New time_ns module
------------------

Add a new ``time_ns`` module which contains the five new functions:

* ``time_ns.clock_gettime(clock_id)``
* ``time_ns.clock_settime(clock_id, time: int)``
* ``time_ns.perf_counter()``
* ``time_ns.monotonic()``
* ``time_ns.time()``

The first question is whether the ``time_ns`` module should expose
exactly the same API (constants, functions, etc.) as the ``time``
module. It can be painful to maintain two flavors of the ``time``
module. How are users supposed to choose between these two modules?

If tomorrow, other nanosecond variants are needed in the ``os`` module,
will we have to add a new ``os_ns`` module as well? There are functions
related to time in many modules: ``time``, ``os``, ``signal``,
``resource``, ``select``, etc.

Another idea is to add a ``time.ns`` submodule or a nested-namespace to
get the ``time.ns.time()`` syntax.
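
As a rough illustration only, such a nested namespace could be emulated
on top of the existing float clocks (which of course still lose
precision, so a real implementation would call the new integer clocks
directly)::

    import time
    from types import SimpleNamespace

    ns = SimpleNamespace(
        time=lambda: int(time.time() * 1e9),            # placeholder
        monotonic=lambda: int(time.monotonic() * 1e9),  # placeholder
    )
    print(ns.time())  # would be spelled time.ns.time() under this idea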

Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Paul Moore
On 16 October 2017 at 12:44, Nick Coghlan  wrote:
> The downside is that you'll still need to explicitly revert the decimal
> context before yielding from a generator if you didn't want the context
> change to "leak", but that's not a new constraint - it's one that already
> exists for the thread-local based decimal context API.

Ah, OK. Now I follow. Thanks for clarifying.
Paul
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Nick Coghlan
On 16 October 2017 at 18:26, Paul Moore  wrote:

> On 16 October 2017 at 02:33, Yury Selivanov 
> wrote:
> > Stage 1. A new execution context PEP to solve the problem *just for
> > async code*.  The PEP will target Python 3.7 and completely ignore
> > synchronous generators and asynchronous generators.  It will be based
> > on PEP 550 v1 (no chained lookups, immutable mapping or CoW as an
> > optimization) and borrow some good API decisions from PEP 550 v3+
> > (contextvars module, ContextVar class).  The API (and C-API) will be
> > designed to be future proof and ultimately allow transition to the
> > stage 2.
>
> So would decimal contexts stick to using threading.local? If so,
> presumably they'd still have problems with async. If not, won't you
> still be stuck with having to define the new semantics they have when
> used with generators? Or would it be out of scope for the PEP to take
> a position on what decimal does?
>

Decimal could (and should) still switch over in order to make itself more
coroutine-friendly, as in this version of the proposal, the key design
parameters would be:

- for synchronous code that never changes the execution context, context
variables and thread locals are essentially equivalent (since there will be
exactly one execution context per thread)
- for asynchronous code, each task managed by the event loop will get its
own execution context (each of which is distinct from the event loop's own
execution context)

So while I was initially disappointed by the suggestion, I'm coming around
to the perspective that it's probably a good pragmatic way to improve
context variable adoption rates, since it makes it straightforward for
folks to seamlessly switch between using context variables when they're
available, and falling back to thread local variables otherwise (and
perhaps restricting their coroutine support to Python versions that offer
context variables).

The downside is that you'll still need to explicitly revert the decimal
context before yielding from a generator if you didn't want the context
change to "leak", but that's not a new constraint - it's one that already
exists for the thread-local based decimal context API.
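
For example, a generator that wants a different decimal precision
internally already needs this kind of manual save/restore discipline
with thread locals, and would need the same here (a minimal sketch):

    import decimal

    def high_precision_totals(iterable):
        total = decimal.Decimal(0)
        for value in iterable:
            saved = decimal.getcontext().copy()
            decimal.getcontext().prec = 50   # local context change
            total += decimal.Decimal(value)
            decimal.setcontext(saved)        # revert before suspending
            yield total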

So going down this path would lock in the *default* semantics for the
interaction between context variables and generators as being the same as
the interaction between thread locals and generators, but would still leave
the door open to subsequently introducing an opt-in API like the
"contextvars.iter_in_context" idea for cases where folks decided they
wanted to do something different (like capturing the context at the point
where iterator was created and then temporarily switching back to that on
each iteration).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] PEP 564: Add new time functions with nanosecond resolution

2017-10-16 Thread Victor Stinner
Hi,

While discussions on this PEP are not over on python-ideas, I proposed
this PEP directly on python-dev since I consider that my PEP already
summarizes current and past proposed alternatives.

python-ideas threads:

* Add time.time_ns(): system clock with nanosecond resolution
* Why not picoseconds?

PEP 564 will shortly be online at:
https://www.python.org/dev/peps/pep-0564/

Victor


PEP: 564
Title: Add new time functions with nanosecond resolution
Version: $Revision$
Last-Modified: $Date$
Author: Victor Stinner 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 16-October-2017
Python-Version: 3.7


Abstract
========

Add five new functions to the ``time`` module: ``time_ns()``,
``perf_counter_ns()``, ``monotonic_ns()``, ``clock_gettime_ns()`` and
``clock_settime_ns()``. They are similar to the function without the
``_ns`` suffix, but have nanosecond resolution: use a number of
nanoseconds as a Python int.

The best ``time.time_ns()`` resolution measured in Python is 3 times
better than ``time.time()`` resolution on Linux and Windows.


Rationale
=========

Float type limited to 104 days
------------------------------

The clock resolution of desktop and laptop computers is getting closer
to nanosecond resolution. More and more clocks have a frequency in MHz,
up to GHz for the CPU TSC clock.

The Python ``time.time()`` function returns the current time as a
floating point number which is usually a 64-bit binary floating point
number (in the IEEE 754 format).

The problem is that the float type starts to lose nanoseconds after 104
days.  Conversion from nanoseconds (``int``) to seconds (``float``) and
then back to nanoseconds (``int``) to check if conversions lose
precision::

# no precision loss
>>> x = 2 ** 52 + 1; int(float(x * 1e-9) * 1e9) - x
0
# precision loss! (1 nanosecond)
>>> x = 2 ** 53 + 1; int(float(x * 1e-9) * 1e9) - x
-1
>>> print(datetime.timedelta(seconds=2 ** 53 / 1e9))
104 days, 5:59:59.254741

``time.time()`` returns seconds elapsed since the UNIX epoch: January
1st, 1970. This function loses precision since May 1970 (47 years ago)::

>>> import datetime
>>> unix_epoch = datetime.datetime(1970, 1, 1)
>>> print(unix_epoch + datetime.timedelta(seconds=2**53 / 1e9))
1970-04-15 05:59:59.254741


Previous rejected PEP
---------------------

Five years ago, the PEP 410 proposed a large and complex change in all
Python functions returning time to support nanosecond resolution using
the ``decimal.Decimal`` type.

The PEP was rejected for different reasons:

* The idea of adding a new optional parameter to change the result type
  was rejected. It's an uncommon (and bad?) programming practice in
  Python.

* It was not clear if hardware clocks really had a resolution of 1
  nanosecond, especially at the Python level.

* The ``decimal.Decimal`` type is uncommon in Python and so requires
  adapting code to handle it.


CPython enhancements of the last 5 years
----------------------------------------

Since the PEP 410 was rejected:

* The ``os.stat_result`` structure got 3 new fields for timestamps as
  nanoseconds (Python ``int``): ``st_atime_ns``, ``st_ctime_ns``
  and ``st_mtime_ns``.

* The PEP 418 was accepted, Python 3.3 got 3 new clocks:
  ``time.monotonic()``, ``time.perf_counter()`` and
  ``time.process_time()``.

* The CPython private "pytime" C API handling time now uses a new
  ``_PyTime_t`` type: simple 64-bit signed integer (C ``int64_t``).
  The ``_PyTime_t`` unit is an implementation detail and not part of the
  API. The unit is currently ``1 nanosecond``.

Existing Python APIs using nanoseconds as int
---------------------------------------------

The ``os.stat_result`` structure has 3 fields for timestamps as
nanoseconds (``int``): ``st_atime_ns``, ``st_ctime_ns`` and
``st_mtime_ns``.

The ``ns`` parameter of the ``os.utime()`` function accepts a
``(atime_ns: int, mtime_ns: int)`` tuple: nanoseconds.
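
For example (a sketch using a scratch file; these APIs already exist)::

    import os
    import tempfile

    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name

    ns = os.stat(path).st_mtime_ns       # int nanoseconds
    os.utime(path, ns=(ns, ns + 1))      # nanosecond-precise timestamps
    print(os.stat(path).st_mtime_ns)
    os.unlink(path)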


Changes
=======

New functions
-------------

This PEP adds five new functions to the ``time`` module:

* ``time.clock_gettime_ns(clock_id)``
* ``time.clock_settime_ns(clock_id, time: int)``
* ``time.perf_counter_ns()``
* ``time.monotonic_ns()``
* ``time.time_ns()``

These functions are similar to the version without the ``_ns`` suffix,
but use nanoseconds as Python ``int``.

For example, ``time.monotonic_ns() == int(time.monotonic() * 1e9)`` if
the ``monotonic()`` value is small enough not to lose precision.
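
A sketch of the intended usage, assuming the new functions are
available::

    import time

    print(time.time_ns())       # int, e.g. 1508160000123456789
    t0 = time.monotonic_ns()
    t1 = time.monotonic_ns()
    print(t1 - t0)              # exact integer nanosecond delta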

Unchanged functions
-------------------

This PEP only proposes to add new functions for getting or setting clocks
with nanosecond resolution. Clocks are likely to lose precision,
especially when their reference is the UNIX epoch.

Python has other functions handling time (get time, timeout, etc.), but
no nanosecond variant is proposed for them since they are less likely to
lose precision.

Example of unchanged functions:

* ``os`` module: ``sched_rr_get_interval()``, ``times()``, 

Re: [Python-Dev] Timeout for PEP 550 / Execution Context discussion

2017-10-16 Thread Paul Moore
On 16 October 2017 at 02:33, Yury Selivanov  wrote:
> Stage 1. A new execution context PEP to solve the problem *just for
> async code*.  The PEP will target Python 3.7 and completely ignore
> synchronous generators and asynchronous generators.  It will be based
> on PEP 550 v1 (no chained lookups, immutable mapping or CoW as an
> optimization) and borrow some good API decisions from PEP 550 v3+
> (contextvars module, ContextVar class).  The API (and C-API) will be
> designed to be future proof and ultimately allow transition to the
> stage 2.

So would decimal contexts stick to using threading.local? If so,
presumably they'd still have problems with async. If not, won't you
still be stuck with having to define the new semantics they have when
used with generators? Or would it be out of scope for the PEP to take
a position on what decimal does?

Paul
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com