Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Glenn Linderman

On 5/1/2015 9:59 AM, Guido van Rossum wrote:


I think coroutine is the name of a concept, not a specific
implementation.

Cheers,

 Cheers indeed! I agree that the *concept* is best called coroutine -- 
and we have used this term ever since PEP 342. But when we're talking 
specifics and trying to distinguish e.g. a function declared with 
'async def' from a regular function or from a regular generator 
function, using 'async function' sounds right. And 'async method' if 
it's a method.


Exactly. The async function/method is an implementation technique for a 
specific kind/subset of coroutine functionality.  So the term coroutine, 
qualified by a description of the best usage and limitations of async 
functions, can be used in defining async function, thus appealing to a 
concept people know, or have heard of and vaguely understand, and can 
read more about in the literature.


A glossary entry for coroutine in the docs seems appropriate, which 
could point out the 16† ways to implement coroutine-style 
functionalities in Python, and perhaps make recommendations for 
different types of usage.


†OK, not 16 ways, but it is 3 now, or 4?
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 492 quibble and request

2015-05-01 Thread Steve Dower
The high-level answer to this is, like yield, the function temporarily returns 
all the way up the stack to the caller who intends to advance the 
iterator/async function. This is typically the event loop, and the main 
confusion here comes when the loop is implicit.

If you explicitly define an event loop, it's roughly equivalent to the for loop 
that is calling next on a generator. For pip, I expect that's what you'd have - 
a blocking function (do_work()?, execute_plan()?) that creates a loop and 
starts its tasks running within it. Each task inside that call will be 
asynchronous with respect to the others (think about passing generators to 
zip()), but the loop will block the rest of your code until they're all 
finished.
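A minimal sketch of that shape, using modern asyncio spellings (asyncio.run postdates this thread, and execute_plan/install are invented names matching Steve's guesses, not pip's actual code):

```python
import asyncio

async def install(pkg):
    # Stand-in for one unit of work; sleep(0.01) marks a suspension point
    # where the loop may switch to another task.
    await asyncio.sleep(0.01)
    return pkg

def execute_plan(pkgs):
    # The blocking call Steve describes: it creates a loop, runs every
    # task to completion, and only then returns to synchronous code.
    async def main():
        return await asyncio.gather(*(install(p) for p in pkgs))
    return asyncio.run(main())

print(execute_plan(["pip", "wheel"]))  # ['pip', 'wheel']
```

The tasks interleave with each other inside the call, but the caller of execute_plan() sees an ordinary blocking function.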

Cheers,
Steve

Top-posted from my Windows Phone

From: Paul Moore <p.f.mo...@gmail.com>
Sent: 4/30/2015 2:07
To: Greg Ewing <greg.ew...@canterbury.ac.nz>
Cc: Python Dev <python-dev@python.org>
Subject: Re: [Python-Dev] PEP 492 quibble and request

On 30 April 2015 at 09:58, Greg Ewing greg.ew...@canterbury.ac.nz wrote:
 Ethan Furman wrote:

 Having gone through the PEP again, I am still no closer to understanding
 what happens here:

   data = await reader.read(8192)

 What does the flow of control look like at the interpreter level?


 Are you sure you *really* want to know? For the sake
 of sanity, I recommend ignoring the actual control
 flow and pretending that it's just like

data = reader.read(8192)

 with the reader.read() method somehow able to be
 magically suspended.

Well, if I don't know, I get confused as to where I invoke the event
loop, how my non-async code runs alongside this etc.
Paul


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Stefan Behnel
Yury Selivanov schrieb am 01.05.2015 um 20:52:
 I don't like the idea of combining __next__ and __anext__.

Ok, fair enough. So, how would you use this new protocol manually then?
Say, I already know that I won't need to await the next item that the
iterator will return. For normal iterators, I could just call next() on it
and continue the for-loop. How would I do it for AIterators?
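For illustration, manually advancing an async iterator might look like this (a hedged sketch: Counter and first_item are invented names, and __aiter__ here returns the iterator directly; the point is that even a value known to be ready must still be awaited, because __anext__ only hands back an awaitable):

```python
import asyncio

class Counter:
    # Minimal asynchronous iterator following the __aiter__/__anext__ protocol.
    def __init__(self, stop):
        self.i, self.stop = 0, stop
    def __aiter__(self):
        return self
    async def __anext__(self):
        if self.i >= self.stop:
            raise StopAsyncIteration
        self.i += 1
        return self.i

async def first_item(ait):
    # The manual equivalent of one step of `async for`:
    # there is no plain next() shortcut, only an awaited __anext__().
    return await ait.__anext__()

print(asyncio.run(first_item(Counter(3))))  # 1
```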

Stefan




Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Guido van Rossum
On Fri, May 1, 2015 at 5:39 AM, Stefan Behnel stefan...@behnel.de wrote:

 Yury Selivanov schrieb am 30.04.2015 um 03:30:
  Asynchronous Iterators and async for
  --
 
  An *asynchronous iterable* is able to call asynchronous code in its
  *iter* implementation, and *asynchronous iterator* can call
  asynchronous code in its *next* method.  To support asynchronous
  iteration:
 
  1. An object must implement an  ``__aiter__`` method returning an
 *awaitable* resulting in an *asynchronous iterator object*.
 
  2. An *asynchronous iterator object* must implement an ``__anext__``
 method returning an *awaitable*.
 
  3. To stop iteration ``__anext__`` must raise a ``StopAsyncIteration``
 exception.

 What this section does not explain, AFAICT, nor the section on design
 considerations, is why the iterator protocol needs to be duplicated
 entirely. Can't we just assume (or even validate) that any 'regular'
 iterator returned from __aiter__() (as opposed to __iter__()) properly
 obeys the new protocol? Why additionally duplicate __next__() and
 StopIteration?

 ISTM that all this really depends on is that __next__() returns an
 awaitable. Renaming the method doesn't help with that guarantee.


This is an astute observation. I think its flaw (if any) is the situation
where we want a single object to be both a regular iterator and an async
iterator (say when migrating code to the new world). The __next__ method
might want to return a result while __anext__ has to return an awaitable.
The solution to that would be to have __aiter__ return an instance of a
different class than __iter__, but that's not always convenient.

Thus, aware of the choice, I would still prefer a separate __anext__ method.
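The migration case Guido mentions can be sketched as one class obeying both protocols (illustrative only; DualCounter is an invented name, and the async comprehension is a later-Python convenience):

```python
import asyncio

class DualCounter:
    # One object that is both a regular iterator and an async iterator:
    # __next__ returns plain values, __anext__ is an awaitable.
    def __init__(self, stop):
        self.i, self.stop = 0, stop
    def __iter__(self):
        return self
    def __next__(self):
        if self.i >= self.stop:
            raise StopIteration
        self.i += 1
        return self.i
    def __aiter__(self):
        return self
    async def __anext__(self):
        try:
            return next(self)          # delegate to the sync path...
        except StopIteration:
            raise StopAsyncIteration   # ...translating the terminator

print(list(DualCounter(3)))  # [1, 2, 3]

async def collect(ait):
    return [x async for x in ait]

print(asyncio.run(collect(DualCounter(3))))  # [1, 2, 3]
```

With merged __next__/StopIteration this class could not exist, since the two protocols would need different return types from the same method.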

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Gustavo Carneiro
On 1 May 2015 at 16:31, Guido van Rossum gu...@python.org wrote:

 On Fri, May 1, 2015 at 5:50 AM, Stefan Behnel stefan...@behnel.de wrote:

 Yury Selivanov schrieb am 30.04.2015 um 03:30:
  1. Terminology:
  - *native coroutine* term is used for async def functions.

 When I read native, I think of native (binary) code. So native
 coroutine sounds like it's implemented in some compiled low-level
 language. That might be the case (certainly in the CPython world), but
 it's
 not related to this PEP nor its set of examples.


  We should discuss how we will name new 'async def' coroutines in
  Python Documentation if the PEP is accepted.

 Well, it doesn't hurt to avoid obvious misleading terminology upfront.


 I think obvious[ly] misleading is too strong, nobody is trying to
 mislead anybody, we just have different associations with the same word.
 Given the feedback I'd call native coroutine suboptimal (even though I
 proposed it myself) and I am now in favor of using async function.


But what if you have async methods?  I know, a method is almost a function,
but still, sounds slightly confusing.

IMHO, these are really classical coroutines.  If gevent calls them
coroutines, I don't think asyncio has any less right to call them
coroutines.

You have to ask yourself this: when a new programmer sees mentions of
coroutines, how likely is he to understand what he is dealing with?  What
about async function?  The former is a concept that has been well known for
decades, while the latter is something he (I, at least) had probably never
heard of before.

For me, an async function is just as likely to be an API that is
asynchronous in the sense that it takes an extra callback parameter to be
called when the asynchronous work is done.

I think coroutine is the name of a concept, not a specific implementation.

Cheers,

-- 
Gustavo J. A. M. Carneiro
Gambit Research
The universe is always one step beyond logic. -- Frank Herbert


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Guido van Rossum
On Fri, May 1, 2015 at 5:50 AM, Stefan Behnel stefan...@behnel.de wrote:

 Yury Selivanov schrieb am 30.04.2015 um 03:30:
  1. Terminology:
  - *native coroutine* term is used for async def functions.

 When I read native, I think of native (binary) code. So native
 coroutine sounds like it's implemented in some compiled low-level
 language. That might be the case (certainly in the CPython world), but it's
 not related to this PEP nor its set of examples.


  We should discuss how we will name new 'async def' coroutines in
  Python Documentation if the PEP is accepted.

 Well, it doesn't hurt to avoid obvious misleading terminology upfront.


I think obvious[ly] misleading is too strong, nobody is trying to mislead
anybody, we just have different associations with the same word. Given the
feedback I'd call native coroutine suboptimal (even though I proposed it
myself) and I am now in favor of using async function.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 492 vs. PEP 3152, new round

2015-05-01 Thread Greg Ewing

Nathaniel Smith wrote:


If await acted like unary minus (-), then this would be
  await (x ** 2)
But with the proposed grammar, it's instead
  (await x) ** 2


Ah, I had missed that!

This is a *good* argument for Yury's grammar.
I withdraw my objection now.
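The precedence in question can be checked directly against the parser, a small sketch using the stdlib ast module:

```python
import ast

# Under the PEP's grammar, await binds tighter than ** on its left,
# so "await x ** 2" parses as "(await x) ** 2".
tree = ast.parse("async def f(x):\n    return await x ** 2")
ret = tree.body[0].body[0].value          # the Return expression
print(type(ret).__name__, type(ret.left).__name__)  # BinOp Await
```

The outermost node is the power operation, and the await applies only to x, exactly as Nathaniel describes.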

--
Greg


[Python-Dev] Summary of Python tracker Issues

2015-05-01 Thread Python tracker

ACTIVITY SUMMARY (2015-04-24 - 2015-05-01)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open    4841 (+27)
  closed 31025 (+25)
  total  35866 (+52)

Open issues with patches: 2254 


Issues opened (39)
==

#24054: Invalid syntax in inspect_fodder2.py (on Python 2.x)
http://bugs.python.org/issue24054  opened by ddriddle

#24055: unittest package-level set up & tear down module
http://bugs.python.org/issue24055  opened by demian.brecht

#24056: Expose closure & generator status in function repr()
http://bugs.python.org/issue24056  opened by ncoghlan

#24060: Clearify necessities for logging with timestamps
http://bugs.python.org/issue24060  opened by krichter

#24063: Support Mageia and Arch Linux in the platform module
http://bugs.python.org/issue24063  opened by akien

#24064: Make the property doctstring writeable
http://bugs.python.org/issue24064  opened by rhettinger

#24065: Outdated *_RESTRICTED flags in structmember.h
http://bugs.python.org/issue24065  opened by berker.peksag

#24066: send_message should take all the addresses in the To: header i
http://bugs.python.org/issue24066  opened by kirelagin

#24067: Weakproxy is an instance of collections.Iterator
http://bugs.python.org/issue24067  opened by ereuveni

#24068: statistics module - incorrect results with boolean input
http://bugs.python.org/issue24068  opened by wolma

#24069: Option to delete obsolete bytecode files
http://bugs.python.org/issue24069  opened by Sworddragon

#24076: sum() several times slower on Python 3
http://bugs.python.org/issue24076  opened by lukasz.langa

#24078: inspect.getsourcelines ignores context and returns wrong line 
http://bugs.python.org/issue24078  opened by siyuan

#24079: xml.etree.ElementTree.Element.text does not conform to the doc
http://bugs.python.org/issue24079  opened by jlaurens

#24080: asyncio.Event.wait() Task was destroyed but it is pending
http://bugs.python.org/issue24080  opened by matt

#24081: Obsolete caveat in reload() docs
http://bugs.python.org/issue24081  opened by encukou

#24082: Obsolete note in argument parsing (c-api/arg.rst)
http://bugs.python.org/issue24082  opened by encukou

#24084: pstats: sub-millisecond display
http://bugs.python.org/issue24084  opened by Romuald

#24085: large memory overhead when pyc is recompiled
http://bugs.python.org/issue24085  opened by bukzor

#24086: Configparser interpolation is unexpected
http://bugs.python.org/issue24086  opened by tbekolay

#24087: Documentation doesn't explain the term coroutine (PEP 342)
http://bugs.python.org/issue24087  opened by paul.moore

#24088: yield expression confusion
http://bugs.python.org/issue24088  opened by Jim.Jewett

#24089: argparse crashes with AssertionError
http://bugs.python.org/issue24089  opened by spaceone

#24090: Add a copy variable to clipboard option to the edit menu
http://bugs.python.org/issue24090  opened by irdb

#24091: Use after free in Element.extend (1)
http://bugs.python.org/issue24091  opened by pkt

#24092: Use after free in Element.extend (2)
http://bugs.python.org/issue24092  opened by pkt

#24093: Use after free in Element.remove
http://bugs.python.org/issue24093  opened by pkt

#24094: Use after free during json encoding (PyType_IsSubtype)
http://bugs.python.org/issue24094  opened by pkt

#24095: Use after free during json encoding a dict (2)
http://bugs.python.org/issue24095  opened by pkt

#24096: Use after free in get_filter
http://bugs.python.org/issue24096  opened by pkt

#24097: Use after free in PyObject_GetState
http://bugs.python.org/issue24097  opened by pkt

#24098: Multiple use after frees in obj2ast_* methods
http://bugs.python.org/issue24098  opened by pkt

#24099: Use after free in siftdown (1)
http://bugs.python.org/issue24099  opened by pkt

#24100: Use after free in siftdown (2)
http://bugs.python.org/issue24100  opened by pkt

#24101: Use after free in siftup
http://bugs.python.org/issue24101  opened by pkt

#24102: Multiple type confusions in unicode error handlers
http://bugs.python.org/issue24102  opened by pkt

#24103: Use after free in xmlparser_setevents (1)
http://bugs.python.org/issue24103  opened by pkt

#24104: Use after free in xmlparser_setevents (2)
http://bugs.python.org/issue24104  opened by pkt

#24105: Use after free during json encoding a dict (3)
http://bugs.python.org/issue24105  opened by pkt



Most recent 15 issues with no replies (15)
==

#24105: Use after free during json encoding a dict (3)
http://bugs.python.org/issue24105

#24104: Use after free in xmlparser_setevents (2)
http://bugs.python.org/issue24104

#24103: Use after free in xmlparser_setevents (1)
http://bugs.python.org/issue24103

#24102: Multiple type confusions in unicode error handlers
http://bugs.python.org/issue24102

#24101: Use after free in siftup
http://bugs.python.org/issue24101


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Stefan Behnel
Guido van Rossum schrieb am 01.05.2015 um 17:28:
 On Fri, May 1, 2015 at 5:39 AM, Stefan Behnel wrote:
 
 Yury Selivanov schrieb am 30.04.2015 um 03:30:
 Asynchronous Iterators and async for
 --

 An *asynchronous iterable* is able to call asynchronous code in its
 *iter* implementation, and *asynchronous iterator* can call
 asynchronous code in its *next* method.  To support asynchronous
 iteration:

 1. An object must implement an  ``__aiter__`` method returning an
*awaitable* resulting in an *asynchronous iterator object*.

 2. An *asynchronous iterator object* must implement an ``__anext__``
method returning an *awaitable*.

 3. To stop iteration ``__anext__`` must raise a ``StopAsyncIteration``
exception.

 What this section does not explain, AFAICT, nor the section on design
 considerations, is why the iterator protocol needs to be duplicated
 entirely. Can't we just assume (or even validate) that any 'regular'
 iterator returned from __aiter__() (as opposed to __iter__()) properly
 obeys the new protocol? Why additionally duplicate __next__() and
 StopIteration?

 ISTM that all this really depends on is that __next__() returns an
 awaitable. Renaming the method doesn't help with that guarantee.
 
 
 This is an astute observation. I think its flaw (if any) is the situation
 where we want a single object to be both a regular iterator and an async
 iterator (say when migrating code to the new world). The __next__ method
 might want to return a result while __anext__ has to return an awaitable.
 The solution to that would be to have __aiter__ return an instance of a
 different class than __iter__, but that's not always convenient.

My personal gut feeling is that this case would best be handled by a
generic wrapper class. Both are well defined protocols, so I don't expect
people to change all of their classes and instead return a wrapped object
either from __iter__() or __aiter__(), depending on which they want to
optimise for, or which will eventually turn out to be easier to wrap.

But that's trying to predict the [Ff]uture, obviously. It just feels like
unnecessary complexity for now. And we already have a type slot for
__next__ (tp_iternext), but not for __anext__.
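The generic wrapper class alluded to above might look like this (a sketch of the idea, not an actual stdlib class; AsyncWrapper is an invented name):

```python
import asyncio

class AsyncWrapper:
    # Adapts any regular iterable to the asynchronous-iteration protocol
    # by translating StopIteration into StopAsyncIteration.
    def __init__(self, iterable):
        self._it = iter(iterable)
    def __aiter__(self):
        return self
    async def __anext__(self):
        try:
            return next(self._it)
        except StopIteration:
            raise StopAsyncIteration

async def demo():
    items = []
    async for x in AsyncWrapper(range(3)):  # drives __anext__ under the hood
        items.append(x)
    return items

print(asyncio.run(demo()))  # [0, 1, 2]
```

A class that wants to optimise for the sync case can keep its own __iter__ and return a wrapped object from __aiter__(), or vice versa.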

Stefan




Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Guido van Rossum
On Fri, May 1, 2015 at 8:55 AM, Gustavo Carneiro gjcarne...@gmail.com
wrote:




 On 1 May 2015 at 16:31, Guido van Rossum gu...@python.org wrote:

 On Fri, May 1, 2015 at 5:50 AM, Stefan Behnel stefan...@behnel.de
 wrote:

 Yury Selivanov schrieb am 30.04.2015 um 03:30:
  1. Terminology:
  - *native coroutine* term is used for async def functions.

 When I read native, I think of native (binary) code. So native
 coroutine sounds like it's implemented in some compiled low-level
 language. That might be the case (certainly in the CPython world), but
 it's
 not related to this PEP nor its set of examples.


  We should discuss how we will name new 'async def' coroutines in
  Python Documentation if the PEP is accepted.

 Well, it doesn't hurt to avoid obvious misleading terminology upfront.


 I think obvious[ly] misleading is too strong, nobody is trying to
 mislead anybody, we just have different associations with the same word.
 Given the feedback I'd call native coroutine suboptimal (even though I
 proposed it myself) and I am now in favor of using async function.


 But what if you have async methods?  I know, a method is almost a
 function, but still, sounds slightly confusing.

 IMHO, these are really classical coroutines.  If gevent calls them
 coroutines, I don't think asyncio has any less right to call them
 coroutines.

 You have to ask yourself this: a new programmer, when he sees mentions of
 coroutines, how likely is he to understand what he is dealing with?  What
 about async function?  The former is a well known concept, since decades
 ago, while the latter is something he probably (at least me) never heard of
 before.

 For me, an async function is just as likely to be an API that is
 asynchronous in the sense that it takes an extra callback parameter to be
 called when the asynchronous work is done.

 I think coroutine is the name of a concept, not a specific implementation.

 Cheers,

  Cheers indeed! I agree that the *concept* is best called coroutine -- and
we have used this term ever since PEP 342. But when we're talking specifics
and trying to distinguish e.g. a function declared with 'async def' from a
regular function or from a regular generator function, using 'async
function' sounds right. And 'async method' if it's a method.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Antoine Pitrou
On Fri, 1 May 2015 13:10:01 -0700
Guido van Rossum gu...@python.org wrote:
 On Fri, May 1, 2015 at 12:48 PM, Jim J. Jewett jimjjew...@gmail.com wrote:
 
  If there are more tasks than executors, yield is a way to release your
  current executor and go to the back of the line.  I'm pretty sure I
  saw several examples of that style back when coroutines were first
  discussed.
 
 
 Could you dig up the actual references? It seems rather odd to me to mix
 coroutines and threads this way.

I think Jim is saying that when you have a non-trivial task running
in the event loop, you can yield from time to time to give other events
(e.g. network events or timeouts) a chance to be processed promptly.

Of course, that assumes the event loop will somehow prioritize them over
the just-yielded task.

Regards

Antoine.




Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Yury Selivanov

On 2015-05-01 4:24 PM, Arnaud Delobelle wrote:

On 1 May 2015 at 20:24, Yury Selivanov yselivanov...@gmail.com wrote:

On 2015-05-01 3:19 PM, Ethan Furman wrote:

[...]

If we must have __aiter__, then we may as well also have __anext__;
besides being more consistent, it also allows an object to be both a
normal iterator and an async iterator.


And this is a good point too.

I'm not convinced that allowing an object to be both a normal and an
async iterator is a good thing.  It could be a recipe for confusion.




I doubt that it will be a popular thing.  But disallowing this
by merging two different protocols in one isn't a good idea
either.

Yury


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Yury Selivanov

Let's say it this way: I want to know what I am looking at
when I browse through the code -- an asynchronous iterator,
or a normal iterator.  I want an explicit difference between
these protocols, because they are different.

Moreover, the below code is a perfectly valid, infinite
iterable:

class SomeIterable:
    def __iter__(self):
        return self
    async def __next__(self):
        return 'spam'

I'm strong -1 on this idea.

Yury

On 2015-05-01 3:03 PM, Stefan Behnel wrote:

Yury Selivanov schrieb am 01.05.2015 um 20:52:

I don't like the idea of combining __next__ and __anext__.
In this case explicit is better than implicit.  __next__
returning coroutines is a perfectly normal thing for a
normal 'for' loop (it wouldn't do anything with them),
whereas 'async for' will interpret that differently, and
will try to await those coroutines.

Sure, but the difference is that one would have called __aiter__() first
and the other __iter__(). Normally, either of the two would not exist, so
using the wrong loop on an object will just fail. However, after we passed
that barrier, we already know that the object that was returned is supposed
to obey the expected protocol, so it doesn't matter whether we call
__next__() or name it __anext__(), except that the second requires us to
duplicate an existing protocol.

This has nothing to do with implicit vs. explicit.

Stefan




Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Guido van Rossum
On Fri, May 1, 2015 at 12:48 PM, Jim J. Jewett jimjjew...@gmail.com wrote:

 If there are more tasks than executors, yield is a way to release your
 current executor and go to the back of the line.  I'm pretty sure I
 saw several examples of that style back when coroutines were first
 discussed.


Could you dig up the actual references? It seems rather odd to me to mix
coroutines and threads this way.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Guido van Rossum
On Fri, May 1, 2015 at 1:22 PM, Antoine Pitrou solip...@pitrou.net wrote:

 On Fri, 1 May 2015 13:10:01 -0700
 Guido van Rossum gu...@python.org wrote:
  On Fri, May 1, 2015 at 12:48 PM, Jim J. Jewett jimjjew...@gmail.com
 wrote:
 
   If there are more tasks than executors, yield is a way to release your
   current executor and go to the back of the line.  I'm pretty sure I
   saw several examples of that style back when coroutines were first
   discussed.
  
 
  Could you dig up the actual references? It seems rather odd to me to mix
  coroutines and threads this way.

 I think Jim is saying that when you have a non-trivial task running
 in the event loop, you can yield from time to time to give a chance
 to other events (e.g. network events or timeouts) to be processed
 timely.

 Of course, that assumes the event loop will somehow prioritize them over
 the just-yielded task.


Yeah, but (unlike some frameworks) when using asyncio you can't just put a
plain yield statement in your code. You'd have to do something like
`yield from asyncio.sleep(0)`.
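In today's syntax that cooperative handoff looks like this (a sketch; hog and other are invented names, and `await asyncio.sleep(0)` is the modern spelling of `yield from asyncio.sleep(0)`):

```python
import asyncio

order = []

async def hog(n):
    # A long-running task that voluntarily yields control on each step.
    for i in range(n):
        order.append(("hog", i))
        await asyncio.sleep(0)   # hand the loop back without really sleeping

async def other():
    order.append("other ran")

async def main():
    await asyncio.gather(hog(2), other())

asyncio.run(main())
print(order)
```

Without the sleep(0), hog would run both of its steps before other got any chance at all; with it, other is scheduled in between.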

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Ethan Furman
On 05/01, Guido van Rossum wrote:
 On Fri, May 1, 2015 at 11:26 AM, Jim J. Jewett jimjjew...@gmail.com wrote:

 So does this mean that yield should NOT be used just to yield control
 if a task isn't blocked?  (e.g., if its next step is likely to be
 long, or low priority.)  Or even that it wouldn't be considered a
 co-routine in the python sense?

 
 I'm not sure what you're talking about. Does next step refer to something
 in the current stack frame or something that you're calling? None of the
 current uses of yield (the keyword) in Python are good for lowering
 priority of something. It's not just the GIL, it's that coroutines (by
 whatever name) are still single-threaded. If you have something
 long-running CPU-intensive you should probably run it in a background
 thread (or process) e.g. using an executor.

So when a generator is used as an iterator, yield and yield from are used
to produce the actual working values...

But when a generator is used as a coroutine, yield (and yield from?) are
used to provide context about when they should be run again?

--
~Ethan~


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Yury Selivanov

On 2015-05-01 3:19 PM, Ethan Furman wrote:

Sure, but the difference is that one would have called __aiter__() first
and the other __iter__(). Normally, either of the two would not exist, so
using the wrong loop on an object will just fail. However, after we passed
that barrier, we already know that the object that was returned is supposed
to obey the expected protocol, so it doesn't matter whether we call
__next__() or name it __anext__(), except that the second requires us to
duplicate an existing protocol.

If we must have __aiter__, then we may as well also have __anext__; besides
being more consistent, it also allows an object to be both a normal iterator
and an async iterator.


And this is a good point too.

Thanks,
Yury


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Jim J. Jewett
On Fri, May 1, 2015 at 2:59 PM, Guido van Rossum gu...@python.org wrote:
 On Fri, May 1, 2015 at 11:26 AM, Jim J. Jewett jimjjew...@gmail.com wrote:

 On Thu, Apr 30, 2015 at 3:32 PM, Guido van Rossum gu...@python.org
 wrote:


 (Guido:) Actually that's not even wrong. When using generators as
 coroutines, PEP 342
  style, yield means I am blocked waiting for a result that the I/O
  multiplexer is eventually going to produce.

 So does this mean that yield should NOT be used just to yield control
 if a task isn't blocked?  (e.g., if its next step is likely to be
 long, or low priority.)  Or even that it wouldn't be considered a
 co-routine in the python sense?

 I'm not sure what you're talking about. Does next step refer to something
 in the current stack frame or something that you're calling?

The next piece of your algorithm.

 None of the
 current uses of yield (the keyword) in Python are good for lowering
 priority of something.

If there are more tasks than executors, yield is a way to release your
current executor and go to the back of the line.  I'm pretty sure I
saw several examples of that style back when coroutines were first
discussed.

-jJ


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Guido van Rossum
On Fri, May 1, 2015 at 12:24 PM, Ethan Furman et...@stoneleaf.us wrote:

 On 05/01, Guido van Rossum wrote:
  On Fri, May 1, 2015 at 11:26 AM, Jim J. Jewett jimjjew...@gmail.com
 wrote:

  So does this mean that yield should NOT be used just to yield control
  if a task isn't blocked?  (e.g., if its next step is likely to be
  long, or low priority.)  Or even that it wouldn't be considered a
  co-routine in the python sense?
 
 
  I'm not sure what you're talking about. Does next step refer to
 something
  in the current stack frame or something that you're calling? None of the
  current uses of yield (the keyword) in Python are good for lowering
  priority of something. It's not just the GIL, it's that coroutines (by
  whatever name) are still single-threaded. If you have something
  long-running CPU-intensive you should probably run it in a background
  thread (or process) e.g. using an executor.

 So when a generator is used as an iterator, yield and yield from are used
 to produce the actual working values...

 But when a generator is used as a coroutine, yield (and yield from?) are
 used to provide context about when they should be run again?


The common thing is that the *argument* to yield provides info to
whoever/whatever is on the other end, and the *return value* from yield
[from] is whatever they returned in response.

When using yield to implement an iterator, there is no return value from
yield -- the other end is the for-loop that calls __next__, and it just
says give me the next value, and the value passed to yield is that next
value.

When using yield [from] to implement a coroutine the other end is probably
a trampoline or scheduler or multiplexer. The argument to yield [from]
tells the scheduler what you are waiting for. The scheduler resumes the
coroutine when that value is available.

At this point please go read Greg Ewing's tutorial. Seriously.
http://www.cosc.canterbury.ac.nz/greg.ewing/python/yield-from/yield_from.html

Note that when using yield from, there is a third player: the coroutine
that contains the yield from. This is neither the scheduler nor the other
thing; the communication between the scheduler and the other thing passes
transparently *through* this coroutine. When the other thing has a value
for this coroutine, it uses *return* to send it a value. The other thing
here is a lower-level coroutine -- it could either itself also use
yield-from and return, or it could be an I/O primitive that actually
gives the scheduler a specific instruction (e.g. "wait until this socket
becomes readable").

Please do read Greg's tutorial.
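The three players can be sketched in a few lines. This is a toy trampoline, an illustration only, none of asyncio's real machinery: a low-level coroutine yields an instruction to the scheduler, and the middle coroutine just sees the value returned to it through yield from.

```python
def read_value():
    # Low-level coroutine ("the other thing"): yields an instruction to the
    # scheduler, then *returns* a value to whoever delegated via yield from.
    result = yield ('wait', 'socket-readable')
    return result

def task(log):
    # Middle coroutine: scheduler traffic passes transparently *through* it;
    # it only sees the final returned value.
    value = yield from read_value()
    log.append(value)

def run(coro):
    # Scheduler: drives the coroutine and answers its instructions.
    try:
        instruction = coro.send(None)   # start; receive first instruction
        while True:
            # A real scheduler would act on `instruction` (e.g. wait on a
            # socket); here we immediately answer with a dummy result.
            instruction = coro.send(42)
    except StopIteration:
        pass

log = []
run(task(log))
print(log)   # [42]
```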

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Guido van Rossum
On Fri, May 1, 2015 at 11:26 AM, Jim J. Jewett jimjjew...@gmail.com wrote:

 On Thu, Apr 30, 2015 at 3:32 PM, Guido van Rossum gu...@python.org
 wrote:

 (me:)
  A badly worded attempt to say
  Normal generator:  yield (as opposed to return) means
  that the function isn't done, and there may be more
  things to return later.

  but an asynchronous (PEP492) coroutine is primarily saying:

   This might take a while, go ahead and do something else
  meanwhile.

 (Yuri:) Correct.

 (Guido:) Actually that's not even wrong. When using generators as
 coroutines, PEP 342
 style, yield means "I am blocked waiting for a result that the I/O
 multiplexer is eventually going to produce."

 So does this mean that yield should NOT be used just to yield control
 if a task isn't blocked?  (e.g., if its next step is likely to be
 long, or low priority.)  Or even that it wouldn't be considered a
 co-routine in the python sense?


I'm not sure what you're talking about. Does "next step" refer to something
in the current stack frame or something that you're calling? None of the
current uses of yield (the keyword) in Python are good for lowering
priority of something. It's not just the GIL, it's that coroutines (by
whatever name) are still single-threaded. If you have something
long-running CPU-intensive you should probably run it in a background
thread (or process) e.g. using an executor.
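A minimal sketch of that suggestion (illustrative, not code from the thread): hand the CPU-bound work to an executor so the main thread, or an event loop, stays free, and collect the result only when it is needed.

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    # Deliberately slow, CPU-bound work.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(fib, 20)   # runs in a background thread
    # ... the main thread / event loop is free to do other work here ...
    print(future.result())              # blocks only when we need the answer
```

A ProcessPoolExecutor has the same interface and sidesteps the GIL for truly CPU-heavy work.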


 If this is really just about avoiding busy-wait on network IO, then
 coroutine is way too broad a term, and I'm uncomfortable restricting a
 new keyword (async or await) to what is essentially a Domain Specific
 Language.


The common use case is network I/O. But it's quite possible to integrate
coroutines with a UI event loop.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Stefan Behnel
Yury Selivanov schrieb am 01.05.2015 um 20:52:
 I don't like the idea of combining __next__ and __anext__.
 In this case explicit is better than implicit.  __next__
 returning coroutines is a perfectly normal thing for a
normal 'for' loop (it wouldn't do anything with them),
 whereas 'async for' will interpret that differently, and
 will try to await those coroutines.

Sure, but the difference is that one would have called __aiter__() first
and the other __iter__(). Normally, either of the two would not exist, so
using the wrong loop on an object will just fail. However, after we passed
that barrier, we already know that the object that was returned is supposed
to obey the expected protocol, so it doesn't matter whether we call
__next__() or name it __anext__(), except that the second requires us to
duplicate an existing protocol.

This has nothing to do with implicit vs. explicit.

Stefan




Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Jim J. Jewett
On Thu, Apr 30, 2015 at 3:32 PM, Guido van Rossum gu...@python.org wrote:

(me:)
 A badly worded attempt to say
 Normal generator:  yield (as opposed to return) means
 that the function isn't done, and there may be more
 things to return later.

 but an asynchronous (PEP492) coroutine is primarily saying:

  This might take a while, go ahead and do something else
 meanwhile.

(Yuri:) Correct.

(Guido:) Actually that's not even wrong. When using generators as
coroutines, PEP 342
 style, yield means "I am blocked waiting for a result that the I/O
 multiplexer is eventually going to produce."

So does this mean that yield should NOT be used just to yield control
if a task isn't blocked?  (e.g., if its next step is likely to be
long, or low priority.)  Or even that it wouldn't be considered a
co-routine in the python sense?

If this is really just about avoiding busy-wait on network IO, then
coroutine is way too broad a term, and I'm uncomfortable restricting a
new keyword (async or await) to what is essentially a Domain Specific
Language.

-jJ


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Yury Selivanov

Stefan,

I don't like the idea of combining __next__ and __anext__.
In this case explicit is better than implicit.  __next__
returning coroutines is a perfectly normal thing for a
normal 'for' loop (it wouldn't do anything with them),
whereas 'async for' will interpret that differently, and
will try to await those coroutines.

Yury

On 2015-05-01 1:10 PM, Stefan Behnel wrote:

Guido van Rossum schrieb am 01.05.2015 um 17:28:

On Fri, May 1, 2015 at 5:39 AM, Stefan Behnel wrote:


Yury Selivanov schrieb am 30.04.2015 um 03:30:

Asynchronous Iterators and async for
------------------------------------

An *asynchronous iterable* is able to call asynchronous code in its
*iter* implementation, and *asynchronous iterator* can call
asynchronous code in its *next* method.  To support asynchronous
iteration:

1. An object must implement an  ``__aiter__`` method returning an
*awaitable* resulting in an *asynchronous iterator object*.

2. An *asynchronous iterator object* must implement an ``__anext__``
method returning an *awaitable*.

3. To stop iteration ``__anext__`` must raise a ``StopAsyncIteration``
exception.

What this section does not explain, AFAICT, nor the section on design
considerations, is why the iterator protocol needs to be duplicated
entirely. Can't we just assume (or even validate) that any 'regular'
iterator returned from __aiter__() (as opposed to __iter__()) properly
obeys the new protocol? Why additionally duplicate __next__() and
StopIteration?

ISTM that all this really depends on is that __next__() returns an
awaitable. Renaming the method doesn't help with that guarantee.


This is an astute observation. I think its flaw (if any) is the situation
where we want a single object to be both a regular iterator and an async
iterator (say when migrating code to the new world). The __next__ method
might want to return a result while __anext__ has to return an awaitable.
The solution to that would be to have __aiter__ return an instance of a
different class than __iter__, but that's not always convenient.

My personal gut feeling is that this case would best be handled by a
generic wrapper class. Both are well defined protocols, so I don't expect
people to change all of their classes and instead return a wrapped object
either from __iter__() or __aiter__(), depending on which they want to
optimise for, or which will eventually turn out to be easier to wrap.

But that's trying to predict the [Ff]uture, obviously. It just feels like
unnecessary complexity for now. And we already have a type slot for
__next__ (tp_iternext), but not for __anext__.
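A minimal version of that generic wrapper class might look like the following. This is an illustrative sketch with invented names, written against the async-iteration protocol as it exists in today's Python: it adapts any regular iterable to 'async for' by translating StopIteration into StopAsyncIteration.

```python
import asyncio

class AsyncWrapper:
    # Adapt a regular iterable to the asynchronous iteration protocol.
    def __init__(self, iterable):
        self._it = iter(iterable)

    def __aiter__(self):
        return self

    async def __anext__(self):
        try:
            return next(self._it)
        except StopIteration:
            # Translate between the two protocols.
            raise StopAsyncIteration

async def collect():
    return [x async for x in AsyncWrapper('abc')]

print(asyncio.run(collect()))   # ['a', 'b', 'c']
```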

Stefan






Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Ethan Furman
On 05/01, Stefan Behnel wrote:
 Yury Selivanov schrieb am 01.05.2015 um 20:52:

 I don't like the idea of combining __next__ and __anext__.
 In this case explicit is better than implicit.  __next__
 returning coroutines is a perfectly normal thing for a
 normal 'for' loop (it wouldn't do anything with them),
 whereas 'async for' will interpret that differently, and
 will try to await those coroutines.
 
 Sure, but the difference is that one would have called __aiter__() first
 and the other __iter__(). Normally, either of the two would not exist, so
 using the wrong loop on an object will just fail. However, after we passed
 that barrier, we already know that the object that was returned is supposed
 to obey the expected protocol, so it doesn't matter whether we call
 __next__() or name it __anext__(), except that the second requires us to
 duplicate an existing protocol.

If we must have __aiter__, then we may as well also have __anext__; besides
being more consistent, it also allows an object to be both a normal iterator
and an async iterator.

--
~Ethan~


Re: [Python-Dev] Unicode literals in Python 2.7

2015-05-01 Thread Adam Bartoš
On Fri, May 1, 2015 at 6:14 AM, Stephen J. Turnbull step...@xemacs.org
wrote:

 Adam Bartoš writes:

   Unfortunately, it doesn't work. With PYTHONIOENCODING=utf-8, the
   sys.std* streams are created with utf-8 encoding (which doesn't
   help on Windows since they still don't use ReadConsoleW and
   WriteConsoleW to communicate with the terminal) and after changing
   the sys.std* streams to the fixed ones and setting readline hook,
   it still doesn't work,

 I don't see why you would expect it to work: either your code is
 bypassing PYTHONIOENCODING=utf-8 processing, and that variable doesn't
 matter, or you're feeding already decoded text *as UTF-8* to your
 module which evidently expects something else (UTF-16LE?).


I'll describe my picture of the situation, which might be terribly wrong.
On Linux, in a typical situation, we have a UTF-8 terminal,
PYTHONIOENCODING=utf-8, GNU readline is used. When the REPL wants input
from a user the tokenizer calls PyOS_Readline, which calls GNU readline.
The user is prompted with ">>> "; during the input he can use autocompletion
and everything, and he enters u'α'. PyOS_Readline returns b"u'\xce\xb1'" (as
char* or something), which is UTF-8 encoded input from the user. The
tokenizer, parser, and evaluator process the input and the result is
u'\u03b1', which is printed as an answer.

In my case I install custom sys.std* objects and a custom readline hook.
Again, the tokenizer calls PyOS_Readline, which calls my readline hook,
which calls sys.stdin.readline(), which returns a Unicode string the user
entered (it was decoded from UTF-16-LE bytes actually). My readline hook
encodes this string to UTF-8 and returns it. So the situation is the same.
The tokenizer gets b"u'\xce\xb1'" as before, but now it results in
u'\xce\xb1'.

Why is the result different? I though that in the first case
PyCF_SOURCE_IS_UTF8 might have been set. And after your suggestion, I
thought that PYTHONIOENCODING=utf-8 is the thing that also sets
PyCF_SOURCE_IS_UTF8.



   so presumably the PyCF_SOURCE_IS_UTF8 is still not set.

 I don't think that flag does what you think it does.  AFAICT from
 looking at the source, that flag gets unconditionally set in the
 execution context for compile, eval, and exec, and it is checked in
 the parser when creating an AST node.  So it looks to me like it
 asserts that the *internal* representation of the program is UTF-8
 *after* transforming the input to an internal representation (doing
 charset decoding, removing comments and line continuations, etc).


I thought it might do what I want because of the behaviour of eval. I
thought that the PyUnicode_AsUTF8String call in eval just encodes the
passed unicode to UTF-8, so the situation looks as follows:
eval(u"u'\u03b1'") -> (b"u'\xce\xb1'", PyCF_SOURCE_IS_UTF8 set) -> u'\u03b1'
eval(u"u'\u03b1'".encode('utf-8')) -> (b"u'\xce\xb1'", PyCF_SOURCE_IS_UTF8
not set) -> u'\xce\xb1'
But of course, my picture might be wrong.


  Well, the received text comes from sys.stdin and its encoding is
   known.

 How?  You keep asserting this.  *You* know, but how are you passing
 that information to *the Python interpreter*?  Guido may have a time
 machine, but nobody claims the Python interpreter is telepathic.


I thought that the Python interpreter knows the input comes from sys.stdin
at least to some extent because in pythonrun.c:PyRun_InteractiveOneObject
the encoding for the tokenizer is inferred from sys.stdin.encoding. But
this is actually the case only in Python 3. So I was wrong.


  Yes. In the latter case, eval has no idea how the bytes given are
   encoded.

 Eval *never* knows how bytes are encoded, not even implicitly.  That's
 one of the important reasons why Python 3 was necessary.  I think you
 know that, but you don't write like you understand the implications
 for your current work, which makes it hard to communicate.


Yes, eval never knows how bytes are encoded. But I meant it in comparison
with the first case where a Unicode string was passed.


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Jim J. Jewett

On Thu Apr 30 21:27:09 CEST 2015, Yury Selivanov replied:


On 2015-04-30 2:41 PM, Jim J. Jewett wrote:

 Bad phrasing on my part.  Is there anything that prevents an
 asynchronous call (or waiting for one) without the async with?

 If so, I'm missing something important.  Either way, I would
 prefer different wording in the PEP.

 Yes, you can't use 'yield from' in __exit__/__enter__
 in current Python.

I tried it in 3.4, and it worked.

I'm not sure it would ever be sensible, but it didn't raise any
errors, and it did run.

What do you mean by "can't use"?


 For coroutines in PEP 492:
 __await__ = __anext__ is the same as __call__ = __next__
 __await__ = __aiter__ is the same as __call__ = __iter__

 That tells me that it will be OK sometimes, but will usually
 be either a mistake or an API problem -- and it explains why.

 Please put those 3 lines in the PEP.

 There is a line like that:
 https://www.python.org/dev/peps/pep-0492/#await-expression
 Look for the "Also, please note..." line.

It was from reading the PEP that the question came up, and I
just reread that section.

Having those 3 explicit lines goes a long way towards explaining
how an asyncio coroutine differs from a regular callable, in a
way that the existing PEP doesn't, at least for me.


 This is OK. The point is that you can use 'await log' in
 __aenter__.  If you don't need awaits in __aenter__ you can
 use them in __aexit__. If you don't need them there too,
 then just define a regular context manager.

 Is it an error to use async with on a regular context manager?
 If so, why?  If it is just that doing so could be misleading,
 then what about async with mgr1, mgr2, mgr3 -- is it enough
 that one of the three might suspend itself?

 'with' requires an object with __enter__ and __exit__

 'async with' requires an object with __aenter__ and __aexit__

 You can have an object that implements both interfaces.

I'm still not seeing why 'with' (let alone 'async with') can't
just run whichever one it finds.  'async with' won't actually let
the BLOCK run until the future is resolved.  So if a context
manager only supplies __enter__ instead of __aenter__, then at most
you've lost a chance to switch tasks while waiting -- and that is no
worse than if the context manager just happened to be really slow.
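For context, the "object that implements both interfaces" mentioned above might look like the following. This is a sketch using the syntax the PEP proposes; the class and names are invented here:

```python
import asyncio

class Dual:
    # Implements both the synchronous and the asynchronous
    # context manager protocols side by side.
    def __enter__(self):
        return 'sync'

    def __exit__(self, *exc):
        return False

    async def __aenter__(self):
        await asyncio.sleep(0)   # stand-in for real awaitable work
        return 'async'

    async def __aexit__(self, *exc):
        return False

with Dual() as mode:
    print(mode)                  # sync

async def main():
    async with Dual() as mode:
        return mode

print(asyncio.run(main()))       # async
```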


 For debugging this kind of mistakes there is a special debug mode in

 Is the intent to do anything more than preface execution with:

 import asyncio.coroutines
 asyncio.coroutines._DEBUG = True

 This won't work, unfortunately.  You need to set the
 debug flag *before* you import asyncio package (otherwise
 we would have an unavoidable performance cost for debug
 features).  If you enable it after you import asyncio,
 then asyncio itself won't be instrumented.  Please
 see the implementation of asyncio.coroutine for details.

Why does asyncio itself have to be wrapped?  Is that really something
normal developers need to debug, or is it only for developing the
stdlib itself?  If it is only for developing the stdlib, then I
would rather see workarounds like shoving _DEBUG into builtins
when needed, as opposed to adding multiple attributes to sys.


-jJ

--

If there are still threading problems with my replies, please
email me with details, so that I can try to resolve them.  -jJ


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Jim J. Jewett
On Fri, May 1, 2015 at 4:10 PM, Guido van Rossum gu...@python.org wrote:
 On Fri, May 1, 2015 at 12:48 PM, Jim J. Jewett jimjjew...@gmail.com wrote:

 If there are more tasks than executors, yield is a way to release your
 current executor and go to the back of the line.  I'm pretty sure I
 saw several examples of that style back when coroutines were first
 discussed.

 Could you dig up the actual references? It seems rather odd to me to mix
 coroutines and threads this way.

I can try in a few days, but the primary case (and perhaps the only
one with running code) was for n_executors=1.  They assumed there
would only be a single thread, or at least only one that was really
important to the event loop -- the pattern was often described as an
alternative to relying on threads.

FWIW, Ron Adam's "yielding" in
https://mail.python.org/pipermail/python-dev/2015-May/139762.html is
in the same spirit.

You replied it would be better if that were done by calling some
method on the scheduling loop, but that isn't any more standard, and
the yielding function is simple enough that it will be reinvented.
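For reference, a helper in that spirit might look like the following in modern asyncio terms. This is a reconstruction with invented names, not Ron Adam's actual code: a coroutine that suspends exactly once so the event loop can run other ready tasks.

```python
import asyncio

async def yielding():
    # Suspend once, letting the loop run other ready tasks, then resume;
    # asyncio.sleep(0) is the idiomatic way to yield control voluntarily.
    await asyncio.sleep(0)

async def worker(name, log):
    for i in range(2):
        log.append((name, i))
        await yielding()         # give other tasks a turn

async def main():
    log = []
    await asyncio.gather(worker('a', log), worker('b', log))
    return log

print(asyncio.run(main()))       # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```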

-jJ


Re: [Python-Dev] PEP 492: What is the real goal?

2015-05-01 Thread Yury Selivanov



On 2015-05-01 5:37 PM, Jim J. Jewett wrote:

On Thu Apr 30 21:27:09 CEST 2015, Yury Selivanov replied:


On 2015-04-30 2:41 PM, Jim J. Jewett wrote:


Bad phrasing on my part.  Is there anything that prevents an
asynchronous call (or waiting for one) without the async with?
If so, I'm missing something important.  Either way, I would
prefer different wording in the PEP.

Yes, you can't use 'yield from' in __exit__/__enter__
in current Python.

I tried it in 3.4, and it worked.

I'm not sure it would ever be sensible, but it didn't raise any
errors, and it did run.

What do you mean by "can't use"?


It probably executed without errors, but it didn't run the
generators.


class Foo:
    def __enter__(self):
        yield from asyncio.sleep(0)
        print('spam')

with Foo(): pass  # 'spam' won't ever be printed.
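A standalone variant of this pitfall (with an added __exit__ so the 'with' statement is satisfied, and a plain yield instead of asyncio) shows what actually happens: __enter__ hands back an unstarted generator object that nothing ever iterates.

```python
import types

class Foo:
    def __enter__(self):
        yield                # makes __enter__ a generator function
        print('spam')        # unreachable: nothing ever advances the generator

    def __exit__(self, *exc):
        return False

with Foo():
    pass                     # runs without error, but prints nothing

# What 'with' actually received from __enter__:
print(isinstance(Foo().__enter__(), types.GeneratorType))   # True
```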





For coroutines in PEP 492:
__await__ = __anext__ is the same as __call__ = __next__
__await__ = __aiter__ is the same as __call__ = __iter__

That tells me that it will be OK sometimes, but will usually
be either a mistake or an API problem -- and it explains why.
Please put those 3 lines in the PEP.

There is a line like that:
https://www.python.org/dev/peps/pep-0492/#await-expression
Look for the "Also, please note..." line.

It was from reading the PEP that the question came up, and I
just reread that section.

Having those 3 explicit lines goes a long way towards explaining
how an asyncio coroutine differs from a regular callable, in a
way that the existing PEP doesn't, at least for me.



This is OK. The point is that you can use 'await log' in
__aenter__.  If you don't need awaits in __aenter__ you can
use them in __aexit__. If you don't need them there too,
then just define a regular context manager.

Is it an error to use async with on a regular context manager?
If so, why?  If it is just that doing so could be misleading,
then what about async with mgr1, mgr2, mgr3 -- is it enough
that one of the three might suspend itself?

'with' requires an object with __enter__ and __exit__
'async with' requires an object with __aenter__ and __aexit__
You can have an object that implements both interfaces.

I'm still not seeing why 'with' (let alone 'async with') can't
just run whichever one it finds.  'async with' won't actually let
the BLOCK run until the future is resolved.  So if a context
manager only supplies __enter__ instead of __aenter__, then at most
you've lost a chance to switch tasks while waiting -- and that is no
worse than if the context manager just happened to be really slow.


let's say you have a function:

def foo():
    with Ctx(): pass


if Ctx.__enter__ is a generator/coroutine, then
foo becomes a generator/coroutine (otherwise how
(and to what) would you yield from/await on __enter__?).
And then suddenly calling 'foo' doesn't do anything
(it will return you a generator/coroutine object).

This isn't transparent or even remotely
understandable.






For debugging this kind of mistakes there is a special debug mode in

Is the intent to do anything more than preface execution with:
import asyncio.coroutines
asyncio.coroutines._DEBUG = True

This won't work, unfortunately.  You need to set the
debug flag *before* you import asyncio package (otherwise
we would have an unavoidable performance cost for debug
features).  If you enable it after you import asyncio,
then asyncio itself won't be instrumented.  Please
see the implementation of asyncio.coroutine for details.

Why does asyncio itself have to be wrapped?  Is that really something
normal developers need to debug, or is it only for developing the
stdlib itself?  If it is only for developing the stdlib, then I
would rather see workarounds like shoving _DEBUG into builtins
when needed, as opposed to adding multiple attributes to sys.



Yes, normal developers need asyncio to be instrumented,
otherwise you won't know what you did wrong when you
called some asyncio code without 'await' for example.
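In today's asyncio the supported way to meet the "before you touch asyncio" constraint is the documented PYTHONASYNCIODEBUG environment variable (or running with -X dev). A small sketch; current CPython reads the variable when an event loop is created:

```python
import os

# Must be set before asyncio reads its configuration.
os.environ['PYTHONASYNCIODEBUG'] = '1'

import asyncio

loop = asyncio.new_event_loop()
print(loop.get_debug())   # True, unless the interpreter ignores the environment
loop.close()
```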


Yury


Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Stefan Behnel
Yury Selivanov schrieb am 30.04.2015 um 03:30:
 Asynchronous Iterators and async for
------------------------------------
 
 An *asynchronous iterable* is able to call asynchronous code in its
 *iter* implementation, and *asynchronous iterator* can call
 asynchronous code in its *next* method.  To support asynchronous
 iteration:
 
 1. An object must implement an  ``__aiter__`` method returning an
*awaitable* resulting in an *asynchronous iterator object*.
 
 2. An *asynchronous iterator object* must implement an ``__anext__``
method returning an *awaitable*.
 
 3. To stop iteration ``__anext__`` must raise a ``StopAsyncIteration``
exception.

What this section does not explain, AFAICT, nor the section on design
considerations, is why the iterator protocol needs to be duplicated
entirely. Can't we just assume (or even validate) that any 'regular'
iterator returned from __aiter__() (as opposed to __iter__()) properly
obeys the new protocol? Why additionally duplicate __next__() and
StopIteration?

ISTM that all this really depends on is that __next__() returns an
awaitable. Renaming the method doesn't help with that guarantee.
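For reference, the duplicated protocol under discussion reads like this in code. A simplified sketch, using the final form the protocol took in shipping Python (where __aiter__ ended up returning the iterator itself rather than an awaitable):

```python
import asyncio

class Ticker:
    def __init__(self, n):
        self.i, self.n = 0, n

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.i >= self.n:
            raise StopAsyncIteration     # the duplicated StopIteration
        self.i += 1
        await asyncio.sleep(0)           # stand-in for real async work
        return self.i

async def main():
    out = []
    async for x in Ticker(3):            # drives __aiter__/__anext__
        out.append(x)
    return out

print(asyncio.run(main()))               # [1, 2, 3]
```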

Stefan




Re: [Python-Dev] PEP 492: async/await in Python; version 4

2015-05-01 Thread Stefan Behnel
Yury Selivanov schrieb am 30.04.2015 um 03:30:
 1. Terminology:
 - *native coroutine* term is used for async def functions.

When I read "native", I think of native (binary) code. So "native
coroutine" sounds like it's implemented in some compiled low-level
language. That might be the case (certainly in the CPython world), but it's
not related to this PEP nor its set of examples.


 We should discuss how we will name new 'async def' coroutines in
 Python Documentation if the PEP is accepted.

Well, it doesn't hurt to avoid obvious misleading terminology upfront.

Stefan




Re: [Python-Dev] PEP 492 quibble and request

2015-05-01 Thread Steven D'Aprano
On Wed, Apr 29, 2015 at 06:12:37PM -0700, Guido van Rossum wrote:
 On Wed, Apr 29, 2015 at 5:59 PM, Nick Coghlan ncogh...@gmail.com wrote:
 
  On 30 April 2015 at 10:21, Ethan Furman et...@stoneleaf.us wrote:
   From the PEP:
  
   Why not a __future__ import
  
   __future__ imports are inconvenient and easy to forget to add.
  
   That is a horrible rationale for not using an import.  By that logic we
   should have everything in built-ins.  ;)
 
 
 This response is silly. The point is not against import but against
 __future__. A __future__ import definitely is inconvenient -- few people I
 know could even recite the correct constraints on their placement.

Are you talking about actual Python programmers, or people who dabble 
with the odd Python script now and again? I'm kinda shocked if it's the 
first.

It's not a complex rule: the __future__ import must be the first line of 
actual executable code in the file, so it can come after any encoding 
cookie, module docstring, comments and blank lines, but before any other 
code. The only part I didn't remember was that you can have multiple 
__future__ imports, I thought they all had to be on one line. (Nice to 
learn something new!)
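Spelled out as a complete file, the placement rule looks like this (a minimal sketch; it also runs unchanged on Python 3):

```python
# -*- coding: utf-8 -*-
"""A module docstring may precede the __future__ import."""
# So may comments and blank lines.

from __future__ import division  # but it must be the first real statement

print(1 / 2)  # 0.5 even on Python 2, thanks to the import
```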



[...]
  'as' went through the "not really a keyword" path, and
  it's a recipe for complexity in the code generation toolchain and
  general quirkiness as things behave in unexpected ways.
 
 
 I don't recall that -- but it was a really long time ago so I may
 misremember (did we even have __future__ at the time?).

I have a memory of much rejoicing when "as" was made a keyword, and an 
emphatic "we're never going to do that again!" about semi-keywords. I've 
tried searching for the relevant post(s), but cannot find anything. 
Maybe I imagined it?

But I do have Python 2.4 available, when we could write lovely code like 
this:

>>> import math as as
>>> as
<module 'math' from '/usr/lib/python2.4/lib-dynload/mathmodule.so'>

I'm definitely not looking forward to anything like that again.



-- 
Steve


Re: [Python-Dev] PEP 492 quibble and request

2015-05-01 Thread Steven D'Aprano
On Wed, Apr 29, 2015 at 07:31:22PM -0700, Guido van Rossum wrote:

 Ah, but here's the other clever bit: it's only interpreted this way
 *inside* a function declared with 'async def'. Outside such functions,
 'await' is not a keyword, so that grammar rule doesn't trigger. (Kind of
 similar to the way that the print_function __future__ disables the
 keyword-ness of 'print', except here it's toggled on or off depending on
 whether the nearest surrounding scope is 'async def' or not. The PEP could
 probably be clearer about this; it's all hidden in the Transition Plan
 section.)

You mean we could write code like this?

def await(x):
...


if condition:
async def spam():
await (eggs or cheese)
else:
def spam():
await(eggs or cheese)


I must admit that's kind of cool, but I'm sure I'd regret it.



-- 
Steve


Re: [Python-Dev] PEP 492 quibble and request

2015-05-01 Thread Paul Moore
On 1 May 2015 at 12:54, Steven D'Aprano st...@pearwood.info wrote:
 You mean we could write code like this?

 def await(x):
 ...


 if condition:
 async def spam():
 await (eggs or cheese)
 else:
 def spam():
 await(eggs or cheese)


 I must admit that's kind of cool, but I'm sure I'd regret it.

You could, but there are people with buckets of tar and feathers
waiting for you if you do :-)
Paul