On 06.07.2015 03:41, Nick Coghlan wrote:
> That said, I think there's definitely value in providing a very simple
> answer to the "how do I make a blocking call from a coroutine?"
> question, so I filed an RFE to add asyncio.blocking_call:
> http://bugs.python.org/issue24571

Nice step forward, Nick. Thanks a lot.

> I'm less convinced of the value of "asyncio.wait_for_result()", so I
> haven't filed an RFE for that one.

I have done that for you, because I feel people need a convenient tool to *bridge both worlds* in either direction: http://bugs.python.org/issue24578
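
To make the two directions concrete, here is a minimal sketch of the closest spellings I am aware of today (Python 3.5 syntax; blocking_read and the file name are just placeholders): loop.run_in_executor() to call blocking code from a coroutine, and loop.run_until_complete() to get a coroutine's result from blocking code.

import asyncio

def blocking_read(path):
    # an ordinary blocking function (placeholder)
    with open(path) as f:
        return f.read()

async def from_coroutine(loop):
    # coroutine -> blocking world: run the call in the default
    # thread pool instead of stalling the event loop
    return await loop.run_in_executor(None, blocking_read, 'big.file')

def from_blocking_code():
    # blocking world -> coroutine: drive the loop until the result is ready
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(from_coroutine(loop))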


There is another issue that has come to my mind once in a while, but I kept forgetting to post it here:

How are mature projects supposed to make the transition to asyncio when they see performance opportunities in using it?

We have several million lines of code. I had actually imagined we could simply drop in an 'await' here and there in order to benefit from asyncio's power. As it stands, we would instead need to write all sorts of wrappers (unnecessary, from my perspective) to retain existing functionality while leveraging asyncio's strengths without breaking anything. That is, in fact, the main reason I started this discussion: I feel this transition is mostly impossible, very costly, or only feasible for new code (where it remains to be seen whether it fits into the existing codebase).

I also feel I am missing part of the bigger picture, and I am not sure whether something like this is planned for the future. But from my perspective, in order to leverage asyncio's power you need at least two coroutines running at the same time, right? So, to get things running, I still need some sort of get_event_loop() into which I can put my top-level coroutines.
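
To illustrate what I mean, the smallest complete setup I can come up with looks like this (a sketch; the sleeps merely stand in for real work):

import asyncio

async def work(name, seconds):
    await asyncio.sleep(seconds)  # stands in for real asynchronous work
    return name

loop = asyncio.get_event_loop()
try:
    # two top-level coroutines make progress concurrently in one loop
    results = loop.run_until_complete(
        asyncio.gather(work('big', 1), work('huge', 2)))
    print(results)  # ['big', 'huge'] after ~2 seconds, not ~3
finally:
    loop.close()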

Assume my venerable business functionality:

def business_old():
    content1 = open('big.file').read()   # blocks until finished
    content2 = open('huge.file').read()  # blocks until finished
    return content1 + content2

I would like to rewrite/amend it to work asynchronously with minimal effort, along these lines:

def business_new():
    content1 = fork open('big.file').read()   # wraps up the calls into awaitables
    content2 = fork open('huge.file').read()  # and puts them into the event loop
    return content1 + content2                # variables are used => await evaluation
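
The closest real approximation of that hypothetical 'fork' I can think of would be to schedule the blocking calls on the loop's thread pool and keep the returned futures around until the values are needed (a sketch, again assuming Python 3.5 syntax; 'fork' itself is made up):

import asyncio

async def business_new():
    loop = asyncio.get_event_loop()
    # 'fork': both blocking reads start in the thread pool right away
    fut1 = loop.run_in_executor(None, lambda: open('big.file').read())
    fut2 = loop.run_in_executor(None, lambda: open('huge.file').read())
    # evaluation is deferred until the values are actually used
    return (await fut1) + (await fut2)

Even so, the hypothetical spelling above is what I would actually like to write.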

I might have missed something but I think you get my point.

Correct me if I am wrong, but inferring from the example given in PEP 492, we would currently need to do the following:

def business_new_2():
    content1 = open('big.file').read()  # get us two awaitables/futures
    content2 = open('huge.file').read() # ...

    # somehow the loop magic
    loop = asyncio.get_event_loop()
    loop.run_until_complete(content1)
    loop.run_until_complete(content2)
    try:
        loop.run_forever()  # might be something different
    finally:
        loop.close()
    return content1.result() + content2.result()

I certainly do not want to put that into our codebase, especially since this kind of code block could occur many times, at any level, in various functions.
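
That said, if I read the docs correctly, the boilerplate can at least collapse into a single run_until_complete() at the top level (a sketch under the same assumptions as above):

import asyncio

def business_new_2():
    loop = asyncio.get_event_loop()

    def read(path):
        # plain blocking read, executed in the loop's thread pool
        with open(path) as f:
            return f.read()

    # start both reads concurrently and wait for both results
    futures = asyncio.gather(loop.run_in_executor(None, read, 'big.file'),
                             loop.run_in_executor(None, read, 'huge.file'))
    content1, content2 = loop.run_until_complete(futures)
    return content1 + content2

But even in this form it is a per-call-site dance I would rather not repeat throughout millions of lines of code.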

Regards,
Sven

