Nick Coghlan added the comment:
As a minor note, we didn't use call_in_thread for the event loop method because,
depending on the executor, it may effectively be "call_in_process". I've also
merged issue 24578 back into this one as Guido suggested.
I think the example Sven gave on the mailing list provides a better rationale
here than my original one of safely making blocking calls from an asyncio
coroutine (as I agree that if you're at the point of writing your own
coroutines, dispatching a blocking call to the executor shouldn't be a big deal).
Sven's scenario instead relates to an application that is still primarily
synchronous, but has the occasional pair of operations that it would be
convenient to factor out as parallel operations without needing to
significantly refactor the code. For example:
    def load_and_process_data():
        data1 = load_remote_data_set1()
        data2 = load_remote_data_set2()
        return process_data(data1, data2)
Trying yet another colour for the bikeshed (before I change the issue title
again), imagine if that could be written:
    def load_and_process_data():
        future1 = asyncio.background_call(load_remote_data_set1)
        future2 = asyncio.background_call(load_remote_data_set2)
        data1 = asyncio.wait_for_result(future1)
        data2 = asyncio.wait_for_result(future2)
        return process_data(data1, data2)
The operating model of interest here would be the one where you're still
writing a primarily synchronous (and perhaps even single-threaded) application
(rather than an event-driven one), but you'd like to occasionally do some
background processing while the foreground thread continues with other tasks.
----------
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue24571>
_______________________________________