Sounds like you're breaking ground and combining things that haven't been tested together before. Some comments:
- Setting the default executor to a ProcessPoolExecutor feels like a bad idea -- it would mean that every time you connect to a remote host the address lookup is done in that executor (that's mainly why there is a default executor at all -- the API to set it mostly exists so you can configure the number of threads). Instead, I would just pass an explicit executor to run_in_executor().

- Looks like the signal handler is somehow inherited by the subprocess created for the process pool? Otherwise I'm not sure how to explain that the sleep(1000) returns immediately but doesn't raise an exception -- that's what happens with sleep() when a signal handler is called, but not when the handler raises an exception or the signal's default action is set (SIG_DFL) or it is ignored (SIG_IGN).

- I'm not sure exactly where the RuntimeError comes from. It's possible that this happens during final program GC. More print statements are in order.

- Do you know how far ask_exit() made it? I'd like to see more prints there too.

On Wed, Mar 26, 2014 at 9:00 AM, Giampaolo Rodola' <[email protected]> wrote:

> Hello there,
> according to the asyncio docs this is the correct way to handle
> SIGINT/SIGTERM signals in order to "cleanly" shut down the IO loop:
>
> http://docs.python.org/dev/library/asyncio-eventloop.html#example-set-signal-handlers-for-sigint-and-sigterm
>
> This worked well for me as long as I didn't introduce executors.
> Note: I expressly decided to use ProcessPoolExecutor instead of
> ThreadPoolExecutor in order to be able to terminate workers and exit sooner:
>
> import asyncio
> import functools
> import time
> import concurrent.futures
> import signal
>
> loop = asyncio.get_event_loop()
> executor = concurrent.futures.ProcessPoolExecutor()
>
> def long_running_fun():
>     for x in range(5):
>         print("loop %s" % x)
>         time.sleep(1000)
>
> @asyncio.coroutine
> def infinite_loop():
>     while True:
>         try:
>             fut = loop.run_in_executor(None, long_running_fun)
>             yield from asyncio.wait_for(fut, None)
>         finally:
>             yield from asyncio.sleep(1)
>
> def ask_exit(signame):
>     print("got signal %s: exit" % signame)
>     loop.stop()
>     executor.shutdown()
>
> def main():
>     loop.set_default_executor(executor)
>     asyncio.async(infinite_loop())
>     for signame in ('SIGINT', 'SIGTERM'):
>         loop.add_signal_handler(getattr(signal, signame),
>                                 functools.partial(ask_exit, signame))
>     loop.run_forever()
>
> if __name__ == '__main__':
>     main()
>
> The problem with this code is that every time I hit CTRL+C "time.sleep()"
> returns immediately and the "for" loop keeps looping until it's exhausted.
> This is the output:
>
> $ python3.4 foo.py
> loop 0
> ^Cloop 1
> got signal SIGINT: exit
> ^Cloop 2
> ^Cloop 3
> ^Cloop 4
> ^CException ignored in: <generator object infinite_loop at 0x7fb0c0760cf0>
> RuntimeError: generator ignored GeneratorExit
> $
>
> I've also tried other solutions, such as terminating the processes
> returned by multiprocessing.active_children(), but that has the same
> effect. Apparently the only effective strategy is to use SIGKILL.
> Basically I'm looking for a way to cleanly shut down the IO loop and all
> its pending workers, and if there's a "blessed" strategy for doing that it
> would probably make sense to mention it in the docs, because it's not
> obvious.
>
> --
> Giampaolo - http://grodola.blogspot.com

--
--Guido van Rossum (python.org/~guido)
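[Editor's note] To make the first suggestion above concrete, here is a minimal sketch of passing an explicit executor to run_in_executor() instead of installing it with set_default_executor(). The square()/compute() names are hypothetical stand-ins for long_running_fun(), and the syntax is the modern async/await form rather than the 3.4-era @asyncio.coroutine / yield from style used in the thread:

```python
import asyncio
import concurrent.futures

def square(x):
    # Hypothetical stand-in for a CPU-bound task.
    return x * x

async def compute(pool, x):
    loop = asyncio.get_running_loop()
    # Pass the pool explicitly instead of making it the default: the
    # loop's default executor stays a thread pool, so internal calls
    # such as the getaddrinfo() address lookup are not routed through
    # the process pool.
    return await loop.run_in_executor(pool, square, x)

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor() as pool:
        print(asyncio.run(compute(pool, 6)))  # prints 36
```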
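[Editor's note] On the signal-inheritance point: the pool's worker processes are forked after the parent installs its SIGINT handler, so they inherit the parent's signal disposition, which is consistent with the sleep(1000) returning early. One common workaround, sketched below (not from the thread; ignore_sigint()/work() are hypothetical names, and the initializer= parameter requires Python 3.7+), is to reset SIGINT to SIG_IGN in each worker so Ctrl+C interrupts only the parent:

```python
import concurrent.futures
import signal

def ignore_sigint():
    # Runs once in each worker process: drop the inherited SIGINT
    # handler so Ctrl+C is delivered only to the parent process.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def work(x):
    # Hypothetical worker task.
    return x + 1

if __name__ == '__main__':
    # initializer= exists on ProcessPoolExecutor since Python 3.7.
    with concurrent.futures.ProcessPoolExecutor(
            initializer=ignore_sigint) as pool:
        print(list(pool.map(work, range(3))))  # prints [1, 2, 3]
```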
