We have an application that involves submitting hundreds to thousands of jobs 
to a shared computing resource, and we're using asyncio to do so because it 
imposes far less overhead than threading or multiprocessing for the 
bookkeeping required to keep track of all these jobs. It makes extensive use 
of asyncio.create_subprocess_exec(). This was developed mostly in Python 3.9.7.
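For context, the launch pattern is essentially the following (the echo
commands are just stand-ins for our real jobs):

```python
import asyncio

async def run_job(cmd):
    # Launch one job as a subprocess and collect its output.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, _ = await proc.communicate()
    return proc.returncode, stdout

async def run_all(commands):
    # asyncio.gather() does the bookkeeping: it tracks completion
    # of every job concurrently within one event loop.
    return await asyncio.gather(*(run_job(c) for c in commands))

results = asyncio.run(run_all([["echo", "job1"], ["echo", "job2"]]))
```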

Normally we know ahead of time all the jobs that need to be run, and a single 
call to asyncio.run() accommodates that. However, in this new case we need to 
submit a few hundred jobs, review their results, and compose many more jobs 
based on them. That means a second, separate call to asyncio.run() is 
necessary.
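The two-phase flow I'm after looks roughly like this (again with trivial echo
jobs standing in for the real ones; a toy case like this may well complete,
the hang shows up under our real workload):

```python
import asyncio

async def run_job(cmd):
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    return out

async def batch(cmds):
    return await asyncio.gather(*(run_job(c) for c in cmds))

# Phase 1: the jobs we know up front.
first = asyncio.run(batch([["echo", "a"]]))

# Review the results and compose follow-up jobs from them,
# which forces a second call to asyncio.run().
followups = [["echo", out.decode().strip() + "-next"] for out in first]
second = asyncio.run(batch(followups))
```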

I have tried calling our app's submission routine twice, and during the 
second iteration things hang indefinitely. Processes get launched, but 
eventually it stops reporting job completions.

I have added debug=True to the asyncio.run() keyword args, but I'm not sure 
what I should be looking for that might tell me what's wrong. It may be 
something I'm doing, but since the docs are ambiguous about this, it could 
also be a fundamental limitation of asyncio.
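For the record, this is roughly how I'm enabling debug mode (the coroutine
here is a stand-in for our real entry point):

```python
import asyncio
import logging

# Debug-mode diagnostics (slow callbacks, never-awaited coroutines)
# are emitted through the "asyncio" logger, so logging has to be
# configured for them to show up at all.
logging.basicConfig(level=logging.DEBUG)

async def main():
    # The running loop confirms that debug mode is actually on.
    return asyncio.get_running_loop().get_debug()

is_debug = asyncio.run(main(), debug=True)
```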

Is what I'm trying to do impossible? I would hate to have to rig up some sort 
of crazy async/sync queue system to feed jobs dynamically, all because of 
this problem with asyncio.run().

Thanks,

-Clint
-- 
https://mail.python.org/mailman/listinfo/python-list