On Wed, Aug 22, 2012 at 1:40 PM, Dennis Jacobfeuerborn
<djacobfeuerb...@gmail.com> wrote:
> I was thinking about something like that but the issue is that this really 
> only works when you don't do any actual blocking work. I may be able to get 
> around the sleep() but then I have to fetch the URL or do some other work 
> that might block for a while so the get() trick doesn't work.

At a lower level, it is possible to poll on both the pipe and the
socket simultaneously.  At this point though you might want to start
looking at an asynchronous or event-driven framework like Twisted or
gevent.
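To make the lower-level option concrete, here is a minimal sketch (Unix only) of waiting on a multiprocessing Pipe and a socket at the same time; both expose fileno(), so select() accepts them together. The socketpair and the message strings are illustrative assumptions, not from your code:

```python
import multiprocessing
import select
import socket

parent_conn, child_conn = multiprocessing.Pipe()
sock_a, sock_b = socket.socketpair()

# Stand-ins for a child process and a remote peer writing to us.
child_conn.send("from pipe")
sock_b.sendall(b"from socket")

messages = []
# select() can mix Connection objects and sockets because both
# implement fileno().
readable, _, _ = select.select([parent_conn, sock_a], [], [], 1.0)
for obj in readable:
    if obj is parent_conn:
        messages.append(obj.recv())        # Connection: recv()
    else:
        messages.append(obj.recv(1024).decode())  # socket: recv(bufsize)
print(messages)
```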

> Also the child process might not be able to deal with such an exit command at 
> all for one reason or another so the only safe way to get rid of it is for 
> the parent to kill it.

I think you mean that it is the most "reliable" way.  In general, the
only "safe" way to cause a process to exit is the cooperative
approach, because it may otherwise leave external resources such as
file data in an unexpected state that could cause problems later.
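For comparison, the cooperative approach can be as simple as a sentinel message that tells the child to finish up and return on its own. This is only a sketch; the sentinel value and worker are illustrative assumptions:

```python
import multiprocessing

STOP = "stop"  # sentinel message (any agreed-upon value works)

def worker(q):
    while True:
        item = q.get()
        if item == STOP:
            break  # flush files, close sockets, etc. before exiting
        # ... handle a normal work item here ...

ctx = multiprocessing.get_context("fork")  # Unix, per the discussion
q = ctx.Queue()
p = ctx.Process(target=worker, args=(q,))
p.start()
q.put(STOP)  # ask the child to exit instead of kill()ing it
p.join()
print(p.exitcode)  # 0 indicates a voluntary, clean exit
```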

> The better option would be to not use a shared queue for communication and 
> instead use only dedicated pipes/queues for each child process but there 
> doesn't seem to be a way to wait for a message from multiple queues/pipes. If 
> that were the case then I could simply kill the child and get rid of the 
> respective pipes/queues without affecting the other processes or 
> communication channels.

Assuming that you're using a Unix system:

from select import select

while True:
    ready, _, _ = select(pipes, [], [], timeout)
    if not ready:
        pass  # handle the timeout
    else:
        for pipe in ready:
            # Pipe connections are read with recv(), not get()
            message = pipe.recv()
            # process the message
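Filled out into something runnable, with one dedicated Pipe per child, that loop might look like the following. The three-worker setup and the message strings are illustrative assumptions:

```python
import multiprocessing
import select

ctx = multiprocessing.get_context("fork")  # Unix-only, as assumed above

def worker(conn, ident):
    conn.send(f"hello from {ident}")
    conn.close()

pipes = []
for i in range(3):
    parent_end, child_end = ctx.Pipe()
    ctx.Process(target=worker, args=(child_end, i)).start()
    child_end.close()  # parent keeps only its end, so EOF is visible
    pipes.append(parent_end)

messages = []
while pipes:
    ready, _, _ = select.select(pipes, [], [], 5.0)
    if not ready:
        break  # timeout: no child had anything to say
    for conn in ready:
        try:
            messages.append(conn.recv())
        except EOFError:
            pipes.remove(conn)  # this child is done; drop only its pipe
print(sorted(messages))
```

Note that killing one child only invalidates its own entry in `pipes`; the other connections keep working, which is exactly the property you were after.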
-- 
http://mail.python.org/mailman/listinfo/python-list
