Re: Reassign or discard Popen().stdout from a server process
On Thu, 10 Feb 2011 08:35:24 +, John O'Hagan wrote:

>>> But I'm still a little curious as to why even unsuccessfully
>>> attempting to reassign stdout seems to stop the pipe buffer from
>>> filling up.
>>
>> It doesn't. If the server continues to run, then it's ignoring/handling
>> both SIGPIPE and the EPIPE error. Either that, or another process has
>> the read end of the pipe open (so no SIGPIPE/EPIPE), and the server is
>> using non-blocking I/O or select() so that it doesn't block writing its
>> diagnostic messages.
>
> The server fails with stdout=PIPE if I don't keep reading it, but
> doesn't fail if I do stdout=anything (I've tried files, strings,
> integers, and None) soon after starting the process, without any other
> changes. How is that consistent with either of the above conditions?
> I'm sure you're right, I just don't understand.

What do you mean by "fail"? I wouldn't be surprised if it hung, due to
the write() on stdout blocking.

If you reassign the .stdout member, the existing file object is likely
to become unreferenced, get garbage collected, and close the pipe, which
would prevent the server from blocking (the write() will fail rather
than blocking).

If the server puts the pipe into non-blocking mode, write() will fail
with EAGAIN if you don't read it but with EPIPE if you close the pipe.
The server may handle these cases differently.

-- 
http://mail.python.org/mailman/listinfo/python-list
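The dropped-reference effect described above can be demonstrated directly. A minimal sketch, assuming CPython (which closes the underlying fd as soon as the file object's last reference goes away); the child script is an invented stand-in for the server:

```python
import subprocess
import sys

# Hypothetical stand-in for the server: it writes to stdout until the
# write fails, then reports EPIPE via its exit status.
child_src = r"""
import errno, os, sys
try:
    while True:
        os.write(1, b"x" * 1024)   # raw write, no buffering surprises
except OSError as e:
    sys.exit(42 if e.errno == errno.EPIPE else 1)
"""

proc = subprocess.Popen([sys.executable, "-c", child_src],
                        stdout=subprocess.PIPE)

# Rebinding the attribute drops the only reference to the read end of
# the pipe; CPython collects the file object and closes the fd, so the
# child's next write fails with EPIPE instead of blocking forever.
# (CPython ignores SIGPIPE by default, so the child sees the error as
# an OSError rather than dying from the signal.)
proc.stdout = None
rc = proc.wait()
print(rc)  # 42: the child saw EPIPE rather than blocking
```

This is also why the "ineffective" reassignment appeared to help: the assignment itself does nothing useful, but the side-effect of closing the old pipe object changes the server's behaviour.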
Re: Reassign or discard Popen().stdout from a server process
On Fri, 11 Feb 2011, Nobody wrote:

> On Thu, 10 Feb 2011 08:35:24 +, John O'Hagan wrote:
>>>> But I'm still a little curious as to why even unsuccessfully
>>>> attempting to reassign stdout seems to stop the pipe buffer from
>>>> filling up.
>>>
>>> It doesn't. If the server continues to run, then it's
>>> ignoring/handling both SIGPIPE and the EPIPE error. Either that, or
>>> another process has the read end of the pipe open (so no
>>> SIGPIPE/EPIPE), and the server is using non-blocking I/O or select()
>>> so that it doesn't block writing its diagnostic messages.
>>
>> The server fails with stdout=PIPE if I don't keep reading it, but
>> doesn't fail if I do stdout=anything (I've tried files, strings,
>> integers, and None) soon after starting the process, without any other
>> changes. How is that consistent with either of the above conditions?
>> I'm sure you're right, I just don't understand.
>
> What do you mean by "fail"? I wouldn't be surprised if it hung, due to
> the write() on stdout blocking.
>
> If you reassign the .stdout member, the existing file object is likely
> to become unreferenced, get garbage collected, and close the pipe,
> which would prevent the server from blocking (the write() will fail
> rather than blocking).
>
> If the server puts the pipe into non-blocking mode, write() will fail
> with EAGAIN if you don't read it but with EPIPE if you close the pipe.
> The server may handle these cases differently.

By "fail" I mean that the server, which is the Fluidsynth soundfont
rendering program, stops producing sound in a way consistent with the
blocked write() as you describe. It continues to read stdin; in fact,
Ctrl+C-ing out of the block produces all the queued sounds at once.

What I didn't realise was that the (ineffective) reassignment of stdout
has the side-effect of closing it by dereferencing it, as you explain
above. I asked on the Fluidsynth list, and it currently ignores whether
the pipe it's writing to has been closed.

All makes sense now, thanks.

John
Re: Reassign or discard Popen().stdout from a server process
On Wed, 9 Feb 2011, Nobody wrote:

> On Fri, 04 Feb 2011 15:48:55 +, John O'Hagan wrote:
>> But I'm still a little curious as to why even unsuccessfully
>> attempting to reassign stdout seems to stop the pipe buffer from
>> filling up.
>
> It doesn't. If the server continues to run, then it's ignoring/handling
> both SIGPIPE and the EPIPE error. Either that, or another process has
> the read end of the pipe open (so no SIGPIPE/EPIPE), and the server is
> using non-blocking I/O or select() so that it doesn't block writing its
> diagnostic messages.

The server fails with stdout=PIPE if I don't keep reading it, but
doesn't fail if I do stdout=anything (I've tried files, strings,
integers, and None) soon after starting the process, without any other
changes. How is that consistent with either of the above conditions?
I'm sure you're right, I just don't understand.

Regards,

John
Re: Reassign or discard Popen().stdout from a server process
On Fri, 04 Feb 2011 15:48:55 +, John O'Hagan wrote:

> But I'm still a little curious as to why even unsuccessfully attempting
> to reassign stdout seems to stop the pipe buffer from filling up.

It doesn't. If the server continues to run, then it's ignoring/handling
both SIGPIPE and the EPIPE error. Either that, or another process has
the read end of the pipe open (so no SIGPIPE/EPIPE), and the server is
using non-blocking I/O or select() so that it doesn't block writing its
diagnostic messages.
Re: Reassign or discard Popen().stdout from a server process
On Thu, 3 Feb 2011, Nobody wrote:

> On Tue, 01 Feb 2011 08:30:19 +, John O'Hagan wrote:
>> I can't keep reading because that will block - there won't be any more
>> output until I send some input, and I don't want it in any case. To
>> try to fix this I added:
>>
>>     proc.stdout = os.path.devnull
>>
>> which has the effect of stopping the server from failing, but I'm not
>> convinced it's doing what I think it is.
>
> It isn't. os.path.devnull is a string, not a file. But even if you did:
>
>     proc.stdout = open(os.path.devnull, 'w')
>
> that still wouldn't work.

As mentioned earlier in the thread, I did in fact use open(), this was a
typo, [...]

>> Is it possible to re-assign the stdout of a subprocess after it has
>> started?
>
> No.
>
>> Or just close it? What's the right way to read stdout up to a given
>> line, then discard the rest?
>
> If the server can handle the pipe being closed, go with that.
> Otherwise, options include redirecting stdout to a file and running
> tail -f on the file from within Python, or starting a thread or process
> whose sole function is to read and discard the server's output.

Thanks, that's all clear now. But I'm still a little curious as to why
even unsuccessfully attempting to reassign stdout seems to stop the pipe
buffer from filling up.

John
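The redirect-to-a-file option can be sketched as follows, with the "tail -f" done in Python by polling the log file. The server command here is an invented stand-in; the key point is that a file, unlike a pipe, can grow indefinitely, so the server never blocks:

```python
import os
import subprocess
import sys
import tempfile
import time

# Stand-in for the real server: announces readiness on stdout.
server_src = r"""
import sys
print("Ready.")
sys.stdout.flush()
"""

log_path = os.path.join(tempfile.mkdtemp(), "server.log")

# Send the server's output to a real file instead of a pipe.
with open(log_path, "wb") as log:
    proc = subprocess.Popen([sys.executable, "-c", server_src],
                            stdout=log)

# A minimal "tail -f": poll the file until the ready line shows up.
with open(log_path, "rb") as f:
    while True:
        line = f.readline()
        if line == b"Ready.\n":
            break
        if not line:
            time.sleep(0.05)   # nothing new yet; try again shortly

rc = proc.wait()
```

The log file doubles as a diagnostic record, which may be worth having anyway when debugging a misbehaving server.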
Re: Reassign or discard Popen().stdout from a server process
On Tue, 01 Feb 2011 08:30:19 +, John O'Hagan wrote:

> I can't keep reading because that will block - there won't be any more
> output until I send some input, and I don't want it in any case. To try
> to fix this I added:
>
>     proc.stdout = os.path.devnull
>
> which has the effect of stopping the server from failing, but I'm not
> convinced it's doing what I think it is.

It isn't. os.path.devnull is a string, not a file. But even if you did:

    proc.stdout = open(os.path.devnull, 'w')

that still wouldn't work.

> If I replace devnull in the above line with a real file, it stays empty
> although I know there is more output, which makes me think it hasn't
> really worked.

It hasn't.

> Simply closing stdout also seems to stop the crashes, but doesn't that
> mean it's still being written to, but the writes are just silently
> failing? In either case I'm wary of more elusive bugs arising from
> misdirected stdout.

If you close proc.stdout, the next time the server writes to its stdout,
it will receive SIGPIPE or, if it catches that, the write will fail with
EPIPE (write on pipe with no readers). It's up to the server how it
deals with that.

> Is it possible to re-assign the stdout of a subprocess after it has
> started?

No.

> Or just close it? What's the right way to read stdout up to a given
> line, then discard the rest?

If the server can handle the pipe being closed, go with that. Otherwise,
options include redirecting stdout to a file and running tail -f on the
file from within Python, or starting a thread or process whose sole
function is to read and discard the server's output.
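The drain-thread option mentioned above is short to write. A sketch, with the server command again an invented stand-in: the parent reads up to the ready line itself, then hands the pipe to a daemon thread whose only job is to read and discard:

```python
import subprocess
import sys
import threading

def drain(pipe):
    # Read and discard everything the server writes, so the pipe
    # buffer can never fill up and block the server.
    for _ in iter(lambda: pipe.read(4096), b""):
        pass
    pipe.close()

# Stand-in for the real server: prints a ready line, then chatters.
server_src = r"""
import sys
print("Ready.")
sys.stdout.flush()
for i in range(1000):
    print("diagnostic line", i)
"""

proc = subprocess.Popen([sys.executable, "-c", server_src],
                        stdout=subprocess.PIPE)

# Read up to the line that signals readiness (bytes, since the pipe
# was opened in binary mode)...
while proc.stdout.readline() != b"Ready.\n":
    pass

# ...then let a daemon thread discard the rest of the output.
t = threading.Thread(target=drain, args=(proc.stdout,), daemon=True)
t.start()

rc = proc.wait()
t.join()
```

Unlike closing the pipe, this works even with a server that cannot tolerate EPIPE, at the cost of one extra thread.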
Re: Reassign or discard Popen().stdout from a server process
On Tue, Feb 1, 2011 at 12:30 AM, John O'Hagan m...@johnohagan.com wrote:

> I'm starting a server process as a subprocess. Startup is slow and
> unpredictable (around 3-10 sec), so I'm reading from its stdout until I
> get a line that tells me it's ready before proceeding, in simplified
> form:
>
>     import subprocess
>     proc = subprocess.Popen(['server', 'args'], stdout=subprocess.PIPE)
>     while proc.stdout.readline() != "Ready.\n":
>         pass
>
> Now I can start communicating with the server, but I eventually
> realised that as I'm no longer reading stdout, the pipe buffer will
> fill up with output from the server and before long it blocks and the
> server stops working. I can't keep reading because that will block -
> there won't be any more output until I send some input, and I don't
> want it in any case. To try to fix this I added:
>
>     proc.stdout = os.path.devnull
>
> which has the effect of stopping the server from failing, but I'm not
> convinced it's doing what I think it is. If I replace devnull in the
> above line with a real file, it stays empty although I know there is
> more output, which makes me think it hasn't really worked.

Indeed. proc.stdout is a file, whereas os.devnull is merely a path
string; the assignment is nonsensical type-wise.

> Simply closing stdout also seems to stop the crashes, but doesn't that
> mean it's still being written to, but the writes are just silently
> failing?

Based on some quick experimentation, yes, more or less.

> In either case I'm wary of more elusive bugs arising from misdirected
> stdout. Is it possible to re-assign the stdout of a subprocess after it
> has started?

I think that's impossible. (Most of Popen's attributes probably should
be read-only properties to clarify that such actions don't have the
intended effect.)

> Or just close it? What's the right way to read stdout up to a given
> line, then discard the rest?

I would think calling Popen.communicate() after you've reached the given
line should do the trick:

http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate

Just ignore its return value. However, that does require sending the
input all at once in a single chunk, which it sounds like may not be
feasible in your case; if so, I have no idea how to do it cleanly.

Cheers,
Chris
-- 
http://blog.rebertia.com
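The communicate() suggestion looks like this in practice; the server command is an invented stand-in. communicate() reads (and here discards) all remaining output and waits for the process to exit, so it only suits the case where no further input needs to be sent:

```python
import subprocess
import sys

# Stand-in server: a ready line followed by output we don't care about.
server_src = r"""
import sys
print("Ready.")
sys.stdout.flush()
for i in range(100):
    print("noise", i)
"""

proc = subprocess.Popen([sys.executable, "-c", server_src],
                        stdout=subprocess.PIPE)

# Read up to the line that signals readiness.
while proc.stdout.readline() != b"Ready.\n":
    pass

# Consume and discard everything remaining, then reap the process.
# As noted above, this is a one-shot call: it can't be interleaved
# with further writes to the server.
proc.communicate()
rc = proc.returncode
```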
Re: Reassign or discard Popen().stdout from a server process
On Tue, 1 Feb 2011, Chris Rebert wrote:

> On Tue, Feb 1, 2011 at 12:30 AM, John O'Hagan m...@johnohagan.com wrote:
>> I'm starting a server process as a subprocess. Startup is slow and
>> unpredictable (around 3-10 sec), so I'm reading from its stdout until
>> I get a line that tells me it's ready before proceeding, in simplified
>> form:
>>
>>     import subprocess
>>     proc = subprocess.Popen(['server', 'args'], stdout=subprocess.PIPE)
>>     while proc.stdout.readline() != "Ready.\n":
>>         pass
>>
>> Now I can start communicating with the server, but I eventually
>> realised that as I'm no longer reading stdout, the pipe buffer will
>> fill up with output from the server and before long it blocks and the
>> server stops working. I can't keep reading because that will block -
>> there won't be any more output until I send some input, and I don't
>> want it in any case. To try to fix this I added:
>>
>>     proc.stdout = os.path.devnull
>>
>> which has the effect of stopping the server from failing, but I'm not
>> convinced it's doing what I think it is. If I replace devnull in the
>> above line with a real file, it stays empty although I know there is
>> more output, which makes me think it hasn't really worked.
>
> Indeed. proc.stdout is a file, whereas os.devnull is merely a path
> string; the assignment is nonsensical type-wise.

My mistake, of course I meant open(os.path.devnull).

>> Simply closing stdout also seems to stop the crashes, but doesn't that
>> mean it's still being written to, but the writes are just silently
>> failing?
>
> Based on some quick experimentation, yes, more or less.
>
>> In either case I'm wary of more elusive bugs arising from misdirected
>> stdout. Is it possible to re-assign the stdout of a subprocess after
>> it has started?
>
> I think that's impossible. (Most of Popen's attributes probably should
> be read-only properties to clarify that such actions don't have the
> intended effect.)

I don't doubt what you say, but attempting to assign it does seem to do
something, as it consistently stops the crashes which occur otherwise.
What it does, I have no idea.

>> What's the right way to read stdout up to a given line, then discard
>> the rest?
>
> I would think calling Popen.communicate() after you've reached the
> given line should do the trick:
>
> http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate
>
> Just ignore its return value. However, that does require sending the
> input all at once in a single chunk, which it sounds like may not be
> feasible in your case; if so, I have no idea how to do it cleanly.

Yes, unfortunately I need to send a lot of precisely-timed short
strings, and communicate() blocks after the first call. I tried calling
stdout.readline() the right number of times after each input, but that
seems fiddly and fragile - for example, the number of lines of output is
not guaranteed and may vary in the case of errors, and the extra reads
also had a noticeable effect on latency, which is important in this
case.

So far my best bet seems to be closing stdin, which doesn't seem very
clean, but it does what I want and seems to be just as fast as using
stdin=open(os.devnull) in the Popen call in the first place.

Thanks,

John
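The read-to-the-ready-line-then-close approach can be sketched as follows. The server here is an invented stand-in that deliberately ignores EPIPE; as discussed later in the thread, whether the real server (Fluidsynth, in this case) tolerates the closed pipe is entirely up to the server:

```python
import subprocess
import sys

# Stand-in server: announces readiness, then ignores EPIPE on
# subsequent writes, like a server that shrugs off a closed pipe.
server_src = r"""
import errno, os, sys
os.write(1, b"Ready.\n")
for i in range(1000):
    try:
        os.write(1, b"diagnostic\n")
    except OSError as e:
        if e.errno != errno.EPIPE:
            raise        # only a broken pipe is expected here
sys.exit(0)
"""

proc = subprocess.Popen([sys.executable, "-c", server_src],
                        stdout=subprocess.PIPE)

# Read until the server says it's ready...
while proc.stdout.readline() != b"Ready.\n":
    pass

# ...then close the read end explicitly. Later writes by the server
# fail with EPIPE instead of ever filling a pipe buffer.
proc.stdout.close()
rc = proc.wait()
```

Closing explicitly does the same thing the accidental reassignment did, but states the intent, which addresses the "elusive bugs" worry: nothing is silently misdirected, the pipe is simply gone.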
Re: Reassign or discard Popen().stdout from a server process
On Tue, 1 Feb 2011, John O'Hagan wrote:

> So far my best bet seems to be closing stdin, which doesn't seem very
> clean, but it does what I want and seems to be just as fast as using
> stdin=open(os.devnull) in the Popen call in the first place.

...and both references to "stdin" above should have been to "stdout" (I
really shouldn't post last thing at night).

Thanks,

John