Carson Gaspar <[EMAIL PROTECTED]> writes:

> stderr _must_ be processed the same way stdin is. If you don't want them
> to share a pty, you need to allocate 2 (which is what the kerberos stuff
> does, as I recall).

This statement does not make much sense to me. The things you can do
on stdin (input) and stderr (output) are quite different. Even the
ttyflags for input and output are mostly orthogonal.
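For what it's worth, the orthogonality is visible in the termios interface itself: input modes live in c_iflag and output modes in c_oflag, so one can be changed without touching the other. A minimal sketch (the helper names are mine):

```c
#include <termios.h>

/* Input modes (c_iflag) and output modes (c_oflag) are separate
 * fields of struct termios, so they can be changed independently. */

/* Disable output post-processing; input flags are untouched. */
void make_raw_output(struct termios *t)
{
    t->c_oflag &= ~(tcflag_t)OPOST;
}

/* Stop mapping CR to NL on input; output flags are untouched. */
void strip_icrnl(struct termios *t)
{
    t->c_iflag &= ~(tcflag_t)ICRNL;
}
```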

In general, shell utilities must be able to cope with some but not all
of std{in,out,err} being a tty. It's common to run programs under a
tty, but redirect stdout or stderr (but not both) to /dev/null. Or
stdin, for that matter.
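To illustrate: a well-behaved utility probes each descriptor separately with isatty(3), rather than assuming the three streams agree. A sketch (the function name is mine):

```c
#include <unistd.h>

/* Describe whether a descriptor refers to a terminal.  Any subset of
 * stdin, stdout and stderr may be redirected, so each descriptor must
 * be probed on its own. */
const char *tty_desc(int fd)
{
    return isatty(fd) ? "a tty" : "not a tty";
}
```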

On a more practical side, what problems can I expect if stdin and
stdout are a pty but stderr is not? Bash apparently writes its
prompts to stderr (as demonstrated by running bash -i >/dev/null;
the prompt still appears). Unless I'm told some very strong reasons
for that, I consider it a bash bug, even if it may be a bug we have to
live with. Are there any other programs we can expect problems with?
How do other interactive shells behave, for example?

/Niels
