one example of this behaviour that i use all the time is:

	tar t < foo.tar | sed 10q

just to get some idea of the contents of foo.tar. often the tar file is huge, and i won't get a command prompt back until all the elements of the pipeline have completed - whereas i want it to finish quickly. it always annoys me when gnu tar prints something like:

	tar: Error in writing to standard output
	tar: Error is not recoverable: exiting now

in this situation, where a silent exit is more appropriate.

> also, the only case that is a problem is
> when the return status of writes is not checked.

it's often much more of a hassle to check the return status of all writes, as many programs use print(f) in many places, all of which would have to be wrapped appropriately (there's a sketch of such a wrapper at the end of this message).

i think the fundamental asymmetry between read and write is one of knowledge:

- on a write, the program knows what it's producing, and if the consumer isn't ready for it, that's almost always an error. in a pure pipeline, one where each component has no side effects, if you can't write data, there's nothing more you can do, so you might as well be dead.

- on a read, the program usually doesn't know exactly what it's getting, so it's in a good position to act appropriately on an unexpected situation. most programs will produce data into the pipeline after all input data has been read, error or not, so it's rarely appropriate for the program to die after a read error.

mind you, i seem to remember that inferno's pipe write (at least) generates an exception on the first write to a defunct pipe. i think it should return -1 at least once before raising the exception, so that a program which checks the return code never encounters the exception. that seems like a good compromise to me.
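for concreteness, here's roughly what that compromise might look like, sketched in C rather than limbo - Pipe, pipewrite and raise_exception are all invented names standing in for whatever inferno actually uses:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* stand-in for inferno's exception mechanism (hypothetical) */
static void
raise_exception(const char *msg)
{
	fprintf(stderr, "exception: %s\n", msg);
	abort();
}

typedef struct Pipe Pipe;
struct Pipe {
	int	readers;	/* live readers at the other end */
	int	failed;		/* a write has already returned -1 */
};

ssize_t
pipewrite(Pipe *p, const void *buf, size_t n)
{
	(void)buf;	/* data transfer elided in this sketch */
	if(p->readers == 0){
		if(!p->failed){
			p->failed = 1;
			return -1;	/* first failure: report via return value */
		}
		/* caller ignored the -1 and wrote again: now raise */
		raise_exception("write on closed pipe");
	}
	/* ... normal pipe write elided ... */
	return n;
}

int
main(void)
{
	Pipe p = { 0, 0 };	/* defunct pipe: no readers */

	if(pipewrite(&p, "hi", 2) < 0)
		fprintf(stderr, "write failed; stopping politely\n");
	pipewrite(&p, "hi", 2);	/* an unchecked retry raises */
}

the point of the two-step scheme is that a careful program sees only the -1, while a careless one that barrels on regardless still gets stopped.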

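and, going back to the point about wrapping writes: here's the kind of wrapper i mean, again only a sketch - xprintf is an invented name, and note that because of stdio buffering the failure may only surface on a later call or at flush time:

#include <errno.h>
#include <signal.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* hypothetical wrapper: every printf in the program would
 * have to be replaced with a call to something like this */
static void
xprintf(const char *fmt, ...)
{
	va_list arg;
	int n;

	va_start(arg, fmt);
	n = vprintf(fmt, arg);
	va_end(arg);
	if(n < 0){
		if(errno == EPIPE)
			exit(0);	/* reader has gone away: exit silently */
		perror("write");
		exit(1);
	}
}

int
main(void)
{
	int i;

	/* ignore SIGPIPE so a write to a dead pipe returns -1
	 * with errno == EPIPE instead of killing the process */
	signal(SIGPIPE, SIG_IGN);
	for(i = 0; ; i++)
		xprintf("%d\n", i);
}

run it as ./a.out | sed 10q and it exits quietly once sed goes away - which is exactly the behaviour i'd have liked from gnu tar above.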