Currently, if the client application dies (i.e., closes the connection), the backend will observe this and exit when it next returns to the outer loop and tries to read a new command. However, we could detect the loss of connection much sooner; for example, if we are doing a SELECT that outputs a large amount of data, we will see failures from send().

We have deliberately avoided trying to abort as soon as the connection drops, for fear that doing so might cause unexpected problems. However, it's moderately annoying to see the postmaster log fill with "pq_flush: send() failed" messages when something like this happens.

It occurs to me that a fairly safe way to abort after loss of connection would be for pq_flush or pq_recvbuf to set QueryCancel when they detect a communications problem (see the sketch below). This would not immediately abort the query in progress, but would ensure a cancel at the next safe time in the per-tuple loop. Typically you wouldn't get very much more output before that happened.

Thoughts? Is there anything about this that might be unsafe? Should QueryCancel be set after *any* failure of recv() or send(), or only if certain errno codes are detected (and if so, which ones)?

			regards, tom lane
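
For concreteness, here is a minimal sketch of what the pq_flush side of this might look like. The names and structure (MyProcSock, PqSendBuffer, PqSendPointer, connection_lost, and QueryCancel as a plain volatile flag) are illustrative assumptions, not the actual backend code, and the choice of EPIPE/ECONNRESET is just one possible answer to the errno question above. It also assumes SIGPIPE is ignored, so a write to a dead peer fails with EPIPE rather than killing the process.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>          /* EOF */
#include <sys/types.h>
#include <sys/socket.h>

/* Illustrative stand-ins for backend globals. */
volatile bool QueryCancel = false;  /* checked at safe points in the per-tuple loop */

static int  MyProcSock = -1;        /* backend's connection to the client */
static char PqSendBuffer[8192];
static int  PqSendPointer = 0;      /* bytes waiting in PqSendBuffer */

/*
 * Decide whether a send()/recv() failure means the client is gone.
 * EPIPE and ECONNRESET clearly indicate a lost connection; EINTR is
 * just a signal and is handled by retrying, so it never sets the flag.
 */
static bool
connection_lost(int err)
{
    return (err == EPIPE || err == ECONNRESET);
}

/*
 * Sketch of pq_flush: on a send() failure that indicates a dead peer,
 * set QueryCancel rather than trying to abort on the spot.  The query
 * is then cancelled at the next safe point in the per-tuple loop,
 * instead of interrupting the backend somewhere unsafe.
 */
int
pq_flush(void)
{
    char *bufptr = PqSendBuffer;
    char *bufend = PqSendBuffer + PqSendPointer;

    while (bufptr < bufend)
    {
        ssize_t r = send(MyProcSock, bufptr, bufend - bufptr, 0);

        if (r < 0)
        {
            if (errno == EINTR)
                continue;               /* interrupted by a signal: retry */
            if (connection_lost(errno))
                QueryCancel = true;     /* cancel at next safe point */
            PqSendPointer = 0;          /* discard data we cannot send */
            return EOF;
        }
        bufptr += r;
    }
    PqSendPointer = 0;
    return 0;
}

The same test could be applied to the recv() failure path in pq_recvbuf. The point of the sketch is only the control flow: record the failure in QueryCancel and let the existing cancel machinery do the aborting.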