On 10.10.2012 17:26, Amit Kapila wrote:
36. +SendTimeLineHistory(TimeLineHistoryCmd *cmd)
{
..
    if (nread <= 0)
+       ereport(ERROR,
+               (errcode_for_file_access(),
+                errmsg("could not read file \"%s\": %m",
+                       path)));

FileClose should be done in the error case as well.

Hmm, I think you're right. The straightforward fix of just calling FileClose() before the ereport()s in that function would not be enough, though: an error can also be thrown elsewhere, e.g. pq_sendbytes() might run out of memory. We could use PG_TRY/CATCH for this, but that seems like overkill. Perhaps the simplest fix is to keep the fd in a global (static) variable, and clean it up in WalSndErrorCleanup().
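
Something like this, as a sketch (histFile is just an illustrative name, and the open/read details are simplified, not actual patch code):

static File histFile = -1;

static void
SendTimeLineHistory(TimeLineHistoryCmd *cmd)
{
    char    path[MAXPGPATH];

    /* ... construct the history file path for the requested timeline ... */

    histFile = PathNameOpenFile(path, O_RDONLY, 0666);
    if (histFile < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not open file \"%s\": %m", path)));

    /*
     * ... FileRead() the contents and pq_sendbytes() them; any
     * ereport(ERROR) thrown in here longjmps out with histFile
     * still open ...
     */

    FileClose(histFile);
    histFile = -1;
}

static void
WalSndErrorCleanup(void)
{
    /* close the history file if an error interrupted the send */
    if (histFile >= 0)
    {
        FileClose(histFile);
        histFile = -1;
    }
}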

This is a fairly general issue, actually. Looking around, I can see at least two similar cases in existing code using BasicOpenFile, where we will leak file descriptors on error:

copy_file: there are several error cases, including running out of disk space, with no attempt to close the fds.

XLogFileInit: again, no attempt to close the file descriptor on failure. This is called from the checkpointer process at checkpoint time, to preallocate new xlog files.
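
In both cases the shape of the problem is roughly this (schematic code, not the actual copy_file/XLogFileInit source):

int     fd;
char    buffer[BLCKSZ];
int     nbytes;

fd = BasicOpenFile(path, O_RDONLY | PG_BINARY, 0);
if (fd < 0)
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not open file \"%s\": %m", path)));

nbytes = read(fd, buffer, sizeof(buffer));
if (nbytes < 0)
    /*
     * Leak: ereport(ERROR) longjmps out of the function, and nothing
     * closes fd. Raw fds from BasicOpenFile are not tracked by fd.c,
     * so they are not released at transaction abort either.
     */
    ereport(ERROR,
            (errcode_for_file_access(),
             errmsg("could not read file \"%s\": %m", path)));

close(fd);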

Given that we haven't heard complaints from anyone running into these, they are not a big deal in practice, but in theory at least the XLogFileInit leak could lead to serious problems, as it could cause the checkpointer to run out of file descriptors.

- Heikki

