On Thu, Aug 14, 2014 at 12:08 PM, Baker, Keith [OCDUS Non-J&J] wrote:
> I tried a combination of PIPE lock and file lock (fcntl) as Tom had suggested.
> Attached experimental patch has this logic...
> Postmaster :
> - get exclusive fcntl lock (to guard against race condition in PIPE-based approach)
> - check PIPE for any existing readers
> - open PIPE for read
> All other backends:
> - get shared fcntl lock
> - open PIPE for read
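(For reference, the shared vs. exclusive fcntl() locks in the quoted scheme boil down to something like the sketch below; the helper name and fd handling are illustrative, not part of the patch.)

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Take an advisory lock on the whole file: F_WRLCK (exclusive) for
 * the postmaster, F_RDLCK (shared) for other backends.  F_SETLKW
 * blocks until the lock is granted. */
static int
lock_file(int fd, short type)
{
    struct flock fl;

    memset(&fl, 0, sizeof(fl));
    fl.l_type = type;
    fl.l_whence = SEEK_SET;     /* l_start = l_len = 0: whole file */
    return fcntl(fd, F_SETLKW, &fl);
}
```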
Hmm. This seems like it might almost work. But I don't see why the
other backends need to care about fcntl() at all. How about this:
1. Acquire an exclusive lock on some file in the data directory, maybe
the control file, using fcntl().
2. Open the named pipe for read.
3. Open the named pipe for write.
4. Close the named pipe for read.
5. Install a signal handler for SIGPIPE which sets a global variable.
6. Try to write to the pipe.
7. Check that the variable is set; if not, FATAL.
8. Revert SIGPIPE handler.
9. Close the named pipe for write.
10. Open the named pipe for read.
11. Release the fcntl() lock acquired in step 1.
Regular backends don't need to do anything special, except that they
need to make sure that the file descriptor opened in step 10 gets
inherited by the right set of processes. That means that the
close-on-exec flag should be turned on in the postmaster; except in
EXEC_BACKEND builds, where it should be turned off but then turned on
again by child processes before they do anything that might fork.
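Toggling the close-on-exec flag is just an fcntl() dance; something like this sketch (the helper name is mine, not from the patch) would cover both the postmaster and the EXEC_BACKEND child case:

```c
#include <fcntl.h>

/* Set or clear FD_CLOEXEC on fd.  With on = 1 the descriptor is
 * still inherited across plain fork() but not across exec(); the
 * EXEC_BACKEND child would call this with on = 1 again before
 * forking anything else. */
static int
set_cloexec(int fd, int on)
{
    int flags = fcntl(fd, F_GETFD);

    if (flags < 0)
        return -1;
    if (on)
        flags |= FD_CLOEXEC;
    else
        flags &= ~FD_CLOEXEC;
    return fcntl(fd, F_SETFD, flags);
}
```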
It's impossible for two postmasters to start up at the same time
because the fcntl() lock acquired at step 1 will block any
newly-arriving postmaster until step 11 is complete. The
first-to-close semantics of fcntl() aren't a problem for this purpose
because we only execute a very limited amount of code over which we
have full control while holding the lock. By the time the postmaster
that gets the lock first completes step 10, any later-arriving
postmaster is guaranteed to fall out at step 7 while that postmaster
or any children who inherit the pipe descriptor remain alive. No
process holds any resource that will survive its exit, so cleanup is automatic.
This seems solid to me, but watch somebody find a problem with it...
The Enterprise PostgreSQL Company
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)