On Thu, Feb 9, 2017 at 7:10 PM, Peter Geoghegan <p...@bowt.ie> wrote:
> At the risk of stating the obvious, ISTM that the right way to do
> this, at a high level, is to err on the side of unneeded extra
> unlink() calls, not leaking files. And, to make the window for problem
> ("remaining hole that you haven't quite managed to plug") practically
> indistinguishable from no hole at all, in a way that's kind of baked
> into the API.
I do not think there should be any reason why we can't get the resource accounting exactly correct here. If a single backend manages to remove every temporary file that it creates exactly once (and that's currently true, modulo system crashes), a group of cooperating backends ought to be able to manage to remove every temporary file that any of them create exactly once (again, modulo system crashes).

I do agree that a duplicate unlink() call isn't as bad as a missing unlink() call, at least if there's no possibility that the filename could have been reused by some other process, or some other part of our own process, which doesn't want that new file unlinked. But it's messy. If the seatbelts in your car were to randomly unbuckle, that would be a safety hazard. If they were to randomly refuse to unbuckle, you wouldn't say "that's OK because it's not a safety hazard"; you'd say "these seatbelts are badly designed". And I think the same is true of this mechanism.

The way to make this 100% reliable is to set things up so that there is joint ownership from the beginning and shared state that lets you know whether the work has already been done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers