Bruce Momjian wrote:
> The idea of using this on Unix is tempting, but Tatsuo is using a
> threaded backend, so it is a little easier to do.  However, it would
> probably be pretty easy to write a file of modified file names that the
> checkpoint could read and open/fsync/close.
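The open/fsync/close approach Bruce describes could be sketched roughly like this (a minimal illustration, not PostgreSQL source; the one-path-per-line list format and the function names are assumptions for the sake of the example):

```c
/* Sketch: a checkpoint reads a list of modified file names and
 * fsyncs each one.  List format (one path per line) is assumed. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

/* Open, fsync, and close one file; returns 0 on success, -1 on error. */
static int sync_one_file(const char *path)
{
    int fd = open(path, O_RDONLY);
    int rc;

    if (fd < 0)
        return -1;
    rc = fsync(fd);
    close(fd);
    return rc;
}

/* Walk a text file containing one modified-file path per line;
 * returns the number of files that failed to sync, or -1 if the
 * list itself could not be opened. */
static int sync_file_list(const char *listpath)
{
    FILE *list = fopen(listpath, "r");
    char  line[4096];
    int   failures = 0;

    if (list == NULL)
        return -1;
    while (fgets(line, sizeof line, list) != NULL)
    {
        line[strcspn(line, "\n")] = '\0';   /* strip trailing newline */
        if (line[0] != '\0' && sync_one_file(line) != 0)
            failures++;
    }
    fclose(list);
    return failures;
}
```

The cost of this loop (one open/fsync/close per file) versus a single sync() is exactly the trade-off questioned below.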
Even that's not strictly necessary -- we *do* have shared memory we can use for this, and even when hundreds of tables have been written, the list will only end up being a few tens of kilobytes in size (plus whatever overhead is required to track and manipulate the entries).  Better still, we don't actually have to track the *names* of the files that have changed, just their RelFileNodes, since there is a mapping function from a RelFileNode to its filename.

> Of course, if there are lots of files, sync() may be faster than
> opening/fsync/closing all those files.

True, and something I hadn't thought of, so some testing would be in order.  Unfortunately, I know of no system call that takes an array of file descriptors (or file names!  May as well go for the gold when wishing for something :-) and syncs them all to disk in the most optimal way...

-- 
Kevin Brown                                           [EMAIL PROTECTED]
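[Editor's sketch] The RelFileNode-to-filename mapping mentioned above could look roughly like the following.  This is an assumed simplification: real PostgreSQL also handles non-default tablespaces and segment files, while this covers only the common "base/<database OID>/<relfilenode>" case, and the struct and function names are invented for illustration.

```c
/* Sketch: map a RelFileNode-style triple to a relative file path,
 * so the checkpointer can keep compact node IDs in shared memory
 * instead of full file names.  Simplified; ignores tablespaces
 * and relation segments. */
#include <stdio.h>

typedef struct
{
    unsigned int spcNode;   /* tablespace OID (unused in this sketch) */
    unsigned int dbNode;    /* database OID */
    unsigned int relNode;   /* relation file number */
} RelFileNodeSketch;

/* Write the relative path for a node into buf; returns buf. */
static char *relnode_path(char *buf, size_t buflen,
                          const RelFileNodeSketch *node)
{
    snprintf(buf, buflen, "base/%u/%u", node->dbNode, node->relNode);
    return buf;
}
```

With this, the checkpoint process only needs a fixed-size shared-memory array of such triples; the file name is reconstructed on demand at sync time.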