The idea of doing a partial pass through shared buffers, writing only a
fraction of the dirty buffers and then fsyncing them, is a good one.
The key point is that we spread out the fsyncs across the whole checkpoint.
Yes, this is really Andres' suggestion, as I understood it.
I think we should write out all buffers for a particular file in one
pass, then issue one fsync per file; more than one fsync per file seems
like a bad idea.
This is one of the things done in the "checkpoint continuous flushing"
patch: because buffers are sorted, they are written file by file, and in
order within each file, which helps get sequential writes instead of
random ones.
However, for now the final fsync is not called; instead Linux is told
that the written buffers must be flushed, which is akin to an
"asynchronous fsync": it asks the kernel to move the data but does not
wait for the data to actually be written, as a blocking fsync would.
Andres' suggestion, which has some points in common with Takashi-san's
patch, is to also integrate the fsync into the buffer-writing process.
There are some details to think about: it is probably not a good idea to
issue an fsync right after the corresponding writes; it is better to
wait for some delay before doing so, so the implementation is not
straightforward.