Thinking about this whole idea a bit more, it occurred to me that the current approach of writing everything first and then fsyncing it all is really a historical artifact of the fact that we used to use the system-wide sync call, instead of fsyncs, to flush the pages to disk. That might not be the best way to do things in the new load-distributed-checkpoint world.

How about interleaving the writes with the fsyncs?

Scan all shared buffers, and build a list of all files with dirty pages, and buffers belonging to them

foreach(file in list)
{
  foreach(buffer belonging to file)
  {
    write(buffer);
    sleep(); /* to throttle the I/O rate */
  }
  fsync(file);
  sleep(); /* to give the OS a chance to flush the writes at its own pace */
}

This would spread out the fsyncs in a natural way, making the knob to control the duration of the sync phase unnecessary.
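
Spelled out in plain C, the write phase might look something like this. This is a simplified standalone sketch, not actual bufmgr/smgr code; the DirtyFile struct, the delay arguments, and the direct open/pwrite/fsync calls are purely illustrative stand-ins:

#include <fcntl.h>
#include <unistd.h>

#define BLCKSZ 8192

/* Hypothetical result of the scan phase: one entry per file that has
 * dirty pages, with the page images and their block numbers. */
typedef struct
{
    const char *path;                /* file to write to */
    int         nblocks;             /* number of dirty blocks */
    long       *blocknums;           /* block numbers within the file */
    char      (*pages)[BLCKSZ];      /* page images to write */
} DirtyFile;

static void
checkpoint_write_files(DirtyFile *files, int nfiles,
                       long write_delay_us, long fsync_delay_us)
{
    for (int i = 0; i < nfiles; i++)
    {
        int         fd = open(files[i].path, O_WRONLY);

        if (fd < 0)
            continue;                /* real code would report the error */

        for (int j = 0; j < files[i].nblocks; j++)
        {
            /* write one dirty page at its block offset in the file */
            pwrite(fd, files[i].pages[j], BLCKSZ,
                   (off_t) files[i].blocknums[j] * BLCKSZ);
            usleep(write_delay_us);  /* throttle the I/O rate */
        }

        fsync(fd);                   /* sync this file before moving on */
        close(fd);
        usleep(fsync_delay_us);      /* let the OS flush at its own pace */
    }
}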

At some point we'll also need to fsync all files that have been modified since the last checkpoint but don't have any dirty buffers in the buffer cache. I think it's a reasonable assumption that fsyncing those files doesn't generate a lot of I/O: since the writes were made some time ago, the OS has likely already flushed them to disk.
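
For illustration only, that could be as simple as a set of file names remembered at write time and drained at checkpoint time. The fixed-size array and linear search here are a toy, not how fsync requests would actually be tracked in the server:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define MAX_PENDING 1024

static const char *pending_files[MAX_PENDING];
static int         npending = 0;

/* Called whenever a dirty page is written out to a file. */
static void
remember_written_file(const char *path)
{
    for (int i = 0; i < npending; i++)
        if (strcmp(pending_files[i], path) == 0)
            return;                          /* already queued */
    if (npending < MAX_PENDING)
        pending_files[npending++] = path;
}

/* Called at the end of the checkpoint: fsync anything still queued,
 * including files with no remaining dirty buffers in shared memory. */
static void
fsync_pending_files(void)
{
    for (int i = 0; i < npending; i++)
    {
        int fd = open(pending_files[i], O_WRONLY);

        if (fd >= 0)
        {
            fsync(fd);   /* usually cheap: those writes are long since flushed */
            close(fd);
        }
    }
    npending = 0;
}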

Doing the first phase of just scanning the buffers to see which ones are dirty also effectively implements the optimization of not writing buffers that were dirtied after the checkpoint started. And grouping the writes per file gives the OS a better chance to group the physical writes.
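
A rough sketch of that scan-and-sort phase follows; the BufDesc fields and the bare file_id are stand-ins for the real shared buffer descriptors and buffer tags:

#include <stdlib.h>

/* Illustrative buffer descriptor; the real one lives in shared memory
 * and carries a full buffer tag plus state flags. */
typedef struct
{
    unsigned    file_id;    /* which file the page belongs to */
    unsigned    blockno;    /* block within that file */
    int         is_dirty;
} BufDesc;

typedef struct
{
    unsigned    file_id;
    unsigned    blockno;
    int         buf_id;     /* index into the buffer pool */
} DirtyEntry;

static int
dirty_entry_cmp(const void *a, const void *b)
{
    const DirtyEntry *x = a, *y = b;

    if (x->file_id != y->file_id)
        return (x->file_id < y->file_id) ? -1 : 1;
    if (x->blockno != y->blockno)
        return (x->blockno < y->blockno) ? -1 : 1;
    return 0;
}

/*
 * Phase 1: remember which buffers are dirty right now; pages dirtied
 * after this point are left for the next checkpoint. Sorting by
 * (file, block) groups the writes per file and puts them in physical
 * order, which gives the OS a better shot at merging them.
 */
static int
collect_dirty_buffers(const BufDesc *bufs, int nbuffers, DirtyEntry *out)
{
    int n = 0;

    for (int i = 0; i < nbuffers; i++)
    {
        if (!bufs[i].is_dirty)
            continue;
        out[n].file_id = bufs[i].file_id;
        out[n].blockno = bufs[i].blockno;
        out[n].buf_id = i;
        n++;
    }
    qsort(out, n, sizeof(DirtyEntry), dirty_entry_cmp);
    return n;
}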

One problem is that currently the segmentation of relations into 1GB files is handled at a low level inside md.c, and we don't really have any visibility into that in the buffer manager. ISTM some changes to the smgr interfaces would be needed for this to work well, though even doing it on a relation-by-relation basis would be better than the current approach.
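
If the buffer manager (or a new smgr call) could see the segment boundaries, the checkpoint could group and fsync per 1GB segment instead of per relation. The mapping itself is trivial; this assumes the usual 8K block size and 1GB segments, and the segment_for_block name is made up:

#define BLCKSZ      8192
#define RELSEG_SIZE ((1024 * 1024 * 1024) / BLCKSZ)  /* blocks per 1GB segment */

/* Which segment file of a relation a given block lives in. */
static unsigned
segment_for_block(unsigned blockno)
{
    return blockno / RELSEG_SIZE;    /* e.g. block 200000 is in segment 1 */
}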

  Heikki Linnakangas
