"Bruce Momjian" <[EMAIL PROTECTED]> writes: > I have a new idea. Rather than increasing write activity as we approach > checkpoint, I think there is an easier solution. I am very familiar > with the BSD kernel, and it seems they have a similar issue in trying to > smooth writes:
Just to give a bit of context for this: the traditional mechanism for syncing buffers to disk on BSD, which this daemon replaced, was simply to call "sync" every 30s. Compared to that, this daemon certainly smooths the I/O out over the 30s window.

Linux has a more complex solution to this (of course) which has gone through a few generations over time. Older kernels had a user-space daemon called bdflush which called an undocumented syscall every 5s. More recent kernels have a kernel thread called pdflush. I think both have various mostly undocumented tuning knobs, but neither makes any guarantee about how long a dirty buffer might live before being synced.

Your thinking is correct, but isn't that already the whole point of bgwriter? To get the buffers out to the kernel early in the checkpoint interval so that, come checkpoint time, they're hopefully already flushed to disk. As long as your checkpoint interval is well over 30s, only the last 30s or so (it's a bit fuzzier on Linux) should still be at risk of being pending.

I think the main problem with an additional pause in the hope of getting more buffers synced is that, during the 30s pause on a busy system, there would be a continual stream of new dirty buffers being created as bgwriter works and other backends reuse pages. So when the fsync is eventually called there will still be a large amount of I/O to do.

Fundamentally the problem is that fsync is too blunt an instrument. We only need to fsync the buffers we care about, not the entire file.

-- 
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com

---------------------------(end of broadcast)---------------------------
TIP 2: Don't 'kill -9' the postmaster