Gregory Stark wrote:
> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
>> I have a new idea. ...the BSD kernel...similar issue...to smooth writes:
>
> Linux has a more complex solution to this (of course) which has undergone
> a few generations over time. Older kernels had a user space daemon called
> bdflush which called an undocumented syscall every 5s. More recent ones
> have a kernel thread called pdflush. I think both have various mostly
> undocumented tuning knobs, but neither makes any sort of guarantee about
> the amount of time a dirty buffer might live before being synced.
Earlier in this thread (around the 7th) was a discussion of /proc/sys/vm/dirty_expire_centisecs and /proc/sys/vm/dirty_writeback_centisecs, which seem to be the tunables that matter here. Googling suggests that dirty_expire_centisecs specifies that "data which has been dirty in memory for longer than this interval will be written out next time a pdflush daemon wakes up" and that dirty_writeback_centisecs "expresses the interval between those wakeups". It seems to me that the sum of the two therefore determines the maximum time before the kernel will start syncing a dirtied page (see the sketch below).

Bottom line, though, is that both PostgreSQL and the OS are trying to delay writes in the hope of collapsing them, and the actual worst-case delay is the sum of the OS's delay and PostgreSQL's delay. I think Kevin Grittner's experimentation earlier in the thread did indeed suggest that getting writes to the OS sooner and letting it handle the collapsing was an effective way of reducing painful checkpoints.
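In case anyone wants to check the numbers on their own box, here's a quick sketch (Python, assuming a Linux /proc filesystem; the paths are the standard sysctl locations, nothing PostgreSQL-specific) that reads both tunables and prints the implied worst case:

    #!/usr/bin/env python
    # Sketch: estimate the worst-case time before the kernel starts
    # writing back a newly dirtied page. Both tunables are centiseconds.

    def read_centisecs(name):
        with open("/proc/sys/vm/" + name) as f:
            return int(f.read())

    expire = read_centisecs("dirty_expire_centisecs")
    writeback = read_centisecs("dirty_writeback_centisecs")

    # A page dirtied just after a pdflush wakeup can sit for up to
    # 'expire' centiseconds before it is old enough to be written,
    # plus up to 'writeback' centiseconds until the next wakeup
    # notices it -- hence the sum.
    print("worst case before writeback starts: %.2f seconds"
          % ((expire + writeback) / 100.0))

On the kernels I've looked at the defaults are 3000 and 500, i.e. roughly 35 seconds in the worst case before the kernel even begins writing, on top of whatever delay PostgreSQL itself has added.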