On Tue, 26 Jun 2007, Tom Lane wrote:

> I'm not impressed with the idea of writing buffers because we might need them someday; that just costs extra I/O due to re-dirtying in too many scenarios.

This is kind of an interesting statement to me because it really highlights the difference between how I think about this problem and how you see it. As far as I'm concerned, there's a hierarchy of I/O the database needs to finish that goes like this:

1) Client back-end writes (blocked until a buffer appears)
2) LRU writes so (1) doesn't happen
3) Checkpoint writes
4) Dirty pages with a non-zero usage count

In my view of the world, there should be one parameter for a target rate of how much I/O you can stand under normal use, and the background writer should work its way as far down this chain as it can until it meets that target. If there are plenty of clean buffers for the expected new allocations and there's no checkpoint going on, by all means write out some buffers we might re-dirty if there's I/O to spare. If you write them twice, so what? You didn't even get to that point as an option until all the important stuff was taken care of and the system was near idle.
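To make the policy concrete, here's a minimal sketch of the idea in Python. This is not PostgreSQL's bgwriter code; the buffer model, the `io_budget` parameter, and the function names are all illustrative assumptions, just showing "spend a fixed I/O budget, working down the priority chain, and only touch re-dirtyable pages with whatever budget is left over":

```python
# Hypothetical model of the tiered background-writer policy described
# above. None of these names come from PostgreSQL; they only illustrate
# the priority chain: LRU writes first, then checkpoint writes, then
# (budget permitting) dirty pages with a non-zero usage count.

def background_writer_pass(buffers, io_budget, checkpoint_active):
    """Spend up to io_budget page writes per pass, highest priority first."""
    written = 0

    def flush(candidates):
        nonlocal written
        for buf in candidates:
            if written >= io_budget:
                return
            buf["dirty"] = False
            written += 1

    # Tiers 1-2: clean zero-usage-count buffers so a client backend
    # never blocks waiting for a free buffer to appear.
    flush(b for b in buffers if b["dirty"] and b["usage_count"] == 0)

    # Tier 3: while a checkpoint is in progress, its writes get the
    # remaining budget before anything speculative happens.
    if checkpoint_active:
        flush(b for b in buffers if b["dirty"] and b.get("checkpoint", False))

    # Tier 4: only with budget still left over, write dirty pages that
    # may well be re-dirtied -- the "if you write them twice, so what"
    # writes, which only happen when the system is near idle.
    flush(b for b in buffers if b["dirty"] and b["usage_count"] > 0)

    return written
```

The point of the structure is that tier 4 is unreachable unless tiers 1-3 left the budget unspent, so the speculative double-writes only occur when there was spare I/O capacity anyway.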

The elimination of the all-scan background writer means that true hot and dirty spots in the buffer cache, like popular index blocks on a heavily updated table that never get a zero usage_count, are never going to be written out other than as part of the checkpoint process. That's OK for now, but I'd like it to be the case that one day the database's I/O scheduling would eventually get to those, in order to optimize performance in the kind of bursty scenarios I've been mentioning lately.

* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
