On Wed, 6 Jun 2007, Heikki Linnakangas wrote:

> The original patch uses bgwriter_all_max_pages to set the minimum rate. I think we should have a separate variable, checkpoint_write_min_rate, in KB/s, instead.

Completely agreed. There shouldn't be any coupling with the background writer parameters, which may be set for a completely different set of priorities than the checkpoint has. I have to look at this code again to see why it's a min_rate instead of a max; that seems a little weird.

> Nap phase: We should therefore give the delay as a number of seconds instead of as a percentage of checkpoint interval.

Again, the setting here should be completely decoupled from another GUC like the interval. My main complaint with the original form of this patch was how much it tried to synchronize the process with the interval; since I don't even have a system where that value is set to anything meaningful (checkpoints there are driven entirely by segment consumption instead), that whole idea was incompatible with my setups.

The original patch tried to spread the load out as evenly as possible over the time available. I much prefer thinking in terms of getting it done as quickly as possible while trying to bound the I/O storm.
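To make the "bound the I/O storm" idea concrete, here's a minimal sketch of pacing the write phase by a throughput cap rather than by the checkpoint interval. The GUC name checkpoint_write_max_kbps is hypothetical, just a stand-in for whatever knob ends up controlling this; the real patch would hook this into the existing checkpoint write loop.

```c
/* Sketch: pace checkpoint writes by a maximum sustained rate, not by
 * spreading them across checkpoint_timeout.  checkpoint_write_max_kbps
 * is a hypothetical GUC, not one that exists in PostgreSQL. */
#include <assert.h>

#define BLCKSZ 8192             /* PostgreSQL's default block size, bytes */

/* Microseconds to sleep after writing each block so that sustained
 * throughput stays at or below max_kbps.  Zero means unthrottled. */
static long
delay_per_block_usec(int max_kbps)
{
    double      blocks_per_sec;

    if (max_kbps <= 0)
        return 0;               /* no cap configured: write flat out */

    blocks_per_sec = (max_kbps * 1024.0) / BLCKSZ;
    return (long) (1000000.0 / blocks_per_sec);
}
```

For example, capping at 4 MB/s (4096 KB/s) with 8 KB blocks allows 512 blocks per second, so the loop would sleep about 1953 microseconds after each write. The writes still finish as quickly as the cap allows, rather than being stretched out to fill the interval.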

> And we don't know how much work an fsync performs. The patch uses the file size as a measure of that, but as we discussed that doesn't necessarily have anything to do with reality. fsyncing a 1GB file with one dirty block isn't any more expensive than fsyncing a file with a single block.

On top of that, if you have a system with a write cache, the time an fsync takes can vary greatly depending on how full that cache is at the time, and there's no practical way to measure or even model that.

Is there any way to track how many dirty blocks went into each file during the checkpoint write? That's your best bet for guessing how long the fsync will take.
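As a rough sketch of what that tracking could look like: bump a per-file counter each time the checkpoint writes a dirty block, then use the counts to rank expected fsync cost. The fixed-size table and integer file ids below are illustrative only; the real place for this would be alongside the bgwriter's pending-fsync bookkeeping, not a standalone array.

```c
/* Sketch: count dirty blocks written into each file during the
 * checkpoint write phase, so the fsync phase can be paced by estimated
 * cost instead of by file size.  MAX_FILES and integer file ids are
 * simplifications for illustration. */
#include <assert.h>
#include <string.h>

#define MAX_FILES 1024

static int  dirty_writes[MAX_FILES];    /* blocks written per file */

/* Call this from the checkpoint write loop for every block written. */
static void
note_block_written(int file_id)
{
    if (file_id >= 0 && file_id < MAX_FILES)
        dirty_writes[file_id]++;
}

/* Crude fsync-cost estimate: proportional to blocks actually written,
 * not to file size, so a 1GB file with one dirty block ranks as cheap. */
static int
estimated_fsync_cost(int file_id)
{
    if (file_id < 0 || file_id >= MAX_FILES)
        return 0;
    return dirty_writes[file_id];
}

/* Reset the counters once the fsync phase has completed. */
static void
reset_checkpoint_counters(void)
{
    memset(dirty_writes, 0, sizeof(dirty_writes));
}
```

This still ignores whatever the OS wrote back on its own before the fsync, and the write-cache effect above, but blocks-written is a far better first-order guess than bytes-in-file.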

* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
