On Sun, 2007-06-17 at 01:36 -0400, Greg Smith wrote:
> The last project I was working on, any checkpoint that caused a
> transaction to slip for more than 5 seconds would cause a data loss.
> One of the defenses against that happening is that you have a wicked
> fast transaction rate to clear the buffer out when things are going
> well, but by no means is that rate the important thing--never having
> the response time halt for so long that transactions get lost is.
You would want longer checkpoints in that case.

You're saying you don't want long checkpoints because they cause an
effective outage. The current situation is that checkpoints are so
severe that they cause an effective halt to processing, even though in
principle processing can continue during a checkpoint. Checkpoints
don't hold any locks that prevent normal work from occurring, but they
do cause an unthrottled burst of write activity that raises expected
service times dramatically on an already busy server. There are a
number of effects contributing to the high impact of checkpointing.

Heikki's recent changes reduce the impact of checkpoints so that they
do *not* halt other processing. Longer checkpoints do *not* mean longer
halts in processing; they actually reduce the halt in processing.
Smoother checkpoints mean smaller resource queues when a burst
coincides with a checkpoint, so anybody with throughput-maximised or
bursty apps should want longer, smooth checkpoints.

You're right to ask for a minimum write rate, since that allows very
small checkpoints to complete in less time. There's no gain from
having long checkpoints per se, just the reduction in peak write rate
they typically cause.

-- 
 Simon Riggs
 EnterpriseDB   http://www.enterprisedb.com
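[Editorial sketch] To make the minimum-write-rate point concrete, here is a
minimal sketch in Python. It is purely illustrative, not PostgreSQL code; the
function name paced_checkpoint and its parameters are invented for this
example. It shows how writes could be spread over a target duration while a
rate floor keeps a nearly empty checkpoint from dragging on.

    import time

    def write_buffer(buf):
        """Placeholder for the actual buffer flush; a real server would
        issue the write here and let the OS schedule the physical I/O."""
        pass

    def paced_checkpoint(dirty_buffers, target_duration_s, min_rate_per_s):
        """Spread buffer writes over target_duration_s seconds, but never
        drop below min_rate_per_s writes per second, so a small checkpoint
        still finishes quickly while a large one avoids one unthrottled
        burst."""
        total = len(dirty_buffers)
        if total == 0:
            return
        # Effective rate is the larger of the smoothing rate and the floor.
        rate = max(total / target_duration_s, min_rate_per_s)
        delay = 1.0 / rate  # seconds between individual buffer writes
        for buf in dirty_buffers:
            write_buffer(buf)
            time.sleep(delay)  # throttle instead of writing all at once

    # Example: 1000 dirty buffers paced over a 300-second target, with a
    # floor of 10 writes/second so an almost-empty checkpoint completes in
    # well under the full target.
    # paced_checkpoint(list(range(1000)), 300, 10)

With the rate floor in place, a checkpoint with few dirty buffers finishes
quickly, while a large one is spread over the target interval rather than
arriving as a single burst, which is the smoothing behaviour discussed above.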