Gregory Stark <[EMAIL PROTECTED]> writes:
> I had a thought on this. Instead of sleeping for a constant amount of time and
> then estimating the number of pages needed for that constant amount of time,
> perhaps what bgwriter should be doing is sleeping for a variable amount of
> time and estimating the length of time it needs to sleep to arrive at a
> constant number of pages being needed.
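
A rough sketch of that pacing scheme (made-up names like
TARGET_PAGES_PER_CYCLE and next_sleep_ms, not anything in the actual
bgwriter code) might look like:

    #include <stdio.h>

    /* Hypothetical target: how many pages we want to have been needed
     * between one wakeup and the next (not an actual bgwriter setting). */
    #define TARGET_PAGES_PER_CYCLE 100

    /*
     * Given an estimate of how fast backends are consuming/dirtying pages
     * (pages per second), pick a sleep length so that roughly
     * TARGET_PAGES_PER_CYCLE pages will be needed by the next wakeup.
     */
    static int
    next_sleep_ms(double pages_per_sec)
    {
        if (pages_per_sec <= 0)
            return 1000;        /* system looks idle: back off */
        return (int) (TARGET_PAGES_PER_CYCLE * 1000.0 / pages_per_sec);
    }

    int
    main(void)
    {
        double rates[] = {50.0, 500.0, 5000.0};   /* made-up demand rates */

        for (int i = 0; i < 3; i++)
            printf("demand %.0f pages/s -> sleep %d ms\n",
                   rates[i], next_sleep_ms(rates[i]));
        return 0;
    }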

That's an interesting idea, but a possible problem with it is that we
can't vary a sleep time at as fine a granularity as we can vary the
number of buffers processed per iteration.  Assuming that the system's
tick rate is the typical 100Hz, we have only 10ms resolution on sleep
times.
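
To put numbers on that (a made-up illustration, assuming sleep requests
get rounded to whole 10ms ticks and we aim for 100 pages per wakeup):

    #include <stdio.h>

    #define TARGET_PAGES_PER_CYCLE 100
    #define TICK_MS 10          /* 100Hz tick: sleeps round to 10ms steps */

    int
    main(void)
    {
        /* made-up demand rates, in pages per second */
        double rates[] = {3000.0, 6000.0, 8000.0, 20000.0};

        for (int i = 0; i < 4; i++)
        {
            double ideal_ms = TARGET_PAGES_PER_CYCLE * 1000.0 / rates[i];
            /* round to the nearest whole tick, never below one tick */
            int    sleep_ms = (int) (ideal_ms / TICK_MS + 0.5) * TICK_MS;

            if (sleep_ms < TICK_MS)
                sleep_ms = TICK_MS;
            printf("want %.0f pages/s: ideal sleep %.1f ms, actual %d ms "
                   "-> %.0f pages/s\n",
                   rates[i], ideal_ms, sleep_ms,
                   TARGET_PAGES_PER_CYCLE * 1000.0 / sleep_ms);
        }
        return 0;
    }

At that target, wanted rates of 6000, 8000, and 20000 pages/s all come
out as either 5000 or 10000, whereas with a fixed sleep the per-wakeup
page count can be trimmed one buffer at a time.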

> The reason I think this may be better is that "what percentage of the shared
> buffers the bgwriter allows to get old between wakeups" seems more likely to
> be a universal constant that people won't have to adjust than "fixed time
> interval between bgwriter cleanup operations".

Why?  What you're really trying to determine, I think, is the I/O load
imposed by the bgwriter, and pages-per-second seems a pretty natural
way to think about that; percentage of shared buffers not so much.
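
As a made-up illustration, here is what a rule like "clean 1% of
shared_buffers every 200ms" translates to in I/O terms at two different
shared_buffers settings (8kB pages assumed):

    #include <stdio.h>

    #define BLCKSZ 8192         /* default PostgreSQL page size in bytes */

    int
    main(void)
    {
        /* hypothetical rule: clean 1% of shared_buffers every 200ms */
        double pct_per_cycle = 0.01;
        double cycle_sec = 0.2;
        /* 128MB vs 8GB worth of shared_buffers, expressed in pages */
        long   nbuffers[] = {16384L, 1048576L};

        for (int i = 0; i < 2; i++)
        {
            double pages_per_sec = nbuffers[i] * pct_per_cycle / cycle_sec;

            printf("shared_buffers = %ld pages: %.0f pages/s (up to %.0f MB/s)\n",
                   nbuffers[i], pages_per_sec,
                   pages_per_sec * BLCKSZ / (1024.0 * 1024.0));
        }
        return 0;
    }

The same percentage works out to roughly 6MB/s on a 128MB cache but
about 400MB/s on an 8GB one, so the same percentage setting can mean
wildly different I/O loads depending on shared_buffers.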

                        regards, tom lane
