Qingqing Zhou <[EMAIL PROTECTED]> writes:
> Yeah, only theoretically is there some marginal performance improvement.
> Maybe you're suggesting we keep the LWLock but use the circular-array part?

They're separable issues anyway.

> Yeah, not related to the lock. But I changed the algorithm to a circular
> array as well, and noticed there is only one reader, so we can remove the
> requests after we are successfully done. In other words, if there is a
> problem, we never remove the unhandled requests.

I doubt that's more robust than before though.  If we did run out of
memory in the hashtable, this coding would essentially block further
operation of the request queue, because it would keep trying to
re-insert all the entries in the queue, and keep failing.

If you want the bgwriter to keep working in the face of an out-of-memory
condition in the hashtable, I think you'd have to change the coding so
that it takes requests one at a time from the queue.  Then, as
individual requests get removed from the hashtable, individual new
requests could get put in, even if the total queue is too large to fit
all at once.  However this would result in much greater lock-acquisition
overhead, so it doesn't really seem like a win.  We haven't seen any
evidence that the current coding has a problem under field conditions.

Another issue to keep in mind is that correct operation requires that
the bgwriter not declare a checkpoint complete until it's completed
every fsync request that was queued before the checkpoint started.
So if the bgwriter is to try to keep going after failing to absorb
all the pending requests, there would have to be some additional logic
to keep track of whether it's OK to complete a checkpoint or not.

                        regards, tom lane
