On Mon, 2007-03-05 at 14:41 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > Itagaki-san and I were discussing in January the idea of cache-looping,
> > whereby a process begins to reuse its own buffers in a ring of ~32
> > buffers. When we cycle back round, if usage_count==1 then we assume that
> > we can reuse that buffer. This avoids cache swamping for read and write
> > workloads, plus avoids too-frequent WAL writing for VACUUM.
> > This would maintain the beneficial behaviour for OLTP,
> Justify that claim.  It sounds to me like this would act very nearly the
> same as having shared_buffers == 32 ...

Sure. We wouldn't set the hint for IndexScans or Inserts, only for
SeqScans, VACUUM and COPY.

So OLTP-only workloads would be entirely unaffected. In the presence of
a mixed workload the scan tasks would have only a limited effect on the
cache, maintaining performance for the response-time-critical tasks. So
it's an OLTP benefit, both from cache protection and from WAL-flush
reduction during VACUUM.

As we've seen, the scan tasks look like they'll go faster with this.

The assumption that we can reuse the buffer if usage_count <= 1 seems
valid. If another backend had requested the block, its usage_count
would be > 1 -- unless that backend pinned it, unpinned it, and a full
cycle of the buffer cache clock sweep then spun round, all within the
time taken to process 32 blocks sequentially. We do have to reuse one
of the buffers, so cyclic reuse seems like a better bet most of the
time than the more arbitrary block reuse we see in a larger cache.

Best way is to prove it, though. Seems like not too much work to have a
private ring data structure when the hint is enabled. The extra
bookkeeping is easily going to be outweighed by the reduction in mem->L2
cache fetches. I'll do it tomorrow, if no other volunteers.
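To make the proposal concrete, here's a minimal sketch of what the
private ring bookkeeping could look like. All names and the struct
layout are hypothetical (this is not the patch, just the idea): each
backend remembers the last 32 buffers it used, and when it cycles back
round it reuses a buffer only if usage_count <= 1, otherwise it falls
back to the normal freelist/clock-sweep path and records the new victim
in the ring.

```c
#include <assert.h>

#define RING_SIZE 32

/* Hypothetical buffer descriptor: usage_count is bumped on each pin
 * and decayed by the clock sweep, as in the shared buffer pool. */
typedef struct
{
    int buf_id;
    int usage_count;
} BufferDesc;

/* Private, per-backend ring of the last RING_SIZE buffers used. */
typedef struct
{
    int buffers[RING_SIZE];   /* buf_ids; -1 means slot not yet filled */
    int current;              /* next slot to try */
} BufferRing;

static void
ring_init(BufferRing *ring)
{
    for (int i = 0; i < RING_SIZE; i++)
        ring->buffers[i] = -1;
    ring->current = 0;
}

/*
 * Try to reuse the buffer in the current ring slot.  Returns its
 * buf_id if usage_count <= 1 (nobody else touched it since we cycled
 * past), or -1 meaning the caller must get a victim from the normal
 * clock sweep and then record it with ring_remember().
 */
static int
ring_get_reusable(BufferRing *ring, const BufferDesc *pool)
{
    int buf_id = ring->buffers[ring->current];

    if (buf_id >= 0 && pool[buf_id].usage_count <= 1)
    {
        ring->current = (ring->current + 1) % RING_SIZE;
        return buf_id;
    }
    return -1;
}

/* Record a buffer obtained elsewhere in the current ring slot. */
static void
ring_remember(BufferRing *ring, int buf_id)
{
    ring->buffers[ring->current] = buf_id;
    ring->current = (ring->current + 1) % RING_SIZE;
}
```

The point of the -1 fallback is that the ring never forces eviction of
a hot block: a buffer some other backend has pinned meanwhile simply
drops out of the ring and is replaced by whatever the clock sweep hands
us next.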

  Simon Riggs             
  EnterpriseDB   http://www.enterprisedb.com

