Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> I just ran a quick test with 4 concurrent scans on a dual-core system, and it looks like we do "leak" buffers from the rings because they're pinned at the time they would be recycled.
>
> Yeah, I noticed the same in some tests here.  I think there's not a lot
> we can do about that; we don't have enough visibility into why someone
> else has the buffer pinned.

We could stash the pinned buffers on some other list and try them again later, but that gets a lot more complex.
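
To make the "leak" concrete, here's a toy model I put together for this mail. It is not the real bufmgr code; the data structures and names (get_buffer_from_ring, alloc_victim_buffer, pin_count) are all made up for the example. It just shows a ring giving up on a slot when another backend still holds a pin on the buffer in it:

/*
 * Toy model of a scan buffer ring -- an illustration only, not the real
 * bufmgr code; everything here is a simplified stand-in.
 */
#include <stdio.h>

#define RING_SIZE   8
#define POOL_SIZE   1024

static int  ring[RING_SIZE];        /* buffer ids currently in the ring, -1 = empty */
static int  ring_cur = 0;           /* next slot to recycle */

static int  pin_count[POOL_SIZE];   /* stand-in for per-buffer pin counts */
static int  next_victim = 0;        /* stand-in for picking a victim buffer */

/* Grab some buffer from the main pool (imagine a clock sweep here). */
static int
alloc_victim_buffer(void)
{
    return next_victim++ % POOL_SIZE;
}

/*
 * Get a buffer to read the next page into.  Prefer recycling the oldest
 * slot in our ring; but if another backend still has that buffer pinned,
 * give up on it and put a freshly allocated buffer in the slot instead.
 */
static int
get_buffer_from_ring(void)
{
    int     slot = ring_cur;
    int     buf = ring[slot];

    ring_cur = (ring_cur + 1) % RING_SIZE;

    if (buf >= 0 && pin_count[buf] == 0)
        return buf;                 /* reuse our own old buffer, as intended */

    /*
     * The old buffer is pinned elsewhere (or the slot was empty): it stays
     * behind in the main pool -- this is the "leak" -- and a new buffer
     * takes its place in the ring.  The stash-and-retry idea above would
     * instead remember the pinned buffer and come back to it later.
     */
    buf = alloc_victim_buffer();
    ring[slot] = buf;
    return buf;
}

int
main(void)
{
    for (int i = 0; i < RING_SIZE; i++)
        ring[i] = -1;

    /* First pass fills the ring with buffers 0..7. */
    for (int i = 0; i < RING_SIZE; i++)
        get_buffer_from_ring();

    /* Pretend another sync-scanning backend still has buffer 3 pinned. */
    pin_count[3] = 1;

    /* Second pass reuses 0,1,2,4..7 but replaces 3 with a new buffer (8). */
    for (int i = 0; i < RING_SIZE; i++)
        printf("using buffer %d\n", get_buffer_from_ring());

    return 0;
}

The point is that the pinned buffer simply falls out of the ring and stays behind in the main buffer pool; the stash-and-retry variant would hang on to it instead, at the cost of extra bookkeeping.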

> Using a larger ring would help, by making it less probable that any
> other sync-scanning backend is so far behind as to still have the oldest
> element of our ring pinned.  But if we do that we have the L2-cache-size
> effect to worry about.  Is there any actual data backing up that it's
> useful to keep the ring fitting in L2, or is that just guesswork?  In
> the sync-scan case the idea seems pretty bogus anyway, because the
> actual working set will be N backends' rings not just one.

Yes, I tested different ring sizes here: http://archives.postgresql.org/pgsql-hackers/2007-05/msg00469.php

The tests above showed the effect when reading a table from OS cache. I haven't seen direct evidence supporting Luke's claim that the ring makes scans of tables bigger than RAM go faster on bigger I/O hardware, because I don't have such hardware at hand. We did, however, repeat the tests on different hardware and monitor CPU usage with vmstat at the same time. CPU usage was significantly lower with the patch, so I believe that with better I/O hardware the test would become CPU-bound, and the patch would therefore make it go faster.

BTW, we've been talking about the "L2 cache effect", but we don't actually know for sure that the effect has anything to do with the L2 cache. Whatever it is, though, it's real.
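
Either way, the "N backends' rings" point is easy to put a rough number on. Taking a 256 kB ring purely as an example (32 buffers at the default 8 kB block size):

    32 buffers * 8 kB = 256 kB per backend
    4 sync-scanning backends * 256 kB = 1 MB combined working set

That's already on the order of a typical L2 cache, so keeping a single ring L2-sized doesn't buy much once several synchronized scans gang up on the same table.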

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
