On 04 Apr 2005 23:45:47 -0400, Greg Stark <[EMAIL PROTECTED]> wrote:
> Tom Lane <[EMAIL PROTECTED]> writes:
> > What I think might be happening is that the "working set" of pages
> > touched during index inserts is gradually growing, and at some point it
> > exceeds shared_buffers, and at that point performance goes in the toilet
> > because we are suddenly doing lots of reads to pull in index pages that
> > fell out of the shared buffer area.
> All this is happening within a single transaction too, right? So there hasn't
> been an fsync the entire time. It's entirely up to the kernel when to decide
> to start writing data.

This was my concern, and in fact moving from ext3 -> XFS has helped
substantially in this regard. This is all happening inside COPY
statements, so there's effectively a commit every 500 rows. I could
enlarge the batch, but tests on smaller datasets didn't show a big
performance gain from doing so.
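For reference, the batching works roughly like this (a minimal Python
sketch of the chunking logic only; the row source and the actual COPY
call are placeholders, not the real loader code):

```python
# Sketch of the loader's batching: rows are grouped into batches of 500,
# each batch becoming one COPY statement, so there is effectively a
# commit every 500 rows. The COPY itself is elided here.
BATCH_SIZE = 500

def batch_rows(rows, batch_size=BATCH_SIZE):
    """Yield successive batches of rows, one batch per COPY statement."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# e.g. 1250 rows split into batches of 500, 500, 250
sizes = [len(b) for b in batch_rows(range(1250))]
```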

Also, you are correct: I am running without fsync, although I could
change that if you thought it would "smooth" the performance. The
issue is less absolute performance than predictability. Going from
0.05 seconds for a 500-row COPY to 26 seconds really messes with the
system.

One thing that was mentioned early on, and I hope people remember, is
that I am running autovacuum in the background. Its timing seems to
have little to do with the system's problems, though; at least its
debug output doesn't coincide with the performance loss.

> It's possible it's just buffering all the writes in memory until the amount of
> free buffers drops below some threshold then it suddenly starts writing out
> buffers.

That was happening with ext3, or at least that's my best understanding.

> > It would be interesting to watch the output of iostat or vmstat during
> > this test run.  If I'm correct about this, the I/O load should be
> > basically all writes during the initial part of the test, and then
> > suddenly develop a significant and increasing fraction of reads at the
> > point where the slowdown occurs.
> I think he's right, if you see a reasonable write volume before the
> performance drop followed by a sudden increase in read volume (and decrease of
> write volume proportionate to the drop in performance) then it's just shared
> buffers becoming a bottleneck.

I've set shared_buffers to 16000 (from the original 1000) and am
running now, without the pauses. So far it seems to be running faster;
how much it helps, and how performance degrades over time, will be
interesting to see.

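For the record, the only configuration change was this (values are what
I'm running; 16000 buffers at 8 kB each is roughly 125 MB):

```
# postgresql.conf
shared_buffers = 16000    # was 1000; 16000 * 8kB pages ~= 125 MB
```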
> If there's hardly any write volume before, then a sudden increase in write
> volume despite a drop in performance then I might be right. In which case you
> might want to look into tools to tune your kernel vm system.
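If it comes to that, the knobs in question would presumably be the
kernel's dirty-page writeback settings, something like the following in
/etc/sysctl.conf (values purely illustrative, not a recommendation):

```
# /etc/sysctl.conf -- lower ratios make the kernel start flushing
# dirty pages earlier, trading throughput for smoother writeback.
vm.dirty_background_ratio = 5    # begin background writeback at 5% dirty
vm.dirty_ratio = 20              # force synchronous writeback at 20% dirty
```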

Here's a quick snapshot of iostat:

Linux 2.6.9-1.667 (bigbird.amber.org)   04/04/2005

avg-cpu:  %user   %nice    %sys %iowait   %idle
           1.05    0.01    0.63   13.15   85.17

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
hda               0.00         0.00         0.00       3616          0
sda              23.15        68.09       748.89  246884021 2715312654
sdb              19.08        37.65       773.03  136515457 2802814036

The first 3 columns have been identical (or nearly so) the whole time,
which tells me the system is pegged on I/O. This is not surprising.
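For scale, iostat's Blk columns count 512-byte sectors, so the sustained
write rates above work out to well under 1 MB/s per disk (a quick sanity
check; the figures are the Blk_wrtn/s values from the snapshot):

```python
# Convert iostat's 512-byte-sector rates to kB/s to put the
# snapshot's write figures in more familiar units.
SECTOR_BYTES = 512

def blocks_to_kb_per_s(blocks_per_s):
    """Convert an iostat Blk/s rate (512-byte sectors) to kB/s."""
    return blocks_per_s * SECTOR_BYTES / 1024.0

sda_write_kb = blocks_to_kb_per_s(748.89)  # ~374 kB/s
sdb_write_kb = blocks_to_kb_per_s(773.03)  # ~387 kB/s
```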

| Christopher Petrilli
