Tom Lane <[EMAIL PROTECTED]> writes:

> What I think might be happening is that the "working set" of pages
> touched during index inserts is gradually growing, and at some point it
> exceeds shared_buffers, and at that point performance goes in the toilet
> because we are suddenly doing lots of reads to pull in index pages that
> fell out of the shared buffer area.

All this is happening within a single transaction too, right? So there hasn't
been an fsync the entire time. It's entirely up to the kernel when to decide
to start writing data. 

It's possible it's just buffering all the writes in memory until the number of
free buffers drops below some threshold, then suddenly starts writing them out.

> It would be interesting to watch the output of iostat or vmstat during
> this test run.  If I'm correct about this, the I/O load should be
> basically all writes during the initial part of the test, and then
> suddenly develop a significant and increasing fraction of reads at the
> point where the slowdown occurs.

I think he's right: if you see a reasonable write volume before the
performance drop, followed by a sudden increase in read volume (and a decrease
in write volume proportionate to the drop in performance), then it's just
shared buffers becoming a bottleneck.

If there's hardly any write volume before, followed by a sudden increase in
write volume despite the drop in performance, then I might be right. In that
case you might want to look into tools for tuning your kernel's VM system.
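To make the two cases easy to tell apart while the test runs, something like the
following awk filter over vmstat output can flag the moment reads start to
dominate. This is just a sketch: the sample lines below are made-up numbers for
illustration (on Linux procps vmstat, fields 9 and 10 are "bi"/"bo", blocks read
in and written out per interval); in practice you'd pipe the live command,
e.g. `vmstat 1 | awk '...'`.

```shell
# Made-up vmstat-style sample lines; field 9 = bi (blocks in, i.e. reads),
# field 10 = bo (blocks out, i.e. writes). The last line mimics the point
# where the working set no longer fits and reads take over.
printf '%s\n' \
  '1 0 0 80000 1200 52000 0 0    4 2400 300 500 5 3 90 2' \
  '1 0 0 79000 1200 52100 0 0    8 2500 310 510 5 3 90 2' \
  '2 1 0  5000 1200 90000 0 0 3200  600 800 900 4 6 40 50' |
awk '{
  bi = $9; bo = $10
  # Flag any interval where read traffic exceeds write traffic
  if (bi > bo) print "read-dominated interval: bi=" bi " bo=" bo
}'
# Prints: read-dominated interval: bi=3200 bo=600
```

If the flagged intervals coincide with the slowdown, that points at the
shared-buffers explanation rather than delayed kernel writeback.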

