Christopher Petrilli <[EMAIL PROTECTED]> writes:
> On 7/19/05, Tom Lane <[EMAIL PROTECTED]> wrote:
>> I'm suddenly wondering if the performance dropoff corresponds to the
>> point where the indexes have grown large enough to not fit in shared
>> buffers anymore.  If I understand correctly, the 5000-iterations mark
>> corresponds to 2.5 million total rows in the table; with 5 indexes
>> you'd have 12.5 million index entries or probably a couple hundred MB
>> total.  If the insertion pattern is sufficiently random that the
>> entire index ranges are "hot" then you might not have enough RAM.
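As a back-of-envelope check of that "couple hundred MB" figure, assuming roughly 20 bytes per b-tree index entry (the real number depends on key width, tuple overhead, and page fill factor, so this is only a sketch):

```python
# Rough size estimate for the indexes described above.
# Assumption (not from the thread): ~20 bytes per index entry.
rows = 5000 * 500          # 5000 iterations at 500 rows each -> 2.5M rows
indexes = 5
bytes_per_entry = 20       # hypothetical average entry size
total_entries = rows * indexes
total_mb = total_entries * bytes_per_entry / (1024 * 1024)
print(f"{total_entries:,} entries, ~{total_mb:.0f} MB")
# -> 12,500,000 entries, ~238 MB
```

Which lands squarely in the "couple hundred MB" range, i.e. far beyond a 1000-page buffer pool.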
> This is entirely possible; currently:
> shared_buffers = 1000

Ah-hah --- with that setting, you could be seeing shared-buffer
thrashing even if only a fraction of the total index range needs to be
touched.  I'd try some runs with shared_buffers at 10000, 50000, and
100000.  You might also try strace'ing the backend to see whether its
behavior changes noticeably when the performance tanks.

FWIW, I have seen similar behavior while playing with MySQL's sql-bench
test: the default 1000 shared_buffers is not large enough to hold the
"hot" part of the indexes in some of their insertion tests, so
performance tanks --- you can see this happening in strace because the
kernel request mix goes from almost all writes to a significant
fraction of reads.  On a pure data-insertion benchmark you'd like to
see nothing but writes.

			regards, tom lane
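For reference, the suggested settings correspond to postgresql.conf entries like the following; since each shared buffer is an 8 kB page, the values map to roughly these totals (a sketch --- exact memory use also depends on kernel shared-memory limits, which may need raising):

```
shared_buffers = 10000    # ~80 MB
shared_buffers = 50000    # ~400 MB
shared_buffers = 100000   # ~800 MB
```

To watch the syscall mix rather than eyeball raw strace output, `strace -c -p <backend_pid>` prints a per-syscall summary table on detach, which makes the write-to-read shift easy to spot.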