I think PFC's question was not directed towards modeling your
application, but about helping us understand what is going wrong
(so we can fix it).

Exactly. I was wondering whether this delay would let things get flushed, which would tell us something about the problem: if a few minutes of rest restores normal operation, it means some buffer somewhere is filling faster than it can be flushed.
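
If you want to test that hypothesis directly, a sketch like the following might help. It is only an illustration: the table name, column, connection string, and batch sizes are all made up, and it assumes an insert-based workload via psycopg2:

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=testdb")  # hypothetical connection string
    cur = conn.cursor()

    def run_batch(n):
        """Insert n rows into the (hypothetical) test table, return elapsed seconds."""
        start = time.time()
        for i in range(n):
            cur.execute("INSERT INTO testtable (val) VALUES (%s)", (i,))
        conn.commit()
        return time.time() - start

    before = run_batch(3000)   # run until performance degrades
    time.sleep(300)            # rest for five minutes
    after = run_batch(3000)    # same workload again, same table

    print("before pause: %.1fs, after pause: %.1fs" % (before, after))
    # If 'after' is back to normal, a buffer/flush bottleneck is likely;
    # if it is still slow, the slowdown follows the table, not some cache.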

So go ahead with a few minutes even if it's unrealistic; realism is not the point. You have to tweak the test in various ways to understand the causes.

And instead of a pause, why not just set the duration of your test to 6000 iterations and run it twice without dropping the test table?
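
That variant is easy to express with the same hypothetical harness sketched above: run the batch twice against the same table and compare. If the second run is slower, table (or index) size is implicated:

    first = run_batch(6000)    # table starts empty
    second = run_batch(6000)   # table already holds 6000 rows
    print("first run: %.1fs, second run: %.1fs" % (first, second))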

These are wild guesses, but the first thing you want to know is whether the problem comes from the table being big, or from something else. So run the complete test, stopping a bit after it starts to make a mess; then, instead of dropping the table and restarting the test anew, leave the table as it is, do something, and run a new test on this table which already has data (see the sketch after the list below).

'something' could be one of these:
        disconnect, reconnect (you'll have to do that anyway if you run the test twice)
        just wait
        restart postgres
        unmount and remount the volume with the logs/data on it
        reboot the machine
        analyze
        vacuum
        vacuum analyze
        cluster
        vacuum full
        reindex
        defrag your files on disk (stopping postgres and copying the database from your disk to another one and back will do)
        or even dump'n'reload the whole database
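
The SQL-level items can be scripted between runs. Here is a minimal sketch, again assuming the hypothetical testtable and run_batch harness from above, plus a primary-key index name I invented for the CLUSTER step; VACUUM and friends cannot run inside a transaction block, hence autocommit. The OS-level steps (remount, restart postgres, reboot, defrag) have to be done by hand:

    conn.autocommit = True     # VACUUM/CLUSTER/REINDEX refuse to run in a transaction

    interventions = [
        "ANALYZE testtable",
        "VACUUM testtable",
        "VACUUM ANALYZE testtable",
        "CLUSTER testtable USING testtable_pkey",  # assumed index name
        "VACUUM FULL testtable",
        "REINDEX TABLE testtable",
    ]

    # In a real experiment you'd try ONE intervention per fresh test run;
    # running them in sequence here is just to keep the sketch short.
    for sql in interventions:
        cur.execute(sql)
        elapsed = run_batch(1000)          # short probe run after each step
        print("%-40s probe took %.1fs" % (sql, elapsed))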

I think useful information can be extracted that way. If one of these fixes your problem, it'll give hints.
