On Mon, Sep 29, 2008 at 2:16 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Peter Kovacs" <[EMAIL PROTECTED]> writes:
>> We have a number of automated performance tests (to test our own code)
>> involving PostgreSQL. Test cases are supposed to drop and recreate
>> tables each time they run.
>
>> The problem is that some of the tests show a linear performance
>> degradation over time. (We have data going back three months.) We have
>> established that some element(s) of our test environment must be the
>> culprit for the degradation. As rebooting the test machine didn't
>> revert speeds to the baselines recorded three months ago, we turned
>> our attention to the database as the only element of the environment
>> that persists across reboots. Recreating the entire PGSQL cluster did
>> cause speeds to revert to baselines.
>
> What it sounds like to me is that you're not vacuuming the system
> catalogs, which are getting bloated with dead rows about all those
> dropped tables.
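[For anyone hitting this thread later: one way to check whether catalog bloat is the issue is to look at dead-row counts in the standard statistics view pg_stat_sys_tables, which covers system catalogs. A sketch; the LIMIT and ordering are just for convenience:]

```sql
-- Show the system catalogs with the most dead rows.
-- Heavy churn from repeated DROP/CREATE TABLE typically shows up in
-- pg_class, pg_attribute, and pg_type.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_sys_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```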
Wow, great! It is not immediately clear from the documentation, but the VACUUM command deals with the system catalogs as well, correct?

Thanks a lot!
Peter

> regards, tom lane

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
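[Editor's note for later readers: yes — a plain database-wide VACUUM, issued with sufficient privileges, processes the system catalogs too, and individual catalogs can also be vacuumed by name. A sketch, assuming superuser (or catalog owner) privileges:]

```sql
-- Database-wide VACUUM: includes the system catalogs.
VACUUM;

-- Or target the catalogs most affected by repeated DROP/CREATE TABLE,
-- refreshing planner statistics at the same time:
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_type;
```

[Note that plain VACUUM reclaims space for reuse but rarely shrinks the files; once a catalog is already badly bloated, reindexing or recreating the cluster may still be needed, as the original poster observed.]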