Hi,
we use 7.4 and suffer from table and index bloat. I tracked the issue down to vacuum using GetOldestXmin(), which returns the oldest xmin of any transaction in the whole cluster, not the oldest xmin within the current database. Yes, I've read the thread about this issue called "Problems Vacuumi'ng" (April 2004):
http://groups.google.com/groups?hl=cs&lr=&threadm=20040401215058.3FBB14307%40jimsun.LinxNet.com&rnum=3&prev=/groups%3Fhl%3Dcs%26lr%3D%26q%3DProblems%2BVacuum%26btnG%3DHledat
But our situation is not caused by misbehaved clients. Let me explain. We're running several big OLTP databases on the server. A typical transaction is an insert/update/delete and takes less than a second. Up to this point everything is fine. But there are also tasks like backup or initial data loading that take much longer than a second, sometimes an hour or more. During this time _all_ our databases on the server are not properly vacuumed, because there is a long-running transaction in some other database. This is a really big problem for us. We're trying to avoid long-running transactions, but backup and initial data loading are essential tasks. The only solution I can see now is to run several database clusters on the server in order to have completely separated databases...
Is there something I'm missing? What can I do to get correct vacuuming when there is a long-running transaction in some other database?
My suggestion is to add some more logic to vacuum so it computes the correct oldest xmin, local to the current database. I don't want to add extra overhead to all transactions, only to vacuum. I have no knowledge of Pg internals, but something like adding the transaction number to pg_stat_activity and then checking this view (if it is enabled)... Would this be possible?
Bottom line: GetOldestXmin() is also called from Slony and likewise leads to unexpected behaviour (at least for me): replication waits for transactions in completely unrelated databases.
Thanks, Kuba