> Hmm. You could use something along these lines instead:
> 0. LOCK TABLE queue_table
> 1. SELECT * INTO queue_table_new FROM queue_table
> 2. DROP TABLE queue_table
> 3. ALTER TABLE queue_table_new RENAME TO queue_table
> After all, it's not that you care about the clustering of the table, you 
> just want to remove old tuples.

... and then restart the app so all my pooled connections drop their
cached plans ;-)

Seriously, that won't work. If a session tries to insert a new row after
I lock the table for cleanup, I still want the insert to succeed once the
cleanup is finished... but if I drop the table it is trying to insert
into, the insert will fail.
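For reference, here is what the suggested swap would look like as one
transaction (a sketch only; the table names come from the quoted recipe
and the WHERE clause is a placeholder for whatever "old tuples" means in
the application):

```sql
BEGIN;
LOCK TABLE queue_table IN ACCESS EXCLUSIVE MODE;

-- keep only the rows we still want
CREATE TABLE queue_table_new AS
    SELECT * FROM queue_table WHERE NOT processed;  -- placeholder predicate

DROP TABLE queue_table;
ALTER TABLE queue_table_new RENAME TO queue_table;
COMMIT;
```

The problem described above happens at COMMIT: a concurrent INSERT that
blocked on the lock already resolved "queue_table" to the old relation,
so when it wakes up it is pointing at a dropped table and errors out
instead of inserting into the renamed replacement.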

> As a long term solution, it would be nice if we had more fine-grained 
> bookkeeping of snapshots that are in use in the system. In your case, 
> there's a lot of tuples that are not visible to pg_dump because xmin is 
> too new, and also not visible to any other transaction because xmax is 
> too old. If we had a way to recognize situations like that, and vacuum 
> those tuples, much of the problem with long-running transactions would 
> go away.

In the general case that won't work either in a strict MVCC sense... if
you have an old transaction, you should never clean up a dead tuple
which could be still visible to it.
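To make the "invisible to everyone" case above concrete, here is a toy
model of that bookkeeping (all names and the simplified xid scheme are
illustrative assumptions, not PostgreSQL internals: a snapshot is just
the next xid at the time it was taken, and every transaction is assumed
to have committed):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HeapTuple:
    xmin: int            # xid that inserted the tuple
    xmax: Optional[int]  # xid that deleted it, or None if still live

def visible(t: HeapTuple, snap: int) -> bool:
    """A snapshot sees a tuple if it was inserted before the snapshot
    was taken and had not yet been deleted at that point."""
    inserted = t.xmin < snap
    deleted = t.xmax is not None and t.xmax < snap
    return inserted and not deleted

def vacuumable(t: HeapTuple, snapshots: list[int]) -> bool:
    """Reclaimable only if no registered snapshot can see it."""
    return not any(visible(t, s) for s in snapshots)

# pg_dump's old snapshot plus a recent one:
snapshots = [100, 500]

# Inserted and deleted entirely between the two snapshots:
# xmin too new for the old snapshot, xmax too old for the new one.
mid_life = HeapTuple(xmin=200, xmax=300)
print(vacuumable(mid_life, snapshots))  # True: invisible to everyone

# Deleted after the old snapshot started, so it must be kept:
old_row = HeapTuple(xmin=50, xmax=200)
print(vacuumable(old_row, snapshots))   # False: snapshot 100 still sees it
```

The second case is exactly the strict-MVCC constraint above: an
"oldest xmin" rule alone would also keep `mid_life` around, which is the
bloat the finer-grained bookkeeping would avoid.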

> > Wouldn't be possible to do it like Simon (IIRC) suggested, and add a
> > parameter to enable/disable the current behavior, and use the MVCC
> > behavior as default ?
> I guess we could, but I don't see why we should encourage using CLUSTER for 
> that. A more aggressive, MVCC-breaking version of VACUUM would make more 
> sense to me, but I don't like the idea of adding "break-MVCC" flags to 
> any commands.

Well, if there were any other way to avoid the table bloat, I would use it.

