On Tuesday 15 March 2005 04:37, Richard Huxton wrote:
> Tambet Matiisen wrote:
> > Now, if typical inserts into your most active table occur in batches of
> > 3 rows, in one transaction, then row count for this table is updated 3
> > times during transaction. 3 updates generate 3 tuples, while 2 of them
> > are dead from the very start. You effectively commit 2 useless tuples.
> > After millions of inserts you end up with rowcounts table having 2/3 of
> > dead tuples and queries start to slow down.
> >
> > Current solution is to vacuum often. My proposal was to create new tuple
> > only with first update. The next updates in the same transaction would
> > update the existing tuple, not create a new.
> How do you roll back to a savepoint with this model?

You can't, but you could restrict this auto-reuse to within any given nested 
transaction. Then, as long as you aren't using savepoints, you get to reclaim 
all the space.

On a similar note, I was just wondering whether it would be possible to mark 
these dead tuples as ready to be reused at transaction commit time, since we 
know that they are dead to any and all other transactions currently in 
progress. That would save you from having to vacuum just to get the tuples 
marked ready for reuse. In the above scenario this could be a win; whether it 
would be a win overall is hard to say.
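For what it's worth, the scenario Tambet describes can be sketched with a toy 
model of MVCC row versioning (illustrative Python, not PostgreSQL's actual 
storage code; the Table class and its methods are made up for this sketch):

```python
# Toy model: every UPDATE writes a new tuple version and marks the
# previous live version dead, as MVCC does. A batch of 3 inserts that
# each bump a rowcount row produces 3 new versions of that row, of
# which only the last is live at commit.

class Table:
    def __init__(self):
        self.tuples = []  # each entry is one on-disk tuple version

    def insert(self, value):
        self.tuples.append({"value": value, "dead": False})

    def update(self, value):
        # Mark the current live version dead, then append a new version.
        for t in self.tuples:
            if not t["dead"]:
                t["dead"] = True
        self.insert(value)

rowcounts = Table()
rowcounts.insert(0)      # row count before the transaction
for i in range(3):       # one transaction: 3 inserts, 3 rowcount updates
    rowcounts.update(i + 1)

dead = sum(t["dead"] for t in rowcounts.tuples)
live = len(rowcounts.tuples) - dead
print(dead, live)  # 3 dead versions, 1 live
```

Of the 3 tuples created inside the transaction, 2 are dead from the very 
start, which is the bloat the proposal (or commit-time marking) would avoid.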

Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
