Tom Lane wrote:
> Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
> 
>>>>3. vacuuming this table - it turned out that VACUUM FULL is completely
>>>>unusable on a table of this size (which I actually expected beforehand),
>>>>not only due to the locking involved but also due to its gigantic memory
>>>>requirements and unbelievable slowness.
> 
> 
>>sure, that was mostly meant as an experiment; if I had to do this on a
>>production database I would most likely use CLUSTER to get the desired
>>effect (which in my case was purely getting back the disk space wasted
>>by dead tuples)
> 
> 
> Yeah, the VACUUM FULL algorithm is really designed for situations where
> just a fraction of the rows have to be moved to re-compact the table.
> It might be interesting to teach it to abandon that plan and go to a
> CLUSTER-like table rewrite once the percentage of dead space is seen to
> reach some suitable level.  CLUSTER has its own disadvantages though
> (2X peak disk space usage, doesn't work on core catalogs, etc).
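For illustration, here is a minimal sketch of the two approaches being compared; the table name `bigtable` and index name `bigtable_pkey` are assumptions, and the CLUSTER syntax shown is the older `CLUSTER index ON table` form from this era (newer releases accept `CLUSTER table USING index`):

```sql
-- VACUUM FULL compacts in place by moving live tuples toward the start
-- of the table, holding an exclusive lock for the whole run; slow when
-- most of the table is dead space:
VACUUM FULL bigtable;

-- CLUSTER instead rewrites the table from scratch in index order, which
-- is usually much faster on a heavily bloated table but temporarily
-- needs roughly 2x the disk space (old copy + new copy):
CLUSTER bigtable_pkey ON bigtable;
```

Both release the reclaimed space back to the filesystem once they complete; a plain VACUUM only marks dead space as reusable within the table.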

Hmm, very interesting idea - I like it myself, but from what I have
seen, people quite often use VACUUM FULL to get their disk usage down
_because_ they are running low on space (and because it's not that well
known that CLUSTER can be much faster). Maybe we should add a note/hint
about this to the maintenance/vacuum docs at least?


Stefan

---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings
