> The database grows to ~60GB; after a 'vacuum full' it's ~31GB. After 
> about a week the database is up to 55-60GB again and I have to do a 
> 'vacuum full analyze' to regain disk space (the disk is 70GB, so I'm 
> living on the edge here ;(

Cron a plain vacuum (no ANALYZE, no FULL) more frequently -- every 10 minutes or so.
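For example, a crontab entry along these lines (the binary path and the database name "mydb" are placeholders for your setup) runs a plain VACUUM every 10 minutes:

```
# m  h  dom mon dow  command
*/10 *  *   *   *    /usr/local/pgsql/bin/vacuumdb --quiet mydb
```

A plain VACUUM doesn't lock the table the way VACUUM FULL does, so running it this often is cheap as long as the FSM is sized to remember the freed pages.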

Bump the FSM settings up:

max_fsm_relations = (select count(*) from pg_class) + 100
max_fsm_pages     = ~500 * max_fsm_relations
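Concretely, in postgresql.conf that works out to something like the following (the numbers are illustrative -- compute your own from the query above, and restart the postmaster after changing them):

```
# postgresql.conf -- free-space map sizing (illustrative values)
# max_fsm_relations: SELECT count(*) + 100 FROM pg_class;   e.g. -> 1100
# max_fsm_pages:     ~500 * max_fsm_relations                e.g. -> 550000
max_fsm_relations = 1100
max_fsm_pages = 550000
```

If max_fsm_pages is too small, plain VACUUM can't record all the dead space, and the table bloats again no matter how often you vacuum.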

> In one well-used (~3-5GB) table there are about 1.5 million tuples, and 
> vacuuming that table used to reclaim a lot of space (deleting ~1.2 million, 
> leaving ~300k). Can I gain speed (this table takes about half an hour to 
> vacuum) by doing something like

See above.

> It's a couple of months until the redesign of all database use is due, 
> but I have a new disk array arriving in the next day or so. When this 
> array is in place, can I gain speed by dumping a backup, deleting the 
> database, and restoring the backup?
> It seems that ~30GB of data can be copied faster than in ~10-14 hours ;=)

You can, but that's temporary of course.  You'll have better luck by
properly organizing your disk array (WAL on one drive by itself, etc.)
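For instance, moving the WAL onto its own spindle is just a symlink (stop the postmaster first; the mount point below is a placeholder for a drive on the new array):

```
# as the postgres user, with the postmaster stopped
mv $PGDATA/pg_xlog /mnt/waldisk/pg_xlog
ln -s /mnt/waldisk/pg_xlog $PGDATA/pg_xlog
```

Keeping the sequential WAL writes off the drives doing random data-file I/O is usually the single biggest win from a new array.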

REINDEX and VACUUM FULL are just as good.
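That is, on the big table itself, something like the following (the table name is a placeholder; both commands take exclusive locks, so run them in a maintenance window):

```
VACUUM FULL ANALYZE bigtable;
REINDEX TABLE bigtable;
```

VACUUM FULL compacts the heap and REINDEX rebuilds the bloated indexes, which together get you most of what a dump/restore would, without dropping the database.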


PGP Key: http://www.rbt.ca/rbtpub.asc
