Hi

We currently have a 16-CPU, 32 GB box running PostgreSQL 8.2.

When I run pg_dump as "/usr/bin/pg_dump -E UTF8 -F c -b", I get a file 14 GB in size.

But the database is 110 GB on disk. Why the big difference in size? Does this have anything to do with performance?

        I have a 2GB database, which dumps to a 340 MB file...
        Two reasons:

        - I have lots of big fat but very necessary indexes (not included in the dump)
        - Dump is compressed with gzip which really works well on database data.
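        To see how much of the on-disk footprint is indexes (and TOAST) rather than table data, you can query the size functions available since 8.1. A rough sketch, assuming a database named "mydb" (substitute your own):

```shell
# List the ten largest tables, splitting heap size from index/TOAST overhead.
# pg_relation_size() is the main heap; pg_total_relation_size() adds indexes and TOAST.
psql -d mydb -c "
SELECT relname,
       pg_size_pretty(pg_relation_size(oid))             AS heap_size,
       pg_size_pretty(pg_total_relation_size(oid)
                      - pg_relation_size(oid))           AS index_and_toast_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;"
```

        If the index/TOAST column dominates, a large dump-vs-disk gap is expected even without bloat.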

If you suspect your tables or indexes are bloated, restore your dump to a test box. Use fsync=off during the restore; you don't care about integrity on the test box.
        This will avoid slowing down your production database.
        Then look at the size of the restored database.
        If it is much smaller than your production database, then you have bloat.
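        The test restore might look like this. A sketch, assuming a custom-format dump file "mydb.dump" and a database name of your choosing (both hypothetical); fsync is set in the test server's postgresql.conf, not on the command line:

```shell
# On the TEST box only, set in postgresql.conf before restoring:
#   fsync = off
createdb -E UTF8 mydb_test
pg_restore -d mydb_test mydb.dump

# Compare this figure against the production database's on-disk size.
psql -d mydb_test -c "SELECT pg_size_pretty(pg_database_size('mydb_test'));"
```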
Time to CLUSTER, REINDEX, or VACUUM FULL (your choice) the tables that are bloated, and take note to vacuum them more often (and perhaps tune autovacuum). Judicious use of CLUSTER on that small but extremely often-updated table can also be a very good option.
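        For the bloated tables found above, the maintenance commands look like this (table and index names here are hypothetical; note the 8.2 CLUSTER syntax is "CLUSTER index ON table"):

```shell
# Reclaims dead space in the heap; takes an exclusive lock on the table.
psql -d mydb -c "VACUUM FULL VERBOSE bloated_table;"

# Rebuilds the table's indexes from scratch, removing index bloat.
psql -d mydb -c "REINDEX TABLE bloated_table;"

# Rewrites the table in index order, compacting heap and index together.
psql -d mydb -c "CLUSTER bloated_table_pkey ON bloated_table;"
```

        CLUSTER and VACUUM FULL both lock the table exclusively, so schedule them in a quiet window.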
        Upgrading to 8.3 and its new HOT feature is also a good idea.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
