kris pal wrote:
Christopher,
Thanks for your detailed response. Now I understand why vacuum is causing so much load. We should have vacuumed the database more often but didn't, as we are more used to managing Oracle databases. This is the first Postgres instance I am managing.
The datafile for the production database ("bbs") is about 6GB, though the actual data is very small, about 20MB. When I used the dump (from pg_dump of "bbs") to import into a test database, the test database's datafile was under 20MB.
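If you want to confirm where the bloat actually lives first, a query along these lines can help. It reads pg_class.relpages, which is only an estimate refreshed by VACUUM or ANALYZE; the LIKE pattern and the 8KB block size mentioned in the comment are my assumptions:

  -- relpages is an estimated page count, updated by VACUUM/ANALYZE;
  -- multiply by the block size (typically 8KB) for an approximate on-disk size
  SELECT relname, relpages
  FROM pg_class
  WHERE relname LIKE '%pg_attribute%'
  ORDER BY relpages DESC;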
So instead of going through all that, I am planning to take about 15 minutes of downtime and do the following:
1)"pg_dump bmgs"
2)"drop the database bmgs" ** (this should essentially free up all the space occupied by pg_attribute and its indexes - right ?? )
3)"recreate database bmgs"
and
4)import the dump from step 1) into bmgs created in step 3)
Do you think there will be issues with this approach? The system catalogs will be rebuilt from scratch, and there should be no data loss, right? That way I can reclaim the space and then run VACUUM more often. (The full command sequence I have in mind is sketched below.)
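A minimal sketch of that cycle as shell commands; the dump file name and the cron schedule are assumptions, and you should verify the dump restores cleanly into a scratch database before dropping the real one:

  # 1) dump the production database to a file
  pg_dump bbs > bbs.dump
  # 2) drop the bloated database (this frees pg_attribute and its indexes)
  dropdb bbs
  # 3) recreate it empty
  createdb bbs
  # 4) restore the dump
  psql bbs < bbs.dump

And to keep the catalogs from bloating again, a nightly cron entry along these lines works:

  # vacuum and analyze every night at 03:00
  0 3 * * * vacuumdb --analyze bbs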
Before you do it, make sure your pg_dump output is not affected by badly ordered objects; if it is, you will have to reorder the object creation (manually) in the dump. That happens sometimes.
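If your pg_dump supports the custom archive format, one way to make that reordering less painful (a sketch, not the only approach) is to dump with -Fc and edit the restore list instead of the plain SQL file:

  # dump in custom format
  pg_dump -Fc bbs > bbs.dmp
  # write out the table of contents
  pg_restore -l bbs.dmp > bbs.list
  # edit bbs.list to reorder (or comment out) entries, then restore in that order
  pg_restore -L bbs.list -d bbs bbs.dmp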
Regards,
Gaetano Mendola
