On Thu, Feb 28, 2008 at 12:17 PM, Antonio Grassi <[EMAIL PROTECTED]> wrote:

> Hi list. I've searched the archives and I've seen a lot of posts related
> to database size, vacuuming, etc., but I can't totally understand this.
>
> I have a small database of about 300Mb (when initially loaded), whose data
> is almost completely replaced overnight (via various "drop table" commands
> and loads of dumps of those tables from another PostgreSQL installation).
>
> Although I have autovacuum activated, the size of this database is now about
> 3Gb. I've also full-vacuumed the database by hand from time to time since it
> was created, but that didn't help either.
>
> Yesterday I recreated the database with another name, thus having database
> A with 3Gb and database B with 300Mb. Then I made a "drop database A", and
> ran vacuumdb -af. But the space used by PostgreSQL didn't go down. This
> makes me think that vacuum full just releases pages associated with existing
> database objects, is this correct?
>
> So my question is: is there a way to reclaim those 3Gb for the operating
> system, without deleting (rm) the data dir, doing initdb again and
> recreating the database (to finish up with one database of about 300Mb and
> 300Mb of disk usage)?
>
> Thanks in advance,
> Antonio
>
> PS: Yes, I have very little disk space, and no, recreating the database from
> scratch (in the 'rm' sense) wouldn't be easy (work place
> particularities).
>

Try running a reindex (reindexdb -e -d <db>) and then CLUSTER all of the
tables. Both are blocking operations, but together they should release the
bloat in your db back to the operating system.

CLUSTER has to be run on a table-by-table basis, and it needs an index to
cluster on: the first time a table is clustered you must name the index
explicitly (the primary key index is a common choice); after that, the
recorded index is reused. See the documentation for more information.
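A rough sketch of the commands involved (database name "mydb", table
"mytable", and index "mytable_pkey" are placeholders, substitute your own;
the CLUSTER ... USING syntax is for recent PostgreSQL versions, older
releases use CLUSTER indexname ON tablename):

```shell
# Rebuild all indexes in the database, echoing each command as it runs
# (-e). This shrinks index bloat.
reindexdb -e -d mydb

# CLUSTER rewrites the table in index order into fresh files, so the old
# bloated files are removed and the space is returned to the OS. The first
# time, an index must be named:
psql -d mydb -c 'CLUSTER mytable USING mytable_pkey;'

# Once every table has a recorded cluster index, a bare CLUSTER
# re-clusters all of them in one go:
psql -d mydb -c 'CLUSTER;'
```

Keep in mind both REINDEX and CLUSTER take exclusive locks on the table
they're working on, so run them in a maintenance window. CLUSTER also needs
enough free disk space to hold a second copy of the table while it rewrites
it.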

HTH,

Chris
