On Tue, Apr 10, 2012 at 11:21 AM, Kiruba suthan <kirubasut...@gmail.com> wrote:

>
> We are using a PostgreSQL database server to host the DB for an analytical
> tool I am working on. Basically this DB holds metrics about files. A few
> tables have row counts of more than 18 million, but the tables contain
> only basic data types like numbers, dates, and strings; no binary data is
> stored. The size of the DB is around 1 GB when a full dump is taken.
>
> We query the DB using complex views to get reports, and most of the result
> sets of these queries are quite large (row counts in the hundreds of
> thousands or in the millions).
>
> The size of the DB cluster folder varies between 400 GB and 600 GB, which
> is unreasonably large for the actual data, and it is eating up all the
> disk space on the server. When I create a fresh DB from the dump on a new
> server, the size of the DB cluster folder is around 2.3 GB, which seems
> very reasonable to me.
>
> Experts,
>
> Could you help me clean up the DB cluster folder and reclaim disk space,
> please? And could you give me some insight into how data is organized in
> the DB cluster, and what I should do to avoid this happening again?
>
> The sizes of the directories under the DB cluster folder are listed below:
>
> [user@server DB_CLUSER_DATA]$ du -ksh *
> 407G    base
> 316K    global
> 49M     pg_clog
> 4.0K    pg_hba.conf
> 4.0K    pg_ident.conf
> 120K    pg_multixact
> 12K     pg_notify
> 32K     pg_stat_tmp
> 88K     pg_subtrans
> 4.0K    pg_tblspc
> 4.0K    pg_twophase
> 4.0K    PG_VERSION
> 129M    pg_xlog
> 20K     postgresql.conf
> 4.0K    postmaster.opts
> 4.0K    postmaster.pid
>

Perform a VACUUM FULL on the entire cluster to reclaim the space. This
operation takes an exclusive lock on each table while it is rewritten, so
please do this activity during non-peak hours.
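
For example, a minimal sketch (VERBOSE is optional; run this during a
maintenance window):

    -- From psql: rewrite every table in the current database, returning
    -- the reclaimed space to the operating system. Takes an exclusive
    -- lock on each table while it is being processed.
    VACUUM FULL VERBOSE;

    -- Or from the shell, for every database in the cluster:
    $ vacuumdb --all --full --verbose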

Once you have completed the maintenance activity above, schedule a manual
VACUUM ANALYZE once every day, so that the space held by dead rows is
reused when new records are inserted into the tables.
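
As a sketch, assuming the postgres OS user owns the crontab and the
database is named metrics_db (a placeholder), the daily job could be:

    # Plain VACUUM ANALYZE every day at 02:00. Unlike VACUUM FULL it does
    # not take exclusive locks, so it can run alongside normal traffic; it
    # marks dead-row space as reusable rather than shrinking the files.
    0 2 * * * vacuumdb --dbname=metrics_db --analyze --verbose >> /tmp/vacuumdb.log 2>&1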

--Raghu
