We are using a PostgreSQL database server to host the DB for an analytical
tool I am working on. The DB stores metrics about files. A few tables have
row counts of more than 18 million, but the tables contain only basic data
types (numbers, dates, and strings); no binary data is stored. A full dump
of the DB is around 1 GB.
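
If it helps, the sizes of the database and of individual tables can be
checked from psql with the built-in size functions (the table name below is
just a placeholder for one of our metric tables):

-- Logical size of the current database as PostgreSQL sees it
SELECT pg_size_pretty(pg_database_size(current_database()));

-- On-disk size of one table including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('file_metrics'));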



We query the DB through complex views to generate reports, and most of
these queries return very large result sets (row counts in the hundreds of
thousands or in the millions).
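
Since these report queries run for a long time, I wonder whether
long-running transactions could be part of the problem. A check along these
lines (column names assume 9.2 or later; older releases use procpid and
current_query instead) lists sessions by transaction age:

-- Sessions ordered by how long their current transaction has been open
SELECT pid, now() - xact_start AS xact_age, state, query
FROM pg_stat_activity
ORDER BY xact_age DESC NULLS LAST;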



The size of the DB cluster folder varies between 400 GB and 600 GB, which
is unreasonably large for the actual data, and it is eating up all the disk
space on the server. When I create a fresh DB from the dump on a new
server, the cluster folder is around 2.3 GB, which seems very reasonable to
me.
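
To see which tables account for the space, and whether it looks like
dead-tuple bloat, I believe a query like this against pg_stat_user_tables
should help (it lists the largest tables with their live and dead tuple
counts):

-- Largest tables with live/dead tuple counts from the statistics collector
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       n_live_tup,
       n_dead_tup
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 20;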


Experts,

Could you help me clean up the DB cluster folder and reclaim the disk
space? Could you also give me some insight into how data is organized in
the DB cluster, and what I should do to prevent this from happening again?
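
From what I have read so far, I assume the fix involves VACUUM in some
form, roughly along the lines below, but I have not tried this on the
production server and would appreciate confirmation (my understanding is
that VACUUM FULL rewrites the table under an exclusive lock):

-- Rewrite one bloated table and return its space to the OS
-- ('file_metrics' is a placeholder; takes an ACCESS EXCLUSIVE lock)
VACUUM FULL VERBOSE file_metrics;

-- Routine maintenance: marks dead-row space reusable without a rewrite
VACUUM ANALYZE;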



The sizes of the directories under the DB cluster folder are listed below:

[user@server DB_CLUSER_DATA]$ du -ksh *
407G    base
316K    global
49M     pg_clog
4.0K    pg_hba.conf
4.0K    pg_ident.conf
120K    pg_multixact
12K     pg_notify
32K     pg_stat_tmp
88K     pg_subtrans
4.0K    pg_tblspc
4.0K    pg_twophase
4.0K    PG_VERSION
129M    pg_xlog
20K     postgresql.conf
4.0K    postmaster.opts
4.0K    postmaster.pid


Thank you very much!



Regards,

Kiruba
