Hi all!

I'm having big trouble with a database and hope you can help me improve the time VACUUM takes.

The database grows to ~60 GB; after a VACUUM FULL it's ~31 GB. After about a week the database is back up to 55-60 GB and I have to do a VACUUM FULL ANALYZE to reclaim disk space (the disk is 70 GB, so I'm living on the edge here ;()
I have a maintenance window once a week, but vacuuming this database takes around 10-14 hours and I really want to cut that time down.
Since the disk is >85% full, I tried vacuuming table by table instead of the whole database, and my impression is that I gained a speedup by vacuuming the tables that take the most disk first (so the remaining tables have more free disk to work with).
Can this be true, or was it just a feeling?
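For reference, here is roughly how I order the tables (database name and everything else here is just a sketch; relpages in pg_class is each table's size in 8 kB pages, as of the last VACUUM/ANALYZE):

```sql
-- List the largest tables first, so they get vacuumed while the
-- most free disk is still available ('r' = ordinary tables).
SELECT relname, relpages
FROM pg_class
WHERE relkind = 'r'
ORDER BY relpages DESC
LIMIT 10;
```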

My next question: can I gain more speed by running vacuums in parallel?
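What I have in mind is something like this (table names are placeholders), with each VACUUM issued from its own psql session so the two run concurrently:

```sql
-- session 1:
VACUUM ANALYZE big_table_1;

-- session 2, started at the same time:
VACUUM ANALYZE big_table_2;
```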

Looking at iostat and vmstat, the disk was not at all choked up, and the vacuuming process had allocated 2 GB of RAM, so my thought is that I can speed things up a bit more this way?
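Since postgresql.conf is the only thing I'm allowed to touch, the one knob I know of there is the vacuum memory setting (assuming PostgreSQL 7.x, where it's called vacuum_mem and measured in kB; newer releases renamed it maintenance_work_mem):

```
# postgresql.conf -- give VACUUM more memory for tracking dead tuples,
# so large tables need fewer passes over their indexes
vacuum_mem = 131072        # 128 MB (value is in kB)
```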

Question #2:

One heavily used table (~3-5 GB) holds about 1.5 million tuples, and vacuuming it usually cleans out a lot of space (deleting ~1.2 million, leaving ~300k). Can I gain speed (this table takes about half an hour to vacuum) by doing something like
"select * from XX into outfile YY;
drop table XX;
load into XX from file YY;"
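(That's pseudocode; in PostgreSQL syntax I guess the equivalent would be something like the following, with the path as a placeholder. COPY only writes out live tuples, and TRUNCATE throws the dead space away instantly:)

```sql
COPY xx TO '/tmp/xx.copy';   -- export only the ~300k live rows
TRUNCATE TABLE xx;           -- discard the whole file, dead tuples included
COPY xx FROM '/tmp/xx.copy'; -- reload into a compact table
```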

Question #3:

It's a couple of months until the redesign of all database use takes effect, but I have a new disk array arriving in the next day or so. Once this array is in place, can I gain speed by dumping a backup, deleting the database, and restoring the backup?
It seems ~30 GB of data could be copied faster than in 10-14 hours ;=)
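The sequence I'm thinking of, sketched with placeholder names (database "mydb", new array mounted at /newarray; pg_dump's custom format -Fc keeps the dump compressed):

```shell
#!/bin/sh
set -e
# Dump to the new array, drop and recreate the database, then restore.
pg_dump -Fc mydb > /newarray/mydb.dump
dropdb mydb
createdb mydb
pg_restore -d mydb /newarray/mydb.dump
```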

The server is running Red Hat 7.x and has 8 Xeon 1.4 GHz CPUs and 8 GB RAM.

Changes to the SQL statements, code, or anything other than postgresql.conf are impossible.

Kind regards,
