Hi Jeff,

On Tue, 19 Dec 2006, Arnau wrote:

 I've got a production DB that is bigger than 2GB, and dumping it
takes more than 12 hours. I have a new server to replace this old one,
where I have to restore the DB's dump. The problem is that I can't afford
to have the server out of service for that long, so I need your advice
on how you'd do this dump/restore. Most of the data lives in two
tables (statistics data), so I was thinking of dumping/restoring everything
except those two tables, and once the server is running again I'd
dump/restore that data. The problem is I don't know exactly how to do this.
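Roughly, what I had in mind was something like this. It is only a sketch: it assumes a pg_dump new enough to support -t/--table and -T/--exclude-table, and stats_a / stats_b are placeholders for the real table names and mydb for the database name.

    # Dump the schema and all data except the two big statistics tables
    pg_dump -T stats_a -T stats_b mydb > mydb_no_stats.sql

    # Restore that on the new server so it can go back into service quickly
    psql -d mydb -f mydb_no_stats.sql

    # Later, dump only the two big tables and load them on the new server
    # (this dump includes their CREATE TABLE statements as well as the data)
    pg_dump -t stats_a -t stats_b mydb > mydb_stats.sql
    psql -d mydb -f mydb_stats.sql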

Arnau,

2 GB and it takes 12 hours? What sort of server is this running on? Does your postgresql.conf perhaps still have all the default values? I routinely dump DBs that are 4-8 GB in size, and it takes about 10-15 minutes.

It's a dual Xeon with 4 GB of RAM and a RAID 5 array. It probably has the default values. Any suggestions about which parameters I should change to speed it up?
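For reference, the settings that usually come up first for this kind of workload are below. The values are only an illustrative sketch for a machine with 4 GB of RAM, not settings confirmed anywhere in this thread, and the unit syntax (e.g. "512MB") assumes PostgreSQL 8.2 or later; older releases take shared_buffers in 8 kB pages.

    # postgresql.conf (illustrative values for a 4 GB RAM server)
    shared_buffers = 512MB         # main buffer cache; the default is very small
    maintenance_work_mem = 256MB   # speeds up CREATE INDEX and FK checks during restore
    checkpoint_segments = 32       # fewer, larger checkpoints during bulk loads
    wal_buffers = 8MB              # WAL buffering for write-heavy activity

A restart of PostgreSQL is needed after changing shared_buffers.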


--
Arnau

