On Tue, 19 Dec 2006, Arnau wrote:

 I've got a production DB that is bigger than 2 GB and takes more than 12
hours to dump. I have a new server to replace this old one, where I have
to restore the DB's dump. The problem is I can't afford to have the
server out of business for that long, so I need your advice on how you'd
do this dump/restore. Most of the data lives in two tables (statistics
data), so I was thinking of dumping/restoring everything except these
two tables, and once the server is running again I'd dump/restore that
data separately. The problem is I don't know exactly how to do this.
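[A minimal sketch of the selective dump/restore Arnau describes, assuming
PostgreSQL 8.2+ (where pg_dump gained -T/--exclude-table and accepts the
switches more than once). The database name "mydb" and table names
"stats_a"/"stats_b" are placeholders for the real ones:

    # Dump everything except the two big statistics tables:
    pg_dump -Fc -T stats_a -T stats_b mydb > mydb_main.dump

    # Dump just the two big tables into a second file:
    pg_dump -Fc -t stats_a -t stats_b mydb > mydb_stats.dump

    # On the new server: restore the main dump, put the server back in
    # service, then load the statistics data afterwards:
    pg_restore -d mydb mydb_main.dump
    pg_restore -d mydb mydb_stats.dump

On releases before 8.2 there is no table-exclusion switch, so the usual
workaround is per-table dumps with -t plus a separate schema-only dump.]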

Arnau,

2 GB and it takes 12 hours? What sort of server is this running on? Does your postgresql.conf have all default values, perhaps? I routinely dump DBs that are 4-8 GB in size and it takes about 10-15 minutes.



It's a dual Xeon with 4 GB of RAM and a RAID 5 array. It probably has the default values. Any suggestion about which parameters I should change to speed it up?

Have a look at:

http://www.powerpostgresql.com/PerfList
and
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
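[For reference, an illustrative starting point for a ~4 GB RAM box on
8.0/8.1, in the spirit of the settings those pages cover. The exact
numbers below are assumptions to tune against your workload, not figures
taken from the links:

    # postgresql.conf sketch; shared_buffers and wal_buffers are in
    # 8 kB pages on 8.0/8.1, work_mem/maintenance_work_mem in kB
    # (8.2 also accepts memory units like '400MB').
    shared_buffers = 50000            # ~400 MB
    work_mem = 16384                  # 16 MB per sort/hash operation
    maintenance_work_mem = 262144     # 256 MB; speeds restores and index builds
    checkpoint_segments = 16          # fewer checkpoints during bulk loads
    wal_buffers = 64                  # ~512 kB

Raising maintenance_work_mem and checkpoint_segments tends to matter most
for the restore side, since index builds and bulk COPY dominate there.]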


--
Jeff Frost, Owner       <[EMAIL PROTECTED]>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908     FAX: 650-649-1954

