On 12/19/06, Arnau <[EMAIL PROTECTED]> wrote:
Hi Jeff,

> On Tue, 19 Dec 2006, Arnau wrote:
>
>> I've got a DB in production that is bigger than 2GB and that takes more
>> than 12 hours to dump.

That's strange; we dump 20+ GB of data in 2 hrs or so.

>> I have a new server to replace this old one,
>> where I have to restore the DB's dump. The problem is I can't afford to
>> have the server out of business for so long,

If the two biggest tables are *not critical* for application
availability, you can dump their data out separately
into two SQL files and restore them later.
Once you have dumped out their data you can drop the two tables
from the production DB before dumping the rest, and see
how long that takes.

pg_dump -t schema.table -h hostname -U user <dbname>

can dump out a specific schema.table.
(The exact options are version dependent; which version are you using, btw?)
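
For example, something along these lines (the table, host, user and file
names below are just placeholders, adjust them to your setup):

# dump the two big statistics tables into their own SQL files
pg_dump -t stats_table_1 -h dbhost -U dbuser proddb > stats_table_1.sql
pg_dump -t stats_table_2 -h dbhost -U dbuser proddb > stats_table_2.sql

# then dump the rest of the database (after dropping the two tables as
# discussed above, or with -T/--exclude-table if your pg_dump is new
# enough to have that switch)
pg_dump -h dbhost -U dbuser proddb > everything_else.sql

Timing each of these separately should also tell you whether the 12 hours
really go into those two statistics tables.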

It is always desirable to know the root cause of why pg_dump is
taking so long on your machine, but in the worst case you could take the
approach you suggested.


>> so I need your advice on how you'd do this dump/restore. The bulk of the
>> data is in two tables (statistics data), so I was thinking of
>> dumping/restoring everything except these two tables, and once the server
>> is running again I'd dump/restore that data. The problem is I don't know
>> exactly how to do this.
>
> Arnau,
>
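
On the new server the restore would then go roughly like this (same
placeholder names as in the dump example above):

# create the database and load everything except the two stats tables
createdb -h newhost -U dbuser proddb
psql -h newhost -U dbuser -d proddb -f everything_else.sql

# the application can go back online at this point

# load the two big tables afterwards (their dumps include the CREATE TABLE)
psql -h newhost -U dbuser -d proddb -f stats_table_1.sql
psql -h newhost -U dbuser -d proddb -f stats_table_2.sql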
