Tom,

Thanks for the response, but it turns out the error was mine, not pg_dump's.  
In short (to minimize my embarrassment!): don't write to the same file from 
three different pg_dumps. 

The good news is that running multiple pg_dumps simultaneously on a single 
database, with each dump covering a mutually exclusive set of tables, works 
great: my overall dump time is now one-fifth of what a single pg_dump takes.  
The setup looks roughly like the sketch below.
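
For anyone searching the archives later, here is a rough sketch of the 
approach (the database and table names are placeholders, and the three-way 
split is just an example):

  # Each pg_dump writes to its OWN output file -- never share one file
  # between dumps.  Placeholders: mydb, big_table_1, big_table_2.
  pg_dump -Fc -T big_table_1 -T big_table_2 -f part_rest.dump mydb &
  pg_dump -Fc -t big_table_1 -f part_big1.dump mydb &
  pg_dump -Fc -t big_table_2 -f part_big2.dump mydb &
  wait    # block until all three background dumps have finished

  # Sanity-check each archive separately, as in the original post
  for f in part_rest.dump part_big1.dump part_big2.dump; do
      pg_restore -Fc "$f" > /dev/null || echo "$f failed verification"
  done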

BTW, I'm using PG 8.4.1, going to 8.4.8 soon, and it's working great.  Thanks to 
all for the excellent database software.


Regards,

Bob Lunney

________________________________
From: Tom Lane <[email protected]>
To: Bob Lunney <[email protected]>
Cc: "[email protected]" <[email protected]>
Sent: Friday, July 1, 2011 2:09 PM
Subject: Re: [ADMIN] Parallel pg_dump on a single database 

Bob Lunney <[email protected]> writes:
> Is it possible (or smart!) to run multiple pg_dumps simultaneously on a 
> single database, dumping different parts of the database to different files 
> by using table and schema exclusion?  I'm attempting this and sometimes it 
> works and sometimes when I check the dump files with 
>   pg_restore -Fc <dumpfile> > /dev/null

> I get 

>   pg_restore: [custom archiver] found unexpected block ID (4) when reading 
>   data -- expected 4238

That sure sounds like a bug.  What PG version are you using exactly?
Can you provide a more specific description of what you're doing,
so somebody else could reproduce this?

            regards, tom lane

