The reason this is needed on the export/dump side is that the database can 
become huge due to the number of datasheets added to it.  These datasheets 
are not necessary for troubleshooting the setup and, in some cases, they are 
security-sensitive.  Either way, they are not needed when we want a copy of 
the customer's database for troubleshooting, and they make transporting and 
importing the database horribly time-consuming.

Thanks for the response.  Hopefully this can be addressed one day.

Cheers.

-----Original Message-----
From: Robert Haas [mailto:robertmh...@gmail.com] 
Sent: Thursday, July 24, 2014 7:31 AM
To: Braunstein, Alan
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Exporting Table-Specified BLOBs Only?

On Mon, Jul 21, 2014 at 2:14 PM, Braunstein, Alan <alan_braunst...@mentor.com> 
wrote:
> What do I need?
>
> A method of using pg_dump to selectively export BLOBs with OIDs used 
> in the tables specified with --table <table_name1> --table 
> <table_name2>

Hmm.  If you take a full backup using pg_dump -Fc, you can then use pg_restore 
-l and pg_restore -L to find and selectively restore whatever objects you want; 
e.g. restore the tables first, then fetch the list of OIDs from the relevant 
columns and restore those particular blobs.
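A minimal sketch of that workflow, with placeholder database and file names
(customer_db, restore_db, customer.dump, customer.list), might look something
like this:

    # full custom-format dump of the customer database (names are placeholders)
    pg_dump -Fc -f customer.dump customer_db

    # write out the archive's table of contents
    pg_restore -l customer.dump > customer.list

    # edit customer.list, prefixing unwanted entries with ';' so they are
    # skipped -- e.g. blob entries whose OIDs aren't referenced by the
    # tables you care about

    # restore only the entries still active in the edited list
    pg_restore -L customer.list -d restore_db customer.dump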

But I don't think we've got a tool built into core for doing this kind of 
filtering on the dump side.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
