I'm a beginner here... anyway, I'll try to share my ideas.

My situation has gotten worse: I'm no longer able to run pg_dump, either with my 
custom fetch value (I tried "1", i.e. one row at a time) or without 
"--column-inserts":

pg_dump: Dumping the contents of table "tDocumentsFiles" failed: PQgetResult() 
pg_dump: Error message from server: ERROR:  out of memory
DETAIL:  Failed on request of size 1073741823.
pg_dump: The command was: COPY public."tDocumentsFiles" ("ID_Document", 
"ID_File", "Name", "FileName", "Link", "Note", "Picture", "Content", 
"FileSize", "FileDateTime", "DrugBox", "DrugPicture", "DrugInstructions") TO 

I don't know whether Kyotaro Horiguchi's patch will solve this because, again, 
I can't fetch even a single row.
I hit a similar problem trying to read and write the blob fields from my program.
Currently I'm working in pieces:
  r1) I get the length of the blob field
  r2) I check the available free memory (on the client PC)
  r3) I read pieces of the blob field, sized according to the free memory, 
appending them to a physical file
  w1) I check the length of the file to be saved into the blob
  w2) I check the available free memory (on the client PC)
  w3) I create a temporary table on the server
  w4) I add rows to this temporary table, writing pieces of the file sized 
according to the free memory
  w5) I ask the server to write, into the final blob field, the concatenation 
of the rows of the temporary table
Reading and writing work now.
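The read/write steps above can be sketched in SQL. This is only a sketch: the 1 MB piece size, the key value 42, and the temp-table name are illustrative, and note that substring offsets on bytea are 1-based.

```sql
-- r1) get the length of the blob (bytea) field, in bytes
SELECT octet_length("Content")
FROM public."tDocumentsFiles" WHERE "ID_File" = 42;

-- r3) read one piece at a time (the client appends each piece to a local file)
SELECT substring("Content" FROM 1 FOR 1048576)   -- first 1 MB piece
FROM public."tDocumentsFiles" WHERE "ID_File" = 42;

-- w3) temporary table that holds the pieces in order
CREATE TEMPORARY TABLE file_chunks (seq int PRIMARY KEY, piece bytea);

-- w4) one parameterized INSERT per piece ($1 carries the chunk from the client)
INSERT INTO file_chunks (seq, piece) VALUES (1, $1);

-- w5) concatenate the pieces into the final blob field
UPDATE public."tDocumentsFiles"
SET "Content" = (SELECT string_agg(piece, ''::bytea ORDER BY seq)
                 FROM file_chunks)
WHERE "ID_File" = 42;
```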
Probably the free-memory check should be done on both sides (client and server 
[does a function/view with the available free memory exist?]), taking the 
smaller of the two.
What do you think about using a similar approach in pg_dump?
a) go through the table, getting the size of each row / field
b) when the size of the row or field is bigger than a threshold (provided or 
stored somewhere), read the field in pieces until the end
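A piecewise fetch along those lines could look roughly like this; the 100 MB threshold and 1 MB piece size are only examples, and n is a variable the client substitutes on each iteration (0, 1, 2, ...) until a piece comes back shorter than requested:

```sql
-- a) find the rows whose blob field exceeds the threshold
SELECT "ID_Document", "ID_File", octet_length("Content") AS content_size
FROM public."tDocumentsFiles"
WHERE octet_length("Content") > 100 * 1024 * 1024;

-- b) fetch piece n of an oversized field (n supplied by the client each round)
SELECT substring("Content" FROM (n * 1048576) + 1 FOR 1048576)
FROM public."tDocumentsFiles" WHERE "ID_File" = 42;
```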

PS: I have seen that there are "large objects" that can work via streams. My 
files are currently not bigger than 1 GB, but, OK, maybe in the future I will 
use them instead of blobs.
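For what it's worth, the server-side large-object functions can already be driven in chunks from plain SQL. A rough sketch follows; the OID 16404 and descriptor 0 are illustrative only — in practice you must use the values returned by lo_creat and lo_open, all inside one transaction:

```sql
BEGIN;
SELECT lo_creat(-1);            -- create a new large object, returns its OID
SELECT lo_open(16404, 131072);  -- 131072 = INV_WRITE; returns a descriptor
SELECT lowrite(0, $1);          -- write one chunk per call via the descriptor
SELECT lo_close(0);
COMMIT;
-- reading works the same way with loread(descriptor, byte_count)
```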

Thank you 

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)