On Sat, Feb 11, 2017 at 9:56 AM, Andrea Urbani <matfan...@mail.com> wrote:
> I'm a beginner here... anyway, I'll try to share my ideas.
>
> My situation has gotten worse: I'm no longer able to run pg_dump at all, 
> neither with my custom fetch value (I have tried "1" as the value = one row 
> at a time) nor without "--column-inserts":
>
> pg_dump: Dumping the contents of table "tDocumentsFiles" failed: 
> PQgetResult() failed.
> pg_dump: Error message from server: ERROR:  out of memory
> DETAIL:  Failed on request of size 1073741823.
> pg_dump: The command was: COPY public."tDocumentsFiles" ("ID_Document", 
> "ID_File", "Name", "FileName", "Link", "Note", "Picture", "Content", 
> "FileSize", "FileDateTime", "DrugBox", "DrugPicture", "DrugInstructions") TO 
> stdout;
>
> I don't know if the Kyotaro Horiguchi patch will solve this, because, again, 
> I'm not able to fetch even a single row.
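(As a side note, one way to see whether a single row is retrievable at all,
independent of pg_dump, is to open a cursor and FETCH 1 by hand, which is
roughly what a fetch-limited dump boils down to. A minimal libpq sketch of
that test follows; the table name is taken from the COPY error above, while
the connection string and cursor name are placeholders.)

/* Minimal libpq sketch: can even one row be fetched from the table?
   Build with:  cc fetchtest.c -lpq -o fetchtest
   The connection string is a placeholder; the table name comes from
   the COPY error message above. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=mydb");  /* placeholder */
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c NO SCROLL CURSOR FOR "
                         "SELECT * FROM public.\"tDocumentsFiles\""));

    /* This is what a fetch value of 1 amounts to: one row per FETCH. */
    res = PQexec(conn, "FETCH 1 FROM c");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
    else
        printf("fetched %d row(s)\n", PQntuples(res));

    PQclear(res);
    PQfinish(conn);
    return 0;
}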

Yeah, if you can't fetch even one row, limiting the fetch size won't
help.  But why is that failing?  A single 1GB allocation should be
fine on most modern servers.  I guess the fact that you're using a
32-bit build of PostgreSQL is a big part of the problem: a 32-bit
process has only about 2GB of usable address space, and you're trying
to find a single, contiguous 1GB chunk within it.  If you switch to a
64-bit build of PostgreSQL, things will probably get a lot better for
you, unless the server's actual memory is also very small.
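
To illustrate the address-space point with a standalone sketch (not
PostgreSQL code, just a plain malloc of the exact size from the error
message): in a 32-bit process this single request can fail even when
plenty of memory is free, simply because no contiguous 1GB hole is left
once the binary, shared libraries, stack, and earlier allocations have
been mapped in.

/* Standalone illustration (not PostgreSQL source): a single contiguous
   1GB request can fail in a 32-bit process even with ample free memory,
   because the ~2GB usable address space may contain no 1GB hole. */
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    size_t      request = 1073741823;   /* exact size from the error */
    void       *chunk = malloc(request);

    if (chunk == NULL)
    {
        /* The likely outcome in a fragmented 32-bit address space. */
        fprintf(stderr, "no contiguous %zu-byte chunk available\n", request);
        return 1;
    }
    printf("got %zu contiguous bytes\n", request);
    free(chunk);
    return 0;
}

The same request in a 64-bit process has effectively unlimited address
space to draw from, which is why the rebuild should make the failure
go away.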

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

