Are there any guidelines for sizing work_mem, shared_buffers, and other
configuration parameters with regard to very large records?  I have a
table with a bytea column, and I am told that some of these values
exceed 400MB.  On several servers I am having problems reading, and
more specifically dumping, these records (the whole table) with
pg_dump.
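
For reference, this is roughly how the value sizes can be confirmed
(a sketch only; the table name `big_table` and column name `payload`
are placeholders for the actual ones):

```sql
-- Report the largest stored bytea values, in descending order.
-- octet_length() returns the uncompressed size of the datum in bytes.
SELECT ctid,
       pg_size_pretty(octet_length(payload)::bigint) AS value_size
FROM   big_table
ORDER  BY octet_length(payload) DESC
LIMIT  10;
```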

Thanks

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
