Hi,

We have a problem with backend processes that close the channel because of palloc() failures. When an INSERT statement fails, the backend reports an error (e.g. `Cannot insert a duplicate key into a unique index') and allocates a few more bytes of memory. The next SQL statement that fails causes the backend to allocate yet more memory, and so on, until no virtual memory is left. Is this a bug?

We are using postgres 6.4.2 on FreeBSD 2.2.8.

The problem can also be reproduced with psql:

toy=> create table mytable (i integer unique);
NOTICE:  CREATE TABLE/UNIQUE will create implicit index mytable_i_key for table mytable
CREATE
toy=> \q
~ $ # now do a lot of inserts that cause error messages:
~ $ while true; do echo "INSERT INTO mytable VALUES (1);"; done | psql toy
INSERT INTO mytable VALUES (1);
ERROR:  Cannot insert a duplicate key into a unique index
...quite a lot of these messages...
INSERT INTO mytable VALUES (1);
ERROR:  Cannot insert a duplicate key into a unique index
INSERT INTO mytable VALUES (1);
pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.
We have lost the connection to the backend, so further processing is impossible.  Terminating.

Hmm, why does the backend allocate more and more memory with each failed INSERT? Any clues?

Thanks,
Mirko
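
P.S. To make the suspicion concrete, here is a minimal sketch (in C, against the backend's own API) of the kind of code path we imagine. palloc(), pfree() and elog() are real backend routines, but the function itself is invented for illustration and is not taken from the actual 6.4.2 source:

#include "postgres.h"   /* pulls in palloc(), pfree(), elog() */

/*
 * Invented example: build an error message with palloc() and report it.
 * elog(ERROR) siglongjmp()s back to the backend's main loop, so the
 * pfree() below is never reached.  If the memory context holding the
 * allocation is not reset when the transaction aborts, each failed
 * INSERT leaves a few bytes behind -- which would match what we observe.
 */
void
report_duplicate_key(void)
{
    char *msg = (char *) palloc(64);    /* allocated in the current context */

    strcpy(msg, "Cannot insert a duplicate key into a unique index");
    elog(ERROR, "%s", msg);             /* never returns */
    pfree(msg);                         /* unreachable */
}

If the per-query allocations really are only released when the backend exits, that would explain why the memory grows with every failed statement instead of being reclaimed after each error.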