I have reformatted the mail, sorry for the inconvenience. Thanks.
We have noticed the following issue with vacuumlo on databases that have millions
of records in pg_largeobject, i.e.:

    WARNING:  out of shared memory
    Failed to remove lo 155987:    ERROR:  out of shared memory
    HINT:  You might need to increase max_locks_per_transaction.
Why do we need to increase max_locks_per_transaction/shared memory for a cleanup
operation? If there is a huge number of records, how can we tackle this
situation with limited memory? It is reproducible on postgresql-9.1.2. The steps
are as follows (PFA vacuumlo-test_data.sql, which generates the dummy data):
Steps:
1. ./bin/initdb -D data-vacuumlo_test1
2. ./bin/pg_ctl -D data-vacuumlo_test1 -l logfile_data-vacuumlo_test1 start
3. ./bin/createdb vacuumlo_test
4. bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
5. bin/vacuumlo vacuumlo_test
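Back-of-the-envelope arithmetic shows why the run below fails. The shared lock
table holds roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions) object locks, and vacuumlo removes every orphaned
large object inside a single transaction, taking a lock per object. A minimal
sketch, assuming the stock 9.1 configuration defaults (max_connections = 100,
max_locks_per_transaction = 64, max_prepared_transactions = 0):

```python
# Assumed stock postgresql.conf defaults for 9.1 (not read from a server):
max_connections = 100
max_prepared_transactions = 0
max_locks_per_transaction = 64

# Approximate capacity of the shared lock table:
lock_table_slots = max_locks_per_transaction * (max_connections +
                                                max_prepared_transactions)
print(lock_table_slots)  # 6400

# The test data creates 13001 orphaned large objects; vacuumlo tries to
# unlink all of them in one transaction, one lock each:
orphaned_los = 13001
print(orphaned_los > lock_table_slots)  # True -> "out of shared memory"
```

So a single transaction needs about twice as many locks as the default lock
table provides, regardless of how much data memory is available.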
~/work/pg/postgresql-9.1.2/inst$ bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
CREATE FUNCTION
CREATE FUNCTION
 create_manylargeobjects
-------------------------

(1 row)

 count
-------
 13001
(1 row)
~/work/pg/postgresql-9.1.2/inst$ bin/vacuumlo vacuumlo_test
WARNING:  out of shared memory
Failed to remove lo 36726: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
Failed to remove lo 36727: ERROR:  current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36728: ERROR:  current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36729: ERROR:  current transaction is aborted, commands ignored until end of transaction block
........
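Short of raising max_locks_per_transaction, one workaround is to unlink the
orphans in smaller transactions so each stays under the lock table limit. A
hypothetical, untested sketch (my_table.lo_col stands in for whichever columns
actually reference large objects; the batch size is arbitrary):

    -- Unlink up to 5000 orphaned large objects per transaction,
    -- repeating until no rows are returned.
    BEGIN;
    SELECT lo_unlink(m.oid)
      FROM pg_largeobject_metadata m
     WHERE m.oid NOT IN (SELECT lo_col FROM my_table
                          WHERE lo_col IS NOT NULL)
     LIMIT 5000;
    COMMIT;

This is essentially what vacuumlo does, except that committing per batch
releases the locks instead of accumulating all 13001 of them.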
Best Regards,
Asif Naeem
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers