Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>> We ran into a problem with a customer this weekend.  They had more than
>> 128,000 tables, and we were trying to run a pg_dump.  When we reached
>> max_locks_per_transaction, the dump just hung waiting to lock the next
>> table.
>>
>> Would it make sense to have some sort of timeout for that?
>
> I don't think you have diagnosed this correctly.  Running out of lock
> table slots generates an "out of shared memory" error, with a HINT that
> you might want to increase max_locks_per_transaction.  If you can prove
> otherwise, please supply a test case.
>
> 			regards, tom lane
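
For anyone who wants to see the behaviour Tom describes, here is roughly
what it takes to fill the shared lock table.  This is only a sketch, not
what we actually ran: it assumes a recent psql (\gexec needs 9.6 or later)
and stock settings, and the lock_test_* tables are made-up throwaway names,
nothing from the customer's schema.

    -- create more tables than the shared lock table has slots for
    -- (about 6,400 with the defaults); each CREATE commits on its own,
    -- so the creation step itself doesn't exhaust the lock table
    SELECT format('CREATE TABLE lock_test_%s ()', g)
      FROM generate_series(1, 10000) g \gexec

    -- now touch every one of them inside a single transaction, which is
    -- essentially what pg_dump does
    BEGIN;
    SELECT format('LOCK TABLE lock_test_%s IN ACCESS SHARE MODE', g)
      FROM generate_series(1, 10000) g \gexec
    -- partway through, this fails with:
    --   ERROR:  out of shared memory
    --   HINT:  You might need to increase max_locks_per_transaction.
    ROLLBACK;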

You are correct; I didn't have all the information from my team members. Not that it should surprise you that you are correct ;)
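
For the archives, the sizing arithmetic behind that HINT: the shared lock
table has room for roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions) locks, and pg_dump takes an ACCESS SHARE lock on
every table it dumps within one transaction.  A sketch of what that means
for a database of this size, assuming stock max_connections = 100 and
max_prepared_transactions = 0 (illustrative numbers, not the customer's
actual settings):

    # postgresql.conf -- max_locks_per_transaction only changes at server restart
    #
    # default:    64 * (100 + 0) =   6,400 lock slots -> far short of 128,000 tables
    # proposed: 1500 * (100 + 0) = 150,000 lock slots -> enough headroom for pg_dump
    max_locks_per_transaction = 1500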

Thanks for replying.

Joshua D. Drake

--

      === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
             http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

