Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> What's happening is that there are simply so many temporary tables in pg_class that when the new backend tries to clear them with RemoveTempRelations, it runs out of lock space.
>
> What happened to the original owner of the tables?  It must have had
> locks on all those tables ...

It's pretty easily reproducible with this:

postgres=# CREATE OR REPLACE FUNCTION createtemps (iters int4) RETURNS VOID LANGUAGE plpgsql AS $$
DECLARE i int4;
BEGIN
  i := 0;
  WHILE i < iters LOOP
    EXECUTE 'CREATE TEMPORARY TABLE "dummytemp' || i || '_' || random() || '" (id int4)';
    i := i + 1;
  END LOOP;
END;
$$;
CREATE FUNCTION
postgres=# SELECT createtemps(7000);
 createtemps
-------------

(1 row)

postgres=# SELECT createtemps(7000);
 createtemps
-------------

(1 row)

postgres=# \q
$ psql postgres
psql (8.4devel)
Type "help" for help.

postgres=# CREATE TEMPORARY TABLE footemp(id int4);
WARNING:  out of shared memory
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
postgres=#

In this case, the temp tables are created in two separate transactions, which is why the creations themselves don't run out of shared memory. But when they're later dropped, the new backend tries to take locks on all of them at the same time.
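
For completeness, the workaround from the hint does get the new backend past the cleanup: raise max_locks_per_transaction in postgresql.conf and restart. Something like the line below is only a sketch sized for this artificial test (the default is 64):

    max_locks_per_transaction = 1024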

I'm not quite sure how I got into that situation with my test case. I might have run the test case without a BEGIN/COMMIT at first, while developing the test script, so that all the thousands of temp tables were created in separate transactions.
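
If anyone wants to see how many orphaned temp tables have piled up before a new connection triggers the cleanup, a rough catalog query like this works (just a sketch; it counts relations in every pg_temp_* schema, whether or not the owning backend is still alive):

    SELECT n.nspname, count(*)
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname LIKE 'pg_temp_%'
    GROUP BY n.nspname
    ORDER BY count(*) DESC;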

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
