Adam Ruth <[email protected]> writes:
> I've actually done this before. I had a web app with about 400 users,
> each with their own schema. It worked very well, except for one
> thing: there got to be so many tables that pg_dump would fail because
> it ran out of locks. We got around it by creating a primary table and
> then using views in each of the schemas to access each user's data.
> It also made it easy to query against all users at once in the
> primary table.
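
A minimal sketch of that arrangement, using hypothetical names (a
shared public.orders table and a per-user schema alice):

    -- one shared "primary" table, keyed by user
    CREATE TABLE public.orders (
        user_name  text NOT NULL,
        id         serial,
        payload    text
    );

    -- each user gets a schema whose view exposes only that user's rows
    CREATE SCHEMA alice;
    CREATE VIEW alice.orders AS
        SELECT id, payload
        FROM public.orders
        WHERE user_name = 'alice';

    -- cross-user queries just hit the primary table directly
    SELECT user_name, count(*) FROM public.orders GROUP BY user_name;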
Note that this is about how many tables you have, not how many schemas
they are in. The solution is to increase max_locks_per_transaction;
the default value is deliberately conservative to avoid eating up too
much shared memory.
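
For example, in postgresql.conf (256 here is purely illustrative; the
default is 64, and the change only takes effect after a server
restart):

    # postgresql.conf
    max_locks_per_transaction = 256

Then, from psql, you can confirm the running value:

    SHOW max_locks_per_transaction;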
regards, tom lane