Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Adrian Klaver

On 1/11/22 01:34, Dominique Devienne wrote:
On Mon, Jan 10, 2022 at 11:13 PM Adrian Klaver wrote:






I go into use cases in my previous message, sent a minute ago.

In your OP the error occurred here:
DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)" CASCADE
The CASCADE allows the command to recurse to objects that may not be
owned by the specified role. Is that what you want?


Sure. Although that's irrelevant. There are no such other objects.


Except we did not know that until now.


Each 2+N schemas "instance" is self-contained, and all ROLEs associated
to those SCHEMAs only have privileges associated to those schemas.

Actual LOGIN USERs are granted access to a subset of ROLEs in 
"instances" based on individual permissions.
So when I nuke an "instance", I DROP all its SCHEMAs, and all their 
associated ROLEs.


Actually you do DROP OWNED BY ... CASCADE.



And DROP'ing those "instance"-specific ROLEs implicitly severs the 
GRANTs made on LOGIN USERs, who remain.


What is the purpose of the process, clearing out given schema, dropping
objects only owned by a given role, or something else?


"dropping objects only owned by a given role" indeed.
I.e. exactly what DROP OWNED BY is designed to do, no?
As pointed out it can end up dropping objects owned by other roles due 
to dependencies. This means DROP OWNED BY ... CASCADE can extend past 
'dropping objects only owned by a given role'.
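A minimal sketch of how that can happen (all names below are made up for illustration):

```
CREATE ROLE role_a;
CREATE ROLE role_b;
CREATE TABLE t (id int);
ALTER TABLE t OWNER TO role_a;
CREATE VIEW v AS SELECT id FROM t;
ALTER VIEW v OWNER TO role_b;

-- dropping role_a's objects takes t down, and CASCADE then also drops v,
-- even though v is owned by role_b
DROP OWNED BY role_a CASCADE;
```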




It just so happens that it seems impossible to do that, because that 
involves too many locks :(.


It is not impossible, just difficult to predict what to set
max_locks_per_transaction to.
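One rough way to get a ballpark figure, assuming on the order of one lock per catalog object dropped (the 'PNS:%' pattern is just the schema-name pattern that appears elsewhere in this thread):

```
-- count pg_class entries (tables, indexes, sequences, views, ...) in the
-- schemas a DROP OWNED BY would take down, plus the schemas themselves
SELECT count(*) + count(DISTINCT n.oid) AS approx_locks_needed
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'PNS:%';
```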




With 1400 relations for each of the N schemas, and actual in-prod
instances at client sites that require hundreds of such schemas
(on the current system not based on PostgreSQL), I'm guessing


So now we get to the crux of the issue: this is a migration from another
system. It would be useful to know what that system is and how it is
handled there. There may be people on this list that have similar
experiences.



I can't assign 1400 * 1000 = over a million locks on the cluster, can I?

So now the question I asked in that other message, is whether I should 
use a dedicated DB per "instance" instead? --DD



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Tom Lane
Dominique Devienne  writes:
> OK :). But it does matter, those locks. Especially if DROP'ing a DB (as you
> suggest) requires just the same.

But it doesn't.

regards, tom lane




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Dominique Devienne
On Tue, Jan 11, 2022 at 12:47 PM Wolfgang Walther 
wrote:

> Dominique Devienne:
> > I wish for DB-specific ROLEs BTW...
>
> Same here. That would be so useful.
>

In fact, in my case, I also want something even narrower than that,
which are SCHEMA-specific ROLEs: ROLEs tied to a given schema,
implicitly DROP'ed when their "owner" SCHEMA is DROP'ed, and which
can only take GRANTs/privileges on objects from their owner schema.

I'm not saying CLUSTER-wide ROLEs are not useful. They are, mostly for
LOGIN USERs IMHO.
But for NOLOGIN ROLEs used to group permissions, often in a single DB, or
even a single SCHEMA like in my case,
the fact ROLEs are CLUSTER-wide is problematic for the naming. FWIW. --DD

PS: I've read the note that DB-specific ROLEs kind of exist, but since the
doc explicitly mentions to avoid them, I don't use them. And in any case,
as I wrote above, SCHEMA-correlated ROLEs are what I really would like to use.


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Francisco Olarte
Dominique:

On Tue, 11 Jan 2022 at 11:57, Dominique Devienne  wrote:
> On Tue, Jan 11, 2022 at 11:05 AM Francisco Olarte  
> wrote:
>> Not going to enter into the lock situation but...
> OK :). But it does matter, those locks. Especially if DROP'ing a DB (as you
> suggest) requires just the same.

It certainly does, yours is an extreme use case but there should be a
way to make it work. From what I've understood it is just a problem of
hitting a limit, but it should work even if the solution is just "put
in some more memory, increase that parameter".

Re: dropping a DB, I do not think you are going to hit this limit
there, as it is a fundamentally different operation. Someone more
experienced may chime in, but I think it is a matter of checking nobody
uses it, updating a few rows in global catalogs and deleting all the
files. I've been out of DB programming for some years, but I remember
it was very fast. Now that you mention it, it may need some dependency
tracking, but I think the depending-object lookup is only needed when
dropping the role.


>> On Tue, 11 Jan 2022 at 10:24, Dominique Devienne  wrote:
>> > I need for unit testing purposes to be able to support multiple (2+N 
>> > schemas) "instances".
...
>> I'm not sure if you are going to hit other limitations, but I've
>> normally done tests with the "template database" approach ( using
>> create database template=, dropping the DB at the end ). It is fast,
>> it is simple, it is easy. Have you tried that?

> No, I haven't. I did see that feature, in the doc, and wondered about it for 
> Production, but not for testing.

I've never used it for production ( I can see its usefulness for
hosting people who have a lot of customers or sites with the same
database, but that has never been my case ). For testing, at least for me,
create with template was just like a very fast sql script for
creation, or a very fast restore. They were normally databases with
little data ( typically only the control tables populated ).

>> seems much easier/faster than building and dropping all these
>> schemas/roles, especially for testing.
> Good to hear. But when you write "I've done tests", do you mean manual tests?
> Or automated unit-tests that create DBs (from a template) on-the-fly and DROP 
> them?
> Concurrently from different CI agents?

Well, they were not too "unity"; creating the DB for a real unit ( of
behaviour ) test was too expensive, but I think you have that solved.

They were a batch of tests run either manually via a single script (
pick a name, copy the database, run the tests in it, report, drop the
database ( unless a flag was given, for forensics ) ) or the same
script from the CI server.

> The reason I didn't consider DB templates for unit-testing, is that the 
> schemas are changing often.
> And creating the schemas is all automated in code already.

And you use this to create the template, via CI if you want.

> Plus ROLEs are CLUSTER-wide, so the DB template does nothing to help with 
> SCHEMA-associated roles. --DD

You may avoid the need to create and drop roles. If the test dbs are
isolated, you should not need to drop the roles while they are "live"
in the testing setup. You just need to script role creation ( globally
), plus template creation after it, then create with template and drop
for tests, and when the template needs changing you either script the
update directly ( no need to drop roles ) or drop it, drop the roles (
which now have nothing depending on them ) and recreate from scratch.
More knowledge of the exact procedures is needed, but as it seems you have
everything scripted you may be able to test that, and I remember
copying databases via create template was very fast. Of course we did
not have many roles or schemas, but it seemed a natural way to go for
the testing you described, and also having a different database for
each test run can give you better test isolation, especially in your
case of concurrent testing. It just seemed the proper tool for the job,
and I thought you might not be familiar with it.
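Roughly, something along these lines (all names below are placeholders; the schema scripts are whatever you already have):

```
-- once, or whenever the schema definition changes: global roles + a template DB
CREATE ROLE app_owner NOLOGIN;
CREATE DATABASE app_template OWNER app_owner;
-- ... run the existing schema-creation scripts against app_template ...

-- per test run: clone the template, test, throw the clone away
CREATE DATABASE test_run_42 TEMPLATE app_template;
-- note: the clone fails if another session is connected to app_template right then
-- ... run the tests against test_run_42 ...
DROP DATABASE test_run_42;
-- the roles stay around; nothing depends on them once the clone is gone
```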

Francisco Olarte.




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Wolfgang Walther

Dominique Devienne:

I wish for DB-specific ROLEs BTW...


Same here. That would be so useful.




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Dominique Devienne
On Tue, Jan 11, 2022 at 11:05 AM Francisco Olarte 
wrote:

> Dominique:
> Not going to enter into the lock situation but...
>

OK :). But it does matter, those locks. Especially if DROP'ing a DB (as you
suggest) requires just the same.


> On Tue, 11 Jan 2022 at 10:24, Dominique Devienne 
> wrote:
> ...
> > I need for unit testing purposes to be able to support multiple (2+N
> schemas) "instances".
> > Each instance (of 2+N schemas) is owned by a separate ROLE, created for
> that express purpose.
> > I designed / coded it to be able to have several "instances" per DB,
> that come and go for unit testing purpose,
> > and they will come and go concurrently (when CI kicks in, on several
> platforms/configurations in parallel).
> > And I thought DROP OWNED BY was going to be convenient (fewer
> client-server round-trips, perfectly models the *intent*).
> > But obviously given the limitations I'm discovering, that's not the case.
> >
> > In production, there will typically be a single "instance" per DB.
> >
> > So, should I redesign for each instance to be in its own DB? And instead
> of just creating schemas on the fly when running tests, creating DBs on the
> fly?
> > That means I could then DROP the whole DB (I wish for DB-specific
> ROLEs BTW...). Does that buy me anything? Does that help with locks-per-tx
> at all?
> > I'm happy to do that, if necessary. But is using a dedicated DB per 2+N
> schemas "instance" the right approach?
>
> I'm not sure if you are going to hit other limitations, but I've
> normally done tests with the "template database" approach ( using
> create database template=, dropping the DB at the end ). It is fast,
> it is simple, it is easy. Have you tried that?


No, I haven't. I did see that feature, in the doc, and wondered about it
for Production, but not for testing.


> seems much easier/faster than building and dropping all these
> schemas/roles, especially for testing.
>

Good to hear. But when you write "I've done tests", do you mean manual
tests?
Or automated unit-tests that create DBs (from a template) on-the-fly and
DROP them?
Concurrently from different CI agents?

The reason I didn't consider DB templates for unit-testing, is that the
schemas are changing often.
And creating the schemas is all automated in code already.

Plus ROLEs are CLUSTER-wide, so the DB template does nothing to help with
SCHEMA-associated roles. --DD


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Francisco Olarte
Dominique:

Not going to enter into the lock situation but...

On Tue, 11 Jan 2022 at 10:24, Dominique Devienne  wrote:
...
> I need for unit testing purposes to be able to support multiple (2+N schemas) 
> "instances".
> Each instance (of 2+N schemas) is owned by a separate ROLE, created for that 
> express purpose.
> I designed / coded it to be able to have several "instances" per DB, that 
> come and go for unit testing purpose,
> and they will come and go concurrently (when CI kicks in, on several 
> platforms/configurations in parallel).
> And I thought DROP OWNED BY was going to be convenient (fewer client-server 
> round-trips, perfectly models the *intent*).
> But obviously given the limitations I'm discovering, that's not the case.
>
> In production, there will typically be a single "instance" per DB.
>
> So, should I redesign for each instance to be in its own DB? And instead of 
> just creating schemas on the fly when running tests, creating DBs on the fly?
> That means I could then DROP the whole DB (I wish for DB-specific ROLEs
> BTW...). Does that buy me anything? Does that help with locks-per-tx at all?
> I'm happy to do that, if necessary. But is using a dedicated DB per 2+N 
> schemas "instance" the right approach?

I'm not sure if you are going to hit other limitations, but I've
normally done tests with the "template database" approach ( using
create database template=, dropping the DB at the end ). It is fast,
it is simple, it is easy. Have you tried that? Seems much
easier/faster than building and dropping all these
schemas/roles, especially for testing.
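In its simplest form, something like (database names below are placeholders):

```
CREATE DATABASE testdb_1234 TEMPLATE mytests_template;  -- fast, file-level copy
-- ... run the test suite against testdb_1234 ...
DROP DATABASE testdb_1234;
```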

Francisco Olarte.




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Dominique Devienne
On Mon, Jan 10, 2022 at 11:13 PM Adrian Klaver 
wrote:

> On 1/10/22 13:58, Dominique Devienne wrote:
>
> > Given that Tom mentions max_locks_per_transaction can be safely
> increased,
> > and given the stats I mentioned in this thread, what would a
> > "reasonable" max_locks_per_transaction
> > be in my case? By reasonable, I mean "as large as possible w/o being
> too large"...
> >
> > Obviously 64*100 is not quite large enough to be safe in this case.
> I'd appreciate some advice. TIA, --DD
>
> I think at this point it might be a good idea to explore what the case is?
>

I go into use cases in my previous message, sent a minute ago.

In your OP the error occurred here:
> DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)" CASCADE
> The CASCADE allows the command to recurse to objects that may not be
> owned by the specified role. Is that what you want?
>

Sure. Although that's irrelevant. There are no such other objects.
Each 2+N schemas "instance" is self-contained, and all ROLEs associated
to those SCHEMAs only have privileges associated to those schemas.

Actual LOGIN USERs are granted access to a subset of ROLEs in "instances"
based on individual permissions.
So when I nuke an "instance", I DROP all its SCHEMAs, and all their
associated ROLEs.

And DROP'ing those "instance"-specific ROLEs implicitly severs the GRANTs
made on LOGIN USERs, who remain.


> What is the purpose of the process, clearing out given schema, dropping
> objects only owned by a given role, or something else?
>

"dropping objects only owned by a given role" indeed.
I.e. exactly what DROP OWNED BY is designed to do, no?

It just so happens that it seems impossible to do that, because that
involves too many locks :(.

With 1400 relations for each of the N schemas, and actual in-prod instances
at client sites that require hundreds of such schemas (on the current system
not based on PostgreSQL), I'm guessing
I can't assign 1400 * 1000 = over a million locks on the cluster, can I?

So now the question I asked in that other message, is whether I should use
a dedicated DB per "instance" instead? --DD


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-11 Thread Dominique Devienne
On Mon, Jan 10, 2022 at 10:40 PM Alvaro Herrera 
wrote:

> On 2022-Jan-10, Dominique Devienne wrote:
>
> > Btw, is there a way to accurately count a schema's relations from
> the catalog?
>
> pg_class
>

ddevienne=> select relnamespace::regnamespace::text, count(*) from pg_class
where relnamespace::regnamespace::text like '"PNS:%"' group by relnamespace;
              relnamespace              | count
----------------------------------------+-------
 "PNS:ec44cdde757b572a8f367277e67f1e3f" |  1308
...

ddevienne=> select relnamespace::regnamespace::text, relkind, count(*) from
pg_class where relnamespace::regnamespace::text like '"PNS:%"' group by
relnamespace, relkind;
              relnamespace              | relkind | count
----------------------------------------+---------+-------
 "PNS:ec44cdde757b572a8f367277e67f1e3f" | S   |   229
 "PNS:ec44cdde757b572a8f367277e67f1e3f" | i   |   828
 "PNS:ec44cdde757b572a8f367277e67f1e3f" | r   |   244
 "PNS:ec44cdde757b572a8f367277e67f1e3f" | v   | 7
...

So from what I was told in this thread, 1308 locks need to be taken, just
for that one schema.
And my "system" can have N of those (and there can be N systems in a DB).
(the other two "fixed" schemas have way fewer relations, 32 for one).


> > Of course I can do that. But it doesn’t feel right.
>
> Then you should increase max_locks_per_transaction to a value that better
> suits you.
>

But the point is that there's no limit on how many schemas a given ROLE can
own. So you can't pick a value that will always work. So that makes DROP
OWNED BY pretty much useless, as too unreliable, in my case. Unless I'm
missing something?

That's definitely something that's not mentioned in the doc of DROP OWNED
BY. Deserves a mention IMHO.

I need for unit testing purposes to be able to support multiple (2+N
schemas) "instances".
Each instance (of 2+N schemas) is owned by a separate ROLE, created for
that express purpose.
I designed / coded it to be able to have several "instances" per DB, that
come and go for unit testing purpose,
and they will come and go concurrently (when CI kicks in, on several
platforms/configurations in parallel).
And I thought DROP OWNED BY was going to be convenient (fewer client-server
round-trips, perfectly models the *intent*).
But obviously given the limitations I'm discovering, that's not the case.

In production, there will typically be a single "instance" per DB.

So, should I redesign for each instance to be in its own DB? And instead of
just creating schemas on the fly when running tests, creating DBs on the
fly?
That means I could then DROP the whole DB (I wish for DB-specific ROLEs
BTW...). Does that buy me anything? Does that help with locks-per-tx at all?

I'm happy to do that, if necessary. But is using a dedicated DB per 2+N
schemas "instance" the right approach?
What other kinds of limitations am I not aware of, that the doc glosses over
(or that I missed / haven't read yet), and that I'd discover then?

It is for this kind of insight that I turn to the experts on MLs. Thanks,
--DD


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Adrian Klaver

On 1/10/22 13:58, Dominique Devienne wrote:



Given that Tom mentions max_locks_per_transaction can be safely increased,
and given the stats I mentioned in this thread, what would a 
"reasonable" max_locks_per_transaction
be in my case? By reasonable, I mean "as large as possible w/o being too 
large"...


Obviously 64*100 is not quite large enough to be safe in this case. I'd
appreciate some advice. TIA, --DD


I think at this point it might be a good idea to explore what the case is?

In your OP the error occurred here:

DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)" CASCADE

The CASCADE allows the command to recurse to objects that may not be
owned by the specified role. Is that what you want?


What is the purpose of the process, clearing out given schema, dropping 
objects only owned by a given role, or something else?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Dominique Devienne
On Mon, Jan 10, 2022 at 10:29 PM Tom Lane  wrote:

> "David G. Johnston"  writes:
> > On Mon, Jan 10, 2022 at 12:09 PM Dominique Devienne wrote:
> >> Tom wrote "relation" for the number of locks necessary for DROP OWNED
> BY.
> >> What does it mean in this context? relation = table?
>
> > I'm confused here a bit as well.  The items being talked about are tables
> > and indexes, both of which manifest as files on the filesystem.  But not
> > all relations do (e.g., views).  But if this isn't tied to the filesystem
> > then I would expect that other object types, especially functions, would
> > require locking as well, but those are decidedly not relations.
>
> I was wrong actually --- I wrote that thinking that we acquire exclusive
> lock when dropping a relation (where relation may be defined as "something
> with a pg_class entry").  That's true, but these days we acquire a lock
> when deleting *any* cataloged database object.  So you'd also need a lock
> for each schema, function, etc that was due to get dropped.  This is
> basically to avoid problems in case of concurrent drop commands.
>
> It's still true that the size of a relation in columns or rows is not
> relevant here.
>

Given that Tom mentions max_locks_per_transaction can be safely increased,
and given the stats I mentioned in this thread, what would a "reasonable"
max_locks_per_transaction
be in my case? By reasonable, I mean "as large as possible w/o being too
large"...

Obviously 64*100 is not quite large enough to be safe in this case. I'd
appreciate some advice. TIA, --DD


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Tom Lane
"David G. Johnston"  writes:
> On Mon, Jan 10, 2022 at 12:09 PM Dominique Devienne 
> wrote:
>> Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
>> What does it mean in this context? relation = table?

> I'm confused here a bit as well.  The items being talked about are tables
> and indexes, both of which manifest as files on the filesystem.  But not
> all relations do (e.g., views).  But if this isn't tied to the filesystem
> then I would expect that other object types, especially functions, would
> require locking as well, but those are decidedly not relations.

I was wrong actually --- I wrote that thinking that we acquire exclusive
lock when dropping a relation (where relation may be defined as "something
with a pg_class entry").  That's true, but these days we acquire a lock
when deleting *any* cataloged database object.  So you'd also need a lock
for each schema, function, etc that was due to get dropped.  This is
basically to avoid problems in case of concurrent drop commands.

It's still true that the size of a relation in columns or rows is not
relevant here.
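One way to see this from a second session while a drop runs in an open
transaction (an illustrative sketch; the role name is hypothetical):

```
-- session 1:
--   BEGIN;
--   DROP OWNED BY some_role CASCADE;
--   -- keep the transaction open for a moment
-- session 2: locks currently held, per backend
SELECT pid, count(*) AS locks_held
FROM pg_locks
WHERE granted
GROUP BY pid
ORDER BY locks_held DESC;
```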

regards, tom lane




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Adrian Klaver

On 1/10/22 13:08, David G. Johnston wrote:
On Mon, Jan 10, 2022 at 12:09 PM Dominique Devienne wrote:


Given max_locks_per_transaction * (max_connections + max_prepared_transactions)
from https://www.postgresql.org/docs/current/runtime-config-locks.html,
and max_conn being 100, that's not many locks.

Tom wrote "relation" for the number of locks necessary for DROP
OWNED BY.
What does it mean in this context? relation = table?



I'm confused here a bit as well.  The items being talked about are 
tables and indexes, both of which manifest as files on the filesystem.  
But not all relations do (e.g., views).  But if this isn't tied to the 
filesystem then I would expect that other object types, especially 
functions, would require locking as well, but those are decidedly not 
relations.


Pretty sure this is related to:

https://www.postgresql.org/docs/current/explicit-locking.html

In other words to locks on data access.




How do I determine the current value, and change it?


https://www.postgresql.org/docs/current/runtime-config.html 


(you already used the SHOW command to determine the current value)

David J.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread David G. Johnston
On Mon, Jan 10, 2022 at 12:09 PM Dominique Devienne 
wrote:

> Given max_locks_per_transaction * (max_connections + max_prepared_transactions)
> from https://www.postgresql.org/docs/current/runtime-config-locks.html, and
> max_conn being 100, that's not many locks.
>
> Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
> What does it mean in this context? relation = table?
>
I'm confused here a bit as well.  The items being talked about are tables
and indexes, both of which manifest as files on the filesystem.  But not
all relations do (e.g., views).  But if this isn't tied to the filesystem
then I would expect that other object types, especially functions, would
require locking as well, but those are decidedly not relations.


> How do I determine the current value, and change it?
https://www.postgresql.org/docs/current/runtime-config.html
(you already used the SHOW command to determine the current value)
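For example (the new value below is only an illustration; it is a server-level
setting, so a restart is needed):

```
SHOW max_locks_per_transaction;                     -- current value, 64 by default
ALTER SYSTEM SET max_locks_per_transaction = 1024;  -- written to postgresql.auto.conf
-- restart the server afterwards for the new value to take effect
```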

David J.


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Alvaro Herrera
On 2022-Jan-10, Dominique Devienne wrote:

> Given max_locks_per_transaction * (max_connections + max_prepared_transactions)
> from https://www.postgresql.org/docs/current/runtime-config-locks.html, and
> max_conn being 100, that's not many locks.

6400 locks, to be precise.  So if your schemas have on average 10 tables
each with 3 indexes per table, you could drop at most 160 schemas in one
go (but only if you're lucky.)

> Given there's only 64 locks per conn by default, how can this work with
> over 100 tables?
> I'm confused... --DD

That value indicates the maximum number of locks that can be taken
across all sessions at a time.  You can have a single session take that
number of locks, or all sessions take 64 locks each.
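The total size of that shared pool can be read off a running server, e.g.:

```
-- lock table slots = max_locks_per_transaction * (max_connections + max_prepared_transactions)
SELECT current_setting('max_locks_per_transaction')::int
       * (current_setting('max_connections')::int
          + current_setting('max_prepared_transactions')::int) AS lock_table_slots;
-- with the defaults mentioned above, 64 * (100 + 0) = 6400
```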

If you really have many more relations that need to be dropped, you
could try to issue "DROP SCHEMA...CASCADE" for each schema to drop.
It's a lot less convenient than DROP OWNED BY, but it doesn't require
taking as many locks simultaneously.
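For example, letting each DROP run as its own (autocommit) transaction so the
locks are released in between (names below are just the ones quoted earlier in
this thread):

```
-- the lock high-water mark is now bounded by the largest single schema,
-- not by everything the role owns
DROP SCHEMA "PNS:ec44cdde757b572a8f367277e67f1e3f" CASCADE;
-- ... repeat for each remaining schema owned by the role ...
DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)";  -- cleans up leftover grants
DROP ROLE "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)";
```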

-- 
Álvaro Herrera   39°49'30"S 73°17'W  —  https://www.EnterpriseDB.com/
"Los trabajadores menos efectivos son sistematicamente llevados al lugar
donde pueden hacer el menor daño posible: gerencia."  (El principio Dilbert)




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Adrian Klaver

On 1/10/22 11:12, Dominique Devienne wrote:

I accidentally replied to Tom only. Here's my response again.

Apparently, I'm using the default max_locks_per_transaction:
```
ddevienne=> show max_locks_per_transaction ;
 max_locks_per_transaction
---------------------------
                        64
(1 row)
```





Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
What does it mean in this context? relation = table?


https://www.postgresql.org/docs/14/glossary.html#GLOSSARY-RELATION




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Dominique Devienne
I accidentally replied to Tom only. Here's my response again.

Apparently, I'm using the default max_locks_per_transaction:
```
ddevienne=> show max_locks_per_transaction ;
 max_locks_per_transaction
---------------------------
                        64
(1 row)
```

Given max_locks_per_transaction * (max_connections + max_prepared_transactions)
from https://www.postgresql.org/docs/current/runtime-config-locks.html, and
max_conn being 100, that's not many locks.

Tom wrote "relation" for the number of locks necessary for DROP OWNED BY.
What does it mean in this context? relation = table?

Given there's only 64 locks per conn by default, how can this work with
over 100 tables?
I'm confused... --DD

On Mon, Jan 10, 2022 at 7:06 PM Dominique Devienne 
wrote:

> On Mon, Jan 10, 2022 at 6:39 PM Tom Lane  wrote:
>
>> Dominique Devienne  writes:
>> > I'm trying to DROP a ROLE that has 4 schemas:
>> > * 2 smallish ones (1 or 2 dozen tables each),
>> > * 2 largish ones (250 tables, totalling around 4000 columns each).
>>
>> > And of course there are various indexes, constraints, etc... on each
>> schema.
>>
>> You're going to need a lock per dropped relation.  The number of
>> columns or rows doesn't enter into it, but the number of indexes does.
>>
>
> Here are the current stats of the larguish schema:
>
> const size_t expected_table_count = 244;
> const size_t expected_index_count = 409;
> const size_t expected_unique_index_count = 181;
> const size_t expected_cnstr_count = 989;
> const size_t expected_pk_cnstr_count = 243;
> const size_t expected_fk_cnstr_count = 506;
> const size_t expected_check_cnstr_count = 64;
> const size_t expected_unique_cnstr_count = 176;
>
> > DDL Error: DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)"
>> > CASCADE: #53200: ERROR:  out of shared memory
>> > HINT:  You might need to increase max_locks_per_transaction.
>>
>> I'd not have expected that when dropping 500-or-so tables, but maybe
>> you have a lot of indexes per table?
>>
>
> See above for the total.
> Since we use ON DELETE CASCADE and FKs, and coming from Oracle, I do index
> all my FKs...
>
>
>> > And please note that there could be dozens even hundreds of largish
>> schemas
>> > associated to the dropped ROLE (2 + N), not just the 2+2 it fails with
>> here.
>>
>> There's not a lot of penalty to increasing max_locks_per_transaction,
>> but no you can't make it "unbounded".
>>
>
> Is the HINT valid? How do I determine the current value, and change it?
> Could it be some other problem?
>

> Dropping each largish schema individually is certainly possible, but again
> coming from Oracle,
> I'm used to making transactions as big as they logically need to be, with
> little physical limitations.
> The action is drop all schemas of that instance of the "system", which has
> 2+N schemas. So the
> fact I'd need to use several transactions to work-around
> max_locks_per_transaction is a bummer... --DD
>


Re: DROP OWNED BY fails with #53200: ERROR: out of shared memory

2022-01-10 Thread Tom Lane
Dominique Devienne  writes:
> I'm trying to DROP a ROLE that has 4 schemas:
> * 2 smallish ones (1 or 2 dozen tables each),
> * 2 largish ones (250 tables, totalling around 4000 columns each).

> And of course there are various indexes, constraints, etc... on each schema.

You're going to need a lock per dropped relation.  The number of
columns or rows doesn't enter into it, but the number of indexes does.

> DDL Error: DROP OWNED BY "Epos-PNS (a73e1fbdcc5b4460ba55703a0d6b6ad7)"
> CASCADE: #53200: ERROR:  out of shared memory
> HINT:  You might need to increase max_locks_per_transaction.

I'd not have expected that when dropping 500-or-so tables, but maybe
you have a lot of indexes per table?

> And please note that there could be dozens even hundreds of largish schemas
> associated to the dropped ROLE (2 + N), not just the 2+2 it fails with here.

There's not a lot of penalty to increasing max_locks_per_transaction,
but no you can't make it "unbounded".

regards, tom lane