On 2018-Dec-14, Robert Haas wrote:
> On Fri, Dec 14, 2018 at 12:27 PM Alvaro Herrera
> <alvhe...@2ndquadrant.com> wrote:
> > Maybe it'd be better to change temp table removal to always drop
> > max_locks_per_transaction objects at a time (ie. commit/start a new
> > transaction every so many objects).
>
> We're basically just doing DROP SCHEMA ... CASCADE, so I'm not sure
> how we'd implement that, but I agree it would be significantly better.

(Minor nit: even currently, we don't drop the schema itself, only the
objects it contains.)

I was thinking we could scan pg_depend for objects depending on the
schema, add them to an ObjectAddresses array, and do
performMultipleDeletions once every max_locks_per_transaction objects.
But for this to have any useful effect we'd have to commit the
transaction and start another one; maybe that's too onerous.

Maybe we could offer such a behavior as a special case, to be used only
when the regular mechanism fails: add a PG_TRY which, in case of
failure, sends a hint to do the cleanup.  Not sure this is worthwhile.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
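[Editor's sketch] To make the pg_depend idea above concrete, here is a rough,
untested sketch of the batched cleanup.  Only the catalog-scan and
dependency.c calls are real PostgreSQL APIs; the function name
RemoveTempRelationsBatched, the batching details, and the flag choices are
invented for illustration, and it ignores at least one hard problem noted in
the comments.

/*
 * Hypothetical batched replacement for dropping a temp schema's contents:
 * collect everything that pg_depend says depends on the schema, then drop
 * max_locks_per_transaction objects at a time, committing between batches.
 */
#include "postgres.h"

#include "access/genam.h"
#include "access/heapam.h"
#include "access/htup_details.h"
#include "access/xact.h"
#include "catalog/dependency.h"
#include "catalog/indexing.h"
#include "catalog/pg_depend.h"
#include "catalog/pg_namespace.h"
#include "storage/lock.h"
#include "utils/fmgroids.h"
#include "utils/memutils.h"

static void
RemoveTempRelationsBatched(Oid tempNamespaceId)
{
	Relation	depRel;
	ScanKeyData key[2];
	SysScanDesc scan;
	HeapTuple	tup;
	ObjectAddress *items;
	int			nitems = 0;
	int			maxitems = 64;
	int			i;

	/*
	 * The collected list must survive the intermediate commits, so don't
	 * allocate it in a transaction-lifetime memory context.
	 */
	items = MemoryContextAlloc(TopMemoryContext,
							   maxitems * sizeof(ObjectAddress));

	/* find every object recorded as depending on the temp schema */
	depRel = heap_open(DependRelationId, AccessShareLock);

	ScanKeyInit(&key[0],
				Anum_pg_depend_refclassid,
				BTEqualStrategyNumber, F_OIDEQ,
				ObjectIdGetDatum(NamespaceRelationId));
	ScanKeyInit(&key[1],
				Anum_pg_depend_refobjid,
				BTEqualStrategyNumber, F_OIDEQ,
				ObjectIdGetDatum(tempNamespaceId));

	scan = systable_beginscan(depRel, DependReferenceIndexId, true,
							  NULL, 2, key);

	while (HeapTupleIsValid(tup = systable_getnext(scan)))
	{
		Form_pg_depend dep = (Form_pg_depend) GETSTRUCT(tup);

		if (nitems >= maxitems)
		{
			maxitems *= 2;
			items = repalloc(items, maxitems * sizeof(ObjectAddress));
		}
		items[nitems].classId = dep->classid;
		items[nitems].objectId = dep->objid;
		items[nitems].objectSubId = dep->objsubid;
		nitems++;
	}

	systable_endscan(scan);
	heap_close(depRel, AccessShareLock);

	/* drop in batches sized to fit the lock table */
	for (i = 0; i < nitems; i += max_locks_per_transaction)
	{
		ObjectAddresses *targets = new_object_addresses();
		int			j;

		for (j = i; j < nitems && j < i + max_locks_per_transaction; j++)
			add_exact_object_address(&items[j], targets);

		/*
		 * XXX a real patch would have to cope with objects that an earlier
		 * batch already removed via CASCADE (e.g. a table's rowtype).
		 */
		performMultipleDeletions(targets, DROP_CASCADE,
								 PERFORM_DELETION_INTERNAL |
								 PERFORM_DELETION_QUIETLY);
		free_object_addresses(targets);

		/* this commit/restart is the part that may be too onerous */
		if (i + max_locks_per_transaction < nitems)
		{
			CommitTransactionCommand();
			StartTransactionCommand();
		}
	}

	pfree(items);
}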
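[Editor's sketch] The PG_TRY fallback mentioned at the end might look roughly
like the following.  The wrapper name and the hint wording are invented;
RemoveTempRelations is the existing static routine in namespace.c, so this
would have to live alongside it and relies on that file's existing includes.

/*
 * Hypothetical wrapper: try the normal all-at-once cleanup, and on failure
 * re-throw the error with a hint about how to do the cleanup.
 */
static void
RemoveTempRelationsWithHint(Oid tempNamespaceId)
{
	MemoryContext oldcontext = CurrentMemoryContext;

	PG_TRY();
	{
		/* the existing all-at-once cleanup */
		RemoveTempRelations(tempNamespaceId);
	}
	PG_CATCH();
	{
		ErrorData  *edata;

		/* error processing left us in ErrorContext; switch back first */
		MemoryContextSwitchTo(oldcontext);
		edata = CopyErrorData();
		FlushErrorState();

		/* attach a hint (wording invented) and re-throw the original error */
		edata->hint = pstrdup("You might need to increase max_locks_per_transaction "
							  "or drop the temporary objects in smaller batches.");
		ReThrowError(edata);
	}
	PG_END_TRY();
}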