Tom Lane wrote:
> Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> > Robert Haas wrote:
> >> I venture to guess that this is exactly the sort of thing that made
> >> Tom argue upthread that we shouldn't be putting a firing point in the
> >> middle of the drop operation.  Any slip-ups here will result in
> >> corrupt catalogs, and it's not exactly future-proof either.
> 
> > Well, is this kind of thing enough to punt the whole patch, or can we
> > chalk it up as the user's problem?
> 
> I don't really think that we want any events in the first release that
> are defined so that a bogus trigger can cause catalog corruption.
> That will, for example, guarantee that we can *never* open up the
> feature to non-superusers.  I think we'd be painting ourselves into a
> corner that we could not get out of.

Roger.

> > Another idea I just had was to scan the catalogs after the event trigger
> > and see if the Xmin for each tuple IsCurrentTransaction(), and if so
> > throw an error.
> 
> You mean examine every row in every catalog?  Doesn't sound like a great
> plan.

No, I mean the rows that are part of the set of objects to be deleted.

> I thought the proposal was to recompute the set of drop target objects,
> and complain if that had changed.

Yeah, that's what the patch I submitted upthread does.  The problem is
that pg_attribute rows are not part of that set; they are deleted
directly by heap_drop_with_catalog, which calls DeleteAttributeTuples.
So if the trigger function adds a column to the table being dropped, the
before and after sets still compare as identical, and that logic doesn't
detect that anything is amiss.
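To illustrate the blind spot, here is a toy model (not PostgreSQL source;
the OIDs and tuple shapes are made up) of the "recompute the drop target
set and compare" check.  Attribute rows are not members of the
dependency-derived drop set, so a column added by the trigger leaves the
comparison looking clean:

```python
# Toy model of the before/after drop-target comparison discussed above.
# Drop targets are modeled as (catalog, oid) pairs; the table's
# pg_attribute rows are NOT in this set, mirroring the real situation
# where heap_drop_with_catalog deletes them via DeleteAttributeTuples.

drop_targets_before = {("pg_class", 16384)}

# Hypothetical pg_attribute contents for table 16384 (attnum 1 and 2).
pg_attribute = {("pg_attribute", 16384, 1), ("pg_attribute", 16384, 2)}

# Suppose the event trigger function ran ALTER TABLE ... ADD COLUMN,
# inserting a third attribute row mid-drop:
pg_attribute.add(("pg_attribute", 16384, 3))

# Recomputing the drop target set after the trigger yields the same
# objects, because attribute rows were never tracked in it.
drop_targets_after = {("pg_class", 16384)}

check_passes = drop_targets_before == drop_targets_after
print(check_passes)  # True: the comparison sees nothing amiss
```

The point of the sketch is only that any check restricted to the
dependency-derived object set cannot notice changes made to rows that
are deleted outside that mechanism.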

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

