On Sun, May 27, 2012 at 11:31:12AM -0400, Tom Lane wrote:
> Bruce Momjian <br...@momjian.us> writes:
> > On Sun, May 27, 2012 at 08:48:54AM -0400, Andrew Dunstan wrote:
> >> "things like CREATE LANGUAGE plperl" is a rather vague phrase. The
> >> PL case could be easily handled by adding this to the query:
> >>
> >>     OR EXISTS (SELECT 1 FROM pg_catalog.pg_language
> >>                WHERE lanplcallfoid = p.oid)
> >>
> >> Do you know of any other cases that this would miss?
>
> Well, laninline and lanvalidator for two ;-)
>
> > The problem is I don't know. I don't know in what places we reference
> > shared object files implicitly but not explicitly, and I can't know
> > in what future places we might do this.
>
> The "future changes" argument seems like a straw man to me. We're
> already in the business of adjusting pg_upgrade when we make
> significant catalog changes.

The bottom line is that I just don't understand the rules for when a
function in the pg_catalog schema implicitly creates something that
references a shared object, and unless someone tells me, I am inclined
to just have pg_upgrade check everything and throw an error during
'check', rather than throw an error during the upgrade itself. If
someone did tell me, I would be happy to modify the pg_upgrade query to
match. Also, pg_upgrade rarely requires adjustments for major version
changes, and we want to keep it that way.
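To spell out the combined test Andrew and Tom are describing, here is
a sketch (untested, and only an approximation of what pg_upgrade
actually runs): list every C-language function that a procedural
language depends on, together with the shared library it lives in:

    SELECT p.oid::regprocedure AS handler_function,
           p.probin AS library
    FROM pg_catalog.pg_proc p
    WHERE p.probin IS NOT NULL
      AND EXISTS (SELECT 1 FROM pg_catalog.pg_language l
                  WHERE l.lanplcallfoid = p.oid
                     OR l.laninline     = p.oid
                     OR l.lanvalidator  = p.oid);

That covers the three pg_language columns mentioned above, but it is
exactly the kind of enumeration I am worried about: nothing tells me
it is complete.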
> > We are not writing a one-off pg_upgrade for JSON-backpatchers here.
>
> I tend to agree with that position, and particularly think that we
> should not allow the not-community-approved design of the existing
> JSON backport to drive changes to pg_upgrade. It would be better to
> ask first if there were a different way to construct that backport
> that would fit better with pg_upgrade.

Yep. A command-line flag just seems too user-visible for this
use-case, and too error-prone. I barely understand what is going on,
particularly with plpython in "public" (which we don't even fully
understand yet), so adding a command-line flag seems like the wrong
direction.

> Having said that, I've got to also say that I think we've
> fundamentally blown it with the current approach to upgrading
> extensions. Because we dump all the extension member objects, the
> extension contents have got to be restorable into a new database
> version as-is, and that throws away most of the flexibility that we
> were trying to buy with the extension mechanism. IMO we have *got*
> to get to a place where both pg_dump and pg_upgrade dump extensions
> just as "CREATE EXTENSION", and the sooner the better. Once we have
> that, this type of issue could be addressed by having different
> contents of the extension creation script for different major server
> versions --- or maybe even the same server version but different
> python library versions, to take something on-point for this
> discussion. For instance, Andrew's problem could be dealt with if
> the backport were distributed as an extension "json-backport", and
> then all that's needed in a new installation is an empty extension
> script of that name.
>
> More generally, this would mean that cross-version compatibility
> problems for extensions could generally be solved in the extension
> scripts, and not with kludges in pg_upgrade. As things stand, you
> can be sure that kludging pg_upgrade is going to be the only
> possible fix for a very wide variety of issues.
>
> I don't recall exactly what problems drove us to make pg_upgrade do
> what it does with extensions, but we need a different fix for them.

Uh, pg_upgrade doesn't do anything special with extensions, so it must
have been something people did in pg_dump.
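For what it's worth, my reading of the empty-script idea is the
following (hypothetical file names and contents, patterned on the
documented extension-script layout):

    # json_backport.control
    comment = 'backport of the built-in json type'
    default_version = '1.0'

    -- json_backport--1.0.sql, as shipped for a server release that
    -- already has json built in: intentionally empty, so that
    --     CREATE EXTENSION json_backport;
    -- succeeds without creating any objects, and a restore or
    -- pg_upgrade that replays the CREATE EXTENSION just works.

On an older server, the script of the same name would instead carry
the real CREATE TYPE and CREATE FUNCTION commands for the backported
type.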