On 05/27/2012 11:31 AM, Tom Lane wrote:


> Having said that, I've got to also say that I think we've fundamentally
> blown it with the current approach to upgrading extensions.  Because we
> dump all the extension member objects, the extension contents have got
> to be restorable into a new database version as-is, and that throws away
> most of the flexibility that we were trying to buy with the extension
> mechanism.  IMO we have *got* to get to a place where both pg_dump and
> pg_upgrade dump extensions just as "CREATE EXTENSION", and the sooner
> the better.  Once we have that, this type of issue could be addressed by
> having different contents of the extension creation script for different
> major server versions --- or maybe even the same server version but
> different python library versions, to take something on-point for this
> discussion.  For instance, Andrew's problem could be dealt with if the
> backport were distributed as an extension "json-backport", and then all
> that's needed in a new installation is an empty extension script of that
> name.
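
As I read it, the new-installation side of that would amount to nothing more than a control file plus a deliberately empty script, something like this (file names and settings are my sketch, not anything that ships today):

    # json-backport.control
    comment = 'backport of the built-in json type (no-op on 9.2+)'
    default_version = '1.0'
    relocatable = false

    -- json-backport--1.0.sql
    -- deliberately empty: this server version already has json built in

CREATE EXTENSION "json-backport" would then succeed everywhere, doing real work only on old servers.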
It sounds nice, but we'd have to make pg_upgrade drop its current assumption that the libraries wanted in the old version are named one for one the same as the libraries wanted in the new version. Currently it looks for every shared library named in probin in the old cluster (other than plpgsql.so), tries to LOAD each one in the new cluster, and errors out if it can't.
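
For anyone who hasn't looked at that code: the list pg_upgrade builds is essentially the result of this query, run in every database of the old cluster (my paraphrase, but close to what the source does):

    SELECT DISTINCT probin
    FROM pg_catalog.pg_proc
    WHERE prolang = 13              -- C-language functions
      AND probin IS NOT NULL
      AND probin != '$libdir/plpgsql';

It then effectively executes LOAD for each library name found, connected to the new cluster, and complains if any of them fails.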

My current, unspeakably ugly workaround for this behaviour is to supply a dummy library for the new cluster. The only other suggestion I've heard (from Bruce) is to null out the relevant probin entries before doing the upgrade. I'm not sure whether that's better or worse; it's certainly just about as ugly.
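
To spell out what Bruce's suggestion involves: nulling out probin is direct catalog surgery on the old cluster before the upgrade, along these lines (the library name is hypothetical, and hand-editing pg_proc carries all the hazards you'd expect):

    -- run as superuser in each affected database of the OLD cluster
    UPDATE pg_catalog.pg_proc
       SET probin = NULL
     WHERE probin = '$libdir/json';   -- hypothetical library name

That's just enough to keep pg_upgrade's library check from tripping, but the functions concerned are broken until something recreates them properly.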

So pg_upgrade definitely needs to get a lot smarter IMNSHO.


cheers

andrew
