Jerry Sievers <[email protected]> writes:
> Kevin Grittner <[email protected]> writes:
>> Jerry Sievers <[email protected]> wrote:
>>> Planning to pg_upgrade some large (3TB) clusters using the hard-link
>>> method. The upgrade itself takes around 5 minutes. Unfortunately,
>>> the post-upgrade analyze of the entire cluster is going to take a
>>> minimum of 1.5 hours, even running several threads to analyze all
>>> tables. This was measured in an R&D environment.
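
For the multi-threaded analyze itself, a minimal sketch using vacuumdb
(assuming the new cluster's client binaries provide the --jobs and
--analyze-in-stages options) could look like this:

    # Analyze every database in the new cluster with 8 parallel workers.
    # --analyze-in-stages builds rough statistics first, so queries become
    # plannable quickly, then refines them in later passes.
    vacuumdb --all --analyze-in-stages --jobs=8
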
At least for some combinations of source and destination server
versions, it seems like it ought to be possible for pg_upgrade to just
move the old cluster's pg_statistic tables over to the new, as though
they were user data. pg_upgrade takes pains to preserve relation OIDs
and attnums, so the key values should be compatible. Except in
releases where we've added physical columns to pg_statistic or made a
non-backward-compatible redefinition of statistics meanings, it seems
like this should Just Work. In cases where it doesn't work, pg_dump
and reload of that table would not work either (even without the
anyarray problem).
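
To make the keying concrete: each pg_statistic row is identified by the
table's OID (starelid) and column number (staattnum), which is why
preserving those values would let the old rows stay meaningful in the
new cluster. A read-only sketch for inspecting them (the table name is
hypothetical):

    -- Per-column statistics keys plus two scalar fields for one table;
    -- starelid/staattnum are the values pg_upgrade keeps stable.
    SELECT starelid::regclass AS table_name,
           staattnum          AS column_number,
           stanullfrac,
           stadistinct
    FROM   pg_statistic
    WHERE  starelid = 'my_big_table'::regclass;  -- hypothetical table
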
regards, tom lane