EnterpriseDB has created a new project at pgFoundry - http://pgfoundry.org/projects/pg-migrator/

pg_migrator is a tool that can in-place upgrade existing data without the usual dump/reload cycle.

The pg_migrator project site (at pgFoundry) contains a complete implementation of the functionality described below as well as a copy of the introductory document that I've included in this message.

We would welcome feedback on implementation details and ideas for improvements. 

        -- Korry

  Korry Douglas    [EMAIL PROTECTED]
  EnterpriseDB      http://www.enterprisedb.com


Upgrading a PostgreSQL database from one release to another can be an
expensive process. For minor upgrades, you can simply install new executables
and forget about upgrading existing data. But for major upgrades, you have to
export all of your data (using pg_dump), install the new release, run initdb
to create a new cluster, and then import your old data. If you have a lot of
data, that can take a considerable amount of time (hours?, days?). If you have
too much data, you may have to buy more storage since you need enough room to
hold the original data plus the exported data.

EnterpriseDB is contributing a new tool, pg_migrator, that can reduce the
amount of time (and disk space) required for many upgrades.


pg_migrator is a tool (not a complete solution) that performs an in-place
upgrade of existing data. For many upgrades, the data stored in user-defined
tables does not have to change when moving from one version of PostgreSQL to
another. Some upgrades require changes in the on-disk representation of data;
pg_migrator cannot help in those upgrades. However, many upgrades require no
changes to the on-disk representation of a user-defined table and, in those
cases, pg_migrator will move existing user-defined tables from the old
database cluster into the new cluster.

There are two factors that determine whether an in-place upgrade is practical:
the infrastructure layout shared by all tables, and the on-disk representation
of the individual data types that you use.

Every table in a cluster (actually, every table created by a given version)
shares the same infrastructure layout. By infrastructure, we mean the on-disk
representation of the table headers and trailers and the on-disk
representation of tuple headers. If the infrastructure changes between the old
version of PostgreSQL and the new version, pg_migrator cannot move existing
tables to the new cluster (you'll have to pg_dump the old data and then import
that data into the new cluster).

Occasionally, a PostgreSQL release introduces a change to the on-disk
representation of a data type. For example, PostgreSQL version 8.2 changes the
layout for values of type INET and CIDR. If you are not storing any values of
type INET (or CIDR), pg_migrator can upgrade any table in your cluster. If you
are storing values of type INET (or CIDR) in some tables, you must
export/import those tables, but pg_migrator can in-place upgrade other tables
(a change in infrastructure means that you have to export/import every table).

If a new version of PostgreSQL does not change the infrastructure layout and
does not change the on-disk representation of any data type that you are
using, you can use pg_migrator to save a tremendous amount of time (and disk
space).

To use pg_migrator during an upgrade, you start by installing a fresh cluster
(using the newest version) in a new directory. When you've finished installing
the new version, the new cluster will contain the new executables (postmaster,
pg_dump, ...) and the usual template0, template1, and postgres databases,
but no user-defined tables. At this point, you can shut down the new postmaster
(we presume that you shut down the old postmaster prior to creating the new
cluster) and invoke pg_migrator.

When pg_migrator starts, it runs through a verification process that ensures
that all required executables (the old postmaster, the new postmaster,
pg_dump, pg_resetxlog, ...) are present and contain the expected version
numbers. The verification process also checks the old and new $PGDATA
directories to ensure that the expected files and subdirectories (base,
global, pg_clog, pg_xlog, ...) are in place.  If the verification process
succeeds, pg_migrator starts the old postmaster and runs pg_dumpall
--schema-only to capture the metadata contained in the old cluster. The script
produced by pg_dumpall will be used in a later step to recreate the following
user-defined objects in the new cluster:

			 roles (users)
			 privileges (grants)
			 packages (EnterpriseDB clusters only)
			 synonyms (EnterpriseDB clusters only)
			 procedures (EnterpriseDB clusters only)
			 data types
			 procedural languages
			 aggregate functions
			 operator classes
			 encoding conversions 
			 inheritance relationships
			 rewrite rules
			 type casts

Note that the script produced by pg_dumpall will only recreate user-defined
objects, not system-defined objects (most object types are considered
“system-defined” if they reside in a schema whose name begins with “pg_”, if
they reside in the information_schema or, in the case of an EnterpriseDB
cluster, reside in the sys or dbo schemas). The new cluster will contain the
system-defined objects created by the latest version of PostgreSQL.

Once pg_migrator has extracted the metadata from the old cluster, it performs
a number of bookkeeping tasks required to “sync up” the new cluster with the
existing data.

First, pg_migrator temporarily renames any tablespace directories (in the old
cluster) – the new cluster will point to the same tablespace directories and
will complain (ERROR: directory not empty) if those directories exist when
pg_migrator imports the metadata (in a later step).
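The rename step amounts to moving each old tablespace directory out of the
way and remembering how to move it back. A minimal sketch (the suffix and
function names are invented for illustration):

```python
import os

def stash_tablespace_dirs(tablespace_paths, suffix=".migrator_old"):
    """Temporarily rename each old-cluster tablespace directory so that the
    new cluster's CREATE TABLESPACE does not find a non-empty directory.

    Returns (original, stashed) pairs so the renames can be undone later.
    """
    renamed = []
    for path in tablespace_paths:
        stashed = path + suffix
        os.rename(path, stashed)
        renamed.append((path, stashed))
    return renamed
```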

Next, pg_migrator copies the “next transaction ID” from the old cluster to the
new cluster. This is the first step in ensuring that the proper tuples are
visible (and other tuples are hidden) from the new cluster (remember,
pg_migrator does not export/import the content of user-defined tables so the
transaction ID's in the new cluster must match the transaction ID's in the old
data). pg_migrator also copies the starting address for write-ahead logs from
the old cluster to the new cluster.
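One plausible way to carry the transaction counter across is to read it from
pg_controldata output and stamp it into the new cluster with pg_resetxlog -x.
The sketch below is a guess at the mechanism, not pg_migrator's actual code;
the label text matched by the regular expression is an assumption about the
pg_controldata output format for your server version.

```python
import re

def next_xid_from_controldata(controldata_output):
    """Pull the next transaction ID out of pg_controldata output.

    Assumes a line like "Latest checkpoint's NextXID:  0/593" (the epoch
    prefix before the slash is optional); check your server's exact format.
    """
    m = re.search(r"Latest checkpoint's NextXID:\s+(?:\d+/)?(\d+)",
                  controldata_output)
    if m is None:
        raise ValueError("NextXID line not found in pg_controldata output")
    return int(m.group(1))

def resetxlog_command(new_pgdata, next_xid):
    """Build the pg_resetxlog invocation that stamps the old cluster's next
    transaction ID into the new cluster."""
    return ["pg_resetxlog", "-x", str(next_xid), new_pgdata]
```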

pg_migrator now shuts down the old postmaster and copies the commit logs
($PGDATA/pg_clog/*) from the old cluster to the new cluster. At this point,
the new cluster and the old cluster agree on the visibility of tuples in
user-defined tables.
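Copying the commit logs is a straight file copy of the pg_clog segments; a
minimal sketch (function name invented for illustration):

```python
import os
import shutil

def copy_commit_logs(old_pgdata, new_pgdata):
    """Overwrite the new cluster's commit-log segments with the old
    cluster's, so both clusters agree on which transactions committed."""
    src = os.path.join(old_pgdata, "pg_clog")
    dst = os.path.join(new_pgdata, "pg_clog")
    for name in os.listdir(src):
        shutil.copy2(os.path.join(src, name), os.path.join(dst, name))
```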

Now pg_migrator begins reconstructing the metadata (obtained from the old
cluster), running createdb for each database defined in the old cluster.

Once all of the databases have been created in the new cluster, pg_migrator
tackles the problem of naming toast relations. Toast tables are used to store
oversized data out-of-line (that is, in a separate file). When the server
decides to move a datum out of a tuple and into a toast table, it stores a
pointer in the original slot in the tuple. That pointer contains the
relfilenode (i.e. filename) of the toast table. That means that any table
which contains toasted data will contain the filename of the toast table (in
each toast pointer). Therefore, it is very important that toast tables retain
their old names when they are created in the new cluster. At the moment, the
CREATE TABLE command does not offer any explicit support for naming toast
tables (and it probably never will offer such a feature). To ensure that the
toast table names retain their old names, pg_migrator “reserves” the name of
each toast table before importing the metadata from the old cluster. To
reserve a filename, pg_migrator simply creates an empty file (with the
appropriate name) and the server avoids that name when it detects a
collision. In a later step, pg_migrator removes the “fake” toast tables and
patches the proper reltoastrelids back into the pg_class table so the new
cluster can find the old toast tables.
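The reservation trick itself is just creating and later deleting empty files
with the right names. A sketch under those assumptions (function names
invented; the real work of patching pg_class is of course done through the
server, not the filesystem):

```python
import os

def reserve_toast_filenames(dbdir, relfilenodes):
    """Create an empty placeholder file for each old toast relfilenode so
    the server, seeing the name taken, picks a different one for any new
    relation it creates while the metadata is being imported."""
    for node in relfilenodes:
        placeholder = os.path.join(dbdir, str(node))
        # O_EXCL: fail loudly if the name is somehow already in use.
        fd = os.open(placeholder, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)

def remove_reservations(dbdir, relfilenodes):
    """Later step: drop the placeholders so the real old toast files can be
    linked (or copied) into place under their original names."""
    for node in relfilenodes:
        os.remove(os.path.join(dbdir, str(node)))
```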

Next, pg_migrator executes the script produced earlier by pg_dumpall – this
script effectively copies the complete user-defined metadata from the old
cluster to the new cluster. When that script completes, pg_migrator (after
shutting down the new postmaster) deletes the fake toast tables and patches
the old toast tuple names into the new cluster (pg_migrator also removes and
recreates the corresponding pg_class and pg_type entries).

Finally, pg_migrator links (or copies, depending on the command-line options
supplied by the user) each user-defined table (including data tables, index tables, and
toast tables) from the old cluster to the new cluster. This is the time-saver;
instead of exporting and importing every tuple, pg_migrator simply renames
each user-defined table (or, at worst, copies the entire table if the clusters
reside on different filesystems). Note that in this last step, pg_migrator
assigns a new name to each relation (the name of the relation matches the
relfilenode in the new cluster).
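The link-or-copy decision for a single relation file can be sketched like
this (an illustration, not pg_migrator's source; the fallback on OSError
covers the cross-filesystem case, where hard links fail with EXDEV):

```python
import os
import shutil

def move_relation_file(old_path, new_path, link=True):
    """Bring one relation file into the new cluster under its new name.

    A hard link is nearly free but only works within one filesystem;
    otherwise fall back to copying the whole file.
    """
    if link:
        try:
            os.link(old_path, new_path)
            return "linked"
        except OSError:  # e.g. EXDEV: clusters on different filesystems
            pass
    shutil.copy2(old_path, new_path)
    return "copied"
```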

An important feature of the pg_migrator design is that it leaves the original
cluster intact – if a problem occurs during the upgrade, you can still run the
previous version.


We see pg_migrator as a component within a larger upgrade toolkit.  

One of the consequences of this perspective is that pg_migrator relies on the
user to determine whether an upgrade-in-place is possible. pg_migrator does
not check the CATALOG_VERSION so it will happily perform an in-place upgrade
even though it would be sufficient to simply replace the PostgreSQL
executables. pg_migrator cannot detect changes in relation-file headers or
trailers or in tuple headers. pg_migrator cannot detect changes in the on-disk
representation of a data type. Instead, pg_migrator should be invoked by a
tool (or user) that has already determined that an in-place upgrade is
possible.

Since pg_migrator is expected to function as part of a larger toolkit, it is
designed to produce status messages (and error messages) that can be easily
processed by other tools.

pg_migrator is written in C. To reduce source-code dependencies, pg_migrator
makes heavy use of existing tools such as pg_ctl, pg_dumpall, and pg_resetxlog.

pg_migrator can upgrade a cluster from one PostgreSQL release to another
PostgreSQL release, from one EnterpriseDB release to another, or from a
PostgreSQL cluster to an EnterpriseDB cluster.
