Bruce,

* Bruce Momjian (br...@momjian.us) wrote:
> On Tue, Dec  5, 2017 at 09:16:02AM -0500, Stephen Frost wrote:
> > > There are clusters that take a long time to dump the schema from the old
> > > cluster and recreate it in the new cluster.  One idea of speeding up
> > > pg_upgrade would be to allow pg_upgrade to be run in two stages:
> > > 
> > > 1.  prevent system catalog changes while the old cluster is running, and
> > > dump the old cluster's schema and restore it in the new cluster
> > > 
> > > 2.  shut down the old cluster and copy/link the data files
> > 
> > Perhaps a bit more complicated, but couldn't we copy/link while the
> > old cluster is online and in backup mode, finish backup mode, shut down
> > the old cluster, and then play forward the WAL to catch any relation
> > extents being added or similar, and then flip to the new PG version?
> 
> Well, that would require reading the old WAL, which would add an
> additional compatibility requirement that seems unwise.

In my proposal, this would be the old version of PG reading the old WAL.

Thinking about it a bit further, though, I'm not sure it'd end up working
in link mode anyway, because of changes the old server could make to the
data files after the backup finishes but before it's shut down.
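
To be concrete, the copy-mode variant of that flow could look roughly
like this, using only tools that exist today.  Paths and the rsync
invocation are placeholders, and this glosses over backup_label and
WAL-retention details:

    # stage 1: bulk copy while the old cluster stays up
    psql -c "SELECT pg_start_backup('upgrade_copy', true)"
    rsync -a $OLD_DATADIR/ $COPY_DATADIR/
    psql -c "SELECT pg_stop_backup()"

    # stage 2: stop the old cluster, then let the *old* binaries replay
    # the remaining WAL into the copy so it picks up late changes
    # (pg_wal is pg_xlog on pre-10 clusters)
    pg_ctl -D $OLD_DATADIR stop -m fast
    rsync -a $OLD_DATADIR/pg_wal/ $COPY_DATADIR/pg_wal/
    $OLD_BIN/pg_ctl -D $COPY_DATADIR start -w
    $OLD_BIN/pg_ctl -D $COPY_DATADIR stop -m fast

    # only now hand the caught-up copy to the new version
    pg_upgrade -b $OLD_BIN -B $NEW_BIN -d $COPY_DATADIR -D $NEW_DATADIR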

We'd still need a way of dealing with the delta between the hard-link
trees after the old server is shut down, though, because there could be
new relation extents, at least.
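
As a first cut, something like this could at least spot relation segment
files that appeared after the link tree was built (paths are
placeholders), though with hard links it does nothing about changes made
inside files that were already linked:

    # run after the old cluster has been cleanly shut down
    cd $OLD_DATADIR
    find base global -type f | while read f; do
        [ -e "$LINK_TREE/$f" ] || echo "not in link tree: $f"
    done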

> > > My question is whether the schema dump/restore is time-consuming enough
> > > to warrant this optional more complex API, and whether people would
> > > support adding a server setting that prevented all system table changes?
> > 
> > When you say 'system table changes', you're referring to basically all
> > DDL, right?  Just wish to clarify as there might be some confusion
> > between the terminology you're using here and allow_system_table_mods.
> 
> Not only all DDL, but even updating them for the internal stuff, like
> pg_class.relfrozenxid.

Good point.  We'd really need a bullet-proof way to ensure that the
catalogs aren't changed during that window, and that seems difficult to
guarantee without a lot of work.
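
For instance, even with zero DDL, a plain VACUUM rewrites the table's
pg_class row ('t' below is just a stand-in table name):

    psql -c "SELECT relfrozenxid FROM pg_class WHERE relname = 't'"
    psql -c "VACUUM FREEZE t"
    psql -c "SELECT relfrozenxid FROM pg_class WHERE relname = 't'"
    # relfrozenxid has advanced, so the catalog row was updated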

> > Would we need to have autovacuum shut down too..?
> 
> Yes.

Ok, makes sense.
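
(Turning the launcher off cluster-wide is easy enough with existing
knobs, e.g.:

    psql -c "ALTER SYSTEM SET autovacuum = off"
    pg_ctl -D $OLD_DATADIR reload

though I'm not sure offhand that it interrupts a worker that's already
running, and anti-wraparound vacuums can kick in regardless.)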

> > The other concern is if there's changes made to the catalogs by non-DDL
> > activity that needs to be addressed too (logical replication?); nothing
> > definite springs to mind off-hand for me, but perhaps others will think
> > of things.
> 
> Yes, it could extend to many parts of the system, which is why I am
> asking if it is worth it.

My initial reaction is that it's worth it, but then I also think about
other issues (having to get an ANALYZE done on the new cluster before
opening it up, for example), and I start to wonder whether it'll end up
being too much risk for too little gain.
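
(The ANALYZE piece at least has reasonable tooling already; pg_upgrade
points the user at roughly:

    vacuumdb --all --analyze-in-stages

which gets minimal statistics in place quickly and then refines them,
but it's still a window where the new cluster isn't really usable.)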

Thanks!

Stephen
