Tom Lane wrote:
> Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>> My thinking is that when a page in the old format is read in, it's
>> converted to the new format before doing anything else with it.
>
> Yeah, I'm with Heikki on this.  What I see as a sane project definition
> is:
>
> * pg_migrator or equivalent to convert the system catalogs
> * a hook in ReadBuffer to allow a data page conversion procedure to
>   be applied, on the basis of checking for old page layout version.
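
To illustrate the second point, a minimal sketch of what such a hook could
do follows.  Only PageGetPageLayoutVersion, PageSetPageLayoutVersion and
PG_PAGE_LAYOUT_VERSION are existing bufpage.h names; ConvertPageIfNeeded
and PageLayoutConvert are made up for the example:

#include "postgres.h"
#include "storage/bufpage.h"

extern void PageLayoutConvert(Page page);	/* hypothetical */

/*
 * Sketch only: called from ReadBuffer after a page has been read in.
 */
static void
ConvertPageIfNeeded(Page page)
{
	/* the layout version lives in the page header (pd_pagesize_version) */
	if (PageGetPageLayoutVersion(page) < PG_PAGE_LAYOUT_VERSION)
	{
		/*
		 * Rewrite the page in place into the current layout.  This must
		 * work without catalog access, because it can run during crash
		 * recovery or on a PITR slave.
		 */
		PageLayoutConvert(page);
		PageSetPageLayoutVersion(page, PG_PAGE_LAYOUT_VERSION);
		/* the caller must mark the buffer dirty to persist the result */
	}
}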


pg_migrator is a separate tool which requires the old postgres version, and I would like to have a solution in the postgres binary that works without the old version being present. Very often the new postgres version is installed in the same location as the old one (e.g. /usr/bin), so normal users could have a problem.

I see three possible solutions:

1) a special postgres startup mode - postgres --upgrade-catalog
2) automatic conversion - postgres converts the catalog automatically on
   first startup on an old db cluster
3) (in compat mode) the catalog is converted on the fly (read/write),
   until upgrade mode is started
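
A rough sketch of how the three options could fit together at startup
(every name here is invented for illustration, none of this is existing
code):

#include "postgres.h"

/* hypothetical flags, for illustration only */
static bool upgrade_catalog_mode;	/* set by "postgres --upgrade-catalog" */
static bool auto_upgrade_catalog;	/* option 2 */
static bool compat_readwrite_mode;	/* option 3 */

extern bool CatalogIsOldVersion(void);		/* hypothetical */
extern void UpgradeSystemCatalogs(void);	/* hypothetical */

static void
MaybeUpgradeCatalogs(void)
{
	if (!CatalogIsOldVersion())
		return;					/* cluster already on the current layout */

	if (upgrade_catalog_mode || auto_upgrade_catalog)
		UpgradeSystemCatalogs();	/* options 1 and 2: convert up front */
	else
		compat_readwrite_mode = true;	/* option 3: convert catalog pages
										 * as they are read and written,
										 * until an upgrade is requested */
}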

> I think insisting on a downgrade option is an absolutely certain way
> of guaranteeing that the project will fail.

As I mentioned before, this is a nice-to-have requirement. I would like to keep it in mind, and if it causes a complexity explosion we can remove it from the requirement list.

> I'm not sure it's feasible to expect that we can change representations
> of user-defined types, either.  I don't see how you would do that
> without catalog access (to look up the UDT), and the page conversion
> procedure is going to have to be able to operate without catalog
> accesses.  (Thought experiment: a page is read in during crash recovery
> or PITR slave operation, and discovered to have the old format.)

The idea for handling an on-disk representation change of a data type is to keep both the old and the new datatype in/out functions. Newly created tables will contain the new type implementation, and old tables can be converted with an ALTER TABLE command on user request. The old data type could live in a compat library.
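
For example (mytype and the helper functions are invented; only the
PG_FUNCTION_* macros are the real V1 calling convention), the compat
library could carry the old input function next to the new one:

#include "postgres.h"
#include "fmgr.h"

/* hypothetical constructors for the two on-disk forms */
extern void *mytype_old_form(const char *str);
extern void *mytype_new_form(const char *str);

PG_FUNCTION_INFO_V1(mytype_in_v1);
PG_FUNCTION_INFO_V1(mytype_in_v2);

/* old representation: lives in the compat library, used by old tables */
Datum
mytype_in_v1(PG_FUNCTION_ARGS)
{
	char	   *str = PG_GETARG_CSTRING(0);

	PG_RETURN_POINTER(mytype_old_form(str));
}

/* new representation: used by newly created tables */
Datum
mytype_in_v2(PG_FUNCTION_ARGS)
{
	char	   *str = PG_GETARG_CSTRING(0);

	PG_RETURN_POINTER(mytype_new_form(str));
}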

> BTW, I thought of a likely upgrade problem that we haven't discussed
> (AFAIR) in any of the many threads on this subject.  What about an index
> access method change that involves an index-wide restructuring, such
> that it can't be done one page at a time?  A plausible example is
> changing hash indexes to have multiple buckets per page.  Presumably
> you can fix the index with REINDEX, but that doesn't meet the goal of
> limited downtime, if the index is big.  Is there another way?


Yes, there is a way: keep both the old and the new implementation of the index, each with a different oid. The primary key of the pg_am table would become name+pg_version - this is similar to the UDT solution. CREATE INDEX, as well as REINDEX, will use the current implementation.
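
Roughly like this (the struct, the OIDs and the version numbers are all
invented for illustration; this is not the real pg_am layout):

#include "postgres.h"
#include <string.h>

/* hypothetical: one row per (name, version) pair, each with its own OID */
typedef struct VersionedAm
{
	const char *amname;
	int			pg_version;		/* second half of the primary key */
	Oid			amoid;			/* distinct OID per implementation */
} VersionedAm;

static const VersionedAm hash_ams[] = {
	{"hash", 803, 16384},	/* old: one bucket per page (invented OID) */
	{"hash", 804, 16385},	/* new: multiple buckets per page (invented OID) */
};

/* CREATE INDEX and REINDEX would pick the newest implementation */
static const VersionedAm *
CurrentAmImplementation(const char *name)
{
	const VersionedAm *best = NULL;
	int			i;

	for (i = 0; i < (int) (sizeof(hash_ams) / sizeof(hash_ams[0])); i++)
	{
		if (strcmp(hash_ams[i].amname, name) == 0 &&
			(best == NULL || hash_ams[i].pg_version > best->pg_version))
			best = &hash_ams[i];
	}
	return best;
}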


                Zdenek
