On Sun, Feb 08, 2026 at 04:00:40PM -0600, Nathan Bossart wrote:
> On Sun, Feb 08, 2026 at 04:26:41PM -0500, Andres Freund wrote:
>> On 2026-02-05 15:29:55 -0600, Nathan Bossart wrote:
>>> +    * commands.  We can only do this for upgrades from v12 and newer; in
>>> +    * older versions, pg_largeobject_metadata was created WITH OIDS, so the
>>> +    * OID column is hidden and won't be dumped.
>>>      */
>> 
>> It's not really related to this change, but what is that WITH OIDS bit about?
>> Sure, they aren't shown by default, but all it takes to change that is to
>> explicitly add the output column? I'm not saying we have to do that, I just
>> don't understand the reasoning as written here.
> 
> IIRC the issue is that getTableAttrs() won't pick up the OID column on
> older versions.  It might be easy to fix that by adjusting its query for
> binary upgrades from <v12.  That could be worth doing, if for no other
> reason than to simplify some of the pg_dump code.  I'll make a note of it.

This was a little more painful than I expected, but this seems to be what
is required to allow COPY-ing pg_largeobject_metadata during binary
upgrades from < v12.
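
For context, the wrinkle on < v12 is that pg_largeobject_metadata was created
WITH OIDS, so its oid is a hidden system column (attnum = -2) that a bare COPY
of the table skips, whereas a COPY (SELECT ...) that names the column
explicitly does emit it.  Roughly (a sketch of the shape, not the exact
command pg_dump builds):

    COPY (SELECT oid, lomowner, lomacl
          FROM pg_largeobject_metadata) TO stdout;

versus a plain "COPY pg_largeobject_metadata TO stdout", which on those
versions would only emit lomowner and lomacl.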

-- 
nathan
From 4d5041f6bf9a5e20b8257b98e7e0fab6edc40bd9 Mon Sep 17 00:00:00 2001
From: Nathan Bossart <[email protected]>
Date: Wed, 11 Feb 2026 14:59:13 -0600
Subject: [PATCH v1 1/1] pg_upgrade: Remove version restriction on using COPY
 for pg_largeobject_metadata.

---
 src/bin/pg_dump/pg_backup_archiver.c |  7 +--
 src/bin/pg_dump/pg_dump.c            | 77 ++++++++++++++--------------
 2 files changed, 40 insertions(+), 44 deletions(-)

diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 35d3a07915d..dea26b8132a 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -2991,12 +2991,9 @@ _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH)
        /*
         * For binary upgrade mode, dump pg_largeobject_metadata and the
         * associated pg_shdepend rows. This is faster to restore than the
-        * equivalent set of large object commands.  We can only do this for
-        * upgrades from v12 and newer; in older versions, pg_largeobject_metadata
-        * was created WITH OIDS, so the OID column is hidden and won't be dumped.
+        * equivalent set of large object commands.
         */
-       if (ropt->binary_upgrade && AH->public.remoteVersion >= 120000 &&
-               strcmp(te->desc, "TABLE DATA") == 0 &&
+       if (ropt->binary_upgrade && strcmp(te->desc, "TABLE DATA") == 0 &&
                (te->catalogId.oid == LargeObjectMetadataRelationId ||
                 te->catalogId.oid == SharedDependRelationId))
                return REQ_DATA;
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2c3754d020f..ae57127e5dd 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1118,11 +1118,9 @@ main(int argc, char **argv)
         * For binary upgrade mode, dump the pg_shdepend rows for large objects
         * and maybe even pg_largeobject_metadata (see comment below for details).
         * This is faster to restore than the equivalent set of large object
-        * commands.  We can only do this for upgrades from v12 and newer; in
-        * older versions, pg_largeobject_metadata was created WITH OIDS, so the
-        * OID column is hidden and won't be dumped.
+        * commands.
         */
-       if (dopt.binary_upgrade && fout->remoteVersion >= 120000)
+       if (dopt.binary_upgrade)
        {
                TableInfo  *shdepend;
 
@@ -2406,11 +2404,14 @@ dumpTableData_copy(Archive *fout, const void *dcontext)
        column_list = fmtCopyColumnList(tbinfo, clistBuf);
 
        /*
-        * Use COPY (SELECT ...) TO when dumping a foreign table's data, and when
-        * a filter condition was specified.  For other cases a simple COPY
-        * suffices.
+        * Use COPY (SELECT ...) TO when dumping a foreign table's data, when a
+        * filter condition was specified, and when in binary upgrade mode and
+        * dumping an old pg_largeobject_metadata defined WITH OIDS.  For other
+        * cases a simple COPY suffices.
         */
-       if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
+       if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE ||
+               (fout->dopt->binary_upgrade && fout->remoteVersion < 120000 &&
+                tbinfo->dobj.catId.oid == LargeObjectMetadataRelationId))
        {
                /* Temporary allows to access to foreign tables to dump data */
                if (tbinfo->relkind == RELKIND_FOREIGN_TABLE)
@@ -3967,14 +3968,13 @@ getLOs(Archive *fout)
                                                 "FROM pg_largeobject_metadata 
");
 
        /*
-        * For binary upgrades from v12 or newer, we transfer
-        * pg_largeobject_metadata via COPY or by copying/linking its files from
-        * the old cluster.  On such upgrades, we only need to consider large
-        * objects that have comments or security labels, since we still restore
-        * those objects via COMMENT/SECURITY LABEL commands.
+        * For binary upgrades, we transfer pg_largeobject_metadata via COPY or by
+        * copying/linking its files from the old cluster.  On such upgrades, we
+        * only need to consider large objects that have comments or security
+        * labels, since we still restore those objects via COMMENT/SECURITY LABEL
+        * commands.
         */
-       if (dopt->binary_upgrade &&
-               fout->remoteVersion >= 120000)
+       if (dopt->binary_upgrade)
                appendPQExpBufferStr(loQry,
                                                         "WHERE oid IN "
                                                         "(SELECT objoid FROM 
pg_description "
@@ -4063,25 +4063,13 @@ getLOs(Archive *fout)
                        loinfo->dobj.components |= DUMP_COMPONENT_ACL;
 
                /*
-                * In binary-upgrade mode for LOs, we do *not* dump out the LO data,
-                * as it will be copied by pg_upgrade, which simply copies the
-                * pg_largeobject table.
-                *
-                * The story for LO metadata is more complicated.  For upgrades from
-                * versions older than v12, we use ordinary SQL commands to restore
-                * both the content of pg_largeobject_metadata and any associated
-                * pg_shdepend rows.  For upgrades from newer versions, we transfer
-                * this information via COPY or by copying/linking the files from the
-                * old cluster.  For such upgrades, we do not need to dump the data,
-                * ACLs, or definitions of large objects.
+                * In binary upgrade mode, pg_largeobject and pg_largeobject_metadata
+                * are transferred via COPY or by copying/linking the files from the
+                * old cluster.  Thus, we do not need to dump LO data, definitions, or
+                * ACLs.
                 */
                if (dopt->binary_upgrade)
-               {
-                       if (fout->remoteVersion >= 120000)
-                               loinfo->dobj.dump &= ~(DUMP_COMPONENT_DATA | DUMP_COMPONENT_ACL | DUMP_COMPONENT_DEFINITION);
-                       else
-                               loinfo->dobj.dump &= ~DUMP_COMPONENT_DATA;
-               }
+                       loinfo->dobj.dump &= ~(DUMP_COMPONENT_DATA | DUMP_COMPONENT_ACL | DUMP_COMPONENT_DEFINITION);
 
                /*
                 * Create a "BLOBS" data item for the group, too. This is just a
@@ -9296,12 +9284,10 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
                 * pg_shdepend so that the columns names are collected for the
                 * corresponding COPY commands.  Restoring the data for those catalogs
                 * is faster than restoring the equivalent set of large object
-                * commands.  We can only do this for upgrades from v12 and newer; in
-                * older versions, pg_largeobject_metadata was created WITH OIDS, so
-                * the OID column is hidden and won't be dumped.
+                * commands.
                 */
                if (!tbinfo->interesting &&
-                       !(fout->dopt->binary_upgrade && fout->remoteVersion >= 120000 &&
+                       !(fout->dopt->binary_upgrade &&
                          (tbinfo->dobj.catId.oid == LargeObjectMetadataRelationId ||
                           tbinfo->dobj.catId.oid == SharedDependRelationId)))
                        continue;
@@ -9442,7 +9428,18 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
                                                         "(pt.classoid = co.tableoid AND pt.objoid = co.oid)\n");
 
        appendPQExpBufferStr(q,
-                                                "WHERE a.attnum > 0::pg_catalog.int2\n"
+                                                "WHERE a.attnum > 0::pg_catalog.int2\n");
+
+       /*
+        * For binary upgrades from <v12, be sure to pick up
+        * pg_largeobject_metadata's oid column.
+        */
+       if (fout->dopt->binary_upgrade && fout->remoteVersion < 120000)
+               appendPQExpBufferStr(q,
+                                                        "OR (a.attnum = -2::pg_catalog.int2 AND src.tbloid = "
+                                                        CppAsString2(LargeObjectMetadataRelationId) ")\n");
+
+       appendPQExpBufferStr(q,
                                                 "ORDER BY a.attrelid, 
a.attnum");
 
        res = ExecuteSqlQuery(fout, q->data, PGRES_TUPLES_OK);
@@ -9510,7 +9507,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
                /* cross-check that we only got requested tables */
                if (tbinfo->relkind == RELKIND_SEQUENCE ||
                        (!tbinfo->interesting &&
-                        !(fout->dopt->binary_upgrade && fout->remoteVersion >= 120000 &&
+                        !(fout->dopt->binary_upgrade &&
                           (tbinfo->dobj.catId.oid == LargeObjectMetadataRelationId ||
                                tbinfo->dobj.catId.oid == SharedDependRelationId))))
                        pg_fatal("unexpected column data for table \"%s\"",
@@ -9544,7 +9541,9 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 
                for (int j = 0; j < numatts; j++, r++)
                {
-                       if (j + 1 != atoi(PQgetvalue(res, r, i_attnum)))
+                       if (j + 1 != atoi(PQgetvalue(res, r, i_attnum)) &&
+                               !(fout->dopt->binary_upgrade && fout->remoteVersion < 120000 &&
+                                 tbinfo->dobj.catId.oid == LargeObjectMetadataRelationId))
                                pg_fatal("invalid column numbering in table 
\"%s\"",
                                                 tbinfo->dobj.name);
                        tbinfo->attnames[j] = pg_strdup(PQgetvalue(res, r, i_attname));
-- 
2.50.1 (Apple Git-155)
