On Mon, 27 Nov 2023 at 17:12, Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> On Mon, Nov 27, 2023 at 3:18 PM vignesh C <vignes...@gmail.com> wrote:
> >
> > On Sat, 25 Nov 2023 at 17:50, Amit Kapila <amit.kapil...@gmail.com> wrote:
> > >
> > > On Sat, Nov 25, 2023 at 7:21 AM vignesh C <vignes...@gmail.com> wrote:
> > > >
> > >
> > > Few comments on v19:
> > > ==================
> > > 1.
> > > +    <para>
> > > +     The subscriptions will be migrated to the new cluster in a disabled 
> > > state.
> > > +     After migration, do this:
> > > +    </para>
> > > +
> > > +    <itemizedlist>
> > > +     <listitem>
> > > +      <para>
> > > +       Enable the subscriptions by executing
> > > +       <link linkend="sql-altersubscription"><command>ALTER
> > > SUBSCRIPTION ... ENABLE</command></link>.
> > >
> > > The reason for this restriction is not very clear to me. Is it because
> > > we are using pg_dump for subscription and the existing functionality
> > > is doing it? If so, I think currently even connect is false.
> >
> > This was done this way so that the apply worker doesn't get started
> > while the upgrade is happening. Now that we have set
> > max_logical_replication_workers to 0, the apply workers will not get
> > started during the upgrade process. I think now we can create the
> > subscriptions with the same options as the old cluster in case of
> > upgrade.
> >
>
> Okay, but what is your plan to change it. Currently, we are relying on
> existing pg_dump code to dump subscriptions data, do you want to
> change that? There is a reason for the current behavior of pg_dump
> which as mentioned in docs is: "When dumping logical replication
> subscriptions, pg_dump will generate CREATE SUBSCRIPTION commands that
> use the connect = false option, so that restoring the subscription
> does not make remote connections for creating a replication slot or
> for initial table copy. That way, the dump can be restored without
> requiring network access to the remote servers. It is then up to the
> user to reactivate the subscriptions in a suitable way. If the
> involved hosts have changed, the connection information might have to
> be changed. It might also be appropriate to truncate the target tables
> before initiating a new full table copy."
>
> I guess one reason to not enable subscription after restore was that
> it can't work without origins, and also one can restore the dump in a
> totally different environment, and one may choose not to dump all the
> corresponding tables which I don't think is true for an upgrade. So,
> that could be one reason to do differently for upgrades. Do we see
> reasons similar to pg_dump/restore due to which after upgrade
> subscriptions may not work?

I felt that the behavior for upgrade can be slightly different from
that of a plain dump, as the subscription relations and the
replication origin will be migrated when the subscriber is upgraded.
And since the logical replication workers will not be started during
the upgrade, we can preserve the subscription's enabled status too. I
felt that just emitting an "ALTER SUBSCRIPTION sub-name ENABLE" for
the subscriptions that were enabled in the old cluster, as done in
the attached patch, should be fine. The behavior of dump itself is
not changed; it is retained as is.

Regards,
Vignesh
From 7bbce54014434f23ba1e30390bbb903ebf174134 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignes...@gmail.com>
Date: Tue, 28 Nov 2023 15:35:42 +0530
Subject: [PATCH v20 2/2] Retain the subscription oids during upgrade.

Retain the subscription oids during upgrade.
---
 src/backend/commands/subscriptioncmds.c       | 25 +++++++++++++++++--
 src/backend/utils/adt/pg_upgrade_support.c    | 10 ++++++++
 src/bin/pg_dump/pg_dump.c                     |  8 ++++++
 src/bin/pg_upgrade/t/004_subscription.pl      |  4 +++
 src/include/catalog/binary_upgrade.h          |  1 +
 src/include/catalog/pg_proc.dat               |  4 +++
 .../expected/spgist_name_ops.out              |  6 +++--
 7 files changed, 54 insertions(+), 4 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index edc82c11be..f839989208 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -75,6 +75,12 @@
 /* check if the 'val' has 'bits' set */
 #define IsSet(val, bits)  (((val) & (bits)) == (bits))
 
+/*
+ * This will be set by the pg_upgrade_support function --
+ * binary_upgrade_set_next_pg_subscription_oid().
+ */
+Oid			binary_upgrade_next_pg_subscription_oid = InvalidOid;
+
 /*
  * Structure to hold a bitmap representing the user-provided CREATE/ALTER
  * SUBSCRIPTION command options and the parsed/default values of each of them.
@@ -679,8 +685,23 @@ CreateSubscription(ParseState *pstate, CreateSubscriptionStmt *stmt,
 	memset(values, 0, sizeof(values));
 	memset(nulls, false, sizeof(nulls));
 
-	subid = GetNewOidWithIndex(rel, SubscriptionObjectIndexId,
-							   Anum_pg_subscription_oid);
+	/* Use binary-upgrade override for pg_subscription.oid? */
+	if (IsBinaryUpgrade)
+	{
+		if (!OidIsValid(binary_upgrade_next_pg_subscription_oid))
+			ereport(ERROR,
+					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+					 errmsg("pg_subscription OID value not set when in binary upgrade mode")));
+
+		subid = binary_upgrade_next_pg_subscription_oid;
+		binary_upgrade_next_pg_subscription_oid = InvalidOid;
+	}
+	else
+	{
+		subid = GetNewOidWithIndex(rel, SubscriptionObjectIndexId,
+								   Anum_pg_subscription_oid);
+	}
+
 	values[Anum_pg_subscription_oid - 1] = ObjectIdGetDatum(subid);
 	values[Anum_pg_subscription_subdbid - 1] = ObjectIdGetDatum(MyDatabaseId);
 	values[Anum_pg_subscription_subskiplsn - 1] = LSNGetDatum(InvalidXLogRecPtr);
diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index 53cfa72b6f..9445bf2aaf 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -179,6 +179,16 @@ binary_upgrade_set_next_pg_authid_oid(PG_FUNCTION_ARGS)
 	PG_RETURN_VOID();
 }
 
+Datum
+binary_upgrade_set_next_pg_subscription_oid(PG_FUNCTION_ARGS)
+{
+	Oid			subid = PG_GETARG_OID(0);
+
+	CHECK_IS_BINARY_UPGRADE;
+	binary_upgrade_next_pg_subscription_oid = subid;
+	PG_RETURN_VOID();
+}
+
 Datum
 binary_upgrade_create_empty_extension(PG_FUNCTION_ARGS)
 {
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 4a4bafba11..d008a5caaf 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -4963,6 +4963,14 @@ dumpSubscription(Archive *fout, const SubscriptionInfo *subinfo)
 	appendPQExpBuffer(delq, "DROP SUBSCRIPTION %s;\n",
 					  qsubname);
 
+	if (dopt->binary_upgrade)
+	{
+		appendPQExpBufferStr(query, "\n-- For binary upgrade, must preserve pg_subscription.oid\n");
+		appendPQExpBuffer(query,
+						  "SELECT pg_catalog.binary_upgrade_set_next_pg_subscription_oid('%u'::pg_catalog.oid);\n\n",
+						  subinfo->dobj.catId.oid);
+	}
+
 	appendPQExpBuffer(query, "CREATE SUBSCRIPTION %s CONNECTION ",
 					  qsubname);
 	appendStringLiteralAH(query, subinfo->subconninfo, fout);
diff --git a/src/bin/pg_upgrade/t/004_subscription.pl b/src/bin/pg_upgrade/t/004_subscription.pl
index 0b35afa1b6..924e69734b 100644
--- a/src/bin/pg_upgrade/t/004_subscription.pl
+++ b/src/bin/pg_upgrade/t/004_subscription.pl
@@ -171,6 +171,10 @@ $result = $new_sub->safe_psql('postgres',
 is($result, qq($remote_lsn), "remote_lsn should have been preserved");
 
 
+# The subscription oid should be preserved
+$result = $new_sub->safe_psql('postgres', "SELECT oid FROM pg_subscription");
+is($result, qq($sub_oid), "subscription oid should have been preserved");
+
 # Check the number of rows for each table on each server
 $result =
   $publisher->safe_psql('postgres', "SELECT count(*) FROM tab_upgraded1");
diff --git a/src/include/catalog/binary_upgrade.h b/src/include/catalog/binary_upgrade.h
index 82a9125ba9..dc7b251051 100644
--- a/src/include/catalog/binary_upgrade.h
+++ b/src/include/catalog/binary_upgrade.h
@@ -32,6 +32,7 @@ extern PGDLLIMPORT RelFileNumber binary_upgrade_next_toast_pg_class_relfilenumbe
 
 extern PGDLLIMPORT Oid binary_upgrade_next_pg_enum_oid;
 extern PGDLLIMPORT Oid binary_upgrade_next_pg_authid_oid;
+extern PGDLLIMPORT Oid binary_upgrade_next_pg_subscription_oid;
 
 extern PGDLLIMPORT bool binary_upgrade_record_init_privs;
 
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 45c681db5e..27184212c7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11406,6 +11406,10 @@
   provolatile => 'v', proparallel => 'u', prorettype => 'void',
   proargtypes => 'text pg_lsn',
   prosrc => 'binary_upgrade_replorigin_advance' },
+{ oid => '8406', descr => 'for use by pg_upgrade',
+  proname => 'binary_upgrade_set_next_pg_subscription_oid', provolatile => 'v',
+  proparallel => 'r', prorettype => 'void', proargtypes => 'oid',
+  prosrc => 'binary_upgrade_set_next_pg_subscription_oid' },
 
 # conversion functions
 { oid => '4302',
diff --git a/src/test/modules/spgist_name_ops/expected/spgist_name_ops.out b/src/test/modules/spgist_name_ops/expected/spgist_name_ops.out
index 1ee65ede24..39d43368c4 100644
--- a/src/test/modules/spgist_name_ops/expected/spgist_name_ops.out
+++ b/src/test/modules/spgist_name_ops/expected/spgist_name_ops.out
@@ -59,11 +59,12 @@ select * from t
  binary_upgrade_set_next_multirange_pg_type_oid       |  1 | binary_upgrade_set_next_multirange_pg_type_oid
  binary_upgrade_set_next_pg_authid_oid                |    | binary_upgrade_set_next_pg_authid_oid
  binary_upgrade_set_next_pg_enum_oid                  |    | binary_upgrade_set_next_pg_enum_oid
+ binary_upgrade_set_next_pg_subscription_oid          |    | binary_upgrade_set_next_pg_subscription_oid
  binary_upgrade_set_next_pg_tablespace_oid            |    | binary_upgrade_set_next_pg_tablespace_oid
  binary_upgrade_set_next_pg_type_oid                  |    | binary_upgrade_set_next_pg_type_oid
  binary_upgrade_set_next_toast_pg_class_oid           |  1 | binary_upgrade_set_next_toast_pg_class_oid
  binary_upgrade_set_next_toast_relfilenode            |    | binary_upgrade_set_next_toast_relfilenode
-(13 rows)
+(14 rows)
 
 -- Verify clean failure when INCLUDE'd columns result in overlength tuple
 -- The error message details are platform-dependent, so show only SQLSTATE
@@ -108,11 +109,12 @@ select * from t
  binary_upgrade_set_next_multirange_pg_type_oid       |  1 | binary_upgrade_set_next_multirange_pg_type_oid
  binary_upgrade_set_next_pg_authid_oid                |    | binary_upgrade_set_next_pg_authid_oid
  binary_upgrade_set_next_pg_enum_oid                  |    | binary_upgrade_set_next_pg_enum_oid
+ binary_upgrade_set_next_pg_subscription_oid          |    | binary_upgrade_set_next_pg_subscription_oid
  binary_upgrade_set_next_pg_tablespace_oid            |    | binary_upgrade_set_next_pg_tablespace_oid
  binary_upgrade_set_next_pg_type_oid                  |    | binary_upgrade_set_next_pg_type_oid
  binary_upgrade_set_next_toast_pg_class_oid           |  1 | binary_upgrade_set_next_toast_pg_class_oid
  binary_upgrade_set_next_toast_relfilenode            |    | binary_upgrade_set_next_toast_relfilenode
-(13 rows)
+(14 rows)
 
 \set VERBOSITY sqlstate
 insert into t values(repeat('xyzzy', 12), 42, repeat('xyzzy', 4000));
-- 
2.34.1

From 6f62b02d3fd5b86dd6f421293e3bc79ee932dfb8 Mon Sep 17 00:00:00 2001
From: Vignesh C <vignes...@gmail.com>
Date: Mon, 30 Oct 2023 12:31:59 +0530
Subject: [PATCH v20 1/2] Preserve the full subscription's state during
 pg_upgrade

Previously, only the subscription metadata information was preserved.  Without
the list of relations and their state, it's impossible to re-enable the
subscriptions without missing some records, as the list of relations can only
be refreshed after enabling the subscription (and therefore starting the apply
worker).  Even if we added a way to refresh the subscription while enabling a
publication, we still wouldn't know which relations are new on the publication
side, and therefore should be fully synced, and which shouldn't.

To fix this problem, this patch teaches pg_dump to restore the content of
pg_subscription_rel from the old cluster by using the
binary_upgrade_add_sub_rel_state SQL function. This is supported only
in binary upgrade mode.

The new SQL function binary_upgrade_add_sub_rel_state has the following
syntax:
SELECT binary_upgrade_add_sub_rel_state(subname text, relid oid, state char [,sublsn pg_lsn])

In the above, subname is the subscription name, relid is the relation
identifier, state is the state of the relation, and sublsn is the subscription
LSN, which is optional and defaults to NULL/InvalidXLogRecPtr if not provided.
pg_dump will retrieve these values (subname, relid, state and sublsn) from the
old cluster.

The subscription's replication origin is needed to ensure that we don't
replicate anything twice.

To achieve this, this patch teaches pg_dump to update the replication
origin along with CREATE SUBSCRIPTION by using the
binary_upgrade_replorigin_advance SQL function to restore the
underlying replication origin remote LSN. This is supported only in
binary upgrade mode.

The new SQL function binary_upgrade_replorigin_advance has the following
syntax:
SELECT binary_upgrade_replorigin_advance(subname text, sublsn pg_lsn)

In the above, subname is the subscription name and sublsn is the subscription
LSN. pg_dump will retrieve these values (subname and sublsn) from the old
cluster.

pg_upgrade will check that all the subscription relations are in 'i' (init) or
in 'r' (ready) state, and will error out if that's not the case, logging the
reason for the failure.

Author: Vignesh C, Julien Rouhaud
Reviewed-by: FIXME
Discussion: https://postgr.es/m/20230217075433.u5mjly4d5cr4hcfe@jrouhaud
---
 doc/src/sgml/ref/pgupgrade.sgml            |  50 +++
 src/backend/utils/adt/pg_upgrade_support.c | 125 +++++++
 src/bin/pg_dump/common.c                   |  22 ++
 src/bin/pg_dump/pg_dump.c                  | 227 ++++++++++++-
 src/bin/pg_dump/pg_dump.h                  |  17 +
 src/bin/pg_dump/pg_dump_sort.c             |  11 +-
 src/bin/pg_upgrade/check.c                 | 187 ++++++++++-
 src/bin/pg_upgrade/info.c                  |  56 +++-
 src/bin/pg_upgrade/meson.build             |   1 +
 src/bin/pg_upgrade/pg_upgrade.h            |   2 +
 src/bin/pg_upgrade/t/004_subscription.pl   | 368 +++++++++++++++++++++
 src/include/catalog/pg_proc.dat            |  10 +
 src/tools/pgindent/typedefs.list           |   1 +
 13 files changed, 1067 insertions(+), 10 deletions(-)
 create mode 100644 src/bin/pg_upgrade/t/004_subscription.pl

diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index 4f78e0e1c0..8c14047aa5 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -456,6 +456,56 @@ make prefix=/usr/local/pgsql.new install
 
    </step>
 
+   <step>
+    <title>Prepare for subscriber upgrades</title>
+
+    <para>
+     Set up the <link linkend="logical-replication-config-subscriber">
+     subscriber configurations</link> in the new subscriber.
+     <application>pg_upgrade</application> attempts to migrate subscription
+     dependencies, which include the subscription table information present in
+     the <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>
+     system catalog and the subscription replication origin. This allows
+     logical replication on the new subscriber to continue from where the
+     old subscriber left off. Migration of subscription dependencies is only
+     supported when the old cluster is version 17.0 or later. Subscription
+     dependencies on clusters before version 17.0 will silently be ignored.
+    </para>
+
+    <para>
+     There are some prerequisites for <application>pg_upgrade</application> to
+     be able to upgrade the subscriptions. If these are not met, an error
+     will be reported.
+    </para>
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       All the subscription tables in the old subscriber should be in state
+       <literal>i</literal> (initialize) or <literal>r</literal> (ready). This
+       can be verified by checking <link linkend="catalog-pg-subscription-rel">pg_subscription_rel</link>.<structfield>srsubstate</structfield>.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       The replication origin entry corresponding to each of the subscriptions
+       should exist in the old cluster. This can be found by checking
+       <link linkend="catalog-pg-subscription">pg_subscription</link> and
+       <link linkend="catalog-pg-replication-origin">pg_replication_origin</link>
+       system tables.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       The new cluster must have
+       <link linkend="guc-max-replication-slots"><varname>max_replication_slots</varname></link>
+       configured to a value greater than or equal to the number of
+       subscriptions present in the old cluster.
+      </para>
+     </listitem>
+    </itemizedlist>
+   </step>
+
    <step>
     <title>Stop both servers</title>
 
diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index 2f6fc86c3d..53cfa72b6f 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -11,15 +11,22 @@
 
 #include "postgres.h"
 
+#include "access/table.h"
 #include "catalog/binary_upgrade.h"
 #include "catalog/heap.h"
 #include "catalog/namespace.h"
+#include "catalog/pg_subscription_rel.h"
 #include "catalog/pg_type.h"
 #include "commands/extension.h"
 #include "miscadmin.h"
 #include "replication/logical.h"
+#include "replication/origin.h"
+#include "replication/worker_internal.h"
+#include "storage/lmgr.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
+#include "utils/pg_lsn.h"
+#include "utils/syscache.h"
 
 
 #define CHECK_IS_BINARY_UPGRADE									\
@@ -305,3 +312,121 @@ binary_upgrade_logical_slot_has_caught_up(PG_FUNCTION_ARGS)
 
 	PG_RETURN_BOOL(!found_pending_wal);
 }
+
+/*
+ * binary_upgrade_add_sub_rel_state
+ *
+ * Add the relation with the specified relation state to pg_subscription_rel
+ * catalog.
+ */
+Datum
+binary_upgrade_add_sub_rel_state(PG_FUNCTION_ARGS)
+{
+	Relation	rel;
+	HeapTuple	tup;
+	Oid			subid;
+	Form_pg_subscription form;
+	char	   *subname;
+	Oid			relid;
+	char		relstate;
+	XLogRecPtr	sublsn;
+
+	CHECK_IS_BINARY_UPGRADE;
+
+	/* We must check these things before dereferencing the arguments */
+	if (PG_ARGISNULL(0) || PG_ARGISNULL(1) || PG_ARGISNULL(2))
+		elog(ERROR, "null argument to binary_upgrade_add_sub_rel_state is not allowed");
+
+	subname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	relid = PG_GETARG_OID(1);
+	relstate = PG_GETARG_CHAR(2);
+	sublsn = PG_ARGISNULL(3) ? InvalidXLogRecPtr : PG_GETARG_LSN(3);
+
+	tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
+	if (!HeapTupleIsValid(tup))
+		ereport(ERROR,
+				errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				errmsg("relation %u does not exist", relid));
+	ReleaseSysCache(tup);
+
+	rel = table_open(SubscriptionRelationId, RowExclusiveLock);
+
+	/* Fetch the existing tuple. */
+	tup = SearchSysCache2(SUBSCRIPTIONNAME, MyDatabaseId,
+						  CStringGetDatum(subname));
+	if (!HeapTupleIsValid(tup))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("subscription \"%s\" does not exist", subname));
+
+	form = (Form_pg_subscription) GETSTRUCT(tup);
+	subid = form->oid;
+
+	AddSubscriptionRelState(subid, relid, relstate, sublsn);
+
+	ReleaseSysCache(tup);
+	table_close(rel, RowExclusiveLock);
+
+	PG_RETURN_VOID();
+}
+
+/*
+ * binary_upgrade_replorigin_advance
+ *
+ * Update the remote_lsn for the subscriber's replication origin.
+ */
+Datum
+binary_upgrade_replorigin_advance(PG_FUNCTION_ARGS)
+{
+	Relation	rel;
+	HeapTuple	tup;
+	Oid			subid;
+	Form_pg_subscription form;
+	char	   *subname;
+	XLogRecPtr	remote_commit;
+	char		originname[NAMEDATALEN];
+	RepOriginId node;
+
+	CHECK_IS_BINARY_UPGRADE;
+
+	/* We must check these things before dereferencing the arguments */
+	if (PG_ARGISNULL(0))
+		elog(ERROR, "null argument to binary_upgrade_replorigin_advance is not allowed");
+
+	subname = text_to_cstring(PG_GETARG_TEXT_PP(0));
+	remote_commit = PG_ARGISNULL(1) ? InvalidXLogRecPtr : PG_GETARG_LSN(1);
+
+	rel = table_open(SubscriptionRelationId, RowExclusiveLock);
+
+	/* Fetch the existing tuple. */
+	tup = SearchSysCacheCopy2(SUBSCRIPTIONNAME, MyDatabaseId,
+							  CStringGetDatum(subname));
+	if (!HeapTupleIsValid(tup))
+		ereport(ERROR,
+				errcode(ERRCODE_UNDEFINED_OBJECT),
+				errmsg("subscription \"%s\" does not exist", subname));
+
+	form = (Form_pg_subscription) GETSTRUCT(tup);
+	subid = form->oid;
+
+	ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname, sizeof(originname));
+
+	/* Lock to prevent the replication origin from vanishing */
+	LockRelationOid(ReplicationOriginRelationId, RowExclusiveLock);
+	node = replorigin_by_name(originname, false);
+
+	/*
+	 * The server will be stopped after setting up the objects in the new
+	 * cluster, and the origins will be flushed to disk during the shutdown
+	 * checkpoint.
+	 */
+	replorigin_advance(node, remote_commit, InvalidXLogRecPtr,
+					   false /* backward */ ,
+					   false /* WAL log */ );
+
+	UnlockRelationOid(ReplicationOriginRelationId, RowExclusiveLock);
+	heap_freetuple(tup);
+	table_close(rel, RowExclusiveLock);
+
+	PG_RETURN_VOID();
+}
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 8b0c1e7b53..764a39fcb9 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -24,6 +24,7 @@
 #include "catalog/pg_operator_d.h"
 #include "catalog/pg_proc_d.h"
 #include "catalog/pg_publication_d.h"
+#include "catalog/pg_subscription_d.h"
 #include "catalog/pg_type_d.h"
 #include "common/hashfn.h"
 #include "fe_utils/string_utils.h"
@@ -265,6 +266,9 @@ getSchemaData(Archive *fout, int *numTablesPtr)
 	pg_log_info("reading subscriptions");
 	getSubscriptions(fout);
 
+	pg_log_info("reading subscription membership of tables");
+	getSubscriptionTables(fout);
+
 	free(inhinfo);				/* not needed any longer */
 
 	*numTablesPtr = numTables;
@@ -978,6 +982,24 @@ findPublicationByOid(Oid oid)
 	return (PublicationInfo *) dobj;
 }
 
+/*
+ * findSubscriptionByOid
+ *	  finds the DumpableObject for the subscription with the given oid
+ *	  returns NULL if not found
+ */
+SubscriptionInfo *
+findSubscriptionByOid(Oid oid)
+{
+	CatalogId	catId;
+	DumpableObject *dobj;
+
+	catId.tableoid = SubscriptionRelationId;
+	catId.oid = oid;
+	dobj = findObjectByCatalogId(catId);
+	Assert(dobj == NULL || dobj->objType == DO_SUBSCRIPTION);
+	return (SubscriptionInfo *) dobj;
+}
+
 
 /*
  * recordExtensionMembership
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 34fd0a86e9..4a4bafba11 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -296,6 +296,7 @@ static void dumpPolicy(Archive *fout, const PolicyInfo *polinfo);
 static void dumpPublication(Archive *fout, const PublicationInfo *pubinfo);
 static void dumpPublicationTable(Archive *fout, const PublicationRelInfo *pubrinfo);
 static void dumpSubscription(Archive *fout, const SubscriptionInfo *subinfo);
+static void dumpSubscriptionTable(Archive *fout, const SubRelInfo *subrinfo);
 static void dumpDatabase(Archive *fout);
 static void dumpDatabaseConfig(Archive *AH, PQExpBuffer outbuf,
 							   const char *dbname, Oid dboid);
@@ -4583,6 +4584,95 @@ is_superuser(Archive *fout)
 	return false;
 }
 
+/*
+ * getSubscriptionTables
+ *	  Get information about subscription membership for dumpable tables. This
+ *    will be used only in binary-upgrade mode for PG17 or later versions.
+ */
+void
+getSubscriptionTables(Archive *fout)
+{
+	DumpOptions *dopt = fout->dopt;
+	SubscriptionInfo *subinfo = NULL;
+	SubRelInfo *subrinfo;
+	PQExpBuffer query;
+	PGresult   *res;
+	int			i_srsubid;
+	int			i_srrelid;
+	int			i_srsubstate;
+	int			i_srsublsn;
+	int			ntups;
+	Oid			last_srsubid = InvalidOid;
+
+	if (dopt->no_subscriptions || !dopt->binary_upgrade ||
+		fout->remoteVersion < 170000)
+		return;
+
+	query = createPQExpBuffer();
+	appendPQExpBuffer(query, "SELECT srsubid, srrelid, srsubstate, srsublsn"
+					  " FROM pg_catalog.pg_subscription_rel"
+					  " ORDER BY srsubid");
+	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
+
+	ntups = PQntuples(res);
+	if (ntups == 0)
+		goto cleanup;
+
+	/* Get pg_subscription_rel attributes */
+	i_srsubid = PQfnumber(res, "srsubid");
+	i_srrelid = PQfnumber(res, "srrelid");
+	i_srsubstate = PQfnumber(res, "srsubstate");
+	i_srsublsn = PQfnumber(res, "srsublsn");
+
+	subrinfo = pg_malloc(ntups * sizeof(SubRelInfo));
+	for (int i = 0; i < ntups; i++)
+	{
+		Oid			cur_srsubid = atooid(PQgetvalue(res, i, i_srsubid));
+		Oid			relid = atooid(PQgetvalue(res, i, i_srrelid));
+		TableInfo  *tblinfo;
+
+		/*
+		 * If we switched to a new subscription, check if the subscription
+		 * exists.
+		 */
+		if (cur_srsubid != last_srsubid)
+		{
+			subinfo = findSubscriptionByOid(cur_srsubid);
+			if (subinfo == NULL)
+				pg_fatal("subscription with OID %u does not exist", cur_srsubid);
+
+			last_srsubid = cur_srsubid;
+		}
+
+		tblinfo = findTableByOid(relid);
+		if (tblinfo == NULL)
+			pg_fatal("failed sanity check, table with OID %u not found",
+					 relid);
+
+		/* OK, make a DumpableObject for this relationship */
+		subrinfo[i].dobj.objType = DO_SUBSCRIPTION_REL;
+		subrinfo[i].dobj.catId.tableoid = relid;
+		subrinfo[i].dobj.catId.oid = cur_srsubid;
+		AssignDumpId(&subrinfo[i].dobj);
+		subrinfo[i].dobj.name = pg_strdup(subinfo->dobj.name);
+		subrinfo[i].tblinfo = tblinfo;
+		subrinfo[i].srsubstate = PQgetvalue(res, i, i_srsubstate)[0];
+		if (PQgetisnull(res, i, i_srsublsn))
+			subrinfo[i].srsublsn = NULL;
+		else
+			subrinfo[i].srsublsn = pg_strdup(PQgetvalue(res, i, i_srsublsn));
+
+		subrinfo[i].subinfo = subinfo;
+
+		/* Decide whether we want to dump it */
+		selectDumpableObject(&(subrinfo[i].dobj), fout);
+	}
+
+cleanup:
+	PQclear(res);
+	destroyPQExpBuffer(query);
+}
+
 /*
  * getSubscriptions
  *	  get information about subscriptions
@@ -4609,6 +4699,8 @@ getSubscriptions(Archive *fout)
 	int			i_subsynccommit;
 	int			i_subpublications;
 	int			i_suborigin;
+	int			i_suboriginremotelsn;
+	int			i_subenabled;
 	int			i,
 				ntups;
 
@@ -4664,16 +4756,33 @@ getSubscriptions(Archive *fout)
 		appendPQExpBufferStr(query,
 							 " s.subpasswordrequired,\n"
 							 " s.subrunasowner,\n"
-							 " s.suborigin\n");
+							 " s.suborigin,\n");
 	else
 		appendPQExpBuffer(query,
 						  " 't' AS subpasswordrequired,\n"
 						  " 't' AS subrunasowner,\n"
-						  " '%s' AS suborigin\n",
+						  " '%s' AS suborigin,\n",
 						  LOGICALREP_ORIGIN_ANY);
 
+	if (fout->remoteVersion >= 170000)
+		appendPQExpBufferStr(query, " o.remote_lsn AS suboriginremotelsn,\n");
+	else
+		appendPQExpBufferStr(query, " NULL AS suboriginremotelsn,\n");
+
+	if (dopt->binary_upgrade && fout->remoteVersion >= 170000)
+		appendPQExpBufferStr(query, " s.subenabled\n");
+	else
+		appendPQExpBufferStr(query, " false AS subenabled\n");
+
+	appendPQExpBufferStr(query,
+						 "FROM pg_subscription s\n");
+
+	if (fout->remoteVersion >= 170000)
+		appendPQExpBufferStr(query,
+							 "LEFT JOIN pg_catalog.pg_replication_origin_status o \n"
+							 "    ON o.external_id = 'pg_' || s.oid::text \n");
+
 	appendPQExpBufferStr(query,
-						 "FROM pg_subscription s\n"
 						 "WHERE s.subdbid = (SELECT oid FROM pg_database\n"
 						 "                   WHERE datname = current_database())");
 
@@ -4700,6 +4809,8 @@ getSubscriptions(Archive *fout)
 	i_subsynccommit = PQfnumber(res, "subsynccommit");
 	i_subpublications = PQfnumber(res, "subpublications");
 	i_suborigin = PQfnumber(res, "suborigin");
+	i_suboriginremotelsn = PQfnumber(res, "suboriginremotelsn");
+	i_subenabled = PQfnumber(res, "subenabled");
 
 	subinfo = pg_malloc(ntups * sizeof(SubscriptionInfo));
 
@@ -4737,6 +4848,13 @@ getSubscriptions(Archive *fout)
 		subinfo[i].subpublications =
 			pg_strdup(PQgetvalue(res, i, i_subpublications));
 		subinfo[i].suborigin = pg_strdup(PQgetvalue(res, i, i_suborigin));
+		if (PQgetisnull(res, i, i_suboriginremotelsn))
+			subinfo[i].suboriginremotelsn = NULL;
+		else
+			subinfo[i].suboriginremotelsn =
+				pg_strdup(PQgetvalue(res, i, i_suboriginremotelsn));
+		subinfo[i].subenabled =
+			pg_strdup(PQgetvalue(res, i, i_subenabled));
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(subinfo[i].dobj), fout);
@@ -4746,6 +4864,76 @@ getSubscriptions(Archive *fout)
 	destroyPQExpBuffer(query);
 }
 
+/*
+ * dumpSubscriptionTable
+ *	  Dump the definition of the given subscription table mapping. This will be
+ *    used only in binary-upgrade mode for PG17 or later versions.
+ */
+static void
+dumpSubscriptionTable(Archive *fout, const SubRelInfo *subrinfo)
+{
+	DumpOptions *dopt = fout->dopt;
+	SubscriptionInfo *subinfo = subrinfo->subinfo;
+	PQExpBuffer query;
+	char	   *tag;
+
+	/* Do nothing in data-only dump */
+	if (dopt->dataOnly)
+		return;
+
+	Assert(fout->dopt->binary_upgrade && fout->remoteVersion >= 170000);
+
+	tag = psprintf("%s %s", subinfo->dobj.name, subrinfo->dobj.name);
+
+	query = createPQExpBuffer();
+
+	if (subinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
+	{
+		/*
+		 * binary_upgrade_add_sub_rel_state will add the subscription relation
+		 * to pg_subscription_rel table. This will be used only in
+		 * binary-upgrade mode.
+		 */
+		appendPQExpBufferStr(query,
+							 "\n-- For binary upgrade, must preserve the subscriber table.\n");
+		appendPQExpBufferStr(query,
+							 "SELECT pg_catalog.binary_upgrade_add_sub_rel_state(");
+		appendStringLiteralAH(query, subrinfo->dobj.name, fout);
+		appendPQExpBuffer(query,
+						  ", %u, '%c'",
+						  subrinfo->tblinfo->dobj.catId.oid,
+						  subrinfo->srsubstate);
+
+		if (subrinfo->srsublsn && subrinfo->srsublsn[0] != '\0')
+			appendPQExpBuffer(query, ", '%s'", subrinfo->srsublsn);
+		else
+			appendPQExpBuffer(query, ", NULL");
+
+		appendPQExpBufferStr(query, ");\n");
+	}
+
+	/*
+	 * There is no point in creating a drop query as the drop is done by table
+	 * drop.  (If you think to change this, see also _printTocEntry().)
+	 * Although this object doesn't really have ownership as such, set the
+	 * owner field anyway to ensure that the command is run by the correct
+	 * role at restore time.
+	 */
+	if (subrinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
+		ArchiveEntry(fout, subrinfo->dobj.catId, subrinfo->dobj.dumpId,
+					 ARCHIVE_OPTS(.tag = tag,
+								  .namespace = subrinfo->tblinfo->dobj.namespace->dobj.name,
+								  .owner = subinfo->rolname,
+								  .description = "SUBSCRIPTION TABLE",
+								  .section = SECTION_POST_DATA,
+								  .createStmt = query->data));
+
+	/* These objects can't currently have comments or seclabels */
+
+	free(tag);
+	destroyPQExpBuffer(query);
+}
+
 /*
  * dumpSubscription
  *	  dump the definition of the given subscription
@@ -4826,6 +5014,35 @@ dumpSubscription(Archive *fout, const SubscriptionInfo *subinfo)
 
 	appendPQExpBufferStr(query, ");\n");
 
+	if (dopt->binary_upgrade && fout->remoteVersion >= 170000)
+	{
+		if (subinfo->suboriginremotelsn)
+		{
+			/*
+			 * Preserve the remote_lsn for the subscriber's replication
+			 * origin. This value will be stale if the publisher gets
+			 * upgraded; we currently have no mechanism to distinguish this
+			 * scenario. Even so, it is not a problem if the remote_lsn is
+			 * set to a stale value in this case, as the upgrade ensures
+			 * that all the transactions will be replicated before the
+			 * publisher is upgraded.
+			 */
+			appendPQExpBufferStr(query,
+								 "\n-- For binary upgrade, must preserve the remote_lsn for the subscriber's replication origin.\n");
+			appendPQExpBufferStr(query,
+								 "SELECT pg_catalog.binary_upgrade_replorigin_advance(");
+			appendStringLiteralAH(query, subinfo->dobj.name, fout);
+			appendPQExpBuffer(query, ", '%s');\n", subinfo->suboriginremotelsn);
+		}
+
+		if (strcmp(subinfo->subenabled, "t") == 0)
+		{
+			appendPQExpBufferStr(query,
+								 "\n-- For binary upgrade, must preserve the subscriber's running state.\n");
+			appendPQExpBuffer(query, "ALTER SUBSCRIPTION %s ENABLE;\n", qsubname);
+		}
+	}
+
 	if (subinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 		ArchiveEntry(fout, subinfo->dobj.catId, subinfo->dobj.dumpId,
 					 ARCHIVE_OPTS(.tag = subinfo->dobj.name,
@@ -10444,6 +10661,9 @@ dumpDumpableObject(Archive *fout, DumpableObject *dobj)
 		case DO_SUBSCRIPTION:
 			dumpSubscription(fout, (const SubscriptionInfo *) dobj);
 			break;
+		case DO_SUBSCRIPTION_REL:
+			dumpSubscriptionTable(fout, (const SubRelInfo *) dobj);
+			break;
 		case DO_PRE_DATA_BOUNDARY:
 		case DO_POST_DATA_BOUNDARY:
 			/* never dumped, nothing to do */
@@ -18510,6 +18730,7 @@ addBoundaryDependencies(DumpableObject **dobjs, int numObjs,
 			case DO_PUBLICATION_REL:
 			case DO_PUBLICATION_TABLE_IN_SCHEMA:
 			case DO_SUBSCRIPTION:
+			case DO_SUBSCRIPTION_REL:
 				/* Post-data objects: must come after the post-data boundary */
 				addObjectDependency(dobj, postDataBound->dumpId);
 				break;
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index 2fe3cbed9a..7ce34288ea 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -83,6 +83,7 @@ typedef enum
 	DO_PUBLICATION_REL,
 	DO_PUBLICATION_TABLE_IN_SCHEMA,
 	DO_SUBSCRIPTION,
+	DO_SUBSCRIPTION_REL,
 } DumpableObjectType;
 
 /*
@@ -660,6 +661,7 @@ typedef struct _SubscriptionInfo
 {
 	DumpableObject dobj;
 	const char *rolname;
+	char	   *subenabled;
 	char	   *subbinary;
 	char	   *substream;
 	char	   *subtwophasestate;
@@ -671,8 +673,21 @@ typedef struct _SubscriptionInfo
 	char	   *subsynccommit;
 	char	   *subpublications;
 	char	   *suborigin;
+	char	   *suboriginremotelsn;
 } SubscriptionInfo;
 
+/*
+ * The SubRelInfo struct is used to represent a subscription relation.
+ */
+typedef struct _SubRelInfo
+{
+	DumpableObject dobj;
+	SubscriptionInfo *subinfo;
+	TableInfo  *tblinfo;
+	char		srsubstate;
+	char	   *srsublsn;
+} SubRelInfo;
+
 /*
  *	common utility functions
  */
@@ -697,6 +712,7 @@ extern CollInfo *findCollationByOid(Oid oid);
 extern NamespaceInfo *findNamespaceByOid(Oid oid);
 extern ExtensionInfo *findExtensionByOid(Oid oid);
 extern PublicationInfo *findPublicationByOid(Oid oid);
+extern SubscriptionInfo *findSubscriptionByOid(Oid oid);
 
 extern void recordExtensionMembership(CatalogId catId, ExtensionInfo *ext);
 extern ExtensionInfo *findOwningExtension(CatalogId catalogId);
@@ -756,5 +772,6 @@ extern void getPublicationNamespaces(Archive *fout);
 extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],
 								 int numTables);
 extern void getSubscriptions(Archive *fout);
+extern void getSubscriptionTables(Archive *fout);
 
 #endif							/* PG_DUMP_H */
diff --git a/src/bin/pg_dump/pg_dump_sort.c b/src/bin/pg_dump/pg_dump_sort.c
index abfea15c09..e8d9c8ac86 100644
--- a/src/bin/pg_dump/pg_dump_sort.c
+++ b/src/bin/pg_dump/pg_dump_sort.c
@@ -94,6 +94,7 @@ enum dbObjectTypePriorities
 	PRIO_PUBLICATION_REL,
 	PRIO_PUBLICATION_TABLE_IN_SCHEMA,
 	PRIO_SUBSCRIPTION,
+	PRIO_SUBSCRIPTION_REL,
 	PRIO_DEFAULT_ACL,			/* done in ACL pass */
 	PRIO_EVENT_TRIGGER,			/* must be next to last! */
 	PRIO_REFRESH_MATVIEW		/* must be last! */
@@ -147,10 +148,11 @@ static const int dbObjectTypePriority[] =
 	PRIO_PUBLICATION,			/* DO_PUBLICATION */
 	PRIO_PUBLICATION_REL,		/* DO_PUBLICATION_REL */
 	PRIO_PUBLICATION_TABLE_IN_SCHEMA,	/* DO_PUBLICATION_TABLE_IN_SCHEMA */
-	PRIO_SUBSCRIPTION			/* DO_SUBSCRIPTION */
+	PRIO_SUBSCRIPTION,			/* DO_SUBSCRIPTION */
+	PRIO_SUBSCRIPTION_REL		/* DO_SUBSCRIPTION_REL */
 };
 
-StaticAssertDecl(lengthof(dbObjectTypePriority) == (DO_SUBSCRIPTION + 1),
+StaticAssertDecl(lengthof(dbObjectTypePriority) == (DO_SUBSCRIPTION_REL + 1),
 				 "array length mismatch");
 
 static DumpId preDataBoundId;
@@ -1472,6 +1474,11 @@ describeDumpableObject(DumpableObject *obj, char *buf, int bufsize)
 					 "SUBSCRIPTION (ID %d OID %u)",
 					 obj->dumpId, obj->catId.oid);
 			return;
+		case DO_SUBSCRIPTION_REL:
+			snprintf(buf, bufsize,
+					 "SUBSCRIPTION TABLE (ID %d OID %u)",
+					 obj->dumpId, obj->catId.oid);
+			return;
 		case DO_PRE_DATA_BOUNDARY:
 			snprintf(buf, bufsize,
 					 "PRE-DATA BOUNDARY  (ID %d)",
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index fa52aa2c22..4d6ae77e2d 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -34,7 +34,9 @@ static void check_for_pg_role_prefix(ClusterInfo *cluster);
 static void check_for_new_tablespace_dir(void);
 static void check_for_user_defined_encoding_conversions(ClusterInfo *cluster);
 static void check_new_cluster_logical_replication_slots(void);
+static void check_new_cluster_subscription_configuration(void);
 static void check_old_cluster_for_valid_slots(bool live_check);
+static void check_old_cluster_subscription_state(void);
 
 
 /*
@@ -112,13 +114,21 @@ check_and_dump_old_cluster(bool live_check)
 	check_for_reg_data_type_usage(&old_cluster);
 	check_for_isn_and_int8_passing_mismatch(&old_cluster);
 
-	/*
-	 * Logical replication slots can be migrated since PG17. See comments atop
-	 * get_old_cluster_logical_slot_infos().
-	 */
 	if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)
+	{
+		/*
+		 * Logical replication slots can be migrated since PG17. See comments
+		 * atop get_old_cluster_logical_slot_infos().
+		 */
 		check_old_cluster_for_valid_slots(live_check);
 
+		/*
+		 * Subscription dependencies can be migrated since PG17. See comments
+		 * atop get_db_subscription_count().
+		 */
+		check_old_cluster_subscription_state();
+	}
+
 	/*
 	 * PG 16 increased the size of the 'aclitem' type, which breaks the
 	 * on-disk format for existing data.
@@ -237,6 +247,8 @@ check_new_cluster(void)
 	check_for_new_tablespace_dir();
 
 	check_new_cluster_logical_replication_slots();
+
+	check_new_cluster_subscription_configuration();
 }
 
 
@@ -1538,6 +1550,52 @@ check_new_cluster_logical_replication_slots(void)
 	check_ok();
 }
 
+/*
+ * check_new_cluster_subscription_configuration()
+ *
+ * Verify that the max_replication_slots configuration specified is enough for
+ * creating the subscriptions.
+ */
+static void
+check_new_cluster_subscription_configuration(void)
+{
+	PGresult   *res;
+	PGconn	   *conn;
+	int			nsubs_on_old;
+	int			max_replication_slots;
+
+	/* Subscriptions can be migrated since PG17. */
+	if (GET_MAJOR_VERSION(old_cluster.major_version) < 1700)
+		return;
+
+	nsubs_on_old = count_old_cluster_subscriptions();
+
+	/* Quick return if there are no subscriptions to be migrated. */
+	if (nsubs_on_old == 0)
+		return;
+
+	prep_status("Checking for new cluster configuration for subscriptions");
+
+	conn = connectToServer(&new_cluster, "template1");
+
+	res = executeQueryOrDie(conn, "SELECT setting FROM pg_settings "
+							"WHERE name = 'max_replication_slots';");
+
+	if (PQntuples(res) != 1)
+		pg_fatal("could not determine parameter settings on new cluster");
+
+	max_replication_slots = atoi(PQgetvalue(res, 0, 0));
+	if (nsubs_on_old > max_replication_slots)
+		pg_fatal("max_replication_slots (%d) must be greater than or equal to the number of "
+				 "subscriptions (%d) on the old cluster",
+				 max_replication_slots, nsubs_on_old);
+
+	PQclear(res);
+	PQfinish(conn);
+
+	check_ok();
+}
+
 /*
  * check_old_cluster_for_valid_slots()
  *
@@ -1613,3 +1671,124 @@ check_old_cluster_for_valid_slots(bool live_check)
 
 	check_ok();
 }
+
+/*
+ * check_old_cluster_subscription_state()
+ *
+ * Verify that each of the subscriptions has all its corresponding tables in
+ * the i (initialize) or r (ready) state.
+ */
+static void
+check_old_cluster_subscription_state(void)
+{
+	FILE	   *script = NULL;
+	char		output_path[MAXPGPATH];
+	int			ntup;
+
+	prep_status("Checking for subscription state");
+
+	snprintf(output_path, sizeof(output_path), "%s/%s",
+			 log_opts.basedir,
+			 "subs_invalid.txt");
+	for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
+	{
+		PGresult   *res;
+		DbInfo	   *active_db = &old_cluster.dbarr.dbs[dbnum];
+		PGconn	   *conn = connectToServer(&old_cluster, active_db->db_name);
+
+		/* We need to check for pg_replication_origin only once. */
+		if (dbnum == 0)
+		{
+			/*
+			 * Check that all the subscriptions have their respective
+			 * replication origin.
+			 */
+			res = executeQueryOrDie(conn,
+									"SELECT d.datname, s.subname "
+									"FROM pg_catalog.pg_subscription s "
+									"LEFT OUTER JOIN pg_catalog.pg_replication_origin o "
+									"	ON o.roname = 'pg_' || s.oid "
+									"INNER JOIN pg_catalog.pg_database d "
+									"	ON d.oid = s.subdbid "
+									"WHERE o.roname IS NULL;");
+
+			ntup = PQntuples(res);
+			for (int i = 0; i < ntup; i++)
+			{
+				if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
+					pg_fatal("could not open file \"%s\": %s",
+							 output_path, strerror(errno));
+				fprintf(script, "replication origin is missing for database:\"%s\" subscription:\"%s\"\n",
+						PQgetvalue(res, i, 0),
+						PQgetvalue(res, i, 1));
+			}
+			PQclear(res);
+		}
+
+		/*
+		 * A slot not created yet refers to the 'i' (initialize) state, while
+		 * 'r' (ready) state refers to a slot created previously but already
+		 * dropped. These states are supported for pg_upgrade. The other
+		 * states listed below are not supported:
+		 *
+		 * a) SUBREL_STATE_DATASYNC: A relation upgraded while in this state
+		 * would retain a replication slot, which could not be dropped by the
+		 * sync worker spawned after the upgrade because the subscription ID
+		 * tracked by the publisher does not match anymore.
+		 *
+		 * b) SUBREL_STATE_SYNCDONE: A relation upgraded while in this state
+		 * would retain the replication origin when there is a failure in
+		 * tablesync worker immediately after dropping the replication slot in
+		 * the publisher.
+		 *
+		 * c) SUBREL_STATE_FINISHEDCOPY: A tablesync worker spawned to work on
+		 * a relation upgraded while in this state would expect an origin ID
+		 * with the OID of the subscription used before the upgrade, causing
+		 * it to fail.
+		 *
+		 * d) SUBREL_STATE_SYNCWAIT, SUBREL_STATE_CATCHUP and
+		 * SUBREL_STATE_UNKNOWN: These states are not stored in the catalog,
+		 * so we need not allow these states.
+		 */
+		res = executeQueryOrDie(conn,
+								"SELECT s.subname, n.nspname, c.relname, r.srsubstate "
+								"FROM pg_catalog.pg_subscription_rel r "
+								"LEFT JOIN pg_catalog.pg_subscription s"
+								"	ON r.srsubid = s.oid "
+								"LEFT JOIN pg_catalog.pg_class c"
+								"	ON r.srrelid = c.oid "
+								"LEFT JOIN pg_catalog.pg_namespace n"
+								"	ON c.relnamespace = n.oid "
+								"WHERE r.srsubstate NOT IN ('i', 'r') "
+								"ORDER BY s.subname");
+
+		ntup = PQntuples(res);
+		for (int i = 0; i < ntup; i++)
+		{
+			if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
+				pg_fatal("could not open file \"%s\": %s",
+						 output_path, strerror(errno));
+
+			fprintf(script, "database:\"%s\" subscription:\"%s\" schema:\"%s\" relation:\"%s\" state:\"%s\" not in required state\n",
+					active_db->db_name,
+					PQgetvalue(res, i, 0),
+					PQgetvalue(res, i, 1),
+					PQgetvalue(res, i, 2),
+					PQgetvalue(res, i, 3));
+		}
+
+		PQclear(res);
+		PQfinish(conn);
+	}
+
+	if (script)
+	{
+		fclose(script);
+		pg_log(PG_REPORT, "fatal");
+		pg_fatal("Your installation contains subscriptions that are missing a replication origin or that have relations not in the i (initialize) or r (ready) state.\n"
+				 "A list of the problem subscriptions is in the file:\n"
+				 "    %s", output_path);
+	}
+	else
+		check_ok();
+}
diff --git a/src/bin/pg_upgrade/info.c b/src/bin/pg_upgrade/info.c
index 4878aa22bf..fb8250002f 100644
--- a/src/bin/pg_upgrade/info.c
+++ b/src/bin/pg_upgrade/info.c
@@ -28,6 +28,7 @@ static void print_db_infos(DbInfoArr *db_arr);
 static void print_rel_infos(RelInfoArr *rel_arr);
 static void print_slot_infos(LogicalSlotInfoArr *slot_arr);
 static void get_old_cluster_logical_slot_infos(DbInfo *dbinfo, bool live_check);
+static void get_db_subscription_count(DbInfo *dbinfo);
 
 
 /*
@@ -293,10 +294,14 @@ get_db_rel_and_slot_infos(ClusterInfo *cluster, bool live_check)
 		get_rel_infos(cluster, pDbInfo);
 
 		/*
-		 * Retrieve the logical replication slots infos for the old cluster.
+		 * Retrieve the logical replication slots infos and the subscriptions
+		 * count for the old cluster.
 		 */
 		if (cluster == &old_cluster)
+		{
 			get_old_cluster_logical_slot_infos(pDbInfo, live_check);
+			get_db_subscription_count(pDbInfo);
+		}
 	}
 
 	if (cluster == &old_cluster)
@@ -730,6 +735,55 @@ count_old_cluster_logical_slots(void)
 	return slot_count;
 }
 
+/*
+ * get_db_subscription_count()
+ *
+ * Gets the number of subscriptions in the database.
+ *
+ * Note: This function will not do anything if the old cluster is pre-PG17.
+ * This is because logical slots are not upgraded from versions before that, so
+ * we would not be able to upgrade the logical replication clusters completely.
+ */
+static void
+get_db_subscription_count(DbInfo *dbinfo)
+{
+	PGconn	   *conn;
+	PGresult   *res;
+
+	/* Subscriptions can be migrated since PG17. */
+	if (GET_MAJOR_VERSION(old_cluster.major_version) < 1700)
+		return;
+
+	conn = connectToServer(&old_cluster, dbinfo->db_name);
+	res = executeQueryOrDie(conn, "SELECT count(*) "
+							"FROM pg_catalog.pg_subscription WHERE subdbid = %u",
+							dbinfo->db_oid);
+	dbinfo->nsubs = atoi(PQgetvalue(res, 0, 0));
+
+	PQclear(res);
+	PQfinish(conn);
+}
+
+/*
+ * count_old_cluster_subscriptions()
+ *
+ * Returns the number of subscriptions for all databases.
+ *
+ * Note: this function always returns 0 if the old_cluster is PG16 and prior
+ * because we gather subscriptions only for cluster versions greater than or
+ * equal to PG17. See get_db_subscription_count().
+ */
+int
+count_old_cluster_subscriptions(void)
+{
+	int			nsubs = 0;
+
+	for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
+		nsubs += old_cluster.dbarr.dbs[dbnum].nsubs;
+
+	return nsubs;
+}
+
 static void
 free_db_and_rel_infos(DbInfoArr *db_arr)
 {
diff --git a/src/bin/pg_upgrade/meson.build b/src/bin/pg_upgrade/meson.build
index 3e8a08e062..32f12f9e27 100644
--- a/src/bin/pg_upgrade/meson.build
+++ b/src/bin/pg_upgrade/meson.build
@@ -43,6 +43,7 @@ tests += {
       't/001_basic.pl',
       't/002_pg_upgrade.pl',
       't/003_logical_slots.pl',
+      't/004_subscription.pl',
     ],
     'test_kwargs': {'priority': 40}, # pg_upgrade tests are slow
   },
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index a710f325de..d63f13fffc 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -195,6 +195,7 @@ typedef struct
 											 * path */
 	RelInfoArr	rel_arr;		/* array of all user relinfos */
 	LogicalSlotInfoArr slot_arr;	/* array of all LogicalSlotInfo */
+	int			nsubs;			/* number of subscriptions */
 } DbInfo;
 
 /*
@@ -421,6 +422,7 @@ FileNameMap *gen_db_file_maps(DbInfo *old_db,
 							  const char *new_pgdata);
 void		get_db_rel_and_slot_infos(ClusterInfo *cluster, bool live_check);
 int			count_old_cluster_logical_slots(void);
+int			count_old_cluster_subscriptions(void);
 
 /* option.c */
 
diff --git a/src/bin/pg_upgrade/t/004_subscription.pl b/src/bin/pg_upgrade/t/004_subscription.pl
new file mode 100644
index 0000000000..0b35afa1b6
--- /dev/null
+++ b/src/bin/pg_upgrade/t/004_subscription.pl
@@ -0,0 +1,368 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+# Test for pg_upgrade of logical subscription
+use strict;
+use warnings;
+
+use File::Find qw(find);
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Can be changed to test the other modes.
+my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';
+
+# Initialize publisher node
+my $publisher = PostgreSQL::Test::Cluster->new('publisher');
+$publisher->init(allows_streaming => 'logical');
+$publisher->start;
+
+# Initialize the old subscriber node
+my $old_sub = PostgreSQL::Test::Cluster->new('old_sub');
+$old_sub->init;
+$old_sub->start;
+my $oldbindir = $old_sub->config_data('--bindir');
+
+# Initialize the new subscriber
+my $new_sub = PostgreSQL::Test::Cluster->new('new_sub');
+$new_sub->init;
+my $newbindir = $new_sub->config_data('--bindir');
+
+sub insert_line_at_pub
+{
+	my $payload = shift;
+
+	foreach ("tab_upgraded1", "tab_upgraded2", "tab_not_upgraded1")
+	{
+		$publisher->safe_psql('postgres',
+			"INSERT INTO " . $_ . " (val) VALUES('$payload')");
+	}
+}
+
+# Initial setup
+foreach ("tab_upgraded1", "tab_upgraded2", "tab_not_upgraded1")
+{
+	$publisher->safe_psql('postgres',
+		"CREATE TABLE " . $_ . " (id serial, val text)");
+	$old_sub->safe_psql('postgres',
+		"CREATE TABLE " . $_ . " (id serial, val text)");
+}
+insert_line_at_pub('before initial sync');
+
+# Setup logical replication
+my $connstr = $publisher->connstr . ' dbname=postgres';
+
+$publisher->safe_psql('postgres',
+	"CREATE PUBLICATION regress_pub FOR TABLE tab_upgraded1");
+
+$old_sub->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_sub CONNECTION '$connstr' PUBLICATION regress_pub"
+);
+
+$old_sub->wait_for_subscription_sync($publisher, 'regress_sub');
+
+# After the above wait_for_subscription_sync call the table can be either in
+# 'syncdone' or in 'ready' state. Now wait till the table reaches 'ready' state.
+my $synced_query =
+  "SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'r'";
+$old_sub->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for the table to reach ready state";
+
+# ------------------------------------------------------
+# Check that pg_upgrade is successful when all tables are in ready or in
+# init state.
+# ------------------------------------------------------
+$publisher->safe_psql('postgres',
+	"INSERT INTO tab_upgraded1 VALUES (generate_series(2,50), 'before initial sync')"
+);
+$publisher->wait_for_catchup('regress_sub');
+
+$publisher->safe_psql('postgres', "CREATE PUBLICATION regress_pub1");
+$old_sub->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION regress_pub1"
+);
+$old_sub->wait_for_subscription_sync($publisher, 'regress_sub1');
+
+# Change configuration to prepare a subscription table in init state
+$old_sub->append_conf('postgresql.conf',
+	"max_logical_replication_workers = 0");
+$old_sub->restart;
+
+# Add tab_upgraded2 to the publication. Now publication has tab_upgraded1
+# and tab_upgraded2 tables.
+$publisher->safe_psql('postgres',
+	"ALTER PUBLICATION regress_pub ADD TABLE tab_upgraded2");
+
+$old_sub->safe_psql('postgres',
+	"ALTER SUBSCRIPTION regress_sub REFRESH PUBLICATION");
+
+# Get the subscription oid of the old subscriber
+my $sub_oid =
+  $old_sub->safe_psql('postgres',
+	"SELECT oid FROM pg_subscription WHERE subname = 'regress_sub'");
+
+# The tables will be in init state as the subscriber configuration for
+# max_logical_replication_workers is set to 0.
+$synced_query =
+  "SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'i'";
+$old_sub->poll_query_until('postgres', $synced_query)
+  or die "Timed out while waiting for the table to reach init state";
+
+# Get the replication origin remote_lsn of the old subscriber
+my $remote_lsn = $old_sub->safe_psql('postgres',
+	"SELECT remote_lsn FROM pg_replication_origin_status WHERE external_id = 'pg_' || $sub_oid"
+);
+$old_sub->safe_psql('postgres', "ALTER SUBSCRIPTION regress_sub DISABLE");
+
+$old_sub->stop;
+
+# Insert a row in tab_upgraded1 and tab_not_upgraded1 publisher table while
+# it's down.
+insert_line_at_pub('while old_sub is down');
+
+command_ok(
+	[
+		'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,
+		'-D', $new_sub->data_dir, '-b', $oldbindir,
+		'-B', $newbindir, '-s', $new_sub->host,
+		'-p', $old_sub->port, '-P', $new_sub->port,
+		$mode
+	],
+	'run of pg_upgrade for old instance when the subscription tables are in ready or init state'
+);
+ok( !-d $new_sub->data_dir . "/pg_upgrade_output.d",
+	"pg_upgrade_output.d/ removed after successful pg_upgrade");
+
+# Add tab_not_upgraded1 to the publication. Now publication has tab_upgraded1,
+# tab_upgraded2 and tab_not_upgraded1 tables.
+$publisher->safe_psql('postgres',
+	"ALTER PUBLICATION regress_pub ADD TABLE tab_not_upgraded1");
+
+$new_sub->start;
+
+# The subscription's running status should be preserved
+my $result =
+  $new_sub->safe_psql('postgres',
+	"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub'");
+is($result, qq(f),
+	"check that the subscription that was disabled on the old subscriber is disabled on the new subscriber"
+);
+$result =
+  $new_sub->safe_psql('postgres',
+	"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub1'");
+is($result, qq(t),
+	"check that the subscription that was enabled on the old subscriber is enabled on the new subscriber"
+);
+$new_sub->safe_psql('postgres', "DROP SUBSCRIPTION regress_sub1");
+
+# Subscription relations should be preserved. The upgraded subscriber won't know
+# about 'tab_not_upgraded1' because the subscription is not yet refreshed.
+$result =
+  $new_sub->safe_psql('postgres', "SELECT count(*) FROM pg_subscription_rel");
+is($result, qq(2),
+	"there should be 2 rows in pg_subscription_rel (representing tab_upgraded1 and tab_upgraded2)"
+);
+
+# The replication origin remote_lsn should be preserved
+$result = $new_sub->safe_psql('postgres',
+	"SELECT remote_lsn FROM pg_replication_origin_status os, pg_subscription s WHERE os.external_id = 'pg_' || s.oid"
+);
+is($result, qq($remote_lsn), "remote_lsn should have been preserved");
+
+
+# Check the number of rows for each table on each server
+$result =
+  $publisher->safe_psql('postgres', "SELECT count(*) FROM tab_upgraded1");
+is($result, qq(51), "check initial tab_upgraded1 table data on publisher");
+$result =
+  $publisher->safe_psql('postgres', "SELECT count(*) FROM tab_upgraded2");
+is($result, qq(2), "check initial tab_upgraded2 table data on publisher");
+$result =
+  $publisher->safe_psql('postgres', "SELECT count(*) FROM tab_not_upgraded1");
+is($result, qq(2), "check initial tab_not_upgraded1 table data on publisher");
+
+$result =
+  $new_sub->safe_psql('postgres', "SELECT count(*) FROM tab_upgraded1");
+is($result, qq(50),
+	"check initial tab_upgraded1 table data on the new subscriber");
+$result =
+  $new_sub->safe_psql('postgres', "SELECT count(*) FROM tab_upgraded2");
+is($result, qq(0),
+	"check initial tab_upgraded2 table data on upgraded subscriber");
+$result =
+  $new_sub->safe_psql('postgres', "SELECT count(*) FROM tab_not_upgraded1");
+is($result, qq(0),
+	"check initial tab_not_upgraded1 table data on the new subscriber");
+
+# Enable the subscription
+$new_sub->safe_psql('postgres', "ALTER SUBSCRIPTION regress_sub ENABLE");
+
+$publisher->wait_for_catchup('regress_sub');
+
+# Rows on tab_upgraded1 and tab_upgraded2 should have been replicated, while
+# nothing should happen for tab_not_upgraded1.
+$result =
+  $new_sub->safe_psql('postgres', "SELECT count(*) FROM tab_upgraded1");
+is($result, qq(51), "check replicated inserts on new subscriber");
+$result =
+  $new_sub->safe_psql('postgres', "SELECT count(*) FROM tab_upgraded2");
+is($result, qq(2),
+	"check the data is synced after enabling the subscription for the table that was in init state"
+);
+$result =
+  $new_sub->safe_psql('postgres', "SELECT count(*) FROM tab_not_upgraded1");
+is($result, qq(0),
+	"no change in table tab_not_upgraded1 after enabling the subscription, as the table is not part of the publication"
+);
+
+# Refresh the subscription, the missing row on tab_not_upgraded1 should be
+# replicated.
+$new_sub->safe_psql('postgres',
+	"ALTER SUBSCRIPTION regress_sub REFRESH PUBLICATION");
+$new_sub->wait_for_subscription_sync($publisher, 'regress_sub');
+$result =
+  $new_sub->safe_psql('postgres', "SELECT count(*) FROM tab_not_upgraded1");
+is($result, qq(2),
+	"check replicated inserts on new subscriber after refreshing");
+
+# cleanup
+$new_sub->stop;
+$old_sub->append_conf('postgresql.conf',
+	"max_logical_replication_workers = 4");
+$old_sub->start;
+
+$old_sub->safe_psql('postgres', "ALTER SUBSCRIPTION regress_sub1 DISABLE");
+$old_sub->safe_psql('postgres',
+	"ALTER SUBSCRIPTION regress_sub1 SET (slot_name = none)");
+$old_sub->safe_psql('postgres', "DROP SUBSCRIPTION regress_sub1");
+
+# ------------------------------------------------------
+# Check that pg_upgrade fails when max_replication_slots configured in the new
+# cluster is less than number of subscriptions in the old cluster.
+# ------------------------------------------------------
+my $new_sub1 = PostgreSQL::Test::Cluster->new('new_sub1');
+$new_sub1->init;
+$new_sub1->append_conf('postgresql.conf', "max_replication_slots = 0");
+
+$old_sub->stop;
+
+# pg_upgrade will fail because the new cluster has insufficient
+# max_replication_slots.
+command_checks_all(
+	[
+		'pg_upgrade', '--no-sync',
+		'-d', $old_sub->data_dir,
+		'-D', $new_sub1->data_dir,
+		'-b', $oldbindir,
+		'-B', $newbindir,
+		'-s', $new_sub1->host,
+		'-p', $old_sub->port,
+		'-P', $new_sub1->port,
+		$mode, '--check',
+	],
+	1,
+	[
+		qr/max_replication_slots \(0\) must be greater than or equal to the number of subscriptions \(1\) on the old cluster/
+	],
+	[qr//],
+	'run of pg_upgrade where the new cluster has insufficient max_replication_slots'
+);
+
+# Reset max_replication_slots
+$new_sub1->append_conf('postgresql.conf', "max_replication_slots = 10");
+
+$old_sub->start;
+
+# Drop the subscription
+$old_sub->safe_psql('postgres', "DROP SUBSCRIPTION regress_sub");
+
+# ------------------------------------------------------
+# Check that pg_upgrade refuses to run if:
+# a) there's a subscription with tables in a state other than 'r' (ready) or
+#    'i' (init), and/or
+# b) the subscription has no replication origin.
+# ------------------------------------------------------
+$publisher->safe_psql(
+	'postgres', qq[
+		CREATE TABLE tab_primary_key(id serial PRIMARY KEY, val text);
+		INSERT INTO tab_primary_key values(1, 'before initial sync');
+		CREATE PUBLICATION regress_pub2 FOR TABLE tab_primary_key;
+]);
+
+# Insert the same value that is already present on the publisher into the
+# primary key column on the subscriber so that the table sync will fail.
+$old_sub->safe_psql(
+	'postgres', qq[
+		CREATE TABLE tab_primary_key(id serial PRIMARY KEY, val text);
+		INSERT INTO tab_primary_key values(1, 'before initial sync');
+		CREATE SUBSCRIPTION regress_sub2 CONNECTION '$connstr' PUBLICATION regress_pub2;
+]);
+
+# Table will be in 'd' (data is being copied) state as table sync will fail
+# because of primary key constraint error.
+my $started_query =
+  "SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'd'";
+$old_sub->poll_query_until('postgres', $started_query)
+  or die
+  "Timed out while waiting for the table state to become 'd' (datasync)";
+
+# Create another subscription and drop the subscription's replication origin
+$old_sub->safe_psql('postgres',
+	"CREATE SUBSCRIPTION regress_sub3 CONNECTION '$connstr' PUBLICATION regress_pub2 WITH (enabled=false)"
+);
+
+my $subid = $old_sub->safe_psql('postgres',
+	"SELECT oid FROM pg_subscription WHERE subname = 'regress_sub3'");
+my $reporigin = 'pg_' . qq($subid);
+
+# Drop the subscription's replication origin
+$old_sub->safe_psql('postgres',
+	"SELECT pg_replication_origin_drop('$reporigin')");
+
+$old_sub->stop;
+
+command_fails(
+	[
+		'pg_upgrade', '--no-sync',
+		'-d', $old_sub->data_dir,
+		'-D', $new_sub1->data_dir,
+		'-b', $oldbindir,
+		'-B', $newbindir,
+		'-s', $new_sub1->host,
+		'-p', $old_sub->port,
+		'-P', $new_sub1->port,
+		$mode, '--check',
+	],
+	'run of pg_upgrade --check for old instance with relation in \'d\' datasync(invalid) state and missing replication origin'
+);
+
+# Verify the reason why the subscriber cannot be upgraded
+my $sub_relstate_filename;
+
+# Find a txt file that contains a list of tables that cannot be upgraded. We
+# cannot predict the file's path because the output directory contains a
+# milliseconds timestamp. File::Find::find must be used.
+find(
+	sub {
+		if ($File::Find::name =~ m/subs_invalid\.txt/)
+		{
+			$sub_relstate_filename = $File::Find::name;
+		}
+	},
+	$new_sub1->data_dir . "/pg_upgrade_output.d");
+
+# Check the file content, which should report the tab_primary_key table as
+# being in an invalid state.
+like(
+	slurp_file($sub_relstate_filename),
+	qr/database:\"postgres\" subscription:\"regress_sub2\" schema:\"public\" relation:\"tab_primary_key\" state:\"d\" not in required state/m,
+	'the previous test failed due to subscription table in invalid state');
+
+# Check the file content, which should mention the regress_sub3 subscription.
+like(
+	slurp_file($sub_relstate_filename),
+	qr/replication origin is missing for database:\"postgres\" subscription:\"regress_sub3\"/m,
+	'the previous test failed due to missing replication origin');
+
+done_testing();
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fb58dee3bc..45c681db5e 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -11396,6 +11396,16 @@
   provolatile => 'v', proparallel => 'u', prorettype => 'bool',
   proargtypes => 'name',
   prosrc => 'binary_upgrade_logical_slot_has_caught_up' },
+{ oid => '8404', descr => 'for use by pg_upgrade (relation for pg_subscription_rel)',
+  proname => 'binary_upgrade_add_sub_rel_state', proisstrict => 'f',
+  provolatile => 'v', proparallel => 'u', prorettype => 'void',
+  proargtypes => 'text oid char pg_lsn',
+  prosrc => 'binary_upgrade_add_sub_rel_state' },
+{ oid => '8405', descr => 'for use by pg_upgrade (remote_lsn for origin)',
+  proname => 'binary_upgrade_replorigin_advance', proisstrict => 'f',
+  provolatile => 'v', proparallel => 'u', prorettype => 'void',
+  proargtypes => 'text pg_lsn',
+  prosrc => 'binary_upgrade_replorigin_advance' },
 
 # conversion functions
 { oid => '4302',
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 86a9886d4f..e6d994923f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2662,6 +2662,7 @@ SubLinkType
 SubOpts
 SubPlan
 SubPlanState
+SubRelInfo
 SubRemoveRels
 SubTransactionId
 SubXactCallback
-- 
2.34.1
