Hi folks

I recently tracked down a race in shutdown of logical walsenders that can
cause PostgreSQL shutdown to hang for wal_sender_timeout/2 before it
continues to a normal shutdown. With a long timeout that can be quite
disruptive.

TL;DR: The logical walsender may be signalled to stop, then read the last
WAL record before the shutdown checkpoint is due to be written and go to
sleep. The checkpointer will wait for it to acknowledge the shutdown and
the walsender will wait for new WAL. The deadlock is eventually broken by
the walsender timeout keepalive timer.

Patch attached.

The issue arises from the difference between logical and physical walsender
shutdown as introduced by commit c6c3334364 "Prevent possibility of panics
during shutdown checkpoint". It's probably fairly hard to trigger. I ran
into a case where it happened regularly only because of an unrelated patch
that caused some WAL to be written just before the checkpointer issued
walsender shutdown requests. But it's still a legit bug.

If you hit the issue, the walsender(s) can be seen sleeping in
WaitLatchOrSocket in WalSndLoop. They'll keep sleeping until
woken by the keepalive timeout. The checkpointer will be waiting in
WalSndWaitStopping() for the walsenders to enter WALSNDSTATE_STOPPING or
exit, whichever happens first. The postmaster will be waiting in ServerLoop
for the checkpointer to finish the shutdown checkpoint.

The checkpointer waits in WalSndWaitStopping() for all walsenders to either
exit or enter WALSNDSTATE_STOPPING state. Logical walsenders never enter
WALSNDSTATE_STOPPING; they go straight to exiting, so the checkpointer
can't finish WalSndWaitStopping() and write the shutdown checkpoint. A
logical walsender usually notices the shutdown request and exits as soon as
it has flushed all WAL up to the server's flushpoint, while physical
walsenders enter WALSNDSTATE_STOPPING.

But there's a race where a logical walsender may read the final available
record and notice it has caught up, yet fail to notice that it has reached
end-of-WAL, so it never checks whether it should exit. This happens on the
following (simplified) code path in XLogSendLogical:

        if (record != NULL)
        {
            XLogRecPtr      flushPtr = GetFlushRecPtr();
            LogicalDecodingProcessRecord(...);
            sentPtr = ...;
            if (sentPtr >= flushPtr)
                WalSndCaughtUp = true;    /* <-- HERE */
        }

because the test for got_STOPPING that sets got_SIGUSR2 exists only on the
other branch, where reading a record returns NULL. This branch can go back
to sleep without ever checking whether shutdown was requested.
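
For contrast, here's the (again simplified) record == NULL branch, which
does make the check; this is the code the attached patch touches:

        else
        {
            if (logical_decoding_ctx->reader->EndRecPtr >= GetFlushRecPtr())
            {
                WalSndCaughtUp = true;
                /* Have WalSndLoop() terminate the connection */
                if (got_STOPPING)
                    got_SIGUSR2 = true;
            }
        }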

So if the walsender has just read the last available WAL record, that
record's end position is >= the flush pointer, and it has already handled
the SIGUSR1 latch wakeup for the WAL write, it goes back to sleep and
doesn't wake until the keepalive timeout fires.

The checkpointer already sent PROCSIG_WALSND_INIT_STOPPING to the
walsenders in the prior WalSndInitStopping() call so the walsender won't be
woken by a signal from the checkpointer. No new WAL will be written because
the walsender just consumed the final record written before the
checkpointer went to sleep, and the checkpointer won't write anything more
until the walsender exits. The client might not be due a keepalive for
some time. The only reason this doesn't turn into a total deadlock is that
keepalive wakeup.

An alternative fix would be to have the logical walsender set
WALSNDSTATE_STOPPING instead of faking got_SIGUSR2, then go to sleep
waiting for more WAL. Logical decoding would need to check if it was
running during shutdown and Assert(...) then ERROR if it saw any WAL
records that result in output plugin calls or snapshot management calls. I
avoided this approach as it's more intrusive and I'm not confident I can
concoct a reliable test to trigger it.
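
For reference, that alternative would look roughly like this in
XLogSendLogical (an untested sketch, not what the attached patch does):

        /* hypothetical alternative, not taken */
        if (got_STOPPING && WalSndCaughtUp)
            WalSndSetState(WALSNDSTATE_STOPPING);

plus Assert-then-ERROR checks in the decoding paths for any unexpected
WAL records.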

-- 
 Craig Ringer                   http://www.2ndQuadrant.com/
 2ndQuadrant - PostgreSQL Solutions for the Enterprise
From 559dda09b35870d3630a65cbca682e50343c6f0f Mon Sep 17 00:00:00 2001
From: Craig Ringer <cr...@2ndquadrant.com>
Date: Thu, 25 Jul 2019 09:14:58 +0800
Subject: [PATCH] Fix a delay in PostgreSQL shutdown caused by logical
 replication

Due to a race with WAL writing during shutdown, if logical walsenders were
running then PostgreSQL's shutdown could be delayed by up to
wal_sender_timeout/2 while it waits for the walsenders to shut down. The
walsenders wait for new WAL or end-of-WAL, which won't come until shutdown,
so there's a deadlock. The walsender timeout eventually breaks the deadlock.

The issue was introduced by PostgreSQL 10 commit c6c3334364
"Prevent possibility of panics during shutdown checkpoint".

A logical walsender never enters WALSNDSTATE_STOPPING, the state that lets the
checkpointer continue shutdown. Instead it exits when it reads end-of-WAL.
But if it reads the last WAL record written before shutdown and that record
doesn't generate a client network write, it can mark itself caught up and go to
sleep without checking to see if it's been asked to shut down.

Fix by making sure the logical walsender always checks if it's been asked
to shut down before it allows the walsender main loop to go to sleep.

When this issue happens the walsender(s) can be seen to be sleeping in
WaitLatchOrSocket in WalSndLoop until woken by the keepalive timeout. The
checkpointer will be waiting in WalSndWaitStopping() for the walsenders to
enter WALSNDSTATE_STOPPING or exit, whichever happens first. The postmaster
will be waiting in ServerLoop for the checkpointer to finish the shutdown
checkpoint.

---
 src/backend/replication/walsender.c | 29 ++++++++++++++++++++---------
 1 file changed, 20 insertions(+), 9 deletions(-)

diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index b489c9c27f..c565e208bc 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -2907,18 +2907,29 @@ XLogSendLogical(void)
 		 * point, then we're caught up.
 		 */
 		if (logical_decoding_ctx->reader->EndRecPtr >= GetFlushRecPtr())
-		{
 			WalSndCaughtUp = true;
-
-			/*
-			 * Have WalSndLoop() terminate the connection in an orderly
-			 * manner, after writing out all the pending data.
-			 */
-			if (got_STOPPING)
-				got_SIGUSR2 = true;
-		}
 	}

+	/*
+	 * If we've recently sent up to the currently flushed WAL
+	 * and are shutting down, we can safely wrap up by flushing
+	 * buffers and exchanging CopyDone messages. It doesn't matter
+	 * if more WAL may be written before shutdown because no
+	 * WAL written after replication slots are checkpointed
+	 * can result in invocation of logical decoding hooks and
+	 * output to the client.
+	 *
+	 * We could instead WalSndSetState(WALSNDSTATE_STOPPING)
+	 * to allow shutdown to continue and put the walsender
+	 * in a state where any unexpected WAL records Assert.
+	 * But this is safer as it reduces the risk of panics in
+	 * hard-to-reach-and-test code.
+	 */
+	if (got_STOPPING && WalSndCaughtUp)
+		got_SIGUSR2 = true;
+
 	/* Update shared memory status */
 	{
 		WalSnd	   *walsnd = MyWalSnd;
--
2.21.0
