Re: [HACKERS] WAL recycle retarding based on active sync rep.
Hello,

At Fri, 18 Nov 2016 10:16:22 -0800, Andres Freund wrote in
<20161118181622.hklschaizwaxo...@alap3.anarazel.de>
> Hi,
>
> On 2016-11-18 14:12:42 +0900, Kyotaro HORIGUCHI wrote:
> > We had too-early WAL recycling during a test on a synchronous
> > replication setup. This is not a bug, and it is a somewhat extreme
> > case, but it runs contrary to expectations of synchronous replication.
>
> I don't think you can expect anything else.

My sentence was inaccurate; it should have read "is contrary to *naive*
expectations of synchronous replication". But I agree with you.

> > This happens because synchronous replication does not wait for
> > non-commit WAL to be replicated. The situation can be caused
> > artificially with the first attached patch and the following steps.
>
> You could get that situation even if we waited for syncrep. The
> SyncRepWaitForLSN happens after delayChkpt is unset.
>
> Additionally a syncrep connection could break for a short while, and
> you'd lose all guarantees anyway.

I know. Replication slots are for such cases.

> > - Does this situation need to be prevented? It is caused by a large
> >   transaction spanning over two max_wal_size's worth of segments, or
> >   a replication stall lasting for a checkpoint period.
>
> I very strongly think not.
>
> > - Is the measure acceptable? In the worst case, the master crashes
> >   from WAL space exhaustion. (But such a large transaction
> >   won't/shouldn't exist?)
>
> No, imo not.

Thanks for clarifying that.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] WAL recycle retarding based on active sync rep.
Thanks for the comment.

At Fri, 18 Nov 2016 17:06:55 +0800, Craig Ringer wrote in
> > We had too-early WAL recycling during a test on a synchronous
> > replication setup. This is not a bug, and it is a somewhat extreme
> > case, but it runs contrary to expectations of synchronous replication.
>
> Isn't this prevented by using a physical replication slot?
>
> You hint that you looked at slots but they didn't meet your needs in
> some way. I'm not sure I understood the last part.

Yes, a replication slot does something similar. The point was whether
we should expect that removal of still-needed WAL never occurs on an
active synchronous replication setup, and I strongly doubt that we can.

At Fri, 18 Nov 2016 10:16:22 -0800, Andres Freund wrote in
<20161118181622.hklschaizwaxo...@alap3.anarazel.de>
> On 2016-11-18 14:12:42 +0900, Kyotaro HORIGUCHI wrote:
> > We had too-early WAL recycling during a test on a synchronous
> > replication setup. This is not a bug, and it is a somewhat extreme
> > case, but it runs contrary to expectations of synchronous replication.
>
> I don't think you can expect anything else.

I think this is the answer to it.

regards,

-- 
Kyotaro Horiguchi
Nippon Telegraph and Telephone Corporation (NTT)
NTT Open Source Software Center
Phone: 03-5860-5115 / Fax: 03-5463-5490
Re: [HACKERS] WAL recycle retarding based on active sync rep.
Hi,

On 2016-11-18 14:12:42 +0900, Kyotaro HORIGUCHI wrote:
> We had too-early WAL recycling during a test on a synchronous
> replication setup. This is not a bug, and it is a somewhat extreme
> case, but it runs contrary to expectations of synchronous replication.

I don't think you can expect anything else.

> This happens because synchronous replication does not wait for
> non-commit WAL to be replicated. The situation can be caused
> artificially with the first attached patch and the following steps.

You could get that situation even if we waited for syncrep. The
SyncRepWaitForLSN happens after delayChkpt is unset.

Additionally a syncrep connection could break for a short while, and
you'd lose all guarantees anyway.

> - Does this situation need to be prevented? It is caused by a large
>   transaction spanning over two max_wal_size's worth of segments, or a
>   replication stall lasting for a checkpoint period.

I very strongly think not.

> - Is the measure acceptable? In the worst case, the master crashes
>   from WAL space exhaustion. (But such a large transaction
>   won't/shouldn't exist?)

No, imo not.

Greetings,

Andres Freund
Re: [HACKERS] WAL recycle retarding based on active sync rep.
On 18 Nov. 2016 13:14, "Kyotaro HORIGUCHI" wrote:
>
> Hello.
>
> We had too-early WAL recycling during a test on a synchronous
> replication setup. This is not a bug, and it is a somewhat extreme
> case, but it runs contrary to expectations of synchronous replication.

Isn't this prevented by using a physical replication slot?

You hint that you looked at slots but they didn't meet your needs in
some way. I'm not sure I understood the last part.
[HACKERS] WAL recycle retarding based on active sync rep.
Hello.

We had too-early WAL recycling during a test on a synchronous
replication setup. This is not a bug, and it is a somewhat extreme
case, but it runs contrary to expectations of synchronous replication.

> FATAL:  could not receive data from WAL stream: ERROR:  requested WAL
> segment 00010088 has already been removed

This happens because synchronous replication does not wait for
non-commit WAL to be replicated. The situation can be caused
artificially with the first attached patch and the following steps.

- Configure a master with max_wal_size=80MB, min_wal_size=48MB and
  synchronous_standby_names='*', then start it.
- Configure a replica using pg_basebackup and start it. Create the
  file /tmp/slow to delay replication.
- On the master, run:

  =# create table t (a int);
  =# insert into t (select * from generate_series(0, 200));

I can think of the following two approaches to this.

A. Retard WAL recycling back to the point that synchronous replication
   has reached.
B. Block WAL insertion until synchronous replication reaches the first
   surviving segment.

The second attached patch implements the first measure. It makes
CreateCheckPoint take satisfied synchronous replication into account
during WAL recycling. If a WAL segment about to be recycled is still
required by the currently satisfied synchronous standbys, it keeps the
required segments and emits the following message.

> WARNING:  sync replication is lagging too far behind. 2 extra WAL
> segments are preserved (last segno to preserve is moved from 185 to 183)
> HINT:  If you see this message too frequently, consider increasing
> wal_keep_segments or max_wal_size.

This is somewhat similar to what a replication slot does, but this
mechanism does nothing when synchronous replication is not satisfied.
Perhaps a max_temporary_preserve_segments GUC or something similar is
required to limit the amount of extra segments.

- Does this situation need to be prevented? It is caused by a large
  transaction spanning over two max_wal_size's worth of segments, or a
  replication stall lasting for a checkpoint period.
- Is the measure acceptable? In the worst case, the master crashes
  from WAL space exhaustion.
(But such a large transaction won't/shouldn't exist?)

Or other comments?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

>From 7234376b2d6fa6b86cc9e4ed95a52af7bd6225e6 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi
Date: Fri, 18 Nov 2016 12:44:25 +0900
Subject: [PATCH 1/2] Slows replication processing.

To cause sync-rep desynchronization artificially, sleep 100ms for each
replication message if /tmp/slow exists.
---
 src/backend/replication/walreceiver.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 2bb3dce..57f3ad2 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -439,6 +439,11 @@ WalReceiverMain(void)
 				 */
 				last_recv_timestamp = GetCurrentTimestamp();
 				ping_sent = false;
+				{
+					struct stat b;
+					if (stat("/tmp/slow", &b) >= 0)
+						usleep(100000);		/* 100ms */
+				}
 				XLogWalRcvProcessMsg(buf[0], &buf[1], len - 1);
 			}
 			else if (len == 0)
-- 
2.9.2

>From 3368c16e7a8f30216e7d9579f5d2ca3b923259d5 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi
Date: Fri, 18 Nov 2016 13:14:55 +0900
Subject: [PATCH 2/2] Preserve WAL segments required by synchronous standbys.

Since a synchronous standby doesn't sync non-commit records, a large
transaction may unexpectedly break synchronous replication. This patch
makes CreateCheckPoint preserve all WAL segments required by the
currently established synchronous replication.
---
 src/backend/access/transam/xlog.c | 26 ++
 src/backend/replication/syncrep.c | 23 ++-
 src/include/replication/syncrep.h |  4
 3 files changed, 40 insertions(+), 13 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 6cec027..195272e 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -49,6 +49,7 @@
 #include "replication/slot.h"
 #include "replication/origin.h"
 #include "replication/snapbuild.h"
+#include "replication/syncrep.h"
 #include "replication/walreceiver.h"
 #include "replication/walsender.h"
 #include "storage/barrier.h"
@@ -8628,12 +8629,37 @@ CreateCheckPoint(int flags)
 	if (PriorRedoPtr != InvalidXLogRecPtr)
 	{
 		XLogSegNo	_logSegNo;
+		bool		in_sync, am_sync;
+		XLogRecPtr	repwriteptr, repflushptr, repapplyptr;
 
 		/* Update the average distance between checkpoints. */
 		UpdateCheckPointDistanceEstimate(RedoRecPtr - PriorRedoPtr);
 
 		XLByteToSeg(PriorRedoPtr, _logSegNo);
 		KeepLogSeg(recptr, &_logSegNo);
+
+		/*
+		 * If I am under satisfied synchronous replication, refrain from
+		 * removing segments apparently required by the standbys. Referring
+		 * to the write pointer is enough.
+		 */
+		in_sync = SyncRepGetOldestSyncRecPtr(&repwriteptr, &repflushptr,
+											 &repapplyptr, &am_sync, true);
+		if (in_sync && repwriteptr