Oops! Sorry for the noise. I must have been overworking yesterday and messed up the working branches: v7 was a correct set, but v8 was not. Here is the correction, with an extended Perl test.
The test itself is in src/bin/pg_upgrade/t/005_offset.pl. It is rather heavy: it took about 45 minutes on my i5 and generated about 2.7 GB of data. Basically, each test creates a cluster and fills it with multixacts, so dozens of SLRU segments are created, using two methods. One uses prepared transactions and creates roughly the same number of member segments as offset segments (see the short sketch below for how that works). The other is based on Heikki's multixids.py and creates more member segments than offset segments. I used both methods to generate data that is as diverse as possible.

Here is how I test this patch set:

1. You need two pg clusters: the "old" one, i.e. without the patch set, and the "new" one with patch set v9 applied.

2. Apply v9-0005-TEST-initdb-option-to-initialize-cluster-with-non.patch.txt to both the "old" and "new" clusters. Note that this is the only patch required for the "old" cluster. It allows you to create a cluster with a non-standard initial multixact id and multixact offset. Unfortunately, this patch did not arouse public interest, since similar functionality is assumed to be provided by the pg_resetwal utility. But similar does not mean equal: pg_resetwal must be run after cluster init, so we step into problems with vacuum, some SLRU segments must be filled with zeroes, and template0's datminmxid must be updated manually. So, in my view, using this patch here is justified and very handy.

3. Also apply all the "TEST" patches (0006 and 0007) to the "new" cluster.

4. Build the "old" and "new" pg clusters.

5. Run the test with:

    PROVE_TESTS=t/005_offset.pl PG_TEST_NOCLEAN=1 oldinstall=/home/orlov/proj/OFFSET3/pgsql-old make check -s -C src/bin/pg_upgrade/

6. In my case, it took around 45 minutes and generated roughly 2.7 GB of data.

The "TEST" patches are, of course, for testing purposes only and are not meant to be committed.

In src/bin/pg_upgrade/t/005_offset.pl I try to cover the following cases:

- Basic sanity checks. Here I test various initial multi and offset values (including wraparound) and check that the appropriate segments are generated.
- pg_upgrade tests. This is what the oldinstall ENV variable is for. pg_upgrade is run against an old cluster initialized with the same combinations of multi and offset values as in the previous step.
- Self pg_upgrade.

-- 
Best regards,
Maxim Orlov.
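For readers who have not played with this before, here is a minimal sketch of the prepared-transaction method (not part of the patch set; the table name "t" and the prepared-transaction name are just examples, and max_prepared_transactions must be greater than zero):

    psql -d postgres -c "CREATE TABLE t (i int PRIMARY KEY, n int);
                         INSERT INTO t SELECT g, 0 FROM generate_series(1, 1000) g;"
    # A prepared transaction keeps FOR KEY SHARE locks on every row ...
    psql -d postgres -c "BEGIN;
                         SELECT * FROM t FOR KEY SHARE;
                         PREPARE TRANSACTION 'locker';"
    # ... so a later update of the same rows has to put a multixact
    # (key-share locker + updater) into each updated tuple's xmax.
    # Repeating the update in fresh transactions, as the test's MXIDFILLER
    # procedure does, produces many distinct multixacts.
    psql -d postgres -c "UPDATE t SET n = n + 1;"
    psql -d postgres -c "COMMIT PREPARED 'locker';"
    # Flush the new pg_multixact/offsets and pg_multixact/members segments.
    psql -d postgres -c "CHECKPOINT;"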
From 2642f597832cbed0ebc54202de4e0f5770ac5f50 Mon Sep 17 00:00:00 2001 From: Maxim Orlov <m.or...@postgrespro.ru> Date: Wed, 4 May 2022 15:53:36 +0300 Subject: [PATCH v9 5/7] TEST: initdb option to initialize cluster with non-standard xid/mxid/mxoff To date testing database cluster wraparund was not easy as initdb has always inited it with default xid/mxid/mxoff. The option to specify any valid xid/mxid/mxoff at cluster startup will make these things easier. Author: Maxim Orlov <orlo...@gmail.com> Author: Pavel Borisov <pashkin.e...@gmail.com> Author: Svetlana Derevyanko <s.derevya...@postgrespro.ru> Discussion: https://www.postgresql.org/message-id/flat/CACG%3Dezaa4vqYjJ16yoxgrpa-%3DgXnf0Vv3Ey9bjGrRRFN2YyWFQ%40mail.gmail.com --- src/backend/access/transam/clog.c | 21 +++++ src/backend/access/transam/multixact.c | 53 ++++++++++++ src/backend/access/transam/subtrans.c | 8 +- src/backend/access/transam/xlog.c | 15 ++-- src/backend/bootstrap/bootstrap.c | 50 +++++++++++- src/backend/main/main.c | 6 ++ src/backend/postmaster/postmaster.c | 14 +++- src/backend/tcop/postgres.c | 53 +++++++++++- src/bin/initdb/initdb.c | 107 ++++++++++++++++++++++++- src/bin/initdb/t/001_initdb.pl | 60 ++++++++++++++ src/include/access/xlog.h | 3 + src/include/c.h | 4 + src/include/catalog/pg_class.h | 2 +- 13 files changed, 382 insertions(+), 14 deletions(-) diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c index e6f79320e9..17e29f4497 100644 --- a/src/backend/access/transam/clog.c +++ b/src/backend/access/transam/clog.c @@ -834,6 +834,7 @@ BootStrapCLOG(void) { int slotno; LWLock *lock = SimpleLruGetBankLock(XactCtl, 0); + int64 pageno; LWLockAcquire(lock, LW_EXCLUSIVE); @@ -844,6 +845,26 @@ BootStrapCLOG(void) SimpleLruWritePage(XactCtl, slotno); Assert(!XactCtl->shared->page_dirty[slotno]); + pageno = TransactionIdToPage(XidFromFullTransactionId(TransamVariables->nextXid)); + if (pageno != 0) + { + LWLock *nextlock = SimpleLruGetBankLock(XactCtl, pageno); + + if (nextlock != lock) + { + LWLockRelease(lock); + LWLockAcquire(nextlock, LW_EXCLUSIVE); + lock = nextlock; + } + + /* Create and zero the first page of the commit log */ + slotno = ZeroCLOGPage(pageno, false); + + /* Make sure it's written out */ + SimpleLruWritePage(XactCtl, slotno); + Assert(!XactCtl->shared->page_dirty[slotno]); + } + LWLockRelease(lock); } diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c index a817f539ee..095c39dd93 100644 --- a/src/backend/access/transam/multixact.c +++ b/src/backend/access/transam/multixact.c @@ -1955,6 +1955,7 @@ BootStrapMultiXact(void) { int slotno; LWLock *lock; + int64 pageno; lock = SimpleLruGetBankLock(MultiXactOffsetCtl, 0); LWLockAcquire(lock, LW_EXCLUSIVE); @@ -1966,6 +1967,26 @@ BootStrapMultiXact(void) SimpleLruWritePage(MultiXactOffsetCtl, slotno); Assert(!MultiXactOffsetCtl->shared->page_dirty[slotno]); + pageno = MultiXactIdToOffsetPage(MultiXactState->nextMXact); + if (pageno != 0) + { + LWLock *nextlock = SimpleLruGetBankLock(MultiXactOffsetCtl, pageno); + + if (nextlock != lock) + { + LWLockRelease(lock); + LWLockAcquire(nextlock, LW_EXCLUSIVE); + lock = nextlock; + } + + /* Create and zero the first page of the offsets log */ + slotno = ZeroMultiXactOffsetPage(pageno, false); + + /* Make sure it's written out */ + SimpleLruWritePage(MultiXactOffsetCtl, slotno); + Assert(!MultiXactOffsetCtl->shared->page_dirty[slotno]); + } + LWLockRelease(lock); lock = SimpleLruGetBankLock(MultiXactMemberCtl, 0); @@ -1978,7 +1999,39 @@ 
BootStrapMultiXact(void) SimpleLruWritePage(MultiXactMemberCtl, slotno); Assert(!MultiXactMemberCtl->shared->page_dirty[slotno]); + pageno = MXOffsetToMemberPage(MultiXactState->nextOffset); + if (pageno != 0) + { + LWLock *nextlock = SimpleLruGetBankLock(MultiXactMemberCtl, pageno); + + if (nextlock != lock) + { + LWLockRelease(lock); + LWLockAcquire(nextlock, LW_EXCLUSIVE); + lock = nextlock; + } + + /* Create and zero the first page of the members log */ + slotno = ZeroMultiXactMemberPage(pageno, false); + + /* Make sure it's written out */ + SimpleLruWritePage(MultiXactMemberCtl, slotno); + Assert(!MultiXactMemberCtl->shared->page_dirty[slotno]); + } + LWLockRelease(lock); + + /* + * If we're starting not from zero offset, initilize dummy multixact to + * evade too long loop in PerformMembersTruncation(). + */ + if (MultiXactState->nextOffset > 0 && MultiXactState->nextMXact > 0) + { + RecordNewMultiXact(FirstMultiXactId, + MultiXactState->nextOffset, 0, NULL); + RecordNewMultiXact(MultiXactState->nextMXact, + MultiXactState->nextOffset, 0, NULL); + } } /* diff --git a/src/backend/access/transam/subtrans.c b/src/backend/access/transam/subtrans.c index 50bb1d8cfc..a5e6e8f090 100644 --- a/src/backend/access/transam/subtrans.c +++ b/src/backend/access/transam/subtrans.c @@ -270,12 +270,16 @@ void BootStrapSUBTRANS(void) { int slotno; - LWLock *lock = SimpleLruGetBankLock(SubTransCtl, 0); + LWLock *lock; + int64 pageno; + + pageno = TransactionIdToPage(XidFromFullTransactionId(TransamVariables->nextXid)); + lock = SimpleLruGetBankLock(SubTransCtl, pageno); LWLockAcquire(lock, LW_EXCLUSIVE); /* Create and zero the first page of the subtrans log */ - slotno = ZeroSUBTRANSPage(0); + slotno = ZeroSUBTRANSPage(pageno); /* Make sure it's written out */ SimpleLruWritePage(SubTransCtl, slotno); diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c index 6f58412bca..c61d7d967c 100644 --- a/src/backend/access/transam/xlog.c +++ b/src/backend/access/transam/xlog.c @@ -136,6 +136,10 @@ int max_slot_wal_keep_size_mb = -1; int wal_decode_buffer_size = 512 * 1024; bool track_wal_io_timing = false; +TransactionId start_xid = FirstNormalTransactionId; +MultiXactId start_mxid = FirstMultiXactId; +MultiXactOffset start_mxoff = 0; + #ifdef WAL_DEBUG bool XLOG_DEBUG = false; #endif @@ -5080,13 +5084,14 @@ BootStrapXLOG(uint32 data_checksum_version) checkPoint.fullPageWrites = fullPageWrites; checkPoint.wal_level = wal_level; checkPoint.nextXid = - FullTransactionIdFromEpochAndXid(0, FirstNormalTransactionId); + FullTransactionIdFromEpochAndXid(0, Max(FirstNormalTransactionId, + start_xid)); checkPoint.nextOid = FirstGenbkiObjectId; - checkPoint.nextMulti = FirstMultiXactId; - checkPoint.nextMultiOffset = 0; - checkPoint.oldestXid = FirstNormalTransactionId; + checkPoint.nextMulti = Max(FirstMultiXactId, start_mxid); + checkPoint.nextMultiOffset = start_mxoff; + checkPoint.oldestXid = XidFromFullTransactionId(checkPoint.nextXid); checkPoint.oldestXidDB = Template1DbOid; - checkPoint.oldestMulti = FirstMultiXactId; + checkPoint.oldestMulti = checkPoint.nextMulti; checkPoint.oldestMultiDB = Template1DbOid; checkPoint.oldestCommitTsXid = InvalidTransactionId; checkPoint.newestCommitTsXid = InvalidTransactionId; diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c index d31a67599c..8c33b8ba9d 100644 --- a/src/backend/bootstrap/bootstrap.c +++ b/src/backend/bootstrap/bootstrap.c @@ -217,7 +217,7 @@ BootstrapModeMain(int argc, char *argv[], bool check_only) 
argv++; argc--; - while ((flag = getopt(argc, argv, "B:c:d:D:Fkr:X:-:")) != -1) + while ((flag = getopt(argc, argv, "B:c:d:D:Fkm:o:r:X:x:-:")) != -1) { switch (flag) { @@ -272,12 +272,60 @@ BootstrapModeMain(int argc, char *argv[], bool check_only) case 'k': bootstrap_data_checksum_version = PG_DATA_CHECKSUM_VERSION; break; + case 'm': + { + char *endptr; + + errno = 0; + start_mxid = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartMultiXactIdIsValid(start_mxid)) + { + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid initial database cluster multixact id"))); + } + } + break; + case 'o': + { + char *endptr; + + errno = 0; + start_mxoff = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartMultiXactOffsetIsValid(start_mxoff)) + { + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid initial database cluster multixact offset"))); + } + } + break; case 'r': strlcpy(OutputFileName, optarg, MAXPGPATH); break; case 'X': SetConfigOption("wal_segment_size", optarg, PGC_INTERNAL, PGC_S_DYNAMIC_DEFAULT); break; + case 'x': + { + char *endptr; + + errno = 0; + start_xid = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartTransactionIdIsValid(start_xid)) + { + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid initial database cluster xid value"))); + } + } + break; default: write_stderr("Try \"%s --help\" for more information.\n", progname); diff --git a/src/backend/main/main.c b/src/backend/main/main.c index aea93a0229..6a3224bb82 100644 --- a/src/backend/main/main.c +++ b/src/backend/main/main.c @@ -358,12 +358,18 @@ help(const char *progname) printf(_(" -E echo statement before execution\n")); printf(_(" -j do not use newline as interactive query delimiter\n")); printf(_(" -r FILENAME send stdout and stderr to given file\n")); + printf(_(" -m START_MXID set initial database cluster multixact id\n")); + printf(_(" -o START_MXOFF set initial database cluster multixact offset\n")); + printf(_(" -x START_XID set initial database cluster xid\n")); printf(_("\nOptions for bootstrapping mode:\n")); printf(_(" --boot selects bootstrapping mode (must be first argument)\n")); printf(_(" --check selects check mode (must be first argument)\n")); printf(_(" DBNAME database name (mandatory argument in bootstrapping mode)\n")); printf(_(" -r FILENAME send stdout and stderr to given file\n")); + printf(_(" -m START_MXID set initial database cluster multixact id\n")); + printf(_(" -o START_MXOFF set initial database cluster multixact offset\n")); + printf(_(" -x START_XID set initial database cluster xid\n")); printf(_("\nPlease read the documentation for the complete list of run-time\n" "configuration settings and how to set them on the command line or in\n" diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c index 78e66a06ac..483307279f 100644 --- a/src/backend/postmaster/postmaster.c +++ b/src/backend/postmaster/postmaster.c @@ -572,7 +572,7 @@ PostmasterMain(int argc, char *argv[]) * tcop/postgres.c (the option sets should not conflict) and with the * common help() function in main/main.c. 
*/ - while ((opt = getopt(argc, argv, "B:bC:c:D:d:EeFf:h:ijk:lN:OPp:r:S:sTt:W:-:")) != -1) + while ((opt = getopt(argc, argv, "B:bC:c:D:d:EeFf:h:ijk:lm:N:Oo:Pp:r:S:sTt:W:x:-:")) != -1) { switch (opt) { @@ -669,10 +669,18 @@ PostmasterMain(int argc, char *argv[]) SetConfigOption("max_connections", optarg, PGC_POSTMASTER, PGC_S_ARGV); break; + case 'm': + /* only used by single-user backend */ + break; + case 'O': SetConfigOption("allow_system_table_mods", "true", PGC_POSTMASTER, PGC_S_ARGV); break; + case 'o': + /* only used by single-user backend */ + break; + case 'P': SetConfigOption("ignore_system_indexes", "true", PGC_POSTMASTER, PGC_S_ARGV); break; @@ -723,6 +731,10 @@ PostmasterMain(int argc, char *argv[]) SetConfigOption("post_auth_delay", optarg, PGC_POSTMASTER, PGC_S_ARGV); break; + case 'x': + /* only used by single-user backend */ + break; + default: write_stderr("Try \"%s --help\" for more information.\n", progname); diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 184b830168..4fd594cfe5 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -3918,7 +3918,7 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx, * postmaster/postmaster.c (the option sets should not conflict) and with * the common help() function in main/main.c. */ - while ((flag = getopt(argc, argv, "B:bC:c:D:d:EeFf:h:ijk:lN:nOPp:r:S:sTt:v:W:-:")) != -1) + while ((flag = getopt(argc, argv, "B:bC:c:D:d:EeFf:h:ijk:lm:N:nOo:Pp:r:S:sTt:v:W:x:-:")) != -1) { switch (flag) { @@ -4010,6 +4010,23 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx, SetConfigOption("ssl", "true", ctx, gucsource); break; + case 'm': + { + char *endptr; + + errno = 0; + start_mxid = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartMultiXactIdIsValid(start_mxid)) + { + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid initial database cluster multixact id"))); + } + } + break; + case 'N': SetConfigOption("max_connections", optarg, ctx, gucsource); break; @@ -4022,6 +4039,23 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx, SetConfigOption("allow_system_table_mods", "true", ctx, gucsource); break; + case 'o': + { + char *endptr; + + errno = 0; + start_mxoff = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartMultiXactOffsetIsValid(start_mxoff)) + { + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid initial database cluster multixact offset"))); + } + } + break; + case 'P': SetConfigOption("ignore_system_indexes", "true", ctx, gucsource); break; @@ -4076,6 +4110,23 @@ process_postgres_switches(int argc, char *argv[], GucContext ctx, SetConfigOption("post_auth_delay", optarg, ctx, gucsource); break; + case 'x': + { + char *endptr; + + errno = 0; + start_xid = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartTransactionIdIsValid(start_xid)) + { + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid initial database cluster xid"))); + } + } + break; + default: errs++; break; diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c index 9a91830783..410868dddf 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -168,6 +168,9 @@ static bool data_checksums = true; static char *xlog_dir = NULL; static int wal_segment_size_mb = (DEFAULT_XLOG_SEG_SIZE) / (1024 * 1024); static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC; 
+static TransactionId start_xid = 0; +static MultiXactId start_mxid = 0; +static MultiXactOffset start_mxoff = 0; /* internal vars */ @@ -1568,6 +1571,11 @@ bootstrap_template1(void) bki_lines = replace_token(bki_lines, "POSTGRES", escape_quotes_bki(username)); + /* relfrozenxid must not be less than FirstNormalTransactionId */ + sprintf(buf, "%llu", (unsigned long long) Max(start_xid, 3)); + bki_lines = replace_token(bki_lines, "RECENTXMIN", + buf); + bki_lines = replace_token(bki_lines, "ENCODING", encodingid_to_string(encodingid)); @@ -1593,6 +1601,9 @@ bootstrap_template1(void) printfPQExpBuffer(&cmd, "\"%s\" --boot %s %s", backend_exec, boot_options, extra_options); appendPQExpBuffer(&cmd, " -X %d", wal_segment_size_mb * (1024 * 1024)); + appendPQExpBuffer(&cmd, " -m %llu", (unsigned long long) start_mxid); + appendPQExpBuffer(&cmd, " -o %llu", (unsigned long long) start_mxoff); + appendPQExpBuffer(&cmd, " -x %llu", (unsigned long long) start_xid); if (data_checksums) appendPQExpBuffer(&cmd, " -k"); if (debug) @@ -2532,12 +2543,20 @@ usage(const char *progname) printf(_(" -d, --debug generate lots of debugging output\n")); printf(_(" --discard-caches set debug_discard_caches=1\n")); printf(_(" -L DIRECTORY where to find the input files\n")); + printf(_(" -m, --multixact-id=START_MXID\n" + " set initial database cluster multixact id\n" + " max value is 2^62-1\n")); printf(_(" -n, --no-clean do not clean up after errors\n")); printf(_(" -N, --no-sync do not wait for changes to be written safely to disk\n")); printf(_(" --no-instructions do not print instructions for next steps\n")); + printf(_(" -o, --multixact-offset=START_MXOFF\n" + " set initial database cluster multixact offset\n" + " max value is 2^62-1\n")); printf(_(" -s, --show show internal settings, then exit\n")); printf(_(" --sync-method=METHOD set method for syncing files to disk\n")); printf(_(" -S, --sync-only only sync database files to disk, then exit\n")); + printf(_(" -x, --xid=START_XID set initial database cluster xid\n" + " max value is 2^62-1\n")); printf(_("\nOther options:\n")); printf(_(" -V, --version output version information, then exit\n")); printf(_(" -?, --help show this help, then exit\n")); @@ -3079,6 +3098,18 @@ initialize_data_directory(void) /* Now create all the text config files */ setup_config(); + if (start_mxid != 0) + printf(_("selecting initial multixact id ... %llu\n"), + (unsigned long long) start_mxid); + + if (start_mxoff != 0) + printf(_("selecting initial multixact offset ... %llu\n"), + (unsigned long long) start_mxoff); + + if (start_xid != 0) + printf(_("selecting initial xid ... 
%llu\n"), + (unsigned long long) start_xid); + /* Bootstrap template1 */ bootstrap_template1(); @@ -3095,8 +3126,12 @@ initialize_data_directory(void) fflush(stdout); initPQExpBuffer(&cmd); - printfPQExpBuffer(&cmd, "\"%s\" %s %s template1 >%s", - backend_exec, backend_options, extra_options, DEVNULL); + printfPQExpBuffer(&cmd, "\"%s\" %s %s", + backend_exec, backend_options, extra_options); + appendPQExpBuffer(&cmd, " -m %llu", (unsigned long long) start_mxid); + appendPQExpBuffer(&cmd, " -o %llu", (unsigned long long) start_mxoff); + appendPQExpBuffer(&cmd, " -x %llu", (unsigned long long) start_xid); + appendPQExpBuffer(&cmd, " template1 >%s", DEVNULL); PG_CMD_OPEN(cmd.data); @@ -3183,6 +3218,9 @@ main(int argc, char *argv[]) {"icu-rules", required_argument, NULL, 18}, {"sync-method", required_argument, NULL, 19}, {"no-data-checksums", no_argument, NULL, 20}, + {"xid", required_argument, NULL, 'x'}, + {"multixact-id", required_argument, NULL, 'm'}, + {"multixact-offset", required_argument, NULL, 'o'}, {NULL, 0, NULL, 0} }; @@ -3224,7 +3262,7 @@ main(int argc, char *argv[]) /* process command-line options */ - while ((c = getopt_long(argc, argv, "A:c:dD:E:gkL:nNsST:U:WX:", + while ((c = getopt_long(argc, argv, "A:c:dD:E:gkL:m:nNo:sST:U:Wx:X:", long_options, &option_index)) != -1) { switch (c) @@ -3282,6 +3320,30 @@ main(int argc, char *argv[]) debug = true; printf(_("Running in debug mode.\n")); break; + case 'm': + { + char *endptr; + + errno = 0; + start_mxid = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartMultiXactIdIsValid(start_mxid)) + { + pg_log_error("invalid initial database cluster multixact id"); + exit(1); + } + else if (start_mxid < 1) /* FirstMultiXactId */ + { + /* + * We avoid mxid to be silently set to + * FirstMultiXactId, though it does not harm. + */ + pg_log_error("multixact id should be greater than 0"); + exit(1); + } + } + break; case 'n': noclean = true; printf(_("Running in no-clean mode. Mistakes will not be cleaned up.\n")); @@ -3289,6 +3351,21 @@ main(int argc, char *argv[]) case 'N': do_sync = false; break; + case 'o': + { + char *endptr; + + errno = 0; + start_mxoff = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartMultiXactOffsetIsValid(start_mxoff)) + { + pg_log_error("invalid initial database cluster multixact offset"); + exit(1); + } + } + break; case 'S': sync_only = true; break; @@ -3377,6 +3454,30 @@ main(int argc, char *argv[]) case 20: data_checksums = false; break; + case 'x': + { + char *endptr; + + errno = 0; + start_xid = strtou64(optarg, &endptr, 0); + + if (endptr == optarg || *endptr != '\0' || errno != 0 || + !StartTransactionIdIsValid(start_xid)) + { + pg_log_error("invalid value for initial database cluster xid"); + exit(1); + } + else if (start_xid < 3) /* FirstNormalTransactionId */ + { + /* + * We avoid xid to be silently set to + * FirstNormalTransactionId, though it does not harm. 
+ */ + pg_log_error("xid should be greater than 2"); + exit(1); + } + } + break; default: /* getopt_long already emitted a complaint */ pg_log_error_hint("Try \"%s --help\" for more information.", progname); diff --git a/src/bin/initdb/t/001_initdb.pl b/src/bin/initdb/t/001_initdb.pl index 7520d3d0dd..91a85d9f4d 100644 --- a/src/bin/initdb/t/001_initdb.pl +++ b/src/bin/initdb/t/001_initdb.pl @@ -282,4 +282,64 @@ command_fails( [ 'pg_checksums', '-D', $datadir_nochecksums ], "pg_checksums fails with data checksum disabled"); +# Set non-standard initial mxid/mxoff/xid. +command_fails_like( + [ 'initdb', '-m', 'seven', $datadir ], + qr/initdb: error: invalid initial database cluster multixact id/, + 'fails for invalid initial database cluster multixact id'); +command_fails_like( + [ 'initdb', '-o', 'seven', $datadir ], + qr/initdb: error: invalid initial database cluster multixact offset/, + 'fails for invalid initial database cluster multixact offset'); +command_fails_like( + [ 'initdb', '-x', 'seven', $datadir ], + qr/initdb: error: invalid value for initial database cluster xid/, + 'fails for invalid initial database cluster xid'); + +command_checks_all( + [ 'initdb', '-m', '65535', "$tempdir/data-m65535" ], + 0, + [qr/selecting initial multixact id ... 65535/], + [], + 'selecting initial multixact id'); +command_checks_all( + [ 'initdb', '-o', '65535', "$tempdir/data-o65535" ], + 0, + [qr/selecting initial multixact offset ... 65535/], + [], + 'selecting initial multixact offset'); +command_checks_all( + [ 'initdb', '-x', '65535', "$tempdir/data-x65535" ], + 0, + [qr/selecting initial xid ... 65535/], + [], + 'selecting initial xid'); + +# Setup new cluster with given mxid/mxoff/xid. +my $node; +my $result; + +$node = PostgreSQL::Test::Cluster->new('test-mxid'); +$node->init(extra => ['-m', '16777215']); # 0xFFFFFF +$node->start; +$result = $node->safe_psql('postgres', "SELECT next_multixact_id FROM pg_control_checkpoint();"); +ok($result >= 16777215, 'setup cluster with given mxid'); +$node->stop; + +$node = PostgreSQL::Test::Cluster->new('test-mxoff'); +$node->init(extra => ['-o', '16777215']); # 0xFFFFFF +$node->start; +$result = $node->safe_psql('postgres', "SELECT next_multi_offset FROM pg_control_checkpoint();"); +ok($result >= 16777215, 'setup cluster with given mxoff'); +$node->stop; + +$node = PostgreSQL::Test::Cluster->new('test-xid'); +$node->init(extra => ['-x', '16777215']); # 0xFFFFFF +$node->start; +$result = $node->safe_psql('postgres', "SELECT txid_current();"); +ok($result >= 16777215, 'setup cluster with given xid - check 1'); +$result = $node->safe_psql('postgres', "SELECT oldest_xid FROM pg_control_checkpoint();"); +ok($result >= 16777215, 'setup cluster with given xid - check 2'); +$node->stop; + done_testing(); diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h index 34ad46c067..4ce79b12e3 100644 --- a/src/include/access/xlog.h +++ b/src/include/access/xlog.h @@ -94,6 +94,9 @@ typedef enum RecoveryState } RecoveryState; extern PGDLLIMPORT int wal_level; +extern PGDLLIMPORT TransactionId start_xid; +extern PGDLLIMPORT MultiXactId start_mxid; +extern PGDLLIMPORT MultiXactOffset start_mxoff; /* Is WAL archiving enabled (always or only while server is running normally)? 
*/ #define XLogArchivingActive() \ diff --git a/src/include/c.h b/src/include/c.h index e1b3187d0b..f770e9a140 100644 --- a/src/include/c.h +++ b/src/include/c.h @@ -668,6 +668,10 @@ typedef uint64 MultiXactOffset; typedef uint32 CommandId; +#define StartTransactionIdIsValid(xid) ((xid) <= 0xFFFFFFFF) +#define StartMultiXactIdIsValid(mxid) ((mxid) <= 0xFFFFFFFF) +#define StartMultiXactOffsetIsValid(offset) ((offset) <= 0xFFFFFFFF) + #define FirstCommandId ((CommandId) 0) #define InvalidCommandId (~(CommandId)0) diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h index 0fc2c093b0..0a7518df0d 100644 --- a/src/include/catalog/pg_class.h +++ b/src/include/catalog/pg_class.h @@ -123,7 +123,7 @@ CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat Oid relrewrite BKI_DEFAULT(0) BKI_LOOKUP_OPT(pg_class); /* all Xids < this are frozen in this rel */ - TransactionId relfrozenxid BKI_DEFAULT(3); /* FirstNormalTransactionId */ + TransactionId relfrozenxid BKI_DEFAULT(RECENTXMIN); /* FirstNormalTransactionId */ /* all multixacts in this rel are >= this; it is really a MultiXactId */ TransactionId relminmxid BKI_DEFAULT(1); /* FirstMultiXactId */ -- 2.43.0
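If you would rather exercise the new initdb switches from patch 0005 by hand than through the TAP test, something along these lines should do (a sketch only: the data directory and start values are arbitrary examples, and the patched binaries are assumed to be first in PATH):

    # Initialize a cluster with non-default starting xid/mxid/mxoff and check
    # that the values ended up in the control file.
    initdb -D /tmp/data-nonstd -x 0x10000000 -m 0xDEAD00 -o 0xBEEF00
    pg_controldata /tmp/data-nonstd | \
        grep -E "NextXID|NextMultiXactId|NextMultiOffset|oldestXID|oldestMultiXid"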
v9-0004-Get-rid-of-MultiXactMemberFreezeThreshold-call.patch
Description: Binary data
v9-0002-Use-64-bit-multixact-offsets.patch
Description: Binary data
v9-0001-Use-64-bit-format-output-for-multixact-offsets.patch
Description: Binary data
v9-0003-Make-pg_upgrade-convert-multixact-offsets.patch
Description: Binary data
From 33e21cf86b1813a67c699d703ab1f75bcf28a7b1 Mon Sep 17 00:00:00 2001 From: Maxim Orlov <orlo...@gmail.com> Date: Wed, 13 Nov 2024 16:34:34 +0300 Subject: [PATCH v9 7/7] TEST: bump catver --- src/bin/pg_upgrade/pg_upgrade.h | 2 +- src/include/catalog/catversion.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h index 2c85ec1e94..18faedc963 100644 --- a/src/bin/pg_upgrade/pg_upgrade.h +++ b/src/bin/pg_upgrade/pg_upgrade.h @@ -119,7 +119,7 @@ extern char *output_files[]; * * XXX: should be changed to the actual CATALOG_VERSION_NO on commit. */ -#define MULTIXACTOFFSET_FORMATCHANGE_CAT_VER 202409041 +#define MULTIXACTOFFSET_FORMATCHANGE_CAT_VER 202411112 /* * large object chunk size added to pg_controldata, diff --git a/src/include/catalog/catversion.h b/src/include/catalog/catversion.h index 5dd91e190a..3d09caf5ae 100644 --- a/src/include/catalog/catversion.h +++ b/src/include/catalog/catversion.h @@ -57,6 +57,6 @@ */ /* yyyymmddN */ -#define CATALOG_VERSION_NO 202411111 +#define CATALOG_VERSION_NO 202411112 #endif -- 2.43.0
From 3558ccb4712d50bcda877474db5c9fd124b6e919 Mon Sep 17 00:00:00 2001 From: Maxim Orlov <orlo...@gmail.com> Date: Tue, 19 Nov 2024 17:08:10 +0300 Subject: [PATCH v9 6/7] TEST: add src/bin/pg_upgrade/t/005_offset.pl --- src/bin/pg_upgrade/t/005_offset.pl | 562 +++++++++++++++++++++++++++++ 1 file changed, 562 insertions(+) create mode 100644 src/bin/pg_upgrade/t/005_offset.pl diff --git a/src/bin/pg_upgrade/t/005_offset.pl b/src/bin/pg_upgrade/t/005_offset.pl new file mode 100644 index 0000000000..1cfd8b364a --- /dev/null +++ b/src/bin/pg_upgrade/t/005_offset.pl @@ -0,0 +1,562 @@ +# Copyright (c) 2024, PostgreSQL Global Development Group + +use strict; +use warnings FATAL => 'all'; + +use File::Find qw(find); + +use PostgreSQL::Test::Cluster; +use PostgreSQL::Test::Utils; +use Test::More; + +# This pair of calls will create significantly more member segments than offset +# segments. +sub prep +{ + my $node = shift; + my $tbl = shift; + + $node->safe_psql('postgres', + "CREATE TABLE ${tbl} (I INT PRIMARY KEY, N_UPDATED INT) " . + " WITH (AUTOVACUUM_ENABLED=FALSE);" . + "INSERT INTO ${tbl} SELECT G, 0 FROM GENERATE_SERIES(1, 50) G;"); +} + +sub fill +{ + my $node = shift; + my $tbl = shift; + + my $nclients = 50; + my $update_every = 90; + my @connections = (); + + for (0..$nclients) + { + my $conn = $node->background_psql('postgres'); + $conn->query_safe("BEGIN"); + + push(@connections, $conn); + } + + for (my $i = 0; $i < 20000; $i++) + { + my $conn = $connections[$i % $nclients]; + + $conn->query_safe("COMMIT;"); + $conn->query_safe("BEGIN"); + + if ($i % $update_every == 0) + { + $conn->query_safe( + "UPDATE ${tbl} SET " . + "N_UPDATED = N_UPDATED + 1 " . + "WHERE I = ${i} % 50"); + } + else + { + $conn->query_safe( + "SELECT * FROM ${tbl} FOR KEY SHARE"); + } + } + + for my $conn (@connections) + { + $conn->quit(); + } +} + +# This pair of calls will create more or less the same amount of membsers and +# offsets segments. +sub prep2 +{ + my $node = shift; + my $tbl = shift; + + $node->safe_psql('postgres', + "CREATE TABLE ${tbl}(BAR INT PRIMARY KEY, BAZ INT); " . + "CREATE OR REPLACE PROCEDURE MXIDFILLER(N_STEPS INT DEFAULT 1000) " . + "LANGUAGE PLPGSQL " . + "AS \$\$ " . + "BEGIN " . + " FOR I IN 1..N_STEPS LOOP " . + " UPDATE ${tbl} SET BAZ = RANDOM(1, 1000) " . + " WHERE BAR IN (SELECT BAR FROM ${tbl} " . + " TABLESAMPLE BERNOULLI(80)); " . + " COMMIT; " . + " END LOOP; " . + "END; \$\$; " . + "INSERT INTO ${tbl} (BAR, BAZ) " . + "SELECT ID, ID FROM GENERATE_SERIES(1, 1024) ID;"); +} + +sub fill2 +{ + my $node = shift; + my $tbl = shift; + my $scale = shift // 1; + + $node->safe_psql('postgres', + "BEGIN; " . + "SELECT * FROM ${tbl} FOR KEY SHARE; " . + "PREPARE TRANSACTION 'A'; " . + "CALL MXIDFILLER((365 * ${scale})::int); " . + "COMMIT PREPARED 'A';"); +} + + +# generate around 2 offset segments and 55 member segments +sub mxid_gen1 +{ + my $node = shift; + my $tbl = shift; + + prep($node, $tbl); + fill($node, $tbl); + + $node->safe_psql('postgres', q(CHECKPOINT)); +} + +# generate around 10 offset segments and 12 member segments +sub mxid_gen2 +{ + my $node = shift; + my $tbl = shift; + my $scale = shift // 1; + + prep2($node, $tbl); + fill2($node, $tbl, $scale); + + $node->safe_psql('postgres', q(CHECKPOINT)); +} + +# Fetch latest multixact checkpoint values. +sub multi_bounds +{ + my ($node) = @_; + my $path = $node->config_data('--bindir'); + my ($stdout, $stderr) = run_command([ + $path . 
'/pg_controldata', + $node->data_dir + ]); + my @control_data = split("\n", $stdout); + my $next = undef; + my $oldest = undef; + my $next_offset = undef; + + foreach (@control_data) + { + if ($_ =~ /^Latest checkpoint's NextMultiXactId:\s*(.*)$/mg) + { + $next = $1; + print ">>> @ node ". $node->name . ", " . $_ . "\n"; + } + + if ($_ =~ /^Latest checkpoint's oldestMultiXid:\s*(.*)$/mg) + { + $oldest = $1; + print ">>> @ node ". $node->name . ", " . $_ . "\n"; + } + + if ($_ =~ /^Latest checkpoint's NextMultiOffset:\s*(.*)$/mg) + { + $next_offset = $1; + print ">>> @ node ". $node->name . ", " . $_ . "\n"; + } + + if (defined($oldest) && defined($next) && defined($next_offset)) + { + last; + } + } + + die "Latest checkpoint's NextMultiXactId not found in control file!\n" + unless defined($next); + + die "Latest checkpoint's oldestMultiXid not found in control file!\n" + unless defined($oldest); + + die "Latest checkpoint's NextMultiOffset not found in control file!\n" + unless defined($next_offset); + + return ($oldest, $next, $next_offset); +} + +# Create node from existing bins. +sub create_new_node +{ + my ($name, %params) = @_; + + create_node(0, @_); +} + +# Create node from ENV oldinstall +sub create_old_node +{ + my ($name, %params) = @_; + + if (!defined($ENV{oldinstall})) + { + die "oldinstall is not defined"; + } + + create_node(1, @_); +} + +sub create_node +{ + my ($install_path_from_env, $name, %params) = @_; + my $scale = defined $params{scale} ? $params{scale} : 1; + my $multi = defined $params{multi} ? $params{multi} : undef; + my $offset = defined $params{offset} ? $params{offset} : undef; + + my $node = + $install_path_from_env ? + PostgreSQL::Test::Cluster->new($name, + install_path => $ENV{oldinstall}) : + PostgreSQL::Test::Cluster->new($name); + + $node->init(force_initdb => 1, + extra => [ + $multi ? ('-m', $multi) : (), + $offset ? ('-o', $offset) : (), + ]); + + # Fixup MOX patch quirk + if ($multi) + { + unlink $node->data_dir . '/pg_multixact/offsets/0000'; + } + if ($offset) + { + unlink $node->data_dir . '/pg_multixact/members/0000'; + } + + $node->append_conf('fsync', 'off'); + $node->append_conf('postgresql.conf', 'max_prepared_transactions = 2'); + + $node->start(); + mxid_gen2($node, 'FOO', $scale); + mxid_gen1($node, 'BAR', $scale); + $node->restart(); + $node->safe_psql('postgres', q(SELECT * FROM FOO)); # just in case... 
+ $node->safe_psql('postgres', q(SELECT * FROM BAR)); + $node->safe_psql('postgres', q(CHECKPOINT)); + $node->stop(); + + return $node; +} + +sub do_upgrade +{ + my ($oldnode, $newnode) = @_; + + command_ok( + [ + 'pg_upgrade', '--no-sync', + '-d', $oldnode->data_dir, + '-D', $newnode->data_dir, + '-b', $oldnode->config_data('--bindir'), + '-B', $newnode->config_data('--bindir'), + '-s', $newnode->host, + '-p', $oldnode->port, + '-P', $newnode->port, + '--check' + ], + 'run of pg_upgrade'); + + command_ok( + [ + 'pg_upgrade', '--no-sync', + '-d', $oldnode->data_dir, + '-D', $newnode->data_dir, + '-b', $oldnode->config_data('--bindir'), + '-B', $newnode->config_data('--bindir'), + '-s', $newnode->host, + '-p', $oldnode->port, + '-P', $newnode->port, + '--copy' + ], + 'run of pg_upgrade'); + + $oldnode->start(); + $newnode->start(); + + my $oldfoo = $oldnode->safe_psql('postgres', q(SELECT * FROM FOO)); + my $newfoo = $newnode->safe_psql('postgres', q(SELECT * FROM FOO)); + is($oldfoo, $newfoo, "select foo eq"); + + my $oldbar = $oldnode->safe_psql('postgres', q(SELECT * FROM BAR)); + my $newbar = $newnode->safe_psql('postgres', q(SELECT * FROM BAR)); + is($oldbar, $newbar, "select bar eq"); + + $oldnode->stop(); + $newnode->stop(); + + multi_bounds($oldnode); + multi_bounds($newnode); +} + +my @TESTS = ( + # tests without ENV oldinstall + 0, 1, 2, 3, 4, 5, 6, + # tests with "real" pg_upgrade + 100, 101, 102, 103, 104, 105, 106, + # self upgrade + 1000, +); + +# ============================================================================= +# Basic sanity tests on a NEW bin +# ============================================================================= + +# starts from the zero +SKIP: +{ + my $TEST_NO = 0; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $node = create_new_node('simple_mo', + scale => 1); + multi_bounds($node); + ok(1, "TEST $TEST_NO PASSED"); +} + +# multi starts from the value +SKIP: +{ + my $TEST_NO = 1; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $node = create_new_node('simple_Mo', + scale => 1.15, + multi => '0x123400'); + multi_bounds($node); + ok(1, "TEST $TEST_NO PASSED"); +} + +# offsets starts from the value +SKIP: +{ + my $TEST_NO = 2; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $node = create_new_node('simple_mO', + scale => 1.15, + offset => '0x432100'); + multi_bounds($node); + ok(1, "TEST $TEST_NO PASSED"); +} + +# multi and offsets starts from the value +SKIP: +{ + my $TEST_NO = 3; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $node = create_new_node('simple_MO', + scale => 1.15, + multi => '0xDEAD00', offset => '0xBEEF00'); + multi_bounds($node); + ok(1, "TEST $TEST_NO PASSED"); +} + +# multi starts from the value, multi wrap +SKIP: +{ + my $TEST_NO = 4; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $node = create_new_node('simple_Mo_wrap', + scale => 1.15, + multi => '0xFFFF7000'); + multi_bounds($node); + ok(1, "TEST $TEST_NO PASSED"); +} + +# offsets starts from the value, offsets wrap +SKIP: +{ + my $TEST_NO = 5; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $node = create_new_node('simple_mO_wrap', + scale => 1.15, + offset => '0xFFFFFC00'); + multi_bounds($node); + ok(1, "TEST $TEST_NO PASSED"); +} + +# multi starts from the value, offsets starts from the value, +# multi wrap, offsets wrap +SKIP: +{ + 
my $TEST_NO = 6; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $node = create_new_node('simple_MO_wrap', + scale => 1.15, + multi => '0xFFFF7000', offset => '0xFFFFFC00'); + multi_bounds($node); + ok(1, "TEST $TEST_NO PASSED"); +} + +# ============================================================================= +# pg_upgarde tests +# ============================================================================= + +# starts from the zero +SKIP: +{ + my $TEST_NO = 100; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $dbname = 'mo'; + my $oldnode = create_old_node("old_$dbname", + scale => 1); + my $newnode = PostgreSQL::Test::Cluster->new("new_$dbname"); + $newnode->init(); + + do_upgrade($oldnode, $newnode); + ok(1, "TEST $TEST_NO PASSED"); +} + +# multi starts from the value +SKIP: +{ + my $TEST_NO = 101; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $dbname = 'Mo'; + my $oldnode = create_old_node("old_$dbname", + scale => 1.2, + multi => '0x123400'); + my $newnode = PostgreSQL::Test::Cluster->new("new_$dbname"); + $newnode->init(); + + do_upgrade($oldnode, $newnode); + ok(1, "TEST $TEST_NO PASSED"); +} + +# offsets starts from the value +SKIP: +{ + my $TEST_NO = 102; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $dbname = 'mO'; + my $oldnode = create_old_node("old_$dbname", + scale => 1.2, + offset => '0x432100'); + my $newnode = PostgreSQL::Test::Cluster->new("new_$dbname"); + $newnode->init(); + + do_upgrade($oldnode, $newnode); + ok(1, "TEST $TEST_NO PASSED"); +} + +# multi and offsets starts from the value +SKIP: +{ + my $TEST_NO = 103; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $dbname = 'MO'; + my $oldnode = create_old_node("old_$dbname", + scale => 1.2, + multi => '0xDEAD00', offset => '0xBEEF00'); + my $newnode = PostgreSQL::Test::Cluster->new("new_$dbname"); + $newnode->init(); + + do_upgrade($oldnode, $newnode); + ok(1, "TEST $TEST_NO PASSED"); +} + +# multi starts from the value, multi wrap +SKIP: +{ + my $TEST_NO = 104; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $dbname = 'Mo_wrap'; + my $oldnode = create_old_node("old_$dbname", + scale => 1.2, + multi => '0xFFFF7000'); + my $newnode = PostgreSQL::Test::Cluster->new("new_$dbname"); + $newnode->init(); + + do_upgrade($oldnode, $newnode); + ok(1, "TEST $TEST_NO PASSED"); +} + +# offsets starts from the value, offsets wrap +SKIP: +{ + my $TEST_NO = 105; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $dbname = 'mO_wrap'; + my $oldnode = create_old_node("old_$dbname", + scale => 1.2, + offset => '0xFFFFFC00'); + my $newnode = PostgreSQL::Test::Cluster->new("new_$dbname"); + $newnode->init(); + + do_upgrade($oldnode, $newnode); + ok(1, "TEST $TEST_NO PASSED"); +} + +# multi starts from the value, offsets starts from the value, +# multi wrap, offsets wrap +SKIP: +{ + my $TEST_NO = 106; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $dbname = 'MO_wrap'; + my $oldnode = create_old_node("old_$dbname", + scale => 1.2, + multi => '0xFFFF7000', offset => '0xFFFFFC00'); + my $newnode = PostgreSQL::Test::Cluster->new("new_$dbname"); + $newnode->init(); + + do_upgrade($oldnode, $newnode); + ok(1, "TEST $TEST_NO PASSED"); +} + +# ============================================================================= +# 
Self upgrade +# ============================================================================= + +# starts from the zero +SKIP: +{ + my $TEST_NO = 1000; + skip "do not test case $TEST_NO", 1 + unless ( grep( /^$TEST_NO$/, @TESTS ) ); + + my $dbname = 'self_upgrade'; + my $oldnode = create_new_node("old_$dbname", + scale => 1); + my $newnode = PostgreSQL::Test::Cluster->new("new_$dbname"); + $newnode->init(); + + do_upgrade($oldnode, $newnode); + ok(1, "TEST $TEST_NO PASSED"); +} + +done_testing(); -- 2.43.0