Re: pgsql: Add TAP tests for timeouts
> On 15 Mar 2024, at 11:20, Andrey M. Borodin wrote:
>
> The failure seems to be Perl-related.

It's not Perl-related. This is how wait_for_log() times out:

timed out waiting for match: terminating connection due to transaction timeout at t/005_timeouts.pl line 58.

But it did not even wait 100ms...

2024-03-15 03:32:26.492 UTC [1405044:4] 005_timeouts.pl LOG: statement: SELECT injection_points_wakeup('transaction-timeout'); // here we start to wait
2024-03-15 03:32:26.492 UTC [1405044:5] 005_timeouts.pl LOG: disconnection: session time: 0:00:00.002 user=admin database=postgres host=[local]
2024-03-15 03:35:26.623 UTC [1405009:4] LOG: received immediate shutdown request

Best regards, Andrey Borodin.
Re: pgsql: Add TAP tests for timeouts
Kyotaro, thanks for your corrections. I agree that the wording should be improved. But let's deal with the failures first.

> On 15 Mar 2024, at 10:28, Michael Paquier wrote:
>
> On Fri, Mar 15, 2024 at 10:42:35AM +0900, Kyotaro Horiguchi wrote:
>> In 005_timeouts.pl, I found the following comment.
>
> Note also that the test is not stable, one of my machines with
> injection points enabled has complained twice in its last three runs:
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-03-14%2015%3A05%3A04
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-03-15%2003%3A21%3A15

The failure is

t/005_timeouts.pl (Wstat: 65280 Tests: 0 Failed: 0)
  Non-zero exit status: 255
  Parse errors: No plan found in TAP output
Files=5, Tests=68, 185 wallclock secs ( 0.03 usr 0.00 sys + 0.86 cusr 0.80 csys = 1.69 CPU)
Result: FAIL

The failure seems to be Perl-related. As far as I can see, I've done everything akin to 041_checkpoint_at_promote.pl. On batta this test passes, but hachi seems to be unhappy with it. And hachi sometimes passes this test too [0]. I'll look into this further.

Do I understand correctly that we have only 2 buildfarm members with injection points?

Best regards, Andrey Borodin.

[0] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hachi&dt=2024-03-14%2022%3A02%3A41&stg=module-test_misc-check
Re: pgsql: Add TAP tests for timeouts
On Fri, Mar 15, 2024 at 10:42:35AM +0900, Kyotaro Horiguchi wrote:
> In 005_timeouts.pl, I found the following comment.

Note also that the test is not stable; one of my machines with injection points enabled has complained twice in its last three runs:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-03-14%2015%3A05%3A04
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-03-15%2003%3A21%3A15
--
Michael
Re: pgsql: Add TAP tests for timeouts
Hello.

At Thu, 14 Mar 2024 11:25:42 +, Alexander Korotkov wrote in
> Add TAP tests for timeouts
>
> This commit adds new tests to verify that transaction_timeout,
> idle_session_timeout, and idle_in_transaction_session_timeout work as
> expected.
> We introduce new injection points in before throwing a timeout FATAL error
> and check these injection points are reached.
>
> Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
> Author: Andrey Borodin
> Reviewed-by: Alexander Korotkov

In 005_timeouts.pl, I found the following comment.

> # If we send \q with $psql_session->quit it can get to pump already closed.
> # So \q is in initial script, here we only finish IPC::Run.
> $psql_session->{run}->finish;

I'm not sure if "it can get to pump already closed." makes sense. I guess that it means "the command can get to be pumped (or "can be sent") to the session already closed" or something similar?

> # 2. Test of the sidle in transaction timeout

s/sidle/idle/ ?

> # Wait until the backend is in the timeout injection point.

I'm not sure, but it seems that "is in" meant "passes" or something like that?

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
pgsql: Add basic TAP tests for the low-level backup method, take two
Add basic TAP tests for the low-level backup method, take two

There are currently no tests for the low-level backup method where pg_backup_start() and pg_backup_stop() are involved while taking a file-system backup. The tests introduced in this commit rely on a background psql process to make sure that the backup is taken while the session doing the SQL start and stop calls remains alive.

Two cases are checked here with the backup taken:
- Recovery without a backup_label, leading to a corrupted state.
- Recovery with a backup_label, with a consistent state reached.
Both cases cross-check some patterns in the logs generated when running recovery.

Compared to the first attempt in 99b4a63bef94, this includes a couple of fixes making the CI stable (5 runs succeeded here):
- Add the file to the list of tests in meson.build.
- Fix a race condition with the first WAL segment that we expect in the primary's archives, by adding a poll on pg_stat_archiver. The second segment with the checkpoint record is archived thanks to pg_backup_stop waiting for it.
- Fix failure of the test where the backup_label does not exist. The cluster inherits the configuration of the first node; it was attempting to store segments in the first node's archives, triggering failures with copy on Windows.
- Fix failure of the test on Windows because of incorrect parsing of the backup_label in the success case. The data of the backup_label file is retrieved from the output of pg_backup_stop() from a BackgroundPsql and written directly to the backup's data folder. This would include CRLFs (\r\n), causing the startup process to fail at the beginning of recovery when parsing the backup_label, because only LFs (\n) are allowed.
Author: David Steele
Discussion: https://postgr.es/m/f20fcc82-dadb-478d-beb4-1e2ffb0ac...@pgmasters.net

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/071e3ad59d6fd2d6d1277b2bd9579397d10ded28

Modified Files
--------------
src/test/recovery/meson.build               |   1 +
src/test/recovery/t/042_low_level_backup.pl | 144
2 files changed, 145 insertions(+)
pgsql: Refactor initial hash lookup in dynahash.c
Refactor initial hash lookup in dynahash.c

The same pattern is used three times in dynahash.c to retrieve a bucket number and a hash bucket from a hash value. This has popped up while discussing improvements for the type cache, where this piece of refactoring would become useful.

Note that hash_search_with_hash_value() does not need the bucket number, just the hash bucket.

Author: Teodor Sigaev
Reviewed-by: Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b44317696...@sigaev.ru

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/cc5ef90edd809eaf85e11a0ee251229bbf7ce798

Modified Files
--------------
src/backend/utils/hash/dynahash.c | 75 +--
1 file changed, 33 insertions(+), 42 deletions(-)
pgsql: Trim ORDER BY/DISTINCT aggregate pathkeys in gather_grouping_pat
Trim ORDER BY/DISTINCT aggregate pathkeys in gather_grouping_paths

Similar to d8a295389, trim off any PathKeys which are for ORDER BY / DISTINCT aggregate functions from the PathKey List for the Gather Merge paths created by gather_grouping_paths(). These additional PathKeys are not valid to use after grouping has taken place, as these PathKeys belong to columns which are inputs to an aggregate function and are therefore unavailable after aggregation.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/cf63174c-8c89-3953-cb49-48f41f749...@gmail.com
Backpatch-through: 16, where 1349d2790 was added

Branch
------
REL_16_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/4e1ff2aadefbfbf798a4d318514ac85b2fbdc5aa

Modified Files
--------------
src/backend/optimizer/plan/planner.c | 27 +++
1 file changed, 19 insertions(+), 8 deletions(-)
pgsql: Trim ORDER BY/DISTINCT aggregate pathkeys in gather_grouping_pat
Trim ORDER BY/DISTINCT aggregate pathkeys in gather_grouping_paths

Similar to d8a295389, trim off any PathKeys which are for ORDER BY / DISTINCT aggregate functions from the PathKey List for the Gather Merge paths created by gather_grouping_paths(). These additional PathKeys are not valid to use after grouping has taken place, as these PathKeys belong to columns which are inputs to an aggregate function and are therefore unavailable after aggregation.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/cf63174c-8c89-3953-cb49-48f41f749...@gmail.com
Backpatch-through: 16, where 1349d2790 was added

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/4169850f0b6fc98a5f28d2b0ca4c3a4c1ecf4553

Modified Files
--------------
src/backend/optimizer/plan/planner.c | 19 +++
1 file changed, 15 insertions(+), 4 deletions(-)
pgsql: Login event trigger documentation wordsmithing
Login event trigger documentation wordsmithing

Minor wordsmithing on the login trigger documentation and code comments to improve readability, as well as fixing a few small incorrect statements in the comments.

Author: Robert Treat
Discussion: https://postgr.es/m/CAJSLCQ0aMWUh1m6E9YdjeqV61baQ=ehtejx8xoxxg8h_2lc...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/4665cebc8a01dabd54b000bcc107a3468be3a81c

Modified Files
--------------
doc/src/sgml/event-trigger.sgml      | 10 +-
src/backend/commands/event_trigger.c | 18 +-
2 files changed, 14 insertions(+), 14 deletions(-)
pgsql: Make INSERT-from-multiple-VALUES-rows handle domain target colum
Make INSERT-from-multiple-VALUES-rows handle domain target columns.

Commit a3c7a993d fixed some cases involving target columns that are arrays or composites by applying transformAssignedExpr to the VALUES entries, and then stripping off any assignment ArrayRefs or FieldStores that the transformation added. But I forgot about domains over arrays or composites :-(. Such cases would either fail with surprising complaints about mismatched datatypes, or insert unexpected coercions that could lead to odd results. To fix, extend the stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef or FieldStore.

While poking at this, I realized that there's a poorly documented and not-at-all-tested behavior nearby: we coerce each VALUES column to the domain type separately, and rely on the rewriter to merge those operations so that the domain constraints are checked only once. If that merging did not happen, it's entirely possible that we'd get unexpected domain constraint failures due to checking a partially-updated container value. There's no bug there, but while we're here let's improve the commentary about it and add some test cases that explicitly exercise that behavior.

Per bug #18393 from Pablo Kharo. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18393-65fedb1a0de92...@postgresql.org

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/b4a71cf65d70b98aaf890b13ec600e340ff69e3f

Modified Files
--------------
src/backend/parser/analyze.c         |  19 --
src/backend/parser/parse_target.c    |  18 +-
src/backend/rewrite/rewriteHandler.c |   6 +-
src/test/regress/expected/insert.out | 116 ++-
src/test/regress/sql/insert.sql      |  79 +++-
5 files changed, 227 insertions(+), 11 deletions(-)
pgsql: Make INSERT-from-multiple-VALUES-rows handle domain target colum
Make INSERT-from-multiple-VALUES-rows handle domain target columns.

Commit a3c7a993d fixed some cases involving target columns that are arrays or composites by applying transformAssignedExpr to the VALUES entries, and then stripping off any assignment ArrayRefs or FieldStores that the transformation added. But I forgot about domains over arrays or composites :-(. Such cases would either fail with surprising complaints about mismatched datatypes, or insert unexpected coercions that could lead to odd results. To fix, extend the stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef or FieldStore.

While poking at this, I realized that there's a poorly documented and not-at-all-tested behavior nearby: we coerce each VALUES column to the domain type separately, and rely on the rewriter to merge those operations so that the domain constraints are checked only once. If that merging did not happen, it's entirely possible that we'd get unexpected domain constraint failures due to checking a partially-updated container value. There's no bug there, but while we're here let's improve the commentary about it and add some test cases that explicitly exercise that behavior.

Per bug #18393 from Pablo Kharo. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18393-65fedb1a0de92...@postgresql.org

Branch
------
REL_16_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/52898c63e724cd6e18728b4bc0f8d9ec49f12b14

Modified Files
--------------
src/backend/parser/analyze.c         |  19 --
src/backend/parser/parse_target.c    |  18 +-
src/backend/rewrite/rewriteHandler.c |   6 +-
src/test/regress/expected/insert.out | 116 ++-
src/test/regress/sql/insert.sql      |  79 +++-
5 files changed, 227 insertions(+), 11 deletions(-)
pgsql: Make INSERT-from-multiple-VALUES-rows handle domain target colum
Make INSERT-from-multiple-VALUES-rows handle domain target columns.

Commit a3c7a993d fixed some cases involving target columns that are arrays or composites by applying transformAssignedExpr to the VALUES entries, and then stripping off any assignment ArrayRefs or FieldStores that the transformation added. But I forgot about domains over arrays or composites :-(. Such cases would either fail with surprising complaints about mismatched datatypes, or insert unexpected coercions that could lead to odd results. To fix, extend the stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef or FieldStore.

While poking at this, I realized that there's a poorly documented and not-at-all-tested behavior nearby: we coerce each VALUES column to the domain type separately, and rely on the rewriter to merge those operations so that the domain constraints are checked only once. If that merging did not happen, it's entirely possible that we'd get unexpected domain constraint failures due to checking a partially-updated container value. There's no bug there, but while we're here let's improve the commentary about it and add some test cases that explicitly exercise that behavior.

Per bug #18393 from Pablo Kharo. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18393-65fedb1a0de92...@postgresql.org

Branch
------
REL_13_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/0200398dd39e5db0eba06043129f1a4daf9170c5

Modified Files
--------------
src/backend/parser/analyze.c         |  19 --
src/backend/parser/parse_target.c    |  18 +-
src/backend/rewrite/rewriteHandler.c |   6 +-
src/test/regress/expected/insert.out | 116 ++-
src/test/regress/sql/insert.sql      |  79 +++-
5 files changed, 227 insertions(+), 11 deletions(-)
pgsql: Make INSERT-from-multiple-VALUES-rows handle domain target colum
Make INSERT-from-multiple-VALUES-rows handle domain target columns.

Commit a3c7a993d fixed some cases involving target columns that are arrays or composites by applying transformAssignedExpr to the VALUES entries, and then stripping off any assignment ArrayRefs or FieldStores that the transformation added. But I forgot about domains over arrays or composites :-(. Such cases would either fail with surprising complaints about mismatched datatypes, or insert unexpected coercions that could lead to odd results. To fix, extend the stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef or FieldStore.

While poking at this, I realized that there's a poorly documented and not-at-all-tested behavior nearby: we coerce each VALUES column to the domain type separately, and rely on the rewriter to merge those operations so that the domain constraints are checked only once. If that merging did not happen, it's entirely possible that we'd get unexpected domain constraint failures due to checking a partially-updated container value. There's no bug there, but while we're here let's improve the commentary about it and add some test cases that explicitly exercise that behavior.

Per bug #18393 from Pablo Kharo. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18393-65fedb1a0de92...@postgresql.org

Branch
------
REL_12_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/82c87af7a0ac97ec6e99277f2deb4ee55e347e1e

Modified Files
--------------
src/backend/parser/analyze.c         |  19 --
src/backend/parser/parse_target.c    |  18 +-
src/backend/rewrite/rewriteHandler.c |   6 +-
src/test/regress/expected/insert.out | 116 ++-
src/test/regress/sql/insert.sql      |  79 +++-
5 files changed, 227 insertions(+), 11 deletions(-)
pgsql: Make INSERT-from-multiple-VALUES-rows handle domain target colum
Make INSERT-from-multiple-VALUES-rows handle domain target columns.

Commit a3c7a993d fixed some cases involving target columns that are arrays or composites by applying transformAssignedExpr to the VALUES entries, and then stripping off any assignment ArrayRefs or FieldStores that the transformation added. But I forgot about domains over arrays or composites :-(. Such cases would either fail with surprising complaints about mismatched datatypes, or insert unexpected coercions that could lead to odd results. To fix, extend the stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef or FieldStore.

While poking at this, I realized that there's a poorly documented and not-at-all-tested behavior nearby: we coerce each VALUES column to the domain type separately, and rely on the rewriter to merge those operations so that the domain constraints are checked only once. If that merging did not happen, it's entirely possible that we'd get unexpected domain constraint failures due to checking a partially-updated container value. There's no bug there, but while we're here let's improve the commentary about it and add some test cases that explicitly exercise that behavior.

Per bug #18393 from Pablo Kharo. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18393-65fedb1a0de92...@postgresql.org

Branch
------
REL_14_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/3621ffd9f21b313116e84301a1dce0d3f1dc2f8a

Modified Files
--------------
src/backend/parser/analyze.c         |  19 --
src/backend/parser/parse_target.c    |  18 +-
src/backend/rewrite/rewriteHandler.c |   6 +-
src/test/regress/expected/insert.out | 116 ++-
src/test/regress/sql/insert.sql      |  79 +++-
5 files changed, 227 insertions(+), 11 deletions(-)
pgsql: Make INSERT-from-multiple-VALUES-rows handle domain target colum
Make INSERT-from-multiple-VALUES-rows handle domain target columns.

Commit a3c7a993d fixed some cases involving target columns that are arrays or composites by applying transformAssignedExpr to the VALUES entries, and then stripping off any assignment ArrayRefs or FieldStores that the transformation added. But I forgot about domains over arrays or composites :-(. Such cases would either fail with surprising complaints about mismatched datatypes, or insert unexpected coercions that could lead to odd results. To fix, extend the stripping logic to get rid of CoerceToDomain if it's atop an ArrayRef or FieldStore.

While poking at this, I realized that there's a poorly documented and not-at-all-tested behavior nearby: we coerce each VALUES column to the domain type separately, and rely on the rewriter to merge those operations so that the domain constraints are checked only once. If that merging did not happen, it's entirely possible that we'd get unexpected domain constraint failures due to checking a partially-updated container value. There's no bug there, but while we're here let's improve the commentary about it and add some test cases that explicitly exercise that behavior.

Per bug #18393 from Pablo Kharo. Back-patch to all supported branches.

Discussion: https://postgr.es/m/18393-65fedb1a0de92...@postgresql.org

Branch
------
REL_15_STABLE

Details
-------
https://git.postgresql.org/pg/commitdiff/7c61d23422afece255ed47c84876fb062f40d451

Modified Files
--------------
src/backend/parser/analyze.c         |  19 --
src/backend/parser/parse_target.c    |  18 +-
src/backend/rewrite/rewriteHandler.c |   6 +-
src/test/regress/expected/insert.out | 116 ++-
src/test/regress/sql/insert.sql      |  79 +++-
5 files changed, 227 insertions(+), 11 deletions(-)
pgsql: Add pg_column_toast_chunk_id().
Add pg_column_toast_chunk_id().

This function returns the chunk_id of an on-disk TOASTed value. If the value is un-TOASTed or not on-disk, it returns NULL. This is useful for identifying which values are actually TOASTed and for investigating "unexpected chunk number" errors.

Bumps catversion.

Author: Yugo Nagata
Reviewed-by: Jian He
Discussion: https://postgr.es/m/20230329105507.d764497456eeac1ca491b5bd%40sraoss.co.jp

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/d1162cfda885c5a8cb9cebfc8eed9f1d76855e83

Modified Files
--------------
doc/src/sgml/func.sgml                       | 17
src/backend/utils/adt/varlena.c              | 41
src/include/catalog/catversion.h             |  2 +-
src/include/catalog/pg_proc.dat              |  3 ++
src/test/regress/expected/misc_functions.out | 16 +++
src/test/regress/sql/misc_functions.sql      | 12
6 files changed, 90 insertions(+), 1 deletion(-)
pgsql: Remove redundant snapshot copying from parallel leader to worker
Remove redundant snapshot copying from parallel leader to workers

The parallel query infrastructure copies the leader backend's active snapshot to the worker processes. But the BitmapHeapScan node also had bespoke code to pass the snapshot from leader to worker. That was redundant, so remove it.

The removed code was analogous to the snapshot serialization in table_parallelscan_initialize(), but that was the wrong role model. A parallel bitmap heap scan is more like an independent non-parallel bitmap heap scan in each parallel worker as far as the table AM is concerned, because the coordination is done in nodeBitmapHeapscan.c, and the table AM doesn't need to know anything about it.

This relies on the assumption that es_snapshot == GetActiveSnapshot(). That's not a new assumption; things would get weird if you used the QueryDesc's snapshot for visibility checks in the scans, but the active snapshot for evaluating quals, for example. This could use some refactoring and cleanup, but for now, just add some assertions.

Reviewed-by: Dilip Kumar, Robert Haas
Discussion: https://www.postgresql.org/message-id/5f3b9d59-0f43-419d-80ca-6d04c07cf...@iki.fi

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/84c18acaf690e438e953e387caf1c13298d4ecb4

Modified Files
--------------
src/backend/access/table/tableam.c        | 10 --
src/backend/executor/execMain.c           |  6 ++
src/backend/executor/execParallel.c       |  7 +++
src/backend/executor/nodeBitmapHeapscan.c | 17 ++---
src/include/access/tableam.h              |  5 -
src/include/nodes/execnodes.h             |  4
6 files changed, 15 insertions(+), 34 deletions(-)
pgsql: Allow a no-wait lock acquisition to succeed in more cases.
Allow a no-wait lock acquisition to succeed in more cases.

We don't determine the position at which a process waiting for a lock should insert itself into the wait queue until we reach ProcSleep(), and we may at that point discover that we must insert ourselves ahead of everyone who wants a conflicting lock, in which case we obtain the lock immediately. Up until now, a no-wait lock acquisition would fail in such cases, erroneously claiming that the lock couldn't be obtained immediately.

Fix that by trying ProcSleep even in the no-wait case.

No back-patch for now, because I'm treating this as an improvement to the existing no-wait feature. It could instead be argued that it's a bug fix, on the theory that there should never be any case whatsoever where no-wait fails to obtain a lock that would have been obtained immediately without no-wait, but I'm reluctant to interpret the semantics of no-wait that strictly.

Robert Haas and Jingxian Li

Discussion: http://postgr.es/m/ca+tgmobch-kmxgvpb0bb-inmdtcnktvcz4jbxdjows3kym+...@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/2346df6fc373df9c5ab944eebecf7d3036d727de

Modified Files
--------------
src/backend/storage/lmgr/lock.c             | 119
src/backend/storage/lmgr/proc.c             |  20 -
src/include/storage/proc.h                  |   4 +-
src/test/isolation/expected/lock-nowait.out |   9 +++
src/test/isolation/isolation_schedule       |   1 +
src/test/isolation/specs/lock-nowait.spec   |  28 +++
6 files changed, 128 insertions(+), 53 deletions(-)
pgsql: Fix contrib/pg_visibility/meson.build
Fix contrib/pg_visibility/meson.build

I broke that in e85662df44ff by oversight.

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/c20d90a41ca869f9c6dd4058ad1c7f5c9ee9d912

Modified Files
--------------
contrib/pg_visibility/meson.build | 1 +
1 file changed, 1 insertion(+)
pgsql: Add TAP tests for timeouts
Add TAP tests for timeouts

This commit adds new tests to verify that transaction_timeout, idle_session_timeout, and idle_in_transaction_session_timeout work as expected. We introduce new injection points before throwing a timeout FATAL error and check that these injection points are reached.

Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
Author: Andrey Borodin
Reviewed-by: Alexander Korotkov

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/eeefd4280f6e5167d70efabb89586b7d38922d95

Modified Files
--------------
src/backend/tcop/postgres.c                  |  10 +++
src/test/modules/test_misc/Makefile          |   4 +
src/test/modules/test_misc/meson.build       |   4 +
src/test/modules/test_misc/t/005_timeouts.pl | 129 +++
4 files changed, 147 insertions(+)
pgsql: Fix false reports in pg_visibility
Fix false reports in pg_visibility

Currently, pg_visibility computes its xid horizon using GetOldestNonRemovableTransactionId(). The problem is that this horizon can sometimes go backward, which can lead to reporting false errors.

In order to fix that, this commit implements a new function GetStrictOldestNonRemovableTransactionId(). This function computes an xid horizon that is guaranteed to be newer than or equal to any xid horizon computed before. We have to do the following to achieve this.

1. Ignore processes' xmins, because they consider connections to other databases that were ignored before.
2. Ignore KnownAssignedXids, because they are not database-aware. At the same time, the primary could compute its horizons database-aware.
3. Ignore the walsender xmin, because it could go backward if some replication connections don't use replication slots.

As a result, we're using only currently running xids to compute the horizon. Surely this significantly sacrifices accuracy, but we have to do so to avoid reporting false errors.

Inspired by an earlier patch by Daniel Shelepanov and the following discussion with Robert Haas and Tom Lane.

Discussion: https://postgr.es/m/1649062270.289865713%40f403.i.mail.ru
Reviewed-by: Alexander Lakhin, Dmitry Koval

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/e85662df44ff47acdf5d2d413339445d60a9c30c

Modified Files
--------------
contrib/pg_visibility/Makefile                    |  1 +
contrib/pg_visibility/meson.build                 |  4 ++
contrib/pg_visibility/pg_visibility.c             | 68 --
.../pg_visibility/t/001_concurrent_transaction.pl | 47 +++
src/backend/storage/ipc/procarray.c               | 13 -
src/include/storage/standby.h                     |  2 +
6 files changed, 129 insertions(+), 6 deletions(-)
pgsql: Comment out noisy libpq_pipeline test
Comment out noisy libpq_pipeline test

libpq_pipeline's new 'cancel' test needs more research; disable it temporarily to prevent measles in the buildfarm.

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/cc6e64afda530576d83e331365d36c758495a7cd

Modified Files
--------------
src/test/modules/libpq_pipeline/libpq_pipeline.c | 3 +++
1 file changed, 3 insertions(+)
pgsql: Fix documentation comment for pg_md5_hash
Fix documentation comment for pg_md5_hash

Commit b69aba74578 added the errstr parameter to pg_md5_hash but missed updating the synopsis in the documentation comment. The follow-up commit 587de223f03 added the parameter to the list of outputs. The return value had been changed from integer to bool before that, but the old type remained in the synopsis. This fixes both.

Author: Tatsuro Yamada
Discussion: https://postgr.es/m/tyypr01mb82313576150cc86084a122cd9e...@tyypr01mb8231.jpnprd01.prod.outlook.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/6b41ef03306f50602f68593d562cd73d5e39a9b9

Modified Files
--------------
src/common/md5_common.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)