Hello, Matthias!
> though I suspect the SP-GIST tests to have
> bugs, as an intermediate version of my 0003 patch didn't trigger the
> tests to fail
It all fails on master - could you please clarify what "intermediate" means in
that case? Also, I think it is a good idea to add the same type of test t
Hello!
I was thinking the same thing once - and you may
see db989184cda7f4aa1ff764cca96142029e7e093b for the special comment about
that :)
https://github.com/postgres/postgres/commit/db989184cda7f4aa1ff764cca96142029e7e093b
Hello!
I realize the proposed solution does not guarantee the absence of false-negative
cases...
This happens because I am looking only at XID values, and those have nothing to
do with transaction commit order in the general case.
I'll look for some other option.
Best regards,
Mikhail.
Hello, everyone!
Just some commit messages + a few cleanups.
Best regards,
Mikhail.
v8-0002-Fix-index-only-scan-race-condition-in-GiST-implem.patch
Description: Binary data
v8-0003-Improve-buffer-handling-for-killed-items-in-GiST-.patch
Description: Binary data
v8-0004-Fix-index-only-scan-rac
Oops, I missed one commit - fixed (the logic related to LP_DEAD in GiST is
extracted into a separate commit).
Also, the commitfest entry is here - https://commitfest.postgresql.org/52/5542/
v7-0004-This-should-fix-issues-with-incorrect-results-whe.patch
Description: Binary data
v7-0002-This-should-fix-issu
Hello everyone, and Matthias!
I have fixed the SP-GiST-related crash and a few issues in the implementation.
Now it passes the tests and (in my opinion) feels simpler.
I'll register this thread in the commitfest to honor the bureaucracy.
Best regards,
Mikhail.
v6-0003-This-should-fix-issues-with-incorrect-resu
Hello!
I think it is a good idea to send a notification (at least once) to the
authors of the entries, because it is easy to miss this thread.
Best regards,
Mikhail.
Hello, everyone!
Simplified (and stabilized, I hope) the test.
Best regards,
Mikhail.
v4-0001-Fix-possible-lost-tuples-in-non-MVCC-index-scans-.patch
Description: Binary data
v4-0002-Add-isolation-test-to-reproduce-dirty-snapshot-sc.patch
Description: Binary data
Hello, Noah!
> Stepping back, any variation in the total ordering of step-start and
> step-complete events necessarily changes the expected output. Hence, we want
> to freeze that ordering. We could do that directly, by having isolationtester
> read a format that lists start and complete even
Hello, Michael!
> Could you summarize here what you have done to achieve test
> stabilization in your new patch set posted at [1] without using the
> proposal of this thread?
The idea is mostly as follows:
Let's imagine we have two steps - step_before and step_after - which may end
in either order.
Then i
Hello, everyone!
I was able to stabilize (I hope) the tests in [0] without any changes to
isolationtester and without "notices N".
So, I agree, it is better to put it on the shelf for now. But a few words in
the documentation may be a good idea.
Best regards,
Mikhail.
[0]:
https://www.postgresql.org/messa
Hello, everyone!
This is just a small test refactoring after the last stabilization.
Best regards,
Mikhail.
v7-0001-Specs-top-reproduce-the-issues-with-CREATE-INDEX.patch
Description: Binary data
v7-0004-Modify-the-ExecInitPartitionInfo-function-to-cons.patch
Description: Binary data
v7-00
Hello, Noah!
> I misunderstood, and I was mistaken to see this as a bug fix. The
> isolationtester is acting per its definition, and this would be a definition
> change.
Yes, you are right, but I think it is better to clarify it somehow
because from my point of view that definition feels like
Hello, Noah!
Thanks for your attention!
It looks like the simplest solution is just to count the number of
not-yet-completed steps and check that value.
A patch with such changes (+ the same test + README update + commit
message) is attached.
Best regards,
Mikhail.
v2-0001-isolation-tester-Fi
Hello!
In case someone is looking for a workaround - it is possible to use the
"notices N" marker.
Instead of
step before {
SELECT injection_points_run('injection-points-wait-2');
}
use something like:
step before {
DO $$
BEGIN
PERFORM injection_points_run('injection
Hello, everyone!
I have noticed the tests are still flapping a little on FreeBSD.
Now I have added some markers to the isolation specs to avoid possible race
conditions.
Best regards,
Mikhail.
v6-0003-Modify-the-infer_arbiter_indexes-function-to-also.patch
Description: Binary data
v6-0001-Spec
Hello, everyone!
Just curious - are there any plans to continue working on this project?
Best regards,
Mikhail.
Hello, everyone!
While stabilizing tests for [0], I noticed unclear (and, in my opinion,
confusing) behavior of markers in the isolation tester.
I have attached a test with a reproducer.
There are two permutations in the test:
permutation
after(before)
before
detach1
wak
Hello, everyone!
I decided just to clarify - the patch is failing on CFbot [0], but that is by
design - it contains a reproducer which shows how unrelated backends may
affect each other even in the case of **local** injection points, causing the
crash.
Best regards,
Michail.
[0]: https://cirrus-ci.com/
GISTSearchHeapItem;
/* Unvisited item, either index page or heap tuple */
@@ -176,6 +177,7 @@ typedef struct GISTScanOpaqueData
OffsetNumber curPageData; /* next item to return */
MemoryContext pageDataCxt; /* context holding the fetched tuples, for
* index-only scans */
+ Buffer
Hello!
Sorry, I should have expressed my thoughts in more detail, as they don't
matter as much as the time you took to answer.
>I don't quite read it as covering IOS. To me, the comment is more
> along the lines of (extensively extended):
My idea was just to add a few more details about the lock
Hello.
One thing I think we could add to the patches is to adapt the 10-year-old
comment below with a note about IOS:
/*
* We save the LSN of the page as we read it, so that we know whether it
* safe to apply LP_DEAD hints to the page later. This allows us to drop
* the pin for MVCC scans, which
Hello everyone,
My apologies, I probably forgot to attach the images with the benchmark
results in my previous email.
Please find them attached to this message.
Best regards,
Mikhail
Hello!
> Given this is working as expected, would we still need a migration step?
No, it is clear now. Thanks for explaining.
Best regards,
Mikhail.
Hello!
One more thing (maybe I missed it in the patch, but anyway) - should we
add some migration to ensure that old databases will get enabled=true by
default after an upgrade?
Best regards,
Mikhail.
Hello.
A few comments on the patch:
> + temporarily reducing the overhead of index maintenance
> + during bulk data loading operations
>
But tuples are still inserted, so where does the difference come from?
> or verifying an index is not being used
> + before dropping it
Hm, it does not provide
Hello, Hou!
> So, I think it's not a bug in the committed patch but an issue in the testing
> environment. Besides, since we have not seen such failures on BF, I think it
> may not be necessary to improve the test cases.
Thanks for your analysis!
Yes, probably WSL2/Windows interactions cause st
Hello, Michael!
Thank you for your comments and feedback!
Yes, this patch set contains a significant amount of code, which makes it
challenging to review. Some details are explained in the commit messages,
but I’m doing my best to structure the patch set in a way that is as
committable as possibl
Hello everyone!
Yesterday I got a strange set of test errors, probably somehow related to
this patch.
It happened on a modified master branch (based
on d96d1d5152f30d15678e08e75b42756101b7cab6), but I don't think my changes
were affecting it.
My setup is a little tricky: Windows 11 running WSL2 with
Hello, Andrey!
Thanks for the review!
> I think usually only the commit year is written. Something tells me you can
> safely write 2025 there.
Done.
> Can't wrap my head around why you need this?
Oops, copy-paste, fixed.
> I think this comment describes behavior before the fix in present tense.
Fixed.
> Snap
Hello, Noah!
I have Win 11 with that feature enabled, 200_connstr.pl passes without any
issues, but 010_dump_connstr.pl fails, yes.
All other tests seem to be passing, at least without ICU enabled.
010_dump_connstr.pl log is attached.
Best regards,
Mikhail.
log.tgz
Description: application/co
Hello,
I’ve found an issue with pgbench: it processes nested \if conditions
incorrectly.
For example:
\if false
\if true
DROP TABLE money CASCADE;
\endif
\endif
In this case, the table will be dropped, which is unexpected behavior.
Attached is a fix that addresses this issue, a
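For illustration, here is a minimal, self-contained C sketch of the semantics the fix needs (this is not pgbench's actual code; IfState, if_stack and the helper names are invented for this example): an \if encountered while the enclosing branch is inactive must be pushed as "ignored", so that neither it nor its \endif can switch execution back on.

#include <stdbool.h>
#include <stdio.h>

typedef enum { ST_TRUE, ST_FALSE, ST_IGNORED } IfState;

static IfState if_stack[16];
static int     if_depth = 0;

/* Commands execute only when no enclosing \if branch is inactive. */
static bool
executing(void)
{
    return if_depth == 0 || if_stack[if_depth - 1] == ST_TRUE;
}

/* \if: inside an inactive branch the whole nested block is ignored. */
static void
cmd_if(bool cond)
{
    if_stack[if_depth++] = executing() ? (cond ? ST_TRUE : ST_FALSE)
                                       : ST_IGNORED;
}

/* \endif: pop one level of nesting. */
static void
cmd_endif(void)
{
    if (if_depth > 0)
        if_depth--;
}

int
main(void)
{
    cmd_if(false);                      /* \if false        */
    cmd_if(true);                       /* nested \if true  */
    printf("execute DROP TABLE? %s\n",  /* must print "no"  */
           executing() ? "yes" : "no");
    cmd_endif();                        /* \endif           */
    cmd_endif();                        /* \endif           */
    return 0;
}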
Hello, Andrey!
> Interesting bug. It's amazing how long it has stood, given that it would be
> triggered by almost any check after updating a table.
Probably because in real cases, bt_index_check is used much more frequently
than bt_index_parent_check.
> From my POV correct fix direction is to use ap
Hello, everyone and Peter.
I simplified the patch to reproduce the issue more effectively. Now,
bt_index_parent_check fails on a valid index containing just two tuples.
Peter, I included you in this message because you are the author of
bt_index_parent_check, so I thought you might find this rele
Hello, everyone!
While working on [0], I encountered an issue involving a missing tuple in
an index that was built concurrently. The problem only occurred once, but
it caused me a significant amount of frustration. :)
After some time, I managed to find a way to reproduce the issue. It turns
out t
Hello, Michael!
> I fail to see how this is related? The original issue was that this
> was impossible to run safely concurrently, but now we have the
> facilities able to do so. There are a few cases where using a wait
> point has limits, for example outside a transaction context for some
> of
Hello, everyone!
I've improved the test stability. The revised version should provide
consistent results in all test runs.
Best regards,
Mikhail.
From e9a562587a509ace581d7ccf40eb41eb73b93b6f Mon Sep 17 00:00:00 2001
From: nkey
Date: Sun, 24 Nov 2024 14:55:13 +0100
Subject: [PATCH v5 4/4] Mod
Hello, Heikki, Nathan and Michael!
Oh, please excuse my impudence in bringing you all here, but I finally
found that almost the same issue was already fixed by Heikki [0].
I discovered that a similar issue was previously addressed by Heikki in
commit [0], where installcheck was disabled for injec
Hello, again.
> Another backend attempts to release the same resource (e.g., by aborting
> a transaction) and triggers the injection point.
Oh, all these GPT-like correctors need to be carefully checked :)
Correct version: Another backend attempts to release some resource (e.g.,
by aborting a tr
Hello, Nathan and Michael!
I believe I’ve identified the root cause of the issue. It appears to be
related to GetNamedDSMSegment and the injection_points module.
Here’s how it happens:
* A backend attaches (even locally!) to an injection point, which might get
triggered during resource release (
Hello, everyone!
A rebased version is attached. I also fixed a potential race in the test and
added more detailed commit messages.
Best regards,
Mikhail.
v3-0001-Fix-possible-lost-tuples-in-non-MVCC-index-scans-.patch
Description: Binary data
v3-0002-Add-isolation-test-to-reproduce-dirty-snapshot-sc.pat
Hello, everyone.
Rebased on master.
From 9c85b499793e9a4bcbb55cc5d78cfeacab368b58 Mon Sep 17 00:00:00 2001
From: nkey
Date: Thu, 14 Nov 2024 22:36:26 +0100
Subject: [PATCH v4 2/4] Modify the infer_arbiter_indexes function to consider
both indisvalid and indisready indexes. Ensure that at lea
ssues that led to the reversion of commit d9d076222f5b, providing a safe way to allow xmin advancement during long-running non-unique, non-parallel concurrent index builds while ensuring index correctness.
Regression tests are added to verify the behavior.
Author: Michail Nikolaev
Reviewed-by: [Reviewer
Hello Hayato!
> Note that apply workers can stop for various reasons (e.g., disabling
> subscriptions, erroring out, deadlock...). In this case, the snapshot cannot
> be registered by the worker and the index can be re-built during that period.
However, the xmin of a slot affects replication_slot_xmin in
Hello Hayato,
> WaitForOlderSnapshots() waits for other transactions which can access older
> tuples than the specified (=current) transaction, right? I think it does not
> solve our issue.
Oh, I actually described the idea a bit incorrectly. The goal isn’t simply
to call WaitForOlderSnapshots(slot.xmi
Hello, Hayato!
> Thanks for updating the patch! While reviewing yours, I found a corner case
> where a recently deleted tuple cannot be detected when an index scan is chosen.
> This can happen when indices are re-built during the replication.
> Unfortunately, I don't have any solutions for it.
I just
Hello, Hou!
I have sent the reproducer [0], in the context of conflict detection and
resolution, to the original thread.
[0]:
https://www.postgresql.org/message-id/flat/CANtu0ojMjAwMRJK%3DH8y0YBB0ZEcN%2BJbdZeoXQn8dWO5F67jgsA%40mail.gmail.com#f5d1baf4702685aedf23daa9addc012e
Hello!
> Sorry for being noisy; just in case, I want to note that [1] needs
> to be addressed before any real usage of conflict resolution.
> [1]:
https://www.postgresql.org/message-id/flat/OS0PR01MB5716E30952F542E256DD72E294802%40OS0PR01MB5716.jpnprd01.prod.outlook.com#8aa2083efa76e6a65f51b8a7
Hello!
Sorry for being noisy; just in case, I want to note that [1] needs to
be addressed before any real usage of conflict resolution.
[1]:
https://www.postgresql.org/message-id/flat/OS0PR01MB5716E30952F542E256DD72E294802%40OS0PR01MB5716.jpnprd01.prod.outlook.com#8aa2083efa76e6a65f51b8a7fd5
> The issue is simple, but I'll register this in commitfest just in case.
https://commitfest.postgresql.org/50/5243/
Hello!
While working on [1], I have found a small issue with the correctness
of set_indexsafe_procflags usage in ReindexRelationConcurrently, introduced
in [2].
> idx->safe = (indexRel->rd_indexprs == NIL && indexRel->rd_indpred == NIL);
It is always true because there are no RelationGetIndexExpressi
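For illustration only (not necessarily the committed fix), one possible shape of the corrected check would be to go through the relcache accessor functions, which build and cache those lists on first use, instead of reading the raw rd_indexprs/rd_indpred fields:

    /* sketch only: the accessors populate the relcache fields before testing */
    idx->safe = (RelationGetIndexExpressions(indexRel) == NIL &&
                 RelationGetIndexPredicate(indexRel) == NIL);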
e with potential duplicates in unique
indexes. It looks a bit clunky, but it works for now [4].
A single commit from [5] is also included, just for stable stress testing.
Full diff is available at [6].
Best regards,
Mikhail.
[1]:
https://github.com/michail-nikolaev/postgres/commit/01a47623571592c52c7a3
Hello, everyone!
This patch set addresses the issues discussed in this thread.
The main idea behind this fix is that it is safe to consider indisready
indexes alongside indisvalid indexes as arbiter indexes. However, it's
crucial that at least one fully valid index is present.
Why is it necessar
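As a rough, self-contained C illustration of that rule (this is not the actual infer_arbiter_indexes code; IndexFlags and choose_arbiters are invented for this sketch): indisready indexes may be counted as arbiters, but the chosen set is only usable if at least one of them is also indisvalid.

#include <stdbool.h>
#include <stddef.h>

typedef struct
{
    bool indisvalid;
    bool indisready;
} IndexFlags;

static bool
choose_arbiters(const IndexFlags *idx, size_t n, size_t *n_arbiters)
{
    bool have_valid = false;

    *n_arbiters = 0;
    for (size_t i = 0; i < n; i++)
    {
        if (!idx[i].indisready)
            continue;               /* not ready: cannot act as an arbiter */
        (*n_arbiters)++;
        if (idx[i].indisvalid)
            have_valid = true;      /* at least one fully valid index seen */
    }
    return have_valid;              /* required for the set to be usable */
}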
Hello!
> In addition, I think the bug is not a blocker for the conflict detection
> feature. As the feature simply reports the current behavior of the logical
> apply worker (either unique violation or tuple missing) without introducing any
> new functionality. Furthermore, I think that the new
Ex
Hello!
> I think you might misunderstand the behavior of CheckAndReportConflict(); even
> if it found a conflict, it still inserts the tuple into the index, which means
> the change is applied anyway.
> In the above conditions where a concurrent tuple insertion is removed
> or rolled back before C
Hello, everyone.
I have updated the spec to reproduce the issue; now it includes cases with
both CREATE INDEX and REINDEX.
To run:
make -C src/test/modules/injection_points/ check
The issue is reproduced on an empty index, but it may happen on an index of any
size with the same probability.
It is not critic
Hello, Hou!
> This is as expected, and we have documented this in the code comments. We don't
> need to report a conflict if the conflicting tuple has been removed or updated
> due to a concurrent transaction. The same is true if the transaction that
> inserted the conflicting tuple is rolled back b
Hello!
> I think this is an independent issue which can be discussed separately in the
> original thread [1], and I have replied to that thread.
Thanks! But it seems like this part is still relevant to the current thread:
> It also seems possible that a conflict could be resolved by a concurrent
Hello, Hou zj!
> In my test, if the tuple is updated and the new tuple is in the same page,
> heapam_index_fetch_tuple should find the new tuple using the HOT chain. So,
> it's a bit unclear to me how the updated tuple is missing. Maybe I missed
> some other conditions for this issue.
Yeah, I think the p
Hello, everyone.
There are some comments on this patch related to issue [0].
In short: any DirtySnapshot index scan may fail to find an existing tuple
in the case of a concurrent update.
- FindConflictTuple may return a false-negative result in the case of a
concurrent update because ExecCheckIndexCo
S2Zap0r8AnU3OTmcCOA%40mail.gmail.com
[2]:
https://www.postgresql.org/message-id/CANtu0ojga8s9%2BJ89cAgLzn2e-bQgy3L0iQCKaCnTL%3Dppot%3Dqhw%40mail.gmail.com
[3]:
https://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:new_index_concurrently_approach#diff-50abc48efcc3
Hello, everyone!
Using the brand-new injection points support in specs, I created a spec to
reproduce the issue.
It fails like this currently:
make -C src/test/modules/injection_points/ check
@@ -64,6 +64,7 @@
step s3_s1: <... completed>
step s2_s1: <... completed>
+ERROR: duplicate key valu
Hello!
> Right, but we are extending this functionality to detect and resolve
> such conflicts [1][2]. I am hoping after that such updates won't be
> missed.
Yes, this is a nice feature. However, without the DirtySnapshot index scan
fix, it will fail in numerous instances, especially in master-ma
Hello, Amit!
> I think it is rather less likely or not possible in a parallel apply
> case because such conflicting updates (updates on the same tuple)
> should be serialized at the publisher itself. So one of the updates
> will be after the commit that has the second update.
Glad to hear! But an
Hello, Hayato!
> Thanks for pointing out the issue!
Thanks for your attention!
> IIUC, the issue can happen when two concurrent transactions using
DirtySnapshot access
> the same tuples, which is not specific to the parallel apply
Not exactly, it happens for any DirtySnapshot scan over a B-tree
It seems like I've identified the cause of the issue.
Currently, any DirtySnapshot (or SnapshotSelf) scan over a B-tree index may
skip (not find the TID for) some records in the case of parallel updates.
The following scenario is possible:
* Session 1 reads a B-tree page using SnapshotDirty and
Hello, everyone!
Updates so far:
* the issue happens with both LOGGED and UNLOGGED relations
* the issue happens with DirtySnapshot
* it does not happen with SnapshotSelf
* it does not happen with SnapshotAny
* it is not related to speculatively inserted tuples - I have commented out the
code of their insertion - and the issue conti
Hello, Andres.
Sorry to bother you, but I feel it's necessary to validate the possible
issue with someone who can decide whether it is okay or not.
The issue is reproducible with the first UPSERT implementation (your commit
168d5805e4c08bed7b95d351bf097cff7c07dd65 from 2015) and up to now.
T
Hello, everyone!
While reviewing [1], I noticed that check_exclusion_or_unique_constraint
occasionally returns false negatives for btree unique indexes during UPSERT
operations.
Although this doesn't cause any real issues with INSERT ON CONFLICT, I
wanted to bring it to your attention, as it might
Hello, Noah!
Answering
https://www.postgresql.org/message-id/flat/20240612194857.1c.nmisch%40google.com#684361ba86bad11f4e9fd84dfa8e0084
> On your other thread, it would be useful to see stack traces from the high-CPU
> processes once the live lock has ended all query completion.
I was wrong, it
Hello, Michael!
> As far as I can see, it depends on what kind of query semantics and
> the amount of transparency you are looking for here in your
> application. An error in the query itself can also be defined as
> useful so as your application is aware of what happens as an effect of
> the con
Hello, Michael!
> This is a non-fresh Friday-afternoon idea, but it would make sure that
> we don't have any transactions using the indexes switched to _ccold
> with indisvalid that are waiting for a drop in phase 5. Your tests
> seem to pass with that, and that keeps the operation intact
> concu
Hello, everyone.
> I think it is even possible to see a !alive index in the same situation (it
> is a worse case), but I was unable to reproduce it so far.
Fortunately, it is not possible.
So, it seems like I have found the source of the problem:
1) infer_arbiter_indexes calls RelationGetIndexList to get
Hello.
> But I was unable to reproduce that using some random usleep(), however -
> maybe it is a wrong assumption.
It seems like the assumption is correct - we may use an invalid index as an
arbiter due to a race condition.
The attached patch adds a check for that case, and now the test fails like
this
Hello, Michael.
> Isn't the issue that we may select as arbiter indexes stuff that's
> !indisvalid?
As far as I can see (1) !indisvalid indexes are filtered out.
But... It looks like this choice is not locked in any way (2), so
index_concurrently_swap or index_concurrently_set_dead can change this
Hello!
> Can you say more about the connection you see between $SUBJECT and that? That
> looks like a valid report of an important bug, but I'm not following the
> potential relationship to $SUBJECT.
I was guided by the following logic:
* A pg_class race condition can cause table indexes to look
Hello, everyone.
I am not sure, but I think that issue may be related to the issue described
in
https://www.postgresql.org/message-id/CANtu0ojXmqjmEzp-%3DaJSxjsdE76iAsRgHBoK0QtYHimb_mEfsg%40mail.gmail.com
It looks like REINDEX CONCURRENTLY could interfere with ON CONFLICT UPDATE
in some strange w
during
reindex tuples is 411 speed is 78.8310026363446 per second
# going to start reindex, num tuples in table is 1566
At some moments, all CPUs are hot, but 30 connections are unable to do any
upsert. I think it may somehow be caused by two arbiter indexes (the old and
the new reindexed on
Hello.
I did a POC (1) of the method described in the previous email, and it
looks promising.
It doesn't block VACUUM, and indexes are built about 30% faster (22 min vs
15 min). The additional index is lightweight and does not produce any WAL.
I'll continue stress testing for a while. A
Hello, Matthias and others!
I realized the new horizon was applied only during the validation phase (once
the index is marked as ready).
Now it is applied if the index is not yet marked as valid.
Updated version attached.
--
> I think the best way for this to work wo
Hi again!
I made an error in the `GlobalVisHorizonKindForRel` logic, and it was caught
by a new test.
Fixed version attached.
From 9a8ea366f6d2d144979e825c4ac0bdd2937bf7c1 Mon Sep 17 00:00:00 2001
From: nkey
Date: Tue, 7 May 2024 22:10:56 +0200
Subject: [PATCH v3] WIP: fix d9d076222f5b "VACUUM: ig
Hello, Matthias and others!
Updated WIP attached.
Changes are:
* Renaming - now it feels better to me
* A more reliable approach in `GlobalVisHorizonKindForRel` to make sure we
have not missed `rd_safeindexconcurrentlybuilding`, by calling
`RelationGetIndexList` if required
* An optimization to avoid
Hello, Matthias!
I just realized there is a much simpler and safer way to deal with the
problem.
So, d9d076222f5b94a85e0e318339cfc44b8f26022d (1) had a bug because the scan
was not protected by a snapshot. At the same time, we want this snapshot to
affect not all the relations, but only a subset of
Hello, Matthias!
> We can just release the current snapshot, and get a new one, right? I
> mean, we don't actually use the transaction for much else than
> visibility during the first scan, and I don't think there is a need
> for an actual transaction ID until we're ready to mark the index entry
>
Hello!
> I'm not a fan of this approach. Changing visibility and cleanup
> semantics to only benefit R/CIC sounds like a pain to work with in
> essentially all visibility-related code. I'd much rather have to deal
> with another index AM, even if it takes more time: the changes in
> semantics will
Hi!
> How do you suppose this would work differently from a long-lived
> normal snapshot, which is how it works right now?
The difference is the ability to take a new visibility snapshot periodically
during the second phase, rechecking the visibility of tuples according
to the "reference" snapshot (whic
One more idea is to simply forbid HOT pruning while the second phase is
running. It is not currently possible anyway because of the snapshot being held.
Possible enhancements:
* we may apply the restriction only to particular tables
* we may apply the restriction only to part of the tables (those not yet
scanned by R/CICs).
Hello!
> I think the best way for this to work would be an index method that
> exclusively stores TIDs, and of which we can quickly determine new
> tuples, too. I was thinking about something like GIN's format, but
> using (generation number, tid) instead of ([colno, colvalue], tid) as
> key data
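As a purely hypothetical illustration of the quoted "(generation number, tid)" key idea (TidStub and AuxIndexKey are invented names, not existing PostgreSQL structures):

#include <stdint.h>

/* Stand-in for PostgreSQL's ItemPointerData (block number + offset). */
typedef struct
{
    uint32_t block;
    uint16_t offset;
} TidStub;

/* Hypothetical key layout for the TID-only auxiliary index quoted above:
 * ordering primarily by generation lets a later scan cheaply pick out the
 * tuples inserted since the previous pass. */
typedef struct
{
    uint64_t generation;    /* bumped for each new batch of insertions */
    TidStub  tid;           /* heap tuple identifier */
} AuxIndexKey;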
> > > I just realised there is one issue with this design: We can't cheaply
> > > reset the snapshot during the second table scan:
> > > It is critically important that the second scan of R/CIC uses an index
> > > contents summary (made with index_bulk_delete) that was created while
> > > the curre
Hello, Melanie!
Sorry to interrupt you, just a quick question.
> Correct, but there are changes being discussed where we would freeze
> tuples during pruning as well [0], which would invalidate that
> implementation detail. And, if I had to choose between improved
> opportunistic freezing and imp
Hello!
> Correct, but there are changes being discussed where we would freeze
> tuples during pruning as well [0], which would invalidate that
> implementation detail. And, if I had to choose between improved
> opportunistic freezing and improved R/CIC, I'd probably choose
> improved freezing over
Hello!
It seems like the idea of an "old" snapshot is still a valid one.
> Should this deal with any potential XID wraparound, too?
As far as I understand, in our case we are not affected by this in any way.
VACUUM on our table is not possible because of locking, so nothing
may be frozen (see belo
Hello.
I realized my last idea is invalid (because tuples are frozen using a
dynamically calculated horizon) - so, don't waste your time on it :)
I need to think a little more here.
Thanks,
Mikhail.
> Yes, good catch.
> Assuming we have somehow prevented vac_truncate_clog from occurring
> during CIC, we can leave frozen and potentially frozen
> (xmin
Hello!
> How would this deal with tuples not visible to the old snapshot?
> Presumably we can assume they're newer than that snapshot (the old
> snapshot didn't have it, but the new one does, so it's committed after
> the old snapshot, making them newer), so that backend must have
> inserted it in
Hello!
> Also, it feels like we may apply this to both phases (the first and the second
> scans).
> The original patch (1) was helping only the second one (after the call
> to set_indexsafe_procflags).
Oops, I was wrong here. The original version of the patch was also applied to
both phases.
> Note tha
Hello!
> I've thought about alternative solutions, too: how about getting a new
> snapshot every so often?
> We don't really care about the liveness of the already-scanned data; the
> snapshots used for RIC
> are used only during the scan. C/RIC's relation's lock level means vacuum
> can't run
Hello, hackers!
I am thinking about revisiting (1) ({CREATE INDEX, REINDEX} CONCURRENTLY
improvements) in some lighter way.
Yes, a serious bug (2) was caused by this optimization, and it is now reverted.
But what about a safer idea in that direction:
1) add a new horizon which ignores PROC_IN_SAFE_IC ba
Thanks, everyone, for the help!
Hello, good question!
Thanks for your edits.
As an answer: probably we need to change
"If we know that we're holding ProcArrayLock exclusively, we don't
need the read barrier."
to
"If we're removing xid, we don't need the read barrier because only
the startup process can remove and add xids to Known