On Tue, Dec 20, 2016 at 10:06 PM, Craig Ringer <cr...@2ndquadrant.com> wrote:
> On 20 December 2016 at 15:03, Petr Jelinek <petr.jeli...@2ndquadrant.com> wrote:
>>> The biggest change in this patch, and the main intrusive part, is that
>>> procArray->replication_slot_catalog_xmin is no longer directly used by
>>> vacuum. Instead, a new ShmemVariableCache->oldestCatalogXmin field is
>>> added, with a corresponding CheckPoint field.
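For anyone following along, the split described above — slots publish a requested catalog_xmin, but vacuum only trusts a separately maintained value — can be modelled roughly like this. This is an illustrative sketch, not the patch's code; only the names replication_slot_catalog_xmin and oldestCatalogXmin come from the description above, and wraparound-aware xid comparison is simplified away:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* Simplified xid comparison; the real code handles wraparound. */
static bool
TransactionIdPrecedes(TransactionId a, TransactionId b)
{
    return a < b;
}

/* What replication slots currently request
 * (stands in for procArray->replication_slot_catalog_xmin). */
static TransactionId replication_slot_catalog_xmin = InvalidTransactionId;

/* What vacuum is allowed to rely on
 * (stands in for ShmemVariableCache->oldestCatalogXmin).
 * Advanced only once the new value is durably recorded, so it can
 * lag behind what the slots are asking for. */
static TransactionId oldestCatalogXmin = InvalidTransactionId;

/* Vacuum consults only the durable value, never the volatile request. */
static TransactionId
GetCatalogXminForVacuum(void)
{
    return oldestCatalogXmin;
}

/* Called after the advancement has been made durable; never moves back. */
static void
AdvanceOldestCatalogXmin(TransactionId newXmin)
{
    if (oldestCatalogXmin == InvalidTransactionId ||
        TransactionIdPrecedes(oldestCatalogXmin, newXmin))
        oldestCatalogXmin = newXmin;
}
```

The point of the indirection is that vacuum's horizon only moves when the move has been persisted, which is what makes "detect it and fail gracefully" possible.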
>> If this mechanism isn't needed most of the time, wouldn't it be
>> better not to have it, and instead have a way to ask the physical slot
>> what its current reserved catalog_xmin is? (In most cases the standby
>> should already know, since it's the one sending hot_standby_feedback,
>> but the first slot created on such a standby may need to check.)
> Yes, and that was actually my originally preferred approach, though
> the one above does offer the advantage that if something goes wrong we
> can detect it and fail gracefully. Possibly not worth the complexity,
> though. Your approach requires us to make very sure that
> hot_standby_feedback does not get turned off by the user or become
> ineffective once we're replicating, since we won't have any way to
> detect when needed tuples are removed. We'd probably just bail out
> with relcache/syscache lookup errors, but I can't guarantee we
> wouldn't crash if we tried logical decoding on WAL where the needed
> catalog rows have been removed.
I dunno, Craig, I think your approach sounds more robust. It's not
very nice to introduce a bunch of arbitrary prohibitions on what works
with what, and it doesn't sound like that approach is altogether
watertight anyway. Incorporating an occasional, small record into the
WAL stream to mark the advancement of the reserved catalog_xmin seems
like a cleaner and safer solution. We certainly do NOT want to find
out about corruption only through random relcache/syscache lookup
failures, let alone crashes.
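The advancement-record idea sketched above might look something like the following. To be clear, this is a toy model under assumed names: xl_catalog_xmin_advance and the in-memory "WAL" array are hypothetical, and xid wraparound is ignored for brevity; the real record and redo routine belong to the actual patch:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Hypothetical tiny WAL record: just the newly reserved catalog_xmin. */
typedef struct xl_catalog_xmin_advance
{
    TransactionId catalog_xmin;
} xl_catalog_xmin_advance;

/* Stand-in for the WAL stream: a fixed-size in-memory log. */
static xl_catalog_xmin_advance wal_log[16];
static int wal_len = 0;

/* The value vacuum trusts on the primary. */
static TransactionId oldestCatalogXmin = 0;

/* Only emit a record when the horizon actually advances, so the
 * WAL overhead stays occasional and small. */
static void
LogCatalogXminAdvance(TransactionId newXmin)
{
    if (newXmin > oldestCatalogXmin)    /* wraparound ignored here */
    {
        wal_log[wal_len].catalog_xmin = newXmin;  /* "XLogInsert" */
        wal_len++;
        oldestCatalogXmin = newXmin;
    }
}

/* Redo: a standby (or crash recovery) replays the records and arrives
 * at the same reserved horizon, so it can refuse decoding that would
 * need already-removed catalog rows instead of failing unpredictably. */
static TransactionId
ReplayCatalogXmin(void)
{
    TransactionId v = 0;
    for (int i = 0; i < wal_len; i++)
        v = wal_log[i].catalog_xmin;
    return v;
}
```

Because the standby sees every advancement record, it always knows exactly which catalog_xmin is safe, which is what makes the failure mode detectable rather than a surprise lookup error.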
The Enterprise PostgreSQL Company
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)