On Wed, 11 Feb 2026 at 12:53, Amit Kapila <[email protected]> wrote:
>
> On Sat, Jan 31, 2026 at 5:41 PM Matthias van de Meent
> <[email protected]> wrote:
> >
> > I don't think the harm of changing wal_level from 'replica' to
> > 'logical' is decreased, because the harm is in the distributed
> > performance impact, not the access to data. A physical replication
> > slot does not (need to) impact the write performance of other backends
> > if it's sufficiently partitioned from other workloads (not configured
> > for syncrep, etc.), but wal_level=logical cannot be partitioned from
> > write workloads: it adds a non-negotiable overhead to the write
> > workloads of other backends, which now need to track more data
> > (replica identity columns) and must write more WAL.
> >
>
> While it's true that wal_level = logical adds inline overhead to WAL
> writes, the argument that physical slots have no impact on backend
> performance is a bit of a stretch. A physical slot that fails to
> advance its xmin (due to a lagging standby or a forgotten slot)
> creates a global performance impact via table and index bloat.

Only if the standby behind that physical slot reports (and has
reported) hot standby feedback, but yes.

> This
> bloat increases I/O pressure and write amplification (via more index
> page splits, more FPWs, etc.) for every backend on the
> system, often with longer-lasting consequences than the extra bytes
> written to WAL for logical logging. Since the REPLICATION role already
> has the power to degrade system performance via the xmin horizon,
> allowing them to toggle logical logging doesn't grant a fundamentally
> new 'type' of destructive power.

As I mentioned, the performance impact of a misbehaving physical
replication slot does not appear immediately; it is a gradual
worsening that can be monitored, and automation can be implemented to
invalidate the slot before any real performance impact gets started.
The same is not true when effective_wal_level=logical is enabled: even
if performance would probably return to normal more quickly with
logical replication than after a truly bad case with physical
replication, the performance change is immediate for every new
transaction.
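
To make "can be monitored" a bit more concrete, here is a rough sketch
of the kind of check I have in mind (plain SQL against
pg_replication_slots; the 10 GB and 100M-xid thresholds are made-up
example numbers, not recommendations):

  -- Flag slots that retain a lot of WAL or hold back the xmin horizon,
  -- so they can be alerted on (or dropped) before other backends notice.
  SELECT slot_name,
         active,
         wal_status,
         pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(),
                                        restart_lsn)) AS retained_wal,
         age(xmin)         AS xmin_age,
         age(catalog_xmin) AS catalog_xmin_age
    FROM pg_replication_slots
   WHERE pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
             > 10 * 1024^3                -- > ~10 GB of retained WAL
      OR age(xmin) > 100000000;           -- xmin held back > 100M xids

Combine that with max_slot_wal_keep_size for the WAL-retention side and
a job that calls pg_drop_replication_slot() when a threshold is
exceeded, and the DBA gets to react before the gradual impact becomes a
real problem.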

> > Replication slots that keep WAL from being recycled can be monitored
> > (and therefore, likely acted on) before the relevant problem (running
> > out of disk, OOD) occurs, which is not the case with the current effective_wal_level
> > implementation. One moment your tps is normal, the next moment it
> > drops because a role with REPLICATION added a logical slot, and you'll
> > have to delete it and wait for a checkpoint to revert back to replica.
> > The difference here is reaction time until it starts impacting
> > transactions.
> >
>
> We implemented this delayed behavior of reverting wal_level on
> slot_drop to keep the code simpler; apart from that, the delay is
> mainly needed for temporary slots when the session exits abruptly.
> So, if required, we can consider improving this delayed behavior for
> some common cases in the future.

I wouldn't be opposed to a faster response, but that's not what I'm
looking for. I'm not looking for a change in behaviour when
enabling/disabling logical slots; if the DBA agrees to run the risk of
overhead caused by increased WAL levels at any point in time, then
that's the DBA's choice. So, the current design of how
effective_wal_level is updated looks OK to me.
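
Put differently, assuming the patch exposes the effective level in a
way that can be inspected (I'm inferring a SHOW-able GUC from the name;
adjust if it is surfaced differently), the behaviour I'm fine with
looks roughly like:

  -- with wal_level = replica in postgresql.conf
  SHOW effective_wal_level;    -- 'replica'
  SELECT pg_create_logical_replication_slot('s1', 'pgoutput');
  SHOW effective_wal_level;    -- 'logical'
  SELECT pg_drop_replication_slot('s1');
  -- as discussed above, the revert back to 'replica' only happens
  -- after a later checkpoint, not immediately on slot drop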

However, with the current code, the DBA can't choose whether to allow
logical replication at all, at least not without reverting to
wal_level=minimal -- which effectively removes all HA features.  No
amount of monitoring or rights management lets a DBA use the features
enabled by wal_level=replica without risking the additional overhead of
effective_wal_level=logical; that is a very significant change from PG
versions up to 18, where you could safely run your server like that.

So, what I'm looking for is a configuration option that disables the
upgrade path from effective_wal_level=replica to
effective_wal_level=logical; one that is not controlled by users with
the REPLICATION role option.
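
To make the ask concrete, something along these lines in
postgresql.conf would do (the GUC name below is invented purely for
illustration; I'm not proposing a particular spelling):

  wal_level = replica
  # hypothetical knob: cap the level the server may auto-upgrade to
  max_effective_wal_level = replica

With such a cap in place, creating a logical replication slot would
presumably fail with the familiar "logical decoding requires
wal_level >= logical" error, instead of silently raising the WAL level
for everyone.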


Kind regards,

Matthias van de Meent

