On Thu, May 29, 2025 at 11:09 AM shveta malik wrote:
> 
> On Wed, May 28, 2025 at 11:56 AM Masahiko Sawada 
> <sawada.m...@gmail.com> wrote:
> >
> >
> > I didn't know it was intended for testing and debugging purposes so
> > clarifying it in the documentation would be a good idea.
> 
> I have added the suggested docs in v3.

Thanks for updating the patch.

I have a few suggestions for the document from a user's perspective.

1.
>     ... , one
>     condition must be met. The logical replication slot on primary must be advanced
>     to such a catalog change position (catalog_xmin) and WAL's LSN (restart_lsn) for
>     which sufficient data is retained on the corresponding standby server.

The term "catalog change position" might not be easy for some readers
to grasp. Would it be clearer to phrase it as follows?

"The logical replication slot on the primary must reach a state where the WALs
and system catalog rows retained by the slot are also present on the
> corresponding standby server."
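
(As a side note for readers, not part of the proposed wording: the positions
being compared here can be inspected with something like the rough sketch
below. It assumes the existing pg_replication_slots view and
pg_last_wal_replay_lsn(); the slot name is just the one from the log example
in point 3.)

    -- On the primary: positions retained by the failover-enabled logical slot
    SELECT slot_name, restart_lsn, catalog_xmin
      FROM pg_replication_slots
     WHERE slot_name = 'failover_slot';

    -- On the standby: how far WAL replay has progressed
    SELECT pg_last_wal_replay_lsn();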

2.
>     If the primary slot is still lagging behind and synchronization is attempted
>     for the first time, then to prevent the data loss as explained, persistence
>     and synchronization of newly created slot will be skipped, and the following
>     log message may appear on standby.

The phrase "lagging behind" typically refers to the standby, which can be a bit
confusing. I understand that users can work it out from the context, but
would it be easier to understand with a more detailed description like the one
below?

"If the WALs and system catalog rows retained by the slot on the primary have
already been purged from the standby server, ..."
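
(Again just as context, a rough sketch of how a user could run into this case,
assuming the pg_sync_replication_slots() SQL function is what triggers the
synchronization on the standby:

    -- Run on the standby that is connected to the primary via primary_conninfo
    SELECT pg_sync_replication_slots();

If the standby has already removed the WAL or catalog rows that the remote slot
still needs, the newly created slot is not persisted and a message like the one
quoted in point 3 may appear.)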

3.
<programlisting>
     LOG: could not synchronize replication slot "failover_slot" to prevent data loss
     DETAIL:  The remote slot needs WAL at LSN 0/3003F28 and catalog xmin 754, but the standby has LSN 0/3003F28 and catalog xmin 766.
</programlisting>

It seems that a space is missing between "LOG:" and the message.


Best Regards,
Hou zj
