On 21 Aug 2016 12:36 AM, "Bruce Momjian" <br...@momjian.us> wrote:
> On Sat, Aug 20, 2016 at 01:43:42PM +0900, Michael Paquier wrote:
> > On Sat, Aug 20, 2016 at 1:39 PM, Bruce Momjian <br...@momjian.us> wrote:
> > > Someone reported that a replication slot that existed at the time a
> > > backup was done on the master was copied to the standby.  Because they
> > > didn't realize it, their WAL was not being recycled on the standby.
> > >
> > > Is that possible?  Is it a known behavior?  I don't see it documented.
> >
> > From backup.sgml:
> >    <para>
> >     It is often a good idea to also omit from the backup the files
> >     within the cluster's <filename>pg_replslot/</> directory, so that
> >     replication slots that exist on the master do not become part of the
> >     backup.  Otherwise, the subsequent use of the backup to create a
> >     standby may result in indefinite retention of WAL files on the
> >     standby, and possibly bloat on the master if hot standby feedback
> >     is enabled, because the clients that are using those replication
> >     slots will still be connecting to and updating the slots on the
> >     master, not the standby.  Even if the backup is only intended for
> >     use in creating a new master, copying the replication slots isn't
> >     expected to be particularly useful, since the contents of those
> >     slots will likely be badly out of date by the time the new master
> >     comes on line.
> >    </para>
> >
> > Note as well that pg_basebackup omits the contents of pg_replslot and
> > creates an empty directory in the backup.
> Seems like another good reason to use pg_basebackup rather than doing
> base backups manually; Magnus has been saying this for a while.

The main time that's an issue is when you're rsync'ing to save bandwidth,
using copy-on-write (CoW) volume snapshots, and so on; pg_basebackup becomes
totally impractical on big systems.

> I supposed there is no way we could remove this error-prone behavior
> because replication slots must survive server restarts.  Is there no way
> to know if we are starting a standby from a fresh base backup vs.
> restarting a standby?  In that case we could clear the replication
> slots.  Are there any other error-prone things copied from the master?

We could remove slots when we enter archive recovery. But I've recently
implemented support for logical decoding from standbys, which needs slots,
and physical slot use on a standby is also handy. We cannot tell whether a
slot was created on the replica or was created on the master and copied in
the base backup, and we don't want to drop slots that were created on the
replica.
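Until tools stop copying them, slots that rode along in a base backup can at
least be found and removed by hand on the new node once it is up (the slot
name below is a made-up example, not from the thread):

```sql
-- List all slots present on the freshly cloned node; anything you did not
-- create on this node yourself came from the master via the base backup.
SELECT slot_name, slot_type, active FROM pg_replication_slots;

-- Drop an unwanted copied slot ('stale_slot' is an illustrative name).
SELECT pg_drop_replication_slot('stale_slot');
```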

I also have use cases where slots should be retained when restoring from a
snapshot, for re-integrating restored nodes into a multi-master (MM) mesh.

I think a recovery.conf option to remove all slots during archive recovery
could be handy. But mostly it comes down to tools not copying them.
