On Thu, Jul 7, 2016 at 9:25 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> I note also that replication slots aren't backed up by pg_dump; I see
> analogy here and think that at least some parts of logical replication will
> be similar and not require DDL at all, just as slots do not.
I agree with that. Of course, it's *impossible* to usefully back up a slot,
because the key ingredient in a slot is the LSN after which WAL should be
preserved, and it's meaningless to preserve that across a dump-and-restore
cycle. But replication set definitions, for example, can be preserved across
a dump and restore, and I am quite sure users will find it very unfortunate
if they aren't.

> We have much to discuss in terms of security, the way it should work and
> what options to support and a sidetrack into syntax isn't warranted at this
> early stage. Please lets discuss those important things first, then return
> to whether DDL makes sense or not; it may do, or may not, or more likely
> which parts of it need DDL and which do not.

We've sort of hijacked this whole thread, which was originally about
something different, so maybe it would be better to start a new thread
specifically to talk about the design of logical replication. For my money,
though, I don't find the designs I've seen so far to be particularly
compelling, and I think the problem is that we tend to think about this from
the point of view of the capabilities that must be available within a single
instance.

Physical replication has the same issue. Users don't want to configure
archive_command and wal_keep_segments and max_wal_senders and wal_level and
set up an archive and create recovery.conf on the standby. They want to spin
up a new standby, and we don't provide any way to just do that.
pg_basebackup's -X stream and -R options represent significant progress in
that direction, but I don't think we've really taken it as far as it can go
yet, which is not to say I know exactly what's missing. Similarly, when the
master fails, users want to promote a standby (either one they choose or the
one that is determined to be furthest ahead) and remaster the others, and
that's not something you can "just do".
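Concretely, the closest we get to "just spin up a standby" today is roughly
the following (the host name, user, and data directory are of course made-up
placeholders):

```shell
# Sketch: cloning a new standby with pg_basebackup. The -X stream option
# streams WAL alongside the base copy so the backup is self-consistent,
# and -R writes the recovery settings the standby needs in order to start
# following the primary.
pg_basebackup \
    -h primary.example.com \
    -U replication_user \
    -D /var/lib/postgresql/standby \
    -X stream -R
```

Even that presumes the user has already dealt with wal_level,
max_wal_senders, and pg_hba.conf on the primary, which is exactly the kind
of per-instance capability plumbing I'm complaining about.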
Similarly, for logical replication, users will want to do things like:

(1) spin up a new logical replication slave out of thin air, replicating an
entire database or several databases or selected replication sets within
selected databases; or

(2) subscribe an existing database to another server, replicating an entire
database or several databases; or

(3) repoint an existing subscription at a new server after a master change
or dump/reload, resynchronizing table contents if necessary; or

(4) stop replication, either with or without dropping the local copies of
the replicated tables.

(This is not an exhaustive list, I'm sure.)

I don't mean to imply that the existing designs are bad as far as they go.
In each case, the functionality that has been built is good. But it's all
focused, as it seems to me, on providing capabilities rather than on
providing a way for users to manage a group of database servers using
high-level primitives. That higher-level stuff largely gets left to add-on
tools, which I don't think is serving us particularly well. Those add-on
tools often find that the core support doesn't quite do everything they'd
like it to do: that's why WAL-E and repmgr, for example, end up having to do
some creative things to deliver certain features. We need to start thinking
of groups of servers rather than individual servers as the unit of
deployment.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (email@example.com)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers