On Sat, Jun 16, 2012 at 3:03 PM, Steve Singer <st...@ssinger.info> wrote:
> I feel that in-core support to capture changes and turn them into change
> records that can be replayed on other databases, without relying on triggers
> and log tables, would be good to have.
> I think we want something flexible enough that people can write consumers of
> the LCRs to do conflict resolution for multi-master, but I am not sure that
> conflict resolution support actually belongs in core.

I agree, on both counts.  Anyone else want to chime in here?

> Most of the complexity of slony (both in terms of lines of code, and issues
> people encounter using it) comes not from the log triggers or replay of the
> logged data but comes from the configuration of the cluster.
> Controlling things like
> * Which tables replicate from a node to which other nodes
> * How do you change the cluster configuration on a running system (adding
> nodes, removing nodes, moving the origin of a table, adding tables to
> replication etc...)

Not being as familiar with Slony as I probably ought to be, I hadn't
given this much thought, but it's an interesting point.  The number of
logical replication policies that someone might want to implement, and
the ways in which they might want to change them as the situation
develops, is very large.  Whole cluster, whole database, one or
several schemas, individual tables, perhaps even more fine-grained
than per-table.  Trying to figure all of that out is going to require
a lot of work and, frankly, I question the value of having that stuff
in core anyway.
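To make the range of policies concrete, here is a hedged sketch (in Python,
purely for illustration) of one way fine-grained replication configuration
could be modelled as "replication sets" that subscriber nodes sign up for.
Every name and structure below is invented, not taken from Slony or any
proposed design:

```python
# Hypothetical model: a "replication set" lists the tables it carries,
# and each subscriber node subscribes to one or more sets.  All names
# here are invented for illustration.

replication_sets = {
    "reporting": {"tables": ["public.orders", "public.customers"]},
    "full":      {"tables": ["public.orders", "public.customers",
                             "public.invoices"]},
}

subscriptions = {
    "node_b": ["reporting"],          # node_b receives only the reporting set
    "node_c": ["reporting", "full"],  # node_c receives everything
}

def sets_for_node(node):
    """Return the replication sets a subscriber node is signed up for."""
    return subscriptions.get(node, [])

def replicates_table(node, table):
    """True if any of the node's sets includes this table."""
    return any(table in replication_sets[s]["tables"]
               for s in sets_for_node(node))
```

Even this toy model shows why reconfiguration on a running system is hard:
moving a table between sets, or adding a node, has to be coordinated with
the replay position of every affected subscriber.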

> I see three catalogs in play here.
> 1. The catalog on the origin
> 2. The catalog on the proxy system (this is the catalog used to translate
> the WAL records to LCRs).  The proxy system will need essentially the same
> pgsql binaries (same architecture, important compile flags, etc.) as the
> origin.
> 3. The catalog on the destination system(s).
> Catalog 2 must be in sync with catalog 1; catalog 3 shouldn't need to be
> in sync with catalog 1.  I think catalogs 2 and 3 are combined in the
> current patch set (though I haven't yet looked at the code closely).  I
> think the performance optimizations Andres has implemented to update tuples
> through low-level functions should be left for later, and that we should be
> generating SQL in the apply cache so we don't start assuming much about
> catalog 3.

+1.  Although there is a lot of performance benefit to be had there,
it seems better to me to get the basics working and then do
performance optimization later.  That is, if we can detect that the
catalogs are in sync, then by all means ship around the binary tuple
to make things faster.  But requiring that (without having any way to
know whether it actually holds) strikes me as a mess.
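To make the "generate SQL in the apply cache" idea concrete, here is a hedged
sketch (in Python, for illustration only) of how a consumer might render a
logical change record as plain SQL, so the apply side needs no catalog that
matches the origin. The LCR layout, field names, and quoting are all
assumptions; a real implementation would use proper literal quoting rather
than `repr`:

```python
# Hypothetical LCR shape: {"op": ..., "table": ..., "new": {...}, "key": {...}}
# Rendering changes as SQL text means the destination only needs to be able
# to execute ordinary statements, not to share binary tuple layouts.

def lcr_to_sql(lcr):
    """Render an insert/update/delete change record as a SQL statement."""
    table = lcr["table"]
    if lcr["op"] == "INSERT":
        cols = ", ".join(lcr["new"])
        vals = ", ".join(repr(v) for v in lcr["new"].values())  # toy quoting
        return f"INSERT INTO {table} ({cols}) VALUES ({vals});"
    if lcr["op"] == "UPDATE":
        sets = ", ".join(f"{c} = {v!r}" for c, v in lcr["new"].items())
        keys = " AND ".join(f"{c} = {v!r}" for c, v in lcr["key"].items())
        return f"UPDATE {table} SET {sets} WHERE {keys};"
    if lcr["op"] == "DELETE":
        keys = " AND ".join(f"{c} = {v!r}" for c, v in lcr["key"].items())
        return f"DELETE FROM {table} WHERE {keys};"
    raise ValueError(f"unknown op {lcr['op']!r}")
```

The trade-off is exactly the one described above: rendering and re-parsing
SQL costs more than shipping binary tuples, but it works even when catalog 3
has drifted from catalog 1.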

> Part of what people expect from a robust in-core solution is that it should
> work with the other in-core features.  If we have to list a bunch of
> in-core types as being incompatible with logical replication, then people
> will look at logical replication with the same 'there be dragons here'
> attitude that scares many people away from the existing third-party
> replication solutions.  Non-core or third-party user-defined types are a
> slightly different matter because we can't control what they do.

I agree, although I don't think either Andres or I are saying anything else.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
