On Tue, Jun 26, 2012 at 05:05:27PM -0500, Kevin Grittner wrote:
> David Fetter wrote:
> > On Mon, Jun 25, 2012 at 01:50:54PM -0500, Kevin Grittner wrote:
>
> >> One fine point regarding before and after images -- if a value
> >> doesn't change in an UPDATE, there's no reason to include it in
> >
David Fetter wrote:
> On Mon, Jun 25, 2012 at 01:50:54PM -0500, Kevin Grittner wrote:
>> One fine point regarding before and after images -- if a value
>> doesn't change in an UPDATE, there's no reason to include it in
>> both the BEFORE and AFTER tuple images, as long as we have the
>> null col
On Mon, Jun 25, 2012 at 01:50:54PM -0500, Kevin Grittner wrote:
> Andres Freund wrote:
>
> > I am not sure we're going to get all that into 9.3.
>
> Sure, that was more related to why I was questioning how much these
> use cases even *could* integrate -- whether it even paid to
> *consider* the
On Mon, Jun 25, 2012 at 3:17 PM, Andres Freund wrote:
>> I suppose the main reason we haven't done it already is that it
>> increases the period of time during which we're using 2X the disk
>> space.
> I find that an acceptable price if it's optional. Making it such doesn't seem
> to be a problem f
On 25 June 2012 17:42, Kevin Grittner wrote:
> This is clearly going to depend on the topology. You would
> definitely want to try to replicate the DDL for the case on which
> Simon is focused (which seems to me to be essentially physical
> replication of catalogs with logical replication of dat
On Monday, June 25, 2012 08:50:54 PM Kevin Grittner wrote:
> Andres Freund wrote:
> >> We most particularly *don't* want DDL to replicate automatically,
> >> because the schema changes are deployed along with related
> >> software changes, and we like to pilot any changes for at least a
> >> few d
Excerpts from Kevin Grittner's message of Mon Jun 25 14:50:54 -0400 2012:
> One fine point regarding before and after images -- if a value
> doesn't change in an UPDATE, there's no reason to include it in both
> the BEFORE and AFTER tuple images, as long as we have the null
> column bitmaps -- or
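
A minimal sketch of the idea quoted above, assuming a simple per-column
"changed" bitmap; the names and record layout are illustrative only, not
PostgreSQL's actual WAL or tuple format:

# Conceptual sketch: elide columns that did not change in an UPDATE, keeping
# only a changed-column bitmap plus the new values for the changed columns.

def compress_update(before, after):
    """Return (changed_bitmap, changed_values) describing the AFTER image as a delta."""
    changed = [b != a for b, a in zip(before, after)]
    values = [a for a, c in zip(after, changed) if c]
    return changed, values

def reconstruct_after(before, changed, values):
    """Rebuild the full AFTER image from the BEFORE image plus the delta."""
    it = iter(values)
    return [next(it) if c else b for b, c in zip(before, changed)]

before = (1, "alice", "old@example.com", None)
after  = (1, "alice", "new@example.com", None)
bitmap, delta = compress_update(before, after)
assert delta == ["new@example.com"]                       # only the changed column ships
assert reconstruct_after(before, bitmap, delta) == list(after)

The receiving side can rebuild the full AFTER image from the BEFORE image plus
the delta, so values that did not change never need to be shipped twice.
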
Hi,
On Monday, June 25, 2012 08:13:54 PM Robert Haas wrote:
> On Mon, Jun 25, 2012 at 1:50 PM, Andres Freund wrote:
> > It's an argument why related infrastructure would be interesting to more
> > than that patch and that's not bad.
> > If the garbage collection is done in a very simplistic manner
Andres Freund wrote:
>> We most particularly *don't* want DDL to replicate automatically,
>> because the schema changes are deployed along with related
>> software changes, and we like to pilot any changes for at least a
>> few days. Depending on the release, the rollout may take a
>> couple mo
Hi,
(munching the mail from Robert and Kevin together)
On Monday, June 25, 2012 06:42:41 PM Kevin Grittner wrote:
> Robert Haas wrote:
> > I bet for a lot of replication systems, the answer is "do a full
> > resync". In other words, we either forbid the operation outright
> > when the table is
On Mon, Jun 25, 2012 at 1:50 PM, Andres Freund wrote:
> It's an argument why related infrastructure would be interesting to more than
> that patch and that's not bad.
> If the garbage collection is done in a very simplistic manner it doesn't sound
> too hard... The biggest problem is probably crash-
On Monday, June 25, 2012 05:34:13 PM Robert Haas wrote:
> On Mon, Jun 25, 2012 at 9:43 AM, Andres Freund wrote:
> >> > The only theoretical way I see around that problem would be to
> >> > postpone all relation unlinks until everything that could possibly
> >> > read them has finished. Doesn't
On Mon, Jun 25, 2012 at 12:42 PM, Kevin Grittner wrote:
> Perhaps the first question is: Is there enough in common between
> logical replication (and all the topologies that might be created
> with that) and the proposal on the table (which seems to be based
> around one particular topology with a
Robert Haas wrote:
> I bet for a lot of replication systems, the answer is "do a full
> resync". In other words, we either forbid the operation outright
> when the table is enabled for logical replication, or else we emit
> an LCR that says, in effect, "transaction 12345 monkeyed with the
> tab
On Mon, Jun 25, 2012 at 9:43 AM, Andres Freund wrote:
>> > The only theoretical way I see around that problem would be to postpone
>> > all relation unlinks until everything that could possibly read them has
>> > finished. Doesn't seem too alluring although it would be needed if we
>> > ever move
On Monday, June 25, 2012 03:08:51 AM Robert Haas wrote:
> On Sun, Jun 24, 2012 at 5:11 PM, Andres Freund wrote:
> > There are some interesting problems related to locking and snapshots
> > here. Not sure if they are resolvable:
> >
> > We need to restrict SnapshotNow to represent the view it
On Sun, Jun 24, 2012 at 5:11 PM, Andres Freund wrote:
> There are some interesting problems related to locking and snapshots here. Not
> sure if they are resolvable:
>
> We need to restrict SnapshotNow to represent the view it had back wh
On Sun, Jun 24, 2012 at 5:11 PM, Andres Freund wrote:
> There are some interesting problems related to locking and snapshots here. Not
> sure if they are resolvable:
>
> We need to restrict SnapshotNow to represent the view it had back when the
> wal record we're currently decoding had. Otherwis
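
A toy model of why the catalog lookup has to time-travel during decoding,
assuming each catalog row version carries the xids that created and replaced
it; this ignores subtransactions, combocids and everything else that makes the
real problem hard:

from dataclasses import dataclass
from typing import Optional

@dataclass
class CatalogRowVersion:
    value: str            # e.g. a column's type name
    xmin: int             # xid that created this version
    xmax: Optional[int]   # xid that replaced it, or None if still live

pg_attribute_like = [
    CatalogRowVersion("int4", xmin=100, xmax=500),   # column was int4 ...
    CatalogRowVersion("text", xmin=500, xmax=None),  # ... then altered to text
]

def lookup_as_of(versions, record_xid):
    """Return the version that was visible when the WAL record was written."""
    for v in versions:
        if v.xmin <= record_xid and (v.xmax is None or v.xmax > record_xid):
            return v.value
    raise LookupError("no visible version; the catalog row is already gone")

assert lookup_as_of(pg_attribute_like, 300) == "int4"  # decoding an old record
assert lookup_as_of(pg_attribute_like, 600) == "text"  # decoding a newer one
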
On 24 June 2012 22:50, Andres Freund wrote:
> On Sunday, June 24, 2012 11:37:26 PM Simon Riggs wrote:
>> On 24 June 2012 22:11, Andres Freund wrote:
>> > One interesting problem are table rewrites (truncate, cluster, some ALTER
>> > TABLE's) and dropping tables. Because we nudge SnapshotNow to th
On Sunday, June 24, 2012 11:37:26 PM Simon Riggs wrote:
> On 24 June 2012 22:11, Andres Freund wrote:
> > One interesting problem are table rewrites (truncate, cluster, some ALTER
> > TABLE's) and dropping tables. Because we nudge SnapshotNow to the past
> > view it had back when the wal record wa
On 22 June 2012 20:30, Andres Freund wrote:
> The problem making replacement of SnapshotNow.satisfies useful is that there
> is no convenient way to represent subtransactions of the current transaction
> which already have committed according to the TransactionLog but aren't yet
> visible at t
On 24 June 2012 22:11, Andres Freund wrote:
> One interesting problem are table rewrites (truncate, cluster, some ALTER
> TABLE's) and dropping tables. Because we nudge SnapshotNow to the past view it
> had back when the wal record was created we get the old relfilenode. Which
> might have been d
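
A rough sketch of the postponed-unlink idea mentioned earlier in the thread,
under the assumption that each rewrite remembers the LSN at which it happened
and files are only removed once the slowest decoder has passed that point; the
class and field names are made up for illustration:

class DeferredUnlinker:
    def __init__(self):
        self.pending = []  # (rewrite_lsn, old relfilenode)

    def rewrite(self, old_relfilenode, rewrite_lsn):
        # Instead of unlinking immediately, remember when removal becomes safe.
        self.pending.append((rewrite_lsn, old_relfilenode))

    def prune(self, oldest_decoder_lsn):
        """Unlink old files only once every decoder has moved past the rewrite."""
        still_needed, removable = [], []
        for lsn, node in self.pending:
            (removable if lsn <= oldest_decoder_lsn else still_needed).append((lsn, node))
        self.pending = still_needed
        return [node for _, node in removable]

u = DeferredUnlinker()
u.rewrite(old_relfilenode=16384, rewrite_lsn=1000)
assert u.prune(oldest_decoder_lsn=900) == []        # a decoder still needs the old file
assert u.prune(oldest_decoder_lsn=1200) == [16384]  # now it is safe to unlink
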
On Thursday, June 21, 2012 01:41:25 PM Andres Freund wrote:
> Below are two possible implementation strategies for that concept
>
> Advantages:
> * Decoding is done on the master in an asynchronous fashion
> * low overhead during normal DML execution, not much additional code in
> that path
> * c
On Friday, June 22, 2012 03:22:03 PM Andres Freund wrote:
> On Thursday, June 21, 2012 05:40:08 PM Andres Freund wrote:
> > On Thursday, June 21, 2012 03:56:54 PM Florian Pflug wrote:
> > > On Jun21, 2012, at 13:41 , Andres Freund wrote:
> > > > 3b)
> > > > Ensure that enough information in the cat
On Thursday, June 21, 2012 05:40:08 PM Andres Freund wrote:
> On Thursday, June 21, 2012 03:56:54 PM Florian Pflug wrote:
> > On Jun21, 2012, at 13:41 , Andres Freund wrote:
> > > 3b)
> > > Ensure that enough information in the catalog remains by fudging the
> > > xmin horizon. Then reassemble an a
Hi,
On Friday, June 22, 2012 08:48:41 AM Simon Riggs wrote:
> On 21 June 2012 12:41, Andres Freund wrote:
> > 2.)
> > Keep the decoding site up2date by replicating the catalog via normal HS
> > recovery mechanisms.
> >
> > Advantages:
> > * most of the technology is already there
> > * minim
On 21 June 2012 12:41, Andres Freund wrote:
> 2.)
> Keep the decoding site up2date by replicating the catalog via normal HS
> recovery mechanisms.
>
> Advantages:
> * most of the technology is already there
> * minimal overhead (space, performance)
> * no danger of out of sync catalogs
> * no s
Andres Freund Sent: Thursday, June 21, 2012 5:11 PM
> 4.)
> Log enough information in the walstream to make decoding possible using only
> the walstream.
What I understood is that the "enough information" here is catalog data. Is that
right, or is it something else?
> Advantages:
> * Decoding can optionally be do
On Thursday, June 21, 2012 04:05:54 PM Florian Pflug wrote:
> On Jun21, 2012, at 13:41 , Andres Freund wrote:
> > 5.)
> > The actually good idea. Yours?
>
> What about a mixture of (3b) and (4), which writes the data not to the WAL
> but to a separate logical replication log. More specifically:
>
On Thursday, June 21, 2012 03:56:54 PM Florian Pflug wrote:
> On Jun21, 2012, at 13:41 , Andres Freund wrote:
> > 3b)
> > Ensure that enough information in the catalog remains by fudging the xmin
> > horizon. Then reassemble an appropriate snapshot to read the catalog as
> > the tuple in question h
On Thursday, June 21, 2012 05:25:41 PM Simon Riggs wrote:
> On 21 June 2012 16:13, Andres Freund wrote:
> > On Thursday, June 21, 2012 05:05:04 PM Simon Riggs wrote:
> >> On 21 June 2012 15:53, Andres Freund wrote:
> >> >> ISTM we should maintain a lookup table on target system that has the
> >>
On 21 June 2012 16:13, Andres Freund wrote:
> On Thursday, June 21, 2012 05:05:04 PM Simon Riggs wrote:
>> On 21 June 2012 15:53, Andres Freund wrote:
>> >> ISTM we should maintain a lookup table on target system that has the
>> >> minimal required information on it.
>> >
>> > You need just about
On Thursday, June 21, 2012 05:05:04 PM Simon Riggs wrote:
> On 21 June 2012 15:53, Andres Freund wrote:
> >> ISTM we should maintain a lookup table on target system that has the
> >> minimal required information on it.
> >
> > You need just about the whole catalog because the *_out procs might ne
On 21 June 2012 15:53, Andres Freund wrote:
>> ISTM we should maintain a lookup table on target system that has the
>> minimal required information on it.
> You need just about the whole catalog because the *_out procs might need to
> lookup types, operators and such again.
> Unless you want to
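
A minimal illustration of why the *_out procs pull in so much of the catalog
(the registry below is hypothetical, not the real fmgr/syscache API): even a
simple array type's output function has to look up the element type's output
function, so a "small lookup table" quickly grows toward the whole type catalog:

output_funcs = {}   # type oid -> function rendering a value as text

def register(type_oid, fn):
    output_funcs[type_oid] = fn

def datum_to_text(type_oid, value):
    # Every call here stands in for a catalog lookup on the decoding side.
    return output_funcs[type_oid](value)

register(23, str)                                   # int4out-like
register(25, lambda v: v)                           # textout-like
register(1007, lambda v: "{" + ",".join(datum_to_text(23, e) for e in v) + "}")  # int4[]-like

assert datum_to_text(1007, [1, 2, 3]) == "{1,2,3}"
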
Hi,
On Thursday, June 21, 2012 04:39:21 PM Simon Riggs wrote:
> On 21 June 2012 12:41, Andres Freund wrote:
> > 3)
> > Multi-Versioned catalog
> >
> > Below are two possible implementation strategies for that concept
> >
> > Advantages:
> > * Decoding is done on the master in an asynchronous fa
On 21 June 2012 12:41, Andres Freund wrote:
> 3)
> Multi-Versioned catalog
>
> Below are two possible implementation strategies for that concept
>
> Advantages:
> * Decoding is done on the master in an asynchronous fashion
> * low overhead during normal DML execution, not much additional code in
On Jun21, 2012, at 13:41 , Andres Freund wrote:
> 5.)
> The actually good idea. Yours?
What about a mixture of (3b) and (4), which writes the data not to the WAL
but to a separate logical replication log. More specifically:
There's a per-backend queue of change notifications.
Whenever a non-catal
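
A rough sketch of the shape of that proposal, with made-up names: each backend
queues its change notifications locally, and they are appended to a separate
logical replication log at commit time rather than interleaved into the WAL:

from collections import defaultdict

backend_queues = defaultdict(list)   # backend pid -> pending change records
logical_log = []                     # the separate replication log, one entry per commit

def record_change(backend_pid, change):
    backend_queues[backend_pid].append(change)

def commit(backend_pid, xid):
    # On commit, flush this backend's queued changes as one unit.
    logical_log.append({"xid": xid, "changes": backend_queues.pop(backend_pid, [])})

def abort(backend_pid):
    # On abort, the queued changes are simply discarded.
    backend_queues.pop(backend_pid, None)

record_change(101, ("INSERT", "public.t", (1, "a")))
record_change(101, ("UPDATE", "public.t", (1, "b")))
commit(101, xid=734)
assert logical_log == [{"xid": 734,
                        "changes": [("INSERT", "public.t", (1, "a")),
                                    ("UPDATE", "public.t", (1, "b"))]}]
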
On Jun21, 2012, at 13:41 , Andres Freund wrote:
> 3b)
> Ensure that enough information in the catalog remains by fudging the xmin
> horizon. Then reassemble an appropriate snapshot to read the catalog as the
> tuple in question has seen it.
The ComboCID machinery makes that quite a bit harder, I
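
A simplified model of the 3b idea quoted above (nothing here reflects
PostgreSQL's actual vacuum code): holding the xmin horizon back to the oldest
WAL record still being decoded keeps the dead catalog row versions that
decoding still needs from being removed:

def vacuum_catalog(rows, decoding_xmin_horizon):
    """Keep any row version that is live, or dead but not yet past the horizon."""
    return [r for r in rows
            if r["xmax"] is None or r["xmax"] >= decoding_xmin_horizon]

rows = [
    {"value": "int4", "xmin": 100, "xmax": 500},   # replaced by an ALTER TABLE
    {"value": "text", "xmin": 500, "xmax": None},
]

# A decoder still working on records from xid 300 holds the horizon at 300,
# so the old "int4" version survives vacuum and remains readable.
assert len(vacuum_catalog(rows, decoding_xmin_horizon=300)) == 2
# Once decoding has advanced past xid 500, the dead version can be removed.
assert len(vacuum_catalog(rows, decoding_xmin_horizon=600)) == 1
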
Hi Robert, Hi all,
Robert and I talked quite a bit about different methods of providing enough
information to extract tuples from wal. I don't think either of us is yet
really convinced of any individual method, so I want to recap our discussion
in one email so others can chime in without readi