On Wed, Nov 4, 2015 at 12:43 AM, Andres Freund <and...@anarazel.de> wrote:
> On 2015-11-03 10:23:35 -0500, Robert Haas wrote:
>> On Mon, Nov 2, 2015 at 12:58 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>> > If a transaction holding locks aborts on an otherwise idle server, perhaps
>> > it will take a very long time for a log-shipping standby to realize this.
>> > But I have a hard time believing that anyone who cares about that would be
>> > using log-shipping (rather than streaming) anyway.
>>
>> I'm sure other people here understand this better than me, but I
>> wonder if it wouldn't make more sense to somehow log this data only if
>> something material has changed in the data being logged.
>
> Phew. That doesn't seem easy to measure. I'm doubtful that it's worth
> comparing the snapshot and such, especially in the back branches.
>
> We could maybe add something that we only log a snapshot if XXX
> megabytes have been logged or something. But I don't know which number
> to pick here - and if there's other write activity the price of a
> snapshot record really isn't high.
My first guess on the matter is that we would want an extra condition that depends on max_wal_size: require at least a minimum number of WAL segments generated since the last standby snapshot, perhaps max_wal_size / 16, though that coefficient is clearly a rule of thumb. With the default configuration of 1GB, that would mean waiting for 4 segments (64MB) to be generated before logging a standby snapshot.
--
Michael

--
Sent via pgsql-hackers mailing list (email@example.com)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers