On 1 March 2017 at 10:47, Thomas Munro <thomas.mu...@enterprisedb.com> wrote:

>>> I added a fourth case 'overwhelm.png' which you might find
>>> interesting.  It's essentially one 'burst' followed by a 100% idle
>>> primary.  The primary stops sending new WAL around 50 seconds in, and
>>> then there is no autovacuum, nothing happening at all.  The standby
>>> is still replaying its backlog of WAL, but is sending back
>>> replies only every 10 seconds (because no WAL is arriving, there is
>>> no reason to send replies except the status message timeout, which
>>> could be lowered).  So we see some big steps, and then we finally see
>>> it flat-line around 60 seconds because there is still no new WAL, so
>>> we keep showing the last measured lag.  If new WAL is flushed it will
>>> pop back to 0ish, but until then its last known measurement is ~14
>>> seconds, which I don't think is technically wrong.
>>
>> If I understand what you're saying, "14 secs" would not be seen as the
>> correct answer by our users when the delay is now zero.
>>
>> Solving that is where the keepalives need to come into play. If no new
>> WAL, send a keepalive and track the lag on that.
>
> Hmm.  Currently it works strictly with measurements of real WAL write,
> flush and apply times.  I rather like the simplicity of that
> definition of the lag numbers, and the fact that they move only as a
> result of genuine measured WAL activity.  A keepalive message is never
> written, flushed or applied, so if we had special cases here to show a
> constant 0 or measure keepalive round-trip time when we hit the end of
> known WAL or something like that, the reported lag times for those
> three operations wouldn't be true.  In any real database cluster there
> is real WAL being generated all the time, so after a big backlog is
> finally processed by a standby the "14 secs" won't linger for very
> long, and during the time when you see it, it really is the last
> true measured lag time.
>
> I do see why a new user trying this feature for the first time might
> expect it to show a lag of 0 just as soon as sent LSN =
> write/flush/apply LSN or something like that, but after some
> reflection I suspect that it isn't useful information, and it would be
> smoke and mirrors rather than real data.
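For concreteness, here is a minimal Python sketch of the semantics Thomas is describing, where lag is updated only by genuine measured WAL activity and the last value simply lingers when nothing new arrives (names and structure are illustrative only, not the actual walsender code):

```python
import time

class LagTracker:
    """Illustrative sketch: lag moves only on real WAL apply measurements."""

    def __init__(self):
        self.sent = {}        # lsn -> monotonic time the primary flushed it
        self.last_lag = None  # last genuine measurement, in seconds

    def record_send(self, lsn):
        # Primary notes when it flushed WAL up to this LSN.
        self.sent[lsn] = time.monotonic()

    def record_reply(self, applied_lsn):
        # Only a reply reporting real applied WAL updates the lag.
        # A keepalive (no newly applied LSN) leaves last_lag untouched,
        # so with no new WAL the reported value stays frozen.
        if applied_lsn in self.sent:
            self.last_lag = time.monotonic() - self.sent.pop(applied_lsn)
        return self.last_lag
```

Under this definition, once the backlog is replayed and the primary goes idle, the last measured value (the "14 secs" above) keeps being reported until new WAL is flushed and applied.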

Perhaps I am misunderstanding the way it works.

If the last time WAL was generated the lag was 14 secs, and then
nothing occurs for 2 hours afterwards AND all changes have been
successfully applied, then it should not continue to show 14 secs for
those 2 hours.

IMHO the lag time should drop to zero in a reasonable time and stay at
zero for those 2 hours because there is no current lag.

If we want to show historical lag data, I'm supportive of the idea,
but we must report an accurate current value when the system is busy
and when the system is quiet.
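As a sketch of the behaviour I'm asking for (illustrative names only, not a proposed implementation): once the standby has applied everything the primary has sent, the reported lag should be zero rather than the last stale measurement.

```python
def reported_lag(last_measured_lag, sent_lsn, applied_lsn):
    """Illustrative sketch of the proposed reporting rule.

    last_measured_lag: seconds, from the most recent genuine measurement.
    sent_lsn / applied_lsn: WAL stream positions (assumed names).
    """
    if applied_lsn >= sent_lsn:
        return 0.0            # fully caught up: there is no current lag
    return last_measured_lag  # still replaying: show the real measurement
```

The keepalive round trip could be what drives `applied_lsn` up to `sent_lsn` in a reasonable time when no new WAL is being generated.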

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

