> Jeff Davis wrote:
> On Fri, 2011-01-21 at 18:52 -0600, Kevin Grittner wrote:
>> My assumption is that when we have a safe snapshot (which should
>> be pretty close to all the time), we immediately provide it to any
>> serializable transaction requesting a snapshot, except it seems to
>> make sense to use the new DEFERRABLE mode to mean that you want to
>> use the *next* one to arrive.
> How would it handle this situation:
> 1. Standby has safe snapshot S1
> 2. Primary does a VACUUM which removes some stuff visible in S1
> 3. Standby can't replay the VACUUM because it still has S1, but
> also can't get a new S2 because the WAL needed for that is behind
> the VACUUM
> So, S1 needs to be discarded. What do we do on the standby while
> there is no safe snapshot? I suppose throw errors -- I can't think
> of anything else.
We could wait for the next safe snapshot to arrive.  I don't know how
often that combination would occur, particularly in a situation where
there were long-running serializable read-write transactions on the
master which would prevent a new safe snapshot from being generated.
It seems as though a long-running transaction on the master would
also block vacuum activity.  I'm not sure how we can *really* know
the frequency without field experience.
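To make the dilemma concrete, here is a toy sketch (not PostgreSQL code; all names are invented) of the choice the standby faces when replaying a VACUUM record that removed rows still visible to its only safe snapshot: either discard the snapshot and make new serializable transactions wait for the next safe one, or cancel.

```python
# Toy model only: the real standby deals with WAL replay, not xids in
# a dataclass, but the decision structure is the same.
from dataclasses import dataclass


@dataclass
class SafeSnapshot:
    xmin: int  # oldest transaction id this snapshot can still see


def apply_vacuum_record(safe_snap, vacuum_cutoff_xid, wait_for_next):
    """Return the snapshot available to new serializable transactions
    after replaying a VACUUM whose cleanup cutoff is vacuum_cutoff_xid.

    wait_for_next models the "wait for the next safe snapshot" option:
    the conflicting snapshot is discarded and callers block until a new
    safe snapshot (S2) arrives.  Otherwise the only alternative is to
    cancel, modeled here as an error.
    """
    if vacuum_cutoff_xid <= safe_snap.xmin:
        return safe_snap  # no conflict; replay proceeds, S1 survives
    if wait_for_next:
        return None       # S1 discarded; serializable callers wait for S2
    raise RuntimeError("no safe snapshot available; cancel transaction")
```

The point of the sketch is that once the VACUUM cutoff passes the snapshot's `xmin`, there is no option that keeps S1: the standby can only block or error until a replacement arrives.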
>> This would effectively cause the point in time which was visible
>> to serializable transactions to lag behind what is visible to
>> other transactions by a variable amount, but would ensure that a
>> serializable transaction couldn't see any serialization anomalies.
>> It would also be immune to serialization failures from SSI logic;
>> but obviously, standby-related cancellations would be in play. I
>> don't know whether the older snapshots would tend to increase the
>> standby-related cancellations, but it wouldn't surprise me.
> I'm also a little concerned about the user-understandability here.
> Is it possible to make the following guarantees in this approach:
> 1. If transactions are completing on the primary, new snapshots
> will be taken on the standby; and
The rules there are rather complicated.  Based on previous responses
to posts where I've gotten into that detail, I fear that specifying
it with complete accuracy would cause so many eyes to glaze over that
nobody would get to the end of the description.  I will do it if
anybody asks, but without that I'll just say that the conditions for
blocking a safe snapshot in a mix of short-lived read-write
transactions are so esoteric that I expect that they would be
uncommon in practical use.  On the other hand, one long-running
read-write transaction could block generation of a new safe snapshot
indefinitely.  Transactions declared as read-only or running at an
isolation level other than serializable would have no impact on
generation of a safe snapshot.
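The last two sentences of that paragraph can be sketched as a predicate. This is a deliberate simplification (the full rules also involve read-write conflict tracking, which is the part described as esoteric); it only captures which transactions can block safe-snapshot generation at all.

```python
# Simplified sketch: only active serializable read-write transactions
# can block generation of a safe snapshot.  Read-only transactions and
# transactions at other isolation levels are ignored entirely.
def blocks_safe_snapshot(txn):
    """txn is a dict with 'isolation', 'read_only', and 'active' keys."""
    return (txn['isolation'] == 'serializable'
            and not txn['read_only']
            and txn['active'])


def snapshot_candidate_unblocked(concurrent_txns):
    return not any(blocks_safe_snapshot(t) for t in concurrent_txns)
```

Under this model a single long-running serializable read-write transaction keeps the predicate false indefinitely, which is exactly the problem case named above.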
> 2. If no write transactions are in progress on the primary, then
> the standby will get a snapshot that represents the exact same data
> as on the primary?
A snapshot taken while there are no serializable read-write
transactions active can immediately be declared safe.  Whether such a
snapshot is always available on the standby depends on what sort of
throttling, if any, is used.
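That "immediately safe" case falls out of a small deferral model, sketched here with invented names: a snapshot starts with the set of serializable read-write transactions concurrent with it, and becomes safe once that set drains. (Again simplified; the real rules also consider conflicts among those transactions, not just their completion.)

```python
# Toy deferral model: a snapshot is safe once every serializable
# read-write transaction concurrent with it has finished.  Taken with
# no such transactions active, it is safe from the start.
class DeferredSnapshot:
    def __init__(self, concurrent_rw_serializable_xids):
        self.blockers = set(concurrent_rw_serializable_xids)

    @property
    def safe(self):
        return not self.blockers

    def transaction_finished(self, xid):
        # Called as each blocking transaction commits or rolls back.
        self.blockers.discard(xid)
```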
> That would be fairly easy to explain to users. If there is a
> visibility lag, then we just say "finish the write transactions,
> and progress will be made". And if the system is idle, they should
> see identical data.
Well, unless it's sync rep, you'll always have some latency between
the master and the standby.  And any throttling to control resource
utilization could also cause latency between other transactions and
serializable ones.  But other than that, you're exactly on target.

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)