Zeugswetter Andreas ADI SD wrote:
First we must run the query in serializable mode and replace the snapshot with a synthetic one, which defines visibility at the time
of the desired transaction.

probably it is a good idea to take a lock on all tables involved to
avoid a vacuum being started on them while the query is running.
Would the xmin exported by that transaction prevent vacuum from removing any tuples still needed for the flashback snapshot?
Sure, and that makes the mentioned lock unnecessary.

The problem is that that transaction only sets its historic snapshot at a later
time, so it is not yet running when vacuum looks at the "global xmin".
So something else needs to hold up the global xmin (see previous post).
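To illustrate the race (a minimal Python sketch, all names invented): vacuum derives its horizon only from the snapshots registered at that instant, so a historic snapshot installed afterwards gets no protection from it:

```python
def global_xmin(active_snapshots, next_xid):
    """Vacuum's horizon: the oldest xmin among currently registered
    snapshots, or next_xid if nothing is running."""
    return min(active_snapshots, default=next_xid)

# The flashback transaction wants to see data as of xid 100, but it
# has not installed its historic snapshot yet when vacuum looks:
active = []                                # nothing registered yet
horizon = global_xmin(active, next_xid=205)
assert horizon == 205                      # vacuum may remove tuples
                                           # the flashback still needs

# Had the flashback transaction already exported xmin = 100,
# vacuum would be held back to that point:
active = [100]
assert global_xmin(active, next_xid=205) == 100
```

The second half is exactly the "exported xmin" case from the question above; the first half is why it does not help on its own.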

I think to make this flashback stuff fly, you'd need to know the earliest xmin that you can still flash back to. Vacuum would advance
that xmin as soon as it starts working. So the case you'd need to
protect against is a race condition when you start a vacuum
and a flashback transaction at the same time. But for that, some simple
semaphore should suffice, together with a well-thought-out ordering of the actions.
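A minimal sketch of that ordering (Python, all names invented): vacuum and a starting flashback transaction serialize on a single lock, so a flashback target xid is either registered before the horizon advances, or rejected outright:

```python
import threading

class FlashbackHorizon:
    def __init__(self, earliest_xmin):
        self.earliest_xmin = earliest_xmin  # oldest xid still flashback-able
        self.registered = []                # xmins of running flashback xacts
        self._lock = threading.Lock()

    def begin_flashback(self, target_xid):
        # Register under the lock, so vacuum cannot advance the
        # horizon past target_xid between the check and the append.
        with self._lock:
            if target_xid < self.earliest_xmin:
                raise ValueError("too old to flash back to")
            self.registered.append(target_xid)

    def vacuum_advance(self, new_xmin):
        # Vacuum may only advance up to the oldest registered flashback.
        with self._lock:
            self.earliest_xmin = min([new_xmin] + self.registered)
            return self.earliest_xmin

h = FlashbackHorizon(earliest_xmin=100)
h.begin_flashback(150)               # registered under the lock
assert h.vacuum_advance(200) == 150  # vacuum cannot advance past it
```

This is only meant to show the ordering; in the backend the "lock" would presumably be the semaphore mentioned above.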

In the long run, you'd probably want to store the commit times of transactions somewhere, and add some GUC that makes vacuum assume
that recently committed transactions (say, in the last hour) are still
active. That would allow the DBA to guarantee that he can always
flash back at least an hour.
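Such a GUC could work roughly like this (a Python sketch with hypothetical names; the window constant stands in for the proposed setting):

```python
import time

FLASHBACK_WINDOW = 3600  # seconds; the "last hour" from above

def vacuum_horizon(commit_times, global_xmin, now):
    """commit_times: {xid: commit timestamp}.  Return the horizon
    vacuum may clean up to: any xid committed inside the window is
    treated as still active."""
    recent = [xid for xid, t in commit_times.items()
              if now - t < FLASHBACK_WINDOW]
    return min([global_xmin] + recent)

now = time.time()
commits = {90:  now - 7200,   # two hours ago: reclaimable
           120: now - 600}    # ten minutes ago: still protected
assert vacuum_horizon(commits, global_xmin=200, now=now) == 120
```

So even with no flashback transaction running, vacuum would never remove tuples written less than an hour ago, which is what makes the guarantee possible.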

greetings, Florian Pflug
