On 7/6/2004 3:58 PM, Simon Riggs wrote:

On Tue, 2004-07-06 at 08:38, Zeugswetter Andreas SB SD wrote:
> - by time - but the time stamp on each xlog record only specifies to the
> second, which could easily be 10 or more commits (we hope....)
>
> Should we use a different datatype than time_t for the commit timestamp,
> one that offers more fine grained differentiation between checkpoints?


Imho, seconds really are sufficient. If you know a more precise position,
you will probably know it from the backend log or an xlog sniffer. With those
you can easily use the TransactionId approach.

TransactionId and timestamp are only sufficient if the transactions are selected in their commit order. Especially in read committed mode, consider this execution:


    xid-1: start
    xid-2: start
    xid-2: update field x
    xid-2: commit
    xid-1: update field y
    xid-1: commit

In this case, the update done by xid-1 depends on the row created by xid-2. So logically xid-2 precedes xid-1, because it made its changes earlier.

So you have to apply the log until you find the commit record of the transaction you want to apply last, and then stamp all transactions that were still in progress at that point as aborted.
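A minimal sketch of that stop rule, using a hypothetical simplified record format (tuples of xid and record kind) rather than real xlog records: replay until the commit record of the target transaction, then everything still in progress is treated as aborted.

```python
# Sketch only: (xid, kind) tuples stand in for real xlog records.
def replay_until(records, target_xid):
    """Replay records up to and including target_xid's commit.
    Returns the records applied and the set of xids that were
    still in progress at that point (to be stamped aborted)."""
    in_progress = set()
    applied = []
    for xid, kind in records:
        applied.append((xid, kind))
        if kind == 'start':
            in_progress.add(xid)
        elif kind in ('commit', 'abort'):
            in_progress.discard(xid)
            if kind == 'commit' and xid == target_xid:
                # Stop: transactions still open here never committed
                # in the recovered timeline, so mark them aborted.
                return applied, set(in_progress)
    return applied, set(in_progress)

# The interleaving from the example above: stopping at xid-2's
# commit leaves xid-1 in progress, so xid-1 is stamped aborted.
log = [
    (1, 'start'),
    (2, 'start'),
    (2, 'change'),   # xid-2: update field x
    (2, 'commit'),
    (1, 'change'),   # xid-1: update field y
    (1, 'commit'),
]
applied, aborted = replay_until(log, target_xid=2)
print(aborted)  # {1}
```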


Jan



OK, thanks. I'll leave the time_t datatype just the way it is.

Best Regards, Simon Riggs


---------------------------(end of broadcast)--------------------------- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly


--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #

