Just thought about this a bit more. If we look for the start of a long-running
transaction in the WAL, we may go too far back into the past only to get a few
entries.

What if we consider a slightly different approach? We assume that a
transaction can be represented as a set of independent operations, which are
applied in the same order on both primary and backup nodes. Then we can do the
following:
1) When the next operation is finished, we assign the transaction the LWM of
the last checkpoint. I.e. we maintain a map [Txn -> last_op_LWM].
2) If "last_op_LWM" of a transaction has not changed between two subsequent
checkpoints, we assign it the special value "UP_TO_DATE".
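To make the bookkeeping concrete, here is a minimal sketch of steps 1) and 2). All class and method names, the long-typed IDs/LWMs, and the sentinel value are hypothetical, not actual Ignite APIs:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the proposed per-transaction LWM bookkeeping (hypothetical names). */
public class TxnLwmTracker {
    /** Sentinel: the transaction has no data missing from the last checkpoint. */
    public static final long UP_TO_DATE = -1L;

    /** The map [Txn -> last_op_LWM] from step 1. */
    private final Map<Long, Long> lastOpLwm = new HashMap<>();

    /** LWM of the most recent checkpoint. */
    private long lastCheckpointLwm;

    /** Step 1: when a transaction finishes an operation, stamp it with the last checkpoint LWM. */
    public void onOperationFinished(long txnId) {
        lastOpLwm.put(txnId, lastCheckpointLwm);
    }

    /**
     * Step 2: at checkpoint time, a transaction whose stamp predates the
     * previous checkpoint did no operations in between, so all its data is
     * already persisted and it is marked UP_TO_DATE.
     */
    public void onCheckpoint(long newLwm) {
        for (Map.Entry<Long, Long> e : lastOpLwm.entrySet()) {
            long lwm = e.getValue();
            if (lwm != UP_TO_DATE && lwm < lastCheckpointLwm)
                e.setValue(UP_TO_DATE);
        }
        lastCheckpointLwm = newLwm;
    }

    /** On commit/rollback the transaction stops being tracked. */
    public void onTxnFinished(long txnId) {
        lastOpLwm.remove(txnId);
    }

    public long lwmOf(long txnId) {
        return lastOpLwm.get(txnId);
    }
}
```

Note that a transaction marked UP_TO_DATE is re-stamped with a real LWM as soon as it performs another operation, so the sentinel only ever covers the quiescent stretch between checkpoints.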

Now at checkpoint time we take the minimal value among the current partition
LWM and the active transactions' LWMs, ignoring "UP_TO_DATE" values. The
resulting value is the final partition counter which we will request from the
supplier node, and we save it to the checkpoint record. When the WAL on the
demander is unwound from this value, it is guaranteed to contain all missing
data of the demander's active transactions.

I.e. instead of tracking the whole active transaction, we track only the part
of the transaction which is possibly missing on the node.
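The checkpoint-time computation described above can be sketched as follows; this is a standalone illustration with hypothetical names, where the counter returned is what the demander would request from the supplier:

```java
import java.util.Collection;

/** Sketch: the rebalance start counter is the minimum of the partition LWM
 *  and the active transactions' last_op_LWM values, skipping UP_TO_DATE. */
public class RebalanceStartCounter {
    /** Sentinel: the transaction has no data missing from the last checkpoint. */
    public static final long UP_TO_DATE = -1L;

    public static long compute(long partitionLwm, Collection<Long> activeTxnLwms) {
        long min = partitionLwm;

        for (long lwm : activeTxnLwms) {
            if (lwm != UP_TO_DATE)      // ignore fully persisted transactions
                min = Math.min(min, lwm);
        }

        return min;
    }
}
```

For example, with a partition LWM of 100 and active transaction LWMs {90, UP_TO_DATE, 120}, the demander would request updates starting from 90, because that is the oldest point at which it may be missing data.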

Will that work?


On Tue, Nov 27, 2018 at 11:19 AM Vladimir Ozerov <voze...@gridgain.com>
wrote:

> Igor,
>
> Could you please elaborate - what is the whole set of information we are
> going to save at checkpoint time? From what I understand this should be:
> 1) List of active transactions with WAL pointers of their first writes
> 2) List of prepared transactions with their update counters
> 3) Partition counter low watermark (LWM) - the smallest partition counter
> before which there are no prepared transactions.
>
> And then we send to the supplier node a message: "Give me all updates
> starting from that LWM plus data for those transactions which were active
> when I failed".
>
> Am I right?
>
> On Fri, Nov 23, 2018 at 11:22 AM Seliverstov Igor <gvvinbl...@gmail.com>
> wrote:
>
>> Hi Igniters,
>>
>> Currently I’m working on possible approaches how to implement historical
>> rebalance (delta rebalance using WAL iterator) over MVCC caches.
>>
>> The main difficulty is that MVCC writes changes on tx active phase while
>> partition update version, aka update counter, is being applied on tx
>> finish. This means we cannot start iteration over the WAL right from the
>> pointer where the update counter was updated, but should also include the
>> updates made by the transaction that updated the counter.
>>
>> These updates may be much earlier than the point where the update counter
>> was updated, so we have to be able to identify the point where the first
>> update happened.
>>
>> The proposed approach includes:
>>
>> 1) preserve list of active txs, sorted by the time of their first update
>> (using WAL ptr of first WAL record in tx)
>>
>> 2) persist this list on each checkpoint (together with TxLog for example)
>>
>> 3) send the whole active tx list (transactions which were in active state
>> at the time the node crashed, an empty list in case of graceful node stop)
>> as a part of the partition demand message.
>>
>> 4) find a checkpoint where the earliest tx exists in the persisted txs and
>> use the saved WAL ptr as a start point, or apply the current approach in
>> case the active tx list (sent on the previous step) is empty
>>
>> 5) start iteration.
>>
>> Your thoughts?
>>
>> Regards,
>> Igor
>
>
