nickva commented on a change in pull request #3780:
URL: https://github.com/apache/couchdb/pull/3780#discussion_r725272278



##########
File path: src/couch/src/couch_db.erl
##########
@@ -2047,6 +2051,18 @@ t_calculate_start_seq_epoch_mismatch() ->
         ?assertEqual(0, Seq)
     end).
 
+t_calculate_start_seq_shard_move() ->
+    ?_test(begin
+        Db = test_util:fake_db([]),
+        % Sequence when shard was on node1
+        ?assertEqual(2, calculate_start_seq(Db, node1, {2, <<"foo">>})),
+        % Sequence from node1 after the move happened, we reset back to the
+        % start of the epoch on node2 = 10
+        ?assertEqual(10, calculate_start_seq(Db, node1, {16, <<"foo">>})),

Review comment:
       > ?assertEqual(10, calculate_start_seq(Db, node1, {8, <<"foo">>})),
   
   Good question!
   
   `calculate_start_seq(Db, node1, {8, <<"foo">>})` should return 8, because 
sequence 8 was generated on node1 before the shard was moved to node2; the 
move happened at sequence 10. This is essentially the same case as 
`?assertEqual(2, calculate_start_seq(Db, node1, {2, <<"foo">>}))` in the test.
   
   In other words, for sequences 1-9, while the shard lived on `node1`, if the 
_changes feed checkpoint is `{node1, 1}`, `{node1, 2}`, or `{node1, 9}`, 
etc., we can safely resume streaming from `1`, `2`, or `9`.
   
   But then the shard was moved to node2. The first time it was opened on 
`node2`, we recorded a new epoch for `node2` starting at `Seq = 10`. From then 
on, even if the shard copy on `node1` kept getting updates and generated, say, 
sequence `15`, when we see `{node1, 15}` we can't start streaming from `15`; 
we know the shard has been on `node2` since `Seq = 10`, so we start from 
`Seq = 10` instead. Previously we could have started from 0, which is what 
this PR fixes.
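   To make the epoch walk above concrete, here is a minimal Python sketch of 
the idea (not CouchDB's actual Erlang implementation; the epoch representation 
as `(owner_node, first_seq)` pairs, newest first, is an assumption for 
illustration):

```python
def calculate_start_seq(epochs, node, seq):
    """Return a safe start sequence for a checkpointed (node, seq) pair.

    epochs: list of (owner_node, first_seq) pairs, newest epoch first,
    e.g. [("node2", 10), ("node1", 1)] after a shard move at seq 10.
    This is a hypothetical helper sketching the logic discussed above.
    """
    for owner, first_seq in epochs:
        if owner == node and seq >= first_seq:
            # The checkpoint falls inside an epoch owned by that same
            # node, so the sequence is trustworthy: resume from it.
            return seq
        if seq >= first_seq:
            # seq was generated at or after this epoch started, but the
            # epoch belongs to a different node: roll back to the start
            # of this epoch rather than trusting the foreign sequence.
            return first_seq
    # No epoch matched: rescan from the beginning.
    return 0

epochs = [("node2", 10), ("node1", 1)]
calculate_start_seq(epochs, "node1", 2)   # -> 2  (inside node1's epoch)
calculate_start_seq(epochs, "node1", 8)   # -> 8  (still inside node1's epoch)
calculate_start_seq(epochs, "node1", 16)  # -> 10 (reset to node2's epoch start)
```

   The `{node1, 16}` case mirrors the new test assertion: the sequence 
postdates the move, so we fall back to `10`, the start of `node2`'s epoch, 
instead of `0`.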




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
