xkrogen commented on PR #4560:
URL: https://github.com/apache/hadoop/pull/4560#issuecomment-1226631416

   I am suggesting that we also modify `QuorumJournalManager#selectInputStreams()` like so:
   ```
      try {
        Collection<EditLogInputStream> rpcStreams = new ArrayList<>();
        selectRpcInputStreams(rpcStreams, fromTxnId, onlyDurableTxns);
        streams.addAll(rpcStreams);
        return;
      } catch (NewerTxnIdException ntie) {
        // normal situation, we requested newer IDs than any journal has. no new streams
        return;
      } catch (IOException ioe) {
        LOG.warn("Encountered exception while tailing edits >= " + fromTxnId +
            " via RPC; falling back to streaming.", ioe);
      }
   ```
   
   I say this mainly because we want to use `NewerTxnIdException` to detect 
when a JN is lagging, right? But if we special-case `sinceTxId == highestTxId + 
1`, then we might not detect the case where a JN is lagging by one txn.
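
   For concreteness, here is a rough sketch of the JournalNode-side check I have in mind, with no special case for `sinceTxId == highestTxId + 1` (this is not the exact patch; the exception's constructor here is just assumed for illustration):
   ```
   // Inside Journal#getJournaledEdits(sinceTxId, maxTxns), roughly:
   long highestTxId = getHighestWrittenTxId();
   if (sinceTxId > highestTxId) {
     // Even a JN that is only one txn behind should signal that it cannot
     // serve the request, so the caller can wait for journals that can.
     throw new NewerTxnIdException("Requested txn ID " + sinceTxId
         + " is higher than the highest written txn ID " + highestTxId);
   }
   ```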
   
   So let's say we have JN0 with highest txn ID 1, JN1 with highest txn ID 2, and
JN2 with highest txn ID 2 (so JN0 lags by one txn). Now we send out
`getJournaledEdits()` RPCs. JN2 happens to respond slowly, so we only get responses
from JN0 and JN1. Now it looks like only txn 1 is durably committed and we never
load txn 2 -- the same issue you described in your original bug description.
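
   To make that arithmetic concrete, here is a toy, self-contained sketch (not the real `QuorumJournalManager` code; the class and variable names are invented) of the majority-based counting in that situation:
   ```
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.Collections;
   import java.util.List;

   public class DurableTxnCountExample {
     public static void main(String[] args) {
       int numJournals = 3;
       int majoritySize = numJournals / 2 + 1; // 2 of 3

       // Responses counted before the quorum wait returns: JN0 has 1 txn
       // (it is lagging), JN1 has 2 txns; slow JN2 is never counted.
       List<Integer> txnCounts = new ArrayList<>(Arrays.asList(1, 2));

       // Take the highest count that a majority of the responders can serve:
       // sort ascending and pick the element majoritySize from the end.
       Collections.sort(txnCounts);
       int durableTxnCount = txnCounts.get(txnCounts.size() - majoritySize);

       // Prints 1: txn 2 is actually on JN1 and JN2, but because lagging JN0's
       // answer counted as a valid response, we conclude only txn 1 is durable.
       System.out.println("Durable txn count: " + durableTxnCount);
     }
   }
   ```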
   
   But if JN0 throws `NewerTxnIdException` instead, `AsyncLoggerSet` will _ignore_
its response, so we wait for responses from JN1 and JN2 and correctly see that
everything up to txn 2 is durably committed.
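
   And here is a companion sketch of that quorum-wait idea (again a toy, not the real `AsyncLoggerSet`/`QuorumCall` code): an exceptional response does not count toward the quorum, so we keep waiting instead of settling for the lagging answer:
   ```
   import java.util.ArrayList;
   import java.util.Collections;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;

   public class QuorumWaitExample {
     public static void main(String[] args) {
       int majoritySize = 2; // 2 of 3 journals

       // Simulated responses in arrival order; null stands in for
       // "threw NewerTxnIdException". JN0 is lagging; slow JN2 still gets
       // counted because JN0's exception did not satisfy the quorum.
       Map<String, Integer> arrivals = new LinkedHashMap<>();
       arrivals.put("JN0", null);
       arrivals.put("JN1", 2);
       arrivals.put("JN2", 2);

       List<Integer> successes = new ArrayList<>();
       for (Map.Entry<String, Integer> e : arrivals.entrySet()) {
         if (e.getValue() == null) {
           continue; // exception: ignored, does not count toward the quorum
         }
         successes.add(e.getValue());
         if (successes.size() >= majoritySize) {
           break; // enough successful responses
         }
       }

       Collections.sort(successes);
       int durableTxnCount = successes.get(successes.size() - majoritySize);
       // Prints 2: with JN0 ignored, JN1 and JN2 form the quorum and both
       // have txn 2, so it is correctly seen as durable.
       System.out.println("Durable txn count: " + durableTxnCount);
     }
   }
   ```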
   
   Does this clarify things? I agree that the situation I describe should be rare,
but I feel we can solve it cleanly by using `NewerTxnIdException`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

