xkrogen commented on code in PR #4560:
URL: https://github.com/apache/hadoop/pull/4560#discussion_r950552483
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java:
##########
@@ -750,10 +750,17 @@ public GetJournaledEditsResponseProto
getJournaledEdits(long sinceTxId,
"is a requirement to fetch journaled edits via RPC. Please enable " +
"it via " + DFSConfigKeys.DFS_HA_TAILEDITS_INPROGRESS_KEY);
}
- if (sinceTxId > getHighestWrittenTxId()) {
+ long highestTxId = getHighestWrittenTxId();
+ if (sinceTxId == highestTxId + 1) {
// Requested edits that don't exist yet; short-circuit the cache here
metrics.rpcEmptyResponses.incr();
return
GetJournaledEditsResponseProto.newBuilder().setTxnCount(0).build();
+ } else if (sinceTxId > highestTxId + 1) {
Review Comment:
I'm still a little wary about special-casing this. What's the concern with
throwing the exception in the `highestTxId + 1` case -- is it the perf
overhead of creating the exception? If so, we could use a cached exception:
```java
private static final NewerTxnIdException NEW_TXN_ID_EXCEPTION =
    new JournaledEditsCache.NewerTxnIdException("...");
...
public GetJournaledEditsResponseProto getJournaledEdits(...) {
  ...
  if (sinceTxId > getHighestWrittenTxId()) {
    throw NEW_TXN_ID_EXCEPTION;
  }
  ...
}
```
The expensive part of throwing an exception is populating the stack trace
(`fillInStackTrace()` runs in the `Throwable` constructor); with a cached
instance that cost is paid only once, at class-initialization time.
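As a side note, one caveat with a cached exception is that its frozen stack trace points at the class initializer rather than the throw site, which can confuse log readers. A minimal, self-contained sketch (hypothetical demo class, not the actual Hadoop code) of sidestepping that by suppressing stack-trace capture entirely via `Throwable`'s protected four-argument constructor:

```java
// Hypothetical demo: a cached, stack-trace-free exception.
// Class and method names here are illustrative, not from the Hadoop source.
public class StacklessDemo {
  static class NewerTxnIdException extends Exception {
    NewerTxnIdException(String msg) {
      // writableStackTrace = false skips fillInStackTrace(), so the
      // constructor is cheap and the instance carries no misleading trace.
      super(msg, null, /*enableSuppression=*/ false, /*writableStackTrace=*/ false);
    }
  }

  // Created once when the class is initialized; rethrown at negligible cost.
  private static final NewerTxnIdException NEW_TXN_ID_EXCEPTION =
      new NewerTxnIdException("Requested sinceTxId is beyond the highest written txId");

  static void check(long sinceTxId, long highestTxId) throws NewerTxnIdException {
    if (sinceTxId > highestTxId) {
      throw NEW_TXN_ID_EXCEPTION;
    }
  }

  public static void main(String[] args) {
    try {
      check(100, 50);
    } catch (NewerTxnIdException e) {
      // No trace was captured: getStackTrace() returns an empty array.
      System.out.println(e.getMessage() + " (frames: " + e.getStackTrace().length + ")");
    }
  }
}
```

Whether the empty trace is acceptable depends on how callers diagnose this path; here the exception is essentially a control-flow signal, so the trace carries little value anyway.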
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]