adoroszlai commented on code in PR #3615:
URL: https://github.com/apache/ozone/pull/3615#discussion_r928714072
##########
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java:
##########
@@ -181,6 +181,31 @@ public void incrementCount(List<Long> txIDs)
}
}
+ /**
+ * {@inheritDoc}
+ *
+ * @throws IOException
+ */
+ @Override
+ public int resetCount(List<Long> txIDs) throws IOException {
+ List<Long> failedTransactions = getFailedTransactions().stream()
+ .map(DeletedBlocksTransaction::getTxID).collect(Collectors.toList());
+ if (txIDs != null && !txIDs.isEmpty()) {
+ failedTransactions = failedTransactions.stream().filter(txIDs::contains)
+ .collect(Collectors.toList());
+ }
Review Comment:
> why the second stream is not allowed? These two streams are independent
processes I think
It is allowed, but unnecessary. A single stream pipeline can chain multiple
intermediate operations (`filter()`, `map()`, etc.) before the terminal one:
```suggestion
Stream<Long> stream = getFailedTransactions().stream()
.map(DeletedBlocksTransaction::getTxID);
if (txIDs != null && !txIDs.isEmpty()) {
stream = stream.filter(txIDs::contains);
}
List<Long> failedTransactions = stream.collect(Collectors.toList());
```
This is better than the `Set`-based variant when the filter is expected to drop
many items, since it avoids materializing the larger intermediate collection.
However, since this command is not performance-critical, both solutions are
fine.
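To make the pattern above concrete, here is a minimal, self-contained sketch of the same "conditionally add an intermediate operation" idiom. The class and method names (`ConditionalFilterDemo`, `filterTxIDs`) are hypothetical stand-ins, and the plain `List<Long>` replaces the real `getFailedTransactions()` call, which is not reproduced here:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ConditionalFilterDemo {

    // Hypothetical stand-in for the suggested code: the input list plays
    // the role of the txIDs mapped from getFailedTransactions().
    static List<Long> filterTxIDs(List<Long> allTxIDs, List<Long> txIDs) {
        Stream<Long> stream = allTxIDs.stream();
        // filter() is an intermediate operation; attaching it conditionally
        // keeps everything in one pipeline with one terminal collect().
        if (txIDs != null && !txIDs.isEmpty()) {
            stream = stream.filter(txIDs::contains);
        }
        return stream.collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> all = Arrays.asList(1L, 2L, 3L, 4L);
        System.out.println(filterTxIDs(all, Arrays.asList(2L, 4L))); // [2, 4]
        System.out.println(filterTxIDs(all, null));                  // [1, 2, 3, 4]
    }
}
```

Note that reassigning the `Stream` variable is safe here because the pipeline has not been consumed yet; only the final `collect()` triggers evaluation.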
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]