hangc0276 commented on code in PR #3940:
URL: https://github.com/apache/bookkeeper/pull/3940#discussion_r1182024031


##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/storage/ldb/SingleDirectoryDbLedgerStorage.java:
##########
@@ -943,9 +943,6 @@ public Iterable<Long> getActiveLedgersInRange(long firstLedgerId, long lastLedge
 
     @Override
     public void updateEntriesLocations(Iterable<EntryLocation> locations) throws IOException {
-        // Trigger a flush to have all the entries being compacted in the db storage
-        flush();
-
         entryLocationIndex.updateLocations(locations);

Review Comment:
   Looking back at the whole transaction's steps: flushing the current ledger storage write cache is presumably meant to ensure the remaining data has been flushed into the entry log file before the entries' index is updated in RocksDB. However, both the transactional compactor and the entry log compactor already trigger flush operations that ensure the remaining data is flushed into the entry log file. So I think the flush of the current ledger storage at this step is unnecessary, and it has a noticeable throughput impact.
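   To illustrate the ordering argument, here is a minimal sketch (class and method names are hypothetical stand-ins, not the actual BookKeeper code): the compactor flushes the entry log before calling `updateEntriesLocations`, so the data is already durable when the index update runs, and a second flush inside `updateEntriesLocations` would only add redundant I/O.

```java
import java.util.List;

// Hypothetical, simplified model of the compaction flow; names are
// illustrative, not the real BookKeeper classes.
public class CompactionFlowSketch {
    static int flushCount = 0;

    // Stand-in for the entry log flush that both compactors already perform.
    static void flushEntryLog() {
        flushCount++;
    }

    // Simplified updateEntriesLocations: with this patch it only updates the
    // RocksDB index and no longer triggers its own flush.
    static void updateEntriesLocations(List<Long> locations) {
        // entryLocationIndex.updateLocations(locations) in the real code
    }

    // The compactor flushes the remaining data first, then updates the index,
    // so entry log durability is already guaranteed at the index-update step.
    static void compact(List<Long> locations) {
        flushEntryLog();                   // already done by the compactors
        updateEntriesLocations(locations); // safe: data is on disk
    }

    public static void main(String[] args) {
        compact(List.of(1L, 2L, 3L));
        System.out.println("flushes=" + flushCount);
    }
}
```

In this simplified model a single flush per compaction pass suffices, which is the basis for removing the extra `flush()` call.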



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
