hangc0276 commented on PR #3342:
URL: https://github.com/apache/bookkeeper/pull/3342#issuecomment-1165241800

   > > Since you have now reduced the sizes to 1/3 each, what will happen is 
that writes are blocked when we reach 66% of memory instead of 100%.
   > > I don't really understand why it is a problem for the write cache to 
fluctuate in size. The write cache's job is to make sure we can decouple the 
write path from flushing. The read cache's job is to provide a high hit rate.
   > 
   > I think this is an option for when more real-time data needs to be kept 
in the write cache, instead of triggering a disk read to load it into the read 
cache @merlimat
   
   @StevenLuMT I agree with Matteo. I do not really understand the motivation 
of this PR. I have the following concerns.
   - From your test data, you compared the cached data size, which will 
certainly be larger than before. But the amount of cached data is not our goal; 
the bookie cache hit rate is. We should weigh the gain against the cost: the 
gain is the hit rate, and the cost is the extra memory used (a rough 
configuration sketch follows after this list).
   - What we care about is whether this change is valuable in itself, not just 
whether there is an option to turn it on or off.
   - This change uses 1/3 of the cache memory to keep the last flushed data. 
That data includes all newly written entries, so it may not improve the cache 
hit rate much, because it is cached without a clear access pattern. This is 
also why we evict flushed data from the OS PageCache once it has been written 
to the journal disk. If we really need to keep the last flushed data in memory, 
I would prefer to rely on the OS PageCache.
   - I'm not sure whether you use BookKeeper with Pulsar. If you do, I would 
prefer tuning the Pulsar broker cache rather than the BookKeeper write cache. 
The Pulsar broker cache and the BookKeeper read cache are driven by reads, so 
they will have a higher hit rate.
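   
   For context on the memory split being discussed, below is a minimal sketch 
(not part of this PR) of how the DbLedgerStorage cache sizes can be set through 
`ServerConfiguration`. The property names follow `conf/bk_server.conf`; the 
total budget and the 50/50 split are purely illustrative assumptions, not a 
recommendation.
   
   ```java
   import org.apache.bookkeeper.conf.ServerConfiguration;
   
   public class DbStorageCacheSizing {
       public static void main(String[] args) {
           // Illustrative total direct-memory budget for the DbLedgerStorage caches.
           long totalCacheMb = 2048;
   
           ServerConfiguration conf = new ServerConfiguration();
   
           // Split the budget between the write cache and the read-ahead cache.
           // The 50/50 split here is an assumption for illustration only.
           conf.setProperty("dbStorage_writeCacheMaxSizeMb", totalCacheMb / 2);
           conf.setProperty("dbStorage_readAheadCacheMaxSizeMb", totalCacheMb / 2);
   
           System.out.println("writeCacheMaxSizeMb = "
                   + conf.getLong("dbStorage_writeCacheMaxSizeMb"));
           System.out.println("readAheadCacheMaxSizeMb = "
                   + conf.getLong("dbStorage_readAheadCacheMaxSizeMb"));
       }
   }
   ```
   
   Whatever split is chosen, the thing to measure is the change in cache hit 
rate, not just the amount of data held in memory.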

