[
https://issues.apache.org/jira/browse/BOOKKEEPER-461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13503945#comment-13503945
]
Ivan Kelly commented on BOOKKEEPER-461:
---------------------------------------
[~hustlmsp] The new code moves the cache cleanup from being synchronous to
asynchronous, i.e. if the read-ahead cache is getting too full, it schedules a
flush. This avoids blocking the many other requests trying to add to the cache
before it has been cleaned up. In the post-patch graph you attached, what are
the big dips?
The ReadAheadCache seems to be solving a very common problem. Perhaps we could
leverage Guava's CacheBuilder, which does the same thing and cleans up as it
goes, rather than removing a lot of entries in one pass as
collectOldCacheEntries does.
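To illustrate the "clean up as it goes" idea: a minimal sketch using the JDK's
LinkedHashMap (Guava's CacheBuilder offers the same size-based eviction with
more features). This is a hypothetical example, not BookKeeper code; the class
name and capacity are made up. Once capacity is exceeded, each put() evicts a
single stale entry, so no insert ever pays for a bulk sweep like
collectOldCacheEntries:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical size-bounded cache: evicts the eldest entry on each insert
// once capacity is exceeded, i.e. cleanup cost is amortized across puts
// instead of being paid in one large batch removal.
class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    BoundedCache(int maxEntries) {
        // accessOrder = true gives LRU-style ordering rather than insertion order
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Invoked after every put(); returning true evicts exactly one entry.
        return size() > maxEntries;
    }
}
```

Guava's CacheBuilder.newBuilder().maximumSize(n).build() behaves similarly,
with the added benefit of doing its bookkeeping incrementally during normal
read/write operations.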
> Delivery throughput degrades when there are lots of publishers w/ high
> traffic.
> -------------------------------------------------------------------------------
>
> Key: BOOKKEEPER-461
> URL: https://issues.apache.org/jira/browse/BOOKKEEPER-461
> Project: Bookkeeper
> Issue Type: Bug
> Reporter: Sijie Guo
> Assignee: Sijie Guo
> Fix For: 4.2.0
>
> Attachments: BOOKKEEPER-461.diff, BOOKKEEPER-461.diff,
> BOOKKEEPER-461.diff, pub_sub_multithreads.png, pub_sub_singlethread.png
>
>
> When benchmarking the hub server, we found that delivery throughput
> degrades when there are lots of publishers publishing messages, and that
> delivery throughput goes back up when there are no publishes.
> This issue arises because the ReadAheadCache runs only a single thread. When
> the netty workers are busy handling publish requests, they push lots of
> messages onto the ReadAheadCache's queue to be added to the read-ahead
> cache, so the cache thread is kept busy updating keys.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira