qihongxu edited a comment on issue #2494: ARTEMIS-2224 Reduce contention on LivePageCacheImpl
URL: https://github.com/apache/activemq-artemis/pull/2494#issuecomment-454780254

@franz1981

> 1. the tps are dependent by the most demanding operation in the CPU flamegraph you've posted, ie `ConcurrentAppendOnlyList:.get(index)`

The flamegraph isn't clear without zooming in/out. On top of `ConcurrentAppendOnlyList.get(index)` is `getChunkOf(index, lastIndex)`, since the page is huge and we only consumed messages from it. When we let consumers and producers work together, with producers much faster than consumers, the top of the flamegraph changed to `getValidLastIndex()`, I guess due to the burst of messages.

> If the same test (same configuration) is performed using just 1 consumer I'm expecting (best case scenario) that it would have (3k-6k)/200 tps

Probably higher than that. In other tests using 200 threads, it is the broker's processing ability that becomes the tps bottleneck; the reason for using 200 consumers is to squeeze all the power out of the broker. That is to say, a single consumer might perform better than total/200.

> I suppose that 2 can't be true, so it really depends on the number of cores/threads available to execute such queries to LivePageCache IF such queries are performed in parallel.

I agree. As for the two assumptions, I share your view (1 is right and 2 might not be). So with 400 threads the tps might not be higher than with 200. I could run tests with a single consumer and with 400 consumers tomorrow, using the same config. What do you think? If you have better ideas/test cases for our hypothesis, please let me know :)
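For readers following along: the reason `getChunkOf(index, lastIndex)` can dominate random reads is that a chunked append-only list must first locate the chunk holding an index before it can read the element. Below is a minimal, hypothetical sketch of that structure (it is *not* the actual `ConcurrentAppendOnlyList` from Artemis, and concurrency details like safe publication across threads are deliberately elided); the names `ChunkedAppendOnlyList` and `CHUNK_SIZE` are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a chunked append-only list. The point of interest
// is get(index): before touching the element, it must compute which chunk
// holds the index -- the analogue of the getChunkOf step in the flamegraph.
final class ChunkedAppendOnlyList<T> {
   private static final int CHUNK_SIZE = 32; // elements per chunk (illustrative)
   private final List<Object[]> chunks = new ArrayList<>();
   private final AtomicInteger size = new AtomicInteger();

   public synchronized void add(T element) {
      int index = size.get();
      if (index % CHUNK_SIZE == 0) {
         chunks.add(new Object[CHUNK_SIZE]); // grow by a whole chunk at a time
      }
      chunks.get(index / CHUNK_SIZE)[index % CHUNK_SIZE] = element;
      size.incrementAndGet(); // publish the new element count
   }

   @SuppressWarnings("unchecked")
   public T get(int index) {
      if (index >= size.get()) {
         return null; // not yet appended
      }
      // The chunk-lookup step: every random read pays this cost first.
      Object[] chunk = chunks.get(index / CHUNK_SIZE);
      return (T) chunk[index % CHUNK_SIZE];
   }

   public int size() {
      return size.get();
   }
}
```

In a scenario like the one described (a huge page that is only being consumed), almost every call is a `get(index)`, so the chunk-lookup step naturally sits on top of the profile.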