[ 
https://issues.apache.org/jira/browse/ARTEMIS-2224?focusedWorklogId=185767&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-185767
 ]

ASF GitHub Bot logged work on ARTEMIS-2224:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 16/Jan/19 13:35
            Start Date: 16/Jan/19 13:35
    Worklog Time Spent: 10m 
      Work Description: qihongxu commented on issue #2494: ARTEMIS-2224 Reduce 
contention on LivePageCacheImpl
URL: https://github.com/apache/activemq-artemis/pull/2494#issuecomment-454780254
 
 
   @franz1981 
   > 1. the tps are dependent by the most demanding operation in the CPU 
flamegraph you've posted
   >    ie `ConcurrentAppendOnlyList:.get(index)`
   
   The flamegraph doesn't show it clearly without zooming in. On top of 
`ConcurrentAppendOnlyList.get(index)` is `getChunkOf(index, lastIndex)`, 
since the page is huge and we only consumed messages from it.
   
   When we let consumers and producers work together, with the producers much 
faster than the consumers, the top of the flamegraph changed to `getValidLastIndex()`, 
due to the burst of messages I guess.
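   To make the hot path concrete, here is a minimal sketch (not the actual 
Artemis implementation; all names and the chunk size are illustrative) of an 
append-only list stored as a chain of fixed-size chunks. `get(index)` must 
first locate the chunk holding the element, which is the `getChunkOf`-style 
step that dominates the flamegraph when many consumers read a large live page 
concurrently:

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Hedged sketch of a chunked append-only list, similar in spirit to the
// structure discussed above. Not the Artemis code.
final class ChunkedAppendOnlyList<T> {
    private static final int CHUNK_SIZE = 32; // power of two: cheap index math

    private static final class Chunk<T> {
        final AtomicReferenceArray<T> elements = new AtomicReferenceArray<>(CHUNK_SIZE);
        volatile Chunk<T> next;
    }

    private final Chunk<T> first = new Chunk<>();
    private Chunk<T> tail = first;
    private int size;

    // Appends are single-writer in this sketch, hence synchronized.
    synchronized void add(T e) {
        if (size != 0 && (size & (CHUNK_SIZE - 1)) == 0) {
            Chunk<T> c = new Chunk<>();
            tail.next = c;   // publish the new chunk to readers
            tail = c;
        }
        tail.elements.set(size & (CHUNK_SIZE - 1), e);
        size++;
    }

    // get(index) walks the chunk chain first: the "locate the chunk" step.
    // For a huge page read from the front, this walk is the hot path.
    T get(int index) {
        Chunk<T> c = first;
        for (int hops = index / CHUNK_SIZE; hops > 0; hops--) {
            c = c.next;
        }
        return c.elements.get(index & (CHUNK_SIZE - 1));
    }
}
```

   The point of the sketch: `get(index)` is O(index / CHUNK_SIZE), so under 
many concurrent readers the chunk walk, not the final array read, is where the 
CPU time goes.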
   
   > If the same test (same configuration) is performed using just 1 consumer 
I'm expecting (best case scenario) that it would have (3k-6k)/200 tps
   
   Probably higher than that. In other tests using 200 threads it is the 
broker's processing ability that becomes the tps bottleneck; the reason for 
using 200 consumers is to squeeze all the power out of the broker. That is to 
say, 1 consumer might perform better than total/200.
   
   > I suppose that 2 can't be true, so it really depends on the number of 
cores/threads available to execute such queries to LivePageCache IF such 
queries are performed in parallel.
   
   I agree. On those two assumptions I have the same view as you (1 is right 
and 2 might not be). So if we use 400 threads the tps might not be higher 
than with 200.
   
   I could run a test with a single consumer and with 400 consumers tomorrow 
using the same config. What do you think, or do you have better ideas/test 
cases for our hypothesis? If so, please let me know :)
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 185767)
    Time Spent: 2.5h  (was: 2h 20m)

> Reduce contention on LivePageCacheImpl
> --------------------------------------
>
>                 Key: ARTEMIS-2224
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-2224
>             Project: ActiveMQ Artemis
>          Issue Type: Improvement
>          Components: Broker
>    Affects Versions: 2.7.0
>            Reporter: Francesco Nigro
>            Assignee: Francesco Nigro
>            Priority: Major
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> It has been measured that LivePageCacheImpl operations are a source of 
> contention on the producer side while paging. 
> This contention decreases the scalability of the broker in an evident way 
> while using topics, because the page cache is being accessed concurrently by 
> several producers to ack transactions while messages are being appended.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
