[ 
https://issues.apache.org/jira/browse/ARTEMIS-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3049:
-------------------------------------
    Description: 
LivePageCacheImpl::getMessage performs a linked-list-like lookup that can be 
rather slow compared to an array lookup.

[https://github.com/apache/activemq-artemis/pull/2494#issuecomment-455086939] 
clearly shows the issue with the current implementation.
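For illustration only (this is not the broker code; the class and message names are hypothetical), a minimal sketch of why a linked lookup loses to an indexed array read: fetching element i of a linked structure walks i nodes from the head, while an array lookup is a single indexed read.

```java
import java.util.LinkedList;
import java.util.List;

public class LookupCost {
    static final int N = 100_000;

    // Linked lookup: get(index) walks nodes from the head, O(index).
    static String linkedLookup(int index) {
        List<String> linked = new LinkedList<>();
        for (int i = 0; i < N; i++) {
            linked.add("msg-" + i);
        }
        return linked.get(index);
    }

    // Array lookup: a single indexed read, O(1).
    static String arrayLookup(int index) {
        String[] array = new String[N];
        for (int i = 0; i < N; i++) {
            array[i] = "msg-" + i;
        }
        return array[index];
    }

    public static void main(String[] args) {
        // Both return the same message, but at very different cost per lookup.
        System.out.println(linkedLookup(N - 1).equals(arrayLookup(N - 1)));
    }
}
```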

The ideal approaches to improve it could be:
 # to replace the chunked list with a copy-on-write array list
 # to use a cursor/iterator API over the chunk list, binding one to each 
consumer, in order to get a linear stride over the live paged messages

Sadly, neither approach seems feasible: the cursor/iterator approach fails 
because the live page cache is accessed anonymously on each message lookup, 
making a 1:1 binding with consumers impossible, while the copy-on-write 
approach fails because of the array copy cost on every append.
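The append cost can be made concrete with a small sketch (a standalone measurement, not broker code): `CopyOnWriteArrayList.add` copies the entire backing array on every append, so building a cache of n messages performs on the order of n^2/2 element copies overall.

```java
import java.util.concurrent.CopyOnWriteArrayList;

public class CowAppendCost {
    // Counts how many elements add() must copy while appending n items:
    // each add() copies size() elements first, so the total is 0+1+...+(n-1).
    static long totalCopied(int n) {
        CopyOnWriteArrayList<Integer> cow = new CopyOnWriteArrayList<>();
        long copiedElements = 0;
        for (int i = 0; i < n; i++) {
            copiedElements += cow.size(); // elements copied by the next add()
            cow.add(i);
        }
        return copiedElements;
    }

    public static void main(String[] args) {
        // n*(n-1)/2 element copies for n appends: quadratic in n.
        System.out.println(totalCopied(10_000)); // prints 49995000
    }
}
```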

 

There is still one case where the array-backed approach can deliver a huge 
speedup in lookup cost: reloading live pages.

A reloaded live page already knows the number of loaded live paged messages, 
making it possible to store them in a plain array and allowing a much faster 
lookup.
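A minimal sketch of that idea (hypothetical names, not the actual LivePageCacheImpl API): because the message count is known up front at reload time, the cache can be backed by a fixed-size array that is filled once and never grows, turning every lookup into an O(1) indexed read.

```java
// Hypothetical sketch of an array-backed cache for a reloaded live page:
// the backing array is sized and filled once at reload time.
public final class ReloadedLivePageCache {
    private final String[] messages; // never grows after construction

    public ReloadedLivePageCache(String[] reloadedMessages) {
        this.messages = reloadedMessages.clone(); // defensive copy
    }

    // O(1) indexed lookup; a chunked list would cost O(messageNumber) here.
    public String getMessage(int messageNumber) {
        return messages[messageNumber];
    }

    public int getNumberOfMessages() {
        return messages.length;
    }

    public static void main(String[] args) {
        ReloadedLivePageCache cache =
                new ReloadedLivePageCache(new String[]{"m0", "m1", "m2"});
        System.out.println(cache.getMessage(2)); // prints m2
    }
}
```

Since the array is immutable after construction, concurrent readers need no synchronization, which is exactly what makes this shape attractive for the reload path.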

 

  was:
LivePageCacheImpl::getMessage performs a linked-list-like lookup that can be 
rather slow compared to an array lookup.

[https://github.com/apache/activemq-artemis/pull/2494#issuecomment-455086939] 
clearly shows the issue with the current implementation.

The ideal approaches to improve it could be:
 # to replace the chunked list with a copy-on-write array list
 # to use a cursor/iterator API over the chunk list, binding one to each 
consumer, in order to get a linear stride over the live paged messages

Sadly, neither approach seems feasible: the cursor/iterator approach fails 
because the live page cache is accessed anonymously on each message lookup, 
making a 1:1 binding with consumers impossible, while the copy-on-write 
approach fails because of the array copy cost on every append.

 

There is still one case where the array-backed approach can deliver a huge 
speedup in lookup cost: reloading live pages.

A reloaded live page already knows the number of loaded live paged messages, 
making it possible to store them in a plain array.

 


> Reduce live page lookup cost
> ----------------------------
>
>                 Key: ARTEMIS-3049
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-3049
>             Project: ActiveMQ Artemis
>          Issue Type: Improvement
>          Components: Broker
>    Affects Versions: 2.16.0
>            Reporter: Francesco Nigro
>            Assignee: Francesco Nigro
>            Priority: Major
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> LivePageCacheImpl::getMessage performs a linked-list-like lookup that can 
> be rather slow compared to an array lookup.
> [https://github.com/apache/activemq-artemis/pull/2494#issuecomment-455086939] 
> clearly shows the issue with the current implementation.
> The ideal approaches to improve it could be:
>  # to replace the chunked list with a copy-on-write array list
>  # to use a cursor/iterator API over the chunk list, binding one to each 
> consumer, in order to get a linear stride over the live paged messages
> Sadly, neither approach seems feasible: the cursor/iterator approach fails 
> because the live page cache is accessed anonymously on each message lookup, 
> making a 1:1 binding with consumers impossible, while the copy-on-write 
> approach fails because of the array copy cost on every append.
>  
> There is still one case where the array-backed approach can deliver a huge 
> speedup in lookup cost: reloading live pages.
> A reloaded live page already knows the number of loaded live paged 
> messages, making it possible to store them in a plain array and allowing a 
> much faster lookup.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
