[ 
https://issues.apache.org/jira/browse/ARTEMIS-717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15490936#comment-15490936
 ] 

Michael Brown commented on ARTEMIS-717:
---------------------------------------

Our system has a notion of groupings among records: lifecycle events relating 
to the same group should be processed by the same service instance, both for 
efficiency and to avoid race hazards. We use this group identity as the value 
of JMSXGroupID.

We expect tens of thousands of record groups to be processed each day 
(hundreds of thousands of lifecycle messages, but fewer distinct groups). That 
isn't many on its own, but over the life of a long-running broker that never 
expires its mappings, it adds up to a large heap.

We can reasonably estimate at least 100 bytes of overhead per group/consumer 
mapping in Artemis (16 bytes for the String object + 48 for its char array + 
16 for Artemis' own string wrapper + 16 for the binding object + 16 of hashmap 
overhead), and we will have approximately 20 queues processing these messages 
via diverts. So 100 * 10,000 * 365 * 20 = 7.3 GB for a single broker instance 
after one year, for group mappings alone, and my estimates here are frankly low.

> Use hash buckets for JMSXGroupID to reduce system impact when there are lots 
> of groups
> --------------------------------------------------------------------------------------
>
>                 Key: ARTEMIS-717
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-717
>             Project: ActiveMQ Artemis
>          Issue Type: New Feature
>          Components: Broker
>            Reporter: Michael Brown
>
> Currently Artemis maintains an association of JMSXGroupID to consumer on a 
> one-to-one basis, putting a high burden on the broker.
> The sibling project ActiveMQ has solved this with the use of hash buckets. 
> I suggest that Artemis do the same.
> https://issues.apache.org/jira/browse/AMQ-439
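A minimal sketch of the hash-bucket idea referenced above (this is an 
illustration of the technique from AMQ-439, not Artemis's actual 
implementation; the class and method names are hypothetical): group IDs are 
hashed into a fixed number of buckets, each pinned to a consumer, so memory 
is O(buckets) rather than O(groups).

```java
// Hash-bucket group routing: a fixed-size table replaces the unbounded
// per-group map, while still sending every message with the same
// JMSXGroupID to the same consumer.
public class GroupBuckets {
    private final int[] bucketToConsumer; // fixed-size bucket table

    GroupBuckets(int buckets, int consumers) {
        bucketToConsumer = new int[buckets];
        for (int b = 0; b < buckets; b++) {
            bucketToConsumer[b] = b % consumers; // static round-robin pinning
        }
    }

    /** Messages with the same JMSXGroupID always map to the same consumer. */
    int consumerFor(String jmsxGroupId) {
        // Mask to a non-negative value before taking the modulus.
        int bucket = (jmsxGroupId.hashCode() & 0x7fffffff) % bucketToConsumer.length;
        return bucketToConsumer[bucket];
    }
}
```

The trade-off is that unrelated groups sharing a bucket are serialized onto 
one consumer, but heap usage stays bounded regardless of how many groups the 
broker ever sees.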



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
