[ https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171447#comment-17171447 ]

Francesco Nigro commented on ARTEMIS-2852:
------------------------------------------

[~kkondzielski]
In addition, this does seem like something that should be fixed in the 
original version too, judging by the comment from [~amp001] on the original 
blog post:

{quote}It's by design in Artemis: if you configure multiple masters, it will 
share load over the masters - this is called cluster load balancing, and it 
is transparent from a client/user perspective. I sent you a link to the 
clustering doc and also a sample deployment diagram with settings for a 
three-master setup in the GitHub discussion thread.

In the docs, "live" is used to describe master nodes and "backup" means 
slave. Essentially you set up three HA pairs in a cluster group (or even 
colocated live/backups). It is a lot easier to set up with UDP discovery, as 
it all self-discovers and self-configures, or you can use JGroups or static 
connectors if needed.

By the looks of your Ansible code you're making a cluster with one master and 
two slaves. You're actually almost there; you just need to make two more 
masters and one more slave.

Or you could go colocated to cut the number of nodes in half, but you have to 
do that in broker.xml. It is probably quicker, though, since you are mostly 
there, to just create two more masters and the extra slave; you can always 
rework it to colocated later.

Once you do that, just check the Artemis logs and make sure you see them all 
join. Update your client URL list if you are using static connectors, or it 
is easier to use the same UDP discovery if it is available.{quote}
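
A minimal sketch of what the live (master) side of one HA pair could look 
like in {{broker.xml}} for such a three-master, UDP-discovered cluster. The 
addresses, ports, and names below are illustrative placeholders following the 
Artemis clustering docs, not settings taken from this issue:

{code:xml}
<core xmlns="urn:activemq:core">
  <connectors>
    <!-- placeholder address; each broker advertises its own -->
    <connector name="artemis">tcp://10.0.0.1:61616</connector>
  </connectors>

  <!-- broadcast this broker's connector over UDP multicast -->
  <broadcast-groups>
    <broadcast-group name="bg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <connector-ref>artemis</connector-ref>
    </broadcast-group>
  </broadcast-groups>

  <!-- discover the other cluster members on the same multicast group -->
  <discovery-groups>
    <discovery-group name="dg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
  </discovery-groups>

  <!-- the cluster connection that load-balances messages across the masters -->
  <cluster-connections>
    <cluster-connection name="my-cluster">
      <connector-ref>artemis</connector-ref>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="dg-group1"/>
    </cluster-connection>
  </cluster-connections>

  <!-- replication HA: this node is a master; its paired backup uses <slave/> -->
  <ha-policy>
    <replication>
      <master/>
    </replication>
  </ha-policy>
</core>
{code}

With static connectors instead of discovery, the clients would list all three 
masters in the connection URL, e.g. 
{{(tcp://host1:61616,tcp://host2:61616,tcp://host3:61616)?ha=true}}.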

> Huge performance decrease between versions 2.2.0 and 2.13.0
> -----------------------------------------------------------
>
>                 Key: ARTEMIS-2852
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>            Reporter: Kasper Kondzielski
>            Priority: Major
>         Attachments: Selection_433.png, Selection_434.png, Selection_440.png, 
> Selection_441.png, Selection_451.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog post in which we 
> test various implementations of replicated queues. The previous version can 
> be found here: [https://softwaremill.com/mqperf/]
> We updated the Artemis binary to 2.13.0, regenerated the configuration file, 
> and applied all the performance tricks you told us about last time. In 
> particular, these were (see the consolidated {{broker.xml}} sketch after 
> the list):
>  * the {{Xmx}} Java parameter bumped to {{16G}} (now bumped to {{48G}})
>  * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G}} 
> (this one we forgot to set, but we suspect that it is not the issue)
>  * {{journal-type}} set to {{MAPPED}}
>  * {{journal-datasync}}, {{journal-sync-non-transactional}} and 
> {{journal-sync-transactional}} all set to false
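> For reference, the journal settings above as a minimal {{broker.xml}} 
> fragment (a sketch, not the exact config from the PR; the {{Xmx}} bump goes 
> in {{artemis.profile}} via {{JAVA_ARGS}}, and {{8G}} is written with a byte 
> suffix, which the broker's size parser accepts):
> {code:xml}
> <core xmlns="urn:activemq:core">
>   <!-- memory-mapped journal instead of NIO/ASYNCIO -->
>   <journal-type>MAPPED</journal-type>
>   <!-- no syncing: higher throughput, weaker durability guarantees -->
>   <journal-datasync>false</journal-datasync>
>   <journal-sync-non-transactional>false</journal-sync-non-transactional>
>   <journal-sync-transactional>false</journal-sync-transactional>
>   <!-- global memory limit across all addresses -->
>   <global-max-size>8GB</global-max-size>
> </core>
> {code}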
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores, 
> 64 GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 
> 4,750 Mbps), and we decided to always run twice as many receivers as 
> senders.
> From our tests it looks like version 2.13.0 does not scale as well with the 
> number of senders and receivers as version 2.2.0 (tested previously). 
> Basically, it is not scaling at all: the throughput stays at almost the 
> same level, whereas previously it grew linearly.
> Here you can find our test results for both versions: 
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about 
> performance tuning, but we are surprised that the same settings as before 
> perform much worse.
> Maybe there is an obvious property we overlooked that should be turned 
> on? 
> All changes between those versions, together with the final configuration, 
> can be found in this merged PR: 
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>  
> Charts showing the machines' usage are in the attachments. Memory consumed 
> by the Artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't 
> bottlenecks either. 
> P.S. I wanted to ask this question on the mailing list/Nabble forum first, 
> but it seems that I don't have permission to do so even though I 
> registered & subscribed. Is that intentional?
>  


