[
https://issues.apache.org/jira/browse/ARTEMIS-2852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170029#comment-17170029
]
Francesco Nigro edited comment on ARTEMIS-2852 at 8/3/20, 2:07 PM:
-------------------------------------------------------------------
[~kkondzielski]
{quote}So, we will probably need to decrease this value down to 32 GB which is
the largest value that supports coops by default, right? {quote}
Yep, and I also see that -XX:+UseStringDeduplication is enabled on 2.13.0 while
it isn't on 2.2.0: it may increase the cost of some GC phases, so it would be
better to drop it on 2.13.0.
I cannot say whether it's the cause of the scalability issue, but I think that a
proper apples-to-apples comparison requires a similar (or the same)
configuration.
> Huge performance decrease between versions 2.2.0 and 2.13.0
> -----------------------------------------------------------
>
> Key: ARTEMIS-2852
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2852
> Project: ActiveMQ Artemis
> Issue Type: Bug
> Reporter: Kasper Kondzielski
> Priority: Major
> Attachments: Selection_433.png, Selection_434.png, Selection_440.png,
> Selection_441.png
>
>
> Hi,
> Recently, we started to prepare a new revision of our blog post in which we
> test various implementations of replicated queues. The previous version can be
> found here: [https://softwaremill.com/mqperf/]
> We updated the artemis binary to 2.13.0, regenerated the configuration file
> and applied all the performance tricks you told us about last time. In
> particular, these were:
> * the {{Xmx}} java parameter bumped to {{16G}} (now bumped to {{48G}})
> * in {{broker.xml}}, the {{global-max-size}} setting changed to {{8G}} (this
> one we forgot to set, but we suspect that it is not the issue)
> * {{journal-type}} set to {{MAPPED}}
> * {{journal-datasync}}, {{journal-sync-non-transactional}} and
> {{journal-sync-transactional}} all set to false
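> For reference, the journal settings above map onto a {{broker.xml}} fragment
> like the following (element names as in the Artemis configuration; the
> {{global-max-size}} value is written out in bytes):
> {code:xml}
> <core xmlns="urn:activemq:core">
>   <!-- memory-mapped journal, with all syncing disabled -->
>   <journal-type>MAPPED</journal-type>
>   <journal-datasync>false</journal-datasync>
>   <journal-sync-non-transactional>false</journal-sync-non-transactional>
>   <journal-sync-transactional>false</journal-sync-transactional>
>   <!-- 8 GiB global memory limit for all addresses -->
>   <global-max-size>8589934592</global-max-size>
> </core>
> {code}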
> Apart from that, we changed the machine type we use to r5.2xlarge (8 cores,
> 64 GiB memory, network bandwidth up to 10 Gbps, storage bandwidth up to 4,750
> Mbps), and we decided to always run twice as many receivers as senders.
> From our tests it looks like version 2.13.0 does not scale as well, with the
> increase of senders and receivers, as version 2.2.0 (previously tested).
> Basically, it is not scaling at all: the throughput stays at almost the same
> level, while previously it used to grow linearly.
> Here you can find our tests results for both versions:
> [https://docs.google.com/spreadsheets/d/1kr9fzSNLD8bOhMkP7K_4axBQiKel1aJtpxsBCOy9ugU/edit?usp=sharing]
> We are aware that there is now a dedicated page in the documentation about
> performance tuning, but we are surprised that the same settings as before
> perform much worse.
> Maybe there is an obvious property that we overlooked and that should be
> turned on?
> All changes between those versions together with the final configuration can
> be found on this merged PR:
> [https://github.com/softwaremill/mqperf/commit/6bfae489e11a250dc9e6ef59719782f839e8874a]
>
> Charts showing the machines' usage are in the attachments. Memory consumed by
> the artemis process didn't exceed ~16 GB. Bandwidth and CPU weren't
> bottlenecks either.
> p.s. I wanted to ask this question on the mailing list/nabble forum first,
> but it seems that I don't have permission to do so even though I registered &
> subscribed. Is that intentional?
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)