[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?focusedWorklogId=554476&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-554476
 ]

ASF GitHub Bot logged work on ARTEMIS-3117:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Feb/21 21:24
            Start Date: 18/Feb/21 21:24
    Worklog Time Spent: 10m 
      Work Description: sebthom edited a comment on pull request #3459:
URL: https://github.com/apache/activemq-artemis/pull/3459#issuecomment-781633827


   @jbertram @ehsavoie I have three questions:
   1. Why was the `SSLContextFactory` lookup implemented via a ServiceLoader 
with a priority-based selection algorithm? This seems unnecessarily 
complicated. Why was it not implemented so that, e.g., the fully qualified 
class name of the context factory to use can be specified in the transport 
settings of `broker.xml` instead?
   2. Why does `CachingSSLContextFactory` hold its cache in a static map 
rather than an instance field? 
https://github.com/apache/activemq-artemis/blob/52263663c48082227916cc3477f8892d9f10134b/artemis-core-client/src/main/java/org/apache/activemq/artemis/core/remoting/impl/ssl/CachingSSLContextFactory.java#L35
   3. The transport settings allow specifying an `sslContext` value, which is 
essentially a fixed cache key used to set/look up the SSLContext and is 
evaluated only by `CachingSSLContextFactory`. Since `CachingSSLContextFactory` 
is already capable of calculating a cache key from the keystore/truststore 
paths, why would one want to specify a cache key manually? I can only see the 
disadvantage that a manually specified cache key can lead to unexpected 
results/issues if multiple acceptors/connectors are configured to use the same 
cache key but configure different key-/truststores.
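   To make question 2 concrete, here is a minimal sketch of what an 
instance-level cache could look like. The class and method names are 
hypothetical, not the actual Artemis API; the point is only that an instance 
field scopes cached contexts to the factory's lifetime, whereas a static map 
is shared process-wide:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch for question 2: a factory holding its cache in an
// instance field instead of a static map. Names are illustrative only,
// not the actual Artemis CachingSSLContextFactory API.
class InstanceCachingSslContextFactory {
    // Instance-level cache: each factory has its own map, so cached
    // contexts become collectible together with the factory instance.
    private final Map<String, Object> contextCache = new ConcurrentHashMap<>();

    // The cache key is derived from the store paths, which (as question 3
    // notes) the caching factory can already compute on its own.
    Object getSslContext(String keystorePath, String truststorePath,
                         Supplier<Object> contextLoader) {
        String key = keystorePath + "|" + truststorePath;
        return contextCache.computeIfAbsent(key, k -> contextLoader.get());
    }
}
```

   With this shape, two factory instances configured with the same store 
paths would each load their own context, while repeated lookups on one 
instance reuse the cached one.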


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 554476)
    Time Spent: 4h 20m  (was: 4h 10m)

> Performance degradation when switching from JDK8 to JDK11
> ---------------------------------------------------------
>
>                 Key: ARTEMIS-3117
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
>             Project: ActiveMQ Artemis
>          Issue Type: Improvement
>          Components: Broker
>    Affects Versions: 2.16.0
>         Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>            Reporter: Sebastian T
>            Priority: Major
>         Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>          Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation, which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms-based health-check application. Here is a graph that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}.
>  I currently cannot pinpoint the exact line number.
>  
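The lock-time figures quoted above would be consistent with contended access 
to a shared cache. As a generic illustration only (not the actual Artemis 
code), compare a lookup serialized by a coarse `synchronized` monitor with a 
`ConcurrentHashMap.computeIfAbsent` lookup, where cache hits do not contend 
on a map-wide lock:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic illustration (not the actual Artemis code) of the kind of locking
// hotspot described above: the coarse variant takes one shared monitor on
// every lookup, even for cache hits, while ConcurrentHashMap only blocks
// threads that race on the same absent key.
class LookupStyles {
    private final Map<String, Object> coarse = new HashMap<>();
    private final Map<String, Object> fine = new ConcurrentHashMap<>();

    // Every caller contends on the same monitor, hits included.
    synchronized Object coarseLookup(String key) {
        return coarse.computeIfAbsent(key, k -> new Object());
    }

    // Cache hits proceed without acquiring a map-wide lock.
    Object fineLookup(String key) {
        return fine.computeIfAbsent(key, k -> new Object());
    }
}
```

Whether this is what the profiler is actually showing would need a thread 
dump or a line-level profile to confirm.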



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
