[ https://issues.apache.org/jira/browse/ARTEMIS-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17865061#comment-17865061 ]

Liviu Citu edited comment on ARTEMIS-4884 at 7/11/24 1:50 PM:
--------------------------------------------------------------

It seems that the certificates have no impact whatsoever.

Anyway, just for your information: I have run some tests today on my laptop 
using *Windows 10 64-bit* (this laptop is not in the datacenter with the Linux 
VM where the other tests are running, so I am ruling out environmental issues).
h3. OpenWire Client

 

With the Classic broker I can run 500 concurrent connections without any 
problems. With the Artemis broker, however, starting from around 400 
connections I encounter connectivity issues. On the server side I see errors 
like:
{noformat}
2024-07-11 15:28:58,541 DEBUG [io.netty.handler.ssl.SslHandler] [id: 
0x9858123b, L:/127.0.0.1:3177 - R:/127.0.0.1:51375] Swallowing a harmless 
'connection reset by peer / broken pipe' error that occurred while writing 
close_notify in response to the peer's close_notify
java.io.IOException: An established connection was aborted by the software in 
your host machine
        at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method) ~[?:?]
        at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:46) 
~[?:?]
        at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:330) 
~[?:?]
        at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:284) ~[?:?]
        at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:259) ~[?:?]
        at 
java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:417) ~[?:?]
        at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:255) 
~[netty-buffer-4.1.111.Final.jar:4.1.111.Final]
        at 
io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) 
~[netty-buffer-4.1.111.Final.jar:4.1.111.Final]
        at 
io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:356)
 ~[netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151)
 [netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) 
[netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
 [netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) 
[netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) 
[netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:994)
 [netty-common-4.1.111.Final.jar:4.1.111.Final]
        at 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
[netty-common-4.1.111.Final.jar:4.1.111.Final]
        at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
 [artemis-commons-2.35.0.jar:2.35.0]{noformat}
and on the client side:
{noformat}
9976 15:28:44.594 ERROR com.finastra.internal.classic.client.JmsConnection 99 
connect - Cannot initialize the Jms Broker connection. 
jakarta.jms.JMSException: Could not connect to broker URL: 
ssl://localhost:3177?keepAlive=true&wireFormat.maxInactivityDuration=0. Reason: 
java.net.ConnectException: Connection refused: no further information
  at 
org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:49)
 ~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:423)
 ~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:253)
 ~[activemq-client-6.1.2.jar:6.1.2]
  at 
com.finastra.internal.classic.client.JmsConnection.connect(JmsConnection.java:88)
 ~[classes/:?]
  at 
com.finastra.internal.classic.client.JmsConnection.connect(JmsConnection.java:65)
 ~[classes/:?]
  at 
com.finastra.internal.classic.client.JmsTester.testOneConnection(JmsTester.java:106)
 ~[classes/:?]
  at 
com.finastra.internal.classic.client.JmsTester.lambda$testConcurrentConnections$0(JmsTester.java:77)
 ~[classes/:?]
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
 [?:?]
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 [?:?]
  at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: java.net.ConnectException: Connection refused: no further information
  at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
  at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) ~[?:?]
  at 
java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:547) 
~[?:?]
  at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:602) ~[?:?]
  at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327) ~[?:?]
  at java.base/java.net.Socket.connect(Socket.java:633) ~[?:?]
  at java.base/sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:304) 
~[?:?]
  at 
org.apache.activemq.transport.tcp.TcpTransport.connect(TcpTransport.java:525) 
~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.transport.tcp.TcpTransport.doStart(TcpTransport.java:488) 
~[activemq-client-6.1.2.jar:6.1.2]
  at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55) 
~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.transport.AbstractInactivityMonitor.start(AbstractInactivityMonitor.java:172)
 ~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.transport.InactivityMonitor.start(InactivityMonitor.java:52)
 ~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:64) 
~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.transport.WireFormatNegotiator.start(WireFormatNegotiator.java:72)
 ~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:64) 
~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:64) 
~[activemq-client-6.1.2.jar:6.1.2]
  at 
org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:403)
 ~[activemq-client-6.1.2.jar:6.1.2]{noformat}
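For reference, this is roughly what my OpenWire tester does. The sketch below is simplified; the class name, URL and connection count are illustrative and not the exact code from our test tool:
{code:java}
import jakarta.jms.Connection;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.activemq.ActiveMQConnectionFactory;

// Minimal sketch of a concurrent OpenWire connection test (illustrative only).
public class OpenWireConnectionTester {

    // Same style of URL as in the logs above; adjust host/port as needed.
    private static final String URL =
            "ssl://localhost:3177?keepAlive=true&wireFormat.maxInactivityDuration=0";

    public static void main(String[] args) throws Exception {
        int connections = 500;
        // Trust/key material is assumed to be configured via the standard
        // javax.net.ssl.* system properties.
        ExecutorService pool = Executors.newFixedThreadPool(connections);
        List<Future<?>> results = new ArrayList<>();
        for (int i = 0; i < connections; i++) {
            results.add(pool.submit(() -> {
                ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(URL);
                try (Connection connection = factory.createConnection()) {
                    connection.start();  // SSL handshake + OpenWire wire format negotiation
                    Thread.sleep(5_000); // keep the connection open for a while
                } catch (Exception e) {
                    System.err.println("Connection failed: " + e.getMessage());
                }
                return null;
            }));
        }
        for (Future<?> f : results) {
            f.get(); // wait for all attempts to finish
        }
        pool.shutdown();
    }
}
{code}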
h3. CORE Client

When running 500 concurrent connections there are no connectivity errors. 
However, I noticed that in this case the client process gets stuck for a while 
before the first connections are established: the CPU is at 100% and the 
process does not make progress for a couple of minutes. Then responses start 
coming in and everything goes back to normal.
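
For comparison, the CORE test does essentially the same thing through the Artemis JMS client. A minimal sketch of a single connection attempt (assuming the artemis-jakarta-client artifact; the SSL parameters on the URL are placeholders for my actual setup):
{code:java}
import jakarta.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Minimal sketch of a single CORE-protocol connection attempt (illustrative only).
public class CoreConnectionTester {

    public static void main(String[] args) throws Exception {
        // sslEnabled plus trust store parameters on the connector URL;
        // the path and password are placeholders.
        String url = "tcp://localhost:61617?sslEnabled=true"
                + "&trustStorePath=/path/to/truststore.jks&trustStorePassword=changeit";
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        try (Connection connection = factory.createConnection()) {
            connection.start(); // same step where the OpenWire variant fails under load
        }
    }
}
{code}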

With the *OpenWire* client, on the other hand, the connection errors come back 
much faster, which is strange. I expected more waiting time before errors are 
thrown.

 

In conclusion, I am not sure what is going on :( but I tend to agree that 
there might be some resource issue involved, as you stated earlier:
{noformat}
 Perhaps Artemis needs more CPU, more memory, faster IO, etc. to deal with the 
concurrent load you're putting on it. It may be that Classic has a built in 
bottleneck due to the way it's designed that's throttling connection attempts 
internally which helps it deal with the load.{noformat}
Anyway, the errors are not encountered on every run. Sometimes I can run 500, 
600, or 700 connections without any errors, while sometimes I get errors from 
the first run. This is the case on the Linux VM as well.

I am using the same certificates in Artemis and Classic, so the problem is not 
there. I have also tested with the OpenSSL provider and get pretty much the 
same behavior. Sometimes it connects a little faster, but I still get errors 
for some connections, as above.

I created these Java testers because I thought there was a concurrency issue 
on the server side, but our actual problem does not involve that many 
concurrent connections. When we replicate the issue with the OpenWire 
negotiation timeout, I do not think we have more than 20 active connections to 
the Artemis broker. The issue is quite random and very hard to replicate 
manually. Basically, we have some automated regression tests using our 
software that behind the scenes launch some broker connections and perform 
various operations (connect, sending/listening for data, etc.). From time to 
time we are unable to start a session (the error we actually get is not during 
the connection step but when creating a session). If it happened during the 
connection step it would not be a problem, because we have a retry mechanism 
there, but we do not have one when creating a session, and I do not think that 
is possible (if the session cannot be created due to a broken pipe then I 
guess the whole connection is broken).

PS: I will start the Java tests again on Linux, playing with the TLS version, 
to see if there is a CPU problem.
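
One straightforward way to pin the TLS version on the client side for such a test is the standard JSSE property (noting this is a generic JSSE setting, not anything Artemis-specific):
{code:java}
// Restrict the default JSSE client protocols to TLSv1.2 for a quick test.
// Must be set before any SSL/TLS classes are initialized, or passed on the
// command line as -Djdk.tls.client.protocols=TLSv1.2
System.setProperty("jdk.tls.client.protocols", "TLSv1.2");
{code}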



> [OpenWire] WireFormatNegotiator timeout during multiple parallel SSL 
> connections
> --------------------------------------------------------------------------------
>
>                 Key: ARTEMIS-4884
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-4884
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>    Affects Versions: 2.35.0
>            Reporter: Liviu Citu
>            Assignee: Justin Bertram
>            Priority: Major
>         Attachments: activemq-clients.zip, broker.xml, classic-client_1.log, 
> classic-client_2.log
>
>
> We are currently in the process of migrating our broker from Classic 5.x to 
> Artemis. We are using the CMS C++ client to connect to the broker, {*}but the 
> issue also replicates with the OpenWire JMS client{*}. Everything works fine 
> with a non-SSL setup (on both Windows and Linux), but we have some issues 
> when using SSL on Linux (SSL on Windows is OK).
> The initial problem started with the following exceptions on the client side:
> {noformat}
> 2024-02-22 09:54:37.377 [ERROR] [activemq_connection.cc:336] CMS exception: 
> Channel was inactive for too long:
>                 FILE: activemq/core/ActiveMQConnection.cpp, LINE: 1293
>                 FILE: activemq/core/ActiveMQConnection.cpp, LINE: 1371
>                 FILE: activemq/core/ActiveMQConnection.cpp, LINE: 
> 573{noformat}
> while on the broker side we had:
> {noformat}
> 2024-03-20 12:29:08,700 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224088: Timeout (10 seconds) on acceptor "netty-ssl-acceptor" during 
> protocol handshake with /10.21.70.53:33053 has occurred.{noformat}
> To bypass these, we added *handshake-timeout=0* to the *netty-ssl-acceptor* 
> acceptor in *broker.xml*.
> However, the exceptions we are now receiving are:
> *+CMS client+*
> {noformat}
> 2024-05-22 09:26:40.842 [ERROR] [activemq_connection.cc:348] CMS exception: 
> OpenWireFormatNegotiator::onewayWire format negotiation timeout: peer did not 
> send his wire format.
>         FILE: activemq/core/ActiveMQConnection.cpp, LINE: 1293
>         FILE: activemq/core/ActiveMQConnection.cpp, LINE: 1371
>         FILE: activemq/core/ActiveMQConnection.cpp, LINE: 573{noformat}
> +*Java client*+
> {noformat}
> jakarta.jms.JMSException: Could not connect to broker URL: 
> ssl://linux_host:61617?keepAlive=true&wireFormat.maxInactivityDuration=0. 
> Reason: java.net.SocketException: Broken pipe
>   at 
> org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:49)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:423)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:353)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:245)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
> .........................................................................
> Caused by: java.net.SocketException: Broken pipe
>   at java.base/sun.nio.ch.NioSocketImpl.implWrite(NioSocketImpl.java:425) 
> ~[?:?]
>   at java.base/sun.nio.ch.NioSocketImpl.write(NioSocketImpl.java:445) ~[?:?]
>   at java.base/sun.nio.ch.NioSocketImpl$2.write(NioSocketImpl.java:831) ~[?:?]
>   at java.base/java.net.Socket$SocketOutputStream.write(Socket.java:1035) 
> ~[?:?]
>   at 
> java.base/sun.security.ssl.SSLSocketOutputRecord.deliver(SSLSocketOutputRecord.java:345)
>  ~[?:?]
>   at 
> java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1308)
>  ~[?:?]
>   at 
> org.apache.activemq.transport.tcp.TcpBufferedOutputStream.flush(TcpBufferedOutputStream.java:115)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:128) 
> ~[?:?]
>   at 
> org.apache.activemq.transport.tcp.TcpTransport.oneway(TcpTransport.java:194) 
> ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:336)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:318)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.transport.WireFormatNegotiator.sendWireFormat(WireFormatNegotiator.java:181)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.transport.WireFormatNegotiator.sendWireFormat(WireFormatNegotiator.java:84)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.transport.WireFormatNegotiator.start(WireFormatNegotiator.java:74)
>  ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:64) 
> ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at 
> org.apache.activemq.transport.TransportFilter.start(TransportFilter.java:64) 
> ~[activemq-client-6.1.2.jar!/:6.1.2]
>   at org.apache.{noformat}
> The problem replicates under the following conditions:
>  * SSL on Linux. The problem does not replicate with a non-SSL configuration, 
> and it does not replicate on Windows (regardless of whether SSL or non-SSL is 
> used)
>  * *the problem does not replicate with Classic brokers, so it is specific to 
> the Artemis broker*
>  * when testing with both a Classic broker and an Artemis broker, the client 
> connections using the Classic broker were fine; only those using the Artemis 
> broker were failing
>  * the Artemis clients run on the same host as the broker, i.e. both client 
> and server are on the same machine
>  * many connections are made to the broker at the same time (25+). If there 
> are only a few, the problem does not happen
>  * example of a connection URL used by the client (the other instance just 
> uses a different port):
> *ssl://linux_host:61617?keepAlive=true&wireFormat.MaxInactivityDuration=0*
>  * broker configuration file attached (I only mangled the SSL details and the 
> host name); the other one is similar (different ports)
> When monitoring the successful connections I found that they usually took 
> less than 0.5 seconds to succeed; I was unable to find any successful 
> connection that took longer than this.
> Looking at the broker logs, we are unable to find any relevant message when a 
> connection fails.


