[jira] [Created] (ARTEMIS-2284) Artemis 2.7.0 logs password for STOMP protocol in clear text in debug logs

2019-03-26 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-2284:
---

 Summary: Artemis 2.7.0 logs password for STOMP protocol in clear 
text in debug logs
 Key: ARTEMIS-2284
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2284
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.7.0
Reporter: Miroslav Novak


If TRACE logging is enabled for {{org.apache.activemq.artemis}} then the 
StompProtocolManager configuration is logged with passwords in clear text:
{code}
13:48:06,488 DEBUG [org.apache.commons.beanutils.BeanUtils] (ServerService 
Thread Pool -- 86) 
BeanUtils.populate(org.apache.activemq.artemis.core.protocol.stomp.StompProtocolManager@2aa25516,
 {needClientAuth=tru
e, trustStorePassword=hornetqexample, keyStorePassword=hornetqexample, 
port=6445, sslEnabled=true, host=127.0.0.1, 
trustStorePath=/home/hudson/hudson_workspace/workspace/eap-7.x-messaging-weekly-common-ssl/eap-t
estsuite/jboss-hornetq-testsuite/tests-eap7/src/test/resources/org/jboss/qa/hornetq/test/transportprotocols/hornetq.example.truststore,
 keyStorePath=/home/hudson/hudson_workspace/workspace/eap-7.x-messaging-week
ly-common-ssl/eap-testsuite/jboss-hornetq-testsuite/tests-eap7/src/test/resources/org/jboss/qa/hornetq/test/transportprotocols/hornetq.example.keystore})
...
13:48:06,488 TRACE [org.apache.commons.beanutils.BeanUtils] (ServerService 
Thread Pool -- 86)   
setProperty(org.apache.activemq.artemis.core.protocol.stomp.StompProtocolManager@2aa25516,
 trustStorePassword, horn
etqexample)
...
13:48:06,489 TRACE [org.apache.commons.beanutils.BeanUtils] (ServerService 
Thread Pool -- 86)   
setProperty(org.apache.activemq.artemis.core.protocol.stomp.StompProtocolManager@2aa25516,
 keyStorePassword, hornet
qexample)

{code}
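One common mitigation for this class of leak (a sketch under assumed names, not the actual Artemis fix) is to mask password-like keys before the configuration map reaches anything that may log it, such as the BeanUtils tracing above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: masks values of password-like keys before the map is
// logged or traced; illustrative only, not the actual Artemis change.
public class ConfigMasker {
   public static Map<String, Object> mask(Map<String, Object> config) {
      Map<String, Object> safe = new LinkedHashMap<>();
      for (Map.Entry<String, Object> e : config.entrySet()) {
         boolean sensitive = e.getKey().toLowerCase().contains("password");
         safe.put(e.getKey(), sensitive ? "****" : e.getValue());
      }
      return safe;
   }

   public static void main(String[] args) {
      Map<String, Object> cfg = new LinkedHashMap<>();
      cfg.put("port", 6445);
      cfg.put("keyStorePassword", "hornetqexample");
      // Passwords are replaced by **** before any logging happens
      System.out.println(ConfigMasker.mask(cfg));
   }
}
```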




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2171) ThreadPoolExecutor leak under SM due to lack of privileged block

2018-11-11 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-2171:
---

 Summary: ThreadPoolExecutor leak under SM due to lack of 
privileged block
 Key: ARTEMIS-2171
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2171
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.3
Reporter: Miroslav Novak


Description cloned from https://issues.jboss.org/browse/WFLY-10380:

Still researching the source of these leaks.

The way the leak happens is, a java.util.concurrent.ThreadPoolExecutor is 
constructed from an unprivileged context. The pool starts up and threads are 
created without a problem, however, the thread pool is never shut down. The 
finalizer runs but since it tries to shut down the pool with an access control 
context that was captured during construction, it fails because the context did 
not have the modifyThread RuntimePermission, and the thread pool never shuts 
down.

We need to identify the points where TPEs are being constructed without 
controlled privileges.
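A sketch of the kind of fix implied above: construct and shut down the pool inside privileged blocks, so the access control context captured at construction carries the permissions needed later (class and method names are illustrative, not the actual Artemis change):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative wrapper: both construction and shutdown run in a privileged
// block so the captured context is not the unprivileged caller's.
public class PrivilegedPools {
   public static ThreadPoolExecutor newPool(int threads) {
      return AccessController.doPrivileged((PrivilegedAction<ThreadPoolExecutor>) () ->
         new ThreadPoolExecutor(threads, threads, 60L, TimeUnit.SECONDS,
                                new LinkedBlockingQueue<>()));
   }

   public static void shutdown(ThreadPoolExecutor pool) {
      AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
         pool.shutdown(); // requires modifyThread under a SecurityManager
         return null;
      });
   }
}
```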






[jira] [Created] (ARTEMIS-2109) Cannot build Artemis with JDK 11

2018-10-04 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-2109:
---

 Summary: Cannot build Artemis with JDK 11
 Key: ARTEMIS-2109
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2109
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.3
Reporter: Miroslav Novak


Artemis cannot be built with JDK 11:
{code}
$ java -version
openjdk version "11" 2018-09-25
OpenJDK Runtime Environment 18.9 (build 11+28)
OpenJDK 64-Bit Server VM 18.9 (build 11+28, mixed mode)
{code}

Build fails with:
{code}
$ mvn clean install
...
[INFO] 
[INFO] Building ActiveMQ Artemis Parent 2.7.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ artemis-pom ---
[INFO] Deleting /home/mnovak/projects/activemq-artemis/target
[INFO] 
[INFO] --- maven-enforcer-plugin:1.4:enforce (enforce-maven) @ artemis-pom ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.4:enforce (enforce-java) @ artemis-pom ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] ActiveMQ Artemis Parent  FAILURE [  0.302 s]
[INFO] ActiveMQ Artemis Commons ... SKIPPED
[INFO] ActiveMQ Artemis Core Client ... SKIPPED
[INFO] ActiveMQ Artemis Selector Implementation ... SKIPPED
[INFO] ActiveMQ Artemis JMS Client  SKIPPED
[INFO] ActiveMQ Artemis Native POM  SKIPPED
[INFO] ActiveMQ Artemis Journal ... SKIPPED
[INFO] ActiveMQ Artemis JDBC Store  SKIPPED
[INFO] ActiveMQ Artemis Server  SKIPPED
[INFO] ActiveMQ Artemis Protocols . SKIPPED
[INFO] ActiveMQ Artemis AMQP Protocol . SKIPPED
[INFO] ActiveMQ Artemis STOMP Protocol  SKIPPED
[INFO] ActiveMQ Artemis OpenWire Protocol . SKIPPED
[INFO] ActiveMQ Artemis HQClient Protocol . SKIPPED
[INFO] ActiveMQ Artemis HornetQ Protocol .. SKIPPED
[INFO] ActiveMQ Artemis MQTT Protocol . SKIPPED
[INFO] ActiveMQ Artemis DTO ... SKIPPED
[INFO] ActiveMQ Artemis Service Extensions  SKIPPED
[INFO] ActiveMQ Artemis JMS Server  SKIPPED
[INFO] ActiveMQ Artemis CDI Integration ... SKIPPED
[INFO] ActiveMQ Artemis Boot .. SKIPPED
[INFO] ActiveMQ Artemis Tools . SKIPPED
[INFO] ActiveMQ Artemis CLI ... SKIPPED
[INFO] ActiveMQ Artemis Web ... SKIPPED
[INFO] ActiveMQ Artemis Web ... SKIPPED
[INFO] ActiveMQ Artemis Core Client All ... SKIPPED
[INFO] ActiveMQ Artemis Client OSGi ... SKIPPED
[INFO] ActiveMQ Artemis JUnit Rules ... SKIPPED
[INFO] ActiveMQ Artemis JMS Client All  SKIPPED
[INFO] ActiveMQ Artemis JMS Client OSGi ... SKIPPED
[INFO] ActiveMQ Artemis RAR POM ... SKIPPED
[INFO] ActiveMQ Artemis REST Interface Implementation . SKIPPED
[INFO] ActiveMQ Artemis Maven Plugin .. SKIPPED
[INFO] ActiveMQ Artemis Server OSGi ... SKIPPED
[INFO] ActiveMQ Artemis Cons .. SKIPPED
[INFO] ActiveMQ Artemis HawtIO Branding ... SKIPPED
[INFO] ActiveMQ Artemis HawtIO Plugin . SKIPPED
[INFO] ActiveMQ Artemis Console ... SKIPPED
[INFO] ActiveMQ Artemis Spring Integration  SKIPPED
[INFO] Apache ActiveMQ Artemis Distribution ... SKIPPED
[INFO] ActiveMQ Artemis Tests POM . SKIPPED
[INFO] ActiveMQ Artemis Test Support .. SKIPPED
[INFO] ActiveMQ Artemis Unit Tests  SKIPPED
[INFO] ActiveMQ Artemis Joram Tests ... SKIPPED
[INFO] ActiveMQ Artemis timing Tests .. SKIPPED
[INFO] ActiveMQ Artemis JMS Tests . SKIPPED
[INFO] ActiveMQ Artemis Features .. SKIPPED
[INFO] ActiveMQ Artemis Integration Tests . SKIPPED
[INFO] ActiveMQ Artemis Client Integration Tests .. SKIPPED
[INFO] ActiveMQ Artemis Compatibility Tests ... SKIPPED
[INFO] ActiveMQ Artemis soak Tests  SKIPPED
[INFO] ActiveMQ Artemis stress Tests .. SKIPPED
[INFO] ActiveMQ Artemis performance Tests . SKIPPED
[INFO] Smoke Tests  SKIPPED
[INFO] 
...
{code}

[jira] [Created] (ARTEMIS-1743) NPE in server log when Artemis trace logging is enabled

2018-03-12 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-1743:
---

 Summary: NPE in server log when Artemis trace logging is enabled
 Key: ARTEMIS-1743
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1743
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.5.0
Reporter: Miroslav Novak


Artemis master (95b7438e7a7661692d5b78be944d05e254df9067) contains an issue when 
trace logging is enabled.

If a large message is sent and Artemis trace logs are enabled, then the 
following NPE is logged in the server log: 

{code}
09:42:14,005 WARN  [org.apache.activemq.artemis.core.message.impl.CoreMessage] 
(default I/O-9) Error creating String for message: : 
java.lang.NullPointerException
    at 
org.apache.activemq.artemis.core.message.impl.CoreMessage.encode(CoreMessage.java:584)
    at 
org.apache.activemq.artemis.core.message.impl.CoreMessage.checkEncode(CoreMessage.java:248)
    at 
org.apache.activemq.artemis.core.message.impl.CoreMessage.getEncodeSize(CoreMessage.java:647)
    at 
org.apache.activemq.artemis.core.message.impl.CoreMessage.getPersistentSize(CoreMessage.java:1157)
    at 
org.apache.activemq.artemis.core.message.impl.CoreMessage.toString(CoreMessage.java:1132)
    at java.lang.String.valueOf(String.java:2994) [rt.jar:1.8.0_131]
    at java.lang.StringBuilder.append(StringBuilder.java:131) [rt.jar:1.8.0_131]
    at 
org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionSendLargeMessage.toString(SessionSendLargeMessage.java:73)
    at java.lang.String.valueOf(String.java:2994) [rt.jar:1.8.0_131]
    at java.lang.StringBuilder.append(StringBuilder.java:131) [rt.jar:1.8.0_131]
    at 
org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:368)
    at 
org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$DelegatingBufferHandler.bufferReceived(RemotingServiceImpl.java:646)
    at 
org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.channelRead(ActiveMQChannelHandler.java:68)
    at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)
    at 
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)
    at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
    at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at 
org.xnio.netty.transport.AbstractXnioSocketChannel$ReadListener.handleEvent(AbstractXnioSocketChannel.java:443)
    at 
org.xnio.netty.transport.AbstractXnioSocketChannel$ReadListener.handleEvent(AbstractXnioSocketChannel.java:379)
    at 
org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) 
[xnio-api-3.6.1.Final.jar:3.6.1.Final]
    at 
org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
 [xnio-api-3.6.1.Final.jar:3.6.1.Final]
    at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89) 
[xnio-nio-3.6.1.Final.jar:3.6.1.Final]
    at org.xnio.nio.WorkerThread.run(WorkerThread.java:591) 
[xnio-nio-3.6.1.Final.jar:3.6.1.Final]
{code}

Currently it appears that it has no impact on functionality, but the NPEs are 
flooding the server log.
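A generic null-guard of the kind that would stop the flooding might look like this (hypothetical class, not the actual CoreMessage fix):

```java
// Illustrative null-guard for a toString() whose output depends on an encode
// step that may legitimately have no buffer yet (e.g. a large message whose
// body is still being streamed); names are hypothetical, not Artemis code.
public class SafeToString {
   private final byte[] buffer; // null until the body has been received

   public SafeToString(byte[] buffer) {
      this.buffer = buffer;
   }

   @Override
   public String toString() {
      if (buffer == null) {
         // Avoids an NPE when trace logging stringifies the message early.
         return "Message[encoded size not yet available]";
      }
      return "Message[encodedSize=" + buffer.length + "]";
   }
}
```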





[jira] [Commented] (ARTEMIS-1001) slow-consumer-check-period is in minutes but behaves like with seconds

2017-03-08 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15901302#comment-15901302
 ] 

Miroslav Novak commented on ARTEMIS-1001:
-

Thanks [~jbertram], I just "git blamed" the {{AddressSettings.java}} in Artemis, 
which pointed me to Martyn, and did not realize that it's from HornetQ times. I 
think that [~martyntaylor] provided enough reasons why not to change it in the 
mail thread "ActiveMQ Artemis 2.x stream" on the activemq dev list. It's been 
there for years without complaints, so let's keep it at 5 sec.






> slow-consumer-check-period is in minutes but behaves like with seconds
> --
>
> Key: ARTEMIS-1001
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1001
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.3
>Reporter: Miroslav Novak
>
> Attribute {{slow-consumer-check-period}} is in minutes but the 
> {{SlowConsumerReaperRunnable}} thread is scheduled as if it were configured 
> in seconds.
> The problem is in the {{QueueImpl.scheduleSlowConsumerReaper}} method:
> {code}
> slowConsumerReaperFuture = 
> scheduledExecutor.scheduleWithFixedDelay(slowConsumerReaperRunnable, 
> settings.getSlowConsumerCheckPeriod(), settings.getSlowConsumerCheckPeriod(), 
> TimeUnit.SECONDS);
> {code}
> contains {{TimeUnit.SECONDS}} instead of {{TimeUnit.MINUTES}}. 
> I tried to debug it and can see that 
> {{settings.getSlowConsumerCheckPeriod()}} returns 1, which is in minutes. 
> This seems to be an easy fix.
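For reference, the factor-of-60 difference at stake can be shown with TimeUnit directly (illustrative snippet, not broker code):

```java
import java.util.concurrent.TimeUnit;

// With the configured value 1 (intended as minutes), scheduling with
// TimeUnit.SECONDS fires every second instead of every 60 seconds.
public class UnitMismatch {
   public static long delayMillis(long period, TimeUnit unit) {
      return unit.toMillis(period);
   }

   public static void main(String[] args) {
      System.out.println(delayMillis(1, TimeUnit.SECONDS)); // 1000
      System.out.println(delayMillis(1, TimeUnit.MINUTES)); // 60000
   }
}
```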



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (ARTEMIS-1011) Slow consumer detection - producer msg/s rate for queue should take into account messages which are already in queue

2017-03-07 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15900789#comment-15900789
 ] 

Miroslav Novak commented on ARTEMIS-1011:
-

Agreed, thanks [~jbertram] for looking at it. My comment above was just noting 
that, based on a code review of the AbortSlowConsumerStrategy class from 
ActiveMQ 5.x, it seems this issue exists there as well. Artemis will be more 
robust in this respect.

> Slow consumer detection - producer msg/s rate for queue should take into 
> account messages which are already in queue
> 
>
> Key: ARTEMIS-1011
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1011
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.3
>Reporter: Miroslav Novak
> Fix For: 2.0.0, 1.5.5
>
>
> There is still a problem how producer msg/s rate is calculated in 
> {{QueueImpl.getRate()}} for slow consumer detection. It calculates only 
> messages added during the last slow consumer check period. As this is used to 
> figure out, in which msg/s rate the queue could serve the consumer then it 
> should also take into account messages which are already in queue at the 
> start of queueRateCheckTime period. 
> Current implementation is problem for cases when messages are sent to queue 
> in bursts, for example producer sends 1000s messages in a few seconds and 
> then stops and will do that again in 1 hour. QueueImpl.getRate() method 
> returns 0 msg/s for slow consumer check period set to for example 5 min and 
> slow consumer detection will be skipped. 
> I tried to fix it by following change to QueueImpl.getRate() method and seems 
> to be ok, wdyt?
> {code}
>private final AtomicLong messageCountSnapshot = new AtomicLong(0);
>public float getRate() {
>   long locaMessageAdded = getMessagesAdded();
>   float timeSlice = ((System.currentTimeMillis() - 
> queueRateCheckTime.getAndSet(System.currentTimeMillis())) / 1000.0f);
>   if (timeSlice == 0) {
>  messagesAddedSnapshot.getAndSet(locaMessageAdded);
>  return 0.0f;
>   }
>   return BigDecimal.valueOf(((locaMessageAdded - 
> messagesAddedSnapshot.getAndSet(locaMessageAdded)) + 
> messageCountSnapshot.getAndSet(getMessageCount())) / timeSlice).setScale(2, 
> BigDecimal.ROUND_UP).floatValue();
>}
> {code}





[jira] [Commented] (ARTEMIS-1011) Slow consumer detection - producer msg/s rate for queue should take into account messages which are already in queue

2017-03-07 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898920#comment-15898920
 ] 

Miroslav Novak commented on ARTEMIS-1011:
-

Looks like ActiveMQ 5.x does not bother with the producer msg/s rate at all. It 
can just ignore consumers which are idle (= no messages were sent to the 
consumer's buffer). 

I think it's a design decision how Artemis is going to behave. In my opinion 
it's worth making it more robust and able to handle message bursts. 






[jira] [Commented] (ARTEMIS-1011) Slow consumer detection - producer msg/s rate for queue should take into account messages which are already in queue

2017-03-06 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898875#comment-15898875
 ] 

Miroslav Novak commented on ARTEMIS-1011:
-

This is more similar to the DROP policy than to Artemis slow consumer 
detection. 

I found 
http://timbish.blogspot.cz/2013/07/coming-in-activemq-59-new-way-to-abort.html, 
which describes AbortSlowConsumerStrategy and is similar to Artemis slow 
consumer detection. I don't think that the producer msg/s rate is taken into 
account there. Need to check the code.






[jira] [Commented] (ARTEMIS-1011) Slow consumer detection - producer msg/s rate for queue should take into account messages which are already in queue

2017-03-06 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15896893#comment-15896893
 ] 

Miroslav Novak commented on ARTEMIS-1011:
-

At this moment, if the producer does not send any messages then the slow 
consumer policy is skipped and no consumer is disconnected. In the test 
scenario the consumer would be disconnected only if another producer sent new 
messages so that the slow consumer policy was not skipped. So yes, eventually 
the consumer would be disconnected. I don't like the locking in the 
getMessageCount() method either, as it can affect performance. However, the 
correct behavior is as described above. Do you know how ActiveMQ 5 behaves in 
this case?






[jira] [Commented] (ARTEMIS-1011) Slow consumer detection - producer msg/s rate for queue should take into account messages which are already in queue

2017-03-03 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15893884#comment-15893884
 ] 

Miroslav Novak commented on ARTEMIS-1011:
-

> I'm not clear why you need the AtomicLong messageCountSnapshot. Couldn't you 
> just invoke getMessageCount()?

I only got inspired by {{messagesAddedSnapshot.getAndSet(locaMessageAdded)}}. 
It just makes the code nicer because of the getAndSet() method. Otherwise I 
don't see a reason to use AtomicLong for messageCountSnapshot and 
messagesAddedSnapshot.

> Invoking getMessageCount() will lock the queue and therefore negatively 
> impact performance for high-throughput use-cases. We may want to add some 
> kind of optimization to getRate() to avoid that call if at all possible or 
> perhaps avoiding the call to getRate() unless absolutely necessary.
Good point. Just a quick idea: the producer msg/s rate could be calculated just 
from messagesAdded, as it is now. If it does not meet the condition in the if 
statement in SlowConsumerReaperRunnable.run() line 3135 {{} else if (queueRate  
< (threshold * consumersSet.size())) {}}, then we would calculate the producer 
msg/s rate again from messagesAdded and messageCount as suggested in the 
description. This should avoid calling getMessageCount() when there is a high 
load. I assume here that high load means that there are lots of added messages.
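The two-phase idea above could be sketched like this (names and shape are hypothetical, not the actual QueueImpl/SlowConsumerReaperRunnable code):

```java
import java.util.function.LongSupplier;

// Illustrative two-phase check: only pay for the locking getMessageCount()
// call when the cheap added-messages rate alone falls below the threshold.
public class TwoPhaseRate {
   public static boolean isQueueSlow(float addedRate, float threshold,
                                     LongSupplier messageCount, float timeSlice) {
      if (addedRate >= threshold) {
         return false; // high load: skip the expensive backlog lookup entirely
      }
      // Only now touch the (potentially locking) message count
      float backlogRate = messageCount.getAsLong() / timeSlice;
      return addedRate + backlogRate < threshold;
   }
}
```

Under high load the supplier is never invoked, so the lock in getMessageCount() is never taken.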

> This changes the semantics of the method such that it no longer returns the 
> rate of message production on the queue between invocations so it would 
> probably be good to rename the method to something more accurate.
Yes :-)

A test scenario could be:
* Start the server with slow consumer policy set to KILL, threshold 10 msg/s, 
check period 5 seconds
* Send 100 messages to the queue, wait 5 seconds 
* Start a slow consumer which consumes messages at a rate of 1 msg/s

Pass criteria: the consumer was disconnected. This test will fail now.
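The arithmetic behind this scenario, as a standalone sketch (the 5-second period and 10 msg/s threshold come from the steps above; the class itself is illustrative):

```java
// 100 messages arrive in a burst, then a 5-second check period passes with no
// new sends and a 100-message backlog still in the queue.
public class ScenarioRates {
   // Current getRate(): only messages added during the last period
   public static float currentRate(long addedInPeriod, float periodSeconds) {
      return addedInPeriod / periodSeconds;
   }

   // Proposed getRate(): also counts the backlog already in the queue
   public static float proposedRate(long addedInPeriod, long backlog, float periodSeconds) {
      return (addedInPeriod + backlog) / periodSeconds;
   }

   public static void main(String[] args) {
      System.out.println(currentRate(0, 5f));       // 0.0  -> check is skipped today
      System.out.println(proposedRate(0, 100, 5f)); // 20.0 -> above the 10 msg/s threshold
   }
}
```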






[jira] [Commented] (ARTEMIS-921) Consumers killed as slow even if overall consuming rate is above threshold

2017-03-01 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891700#comment-15891700
 ] 

Miroslav Novak commented on ARTEMIS-921:


 [~clebertsuconic] Ok, I've created a new jira: ARTEMIS-1011.

> Consumers killed as slow even if overall consuming rate is above threshold
> --
>
> Key: ARTEMIS-921
> URL: https://issues.apache.org/jira/browse/ARTEMIS-921
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.1
>Reporter: Howard Gao
>Assignee: clebert suconic
> Fix For: 1.5.2, 2.0.0
>
>
> We have one queue. Imagine messages are produced at 2 msgs/s. There are three 
> consumers and slow consumer limit is 1 msgs/s. What happens is that all three 
> consumers get killed as slow, even though it is impossible for any of them to 
> be fast, since messages are distributed equally between the consumers 
> (round-robin).
> This has real consumer impact in a situation when producer rate is usually 
> high (so it requires multiple consumers working in parallel), but may 
> occasionally drop close to consumer-threshold. In this case, broker 
> disconnects all consumers who then have to reconnect and message processing 
> is delayed for the time of the reconnecting.





[jira] [Created] (ARTEMIS-1011) Slow consumer detection - producer msg/s rate for queue should take into account messages which are already in queue

2017-03-01 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-1011:
---

 Summary: Slow consumer detection - producer msg/s rate for queue 
should take into account messages which are already in queue
 Key: ARTEMIS-1011
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1011
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.5.3, 2.0.0
Reporter: Miroslav Novak


There is still a problem with how the producer msg/s rate is calculated in 
{{QueueImpl.getRate()}} for slow consumer detection. It counts only messages 
added during the last slow consumer check period. As this is used to figure out 
at which msg/s rate the queue could serve the consumer, it should also take 
into account messages which are already in the queue at the start of the 
queueRateCheckTime period. 

The current implementation is a problem for cases when messages are sent to the 
queue in bursts, for example a producer sends 1000s of messages in a few 
seconds, then stops and does that again in 1 hour. The QueueImpl.getRate() 
method returns 0 msg/s for a slow consumer check period set to, for example, 
5 min, and slow consumer detection will be skipped. 

I tried to fix it by the following change to the QueueImpl.getRate() method and 
it seems to be ok, wdyt?
{code}
   private final AtomicLong messageCountSnapshot = new AtomicLong(0);

   public float getRate() {
      long locaMessageAdded = getMessagesAdded();
      float timeSlice = ((System.currentTimeMillis() - queueRateCheckTime.getAndSet(System.currentTimeMillis())) / 1000.0f);
      if (timeSlice == 0) {
         messagesAddedSnapshot.getAndSet(locaMessageAdded);
         return 0.0f;
      }
      return BigDecimal.valueOf(((locaMessageAdded - messagesAddedSnapshot.getAndSet(locaMessageAdded)) + messageCountSnapshot.getAndSet(getMessageCount())) / timeSlice).setScale(2, BigDecimal.ROUND_UP).floatValue();
   }
{code}
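For experimentation, the proposed formula can be exercised outside the broker by passing the clock and queue counters in explicitly (illustrative sketch, not the actual QueueImpl class):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Replicates the proposed formula with the clock and queue counters passed in,
// so the burst behaviour can be checked deterministically; illustrative only.
public class ProposedRate {
   private long messagesAddedSnapshot = 0;
   private long messageCountSnapshot = 0;
   private long lastCheckMillis;

   public ProposedRate(long nowMillis) {
      this.lastCheckMillis = nowMillis;
   }

   public float rate(long nowMillis, long messagesAdded, long messageCount) {
      float timeSlice = (nowMillis - lastCheckMillis) / 1000.0f;
      lastCheckMillis = nowMillis;
      if (timeSlice == 0) {
         messagesAddedSnapshot = messagesAdded;
         return 0.0f;
      }
      long addedDelta = messagesAdded - messagesAddedSnapshot;
      messagesAddedSnapshot = messagesAdded;
      // Like messageCountSnapshot.getAndSet(getMessageCount()): use the
      // backlog recorded at the previous check, then store the current one.
      long previousBacklog = messageCountSnapshot;
      messageCountSnapshot = messageCount;
      return BigDecimal.valueOf((addedDelta + previousBacklog) / timeSlice)
            .setScale(2, RoundingMode.UP).floatValue();
   }
}
```

With a burst of 100 messages and a 5-second slice, the rate stays at 20 msg/s in the following period even though nothing new was added, because the backlog is counted.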





[jira] [Commented] (ARTEMIS-921) Consumers killed as slow even if overall consuming rate is above threshold

2017-03-01 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890341#comment-15890341
 ] 

Miroslav Novak commented on ARTEMIS-921:


Can we fix it in the scope of this jira? I can't reopen it :-(

> Consumers killed as slow even if overall consuming rate is above threshold
> --
>
> Key: ARTEMIS-921
> URL: https://issues.apache.org/jira/browse/ARTEMIS-921
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.1
>Reporter: Howard Gao
>Assignee: clebert suconic
> Fix For: 1.5.2, 2.0.0
>
>
> We have one queue. Imagine messages are produced at 2 msgs/s. There are three 
> consumers and slow consumer limit is 1 msgs/s. What happens is that all three 
> consumers get killed as slow, even though it is impossible for any of them to 
> be fast, since messages are distributed equally between the consumers 
> (round-robin).
> This has real consumer impact in a situation when producer rate is usually 
> high (so it requires multiple consumers working in parallel), but may 
> occasionally drop close to consumer-threshold. In this case, broker 
> disconnects all consumers who then have to reconnect and message processing 
> is delayed for the time of the reconnecting.
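The arithmetic in the quoted description can be sketched as follows (a minimal standalone illustration, not broker code; class and method names are made up for the example):

```java
// Sketch: per-consumer rate under strict round-robin delivery.
// Assumes the producer rate is split evenly across n consumers.
public class SlowConsumerMath {
    public static double perConsumerRate(double producerRate, int consumers) {
        return producerRate / consumers;
    }

    public static void main(String[] args) {
        // 2 msg/s across 3 consumers -> ~0.67 msg/s each, below a 1 msg/s
        // threshold, so every consumer is flagged as slow even though the
        // aggregate consuming rate (2 msg/s) is above the threshold.
        double each = perConsumerRate(2.0, 3);
        System.out.println(each < 1.0);
    }
}
```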





[jira] [Commented] (ARTEMIS-921) Consumers killed as slow even if overall consuming rate is above threshold

2017-03-01 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15890330#comment-15890330
 ] 

Miroslav Novak commented on ARTEMIS-921:


There is still a problem with how the producer msg/s rate is calculated in 
{{QueueImpl.getRate()}} for slow consumer detection. It counts only messages 
added during the last {{queueRateCheckTime}} seconds. Since this is used to 
figure out the maximum msg/s rate at which the queue could serve the consumer, 
it should also take into account messages which are already in the queue at 
{{queueRateCheckTime}} time.

The current implementation is a problem when messages are sent to the queue in 
bursts, for example when a producer sends thousands of messages in a few 
seconds, then stops and does the same again an hour later. QueueImpl.getRate() 
then returns 0 msg/s for queueRateCheckTime (5 seconds by default) and slow 
consumer detection is skipped (almost) all the time.

I tried to fix it with the following change to the QueueImpl.getRate() method 
and it seems to be OK, wdyt?
{code}
   private final AtomicLong messageCountSnapshot = new AtomicLong(0);

   public float getRate() {
      long localMessagesAdded = getMessagesAdded();
      float timeSlice = (System.currentTimeMillis() - queueRateCheckTime.getAndSet(System.currentTimeMillis())) / 1000.0f;
      if (timeSlice == 0) {
         messagesAddedSnapshot.getAndSet(localMessagesAdded);
         return 0.0f;
      }
      return BigDecimal.valueOf(((localMessagesAdded - messagesAddedSnapshot.getAndSet(localMessagesAdded)) + messageCountSnapshot.getAndSet(getMessageCount())) / timeSlice).setScale(2, BigDecimal.ROUND_UP).floatValue(); // <-- here is the change
   }
{code}
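As a standalone model of the idea (not the actual QueueImpl code; the fields and methods here are simplified stand-ins for the real ones), the proposed calculation can be exercised like this: including the queue backlog keeps the rate non-zero even after a burst has stopped.

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified model of the proposed rate calculation: the rate adds the
// current queue backlog to the messages added during the window, so a burst
// that arrived before the window still yields a non-zero rate.
public class RateModel {
    private final AtomicLong messagesAddedSnapshot = new AtomicLong(0);
    private long messagesAdded; // stand-in for getMessagesAdded()
    private long messageCount;  // stand-in for getMessageCount()

    void produce(long n) {
        messagesAdded += n;
        messageCount += n;
    }

    float rate(float timeSliceSeconds) {
        long added = messagesAdded;
        long delta = added - messagesAddedSnapshot.getAndSet(added);
        // proposed: include the backlog, not just the windowed delta
        return (delta + messageCount) / timeSliceSeconds;
    }

    public static void main(String[] args) {
        RateModel q = new RateModel();
        q.produce(1000);        // burst, then the producer goes quiet
        q.rate(5f);             // first check window sees the burst
        // A windowed-only rate would now be 0 msg/s; including the backlog
        // keeps slow consumer detection from being skipped.
        System.out.println(q.rate(5f) > 0);
    }
}
```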

> Consumers killed as slow even if overall consuming rate is above threshold
> --
>
> Key: ARTEMIS-921
> URL: https://issues.apache.org/jira/browse/ARTEMIS-921
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.1
>Reporter: Howard Gao
>Assignee: clebert suconic
> Fix For: 1.5.2, 2.0.0
>
>
> We have one queue. Imagine messages are produced at 2 msgs/s. There are three 
> consumers and slow consumer limit is 1 msgs/s. What happens is that all three 
> consumers get killed as slow, even though it is impossible for any of them to 
> be fast, since messages are distributed equally between the consumers 
> (round-robin).
> This has real consumer impact in a situation when producer rate is usually 
> high (so it requires multiple consumers working in parallel), but may 
> occasionally drop close to consumer-threshold. In this case, broker 
> disconnects all consumers who then have to reconnect and message processing 
> is delayed for the time of the reconnecting.





[jira] [Commented] (ARTEMIS-1001) slow-consumer-check-period is in minutes but behaves like with seconds

2017-02-28 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15887539#comment-15887539
 ] 

Miroslav Novak commented on ARTEMIS-1001:
-

A 5 second window for calculating the msg/s rate of producers and consumers 
seems quite low for calculating a proper rate and can lead to weird behavior. 
For example, if the consumer rate fluctuates a little over time due to network 
congestion or short high CPU loads on the client, the consumer might be 
disconnected.

Regarding the threads, it does not have to be such an issue as I thought. A 
{{ScheduledThreadPoolExecutor}} is used, which has max size = 5 by default [1]. 
It is shared with other parts of the server to execute scheduled tasks. 
Scheduling 100s of tasks every 5 seconds might delay execution of other tasks, 
which might be undesirable but not so critical. Yes, a good trade-off should be 
found.

I can see that {{DEFAULT_SLOW_CONSUMER_CHECK_PERIOD = 5;}} was originally set 
by [~martyntaylor]. Maybe he has more info. 

[1] 
https://github.com/apache/activemq-artemis/blob/1.5.3/docs/user-manual/en/thread-pooling.md#server-scheduled-thread-pool
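A minimal sketch of the documented default (a 5-thread scheduled pool that all scheduled tasks compete for); this is illustrative only, not the broker's actual wiring:

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;

// Sketch: a server-style scheduled pool with 5 core threads (the documented
// default). Hundreds of reaper tasks scheduled here would all share these
// 5 threads, so frequent short periods can delay other scheduled work.
public class ScheduledPoolSketch {
    public static ScheduledThreadPoolExecutor serverScheduledPool() {
        return new ScheduledThreadPoolExecutor(5);
    }

    public static void main(String[] args) {
        ScheduledThreadPoolExecutor pool = serverScheduledPool();
        System.out.println(pool.getCorePoolSize()); // 5
        pool.shutdown();
    }
}
```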

> slow-consumer-check-period is in minutes but behaves like with seconds
> --
>
> Key: ARTEMIS-1001
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1001
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.3
>Reporter: Miroslav Novak
>
> Attribute {{slow-consumer-check-period}} is in minutes, but the 
> {{SlowConsumerReaperRunnable}} thread is scheduled as if it were configured in 
> seconds.
> Problem is in {{QueueImpl.scheduleSlowConsumerReaper}} method:
> {code}
> slowConsumerReaperFuture = 
> scheduledExecutor.scheduleWithFixedDelay(slowConsumerReaperRunnable, 
> settings.getSlowConsumerCheckPeriod(), settings.getSlowConsumerCheckPeriod(), 
> TimeUnit.SECONDS);
> {code}
> contains {{TimeUnit.SECONDS}} instead of {{TimeUnit.MINUTES}}. 
> I tried to debug it and can see that 
> {{settings.getSlowConsumerCheckPeriod()}} returns 1, which is in minutes. This 
> seems to be an easy fix.





[jira] [Created] (ARTEMIS-1001) slow-consumer-check-period is in minutes but behaves like with seconds

2017-02-27 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-1001:
---

 Summary: slow-consumer-check-period is in minutes but behaves like 
with seconds
 Key: ARTEMIS-1001
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1001
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.5.3
Reporter: Miroslav Novak


Attribute {{slow-consumer-check-period}} is in minutes, but the 
{{SlowConsumerReaperRunnable}} thread is scheduled as if it were configured in 
seconds.

Problem is in {{QueueImpl.scheduleSlowConsumerReaper}} method:
{code}
slowConsumerReaperFuture = 
scheduledExecutor.scheduleWithFixedDelay(slowConsumerReaperRunnable, 
settings.getSlowConsumerCheckPeriod(), settings.getSlowConsumerCheckPeriod(), 
TimeUnit.SECONDS);
{code}

contains {{TimeUnit.SECONDS}} instead of {{TimeUnit.MINUTES}}. 

I tried to debug it and can see that {{settings.getSlowConsumerCheckPeriod()}} 
returns 1, which is in minutes. This seems to be an easy fix.
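A sketch of the proposed one-line fix, assuming a scheduler equivalent to the broker's (the wrapper class and parameter names here are stand-ins):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of the fix: schedule the reaper with TimeUnit.MINUTES so a
// check period of 1 really means one minute, not one second.
public class ReaperScheduling {
    public static ScheduledFuture<?> scheduleReaper(ScheduledExecutorService executor,
                                                    Runnable reaper,
                                                    long checkPeriod) {
        return executor.scheduleWithFixedDelay(reaper, checkPeriod, checkPeriod,
                                               TimeUnit.MINUTES); // was TimeUnit.SECONDS
    }

    public static void main(String[] args) {
        // One "period unit" expressed in seconds under each interpretation:
        System.out.println(TimeUnit.MINUTES.toSeconds(1)); // 60x the buggy behavior
    }
}
```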





[jira] [Closed] (ARTEMIS-946) CLONE - Consumers killed as slow even if overall consuming rate is above threshold

2017-02-07 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak closed ARTEMIS-946.
--
Resolution: Fixed

Just tried if I can move the jira - not possible :-( Closing.

> CLONE - Consumers killed as slow even if overall consuming rate is above 
> threshold
> --
>
> Key: ARTEMIS-946
> URL: https://issues.apache.org/jira/browse/ARTEMIS-946
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.1
>Reporter: Miroslav Novak
>Assignee: clebert suconic
> Fix For: 2.0.0, 1.5.2
>
>
> We have one queue. Imagine messages are produced at 2 msgs/s. There are three 
> consumers and slow consumer limit is 1 msgs/s. What happens is that all three 
> consumers get killed as slow, even though it is impossible for any of them to 
> be fast, since messages are distributed equally between the consumers 
> (round-robin).
> This has real consumer impact in a situation when producer rate is usually 
> high (so it requires multiple consumers working in parallel), but may 
> occasionally drop close to consumer-threshold. In this case, broker 
> disconnects all consumers who then have to reconnect and message processing 
> is delayed for the time of the reconnecting.





[jira] [Commented] (ARTEMIS-600) Enterprise message grouping

2017-02-07 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1583#comment-1583
 ] 

Miroslav Novak commented on ARTEMIS-600:


I did not create jiras for points 3, 4, 5. I did some manual experiments with 
clustered message grouping to create this one. I think the info in those points 
is quite good, but I know that reproducers are always welcome :-) I will take a 
look at what can be done here once I have some free cycles. 



> Enterprise message grouping
> ---
>
> Key: ARTEMIS-600
> URL: https://issues.apache.org/jira/browse/ARTEMIS-600
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Miroslav Novak
>Priority: Critical
> Fix For: 2.0.0
>
>
> Message grouping in Artemis is squishy as almost anything can break it at 
> this moment. *Drawbacks in current design:*
> * Consumers must be connected before messages are sent to the cluster
> * If a producer sends a message to the cluster and there is no consumer in the 
> cluster, then the message is stuck on this node. A consumer which later 
> connects to another node in the cluster does not receive this message.
> * If server in cluster is shutdown then all message grouping breaks and no 
> other node in cluster is able to receive message (not even on other queues)
> * There is issue that backup for remote grouping handler does not take duties 
> after failover.
> * If consumer is closed then no other consumer is chosen
> *Suggested improvements:*
> * Decision to which consumer to route a message, will not be made during send 
> time in case that there is no consumer. 
> * Consumers do not have to be connected when messages are sent to cluster.
> * Message grouping will allow to cleanly shutdown server without breaking 
> message ordering/grouping. Connected consumers will be closed. Another 
> consumer in cluster will be chosen.
> * Further, if any consumer is closed then another consumer will be chosen. 
> * Allow to configure dispatch delay to avoid situation that first connected 
> consumer in cluster gets assigned all message groups. Delay will wait for 
> other consumers to connect so message groups are equally distributed. (we can 
> consider setting minimum consumer number)





[jira] [Commented] (ARTEMIS-926) CME when Artemis server start

2017-01-19 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15829502#comment-15829502
 ] 

Miroslav Novak commented on ARTEMIS-926:


I agree with Clebert. Methods on the Properties class are synchronized, so any 
manipulation of system properties at runtime will have to wait for the 
synchronized(properties) block to finish.
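A sketch of the suggested guard (the property prefix used here is illustrative): iterating the Properties under its own monitor blocks concurrent put/setProperty calls, which lock the same monitor, so the fail-fast enumerator cannot observe a concurrent modification.

```java
import java.util.Map;
import java.util.Properties;

// Sketch: scan a Properties object safely. Hashtable's mutating methods are
// synchronized on the instance, so holding the same monitor during iteration
// prevents the ConcurrentModificationException seen in parseSystemProperties.
public class SafePropertyScan {
    public static int countWithPrefix(Properties props, String prefix) {
        int matches = 0;
        synchronized (props) { // same monitor that put()/setProperty() take
            for (Map.Entry<Object, Object> entry : props.entrySet()) {
                if (entry.getKey().toString().startsWith(prefix)) {
                    matches++;
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("brokerconfig.name", "broker1"); // illustrative prefix
        p.setProperty("other.key", "x");
        System.out.println(countWithPrefix(p, "brokerconfig."));
    }
}
```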


> CME when Artemis server start
> -
>
> Key: ARTEMIS-926
> URL: https://issues.apache.org/jira/browse/ARTEMIS-926
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.1
>Reporter: Jeff Mesnil
> Fix For: 1.5.2, 2.0.0
>
>
> In our test suite, we had a test failing with the error:
> {code}
> 07:57:14,980 ERROR [org.jboss.msc.service.fail] 
> (ServerService Thread Pool -- 64) MSC01: Failed to start service 
> jboss.messaging-activemq.default.jms.manager: 
> org.jboss.msc.service.StartException in service 
> jboss.messaging-activemq.default.jms.manager: WFLYMSGAMQ0033: Failed to start 
> service
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSService.doStart(JMSService.java:203)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSService.access$000(JMSService.java:63)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSService$1.run(JMSService.java:97)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> Caused by: java.util.ConcurrentModificationException
> at java.util.Hashtable$Enumerator.next(Hashtable.java:1367)
> at 
> org.apache.activemq.artemis.core.config.impl.ConfigurationImpl.parseSystemProperties(ConfigurationImpl.java:308)
> at 
> org.apache.activemq.artemis.core.config.impl.ConfigurationImpl.parseSystemProperties(ConfigurationImpl.java:299)
> at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.internalStart(ActiveMQServerImpl.java:488)
> at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.start(ActiveMQServerImpl.java:466)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.start(JMSServerManagerImpl.java:412)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSService.doStart(JMSService.java:199)
> ... 8 more
> {code}
> It is possible that our tests write some System Properties while Artemis is 
> started.
> To avoid this issue, Artemis should add a synchronized(properties) block in 
> org.apache.activemq.artemis.core.config.impl.ConfigurationImpl#parseSystemProperties(java.util.Properties).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-201) Log warning if server can crash on OutOfMemory due to "misconfiguration"

2016-12-06 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15727905#comment-15727905
 ] 

Miroslav Novak commented on ARTEMIS-201:


I agree with Jeff, I am just not sure about reducing it to INFO level, as 
everyone simply ignores INFOs. We should also provide advice on what to do, 
like "Increase JVM memory or reduce the max-size-bytes value in 
address-settings".
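The check could be sketched like this (a hypothetical standalone version; the real check would read the configured address-settings and run at startup and whenever a destination is added):

```java
// Sketch of the suggested startup check: estimate the worst-case heap needed
// if every destination fills up to max-size-bytes, and warn with advice.
public class AddressMemoryCheck {
    public static long worstCaseBytes(int destinations, long maxSizeBytes) {
        return (long) destinations * maxSizeBytes;
    }

    public static void main(String[] args) {
        long needed = worstCaseBytes(3000, 10L * 1024 * 1024); // ~30 GB
        long maxHeap = Runtime.getRuntime().maxMemory();
        if (needed > maxHeap) {
            System.out.println("WARN: addresses can hold " + needed
                + " bytes in memory but max heap is " + maxHeap
                + "; increase JVM memory or reduce max-size-bytes in address-settings");
        }
    }
}
```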

> Log warning if server can crash on OutOfMemory due to "misconfiguration"
> 
>
> Key: ARTEMIS-201
> URL: https://issues.apache.org/jira/browse/ARTEMIS-201
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.0.0
>Reporter: Miroslav Novak
>Assignee: Justin Bertram
> Fix For: 1.2.0
>
>
> Imagine situation where server is started with 3000 destinations and 
> max-size-bytes is set to 10MB. This would mean that JVM would have to be 
> started with at least 30GB of memory to prevent OOM in case that all 
> destinations get filled up. (PAGE mode is not a solution in this case as it 
> starts once destination exceeds 10MB in memory)
> Purpose of this jira is to provide check which would print warning in case 
> that such OOM can happen. This check would be executed during start of server 
> and then with adding any destination at runtime.





[jira] [Updated] (ARTEMIS-809) Trace logs contain content of large messages creating really huge log files

2016-10-19 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-809:
---
Issue Type: Bug  (was: Improvement)

> Trace logs contain content of large messages creating really huge log files
> ---
>
> Key: ARTEMIS-809
> URL: https://issues.apache.org/jira/browse/ARTEMIS-809
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.4.0
>Reporter: Miroslav Novak
>
> Running Artemis 1.4 with trace logs enabled adds the content of large messages 
> to the log files. This makes trace logs unusable, and they grow at an enormous 
> rate.
> This should be disabled by default. Maybe configurable by a system property or 
> in the configuration.





[jira] [Created] (ARTEMIS-809) Trace logs contain content of large messages creating really huge log files

2016-10-19 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-809:
--

 Summary: Trace logs contain content of large messages creating 
really huge log files
 Key: ARTEMIS-809
 URL: https://issues.apache.org/jira/browse/ARTEMIS-809
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 1.4.0
Reporter: Miroslav Novak


Running Artemis 1.4 with trace logs enabled adds the content of large messages 
to the log files. This makes trace logs unusable, and they grow at an enormous 
rate.

This should be disabled by default. Maybe configurable by a system property or 
in the configuration.
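One possible shape for the suggested guard (the system property name is hypothetical, as is the helper class):

```java
// Sketch: gate large-message bodies in trace logs behind a system property,
// defaulting to off, so trace files stay a manageable size.
public class LargeMessageTrace {
    // Hypothetical flag name; off unless explicitly enabled with -D...=true.
    static final boolean LOG_BODIES =
        Boolean.getBoolean("artemis.trace.largeMessageBodies");

    public static String describe(String messageId, byte[] body) {
        return LOG_BODIES
            ? "msg " + messageId + " body=" + new String(body)
            : "msg " + messageId + " bodySize=" + body.length;
    }

    public static void main(String[] args) {
        // With the flag unset, only the size is traced, not the content.
        System.out.println(describe("m1", new byte[1024]));
    }
}
```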





[jira] [Updated] (ARTEMIS-804) Log message id for dropped messages

2016-10-17 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-804:
---
Issue Type: Improvement  (was: Bug)

> Log message id for dropped messages
> ---
>
> Key: ARTEMIS-804
> URL: https://issues.apache.org/jira/browse/ARTEMIS-804
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.4.0
>Reporter: Miroslav Novak
>
> If no dead letter address is configured then message is dropped when max 
> delivery attempts is reached. In this case warning like:
> {code}
>  17:12:09,202 WARN  [org.apache.activemq.artemis.core.server] (Thread-23 
> (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@7e65de0f-1736233410))
>  AMQ222150: Message has exceeded max delivery attempts. No Dead Letter 
> Address configured for queue jms.queue.InQueue so dropping it
> {code}
> is logged. This warning should also contain message id of the dropped message.





[jira] [Updated] (ARTEMIS-804) Log message id for dropped messages

2016-10-17 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-804:
---
Priority: Minor  (was: Major)

> Log message id for dropped messages
> ---
>
> Key: ARTEMIS-804
> URL: https://issues.apache.org/jira/browse/ARTEMIS-804
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.4.0
>Reporter: Miroslav Novak
>Priority: Minor
>
> If no dead letter address is configured then message is dropped when max 
> delivery attempts is reached. In this case warning like:
> {code}
>  17:12:09,202 WARN  [org.apache.activemq.artemis.core.server] (Thread-23 
> (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@7e65de0f-1736233410))
>  AMQ222150: Message has exceeded max delivery attempts. No Dead Letter 
> Address configured for queue jms.queue.InQueue so dropping it
> {code}
> is logged. This warning should also contain message id of the dropped message.





[jira] [Created] (ARTEMIS-804) Log message id for dropped messages

2016-10-17 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-804:
--

 Summary: Log message id for dropped messages
 Key: ARTEMIS-804
 URL: https://issues.apache.org/jira/browse/ARTEMIS-804
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.4.0
Reporter: Miroslav Novak


If no dead letter address is configured, then the message is dropped when max 
delivery attempts is reached. In this case, a warning like:
{code}
 17:12:09,202 WARN  [org.apache.activemq.artemis.core.server] (Thread-23 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@7e65de0f-1736233410))
 AMQ222150: Message has exceeded max delivery attempts. No Dead Letter Address 
configured for queue jms.queue.InQueue so dropping it
{code}

is logged. This warning should also contain the message id of the dropped message.
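A sketch of the improved warning text (the message id and formatting are illustrative, not the actual log call):

```java
// Sketch: include the message id when a message is dropped after exceeding
// max delivery attempts, so the dropped message can be traced afterwards.
public class DropWarning {
    public static String format(long messageId, String queue) {
        return "AMQ222150: Message " + messageId
            + " has exceeded max delivery attempts. No Dead Letter Address"
            + " configured for queue " + queue + " so dropping it";
    }

    public static void main(String[] args) {
        System.out.println(format(1736233410L, "jms.queue.InQueue"));
    }
}
```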





[jira] [Commented] (ARTEMIS-709) Possible NPE on UUIDGenerator.getAllNetworkInterfaces()

2016-10-03 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15541709#comment-15541709
 ] 

Miroslav Novak commented on ARTEMIS-709:


PR is merged, should this be resolved?

> Possible NPE on UUIDGenerator.getAllNetworkInterfaces()
> ---
>
> Key: ARTEMIS-709
> URL: https://issues.apache.org/jira/browse/ARTEMIS-709
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Martin Styk
>Priority: Minor
>
> There is possibility of NPE on class {{UUIDGenerator}} in method 
> {{getAllNetworkInterfaces()}}. 
> {code:java}
>    private static List<NetworkInterface> getAllNetworkInterfaces() {
>       Enumeration<NetworkInterface> networkInterfaces;
>       try {
>          networkInterfaces = NetworkInterface.getNetworkInterfaces();
>          List<NetworkInterface> ifaces = new ArrayList<>();
>          while (networkInterfaces.hasMoreElements()) {
>             ifaces.add(networkInterfaces.nextElement());
>          }
>          return ifaces;
>       }
>       catch (SocketException e) {
>          return Collections.emptyList();
>       }
>    }
> {code}
> In case no network interfaces are found on the machine, the method 
> {{NetworkInterface.getNetworkInterfaces()}} returns {{null}}, which can cause 
> an NPE in the while loop condition.
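A null-guarded variant could look like this (a sketch, not necessarily the actual merged fix):

```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

// Sketch: NetworkInterface.getNetworkInterfaces() may return null when no
// interfaces exist, so check for null before iterating.
public class SafeInterfaces {
    public static List<NetworkInterface> getAllNetworkInterfaces() {
        try {
            Enumeration<NetworkInterface> networkInterfaces =
                NetworkInterface.getNetworkInterfaces();
            if (networkInterfaces == null) { // the missing guard
                return Collections.emptyList();
            }
            List<NetworkInterface> ifaces = new ArrayList<>();
            while (networkInterfaces.hasMoreElements()) {
                ifaces.add(networkInterfaces.nextElement());
            }
            return ifaces;
        } catch (SocketException e) {
            return Collections.emptyList();
        }
    }

    public static void main(String[] args) {
        // Always returns a (possibly empty) list, never throws NPE.
        System.out.println(getAllNetworkInterfaces() != null);
    }
}
```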





[jira] [Commented] (ARTEMIS-473) Resolve split brain data after split brains scenarios.

2016-08-23 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432755#comment-15432755
 ] 

Miroslav Novak commented on ARTEMIS-473:


[~clebertsuconic] do you think that options a) and b) are feasible? Option c) 
would require resolving hard issues and merging the journals, as you mentioned 
above, which would be out of scope for this RFE.

> Resolve split brain data after split brains scenarios.
> --
>
> Key: ARTEMIS-473
> URL: https://issues.apache.org/jira/browse/ARTEMIS-473
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.2.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> If master-slave pair is configured using replicated journal and there are no 
> other servers in cluster then if network between master and slave is broken 
> then slave will activate. Depending on whether clients were disconnected from 
> master or not there might be or might not be failover to slave. Problem 
> happens in the moment when network between master and slave is restored. 
> Master and slave are active at the same time which is the split brain 
> syndrome. Currently there is no recovery mechanism to solve this situation.
> Suggested improvement: If clients failovered to slave then master will 
> restart itself so failback occurs (if configured). If clients did not 
> failover and stayed connected to master then backup will restart itself.





[jira] [Updated] (ARTEMIS-681) Do not use just one "sf.my-cluster..." queue for message load-balancing and redistribution

2016-08-16 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-681:
---
Priority: Critical  (was: Major)

> Do not use just one "sf.my-cluster..." queue for message load-balancing and 
> redistribution 
> ---
>
> Key: ARTEMIS-681
> URL: https://issues.apache.org/jira/browse/ARTEMIS-681
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> For load balancing of messages in cluster is used just one "sf.my-cluster..." 
> queue which is used for routing messages to remote cluster node. This is 
> major performance bottleneck and scaling up blocker in case if there are 100s 
> queues/topics on server and there is just one queue to route messages to 
> another node in cluster.
> Expected behavior: There should be special "routing" queue for every queue 
> and topic subscription per "remote" cluster node. 





[jira] [Created] (ARTEMIS-681) Do not use just one "sf.my-cluster..." queue for message load-balancing and redistribution

2016-08-16 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-681:
--

 Summary: Do not use just one "sf.my-cluster..." queue for message 
load-balancing and redistribution 
 Key: ARTEMIS-681
 URL: https://issues.apache.org/jira/browse/ARTEMIS-681
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.3.0
Reporter: Miroslav Novak


For load balancing of messages in a cluster, just one "sf.my-cluster..." queue 
is used for routing messages to a remote cluster node. This is a major 
performance bottleneck and a scaling-up blocker when there are 100s of 
queues/topics on the server and just one queue to route messages to another 
node in the cluster.

Expected behavior: there should be a special "routing" queue for every queue 
and topic subscription per "remote" cluster node. 





[jira] [Commented] (ARTEMIS-473) Resolve split brain data after split brains scenarios.

2016-08-11 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417115#comment-15417115
 ] 

Miroslav Novak commented on ARTEMIS-473:


This feature request should handle options a) and b). Option c) requires 
resolving hard issues, which is out of scope. 

> Resolve split brain data after split brains scenarios.
> --
>
> Key: ARTEMIS-473
> URL: https://issues.apache.org/jira/browse/ARTEMIS-473
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.2.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> If master-slave pair is configured using replicated journal and there are no 
> other servers in cluster then if network between master and slave is broken 
> then slave will activate. Depending on whether clients were disconnected from 
> master or not there might be or might not be failover to slave. Problem 
> happens in the moment when network between master and slave is restored. 
> Master and slave are active at the same time which is the split brain 
> syndrome. Currently there is no recovery mechanism to solve this situation.
> Suggested improvement: If clients failovered to slave then master will 
> restart itself so failback occurs (if configured). If clients did not 
> failover and stayed connected to master then backup will restart itself.





[jira] [Commented] (ARTEMIS-679) Activate most up to date server from master-slave(live-backup) pair

2016-08-11 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417112#comment-15417112
 ] 

Miroslav Novak commented on ARTEMIS-679:


With a replicated journal, when the backup is started before the live server, 
it does not activate; it waits for the live server to start first. The problem 
is that the live server always activates, no matter whether the backup had a 
more up-to-date journal or not. This feature would be about exchanging the 
timestamps of the last writes to the Artemis journal between live and backup. 
The server with the later timestamp would activate; the server with the older 
timestamp would restart.

The great benefit of this approach is that it does not matter which server from 
the live-backup pair is started first. The server with the most up-to-date 
journal will always start. 
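The timestamp exchange can be sketched as follows (class and method names are hypothetical; the real implementation would exchange the timestamps over the replication channel):

```java
// Sketch of the proposed negotiation: live and backup exchange the timestamp
// of their last journal write; the one with the newer journal activates, the
// other restarts and replicates from it.
public class JournalNegotiation {
    public static String activate(long liveLastWrite, long backupLastWrite) {
        return backupLastWrite > liveLastWrite ? "backup" : "live";
    }

    public static void main(String[] args) {
        // The backup wrote last (it was active after the live shut down),
        // so it should be the one that activates.
        System.out.println(activate(1000L, 2000L));
    }
}
```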

> Activate most up to date server from master-slave(live-backup) pair
> ---
>
> Key: ARTEMIS-679
> URL: https://issues.apache.org/jira/browse/ARTEMIS-679
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> If there are 2 live/backup pairs with replicated journals in a colocated 
> topology Artemis1(L1/B2) <-> Artemis2(L2/B1), then there is no easy way to 
> start them once they are all shut down.
> The problem is that there is no way to start the servers with the most 
> up-to-date journal. Say the administrator shuts down the servers in sequence: 
> Artemis1 and then Artemis2. Then Artemis2 has the most up-to-date journals, 
> because backup B1 on server 2 activated.
> If the administrator then decides to start Artemis2, live L2 activates and 
> backup B1 waits for live L1 in Artemis1 to start. But once L1 starts, L1 
> replicates its own "old" journal to B1.
> So L1 started with a bad old journal. I would suggest that L1 and B1 compare 
> their journals and figure out which one is more up-to-date. Then the server 
> with the more up-to-date journal activates.
> In the scenario described above it would be backup B1 which activates first. 
> Live L1 will synchronize its own journal from B1 and then failback happens. 





[jira] [Comment Edited] (ARTEMIS-473) Resolve split brain data after split brains scenarios.

2016-08-11 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416643#comment-15416643
 ] 

Miroslav Novak edited comment on ARTEMIS-473 at 8/11/16 6:39 AM:
-

I've created new jira for the description - ARTEMIS-679 - Activate most up to 
date server from master-slave(live-backup) pair. 

If split brain happens, there is not much Artemis can do about it. Still, it 
can recover from quite common cases. Basically, 3 situations can happen when 
split brain occurs (= master and slave are active at the same time):

a) Clients do not lose connection to the master and stay connected to the master.
b) Clients lose connection to the master and fail over to the backup. 
c) Clients lose connection to the master and slave at the same time. They're 
trying to reconnect to the master-slave pair. 

I believe that for situations a) and b) Artemis can recover when the network is 
reconnected. The moment the master and slave notice that they're active at the 
same time, they will check who has external (non in-vm) connections. The server 
without external client connections will restart. Only the server with the 
clients has the up-to-date journal. 

Option c) is problematic, as clients can connect to either the master or the 
slave, so in this case there is nothing Artemis can do. wdyt?


was (Author: mnovak):
I've created a new jira for the description - ARTEMIS-679 - Activate most up to 
date server from master-slave(live-backup) pair. 

If split brain happens then there is not much Artemis can do about it. Still, 
it can recover from quite common cases. Basically three situations can happen 
when split brain occurs (= master and slave are active at the same time):

a) Clients do not lose connection to master and stay connected to master.
b) Clients lose connection to master and fail over to the backup. 
c) Clients lose connection to master and slave at the same time. They will try 
to reconnect to the master or slave. 

I believe that for situations a) and b) Artemis can recover when the network is 
reconnected. The moment master and slave notice that they're active at the 
same time, they will check who has external (non-in-vm) connections. The server 
without external client connections will restart. Only the server with the 
clients has the up-to-date journal. 

Option c) is problematic as clients can connect to either master or slave, so 
in this case there is nothing Artemis can do. wdyt?

> Resolve split brain data after split brains scenarios.
> --
>
> Key: ARTEMIS-473
> URL: https://issues.apache.org/jira/browse/ARTEMIS-473
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.2.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> If a master-slave pair is configured using replicated journal and there are 
> no other servers in the cluster, then if the network between master and slave 
> is broken, the slave will activate. Depending on whether clients were 
> disconnected from master or not, there may or may not be a failover to the 
> slave. The problem happens the moment the network between master and slave is 
> restored: master and slave are active at the same time, which is the split 
> brain syndrome. Currently there is no recovery mechanism to resolve this 
> situation.
> Suggested improvement: If clients failed over to the slave then master will 
> restart itself so failback occurs (if configured). If clients did not fail 
> over and stayed connected to master then the backup will restart itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-473) Resolve split brain data after split brains scenarios.

2016-08-11 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416643#comment-15416643
 ] 

Miroslav Novak commented on ARTEMIS-473:


I've created a new jira for the description - ARTEMIS-679 - Activate most up to 
date server from master-slave(live-backup) pair. 

If split brain happens then there is not much Artemis can do about it. Still, 
it can recover from quite common cases. Basically three situations can happen 
when split brain occurs (= master and slave are active at the same time):

a) Clients do not lose connection to master and stay connected to master.
b) Clients lose connection to master and fail over to the backup. 
c) Clients lose connection to master and slave at the same time. They will try 
to reconnect to the master or slave. 

I believe that for situations a) and b) Artemis can recover when the network is 
reconnected. The moment master and slave notice that they're active at the 
same time, they will check who has external (non-in-vm) connections. The server 
without external client connections will restart. Only the server with the 
clients has the up-to-date journal. 

Option c) is problematic as clients can connect to either master or slave, so 
in this case there is nothing Artemis can do. wdyt?

> Resolve split brain data after split brains scenarios.
> --
>
> Key: ARTEMIS-473
> URL: https://issues.apache.org/jira/browse/ARTEMIS-473
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.2.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> If a master-slave pair is configured using replicated journal and there are 
> no other servers in the cluster, then if the network between master and slave 
> is broken, the slave will activate. Depending on whether clients were 
> disconnected from master or not, there may or may not be a failover to the 
> slave. The problem happens the moment the network between master and slave is 
> restored: master and slave are active at the same time, which is the split 
> brain syndrome. Currently there is no recovery mechanism to resolve this 
> situation.
> Suggested improvement: If clients failed over to the slave then master will 
> restart itself so failback occurs (if configured). If clients did not fail 
> over and stayed connected to master then the backup will restart itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-679) Activate most up to date server from master-slave(live-backup) pair

2016-08-11 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-679:
---
Priority: Critical  (was: Major)

> Activate most up to date server from master-slave(live-backup) pair
> ---
>
> Key: ARTEMIS-679
> URL: https://issues.apache.org/jira/browse/ARTEMIS-679
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> If there are 2 live/backup pairs with replicated journal in a colocated 
> topology Artemis1(L1/B2) <-> Artemis2(L2/B1), then there is no easy way to 
> start them if they're all shut down.
> The problem is that there is no way to start the servers with the most 
> up-to-date journal. Suppose the administrator shuts down the servers in 
> sequence: Artemis 1 and then Artemis 2. Then Artemis 2 has the most 
> up-to-date journals because backup B1 on server 2 activated.
> If the administrator then decides to start Artemis 2, live L2 activates and 
> backup B1 waits for live L1 in Artemis 1 to start. But once L1 starts, L1 
> replicates its own "old" journal to B1.
> So L1 started with a bad, old journal. I would suggest that L1 and B1 compare 
> their journals and figure out which one is more up-to-date. Then the server 
> with the more up-to-date journal activates.
> In the scenario described above it would be backup B1 that activates first. 
> Live L1 will synchronize its own journal from B1 and then failback happens. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-679) Activate most up to date server from master-slave(live-backup) pair

2016-08-11 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-679:
--

 Summary: Activate most up to date server from 
master-slave(live-backup) pair
 Key: ARTEMIS-679
 URL: https://issues.apache.org/jira/browse/ARTEMIS-679
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 1.3.0
Reporter: Miroslav Novak


If there are 2 live/backup pairs with replicated journal in a colocated 
topology Artemis1(L1/B2) <-> Artemis2(L2/B1), then there is no easy way to 
start them if they're all shut down.

The problem is that there is no way to start the servers with the most 
up-to-date journal. Suppose the administrator shuts down the servers in 
sequence: Artemis 1 and then Artemis 2. Then Artemis 2 has the most up-to-date 
journals because backup B1 on server 2 activated.
If the administrator then decides to start Artemis 2, live L2 activates and 
backup B1 waits for live L1 in Artemis 1 to start. But once L1 starts, L1 
replicates its own "old" journal to B1.

So L1 started with a bad, old journal. I would suggest that L1 and B1 compare 
their journals and figure out which one is more up-to-date. Then the server 
with the more up-to-date journal activates.

In the scenario described above it would be backup B1 that activates first. 
Live L1 will synchronize its own journal from B1 and then failback happens. 
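The proposed cold-start rule boils down to a comparison like this (a sketch with hypothetical names and a simple "journal version" number standing in for whatever up-to-dateness measure the broker would use):

```java
// Sketch of the activation rule proposed above (hypothetical names, not the
// real Artemis API): when a live/backup pair starts from cold, compare
// journal versions and activate whichever node holds the newer journal; the
// other node then synchronizes from it before failback.
public class ColdStartRule {
    /** Returns "live" or "backup": who should activate first. */
    public static String firstToActivate(long liveVersion, long backupVersion) {
        // A strictly newer backup journal (B1 in the example above) wins;
        // on a tie the live keeps its usual role.
        return backupVersion > liveVersion ? "backup" : "live";
    }
}
```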



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-473) Resolve split brain data after split brains scenarios.

2016-08-11 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-473:
---
Description: 
If a master-slave pair is configured using replicated journal and there are no 
other servers in the cluster, then if the network between master and slave is 
broken, the slave will activate. Depending on whether clients were disconnected 
from master or not, there may or may not be a failover to the slave. The 
problem happens the moment the network between master and slave is restored: 
master and slave are active at the same time, which is the split brain 
syndrome. Currently there is no recovery mechanism to resolve this situation.

Suggested improvement: If clients failed over to the slave then master will 
restart itself so failback occurs (if configured). If clients did not fail over 
and stayed connected to master then the backup will restart itself.

  was:
If there are 2 live/backup pairs with replicated journal in a colocated 
topology Artemis1(L1/B2) <-> Artemis2(L2/B1), then there is no easy way to 
start them if they're all shut down.

The problem is that there is no way to start the servers with the most 
up-to-date journal. Suppose the administrator shuts down the servers in 
sequence: Artemis 1 and then Artemis 2. Then Artemis 2 has the most up-to-date 
journals because backup B1 on server 2 activated.
If the administrator then decides to start Artemis 2, live L2 activates and 
backup B1 waits for live L1 in Artemis 1 to start. But once L1 starts, L1 
replicates its own "old" journal to B1.

So L1 started with a bad, old journal. I would suggest that L1 and B1 compare 
their journals and figure out which one is more up-to-date. Then the server 
with the more up-to-date journal activates.

In the scenario described above it would be backup B1 that activates first. 
Live L1 will synchronize its own journal from B1 and then failback happens.




> Resolve split brain data after split brains scenarios.
> --
>
> Key: ARTEMIS-473
> URL: https://issues.apache.org/jira/browse/ARTEMIS-473
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.2.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> If a master-slave pair is configured using replicated journal and there are 
> no other servers in the cluster, then if the network between master and slave 
> is broken, the slave will activate. Depending on whether clients were 
> disconnected from master or not, there may or may not be a failover to the 
> slave. The problem happens the moment the network between master and slave is 
> restored: master and slave are active at the same time, which is the split 
> brain syndrome. Currently there is no recovery mechanism to resolve this 
> situation.
> Suggested improvement: If clients failed over to the slave then master will 
> restart itself so failback occurs (if configured). If clients did not fail 
> over and stayed connected to master then the backup will restart itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-473) Resolve split brain data after split brains scenarios.

2016-08-11 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416620#comment-15416620
 ] 

Miroslav Novak commented on ARTEMIS-473:


Sorry, the title says something different than what is in the description. 
I'll change the description to match the title and create a new jira for the 
problem in the description.

> Resolve split brain data after split brains scenarios.
> --
>
> Key: ARTEMIS-473
> URL: https://issues.apache.org/jira/browse/ARTEMIS-473
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.2.0
>Reporter: Miroslav Novak
>Priority: Critical
>
> If there are 2 live/backup pairs with replicated journal in a colocated 
> topology Artemis1(L1/B2) <-> Artemis2(L2/B1), then there is no easy way to 
> start them if they're all shut down.
> The problem is that there is no way to start the servers with the most 
> up-to-date journal. Suppose the administrator shuts down the servers in 
> sequence: Artemis 1 and then Artemis 2. Then Artemis 2 has the most 
> up-to-date journals because backup B1 on server 2 activated.
> If the administrator then decides to start Artemis 2, live L2 activates and 
> backup B1 waits for live L1 in Artemis 1 to start. But once L1 starts, L1 
> replicates its own "old" journal to B1.
> So L1 started with a bad, old journal. I would suggest that L1 and B1 compare 
> their journals and figure out which one is more up-to-date. Then the server 
> with the more up-to-date journal activates.
> In the scenario described above it would be backup B1 that activates first. 
> Live L1 will synchronize its own journal from B1 and then failback happens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-665) Optimize master/slave replication after failback - slave does not have to resync journal from master after failback

2016-08-03 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-665:
--

 Summary: Optimize master/slave replication after failback - slave 
does not have to resync journal from master after failback 
 Key: ARTEMIS-665
 URL: https://issues.apache.org/jira/browse/ARTEMIS-665
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 1.3.0
Reporter: Miroslav Novak


If there is a master-slave pair with replicated journal and master is killed, 
then slave activates and all clients fail over to slave. 
If fail-back is enabled and master is started again, then master will copy the 
journal from slave before it activates, and failback happens. So far so good. 

The problem is that once master activates, slave restarts itself and starts to 
replicate the whole journal from master again, throwing away its original 
journal, which it just replicated to master. This is inefficient and wastes 
network resources. 

The improvement should be that slave will not throw away its "up-to-date" 
journal and, once master activates, will just continue to replicate from 
master. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-599) Add --f option to ignore locking of data folder when dumping data.

2016-06-30 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15356589#comment-15356589
 ] 

Miroslav Novak commented on ARTEMIS-599:


Cool, thanks!

> Add --f option to ignore locking of data folder when dumping data.
> --
>
> Key: ARTEMIS-599
> URL: https://issues.apache.org/jira/browse/ARTEMIS-599
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Miroslav Novak
>Assignee: clebert suconic
> Fix For: 1.4.0
>
>
> At this moment Artemis journal can be dumped by Artemis CLI command:
> {code}
> .../activemq-artemis/artemis-distribution/target/apache-artemis-1.3.0-SNAPSHOT-bin/apache-artemis-1.3.0-SNAPSHOT/bin/artemis
>  data print  --bindings bindings --journal journal --paging paging 
> --large-messages largemessages
> {code}
> The server must be in a stopped state at this moment. The goal of the feature 
> is to allow dumping the journal at runtime. It could be useful for 
> investigation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-600) Enterprise message grouping

2016-06-27 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-600:
--

 Summary: Enterprise message grouping
 Key: ARTEMIS-600
 URL: https://issues.apache.org/jira/browse/ARTEMIS-600
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 1.3.0
Reporter: Miroslav Novak
Priority: Critical


Message grouping in Artemis is fragile, as almost anything can break it at the 
moment. *Drawbacks in the current design:*
* Consumers must be connected before messages are sent to the cluster.
* If a producer sends a message to the cluster and there is no consumer in the 
cluster, then the message is stuck on this node. A consumer which later 
connects to another node in the cluster does not receive this message.
* If a server in the cluster is shut down, then all message grouping breaks 
and no other node in the cluster is able to receive messages (not even on 
other queues).
* There is an issue where the backup for a remote grouping handler does not 
take over its duties after failover.
* If a consumer is closed, then no other consumer is chosen.

*Suggested improvements:*
* The decision of which consumer to route a message to will not be made at 
send time if there is no consumer. 
* Consumers do not have to be connected when messages are sent to the cluster.
* Message grouping will allow a clean server shutdown without breaking message 
ordering/grouping. Connected consumers will be closed and another consumer in 
the cluster will be chosen.
* Further, if any consumer is closed, then another consumer will be chosen. 
* Allow configuring a dispatch delay to avoid the situation where the first 
connected consumer in the cluster gets assigned all message groups. The delay 
waits for other consumers to connect so message groups are distributed 
equally. (We can consider setting a minimum consumer number.)
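The suggested lazy-binding behaviour could look roughly like this (a sketch with hypothetical names, not the real Artemis grouping handler):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the suggested grouping behaviour (hypothetical names): a message
// group is bound to a consumer lazily, only once a consumer exists, and is
// rebound to another consumer when the bound one goes away.
public class GroupAssigner {
    private final Map<String, String> groupToConsumer = new HashMap<>();

    /**
     * Returns the consumer that should receive the group's next message,
     * or null if no consumer is connected (the message waits; it is not stuck).
     */
    public String route(String groupId, List<String> liveConsumers) {
        String bound = groupToConsumer.get(groupId);
        if (bound != null && liveConsumers.contains(bound)) {
            return bound;                 // keep ordering within the group
        }
        if (liveConsumers.isEmpty()) {
            return null;                  // bind later, when a consumer connects
        }
        // Rebind: pick a consumer deterministically, for illustration only.
        String chosen = liveConsumers.get(
            Math.floorMod(groupId.hashCode(), liveConsumers.size()));
        groupToConsumer.put(groupId, chosen);
        return chosen;
    }
}
```

A dispatch delay would simply postpone the first `route` call per group until enough consumers have connected.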



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-546) Allow to disable client-side load-balancing

2016-05-31 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307591#comment-15307591
 ] 

Miroslav Novak commented on ARTEMIS-546:


Looking at the code in {{ServerLocatorImpl.selectConnector()}}, I'm not sure 
whether the use of {{loadBalancingPolicy}} when {{usedTopology}} is {{null}} is 
valid, as we should iterate over the initial connectors every time. Currently 
we do iterate over them, but only because the round-robin policy is used by 
default.

{code}
private TransportConfiguration selectConnector() {
   Pair<TransportConfiguration, TransportConfiguration>[] usedTopology;

   synchronized (topologyArrayGuard) {
      usedTopology = topologyArray;
   }

   synchronized (this) {
      // if the topologyArray is null, we will use the initialConnectors
      if (usedTopology != null) {
         if (logger.isTraceEnabled()) {
            logger.trace("Selecting connector from toplogy.");
         }
         int pos = loadBalancingPolicy.select(usedTopology.length);
         Pair<TransportConfiguration, TransportConfiguration> pair = usedTopology[pos];

         return pair.getA();
      }
      else {
         // Get from initialconnectors
         if (logger.isTraceEnabled()) {
            logger.trace("Selecting connector from initial connectors.");
         }

         int pos = loadBalancingPolicy.select(initialConnectors.length);

         return initialConnectors[pos];
      }
   }
}
{code}

> Allow to disable client-side load-balancing
> ---
>
> Key: ARTEMIS-546
> URL: https://issues.apache.org/jira/browse/ARTEMIS-546
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Miroslav Novak
>
> In case a user wants to define client-side load balancing on their own and, 
> for each client, specify the connector to the node in the cluster to which it 
> must connect, there is no way to do it with the current load-balancing 
> policies.
> The {{ConnectionLoadBalancingPolicy}} interface does not allow saying which 
> node to connect to based on the connector information that was used to 
> configure the connection factory. 
> The idea is to allow disabling the load-balancing policy. When the 
> load-balancing policy is disabled, the client will iterate through the 
> initial connectors (in the sequence they were configured) to create a 
> connection to the remote broker. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-546) Allow to disable client-side load-balancing

2016-05-31 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-546:
--

 Summary: Allow to disable client-side load-balancing
 Key: ARTEMIS-546
 URL: https://issues.apache.org/jira/browse/ARTEMIS-546
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 1.3.0
Reporter: Miroslav Novak


In case a user wants to define client-side load balancing on their own and, 
for each client, specify the connector to the node in the cluster to which it 
must connect, there is no way to do it with the current load-balancing 
policies.

The {{ConnectionLoadBalancingPolicy}} interface does not allow saying which 
node to connect to based on the connector information that was used to 
configure the connection factory. 

The idea is to allow disabling the load-balancing policy. When the 
load-balancing policy is disabled, the client will iterate through the initial 
connectors (in the sequence they were configured) to create a connection to 
the remote broker. 
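The disabled-policy behaviour described above could look roughly like this (a minimal sketch with hypothetical names, not Artemis's actual {{ConnectionLoadBalancingPolicy}} API):

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the "disabled load balancing" idea (hypothetical names): instead
// of a round-robin pick, try the initial connectors strictly in the order
// they were configured and use the first one that accepts a connection.
public class FirstAvailablePolicy {
    /** Returns the first connector that canConnect accepts, or null if none. */
    public static String select(List<String> initialConnectors,
                                Predicate<String> canConnect) {
        for (String connector : initialConnectors) {
            if (canConnect.test(connector)) {
                return connector;   // deterministic: configuration order wins
            }
        }
        return null;
    }
}
```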



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-541) Optimize live/backup replication after failback

2016-05-26 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302193#comment-15302193
 ] 

Miroslav Novak commented on ARTEMIS-541:


I understand the reasons. The idea was to add something like an index to the 
journal. It would be sequential and would just grow; every append to the 
journal would have such a sequential number. Then when the live starts, it 
would send its last index to the backup, and the backup would send everything 
greater than that index.
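The sequence-index idea can be sketched as follows (hypothetical names; the real Artemis journal does not expose such an API):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the sequence-index idea (hypothetical names): every journal
// append carries a monotonically increasing index; on reconnect the live
// sends its last index and the backup returns only the records appended
// after it, instead of replicating the whole journal.
public class JournalDiff {
    record Record(long index, String payload) {}

    /** Records the backup must send to a live whose journal ends at lastIndex. */
    static List<Record> recordsAfter(List<Record> backupJournal, long lastIndex) {
        return backupJournal.stream()
                .filter(r -> r.index() > lastIndex)
                .collect(Collectors.toList());
    }
}
```

This is exactly the bandwidth saving ARTEMIS-541 asks for: only the tail of the journal crosses the network.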

> Optimize live/backup replication after failback
> ---
>
> Key: ARTEMIS-541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Miroslav Novak
>
> Currently, if there is a master/slave pair configured using replicated 
> journal, then after each failback, when master is starting, it copies the 
> whole journal directory from slave.
> This seems inefficient, as master might already have a significant part of 
> the backup's journal from before it failed. 
> If only the differences between the journals were transferred, it would be 
> efficient, especially in the case when failback is completed and master 
> starts to synchronize the whole journal back to slave: it copies the whole 
> journal to backup again even though it is almost the same. This would greatly 
> speed up synchronization of the slave and save network bandwidth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-541) Optimize live/backup replication after failback

2016-05-26 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-541:
---
Issue Type: New Feature  (was: Bug)

> Optimize live/backup replication after failback
> ---
>
> Key: ARTEMIS-541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Miroslav Novak
>
> Currently, if there is a master/slave pair configured using replicated 
> journal, then after each failback, when master is starting, it copies the 
> whole journal directory from slave.
> This seems inefficient, as master might already have a significant part of 
> the backup's journal from before it failed. 
> If only the differences between the journals were transferred, it would be 
> efficient, especially in the case when failback is completed and master 
> starts to synchronize the whole journal back to slave: it copies the whole 
> journal to backup again even though it is almost the same. This would greatly 
> speed up synchronization of the slave and save network bandwidth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-541) Optimize live/backup replication after failback

2016-05-26 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-541:
--

 Summary: Optimize live/backup replication after failback
 Key: ARTEMIS-541
 URL: https://issues.apache.org/jira/browse/ARTEMIS-541
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.3.0
Reporter: Miroslav Novak


Currently, if there is a master/slave pair configured using replicated 
journal, then after each failback, when master is starting, it copies the 
whole journal directory from slave.
This seems inefficient, as master might already have a significant part of the 
backup's journal from before it failed. 

If only the differences between the journals were transferred, it would be 
efficient, especially in the case when failback is completed and master starts 
to synchronize the whole journal back to slave: it copies the whole journal to 
backup again even though it is almost the same. This would greatly speed up 
synchronization of the slave and save network bandwidth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-517) Provide public API method to check whether backup is synchronized with live server

2016-05-09 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276039#comment-15276039
 ] 

Miroslav Novak commented on ARTEMIS-517:


[~jbertram] Thanks Justin for update! We'll try it and see if it's good for our 
case.

> Provide public API method to check whether backup is synchronized with live 
> server 
> ---
>
> Key: ARTEMIS-517
> URL: https://issues.apache.org/jira/browse/ARTEMIS-517
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.2.0
>Reporter: Miroslav Novak
>Assignee: Justin Bertram
>
> If HA is configured with replicated journal, then it takes some time for the 
> backup to synchronize with the live server. Once backup is in sync with live, 
> the following information appears in server.log:
> {code}
> 13:20:00,739 INFO  [org.apache.activemq.artemis.core.server] (Thread-3 
> (ActiveMQ-client-netty-threads-457000966)) AMQ221024: Backup server 
> ActiveMQServerImpl::serverUUID=bc015b34-fd73-11e5-80ca-1b35f669abb8 is 
> synchronized with live-server.
> 13:20:01,500 INFO  [org.apache.activemq.artemis.core.server] (Thread-2 
> (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@41f992ab-83559664))
>  AMQ221031: backup announced
> {code}
> Reading server logs to see whether the backup is in sync is neither 
> convenient nor user-friendly. 
> We should provide a public API to check the state of synchronization. It 
> should be added to the {{Activation}} interface. 
> This method should be implemented for {{SharedNothingBackupActivation}} and 
> also {{SharedNothingLiveActivation}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-517) Provide public API method to check whether backup is synchronized with live server

2016-05-05 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-517:
--

 Summary: Provide public API method to check whether backup is 
synchronized with live server 
 Key: ARTEMIS-517
 URL: https://issues.apache.org/jira/browse/ARTEMIS-517
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 1.2.0
Reporter: Miroslav Novak


If HA is configured with replicated journal, then it takes some time for the 
backup to synchronize with the live server. Once backup is in sync with live, 
the following information appears in server.log:
{code}
13:20:00,739 INFO  [org.apache.activemq.artemis.core.server] (Thread-3 
(ActiveMQ-client-netty-threads-457000966)) AMQ221024: Backup server 
ActiveMQServerImpl::serverUUID=bc015b34-fd73-11e5-80ca-1b35f669abb8 is 
synchronized with live-server.
13:20:01,500 INFO  [org.apache.activemq.artemis.core.server] (Thread-2 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@41f992ab-83559664))
 AMQ221031: backup announced
{code}

Reading server logs to see whether the backup is in sync is neither convenient 
nor user-friendly. 

We should provide a public API to check the state of synchronization. It 
should be added to the {{Activation}} interface. 
This method should be implemented for {{SharedNothingBackupActivation}} and 
also {{SharedNothingLiveActivation}}.
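The requested API could have roughly this shape (a hypothetical sketch, not the actual {{Activation}} interface from the Artemis codebase):

```java
// Sketch of the proposed API shape (hypothetical interface and method name):
// expose the replication state programmatically instead of forcing users to
// grep server.log for the AMQ221024 "synchronized with live-server" message.
public interface ActivationSketch {
    /** True once the backup has fully synchronized with the live server. */
    boolean isReplicaSync();

    /** Toy factory for a backup activation with a fixed sync state. */
    static ActivationSketch backup(boolean synced) {
        return () -> synced;
    }
}
```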



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-516) Rebalancing of outbound connections if cluster topology changes

2016-05-05 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-516:
--

 Summary: Rebalancing of outbound connections if cluster topology 
changes
 Key: ARTEMIS-516
 URL: https://issues.apache.org/jira/browse/ARTEMIS-516
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 1.2.0
Reporter: Miroslav Novak




The Artemis resource adapter is capable of re-balancing inbound connections to 
all nodes at run-time if the underlying Artemis cluster topology changes.

The purpose of this feature request is to provide this behavior also for 
outbound connections. At the moment outbound connections are load-balanced 
across all nodes in the underlying cluster at creation time, but once they're 
created and added to the pool, those connections are never reconnected to 
another node if the cluster topology changes. To rebalance those connections 
the whole server must be restarted so connections are created and 
load-balanced again.

If outbound connection rebalancing is enabled and the remote Artemis server in 
the cluster has a backup, then if this live crashes or is shut down, the 
outbound connection should fail over to the backup. This should take 
precedence over rebalancing to another live in the cluster.

In case there is just one server (live) in the remote cluster which has a 
backup, then if this live is shut down or crashes, the outbound connection 
will fail over to the backup.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-473) Activate server with most up-to-date journal from live/backup pair

2016-04-07 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-473:
--

 Summary: Activate server with most up-to-date journal from 
live/backup pair
 Key: ARTEMIS-473
 URL: https://issues.apache.org/jira/browse/ARTEMIS-473
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 1.2.0
Reporter: Miroslav Novak
Priority: Critical


If there are 2 live/backup pairs with replicated journal in a colocated 
topology Artemis1(L1/B2) <-> Artemis2(L2/B1), then there is no easy way to 
start them if they're all shut down.

The problem is that there is no way to start the servers with the most 
up-to-date journal. Suppose the administrator shuts down the servers in 
sequence: Artemis 1 and then Artemis 2. Then Artemis 2 has the most up-to-date 
journals because backup B1 on server 2 activated.
If the administrator then decides to start Artemis 2, live L2 activates and 
backup B1 waits for live L1 in Artemis 1 to start. But once L1 starts, L1 
replicates its own "old" journal to B1.

So L1 started with a bad, old journal. I would suggest that L1 and B1 compare 
their journals and figure out which one is more up-to-date. Then the server 
with the more up-to-date journal activates.

In the scenario described above it would be backup B1 that activates first. 
Live L1 will synchronize its own journal from B1 and then failback happens.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (ARTEMIS-359) [TestSuite] IBM JDK does not allow to instrument classes

2016-01-25 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak resolved ARTEMIS-359.

Resolution: Fixed

> [TestSuite] IBM JDK does not allow to instrument classes
> 
>
> Key: ARTEMIS-359
> URL: https://issues.apache.org/jira/browse/ARTEMIS-359
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Miroslav Novak
>
> IBM JDK 6/7/8 does not allow the byteman agent to modify classes. For 
> example, "extra-tests" will fail with the following exception:
> {code}
> Running 
> org.apache.activemq.artemis.tests.extras.byteman.ActiveMQMessageHandlerTest
> byteman jar is 
> /home/mnovak/.m2/repository/org/jboss/byteman/byteman/2.2.0/byteman-2.2.0.jar
> com.ibm.tools.attach.AttachNotSupportedException: acknowledgement timeout 
> from 21654 on port 42521
>   at 
> com.ibm.tools.attach.javaSE.VirtualMachineImpl.tryAttachTarget(VirtualMachineImpl.java:401)
>   at 
> com.ibm.tools.attach.javaSE.VirtualMachineImpl.attachTarget(VirtualMachineImpl.java:94)
>   at 
> com.ibm.tools.attach.javaSE.AttachProviderImpl.attachVirtualMachine(AttachProviderImpl.java:37)
>   at 
> ibm.tools.attach.J9AttachProvider.attachVirtualMachine(J9AttachProvider.java:55)
>   at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:231)
>   at org.jboss.byteman.agent.install.Install.attach(Install.java:374)
>   at org.jboss.byteman.agent.install.Install.install(Install.java:113)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitConfigState.loadAgent(BMUnitConfigState.java:340)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitConfigState.pushConfigurationState(BMUnitConfigState.java:472)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:98)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.net.SocketTimeoutException: Accept timed out
>   at 
> java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:445)
>   at java.net.ServerSocket.implAccept(ServerSocket.java:620)
>   at java.net.ServerSocket.accept(ServerSocket.java:579)
>   at 
> com.ibm.tools.attach.javaSE.VirtualMachineImpl.tryAttachTarget(VirtualMachineImpl.java:396)
>   ... 17 more
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 291.094 sec 
> <<< FAILURE! - in 
> org.apache.activemq.artemis.tests.extras.byteman.ActiveMQMessageHandlerTest
> org.apache.activemq.artemis.tests.extras.byteman.ActiveMQMessageHandlerTest(org.apache.activemq.artemis.tests.extras.byteman.ActiveMQMessageHandlerTest)
>   Time elapsed: 291.094 sec  <<< ERROR!
> com.sun.tools.attach.AttachNotSupportedException: acknowledgement timeout 
> from 21654 on port 42521
>   at 
> ibm.tools.attach.J9AttachProvider.attachVirtualMachine(J9AttachProvider.java:60)
>   at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:231)
>   at org.jboss.byteman.agent.install.Install.attach(Install.java:374)
>   at org.jboss.byteman.agent.install.Install.install(Install.java:113)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitConfigState.loadAgent(BMUnitConfigState.java:340)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitConfigState.pushConfigurationState(BMUnitConfigState.java:472)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:98)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> 

[jira] [Created] (ARTEMIS-359) [TestSuite] IBM JDK does not allow to instrument classes

2016-01-25 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-359:
--

 Summary: [TestSuite] IBM JDK does not allow to instrument classes
 Key: ARTEMIS-359
 URL: https://issues.apache.org/jira/browse/ARTEMIS-359
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Miroslav Novak


IBM JDK 6/7/8 does not allow the byteman agent to modify classes. For example 
"extra-tests" will fail with the following exception:
{code}
Running 
org.apache.activemq.artemis.tests.extras.byteman.ActiveMQMessageHandlerTest
byteman jar is 
/home/mnovak/.m2/repository/org/jboss/byteman/byteman/2.2.0/byteman-2.2.0.jar
com.ibm.tools.attach.AttachNotSupportedException: acknowledgement timeout from 
21654 on port 42521
at 
com.ibm.tools.attach.javaSE.VirtualMachineImpl.tryAttachTarget(VirtualMachineImpl.java:401)
at 
com.ibm.tools.attach.javaSE.VirtualMachineImpl.attachTarget(VirtualMachineImpl.java:94)
at 
com.ibm.tools.attach.javaSE.AttachProviderImpl.attachVirtualMachine(AttachProviderImpl.java:37)
at 
ibm.tools.attach.J9AttachProvider.attachVirtualMachine(J9AttachProvider.java:55)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:231)
at org.jboss.byteman.agent.install.Install.attach(Install.java:374)
at org.jboss.byteman.agent.install.Install.install(Install.java:113)
at 
org.jboss.byteman.contrib.bmunit.BMUnitConfigState.loadAgent(BMUnitConfigState.java:340)
at 
org.jboss.byteman.contrib.bmunit.BMUnitConfigState.pushConfigurationState(BMUnitConfigState.java:472)
at 
org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:98)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.net.SocketTimeoutException: Accept timed out
at 
java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:445)
at java.net.ServerSocket.implAccept(ServerSocket.java:620)
at java.net.ServerSocket.accept(ServerSocket.java:579)
at 
com.ibm.tools.attach.javaSE.VirtualMachineImpl.tryAttachTarget(VirtualMachineImpl.java:396)
... 17 more
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 291.094 sec <<< 
FAILURE! - in 
org.apache.activemq.artemis.tests.extras.byteman.ActiveMQMessageHandlerTest
org.apache.activemq.artemis.tests.extras.byteman.ActiveMQMessageHandlerTest(org.apache.activemq.artemis.tests.extras.byteman.ActiveMQMessageHandlerTest)
  Time elapsed: 291.094 sec  <<< ERROR!
com.sun.tools.attach.AttachNotSupportedException: acknowledgement timeout from 
21654 on port 42521
at 
ibm.tools.attach.J9AttachProvider.attachVirtualMachine(J9AttachProvider.java:60)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:231)
at org.jboss.byteman.agent.install.Install.attach(Install.java:374)
at org.jboss.byteman.agent.install.Install.install(Install.java:113)
at 
org.jboss.byteman.contrib.bmunit.BMUnitConfigState.loadAgent(BMUnitConfigState.java:340)
at 
org.jboss.byteman.contrib.bmunit.BMUnitConfigState.pushConfigurationState(BMUnitConfigState.java:472)
at 
org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:98)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-242) IllegalStateException thrown during producer.send()

2015-10-05 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-242:
--

 Summary: IllegalStateException thrown during producer.send()
 Key: ARTEMIS-242
 URL: https://issues.apache.org/jira/browse/ARTEMIS-242
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Miroslav Novak


It sometimes happens during failback that a JMS producer gets 
java.lang.IllegalStateException from producer.send(message):
{code}
java.lang.IllegalStateException: Cannot send a packet while channel is doing 
failover
at 
org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:242)
at 
org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.send(ChannelImpl.java:201)
at 
org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.sendInitialChunkOnLargeMessage(ActiveMQSessionContext.java:358)
at 
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.sendInitialLargeMessageHeader(ClientProducerImpl.java:339)
at 
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.largeMessageSendStreamed(ClientProducerImpl.java:518)
at 
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.largeMessageSendBuffered(ClientProducerImpl.java:414)
at 
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.largeMessageSend(ClientProducerImpl.java:333)
at 
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:263)
at 
org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:124)
at 
org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.doSendx(ActiveMQMessageProducer.java:476)
at 
org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:172)
at 
org.jboss.qa.hornetq.apps.clients.ProducerClientAck.sendMessage(ProducerClientAck.java:174)
at 
org.jboss.qa.hornetq.apps.clients.ProducerClientAck.run(ProducerClientAck.java:116)
{code}

This happened with failback-delay set to 10s. There were 2 servers configured 
in a dedicated HA topology with a shared store.

IllegalStateException should never be thrown from producer.send().
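Until the broker guarantees exception-free sends across failover, the application has to retry the send itself. Below is a minimal, self-contained retry sketch; the Sender interface and the simulated flaky producer are illustrative stand-ins, not the Artemis client API.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RetrySendDemo {
    // Stand-in for the real JMS producer (hypothetical, for illustration only).
    public interface Sender { void send(String msg); }

    // Retry a bounded number of times when the channel reports it is mid-failover.
    public static void sendWithRetry(Sender sender, String msg, int maxAttempts)
            throws InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                sender.send(msg);
                return;
            } catch (IllegalStateException e) {
                if (attempt >= maxAttempts) throw e;
                Thread.sleep(100); // back off while failover completes
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger calls = new AtomicInteger();
        // Simulated producer: fails twice with the failover exception, then succeeds.
        Sender flaky = msg -> {
            if (calls.incrementAndGet() <= 2)
                throw new IllegalStateException(
                        "Cannot send a packet while channel is doing failover");
        };
        sendWithRetry(flaky, "hello", 5);
        System.out.println("sent after " + calls.get() + " attempts");
    }
}
```

The sketch assumes the failure surfaces as IllegalStateException, as in the stack trace above; production code would also have to consider whether the original send was actually delivered before retrying.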



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-160) After failback backup prints warnings to log

2015-10-02 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14940898#comment-14940898
 ] 

Miroslav Novak commented on ARTEMIS-160:


Any update on this? This is not a blocker, but it is quite annoying as it 
gives the impression that something serious went wrong.

> After failback backup prints warnings to log
> 
>
> Key: ARTEMIS-160
> URL: https://issues.apache.org/jira/browse/ARTEMIS-160
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Jeff Mesnil
> Fix For: 1.2.0
>
>
> We integrate Artemis in our app server.
> When the artemis server is stopped, we want to unregister any JNDI bindings 
> for the JMS resources.
> For failback, the only way to detect that the artemis server is stopped is to 
> use the ActivateCallback callback on Artemis *core* server. There is no way 
> to be notified when the JMS server (wrapping the core server) is stopped.
> This leads to a window where we remove JNDI bindings from the JMS server 
> before it is deactivated, but the actual operation is performed after it was 
> deactivated, and the server prints WARNING logs:
> {noformat}
> 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
> Thread Pool – 4) WFLYMSGAMQ0004: Failed to destroy queue: ExpiryQueue: 
> java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
> yet active
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
> Thread Pool – 68) WFLYMSGAMQ0004: Failed to destroy queue: AsyncQueue: 
> java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
> yet active
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> 15:34:59,123 WARN [org.wildfly.extension.messaging-activemq] (ServerService 
> Thread Pool – 9) WFLYMSGAMQ0004: Failed to destroy queue: DLQ: 
> java.lang.IllegalStateException: Cannot access JMS Server, core server is not 
> yet active
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.checkInitialised(JMSServerManagerImpl.java:1640)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.access$1100(JMSServerManagerImpl.java:101)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl$3.runException(JMSServerManagerImpl.java:752)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.runAfterActive(JMSServerManagerImpl.java:1847)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.removeQueueFromBindingRegistry(JMSServerManagerImpl.java:741)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSQueueService$2.run(JMSQueueService.java:101)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at 

[jira] [Created] (ARTEMIS-185) Clients should not throw HornetQException/JMSException to application code during failover

2015-08-03 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-185:
--

 Summary: Clients should not throw HornetQException/JMSException to 
application code during failover
 Key: ARTEMIS-185
 URL: https://issues.apache.org/jira/browse/ARTEMIS-185
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Miroslav Novak
Priority: Critical




Currently a standalone JMS client throws an exception (HornetQException or 
JMSException) to client application code during failover/failback and leaves 
it to the application programmer to handle the exception and retry the last 
operation.

This makes client code complex, and the developer must spend additional effort 
to handle those edge cases. This is especially complicated for a consumer with 
a transacted session or client acknowledge mode.

The goal of this RFE is to provide exception-free behaviour for standalone JMS 
clients.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-179) Artemis is logging warnings during reconnecting cluster connection on server shut down

2015-07-29 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-179:
--

 Summary: Artemis is logging warnings during reconnecting cluster 
connection on server shut down 
 Key: ARTEMIS-179
 URL: https://issues.apache.org/jira/browse/ARTEMIS-179
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.0.0
Reporter: Miroslav Novak


Artemis tries to reconnect the cluster connection if a node in the cluster is 
cleanly shut down. The problem is that server.log is filled with:
{code}
13:07:09,855 WARN  [org.apache.activemq.artemis.core.server] (Thread-0 
(ActiveMQ-server-ActiveMQServerImpl::serverUUID=60b3fc86-35e1-11e5-aa4b-0324779e0c1f-753751730))
 AMQ222109: NodeID=5638e157-35e1-11e5-82f6-752f620dec6f is not available on the 
topology. Retrying the connection to that node now
{code}

which makes the logs completely unreadable. This is a change in behaviour 
against EAP 6.4.0.

Attaching: standalone-full-ha.xml

server.log:
{code}
13:06:37,059 INFO  [org.apache.activemq.artemis.core.server] (Thread-17 
(ActiveMQ-server-ActiveMQServerImpl::serverUUID=60b3fc86-35e1-11e5-aa4b-0324779e0c1f-753751730))
 AMQ221027: Bridge ClusterConnectionBridge@7eb486c7 
[name=sf.my-cluster.5638e157-35e1-11e5-82f6-752f620dec6f, 
queue=QueueImpl[name=sf.my-cluster.5638e157-35e1-11e5-82f6-752f620dec6f, 
postOffice=PostOfficeImpl 
[server=ActiveMQServerImpl::serverUUID=60b3fc86-35e1-11e5-aa4b-0324779e0c1f]]@7f77c8d9
 targetConnector=ServerLocatorImpl 
(identity=(Cluster-connection-bridge::ClusterConnectionBridge@7eb486c7 
[name=sf.my-cluster.5638e157-35e1-11e5-82f6-752f620dec6f, 
queue=QueueImpl[name=sf.my-cluster.5638e157-35e1-11e5-82f6-752f620dec6f, 
postOffice=PostOfficeImpl 
[server=ActiveMQServerImpl::serverUUID=60b3fc86-35e1-11e5-aa4b-0324779e0c1f]]@7f77c8d9
 targetConnector=ServerLocatorImpl 
[initialConnectors=[TransportConfiguration(name=http-connector, 
factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
 
?httpUpgradeEnabled=true&port=9080&host=localhost&http-upgrade-endpoint=http-acceptor],
 
discoveryGroupConfiguration=null]]::ClusterConnectionImpl@841790227[nodeUUID=60b3fc86-35e1-11e5-aa4b-0324779e0c1f,
 connector=TransportConfiguration(name=http-connector, 
factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
 
?host=localhost&http-upgrade-endpoint=http-acceptor&httpUpgradeEnabled=true&port=8080,
 address=jms, 
server=ActiveMQServerImpl::serverUUID=60b3fc86-35e1-11e5-aa4b-0324779e0c1f])) 
[initialConnectors=[TransportConfiguration(name=http-connector, 
factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
 
?httpUpgradeEnabled=true&port=9080&host=localhost&http-upgrade-endpoint=http-acceptor],
 discoveryGroupConfiguration=null]] is connected
13:07:09,305 WARN  [org.apache.activemq.artemis.core.client] (Thread-0 
(ActiveMQ-client-global-threads-166185675)) AMQ212037: Connection failure has 
been detected: AMQ119015: The connection was disconnected because of server 
shutdown [code=DISCONNECTED]
13:07:09,311 WARN  [org.apache.activemq.artemis.core.client] (Thread-2 
(ActiveMQ-client-global-threads-166185675)) AMQ212037: Connection failure has 
been detected: AMQ119015: The connection was disconnected because of server 
shutdown [code=DISCONNECTED]
13:07:09,310 WARN  [org.apache.activemq.artemis.core.client] (Thread-1 
(ActiveMQ-client-global-threads-166185675)) AMQ212037: Connection failure has 
been detected: AMQ119015: The connection was disconnected because of server 
shutdown [code=DISCONNECTED]
13:07:09,325 WARN  [org.apache.activemq.artemis.core.server] (Thread-2 
(ActiveMQ-client-global-threads-166185675)) AMQ222095: Connection failed with 
failedOver=false: ActiveMQDisconnectedException[errorType=DISCONNECTED 
message=AMQ119015: The connection was disconnected because of server shutdown]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$CloseRunnable.run(ClientSessionFactoryImpl.java:1183)
at 
org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:105)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

13:07:09,354 WARN  [org.apache.activemq.artemis.core.server] (Thread-2 
(ActiveMQ-client-global-threads-166185675)) AMQ222095: Connection failed with 
failedOver=false: ActiveMQDisconnectedException[errorType=DISCONNECTED 
message=AMQ119015: The connection was disconnected because of server shutdown]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$CloseRunnable.run(ClientSessionFactoryImpl.java:1183)
at 
org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:105)

[jira] [Commented] (ARTEMIS-157) Connection factory ignores HA property when serialized to uri

2015-07-21 Thread Miroslav Novak (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14634815#comment-14634815
 ] 

Miroslav Novak commented on ARTEMIS-157:


Increasing priority to blocker as this prevents clients from failing over with 
a connection factory looked up from the server.

 Connection factory ignores HA property when serialized to uri 
 --

 Key: ARTEMIS-157
 URL: https://issues.apache.org/jira/browse/ARTEMIS-157
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.0.0
Reporter: Miroslav Novak
Priority: Blocker

 The connection factory's HA attribute is ignored when converting to a URI. As 
 a consequence, a standalone JMS client is not able to fail over from live to 
 backup because it sets ha=false by default.
 The problem seems to be in method URISchema.getData:234:
 {code}
public static String getData(List<String> ignored, Object... beans) throws Exception
{
   StringBuilder sb = new StringBuilder();
   synchronized (beanUtils)
   {
      for (Object bean : beans)
      {
         if (bean != null)
         {
            PropertyDescriptor[] descriptors = beanUtils.getPropertyUtils().getPropertyDescriptors(bean);
            for (PropertyDescriptor descriptor : descriptors)
            {
               if (descriptor.getReadMethod() != null && descriptor.getWriteMethod() != null && isWriteable(descriptor, ignored))
               {
                  String value = beanUtils.getProperty(bean, descriptor.getName());
                  if (value != null)
                  {
                     sb.append("&").append(descriptor.getName()).append("=").append(value);
                  }
               }
            }
         }
      }
   }
   return sb.toString();
}
 {code}
 The HA attribute is ignored because the descriptor.getWriteMethod() != null 
 check in the if statement evaluates to false, which means no setHA() method 
 is found.
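The skipping behaviour is easy to reproduce with plain java.beans introspection: a bean whose boolean property exposes only a read method (isHA() with no matching setHA(boolean)) yields a PropertyDescriptor with a null write method, so a loop like getData() above drops it. The Factory bean below is a hypothetical stand-in, not the actual connection factory class.

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class HaIntrospectionDemo {
    // Hypothetical bean: "HA" is readable but has no setHA(boolean) setter,
    // while "host" has both a getter and a setter.
    public static class Factory {
        private boolean ha = true;
        private String host = "localhost";
        public boolean isHA() { return ha; }               // read method only
        public String getHost() { return host; }
        public void setHost(String host) { this.host = host; }
    }

    // Returns true only if the named property has a write method,
    // mirroring the descriptor.getWriteMethod() != null check.
    public static boolean hasWriteMethod(String property) throws Exception {
        for (PropertyDescriptor d :
                Introspector.getBeanInfo(Factory.class).getPropertyDescriptors()) {
            if (d.getName().equalsIgnoreCase(property)) {
                return d.getWriteMethod() != null;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // "host" survives serialization, "HA" is silently dropped.
        System.out.println("host writeable: " + hasWriteMethod("host"));
        System.out.println("HA writeable:   " + hasWriteMethod("HA"));
    }
}
```

This suggests why only the HA flag goes missing from the URI while symmetric getter/setter properties serialize fine.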



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-157) Connection factory ignores HA property when serialized to uri

2015-07-21 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-157:
---
Priority: Blocker  (was: Critical)

 Connection factory ignores HA property when serialized to uri 
 --

 Key: ARTEMIS-157
 URL: https://issues.apache.org/jira/browse/ARTEMIS-157
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.0.0
Reporter: Miroslav Novak
Priority: Blocker

 The connection factory's HA attribute is ignored when converting to a URI. As 
 a consequence, a standalone JMS client is not able to fail over from live to 
 backup because it sets ha=false by default.
 The problem seems to be in method URISchema.getData:234:
 {code}
public static String getData(List<String> ignored, Object... beans) throws Exception
{
   StringBuilder sb = new StringBuilder();
   synchronized (beanUtils)
   {
      for (Object bean : beans)
      {
         if (bean != null)
         {
            PropertyDescriptor[] descriptors = beanUtils.getPropertyUtils().getPropertyDescriptors(bean);
            for (PropertyDescriptor descriptor : descriptors)
            {
               if (descriptor.getReadMethod() != null && descriptor.getWriteMethod() != null && isWriteable(descriptor, ignored))
               {
                  String value = beanUtils.getProperty(bean, descriptor.getName());
                  if (value != null)
                  {
                     sb.append("&").append(descriptor.getName()).append("=").append(value);
                  }
               }
            }
         }
      }
   }
   return sb.toString();
}
 {code}
 The HA attribute is ignored because the descriptor.getWriteMethod() != null 
 check in the if statement evaluates to false, which means no setHA() method 
 is found.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-157) Connection factory ignores HA property when serialized to uri

2015-07-16 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-157:
---
Priority: Critical  (was: Major)

 Connection factory ignores HA property when serialized to uri 
 --

 Key: ARTEMIS-157
 URL: https://issues.apache.org/jira/browse/ARTEMIS-157
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.0.0
Reporter: Miroslav Novak
Priority: Critical

 The connection factory's HA attribute is ignored when converting to a URI. As 
 a consequence, a standalone JMS client is not able to fail over from live to 
 backup because it sets ha=false by default.
 The problem seems to be in method URISchema.getData:234:
 {code}
public static String getData(List<String> ignored, Object... beans) throws Exception
{
   StringBuilder sb = new StringBuilder();
   synchronized (beanUtils)
   {
      for (Object bean : beans)
      {
         if (bean != null)
         {
            PropertyDescriptor[] descriptors = beanUtils.getPropertyUtils().getPropertyDescriptors(bean);
            for (PropertyDescriptor descriptor : descriptors)
            {
               if (descriptor.getReadMethod() != null && descriptor.getWriteMethod() != null && isWriteable(descriptor, ignored))
               {
                  String value = beanUtils.getProperty(bean, descriptor.getName());
                  if (value != null)
                  {
                     sb.append("&").append(descriptor.getName()).append("=").append(value);
                  }
               }
            }
         }
      }
   }
   return sb.toString();
}
 {code}
 The HA attribute is ignored because the descriptor.getWriteMethod() != null 
 check in the if statement evaluates to false, which means no setHA() method 
 is found.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-125) Standalone backup server with shared store does not start if scaledown policy is defined but not enabled

2015-05-28 Thread Miroslav Novak (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miroslav Novak updated ARTEMIS-125:
---
Description: 
If a standalone shared-store backup is configured with a disabled scale-down 
policy, then after failover the backup stops itself.

Debugging showed that server.stop() is called in 
org.apache.activemq.artemis.core.server.impl.SharedStoreBackupActivation in 
line 102:
{code}
public void run()
  ...
 boolean scalingDown = SharedStoreSlavePolicy.getScaleDownPolicy() != 
null;
...
 if (scalingDown)
   ...
 activeMQServer.stop();
   ...
{code}

There is only a check whether a scale-down policy was defined, but not whether 
it is enabled/disabled.

  was:
If standalone shared backup is configured with disabled scale-down policy then 
after failover backup stop itself.

Debugging showed that server.stop() is called in 
org.apache.activemq.artemis.core.server.impl.SharedStoreBackupActivation in 
line 102:
{code:java}
public void run()
  ...
 boolean scalingDown = SharedStoreSlavePolicy.getScaleDownPolicy() != 
null;
...
 if (scalingDown)
   ...
 activeMQServer.stop();
   ...
{code}

There is only check whether scale down policy was defined but not whether it's 
enabled/disabled.


 Standalone backup server with shared store does not start if scaledown policy 
 is defined but not enabled
 

 Key: ARTEMIS-125
 URL: https://issues.apache.org/jira/browse/ARTEMIS-125
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.0.0
Reporter: Miroslav Novak
 Fix For: 1.1.0


 If a standalone shared-store backup is configured with a disabled scale-down 
 policy, then after failover the backup stops itself.
 Debugging showed that server.stop() is called in 
 org.apache.activemq.artemis.core.server.impl.SharedStoreBackupActivation in 
 line 102:
 {code}
 public void run()
   ...
  boolean scalingDown = SharedStoreSlavePolicy.getScaleDownPolicy() != 
 null;
 ...
  if (scalingDown)
...
  activeMQServer.stop();
...
 {code}
 There is only a check whether a scale-down policy was defined, but not 
 whether it is enabled/disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-125) Standalone backup server with shared store does not start if scaledown policy is defined but not enabled

2015-05-28 Thread Miroslav Novak (JIRA)
Miroslav Novak created ARTEMIS-125:
--

 Summary: Standalone backup server with shared store does not start 
if scaledown policy is defined but not enabled
 Key: ARTEMIS-125
 URL: https://issues.apache.org/jira/browse/ARTEMIS-125
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 1.0.0
Reporter: Miroslav Novak
 Fix For: 1.1.0


If a standalone shared-store backup is configured with a disabled scale-down 
policy, then after failover the backup stops itself.

Debugging showed that server.stop() is called in 
org.apache.activemq.artemis.core.server.impl.SharedStoreBackupActivation in 
line 102:
{code:java}
public void run()
  ...
 boolean scalingDown = SharedStoreSlavePolicy.getScaleDownPolicy() != 
null;
...
 if (scalingDown)
   ...
 activeMQServer.stop();
   ...
{code}

There is only a check whether a scale-down policy was defined, but not whether 
it is enabled/disabled.
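A minimal sketch of the condition the report suggests, assuming a policy object with an enabled flag (class and method names here are illustrative, not the actual Artemis API): the null-check alone stops the backup even when the policy is disabled, while also honouring isEnabled() keeps it running.

```java
public class ScaleDownCheckDemo {
    // Hypothetical stand-in for the configured scale-down policy.
    public static class ScaleDownPolicy {
        private final boolean enabled;
        public ScaleDownPolicy(boolean enabled) { this.enabled = enabled; }
        public boolean isEnabled() { return enabled; }
    }

    // Reported behaviour: null-check only, so a defined-but-disabled
    // policy still triggers activeMQServer.stop() after failover.
    public static boolean buggyScalingDown(ScaleDownPolicy p) {
        return p != null;
    }

    // Suggested behaviour: also check the enabled flag.
    public static boolean fixedScalingDown(ScaleDownPolicy p) {
        return p != null && p.isEnabled();
    }

    public static void main(String[] args) {
        ScaleDownPolicy definedButDisabled = new ScaleDownPolicy(false);
        // buggy -> true (backup stops), fixed -> false (backup keeps running)
        System.out.println("buggy: " + buggyScalingDown(definedButDisabled));
        System.out.println("fixed: " + fixedScalingDown(definedButDisabled));
    }
}
```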



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)