[jira] [Closed] (QPIDJMS-458) Potential race condition in JmsConnection.destroyResource

2019-08-21 Thread Timothy Bish (Jira)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed QPIDJMS-458.

Resolution: Done

This should have been fixed by the work done in QPIDJMS-464, which was part of the 
0.44.0 release, so we will close this out as done.

> Potential race condition in JmsConnection.destroyResource
> -
>
> Key: QPIDJMS-458
> URL: https://issues.apache.org/jira/browse/QPIDJMS-458
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.42.0
> Environment: OS: Windows 10 64Bit
> Broker: Apache Artemis 2.8.0
> JVM: Java HotSpot(TM) Client VM (25.40-b25, mixed mode)
> Java: version 1.8.0_40, vendor Oracle Corporation
>Reporter: Christian Danner
>Priority: Major
> Attachments: qpid_client_issue.txt
>
>
> It seems there is a race condition when attempting to close a 
> JmsMessageProducer, as indicated by the stack trace below. The corresponding 
> Thread is stuck waiting for the JmsMessageProducer to be destroyed by its 
> JmsConnection.
> This behaviour was observed while testing Apache Artemis with low disk space. 
> In the provided trace we attempt to close a broker connection due to a 
> JMSException (a TransactionRolledBackException caused by a duplicate message 
> ID); however, the Thread gets stuck indefinitely waiting for the 
> JmsMessageProducer to be destroyed.
> We keep track of all sessions for a JmsConnection (one session per Thread) 
> and attempt to perform a graceful connection shutdown by closing all 
> producers and consumers, followed by each session, before finally calling 
> close on the connection (see the sketch after the stack trace below).
> We use external synchronization to ensure that the connection can only be 
> closed by a single Thread (so in this example all other Threads attempting to 
> use the broker connection are blocked waiting for the lock held by the closing 
> Thread to be released).
>  
> Stack Trace:
> {{"Replicator_node1-->node2_[0ms]" #25 prio=5 os_prio=0 tid=0x49383c00 
> nid=0x3918 in Object.wait() [0x4b1ef000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.qpid.jms.provider.BalancedProviderFuture.sync(BalancedProviderFuture.java:137)
>   - locked <0x04e60300> (a 
> org.apache.qpid.jms.provider.BalancedProviderFuture)
>   at 
> org.apache.qpid.jms.JmsConnection.destroyResource(JmsConnection.java:755)
>   at 
> org.apache.qpid.jms.JmsConnection.destroyResource(JmsConnection.java:744)
>   at 
> org.apache.qpid.jms.JmsMessageProducer.doClose(JmsMessageProducer.java:103)
>   at 
> org.apache.qpid.jms.JmsMessageProducer.close(JmsMessageProducer.java:89)
>   at 
> acme.broker.client.jms.impl.JMSMessageProducer.closeInternal(JMSMessageProducer.java:48)
>   at 
> acme.broker.client.jms.impl.JMSMessageProducer.close(JMSMessageProducer.java:43)
>   at acme.broker.client.AbstractSession.tryClose(AbstractSession.java:108)
>   at acme.broker.client.AbstractSession.close(AbstractSession.java:90)
>   at 
> acme.broker.client.AbstractThreadedSessionManager.close(AbstractThreadedSessionManager.java:108)
>   - locked <0x1d321078> (a java.util.concurrent.ConcurrentHashMap)
>   at 
> acme.broker.client.AbstractBrokerConnection.closeInternal(AbstractBrokerConnection.java:204)
>   at 
> acme.broker.client.AbstractBrokerConnection.close(AbstractBrokerConnection.java:84)
>   at 
> acme.replication.jms.JMSMessageBridge.trySend(JMSMessageBridge.java:109)
>   at 
> acme.replication.jms.JMSMessageBridge.access$6(JMSMessageBridge.java:99)
>   at 
> acme.replication.jms.JMSMessageBridge$ReplicatorRunnable.run(JMSMessageBridge.java:62)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - <0x1cfa76b0> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)}}
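
For illustration, the shutdown sequence described above might look roughly like the 
following sketch. The GracefulShutdown class and the flat producer, consumer and 
session lists are hypothetical (not the reporter's actual code), and close failures 
are simply swallowed for brevity:

{code}
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class GracefulShutdown {

    private final ReentrantLock closeLock = new ReentrantLock();

    // Close producers and consumers first, then each session, then the connection,
    // holding a single lock so only one thread drives the shutdown.
    public void close(Connection connection, List<MessageProducer> producers,
                      List<MessageConsumer> consumers, List<Session> sessions) {
        closeLock.lock();
        try {
            for (MessageProducer producer : producers) {
                try { producer.close(); } catch (JMSException ignored) { }
            }
            for (MessageConsumer consumer : consumers) {
                try { consumer.close(); } catch (JMSException ignored) { }
            }
            for (Session session : sessions) {
                try { session.close(); } catch (JMSException ignored) { }
            }
            try { connection.close(); } catch (JMSException ignored) { }
        } finally {
            closeLock.unlock();
        }
    }
}
{code}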






[jira] [Resolved] (PROTON-2084) Cannot encode annotations types with nested Maps that differ from the annotation key type

2019-08-06 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-2084.
--
Resolution: Fixed

> Cannot encode annotations types with nested Maps that differ from the 
> annotation key type
> -
>
> Key: PROTON-2084
> URL: https://issues.apache.org/jira/browse/PROTON-2084
> Project: Qpid Proton
>  Issue Type: Task
>  Components: proton-j
>Affects Versions: proton-j-0.33.1
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: proton-j-0.34.0
>
>
> The MapType encoder has a fixed key type setting used to enforce the types of 
> keys for things like MessageAnnotations and DeliveryAnnotations, which define 
> Symbol type keys, or ApplicationProperties, which use String keys.  The current 
> code erroneously applies that fixed key type when encoding nested Maps, which 
> causes the encode to fail if the nested map has a differing key type.  The 
> MapType encoder needs to ignore the fixed key type setting when encoding the 
> values of the map entries. 
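
For illustration, the failing scenario before the fix can be reproduced with a 
Symbol-keyed annotation whose value is a String-keyed Map. The annotation name 
x-opt-example, the nested entries and the buffer size below are arbitrary 
placeholders:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.qpid.proton.amqp.Symbol;
import org.apache.qpid.proton.amqp.messaging.MessageAnnotations;
import org.apache.qpid.proton.message.Message;

public class NestedAnnotationExample {

    public static void main(String[] args) {
        // Message annotations must use Symbol keys...
        Map<Symbol, Object> annotations = new HashMap<>();

        // ...but the nested Map value uses String keys, which is legal AMQP.
        Map<String, Object> nested = new HashMap<>();
        nested.put("some-key", "some-value");
        annotations.put(Symbol.valueOf("x-opt-example"), nested);

        Message message = Message.Factory.create();
        message.setMessageAnnotations(new MessageAnnotations(annotations));

        // Before the fix this encode failed because the MapType encoder applied
        // the Symbol key constraint to the nested map's String keys as well.
        byte[] buffer = new byte[1024];
        int encoded = message.encode(buffer, 0, buffer.length);
        System.out.println("Encoded " + encoded + " bytes");
    }
}
{code}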






[jira] [Created] (PROTON-2084) Cannot encode annotations types with nested Maps that differ from the annotation key type

2019-08-06 Thread Timothy Bish (JIRA)
Timothy Bish created PROTON-2084:


 Summary: Cannot encode annotations types with nested Maps that 
differ from the annotation key type
 Key: PROTON-2084
 URL: https://issues.apache.org/jira/browse/PROTON-2084
 Project: Qpid Proton
  Issue Type: Task
  Components: proton-j
Affects Versions: proton-j-0.33.1
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: proton-j-0.34.0


The MapType encoder has a fixed key type setting used to enforce the types of 
keys for things like MessageAnnotations and DeliveryAnnotations, which define Symbol 
type keys, or ApplicationProperties, which use String keys.  The current code 
erroneously applies that fixed key type when encoding nested Maps, which causes the 
encode to fail if the nested map has a differing key type.  The MapType encoder 
needs to ignore the fixed key type setting when encoding the values of the map 
entries. 






[jira] [Resolved] (QPIDJMS-469) Remove some unused code leftover from previous refactoring

2019-08-02 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-469.
--
Resolution: Fixed

> Remove some unused code leftover from previous refactoring
> --
>
> Key: QPIDJMS-469
> URL: https://issues.apache.org/jira/browse/QPIDJMS-469
> Project: Qpid JMS
>  Issue Type: Task
>  Components: qpid-jms-client
>Affects Versions: 0.44.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Trivial
> Fix For: 0.45.0
>
>
> Remove some unused variables and unreachable blocks that have been left from 
> previous refactorings of the client code. 






[jira] [Created] (QPIDJMS-469) Remove some unused code leftover from previous refactoring

2019-08-02 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-469:


 Summary: Remove some unused code leftover from previous refactoring
 Key: QPIDJMS-469
 URL: https://issues.apache.org/jira/browse/QPIDJMS-469
 Project: Qpid JMS
  Issue Type: Task
  Components: qpid-jms-client
Affects Versions: 0.44.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.45.0


Remove some unused variables and unreachable blocks that have been left from 
previous refactorings of the client code. 






[jira] [Resolved] (QPIDJMS-467) Provide consistent stack trace information in client JMS Exceptions

2019-07-23 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-467.
--
Resolution: Fixed

> Provide consistent stack trace information in client JMS Exceptions
> ---
>
> Key: QPIDJMS-467
> URL: https://issues.apache.org/jira/browse/QPIDJMS-467
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.40.0
>Reporter: Ritz
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.45.0
>
>
> Please investigate including full stack traces, containing the calling methods, 
> in all AMQ stack traces. We expect complete stack traces so that the offending 
> code can be identified easily.
>  We are using qpid libs (qpid-jms-client-0.40.0.redhat-1.jar).
>  
> The output below is from 2 tests run by JUnit.  The first, highlighted in RED, 
> does not include any of the com.fedex calling methods, although it does manage 
> to identify the *method name*.
> The 2^nd^ test shows the desired full stack trace.
>  
> We see this behavior frequently. It appears that almost any time AMQ hands off 
> a task to a worker thread, the calling method is not included in any failures 
> reported.
>  
> 1) 
> testSendMessageWithCompletionListener(*junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests*)javax.jms.InvalidDestinationException:
>  AMQ119002: target address does not exist [condition = amqp:not-found]
> at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:150)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:117)
>     at 
> org.apache.qpid.jms.provider.amqp.builders.AmqpResourceBuilder.handleClosed(AmqpResourceBuilder.java:185)
>     at 
> org.apache.qpid.jms.provider.amqp.builders.AmqpResourceBuilder.processRemoteClose(AmqpResourceBuilder.java:129)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:973)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:104)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:831)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>     at java.util.concurrent.FutureTask.run(Unknown Source)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
>  Source)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
>  
> 2) 
> testSendMessageWithOptionsAndWithCompletionListener(junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests)javax.jms.MessageFormatException:
>  Message must not be null
>     at org.apache.qpid.jms.JmsSession.send(JmsSession.java:765)
>     at 
> org.apache.qpid.jms.JmsMessageProducer.sendMessage(JmsMessageProducer.java:246)
>     at 
> org.apache.qpid.jms.JmsMessageProducer.send(JmsMessageProducer.java:214)
>     at 
> com.fedex.mi.decorator.jms.FedexJmsMessageProducer.send(FedexJmsMessageProducer.java:488)
>    {color:#FF} *at 
> junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests.testSendMessageWithOptionsAndWithCompletionListener(JmsMessageProducerTests.java:687)*{color}
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at 
> junitTests.com.fedex.mi.decorator.jms.FedexJMSJUnitDispatch.runSelectedTests(FedexJMSJUnitDispatch.java:567)
>     at 
> junitTests.com.fedex.mi.decorator.jms.FedexJMSJUnitDispatch.main(FedexJMSJUnitDispatch.java:709)
>  
> For all the issues we've encountered, the client catches an exception and 
> prints out the stack trace, but it doesn't include the client stack info.
> The exception is thrown from the specific call.  I am not sure how AMQ works 
> internally, but if AMQ chooses to run the operation in a separate thread and 
> then block waiting for that call to complete before returning the exception in 
> the original client calling thread, that is fine; it would still be able to 
> build a complete stack trace before handing control back to the calling client 
> thread.  Perhaps a new exception with its cause populated by the background 
> thread would make sense.
>  
> Here's another example for connection error with qpid 0.40.0 libs; no client 
> thread info!!
> The client code is basically
> try 
> {
>    

[jira] [Updated] (QPIDJMS-467) Provide consistent stack trace information in client JMS Exceptions

2019-07-23 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-467:
-
Summary: Provide consistent stack trace information in client JMS 
Exceptions  (was: Logging Issue)

> Provide consistent stack trace information in client JMS Exceptions
> ---
>
> Key: QPIDJMS-467
> URL: https://issues.apache.org/jira/browse/QPIDJMS-467
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.40.0
>Reporter: Ritz
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.40.0
>
>
> Please investigate including full stack traces, containing the calling methods, 
> in all AMQ stack traces. We expect complete stack traces so that the offending 
> code can be identified easily.
>  We are using qpid libs (qpid-jms-client-0.40.0.redhat-1.jar).
>  
> The output below is from 2 tests run by JUnit.  The first, highlighted in RED, 
> does not include any of the com.fedex calling methods, although it does manage 
> to identify the *method name*.
> The 2^nd^ test shows the desired full stack trace.
>  
> We see this behavior frequently. It appears that almost any time AMQ hands off 
> a task to a worker thread, the calling method is not included in any failures 
> reported.
>  
> 1) 
> testSendMessageWithCompletionListener(*junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests*)javax.jms.InvalidDestinationException:
>  AMQ119002: target address does not exist [condition = amqp:not-found]
> at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:150)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:117)
>     at 
> org.apache.qpid.jms.provider.amqp.builders.AmqpResourceBuilder.handleClosed(AmqpResourceBuilder.java:185)
>     at 
> org.apache.qpid.jms.provider.amqp.builders.AmqpResourceBuilder.processRemoteClose(AmqpResourceBuilder.java:129)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:973)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:104)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:831)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>     at java.util.concurrent.FutureTask.run(Unknown Source)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
>  Source)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
>  
> 2) 
> testSendMessageWithOptionsAndWithCompletionListener(junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests)javax.jms.MessageFormatException:
>  Message must not be null
>     at org.apache.qpid.jms.JmsSession.send(JmsSession.java:765)
>     at 
> org.apache.qpid.jms.JmsMessageProducer.sendMessage(JmsMessageProducer.java:246)
>     at 
> org.apache.qpid.jms.JmsMessageProducer.send(JmsMessageProducer.java:214)
>     at 
> com.fedex.mi.decorator.jms.FedexJmsMessageProducer.send(FedexJmsMessageProducer.java:488)
>    {color:#FF} *at 
> junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests.testSendMessageWithOptionsAndWithCompletionListener(JmsMessageProducerTests.java:687)*{color}
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at 
> junitTests.com.fedex.mi.decorator.jms.FedexJMSJUnitDispatch.runSelectedTests(FedexJMSJUnitDispatch.java:567)
>     at 
> junitTests.com.fedex.mi.decorator.jms.FedexJMSJUnitDispatch.main(FedexJMSJUnitDispatch.java:709)
>  
> For all the issues we've encountered, the client catches an exception and 
> prints out the stack trace, but it doesn't include the client stack info.
> The exception is thrown from the specific call.  I am not sure how AMQ works 
> internally, but if AMQ chooses to run the operation in a separate thread and 
> then block waiting for that call to complete before returning the exception in 
> the original client calling thread, that is fine; it would still be able to 
> build a complete stack trace before handing control back to the calling client 
> thread.  Perhaps a new exception with its cause populated by the background 
> thread would make sense.
>  
> Here's another example for connection error with qpid 0.40.0 libs; 

[jira] [Updated] (QPIDJMS-467) Provide consistent stack trace information in client JMS Exceptions

2019-07-23 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-467:
-
Fix Version/s: (was: 0.40.0)
   0.45.0

> Provide consistent stack trace information in client JMS Exceptions
> ---
>
> Key: QPIDJMS-467
> URL: https://issues.apache.org/jira/browse/QPIDJMS-467
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.40.0
>Reporter: Ritz
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.45.0
>
>
> Please investigate including full stack traces, containing the calling methods, 
> in all AMQ stack traces. We expect complete stack traces so that the offending 
> code can be identified easily.
>  We are using qpid libs (qpid-jms-client-0.40.0.redhat-1.jar).
>  
> The output below is from 2 tests run by JUnit.  The first, highlighted in RED, 
> does not include any of the com.fedex calling methods, although it does manage 
> to identify the *method name*.
> The 2^nd^ test shows the desired full stack trace.
>  
> We see this behavior frequently. It appears that almost any time AMQ hands off 
> a task to a worker thread, the calling method is not included in any failures 
> reported.
>  
> 1) 
> testSendMessageWithCompletionListener(*junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests*)javax.jms.InvalidDestinationException:
>  AMQ119002: target address does not exist [condition = amqp:not-found]
> at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:150)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:117)
>     at 
> org.apache.qpid.jms.provider.amqp.builders.AmqpResourceBuilder.handleClosed(AmqpResourceBuilder.java:185)
>     at 
> org.apache.qpid.jms.provider.amqp.builders.AmqpResourceBuilder.processRemoteClose(AmqpResourceBuilder.java:129)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:973)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:104)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:831)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>     at java.util.concurrent.FutureTask.run(Unknown Source)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
>  Source)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
>  
> 2) 
> testSendMessageWithOptionsAndWithCompletionListener(junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests)javax.jms.MessageFormatException:
>  Message must not be null
>     at org.apache.qpid.jms.JmsSession.send(JmsSession.java:765)
>     at 
> org.apache.qpid.jms.JmsMessageProducer.sendMessage(JmsMessageProducer.java:246)
>     at 
> org.apache.qpid.jms.JmsMessageProducer.send(JmsMessageProducer.java:214)
>     at 
> com.fedex.mi.decorator.jms.FedexJmsMessageProducer.send(FedexJmsMessageProducer.java:488)
>    {color:#FF} *at 
> junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests.testSendMessageWithOptionsAndWithCompletionListener(JmsMessageProducerTests.java:687)*{color}
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at 
> junitTests.com.fedex.mi.decorator.jms.FedexJMSJUnitDispatch.runSelectedTests(FedexJMSJUnitDispatch.java:567)
>     at 
> junitTests.com.fedex.mi.decorator.jms.FedexJMSJUnitDispatch.main(FedexJMSJUnitDispatch.java:709)
>  
> For all the issues we've encountered, the client catches an exception and 
> prints out the stack trace, but it doesn't include the client stack info.
> The exception is thrown from the specific call.  I am not sure how AMQ works 
> internally, but if AMQ chooses to run the operation in a separate thread and 
> then block waiting for that call to complete before returning the exception in 
> the original client calling thread, that is fine; it would still be able to 
> build a complete stack trace before handing control back to the calling client 
> thread.  Perhaps a new exception with its cause populated by the background 
> thread would make sense.
>  
> Here's another example for connection error with qpid 0.40.0 libs; no client 
> thread info!!
> The client 

[jira] [Assigned] (QPIDJMS-467) Logging Issue

2019-07-23 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned QPIDJMS-467:


Assignee: Timothy Bish

> Logging Issue
> -
>
> Key: QPIDJMS-467
> URL: https://issues.apache.org/jira/browse/QPIDJMS-467
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.40.0
>Reporter: Ritz
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.40.0
>
>
> Please investigate including full stack traces, containing the calling methods, 
> in all AMQ stack traces. We expect complete stack traces so that the offending 
> code can be identified easily.
>  We are using qpid libs (qpid-jms-client-0.40.0.redhat-1.jar).
>  
> The output below is from 2 tests run by JUnit.  The first, highlighted in RED, 
> does not include any of the com.fedex calling methods, although it does manage 
> to identify the *method name*.
> The 2^nd^ test shows the desired full stack trace.
>  
> We see this behavior frequently. It appears that almost any time AMQ hands off 
> a task to a worker thread, the calling method is not included in any failures 
> reported.
>  
> 1) 
> testSendMessageWithCompletionListener(*junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests*)javax.jms.InvalidDestinationException:
>  AMQ119002: target address does not exist [condition = amqp:not-found]
> at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:150)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:117)
>     at 
> org.apache.qpid.jms.provider.amqp.builders.AmqpResourceBuilder.handleClosed(AmqpResourceBuilder.java:185)
>     at 
> org.apache.qpid.jms.provider.amqp.builders.AmqpResourceBuilder.processRemoteClose(AmqpResourceBuilder.java:129)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:973)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:104)
>     at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:831)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>     at java.util.concurrent.FutureTask.run(Unknown Source)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
>  Source)
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
>  
> 2) 
> testSendMessageWithOptionsAndWithCompletionListener(junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests)javax.jms.MessageFormatException:
>  Message must not be null
>     at org.apache.qpid.jms.JmsSession.send(JmsSession.java:765)
>     at 
> org.apache.qpid.jms.JmsMessageProducer.sendMessage(JmsMessageProducer.java:246)
>     at 
> org.apache.qpid.jms.JmsMessageProducer.send(JmsMessageProducer.java:214)
>     at 
> com.fedex.mi.decorator.jms.FedexJmsMessageProducer.send(FedexJmsMessageProducer.java:488)
>    {color:#FF} *at 
> junitTests.com.fedex.mi.decorator.jms.JmsMessageProducerTests.testSendMessageWithOptionsAndWithCompletionListener(JmsMessageProducerTests.java:687)*{color}
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at 
> junitTests.com.fedex.mi.decorator.jms.FedexJMSJUnitDispatch.runSelectedTests(FedexJMSJUnitDispatch.java:567)
>     at 
> junitTests.com.fedex.mi.decorator.jms.FedexJMSJUnitDispatch.main(FedexJMSJUnitDispatch.java:709)
>  
> For all the issues we've encountered, the client catches an exception and 
> prints out the stack trace, but it doesn't include the client stack info.
> The exception is thrown from the specific call.  I am not sure how AMQ works 
> internally, but if AMQ chooses to run the operation in a separate thread and 
> then block waiting for that call to complete before returning the exception in 
> the original client calling thread, that is fine; it would still be able to 
> build a complete stack trace before handing control back to the calling client 
> thread.  Perhaps a new exception with its cause populated by the background 
> thread would make sense (see the sketch after this message).
>  
> Here's another example for connection error with qpid 0.40.0 libs; no client 
> thread info!!
> The client code is basically
> try 
> {
>    Connection con = cf.connect("bad_ID", "bad_pwd");
> }
> catch(Exception e)
> {
>   e.printStackTrace();
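
A rough sketch of the suggestion above: run the operation on a worker thread, but 
create the exception that is thrown to the caller on the calling thread, with the 
asynchronous failure attached as its cause, so the caller's frames appear in the 
trace. This is not the client's actual implementation and the helper names are 
hypothetical:

{code}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import javax.jms.JMSException;

public class CallerStackTraceExample {

    private static final ExecutorService WORKER = Executors.newSingleThreadExecutor();

    // The JMSException is constructed here, on the calling thread, so its stack
    // trace contains the caller; the worker-side failure becomes the cause.
    public static void runOnWorker(Runnable operation) throws JMSException {
        Future<?> result = WORKER.submit(operation);
        try {
            result.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            JMSException jmsEx = new JMSException("Interrupted waiting for operation");
            jmsEx.initCause(e);
            throw jmsEx;
        } catch (ExecutionException e) {
            JMSException jmsEx = new JMSException("Operation failed: " + e.getCause());
            jmsEx.initCause(e.getCause());
            throw jmsEx;
        }
    }
}
{code}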

[jira] [Resolved] (QPIDJMS-461) JmsMessageIDBuilder::createMessageID can save StringBuilder allocations

2019-06-18 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-461.
--
   Resolution: Fixed
Fix Version/s: 0.44.0

> JmsMessageIDBuilder::createMessageID can save StringBuilder allocations
> ---
>
> Key: QPIDJMS-461
> URL: https://issues.apache.org/jira/browse/QPIDJMS-461
> Project: Qpid JMS
>  Issue Type: Improvement
>Affects Versions: 0.44.0
>Reporter: Francesco Nigro
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.44.0
>
> Attachments: screenshot-1.png
>
>
> Escape analysis does not appear to eliminate the StringBuilder allocations in 
> JmsMessageIDBuilder::createMessageID, so many StringBuilders are allocated.
> The intermediate StringBuilder could instead be kept in a thread-local pool, 
> saving unnecessary allocations.
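
A minimal sketch of the thread-local pooling idea suggested above; the class, method 
and ID format are hypothetical and this is not the actual JmsMessageIDBuilder code:

{code}
public final class MessageIdStrings {

    // One reusable StringBuilder per thread; reset it before each use instead of
    // allocating a new builder for every message ID.
    private static final ThreadLocal<StringBuilder> BUILDER =
        ThreadLocal.withInitial(() -> new StringBuilder(64));

    public static String buildId(String prefix, long sequence) {
        StringBuilder sb = BUILDER.get();
        sb.setLength(0);
        return sb.append(prefix).append(sequence).toString();
    }
}
{code}

Resetting with setLength(0) keeps the builder's backing array, so in the steady 
state each message ID only allocates the final String.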






[jira] [Assigned] (QPIDJMS-461) JmsMessageIDBuilder::createMessageID can save StringBuilder allocations

2019-06-18 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned QPIDJMS-461:


Assignee: Timothy Bish

> JmsMessageIDBuilder::createMessageID can save StringBuilder allocations
> ---
>
> Key: QPIDJMS-461
> URL: https://issues.apache.org/jira/browse/QPIDJMS-461
> Project: Qpid JMS
>  Issue Type: Improvement
>Affects Versions: 0.44.0
>Reporter: Francesco Nigro
>Assignee: Timothy Bish
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> Escape analysis does not appear to eliminate the StringBuilder allocations in 
> JmsMessageIDBuilder::createMessageID, so many StringBuilders are allocated.
> The intermediate StringBuilder could instead be kept in a thread-local pool, 
> saving unnecessary allocations.






[jira] [Commented] (QPIDJMS-458) Potential race condition in JmsConnection.destroyResource

2019-05-21 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845043#comment-16845043
 ] 

Timothy Bish commented on QPIDJMS-458:
--

The client does indeed offer timeouts for various operations, one of which is 
"jms.closeTimeout", which covers this case and is tested in the client test 
suite.  The default timeout is 60 seconds unless you've altered that on the 
connection URI.

[Documentation 
page|http://qpid.apache.org/releases/qpid-jms-0.42.0/docs/index.html]
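
For example, the close timeout can be lowered on the connection URI; the host, port 
and 30 second value below are placeholders:

{code}
import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.qpid.jms.JmsConnectionFactory;

public class CloseTimeoutExample {

    public static void main(String[] args) throws JMSException {
        // Lower the close timeout from the 60 second default to 30 seconds so a
        // close that the remote peer never answers gives up after 30 seconds.
        JmsConnectionFactory factory = new JmsConnectionFactory(
            "amqp://localhost:5672?jms.closeTimeout=30000");

        Connection connection = factory.createConnection();
        connection.start();
        // ... use the connection ...
        connection.close();
    }
}
{code}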

> Potential race condition in JmsConnection.destroyResource
> -
>
> Key: QPIDJMS-458
> URL: https://issues.apache.org/jira/browse/QPIDJMS-458
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.42.0
> Environment: OS: Windows 10 64Bit
> Broker: Apache Artemis 2.8.0
> JVM: Java HotSpot(TM) Client VM (25.40-b25, mixed mode)
> Java: version 1.8.0_40, vendor Oracle Corporation
>Reporter: Christian Danner
>Priority: Major
>
> It seems there is a race condition when attempting to close a 
> JmsMessageProducer, as indicated by the stack trace below. The corresponding 
> Thread is stuck waiting for the JmsMessageProducer to be destroyed by its 
> JmsConnection.
> This behaviour was observed while testing Apache Artemis with low disk space. 
> In the provided trace we attempt to close a broker connection due to a 
> JMSException (a TransactionRolledBackException caused by a duplicate message 
> ID); however, the Thread gets stuck indefinitely waiting for the 
> JmsMessageProducer to be destroyed.
> We keep track of all sessions for a JmsConnection (one session per Thread) 
> and attempt to perform a graceful connection shutdown by closing all 
> producers and consumers, followed by each session before finally calling 
> close on the connection.
> We use external synchronization to ensure that the connection can only be 
> closed by a single Thread (so in this example all other Threads attempting to 
> use the broker connection are blocked waiting for the lock from the closing 
> Thread to be released).
>  
> Stack Trace:
> {{"Replicator_node1-->node2_[0ms]" #25 prio=5 os_prio=0 tid=0x49383c00 
> nid=0x3918 in Object.wait() [0x4b1ef000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.qpid.jms.provider.BalancedProviderFuture.sync(BalancedProviderFuture.java:137)
>   - locked <0x04e60300> (a 
> org.apache.qpid.jms.provider.BalancedProviderFuture)
>   at 
> org.apache.qpid.jms.JmsConnection.destroyResource(JmsConnection.java:755)
>   at 
> org.apache.qpid.jms.JmsConnection.destroyResource(JmsConnection.java:744)
>   at 
> org.apache.qpid.jms.JmsMessageProducer.doClose(JmsMessageProducer.java:103)
>   at 
> org.apache.qpid.jms.JmsMessageProducer.close(JmsMessageProducer.java:89)
>   at 
> acme.broker.client.jms.impl.JMSMessageProducer.closeInternal(JMSMessageProducer.java:48)
>   at 
> acme.broker.client.jms.impl.JMSMessageProducer.close(JMSMessageProducer.java:43)
>   at acme.broker.client.AbstractSession.tryClose(AbstractSession.java:108)
>   at acme.broker.client.AbstractSession.close(AbstractSession.java:90)
>   at 
> acme.broker.client.AbstractThreadedSessionManager.close(AbstractThreadedSessionManager.java:108)
>   - locked <0x1d321078> (a java.util.concurrent.ConcurrentHashMap)
>   at 
> acme.broker.client.AbstractBrokerConnection.closeInternal(AbstractBrokerConnection.java:204)
>   at 
> acme.broker.client.AbstractBrokerConnection.close(AbstractBrokerConnection.java:84)
>   at 
> acme.replication.jms.JMSMessageBridge.trySend(JMSMessageBridge.java:109)
>   at 
> acme.replication.jms.JMSMessageBridge.access$6(JMSMessageBridge.java:99)
>   at 
> acme.replication.jms.JMSMessageBridge$ReplicatorRunnable.run(JMSMessageBridge.java:62)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - <0x1cfa76b0> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)}}






[jira] [Updated] (QPIDJMS-458) Potential race condition in JmsConnection.destroyResource

2019-05-21 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-458:
-
Priority: Major  (was: Blocker)

> Potential race condition in JmsConnection.destroyResource
> -
>
> Key: QPIDJMS-458
> URL: https://issues.apache.org/jira/browse/QPIDJMS-458
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.42.0
> Environment: OS: Windows 10 64Bit
> Broker: Apache Artemis 2.8.0
> JVM: Java HotSpot(TM) Client VM (25.40-b25, mixed mode)
> Java: version 1.8.0_40, vendor Oracle Corporation
>Reporter: Christian Danner
>Priority: Major
>
> It seems there is a race condition when attempting to close a 
> JmsMessageProducer, as indicated by the stack trace below. The corresponding 
> Thread is stuck waiting for the JmsMessageProducer to be destroyed by its 
> JmsConnection.
> This behaviour was observed while testing Apache Artemis with low disk space. 
> In the provided trace we attempt to close a broker connection due to a 
> JMSException (a TransactionRolledBackException caused by a duplicate message 
> ID); however, the Thread gets stuck indefinitely waiting for the 
> JmsMessageProducer to be destroyed.
> We keep track of all sessions for a JmsConnection (one session per Thread) 
> and attempt to perform a graceful connection shutdown by closing all 
> producers and consumers, followed by each session before finally calling 
> close on the connection.
> We use external synchronization to ensure that the connection can only be 
> closed by a single Thread (so in this example all other Threads attempting to 
> use the broker connection are blocked waiting for the lock from the closing 
> Thread to be released).
>  
> Stack Trace:
> {{"Replicator_node1-->node2_[0ms]" #25 prio=5 os_prio=0 tid=0x49383c00 
> nid=0x3918 in Object.wait() [0x4b1ef000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.qpid.jms.provider.BalancedProviderFuture.sync(BalancedProviderFuture.java:137)
>   - locked <0x04e60300> (a 
> org.apache.qpid.jms.provider.BalancedProviderFuture)
>   at 
> org.apache.qpid.jms.JmsConnection.destroyResource(JmsConnection.java:755)
>   at 
> org.apache.qpid.jms.JmsConnection.destroyResource(JmsConnection.java:744)
>   at 
> org.apache.qpid.jms.JmsMessageProducer.doClose(JmsMessageProducer.java:103)
>   at 
> org.apache.qpid.jms.JmsMessageProducer.close(JmsMessageProducer.java:89)
>   at 
> acme.broker.client.jms.impl.JMSMessageProducer.closeInternal(JMSMessageProducer.java:48)
>   at 
> acme.broker.client.jms.impl.JMSMessageProducer.close(JMSMessageProducer.java:43)
>   at acme.broker.client.AbstractSession.tryClose(AbstractSession.java:108)
>   at acme.broker.client.AbstractSession.close(AbstractSession.java:90)
>   at 
> acme.broker.client.AbstractThreadedSessionManager.close(AbstractThreadedSessionManager.java:108)
>   - locked <0x1d321078> (a java.util.concurrent.ConcurrentHashMap)
>   at 
> acme.broker.client.AbstractBrokerConnection.closeInternal(AbstractBrokerConnection.java:204)
>   at 
> acme.broker.client.AbstractBrokerConnection.close(AbstractBrokerConnection.java:84)
>   at 
> acme.replication.jms.JMSMessageBridge.trySend(JMSMessageBridge.java:109)
>   at 
> acme.replication.jms.JMSMessageBridge.access$6(JMSMessageBridge.java:99)
>   at 
> acme.replication.jms.JMSMessageBridge$ReplicatorRunnable.run(JMSMessageBridge.java:62)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - <0x1cfa76b0> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)}}






[jira] [Commented] (QPIDJMS-458) Potential race condition in JmsConnection.destroyResource

2019-05-21 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845006#comment-16845006
 ] 

Timothy Bish commented on QPIDJMS-458:
--

There's nothing in the trace that's indicative of a race.  The trace indicates 
the client is waiting on the remote end to close the AMQP link, so that isn't in 
itself surprising.  We'd need more details here, and ideally a reproducer, to 
determine if there's a client-side issue.  It would be good to provide the 
connection URI you are using, as well as any other details you can, to allow 
this to be investigated. 

> Potential race condition in JmsConnection.destroyResource
> -
>
> Key: QPIDJMS-458
> URL: https://issues.apache.org/jira/browse/QPIDJMS-458
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.42.0
> Environment: OS: Windows 10 64Bit
> Broker: Apache Artemis 2.8.0
> JVM: Java HotSpot(TM) Client VM (25.40-b25, mixed mode)
> Java: version 1.8.0_40, vendor Oracle Corporation
>Reporter: Christian Danner
>Priority: Blocker
>
> It seems there is a race condition when attempting to close a 
> JmsMessageProducer, as indicated by the stack trace below. The corresponding 
> Thread is stuck waiting for the JmsMessageProducer to be destroyed by its 
> JmsConnection.
> This behaviour was observed while testing Apache Artemis with low disk space. 
> In the provided trace we attempt to close a broker connection due to a 
> JMSException (a TransactionRolledBackException caused by a duplicate message 
> ID); however, the Thread gets stuck indefinitely waiting for the 
> JmsMessageProducer to be destroyed.
> We keep track of all sessions for a JmsConnection (one session per Thread) 
> and attempt to perform a graceful connection shutdown by closing all 
> producers and consumers, followed by each session before finally calling 
> close on the connection.
> We use external synchronization to ensure that the connection can only be 
> closed by a single Thread (so in this example all other Threads attempting to 
> use the broker connection are blocked waiting for the lock from the closing 
> Thread to be released).
>  
> Stack Trace:
> {{"Replicator_node1-->node2_[0ms]" #25 prio=5 os_prio=0 tid=0x49383c00 
> nid=0x3918 in Object.wait() [0x4b1ef000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.qpid.jms.provider.BalancedProviderFuture.sync(BalancedProviderFuture.java:137)
>   - locked <0x04e60300> (a 
> org.apache.qpid.jms.provider.BalancedProviderFuture)
>   at 
> org.apache.qpid.jms.JmsConnection.destroyResource(JmsConnection.java:755)
>   at 
> org.apache.qpid.jms.JmsConnection.destroyResource(JmsConnection.java:744)
>   at 
> org.apache.qpid.jms.JmsMessageProducer.doClose(JmsMessageProducer.java:103)
>   at 
> org.apache.qpid.jms.JmsMessageProducer.close(JmsMessageProducer.java:89)
>   at 
> acme.broker.client.jms.impl.JMSMessageProducer.closeInternal(JMSMessageProducer.java:48)
>   at 
> acme.broker.client.jms.impl.JMSMessageProducer.close(JMSMessageProducer.java:43)
>   at acme.broker.client.AbstractSession.tryClose(AbstractSession.java:108)
>   at acme.broker.client.AbstractSession.close(AbstractSession.java:90)
>   at 
> acme.broker.client.AbstractThreadedSessionManager.close(AbstractThreadedSessionManager.java:108)
>   - locked <0x1d321078> (a java.util.concurrent.ConcurrentHashMap)
>   at 
> acme.broker.client.AbstractBrokerConnection.closeInternal(AbstractBrokerConnection.java:204)
>   at 
> acme.broker.client.AbstractBrokerConnection.close(AbstractBrokerConnection.java:84)
>   at 
> acme.replication.jms.JMSMessageBridge.trySend(JMSMessageBridge.java:109)
>   at 
> acme.replication.jms.JMSMessageBridge.access$6(JMSMessageBridge.java:99)
>   at 
> acme.replication.jms.JMSMessageBridge$ReplicatorRunnable.run(JMSMessageBridge.java:62)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - <0x1cfa76b0> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)}}






[jira] [Resolved] (QPIDJMS-457) Failed send to disconnected connection leaves message in a read only state

2019-05-13 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-457.
--
Resolution: Fixed

> Failed send to disconnected connection leaves message in a read only state
> --
>
> Key: QPIDJMS-457
> URL: https://issues.apache.org/jira/browse/QPIDJMS-457
> Project: Qpid JMS
>  Issue Type: Task
>  Components: qpid-jms-client
>Affects Versions: 0.42.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.43.0
>
>
> If a message is sent using a producer whose underlying connection has already 
> failed, the message is left in a read-only state and cannot be resent through 
> another producer afterwards. 
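
For illustration, the affected pattern looks roughly like the sketch below; the 
sendWithFallback helper is hypothetical and simply shows a failed send followed by a 
resend attempt through another producer:

{code}
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;

public class ResendExample {

    // Try the primary producer first; if its connection has already failed, fall
    // back to a producer on a second, healthy connection.
    public static void sendWithFallback(Message message, MessageProducer primary,
                                        MessageProducer fallback) throws JMSException {
        try {
            primary.send(message);
        } catch (JMSException sendFailure) {
            // Before the fix the failed send left the message read-only, so this
            // resend could fail; after the fix the message can be sent again.
            fallback.send(message);
        }
    }
}
{code}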






[jira] [Created] (QPIDJMS-457) Failed send to disconnected connection leaves message in a read only state

2019-05-13 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-457:


 Summary: Failed send to disconnected connection leaves message in 
a read only state
 Key: QPIDJMS-457
 URL: https://issues.apache.org/jira/browse/QPIDJMS-457
 Project: Qpid JMS
  Issue Type: Task
  Components: qpid-jms-client
Affects Versions: 0.42.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.43.0


If a message is sent using a producer whose underlying connection has already 
failed, the message is left in a read-only state and cannot be resent through 
another producer afterwards. 






[jira] [Resolved] (PROTON-1508) Code smell in conditional expressions

2019-04-29 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-1508.
--
   Resolution: Fixed
Fix Version/s: proton-j-0.33.0

> Code smell in conditional expressions
> -
>
> Key: PROTON-1508
> URL: https://issues.apache.org/jira/browse/PROTON-1508
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-j
>Reporter: JC
>Priority: Trivial
> Fix For: proton-j-0.33.0
>
>
> Hi
> I've found a code smell in a recent snapshot in GitHub 
> (39a5fa78073a2db52929ba5ef9d685356630e581).
> Path: 
> proton-j/src/main/java/org/apache/qpid/proton/codec/messaging/ReceivedType.java
> {code}
>  73 public Object get(final int index)
>  74 {
>  75 
>  76 switch(index)
>  77 {
>  78 case 0:
>  79 return _impl.getSectionNumber();
>  80 case 1:
>  81 return _impl.getSectionOffset();
>  82 }
>  83 
>  84 throw new IllegalStateException("Unknown index " + index);
>  85 
>  86 }
>  87 
>  88 public int size()
>  89 {
>  90 return _impl.getSectionOffset() != null
>  91   ? 2
>  92   : _impl.getSectionOffset() != null
>  93   ? 1
>  94   : 0;
>  95 
>  96 }
> {code}
> In lines 90 and 92, the conditions are actually the same. Should one of the 
> conditions be _impl.getSectionNumber() != null, or something else?
> This might be a trivial thing, but I wanted to report it just in case.
> Thanks!
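
Presumably the intended version of size() checks getSectionNumber() for the 
one-element case; a sketch of that correction (the same method as quoted above, with 
only the second condition changed):

{code}
public int size()
{
    return _impl.getSectionOffset() != null
              ? 2
              : _impl.getSectionNumber() != null
                  ? 1
                  : 0;
}
{code}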






[jira] [Resolved] (PROTON-2037) [PATCH] IndexOutOfBoundException when decoding message using multiple buffers

2019-04-29 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-2037.
--
   Resolution: Fixed
 Assignee: Timothy Bish
Fix Version/s: proton-j-0.33.0

> [PATCH] IndexOutOfBoundException when decoding message using multiple buffers
> -
>
> Key: PROTON-2037
> URL: https://issues.apache.org/jira/browse/PROTON-2037
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-j
>Affects Versions: proton-j-0.32.0
>Reporter: Ulf Lilleengen
>Assignee: Timothy Bish
>Priority: Major
> Fix For: proton-j-0.33.0
>
>
> It is possible to trigger an IndexOutOfBoundException when using the 
> CompositeReadableBuffer and invoking some decode methods at array boundaries.
>  
> Patch: https://github.com/apache/qpid-proton-j/pull/32






[jira] [Resolved] (QPIDJMS-445) Update Netty libraries to latest

2019-03-08 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-445.
--
Resolution: Fixed

> Update Netty libraries to latest
> 
>
> Key: QPIDJMS-445
> URL: https://issues.apache.org/jira/browse/QPIDJMS-445
> Project: Qpid JMS
>  Issue Type: Task
>  Components: qpid-jms-client
>Affects Versions: 0.40.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.41.0
>
>
> Update to latest Netty and Netty native SSL wrapper libs






[jira] [Created] (QPIDJMS-445) Update Netty libraries to latest

2019-03-08 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-445:


 Summary: Update Netty libraries to latest
 Key: QPIDJMS-445
 URL: https://issues.apache.org/jira/browse/QPIDJMS-445
 Project: Qpid JMS
  Issue Type: Task
  Components: qpid-jms-client
Affects Versions: 0.40.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.41.0


Update to latest Netty and Netty native SSL wrapper libs






[jira] [Resolved] (QPIDJMS-438) Remotely closed session are not removed from the connection

2018-12-12 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-438.
--
   Resolution: Fixed
 Assignee: Timothy Bish
Fix Version/s: 0.40.0

> Remotely closed session are not removed from the connection
> ---
>
> Key: QPIDJMS-438
> URL: https://issues.apache.org/jira/browse/QPIDJMS-438
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.39.0
>Reporter: David De Franco
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.40.0
>
> Attachments: amqp.log, out-of-memory1.PNG, out-of-memory2.PNG
>
>
> We use Qpid JMS to connect to the Azure Service Bus.
> In our applications we cache the connections in a pool and cache a session 
> for each connection for sending messages.
> When Azure believes the connection has been idle for 5 minutes, it is remotely 
> closed, which also closes the cached session in the application. The 
> application responds by replacing the cached session with a newly created 
> session.
> The problem here is that the closed sessions are not removed from the 
> connection, eventually resulting in an OutOfMemoryError.






[jira] [Resolved] (PROTON-1980) Optimize CompositeReadableBuffer::hashCode

2018-12-07 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-1980.
--
Resolution: Fixed

> Optimize CompositeReadableBuffer::hashCode 
> ---
>
> Key: PROTON-1980
> URL: https://issues.apache.org/jira/browse/PROTON-1980
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.31.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: proton-j-0.32.0
>
>
> Optimize the hashCode operation to better deal with spans that are within a 
> single array when the buffer contains more than one.
> Optimize the multi array hashCode operation to traverse the arrays as a 
> single operation instead of traversing the arrays using a series of indexed 
> get calls.
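
A standalone illustration of the single-pass idea, not the proton-j implementation; 
it ignores the read position and limit bookkeeping the real buffer performs:

{code}
import java.util.Arrays;
import java.util.List;

public class CompositeHashSketch {

    // Hash every byte by walking each backing array directly, instead of issuing
    // one indexed get(i) per byte, which must re-locate the containing array on
    // every call.
    public static int hash(List<byte[]> arrays) {
        int hash = 1;
        for (byte[] array : arrays) {
            for (byte value : array) {
                hash = 31 * hash + value;
            }
        }
        return hash;
    }

    public static void main(String[] args) {
        List<byte[]> composite = Arrays.asList(
            new byte[] { 1, 2, 3 },
            new byte[] { 4, 5 });
        System.out.println(hash(composite));
    }
}
{code}

An indexed get(i) on a composite buffer has to map the index to the right backing 
array on every call, so a per-byte loop of gets pays that lookup per byte; walking 
each array directly pays it once per array.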






[jira] [Created] (PROTON-1980) Optimize CompositeReadableBuffer::hashCode

2018-12-07 Thread Timothy Bish (JIRA)
Timothy Bish created PROTON-1980:


 Summary: Optimize CompositeReadableBuffer::hashCode 
 Key: PROTON-1980
 URL: https://issues.apache.org/jira/browse/PROTON-1980
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Affects Versions: proton-j-0.31.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: proton-j-0.32.0


Optimize the hashCode operation to better deal with spans that are within a 
single array when the buffer contains more than one.

Optimize the multi array hashCode operation to traverse the arrays as a single 
operation instead of traversing the arrays using a series of indexed get calls.






[jira] [Resolved] (QPIDJMS-431) Refactor the Failover provider to improve performance

2018-11-27 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-431.
--
Resolution: Fixed

> Refactor the Failover provider to improve performance
> -
>
> Key: QPIDJMS-431
> URL: https://issues.apache.org/jira/browse/QPIDJMS-431
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.38.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.39.0
>
>
> Currently, when using failover with the client there is a significant impact 
> on client performance due to the way the provider serializes work.  We can 
> refactor this to handle most of the work on the client's thread and hand off 
> the remainder of the work to the underlying provider.  This reduces the 
> impact on senders to a relatively small amount, and the impact on the consumer 
> side is nearly transparent. 






[jira] [Closed] (QPIDJMS-435) Failover reconnection fails when using Azure Sas token authentication

2018-11-27 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed QPIDJMS-435.

Resolution: Not A Problem

> Failover reconnection fails when using Azure Sas token authentication
> -
>
> Key: QPIDJMS-435
> URL: https://issues.apache.org/jira/browse/QPIDJMS-435
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: David De Franco
>Priority: Major
> Attachments: log.txt
>
>
> For authenticating connections with the Azure Service Bus we use sasl 
> mechanism ANONYMOUS. After a connection is opened, before any other activity, 
> we send a sas token to a special queue. This works fine as long as the 
> connection is not remotely closed.
> When the connection is remotely closed it cannot be restored by the 
> FailoverProvider because of recovery of an active session on the connection. 
> When the session is recovered it starts listening on an unauthenticated 
> connection.
> This also prevents re-authenticating the connection by sending the sas token 
> again. See attached log.
> Is it possible to prevent the recovery of the session? We already moved the 
> re-authentication after the try-with-resources statement where the session is 
> created. This way we assumed the session would be closed, preventing recovery.






[jira] [Commented] (QPIDJMS-435) Failover reconnection fails when using Azure Sas token authentication

2018-11-27 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700737#comment-16700737
 ] 

Timothy Bish commented on QPIDJMS-435:
--

This isn't a bug in the JMS client.  You are using a non-standard means of 
authentication which conflicts with how the JMS client operates, and as such you 
will need to work out some means of handling this case.  It may come down to 
handling connection drops yourself and tearing down and rebuilding the 
JMS resources yourself instead of using the failover features.  The failover 
mechanism is doing exactly what it was designed to do, which is to recover and 
recreate all the JMS resources transparently. 

> Failover reconnection fails when using Azure Sas token authentication
> -
>
> Key: QPIDJMS-435
> URL: https://issues.apache.org/jira/browse/QPIDJMS-435
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: David De Franco
>Priority: Major
> Attachments: log.txt
>
>
> For authenticating connections with the Azure Service Bus we use sasl 
> mechanism ANONYMOUS. After a connection is opened, before any other activity, 
> we send a sas token to a special queue. This works fine as long as the 
> connection is not remotely closed.
> When the connection is remotely closed it cannot be restored by the 
> FailoverProvider because of recovery of an active session on the connection. 
> When the session is recovered it starts listening on an unauthenticated 
> connection.
> This also prevents re-authenticating the connection by sending the sas token 
> again. See attached log.
> Is it possible to prevent the recovery of the session? We already moved the 
> re-authentication after the try-with-resources statement where the session is 
> created. This way we assumed the session would be closed, preventing recovery.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-434) Consumer whose close was deferred in a client ack session can cause an exception when another consumer is also present in that session

2018-11-20 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-434.
--
Resolution: Fixed

> Consumer whose close was deferred in a client ack session can cause an 
> exception when another consumer is also present in that session
> --
>
> Key: QPIDJMS-434
> URL: https://issues.apache.org/jira/browse/QPIDJMS-434
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.38.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.39.0
>
>
> When a consumer that has received but not acknowledged messages in a client 
> ack session is closed, the close operation is deferred until either a 
> message.acknowledge call is made or the parent session is closed.  If there 
> are two consumers in such a session a call to acknowledge on a message can 
> lead to an exception being thrown in error. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Comment Edited] (QPIDJMS-434) Consumer whose close was deferred in a client ack session can cause an exception when another consumer is also present in that session

2018-11-20 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693826#comment-16693826
 ] 

Timothy Bish edited comment on QPIDJMS-434 at 11/20/18 9:56 PM:


Changes for this issue (QPIDJMS-257) allow this behaviour to happen.


was (Author: tabish121):
Changes for this issue allow this behaviour to happen.

> Consumer whose close was deferred in a client ack session can cause an 
> exception when another consumer is also present in that session
> --
>
> Key: QPIDJMS-434
> URL: https://issues.apache.org/jira/browse/QPIDJMS-434
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.38.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.39.0
>
>
> When a consumer that has received but not acknowledged messages in a client 
> ack session is closed, the close operation is deferred until either a 
> message.acknowledge call is made or the parent session is closed.  If there 
> are two consumers in such a session a call to acknowledge on a message can 
> lead to an exception being thrown in error. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-434) Consumer whose close was deferred in a client ack session can cause an exception when another consumer is also present in that session

2018-11-20 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693826#comment-16693826
 ] 

Timothy Bish commented on QPIDJMS-434:
--

Changes for this issue allow this behaviour to happen.

> Consumer whose close was deferred in a client ack session can cause an 
> exception when another consumer is also present in that session
> --
>
> Key: QPIDJMS-434
> URL: https://issues.apache.org/jira/browse/QPIDJMS-434
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.38.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.39.0
>
>
> When a consumer that has received but not acknowledged messages in a client 
> ack session is closed, the close operation is deferred until either a 
> message.acknowledge call is made or the parent session is closed.  If there 
> are two consumers in such a session a call to acknowledge on a message can 
> lead to an exception being thrown in error. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-434) Consumer whose close was deferred in a client ack session can cause an exception when another consumer is also present in that session

2018-11-20 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-434:


 Summary: Consumer whose close was deferred in a client ack session 
can cause an exception when another consumer is also present in that session
 Key: QPIDJMS-434
 URL: https://issues.apache.org/jira/browse/QPIDJMS-434
 Project: Qpid JMS
  Issue Type: Bug
  Components: qpid-jms-client
Affects Versions: 0.38.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.39.0


When a consumer that has received but not acknowledged messages in a client ack 
session is closed, the close operation is deferred until either a 
message.acknowledge call is made or the parent session is closed.  If there are 
two consumers in such a session a call to acknowledge on a message can lead to 
an exception being thrown in error. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (PROTON-1967) Reduce garbage created in the Transport layer

2018-11-19 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-1967.
--
Resolution: Fixed

> Reduce garbage created in the Transport layer
> -
>
> Key: PROTON-1967
> URL: https://issues.apache.org/jira/browse/PROTON-1967
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.30.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: proton-j-0.31.0
>
>
> The Transport implementation currently creates a new Transfer, Flow, and 
> Disposition object for each write to the FrameWriter, which creates an 
> excessive amount of unnecessary throw-away objects.  The Transport can cache 
> a single instance of each type and properly fill the fields (or set to 
> defaults) on each write to ensure consistency. 
> This change requires adding a copy method to the types such that the tests 
> that intercept written frame data objects can create a copy for later 
> analysis. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-1967) Reduce garbage created in the Transport layer

2018-11-19 Thread Timothy Bish (JIRA)
Timothy Bish created PROTON-1967:


 Summary: Reduce garbage created in the Transport layer
 Key: PROTON-1967
 URL: https://issues.apache.org/jira/browse/PROTON-1967
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Affects Versions: proton-j-0.30.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: proton-j-0.31.0


The Transport implementation currently creates a new Transfer, Flow, and 
Disposition object for each write to the FrameWriter, which creates an excessive 
amount of unnecessary throw-away objects.  The Transport can cache a single 
instance of each type and properly fill the fields (or set to defaults) on each 
write to ensure consistency. 

This change requires adding a copy method to the types such that the tests that 
intercept written frame data objects can create a copy for later analysis. 
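
A rough illustration of the reset-and-reuse pattern described above; the field
names and the stand-in type are hypothetical, not the actual proton-j Transfer:

{code:java}
// Hypothetical illustration of reusing one mutable performative instance per
// frame write instead of allocating a new object for every outgoing frame.
final class CachedTransferExample {

    // Stand-in for proton-j's Transfer performative; field names are illustrative only.
    static final class TransferFields {
        long deliveryId;
        byte[] deliveryTag;
        boolean settled;

        // Reset every field to its default so no state leaks between writes.
        void reset() {
            deliveryId = 0;
            deliveryTag = null;
            settled = false;
        }
    }

    private final TransferFields cachedTransfer = new TransferFields();

    // Called once per outgoing transfer frame; the same instance is reused each time.
    void writeTransfer(long deliveryId, byte[] tag, boolean settled) {
        cachedTransfer.reset();
        cachedTransfer.deliveryId = deliveryId;
        cachedTransfer.deliveryTag = tag;
        cachedTransfer.settled = settled;
        encodeAndWrite(cachedTransfer);
    }

    private void encodeAndWrite(TransferFields transfer) {
        // Frame encoding would happen here; omitted in this sketch.
    }
}
{code}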



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-432) Update testing dependencies to latest

2018-11-15 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-432:


 Summary: Update testing dependencies to latest
 Key: QPIDJMS-432
 URL: https://issues.apache.org/jira/browse/QPIDJMS-432
 Project: Qpid JMS
  Issue Type: Task
  Components: qpid-jms-client
Affects Versions: 0.38.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.39.0


Update dependencies used in tests to latest releases



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-431) Refactor the Failover provider to improve performance

2018-11-14 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-431:


 Summary: Refactor the Failover provider to improve performance
 Key: QPIDJMS-431
 URL: https://issues.apache.org/jira/browse/QPIDJMS-431
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.38.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.39.0


Currently when using failover with the client there is a significant impact on 
client performance due to the way the provider serializes work.  We can 
refactor this to handle most of the work on the client's thread and hand off the 
remainder of the work to the underlying provider.  This reduces the impact on 
senders to a relatively small amount and the consumer side sees a nearly 
transparent impact. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-1963) [j] Improve performance of the codec for certain common encoding operations

2018-11-13 Thread Timothy Bish (JIRA)
Timothy Bish created PROTON-1963:


 Summary: [j] Improve performance of the codec for certain common 
encoding operations
 Key: PROTON-1963
 URL: https://issues.apache.org/jira/browse/PROTON-1963
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Affects Versions: proton-j-0.30.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: proton-j-0.31.0


Some additional improvements can be made to the proton-j codec that enable 
faster encoding of ASCII strings and faster decoding of types like 
MessageAnnotations and ApplicationProperties.
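
A minimal sketch of the kind of ASCII fast path this refers to, using plain JDK
types rather than the actual proton-j encoder:

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

final class AsciiFastPathExample {

    // Writes the string as UTF-8, taking a cheap single-byte-per-char path when
    // every character is ASCII; names and structure are illustrative only.
    static void writeString(ByteBuffer buffer, String value) {
        if (isAscii(value)) {
            for (int i = 0; i < value.length(); i++) {
                buffer.put((byte) value.charAt(i)); // ASCII chars map 1:1 to bytes
            }
        } else {
            buffer.put(value.getBytes(StandardCharsets.UTF_8)); // general path
        }
    }

    private static boolean isAscii(String value) {
        for (int i = 0; i < value.length(); i++) {
            if (value.charAt(i) > 127) {
                return false;
            }
        }
        return true;
    }
}
{code}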



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-429) Refactor sender and receive code to use newer proton-j APIs

2018-11-12 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-429:


 Summary: Refactor sender and receive code to use newer proton-j 
APIs 
 Key: QPIDJMS-429
 URL: https://issues.apache.org/jira/browse/QPIDJMS-429
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.37.0
Reporter: Timothy Bish
 Fix For: 0.38.0


Newer versions of proton-j have added new APIs for better handling of transfer 
dispositions as well as easier identification of message sections during decode, 
which can offer some performance improvements and clean-up of older code in the 
library.  Additionally, we can now implement the ensureRemaining API in our 
expanding writable buffer implementation to better handle writes where more 
encoding space is needed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-420) Improve performance of MessageConsumer processing

2018-11-02 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-420.
--
Resolution: Fixed

> Improve performance of MessageConsumer processing
> -
>
> Key: QPIDJMS-420
> URL: https://issues.apache.org/jira/browse/QPIDJMS-420
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.38.0
>
>
> Refactor some of the code paths that handle inbound message processing and 
> eventually queue or deliver inbound messages to the MessageConsumer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1961) [j] Improve performance of the codec for certain common encoding operations

2018-11-01 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16672102#comment-16672102
 ] 

Timothy Bish commented on PROTON-1961:
--

Change added in 
[31f5fc97bbf60a73ba913b7dc16851cf7e2a150b|https://git-wip-us.apache.org/repos/asf?p=qpid-proton-j.git;h=31f5fc9]

> [j] Improve performance of the codec for certain common encoding operations
> ---
>
> Key: PROTON-1961
> URL: https://issues.apache.org/jira/browse/PROTON-1961
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.29.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: proton-j-0.30.0
>
>
> When writing certain types we can reduce the encoding work needed for writing 
> the descriptor codes and commonly encoded values such as the Accepted 
> disposition. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (PROTON-1961) [j] Improve performance of the codec for certain common encoding operations

2018-11-01 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated PROTON-1961:
-
Summary: [j] Improve performance of the codec for certain common encoding 
operations  (was: [j] Improve performance on the codec for certain common 
encoding operations)

> [j] Improve performance of the codec for certain common encoding operations
> ---
>
> Key: PROTON-1961
> URL: https://issues.apache.org/jira/browse/PROTON-1961
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.29.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: proton-j-0.30.0
>
>
> When writing certain types we can reduce the encoding work needed for writing 
> the descriptor codes and commonly encoded values such as the Accepted 
> disposition. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-1961) [j] Improve performance on the codec for certain common encoding operations

2018-11-01 Thread Timothy Bish (JIRA)
Timothy Bish created PROTON-1961:


 Summary: [j] Improve performance on the codec for certain common 
encoding operations
 Key: PROTON-1961
 URL: https://issues.apache.org/jira/browse/PROTON-1961
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Affects Versions: proton-j-0.29.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: proton-j-0.30.0


When writing certain types we can reduce the encoding work needed for writing 
the descriptor codes and commonly encoded values such as the Accepted 
disposition. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-417) Reduce GC pressure while using BytesMessage

2018-10-31 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-417:
-
Fix Version/s: (was: 0.38.0)

> Reduce GC pressure while using BytesMessage
> ---
>
> Key: QPIDJMS-417
> URL: https://issues.apache.org/jira/browse/QPIDJMS-417
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Francesco Nigro
>Priority: Trivial
>
> JmsBytesMessage::initializeReading() creates a DataInputStream that allocates 
> several byte[] and char[] even when no methods need them.
> Using the underlying ByteBufInputStream directly would reduce the amount of 
> garbage created while reducing the indirection needed to reach the underlying 
> ByteBuf that holds the data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-423) Log only connection URI in the connection initialized event handler

2018-10-29 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-423.
--
Resolution: Fixed

Used the built-in URI tools to remove any query options from the connect string, 
as they handle the variations in URIs we support, for added caution. 
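
A minimal sketch of that approach using the JDK URI class (illustrative only,
not the client's actual code):

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

final class UriLogSanitizer {

    // Returns the URI with its user-info, query and fragment removed so that
    // connection options such as key store passwords never reach the logs.
    static URI withoutQuery(URI remoteURI) throws URISyntaxException {
        return new URI(remoteURI.getScheme(), null, remoteURI.getHost(),
                       remoteURI.getPort(), remoteURI.getPath(), null, null);
    }

    public static void main(String[] args) throws URISyntaxException {
        URI full = new URI("amqps://broker.example.com:5671?transport.trustStorePassword=secret");
        System.out.println(withoutQuery(full)); // amqps://broker.example.com:5671
    }
}
{code}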

> Log only connection URI in the connection initialized event handler
> ---
>
> Key: QPIDJMS-423
> URL: https://issues.apache.org/jira/browse/QPIDJMS-423
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Benoit Devos
>Priority: Minor
>
> The method JmsConnection.onConnectionEstablished(final URI remoteURI) logs 
> this URI with all connection configuration options; instead, just the URI 
> portion should be logged with the query portion omitted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-423) Log only connection URI in the connection initialized event handler

2018-10-29 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-423:
-
Fix Version/s: 0.38.0

> Log only connection URI in the connection initialized event handler
> ---
>
> Key: QPIDJMS-423
> URL: https://issues.apache.org/jira/browse/QPIDJMS-423
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Benoit Devos
>Priority: Minor
> Fix For: 0.38.0
>
>
> The method JmsConnection.onConnectionEstablished(final URI remoteURI) logs 
> this URI with all connection configuration options; instead, just the URI 
> portion should be logged with the query portion omitted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-423) Log only connection URI in the connection initialized event handler

2018-10-29 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-423:
-
Description: The method JmsConnection.onConnectionEstablished(final URI 
remoteURI) logs this URI with all connection configuration options; instead, just 
the URI portion should be logged with the query portion omitted.  (was: The 
broker URI may contain sensitive info (like path to trust / key stores, and 
related *passwords*), and this info is being logged.

Sample:
{code:xml}



{code}

The method JmsConnection.onConnectionEstablished(final URI remoteURI) logs this 
URI as is, therefore disclosing some passwords.

Only essential info should be logged, i.e. scheme, host and port.)

> Log only connection URI in the connection initialized event handler
> ---
>
> Key: QPIDJMS-423
> URL: https://issues.apache.org/jira/browse/QPIDJMS-423
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Benoit Devos
>Priority: Minor
>
> The method JmsConnection.onConnectionEstablished(final URI remoteURI) logs 
> this URI with all connection configuration options instead just the URI 
> portion should be logged with the query portion omitted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-423) Log only connection URI in the connection initialized event handler

2018-10-29 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-423:
-
Summary: Log only connection URI in the connection initialized event 
handler  (was: Log only connection URI in the connection initialized )

> Log only connection URI in the connection initialized event handler
> ---
>
> Key: QPIDJMS-423
> URL: https://issues.apache.org/jira/browse/QPIDJMS-423
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Benoit Devos
>Priority: Minor
>
> The broker URI may contain sensitive info (like path to trust / key stores, 
> and related *passwords*), and this info is being logged.
> Sample:
> {code:xml}
>  class="org.apache.qpid.jms.JmsConnectionFactory">
> 
> 
> {code}
> The method JmsConnection.onConnectionEstablished(final URI remoteURI) logs 
> this URI as is, therefore disclosing some passwords.
> Only essential info should be logged, i.e. scheme, host and port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-423) Log only connection URI in the connection initialized

2018-10-29 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-423:
-
Summary: Log only connection URI in the connection initialized   (was: 
Avoid disclosing sensitive info when logging Remote Broker URI)

> Log only connection URI in the connection initialized 
> --
>
> Key: QPIDJMS-423
> URL: https://issues.apache.org/jira/browse/QPIDJMS-423
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Benoit Devos
>Priority: Minor
>
> The broker URI may contain sensitive info (like path to trust / key stores, 
> and related *passwords*), and this info is being logged.
> Sample:
> {code:xml}
>  class="org.apache.qpid.jms.JmsConnectionFactory">
> 
> 
> {code}
> The method JmsConnection.onConnectionEstablished(final URI remoteURI) logs 
> this URI as is, therefore disclosing some passwords.
> Only essential info should be logged, i.e. scheme, host and port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-420) Improve performance of MessageConsumer processing

2018-10-26 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-420:
-
Description: 
Refactor some of the code paths that handle inbound message processing and 
eventually queue or deliver inbound messages to the MessageConsumer.

 

  was:Avoid the cost of attempting to signal waiters on the prefetch queue when 
none are present to reduce time under lock when adding incoming messages.


> Improve performance of MessageConsumer processing
> -
>
> Key: QPIDJMS-420
> URL: https://issues.apache.org/jira/browse/QPIDJMS-420
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.38.0
>
>
> Refactor some of the code paths that handle inbound message processing and 
> eventually queue or deliver inbound messages to the MessageConsumer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-420) Improve performance of MessageConsumer processing

2018-10-26 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-420:
-
Summary: Improve performance of MessageConsumer processing  (was: Avoid 
overhead of missing thread signalling on inbound message enqueues)

> Improve performance of MessageConsumer processing
> -
>
> Key: QPIDJMS-420
> URL: https://issues.apache.org/jira/browse/QPIDJMS-420
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.38.0
>
>
> Avoid the cost of attempting to signal waiters on the prefetch queue when 
> none are present to reduce time under lock when adding incoming messages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Reopened] (QPIDJMS-420) Avoid overhead of missing thread signalling on inbound message enqueues

2018-10-26 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reopened QPIDJMS-420:
--

> Avoid overhead of missing thread signalling on inbound message enqueues
> ---
>
> Key: QPIDJMS-420
> URL: https://issues.apache.org/jira/browse/QPIDJMS-420
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.38.0
>
>
> Avoid the cost of attempting to signal waiters on the prefetch queue when 
> none are present to reduce time under lock when adding incoming messages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-422) Custom ClientID Generators

2018-10-22 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659201#comment-16659201
 ] 

Timothy Bish commented on QPIDJMS-422:
--

There are already two ways to set a custom client ID: one is to use the 
vendor-neutral APIs provided by JMS, and the other is the URI option that 
allows you to set your client ID value.  I don't really see a great amount of 
value in providing APIs that expose the client internals and are not vendor 
neutral when that is already provided directly in the JMS API. 
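
For reference, the two approaches mentioned look roughly like this; the
jms.clientID URI option shown is the documented qpid-jms setting, the rest is
plain JMS API, and the URI and client ID values are placeholders:

{code:java}
import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.qpid.jms.JmsConnectionFactory;

final class ClientIdExamples {

    public static void main(String[] args) throws JMSException {
        // Option 1: the vendor-neutral JMS API, called before the connection is used.
        JmsConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        Connection connection = factory.createConnection();
        connection.setClientID("my-custom-client-id");

        // Option 2: the qpid-jms URI option that applies the client ID on connect.
        JmsConnectionFactory preConfigured =
            new JmsConnectionFactory("amqp://localhost:5672?jms.clientID=my-custom-client-id");
        Connection connection2 = preConfigured.createConnection();

        connection.close();
        connection2.close();
    }
}
{code}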

> Custom ClientID Generators
> --
>
> Key: QPIDJMS-422
> URL: https://issues.apache.org/jira/browse/QPIDJMS-422
> Project: Qpid JMS
>  Issue Type: New Feature
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Sebastian T
>Priority: Minor
>
> In our application we are using client IDs with a very specific format. We 
> would therefore like to register our own IdGenerator with the 
> JmsConnectionFactory. Currently JmsConnectionFactory#setClientIdGenerator 
> is protected, thus we cannot programmatically set a custom IdGenerator except 
> by using (hacky) reflection.
> I would suggest renaming the current IdGenerator class to 
> DefaultClientIdGenerator, creating a ClientIdGenerator interface and changing 
> the visibility of JmsConnectionFactory#setClientIdGenerator to public.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-421) Improve JMS MessageProducer performance by caching Message Annotation encodings

2018-10-19 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-421.
--
Resolution: Fixed

> Improve JMS MessageProducer performance by caching Message Annotation 
> encodings
> ---
>
> Key: QPIDJMS-421
> URL: https://issues.apache.org/jira/browse/QPIDJMS-421
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.38.0
>
>
> Each message sent by a producer has a set of type values for Destination and 
> Message types added to the MessageAnnotation section of the AMQP message.  We 
> can improve send performance by caching the encoded bytes of these 
> MessageAnnotations sections and write that instead of performing a fresh 
> encode on each message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-421) Improve JMS MessageProducer performance by caching Message Annotation encodings

2018-10-19 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-421:


 Summary: Improve JMS MessageProducer performance by caching 
Message Annotation encodings
 Key: QPIDJMS-421
 URL: https://issues.apache.org/jira/browse/QPIDJMS-421
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.37.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.38.0


Each message sent by a producer has a set of type values for Destination and 
Message types added to the MessageAnnotation section of the AMQP message.  We 
can improve send performance by caching the encoded bytes of these 
MessageAnnotations sections and write that instead of performing a fresh encode 
on each message.
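
A hypothetical sketch of the caching idea; the class, key format and stand-in
encoder are illustrative, not the client's actual implementation:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The encoded MessageAnnotations bytes for a given (destination type, message
// type) pair never change, so encode them once and reuse the bytes on every send.
final class AnnotationEncodingCache {

    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    byte[] encodedAnnotationsFor(String destinationType, String messageType) {
        String key = destinationType + '|' + messageType;
        return cache.computeIfAbsent(key, k -> encodeAnnotations(destinationType, messageType));
    }

    // Stand-in for the real encoder; in the client this would produce the AMQP
    // encoding of the MessageAnnotations section.
    private byte[] encodeAnnotations(String destinationType, String messageType) {
        return (destinationType + ':' + messageType).getBytes();
    }
}
{code}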



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-420) Avoid overhead of missing thread signalling on inbound message enqueues

2018-10-17 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-420.
--
Resolution: Fixed

> Avoid overhead of missing thread signalling on inbound message enqueues
> ---
>
> Key: QPIDJMS-420
> URL: https://issues.apache.org/jira/browse/QPIDJMS-420
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.38.0
>
>
> Avoid the cost of attempting to signal waiters on the prefetch queue when 
> none are present to reduce time under lock when adding incoming messages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-420) Avoid overhead of missing thread signalling on inbound message enqueues

2018-10-17 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-420:


 Summary: Avoid overhead of missing thread signalling on inbound 
message enqueues
 Key: QPIDJMS-420
 URL: https://issues.apache.org/jira/browse/QPIDJMS-420
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.37.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.38.0


Avoid the cost of attempting to signal waiters on the prefetch queue when none 
are present to reduce time under lock when adding incoming messages.
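
An illustrative sketch of the pattern, assuming a simple lock/condition based
queue rather than the client's actual prefetch queue:

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Track how many threads are blocked waiting so that enqueue only signals when
// someone is there to be woken, keeping the time spent under the lock small.
final class SignalOnlyWhenWaitingQueue<E> {

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Queue<E> elements = new ArrayDeque<>();
    private int waiters;

    void enqueue(E element) {
        lock.lock();
        try {
            elements.offer(element);
            if (waiters > 0) {       // skip the signal entirely when nobody is waiting
                notEmpty.signal();
            }
        } finally {
            lock.unlock();
        }
    }

    E dequeue() throws InterruptedException {
        lock.lock();
        try {
            while (elements.isEmpty()) {
                waiters++;
                try {
                    notEmpty.await();
                } finally {
                    waiters--;
                }
            }
            return elements.poll();
        } finally {
            lock.unlock();
        }
    }
}
{code}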



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-418) Clean up the usage of Symbol type and conversion to Symbol from String

2018-10-17 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-418.
--
Resolution: Fixed

> Clean up the usage of Symbol type and conversion to Symbol from String
> --
>
> Key: QPIDJMS-418
> URL: https://issues.apache.org/jira/browse/QPIDJMS-418
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.38.0
>
>
> The code currently does a lot more work than needed on each send converting 
> from String values into Symbol types to fill in the proton-j message 
> structure prior to encoding. We can now keep all our commonly used values in 
> static proton-j Symbol instances and request the cached String view that each 
> now maintains when we need a String representation. The current proton-j API 
> makes this need for String less relevant than it once was, so our overhead on 
> send can be significantly reduced with minimal changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-419) JMS Session is sometimes not recovered on failover reconnect

2018-10-17 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-419.
--
Resolution: Fixed

> JMS Session is sometimes not recovered on failover reconnect
> 
>
> Key: QPIDJMS-419
> URL: https://issues.apache.org/jira/browse/QPIDJMS-419
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.38.0
>
>
> In a very rare race the JMS Session that is newly created at the time of a 
> failover and quick reconnect cycle can be missed when recovering the 
> Connection resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-419) JMS Session is sometimes not recovered on failover reconnect

2018-10-17 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-419:


 Summary: JMS Session is sometimes not recovered on failover 
reconnect
 Key: QPIDJMS-419
 URL: https://issues.apache.org/jira/browse/QPIDJMS-419
 Project: Qpid JMS
  Issue Type: Bug
  Components: qpid-jms-client
Affects Versions: 0.37.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.38.0


In a very rare race the JMS Session that is newly created at the time of a 
failover and quick reconnect cycle can be missed when recovering the Connection 
resources.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-418) Clean up the usage of Symbol type and conversion to Symbol from String

2018-10-16 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-418:


 Summary: Clean up the usage of Symbol type and conversion to 
Symbol from String
 Key: QPIDJMS-418
 URL: https://issues.apache.org/jira/browse/QPIDJMS-418
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.37.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.38.0


The code currently does a lot more work than needed on each send converting 
from String values into Symbol types to fill in the proton-j message structure 
prior to encoding. We can now keep all our commonly used values in static 
proton-j Symbol instances and request the cached String view that each now 
maintains when we need a String representation. The current proton-j API makes 
this need for String less relevant than it once was, so our overhead on send can 
be significantly reduced with minimal changes.
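
A small illustration of the idea using the proton-j Symbol.valueOf API; the
constant names themselves are made up:

{code:java}
import org.apache.qpid.proton.amqp.Symbol;

// Keep commonly used values as static Symbol constants and reuse the Symbol's
// cached String form instead of converting on every send.
final class SymbolConstantsExample {

    static final Symbol QUEUE_CAPABILITY = Symbol.valueOf("queue");
    static final Symbol TOPIC_CAPABILITY = Symbol.valueOf("topic");

    public static void main(String[] args) {
        // Symbols are interned, so repeated lookups return the same instance...
        System.out.println(QUEUE_CAPABILITY == Symbol.valueOf("queue")); // true
        // ...and toString() hands back the stored String view when one is needed.
        System.out.println(QUEUE_CAPABILITY.toString());
    }
}
{code}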



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-416) Move protocol processing work into the netty event loop thread

2018-10-11 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-416.
--
Resolution: Fixed

> Move protocol processing work into the netty event loop thread
> --
>
> Key: QPIDJMS-416
> URL: https://issues.apache.org/jira/browse/QPIDJMS-416
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.38.0
>
>
> Currently the protocol-specific processing is handled in its own 
> single-threaded executor, which creates a performance drop as reads and writes 
> are queued into Netty for handling.  We can achieve a significant performance 
> boost by handling all the protocol work inside the Netty event loop and not 
> hopping between threads as we do now. 
> This requires some refactoring of connect and shutdown logic and some 
> safeguards around all callbacks in the transport to ensure that we always 
> operate on the event loop and not on a client thread. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-416) Move protocol processing work into the netty event loop thread

2018-10-09 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644067#comment-16644067
 ] 

Timothy Bish commented on QPIDJMS-416:
--

Commit with change at: 
[https://git-wip-us.apache.org/repos/asf?p=qpid-jms.git;a=commit;h=4314482de1e8cb7e58d8331ca0e459f32d78ab4e]

 
{noformat}
commit    4314482de1e8cb7e58d8331ca0e459f32d78ab4e
tree    079f592511e1de668db615db5d6eb83ec79d4a07
parent    4b8739b756fd94d77965df17c7aed84c56b95dea{noformat}

> Move protocol processing work into the netty event loop thread
> --
>
> Key: QPIDJMS-416
> URL: https://issues.apache.org/jira/browse/QPIDJMS-416
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.37.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.38.0
>
>
> Currently the protocol-specific processing is handled in its own 
> single-threaded executor, which creates a performance drop as reads and writes 
> are queued into Netty for handling.  We can achieve a significant performance 
> boost by handling all the protocol work inside the Netty event loop and not 
> hopping between threads as we do now. 
> This requires some refactoring of connect and shutdown logic and some 
> safeguards around all callbacks in the transport to ensure that we always 
> operate on the event loop and not on a client thread. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-416) Move protocol processing work into the netty event loop thread

2018-10-09 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-416:


 Summary: Move protocol processing work into the netty event loop 
thread
 Key: QPIDJMS-416
 URL: https://issues.apache.org/jira/browse/QPIDJMS-416
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.37.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.38.0


Currently the protocol-specific processing is handled in its own single-threaded 
executor, which creates a performance drop as reads and writes are queued into 
Netty for handling.  We can achieve a significant performance boost 
by handling all the protocol work inside the Netty event loop and not hopping 
between threads as we do now. 

This requires some refactoring of connect and shutdown logic and some 
safeguards around all callbacks in the transport to ensure that we always 
operate on the event loop and not on a client thread. 
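
The dispatch pattern involved looks roughly like this (illustrative only, using
the standard Netty EventLoop API):

{code:java}
import io.netty.channel.Channel;
import io.netty.channel.EventLoop;

// Run protocol work directly when already on the channel's event loop, otherwise
// hand it off to the loop, so no separate executor or extra thread hop is involved.
final class EventLoopDispatch {

    static void runOnEventLoop(Channel channel, Runnable protocolWork) {
        EventLoop loop = channel.eventLoop();
        if (loop.inEventLoop()) {
            protocolWork.run();          // already on the loop: execute inline
        } else {
            loop.execute(protocolWork);  // queue onto the loop from another thread
        }
    }
}
{code}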



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (PROTON-1948) Refactor FrameWriter to avoid reencodes when buffer space is insufficient

2018-10-03 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-1948.
--
Resolution: Fixed

> Refactor FrameWriter to avoid reencodes when buffer space is insufficient
> -
>
> Key: PROTON-1948
> URL: https://issues.apache.org/jira/browse/PROTON-1948
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.29.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: proton-j-0.30.0
>
>
> The current FrameWriter implementation uses a ByteBuffer wrapped in a 
> ReadableBuffer to encode to, and if the buffer is too small it recreates the 
> buffer with a larger size and must perform a second encode to finish the job. 
>  We can now implement our own ReadableBuffer type for the writer that both 
> performs better by dropping the ByteBuffer abstractions and that grows as 
> needed to fit the encoding of the types and the accompanying payloads (for 
> Transfers).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-1948) Refactor FrameWriter to avoid reencodes when buffer space is insufficient

2018-10-03 Thread Timothy Bish (JIRA)
Timothy Bish created PROTON-1948:


 Summary: Refactor FrameWriter to avoid reencodes when buffer space 
is insufficient
 Key: PROTON-1948
 URL: https://issues.apache.org/jira/browse/PROTON-1948
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Affects Versions: proton-j-0.29.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: proton-j-0.30.0


The current FrameWriter implementation uses a ByteBuffer wrapped in a 
ReadableBuffer to encode to, and if the buffer is too small it recreates the 
buffer with a larger size and must perform a second encode to finish the job.  
We can now implement our own ReadableBuffer type for the writer that both 
performs better by dropping the ByteBuffer abstractions and that grows as 
needed to fit the encoding of the types and the accompanying payloads (for 
Transfers).
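
A hypothetical sketch of a self-growing encode buffer that preserves the bytes
already written, so a second encode pass is never needed (not the actual
FrameWriter code):

{code:java}
import java.util.Arrays;

// When more room is needed the backing array is enlarged and the bytes written
// so far are copied over, so the frame never has to be encoded a second time.
final class GrowableEncodeBuffer {

    private byte[] data = new byte[1024];
    private int position;

    void put(byte b) {
        ensureCapacity(1);
        data[position++] = b;
    }

    void put(byte[] bytes) {
        ensureCapacity(bytes.length);
        System.arraycopy(bytes, 0, data, position, bytes.length);
        position += bytes.length;
    }

    private void ensureCapacity(int required) {
        if (position + required > data.length) {
            int newSize = Math.max(data.length * 2, position + required);
            data = Arrays.copyOf(data, newSize); // keeps the bytes written so far
        }
    }

    byte[] toByteArray() {
        return Arrays.copyOf(data, position);
    }
}
{code}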



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (PROTON-1941) Add WritableBuffer API for requesting space when writing complex types

2018-09-26 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-1941.
--
Resolution: Fixed

> Add WritableBuffer API for requesting space when writing complex types
> --
>
> Key: PROTON-1941
> URL: https://issues.apache.org/jira/browse/PROTON-1941
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-j
>Affects Versions: proton-j-0.29.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: proton-j-0.30.0
>
>
> Add a new optional API to WritableBuffer that allows a complex type that has 
> already computed its encoding size to request that the buffer have at least 
> that amount of writable space left before an attempt to encode into that 
> buffer occurs.  This can result either in an early failure of the encode, 
> avoiding encoding when the result is bound to fail, or in the underlying buffer 
> increasing its capacity to accommodate the incoming writes before they 
> happen, which can result in less churn as a buffer tries to grow while the 
> complex type gets encoded into it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-1941) Add WritableBuffer API for requesting space when writing complex types

2018-09-26 Thread Timothy Bish (JIRA)
Timothy Bish created PROTON-1941:


 Summary: Add WritableBuffer API for requesting space when writing 
complex types
 Key: PROTON-1941
 URL: https://issues.apache.org/jira/browse/PROTON-1941
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-j
Affects Versions: proton-j-0.29.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: proton-j-0.30.0


Add a new optional API to WritableBuffer that allows a complex type that has 
already computed its encoding size to request that the buffer have at least 
that amount of writable space left before an attempt to encode into that 
buffer occurs.  This can result either in an early failure of the encode, 
avoiding encoding when the result is bound to fail, or in the underlying buffer 
increasing its capacity to accommodate the incoming writes before they happen, 
which can result in less churn as a buffer tries to grow while the complex type 
gets encoded into it. 
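
A simplified sketch of the ensureRemaining idea; the interface and classes here
are illustrative stand-ins, not proton-j's actual WritableBuffer:

{code:java}
import java.nio.ByteBuffer;

// A type that has already computed its encoded size asks the buffer up front for
// that much space, so the encode either fails early or the buffer can grow once
// before any bytes are written.
interface SimpleWritableBuffer {
    void put(byte b);
    int remaining();

    // Optional hook: a fixed-size buffer can fail fast, a growable one can expand.
    default void ensureRemaining(int requiredRemaining) {
        if (remaining() < requiredRemaining) {
            throw new IllegalStateException("Insufficient space for encode: need " + requiredRemaining);
        }
    }
}

final class FixedWritableBuffer implements SimpleWritableBuffer {
    private final ByteBuffer buffer;

    FixedWritableBuffer(int capacity) {
        this.buffer = ByteBuffer.allocate(capacity);
    }

    @Override
    public void put(byte b) {
        buffer.put(b);
    }

    @Override
    public int remaining() {
        return buffer.remaining();
    }
}
{code}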



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (QPIDJMS-413) Incorrect array access code in ReadableBuffer implementation

2018-09-24 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed QPIDJMS-413.

Resolution: Fixed

> Incorrect array access code in ReadableBuffer implementation
> 
>
> Key: QPIDJMS-413
> URL: https://issues.apache.org/jira/browse/QPIDJMS-413
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.36.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Trivial
> Fix For: 0.37.0
>
>
> Code in the proton-j ReadableBuffer wrapper implementation incorrectly 
> computes the array offset of the backing Netty ByteBuf's array if present.
> Not currently hit in client code, but it could be in the future depending on usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-413) Incorrect array access code in ReadableBuffer implementation

2018-09-24 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-413:


 Summary: Incorrect array access code in ReadableBuffer 
implementation
 Key: QPIDJMS-413
 URL: https://issues.apache.org/jira/browse/QPIDJMS-413
 Project: Qpid JMS
  Issue Type: Bug
  Components: qpid-jms-client
Affects Versions: 0.36.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.37.0


Code in the proton-j ReadableBuffer wrapper implementation incorrectly computes 
the array offset of the backing Netty ByteBuf's array if present.

Not currently hit in client code, but it could be in the future depending on usage.
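
The offset arithmetic in question looks like this (an illustrative helper using
the Netty ByteBuf API, not the client's wrapper code):

{code:java}
import io.netty.buffer.ByteBuf;

// When a ByteBuf is backed by an array, the first readable byte lives at
// arrayOffset() + readerIndex(), not at arrayOffset() alone.
final class ByteBufArrayAccess {

    static byte firstReadableByte(ByteBuf buffer) {
        if (buffer.hasArray()) {
            byte[] array = buffer.array();
            int offset = buffer.arrayOffset() + buffer.readerIndex(); // both parts matter
            return array[offset];
        }
        return buffer.getByte(buffer.readerIndex()); // non-array (e.g. direct) buffers
    }
}
{code}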



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (PROTON-1925) [proton-j] Add some enums to Section and DeliveryState to make type determination simpler

2018-08-31 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-1925.
--
Resolution: Fixed

> [proton-j] Add some enums to Section and DeliveryState to make type 
> determination simpler
> -
>
> Key: PROTON-1925
> URL: https://issues.apache.org/jira/browse/PROTON-1925
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.29.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Trivial
> Fix For: proton-j-0.30.0
>
>
> Add enums for the known Section and DeliveryState types to allow for easier 
> identification of the types when writing code to process the events for 
> incoming messages and delivery state updates.  Right now we are forced to use 
> many instanceof calls to decide what the types are in order to react. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Reopened] (PROTON-1925) [proton-j] Add some enums to Section and DeliveryState to make type determination simpler

2018-08-31 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reopened PROTON-1925:
--

> [proton-j] Add some enums to Section and DeliveryState to make type 
> determination simpler
> -
>
> Key: PROTON-1925
> URL: https://issues.apache.org/jira/browse/PROTON-1925
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.29.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Trivial
> Fix For: proton-j-0.30.0
>
>
> Add enums for the known Section and DeliveryState types to allow for easier 
> identification of the types when writing code to process the events for 
> incoming messages and delivery state updates.  Right now we are forced to use 
> many instanceof calls to decide what the types are in order to react. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (PROTON-1925) [proton-j] Add some enums to Section and DeliveryState to make type determination simpler

2018-08-30 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved PROTON-1925.
--
Resolution: Fixed

> [proton-j] Add some enums to Section and DeliveryState to make type 
> determination simpler
> -
>
> Key: PROTON-1925
> URL: https://issues.apache.org/jira/browse/PROTON-1925
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.29.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Trivial
> Fix For: proton-j-0.30.0
>
>
> Add enums for the known Section and DeliveryState types to allow for easier 
> identification of the types when writing code to process the events for 
> incoming messages and delivery state updates.  Right now we are forced to use 
> many instanceof calls to decide what the types are in order to react. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (PROTON-1925) [proton-j] Add some enums to Section and DeliveryState to make type determination simpler

2018-08-30 Thread Timothy Bish (JIRA)
Timothy Bish created PROTON-1925:


 Summary: [proton-j] Add some enums to Section and DeliveryState to 
make type determination simpler
 Key: PROTON-1925
 URL: https://issues.apache.org/jira/browse/PROTON-1925
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Affects Versions: proton-j-0.29.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: proton-j-0.30.0


Add enums for the known Section and DeliveryState types to allow for easier 
identification of the types when writing code to process the events for 
incoming messages and delivery state updates.  Right now we are forced to use 
many instanceof calls to decide what the types are in order to react. 
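
An illustrative sketch of the difference; the enum and accessor names here are
made up and may not match the ones actually added to proton-j:

{code:java}
// With a type enum on each section, dispatch becomes a switch instead of a
// chain of instanceof checks.
final class SectionDispatchExample {

    enum SectionType { HEADER, MESSAGE_ANNOTATIONS, APPLICATION_PROPERTIES, DATA, AMQP_VALUE }

    interface Section {
        SectionType getType();
    }

    static void process(Section section) {
        switch (section.getType()) {
            case MESSAGE_ANNOTATIONS:
                // handle annotations
                break;
            case APPLICATION_PROPERTIES:
                // handle application properties
                break;
            case DATA:
            case AMQP_VALUE:
                // handle the body section
                break;
            default:
                // ignore other sections in this sketch
                break;
        }
    }
}
{code}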



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-411) Improve output of the frame tracing loggers to include the binary payload

2018-08-16 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-411.
--
Resolution: Fixed

> Improve output of the frame tracing loggers to include the binary payload
> --
>
> Key: QPIDJMS-411
> URL: https://issues.apache.org/jira/browse/QPIDJMS-411
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.36.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.37.0
>
>
> When the 'amqp.traceFrames' option is enabled the tracing only includes the 
> TransportFrame content and not the optional payload data.  Improve this 
> logging to include a string-formatted representation of the payload data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-411) Improve output of the frame tracing loggers to include the binary payload

2018-08-16 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-411:


 Summary: Improve output of the frame tracing loggers to include 
the binary payload
 Key: QPIDJMS-411
 URL: https://issues.apache.org/jira/browse/QPIDJMS-411
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.36.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.37.0


When the 'amqp.traceFrames' option is enabled the tracing only includes the 
TransportFrame content and not the optional payload data.  Improve this logging 
to include a string-formatted representation of the payload data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-409) JMS Scheduled Delivery delivers messages before time under load

2018-08-16 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16582539#comment-16582539
 ] 

Timothy Bish commented on QPIDJMS-409:
--

There isn't quite enough to go on in the logs provided; could you ensure that 
the ENV var PN_TRACE_FRM is set to "true"?  Also, a reproducer will go a lot 
further in finding out what is going on, as otherwise we are just guessing. 

> JMS Scheduled Delivery delivers messages before time under load
> ---
>
> Key: QPIDJMS-409
> URL: https://issues.apache.org/jira/browse/QPIDJMS-409
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.35.0
> Environment: Application is a Spring Boot application - version 
> 2.0.4.RELEASE
> Qpid JMS Client version is 0.35.0
> Operating system where problem occurs is Windows 10
> I have found this problem using the following different message brokers:
>  * Apache Artemis 2.4.0
>  * Apache Artemis 2.6.2
>  * Apache Qpid Broker 7.0.6
>Reporter: Ian Rowlands
>Priority: Major
> Attachments: successful_delayed_delivery.txt
>
>
> When under system load, the Qpid JMS client doesn't handle scheduled message 
> delivery correctly - it delivers the message prior to the required time.
> When running one request, the scheduling works correctly. The load to 
> reproduce this isn't very high (i.e running about 5 of my job processes 
> simultaneously seems to trip it up pretty quickly).
> I have used different JMS Brokers with the same client code and the same 
> problem occurs with both Brokers (see environment).
> I am using Spring JMS Template to send the JMS message. The key piece of code 
> is something like:
> {{delayedDeliveryjmsTemplate.setDeliveryDelay(timeoutPeriod);}}
> {{delayedDeliveryjmsTemplate.convertAndSend(queueName, timeoutMsg, (Message 
> jmsMessage) -> {}}
> {{ jmsMessage.setStringProperty(OBJECT_TYPE_FIELD, 
> timeoutMsg.getClass().getSimpleName());}}
> {{jmsMessage.setStringProperty(OBJECT_TYPE_FIELD, 
> timeoutMsg.getClass().getSimpleName());}}
> {{ return jmsMessage;}}
> {{});}}
> I realise you want more logging details but I'm unsure what would be best to 
> log. Please let me know and I'll do so.
>  
>  

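For reference, the same delayed delivery can be reproduced without Spring using 
the plain JMS 2.0 API; the sketch below is only an illustration with placeholder 
broker URL, queue name, delay and property values, not the reporter's actual 
code.

{noformat}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.qpid.jms.JmsConnectionFactory;

// Minimal sketch of scheduled (delayed) delivery with the plain JMS 2.0 API,
// equivalent to the Spring JmsTemplate usage quoted above.
public class DelayedSendExample {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("example.timeouts");

            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryDelay(30_000); // broker should not deliver before 30 seconds pass

            TextMessage message = session.createTextMessage("timeout payload");
            message.setStringProperty("objectType", "TimeoutMsg");
            producer.send(message);
        }
    }
}
{noformat}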


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (QPIDJMS-391) Add support for the use of native OpenSSL based providers

2018-08-15 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned QPIDJMS-391:


Assignee: Timothy Bish

> Add support for the use of native OpenSSL based providers
> -
>
> Key: QPIDJMS-391
> URL: https://issues.apache.org/jira/browse/QPIDJMS-391
> Project: Qpid JMS
>  Issue Type: New Feature
>  Components: qpid-jms-client
>Reporter: Johan Stenberg
>Assignee: Timothy Bish
>Priority: Minor
>  Labels: performance
> Fix For: 0.36.0
>
>
> It would be great to have an option to use netty-tcnative-boringssl-static 
> instead of the Java-based SSL provider. In ActiveMQ Artemis this was 
> implemented as part of https://issues.apache.org/jira/browse/ARTEMIS-1649
> In Netty 4.1.26 a new OpenSslX509KeyManagerFactory was introduced for easier 
> configuration. https://github.com/netty/netty/pull/8084 
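
For context, the request maps to Netty's pluggable SslProvider; the sketch below 
only illustrates that underlying mechanism under the assumption that 
netty-tcnative-boringssl-static is on the classpath, and is not the qpid-jms 
configuration surface itself.

{noformat}
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

// Sketch: prefer the native OpenSSL/BoringSSL provider when the netty-tcnative
// native library is available, otherwise fall back to the JDK provider.
public final class NativeSslContextFactory {

    private NativeSslContextFactory() {}

    public static SslContext createClientContext() throws Exception {
        SslProvider provider = OpenSsl.isAvailable() ? SslProvider.OPENSSL : SslProvider.JDK;
        return SslContextBuilder.forClient()
                                .sslProvider(provider)
                                .build();
    }
}
{noformat}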



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-391) Add support for the use of native OpenSSL based providers

2018-08-15 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-391.
--
Resolution: Fixed

> Add support for the use of native OpenSSL based providers
> -
>
> Key: QPIDJMS-391
> URL: https://issues.apache.org/jira/browse/QPIDJMS-391
> Project: Qpid JMS
>  Issue Type: New Feature
>  Components: qpid-jms-client
>Reporter: Johan Stenberg
>Priority: Minor
>  Labels: performance
> Fix For: 0.36.0
>
>
> It would be great to have an option to use netty-tcnative-boringssl-static 
> instead of the Java-based SSL provider. In ActiveMQ Artemis this was 
> implemented as part of https://issues.apache.org/jira/browse/ARTEMIS-1649
> In Netty 4.1.26 a new OpenSslX509KeyManagerFactory was introduced for easier 
> configuration. https://github.com/netty/netty/pull/8084 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-10 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated PROTON-1911:
-
Fix Version/s: proton-j-0.29.0

> Performance issue in EncoderImpl#writeRaw(String)
> -
>
> Key: PROTON-1911
> URL: https://issues.apache.org/jira/browse/PROTON-1911
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.25.0, proton-j-0.28.0
>Reporter: Jens Reimann
>Priority: Major
>  Labels: pull-request-available
> Fix For: proton-j-0.29.0
>
> Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
> qpid_encode_4.png, strings_encode_after.json, strings_encode_before.json
>
>
> While digging into performance issues in the Eclipse Hono project I noticed a 
> high consumption of CPU time when encoding AMQP messages using proton-j.
> I made a small reproducer and threw the same profiler at it; here are the 
> results:
> As you can see in the attached screenshots (the first is the initial run with 
> the current code), most of the time is consumed in 
> EncoderImpl#writeRaw(String). This is due to the fact that it calls "put" for 
> every byte it wants to encode.
> The following screenshots are from a patched version which uses a small 
> thread-local buffer to locally encode the raw data and then flush it to the 
> output buffer in bigger chunks.
> Screenshots 3 and 4 show the improved performance and also show that the 
> memory consumption stays low.
>  

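A standalone sketch of the optimisation the reporter describes follows; the 
class name and the plain ByteBuffer sink are simplifications for illustration 
and do not reflect proton-j's actual EncoderImpl or WritableBuffer API.

{noformat}
import java.nio.ByteBuffer;

// Sketch: UTF-8 encode a String into a small reusable thread-local scratch array
// and flush it to the output in chunks, instead of issuing one put() per byte.
// Surrogate pairs are omitted for brevity.
public final class ChunkedUtf8Writer {

    private static final ThreadLocal<byte[]> SCRATCH = ThreadLocal.withInitial(() -> new byte[512]);

    private ChunkedUtf8Writer() {}

    public static void writeRaw(ByteBuffer output, String value) {
        byte[] scratch = SCRATCH.get();
        int used = 0;
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            // Flush the scratch array when it may not hold the largest encoding (3 bytes here).
            if (used > scratch.length - 3) {
                output.put(scratch, 0, used);
                used = 0;
            }
            if (c < 0x80) {
                scratch[used++] = (byte) c;
            } else if (c < 0x800) {
                scratch[used++] = (byte) (0xC0 | (c >> 6));
                scratch[used++] = (byte) (0x80 | (c & 0x3F));
            } else {
                scratch[used++] = (byte) (0xE0 | (c >> 12));
                scratch[used++] = (byte) (0x80 | ((c >> 6) & 0x3F));
                scratch[used++] = (byte) (0x80 | (c & 0x3F));
            }
        }
        if (used > 0) {
            output.put(scratch, 0, used);
        }
    }
}
{noformat}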


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-10 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16576553#comment-16576553
 ] 

Timothy Bish commented on PROTON-1911:
--

Fix applied with help from [~gemmellr] and [~nigro@gmail.com] 

> Performance issue in EncoderImpl#writeRaw(String)
> -
>
> Key: PROTON-1911
> URL: https://issues.apache.org/jira/browse/PROTON-1911
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.25.0, proton-j-0.28.0
>Reporter: Jens Reimann
>Priority: Major
>  Labels: pull-request-available
> Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
> qpid_encode_4.png, strings_encode_after.json, strings_encode_before.json
>
>
> While digging into performance issues in the Eclipse Hono project I noticed a 
> high consumption of CPU time when encoding AMQP messages using proton-j.
> I made a small reproducer and threw the same profiler at it; here are the 
> results:
> As you can see in the attached screenshots (the first is the initial run with 
> the current code), most of the time is consumed in 
> EncoderImpl#writeRaw(String). This is due to the fact that it calls "put" for 
> every byte it wants to encode.
> The following screenshots are from a patched version which uses a small 
> thread-local buffer to locally encode the raw data and then flush it to the 
> output buffer in bigger chunks.
> Screenshots 3 and 4 show the improved performance and also show that the 
> memory consumption stays low.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-407) Reconnect not working reliably for connections with more than 1 producer JMS session

2018-08-08 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-407:
-
Fix Version/s: 0.36.0

> Reconnect not working reliably for connections with more than 1 producer JMS 
> session
> 
>
> Key: QPIDJMS-407
> URL: https://issues.apache.org/jira/browse/QPIDJMS-407
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.35.0
>Reporter: Johan Stenberg
>Assignee: Timothy Bish
>Priority: Critical
> Fix For: 0.36.0
>
> Attachments: QPIDJMS-407.zip
>
>
> When a JMS connection with more than one producer session loses the 
> underlying TCP connection to the broker, auto reconnect (failover) does not 
> work properly. After the reconnect attempt no new messages are sent.
> When only one producer session is used, reconnect apparently works as 
> expected.
> I attached a Maven project with a test case where the TCP connection is 
> dropped by the broker to provoke the reconnect attempt. In most cases when I 
> run the test class, *testAutoReconnectWith2ProducerSessions()* stops 
> sending messages after the first reconnect attempt. Maybe some Qpid-internal 
> race condition occurs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-407) Reconnect not working reliably for connections with more than 1 producer JMS session

2018-08-08 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-407.
--
Resolution: Fixed

> Reconnect not working reliably for connections with more than 1 producer JMS 
> session
> 
>
> Key: QPIDJMS-407
> URL: https://issues.apache.org/jira/browse/QPIDJMS-407
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.35.0
>Reporter: Johan Stenberg
>Assignee: Timothy Bish
>Priority: Critical
> Fix For: 0.36.0
>
> Attachments: QPIDJMS-407.zip
>
>
> When a JMS connection with more than one producer session loses the 
> underlying TCP connection to the broker, auto reconnect (failover) does not 
> work properly. After the reconnect attempt no new messages are sent.
> When only one producer session is used, reconnect apparently works as 
> expected.
> I attached a Maven project with a test case where the TCP connection is 
> dropped by the broker to provoke the reconnect attempt. In most cases when I 
> run the test class, *testAutoReconnectWith2ProducerSessions()* stops 
> sending messages after the first reconnect attempt. Maybe some Qpid-internal 
> race condition occurs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (QPIDJMS-407) Reconnect not working reliably for connections with more than 1 producer JMS session

2018-08-07 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned QPIDJMS-407:


Assignee: Timothy Bish

> Reconnect not working reliably for connections with more than 1 producer JMS 
> session
> 
>
> Key: QPIDJMS-407
> URL: https://issues.apache.org/jira/browse/QPIDJMS-407
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.35.0
>Reporter: Johan Stenberg
>Assignee: Timothy Bish
>Priority: Critical
> Attachments: QPIDJMS-407.zip
>
>
> When a JMS connection with more than one producer session loses the 
> underlying TCP connection to the broker, auto reconnect (failover) does not 
> work properly. After the reconnect attempt no new messages are sent.
> When only one producer session is used, reconnect apparently works as 
> expected.
> I attached a Maven project with a test case where the TCP connection is 
> dropped by the broker to provoke the reconnect attempt. In most cases when I 
> run the test class, *testAutoReconnectWith2ProducerSessions()* stops 
> sending messages after the first reconnect attempt. Maybe some Qpid-internal 
> race condition occurs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-407) Reconnect not working reliably for connections with more than 1 producer JMS session

2018-08-03 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16568828#comment-16568828
 ] 

Timothy Bish commented on QPIDJMS-407:
--

Does the same thing happen if you avoid the use of async completions?
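
For clarity, "async completions" here refers to the JMS 2.0 asynchronous send 
callback; the sketch below contrasts it with a plain blocking send so the two 
code paths can be compared, with the session, producer and message body assumed 
to already exist.

{noformat}
import javax.jms.CompletionListener;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;

// Sketch contrasting a JMS 2.0 asynchronous send (an "async completion") with a
// plain blocking send.
public final class SendModes {

    private SendModes() {}

    public static void sendAsync(Session session, MessageProducer producer, String body) throws JMSException {
        Message message = session.createTextMessage(body);
        producer.send(message, new CompletionListener() {
            @Override
            public void onCompletion(Message completed) {
                // The send finished successfully.
            }

            @Override
            public void onException(Message failed, Exception cause) {
                cause.printStackTrace();
            }
        });
    }

    public static void sendSync(Session session, MessageProducer producer, String body) throws JMSException {
        // Blocking variant for comparison.
        producer.send(session.createTextMessage(body));
    }
}
{noformat}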

> Reconnect not working reliably for connections with more than 1 producer JMS 
> session
> 
>
> Key: QPIDJMS-407
> URL: https://issues.apache.org/jira/browse/QPIDJMS-407
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.35.0
>Reporter: Johan Stenberg
>Priority: Critical
> Attachments: QPIDJMS-407.zip
>
>
> When a JMS connection with more than one producer session loses the 
> underlying TCP connection to the broker, auto reconnect (failover) does not 
> work properly. After the reconnect attempt no new messages are sent.
> When only one producer session is used, reconnect apparently works as 
> expected.
> I attached a Maven project with a test case where the TCP connection is 
> dropped by the broker to provoke the reconnect attempt. In most cases when I 
> run the test class, *testAutoReconnectWith2ProducerSessions()* stops 
> sending messages after the first reconnect attempt. Maybe some Qpid-internal 
> race condition occurs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-391) Add support for the use of native OpenSSL based providers

2018-08-02 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-391:
-
Fix Version/s: 0.36.0

> Add support for the use of native OpenSSL based providers
> -
>
> Key: QPIDJMS-391
> URL: https://issues.apache.org/jira/browse/QPIDJMS-391
> Project: Qpid JMS
>  Issue Type: New Feature
>  Components: qpid-jms-client
>Reporter: Johan Stenberg
>Priority: Minor
>  Labels: performance
> Fix For: 0.36.0
>
>
> It would be great to have an option to use netty-tcnative-boringssl-static 
> instead of the Java-based SSL provider. In ActiveMQ Artemis this was 
> implemented as part of https://issues.apache.org/jira/browse/ARTEMIS-1649
> In Netty 4.1.26 a new OpenSslX509KeyManagerFactory was introduced for easier 
> configuration. https://github.com/netty/netty/pull/8084 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPIDJMS-391) Add support for the use of native OpenSSL based providers

2018-08-02 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish updated QPIDJMS-391:
-
Summary: Add support for the use of native OpenSSL based providers  (was: 
Add support for SSL provider netty-tcnative-boringssl-static)

> Add support for the use of native OpenSSL based providers
> -
>
> Key: QPIDJMS-391
> URL: https://issues.apache.org/jira/browse/QPIDJMS-391
> Project: Qpid JMS
>  Issue Type: New Feature
>  Components: qpid-jms-client
>Reporter: Johan Stenberg
>Priority: Minor
>  Labels: performance
> Fix For: 0.36.0
>
>
> It would be great to have an option to use netty-tcnative-boringssl-static 
> instead of the Java-based SSL provider. In ActiveMQ Artemis this was 
> implemented as part of https://issues.apache.org/jira/browse/ARTEMIS-1649
> In Netty 4.1.26 a new OpenSslX509KeyManagerFactory was introduced for easier 
> configuration. https://github.com/netty/netty/pull/8084 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-401) Minor code cleanups and performance improvements

2018-07-18 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-401.
--
Resolution: Fixed

> Minor code cleanups and performance improvements
> 
>
> Key: QPIDJMS-401
> URL: https://issues.apache.org/jira/browse/QPIDJMS-401
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.34.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.35.0
>
>
> Many recent changes leave some older code in need of housekeeping; clean up 
> some code based on recent changes and review.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (QPIDJMS-402) Massive performance degradation in 0.34.0

2018-07-18 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed QPIDJMS-402.

   Resolution: Fixed
Fix Version/s: 0.35.0

> Massive performance degradation in 0.34.0
> -
>
> Key: QPIDJMS-402
> URL: https://issues.apache.org/jira/browse/QPIDJMS-402
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.34.0
> Environment: Windows 7x64 + Oracle JDK 8u161x64
> Windows 7x64 + Open JDK 8u171x64
> CloudFoundry (Ubuntu Trusty) + Open JDK 8u172x64
>Reporter: Johan Stenberg
>Priority: Critical
> Fix For: 0.35.0
>
> Attachments: QpidJms402_PerfTest.java, 
> image-2018-07-13-16-39-19-707.png, qpidjms402.zip
>
>
> This is a follow-up issue for 
> [http://qpid.2158936.n2.nabble.com/qpid-jms-Severe-performance-issue-after-upgrading-from-0-33-0-to-0-34-0-td7678052.html]
> I am attaching a simple test case that shows the issue. When I use Qpid JMS 
> 0.33 I get 2000 msg/s send + receive on my local machine. When I switch to 
> 0.34 the message rate drops to 20 msg/s.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-404) Performance regressions on some platforms using new ProviderFuture implementation

2018-07-18 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-404:


 Summary: Performance regressions on some platforms using new 
ProviderFuture implementation
 Key: QPIDJMS-404
 URL: https://issues.apache.org/jira/browse/QPIDJMS-404
 Project: Qpid JMS
  Issue Type: Bug
  Components: qpid-jms-client
Affects Versions: 0.34.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.35.0


The new ProviderFuture implementation introduced in 0.34.0 relies on a stepped 
spin / wait algorithm that backs off the spin using yields and short parks, 
eventually ending in a wait / notify pattern if the event hasn't completed. On 
some platforms the length of a park can be substantially longer than requested, 
which leads to missing the event completion for long periods of time and 
reduces performance.

Introduce a set of ProviderFuture implementations that can be used on platforms 
where the stepped spin / wait variant causes regressions in performance. 
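
The general shape of such a stepped spin / wait is sketched below; this is only 
an illustration of the progression described (spin, yield, short parks, then 
wait / notify), not the client's ProviderFuture code, and the iteration counts 
are arbitrary.

{noformat}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Illustrative stepped spin / wait: busy-spin briefly, then yield, then park for
// short periods, and finally fall back to a classic wait / notify.
public class SteppedFuture {

    private final AtomicBoolean complete = new AtomicBoolean(false);

    public void onSuccess() {
        complete.set(true);
        synchronized (this) {
            notifyAll();   // wake any caller that reached the wait / notify stage
        }
    }

    public void sync() throws InterruptedException {
        // Stage 1: short busy spin for very fast completions.
        for (int i = 0; i < 1000 && !complete.get(); i++) {
            // spin
        }
        // Stage 2: yield the CPU between checks.
        for (int i = 0; i < 100 && !complete.get(); i++) {
            Thread.yield();
        }
        // Stage 3: short parks; on some platforms a park can overshoot badly,
        // which is the regression this issue describes.
        for (int i = 0; i < 100 && !complete.get(); i++) {
            LockSupport.parkNanos(TimeUnit.MICROSECONDS.toNanos(10));
        }
        // Stage 4: timed wait / notify until the completion is observed.
        synchronized (this) {
            while (!complete.get()) {
                wait(10);
            }
        }
    }
}
{noformat}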



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-403) Failover handler doesn't release pending tasks that could complete on connection drop

2018-07-17 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-403.
--
Resolution: Fixed

> Failover handler doesn't release pending tasks that could complete on 
> connection drop
> -
>
> Key: QPIDJMS-403
> URL: https://issues.apache.org/jira/browse/QPIDJMS-403
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.34.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Minor
> Fix For: 0.35.0
>
>
> Some tasks that are in flight in the Failover handler could complete when a 
> connection drops instead of waiting for a reconnect to occur, but their 
> offline behavior handler is not triggered when the connection drop is 
> detected, so they are forced to wait until a reconnect before they complete.  
> One such example is a session close: when it is in flight and the connection 
> drops we can allow the request to complete, because there is nothing more to 
> do for that request on reconnect. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-403) Failover handler doesn't release pending tasks that could complete on connection drop

2018-07-17 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-403:


 Summary: Failover handler doesn't release pending tasks that could 
complete on connection drop
 Key: QPIDJMS-403
 URL: https://issues.apache.org/jira/browse/QPIDJMS-403
 Project: Qpid JMS
  Issue Type: Bug
  Components: qpid-jms-client
Affects Versions: 0.34.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.35.0


Some tasks that are in flight in the Failover handler could complete when a 
connection drops instead of waiting for a reconnect to occur, but their offline 
behavior handler is not triggered when the connection drop is detected, so they 
are forced to wait until a reconnect before they complete.  One such example is 
a session close: when it is in flight and the connection drops we can allow the 
request to complete, because there is nothing more to do for that request on 
reconnect. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-401) Minor code cleanups and performance improvements

2018-07-11 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-401:


 Summary: Minor code cleanups and performance improvements
 Key: QPIDJMS-401
 URL: https://issues.apache.org/jira/browse/QPIDJMS-401
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.34.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.35.0


Many recent changes leave some older code in need of housekeeping; clean up 
some code based on recent changes and review.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-399) Add ability to split a write and flush on the Transport into two operations

2018-06-28 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-399.
--
Resolution: Fixed

> Add ability to split a write and flush on the Transport into two operations
> ---
>
> Key: QPIDJMS-399
> URL: https://issues.apache.org/jira/browse/QPIDJMS-399
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.34.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.35.0
>
>
> Enhance the current Transport layer to allow for split write and flush 
> operations, which can improve performance in some cases, such as batching 
> writes on larger messages or writing commands to the transport and delaying 
> the flush until later. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-399) Add ability to split a write and flush on the Transport into two operations

2018-06-28 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-399:


 Summary: Add ability to split a write and flush on the Transport 
into two operations
 Key: QPIDJMS-399
 URL: https://issues.apache.org/jira/browse/QPIDJMS-399
 Project: Qpid JMS
  Issue Type: Improvement
  Components: qpid-jms-client
Affects Versions: 0.34.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.35.0


Enhance the current Transport layer to allow for split write and flush 
operations, which can improve performance in some cases, such as batching 
writes on larger messages or writing commands to the transport and delaying the 
flush until later. 
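
At the interface level the split could look like the sketch below; the interface 
and method names are illustrative rather than the client's actual Transport 
contract.

{noformat}
import java.io.IOException;
import java.nio.ByteBuffer;

// Sketch of a transport API where writing and flushing are separate operations,
// so several writes can be queued and pushed onto the wire with a single flush.
public interface BatchingTransport {

    /** Queue a buffer for sending without forcing it onto the wire yet. */
    void write(ByteBuffer buffer) throws IOException;

    /** Queue a buffer and flush everything queued so far in one operation. */
    void writeAndFlush(ByteBuffer buffer) throws IOException;

    /** Push any queued writes onto the wire. */
    void flush() throws IOException;
}
{noformat}

A sender could then, for example, write a frame header and a large message body 
as separate buffers and flush once, which is where the batching benefit comes 
from.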



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-398) Update testing dependencies and bundle plugin to latest

2018-06-26 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-398.
--
Resolution: Fixed

> Update testing dependencies and bundle plugin to latest
> ---
>
> Key: QPIDJMS-398
> URL: https://issues.apache.org/jira/browse/QPIDJMS-398
> Project: Qpid JMS
>  Issue Type: Task
>  Components: qpid-jms-client
>Affects Versions: 0.34.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.35.0
>
>
> Update the test dependencies for Mockito and Jetty to latest and switch to 
> latest bundle plugin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-398) Update testing dependencies and bundle plugin to latest

2018-06-26 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-398:


 Summary: Update testing dependencies and bundle plugin to latest
 Key: QPIDJMS-398
 URL: https://issues.apache.org/jira/browse/QPIDJMS-398
 Project: Qpid JMS
  Issue Type: Task
  Components: qpid-jms-client
Affects Versions: 0.34.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.35.0


Update the test dependencies for Mockito and Jetty to latest and switch to 
latest bundle plugin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-397) Update to the current v19 apache parent pom

2018-06-26 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-397.
--
Resolution: Fixed

> Update to the current v19 apache parent pom
> ---
>
> Key: QPIDJMS-397
> URL: https://issues.apache.org/jira/browse/QPIDJMS-397
> Project: Qpid JMS
>  Issue Type: Task
>  Components: qpid-jms-client
>Affects Versions: 0.34.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.35.0
>
>
> The build should be updated to a current Apache parent pom version to use 
> more current plugins etc. and better align with the other Java components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-397) Update to the current v19 apache parent pom

2018-06-26 Thread Timothy Bish (JIRA)
Timothy Bish created QPIDJMS-397:


 Summary: Update to the current v19 apache parent pom
 Key: QPIDJMS-397
 URL: https://issues.apache.org/jira/browse/QPIDJMS-397
 Project: Qpid JMS
  Issue Type: Task
  Components: qpid-jms-client
Affects Versions: 0.34.0
Reporter: Timothy Bish
Assignee: Timothy Bish
 Fix For: 0.35.0


The build should be updated to a current Apache parent pom version to use more 
current plugins etc. and better align with the other Java components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-395) connection:forced leads to JMSException even though reconnect is enabled

2018-06-25 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-395.
--
Resolution: Fixed

Fixed such that resends are attempted on connection remote close, and a more 
specific exception is thrown when not using failover, to aid in debugging. 

> connection:forced leads to JMSException even though reconnect is enabled
> 
>
> Key: QPIDJMS-395
> URL: https://issues.apache.org/jira/browse/QPIDJMS-395
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.33.0
>Reporter: Jiri Daněk
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.34.0
>
>
> Based on 
> http://qpid.2158936.n2.nabble.com/Reconnect-and-amqp-connection-forced-td7659043.html,
>  I believe that the connection:forced error should not be propagated to the 
> library user and the library should silently reconnect. This does not happen 
> in the test below when I am sending messages fast; I do get an exception 
> caused by connection:forced. Notice the commented-out sleep() call.
> In ActiveMQ Artemis testsuite:
> {noformat}
> diff --git 
> a/tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpFailoverEndpointDiscoveryTest.java
>  
> b/tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpFailoverEndpointDiscoveryTest.java
> index 81c28855ef..888171227b 100644
> --- 
> a/tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpFailoverEndpointDiscoveryTest.java
> +++ 
> b/tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpFailoverEndpointDiscoveryTest.java
> @@ -86,6 +86,32 @@ public class AmqpFailoverEndpointDiscoveryTest extends 
> FailoverTestBase {
>}
> }
>  
> +   @Test(timeout = 12)
> +   public void testReconnectWhileSendingWithAMQP() throws Exception {
> +  JmsConnectionFactory factory = getJmsConnectionFactory();
> +  try (Connection connection = factory.createConnection()) {
> + connection.start();
> + Session session = connection.createSession(false, 
> Session.AUTO_ACKNOWLEDGE);
> + javax.jms.Queue queue = session.createQueue(ADDRESS.toString());
> + MessageProducer producer = session.createProducer(queue);
> + Thread t = new Thread(() -> {
> +try {
> +   while(true) {
> +  System.out.println("sending message");
> +  producer.send(session.createTextMessage("hello before 
> failover"));
> +//  Thread.sleep(1000);  // comment out to send messages 
> quickly
> +   }
> +} catch (Exception e ) {
> +   e.printStackTrace();
> +}
> + });
> + t.start();
> + Thread.sleep(2000);  // simpler to read than actual synchronization
> + liveServer.crash(true, true);
> + Thread.sleep(2000);
> +  }
> +   }
> +
> private JmsConnectionFactory getJmsConnectionFactory() {
>if (protocol == 0) {
>   return new 
> JmsConnectionFactory("failover:(amqp://localhost:61616)");
> {noformat}
> The above will print (only print, there aren't asserts)
> {noformat}
> javax.jms.JMSException: Received error from remote peer without description 
> [condition = amqp:connection:forced]
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:164)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:117)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpAbstractResource.processRemoteClose(AmqpAbstractResource.java:262)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:971)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:105)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:854)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The PN_TRACE_FRM log for this is
> {noformat}
> [...]
> [1476077082:1] -> Transfer{handle=0, deliveryId=221, 

[jira] [Commented] (QPIDJMS-395) connection:forced leads to JMSException even though reconnect is enabled

2018-06-25 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522592#comment-16522592
 ] 

Timothy Bish commented on QPIDJMS-395:
--

Fix for handling connection remote closed and resending pending messages:
https://git1-us-west.apache.org/repos/asf?p=qpid-jms.git;a=commit;h=fe0307b58beb7ec344f728e34a7ca0e3ef103add

{noformat}
QPIDJMS-395 Resend message in flight when remote closed

When the remote closes the connection and an inflight send
is outstanding we should handle the close and resend those
messages that are still awaiting dispositions in the same
manner as we do when the connection unexpectedly drops when
using the Failover feature.
{noformat}
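
As a hypothetical illustration of that behaviour (not the actual provider code), 
sends still awaiting a disposition can be tracked and handed back to the 
failover layer for resend when the remote closes the connection:

{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: track in-flight sends by delivery id and drain them for
// resend after a remote connection close, instead of failing them back to the
// application.
public class InFlightSendTracker<T> {

    private final Map<Long, T> awaitingDisposition = new ConcurrentHashMap<>();

    public void onSend(long deliveryId, T envelope) {
        awaitingDisposition.put(deliveryId, envelope);
    }

    public void onDisposition(long deliveryId) {
        awaitingDisposition.remove(deliveryId);
    }

    // Called when the remote peer closes the connection: hand the pending sends
    // back to the failover layer so they are resent after reconnect.
    public List<T> drainForResend() {
        List<T> pending = new ArrayList<>(awaitingDisposition.values());
        awaitingDisposition.clear();
        return pending;
    }
}
{noformat}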

> connection:forced leads to JMSException even though reconnect is enabled
> 
>
> Key: QPIDJMS-395
> URL: https://issues.apache.org/jira/browse/QPIDJMS-395
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.33.0
>Reporter: Jiri Daněk
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.34.0
>
>
> Based on 
> http://qpid.2158936.n2.nabble.com/Reconnect-and-amqp-connection-forced-td7659043.html,
>  I believe that the connection:forced error should not be propagated to the 
> library user and the library should silently reconnect. This does not happen 
> in the test below when I am sending messages fast; I do get an exception 
> caused by connection:forced. Notice the commented-out sleep() call.
> In ActiveMQ Artemis testsuite:
> {noformat}
> diff --git 
> a/tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpFailoverEndpointDiscoveryTest.java
>  
> b/tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpFailoverEndpointDiscoveryTest.java
> index 81c28855ef..888171227b 100644
> --- 
> a/tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpFailoverEndpointDiscoveryTest.java
> +++ 
> b/tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/amqp/AmqpFailoverEndpointDiscoveryTest.java
> @@ -86,6 +86,32 @@ public class AmqpFailoverEndpointDiscoveryTest extends 
> FailoverTestBase {
>}
> }
>  
> +   @Test(timeout = 12)
> +   public void testReconnectWhileSendingWithAMQP() throws Exception {
> +  JmsConnectionFactory factory = getJmsConnectionFactory();
> +  try (Connection connection = factory.createConnection()) {
> + connection.start();
> + Session session = connection.createSession(false, 
> Session.AUTO_ACKNOWLEDGE);
> + javax.jms.Queue queue = session.createQueue(ADDRESS.toString());
> + MessageProducer producer = session.createProducer(queue);
> + Thread t = new Thread(() -> {
> +try {
> +   while(true) {
> +  System.out.println("sending message");
> +  producer.send(session.createTextMessage("hello before 
> failover"));
> +//  Thread.sleep(1000);  // comment out to send messages 
> quickly
> +   }
> +} catch (Exception e ) {
> +   e.printStackTrace();
> +}
> + });
> + t.start();
> + Thread.sleep(2000);  // simpler to read than actual synchronization
> + liveServer.crash(true, true);
> + Thread.sleep(2000);
> +  }
> +   }
> +
> private JmsConnectionFactory getJmsConnectionFactory() {
>if (protocol == 0) {
>   return new 
> JmsConnectionFactory("failover:(amqp://localhost:61616)");
> {noformat}
> The above will print (only print, there aren't asserts)
> {noformat}
> javax.jms.JMSException: Received error from remote peer without description 
> [condition = amqp:connection:forced]
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:164)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:117)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpAbstractResource.processRemoteClose(AmqpAbstractResource.java:262)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:971)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:105)
>   at 
> org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:854)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> 

[jira] [Resolved] (QPIDJMS-396) Performance improvements for inter-thread event signalling

2018-06-25 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved QPIDJMS-396.
--
Resolution: Fixed

> Performance improvements for inter-thread event signalling 
> ---
>
> Key: QPIDJMS-396
> URL: https://issues.apache.org/jira/browse/QPIDJMS-396
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.33.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.34.0
>
>
> Improve the ability to detect and respond to event completions of requests 
> being handled by different threads within the client without incurring the 
> overhead of signals and wait completions.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-396) Performance improvements for inter-thread event signalling

2018-06-25 Thread Timothy Bish (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522560#comment-16522560
 ] 

Timothy Bish commented on QPIDJMS-396:
--

Commit that resolves this:
https://git1-us-west.apache.org/repos/asf?p=qpid-jms.git;a=commit;h=c66d888114021da31d9032c841c08903dd31cc89

{quote}
QPIDJMS-396 Allow for faster reaction times on sync operations

For sync operations from the JMS layer into the provider we can
more quickly process the events by using a spin-wait future that
checks in a short spin for the completion of the target event.
The spin will back off and eventually back down to a parked wait
that will be signalled by the normal wait / notify pattern.
{quote}

> Performance improvements for inter-thread event signalling 
> ---
>
> Key: QPIDJMS-396
> URL: https://issues.apache.org/jira/browse/QPIDJMS-396
> Project: Qpid JMS
>  Issue Type: Improvement
>  Components: qpid-jms-client
>Affects Versions: 0.33.0
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 0.34.0
>
>
> Improve the ability to detect and respond to event completions of requests 
> being handled by different threads within the client without incurring the 
> overhead of signals and wait completions.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org


