[jira] [Commented] (QPID-6701) [Regression btw 0.30 - 0.32] If address doesn't resolve an exception is not thrown

2015-08-21 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706717#comment-14706717
 ] 

Rajith Attapattu commented on QPID-6701:


A couple of points:

1. The assert was to verify that the node has the supplied properties, e.g. 
durable, or whether it's a queue or an exchange.

2. Historically, if a node didn't resolve, an exception was thrown. Whether 
this was right or wrong, it is behavior that users had relied upon. Changing 
it is causing issues with existing customers, which is why the users are 
treating it as a regression.

3. While a consumer might fail, a producer will not. It will continue to send 
and the exchange will drop the messages.

4. If you want to allow addresses that are neither queues nor topics, then make 
that behavior explicitly configurable and leave the existing behavior the 
default.
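Point 4 could be sketched roughly like this. The class name, the "qpid.allow_unresolved_addresses" system property and the exception type are all illustrative, not part of the actual client API; the point is only that strict resolution stays the default and leniency is an explicit opt-in.

```java
// Hedged sketch of point 4: strict resolution stays the default, leniency is
// an explicit opt-in. The class, the "qpid.allow_unresolved_addresses" system
// property and the exception type are illustrative, not the real client API.
final class AddressResolution {

    /** Thrown when an address resolves to neither a queue nor an exchange. */
    static class UnresolvedAddressException extends RuntimeException {
        UnresolvedAddressException(String address) {
            super("Address did not resolve to a queue or exchange: " + address);
        }
    }

    private final boolean allowUnresolved;

    AddressResolution() {
        // Default preserves the pre-0.32 behavior: unresolved addresses fail fast.
        this(Boolean.getBoolean("qpid.allow_unresolved_addresses"));
    }

    AddressResolution(boolean allowUnresolved) {
        this.allowUnresolved = allowUnresolved;
    }

    /** Returns whether the node resolved; throws unless leniency was requested. */
    boolean checkResolved(String address, boolean resolved) {
        if (!resolved && !allowUnresolved) {
            throw new UnresolvedAddressException(address);
        }
        return resolved;
    }
}
```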



 [Regression btw 0.30 - 0.32] If address doesn't resolve an exception is not 
 thrown
 --

 Key: QPID-6701
 URL: https://issues.apache.org/jira/browse/QPID-6701
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.32
Reporter: Rajith Attapattu
Assignee: Keith Wall
Priority: Blocker
 Fix For: qpid-java-6.0


 If you run java -cp $CP org.apache.qpid.example.Spout non-existing-node:
 1. In 0.30 you get an exception with the cause org.apache.qpid.AMQException: 
 Exception occured while verifying destination
 2. In 0.32 no such exception is thrown.
 The issue is in the resolveAddress method in the AMQSession class.
 If resolved is false, no action is taken. There are a couple of issues with this:
 1. A producer can be created for a non-existent queue or exchange.
 2. Messages being dropped - while sending to a non-existent exchange will 
 result in an error, sending to a non-existent queue via an exchange will 
 simply result in messages being dropped. 
 3. The address will continue to be treated as resolved, as there was no error 
 the previous time.
 We should throw an exception if resolved == false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-6701) [Regression btw 0.30 - 0.32] If address doesn't resolve an exception is not thrown

2015-08-20 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-6701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705746#comment-14705746
 ] 

Rajith Attapattu commented on QPID-6701:


QPID-6040 (rev 1620659) changed the algorithm in AMQSession#resolveAddress so 
that the two switch blocks (QUEUE_TYPE and TOPIC_TYPE) 'break' regardless of 
whether they consider the destination to be resolved.

This is wrong, as it causes the issues mentioned in the description of this 
JIRA.
Instead of reverting, we could simply throw an exception if the address doesn't 
resolve. After all, that is what we expect from the address resolution method. 
We have several customers who rely on the address being resolved properly, 
which is why we want this fixed.

If there are any test failures after this simple change, we should adjust the 
test behavior.
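The control flow being discussed can be sketched as follows. This is illustrative only (the real logic lives in AMQSession#resolveAddress, and nodeExists() here is a stand-in for the broker query): with unconditional breaks an unresolved address falls through silently, so the suggested fix is an explicit check after the switch.

```java
// Illustrative only: the shape of the control flow in AMQSession#resolveAddress,
// not the real client code. nodeExists() stands in for the broker query.
final class ResolveSketch {
    enum NodeType { QUEUE_TYPE, TOPIC_TYPE }

    static boolean nodeExists(NodeType type, String name) {
        return "known".equals(name); // pretend only "known" exists on the broker
    }

    static void resolveAddress(NodeType type, String name) {
        boolean resolved = false;
        switch (type) {
            case QUEUE_TYPE:
                resolved = nodeExists(NodeType.QUEUE_TYPE, name);
                break; // QPID-6040 made these breaks unconditional...
            case TOPIC_TYPE:
                resolved = nodeExists(NodeType.TOPIC_TYPE, name);
                break;
        }
        // ...so the suggested fix is to fail fast here instead of returning silently.
        if (!resolved) {
            throw new IllegalStateException("Address did not resolve: " + name);
        }
    }
}
```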







[jira] [Created] (QPID-6701) [Regression btw 0.30 - 0.32] If address doesn't resolve an exception is not thrown

2015-08-18 Thread Rajith Attapattu (JIRA)
Rajith Attapattu created QPID-6701:
--

 Summary: [Regression btw 0.30 - 0.32] If address doesn't resolve 
an exception is not thrown
 Key: QPID-6701
 URL: https://issues.apache.org/jira/browse/QPID-6701
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.32
Reporter: Rajith Attapattu
Priority: Blocker
 Fix For: qpid-java-6.0


If you run java -cp $CP org.apache.qpid.example.Spout non-existing-node:

1. In 0.30 you get an exception with the cause org.apache.qpid.AMQException: 
Exception occured while verifying destination

2. In 0.32 no such exception is thrown.

The issue is in the resolveAddress method in the AMQSession class.
If resolved is false, no action is taken. There are a couple of issues with this:

1. A producer can be created for a non-existent queue or exchange.

2. Messages being dropped - while sending to a non-existent exchange will 
result in an error, sending to a non-existent queue via an exchange will simply 
result in messages being dropped.

3. The address will continue to be treated as resolved, as there was no error 
the previous time.

We should throw an exception if resolved == false.






Re: Review Request 30379: Skeleton code for a possible approach for codec improvements in proton-j.

2015-02-02 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30379/
---

(Updated Feb. 2, 2015, 6:38 p.m.)


Review request for qpid and Rafael Schloming.


Repository: qpid-proton-git


Description
---

Skeleton code for a possible approach for codec improvements in proton-j.


Diffs (updated)
-

  proton-j/src/main/java/org/apache/qpid/proton/amqp/transport2/Flow.java 
388cbef 
  proton-j/src/main/java/org/apache/qpid/proton/codec2/DecodeException.java 
PRE-CREATION 
  proton-j/src/main/java/org/apache/qpid/proton/codec2/Decoder.java d61fdef 
  proton-j/src/main/java/org/apache/qpid/proton/codec2/Encodable.java 
PRE-CREATION 
  proton-j/src/main/java/org/apache/qpid/proton/codec2/POJOBuilder.java c95844b 
  proton-j/src/main/java/org/apache/qpid/proton/codec2/PerformativeFactory.java 
PRE-CREATION 
  
proton-j/src/main/java/org/apache/qpid/proton/codec2/TransportTypesDecoder.java 
a262760 
  
proton-j/src/main/java/org/apache/qpid/proton/codec2/TransportTypesEncoder.java 
c7c888c 
  proton-j/src/main/java/org/apache/qpid/proton/codec2/TypeRegistry.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/30379/diff/


Testing
---


Thanks,

rajith attapattu



Re: Review Request 30379: Skeleton code for a possible approach for codec improvements in proton-j.

2015-02-02 Thread rajith attapattu


 On Feb. 2, 2015, 9:41 p.m., Rafael Schloming wrote:
  proton-j/src/main/java/org/apache/qpid/proton/amqp/transport2/Flow.java, 
  line 217
  https://reviews.apache.org/r/30379/diff/2/?file=843887#file843887line217
 
  I'm not sure we should do this kind of validation here. This is really 
  a semantic check, I would stick to just syntactic checks in the decode, 
  e.g. if you get a String where you expect an int or something like that.
  
  I would put the sort of semantic check you have here in a separate 
  method since it would be useful to be able to a) omit it at runtime for 
  performance reasons, and b) it would also be useful to perform the same 
  validation prior to encoding.

I see your logic in having the semantic checks on their own, so they can be 
used for both encoding and decoding.

But when do you envision this being called during decoding? I want to 
understand what you meant by "omit it at runtime for performance reasons".
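The split under discussion might look like this minimal sketch. The class here is a simplified stand-in for the Flow performative (the real class carries many more fields): decode would keep purely syntactic checks, while a separate validate() carries the semantic ones and can be skipped at runtime or reused before encoding.

```java
// Simplified stand-in for the Flow performative (the real class has many more
// fields); illustrates moving semantic checks out of decode into a validate()
// that can be skipped at runtime or reused before encoding.
final class Flow {
    private long incomingWindow = -1;
    private long nextOutgoingId = -1;

    void setIncomingWindow(long w) { incomingWindow = w; }
    void setNextOutgoingId(long id) { nextOutgoingId = id; }

    /** Semantic validation, kept separate from the syntactic checks in decode. */
    void validate() {
        if (incomingWindow < 0) {
            throw new IllegalStateException("incoming-window is mandatory");
        }
        if (nextOutgoingId < 0) {
            throw new IllegalStateException("next-outgoing-id is mandatory");
        }
    }
}
```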


- rajith


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30379/#review70627
---





Re: Review Request 30379: Skeleton code for a possible approach for codec improvements in proton-j.

2015-01-28 Thread rajith attapattu


 On Jan. 28, 2015, 8:49 p.m., Rafael Schloming wrote:
  proton-j/src/main/java/org/apache/qpid/proton/amqp/transport2/Flow.java, 
  line 141
  https://reviews.apache.org/r/30379/diff/1/?file=839118#file839118line141
 
  do we want to specify the types for the Map here?

We should IMO. The current code does not. I was thinking Map<String, Object>.


 On Jan. 28, 2015, 8:49 p.m., Rafael Schloming wrote:
  proton-j/src/main/java/org/apache/qpid/proton/codec2/TransportTypesDecoder.java,
   line 32
  https://reviews.apache.org/r/30379/diff/1/?file=839119#file839119line32
 
  I'm not quite following how this part works. How do you know that the 
  data you have is a flow? Or is this a callback that is made when you've 
  detected a flow in the decoding stream? If the latter is the case, what is 
  actually invoking this?

Yes, this is called when we detect a flow performative in the decoding 
stream.
I haven't put together the part that is going to call this method yet.
I will do so shortly.


 On Jan. 28, 2015, 8:49 p.m., Rafael Schloming wrote:
  proton-j/src/main/java/org/apache/qpid/proton/codec2/TransportTypesEncoder.java,
   line 32
  https://reviews.apache.org/r/30379/diff/1/?file=839120#file839120line32
 
  It might be worth introducing an Encodable interface with 
  Encodable.encode(Encoder) and Encodable.decode(Decoder). That way you could 
  keep the encode and decode logic together in the performative itself. I 
  think having the encode/decode logic together would be good so that you 
  could easily spot if they are asymmetric. It would also ease code 
  generation if you end up going that route since you just need to generate 
  the one class.
  
  I think it would also give you a good way to supply the missing piece 
  of your decode picture that I mentioned above, since you could just have a 
  general purpose decoder that holds a map from a descriptor to a factory. 
  Then in your decoder when you get the descriptor callback you can lookup 
  and construct the appropriate Encodable and delegate to it.

I was initially hesitant to put any codec logic in the POJO version of the 
performatives.
But I think the approach you propose is a good compromise, and it would come in 
handy for code generation and for building a general-purpose decoder.

I will be posting another patch tonight with changes.
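The approach agreed above might be sketched roughly as follows. The interfaces and the descriptor value are illustrative stand-ins, not the actual proton-j types: each performative implements an Encodable that owns both directions of its codec logic, and a general-purpose decoder consults a descriptor-to-factory map when its descriptor callback fires.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the discussed approach (names are illustrative, not proton-j's):
// an Encodable keeps encode and decode logic together, and a registry maps a
// described type's descriptor to a factory the generic decoder can use.
interface Encoder { /* write methods elided */ }
interface Decoder { /* read methods elided */ }

interface Encodable {
    void encode(Encoder encoder);
    void decode(Decoder decoder);
}

interface EncodableFactory {
    Encodable create();
}

final class TypeRegistry {
    private static final Map<Long, EncodableFactory> FACTORIES = new HashMap<>();

    static void register(long descriptor, EncodableFactory factory) {
        FACTORIES.put(descriptor, factory);
    }

    /** Called from the decoder's descriptor callback: look up and construct. */
    static Encodable instantiate(long descriptor) {
        EncodableFactory factory = FACTORIES.get(descriptor);
        if (factory == null) {
            throw new IllegalArgumentException(
                "Unknown descriptor: 0x" + Long.toHexString(descriptor));
        }
        return factory.create();
    }
}
```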


- rajith


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30379/#review70071
---





Review Request 30379: Skeleton code for a possible approach for codec improvements in proton-j.

2015-01-28 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30379/
---

Review request for qpid and Rafael Schloming.


Repository: qpid-proton-git


Description
---

Skeleton code for a possible approach for codec improvements in proton-j.


Diffs
-

  proton-j/src/main/java/org/apache/qpid/proton/amqp/transport2/Flow.java 
PRE-CREATION 
  
proton-j/src/main/java/org/apache/qpid/proton/codec2/TransportTypesDecoder.java 
PRE-CREATION 
  
proton-j/src/main/java/org/apache/qpid/proton/codec2/TransportTypesEncoder.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/30379/diff/


Testing
---


Thanks,

rajith attapattu



[jira] [Updated] (QPID-5870) Closing a topic consumer should delete its exclusive auto-delete queue

2014-07-08 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-5870:
---

Attachment: QPID-5870.part2.patch

I added a check to ensure that the subscription queue is not deleted for 
durable subscribers.

All existing tests pass including the durable subscription tests.

 Closing a topic consumer should delete its exclusive auto-delete queue
 --

 Key: QPID-5870
 URL: https://issues.apache.org/jira/browse/QPID-5870
 Project: Qpid
  Issue Type: Bug
Reporter: Rajith Attapattu
 Attachments: QPID-5870.part2.patch, QPID-5870.patch


 When a topic consumer is closed, the subscription queue needs to be closed as 
 well.
 Currently this queue is only deleted when the session is closed (due to being 
 marked auto-deleted).






[jira] [Commented] (QPID-5870) Closing a topic consumer should delete its exclusive auto-delete queue

2014-07-08 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14055304#comment-14055304
 ] 

Rajith Attapattu commented on QPID-5870:


 but I wonder what happens around e.g. DurableSubscriptions, are they handled 
 appropriately? Needs some tests.

The testDurableSubscription test in AddressBasedDestination verifies this 
change.
If you run the test without part 2 of the patch, it fails.





[jira] [Commented] (QPID-5868) Java client ignores exceptions when waiting on sync

2014-07-04 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052509#comment-14052509
 ] 

Rajith Attapattu commented on QPID-5868:


As for the session being closed: once the message delivery lock is released, 
the IO receiver thread will continue and will call the closed method on the 
Session, which will mark it closed.

 Java client ignores exceptions when waiting on sync
 ---

 Key: QPID-5868
 URL: https://issues.apache.org/jira/browse/QPID-5868
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.27
Reporter: Rajith Attapattu
 Fix For: 0.29

 Attachments: QPID-5868.patch


 The java client will wait on the sync command even if an execution exception 
 is received from the broker.
 It will then proceed to throw a timeout exception and the execution exception 
 is not reported properly to the application.






[jira] [Commented] (QPID-5868) Java client ignores exceptions when waiting on sync

2014-07-04 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052508#comment-14052508
 ] 

Rajith Attapattu commented on QPID-5868:


Apologies for not putting enough info in the JIRA.

Robbie, your description is correct.
If an execution exception is thrown at the time close is initiated, you can 
easily reproduce this issue.

When notified of the exception, the AMQSession_0_10 tries to close the session. 
However, that thread is blocked waiting to grab the message delivery lock. That 
lock is already held by the thread that initiated the close, which is blocked 
(in a timed wait) on the commandsLock waiting for the sync to complete.
Once it times out, and since the session is still not closed, the exception 
thrown is the timeout exception and not the exception that caused the session 
close.
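The core idea of the fix can be sketched like this. It is a simplified stand-in, not the actual AMQSession_0_10 code: the execution exception is recorded where the sync wait can see it, so the waiter rethrows the real cause rather than a generic timeout.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simplified stand-in (not the real AMQSession_0_10 code): record the broker's
// execution exception where the sync wait can see it, so a timed-out sync
// rethrows the real cause instead of a generic timeout exception.
final class SyncWaiter {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition done = lock.newCondition();
    private Exception currentException;
    private boolean complete;

    /** Called from the IO thread when an execution exception arrives. */
    void setCurrentException(Exception e) {
        lock.lock();
        try {
            currentException = e;
            done.signalAll();
        } finally {
            lock.unlock();
        }
    }

    /** Called when the sync completes normally. */
    void markComplete() {
        lock.lock();
        try {
            complete = true;
            done.signalAll();
        } finally {
            lock.unlock();
        }
    }

    /** Waits for completion; prefers the recorded exception over a timeout. */
    void sync(long timeoutMillis) throws Exception {
        lock.lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
            while (!complete && currentException == null && nanos > 0) {
                nanos = done.awaitNanos(nanos);
            }
            if (currentException != null) {
                throw currentException; // the real cause, not a timeout
            }
            if (!complete) {
                throw new Exception("Timed out waiting for sync");
            }
        } finally {
            lock.unlock();
        }
    }
}
```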




[jira] [Commented] (QPID-5870) Closing a topic consumer should delete its exclusive auto-delete queue

2014-07-04 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052523#comment-14052523
 ] 

Rajith Attapattu commented on QPID-5870:


Again, I apologize for not providing enough context.

Yes, the key change here is that deleteSubscriptionQueue is always called.

This brings the JMS client in line with the address spec.
It is something customers have requested, and IMO it is the correct behavior as 
well.

The delete directive should only apply to node properties (like the exchange 
or queue being created).
The subscription queue should not exist beyond the subscription, except in the 
case of durable subscriptions.

I will make sure durable subscriptions aren't affected.
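The durable-subscription guard described above might reduce to a predicate like this hypothetical sketch (class and method names are illustrative, not the client's actual API): the subscription queue is torn down with the topic consumer unless the subscriber is durable.

```java
// Hypothetical sketch of the guard (names are illustrative): decide whether a
// consumer's subscription queue should be deleted when the consumer closes.
final class SubscriptionQueues {
    static boolean shouldDeleteOnClose(boolean isTopicConsumer,
                                       boolean isDurableSubscriber) {
        // Durable subscriptions must survive consumer close; any other queue
        // backing a topic consumer is torn down with the consumer itself.
        return isTopicConsumer && !isDurableSubscriber;
    }
}
```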




[jira] [Commented] (QPID-5868) Java client ignores exceptions when waiting on sync

2014-07-04 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14052722#comment-14052722
 ] 

Rajith Attapattu commented on QPID-5868:


I verified that both the transport session and the AMQSession_0_10 (JMS 
session) are marked closed by the time the exception is thrown to the 
application, which I think is the most important thing.

Both the sync method and the exception method (called via the listener 
interface) delegate to the setCurrentException method.
Therefore it doesn't matter which gets called first.

Without the patch, important exceptions are not reported, and customers have 
complained about it.
Therefore I believe it's important to get this patch in.




[jira] [Created] (QPID-5868) Java client ignores exceptions when waiting on sync

2014-07-02 Thread Rajith Attapattu (JIRA)
Rajith Attapattu created QPID-5868:
--

 Summary: Java client ignores exceptions when waiting on sync
 Key: QPID-5868
 URL: https://issues.apache.org/jira/browse/QPID-5868
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.27
Reporter: Rajith Attapattu
 Fix For: 0.29


The Java client will wait on the sync command even if an execution exception is 
received from the broker.
It will then proceed to throw a timeout exception, and the execution exception 
is not reported properly to the application.






[jira] [Updated] (QPID-5868) Java client ignores exceptions when waiting on sync

2014-07-02 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-5868:
---

Attachment: QPID-5868.patch

This patch suggests a potential fix for the issue.




[jira] [Created] (QPID-5869) Specifying an ACL file will cause a core dump when management is disabled

2014-07-02 Thread Rajith Attapattu (JIRA)
Rajith Attapattu created QPID-5869:
--

 Summary: Specifying an ACL file will cause a core dump when 
management is disabled
 Key: QPID-5869
 URL: https://issues.apache.org/jira/browse/QPID-5869
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.28
Reporter: Rajith Attapattu
 Fix For: 0.29


The root cause is that the ACL module tries to fire an event regardless of 
whether the management module is present or not.

Core was generated by `./qpidd --auth no -m no -t --acl-file ./data/acl.txt'.
Program terminated with signal 11, Segmentation fault.
#0  0x7f7e5e855bbd in qpid::management::ManagementAgent::raiseEvent(qpid::management::ManagementEvent const&, qpid::management::ManagementAgent::severity_t) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
Missing separate debuginfos, use: debuginfo-install 
boost-program-options-1.53.0-14.fc19.x86_64 libuuid-2.23.2-5.fc19.x86_64 
nspr-4.10.6-1.fc19.x86_64 nss-3.16.1-1.fc19.x86_64 nss-util-3.16.1-1.fc19.x86_64
(gdb) bt
#0  0x7f7e5e855bbd in qpid::management::ManagementAgent::raiseEvent(qpid::management::ManagementEvent const&, qpid::management::ManagementAgent::severity_t) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
#1  0x7f7e5e6bf20d in qpid::acl::Acl::readAclFile(std::string&, std::string&) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
#2  0x7f7e5e6bf0b7 in qpid::acl::Acl::readAclFile(std::string&) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
#3  0x7f7e5e6bdd23 in qpid::acl::Acl::Acl(qpid::acl::AclValues&, qpid::broker::Broker&) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
#4  0x7f7e5e6d4273 in qpid::acl::AclPlugin::init(qpid::broker::Broker&) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
#5  0x7f7e5e6d4851 in bool qpid::acl::AclPlugin::init<qpid::broker::Broker>(qpid::Plugin::Target&) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
#6  0x7f7e5e6d4425 in qpid::acl::AclPlugin::initialize(qpid::Plugin::Target&) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
#7  0x7f7e5e03287c in boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>::operator()(qpid::Plugin*, qpid::Plugin::Target&) const ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
#8  0x7f7e5e032263 in void boost::_bi::list2<boost::arg<1>, boost::reference_wrapper<qpid::Plugin::Target> >::operator()<boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list1<qpid::Plugin* const&> >(boost::_bi::type<void>, boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>&, boost::_bi::list1<qpid::Plugin* const&>&, int) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
#9  0x7f7e5e0317ca in void boost::_bi::bind_t<void, boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2<boost::arg<1>, boost::reference_wrapper<qpid::Plugin::Target> > >::operator()<qpid::Plugin*>(qpid::Plugin* const&) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
#10 0x7f7e5e030d43 in boost::_bi::bind_t<void, boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2<boost::arg<1>, boost::reference_wrapper<qpid::Plugin::Target> > > std::for_each<__gnu_cxx::__normal_iterator<qpid::Plugin* const*, std::vector<qpid::Plugin*, std::allocator<qpid::Plugin*> > >, boost::_bi::bind_t<void, boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2<boost::arg<1>, boost::reference_wrapper<qpid::Plugin::Target> > > >(__gnu_cxx::__normal_iterator<qpid::Plugin* const*, std::vector<qpid::Plugin*, std::allocator<qpid::Plugin*> > >, __gnu_cxx::__normal_iterator<qpid::Plugin* const*, std::vector<qpid::Plugin*, std::allocator<qpid::Plugin*> > >, boost::_bi::bind_t<void, boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2<boost::arg<1>, boost::reference_wrapper<qpid::Plugin::Target> > >) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
#11 0x7f7e5e02fa3b in void qpid::(anonymous namespace)::each_plugin<boost::_bi::bind_t<void, boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2<boost::arg<1>, boost::reference_wrapper<qpid::Plugin::Target> > > >(boost::_bi::bind_t<void, boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2<boost::arg<1>, boost::reference_wrapper<qpid::Plugin::Target> > > const&) ()
   from /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
#12 0x7f7e5e02f9cb in qpid::Plugin::initializeAll(qpid::Plugin::Target) ()
   from /home

[jira] [Updated] (QPID-5869) Specifying an invalid ACL file will cause a core dump when management is disabled

2014-07-02 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-5869:
---

Summary: Specifying an invalid ACL file will cause a core dump when 
management is disabled  (was: Specifying an ACL file will cause a core dump 
when management is disabled)

 Specifying an invalid ACL file will cause a core dump when management is 
 disabled
 --

 Key: QPID-5869
 URL: https://issues.apache.org/jira/browse/QPID-5869
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.28
Reporter: Rajith Attapattu
 Fix For: 0.29


 The root cause is due to the ACL module trying to fire an event regardless of 
 wether the management module is there or not.
 Core was generated by `./qpidd --auth no -m no -t --acl-file ./data/acl.txt'.
 Program terminated with signal 11, Segmentation fault.
 #0  0x7f7e5e855bbd in 
 qpid::management::ManagementAgent::raiseEvent(qpid::management::ManagementEvent 
 const&, qpid::management::ManagementAgent::severity_t) () from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
 Missing separate debuginfos, use: debuginfo-install 
 boost-program-options-1.53.0-14.fc19.x86_64 libuuid-2.23.2-5.fc19.x86_64 
 nspr-4.10.6-1.fc19.x86_64 nss-3.16.1-1.fc19.x86_64 
 nss-util-3.16.1-1.fc19.x86_64
 (gdb) bt
 #0  0x7f7e5e855bbd in 
 qpid::management::ManagementAgent::raiseEvent(qpid::management::ManagementEvent 
 const&, qpid::management::ManagementAgent::severity_t) () from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
 #1  0x7f7e5e6bf20d in qpid::acl::Acl::readAclFile(std::string&, 
 std::string&) ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
 #2  0x7f7e5e6bf0b7 in qpid::acl::Acl::readAclFile(std::string&) ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
 #3  0x7f7e5e6bdd23 in qpid::acl::Acl::Acl(qpid::acl::AclValues&, 
 qpid::broker::Broker&) ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
 #4  0x7f7e5e6d4273 in qpid::acl::AclPlugin::init(qpid::broker::Broker&) ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
 #5  0x7f7e5e6d4851 in bool 
 qpid::acl::AclPlugin::init<qpid::broker::Broker>(qpid::Plugin::Target&) ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
 #6  0x7f7e5e6d4425 in 
 qpid::acl::AclPlugin::initialize(qpid::Plugin::Target&) ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidbroker.so.2
 #7  0x7f7e5e03287c in boost::_mfi::mf1<void, qpid::Plugin, 
 qpid::Plugin::Target&>::operator()(qpid::Plugin*, qpid::Plugin::Target&) 
 const ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
 #8  0x7f7e5e032263 in void boost::_bi::list2<boost::arg<1>, 
 boost::reference_wrapper<qpid::Plugin::Target> 
 >::operator()<boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, 
 boost::_bi::list1<qpid::Plugin* const&> >(boost::_bi::type<void>, 
 boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>&, 
 boost::_bi::list1<qpid::Plugin* const&>&, int) ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
 #9  0x7f7e5e0317ca in void boost::_bi::bind_t<void, 
 boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, 
 boost::_bi::list2<boost::arg<1>, 
 boost::reference_wrapper<qpid::Plugin::Target> > 
 >::operator()<qpid::Plugin*>(qpid::Plugin* const&) ()
from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
 #10 0x7f7e5e030d43 in boost::_bi::bind_t<void, boost::_mfi::mf1<void, 
 qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2<boost::arg<1>, 
 boost::reference_wrapper<qpid::Plugin::Target> > > 
 std::for_each<__gnu_cxx::__normal_iterator<qpid::Plugin* const*, 
 std::vector<qpid::Plugin*, std::allocator<qpid::Plugin*> > >, 
 boost::_bi::bind_t<void, boost::_mfi::mf1<void, qpid::Plugin, 
 qpid::Plugin::Target&>, boost::_bi::list2<boost::arg<1>, 
 boost::reference_wrapper<qpid::Plugin::Target> > > 
 >(__gnu_cxx::__normal_iterator<qpid::Plugin* const*, 
 std::vector<qpid::Plugin*, std::allocator<qpid::Plugin*> > >, 
 __gnu_cxx::__normal_iterator<qpid::Plugin* const*, std::vector<qpid::Plugin*, 
 std::allocator<qpid::Plugin*> > >, boost::_bi::bind_t<void, 
 boost::_mfi::mf1<void, qpid::Plugin, qpid::Plugin::Target&>, 
 boost::_bi::list2<boost::arg<1>, 
 boost::reference_wrapper<qpid::Plugin::Target> > >) () from 
 /home/rajith/workspace/git-qpid/qpid/qpid/cpp/build/src/libqpidcommon.so.2
 #11 0x7f7e5e02fa3b in void qpid::(anonymous 
 namespace)::each_plugin<boost::_bi::bind_t<void, boost::_mfi::mf1<void, 
 qpid::Plugin, qpid::Plugin::Target&>, boost::_bi::list2<boost

[jira] [Resolved] (QPID-5869) Specifying an invalid ACL file will cause a core dump when management is disabled

2014-07-02 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-5869.


Resolution: Fixed
  Assignee: Rajith Attapattu

Added a fix on trunk at r1607488.
There are other places where the agent is used without a null check.
This may or may not be a problem, depending on how those methods are called.
Chuck is going to investigate and fix them.
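
For illustration, the shape of the guard described in the fix can be sketched in Python (the names below are illustrative stand-ins, not the actual C++ broker code):

```python
class ManagementAgent:
    """Illustrative stand-in for the broker's management agent."""
    def __init__(self):
        self.raised = []

    def raise_event(self, name, severity="warn"):
        self.raised.append((name, severity))

def read_acl_file(agent, path):
    """Sketch of the guarded call: with management disabled (-m no) the
    agent is None, so the event is skipped instead of dereferencing a
    null pointer as the unpatched broker did."""
    ok = path.endswith(".txt")          # stand-in for real ACL validation
    if not ok and agent is not None:    # the null check added by the fix
        agent.raise_event("aclFileLoadFailed")
    return ok

# Management disabled: an invalid ACL file must not crash the broker.
assert read_acl_file(None, "bad.acl") is False

# Management enabled: the event is still raised as before.
agent = ManagementAgent()
assert read_acl_file(agent, "bad.acl") is False
assert agent.raised == [("aclFileLoadFailed", "warn")]
```

The same guard pattern applies at the other call sites mentioned above.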

 Specifying an invalid ACL file will cause a core dump when management is 
 disabled
 --

 Key: QPID-5869
 URL: https://issues.apache.org/jira/browse/QPID-5869
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: 0.28
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: 0.29


 The root cause is the ACL module trying to fire an event regardless of 
 whether the management module is present or not.
 Core was generated by `./qpidd --auth no -m no -t --acl-file ./data/acl.txt'.
 Program terminated with signal 11, Segmentation fault.

[jira] [Updated] (QPID-5870) Closing a topic consumer should delete its exclusive auto-delete queue

2014-07-02 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-5870:
---

Attachment: QPID-5870.patch

The subscription queue is now deleted when the consumer is closed.

 Closing a topic consumer should delete its exclusive auto-delete queue
 --

 Key: QPID-5870
 URL: https://issues.apache.org/jira/browse/QPID-5870
 Project: Qpid
  Issue Type: Bug
Reporter: Rajith Attapattu
 Attachments: QPID-5870.patch


 When a topic consumer is closed, its subscription queue needs to be deleted as 
 well.
 Currently this queue is only deleted when the session is closed (because it is 
 marked auto-delete).
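
The intended lifecycle can be sketched as follows (hypothetical names, not the actual Qpid client classes):

```python
class Broker:
    """Minimal stand-in that just tracks which queues exist."""
    def __init__(self):
        self.queues = set()

    def create_queue(self, name):
        self.queues.add(name)

    def delete_queue(self, name):
        self.queues.discard(name)

class TopicConsumer:
    """Sketch of the fixed behaviour: the exclusive auto-delete
    subscription queue is deleted as soon as the consumer closes,
    instead of lingering until the session closes."""
    def __init__(self, broker, topic):
        self.broker = broker
        self.queue = "tmp_%s_sub" % topic
        broker.create_queue(self.queue)

    def close(self):
        self.broker.delete_queue(self.queue)   # the QPID-5870 change

broker = Broker()
consumer = TopicConsumer(broker, "news")
assert "tmp_news_sub" in broker.queues
consumer.close()
assert "tmp_news_sub" not in broker.queues   # gone at consumer close
```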



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



Re: Welcome Andrew MacBean as a Qpid committer

2014-06-05 Thread Rajith Attapattu
Welcome Andrew!

Rajith


On Tue, Jun 3, 2014 at 6:25 PM, Andrew MacBean andymacb...@gmail.com
wrote:

 Thanks very much!

 Also the kind welcome messages are much appreciated. :)

 -Andrew
 On 3 Jun 2014 20:32, Gordon Sim g...@redhat.com wrote:

  The Qpid PMC have voted to grant commit rights to Andrew MacBean in
  recognition of his long-standing contributions to the project.
 
  Welcome Andrew, and thank you for your continued support for Apache Qpid!
 
  --Gordon.
 
 
 



Review Request 22021: [PROTON-589] Changes proposed to Messenger interface and supporting interfaces

2014-05-29 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22021/
---

Review request for qpid and Rafael Schloming.


Bugs: PROTON-589
https://issues.apache.org/jira/browse/PROTON-589


Repository: qpid


Description
---

Passive mode allows the file descriptors for messenger to be serviced by an 
external loop.
This patch contains changes proposed to Messenger interface and supporting 
interfaces.

Please note that my working copy contains some slight changes to the supporting 
interfaces, but the core direction taken is the same.


Diffs
-

  
http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/src/main/java/org/apache/qpid/proton/messenger/ConnectionEventHandler.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/src/main/java/org/apache/qpid/proton/messenger/FileDescriptor.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/src/main/java/org/apache/qpid/proton/messenger/Listener.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/src/main/java/org/apache/qpid/proton/messenger/Messenger.java
 1598310 
  
http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/src/main/java/org/apache/qpid/proton/messenger/Selectable.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/22021/diff/


Testing
---


Thanks,

rajith attapattu



Review Request 22022: [PROTON-589] Implementation to support passive mode for proton-j messenger

2014-05-29 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22022/
---

Review request for qpid and Rafael Schloming.


Bugs: PROTON-589
https://issues.apache.org/jira/browse/PROTON-589


Repository: qpid


Description
---

This patch contains the first draft of the implementation for passive mode.

1. Changed the messenger to use the collector API instead of the querying API.
2. Removed dependency on the driver code.
3. Added NIO based implementation for non passive mode.
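
As a rough illustration of the collector-driven direction in point 1: the engine deposits events into a collector, and an external loop drains them. The names below are hypothetical, not proton-j's actual API:

```python
import collections

class Collector:
    """Toy stand-in for an event collector: the engine deposits events,
    an external loop drains them."""
    def __init__(self):
        self._events = collections.deque()

    def put(self, event):
        self._events.append(event)

    def peek(self):
        return self._events[0] if self._events else None

    def pop(self):
        self._events.popleft()

class PassiveMessenger:
    """In passive mode the messenger never blocks on I/O itself; it only
    reacts to events that the application's own loop feeds it."""
    def __init__(self, collector):
        self.collector = collector
        self.handled = []

    def process(self):
        # Drain everything the external loop has queued so far.
        while self.collector.peek() is not None:
            self.handled.append(self.collector.peek())
            self.collector.pop()

collector = Collector()
messenger = PassiveMessenger(collector)
collector.put("CONNECTION_READABLE")
collector.put("CONNECTION_WRITABLE")
messenger.process()
assert messenger.handled == ["CONNECTION_READABLE", "CONNECTION_WRITABLE"]
```

This is the key inversion versus the querying API: the application owns the loop, the messenger only consumes events.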


Diffs
-

  
http://svn.apache.org/repos/asf/qpid/proton/trunk/proton-j/src/main/java/org/apache/qpid/proton/messenger/impl/MessengerImpl.java
 1598317 

Diff: https://reviews.apache.org/r/22022/diff/


Testing
---

This version of the patch fails 6 Python-based tests.
No tests yet for passive mode.
Passive mode needs to be tested thoroughly.

The purpose of this review is to get agreement about the direction I have taken.


Thanks,

rajith attapattu



[jira] [Updated] (DISPATCH-16) Improve logging for dispatch

2014-02-07 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated DISPATCH-16:
-

Attachment: patch_1_schema_change.diff
patch_2_registering_log_source.diff
patch_3_configure_logging.diff
patch_4_use_log_source_to_filter.diff

 Improve logging for dispatch
 

 Key: DISPATCH-16
 URL: https://issues.apache.org/jira/browse/DISPATCH-16
 Project: Qpid Dispatch
  Issue Type: Improvement
Affects Versions: 0.1
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: 0.2

 Attachments: patch_1_schema_change.diff, 
 patch_2_registering_log_source.diff, patch_3_configure_logging.diff, 
 patch_4_use_log_source_to_filter.diff


 The following improvements are planned.
 1. Add support for configuring logging on a per module basis.
 2. File support including log rotation.
 3. Syslog support.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (DISPATCH-16) Improve logging for dispatch

2014-02-07 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated DISPATCH-16:
-

Attachment: patch_5_remove_set_mask_method.diff

 Improve logging for dispatch
 

 Key: DISPATCH-16
 URL: https://issues.apache.org/jira/browse/DISPATCH-16
 Project: Qpid Dispatch
  Issue Type: Improvement
Affects Versions: 0.1
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: 0.2

 Attachments: patch_1_schema_change.diff, 
 patch_2_registering_log_source.diff, patch_3_configure_logging.diff, 
 patch_4_use_log_source_to_filter.diff, patch_5_remove_set_mask_method.diff


 The following improvements are planned.
 1. Add support for configuring logging on a per module basis.
 2. File support including log rotation.
 3. Syslog support.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-5491) Improve logging for dispatch

2014-01-28 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-5491:
---

Attachment: (was: log_patch.2)

 Improve logging for dispatch
 

 Key: QPID-5491
 URL: https://issues.apache.org/jira/browse/QPID-5491
 Project: Qpid
  Issue Type: Improvement
  Components: Qpid Dispatch
 Environment: The following improvements are planned.
 1. Add support for configuring logging on a per module basis.
 2. File support including log rotation.
 3. Syslog support.
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: Future

 Attachments: log_patch.3






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-5491) Improve logging for dispatch

2014-01-28 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-5491:
---

Attachment: log_patch.3

Reworked the patch to work on top of Ted's changes.

 Improve logging for dispatch
 

 Key: QPID-5491
 URL: https://issues.apache.org/jira/browse/QPID-5491
 Project: Qpid
  Issue Type: Improvement
  Components: Qpid Dispatch
 Environment: The following improvements are planned.
 1. Add support for configuring logging on a per module basis.
 2. File support including log rotation.
 3. Syslog support.
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: Future

 Attachments: log_patch.3






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-5491) Improve logging for dispatch

2014-01-27 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-5491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-5491:
---

Attachment: log_patch.2

The patch contains the following changes.

1. Each module registers with the log module and receives a handle.

2. It then uses the log handle when calling the log function.

3. This log handle should then be used to look up module-specific config.
   However, to keep this diff as simple as possible, it just uses the default 
config instead.

 Improve logging for dispatch
 

 Key: QPID-5491
 URL: https://issues.apache.org/jira/browse/QPID-5491
 Project: Qpid
  Issue Type: Improvement
  Components: Qpid Dispatch
 Environment: The following improvements are planned.
 1. Add support for configuring logging on a per module basis.
 2. File support including log rotation.
 3. Syslog support.
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: Future

 Attachments: log_patch.2






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-5491) Improve logging for dispatch

2014-01-17 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13874837#comment-13874837
 ] 

Rajith Attapattu commented on QPID-5491:


Sample configuration.

logging {
   default_level: DEBUG
   print_timestamp: yes
   print_to_stderr: yes
   print_to_file: no
   print_to_syslog: no
}

# Overrides the general config
log_module {
  module: SERVER
  level: TRACE
  print_to_file: yes 
  file: qdrouter.server.log
}

log_module {
  module: ROUTER
  level: TRACE
  print_to_file: yes 
  file: qdrouter.router.log
}
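
The override semantics implied by the sample above can be modelled as a simple merge: a log_module block overrides only the keys it sets, and everything else inherits from the logging defaults. This is a sketch of the idea, not the actual Dispatch config parser:

```python
DEFAULTS = {
    "level": "DEBUG",
    "print_timestamp": True,
    "print_to_stderr": True,
    "print_to_file": False,
    "print_to_syslog": False,
}

MODULE_OVERRIDES = {
    "SERVER": {"level": "TRACE", "print_to_file": True,
               "file": "qdrouter.server.log"},
    "ROUTER": {"level": "TRACE", "print_to_file": True,
               "file": "qdrouter.router.log"},
}

def effective_config(module):
    """Start from the logging defaults, then apply the module's
    log_module overrides on top."""
    cfg = dict(DEFAULTS)
    cfg.update(MODULE_OVERRIDES.get(module, {}))
    return cfg

assert effective_config("SERVER")["level"] == "TRACE"
assert effective_config("SERVER")["print_timestamp"] is True   # inherited
assert effective_config("OTHER") == DEFAULTS                   # no overrides
```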

 Improve logging for dispatch
 

 Key: QPID-5491
 URL: https://issues.apache.org/jira/browse/QPID-5491
 Project: Qpid
  Issue Type: Improvement
  Components: Qpid Dispatch
 Environment: The following improvements are planned.
 1. Add support for configuring logging on a per module basis.
 2. File support including log rotation.
 3. Syslog support.
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: Future






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Comment Edited] (QPID-5491) Improve logging for dispatch

2014-01-17 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13874837#comment-13874837
 ] 

Rajith Attapattu edited comment on QPID-5491 at 1/17/14 2:52 PM:
-

Sample configuration.

{code}
logging {
   default_level: DEBUG
   print_timestamp: yes
   print_to_stderr: yes
   print_to_file: no
   print_to_syslog: no
}

# Overrides the general config
log_module {
  module: SERVER
  level: TRACE
  print_to_file: yes 
  file: qdrouter.server.log
}

log_module {
  module: ROUTER
  level: TRACE
  print_to_file: yes 
  file: qdrouter.router.log
}
{code}


was (Author: rajith):
Sample configuration.

logging {
   default_level: DEBUG
   print_timestamp: yes
   print_to_stderr: yes
   print_to_file: no
   print_to_syslog: no
}

# Overrides the general config
log_module {
  module: SERVER
  level: TRACE
  print_to_file: yes 
  file: qdrouter.server.log
}

log_module {
  module: ROUTER
  level: TRACE
  print_to_file: yes 
  file: qdrouter.router.log
}

 Improve logging for dispatch
 

 Key: QPID-5491
 URL: https://issues.apache.org/jira/browse/QPID-5491
 Project: Qpid
  Issue Type: Improvement
  Components: Qpid Dispatch
 Environment: The following improvements are planned.
 1. Add support for configuring logging on a per module basis.
 2. File support including log rotation.
 3. Syslog support.
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: Future






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



Review Request 17050: Improvements to logging in dispatch

2014-01-17 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17050/
---

Review request for qpid, Andrew Stitcher and Ted Ross.


Bugs: QPID-5491
https://issues.apache.org/jira/browse/QPID-5491


Repository: qpid


Description
---

The patch contains support for:
1. Per-module log configuration.
2. File and syslog support.

(*) Log rotation support will be added as a separate patch.
(*) This patch will not compile, as only the important files were included, to 
keep the diff small.
(*) To try it out, please use https://github.com/rajith77/Dispatch/tree/logging 
and use the config file attached to the JIRA.


Diffs
-

  
http://svn.apache.org/repos/asf/qpid/dispatch/trunk/include/qpid/dispatch/log.h 
1559165 
  
http://svn.apache.org/repos/asf/qpid/dispatch/trunk/python/qpid_dispatch_internal/config/schema.py
 1559165 
  http://svn.apache.org/repos/asf/qpid/dispatch/trunk/src/dispatch.c 1559165 
  http://svn.apache.org/repos/asf/qpid/dispatch/trunk/src/log.c 1559165 
  http://svn.apache.org/repos/asf/qpid/dispatch/trunk/src/log_private.h 1559165 

Diff: https://reviews.apache.org/r/17050/diff/


Testing
---

All existing tests pass.


Thanks,

rajith attapattu



Re: Null Pointer exception in Qpid Java CLient 0.24

2013-10-07 Thread Rajith Attapattu
Yes, the older versions (pre 0-10) only work with the Binding URL format.

Rajith


On Mon, Oct 7, 2013 at 1:35 PM, Gordon Sim g...@redhat.com wrote:

 On 10/07/2013 05:51 PM, k.madnani84 wrote:

 Hi,

 I am trying to use Qpid Java client 0.24 with RabbitMQ 3.1.5. I am getting a
 null pointer exception, and I understand it's because exchangeName is
 generated as null when I debug. But what's the reason?
 Is there something extra I need to mention in the case of RabbitMQ? My
 properties file is as follows:

 java.naming.factory.initial =
 org.apache.qpid.jndi.PropertiesFileInitialContextFactory
 connectionfactory.qpidConnectionfactory =
 amqp://rbtadmin:b_Ksw6w0@zldv0434.vci.att.com/?brokerlist='tcp://zldv0434.vci.att.com:5672'
 destination.topicExchange = amq.topic


 This is speculative, but try changing the line above to

 destination.topicExchange=topic://amq.topic

 The older protocols I think still require the 'binding url' format,
 https://cwiki.apache.org/confluence/display/qpid/BindingURLFormat.
 It may be that the shortened form you use above is not parsed correctly.





 -
 To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
 For additional commands, e-mail: dev-h...@qpid.apache.org




[jira] [Commented] (QPID-5204) qpid.subject is an invalid JMS property identifier

2013-10-03 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13785382#comment-13785382
 ] 

Rajith Attapattu commented on QPID-5204:


QPID-3838 prefixes vendor-specific properties with JMS_.
We could add another rule to make sure we replace any '.' with an underscore.

As for x-amqp-0-10-routing-key, it doesn't exist as a physical property.
It's a virtual key that gets mapped to the routing key.
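
A possible mangling rule along those lines can be sketched as follows. Treating '-' the same way as '.' is my assumption (hyphens are also illegal in JMS identifiers); only the '.' rule is discussed above:

```python
def to_jms_identifier(prop):
    """Map an AMQP property name to a legal JMS identifier: replace
    characters that are illegal in JMS identifiers with '_', then
    prefix with JMS_ (per QPID-3838) if not already prefixed."""
    mangled = prop.replace(".", "_").replace("-", "_")
    if not mangled.startswith("JMS_"):
        mangled = "JMS_" + mangled
    return mangled

assert to_jms_identifier("qpid.subject") == "JMS_qpid_subject"
assert to_jms_identifier("x-amqp-0-10-routing-key") == \
    "JMS_x_amqp_0_10_routing_key"
```

As noted above, this only changes the name exposed through the API; the wire-level property name stays the same.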

 qpid.subject is an invalid JMS property identifier
 --

 Key: QPID-5204
 URL: https://issues.apache.org/jira/browse/QPID-5204
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.24
Reporter: Gordon Sim
 Fix For: 0.25


 It should be prefixed with JMS_ and the '.' needs to be replaced (e.g. 
 perhaps with '_'). This shouldn't affect what is sent over the wire, just how 
 the name is exposed through the API. 
 There are probably some others, e.g. x-amqp-0-10-routing-key etc.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



Re: Review Request 14361: QPID-5197: Remove obsolete --cluster-durable/persistLastNode options in java code.

2013-10-01 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14361/#review26555
---

Ship it!


Ship It!

- rajith attapattu


On Oct. 1, 2013, 2:04 p.m., Alan Conway wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/14361/
 ---
 
 (Updated Oct. 1, 2013, 2:04 p.m.)
 
 
 Review request for qpid, Fraser Adams and rajith attapattu.
 
 
 Bugs: QPID-5197
 https://issues.apache.org/jira/browse/QPID-5197
 
 
 Repository: qpid
 
 
 Description
 ---
 
 QPID-5197: Remove obsolete --cluster-durable/persistLastNode options in java 
 code.
 QPID-5197: Remove obsolete --cluster-durable/persistLastNode options in C++ 
 and python.
 
 Fraser, can you look at the Java part of this and see if it is correct?
 
 
 Diffs
 -
 
   /trunk/qpid/cpp/src/qpid/client/QueueOptions.h 1528082 
   /trunk/qpid/cpp/src/qpid/client/QueueOptions.cpp 1528082 
   /trunk/qpid/cpp/src/tests/QueueOptionsTest.cpp 1528082 
   
 /trunk/qpid/doc/book/src/cpp-broker/Cheat-Sheet-for-configuring-Queue-Options.xml
  1528082 
   /trunk/qpid/doc/book/src/cpp-broker/Managing-CPP-Broker.xml 1528082 
   
 /trunk/qpid/java/client/src/main/java/org/apache/qpid/client/messaging/address/QpidQueueOptions.java
  1528082 
   /trunk/qpid/tools/src/java/bin/qpid-web/web/qmf-ui/scripts/qmf-ui.js 
 1528082 
   /trunk/qpid/tools/src/java/bin/qpid-web/web/ui/qmf.html 1528082 
   
 /trunk/qpid/tools/src/java/src/main/java/org/apache/qpid/qmf2/tools/QpidConfig.java
  1528082 
   
 /trunk/qpid/tools/src/java/src/main/java/org/apache/qpid/qmf2/util/GetOpt.java
  1528082 
   /trunk/qpid/tools/src/py/qpid-config 1528082 
 
 Diff: https://reviews.apache.org/r/14361/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Alan Conway
 




Re: Review Request 14361: QPID-5197: Remove obsolete --cluster-durable/persistLastNode options in java code.

2013-10-01 Thread rajith attapattu


 On Oct. 1, 2013, 2:29 p.m., rajith attapattu wrote:
  Ship It!

QpidQueueOptions.java is no longer in use.
Provided the QMF changes are verified by Fraser, I don't foresee any other 
issues.


- rajith


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14361/#review26555
---


On Oct. 1, 2013, 2:04 p.m., Alan Conway wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/14361/
 ---
 
 (Updated Oct. 1, 2013, 2:04 p.m.)
 
 
 Review request for qpid, Fraser Adams and rajith attapattu.
 
 
 Bugs: QPID-5197
 https://issues.apache.org/jira/browse/QPID-5197
 
 
 Repository: qpid
 
 
 Description
 ---
 
 QPID-5197: Remove obsolete --cluster-durable/persistLastNode options in java 
 code.
 QPID-5197: Remove obsolete --cluster-durable/persistLastNode options in C++ 
 and python.
 
 Fraser, can you look at the Java part of this and see if it is correct?
 
 
 Diffs
 -
 
   /trunk/qpid/cpp/src/qpid/client/QueueOptions.h 1528082 
   /trunk/qpid/cpp/src/qpid/client/QueueOptions.cpp 1528082 
   /trunk/qpid/cpp/src/tests/QueueOptionsTest.cpp 1528082 
   
 /trunk/qpid/doc/book/src/cpp-broker/Cheat-Sheet-for-configuring-Queue-Options.xml
  1528082 
   /trunk/qpid/doc/book/src/cpp-broker/Managing-CPP-Broker.xml 1528082 
   
 /trunk/qpid/java/client/src/main/java/org/apache/qpid/client/messaging/address/QpidQueueOptions.java
  1528082 
   /trunk/qpid/tools/src/java/bin/qpid-web/web/qmf-ui/scripts/qmf-ui.js 
 1528082 
   /trunk/qpid/tools/src/java/bin/qpid-web/web/ui/qmf.html 1528082 
   
 /trunk/qpid/tools/src/java/src/main/java/org/apache/qpid/qmf2/tools/QpidConfig.java
  1528082 
   
 /trunk/qpid/tools/src/java/src/main/java/org/apache/qpid/qmf2/util/GetOpt.java
  1528082 
   /trunk/qpid/tools/src/py/qpid-config 1528082 
 
 Diff: https://reviews.apache.org/r/14361/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Alan Conway
 




Re: Detecting potential Java deadlocks with JCarder

2013-09-06 Thread Rajith Attapattu
Hey Philip,

Thanks for the analysis!
I've been closely following your work.  JCarder seems like a very useful
tool.
Given that we use multiple locks, there are many ways to get into deadlock
situations; this has made the current client untenable in some user
environments.
I will definitely use this tool in my future work.

Regards,

Rajith


On Fri, Sep 6, 2013 at 11:32 AM, Phil Harvey p...@philharveyonline.comwrote:

 I want to let others know about some interesting and useful lock analysis
 I've been doing.

 I recently ran the Java systests with the JCarder [1] agent enabled.
  JCarder instruments the bytecode, allowing it to keep track of when locks
 are acquired and released.  It uses this to detect whether locks are ever
 taken in an order that would, given unlucky scheduling, result in deadlock.

 Potential deadlocks are represented by GraphViz .dot files which can be
 easily visualised to show the locks, threads and methods involved.

 JCarder found several potential deadlocks in the Java client, which fall
 into three categories:

 1. Would only happen if the application used JMS illegally, according to
 the thread-safety section of the JMS spec.  I think we can ignore these.
 2. Manifestations of QPID-4574 (close a session or consumer in one thread
 while onMessage is sending a message).  I've modified the Jira's
 description/comments accordingly.
 3.  New potential deadlocks.  I've raised QPID-5117/8/9 for these.  Please
 shout if you think any of these are in fact duplicates.

 Note that JCarder cannot currently detect java.util.concurrent locks, which
 are heavily used by the Broker, so it is certainly no silver bullet.

 Nevertheless, I recommend that we use JCarder when testing changes that
 affect the threading/locking of any of our Java projects.

 Phil

 [1] http://www.jcarder.org/
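
The core idea behind this kind of analysis can be sketched in a few lines: record the order in which each thread acquires locks, build a "held while acquiring" graph, and flag any cycle. JCarder itself instruments Java bytecode; this only shows the concept:

```python
def find_lock_order_cycle(acquisitions):
    """Each trace lists locks in the order one thread acquired them.
    Build a directed graph of 'lock A held while acquiring lock B'
    edges; a cycle means deadlock is possible under unlucky scheduling."""
    edges = {}
    for trace in acquisitions:
        for held, wanted in zip(trace, trace[1:]):
            edges.setdefault(held, set()).add(wanted)

    def reachable(start, target, seen=()):
        if start == target:
            return True
        return any(reachable(n, target, seen + (start,))
                   for n in edges.get(start, ()) if n not in seen)

    # A cycle exists if some edge's destination can reach its source.
    return any(reachable(b, a) for a in edges for b in edges[a])

# Thread 1 takes session then consumer; thread 2 takes consumer then
# session: inconsistent ordering, so a potential deadlock.
assert find_lock_order_cycle([["session", "consumer"],
                              ["consumer", "session"]]) is True
# Consistent ordering: no cycle, no potential deadlock.
assert find_lock_order_cycle([["session", "consumer"],
                              ["session", "dispatcher"]]) is False
```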



Re: Welcome Pavel Moravec as a Qpid committer

2013-09-04 Thread Rajith Attapattu
Congrats and Welcome Pavel!

Rajith


On Wed, Sep 4, 2013 at 11:17 AM, Gordon Sim g...@redhat.com wrote:

 The Qpid PMC have voted to grant commit rights to Pavel Moravec in
 recognition of his long standing contributions to the project.

 Welcome Pavel, and thank you for your continued support for Apache Qpid!

 --Gordon.

 -
 To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
 For additional commands, e-mail: dev-h...@qpid.apache.org




Re: RFQ 0.24: Two JIRAs

2013-08-16 Thread Rajith Attapattu
For python, this is indeed the solution we used when a user wanted the Java
client to recognize a python string sent in a map or an application
property.


On Fri, Aug 16, 2013 at 8:55 AM, Gordon Sim g...@redhat.com wrote:

 On 08/16/2013 01:15 PM, Darryl L. Pierce wrote:

 On Mon, Aug 12, 2013 at 08:52:20PM +0200, Jimmy Jones wrote:

 I'm a user of binary data in maps!

 I created the patch to handle UTF8 in perl. Removing the check will
 cause non-UTF8 strings to be casted into UTF8 which is probably a bad idea
 unless by luck they are 7 bit ASCII.


 Hey, Jimmy. I've been in training this week so haven't been able to
 respond.

 I reverted my changes, and would like to collaborate with you on a
 better, more considered solution to the problem of sending properties
 from Perl and the other dynamic languages.


 For Perl, what was wrong with the original code (as it is now that you have
 reverted your changes)? It looks like that allows one to explicitly pass
 utf8 strings that will then be encoded as such. What's the problem statement?

 For python, might one solution be to encode unicode strings as utf8? Again
 that at least gives you the ability to control what happens.
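
 In Python, the explicit-encoding convention described above might look like
 the following sketch (an illustration of the idea, not the actual
 qpid.messaging binding code):

```python
def encode_map_value(value):
    """One possible convention for dynamic-language bindings: encode
    unicode strings as UTF-8 bytes explicitly, so the sender controls
    the wire type instead of the binding guessing."""
    if isinstance(value, str):
        return value.encode("utf-8")
    return value

# The application decides the encoding; non-string values pass through.
props = {"name": encode_map_value("caf\u00e9"), "count": 3}
assert props["name"] == b"caf\xc3\xa9"
assert props["count"] == 3
```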


 -
 To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
 For additional commands, e-mail: dev-h...@qpid.apache.org




Re: svn commit: r1506095 - /qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageProducer.java

2013-07-23 Thread Rajith Attapattu
Sorry, there wasn't a JIRA for this.
I should have created one.
The patch I mentioned was attached to an internal Bugzilla.

Rajith


On Tue, Jul 23, 2013 at 12:10 PM, Robbie Gemmell
robbie.gemm...@gmail.comwrote:

 This also seems like a change which should have a JIRA reference,
 particularly if there was a patch.

 Robbie

 On 23 July 2013 16:03, raj...@apache.org wrote:

  Author: rajith
  Date: Tue Jul 23 15:03:38 2013
  New Revision: 1506095
 
  URL: http://svn.apache.org/r1506095
  Log:
  NO_JIRA Changed the exception thrown for an invalid destination from a
  regular JMSException to an
  InvalidDestinationException. This is a patch from Pavel Moravec.
 
  Modified:
 
 
 qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageProducer.java
 
  Modified:
 
 qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageProducer.java
  URL:
 
 http://svn.apache.org/viewvc/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageProducer.java?rev=1506095r1=1506094r2=1506095view=diff
 
 
 ==
  ---
 
 qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageProducer.java
  (original)
  +++
 
 qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageProducer.java
  Tue Jul 23 15:03:38 2013
  @@ -440,7 +440,7 @@ public abstract class BasicMessageProduc
   {
   if (!(destination instanceof AMQDestination))
   {
  -throw new JMSException("Unsupported destination class: "
  +throw new InvalidDestinationException("Unsupported destination class: "
  + ((destination != null) ? destination.getClass() : null));
   }
 
  @@ -453,7 +453,7 @@ public abstract class BasicMessageProduc
   }
   catch(Exception e)
   {
  -JMSException ex = new JMSException("Error validating destination");
  +JMSException ex = new InvalidDestinationException("Error validating destination");
   ex.initCause(e);
   ex.setLinkedException(e);
 
 
 
 
  -
  To unsubscribe, e-mail: commits-unsubscr...@qpid.apache.org
  For additional commands, e-mail: commits-h...@qpid.apache.org
 
 



[jira] [Resolved] (QPID-3838) [JMS] Vendor specific properties should be prefixed with JMS_

2013-07-23 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-3838.


Resolution: Fixed

A fix has been made on trunk and also ported to 0.24

 [JMS] Vendor specific properties should be prefixed with JMS_
 -

 Key: QPID-3838
 URL: https://issues.apache.org/jira/browse/QPID-3838
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.12, 0.14
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
Priority: Minor
  Labels: jms-compliance
 Fix For: 0.23


 As per the JMS spec, vendor specific message properties should be prefixed 
 with JMS_
 Since we are including qpid.subject in all outgoing messages, it's causing 
 a TCK failure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



Re: [VOTE] Replace the Qpid website

2013-06-17 Thread Rajith Attapattu
[X] Yes, replace the existing site with the proposed site

Rajith


On Mon, Jun 17, 2013 at 6:05 PM, Ken Giusti kgiu...@redhat.com wrote:



 - Original Message -
  From: Justin Ross jr...@apache.org
  To: dev@qpid.apache.org
  Sent: Monday, June 17, 2013 1:35:14 PM
  Subject: [VOTE] Replace the Qpid website
 

  [X] Yes, replace the existing site with the proposed site
  [   ] No, the proposed site isn't ready
 
 

 --
 -K





[jira] [Updated] (QPID-4922) If Consumer close() method is invoked while inside onMessage(), it should be executed after onMessage() has completed.

2013-06-13 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4922:
---

Attachment: QPID-4922.patch

Marks that close has been called and then executes it after the thread returns 
from onMessage()

 If Consumer close() method is invoked while inside onMessage(), it should be 
 executed after onMessage() has completed. 
 --

 Key: QPID-4922
 URL: https://issues.apache.org/jira/browse/QPID-4922
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20, 0.22
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: 0.23

 Attachments: QPID-4922.patch


 If Consumer close() is called while inside onMessage(), it deadlocks (or, with 
 the patch for QPID-4574, waits on a condition that would never become true).
 As per the JMS spec, the consumer cannot be closed until the onMessage() method 
 returns.
 Therefore the best solution is to mark that close() has been called and then 
 execute it once the thread returns from onMessage().
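The defer-and-run-later approach the patch describes can be sketched as follows. This is an illustrative model, not the actual AMQSession/BasicMessageConsumer code; the class and field names are hypothetical, and it assumes a single dispatcher thread delivers messages:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of deferring close() until onMessage() returns.
public class DeferredCloseConsumer {
    private final AtomicBoolean inOnMessage = new AtomicBoolean(false);
    private final AtomicBoolean closeRequested = new AtomicBoolean(false);
    private volatile boolean closed = false;

    // Called by the (single) dispatcher thread for each delivery.
    public void deliver(Runnable onMessage) {
        inOnMessage.set(true);
        try {
            onMessage.run();
        } finally {
            inOnMessage.set(false);
            // If close() was requested from inside onMessage(), run it now.
            if (closeRequested.compareAndSet(true, false)) {
                doClose();
            }
        }
    }

    // Safe to call from inside onMessage(): instead of blocking until
    // onMessage() returns (the deadlock), the close is recorded and deferred.
    public void close() {
        if (inOnMessage.get()) {
            closeRequested.set(true);
        } else {
            doClose();
        }
    }

    private void doClose() {
        closed = true; // real code would cancel the subscription, release messages, etc.
    }

    public boolean isClosed() {
        return closed;
    }
}
```

Calling close() from a listener simply records the request; the dispatcher completes the current onMessage() and then performs the real close.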




[jira] [Updated] (QPID-4922) If Consumer close() method is invoked while inside onMessage(), it should be executed after onMessage() has completed.

2013-06-13 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4922:
---

Attachment: (was: QPID-4922.patch)

 If Consumer close() method is invoked while inside onMessage(), it should be 
 executed after onMessage() has completed. 
 --

 Key: QPID-4922
 URL: https://issues.apache.org/jira/browse/QPID-4922
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20, 0.22
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: 0.23

 Attachments: QPID-4922.patch






[jira] [Updated] (QPID-4922) If Consumer close() method is invoked while inside onMessage(), it should be executed after onMessage() has completed.

2013-06-13 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4922:
---

Attachment: QPID-4922.patch

 If Consumer close() method is invoked while inside onMessage(), it should be 
 executed after onMessage() has completed. 
 --

 Key: QPID-4922
 URL: https://issues.apache.org/jira/browse/QPID-4922
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20, 0.22
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: 0.23

 Attachments: QPID-4922.patch






[jira] [Updated] (QPID-4357) If an exception is received while the client is blocked on sync, the client should return immediately rather than timing out.

2013-06-12 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4357:
---

Fix Version/s: (was: 0.21)
   Future

 If an exception is received while the client is blocked on sync, the client 
 should return immediately rather than timing out.
 -

 Key: QPID-4357
 URL: https://issues.apache.org/jira/browse/QPID-4357
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.10, 0.12, 0.14, 0.16, 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: Future


 Currently if the client receives an execution-exception while blocked on 
 sync, it doesn't get notified. Instead the client times out with an exception 
 as given below.
 This is less than ideal, as it sometimes masks the real error and users need 
 to go through the logs to figure out what really happened.
 This also adds a lot of time to our test suite when errors happen, prolonging 
 the run time.
 org.apache.qpid.AMQException: timed out waiting for sync: complete = 37, 
 point = 43 [error code 541: internal error]
 at 
 org.apache.qpid.client.AMQSession_0_10.setCurrentException(AMQSession_0_10.java:1074)
 at 
 org.apache.qpid.client.AMQSession_0_10.sync(AMQSession_0_10.java:1051)




[jira] [Commented] (QPID-4574) [JMS] Deadlock involving _failoverMutex and _messageDeliveryLock

2013-06-12 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13681283#comment-13681283
 ] 

Rajith Attapattu commented on QPID-4574:


Currently the following solution is being looked at 
https://reviews.apache.org/r/10738/

 [JMS] Deadlock involving _failoverMutex and _messageDeliveryLock
 

 Key: QPID-4574
 URL: https://issues.apache.org/jira/browse/QPID-4574
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
  Labels: deadlock
 Fix For: Future

 Attachments: Test_00782235.java


 This deadlock can manifest when you have a producer sending messages inside 
 an onMessage() call.
 This is a common enough pattern, and is used by intermediaries like Message 
 Bridges, ESB's etc..
 Dispatcher-0-Conn-1:
   at 
 org.apache.qpid.client.BasicMessageProducer.send(BasicMessageProducer.java:309)
   - waiting to lock 0xecfd15a8 (a java.lang.Object)
   at Test_00782235$1.onMessage(Test_00782235.java:29)
   at 
 org.apache.qpid.client.BasicMessageConsumer.notifyMessage(BasicMessageConsumer.java:751)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:141)
   at 
 org.apache.qpid.client.BasicMessageConsumer.notifyMessage(BasicMessageConsumer.java:725)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:186)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:54)
   at 
 org.apache.qpid.client.AMQSession$Dispatcher.notifyConsumer(AMQSession.java:3479)
   at 
 org.apache.qpid.client.AMQSession$Dispatcher.dispatchMessage(AMQSession.java:3418)
   - locked 0xecfd16c8 (a java.lang.Object)
   - locked 0xecfd16d8 (a java.lang.Object)
   at 
 org.apache.qpid.client.AMQSession$Dispatcher.access$1000(AMQSession.java:3205)
   at org.apache.qpid.client.AMQSession.dispatch(AMQSession.java:3198)
   at 
 org.apache.qpid.client.message.UnprocessedMessage.dispatch(UnprocessedMessage.java:54)
   at 
 org.apache.qpid.client.AMQSession$Dispatcher.run(AMQSession.java:3341)
   at java.lang.Thread.run(Thread.java:679)
 main:
   at 
 org.apache.qpid.client.BasicMessageConsumer.close(BasicMessageConsumer.java:598)
   - waiting to lock 0xecfd16c8 (a java.lang.Object)
   - locked 0xecfd15a8 (a java.lang.Object)
   at 
 org.apache.qpid.client.BasicMessageConsumer.close(BasicMessageConsumer.java:558)
   at Test_00782235.init(Test_00782235.java:62)
   at Test_00782235.main(Test_00782235.java:12)
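The two stacks above form a classic lock-ordering inversion: the dispatcher holds the delivery-side lock while in onMessage() and then wants the lock the closing thread holds, while the closing thread takes them in the opposite order. A runnable toy model (the lock names are analogies, not the real Qpid fields) that uses tryLock with a timeout so it reports the conflict instead of hanging:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Toy model of the inversion: "dispatchLock" stands in for the lock held
// around onMessage(), "closeLock" for the lock taken on the close() path.
public class LockOrderDemo {
    static final ReentrantLock dispatchLock = new ReentrantLock();
    static final ReentrantLock closeLock = new ReentrantLock();

    // Take 'second' while holding 'first'; time out instead of deadlocking.
    static boolean acquireBoth(ReentrantLock first, ReentrantLock second) {
        first.lock();
        try {
            if (second.tryLock(100, TimeUnit.MILLISECONDS)) {
                second.unlock();
                return true;
            }
            return false; // a plain lock() here would have hung forever
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            first.unlock();
        }
    }

    // Simulates close() running while the dispatcher holds dispatchLock
    // (as it does for the whole duration of onMessage()).
    static boolean closeWhileDispatching() {
        dispatchLock.lock();
        try {
            final boolean[] acquired = new boolean[1];
            Thread closer = new Thread(
                    () -> acquired[0] = acquireBoth(closeLock, dispatchLock));
            closer.start();
            closer.join();
            return acquired[0];
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            dispatchLock.unlock();
        }
    }
}
```

With plain lock() calls in both orders the two threads block forever, which is exactly the hang in the thread dump above; a consistent lock order (or removing one of the locks) breaks the cycle.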




[jira] [Updated] (QPID-4574) [JMS] Deadlock involving _failoverMutex and _messageDeliveryLock

2013-06-12 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4574:
---

Fix Version/s: (was: 0.21)
   Future

 [JMS] Deadlock involving _failoverMutex and _messageDeliveryLock
 

 Key: QPID-4574
 URL: https://issues.apache.org/jira/browse/QPID-4574
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
  Labels: deadlock
 Fix For: Future

 Attachments: Test_00782235.java






[jira] [Resolved] (QPID-4608) JMS client authorization failure throws JMSException instead of JMSSecurityException

2013-06-12 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-4608.


Resolution: Fixed

The relevant commits are

http://svn.apache.org/viewvc?view=revisionrevision=r1451047
http://svn.apache.org/viewvc?view=revisionrevision=1451362

 JMS client authorization failure throws JMSException instead of 
 JMSSecurityException
 

 Key: QPID-4608
 URL: https://issues.apache.org/jira/browse/QPID-4608
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
Priority: Minor
  Labels: jms, jmsexception
 Fix For: 0.21


 When ACLs don't authorize a user to perform an action, the Java client raises 
 a JMSException when a JMSSecurityException should be raised instead.
 From the Java docs for JMSSecurityException,
 This exception must be thrown when a provider rejects a user name/password 
 submitted by a client. It may also be thrown for any case where a security 
 restriction prevents a method from completing.




[jira] [Closed] (QPID-3769) NPE in client AMQDestination.equals()

2013-06-12 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu closed QPID-3769.
--

Resolution: Fixed

Committed to both trunk and 0.22 branch.

 NPE in client AMQDestination.equals()
 -

 Key: QPID-3769
 URL: https://issues.apache.org/jira/browse/QPID-3769
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.12
Reporter: Jan Bareš
Assignee: Rajith Attapattu
  Labels: addressing
 Fix For: 0.22


 The code of org.apache.qpid.client.AMQDestination.equals(Object) is buggy; it 
 should test for null on _exchangeClass and _exchangeName before dereferencing 
 them (lines 522 and 526).
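The null-safe comparison the report asks for needs no external dependency on Java 7+: java.util.Objects handles the null checks. The class below is a simplified, hypothetical stand-in for AMQDestination, reduced to the two fields named in the report:

```java
import java.util.Objects;

// Simplified stand-in showing a null-safe equals()/hashCode() pair.
public class Destination {
    private final String exchangeName;   // may be null
    private final String exchangeClass;  // may be null

    public Destination(String exchangeName, String exchangeClass) {
        this.exchangeName = exchangeName;
        this.exchangeClass = exchangeClass;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Destination)) return false;
        Destination other = (Destination) o;
        // Objects.equals tolerates nulls, avoiding the NPE that
        // exchangeName.equals(other.exchangeName) would throw.
        return Objects.equals(exchangeName, other.exchangeName)
            && Objects.equals(exchangeClass, other.exchangeClass);
    }

    @Override
    public int hashCode() {
        return Objects.hash(exchangeName, exchangeClass);
    }
}
```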




[jira] [Resolved] (QPID-4540) The subscription queue should only be deleted by the consumer.

2013-06-12 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-4540.


Resolution: Fixed

A fix was committed at http://svn.apache.org/viewvc?rev=1434492view=rev

 The subscription queue should only be deleted by the consumer.
 --

 Key: QPID-4540
 URL: https://issues.apache.org/jira/browse/QPID-4540
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
Priority: Minor
  Labels: Addressing
 Fix For: 0.21


 The removal of the subscription queue is handled in the wrong method call and is 
 causing issues, as this method can be called from a producer as well.




[jira] [Resolved] (QPID-4497) [Java client] Exclusive property for the subscription queue cannot be overridden using the address string

2013-06-12 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-4497.


Resolution: Fixed

 [Java client] Exclusive property for the subscription queue cannot be 
 overridden using the address string
 -

 Key: QPID-4497
 URL: https://issues.apache.org/jira/browse/QPID-4497
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
Priority: Minor
  Labels: addressing
 Fix For: 0.21


 There have been requests to allow the customization of the subscription 
 queue. However there was an issue as the exclusive property wasn't set 
 properly due to a bug.
 amq.topic/test; {link: {name:my-queue, durable:true, 
 x-declare:{exclusive:false, auto-delete:false}}}




[jira] [Updated] (QPID-4906) If Session close() or closed() method is invoked while inside onMessage(), they should be executed after onMessage() has completed.

2013-06-11 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4906:
---

Attachment: QPID-4906.patch

If close() or closed() is called while the dispatcher thread is inside 
onMessage(), the client keeps track of it and executes it once the thread 
returns from onMessage().

Please note the patch depends on msgDeliveryInProgress which is introduced as 
part of the fix proposed in https://reviews.apache.org/r/10738/

 If Session close() or closed() method is invoked while inside onMessage(), 
 they should be executed after onMessage() has completed.
 --

 Key: QPID-4906
 URL: https://issues.apache.org/jira/browse/QPID-4906
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Attachments: QPID-4906.patch


 If Session close() (or closed(), via the IO thread when a protocol close is 
 received) is called while inside onMessage(), it deadlocks.
 As per the JMS spec, the session cannot be closed until the onMessage() method 
 returns.
 Therefore the best solution is to mark that close() or closed() has been 
 called and then execute those methods once the thread returns from onMessage().




Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-06-11 Thread rajith attapattu


 On May 13, 2013, 3:22 p.m., Robbie Gemmell wrote:
  http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java,
   lines 744-745
  https://reviews.apache.org/r/10738/diff/2/?file=288993#file288993line744
 
  What impact does dropping the message on the floor here have on the 
  client?
  
  Earlier points in the call hierarchy seem to make effort to do other 
  things with the message when detecting session/consumer close, so is there 
  any impact from not doing so? E.g a message getting stuck acquired for a 
  now-closed consumer?
  
  Does any particular attention need paid to the overridden 0-10 specific 
  version of this method?
 
 rajith attapattu wrote:
  The message will be dropped if a consumer (or session) is closed or 
  closing.
  When a consumer is closed, any messages acquired but not acknowledged 
  should be made available to another consumer by the broker.
 These messages will be marked redelivered.
 Is this the same situation for 0-8/0-9 ?
 
 The situation is the same as a consumer closing with a bunch of unacked 
 messages when in CLIENT_ACK mode.
 Alternatively we could reject the message (release in 0-10 terms). But I 
 don't think this is required, given that we will be closing the consumer 
 anyways.
 
 
 Earlier points in the call hierarchy seem to make effort to do other 
 things with the message when detecting session/consumer close
  By other things are you referring to a reject? In 0-10 AFAIK you don't 
 need to do it. 
 The situation is the same as a consumer closing with a bunch of unacked 
 messages when in CLIENT_ACK mode.
 
 Does any particular attention need paid to the overridden 0-10 specific 
 version of this method?
 IMO adding if (!(isClosed() || isClosing())) is required to prevent the 
 0-10 specific method from issuing credit (and receiving more messages, that 
 will eventually get released).
  There is a chance that the consumer could be marked closed, but not yet 
  have reached the point of sending a cancel, by the time messageFlow() is called.
 The above check should prevent it.
 
 Robbie Gemmell wrote:
 Did you mean race condition (instead of deadlock)?
 
 I really meant deadlock. One of the uses of _messageDeliveryLock being 
 removed was added as part of the fix for a deadlock 
 (https://issues.apache.org/jira/browse/QPID-3911) and so I wonder what effect 
 removing it again will have. It may be the other changes mean it isn't a 
 problem, but I think it needs to be closely considered. 
 
 The message will be dropped if a consumer (or session is closed or 
 closing).
 When a consumer is closed, any messages acquired but not acknowledged 
 should be be made available to another consumer by the broker.
 These messages will be marked redelivered.
 Is this the same situation for 0-8/0-9 ?
 
  That isn't actually the case: transferred messages are the responsibility 
  of the session and remain acquired after consumer close until such point as 
  the session explicitly does something with them. I believe this is true of 
  0-8/9/91 as well, and probably explains the origin of the reject/release code 
  I referred to earlier in the call tree, whereas the changes would now allow 
  the message to be silently dropped.
 
 From the 0-10 spec: Canceling a subscription MUST NOT affect pending 
 transfers. A transfer made prior to canceling transfers to the destination 
 MUST be able to be accepted, released, acquired, or rejected after the 
 subscription is canceled.
 
 rajith attapattu wrote:
 Indeed, Gordon mentioned that to me and the 2nd patch (the one before I 
 did today) takes care of rejecting messages from the consumer. We don't need 
  to do the same when the session is closing, as when the session ends, any 
  unacked messages are put back on the queue.
 
 rajith attapattu wrote:
 As for QPID-3911,
  There is a deadlock, albeit a bit different (involving the same locks) 
  from QPID-3911, that does happen in similar circumstances.
 However this deadlock appears to manifest with or without this patch, 
 which leads me to believe that _messageDeliveryLock is not the right solution 
 for QPID-3911.
 Sadly the solution for QPID-3911 made it worse as there are at least two 
 distinct cases of deadlocks involving _messageDeliveryLock.
 1. Btw _lock and _messageDeliveryLock
 2. Btw _messageDeliveryLock and _failoverMutex.
 
 We definitely need to find a solution for the deadlocks (at least 3 
 cases) btw failoverMutex and _lock (in AMQSession), which seems to be the 
 root of all evil :)
 We might have to drop one lock (most likely _lock) and see if we can 
 provide an alternative strategy to guarantee correct behaviour.

 
 rajith attapattu wrote:
  Sorry I meant dropping _failoverMutex (not _lock in AMQSession).
 It might also

Re: AMQP 1.0 JMS client - supplementary coding standards

2013-06-04 Thread Rajith Attapattu
I'm strongly against using dependencies for the JMS client unless it's
unavoidable.
We can make an exception for slf4j, but even then my preference is to use JDK
logging.
We've had several issues dealing with dependencies in the past.

Increasingly our client is being used in various containers like
AppServers, ESB's etc.. and mismatched libraries have caused issues.
If I put my rpm-packager hat on, then not having dependencies is a huge
bonus.

One of the explicit goals of proton was to keep the dependencies to a
minimum (if not eliminate them completely).
I think we should extend that goal for the JMS client as well.

Having a compact, dependency free library is always a plus point.

Rajith


On Tue, Jun 4, 2013 at 12:46 PM, Rob Godfrey rob.j.godf...@gmail.comwrote:

 On 4 June 2013 18:37, Phil Harvey p...@philharveyonline.com wrote:

  Ah, I expected a swift response about this.  Luckily I'd already donned
 my
  tin hat :-)
 
  On 4 June 2013 17:16, Rob Godfrey rob.j.godf...@gmail.com wrote:
 
   I'm not really a big fan of enforcing commons lang for toString  -
   sometimes you want/need to have a specific string representation of an
   object.
 
 
  Indeed - that's why I wrote should rather than must.  I accept that
  there are exceptional cases but believe there's a benefit in the other
 90%
  of classes doing things in the same way as each other.
 
  For clarity I will add the usual should vs must distinction at the top
 of
  the wiki page.
 
 
Similarly I think we should be intelligent in defining equals and
   hashCode() methods.
  
   What I'd actually prefer to say is that every object should define a
   toString().
  
 
  Yep, that's in the main coding standards already.
 
  
   In general I'd like to avoid having dependence on external libraries
  unless
   we *really* need to.
  
 
  You presumably believe that the use of EqualsBuilder and HashCodeBuilder
   for facilitating the bug-free implementation of equals() and hashCode()
 does
  not merit the inclusion of commons-lang as a dependency?
 
 

 They are tools but, to be honest, my IDE will build correct equals and
 hashCode methods for me (and toString as well) whereas it wouldn't generate
 code which used commons lang in this way, so it'd be adding work for me and
 likely increasing the likelihood of my making an error (what with the IDE
 being automated and all, versus having to code the commans lang approach by
 hand).

 So... no I don't think it merits adding a dependency to the JMS client.
 Adding any library as a dependency for the client is a big deal because it
 introduces the possibility of a dependency clash with the application the
 library is being used by.  SLF4J is specifically designed for this case and
 is perhaps the only library I'd feel comfortable depending on.  Everything
 else would need a much stronger justification in my book :-)

 -- Rob
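To illustrate the JDK-logging preference voiced earlier in the thread: java.util.logging ships with every JDK and covers the common slf4j idioms (per-class logger, parameterized messages, exception logging). This is a generic sketch, not code from the Qpid client:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class JdkLoggingExample {
    // One logger per class, analogous to slf4j's LoggerFactory.getLogger(...)
    private static final Logger LOG =
            Logger.getLogger(JdkLoggingExample.class.getName());

    public static String connect(String host) {
        // Parameterized message, like slf4j's "{}" placeholders
        LOG.log(Level.FINE, "connecting to {0}", host);
        try {
            // ... protocol work would go here ...
            return "connected:" + host;
        } catch (RuntimeException e) {
            LOG.log(Level.WARNING, "connection failed", e); // logs the stack trace
            throw e;
        }
    }
}
```

FINE-level messages are suppressed by default, so the example produces no console noise unless the logger is configured to show them.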



[jira] [Created] (QPID-4906) If Session close() or closed() method is invoked while inside onMessage(), they should be excuted after onMessage() has completed.

2013-06-03 Thread Rajith Attapattu (JIRA)
Rajith Attapattu created QPID-4906:
--

 Summary: If Session close() or closed() method is invoked while 
inside onMessage(), they should be executed after onMessage() has completed.
 Key: QPID-4906
 URL: https://issues.apache.org/jira/browse/QPID-4906
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu


If Session close() or [closed() via the IO thread when a protocol close() is 
received) is called while inside onMessage(), it deadlocks.

As per the JMS spec, the session cannot be closed until the onMessage() method 
returns.

Therefore the best solution is to mark that close() or closed() has been called 
and then execute those methods once the thread returns from onMessage().
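A minimal sketch of that approach (all names are hypothetical; the real client's threading is more involved): record the close request while inside onMessage(), and perform the actual close once the dispatcher returns.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch, not Qpid source: close() requested from inside
// onMessage() is only recorded; the dispatcher thread performs the real
// close after onMessage() returns, as the JMS spec requires.
public class DeferredCloseSession {
    private final AtomicBoolean closePending = new AtomicBoolean(false);
    private volatile boolean inOnMessage = false;
    private volatile boolean closed = false;

    public void close() {
        if (inOnMessage) {
            closePending.set(true);   // defer until onMessage() completes
        } else {
            doClose();
        }
    }

    // Called by the dispatcher thread for each delivery.
    void deliver(Runnable onMessage) {
        inOnMessage = true;
        try {
            onMessage.run();
        } finally {
            inOnMessage = false;
            if (closePending.compareAndSet(true, false)) {
                doClose();            // executed after onMessage() returned
            }
        }
    }

    private void doClose() { closed = true; }

    public boolean isClosed() { return closed; }

    public static void main(String[] args) {
        DeferredCloseSession s = new DeferredCloseSession();
        s.deliver(s::close);
        System.out.println("closed after onMessage: " + s.isClosed());
    }
}
```

This single-dispatcher sketch ignores concurrent close() from other threads, which the actual fix would also have to handle.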

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



Re: [VOTE] Create a QPIDJMS JIRA project

2013-05-28 Thread Rajith Attapattu
Yes [X]
No [  ]


On Tue, May 28, 2013 at 11:05 AM, Robbie Gemmell
robbie.gemm...@gmail.comwrote:

 Following recent discussions regarding a more component-focused structure,
 and beginning work on a Proton-based AMQP 1.0 JMS client, could everyone
 please vote on the following:

 Create a QPIDJMS JIRA project and request the Subversion integration for
 it.

 Yes [  ]
 No [  ]



Re: [VOTE] Subversion to JIRA integration

2013-05-24 Thread Rajith Attapattu
QPID:
Yes [x ]
No [  ]


On Fri, May 24, 2013 at 7:03 AM, Robbie Gemmell robbie.gemm...@gmail.comwrote:

 Hi all,

 As some of you will already know, a mail was sent to committers@a.o earlier
 outlining some of the newer services ASF infra offer that we may not know
 about. One in particular stuck out for me, a service to populate JIRAs with
 information about relevant commits to Subversion or Git. I have always
 missed the Subversion integration within JIRA since it had to be disabled
 (it seemingly doesn't work very well with repositories of the size found at
 the ASF), so I would love to get something back in this area. You can find
 the details here: http://www.apache.org/dev/svngit2jira.html

 I would like to request this for both the QPID and PROTON JIRA instances.
 They indicate they would like agreement before enabling it, as unlike the
 older integration it generates a bit of traffic by actually posting
 comments on the JIRAs, so please vote now:

 QPID:
 Yes [  ]
 No [  ]

 PROTON:
 Yes [  ]
 No [  ]


 Robbie



Re: Website update 4

2013-05-24 Thread Rajith Attapattu
On Fri, May 24, 2013 at 8:56 AM, Justin Ross justin.r...@gmail.com wrote:

 On Fri, May 24, 2013 at 8:37 AM, Rob Godfrey wrote:

  Sort of.  In general, for release content, there's a special
  generation step.  That's really a per-release task, however, not
  something that many developers would typically do.  The idea is that
  the release manager would do the generation with each new release.
  I've spent a good deal of time to make this easy (much easier than
  it's been in the past), probably because it's one of the jobs I've had
  to do with each release.
 
  Also, I guess I should say that the two steps are easily collapsed:
  make gen-release RELEASE=0.22 render.
 
  Trunk documentation would, as you suggest, best be solved by automated
  builds.
 
 
  So I'm at a bit of a loss as to why a two step document generation
 process
  would be a good idea.  Is there something that is now being done that
  cannot be done using the docbook - html XSL transforms?  I'm not the
  greatest fan of docbook, but adding in a secondary transform on top of an
  existing transform seems a little odd.  If docbook cannot give us the
  output we need, why are we still using docbook?

 Agreed about docbook; I'm not a fan, and I'd love to move away from it.


Markdown is a friendlier (and less clunky) format, and the doc is readable on
its own. There are ways to convert docbook to Markdown, so maybe we should
look at that option.

What do you think?



 It's more like an import step.  The docbook content is (sensibly)
 maintained under the main qpid source tree.  The web content is
 (sensibly) maintained under the website source.  On the website, we
 fuse the two together for the release docs, mating the website
 template (with its navigation) to the body html from docbook.  Indeed,
 I favor removing the site template stuff from the docbook scripts
 under the qpid source tree.  I currently scrub it out.

 There's another less important reason.  Docbook generates lousy html,
 so I sanitize it (to xhtml) to make things like link checking work
 nicely.

 Justin





Re: Proton-based AMQP 1.0 JMS client

2013-05-23 Thread Rajith Attapattu
+1 for a different JIRA system.
I believe that is the only way we can support an independent release cycle.

Rajith


On Thu, May 23, 2013 at 2:36 PM, Rob Godfrey rob.j.godf...@gmail.comwrote:

 On 23 May 2013 20:24, Justin Ross jr...@apache.org wrote:

  This is great news!
 
  On Thu, May 23, 2013 at 11:07 AM, Robbie Gemmell
  robbie.gemm...@gmail.com wrote:
   Hi all,
  [snip]
   In keeping with prior discussion of making the project structure more
   component-based, a couple of things we would like to do:
  
   - Create an area in svn, located alongside the proton directory, e.g:
   http://svn.apache.org/repos/asf/qpid/jms
 
  Perfect, IMO.
 
   - Request creation of a JIRA project, e.g QPIDJMS, initially for use
   relating to the new client but perhaps eventually serving for all QPID
  JMS
   related components.
 
  Out of curiosity, why do you suggest a distinct jira instance (versus
  using a jira component)?  I don't have a strong preference either way.
 

 So for me there's a couple of reasons.  Most importantly the existing JIRA
 system is full of JIRAs for the existing client(s) which will make it very
 difficult to distinguish JIRAs for the new component from JIRAs relating to
 the older client libraries.  Within the context of a QPID JIRA system,
 trying to define components for JMS client for AMQP 0-8/9/10, JMS client
 for AMQP 1-0 not built on proton, and JMS AMQP 1-0 client based on
 proton such that people don't get confused may be difficult.  Secondly,
 we'll definitely be using a different release schedule / version numbering
 from the existing components.  Once we sort out the components better I'd
 be fine with moving the older clients into the QPIDJMS jira/directory and
 we can tailor the component names to reduce any likelihood of confusion.

 -- Rob


 
  Justin
 
 
 



[jira] [Resolved] (QPID-4873) Optimizations in Java client to reduce queue memory footprint

2013-05-23 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-4873.


Resolution: Fixed

Committed the patch from Helen with modifications to accommodate the 
suggestions made in the comments.
http://svn.apache.org/r1485878

 Optimizations in Java client to reduce queue memory footprint
 -

 Key: QPID-4873
 URL: https://issues.apache.org/jira/browse/QPID-4873
 Project: Qpid
  Issue Type: Improvement
  Components: Java Client, Java Common
Affects Versions: 0.23
Reporter: Helen Kwong
Assignee: Rajith Attapattu
Priority: Minor
 Fix For: 0.23

 Attachments: ClientQueueMemoryOptimizations.patch


 My team is using the Java broker and Java client, version 0.16, and looking 
 to lower the client's memory footprint on our servers. We did some heap 
 analysis and found that the consumption is coming mostly from 
 AMQAnyDestination instances, each having a retained size close to ~3KB, and 
 since we have 6000 queues on each of our 2 brokers, this amounts to about 
 ~33MB, which is valuable real estate for us. In our analysis we found a few 
 possible optimizations in the Qpid code that would reduce the per-queue heap 
 consumption and which don't seem high risk, and would like to propose the 
 following changes (will attach a patch file). 
 (I had originally emailed the users list 2 weeks ago, and Rob Godfrey asked 
 me to raise a JIRA with the changes in a patch file -- 
 http://mail-archives.apache.org/mod_mbox/qpid-users/201305.mbox/%3CCACsaS94F0MQeyAKTN3yoU=j-MPc6oFWZgtCtj68GAwOcN=5...@mail.gmail.com%3E)
 The changes I attach here are with the trunk code and I've redone the numbers 
 / analysis running with the latest client.
 1. In Address / AddressParser, caching / reusing the options Maps for queues 
 created with the same options string. (This optimization gives us the most 
 significant savings.)
 All our queues are created with the same options string, which means each 
 corresponding AMQDestination has an Address that has an _options Map that is 
 the same for all queues, i.e., 12K copies of the same map. As far as we can 
 tell, the _options map is effectively immutable, i.e., there is no code path 
 by which an Address’s _options map can be modified. (Is this correct?) So a 
 possible improvement is that in org.apache.qpid.messaging.util.AddressParser, 
 we cache the options map for each options string that we've already 
 encountered, and if the options string passed in has already been seen, we 
 use the stored options map for that Address. This way, for queues having the 
 same options, their Address options will reference the same Map. (For our 
 queues, each Address _options Map currently takes up 1416 B.)
 2. AMQDestination's _link field -- 
 org.apache.qpid.client.messaging.address.Link
 Optimization A: org.apache.qpid.client.messaging.address.Link$Subscription's 
 args field is by default a new HashMap with default capacity 16. In our use 
 case it remains empty for all queues. A possible optimization is to set the 
 default value as Collections.emptyMap() instead. As far as we can tell, 
 Subscription.getArgs() is not used to get the map and then modify it. For us 
 this saves 128B per queue.
 Optimization B: Similarly, Link has a _bindings List that is by default a new 
 ArrayList with a default capacity of 10. In our use case it remains empty for 
 all queues, and as far as we can tell this list is not modified after it is 
 set. If we make the default value Collections.emptyList() instead, it will 
 save us 80B per queue.
 3. AMQDestination's _node field -- 
 org.apache.qpid.client.messaging.address.Node
 Node has a _bindings List that is by default a new ArrayList with the default 
 capacity. In our use case _bindings remains empty for all queues, and I don't 
 see getBindings() being used to get the list and then modify it. I also don't 
 see addBindings() being called anywhere in the client. So a possible 
 optimization is to set the default value as Collections.emptyList() instead. 
 For us this saves 80B per queue.
 The changes in AddressHelper.getBindings() are for the case when there are 
 node or link properties defined, but no bindings.
 Overall: Originally, each queue took up about 2760B for us; with these 
 optimizations, that goes down to 1024B, saving 63% per queue for us.
 We'd appreciate feedback on these changes and whether we are making any 
 incorrect assumptions. I've also added relevant tests to AMQDestinationTest 
 but am not sure if that's the best place. Thanks a lot!
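Optimizations A, B, and #3 above boil down to the same idiom; a sketch (field names borrowed from the description, but this is illustrative, not Qpid source): default to the shared immutable empty collections so that each destination avoids allocating a HashMap/ArrayList that, in the common case, stays empty forever.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the proposed defaults. Collections.emptyMap()
// and Collections.emptyList() return shared immutable singletons, so the
// per-instance cost drops to a reference.
public class LinkSketch {
    // before: args = new HashMap<>(16);  bindings = new ArrayList<>(10);
    private Map<String, Object> args = Collections.emptyMap();
    private List<String> bindings = Collections.emptyList();

    public Map<String, Object> getArgs() { return args; }
    public List<String> getBindings() { return bindings; }

    // Only allocate a real, mutable collection when something is set.
    public void setArgs(Map<String, Object> args) { this.args = args; }

    public static void main(String[] args) {
        System.out.println(new LinkSketch().getArgs().isEmpty());
    }
}
```

The safety argument is the same one made in the description: this is only valid if no caller obtains the default collection and then mutates it.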


[jira] [Commented] (QPID-4873) Optimizations in Java client to reduce queue memory footprint

2013-05-22 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13664379#comment-13664379
 ] 

Rajith Attapattu commented on QPID-4873:


I'm happy with optimization #2 and #3.

A bit concerned with optimization #1. 
While it works well if you reuse a limited number of option Strings, it might 
become an issue when there is a large number of distinct entries.
Even with a moderate number of queues (with distinct option strings), we might 
be keeping those maps in memory for longer than necessary, with little or no 
gain.

I would prefer to make this behaviour configurable and leave the current 
behaviour as the default.
Having a large number of queues with the same option String is not really a 
common use case, so it makes sense to have this optimization turned on only 
when needed.
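The configurable variant suggested above could look like this (class name and system property are hypothetical): the cache is consulted only when explicitly enabled, leaving the current parse-per-call behaviour as the default.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an opt-in options-map cache keyed by the raw
// options string. With the property unset, behaviour is unchanged.
public final class OptionsMapCache {
    private static final boolean ENABLED =
            Boolean.getBoolean("qpid.cache_address_options"); // illustrative name
    private static final Map<String, Map<String, Object>> CACHE =
            new ConcurrentHashMap<>();

    // Stand-in for the real AddressParser work.
    static Map<String, Object> parse(String optionsString) {
        return Map.of("raw", optionsString);
    }

    public static Map<String, Object> optionsFor(String optionsString) {
        if (!ENABLED) {
            return parse(optionsString);          // default: fresh map per call
        }
        // Opt-in: all addresses with the same options string share one map.
        return CACHE.computeIfAbsent(optionsString, OptionsMapCache::parse);
    }

    public static void main(String[] args) {
        System.out.println(optionsFor("{create: always}"));
    }
}
```

An unbounded cache keyed by distinct option strings has exactly the retention risk described above, so a real implementation would likely also want a size bound.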

 Optimizations in Java client to reduce queue memory footprint
 -

 Key: QPID-4873
 URL: https://issues.apache.org/jira/browse/QPID-4873
 Project: Qpid
  Issue Type: Improvement
  Components: Java Client, Java Common
Affects Versions: 0.23
Reporter: Helen Kwong
Assignee: Rajith Attapattu
Priority: Minor
 Fix For: 0.23

 Attachments: ClientQueueMemoryOptimizations.patch


 My team is using the Java broker and Java client, version 0.16, and looking 
 to lower the client's memory footprint on our servers. We did some heap 
 analysis and found that the consumption is coming mostly from 
 AMQAnyDestination instances, each having a retained size close to ~3KB, and 
 since we have 6000 queues on each of our 2 brokers, this amounts to about 
 ~33MB, which is valuable real estate for us. In our analysis we found a few 
 possible optimizations in the Qpid code that would reduce the per-queue heap 
 consumption and which don't seem high risk, and would like to propose the 
 following changes (will attach a patch file). 
 (I had originally emailed the users list 2 weeks ago, and Rob Godfrey asked 
 me to raise a JIRA with the changes in a patch file -- 
 http://mail-archives.apache.org/mod_mbox/qpid-users/201305.mbox/%3CCACsaS94F0MQeyAKTN3yoU=j-MPc6oFWZgtCtj68GAwOcN=5...@mail.gmail.com%3E)
 The changes I attach here are with the trunk code and I've redone the numbers 
 / analysis running with the latest client.
 1. In Address / AddressParser, cacheing / reusing the options Maps for queues 
 created with the same options string. (This optimization gives us the most 
 significant savings.)
 All our queues are created with the same options string, which means each 
 corresponding AMQDestination has an Address that has an _options Map that is 
 the same for all queues, i.e., 12K copies of the same map. As far as we can 
 tell, the _options map is effectively immutable, i.e., there is no code path 
 by which an Address’s _options map can be modified. (Is this correct?) So a 
 possible improvement is that in org.apache.qpid.messaging.util.AddressParser, 
 we cache the options map for each options string that we've already 
 encountered, and if the options string passed in has already been seen, we 
 use the stored options map for that Address. This way, for queues having the 
 same options, their Address options will reference the same Map. (For our 
 queues, each Address _options Map currently takes up 1416 B.)
 2. AMQDestination's _link field -- 
 org.apache.qpid.client.messaging.address.Link
 Optimization A: org.apache.qpid.client.messaging.address.Link$Subscription's 
 args field is by default a new HashMap with default capacity 16. In our use 
 case it remains empty for all queues. A possible optimization is to set the 
 default value as Collections.emptyMap() instead. As far was we can tell, 
 Subscription.getArgs() is not used to get the map and then modify it. For us 
 this saves 128B per queue.
 Optimization B: Similarly, Link has a _bindings List that is by default a new 
 ArrayList with a default capacity of 10. In our use case it remains empty for 
 all queues, and as far as we can tell this list is not modified after it is 
 set. If we make the default value Collections.emptyList() instead, it will 
 save us 80B per queue.
 3. AMQDestination's _node field -- 
 org.apache.qpid.client.messaging.address.Node
 Node has a _bindings List that is by default a new ArrayList with the default 
 capacity. In our use case _bindings remains empty for all queues, and I don't 
 see getBindings() being used to get the list and then modify it. I also don't 
 see addBindings() being called anywhere in the client. So a possible 
 optimization is to set the default value as Collections.emptyList() instead. 
 For us this saves 80B per queue.
 The changes in AddressHelper.getBindings() are for the case when there are 
 node or link

[jira] [Commented] (QPID-4873) Optimizations in Java client to reduce queue memory footprint

2013-05-22 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13664487#comment-13664487
 ] 

Rajith Attapattu commented on QPID-4873:


+1 Seems a good solution!

Rajith

 Optimizations in Java client to reduce queue memory footprint
 -

 Key: QPID-4873
 URL: https://issues.apache.org/jira/browse/QPID-4873
 Project: Qpid
  Issue Type: Improvement
  Components: Java Client, Java Common
Affects Versions: 0.23
Reporter: Helen Kwong
Assignee: Rajith Attapattu
Priority: Minor
 Fix For: 0.23

 Attachments: ClientQueueMemoryOptimizations.patch


 [snip]


[jira] [Commented] (QPID-4873) Optimizations in Java client to reduce queue memory footprint

2013-05-22 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13664819#comment-13664819
 ] 

Rajith Attapattu commented on QPID-4873:


Hi Helen,

I will take care of it first thing tomorrow morning.
Another big thank you from me for all your effort on this.


Rajith

 Optimizations in Java client to reduce queue memory footprint
 -

 Key: QPID-4873
 URL: https://issues.apache.org/jira/browse/QPID-4873
 Project: Qpid
  Issue Type: Improvement
  Components: Java Client, Java Common
Affects Versions: 0.23
Reporter: Helen Kwong
Assignee: Rajith Attapattu
Priority: Minor
 Fix For: 0.23

 Attachments: ClientQueueMemoryOptimizations.patch


 [snip]


Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-05-21 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/
---

(Updated May 21, 2013, 2:48 p.m.)


Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.


Changes
---

Added the missing class ConditionManager.java


Description
---

There are at least 3 cases where the deadlock btw _messageDeliveryLock and 
_failoverMutex surfaces. Among them sending a message inside onMessage() and 
the session being closed due to an error (causing the deadlock) seems to come 
up a lot in production environments. There is also a deadlock btw 
_messageDeliveryLock and _lock (AMQSession.java) which happens less frequently.
The messageDeliveryLock is used to ensure that we don't close the session in 
the middle of a message delivery. In order to do this we hold the lock across 
onMessage().
This causes several issues in addition to the potential to deadlock. If an 
onMessage() call takes too long or wedges, you cannot close the session, and 
failover will not happen until it returns, as the same thread is holding the 
failoverMutex. 

Based on an idea from Rafi, I have come up with a solution to get rid of 
_messageDeliveryLock and instead use an alternative strategy to achieve similar 
functionality.
In order to ensure that close() doesn't proceed until the message deliveries 
currently in progress complete, an atomic counter is used to keep track of 
message deliveries in progress.
The close() will wait until the count falls to zero before proceeding. No new 
deliveries will be initiated because the close method will mark the session as 
closed.
The wait has a timeout to ensure that a longer running or wedged onMessage() 
will not hold up session close.
There is a slim chance that, just before the session is marked as closed, a 
message delivery could be initiated but not yet have updated the counter, so 
waitForMsgDeliveryToFinish() will see the count as zero and proceed with 
close. But in comparison to the issues with _messageDeliveryLock, I believe 
it's acceptable.
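A simplified sketch of the counter-plus-timed-wait strategy (names are illustrative; the actual patch introduces a ConditionManager class): deliveries bump a counter, close() flips a closed flag and then waits, with a timeout, for in-flight deliveries to drain instead of holding a lock across onMessage().

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, not the actual patch. No lock is held during
// onMessage(); close() only waits (bounded) for the in-flight count.
public class DeliveryTracker {
    private final AtomicInteger inProgress = new AtomicInteger();
    private volatile boolean closed = false;
    private final Object monitor = new Object();

    /** Returns false if the session is closed; no new delivery starts. */
    public boolean beginDelivery() {
        if (closed) return false;
        inProgress.incrementAndGet();
        return true;
    }

    public void endDelivery() {
        if (inProgress.decrementAndGet() == 0) {
            synchronized (monitor) { monitor.notifyAll(); }
        }
    }

    /** Marks the session closed, then waits (bounded) for deliveries. */
    public void close(long timeoutMillis) throws InterruptedException {
        closed = true;   // blocks new deliveries from starting
        long deadline = System.nanoTime()
                + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        synchronized (monitor) {
            while (inProgress.get() > 0) {
                long remaining = deadline - System.nanoTime();
                if (remaining <= 0) break;   // wedged onMessage(): give up
                monitor.wait(Math.max(1,
                        TimeUnit.NANOSECONDS.toMillis(remaining)));
            }
        }
    }

    public int inFlight() { return inProgress.get(); }

    public static void main(String[] args) throws InterruptedException {
        DeliveryTracker t = new DeliveryTracker();
        t.beginDelivery();
        t.endDelivery();
        t.close(1000);
        System.out.println("in flight after close: " + t.inFlight());
    }
}
```

The timeout bounds the "wedged onMessage()" case, and the begin/end race it tolerates is the slim window described above.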

There is an issue if MessageConsumer close is called outside of Session close. 
This can be solved in a similar manner. I will wait until the current review is 
complete and then post the solution for the MessageConsumer close.
I will commit them both together.


This addresses bug QPID-4574.
https://issues.apache.org/jira/browse/QPID-4574


Diffs (updated)
-

  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQConnection.java
 1484812 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
 1484812 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java
 1484812 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer_0_10.java
 1484812 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/util/ConditionManager.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/10738/diff/


Testing
---

Java test suite, tests from customers and QE around the deadlock situation.


Thanks,

rajith attapattu



Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-05-21 Thread rajith attapattu


 On May 13, 2013, 3:22 p.m., Robbie Gemmell wrote:
  http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java,
   lines 744-745
  https://reviews.apache.org/r/10738/diff/2/?file=288993#file288993line744
 
  What impact does dropping the message on the floor here have on the 
  client?
  
  Earlier points in the call hierarchy seem to make effort to do other 
  things with the message when detecting session/consumer close, so is there 
  any impact from not doing so? E.g a message getting stuck acquired for a 
  now-closed consumer?
  
  Does any particular attention need paid to the overridden 0-10 specific 
  version of this method?
 
 rajith attapattu wrote:
  The message will be dropped if the consumer (or session) is closed or 
  closing.
  When a consumer is closed, any messages acquired but not acknowledged 
  should be made available to another consumer by the broker.
 These messages will be marked redelivered.
 Is this the same situation for 0-8/0-9 ?
 
 The situation is the same as a consumer closing with a bunch of unacked 
 messages when in CLIENT_ACK mode.
 Alternatively we could reject the message (release in 0-10 terms). But I 
 don't think this is required, given that we will be closing the consumer 
 anyways.
 
 
 Earlier points in the call hierarchy seem to make effort to do other 
 things with the message when detecting session/consumer close
  By other things are you referring to a reject? In 0-10, AFAIK, you don't 
  need to do it. 
 The situation is the same as a consumer closing with a bunch of unacked 
 messages when in CLIENT_ACK mode.
 
 Does any particular attention need paid to the overridden 0-10 specific 
 version of this method?
 IMO adding if (!(isClosed() || isClosing())) is required to prevent the 
 0-10 specific method from issuing credit (and receiving more messages, that 
 will eventually get released).
  There is a chance that the consumer could be marked closed but not yet have 
  reached the point of sending a cancel by the time messageFlow() is called.
  The above check should prevent it.
 
 Robbie Gemmell wrote:
 Did you mean race condition (instead of deadlock)?
 
 I really meant deadlock. One of the uses of _messageDeliveryLock being 
 removed was added as part of the fix for a deadlock 
 (https://issues.apache.org/jira/browse/QPID-3911) and so I wonder what effect 
 removing it again will have. It may be the other changes mean it isn't a 
 problem, but I think it needs to be closely considered. 
 
 The message will be dropped if a consumer (or session is closed or 
 closing).
 When a consumer is closed, any messages acquired but not acknowledged 
 should be made available to another consumer by the broker.
 These messages will be marked redelivered.
 Is this the same situation for 0-8/0-9 ?
 
 That isn't actually the case: transferred messages are the responsibility 
 of the session and remain acquired after consumer close until such point as 
 the session explicitly does something with them. I believe this is true of 
 0-8/9/91 as well, and it probably explains the origin of the reject/release 
 code I referred to earlier in the call tree, which the changes would now 
 bypass, allowing the message to be silently dropped.
 
 From the 0-10 spec: "Canceling a subscription MUST NOT affect pending 
 transfers. A transfer made prior to canceling transfers to the destination 
 MUST be able to be accepted, released, acquired, or rejected after the 
 subscription is canceled."
 
 rajith attapattu wrote:
 Indeed, Gordon mentioned that to me and the 2nd patch (the one before I 
 did today) takes care of rejecting messages from the consumer. We don't need 
 to do the same when the session is closing, as when the session ends, any 
 unacked messages are put back on the queue.

As for QPID-3911:
There is a deadlock, albeit a slightly different one (involving the same 
locks), that does happen in circumstances similar to QPID-3911.
However this deadlock appears to manifest with or without this patch, which 
leads me to believe that _messageDeliveryLock is not the right solution for 
QPID-3911.
Sadly the solution for QPID-3911 made things worse, as there are at least two 
distinct cases of deadlocks involving _messageDeliveryLock:
1. Between _lock and _messageDeliveryLock
2. Between _messageDeliveryLock and _failoverMutex

We definitely need to find a solution for the deadlocks (at least 3 cases) 
between _failoverMutex and _lock (in AMQSession), which seems to be the root of 
all evil :)
We might have to drop one lock (most likely _lock) and see if we can provide an 
alternative strategy to guarantee correct behaviour.


- rajith


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/#review20483

Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-05-21 Thread rajith attapattu


 On May 13, 2013, 3:22 p.m., Robbie Gemmell wrote:
  http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java,
   lines 744-745
  https://reviews.apache.org/r/10738/diff/2/?file=288993#file288993line744
 
  What impact does dropping the message on the floor here have on the 
  client?
  
  Earlier points in the call hierarchy seem to make effort to do other 
  things with the message when detecting session/consumer close, so is there 
  any impact from not doing so? E.g a message getting stuck acquired for a 
  now-closed consumer?
  
  Does any particular attention need paid to the overridden 0-10 specific 
  version of this method?
 
 rajith attapattu wrote:
 The message will be dropped if a consumer (or session is closed or 
 closing).
 When a consumer is closed, any messages acquired but not acknowledged 
 should be made available to another consumer by the broker.
 These messages will be marked redelivered.
 Is this the same situation for 0-8/0-9 ?
 
 The situation is the same as a consumer closing with a bunch of unacked 
 messages when in CLIENT_ACK mode.
 Alternatively we could reject the message (release in 0-10 terms). But I 
 don't think this is required, given that we will be closing the consumer 
 anyways.
 
 
 Earlier points in the call hierarchy seem to make effort to do other 
 things with the message when detecting session/consumer close
 By "other things" are you referring to a reject? In 0-10, AFAIK, you don't 
 need to do it. 
 The situation is the same as a consumer closing with a bunch of unacked 
 messages when in CLIENT_ACK mode.
 
 Does any particular attention need paid to the overridden 0-10 specific 
 version of this method?
 IMO adding if (!(isClosed() || isClosing())) is required to prevent the 
 0-10 specific method from issuing credit (and receiving more messages that 
 will eventually get released).
 There is a chance that the consumer could be marked closed but not yet 
 have reached the point of sending a cancel by the time messageFlow() is called.
 The above check should prevent it.
 
 Robbie Gemmell wrote:
 Did you mean race condition (instead of deadlock)?
 
 I really meant deadlock. One of the uses of _messageDeliveryLock being 
 removed was added as part of the fix for a deadlock 
 (https://issues.apache.org/jira/browse/QPID-3911) and so I wonder what effect 
 removing it again will have. It may be the other changes mean it isn't a 
 problem, but I think it needs to be closely considered. 
 
 The message will be dropped if a consumer (or session is closed or 
 closing).
 When a consumer is closed, any messages acquired but not acknowledged 
 should be made available to another consumer by the broker.
 These messages will be marked redelivered.
 Is this the same situation for 0-8/0-9 ?
 
 That isn't actually the case: transferred messages are the responsibility 
 of the session and remain acquired after consumer close until such point as 
 the session explicitly does something with them. I believe this is true of 
 0-8/9/91 as well, and it probably explains the origin of the reject/release 
 code I referred to earlier in the call tree, which the changes would now 
 bypass, allowing the message to be silently dropped.
 
 From the 0-10 spec: "Canceling a subscription MUST NOT affect pending 
 transfers. A transfer made prior to canceling transfers to the destination 
 MUST be able to be accepted, released, acquired, or rejected after the 
 subscription is canceled."
 
 rajith attapattu wrote:
 Indeed, Gordon mentioned that to me and the 2nd patch (the one before I 
 did today) takes care of rejecting messages from the consumer. We don't need 
 to do the same when the session is closing, as when the session ends, any 
 unacked messages are put back on the queue.
 
 rajith attapattu wrote:
 As for QPID-3911:
 There is a deadlock, albeit a slightly different one (involving the same 
 locks), that does happen in circumstances similar to QPID-3911.
 However this deadlock appears to manifest with or without this patch, 
 which leads me to believe that _messageDeliveryLock is not the right solution 
 for QPID-3911.
 Sadly the solution for QPID-3911 made things worse, as there are at least two 
 distinct cases of deadlocks involving _messageDeliveryLock:
 1. Between _lock and _messageDeliveryLock
 2. Between _messageDeliveryLock and _failoverMutex
 
 We definitely need to find a solution for the deadlocks (at least 3 
 cases) between _failoverMutex and _lock (in AMQSession), which seems to be the 
 root of all evil :)
 We might have to drop one lock (most likely _lock) and see if we can 
 provide an alternative strategy to guarantee correct behaviour.


Sorry, I meant dropping _failoverMutex (not _lock in AMQSession).
It might also be an opportunity to fix our less than stellar

[jira] [Created] (QPID-4864) The JMS client shouldn't hold a lock when creating a message

2013-05-17 Thread Rajith Attapattu (JIRA)
Rajith Attapattu created QPID-4864:
--

 Summary: The JMS client shouldn't hold a lock when creating a 
message
 Key: QPID-4864
 URL: https://issues.apache.org/jira/browse/QPID-4864
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.20, 0.18, 0.16, 0.22
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: 0.23


The JMS client needlessly holds the failover mutex when creating a text 
message. It also needlessly sets the session on the message.

When creating messages to be sent, they should be just value objects and should 
not hold any state.

When constructing a message to be given to the consumer, we have to set the 
session so that acknowledge() can be called, but not when we create messages 
for sending.
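The split described above can be sketched as follows. This is not the actual Qpid API; the class and method names below are hypothetical, and it only illustrates the point that an outgoing message should be a plain value object while a delivered message carries the session reference that acknowledge() needs.

```java
// Hypothetical sketch: outgoing messages are plain value objects created
// without holding any lock; only the receive path attaches the session.
public class QpidMessageSketch
{
    private String _text;
    private Object _session; // only set for received messages

    /** Sending path: no mutex held, no session reference, no shared state. */
    public static QpidMessageSketch forSending(String text)
    {
        QpidMessageSketch m = new QpidMessageSketch();
        m._text = text;
        return m;
    }

    /** Receive path: the session is needed so acknowledge() can reach it. */
    public static QpidMessageSketch forDelivery(String text, Object session)
    {
        QpidMessageSketch m = forSending(text);
        m._session = session;
        return m;
    }

    public boolean hasSession()
    {
        return _session != null;
    }

    public String getText()
    {
        return _text;
    }
}
```

The design point is that a message built for sending has no reason to reference the session, so its construction needs no synchronization at all.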

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-4864) The JMS client shouldn't hold a lock when creating a message

2013-05-17 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-4864.


Resolution: Fixed

Removed the lock. Fix committed at http://svn.apache.org/r1483877

 The JMS client shouldn't hold a lock when creating a message
 ---

 Key: QPID-4864
 URL: https://issues.apache.org/jira/browse/QPID-4864
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.16, 0.18, 0.20, 0.22
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
 Fix For: 0.23


 The JMS client needlessly holds the failover mutex when creating a text 
 message. It also needlessly sets the session on the message.
 When creating messages to be sent, they should be just value objects and 
 should not hold any state.
 When constructing a message to be given to the consumer, we have to set the 
 session so that acknowledge() can be called, but not when we create messages 
 for sending.




[jira] [Updated] (QPID-4849) Faulty parsing logic in retrieving CN from an SSL certificate.

2013-05-15 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4849:
---

Labels: JMS SSL,  (was: )

 Faulty parsing logic in retrieving CN from an SSL certificate.
 --

 Key: QPID-4849
 URL: https://issues.apache.org/jira/browse/QPID-4849
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
Priority: Minor
  Labels: JMS, SSL,
 Fix For: 0.23


 Incorrect parsing logic causes the wrong information to be extracted from 
 an SSL certificate.
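The JIRA doesn't show the faulty parser, but as an illustration of a robust alternative, the JDK's LdapName can parse the subject DN instead of hand-rolled string splitting, which tends to break on escaped commas, attribute ordering, and spacing. The class name below is hypothetical:

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

// Hypothetical sketch: extract the CN from a certificate subject DN by
// letting LdapName do the RFC 2253 parsing.
public final class CnExtractor
{
    public static String extractCn(String subjectDn)
    {
        try
        {
            for (Rdn rdn : new LdapName(subjectDn).getRdns())
            {
                if ("CN".equalsIgnoreCase(rdn.getType()))
                {
                    return rdn.getValue().toString();
                }
            }
            return null; // DN parsed fine but contained no CN
        }
        catch (InvalidNameException e)
        {
            return null; // malformed DN
        }
    }
}
```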




[jira] [Updated] (QPID-4849) Faulty parsing logic in retrieving CN from an SSL certificate.

2013-05-15 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4849:
---

Labels: JMS SSL  (was: JMS SSL,)

 Faulty parsing logic in retrieving CN from an SSL certificate.
 --

 Key: QPID-4849
 URL: https://issues.apache.org/jira/browse/QPID-4849
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
Priority: Minor
  Labels: JMS, SSL
 Fix For: 0.23


 Incorrect parsing logic causes the wrong information to be extracted from 
 an SSL certificate.




[jira] [Resolved] (QPID-4849) Faulty parsing logic in retrieving CN from an SSL certificate.

2013-05-15 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-4849.


Resolution: Fixed

Committed a fix at http://svn.apache.org/r1483079

 Faulty parsing logic in retrieving CN from an SSL certificate.
 --

 Key: QPID-4849
 URL: https://issues.apache.org/jira/browse/QPID-4849
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
Priority: Minor
  Labels: JMS, SSL
 Fix For: 0.23


 Incorrect parsing logic causes the wrong information to be extracted from 
 an SSL certificate.




[jira] [Issue Comment Deleted] (QPID-4574) [JMS] Deadlock involving _failoverMutex and _messageDeliveryLock

2013-05-14 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4574:
---

Comment: was deleted

(was: {code}
diff --git a/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java b/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
index 91a6389..8a7d807 100644
--- a/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
+++ b/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
@@ -3265,44 +3265,45 @@ public abstract class AMQSession<C extends BasicMessageConsumer, P extends BasicMessageProducer>
         {
             long deliveryTag = message.getDeliveryTag();
 
-            synchronized (_lock)
+            synchronized (_connection.getFailoverMutex())
             {
-
-                try
+                synchronized (_lock)
                 {
-                    while (connectionStopped())
+                    try
                     {
-                        _lock.wait();
+                        while (connectionStopped())
+                        {
+                            _lock.wait();
+                        }
+                    }
+                    catch (InterruptedException e)
+                    {
+                        Thread.currentThread().interrupt();
                     }
-                }
-                catch (InterruptedException e)
-                {
-                    Thread.currentThread().interrupt();
-                }
 
-                if (!(message instanceof CloseConsumerMessage)
-                    && tagLE(deliveryTag, _rollbackMark.get()))
-                {
-                    if (_logger.isDebugEnabled())
+                    if (!(message instanceof CloseConsumerMessage)
+                        && tagLE(deliveryTag, _rollbackMark.get()))
                     {
-                        _logger.debug("Rejecting message because delivery tag " + deliveryTag
-                            + " <= rollback mark " + _rollbackMark.get());
+                        if (_logger.isDebugEnabled())
+                        {
+                            _logger.debug("Rejecting message because delivery tag " + deliveryTag
+                                + " <= rollback mark " + _rollbackMark.get());
+                        }
+                        rejectMessage(message, true);
                     }
-                    rejectMessage(message, true);
-                }
-                else if (_usingDispatcherForCleanup)
-                {
-                    _prefetchedMessageTags.add(deliveryTag);
-                }
-                else
-                {
-                    synchronized (_messageDeliveryLock)
+                    else if (_usingDispatcherForCleanup)
+                    {
+                        _prefetchedMessageTags.add(deliveryTag);
+                    }
+                    else
                     {
-                        notifyConsumer(message);
+                        synchronized (_messageDeliveryLock)
+                        {
+                            notifyConsumer(message);
+                        }
                     }
                 }
             }
-
             long current = _rollbackMark.get();
             if (updateRollbackMark(current, deliveryTag))
             {
{code})

 [JMS] Deadlock involving _failoverMutex and _messageDeliveryLock
 

 Key: QPID-4574
 URL: https://issues.apache.org/jira/browse/QPID-4574
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
  Labels: deadlock
 Fix For: 0.21

 Attachments: Test_00782235.java


 This deadlock can manifest when you have a producer sending messages inside 
 an onMessage() call.
 This is a common enough pattern, used by intermediaries like message bridges, 
 ESBs, etc.
 Dispatcher-0-Conn-1:
   at 
 org.apache.qpid.client.BasicMessageProducer.send(BasicMessageProducer.java:309)
   - waiting to lock 0xecfd15a8 (a java.lang.Object)
   at Test_00782235$1.onMessage(Test_00782235.java:29)
   at 
 org.apache.qpid.client.BasicMessageConsumer.notifyMessage(BasicMessageConsumer.java:751)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:141)
   at 
 org.apache.qpid.client.BasicMessageConsumer.notifyMessage(BasicMessageConsumer.java:725)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:186)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:54)
   at 
 org.apache.qpid.client.AMQSession

[jira] [Issue Comment Deleted] (QPID-4574) [JMS] Deadlock involving _failoverMutex and _messageDeliveryLock

2013-05-14 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-4574:
---

Comment: was deleted

(was: The change here is that the failoverMutex is acquired before dispatching 
a message.
However, because of the indentation change, the diff is slightly larger than 
the change itself.)

 [JMS] Deadlock involving _failoverMutex and _messageDeliveryLock
 

 Key: QPID-4574
 URL: https://issues.apache.org/jira/browse/QPID-4574
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.14, 0.16, 0.18, 0.20
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
  Labels: deadlock
 Fix For: 0.21

 Attachments: Test_00782235.java


 This deadlock can manifest when you have a producer sending messages inside 
 an onMessage() call.
 This is a common enough pattern, used by intermediaries like message bridges, 
 ESBs, etc.
 Dispatcher-0-Conn-1:
   at 
 org.apache.qpid.client.BasicMessageProducer.send(BasicMessageProducer.java:309)
   - waiting to lock 0xecfd15a8 (a java.lang.Object)
   at Test_00782235$1.onMessage(Test_00782235.java:29)
   at 
 org.apache.qpid.client.BasicMessageConsumer.notifyMessage(BasicMessageConsumer.java:751)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:141)
   at 
 org.apache.qpid.client.BasicMessageConsumer.notifyMessage(BasicMessageConsumer.java:725)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:186)
   at 
 org.apache.qpid.client.BasicMessageConsumer_0_10.notifyMessage(BasicMessageConsumer_0_10.java:54)
   at 
 org.apache.qpid.client.AMQSession$Dispatcher.notifyConsumer(AMQSession.java:3479)
   at 
 org.apache.qpid.client.AMQSession$Dispatcher.dispatchMessage(AMQSession.java:3418)
   - locked 0xecfd16c8 (a java.lang.Object)
   - locked 0xecfd16d8 (a java.lang.Object)
   at 
 org.apache.qpid.client.AMQSession$Dispatcher.access$1000(AMQSession.java:3205)
   at org.apache.qpid.client.AMQSession.dispatch(AMQSession.java:3198)
   at 
 org.apache.qpid.client.message.UnprocessedMessage.dispatch(UnprocessedMessage.java:54)
   at 
 org.apache.qpid.client.AMQSession$Dispatcher.run(AMQSession.java:3341)
   at java.lang.Thread.run(Thread.java:679)
 main:
   at 
 org.apache.qpid.client.BasicMessageConsumer.close(BasicMessageConsumer.java:598)
   - waiting to lock 0xecfd16c8 (a java.lang.Object)
   - locked 0xecfd15a8 (a java.lang.Object)
   at 
 org.apache.qpid.client.BasicMessageConsumer.close(BasicMessageConsumer.java:558)
   at Test_00782235.init(Test_00782235.java:62)
   at Test_00782235.main(Test_00782235.java:12)
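The two stack traces above show a classic lock-ordering inversion: the dispatcher thread holds the session-side locks and waits for the lock held by the closing thread, while the closing thread waits for a lock the dispatcher holds. A minimal sketch (not Qpid code; field and method names are hypothetical) of the consistent-ordering discipline that prevents such a cycle:

```java
// Hypothetical illustration of the remedy: both the send path and the close
// path acquire the two monitors in the same global order (failover mutex
// first, then the session lock), so neither thread can wait on the other
// while holding the lock its peer needs.
public class LockOrdering
{
    private final Object _failoverMutex = new Object();
    private final Object _sessionLock = new Object();

    public String send()
    {
        synchronized (_failoverMutex)        // order: failover mutex first
        {
            synchronized (_sessionLock)      // then the session lock
            {
                return "sent";
            }
        }
    }

    public String close()
    {
        synchronized (_failoverMutex)        // same order on the close path
        {
            synchronized (_sessionLock)
            {
                return "closed";
            }
        }
    }
}
```

With a single global order there is no cycle in the waits-for graph, so the producer-inside-onMessage() pattern cannot wedge against a concurrent close.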




Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-05-14 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/
---

(Updated May 15, 2013, 3:06 a.m.)


Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.


Changes
---

The following changes were made since the previous iteration.
1. Addressed the thread-safety issue identified in the review comments.

2. Moved the common code out into a new class named ConditionManager. Changed 
AMQSession and BasicMessageConsumer to use the above class to ensure close 
doesn't proceed while a message delivery is in progress.

3. Added a check in BasicMessageConsumer_0_10 to see if the consumer is closed 
or closing, before issuing any credit.

4. Added code to both BasicMessageConsumer_0_10 and BasicMessageConsumer to 
reject (release in 0-10) messages instead of just dropping them. This will 
ensure the messages are immediately requeued.
   Didn't add this for the session as in 0-10, messages will be requeued (if 
not acked) when the session is closed.
   Not sure what happens pre-0-10; TODO: verify that situation.

5. Removed unwanted code.


Description
---

There are at least 3 cases where the deadlock between _messageDeliveryLock and 
_failoverMutex surfaces. Among them, sending a message inside onMessage() while 
the session is being closed due to an error (causing the deadlock) seems to come 
up a lot in production environments. There is also a deadlock between 
_messageDeliveryLock and _lock (AMQSession.java) which happens less frequently.
The _messageDeliveryLock is used to ensure that we don't close the session in 
the middle of a message delivery. In order to do this we hold the lock across 
onMessage().
This causes several issues in addition to the potential to deadlock. If an 
onMessage() call takes too long or is wedged, you cannot close the session, and 
failover will not happen until it returns, as the same thread is holding the 
failoverMutex. 

Based on an idea from Rafi, I have come up with a solution that gets rid of 
_messageDeliveryLock and instead uses an alternative strategy to achieve similar 
functionality.
In order to ensure that close() doesn't proceed until the message deliveries 
currently in progress complete, an atomic counter is used to keep track of 
message deliveries in progress.
The close() will wait until the count falls to zero before proceeding. No new 
deliveries will be initiated because the close method will mark the session as 
closed.
The wait has a timeout to ensure that a long-running or wedged onMessage() 
will not hold up session close.
There is a slim chance that a message delivery could be initiated before the 
session is marked as closed, but not yet have gotten to the point of updating 
the counter, hence waitForMsgDeliveryToFinish() will see the count as zero and 
proceed with close. But compared to the issues with _messageDeliveryLock, I 
believe it's acceptable.

There is an issue if MessageConsumer close is called outside of Session close. 
This can be solved in a similar manner. I will wait until the current review is 
complete and then post the solution for the MessageConsumer close.
I will commit them both together.
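The counter-based strategy described above can be sketched roughly as follows. This is not the actual ConditionManager introduced by the patch; the class and method names here are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: deliveries bump a counter on entry and decrement it
// on exit; close() marks the gate closed (so no new deliveries start), then
// waits, bounded by a timeout, for in-flight deliveries to drain.
public class DeliveryGate
{
    private final AtomicInteger _inProgress = new AtomicInteger(0);
    private final Object _monitor = new Object();
    private volatile boolean _closed = false;

    /** Returns false once close() has begun, so no new deliveries start. */
    public boolean enterDelivery()
    {
        if (_closed)
        {
            return false;
        }
        _inProgress.incrementAndGet();
        return true;
    }

    public void exitDelivery()
    {
        if (_inProgress.decrementAndGet() == 0)
        {
            synchronized (_monitor)
            {
                _monitor.notifyAll(); // wake a close() waiting for quiescence
            }
        }
    }

    /** Waits, up to the timeout, for the in-flight count to reach zero. */
    public void close(long timeoutMillis)
    {
        _closed = true;
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (_monitor)
        {
            while (_inProgress.get() > 0)
            {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0)
                {
                    break; // a wedged onMessage() must not hold up close forever
                }
                try
                {
                    _monitor.wait(remaining);
                }
                catch (InterruptedException e)
                {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
    }

    public int inFlight()
    {
        return _inProgress.get();
    }
}
```

Note the same slim race the description acknowledges: a delivery that has passed the _closed check but not yet incremented the counter can slip past a concurrent close().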


This addresses bug QPID-4574.
https://issues.apache.org/jira/browse/QPID-4574


Diffs (updated)
-

  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQConnection.java
 1480271 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
 1480271 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java
 1480271 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer_0_10.java
 1480271 

Diff: https://reviews.apache.org/r/10738/diff/


Testing
---

Java test suite, tests from customers and QE around the deadlock situation.


Thanks,

rajith attapattu



Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-05-13 Thread rajith attapattu
 redelivered.
Is this the same situation for 0-8/0-9 ?

The situation is the same as a consumer closing with a bunch of unacked 
messages when in CLIENT_ACK mode.
Alternatively we could reject the message (release in 0-10 terms). But I don't 
think this is required, given that we will be closing the consumer anyways.


Earlier points in the call hierarchy seem to make effort to do other things 
with the message when detecting session/consumer close
By "other things" are you referring to a reject? In 0-10, AFAIK, you don't need 
to do it. 
The situation is the same as a consumer closing with a bunch of unacked 
messages when in CLIENT_ACK mode.

Does any particular attention need paid to the overridden 0-10 specific 
version of this method?
IMO adding if (!(isClosed() || isClosing())) is required to prevent the 0-10 
specific method from issuing credit (and receiving more messages that will 
eventually get released).
There is a chance that the consumer could be marked closed but not yet have 
reached the point of sending a cancel by the time messageFlow() is called.
The above check should prevent it.
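A rough sketch of that guard (the flag handling below is hypothetical; in the real client the check would live in BasicMessageConsumer_0_10.messageFlow()):

```java
// Hypothetical sketch: credit is only issued while the consumer is neither
// closed nor closing, so a consumer that has been marked closed but has not
// yet sent its cancel stops pulling in messages that would immediately be
// released.
public class CreditGuard
{
    private volatile boolean _closed;
    private volatile boolean _closing;
    private int _credit;

    public boolean isClosed()
    {
        return _closed;
    }

    public boolean isClosing()
    {
        return _closing;
    }

    public void markClosing()
    {
        _closing = true;
    }

    /** Returns true only if credit was actually issued. */
    public boolean messageFlow(int units)
    {
        if (!(isClosed() || isClosing()))
        {
            _credit += units; // would translate to a protocol credit grant
            return true;
        }
        return false; // closed/closing: no new credit, no new messages
    }

    public int credit()
    {
        return _credit;
    }
}
```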


 On May 13, 2013, 3:22 p.m., Robbie Gemmell wrote:
  http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java,
   lines 594-595
  https://reviews.apache.org/r/10738/diff/2/?file=288993#file288993line594
 
  Removal of this usage of the old lock will cause a fair shift in client 
  behaviour, allowing the consumer close to proceed at the same time as 
  message delivery is ongoing on the session, possibly entailing things such 
  as the Dispatcher performing a session rollback in a message listener while 
  this close is in progress.
  
  How clear are we on what impact this change has on the client? For 
  example, this lock usage was apparently added specifically to prevent a 
  deadlock in that sort of situation. Has your investigation of this change 
  in behaviour determined whether that would become a problem again?

Did you mean race condition (instead of deadlock)?
The Consumer will check if it's closed (or closing) before it tries to deliver 
the message to the application. Therefore even if the session initiates a 
delivery before the consumer is marked closed it will not proceed beyond the 
notifyConsumer method in BasicMessageConsumer. The dropped message will be 
released when the consumer is closed.

Dispatcher performing a session rollback in a message listener while this 
close is in progress.
AFAIK the rollback method did not grab the _messageDeliveryLock, instead it was 
relying on _lock.
Both close() and notifyMessage() hold this lock for the entire duration of the 
call, thereby keeping these operations mutually exclusive.

Perhaps I misunderstood your concern here? If so please explain again.


- rajith


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/#review20483
---


On May 8, 2013, 2:02 p.m., rajith attapattu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/10738/
 ---
 
 (Updated May 8, 2013, 2:02 p.m.)
 
 
 Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.
 
 
 Description
 ---
 
 There are at least 3 cases where the deadlock between _messageDeliveryLock and 
 _failoverMutex surfaces. Among them, sending a message inside onMessage() while 
 the session is being closed due to an error (causing the deadlock) seems to come 
 up a lot in production environments. There is also a deadlock between 
 _messageDeliveryLock and _lock (AMQSession.java) which happens less 
 frequently.
 The _messageDeliveryLock is used to ensure that we don't close the session in 
 the middle of a message delivery. In order to do this we hold the lock across 
 onMessage().
 This causes several issues in addition to the potential to deadlock. If an 
 onMessage() call takes too long or is wedged, you cannot close the session, and 
 failover will not happen until it returns, as the same thread is holding the 
 failoverMutex. 
 
 Based on an idea from Rafi, I have come up with a solution that gets rid of 
 _messageDeliveryLock and instead uses an alternative strategy to achieve 
 similar functionality.
 In order to ensure that close() doesn't proceed until the message deliveries 
 currently in progress complete, an atomic counter is used to keep track of 
 message deliveries in progress.
 The close() will wait until the count falls to zero before proceeding. No new 
 deliveries will be initiated because the close method will mark the session as 
 closed.
 The wait has a timeout to ensure that a long-running or wedged onMessage() 
 will not hold up session close.
 There is a slim chance that before a session being marked

Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-05-13 Thread rajith attapattu


 On May 13, 2013, 3:22 p.m., Robbie Gemmell wrote:
  http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java,
   line 767
  https://reviews.apache.org/r/10738/diff/2/?file=288992#file288992line767
 
  We probably shouldn't catch all exceptions, or eat the interrupted 
  status.
 
 rajith attapattu wrote:
 For starters I will narrow it down to interrupted exception and then log 
 it.

I meant we should catch only the interrupted exception and ignore it.
Even if the thread is woken up prematurely, it will go back to waiting if the 
variable isn't set. 
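That waiting pattern, catch InterruptedException, ignore it, and re-check the flag, can be sketched as follows (hypothetical class, not the Qpid code):

```java
// Hypothetical sketch: the loop only exits once the flag is actually set or
// the deadline passes; a premature or spurious wakeup (including an
// interrupt) just re-checks the condition, per the suggestion above.
public class ConditionWait
{
    private final Object _lock = new Object();
    private boolean _ready;

    public void signal()
    {
        synchronized (_lock)
        {
            _ready = true;
            _lock.notifyAll();
        }
    }

    /** Returns whether the flag was set before the timeout expired. */
    public boolean awaitReady(long timeoutMillis)
    {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (_lock)
        {
            while (!_ready)
            {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0)
                {
                    break;
                }
                try
                {
                    _lock.wait(remaining);
                }
                catch (InterruptedException ignored)
                {
                    // Woken prematurely: go back to waiting if _ready isn't
                    // set. (A production version might restore the interrupt
                    // status after the loop rather than swallow it.)
                }
            }
            return _ready;
        }
    }
}
```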


- rajith


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/#review20483
---


On May 8, 2013, 2:02 p.m., rajith attapattu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/10738/
 ---
 
 (Updated May 8, 2013, 2:02 p.m.)
 
 
 Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.
 
 
 Description
 ---
 
 There are at least 3 cases where the deadlock between _messageDeliveryLock and 
 _failoverMutex surfaces. Among them, sending a message inside onMessage() while 
 the session is being closed due to an error (causing the deadlock) seems to come 
 up a lot in production environments. There is also a deadlock between 
 _messageDeliveryLock and _lock (AMQSession.java) which happens less 
 frequently.
 The _messageDeliveryLock is used to ensure that we don't close the session in 
 the middle of a message delivery. In order to do this we hold the lock across 
 onMessage().
 This causes several issues in addition to the potential to deadlock. If an 
 onMessage() call takes too long or is wedged, you cannot close the session, and 
 failover will not happen until it returns, as the same thread is holding the 
 failoverMutex. 
 
 Based on an idea from Rafi, I have come up with a solution that gets rid of 
 _messageDeliveryLock and instead uses an alternative strategy to achieve 
 similar functionality.
 In order to ensure that close() doesn't proceed until the message deliveries 
 currently in progress complete, an atomic counter is used to keep track of 
 message deliveries in progress.
 The close() will wait until the count falls to zero before proceeding. No new 
 deliveries will be initiated because the close method will mark the session as 
 closed.
 The wait has a timeout to ensure that a long-running or wedged onMessage() 
 will not hold up session close.
 There is a slim chance that a message delivery could be initiated before the 
 session is marked as closed, but not yet have gotten to the point of updating 
 the counter, hence waitForMsgDeliveryToFinish() will see the count as zero and 
 proceed with close. But compared to the issues with _messageDeliveryLock, I 
 believe it's acceptable.
 
 There is an issue if MessageConsumer close is called outside of Session 
 close. This can be solved in a similar manner. I will wait until the current 
 review is complete and then post the solution for the MessageConsumer close.
 I will commit them both together.
 
 
 This addresses bug QPID-4574.
 https://issues.apache.org/jira/browse/QPID-4574
 
 
 Diffs
 -
 
   
 http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQConnection.java
  1480271 
   
 http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
  1480271 
   
 http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java
  1480271 
   
 http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/common/src/main/java/org/apache/qpid/configuration/ClientProperties.java
  1480271 
 
 Diff: https://reviews.apache.org/r/10738/diff/
 
 
 Testing
 ---
 
 Java test suite, tests from customers and QE around the deadlock situation.
 
 
 Thanks,
 
 rajith attapattu
 




[jira] [Commented] (QPID-3838) [JMS] Vendor specific properties should be prefixed with JMS_

2013-05-09 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652992#comment-13652992
 ] 

Rajith Attapattu commented on QPID-3838:


I have committed the fix described in the above comment @ 
http://svn.apache.org/r1480656

 [JMS] Vendor specific properties should be prefixed with JMS_
 -

 Key: QPID-3838
 URL: https://issues.apache.org/jira/browse/QPID-3838
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.12, 0.14
Reporter: Rajith Attapattu
Priority: Minor
  Labels: jms-compliance

 As per the JMS spec, vendor specific message properties should be prefixed 
 with JMS_
 Since we are including qpid.subject in all outgoing messages, it's causing 
 a TCK failure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-3317) JMS client doesn't distinguish between node and link bindings

2013-05-09 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-3317:
---

Fix Version/s: 0.20

 JMS client doesn't distinguish between node and link bindings
 -

 Key: QPID-3317
 URL: https://issues.apache.org/jira/browse/QPID-3317
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Reporter: Rajith Attapattu
Assignee: Rajith Attapattu
  Labels: addressing
 Fix For: 0.20


 The x-bindings specified in node properties should be created/deleted at the 
 time the node is created/deleted.
 The x-bindings specified in link properties should be created/deleted at the 
 time a link is established to a node, e.g. when creating a producer or 
 consumer.
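A hypothetical address string illustrating the two placements; the exchange, queue, and binding keys below are invented for illustration and are not from the issue:

```
my_node; {
  create: always,
  node: {
    type: queue,
    x-bindings: [{exchange: 'amq.topic', key: 'node.#'}]
  },
  link: {
    x-bindings: [{exchange: 'amq.topic', key: 'link.#'}]
  }
}
```

Per the issue, the node-level binding would be created/deleted with the queue itself, while the link-level binding would be created/deleted when a producer or consumer attaches.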




What is the correct behaviour for the following case

2013-05-09 Thread Rajith Attapattu
If we use the following address with the java client you get an error with
exchange bind.

myEx_headers;{create: always,node:{type: topic,x-declare:{type:headers}}}

However, the exchange (or queue) is created, although the bind fails.

When this happens,

1. Should we reverse the queue or exchange declare (in addition to
throwing an exception)?

2. Leave the exchange or queue as is and just throw the exception?

We currently do #2.

What are your thoughts on this?

Regards,

Rajith


Re: What is the correct behaviour for the following case

2013-05-09 Thread Rajith Attapattu
On Thu, May 9, 2013 at 11:52 AM, Gordon Sim g...@redhat.com wrote:

 On 05/09/2013 04:40 PM, Rajith Attapattu wrote:

 If we use the following address with the java client you get an error with
 exchange bind.

 myEx_headers;{create: always, node:{type: topic, x-declare:{type: headers}}}

 However, the exchange (or queue) is created, although the bind fails.


 Is the client sending the correct bind? (Since the address doesn't
 explicitly have one it should assume one where all messages would match).


No, the client is not sending x-match:all like the C++ or Python clients do.
I'm definitely going to add it.
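For illustration, the address could carry the binding explicitly with the x-match argument. The queue name and binding arguments below are invented for this sketch, not taken from the thread:

```
myEx_headers; {
  create: always,
  node: {
    type: topic,
    x-declare: {type: headers},
    x-bindings: [{exchange: 'myEx_headers', queue: 'my_queue',
                  arguments: {x-match: 'all'}}]
  }
}
```

With no header keys listed alongside x-match:all, such a binding would match every message, which is presumably the default the C++ and Python clients assume.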




  When this happens,

 1. Should we reverse the queue or exchange declare ? (in addition to
 throwing an exception)

 2. Leave the exchange or queue as it and just throw the exception.

 We currently do #2.

 What are your thoughts on this?


 3. In this specific case I suspect the fix is to send the correct bind so
 that it doesn't fail.


Agreed.



 However in general if you have node level x-bindings and an x-declare I
 don't think there is a need to make them fail 'atomically'.


Thanks, my view as well.







Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-05-08 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/
---

(Updated May 8, 2013, 2:02 p.m.)


Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.


Changes
---

The updated patch addresses the following in addition to the previous patch:

1. Prevents message delivery if the session is marked for close.
2. Prevents message delivery if the Consumer is marked for close.
3. In both session and consumer, now there is no chance a session/consumer 
would be closed while a delivery is in progress.
4. Removed the timeout in waitForMsgDeliveryToFinish. Now the method will block 
until a delivery completes (as per the JMS spec).


Description
---

There are at least 3 cases where the deadlock between _messageDeliveryLock and 
_failoverMutex surfaces. Among them, sending a message inside onMessage() while 
the session is being closed due to an error (causing the deadlock) seems to come 
up a lot in production environments. There is also a deadlock between 
_messageDeliveryLock and _lock (AMQSession.java) which happens less frequently.
The messageDeliveryLock is used to ensure that we don't close the session in 
the middle of a message delivery. In order to do this we hold the lock across 
onMessage().
This causes several issues in addition to the potential for deadlock. If an 
onMessage() call takes too long or is wedged, then you cannot close the session, 
and failover will not happen until it returns, because the same thread is 
holding the failoverMutex. 

Based on an idea from Rafi, I have come up with a solution to get rid of 
_messageDeliveryLock and instead use an alternative strategy to achieve similar 
functionality.
In order to ensure that close() doesn't proceed until the message deliveries 
currently in progress complete, an atomic counter is used to keep track of 
message deliveries in progress.
The close() will wait until the count falls to zero before proceeding. No new 
deliveries will be initiated because the close method will mark the session as 
closed.
The wait has a timeout to ensure that a longer running or wedged onMessage() 
will not hold up session close.
There is a slim chance that, before the session is marked as closed, a message 
delivery could be initiated but not yet have gotten to the point of updating the 
counter; waitForMsgDeliveryToFinish() would then see the count as zero and 
proceed with the close. But in comparison to the issues with 
_messageDeliveryLock, I believe it's acceptable.
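The counter-based close described above can be sketched as follows. This is a hypothetical, simplified illustration of the strategy, not the actual AMQSession code; the names DeliveryGate, beginDelivery, and endDelivery are invented, and only waitForMsgDeliveryToFinish mirrors a method named in the review.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: count in-flight deliveries instead of holding a
// _messageDeliveryLock across onMessage().
class DeliveryGate {
    private final AtomicInteger deliveriesInProgress = new AtomicInteger(0);
    private volatile boolean closed = false;

    // Called by the dispatcher before invoking onMessage().
    // NOTE: the gap between the 'closed' check and the increment is exactly
    // the "slim chance" race discussed in the review.
    boolean beginDelivery() {
        if (closed) {
            return false;              // no new deliveries once close has started
        }
        deliveriesInProgress.incrementAndGet();
        return true;
    }

    // Called by the dispatcher after onMessage() returns.
    void endDelivery() {
        synchronized (this) {
            deliveriesInProgress.decrementAndGet();
            notifyAll();               // wake a waiting close()
        }
    }

    // Called by close(): mark the session closed, then wait (with a timeout,
    // so a wedged onMessage() cannot hold up close forever) for in-flight
    // deliveries to drain.
    void waitForMsgDeliveryToFinish(long timeoutMillis) throws InterruptedException {
        closed = true;
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (this) {
            while (deliveriesInProgress.get() > 0) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    break;             // give up waiting on a wedged delivery
                }
                wait(remaining);
            }
        }
    }
}
```

The later revision of the patch removed the timeout and blocks until the delivery completes, as per the JMS spec; the timeout variant shown here matches the description in this email.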

There is an issue if MessageConsumer close is called outside of Session close. 
This can be solved in a similar manner. I will wait until the current review is 
complete and then post the solution for the MessageConsumer close.
I will commit them both together.


This addresses bug QPID-4574.
https://issues.apache.org/jira/browse/QPID-4574


Diffs (updated)
-

  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQConnection.java
 1480271 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
 1480271 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java
 1480271 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/common/src/main/java/org/apache/qpid/configuration/ClientProperties.java
 1480271 

Diff: https://reviews.apache.org/r/10738/diff/


Testing
---

Java test suite, tests from customers and QE around the deadlock situation.


Thanks,

rajith attapattu



[jira] [Commented] (QPID-4714) AMQConnection close can leak socket connections if exceptions occur earlier in the process

2013-04-30 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13645788#comment-13645788
 ] 

Rajith Attapattu commented on QPID-4714:


Applied patch to the 0.22 branch.
See http://svn.apache.org/r1477705

 AMQConnection close can leak socket connections if exceptions occur earlier 
 in the process
 --

 Key: QPID-4714
 URL: https://issues.apache.org/jira/browse/QPID-4714
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.20
Reporter: Rajith Attapattu
 Fix For: Future


 If any of the operations throws an exception before the call to 
 _delegate.closeConnection(timeout); then we will leak socket connections.
 I already see this happening when running performance tests, where 
 session/connection close operations are timing out. The number of connections 
 keeps growing.
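A minimal sketch of the fix direction implied by the report: run the socket teardown in a finally block so an exception (or timeout) in an earlier close step cannot leak the connection. All names here are invented for illustration; this is not the actual AMQConnection code.

```java
// Hypothetical demonstration of the leak shape and the try/finally fix.
class LeakyCloseDemo {
    boolean socketClosed = false;

    // Stands in for an earlier close step that can throw or time out.
    void closeSessions() {
        throw new RuntimeException("session close timed out");
    }

    // Stands in for _delegate.closeConnection(timeout) / socket teardown.
    void closeSocket() {
        socketClosed = true;
    }

    // Fixed shape: the socket teardown runs even when earlier steps throw.
    void close() {
        try {
            closeSessions();
        } catch (RuntimeException e) {
            // log and continue: a session-close failure must not leak the socket
        } finally {
            closeSocket();
        }
    }
}
```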




[jira] [Resolved] (QPID-4714) AMQConnection close can leak socket connections if exceptions occur earlier in the process

2013-04-30 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu resolved QPID-4714.


   Resolution: Fixed
Fix Version/s: (was: Future)
   0.22

 AMQConnection close can leak socket connections if exceptions occur earlier 
 in the process
 --

 Key: QPID-4714
 URL: https://issues.apache.org/jira/browse/QPID-4714
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.20
Reporter: Rajith Attapattu
 Fix For: 0.22


 If any of the operations throws an exception before the call to 
 _delegate.closeConnection(timeout); then we will leak socket connections.
 I already see this happening when running performance tests, where 
 session/connection close operations are timing out. The number of connections 
 keeps growing.




[jira] [Commented] (QPID-4714) AMQConnection close can leak socket connections if exceptions occur earlier in the process

2013-04-25 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641879#comment-13641879
 ] 

Rajith Attapattu commented on QPID-4714:


I have checked in a fix at http://svn.apache.org/r1475810

Should we include this in 0.22 ?

 AMQConnection close can leak socket connections if exceptions occur earlier 
 in the process
 --

 Key: QPID-4714
 URL: https://issues.apache.org/jira/browse/QPID-4714
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.20
Reporter: Rajith Attapattu
 Fix For: Future


 If any of the operations throws an exception before the call to 
 _delegate.closeConnection(timeout); then we will leak socket connections.
 I already see this happening when running performance tests, where 
 session/connection close operations are timing out. The number of connections 
 keeps growing.




Request for inclusion in 0.22

2013-04-25 Thread Rajith Attapattu
I would like to include r1475810 http://svn.apache.org/r1475810 in 0.22
The corresponding JIRA (patch attached) is QPID-4714

Rajith


Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-04-25 Thread rajith attapattu


 On April 24, 2013, 4:24 p.m., Robbie Gemmell wrote:
  I don't really agree that there is only a slim chance a delivery can begin 
  before the session is marked closed, it actually seems fairly likely to 
  occur (unless none of the queues used by the consumers on the session have 
  any messages left, like most of our tests). This could lead to a variety of 
  issues as a result, because important portions of the client depend on 
  being able to guarantee no more messages are being put into the consumer 
  receive queue or onMessage() callbacks fired while they are operating. I 
  don't get the impression these have been investigated enough to prove their 
  correctness in the face of this change, so it doesn't seem particularly 
  acceptable a trade-off as it stands.
  
  It seems you have already noticed that there would be issues with the 
  consumer close, since it would no longer stop deliveries occurring while 
  closing was in progress, making it one of the cases mentioned above.
  
  Given that msgDeliveriesInProgress is per-session and only 
  incremented/decremented by the Dispatcher thread in a single place, 
  shouldn't it just be an AtomicBoolean?
 
 rajith attapattu wrote:
 For the above case, the following needs to happen.
 
 1. A delivery is started just before session close is called, but hasn't 
 got to the point where the count is incremented by the time the close method 
 calls waitForMsgDeliveryToFinish().
 2. There are no outstanding deliveries at the time 
 waitForMsgDeliveryToFinish() looks at _msgDeliveriesInProgress.
 
 That is why I think there is a slim chance. If there are messages left 
 on the queue, then there is a reasonable chance that _msgDeliveriesInProgress 
 is not zero, so waitForMsgDeliveryToFinish() will block anyway.
 I'm currently working on plugging this gap. Provided this is taken care 
 of do you see any holes in this approach ?
 
 As for why I think this is an acceptable trade-off: most customers prefer 
 this loophole over a client that deadlocks every now and then.
 This deadlock has been around for a long time with no solution, and there 
 are several customers who are experiencing it in their production 
 environments.
 
 I don't get the impression these have been investigated enough to prove 
 their correctness in the face of this change, so it doesn't seem particularly 
 acceptable a trade-off as it stands. -- There is ample evidence that the 
 current setup is causing several deadlocks, to the point of our client being 
 useless. This solution (barring the above issue) passes all the tests, 
 including a trial run in customer environments.
 What would you deem acceptable (provided the current issue above is 
 resolved) for you to gain confidence in the proposed fix?
 
 Do you have an alternative solution, or a way to improve the current one? 
 I'm sure we can find a way if we put our heads together.
 The bottom line is this issue needs to be resolved, or else people will 
 lose confidence in our client.
 
 rajith attapattu wrote:
 It seems you have already noticed that there would be issues with the 
 consumer close, since it would no longer stop deliveries occurring while 
 closing was in progress, making it one of the cases mentioned above. -- This 
 was identified at the outset, but wasn't included in this patch, to ensure 
 this aspect of the solution is highlighted and debated first.
 
 I'm currently working on a solution to this, which I'm hoping will also 
 serve as a solution to the same problem in AMQSession.
 Again, suggestions and ideas are most welcome.
 
 Robbie Gemmell wrote:
 That they don't fail is good, but I think we should recognise that the 
 existing tests passing is no real guarantee that the change is sufficiently 
 baked. They largely don't cover the behaviours under discussion here, and 
 haven't shown up the problems prompting the change as a result. On that 
 note... are there new tests coming that do (even if only occasionally, given 
 these are often races), in order to validate the change is effective and then 
 properly maintained going forward? 
 
 It didn't take long just looking at the code to spot areas of concern. 
 I'd like to see solutions, or feel that the changes in behaviour have been 
 investigated and their impact sufficiently reasoned about, to ensure the 
 continued correctness of other areas in the client that might depend on the 
 previous behaviour, e.g. the consumer close.
 
 It seems like compareAndSet operations on the delivery tracking object 
 could be used to extend the proposed change and impede forward progress of 
 the threads (without deadlock) while their desired condition is not met, 
 rather than just using the current value to make an independent decision 
 which would potentially allow the various threads to get into an undesirable 
 state.
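A rough sketch of the compareAndSet idea: fold the "closing" state into the same atomic the dispatcher updates, so the check-then-increment gap discussed earlier cannot open. This is a hypothetical illustration under invented names, not a proposed patch for the client.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical CAS-based gate: one atomic encodes both the in-flight
// delivery count and whether close has begun, so begin/close race atomically.
class CasDeliveryGate {
    // Large negative sentinel added to the count when closing starts.
    private static final int CLOSED = Integer.MIN_VALUE / 2;

    // state >= 0: open, value is deliveries in flight.
    // state < 0:  closing, value is CLOSED + deliveries still in flight.
    private final AtomicInteger state = new AtomicInteger(0);

    boolean beginDelivery() {
        while (true) {
            int s = state.get();
            if (s < 0) {
                return false;                        // close already won the race
            }
            if (state.compareAndSet(s, s + 1)) {
                return true;                         // counted atomically vs close
            }
        }
    }

    void endDelivery() {
        state.decrementAndGet();                     // valid in both states
    }

    void beginClose() {
        while (true) {
            int s = state.get();
            if (s < 0) {
                return;                              // already closing
            }
            if (state.compareAndSet(s, CLOSED + s)) {
                return;                              // mark closing, keep the count
            }
        }
    }

    boolean deliveriesDrained() {
        return state.get() == CLOSED;                // closing and count back to zero
    }
}
```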
 
 I forgot to say in my

Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-04-24 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/
---

(Updated April 24, 2013, 12:19 p.m.)


Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.


Description (updated)
---

There are at least 3 cases where the deadlock between _messageDeliveryLock and 
_failoverMutex surfaces. Among them, sending a message inside onMessage() while 
the session is being closed due to an error (causing the deadlock) seems to come 
up a lot in production environments. There is also a deadlock between 
_messageDeliveryLock and _lock (AMQSession.java) which happens less frequently.
The messageDeliveryLock is used to ensure that we don't close the session in 
the middle of a message delivery. In order to do this we hold the lock across 
onMessage().
This causes several issues in addition to the potential for deadlock. If an 
onMessage() call takes too long or is wedged, then you cannot close the session, 
and failover will not happen until it returns, because the same thread is 
holding the failoverMutex. 

Based on an idea from Rafi, I have come up with a solution to get rid of 
_messageDeliveryLock and instead use an alternative strategy to achieve similar 
functionality.
In order to ensure that close() doesn't proceed until the message deliveries 
currently in progress complete, an atomic counter is used to keep track of 
message deliveries in progress.
The close() will wait until the count falls to zero before proceeding. No new 
deliveries will be initiated because the close method will mark the session as 
closed.
The wait has a timeout to ensure that a longer running or wedged onMessage() 
will not hold up session close.
There is a slim chance that, before the session is marked as closed, a message 
delivery could be initiated but not yet have gotten to the point of updating the 
counter; waitForMsgDeliveryToFinish() would then see the count as zero and 
proceed with the close. But in comparison to the issues with 
_messageDeliveryLock, I believe it's acceptable.

There is an issue if MessageConsumer close is called outside of Session close. 
This can be solved in a similar manner. I will wait until the current review is 
complete and then post the solution for the MessageConsumer close.
I will commit them both together.


This addresses bug QPID-4574.
https://issues.apache.org/jira/browse/QPID-4574


Diffs
-

  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQConnection.java
 1471133 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
 1471133 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java
 1471133 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/common/src/main/java/org/apache/qpid/configuration/ClientProperties.java
 1471133 

Diff: https://reviews.apache.org/r/10738/diff/


Testing
---

Java test suite, tests from customers and QE around the deadlock situation.


Thanks,

rajith attapattu



Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-04-24 Thread rajith attapattu


 On April 24, 2013, 4:24 p.m., Robbie Gemmell wrote:
  I don't really agree that there is only a slim chance a delivery can begin 
  before the session is marked closed, it actually seems fairly likely to 
  occur (unless none of the queues used by the consumers on the session have 
  any messages left, like most of our tests). This could lead to a variety of 
  issues as a result, because important portions of the client depend on 
  being able to guarantee no more messages are being put into the consumer 
  receive queue or onMessage() callbacks fired while they are operating. I 
  don't get the impression these have been investigated enough to prove their 
  correctness in the face of this change, so it doesn't seem particularly 
  acceptable a trade-off as it stands.
  
  It seems you have already noticed that there would be issues with the 
  consumer close, since it would no longer stop deliveries occurring while 
  closing was in progress, making it one of the cases mentioned above.
  
  Given that msgDeliveriesInProgress is per-session and only 
  incremented/decremented by the Dispatcher thread in a single place, 
  shouldn't it just be an AtomicBoolean?

For the above case, the following needs to happen.

1. A delivery is started just before session close is called, but hasn't got 
to the point where the count is incremented by the time the close method calls 
waitForMsgDeliveryToFinish().
2. There are no outstanding deliveries at the time waitForMsgDeliveryToFinish() 
looks at _msgDeliveriesInProgress.

That is why I think there is a slim chance. If there are messages left on the 
queue, then there is a reasonable chance that _msgDeliveriesInProgress is not 
zero, so waitForMsgDeliveryToFinish() will block anyway.
I'm currently working on plugging this gap. Provided this is taken care of do 
you see any holes in this approach ?

As for why I think this is an acceptable trade-off: most customers prefer this 
loophole over a client that deadlocks every now and then.
This deadlock has been around for a long time with no solution, and there are 
several customers who are experiencing it in their production environments.

I don't get the impression these have been investigated enough to prove their 
correctness in the face of this change, so it doesn't seem particularly 
acceptable a trade-off as it stands. -- There is ample evidence that the 
current setup is causing several deadlocks, to the point of our client being 
useless. This solution (barring the above issue) passes all the tests, including 
a trial run in customer environments.
What would you deem acceptable (provided the current issue above is resolved) 
for you to gain confidence in the proposed fix?

Do you have an alternative solution, or a way to improve the current one? I'm 
sure we can find a way if we put our heads together.
The bottom line is this issue needs to be resolved, or else people will lose 
confidence in our client.


- rajith


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/#review19626
---


On April 24, 2013, 12:19 p.m., rajith attapattu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/10738/
 ---
 
 (Updated April 24, 2013, 12:19 p.m.)
 
 
 Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.
 
 
 Description
 ---
 
 There are at least 3 cases where the deadlock between _messageDeliveryLock and 
 _failoverMutex surfaces. Among them, sending a message inside onMessage() while 
 the session is being closed due to an error (causing the deadlock) seems to 
 come up a lot in production environments. There is also a deadlock between 
 _messageDeliveryLock and _lock (AMQSession.java) which happens less 
 frequently.
 The messageDeliveryLock is used to ensure that we don't close the session in 
 the middle of a message delivery. In order to do this we hold the lock across 
 onMessage().
 This causes several issues in addition to the potential for deadlock. If an 
 onMessage() call takes too long or is wedged, then you cannot close the 
 session, and failover will not happen until it returns, because the same 
 thread is holding the failoverMutex. 
 
 Based on an idea from Rafi, I have come up with a solution to get rid of 
 _messageDeliveryLock and instead use an alternative strategy to achieve 
 similar functionality.
 In order to ensure that close() doesn't proceed until the message deliveries 
 currently in progress complete, an atomic counter is used to keep track of 
 message deliveries in progress.
 The close() will wait until the count falls to zero before proceeding. No new 
 deliveries will be initiated because the close method will mark the session as 
 closed.
 The wait has

Re: Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-04-24 Thread rajith attapattu


 On April 24, 2013, 4:24 p.m., Robbie Gemmell wrote:
  I don't really agree that there is only a slim chance a delivery can begin 
  before the session is marked closed, it actually seems fairly likely to 
  occur (unless none of the queues used by the consumers on the session have 
  any messages left, like most of our tests). This could lead to a variety of 
  issues as a result, because important portions of the client depend on 
  being able to guarantee no more messages are being put into the consumer 
  receive queue or onMessage() callbacks fired while they are operating. I 
  don't get the impression these have been investigated enough to prove their 
  correctness in the face of this change, so it doesn't seem particularly 
  acceptable a trade-off as it stands.
  
  It seems you have already noticed that there would be issues with the 
  consumer close, since it would no longer stop deliveries occurring while 
  closing was in progress, making it one of the cases mentioned above.
  
  Given that msgDeliveriesInProgress is per-session and only 
  incremented/decremented by the Dispatcher thread in a single place, 
  shouldn't it just be an AtomicBoolean?
 
 rajith attapattu wrote:
 For the above case, the following needs to happen.
 
 1. A delivery is started just before session close is called, but hasn't 
 got to the point where the count is incremented by the time the close method 
 calls waitForMsgDeliveryToFinish().
 2. There are no outstanding deliveries at the time 
 waitForMsgDeliveryToFinish() looks at _msgDeliveriesInProgress.
 
 That is why I think there is a slim chance. If there are messages left 
 on the queue, then there is a reasonable chance that _msgDeliveriesInProgress 
 is not zero, so waitForMsgDeliveryToFinish() will block anyway.
 I'm currently working on plugging this gap. Provided this is taken care 
 of do you see any holes in this approach ?
 
 As for why I think this is an acceptable trade-off: most customers prefer 
 this loophole over a client that deadlocks every now and then.
 This deadlock has been around for a long time with no solution, and there 
 are several customers who are experiencing it in their production 
 environments.
 
 I don't get the impression these have been investigated enough to prove 
 their correctness in the face of this change, so it doesn't seem particularly 
 acceptable a trade-off as it stands. -- There is ample evidence that the 
 current setup is causing several deadlocks, to the point of our client being 
 useless. This solution (barring the above issue) passes all the tests, 
 including a trial run in customer environments.
 What would you deem acceptable (provided the current issue above is 
 resolved) for you to gain confidence in the proposed fix?
 
 Do you have an alternative solution, or a way to improve the current one? 
 I'm sure we can find a way if we put our heads together.
 The bottom line is this issue needs to be resolved, or else people will 
 lose confidence in our client.

It seems you have already noticed that there would be issues with the consumer 
close, since it would no longer stop deliveries occurring while closing was in 
progress, making it one of the cases mentioned above. -- This was identified 
at the outset, but wasn't included in this patch, to ensure this aspect of the 
solution is highlighted and debated first.

I'm currently working on a solution to this, which I'm hoping will also serve 
as a solution to the same problem in AMQSession.
Again, suggestions and ideas are most welcome.


- rajith


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/#review19626
---


On April 24, 2013, 12:19 p.m., rajith attapattu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/10738/
 ---
 
 (Updated April 24, 2013, 12:19 p.m.)
 
 
 Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.
 
 
 Description
 ---
 
 There are at least 3 cases where the deadlock between _messageDeliveryLock and 
 _failoverMutex surfaces. Among them, sending a message inside onMessage() while 
 the session is being closed due to an error (causing the deadlock) seems to 
 come up a lot in production environments. There is also a deadlock between 
 _messageDeliveryLock and _lock (AMQSession.java) which happens less 
 frequently.
 The messageDeliveryLock is used to ensure that we don't close the session in 
 the middle of a message delivery. In order to do this we hold the lock across 
 onMessage().
 This causes several issues in addition to the potential for deadlock. If an 
 onMessage() call takes too long or is wedged, then you cannot close

Re: [Java Build failure] SessionCreateTest is hanging

2013-04-23 Thread Rajith Attapattu
Robbie, thanks for checking this out.

In my case the test appeared hung for a considerable time, after which the
test timed out.
My recollection is that this test used to get through a lot quicker. The
90 secs you mentioned is more like it.

Maybe we should look at modifying the test to make it more reasonable, rather
than excluding it.

Thanks again for checking this out.

Rajith


On Mon, Apr 22, 2013 at 6:33 PM, Robbie Gemmell robbie.gemm...@gmail.com wrote:

  I'm still a little unclear on whether you actually saw the test suite
 complete/abort (due to the timelimit on the JUnit test run) with a failure
 or not? This particular test does the same thing over and over again, so
 are you sure it isn't just going [very] slowly and hasn't finished yet?
 (Possibly instrument the test and see, or increase the JUnit timeout?)

 As far as I'm aware it is passing in the other CI environment I use to run
 the C++ profile, yes. I would expect it to pass the next time it runs on
 the ASF Jenkins nodes given it hasn't been failing previously and there
 have been no client changes in weeks to suggest it should now.

 When was the last time you successfully ran the C++ profile on the machine?
 Although it doesn't appear like a particular change has caused any issue,
 have you tried reverting the codebase to an earlier timeframe you know
 worked and running the test again?

 As mentioned, the test is excluded on all the Java broker profiles so it
 isn't actually passing there (though you could remove the exclude and check
 if it does). It has been excluded since the day it was committed, because
 this takes ages to run. It was also excluded from the C++ profiles at the
 same time, but that has obviously changed at some point over the years.

 Robbie

 On 22 April 2013 23:07, Rajith Attapattu rajit...@gmail.com wrote:

  There are no local changes. I initially came across this when testing
 out a
  patch on a branch.
  After that, I moved back to trunk, made sure I have the latest and no
 local
  changes.
  From the stack trace below it appears to be hung on a timed wait. But why
  it doesn't time out beats me.
 
  It certainly looks strange, hence the reason why I wanted to reach out
 and
  see if others are also noticing this.
  This can be reproduced (on my machine every time I run the build).
 
  main prio=10 tid=0x7f3d1c011000 nid=0xd54 in Object.wait()
  [0x7f3d21c85000]
  [junit]java.lang.Thread.State: TIMED_WAITING (on object monitor)
  [junit] at java.lang.Object.wait(Native Method)
  [junit] at
  org.apache.qpid.transport.util.Waiter.await(Waiter.java:54)
  [junit] at
  org.apache.qpid.transport.Session.awaitClose(Session.java:1067)
  [junit] at
  org.apache.qpid.transport.Session.close(Session.java:1056)
 
  Does this test pass on your env?
  The fact that it passes with the java profile and fails consistently with
 the
  cpp profile is interesting.
  I will try this out on a different env tomorrow and see what happens.
  Meanwhile I hope we get a Jenkins run going, or someone else has run it on
  their local machine.
 
  Rajith
 
 
  On Mon, Apr 22, 2013 at 5:42 PM, Robbie Gemmell 
 robbie.gemm...@gmail.com
  wrote:
 
   The last run being 2 days ago is just reflective of the few properly
   functioning Jenkins nodes currently being overloaded with jobs. It has
   passed on the several previous CPP profile runs on Jenkins, e.g:
  
  
 
 https://builds.apache.org/job/Qpid-Java-Cpp-Test/1341/testReport/org.apache.qpid.client/SessionCreateTest/
  
   There are some cases it was rather slow though, but that could just be
  from
   running on one of the poorly Jenkins nodes:
  
  
 
 https://builds.apache.org/job/Qpid-Java-Cpp-Test/1338/testReport/org.apache.qpid.client/SessionCreateTest/
  
   This JIRA was raised some time ago due to the test being really slow,
   though it was only noticed on the SSL profile:
   https://issues.apache.org/jira/browse/QPID-3431
  
   Are you sure it has actually hung? Do you have any local changes, and
  does
   it work without them? The test doesn't look to have failed in any of
 the
   build results on record for the last 2 weeks, and the last change to
 the
   client was 3 weeks ago:
   http://svn.apache.org/r1463158
  
   It isn't actually running at all on the Java broker profiles:
  
 
 java/test-profiles/JavaExcludes:org.apache.qpid.client.SessionCreateTest#*
  
   Robbie
  
   On 22 April 2013 20:30, Rajith Attapattu rajit...@gmail.com wrote:
  
Hi Folks,
   
 org.apache.qpid.client.SessionCreateTest hangs when I run it under
 the
   cpp
profile.
It works well under the java broker profile.
   
It seems the last successful run on Jenkins was 2 days ago
   
https://builds.apache.org/job/Qpid-Java-Cpp-Test/
   
Anybody else seeing this ?
   
Regards,
   
Rajith
   
  
 



Review Request: Address deadlock btw _messageDeliveryLock and _failoverMutex

2013-04-23 Thread rajith attapattu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10738/
---

Review request for qpid, Robbie Gemmell, Weston Price, and Rob Godfrey.


Description
---

There are at least 3 cases where the deadlock between _messageDeliveryLock and 
_failoverMutex surfaces. Among them, sending a message inside onMessage() while 
the session is being closed due to an error (causing the deadlock) seems to come 
up a lot in production environments. There is also a deadlock between 
_messageDeliveryLock and _lock (AMQSession.java) which happens less frequently.
The _messageDeliveryLock is used to ensure that we don't close the session in 
the middle of a message delivery. In order to do this we hold the lock across 
onMessage().
This causes several issues in addition to the potential for deadlock. If an 
onMessage() call takes too long or is wedged, you cannot close the session, and 
failover will not happen until it returns, as the same thread is holding the 
failoverMutex. 

Based on an idea from Rafi, I have come up with a solution to get rid of 
_messageDeliveryLock and instead use an alternative strategy to achieve similar 
functionality.
In order to ensure that close() doesn't proceed until the message deliveries 
currently in progress complete, an atomic counter is used to keep track of 
message deliveries in progress.
The close() will wait until the count falls to zero before proceeding. No new 
deliveries will be initiated because the close method will mark the session as 
closed.
The wait has a timeout to ensure that a long-running or wedged onMessage() 
will not hold up session close.
There is a slim chance that, before the session is marked as closed, a message 
delivery could be initiated that has not yet gotten to the point of updating the 
counter, so waitForMsgDeliveryToFinish() will see the count as zero and proceed 
with close. But in comparison to the issues with _messageDeliveryLock, I 
believe it's acceptable.
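The counter-based close described above can be sketched as the following stand-alone class. This is an illustration of the strategy, not the actual AMQSession patch: the method name waitForMsgDeliveryToFinish comes from the description above, while the class and everything else in it are assumed for the sake of the example.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the counter-based close strategy; NOT the
// real AMQSession code. Only waitForMsgDeliveryToFinish() is named
// in the review description; the rest is hypothetical scaffolding.
public class DeliveryTracker
{
    private final AtomicInteger _deliveriesInProgress = new AtomicInteger(0);
    private volatile boolean _closed = false;

    // Called before dispatching a message; refuses new deliveries once closed.
    public boolean beginDelivery()
    {
        if (_closed)
        {
            return false;
        }
        _deliveriesInProgress.incrementAndGet();
        return true;
    }

    // Called after onMessage() returns; wakes up a pending close().
    public void endDelivery()
    {
        synchronized (this)
        {
            if (_deliveriesInProgress.decrementAndGet() == 0)
            {
                notifyAll();
            }
        }
    }

    // Marks the session closed, then waits (bounded by the timeout) for
    // in-flight deliveries, so a wedged onMessage() cannot hold up close.
    public synchronized void waitForMsgDeliveryToFinish(long timeoutMillis)
            throws InterruptedException
    {
        _closed = true;
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (_deliveriesInProgress.get() > 0)
        {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0)
            {
                break; // timed out; proceed with close anyway
            }
            wait(remaining);
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        DeliveryTracker tracker = new DeliveryTracker();
        tracker.beginDelivery();   // a delivery is in flight
        tracker.endDelivery();     // ... and completes
        tracker.waitForMsgDeliveryToFinish(100);
        // After close, no new deliveries are accepted.
        System.out.println("delivery allowed after close: " + tracker.beginDelivery());
    }
}
```

Note that beginDelivery() checks the closed flag and increments the counter without holding the monitor, which reproduces the "slim chance" window the description acknowledges.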


This addresses bug QPID-4574.
https://issues.apache.org/jira/browse/QPID-4574


Diffs
-

  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQConnection.java
 1471133 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/AMQSession.java
 1471133 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/src/main/java/org/apache/qpid/client/BasicMessageConsumer.java
 1471133 
  
http://svn.apache.org/repos/asf/qpid/trunk/qpid/java/common/src/main/java/org/apache/qpid/configuration/ClientProperties.java
 1471133 

Diff: https://reviews.apache.org/r/10738/diff/


Testing
---

Java test suite, tests from customers and QE around the deadlock situation.


Thanks,

rajith attapattu



[jira] [Commented] (QPID-4714) AMQConnection close is leaking socket connections

2013-04-23 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13639735#comment-13639735
 ] 

Rajith Attapattu commented on QPID-4714:


If we move the close connection method inside a finally block, then we can 
ensure the TCP connection is closed, even if an exception is thrown.

{code}
+finally
+{
+    try
+    {
+        _delegate.closeConnection(timeout);
+    }
+    catch (Exception e)
+    {
+        _logger.warn("Error closing underlying protocol connection", e);
+    }
+}
{code}

 AMQConnection close is leaking socket connections
 -

 Key: QPID-4714
 URL: https://issues.apache.org/jira/browse/QPID-4714
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.20
Reporter: Rajith Attapattu
 Fix For: Future


 If any of the operations throws an exception before the call to 
 _delegate.closeConnection(timeout); then we will leak socket connections.
 I already see this happening when running performance tests, where 
 session/connection close operations are timing out. The number of connections 
 keeps growing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[Java Build failure] SessionCreateTest is hanging

2013-04-22 Thread Rajith Attapattu
Hi Folks,

org.apache.qpid.client.SessionCreateTest hangs when I run it under the cpp
profile.
It works well under the java broker profile.

It seems the last successful run on Jenkins was 2 days ago

https://builds.apache.org/job/Qpid-Java-Cpp-Test/

Anybody else seeing this ?

Regards,

Rajith


Re: [Java Build failure] SessionCreateTest is hanging

2013-04-22 Thread Rajith Attapattu
There are no local changes. I initially came across this when testing out a
patch on a branch.
After that, I moved back to trunk, made sure I have the latest and no local
changes.
From the stack trace below it appears to be hung on a timed wait. But why
it doesn't time out beats me.

It certainly looks strange, hence the reason why I wanted to reach out and
see if others are also noticing this.
This can be reproduced (on my machine every time I run the build).

main prio=10 tid=0x7f3d1c011000 nid=0xd54 in Object.wait()
[0x7f3d21c85000]
[junit]java.lang.Thread.State: TIMED_WAITING (on object monitor)
[junit] at java.lang.Object.wait(Native Method)
[junit] at
org.apache.qpid.transport.util.Waiter.await(Waiter.java:54)
[junit] at
org.apache.qpid.transport.Session.awaitClose(Session.java:1067)
[junit] at
org.apache.qpid.transport.Session.close(Session.java:1056)

Does this test pass on your env?
The fact that it passes with the java profile and fails consistently with the
cpp profile is interesting.
I will try this out on a different env tomorrow and see what happens.
Meanwhile I hope we get a Jenkins run going, or someone else has run it on
their local machine.

Rajith


On Mon, Apr 22, 2013 at 5:42 PM, Robbie Gemmell robbie.gemm...@gmail.com wrote:

 The last run being 2 days ago is just reflective of the few properly
 functioning Jenkins nodes currently being overloaded with jobs. It has
 passed on the several previous CPP profile runs on Jenkins, e.g:

 https://builds.apache.org/job/Qpid-Java-Cpp-Test/1341/testReport/org.apache.qpid.client/SessionCreateTest/

 There are some cases it was rather slow though, but that could just be from
 running on one of the poorly Jenkins nodes:

 https://builds.apache.org/job/Qpid-Java-Cpp-Test/1338/testReport/org.apache.qpid.client/SessionCreateTest/

 This JIRA was raised some time ago due to the test being really slow,
 though it was only noticed on the SSL profile:
 https://issues.apache.org/jira/browse/QPID-3431

 Are you sure it has actually hung? Do you have any local changes, and does
 it work without them? The test doesn't look to have failed in any of the
 build results on record for the last 2 weeks, and the last change to the
 client was 3 weeks ago:
 http://svn.apache.org/r1463158

 It isn't actually running at all on the Java broker profiles:
 java/test-profiles/JavaExcludes:org.apache.qpid.client.SessionCreateTest#*

 Robbie

 On 22 April 2013 20:30, Rajith Attapattu rajit...@gmail.com wrote:

  Hi Folks,
 
  org.apache.qpid.client.SessionCreateTest hangs when I run it under the
 cpp
  profile.
  It works well under the java broker profile.
 
  It seems the last successful run on Jenkins was 2 days ago
 
  https://builds.apache.org/job/Qpid-Java-Cpp-Test/
 
  Anybody else seeing this ?
 
  Regards,
 
  Rajith
 



Re: Modularizing Qpid

2013-04-10 Thread Rajith Attapattu
+1 on this.
Having the flexibility to have individual release cycles for each component
will be a huge advantage for us.
However, as Justin mentioned, we shouldn't rule out a Qpid-wide release,
perhaps once a year or so.
From a user's perspective this is a great thing to have, because all the
components bundled under that release will be guaranteed to work well
together.

Rajith

On Wed, Apr 10, 2013 at 10:46 AM, Rob Godfrey rob.j.godf...@gmail.com wrote:

 I'm +1 this... Obviously we need to understand better the amount of work to
 achieve the separation of the components... but if this were in place then
 we wouldn't be facing the sort of issues we are currently experiencing with
 the 0.22 release which would strongly benefit from not having the release
 cycles of all components tied together.

 -- Rob


 On 10 April 2013 15:55, Justin Ross jr...@apache.org wrote:

  Hi, everyone.  We've recently been discussing the components of our
  project in a couple different contexts.  This is a proposal to take
  the outcomes of those discussion and apply them to how Qpid is
  organized.
 
  Thanks for taking a look,
  Justin
 
  ## Related discussions
 
   -
 
 http://qpid.2158936.n2.nabble.com/Proposal-to-adjust-our-source-tree-layout-td7587237.html
   - http://qpid.2158936.n2.nabble.com/Website-update-td7590445.html
 
  ## The problem
 
  For a long time, Qpid was in many respects treated as one thing, with
  one release cycle.  Its parts were divided at the top level by
  language, not function.
 
  The division by language provides little incentive to factor out
  dependencies into clean interfaces, and it has tended to mean that
  developers often work in just one language silo.
 
  It has also meant that our source modules have only a weak
  correspondence to the user-focused release artifacts that we produce.
 
  With Proton, we've broken the mold, and the overall picture of Qpid is
  inconsistent and confusing, to the Qpid developers and users.
 
  ## The proposed approach
 
   - Qpid the project embraces a functional division of components
 
   - Each source component is self-contained and independent, with a
 focused purpose; among components, there are well defined
 dependencies
 
   - The source components should correspond closely to the pieces our
 users want to use independently; nonetheless, there would in some
 cases be multiple release artifacts from a component
 
   - Each component has its own set of branches, supporting independent
 releases
 
   - Each component should be neither too large nor too small; large
 enough to ease development and release management; small enough to
 represent a focused functional unit and to clarify dependencies
 
   - API components would in some cases also contain code shared by APIs
 and servers; the server would in that case depend on the API code
 base
 
  ## Proposed source components
 
   - Proton (this one already exists)
 - /qpid/proton/trunk/
 
   - JMS
 - /qpid/jms/trunk/
 - Depends on Proton
 
   - Java broker
 - /qpid/java-broker/trunk/
 - Depends on JMS (?)
 
   - Messaging API
 - /qpid/messaging-api/trunk/
 - Both the C++ (and bindings) and python implementations would
   move here
 - Depends on Proton
 
   - C++ broker
 - /qpid/cpp-broker/trunk/
 - Depends on Messaging-API
 
  Note that this matches the download page of the new website pretty
  nicely.
 
   - http://people.apache.org/~jross/transom/head/download.html
 
  There's some debate about the right names for these things.  Don't
  take my suggestions seriously.  I just had to put something down to
  illustrate.  If I had my druthers, we'd give the two brokers names
  that didn't include a programming language.
 
  ## First steps
 
  This change can't happen all at once.  We propose to start with these:
 
   - Isolate JMS from the existing qpid/trunk/qpid tree
   - Isolate the Messaging API from the existing qpid/trunk/qpid tree
 
  If this is agreed, the idea is to bite off this much during the 0.24
  cycle.
 
  ## Developer infrastructure
 
  This change calls for some work to support developers using multiple
  components in one development environment.  This needs more
  investigation.
 
  ## JIRA instances
 
  We propose *not* to create new jira instances for each component.  We
  can do that later on if necessary.  For now we can overload the
  version field in the qpid jira instance to include a component name.
 
  ## A Qpid distribution of source component releases
 
  While this scheme supports independent releases of each source
  component, it doesn't rule out a Qpid-wide release.  There may be
  reason for Qpid as a whole to share a release cadence and
  produce a new distribution of components each cycle.  It would all
  be more flexible, however.  A component might want to produce three
  revisions in the space of a standard Qpid-wide four-month cycle, or a
  component might produce no new revisions.
 
  

[jira] [Created] (QPID-4714) AMQConnection close is leaking socket connections

2013-04-05 Thread Rajith Attapattu (JIRA)
Rajith Attapattu created QPID-4714:
--

 Summary: AMQConnection close is leaking socket connections
 Key: QPID-4714
 URL: https://issues.apache.org/jira/browse/QPID-4714
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.20
Reporter: Rajith Attapattu
 Fix For: Future


If any of the operations throws an exception before the call to 
_delegate.closeConnection(timeout); then we will leak socket connections.

I already see this happening when running performance tests, where 
session/connection close operations are timing out. The number of connections 
keeps growing.




[jira] [Commented] (QPID-3769) NPE in client AMQDestination.equals()

2013-04-01 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13618814#comment-13618814
 ] 

Rajith Attapattu commented on QPID-3769:


Adjusted the hashcode and equals impl for AMQTopic to take the new 
implementation (in AMQDestination) into account while preserving the old BURL 
based logic.  Addressed concerns regarding the tests.
http://svn.apache.org/r1463158

 NPE in client AMQDestination.equals()
 -

 Key: QPID-3769
 URL: https://issues.apache.org/jira/browse/QPID-3769
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.12
Reporter: Jan Bareš
Assignee: Rajith Attapattu
  Labels: addressing

 Code of org.apache.qpid.client.AMQDestination.equals(Object) is buggy, it 
 should test for null on _exchangeClass and _exchangeName before dereferencing 
 them, lines 522 and 526.
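A minimal null-safe version of the comparison the report asks for might look like the following. This is a simplified stand-in class modeling only the two fields named in the report (_exchangeClass and _exchangeName), not the real AMQDestination; the manual null checks match the pre-Java-7 era of this codebase.

```java
// Simplified stand-in for AMQDestination, modeling only the two
// fields named in the bug report. Not the real class.
public class Destination
{
    private final String _exchangeClass;
    private final String _exchangeName;

    public Destination(String exchangeClass, String exchangeName)
    {
        _exchangeClass = exchangeClass;
        _exchangeName = exchangeName;
    }

    // Null-tolerant string comparison: two nulls are equal, and a null
    // is never dereferenced (the NPE the report describes).
    private static boolean safeEquals(String a, String b)
    {
        return a == null ? b == null : a.equals(b);
    }

    @Override
    public boolean equals(Object o)
    {
        if (this == o)
        {
            return true;
        }
        if (!(o instanceof Destination))
        {
            return false;
        }
        Destination other = (Destination) o;
        return safeEquals(_exchangeClass, other._exchangeClass)
                && safeEquals(_exchangeName, other._exchangeName);
    }

    @Override
    public int hashCode()
    {
        // Keep hashCode consistent with equals, treating null as 0.
        int result = _exchangeClass == null ? 0 : _exchangeClass.hashCode();
        return 31 * result + (_exchangeName == null ? 0 : _exchangeName.hashCode());
    }

    public static void main(String[] args)
    {
        Destination a = new Destination(null, "amq.topic");
        Destination b = new Destination(null, "amq.topic");
        // With raw field.equals() calls this would throw an NPE;
        // the null-safe comparison just returns a boolean.
        System.out.println("equal with null field: " + a.equals(b));
    }
}
```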




[jira] [Updated] (QPID-3769) NPE in client AMQDestination.equals()

2013-04-01 Thread Rajith Attapattu (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajith Attapattu updated QPID-3769:
---

Fix Version/s: 0.22

 NPE in client AMQDestination.equals()
 -

 Key: QPID-3769
 URL: https://issues.apache.org/jira/browse/QPID-3769
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.12
Reporter: Jan Bareš
Assignee: Rajith Attapattu
  Labels: addressing
 Fix For: 0.22


 Code of org.apache.qpid.client.AMQDestination.equals(Object) is buggy, it 
 should test for null on _exchangeClass and _exchangeName before dereferencing 
 them, lines 522 and 526.




[jira] [Commented] (QPID-3769) NPE in client AMQDestination.equals()

2013-04-01 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13618955#comment-13618955
 ] 

Rajith Attapattu commented on QPID-3769:


Ported the relevant changes to 0.22 branch.
http://svn.apache.org/r1463215
http://svn.apache.org/r1463216
http://svn.apache.org/r1463217

 NPE in client AMQDestination.equals()
 -

 Key: QPID-3769
 URL: https://issues.apache.org/jira/browse/QPID-3769
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.12
Reporter: Jan Bareš
Assignee: Rajith Attapattu
  Labels: addressing
 Fix For: 0.22


 Code of org.apache.qpid.client.AMQDestination.equals(Object) is buggy, it 
 should test for null on _exchangeClass and _exchangeName before dereferencing 
 them, lines 522 and 526.




[jira] [Commented] (QPID-3769) NPE in client AMQDestination.equals()

2013-03-27 Thread Rajith Attapattu (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13615424#comment-13615424
 ] 

Rajith Attapattu commented on QPID-3769:


I was going to request a port, but I should have kept this open until then.

I also agree on the comments about the tests for this.


 NPE in client AMQDestination.equals()
 -

 Key: QPID-3769
 URL: https://issues.apache.org/jira/browse/QPID-3769
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Affects Versions: 0.12
Reporter: Jan Bareš
Assignee: Rajith Attapattu
  Labels: addressing

 Code of org.apache.qpid.client.AMQDestination.equals(Object) is buggy, it 
 should test for null on _exchangeClass and _exchangeName before dereferencing 
 them, lines 522 and 526.




Request for inclusion into the 0.22 branch

2013-03-27 Thread Rajith Attapattu
Hey Justin,

I want to include the following commits into the 0.22 branch.
http://svn.apache.org/r1461324
http://svn.apache.org/r1461329

Once I make some adjustments to the tests, I would need to port that as well.
I will send a follow up email when I've got that sorted out.

Regards,

Rajith




  1   2   3   4   5   6   7   8   9   10   >