[jira] [Comment Edited] (QPID-7259) qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages occasionally fails against the Java Broker

2016-05-11 Thread Keith Wall (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281234#comment-15281234
 ] 

Keith Wall edited comment on QPID-7259 at 5/12/16 5:58 AM:
---

It appears the test tries to control the completions manually (there is a TODO 
left in the test code at line 487), but something else (messaging itself??) 
appears to be doing the completions too.




was (Author: k-wall):
It appears the test tries to control the completions manually (there is a TODO 
left in the test code at line 487), but something else (messaging itself??) 
appears to be doing the completions too.



> qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages 
> occasionally fails against the Java Broker
> 
>
> Key: QPID-7259
> URL: https://issues.apache.org/jira/browse/QPID-7259
> Project: Qpid
>  Issue Type: Bug
>  Components: Python Test Suite
>Reporter: Keith Wall
>Priority: Minor
>
> Running 
> {{qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages}} 
> against the Java Broker (trunk), I occasionally see the following failure:
> {noformat}
> qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages 
> ...
>  fail
> Error during test:  Traceback (most recent call last):
> File "/Users/keith/py/bin/qpid-python-test", line 340, in run
>   phase()
> File 
> "/Users/keith/py/lib/python2.7/site-packages/qpid_tests/broker_0_10/message.py",
>  line 489, in test_window_flow_messages
>   self.assertEmpty(q)
> File 
> "/Users/keith/py/lib/python2.7/site-packages/qpid_tests/broker_0_10/message.py",
>  line 1109, in assertEmpty
>   self.fail("Queue not empty, contains: " + extra.body)
> File 
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py",
>  line 412, in fail
>   raise self.failureException(msg)
>   AssertionError: Queue not empty, contains: Message 6
> {noformat}
> With debug turned on, you can see that in the failing case the client side 
> emits an additional SessionCompleted(commands=[0, 0]), allowing the Broker to 
> legitimately send message 6. In the passing case, 
> SessionCompleted(commands=null) is sent instead.
> The failing case:
> {noformat}
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
> - RECV: [conn:6be9e091] ch=1 [S] MessageFlow(destination=c, unit=MESSAGE, 
> value=5)
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> identify: ch=1, commandId=13
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" ch=1 processed([13,13]) 12 12
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" processed: {[0, 12]}
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCompleted(commands={[0, 13]})
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - FLUSH: [conn:6be9e091]
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Written 26 bytes
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 0 byte(s)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 25 byte(s)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - RECV: [conn:6be9e091] ch=1 [S] 
> MessageFlow(destination=c, unit=BYTE, value=4294967295)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> identify: ch=1, commandId=14
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" ch=1 processed([14,14]) 13 13
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" processed: {[0, 13]}
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCompleted(commands={[0, 14]})
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - FLUSH: [conn:6be9e091]
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Written 26 bytes
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 0 byte(s)
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCommandPoint(commandId=0, 

[jira] [Commented] (QPID-7259) qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages occasionally fails against the Java Broker

2016-05-11 Thread Keith Wall (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281234#comment-15281234
 ] 

Keith Wall commented on QPID-7259:
--

It appears the test tries to control the completions manually (there is a TODO 
left in the test code at line 487), but something else (messaging itself??) 
appears to be doing the completions too.



> qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages 
> occasionally fails against the Java Broker
> 
>
> Key: QPID-7259
> URL: https://issues.apache.org/jira/browse/QPID-7259
> Project: Qpid
>  Issue Type: Bug
>  Components: Python Test Suite
>Reporter: Keith Wall
>
> Running 
> {{qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages}} 
> against the Java Broker (trunk), I occasionally see the following failure:
> {noformat}
> qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages 
> ...
>  fail
> Error during test:  Traceback (most recent call last):
> File "/Users/keith/py/bin/qpid-python-test", line 340, in run
>   phase()
> File 
> "/Users/keith/py/lib/python2.7/site-packages/qpid_tests/broker_0_10/message.py",
>  line 489, in test_window_flow_messages
>   self.assertEmpty(q)
> File 
> "/Users/keith/py/lib/python2.7/site-packages/qpid_tests/broker_0_10/message.py",
>  line 1109, in assertEmpty
>   self.fail("Queue not empty, contains: " + extra.body)
> File 
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py",
>  line 412, in fail
>   raise self.failureException(msg)
>   AssertionError: Queue not empty, contains: Message 6
> {noformat}
> With debug turned on, you can see that in the failing case the client side 
> emits an additional SessionCompleted(commands=[0, 0]), allowing the Broker to 
> legitimately send message 6. In the passing case, 
> SessionCompleted(commands=null) is sent instead.
> The failing case:
> {noformat}
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
> - RECV: [conn:6be9e091] ch=1 [S] MessageFlow(destination=c, unit=MESSAGE, 
> value=5)
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> identify: ch=1, commandId=13
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" ch=1 processed([13,13]) 12 12
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" processed: {[0, 12]}
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCompleted(commands={[0, 13]})
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - FLUSH: [conn:6be9e091]
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Written 26 bytes
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 0 byte(s)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 25 byte(s)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - RECV: [conn:6be9e091] ch=1 [S] 
> MessageFlow(destination=c, unit=BYTE, value=4294967295)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> identify: ch=1, commandId=14
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" ch=1 processed([14,14]) 13 13
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" processed: {[0, 13]}
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCompleted(commands={[0, 14]})
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - FLUSH: [conn:6be9e091]
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Written 26 bytes
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 0 byte(s)
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCommandPoint(commandId=0, commandOffset=0)
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - FLUSH: [conn:6be9e091]
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 id=0 [B] 
> MessageTransfer(destination=c, acceptMode=EXPLICIT, 

[jira] [Updated] (QPID-7259) qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages occasionally fails against the Java Broker

2016-05-11 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-7259:
-
Priority: Minor  (was: Major)

> qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages 
> occasionally fails against the Java Broker
> 
>
> Key: QPID-7259
> URL: https://issues.apache.org/jira/browse/QPID-7259
> Project: Qpid
>  Issue Type: Bug
>  Components: Python Test Suite
>Reporter: Keith Wall
>Priority: Minor
>
> Running 
> {{qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages}} 
> against the Java Broker (trunk), I occasionally see the following failure:
> {noformat}
> qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages 
> ...
>  fail
> Error during test:  Traceback (most recent call last):
> File "/Users/keith/py/bin/qpid-python-test", line 340, in run
>   phase()
> File 
> "/Users/keith/py/lib/python2.7/site-packages/qpid_tests/broker_0_10/message.py",
>  line 489, in test_window_flow_messages
>   self.assertEmpty(q)
> File 
> "/Users/keith/py/lib/python2.7/site-packages/qpid_tests/broker_0_10/message.py",
>  line 1109, in assertEmpty
>   self.fail("Queue not empty, contains: " + extra.body)
> File 
> "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py",
>  line 412, in fail
>   raise self.failureException(msg)
>   AssertionError: Queue not empty, contains: Message 6
> {noformat}
> With debug turned on, you can see that in the failing case the client side 
> emits an additional SessionCompleted(commands=[0, 0]), allowing the Broker to 
> legitimately send message 6. In the passing case, 
> SessionCompleted(commands=null) is sent instead.
> The failing case:
> {noformat}
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
> - RECV: [conn:6be9e091] ch=1 [S] MessageFlow(destination=c, unit=MESSAGE, 
> value=5)
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> identify: ch=1, commandId=13
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" ch=1 processed([13,13]) 12 12
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" processed: {[0, 12]}
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCompleted(commands={[0, 13]})
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - FLUSH: [conn:6be9e091]
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Written 26 bytes
> 2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 0 byte(s)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 25 byte(s)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - RECV: [conn:6be9e091] ch=1 [S] 
> MessageFlow(destination=c, unit=BYTE, value=4294967295)
> 2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> identify: ch=1, commandId=14
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" ch=1 processed([14,14]) 13 13
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
> ssn:"test-session" processed: {[0, 13]}
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCompleted(commands={[0, 14]})
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - FLUSH: [conn:6be9e091]
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Written 26 bytes
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.s.t.NonBlockingConnection) - Read 0 byte(s)
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 
> SessionCommandPoint(commandId=0, commandOffset=0)
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - FLUSH: [conn:6be9e091]
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] ch=1 id=0 [B] 
> MessageTransfer(destination=c, acceptMode=EXPLICIT, acquireMode=PRE_ACQUIRED)
>   DeliveryProperties(routingKey=q)
>   body="Message 1"
> 2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
> (o.a.q.t.Connection) - SEND: [conn:6be9e091] 

[jira] [Created] (QPID-7259) qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages occasionally fails against the Java Broker

2016-05-11 Thread Keith Wall (JIRA)
Keith Wall created QPID-7259:


 Summary: 
qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages 
occasionally fails against the Java Broker
 Key: QPID-7259
 URL: https://issues.apache.org/jira/browse/QPID-7259
 Project: Qpid
  Issue Type: Bug
  Components: Python Test Suite
Reporter: Keith Wall


Running 
{{qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages}} 
against the Java Broker (trunk), I occasionally see the following failure:

{noformat}
qpid_tests.broker_0_10.message.MessageTests.test_window_flow_messages 
...
 fail
Error during test:  Traceback (most recent call last):
File "/Users/keith/py/bin/qpid-python-test", line 340, in run
  phase()
File 
"/Users/keith/py/lib/python2.7/site-packages/qpid_tests/broker_0_10/message.py",
 line 489, in test_window_flow_messages
  self.assertEmpty(q)
File 
"/Users/keith/py/lib/python2.7/site-packages/qpid_tests/broker_0_10/message.py",
 line 1109, in assertEmpty
  self.fail("Queue not empty, contains: " + extra.body)
File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py",
 line 412, in fail
  raise self.failureException(msg)
  AssertionError: Queue not empty, contains: Message 6
{noformat}

With debug turned on, you can see that in the failing case the client side emits 
an additional SessionCompleted(commands=[0, 0]), allowing the Broker to 
legitimately send message 6. In the passing case, SessionCompleted(commands=null) 
is sent instead.
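
A toy model of the window accounting described above (illustrative only, not the 
actual test or broker code; the class and names below are made up): in window 
flow mode, a completion the client sends effectively hands one unit of message 
credit back to the Broker, so a stray completion lets exactly one extra transfer 
through.

{noformat}
# Illustrative only: a toy model of 0-10 message-window credit, not Qpid code.
class ToyBroker(object):
    def __init__(self, credit):
        self.credit = credit              # messages the broker may still deliver
        self.next_id = 1
        self.delivered = []

    def deliver_while_credit(self):
        while self.credit > 0:
            self.delivered.append("Message %d" % self.next_id)
            self.next_id += 1
            self.credit -= 1

broker = ToyBroker(credit=5)              # MessageFlow(unit=MESSAGE, value=5)
broker.deliver_while_credit()             # Messages 1..5 arrive; window exhausted
broker.credit += 1                        # a stray SessionCompleted restores credit
broker.deliver_while_credit()             # ...and "Message 6" goes out legitimately
print(broker.delivered[-1])               # -> Message 6
{noformat}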

The failing case:
{noformat}
2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) - 
RECV: [conn:6be9e091] ch=1 [S] MessageFlow(destination=c, unit=MESSAGE, value=5)
2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
identify: ch=1, commandId=13
2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
ssn:"test-session" ch=1 processed([13,13]) 12 12
2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
ssn:"test-session" processed: {[0, 12]}
2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- SEND: [conn:6be9e091] ch=1 SessionCompleted(commands={[0, 13]})
2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- FLUSH: [conn:6be9e091]
2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
(o.a.q.s.t.NonBlockingConnection) - Written 26 bytes
2016-05-11 23:16:16,341 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
(o.a.q.s.t.NonBlockingConnection) - Read 0 byte(s)
2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
(o.a.q.s.t.NonBlockingConnection) - Read 25 byte(s)
2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- RECV: [conn:6be9e091] ch=1 [S] MessageFlow(destination=c, unit=BYTE, 
value=4294967295)
2016-05-11 23:16:16,342 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
identify: ch=1, commandId=14
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
ssn:"test-session" ch=1 processed([14,14]) 13 13
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Session) - 
ssn:"test-session" processed: {[0, 13]}
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- SEND: [conn:6be9e091] ch=1 SessionCompleted(commands={[0, 14]})
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- FLUSH: [conn:6be9e091]
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
(o.a.q.s.t.NonBlockingConnection) - Written 26 bytes
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 
(o.a.q.s.t.NonBlockingConnection) - Read 0 byte(s)
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- SEND: [conn:6be9e091] ch=1 SessionCommandPoint(commandId=0, commandOffset=0)
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- FLUSH: [conn:6be9e091]
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- SEND: [conn:6be9e091] ch=1 id=0 [B] MessageTransfer(destination=c, 
acceptMode=EXPLICIT, acquireMode=PRE_ACQUIRED)
  DeliveryProperties(routingKey=q)
  body="Message 1"
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- SEND: [conn:6be9e091] ch=1 SessionFlush(completed=true)
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- FLUSH: [conn:6be9e091]
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] (o.a.q.t.Connection) 
- SEND: [conn:6be9e091] ch=1 id=1 [B] MessageTransfer(destination=c, 
acceptMode=EXPLICIT, acquireMode=PRE_ACQUIRED)
  DeliveryProperties(routingKey=q)
  body="Message 2"
2016-05-11 23:16:16,343 DEBUG [IO-/0:0:0:0:0:0:0:1:54458] 

Re: Review Request 47243: PROTON-1133: decouple the virtual host from the network address used by reactor

2016-05-11 Thread Alan Conway

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/47243/#review132776
---


Ship it!




Love it. Give unto the reactor what is the reactor's and unto the container 
what is the container's. One doc nit which you can ignore at will. Ship It.


proton-c/include/proton/connection.h (line 335)


Nit: It's not illegal, just very weird. It might make sense in some non-DNS 
environment with different rules about "hostname".

How about "Note: the virtual host string is passed verbatim, it is not 
parsed as a URL or modified in any way. It should not contain numeric IP 
addresses or port numbers unless that is what you intend to send as the virtual 
host name"


- Alan Conway


On May 11, 2016, 4:52 p.m., Kenneth Giusti wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/47243/
> ---
> 
> (Updated May 11, 2016, 4:52 p.m.)
> 
> 
> Review request for qpid, Alan Conway, Chug Rolke, Cliff Jansen, Justin Ross, 
> and Robbie Gemmell.
> 
> 
> Repository: qpid-proton-git
> 
> 
> Description
> ---
> 
> The pn_connection_set_hostname() interface is used to set the
> 'hostname' field in the Open performative.  By definition this is the
> 'virtual host' and should not be used by reactor for the network
> address.  The network address for outgoing connections should be set
> by using the reactor's pn_reactor_connection_to_host() factory, or the
> pn_reactor_set_connection_host() when re-connecting to a different
> host.  For inbound connections, the peer address is provided by the
> acceptor and cannot be modified.  In both cases, the
> pn_reactor_get_connection_address() method can be used to obtain the
> peer's network address.
> 
> 
> Diffs
> -
> 
>   proton-c/bindings/cpp/src/container_impl.cpp a221f45 
>   proton-c/bindings/cpp/src/reactor.hpp 48d9ea1 
>   proton-c/bindings/cpp/src/reactor.cpp 9507d2b 
>   proton-c/bindings/python/proton/reactor.py 1631c35 
>   proton-c/include/proton/connection.h da20f94 
>   proton-c/include/proton/reactor.h be642a9 
>   proton-c/src/posix/io.c 3226594 
>   proton-c/src/reactor/acceptor.c 8f0e99b 
>   proton-c/src/reactor/connection.c 336d1f1 
>   proton-c/src/reactor/reactor.h f996dca 
>   proton-c/src/tests/reactor.c 9564569 
>   proton-c/src/windows/io.c 7ff928d 
>   proton-j/src/main/java/org/apache/qpid/proton/reactor/Reactor.java a3307d2 
>   
> proton-j/src/main/java/org/apache/qpid/proton/reactor/impl/AcceptorImpl.java 
> fb2f892 
>   proton-j/src/main/java/org/apache/qpid/proton/reactor/impl/IOHandler.java 
> 5a32824 
>   proton-j/src/main/java/org/apache/qpid/proton/reactor/impl/ReactorImpl.java 
> d13cfbe 
>   tests/python/proton_tests/reactor.py 6ee107d 
> 
> Diff: https://reviews.apache.org/r/47243/diff/
> 
> 
> Testing
> ---
> 
> New unit tests added.
> 
> 
> Thanks,
> 
> Kenneth Giusti
> 
>



[jira] [Commented] (QPID-7207) Reorganize Qpid source for independent releases

2016-05-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280607#comment-15280607
 ] 

ASF subversion and git services commented on QPID-7207:
---

Commit 1743410 from [~justi9] in branch 'qpid/trunk'
[ https://svn.apache.org/r1743410 ]

QPID-7207: Make call_for_output Python 2.6 compatible

> Reorganize Qpid source for independent releases
> ---
>
> Key: QPID-7207
> URL: https://issues.apache.org/jira/browse/QPID-7207
> Project: Qpid
>  Issue Type: Task
>Reporter: Justin Ross
>Assignee: Justin Ross
>
> An effort to achieve the source tree layout proposed here\[1\]. It allows the 
> Qpid project to produce independent releases of Qpid C++ and Python as well 
> as other modules that have heretofore been bundled into one large Qpid 
> release.  More detail in the proposal\[2\].
> \[1\] 
> https://cwiki.apache.org/confluence/display/qpid/Source+tree+layout+proposal
> \[2\] https://github.com/ssorj/qpid-svn-reorg






[jira] [Comment Edited] (DISPATCH-332) Heavy message loss happening with 2 interconnected routers

2016-05-11 Thread Vishal Sharda (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280551#comment-15280551
 ] 

Vishal Sharda edited comment on DISPATCH-332 at 5/11/16 6:18 PM:
-

Two router configuration files to reproduce the message loss bug and the output 
from the receiver.

There were two simple_send.py senders running in parallel, each sending 20K 
messages.  The simple_recv.py on the other router, however, received only 1 
message from both the senders - the last one (2).


was (Author: vsharda):
Two router configuration files to reproduce the message loss bug and the output 
from the receiver.


> Heavy message loss happening with 2 interconnected routers
> --
>
> Key: DISPATCH-332
> URL: https://issues.apache.org/jira/browse/DISPATCH-332
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Qpid Proton 0.12.2 for drivers and 
> dependency for Qpid Dispatch, Hardware: 2 CPUs, 15 GB RAM, 30 GB HDD.
>Reporter: Vishal Sharda
>Assignee: Ted Ross
>Priority: Blocker
> Fix For: 0.6.0
>
> Attachments: config1.conf, config2.conf, output.txt
>
>
> We are running two Dispatch Routers each configured for interior mode and the 
> second router's configuration includes a connector to the first router with 
> inter-router role.
> When we connect one sender to one router and one receiver to the other router 
> both listening to the same queue, we see all messages (20,000 in our test) 
> being transmitted.
> As soon as we start a second sender connected to the same router to which the 
> first sender connects and sending to the same queue, we start seeing heavy 
> message loss.  Around 20% of messages are lost with each sender attempting to 
> send 20,000 messages on its own (40,000 in total) and running in parallel 
> with the other sender.  The message loss happens regardless of the message 
> size.
> We tried with simple_send.py, simple_recv.py as well as send and recv C 
> executable files from Qpid Proton 0.12.2.
> We even saw a crash in the router with the following message:
> qdrouterd: /home/vsharda/qpid-dispatch/src/posix/threading.c:71: 
> sys_mutex_lock: Assertion `result == 0' failed.
> Aborted
> The message loss was observed with the 0.6.0 SNAPSHOT taken on May 9 as well 
> as the one taken on March 3 before the router core refactoring.






[jira] [Commented] (DISPATCH-332) Heavy message loss happening with 2 interconnected routers

2016-05-11 Thread Ted Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280552#comment-15280552
 ] 

Ted Ross commented on DISPATCH-332:
---

I tried to reproduce your symptom and didn't see any problem.
If you are using simple_send and simple_recv for this test, you will have a 
problem with multiple senders on the same address.  Simple_recv ignores 
duplicate messages, so it's possible that the problem you are seeing is a result 
of ignored duplicates (two instances of simple_send will send messages with the 
same message-id, and the receiver will detect/ignore the duplicates).
Try removing the first three lines of on_message in simple_recv.py and testing 
again:
{noformat}
def on_message(self, event):
-    if event.message.id and event.message.id < self.received:
-        # ignore duplicate message
-        return
    if self.expected == 0 or self.received < self.expected:
        print event.message.body
        self.received += 1
        if self.received == self.expected:
            event.receiver.close()
            event.connection.close()
{noformat}
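
As a toy re-enactment of that guard (illustrative only, not router or Proton 
code; it assumes both senders number their messages 1, 2, 3, ... independently), 
interleaved deliveries from two senders quickly start tripping the duplicate 
check:

{noformat}
def is_treated_as_duplicate(msg_id, received_count):
    # mirrors: if event.message.id and event.message.id < self.received: return
    return bool(msg_id) and msg_id < received_count

received = 0
interleaved_ids = [1, 1, 2, 2, 3, 3, 4, 4]    # two parallel senders, ids interleaved
for msg_id in interleaved_ids:
    if is_treated_as_duplicate(msg_id, received):
        continue                              # silently dropped as a "duplicate"
    received += 1

print("delivered %d, counted %d" % (len(interleaved_ids), received))
# prints: delivered 8, counted 5
{noformat}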

> Heavy message loss happening with 2 interconnected routers
> --
>
> Key: DISPATCH-332
> URL: https://issues.apache.org/jira/browse/DISPATCH-332
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Qpid Proton 0.12.2 for drivers and 
> dependency for Qpid Dispatch, Hardware: 2 CPUs, 15 GB RAM, 30 GB HDD.
>Reporter: Vishal Sharda
>Assignee: Ted Ross
>Priority: Blocker
> Fix For: 0.6.0
>
> Attachments: config1.conf, config2.conf, output.txt
>
>
> We are running two Dispatch Routers each configured for interior mode and the 
> second router's configuration includes a connector to the first router with 
> inter-router role.
> When we connect one sender to one router and one receiver to the other router 
> both listening to the same queue, we see all messages (20,000 in our test) 
> being transmitted.
> As soon as we start a second sender connected to the same router to which the 
> first sender connects and sending to the same queue, we start seeing heavy 
> message loss.  Around 20% of messages are lost with each sender attempting to 
> send 20,000 messages on its own (40,000 in total) and running in parallel 
> with the other sender.  The message loss happens regardless of the message 
> size.
> We tried with simple_send.py, simple_recv.py as well as send and recv C 
> executable files from Qpid Proton 0.12.2.
> We even saw a crash in the router with the following message:
> qdrouterd: /home/vsharda/qpid-dispatch/src/posix/threading.c:71: 
> sys_mutex_lock: Assertion `result == 0' failed.
> Aborted
> The message loss was observed with the 0.6.0 SNAPSHOT taken on May 9 as well 
> as the one taken on March 3 before the router core refactoring.






[jira] [Updated] (DISPATCH-332) Heavy message loss happening with 2 interconnected routers

2016-05-11 Thread Vishal Sharda (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal Sharda updated DISPATCH-332:
---
Attachment: output.txt
config2.conf
config1.conf

Two router configuration files to reproduce the message loss bug and the output 
from the receiver.


> Heavy message loss happening with 2 interconnected routers
> --
>
> Key: DISPATCH-332
> URL: https://issues.apache.org/jira/browse/DISPATCH-332
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Qpid Proton 0.12.2 for drivers and 
> dependency for Qpid Dispatch, Hardware: 2 CPUs, 15 GB RAM, 30 GB HDD.
>Reporter: Vishal Sharda
>Assignee: Ted Ross
>Priority: Blocker
> Fix For: 0.6.0
>
> Attachments: config1.conf, config2.conf, output.txt
>
>
> We are running two Dispatch Routers each configured for interior mode and the 
> second router's configuration includes a connector to the first router with 
> inter-router role.
> When we connect one sender to one router and one receiver to the other router 
> both listening to the same queue, we see all messages (20,000 in our test) 
> being transmitted.
> As soon as we start a second sender connected to the same router to which the 
> first sender connects and sending to the same queue, we start seeing heavy 
> message loss.  Around 20% of messages are lost with each sender attempting to 
> send 20,000 messages on its own (40,000 in total) and running in parallel 
> with the other sender.  The message loss happens regardless of the message 
> size.
> We tried with simple_send.py, simple_recv.py as well as send and recv C 
> executable files from Qpid Proton 0.12.2.
> We even saw a crash in the router with the following message:
> qdrouterd: /home/vsharda/qpid-dispatch/src/posix/threading.c:71: 
> sys_mutex_lock: Assertion `result == 0' failed.
> Aborted
> The message loss was observed with the 0.6.0 SNAPSHOT taken on May 9 as well 
> as the one taken on March 3 before the router core refactoring.






[jira] [Resolved] (DISPATCH-319) Discrepancy between origin router's path and other routers' valid-origins

2016-05-11 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross resolved DISPATCH-319.
---
Resolution: Fixed

> Discrepancy between origin router's path and other routers' valid-origins
> -
>
> Key: DISPATCH-319
> URL: https://issues.apache.org/jira/browse/DISPATCH-319
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
>Reporter: Ted Ross
>Assignee: Ted Ross
>Priority: Blocker
> Fix For: 0.6.0
>
>
> In certain cases where there are multiple equal-cost paths across a network, 
> the path chosen by an origin to a destination can be different from the path 
> that other routers assume the origin will choose.  This results in 
> valid-origin lists that block traffic flowing from the origin because the 
> transit router doesn't think it's on the chosen path from the origin.
> This results in balanced deliveries being intermittently released when there 
> are valid consumers.






[jira] [Commented] (DISPATCH-332) Heavy message loss happening with 2 interconnected routers

2016-05-11 Thread Ganesh Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280432#comment-15280432
 ] 

Ganesh Murthy commented on DISPATCH-332:


Can you please attach the two router config files you are using to this Jira? 
Thanks.

> Heavy message loss happening with 2 interconnected routers
> --
>
> Key: DISPATCH-332
> URL: https://issues.apache.org/jira/browse/DISPATCH-332
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Qpid Proton 0.12.2 for drivers and 
> dependency for Qpid Dispatch, Hardware: 2 CPUs, 15 GB RAM, 30 GB HDD.
>Reporter: Vishal Sharda
>Assignee: Ted Ross
>Priority: Blocker
> Fix For: 0.6.0
>
>
> We are running two Dispatch Routers each configured for interior mode and the 
> second router's configuration includes a connector to the first router with 
> inter-router role.
> When we connect one sender to one router and one receiver to the other router 
> both listening to the same queue, we see all messages (20,000 in our test) 
> being transmitted.
> As soon as we start a second sender connected to the same router to which the 
> first sender connects and sending to the same queue, we start seeing heavy 
> message loss.  Around 20% of messages are lost with each sender attempting to 
> send 20,000 messages on its own (40,000 in total) and running in parallel 
> with the other sender.  The message loss happens regardless of the message 
> size.
> We tried with simple_send.py, simple_recv.py as well as send and recv C 
> executable files from Qpid Proton 0.12.2.
> We even saw a crash in the router with the following message:
> qdrouterd: /home/vsharda/qpid-dispatch/src/posix/threading.c:71: 
> sys_mutex_lock: Assertion `result == 0' failed.
> Aborted
> The message loss was observed with the 0.6.0 SNAPSHOT taken on May 9 as well 
> as the one taken on March 3 before the router core refactoring.






[jira] [Commented] (DISPATCH-320) SSL enabled connector does not do hostname verification

2016-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280430#comment-15280430
 ] 

ASF GitHub Bot commented on DISPATCH-320:
-

GitHub user ganeshmurthy opened a pull request:

https://github.com/apache/qpid-dispatch/pull/73

DISPATCH-320 - Added a new connector property called verifyHostName which will 
verify host name on SSL connections

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ganeshmurthy/qpid-dispatch DISPATCH-320-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-dispatch/pull/73.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #73


commit 2afba41297a41fd82abc26ff5c4065c109ff934c
Author: Ganesh Murthy 
Date:   2016-05-11T14:51:13Z

DISPATCH-320 - Added a new connector property called verifyHostName which 
will verify host name on SSL connections




> SSL enabled connector does not do hostname verification
> ---
>
> Key: DISPATCH-320
> URL: https://issues.apache.org/jira/browse/DISPATCH-320
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 0.6.0
>Reporter: Ganesh Murthy
>Assignee: Ganesh Murthy
> Fix For: 0.6.0
>
>
> When a connector is ssl enabled (contains an sslProfile), dispatch does not 
> do hostname verification. 
> Dispatch must ensure that the host name in the URL to which it connects 
> matches the host name in the digital certificate that the server sends back 
> as part of the SSL connection.






[GitHub] qpid-dispatch pull request: DISPATCH-320 - Added a new connector p...

2016-05-11 Thread ganeshmurthy
GitHub user ganeshmurthy opened a pull request:

https://github.com/apache/qpid-dispatch/pull/73

DISPATCH-320 - Added a new connector property called verifyHostName which will 
verify host name on SSL connections

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ganeshmurthy/qpid-dispatch DISPATCH-320-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-dispatch/pull/73.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #73


commit 2afba41297a41fd82abc26ff5c4065c109ff934c
Author: Ganesh Murthy 
Date:   2016-05-11T14:51:13Z

DISPATCH-320 - Added a new connector property called verifyHostName which 
will verify host name on SSL connections







[jira] [Commented] (DISPATCH-320) SSL enabled connector does not do hostname verification

2016-05-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280422#comment-15280422
 ] 

ASF GitHub Bot commented on DISPATCH-320:
-

Github user ganeshmurthy closed the pull request at:

https://github.com/apache/qpid-dispatch/pull/72


> SSL enabled connector does not do hostname verification
> ---
>
> Key: DISPATCH-320
> URL: https://issues.apache.org/jira/browse/DISPATCH-320
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 0.6.0
>Reporter: Ganesh Murthy
>Assignee: Ganesh Murthy
> Fix For: 0.6.0
>
>
> When a connector is ssl enabled (contains an sslProfile), dispatch does not 
> do hostname verification. 
> Dispatch must ensure that the host name in the URL to which it connects 
> matches the host name in the digital certificate that the server sends back 
> as part of the SSL connection.






[GitHub] qpid-dispatch pull request: DISPATCH-320 - Added a new connector p...

2016-05-11 Thread ganeshmurthy
Github user ganeshmurthy closed the pull request at:

https://github.com/apache/qpid-dispatch/pull/72





Review Request 47243: PROTON-1133: decouple the virtual host from the network address used by reactor

2016-05-11 Thread Kenneth Giusti

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/47243/
---

Review request for qpid, Alan Conway, Chug Rolke, Cliff Jansen, Justin Ross, 
and Robbie Gemmell.


Repository: qpid-proton-git


Description
---

The pn_connection_set_hostname() interface is used to set the
'hostname' field in the Open performative.  By definition this is the
'virtual host' and should not be used by reactor for the network
address.  The network address for outgoing connections should be set
by using the reactor's pn_reactor_connection_to_host() factory, or the
pn_reactor_set_connection_host() when re-connecting to a different
host.  For inbound connections, the peer address is provided by the
acceptor and cannot be modified.  In both cases, the
pn_reactor_get_connection_address() method can be used to obtain the
peer's network address.
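
For readers using the Python binding, the same separation looks roughly as 
follows. This is only a sketch: the virtual_host connection option exists in 
later proton releases, and whether that exact spelling applies to the binding 
as of this patch is an assumption, not something taken from the diff.

    # Sketch: dial one network address while presenting a different AMQP
    # virtual host (the Open performative's hostname field) to the peer.
    from proton.handlers import MessagingHandler
    from proton.reactor import Container

    class Client(MessagingHandler):
        def on_start(self, event):
            event.container.connect(
                "amqp://broker.example.net:5672",     # network address to dial
                virtual_host="tenant-a.example.com")  # goes into Open.hostname

    Container(Client()).run()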


Diffs
-

  proton-c/bindings/cpp/src/container_impl.cpp a221f45 
  proton-c/bindings/cpp/src/reactor.hpp 48d9ea1 
  proton-c/bindings/cpp/src/reactor.cpp 9507d2b 
  proton-c/bindings/python/proton/reactor.py 1631c35 
  proton-c/include/proton/connection.h da20f94 
  proton-c/include/proton/reactor.h be642a9 
  proton-c/src/posix/io.c 3226594 
  proton-c/src/reactor/acceptor.c 8f0e99b 
  proton-c/src/reactor/connection.c 336d1f1 
  proton-c/src/reactor/reactor.h f996dca 
  proton-c/src/tests/reactor.c 9564569 
  proton-c/src/windows/io.c 7ff928d 
  proton-j/src/main/java/org/apache/qpid/proton/reactor/Reactor.java a3307d2 
  proton-j/src/main/java/org/apache/qpid/proton/reactor/impl/AcceptorImpl.java 
fb2f892 
  proton-j/src/main/java/org/apache/qpid/proton/reactor/impl/IOHandler.java 
5a32824 
  proton-j/src/main/java/org/apache/qpid/proton/reactor/impl/ReactorImpl.java 
d13cfbe 
  tests/python/proton_tests/reactor.py 6ee107d 

Diff: https://reviews.apache.org/r/47243/diff/


Testing
---

New unit tests added.


Thanks,

Kenneth Giusti



[jira] [Commented] (DISPATCH-332) Heavy message loss happening with 2 interconnected routers

2016-05-11 Thread Vishal Sharda (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280386#comment-15280386
 ] 

Vishal Sharda commented on DISPATCH-332:


I used the following fixedAddress in both the configuration files.

fixedAddress {
prefix: /
fanout: single
bias: closest
}

Insecure port 5672 was used for all the communication.

Everything is working fine if the 2 senders and 1 receiver are all attached to 
the same router and also if 1 sender and 1 receiver are each connected to the 
two interconnected routers.  The issue occurs only when we start a second 
parallel sender on the same router where one sender is already active.

Increasing the number of parallel senders and receivers further increases the 
percentage of messages lost.


> Heavy message loss happening with 2 interconnected routers
> --
>
> Key: DISPATCH-332
> URL: https://issues.apache.org/jira/browse/DISPATCH-332
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Qpid Proton 0.12.2 for drivers and 
> dependency for Qpid Dispatch, Hardware: 2 CPUs, 15 GB RAM, 30 GB HDD.
>Reporter: Vishal Sharda
>Assignee: Ted Ross
>Priority: Blocker
> Fix For: 0.6.0
>
>
> We are running two Dispatch Routers each configured for interior mode and the 
> second router's configuration includes a connector to the first router with 
> inter-router role.
> When we connect one sender to one router and one receiver to the other router 
> both listening to the same queue, we see all messages (20,000 in our test) 
> being transmitted.
> As soon as we start a second sender connected to the same router to which the 
> first sender connects and sending to the same queue, we start seeing heavy 
> message loss.  Around 20% of messages are lost with each sender attempting to 
> send 20,000 messages on its own (40,000 in total) and running in parallel 
> with the other sender.  The message loss happens regardless of the message 
> size.
> We tried with simple_send.py, simple_recv.py as well as send and recv C 
> executable files from Qpid Proton 0.12.2.
> We even saw a crash in the router with the following message:
> qdrouterd: /home/vsharda/qpid-dispatch/src/posix/threading.c:71: 
> sys_mutex_lock: Assertion `result == 0' failed.
> Aborted
> The message loss was observed with the 0.6.0 SNAPSHOT taken on May 9 as well 
> as the one taken on March 3 before the router core refactoring.






[jira] [Commented] (QPID-5816) [Java Client 0-10] If a resolved destination is used to create a consumer on a new connection created after destination was resolved, the client does not try to create t

2016-05-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280383#comment-15280383
 ] 

ASF subversion and git services commented on QPID-5816:
---

Commit 1743394 from oru...@apache.org in branch 'java/branches/6.0.x'
[ https://svn.apache.org/r1743394 ]

QPID-5816: [Java Client] Maintain a per-session (weak) cache of resolved 
Destinations

merged from trunk using
svn merge -c 1743228   ^/qpid/java/trunk

> [Java Client 0-10] If a resolved destination is used to create a consumer on 
> a new connection created after destination was resolved, the client does not 
> try to create the destination on the broker
> -
>
> Key: QPID-5816
> URL: https://issues.apache.org/jira/browse/QPID-5816
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.20, 0.22, 0.24, 0.26
>Reporter: Alex Rudyy
> Fix For: qpid-java-6.0.3
>
>
> On consumer creation for a destination marked as resolved, the 0.10 client does 
> not send the exchange.declare, queue.declare and exchange.bind commands to the 
> broker to create the corresponding broker entities.
> If such a resolved destination is used to create a consumer and the destination 
> node does not exist on the broker, consumer creation will fail.
> In response to the subscribe command, the broker can return an exception because 
> the destination does not exist:
> {noformat}
> ...
> SEND: [conn:7e779b1b] ch=0 id=2 MessageSubscribe(queue=test_queue, 
> destination=1, acceptMode=EXPLICIT, acquireMode=PRE_ACQUIRED, resumeTtl=0, 
> arguments={x-filter-jms-selector=})
> ...
> RECV: [conn:7e779b1b] ch=0 ExecutionException(errorCode=NOT_FOUND, 
> commandId=2, description=Queue: test_queue not found)
> ...
> {noformat}






[jira] [Commented] (QPID-7257) [Java Broker] Correct connection state logging

2016-05-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280344#comment-15280344
 ] 

ASF subversion and git services commented on QPID-7257:
---

Commit 1743393 from oru...@apache.org in branch 'java/branches/6.0.x'
[ https://svn.apache.org/r1743393 ]

QPID-7257: [Java Broker] Correct connection state logging

merged from trunk using
svn merge -c 1743161  ^/qpid/java/trunk

> [Java Broker] Correct connection state logging
> --
>
> Key: QPID-7257
> URL: https://issues.apache.org/jira/browse/QPID-7257
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Broker
>Reporter: Alex Rudyy
> Fix For: qpid-java-6.0.3, qpid-java-6.1
>
>
> Correct connection state logging






[jira] [Commented] (DISPATCH-332) Heavy message loss happening with 2 interconnected routers

2016-05-11 Thread Ted Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280288#comment-15280288
 ] 

Ted Ross commented on DISPATCH-332:
---

Can you provide information about what distribution settings you were using in 
your test?  What addresses did you use?  Did you provide any configuration for 
those addresses?  Were they multicast, closest, or balanced?

-Ted

> Heavy message loss happening with 2 interconnected routers
> --
>
> Key: DISPATCH-332
> URL: https://issues.apache.org/jira/browse/DISPATCH-332
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Qpid Proton 0.12.2 for drivers and 
> dependency for Qpid Dispatch, Hardware: 2 CPUs, 15 GB RAM, 30 GB HDD.
>Reporter: Vishal Sharda
>Assignee: Ted Ross
>Priority: Blocker
> Fix For: 0.6.0
>
>
> We are running two Dispatch Routers each configured for interior mode and the 
> second router's configuration includes a connector to the first router with 
> inter-router role.
> When we connect one sender to one router and one receiver to the other router 
> both listening to the same queue, we see all messages (20,000 in our test) 
> being transmitted.
> As soon as we start a second sender connected to the same router to which the 
> first sender connects and sending to the same queue, we start seeing heavy 
> message loss.  Around 20% of messages are lost with each sender attempting to 
> send 20,000 messages on its own (40,000 in total) and running in parallel 
> with the other sender.  The message loss happens regardless of the message 
> size.
> We tried with simple_send.py, simple_recv.py as well as send and recv C 
> executable files from Qpid Proton 0.12.2.
> We even saw a crash in the router with the following message:
> qdrouterd: /home/vsharda/qpid-dispatch/src/posix/threading.c:71: 
> sys_mutex_lock: Assertion `result == 0' failed.
> Aborted
> The message loss was observed with the 0.6.0 SNAPSHOT taken on May 9 as well 
> as the one taken on March 3 before the router core refactoring.






[jira] [Commented] (QPID-7237) Excessive threads creation when suspending/resuming flow

2016-05-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280265#comment-15280265
 ] 

ASF subversion and git services commented on QPID-7237:
---

Commit 1743387 from oru...@apache.org in branch 'java/branches/6.0.x'
[ https://svn.apache.org/r1743387 ]

QPID-7237: [Java Client] Use single thread thread-pool to perform flow control 
on no-ack sessions

* Avoids spawning new thread for each state change
* Coalesce flow control tasks for no-ack sessions
* Change the lower prefetch threshold to be half of upper prefetch threshold 
when the same values for thresholds are specified

merged from trunk using
svn merge -c 1742900,1743383  ^/qpid/java/trunk
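
The coalescing idea itself, sketched generically in Python (illustrative only; 
the actual change is in the Java client, and the class below is hypothetical): 
submit flow-control work to a single-threaded executor and skip scheduling while 
a task is already pending, so a burst of suspend/resume state changes collapses 
into one unit of work rather than one new thread each.

{noformat}
# Generic sketch of "coalesce onto one worker" - not the Qpid Java client's code.
import threading
from concurrent.futures import ThreadPoolExecutor

class FlowController(object):
    def __init__(self):
        self._executor = ThreadPoolExecutor(max_workers=1)  # single worker thread
        self._lock = threading.Lock()
        self._pending = False

    def on_state_change(self):
        with self._lock:
            if self._pending:            # work already queued: coalesce
                return
            self._pending = True
        self._executor.submit(self._apply_flow_control)

    def _apply_flow_control(self):
        with self._lock:
            self._pending = False
        # ... suspend or resume the channel based on the current prefetch level ...

fc = FlowController()
for _ in range(1000):                    # a burst of state changes...
    fc.on_state_change()                 # ...queues a few tasks, not 1000 threads
{noformat}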

> Excessive threads creation when suspending/resuming flow
> 
>
> Key: QPID-7237
> URL: https://issues.apache.org/jira/browse/QPID-7237
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.24, qpid-java-6.0.2
>Reporter: Flavio Baronti
>Assignee: Keith Wall
> Fix For: qpid-java-6.0.3, qpid-java-6.1
>
> Attachments: QPID-7237.patch
>
>
> In high load situations, with a NO_ACKNOWLEDGE session, it is possible for 
> the client to create an excessive amount of threads to suspend/resume the 
> channel.
> I'm providing a patch to avoid creating a new thread if the previous thread 
> has not completed its operation. This patch appears to avoid the problem in 
> our environment. Notice we are using version 0.24, but the code is the same 
> in the latest version.






[GitHub] qpid-dispatch pull request: DISPATCH-320 - Added a new connector p...

2016-05-11 Thread ganeshmurthy
GitHub user ganeshmurthy opened a pull request:

https://github.com/apache/qpid-dispatch/pull/72

DISPATCH-320 - Added a new connector property called verifyHostName which will 
verify host name on SSL connections

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ganeshmurthy/qpid-dispatch DISPATCH-320

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-dispatch/pull/72.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #72


commit fb677c1f808041c963b2b5bfee6814c001f2809f
Author: Ganesh Murthy 
Date:   2016-05-11T14:51:13Z

DISPATCH-320 - Added a new connector property called verifyHostName which 
will verify host name on SSL connections







[jira] [Commented] (QPID-7237) Excessive threads creation when suspending/resuming flow

2016-05-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280233#comment-15280233
 ] 

ASF subversion and git services commented on QPID-7237:
---

Commit 1743383 from oru...@apache.org in branch 'java/trunk'
[ https://svn.apache.org/r1743383 ]

QPID-7237: Coalesce flow control tasks for no-ack sessions and change the lower 
prefetch threshold to be half of upper prefetch threshold when the same values 
for thresholds are specified

> Excessive threads creation when suspending/resuming flow
> 
>
> Key: QPID-7237
> URL: https://issues.apache.org/jira/browse/QPID-7237
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.24, qpid-java-6.0.2
>Reporter: Flavio Baronti
>Assignee: Keith Wall
> Fix For: qpid-java-6.0.3, qpid-java-6.1
>
> Attachments: QPID-7237.patch
>
>
> In high load situations, with a NO_ACKNOWLEDGE session, it is possible for 
> the client to create an excessive amount of threads to suspend/resume the 
> channel.
> I'm providing a patch to avoid creating a new thread if the previous thread 
> has not completed its operation. This patch appears to avoid the problem in 
> our environment. Notice we are using version 0.24, but the code is the same 
> in the latest version.






[jira] [Assigned] (QPID-7258) [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls connections

2016-05-11 Thread Lorenz Quack (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lorenz Quack reassigned QPID-7258:
--

Assignee: (was: Lorenz Quack)

> [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls 
> connections
> -
>
> Key: QPID-7258
> URL: https://issues.apache.org/jira/browse/QPID-7258
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Reporter: Lorenz Quack
> Attachments: 
> 0001-QPID-7258-Python-Client-for-AMQP-0-8.0-9-1-Perform-h.patch
>
>
> Currently, the Python client for AMQP 0-8...0-9-1 does not perform hostname 
> verification of TLS connections. This opens the possibility of 
> Man-in-the-Middle attacks.
> We should enhance the client to have this ability, make it configurable and 
> turn the feature on by default.
> It should respect hostnames from both CN and SANs, and support wildcards.
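
What the missing check amounts to, sketched with the Python standard library 
rather than the client's own code (the function and its skip_hostname_check 
flag below are illustrative; the attached patch introduces its own switch, the 
ssl_skip_hostname_check connection option mentioned in the commit on this 
issue):

{noformat}
# Sketch only: require the server certificate's CN/SAN entries (wildcards
# included) to match the host we dialled; not the qpid.messaging client's code.
import socket
import ssl

def open_verified_tls(host, port, ca_file, skip_hostname_check=False):
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.check_hostname = not skip_hostname_check   # verification on by default
    ctx.verify_mode = ssl.CERT_REQUIRED
    sock = socket.create_connection((host, port))
    # The TLS handshake fails with ssl.CertificateError if no CN/SAN matches host.
    return ctx.wrap_socket(sock, server_hostname=host)
{noformat}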



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (QPID-7258) [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls connections

2016-05-11 Thread Lorenz Quack (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lorenz Quack reassigned QPID-7258:
--

Assignee: Lorenz Quack

> [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls 
> connections
> -
>
> Key: QPID-7258
> URL: https://issues.apache.org/jira/browse/QPID-7258
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Reporter: Lorenz Quack
>Assignee: Lorenz Quack
> Attachments: 
> 0001-QPID-7258-Python-Client-for-AMQP-0-8.0-9-1-Perform-h.patch
>
>
> Currently, the Python client for AMQP 0-8...0-9-1 does not perform hostname 
> verification of tls connections. this opens the possibility of 
> Man-in-the-Middle attacks.
> We should enhance the client to have this ability, make it configurable and 
> turn the feature on by default.
> It should respect hostnames from both CN and SANs, and support wildcards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-7258) [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls connections

2016-05-11 Thread Lorenz Quack (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lorenz Quack updated QPID-7258:
---
Status: Reviewable  (was: In Progress)

> [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls 
> connections
> -
>
> Key: QPID-7258
> URL: https://issues.apache.org/jira/browse/QPID-7258
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Reporter: Lorenz Quack
>Assignee: Lorenz Quack
> Attachments: 
> 0001-QPID-7258-Python-Client-for-AMQP-0-8.0-9-1-Perform-h.patch
>
>
> Currently, the Python client for AMQP 0-8...0-9-1 does not perform hostname 
> verification of tls connections. this opens the possibility of 
> Man-in-the-Middle attacks.
> We should enhance the client to have this ability, make it configurable and 
> turn the feature on by default.
> It should respect hostnames from both CN and SANs, and support wildcards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7258) [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls connections

2016-05-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280201#comment-15280201
 ] 

ASF subversion and git services commented on QPID-7258:
---

Commit 1743379 from [~lorenz.quack] in branch 'qpid/trunk'
[ https://svn.apache.org/r1743379 ]

QPID-7258: [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of 
TLS connections

* hostname verification is performed by default
* introduce connection_option "ssl_skip_hostname_check" to disable this feature
* hostname verification will throw an ImportError on Python <2.6
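
(Purely as an illustration of the kind of check this patch adds - the patch
itself is for the Python client - the sketch below shows how the equivalent
CN/SAN matching against the hostname is enabled in Java's JSSE; it is not code
from this commit.)

{noformat}
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HostnameCheckExample
{
    public static SSLSocket connect(String host, int port) throws Exception
    {
        SSLSocket socket = (SSLSocket) SSLSocketFactory.getDefault().createSocket(host, port);
        SSLParameters params = socket.getSSLParameters();
        // Match the certificate's CN/SANs (wildcards included) against 'host'
        // during the handshake and fail on a mismatch.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        socket.setSSLParameters(params);
        socket.startHandshake();
        return socket;
    }
}
{noformat}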

> [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls 
> connections
> -
>
> Key: QPID-7258
> URL: https://issues.apache.org/jira/browse/QPID-7258
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Reporter: Lorenz Quack
> Attachments: 
> 0001-QPID-7258-Python-Client-for-AMQP-0-8.0-9-1-Perform-h.patch
>
>
> Currently, the Python client for AMQP 0-8...0-9-1 does not perform hostname 
> verification of tls connections. this opens the possibility of 
> Man-in-the-Middle attacks.
> We should enhance the client to have this ability, make it configurable and 
> turn the feature on by default.
> It should respect hostnames from both CN and SANs, and support wildcards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (DISPATCH-332) Heavy message loss happening with 2 interconnected routers

2016-05-11 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-332:
--
Fix Version/s: 0.6.0

> Heavy message loss happening with 2 interconnected routers
> --
>
> Key: DISPATCH-332
> URL: https://issues.apache.org/jira/browse/DISPATCH-332
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Qpid Proton 0.12.2 for drivers and 
> dependency for Qpid Dispatch, Hardware: 2 CPUs, 15 GB RAM, 30 GB HDD.
>Reporter: Vishal Sharda
>Assignee: Ted Ross
>Priority: Blocker
> Fix For: 0.6.0
>
>
> We are running two Dispatch Routers each configured for interior mode and the 
> second router's configuration includes a connector to the first router with 
> inter-router role.
> When we connect one sender to one router and one receiver to the other router 
> both listening to the same queue, we see all messages (20,000 in our test) 
> being transmitted.
> As soon as we start a second sender connected to the same router to which the 
> first sender connects and sending to the same queue, we start seeing heavy 
> message loss.  Around 20% of messages are lost with each sender attempting to 
> send 20,000 messages on its own (40,000 in total) and running in parallel 
> with the other sender.  The message loss happens regardless of the message 
> size.
> We tried with simple_send.py, simple_recv.py as well as send and recv C 
> executable files from Qpid Proton 0.12.2.
> We even saw a crash in the router with the following message:
> qdrouterd: /home/vsharda/qpid-dispatch/src/posix/threading.c:71: 
> sys_mutex_lock: Assertion `result == 0' failed.
> Aborted
> The message loss was observed with the 0.6.0 SNAPSHOT taken on May 9 as well 
> as the one taken on March 3 before the router core refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (DISPATCH-332) Heavy message loss happening with 2 interconnected routers

2016-05-11 Thread Ted Ross (JIRA)

 [ 
https://issues.apache.org/jira/browse/DISPATCH-332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross reassigned DISPATCH-332:
-

Assignee: Ted Ross

> Heavy message loss happening with 2 interconnected routers
> --
>
> Key: DISPATCH-332
> URL: https://issues.apache.org/jira/browse/DISPATCH-332
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Routing Engine
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Qpid Proton 0.12.2 for drivers and 
> dependency for Qpid Dispatch, Hardware: 2 CPUs, 15 GB RAM, 30 GB HDD.
>Reporter: Vishal Sharda
>Assignee: Ted Ross
>Priority: Blocker
>
> We are running two Dispatch Routers each configured for interior mode and the 
> second router's configuration includes a connector to the first router with 
> inter-router role.
> When we connect one sender to one router and one receiver to the other router 
> both listening to the same queue, we see all messages (20,000 in our test) 
> being transmitted.
> As soon as we start a second sender connected to the same router to which the 
> first sender connects and sending to the same queue, we start seeing heavy 
> message loss.  Around 20% of messages are lost with each sender attempting to 
> send 20,000 messages on its own (40,000 in total) and running in parallel 
> with the other sender.  The message loss happens regardless of the message 
> size.
> We tried with simple_send.py, simple_recv.py as well as send and recv C 
> executable files from Qpid Proton 0.12.2.
> We even saw a crash in the router with the following message:
> qdrouterd: /home/vsharda/qpid-dispatch/src/posix/threading.c:71: 
> sys_mutex_lock: Assertion `result == 0' failed.
> Aborted
> The message loss was observed with the 0.6.0 SNAPSHOT taken on May 9 as well 
> as the one taken on March 3 before the router core refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-7258) [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls connections

2016-05-11 Thread Lorenz Quack (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lorenz Quack updated QPID-7258:
---
Attachment: 0001-QPID-7258-Python-Client-for-AMQP-0-8.0-9-1-Perform-h.patch

> [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls 
> connections
> -
>
> Key: QPID-7258
> URL: https://issues.apache.org/jira/browse/QPID-7258
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Reporter: Lorenz Quack
> Attachments: 
> 0001-QPID-7258-Python-Client-for-AMQP-0-8.0-9-1-Perform-h.patch
>
>
> Currently, the Python client for AMQP 0-8...0-9-1 does not perform hostname 
> verification of tls connections. this opens the possibility of 
> Man-in-the-Middle attacks.
> We should enhance the client to have this ability, make it configurable and 
> turn the feature on by default.
> It should respect hostnames from both CN and SANs, and support wildcards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPID-7258) [Python Client for AMQP 0-8...0-9-1] Perform hostname verification of ssl/tls connections

2016-05-11 Thread Lorenz Quack (JIRA)
Lorenz Quack created QPID-7258:
--

 Summary: [Python Client for AMQP 0-8...0-9-1] Perform hostname 
verification of ssl/tls connections
 Key: QPID-7258
 URL: https://issues.apache.org/jira/browse/QPID-7258
 Project: Qpid
  Issue Type: Bug
  Components: Java Client
Reporter: Lorenz Quack


Currently, the Python client for AMQP 0-8...0-9-1 does not perform hostname 
verification of TLS connections. This opens the possibility of 
man-in-the-middle attacks.

We should enhance the client to have this ability, make it configurable and 
turn the feature on by default.
It should respect hostnames from both CN and SANs, and support wildcards.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7256) When invoke the method "connection.start" in a loop,reporting socket closed

2016-05-11 Thread Rob Godfrey (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280020#comment-15280020
 ] 

Rob Godfrey commented on QPID-7256:
---

Yeah - that is the "old" AMQP 1.0 client. There is a newer client (details 
linked to in my previous comment) which is based on the Qpid Proton framework.  
The latest release of the new JMS AMQP 1.0 client is 0.9.0 (see 
[here|https://qpid.apache.org/releases/qpid-jms-0.9.0/] ).
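
For anyone following along, a minimal sketch of connecting with the newer client
is below. The broker URL and credentials are illustrative assumptions; see the
release documentation linked above for the full set of options.

{noformat}
import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.qpid.jms.JmsConnectionFactory;

// Minimal sketch of connecting with the newer AMQP 1.0 JMS client (qpid-jms).
// The URL and credentials below are placeholders, not values from this issue.
public class NewClientExample
{
    public static void main(String[] args) throws JMSException
    {
        JmsConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        Connection connection = factory.createConnection("guest", "guest");
        try
        {
            connection.start();
            // ... create sessions, producers and consumers as with any JMS provider ...
        }
        finally
        {
            connection.close();
        }
    }
}
{noformat}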

> When invoke the method "connection.start" in a loop,reporting socket closed
> ---
>
> Key: QPID-7256
> URL: https://issues.apache.org/jira/browse/QPID-7256
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.32
> Environment: windows7、 jdk7
>Reporter: Steven
>
> I use the for loop,It will loop 1000 times,every time,I create a 
> connection,then send message,After the message has been sent,close the 
> connection.
> this loop executed it for some times,It will report error,as you can see the 
> screen shot



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7256) When invoke the method "connection.start" in a loop,reporting socket closed

2016-05-11 Thread Steven (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280008#comment-15280008
 ] 

Steven commented on QPID-7256:
--

I use the JMS AMQP 1.0 client, version 0.32, which I think is the latest version.

> When invoke the method "connection.start" in a loop,reporting socket closed
> ---
>
> Key: QPID-7256
> URL: https://issues.apache.org/jira/browse/QPID-7256
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.32
> Environment: windows7、 jdk7
>Reporter: Steven
>
> I use the for loop,It will loop 1000 times,every time,I create a 
> connection,then send message,After the message has been sent,close the 
> connection.
> this loop executed it for some times,It will report error,as you can see the 
> screen shot



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7216) [Java Broker, WMC] add new ManagedOperation to retrieve Connections less verbose

2016-05-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15280002#comment-15280002
 ] 

ASF subversion and git services commented on QPID-7216:
---

Commit 1743341 from oru...@apache.org in branch 'java/branches/6.0.x'
[ https://svn.apache.org/r1743341 ]

QPID-7216: Add descriptions for management operations on VirtualHost

> [Java Broker, WMC] add new ManagedOperation to retrieve Connections less 
> verbose
> 
>
> Key: QPID-7216
> URL: https://issues.apache.org/jira/browse/QPID-7216
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Lorenz Quack
>Assignee: Keith Wall
> Fix For: qpid-java-6.0.3
>
>
> The current version of getConnections retrieves all connections and their 
> sessions with full context on each. This is too verbose.
> To maintain backwards compatibility we should add a new ManagedOperation on 
> the 6.0.x branch to only retrieve the connections (without context if 
> possible).
> The WMC should be changed to use this instead of the current getConnections.
> Both getConnections and the new operation should be considered deprecated. 
> getConnections will be removed from v7 and the new operation will not be 
> introduced to v6.1 because there we use queries (QPID-7215).
> In addition the WMC connections table should be (client-side) paginated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7256) When invoke the method "connection.start" in a loop,reporting socket closed

2016-05-11 Thread Steven (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279980#comment-15279980
 ] 

Steven commented on QPID-7256:
--

Hello,

The Qpid Java Broker version is 6.0.1. I haven't tested against the latest Java 
Broker (6.0.2) yet. The Broker's qpid.log doesn't report any error, as you can 
see from the server log below.

2016-05-11 07:21:40,895 INFO  [main] (messagestore.created) - [Broker] 
[vh(/default)/ms(DerbyMessageStore)] MST-1001 : Created
2016-05-11 07:21:40,896 INFO  [main] (messagestore.store_location) - [Broker] 
[vh(/default)/ms(DerbyMessageStore)] MST-1002 : Store location : 
/var/qpidwork/default/messages
2016-05-11 07:21:40,907 INFO  [main] (messagestore.recovery_start) - [Broker] 
[vh(/default)/ms(DerbyMessageStore)] MST-1004 : Recovery Start
2016-05-11 07:21:40,915 INFO  [main] (transactionlog.recovery_start) - [Broker] 
[vh(/default)/ms(DerbyMessageStore)] TXN-1004 : Recovery Start
2016-05-11 07:21:40,935 INFO  [main] (transactionlog.recovery_complete) - 
[Broker] [vh(/default)/ms(DerbyMessageStore)] TXN-1006 : Recovery Complete
2016-05-11 07:21:40,935 INFO  [main] (messagestore.recovered) - [Broker] 
[vh(/default)/ms(DerbyMessageStore)] MST-1005 : Recovered 0 messages
2016-05-11 07:21:40,936 INFO  [main] (messagestore.recovery_complete) - 
[Broker] [vh(/default)/ms(DerbyMessageStore)] MST-1006 : Recovery Complete
2016-05-11 07:21:40,998 INFO  [main] (broker.listening) - [Broker] BRK-1002 : 
Starting : Listening on TCP port 5672
2016-05-11 07:21:41,568 INFO  [main] (broker.listening) - [Broker] BRK-1002 : 
Starting : Listening on SSL port 5673
2016-05-11 07:21:41,636 INFO  [main] (managementconsole.startup) - [Broker] 
MNG-1001 : Web Management Startup
2016-05-11 07:21:41,920 INFO  [main] (server.Server) - jetty-8.1.14.v20131031
2016-05-11 07:21:42,100 INFO  [main] (server.AbstractConnector) - Started 
SelectChannelConnector@0.0.0.0:8080
2016-05-11 07:21:42,103 INFO  [main] (managementconsole.listening) - [Broker] 
MNG-1002 : Starting : HTTP : Listening on TCP port 8080
2016-05-11 07:21:42,104 INFO  [main] (managementconsole.ready) - [Broker] 
MNG-1004 : Web Management Ready
2016-05-11 07:21:42,222 INFO  [main] (managementconsole.startup) - [Broker] 
MNG-1001 : JMX Management Startup
2016-05-11 07:21:42,356 INFO  [main] (managementconsole.listening) - [Broker] 
MNG-1002 : Starting : RMI Registry : Listening on TCP port 8999
2016-05-11 07:21:42,585 INFO  [main] (managementconsole.listening) - [Broker] 
MNG-1002 : Starting : JMX RMIConnectorServer : Listening on TCP port 9099
2016-05-11 07:21:42,586 INFO  [main] (managementconsole.ready) - [Broker] 
MNG-1004 : JMX Management Ready
2016-05-11 07:21:42,589 INFO  [main] (broker.ready) - [Broker] BRK-1004 : Qpid 
Broker Ready
2016-05-11 07:22:15,278 INFO  [HttpManagement-60] (managementconsole.open) - 
[mng:admin(/192.168.81.18:52021)] MNG-1007 : Open : User admin

I think it is the client that is reporting the error, not the Java Broker.

> When invoke the method "connection.start" in a loop,reporting socket closed
> ---
>
> Key: QPID-7256
> URL: https://issues.apache.org/jira/browse/QPID-7256
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.32
> Environment: windows7、 jdk7
>Reporter: Steven
>
> I use the for loop,It will loop 1000 times,every time,I create a 
> connection,then send message,After the message has been sent,close the 
> connection.
> this loop executed it for some times,It will report error,as you can see the 
> screen shot



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7216) [Java Broker, WMC] add new ManagedOperation to retrieve Connections less verbose

2016-05-11 Thread Keith Wall (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279971#comment-15279971
 ] 

Keith Wall commented on QPID-7216:
--

Alex - the changes look good. One nit-pick: add a description to the new 
operation that lets the user know the data returned is shallow.

> [Java Broker, WMC] add new ManagedOperation to retrieve Connections less 
> verbose
> 
>
> Key: QPID-7216
> URL: https://issues.apache.org/jira/browse/QPID-7216
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Lorenz Quack
>Assignee: Keith Wall
> Fix For: qpid-java-6.0.3
>
>
> The current version of getConnections retrieves all connections and their 
> sessions with full context on each. This is too verbose.
> To maintain backwards compatibility we should add a new ManagedOperation on 
> the 6.0.x branch to only retrieve the connections (without context if 
> possible).
> The WMC should be changed to use this instead of the current getConnections.
> Both getConnections and the new operation should be considered deprecated. 
> getConnections will be removed from v7 and the new operation will not be 
> introduced to v6.1 because there we use queries (QPID-7215).
> In addition the WMC connections table should be (client-side) paginated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1169) Deadlock in pn_messenger_send when using more than 2 publishers?

2016-05-11 Thread Frank Quinn (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-1169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279932#comment-15279932
 ] 

Frank Quinn commented on PROTON-1169:
-

Can anyone help here? Is this a known and expected limitation of point-to-point 
mode?

> Deadlock in pn_messenger_send when using more than 2 publishers?
> 
>
> Key: PROTON-1169
> URL: https://issues.apache.org/jira/browse/PROTON-1169
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c
>Affects Versions: 0.12.0
> Environment: Fedora 23, 64-bit
>Reporter: Frank Quinn
>
> As per 
> http://qpid.2158936.n2.nabble.com/Deadlock-in-pn-messenger-send-when-using-more-than-2-publishers-td7641239.html,
>  I think I have found an issue with qpid proton when running in point to 
> point mode. If running a single recv thread and 3 concurrent messenger links 
> are set up with it, it seems to cause a deadlock in the third 
> pn_messenger_send. All subsequent attempts to send will also hang (i.e. the 
> proton example send.c application).
> We found this behaviour in our own code for OpenMAMA, but I think we have a 
> valid recreation in native qpid proton code here too - see 
> https://github.com/OpenMAMA/OpenMAMA/files/200901/om-issue-153.zip (attached 
> as part of where it was discovered - 
> https://github.com/OpenMAMA/OpenMAMA/issues/153).
> If you compile and run that code on latest yum versions for Fedora 23 / qpid 
> proton, you'll get:
> {noformat}
> Creating the messengers
> Starting the messengers
> Starting listener thread
> pthread_create successful
> Creating message for sending
> Setting the subject for the message
> Setting the address and sending message to subscriber
> Sending from first publisher
> Sent from first publisher
> Recv got something
> Received message with subject 'First Publisher'
> Sending from second publisher
> Sent from second publisher
> Recv got something
> Received message with subject 'Second Publisher'
> Sending from third publisher
> {noformat}
> Then it hangs - you never get "Sent from third publisher".
> The application sets up one messenger to run on its own recv thread, then on 
> the main thread, it fires up 3 distinct messengers and attempts to send a 
> single message from each messenger, and the third one hangs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-7216) [Java Broker, WMC] add new ManagedOperation to retrieve Connections less verbose

2016-05-11 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-7216:
-
Status: Reviewable  (was: In Progress)

> [Java Broker, WMC] add new ManagedOperation to retrieve Connections less 
> verbose
> 
>
> Key: QPID-7216
> URL: https://issues.apache.org/jira/browse/QPID-7216
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Lorenz Quack
>Assignee: Alex Rudyy
> Fix For: qpid-java-6.0.3
>
>
> The current version of getConnections retrieves all connections and their 
> sessions with full context on each. This is too verbose.
> To maintain backwards compatibility we should add a new ManagedOperation on 
> the 6.0.x branch to only retrieve the connections (without context if 
> possible).
> The WMC should be changed to use this instead of the current getConnections.
> Both getConnections and the new operation should be considered deprecated. 
> getConnections will be removed from v7 and the new operation will not be 
> introduced to v6.1 because there we use queries (QPID-7215).
> In addition the WMC connections table should be (client-side) paginated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (QPID-7216) [Java Broker, WMC] add new ManagedOperation to retrieve Connections less verbose

2016-05-11 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy reassigned QPID-7216:


Assignee: Keith Wall  (was: Alex Rudyy)

> [Java Broker, WMC] add new ManagedOperation to retrieve Connections less 
> verbose
> 
>
> Key: QPID-7216
> URL: https://issues.apache.org/jira/browse/QPID-7216
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Lorenz Quack
>Assignee: Keith Wall
> Fix For: qpid-java-6.0.3
>
>
> The current version of getConnections retrieves all connections and their 
> sessions with full context on each. This is too verbose.
> To maintain backwards compatibility we should add a new ManagedOperation on 
> the 6.0.x branch to only retrieve the connections (without context if 
> possible).
> The WMC should be changed to use this instead of the current getConnections.
> Both getConnections and the new operation should be considered deprecated. 
> getConnections will be removed from v7 and the new operation will not be 
> introduced to v6.1 because there we use queries (QPID-7215).
> In addition the WMC connections table should be (client-side) paginated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (QPID-7216) [Java Broker, WMC] add new ManagedOperation to retrieve Connections less verbose

2016-05-11 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy reassigned QPID-7216:


Assignee: Alex Rudyy

> [Java Broker, WMC] add new ManagedOperation to retrieve Connections less 
> verbose
> 
>
> Key: QPID-7216
> URL: https://issues.apache.org/jira/browse/QPID-7216
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Lorenz Quack
>Assignee: Alex Rudyy
> Fix For: qpid-java-6.0.3
>
>
> The current version of getConnections retrieves all connections and their 
> sessions with full context on each. This is too verbose.
> To maintain backwards compatibility we should add a new ManagedOperation on 
> the 6.0.x branch to only retrieve the connections (without context if 
> possible).
> The WMC should be changed to use this instead of the current getConnections.
> Both getConnections and the new operation should be considered deprecated. 
> getConnections will be removed from v7 and the new operation will not be 
> introduced to v6.1 because there we use queries (QPID-7215).
> In addition the WMC connections table should be (client-side) paginated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7216) [Java Broker, WMC] add new ManagedOperation to retrieve Connections less verbose

2016-05-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279917#comment-15279917
 ] 

ASF subversion and git services commented on QPID-7216:
---

Commit 1743334 from oru...@apache.org in branch 'java/branches/6.0.x'
[ https://svn.apache.org/r1743334 ]

QPID-7216: [Java Broker, WMC] Add new ManagedOperation to retrieve connections 
without children and inherited context

> [Java Broker, WMC] add new ManagedOperation to retrieve Connections less 
> verbose
> 
>
> Key: QPID-7216
> URL: https://issues.apache.org/jira/browse/QPID-7216
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Lorenz Quack
> Fix For: qpid-java-6.0.3
>
>
> The current version of getConnections retrieves all connections and their 
> sessions with full context on each. This is too verbose.
> To maintain backwards compatibility we should add a new ManagedOperation on 
> the 6.0.x branch to only retrieve the connections (without context if 
> possible).
> The WMC should be changed to use this instead of the current getConnections.
> Both getConnections and the new operation should be considered deprecated. 
> getConnections will be removed from v7 and the new operation will not be 
> introduced to v6.1 because there we use queries (QPID-7215).
> In addition the WMC connections table should be (client-side) paginated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7237) Excessive threads creation when suspending/resuming flow

2016-05-11 Thread Keith Wall (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279833#comment-15279833
 ] 

Keith Wall commented on QPID-7237:
--

Flavio,

For the number of threads created to be excessive, I think you must be creating 
the NO_ACK session using JMS's default 
{{javax.jms.Connection#createSession(boolean, int)}}.  When the session is 
created in this way, the high and low watermarks of the underlying queue get set 
to the same value (default 500).  Having the two values set the same makes the 
{{FlowControlBlockingQueue}} rapidly oscillate the flow control state, causing 
the stream of threads.  If you use 
{{org.apache.qpid.jms.Connection#createSession(boolean, int, int, int)}} 
instead, and set the low value sensibly, e.g. 
{{org.apache.qpid.jms.Connection#createSession(false, 
org.apache.qpid.jms.Session#NO_ACKNOWLEDGE, 500, 250)}}, you should be able 
to achieve more efficient elastic behaviour with far fewer transitions.  This 
approach will work with the releases as they stand today.
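
To make that concrete, here is a small sketch. The four-argument 
{{createSession}} overload and {{org.apache.qpid.jms.Session#NO_ACKNOWLEDGE}} 
are the ones referred to above; the connection factory class and URL are 
assumptions used only for illustration.

{noformat}
import javax.jms.Connection;
import javax.jms.Session;

import org.apache.qpid.client.AMQConnectionFactory;

// Sketch: create a NO_ACKNOWLEDGE session with distinct prefetch watermarks
// (high 500, low 250) so flow is suspended at 500 prefetched messages and only
// resumed once the backlog drains to 250, rather than oscillating when both
// thresholds share the same value.
public class NoAckPrefetchExample
{
    public static void main(String[] args) throws Exception
    {
        // The factory class and connection URL are illustrative assumptions.
        Connection connection = new AMQConnectionFactory(
                "amqp://guest:guest@clientid/default?brokerlist='tcp://localhost:5672'")
                .createConnection();
        try
        {
            Session session = ((org.apache.qpid.jms.Connection) connection)
                    .createSession(false, org.apache.qpid.jms.Session.NO_ACKNOWLEDGE, 500, 250);
            // ... create consumers on 'session' and start the connection as usual ...
        }
        finally
        {
            connection.close();
        }
    }
}
{noformat}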



> Excessive threads creation when suspending/resuming flow
> 
>
> Key: QPID-7237
> URL: https://issues.apache.org/jira/browse/QPID-7237
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.24, qpid-java-6.0.2
>Reporter: Flavio Baronti
>Assignee: Keith Wall
> Fix For: qpid-java-6.0.3, qpid-java-6.1
>
> Attachments: QPID-7237.patch
>
>
> In high load situations, with a NO_ACKNOWLEDGE session, it is possible for 
> the client to create an excessive amount of threads to suspend/resume the 
> channel.
> I'm providing a patch to avoid creating a new thread if the previous thread 
> has not completed its operation. This patch appears to avoid the problem in 
> our environment. Notice we are using version 0.24, but the code is the same 
> in the latest version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Comment Edited] (QPID-7256) When invoke the method "connection.start" in a loop,reporting socket closed

2016-05-11 Thread Rob Godfrey (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279758#comment-15279758
 ] 

Rob Godfrey edited comment on QPID-7256 at 5/11/16 8:23 AM:


Also, have you tried using the newer JMS AMQP 1.0 client (details 
[here|https://qpid.apache.org/components/jms/])?  The older client you appear 
to be using has been deprecated and is no longer being actively developed.  
(Obviously this won't help if the issue you are facing is on the broker rather 
than the client).


was (Author: rgodfrey):
Also, have you tried using the newer JMS AMQP 1.0 client (details 
[https://qpid.apache.org/components/jms/|here])?  The older client you appear 
to be using has been deprecated and is no longer being actively developed.  
(Obviously this won't help if the issue you are facing is on the broker rather 
than the client).

> When invoke the method "connection.start" in a loop,reporting socket closed
> ---
>
> Key: QPID-7256
> URL: https://issues.apache.org/jira/browse/QPID-7256
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.32
> Environment: windows7、 jdk7
>Reporter: Steven
>
> I use the for loop,It will loop 1000 times,every time,I create a 
> connection,then send message,After the message has been sent,close the 
> connection.
> this loop executed it for some times,It will report error,as you can see the 
> screen shot



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7256) When invoke the method "connection.start" in a loop,reporting socket closed

2016-05-11 Thread Rob Godfrey (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279758#comment-15279758
 ] 

Rob Godfrey commented on QPID-7256:
---

Also, have you tried using the newer JMS AMQP 1.0 client (details 
[here|https://qpid.apache.org/components/jms/])?  The older client you appear 
to be using has been deprecated and is no longer being actively developed.  
(Obviously this won't help if the issue you are facing is on the broker rather 
than the client).

> When invoke the method "connection.start" in a loop,reporting socket closed
> ---
>
> Key: QPID-7256
> URL: https://issues.apache.org/jira/browse/QPID-7256
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.32
> Environment: windows7、 jdk7
>Reporter: Steven
>
> I use the for loop,It will loop 1000 times,every time,I create a 
> connection,then send message,After the message has been sent,close the 
> connection.
> this loop executed it for some times,It will report error,as you can see the 
> screen shot



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7256) When invoke the method "connection.start" in a loop,reporting socket closed

2016-05-11 Thread Keith Wall (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279708#comment-15279708
 ] 

Keith Wall commented on QPID-7256:
--

Hi Steven

What version of the Java Broker are you using?  Are you able to reproduce from 
the latest Java Broker release (6.0.2)?
What does that Broker's qpid.log say when you see the failure on the client 
side?

Keith


> When invoke the method "connection.start" in a loop,reporting socket closed
> ---
>
> Key: QPID-7256
> URL: https://issues.apache.org/jira/browse/QPID-7256
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Client
>Affects Versions: 0.32
> Environment: windows7、 jdk7
>Reporter: Steven
>
> I use the for loop,It will loop 1000 times,every time,I create a 
> connection,then send message,After the message has been sent,close the 
> connection.
> this loop executed it for some times,It will report error,as you can see the 
> screen shot



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org