[jira] [Commented] (DISPATCH-451) [AMQP] Hard coded session capacity should be configurable

2016-10-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556288#comment-15556288
 ] 

ASF subversion and git services commented on DISPATCH-451:
--

Commit c248a34f3d2da46bb7e7e67ccbd04e2340b69f15 in qpid-dispatch's branch 
refs/heads/crolke-DISPATCH-451 from [~chug]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=c248a34 ]

DISPATCH-451: Describe policy setting priority over listener/connector setting


> [AMQP] Hard coded session capacity should be configurable
> -
>
> Key: DISPATCH-451
> URL: https://issues.apache.org/jira/browse/DISPATCH-451
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 0.6.0
>Reporter: Chuck Rolke
>
> In container.c and policy.c
> {noformat}
> pn_session_set_incoming_capacity(link->pn_sess, 100);
> {noformat}
> could be improved.
> Policy provides settable values for Open/maxFrameSize, Open/maxSessions, 
> Begin/incoming_window, Attach/maxMessageSize. Configuration settings for 
> these are also desirable.
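As a rough illustration of the requested change, here is a minimal C sketch.
pn_session_set_incoming_capacity() is the Proton call quoted above;
qd_config_session_window() is a hypothetical accessor standing in for a
listener/connector or policy value:

{noformat}
#include <proton/session.h>

/* Hypothetical lookup of a configured session window; in the real fix this
 * value would come from listener/connector configuration or from policy. */
extern size_t qd_config_session_window(void);

static void setup_session(pn_session_t *ssn)
{
    /* was hard-coded: pn_session_set_incoming_capacity(ssn, 100); */
    pn_session_set_incoming_capacity(ssn, qd_config_session_window());
}
{noformat}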






[jira] [Commented] (DISPATCH-451) [AMQP] Hard coded session capacity should be configurable

2016-10-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556287#comment-15556287
 ] 

ASF subversion and git services commented on DISPATCH-451:
--

Commit 5f4e4a6f2a0dc326bf51823036ab8e6bad9df063 in qpid-dispatch's branch 
refs/heads/crolke-DISPATCH-451 from [~chug]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=5f4e4a6 ]

DISPATCH-451: Allow configurable maxSessions and maxSessionWindow
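For context, a sketch of what such configuration might look like in
qdrouterd.conf. The attribute names come from the commit message above; their
placement on the listener entity and the values shown are assumptions:

{noformat}
listener {
    host: 0.0.0.0
    port: amqp
    maxSessions: 100           # assumed value: sessions allowed per connection
    maxSessionWindow: 1048576  # assumed value: incoming session capacity in bytes
}
{noformat}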


> [AMQP] Hard coded session capacity should be configurable
> -
>
> Key: DISPATCH-451
> URL: https://issues.apache.org/jira/browse/DISPATCH-451
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 0.6.0
>Reporter: Chuck Rolke
>
> In container.c and policy.c
> {noformat}
> pn_session_set_incoming_capacity(link->pn_sess, 100);
> {noformat}
> could be improved.
> Policy provides settable values for Open/maxFrameSize, Open/maxSessions, 
> Begin/incoming_window, Attach/maxMessageSize. Configuration settings for 
> these are also desirable.






[jira] [Commented] (DISPATCH-451) [AMQP] Hard coded session capacity should be configurable

2016-10-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556289#comment-15556289
 ] 

ASF subversion and git services commented on DISPATCH-451:
--

Commit f1dbfbf6dd73b4068440f235be38906944c35b59 in qpid-dispatch's branch 
refs/heads/crolke-DISPATCH-451 from [~chug]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=f1dbfbf ]

DISPATCH-451: Incorporate review comments; add self tests


> [AMQP] Hard coded session capacity should be configurable
> -
>
> Key: DISPATCH-451
> URL: https://issues.apache.org/jira/browse/DISPATCH-451
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 0.6.0
>Reporter: Chuck Rolke
>
> In container.c and policy.c
> {noformat}
> pn_session_set_incoming_capacity(link->pn_sess, 100);
> {noformat}
> could be improved.
> Policy provides settable values for Open/maxFrameSize, Open/maxSessions, 
> Begin/incoming_window, Attach/maxMessageSize. Configuration settings for 
> these are also desirable.






[jira] [Commented] (DISPATCH-337) Huge memory leaks in Qpid Dispatch router

2016-10-07 Thread Vishal Sharda (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556243#comment-15556243
 ] 

Vishal Sharda commented on DISPATCH-337:


We built with the default setting "RelWithDebInfo", which should have included 
the debug information.

CPU usage per thread is very low.

vsharda@millennium-qpid-deploy-lnp-1-5129:~$ top -Hbcd 5 | grep qdrouterd
14213 vsharda   20   0   11132   1624   1492 S  0.0  0.0   0:00.00 grep 
qdrouterd
25467 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.34 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25482 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7   0:39.50 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25493 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  20:02.09 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25494 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.65 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25495 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:55.10 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25493 qserv 20   0 2524848 2.187g   8420 S  0.2  3.7  20:02.10 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
14213 vsharda   20   0   11136   1624   1492 S  0.0  0.0   0:00.00 grep 
qdrouterd
25467 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.34 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25482 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7   0:39.50 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25494 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.65 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25495 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:55.10 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25494 qserv 20   0 2524848 2.187g   8420 S  0.2  3.7  19:51.66 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
14213 vsharda   20   0   11136   1624   1492 S  0.0  0.0   0:00.00 grep 
qdrouterd
25467 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.34 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25482 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7   0:39.50 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25493 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  20:02.10 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25495 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:55.10 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25493 qserv 20   0 2524848 2.187g   8420 S  0.2  3.7  20:02.11 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25495 qserv 20   0 2524848 2.187g   8420 S  0.2  3.7  19:55.11 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
14213 vsharda   20   0   11136   1624   1492 S  0.0  0.0   0:00.00 grep 
qdrouterd
25467 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.34 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25482 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7   0:39.50 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25494 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.66 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25467 qserv 20   0 2524848 2.187g   8420 S  0.2  3.7  19:51.35 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25494 qserv 20   0 2524848 2.187g   8420 S  0.2  3.7  19:51.67 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
14213 vsharda   20   0   11136   1624   1492 S  0.0  0.0   0:00.00 grep 
qdrouterd
25482 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7   0:39.50 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25493 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  20:02.11 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25495 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:55.11 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25493 qserv 20   0 2524848 2.187g   8420 S  0.2  3.7  20:02.12 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
14213 vsharda   20   0   11136   1624   1492 S  0.0  0.0   0:00.00 grep 
qdrouterd
25467 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.35 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25482 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7   0:39.50 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25494 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:51.67 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+
25495 qserv 20   0 2524848 2.187g   8420 S  0.0  3.7  19:55.11 qdrouterd -c 
/x/web/LIVE/switch-dr-network/configurator+

The memory footprint is increasing steadily.

qdstat failed to get a response within 120 seconds:

vsharda@millennium-qpid-deploy-lnp-2-7131:/$ qdstat -cb 10.25.171.242 -t 120
Timeout: Connection amqp://10.25.171.242:amqp/$management timed out: Opening 
connection



> Huge memory leaks in Qpid Dispatch router

[jira] [Commented] (DISPATCH-337) Huge memory leaks in Qpid Dispatch router

2016-10-07 Thread Ted Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556229#comment-15556229
 ] 

Ted Ross commented on DISPATCH-337:
---

I believe that the qdstat program is timing out waiting for a response from the 
router.  You can increase the timeout period using the -t option.  It's 
possible that with a longer timeout, you will eventually get a response.
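For example:

{noformat}
qdstat -b 10.25.171.242 -c -t 300
{noformat}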

I believe there is a problem with the router-core thread.  Either it is very 
busy, it is spinning, or it is stopped.  The CPU usage will shed light on this 
question.

-Ted

> Huge memory leaks in Qpid Dispatch router
> -
>
> Key: DISPATCH-337
> URL: https://issues.apache.org/jira/browse/DISPATCH-337
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Apache Qpid Proton 0.12.2 for drivers and 
> dependencies, Hardware: 2 CPUs, 15 GB RAM, 60 GB HDD on 2 separate machines
>Reporter: Vishal Sharda
>Priority: Critical
> Attachments: LNP-1_Huge_memory.png, LNP-1_Leak_starts.png, 
> LNP-1_not_accepting_connections.png, Memory_usage_first_run_no_SSL.png, 
> Memory_usage_subsequent_run_no_SSL.png, Rapid_perm_memory_increase.png, 
> Subsequent_memory_increase.png, Tim-Router-3-huge-memory-usage.png, 
> Tim_Router_3.png, Tim_Routers_3_and_6_further_leaks.png, config1.conf, 
> config2.conf, val2_receiver.txt, val2_sender.txt
>
>
> Valgrind shows huge memory leaks while running 2 interconnected routers with 
> 2 parallel senders connected to the one router and 2 parallel receivers 
> connected to the other router.
> The CRYPTO leak that is coming from Qpid Proton 0.12.2 is already fixed here:
> https://issues.apache.org/jira/browse/PROTON-1115
> However, the rest of the leaks are from qdrouterd.






[jira] [Commented] (DISPATCH-337) Huge memory leaks in Qpid Dispatch router

2016-10-07 Thread Ted Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556214#comment-15556214
 ] 

Ted Ross commented on DISPATCH-337:
---

Did you build your executable without debug symbols?  Without them, the pstack 
isn't useful.

What is the CPU usage of the router process?  Can you get the CPU for each 
thread?

Does the memory footprint rise steadily or does it burst?  Is it somewhere in 
the 12 MB per hour range?


> Huge memory leaks in Qpid Dispatch router
> -
>
> Key: DISPATCH-337
> URL: https://issues.apache.org/jira/browse/DISPATCH-337
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Apache Qpid Proton 0.12.2 for drivers and 
> dependencies, Hardware: 2 CPUs, 15 GB RAM, 60 GB HDD on 2 separate machines
>Reporter: Vishal Sharda
>Priority: Critical
> Attachments: LNP-1_Huge_memory.png, LNP-1_Leak_starts.png, 
> LNP-1_not_accepting_connections.png, Memory_usage_first_run_no_SSL.png, 
> Memory_usage_subsequent_run_no_SSL.png, Rapid_perm_memory_increase.png, 
> Subsequent_memory_increase.png, Tim-Router-3-huge-memory-usage.png, 
> Tim_Router_3.png, Tim_Routers_3_and_6_further_leaks.png, config1.conf, 
> config2.conf, val2_receiver.txt, val2_sender.txt
>
>
> Valgrind shows huge memory leaks while running 2 interconnected routers with 
> 2 parallel senders connected to the one router and 2 parallel receivers 
> connected to the other router.
> The CRYPTO leak that is coming from Qpid Proton 0.12.2 is already fixed here:
> https://issues.apache.org/jira/browse/PROTON-1115
> However, the rest of the leaks are from qdrouterd.






[jira] [Commented] (DISPATCH-337) Huge memory leaks in Qpid Dispatch router

2016-10-07 Thread Vishal Sharda (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556181#comment-15556181
 ] 

Vishal Sharda commented on DISPATCH-337:


vsharda@millennium-qpid-deploy-lnp-2-7131:/$ PN_TRACE_FRM=1 qdstat -cb 
10.24.170.251
[0xfbf6f0]:  -> SASL
[0xfbf6f0]:  <- SASL
[0xfbf6f0]:0 <- @sasl-mechanisms(64) 
[sasl-server-mechanisms=@PN_SYMBOL[:ANONYMOUS]]
[0xfbf6f0]:0 -> @sasl-init(65) [mechanism=:ANONYMOUS, 
initial-response=b"anonymous@millennium-qpid-deploy-lnp-2-7131"]
[0xfbf6f0]:0 <- @sasl-outcome(68) [code=0]
[0xfbf6f0]:  -> AMQP
[0xfbf6f0]:0 -> @open(16) [container-id="2034a069-e072-46f3-ac55-3c76fbb692ca", 
hostname="10.24.170.251", channel-max=32767]
[0xfbf6f0]:  <- AMQP
[0xfbf6f0]:0 <- @open(16) [container-id="Router.A.0", max-frame-size=16384, 
channel-max=32767, idle-time-out=8000, offered-capabilities=:"ANONYMOUS-RELAY", 
properties={:product="qpid-dispatch-router", :version="0.6.0"}]
[0xfbf6f0]:0 -> @begin(17) [next-outgoing-id=0, incoming-window=2147483647, 
outgoing-window=2147483647]
[0xfbf6f0]:0 -> @attach(18) 
[name="2034a069-e072-46f3-ac55-3c76fbb692ca-$management", handle=0, role=false, 
snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, timeout=0, 
dynamic=false], target=@target(41) [address="$management", durable=0, 
timeout=0, dynamic=false], initial-delivery-count=0]
[0xfbf6f0]:0 <- @begin(17) [remote-channel=0, next-outgoing-id=0, 
incoming-window=61, outgoing-window=2147483647]
[0xfbf6f0]:0 -> (EMPTY FRAME)
[0xfbf6f0]:0 -> (EMPTY FRAME)
Timeout: Connection amqp://10.24.170.251:amqp/$management timed out: Opening 
link 2034a069-e072-46f3-ac55-3c76fbb692ca-$management


No symbols are found while running pstack:


25467: qdrouterd -c /x/web/LIVE/switch-dr-network/configurator/qdrouterd.conf
(No symbols found)
0x7f4197612d3d:  (2, 4023a0, 7fff7c15271f, 7fff7c15271f, 4023a0, 401a67) + 
800085032b50
0x7f41987163b0:  (1, 12f3f60, 30, 31, 7f4184123050, 7f41801207e0) + 
527e0
0x10003:  (1267c80, 1278990, 1267ca0, 11d8120, 124dd40, 401810) + 
ffdddcf0
0x7f410004:  (12bee70, 7f4195605a50, 12c1a90, 12c1b50, 7f4198b4d580, 4) 
+ 90
0x01183850:  (1267c80, 1278990, 1267ca0, 11d8120, 124dd40, 401810) + 
ffdddcf0
0x7f410004:  (12bee70, 7f4195605a50, 12c1a90, 12c1b50, 7f4198b4d580, 4) 
+ 90
0x01183850:  (1267c80, 1278990, 1267ca0, 11d8120, 124dd40, 401810) + 
ffdddcf0
0x7f410004:  (12bee70, 7f4195605a50, 12c1a90, 12c1b50, 7f4198b4d580, 4) 
+ 90
0x01183850:  (1267c80, 1278990, 1267ca0, 11d8120, 124dd40, 401810) + 
ffdddcf0
0x7f410004:  (12bee70, 7f4195605a50, 12c1a90, 12c1b50, 7f4198b4d580, 4) 
+ 90
0x01183850:  (1267c80, 1278990, 1267ca0, 11d8120, 124dd40, 401810) + 
ffdddcf0


> Huge memory leaks in Qpid Dispatch router
> -
>
> Key: DISPATCH-337
> URL: https://issues.apache.org/jira/browse/DISPATCH-337
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Apache Qpid Proton 0.12.2 for drivers and 
> dependencies, Hardware: 2 CPUs, 15 GB RAM, 60 GB HDD on 2 separate machines
>Reporter: Vishal Sharda
>Priority: Critical
> Attachments: LNP-1_Huge_memory.png, LNP-1_Leak_starts.png, 
> LNP-1_not_accepting_connections.png, Memory_usage_first_run_no_SSL.png, 
> Memory_usage_subsequent_run_no_SSL.png, Rapid_perm_memory_increase.png, 
> Subsequent_memory_increase.png, Tim-Router-3-huge-memory-usage.png, 
> Tim_Router_3.png, Tim_Routers_3_and_6_further_leaks.png, config1.conf, 
> config2.conf, val2_receiver.txt, val2_sender.txt
>
>
> Valgrind shows huge memory leaks while running 2 interconnected routers with 
> 2 parallel senders connected to the one router and 2 parallel receivers 
> connected to the other router.
> The CRYPTO leak that is coming from Qpid Proton 0.12.2 is already fixed here:
> https://issues.apache.org/jira/browse/PROTON-1115
> However, the rest of the leaks are from qdrouterd.






[jira] [Commented] (DISPATCH-337) Huge memory leaks in Qpid Dispatch router

2016-10-07 Thread Ted Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556161#comment-15556161
 ] 

Ted Ross commented on DISPATCH-337:
---

It appears that the router is isolated and unreachable via AMQP.  I have two 
additional suggestions:

It would be helpful to know what's going on with the failed connections.  Can 
you get a trace from a client that tries and fails to connect to the router?

  PN_TRACE_FRM=1 qdstat -b <router-address> -c

Also, please run "pstack " to get a stack trace from the running 
router.


> Huge memory leaks in Qpid Dispatch router
> -
>
> Key: DISPATCH-337
> URL: https://issues.apache.org/jira/browse/DISPATCH-337
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Apache Qpid Proton 0.12.2 for drivers and 
> dependencies, Hardware: 2 CPUs, 15 GB RAM, 60 GB HDD on 2 separate machines
>Reporter: Vishal Sharda
>Priority: Critical
> Attachments: LNP-1_Huge_memory.png, LNP-1_Leak_starts.png, 
> LNP-1_not_accepting_connections.png, Memory_usage_first_run_no_SSL.png, 
> Memory_usage_subsequent_run_no_SSL.png, Rapid_perm_memory_increase.png, 
> Subsequent_memory_increase.png, Tim-Router-3-huge-memory-usage.png, 
> Tim_Router_3.png, Tim_Routers_3_and_6_further_leaks.png, config1.conf, 
> config2.conf, val2_receiver.txt, val2_sender.txt
>
>
> Valgrind shows huge memory leaks while running 2 interconnected routers with 
> 2 parallel senders connected to the one router and 2 parallel receivers 
> connected to the other router.
> The CRYPTO leak that is coming from Qpid Proton 0.12.2 is already fixed here:
> https://issues.apache.org/jira/browse/PROTON-1115
> However, the rest of the leaks are from qdrouterd.






[jira] [Commented] (DISPATCH-337) Huge memory leaks in Qpid Dispatch router

2016-10-07 Thread Vishal Sharda (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556144#comment-15556144
 ] 

Vishal Sharda commented on DISPATCH-337:


1. The router is reachable and the logs show "Accepting incoming connection ..." 
from one of the client machines.  It is not clear what causes the connection 
failures.

2. Yes, there are a few TCP connections to the bad router:

vsharda@millennium-qpid-deploy-lnp-1-5129:~$ netstat -at | grep 5670
tcp     0  0 *:5670                  *:*                     LISTEN
tcp     0  0 millennium-qpid-de:5670 userstage114828.c:36758 ESTABLISHED
tcp     0  0 localhost:5670          localhost:33894         ESTABLISHED
tcp     0  0 millennium-qpid-de:5670 userstage118169.c:38132 ESTABLISHED
tcp     0  0 millennium-qpid-de:5670 10.22.99.81:40080       ESTABLISHED
tcp     0  0 millennium-qpid-de:5670 10.22.102.215:50594     ESTABLISHED
tcp6    0  0 localhost:33894         localhost:5670          ESTABLISHED

3. Other routers say that the bad router (Router.A.0) does not exist in the 
network:

vsharda@millennium-qpid-deploy-lnp-2-7131:/$ qdstat -nv
Routers in the Network
  router-id   next-hop  link  cost  neighbors                                   valid-origins
  ===========================================================================================
  Router.A.1  (self)    -           ['Router.A.2', 'Router.A.3', 'Router.A.4']  []
  Router.A.2  -         1     1     ['Router.A.1', 'Router.A.3', 'Router.A.4']  []
  Router.A.3  -         2     1     ['Router.A.1', 'Router.A.2', 'Router.A.4']  []
  Router.A.4  -         3     1     ['Router.A.1', 'Router.A.2', 'Router.A.3']  []

4. On 2016-10-02 13:28:30, it was 620 MB.  On 2016-10-07 13:16:00, it is 2.127 
GB.

5. The bad router cannot be managed via the good routers:

vsharda@millennium-qpid-deploy-lnp-2-7131:/$ qdstat -b 10.24.170.251 -c
Timeout: Connection amqp://10.24.170.251:amqp/$management timed out: Opening 
link 9809b66a-1f2d-4952-92f1-c6c5c8b35680-$management

> Huge memory leaks in Qpid Dispatch router
> -
>
> Key: DISPATCH-337
> URL: https://issues.apache.org/jira/browse/DISPATCH-337
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Apache Qpid Proton 0.12.2 for drivers and 
> dependencies, Hardware: 2 CPUs, 15 GB RAM, 60 GB HDD on 2 separate machines
>Reporter: Vishal Sharda
>Priority: Critical
> Attachments: LNP-1_Huge_memory.png, LNP-1_Leak_starts.png, 
> LNP-1_not_accepting_connections.png, Memory_usage_first_run_no_SSL.png, 
> Memory_usage_subsequent_run_no_SSL.png, Rapid_perm_memory_increase.png, 
> Subsequent_memory_increase.png, Tim-Router-3-huge-memory-usage.png, 
> Tim_Router_3.png, Tim_Routers_3_and_6_further_leaks.png, config1.conf, 
> config2.conf, val2_receiver.txt, val2_sender.txt
>
>
> Valgrind shows huge memory leaks while running 2 interconnected routers with 
> 2 parallel senders connected to the one router and 2 parallel receivers 
> connected to the other router.
> The CRYPTO leak that is coming from Qpid Proton 0.12.2 is already fixed here:
> https://issues.apache.org/jira/browse/PROTON-1115
> However, the rest of the leaks are from qdrouterd.






[jira] [Comment Edited] (DISPATCH-337) Huge memory leaks in Qpid Dispatch router

2016-10-07 Thread Ted Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556024#comment-15556024
 ] 

Ted Ross edited comment on DISPATCH-337 at 10/7/16 7:24 PM:


A couple of questions:

# In what way is the bad router rejecting connections?  port-unreachable? 
authentication failure? other?
# Are there open TCP connections to the bad router?  How many?
# What do the other routers say about the bad router?  Do they have valid 
connections to the bad router?  Is the bad router represented in qdstat -nv on 
one of the good routers?
# What is the rate of increase in memory use?
# Can you manage the bad router via one of the good routers? (i.e. qdstat -b 
<good-router-address> -r <bad-router-id> -c)



was (Author: tedross):
A couple of questions:

# In what way is the bad router rejecting connections?  port-unreachable? 
authentication failure? other?

# Are there open TCP connections to the bad router?  How many?

# What do the other routers say about the bad router?  Do they have valid 
connections to the bad router?  Is the bad router represented in qdstat -nv on 
one of the good routers?

# What is the rate of increase in memory use?

# Can you manage the bad router via one of the good routers? (i.e. qdstat -b 
<good-router-address> -r <bad-router-id> -c)


> Huge memory leaks in Qpid Dispatch router
> -
>
> Key: DISPATCH-337
> URL: https://issues.apache.org/jira/browse/DISPATCH-337
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Apache Qpid Proton 0.12.2 for drivers and 
> dependencies, Hardware: 2 CPUs, 15 GB RAM, 60 GB HDD on 2 separate machines
>Reporter: Vishal Sharda
>Priority: Critical
> Attachments: LNP-1_Huge_memory.png, LNP-1_Leak_starts.png, 
> LNP-1_not_accepting_connections.png, Memory_usage_first_run_no_SSL.png, 
> Memory_usage_subsequent_run_no_SSL.png, Rapid_perm_memory_increase.png, 
> Subsequent_memory_increase.png, Tim-Router-3-huge-memory-usage.png, 
> Tim_Router_3.png, Tim_Routers_3_and_6_further_leaks.png, config1.conf, 
> config2.conf, val2_receiver.txt, val2_sender.txt
>
>
> Valgrind shows huge memory leaks while running 2 interconnected routers with 
> 2 parallel senders connected to the one router and 2 parallel receivers 
> connected to the other router.
> The CRYPTO leak that is coming from Qpid Proton 0.12.2 is already fixed here:
> https://issues.apache.org/jira/browse/PROTON-1115
> However, the rest of the leaks are from qdrouterd.






[jira] [Commented] (DISPATCH-337) Huge memory leaks in Qpid Dispatch router

2016-10-07 Thread Ted Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/DISPATCH-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15556024#comment-15556024
 ] 

Ted Ross commented on DISPATCH-337:
---

A couple of questions:

# In what way is the bad router rejecting connections?  port-unreachable? 
authentication failure? other?

# Are there open TCP connections to the bad router?  How many?

# What do the other routers say about the bad router?  Do they have valid 
connections to the bad router?  Is the bad router represented in qdstat -nv on 
one of the good routers?

# What is the rate of increase in memory use?

# Can you manage the bad router via one of the good routers? (i.e. qdstat -b 
<good-router-address> -r <bad-router-id> -c)


> Huge memory leaks in Qpid Dispatch router
> -
>
> Key: DISPATCH-337
> URL: https://issues.apache.org/jira/browse/DISPATCH-337
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 0.6.0
> Environment: Debian 8.3, Apache Qpid Proton 0.12.2 for drivers and 
> dependencies, Hardware: 2 CPUs, 15 GB RAM, 60 GB HDD on 2 separate machines
>Reporter: Vishal Sharda
>Priority: Critical
> Attachments: LNP-1_Huge_memory.png, LNP-1_Leak_starts.png, 
> LNP-1_not_accepting_connections.png, Memory_usage_first_run_no_SSL.png, 
> Memory_usage_subsequent_run_no_SSL.png, Rapid_perm_memory_increase.png, 
> Subsequent_memory_increase.png, Tim-Router-3-huge-memory-usage.png, 
> Tim_Router_3.png, Tim_Routers_3_and_6_further_leaks.png, config1.conf, 
> config2.conf, val2_receiver.txt, val2_sender.txt
>
>
> Valgrind shows huge memory leaks while running 2 interconnected routers with 
> 2 parallel senders connected to the one router and 2 parallel receivers 
> connected to the other router.
> The CRYPTO leak that is coming from Qpid Proton 0.12.2 is already fixed here:
> https://issues.apache.org/jira/browse/PROTON-1115
> However, the rest of the leaks are from qdrouterd.






[jira] [Commented] (QPIDIT-41) Rearrange test directory structure to better organize tests and shims

2016-10-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPIDIT-41?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1603#comment-1603
 ] 

ASF subversion and git services commented on QPIDIT-41:
---

Commit 514bac7511be39c2ceb25eb31c3234ccd9c1ae74 in qpid-interop-test's branch 
refs/heads/master from [~kpvdr]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-interop-test.git;h=514bac7 ]

QPIDIT-41: Reorganized dir structure and tidied up the test code. Copied the 
old jms_messages_test to a new jms_hdrs_props_test and simplified the 
jms_messages_test to include only message body tests. Simplified parameters 
sent to shims to the same format for all shims, only one JSON string to and 
returned from receiver shim.


> Rearrange test directory structure to better organize tests and shims
> -
>
> Key: QPIDIT-41
> URL: https://issues.apache.org/jira/browse/QPIDIT-41
> Project: Apache QPID IT
>  Issue Type: Improvement
>Reporter: Kim van der Riet
>Assignee: Kim van der Riet
>
> The current directory structure for qpid-interop-test needs some improvements 
> to make it more consistent and easier to manage shims and tests:
> * All tests should reside in the same directory 
> (src/python/qpid-interop-test), and the test name is the key to the shims 
> directory structure for finding the Sender and Receiver shims.
> * JMS shims should be located in a directory named identically to the name of 
> the test for which they are written, and located in 
> shims/qpid-jms/src/main/java/org/apache/qpid/qpid_interop_test. There are only 
> two shims, called Sender.java and Receiver.java. The Java package is 
> consequently "org.apache.qpid.qpid_interop_test.<test-name>".
> * Python shims should be located in a directory named identically to the name 
> of the test for which they are written, and located in 
> shims/qpid-proton-python/src. There are only two shims, called Sender.py and 
> Receiver.py.
> * C++ shims should be located in a directory named identically to the name of 
> the test for which they are written, and located in 
> shims/qpid-proton-cpp/src/qpidit. There are only two shims, called 
> Sender.{hpp,cpp} and Receiver.{hpp,cpp}. The shim namespace is consequently 
> qpidit.<test-name>.
> {noformat}
> qpid-interop-test
>   +-shims
>   |   +-qpid-jms
>   |   |   +-src
>   |   |       +-main
>   |   |           +-java
>   |   |               +-org
>   |   |                   +-apache
>   |   |                       +-qpid
>   |   |                           +-qpid-interop-test
>   |   |                               +-<test-1>
>   |   |                               |   +-Receiver.java
>   |   |                               |   +-Sender.java
>   |   |                               +-<test-2>
>   |   |                                   +-Receiver.java
>   |   |                                   +-Sender.java
>   |   +-qpid-proton-cpp
>   |   |   +-src
>   |   |       +-qpidit
>   |   |           +-<test-1>
>   |   |           |   +-Receiver.cpp
>   |   |           |   +-Receiver.hpp
>   |   |           |   +-Sender.cpp
>   |   |           |   +-Sender.hpp
>   |   |           +-<test-2>
>   |   |               +-Receiver.cpp
>   |   |               +-Receiver.hpp
>   |   |               +-Sender.cpp
>   |   |               +-Sender.hpp
>   |   +-qpid-proton-python
>   |       +-src
>   |           +-<test-1>
>   |           |   +-Sender.py
>   |           |   +-Receiver.py
>   |           +-<test-2>
>   |               +-Sender.py
>   |               +-Receiver.py
>   +-src
>       +-python
>           +-qpid-interop-test
>               +-<test-1>
>               +-<test-2>
> {noformat}






[jira] [Commented] (QPID-7447) Java Broker tuning improvements for v6.1 release

2016-10-07 Thread Lorenz Quack (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1420#comment-1420
 ] 

Lorenz Quack commented on QPID-7447:


Code looks good. In general, though, I would prefer that these code changes 
carried comments explaining that the if() checks around the for loops are there 
for performance reasons. Otherwise I can see myself or someone else coming in 
here in a couple of months and removing them because they seem redundant.

> Java Broker tuning improvements for v6.1 release
> 
>
> Key: QPID-7447
> URL: https://issues.apache.org/jira/browse/QPID-7447
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Keith Wall
>Assignee: Keith Wall
> Fix For: qpid-java-6.1
>
>
> Java Broker tuning improvements for v6.1 release






[jira] [Updated] (QPID-7453) Hand off selection task only if connection tasks need to be processed

2016-10-07 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-7453:
-
Status: Reviewable  (was: In Progress)

> Hand off selection task only if connection tasks need to be processed
> -
>
> Key: QPID-7453
> URL: https://issues.apache.org/jira/browse/QPID-7453
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Keith Wall
>Assignee: Keith Wall
> Fix For: qpid-java-6.1
>
>
> Currently the selector thread always reschedules the selection task on the 
> workQueue.  In the case where the select was awoken for the purpose of 
> reregistering a connection on the selector, there will be no connection work 
> to be done, so the potential hand-off of the selection task will be 
> needless.
> We can simply optimise the algorithm to hand off responsibility for the 
> selection iff there is connection work that can be consumed by this thread.
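Schematically, the optimisation reads like the following C sketch. It is
illustrative only: every name in it is hypothetical, and the actual change
lives in the Java broker's network layer:

{noformat}
typedef struct selector_t   selector_t;     /* hypothetical opaque types */
typedef struct work_queue_t work_queue_t;

extern int  do_select(selector_t *sel);                /* # of ready connections */
extern void enqueue_selection_task(work_queue_t *wq);  /* hand off to another worker */
extern void process_one_connection(selector_t *sel);

void selection_task(selector_t *sel, work_queue_t *wq)
{
    for (;;) {
        int ready = do_select(sel);
        if (ready == 0)
            continue;  /* woken only to re-register a connection:
                        * keep selecting; a hand-off would be needless */

        /* There is connection work this thread can consume: hand the
         * selection task to another worker and process the work here. */
        enqueue_selection_task(wq);
        for (int i = 0; i < ready; i++)
            process_one_connection(sel);
        return;        /* this thread now acts as a connection worker */
    }
}
{noformat}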






[jira] [Assigned] (QPID-7453) Hand off selection task only if connection tasks need to be processed

2016-10-07 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall reassigned QPID-7453:


Assignee: Keith Wall

> Hand off selection task only if connection tasks need to be processed
> -
>
> Key: QPID-7453
> URL: https://issues.apache.org/jira/browse/QPID-7453
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Keith Wall
>Assignee: Keith Wall
> Fix For: qpid-java-6.1
>
>
> Currently the selector thread always reschedules the selection task on the 
> workQueue.  In the case where the select was awoken for the purpose of 
> reregistering a connection on the selector, there will be no connection work 
> to be done, so the potential hand-off of the selection task will be 
> needless.
> We can simply optimise the algorithm to hand off responsibility for the 
> selection iff there is connection work that can be consumed by this thread.






[jira] [Commented] (QPID-7453) Hand off selection task only if connection tasks need to be processed

2016-10-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/QPID-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1163#comment-1163
 ] 

ASF subversion and git services commented on QPID-7453:
---

Commit 1763765 from [~k-wall] in branch 'java/trunk'
[ https://svn.apache.org/r1763765 ]

QPID-7453: [Java Broker] Hand off selection task only if connection tasks need 
to be processed

This change also prevents iterator garbage if there are no connection tasks to 
be done.

> Hand off selection task only if connection tasks need to be processed
> -
>
> Key: QPID-7453
> URL: https://issues.apache.org/jira/browse/QPID-7453
> Project: Qpid
>  Issue Type: Improvement
>  Components: Java Broker
>Reporter: Keith Wall
> Fix For: qpid-java-6.1
>
>
> Currently the selector thread always reschedules the selection task on the 
> workQueue.  In the case where the select was awoken for the purpose of 
> reregistering a connection on the selector, there will be no connection work 
> to be done, so the potential hand-off of the selection task will be 
> needless.
> We can simply optimise the algorithm to hand off responsibility for the 
> selection iff there is connection work that can be consumed by this thread.






[jira] [Created] (QPID-7453) Hand off selection task only if connection tasks need to be processed

2016-10-07 Thread Keith Wall (JIRA)
Keith Wall created QPID-7453:


 Summary: Hand off selection task only if connection tasks need to 
be processed
 Key: QPID-7453
 URL: https://issues.apache.org/jira/browse/QPID-7453
 Project: Qpid
  Issue Type: Improvement
  Components: Java Broker
Reporter: Keith Wall
 Fix For: qpid-java-6.1


Currently the selector thread always reschedules the selection task on the 
workQueue.  In the case where the select was awoken for the purpose of 
reregistering a connection on the selector, there will be no connection work to 
be done, so the potential hand-off of the selection task will be needless.

We can simply optimise the algorithm to hand off responsibility for the 
selection iff there is connection work that can be consumed by this thread.






[jira] [Updated] (QPID-7452) [Java Tests] URLConnection can be spuriously open twice in RestTestHelper whilst making a test HTTP request to the Broker causing test failures

2016-10-07 Thread Alex Rudyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-7452:
-
Attachment: 
TEST-org.apache.qpid.server.store.berkeleydb.replication.BDBHAVirtualHostNodeRestTest.testIntruderBDBHAVHNNotAllowedToConnect.txt

> [Java Tests] URLConnection can be spuriously open twice in RestTestHelper 
> whilst making a test HTTP request to the Broker causing test failures
> ---
>
> Key: QPID-7452
> URL: https://issues.apache.org/jira/browse/QPID-7452
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Tests
>Reporter: Alex Rudyy
> Attachments: 
> TEST-org.apache.qpid.server.store.berkeleydb.replication.BDBHAVirtualHostNodeRestTest.testIntruderBDBHAVHNNotAllowedToConnect.txt
>
>
> Test 
> BDBHAVirtualHostNodeRestTest.testIntruderBDBHAVHNNotAllowedToConnect(BDBHAVirtualHostNodeRestTest)
>  failed recently with the following error:
> {noformat}
> Unexpected response code from PUT virtualhostnode/node3 expected:<201> but 
> was:<422>
> Stacktrace
> java.lang.AssertionError: Unexpected response code from PUT 
> virtualhostnode/node3 expected:<201> but was:<422>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.qpid.systest.rest.RestTestHelper.submitRequest(RestTestHelper.java:594)
>   at 
> org.apache.qpid.server.store.berkeleydb.replication.BDBHAVirtualHostNodeRestTest.testIntruderBDBHAVHNNotAllowedToConnect(BDBHAVirtualHostNodeRestTest.java:252)
> {noformat}
> The test logs contain records of 2 HTTP requests being made to join the HA 
> node to the group whilst the test intended to make only one HTTP request.
> {noformat}
> 2016-10-06 22:00:57,857 DEBUG [HttpManagement-http-2022] 
> o.a.q.s.m.p.f.LoggingFilter REQUEST  user='null' method='PUT' 
> url='http://localhost:51866/api/latest/virtualhostnode/node3'
> 2016-10-06 22:00:57,860 DEBUG [HttpManagement-http-2038] 
> o.a.q.s.m.p.f.LoggingFilter REQUEST  user='null' method='PUT' 
> url='http://localhost:51866/api/latest/virtualhostnode/node3'
> {noformat}
> The first request completed successfully:
> {noformat}
> 2016-10-06 22:00:58,210 DEBUG [HttpManagement-http-2022] 
> o.a.q.s.m.p.f.LoggingFilter RESPONSE user='[/127.0.0.1:48747, webadmin]' 
> method='PUT' url='http://localhost:51866/api/latest/virtualhostnode/node3' 
> status='201'
> {noformat} 
> The second, spurious request failed:
> {noformat}
> WARN  [HttpManagement-http-2038] o.a.q.s.m.p.s.r.RestServlet 
> IllegalConfigurationException processing request 
> http://localhost:51866/api/latest/virtualhostnode/node3 from user 
> '[/127.0.0.1:48849, webadmin]': Cannot bind to address 'localhost:10002'. 
> Address is already in use.
> 2016-10-06 22:00:58,221 DEBUG [VirtualHostNode-node3-Config] 
> o.a.q.s.c.u.TaskExecutorImpl Performing Task['create' on 
> 'VirtualHost[id=35bb124e-1e4d-4c06-812e-c92c13b0b4d3, 
> name=BDBHAVirtualHostNodeRestTest.testIntruderBDBHAVHNNotAllowedToConnect, 
> type=BDB_HA_REPLICA]']
> 2016-10-06 22:00:58,222 DEBUG [HttpManagement-http-2038] 
> o.a.q.s.m.p.f.LoggingFilter RESPONSE user='[/127.0.0.1:48849, webadmin]' 
> method='PUT' url='http://localhost:51866/api/latest/virtualhostnode/node3' 
> status='422'
> {noformat}
> The client received a response with HTTP status code 422, which caused the 
> test failure.
> It looks like the problem lies in the code used to open a URLConnection to 
> the broker in RestTestHelper. We need to prevent the URLConnection from being 
> opened twice.






[jira] [Created] (QPID-7452) [Java Tests] URLConnection can be spuriously open twice in RestTestHelper whilst making a test HTTP request to the Broker causing test failures

2016-10-07 Thread Alex Rudyy (JIRA)
Alex Rudyy created QPID-7452:


 Summary: [Java Tests] URLConnection can be spuriously open twice 
in RestTestHelper whilst making a test HTTP request to the Broker causing test 
failures
 Key: QPID-7452
 URL: https://issues.apache.org/jira/browse/QPID-7452
 Project: Qpid
  Issue Type: Bug
  Components: Java Tests
Reporter: Alex Rudyy


Test 
BDBHAVirtualHostNodeRestTest.testIntruderBDBHAVHNNotAllowedToConnect(BDBHAVirtualHostNodeRestTest)
 failed recently with the following error:
{noformat}
Unexpected response code from PUT virtualhostnode/node3 expected:<201> but 
was:<422>

Stacktrace

java.lang.AssertionError: Unexpected response code from PUT 
virtualhostnode/node3 expected:<201> but was:<422>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.qpid.systest.rest.RestTestHelper.submitRequest(RestTestHelper.java:594)
at 
org.apache.qpid.server.store.berkeleydb.replication.BDBHAVirtualHostNodeRestTest.testIntruderBDBHAVHNNotAllowedToConnect(BDBHAVirtualHostNodeRestTest.java:252)
{noformat}

The test logs contain records of 2 HTTP requests being made to join the HA node 
to the group whilst the test intended to make only one HTTP request.
{noformat}
2016-10-06 22:00:57,857 DEBUG [HttpManagement-http-2022] 
o.a.q.s.m.p.f.LoggingFilter REQUEST  user='null' method='PUT' 
url='http://localhost:51866/api/latest/virtualhostnode/node3'
2016-10-06 22:00:57,860 DEBUG [HttpManagement-http-2038] 
o.a.q.s.m.p.f.LoggingFilter REQUEST  user='null' method='PUT' 
url='http://localhost:51866/api/latest/virtualhostnode/node3'
{noformat}
The first request completed successfully:
{noformat}
2016-10-06 22:00:58,210 DEBUG [HttpManagement-http-2022] 
o.a.q.s.m.p.f.LoggingFilter RESPONSE user='[/127.0.0.1:48747, webadmin]' 
method='PUT' url='http://localhost:51866/api/latest/virtualhostnode/node3' 
status='201'
{noformat} 
The second, spurious request failed:
{noformat}
WARN  [HttpManagement-http-2038] o.a.q.s.m.p.s.r.RestServlet 
IllegalConfigurationException processing request 
http://localhost:51866/api/latest/virtualhostnode/node3 from user 
'[/127.0.0.1:48849, webadmin]': Cannot bind to address 'localhost:10002'. 
Address is already in use.
2016-10-06 22:00:58,221 DEBUG [VirtualHostNode-node3-Config] 
o.a.q.s.c.u.TaskExecutorImpl Performing Task['create' on 
'VirtualHost[id=35bb124e-1e4d-4c06-812e-c92c13b0b4d3, 
name=BDBHAVirtualHostNodeRestTest.testIntruderBDBHAVHNNotAllowedToConnect, 
type=BDB_HA_REPLICA]']
2016-10-06 22:00:58,222 DEBUG [HttpManagement-http-2038] 
o.a.q.s.m.p.f.LoggingFilter RESPONSE user='[/127.0.0.1:48849, webadmin]' 
method='PUT' url='http://localhost:51866/api/latest/virtualhostnode/node3' 
status='422'
{noformat}

The client received a response with HTTP status code 422, which caused the test 
failure.

It looks like the problem lies in the code used to open a URLConnection to the 
broker in RestTestHelper. We need to prevent the URLConnection from being opened 
twice.






Re: Azure Service Bus receiver timing out with electron go binding

2016-10-07 Thread Tobias Duckworth
Thank you very much for the detailed explanation.

I managed to get this going quite simply:

1) Added a pure virtual method on connection_engine,
connection_engine::tick().

2) Added a public function pn_timestamp_t
connection_engine::tick_check(pn_timestamp_t now)

3) Added some logic to connection_engine::dispatch that checks pn_event_type
for whether it's a PN_TRANSPORT event, and if so sets a boolean to call the
connection_engine::tick() method.

4) Implemented the connection_engine::tick() method in my derived class,
which gets the current time in milliseconds, then calls
connection_engine::tick_check(now). If the returned value is greater than
zero, a timer is set up for the number of milliseconds returned, which on
expiry calls the tick() method in the derived class.

With these four steps the underlying transport sends heartbeats at half of
the remote_idle_timeout specified in the response to the connect packet.
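For reference, the Proton C call that a tick_check() of this kind can wrap is
pn_transport_tick(). A minimal sketch of the timer dance, with the clock and
timer helpers as hypothetical stand-ins:

{noformat}
#include <proton/transport.h>

extern pn_timestamp_t now_ms(void);              /* hypothetical millisecond clock */
extern void arm_timer(pn_timestamp_t deadline);  /* hypothetical one-shot timer
                                                  * that re-invokes tick() */

/* Run idle-timeout housekeeping (heartbeat emission etc.) and schedule the
 * next tick. pn_transport_tick() returns the deadline of the next required
 * tick, or 0 if no further tick is needed. */
void tick(pn_transport_t *transport)
{
    pn_timestamp_t next = pn_transport_tick(transport, now_ms());
    if (next != 0)
        arm_timer(next);
}
{noformat}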

Thanks again,
Toby








[jira] [Updated] (QPID-7451) [Java Broker] MessageTransferMessage should cache message size

2016-10-07 Thread Keith Wall (JIRA)

 [ 
https://issues.apache.org/jira/browse/QPID-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Wall updated QPID-7451:
-
Fix Version/s: qpid-java-6.1
   qpid-java-6.0.5

> [Java Broker] MessageTransferMessage should cache message size
> --
>
> Key: QPID-7451
> URL: https://issues.apache.org/jira/browse/QPID-7451
> Project: Qpid
>  Issue Type: Bug
>  Components: Java Broker
>Affects Versions: 0.32, qpid-java-6.0, qpid-java-6.0.1, qpid-java-6.0.2, 
> qpid-java-6.0.3, qpid-java-6.0.4
>Reporter: Rob Godfrey
>Assignee: Rob Godfrey
> Fix For: qpid-java-6.0.5, qpid-java-6.1
>
> Attachments: QPID-7451_for_0_32.patch
>
>
> AMQMessage (the 0-9-1 path) caches message size, however 
> MessageTransferMessage (the 0-10 path) does not.  This means that the 
> delivery path from AbstractQueue may require retrieval of the message meta 
> data from the store to get the message size.  Since not all requests to get 
> the message size are protected by first getting a message reference, there is 
> a chance of a race whereby the message is removed from the store between 
> checking whether it is available and obtaining the message size.


