[jira] [Commented] (QPID-6933) Factor out a JMS client neutral messaging test suite from system tests
[ https://issues.apache.org/jira/browse/QPID-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301926#comment-16301926 ] ASF subversion and git services commented on QPID-6933: --- Commit c405c844b6c7d35c78b5aca6ba97de2961c8e92b in qpid-broker-j's branch refs/heads/master from [~k-wall] [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=c405c84 ] QPID-6933: [System Tests] Refactor AsynchMessageListenerTest as JMS 1.1 system test > Factor out a JMS client neutral messaging test suite from system tests > -- > > Key: QPID-6933 > URL: https://issues.apache.org/jira/browse/QPID-6933 > Project: Qpid > Issue Type: Improvement > Components: Java Tests >Reporter: Keith Wall >Assignee: Alex Rudyy > > The existing system testsuite is in a poor state. > * It is poorly structured > * Mixes different types of test. i.e. messaging behaviour with those that > test features of the Java Broker (e.g. REST). > * Many of the tests refer directly to the implementation classes of the Qpid > Client 0-8..0-10 client meaning the tests cannot be run using the new client. > As a first step, we want to factor out a separate messaging system test suite: > * The tests in this suite will be JMS client neutral. > * Written in terms of JNDI/JMS client > * Configurations/Broker observations will be performed via a clean > Broker-neutral facade. For instance > ** a mechanism to cause the queue to be created of a particular type. > ** a mechanism to observe a queue depth. > * The tests will be classified by feature (to be defined) > * The classification system will be used to drive an exclusion feature (to be > defined). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
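The "clean Broker-neutral facade" described in the issue above could take roughly the following shape. This is a hypothetical sketch only: BrokerFacade, createQueue, getQueueDepth and the in-memory implementation are invented for illustration and are not actual Qpid Broker-J test APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical broker-neutral facade; names are illustrative only,
// not part of any actual Qpid API.
interface BrokerFacade {
    void createQueue(String name, String type); // e.g. a particular queue type
    int getQueueDepth(String name);             // observe a queue depth
}

// Minimal in-memory stand-in, useful for exercising test-suite plumbing
// without a running broker.
class InMemoryBrokerFacade implements BrokerFacade {
    private final Map<String, Integer> depths = new HashMap<>();
    private final Map<String, String> types = new HashMap<>();

    @Override
    public void createQueue(String name, String type) {
        types.put(name, type);
        depths.put(name, 0);
    }

    @Override
    public int getQueueDepth(String name) {
        Integer d = depths.get(name);
        if (d == null) {
            throw new IllegalArgumentException("No such queue: " + name);
        }
        return d;
    }

    // Test hook simulating message arrival.
    void enqueue(String name) {
        depths.merge(name, 1, Integer::sum);
    }
}

public class FacadeSketch {
    public static int demo() {
        InMemoryBrokerFacade broker = new InMemoryBrokerFacade();
        broker.createQueue("testQueue", "standard");
        broker.enqueue("testQueue");
        broker.enqueue("testQueue");
        return broker.getQueueDepth("testQueue");
    }

    public static void main(String[] args) {
        System.out.println("depth=" + demo());
    }
}
```

A JMS-client-neutral test would then talk JNDI/JMS for messaging and call the facade only for setup and observation, keeping broker specifics behind one interface.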
[jira] [Commented] (QPID-6933) Factor out a JMS client neutral messaging test suite from system tests
[ https://issues.apache.org/jira/browse/QPID-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301927#comment-16301927 ] ASF subversion and git services commented on QPID-6933: --- Commit 8e78fbe605ab8c5c6a7dfc89890793f160e75d71 in qpid-broker-j's branch refs/heads/master from [~k-wall] [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=8e78fbe ] QPID-6933: [System Tests] Remove explicit exchange/queue delete tests - these are mainly concerned with non-JMS implementation methods of the legacy client. The server-side behaviour is now covered by protocol tests.
[jira] [Commented] (QPID-6933) Factor out a JMS client neutral messaging test suite from system tests
[ https://issues.apache.org/jira/browse/QPID-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301925#comment-16301925 ] ASF subversion and git services commented on QPID-6933: --- Commit f24f2c74b858d44737d47a328930fbcf61491975 in qpid-broker-j's branch refs/heads/master from [~k-wall] [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=f24f2c7 ] QPID-6933: [System Tests] Subsume plethora of acknowledgement tests into a JMS 1.1 test case
[jira] [Comment Edited] (PROTON-1718) (Proton-J) Custom Sasl
[ https://issues.apache.org/jira/browse/PROTON-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301860#comment-16301860 ] Tim Taylor edited comment on PROTON-1718 at 12/22/17 7:33 PM: --

Actually, after doing some digging, I suspect that I don't need to submit a PR to achieve this functionality. If I understand the Sasl APIs correctly, I should be able to use the combination of send(...)/recv(...)/pending() to manually send inits, wait for challenges, receive those challenges, and send my custom responses to meet the challenges.

I can't seem to make it work, however. I can successfully send the init frame and am given a challenge frame in response. I can read that challenge frame and expose it to my application just fine. However, when I try to send a response to that first challenge, the frame I am trying to send is never written: it is saved as the challenge response, but it is never processed beyond that. Is the below code supposed to work for this custom SASL scenario? (The byte-array arguments and mechanism name were lost in the mail archive; initBytes, firstResponseBytes and secondResponseBytes below stand in for them.)

Sasl sasl = transport.sasl();
sasl.client();
sasl.setMechanisms("<mechanism>");  // mechanism name lost in the mail archive

// send init message, wait for response
sasl.send(initBytes, 0, initBytes.length);
waitForSaslBuffer(sasl);

// receive first challenge, send first challenge response, wait for second challenge
byte[] firstChallengeBytes = retrieveChallengeData(sasl);
sasl.send(firstResponseBytes, 0, firstResponseBytes.length);
waitForSaslBuffer(sasl);

// receive second challenge, send second challenge response
byte[] secondChallengeBytes = retrieveChallengeData(sasl);
sasl.send(secondResponseBytes, 0, secondResponseBytes.length);
...

private void waitForSaslBuffer(Sasl sasl) throws InterruptedException {
    while (sasl.pending() == 0) {
        Thread.sleep(1000);
    }
}

private byte[] retrieveChallengeData(Sasl sasl) {
    byte[] saslChallengeBytes = new byte[sasl.pending()];
    sasl.recv(saslChallengeBytes, 0, sasl.pending());
    return saslChallengeBytes;
}

> (Proton-J) Custom Sasl
> --
>
> Key: PROTON-1718
> URL: https://issues.apache.org/jira/browse/PROTON-1718
> Project: Qpid Proton
> Issue Type: Improvement
> Components: proton-j
> Affects Versions: proton-j-0.24.0
> Reporter: Tim Taylor
> Labels: features
>
> I would like to be able to provide a custom SASL implementation for Proton-J to use instead of being forced to use the default SaslImpl.java implementation.
>
> Ideally, code like the below would be possible:
>
> private class CustomSasl implements org.apache.qpid.proton.engine.Sasl
> {
>     ...
> }
> ...
> // transport.sasl(...) saves the provided sasl implementation and uses it internally
> Sasl sasl = transport.sasl(new CustomSasl());
>
> Do you currently have a workaround that would allow me to use Proton-J this way?
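The pending()/recv()/send() handshake loop described in the comment above can be illustrated with a self-contained simulation. The SaslLike interface and FakeSasl peer below are simplified stand-ins invented for illustration; they are not the proton-j org.apache.qpid.proton.engine.Sasl API, and the canned challenge strings are arbitrary.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified stand-in for the pending()/recv()/send() portion of a SASL
// endpoint; NOT the real proton-j org.apache.qpid.proton.engine.Sasl.
interface SaslLike {
    int pending();                          // bytes of challenge data waiting
    int recv(byte[] buf, int off, int len); // read pending challenge data
    void send(byte[] buf, int off, int len);// send init/response data
}

// Fake peer that "replies" to each outbound frame with the next canned challenge.
class FakeSasl implements SaslLike {
    private final Deque<byte[]> challenges = new ArrayDeque<>();
    private byte[] current; // challenge currently available to recv()

    FakeSasl(String... canned) {
        for (String c : canned) {
            challenges.add(c.getBytes(StandardCharsets.UTF_8));
        }
    }

    public int pending() {
        return current == null ? 0 : current.length;
    }

    public int recv(byte[] buf, int off, int len) {
        int n = Math.min(len, current.length);
        System.arraycopy(current, 0, buf, off, n);
        current = null; // challenge consumed
        return n;
    }

    public void send(byte[] buf, int off, int len) {
        current = challenges.poll(); // peer answers with the next challenge
    }
}

public class SaslLoopDemo {
    static void waitForSaslBuffer(SaslLike sasl) {
        while (sasl.pending() == 0) {
            Thread.yield(); // deterministic here, so no sleep needed
        }
    }

    static byte[] retrieveChallengeData(SaslLike sasl) {
        byte[] bytes = new byte[sasl.pending()];
        sasl.recv(bytes, 0, sasl.pending());
        return bytes;
    }

    public static String demo() {
        SaslLike sasl = new FakeSasl("challenge-1", "challenge-2");

        // send init, wait for and read the first challenge
        byte[] init = "init".getBytes(StandardCharsets.UTF_8);
        sasl.send(init, 0, init.length);
        waitForSaslBuffer(sasl);
        String first = new String(retrieveChallengeData(sasl), StandardCharsets.UTF_8);

        // send first response, wait for and read the second challenge
        byte[] resp1 = "response-1".getBytes(StandardCharsets.UTF_8);
        sasl.send(resp1, 0, resp1.length);
        waitForSaslBuffer(sasl);
        String second = new String(retrieveChallengeData(sasl), StandardCharsets.UTF_8);

        return first + "|" + second;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "challenge-1|challenge-2"
    }
}
```

The simulation shows the intended alternation (send, wait on pending(), recv, respond); whether real proton-j writes the queued response frame at that point is exactly the question the comment raises.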
[jira] [Resolved] (QPIDJMS-352) Update Netty to latest 4.1.19.Final
[ https://issues.apache.org/jira/browse/QPIDJMS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Bish resolved QPIDJMS-352. -- Resolution: Fixed > Update Netty to latest 4.1.19.Final > --- > > Key: QPIDJMS-352 > URL: https://issues.apache.org/jira/browse/QPIDJMS-352 > Project: Qpid JMS > Issue Type: Task > Components: qpid-jms-client > Affects Versions: 0.28.0 > Reporter: Timothy Bish > Assignee: Timothy Bish > Priority: Minor > Fix For: 0.29.0
[jira] [Commented] (QPIDJMS-352) Update Netty to latest 4.1.19.Final
[ https://issues.apache.org/jira/browse/QPIDJMS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301720#comment-16301720 ] ASF subversion and git services commented on QPIDJMS-352: - Commit 368508c3a9a107ebfd5e1e91530fbc9bbf690bd3 in qpid-jms's branch refs/heads/master from [~tabish121] [ https://git-wip-us.apache.org/repos/asf?p=qpid-jms.git;h=368508c ] QPIDJMS-352 Update to netty 4.1.19.Final
[jira] [Created] (QPIDJMS-352) Update Netty to latest 4.1.19.Final
Timothy Bish created QPIDJMS-352: Summary: Update Netty to latest 4.1.19.Final Key: QPIDJMS-352 URL: https://issues.apache.org/jira/browse/QPIDJMS-352 Project: Qpid JMS Issue Type: Task Components: qpid-jms-client Affects Versions: 0.28.0 Reporter: Timothy Bish Assignee: Timothy Bish Priority: Minor Fix For: 0.29.0
[jira] [Commented] (DISPATCH-902) Intermittent crash with link to broker when broker closed
[ https://issues.apache.org/jira/browse/DISPATCH-902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301679#comment-16301679 ] Alan Conway commented on DISPATCH-902: -- Fixed PROTON-1727 which I believe is the root cause of this problem. [~ganeshmurthy] can you run the dispatch tests to verify before resolving this issue? > Intermittent crash with link to broker when broker closed > - > > Key: DISPATCH-902 > URL: https://issues.apache.org/jira/browse/DISPATCH-902 > Project: Qpid Dispatch > Issue Type: Bug >Affects Versions: 1.0.0 >Reporter: Kim van der Riet >Assignee: Ganesh Murthy >Priority: Blocker > Fix For: 1.1.0 > > Attachments: qdrouterd.node1.conf, qdrouterd.node2.conf, > qpidd.d2n.conf, testme.tgz > > > When using dispatch in a 2-node configuration with a broker between them: > {noformat} > 9002 10001 100019003 > sender > dispatch1 -> qpid-cpp -> dispatch2 -> receiver > {noformat} > and initializing in the following order: > # start dispatch1 > # start dispatch2 > # start qpid-cpp > # wait for "Link Route Activated" messages on both dispatch nodes > # stop qpid-cpp > then the dispatch nodes will core after a random amount of time and after > sending a random number of > {noformat} > (info) Connection to localhost:10001 failed: proton:io Connection refused - > on read from localhost:10001 > {noformat} > messages. > The stack trace is as follows for all occurrences: > {noformat} > Thread 3 "qdrouterd" received signal SIGSEGV, Segmentation fault. 
> [Switching to Thread 0x7fffea269700 (LWP 10954)] > pn_transport_tail_closed (transport=0x0) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/core/transport.c:3044 > 3044 bool pn_transport_tail_closed(pn_transport_t *transport) { return > transport->tail_closed; } > (gdb) thread apply all bt > Thread 5 (Thread 0x7fffe9267700 (LWP 10956)): > #0 0x767eb6d3 in epoll_wait () at > ../sysdeps/unix/syscall-template.S:84 > #1 0x777327e2 in proactor_do_epoll (p=0x89b550, > can_block=can_block@entry=true) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/proactor/epoll.c:1978 > #2 0x777337ca in pn_proactor_wait (p=) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/proactor/epoll.c:2025 > #3 0x77bbc219 in thread_run (arg=0x89ec20) at > /home/kpvdr/RedHat/qpid-dispatch/src/server.c:932 > #4 0x775185ca in start_thread (arg=0x7fffe9267700) at > pthread_create.c:333 > #5 0x767eb0cd in clone () at > ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 > Thread 4 (Thread 0x7fffe9a68700 (LWP 10955)): > #0 0x767eb6d3 in epoll_wait () at > ../sysdeps/unix/syscall-template.S:84 > #1 0x777327e2 in proactor_do_epoll (p=0x89b550, > can_block=can_block@entry=true) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/proactor/epoll.c:1978 > #2 0x777337ca in pn_proactor_wait (p=) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/proactor/epoll.c:2025 > #3 0x77bbc219 in thread_run (arg=0x89ec20) at > /home/kpvdr/RedHat/qpid-dispatch/src/server.c:932 > #4 0x775185ca in start_thread (arg=0x7fffe9a68700) at > pthread_create.c:333 > #5 0x767eb0cd in clone () at > ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 > Thread 3 (Thread 0x7fffea269700 (LWP 10954)): > #0 pn_transport_tail_closed (transport=0x0) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/core/transport.c:3044 > #1 0x7794f4f9 in pn_connection_driver_read_closed > (d=d@entry=0x7fffdc054288) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/core/connection_driver.c:109 > #2 0x77731ef1 in pconnection_rclosed (pc=0x7fffdc053ce0) at > 
/home/kpvdr/RedHat/qpid-proton/proton-c/src/proactor/epoll.c:898 > #3 pconnection_process (pc=0x7fffdc053ce0, events=, > timeout=timeout@entry=false, topup=topup@entry=false) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/proactor/epoll.c:1084 > #4 0x77732945 in proactor_do_epoll (p=0x89b550, > can_block=can_block@entry=true) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/proactor/epoll.c:2007 > #5 0x777337ca in pn_proactor_wait (p=) at > /home/kpvdr/RedHat/qpid-proton/proton-c/src/proactor/epoll.c:2025 > #6 0x77bbc219 in thread_run (arg=0x89ec20) at > /home/kpvdr/RedHat/qpid-dispatch/src/server.c:932 > #7 0x775185ca in start_thread (arg=0x7fffea269700) at > pthread_create.c:333 > #8 0x767eb0cd in clone () at > ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 > Thread 2 (Thread 0x7fffeaa6a700 (LWP 10953)): > #0 pthread_cond_wait@@GLIBC_2.3.2 () at > ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185 > #1 0x77ba2949 in sys_cond_wait (cond=, > held_mutex=) at >
[jira] [Assigned] (DISPATCH-902) Intermittent crash with link to broker when broker closed
[ https://issues.apache.org/jira/browse/DISPATCH-902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Conway reassigned DISPATCH-902: Assignee: Ganesh Murthy (was: Alan Conway)
[jira] [Commented] (PROTON-1727) [epoll proactor] segfaults, hangs and leaked FDs around failed connect
[ https://issues.apache.org/jira/browse/PROTON-1727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301674#comment-16301674 ] ASF subversion and git services commented on PROTON-1727: - Commit 8d91e54c4445f8f7fcac44150de5ee2da34a3571 in qpid-proton's branch refs/heads/master from [~aconway] [ https://git-wip-us.apache.org/repos/asf?p=qpid-proton.git;h=8d91e54 ] PROTON-1727 [epoll] fix file descriptor leak on reconnect Fix a file descriptor leak when a host name resolves to multiple socket addresses and connecting to the first address fails. > [epoll proactor] segfaults, hangs and leaked FDs around failed connect > -- > > Key: PROTON-1727 > URL: https://issues.apache.org/jira/browse/PROTON-1727 > Project: Qpid Proton > Issue Type: Bug > Components: proton-c >Affects Versions: proton-c-0.18.1 >Reporter: Alan Conway >Assignee: Alan Conway >Priority: Blocker > Fix For: proton-c-0.20.0 > > > There is a race condition that causes leaked FDs and segfaults in the epoll > proactor under the following conditions: > - there is more than one thread processing proactor events. > - attempting to connect to a host address that resolves to multiple socket > addresses, e.g. resolving the NULL hostname on a machine with ipv4 and ipv6 > enabled. > - there is nothing listening on the target port. > The attached reproducer shows several bad behaviors: > - under rr or valgrind (--tool=memcheck and --tool=helgrind) it quickly (< > 1min) shows race conditions and/or invalid memory access. > - it hangs fairly often even without valgrind/rr, more so if you increase the > thread count. Without valgrind/rr it rarely segfaults. > - it leaks FDs - the test should run forever, but runs out of FDs around 1024 > iterations. > This is probably the cause of > https://issues.apache.org/jira/browse/DISPATCH-902, which does occur very > frequently under the conditions described there. > The test program should run forever without leaking or showing any faults. 
> Note that gcc -fsanitize does not detect races or memory errors, which suggests the bug requires a delay at the right time to manifest. Valgrind's overhead and rr's code serialization appear to provide that delay. It seems likely that dispatch's reconnect logic is providing the delay in DISPATCH-902. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
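The commit referenced above fixes a file descriptor leak that occurs when a host name resolves to several socket addresses and an early connect attempt fails. The underlying pattern is general: the socket of every failed attempt must be closed before moving to the next address, or each reconnect cycle leaks one FD. A minimal sketch of the correct pattern, written in Java purely for illustration (the real fix is in proton-c's epoll proactor; the class and method names here are invented):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectAll {
    // Try every address the host resolves to (e.g. IPv6 then IPv4),
    // closing the socket of each failed attempt so no FD is leaked.
    static Socket connectAny(String host, int port, int timeoutMs) throws IOException {
        IOException last = null;
        for (InetAddress addr : InetAddress.getAllByName(host)) {
            Socket s = new Socket();
            try {
                s.connect(new InetSocketAddress(addr, port), timeoutMs);
                return s;              // success: the caller owns the socket
            } catch (IOException e) {
                s.close();             // without this, each failure leaks an FD
                last = e;
            }
        }
        throw last != null ? last : new IOException("no addresses for " + host);
    }

    public static void main(String[] args) throws IOException {
        // Grab a free port and release it again, so nothing is listening there.
        int port;
        try (ServerSocket probe = new ServerSocket(0)) {
            port = probe.getLocalPort();
        }
        try {
            connectAny("localhost", port, 250);
        } catch (IOException expected) {
            // Every resolved address was tried and every socket was closed.
            System.out.println("all attempts failed, no FDs leaked");
        }
    }
}
```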
[jira] [Resolved] (PROTON-1727) [epoll proactor] segfaults, hangs and leaked FDs around failed connect
[ https://issues.apache.org/jira/browse/PROTON-1727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Conway resolved PROTON-1727. - Resolution: Fixed
> [epoll proactor] segfaults, hangs and leaked FDs around failed connect
> Key: PROTON-1727 > Project: Qpid Proton > Components: proton-c > Affects Versions: proton-c-0.18.1 > Fix For: proton-c-0.20.0
[jira] [Commented] (PROTON-1727) [epoll proactor] segfaults, hangs and leaked FDs around failed connect
[ https://issues.apache.org/jira/browse/PROTON-1727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301675#comment-16301675 ] ASF subversion and git services commented on PROTON-1727: - Commit 223e6d012dab8bbe4bdb92538d84c630bbd1cf27 in qpid-proton's branch refs/heads/master from [~aconway] [ https://git-wip-us.apache.org/repos/asf?p=qpid-proton.git;h=223e6d0 ] PROTON-1727 [epoll] fix race condition Needed to set current_arm flag on second and subsequent connect attempts when resolve returns multiple socket addresses. Otherwise another thread can delete the connection early.
> [epoll proactor] segfaults, hangs and leaked FDs around failed connect
> Key: PROTON-1727 > Project: Qpid Proton > Components: proton-c > Affects Versions: proton-c-0.18.1 > Fix For: proton-c-0.20.0
[jira] [Updated] (QPID-8040) [Broker-J] Uncaught java.nio.channels.CancelledKeyException seen during Broker shutdown
[ https://issues.apache.org/jira/browse/QPID-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keith Wall updated QPID-8040: - Status: Open (was: Reviewable)
We had another one of these failures yesterday (2017-12-21). I think our guarding may be one level too deep.
{code}
Thread terminated due to uncaught exception java.nio.channels.CancelledKeyException
    at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
    at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87)
    at java.nio.channels.SelectionKey.isAcceptable(SelectionKey.java:360)
    at org.apache.qpid.server.transport.SelectorThread$SelectionTask.processSelectionKeys(SelectorThread.java:180)
    at org.apache.qpid.server.transport.SelectorThread$SelectionTask.performSelect(SelectorThread.java:336)
    at org.apache.qpid.server.transport.SelectorThread$SelectionTask.run(SelectorThread.java:97)
    at org.apache.qpid.server.transport.SelectorThread.run(SelectorThread.java:538)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.qpid.server.bytebuffer.QpidByteBufferFactory.lambda$null$0(QpidByteBufferFactory.java:464)
    at java.lang.Thread.run(Thread.java:748)
{code}
> [Broker-J] Uncaught java.nio.channels.CancelledKeyException seen during Broker shutdown
> ---
>
> Key: QPID-8040
> URL: https://issues.apache.org/jira/browse/QPID-8040
> Project: Qpid
> Issue Type: Bug
> Components: Broker-J
> Affects Versions: qpid-java-broker-7.1.0
> Reporter: Keith Wall
> Assignee: Keith Wall
> Priority: Minor
> Fix For: qpid-java-broker-7.0.1
>
> Attachments: TEST-org.apache.qpid.systest.management.amqp.AmqpManagementTest.testInvokeOperationReturningMap
>
>
> The following exception was trapped by the UncaughtExceptionHandler during a shutdown of the Broker. This would have caused an abnormal termination.
I expect it could also happen during an HA virtualhost mastership change too.
> This code hasn't changed, so I think the problem is probably longstanding.
> {noformat}
> 2017-11-10 20:30:04,540 DEBUG [QpidJMS Connection Executor: ID:51dec5e7-faf9-4b92-89ec-16396a27a101:1] o.a.q.j.u.ThreadPoolUtils Shutdown of ExecutorService: java.util.concurrent.ScheduledThreadPoolExecutor@6fbbe33[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 24] is shutdown: true and terminated: true took: 0.000 seconds.
> 2017-11-10 20:30:04,540 DEBUG [VirtualHostNode-test-Config] o.a.q.s.c.u.TaskExecutorImpl Stopping task executor virtualhost-test-preferences
> 2017-11-10 20:30:04,544 DEBUG [VirtualHostNode-test-Config] o.a.q.s.c.u.TaskExecutorImpl Task executor is stopped
> 2017-11-10 20:30:04,543 ERROR [Selector-Port-amqp] o.a.q.t.u.InternalBrokerHolder Uncaught exception from thread Selector-Port-amqp
> java.nio.channels.CancelledKeyException: null
> at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
> at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:82)
> at java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:204)
> at org.apache.qpid.server.transport.SelectorThread$SelectionTask.processSelectionKeys(SelectorThread.java:240)
> at org.apache.qpid.server.transport.SelectorThread$SelectionTask.performSelect(SelectorThread.java:326)
> at org.apache.qpid.server.transport.SelectorThread$SelectionTask.run(SelectorThread.java:97)
> at org.apache.qpid.server.transport.SelectorThread.run(SelectorThread.java:528)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at org.apache.qpid.server.bytebuffer.QpidByteBufferFactory.lambda$null$0(QpidByteBufferFactory.java:464)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
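The guarding question raised above comes down to checking key validity before interrogating readiness: `SelectionKey.isAcceptable()` delegates to `readyOps()`, which throws `CancelledKeyException` once the key has been cancelled (for example by a shutdown racing the selector loop). A minimal, hedged demo in plain java.nio, not the Broker's actual SelectorThread code:

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class CancelledKeyGuard {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(0));
        SelectionKey key = server.register(selector, SelectionKey.OP_ACCEPT);

        key.cancel(); // simulates another thread tearing the port down mid-loop

        // isAcceptable() calls readyOps(), which throws CancelledKeyException
        // on a cancelled key, so test validity first and skip the key instead.
        if (key.isValid() && key.isAcceptable()) {
            server.accept();
        } else {
            System.out.println("skipped cancelled key");
        }
        server.close();
        selector.close();
    }
}
```

With the validity check first, a key cancelled by a concurrent shutdown is skipped silently rather than propagating `CancelledKeyException` out of the selector loop.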
[jira] [Commented] (QPID-8060) [Broker-J] [AMQP 0-8..0-9-1] Declaring queue that specifies an alternate binding that does not exist crashes the Broker
[ https://issues.apache.org/jira/browse/QPID-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301655#comment-16301655 ] Keith Wall commented on QPID-8060: -- I find the {{org/apache/qpid/server/protocol/v0_8/AMQChannel.java:3134}} reference to a conditionally initialised automatic variable {{alternateExchangeName}} displeasing. The code is brittle. I think following the pattern of {{UnknownConfiguredObjectException}} would be a better way to go. Specifically, I mean that {{UnknownAlternateBindingException}} should carry the name of the unknown alternate binding, so that AMQChannel and ServerSessionDelegate have a straightforward way of getting it. I think the rest of the changes look okay.
> [Broker-J] [AMQP 0-8..0-9-1] Declaring queue that specifies an alternate binding that does not exist crashes the Broker
> ---
>
> Key: QPID-8060
> URL: https://issues.apache.org/jira/browse/QPID-8060
> Project: Qpid
> Issue Type: Bug
> Components: Broker-J
> Affects Versions: qpid-java-broker-7.0.0
> Reporter: Alex Rudyy
> Assignee: Alex Rudyy
> Fix For: qpid-java-broker-7.0.1
>
>
> Declaring a queue with a non-existent alternate binding crashes the Broker with the following stack trace:
> {noformat}
> org.apache.qpid.server.configuration.IllegalConfigurationException: Cannot create alternate binding for 'test' : Alternate binding destination 'not_existing' cannot be found.
> at > org.apache.qpid.server.queue.AbstractQueue.validateOrCreateAlternateBinding(AbstractQueue.java:3537) > at > org.apache.qpid.server.queue.AbstractQueue.onCreate(AbstractQueue.java:320) > at > org.apache.qpid.server.model.AbstractConfiguredObject.doCreation(AbstractConfiguredObject.java:1273) > at > org.apache.qpid.server.model.AbstractConfiguredObject$6.execute(AbstractConfiguredObject.java:893) > at > org.apache.qpid.server.model.AbstractConfiguredObject$6.execute(AbstractConfiguredObject.java:866) > at > org.apache.qpid.server.model.AbstractConfiguredObject$2.execute(AbstractConfiguredObject.java:637) > at > org.apache.qpid.server.model.AbstractConfiguredObject$2.execute(AbstractConfiguredObject.java:630) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl$TaskLoggingWrapper.execute(TaskExecutorImpl.java:248) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl.submitWrappedTask(TaskExecutorImpl.java:165) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl.submit(TaskExecutorImpl.java:153) > at > org.apache.qpid.server.model.AbstractConfiguredObject.doOnConfigThread(AbstractConfiguredObject.java:629) > at > org.apache.qpid.server.model.AbstractConfiguredObject.createAsync(AbstractConfiguredObject.java:865) > at > org.apache.qpid.server.model.AbstractConfiguredObjectTypeFactory.createAsync(AbstractConfiguredObjectTypeFactory.java:75) > at > org.apache.qpid.server.queue.QueueFactory.createAsync(QueueFactory.java:58) > at > org.apache.qpid.server.model.ConfiguredObjectFactoryImpl.createAsync(ConfiguredObjectFactoryImpl.java:145) > at > org.apache.qpid.server.model.AbstractConfiguredObject.addChildAsync(AbstractConfiguredObject.java:2143) > at > org.apache.qpid.server.virtualhost.AbstractVirtualHost.addChildAsync(AbstractVirtualHost.java:857) > at > org.apache.qpid.server.model.AbstractConfiguredObject$17.execute(AbstractConfiguredObject.java:2100) > at > 
org.apache.qpid.server.model.AbstractConfiguredObject$17.execute(AbstractConfiguredObject.java:2095) > at > org.apache.qpid.server.model.AbstractConfiguredObject$2.execute(AbstractConfiguredObject.java:637) > at > org.apache.qpid.server.model.AbstractConfiguredObject$2.execute(AbstractConfiguredObject.java:630) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl$TaskLoggingWrapper.execute(TaskExecutorImpl.java:248) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl$CallableWrapper$1.run(TaskExecutorImpl.java:320) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl$CallableWrapper.call(TaskExecutorImpl.java:313) > at > com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111) > at > com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58) > at > com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75) > at >
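The suggestion in the comment above, an exception that carries the unknown destination's name so callers need not re-derive it from a conditionally initialised local, can be sketched as follows. This is a hypothetical shape for illustration, not the actual Broker-J class:

```java
// Hedged sketch: the exception carries the unknown alternate binding's name,
// so handlers such as AMQChannel or ServerSessionDelegate can read it directly
// instead of parsing the message or tracking a separate local variable.
public class UnknownAlternateBindingException extends RuntimeException {
    private final String alternateBindingName;

    public UnknownAlternateBindingException(String alternateBindingName) {
        super(String.format("Alternate binding destination '%s' cannot be found",
                            alternateBindingName));
        this.alternateBindingName = alternateBindingName;
    }

    public String getAlternateBindingName() {
        return alternateBindingName;
    }

    public static void main(String[] args) {
        try {
            throw new UnknownAlternateBindingException("not_existing");
        } catch (UnknownAlternateBindingException e) {
            // The handler gets the name directly; no message parsing needed.
            System.out.println(e.getAlternateBindingName());
        }
    }
}
```

This mirrors the `UnknownConfiguredObjectException` pattern the comment refers to: the failure detail travels with the exception, so the protocol layer can turn it into the appropriate channel error without brittle local state.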
[jira] [Updated] (QPID-8060) [Broker-J] [AMQP 0-8..0-9-1] Declaring queue that specifies an alternate binding that does not exist crashes the Broker
[ https://issues.apache.org/jira/browse/QPID-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keith Wall updated QPID-8060: - Status: Open (was: Reviewable) > [Broker-J] [AMQP 0-8..0-9-1] Declaring queue that specifies an alternate > binding that does not exist crashes the Broker > --- > > Key: QPID-8060 > URL: https://issues.apache.org/jira/browse/QPID-8060 > Project: Qpid > Issue Type: Bug > Components: Broker-J >Affects Versions: qpid-java-broker-7.0.0 >Reporter: Alex Rudyy >Assignee: Alex Rudyy > Fix For: qpid-java-broker-7.0.1 > > > Declaring queue with not existing alternate binding crashes the Broker with > the following stack trace: > {noformat} > org.apache.qpid.server.configuration.IllegalConfigurationException: Cannot > create alternate binding for 'test' : Alternate binding destination > 'not_existing' cannot be found. > at > org.apache.qpid.server.queue.AbstractQueue.validateOrCreateAlternateBinding(AbstractQueue.java:3537) > at > org.apache.qpid.server.queue.AbstractQueue.onCreate(AbstractQueue.java:320) > at > org.apache.qpid.server.model.AbstractConfiguredObject.doCreation(AbstractConfiguredObject.java:1273) > at > org.apache.qpid.server.model.AbstractConfiguredObject$6.execute(AbstractConfiguredObject.java:893) > at > org.apache.qpid.server.model.AbstractConfiguredObject$6.execute(AbstractConfiguredObject.java:866) > at > org.apache.qpid.server.model.AbstractConfiguredObject$2.execute(AbstractConfiguredObject.java:637) > at > org.apache.qpid.server.model.AbstractConfiguredObject$2.execute(AbstractConfiguredObject.java:630) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl$TaskLoggingWrapper.execute(TaskExecutorImpl.java:248) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl.submitWrappedTask(TaskExecutorImpl.java:165) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl.submit(TaskExecutorImpl.java:153) > at > 
org.apache.qpid.server.model.AbstractConfiguredObject.doOnConfigThread(AbstractConfiguredObject.java:629) > at > org.apache.qpid.server.model.AbstractConfiguredObject.createAsync(AbstractConfiguredObject.java:865) > at > org.apache.qpid.server.model.AbstractConfiguredObjectTypeFactory.createAsync(AbstractConfiguredObjectTypeFactory.java:75) > at > org.apache.qpid.server.queue.QueueFactory.createAsync(QueueFactory.java:58) > at > org.apache.qpid.server.model.ConfiguredObjectFactoryImpl.createAsync(ConfiguredObjectFactoryImpl.java:145) > at > org.apache.qpid.server.model.AbstractConfiguredObject.addChildAsync(AbstractConfiguredObject.java:2143) > at > org.apache.qpid.server.virtualhost.AbstractVirtualHost.addChildAsync(AbstractVirtualHost.java:857) > at > org.apache.qpid.server.model.AbstractConfiguredObject$17.execute(AbstractConfiguredObject.java:2100) > at > org.apache.qpid.server.model.AbstractConfiguredObject$17.execute(AbstractConfiguredObject.java:2095) > at > org.apache.qpid.server.model.AbstractConfiguredObject$2.execute(AbstractConfiguredObject.java:637) > at > org.apache.qpid.server.model.AbstractConfiguredObject$2.execute(AbstractConfiguredObject.java:630) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl$TaskLoggingWrapper.execute(TaskExecutorImpl.java:248) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl$CallableWrapper$1.run(TaskExecutorImpl.java:320) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:360) > at > org.apache.qpid.server.configuration.updater.TaskExecutorImpl$CallableWrapper.call(TaskExecutorImpl.java:313) > at > com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111) > at > com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58) > at > 
com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.qpid.server.bytebuffer.QpidByteBufferFactory.lambda$null$0(QpidByteBufferFactory.java:464) > at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Client code:
> {code}
> Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
> AMQSession amqSession = (AMQSession)session;
> final Map<String, Object> arguments = new HashMap<>();
>
[jira] [Updated] (QPID-8060) [Broker-J] [AMQP 0-8..0-9-1] Declaring queue that specifies an alternate binding that does not exist crashes the Broker
[ https://issues.apache.org/jira/browse/QPID-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keith Wall updated QPID-8060: - Summary: [Broker-J] [AMQP 0-8..0-9-1] Declaring queue that specifies an alternate binding that does not exist crashes the Broker (was: [Broker-J] [AMQP 0-8..0-9-1] Declaring queue with not existing alternate binding crashes the Broker)
> [Broker-J] [AMQP 0-8..0-9-1] Declaring queue that specifies an alternate binding that does not exist crashes the Broker
> Key: QPID-8060 > Project: Qpid > Components: Broker-J > Affects Versions: qpid-java-broker-7.0.0 > Fix For: qpid-java-broker-7.0.1
[jira] [Commented] (QPID-8058) [Broker-J][AMQP 1.0] Broker does not respond to drain request from consumer of management temporary destination
[ https://issues.apache.org/jira/browse/QPID-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301644#comment-16301644 ] Keith Wall commented on QPID-8058: -- Looks okay to me; okay for porting.
> [Broker-J][AMQP 1.0] Broker does not respond to drain request from consumer of management temporary destination
> ---
>
> Key: QPID-8058
> URL: https://issues.apache.org/jira/browse/QPID-8058
> Project: Qpid
> Issue Type: Bug
> Components: Broker-J
> Affects Versions: qpid-java-broker-7.0.0
> Reporter: Alex Rudyy
> Assignee: Keith Wall
> Fix For: qpid-java-broker-7.0.1
>
> Attachments: TEST-org.apache.qpid.systest.management.amqp.AmqpManagementTest.testCreateQueueOnBrokerManagement.txt
>
>
> Test {{AmqpManagementTest.testCreateQueueOnBrokerManagement}} failed with the following error:
> {noformat}
> org.apache.qpid.jms.JmsOperationTimedOutException: Remote did not respond to a drain request in time
> {noformat}
> As per the exception message, the client sent a Flow with drain=true but the Broker did not reply to it:
> {noformat}
> 2017-12-05 18:20:13,950 DEBUG [IO-/127.0.0.1:48726] o.a.q.s.p.frame RECV[/127.0.0.1:48726|1] : Flow{nextIncomingId=0,incomingWindow=2047,nextOutgoingId=2,outgoingWindow=2147483647,handle=0,deliveryCount=0,linkCredit=1000,drain=true}
> {noformat}
> See the test log attached.
> Broker-J does not respond to the flow performative with the drain flag coming from consumers of temporary destinations created on management nodes ($management).
> The test attached a receiver link for a temporary destination created on the management node:
> {noformat}
> 2017-12-05 18:20:11,760 DEBUG [IO-/127.0.0.1:48726] o.a.q.s.p.frame RECV[/127.0.0.1:48726|1] : Attach{name=qpid-jms:receiver:ID:0ea0fec9-455c-47db-b15e-4fd09c0c48d6:1:1:1:TempQueuecdf2fb78-beda-4456-9e2c-dd2dd4d9a53e,handle=0,role=receiver,sndSettleMode=unsettled,rcvSettleMode=first,source=Source{address=TempQueuecdf2fb78-beda-4456-9e2c-dd2dd4d9a53e,durable=none,expiryPolicy=link-detach,timeout=0,dynamic=false,defaultOutcome=Modified{deliveryFailed=true},outcomes=[amqp:accepted:list, amqp:rejected:list, amqp:released:list, amqp:modified:list],capabilities=[temporary-queue]},target=Target{}}
> 2017-12-05 18:20:11,763 DEBUG [IO-/127.0.0.1:48726] o.a.q.s.p.frame SEND[/127.0.0.1:48726|1] : Attach{name=qpid-jms:receiver:ID:0ea0fec9-455c-47db-b15e-4fd09c0c48d6:1:1:1:TempQueuecdf2fb78-beda-4456-9e2c-dd2dd4d9a53e,handle=0,role=sender,sndSettleMode=unsettled,rcvSettleMode=first,source=Source{address=TempQueuecdf2fb78-beda-4456-9e2c-dd2dd4d9a53e,durable=none,expiryPolicy=link-detach,dynamic=false,defaultOutcome=Modified{deliveryFailed=true},outcomes=[amqp:accepted:list, amqp:released:list, amqp:rejected:list],capabilities=[]},target=Target{},unsettled={},initialDeliveryCount=0,offeredCapabilities=[SHARED-SUBS],properties={}}
> {noformat}
> The client granted credit:
> {noformat}
> 2017-12-05 18:20:11,765 DEBUG [IO-/127.0.0.1:48726] o.a.q.s.p.frame RECV[/127.0.0.1:48726|1] : Flow{nextIncomingId=0,incomingWindow=2047,nextOutgoingId=1,outgoingWindow=2147483647,handle=0,deliveryCount=0,linkCredit=1000}
> {noformat}
> The Broker sent a Transfer:
> {noformat}
> 2017-12-05 18:20:13,949 DEBUG [IO-/127.0.0.1:48726] o.a.q.s.p.frame SEND[/127.0.0.1:48726|1] : Transfer{handle=0,deliveryId=0,deliveryTag=\x00\x00\x00\x00\x00\x00\x00\x00,messageFormat=0}
> {noformat}
> and immediately after the transfer received a Flow with drain=true:
> {noformat}
> 2017-12-05 18:20:13,950 DEBUG
[IO-/127.0.0.1:48726] o.a.q.s.p.frame RECV[/127.0.0.1:48726|1] : Flow{nextIncomingId=0,incomingWindow=2047,nextOutgoingId=2,outgoingWindow=2147483647,handle=0,deliveryCount=0,linkCredit=1000,drain=true}
> {noformat}
> There were no messages left on the message source, but the broker did not send a flow back.
> The existing implementation of {{ManagementNode#pullMessage}} does not notify its {{ConsumerTarget}} when there are no messages left to send.
> Only users of AMQP management are affected by the bug.
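Under AMQP 1.0 flow control, a sender that receives Flow{drain=true} and has nothing left to send must advance its delivery-count by the remaining link-credit, zero the credit, and echo a Flow so the receiver's drain request can complete, which is exactly what is missing in the log above. A minimal sketch of that bookkeeping (invented names, not the Broker's ManagementNode/ConsumerTarget code):

```java
// Hedged sketch of the drain response a sender owes its receiver when
// Flow{drain=true} arrives and the message source is empty.
public class DrainSketch {
    long deliveryCount = 1;      // one transfer already sent, as in the log
    long linkCredit = 999;       // credit remaining after that transfer
    boolean flowEchoed = false;  // have we answered the drain request?

    void handleDrain(boolean sourceEmpty) {
        if (sourceEmpty) {
            deliveryCount += linkCredit; // consume all outstanding credit
            linkCredit = 0;
            flowEchoed = true;           // echo a Flow; the client unblocks
        }
    }

    public static void main(String[] args) {
        DrainSketch s = new DrainSketch();
        s.handleDrain(true);
        // deliveryCount advances to 1000, credit drops to 0, Flow is echoed
        System.out.println(s.deliveryCount + " " + s.linkCredit + " " + s.flowEchoed);
    }
}
```

Without the echoed Flow, the client's drain call has no way to learn that the credit was consumed, which is why qpid-jms times out with `JmsOperationTimedOutException`.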
[jira] [Resolved] (QPID-8057) [JMS AMQP 0-x][AMQP 0-10] IoReceiver thread can block itself for 60 seconds after sending session close controls and waiting for broker responses
[ https://issues.apache.org/jira/browse/QPID-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keith Wall resolved QPID-8057. -- Resolution: Fixed
> [JMS AMQP 0-x][AMQP 0-10] IoReceiver thread can block itself for 60 seconds after sending session close controls and waiting for broker responses
> -
>
> Key: QPID-8057
> URL: https://issues.apache.org/jira/browse/QPID-8057
> Project: Qpid
> Issue Type: Bug
> Components: JMS AMQP 0-x
> Affects Versions: qpid-java-6.0.8, qpid-java-client-0-x-6.3.0, qpid-java-6.1.5
> Reporter: Alex Rudyy
> Priority: Minor
> Fix For: qpid-java-client-0-x-6.3.1
>
> Attachments: TEST-org.apache.qpid.test.unit.transacted.TransactionTimeoutTest.testProducerOpenCommit.txt
>
>
> The IoReceiver thread can "deadlock" itself for 60 seconds after receiving an execution exception from the broker and sending session controls to close the connection's other sessions. This issue can occur in the following scenario:
> * the broker sends an execution exception
> * the client receives the execution exception and populates the field {{org.apache.qpid.client.AMQSession_0_10#_currentException}} as part of {{org.apache.qpid.client.AMQSession_0_10#setCurrentException}}. The field is set under the synchronization lock {{org.apache.qpid.client.AMQSession_0_10#_currentExceptionLock}}. However, the field is referenced later in {{org.apache.qpid.client.AMQSession_0_10#setCurrentException}} without the synchronization lock. It could be reset (set to null) from the application's main thread as part of a call to {{org.apache.qpid.client.AMQSession#checkNotClosed}} whilst the execution of {{org.apache.qpid.client.AMQSession_0_10#setCurrentException}} is still in progress. As a result, {{org.apache.qpid.client.AMQSession_0_10#_currentException}} can have a null value when the method {{org.apache.qpid.client.AMQConnection#exceptionReceived}} is invoked from {{org.apache.qpid.client.AMQSession_0_10#setCurrentException}}.
A "{{null}} > cause" is interpreted as a "hard error", causing the client to close the other sessions and the > connection from the IoReceiver thread. Whilst closing a session or connection, the > client waits for the broker responses; however, it cannot get them as the > IoReceiver thread is blocked. We should not wait for any broker responses in the > IoReceiver thread. > Thus, we have 2 issues here: > * {{org.apache.qpid.client.AMQSession_0_10#_currentException}} should not be > referenced without the synchronization lock > * the client should not wait for broker responses in the IoReceiver thread (i.e. > session or connection close should not be performed in the IoReceiver thread) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
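The first of the two issues above, a field written under a lock but re-read outside it, can be sketched in isolation. The class and method names below are simplified, hypothetical stand-ins for the {{AMQSession_0_10}} code, not the actual client source; the fix is to capture the field's value while still holding the lock:

```java
// Sketch of the QPID-8057 race; class and member names are hypothetical
// simplifications of the real client code.
class SessionExceptionHolder
{
    private final Object _currentExceptionLock = new Object();
    private Exception _currentException;

    // Broken pattern: the field is re-read outside the lock, so another
    // thread calling clearCurrentException() in between can make the
    // returned "cause" null.
    public Exception setAndGetRacy(Exception e)
    {
        synchronized (_currentExceptionLock)
        {
            _currentException = e;
        }
        return _currentException; // may already be null here
    }

    // Fixed pattern: capture the value while still holding the lock, so the
    // caller always sees the exception it just set.
    public Exception setAndGetSafe(Exception e)
    {
        synchronized (_currentExceptionLock)
        {
            _currentException = e;
            return _currentException;
        }
    }

    // Models the application thread resetting the field (as checkNotClosed
    // does in the report) concurrently with the IoReceiver thread.
    public void clearCurrentException()
    {
        synchronized (_currentExceptionLock)
        {
            _currentException = null;
        }
    }
}
```

The second issue (not waiting for broker responses on the IoReceiver thread) is a threading design change rather than something a small snippet can show.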
[jira] [Resolved] (QPID-8027) [Broker-J][AMQP 0-8..0-9-1] Receiving BasicAck with unknown tag crashes the broker
[ https://issues.apache.org/jira/browse/QPID-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keith Wall resolved QPID-8027. -- Resolution: Fixed > [Broker-J][AMQP 0-8..0-9-1] Receiving BasicAck with unknown tag crashes the > broker > -- > > Key: QPID-8027 > URL: https://issues.apache.org/jira/browse/QPID-8027 > Project: Qpid > Issue Type: Bug > Components: Broker-J >Affects Versions: qpid-java-broker-7.0.0 >Reporter: Alex Rudyy >Assignee: Keith Wall >Priority: Minor > Fix For: qpid-java-broker-7.0.1 > > > On receiving basic.ack with an unknown delivery tag, an NPE is thrown which in turn kills > the broker. Here is the stack trace for the issue: > {noformat} > > # > # Unhandled Exception java.lang.NullPointerException in Thread > IO-/127.0.0.1:50573 > # > # Exiting > # > > java.lang.NullPointerException > at > org.apache.qpid.server.protocol.v0_8.UnacknowledgedMessageMapImpl.acknowledge(UnacknowledgedMessageMapImpl.java:183) > at > org.apache.qpid.server.protocol.v0_8.AMQChannel.receiveBasicAck(AMQChannel.java:1738) > at > org.apache.qpid.server.protocol.v0_8.transport.BasicAckBody.process(BasicAckBody.java:119) > at > org.apache.qpid.server.protocol.v0_8.ServerDecoder.processMethod(ServerDecoder.java:194) > at > org.apache.qpid.server.protocol.v0_8.AMQDecoder.processFrame(AMQDecoder.java:203) > at > org.apache.qpid.server.protocol.v0_8.BrokerDecoder.doProcessFrame(BrokerDecoder.java:141) > at > org.apache.qpid.server.protocol.v0_8.BrokerDecoder.processFrame(BrokerDecoder.java:65) > at > org.apache.qpid.server.protocol.v0_8.AMQDecoder.processInput(AMQDecoder.java:185) > at > org.apache.qpid.server.protocol.v0_8.BrokerDecoder$1.run(BrokerDecoder.java:104) > at > org.apache.qpid.server.protocol.v0_8.BrokerDecoder$1.run(BrokerDecoder.java:97) > at java.security.AccessController.doPrivileged(Native Method) > at > org.apache.qpid.server.protocol.v0_8.BrokerDecoder.processAMQPFrames(BrokerDecoder.java:96) > at > 
org.apache.qpid.server.protocol.v0_8.AMQDecoder.decode(AMQDecoder.java:118) > at > org.apache.qpid.server.protocol.v0_8.ServerDecoder.decodeBuffer(ServerDecoder.java:44) > at > org.apache.qpid.server.protocol.v0_8.AMQPConnection_0_8Impl$1.run(AMQPConnection_0_8Impl.java:254) > at > org.apache.qpid.server.protocol.v0_8.AMQPConnection_0_8Impl$1.run(AMQPConnection_0_8Impl.java:246) > at java.security.AccessController.doPrivileged(Native Method) > at > org.apache.qpid.server.protocol.v0_8.AMQPConnection_0_8Impl.received(AMQPConnection_0_8Impl.java:245) > at > org.apache.qpid.server.transport.MultiVersionProtocolEngine.received(MultiVersionProtocolEngine.java:134) > at > org.apache.qpid.server.transport.NonBlockingConnection.processAmqpData(NonBlockingConnection.java:610) > at > org.apache.qpid.server.transport.NonBlockingConnectionPlainDelegate.processData(NonBlockingConnectionPlainDelegate.java:58) > at > org.apache.qpid.server.transport.NonBlockingConnection.doRead(NonBlockingConnection.java:496) > at > org.apache.qpid.server.transport.NonBlockingConnection.doWork(NonBlockingConnection.java:270) > at > org.apache.qpid.server.transport.NetworkConnectionScheduler.processConnection(NetworkConnectionScheduler.java:134) > at > org.apache.qpid.server.transport.SelectorThread$ConnectionProcessor.processConnection(SelectorThread.java:570) > at > org.apache.qpid.server.transport.SelectorThread$SelectionTask.performSelect(SelectorThread.java:361) > at > org.apache.qpid.server.transport.SelectorThread$SelectionTask.run(SelectorThread.java:97) > at > org.apache.qpid.server.transport.SelectorThread.run(SelectorThread.java:528) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.qpid.server.bytebuffer.QpidByteBufferFactory.lambda$null$0(QpidByteBufferFactory.java:464) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was 
sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
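The defensive pattern the fix implies can be sketched as follows. {{UnackedMessageMap}} and its methods are hypothetical simplifications of {{UnacknowledgedMessageMapImpl}}, not the broker's actual code; the point is that an unknown delivery tag must be handled explicitly instead of being dereferenced:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplification of the broker's unacknowledged-message map:
// an ack for an unknown delivery tag is tolerated rather than triggering an
// NPE that escapes the IO thread and takes the broker down.
class UnackedMessageMap
{
    private final Map<Long, String> _map = new HashMap<>();

    public void add(long deliveryTag, String message)
    {
        _map.put(deliveryTag, message);
    }

    // Returns the acknowledged message, or null for an unknown tag; a real
    // broker would surface the unknown tag as a channel/connection error.
    public String acknowledge(long deliveryTag)
    {
        String entry = _map.remove(deliveryTag);
        if (entry == null)
        {
            return null; // unknown tag: handled explicitly, not dereferenced
        }
        return entry;
    }
}
```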
[jira] [Commented] (QPID-8027) [Broker-J][AMQP 0-8..0-9-1] Receiving BasicAck with unknown tag crashes the broker
[ https://issues.apache.org/jira/browse/QPID-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301628#comment-16301628 ] Keith Wall commented on QPID-8027: -- Looks reasonable to me. > [Broker-J][AMQP 0-8..0-9-1] Receiving BasicAck with unknown tag crashes the > broker > -- > > Key: QPID-8027 > URL: https://issues.apache.org/jira/browse/QPID-8027 > Project: Qpid > Issue Type: Bug > Components: Broker-J >Affects Versions: qpid-java-broker-7.0.0 >Reporter: Alex Rudyy >Assignee: Keith Wall >Priority: Minor > Fix For: qpid-java-broker-7.0.1
[jira] [Updated] (QPID-8047) [Broker-J][AMQP 0-10] NPE on receiving session.detach for unknown session
[ https://issues.apache.org/jira/browse/QPID-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keith Wall updated QPID-8047: - Priority: Minor (was: Major) > [Broker-J][AMQP 0-10] NPE on receiving session.detach for unknown session > - > > Key: QPID-8047 > URL: https://issues.apache.org/jira/browse/QPID-8047 > Project: Qpid > Issue Type: Bug > Components: Broker-J >Affects Versions: qpid-java-broker-7.0.0 >Reporter: Alex Rudyy >Assignee: Keith Wall >Priority: Minor > Fix For: qpid-java-broker-7.0.1 > > > The following NPE is reported on receiving session.detach for an unknown session: > {noformat} > java.lang.NullPointerException > at > org.apache.qpid.server.protocol.v0_10.ServerConnectionDelegate.stopAllSubscriptions(ServerConnectionDelegate.java:377) > at > org.apache.qpid.server.protocol.v0_10.ServerConnectionDelegate.sessionDetach(ServerConnectionDelegate.java:366) > at > org.apache.qpid.server.protocol.v0_10.ServerConnectionDelegate.sessionDetach(ServerConnectionDelegate.java:52) > at > org.apache.qpid.server.protocol.v0_10.transport.SessionDetach.dispatch(SessionDetach.java:90) > at > org.apache.qpid.server.protocol.v0_10.ServerConnectionDelegate.control(ServerConnectionDelegate.java:97) > at > org.apache.qpid.server.protocol.v0_10.ServerConnectionDelegate.control(ServerConnectionDelegate.java:52) > at > org.apache.qpid.server.protocol.v0_10.transport.Method.delegate(Method.java:157) > at > org.apache.qpid.server.protocol.v0_10.ServerConnection.received(ServerConnection.java:279) > at > org.apache.qpid.server.protocol.v0_10.ServerAssembler.emit(ServerAssembler.java:178) > at > org.apache.qpid.server.protocol.v0_10.ServerAssembler.assemble(ServerAssembler.java:256) > at > org.apache.qpid.server.protocol.v0_10.ServerAssembler.frame(ServerAssembler.java:205) > at > org.apache.qpid.server.protocol.v0_10.ServerAssembler.received(ServerAssembler.java:135) > at > org.apache.qpid.server.protocol.v0_10.ServerAssembler.received(ServerAssembler.java:109) 
> at > org.apache.qpid.server.protocol.v0_10.ServerInputHandler.received(ServerInputHandler.java:184) > at > org.apache.qpid.server.protocol.v0_10.AMQPConnection_0_10Impl.lambda$received$1(AMQPConnection_0_10Impl.java:146) > at java.security.AccessController.doPrivileged(Native Method) > at > org.apache.qpid.server.protocol.v0_10.AMQPConnection_0_10Impl.received(AMQPConnection_0_10Impl.java:141) > at > org.apache.qpid.server.transport.MultiVersionProtocolEngine.received(MultiVersionProtocolEngine.java:134) > at > org.apache.qpid.server.transport.NonBlockingConnection.processAmqpData(NonBlockingConnection.java:610) > at > org.apache.qpid.server.transport.NonBlockingConnectionPlainDelegate.processData(NonBlockingConnectionPlainDelegate.java:58) > at > org.apache.qpid.server.transport.NonBlockingConnection.doRead(NonBlockingConnection.java:496) > at > org.apache.qpid.server.transport.NonBlockingConnection.doWork(NonBlockingConnection.java:270) > at > org.apache.qpid.server.transport.NetworkConnectionScheduler.processConnection(NetworkConnectionScheduler.java:134) > at > org.apache.qpid.server.transport.SelectorThread$ConnectionProcessor.processConnection(SelectorThread.java:570) > at > org.apache.qpid.server.transport.SelectorThread$SelectionTask.performSelect(SelectorThread.java:361) > at > org.apache.qpid.server.transport.SelectorThread$SelectionTask.run(SelectorThread.java:97) > at > org.apache.qpid.server.transport.SelectorThread.run(SelectorThread.java:528) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at > org.apache.qpid.server.bytebuffer.QpidByteBufferFactory.lambda$null$0(QpidByteBufferFactory.java:464) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
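A minimal sketch of the defensive handling the trace implies: the session looked up for {{session.detach}} may be absent and must be null-checked before anything like {{stopAllSubscriptions}} is attempted. {{ConnectionSessions}} and its methods are hypothetical simplifications, not the broker's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplification of a connection's session registry: detaching
// an unknown session is a tolerated no-op rather than an NPE.
class ConnectionSessions
{
    private final Map<String, Object> _sessions = new HashMap<>();

    public void attach(String name)
    {
        _sessions.put(name, new Object());
    }

    // Returns true if a session was detached; an unknown name is ignored
    // here (a real broker would report a session/connection error instead).
    public boolean sessionDetach(String name)
    {
        Object session = _sessions.remove(name);
        if (session == null)
        {
            return false; // unknown session: no subscription stop attempted
        }
        // a real broker would stop subscriptions and send session.detached
        return true;
    }
}
```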
[jira] [Resolved] (QPID-8047) [Broker-J][AMQP 0-10] NPE on receiving session.detach for unknown session
[ https://issues.apache.org/jira/browse/QPID-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keith Wall resolved QPID-8047. -- Resolution: Fixed > [Broker-J][AMQP 0-10] NPE on receiving session.detach for unknown session > - > > Key: QPID-8047 > URL: https://issues.apache.org/jira/browse/QPID-8047 > Project: Qpid > Issue Type: Bug > Components: Broker-J >Affects Versions: qpid-java-broker-7.0.0 >Reporter: Alex Rudyy >Assignee: Keith Wall >Priority: Minor > Fix For: qpid-java-broker-7.0.1
[jira] [Commented] (QPID-6933) Factor out a JMS client neutral messaging test suite from system tests
[ https://issues.apache.org/jira/browse/QPID-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301426#comment-16301426 ] ASF subversion and git services commented on QPID-6933: --- Commit 38a8f177ade12b82f3bae3d0cc4f1b80d6376f4f in qpid-broker-j's branch refs/heads/master from [~k-wall] [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=38a8f17 ] QPID-6933: [System Tests] Remove more redundant/poorly focused tests > Factor out a JMS client neutral messaging test suite from system tests > -- > > Key: QPID-6933 > URL: https://issues.apache.org/jira/browse/QPID-6933 > Project: Qpid > Issue Type: Improvement > Components: Java Tests >Reporter: Keith Wall >Assignee: Alex Rudyy > > The existing system test suite is in a poor state. > * It is poorly structured. > * It mixes different types of test, i.e. messaging behaviour with those that > test features of the Java Broker (e.g. REST). > * Many of the tests refer directly to the implementation classes of the Qpid > 0-8..0-10 client, meaning the tests cannot be run using the new client. > As a first step, we want to factor out a separate messaging system test suite: > * The tests in this suite will be JMS client neutral. > * They will be written in terms of the JNDI/JMS client. > * Configuration/Broker observations will be performed via a clean > Broker-neutral facade, for instance: > ** a mechanism to cause a queue of a particular type to be created; > ** a mechanism to observe a queue depth. > * The tests will be classified by feature (to be defined). > * The classification system will be used to drive an exclusion feature (to be > defined).
[jira] [Commented] (QPID-6933) Factor out a JMS client neutral messaging test suite from system tests
[ https://issues.apache.org/jira/browse/QPID-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301427#comment-16301427 ] ASF subversion and git services commented on QPID-6933: --- Commit a2a1c965fdae4f3c49b4c87aa38326462c3c297b in qpid-broker-j's branch refs/heads/master from [~k-wall] [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=a2a1c96 ] QPID-6933: [System Tests] Refactor queue browser tests as JMS 1.1 system test
[jira] [Commented] (QPID-8038) [Broker-J][System Tests] Add protocol system test suites for AMQP 0-x
[ https://issues.apache.org/jira/browse/QPID-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301213#comment-16301213 ] ASF subversion and git services commented on QPID-8038: --- Commit 6e99e868de1efde922351f6c4dfcd3cafb89b9a5 in qpid-broker-j's branch refs/heads/master from [~k-wall] [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=6e99e86 ] QPID-8038: [Broker-J] [AMQP 0-x] Add qos bytes protocol test > [Broker-J][System Tests] Add protocol system test suites for AMQP 0-x > - > > Key: QPID-8038 > URL: https://issues.apache.org/jira/browse/QPID-8038 > Project: Qpid > Issue Type: Improvement > Components: Broker-J, Java Tests >Reporter: Alex Rudyy > > We need a test framework which allows creation of tests that send AMQP 0-x > performatives over TCP and receive and assert the broker's responses. > The framework should satisfy the following requirements: > * It should allow running tests against other AMQP brokers. > * It should encapsulate starting/stopping of the broker and queue > creation/deletion behind special interface(s) which can be implemented by > Broker developers in order to run tests against different Broker > implementations. > * Tests should be able to start and stop the broker if required or configured. > * Tests should be able to generate AMQP performatives and assert the > peer's received AMQP performatives. > * The framework should allow using a transport other than TCP if required. > * The framework should be based on the AMQP 0-x implementations of Broker-J.
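The style of test described above (write raw protocol bytes to a TCP socket, then assert the peer's reply) can be sketched end to end. Everything here is hypothetical illustration: a trivial in-process stub stands in for the broker and simply echoes the client's protocol header, and the header bytes are assumed for AMQP 0-10 ("AMQP" followed by protocol-id, instance, major, minor):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;

// Hypothetical sketch of a raw-TCP protocol test: send bytes, assert reply.
class ProtocolHeaderExchange
{
    // Assumed AMQP 0-10 protocol header bytes for illustration.
    static final byte[] HEADER = {'A', 'M', 'Q', 'P', 1, 1, 0, 10};

    private static byte[] readFully(InputStream in, int len) throws Exception
    {
        byte[] buf = new byte[len];
        int read = 0;
        while (read < len)
        {
            int n = in.read(buf, read, len - read);
            if (n < 0)
            {
                throw new IllegalStateException("peer closed the connection");
            }
            read += n;
        }
        return buf;
    }

    public static boolean exchangeHeader()
    {
        try (ServerSocket stubBroker = new ServerSocket(0))
        {
            Thread stub = new Thread(() ->
            {
                try (Socket s = stubBroker.accept())
                {
                    // stub broker: read the client's header and echo it back
                    byte[] header = readFully(s.getInputStream(), HEADER.length);
                    s.getOutputStream().write(header);
                    s.getOutputStream().flush();
                }
                catch (Exception ignored)
                {
                }
            });
            stub.start();

            try (Socket client = new Socket("127.0.0.1", stubBroker.getLocalPort()))
            {
                OutputStream out = client.getOutputStream();
                out.write(HEADER);      // send the performative bytes
                out.flush();
                byte[] reply = readFully(client.getInputStream(), HEADER.length);
                stub.join();
                return Arrays.equals(reply, HEADER); // assert the peer's reply
            }
        }
        catch (Exception e)
        {
            return false;
        }
    }
}
```

In the real framework the stub would be replaced by the interface-driven broker lifecycle the issue describes, so the same test could run against Broker-J or another AMQP broker.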
[jira] [Commented] (QPID-6933) Factor out a JMS client neutral messaging test suite from system tests
[ https://issues.apache.org/jira/browse/QPID-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301215#comment-16301215 ] ASF subversion and git services commented on QPID-6933: --- Commit ac25f57435c352c6e835dbe8227339bde5a5b112 in qpid-broker-j's branch refs/heads/master from [~k-wall] [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=ac25f57 ] QPID-6933: [System Tests] Refactor InvalidDestinationTest as a JMS 1.1 system tests
[jira] [Commented] (QPID-6933) Factor out a JMS client neutral messaging test suite from system tests
[ https://issues.apache.org/jira/browse/QPID-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16301214#comment-16301214 ] ASF subversion and git services commented on QPID-6933: --- Commit 051b2d078786b455cf84464d8cc33fe73ab1fd70 in qpid-broker-j's branch refs/heads/master from [~k-wall] [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=051b2d0 ] QPID-6933: [System Tests] Remove 0-8 ConsumerFlowControlTest - reimplemented as 0-8 protocol test