Re: [java] Test result inconsistencies?
On Dec 11, 2006, at 6:27 PM, Martin Ritchie wrote:
> On 11/12/06, Steve Vinoski <[EMAIL PROTECTED]> wrote:
>> On Dec 11, 2006, at 2:58 AM, Gordon Sim wrote:
>>> Steve Vinoski wrote:
>>>> When I raise the testing issue in this list, as I've already done on numerous occasions, I usually get next to nothing in terms of feedback. Do others share my concerns about this whole testing issue? Do we as a group agree on the value of overhauling and enhancing our tests?
>>>
>>> Yes, I agree that more (and better organised) tests would be a very good thing.
>>
>> Thanks Gordon, Alan, Rajith, and Jim for your responses. Just so I'm clear on this: the java/client/src/test/java/org/apache/qpid directory currently contains all of the following: IBMPerfTest client cluster codec config cts example flow fragmentation framing headers jndi latency mina multiconsumer ping pubsub1 requestreply1 test testutil topic transacted weblogic
>>
>> Of all these, only specific tests under the test/unit subdirectory are currently being executed as unit tests, as specified by the includes/excludes configured for Surefire in client/pom.xml. What I'm proposing as a first step is moving everything listed here except for the test subdirectory into a temporary src/old_test directory, also under client, keeping the full package name for each test of course. That way, I can remove the includes/excludes from the Surefire configuration in the pom without us losing anything. I would also do something similar for the systests directory. Any reason why I shouldn't do this? Speak now...
>>
>> --steve
>
> This would be great. I've been meaning to do this since we created QPID-39 (https://issues.apache.org/jira/browse/QPID-39) just haven't got down the list of bugs far enough :D

Thanks for that pointer, Martin -- I didn't see QPID-39, so I'll add some comments to it.

I've gone ahead and reorganized the client and systests as I explained earlier. With this test reorg, all the client tests now pass under Eclipse. There appears to be one failure under Eclipse in the systests, however, which I'll look into.

thanks,
--steve
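For reference, the kind of move Steve describes (relocating whole test packages while preserving their package-path directory structure) looks roughly like the sketch below. It runs against a throwaway toy tree, and the two package names are just a small subset of the real list; an actual run would use svn mv on the java/client checkout instead of plain mv.

```shell
# Sketch: move all non-"test" packages out of the live test tree into
# src/old_test, preserving each package's full directory path. The toy
# tree here stands in for the real java/client layout.
set -e
cd "$(mktemp -d)"

SRC=src/test/java/org/apache/qpid
DST=src/old_test/java/org/apache/qpid

# Build a toy version of the current layout.
mkdir -p "$SRC/codec" "$SRC/framing" "$SRC/test/unit"
touch "$SRC/codec/CodecTest.java" "$SRC/framing/FramingTest.java"

# Move everything except the "test" subdirectory, keeping package paths.
mkdir -p "$DST"
for d in "$SRC"/*/; do
    name=$(basename "$d")
    [ "$name" = "test" ] && continue
    mv "$d" "$DST/$name"
done

ls "$DST"    # the moved packages: codec framing
ls "$SRC"    # only "test" remains in the live tree
```

Because the full package path moves along with each directory, the sources compile unchanged once src/old_test is added as (or excluded from) a source root.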
Re: Re: [java] Test result inconsistencies?
On 11/12/06, Steve Vinoski <[EMAIL PROTECTED]> wrote:
> On Dec 11, 2006, at 2:58 AM, Gordon Sim wrote:
>> Steve Vinoski wrote:
>>> When I raise the testing issue in this list, as I've already done on numerous occasions, I usually get next to nothing in terms of feedback. Do others share my concerns about this whole testing issue? Do we as a group agree on the value of overhauling and enhancing our tests?
>>
>> Yes, I agree that more (and better organised) tests would be a very good thing.
>
> Thanks Gordon, Alan, Rajith, and Jim for your responses. Just so I'm clear on this: the java/client/src/test/java/org/apache/qpid directory currently contains all of the following: IBMPerfTest client cluster codec config cts example flow fragmentation framing headers jndi latency mina multiconsumer ping pubsub1 requestreply1 test testutil topic transacted weblogic
>
> Of all these, only specific tests under the test/unit subdirectory are currently being executed as unit tests, as specified by the includes/excludes configured for Surefire in client/pom.xml. What I'm proposing as a first step is moving everything listed here except for the test subdirectory into a temporary src/old_test directory, also under client, keeping the full package name for each test of course. That way, I can remove the includes/excludes from the Surefire configuration in the pom without us losing anything. I would also do something similar for the systests directory. Any reason why I shouldn't do this? Speak now...
>
> --steve

This would be great. I've been meaning to do this since we created QPID-39 (https://issues.apache.org/jira/browse/QPID-39) just haven't got down the list of bugs far enough :D

--
Martin Ritchie
Re: [java] Test result inconsistencies?
On Dec 11, 2006, at 2:58 AM, Gordon Sim wrote:
> Steve Vinoski wrote:
>> When I raise the testing issue in this list, as I've already done on numerous occasions, I usually get next to nothing in terms of feedback. Do others share my concerns about this whole testing issue? Do we as a group agree on the value of overhauling and enhancing our tests?
>
> Yes, I agree that more (and better organised) tests would be a very good thing.

Thanks Gordon, Alan, Rajith, and Jim for your responses. Just so I'm clear on this: the java/client/src/test/java/org/apache/qpid directory currently contains all of the following:

IBMPerfTest client cluster codec config cts example flow fragmentation framing headers jndi latency mina multiconsumer ping pubsub1 requestreply1 test testutil topic transacted weblogic

Of all these, only specific tests under the test/unit subdirectory are currently being executed as unit tests, as specified by the includes/excludes configured for Surefire in client/pom.xml. What I'm proposing as a first step is moving everything listed here except for the test subdirectory into a temporary src/old_test directory, also under client, keeping the full package name for each test of course. That way, I can remove the includes/excludes from the Surefire configuration in the pom without us losing anything. I would also do something similar for the systests directory. Any reason why I shouldn't do this? Speak now...

--steve
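The includes/excludes mechanism Steve refers to looks roughly like this in a Maven 2 pom. The element names follow the Surefire plugin's configuration schema, but the concrete patterns below are illustrative, not Qpid's actual settings:

```xml
<!-- Sketch of a Surefire configuration that runs only test/unit tests.
     Removing <includes>/<excludes> makes Surefire fall back to its
     default Test-name patterns over the whole test source tree, which
     is why the non-running tests must be moved out first. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/test/unit/**/*Test.java</include>
    </includes>
    <excludes>
      <exclude>**/old_test/**</exclude>
    </excludes>
  </configuration>
</plugin>
```

This also explains the Eclipse discrepancy discussed later in the thread: Eclipse's JUnit runner knows nothing about Surefire's includes/excludes, so it tries to run everything on the test source path.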
Re: [java] Test result inconsistencies?
On Mon, 2006-12-11 at 17:56 +0100, Jim Meyering wrote:
> Just a minor correction: creating precise and portable tests *can* be quite onerous, in that preparing good tests takes time and energy. But it is well worth the effort. Note however, that sometimes it is very hard to test for a fix.

I agree there are certainly cases that you can't easily unit test. For example, if you have to run some program continuously for hours to get it to core because of some nasty race, it clearly isn't reasonable to add a unit test that runs for days! On the other hand, it would be very reasonable to create a stress test suite that is periodically run for days at a time in an attempt to catch leaks, races and the like.

It's also true that trying to retroactively add tests to a system that grew up without them can be very painful, which is why I think it's particularly important to harp on this issue now for a young project like Qpid.

So perhaps I made too sweeping a statement, but I stand by the contention that 90% of the time there is no excuse for a commit that causes a net reduction in the project's test coverage.

Cheers,
Alan.
Re: [java] Test result inconsistencies?
Alan Conway <[EMAIL PROTECTED]> wrote:
...
> I would go further and strongly suggest that no-one should *ever* commit new functionality or fixes without tests. The exceptions would be a fix that makes an existing failing test pass, and pure refactoring work that isn't supposed to change the behavior of the system. (And even then you usually find that the existing tests need a bit more coverage as you go.)
>
> It's not an onerous requirement: you *always* test your code before you commit it, don't you? Nobody *ever* just says "hey it compiles, and it's trivial and obviously works" - right?

Well, I confess that I've been known to do that. But I have to admit I've ended up regretting it once or twice, too.

Just a minor correction: creating precise and portable tests *can* be quite onerous, in that preparing good tests takes time and energy. But it is well worth the effort. Note however, that sometimes it is very hard to test for a fix. Here's an actual example that I'm contending with, as coreutils maintainer: a patch for a race condition bug from just last week:

  patch for cp -p to fix race condition with temporary permissions
  http://article.gmane.org/gmane.comp.gnu.core-utils.bugs/9103

but to detect the problem, you have to observe the permissions of the file in question in the brief interval between cp's open syscall and a subsequent chmod. One way to verify the patch is to strace a specific cp command and examine the order of the syscalls, but using strace is not portable, and its output is not easy to compare or parse. And besides, you can't reliably determine the permissions on a just-created file via syscall output.

These days, I rarely check in a bug fix or new feature without at least some minimal test to exercise it, but in this case, it was a challenge[*]. In fact, I haven't yet added the test, but at least, I now know how I'll do it.

The test bourne shell script will have to resort to using gdb in batch mode, setting a break point and quitting just after the open syscall that creates the file in question. Then, the bourne shell continues on to verify the permissions on that file. Of course, this means the binary must now have debugging symbols in order for this test to work, and gdb must be available. But it's worth it.

[*] If I'd been willing to change cp's copying engine to make it more testable, it would have been a lot easier. E.g., add a testing-only option to make it exit right after the specified open syscall. But infrastructure like that can make the code harder to maintain.
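The gdb-in-batch-mode plan Jim describes could be sketched roughly as below. Everything here is hypothetical scaffolding, not the actual coreutils test: the breakpoint name `open_dest_file` stands in for wherever cp's destination-creating open actually happens, and the gdb invocation is shown only in comments. The sketch itself just generates the gdb command file the shell test would drive.

```shell
# Sketch of the gdb-batch-mode test Jim describes. "open_dest_file" is a
# hypothetical stand-in for the spot just after cp's open syscall that
# creates the destination file.
cat > gdb.cmds <<'EOF'
# Stop while the destination exists with its temporary permissions.
break open_dest_file
run --preserve=mode src dst
EOF

# The real bourne shell test would then do something like:
#   gdb --batch -x gdb.cmds ./cp &
#   ...wait for the breakpoint to hit, then inspect the race window:
#   perms=$(ls -ld dst | cut -c1-10)
# and fail if the temporary permissions are wider than expected.
# This sketch only generates and shows the command file:
cat gdb.cmds
```

As Jim notes, this buys race-window visibility at the cost of requiring a debug build and a gdb installation wherever the test runs.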
Re: [java] Test result inconsistencies?
+1 Alan, well said.

Regards,
Rajith

On 12/11/06, Alan Conway <[EMAIL PROTECTED]> wrote:
> On Mon, 2006-12-11 at 07:58 +0000, Gordon Sim wrote:
> > Steve Vinoski wrote:
> > > When I raise the testing issue in this list, as I've already done on numerous occasions, I usually get next to nothing in terms of feedback. Do others share my concerns about this whole testing issue? Do we as a group agree on the value of overhauling and enhancing our tests?
> >
> > Yes, I agree that more (and better organised) tests would be a very good thing.
>
> I would go further and strongly suggest that no-one should *ever* commit new functionality or fixes without tests. The exceptions would be a fix that makes an existing failing test pass, and pure refactoring work that isn't supposed to change the behavior of the system. (And even then you usually find that the existing tests need a bit more coverage as you go.)
>
> It's not an onerous requirement: you *always* test your code before you commit it, don't you? Nobody *ever* just says "hey it compiles, and it's trivial and obviously works" - right? So all I'm saying is, don't keep those tests to yourself; they're worth a lot more if you share them. Testing with your own private tests that never see the mainline is almost worthless. Private tests don't protect against regression, don't provide measurable coverage, never get run in an environment other than your own, don't provide a test base that can be extended as the code evolves, and don't help other people understand what your code is supposed to do. Measure all these benefits against the tiny amount of effort required to put the tests you would do anyway into a form that fits the project's automated frameworks. There's no question it pays off manifold.
>
> Of course it's always a matter of judgment and debate as to exactly how much testing is the right amount. But there's no question that it is simply irresponsible to commit code without any tests.
>
> Cheers,
> Alan.
Re: [java] Test result inconsistencies?
On Mon, 2006-12-11 at 07:58 +0000, Gordon Sim wrote:
> Steve Vinoski wrote:
> > When I raise the testing issue in this list, as I've already done on numerous occasions, I usually get next to nothing in terms of feedback. Do others share my concerns about this whole testing issue? Do we as a group agree on the value of overhauling and enhancing our tests?
>
> Yes, I agree that more (and better organised) tests would be a very good thing.

I would go further and strongly suggest that no-one should *ever* commit new functionality or fixes without tests. The exceptions would be a fix that makes an existing failing test pass, and pure refactoring work that isn't supposed to change the behavior of the system. (And even then you usually find that the existing tests need a bit more coverage as you go.)

It's not an onerous requirement: you *always* test your code before you commit it, don't you? Nobody *ever* just says "hey it compiles, and it's trivial and obviously works" - right? So all I'm saying is, don't keep those tests to yourself; they're worth a lot more if you share them. Testing with your own private tests that never see the mainline is almost worthless. Private tests don't protect against regression, don't provide measurable coverage, never get run in an environment other than your own, don't provide a test base that can be extended as the code evolves, and don't help other people understand what your code is supposed to do. Measure all these benefits against the tiny amount of effort required to put the tests you would do anyway into a form that fits the project's automated frameworks. There's no question it pays off manifold.

Of course it's always a matter of judgment and debate as to exactly how much testing is the right amount. But there's no question that it is simply irresponsible to commit code without any tests.

Cheers,
Alan.
Re: [java] Test result inconsistencies?
Steve Vinoski wrote:
> When I raise the testing issue in this list, as I've already done on numerous occasions, I usually get next to nothing in terms of feedback. Do others share my concerns about this whole testing issue? Do we as a group agree on the value of overhauling and enhancing our tests?

Yes, I agree that more (and better organised) tests would be a very good thing.
Re: [java] Test result inconsistencies?
On Dec 8, 2006, at 7:07 PM, Steve Vinoski wrote:
> Kim, an added note: if you run the tests under Eclipse, they seem to fail every single time. The common and broker tests are OK, but the client tests get about halfway through, fail, and then hang. I don't know what the cause is, but I'm starting to look into it.

This is probably a false alarm. We use includes and excludes in maven so that only certain tests are executed. Under Eclipse, though, it looks like it's trying to run everything, and is tripping over tests that we already know don't work.

This points to still more reasons why the tests need an overhaul. Maybe we ought to move all tests that currently are not executed by maven out from under the project directories and into, say, a broken_tests directory at the java level? That way, we won't lose them, but they won't get in the way under Eclipse or require special include and exclude directives under maven. Unless someone knows of a good reason not to do this, I'll proceed with it, but I'll give it a couple of days so everyone can speak up if they want.

BTW, I get the impression, but may very well be mistaken, that many in the qpid-dev group place a high value on added functionality but do not value tests as highly. I'm generally a "test first" kind of guy, in that if I'm adding functionality, I write the tests for it first, and then write the functionality, and commit them all at the same time. The low test coverage we currently have makes me wary of changing anything, simply because without a test, I don't know whether my changes would break anything, and among those of us still relatively new to this code base, I doubt I'm alone in that feeling.

When I raise the testing issue in this list, as I've already done on numerous occasions, I usually get next to nothing in terms of feedback. Do others share my concerns about this whole testing issue? Do we as a group agree on the value of overhauling and enhancing our tests?
--steve

--steve

On Dec 8, 2006, at 7:25 AM, Kim van der Riet wrote:

I notice that about a quarter to a third of the time the maven tests hang, even though the first one after a long period seems to work correctly. Consecutive runs either pass or hang (which I have to terminate). I make no other changes from run to run - I just run "mvn". Sometimes consecutive runs pass, hang, pass, hang, etc. but not always.

Is it possible that there is some sort of left-over state/condition/file/process from previous tests that may be interfering with the current test? I searched for running brokers, etc., but I did not see anything.

When it works, I get the normal test passed message. When it hangs I get (always the same message):

Running org.apache.qpid.test.unit.client.forwardall.CombinedTest
Starting 2 services...
Starting client...
Received 1 of 2 responses.

Other times, where I got completely normal results on a previous run, some tests produce error messages, but the test still passes. When this happens (about 1 run in 3 or 4), I get one or several of the following:

Running org.apache.qpid.test.unit.client.channelclose.ChannelCloseOkTest
pool-16-thread-4 2006-12-07 16:27:03,633 ERROR [qpid.server.protocol.AMQPFastProtocolHandler] Exception caught in AMQProtocolSession(anonymous(7912507)), closing session explictly:
java.lang.IllegalStateException: Handed undecoded ByteBuffer buf = HeapBuffer[pos=0 lim=0 cap=0: empty]
java.lang.IllegalStateException: Handed undecoded ByteBuffer buf = HeapBuffer[pos=0 lim=0 cap=0: empty]
at org.apache.qpid.server.protocol.AMQPFastProtocolHandler.messageReceived(AMQPFastProtocolHandler.java:198)
at org.apache.mina.common.support.AbstractIoFilterChain$2.messageReceived(AbstractIoFilterChain.java:189)
at org.apache.mina.common.support.AbstractIoFilterChain.callNextMessageReceived(AbstractIoFilterChain.java:502)
at org.apache.mina.common.support.AbstractIoFilterChain.access$1000(AbstractIoFilterChain.java:52)
at
org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.messageReceived(AbstractIoFilterChain.java:777)
at org.apache.qpid.pool.Event.process(Event.java:80)
at org.apache.qpid.pool.Job.processAll(Job.java:81)
at org.apache.qpid.pool.Job.run(Job.java:103)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
at java.lang.Thread.run(Thread.java:595)

pool-20-thread-2 2006-12-08 07:03:21,590 ERROR [qpid.server.protocol.AMQPFastProtocolHandler] Exception caught in AMQProtocolSession(anonymous(1668655)), closing session explictly:
java.lang.IllegalStateException: Handed undecoded ByteBuffer buf = HeapBuffer[pos=0 lim=82 cap=82: 01 00 01 00 00 00 4A 00 32 00 14 00 00 19 74 6D 70 5F 61 6E 6F 6E 79 6D 6F 75 73 28 33 32 34 38 36
Re: [java] Test result inconsistencies?
Kim, an added note: if you run the tests under Eclipse, they seem to fail every single time. The common and broker tests are OK, but the client tests get about halfway through, fail, and then hang. I don't know what the cause is, but I'm starting to look into it.

--steve

On Dec 8, 2006, at 7:25 AM, Kim van der Riet wrote:

I notice that about a quarter to a third of the time the maven tests hang, even though the first one after a long period seems to work correctly. Consecutive runs either pass or hang (which I have to terminate). I make no other changes from run to run - I just run "mvn". Sometimes consecutive runs pass, hang, pass, hang, etc. but not always.

Is it possible that there is some sort of left-over state/condition/file/process from previous tests that may be interfering with the current test? I searched for running brokers, etc., but I did not see anything.

When it works, I get the normal test passed message. When it hangs I get (always the same message):

Running org.apache.qpid.test.unit.client.forwardall.CombinedTest
Starting 2 services...
Starting client...
Received 1 of 2 responses.

Other times, where I got completely normal results on a previous run, some tests produce error messages, but the test still passes.
When this happens (about 1 run in 3 or 4), I get one or several of the following:

Running org.apache.qpid.test.unit.client.channelclose.ChannelCloseOkTest
pool-16-thread-4 2006-12-07 16:27:03,633 ERROR [qpid.server.protocol.AMQPFastProtocolHandler] Exception caught in AMQProtocolSession(anonymous(7912507)), closing session explictly:
java.lang.IllegalStateException: Handed undecoded ByteBuffer buf = HeapBuffer[pos=0 lim=0 cap=0: empty]
java.lang.IllegalStateException: Handed undecoded ByteBuffer buf = HeapBuffer[pos=0 lim=0 cap=0: empty]
at org.apache.qpid.server.protocol.AMQPFastProtocolHandler.messageReceived(AMQPFastProtocolHandler.java:198)
at org.apache.mina.common.support.AbstractIoFilterChain$2.messageReceived(AbstractIoFilterChain.java:189)
at org.apache.mina.common.support.AbstractIoFilterChain.callNextMessageReceived(AbstractIoFilterChain.java:502)
at org.apache.mina.common.support.AbstractIoFilterChain.access$1000(AbstractIoFilterChain.java:52)
at org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.messageReceived(AbstractIoFilterChain.java:777)
at org.apache.qpid.pool.Event.process(Event.java:80)
at org.apache.qpid.pool.Job.processAll(Job.java:81)
at org.apache.qpid.pool.Job.run(Job.java:103)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
at java.lang.Thread.run(Thread.java:595)

pool-20-thread-2 2006-12-08 07:03:21,590 ERROR [qpid.server.protocol.AMQPFastProtocolHandler] Exception caught in AMQProtocolSession(anonymous(1668655)), closing session explictly:
java.lang.IllegalStateException: Handed undecoded ByteBuffer buf = HeapBuffer[pos=0 lim=82 cap=82: 01 00 01 00 00 00 4A 00 32 00 14 00 00 19 74 6D 70 5F 61 6E 6F 6E 79 6D 6F 75 73 28 33 32 34 38 36 35 39 30 29 5F 31 0A 61 6D 71 2E 64 69 72 65 63 74 19 74 6D 70 5F 61 6E 6F 6E 79 6D 6F 75 73 28 33 32 34 38 36 35 39 30 29 5F 31 01 00 00 00 00 CE]
java.lang.IllegalStateException: Handed undecoded ByteBuffer buf = HeapBuffer[pos=0 lim=82 cap=82: 01 00 01 00 00 00 4A 00 32 00 14 00 00 19 74 6D 70 5F 61 6E 6F 6E 79 6D 6F 75 73 28 33 32 34 38 36 35 39 30 29 5F 31 0A 61 6D 71 2E 64 69 72 65 63 74 19 74 6D 70 5F 61 6E 6F 6E 79 6D 6F 75 73 28 33 32 34 38 36 35 39 30 29 5F 31 01 00 00 00 00 CE]
at org.apache.qpid.server.protocol.AMQPFastProtocolHandler.messageReceived(AMQPFastProtocolHandler.java:198)
at org.apache.mina.common.support.AbstractIoFilterChain$2.messageReceived(AbstractIoFilterChain.java:189)
at org.apache.mina.common.support.AbstractIoFilterChain.callNextMessageReceived(AbstractIoFilterChain.java:502)
at org.apache.mina.common.support.AbstractIoFilterChain.access$1000(AbstractIoFilterChain.java:52)
at org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.messageReceived(AbstractIoFilterChain.java:777)
at org.apache.qpid.pool.Event.process(Event.java:80)
at org.apache.qpid.pool.Job.processAll(Job.java:81)
at org.apache.qpid.pool.Job.run(Job.java:103)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
at java.lang.Thread.run(Thread.java:595)

<... more of the same, with different buffer contents and lengths>

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.194 sec
Running org.apache.qpid.test.unit.client.connectionurl.ConnectionURLTest
AnonymousIoService-8 2006-12-08 07:03:29,782
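Kim's hunch about leftover state between runs can be checked mechanically before each mvn invocation. Here is a rough sketch; the pidfile path is a hypothetical convention (not something the Qpid broker necessarily writes), and 5672 is assumed to be the broker's AMQP listen port.

```shell
# Sketch: look for leftover broker state before starting a test run.
# The pidfile location is hypothetical; 5672 is the assumed broker port.
check_leftovers() {
    pidfile=${QPID_PIDFILE:-/tmp/qpid-broker.pid}

    # A stale pidfile pointing at a live process means a broker survived
    # the previous run.
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "leftover broker process from a previous run"
        return 1
    fi

    # Anything still listening on the broker port? (netstat output varies
    # by platform; if netstat is unavailable, this check is skipped.)
    if netstat -an 2>/dev/null | grep -E '[.:]5672[[:space:]].*LISTEN' >/dev/null; then
        echo "something is still listening on broker port 5672"
        return 1
    fi

    echo "no leftover broker state detected"
}

check_leftovers
```

Running a check like this between the pass/hang/pass cycles Kim describes would at least confirm or rule out an orphaned broker as the source of the interference.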
