We presently have about 83 JUnit tests; at least a third are rewritten QA
tests. So yes, it is possible for test cases where you don't need services
or other infrastructure running.
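
For example, a plain JUnit test along these lines needs nothing running.
The class and the clamp logic are hypothetical stand-ins for real River
code, just to show the shape of a service-free test:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class LeasePolicyTest {
        // Hypothetical pure function standing in for real River logic:
        // clamp a requested lease duration to the maximum we grant.
        static long clamp(long requested, long max) {
            return requested < max ? requested : max;
        }

        @Test
        public void longRequestIsClampedToMaximum() {
            assertEquals(60000L, clamp(Long.MAX_VALUE, 60000L));
        }

        @Test
        public void shortRequestIsGrantedAsIs() {
            assertEquals(5000L, clamp(5000L, 60000L));
        }
    }

No checkout of infrastructure, no lookup service, no activation daemon;
just junit.jar on the classpath.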
On 3/12/2012 6:29 AM, Gregg Wonderly wrote:
I concur that including JUnit for unit testing would be a good thing; then we
can move tests to JUnit if they are more likely unit tests than QA tests.
Gregg
On Dec 2, 2012, at 12:35 PM, Dan Creswell <dan.cresw...@gmail.com> wrote:
Nice to hear from you Patricia...
On 2 December 2012 10:29, Patricia Shanahan <p...@acm.org> wrote:
I hope you don't mind me throwing in some random comments on this. I think
there are two types of testing that need to be distinguished, system and
unit.
A system test looks at the external behavior of the whole system, so what
it is testing changes only when the API changes, and the tests should apply
across many source code revisions. I can see separating those out.
However, I feel River has been weak on unit tests: tests that check the
implementation of, e.g., a data structure against its javadoc comments. Those
tests need to change with internal interface changes.
Testing e.g. the multi-thread consistency of a complex data structure using
only external system tests can be a mistake. It may take a very large
configuration or many days of running to bring out a bug that could be found
relatively fast by a unit test that hits the data structure rapidly.
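
As a sketch of that kind of unit test, here one is using ConcurrentHashMap
as a stand-in for whatever structure is under test, with the invariant
checked being just its documented put/size contract:

    import static org.junit.Assert.assertEquals;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CountDownLatch;
    import org.junit.Test;

    public class ConcurrentMapStressTest {
        @Test
        public void sizeMatchesInsertionsUnderContention() throws Exception {
            final ConcurrentHashMap<String, Integer> map =
                    new ConcurrentHashMap<String, Integer>();
            final int threads = 8, perThread = 10000;
            final CountDownLatch start = new CountDownLatch(1);
            final CountDownLatch done = new CountDownLatch(threads);
            for (int t = 0; t < threads; t++) {
                final int id = t;
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            start.await(); // release all threads at once
                            for (int i = 0; i < perThread; i++) {
                                map.put(id + ":" + i, i); // unique keys
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } finally {
                            done.countDown();
                        }
                    }
                }).start();
            }
            start.countDown();
            done.await();
            // Documented invariant: every distinct key inserted is present.
            assertEquals(threads * perThread, map.size());
        }
    }

A test like this hammers the structure in seconds under a harness, rather
than needing days of soak time in a full deployment.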
I'm a little concerned that the reconfiguration of the tests may represent
an increased commitment to only doing system tests, and not doing any unit
tests.
It doesn't represent any such thing in my mind! I'd expect to do a
bunch of those per release and for individual checkins etc.
That's one of the reasons I'm interested in maybe moving to junit.
Maybe, in fact, we bring in junit as a statement of intent for unit
tests and leave jtreg and friends for the "big" stuff.
Feeling less concerned? ;)
Patricia
On 12/2/2012 10:21 AM, Dan Creswell wrote:
...
On 30 November 2012 19:53, Gregg Wonderly <ge...@cox.net> wrote:
I still wonder why it doesn't feel right that the test suite be in the
same branch as the associated "release". Some of the new code needs new
tests that demonstrate "functionality", while other tests that demonstrate
compatibility will be run on each release without change. It seems to me
that, in the end, when a release goes out the door, the tests that validated
that release are part of that "release".
I have some similar disquiet, here's what I'm thinking at the moment
(subject to change faster than I can type!)...
Compatibility and the like is really "compliance testing" and is closely
linked to the APIs defined by the specs. Two flavours here:
(1) "Well-behaved service" tests - does a service do join properly etc.
(2) Compliance tests - do the APIs behave right etc.
These are kind of slow moving, as are the APIs, at least for now. I feel
right now like (1) might be a subproject, applied to our own "built-in"
services as well as others. I'm tempted to say the same about (2), save
for the fact that, if we give up on the idea that someone else is going to
build a River clone, this stuff becomes part of the release/test phase
for the core.
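
As a rough sketch of a flavour-(1) check, the join part could look like
this in JUnit. It assumes a lookup service and the service under test are
already running, and glosses over security policy and discovery
configuration; MyService is a hypothetical interface:

    import static org.junit.Assert.assertNotNull;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;
    import net.jini.discovery.DiscoveryEvent;
    import net.jini.discovery.DiscoveryListener;
    import net.jini.discovery.LookupDiscovery;
    import org.junit.Test;

    public class WellBehavedServiceTest {
        // Hypothetical interface exported by the service under test.
        interface MyService {}

        @Test
        public void serviceJoinsTheLookupService() throws Exception {
            final BlockingQueue<ServiceRegistrar> found =
                    new LinkedBlockingQueue<ServiceRegistrar>();
            LookupDiscovery disco =
                    new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
            disco.addDiscoveryListener(new DiscoveryListener() {
                public void discovered(DiscoveryEvent e) {
                    for (ServiceRegistrar r : e.getRegistrars()) {
                        found.offer(r);
                    }
                }
                public void discarded(DiscoveryEvent e) {}
            });
            try {
                ServiceRegistrar registrar = found.poll(30, TimeUnit.SECONDS);
                assertNotNull("no lookup service discovered", registrar);
                // Did the service under test join the lookup service?
                Object proxy = registrar.lookup(new ServiceTemplate(
                        null, new Class[] { MyService.class }, null));
                assertNotNull("service did not join", proxy);
            } finally {
                disco.terminate();
            }
        }
    }

The discovery race is handled by blocking on the queue rather than
sleeping; in a real suite the timeout would come from harness config.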
Any other testing we're doing, over and above what falls into (1) and
(2) above, is part of the tests for core and ought to live in the same
branch and run as part of a release. However, that's a little
uncomfortable when one wishes to freeze development of core to do
major work on the test harness etc. You branch core and test suite to
work purely on the suite.
Manageable, I guess, until you have the trunk moving on and
breaking your already seriously under-construction test suite, where
everything in trunk is "old style" and will be a b*stard to merge
across; but if you don't merge, your branched test suite is gonna break
for nuisance reasons.
If we need two different types of tests, and a migration path from
"functionality tests" into "compatibility tests", then maybe we really need
two trees for development of each release: branching the whole test
suite would be one branch, and the new release would be the other.
Is that how you guys are thinking about this?
You have my (current) thinking above...
Gregg Wonderly
On Nov 30, 2012, at 9:43 PM, Peter Firmstone <j...@zeus.net.au> wrote:
On 30/11/2012 12:27 AM, Dan Creswell wrote:
On 29 November 2012 13:11, Peter Firmstone <j...@zeus.net.au> wrote:
The last passing trunk revisions:
JDK6 Ubuntu: 1407017
Solaris x86: 1373770
JDK7 Ubuntu: 1379873
Windows: 1373770
Revision 1373770 looks the most stable; I think all platforms were passing
on this, while 1407017 only passed on Ubuntu JDK6, nothing else.
If we can confirm 1373770 as stable, maybe we should branch a release off
that, buying some time to stabilise what we're working on now.
I think we should do that. I'm also tempted to suggest we consider
limiting our development until we've fixed these tests up, or
alternatively control the rate of patch merging so we can pace it and
make sure the tests get focus.
That's a bit sledgehammer but...
Ok, sounds like a plan. How do you think we should best approach the task?
Create a branch in skunk, just for qa, and run tests against released jars?
Regards,
Peter.