Hi,
 
Something that has been on my mind for some time now, and that Ryan Ovrevik reminded me of in XDT-1002 (among others), is regression testing.
I basically see three cases:
a) make sure that the generated code compiles (and would even run)
b) validate the generated XML against a DTD and against some reference XML
c) let the developer validate the .xdt files they wrote without running them against a test case
 
For a), the first part is easy: just add a compile step to every build of a module.
The second part could mean running FindBugs(*1) against the generated code and
making sure there are no priority 1 errors.
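As a minimal sketch of the compile check (assuming a JDK with the javax.tools API on the build machine; the "target/generated" directory name is just a placeholder for wherever a module puts its generated sources):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Case (a): fail the build if any generated .java file does not compile.
public class GeneratedCodeCompiles {

    // Compiles every .java file under root; returns how many were compiled.
    // Note: generated code that depends on extra jars would additionally
    // need a -classpath argument passed to javac.
    public static int compileAll(File root) {
        List<String> sources = new ArrayList<String>();
        collect(root, sources);
        if (!sources.isEmpty()) {
            JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
            // run() returns 0 on success; any compile error fails the check
            int rc = javac.run(null, null, null, sources.toArray(new String[0]));
            if (rc != 0) {
                throw new RuntimeException("generated code does not compile");
            }
        }
        return sources.size();
    }

    private static void collect(File dir, List<String> out) {
        File[] entries = dir.listFiles();
        if (entries == null) return;
        for (File f : entries) {
            if (f.isDirectory()) collect(f, out);
            else if (f.getName().endsWith(".java")) out.add(f.getPath());
        }
    }

    public static void main(String[] args) {
        File root = new File(args.length > 0 ? args[0] : "target/generated");
        System.out.println("OK: " + compileAll(root) + " generated files compile");
    }
}
```

Something like this could sit behind an Ant target, so each module only needs to point it at its output directory.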
 
Similarly for b): the first part is already done within the samples directory. But this
doesn't tell me whether a change in some .xdt leaves out elements/attributes that it
shouldn't, since the DD can still be valid without them.
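For the DTD half of b), a plain SAX parse with validation switched on already catches structural breakage; a sketch using the standard JAXP API (the descriptor passed in is whatever the samples generate, e.g. an ejb-jar.xml; catching omitted-but-valid elements would still need the separate reference-XML comparison):

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;
import java.io.File;

// Case (b), DTD part: parse a generated deployment descriptor with
// validation enabled so any violation of the DOCTYPE's DTD fails the build.
public class ValidateDescriptor {

    public static void validate(File xml) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setValidating(true); // validate against the DTD in the DOCTYPE
        SAXParser parser = factory.newSAXParser();
        parser.parse(xml, new DefaultHandler() {
            public void error(SAXParseException e) throws SAXParseException {
                throw e; // treat validation errors as hard failures
            }
        });
    }

    public static void main(String[] args) throws Exception {
        validate(new File(args[0]));
        System.out.println(args[0] + " is valid");
    }
}
```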
 
Problems in c) are only caught when running the samples _and_ only if a sample actually
uses the specific .xdt file, which means quite some round-trip time.
 
Obviously the first two are jobs for an Ant task, while the latter would be an interactive
tool.
 
The other thing is the layout of tests. There are already some unit tests in core. Should
unit tests follow that pattern and have a test directory under each module? Or should they
follow the samples case, with a new 'toplevel' directory where all tests go (I am in favour of that)?
 
Is JUnit the right framework for all this? Are there better ones?
 
Suggestions? Opinions? Volunteers?
 
 
  Heiko
 
(*1) http://findbugs.sf.net/
 
