Ideally, you would test against all supported databases. We actually have a product that does this (I'm not on that team, so I don't know the details of how they set it up): it runs all the automated tests against every database. It takes a little while, but you know right away when you've broken a different database. I'm not much of a database guy and am fairly new to schemas, so my only concern is the source generator and the XML stuff, but you could potentially set up CruiseControl for many databases if you wanted. Just an idea.
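The multi-database idea could be driven by a thin wrapper around the build. A minimal sketch follows; the `test` target name, the `-Ddatabase` property, and the database list are placeholders for illustration, not Castor's actual build interface:

```shell
#!/bin/sh
# Sketch: run the full test suite once per supported database.
# 'ant -Ddatabase=<db> test' is a hypothetical invocation; substitute
# whatever target and property the real build file defines.
DATABASES="mysql postgresql oracle"

for db in $DATABASES; do
    cmd="ant -Ddatabase=$db test"
    # For the sketch we only print the command; a real CruiseControl
    # setup would execute it and collect the per-database results.
    echo "$cmd"
done
```

Each database then gets its own pass/fail report, which is exactly the "you know when you broke a different database" behavior described above.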
Dean


Bruce Snyder wrote:

This one time, at band camp, Dean Hiller said:

DH> actually, keith added my automated build target which will build castor
DH> and run all the test cases. I haven't hooked it up to the automated
DH> build yet. Once I do, notification on whether test cases broke or not
DH> is still sent. It just doesn't have the ability to tell you which test
DH> cases. As far as JUnit results being spit out in XML, I have no idea
DH> if that is Ant or JUnit, but that is a good point. If it is too much
DH> work, I say drop it and don't do it. Too much work for too little
DH> benefit. I will have the automated build call the automated-build
DH> target when I get a chance. I am swamped right now.

Yeah, I saw that Keith committed that the other day. Very nice. But I
agree that it would be nice to see what passed/failed. The real problem
with this is that the CTF-JDO tests are highly dependent upon the
database against which they're being executed. Due to JDBC implementation
differences and database differences, pass/fail results can vary widely.
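On the earlier question of whether the XML output comes from Ant or JUnit: it's Ant. The `<junit>` task's nested `<formatter type="xml"/>` writes one XML report per test class, so it may be little work after all. A hedged sketch (the target name, classpath reference, and directory layout below are made up for illustration, not Castor's real build file):

<!-- Sketch only: 'test.classpath', 'src/test', and 'build/test-results'
     are placeholder names, not Castor's actual build layout. -->
<target name="test-xml">
  <mkdir dir="build/test-results"/>
  <junit printsummary="yes" haltonfailure="no">
    <classpath refid="test.classpath"/>
    <!-- the 'xml' formatter emits a TEST-*.xml report per test class -->
    <formatter type="xml"/>
    <batchtest todir="build/test-results">
      <fileset dir="src/test" includes="**/*Test.java"/>
    </batchtest>
  </junit>
</target>

Those per-class XML files are what would let the build notification say exactly which test cases broke, rather than just that something did.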

Bruce



-----------------------------------------------------------
If you wish to unsubscribe from this mailing, send mail to
[EMAIL PROTECTED] with a subject of:
unsubscribe castor-dev



