David Brown wrote:
My previous project made the tests an integral part of the review process (you couldn't check in a change without a test to demonstrate that you fixed something; the test had to pass on the current version and fail on the previous, and any exception had to be explicitly spelled out and reviewed as well).
Oh, that's a wonderful idea, requiring the test to fail on the previous version. But how do you implement new functionality? You can't check in a test that fails, and if the existing code isn't failing any tests now...
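For a bug fix, at least, I can picture how that check-in looks. Something like this, with the test going in alongside the fix so it passes now and would have failed on the previous version (the function and the bug are invented for the sake of the sketch):

    import unittest

    def split_tags(text):
        # The fix is the .strip(): the old version returned " red"
        # for "blue, red", which broke lookups downstream.
        return [t.strip() for t in text.split(",")]

    class SplitTagsRegression(unittest.TestCase):
        def test_spaces_around_commas(self):
            # Passes on the fixed version, would have failed on the old one.
            self.assertEqual(split_tags("blue, red"), ["blue", "red"])

    if __name__ == "__main__":
        unittest.main()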
I saw another idea, too - your code doesn't become part of the daily build if your changes, when run, break anyone's tests. What this *means* is that you are rewarded for adding tests to your code, because nobody else can check in something that breaks your modules. If you don't put in good tests, then you have to keep fixing your code to work around the bad things they did (since you can't break *their* code once it's checked in).
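The gate itself doesn't have to be fancy, either. Something roughly like this, run against the candidate change before it's allowed into the daily build - just a sketch, and the tests/ layout is an assumption:

    import subprocess
    import sys

    # Run everybody's tests against the candidate change; if anything
    # fails, the change stays out of the daily build.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"]
    )
    if result.returncode != 0:
        print("Rejected: this change breaks somebody's tests.")
        sys.exit(1)
    print("Accepted into the daily build.")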
I just wish there were an easier way to test things like device drivers and interactive systems.
Yeah. My problem is I wind up doing a lot of web stuff (hard to mock), with the back end talking to a whole bunch of systems that are also difficult to mock. Indeed, for a whole lot of those back-end systems we don't even have specs before most of the system has to be built. Makes things rough. The last piece I architected had to be designed and under development before we finalized the contract for one of the server-like back-end pieces, so we didn't even know whether that piece was a library or a separate server process or what. I'm sure that if I took the time to design it all right, with business-layer objects and all that sort of stuff, it would be easier, but that's another several weeks of thinking when the customer doesn't have money for that.
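If I did spend the time, I imagine the business-layer stuff would boil down to a thin seam over the unknown back-end piece, so the web code doesn't care whether it turns out to be a library or a separate server, and the tests can hand it a fake. Just a sketch, with all the names invented:

    import unittest

    class OrderBackend:
        """Seam over the back-end piece we don't have specs for yet."""
        def place_order(self, customer_id, items):
            raise NotImplementedError

    class FakeOrderBackend(OrderBackend):
        """Stand-in for tests; records calls instead of talking to anything."""
        def __init__(self):
            self.placed = []
        def place_order(self, customer_id, items):
            self.placed.append((customer_id, items))
            return "FAKE-ORDER-1"

    def checkout(backend, customer_id, cart):
        # Web-layer logic under test; it only talks to the seam.
        if not cart:
            raise ValueError("empty cart")
        return backend.place_order(customer_id, cart)

    class CheckoutTest(unittest.TestCase):
        def test_checkout_places_order(self):
            backend = FakeOrderBackend()
            self.assertEqual(checkout(backend, 42, ["widget"]), "FAKE-ORDER-1")
            self.assertEqual(backend.placed, [(42, ["widget"])])

    if __name__ == "__main__":
        unittest.main()

The real adapter would wrap whatever the contract finally specifies; the web-layer tests never have to know which way it went.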
You know, I need a real job. :-)

--
Darren New / San Diego, CA, USA (PST)
KPLUG-LPSG@kernel-panic.org
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg