Hi,

Leo Simons wrote:
> I usually draw a few things on paper, and the arrows I draw show where
> my contracts (work interfaces) need to be. Next I write the work interfaces.
>
> Then I create unit tests for the to-be-created implementations using
> some mock objects framework for all dependencies. These test the work
> interface contract as I drew it.

Could you elaborate on this?

1) I create the interface ComponentA.
2) I create the implementation mock object, ComponentAProvider, which just implements the methods in the interface as do-nothing stubs.
3) What do I do now with the unit tests? Do you mean a unit test that implements the Serviceable interface and then does a lookup on ComponentA and calls its methods? Or something that just calls the interface methods directly to test them?

What dependencies are you talking about? Avalon dependencies? If so, do you mean that if ComponentA had dependencies on other Avalon components, you would test those in your unit test here?

Where do you typically house these sets of unit tests? In impl/src/test/com/mycomp/myapp/... or somewhere under api/src/...?

> Then I create trivial implementations (ie no I/O, no threads, no
> external dependencies....no blahblah at all) that simulate what they
> should be doing later on. I add implementation ideas on making things
> work better in comments.
>
> Run the tests. Probably add more tests as my idea of the contract
> solidifies. Modify the interfaces one or two times. Add in custom
> exceptions.
>
> Coffee break.
>
> Then I start modifying the implementations to actually do what they
> should be doing, rerunning the tests often.
>
> When things seem in order, I start writing up things like metadata,
> config files, etc, and start doing integration tests (if you're using
> Merlin, you use the AbstractMerlinTestCase; if you're using Fortress or
> Pico, you just create and populate a container instance).
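To check whether I understand the unit-test step, here is a rough sketch of what I think you mean. All the names here (ComponentA, Store, ComponentAProvider) are made up for illustration, and the mock is hand-rolled instead of coming from a mock objects framework:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical work interface -- the contract drawn on paper.
interface ComponentA {
    String greet(String name);
}

// Hypothetical dependency of the implementation.
interface Store {
    void save(String value);
}

// Hand-rolled mock standing in for a mock-objects framework:
// it records calls so the test can verify the interaction.
class MockStore implements Store {
    final List<String> saved = new ArrayList<String>();
    public void save(String value) { saved.add(value); }
}

// Trivial implementation: no I/O, no threads, no container lookup.
class ComponentAProvider implements ComponentA {
    private final Store store;
    ComponentAProvider(Store store) { this.store = store; }
    public String greet(String name) {
        String message = "Hello, " + name;
        store.save(message); // interaction we want the mock to observe
        return message;
    }
}

// A unit test of the ComponentA contract: it talks only to the
// interface and to the mocked dependency, never to a container.
class ComponentATest {
    public static void main(String[] args) {
        MockStore store = new MockStore();
        ComponentA component = new ComponentAProvider(store);
        if (!"Hello, Leo".equals(component.greet("Leo")))
            throw new AssertionError("greet() broke the contract");
        if (!store.saved.contains("Hello, Leo"))
            throw new AssertionError("dependency was never called");
        System.out.println("contract tests passed");
    }
}
```

So the test exercises the interface contract directly, and the mock lets it run without any container or real dependency in place -- is that the idea?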
> This is often a frustrating part, as I manage to forget config file
> formats quite often, but backed by all my running unit tests I can
> usually figure out the issues.
>
> Coffee break.
>
> When all that works, I'll usually do another sketch (this time of what
> things actually look like), then refactor things bit by bit as I change
> things in the sketch, until I'm sort of happy with the layout.
>
> This is usually when I make my first commit (so peers can review and so
> I can go back and look at what on earth I meant in the first place).
> I'll have a coffee break and talk to one of those peers for feedback.
> If no peer is available, I read slashdot instead.
>
> The next step should usually be to refactor the test cases into an
> abstract test case that just tests the work interface contract, and a
> subclass that runs the tests in the abstract case, plus a few of its
> own (like for constructors), on the implementation. This is especially
> important if you will have multiple implementations of the same
> interface, since basic testing of the other implementations then just
> involves a minimal extension of the abstract test case.
>
> I'm often too lazy to do all of those things, though. In practice I'll
> often get enthusiastic about an idea, then just write an implementation
> immediately, add minimal tests that show it works, take a coffee break,
> and then start componentizing. But I always regret it later, because
> that's actually more work.
>
> An interesting observation about the size of the subsystem I tackle at
> once: it is limited to the size of the paper I'm drawing on. I used to
> be much more productive in one of my old jobs: we had lots of A3 paper
> lying around. But I think I produced more bugs too :D
>
> I'd round off with a joke involving paper and coffee, but can't think
> of any.
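For anyone else following along, I believe the abstract-test-case refactoring described above looks roughly like this. Cache, MapCache and the other names are invented for the example, and plain Java checks stand in for JUnit assertions:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical work interface whose contract we want to test.
interface Cache {
    void put(String key, String value);
    String get(String key); // null when the key is absent
}

// Abstract test case: tests only the work interface contract and
// knows nothing about any particular implementation.
abstract class AbstractCacheTestCase {
    protected abstract Cache createCache(); // subclass supplies the impl

    void testPutThenGet() {
        Cache cache = createCache();
        cache.put("k", "v");
        if (!"v".equals(cache.get("k")))
            throw new AssertionError("put/get contract violated");
    }

    void testMissingKeyReturnsNull() {
        Cache cache = createCache();
        if (cache.get("absent") != null)
            throw new AssertionError("missing keys must yield null");
    }

    void runAll() {
        testPutThenGet();
        testMissingKeyReturnsNull();
    }
}

// Trivial in-memory implementation.
class MapCache implements Cache {
    private final Map<String, String> map = new HashMap<String, String>();
    public void put(String key, String value) { map.put(key, value); }
    public String get(String key) { return map.get(key); }
}

// Testing this implementation is a minimal extension of the abstract
// case; implementation-specific tests (constructors etc.) go here.
class MapCacheTestCase extends AbstractCacheTestCase {
    protected Cache createCache() { return new MapCache(); }

    public static void main(String[] args) {
        new MapCacheTestCase().runAll();
        System.out.println("contract tests passed");
    }
}
```

A second implementation (say, a file-backed cache) would then only need its own createCache() override and subclass to inherit the whole contract suite.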