Tim Cross writes:

> > You are probably misunderstanding me. My question is basically
> > whether your testing has covered (i.e., executed) all the code that
> > has changed. Moreover, in the unibyte/multibyte case, we need to
> > test it with multibyte text.
>
> My comment was to your reference to tools to manage test coverage. It
> would be very useful to have a 'standard' collection of test messages
> that we could use.

No, it seems that we are still talking at cross-purposes. A test
coverage tool is one you tell which parts of the code you want covered;
at the end of testing, it tells you whether all of those parts were
fully exercised. It can also identify for you which parts weren't
covered, so that you can go and find test cases to cover them. It is a
*language-based* tool and doesn't depend on how sophisticated the
software system is.

In the absence of such a tool, what I do is use the Emacs debuggers:
set breakpoints around the code I want to test and step through it to
check that everything is being exercised. Also, remembering the
Dijkstra adage that "testing can only show the presence of bugs, not
their absence," I try hard to imagine what else could happen that is
not represented by my test cases. For the recent Thunderbird
compatibility code, I wasn't able to do this because I didn't have
enough Thunderbird email to test it with. As you saw, problems remained
when it was published.
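For what it's worth, Emacs does ship a coverage tool of this kind for
Emacs Lisp: testcover.el, which is built on top of Edebug. A minimal
sketch of both workflows follows; the file name "vm-mime.el" and the
function name vm-mime-decode are only placeholders for whatever code
actually changed:

  ;; Coverage: instrument a file with testcover, exercise the code,
  ;; then mark the forms that were never evaluated.
  (require 'testcover)
  (testcover-start "vm-mime.el")      ; instrument every definition
  ;; ... exercise the code here: visit folders, decode messages ...
  (testcover-mark-all "vm-mime.el")   ; highlight forms never reached

  ;; Stepping: instrument a single function for Edebug (the same
  ;; effect as typing C-u C-M-x inside its definition). Each call
  ;; then stops in the stepper, where SPC steps through forms and
  ;; b sets a breakpoint.
  (edebug-instrument-function 'vm-mime-decode)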
> We could possibly then also develop a standard set of regression
> tests. My point is that it will take a lot of work to develop
> something like this. However, this does not mean it isn't
> worthwhile. In fact, given the code base we have, it could be argued
> it is essential. It would be great if we could just run a script
> after a build that executed a set of tests - while this would not be
> sufficient in itself, it would add to our confidence that changes
> don't introduce new problems.

This would be system testing. To me, system testing doesn't give the
same kind of confidence as unit testing. It would be useful to start
collecting email messages that could form a good test suite. It is a
bit hard to do because email messages can't be shared easily. But we
should try.
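Once we have such a collection, a script of the kind you describe
could be built on ERT, the regression-test library that ships with
recent Emacsen: each regression becomes an ert-deftest, and a build
script runs
"emacs -batch -l ert -l vm-tests.el -f ert-run-tests-batch-and-exit"
to get a pass/fail summary. A sketch, where the file name vm-tests.el
is invented and rfc2047-decode-string merely stands in for VM's own
decoder; real tests would run VM's functions against the collected
messages:

  ;; vm-tests.el: regression tests as ERT test cases.
  (require 'ert)
  (require 'rfc2047)

  ;; A unit-level check on the unibyte/multibyte issue: a multibyte
  ;; RFC 2047 Subject header must decode to the right text.
  (ert-deftest vm-test-decode-multibyte-subject ()
    (should (equal (rfc2047-decode-string
                    "=?UTF-8?B?44GT44KT44Gr44Gh44Gv?=")
                   "こんにちは")))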
> Yes. I'm trying to deal with them in 'groups' of similar messages
> and then commit each group so that it is easier to extract specific
> bunches of fixes. I am also using my compiler-fixes branch as my
> current VM version as an additional ad hoc testing process.

That is good. Can you make sure that you commit each group separately?
That would make it easier when we merge them with the trunk.

Cheers,
Uday
