Hi Joost,

> Using the Testtool is very useful to find regressions by comparing
> result files of e.g. a CWS build with result files of the corresponding
> master workspace.
That statement might be true, but it is not very helpful in itself. That
is: of the 100 regressions introduced every day, how many are found by
the testtool, and with which effort, and how many are found by, say,
manual testing, and with which effort? Only an answer to questions like
those allows us to judge the efficiency of the testtool.

Don't get me wrong, I am not saying that the testtool is useless - I
actually do not know enough about it. I just want to point out that the
efficiency of a quality assurance tool (and the testtool is one of
those) is measured not only by its success, but also by the effort
invested into it.

>> Not sure. Looking at the number of stoppers which came in during
>> 3.1's release phase, and the number of stoppers already raised for
>> 3.1.1 (and most often for good reasons), I think that *finding* bugs
>> *is* a problem.
>>
>> Of course, or at least in my opinion: Automatically finding bugs is
>> just *one* line of defense. There are others, and personally, I think
>> another line of defense must be with the Devs, by not *introducing*
>> the bugs. IMO, we take far too few measures to prevent bugs; we
>> always concentrate on finding and fixing them.
>
> You're describing the perfect world where developers don't introduce
> new bugs ;-)

Hmm? Read again, please. I am well aware that nearly every regression is
introduced by a developer's mistake, and I said that in my opinion, we
still do too little to prevent *this* particular source of errors.

>> I'm asking because my gut feeling is that the latter takes too much
>> time. And seeing that all issues have already been VERIFIED in the
>> CWS, and assuming that *breaking* a fix by merely integrating the CWS
>> is unlikely (though surely possible), I wonder whether auto-CLOSING
>> issues would free precious QA resources.
>
> I have a different experience regarding this topic. In the past I
> remember cases where the integration of child workspaces introduced
> new regressions, because the integration failed or dependencies
> between child workspaces were not resolved correctly during
> integration work. OK, this case is rare, but it's better to verify a
> proper integration than to find the regressions weeks after it.

Again, this is a matter of effort vs. gain. If you put 10 hours into
closing VERIFIED issues, and find 1 regression which didn't exist in the
CWS - is it worth it? Or would those 10 hours have been better spent
elsewhere, say in reducing the pile? You merely say that regressions
*are* found by this process, but that's not something I doubt. I
explicitly asked whether *it's worth it*.

I have little experience with closing such issues myself, but I am a
regular reader of iss...@dba, where all changes to DBA issues arrive.
And the number of "seen in master, closing" mails is *much* higher than
the number of "oops, still broken in master" mails. That makes me think
that perhaps the time is better spent elsewhere ...

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer         [email protected] -
- Sun Microsystems                      http://www.sun.com/staroffice -
- OpenOffice.org Base                       http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
