Mathias Bauer <[EMAIL PROTECTED]> writes:

> I'm not sure if you understood my concern. Let me put it simple: what
> makes us think that the current tests we are talking about, that AFAIK
> have been used in QA for testing the master for quite some time, will
> help to find regressions that currently stay unnoticed?

and
> But I also want to believe that running several hours of tests for
> e.g. automatic styles would be worth the effort. This is a good
> example where I suspect that possible regressions would stay
> unnoticed by automatic GUI testing. But of course that's open for
> debate. And exactly this debate is what I want to see happening. So
> let's wait until the proposed test cases are published and until we
> have verified that they run reliably. Then the QA and development
> engineers of the different teams can investigate them and decide if
> they make sense or if we can create other tests that serve the
> desired purpose better. *Then* we can decide whether we want to
> run the tests more frequently or even make them mandatory.

Hi Mathias,

I'm really not sure we're all still on the same page here. I would hope
that QA runs something quite similar to the suggested minimal set of
tests on _every_ CWS anyway - so this has nothing to do with the tests
being mandatory or run more often; it's about *when* those tests are
run. And in general, I very much like the idea of running any kind of
test as early as possible. Who wouldn't?

Next come the details. From what I've heard (and partly experienced
myself), the current state of automatic GUI testing leaves something to
be desired. Acceptance among developers would be orders of magnitude
higher if those tests were:

a) fully automatic, i.e. fire-and-forget: kick them off, and get a mail
   if something broke

b) reliable, i.e. zero false positives if the probed code paths haven't
   changed (a low number of false negatives is also nice to have, but
   IMO not too relevant for developer acceptance)

So, I like everything Joerg proposed in his first posting, because it
brings us closer to this goal (yes, even the mandatory part, because it
keeps the motivation high to make the test experience as smooth as
possible for the devs).
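Just to illustrate what I mean by "fire-and-forget": something along the
lines of the sketch below, where the wrapper stays silent on success and
only produces a failure mail for the developer. All names here
(run_and_notify, DEV_MAIL) are made up for illustration - this is not
part of any existing OOo test tooling.

```python
# Hypothetical sketch of a fire-and-forget test wrapper: run the test
# command, stay silent on success, and compose a failure-report mail
# otherwise. A real runner would hand the message to an SMTP server.
import subprocess
import sys
from email.message import EmailMessage

DEV_MAIL = "dev@example.org"  # placeholder address, not a real list

def run_and_notify(cmd):
    """Run cmd; return None on success, else an EmailMessage report."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode == 0:
        return None  # silence on success: fire-and-forget
    msg = EmailMessage()
    msg["To"] = DEV_MAIL
    msg["Subject"] = f"GUI test failed (exit {proc.returncode})"
    msg.set_content(proc.stdout + proc.stderr)
    return msg

# Example: a trivially failing "test suite"
report = run_and_notify([sys.executable, "-c", "raise SystemExit(1)"])
```

In this sketch, only a non-zero exit code of the test run produces any
notification at all; a passing run generates no mail and no work for
the developer.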
I think the question of test case effectiveness is more or less
unrelated to this topic - although quite important in its own right.
But I wouldn't want to limit it to the small subset of automatic dev
testing discussed here; the whole suite of QA tests should be
scrutinized then.

Cheers,

-- Thorsten
