Hi,

Vito Smolej wrote:
Well, Clytie, welcome to the (thinned-out) crowd: I've had the same unpleasant experience. With 31.dec.2006 as the deadline,

I extended it to 31.01.2007.

That's how it looks (from http://www.sunvirtuallab.com:8001/tcm2/opensource/tcm_index.cgi?tcm_config=newooo):


2.1RC <http://www.sunvirtuallab.com:8001/tcm2/opensource/tcm_report.cgi?tcm_config=newooo&action=report_list&project_id=24> 2006-11-23 2006-12-30 1691 128 191 1293 3303 <http://www.sunvirtuallab.com:8001/tcm2/opensource/tcm_report.cgi?tcm_config=newooo&action=one_build&project_id=24&breakdown=language_id>


I read it like this:

1) 1691 finished (good)
2) 1293 - i.e. 39% - not finished (duh)
3) 128 bugs (great)
4) 191 tests skipped (huh?)
...
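The row above can be sanity-checked with a quick sketch. Note that the column labels here are my reading of the TCM row, following the list above; they are not confirmed by TCM itself:

```python
# Quick sanity check of the 2.1RC numbers quoted above.
# Column reading (my interpretation, not confirmed by TCM):
# finished, bugs, skipped, untested, total.
finished, bugs, skipped, untested = 1691, 128, 191, 1293
total = 3303

# The four categories add up to the reported total of test cases.
assert finished + bugs + skipped + untested == total

# Share of untested cases, matching the "i.e. 39%" quoted above.
print(round(untested / total * 100))  # → 39
```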
The argument "did not have time to enter the results" does not wash. Reminds me of the eternal question "Is it a ball or a strike?" and the one and only correct answer from an old, seasoned baseball ump: "It's nothing until I call it." The results of our work mean nothing until we have committed them.
Correct. But reading the plain numbers does not give the correct impression.

E.g. for German we had:
173 passed
13 failed
13 skipped
142 untested

The high number of untested cases is caused by volunteers who signed up for tests but did not start testing (for whatever reason). But each of those tests has been backed up by the results of another volunteer.

If we found nobody at all to do the tests for a platform, that platform was not approved. (So in fact Mac PPC is still not approved due to missing test results.)

Given the goal of 3303,0,0,0 (oh well, what else should it be?!), the question is: has 2.1 passed? The answer (in TCM): 3 of 7 have passed, 1 is a NoGo (Slovenian), and the rest did not even (care or have time to) say. My opinion: this sucks, but are we ready to face it? If not, we'll have a classic case of cognitive dissonance on our hands.
Yes, we have one .. and always will have.


How to proceed?
1) automate the process - i.e. stress / support / enhance the role of testtool
The process is already automated, but it cannot be automated to 100%.


2) lighten up - there's a sizeable proportion of repeats in test cases.

Where? If you use the release sanity scenario, you will do a test only once per platform.

3) focus and prioritize - common sense would expect windows and linux to be done first before the rest kicks in

It is up to each native-lang QA team to prioritize this work.


4) define goals that are doable - the six weeks' deadline for 2.1 was rather tight. I got my testable version (win, Slovenian) sometime in the first or second week of December...

Please don't mix up localization (translation) testing and release approval.
The first is not for release approval (indeed, it would delay a release if critical errors were found). These tests should be done on the first localized version within the current codeline. If you get this version too late for testing, please coordinate with "your build provider".

Release approval can be done within a short period (provided localization testing has been done on earlier builds).

André

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]