On 2/17/12 7:42 AM, Dennis E. Hamilton wrote:
Ji Yan,

I don't think the situation is that bad:

Progression through an instance of a test can be handled by comments and 
managing the state of the bugzilla issue.

That may be too crude, and it doesn't deal with the need to automatically 
populate a set of open tests for a new release.

Some better tooling that didn't require one bugzilla issue per test case would 
be useful.

Of course, some tooling can be a big help, especially when the format is well defined. A wiki is fine, but it also requires a lot of discipline to keep it well formatted and consistent. Granted, MediaWiki would allow us to use tags to aggregate all test cases on one page, and potentially to organize them per release with different tags as well.

I think we should give it a chance and see how it fits into our workflow.

I am sure we will discuss some kind of strategy for how we want to provide releases in the future: how often, based on which criteria, etc. For my part, I would like to deliver high-quality software that is enterprise-ready from day one of availability. And I think this can go hand in hand with a lot of fun and collaboration in the community as well.


However, a hierarchical management structure is unlikely here.  It is a bit contrary to 
the Apache Way.  No design should depend on the existence of "formal" testers.

We all know that, and I think what Ji Yan describes can be seen as an example of how an organized release process can look. Somebody who wants to help with testing can choose a test, execute it, and report the results. Another volunteer who feels responsible for providing an overview of the current status can generate a weekly report on where we stand. Everybody, and especially the release manager, can use this generated information to see where we are.

Our project is huge, and everything that we can simplify with some tooling is a good thing from my point of view.


However, having a wiki-based map and template for the tests that are needed is 
a valuable contribution.  Individuals can indicate in wiki tabulations what 
tests they are carrying out, so that there is not unnecessary duplication (and 
if result reports are not forthcoming, someone can communicate an offer to take 
them on instead).

So everyone can see the open test items, the closed test items, the test 
results, and the ones that have been offered to be done and are not yet 
reported.

It should not be difficult to create some sort of simple summary report if that 
is called for.

Why not simply use existing tools that can do that out of the box?


This can all be done on a wiki page too, since links to attachments are 
supported and it is possible to process the markdown for tables in scripts.
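
For example, a summary script along the following lines could walk such a 
table.  This is only a sketch, not a finished tool: it assumes each test case 
is kept as one table row of the form "| Test name | Tester | Result |", with 
Result being PASS, FAIL, or left empty while the test is still open.

import sys
from collections import Counter

# Tally the Result column of a test-tracking table saved as plain text.
counts = Counter()
with open(sys.argv[1], encoding="utf-8") as page:
    for line in page:
        line = line.strip()
        if not line.startswith("|"):
            continue                              # not a table row
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) < 3 or cells[0].lower() == "test name":
            continue                              # header row or separator
        result = cells[2].upper()
        if result == "PASS":
            counts["passed"] += 1
        elif result == "FAIL":
            counts["failed"] += 1
        else:
            counts["open"] += 1

total = sum(counts.values())
print("%d test cases: %d passed, %d failed, %d not yet reported"
      % (total, counts["passed"], counts["failed"], counts["open"]))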

Yes, it can, but it has to be done.

New test sequences against subsequent release candidates or milestone builds 
should not be difficult to create by cloning onto a new wiki page.

Again, yes, but it has to be done.

I would definitely support the idea of testing such a tool.

Juergen

  - Dennis

PS: Some philosophy for work in this kind of project:  It should be possible to 
accomplish the work with minimal tooling, so anyone can participate and also 
support the analysis and any coordination.  There can be additional automation. 
 However, a way for folks to operate with the minimal tooling should remain 
possible.  It is how there can be sustained expansion of participation and 
tolerance of turnover.  The on-ramp for entry-level participation has to be 
minimal.  It should be possible to understand what the testing process is, and 
to contribute, without having to also master specialized technology.  That's my 
thinking based on my observations of Apache OpenOffice so far.  In particular, 
all structures that are created to facilitate coordinated work must be designed 
to work with all-volunteer resources, whether or not some or many of those 
willing and able to participate are being compensated for their efforts.

-----Original Message-----
From: Ji Yan [mailto:[email protected]]
Sent: Thursday, February 16, 2012 21:28
To: [email protected]
Subject: Re: Proposal for AOO test tool

     Thanks for all of your responses. I like your brilliant ideas, but from a
QA point of view, I'm afraid they couldn't help our work.
     A formal tester should follow a test case to do the test; once the test is
done, the tester fills in the result in a document, which we call an execution
record. The QA lead will regularly generate a test report based on the execution
records, and the QA manager, PM, and others will know the test status from it.
From this procedure we can tell that the test case is the base element of
testing, and that we need to generate the test report based on execution records.
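
     (For illustration only: if the execution records were kept in a simple
structured file, say a CSV with one row per executed test, the regular test
report described above could be generated with a short script along these
lines.  The file name and column names are made up for the example.)

import csv
from collections import Counter

# Summarize execution records per build: one CSV row per executed test with
# (assumed) columns test_id, build, tester, result.
with open("execution_records.csv", newline="", encoding="utf-8") as f:
    records = list(csv.DictReader(f))

by_build = {}
for rec in records:
    by_build.setdefault(rec["build"], Counter())[rec["result"]] += 1

for build in sorted(by_build):
    results = by_build[build]
    print("%s: %d executed, %d passed, %d failed, %d blocked"
          % (build, sum(results.values()),
             results["Passed"], results["Failed"], results["Blocked"]))
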
     If we use bugzilla to record test cases, where could we put the results?
If we put test cases on a wiki page, where could we put the results? On another
wiki page? And how could we generate the test report? I don't think bugzilla
and wiki pages are the right approach.
     So, a powerful test management tool would help us manage test cases, test
execution, and the work procedure. So I strongly recommend using a test tool
to manage our QA work.

      Here I bring up another question: if I want to install the tool on a web
server, how do I do that? Who can help me with this?


2012/2/14 Andre Fischer <[email protected]>

On 13.02.2012 22:17, Rob Weir wrote:

On Mon, Feb 13, 2012 at 6:55 AM, Ji Yan <[email protected]> wrote:

Hi all,

  Recently, I have been thinking about how testing work should be done and what
procedure should be followed under the Apache OO structure. Before OO went into
the ASF, testing work was controlled by QUASTe and manual test cases were stored
in TCM, but both tools were disconnected once Oracle donated OO to Apache. Now
it's time for us to think about how we can move on with testing.
  For AOO 3.4 we store the manual test scripts on a wiki page; it's a good
place for now, but it should not be permanent, as it's hard to tell the test
status and collect testing data, and it has no connection with the automation
test tool.


I wonder if Bugzilla would be better than the wiki?


Hm, to me the wiki seems to be a better place.  I think of the manual test
cases as some form of documentation (about how and what to test). The wiki
provides better support for organizing and searching.  But, not being a QA
engineer, I can easily be mistaken.

-Andre



We could create a "product" in BZ for all test cases, with
"components" under that for different test areas, like "performance
test", "smoke test", "detailed test", etc.

One BZ issue per test case.

For each test pass, we simply reset each test case/issue back to the "New"
state.  We then test each issue.  If the test case passes, then we
mark the BZ issue as closed.  If the test case fails, then we already
have a BZ issue for the developers.

Pro: Makes it very easy to make new test cases from existing BZ
issues, or to make BZ issues from testcases.

Con: Reporting not so good.   Does not handle doing multiple test
passes in parallel.  For example, if we wanted to test AOO 4.0 in
parallel with a maintenance AOO 3.4.1 release.
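
(As a rough illustration of how such a test pass could be driven by a small
script instead of by hand: a sketch only, assuming a Bugzilla new enough to
expose the REST API, with a made-up base URL and the hypothetical
product/component names from the proposal above.)

import requests

BASE = "https://bugzilla.example.org/rest"   # made-up Bugzilla REST endpoint
API_KEY = "..."                              # tester's personal API key

# Fetch the test-case issues for one test area of the current test pass.
resp = requests.get(BASE + "/bug",
                    params={"product": "AOO Test Cases",
                            "component": "smoke test",
                            "api_key": API_KEY})
open_cases = [bug for bug in resp.json()["bugs"]
              if bug["status"] not in ("RESOLVED", "VERIFIED", "CLOSED")]
print("%d test cases still to run" % len(open_cases))

def mark_passed(bug_id, comment):
    """Close out a test-case issue once the manual test has passed."""
    requests.put("%s/bug/%d" % (BASE, bug_id),
                 params={"api_key": API_KEY},
                 json={"status": "RESOLVED",
                       "resolution": "FIXED",
                       "comment": {"body": comment}})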


   After reviewing some tools, I find that "Test Link"[1] may be the proper
tool for us to manage testing work. If anyone has any suggestions on other
tools, please let me know. The target is to customize and deploy it to the OO
website. I'll move forward with this tool if there are no objections.

[1] http://testlink.sourceforge.net/docs/testLink.php
--


I tried their demo site.  It was very slow.  Does anyone have
experience with Test Link?

-Rob


Thanks & Best Regards, Yan Ji




