--- Larry Kollar <[EMAIL PROTECTED]> wrote:
> Jesper Skov <[EMAIL PROTECTED]> wrote:
> 
> > I only see one problem with this: we do not have anyone willing
> > to take on the job as QA+Release Engineering ....
> > 
> > We'd need that person (non-programmer, but able to do a CVS
> > checkout and build AbiWord) to document the process so it can be
> > handed over to someone else should it be necessary.
> 
> ... more snippage ...
> 
> > When I do QA+RM for a customer for Red Hat/eCos it costs me at a
> > minimum a full day's work to follow a QA sheet by hand and go
> > through the motions, do the README, etc.  We have pretty much a
> > turn-key release system, and before a release we have run 12k+
> > tests in our test farm.  It's a chore to do - but I do it because
> > it's necessary and because I'm paid to do it.
> 
> A QA "sheet"? As in singular?
> 
> [Jesper already knows this stuff, to be sure, but just in case some
> others don't... you might want to know what you're getting into in
> case you want to volunteer. :-)]
> 
> My day job isn't verification, but I work with them from time to
> time. They have dozens (if not hundreds) of one- or two-page
> procedures that they go through for each release (I helped them
> develop a template & wrote one or two sheets for them, and some of
> our documentation also goes through a verification process.)  It's
> basically a checklist with a pass/fail result depending on what
> happens.
> 
> Failed test cases are not always showstoppers; each one is filed
> as a bug & evaluated individually. But quantity as well as quality
> counts; a certain percentage of failed test cases (regardless of
> severity) will hold up a release too.
> 
> > Now compare that to the rush jobs we do with AbiWord. ...
> > We just tag the tree (with varying success, I might add; this
> > should be scripted!), get people to build for all the platforms,
> > and hope *others* will report problems they find in time for us
> > to do something about it.
> 
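The tagging step Jesper wants scripted could start as a few lines of Python around `cvs rtag`. This is only a sketch: the `release-X_Y_Z` tag scheme and the `abi` module name are invented for illustration, not AbiWord's actual conventions.

```python
# Sketch of a scripted release-tag step.  The "release-X_Y_Z" naming
# scheme and the "abi" module name are invented for illustration.
import subprocess

def tag_name(version):
    """Map a version like '1.0.2' to a CVS-safe tag, 'release-1_0_2'."""
    return "release-" + version.replace(".", "_")

def tag_tree(version, module="abi", dry_run=True):
    """Tag the repository; with dry_run, just show the command."""
    cmd = ["cvs", "rtag", tag_name(version), module]
    if dry_run:
        print("would run: " + " ".join(cmd))
        return 0
    return subprocess.call(cmd)

# Dry run: show what tagging 1.0.2 would look like.
tag_tree("1.0.2")
```

Doing the rename in one place (`tag_name`) would at least stop the "varying success" of hand-typed tags.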
> This, I think, is a "feature" of the free/open source software
> development model -- we make cable data/telephony equipment, and
> all our releases have to "soak" for a certain number of
> equipment/hours (with a beta tester) before the release. Compare
> that to a system where anyone can snag the current source from CVS,
> or download a daily snapshot, or stick with pre-compiled binaries.
> Free has a price, and that price is responsibility -- in a literal
> sense -- you, the user, should respond to the developers when you
> find a problem. (Searching the bug list to avoid duplicate entries
> helps, but you might not always find a known bug even with a
> search -- that happened to me.)
> 
> > ... In short: we need to get *way* better at handling a release
> > now that we've gone 1.x. And we need *someone* with time, energy
> > and discipline (and preferably no programming skill) to take
> > care of it.
> 
> Sam T. mentioned the bazaar model -- but what he described is,
> IMHO, about what we {are, should be} doing now. OTOH, Sam's right
> -- we have a huge pool of potential testers to draw from.
> 
> Here are my own suggestions about how QA could be done:
> 
>   - Abi is scriptable; anyone with the inclination and energy
>     could write up some automated tests. Automated tests could
>     cover a significant fraction of test cases. (Or,
>     alternatively, automated tests could be reserved for
>     development -- IOW, the source isn't released for QA before it
>     passes all the automated tests.)

See this bug:
http://bugzilla.abisource.com/show_bug.cgi?id=3133
And add comments to it.
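As a rough illustration of what such automated tests could grow from, here is a minimal checklist runner. The test commands below are placeholders; AbiWord's real scripting hooks would supply the actual ones.

```python
# Minimal QA checklist runner: each entry is (name, shell command);
# a nonzero exit status counts as a failure.  The commands below are
# placeholders, not real AbiWord test hooks.
import subprocess

TESTS = [
    ("open-abw",   "true"),   # placeholder: open a .abw file
    ("import-rtf", "true"),   # placeholder: import an RTF file
]

def run_tests(tests):
    """Run each command; return a {name: passed} mapping."""
    results = {}
    for name, cmd in tests:
        results[name] = (subprocess.call(cmd, shell=True) == 0)
    return results

def summarize(results):
    """One-line pass/fail tally for the release report."""
    passed = sum(results.values())
    return "%d/%d passed" % (passed, len(results))

results = run_tests(TESTS)
print(summarize(results))  # both placeholder commands succeed
```

The same tally could feed the "percentage of failed test cases holds up a release" rule Larry describes.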

>   - Someone who knows what they're doing (Jesper?) needs to make
>     a list of all the things that we need to test in a release. I
>     could probably come up with several dozen obvious things
>     myself (opening various files, importing various formats,
>     handling images, etc) but I doubt that any single person would
>     think of *all* the stuff that should be tested. Developers and
>     users alike should work on the list. I'm guessing there's
>     probably 200 or more things to test, depending on how
>     fine-grained we want to get.

I think we should maintain a file called features.txt which lists
every single feature (and subfeature) in AbiWord.  A goal would be
to eventually have a way to test each one of them.  I'd also like
something similar showing which features each importer and exporter
supports.
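Such a features.txt could then be checked mechanically. A sketch, assuming an invented two-column `feature<TAB>status` layout (the real file format would be whatever we settle on):

```python
# Report features not yet marked "tested" in a hypothetical
# features.txt whose lines look like: "tables<TAB>tested".
def untested_features(lines):
    """Return feature names whose status is anything but 'tested'."""
    missing = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, status = line.partition("\t")
        if status.strip() != "tested":
            missing.append(name)
    return missing

# Example with an in-memory file:
sample = ["# feature\tstatus", "tables\ttested", "mail-merge\tuntested"]
print(untested_features(sample))  # -> ['mail-merge']
```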

Andrew Dunbar.

>   - Unless someone can provide us with a ready-made form for
>     creating test cases, that needs to be done. I'll "take that
>     action" if nothing pre-made is forthcoming. I'd *like* to see
>     a web form that allows over-the-Net entry of test case
>     results; or users could email them to abi-dev or perhaps a
>     mailing list created for the purpose. (The web form could also
>     do the mailing.)
> 
>   - Write up test cases from the list. I'll help.
> 
>   - Write a procedure for QA testers to follow.
> 
>   - Recruit QA testers from abi-users.
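The web-form idea above could start as small as a CGI script that formats a submitted result and mails it to a list. A sketch of the formatting half; the field names and the list address mentioned in the comment are made up:

```python
# Turn one submitted test-case result into a plain-text mail body.
# The field names here are invented examples, not a settled schema.
def format_report(fields):
    """Render a submitted test result for mailing to the list."""
    lines = ["Test case: %s" % fields.get("case", "(none)"),
             "Result:    %s" % fields.get("result", "(none)"),
             "Platform:  %s" % fields.get("platform", "(unknown)"),
             "Comments:  %s" % fields.get("comments", "")]
    return "\n".join(lines)

# A CGI wrapper would read the fields with the cgi module and hand
# the body to smtplib, addressed to (say) a hypothetical abiword-qa
# list.
print(format_report({"case": "open-abw", "result": "pass"}))
```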
> 
> 
> Feel free to chew this up & spit it out....
> 
> -- 
> Larry Kollar   k o l l a r  at  a l l t e l . n e t
> "Content creators are the engine that drives value in the
> information life cycle."   -- Barry Schaeffer, on XML-Doc

=====
http://linguaphile.sourceforge.net http://www.abisource.com
