On Wednesday 10 February 2010, Edgar Grimberg wrote:
> On Wed, Feb 10, 2010 at 10:38 PM, David Brownell <[email protected]> wrote:
> >
> > Yes, systematic testing to help uncover regressions is really The Way
> > To Do Things.  I'm not sure how complete our test coverage is, though.
> 
> One entire category was intentionally left out: commands specific to targets.

How about coverage of boards and board-level mechanisms, like 'reset-init'
event handlers to initialize things?
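
(For readers outside the thread: a board config's 'reset-init' event handler runs Tcl after reset to bring up clocks and memory before anything is loaded. A minimal sketch is below; the `mww` addresses and values are made-up placeholders, not any real board's registers.)

```tcl
# Hypothetical reset-init handler for a board .cfg file.
# Register addresses/values are placeholders for illustration only.
$_TARGETNAME configure -event reset-init {
    mww 0xFFFFF000 0x00000001   ;# (placeholder) enable main oscillator/PLL
    sleep 10                    ;# let the clock settle
    mww 0xFFFFF004 0x00000005   ;# (placeholder) program external SDRAM timing

    # once the core clock is fast, the JTAG clock can usually be raised
    jtag_khz 6000
}
```

Exercising handlers like this across boards is exactly the coverage in question: they are board-specific, so they only get tested on hardware someone actually has on the bench.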


> The goal of this first attempt at systematic tests was to prove
> that "basic" functionality is there for each of the tested targets.
> As a constraint, it is necessary that the tests can easily be run
> and reproduced.

That constraint is present for almost all testing.  :)

 
> > I still have some board tests on my list-of-stuff-to-do, but I did get
> > most of them done.
> 
> Post them over and I'll make sure to include them in the template and
> ask Laurentiu to run them when he gets the chance.

He won't be able to do that with boards he doesn't have.  ;)


> > Also worth mentioning:  I think this time we've been better at holding
> > off changes in the RC cycle (and, partly, before that) which didn't address
> > real problems ... and had somewhat better discipline about avoiding (and
> > resolving!) regressions during the merge window.
> 
> Now we also have a list of things that are working, so it's easier to
> spot regressions.

Yep.  That's part of any systematic approach to testing:  you can
say "these all passed last time, so they've got to pass for this
upcoming release too".  Requires a stable farm of test hardware.

Of course, there's also value in more ad-hoc testing; it's likely
to uncover different issues.  But using purely ad-hoc testing would
ensure there are significant holes in test coverage.

- Dave

_______________________________________________
Openocd-development mailing list
[email protected]
https://lists.berlios.de/mailman/listinfo/openocd-development