On Thu, Jan 24, 2013 at 9:47 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Thu, Jan 24, 2013 at 9:39 AM, Karl Rupp <rupp at mcs.anl.gov> wrote:
>
>> Testing for the same number of iterations is - as you mentioned - a
>> terrible metric. I see this regularly on GPUs, where rounding modes
>> differ slightly from those on CPUs. Running a fixed (low) number of
>> iterations is certainly the better choice here, provided that the
>> systems we use for the tests are neither too ill-conditioned nor too
>> well-behaved, so that we can eventually reuse the tests for some
>> preconditioners.
>
> That certainly makes sense for tests of functionality, but not for the
> examples/tutorials that new users encounter, lest they get the
> impression that they should use such options.
>
> Do you have much experience with code coverage tools? It would be very
> useful if we could automatically identify which tests serve no useful
> purpose. The amount of time taken by make alltests is currently
> unreasonable, and though parallel testing will help, I suspect there
> are also many tests that could be removed (and time-consuming tests
> that could be made much faster without affecting their usefulness).

Satish had gcov working before, but it did not prove very useful. First,
we generally write tests to exercise a workflow rather than as unit
tests. Second, coverage ignores the path taken to reach a given line of
code. My impression is that coverage tools are only useful when they
tell you which lines are never exercised.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
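
As a concrete illustration of Karl's fixed-iteration suggestion, here is
a minimal sketch using standard KSP command-line options (the tutorial
binary, process count, solver choices, and tolerances are illustrative
assumptions, not anything prescribed in this thread). The idea is to set
the relative tolerance unreachably tight so the convergence test never
fires early, cap the iteration count, and then compare residual norms
loosely rather than diffing iteration counts:

    # Run exactly 5 GMRES iterations on any platform: -ksp_rtol is set
    # unreachably tight so the solve always stops at -ksp_max_it.
    # (ex2 stands in for any KSP tutorial; 2 processes are arbitrary.)
    mpiexec -n 2 ./ex2 -ksp_type gmres -pc_type jacobi \
        -ksp_max_it 5 -ksp_rtol 1e-50 -ksp_monitor_short

A test harness would then compare the monitored residual norms against a
loose relative tolerance, which is robust to the CPU/GPU rounding
differences Karl mentions.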

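On the coverage question, here is a minimal sketch of the
never-exercised-lines workflow Matt alludes to, assuming a GCC toolchain
(the build invocation is schematic and the file path is just an example;
PETSc's configure-based build would pass the flags differently):

    # Instrument the build: --coverage implies -fprofile-arcs and
    # -ftest-coverage, producing .gcno notes files at compile time.
    make CFLAGS="--coverage" LDFLAGS="--coverage"
    # Running the suite writes .gcda execution counts next to the objects.
    make alltests
    # Annotate a source file; lines marked '#####' in the resulting
    # .gcov output were never executed by any test.
    gcov src/ksp/ksp/interface/itfunc.c

Per Matt's caveat, this only flags lines that no test reaches; it says
nothing about whether the paths leading to a covered line are themselves
meaningfully tested.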