Patrick,

I think your situation is quite specific and cannot be implemented in just
20 lines of code, but it should not be hard to handle by writing a few
classes of your own.

First of all, it looks like you will be creating some files that the test
suite depends on. If you lose them or they become corrupt, your tests will
be useless for a while.
You want to dedicate a directory on a hard drive that: A) nobody else
messes with; B) is backed up; C) if the tests need to run on more than one
computer, you can ask your sysadmin to expose it as a network-mapped drive,
say M:.
An alternative is using a database, but that introduces extra complexity,
and you probably don't want that just for the sake of implementing these
tests.

The following attributes:

[Source(Reader = typeof(MatrixEquationReader), File="SomeSillyFile.mtx")]
[Compare(typeof(MatrixEquationComparer))]
[Target(Writer=typeof(MatrixEquationWriter),File="SomeSillyResultFile.xml")]

are syntactic sugar for a few dozen lines of C# code. You are trying to do
something specific, so you have to decide on the format of
"SomeSillyFile.mtx" and "SomeSillyResultFile.xml". By XML file you probably
do not mean the XML test report, and I do not think you want that anyway. I
think you will want to write the Reader and Writer classes yourself. My
advice: once you know what you need, pick file formats that are as simple
as possible, because you can end up spending a large chunk of your time
writing a file parser. Plain ASCII comma-separated values (CSV) is a good
format for storing tabular data, and there are libraries for it.
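Even without a library, a minimal CSV reader for a matrix is only a few
lines. This is just a sketch under an assumed layout (no header, one matrix
row per line, comma-separated doubles); the class name and format are mine,
not something from MbUnit:

```csharp
using System;
using System.IO;

// Sketch: reads a matrix stored as plain CSV, one row per line.
// The file layout (no header, comma-separated doubles) is an assumption.
class CsvMatrixReader {
    public static double[][] Read(string path) {
        string[] lines = File.ReadAllLines(path);
        double[][] matrix = new double[lines.Length][];
        for (int i = 0; i < lines.Length; i++) {
            string[] fields = lines[i].Split(',');
            matrix[i] = new double[fields.Length];
            for (int j = 0; j < fields.Length; j++) {
                matrix[i][j] = double.Parse(fields[j].Trim());
            }
        }
        return matrix;
    }
}
```

Once the format gets any more complicated than this (quoted fields,
embedded commas), switch to a real CSV library instead of growing the
parser.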

Now, I am not sure how extensively you want to test your solver or how many
data sets you are creating (I'm guessing more than 10, if you bothered to
look into fancy test decorators in the first place). That will affect how
you structure the directory containing the input and output files. Suppose
you can categorize the problems being solved into three types: "TypeA",
"TypeB", "TypeC". Then you could organize the directory like so:

M:\Tests\Data\Equation\
  \--TypeA
     input_file01.csv
     output_file01.csv
     input_file02.csv
     output_file02.csv
     ...
     input_file25.csv
     output_file25.csv
  \--TypeB
     input_file01.csv
     output_file01.csv
     input_file02.csv
     output_file02.csv
     ...
     input_file25.csv
     output_file25.csv
  \--TypeC
     input_file01.csv
     output_file01.csv
     input_file02.csv
     output_file02.csv
     ...
     input_file25.csv
     output_file25.csv

and then have a Scanner class, which scans the directory and builds an
object that can iteratively hand you pairs of files of type A, B, C, or all
of them.

The scanner would look like so:

class Scanner {
    // Pairs of input/output files discovered on disk.
    private List<PairOfFiles> internalDataStructure = new List<PairOfFiles>();

    public Scanner() {
        // Traverse the directory and build up the data structure.
    }

    public IEnumerable<PairOfFiles> GetFiles(string type) {
        foreach(PairOfFiles pair in internalDataStructure) {
            if(type == "ALL" || pair.Type == type) {
                yield return pair;
            }
        }
    }
}

This class is somewhat lengthy, but it makes your test code a lot shorter.

[TestFixture]
public class MatrixSolverTester {

   Scanner scanner;

   // Default constructor

   [SetUp]
   public void Init() {
       scanner = new Scanner();
   }

   [Test]
   public void TestAll() {
       // "in" and "out" are reserved words in C#, so use other names.
       Input input;
       Output output;

       foreach(PairOfFiles pair in scanner.GetFiles("ALL")) {
          input = Reader.Read(pair.InputFile);
          // Solve
          output = input.Something(...);
          // Compare
          Writer.Write(output, pair.OutputFile);
       }
   }

   [Test]
   public void TestTypeA() {
       Input input;
       Output output;

       foreach(PairOfFiles pair in scanner.GetFiles("TypeA")) {
          input = Reader.Read(pair.InputFile);
          // Solve
          output = input.Something(...);
          // Compare
          Writer.Write(output, pair.OutputFile);
       }
   }

   // And so on ...
}

The above example shows how you can deal with lots of input/output files by
writing some code of your own. The tests themselves stay relatively short
and readable.
That takes care of two of the three decorators:

[Source(Reader = typeof(MatrixEquationReader), File="SomeSillyFile.mtx")]
[Compare(typeof(MatrixEquationComparer))]
[Target(Writer=typeof(MatrixEquationWriter),File="SomeSillyResultFile.xml")]

If you want to store the running time for comparison, you can keep it in a
"time_filexx.csv". Your PairOfFiles object then becomes a TripleOfFiles
(or TupleOfFiles) with more fields.

If you want precise time measurement, I would calculate it myself.
DateTime.Now only has a resolution of roughly 10-15 ms, so
System.Diagnostics.Stopwatch is the better tool:

Stopwatch watch = Stopwatch.StartNew();
// some code here
watch.Stop();
TimeSpan duration = watch.Elapsed;

I do not know whether the algorithm you use for solving changes over time,
but you want to make sure it stays consistent. If it does change, you
probably want to store the results in time-stamped files, and the logic of
the Scanner class would become more complex.
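Generating such time-stamped names is cheap; something along these lines
(the naming scheme is just a suggestion, not anything MbUnit provides):

```csharp
using System;

class TimestampedName {
    // Sketch: one result file per run, named by wall-clock time, so a new
    // run never overwrites the previous results.
    public static string For(string prefix) {
        return string.Format("{0}_{1:yyyyMMdd_HHmmss}.csv", prefix, DateTime.Now);
    }
}
```

The scanner then has to sort matching files by name (the format above sorts
chronologically) and decide which run is the baseline.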

Another thing I am not sure about is how you compare the results. I am
guessing the results differ slightly each time because no closed-form
solution exists and the algorithm uses some sort of numerical
approximation. You'd have to write the comparer class yourself and think
about how it should work: it has to decide whether one vector is
acceptably close to another.

It sounds to me like you want to create a baseline (several results) using
your algorithm at a point when you know it works correctly, and save those
somewhere. Then, during testing, you would solve the same equation several
times and make sure the N results in the baseline and the N results you
just got look reasonably close to each other. If the solver uses random
number generators, you may well find a stored result A and a new result B
that are not terribly similar (A may be in the left tail of a
distribution, and B in the right tail). But when you extract statistics
over N stored results and N new results, the numbers should be consistent.
You have to know how the algorithm works before you can decide what
deviation is acceptable, or you have to know what sort of deviation between
consecutive results the client is willing to tolerate. You probably know
more than I do about how to compare the results.
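As a starting point, a comparer might check element-wise agreement within a
relative tolerance. The class name and the tolerance value below are
placeholders; the right tolerance depends entirely on your solver:

```csharp
using System;

// Sketch of a comparer for approximate numerical results. The relative
// tolerance is a placeholder you would pick based on the algorithm.
class VectorComparer {
    private readonly double tolerance;

    public VectorComparer(double tolerance) {
        this.tolerance = tolerance;
    }

    // True if the two vectors agree element-wise within the relative
    // tolerance (falling back to absolute comparison near zero).
    public bool AreSimilar(double[] expected, double[] actual) {
        if (expected.Length != actual.Length) return false;
        for (int i = 0; i < expected.Length; i++) {
            double scale = Math.Max(Math.Abs(expected[i]), 1.0);
            if (Math.Abs(expected[i] - actual[i]) > tolerance * scale) {
                return false;
            }
        }
        return true;
    }
}
```

For a statistical comparison across N runs, you would compare summary
statistics (mean, standard deviation of the residuals) rather than
individual vectors, but the element-wise check is the building block.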

However, I do not think you really need a decorator like this one:
[Compare(typeof(MatrixEquationComparer))]

The Comparer class won't be simple anyway, so what's the big deal if you
add a few more lines of code to your test suite? Just initialize the
comparer in the constructor or in SetUp, and use it inside a test. The
decorator does not really shorten your test.

That's how I would approach this, but of course you know more about the
project you are working on than I do.

- Leonid

On 8/9/07, P. van der Velde <[EMAIL PROTECTED]> wrote:
>
>
> Hi Leonid
>
> Thanks for your answer! See my answers below.
>
> Leonid Lastovkin wrote:
> > Patrick,
> >
> > I am not sure I understood the following 100%:
> >
> > In essence I need a way to track the accuracy of class
> > implementations. I'm
> > mainly interested in numerical stuff so the answer can be slightly
> > different each time, making normal unit tests complicated.
> >
> > Are you talking about profiling, e.g. measuring the performance of
> > part of the system as opposed to running a series of tests (which are
> > essentially methods that either pass or fail?)
> Eh, sort of both. I want to automate my verifications. Say you have
> something like a matrix library. The operations (add, subtract, multiply,
> etc.) are easy to test, and that can be unit tested. The matrix solvers
> (they solve Ax = b), however, are a different story. So I want to do
> something like:
>
> [Verification]
> [Source(Reader = typeof(MatrixEquationReader), File="SomeSillyFile.mtx")]
> [Compare(typeof(MatrixEquationComparer))]
> [Target(Writer=typeof(MatrixEquationWriter),
> File="SomeSillyResultFile.xml")]
> public void MatrixSolve(Matrix a, Vector b, Vector result)
> {
>     MatrixSolver solver = new MatrixSolver();
>
>     // Solve the equation Ax = b
>    Vector internalX = solver.Solve(a, b);
>
>     // Compare the results
>     Verification.Compare(result, internalX);
> }
> In this case we mark the 'test' as a verification and then provide a
> source to read the data from, a comparison class type and a target to
> write the results to. Then when this test is executed the testing
> framework would create a reader, tell it to read the file (which
> contains the matrix and the two vectors and then pass these to the test.
> The test would invoke the solver and pass the result and the original
> back to the framework for comparison and storage. The framework could
> then store all the required information (test id, time taken, result,
> original, difference, pass / fail based on the difference etc.). Tests
> like these would probably not be run as frequently as unit tests, but
> they would still be run regularly.
> It should also be possible to add new source files and have the tests
> use these without intervention. That way if new test cases come to light
> you can immediately add those to the verification tests, without having
> to change the source code.
>
> Note that this syntax is not thought about at all and probably won't
> work. I just hope it explains what it is that I want.
>
> >
> > <snip>
> > If you only care about the big picture: that is making sure that
> > MethodA which used to run in 10-12 seconds last release does not take
> > 28 seconds six months later. This can be framed as a test. You have to
> > keep the same settings: same hardware, OS, Debug vs. Release build,
> > etc. Otherwise you would be comparing apples and oranges. It is true,
> > each run would take a different amount of time to run. You could take
> > a few dozen samples and figure out what the distribution is. You can
> > then warn if the running time is outside of two stdev, and fail if it
> > is outside of four sigma. This is not perfect, as once in a while the
> > test may still fail, but that's the best you can do when dealing with
> > running time which produces random results.
> This sounds sort of like the other thing I'm trying to achieve.
> Automated checks on the performance of a method/class etc.
> >
> > There are two ways to access the running times:
> > 1) You could create self-contained executable module which runs MbUnit
> > tests
> >
> http://www.mertner.com/confluence/display/MbUnit/TestExecutionUsingSelfExecutableTestAssemblies
> > and then get access to the object model of the results (there is API
> > for it as well).
> > I've had some problems with accessing the object model (perhaps it is
> > just me),
> > 2) so I am dumping the results into an XML file and then parsing that
> > file. It is not terribly hard in C#.
> mmmm good ones. I'll have a look at these :-)
> >
> > I really like MbUnit, but I would not use it for elaborate profiling.
> > I used to be part of a large group where separate groups of people
> > were working on testing of the software and on measuring
> > /comparing/analyzing performance. In my mind those are separate
> > activities.
> > You could have slow code that does the job right. Conversely, you do
> > not have to wait until the code is bug-free to start analyzing how
> > much slower/faster it is. One of the tools out there is dotTrace
> > profiler from JetBrains.
> I've got the dotTrace profiler, it's a really nice tool :-)
>
> Thanks again for your suggestions, they're very helpful :-)
>
> Regards
>
> Patrick
>
> >
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"MbUnit.User" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/MbUnitUser?hl=en
-~----------~----~----~----~------~----~------~--~---