On Saturday, 20 August 2016 at 20:39:13 UTC, Engine Machine wrote:
We have unittest, what about examples?

Examples are self-contained short programs; each block acts as a "main" function. One can run all the examples and spit out all the output consecutively. It also allows for more robust testing, since it adds another layer.

It would provide better examples for the docs, too. Instead of using assert to check results, we could see a real program in action. The output of the example could easily be generated and appended to the end of the code.

Seems like a win win! (maybe examples is not a good keyword, but that is moot)

It seems like there could be a library function that uses compile-time reflection to collect all the functions in a module whose names start with "maintest" and calls each of them in a try block. The catch block would just print the error messages and incorrect return codes to stderr and count the total failures, and finally the function would count the number of tests run so it can return the success ratio.

That would be trivial to write in a language that has runtime reflection, like Python. Someone who knows D well could probably write it for D in a few minutes. It's too hot here (in the Seattle area) right now for me, with my limited D knowledge, to feel like trying it instead of kibitzing.
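As a rough illustration of the runtime-reflection version, here is a minimal Python sketch (the function names and the two sample maintests are made up for the example): it scans the current module for callables whose names start with "maintest", runs each in a try block, prints failures to stderr, and returns the success ratio.

```python
import sys
import traceback

def maintest_x(args):
    """Sample test-main: succeeds (returns 0)."""
    return 0

def maintest_y(args):
    """Sample test-main: always fails by raising."""
    raise RuntimeError("no catch")

def run_maintests(module=None, prefix="maintest"):
    """Collect every callable in `module` whose name starts with `prefix`,
    call it in a try block, count fails, and return the success ratio."""
    module = module or sys.modules[__name__]
    tests = [(name, fn) for name, fn in vars(module).items()
             if name.startswith(prefix) and callable(fn)]
    fails = 0
    for name, fn in tests:
        try:
            code = fn([])
            if code not in (0, None):
                fails += 1
                print(f"{name}: nonzero return code {code}", file=sys.stderr)
        except Exception:
            fails += 1
            traceback.print_exc(file=sys.stderr)
    run = len(tests)
    return (run - fails) / run if run else 1.0

if __name__ == "__main__":
    print(run_maintests())  # one of the two sample tests passes -> 0.5
```

With the two sample functions above, one passes and one raises, so the ratio comes out to 0.5.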

The point would be that when writing something that has one development-stage 'main', the other mains (alternatives, planned replacements, or plain maintests) would stay live all the time, as long as the function that calls them as tests is included in the real main or, as it probably should be, in a non-release version block or a unittest block.

A well-featured interface would allow calling each maintest with its own simulated command-line args, or a list of lists of strings crossed with a list of functions (args[][] times maintest*[]) for more thorough testing. I've used a feature in NetBeans for Java (hobby project, not a pro dev here) that calls main for a configured build with stored args, but it required manual GUI interaction to switch between the builds.

Another good feature would let a maintest "success" be a user-specified return code for certain args, because when writing something that's supposed to return specific error codes, that functionality should be tested too.
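A Python sketch of that args-times-functions interface with expected return codes (all names here are hypothetical): each function-name glob maps to (args, expected code) pairs, every matching maintest is called once per pair, and a run counts as a fail when the actual return code differs from the expected one or an exception escapes.

```python
import fnmatch
import sys

def maintestX(args):
    # stub: always "succeeds", so it will fail the expected-code-3 case
    return 0

def maintestY(args):
    # returns the arbitrary error code 3 when the incompatible -A switch appears
    return 3 if "-A" in args else 0

def run_maintests_with_args(cases, module=None):
    """`cases` maps a function-name glob to a list of (args, expected_code)
    pairs. Returns (tests_run, fails)."""
    module = module or sys.modules[__name__]
    tests = fails = 0
    for pattern, runs in cases.items():
        for name, fn in vars(module).items():
            if callable(fn) and fnmatch.fnmatch(name, pattern):
                for args, expected in runs:
                    tests += 1
                    try:
                        if fn(args) != expected:
                            fails += 1
                    except Exception as e:
                        fails += 1
                        print(f"{name}{args}: {e}", file=sys.stderr)
    return tests, fails

# four runs total: two arg sets for each of the two sample maintests;
# maintestX fails the second case on purpose
cases = {"maintest*": [(["-x", "-y"], 0), (["-x", "-A"], 3)]}
print(run_maintests_with_args(cases))  # -> (4, 1)
```

The glob pattern also covers the later generalization to arbitrary name patterns rather than one hardwired prefix.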

hypothetical usage code:

unittest
{
    auto mainResults = Maintests([1: [["-x", "-y"], ["0"]], 2: [["-x", "-A"], ["3"]]]);
    // -x and -A switches are incompatible args, so 3 is returned as an arbitrary error code
    assert(mainResults.fails == 1 && mainResults.tests["X"][2].success == false);
}

int maintestX (string[] args)
{
    // TODO
    return 0; // it's going to fail the second test, on purpose
}

int maintestY (string[] args)
{
    // TODO
    return 3 * (args[2] == "-A"); // laziest way to pass the current tests
}

result:

4 tests run with each unittested release build. Doing the same thing otherwise would require compiling two additional versions of the module containing main and running each with specific args 2 times, with the correct return values differing for extra difficulty; or else writing assert(maintestX(["-x", "-y"]) == 0); etc. four times with variations. The number of variations to write out manually would grow with the number of test cases times the number of variations of main, and it still wouldn't ensure that every function that looks like a "maintest" is actually tested.

// ? void maintestZ () { }
// would fail a test that specifies any return value other than 0
// ? int maintest() { throw new Exception("no catch"); }
// would always fail

This might be generalized to
auto results = TestPattern!("maintest*", int[string[]])([["-a"]: 0]);
which might have some more general uses.

[I wrote the above, then I felt like, nah, I don't want to post things that sound like asking for other people to do more work. Now that I've done something on it myself, I'm posting the above for documentation.]
