On Monday, 23 September 2013 at 16:40:56 UTC, jostly wrote:
I think it's great to see the D unit testing ecosystem growing. Since it's still relatively small, I think we have a good chance here to create interoperability between the different frameworks.

As I see it, we have:

1. Running unit tests

This is where D shines with its built-in facility for unit tests. However, it suffers a bit from the fact that, if we use assert, it will stop on the first assertion failure, and there is (as far as I've been able to tell) no reliable way to run specific code before or after all the unit tests. If I'm wrong on that assumption, please correct me; that would simplify the spec running for specd.
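For what it's worth, druntime does let you swap in your own test runner via Runtime.moduleUnitTester, which can at least keep going after a failing module. A minimal sketch (the reporting details here are my own invention, not any framework's):

```d
import core.exception : AssertError;
import core.runtime : Runtime;
import std.stdio : writefln;

/// Runs every module's unittest block, continuing past failing modules.
/// Caveat: within one module, the compiler chains all unittest blocks
/// into a single function, so a failing assert still skips the rest of
/// that module's tests.
bool runAllModuleTests()
{
    size_t failed = 0;
    foreach (m; ModuleInfo)
    {
        if (m is null)
            continue;
        auto test = m.unitTest;
        if (test is null)
            continue;
        try
            test();
        catch (AssertError e)
        {
            ++failed;
            writefln("FAILED in %s(%s): %s", e.file, e.line, e.msg);
        }
    }
    return failed == 0; // false makes the runtime report failure
}

shared static this()
{
    // Replace the default runner, which stops at the first failure.
    Runtime.moduleUnitTester = &runAllModuleTests;
}
```

The shared static this() also gives a natural place for setup code that must run before all tests.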

In specd, the actual code inside the unittest { } sections only collects results, and the reporting is called from a main() supplied by compiling with the version identifier "specrunner" set. I haven't checked whether your dunit does something similar.
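To make that approach concrete, here is an illustrative sketch of the idea (the names are mine, not specd's actual API): the unittest blocks only record results, and a version-gated main() reports them afterwards.

```d
import std.stdio : writeln;

struct SpecResult
{
    string name;
    bool passed;
}

SpecResult[] results; // filled in by the unittest blocks

unittest
{
    results ~= SpecResult("addition works", 2 + 2 == 4);
}

/// Prints a summary and returns the number of failures,
/// so the runner can set the exit code.
size_t report(const SpecResult[] rs)
{
    size_t failures;
    foreach (r; rs)
        if (!r.passed)
        {
            ++failures;
            writeln("FAILED: ", r.name);
        }
    writeln(rs.length, " specs, ", failures, " failures");
    return failures;
}

version (specrunner)
{
    void main()
    {
        report(results);
    }
}
```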

In my understanding, D's built-in support for unittests is best suited for test cases that can be expressed as one-liners. When it gets more complicated, we usually use classes. That's what JUnit and TestNG do and that's what dunit does (this one: https://github.com/linkrope/dunit).

For our software, we even separate the 'src' tree from the 'unittest' tree to not distort the coverage results.

2. Asserting results

These vary from the built-in assert() to the xUnit-like assertEquals() to the more verbose x.must.equal(y) used in specd.

This could easily be standardized by letting all custom asserts throw an AssertError, though I would prefer another exception that encapsulates the expected and actual results, to help with bridging to reporting.
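Such a failure exception could look like this (a hypothetical sketch; the names are mine, not an existing library's): distinct from AssertError, and carrying the expected and actual values for the reporting side.

```d
import std.conv : to;

/// Failure type for test assertions, separate from AssertError,
/// keeping the expected and actual values for the reporter.
class ComparisonFailure : Exception
{
    string expected;
    string actual;

    this(string expected, string actual,
         string file = __FILE__, size_t line = __LINE__)
    {
        this.expected = expected;
        this.actual = actual;
        super("expected " ~ expected ~ " but got " ~ actual, file, line);
    }
}

void assertEqual(T)(T expected, T actual,
                    string file = __FILE__, size_t line = __LINE__)
{
    if (expected != actual)
        throw new ComparisonFailure(expected.to!string, actual.to!string,
                                    file, line);
}
```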

It's too easy to use 'assertEquals' the wrong way around. JUnit defines 'assertEquals(expected, actual)', while DUnit defines it the other way around. In JUnit code, I've seen too many wrong uses like 'assertEquals(answer, 42)', which gives the misleading message "expected ... but got 42".

Even with UFCS, why shouldn't you write '42.assertEqual(actual)'? That's where the "more verbose" 'must' matchers shine: '42.must.equal(actual)' is obviously the wrong way around.
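Such a matcher needs very little machinery. A minimal UFCS sketch (the names are illustrative, not specd's actual API):

```d
import std.conv : to;

/// Wrapper produced by 'must'; holds the actual value.
struct Must(T)
{
    T actual;

    void equal(T expected,
               string file = __FILE__, size_t line = __LINE__)
    {
        if (actual != expected)
            throw new Exception("expected " ~ expected.to!string
                    ~ " but got " ~ actual.to!string, file, line);
    }
}

/// UFCS entry point: actual.must.equal(expected).
auto must(T)(T actual)
{
    return Must!T(actual);
}

unittest
{
    (2 + 2).must.equal(4); // reads left to right: actual, then expected
}
```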

When contracts are violated, you get 'AssertError' exceptions from deep within the code under test. To fix those errors, you may wish for a stack trace. On the other hand, the pretty messages you get for failed test assertions should be enough to fix those failures; in that case, a stack trace would only show the test runner calling the test case.

So: 'must' matchers are better than 'assert...'; and 'AssertError' should not be thrown for failures!

3. Reporting results

If we have moved beyond the basic assert() and use some kind of unit test runner, then we have the ability to report a summary of the tests that were run, and which (and how many) failed.

This is one area where IDE integration would be very nice, and I would very much prefer it if the different unit test frameworks agreed on one standard unit test runner interface, so that the IDE integration problem becomes one of adapting each IDE to one runner interface, instead of adapting each framework to each IDE.

In my experience from the Java and Scala world, the last point is the biggest. Users expect to be able to run unit tests and see the report in whatever standard way their IDE has. In practice this most often means that various libraries pretend to be JUnit when it comes to running tests, because JUnit is supported by all IDEs.

A few days ago, I added such reporting to dunit. An XML test report in the JUnitReport format is now available. We use Jenkins (formerly known as Hudson) for continuous integration, so we can browse our test results and track failures. Nice!
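For reference, a JUnitReport file as consumed by Jenkins looks roughly like this (a minimal hand-written example, not dunit's actual output):

```xml
<testsuite name="example.MathTest" tests="2" failures="1" errors="0" time="0.003">
  <testcase classname="example.MathTest" name="testAdd" time="0.001"/>
  <testcase classname="example.MathTest" name="testDiv" time="0.002">
    <failure message="expected 42 but got 41"/>
  </testcase>
</testsuite>
```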

Let's not end up in that situation, but rather work out a common API to run unit tests, and the D unit test community can be the envy of every other unit tester. :)

Agreed!
