Sorry for the somewhat delayed answer - not sure if anyone has answered your questions in the meantime.

On Friday, 24 July 2015 at 00:19:50 UTC, Walter Bright wrote:
On 7/23/2015 2:08 PM, Dicebot wrote:
It does not protect from errors in the definition:

void foo (R) (R r)
     if (isInputRange!R)
{ r.save(); } // bug: save() is a forward-range primitive the constraint never checks

unittest
{
     SomeForwardRange r; // a forward range - a superset of what the constraint requires
     foo(r);
}

This will compile and show 100% test coverage. Yet when a user tries to use it
with a real input range, it will fail to compile.

That is correct. Some care must be taken that the mock types used in the unit tests actually match what the constraint is, rather than being a superset of them.
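To make that concrete, here is a minimal sketch of such an exact mock (the name InputRangeMock and the test shape are mine, not from the thread): it satisfies isInputRange but deliberately omits save, so instantiating the buggy foo above is rejected at compile time.

import std.range.primitives : isInputRange, isForwardRange;

// An input range and nothing more: deliberately no save(), so it is
// not a forward range.
struct InputRangeMock
{
     int front() { return 0; }
     bool empty() { return true; }
     void popFront() {}
}

static assert(isInputRange!InputRangeMock);
static assert(!isForwardRange!InputRangeMock);

// The buggy foo above cannot even be instantiated with this mock;
// __traits(compiles, ...) gags the instantiation error and lets the
// test record that the definition is rejected.
static assert(!__traits(compiles, foo(InputRangeMock.init)));

Whether hand-maintaining such exact mocks scales is exactly what is disputed below.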

This is absolutely impractical. I will never even consider such an attitude as a solution for production projects. If test coverage can't be verified automatically, it is garbage, period. No one will manually re-verify thousands of lines of code after some trivial refactoring just to make sure the compiler does its job.

By your reasoning `-cov` is not necessary at all - you can do the same manually anyway, with the help of a third-party tool. Yet you advertise it as a crucial D feature (and are totally right about it).

There is quite a notable difference in clarity between an error message that comes from some arcane part of the function body and refers to wrong usage (or is even totally misleading because of UFCS), and a simple, straightforward "Your type X does not
implement method Y necessary for trait Z".

I believe they are the same. "method X does not exist for type Y".

Well, the difference is that you "believe" while I actually write the code and read those error messages. They are not the same at all. In D the error message gets evaluated in the context of the function body and is likely to be completely misleading in all but the most trivial methods. For example, if there is a global UFCS function available with the same name but a different argument list, you will get an error about wrong arguments, not about a missing method.
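To illustrate (the names and signatures here are my own, and the exact diagnostic wording varies between compiler versions):

import std.range.primitives : isInputRange;

// An unrelated free function that happens to be named `save`.
void save(R)(R r, int flags) {}

// An input-only range: no member save().
struct OnlyInput
{
     int front() { return 0; }
     bool empty() { return true; }
     void popFront() {}
}

void bar(R)(R r)
     if (isInputRange!R)
{
     r.save(); // no member save, so this lowers via UFCS to save(r)
}

// bar(OnlyInput()); // the error complains about the argument list of
// the free save(), not about OnlyInput lacking a forward-range primitive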

Coverage does not work with conditional compilation:

void foo (T) ()
{
     import std.stdio;
     static if (is(T == int))
         writeln("1");
     else
         writeln("2");
}

unittest
{
     foo!int();
}

$ dmd -cov=100 -unittest -main ./sample.d
$ ./sample

Let's look at the actual coverage listing (sample.lst):
===============================
       |void foo (T) ()
       |{
       |    import std.stdio;
       |    static if (is(T == int))
      1|        writeln("1");
       |    else
       |        writeln("2");
       |}
       |
       |unittest
       |{
      1|    foo!int();
       |}
       |
sample.d is 100% covered
===============================
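For comparison - a sketch, using the same sample.d as above - instantiating the template with a second type compiles the else branch in, and the listing then shows a count on the second writeln as well:

unittest
{
     foo!int();
     foo!string(); // instantiates the else branch, so writeln("2") is compiled and counted
}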

I look at these all the time. It's pretty obvious that the second writeln is not being compiled in.

Again, this is impractical. You may be capable of reading at the speed of light, but that is not the normal industry case. Programs are huge, changesets are big, time pressure is real. If something can't be verified in an automated way, at least for basic sanity, it is simply not good enough. This is the whole point of the CI revolution.

In practice I will only look into .lst files when working on adding new tests to improve the coverage, and will never be able to do it more often (unless the compiler notifies me to do so). This is a real-world constraint one needs to deal with, no matter what your personal preferences about a good development process are.

Now, if I make a mistake in the second writeln such that it is syntactically correct yet semantically wrong, and I ship it, and it blows up when the customer actually instantiates that line of code,

   -- where is the advantage to me? --

How am I, the developer, better off? How does "well, it looks syntactically like D code, so ship it!" pass any sort of professional quality assurance?


If the compiler actually showed 0 coverage for non-instantiated lines, then an automatic coverage check in CI would complain and the code would never be shipped unless it got covered with tests (which do check the semantics). You are putting it totally backwards.
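For illustration, a hedged sketch of such a gate (a hypothetical helper I am calling covgate, not an existing tool): it scans a dmd .lst listing and fails the build on any executable line with a zero count. If I read dmd's listing format right, uncovered executable lines carry an explicit 0000000 count while non-instantiated template lines have a blank count - so today this gate cannot see the second writeln above, which is precisely the complaint.

// covgate.d - hypothetical CI helper (my sketch, not an existing tool).
// Usage: ./covgate sample.lst
// Exits nonzero if the dmd coverage listing contains an executable
// line whose execution count is zero.
import std.stdio;
import std.string : indexOf, strip;

int main(string[] args)
{
     if (args.length < 2)
     {
         stderr.writeln("usage: covgate <file.lst>");
         return 2;
     }
     int uncovered;
     foreach (line; File(args[1]).byLine)
     {
         auto sep = line.indexOf('|');
         if (sep <= 0)
             continue; // no count column on this line
         auto count = line[0 .. sep].strip;
         if (count == "0000000") // dmd's marker for an uncovered executable line
         {
             writeln("uncovered: ", line[sep + 1 .. $]);
             ++uncovered;
         }
     }
     return uncovered ? 1 : 0;
}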
