Reporting % success rather than demanding 100% success would seem to
be a strictly weaker testing policy.

Arguably, with macros you need fewer features, since `@test a == b`
could recognize an equality test and report what `a` and `b` were. But
one feature we could stand to add is asserting properties that must
hold for all arguments, running through lots of combinations of
instances. In reality, though, we already do some of this, since the
"files full of asserts" in many cases also do nested loops of tests.
Saying we do "just asserts" obscures this fact.
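For concreteness, here is a minimal sketch of what such a
property-checking helper might look like; `check_property` is a
hypothetical name for illustration, not an existing function in Base
or any package:

```julia
# Hypothetical sketch: assert that a property holds for every
# combination of the supplied argument sets. Not an existing API.
function check_property(prop, argsets...)
    failures = Any[]
    # Iterators.product enumerates all combinations of the argument sets.
    for args in Iterators.product(argsets...)
        prop(args...) || push!(failures, args)
    end
    isempty(failures) || error("property failed for: ", failures)
    return true
end

# Example: addition commutes for these sample inputs.
check_property((a, b) -> a + b == b + a, [0, 1, -3, 7], [2, 5, -1])
```

This is essentially the "nested loops of tests" pattern wrapped in a
function, so a failure reports which argument combination broke the
property rather than just tripping an anonymous assert.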


On Mon, Dec 29, 2014 at 5:05 PM, Jameson Nash <[email protected]> wrote:
> I imagine there are advantages to frameworks in that you can mark expected
> failures and continue through the test suite after one fails, giving a
> better % success/failure metric than Julia's simplistic go/no-go approach.
>
> I used JUnit many years ago for a high school class, and found that,
> relative to `@assert` statements, it had more options for asserting various
> approximate and conditional statements that would otherwise have been very
> verbose to write in Java. Browsing back through its website now
> (http://junit.org/, under Usage and Idioms), it apparently now has some
> more features for testing, such as rules, theories, timeouts, and concurrency.
> Those features would likely help improve testing coverage by making tests
> easier to describe.
>
> On Mon Dec 29 2014 at 4:45:53 PM Steven G. Johnson <[email protected]>
> wrote:
>>
>> On Monday, December 29, 2014 4:12:36 PM UTC-5, Stefan Karpinski wrote:
>>>
>>> I didn't read through the broken builds post in detail – thanks for the
>>> clarification. Julia basically uses master as a branch for merging and
>>> simmering experimental work. It seems like many (most?) projects don't do
>>> this, and instead use master for stable work.
>>
>>
>> Yeah, a lot of projects use the Gitflow model, in which a develop branch
>> is used for experimental work and master is used for (nearly) release
>> candidates.
>>
>> I can understand where Dan is coming from in terms of finding issues
>> continually when using Julia, but in my case it's more commonly "this
>> behavior is annoying / could be improved" than "this behavior is wrong".
>> It's rare for me to code for a few hours in Julia without filing issues in
>> the former category, but out of the 300 issues I've filed since 2012, it
>> looks like fewer than two dozen are in the latter "definite bug" category.
>>
>> I don't understand his perspective on "modern test frameworks" in which
>> FactCheck is light-years better than a big file full of asserts.  Maybe my
>> age is showing, but from my perspective FactCheck (and its Midje antecedent)
>> just gives you a slightly more verbose assert syntax and a way of grouping
>> asserts into blocks (which doesn't seem much better than just adding a
>> comment at the top of a group of asserts).   Tastes vary, of course, but Dan
>> seems to be referring to some dramatic advantage that isn't a matter of mere
>> spelling.  What am I missing?
