Rozental, Gennadiy <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> > A half-way solution is to have something like:
> >
> > BOOST_CHECK_EQUAL_NUMBERS(x,y,IsEqual)
> >
> > and let users specify their own Predicates.
>
> There is BOOST_CHECK_PREDICATE
>
Yes, I know.
My point was that with BOOST_CHECK_EQUAL_NUMBERS() the test library
could output something readable of the form:

"numbers x and y are not approximately equal"

It could even add to the output something of the form:

" according to " << Pred ;

which would use the comparator's operator<< so it can
output the relevant information, such as epsilon, scale,
etc.
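
To make this concrete, here is a rough sketch (the comparator
name and its member are purely illustrative, not an existing
Test library interface) of a predicate that carries its own
diagnostic output:

#include <algorithm>
#include <cmath>
#include <iostream>

// Illustrative comparator: it holds its tolerance so the test
// library can both evaluate it and stream it on failure.
struct WithinRelativeError
{
    double epsilon; // relative tolerance

    bool operator()(double x, double y) const
    {
        return std::abs(x - y)
               <= epsilon * std::max(std::abs(x), std::abs(y));
    }
};

// The << operator exposes the relevant parameters.
std::ostream& operator<<(std::ostream& os, WithinRelativeError const& p)
{
    return os << "relative error <= " << p.epsilon;
}

A failed check could then print, say: "numbers 1.0 and 1.1 are
not approximately equal according to relative error <= 1e-09".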

> > By default, the Test library could provide
> > a straight-forward ABSOLUTE-ERROR comparator:
>
> By default, the Test library provides a relative-error comparator, which,
> according to my understanding, is more correct.
>
But there is no such thing as a "more correct" way to compare
FP values at the context-free level of a test library.
You already know that relative errors have to be scaled to be
meaningful, but choosing the right scaling is the complex part.
A default semantic that simply scales epsilon() by either of the
arguments will be simply unusable for most practical tests,
because actual errors will easily exceed that; yet OTOH,
supplying a factor to increase the scaling will mainly lead users
to the problematic Illusion of Simplicity that brought us
to this discussion.
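
A tiny sketch of why: summing a thousand 0.1s (whose exact result
is 100) accumulates an error well beyond epsilon() scaled by the
arguments:

#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    double sum = 0.0;
    for (int i = 0; i < 1000; ++i)
        sum += 0.1;               // exact result would be 100

    double eps = std::numeric_limits<double>::epsilon();

    // On a typical IEEE-754 double implementation this prints an
    // actual error of ~1.4e-12 against a one-epsilon tolerance of
    // ~2.2e-14, so the default check would reject a correct result.
    std::cout << "actual error: " << std::abs(sum - 100.0) << '\n'
              << "eps * scale : " << eps * 100.0 << '\n';
}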

A comparison based on absolute errors is pessimistic, but for unbiased
comparisons it often yields what is expected, much more often
than relative-error based comparisons do.
It isn't smart, but it is easy to understand.

BTW: The default comparator I showed before might better be named
"DifferAtMostBy"

Fernando Cacciola



