Peter Dimov wrote: 
>>> Beman's approach, where unexpected failures were automatically 
>>> determined by comparing the current run with a previous run, seems to
>>> cope better with this scenario, and requires no manual input.
>> 
>> Does it? What if the previous run was a total failure - what is the
>> next one going to show? 
>
> Nothing will go wrong; it's only pass->fail transitions that are
> emphasized. 

But that's my point. If the current run was a disaster, then in the next
one - which can happen an hour later - the new failures won't be
emphasized, since they are not new anymore - even though they _are_
regressions and need to be fixed!

> False pass->fail transitions can only happen for
> compile-fail/link-fail tests that aren't that significant.
>
>
>> IMO it can work only if you have a trusted snapshot of what is
>> considered a "good" CVS state and you update it "pessimistically" -
>> that is, remove the expected failures that are now passing and leave
>> everything else intact - automatically, of course. And that's exactly
>> what we are going to do.
>
> I didn't realize that the plan was to automatically manage the expected
> failures.

It wasn't at the very beginning, but thanks to your and other people's
comments, our understanding evolved, and so did the plan :).
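For the record, the "pessimistic" update described above - remove expected
failures that are now passing, leave everything else intact - can be
sketched as a simple set operation. This is only an illustration; the
function and variable names are mine, not those of the actual Boost
regression tooling.

```python
# Hypothetical sketch of the "pessimistic" expected-failures update:
# drop tests from the expected-failures set only once they pass,
# never add new failures to it automatically.

def update_expected_failures(expected_failures, current_results):
    """expected_failures: set of test names expected to fail.
    current_results: dict mapping test name -> True (pass) / False (fail).
    Returns the updated expected-failures set."""
    now_passing = {t for t, passed in current_results.items() if passed}
    return expected_failures - now_passing

# Example: one expected failure starts passing, one new failure appears.
expected = {"test_a", "test_b"}
results = {"test_a": True, "test_b": False, "test_c": False}

updated = update_expected_failures(expected, results)
# test_a is dropped (now passing); test_b stays expected.

# test_c is NOT absorbed into the baseline, so it keeps being
# reported as an unexpected failure - i.e. a regression - even on
# subsequent runs, which is exactly what the naive previous-run
# comparison fails to do after a "disaster" run.
unexpected = {t for t, ok in results.items() if not ok} - updated
```

The key property is that the trusted baseline only ever shrinks
automatically; growing it (accepting a new failure as expected) stays a
deliberate, manual decision.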

Thank you,
Aleksey
_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
