On 2014-02-26, at 19:11, b...@openinworld.com wrote:

> Camillo Bruni wrote:
>> On 2014-02-25, at 17:04, b...@openinworld.com
>>  wrote:
>> 
>>> I'd like to better understand the semantics of "expected failures" in 
>>> TestRunner.  It seems to me that if you want to ensure that a certain 
>>> operation fails, in a test you'd wrap it as follows...
>>> 
>>>   | shouldFailed |
>>>   shouldFailed := false.
>>>   [ self operationThatShouldFail ] on: Error do: [ shouldFailed := true ].
>>>   self assert: shouldFailed.
>>> 
>> 
>> I prefer:
>> 
>>      self should: [ self operationThatShouldFail ] raise: ASpecificError
>> 
>> Otherwise you can just run the code itself:
>> 
>>      self operationThatShouldFail
>> 
>> the test framework will take care of it and mark the test correctly as a 
>> failure.
>> 
> Camillo, I don't quite follow your second case.  Do you refer to using 
> <expectedFailure> ?  Doing that might be community convention, but "expected 
> failures" for this makes me uncomfortable.  It is not specific about what the 
> failure should be, so the occurrence of non-expected failures is masked.  I 
> really like #should:raise: since it is specific.  It would be nice if as many 
> <expectedFailure>s as possible were converted to #should:raise:, with 
> <expectedFailure> being used only for fubared tests being prioritised to deal 
> with later on, with the ultimate goal of having no <expectedFailure>s in the 
> image.

ah sorry, you are right, I didn't answer correctly.
I don't use <expectedFailure> since I don't understand what it means :P

I prefer #should:raise: too. And when we need to mark a known-failing test to be 
dealt with later (i.e. skip it), we can do the following:

        self skip: 'Because it does not work'
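Putting the two preferred idioms together, a minimal SUnit test case might look like this (the class name, package, and the `ZeroDivide` example are illustrative choices, not from the thread):

```smalltalk
TestCase subclass: #ExampleTest
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'Example-Tests'

ExampleTest >> testDivisionByZeroSignalsError
	"Assert that the operation raises the specific error we expect,
	 instead of masking arbitrary failures behind <expectedFailure>."
	self should: [ 1 / 0 ] raise: ZeroDivide

ExampleTest >> testNotYetWorking
	"Temporarily skip a known-broken test; the runner reports it as
	 skipped rather than failed."
	self skip: 'Because it does not work'
```

With #should:raise: the test still fails if a *different* error is signalled, which is exactly the specificity being argued for above.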
