On 19 Jan 2009, at 5:31 AM, Michael McCracken wrote:

> On Sun, Jan 18, 2009 at 8:58 PM, Gregory Jefferis  
> <jeffe...@gmail.com> wrote:
>>
>> On 2009-01-19 01:42, "Christiaan Hofman" <cmhof...@gmail.com> wrote:
>>
>>>
>>> On 19 Jan 2009, at 2:20 AM, Gregory Jefferis wrote:
>>>
>>>> On 2009-01-19 00:39, "Adam R. Maxwell" <amaxw...@mac.com> wrote:
>>>>
>>> I don't think that's possible, but I don't know too much about unit
>>> tests. I'd rather say that the tests should be designed to be
>>> independent of the prefs.
>> Yes and no.  I think that there could be sensible reasons to want  
>> to test
>> the effect of a preference on program behaviour.

Perhaps, but then only in a context where you're testing such an  
effect. As I said, it's important that a test tests what it is  
intended to test, not anything else.

>>
>
> This is possible- you can remove the application domain prefs (the
> ones the user sets) from NSUserDefaults:
> [[NSUserDefaults standardUserDefaults]
> removePersistentDomainForName:@"edu.ucsd.cs.mmccrack.bibdesk"]
>
> this is probably a good idea to do - so the tests are always testing
> the same thing, and if we need to test the effects of a pref, you just
> set it in the test.
>

Wouldn't that remove the defaults as well? If so, then this wouldn't
work, because many places in the code expect some value to be set for
various preferences. Moreover, OFPreferenceWrapper has an assertion
that checks whether a default value is registered.
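
If the registered defaults do survive that (I haven't checked), the
pattern Mike describes would look roughly like this in a test class.
This is only a sketch: the class name, the pref key, and the tested
behaviour are placeholders, not anything we actually have.

// Sketch only -- class name, pref key, and tested behaviour are placeholders.
#import <SenTestingKit/SenTestingKit.h>

@interface TestWithCleanPrefs : SenTestCase
@end

@implementation TestWithCleanPrefs

- (void)setUp {
    // wipe the user's persistent (application-domain) prefs so every run
    // starts from the registered defaults only
    NSUserDefaults *sud = [NSUserDefaults standardUserDefaults];
    [sud removePersistentDomainForName:@"edu.ucsd.cs.mmccrack.bibdesk"];
    [sud synchronize];
}

- (void)testEffectOfSomePref {
    // set only the pref this particular test is about;
    // "BDSKSomePrefKey" is a made-up placeholder key
    [[NSUserDefaults standardUserDefaults] setBool:YES forKey:@"BDSKSomePrefKey"];
    // ... exercise the code that reads the pref and assert on the result ...
}

@end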

>>> That would be wrong, as there is no "right" in this case, that's why
>>> it fails. This particular pref is not even guaranteed to be the only
>>> one affecting bibtex generation. In fact, it isn't, and moreover: we
>>> can't know, because it may change in the future, nobody knows what  
>>> the
>>> prefs will be.
>>
>> I think my point would be that if we change code or default prefs  
>> in the
>> future that alters the bibtex representation then we should spot  
>> that and
>> either change the test or change the code/pref.  I think it is  
>> better to
>> start with a test that is on the fragile side and then gets refined  
>> than
>> with a test that is so robust that it never breaks!
>>
>>> The problem  with these particular tests is that they test the wrong
>>> thing. They are supposed to test the string parsing, but they test  
>>> as
>>> much the bibtex generation.
>>
>> Yes you're quite right.  Thanks.
>>
>>> And in that respect they make assumptions
>>> that are baseless, and may well be wrong (in fact, in this case they
>>> are wrong).
>>
>> But I'm not so sure that checking the bibtex representation is a  
>> stupid
>> thing to do at all.

Definitely; e.g. it would make it hard to diff a file if the bibtex
changes. But it had better be isolated, to avoid having to fix every
single unit test when you want to make a change.

>> This was just a silly place to do it. Rather any test
>> that directly has to do with the bibtex representation would be  
>> better off
>> happening just once in eg TestBibItem.
>>
>>> For this particular test it would be better to check the pubFields  
>>> of
>>> the item rather than the bibtex.
>>
>> Done.

Perhaps an even simpler and more inclusive test would be to compare
all the fields at once, i.e. the whole pubFields dictionary. That
would also check that no junk fields are generated.
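
Roughly like this (just a sketch: the field values are made up, and
-itemFromBibTeXString: merely stands in for whatever helper the
string-parsing tests already use to get a BibItem back):

// Sketch: compare the whole pubFields dictionary instead of the bibtex string.
// -itemFromBibTeXString: is a placeholder for the existing parsing helper.
- (void)testParsedPubFieldsMatchInput {
    BibItem *item = [self itemFromBibTeXString:
        @"@article{cite-key, Author = {Some Author}, Title = {A Title}, Year = {2009}}"];

    NSDictionary *expectedFields = [NSDictionary dictionaryWithObjectsAndKeys:
        @"Some Author", @"Author",
        @"A Title",     @"Title",
        @"2009",        @"Year", nil];

    // comparing the full dictionary also catches any junk fields we didn't expect
    STAssertEqualObjects([item pubFields], expectedFields,
                         @"parsed pubFields differ from the input");
}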

>> But I have also put a test of bibtex minimal representation into
>> TestBibItem (ignoring tabs and newlines).
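
For the whitespace-insensitive comparison, something along these
lines should do. Again only a sketch: the expected string is made up,
"item" stands for the BibItem the test sets up, and -bibTeXString
stands for however TestBibItem actually gets the bibtex out of it.

// Sketch: compare the bibtex string after stripping tabs and newlines.
// The expected string, "item", and -bibTeXString are placeholders.
- (NSString *)stringByStrippingTabsAndNewlines:(NSString *)string {
    NSCharacterSet *set = [NSCharacterSet characterSetWithCharactersInString:@"\t\r\n"];
    return [[string componentsSeparatedByCharactersInSet:set]
                componentsJoinedByString:@""];
}

- (void)testMinimalBibTeXRepresentation {
    NSString *expected = @"@article{cite-key,Author = {Some Author},Title = {A Title}}";
    NSString *actual = [self stringByStrippingTabsAndNewlines:[item bibTeXString]];
    STAssertEqualObjects(actual, expected, @"minimal bibtex representation changed");
}
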
>
> Thanks for sticking with this.
> Adding tests to existing code is a multi-step process, but I'm sure
> they're going to be very useful.
>
> -mike

thanks from me too,
Christiaan

