On 7 Jul 2013, at 20:27, Frank Loeffler <[email protected]> wrote:

> Hi,
> 
> On Sun, Jul 07, 2013 at 11:06:26AM -0700, Roland Haas wrote:
>> This commit (and the following ones) make the tests fail, at least in
>> subversion (Zelmani is ok since the author *does* provide updated tests,
>> just not in GRHydro/svn, so I could port over the change).
> 
> Regardless of what happens here: the tests should be updated accordingly
> if this is necessary.

Did you know that the tests were going to fail before committing?  That is, was 
it a conscious decision to break the tests in order to get the code committed 
sooner?

In some cases (e.g. the failures in TwoPunctures), I think there are NaNs 
generated.  Is this an indication that the commits themselves were wrong?

There is a nonzero cost to having the tests fail.  Any new failures are less 
likely to be noticed, and it makes work for people who then have to investigate 
why the tests fail.  Ideally, developers would not commit code which breaks the 
tests.  It's good to have the automated tests running as a safety net, but I 
think developers can still be expected to run the tests before making a huge 
series of commits like this.

>> I am personally generally unhappy about commits that require everybody
>> to change/update their parameter files unless the change is required to
>> prevent a bug from manifesting. Removing "[0]" from one's parameter file
>> is not hard, but will break each and every single one of them. One of
>> the nice things about Cactus has been that (some parts of it) are fairly
>> stable and developers take care to not introduce changes that require
>> each user to update their parameter files (unless the user wants/needs
>> new features).
> 
> In general, I agree, but sometimes I see that changes are necessary,
> even without bugs being involved. A code cleanup from time to time is
> good and might involve some changes - in moderation of course.

I am undecided. I agree that it is pleasant to tidy things up and make the 
current version clean.  But at the same time you have to consider the 
disadvantages.  It is very frustrating, when you are trying to locate where a 
bug was introduced by running historical versions of the code, if you can't use 
the same parameter file with old and new versions.  If at all possible, I would 
try 
to make the old parameter files still work.  If we had 100% test coverage (i.e. 
all the code was exercised during the tests), then maybe we would have the 
luxury of assuming that we never had to go back to old versions to debug where 
someone broke something, but we are nowhere near that, so it is inevitable that 
we will have to do this from time to time, and if the code is not backwards 
compatible, this becomes somewhat of a crapshoot.  Often you can get the best 
of both by introducing a new parameter rather than changing the old one, though 
I haven't looked into the details of this particular issue so I don't know if 
that makes sense here.
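
To sketch what I mean (with made-up parameter names for illustration; I have not 
checked the actual GRHydro parameters, so treat this purely as a pattern, not a 
proposal for specific names): keep the old array-valued parameter in param.ccl 
as deprecated, and add a new scalar parameter whose default tells the code to 
fall back to the old one, so that existing parameter files which set "[0]" 
continue to parse and run unchanged.

    # Sketch only; parameter names here are invented for illustration.
    # Keep the old array-valued parameter so existing parameter files
    # that set recon_method[0] still parse:
    KEYWORD recon_method[1] "Reconstruction method (deprecated; use reconstruction)"
    {
      "ppm" :: "PPM reconstruction"
      "tvd" :: "TVD reconstruction"
    } "ppm"

    # New scalar parameter; its default defers to the deprecated
    # recon_method[0] value, so old parameter files keep working:
    KEYWORD reconstruction "Reconstruction method"
    {
      "ppm"        :: "PPM reconstruction"
      "tvd"        :: "TVD reconstruction"
      "deprecated" :: "Use the value of recon_method[0]"
    } "deprecated"

The scheduled code would then check whether the new parameter is still at its 
"deprecated" default and, if so, read the old one (perhaps with a warning), 
which gives users a migration path instead of an immediate breakage.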

-- 
Ian Hinder
http://numrel.aei.mpg.de/people/hinder


_______________________________________________
Users mailing list
[email protected]
http://lists.einsteintoolkit.org/mailman/listinfo/users
