On 2020/03/05 10:33:57, hanwenn wrote:
> > the script fail on None.strip(), though not the best option, is still better
> > than treating this as discreetly as the intentional removal of a test case.
>
> I disagree. Having the script crash prevents us from forming a complete
> impression of what a change does, because the crash can hide other regression
> failures.
I want to make sure that it is clear that I did not say that having the script crash was the best option, just that it is better than acting like there is no difference. Listing no diff when an expected text file is missing would prevent us from forming a complete impression of what a change does; I'm concerned that you seem unconcerned about that.

> > These changes deserve some systematic manual testing; I hope you're covering
> > that. If this script fails to publish differences when it should, nobody else
> > involved in the countdown process is going to notice.
>
> this is why I am extending the if case to be symmetric. We already know that the
> addition of files is handled correctly. Making a minimal change here avoids some
> QA.

It's not completely clear to me, but what I infer from this is that the extent of your testing has been to run make check and observe that it no longer crashes. Is that right, or have you looked closely enough to describe how the two cases I previously mentioned are presented now?

My biggest concern is that the table of differences might have no row for the missing file. I don't think it's safe to assume that the calling code handles removal like it handles addition; my reason for believing this is that I implemented the handling of added test cases, and I did not aim to handle removed test cases when I did it.

https://codereview.appspot.com/583590043/
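To make concrete what "symmetric" handling would look like, here is a minimal sketch. All names (`compare_outputs` and the tuple-based result) are hypothetical and are not the actual script's API; the point is only that a missing file on either side produces an explicit row instead of falling through to `None.strip()` or reporting no difference:

```python
import os


def compare_outputs(old_path, new_path):
    """Describe the difference between two test-output files.

    Hypothetical sketch: handles addition (old file missing) and removal
    (new file missing) symmetrically, so a removed test case still yields
    a visible row rather than crashing on None.strip() or being silently
    treated as "no diff".
    """
    old = open(old_path).read().strip() if os.path.exists(old_path) else None
    new = open(new_path).read().strip() if os.path.exists(new_path) else None

    if old is None and new is None:
        return None                      # neither side exists: nothing to report
    if old is None:
        return ('added', new_path)       # new test case
    if new is None:
        return ('removed', old_path)     # removed test case: still gets a row
    if old != new:
        return ('changed', old_path)     # ordinary regression difference
    return None                          # identical: genuinely no diff
```

The invariant worth testing is that the 'removed' branch mirrors the 'added' one: every input that produces a row when the files are swapped must also produce a row before swapping.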
