Just another thought:
We could also use [Category] to mark "ShouldPass" tests.
Then all regressions in the ShouldPass category would be visible.
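To make the idea concrete, here is a minimal sketch (the attribute and the console runner's /include option are standard NUnit; the fixture and test names are made up):

```csharp
using NUnit.Framework;

[TestFixture]
public class SelectTests
{
    // Tests known to pass get the category; in the NUnit GUI's
    // Categories tab (or with nunit-console /include:ShouldPass)
    // you run only this set, so any new red mark in it is a
    // genuine regression rather than pre-existing noise.
    [Test]
    [Category("ShouldPass")]
    public void SimpleSelect()
    {
        // ... real DbLinq query assertions would go here ...
    }

    // Known-failing tests simply carry no "ShouldPass" category
    // until someone fixes them.
    [Test]
    public void KnownFailingSelect()
    {
        // ...
    }
}
```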

BTW, since tests are shared among vendors, some tests "ShouldPass" for one
vendor but not yet for another...

How to handle this?
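One option (just a sketch; the category names below are invented, and NUnit allows multiple [Category] attributes per test) would be per-vendor categories, so each vendor's test run includes only its own known-green list:

```csharp
using NUnit.Framework;

[TestFixture]
public class JoinTests
{
    // Green everywhere: tag it for every vendor it is known to pass on.
    [Test]
    [Category("ShouldPass.SqlServer")]
    [Category("ShouldPass.MySql")]
    [Category("ShouldPass.Sqlite")]
    public void InnerJoin()
    {
        // ...
    }

    // Green on SQL Server only, so far.
    [Test]
    [Category("ShouldPass.SqlServer")]
    public void OuterJoinWithGrouping()
    {
        // ...
    }
}

// A SQL Server run would then select its own list, e.g.:
//   nunit-console DbLinq.Tests.dll /include:ShouldPass.SqlServer
```

This is basically Jon's option (2) below, taken per vendor; the cost is the "busy" attribute lists he mentions.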


Giacomo


On Wed, May 6, 2009 at 8:44 AM, Giacomo Tesio <[email protected]> wrote:

> [3] write a "simple" piece of software that monitors checkins and runs
> all tests on each checkin. When it notices a regression, it would mail
> a notification to all the developers.
>
> BTW, I have to note that:
> 1) Some tests are wrongly coded: they do not really test what they
> should (I noticed this on some of the 101 modified tests), passing even
> when they should not (sometimes using workarounds for missing DbLinq
> features... but then why unit test at all?).
> 2) Some tests don't pass simply because the underlying db contains a
> different data set (and data structure).
>
>
> Even if we cannot fix all of DbLinq's errors, we SHOULD fix the errors
> in the unit tests themselves.
> Those errors are the real noise we need to eliminate.
>
> Moreover, we urgently need a distributable testing infrastructure.
> I simply cannot run all the tests, since I don't have all the databases.
>
>
> Giacomo
>
>
> On Wed, May 6, 2009 at 6:36 AM, Jonathan Pryor <[email protected]> wrote:
>
>>  I think we have a fundamental problem with our unit tests: most of them
>> don't have 100% pass rates.  This is a problem because you can introduce
>> errors without realizing it: the NUnit tree doesn't look significantly
>> different afterwards (lots of green and red before vs. lots of green and
>> red after).
>>
>> For example, when I wrote the unit test page
>> (http://groups.google.com/group/dblinq/web/unit-tests) I noted that SQL
>> Server had 425 tests run with 70 failures.  As I write this, 432 tests
>> are run (yay) with *190* failures (boo!) -- more than double the error
>> count compared to March 23.
>>
>> I don't know what caused the increased failures.  Currently, I don't care.
>>
>> What I do currently care about is preventing such regressions in the
>> future, and the way to do that is by getting our unit tests 100% green (so
>> that regressions and errors are actually visible, not hidden in a sea of
>> existing errors).
>>
>> I can think of two[0] solutions to this, and I welcome any additional
>> suggestions.
>>
>> 1. Use #if's in the test code to remove failing tests.
>>
>> 2. Use [Category] attributes on the tests to declare which tests
>> shouldn't be executed.  The Categories tab within NUnit allows you to
>> specify which categories are executed.
>>
>> The problem with (1) is a gigantic increase in code complexity, as each
>> vendor will have a different set of failing unit tests, so we'd potentially
>> need checks for every vendor on each method.  This is incredibly bad.
>>
>> (2) would at least avoid the "line noise" implied by #if, though it could
>> also get very "busy" (with upwards of 7 [Category] attributes on a given
>> method).
>>
>> Thoughts?  Alternatives?
>>
>> Thanks,
>> - Jon
>>
>> [0] OK, a third solution would be to actually fix all the errors so that
>> everything is green without using (1) or (2) above, but I don't think that
>> this is practical in the short term.
>>
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"DbLinq" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/dblinq?hl=en
-~----------~----~----~----~------~----~------~--~---