I'm not so sure that excluding tests would be a good idea, since we could
lose track of the failing ones...

Can you send me the PITA :-D

I think that a script checking for regressions would be a good thing...


Giacomo

On Tue, Jun 2, 2009 at 2:05 AM, Jonathan Pryor <[email protected]> wrote:

>  After trying to actually do this, [Category("...")] is a non-starter.
>
> The requirements for what I want are simple:
>
>    1. Have a *single* .nunit file for all "interesting" tests.  For
>    example, svn-trunk has DbLinq-All.nunit which contains *all* unit
>    tests, while DbLinq-Sqlite-Sqlserver.nunit contains only SQLite,
>    Microsoft SQL Server, and "no database" (*_ndb*) tests.  The latter
>    exists because I'm currently only interested in SQLite and SQL Server
>    support (as I don't have any other databases available), so testing
>    anything else is pointless.  I want a *single* .nunit file so that I can hit Run
>    and run all tests in one window instead of needing to manage several
>    separate NUnit runner windows (and possibly forget to run some subset of
>    tests).
>    2. *All* tests should be built, even the failing ones.  This is so that
>    we can easily run previously failing tests to see if any are now working.
>    3. Known failing tests should *not* be run by default.
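
[Editor's note: for context, an NUnit 2.x project (.nunit) file is just a small XML document listing the test assemblies to load together into one runner window. A sketch of what a DbLinq-Sqlite-Sqlserver.nunit might look like -- the assembly file names here are hypothetical:

```xml
<NUnitProject>
  <Settings activeconfig="Default" />
  <Config name="Default" binpathtype="Auto">
    <!-- Hypothetical assembly names: list only the test assemblies of interest -->
    <assembly path="DbLinq.Sqlite_test.dll" />
    <assembly path="DbLinq.SqlServer_test.dll" />
    <assembly path="DbLinq_test_ndb.dll" />
  </Config>
</NUnitProject>
```

Loading such a file in the NUnit GUI gives a single Run button covering all listed assemblies.]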
>
> (Why do I care about a green tree?  See below [0].)
>
> So, solutions:
>
> 1. #if the tests, as originally mentioned.  This fails requirement (2).
>
> 2. Use [Category("...")].  This fails (1), as *all* assemblies within a
> .nunit file share the same set of categories, meaning that if I have one
> set of tests that fail under SQLite, and a different (non-overlapping) set
> of tests that fail under SQL Server, *both* sets won't be executed if both
> categories are specified.  This is obviously not good.
>
> 3. What I'm currently thinking is to use [Explicit] in combination with #if,
> e.g.:
>
> #if SQLSERVER /* || other #defines */
>     [Explicit]
> #endif
>     public void TestName() {...}
>
>
> This will fulfill my 3 requirements.  Alternatively we could use [Ignore]
> instead of [Explicit], though [Ignore] will make the test runner turn
> yellow instead of green in an "all pass" scenario.
>
> - Jon
>
> [0] The problem I've been facing is a simple one: I'll see that there's
> e.g. 190 failing tests.  I'll fix one, and find that I now have 195 failing
> tests (i.e. I fixed one test and regressed 6 others).
>
> Now, which tests regressed?  :-)
>
> My current solution, which is a PITA, is a nasty combination of
> grep+sed+diff on the "before svn changes" and "after svn changes"
> TestResult.xml files, e.g.:
>
> grep '<test-case' TestResult.previous.xml | sed 's/time=.*//g' > p
> grep '<test-case' TestResult.current.xml | sed 's/time=.*//g' > c
> diff -u p c
>
>  So I'm currently finding it of utmost importance to get a green build,
> simply so I can more easily see which tests I'm regressing while trying to
> fix things.
>
>
> On Wed, 2009-05-06 at 08:06 -0400, Jonathan Pryor wrote:
>
> First, for most databases more tests pass than fail, so using [Category] to
> mark tests that should pass would result in *more* methods being
> attributed than marking those that fail (which is why I suggested marking
> failing tests).
>
> As for tests failing on some vendors but not others, we just need to use
> multiple categories with separate strings:
>
> [Test]
> [Category("NotSqlite")]
> [Category("NotSqlServer")]
> public void TestSomething() {...}
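
[Editor's note: with this scheme, the NUnit 2.x console runner can skip the marked tests per vendor via its category-exclusion option. A hypothetical invocation, assuming nunit-console is on the PATH and using the .nunit file names from this thread:

```shell
# Run everything except the tests known to fail under SQLite:
nunit-console DbLinq-All.nunit /exclude:NotSqlite

# Or, for a SQL Server run, exclude its known-failing set instead:
nunit-console DbLinq-All.nunit /exclude:NotSqlServer
```

The GUI runner's Categories tab offers the same include/exclude choice interactively.]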
>
>  - Jon
>
> On Wed, 2009-05-06 at 08:51 +0200, Giacomo Tesio wrote:
>
> Just another thought:
> We could also use [Category] to mark "ShouldPass" tests.
> Then all regressions in the ShouldPass category would be visible.
>
> BTW, since tests are shared among vendors, some are "ShouldPass" for one
> vendor but not yet for another...
>
> How to handle this?
>
>
> Giacomo
>
>
> On Wed, May 6, 2009 at 8:44 AM, Giacomo Tesio <[email protected]> wrote:
>
> [3] write a "simple" tool monitoring check-ins and running all the tests on
> each check-in.  When it notices a regression, it would mail a notification
> to all the developers.
>
> BTW, I have to note that:
> 1) Some tests are wrongly coded: they do not really test what they should
> (I noticed this on some of the 101 modified tests), passing even when they
> should not (sometimes using a workaround for a missing DbLinq feature... but
> then why unit test at all?).
> 2) Some tests don't pass just because the underlying database contains a
> different data set (and data structure).
>
>
> Even if we cannot fix all of DbLinq's errors, we SHOULD fix the ones in the
> unit tests.
> Those errors are the real noise we should remove.
>
> Moreover, we urgently need a distributed testing infrastructure.
> I simply cannot run all the tests, since I don't have all the databases.
>
>
> Giacomo
>
>
>
> On Wed, May 6, 2009 at 6:36 AM, Jonathan Pryor <[email protected]> wrote:
>
> I think we have a fundamental problem with our unit tests: most of them
> don't have 100% pass rates.  This is a problem because you can introduce
> errors without realizing it, because the NUnit tree doesn't look
> significantly different (lots of green and red before, vs. lots of green and
> red later).
>
> For example, when I wrote the unit test page
> (http://groups.google.com/group/dblinq/web/unit-tests) I wrote that SQL
> Server had 425 tests run with 70 failures.  As I write this
> 432 tests are run (yay) with *190* failures (boo!) -- more than double the
> error count compared to March 23.
>
> I don't know what caused the increased failures.  Currently, I don't care.
>
> What I do currently care about is preventing such regressions in the
> future, and the way to do that is by getting our unit tests 100% green (so
> that regressions and errors are actually visible, not hidden in a sea of
> existing errors).
>
> I can think of two[0] solutions to this, and I welcome any additional
> suggestions.
>
> 1. Use #if's in the test code to remove failing tests.
>
> 2. Use [Category] attributes on the tests to declare which tests shouldn't
> be executed.  The Categories tab within NUnit allows you to specify which
> categories are executed.
>
> The problem with (1) is a gigantic increase in code complexity, as each
> vendor will have a different set of failing unit tests, so we'd potentially
> need checks for every vendor on each method.  This is incredibly bad.
>
> (2) would at least avoid the "line noise" implied by #if, though it could
> also get very "busy" (with upwards of 7 [Category] attributes on a given
> method).
>
> Thoughts?  Alternatives?
>
> Thanks,
> - Jon
>
> [0] OK, a third solution would be to actually fix all the errors so that
> everything is green without using (1) or (2) above, but I don't think that
> this is practical in the short term.
>

You received this message because you are subscribed to the Google Groups 
"DbLinq" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/dblinq?hl=en