Oddly, your message has "broken" message threading in Evolution...

(Meaning that the message I'm replying to isn't shown as a reply to my
previous message.  Odd...)

On Tue, 2010-03-23 at 23:56 +0100, Anders wrote:
> But it seems that the data in the "examples" scripts is the same
> across vendors. I have just eyeballed the "examples" scripts (lining
> up the scripts pairwise in my editor and flipping between them) and
> they seem identical.

I did mention that I'm an idiot.

Here's what happened: I saw "Northwind", assumed that it was the
*actual* Northwind database, and when I was setting up my SQL Server
instance I installed the *actual* Northwind data.

(See the installation instructions for SQL Server at
http://groups.google.com/group/dblinq/web/unit-tests, which I wrote.  So
it's not like I was following incorrect instructions; I made an
incorrect assumption and then documented it.)

This (obviously) resulted in some tests "failing" because the data
differed, and I didn't know that the examples/*/sql directories existed,
and no one told me otherwise, so...

That's how we arrived at this state of affairs.

Perhaps the better solution is to remove tests/*.sql and document use of
examples/*/sql/*.sql.  I'm not entirely sure.  (There is something to be
said for smaller datasets, but at the same time some issues might only
be apparent in larger ones, e.g. finding appropriate FK joins.  I know
that when fixing some of the tests last year, finding queries that would
"work"/test the desired behavior was somewhat more difficult against
the (then smaller) SQLite DB, as there was simply less data to
query...)

> The reason that the "examples" scripts have other (far less) data than
> the "tests" scripts, I assume, is exactly to make them better suited
> for unit testing. For example the EntitySet.Refresh01 and .Refresh02
> test takes about 2 minutes on my (admitted, not very fast) computer
> using the full northwind database from the "tests" folder.

This sounds like a feature, not a bug -- it gives us an incentive to
profile wtf is going on there and fix it. :-)

> > 2. You *MAY* want to consider an alternate table-generation approach.
...
> Isn't there a difference between "known-bad" in the sense that "this
> will never work for this vendor" (e.g. testing for stored procedures in
> a database that doesn't support them), and failing tests that are
> failing just because the errors haven't been fixed yet?

There should be a difference, but no such distinction is currently
made.  This would be trivially supported by adding another
custom attribute (e.g. [NotSupported(PROVIDER)]), so that
[Explicit]+[NotSupported] means the DB can't support the test, while
just [Explicit] means that DbLinq is buggy.

>  I don't assume
> the latter type is supposed to be disabled. It would seem quite odd to
> me to disable tests just because they are failing.

That is, in fact, the point -- to disable tests because they are
failing.  The problem was this: there would be ~100 tests that would
always fail.  I'd fix a bug, which should fix a test...and 110 tests
would now be failing.  Which ones broke?  Who knows!

(It's actually knowable...through lots of grep+sed+etc. magic with
parsing the NUnit-generated TestResult.xml files, and remembering to
preserve the TestResult.xml file from before the fixes, and...)
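
That grep+sed magic can be sketched roughly like this -- a sketch
assuming NUnit 2.x TestResult.xml output, where a failing test shows up
as a <test-case name="..." ... success="False" ...> element (file names
below are illustrative):

```shell
# Extract the names of failing tests from an NUnit 2.x TestResult.xml.
# Assumes each failure is recorded as an element like:
#   <test-case name="Ns.Type.Test" executed="True" success="False" ...>
failing_tests() {
    grep -o '<test-case[^>]*success="False"[^>]*>' "$1" |
        sed 's/.*name="\([^"]*\)".*/\1/' |
        sort
}

# Usage (with a TestResult.xml preserved from before the fix):
#   failing_tests TestResult-before.xml > before.txt
#   failing_tests TestResult-after.xml  > after.txt
#   diff before.txt after.txt   # '>' = newly failing, '<' = newly fixed
```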

In short, it made progress *significantly* slower, because I had no way
of knowing which (previously working) tests my "fix" broke.

Think of it as a developer time optimization. :-)

> BTW, I was wondering if it wouldn't be an idea to use categories for
> disabling tests instead of the "preprocessor directives" and
> "Explicit"-attribute (e.g. a test could be marked with
> [Category("ExcludeMySql")] and when running with the mysql vendor,
> that category would be excluded). That way only one test-assembly
> would be needed and I think it would perhaps be easier to read those
> annotations. But that is another discussion.

We actually had that discussion:

        http://groups.google.com/group/dblinq/browse_thread/thread/4242411770d568f4/10aab71e304ac24f?lnk=gst&q=category#10aab71e304ac24f

In particular:

        http://groups.google.com/group/dblinq/msg/64eae3e2a410af11

(In short: I tried.  It didn't work.  Perhaps I was an idiot and did it
wrong; you're welcome to try. ;-)

Furthermore, we would (currently) still need one _test.dll assembly for
each DB/provider, as the DbConnection type to use is "baked into" the
_test.dll assembly.  (This could be fixed by relying solely on
connection strings, so it could be done, but we'd still have the
question of how to actually "run" a single _test.dll assembly against N
different databases; I'm not sure how we could use NUnit as-is to do
this.)
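
If the connection-string approach panned out, the "one assembly, N
databases" setup might reduce to a loop along these lines.  To be
clear, this is entirely hypothetical -- DBLINQ_CONNECTION is not an
existing DbLinq mechanism, and the sketch only prints the commands it
would run:

```shell
# Hypothetical: run one _test.dll against several databases, assuming
# the provider/connection were picked up from an environment variable.
# This is a dry run; it only prints the commands it would execute.
print_test_runs() {
    for conn in "$@"; do
        echo "DBLINQ_CONNECTION='$conn' nunit-console DbLinq_test.dll"
    done
}

print_test_runs \
    "Server=localhost;Database=Northwind;User Id=mysql_user" \
    "Server=localhost;Database=Northwind;User Id=pgsql_user"
```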

> > 3. Please do NOT skip tests that work for everything.  Instead, place
> > them into a separate section ("Works for everything") so that we can
> > easily specify what is actually supported.  Otherwise we're left with
> > saying "everything that doesn't work is supported", which isn't strictly
> > true (we might not have written a test for it yet).
> 
> Okay, I just didn't think that the table was so self-explanatory or
> readable (there will be several tests for each "feature") that it
> served as a feature list. I viewed it more as a todo-list, and for
> that purpose it seemed useful to suppress "completed" tasks; it also
> helped keep the page under the 200KB allowed by the wiki. Two pages,
> perhaps.

Two pages make lots of sense.  Please do that.

 - Jon


-- 
You received this message because you are subscribed to the Google Groups 
"DbLinq" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/dblinq?hl=en.
