On Thu, 2010-03-25 at 22:57 +0100, Anders Holbøll wrote:
> On Wed, Mar 24, 2010 at 17:03, Jonathan Pryor <[email protected]> wrote:
> > Perhaps the better solution is to remove tests/*.sql and document use of
> > examples/*/sql/*.sql.  I'm not entirely sure.
> I'm leaning that way. Including the *.db3 binary blob.

Of course we'd still need the .db3 binary blob.  How else do we easily
test SQLite? ;-)

That said, *please*, go forth and do it if you feel that strongly.  It
will require two things, though:

1. Updating documentation so that people attempting to run the unit
tests can actually have them pass.

2. Updating the unit tests themselves so that they pass.

This is by far the bigger issue, as I know many of the tests I wrote
will only work against the "full" Northwind dataset (and thus will need
fixing), and I'm reasonably sure that some of the Join tests I had to
write couldn't easily be done with the "examples" dataset.  (True, I may
not have tried hard enough to find a set of data that would be usable,
or it could be solved by extending the "examples" dataset, but at the
time it was just easier to use/require the full dataset...)

If you can do that without disabling existing tests, I'm all for moving
back to the examples dataset.

> ...but having few data, makes it easier to extend the schema.

I don't think extending the schema will be that easy, considering that
any such "extensions" would need to be done to ~6 different databases,
with subtly different SQL dialects, and (currently) no one has access to
all of the actual databases...

It would still be an option, certainly...

>  On the other hand, there are probably
> tables that aren't used by any tests. Using a large dataset in the
> hope of finding something, sounds more like integration testing or
> fuzz testing (which are of course also valid testing strategies) than
> unit testing.

If we want to be pedantic, ~none of our "unit tests" are actually unit
tests (except those in the *_ndb* assemblies), as they don't test JUST
DbLinq (or at least, not an isolated part of DbLinq).  Instead, they
require a DB connection, sending/receiving data from the DB, etc., etc.
They're really integration tests...

One of my longer-term goals is to rip out the DbLinq Core/provider layer
and replace it with a clone of System.Data.Common.CommandTrees.  This is
what Entity Framework uses to split up the core from DB-specific
providers, and there is already a SQLite EF provider that uses these
types, so this would allow us to reduce the amount of custom per-DB code
kept in DbLinq (ideally "offshoring" it to other projects, as most of
these DBs will have EF providers *anyway*).  My earnest hope is that
this could be used to write *actual* unit tests, turning an
IQueryable<T> query into a CommandTree object graph, and running tests
against that object graph instead of communicating with the DB.

That should *really* speed up unit test execution...but that's a long way
off. :-(
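To make the idea concrete (DbLinq itself is C#, so this is just a
language-neutral Python analogy, and every name in it -- Filter,
Compare, translate, etc. -- is invented for illustration): the point is
that a "real" unit test translates a query into an object graph and
asserts on the graph's structure, with no DB connection anywhere.

```python
# Toy sketch: translate a simple query description into an expression
# tree, then unit-test the *tree* instead of executing SQL.
# All type and function names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Column:
    name: str

@dataclass
class Constant:
    value: object

@dataclass
class Compare:
    op: str
    left: Column
    right: Constant

@dataclass
class Filter:
    source: str          # table name
    predicate: Compare   # translated WHERE clause

def translate(table, column, op, value):
    """Translate a 'WHERE column op value' query into a tree."""
    return Filter(source=table,
                  predicate=Compare(op, Column(column), Constant(value)))

# A genuine unit test: purely structural assertions, no DB round-trip.
tree = translate("Customers", "City", "=", "London")
assert isinstance(tree, Filter)
assert tree.source == "Customers"
assert tree.predicate.op == "="
assert tree.predicate.left.name == "City"
assert tree.predicate.right.value == "London"
print("tree checks passed")
```

The same shape applies to the CommandTree plan: build the object graph
from an IQueryable<T> and assert on it, leaving the DB out entirely.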

> Ok, so the tests currently marked Explicit should just be read as
> regular failing tests and are todo.

Correct.  Or they're tests that exercise features the DB doesn't
support (e.g. stored procedures, which SQLite lacks).  We're not currently
differentiating between these two cases, but we should in the future.  I
suppose in the "DB doesn't support the functionality" case, we could
just #if out the test so it isn't even compiled...

Obviously this needs some debate/prototyping to see what works.

> > That is, in fact, the point -- to disable tests because they are
> > failing.  The problem was this: there would be ~100 tests that would
> > always fail.  I'd fix a bug, which should fix a test...and 110 tests
> > would now be failing.  Which ones broke?  Who knows!
...
> Okay, I'm of course not in a position to tell you what works for you.
> But in moving the test report generation from the excel sheet to an
> app, it was also the idea that reports for tracking fixes and
> regressions over time both quantitatively and qualitatively could be
> added. I could try adding an option that given two TestResults.xml
> files would show the fixed and regressed tests. As for not
> "remembering to preserve the TestResult.xml file from before the
> fixes" isn't that kind of like failing to add/remove the Explicit
> option?

Again, "developer time optimization."  I don't particularly care how
it's done, but I *need* a way to know which tests I broke when I'm
changing code.  I outlined two approaches (TestResult.xml comparison,
the current [Explicit] setup).  There are likely others.  What we do
doesn't matter so much as that the solution be FAST and require MINIMAL
additional work from the dev.  ("FAST" as in turnaround time for the
"edit code / compile / run tests / see what broke / repeat" cycle.
Editing will always be slow; compilation is too slow, as is running the
tests.  But once the tests have been run, if it takes more than 1s to see
which tests are NEWLY failing, the system is broken.)
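The TestResult.xml comparison could be quite small.  A rough sketch
(assuming the NUnit 2.x format, where each <test-case> element carries
name, executed, and success attributes; the sample XML below is made
up, not taken from a real run):

```python
# Given "before" and "after" NUnit result files, report only the
# NEWLY failing (and newly fixed) tests -- the 1s answer to
# "which ones broke?".

import xml.etree.ElementTree as ET

# Stand-ins for two TestResult.xml files; in practice these would be
# read from disk.
BEFORE = """<test-results>
  <test-case name="Join01" executed="True" success="True"/>
  <test-case name="Join02" executed="True" success="False"/>
  <test-case name="Where03" executed="True" success="True"/>
</test-results>"""

AFTER = """<test-results>
  <test-case name="Join01" executed="True" success="False"/>
  <test-case name="Join02" executed="True" success="True"/>
  <test-case name="Where03" executed="True" success="True"/>
</test-results>"""

def failures(xml_text):
    """Return the set of test names that ran and did not succeed."""
    root = ET.fromstring(xml_text)
    return {tc.get("name")
            for tc in root.iter("test-case")
            if tc.get("executed") == "True" and tc.get("success") != "True"}

before, after = failures(BEFORE), failures(AFTER)
regressed = sorted(after - before)   # newly failing
fixed = sorted(before - after)       # newly passing

print("regressed:", regressed)       # -> ['Join01']
print("fixed:", fixed)               # -> ['Join02']
```

Nothing in this depends on which of the ~100 known-failing tests exist;
it only surfaces the delta between two runs, which is the part that
needs to be fast.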

It should be noted that there's at least one issue with [Explicit]: We
may *fix* some of the [Explicit] issues and not even realize it, because
they're not being executed.  Again, it's not ideal, but it was the best
solution we could think of at the time.

 - Jon


-- 
You received this message because you are subscribed to the Google Groups 
"DbLinq" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/dblinq?hl=en.