We have a cron job that runs overnight to clean up anything that was
missed in Jenkins's runs.
No offense, but that scares me. If this strategy were so successful,
why do you even need to clean anything up? You can accumulate cruft
forever, right?
Ha. It's like any database: smaller ones perform better.
On Sun, May 05, 2013 at 11:03:49PM -0700, Ovid wrote:
From: Buddy Burden barefootco...@gmail.com
I lean more towards Mark's approach on this one, albeit with a slight twist.
Given the very interesting discussion and the fact that I hate dogma (such as
what I was throwing down), I have to say
David,
* first, when a bug gets reported in live, I like to create a test case
from it, using data that at least replicates the structure of what is
live. This will, necessarily, be an end-to-end test of the
whole application, from user interaction through all the layers that
make up the stack.
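As a sketch of what such a regression test might look like, assuming a
PSGI application (the app path and the route are invented for
illustration):

    use strict;
    use warnings;
    use Test::More;
    use Plack::Test;
    use Plack::Util;
    use HTTP::Request::Common;

    # Load the real application entry point so the request exercises
    # every layer, just as the live hit did.
    my $app = Plack::Util::load_psgi('app.psgi');

    test_psgi $app, sub {
        my ($cb) = @_;
        # Replay the request from the bug report, against seeded data
        # shaped like the live record that triggered the failure.
        my $res = $cb->(GET '/orders/1234/invoice');
        is $res->code, 200, 'invoice renders instead of erroring';
    };

    done_testing;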
On Sat, May 4, 2013 at 10:37 PM, Buddy Burden barefootco...@gmail.comwrote:
We have several databases, but unit tests definitely don't have their
own. Typically unit tests run either against the dev database, or the QA
database. Primarily, they run against whichever database the current
Lasse,
Interesting... Developers in our project have a local copy of the
production database for working with but our unit test runs always create a
database from scratch and run all schema migrations on it before running
the tests. Creating and migrating the unit test DB usually takes between
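In case it's useful to anyone, the bootstrap for that can be quite
small. A sketch with DBI and DBD::Pg, where the credentials and the
sql/migrations directory are placeholders:

    use strict;
    use warnings;
    use DBI;

    # Create a throwaway database, unique per test process.
    my $admin = DBI->connect('dbi:Pg:dbname=postgres', 'postgres', '',
        { RaiseError => 1, AutoCommit => 1 });
    my $test_db = "unit_test_$$";
    $admin->do("CREATE DATABASE $test_db");

    # Apply every schema migration, in order, before any test runs.
    my $dbh = DBI->connect("dbi:Pg:dbname=$test_db", 'postgres', '',
        { RaiseError => 1, AutoCommit => 1 });
    for my $file (sort glob 'sql/migrations/*.sql') {
        open my $fh, '<', $file or die "$file: $!";
        my $sql = do { local $/; <$fh> };
        $dbh->do($sql);
    }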
I've done some heavy DB work/testing and like your idea of simply
turning off autocommit and rolling back for all the database tests. It's
not what we did; we just truncated all the test tables to start from a
good state, and the only parallel testing we did was in specifically
designed concurrency tests.
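For anyone wanting to try the rollback style, here is a minimal sketch
with DBI and Test::More; the users table and the DSN are invented:

    use strict;
    use warnings;
    use DBI;
    use Test::More;

    # AutoCommit off: every test runs inside a transaction that is
    # rolled back afterwards, so nothing it writes survives.
    my $dbh = DBI->connect('dbi:Pg:dbname=test', undef, undef,
        { RaiseError => 1, AutoCommit => 0 });

    sub test_in_txn {
        my ($name, $code) = @_;
        subtest $name => sub { $code->($dbh) };
        $dbh->rollback;    # discard everything the test wrote
    }

    test_in_txn 'new user visible inside the transaction' => sub {
        my ($dbh) = @_;
        $dbh->do(q{INSERT INTO users (name) VALUES ('alice')});
        my ($n) = $dbh->selectrow_array(
            q{SELECT count(*) FROM users WHERE name = 'alice'});
        is $n, 1, 'inserted row found';
    };

    done_testing;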
Ovid,
I lean more towards Mark's approach on this one, albeit with a slight twist.
For many of the test suites I've worked on, the business rules are
complex enough that this is a complete non-starter. I *must* have a
database in a known-good state at the start of every test run.
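A minimal sketch of such a reset step, assuming PostgreSQL-style
TRUNCATE and a made-up table list:

    use strict;
    use warnings;
    use DBI;

    # Bring the database to a known-good state before the run:
    # empty every table the suite touches, then load fixed fixtures.
    sub reset_database {
        my ($dbh) = @_;
        my @tables = qw(order_items orders customers);   # made-up list
        $dbh->do('TRUNCATE TABLE ' . join(', ', @tables) . ' CASCADE');
        $dbh->do(q{INSERT INTO customers (id, name)
                   VALUES (1, 'fixture customer')});
    }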
2013/5/2 brian d foy brian.d@gmail.com
In HARNESS_OPTIONS we can set -jN to note we want parallel tests
running, but how can a particular module, which might be buried in the
dependency chain, tell the harness it can't do that?
It seems to me that by the time the tests are running, it's too late
because they are already in parallel.
On Thu, May 02, 2013 at 12:51:06PM -0700, Karen Etheridge wrote:
On Thu, May 02, 2013 at 02:39:13PM -0500, brian d foy wrote:
In HARNESS_OPTIONS we can set -jN to note we want parallel tests
running, but how can a particular module, which might be buried in the
dependency chain, tell the harness it can't do that?
On Thu, May 2, 2013 at 9:39 PM, brian d foy brian.d@gmail.com wrote:
In HARNESS_OPTIONS we can set -jN to note we want parallel tests
running, but how can a particular module, which might be buried in the
dependency chain, tell the harness it can't do that?
It seems to me that by the time the tests are running, it's too late
because they are already in parallel.
No, you can't have your tests clean up after themselves. For two
reasons. First, that means you have to scatter house-keeping crap all
over your tests. Second, if you have a test failure you want the
results to be sitting there in the database to help with debugging.
There is another way
On Fri, May 03, 2013 at 11:03:28AM -0400, Mark Stosberg wrote:
No, you can't have your tests clean up after themselves. For two
reasons. First, that means you have to scatter house-keeping crap all
over your tests. Second, if you have a test failure you want the
results to be sitting there in the database to help with debugging.
OK, but you still have to clean out your database before you start each
independent chunk of your test suite, otherwise you start from an
unknown state.
In a lot of cases, this isn't true. This pattern is quite common:
1. Insert entity.
2. Test with entity just inserted.
Since all that
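In code, that pattern looks something like this (the widgets table is
made up for illustration):

    use strict;
    use warnings;
    use DBI;
    use Test::More;

    my $dbh = DBI->connect('dbi:Pg:dbname=test', undef, undef,
        { RaiseError => 1, AutoCommit => 1 });

    # Give the entity a name no other test (or leftover row) shares,
    # so the rest of the database's state doesn't matter.
    my $name = "widget-$$-" . time;
    $dbh->do(q{INSERT INTO widgets (name) VALUES (?)}, undef, $name);

    my ($id) = $dbh->selectrow_array(
        q{SELECT id FROM widgets WHERE name = ?}, undef, $name);
    ok $id, 'found exactly the entity this test created';

    done_testing;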
From: Mark Stosberg m...@summersault.com
OK, but you still have to clean out your database before you start each
independent chunk of your test suite, otherwise you start from an
unknown state.
In a lot of cases, this isn't true. This pattern is quite common:
1. Insert entity.
2. Test with entity just inserted.
On Friday, May 03, 2013 01:34:35 PM Ovid wrote:
... you'll have to do a lot of work to convince people that starting out
with an effectively random environment is a good way to test code.
Before you dismiss the idea entirely, consider that our real live code,
running for real live clients, faces exactly such an effectively random
environment every day.
In HARNESS_OPTIONS we can set -jN to note we want parallel tests
running, but how can a particular module, which might be buried in the
dependency chain, tell the harness it can't do that?
It seems to me that by the time the tests are running, it's too late
because they are already in parallel
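One partial answer is to do the scheduling at the harness level rather
than from inside a test file: TAP::Harness accepts a rules option that
pins particular files to sequential execution. It doesn't let a buried
module opt out by itself, but it does keep known-unsafe tests out of
the parallel pool. A sketch (the t/db glob is made up):

    use strict;
    use warnings;
    use TAP::Harness;

    my $harness = TAP::Harness->new({
        jobs  => 4,                      # same effect as -j4
        rules => {
            par => [
                { seq => 't/db/*.t' },   # never run these in parallel
                '*',                     # everything else may
            ],
        },
    });
    my @tests = (glob('t/*.t'), glob('t/db/*.t'));
    $harness->runtests(@tests);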
On Thu, May 02, 2013 at 02:39:13PM -0500, brian d foy wrote:
In HARNESS_OPTIONS we can set -jN to note we want parallel tests
running, but how can a particular module, which might be buried in the
dependency chain, tell the harness it can't do that?
When can a test not be parallelizable? Most of the examples that I can
think of (depending on a file resource, etc.) smell like a design failure.
On 05/02/2013 03:39 PM, brian d foy wrote:
In HARNESS_OPTIONS we can set -jN to note we want parallel tests
running, but how can a particular module, which might be buried in the
dependency chain, tell the harness it can't do that?
It seems to me that by the time the tests are running, it's too late
because they are already in parallel.
When can a test not be parallelizable? Most of the examples that I can
think of (depending on a file resource, etc) smell like a design failure.
Tests should usually be able to use a unique temp file, etc.
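For the file case, File::Temp makes that nearly free; a sketch:

    use strict;
    use warnings;
    use File::Temp qw(tempfile tempdir);
    use Test::More;

    # Each test process gets its own scratch directory and file, so
    # parallel copies of this test can't collide with each other.
    my $dir = tempdir(CLEANUP => 1);
    my ($fh, $path) = tempfile(DIR => $dir, SUFFIX => '.csv');
    print {$fh} "id,name\n1,alice\n";
    close $fh or die "close $path: $!";

    ok -s $path, 'export landed in a per-test temp file';
    done_testing;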
Here's an example:
Say you are testing a web application that does a bulk export of a