On 17 May 2010, at 11:38, David Chelimsky <dchelim...@gmail.com> wrote:
On May 16, 2010, at 11:10 PM, Scott Taylor wrote:
On May 16, 2010, at 8:13 PM, David Chelimsky wrote:
On May 16, 2010, at 12:54 PM, Scott Taylor wrote:
Hey all,
I'm wondering if anyone has any experience with an automated test-case
generation tool like QuickCheck (for Erlang/Haskell). I'd be interested
in hearing any impressions, war stories, or dev workflows regarding a
tool like this. Talking off list to David C, he suggested that it might
be a complementary tool to a TDD/BDD framework like rspec.
This is something I've been playing around with in Cucumber for quite
a while. My main thought is that I want to make more use of dead CPU
time: when I'm sleeping, I want my tests and system to be exercised.
What I'm thinking of is a tool in Cucumber a little like Heckle,
mutating the inputs (which in Cucumber are regexp matches) and
examining the output.
A cucumber test can be seen as a black box with inputs we can prod at
and observe the output.
What I want out of this is a report which shows me the failures and
which inputs were used.

In order to prevent a sprawl of failures, it would be useful to derive
from them rules which describe a group of failing tests, e.g. "the
test failed with any int between 1 and 100".
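In rough Ruby pseudocode (run_scenario and the bookkeeping are made up,
not real Cucumber API), I'm picturing something like:

failing_inputs = []

100.times do
  input = rand(100) + 1                    # random int in 1..100, standing in for the mutated capture
  begin
    run_scenario("I add #{input} items")   # assumed wrapper around the black box
  rescue StandardError
    failing_inputs << input
  end
end

# Collapse raw failures into a rule for the report,
# e.g. "failed with ints between 3 and 97".
unless failing_inputs.empty?
  puts "failed with ints between #{failing_inputs.min} and #{failing_inputs.max}"
end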
I think of this as a different use case from RSpec, but some of these
thoughts might be useful.
My thinking here is that it could be useful to drive out an
initial implementation using TDD, and at the point we think we've
got the solution we want, add something quickcheck-like to try to
poke holes in it. I'd probably then add new examples if any cases
I hadn't considered were revealed through this process.
Have you watched John Hughes' presentation on the matter?
http://video.google.com/videoplay?docid=4655369445141008672#
I haven't yet. I'll give it a look-see later today.
It's sort of interesting that he won't do any TDD - he'll let the
reduction process generate the "minimum" test case, and go from
there (that's not explicitly stated in that video, although I'm
pretty sure I've heard him say it before).
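(As a naive sketch of what that reduction means - real QuickCheck
shrinking is smarter than this - you could halve a failing integer
input until a smaller value stops failing:

def shrink_int(n)
  candidate = n
  loop do
    smaller = candidate / 2
    break if smaller == candidate   # can't shrink any further
    break unless yield(smaller)     # the smaller input passes, so stop
    candidate = smaller
  end
  candidate
end

shrink_int(87) { |i| i > 12 }  #=> 21 (87 -> 43 -> 21; 10 passes)

The result isn't guaranteed minimal, but it shows the idea.)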
If I had a tool like this, I'm guessing I'd probably have a
workflow like the following:
1. use the random test case generator, and fix any issues that were
obvious.
2. If something wasn't obvious, I'd go and write a test case for it in
a more traditional testing tool (rspec). I often use the debugger in
conjunction with the spec runner, running the one test case with a
debugger statement at the start of it (see the sketch after this list).
3. Any regressions would (obviously) happen in the traditional tool.
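For example, something like this (Calculator and the values are made
up, and this assumes the ruby-debug gem is loaded):

describe Calculator do
  it "handles the case the generator uncovered" do
    debugger  # drop into the debugger before the assertion
    Calculator.new.add(-1, 1).should == 0
  end
end

and then run just that one example, e.g. with the spec runner:

spec spec/calculator_spec.rb -e "handles the case the generator uncovered"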
The big win with a tool like this is not testing boundary cases; it's
having the tool "write" the test cases for you. OTOH, I wonder whether
the simplicity of the implementation would be sacrificed by taking
this approach.
My guess is that it would.
Another drawback - I have no idea how such a tool would integrate
with a build server.
What integration point would there need to be? It's just Ruby.
It appears as though there is a similar project out there for
ruby named rushcheck (http://rushcheck.rubyforge.org/).
It's up on github too: http://github.com/hayeah/rushcheck. Same
guy has this too: http://github.com/hayeah/rantly - random data
generator - looks like you could do stuff like:
Rantly.new.each(100) do
  thing.method_that_accepts_a_string(string).should have_some_quality
end
There's a blog post about the library here, if anyone is interested:
http://www.metacircus.com/hacking/2009/04/10/look-ma-no-monads.html
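If the generators work the way the README suggests, you could also
constrain the data - something like this (Account is made up, and I
haven't verified the exact generator names):

Rantly.new.each(100) do
  amount = range(1, 100)                 # random integer in 1..100
  name   = sized(10) { string(:alpha) }  # random 10-char alphabetic string
  account = Account.new(name)
  account.deposit(amount)
  account.balance.should == amount
end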
I've been thinking about integrating a port of the ruby library
faker into scriptcheck, the javascript testing tool I've been
working on:
http://github.com/Marak/Faker.js
http://github.com/smtlaissezfaire/scriptcheck
This would cause 100 random strings to be generated and passed to
thing.method_that_accepts_a_string. Assuming the matcher verifies some
set of rules about the outcomes, you've basically got QuickCheck.
Yeah, pretty much. One issue, though, is that you don't want to
hard code the number of random generations.
Why not? Wouldn't it make sense to have smaller numbers in some
cases and larger ones in others?
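One cheap way to keep it flexible: read the count from the
environment, so local runs stay small and the CI box can crank it up
(QUICKCHECK_COUNT is a made-up name):

count = (ENV['QUICKCHECK_COUNT'] || 100).to_i

Rantly.new.each(count) do
  thing.method_that_accepts_a_string(string).should have_some_quality
end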
You'll also want a convenient way to run just one given test case
easily (which rspec already has). You'll probably also want to
separate these random generation tests from the rest of your tests.
Exactly! This is what I had in mind when I said "at the point we
think we've got the solution we want, add something quickcheck-like
to try to poke holes in it." The steps would be:
1. Drive out minimal implementation with specs
2. Write some quickcheck-ish tests in a separate location
3. Run them
4. If there are any failures, use them to evaluate and enhance the
specs that I'd already written
This would really amplify the distinction between specs and tests.
Plus, the tests would be indirectly testing the specs as much as
they are testing the implementation. Of course, this is all
theoretical. If we could just use quickcheck and still get all the
documentation and implementation-driving benefits of TDD, I'd
probably move in that direction myself :)
Hitting a database 1000 times for one test is going to be costly.
If we used the process I just outlined, we could run the specs using
autotest (ironic), and only run the tests on demand and on the CI
server.
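Concretely, the separation could be as simple as two rake tasks over
two directories (the paths and task names are just illustrative, and
this assumes rspec-2's rake task):

require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:spec) do |t|
  t.pattern = 'spec/**/*_spec.rb'
end

# run these on demand and on the CI server
RSpec::Core::RakeTask.new(:quickcheck) do |t|
  t.pattern = 'quickcheck/**/*_spec.rb'
end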
Now that I'm thinking about it, it might make a ton of sense in
languages like Erlang or Haskell, where everything is functional:
those languages lend themselves to parallelization, since there are no
shared resources.
Regards,
Scott
_______________________________________________
rspec-users mailing list
rspec-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/rspec-users