On 6 May 2012 16:01, Guillermo Polito <[email protected]> wrote:
> Hmm, I don't know if it is worth it to add a super meta meta framework when we
> have only one test framework :). But evolving SUnit would be nice.
> I'd like to add parameterized tests (for Glorp and OpenDBX that would be
> gorgeous) :P.
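[Editor's note: the parameterized tests Guillermo asks for can be sketched with Python's stdlib `unittest`, used here only as a cross-dialect stand-in for SUnit; the function and parameter names are hypothetical, not part of Glorp or OpenDBX.]

```python
import unittest

def connection_url(driver, database):
    """Hypothetical function under test: builds a driver-specific URL."""
    return f"{driver}://{database}"

class ConnectionUrlTest(unittest.TestCase):
    # Each tuple is one parameter set; a parameterized test runs the
    # single test body once per set instead of duplicating the method.
    CASES = [("postgresql", "crm"), ("mysql", "crm"), ("sqlite", "crm")]

    def test_url_starts_with_driver(self):
        for driver, database in self.CASES:
            # subTest reports each parameter set separately on failure,
            # which is the essential property of a parameterized test.
            with self.subTest(driver=driver):
                self.assertTrue(
                    connection_url(driver, database).startswith(driver + "://"))
```

One failing parameter set is reported individually without aborting the others, which is what makes the pattern attractive for exercising several database drivers with one test body.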
SqueakCheck (http://www.squeaksource.com/SqueakCheck/) does just that: create a
TheoryTestCase (subclassing TestCase), define a method with a single parameter,
mark it with <theory> or <theoryTaking: #MySpecialClass>, and you're done.
<theory> theories will attempt to find an appropriate object automatically (by
considering all messages sent to the parameter (call it t1) and to t1's class).
The TheoryTestCase uses a random data generator to throw at the theory. If
something falsifies the theory, that counterexample is preserved as a standard
test in the 'counterexamples' category.

It still needs work: it needs to take a counterexample and "minimise" it, and
the data generator is not at all sophisticated. I'd also like a proper type
inferencer in there to inhabit the type expected by the test. But it does
work, and I've used it in anger.

frank

> On Sun, May 6, 2012 at 10:52 AM, Stéphane Ducasse
> <[email protected]> wrote:
>>
>> Hi guys
>>
>> here is a digest of a discussion in the VW mailing list.
>> It would be nice to have somebody looking and thinking about that. For
>> example, the work of Markus Gaelli on sorting failing tests
>> was lost, and this is a pity.
>>
>> Stef
>>
>> ------------------------------------------------------------
>>
>> You might also want to look at Assessments (bundle of the same name in
>> the public Store repository, MIT license). It offers a much more
>> flexible implementation of a basic test framework, which is then used to
>> execute tests from three different variants of SUnit (SUnit, SUnitToo,
>> and SUnitVM) without needing to modify, override, or reparent existing
>> test classes. In addition, it implements test-based validation, as well
>> as test-based benchmarks and performance measurements.
>> For references,
>> see Chapter 4 of "A Mentoring Course on Smalltalk" here:
>>
>> http://www.lulu.com/shop/search.ep?contributorId=441247
>>
>> as well as several conference talk slides about these, for example:
>>
>> http://www.youtube.com/watch?v=jeLGRjQqRf0
>>
>> and also see, for example, the paper "Extreme Validation" here:
>>
>> http://www.caesarsystems.com/resources/caesarsystems/files/Extreme_Validation.pdf
>>
>> Assessments' flexible architecture also allows extending Assessments
>> without having to override the framework itself. This is a problem with
>> extending SUnit, and it will lead to various extensions being
>> incompatible with each other.
>>
>> > The 'Unit Testing' chapter of doc/ToolGuide.pdf has a section at the
>> > end, 'Extensions and Variants of SUnit in VisualWorks', which covers
>> > the other tools. Briefly:
>> >
>> > - SUnit, with the VW UI RBSUnitExtensions, is the cross-dialect utility
>> > we know and love.
>> >
>> > - SUnitToo, with the VW UI SUnitToo(ls), is a mature tool created by
>> > Travis Griggs. It is VW-specific and permits exploration of
>> > VW-specific ideas for SUnit that are not - or not yet - in the
>> > cross-dialect framework. (Some ideas trialled in SUnitToo, and in
>> > Andres' Assessments framework, have migrated to SUnit and are now
>> > also in Pharo, VASmalltalk and Dolphin SUnit. Others may always be
>> > too VW-specific, or may remain an area of debate.)
>> >
>> > - The SUnit2SU2-Bridge parcel reparents SUnit tests as SUnitToo
>> > tests when the bridge is deployed (done automatically on loading the
>> > parcel, and can be done programmatically) and back again when the
>> > bridge is retracted (done automatically on unloading, and can be done
>> > programmatically). This can be useful if, for example, you want to
>> > keep your tests as SUnit tests, e.g.
>> > because the utility is
>> > cross-dialect, or for easier historic comparison, but want to use
>> > the SUnitToo(ls) UI or wish to run these tests in a single suite with
>> > other SUnitToo tests.
>> >
>> > There is a very high degree of similarity between the frameworks: a
>> > test case should run the same under either. Some minor differences
>> > are:
>> >
>> > - SUnitToo(ls) has an image-wide memory of the last result of run
>> > tests: open a fresh window on a test and you will see it with an
>> > icon of the last-time-run result. RBSUnitExtensions remembers only
>> > within each window: open another RB, or move off the test pane in
>> > the same RB, and the knowledge of test outcomes is discarded.
>> >
>> > - SUnit has optimistic locking of TestResources by default, with
>> > an optional pessimistic locking pattern. Thus, for example, if your
>> > system can only be logged in to one database at a time and your
>> > overall test suite includes two database login resources that log in
>> > to two databases, you must tag them as belonging to a
>> > CompetingResource set. SUnitToo has pessimistic locking and (IIRC)
>> > no pattern for escaping from it at this time. Thus you need never tag
>> > competing resources, but if you have a resource that takes 5 minutes
>> > to set up and tear down (e.g. installing/uninstalling a complex
>> > product), and use it in a suite of thousands of tests with tens of
>> > other resources (e.g. your code integration suite), then SUnitToo
>> > could turn that 5 minutes into an hour and 5 minutes as it is
>> > repeatedly torn down and set up again in pessimistically-calculated
>> > competing resource sets.
>> >
>> > - SUnit provides the TestCase API on TestResources also, so e.g. if
>> > code in a test case's setUp method starts being too slow as test
>> > numbers grow, it can be refactored to a test resource's setUp
>> > method, to be run once per suite instead of once per test, without
>> > needing to be rewritten.
>> >
>> > - SUnitToo randomises each test run. On the plus side, this
>> > means repeated runs may well find order-dependent errors that
>> > SUnit's consistent run order does not expose. On the minus side,
>> > the run order is not remembered or recreatable, so just such
>> > order-dependent failures may haunt you as intermittent failures.
>> > (FYI, I wish to add a randomise-run-order feature to SUnit, but with
>> > memory of the order, and only used when the user selects it.)
>> >
>> > Travis may be able to list other differences. Generally, the intent
>> > is to keep behaviour the same except in areas where ideas for
>> > developing the frameworks are being trialled.
>> >
>> > HTH
>> > Niall Ross
>>
>> FYI, the mention in my earlier email of SUnit having taken some ideas
>> from Assessments, as well as from SUnitToo, refers to some of these ways
>> of extending. For example, the 'Extensions and Variants of SUnit in
>> VisualWorks' subsection (in the test chapter of doc/ToolsGuide.pdf)
>> mentions the pattern for skipping tests. This is an example of using
>> ClassifiedTestResult, which owes its inspiration to Assessments.
>>
>> In 7.9, the RBSUnitExtensions tool allows plugging in TestSuite
>> subclasses, by doing
>>
>>   RBSUnitExtension suiteClass: MyTestSuite.
>>
>> During 7.10, we expect to publish some examples of TestSuite subclasses
>> (and TestResult subclasses referenced by them) in the OR so people can
>> use the above to experiment, and maybe some in the community will also
>> do so: thus we can find out which subclassing patterns may be best, and
>> make any (very minor, purely refactoring) changes in the top-level
>> framework that would help support them.
>>
>> You are right that SUnitToo(ls) gives better visual feedback than
>> RBSUnitExtensions (having the SUnit2SU2-Bridge, so we can always use
>> the SUnitToo UI for SUnit tests just by loading it, has made me careless
>> about porting these UI features - I will try to do that for the next
>> release).
>> By contrast, RBSUnitExtensions is a bit ahead when it comes
>> to handling tests that are extensions in other packages (maybe those
>> features will also get ported).
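[Editor's note: the randomise-with-memory feature Niall wishes for above can be sketched in a few lines of Python (used as a neutral stand-in, not SUnit's API): record the seed that produced a shuffled run order, so an order-dependent failure can be replayed deterministically instead of haunting you as an intermittent one.]

```python
import random

def ordered_run(tests, seed=None):
    """Shuffle a suite's run order and return the seed that produced it,
    so a failing order can be recreated later. Hypothetical sketch."""
    if seed is None:
        seed = random.randrange(2**32)  # fresh order for this run
    order = list(tests)
    random.Random(seed).shuffle(order)  # deterministic given the seed
    return seed, order

# First run: random order; the seed is remembered alongside the result.
seed, order1 = ordered_run(["testA", "testB", "testC", "testD"])
# Reproducing an intermittent failure: replay with the remembered seed.
_, order2 = ordered_run(["testA", "testB", "testC", "testD"], seed=seed)
assert order1 == order2
```

Keeping the seed is the whole trick: it preserves SUnitToo's chance of exposing order-dependent errors while removing the "not remembered or recreatable" downside the email describes.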
