Denis Kudriashov wrote
> Problem with TestCase approach is mixing two separate concerns. First is
> how we want to define system specifications. Another is how we can persist
> such specifications.
> ...
> So it is my vision. I really want to build such system but I have no time
> for this yet.
> Maybe Sean doing something similar?
Yes, yes, yes!!! This is something I've been working on and thinking about since I came to Smalltalk.

Right now we have something like:
1. Use standard Smalltalk artifacts and tools to write serialized descriptions of tests
2. Have SUnit automagically create live test objects, which we never really see or reason about
3. Use some test-specific tools, like TestRunner and Nautilus icons

This is writing in "testing assembly language". We work with the serialized implementation details, which then bubble up all the way to the UI — for example, TestRunner lists packages in the left pane and TestCase subclasses in the next, neither of which has anything to do with the domain of testing. It's similar to the way we encode files (e.g. images) as strings returned by methods because we can't version them easily with Monticello.

Every time I try to create "what TDD/BDD would look like in a live, dynamic environment", I start with a custom test-creation UI and work backward. The UI creates a testing domain model, which may be serialized as packages, classes, and methods, but neither the test writer, runner, nor reader should ever have to deal with those directly. I have had quite a few false starts, but I have learned a lot. I think a real testing domain model could create huge leverage for our community. In fact, I came to Smalltalk from Ruby because I instantly recognized that manipulating text via files and the command line had missed the point.

In short, when you're ready, I'd love to collaborate...

-----
Cheers,
Sean
--
View this message in context: http://forum.world.st/Unifying-Testing-Ideas-tp4726787p4734239.html
Sent from the Pharo Smalltalk Developers mailing list archive at Nabble.com.
