On Sat, Jul 12, 2008 at 11:00:29AM -0700, James Keenan via RT wrote:
> On Sat Jul 12 09:33:35 2008, coke wrote:
> >
> > Another solution here would be to not run them by default. The purpose
> > of 'make test' should be to verify that the parrot functionality works
> > on the target system.
>
> If speed is your concern, you can call 'make coretest'. We've had that
> functionality available since r23209 (2007-11-28). That lops off not
> only the pre- and post-configuration tests, but the coding standards
> tests as well. I've used that many times, and it appears that some of
> our smoke testers are set up for that as well.
Here's a question I have -- what are the various use cases for running
the various test targets? Perhaps if we enumerate the use cases we can
understand the problem better and make our targets match them. Here's a
brief start:

Case #1 - Testing prior to commit: In this case the person is a
developer who has made some changes to Parrot or one of its subsystems
and is about to commit back to the repository. Clearly we want to check
that the change hasn't caused a regression in functionality, but we
also want to make sure that the MANIFEST has been properly updated,
svn properties are set correctly, and coding standards are being
followed. Unless the developer has been making changes to Parrot's
configuration, we probably don't need to run the configuration tests.

Case #2 - Testing prior to release: Here we want to make sure all tests
are run.

Case #3 - Testing an initial build after checkout/update: In this case
the tester wants to verify that the version of Parrot just built is
functioning properly. This may be because the tester is looking for a
baseline report prior to introducing other changes, or because the
tester is experimenting with Parrot and simply wants to know that it
built properly on their specific platform.

Case #4 - Intermediate testing during development: The developer is in
an iterative process of changing things and wants to quickly test what
(if any) functionality has been affected by the changes so far. Unlike
case #1 above, the developer isn't generally interested in the
MANIFEST, svn properties, or coding standards yet.

Are there any other cases?

It also occurs to me that everyone wants "make test" to always do what
they mean for their specific aspect of Parrot, which differs from
developer to installer to tester to release manager. Perhaps a more
radical solution is to have "make test" always present a short menu
(less than one screen) of the common test targets, and require the
tester to be more explicit about what types of tests they wish to
perform. (A rough sketch of what that might look like is in the
postscript below.)

Speaking from Rakudo's experience, I know this might be useful --
especially in the case of the "committest" target. (People working in
the languages/perl6/ directory often forget to run Parrot's coding
standards tests, which is why I requested that "make codetest" be added
to Rakudo's "make test" target.)

Hope this helps; comments and criticisms welcomed.

Pm
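
P.S. To make the "menu" idea a bit more concrete, here's a rough sketch
of what such a default target could look like in a makefile. The target
names below other than coretest, codetest, and committest (which are
mentioned above) are purely hypothetical and aren't meant to reflect
what Parrot's makefile actually provides; recipe lines would need
literal tabs, of course.

    # Sketch only: a default "test" target that prints a menu of the
    # common test targets instead of running any tests itself.
    test:
            @echo "Please choose one of the following test targets:"
            @echo "  make coretest   - core functionality tests only"
            @echo "  make codetest   - coding standards tests"
            @echo "  make committest - coretest plus MANIFEST, svn properties,"
            @echo "                    and coding standards checks"
            @echo "  make buildtest  - quick sanity check of a fresh build"
            @echo "  make fulltest   - everything, including configuration tests"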