On Thursday 12 April 2007 20:11, Keith Lofstrom spake thus:
> So if you want to move forward, help figure out a testing method. To
> start with, testing will be manual. We find some volunteers, build
> a few different test machines, design some observation methods and
> report results back to the list. Later, we can automate that.
Anyone can check out the code from a repository and run it. The same
should go for the tests, to the greatest extent possible. Developers
need to be able to run at least some level of test without going the
whole hog and running on dedicated "test machines". They need to be
able to change one line of code, run at least some tests locally on
their dev machine, and get a quick OK/NOK answer back. They don't want
the hassle of installing their code on a remote machine, where it will
be harder to debug, for every minute change they make (e.g. if they are
refactoring Dirvish and making many small structural changes one after
the other). Further, we should be able to do a significant amount of
testing using frozen data sets, thereby reducing the need for dedicated
test machines/environments.

Let's consider a few levels of testing:

(1) unit tests: these should be runnable on a dev machine and would
    test dirvish internals (e.g. individual private functions). (Rough
    sketch in the P.S. below.)

(2) "integration" tests (?): these would test the overall dirvish
    behaviour but, using overrides such as those described by Eric
    Wilhelm, would not make real calls to rsync etc. These should also
    be runnable on a dev machine. It is sufficient to check that the
    commands dirvish would issue are those expected. (An alternative
    would be to use dummy programs/scripts in place of the real rsync
    and pre/post-scripts. Eric W. might be able to comment on which is
    the best way to go and why? Sketches of both in the P.S.)

(3) "qualification" tests (?): this is where test machines/environments
    come into play - end-to-end tests etc. Do we really call rsync at
    this point, or continue to use overrides? It is not Dirvish's
    business to test rsync, so probably not. Calling the real rsync
    would also mean losing flexibility (e.g. testing error codes, as
    mentioned by Eric W.). All the same, we need some battery of tests
    that does call rsync, even if only a subset, to check that e.g. a
    new/old version of rsync does indeed support all the options we
    give it, or to test some of the more exotic setups (Cygwin, Mac,
    NSLU2, ...?). (Sketch in the P.S.)

With the current state of the dirvish code:

(1) is not going to buy us much (dirvish.pl = 1210 lines, 7
    functions...), but it is something we will be able to do if we
    refactor dirvish.

(2) we can do with Eric W's techniques. We can use these tests to check
    changes/fixes (including refactoring, of course!) as they are made.
    They should be fairly quick and easy to run.

(3) requires resources and, as mentioned earlier, more work to set up,
    but these tests would be part of the final "pre-production" phase
    (where all three kinds of tests would be required to pass).

Cheers,
Eric

-- 
Eric Mountain
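
P.S. To make the above concrete, here are some rough sketches of what
each level might look like. All of this is illustrative only: the
module and function names (Dirvish::Config, parse_bool,
Dirvish::run_command, ...) are hypothetical, since today's dirvish.pl
does not expose internals that can be loaded and tested in isolation.

A level (1) unit test, assuming a refactored Dirvish whose internals
live in modules:

  # t/unit-config.t - sketch only; Dirvish::Config and parse_bool()
  # are assumed names from a hypothetical refactoring.
  use strict;
  use warnings;
  use Test::More tests => 3;

  use Dirvish::Config qw(parse_bool);

  ok( parse_bool('yes'), "'yes' parses as true");
  ok(!parse_bool('no'),  "'no' parses as false");
  ok(!parse_bool(''),    "an empty value defaults to false");

Anyone with a checkout could run this with "prove t/" - no test
machine, no root, no rsync required.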
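
A level (2) test using the override approach: redefine the one sub that
executes external commands so nothing is really run, then check what
dirvish *would* have executed. Dirvish::run_command() is an assumed
choke point - the current code would need light refactoring to funnel
all external calls through a single place like this.

  # t/integration-backup.t - sketch only, driven against a frozen
  # data set checked into the repository under t/data/.
  use strict;
  use warnings;
  use Test::More tests => 2;

  use Dirvish;    # hypothetical module

  my @issued;
  {
      no warnings 'redefine';
      # Record each command line instead of executing it; report success.
      *Dirvish::run_command = sub { push @issued, join(' ', @_); return 0 };
  }

  Dirvish::backup(vault => 't/data/vault-home');

  ok(@issued > 0, 'dirvish issued at least one external command');
  like($issued[0], qr/^rsync\b.*--delete\b/,
       'first command is rsync with the expected options');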
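
The alternative mentioned in (2): a dummy rsync dropped at the front of
PATH for the duration of a test. It records its arguments and fakes
whatever exit code the test asks for, which is handy for exercising
dirvish's handling of rsync error codes (as Eric W. suggested). The
DUMMY_RSYNC_* environment variables are names I have just made up.

  #!/usr/bin/perl
  # t/bin/rsync - stand-in for the real rsync during tests.
  use strict;
  use warnings;

  # Append our command line to a log file for the test to inspect.
  open my $log, '>>', $ENV{DUMMY_RSYNC_LOG} or die "cannot open log: $!";
  print {$log} join(' ', @ARGV), "\n";
  close $log;

  # Simulate any rsync exit status, e.g. 23 (partial transfer).
  exit($ENV{DUMMY_RSYNC_EXIT} || 0);

A test would then run dirvish with PATH=t/bin:$PATH and
DUMMY_RSYNC_LOG pointing at a temp file, and read the log back
afterwards. The same trick works for pre/post-scripts.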
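
For (3), part of the battery could be a cheap sanity check that the
installed rsync actually advertises the options dirvish will pass it.
The option list below is a made-up subset; the real one would be
derived from the dirvish config. This is only a heuristic - the full
qualification run would still do a real end-to-end backup on the test
machine.

  # t/qualify-rsync.t - sketch only.
  use strict;
  use warnings;
  use Test::More;

  system('rsync --version >/dev/null 2>&1');
  plan skip_all => 'rsync not found in PATH' if $? != 0;

  my @needed = qw(--delete --numeric-ids --stats);
  plan tests => scalar @needed;

  my $help = `rsync --help 2>&1`;
  for my $opt (@needed) {
      like($help, qr/\Q$opt\E/, "rsync --help mentions $opt");
  }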
