Hi Kaz,

On Saturday 19 March 2011 02:24:41 am Kaz Kylheku wrote:
> On Fri, 18 Mar 2011 21:00:12 +0100, Jean Delvare <[email protected]>
> wrote:
> > That's not enough. Quilt comes with a non-regression test suite,
> > which your script should pass. Try it: "make check". I did, your
> > code failed (even the update you sent half an hour ago.)
>
> I made a few more fixes and overall hardening, like making sure
> things are quoted and -- is used so that an argument doesn't look
> like an option and such. I haven't addressed the FreeBSD portability,
> nor that one potentially fragile sed edit that remains (which I can
> eliminate by cd-ing to a directory to avoid having to stream-edit the
> list of path names).
>
> It's now here: http://kylheku.com/~kaz/backup-files
>
> I downloaded the quilt 0.48 tarball and used its test suite to run
> all 41 test scripts. There was a failure in just one of the commands
> in one of the scripts, so I used "make -k" to get past that.
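For illustration, the kind of hardening described above looks roughly like this. This is a minimal sketch, not code from the actual backup-files script; the file and directory names are made up for the demonstration:

```shell
#!/bin/sh
# Sketch of the two hardening measures: quote every expansion, and pass
# "--" so that a file name such as "--help" can never be parsed as an
# option by mkdir, cp, cat, etc.
set -eu

workdir=$(mktemp -d)
cd "$workdir"

file="--help"                     # pathological but perfectly legal name

mkdir -p -- backup
printf 'original\n' > "./$file"   # a "./" prefix is another common defense

# Without "--", cp would treat $file as an option and abort.
cp -p -- "$file" backup/

cat -- "backup/$file"
```

Running the sketch copies and prints the file despite its option-like name; drop either the quotes or the `--` and it breaks.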
You'd rather have cloned the git repository, as version 0.48 is getting
very old.

> This only failed because patch put some terminal emulator
> codes into the output to do highlighting or colorizing,
> and it happens with the stock backup-files too:
>
> The next patch would create the file create,
> =~ The next patch would create the file `?create'?,

=~ indicates a success, so this isn't where the failure happened.
Really, try again with the test suite in the git repository, it is
known to work well and has much better coverage. And I just tested
your script with it, and it fails all over the place.

> In all other respects, it is the expected output from patch.
> Not bad: first attempt at "make check" passes!

"I used make -k to get past the failure" isn't my definition of
"passes". Seriously, your backup-files script would have to pass the
test suite completely to be considered, except maybe for one error
message in a case which is never supposed to happen anyway.

> This is the "make -k check" time with my "backup-files":
>
> real    0m42.383s
> user    0m0.972s
> sys     0m0.808s
>
> This is with the C version:
>
> real    0m43.340s
> user    0m0.944s
> sys     0m0.836s
>
> This is on an NFS filesystem; the variance in the real times between
> runs is greater than the difference between the two versions. The
> user and sys times are quite stable between runs.
>
> Basically, the performance is about the same.
>
> Now, local disk. Actually no, forget that, let's use Linux tmpfs:
>
> script:
>
> real    0m25.221s
> user    0m0.928s
> sys     0m0.788s
>
> C:
>
> real    0m25.416s
> user    0m1.052s
> sys     0m0.896s
>
> Again, it's about the same thing.

This is nice, but the test suite is meant to find functional
regressions. It is not suited for performance testing, because it only
manipulates a few files at once, and because only a small part of the
run-time is in backup-files. If you really want to benchmark your
script, measure direct calls to it on large file sets.
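Such a direct measurement could be sketched as below. The `cp` loop is only a stand-in for the command actually under test (the backup-files script or the C binary), and the portable date-based timing is used here instead of `time` so the sketch runs under any POSIX shell; in practice you would simply run `time ./backup-files <args>` over the same set:

```shell
#!/bin/sh
# Generate a large file set in a scratch directory and time one direct
# pass over it, independently of the test suite.
set -eu

n=1000                      # size of the sample set
workdir=$(mktemp -d)
mkdir "$workdir/out"

i=0
while [ "$i" -lt "$n" ]; do
    printf 'content %d\n' "$i" > "$workdir/file$i"
    i=$((i + 1))
done

# Wall-clock time of one pass; substitute the real command under test.
start=$(date +%s)
for f in "$workdir"/file*; do
    cp "$f" "$workdir/out/"
done
end=$(date +%s)

count=$(( $(ls "$workdir/out" | wc -l) ))
echo "backed up $count files in $((end - start))s"
```

Running each candidate over the same generated set, several times, gives numbers where the difference between implementations actually dominates the noise.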
I had two sample file sets for my benchmarks, one with 471 files, one
with 9202 files. This is how I found which parts of my script needed
optimizing. Tests on individual files are good too, of course, just to
make sure you don't have a huge performance loss in this case.

-- 
Jean Delvare
Suse L3

_______________________________________________
Quilt-dev mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/quilt-dev
