On 11/11/10 02:32 PM, Simon Slavin wrote:
>
> On 11 Nov 2010, at 1:41pm, Dr. David Kirkby wrote:
>
>> On 11/10/10 04:28 PM, Roger Binns wrote:
>>
>>> The SQLite developers decided their library will always be reliable and
>>> greatly care about data integrity hence the amount of testing.
>>
>> I wish the Sage developers would take as much care. One recently said
>> something to the effect of "I'd rather not spend hours worrying about how
>> code might fail, when it is so easy to create patches when someone reports
>> a bug"
>
> Well, you just put me off on using Sage for the foreseeable future.
The situation is improving. The main developer is seeing the need to improve
quality, and is organising efforts just to get bugs fixed. There is also a
commercial company willing to provide test software to find more bugs.
Standards of coding are improving too.

> Bugs do not get spotted frequently in complicated maths software like Sage.
> The vast majority of users put numbers in, get numbers out, and assume the
> software works correctly. Bugs are rarely even spotted, much less reported,
> unless the numbers get graphed and the graph looks wrong, or when the error
> is so big the result falls outside a plausible range (e.g. a percentage
> bigger than 100%). Almost no users will report bugs even if they do find
> them if you make the bug-reporting process too annoying.

I don't think the bug-reporting system is annoying; just an email will do. We
intend to make bug reporting anonymous too. Likewise, results from the test
suite, which has a few thousand tests, will hopefully soon be reported
automatically.

> I would have hoped that the tests for Sage version increments were as good
> as the ones for SQLite.

I'm afraid they are not. I don't think there's the appetite for such extensive
testing among a sufficient number of developers. I very much doubt the
commercial software packages (Maple, Mathematica, MATLAB, etc.) have such
extensive testing. Certainly automated testing by one individual has uncovered
thousands of bugs in that commercial software.

I think mathematical software like Sage is, in general, very difficult to
test. Whilst some things are easy to test, others are far less so. You can
supply an input and get an output, but there's no way to verify whether that
output is correct (a rough numeric spot-check, of the sort sketched at the end
of this message, can catch gross errors, but it cannot prove correctness).

Where possible, I also test on some of the rarer platforms like AIX and HP-UX.
We always test on Solaris SPARC, Solaris x86, OpenSolaris, Linux and OS X.

> I worked with share dealing software for years. A mistake in our code could
> have cost a dealer millions of dollars. Designing the test code was part of
> designing any new feature: this is what the data will look like, this is how
> the user-interface will work, this is how it'll talk to other systems and
> here are the things we can test to make sure it's working right. A factor of
> 1:1 (lines of code in the module vs. lines of code in the test module) was
> not unusual, but it could easily be 30:1 or 1:30 depending on how ingenious
> we were feeling at the design stage and how many kinds of unexpected input
> we felt like inserting specific tests for. Unfortunately fuzzing tests
> weren't invented until after I left that company but we'd have loved them.

Again, I think testing that sort of software is probably easier than testing
very complex maths software.

> Our customers loved us because we packaged the test suite with the
> application. When auditors came around the customer could show the auditor
> all the tests 'they'd' run for bad input, calculation overflows, etc. and
> the auditors would go away impressed.

Sage does too; we ship thousands of tests with it. But there are certainly
inputs for which I don't believe one can verify whether the output is correct
(that said, I'm not a mathematician). Sometimes the outputs are correct, but
not in the simplest possible form.

> Simon.

Dave
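P.S. To make the spot-check point above concrete, here is a rough sketch of
the kind of numeric check I have in mind. It uses sympy purely as a stand-in
for Sage's symbolic engine, and the function name, point range and tolerance
are made up for the example; it is an illustration, not anything taken from
Sage's actual test suite.

    # Sketch: numerically spot-check a symbolic antiderivative.
    # sympy stands in for Sage here; names and tolerances are illustrative.
    import random
    import sympy as sp

    x = sp.symbols('x')

    def spot_check_antiderivative(f, F, trials=20, tol=1e-6):
        """Check that d/dx F equals f at a handful of random points.

        This cannot prove the symbolic result correct, but it catches
        many wrong answers cheaply.
        """
        g = sp.diff(F, x) - f
        for _ in range(trials):
            p = random.uniform(-5, 5)
            if abs(g.subs(x, p).evalf()) > tol:
                return False
        return True

    f = sp.sin(x) * sp.exp(x)
    F = sp.integrate(f, x)        # the "output" whose correctness is in doubt
    assert spot_check_antiderivative(f, F)

    # Two answers that differ only in form (correct, but not in the simplest
    # form) can be compared the same way, or by simplifying the difference:
    assert sp.simplify(sp.sin(x)**2 + sp.cos(x)**2 - 1) == 0

Of course a check like this only tells you the answer is plausible; proving it
correct is another matter entirely, which is really my point.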