On Dec 27, 2011 6:35 PM, "Dan Scott" <[email protected]> wrote:
>
> On Tue, Dec 27, 2011 at 05:12:26PM -0600, Scott Prater wrote:
> > I've made the changes needed in the admin client (statistical category
> > editor) to implement the enhancements to patron statistical categories
> > (see the proposal at
> > http://evergreen-ils.org/dokuwiki/doku.php?id=dev:proposal:patron_statistical_categories).
> > I did some pretty thorough and mind-crushingly dull manual testing in
> > the staff admin client to make sure that everything behaved as expected
> > in all the possible combinations of different org units, categories,
> > entries, etc. that I could think of.
>
> Wow! At first glance that looks like an impressively well-scoped and
> well-thought-out chunk of work!
>
> > What I'd really like to do is write tests for the OpenSRF methods I
> > created that simulate as closely as possible the requests made by the
> > JavaScript to the OpenSRF backend, so that I can make sure I cover all
> > the possible use cases, get the expected responses, and be able to
> > rerun the tests whenever any changes are made.
> >
> > My tests would do all the normal things tests do: seed the database
> > with test data, execute the methods with some mock objects, compare
> > the responses to other mock objects, then delete the test data from
> > the database.
> >
> > Where would be the best place to put such tests in the source tree?
>
> For functional verification tests like this that require a complete
> running system, I think a subdirectory under Open-ILS/tests would be
> perfectly appropriate. If you need seed data for bib records, copies,
> call numbers, located URIs, monograph parts, and conjoined items, you
> might find Open-ILS/tests/datasets/concerto.sql useful.
> Sounds like that's not the focus of your current efforts, but perhaps a
> similar approach would be useful for seeding the data you need,
> particularly if you need to create "historical" data, such as past
> circulation history, that might not be as easy to create using strict
> OpenSRF API calls.
>
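The seed/execute/compare/tear-down cycle Scott describes above could be sketched like this. This is purely illustrative Python: the functions and the in-memory `fake_db` are stand-ins I've made up for this sketch, not real OpenSRF or Evergreen APIs — a real test would issue the actual OpenSRF requests against a running system.

```python
# Illustrative sketch of the seed -> execute -> compare -> tear-down
# cycle. All names here (fake_db, create_stat_cat, etc.) are
# hypothetical stand-ins, not real OpenSRF/Evergreen calls.

fake_db = {}  # stands in for the real database


def create_stat_cat(org_unit, name):
    """Pretend OpenSRF method: create a patron statistical category."""
    cat_id = len(fake_db) + 1
    fake_db[cat_id] = {"owner": org_unit, "name": name}
    return cat_id


def retrieve_stat_cat(cat_id):
    """Pretend OpenSRF method: fetch a category by id."""
    return fake_db.get(cat_id)


def delete_stat_cat(cat_id):
    """Pretend OpenSRF method: remove a category."""
    fake_db.pop(cat_id, None)


# 1. Seed the database with test data
cat_id = create_stat_cat(org_unit=1, name="Home Branch")

# 2. Execute the method and compare the response to a mock/expected object
expected = {"owner": 1, "name": "Home Branch"}
assert retrieve_stat_cat(cat_id) == expected

# 3. Delete the test data and confirm it is gone
delete_stat_cat(cat_id)
assert retrieve_stat_cat(cat_id) is None

print("all checks passed")
```

The same shape maps naturally onto a test framework's setup/teardown hooks, with the seed step in setup and the delete step in teardown so each test runs against a clean slate.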
Outside the (unfortunately, yes) minimal in-EG tests, there's also the
Constrictor project. Bill Erickson built this specifically for API testing
and benchmarking. It's driven by relatively simple configuration files,
provides full-stack testing with expected-result comparison, and measures
various timing components of each test. It has the added benefit of being
able to control a cluster of test-running clients to simulate load for
those parts of the code that are load-sensitive, such as optional database
replication, process-local caching, and transaction control.

I'll have to defer to Bill on the current whereabouts of a Constrictor
repo, though, as even the read-only svn repo from before the age of git
seems to be missing.

-miker
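P.S. The "cluster of test-running clients" idea can be sketched generically in a few lines. This is plain Python threading and has nothing to do with Constrictor's actual interface or configuration format; `call_api` is a made-up stand-in for one networked API request.

```python
# Generic load-driver sketch: spawn concurrent clients against a target
# function, record per-call results and timings, then summarize.
# "call_api" is a hypothetical stand-in; a real harness would issue
# OpenSRF requests over the network.
import threading
import time


def call_api():
    """Stand-in for one API request; the sleep fakes service latency."""
    time.sleep(0.01)
    return "OK"


timings = []
lock = threading.Lock()


def client(requests_per_client):
    """One simulated test client issuing a fixed number of requests."""
    for _ in range(requests_per_client):
        start = time.perf_counter()
        result = call_api()
        elapsed = time.perf_counter() - start
        with lock:
            timings.append((result, elapsed))


# Simulate a small cluster of concurrent clients: 4 clients x 5 requests
threads = [threading.Thread(target=client, args=(5,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Compare every result against the expected value, then report timings
assert all(result == "OK" for result, _ in timings)
mean = sum(elapsed for _, elapsed in timings) / len(timings)
print(f"{len(timings)} calls, mean latency {mean:.4f}s")
```

A real harness would of course distribute the clients across machines and vary the request mix, but the measure-and-compare loop is the core of it.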
