Hi,

On Mon, Aug 9, 2010 at 3:53 PM, Ard Schrijvers <[email protected]> wrote:
> First of all, thanks a lot for this Jukka. I really like it. Would you
> have an idea how we could measure performance for larger repositories?
> For example, I would be glad to add some query performance tests, but,
> obviously, querying can be very sensitive to the number of nodes. I
> would be interested in the performance of some queries (XPath, SQL
> and QOM) against different repository versions, but then specifically
> queries against large repositories. I understand if it is not feasible
> because the tests would take too long. WDYT?
The size of the test repository shouldn't be too much of a problem, as long as the setup/teardown code doesn't take hours to complete. A few minutes per test is still quite OK; you can create quite a bit of test content in that time. The test suite currently doesn't allow multiple tests to share test content, but that should be easy to solve by introducing a concept of test groups with their own setup/teardown phases.

A more essential consideration is the time it takes to execute a single test query. Currently the test suite is configured to spend 50 seconds iterating over a single performance test, so to get good statistics an individual test shouldn't take much longer than a few seconds. We can increase the execution time, but I think a few seconds should in any case be the upper limit for most interesting search use cases.

See the simple search test case I added in revision 983662. It would be great if you'd be interested in adding more complex search benchmarks.

BR,

Jukka Zitting
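For illustration, the timed-iteration idea described above (run one benchmark operation repeatedly until a fixed time budget is used up, then look at how many runs fit) can be sketched roughly like this. This is a minimal standalone sketch, not the actual test suite code; the `runTimed` helper and the dummy task are hypothetical, and a real benchmark would execute a JCR query inside the loop instead.

```java
// Hedged sketch of a time-budgeted benchmark loop. A real test would run a
// repository query in place of the dummy task; names here are illustrative.
public class TimedBenchmark {

    /**
     * Runs the given task repeatedly until at least budgetMillis have
     * elapsed, and returns how many iterations completed in that time.
     */
    static int runTimed(Runnable task, long budgetMillis) {
        long start = System.currentTimeMillis();
        int iterations = 0;
        while (System.currentTimeMillis() - start < budgetMillis) {
            task.run();
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // Stand-in for a query execution; deliberately cheap for the demo.
        Runnable task = () -> {
            long sum = 0;
            for (int i = 0; i < 100_000; i++) {
                sum += i;
            }
        };

        // The suite described above uses a ~50 second budget; a tiny
        // budget is used here just to keep the demo fast.
        long budget = 200;
        long start = System.currentTimeMillis();
        int n = runTimed(task, budget);
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("iterations=" + n);
        System.out.println("avg millis per run=" + (elapsed / (double) n));
    }
}
```

The point of the fixed budget is that each individual operation must be cheap enough (a few seconds at most) for many iterations to fit inside it, otherwise the averaged statistics are based on too few samples to be meaningful.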
