We've known for some time that our automated performance tests use test data that is quite different from real-world cases.
In short, our test data is one big calendar of 3,000 events in a single collection, plus 2-4 tiny helper collections, with mostly non-recurring events and small payloads (bodies). In the real world we are seeing 3 or more collections with up to a couple of hundred events each (some exceptions with up to 1,000 events), mostly recurring events with lots of overlays, events shared between collections, and some large payloads.
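To make the gap concrete, here is a rough sketch of the two data shapes written as generator parameters. This is purely illustrative - the names are mine and the recurring ratios are guesses, not what our harness actually uses:

    # Hypothetical sketch of the two data shapes as generator parameters.
    # Names and ratios are illustrative, not taken from the test harness.

    TEST_DATA = dict(
        collections=1,              # one big collection
        events_per_collection=3000,
        helper_collections=(2, 4),  # plus 2-4 tiny helper collections
        recurring_ratio=0.1,        # mostly non-recurring (ratio is a guess)
        shared_events=False,        # no events shared between collections
        large_payloads=False,       # small bodies
    )

    REAL_WORLD = dict(
        collections=3,              # three or more
        events_per_collection=200,  # up to ~1000 in exceptional cases
        recurring_ratio=0.8,        # mostly recurring, lots of overlays (guess)
        shared_events=True,         # events shared between collections
        large_payloads=True,        # some large bodies
    )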
I did a comparison of Katie's data vs. the test data; the results are available at http://wiki.osafoundation.org/Journal/PerfDataComparisons20070227

Based on that comparison we can divide the tests into three categories:

1) cases where the real-world data is significantly slower (over 10%),
2) cases where the test data is significantly slower (over 10%), and
3) minor differences.
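For reference, the bucketing rule above boils down to something like the following. Again a sketch only - the function name and the exact threshold arithmetic are my own reading of "over 10%", and the wiki page has the actual numbers:

    # Illustrative sketch of the bucketing rule; see the wiki page for
    # the real measurements.

    THRESHOLD = 0.10  # "significantly slower" = more than 10% difference

    def bucket(test_time, real_time):
        """Classify one test by comparing real-world time to test-data time."""
        if real_time > test_time * (1 + THRESHOLD):
            return 1  # real world significantly slower
        if test_time > real_time * (1 + THRESHOLD):
            return 2  # test data significantly slower
        return 3      # minor difference

    # Example: view switching was over twice as slow with real-world data,
    # so it lands squarely in bucket 1.
    assert bucket(test_time=1.0, real_time=2.1) == 1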
We have 6 of the 11 tests in the first bucket, with view switching leading: it is over twice as slow in the real-world case. 3 cases fall into the second bucket.

The 6 cases in the first bucket might call for some revision to our test data, but I would like to run some additional tests with more real-world data first. Note that I don't think we can optimize away all of the difference, but a third might be doable.

I could run the tests if you send me your .ini file (do Test > Save settings... first). Or you could run your own tests - please check with me first for instructions.

-- Heikki Toivonen
