Heikki Toivonen wrote:
Is mail syncing also something you use that is slowing you down?
Do we have a test for this? I don't think this is on our Alpha3/Alpha4
short list, but it is probably good to have in our repertoire.
I'd like to see us run these tests against the 3k calendar (for
comparison to previous releases), our new sample data set that the QA
team is working on, and some even larger data set (~10k). We should set
targets for the sample data set.
The problem is that running the performance tests takes quite a bit of
time, so there is a limit on how many new tests we can reasonably add.
And removing existing tests means we'll lose the ability to compare to
past results. I think we could remove some tests, though.
Yup, the exact set of tests to run is a bit tricky.
Seems like we have these needs:
- Running tests against a 3k repository is useful for comparison to
historical data
- Running tests against an empty repository is an interesting comparison
- Running tests against the new sample data set is useful because it
gives us the best metric for alpha3/4 dogfoodability
- Running tests against a bigger repository (10k?) is useful to push on
the repository size and learn about scaling
- Running the original suite of tests is useful for comparison to
historical data
- We'll want to expand the number of scenarios over time to cover new
use cases
- Running too many tests in every cycle means the cycle takes forever
What are our options here?
- Does it make sense to have separate test suites and not run all of
them continuously? Or alternate which suites run each cycle? (One
possible rotation scheme is sketched below.)
FYI: the new test data set task is being tracked in Bug 6029. Sheila,
Heikki, or Mikeal, could you update that bug with more information?
https://bugzilla.osafoundation.org/show_bug.cgi?id=6029
Cheers,
Katie