On 12/22/2010 10:57 AM, jgr...@simulexinc.com wrote:
...
This is the biggest concern, I think. As such, I'd be interested in
seeing performance runs, to back up the intuition. Then, at least,
we'd know precisely what trade-off we're talking about.
The test would need to cover both small and large batches, at multiples
of the batch size/takeMultipleLimit and at numbers off those multiples,
with transactions and without.
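
For concreteness, the kind of timing loop such a run would need might look
roughly like the sketch below. Everything in it is illustrative rather than
existing River code: the package, class, and entry names are placeholders,
the space and transaction manager proxies are assumed to come from the test
harness, and a real test would sweep the batch size across and off multiples
of the configured takeMultipleLimit, as described above.

    // Sketch only -- not existing River code.  A harness would construct
    // this with JavaSpace05 and TransactionManager proxies it has looked up.
    package org.apache.river.performance.outrigger;   // hypothetical package

    import java.util.Collection;
    import java.util.Collections;

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.core.transaction.Transaction;
    import net.jini.core.transaction.TransactionFactory;
    import net.jini.core.transaction.server.TransactionManager;
    import net.jini.space.JavaSpace05;

    public class BatchTakeBenchmark {

        /** Minimal entry used as the benchmark payload. */
        public static class TestEntry implements Entry {
            public Integer id;
            public TestEntry() {}
            public TestEntry(Integer id) { this.id = id; }
        }

        private final JavaSpace05 space;
        private final TransactionManager txnMgr;

        public BatchTakeBenchmark(JavaSpace05 space, TransactionManager txnMgr) {
            this.space = space;
            this.txnMgr = txnMgr;
        }

        /**
         * Writes batchSize entries, then times taking them back one at a
         * time, optionally under a single transaction.  Returns nanoseconds.
         */
        public long timeSingleTakes(int batchSize, boolean useTxn) throws Exception {
            fill(batchSize);
            Transaction txn = useTxn
                ? TransactionFactory.create(txnMgr, 5 * 60 * 1000L).transaction
                : null;
            Entry template = new TestEntry();     // null fields match anything
            long start = System.nanoTime();
            for (int i = 0; i < batchSize; i++) {
                space.take(template, txn, 10000L);
            }
            long elapsed = System.nanoTime() - start;
            if (txn != null) {
                txn.commit();
            }
            return elapsed;
        }

        /**
         * Same load, but drained through the JavaSpace05 bulk take, which
         * is the path the takeMultipleLimit affects.
         */
        public long timeBulkTake(int batchSize, boolean useTxn) throws Exception {
            fill(batchSize);
            Transaction txn = useTxn
                ? TransactionFactory.create(txnMgr, 5 * 60 * 1000L).transaction
                : null;
            Collection tmpls = Collections.singleton(new TestEntry());
            int taken = 0;
            long start = System.nanoTime();
            while (taken < batchSize) {
                // One bulk take may return fewer entries than requested,
                // since the server caps each call; loop until drained.
                Collection got = space.take(tmpls, txn, 10000L, batchSize - taken);
                if (got.isEmpty()) {
                    break;            // timed out with nothing left to take
                }
                taken += got.size();
            }
            long elapsed = System.nanoTime() - start;
            if (txn != null) {
                txn.commit();
            }
            return elapsed;
        }

        private void fill(int batchSize) throws Exception {
            for (int i = 0; i < batchSize; i++) {
                space.write(new TestEntry(Integer.valueOf(i)), null, Lease.FOREVER);
            }
        }
    }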
I think we need a lot of performance tests, some way to organize them,
and some way to retain their results.
I propose adding a "performance" folder to the River trunk, with
subdirectories "src" and "results". src would contain benchmark source
code. results would contain benchmark output.
System-level tests could have their own package hierarchy, under
org.apache.impl, but reflecting what is being measured. Unit-level tests
would need to follow the package hierarchy of the code being tested, to
get package access. The results hierarchy would mirror the src hierarchy
for the tests.
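
To make that concrete, the layout might end up looking something like the
sketch below; the particular subpackages are only examples, not a fixed list:

    performance/
        src/
            org/apache/impl/...          system-level benchmarks, arranged
                                         by what they measure
            com/sun/jini/outrigger/...   unit-level benchmarks that need
                                         package access to the code under test
        results/
            ...                          mirrors src/, holding each
                                         benchmark's output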
Any ideas, alternatives, changes, improvements?
Patricia