Hi,

On Thu, Aug 25, 2011 at 11:42 AM, Jukka Zitting <[email protected]> wrote:
> Results from a performance test run from last night:
>
> http://people.apache.org/~jukka/jackrabbit/report-2011-08-25/report.html
Here's an updated report:

http://people.apache.org/~jukka/jackrabbit/report-2011-09-01/report.html

> * The BigFileReadTest results are pretty fascinating. It looks like
> the set of ten 100MB test files fits into the OS buffer cache (the
> test computer has 4GB RAM) and unlike in previous Jackrabbit versions
> we can now for some reason avoid invalidating that cache.

That was indeed the case. I increased the test set from ten to a
hundred 100MB files (10GB in total, well beyond the 4GB of RAM), so
they can no longer all fit into memory at once, and now there's much
less variance in performance.

Note that there was a massive performance problem in writing large
files in the default configuration of all of Jackrabbit 1.4, 1.5 and
1.6. I had to disable the big file tests on those versions, as
otherwise writing the 10GB of test data would have taken way too long
to be practical.

> * We have a notable regression in concurrent read and read/write
> performance. I'll file a blocker issue for that, to be fixed before
> Jackrabbit 2.3.

See https://issues.apache.org/jira/browse/JCR-3064

> * Massive performance improvements for SetProperty and
> UpdateManyChildNodes tests. I'm not sure which of the recent changes
> is giving this performance boost.

The performance boost can no longer be seen in the latest results, so
it might have been just a fluke.

BR,

Jukka Zitting
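
P.S. For anyone wondering what a big file read boils down to at the
API level, the sketch below is not the actual BigFileReadTest code,
just a rough illustration against the plain JCR 2.0 API (the node
path, buffer size and method name are made up). The point is that the
content is streamed in small chunks, so memory use stays constant and
throughput is dominated by whether the bytes come from the OS buffer
cache or from disk:

    import java.io.InputStream;
    import javax.jcr.Binary;
    import javax.jcr.Node;
    import javax.jcr.Session;

    public class BigFileReadSketch {

        // Streams the binary of an nt:file node and returns the number
        // of bytes read. The read is sequential and chunked, so memory
        // use is constant regardless of the file size.
        public static long readBigFile(Session session, String filePath)
                throws Exception {
            Node file = session.getNode(filePath);      // hypothetical path, e.g. "/testdata/file-042"
            Node content = file.getNode("jcr:content"); // the nt:resource child
            Binary binary = content.getProperty("jcr:data").getBinary();
            InputStream in = binary.getStream();
            long total = 0;
            try {
                byte[] buffer = new byte[128 * 1024];   // arbitrary chunk size
                int n;
                while ((n = in.read(buffer)) != -1) {
                    total += n;
                }
            } finally {
                in.close();
                binary.dispose(); // release any resources held by the binary
            }
            return total;
        }
    }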
