Using a single fixed heap size for GC performance testing is a Bad Idea.
Only 5 of these benchmarks would require a GC (per iteration) in a
900MB heap for a full-heap collector. Fop, for example, only allocates
100MB.
Best to find the minimum heap size for each benchmark and then do the
testing in a 2x or 3x heap. You could try the Max Live figure from
dacapobench.org as a first approximation.
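Something like the following untested sketch would binary-search the
minimum workable -Xmx for a given benchmark and print the 2x/3x heap
sizes to test at. It assumes (my assumptions, not anything in your
setup) that the DaCapo jar is named dacapo.jar in the working
directory and that a failing run exits with a non-zero status:

import java.io.IOException;

public class MinHeapFinder {
    // True if the benchmark completes in an mxMb-megabyte heap.
    static boolean runsIn(String benchmark, int mxMb)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "java", "-Xmx" + mxMb + "m", "-jar", "dacapo.jar", benchmark)
                .inheritIO()
                .start();
        return p.waitFor() == 0;
    }

    public static void main(String[] args) throws Exception {
        String benchmark = args.length > 0 ? args[0] : "fop";
        // Search window in MB; assumes the benchmark does fit in 900MB.
        int lo = 16, hi = 900;
        // Binary search for the smallest working -Xmx (heap success is
        // monotonic: if it runs in N MB, it runs in anything larger).
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (runsIn(benchmark, mid)) hi = mid; else lo = mid + 1;
        }
        System.out.printf("%s: min heap ~%dMB; test at -Xmx%dm and -Xmx%dm%n",
                benchmark, lo, 2 * lo, 3 * lo);
    }
}

Each probe runs the whole benchmark, so this is slow, but you only
need to do it once per benchmark.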
cheers
Vladimir Strigun wrote:
Hi all,
Since the default GC was changed to gcv5, I've done a performance
comparison between gcv4 and gcv5 with the DaCapo benchmarks. The
following build was used: svn = r538104 (May 15 2007),
Windows/ia32/msvc 1310, release build. Measurements were performed on
a Woodcrest machine, with a 900MB heap, large pages off. In addition
to the current code I used the new charset encoders/decoders [1].
Each DaCapo benchmark was executed 10 times, and the final result was
calculated from the last 5 values (a small sketch of this reduction
appears after the numbers). I got the following numbers (the values
are in milliseconds, so lower is better):
gc_cc.dll:
antlr: 1762
bloat: 5927 (the benchmark failed after the 6th iteration)
chart: 7831
fop: 1043
hsqldb: 2290
jython: 6790
luindex: 5422
lusearch: 2846
pmd: 4071
xalan: 2087
gc_gen.dll:
antlr: 2354 (the benchmark failed after the 7th iteration)
bloat: 6005 (the benchmark failed after the 6th iteration)
chart: 7812
fop: 1034
hsqldb: 2219 (the benchmark failed after the 6th iteration)
jython: 6072
luindex: 5478
lusearch: 3569
pmd: 27772
xalan: 4547 (the benchmark failed after the 3rd iteration)
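For reference, each number above is the mean of the last 5 of the 10
iteration times. A minimal sketch of that reduction, with made-up
timings:

import java.util.Arrays;

public class LastFiveMean {
    public static void main(String[] args) {
        // Hypothetical iteration times in ms; the first 5 are warm-up.
        long[] times = {2410, 1990, 1850, 1801, 1777,
                        1765, 1760, 1763, 1759, 1762};
        // Average only the last 5 entries (indices 5..9).
        double mean = Arrays.stream(times, 5, 10).average().orElse(Double.NaN);
        System.out.printf("final result = %.0f ms%n", mean);
    }
}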
So, as a result, we have 3 new failures (benchmarks that did not
complete successfully), a huge degradation on the pmd benchmark, a
degradation on lusearch, and a significant speedup on jython. I'll
continue to experiment with gcv5: increasing the heap size, turning
on large pages, etc. Should I create 3 separate JIRA issues, one for
each failure, or is one issue enough?
Thanks.
Vladimir.
[1] https://issues.apache.org/jira/browse/HARMONY-3593
--
Robin Garner
Dept. of Computer Science
Australian National University
http://cs.anu.edu.au/people/Robin.Garner/