On 2015-09-15 02:53, Maurizio Cimadamore wrote:
Hi Erik,
thanks for the explanation.
Regarding build times, the current heuristic scores OK on my
high-end machine (I get more or less the same time as with the JDK 8
build) - but on a lower-spec machine (e.g. a laptop with a dual-core
Intel i5) it gets much worse: I used to be able to build in 7 minutes
on my laptop (using ccache) - now the build time is at least double
that figure.
I believe the major cause of the build performance degradation you see
in JDK 9 vs JDK 8 on a low-end machine is that we split the Java
compilation into a per-module model. In JDK 8, all Java code was
compiled in one chunk. On a high-end machine, the split doesn't cause as
much degradation, since many modules are compiled in parallel, making up
for the time lost restarting the JVM for each module. I think most of
this loss will be recovered when we introduce server javac, where the
JVM will be kept warm and reused for all Java compilations.
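The per-invocation JVM startup cost is easy to see in isolation; a rough
illustration (assuming only that javac is on the PATH):

    time javac -version                                         # one JVM start
    time sh -c 'for i in 1 2 3 4 5; do javac -version; done'    # five JVM starts

Multiply that per-start overhead by the number of modules and it adds
up, especially when fewer modules can compile in parallel.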
Have you tested reducing the number of jobs (make JOBS=2) on your i5? If
that does indeed improve build times on JDK 9, I would be surprised, but
then we would definitely need to change the heuristic. In my experience,
lowering the JOBS number does not have a positive impact on build times
on any system.
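For reference, in an already-configured build tree the comparison would
just be (the images target is an example; any target works):

    time make clean images             # default job heuristic
    time make clean images JOBS=2      # forced down to 2 jobs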
/Erik
I know it's a hard problem to decide how many cores to use, but there
seems to be a pattern emerging:
* low-end machines get completely swamped by the build load
* CPU-bound tests run into trouble when reusing the same concurrency
settings, even on high-end hardware. Without playing with timeouts
it's impossible to get a clean test sheet.
* on relatively high-end HW, the current build concurrency settings
seem to be doing OK.
Realistically, I believe anything that uses more than n/2 virtual
processors is going to run into trouble sooner or later; the build might
be OK since there's so much I/O going on (reading/writing files) - but
the more CPU-intensive the build becomes (and sjavac might help with
that), the more the current settings could become a bottleneck.
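A conservative override along those lines is easy to script; a sketch
assuming Linux (nproc), and note that the n/2 policy is just my
suggestion above, not anything the build does today:

    # Cap the build at half the virtual processors, but at least 1.
    N=$(nproc)
    JOBS=$(( N / 2 ))
    [ "$JOBS" -lt 1 ] && JOBS=1
    make JOBS=$JOBS images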
Maurizio
On 14/09/15 17:05, Erik Joelsson wrote:
Hello,
When I implemented the heuristic to choose a suitable default
concurrency, I only ever worried about the build. I think having
tests use the same concurrency setting must be a new feature? In any
case, it seems like there is a case for reducing concurrency when
running tests.
Another note: it at least used to be quite tricky to get correct
information about physical cores vs. hyperthreads from the OS. I know
today we aren't even consistent about this across platforms. Perhaps we
should revisit this heuristic and take hyperthreading into
consideration too.
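For reference, the queries differ per platform; a sketch (exact output
formats vary across OS versions):

    # Linux: logical CPUs, then the physical topology
    getconf _NPROCESSORS_ONLN
    lscpu | egrep 'Socket|Core|Thread'

    # Mac OS X: logical vs. physical
    sysctl -n hw.logicalcpu hw.physicalcpu

    # Solaris: physical chips vs. virtual processors
    psrinfo -p
    psrinfo | wc -l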
The current implementation uses 100% of the number of virtual CPUs when
there are 1 to 4 of them, then 90% for 5 to 16. After that it caps out
at 16. (I might be remembering some detail wrong here.)
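In shell terms, that recollection amounts to roughly the following (a
sketch of the description above, not the actual configure code):

    N=$(getconf _NPROCESSORS_ONLN)     # virtual CPUs
    if [ "$N" -le 4 ]; then
        JOBS=$N                        # 100% for 1 to 4 CPUs
    elif [ "$N" -le 16 ]; then
        JOBS=$(( N * 9 / 10 ))         # 90% for 5 to 16 CPUs
    else
        JOBS=16                        # hard cap
    fi
    echo "$JOBS"                       # prints 14 on a 16-CPU machine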
/Erik
On 2015-09-14 04:10, Maurizio Cimadamore wrote:
The information I posted was slightly incorrect, sorry - my machine
has 8 cores (and 16 virtual processors) - so you see why choosing a
concurrency factor of 14 is particularly bad in this setup.
Maurizio
On 14/09/15 12:03, Maurizio Cimadamore wrote:
Hi,
I realized that the concurrency factor inferred by the JDK build
might be too high; on a 16-core machine, concurrency is set to 14 -
which then leads to absurd load averages (50-ish) when
building/running tests. High load when building is not a big issue,
but when running tests it almost always turns into spurious
failures due to timeouts. I know I can override the concurrency
factor with --with-jobs - but I was curious as to why the default
is set to such an aggressive value?
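For the record, both override styles work; e.g.:

    bash configure --with-jobs=8       # fix the default at configure time
    make JOBS=8 images                 # or override a single invocation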
Thanks
Maurizio