Re: Stop using precompiled headers for Linux?

2018-11-05 Thread Aleksey Shipilev
On 11/05/2018 05:46 PM, Erik Joelsson wrote:
>> If we decide to keep precompiled headers on by default, maybe we should add 
>> a simple no-PCH
>> verification task in tier1? It only needs to build hotspot, so it should be 
>> quick.
>>
> That is a good point, so sure we can do that. Which debug level would be most 
> appropriate for this
> test, debug or slowdebug? I very much doubt it's relevant to run release 
> builds without PCH.

Choosing between fastdebug and slowdebug, I'd choose fastdebug. slowdebug has, 
or at least used to have, its own warts (oopDesc verification?) that can mask 
build issues.

-Aleksey



Re: Stop using precompiled headers for Linux?

2018-11-05 Thread Erik Joelsson

On 2018-11-03 01:51, Magnus Ihse Bursie wrote:


On 2018-10-30 20:21, Erik Joelsson wrote:
Last I checked, it did provide significant build speed improvements 
when building just hotspot, but that could need revisiting.


We do have verification of --disable-precompiled-headers (in 
slowdebug) in builds-tier2 so we normally get notified if this fails. 
However, Mach5 has not been running since Friday so this particular 
bug wasn't detected automatically. Looking at the bug, it also failed 
on Solaris, which would have been caught by tier1 builds.


If we decide to keep precompiled headers on by default, maybe we 
should add a simple no-PCH verification task in tier1? It only needs 
to build hotspot, so it should be quick.


That is a good point, so sure we can do that. Which debug level would be 
most appropriate for this test, debug or slowdebug? I very much doubt 
it's relevant to run release builds without PCH.
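A verification build like the one discussed could be set up roughly as follows. This is a sketch: the two configure flags exist in the OpenJDK build system, but the exact invocation, source path, and debug level choice (fastdebug, per Aleksey's suggestion) are illustrative.

```shell
# Hedged sketch of a no-PCH hotspot verification build; run from the
# top of an OpenJDK source tree. Flag names are as in OpenJDK's
# configure; everything else here is illustrative.
bash configure --with-debug-level=fastdebug --disable-precompiled-headers
time make hotspot
```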


/Erik

/Magnus



/Erik


On 2018-10-30 10:26, Ioi Lam wrote:
Is there any advantage of using precompiled headers on Linux? It's 
on by default and we keep having breakage where someone would forget 
to add #include. The latest instance is JDK-8213148.


I just turn on precompiled headers explicitly in all my builds. I 
don't see any difference in build time (at least not significant 
enough for me to bother).


Should we disable it by default on Linux?

Thanks

- Ioi

Re: Stop using precompiled headers for Linux?

2018-11-03 Thread Magnus Ihse Bursie



On 2018-10-30 20:21, Erik Joelsson wrote:
Last I checked, it did provide significant build speed improvements 
when building just hotspot, but that could need revisiting.


We do have verification of --disable-precompiled-headers (in 
slowdebug) in builds-tier2 so we normally get notified if this fails. 
However, Mach5 has not been running since Friday so this particular 
bug wasn't detected automatically. Looking at the bug, it also failed 
on Solaris, which would have been caught by tier1 builds.


If we decide to keep precompiled headers on by default, maybe we should 
add a simple no-PCH verification task in tier1? It only needs to build 
hotspot, so it should be quick.


/Magnus



/Erik


On 2018-10-30 10:26, Ioi Lam wrote:
Is there any advantage of using precompiled headers on Linux? It's on 
by default and we keep having breakage where someone would forget to 
add #include. The latest instance is JDK-8213148.


I just turn on precompiled headers explicitly in all my builds. I 
don't see any difference in build time (at least not significant 
enough for me to bother).


Should we disable it by default on Linux?

Thanks

- Ioi

Re: Stop using precompiled headers for Linux?

2018-11-02 Thread Magnus Ihse Bursie




On 2018-11-02 15:00, Magnus Ihse Bursie wrote:
So obviously this is a nice improvement even here. I could probably 
experiment a bit and see if there is an even better fit with a 
different selection of header files, but even without that, I'd say 
this patch is by itself as good for clang as it is for gcc.


I could not improve matters much. There were only very small differences 
between setting the limit at 110, 130 (BKM), 150 or 200:


real    3m6,322s
user    19m3,000s
sys    1m41,252s
hotspot new pch BKM (on and above 130)

real    3m6,309s
user    19m2,764s
sys    1m41,488s
hotspot new pch on and above 110

real    3m7,515s
user    19m6,470s
sys    1m42,345s
hotspot new pch on and above 150

real    3m8,497s
user    19m15,851s
sys    1m42,494s
hotspot new pch on and above 200

/Magnus


Re: Stop using precompiled headers for Linux?

2018-11-02 Thread Magnus Ihse Bursie



On 2018-11-02 17:21, Erik Joelsson wrote:

Nice work!

What exactly are you measuring, "make hotspot" or some other target?

Yes, "make hotspot".

If we can find a reasonable set of extra files for the windows pch 
that restores all or most of the performance, that would of course be 
preferable. I doubt we will find a significantly better selection on 
Mac compared to Linux though.
It seems the best selection for Mac is essentially identical to the Linux 
one, which is nice. For Windows, I was able to more or less precisely match 
the original behaviour with the Linux set plus the four inline.hpp files I 
removed for Linux:


# include "oops/oop.inline.hpp"
# include "memory/allocation.inline.hpp"
# include "oops/access.inline.hpp"
# include "runtime/handles.inline.hpp"

Then I got from the original:
real    6m39.035s
user    0m58.580s
sys 2m48.138s
hotspot with original pch

to:

real    6m18.645s
user    0m55.963s
sys    2m28.264s
hotspot with new pch, BKM (on and above 130), including inline

Quite good for just adding four more files, conditionally included on 
Windows.


By adding yet some more include files (and keeping the inline files), I 
was able to improve Windows compile time somewhat more:

real    6m7.355s
user    0m55.718s
sys    2m26.153s
hotspot with new pch on and above 110, including inline

Then I also added this set:
// 130-110
# include "runtime/thread.inline.hpp"
# include "utilities/bitMap.inline.hpp"
# include "oops/arrayOop.inline.hpp"
# include "gc/shared/gcId.hpp"
# include "runtime/mutexLocker.hpp"
# include "oops/objArrayOop.inline.hpp"
# include "classfile/javaClasses.inline.hpp"
# include "memory/referenceType.hpp"
# include "oops/weakHandle.hpp"
# include "oops/compressedOops.inline.hpp"
# include "gc/shared/barrierSet.hpp"
# include "utilities/stack.hpp"
# include "gc/g1/g1YCTypes.hpp"
# include "memory/padded.hpp"
# include "logging/logHandle.hpp"

This starts to look a bit specialized (the g1 files are likely to need 
#ifdef guards etc.), so maybe it's not worth it.


/Magnus





/Erik


On 2018-11-02 07:00, Magnus Ihse Bursie wrote:

On 2018-11-02 12:14, Magnus Ihse Bursie wrote:
Caveats: I have only run this on my local linux build with the 
default server JVM configuration. Other machines will have different 
sweet spots. Other JVM variants/feature combinations will have 
different sweet spots. And, most importantly, I have not tested this 
at all on Windows. Nevertheless, I'm almost prepared to suggest a 
patch that uses this selection of files if running on gcc, just as 
is, because of the speed improvements I measured. 


I've started running tests on other platforms. Unfortunately, I don't 
have access to quite as powerful machines, so everything takes much 
longer. For the moment, I've only tested my "BKM" (best known method) 
from linux, to see if it works.


For xcode/macos I got:
real    4m21,528s
user    27m28,623s
sys    2m18,244s
hotspot with original pch

real    4m28,867s
user    29m10,685s
sys    2m14,456s
hotspot without pch

real    3m6,322s
user    19m3,000s
sys    1m41,252s
hotspot with new BKM pch

So obviously this is a nice improvement even here. I could probably 
experiment a bit and see if there is an even better fit with a 
different selection of header files, but even without that, I'd say 
this patch is by itself as good for clang as it is for gcc.


For windows I got:
real    6m39.035s
user    0m58.580s
sys 2m48.138s
hotspot with original pch

real    10m29.227s
user    1m6.909s
sys    2m24.108s
hotspot without pch

real    6m56.262s
user    0m57.563s
sys    2m27.514s
hotspot with new BKM pch

I'm not sure what's going on with the user time numbers here. 
Presumably cygwin cannot get to the real Windows time data. What I 
can see is the huge difference in wall clock time between PCH and no 
PCH. I can also see that the new trimmed BKM list retains most of 
that improvement, but is actually somewhat slower than the original 
list. I'm currently rerunning with a larger set on Windows, to see if 
this helps improve things. I can certainly live with a 
precompiled.hpp that includes some additional files on Windows.


/Magnus


Re: Stop using precompiled headers for Linux?

2018-11-02 Thread Aleksey Shipilev
On 11/02/2018 12:14 PM, Magnus Ihse Bursie wrote:
> And here is the "winning" list (which I declared as "on or above 130, without 
> inline"). I encourage
> everyone to try this on their own system, and report back the results!
> 
> #ifndef DONT_USE_PRECOMPILED_HEADER
> # include "classfile/classLoaderData.hpp"
> # include "classfile/javaClasses.hpp"
> # include "classfile/systemDictionary.hpp"
> # include "gc/shared/collectedHeap.hpp"
> # include "gc/shared/gcCause.hpp"
> # include "logging/log.hpp"
> # include "memory/allocation.hpp"
> # include "memory/iterator.hpp"
> # include "memory/memRegion.hpp"
> # include "memory/resourceArea.hpp"
> # include "memory/universe.hpp"
> # include "oops/instanceKlass.hpp"
> # include "oops/klass.hpp"
> # include "oops/method.hpp"
> # include "oops/objArrayKlass.hpp"
> # include "oops/objArrayOop.hpp"
> # include "oops/oop.hpp"
> # include "oops/oopsHierarchy.hpp"
> # include "runtime/atomic.hpp"
> # include "runtime/globals.hpp"
> # include "runtime/handles.hpp"
> # include "runtime/mutex.hpp"
> # include "runtime/orderAccess.hpp"
> # include "runtime/os.hpp"
> # include "runtime/thread.hpp"
> # include "runtime/timer.hpp"
> # include "services/memTracker.hpp"
> # include "utilities/align.hpp"
> # include "utilities/bitMap.hpp"
> # include "utilities/copy.hpp"
> # include "utilities/debug.hpp"
> # include "utilities/exceptions.hpp"
> # include "utilities/globalDefinitions.hpp"
> # include "utilities/growableArray.hpp"
> # include "utilities/macros.hpp"
> # include "utilities/ostream.hpp"
> # include "utilities/ticks.hpp"
> #endif // !DONT_USE_PRECOMPILED_HEADER

"make clean hotspot" times on my TR 2950X Linux x86_64 build node:

 no PCH: {134s, 135s, 135s} wall, ~59m user
old PCH: {136s, 136s, 135s} wall, ~55m user
new PCH: {111s, 108s, 108s} wall, ~45m user

I am all for shallower PCH, even knowing I would disable it for my builds 
anyway :)

Thanks,
-Aleksey



Re: Stop using precompiled headers for Linux?

2018-11-02 Thread Erik Joelsson

Nice work!

What exactly are you measuring, "make hotspot" or some other target?

If we can find a reasonable set of extra files for the windows pch that 
restores all or most of the performance, that would of course be 
preferable. I doubt we will find a significantly better selection on Mac 
compared to Linux though.


/Erik


On 2018-11-02 07:00, Magnus Ihse Bursie wrote:

On 2018-11-02 12:14, Magnus Ihse Bursie wrote:
Caveats: I have only run this on my local linux build with the 
default server JVM configuration. Other machines will have different 
sweet spots. Other JVM variants/feature combinations will have 
different sweet spots. And, most importantly, I have not tested this 
at all on Windows. Nevertheless, I'm almost prepared to suggest a 
patch that uses this selection of files if running on gcc, just as 
is, because of the speed improvements I measured. 


I've started running tests on other platforms. Unfortunately, I don't 
have access to quite as powerful machines, so everything takes much 
longer. For the moment, I've only tested my "BKM" (best known method) 
from linux, to see if it works.


For xcode/macos I got:
real    4m21,528s
user    27m28,623s
sys    2m18,244s
hotspot with original pch

real    4m28,867s
user    29m10,685s
sys    2m14,456s
hotspot without pch

real    3m6,322s
user    19m3,000s
sys    1m41,252s
hotspot with new BKM pch

So obviously this is a nice improvement even here. I could probably 
experiment a bit and see if there is an even better fit with a 
different selection of header files, but even without that, I'd say 
this patch is by itself as good for clang as it is for gcc.


For windows I got:
real    6m39.035s
user    0m58.580s
sys 2m48.138s
hotspot with original pch

real    10m29.227s
user    1m6.909s
sys    2m24.108s
hotspot without pch

real    6m56.262s
user    0m57.563s
sys    2m27.514s
hotspot with new BKM pch

I'm not sure what's going on with the user time numbers here. 
Presumably cygwin cannot get to the real Windows time data. What I can 
see is the huge difference in wall clock time between PCH and no PCH. 
I can also see that the new trimmed BKM list retains most of that 
improvement, but is actually somewhat slower than the original list. 
I'm currently rerunning with a larger set on Windows, to see if this 
helps improve things. I can certainly live with a precompiled.hpp that 
includes some additional files on Windows.


/Magnus

Re: Stop using precompiled headers for Linux?

2018-11-02 Thread Thomas Stüfe
Hi Magnus,

your winning variant gives me a nice boost on my thinkpad:

pch, standard:
real 17m52.367s
user 52m20.730s
sys  4m53.711s

pch, your variant:
real 15m0.514s
user 46m6.466s
sys  2m38.371s

(non-pch is ~19-20 minutes WTC)

With those numbers, I might start using pch again on low powered machines.

.. Thomas



On Fri, Nov 2, 2018 at 12:14 PM Magnus Ihse Bursie
 wrote:
>
>
> On 2018-11-02 11:39, Magnus Ihse Bursie wrote:
> > On 2018-11-02 00:53, Ioi Lam wrote:
> >> Maybe precompiled.hpp can be periodically (weekly?) updated by a
> >> robot, which parses the dependencies files generated by gcc, and pick
> >> the most popular N files?
> > I think that's tricky to implement automatically. However, I've done
> > more or less that, and I've got some wonderful results! :-)
>
> Ok, I'm done running my tests.
>
> TL;DR: I've managed to reduce wall-clock time from 2m 45s (with pch) or
> 2m 23s (without pch), to 1m 55s. The cpu time spent went from 52m 27s
> (with pch) or 55m 30s (without pch) to 41m 10s. This is a huge gain for
> our automated builds! And a clear improvement even for the ordinary
> developer.
>
> The list of included header files is reduced to just 37. The winning
> combination was to include all header files that were included in more
> than 130 different files, but to exclude all files with the name
> "*.inline.hpp". Hopefully, a further gain of not pulling in the
> *.inline.hpp files is that the risk of pch/non-pch failures will diminish.
>
> However, these 37 files in turn pull in an additional 201 header files.
> Of these, three are *.inline.hpp:
> share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdBits.inline.hpp,
> os_cpu/linux_x86/bytes_linux_x86.inline.hpp and
> os_cpu/linux_x86/copy_linux_x86.inline.hpp. This looks like a problem
> with the header files to me.
>
> With some exceptions (mostly related to JFR), these additional 200 files
> have "generic" looking names (like share/gc/g1/g1_globals.hpp), which
> indicate to me that it is reasonable to have them in this list, just as
> the list of the original 37 tended to be quite general and high-level
> includes. However, some files (like
> share/jfr/instrumentation/jfrEventClassTransformer.hpp) have maybe leaked
> in where they should not really be. It might be worth letting a hotspot
> engineer spend some cycles to check up these files and see if anything
> can be improved.
>
> Caveats: I have only run this on my local linux build with the default
> server JVM configuration. Other machines will have different sweet
> spots. Other JVM variants/feature combinations will have different sweet
> spots. And, most importantly, I have not tested this at all on Windows.
> Nevertheless, I'm almost prepared to suggest a patch that uses this
> selection of files if running on gcc, just as is, because of the speed
> improvements I measured.
>
> And some data:
>
> Here is my log from my runs. The "on or above" figure is the cutoff for
> how many files needed to include a header for it to be selected.
> As you can see, there is not much difference between cutoffs from 130 to
> 150, or (without the inline files) between 110 and 150. (There were
> a lot of additional inline files in the positions below 130.) All else
> being equal, I'd prefer a solution with fewer files. That is less likely
> to go bad.
>
> real 2m45.623s
> user 52m27.813s
> sys 5m27.176s
> hotspot with original pch
>
> real 2m23.837s
> user 55m30.448s
> sys 3m39.739s
> hotspot without pch
>
> real 1m59.533s
> user 42m50.019s
> sys 3m0.893s
> hotspot new pch on or above 250
>
> real 1m58.937s
> user 42m18.994s
> sys 3m0.245s
> hotspot new pch on or above 200
>
> real 2m0.729s
> user 42m16.636s
> sys 2m57.125s
> hotspot new pch on or above 170
>
> real 1m58.064s
> user 42m9.618s
> sys 2m57.635s
> hotspot new pch on or above 150
>
> real 1m58.053s
> user 42m9.796s
> sys 2m58.732s
> hotspot new pch on or above 130
>
> real 2m3.364s
> user 42m54.818s
> sys 3m2.737s
> hotspot new pch on or above 100
>
> real 2m6.698s
> user 44m30.434s
> sys 3m12.015s
> hotspot new pch on or above 70
>
> real 2m0.598s
> user 41m17.810s
> sys 2m56.258s
> hotspot new pch on or above 150 without inline
>
> real 1m55.981s
> user 41m10.076s
> sys 2m51.983s
> hotspot new pch on or above 130 without inline
>
> real 1m56.449s
> user 41m10.667s
> sys 2m53.808s
> hotspot new pch on or above 110 without inline
>
> And here is the "winning" list (which I declared as "on or above 130,
> without inline"). I encourage everyone to try this on their own system,
> and report back the results!
>
> #ifndef DONT_USE_PRECOMPILED_HEADER
> # include "classfile/classLoaderData.hpp"
> # include "classfile/javaClasses.hpp"
> # include "classfile/systemDictionary.hpp"
> # include "gc/shared/collectedHeap.hpp"
> # include "gc/shared/gcCause.hpp"
> # include "logging/log.hpp"
> # include 

Re: Stop using precompiled headers for Linux?

2018-11-02 Thread Magnus Ihse Bursie

On 2018-11-02 12:14, Magnus Ihse Bursie wrote:
Caveats: I have only run this on my local linux build with the default 
server JVM configuration. Other machines will have different sweet 
spots. Other JVM variants/feature combinations will have different 
sweet spots. And, most importantly, I have not tested this at all on 
Windows. Nevertheless, I'm almost prepared to suggest a patch that 
uses this selection of files if running on gcc, just as is, because of 
the speed improvements I measured. 


I've started running tests on other platforms. Unfortunately, I don't 
have access to quite as powerful machines, so everything takes much 
longer. For the moment, I've only tested my "BKM" (best known method) 
from linux, to see if it works.


For xcode/macos I got:
real    4m21,528s
user    27m28,623s
sys    2m18,244s
hotspot with original pch

real    4m28,867s
user    29m10,685s
sys    2m14,456s
hotspot without pch

real    3m6,322s
user    19m3,000s
sys    1m41,252s
hotspot with new BKM pch

So obviously this is a nice improvement even here. I could probably 
experiment a bit and see if there is an even better fit with a different 
selection of header files, but even without that, I'd say this patch is 
by itself as good for clang as it is for gcc.


For windows I got:
real    6m39.035s
user    0m58.580s
sys 2m48.138s
hotspot with original pch

real    10m29.227s
user    1m6.909s
sys    2m24.108s
hotspot without pch

real    6m56.262s
user    0m57.563s
sys    2m27.514s
hotspot with new BKM pch

I'm not sure what's going on with the user time numbers here. Presumably 
cygwin cannot get to the real Windows time data. What I can see is the 
huge difference in wall clock time between PCH and no PCH. I can also 
see that the new trimmed BKM list retains most of that improvement, but 
is actually somewhat slower than the original list. I'm currently 
rerunning with a larger set on Windows, to see if this helps improve 
things. I can certainly live with a precompiled.hpp that includes some 
additional files on Windows.


/Magnus

Re: Stop using precompiled headers for Linux?

2018-11-02 Thread Magnus Ihse Bursie



On 2018-11-02 11:39, Magnus Ihse Bursie wrote:

On 2018-11-02 00:53, Ioi Lam wrote:
Maybe precompiled.hpp can be periodically (weekly?) updated by a 
robot, which parses the dependencies files generated by gcc, and pick 
the most popular N files?
I think that's tricky to implement automatically. However, I've done 
more or less that, and I've got some wonderful results! :-)


Ok, I'm done running my tests.

TL;DR: I've managed to reduce wall-clock time from 2m 45s (with pch) or 
2m 23s (without pch), to 1m 55s. The cpu time spent went from 52m 27s 
(with pch) or 55m 30s (without pch) to 41m 10s. This is a huge gain for 
our automated builds! And a clear improvement even for the ordinary 
developer.


The list of included header files is reduced to just 37. The winning 
combination was to include all header files that were included in more 
than 130 different files, but to exclude all files with the name 
"*.inline.hpp". Hopefully, a further gain of not pulling in the 
*.inline.hpp files is that the risk of pch/non-pch failures will diminish.


However, these 37 files in turn pull in an additional 201 header files. 
Of these, three are *.inline.hpp:
share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdBits.inline.hpp, 
os_cpu/linux_x86/bytes_linux_x86.inline.hpp and 
os_cpu/linux_x86/copy_linux_x86.inline.hpp. This looks like a problem 
with the header files to me.


With some exceptions (mostly related to JFR), these additional 200 files 
have "generic" looking names (like share/gc/g1/g1_globals.hpp), which 
indicate to me that it is reasonable to have them in this list, just as 
the list of the original 37 tended to be quite general and high-level 
includes. However, some files (like 
share/jfr/instrumentation/jfrEventClassTransformer.hpp) have maybe leaked 
in where they should not really be. It might be worth letting a hotspot 
engineer spend some cycles to check up these files and see if anything 
can be improved.


Caveats: I have only run this on my local linux build with the default 
server JVM configuration. Other machines will have different sweet 
spots. Other JVM variants/feature combinations will have different sweet 
spots. And, most importantly, I have not tested this at all on Windows. 
Nevertheless, I'm almost prepared to suggest a patch that uses this 
selection of files if running on gcc, just as is, because of the speed 
improvements I measured.


And some data:

Here is my log from my runs. The "on or above" figure is the cutoff for 
how many files needed to include a header for it to be selected. 
As you can see, there is not much difference between cutoffs from 130 to 
150, or (without the inline files) between 110 and 150. (There were 
a lot of additional inline files in the positions below 130.) All else 
being equal, I'd prefer a solution with fewer files. That is less likely 
to go bad.
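The counting method described above can be approximated with a small pipeline over gcc's generated dependency output. A sketch with made-up file names (`deps.d`, `x.hpp`, etc. are toy stand-ins; a real run would feed every *.d file in the hotspot build tree through the pipeline, and would also filter out *.inline.hpp names, as the best result here did):

```shell
# Count how often each header appears in gcc .d dependency output;
# headers above a chosen cutoff become precompiled.hpp candidates.
# deps.d is a toy stand-in for the real build tree's *.d files.
printf 'a.o: x.hpp y.hpp\nb.o: x.hpp z.hpp\nc.o: x.hpp y.hpp\n' > deps.d
tr ' ' '\n' < deps.d | grep '\.hpp$' | sort | uniq -c | sort -rn
```

On the sample data this lists x.hpp first with a count of 3, mirroring how the most widely included headers float to the top.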


real    2m45.623s
user    52m27.813s
sys    5m27.176s
hotspot with original pch

real    2m23.837s
user    55m30.448s
sys    3m39.739s
hotspot without pch

real    1m59.533s
user    42m50.019s
sys    3m0.893s
hotspot new pch on or above 250

real    1m58.937s
user    42m18.994s
sys    3m0.245s
hotspot new pch on or above 200

real    2m0.729s
user    42m16.636s
sys    2m57.125s
hotspot new pch on or above 170

real    1m58.064s
user    42m9.618s
sys    2m57.635s
hotspot new pch on or above 150

real    1m58.053s
user    42m9.796s
sys    2m58.732s
hotspot new pch on or above 130

real    2m3.364s
user    42m54.818s
sys    3m2.737s
hotspot new pch on or above 100

real    2m6.698s
user    44m30.434s
sys    3m12.015s
hotspot new pch on or above 70

real    2m0.598s
user    41m17.810s
sys    2m56.258s
hotspot new pch on or above 150 without inline

real    1m55.981s
user    41m10.076s
sys    2m51.983s
hotspot new pch on or above 130 without inline

real    1m56.449s
user    41m10.667s
sys    2m53.808s
hotspot new pch on or above 110 without inline

And here is the "winning" list (which I declared as "on or above 130, 
without inline"). I encourage everyone to try this on their own system, 
and report back the results!


#ifndef DONT_USE_PRECOMPILED_HEADER
# include "classfile/classLoaderData.hpp"
# include "classfile/javaClasses.hpp"
# include "classfile/systemDictionary.hpp"
# include "gc/shared/collectedHeap.hpp"
# include "gc/shared/gcCause.hpp"
# include "logging/log.hpp"
# include "memory/allocation.hpp"
# include "memory/iterator.hpp"
# include "memory/memRegion.hpp"
# include "memory/resourceArea.hpp"
# include "memory/universe.hpp"
# include "oops/instanceKlass.hpp"
# include "oops/klass.hpp"
# include "oops/method.hpp"
# include "oops/objArrayKlass.hpp"
# include "oops/objArrayOop.hpp"
# include "oops/oop.hpp"
# include "oops/oopsHierarchy.hpp"
# include "runtime/atomic.hpp"
# include "runtime/globals.hpp"
# include "runtime/handles.hpp"
# include "runtime/mutex.hpp"
# include "runtime/orderAccess.hpp"
# include "runtime/os.hpp"
# include "runtime/thread.hpp"
# include 

Re: Stop using precompiled headers for Linux?

2018-11-02 Thread Magnus Ihse Bursie

On 2018-11-02 00:53, Ioi Lam wrote:
Maybe precompiled.hpp can be periodically (weekly?) updated by a 
robot, which parses the dependencies files generated by gcc, and pick 
the most popular N files?
I think that's tricky to implement automatically. However, I've done 
more or less that, and I've got some wonderful results! :-)


I'd still like to run some more tests, but preliminary data indicates 
that there is much to be gained by having a more sensible list of files 
in the precompiled header.


The fewer files we have on this list, the less likely it is to become 
(drastically) outdated. So I don't think we need to do this 
automatically, but perhaps manually every now and then when we feel 
build times are increasing.


/Magnus



- Ioi


On 11/1/18 4:38 PM, David Holmes wrote:
It's not at all obvious to me that the way we use PCH is the 
right/best way to use it. We dump every header we think it would be 
good to precompile into precompiled.hpp and then only ask gcc to 
precompile it. That results in a ~250MB file that has to be read into 
and processed for every source file! That doesn't seem very efficient 
to me.


Cheers,
David

On 2/11/2018 3:18 AM, Erik Joelsson wrote:

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem 
to lose just as much real compile time. The big difference in these 
tests was rather the number of cpus in the machine (32 threads in 
the linux box vs 8 on the mac). The total amount of work done was 
increased when PCH was disabled; that's the user time. Here is my 
theory on why the real (wall clock) time was not consistent with 
user time between these experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the 
time saved in 2. The potential time saved in 2 goes down as the 
number of cpus goes up. I'm pretty sure that if I repeated the 
experiment on Linux on a smaller box (typically one we use in CI), 
the results would look similar to Macosx, and similarly, if I had 
access to a much bigger mac, it would behave like the big Linux box. 
This is why I'm saying this should be done for both or none of these 
platforms.
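This argument can be illustrated with invented numbers: the serial PCH step only pays off if it costs less than the per-core share of the user time it saves, so the more cores, the smaller the potential win.

```shell
# Toy model (all numbers invented): the net win of PCH on an N-core
# box is the saved user time divided across cores, minus the serial
# PCH build step.
pch_serial=60      # seconds spent building the PCH on one core
user_saved=1280    # total user-time seconds the PCH saves
for cores in 8 32; do
  echo "$cores cores: net win $(( user_saved / cores - pch_serial ))s"
done
```

With these made-up figures PCH wins by 100s on the 8-core box and loses by 20s on the 32-core box, matching the Mac-vs-Linux pattern described above.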


In addition to this, the experiment only built hotspot. If we would 
instead build the whole JDK, then the time wasted in 1 in the 
PCH case would be negated to a large extent by other build targets 
running concurrently, so for a full build, PCH is still providing 
value.


The question here is that if the value of PCH isn't very big, 
perhaps it's not worth it if it's also creating as much grief as 
described here. There is no doubt that there is value however. And 
given the examination done by Magnus, it seems this value could be 
increased.


The main reason why we haven't disabled PCH in CI before is this: we 
really, really want CI builds to be fast. We don't have a ton of 
spare capacity to just throw at it. PCH made builds faster, so we 
used them. My other reason is consistency between builds. Supporting 
multiple different modes of building creates the potential for 
inconsistencies. For that reason I would definitely not support 
having PCH on by default, but turned off in our CI/dev-submit. We 
pick one or the other as the official build configuration, and we 
stick with the official build configuration for all builds of any 
official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one 
after the other. The jdk-submit currently only runs tier1. In tier2 
I've put slowdebug builds with PCH disabled, just to help verify a 
common developer configuration. These builds are not meant to be 
used for testing or anything like that, they are just run for 
verification, which is why this is ok. We could argue that it would 
make sense to move the linux-x64-slowdebug without pch build to 
tier1 so that it's included in dev-submit.


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac, (Mac Pro (Late 
2013), 3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, the 
-npch is without precompiled headers. Here we see a slight 
degradation when disabling on both user time and wall clock time. 
My guess is that the user time increase is about the same, but 
because of a lower cpu count, the extra load is not as easily 
covered.


These tests were run with just building hotspot. This means that 
the precompiled header is generated alone on one core while 
nothing else is happening, which would explain this degradation in 
build speed. If we were instead building the whole product, we 
would see a better correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did 

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread David Holmes

On 2/11/2018 9:53 AM, Ioi Lam wrote:
Maybe precompiled.hpp can be periodically (weekly?) updated by a robot, 
which parses the dependencies files generated by gcc, and pick the most 
popular N files?


Do we need to use precompiled.hpp at all? Can we not use the list of 
files contained in precompiled.hpp and precompile them individually?


Might be an interesting experiment.

David


- Ioi


On 11/1/18 4:38 PM, David Holmes wrote:
It's not at all obvious to me that the way we use PCH is the 
right/best way to use it. We dump every header we think it would be 
good to precompile into precompiled.hpp and then only ask gcc to 
precompile it. That results in a ~250MB file that has to be read into 
and processed for every source file! That doesn't seem very efficient 
to me.


Cheers,
David

On 2/11/2018 3:18 AM, Erik Joelsson wrote:

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem to 
lose just as much real compile time. The big difference in these 
tests was rather the number of cpus in the machine (32 threads in the 
linux box vs 8 on the mac). The total amount of work done was 
increased when PCH was disabled; that's the user time. Here is my 
theory on why the real (wall clock) time was not consistent with user 
time between these experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the 
time saved in 2. The potential time saved in 2 goes down as the 
number of cpus goes up. I'm pretty sure that if I repeated the 
experiment on Linux on a smaller box (typically one we use in CI), 
the results would look similar to Macosx, and similarly, if I had 
access to a much bigger mac, it would behave like the big Linux box. 
This is why I'm saying this should be done for both or none of these 
platforms.


In addition to this, the experiment only built hotspot. If we 
instead built the whole JDK, then the time wasted in 1 in the 
PCH case would be negated to a large extent by other build targets 
running concurrently, so for a full build, PCH is still providing value.


The question here is: if the value of PCH isn't very big, perhaps 
it's not worth it if it's also creating as much grief as described 
here. There is no doubt that there is value however. And given the 
examination done by Magnus, it seems this value could be increased.


The main reason why we haven't disabled PCH in CI before is this: we 
really want to get CI builds fast, and we don't have a ton of excess 
capacity to just throw at it. PCH made builds faster, so we used 
them. My other reason is consistency between builds. Supporting 
multiple different modes of building creates the potential for 
inconsistencies. For that reason I would definitely not support 
having PCH on by default, but turned off in our CI/dev-submit. We 
pick one or the other as the official build configuration, and we 
stick with the official build configuration for all builds of any 
official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one 
after the other. The jdk-submit currently only runs tier1. In tier2 
I've put slowdebug builds with PCH disabled, just to help verify a 
common developer configuration. These builds are not meant to be used 
for testing or anything like that, they are just run for 
verification, which is why this is ok. We could argue that it would 
make sense to move the linux-x64-slowdebug without pch build to tier1 
so that it's included in dev-submit.


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac (Mac Pro (Late 
2013), 3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, 
-npch means without precompiled headers. Here we see a slight 
degradation in both user time and wall clock time when disabling PCH. 
My guess is that the user time increase is about the same, but 
because of a lower cpu count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that 
the precompiled header is generated alone on one core while nothing 
else is happening, which would explain this degradation in build 
speed. If we were instead building the whole product, we would see 
a better correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did 
with ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That 
the gain is small is no 

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Ioi Lam
Maybe precompiled.hpp can be periodically (weekly?) updated by a robot, 
which parses the dependency files generated by gcc and picks the most 
popular N files?
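A minimal sketch of such a robot, assuming gcc -MMD style .d dependency files (the sample files, paths, and header names below are invented for illustration, not the actual JDK build layout):

```shell
depdir=$(mktemp -d)
# Fabricate two sample .d files standing in for gcc -MMD output.
cat > "$depdir/a.d" <<'EOF'
a.o: a.cpp share/utilities/globalDefinitions.hpp share/oops/oop.hpp
EOF
cat > "$depdir/b.d" <<'EOF'
b.o: b.cpp share/utilities/globalDefinitions.hpp share/runtime/os.hpp
EOF
# Count how often each header shows up across all dependency files
# and print the N most popular ones -- the PCH candidates.
N=2
candidates=$(cat "$depdir"/*.d | tr ' \t' '\n\n' | grep '\.hpp$' \
  | sort | uniq -c | sort -rn | head -n "$N")
echo "$candidates"
rm -rf "$depdir"
```

The real robot would point this at the build's actual .d output directory and regenerate precompiled.hpp from the top-N list.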


- Ioi


On 11/1/18 4:38 PM, David Holmes wrote:
It's not at all obvious to me that the way we use PCH is the 
right/best way to use it. We dump every header we think it would be 
good to precompile into precompiled.hpp and then only ask gcc to 
precompile it. That results in a ~250MB file that has to be read in 
and processed for every source file! That doesn't seem very efficient 
to me.


Cheers,
David

On 2/11/2018 3:18 AM, Erik Joelsson wrote:

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem to 
lose just as much real compile time. The big difference in these 
tests was rather the number of CPUs in the machine (32 threads in the 
Linux box vs 8 on the Mac). The total amount of work done increased 
when PCH was disabled; that's the user time. Here is my theory on why 
the real (wall clock) time was not consistent with user time between 
these experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the 
time saved in 2. The potential time saved in 2 goes down as the 
number of CPUs goes up. I'm pretty sure that if I repeated the 
experiment on Linux on a smaller box (typically one we use in CI), 
the results would look similar to Macosx, and similarly, if I had 
access to a much bigger mac, it would behave like the big Linux box. 
This is why I'm saying this should be done for both or none of these 
platforms.


In addition to this, the experiment only built hotspot. If we 
instead built the whole JDK, then the time wasted in 1 in the 
PCH case would be negated to a large extent by other build targets 
running concurrently, so for a full build, PCH is still providing value.


The question here is: if the value of PCH isn't very big, perhaps 
it's not worth it if it's also creating as much grief as described 
here. There is no doubt that there is value however. And given the 
examination done by Magnus, it seems this value could be increased.


The main reason why we haven't disabled PCH in CI before is this: we 
really want to get CI builds fast, and we don't have a ton of excess 
capacity to just throw at it. PCH made builds faster, so we used 
them. My other reason is consistency between builds. Supporting 
multiple different modes of building creates the potential for 
inconsistencies. For that reason I would definitely not support 
having PCH on by default, but turned off in our CI/dev-submit. We 
pick one or the other as the official build configuration, and we 
stick with the official build configuration for all builds of any 
official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one 
after the other. The jdk-submit currently only runs tier1. In tier2 
I've put slowdebug builds with PCH disabled, just to help verify a 
common developer configuration. These builds are not meant to be used 
for testing or anything like that, they are just run for 
verification, which is why this is ok. We could argue that it would 
make sense to move the linux-x64-slowdebug without pch build to tier1 
so that it's included in dev-submit.


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac (Mac Pro (Late 
2013), 3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, 
-npch means without precompiled headers. Here we see a slight 
degradation in both user time and wall clock time when disabling PCH. 
My guess is that the user time increase is about the same, but 
because of a lower cpu count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that 
the precompiled header is generated alone on one core while nothing 
else is happening, which would explain this degradation in build 
speed. If we were instead building the whole product, we would see 
a better correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did 
with ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That 
the gain is small is no argument to lose it. (I remember a time when 
you were hunting seconds in the build time ;-))


On linux, the story seems different, though. People experience PCH 
as a problem, and there is a net loss of time, at least on 

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread David Holmes
It's not at all obvious to me that the way we use PCH is the right/best 
way to use it. We dump every header we think it would be good to 
precompile into precompiled.hpp and then only ask gcc to precompile it. 
That results in a ~250MB file that has to be read in and processed for 
every source file! That doesn't seem very efficient to me.


Cheers,
David

On 2/11/2018 3:18 AM, Erik Joelsson wrote:

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem to 
lose just as much real compile time. The big difference in these tests 
was rather the number of CPUs in the machine (32 threads in the Linux 
box vs 8 on the Mac). The total amount of work done increased when PCH 
was disabled; that's the user time. Here is my theory on why the 
real (wall clock) time was not consistent with user time between these 
experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the time 
saved in 2. The potential time saved in 2 goes down as the number of 
CPUs goes up. I'm pretty sure that if I repeated the experiment on Linux 
on a smaller box (typically one we use in CI), the results would look 
similar to Macosx, and similarly, if I had access to a much bigger mac, 
it would behave like the big Linux box. This is why I'm saying this 
should be done for both or none of these platforms.


In addition to this, the experiment only built hotspot. If we instead 
built the whole JDK, then the time wasted in 1 in the PCH case 
would be negated to a large extent by other build targets running 
concurrently, so for a full build, PCH is still providing value.


The question here is: if the value of PCH isn't very big, perhaps 
it's not worth it if it's also creating as much grief as described here. 
There is no doubt that there is value however. And given the examination 
done by Magnus, it seems this value could be increased.


The main reason why we haven't disabled PCH in CI before is this: we 
really want to get CI builds fast, and we don't have a ton of excess 
capacity to just throw at it. PCH made builds faster, so we used them. My other 
reason is consistency between builds. Supporting multiple different 
modes of building creates the potential for inconsistencies. For that 
reason I would definitely not support having PCH on by default, but 
turned off in our CI/dev-submit. We pick one or the other as the 
official build configuration, and we stick with the official build 
configuration for all builds of any official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one after 
the other. The jdk-submit currently only runs tier1. In tier2 I've put 
slowdebug builds with PCH disabled, just to help verify a common 
developer configuration. These builds are not meant to be used for 
testing or anything like that, they are just run for verification, which 
is why this is ok. We could argue that it would make sense to move the 
linux-x64-slowdebug without pch build to tier1 so that it's included in 
dev-submit.


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac (Mac Pro (Late 2013), 
3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, -npch means 
without precompiled headers. Here we see a slight degradation in both 
user time and wall clock time when disabling PCH. My guess is that the 
user time increase is about the same, but because of a lower cpu 
count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that the 
precompiled header is generated alone on one core while nothing else 
is happening, which would explain this degradation in build speed. If 
we were instead building the whole product, we would see a better 
correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did with 
ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That the 
gain is small is no argument to lose it. (I remember a time when you 
were hunting seconds in the build time ;-))


On linux, the story seems different, though. People experience PCH as 
a problem, and there is a net loss of time, at least on selected 
testing machines. It makes sense to turn it off as default, then.


/Magnus



/Erik

macosx-x64
real     4m13.658s
user     27m17.595s
sys     2m11.306s

macosx-x64-npch
real     4m27.823s
user     30m0.434s

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie


> On 1 Nov 2018, at 17:49, Erik Joelsson wrote:
> 
>> On 2018-11-01 08:17, Magnus Ihse Bursie wrote:
>>> On 2018-11-01 15:53, Ioi Lam wrote:
>>> Just a stupid question. Does GCC have actual support for PCH? I know 
>>> windows can load pre-compiled information from a special binary file. Does 
>>> GCC support that kind of functionality?
>> Yes.
>> 
>> https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html
> But the Visual Studio compiler seems to be able to gain much more build 
> performance from it. I don't have fresh numbers but I definitely remember a 
> non PCH build on Windows taking more than double the time, if not triple or 
> quadruple.

Could that be due to a lower starting point? I mean, if the windows compilation 
takes more time in the base case, it's easier to improve times with PCH. 

/Magnus

> 
> /Erik
>> /Magnus
>> 
>>> 
>>> Thanks
>>> Ioi
>>> 
 On Nov 1, 2018, at 5:09 AM, Magnus Ihse Bursie 
  wrote:
 
 
 
> On 2018-11-01 12:51, Thomas Stüfe wrote:
> On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
>  wrote:
>> On 2018-11-01 11:54, Aleksey Shipilev wrote:
 On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
 But then again, it might just signal that the list of headers included 
 in the PCH is no longer
 optimal. If it used to be the case that ~100 header files were so 
 interlinked, that changing any of
 them caused recompilation of all files that included it and all the 
 other 100 header files on the
 PCH list, then there was a net gain for using PCH and no "punishment".
 
 But nowadays this list might be far too large. Perhaps there's just 
 only a core set of say 20 header
 files that are universally (or almost universally) included, and 
 that's all that should be in the
 PCH list then. My guess is that, with a proper selection of header 
 files, PCH will still be a benefit.
>>> I agree. This smells like inefficient PCH list. We can improve that, 
>>> but I think that would be a
>>> lower priority, given the abundance of CPU power we use to compile 
>>> Hotspot. In my mind, the decisive
>>> factor for disabling PCH is to keep proper includes at all times, 
>>> without masking it with PCH. Half
>>> of the trivial bugs I submit against hotspot are #include differences 
>>> that show up in CI that builds
>>> without PCH.
>>> 
>>> So this is my ideal world:
>>>a) Efficient PCH list enabled by default for development pleasure;
>>>b) CIs build without PCH all the time (jdk-submit tier1 included!);
>>> 
>>> Since we don't yet have (a), and (b) seems to be tedious, regardless 
>>> how many times both Red Hat and
>>> SAP people ask for it, disabling PCH by default feels like a good 
>>> fallback.
>> Should just CI builds default to non-PCH, or all builds (that is, should
>> "configure" default to non-PCH on linux)? Maybe the former is better --
>> one thing that the test numbers here have not shown is whether incremental
>> recompiles are improved by PCH. My gut feeling is that they really
>> should -- once you've created your PCH, subsequent recompiles will be
>> faster.
> That would only be true as long as you just change cpp files, no? As
> soon as you touch a header which is included in precompiled.hpp you
> are worse off than without pch.
> 
>> So the developer default should perhaps be to keep PCH, and we
>> should only configure the CI builds to do without PCH.
> CI without pch would be better than nothing. But seeing how clunky and
> slow jdk-submit is (and how often there are problems), I'd rather fail
> early in my own build than waiting for jdk-submit to tell me something
> went wrong (well, that is why I usually build nonpch, like Ioi does).
> 
> Just my 5 cent.
 I hear you, loud and clear. :) I've created 
 https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by 
 default, for all builds, on gcc. (I'm interpreting "linux" in this case as 
 "gcc", since this is compiler-dependent, and not OS dependent).
 
 /Magnus
 
> ..Thomas
>> /Magnus
>> 
>> 
>>> -Aleksey
> 



Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Erik Joelsson

Hello,

My point here, which wasn't very clear, is that Mac and Linux seem to 
lose just as much real compile time. The big difference in these tests 
was rather the number of CPUs in the machine (32 threads in the Linux 
box vs 8 on the Mac). The total amount of work done increased when PCH 
was disabled; that's the user time. Here is my theory on why the 
real (wall clock) time was not consistent with user time between these 
experiments:


With pch the time line (simplified) looks like this:

1. Single thread creating PCH
2. All cores compiling C++ files

When disabling pch it's just:

1. All cores compiling C++ files

To gain speed with PCH, the time spent in 1 must be less than the time 
saved in 2. The potential time saved in 2 goes down as the number of 
CPUs goes up. I'm pretty sure that if I repeated the experiment on Linux 
on a smaller box (typically one we use in CI), the results would look 
similar to Macosx, and similarly, if I had access to a much bigger mac, 
it would behave like the big Linux box. This is why I'm saying this 
should be done for both or none of these platforms.
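Put as a back-of-envelope model: PCH wins only while the serial cost of building the precompiled header stays below the parallel-phase savings divided by the core count. A sketch with made-up numbers (all timings below are assumptions for illustration, not measurements):

```shell
# Model: PCH adds a serial step of t_pch seconds (phase 1) and saves
# "saved" seconds of total user time, spread across ncpu cores (phase 2).
t_pch=60     # assumed serial seconds to build the PCH
saved=1200   # assumed user-time seconds saved by PCH during compilation
for ncpu in 8 32; do
  wall_saved=$((saved / ncpu))   # wall-clock seconds saved in phase 2
  if [ "$wall_saved" -gt "$t_pch" ]; then verdict=win; else verdict=loss; fi
  echo "ncpu=$ncpu wall_saved=${wall_saved}s pch=$verdict"
done
```

With these assumed numbers the 8-core box still comes out ahead with PCH while the 32-thread box doesn't, matching the pattern above.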


In addition to this, the experiment only built hotspot. If we instead 
built the whole JDK, then the time wasted in 1 in the PCH case 
would be negated to a large extent by other build targets running 
concurrently, so for a full build, PCH is still providing value.


The question here is: if the value of PCH isn't very big, perhaps 
it's not worth it if it's also creating as much grief as described here. 
There is no doubt that there is value however. And given the examination 
done by Magnus, it seems this value could be increased.


The main reason why we haven't disabled PCH in CI before is this: we 
really want to get CI builds fast, and we don't have a ton of excess 
capacity to just throw at it. PCH made builds faster, so we used them. My other 
reason is consistency between builds. Supporting multiple different 
modes of building creates the potential for inconsistencies. For that 
reason I would definitely not support having PCH on by default, but 
turned off in our CI/dev-submit. We pick one or the other as the 
official build configuration, and we stick with the official build 
configuration for all builds of any official capacity (which includes CI).


In the current CI setup, we have a bunch of tiers that execute one after 
the other. The jdk-submit currently only runs tier1. In tier2 I've put 
slowdebug builds with PCH disabled, just to help verify a common 
developer configuration. These builds are not meant to be used for 
testing or anything like that, they are just run for verification, which 
is why this is ok. We could argue that it would make sense to move the 
linux-x64-slowdebug without pch build to tier1 so that it's included in 
dev-submit.
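For anyone wanting to verify that configuration locally before pushing, something like the following should work (a sketch assuming a current jdk checkout; the hotspot target matches what this verification build needs):

```shell
bash configure --with-debug-level=slowdebug --disable-precompiled-headers
make hotspot
```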


/Erik

On 2018-11-01 03:38, Magnus Ihse Bursie wrote:



On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac (Mac Pro (Late 2013), 
3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, -npch means 
without precompiled headers. Here we see a slight degradation in both 
user time and wall clock time when disabling PCH. My guess is that the 
user time increase is about the same, but because of a lower cpu 
count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that the 
precompiled header is generated alone on one core while nothing else 
is happening, which would explain this degradation in build speed. If 
we were instead building the whole product, we would see a better 
correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did with 
ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you show that it is a loss in time on macosx to disable 
precompiled headers, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on as default there? That the 
gain is small is no argument to lose it. (I remember a time when you 
were hunting seconds in the build time ;-))


On linux, the story seems different, though. People experience PCH as 
a problem, and there is a net loss of time, at least on selected 
testing machines. It makes sense to turn it off as default, then.


/Magnus



/Erik

macosx-x64
real     4m13.658s
user     27m17.595s
sys     2m11.306s

macosx-x64-npch
real     4m27.823s
user     30m0.434s
sys     2m18.669s

macosx-x64-debug
real     5m21.032s
user     35m57.347s
sys     2m20.588s

macosx-x64-debug-npch
real     5m33.728s
user     38m10.311s
sys     2m27.587s

macosx-x64-slowdebug
real     3m54.439s
user     25m32.197s
sys     2m8.750s

macosx-x64-slowdebug-npch
real     4m11.987s
user     27m59.857s
sys     2m18.093s


On 2018-10-30 14:00, Erik Joelsson wrote:

Hello,

On 2018-10-30 

Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Erik Joelsson

On 2018-11-01 08:17, Magnus Ihse Bursie wrote:

On 2018-11-01 15:53, Ioi Lam wrote:
Just a stupid question. Does GCC have actual support for PCH? I know 
windows can load pre-compiled information from a special binary file. 
Does GCC support that kind of functionality?

Yes.

https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html

But the Visual Studio compiler seems to be able to gain much more build 
performance from it. I don't have fresh numbers but I definitely 
remember a non PCH build on Windows taking more than double the time, if 
not triple or quadruple.


/Erik

/Magnus



Thanks
Ioi

On Nov 1, 2018, at 5:09 AM, Magnus Ihse Bursie 
 wrote:





On 2018-11-01 12:51, Thomas Stüfe wrote:
On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
 wrote:

On 2018-11-01 11:54, Aleksey Shipilev wrote:

On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
But then again, it might just signal that the list of headers 
included in the PCH is no longer
optimal. If it used to be the case that ~100 header files were 
so interlinked, that changing any of
them caused recompilation of all files that included it and all 
the other 100 header files on the
PCH list, then there was a net gain for using PCH and no 
"punishment".


But nowadays this list might be far too large. Perhaps there's 
just only a core set of say 20 header
files that are universally (or almost universally) included, and 
that's all that should be in the
PCH list then. My guess is that, with a proper selection of 
header files, PCH will still be a benefit.
I agree. This smells like inefficient PCH list. We can improve 
that, but I think that would be a
lower priority, given the abundance of CPU power we use to 
compile Hotspot. In my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, 
without masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include 
differences that show up in CI that builds

without PCH.

So this is my ideal world:
   a) Efficient PCH list enabled by default for development 
pleasure;
   b) CIs build without PCH all the time (jdk-submit tier1 
included!);


Since we don't yet have (a), and (b) seems to be tedious, 
regardless how many times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good 
fallback.
Should just CI builds default to non-PCH, or all builds (that is, 
should
"configure" default to non-PCH on linux)? Maybe the former is 
better --

one thing that the test numbers here have not shown is whether incremental
recompiles are improved by PCH. My gut feeling is that they really
should -- once you've created your PCH, subsequent recompiles will be
faster.

That would only be true as long as you just change cpp files, no? As
soon as you touch a header which is included in precompiled.hpp you
are worse off than without pch.


So the developer default should perhaps be to keep PCH, and we
should only configure the CI builds to do without PCH.

CI without pch would be better than nothing. But seeing how clunky and
slow jdk-submit is (and how often there are problems), I'd rather fail
early in my own build than waiting for jdk-submit to tell me something
went wrong (well, that is why I usually build nonpch, like Ioi does).

Just my 5 cent.
I hear you, loud and clear. :) I've created 
https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by 
default, for all builds, on gcc. (I'm interpreting "linux" in this 
case as "gcc", since this is compiler-dependent, and not OS dependent).


/Magnus


..Thomas

/Magnus



-Aleksey







Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie

On 2018-11-01 15:53, Ioi Lam wrote:

Just a stupid question. Does GCC have actual support for PCH? I know windows 
can load pre-compiled information from a special binary file. Does GCC support 
that kind of functionality?

Yes.

https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html
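In miniature, the gcc mechanism looks like this (file names invented; the demo skips itself gracefully if g++ is not installed):

```shell
workdir=$(mktemp -d)
cat > "$workdir/common.hpp" <<'EOF'
#include <string>
#include <vector>
EOF
cat > "$workdir/main.cpp" <<'EOF'
#include "common.hpp"
int main() { std::vector<std::string> v; return (int) v.size(); }
EOF
if command -v g++ >/dev/null 2>&1; then
  # Step 1: precompile the header; gcc writes common.hpp.gch.
  g++ -x c++-header "$workdir/common.hpp" -o "$workdir/common.hpp.gch"
  # Step 2: a normal compile automatically picks up the .gch when
  # common.hpp is included, instead of re-parsing the header text.
  g++ -I "$workdir" -c "$workdir/main.cpp" -o "$workdir/main.o"
  [ -f "$workdir/common.hpp.gch" ] && pch_status=built || pch_status=missing
else
  pch_status=skipped
fi
echo "pch_status=$pch_status"
rm -rf "$workdir"
```

Note that gcc will only use one precompiled header per translation unit, which is presumably why the build funnels everything through a single precompiled.hpp.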

/Magnus



Thanks
Ioi


On Nov 1, 2018, at 5:09 AM, Magnus Ihse Bursie  
wrote:




On 2018-11-01 12:51, Thomas Stüfe wrote:
On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
 wrote:

On 2018-11-01 11:54, Aleksey Shipilev wrote:

On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
But then again, it might just signal that the list of headers included in the 
PCH is no longer
optimal. If it used to be the case that ~100 header files were so interlinked, 
that changing any of
them caused recompilation of all files that included it and all the other 100 
header files on the
PCH list, then there was a net gain for using PCH and no "punishment".

But nowadays this list might be far too large. Perhaps there's just only a core 
set of say 20 header
files that are universally (or almost universally) included, and that's all 
that should be in the
PCH list then. My guess is that, with a proper selection of header files, PCH 
will still be a benefit.

I agree. This smells like inefficient PCH list. We can improve that, but I 
think that would be a
lower priority, given the abundance of CPU power we use to compile Hotspot. In 
my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, without 
masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include differences that show 
up in CI that builds
without PCH.

So this is my ideal world:
   a) Efficient PCH list enabled by default for development pleasure;
   b) CIs build without PCH all the time (jdk-submit tier1 included!);

Since we don't yet have (a), and (b) seems to be tedious, regardless how many 
times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good fallback.

Should just CI builds default to non-PCH, or all builds (that is, should
"configure" default to non-PCH on linux)? Maybe the former is better --
one thing that the test numbers here have not shown is whether incremental
recompiles are improved by PCH. My gut feeling is that they really
should -- once you've created your PCH, subsequent recompiles will be
faster.

That would only be true as long as you just change cpp files, no? As
soon as you touch a header which is included in precompiled.hpp you
are worse off than without pch.


So the developer default should perhaps be to keep PCH, and we
should only configure the CI builds to do without PCH.

CI without pch would be better than nothing. But seeing how clunky and
slow jdk-submit is (and how often there are problems), I'd rather fail
early in my own build than waiting for jdk-submit to tell me something
went wrong (well, that is why I usually build nonpch, like Ioi does).

Just my 5 cent.

I hear you, loud and clear. :) I've created https://bugs.openjdk.java.net/browse/JDK-8213241 to 
disable PCH by default, for all builds, on gcc. (I'm interpreting "linux" in this case as 
"gcc", since this is compiler-dependent, and not OS dependent).

/Magnus


..Thomas

/Magnus



-Aleksey





Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Ioi Lam
Just a stupid question. Does GCC have actual support for PCH? I know windows 
can load pre-compiled information from a special binary file. Does GCC support 
that kind of functionality?

Thanks
Ioi

> On Nov 1, 2018, at 5:09 AM, Magnus Ihse Bursie 
>  wrote:
> 
> 
> 
>> On 2018-11-01 12:51, Thomas Stüfe wrote:
>> On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
>>  wrote:
>>> On 2018-11-01 11:54, Aleksey Shipilev wrote:
> On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
> But then again, it might just signal that the list of headers included in 
> the PCH is no longer
> optimal. If it used to be the case that ~100 header files were so 
> interlinked, that changing any of
> them caused recompilation of all files that included it and all the other 
> 100 header files on the
> PCH list, then there was a net gain for using PCH and no "punishment".
> 
> But nowadays this list might be far too large. Perhaps there's just only 
> a core set of say 20 header
> files that are universally (or almost universally) included, and that's 
> all that should be in the
> PCH list then. My guess is that, with a proper selection of header files, 
> PCH will still be a benefit.
 I agree. This smells like inefficient PCH list. We can improve that, but I 
 think that would be a
 lower priority, given the abundance of CPU power we use to compile 
 Hotspot. In my mind, the decisive
 factor for disabling PCH is to keep proper includes at all times, without 
 masking it with PCH. Half
 of the trivial bugs I submit against hotspot are #include differences that 
 show up in CI that builds
 without PCH.
 
 So this is my ideal world:
   a) Efficient PCH list enabled by default for development pleasure;
   b) CIs build without PCH all the time (jdk-submit tier1 included!);
 
 Since we don't yet have (a), and (b) seems to be tedious, regardless how 
 many times both Red Hat and
 SAP people ask for it, disabling PCH by default feels like a good fallback.
>>> Should just CI builds default to non-PCH, or all builds (that is, should
>>> "configure" default to non-PCH on linux)? Maybe the former is better --
>>> one thing that the test numbers here have not shown is whether incremental
>>> recompiles are improved by PCH. My gut feeling is that they really
>>> should -- once you've created your PCH, subsequent recompiles will be
>>> faster.
>> That would only be true as long as you just change cpp files, no? As
>> soon as you touch a header which is included in precompiled.hpp you
>> are worse off than without pch.
>> 
>>> So the developer default should perhaps be to keep PCH, and we
>>> should only configure the CI builds to do without PCH.
>> CI without pch would be better than nothing. But seeing how clunky and
>> slow jdk-submit is (and how often there are problems), I'd rather fail
>> early in my own build than wait for jdk-submit to tell me something
>> went wrong (well, that is why I usually build nonpch, like Ioi does).
>> 
>> Just my 5 cents.
> I hear you, loud and clear. :) I've created 
> https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by default, 
> for all builds, on gcc. (I'm interpreting "linux" in this case as "gcc", 
> since this is compiler-dependent, and not OS dependent).
> 
> /Magnus
> 
>> 
>> ..Thomas
>>> /Magnus
>>> 
>>> 
 -Aleksey
 
> 



Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Martin Buchholz
I vote for disabling precompiled headers by default - they simply make the
build less reliable.

It seemed like precompiled headers did not work when using different
optimization levels for different source files, which in turn was needed
for building with clang, so I've been disabling precompiled headers for
years in my own build script.  Here's a snippet:

# Disable optimization for selected source files.
#
# Needed to have different optimization levels for different files?
addConfigureFlag --disable-precompiled-headers
# We really need NONE; LOW is not low enough!
# Fixed in jdk10: JDK-8186787 clang-4.0 SIGSEGV in Unsafe_PutByte
((major >= 10)) \
  || makeFlags+=(BUILD_LIBJVM_unsafe.cpp_OPTIMIZATION=NONE)
if [[ "${DEBUG_LEVEL}" != "release" ]]; then
  # https://bugs.openjdk.java.net/browse/JDK-8186780
  makeFlags+=(BUILD_LIBJVM_os_linux_x86.cpp_OPTIMIZATION=NONE)
fi


On Thu, Nov 1, 2018 at 5:09 AM, Magnus Ihse Bursie <
magnus.ihse.bur...@oracle.com> wrote:

>
>
> On 2018-11-01 12:51, Thomas Stüfe wrote:
>
>> On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
>>  wrote:
>>
>>> On 2018-11-01 11:54, Aleksey Shipilev wrote:
>>>
 On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:

> But then again, it might just signal that the list of headers included
> in the PCH is no longer
> optimal. If it used to be the case that ~100 header files were so
> interlinked, that changing any of
> them caused recompilation of all files that included it and all the
> other 100 header files on the
> PCH list, then there was a net gain for using PCH and no "punishment".
>
> But nowadays this list might be far too large. Perhaps there's just
> only a core set of say 20 header
> files that are universally (or almost universally) included, and
> that's all that should be in the
> PCH list then. My guess is that, with a proper selection of header
> files, PCH will still be a benefit.
>
 I agree. This smells like inefficient PCH list. We can improve that,
 but I think that would be a
 lower priority, given the abundance of CPU power we use to compile
 Hotspot. In my mind, the decisive
 factor for disabling PCH is to keep proper includes at all times,
 without masking it with PCH. Half
 of the trivial bugs I submit against hotspot are #include differences
 that show up in CI that builds
 without PCH.

 So this is my ideal world:
a) Efficient PCH list enabled by default for development pleasure;
b) CIs build without PCH all the time (jdk-submit tier1 included!);

 Since we don't yet have (a), and (b) seems to be tedious, regardless
 how many times both Red Hat and
 SAP people ask for it, disabling PCH by default feels like a good
 fallback.

>>> Should just CI builds default to non-PCH, or all builds (that is, should
>>> "configure" default to non-PCH on linux)? Maybe the former is better --
>>> one thing that the test numbers here has not shown is if incremental
>>> recompiles are improved by PCH. My gut feeling is that they really
>>> should -- once you've created your PCH, subsequent recompiles will be
>>> faster.
>>>
>> That would only be true as long as you just change cpp files, no? As
>> soon as you touch a header which is included in precompiled.hpp you
>> are worse off than without pch.
>>
>> So the developer default should perhaps be to keep PCH, and we
>>> should only configure the CI builds to do without PCH.
>>>
>> CI without pch would be better than nothing. But seeing how clunky and
>> slow jdk-submit is (and how often there are problems), I'd rather fail
>> early in my own build than wait for jdk-submit to tell me something
>> went wrong (well, that is why I usually build nonpch, like Ioi does).
>>
>> Just my 5 cents.
>>
> I hear you, loud and clear. :) I've created https://bugs.openjdk.java.net/
> browse/JDK-8213241 to disable PCH by default, for all builds, on gcc.
> (I'm interpreting "linux" in this case as "gcc", since this is
> compiler-dependent, and not OS dependent).
>
> /Magnus
>
>
>> ..Thomas
>>
>>> /Magnus
>>>
>>>
>>> -Aleksey


>


Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie




On 2018-11-01 12:51, Thomas Stüfe wrote:

On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
 wrote:

On 2018-11-01 11:54, Aleksey Shipilev wrote:

On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:

But then again, it might just signal that the list of headers included in the 
PCH is no longer
optimal. If it used to be the case that ~100 header files were so interlinked, 
that changing any of
them caused recompilation of all files that included it and all the other 100 
header files on the
PCH list, then there was a net gain for using PCH and no "punishment".

But nowadays this list might be far too large. Perhaps there's just only a core 
set of say 20 header
files that are universally (or almost universally) included, and that's all 
that should be in the
PCH list then. My guess is that, with a proper selection of header files, PCH 
will still be a benefit.

I agree. This smells like inefficient PCH list. We can improve that, but I 
think that would be a
lower priority, given the abundance of CPU power we use to compile Hotspot. In 
my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, without 
masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include differences that show 
up in CI that builds
without PCH.

So this is my ideal world:
   a) Efficient PCH list enabled by default for development pleasure;
   b) CIs build without PCH all the time (jdk-submit tier1 included!);

Since we don't yet have (a), and (b) seems to be tedious, regardless how many 
times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good fallback.

Should just CI builds default to non-PCH, or all builds (that is, should
"configure" default to non-PCH on linux)? Maybe the former is better --
one thing that the test numbers here has not shown is if incremental
recompiles are improved by PCH. My gut feeling is that they really
should -- once you've created your PCH, subsequent recompiles will be
faster.

That would only be true as long as you just change cpp files, no? As
soon as you touch a header which is included in precompiled.hpp you
are worse off than without pch.


So the developer default should perhaps be to keep PCH, and we
should only configure the CI builds to do without PCH.

CI without pch would be better than nothing. But seeing how clunky and
slow jdk-submit is (and how often there are problems), I'd rather fail
early in my own build than wait for jdk-submit to tell me something
went wrong (well, that is why I usually build nonpch, like Ioi does).

Just my 5 cents.
I hear you, loud and clear. :) I've created 
https://bugs.openjdk.java.net/browse/JDK-8213241 to disable PCH by 
default, for all builds, on gcc. (I'm interpreting "linux" in this case 
as "gcc", since this is compiler-dependent, and not OS dependent).


/Magnus



..Thomas

/Magnus



-Aleksey





Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Thomas Stüfe
On Thu, Nov 1, 2018 at 12:05 PM Magnus Ihse Bursie
 wrote:
>
> On 2018-11-01 11:54, Aleksey Shipilev wrote:
> > On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
> >> But then again, it might just signal that the list of headers included in 
> >> the PCH is no longer
> >> optimal. If it used to be the case that ~100 header files were so 
> >> interlinked, that changing any of
> >> them caused recompilation of all files that included it and all the other 
> >> 100 header files on the
> >> PCH list, then there was a net gain for using PCH and no "punishment".
> >>
> >> But nowadays this list might be far too large. Perhaps there's just only a 
> >> core set of say 20 header
> >> files that are universally (or almost universally) included, and that's 
> >> all that should be in the
> >> PCH list then. My guess is that, with a proper selection of header files, 
> >> PCH will still be a benefit.
> > I agree. This smells like inefficient PCH list. We can improve that, but I 
> > think that would be a
> > lower priority, given the abundance of CPU power we use to compile Hotspot. 
> > In my mind, the decisive
> > factor for disabling PCH is to keep proper includes at all times, without 
> > masking it with PCH. Half
> > of the trivial bugs I submit against hotspot are #include differences that 
> > show up in CI that builds
> > without PCH.
> >
> > So this is my ideal world:
> >   a) Efficient PCH list enabled by default for development pleasure;
> >   b) CIs build without PCH all the time (jdk-submit tier1 included!);
> >
> > Since we don't yet have (a), and (b) seems to be tedious, regardless how 
> > many times both Red Hat and
> > SAP people ask for it, disabling PCH by default feels like a good fallback.
>
> Should just CI builds default to non-PCH, or all builds (that is, should
> "configure" default to non-PCH on linux)? Maybe the former is better --
> one thing that the test numbers here has not shown is if incremental
> recompiles are improved by PCH. My gut feeling is that they really
> should -- once you've created your PCH, subsequent recompiles will be
> faster.

That would only be true as long as you just change cpp files, no? As
soon as you touch a header which is included in precompiled.hpp you
are worse off than without pch.

> So the developer default should perhaps be to keep PCH, and we
> should only configure the CI builds to do without PCH.

CI without pch would be better than nothing. But seeing how clunky and
slow jdk-submit is (and how often there are problems), I'd rather fail
early in my own build than wait for jdk-submit to tell me something
went wrong (well, that is why I usually build nonpch, like Ioi does).

Just my 5 cents.

..Thomas
>
> /Magnus
>
>
> >
> > -Aleksey
> >
>
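
The rebuild cliff Thomas describes is easy to reproduce by hand. A sketch, assuming a jdk source tree: globalDefinitions.hpp is one of the headers traditionally pulled into hotspot's precompiled.hpp, and the second path is just an example of a narrowly-included header; check your own tree's precompiled.hpp before drawing conclusions.

```shell
# Touch a header that precompiled.hpp includes: with PCH enabled, the
# .gch is invalidated and effectively every hotspot .cpp recompiles.
touch src/hotspot/share/utilities/globalDefinitions.hpp
time make hotspot

# Touch a narrowly-included header: only its includers recompile,
# which is where a no-PCH build wins on incremental rebuilds.
touch src/hotspot/share/gc/serial/serialHeap.hpp
time make hotspot
```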


Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Roman Kennke



> On 2018-11-01 11:54, Aleksey Shipilev wrote:
>> On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
>>> But then again, it might just signal that the list of headers
>>> included in the PCH is no longer
>>> optimal. If it used to be the case that ~100 header files were so
>>> interlinked, that changing any of
>>> them caused recompilation of all files that included it and all the
>>> other 100 header files on the
>>> PCH list, then there was a net gain for using PCH and no "punishment".
>>>
>>> But nowadays this list might be far too large. Perhaps there's just
>>> only a core set of say 20 header
>>> files that are universally (or almost universally) included, and
>>> that's all that should be in the
>>> PCH list then. My guess is that, with a proper selection of header
>>> files, PCH will still be a benefit.
>> I agree. This smells like inefficient PCH list. We can improve that,
>> but I think that would be a
>> lower priority, given the abundance of CPU power we use to compile
>> Hotspot. In my mind, the decisive
>> factor for disabling PCH is to keep proper includes at all times,
>> without masking it with PCH. Half
>> of the trivial bugs I submit against hotspot are #include differences
>> that show up in CI that builds
>> without PCH.
>>
>> So this is my ideal world:
>>   a) Efficient PCH list enabled by default for development pleasure;
>>   b) CIs build without PCH all the time (jdk-submit tier1 included!);
>>
>> Since we don't yet have (a), and (b) seems to be tedious, regardless
>> how many times both Red Hat and
>> SAP people ask for it, disabling PCH by default feels like a good
>> fallback.
> 
> Should just CI builds default to non-PCH, or all builds (that is, should
> "configure" default to non-PCH on linux)? Maybe the former is better --
> one thing that the test numbers here has not shown is if incremental
> recompiles are improved by PCH. My gut feeling is that they really
> should -- once you've created your PCH, subsequent recompiles will be
> faster. So the developer default should perhaps be to keep PCH, and we
> should only configure the CI builds to do without PCH.

I don't know. I usually disable PCH to avoid getting bad surprises once
my patch goes through CI. I think this should be consistent. I waste
more time building with and without PCH and fixing those bugs than the
few seconds of build time I save (if anything).

Roman



Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie

On 2018-11-01 11:54, Aleksey Shipilev wrote:

On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:

But then again, it might just signal that the list of headers included in the 
PCH is no longer
optimal. If it used to be the case that ~100 header files were so interlinked, 
that changing any of
them caused recompilation of all files that included it and all the other 100 
header files on the
PCH list, then there was a net gain for using PCH and no "punishment".

But nowadays this list might be far too large. Perhaps there's just only a core 
set of say 20 header
files that are universally (or almost universally) included, and that's all 
that should be in the
PCH list then. My guess is that, with a proper selection of header files, PCH 
will still be a benefit.

I agree. This smells like inefficient PCH list. We can improve that, but I 
think that would be a
lower priority, given the abundance of CPU power we use to compile Hotspot. In 
my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, without 
masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include differences that show 
up in CI that builds
without PCH.

So this is my ideal world:
  a) Efficient PCH list enabled by default for development pleasure;
  b) CIs build without PCH all the time (jdk-submit tier1 included!);

Since we don't yet have (a), and (b) seems to be tedious, regardless how many 
times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good fallback.


Should just CI builds default to non-PCH, or all builds (that is, should 
"configure" default to non-PCH on linux)? Maybe the former is better -- 
one thing that the test numbers here has not shown is if incremental 
recompiles are improved by PCH. My gut feeling is that they really 
should -- once you've created your PCH, subsequent recompiles will be 
faster. So the developer default should perhaps be to keep PCH, and we 
should only configure the CI builds to do without PCH.


/Magnus




-Aleksey





Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Aleksey Shipilev
On 11/01/2018 11:43 AM, Magnus Ihse Bursie wrote:
> But then again, it might just signal that the list of headers included in the 
> PCH is no longer
> optimal. If it used to be the case that ~100 header files were so 
> interlinked, that changing any of
> them caused recompilation of all files that included it and all the other 100 
> header files on the
> PCH list, then there was a net gain for using PCH and no "punishment".
> 
> But nowadays this list might be far too large. Perhaps there's just only a 
> core set of say 20 header
> files that are universally (or almost universally) included, and that's all 
> that should be in the
> PCH list then. My guess is that, with a proper selection of header files, PCH 
> will still be a benefit.

I agree. This smells like an inefficient PCH list. We can improve that, but I 
think that would be a
lower priority, given the abundance of CPU power we use to compile Hotspot. In 
my mind, the decisive
factor for disabling PCH is to keep proper includes at all times, without 
masking it with PCH. Half
of the trivial bugs I submit against hotspot are #include differences that show 
up in CI that builds
without PCH.

So this is my ideal world:
 a) Efficient PCH list enabled by default for development pleasure;
 b) CIs build without PCH all the time (jdk-submit tier1 included!);

Since we don't yet have (a), and (b) seems to be tedious, regardless how many 
times both Red Hat and
SAP people ask for it, disabling PCH by default feels like a good fallback.

-Aleksey
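
Aleksey's (a)/(b) split maps directly onto separate configure trees; a minimal sketch, where the conf names are made up but the flags are the actual configure options:

```shell
# (a) Developer tree: PCH on for fast local edit-compile cycles.
bash configure --with-conf-name=dev-pch \
    --with-debug-level=fastdebug \
    --enable-precompiled-headers

# (b) CI tree: PCH off so missing #includes fail the build immediately.
bash configure --with-conf-name=ci-nopch \
    --with-debug-level=fastdebug \
    --disable-precompiled-headers
```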



Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie

On 2018-10-30 21:17, Aleksey Shipilev wrote:

On 10/30/2018 06:26 PM, Ioi Lam wrote:

Is there any advantage of using precompiled headers on Linux?

I have measured it recently on shenandoah repositories, and fastdebug/release 
build times have not
improved with or without PCH. Actually, it gets worse when you touch a single 
header that is in PCH
list, and you end up recompiling the entire Hotspot. I would be in favor of 
disabling it by default.
Not long ago, the hotspot header files were a mess, so you almost always 
ended up recompiling all of hotspot regardless when you changed a 
header file. If this situation has improved, then it certainly might 
have shifted the balance between gains and losses for PCH.


But then again, it might just signal that the list of headers included 
in the PCH is no longer optimal. If it used to be the case that ~100 
header files were so interlinked, that changing any of them caused 
recompilation of all files that included it and all the other 100 header 
files on the PCH list, then there was a net gain for using PCH and no 
"punishment".


But nowadays this list might be far too large. Perhaps there's just only 
a core set of say 20 header files that are universally (or almost 
universally) included, and that's all that should be in the PCH list 
then. My guess is that, with a proper selection of header files, PCH 
will still be a benefit.


/Magnus
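
One rough way to identify that core set is to count which headers the hotspot sources include most often. A sketch, assuming the usual src/hotspot layout; it ranks raw #include frequency, not compile cost, so treat the result only as a starting point:

```shell
# Rank headers by how many hotspot source files include them; the top
# of the list is a candidate core set for a trimmed precompiled.hpp.
grep -rh --include='*.cpp' --include='*.hpp' '^#include' src/hotspot \
  | sort | uniq -c | sort -rn | head -20
```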





It's on by default and we keep having
breakage where someone would forget to add #include. The latest instance is 
JDK-8213148.

Yes, we catch most of these breakages in CIs. Which tells me adding it to 
jdk-submit would cover
most of the breakage during pre-integration testing.

-Aleksey





Re: Stop using precompiled headers for Linux?

2018-11-01 Thread Magnus Ihse Bursie




On 2018-10-31 00:54, Erik Joelsson wrote:
Below are the corresponding numbers from a Mac, (Mac Pro (Late 2013), 
3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, the -npch is 
without precompiled headers. Here we see a slight degradation when 
disabling on both user time and wall clock time. My guess is that the 
user time increase is about the same, but because of a lower cpu 
count, the extra load is not as easily covered.


These tests were run with just building hotspot. This means that the 
precompiled header is generated alone on one core while nothing else 
is happening, which would explain this degradation in build speed. If 
we were instead building the whole product, we would see a better 
correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did with 
ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


Well, if you have shown that disabling precompiled headers costs time on 
macosx, and no-one (as far as I've seen) has complained 
about PCH on mac, then why not keep them on by default there? That the 
gain is small is no argument for losing it. (I remember a time when you 
were hunting seconds in the build time ;-))


On linux, the story seems different, though. People experience PCH as a 
problem, and there is a net loss of time, at least on selected test 
machines. It makes sense to turn it off by default, then.


/Magnus



/Erik

macosx-x64
real     4m13.658s
user     27m17.595s
sys     2m11.306s

macosx-x64-npch
real     4m27.823s
user     30m0.434s
sys     2m18.669s

macosx-x64-debug
real     5m21.032s
user     35m57.347s
sys     2m20.588s

macosx-x64-debug-npch
real     5m33.728s
user     38m10.311s
sys     2m27.587s

macosx-x64-slowdebug
real     3m54.439s
user     25m32.197s
sys     2m8.750s

macosx-x64-slowdebug-npch
real     4m11.987s
user     27m59.857s
sys     2m18.093s


On 2018-10-30 14:00, Erik Joelsson wrote:

Hello,

On 2018-10-30 13:17, Aleksey Shipilev wrote:

On 10/30/2018 06:26 PM, Ioi Lam wrote:

Is there any advantage of using precompiled headers on Linux?
I have measured it recently on shenandoah repositories, and 
fastdebug/release build times have not
improved with or without PCH. Actually, it gets worse when you touch 
a single header that is in PCH
list, and you end up recompiling the entire Hotspot. I would be in 
favor of disabling it by default.
I just did a measurement on my local workstation (2x8 cores x2 ht 
Ubuntu 18.04 using Oracle devkit GCC 7.3.0). I ran "time make 
hotspot" with clean build directories.


linux-x64:
real    4m6.657s
user    61m23.090s
sys    6m24.477s

linux-x64-npch
real    3m41.130s
user    66m11.824s
sys    4m19.224s

linux-x64-debug
real    4m47.117s
user    75m53.740s
sys    8m21.408s

linux-x64-debug-npch
real    4m42.877s
user    84m30.764s
sys    4m54.666s

linux-x64-slowdebug
real    3m54.564s
user    44m2.828s
sys    6m22.785s

linux-x64-slowdebug-npch
real    3m23.092s
user    55m3.142s
sys    4m10.172s

These numbers support your claim. Wall clock time is actually 
increased with PCH enabled, but total user time is decreased. Does 
not seem worth it to me.

It's on by default and we keep having
breakage where someone would forget to add #include. The latest 
instance is JDK-8213148.
Yes, we catch most of these breakages in CIs. Which tells me adding 
it to jdk-submit would cover

most of the breakage during pre-integration testing.
jdk-submit is currently running what we call "tier1". We do have 
builds of Linux slowdebug with precompiled headers disabled in tier2. 
We also build solaris-sparcv9 in tier1 which does not support 
precompiled headers at all, so to not be caught in jdk-submit you 
would have to be in Linux specific code. The example bug does not 
seem to be that. Mach5/jdk-submit was down over the weekend and 
yesterday so my suspicion is the offending code in this case was 
never tested.


That said, given that we get practically no benefit from PCH on 
Linux/GCC, we should probably just turn it off by default for Linux 
and/or GCC. I think we need to investigate Macos as well here.


/Erik

-Aleksey









Re: Stop using precompiled headers for Linux?

2018-10-30 Thread Erik Joelsson
Below are the corresponding numbers from a Mac, (Mac Pro (Late 2013), 
3.7 GHz, Quad-Core Intel Xeon E5, 16 GB). To be clear, the -npch is 
without precompiled headers. Here we see a slight degradation when 
disabling on both user time and wall clock time. My guess is that the 
user time increase is about the same, but because of a lower cpu count, 
the extra load is not as easily covered.


These tests were run with just building hotspot. This means that the 
precompiled header is generated alone on one core while nothing else is 
happening, which would explain this degradation in build speed. If we 
were instead building the whole product, we would see a better 
correlation between user and real time.


Given the very small benefit here, it could make sense to disable 
precompiled headers by default for Linux and Mac, just as we did with 
ccache.


I do know that the benefit is huge on Windows though, so we cannot 
remove the feature completely. Any other comments?


/Erik

macosx-x64
real     4m13.658s
user     27m17.595s
sys     2m11.306s

macosx-x64-npch
real     4m27.823s
user     30m0.434s
sys     2m18.669s

macosx-x64-debug
real     5m21.032s
user     35m57.347s
sys     2m20.588s

macosx-x64-debug-npch
real     5m33.728s
user     38m10.311s
sys     2m27.587s

macosx-x64-slowdebug
real     3m54.439s
user     25m32.197s
sys     2m8.750s

macosx-x64-slowdebug-npch
real     4m11.987s
user     27m59.857s
sys     2m18.093s


On 2018-10-30 14:00, Erik Joelsson wrote:

Hello,

On 2018-10-30 13:17, Aleksey Shipilev wrote:

On 10/30/2018 06:26 PM, Ioi Lam wrote:

Is there any advantage of using precompiled headers on Linux?
I have measured it recently on shenandoah repositories, and 
fastdebug/release build times have not
improved with or without PCH. Actually, it gets worse when you touch 
a single header that is in PCH
list, and you end up recompiling the entire Hotspot. I would be in 
favor of disabling it by default.
I just did a measurement on my local workstation (2x8 cores x2 ht 
Ubuntu 18.04 using Oracle devkit GCC 7.3.0). I ran "time make hotspot" 
with clean build directories.


linux-x64:
real    4m6.657s
user    61m23.090s
sys    6m24.477s

linux-x64-npch
real    3m41.130s
user    66m11.824s
sys    4m19.224s

linux-x64-debug
real    4m47.117s
user    75m53.740s
sys    8m21.408s

linux-x64-debug-npch
real    4m42.877s
user    84m30.764s
sys    4m54.666s

linux-x64-slowdebug
real    3m54.564s
user    44m2.828s
sys    6m22.785s

linux-x64-slowdebug-npch
real    3m23.092s
user    55m3.142s
sys    4m10.172s

These numbers support your claim. Wall clock time is actually 
increased with PCH enabled, but total user time is decreased. Does not 
seem worth it to me.

It's on by default and we keep having
breakage where someone would forget to add #include. The latest 
instance is JDK-8213148.
Yes, we catch most of these breakages in CIs. Which tells me adding 
it to jdk-submit would cover

most of the breakage during pre-integration testing.
jdk-submit is currently running what we call "tier1". We do have 
builds of Linux slowdebug with precompiled headers disabled in tier2. 
We also build solaris-sparcv9 in tier1 which does not support 
precompiled headers at all, so to not be caught in jdk-submit you 
would have to be in Linux specific code. The example bug does not seem 
to be that. Mach5/jdk-submit was down over the weekend and yesterday 
so my suspicion is the offending code in this case was never tested.


That said, given that we get practically no benefit from PCH on 
Linux/GCC, we should probably just turn it off by default for Linux 
and/or GCC. I think we need to investigate Macos as well here.


/Erik

-Aleksey







Re: Stop using precompiled headers for Linux?

2018-10-30 Thread Erik Joelsson

Hello,

On 2018-10-30 13:17, Aleksey Shipilev wrote:

On 10/30/2018 06:26 PM, Ioi Lam wrote:

Is there any advantage of using precompiled headers on Linux?

I have measured it recently on shenandoah repositories, and fastdebug/release 
build times have not
improved with or without PCH. Actually, it gets worse when you touch a single 
header that is in PCH
list, and you end up recompiling the entire Hotspot. I would be in favor of 
disabling it by default.
I just did a measurement on my local workstation (2x8 cores x2 ht Ubuntu 
18.04 using Oracle devkit GCC 7.3.0). I ran "time make hotspot" with 
clean build directories.


linux-x64:
real    4m6.657s
user    61m23.090s
sys    6m24.477s

linux-x64-npch
real    3m41.130s
user    66m11.824s
sys    4m19.224s

linux-x64-debug
real    4m47.117s
user    75m53.740s
sys    8m21.408s

linux-x64-debug-npch
real    4m42.877s
user    84m30.764s
sys    4m54.666s

linux-x64-slowdebug
real    3m54.564s
user    44m2.828s
sys    6m22.785s

linux-x64-slowdebug-npch
real    3m23.092s
user    55m3.142s
sys    4m10.172s

These numbers support your claim. Wall clock time is actually increased 
with PCH enabled, but total user time is decreased. Does not seem worth 
it to me.
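
The comparison method is simple enough to repeat locally; a sketch, with conf names chosen to match the table labels above:

```shell
# Two otherwise identical trees, differing only in the PCH flag.
bash configure --with-conf-name=linux-x64
bash configure --with-conf-name=linux-x64-npch --disable-precompiled-headers

# Time a clean hotspot build in each.
time make CONF=linux-x64 clean hotspot
time make CONF=linux-x64-npch clean hotspot
```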

It's on by default and we keep having
breakage where someone would forget to add #include. The latest instance is 
JDK-8213148.

Yes, we catch most of these breakages in CIs. Which tells me adding it to 
jdk-submit would cover
most of the breakage during pre-integration testing.
jdk-submit is currently running what we call "tier1". We do have builds 
of Linux slowdebug with precompiled headers disabled in tier2. We also 
build solaris-sparcv9 in tier1 which does not support precompiled 
headers at all, so to not be caught in jdk-submit you would have to be 
in Linux specific code. The example bug does not seem to be that. 
Mach5/jdk-submit was down over the weekend and yesterday so my suspicion 
is the offending code in this case was never tested.


That said, given that we get practically no benefit from PCH on 
Linux/GCC, we should probably just turn it off by default for Linux 
and/or GCC. I think we need to investigate Macos as well here.


/Erik

-Aleksey





Re: Stop using precompiled headers for Linux?

2018-10-30 Thread Aleksey Shipilev
On 10/30/2018 06:26 PM, Ioi Lam wrote:
> Is there any advantage of using precompiled headers on Linux? 

I have measured it recently on shenandoah repositories, and fastdebug/release 
build times have not
improved with or without PCH. Actually, it gets worse when you touch a single 
header that is in PCH
list, and you end up recompiling the entire Hotspot. I would be in favor of 
disabling it by default.

> It's on by default and we keep having
> breakage where someone would forget to add #include. The latest instance is 
> JDK-8213148.

Yes, we catch most of these breakages in CIs. Which tells me adding it to 
jdk-submit would cover
most of the breakage during pre-integration testing.

-Aleksey



Re: Stop using precompiled headers for Linux?

2018-10-30 Thread Erik Joelsson
Last I checked, it did provide significant build speed improvements when 
building just hotspot, but that could need revisiting.


We do have verification of --disable-precompiled-headers (in slowdebug) 
in builds-tier2 so we normally get notified if this fails. However, 
Mach5 has not been running since Friday so this particular bug wasn't 
detected automatically. Looking at the bug, it also failed on Solaris, 
which would have been caught by tier1 builds.


/Erik


On 2018-10-30 10:26, Ioi Lam wrote:
Is there any advantage of using precompiled headers on Linux? It's on 
by default and we keep having breakage where someone would forget to 
add #include. The latest instance is JDK-8213148.


I just turn on precompiled headers explicitly in all my builds. I 
don't see any difference in build time (at least not significant 
enough for me to bother).


Should we disable it by default on Linux?

Thanks

- Ioi






Re: Stop using precompiled headers for Linux?

2018-10-30 Thread Thomas Stüfe
It would help already if Oracle would disable precompiled headers for
the submit test builds.

..Thomas
On Tue, Oct 30, 2018 at 6:26 PM Ioi Lam  wrote:
>
> Is there any advantage of using precompiled headers on Linux? It's on by
> default and we keep having breakage where someone would forget to add
> #include. The latest instance is JDK-8213148.
>
> I just turn on precompiled headers explicitly in all my builds. I don't
> see any difference in build time (at least not significant enough for me
> to bother).
>
> Should we disable it by default on Linux?
>
> Thanks
>
> - Ioi
>
>


Re: Stop using precompiled headers for Linux?

2018-10-30 Thread Roman Kennke
I'd be in favour of disabling by default.

Roman

> Is there any advantage of using precompiled headers on Linux? It's on by
> default and we keep having breakage where someone would forget to add
> #include. The latest instance is JDK-8213148.
> 
> I just turn on precompiled headers explicitly in all my builds. I don't
> see any difference in build time (at least not significant enough for me
> to bother).
> 
> Should we disable it by default on Linux?
> 
> Thanks
> 
> - Ioi
> 
> 



Stop using precompiled headers for Linux?

2018-10-30 Thread Ioi Lam
Is there any advantage of using precompiled headers on Linux? It's on by 
default and we keep having breakage where someone would forget to add 
#include. The latest instance is JDK-8213148.


I just turn on precompiled headers explicitly in all my builds. I don't 
see any difference in build time (at least not significant enough for me 
to bother).


Should we disable it by default on Linux?

Thanks

- Ioi
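
For anyone wanting to try building the way Ioi describes, the switch is a single configure flag; a minimal sketch (the conf name is illustrative):

```shell
# Build hotspot without precompiled headers in a dedicated tree.
bash configure --with-conf-name=nopch --disable-precompiled-headers
make CONF=nopch hotspot
```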