IME... that shows you how old I am; I'm not up to date on all these. ;^)

I have heard that the gcc4 PCH works pretty well, but officially we
are still using gcc3. The hotspot Makefiles might have done a little
work toward using gcc4 PCH, and their source may actually be ready for
PCH use with any of the compilers, thanks to the Windows PCH work.

In the jdk Makefiles (not hotspot), I added a COMPILE_APPROACH variable
that can be "normal", "parallel", or "batch".
The "parallel" approach used the GNU make -j option, but in a very
limited way (Makefiles have to have pretty much perfect dependency
specifications for -j to work from the top down), so effectively
"parallel" does a 'make -j *.o' for each library being built.
The "batch" approach just did a 'gcc -c *.c' style compile, provided
the compile lines matched for all the files.
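
For illustration only, here is a rough sketch of how a Makefile can
branch on such a variable. Everything except the COMPILE_APPROACH name
(the target, the SOURCES/OBJECTS/CFLAGS variables, the -j count) is
made up, and the real jdk Makefiles are organized differently:

COMPILE_APPROACH ?= normal        # normal | parallel | batch

# Build one library's objects according to the selected approach.
# The command lines under each branch must begin with a tab, as usual.
library-objs: $(SOURCES)
ifeq ($(COMPILE_APPROACH),parallel)
	$(MAKE) -j 4 $(OBJECTS)          # sub-make runs per-file compiles concurrently
else
ifeq ($(COMPILE_APPROACH),batch)
	$(CC) -c $(CFLAGS) $(SOURCES)    # one compiler process handed all the sources
else
	$(MAKE) $(OBJECTS)               # normal: one compile per source file, serially
endif
endif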

I had hoped that the compilers might be smart enough to see the "batch"
compile as an opportunity to automatically do PCH across all these files
(given the appropriate "automatic" PCH compiler options).
It turns out most of the "automatic" PCH options just don't work, or
I couldn't get them to work. I came to the conclusion that to get
benefits from PCH, your source files need to be normalized with regard
to their use of #include files, which is a bigger effort for the
jdk than I was willing to take on.
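
To make "normalized" concrete: with a scheme like gcc 4's PCH, every
source file has to start with the same first #include, compiled with
matching options, before the compiler will reuse a precompiled header.
A minimal sketch, with invented file names, not anything the jdk build
actually does:

# every .c file begins with the same first line:
#     #include "precompiled.h"

gcc -O2 -x c-header precompiled.h   # writes precompiled.h.gch next to the header
gcc -O2 -c foo.c                    # gcc finds precompiled.h.gch and reuses it
gcc -O2 -c bar.c                    # instead of re-parsing the headers it contains

The .gch is only picked up when that include really is first and the
compile options match, which is exactly the normalization the jdk
sources lack.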

Not surprisingly, on Solaris/Linux the "batch" mode without PCH was
the same speed as "normal", since those compilers just loop over their
source files. But Windows does show a slight benefit from "batch"
compiles over "normal", and little benefit from "parallel", which
surprised me.
Again, maybe this just comes down to the cost of a fork/exec?
The 'make -j' may just not be as beneficial when the process startup
cost is higher?
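
Just to illustrate the process-count difference behind that guess
(invented file names again):

# "normal": one compiler process per source file
gcc -c a.c
gcc -c b.c
gcc -c c.c

# "batch": a single compiler process handles them all
gcc -c a.c b.c c.c

Where each process launch is cheap the two are a wash; where launch is
expensive, the single process should win a little.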

-kto

Andrew Haley wrote:
Kelly O'Hair wrote:

Andrew Haley wrote:
Kelly O'Hair wrote:
Excellent points, Steve. And a good summary.

The hotspot nmake Makefiles are tied to the MS Visual Studio (VC)
compilers, and I'm pretty sure nmake.exe is now only delivered with the
VC product. It's not clear how you can match this build performance.
However, it's also not clear how much of this benefit comes from
Hotspot's use of VC pre-compiled headers (PCH). Windows builds with
nmake/VC/PCH take a few minutes, versus 20-35 minute builds on
equivalent Linux/Solaris systems. So it's significant and something
(the performance) that we want to keep. Whether a GNU Makefile using
VC/PCH can match nmake is an open question.
This is interesting.  I guess the Linux build isn't using PCH either?
The gcc compilers and Sun Studio compilers we have used in the past
either didn't have PCH capability or didn't have a stable enough
implementation.
It's been my experience that each PCH implementation is unique in some
way, with varied implementation techniques and varied performance
benefits.
The gcc we have used (version 3 based) did not have a good PCH solution
(gcc 4 supposedly has one now?),
and the Sun Studio Compilers just recently got a stable PCH system.

I haven't used it, but it's supposed to be pretty good.  Apple use it
all the time.

On Windows, none of the [ parallel make ] options are available, or
they show little benefit. So far PCH has been the best answer, which
the Hotspot team has adopted, but the rest of the jdk's native sources
aren't quite as normalized as the Hotspot sources.

Maybe the Windows issue is the higher cost of process startup/warmup?
Fewer processes with more work to do is a better Windows situation?
I'm guessing of course...

I have tried using parallel GNU make and batch compiles in the jdk
builds, and seen benefits on Linux and Solaris, but not much with
Windows.
Ah, that is important: IME builds scale almost linearly with the number
of processors.
What is IME?

In My Experience.  :-)

Andrew.
