Roland Mainz wrote:
> Danek Duvall wrote:
>
>> Sriram Natarajan wrote:
>>
>>> Now, assuming this can be done, does anyone have any objections to
>>> the concept of delivering multiple versions (regular and CPU-optimized)
>>> of some critical libraries that are under the sfw consolidation?
>>>
>> I certainly don't, though generally greater optimization means a longer
>> build time, which is eventually going to get painful.
>>
>
> Why? If the application runs a lot faster with higher optimisation,
> more customers are "happy"/"satisfied" (OkOk... you have to do more
> testing to verify that the compiler didn't "over-optimize" the
> application...) ... :-)
> For example (as a "cheap" optimisation trick) there is the "-xipo"
> option, which offers _significant_ performance benefits (e.g. "-xO4
> -xipo=2") at the expense of a much longer build time (in short: the XIPO
> (=Interprocedural Optimizer) switch causes the compiler to do inlining
> and optimisation at the final link step across _all_ files of an
> application).
> I wish this switch were used for most of the applications shipped
> with Solaris (well, it does not work with OS/Net because "ctfmerge" is
> unable to handle it (like most other optimisation switches in the
> compiler... ;-(((( )) ...
>

I'm not a fan of ctfmerge at all and would love to see it replaced with
DWARF3 + compression support. That's probably not a high-priority item
for kmdb, mdb, dbx, or the Sun compilers, though.

I'm also not sure why IPO would have to be turned off for ctfmerge to be
run, since AFAIK it works on the full executable after the final compile
phase. Doesn't ctfmerge just convert the headers and index/compress the
type data in the existing executable? I'm too lazy to look at what the
debugging symbols in a binary look like with and without IPO on... maybe
one of the Sun compiler guys can enlighten us.
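To make the -xipo point above concrete, here is a minimal sketch (file and
function names are made up for illustration) of the kind of cross-file call
that ordinary per-file compilation leaves as a real call, but that the
interprocedural optimizer can inline at the final link step:

    /* scale.c -- hypothetical helper in its own translation unit.
     * Compiled file-by-file, the call to scale() in main.c stays a
     * real function call; with -xipo the compiler can inline it when
     * everything is linked together. */
    double
    scale(double x)
    {
            return (x * 1.059463);
    }

    /* main.c -- hot loop calling across the file boundary. */
    double scale(double x);

    int
    main(void)
    {
            double sum = 0.0;
            int i;

            for (i = 0; i < 100000000; i++)
                    sum += scale((double)i);
            return ((sum > 0.0) ? 0 : 1);
    }

Built in one invocation, e.g. "cc -xO4 -xipo=2 -o demo scale.c main.c", the
inlining and optimisation happen at the final link step across both files;
whether the CTF tooling still copes with objects produced that way is exactly
the open question above.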
TIA ./C