On Tue, Sep 30, 2014 at 9:28 PM, Nathan Sidwell <nat...@acm.org> wrote:
> On 09/22/14 08:04, Teresa Johnson wrote:
>
>
>> The approach we now take for LIPO builds is to propagate the counts
>
>
> LIPO?

Sorry, LIPO is Lightweight IPO (https://gcc.gnu.org/wiki/LightweightIpo),
implemented and used on the google branches. We don't use LTO for
scalability reasons. With LIPO, the decisions about how to group
modules for cross-module optimization are made at the end of the
profiling run, which is where I have implemented these COMDAT fixups.

>
>> for the profiled copy of the COMDAT to other modules. (Additionally
>> the indirect call profiling we perform in LIPO mode would point to a
>> module that we didn't have access to, which is a related issue that
>> the COMDAT fixups we perform at the end of the LIPO profiling run are
>> trying to solve.)
>
>
> I'm presuming then that all modules of relevance are compiled with coverage
> information?  Propagating counters from one module to another still seems
> wrong.  It'll make it look like the function's executed more times than it
> really is (per copy).

That's already the case for the copy selected by the linker, which
accumulates the counts from all out-of-line copies of the COMDAT. This
just gives other modules access to the same profile data for their
copy of the COMDAT when they are compiled. It is true that if you
summed the counts across all copies of the COMDAT, the function would
look hotter than it really is. But in practice we essentially see only
one copy when compiling each module.
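
To make the shape of the fixup concrete, here is roughly the idea in
C. This is just a sketch, not the actual libgcov code, and the struct
and function names (comdat_copy, propagate_comdat_counts) are made up
for illustration:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct comdat_copy
{
  int64_t *counters;   /* arc counters for this module's copy */
  size_t n_counters;
};

/* Return nonzero if COPY never accumulated any counts, i.e. the
   linker discarded this module's out-of-line copy.  */
int
copy_is_all_zero (const struct comdat_copy *copy)
{
  for (size_t i = 0; i < copy->n_counters; i++)
    if (copy->counters[i] != 0)
      return 0;
  return 1;
}

/* Give DST the profile collected by the linker-selected copy (SRC),
   but only if DST has no counts of its own.  */
void
propagate_comdat_counts (struct comdat_copy *dst,
                         const struct comdat_copy *src)
{
  if (dst->n_counters == src->n_counters && copy_is_all_zero (dst))
    for (size_t i = 0; i < dst->n_counters; i++)
      dst->counters[i] = src->counters[i];
}

int
main (void)
{
  int64_t kept[] = { 100, 40, 60 };   /* copy the linker kept */
  int64_t discarded[] = { 0, 0, 0 };  /* another module's copy */
  struct comdat_copy src = { kept, 3 };
  struct comdat_copy dst = { discarded, 3 };

  propagate_comdat_counts (&dst, &src);
  printf ("%lld %lld %lld\n", (long long) discarded[0],
          (long long) discarded[1], (long long) discarded[2]);
  return 0;
}

The point is only that the propagation is a one-way copy into copies
that never collected anything, not a merge, so within any one module
nothing is counted twice.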


> Plus in the general case one can't rely on the copies
> being identical implementations -- COMDAT only requires they be functionally
> equivalent.

We only do the copying when the CFG and line-number checksums match,
so the copies are almost certainly identical.
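
Roughly, the guard looks like this (again only a sketch; the struct
and field names are hypothetical, not the real gcov record layout):

#include <stdint.h>

/* Hypothetical per-function summary for illustration only.  */
struct fn_summary
{
  uint32_t cfg_checksum;     /* checksum over the control-flow graph */
  uint32_t lineno_checksum;  /* checksum over the source line numbers */
};

/* Only propagate counts between two copies of a COMDAT when both
   checksums match, i.e. the copies almost certainly came from
   identical source and have identical CFGs.  */
int
copies_match_p (const struct fn_summary *a, const struct fn_summary *b)
{
  return a->cfg_checksum == b->cfg_checksum
         && a->lineno_checksum == b->lineno_checksum;
}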

>
>
>> Correct in that it makes it look like these copies were executed. This
>> was causing some issues when we rewrote/merged profiles with
>> gcov-tool, which essentially operates in whole-program mode. To handle
>> this, this patch marks the modified (previously all-zero) copies in
>> the gcda file. So now gcov-tool can handle them appropriately (clear
>> them on read before doing any analysis), and gcov-dump will flag them.
>
>
> Um, I don't understand why you're doing this then?  If gcov is ignoring the
> copies, what is the purpose?

The new gcov-tool ignores the copies when it reads in the profiles and
redoes the LIPO call graph analysis (on the google branch). The
compiler, however, does not ignore the copied counts, so it sees
non-zero counts for these COMDAT routines and can optimize them more
effectively.
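
In rough terms, the clearing step on the gcov-tool side amounts to the
following. Again this is only a sketch; the flag and the record layout
are made up for illustration and are not the actual gcda format:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical marker written by the fixup for a propagated
   (previously all-zero) copy.  */
#define COUNTS_COPIED 0x1

/* Hypothetical in-memory form of one function's counters after the
   gcda file has been read.  */
struct fn_counts
{
  unsigned flags;      /* COUNTS_COPIED if the counters were copied in
                          from another module's copy */
  int64_t *counters;
  size_t n_counters;
};

/* Before any whole-program analysis, drop the propagated counts so
   each COMDAT's profile is counted only once across all modules.  */
void
clear_copied_counts (struct fn_counts *fns, size_t n_fns)
{
  for (size_t i = 0; i < n_fns; i++)
    if (fns[i].flags & COUNTS_COPIED)
      for (size_t j = 0; j < fns[i].n_counters; j++)
        fns[i].counters[j] = 0;
}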

In any case, David pointed out that with your approach we wouldn't
need to do the copying that I have implemented here for LIPO.
Teresa

>
> nathan



-- 
Teresa Johnson | Software Engineer | tejohn...@google.com | 408-460-2413
