mzolotukhin added a comment.


I've been monitoring compile time for quite a while, so let me put in my two
cents here too.

> Who is the audience for this information?
>  What information do they want from a time report?
>  How do we present that information in a way that's not misleading (given 
> Clang's architecture)?

I would find the timers extremely useful. Even if they overlap, they would
still 1) be a good indicator of a newly introduced problem and 2) give a rough
idea of where frontend time is spent. I agree that they wouldn't be
super-accurate, but the numbers we usually operate on are quite large (e.g.
<some part> started to take 1.5x the time). When we decide to investigate
deeper, we can use more accurate tools, such as profilers. All that said, if we
can make the timers more accurate/non-overlapping, that would surely be helpful
as well.

> Can we deliver useful value compared to a modern, dedicated profiling tool?

Having timers, especially ones producing results in the same format as the
existing LLVM timers, would be much more convenient than using profilers. With
timers I can simply add a compile-time flag to my test-suite run and get all
the numbers at the end of my usual run. With profilers it's a bit more
complicated.
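For concreteness, here is the kind of workflow I mean, using the flags that already exist today (the file names below are hypothetical; whatever the new frontend timers end up being called, the usage would look the same):

```shell
# Sketch: collect timer output for each file compiled during a test-suite run.
# -ftime-report already dumps Clang/LLVM timer groups to stderr:
clang -c -ftime-report foo.c 2> foo.time.txt

# -mllvm -time-passes prints the per-pass LLVM timers specifically:
clang -c -mllvm -time-passes foo.c 2> foo.passes.txt
```

Diffing the resulting text reports between two compiler revisions is enough to spot a regression, even if the individual numbers overlap.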

Speaking of timers overlapping and calling one another: LLVM timers also have
this problem. For example, if there is a problem in some analysis (say,
ScalarEvolution), we see indications in several other passes that use this
analysis (say, IndVarSimplify and LoopStrengthReduction). While that does not
immediately point to the problematic spot, it gives you pretty strong hints
about where to look first. So, having even imperfect timers is still useful.

Also, I apologize for LGTMing the patch earlier while it was not properly
reviewed - I didn't notice that it didn't have cfe-commits among the
subscribers, and it had been waiting for review for quite some time (so I
assumed that all interested parties had had their chance to look at it).


cfe-commits mailing list
