labath added a subscriber: probinson.
labath added a comment.

In https://reviews.llvm.org/D51208#1212950, @dblaikie wrote:

> In https://reviews.llvm.org/D51208#1212320, @labath wrote:
>
> > As far as the strict intention of this test goes, the change is fine, as it
> > is meant to check that the accelerator tables get used *when* they are
> > generated. How they end up being generated is not important.
> >
> > However, I am now not sure under what circumstances the accelerator
> > tables will be emitted in the first place. David, does this mean that we
> > will now not emit .debug_names even if one specifies `-glldb`? Was that
> > intentional?
>
>
> Not especially intentional - but I clearly didn't give it quite enough
> thought. Mostly I was modelling the behavior of GCC: no pubnames by default,
> but you can opt in to them (& split-dwarf opts in by default; the one thing
> GCC didn't allow was turning them off again, which is what I wanted).
>
> As for the default behavior for DWARFv5 - have you run much in the way of 
> performance numbers on how much debug_names speeds things up? From what I 
> could see with a gdb-index (sort of like debug_names - but linker generated, 
> so it's a single table for the whole program) it didn't seem to take GDB long 
> to parse/build up its own index compared to using the one in the file. So it 
> seemed to me like the extra link time, object size, etc, wasn't worth it in 
> the normal case. The really bad case for me is split-DWARF (worse with a 
> distributed filesystem) - where the debugger has to read all the .dwo files & 
> might have a high latency filesystem for each file it reads... super slow. 
> But if the debug info was either in the executable (not split) or in a DWP 
> (split then packaged), it seemed pretty brief.
>
> But if LLDB has different performance characteristics, or the default should 
> be different for other reasons - I'm fine with that. I think I left it on for 
> Apple so as not to mess with their stuff because of the MachO/dsym sort of 
> thing that's a bit different from the environments I'm looking at.


These are the numbers from my llvm-dev email in June:

> setting a breakpoint on a non-existing function without the use of
>  accelerator tables:
>  real    0m5.554s
>  user    0m43.764s
>  sys     0m6.748s
> 
> setting a breakpoint on a non-existing function with accelerator tables:
>  real    0m3.517s
>  user    0m3.136s
>  sys     0m0.376s

This is an extreme case, because practically the only thing we are doing is
building the symbol index, but it is a nice demonstration of the amount of work
that lldb has to do without the tables. In practice the ratio will usually not
be this large, because we will typically find some matches and then have to do
extra work on top, which adds a roughly constant overhead to both sides; in
absolute terms, though, that also means the no-accel case takes even longer. I
am not sure how this compares to the gdb numbers, but I think the difference
here is significant.
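
(For reference, a measurement of this sort can be reproduced with something
along these lines -- the binary and the function name here are placeholders,
not the exact invocation from the email:

  # time how long lldb takes to resolve a breakpoint on a name that does not exist
  time lldb -b -o "breakpoint set --name definitely_not_a_real_function" ./some-large-binary

running it once against a build with accelerator tables and once against a
build without them gives the two cases being compared.)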

Also, I am pretty sure the Apple folks, who afaik are in the process of
switching to debug_names, will want to have them on by default for their
targets (ping @aprantl, @JDevlieghere). I think the cleanest way to achieve
that (and the one that best reflects reality) would be to have `-glldb` imply
`-gpubnames`. Whether we should also emit debug_names for DWARF 5 by default
(-gdwarf-5 => -gpubnames) is a trickier question, and I don't have a clear
opinion on it (@probinson might, though).
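
Concretely, the proposal would mean something like the following (illustrative
invocations only; foo.c/foo.o are placeholders, and which section actually gets
emitted also depends on the DWARF version and target):

  clang -c -g -gdwarf-5 -glldb foo.c                # would behave as if -gpubnames was passed, i.e. emit .debug_names
  clang -c -g -gdwarf-5 -glldb -gno-pubnames foo.c  # explicitly opt back out again
  llvm-dwarfdump --debug-names foo.o                # inspect what was emitted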

(In either case, I agree that there are circumstances in which emitting
debug_names is not beneficial, so having a flag to control it is a good idea.)


Repository:
  rLLDB LLDB

https://reviews.llvm.org/D51208


