clayborg accepted this revision.
clayborg added a comment.
This revision is now accepted and ready to land.

In D78489#1996834 <https://reviews.llvm.org/D78489#1996834>, @labath wrote:

> In D78489#1995837 <https://reviews.llvm.org/D78489#1995837>, @clayborg wrote:
>
> > Sounds like a win then; as long as we don't get slower, I am fine with this.
> > I was guessing that line tables might be faster because they are generally
> > very compressed and compact compared to the debug info, but thanks for
> > verifying.
> >
> > It might be worth quickly checking LLDB's memory consumption with
> > DW_AT_ranges compiled out too, just to make sure we don't take up too much
> > extra memory.
>
>
> I've done a similar benchmark to the last one, but measured memory usage
> ("RssAnon" as reported by Linux). One notable difference is that I
>
> Currently (with the information retrieved from DW_AT_ranges) we use about
> 330 MB of memory. If I switch to DWARF DIEs as the source, the memory goes
> all the way up to 2890 MB. This number is suspiciously large -- it either
> means that our DIE freeing is not working properly, or that glibc is very
> bad at releasing memory back to the OS. Given the magnitude of the increase,
> I think it's a little bit of both. With line tables as the source, the
> memory usage is 706 MB. That's an increase from 330 MB, but definitely
> smaller than 2.8 GB. (The number 330 is somewhat misleading here, since
> we're not considering removing that option -- it will always be used when
> available.)
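
(Side note on the measurement: "RssAnon" is the anonymous-RSS field that Linux
reports in /proc/<pid>/status. In case anyone wants to reproduce the numbers,
here is a minimal standalone sketch of reading it from inside a process -- not
LLDB code, just an illustration:)

  // Standalone sketch (not LLDB code): print this process's anonymous RSS,
  // the "RssAnon" field from /proc/self/status, in kB. Linux only.
  #include <fstream>
  #include <iostream>
  #include <string>

  static long ReadRssAnonKB() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
      // The line looks like: "RssAnon:     123456 kB"
      if (line.rfind("RssAnon:", 0) == 0)
        return std::stol(line.substr(8)); // stol skips the leading spaces
    }
    return 0; // field not present (e.g. on pre-4.5 kernels)
  }

  int main() { std::cout << "RssAnon: " << ReadRssAnonKB() << " kB\n"; }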


Since we mmap the entire DWARF, I am not surprised by the additional memory:
we touch those pages and won't get them back. If you remove the DIE freeing
code, I'd bet you would see much more memory used. We definitely free the
memory for the DIEs and give it back, so I'd be willing to bet the increase
you are seeing comes from the mmap'ed pages that we touch.
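
To illustrate what I mean, a standalone sketch (again, not LLDB code): mapping
a file read-only and touching its pages makes them resident, and they stay
charged to the process until the mapping goes away or the kernel reclaims them.

  // Standalone sketch: map a file read-only and touch every page. Each
  // touched page is faulted in and counted in the process's resident set
  // until the mapping is dropped (or the kernel reclaims it).
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>
  #include <cstdio>

  int main(int argc, char **argv) {
    if (argc < 2)
      return 1;
    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) != 0)
      return 1;
    auto *data = static_cast<unsigned char *>(
        mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    if (data == MAP_FAILED)
      return 1;
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
      sum += data[i]; // this access brings the page into our resident set
    printf("touched %lld bytes (checksum %lu)\n", (long long)st.st_size, sum);
    munmap(data, st.st_size); // only now do the pages stop being charged to us
    close(fd);
    return 0;
  }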

> 
> 
>> We aren't doing anything to unload these line tables like we do with DIEs,
>> are we? It might make sense to parse the line tables and then throw them
>> away if they were not already parsed. With the DIEs we were throwing them
>> away if they weren't already parsed, to keep memory consumption down, so it
>> might be worth throwing the line tables away after running this if we are
>> now going to rely on them.
> 
> That would be possible, but I don't think it's worth the trouble. I think
> the phrase "relying on it" overemphasizes the importance of that code. In
> practice, the only time this path will be taken is when debugging code built
> with clang<=3.3, which is seven years old and did not even fully implement
> C++11. It also seems like the switch to line tables will save memory, at
> least until the DIE freeing bug is fixed. And lastly, the difference I
> reported is pretty much the worst possible case, as the only thing the
> debugger does is parse the line tables and exit. Once the debugger starts
> doing other stuff too, the difference will start to fade (e.g. running to a
> breakpoint in main increases the memory usage to 600 MB even with
> DW_AT_ranges).

Ok, thanks for looking into this.
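
(For reference, the throw-it-away-if-it-wasn't-already-parsed pattern I had in
mind is roughly the sketch below. All the names are invented -- this is not
the real CompileUnit/LineTable API, just an illustration of the idea:)

  // Conceptual sketch only: derive address ranges from a lazily parsed line
  // table, and throw the table away again if nobody had asked for it before.
  #include <optional>
  #include <utility>
  #include <vector>

  struct LineEntry { unsigned long lo, hi; };
  using LineTable = std::vector<LineEntry>;
  using AddressRanges = std::vector<std::pair<unsigned long, unsigned long>>;

  struct CompileUnit {
    std::optional<LineTable> line_table; // parsed on demand
    LineTable &GetOrParseLineTable() {
      if (!line_table) // stand-in for the real (expensive) parse
        line_table = LineTable{{0x1000, 0x1040}, {0x1040, 0x1100}};
      return *line_table;
    }
  };

  AddressRanges ComputeRangesFromLineTable(CompileUnit &cu) {
    bool was_already_parsed = cu.line_table.has_value();
    AddressRanges ranges;
    for (const LineEntry &e : cu.GetOrParseLineTable())
      ranges.emplace_back(e.lo, e.hi);
    if (!was_already_parsed)
      cu.line_table.reset(); // keep memory consumption down, as with the DIEs
    return ranges;
  }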

> 
> 
>> One other thing to verify before we go this route would be to re-enable the
>> old DIE parsing for high/low PCs, but put the results in a separate address
>> ranges class. After we parse the line tables, verify that all ranges we
>> found in the DIEs are also in the line tables. It would not be great if we
>> started missing some functions because a function doesn't have a line
>> table. Not sure if/how this could happen, but you never know.
> 
> I implemented a check like that. While doing it, I learned that the
> DIE-based parsing has not worked since May 2018 (D47275
> <https://reviews.llvm.org/D47275>) and nobody noticed. What that means is
> that in practice we were always going through the line tables (if
> DW_AT_ranges was missing). It also means that the times I reported earlier
> for DIE-based searching were incorrect, as they also included the time to
> build the line tables. (However, that does not change the ordering: even if
> we subtract the time it took to parse the line tables, the DIE method is
> still much slower.)
> 
> After fixing the DIE-based search, I was able to verify that the DIE-based
> ranges are an exact match for the line table ranges (for the particular
> compiler used -- top-of-tree clang).
>
> Given all of the above (DIE-based searching being slow, taking more memory,
> and not actually working), I think it's pretty clear that we should remove
> it.

Fine with me. Thanks for taking the time to ensure we don't regress.
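
(For the record, the cross-check described above boils down to something like
the sketch below. The names are invented and it is not the code from the
patch; it also assumes the line-table ranges have already been coalesced into
sorted, non-overlapping entries:)

  // Illustrative sketch: check that every DW_AT_low_pc/DW_AT_high_pc range
  // gathered from the DIEs is covered by a [lo, hi) line table range.
  #include <algorithm>
  #include <cassert>
  #include <utility>
  #include <vector>

  using Range = std::pair<unsigned long, unsigned long>; // [lo, hi)

  static bool Covers(const std::vector<Range> &line_ranges, const Range &r) {
    // Find the first line range starting beyond r, then check its predecessor.
    auto it = std::upper_bound(
        line_ranges.begin(), line_ranges.end(), r,
        [](const Range &a, const Range &b) { return a.first < b.first; });
    if (it == line_ranges.begin())
      return false;
    --it;
    return it->first <= r.first && r.second <= it->second;
  }

  static void VerifyDieRangesAgainstLineTable(
      const std::vector<Range> &die_ranges, std::vector<Range> line_ranges) {
    std::sort(line_ranges.begin(), line_ranges.end());
    for (const Range &r : die_ranges)
      assert(Covers(line_ranges, r) && "DIE range missing from line table");
  }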


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D78489/new/

https://reviews.llvm.org/D78489


