Re: [lldb-dev] [RFC] Fast Conditional Breakpoints (FCB)

2019-08-20 Thread Tamas Berghammer via lldb-dev
It is great that you are looking at supporting these fast breakpoints,
but I am concerned about the instruction-moving code, along the same
lines Pavel mentioned. Copying instructions from one location to another
is fairly complicated even without considering the issue of jump
targets, and jump target detection makes it even harder.

For reference, I implemented a similar system that shifts code, but only
for prologue instructions, using LLVM. You might find it useful if you
decide to go down this path:
https://github.com/google/gapid/tree/master/gapii/interceptor-lib/cc
(Apache v2 license).

That system doesn't try to detect jump targets and only handles a
small subset of the instructions, but I think it shows the general
complexity. On X86_64 I think the number of instructions that need
rewriting is relatively small, as most of them aren't PC-relative, but
on ARM, for example, where (almost) any instruction can take PC as a
register, it will be a monumental task that is very hard to test. (I
would expect AArch64 to be somewhere between X86_64 and ARM in terms
of complexity: it has PC-relative instructions but no general-purpose
PC register.)
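To make the rewriting concrete: even in the comparatively easy X86_64 case, every RIP-relative instruction copied into a trampoline needs its displacement recomputed. A minimal sketch of the arithmetic, with made-up addresses (an illustration, not GAPID or LLDB code):

```python
def relocate_rip_relative(insn_addr, insn_len, disp, new_addr):
    """Recompute the rel32 displacement of a RIP-relative instruction
    after copying it from insn_addr to new_addr."""
    # On x86_64, RIP-relative addressing is computed from the address
    # of the *next* instruction, i.e. insn_addr + insn_len.
    target = insn_addr + insn_len + disp
    new_disp = target - (new_addr + insn_len)
    if not (-2**31 <= new_disp < 2**31):
        raise ValueError("target no longer reachable with a rel32 displacement")
    return new_disp

# e.g. a 7-byte `lea rax, [rip + 0x20]` at 0x1000 targets 0x1027;
# copied to a trampoline at 0x5000 it needs 0x1027 - 0x5007 = -0x3fe0
print(hex(relocate_rip_relative(0x1000, 7, 0x20, 0x5000)))  # -0x3fe0
```

The range check is the important part: a trampoline placed far from the original code can push the target outside rel32 reach, which is one of the failure modes such a system has to handle.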

In my view this discussion leads to the question of how we trade
performance for accuracy/reliability. We can easily gain a lot of
performance by being a bit sloppy and assuming that we can safely insert
trampolines into the middle of a function, but I would want my
debugger to "never lie" and never crash my program.

Tamas

On Tue, Aug 20, 2019 at 8:46 AM Pavel Labath via lldb-dev
 wrote:
>
> On 20/08/2019 00:11, Ismail Bennani wrote:
> >> On Aug 19, 2019, at 2:30 PM, Frédéric Riss  wrote:
> >>
> >>
> >>
> >>> On Aug 16, 2019, at 11:13 AM, Ismail Bennani via lldb-dev 
> >>>  wrote:
> >>>
> >>> Hi Pavel,
> >>>
> >>> Thanks for all your feedback.
> >>>
> >>> I’ve been following the discussion closely and find your approach quite 
> >>> interesting.
> >>>
> >>> As Jim explained, I’m also trying to have a conditional breakpoint, that 
> >>> is able to stop a specific thread (name or id) when the condition 
> >>> expression evaluates to true.
> >>>
> >>> I feel like stacking up options with your approach would imply doing more 
> >>> context switches.
> >>> But it’s definitely a better fallback mechanism than the current one. 
> >>> I’ll try to make a prototype to see the performance difference for both 
> >>> approaches.
> >>>
> >>>
>  On Aug 15, 2019, at 10:10 AM, Pavel Labath  wrote:
> 
>  Hello Ismail, and welcome to LLDB. You have a very interesting (and not 
>  entirely trivial) project, and I wish you the best of luck in your work. 
>  I think this will be a very useful addition to lldb.
> 
>  It sounds like you have researched the problem very well, and the 
>  overall direction looks good to me. However, I do have some ideas and 
>  suggestions about possible tweaks/improvements that I would like to hear 
>  your thoughts on. Please find my comments inline.
> 
>  On 14/08/2019 22:52, Ismail Bennani via lldb-dev wrote:
> > Hi everyone,
> > I’m Ismail, a compiler engineer intern at Apple. As a part of my 
> > internship,
> > I'm adding Fast Conditional Breakpoints to LLDB, using code patching.
> > Currently, the expressions that power conditional breakpoints are 
> > lowered
> > to LLVM IR and LLDB knows how to interpret a subset of it. If that 
> > fails,
> > the debugger JIT-compiles the expression (compiled once, and re-run on 
> > each
> > breakpoint hit). In both cases LLDB must collect all program state used 
> > in
> > the condition and pass it to the expression.
> > The goal of my internship project is to make conditional breakpoints 
> > faster by:
> > 1. Compiling the expression ahead-of-time, when setting the breakpoint,
> >    and injecting it into the inferior memory only once.
> > 2. Re-routing the inferior execution flow to run the expression and
> >    checking whether it needs to stop, in-process.
> > This saves the cost of having to do the context switch between debugger 
> > and
> > the inferior program (about 10 times) to compile and evaluate the 
> > condition.
> > This feature is described on the [LLDB Project 
> > page](https://lldb.llvm.org/status/projects.html#use-the-jit-to-speed-up-conditional-breakpoint-evaluation).
> > The goal would be to have it working for most languages and 
> > architectures
> > supported by LLDB, however my original implementation will be for 
> > C-based
> > languages targeting x86_64. It will be extended to AArch64 afterwards.
> > Note the way my prototype is implemented makes it fully extensible for 
> > other
> > languages and architectures.
> > ## High Level Design
> > Every time a breakpoint that holds a condition is hit, multiple context
> > switches are needed in order to compile and evaluate the condition.
> > Firs
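The core saving described above — evaluating the condition in-process instead of round-tripping to the debugger on every hit — can be sketched with a toy model. The "context switch" counts are illustrative, not LLDB measurements:

```python
# Toy comparison of the two evaluation strategies for a conditional
# breakpoint. Each debugger<->inferior round trip counts as one switch.

def run_traditional(hits, condition):
    switches, stops = 0, 0
    for value in hits:
        switches += 1            # trap into the debugger on *every* hit
        if condition(value):     # condition evaluated on the debugger side
            stops += 1
    return switches, stops

def run_fcb(hits, condition):
    switches, stops = 0, 0
    for value in hits:
        if condition(value):     # condition runs in-process
            switches += 1        # trap only when the condition holds
            stops += 1
    return switches, stops

cond = lambda v: v % 100 == 99   # true on 10 of 1000 hits
print(run_traditional(range(1000), cond))  # (1000, 10)
print(run_fcb(range(1000), cond))          # (10, 10)
```

When the condition is rarely true (the common case for conditional breakpoints), the number of round trips drops from one per hit to one per actual stop.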

Re: [lldb-dev] LLDB bot health

2019-01-14 Thread Tamas Berghammer via lldb-dev
+Pavel Labath 

Pavel and I owned the following bots:
lldb-x86_64-ubuntu-14.04-buildserver: builds lldb-server for various
android architectures (doesn't run tests)
lldb-x86_64-ubuntu-14.04-cmake  : runs lldb tests on Linux in six
configurations ((clang-3.5, gcc-4.9.4, clang HEAD) * (i386, x86_64))
lldb-x86_64-darwin-13.4 : builds lldb on darwin using
cmake+ninja and runs remote debugging tests for android (AFAIK the
devices have since been removed)
lldb-windows7-android   : builds lldb for windows using
cmake+ninja and runs remote debugging tests for android (using an i386
android emulator)
lldb-x86_64-ubuntu-14.04-android: builds lldb for linux and runs remote
debugging tests for android (AFAIK the devices have since been removed)

My opinion is that we should leave lldb-x86_64-ubuntu-14.04-buildserver on,
as it provides at least build coverage for android and it is very stable
and easy to fix when it breaks. If people have an interest in maintaining
Linux support (I hope they do), then having lldb-x86_64-ubuntu-14.04-cmake
on and green could be useful, and I can help out with general bot
maintenance, but I won't have the bandwidth to actually look into test
failures. For the rest of the bots I would propose to just turn them off
unless somebody from Google/Android steps forward to maintain them, as
they occasionally require physical access and at the moment they are
located in a lab in the Google MTV office. I will send an e-mail to a few
interested parties to check if there are any takers.

Cheers,
Tamas

On Fri, Jan 11, 2019 at 11:18 PM Stella Stamenova via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> If you look at the bot online, there's usually an admin listed. For
> example:
>
> http://lab.llvm.org:8011/buildslaves
>
> I've CC'd the admins of the bots from Paul's list that are failing.
>
> Thanks,
> -Stella
>
> -Original Message-
> From: Davide Italiano 
> Sent: Friday, January 11, 2019 3:12 PM
> To: Stella Stamenova 
> Cc: Robinson, Paul ; Pavel Labath ;
> Zachary Turner ; LLDB 
> Subject: Re: [lldb-dev] LLDB bot health
>
> On Fri, Jan 11, 2019 at 3:07 PM Stella Stamenova 
> wrote:
> >
> > Thanks Davide,
> >
> > I think several of these bots have not been maintained for a while. One
> thing we could do is try to ping the owners and see if it's possible to
> update the bots or if they're no longer useful, then remove them.
> >
>
> I agree. I don't know who owns these bots, is there an easy way to find?
> (or just cc: them to these e-mail).
> We can then ask Galina to just remove the bots if nobody maintains them.
>
> Thanks,
>
> --
> Davide
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Tamas Berghammer via lldb-dev
Thanks Pavel for looping me in. I haven't looked into the actual
implementation of the prototype yet, but reading your description I have
some concerns regarding the amount of data you capture, as I feel it isn't
sufficient to reproduce a number of use cases.

One problem is when the behavior of LLDB is not deterministic, for whatever
reason (e.g. multi-threading, unordered maps, etc.). Let's take
SBModule::FindSymbols(), which returns an SBSymbolContextList without any
specific order (I haven't checked the implementation, but I would consider
a random order to be valid). Suppose a user calls this function, iterates
through the elements to find an index `I`, calls `GetContextAtIndex(I)`,
and passes the result into a subsequent function. What do we do then? Do
we capture what `GetContextAtIndex(I)` returned in the trace and use that
value, or do we capture the value of `I`, call `GetContextAtIndex(I)`
during reproduction, and use that value? The first would be correct in
this case, but would mean we don't actually call `GetContextAtIndex(I)`,
while the second would mean we call `GetContextAtIndex(I)` with a wrong
index whenever the order in SBSymbolContextList is non-deterministic. In
this case we know that GetContextAtIndex is just an accessor into a
vector, so the first option is the correct one, but I can imagine cases
where it is not (e.g. if GetContextAtIndex had some useful side effect).
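The ambiguity can be made concrete with a toy record/replay model (hypothetical data, not the actual SB API):

```python
# Two equally valid orders FindSymbols() might return on different runs.
capture_order = ["b::f", "a::f", "c::f"]   # order observed while recording
replay_order  = ["a::f", "c::f", "b::f"]   # a different but valid order

# During capture the user searches for "b::f" and finds it at index 0.
i = capture_order.index("b::f")

# Strategy 1: record the value GetContextAtIndex(i) returned.
recorded_value = capture_order[i]

# Strategy 2: record only `i` and re-execute GetContextAtIndex(i) on replay.
replayed_value = replay_order[i]

print(recorded_value)   # b::f -- stable, but GetContextAtIndex never re-runs
print(replayed_value)   # a::f -- re-run, but against a reordered list
```

Strategy 1 preserves the observed value but skips the call (wrong if the call has side effects); strategy 2 preserves the call but can feed it a stale index.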

Another interesting question is what to do with functions taking raw
binary data in the form of a pointer + size (e.g. SBData::SetData). I
think we will have to annotate these APIs to make the reproducer system
aware of the amount of data it has to capture, and then allocate these
buffers with the correct lifetime during replay. I am not sure what the
best way to attach these annotations would be, but I think we need a
fairly generic framework, because I won't be surprised if there are more
situations where we have to annotate the API. A slightly related question:
if a function returns a pointer to a raw buffer (e.g. const char* or
void*), do we have to capture the contents of the buffer or just the
pointer, and in either case, what is the lifetime of the returned buffer
(e.g. SBError::GetCString() returns a buffer that goes out of scope when
the SBError goes out of scope)?
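One possible shape for such an annotation, sketched in Python rather than the real instrumentation macros: the (pointer, size) pair is collapsed into an owned copy at capture time, and replay materializes a fresh buffer with its own lifetime (names here are hypothetical):

```python
trace = []

def capture_set_data(data, size):
    # Annotation effect for an SBData::SetData-like API: copy `size` bytes
    # out of the caller's buffer now, because the pointer value itself will
    # be meaningless at replay time.
    trace.append(("SBData::SetData", bytes(data[:size])))

def replay(trace):
    results = []
    for api, payload in trace:
        # Replay owns this allocation, so lifetime questions disappear for
        # *input* buffers; returned buffers (e.g. SBError::GetCString)
        # remain the hard case.
        buf = bytearray(payload)
        results.append((api, bytes(buf)))
    return results

capture_set_data(b"\x01\x02\x03\x04", 4)
print(replay(trace))  # [('SBData::SetData', b'\x01\x02\x03\x04')]
```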

Additionally, I am pretty sure we have at least some functions returning
various indices that require remapping other than pointers, either because
they just index into a data structure with undefined internal order or
because they reference some other resource. Just by randomly browsing some
of the SB APIs I found, for example, SBHostOS::ThreadCreate, which returns
the pid/tid for the newly created thread and will have to be remapped (it
also takes a function as an argument, which is a problem as well). Because
of this, I am not sure we can get away with an automatically generated set
of API descriptions instead of writing one with explicit annotations for
the various remapping rules.
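For handles like pids/tids, a remapping table consulted at replay time is one plausible mechanism (a sketch under that assumption, not the proposed design):

```python
class HandleRemapper:
    """Maps handles recorded at capture time to their replay-time values."""
    def __init__(self):
        self._map = {}

    def record(self, captured, replayed):
        self._map[captured] = replayed

    def translate(self, captured):
        # Any later API call that took the captured handle as an argument
        # must be replayed with the live one.
        return self._map[captured]

remap = HandleRemapper()
# Suppose ThreadCreate returned tid 1234 during capture but 5678 on replay.
remap.record(1234, 5678)
print(remap.translate(1234))  # 5678
```

This is the same pointer-to-index idea described in the RFC, generalized to any value whose identity is run-specific rather than stable.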

If there is interest I can try to take a deeper look into the topic
sometime later but I hope that those initial thoughts are useful.

Tamas

On Mon, Jan 7, 2019 at 9:40 AM Pavel Labath  wrote:

> On 04/01/2019 22:19, Jonas Devlieghere via lldb-dev wrote:
> > Hi Everyone,
> >
> > In September I sent out an RFC [1] about adding reproducers to LLDB.
> > Over the
> > past few months, I landed the reproducer framework, support for the GDB
> > remote
> > protocol and a bunch of preparatory changes. There's still an open code
> > review
> > [2] for dealing with files, but that one is currently blocked by a
> change to
> > the VFS in LLVM [3].
> >
> > The next big piece of work is supporting user commands (e.g. in the
> > driver) and
> > SB API calls. Originally I expected these two things to be separate, but
> > Pavel
> > made a good case [4] that they're actually very similar.
> >
> > I created a prototype of how I envision this to work. As usual, we can
> > differentiate between capture and replay.
> >
> > ## SB API Capture
> >
> > When capturing a reproducer, every SB function/method is instrumented
> > using a
> > macro at function entry. The added code tracks the function identifier
> > (currently we use its name with __PRETTY_FUNCTION__) and its arguments.
> >
> > It also tracks when a function crosses the boundary between internal and
> > external use. For example, when someone (be it the driver, the python
> > binding
> > or the RPC server) call SBFoo, and in its implementation SBFoo calls
> > SBBar, we
> > don't need to record SBBar. When invoking SBFoo during replay, it will
> > itself
> > call SBBar.
> >
> > When a boundary is crossed, the function name and arguments are
> > serialized to a
> > file. This is trivial for basic types. For objects, we maintain a table
> that
> > maps pointer values to indices and serialize the index.
> >
> > To keep our table consistent, we also need to track return for functions
> > that
> > return an object by v

Re: [lldb-dev] Anybody using the Go/Java debugger plugins?

2018-01-30 Thread Tamas Berghammer via lldb-dev
Originally I added the Java support to work with the Android ART runtime,
and it has some pretty hard baked-in dependencies on the debug info ART
generates and on the version of ART available at that time (Android N),
even though I don't think this limitation was communicated clearly in the
source code or in code reviews. Considering that AFAIK it hasn't been
tested with Android O and I haven't seen any bugfix for a while, I would
assume it is mostly unused, so I am happy to see it removed. And as Pavel
said, if somebody wants to use it again we can always add it back with a
better testing strategy and a long-term plan.

Generally, for new language support I think we should have a policy
similar to the one LLVM has for new backends. They should be developed out
of tree first, without us providing a stable API (developers can fork a
specific version of LLDB, preferably upstream language-independent
bugfixes, and pull in new changes once in a while), and once they are
mature enough both in terms of testing and maintenance commitment they can
be pulled into the main LLDB source tree.

Tamas

On Tue, Jan 30, 2018 at 11:52 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Right, so, independently of this thread here, we've had an internal
> discussion about reviving java support. However, it is still very
> uncertain that this will actually happen, so I'm not opposed to
> removing it as we can always add it back later (with better testing,
> hopefully).
>
> Regardless of what happens here (and in light of the rust thread), I
> think a clearer bar for what we expect from new language support
> plugin would be useful for everyone.
>
> pl
>
> On 22 January 2018 at 20:13, Jim Ingham  wrote:
> > To Davide's alternative: LLDB does handle loading plugins that use the
> SB API's (for things like data formatters.)  But there's not currently an
> SB interface to support
> > writing a full language plugin, and we don't export the lldb_private
> API's from the lldb binary.  So there's no current mechanism to provide
> out-of-tree language plugins.  It would be great to enable out-of-tree
> language support mechanisms but we would have to design an SB interface for
> that purpose.
> >
> > I see occasional questions about using Go with lldb on stack overflow
> and the like.  It might be good to put out a more general "anybody
> interested in supporting this" call for Go, but I'm not sure the lldb-dev
> list is the best place to find an owner.  Is there some Go dev list we can
> ask to see if anybody cares to support this?
> >
> > Non-stop never actually worked, it was just a promise, and the code for
> it was pretty thin.  I would be okay with pulling that out unless somebody
> is actually getting good use out of it.
> >
> > Jim
> >
> >> On Jan 22, 2018, at 10:17 AM, Pavel Labath via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>
> >> The Go support was added by Ryan as a 20% project. Now that he's no
> >> longer working for Google, it's pretty much abandoned.
> >> The Java support was added by us (android folks) to support java
> >> debugging (to a certain extent). However, we never really finished the
> >> project, so we're not using that code now. We're hoping to come back
> >> to it one day, but I agree we should not burden everyone else while we
> >> make up our mind on that.
> >>
> >> So I don't think anybody would shout at us if we removed them right
> >> now, but maybe we should make some effort to find a maintainer for
> >> them before removal? E.g. publicly declare that they are going to be
> >> deleted on date  unless a maintainer steps up to take care of them
> >> (we can define the minimum level of support we'd expect from such a
> >> maintainer). Then I can e.g. forward the email to the Google Go folks
> >> and see if anyone of them wants to take that up.
> >>
> >> As for Java, I'm going to bring up the desire to remove the Java
> >> plugin on our team's meeting this week and get back to you with the
> >> result.
> >>
> >>
> >> In general I think that a clear deprecation/removal process would be
> >> nice to have. I have a couple of things I think are broken/unused
> >> (PlatformKalimba? non-stop mode?) but I haven't brought them up
> >> because I was unsure how to handle it.
> >>
> >>
> >> On 22 January 2018 at 15:28, Davide Italiano 
> wrote:
> >>> Hi,
> >>> during my wandering I stumbled upon the `Go` and the `Java` plugins in
> >>> the lldb source tree.
> >>> They seem to not have been touched in a while, and I'm not necessarily
> >>> sure they're in a working state. Keeping them in tree is a maintenance
> >>> burden, so unless somebody is actively using them or somebody is
> >>> willing to step up as maintainers, I'm not necessarily sure we should
> >>> pay this price.
> >>>
> >>> An alternative would be that of having a pluggable mechanism to add
> >>> language support (I haven't fleshed out the details of this yet, but
> >>> it should be possible, somehow). Other languages which have out of

Re: [lldb-dev] Resolving dynamic type based on RTTI fails in case of type names inequality in DWARF and mangled symbols

2017-12-19 Thread Tamas Berghammer via lldb-dev
Hi,

I thought most compilers still emit DW_AT_MIPS_linkage_name instead of the
standard DW_AT_linkage_name, but I agree that if we can, we should use the
standard one.

Regarding performance, we have two different scenarios. On Apple platforms
we have the Apple accelerator tables to improve load time (they might work
on FreeBSD as well), while on other platforms we index the DWARF data
(DWARFCompileUnit::Index) to effectively generate accelerator tables in
memory, which is a faster process than fully parsing the DWARF (currently
we only parse function DIEs and we don't build the clang types). I think
an ideal solution would be to store the vtable name in DWARF, so the DWARF
data is standalone, and then have accelerator tables that allow a fast
lookup from mangled symbol name to DIE offset. I am not too familiar with
the Apple accelerator tables, but if we have anything that maps from
mangled name to DIE offset, then we can add a few entries to it to map
from mangled vtable name to type DIE or vtable DIE.
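Conceptually the proposed lookup is just a hash map keyed by mangled name; the names and offsets below are hypothetical, purely to show the shape:

```python
# Accelerator table sketch: mangled name -> DIE offset. Adding vtable
# entries means a "vtable for X" symbol resolves directly to X's type DIE,
# with no round trip through the demangled "vtable for CLASSNAME" string.
accel = {
    "_ZN3FooC2Ev": 0x0040,   # Foo::Foo() -> function DIE (existing use)
    "_ZTV3Foo":    0x0123,   # vtable for Foo -> type (or vtable) DIE
}

def die_for_vtable_symbol(mangled):
    return accel.get(mangled)

print(hex(die_for_vtable_symbol("_ZTV3Foo")))  # 0x123
```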

Tamas

On Mon, Dec 18, 2017 at 9:02 PM xgsa  wrote:

> Hi Tamas,
>
> First, why DW_AT_MIPS_linkage_name, but not just DW_AT_linkage_name? The
> latter is standardized and currently generated by clang, at least on x64.
>
> Second, this doesn't help to solve the issue, because this will require
> parsing all the DWARF types during startup to build a map, which breaks
> the lazy DWARF loading performed by lldb. Or am I missing something?
>
> Thanks,
> Anton.
>
> 18.12.2017, 22:59, "Tamas Berghammer" :
>
> Hi Anton and Jim,
>
> What do you think about storing the mangled type name or the mangled
> vtable symbol name somewhere in DWARF in the DW_AT_MIPS_linkage_name
> attribute? We are already doing it for the mangled names of functions so
> extending it to types shouldn't be too controversial.
>
> Tamas
>
> On Mon, 18 Dec 2017, 17:29 xgsa via lldb-dev, 
> wrote:
>
> Thank you for clarification, Jim, you are right, I misunderstood a little
> bit what lldb actually does.
>
> It is not that the compiler can't be fixed; it's that relying on the
> correspondence of mangled and demangled forms is not reliable enough, so
> we are looking for more robust alternatives. Moreover, I am not sure that
> such fuzzy matching could be done based just on the class name, so it
> would require reading more DIEs. Taking into account that, for instance,
> our project has quite a few such types, this could noticeably slow down
> the debugger.
>
> Thus, I'd like to mention one more alternative and get your feedback, if
> possible. What is actually necessary is the correspondence between the
> mangled and demangled vtable symbol. Possibly it is worth preparing a
> separate section during compilation (like e.g. apple_types) which would
> store this correspondence? It would work fast and be more reliable than
> the current approach, but it would certainly increase the debug info size
> (however, I cannot estimate the exact increase, e.g. in percent).
>
> What do you think? Which solution is preferable?
>
> Thanks,
> Anton.
>
> 15.12.2017, 23:34, "Jim Ingham" :
> > First off, just a technical point. lldb doesn't use RTTI to find dynamic
> types, and in fact works for projects like lldb & clang that turn off RTTI.
> It just uses the fact that the vtable symbol for an object demangles to:
> >
> > vtable for CLASSNAME
> >
> > That's not terribly important, but I just wanted to make sure people
> didn't think lldb was doing something fancy with RTTI... Note, gdb does (or
> at least used to do) dynamic detection the same way.
> >
> > If the compiler can't be fixed, then it seems like your solution [2] is
> what we'll have to try.
> >
> > As it works now, we get the CLASSNAME from the vtable symbol and look it
> up in the list of types. That is pretty quick because the type names
> are indexed, so we can find it with a quick search in the index. Changing
> this over to a method where we do some additional string matching rather
> than just using the table's hashing is going to be a fair bit slower
> because you have to run over EVERY type name. But this might not be that
> bad. You would first look it up by exact CLASSNAME and only fall back on
> your fuzzy match if this fails, so most dynamic type lookups won't see any
> slowdown. And if you know the cases where you get into this problem you can
> probably further restrict when you need to do this work so you don't suffer
> this penalty for every lookup where we don't have debug info for the
> dynamic type. And you could keep a side-table of mangled-name -> DWARF
> name, and maybe a black-list for unfound names, so you only have to do this
> once.
> >
> > This estimation is based on the assumption that you can do your work
> just on the type names, without having to get more type information out of
> the DWARF for each candidate match. A solution that relies on realizing
> every class in lldb so you can get more information out of the type
> information to help with the match will defeat all ou

Re: [lldb-dev] Resolving dynamic type based on RTTI fails in case of type names inequality in DWARF and mangled symbols

2017-12-18 Thread Tamas Berghammer via lldb-dev
Hi Anton and Jim,

What do you think about storing the mangled type name or the mangled vtable
symbol name somewhere in DWARF in the DW_AT_MIPS_linkage_name attribute? We
are already doing it for the mangled names of functions so extending it to
types shouldn't be too controversial.

Tamas

On Mon, 18 Dec 2017, 17:29 xgsa via lldb-dev, 
wrote:

> Thank you for clarification, Jim, you are right, I misunderstood a little
> bit what lldb actually does.
>
> It is not that the compiler can't be fixed; it's that relying on the
> correspondence of mangled and demangled forms is not reliable enough, so
> we are looking for more robust alternatives. Moreover, I am not sure that
> such fuzzy matching could be done based just on the class name, so it
> would require reading more DIEs. Taking into account that, for instance,
> our project has quite a few such types, this could noticeably slow down
> the debugger.
>
> Thus, I'd like to mention one more alternative and get your feedback, if
> possible. What is actually necessary is the correspondence between the
> mangled and demangled vtable symbol. Possibly it is worth preparing a
> separate section during compilation (like e.g. apple_types) which would
> store this correspondence? It would work fast and be more reliable than
> the current approach, but it would certainly increase the debug info size
> (however, I cannot estimate the exact increase, e.g. in percent).
>
> What do you think? Which solution is preferable?
>
> Thanks,
> Anton.
>
> 15.12.2017, 23:34, "Jim Ingham" :
> > First off, just a technical point. lldb doesn't use RTTI to find dynamic
> types, and in fact works for projects like lldb & clang that turn off RTTI.
> It just uses the fact that the vtable symbol for an object demangles to:
> >
> > vtable for CLASSNAME
> >
> > That's not terribly important, but I just wanted to make sure people
> didn't think lldb was doing something fancy with RTTI... Note, gdb does (or
> at least used to do) dynamic detection the same way.
> >
> > If the compiler can't be fixed, then it seems like your solution [2] is
> what we'll have to try.
> >
> > As it works now, we get the CLASSNAME from the vtable symbol and look it
> up in the list of types. That is pretty quick because the type names
> are indexed, so we can find it with a quick search in the index. Changing
> this over to a method where we do some additional string matching rather
> than just using the table's hashing is going to be a fair bit slower
> because you have to run over EVERY type name. But this might not be that
> bad. You would first look it up by exact CLASSNAME and only fall back on
> your fuzzy match if this fails, so most dynamic type lookups won't see any
> slowdown. And if you know the cases where you get into this problem you can
> probably further restrict when you need to do this work so you don't suffer
> this penalty for every lookup where we don't have debug info for the
> dynamic type. And you could keep a side-table of mangled-name -> DWARF
> name, and maybe a black-list for unfound names, so you only have to do this
> once.
> >
> > This estimation is based on the assumption that you can do your work
> just on the type names, without having to get more type information out of
> the DWARF for each candidate match. A solution that relies on realizing
> every class in lldb so you can get more information out of the type
> information to help with the match will defeat all our attempts at lazy
> DWARF reading. This can cause quite long delays in big programs. So I would
> be much more worried about a solution that requires this kind of work.
> Again, if you can reject most potential candidates by looking at the name,
> and only have to realize a few likely types, the approach might not be that
> slow.
> >
> > Jim
> >
> >>  On Dec 15, 2017, at 7:11 AM, xgsa via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>
> >>  Sorry, I probably shouldn't have used HTML for that message. Converted
> to plain text.
> >>
> >>   Original message 
> >>  15.12.2017, 18:01, "xgsa" :
> >>
> >>  Hi,
> >>
> >>  I am working on an issue where, for some complex cases with templates
> >>  in a C++ program, showing the dynamic type based on RTTI in lldb
> >>  doesn't work properly. Consider the following example:
> >>  enum class TagType : bool
> >>  {
> >> Tag1
> >>  };
> >>
> >>  struct I
> >>  {
> >> virtual ~I() = default;
> >>  };
> >>
> >>  template <TagType T>
> >>  struct Impl : public I
> >>  {
> >>  private:
> >> int v = 123;
> >>  };
> >>
> >>  int main(int argc, const char * argv[]) {
> >> Impl<TagType::Tag1> impl;
> >> I& i = impl;
> >> return 0;
> >>  }
> >>
> >>  For this example clang generates the type name "Impl<TagType::Tag1>"
> >>  in DWARF and "__ZTS4ImplIL7TagType0EE" when mangling symbols (which
> >>  lldb demangles to Impl<(TagType)0>). Thus when in
> ItaniumABILanguageRuntime::GetTypeInfoFromVTableAddress() lldb tries to
> resolve the type, it is unable to find it. More cases and the detailed
> description why ll

Re: [lldb-dev] Prologue instructions having line information

2017-09-14 Thread Tamas Berghammer via lldb-dev
Hi Carlos,

Thank you for looking into the LLDB failure. I looked into it briefly, and
the issue is that we have two functions, f and g, where g is inlined into
f as the first call. This causes the first non-prologue line entry of f to
be inside the address range of g, which means that when we step into f
from outside we end up inside g instead. Previously, the first line entry
for f matched the start address of the inlined copy of g, and LLDB was
able to handle the stepping properly.

For the concrete example you should compile
https://github.com/llvm-mirror/lldb/blob/26fea9dbbeb3020791cdbc46fbf3cc9d7685d7fd/packages/Python/lldbsuite/test/functionalities/inline-stepping/calling.cpp
with
"/mnt/ssd/ll/git/build/host-release/bin/clang-5.0 -std=c++11 -g -O0
-fno-builtin -m32 --driver-mode=g++ calling.cpp" and then observe
that caller_trivial_2 has a DW_AT_low_pc = 0x8048790, the inlined
inline_trivial_1 inside it has a DW_AT_low_pc = 0x8048793, but the first
line entry after "Set prologue_end to true" is at 0x8048796, while
previously it was at 0x8048793.
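If my reading of the report is right, the stepping problem reduces to a range check: whether the caller's first non-prologue line entry falls strictly inside the inlined callee's range. A sketch using the addresses above (the 0x80487a0 end address is made up for illustration):

```python
def lands_inside_inlined_callee(prologue_end, inlined_lo, inlined_hi):
    # Landing exactly on inlined_lo is fine: LLDB sees the start of the
    # inlined call site. Landing strictly inside the range makes a
    # "step into f" surface mid-way into g instead.
    return inlined_lo < prologue_end < inlined_hi

INLINED_LO, INLINED_HI = 0x8048793, 0x80487a0  # INLINED_HI is hypothetical
print(lands_inside_inlined_callee(0x8048793, INLINED_LO, INLINED_HI))  # False (old)
print(lands_inside_inlined_callee(0x8048796, INLINED_LO, INLINED_HI))  # True  (new)
```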

Tamas

On Thu, Sep 14, 2017 at 9:59 AM Carlos Alberto Enciso via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi,
>
> I have been working on a compiler issue, where instructions associated to
> the function prolog are assigned line information, causing the debugger to
> show incorrectly the beginning of the function body.
>
> For a full description, please see:
>
> https://reviews.llvm.org/D37625
> https://reviews.llvm.org/rL313047
>
> The submitted patch caused some LLDB tests to fail. I have attached the
> log failure.
>
> I have no knowledge about the test framework used by LLDB.
>
> What is the best way to proceed in this case?
>
> Thanks very much for your feedback.
>
> Carlos Enciso
>
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] lldb-server link failure with shared library configuration

2017-08-30 Thread Tamas Berghammer via lldb-dev
I tried to build using the following command, which should be a reasonably
close approximation to the one you used (I don't have ICU installed at the
moment), and it still links fine for me:
CC=/usr/bin/clang CXX=/usr/bin/clang++ cmake -G Ninja ../../llvm
-DBUILD_SHARED_LIBS=true -DLLVM_TARGETS_TO_BUILD='X86'
-DCMAKE_BUILD_TYPE=Debug -DLLVM_ENABLE_ASSERTIONS=TRUE
-DLLVM_OPTIMIZED_TABLEGEN=ON

It would be great to understand what exactly causes the difference between
the two cases by some sort of bisecting, as I see nothing in the source
code that would explain this. If changing from -DCMAKE_BUILD_TYPE=Debug to
-DCMAKE_BUILD_TYPE=Release fixes the issue, then it would be nice to diff
the ninja build graphs and the different cmake caches to try to figure out
where the difference starts.

Tamas

On Wed, Aug 30, 2017 at 12:17 PM Peeter Joot 
wrote:

> Hi Tamas,
>
> It looks like lldb-server only fails if I build with a Debug
> configuration, which I didn't realize until now. In a Release
> configuration, I don't need any changes to the CMakeLists files and
> lldb-server links without error. My full build configuration in debug
> mode was:
>
> mkdir lldb50.1708110153
> cd lldb50.1708110153
> PATH=$PATH:/opt/lzlabs/bin
> CC=/usr/bin/clang CXX=/usr/bin/clang++ cmake \
>   -G Ninja \
>   ../llvm \
>   -DBUILD_SHARED_LIBS=true \
>   -DLLVM_TARGETS_TO_BUILD='X86' \
>   -DCMAKE_BUILD_TYPE=Debug \
>   -DLLVM_ENABLE_ASSERTIONS=TRUE \
>   -DCMAKE_INSTALL_PREFIX=/home/pjoot/clang/lldb50.1708110153 \
>   -DLLVM_OPTIMIZED_TABLEGEN=ON \
>   -DICU_LIBRARY=/opt/lzlabs/lib64 \
>   -DICU_INCLUDE_DIR=/opt/lzlabs/include
>
> Without any changes LLVMRuntimeDyld is not in the lldb-server link list,
> so this is not an ordering issue.  I'm not sure why this ends up as an
> issue only with Debug.
>
> --
> Peeter
>
>


Re: [lldb-dev] lldb-server link failure with shared library configuration

2017-08-30 Thread Tamas Berghammer via lldb-dev
Hi Peeter,

Why do you have to make the dependency conditional on
BUILD_SHARED_LIBS? If lldbExpression depends on LLVMRuntimeDyld, it
should depend on it regardless of the build configuration.

I also tried building lldb with shared libraries locally and didn't hit
any issues when I used the following command (on Ubuntu
14.04): cmake ../../llvm -G Ninja -DCMAKE_C_COMPILER=clang
-DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Release
-DBUILD_SHARED_LIBS=true

Are you using any other cmake flags as well? Also, can you check whether
the final link command contains LLVMRuntimeDyld without your change?
Could it just be a library ordering issue where some symbols are dropped
before they are used?

Cheers,
Tamas

On Wed, Aug 30, 2017 at 12:50 AM Peeter Joot 
wrote:

> Hi Tamas,
>
> I was able to use your suggestion as follows:
>
> diff --git a/source/Expression/CMakeLists.txt b/source/Expression/CMakeLists.txt
> index 7d9643a..b53b095 100644
> --- a/source/Expression/CMakeLists.txt
> +++ b/source/Expression/CMakeLists.txt
> @@ -2,6 +2,12 @@ if(NOT LLDB_BUILT_STANDALONE)
>    set(tablegen_deps intrinsics_gen)
>  endif()
>
> +set(LLDB_EXP_DEPS)
> +
> +if(BUILD_SHARED_LIBS)
> +  list(APPEND LLDB_EXP_DEPS LLVMRuntimeDyld)
> +endif()
> +
>  add_lldb_library(lldbExpression
>    DiagnosticManager.cpp
>    DWARFExpression.cpp
> @@ -30,6 +36,7 @@ add_lldb_library(lldbExpression
>      lldbTarget
>      lldbUtility
>      lldbPluginExpressionParserClang
> +    ${LLDB_EXP_DEPS}
>
>    LINK_COMPONENTS
>      Core
> and was able to successfully build the lldb-server.
>
> --
> Peeter
>
>


Re: [lldb-dev] lldb-server link failure with shared library configuration

2017-08-29 Thread Tamas Berghammer via lldb-dev
Hi All,

We are trying to keep the size of lldb-server as small as possible, as it
has to be copied over to the Android device for every debug session. The
way we currently achieve this is by using linker garbage collection to get
rid of the unused code.

In the long term it would be nice to be more explicit about the list of
dependencies, but currently we don't have clear enough boundaries for doing
that. Pavel and Zachary spent some time on improving it but I think we are
still quite far from that.

For your problem I think a better short-term option would be to add
LLVMRuntimeDyld as a dependency of lldbExpression instead of lldb-server
directly (assuming that works). Optionally, if you are feeling more
adventurous, you can try to replace ${LLDB_PLUGINS} and ${LLDB_SYSTEM_LIBS}
with a more explicit list of dependencies, which might remove the dependency
between lldb-server and LLVMRuntimeDyld, but I am not certain.

Tamas

On Mon, Aug 28, 2017 at 6:00 PM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> If we are pulling in the expression parser, that would explain our issues.
> If this currently happens in lldb-server we need to add LLVMRuntimeDyld to
> the link libraries. I know some people at Google have looked into getting
> lldb-server to link against as little as possible, and maybe this is just
> how things are for the time being. We should verify that. It would be nice
> if lldb-server didn't link against the expression parser if possible.
>
> Greg
>
> On Aug 28, 2017, at 9:56 AM, Peeter Joot 
> wrote:
>
> Hi Greg,
>
> IRExecutionUnit.cpp looks like the origin of at least some of the
> undefined symbols:
>
> .../llvm/include/llvm/ExecutionEngine/RTDyldMemoryManager.h:61: undefined
> reference to `vtable for llvm::RTDyldMemoryManager'
>
>
> .../llvm/include/llvm/ExecutionEngine/JITSymbol.h:223: undefined reference
> to `vtable for llvm::JITSymbolResolver'
>
>
> .../llvm/include/llvm/ExecutionEngine/RuntimeDyld.h:96: undefined
> reference to `vtable for llvm::RuntimeDyld::MemoryManager'
>
>
> lib/liblldbExpression.a(IRExecutionUnit.cpp.o):(.data.rel.ro+0x90):
> undefined reference to `llvm::RTDyldMemoryManager::deregisterEHFrames()'
>
> lib/liblldbExpression.a(IRExecutionUnit.cpp.o):(.data.rel.ro+0xa8):
> undefined reference to `llvm::RuntimeDyld::MemoryManager::anchor()'
>
> lib/liblldbExpression.a(IRExecutionUnit.cpp.o):(.data.rel.ro+0x118):
> undefined reference to `llvm::JITSymbolResolver::anchor()'
>
> lib/liblldbExpression.a(IRExecutionUnit.cpp.o):(.data.rel.ro._ZTVN4llvm18MCJITMemoryManagerE[_ZTVN4llvm18MCJITMemoryManagerE]+0x60):
> undefined reference to `llvm::RuntimeDyld:
>
> :MemoryManager::anchor()'
>
> there are a couple of undefined vtable references in headers (also above),
> but it's not clear to me if these also neccessarily come from
> IRExectionUnix.cpp.
>
> --
> Peeter
>
>
>


Re: [lldb-dev] lldb command like gdb's "set auto-solib-add off"

2017-05-23 Thread Tamas Berghammer via lldb-dev
A few more additions to the above:

How are you running lldb-server on your device? For remote debugging,
running lldb-server in platform mode (and then using remote-linux or
similar as the selected platform in LLDB) will give you significantly
better performance than running lldb-server in gdbserver mode only and
selecting remote-gdbserver as your platform in LLDB. The following points
*only apply* when you are running lldb-server in platform mode.

If the target-side libraries are backed by files on the target system then
LLDB should download them only once (at first use) and then cache them on
the host in a module cache (even across LLDB or machine restarts). This
means that the startup time is expected to be quite high the first time you
debug on a specific device, but it should be much faster afterwards (as you
already have the libraries on the host). If this is not the case it would
be interesting to see why the module cache isn't working for you.

By default LLDB uses the gdb-remote protocol to download the files from the
target device, which is known to be very slow for transferring large
amounts of data in bulk. For Android we implemented a faster way to
download the files using ADB, which gave us a large performance gain
(multiple times faster, though I don't remember the exact number). You can
see the code at
https://github.com/llvm-mirror/lldb/blob/a4df8399803ba766d05ef7fcd5d04dc0342d2682/source/Plugins/Platform/Android/PlatformAndroid.cpp#L190
I expect that you can achieve similar gains if you implement
Platform*::GetFile and Platform*::PutFile for your platform based on a
faster method (e.g. scp/rsync)
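The download-once behaviour of the module cache described above can be sketched in a few lines of Python (a toy illustration, not lldb's actual implementation; `download` is a hypothetical callable standing in for whatever transport is in use, and the real cache keys on the module's UUID rather than device + path):

```python
import hashlib
from pathlib import Path

def cached_fetch(cache_root, device_id, remote_path, download):
    """Return a host-side copy of remote_path, downloading at most once.

    `download(remote_path, dest)` is a hypothetical stand-in for the slow
    transport (gdb-remote, adb, scp, ...). We key the cache on device and
    remote path just to keep the sketch self-contained.
    """
    key = hashlib.sha1(f"{device_id}:{remote_path}".encode()).hexdigest()
    dest = Path(cache_root) / key / Path(remote_path).name
    if not dest.exists():  # first debug session on this device: slow path
        dest.parent.mkdir(parents=True, exist_ok=True)
        download(remote_path, dest)
    return dest  # later sessions hit the cache, even across restarts
```

On the second call for the same library the transport is never invoked, which is why only the first debug session against a new device pays the transfer cost.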

Tamas

On Tue, May 23, 2017 at 12:23 AM Ted Woodward via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> To expand on Jim's message, "target modules search-paths add" can be used
> to help lldb find the host-side copies of shared libraries when they're
> not in the same directory as on the target system.
>
> For example, if you have libraries in /usr/lib on the target system, and
> have copies on the host system in /local/scratch/work/debuglibs , you can
> say
> target modules search-paths add /usr/lib /local/scratch/work/debuglibs
> and when lldb goes to load (for example) /usr/lib/libc.so, it will try to
> load /local/scratch/work/debuglibs/libc.so from the host machine before
> trying to load through the memory interface.
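The lookup described above amounts to a prefix rewrite; a toy Python sketch of the idea (not lldb's code, just the mapping logic):

```python
def remap(search_paths, target_path):
    """Rewrite a target-side path to a host-side one using the first
    matching (prefix, replacement) pair, mimicking what
    `target modules search-paths add` sets up."""
    for prefix, replacement in search_paths:
        prefix = prefix.rstrip("/")
        if target_path.startswith(prefix + "/"):
            return replacement.rstrip("/") + target_path[len(prefix):]
    return target_path  # no rule matched: keep the original path
```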
>
> I found this very helpful when trying to debug dynamic executables on
> Linux running on a Hexagon board, running lldb on x86 Linux or Windows.
>
> Ted
>
> --
> Qualcomm Innovation Center, Inc.
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
> Linux Foundation Collaborative Project
>
> > -Original Message-
> > From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Jim
> > Ingham via lldb-dev
> > Sent: Monday, May 22, 2017 5:02 PM
> > To: Chunseok Lee 
> > Cc: lldb-dev 
> > Subject: Re: [lldb-dev] lldb command like gdb's "set auto-solib-add off"
> >
> > In general, if lldb can find host-side copies of binaries that match the
> ones it
> > finds on the device, it will do all symbol reading against the host
> copies.  In
> > the case of an OS X host debugging iOS, lldb uses Spotlight and a few
> other
> > tricks to find the host-side binaries.  You can also use
> "add-symbol-file" to
> > manually point lldb at the host-side symbol files.  If you are reading
> symbols
> > from host-side files, then symbol loading doesn't slow down debugging
> > startup that much.
> >
> > Presumably, your symbol files are only on the device, so you are reading
> > them from memory.  "settings set target.memory-module-load-level" is
> > almost what you want, but it applies to ALL shared libraries read from
> > memory.  If you can copy the symbol file that contains the
> > __jit_debug_register_code to the host you are debugging from, and use
> > add-symbol-file to tell lldb about it, then that one should NOT have to
> be
> > read from memory anymore.  Then you could turn "memory-module-load-level"
> > to partial or even minimal, and that should get you starting faster.
> >
> > The other option would be to extend the setting, so you can say:
> >
> > set set target.memory-module-load-level [[lib-name level] [lib-name level] ...]
> >
> > If there's just one argument, that's equivalent to "all ".
> >
> > Jim
> >
> > > On May 22, 2017, at 2:35 PM, Chunseok Lee 
> > wrote:
> > >
> > >
> > >
> > > Thank you for your help.
> > > It would be really helpful to me.
> > >
> > > The reason behind the question is exactly what you mentioned. I am
> > > working on debugging on devices and it seems that shared library loading
> > > (I do not know if lldb loads symbols lazily) runs very slowly, since my
> > > testing program depends on so many shared libs. Since I am debugging with
> > > the gdbjit feature, I do not need shared library loading except for one
> > > shared lib (which contains the __jit_debug_register_code symbol). Thus, I
> > > want to turn off shared lib loading

Re: [lldb-dev] Running check-lldb

2017-04-20 Thread Tamas Berghammer via lldb-dev
AFAIK the Ubuntu 14.04 cmake builder runs tests using ToT clang (built on
the build bot) as steps test3 and test4, and it seems to be green, so if
you are seeing different results I would expect them to be caused by a
configuration difference between the bot's setup and yours (or by the bot
running the tests incorrectly).

On Thu, Apr 20, 2017 at 2:47 PM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> On 19 April 2017 at 19:15, Scott Smith 
> wrote:
>
>> A combination of:
>> 1. Updating to a known good release according to buildbot
>> 2. using Ubuntu 14.04
>> 3. compiling release using clang-4.0
>>
> I'd hope that the compiler used to build lldb does not matter. If you see
> any differences due to this factor, please let me know.
>
> 4. using the dotest command line that buildbot uses
>>
> The exact command line the buildbot uses is not important. The only
> important distinction from the check-lldb target is the compiler used. By
> default it uses the host compiler used to build lldb. As no-one builds
> tests using clang-4.0 it's quite possible that some things may be broken
> (or just not properly annotated).
>
>
>> 5. specifying gcc-4.8 instead of the locally compiled clang
>>
>> has most of the tests passing, with a handful of unexpected successes:
>>
>> UNEXPECTED SUCCESS:
>> TestRegisterVariables.RegisterVariableTestCase.test_and_run_command_dwarf
>> (lang/c/register_variables/TestRegisterVariables.py)
>> UNEXPECTED SUCCESS:
>> TestRegisterVariables.RegisterVariableTestCase.test_and_run_command_dwo
>> (lang/c/register_variables/TestRegisterVariables.py)
>> UNEXPECTED SUCCESS:
>> TestExitDuringBreak.ExitDuringBreakpointTestCase.test_dwarf
>> (functionalities/thread/exit_during_break/TestExitDuringBreak.py)
>> UNEXPECTED SUCCESS:
>> TestExitDuringBreak.ExitDuringBreakpointTestCase.test_dwo
>> (functionalities/thread/exit_during_break/TestExitDuringBreak.py)
>> UNEXPECTED SUCCESS:
>> TestThreadStates.ThreadStateTestCase.test_process_interrupt_dwarf
>> (functionalities/thread/state/TestThreadStates.py)
>> UNEXPECTED SUCCESS:
>> TestThreadStates.ThreadStateTestCase.test_process_interrupt_dwo
>> (functionalities/thread/state/TestThreadStates.py)
>> UNEXPECTED SUCCESS: TestRaise.RaiseTestCase.test_restart_bug_dwarf
>> (functionalities/signal/raise/TestRaise.py)
>> UNEXPECTED SUCCESS: TestRaise.RaiseTestCase.test_restart_bug_dwo
>> (functionalities/signal/raise/TestRaise.py)
>> UNEXPECTED SUCCESS:
>> TestMultithreaded.SBBreakpointCallbackCase.test_sb_api_listener_resume_dwarf
>> (api/multithreaded/TestMultithreaded.py)
>> UNEXPECTED SUCCESS:
>> TestMultithreaded.SBBreakpointCallbackCase.test_sb_api_listener_resume_dwo
>> (api/multithreaded/TestMultithreaded.py)
>> UNEXPECTED SUCCESS: lldbsuite.test.lldbtest.TestPrintf.test_with_dwarf
>> (lang/cpp/printf/TestPrintf.py)
>> UNEXPECTED SUCCESS: lldbsuite.test.lldbtest.TestPrintf.test_with_dwo
>> (lang/cpp/printf/TestPrintf.py)
>>
> The unexpected successes are expected, unfortunately. :) What happens here
> is that the tests are flaky and they fail like 1% of the time, so they are
> marked as xfail.
>
>
>>
>> This looks different than another user's issue:
>> http://lists.llvm.org/pipermail/lldb-dev/2016-February/009504.html
>>
>> I also tried gcc-4.9.4 (via the ubuntu-toolchain-r ppa) and got a
>> different set of problems:
>>
>> FAIL:
>> TestNamespaceDefinitions.NamespaceDefinitionsTestCase.test_expr_dwarf
>> (lang/cpp/namespace_definitions/TestNamespaceDefinitions.py)
>> FAIL: TestNamespaceDefinitions.NamespaceDefinitionsTestCase.test_expr_dwo
>> (lang/cpp/namespace_definitions/TestNamespaceDefinitions.py)
>> FAIL:
>> TestTopLevelExprs.TopLevelExpressionsTestCase.test_top_level_expressions_dwarf
>> (expression_command/top-level/TestTopLevelExprs.py)
>> FAIL:
>> TestTopLevelExprs.TopLevelExpressionsTestCase.test_top_level_expressions_dwo
>> (expression_command/top-level/TestTopLevelExprs.py)
>> UNEXPECTED SUCCESS:
>> TestExitDuringBreak.ExitDuringBreakpointTestCase.test_dwarf
>> (functionalities/thread/exit_during_break/TestExitDuringBreak.py)
>> UNEXPECTED SUCCESS:
>> TestExitDuringBreak.ExitDuringBreakpointTestCase.test_dwo
>> (functionalities/thread/exit_during_break/TestExitDuringBreak.py)
>> UNEXPECTED SUCCESS:
>> TestThreadStates.ThreadStateTestCase.test_process_interrupt_dwarf
>> (functionalities/thread/state/TestThreadStates.py)
>> UNEXPECTED SUCCESS: TestRaise.RaiseTestCase.test_restart_bug_dwarf
>> (functionalities/signal/raise/TestRaise.py)
>> UNEXPECTED SUCCESS: TestRaise.RaiseTestCase.test_restart_bug_dwo
>> (functionalities/signal/raise/TestRaise.py)
>> UNEXPECTED SUCCESS:
>> TestMultithreaded.SBBreakpointCallbackCase.test_sb_api_listener_resume_dwarf
>> (api/multithreaded/TestMultithreaded.py)
>> UNEXPECTED SUCCESS:
>> TestMultithreaded.SBBreakpointCallbackCase.test_sb_api_listener_resume_dwo
>> (api/multithreaded/TestMultithreaded.py)
>> UNEXPECTED SUCCESS: lldbsuite.test.lldbtest.TestPrintf.test_with_dwarf
>> (lang/cpp/printf

Re: [lldb-dev] Linux issues where I am not getting breakpoints...

2017-04-13 Thread Tamas Berghammer via lldb-dev
I saw a similar issue when trying to debug an application with a lot of
shared libraries (1000+), and in that case the problem was that lldb-server
was too slow to respond, which caused a connection timeout in lldb.
Increasing plugin.process.gdb-remote.packet-timeout fixed the problem for
me, but it would be great if we could make the jModulesInfo packet faster
in lldb-server.

Tamas

On Wed, Apr 12, 2017 at 11:33 PM Greg Clayton  wrote:

> So the issue is with jModulesInfo. If it is too large we end up losing
> connection. Not sure if this is on the send or receive side yet. But if I
> comment out support for this packet, my debug sessions works just fine.
>
> Greg
>
> On Apr 12, 2017, at 10:42 AM, Greg Clayton  wrote:
>
> What I now believe is happening is lldb-server is exiting for some reason
> and then the process just runs and still shows the output in LLDB because
> we hooked up the STDIO. I see lldb-server exits with an exit code of 0, but
> no command had been sent to terminate it. I will track that down.
>
> Also, log_channels in lldb-gdbserver.cpp is using a llvm::StringRef
> incorrectly:
>
> case 'c': // Log Channels
>   if (optarg && optarg[0])
> log_channels = StringRef(optarg);
>   break;
>
> Bad! This is exactly when we shouldn't be using llvm::StringRef. optarg is
> a static variable and can change if there are any arguments after "-c
> ".
>
> Greg
>
> On Apr 12, 2017, at 10:05 AM, Tamas Berghammer 
> wrote:
>
> If the process is restarted by lldb-server then "posix ptrace" should have
> some indication about it. Also "posix process" and "posix thread" can be
> useful to understand the bigger picture (all of them in lldb-server).
>
> Note: You can enable them by setting LLDB_SERVER_LOG_CHANNELS
> and LLDB_DEBUGSERVER_LOG_FILE environment variables before starting lldb.
>
> On Wed, Apr 12, 2017 at 5:11 PM Greg Clayton  wrote:
>
> What is actually happening is we are stopped and handling the
> EntryBreakpoint and are in the process of trying to load all shared
> libraries, and then a signal (I am guessing) comes into the lldb-server and
> causes the target to resume. Not sure if that is due to the signal passing
> packet:
>
>
> $QPassSignals:0e;1b;20;21;22;23;24;25;26;27;28;29;2a;2b;2c;2d;2e;2f;30;31;32;33;34;35;36;37;38;39;3a;3b;3c;3d;3e;3f;40#69
>
> that gets sent these days. I will try removing this and seeing if it fixes
> anything.
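For readers unfamiliar with the wire format: gdb-remote packets such as the QPassSignals one above are framed as `$<payload>#<checksum>`, where the checksum is the payload's byte sum modulo 256 printed as two lowercase hex digits. A minimal sketch of the framing (real packets additionally need escape and run-length handling for binary payloads):

```python
def frame_packet(payload: str) -> str:
    """Frame a gdb-remote payload as $<payload>#<two-hex-digit checksum>.
    The checksum covers only the bytes between '$' and '#'."""
    checksum = sum(payload.encode("ascii")) % 256
    return f"${payload}#{checksum:02x}"
```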
>
> Is there any logging I can enable in lldb-server to catch the resume? I
> haven't looked at the code but I finally proved what was happening last
> night (target resumes while we are stopped at a breakpoint somehow). The
> program runs and exits and when the shared libraries are finally done
> loading, there is no connection to speak to.
>
> Greg
>
> On Apr 11, 2017, at 8:26 AM, Pavel Labath  wrote:
>
>
>
> On 11 April 2017 at 15:56, Greg Clayton  wrote:
>
>
> On Apr 11, 2017, at 5:33 AM, Pavel Labath  wrote:
>
> Are you sure this is not just an artifact of stdio buffering? I tried the
> same experiment, but I placed a real log statement, and I could see that
> all the LoadModuleAtAddress calls happen between the $T and $c packets in
> the gdb-remote packet sequence.
>
> The module loading should be synchronous, so I think the problem lies
> elsewhere.
>
> What is the nature of the breakpoint that is not getting hit? Can you
> provide a repro case? The only bug like this that I am aware of is that we
> fail to hit breakpoints in global constructors in shared libraries, but
> that hasn't worked even in 3.8..
>
>
> I unfortunately can't attach a repro case. I will be able to track this
> down, just need some pointers. I did notice that I wasn't able to hit
> breakpoints in global constructors though... Do we know why? On Mac, we get
> notified of shared libraries as they load so we never miss anything. Why
> are we not able to get the same thing with linux?
>
>
> It looks like we are intercepting the library load too late, but I haven't
> investigated yet how to fix it. It's definitely possible (this works fine
> in gdb), but I don't know how, as the dynamic linker is still a big unknown
> to me. FWIW, I think I'll be messing with the dynamic loader plugin
> soon(ish), so I'll try to fix this then.
>
> pl
>
>
>
>
>


Re: [lldb-dev] Linux issues where I am not getting breakpoints...

2017-04-12 Thread Tamas Berghammer via lldb-dev
If the process is restarted by lldb-server then "posix ptrace" should have
some indication about it. Also "posix process" and "posix thread" can be
useful to understand the bigger picture (all of them in lldb-server).

Note: You can enable them by setting LLDB_SERVER_LOG_CHANNELS
and LLDB_DEBUGSERVER_LOG_FILE environment variables before starting lldb.

On Wed, Apr 12, 2017 at 5:11 PM Greg Clayton  wrote:

> What is actually happening is we are stopped and handling the
> EntryBreakpoint and are in the process of trying to load all shared
> libraries, and then a signal (I am guessing) comes into the lldb-server and
> causes the target to resume. Not sure if that is due to the signal passing
> packet:
>
>
> $QPassSignals:0e;1b;20;21;22;23;24;25;26;27;28;29;2a;2b;2c;2d;2e;2f;30;31;32;33;34;35;36;37;38;39;3a;3b;3c;3d;3e;3f;40#69
>
> that gets sent these days. I will try removing this and seeing if it fixes
> anything.
>
> Is there any logging I can enable in lldb-server to catch the resume? I
> haven't looked at the code but I finally proved what was happening last
> night (target resumes while we are stopped at a breakpoint somehow). The
> program runs and exits and when the shared libraries are finally done
> loading, there is no connection to speak to.
>
> Greg
>
> On Apr 11, 2017, at 8:26 AM, Pavel Labath  wrote:
>
>
>
> On 11 April 2017 at 15:56, Greg Clayton  wrote:
>
>
> On Apr 11, 2017, at 5:33 AM, Pavel Labath  wrote:
>
> Are you sure this is not just an artifact of stdio buffering? I tried the
> same experiment, but I placed a real log statement, and I could see that
> all the LoadModuleAtAddress calls happen between the $T and $c packets in
> the gdb-remote packet sequence.
>
> The module loading should be synchronous, so I think the problem lies
> elsewhere.
>
> What is the nature of the breakpoint that is not getting hit? Can you
> provide a repro case? The only bug like this that I am aware of is that we
> fail to hit breakpoints in global constructors in shared libraries, but
> that hasn't worked even in 3.8..
>
>
> I unfortunately can't attach a repro case. I will be able to track this
> down, just need some pointers. I did notice that I wasn't able to hit
> breakpoints in global constructors though... Do we know why? On Mac, we get
> notified of shared libraries as they load so we never miss anything. Why
> are we not able to get the same thing with linux?
>
>
> It looks like we are intercepting the library load too late, but I haven't
> investigated yet how to fix it. It's definitely possible (this works fine
> in gdb), but I don't know how, as the dynamic linker is still a big unknown
> to me. FWIW, I think I'll be messing with the dynamic loader plugin
> soon(ish), so I'll try to fix this then.
>
> pl
>
>
>


Re: [lldb-dev] Linux issues where I am not getting breakpoints...

2017-04-11 Thread Tamas Berghammer via lldb-dev
See https://bugs.llvm.org/show_bug.cgi?id=25806 for details about why we
can't set breakpoints in static initializers (it is an LLDB bug).

For your investigation, a few pointers/guesses (assuming it is not some
stdout display issue, which I consider unlikely based on your description):
* Does your application call dlopen? That could explain why you see stdout
before some library load events, and I can also imagine more issues in that
code path.
* Are you sure LoadModuleAtAddress is called from LoadAllCurrentModules in
all 4 cases? It can also be called from RefreshModules, which is used when
we get notified about a new library, and I expect that to be more likely
based on the output (for the second 2 lines).
* I suggest stopping at libc.so`_start and seeing what libraries are loaded
there (I expect it to be after the first 2 log lines and before the
stdout). Verify that the shared-library-event breakpoint is resolved at
this point (via "breakpoint list -i") and also set a manual breakpoint
there to see when it triggers. I expect you will hit that breakpoint just
after your log lines are displayed. If that is the case you should get a
backtrace and see the call stack causing the event to be triggered.
* The libraries to look out for in the log are libc.so and
ld-linux-x86-64.so (or similar). The dynamic loader integration only works
after these 2 libraries are loaded.

On Tue, Apr 11, 2017 at 3:56 PM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> On Apr 11, 2017, at 5:33 AM, Pavel Labath  wrote:
>
> Are you sure this is not just an artifact of stdio buffering? I tried the
> same experiment, but I placed a real log statement, and I could see that
> all the LoadModuleAtAddress calls happen between the $T and $c packets in
> the gdb-remote packet sequence.
>
> The module loading should be synchronous, so I think the problem lies
> elsewhere.
>
> What is the nature of the breakpoint that is not getting hit? Can you
> provide a repro case? The only bug like this that I am aware of is that we
> fail to hit breakpoints in global constructors in shared libraries, but
> that hasn't worked even in 3.8..
>
>
> I unfortunately can't attach a repro case. I will be able to track this
> down, just need some pointers. I did notice that I wasn't able to hit
> breakpoints in global constructors though... Do we know why? On Mac, we get
> notified of shared libraries as they load so we never miss anything. Why
> are we not able to get the same thing with linux?
>
>
>
> On 10 April 2017 at 22:51, Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> I have added some logging to a program that is not hitting breakpoints
> with LLDB top of tree SVN. An older lldb 3.8 hits the breakpoint just fine.
> I placed some logging in LLDB:
>
> ModuleSP DynamicLoader::LoadModuleAtAddress(const FileSpec &file,
>                                             addr_t link_map_addr,
>                                             addr_t base_addr,
>                                             bool base_addr_is_offset) {
>   printf("%s: lma = 0x%16.16llx, ba = 0x%16.16llx, baio = %i\n",
>          file.GetPath().c_str(), link_map_addr, base_addr, base_addr_is_offset);
>
>
> This is called by DynamicLoaderPOSIXDYLD::LoadAllCurrentModules().
>
> My problem is I see:
>
> [vdso]: lma = 0x, ba = 0x77ffa000, baio = 0
> linux-vdso.so.1: lma = 0x77ffe6e0, ba = 0x77ffa000, baio =
> 1
> /tmp/liba.so: lma = 0x77ff66a8, ba = 0x77e3, baio = 1
> 8 locations added to breakpoint 1
> /tmp/libb.so: lma = 0x77e2f000, ba = 0x77d43000, baio = 1
> [==] Running 14 tests from 1 test case.
> [--] Global test environment set-up.
> [--] 14 tests from MyTest
> [ RUN  ] MyTest.Test1
> [   OK ] MyTest.Test1 (0 ms)
> /tmp/libc.so: lma = 0x77e2f000, ba = 0x77d43000, baio = 1
> /tmp/libd.so: lma = 0x77e2f000, ba = 0x77d43000, baio = 1
>
>
> Note that I see program output _during_ the messages that are showing that
> shared libraries are being loaded? I would assume we are loading shared
> libraries synchronously, but the log seems to indicated otherwise.
>
> If anyone knows anything on this subject please let me know...
>
> Greg Clayton


Re: [lldb-dev] std::vector formatter question

2017-03-24 Thread Tamas Berghammer via lldb-dev
The libstdc++ one is defined in examples/synthetic/gnu_libstdcpp.py while
the libc++ one is defined in
source/Plugins/Language/CPlusPlus/LibCxxVector.cpp, and both of them are
registered in source/Plugins/Language/CPlusPlus/CPlusPlusLanguage.cpp by
specifying a type name regex to identify the affected types. If you have a
custom STL or any other custom type, I suggest you write a synthetic child
provider in Python (see https://lldb.llvm.org/varformats.html) as it can be
loaded at runtime, so you don't have to fork LLDB.
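For a Dinkumware-style vector, such a provider might look roughly like the skeleton below. The `_Myfirst`/`_Mylast` member names are assumptions about the layout (check what `frame variable --raw` actually prints for your implementation); the `GetChildMemberWithName`/`GetValueAsUnsigned`/`CreateChildAtOffset` calls are the standard lldb SB API used by the shipped providers:

```python
class VectorChildrenProvider:
    """Sketch of a synthetic child provider for a vector-like type.
    Member names _Myfirst/_Mylast are assumed; adjust to your layout."""

    def __init__(self, valobj, internal_dict=None):
        self.valobj = valobj
        self.update()

    def update(self):
        # Begin/end pointers of the vector's element storage.
        self.first = self.valobj.GetChildMemberWithName("_Myfirst")
        self.last = self.valobj.GetChildMemberWithName("_Mylast")
        self.elem_type = self.first.GetType().GetPointeeType()
        self.elem_size = self.elem_type.GetByteSize()

    def num_children(self):
        if not self.elem_size:
            return 0
        span = self.last.GetValueAsUnsigned() - self.first.GetValueAsUnsigned()
        return span // self.elem_size

    def get_child_at_index(self, index):
        # Materialize element [index] at its offset from the data pointer.
        offset = index * self.elem_size
        return self.first.CreateChildAtOffset(
            "[%d]" % index, offset, self.elem_type)
```

You would load it with something like `command script import provider.py` followed by `type synthetic add -l provider.VectorChildrenProvider -x "^MyVector<.+>$"` (regex and module name here are placeholders).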

Tamas

On Fri, Mar 24, 2017 at 4:30 PM Ted Woodward via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> On standalone Hexagon (no OS support), we use Dinkumware for the c/c++
> library. LLDB isn't able to print out values of a vector:
>
> Process 1 stopped
> * thread #1: tid = 0x0001, 0x519c vector.elf`main + 76 at vector.cpp:10,
>   stop reason = step in
>   frame #0: 0x519c vector.elf`main + 76 at vector.c:10
>    7    vector<int> v;
>    8    v.push_back(2);
>    9    v.push_back(1);
> -> 10   cout << v[0] << " " << v[1] << endl;
>    11   return 0;
>    12   }
> (lldb) fr v v
> (std::vector<int, std::allocator<int> >) v = size=0 {}
>
> When I run on x86 linux built with gcc, I get:
> (lldb) fr v v
> (std::vector<int, std::allocator<int> >) v = size=2 {
>   [0] = 2
>   [1] = 1
> }
>
>
> My guess is Dinkumware's vector type is just a little bit different from
> libstdc++/libcxx, so the standard formatters don't do the right thing.
> Where
> are the vector formatters defined, and how does LLDB determine which
> one to use?
>
>
> --
> Qualcomm Innovation Center, Inc.
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
> Linux Foundation Collaborative Project
>
>


Re: [lldb-dev] C++ method declaration parsing

2017-03-16 Thread Tamas Berghammer via lldb-dev
A random idea: instead of parsing demangled C++ method names, what do
people think about writing or reusing a demangler that can give back both
the demangled name and the parsed name in some form?

My guess is that it would be both more efficient (we already have most of
the information during demangling) and possibly easier to implement, as I
expect fewer edge cases. Additionally, I think it would be a nice library
to have as part of the LLVM project.
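To make the flavour of the problem concrete: the hard part of zero-context parsing is things like `::` tokens nested inside template argument lists, which a structure-preserving demangler (or a real parser) handles by tracking template depth. A toy sketch of just that step — splitting a qualified name into context and basename — follows; this is a hypothetical helper, not lldb's MethodName code, and it deliberately ignores `operator<`, function-pointer return types, and other cases a real parser must handle:

```python
def split_context(qualified_name):
    """Split 'A::B<C::D>::f' into ('A::B<C::D>', 'f'), ignoring any '::'
    that appears inside template argument lists."""
    depth = 0       # current template-bracket nesting depth
    split_at = -1   # index of the last top-level '::'
    i = 0
    while i < len(qualified_name):
        c = qualified_name[i]
        if c == '<':
            depth += 1
        elif c == '>':
            depth -= 1
        elif depth == 0 and qualified_name.startswith('::', i):
            split_at = i
            i += 2
            continue
        i += 1
    if split_at < 0:
        return '', qualified_name  # unqualified name, e.g. a free function
    return qualified_name[:split_at], qualified_name[split_at + 2:]
```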

Tamas

On Thu, Mar 16, 2017 at 2:43 AM Eugene Zemtsov via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Yes, it's a good idea to add cfe-dev.
> It is totally possible that I overlooked something and clang can help with
> this kind of superficial parsing.
>
> As far as I can see, even clang-format does its own parsing
> (UnwrappedLineParser.cpp), and clang-format has a very similar need to
> roughly understand code without knowing any context.
>
> > are you certain that clang's parser would be unacceptably slow?
>
> I don't have any perf numbers to back it up, but it does look like a lot
> of clang infrastructure needs to be set up before actual parsing begins.
> (see lldb_private::ClangExpressionParser). It's not important though, as at
> this stage I don't see how we can reuse clang at all.
>
>
>
> On Wed, Mar 15, 2017 at 5:03 PM, Zachary Turner 
> wrote:
>
> If there is any way to re-use clang parser for this, it would be
> wonderful.  Even if it means adding support to clang for whatever you need
> in order to make it possible.  You mention performance, are you certain
> that clang's parser would be unacceptably slow?
>
> +cfe-dev as they may have some more input on what it would take to extend
> clang to make this possible.
>
> On Wed, Mar 15, 2017 at 4:48 PM Eugene Zemtsov via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Hi, Everyone.
>
> Current implementation of CPlusPlusLanguage::MethodName::Parse() doesn't
> cover full extent of possible function declarations,
> or even declarations returned by abi::__cxa_demangle.
>
> Consider this code:
> --
>
> #include <cstdio>
> #include <functional>
> #include <vector>
>
> void func() {
>   printf("func() was called\n");
> }
>
> struct Class
> {
>   Class() {
> printf("ctor was called\n");
>   }
>
>   Class(const Class& c) {
> printf("copy ctor was called\n");
>   }
>
>   ~Class() {
> printf("dtor was called\n");
>   }
> };
>
>
> int main() {
>   std::function<void()> f = func;
>   f();
>
>   Class c;
>   std::vector<Class> v;
>   v.push_back(c);
>
>   return 0;
> }
>
> --
>
> When compiled, it has at least two symbols that currently cannot be
> correctly parsed by MethodName::Parse():
>
> void std::vector<Class, std::allocator<Class> >::_M_emplace_back_aux<Class
> const&>(Class const&)
> void (*const&std::_Any_data::_M_access<void (*)()>() const)() - a template
> function that returns a reference to a function pointer.
>
> This causes incorrect behavior in avoid-stepping and sometimes messes up
> the printing of thread backtraces.
>
> I would like to solve this issue, but the current implementation of method
> name parsing doesn't seem sustainable.
> Clever substrings and regexes are fine for trivial cases, but they become a
> nightmare once we consider more complex cases.
> That's why I'd like to have code that follows some kind of grammar
> describing function declarations.
>
> As I see it, choices for new implementation of MethodName::Parse() are
> 1. Reuse clang parsing code.
> 2. Parser generated by bison.
> 3. Handwritten recursive descent parser.
>
> I looked at option #1, and it appears to be impossible to reuse the clang
> parser for this kind of zero-context parsing,
> especially given that we care about the performance of this code. The clang
> C++ lexer, on the other hand, can be reused.
>
> Option #2: Using bison is tempting, but it would require introducing a
> new compile-time dependency.
> That might be especially inconvenient on Windows.
>
> That's why I think option #3 is the way to go: a recursive descent parser
> that reuses the C++ lexer from clang.
>
> LLDB doesn't need to parse everything (e.g. we don't care about the details
> of function arguments), but it needs to be able to handle tricky return
> types and base names.
> Eventually new implementation should be able to parse signature of every
> method generated by STL.
>
> Before starting implementation, I'd love to get some feedback. It might be
> that I'm overlooking something important.
>
> --
> Thanks,
> Eugene Zemtsov.
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
>
>
> --
> Thanks,
> Eugene Zemtsov.
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] DWARF v5 unit headers

2017-02-28 Thread Tamas Berghammer via lldb-dev
As far as I know the only DWARF v5 functionality currently implemented in
LLDB is the split DWARF support, so I don't expect it to work with the new
DWARF v5 data; but as long as clang emits DWARF v4 (or older) by default it
shouldn't cause any immediate problem with the test suite (we will still
have to teach LLDB to handle DWARF v5).

For the future changes: when you start to emit the new DWARF v5 tag and
form values instead of the current GNU extension tag and form values for
split DWARF, and the related new DW_FORMs, we will have to teach LLDB
to understand them (currently we expect only the GNU versions), so a heads
up for that change would be appreciated. Other than this I expect no issues
with the addition of DWARF v5 support to LLDB.

Tamas

On Tue, Feb 28, 2017 at 5:25 AM Robinson, Paul via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I'm planning to commit a patch (http://reviews.llvm.org/D30206) which will
> cause Clang/LLVM to emit correct unit headers if you ask for version 5.
> I've run the lldb tests and I *think* I pointed to my modified Clang
> correctly (cmake with -DLLDB_TEST_COMPILER=/my/clang) and AFAICT it does
> not introduce new problems.
> I saw 3 Failure and 12 Error with or without the patch.
> (One Expected Failure seems to have become an Unexpected Success. Haven't
> tried to decipher logs to figure out which one yet.)
>
> If anybody can predict a problem with my patch, please let me know by
> noon Pacific time (2000 GMT) tomorrow (28th).
>
> We're going to be doing more work implementing various bits of DWARF v5
> in the coming months.  If anybody thinks they can predict that there are
> particular bits that would be especially problematic for LLDB, it would
> be useful to know up front which bits those are.
>
> Thanks
> --paulr
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Debugging ELF relocatable files using LLDB

2017-02-22 Thread Tamas Berghammer via lldb-dev
I uploaded a CL for review that fixes the crash you are experiencing at
https://reviews.llvm.org/D30251 (we are mapping the files into memory as
read-only and then trying to write into them), but I think nobody tests LLDB
with relocatable object files, so you might run into a lot of bugs along the
way. I also suggest switching to a newer version of LLDB (preferably ToT),
as 3.6 is fairly old and known to have a lot of bugs on Linux and on ARM.

Tamas

On Wed, Feb 22, 2017 at 4:56 AM Ramana via lldb-dev 
wrote:

> On Thu, Feb 16, 2017 at 10:26 PM, Greg Clayton  wrote:
> >
> > On Feb 16, 2017, at 3:51 AM, Ramana via lldb-dev <
> lldb-dev@lists.llvm.org>
> > wrote:
> >
> > It looks like LLDB doesn't like ELF relocatable files for debugging
> > and asserts with the following message when tried
> >
> > /lldb/source/Plugins/ObjectFile/ELF/ObjectFileELF.cpp:2228:
> > unsigned int ObjectFileELF::RelocateSection(.):  Assertion `false
> > && "unexpected relocation type"' failed.
> >
> > Are we not supposed to debug ELF relocatable files on LLDB or am I
> > missing something?
> >
> > If we cannot debug the relocatable files, is it _simply_ because those
> > files lack program headers (program memory map) and relocations are
> > yet to be processed (for debug info) or there are other reasons?
> >
> > For our target, the assembler output itself is a self contained ELF
> > and hence will not have external references (both code and data). I am
> > wondering if I can debug these ELF files on LLDB with minimal changes
> > which does not require a full (or proper) linking step and would
> > appreciate any pointers on that.
> >
> > Thanks,
> > Ramana
> >
> >
> > Looks like you just need to add support for the 32 bit relocations:
> >
> >
> > if (hdr->Is32Bit()) {
> >   switch (reloc_type(rel)) {
> >   case R_386_32:
> >   case R_386_PC32:
> >   default:
> > assert(false && "unexpected relocation type");
> >   }
> > } else {
> >   switch (reloc_type(rel)) {
> >   case R_X86_64_64: {
> > symbol = symtab->FindSymbolByID(reloc_symbol(rel));
> > if (symbol) {
> >   addr_t value = symbol->GetAddressRef().GetFileAddress();
> >   DataBufferSP &data_buffer_sp =
> debug_data.GetSharedDataBuffer();
> >   uint64_t *dst = reinterpret_cast<uint64_t *>(
> >   data_buffer_sp->GetBytes() + rel_section->GetFileOffset() +
> >   ELFRelocation::RelocOffset64(rel));
> >   *dst = value + ELFRelocation::RelocAddend64(rel);
> > }
> > break;
> >   }
> >   case R_X86_64_32:
> >   case R_X86_64_32S: {
> > symbol = symtab->FindSymbolByID(reloc_symbol(rel));
> > if (symbol) {
> >   addr_t value = symbol->GetAddressRef().GetFileAddress();
> >   value += ELFRelocation::RelocAddend32(rel);
> >   assert(
> >   (reloc_type(rel) == R_X86_64_32 && (value <= UINT32_MAX))
> ||
> >   (reloc_type(rel) == R_X86_64_32S &&
> >((int64_t)value <= INT32_MAX && (int64_t)value >=
> > INT32_MIN)));
> >   uint32_t truncated_addr = (value & 0xFFFFFFFF);
> >   DataBufferSP &data_buffer_sp =
> debug_data.GetSharedDataBuffer();
> >   uint32_t *dst = reinterpret_cast<uint32_t *>(
> >   data_buffer_sp->GetBytes() + rel_section->GetFileOffset() +
> >   ELFRelocation::RelocOffset32(rel));
> >   *dst = truncated_addr;
> > }
> > break;
> >   }
> >   case R_X86_64_PC32:
> >   default:
> > assert(false && "unexpected relocation type");
> >   }
> > }
> >
> >
> > I am guessing you will do something similar to the x86-64 stuff.
>
> I tried to mimic the x86_64 relocations handling for our target but
> getting segmentation fault while trying to write to the 'dst'
> location.
>
> In fact, the x86_64 also segfaults while trying to write to 'dst'
> location. I just tried to debug the following simple program for
> x86_64.
>
> main.c:
> int main () {
>return 0;
> }
>
> $ clang main.c -o main_64b.o --target=x86_64 -c -g
> $  lldb main_64b.o
> (lldb) target create "main_64b.o"
> Current executable set to 'main_64b.o' (x86_64).
> (lldb) source list
> Segmentation fault (core dumped)
>
> Am I doing something wrong or support for debugging the x86_64 ELF
> relocatable files using LLDB is broken?
>
> BTW, I am using LLVM v3.6 and LLDB v3.6.
>
> Regards,
> Ramana
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB failed to locate source when dwarf symbols are inside compile unit on Linux

2017-01-12 Thread Tamas Berghammer via lldb-dev
Hi Jeffrey,

For the source code locating issue: based on your info, my guess is that LLDB
isn't able to resolve the relative file name path specified in your
symbol files to the absolute path required to load the file from disk. Can
you try running "target modules dump line-table <file-name>", where the file
name is just the name of the file without any path? If the problem is what
I am guessing then you should see output like this (note the relative
path).
(lldb) target modules dump line-table s.cpp
Line table for ./foo/s.cpp in `a.out
0x00400a0d: ./foo/s.cpp:3
0x00400a1a: ./foo/s.cpp:4
0x00400a58: ./foo/s.cpp:4
0x00400a64: ./foo/s.cpp:5
0x00400a93: ./foo/s.cpp:6
0x00400a9e: ./foo/s.cpp:6
...

The above problem can be worked around either by running LLDB with a
current working directory that the path displayed by "target modules
dump line-table" is relative to, or by setting up a directory remapping for
that path using "settings set target.source-map ./ <new-path>".

Tamas

On Mon, Jan 9, 2017 at 11:55 PM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> In ELF files if there is a section named “.gnu_debuglink” it will contain
> a path to the external debug file. Dump this section and see what it
> contains. This section contains a null terminated C string as the path
> followed by a 4 byte aligned 32 bit integer which is a file CRC. Check to
> see the path is relative.
>
> I am guessing this is your problem.
>
> Greg
>
>
> On Jan 9, 2017, at 3:42 PM, Jeffrey Tan  wrote:
>
> Hey Greg, I just confirmed this with our build team. I seem to have
> misunderstood the location of debug symbol. It is actually not inside each
> individual object file but:
> The debug info in dev mode sits in the .debug_* sections of the shared
> libraries (we don't use debug fission).
> One potential complicating factor is that we relativize the 
> DW_AT_comp_dirattributes
> in the DWARF info, so that it's almost always just a long reference to the
> current working directory (e.g. .///).
>
> I do not know why this(symbol in shared library) would cause the bug
> though.
>
> Jeffrey
>
> On Mon, Jan 9, 2017 at 1:57 PM, Greg Clayton  wrote:
>
> Comments below.
>
> On Jan 9, 2017, at 1:10 PM, Jeffrey Tan via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Hi,
>
> Our company is using Buck (https://buckbuild.com/) to build internal
> services. Recently the build team made a change in Buck to not merge DWARF
> symbols from each object file into the final binary, so the debugger needs
> to read the source/symbol table from the compilation unit itself.
>
>
> How are debug symbols expected to be found? Is fission being used where
> the DWARF for each compile unit is in .dwo files and the main executable
> has skeleton DWARF? I will skip all other questions until we know more
> about how and where the DWARF is.
>
> Greg Clayton
>
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] A problem with the arm64 unwind plans I'm looking at

2016-11-09 Thread Tamas Berghammer via lldb-dev
Based on your comments I have one more idea for a good heuristic. What if
we detect a dynamic branch (e.g. "br <reg>", "tbb ...", etc.) and store
the register state for that place. Then when we find a block with no unwind
info for the first instruction then we use the one we saved for the dynamic
branch (as we know that the only way that block can be reached is through a
dynamic branch). If there is exactly one dynamic branch in the code then this
should give us the "perfect" result, while if we have multiple dynamic
branches then we will pick one "randomly"; but for compiler-generated code I
think it will be good enough. The only tricky case is if we fail to detect
the dynamic branch but that should be easy to fix as we already track every
branch on ARM (for single stepping) and doing it on AArch64 should be easy
as well.

On Tue, Nov 8, 2016 at 11:10 PM Jason Molenda  wrote:

> Yeah I was thinking that maybe if we spot an epilogue instruction (ret, b
> <target>), and the next instruction doesn't have a reinstated
> register context, we could backtrack to the initial register context of
> this block of instructions (and if it's not the beginning of the function),
> re-instate that register context for the next instruction.
>
> It doesn't help if we have a dynamic dispatch after the initial part of
> the function.  For that, we'd need to do something like your suggestion of
> finding the biggest collection of register saves.
>
> e.g. if I rearrange/modify my example function a little to make it more
> interesting (I didn't fix up the +offsets)
>
> prologue:
> > 0x17df0 <+0>:   stp    x22, x21, [sp, #-0x30]!
> > 0x17df4 <+4>:   stp    x20, x19, [sp, #0x10]
> > 0x17df8 <+8>:   stp    x29, x30, [sp, #0x20]
> > 0x17dfc <+12>:  add    x29, sp, #0x20           ; =0x20
>
> direct branch:
> > 0x17e1c <+44>:  cmp    w20, #0x1d               ; =0x1d
> > 0x17e20 <+48>:  b.hi   0x17e4c                  ; <+92> { block #3 }
>
> dynamic dispatch:
> > 0x17e24 <+52>:  adr    x9, #0x90                ; switcher + 196
> > 0x17e28 <+56>:  nop
> > 0x17e2c <+60>:  ldrsw  x8, [x9, x8, lsl #2]
> > 0x17e30 <+64>:  add    x8, x8, x9
> > 0x17e34 <+68>:  br     x8
>
> block #1
> > 0x17e9c <+172>: sxtw   x8, w19
> > 0x17ea0 <+176>: str    x8, [sp]
> > 0x17ea4 <+180>: adr    x0, #0x10f               ; "%c\n"
> > 0x17ea8 <+184>: nop
> > 0x17eac <+188>: bl     0x17f64                  ; symbol stub for: printf
> > 0x17e70 <+128>: sub    sp, x29, #0x20           ; =0x20
> > 0x17e74 <+132>: ldp    x29, x30, [sp, #0x20]
> > 0x17e78 <+136>: ldp    x20, x19, [sp, #0x10]
> > 0x17e7c <+140>: ldp    x22, x21, [sp], #0x30
> > 0x17eb0 <+192>: b      0x17f4c                  ; symbol stub for: abort
>
> block #2
> > 0x17e38 <+72>:  sub    sp, x29, #0x20           ; =0x20
> > 0x17e3c <+76>:  ldp    x29, x30, [sp, #0x20]
> > 0x17e40 <+80>:  ldp    x20, x19, [sp, #0x10]
> > 0x17e44 <+84>:  ldp    x22, x21, [sp], #0x30
> > 0x17e48 <+88>:  ret
>
>
> block #3
> > 0x17e4c <+92>:  add    w0, w0, #0x1             ; =0x1
> > 0x17e50 <+96>:  b      0x17e38                  ; <+72> at a.c:115
> > 0x17e54 <+100>: orr    w8, wzr, #0x7
> > 0x17e58 <+104>: str    x8, [sp, #0x8]
> > 0x17e5c <+108>: sxtw   x8, w19
> > 0x17e60 <+112>: str    x8, [sp]
> > 0x17e64 <+116>: adr    x0, #0x148               ; "%c %d\n"
> > 0x17e68 <+120>: nop
> > 0x17e6c <+124>: bl     0x17f64                  ; symbol stub for: printf
> > 0x17e70 <+128>: sub    sp, x29, #0x20           ; =0x20
> > 0x17e74 <+132>: ldp    x29, x30, [sp, #0x20]
> > 0x17e78 <+136>: ldp    x20, x19, [sp, #0x10]
> > 0x17e7c <+140>: ldp    x22, x21, [sp], #0x30
> > 0x17e80 <+144>: b      0x17f38                  ; f3 at b.c:4
>
> block #4
> > 0x17e38 <+72>:  sub    sp, x29, #0x20           ; =0x20
> > 0x17e3c <+76>:  ldp    x29, x30, [sp, #0x20]
> > 0x17e40 <+80>:  ldp    x20, x19, [sp, #0x10]
> > 0x17e44 <+84>:  ldp    x22, x21, [sp], #0x30
> > 0x17e48 <+88>:  ret
>
> First, an easy one:  When we get to the first instruction of 'block #4',
> we've seen a complete epilogue ending in 'B other-function' and the first
> instruction of block #4 is not branched to.  If we find the previous direct
> branch target -- the first instruction of 'block #3', which was conditionally
> branched to -- we can reuse that register context for block #4.  This could
> easily go wrong for hand-written assembly where you might undo the stack
> state part-way and then branch to another part of the function.  But I
> doubt compiler-generated code is ever going to do that.
>
> Second, a trickier one: When we get to the first instruction of 'block
> #2', we have no previous branch t

Re: [lldb-dev] A problem with the arm64 unwind plans I'm looking at

2016-11-07 Thread Tamas Berghammer via lldb-dev
Hi Jason,

I thought about this situation when implemented the original branch
following code and haven't been able to come up with a really good solution.

My only idea is the same as what you mentioned. We should try to recognize
all unconditional branches and returns (but not calls), and then if the
following instruction doesn't have any unwind information yet (e.g. it hasn't
been a branch target so far) we try to find some reasonable unwind
info from the previous lines.

The difficult question is how to find the correct information. One possible
heuristic I have in mind is to try to find any call instruction inside the
function before the current PC and use the unwind info from there. The
reason I like this heuristic is that there won't be a call instruction
inside the prologue or epilogue, and on ARM, based on the ABI, every call
instruction has to have the same unwind info. Another possible alternative
(or a fallback if we don't have a call instruction) is to use the unwind
info line describing the highest number of registers. If multiple
lines describe the same number of registers then either use the earliest one
or the one with the fewest registers being set to IsSame, to avoid picking
something from an epilogue.

I don't think any of my suggestions are really good but I don't have any
better idea at the moment.

Tamas

On Sat, Nov 5, 2016 at 3:01 AM Jason Molenda  wrote:

> Hi Tamas & Pavel, I thought you might have some ideas so I wanted to show
> a problem I'm looking at right now.  The arm64 instruction unwinder
> forwards the unwind state based on branch instructions within the
> function.  So if one block of code ends in an epilogue, the next
> instruction (which is presumably a branch target) will have the correct
> original unwind state.  This change went in to
> UnwindAssemblyInstEmulation.cpp  mid-2015 in r240533 - the code it replaced
> was poorly written, we're better off with this approach.
>
> However I'm looking at a problem where clang will come up with a branch
> table for a bunch of case statements.  e.g. this function:
>
> 0x17df0 <+0>:   stp    x22, x21, [sp, #-0x30]!
> 0x17df4 <+4>:   stp    x20, x19, [sp, #0x10]
> 0x17df8 <+8>:   stp    x29, x30, [sp, #0x20]
> 0x17dfc <+12>:  add    x29, sp, #0x20           ; =0x20
> 0x17e00 <+16>:  sub    sp, sp, #0x10            ; =0x10
> 0x17e04 <+20>:  mov    x19, x1
> 0x17e08 <+24>:  mov    x20, x0
> 0x17e0c <+28>:  add    w21, w20, w20, lsl #2
> 0x17e10 <+32>:  bl     0x17f58                  ; symbol stub for: getpid
> 0x17e14 <+36>:  add    w0, w0, w21
> 0x17e18 <+40>:  mov    w8, w20
> 0x17e1c <+44>:  cmp    w20, #0x1d               ; =0x1d
> 0x17e20 <+48>:  b.hi   0x17e4c                  ; <+92> at a.c:112
> 0x17e24 <+52>:  adr    x9, #0x90                ; switcher + 196
> 0x17e28 <+56>:  nop
> 0x17e2c <+60>:  ldrsw  x8, [x9, x8, lsl #2]
> 0x17e30 <+64>:  add    x8, x8, x9
> 0x17e34 <+68>:  br     x8
> 0x17e38 <+72>:  sub    sp, x29, #0x20           ; =0x20
> 0x17e3c <+76>:  ldp    x29, x30, [sp, #0x20]
> 0x17e40 <+80>:  ldp    x20, x19, [sp, #0x10]
> 0x17e44 <+84>:  ldp    x22, x21, [sp], #0x30
> 0x17e48 <+88>:  ret
> 0x17e4c <+92>:  add    w0, w0, #0x1             ; =0x1
> 0x17e50 <+96>:  b      0x17e38                  ; <+72> at a.c:115
> 0x17e54 <+100>: orr    w8, wzr, #0x7
> 0x17e58 <+104>: str    x8, [sp, #0x8]
> 0x17e5c <+108>: sxtw   x8, w19
> 0x17e60 <+112>: str    x8, [sp]
> 0x17e64 <+116>: adr    x0, #0x148               ; "%c %d\n"
> 0x17e68 <+120>: nop
> 0x17e6c <+124>: bl     0x17f64                  ; symbol stub for: printf
> 0x17e70 <+128>: sub    sp, x29, #0x20           ; =0x20
> 0x17e74 <+132>: ldp    x29, x30, [sp, #0x20]
> 0x17e78 <+136>: ldp    x20, x19, [sp, #0x10]
> 0x17e7c <+140>: ldp    x22, x21, [sp], #0x30
> 0x17e80 <+144>: b      0x17f38                  ; f3 at b.c:4
> 0x17e84 <+148>: sxtw   x8, w19
> 0x17e88 <+152>: str    x8, [sp]
> 0x17e8c <+156>: adr    x0, #0x127               ; "%c\n"
> 0x17e90 <+160>: nop
> 0x17e94 <+164>: bl     0x17f64                  ; symbol stub for: printf
> 0x17e98 <+168>: bl     0x17f40                  ; f4 at b.c:7
> 0x17e9c <+172>: sxtw   x8, w19
> 0x17ea0 <+176>: str    x8, [sp]
> 0x17ea4 <+180>: adr    x0, #0x10f               ; "%c\n"
> 0x17ea8 <+184>: nop
> 0x17eac <+188>: bl     0x17f64                  ; symbol stub for: printf
> 0x17eb0 <+192>: bl     0x17f4c                  ; symbol stub for: abort
>
>
> It loads data from the jump table and branches to the correct block in the
> +52 .. +68 instructions.  We have epilogues at 88, 144, and 192.  And

Re: [lldb-dev] llvm changing line table info from DWARF 2 to DWARF 4

2016-10-20 Thread Tamas Berghammer via lldb-dev
Building LLDB with cmake is already supported on all operating systems
(including Darwin) for a while so that shouldn't be a blocker.

On Thu, Oct 20, 2016 at 8:09 PM Tim Hammerquist via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> IIRC, the only reason the LLDB python test suite uses the in-tree compiler
> (Scenario 1) was to test sanitizers before they were available in the
> system compiler. If that's the case, then using Xcode 8 on the builder will
> allow both the LLDB build and tests to use the system compiler.
>
> As I understand it, there are a few ways to go about building lldb using
> the ToT (or at least, last green) compiler. This approach will be of
> limited use until building lldb with cmake is supported, however. I'm
> following up on this timeline.
>
> -Tim
>
>
> On Thu, Oct 20, 2016 at 11:50 AM, Ted Woodward <
> ted.woodw...@codeaurora.org> wrote:
>
> I think a hardcoded value of 1 for maximum_operations_per_instruction will
> work like it does today – 1 linetable entry per Hexagon packet, which may
> have 1-4 instructions in it. Hexagon executes 1 packet at a time, so
> anywhere from 1-4 instructions at once.
>
>
>
> At O0, the compiler doesn’t packetize instructions, so 1 instruction is
> run at a time. At O1 it will, but it doesn’t do many other optimizations.
> We should still have 1 line per packet. O2 and O3 can move instructions
> around, so will have up to 4 source lines in 1 packet. I think we’ll need
> to experiment internally with what that means for the debugger, once we get
> this change.
>
>
>
> --
>
> Qualcomm Innovation Center, Inc.
>
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
> Linux Foundation Collaborative Project
>
>
>
> *From:* Eric Christopher [mailto:echri...@gmail.com]
> *Sent:* Wednesday, October 19, 2016 6:09 PM
> *To:* Tim Hammerquist 
> *Cc:* Greg Clayton ; Ted Woodward <
> ted.woodw...@codeaurora.org>; LLDB 
> *Subject:* Re: [lldb-dev] llvm changing line table info from DWARF 2 to
> DWARF 4
>
>
>
>
>
> On Wed, Oct 19, 2016 at 3:34 PM Tim Hammerquist  wrote:
>
> I was mistaken.
>
>
>
> The system toolchain builds stage1 llvm, clang & co.
>
> The system toolchain builds lldb containing the llvm/clang/etc bits.
>
> The system toolchain builds gtest test programs.
>
> The stage1 compiler builds the python test inferiors.
>
>
>
>
>
> OK, then it sounds like at least some of the test programs are built with
> the new compiler? IIRC the python test inferiors here are the programs that
> are the meat of the testsuite for lldb yes?
>
>
>
> If so, then on check-in we should possibly see some difference on some bot
> if they all use the same general configuration.  I don't have a current
> checkout so I don't know if the default -g is used or if it's set to a
> different dwarf level. Currently it looks like clang will use dwarf4 by
> default with -g:
>
>
>
> echristo@dzur ~/tmp> ~/builds/build-llvm/bin/clang -c foo.c -o - -target
> x86_64-apple-macosx10.11 -g | llvm-dwarfdump - | grep version | grep -v
> clang
>
> 0x00000000: Compile Unit: length = 0x00000037 version = 0x0004 abbr_offset
> = 0x0000 addr_size = 0x08 (next unit at 0x0000003b)
>
>  version: 2
>
>
>
> where the first line is the debug_info header and the second is the
> version in the line table.
>
>
>
> Ted/Greg: Relatedly, what brought this up was the VLIW aspect with
> maximum_operations_per_instruction
> - it's being hard-coded to 1 here and I'm not sure how we want to deal with
> that on Hexagon. Currently it'll be hard set to 1 so line stepping will
> work as I imagine it currently does. That said, if we wanted to take
> advantage of it then that's different. Primarily I wasn't sure if Ted and
> folks had a debugger that did take advantage of it if it was there.
>
>
>
> Thanks!
>
>
>
> -eric
>
>
>
>
>
> On Wed, Oct 19, 2016 at 3:28 PM, Eric Christopher 
> wrote:
>
>
>
> On Wed, Oct 19, 2016 at 3:26 PM Tim Hammerquist  wrote:
>
> The LLDB job in llvm.org will build a stage1 RA with
> llvm+clang+libcxx+compiler-rt using the system compiler, and use the new
> compiler to build lldb.
>
>
>
> By default, this is kicked off automatically when a clang stage1 RA is
> successful, but can be manually triggered to build HEAD, or any revision
> desired.
>
>
>
> The python test suite (invoked with the xcodebuild target
> lldb-python-test-suite) uses the newly built compiler to build its test
> programs.
>
>
>
>
> http://lab.llvm.org:8080/green/job/lldb_build_test/21202/consoleFull#console-section-4
>
>
>
> However, the gtest suite (target lldb-gtest) uses the system (Xcode
> toolchain) compiler to build test programs.
>
>
>
>
> http://lab.llvm.org:8080/green/job/lldb_build_test/21202/artifact/lldb/test_output.zip
>
>
>
>
>
> This seems like something that should be fixed :)
>
>
>
> -eric
>
>
>
>
>
> -Tim
>
>
>
> On Wed, Oct 19, 2016 at 2:36 PM, Eric Christopher 
> wrote:
>
> From chatting with Tim it sounds like at least one lldb bot uses the ToT
> compiler - we should pr

Re: [lldb-dev] Regenerating public API reference documentation

2016-10-20 Thread Tamas Berghammer via lldb-dev
As nobody had any objection, I committed the regeneration of the docs as
rL284725.

For generating it automatically: that would be great, but as far as I know
the LLDB docs and the LLVM docs are currently generated and distributed in
very different ways, so it might be challenging to integrate (in the case of
LLDB the HTML files are checked into the main lldb repository).

Tamas

On Fri, Oct 14, 2016 at 4:50 PM Mehdi Amini  wrote:


On Oct 14, 2016, at 6:44 AM, Tamas Berghammer via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

Hi All,

The current LLDB API reference documentation is available at
http://lldb.llvm.org/python_reference/
<http://lldb.llvm.org/python_reference/index.html> and at
http://lldb.llvm.org/cpp_reference/html/, but it hasn't been updated since
July 2013.

I am planning to regenerate it next week using "ninja lldb-cpp-doc
lldb-python-doc" (from a Linux machine using epydoc 3.0.1 and doxygen
1.8.6) to get them up to date. Is there any objection against it?

Additionally, in the future it would be great if we can keep the generated
docs more up to date after additions to the SB API, so users of LLDB can
rely on them.


There is a bot continuously updating http://llvm.org/docs/ ; ideally we
should be able to hook the other LLVM sub-projects there.

—
Mehdi
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Regenerating public API reference documentation

2016-10-14 Thread Tamas Berghammer via lldb-dev
Hi All,

The current LLDB API reference documentation is available at
http://lldb.llvm.org/python_reference/ and at
http://lldb.llvm.org/cpp_reference/html/, but it hasn't been updated since
July 2013.

I am planning to regenerate it next week using "ninja lldb-cpp-doc
lldb-python-doc" (from a Linux machine using epydoc 3.0.1 and doxygen
1.8.6) to get them up to date. Is there any objection against it?

Additionally, in the future it would be great if we can keep the generated
docs more up to date after additions to the SB API, so users of LLDB can
rely on them.

Tamas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Evolution

2016-08-28 Thread Tamas Berghammer via lldb-dev
You can grep for "  {$". With this regex I
see no false positives and 272 cases with 40 or more leading spaces.

On Sun, 28 Aug 2016, 17:59 Zachary Turner via lldb-dev, <
lldb-dev@lists.llvm.org> wrote:

> Here it is
>
>
> grep -n '^ \+' . -r -o | awk '{t=length($0);sub(" *$","");printf("%s%d\n",
> $0, t-length($0));}' | sort -t: -n -k 3 -r | awk 'BEGIN { FS = ":" } ; { if
> ($3 >= 50) print $0 }'
> On Sun, Aug 28, 2016 at 9:54 AM Zachary Turner  wrote:
>
>> I tried that, but most of the results (and there are a ton to wade
>> through) are function parameters that wrapped and align with the opening
>> paren on the next line.
>>
>> Earlier in the thread (i think it was this thread anyway) i posted a bash
>> incantation that will grep the source tree and return all lines with >= N
>> leading spaces sorted descending by number of leading spaces. The highest
>> was about 160 :)
>>
>> If you search lldb-dev for awk or sed you'll probably find it
>> On Sun, Aug 28, 2016 at 9:10 AM Chris Lattner  wrote:
>>
>>> Can you just grep for “^“ or something?
>>> That seems like a straight-forward way to find lines that have a ton of
>>> leading indentation.
>>>
>>> -Chris
>>>
>>> On Aug 27, 2016, at 9:28 AM, Zachary Turner  wrote:
>>>
>>> It will probably be hard to find all the cases.  Unfortunately
>>> clang-tidy doesn't have a "detect deep indentation" check, but that would
>>> be pretty useful, so maybe I'll try to add that at some point (although I
>>> doubt I can get to it before the big reformat).
>>>
>>> Finding all of the egregious cases before the big reformat will present
>>> a challenge, so I'm not sure if it's better to spend effort trying, or just
>>> deal with it as we spot code that looks bad because of indentation level.
>>>
>>> On Sat, Aug 27, 2016 at 9:24 AM Chris Lattner 
>>> wrote:
>>>
 On Aug 26, 2016, at 6:12 PM, Zachary Turner via lldb-dev <
 lldb-dev@lists.llvm.org> wrote:

 Back to the formatting issue, there's a lot of code that's going to
 look bad after the reformat, because we have some DEEPLY indented code.
 LLVM has adopted the early return model for this reason.  A huge amount of
 our deeply nested code could be solved by using early returns.


 FWIW, early returns are part of the LLVM Coding standard:

 http://llvm.org/docs/CodingStandards.html#use-early-exits-and-continue-to-simplify-code

 So it makes sense for LLDB to adopt this approach at some point.

 I don’t have an opinion about whether it happens before or after the
 "big reformat", but I guess I agree with your point that doing it would be
 good to do it for the most egregious cases before the reformat.

 -Chris

>>>
>>> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Add support for OCaml native debugging

2016-07-08 Thread Tamas Berghammer via lldb-dev
Can you upload your patches to http://reviews.llvm.org/differential/ as we
do all code reviews in that system?

Tamas

On Fri, Jul 8, 2016 at 10:53 AM E BOUTALEB via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> To be frank, I do not like that either. I could add the tests once the DWARF
> emission feature hits OCaml packages.
>
> Anyway, here are the patches. I would be fine with just the code review if
> the absence of tests is a bother.
> I ran check-lldb before and after applying the patches, and AFAIK I didn't
> introduce any regressions.
>
> Elias
>
> --
> From: tbergham...@google.com
> Date: Thu, 7 Jul 2016 13:23:41 +
>
> Subject: Re: [lldb-dev] Add support for OCaml native debugging
> To: e.bouta...@hotmail.fr; lldb-dev@lists.llvm.org
>
>
> What type of binaries do you want to commit in?
>
> Generally we don't like putting binaries into the repository because they
> are not human readable, so it is hard to review/diff them, and they will only
> run on a single platform and a single architecture while we support a lot
> of different configurations.
>
> Tamas
>
> On Wed, Jul 6, 2016 at 3:26 PM E BOUTALEB via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> I would like to submit two patches for code review.
> They introduce concrete support for OCaml native debugging, granted that
> you have access to the native compiler with DWARF emission support (see
> https://github.com/ocaml/ocaml/pull/574)
>
> This adds about 2000 lines of code.
> The type system isn't particularly complex here, every value is considered
> as an unsigned integer, and interpretation of the value is left to an
> external debugging layer made in OCaml.
> The language plugin handles function name demangling for breakpoints too.
>
> No tests for now. Is it fine to commit binaries with the patches?
>
> Elias Boutaleb


Re: [lldb-dev] Add support for OCaml native debugging

2016-07-07 Thread Tamas Berghammer via lldb-dev
What type of binaries do you want to commit?

Generally we don't like putting binaries into the repository because they are
not human readable, so it is hard to review/diff them, and they will only run
on a single platform and a single architecture while we support a lot of
different configurations.

Tamas

On Wed, Jul 6, 2016 at 3:26 PM E BOUTALEB via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I would like to submit two patches for code review.
> They introduce concrete support for OCaml native debugging, granted that
> you have access to the native compiler with DWARF emission support (see
> https://github.com/ocaml/ocaml/pull/574)
>
> This adds about 2000 lines of code.
> The type system isn't particularly complex here, every value is considered
> as an unsigned integer, and interpretation of the value is left to an
> external debugging layer made in OCaml.
> The language plugin handles function name demangling for breakpoints too.
>
> No tests for now. Is it fine to commit binaries with the patches?
>
> Elias Boutaleb


Re: [lldb-dev] All windows Mutex objects are recursive???

2016-05-12 Thread Tamas Berghammer via lldb-dev
We have already been using both std::mutex and std::condition_variable
in include/lldb/Utility/TaskPool.h for a while (since October) and nobody
has complained about it, so I think we can safely assume that all platforms
have the necessary STL support.
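
The recursive/non-recursive distinction discussed in this thread can be
demonstrated with Python's threading primitives (a sketch: threading.Lock
behaves like std::mutex, threading.RLock like std::recursive_mutex):

```python
import threading

lock = threading.Lock()    # non-recursive, analogous to std::mutex
rlock = threading.RLock()  # recursive, analogous to std::recursive_mutex

# A recursive lock may be re-acquired by the thread that already holds it.
rlock.acquire()
reacquired = rlock.acquire(blocking=False)  # succeeds: same-thread re-entry
rlock.release()
rlock.release()

# A non-recursive lock cannot: a blocking re-acquire here would deadlock,
# so we probe with blocking=False instead.
lock.acquire()
failed = lock.acquire(blocking=False)  # False: the lock is already held
lock.release()

print("recursive re-entry ok:", bool(reacquired), "| non-recursive re-entry ok:", failed)
```

This is exactly why a condition variable paired with a recursive mutex is
dangerous: the wait can release only one level of ownership.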

On Wed, May 11, 2016 at 11:44 PM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> It would be nice to get a patch that gets rid of the Mutex.h/Mutex.cpp and
> switches over to using C++11 std::mutex/std::recursive_mutex and get rid of
> Condition.h/Condition.cpp for std::condition_variable. Then we can be more
> consistent. We need to make sure the C++ standard libraries are ready on
> all platforms first though.
>
> Greg
>
> > On May 11, 2016, at 3:01 PM, Zachary Turner  wrote:
> >
> > I mean std::recursive_mutex is recursive
> >
> > On Wed, May 11, 2016 at 3:01 PM Zachary Turner 
> wrote:
> > Yes, eventually we should move to std::mutex and
> std::condition_variable, in which case it behaves as expected (std::mutex
> is non recursive, std::mutex is recursive).
> >
> >
> >
> > On Wed, May 11, 2016 at 2:20 PM Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > From lldb/source/Host/windows/Mutex.cpp:
> >
> >
> > Mutex::Mutex () :
> > m_mutex()
> > {
> > m_mutex =
> > static_cast<void *>(malloc(sizeof(CRITICAL_SECTION)));
> > InitializeCriticalSection(static_cast<CRITICAL_SECTION *>(m_mutex));
> > }
> >
> > //----------------------------------------------------------------------
> > // Default constructor.
> > //
> > // Creates a pthread mutex with "type" as the mutex type.
> > //----------------------------------------------------------------------
> > Mutex::Mutex (Mutex::Type type) :
> > m_mutex()
> > {
> > m_mutex =
> > static_cast<void *>(malloc(sizeof(CRITICAL_SECTION)));
> > InitializeCriticalSection(static_cast<CRITICAL_SECTION *>(m_mutex));
> > }
> >
> >
> > It also means that Condition.cpp doesn't act like its unix counterpart,
> > as pthread_cond_t requires that wait be called with a non-recursive
> > mutex. Not sure what or if any issues result from this, but I just
> > thought everyone should be aware.
> >
> > Greg Clayton
> >


Re: [lldb-dev] google/stable branch on git mirror?

2016-05-03 Thread Tamas Berghammer via lldb-dev
+Eric Christopher 

Adding Eric, as he was the last person merging changes to the google/stable
branch. As far as I know nobody releases LLDB from that branch, so I
wouldn't rely on it too much (Android Studio releases from master), but you
can give it a try if you want.

Tamas

On Fri, Apr 29, 2016 at 7:04 PM Jeffrey Pautler via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi all. First post…new to the mailing list.
>
>
>
> I was looking for the lldb/branches/google/stable/ branch on a git mirror,
> but was unable to find it. I was specifically looking at
> http://llvm.org/git/lldb.git, but didn’t see it anywhere else either
> (github, etc).
>
>
>
> Is it only available from the svn repo?
>
>
>
> Would it be useful for anyone else for that branch to be mirrored to the
> git repo as well?
>
>
>
> Thanks,
>
> Jeff
>
>
>
>
>
>


Re: [lldb-dev] Bug fixes for release_38 branch

2016-04-29 Thread Tamas Berghammer via lldb-dev
Is there any reason you want to use the release_38 branch specifically? As
far as I know nobody has tested it or is using it in the LLDB community, so
it is approximately as good as any random commit on master. If you are
looking for a reasonably stable LLDB then I think you are better off asking
for the version number shipped with Xcode or with Android Studio, as those
versions are a bit better tested and are used by some users as well.

On Thu, Apr 28, 2016 at 8:57 PM Francis Ricci via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Over the last month or two, I've been working to stabilize the release_38
> branch of lldb, and there are commits which fix bugs on this branch that
> I'd like to cherry-pick down. They're listed at the bottom of this message.
>
> One thing to note - r251106 is a commit I'd like to revert, instead of a
> cherry-pick. When we use this commit (multithreaded dwarf parsing) on the
> 3.8 branch, I run into a lot of dwarf assertion failures, even after
> cherry-picking all the dwarf fixes I could find from master. I don't see
> these assertion failures on master, so it's definitely an issue that's been
> fixed since the branch cut, but I think the best solution for the
> release_38 branch is to disable it for now.
>
> r264810 will have a small merge conflict due to an indentation change in
> lldbpexpect.py
> r263735 will have a small merge conflict due to a whitespace change on
> master. Everything else should apply cleanly.
>
> Commits:
> r267741 Use absolute module path when possible if sent in svr4 packets
> r264810 Fixed the failing test TestCommandScriptImmediateOutput on MacOSX
> r267468 Maintain register numbering across xml include features
> r267467 Properly unload modules from target image list when using svr4
> packets
> r267466 Use Process Plugin register indices when communicating with remote
> r267463 Store absolute path for lldb executable in dotest.py
> r267462 Create _lldb python symlink correctly when LLVM_LIBDIR_SUFFIX is
> used
> r265422 Fix dotest.py '-p' option for multi-process mode
> r265420 Print environment when dumping arch triple
> r265419 Make sure to update Target arch if environment changed
> r265418 Allow gdbremote process to read modules from memory
> r264476 Fix FILE * leak in Python API
> r264351 Make File option flags consistent for Python API
> r263824 Fixed a bug where DW_AT_start_scope would fall through to
> DW_AT_artificial in SymbolFileDWARF::ParseVariableDIE(). This was caught by
> the clang warning that catches unannotated case fall throughs.
> r263735 Fix deadlock due to thread list locking in 'bt all' with obj-c
> r261858 Handle the case when a variable is only valid in part of the
> enclosing scope
> r261598 Fixed a problem where the DWARF for inline functions was
> mis-parsed.
> r261279 Make sure code that is in the middle of figuring out the correct
> architecture on attach uses the architecture it has figured out, rather
> than the Target's architecture, which may not have been updated to the
> correct value yet.
> r260626 Don't crash if we have a DIE that has a DW_AT_ranges attribute and
> yet the SymbolFileDWARF doesn't have a DebugRanges. If this happens print a
> nice error message to prompt the user to file a bug and attach the
> offending DWARF file so we can get the correct compiler fixed.
> r260618 Removed a bad assertion:
> r260322 Added code that was commented out during testing to stops template
> member functions from being added to class definitions (see revision 260308
> for details).
> r260308 Fixed many issues that were causing differing type definition
> issues to show up when parsing expressions.
> r259962 Fix "thread backtrace -s": option was misparsed because of a
> missing break.
> r258367 Fix a problem where we were not calling fcntl() with the correct
> arguments for F_DUPFD
> r257786 Fixed a crasher when dealing with table entries that have blank
> names.
> r257644 Fix an issue where scripted commands would not actually print any
> of their output if an immediate output file was set in the result object
> via a Python file object
> REVERT - r251106 Re-commit "Make dwarf parsing multi-threaded"


Re: [lldb-dev] UnicodeDecodeError for serialize SBValue description

2016-04-07 Thread Tamas Berghammer via lldb-dev
LLDB supports adding data formatters without modifying the source code, and
I would strongly prefer to go that way, as we don't want each user of LLDB
to have to add data formatters for their own custom types into the LLDB
source. We have a pretty detailed (but possibly a bit outdated) description
of how they work and how you can add a new one here:
http://lldb.llvm.org/varformats.html

Enrico: Is there any reason you suggested the data formatters written
inside LLDB over the Python-based ones?

On Thu, Apr 7, 2016 at 3:31 AM Jeffrey Tan via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Thanks Enrico. This is very detailed! I will take a look.
> Btw: originally, I was hoping that a data formatter could be added without
> changing the source code, e.g. by giving an xml/json format file telling lldb
> the memory layout/structure of the data structure, so that lldb can parse the
> xml/json and deduce the formatting. This is the approach used by the data
> visualizer in the VS debugger:
> https://msdn.microsoft.com/en-us/library/jj620914.aspx
> This would make adding data formatters more extensible/flexible. Any reason
> we did not take this approach?
>
> Jeffrey
>
> On Wed, Apr 6, 2016 at 11:49 AM, Enrico Granata 
> wrote:
>
>>
>> On Apr 5, 2016, at 2:42 PM, Jeffrey Tan  wrote:
>>
>> Hi Enrico,
>>
>> Any suggestion/example how to add a data formatter for our own STL
>> string? From the output below I can see we are using our own "
>> *fbstring_core*" which I assume I need to write a type summary for this
>> type:
>>
>> frame variable corpus -T
>> (const string &const) corpus = error: summary string parsing error: {
>>   (std::*fbstring_core*) store_ = {
>> (std::*fbstring_core*::(anonymous union))  = {
>>   (char [24]) small_ = "www"
>>   (std::fbstring_core::MediumLarge) ml_ = {
>> (char *) data_ = 0x0077
>> "H\x89U\xa8H\x89M\xa0L\x89E\x98H\x8bE\xa8H\x89��_U��D\x88e�H\x8bE\xa0H\x89��]U��H\x89�H\x8dE�H\x89�H\x89���
>> ��L\x8dm�H\x8bE\x98H\x89��IU��\x88]�L\x8be\xb0L\x89��
>> (std::size_t) size_ = 0
>> (std::size_t) capacity_ = 1441151880758558720
>>   }
>> }
>>   }
>> }
>>
>>
>> Admittedly, this is going to be a little vague since I haven’t really
>> seen your code and I am only working off of one sample
>>
>> There’s going to be two parts to getting this to work:
>>
>> *Part 1 - Formatting fbstring_core*
>>
>> At a glance, an fbstring_core can be backed by two representations.
>> A “small” representation (a char array), and a “medium/large"
>> representation (a char* + a size)
>> I assume that the way you tell one from the other is
>>
>> if (size == 0) small
>> else medium-large
>>
>> If my assumption is not correct, you’ll need to discover what the correct
>> discriminator logic is - the class has to know, and so do you :-)
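>>
>> Working from the assumption stated above (size_ == 0 selects the inline
>> buffer), the discriminator logic a formatter needs can be sketched in plain
>> Python. fbstring_core's real layout may differ; the field meanings here are
>> assumptions taken from this thread, not from the actual source:

```python
def extract_fbstring(small_buf, ml_data, ml_size):
    """Pick the string payload out of a modeled fbstring_core.

    small_buf:        the inline char array ("small" representation)
    ml_data/ml_size:  the out-of-line pointer + size ("medium/large")
    Assumption (per the thread): ml_size == 0 means the small layout.
    """
    if ml_size == 0:
        # Inline data is NUL-terminated inside the fixed-size buffer.
        return small_buf.split(b"\x00", 1)[0]
    return ml_data[:ml_size]

# The "www" example from the thread: small layout, size_ == 0.
print(extract_fbstring(b"www\x00" + b"\x00" * 20, b"", 0))  # → b'www'
```

>> A real summary provider would do this same discrimination, then hand the
>> chosen data pointer and length to a StringPrinter as described below.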
>>
>> Armed with that knowledge, look in lldb
>> source/Plugins/Language/CPlusPlus/Formatters/LibCxx.cpp
>> There’s a bunch of code that deals with formatting llvm’s libc++
>> std::string - which follows a very similar logic to your class
>>
>> ExtractLibcxxStringInfo() is the function that handles discovering which
>> layout the string uses - where the data lives - and how much data there is
>>
>> Once you have told yourself how much data there is (the size) and where
>> it lives (array or pointer), LibcxxStringSummaryProvider() has the easy
>> task - it sets up a StringPrinter, tells it how much data to print, where
>> to get it from, and then delegates the StringPrinter to do the grunt work
>> StringPrinter is a nifty little tool - it can handle generating summaries
>> for different kinds of strings (UTF8? UTF16? we got it - is a \0 a
>> terminator? what quote character would you like? …) - you point it at some
>> data, set up a few options, and it will generate a printable representation
>> for you - if your string type is doing anything out of the ordinary, let’s
>> talk - I am definitely open to extending StringPrinter to handle even more
>> magic
>>
>> *Part 2 - Teaching std::string that it can be backed by an fbstring_core*
>>
>> At the end of part 1, you’ll probably end up with a
>> FBStringCoreSummaryProvider() - now you need to teach LLDB about it
>> The obvious thing you could do would be to go in CPlusPlusLanguage
>> ::GetFormatters() add a LoadFBStringFormatter(g_category) to it - and
>> then imitate - say - LoadLibCxxFormatters()
>>
>> AddCXXSummary(cpp_category_sp, lldb_private::formatters::
>> FBStringCoreSummaryProvider, “fbstringcore summary provider", ConstString
>> (“std::fbstring_core<.+>"), stl_summary_flags, true);
>>
>> That will work - but what you would see is:
>>
>> (const string &const) corpus = error: summary string parsing error: {
>>   (std::*fbstring_core*) store_ = “www"
>>
>>
>> You wanna do
>>
>> (lldb) log enable lldb formatters
>> (lldb) frame variable -T corpus
>>
>> It will list one or more typenames - the most specific one is the one you
>> like (e.g. for libc++ we get std::__1::string - this is how we tell
>> ourselves this is the std::s

Re: [lldb-dev] Green Dragon LLDB Xcode build update: TSAN support

2016-04-05 Thread Tamas Berghammer via lldb-dev
I think we don't. If we consider them stable enough for enabling them on a
buildbot AND we agree to revert changes breaking the unittests then I am
happy with enabling them (doing it should take very little effort from our
side). Otherwise I would prefer to wait until we can get them to a stable
state.

On Mon, Apr 4, 2016 at 10:53 PM Todd Fiala  wrote:

> One more update:
>
> The Green Dragon OS X LLDB builder now actually runs the gtests instead of
> just building them.
>
> The gtests run as a phase right before the Python test suite.  A non-zero
> value returning from the gtests will cause the OS X LLDB build to fail.
> Right now, tracking down the cause of the failure will require looking at
> the console log for the build and test job.  I'm excited to see our gtest
> test count has gone from roughly 17  to over 100 now!
>
> Pavel or Tamas, are we running the gtests on the Linux buildbots?
>
> -Todd
>
> On Mon, Apr 4, 2016 at 10:49 AM, Todd Fiala  wrote:
>
>> Hi all,
>>
>> I've made a minor change to the Green Dragon LLDB OS X Xcode build
>> located here:
>> http://lab.llvm.org:8080/green/job/LLDB/
>>
>> 1. Previously, the python test run used the default C/C++ compiler to
>> build test inferiors.  Now it uses the just-built clang/clang++ to build
>> test inferiors.  At some point in the future, we will change this to a
>> matrix of important clang/clang++ versions (e.g. some number of official
>> Xcode-released clangs).  For now, however, we'll continue to build with
>> just one, and that one will be the one in the clang build tree.
>>
>> 2. The Xcode llvm/clang build step now includes compiler-rt and libcxx.
>> This, together with the change above, will allow the newer LLDB TSAN tests
>> to run.
>>
>> If you're ever curious how the Xcode build is run, it uses the build.py
>> script in the zorg repo (http://llvm.org/svn/llvm-project/zorg/trunk)
>> under zorg/jenkins/build.py.  The build constructs the build tree with a
>> "derive-lldb" command, and does the Xcode build with the "lldb" command.
>>
>> Please let me know if you have any questions.
>>
>> I'll address any hiccups that may show up ASAP.
>>
>> Thanks!
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
>


Re: [lldb-dev] lldb-server stripped binary size: AArch64 ~16Mb vs ARM ~9 Mb

2016-03-02 Thread Tamas Berghammer via lldb-dev
We try to keep all of our bugs in the public LLVM bug tracker (llvm.org/bugs)
under OS=Linux, so if you are looking for issues to work on that is a good
place to start (the other option is to look for expectedFailures in the
test suite). In general most of the issues we have are present on Linux and
very few are specific to Android.

For new features we don't have any clear roadmap. The current primary
focus is to improve the quality and the speed of LLDB, but cool new features
are always welcome (e.g. software watchpoints, expression evaluation
improvements, module debug info support, etc...)

On Tue, Mar 1, 2016 at 10:47 AM Mikhail Filimonov 
wrote:

> Hi and thank you for the detailed reply, Tamas.
>
> Ok, so I’ll cope with increased size of lldb-server for AArch64.
>
> As a side note – is there any publicly available roadmap for LLDB on
> Android, that covers features to implement\issues to fix? I suggest that
> the community will greatly appreciate to get a glimpse on the direction of
> development for that target.
>
>
>
> Regards,
>
> Mikhail
>
>
>
> *From:* Tamas Berghammer [mailto:tbergham...@google.com]
> *Sent:* Tuesday, March 1, 2016 1:34 PM
> *To:* Pavel Labath ; Mikhail Filimonov <
> mfilimo...@nvidia.com>
>
>
> *Cc:* lldb-dev@lists.llvm.org
> *Subject:* Re: [lldb-dev] lldb-server stripped binary size: AArch64 ~16Mb
> vs ARM ~9 Mb
>
>
>
> As Pavel mentioned, the unreasonably large size of lldb-server is caused
> by the fact that we are relying on the linker to remove the unused code, and
> it can't do too good a job because we have a lot of unreasonable dependencies.
>
>
>
> The size difference between ARM and AArch64 is caused by several reasons:
>
> * On ARM we compile to the Thumb-2 instruction set, which is on average ~30%
> smaller than the ARM (and AArch64) instruction set. Before this change the
> size of lldb-server on ARM was ~14MB.
>
> * We have Safe ICF (identical code folding) enabled for ARM, which reduces
> the binary size by 5-10%. It is not enabled for AArch64 because last time I
> checked there was still an issue in ld.gold when using ICF on AArch64. It
> should already be fixed upstream but hasn't reached the NDK yet.
>
> * The AArch64 lldb-server is capable of debugging both ARM and AArch64
> applications, so it contains a bit more code because of this (e.g. two
> separate register contexts).
>
>
>
> Optimizing the size of both binaries is possible (and we want to do it
> sooner or later) but because of the reasons I listed the ARM one will stay
> much smaller than the AArch64 one.
>
>
>
> Tamas
>
>
>
> On Tue, Mar 1, 2016 at 9:18 AM Pavel Labath via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Hi,
>
> so the problem here is that we are currently relying on the linker to
> remove code that we don't need, and it can't always do a good job in
> figuring out which code is not used due to complex dependencies. So,
> innocent-looking changes in the code can pull in lots of transitive
> dependencies, even though they are not used. I suspect something like
> that is going on here, although we should keep in mind that arm64 code
> is less dense naturally. Any help on this front will be welcome,
> although it probably won't be trivial, as we have probably picked off
> the low-hanging fruit already.
>
> That said, you may want to try adding LLVM_TARGETS_TO_BUILD=Aarch64 to
> your cmake line. We use that, although I can't say how much it affects
> the size of the resulting binary.
>
> hope that helps,
> pl
>
> On 29 February 2016 at 20:15, Mikhail Filimonov via lldb-dev
>  wrote:
> > Hello, fellow developers and congratulations with long awaited 3.8
> Release.
> >
> > I wonder why AArch64 stripped binary of lldb-server built from [3.8
> Release] RC3 source is so much bigger than its ARM counterpart.
> > See the numbers:
> > 16318632 Feb 29 22:41 lldb-server-3.8.0-aarch64
> >  9570916 Feb 29 22:23 lldb-server-3.8.0-arm
> > lldb-server-3.8.0-aarch64: ELF 64-bit LSB  executable, ARM aarch64,
> version 1 (SYSV), statically linked, stripped
> > lldb-server-3.8.0-arm: ELF 32-bit LSB  executable, ARM, EABI5
> version 1 (SYSV), statically linked, stripped
> >
> > My build configuration is MinSizeRel in both cases:
> > cmake -GNinja
> > -DCMAKE_BUILD_TYPE=MinSizeRel $HOME/llvm_git
> > -DCMAKE_TOOLCHAIN_FILE=tools/lldb/cmake/platforms/Android.cmake
> > -DANDROID_TOOLCHAIN_DIR=$HOME/Toolchains/aarch64-21-android
> > -DANDROID_ABI=aarch64
> > -DCMAKE_CXX_COMPILER_VERSION=4.9
> > -DLLVM_TARGET_ARCH=aarch64
> > -DLLVM_HOST_TRIPLE=aarch64-unknown-linux-android
> > -DLLVM_TABLEGEN=$HOME/llvm_host/bin/llvm-tblgen
> > -DCLANG_TABLEGEN=$HOME/llvm_host/bin/clang-tblgen
> >
> > cmake -GNinja
> > -DCMAKE_BUILD_TYPE=MinSizeRel $HOME/llvm_git
> > -DCMAKE_TOOLCHAIN_FILE=tools/lldb/cmake/platforms/Android.cmake
> > -DANDROID_TOOLCHAIN_DIR=$HOME/Toolchains/arm-21-android-toolchain
> > -DANDROID_ABI=armeabi
> > -DCMAKE_CXX_COMPILER_VERSION=4.9
> > -DLLVM_TARGET_ARCH=arm
> > -DLLVM_HOST_TRIPLE=arm-unknown-linux-android
> > -DLLVM_TAB

Re: [lldb-dev] lldb-server stripped binary size: AArch64 ~16Mb vs ARM ~9 Mb

2016-03-01 Thread Tamas Berghammer via lldb-dev
As Pavel mentioned, the unreasonably large size of lldb-server is caused by
the fact that we are relying on the linker to remove the unused code, and it
can't do too good a job because we have a lot of unreasonable dependencies.

The size difference between ARM and AArch64 is caused by several reasons:
* On ARM we compile to the Thumb-2 instruction set, which is on average ~30%
smaller than the ARM (and AArch64) instruction set. Before this change the
size of lldb-server on ARM was ~14MB.
* We have Safe ICF (identical code folding) enabled for ARM, which reduces
the binary size by 5-10%. It is not enabled for AArch64 because last time I
checked there was still an issue in ld.gold when using ICF on AArch64. It
should already be fixed upstream but hasn't reached the NDK yet.
* The AArch64 lldb-server is capable of debugging both ARM and AArch64
applications, so it contains a bit more code because of this (e.g. two
separate register contexts).

Optimizing the size of both binaries is possible (and we want to do it sooner
or later) but because of the reasons I listed the ARM one will stay much
smaller than the AArch64 one.

Tamas
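
As a quick sanity check, the figures quoted in this thread are mutually
consistent (a sketch; all numbers are taken from the messages above):

```python
# Sizes reported in this thread, in bytes (stripped lldb-server binaries).
aarch64_bytes = 16318632      # AArch64 build
arm_bytes = 9570916           # ARM build, with Thumb-2 + Safe ICF
arm_before_thumb2_mb = 14.0   # "~14MB" quoted above, before Thumb-2

arm_mb = arm_bytes / 1e6
aarch64_mb = aarch64_bytes / 1e6

# Thumb-2 is quoted as ~30% denser, plus 5-10% from ICF; the observed
# reduction relative to the pre-Thumb-2 binary should be in that ballpark.
reduction = 1 - arm_mb / arm_before_thumb2_mb
print(f"aarch64: {aarch64_mb:.1f} MB, arm: {arm_mb:.1f} MB, "
      f"reduction vs pre-Thumb-2: {reduction:.1%}")
```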

On Tue, Mar 1, 2016 at 9:18 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi,
>
> so the problem here is that we are currently relying on the linker to
> remove code that we don't need, and it can't always do a good job in
> figuring out which code is not used due to complex dependencies. So,
> innocent-looking changes in the code can pull in lots of transitive
> dependencies, even though they are not used. I suspect something like
> that is going on here, although we should keep in mind that arm64 code
> is less dense naturally. Any help on this front will be welcome,
> although it probably won't be trivial, as we have probably picked off
> the low-hanging fruit already.
>
> That said, you may want to try adding LLVM_TARGETS_TO_BUILD=Aarch64 to
> your cmake line. We use that, although I can't say how much it affects
> the size of the resulting binary.
>
> hope that helps,
> pl
>
> On 29 February 2016 at 20:15, Mikhail Filimonov via lldb-dev
>  wrote:
> > Hello, fellow developers and congratulations with long awaited 3.8
> Release.
> >
> > I wonder why AArch64 stripped binary of lldb-server built from [3.8
> Release] RC3 source is so much bigger than its ARM counterpart.
> > See the numbers:
> > 16318632 Feb 29 22:41 lldb-server-3.8.0-aarch64
> >  9570916 Feb 29 22:23 lldb-server-3.8.0-arm
> > lldb-server-3.8.0-aarch64: ELF 64-bit LSB  executable, ARM aarch64,
> version 1 (SYSV), statically linked, stripped
> > lldb-server-3.8.0-arm: ELF 32-bit LSB  executable, ARM, EABI5
> version 1 (SYSV), statically linked, stripped
> >
> > My build configuration is MinSizeRel in both cases:
> > cmake -GNinja
> > -DCMAKE_BUILD_TYPE=MinSizeRel $HOME/llvm_git
> > -DCMAKE_TOOLCHAIN_FILE=tools/lldb/cmake/platforms/Android.cmake
> > -DANDROID_TOOLCHAIN_DIR=$HOME/Toolchains/aarch64-21-android
> > -DANDROID_ABI=aarch64
> > -DCMAKE_CXX_COMPILER_VERSION=4.9
> > -DLLVM_TARGET_ARCH=aarch64
> > -DLLVM_HOST_TRIPLE=aarch64-unknown-linux-android
> > -DLLVM_TABLEGEN=$HOME/llvm_host/bin/llvm-tblgen
> > -DCLANG_TABLEGEN=$HOME/llvm_host/bin/clang-tblgen
> >
> > cmake -GNinja
> > -DCMAKE_BUILD_TYPE=MinSizeRel $HOME/llvm_git
> > -DCMAKE_TOOLCHAIN_FILE=tools/lldb/cmake/platforms/Android.cmake
> > -DANDROID_TOOLCHAIN_DIR=$HOME/Toolchains/arm-21-android-toolchain
> > -DANDROID_ABI=armeabi
> > -DCMAKE_CXX_COMPILER_VERSION=4.9
> > -DLLVM_TARGET_ARCH=arm
> > -DLLVM_HOST_TRIPLE=arm-unknown-linux-android
> > -DLLVM_TABLEGEN=$HOME/llvm_host/bin/llvm-tblgen
> > -DCLANG_TABLEGEN=$HOME/llvm_host/bin/clang-tblgen
> >
> > Maybe I need some additional settings to be set for AArch64 case?
> >
> > Regards,
> > Mikhail
> >
> > -Original Message-
> > From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of
> Hans Wennborg via lldb-dev
> > Sent: Wednesday, February 24, 2016 12:51 AM
> > To: release-test...@lists.llvm.org
> > Cc: llvm-dev ; cfe-dev ;
> openmp-dev (openmp-...@lists.llvm.org) ; LLDB
> Dev 
> > Subject: [lldb-dev] [3.8 Release] RC3 has been tagged
> >
> > Dear testers,
> >
> > Release Candidate 3 has just been tagged [1]. Please build, test, and
> upload to the sftp.
> >
> > If there are no regressions from previous release candidates, this will
> be the last release candidate before the final release.
> >
> > Release notes can still go into the branch.
> >
> > Thanks again for all your work!
> > Hans
> >
> >  [1]
> http://lists.llvm.org/pipermail/llvm-branch-commits/2016-February/009866.html

Re: [lldb-dev] Module Cache improvements - RFC

2016-02-24 Thread Tamas Berghammer via lldb-dev
I completely agree with you that we shouldn't change LLDB too much just to
speed up the startup time at the first use.

For Android we already have a host-side disk cache in place similar to what
you described for iOS, and we are already using ADB (an Android-specific
interface) to download the files from the device, but unfortunately its
speed is only ~4-5MB/s on most devices.

On Tue, Feb 23, 2016 at 9:23 PM Greg Clayton  wrote:

> > On Feb 23, 2016, at 10:31 AM, Nico Weber  wrote:
> >
> > On Tue, Feb 23, 2016 at 1:21 PM, Tamas Berghammer via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > Yes we already have a disk cache on the host. I agree with you that
> waiting 30s at the first startup shouldn't be an issue in general (Pavel
> isn't sharing my opinion). The only catch is that in case of iOS there are
> only a few different builds released so if you downloaded the modules once
> then I think you won't have to download them the next time when you try to
> use a different device. In case of Android we have to download the symbols
> from each device you are using and at that point 30s might be an issue (I
> still don't think it is).
> >
> > With my app developer hat on, if some program makes me wait 30s for
> something then I won't like that program.
>
> I agree, but if the first time you hook your phone up Android Studio pops
> up a dialog box saying "This is the first time you have connected this
> device, hold on while I cache the shared libraries for this device..." then
> it wouldn't be too bad. It is primarily the fact that the 30 seconds is
> happening without feedback during first launch or attach. Also, you can
> probably use something faster than the lldb-platform to download all of the
> files. In Xcode, we download all symbols into the users home directory in a
> known location:
>
> ~/Library/Developer/Xcode/iOS DeviceSupport
>
> This folder contains the exact OS version and a build number:
>
> (lldb) platform select remote-ios
>   Platform: remote-ios
>  Connected: no
>  SDK Roots: [ 0] "~/Library/Developer/Xcode/iOS DeviceSupport/9.0 (W)"
>  SDK Roots: [ 1] "~/Library/Developer/Xcode/iOS DeviceSupport/9.1 (X)"
>  SDK Roots: [ 2] "~/Library/Developer/Xcode/iOS DeviceSupport/9.2 (Y)"
>
> Where W, X, Y are build numbers. We know we can look in these
> folders for any files that are from the device. They get populated and
> these SDK directories get searched by LLDB's PlatformRemoteiOS so they get
> found (we don't use the file cache that the PlatformAndroid currently uses).
>
> So with a little work, I would add some functionality to your Android
> Studio, have something that knows how to copy files from device as quickly
> as possible (using lldb-platform is slooow and that is the way it is
> currently done I believe) into some such directory, all while showing a
> progress dialog to the user on first device connect, and then debugging
> will always be quick. And you can probably make it quicker than 30 seconds.
>
> Greg Clayton
>
>
>


Re: [lldb-dev] Module Cache improvements - RFC

2016-02-23 Thread Tamas Berghammer via lldb-dev
Yes, we already have a disk cache on the host. I agree with you that waiting
30s at the first startup shouldn't be an issue in general (Pavel doesn't
share my opinion). The only catch is that in the case of iOS there are only a
few different builds released, so once you have downloaded the modules I
think you won't have to download them again the next time you use a
different device. In the case of Android we have to download the symbols from
each device you are using, and at that point 30s might be an issue (I still
don't think it is).
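A hash-keyed disk cache of the sort discussed in this thread can be sketched in a few lines (a rough sketch only — lldb's actual ModuleCache layout differs, and keying on a module's build UUID is an assumption here):

```python
import hashlib
from pathlib import Path

def module_key(uuid: str) -> str:
    # Key the cache on a stable identifier (e.g. the module's build UUID) so
    # identical binaries pulled from different devices share one cache entry.
    return hashlib.sha256(uuid.encode()).hexdigest()

def get_module(uuid: str, download, cache_dir: Path) -> Path:
    """Return a local copy of the module, downloading only on a cache miss."""
    entry = cache_dir / module_key(uuid)
    if not entry.exists():                 # slow path: first session only
        cache_dir.mkdir(parents=True, exist_ok=True)
        tmp = entry.with_suffix(".tmp")
        tmp.write_bytes(download(uuid))    # e.g. pull over the platform connection
        tmp.rename(entry)                  # publish atomically; no partial files
    return entry
```

With this shape, the second debug session against the same OS build never touches the wire, which is the behavior being described for the iOS device-support directories.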

On Tue, Feb 23, 2016 at 6:00 PM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I believe this is already done.
>
> I am guessing the main issue is this happens on the first time you debug
> to a device you and up with a 30 second delay with no feedback as to what
> is going on. So you say "launch" and then 35 seconds later you hit your
> breakpoint at main. In Xcode we solve this by downloading all of the files
> when we attach to a device for the first time and we show progress as we
> download all shared libraries. Sounds like it would be good for Android
> Studio to do the same thing?
>
> Greg
> > On Feb 22, 2016, at 5:27 PM, Zachary Turner  wrote:
> >
> > Can't you just cache the modules locally on the disk, so that you only
> take that 26 second hit the first time you try to download that module, and
> then it indexes it by some sort of hash.  Then instead of just downloading
> it, you check the local cache first and only download if it's not there.
> >
> > If you already do all this, then disregard.
> >
> > On Mon, Feb 22, 2016 at 4:39 PM Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > > On Jan 28, 2016, at 4:21 AM, Pavel Labath  wrote:
> > >
> > > Hello all,
> > >
> > > we are running into limitations of the current module download/caching
> > > system. A simple android application can link to about 46 megabytes
> > > worth of modules, and downloading that with our current transfer rates
> > > takes about 25 seconds. Much of the data we download this way is never
> > > actually accessed, and yet we download everything immediately upon
> > > starting the debug session, which makes the first session extremely
> > > laggy.
> > >
> > > We could speed up a lot by only downloading the portions of the module
> > > that we really need (in my case this turns out to be about 8
> > > megabytes). Also, further speedups could be made by increasing the
> > > throughput of the gdb-remote protocol used for downloading these files
> > > by using pipelining.
> > >
> > > I made a proof-of-concept hack  of these things, put it into lldb and
> > > I was able to get the time for the startup-attach-detach-exit cycle
> > > down to 5.4 seconds (for comparison, the current time for the cycle is
> > > about 3.6 seconds with a hot module cache, and 28(!) seconds with an
> > > empty cache).
> > >
> > > Now, I would like to implement these things in lldb properly,
> > > so this is a request for comments on my plan. What I would like to do
> > > is:
> > > - Replace ModuleCache with a SectionCache (actually, more like a cache
> > > of arbitrary file chunks). When the cache gets a request for a file
> > > and the file is not in the cache already, it returns a special kind of
> > > a Module, whose fragments will be downloaded as we are trying to
> > > access them. These fragments will be cached on disk, so that
> > > subsequent requests for the file do not need to re-download them. We
> > > can also have the option to short-circuit this logic and download the
> > > whole file immediately (e.g., when the file is small, or we have a
> > > super-fast way of obtaining the whole file via rsync, etc...)
> > > - Add pipelining support to GDBRemoteCommunicationClient for
> > > communicating with the platform. This actually does not require any
> > > changes to the wire protocol. The only change is in adding the ability
> > > to send an additional request to the server while waiting for the
> > > response to the previous one. Since the protocol is request-response
> > > based and we are communicating over a reliable transport stream, each
> > > response can be correctly matched to a request even though we have
> > > multiple packets in flight. Any packets which need to maintain more
> > > complex state (like downloading a single entity using continuation
> > > packets) can still lock the stream to get exclusive access, but I am
> > > not sure if we actually even have any such packets in the platform
> > > flavour of the protocol.
> > > - Parallelize downloading of multiple files, utilizing
> > > request pipelining. Currently we get the biggest delay when first
> > > attaching to a process (we download file headers and some basic
> > > informative sections) and when we try to set the first symbol-level
> > > breakpoint (we download symbol tables and string sections). Both of
> > > these actions operate on all modules in bulk, which makes them easy
> > 
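The pipelining idea quoted above — matching each response to a request purely by order, because the transport is a reliable stream and the protocol is strictly request-response — can be illustrated with a toy model (this is not lldb's actual GDBRemoteCommunicationClient; the vFile packet names are only illustrative):

```python
from collections import deque

class EchoServer:
    """Stand-in for the remote platform: answers requests in arrival order."""
    def __init__(self):
        self.inbox = deque()
    def submit(self, request):
        self.inbox.append(request)
    def next_response(self):
        return b"OK:" + self.inbox.popleft()

class PipelinedClient:
    """Toy request/response client with several requests in flight at once.

    On an ordered, reliable stream with a strict request-response protocol,
    the Nth response always answers the Nth outstanding request, so no
    request IDs are needed on the wire.
    """
    def __init__(self, server):
        self.server = server          # stand-in for the wire
        self.pending = deque()        # requests awaiting a response, FIFO

    def send(self, request):
        self.server.submit(request)   # fire off without waiting for a reply
        self.pending.append(request)

    def recv(self):
        response = self.server.next_response()
        request = self.pending.popleft()   # ordered stream => FIFO match
        return request, response
```

The client can issue a burst of requests and then drain the responses, hiding the round-trip latency that dominates when each packet is sent synchronously.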

Re: [lldb-dev] Interest in enabling -Werror by default

2016-02-17 Thread Tamas Berghammer via lldb-dev
I think the Linux-x86_64 build using clang is mostly warning free (1
warning on http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake)
but that isn't true for most of the other configurations.

I think -Werror can be enabled on the buildbots on a case by case basis
depending on the decision of the owner/maintainer. The main reason I think
this is that a change like this will increase the number of build failures,
which will give more work to the buildbot maintainer, primarily
because most buildbots don't send out failure messages (flakiness), and I am
not convinced that the community will fix a warning based on a report
from a build bot.

As a partial maintainer of 5 different buildbots I don't want to enable
-Werror on any of them, as I think it would be too much additional maintenance
work compared to the benefit, unless we enforce -Werror on local builds as
well (e.g. use -Werror if compiling with clang on an x86_64 platform).

On Wed, Feb 17, 2016 at 3:19 AM Saleem Abdulrasool 
wrote:

> On Tue, Feb 16, 2016 at 12:38 PM, Kamil Rytarowski  wrote:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> NetBSD builds with GCC 4.8.2 and it emits few warnings for LLDB.
>>
>> Before enabling -Werror please first iterate over build logs and help
>> to squash them. For example it detects undefined behavior IIRC for a
>> Darwin code part.
>
>
> Interesting.  On Linux, lldb had many warnings, and over time, I've
> managed to get most of them cleaned up.  Right now, there are a couple of
> -Wtype-limits warnings and one -Wformat warning.  Is there a build bot that
> can be used to monitor what those warnings are?  If there aren't any
> buildbots, then this would be of no consequence since we wouldn't turn it
> on for user builds.
>
> I wish I had caught what I wrote versus what I was thinking before hitting
> send :-(.
>
>
>>
>> On 16.02.2016 20:01, Zachary Turner via lldb-dev wrote:
>> > You're talking about doing it on a per-bot basis and not a global
>> > policy, but just throwing in that on the MSVC side at least, we're
>> > not warning free right now and it's not trivial tog et warning free
>> > without disabling some warnings (which I don't want to do either)
>> >
>> > On Tue, Feb 16, 2016 at 10:31 AM Saleem Abdulrasool via lldb-dev
>> > mailto:lldb-dev@lists.llvm.org>> wrote:
>> >
>> > On Tuesday, February 16, 2016, Tamas Berghammer
>> > mailto:tbergham...@google.com>> wrote:
>> >
>> > If you want to enable it only on the bots then I think we can
>> > decide it on a bot by bot bases. For me the main question is who
>> > will be responsible for fixing a warning introduced by a change in
>> > llvm or clang causing a build failure because of a warning
>> > (especially when the fix is non trivial)?
>> >
>> >
>> > I think that the same policy as LLVM/clang should apply here.  The
>> > person making the change would be responsible for ensuring that
>> > nothing breaks as a result of their change.  The same situation
>> > exists when working on interfaces that effect clang: a fix for a
>> > warning introduced by a change in LLVM may be non-trivial in
>> > clang.
>> >
>> > Just to be clear, I'm merely suggesting this as an option.  If it
>> > is deemed too burdensome by most of the common committers, we state
>> > so and not do this.
>> >
>> >
>> >
>> > On Tue, Feb 16, 2016 at 4:31 PM Saleem Abdulrasool
>> >  wrote:
>> >
>> > On Tuesday, February 16, 2016, Tamas Berghammer
>> >  wrote:
>> >
>> > I would be happy if we can keep lldb warning free but I don't think
>> > enabling -Werror is a good idea for 2 reasons: * We are using a lot
>> > of different compiler and keeping the codebase warning free on all
>> > of them might not be feasible especially for the less used, older
>> > gcc versions. * Neither llvm nor clang have -Werror enabled so if
>> > we enable it then a clang/llvm change can break our build with a
>> > warning when it is hard to justify a revert and a fix might not be
>> > trivial.
>> >
>> >
>> > Err, sorry.  I meant by default on the build bots (IIRC, some
>> > (many?) of the build bots do build with -Werror for LLVM and
>> > clang).  Yes, a new warning in clang could cause issues in LLDB,
>> > though the same thing exists for the LLVM/clang dependency.  Since
>> > this would be on the build bots, it should get resolved rather
>> > quickly.
>> >
>> > In short term I would prefer to just create a policy saying
>> > everybody should write warning free code for lldb (I think it
>> > already kind of exists) and we as a community try to ensure it
>> > during code review and with fixing the possible things what slip
>> > through. In the longer term I would be happy to see -Werror turned
>> > on for llvm and clang first and then we can follow up with lldb but
>> > making this change will require a lot of discussion and might get
>> > some push back.
>> >
>> > On Tue, Feb 16, 2016 at 6:02 AM Saleem Abdulrasool via lldb-dev
>> >  wrote:
>> >
>> > Hi,
>> >
>> > It seems that enabling -Werror by 

Re: [lldb-dev] Interest in enabling -Werror by default

2016-02-16 Thread Tamas Berghammer via lldb-dev
If you want to enable it only on the bots then I think we can decide it on
a bot by bot basis. For me the main question is who will be responsible for
fixing a warning introduced by a change in llvm or clang that causes a build
failure (especially when the fix is non-trivial)?

On Tue, Feb 16, 2016 at 4:31 PM Saleem Abdulrasool 
wrote:

> On Tuesday, February 16, 2016, Tamas Berghammer 
> wrote:
>
>> I would be happy if we can keep lldb warning free but I don't think
>> enabling -Werror is a good idea for 2 reasons:
>> * We are using a lot of different compiler and keeping the codebase
>> warning free on all of them might not be feasible especially for the less
>> used, older gcc versions.
>> * Neither llvm nor clang have -Werror enabled so if we enable it then a
>> clang/llvm change can break our build with a warning when it is hard to
>> justify a revert and a fix might not be trivial.
>>
>
> Err, sorry.  I meant by default on the build bots (IIRC, some (many?) of
> the build bots do build with -Werror for LLVM and clang).  Yes, a new
> warning in clang could cause issues in LLDB, though the same thing exists
> for the LLVM/clang dependency.  Since this would be on the build bots, it
> should get resolved rather quickly.
>
> In short term I would prefer to just create a policy saying everybody
>> should write warning free code for lldb (I think it already kind of exists)
>> and we as a community try to ensure it during code review and with fixing
>> the possible things what slip through. In the longer term I would be happy
>> to see -Werror turned on for llvm and clang first and then we can follow up
>> with lldb but making this change will require a lot of discussion and might
>> get some push back.
>>
>> On Tue, Feb 16, 2016 at 6:02 AM Saleem Abdulrasool via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi,
>>>
>>> It seems that enabling -Werror by default is within reach for lldb now.
>>> There currently are three warnings that remain with gcc 5.1 on Linux, and
>>> the build is clean of warnings with clang.
>>>
>>> There are two instances of type range limitations on comparisons in
>>> asserts, and one instance of string formatting which has a GNU
>>> incompatibility.
>>>
>>> Is there any interest in enabling -Werror by default to help keep the
>>> build clean going forward?
>>>
>>> --
>>> Saleem Abdulrasool
>>> compnerd (at) compnerd (dot) org
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>
>
> --
> Saleem Abdulrasool
> compnerd (at) compnerd (dot) org
>


Re: [lldb-dev] Interest in enabling -Werror by default

2016-02-16 Thread Tamas Berghammer via lldb-dev
I would be happy if we can keep lldb warning free, but I don't think
enabling -Werror is a good idea for 2 reasons:
* We are using a lot of different compilers, and keeping the codebase warning
free on all of them might not be feasible, especially for the less used,
older gcc versions.
* Neither llvm nor clang have -Werror enabled, so if we enable it then a
clang/llvm change can break our build with a warning, when it is hard to
justify a revert and a fix might not be trivial.

In the short term I would prefer to just create a policy saying everybody
should write warning free code for lldb (I think it already kind of exists)
and we as a community try to ensure it during code review and by fixing
the things that slip through. In the longer term I would be happy
to see -Werror turned on for llvm and clang first and then follow up
with lldb, but making this change will require a lot of discussion and might
get some pushback.

On Tue, Feb 16, 2016 at 6:02 AM Saleem Abdulrasool via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi,
>
> It seems that enabling -Werror by default is within reach for lldb now.
> There currently are three warnings that remain with gcc 5.1 on Linux, and
> the build is clean of warnings with clang.
>
> There are two instances of type range limitations on comparisons in
> asserts, and one instance of string formatting which has a GNU
> incompatibility.
>
> Is there any interest in enabling -Werror by default to help keep the
> build clean going forward?
>
> --
> Saleem Abdulrasool
> compnerd (at) compnerd (dot) org
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] LLDB does some deep recursion into external modules to resolve name lookups

2016-02-10 Thread Tamas Berghammer via lldb-dev
Hi Sean,

Can you give us some more context on this? Without access to the
referenced rdar bug I don't really understand your previous e-mail (and I
think I am not alone in this).

Thanks,
Tamas

On Wed, Feb 10, 2016 at 2:54 AM Sean Callanan via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I’ve been investing the “po performance bug” ( po
> when debugging Xcode is extremely slow) in recent Xcode, and I discovered
> this problem.
>
> We are looking at pch files that are generated on Xcode’s behalf and it
> looks like we’re recursing through their dependencies when we don’t find
> something, but we’re probably not searching efficiently because this is
> super slow.
>
> This would be an Everest regression.
>
> I’m going to keep working on the original Radar because I haven’t gotten
> Brent’s backtrace yet; that said, this one is going to affect users’
> perception of expression parser performance as well so I’ve filed it
> separately.
>
> Sean
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] Running a single test

2016-02-09 Thread Tamas Berghammer via lldb-dev
Zachary's solution will work as well, but it won't make debugging the
test easy (it still uses several processes). If you want to run just one
test then you have to specify --no-multiprocess, and then you can use the
same flags as before (-p, -f).

On Tue, Feb 9, 2016 at 10:19 PM Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Try passing the directory to start in as the last argument.  Also make
> sure you include .py on the filename when using -p (I don't actually know
> if this is required but I do it).
>
> % python dotest.py
> --executable /tank/emaste/src/llvm/build-nodebug/bin/lldb -C /usr/bin/clang
> -v -t -p TestCppIncompleteTypes.py
> ~/src/llvm/tools/lldb/packages/Python/lldbsuite/test
>
> I don't know off the top of my head why that last argument is required,
> and I agree it's counterintuitive and probably doesn't *need* to be that
> way for technical reasons.
>
> LMK if this works
>
> On Tue, Feb 9, 2016 at 2:01 PM Ed Maste via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> I've been away from LLDB development for a little while but am
>> starting to work on it again.
>>
>> I used to run a few tests using dotest.py's -f or -p flags, but they
>> don't seem to be working now.
>>
>>   -f filterspec Specify a filter, which consists of the test class
>> name, a dot, followed by the test method, to only
>> admit such test into the test suite
>>   -p pattern    Specify a regexp filename pattern for inclusion in the
>> test suite
>>
>> For example, I'd expect this command:
>>
>> % python dotest.py --executable
>> /tank/emaste/src/llvm/build-nodebug/bin/lldb -C /usr/bin/clang -v -t
>> -p TestCppIncompleteTypes
>>
>> to run just the TestCppIncompleteTypes.py test(s), but instead it
>> looks like it runs the full suite.
>>
>> I'd also expect
>>
>> % python dotest.py --executable
>> /tank/emaste/src/llvm/build-nodebug/bin/lldb -C /usr/bin/clang -v -t
>> -f TestCppIncompleteTypes.test_limit_debug_info
>>
>> to run a single test from the same suite, but it runs no tests
>> ("Collected 0 tests").
>>
>> I'm sure these options used to work, although this could be an issue
>> that affects only FreeBSD. Do they work on Linux/OS X?
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] MSVC 2013 w/ Python 2.7 is moving to an unsupported toolchain

2016-02-02 Thread Tamas Berghammer via lldb-dev
Hi Zachary,

We are still using MSVC 2013 and Python 2.7 to compile LLDB on Windows for
Android Studio, and we also have a buildbot that tests this
configuration (without sending e-mail at the moment) here:
http://lab.llvm.org:8011/builders/lldb-windows7-android

We are discussing what our plan going forward should be, both in terms of
Visual Studio version and Python version, and I expect that we will make a
decision this week. Until then please don't remove any hack we have in the
code because of MSVC 2013 (e.g. alias template workarounds), and if you add
new code then please try not to break MSVC 2013. I will send out an update
about our decision, hopefully at the end of this week.

You mentioned that LLVM plans to bump the minimum version of MSVC to 2015.
Do you have any link to the place where this was discussed, or do you know
anything about the schedule?

Thanks,
Tamas

On Tue, Feb 2, 2016 at 7:16 PM Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> As of this week, we have the test suite running clean under MSVC 2015
> using Python 3.5.  I'm sure new things will pop up, but I'm considering the
> transition "done" as of now.
>
> What this means for MSVC 2013 is that we don't want to support it anymore.
>
> Reasons:
> * C++ language support is poor
> * Compiling your own version of Python is difficult and a high barrier to
> entry for people wanting to build LLDB on Windows
> * LLVM will eventually bump its minimum MSVC version to 2015 as well.
>
> To this end, I have already changed the MSVC buildbot [
> http://lab.llvm.org:8011/builders/lldb-x86-windows-msvc2015] to compile
> using 2015.  The old 2013 buildbot no longer exists.
>
> This week I plan to update the build instructions on lldb.org to reflect
> the simpler, more streamlined instructions for 2015 and remove the
> instructions for 2013.
>
> I know some people are still using 2013.  I don't plan to break anything
> or explicitly remove support from CMake or anywhere else for 2013.  I'm
> only saying that unless someone else steps up to keep this configuration
> working, it may break at any time, there won't be a buildbot testing it,
> and I can't guarantee anything about it continuing to work.
>
> Note that when LLVM bumps its minimum required version to MSVC 2015
> (expected this year), it will be **very hard for anyone to continue using
> Python 2 on Windows at tip of trunk**.  The only real workaround for this
> is going to be forking Python (on your own end) and making whatever changes
> are necessary to Python to keep it compiling, as they will not accept the
> patches upstream.
>
> Happy to answer any questions about this.
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] Inquiry for performance monitors

2016-02-01 Thread Tamas Berghammer via lldb-dev
If you want to go down the path of implementing it outside LLDB then I would
suggest implementing it as an out-of-tree plugin written in C++. You can
use the SB API the same way as you can from python, and additionally it has
a few advantages:
* You have a C/C++ API, which makes it easy to integrate the functionality
into an IDE (they just have to link to your shared library)
* You can generate a Python API if you need one with SWIG, the same way we
do it for the SB API
* You don't have to worry about making the code both Python 2.7 and Python
3.5 compatible

You can see a very simple example of implementing an out-of-tree C++
plugin in /examples/plugins/commands
On Mon, Feb 1, 2016 at 10:53 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Speaking for Android Studio, I think that we *could* use a
> python-based implementation (hard to say exactly without knowing the
> details of the implementation), but I believe a different
> implementation could be *easier* to integrate. Plus, if the solution
> integrates more closely with lldb, we could surface some of the data
> in the command-line client as well.
>
> pl
>
> On 1 February 2016 at 10:30, Ravitheja Addepally
>  wrote:
> > And what about the ease of integration into a an IDE, I don't really
> know if
> > the python based approach would be usable or not in this context ?
> >
> > On Mon, Feb 1, 2016 at 11:17 AM, Pavel Labath  wrote:
> >>
> >> It feels to me that the python based approach could run into a dead
> >> end fairly quickly: a) you can only access the data when the target is
> >> stopped; b) the self-tracing means that the evaluation of these
> >> expressions would introduce noise in the data; c) overhead of all the
> >> extra packets(?).
> >>
> >> So, I would be in favor of a lldb-server based approach. I'm not
> >> telling you that you shouldn't do that, but I don't think that's an
> >> approach I would take...
> >>
> >> pl
> >>
> >>
> >> On 1 February 2016 at 08:58, Ravitheja Addepally
> >>  wrote:
> >> > Ok, that is one option, but one of the aim for this activity is to
> make
> >> > the
> >> > data available for use by the IDE's like Android Studio or XCode or
> any
> >> > other that may want to display this information in its environment so
> >> > keeping that in consideration would the complete python based approach
> >> > be
> >> > useful ? or would providing LLDB api's to extract raw perf data from
> the
> >> > target be useful ?
> >> >
> >> > On Thu, Jan 21, 2016 at 10:00 PM, Greg Clayton 
> >> > wrote:
> >> >>
> >> >> One thing to think about is you can actually just run an expression
> in
> >> >> the
> >> >> program that is being debugged without needing to change anything in
> >> >> the GDB
> >> >> remote server. So this can all be done via python commands and would
> >> >> require
> >> >> no changes to anything. So you can run an expression to enable the
> >> >> buffer.
> >> >> Since LLDB supports multiple line expression that can define their
> own
> >> >> local
> >> >> variables and local types. So the expression could be something like:
> >> >>
> >> >> int perf_fd = (int)perf_event_open(...);
> >> >> struct PerfData
> >> >> {
> >> >> void *data;
> >> >> size_t size;
> >> >> };
> >> >> PerfData result = read_perf_data(perf_fd);
> >> >> result
> >> >>
> >> >>
> >> >> The result is then a structure that you can access from your python
> >> >> command (it will be a SBValue) and then you can read memory in order
> to
> >> >> get
> >> >> the perf data.
> >> >>
> >> >> You can also split things up into multiple calls where you can run
> >> >> perf_event_open() on its own and return the file descriptor:
> >> >>
> >> >> (int)perf_event_open(...)
> >> >>
> >> >> This expression will return the file descriptor
> >> >>
> >> >> Then you could allocate memory via the SBProcess:
> >> >>
> >> >> (void *)malloc(1024);
> >> >>
> >> >> The result of this expression will be the buffer that you use...
> >> >>
> >> >> Then you can read 1024 bytes at a time into this newly created
> buffer.
> >> >>
> >> >> So a solution that is completely done in python would be very
> >> >> attractive.
> >> >>
> >> >> Greg
> >> >>
> >> >>
> >> >> > On Jan 21, 2016, at 7:04 AM, Ravitheja Addepally
> >> >> >  wrote:
> >> >> >
> >> >> > Hello,
> >> >> >   Regarding the questions in this thread please find the
> answers
> >> >> > ->
> >> >> >
> >> >> > How are you going to present this information to the user? (I know
> >> >> > debugserver can report some performance data... Have you looked
> into
> >> >> > how that works? Do you plan to reuse some parts of that
> >> >> > infrastructure?) and How will you get the information from the
> server
> >> >> > to
> >> >> > the client?
> >> >> >
> >> >> >  Currently I plan to show a list of instructions that have been
> >> >> > executed
> >> >> > so far, I saw the
> >> >> > implementation suggested by pavel, the already present
> infrastructure
> >> >> > is
> >> >> > a little bit lacking in terms of the n

Re: [lldb-dev] Ubuntu version-based fail/skip

2016-01-25 Thread Tamas Berghammer via lldb-dev
I think recently we have been trying to reduce the number of decorators we
have, so adding a few new Ubuntu-specific decorators might not be a good
idea. My suggestion would be to move a little bit toward the functional
programming style by adding a new option to @expectedFailureAll where we
can specify a function that has to evaluate to true for the decorator to
be considered (and it is evaluated only after all other conditions of
@expectedFailureAll). Then we can create a free function called
getLinuxDistribution that will return the distribution id, and then as a
final step we can specify a lambda to @expectedFailureAll through its new
argument that calls getLinuxDistribution and compares it with the right
value. I know it is a lot of hoops to jump through to get a
distribution-specific decorator, but I think this approach can handle
arbitrarily complex skip/xfail conditions, which will help us in the future.

What do you think?

Thanks,
Tamas


On Fri, Jan 22, 2016 at 6:31 PM Todd Fiala  wrote:

> Hey all,
>
> What do you think about having some kind of way of marking the (in this
> case, specifically) Ubuntu distribution for fail/skip test decorators?
> I've had a few cases where I've needed to mark tests failing on for Ubuntu
> where it really was only a particular release of an Ubuntu distribution,
> and wasn't specifically the compiler.  (i.e. it was a constellation of more
> moving parts that clearly occur on a particular release of an Ubuntu
> distribution but not on others, and certainly not generically across all
> Linux distributions).
>
> I'd love to have a way to skip and xfail a test for a specific Ubuntu
> distribution release.  I guess it could be done uber-genrically, but with
> Linux distributions this can get complicated due to the os/distribution
> axes.  So I'd be happy to start off with just having them at a distribution
> basis:
>
> @skipIfUbuntu(version_check_list)  # version_check_list contains one or
> more version checks that, if passing, trigger the skip
>
> @expectedFailureUbuntu(version_check_list)  # similar to above
>
> Or possibly more usefully,
>
> @skipIfLinuxDistribution(version_check_list)  # version_check_list
> contains one or more version checks that, if passing, trigger the skip,
> includes the distribution
>
> @expectedFailureLinuxDistribution(version_check_list)  # similar to above
>
>
> It's not clear to me how to work in the os=linux, distribution=Ubuntu into
> the more generic checks like and get distribution-level version checking
> working right otherwise, but I'm open to suggestions.
>
> The workaround for the short term is to just use blanket-linux @skipIf and
> @expectedFailure style calls.
>
> Thoughts?
> --
> -Todd
>


Re: [lldb-dev] How to load core on a different machine?

2016-01-06 Thread Tamas Berghammer via lldb-dev
I would try to set target.exec-search-paths (before loading the core file)
to the directory containing the binaries downloaded from the server. Then
lldb should start searching for the shared libraries in the listed
directories.
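For example (the paths are hypothetical, and the setting must be changed before the core file is loaded):

```
(lldb) settings set target.exec-search-paths /home/eugene/server-binaries
(lldb) target create --core /home/eugene/core
```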

On Wed, Jan 6, 2016 at 7:03 PM Eugene Birukov via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hmm... neither approach really works.
>
> 1. I created platform from lldb prompt, but when I create target from core
> file I see exactly the same wrong stacks. It seems that platform is ignored
> during core load in my case.
> 2. chroot requires the whole set of binaries there in the new root. I
> simply cannot copy everything from the server. Even if I do, lldb will use
> copied binaries which is not a good idea...
>
> *root@eugenebi-L2:~# chroot /home/eugene/tmp*
> *chroot: failed to run command ‘/bin/bash’: No such file or directory*
>
>
> 3. I tried SBDebugger::SetCurrentPlatformSDKRoot() but it does not have
> any visible effect on load core, not sure what it is supposed to do :)
>
> Eugene
>
> > Subject: Re: [lldb-dev] How to load core on a different machine?
> > From: gclay...@apple.com
> > Date: Tue, 5 Jan 2016 15:04:36 -0800
> > CC: lldb-dev@lists.llvm.org
> > To: eugen...@hotmail.com
>
> >
> > Try this:
> >
> > % lldb
> > (lldb) platform select --sysroot /path/to/remote/shared/libraries
> remote-linux
> > (lldb) 
> >
> > If this works, there are SBPlatform class calls in the API you can use
> the select the platform as done above if you need to not do this from the
> command line.
> >
> > The other option is to chroot into /path/to/remote/shared/libraries and
> you will need to copy your core file into /path/to/remote/shared/libraries,
> then just run LLDB normally and it should work.
> >
> > Greg Clayton
> >
> > > On Jan 5, 2016, at 12:53 PM, Eugene Birukov via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > >
> > > Hi,
> > >
> > > I am using LLDB-3.7 on Ubuntu Linux.
> > >
> > > I have a core dump file and all shared libraries from my server but I
> want to investigate them on a dev box. But I fail to correctly load it in
> LLDB - it shows wrong stacks. I.e. I am looking for something equivalent to
> GDB commands "set solib-absolute-prefix" and "set solib-search-path".
> > >
> > > I tried to play with "target modules search-paths insert", but I
> cannot use it if there is no target and I cannot load core after I have a
> target - not sure what this command is intended to do...
> > >
> > > Now, what I really need to do is load the core in my custom debugger
> that uses the C++ API. Here I made some progress:
> > > • Create target with NULL file name
> > > • Load core using SBTarget::LoadCore()
> > > • Manually load all executables - the initial a.out and all the shared
> libraries using SBTarget::AddModule() and SBTarget::SetModuleLoadAddress()
> > > This kind of works, but there are two problems:
> > > • How would I find the list of modules and addresses to load from the
> core file? Currently I did it by loading the core in the debugger on the
> server, but this is not acceptable for a production run...
> > > • LLDB correctly prints stacks and resolves symbols, but I cannot
> disassemble any code - the ReadMemory call returns all zeroes from code addresses.
> > >
> > > Any help would be greatly appreciated.
> > >
> > > Thanks,
> > > Eugene
> > > ___
> > > lldb-dev mailing list
> > > lldb-dev@lists.llvm.org
> > > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >
>
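The module list and load addresses asked about above are recorded in the core file itself, in its NT_FILE ELF note (`readelf -n core` prints it). A minimal sketch of the 64-bit little-endian layout, parsed here from synthetic bytes rather than a real core file:

```python
import struct

def parse_nt_file(data, word="Q"):
    # NT_FILE layout (64-bit): count, page_size, then `count` triples of
    # (start, end, file_offset_in_pages), then `count` NUL-terminated paths.
    ws = struct.calcsize(word)
    count, page_size = struct.unpack_from("<2" + word, data, 0)
    entries, off = [], 2 * ws
    for _ in range(count):
        entries.append(struct.unpack_from("<3" + word, data, off))
        off += 3 * ws
    names = data[off:].split(b"\0")[:count]
    return [(n.decode(), s, e) for (s, e, _), n in zip(entries, names)]

# Synthetic note describing two mappings (made-up paths and addresses).
note = struct.pack("<2Q", 2, 4096)
note += struct.pack("<3Q", 0x400000, 0x401000, 0)
note += struct.pack("<3Q", 0x7f0000000000, 0x7f0000010000, 0)
note += b"/usr/bin/a.out\0/lib/liba.so\0"

for name, start, end in parse_nt_file(note):
    print("%s @ 0x%x-0x%x" % (name, start, end))
```

Each (name, start) pair is the kind of information SBTarget::AddModule() and SBTarget::SetModuleLoadAddress() need; this assumes the kernel wrote the NT_FILE note into the core, otherwise loading the core on the original machine remains the safe path.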


Re: [lldb-dev] [Bug 25896] New: Hide stack frames from specific source files

2015-12-21 Thread Tamas Berghammer via lldb-dev
We are not working on this feature for android at the moment and if we ever
implement it, we will most likely do it on the UI side (inside Android
Studio), the same way Jason described in
https://llvm.org/bugs/show_bug.cgi?id=25896#c1

On Sun, Dec 20, 2015 at 9:01 PM Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Sounds like you almost want the ability to do a backtrace projection.  At
> one point I wanted this for cross C++/Java frames, but I haven't worked on
> that problem in some time.
>
> Android folks - did we ever add anything to support hiding some of the
> trampolines or other call sites involved in the C++/Java transitions?
>
> -Todd
>
> On Sat, Dec 19, 2015 at 3:44 PM, via lldb-dev 
> wrote:
>
>> Bug ID: 25896
>> Summary: Hide stack frames from specific source files
>> Product: lldb
>> Version: unspecified
>> Hardware: All
>> OS: All
>> Status: NEW
>> Severity: enhancement
>> Priority: P
>> Component: All Bugs
>> Assignee: lldb-dev@lists.llvm.org
>> Reporter: chinmayga...@gmail.com
>> CC: llvm-b...@lists.llvm.org
>> Classification: Unclassified
>>
>> When my program is paused in the debugger, I would like to hide stack frames
>> originating from certain source files (or libraries) from appearing in the
>> backtrace. These frames usually correspond to standard library functions 
>> that I
>> am not in the process of actively debugging.
>>
>> On a similar note, I did find `target.process.thread.step-avoid-regexp`, which
>> allows me to avoid stepping into select frames. However, I want to also
>> suppress these frames in the backtrace listing and avoid showing them when
>> moving up and down the backtrace.
>>
>> --
>> You are receiving this mail because:
>>
>>- You are the assignee for the bug.
>>
>>
>>
>>
>
>
> --
> -Todd
>
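The `step-avoid-regexp` setting mentioned in the report matches function names at step time, while the request here is to also filter the printed backtrace by source file. A toy sketch of what such a filter amounts to (the frame records and paths are invented for illustration; this is not LLDB API code):

```python
import re

# Hypothetical (function, source file) frames; a real implementation
# would pull these from the debugger's backtrace.
frames = [
    ("main", "/src/app/main.cpp"),
    ("std::vector<int>::at", "/usr/include/c++/v1/vector"),
    ("compute", "/src/app/math.cpp"),
]

HIDE = re.compile(r"^(/usr/include|/usr/lib)")

def visible_backtrace(frames, hide=HIDE):
    # Drop frames whose source file matches the hide pattern.
    return [(fn, path) for fn, path in frames if not hide.match(path)]

for fn, path in visible_backtrace(frames):
    print(fn, path)  # only the two /src/app frames survive
```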


Re: [lldb-dev] Passing as argument an array in a function call

2015-12-16 Thread Tamas Berghammer via lldb-dev
I verified that LLDB also works correctly in the case of arm and aarch64 on
android (using lldb-server). My guess is that it is a MIPS-specific bug in
the SysV ABI, but I haven't verified it.

Tamas

On Wed, Dec 16, 2015 at 6:37 PM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

>
> > On Dec 16, 2015, at 6:06 AM, Dean De Leo via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > Hi,
> >
> > assume we wish to use the expression evaluator to invoke a function from
> lldb, setting the result into an array passed as parameter, e.g:
> >
> > void test1(uint32_t* d) {
> >     for (int i = 0; i < 6; i++) {
> >         d[i] = 42 + i;
> >     }
> > }
> >
> > where the expected output should be d = {42,43,44,45,46,47}. However,
> evaluating the following expression with an android/mips32 target returns:
> >
> > (lldb) expr -- uint32_t data[6] = {}; test1(data);  data
> > (uint32_t [6]) $4 = ([0] = 0, [1] = 2003456944, [2] = 44, [3] = 45, [4]
> = 2004491136, [5] = 47)
> >
> > Is this an expected behaviour or a bug?
>
> Definitely a bug in LLDB somewhere, or possibly in the memory allocation
> on the MIPS host that is done via lldb-server. Are you using lldb-server
> here? It has an allocate memory packet.
>
> > I suspect the evaluator allocates the memory for data and releases it once
> the expression has been executed?
>
> We allocate memory for the resulting data that continues to exist in your
> process so the memory shouldn't be released.
>
> > If so, can you please advise what's the proper way to achieve the same
> functionality?
>
> This should work so it will be a matter of tracking down what is actually
> failing. If you can run to where you want to run your expression and then
> before you run your expression do:
>
> (lldb) log enable -f /tmp/log.txt gdb-remote packets
> (lldb) log enable -f /tmp/log.txt lldb expr
>
> Then run your expression and then do:
>
> (lldb) log disable gdb-remote packets
> (lldb) log disable lldb expr
>
> Then send the file, we might be able to see what is going on. The GDB
> remote packets will allow us to see the memory that is allocated, and the
> "lldb expr" will allow us to see all of the gory details as to where it is
> trying to use "d".
>


Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-10 Thread Tamas Berghammer via lldb-dev
Hi Todd,

You changed the way the test failure list is printed so that now we
only print the name of the failing test function with the name of the test
file in parentheses. Can we add back the name of the test class to this
list?

There are 2 reasons I am asking for it:
* To run only a specific test we have to specify the "-f" option to
dotest.py, and it takes the fully qualified function name as an argument.
Before your change it was displayed in the test output (in a bit
uncomfortable way), but after your change we have to open the test file and
copy the class name from there to run only a single test suite.
* With the new output format the output of the buildbot only displays the
list of the failing test function names, which isn't very specific in a lot of
cases (e.g. we have several test methods called test_dwarf). This point is
less important as the file name can be added to the output from the
buildbot perspective.

Thanks,
Tamas

On Wed, Dec 9, 2015 at 7:57 PM Ying Chen  wrote:

> I submitted this patch to include "ERROR" lines in buildbot step results.
> http://reviews.llvm.org/rL255145
>
> Error results will be displayed in step result like this after the patch,
> "ERROR: 9 (SIGKILL) test_buildbot_catches_exceptional_exit_dwarf"
>
> Thanks,
> Ying
>
> On Wed, Dec 9, 2015 at 10:45 AM, Todd Fiala  wrote:
>
>> Great, thanks Tamas!
>>
>> I left the default turned on, and just essentially removed the issues by
>> parking them as .py.parked files.  That way we can flip them on in the
>> future if we want to verify a testbot's detection of these.
>>
>> I will be going back to the xUnit Results formatter and making sure it
>> maps timeouts and exceptional errors to the xUnit error type with details.
>>
>> On Wed, Dec 9, 2015 at 10:30 AM, Tamas Berghammer > > wrote:
>>
>>> Thank you for making the experiment. It looks reasonable. For the ERROR
>>> the buildbot detected it and it will fail the build, but it isn't listed in
>>> the list of failing tests, which should be fixed. After this experiment I
>>> think it is fine to change the default output formatter from our side.
>>>
>>> Tamas
>>>
>>> On Wed, Dec 9, 2015 at 6:26 PM Todd Fiala  wrote:
>>>
 The reports look good at the test level:


 http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9294

 I'd say the buildbot reflection script missed the ERROR, so that is
 something maybe Ying can look at (the summary line in the build run), but
 that is unrelated AFAICT.

 I'm going to move aside the failures.

 On Wed, Dec 9, 2015 at 10:13 AM, Todd Fiala 
 wrote:

> I am going to stop the current build on that builder.  There was one
> change in it, and it will be another 20 minutes before it completes.  I
> don't want the repo in a known broken state that long.
>
> On Wed, Dec 9, 2015 at 10:07 AM, Todd Fiala 
> wrote:
>
>> I forced a build on the ubuntu 14.04 cmake builder.  The build
>> _after_ 9292 will contain the two changes (and we will expect failures on
>> it).
>>
>> On Wed, Dec 9, 2015 at 10:05 AM, Todd Fiala 
>> wrote:
>>
>>> These went in as:
>>>
>>> r255130 - turn it on by default
>>> r255131 - create known issues.  This one is to be reverted if all 3
>>> types show up properly.
>>>
>>> On Wed, Dec 9, 2015 at 9:41 AM, Todd Fiala 
>>> wrote:
>>>
 It is a small change.

 I almost have all the trial tests ready, so I'll just commit both
 changes at the same time (the flip on, and the trial balloon issues).

 If all goes well and the three types of issue show up, then the
 last of the two will get reverted (the one with the failures).

 If none (or only some) of the issues show up, they'll both get
 reverted.

 -Todd

 On Wed, Dec 9, 2015 at 9:39 AM, Pavel Labath 
 wrote:

> If it's not too much work, I think the extra bit of noise will not
> be
> a problem. But I don't think it is really necessary either.
>
> I assume the actual flip will be a small change that we can back
> out
> easily if we notice troubles... After a sufficient grace period we
> can
> remove the old formatter altogether and hopefully simplify the code
> somewhat.
>
> pl
>
> On 9 December 2015 at 17:08, Todd Fiala 
> wrote:
> > Here's what I can do.
> >
> > Put in the change (setting the default to use the new format).
> >
> > Separately, put in a trial balloon commit with one failing test,
> one
> > exceptional exit test, and one timeout test, and watch the
> ubuntu 14.04
> > buildbot catch it and fail.  Then reverse this out.  That should
> show beyond
> > a reasonable doubt whether the buil

Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-09 Thread Tamas Berghammer via lldb-dev
Thank you for making the experiment. It looks reasonable. For the ERROR the
buildbot detected it and it will fail the build, but it isn't listed in the
list of failing tests, which should be fixed. After this experiment I think
it is fine to change the default output formatter from our side.

Tamas

On Wed, Dec 9, 2015 at 6:26 PM Todd Fiala  wrote:

> The reports look good at the test level:
>
>
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake/builds/9294
>
> I'd say the buildbot reflection script missed the ERROR, so that is
> something maybe Ying can look at (the summary line in the build run), but
> that is unrelated AFAICT.
>
> I'm going to move aside the failures.
>
> On Wed, Dec 9, 2015 at 10:13 AM, Todd Fiala  wrote:
>
>> I am going to stop the current build on that builder.  There was one
>> change in it, and it will be another 20 minutes before it completes.  I
>> don't want the repo in a known broken state that long.
>>
>> On Wed, Dec 9, 2015 at 10:07 AM, Todd Fiala  wrote:
>>
>>> I forced a build on the ubuntu 14.04 cmake builder.  The build _after_
>>> 9292 will contain the two changes (and we will expect failures on it).
>>>
>>> On Wed, Dec 9, 2015 at 10:05 AM, Todd Fiala 
>>> wrote:
>>>
 These went in as:

 r255130 - turn it on by default
 r255131 - create known issues.  This one is to be reverted if all 3
 types show up properly.

 On Wed, Dec 9, 2015 at 9:41 AM, Todd Fiala 
 wrote:

> It is a small change.
>
> I almost have all the trial tests ready, so I'll just commit both
> changes at the same time (the flip on, and the trial balloon issues).
>
> If all goes well and the three types of issue show up, then the last
> of the two will get reverted (the one with the failures).
>
> If none (or only some) of the issues show up, they'll both get
> reverted.
>
> -Todd
>
> On Wed, Dec 9, 2015 at 9:39 AM, Pavel Labath 
> wrote:
>
>> If it's not too much work, I think the extra bit of noise will not be
>> a problem. But I don't think it is really necessary either.
>>
>> I assume the actual flip will be a small change that we can back out
>> easily if we notice troubles... After a sufficient grace period we can
>> remove the old formatter altogether and hopefully simplify the code
>> somewhat.
>>
>> pl
>>
>> On 9 December 2015 at 17:08, Todd Fiala  wrote:
>> > Here's what I can do.
>> >
>> > Put in the change (setting the default to use the new format).
>> >
>> > Separately, put in a trial balloon commit with one failing test, one
>> > exceptional exit test, and one timeout test, and watch the ubuntu
>> 14.04
>> > buildbot catch it and fail.  Then reverse this out.  That should
>> show beyond
>> > a reasonable doubt whether the buildbot catches new failures and
>> errors.  (I
>> > think this is a noisy way to accomplish this, but it certainly would
>> > validate if its working).
>> >
>> > -Todd
>> >
>> > On Wed, Dec 9, 2015 at 8:06 AM, Todd Fiala 
>> wrote:
>> >>
>> >> Specifically, the markers for issue details are:
>> >>
>> >> FAIL
>> >> ERROR
>> >> UNEXPECTED SUCCESS
>> >> TIMEOUT
>> >>
>> >> (These are the fourth field in the array entries (lines 275 - 290)
>> of
>> >> packages/Python/lldbsuite/test/basic_results_formatter.py).
>> >>
>> >> -Todd
>> >>
>> >> On Wed, Dec 9, 2015 at 8:04 AM, Todd Fiala 
>> wrote:
>> >>>
>> >>> That's a good point, Tamas.
>> >>>
>> >>> I use (so I claim) the same all upper-case markers for the test
>> result
>> >>> details.  Including, not using XPASS but rather UNEXPECTED
>> SUCCESS for
>> >>> unexpected successes.  (The former would trigger the lit script
>> IIRC to
>> >>> parse that as a failing-style result).
>> >>>
>> >>> The intent is this is a no-op on the test runner.
>> >>>
>> >>> On Wed, Dec 9, 2015 at 8:02 AM, Tamas Berghammer <
>> tbergham...@google.com>
>> >>> wrote:
>> 
>>  +Ying Chen
>> 
>>  Ying, what do we have to do on the build bot side to support a
>> change in
>>  the default test result summary formatter?
>> 
>>  On Wed, Dec 9, 2015 at 4:00 PM Todd Fiala via lldb-dev
>>   wrote:
>> >
>> > Hi all,
>> >
>> > Per a previous thread on this, I've made all the changes I
>> intended to
>> > make last night to get the intended replacement of test run
>> results meet or
>> > exceed current requirements.
>> >
>> > I'd like to switch over to that by default.  I'm depending on
>> the test
>> > event system to be able to handle test method reruns in test
>> results
>> > accounting.
>> >
>> > The primary thing missing before was that timeouts were not routed
>> > through the test events system.

Re: [lldb-dev] BasicResultsFormatter - new test results summary

2015-12-09 Thread Tamas Berghammer via lldb-dev
+Ying Chen 

Ying, what do we have to do on the build bot side to support a change in
the default test result summary formatter?

On Wed, Dec 9, 2015 at 4:00 PM Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi all,
>
> Per a previous thread on this, I've made all the changes I intended to
> make last night to get the intended replacement of test run results meet or
> exceed current requirements.
>
> I'd like to switch over to that by default.  I'm depending on the test
> event system to be able to handle test method reruns in test results
> accounting.
>
> The primary thing missing before was that timeouts were not routed through
> the test events system, nor were exception process exits (i.e. test
> inferiors exiting with a signal on POSIX systems).  Those were added last
> night so that test events are generated for those, and the
> BasicResultsFormatter presents that information properly.
>
> I will switch it over to being the default output in a bit here.  Please
> let me know if you have any concerns once I flip it on by default.
>
> Thanks!
> --
> -Todd
>


Re: [lldb-dev] [BUG] Many lookup failures

2015-12-01 Thread Tamas Berghammer via lldb-dev
On Tue, Dec 1, 2015 at 2:11 AM David Blaikie via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> On Mon, Nov 30, 2015 at 6:04 PM, Ramkumar Ramachandra 
> wrote:
>
>> On Mon, Nov 30, 2015 at 5:42 PM, Greg Clayton  wrote:
>> > When we debug "a.out" again, we might have recompiled "liba.so", but
>> not "libb.so" and when we debug again, we don't need to reload the debug
>> info for "libb.so" if it hasn't changed, we just reload "liba.so" and its
>> debug info. When we rerun a target (run a.out again), we don't need to
>> spend any time reloading any shared libraries that haven't changed since
>> they are still in our global shared library cache. So to keep this global
>> library cache clean, we don't allow types from another shared library
>> (libb.so) to be loaded into another (liba.so), otherwise we wouldn't be
>> able to reap the benefits of our shared library cache as we would always
>> need to reload debug info every time we run.
>>
>> Tangential: gdb starts up significantly faster than lldb. I wonder
>> what lldb is doing wrong.
>>
>> Oh, this is if I use the lldb that Apple supplied. If I compile my own
>> lldb with llvm-release, clang-release, and lldb-release, it takes like
>> 20x the time to start up: why is this? And if I use llvm-debug,
>> clang-debug, lldb-debug, the time it takes is completely unreasonable.
>>
>
> If you built your own you probably built a +Asserts build which slows
> things down a lot. You'll want to make sure you're building Release-Asserts
> (Release "minus" Asserts) builds if you want them to be usable.
>

What do you mean by startup speed and how do you measure it? I use a
Release+Assert build of ToT LLDB on Linux and it takes significantly less
time for it to start up when debugging a large application (I usually test
with a debug clang) than what you mentioned.

For me just starting up LLDB is almost instantaneous (~100ms) as it doesn't
parse any symbol or debug information at that time. If I trigger some debug
info parsing/indexing (by setting a breakpoint) then the startup time
will be around 3-5 seconds (40 core + ssd machine), which includes an indexing
of all DIEs (it should be faster on darwin as the index is already in the
executable). On the other hand doing the same with gdb takes ~30 seconds
(whether I set a breakpoint or not) because gdb parses all symbol
info at startup.

I would like to understand why you are seeing such a slow startup time as I
have worked on optimizing symbol parsing quite a bit in the last few months.
Can you send me some information about how you measure the startup time (lldb
commands, some info about the inferior) and can you do a quick profile to
see where the time is spent?


>
>>
>> > LLDB currently recreates types in a clang::ASTContext and this imposes
>> much stricter rules on how we represent types which is one of the
>> weaknesses of the LLDB approach to type representation as the clang
>> codebase often asserts when it is not happy with how things are
>> represented. This does payoff IMHO in the complex expressions we can
>> evaluate where we can use flow control, define and use C++ lambdas, and
>> write more than one statement when writing expressions. But it is
>> definitely a tradeoff. GDB has its own custom type representation which can
>> be better for dealing with the different kinds and completeness of debug
>> info, but I am comfortable with our approach.
>>
>> Yeah, about that. I question the utility of evaluating crazy
>> expressions in lldb: I've not felt the need to do that even once, and
>> I suspect a large userbase is with me on this. What's important is
>> that lldb should _never_ fail to inspect a variable: isn't this the #1
>> job of the debugger?
>>
>
> Depends on the language - languages with more syntactic sugar basically
> need crazy expression evaluation to function very well in a debugger for
> the average user. (evaluating operator overloads in C++ expressions, just
> being able to execute non-trivial pretty-printers for interesting types
> (std::vector being a simple example, or a small-string optimized
> std::string, etc - let alone examples in ObjC or even Swift))
>

If you just want to inspect the content of a variable then I suggest using
the "frame variable" command as it requires significantly less context than
evaluating an expression. Unfortunately it can still fail in some cases
with the same lookup failure you see, but it happens in significantly fewer
cases.


> - Dave
>


Re: [lldb-dev] reply: reply: lldb debug jit-compiled code with llvm on windows

2015-11-30 Thread Tamas Berghammer via lldb-dev
On Mon, Nov 30, 2015 at 10:18 AM haifeng_q  wrote:

> Question 1:
> On Windows, I use the code to implement a function (see
> debug_target.cpp) with the JIT (see debug_target_process.cpp), but when
> generating debug information, no .symtab section is produced, which leads
> LLDB to fail to get the JIT debugging information from .symtab, and then
> setting a breakpoint fails.
>  LLDB command: lldb_result.txt
>  JIT compilation results: debug_target.ll
>
>  Question 2:
>  How JIT debugging supported on Linux?
>

In theory, when a new function is JIT-ed, the __jit_debug_register_code
function is called, on which LLDB has a breakpoint set. When that breakpoint
is hit, LLDB reads the JIT-ed ELF file based on the information in
__jit_debug_descriptor
and processes all debug info in it.

In practice, when I last tried JIT debugging with lldb and lli (a few weeks
ago) it got the notification for the new JIT-ed ELF file but it processed
only the eh_frame from it even though a symtab and full debug info were also
provided. Most likely there is some issue around the JIT breakpoint
handling or around the ELF file parsing code in LLDB which needs
some investigation.
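The mechanism described above is the GDB JIT interface: the JIT keeps a linked list of in-memory object files reachable from `__jit_debug_descriptor` and calls `__jit_debug_register_code` after each update, which is where the debugger plants its breakpoint. A self-contained ctypes sketch of the descriptor layout and the list walk a debugger performs (everything lives in one process here; a real debugger reads these structures out of the inferior's memory):

```python
import ctypes

class JITCodeEntry(ctypes.Structure):
    pass  # self-referential, so fields are attached below

JITCodeEntry._fields_ = [
    ("next_entry", ctypes.POINTER(JITCodeEntry)),
    ("prev_entry", ctypes.POINTER(JITCodeEntry)),
    ("symfile_addr", ctypes.c_char_p),   # start of the in-memory ELF
    ("symfile_size", ctypes.c_uint64),
]

class JITDescriptor(ctypes.Structure):
    _fields_ = [
        ("version", ctypes.c_uint32),
        ("action_flag", ctypes.c_uint32),  # 1 == JIT_REGISTER_FN
        ("relevant_entry", ctypes.POINTER(JITCodeEntry)),
        ("first_entry", ctypes.POINTER(JITCodeEntry)),
    ]

# Two byte blobs standing in for two JIT-ed object files.
blob_a, blob_b = b"\x7fELF...a", b"\x7fELF...bb"
e2 = JITCodeEntry(None, None, blob_b, len(blob_b))
e1 = JITCodeEntry(ctypes.pointer(e2), None, blob_a, len(blob_a))
desc = JITDescriptor(1, 1, ctypes.pointer(e1), ctypes.pointer(e1))

def walk(desc):
    # Follow first_entry->next_entry the way a debugger would.
    sizes, entry = [], desc.first_entry
    while entry:                       # a NULL pointer is falsy
        sizes.append(entry.contents.symfile_size)
        entry = entry.contents.next_entry
    return sizes

print(walk(desc))  # prints [8, 9], the size of each registered symfile
```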

>
> thanks!
>
> -- The original message --
> *From:* "Zachary Turner";;
> *Date:* Nov 21, 2015, 12:10 AM
> *To:* "Tamas Berghammer"; " "<
> haifen...@foxmail.com>; "lldb-dev";
> *Subject:* Re: [lldb-dev] reply: lldb debug jit-compiled code with llvm on
> windows
>
> Can you also try clang-cl and see if it works?
>
> On Fri, Nov 20, 2015 at 3:02 AM Tamas Berghammer via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> I don't know how JIT debugging should work on Windows with MSVC but I
>> don't think LLDB supports it in any way. What I wrote should be true on
>> Linux (and on some related) systems. You might be able to get the same
>> results on Windows if you use lli (LLVM based JIT runner) but I have no
>> knowledge if it will work or not.
>>
>> On Fri, Nov 20, 2015 at 8:56 AM haifeng_q  wrote:
>>
>>> From my analysis of the code, the reasons are:
>>>
>>> Since the debugged process is compiled with MSVC, there is no DWARF
>>> debugging information. So lldb fails to get the __jit_debug_register_code
>>> and __jit_debug_descriptor symbols from the debugged process, and
>>> __jit_debug_register_code is not supported with MSVC.
>>> I do not know whether this is correct?
>>>
>>> -- original message--
>>> *From:*"Tamas Berghammer";tbergham...@google.com;
>>> *Date:* Nov 19, 2015, 8:37 PM
>>> *To:* " "; "lldb-dev"<
>>> lldb-dev@lists.llvm.org>;
>>> *Subject:* Re: [lldb-dev] lldb debug jit-compiled code with llvm on
>>> windows
>>>
>>> In theory you don't have to do anything special to debug some JIT-ed
>>> code as everything should just work (based on the gdb jit interface). In
>>> practice I tried it out a few days ago and it wasn't working at all even in
>>> the case when the application is launched under LLDB (not with attach).
>>> LLDB was understanding the eh_frame for the JIT-ed code but didn't find
>>> the debug info for an unknown reason. We should investigate it and try to fix
>>> it sometime. We (lldb for android developers) plan to do it sometime but if
>>> you are interested in it then please feel free to take a look and let us
>>> know if you have any question.
>>>
>>> Tamas
>>>
>>> On Thu, Nov 19, 2015 at 8:40 AM haifeng_q via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>>> hi,
>>>> Process A generates code for a function Func1 with the llvm jit
>>>> compiler, and calls Func1. Modeled on "Kaleidoscope: Adding Debug
>>>> Information", it adds debug information. How can I use LLDB to attach to
>>>> process A to debug this function and add a breakpoint in the function?
>>>>
>>>> thanks!
>>>>
>>
>


Re: [lldb-dev] reply: lldb debug jit-compiled code with llvm on windows

2015-11-20 Thread Tamas Berghammer via lldb-dev
I don't know how JIT debugging should work on Windows with MSVC but I don't
think LLDB supports it in any way. What I wrote should be true on Linux (and
on some related) systems. You might be able to get the same results on
Windows if you use lli (LLVM based JIT runner) but I have no knowledge if
it will work or not.

On Fri, Nov 20, 2015 at 8:56 AM haifeng_q  wrote:

> From my analysis of the code, the reasons are:
>
> Since the debugged process is compiled with MSVC, there is no DWARF
> debugging information. So lldb fails to get the __jit_debug_register_code
> and __jit_debug_descriptor symbols from the debugged process, and
> __jit_debug_register_code is not supported with MSVC.
> I do not know whether this is correct?
>
> -- original message--
> *From:*"Tamas Berghammer";tbergham...@google.com;
> *Date:* Nov 19, 2015, 8:37 PM
> *To:* " "; "lldb-dev";
>
> *Subject:* Re: [lldb-dev] lldb debug jit-compiled code with llvm on
> windows
>
> In theory you don't have to do anything special to debug some JIT-ed code
> as everything should just work (based on the gdb jit interface). In
> practice I tried it out a few days ago and it wasn't working at all even in
> the case when the application is launched under LLDB (not with attach).
> LLDB was understanding the eh_frame for the JIT-ed code but didn't find
> the debug info for an unknown reason. We should investigate it and try to fix
> it sometime. We (lldb for android developers) plan to do it sometime but if
> you are interested in it then please feel free to take a look and let us
> know if you have any question.
>
> Tamas
>
> On Thu, Nov 19, 2015 at 8:40 AM haifeng_q via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> hi,
>> Process A generates code for a function Func1 with the llvm jit compiler,
>> and calls Func1. Modeled on "Kaleidoscope: Adding Debug Information", it
>> adds debug information. How can I use LLDB to attach to process A to debug
>> this function and add a breakpoint in the function?
>>
>> thanks!
>>
>


Re: [lldb-dev] lldb debug jit-compiled code with llvm on windows

2015-11-19 Thread Tamas Berghammer via lldb-dev
In theory you don't have to do anything special to debug some JIT-ed code
as everything should just work (based on the gdb jit interface). In
practice I tried it out a few days ago and it wasn't working at all even in
the case when the application is launched under LLDB (not with attach).
LLDB was understanding the eh_frame for the JIT-ed code but didn't find
the debug info for an unknown reason. We should investigate it and try to fix
it sometime. We (lldb for android developers) plan to do it sometime but if
you are interested in it then please feel free to take a look and let us
know if you have any question.

Tamas

On Thu, Nov 19, 2015 at 8:40 AM haifeng_q via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> hi,
> Process A generates code for a function Func1 with the llvm jit compiler,
> and calls Func1. Modeled on "Kaleidoscope: Adding Debug Information", it
> adds debug information. How can I use LLDB to attach to process A to debug
> this function and add a breakpoint in the function?
>
> thanks!
>


Re: [lldb-dev] Apple LLDB OS X build bot

2015-11-02 Thread Tamas Berghammer via lldb-dev
Hi Todd,

Thank you for setting up the new buildbot. I have a few questions about it:
* Is it running the test suite or only doing a build?
* If the test suite is run, then where can we see the results of the tests?

Thanks,
Tamas

On Wed, Oct 28, 2015 at 2:03 PM Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi all,
>
> I've made a few changes to the Apple OS X buildbot today.  These are
> mostly minor, but the key is to make sure we all know when it's broken.
>
> First off, it now builds the lldb-tool scheme using the Debug
> configuration.  (Previously it was building a BuildAndIntegration
> configuration, which nobody outside Apple is ever going to be able to build
> right).
>
> Second, it no longer tries to build a signed debugserver and instead uses
> the system debugserver.
>
> At this point, if you get an email on a broken build, please make sure to
> do the typical courteous thing and (1) fix it if you know how, (2) reach
> out and ask us if we know how if it is a platform-specific issue, or (3)
> revert until we figure out a way to get it working for everyone.
>
> You can get to the builder here:
> http://lab.llvm.org:8080/green/job/LLDB/
>
> It's part of the newer Jenkins-style builders that llvm.org has been
> trying out.
>
> It is configured to send emails on a transition from green to red.
>
> Here's the current green build:
> http://lab.llvm.org:8080/green/job/LLDB/13827/
>
> Thanks!
> --
> -Todd
>


Re: [lldb-dev] Moving pexpect and unittest2 to lldb/third_party

2015-10-22 Thread Tamas Berghammer via lldb-dev
Hi Zach,

I think nobody is using the "if __name__ == '__main__'" block as executing
a test file directly isn't working at the moment (the "import lldb" command
fails). If you plan to change all test files then I would prefer to remove
the reference to unittest2 from them for simplicity, if nobody has an
objection against it.

Tamas

On Wed, Oct 21, 2015 at 8:57 PM Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> *TL;DR - Nobody has to do anything, this is just a heads up that a 400+
> file CL is coming.*
>
> IANAL, but I've been told by one that I need to move all third party code
> used by LLDB to lldb/third_party.  Currently there is only one thing there:
> the Python `six` module used for creating code that is portable across
> Python 2 and Python 3.
>
> The only other 2 instances that I'm aware of are pexpect and unittest2,
> which are under lldb/test.  I've got some patches locally which move
> pexpect and unittest2 to lldb/third_party.  I'll hold off on checking them
> in for a bit to give people a chance to see this message first, because
> otherwise you might be surprised when you see a CL with 400 files being
> checked in.
>
> Nobody will have to do anything after this CL goes in, and everything
> should continue to work exactly as it currently does.
>
> The main reason for the churn is that pretty much every single test in
> LLDB does something like this:
>
> *import unittest2*
>
> ...
>
> if __name__ == '__main__':
> import atexit
> lldb.SBDebugger.Initialize()
> atexit.register(lambda: lldb.SBDebugger.Terminate())
> *unittest2.main()*
>
> This worked when unittest2 was a subfolder of test, but not when it's
> somewhere else.  Since LLDB's python code is not organized into a standard
> python package and we treat the scripts like dotest etc as standalone
> scripts, the way I've made this work is by introducing a module called
> *lldb_shared* under test which, when you import it, fixes up sys.path to
> correctly add all the right locations under lldb/third_party.
>
> So, every single test now needs a line at the top to import lldb_shared.
>
> TBH I don't even know if we need this unittest2 stuff anymore (does anyone
> even use it?)  but even if the answer is no, then that still means changing
> every file to delete the import statement and the if __name__ ==
> '__main__': block.
>
> If there are no major concerns I plan to check this in by the end of the
> day, or tomorrow.
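Zachary's description of the lldb_shared module can be sketched roughly like this (a hypothetical sketch only; the directory layout under lldb/third_party and the package list are assumptions for illustration, not the real module's contents):

```python
# Sketch of an lldb_shared-style fix-up module: importing it adds the
# vendored third-party packages to sys.path so test files can keep using
# "import unittest2" etc. after the packages move.
import os
import sys

def add_third_party_paths():
    # Resolve relative to this file when possible, else the working dir.
    if "__file__" in globals():
        this_dir = os.path.dirname(os.path.abspath(__file__))
    else:
        this_dir = os.getcwd()
    third_party = os.path.join(this_dir, "..", "third_party", "Python", "module")
    for name in ("six", "pexpect", "unittest2"):
        path = os.path.normpath(os.path.join(third_party, name))
        if path not in sys.path:
            sys.path.insert(0, path)

add_third_party_paths()
```

A test file would then only need one `import lldb_shared` line at the top before importing unittest2 or pexpect.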


Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-21 Thread Tamas Berghammer via lldb-dev
I've seen very similar error messages when debugging an application compiled
with fission (split/dwo) debug info on Linux with a release version of LLDB
compiled from ToT. When I tested the same with a debug or with a
release+asserts build I hit some assertions inside clang. It might be worth
checking whether the same happens in your case, as it might help find
the root cause.

In my case the issue is that we somehow end up with two FieldDecl objects for
a given field inside one of the CXXRecordDecl objects, and then when we do
a pointer-based lookup we go wrong. I haven't figured out why it is
happening and haven't managed to reproduce it reliably either, but I plan
to look into it in the near future if nobody beats me to it.

Tamas

On Wed, Oct 21, 2015 at 4:46 PM Ramkumar Ramachandra via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> So first, an addendum: I found a way to make the project build without
> using a symlink, and use a direct reference instead. The problem still
> persists. It may be that symlink is one of the problems, but it is
> certainly not the only problem.
>
> On Tue, Oct 20, 2015 at 8:22 PM, Greg Clayton  wrote:
> > int
> > Declaration::Compare(const Declaration& a, const Declaration& b)
> > {
> > int result = FileSpec::Compare(a.m_file, b.m_file, true);
> > if (result)
>
> Wait, won't FileSpec::Compare be true iff a.m_file is the same as
> b.m_file (excluding symlink resolution)? If so, why are we putting the
> symlink-checking logic in the true branch of the original
> FileSpec::Compare? Aren't we expanding the scope of what we match,
> instead of narrowing it?
>
> > {
> >     int symlink_result = result;
> >     if (a.m_file.GetFilename() == b.m_file.GetFilename())
> >     {
> >         // Check if the directories in a and b are symlinks to each other
> >         FileSpec resolved_a;
> >         FileSpec resolved_b;
> >         if (FileSystem::ResolveSymbolicLink(a.m_file, resolved_a).Success() &&
> >             FileSystem::ResolveSymbolicLink(b.m_file, resolved_b).Success())
> >         {
> >             symlink_result = FileSpec::Compare(resolved_a, resolved_b, true);
>
> I'm confused. Shouldn't the logic be "check literal equality; if true,
> return immediately; if not, check equality with symlink resolution"?
>
> >         }
> >     }
> >     if (symlink_result != 0)
> >         return symlink_result;
> > }
> > if (a.m_line < b.m_line)
> >     return -1;
> > else if (a.m_line > b.m_line)
> >     return 1;
> > #ifdef LLDB_ENABLE_DECLARATION_COLUMNS
> > if (a.m_column < b.m_column)
> >     return -1;
> > else if (a.m_column > b.m_column)
> >     return 1;
> > #endif
> > return 0;
> > }
>
> Here's my version of the patch, although I'm not sure when the code
> will be reached.
>
> int
> Declaration::Compare(const Declaration& a, const Declaration& b)
> {
>     int result = FileSpec::Compare(a.m_file, b.m_file, true);
>     if (result)
>         return result;
>     if (a.m_file.GetFilename() == b.m_file.GetFilename()) {
>         // Check if one of the directories is a symlink to the other
>         int symlink_result = result;
>         FileSpec resolved_a;
>         FileSpec resolved_b;
>         if (FileSystem::ResolveSymbolicLink(a.m_file, resolved_a).Success() &&
>             FileSystem::ResolveSymbolicLink(b.m_file, resolved_b).Success())
>         {
>             symlink_result = FileSpec::Compare(resolved_a, resolved_b, true);
>             if (symlink_result)
>                 return symlink_result;
>         }
>     }
>     if (a.m_line < b.m_line)
>         return -1;
>     else if (a.m_line > b.m_line)
>         return 1;
> #ifdef LLDB_ENABLE_DECLARATION_COLUMNS
>     if (a.m_column < b.m_column)
>         return -1;
>     else if (a.m_column > b.m_column)
>         return 1;
> #endif
>     return 0;
> }
>
> If you're confident that this solves a problem, I can send it as a
> code review or something (and set up git-svn, sigh).


Re: [lldb-dev] TestRaise.py test_restart_bug flakey stats

2015-10-19 Thread Tamas Berghammer via lldb-dev
The expected flakey marking works a bit differently than you described:
* Run the test
* If it passes, it is recorded as a successful test and we are done
* Run the test again
* If it passes the 2nd time then record it as an expected failure (IMO
expected flakey would be a better result, but we don't have that category)
* If it fails 2 times in a row then record it as a failure, because a flakey
test should pass at least once in every 2 runs (this means we need a ~95%
success rate to keep the build bot green most of the time). If it isn't
passing often enough for that then it should be marked as an expected failure.
This is done this way to detect the case when a flakey test gets broken
completely by a new change.
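The retry policy Tamas describes can be sketched as follows (the `categorize_flakey` helper and the category strings are illustrative, not the test runner's real API):

```python
def categorize_flakey(run_test):
    """Apply the expected-flakey policy described above: one retry,
    pass-on-retry counts as an expected failure (really "expected
    flakey"), and two failures in a row count as a real failure."""
    if run_test():
        return "success"
    if run_test():
        return "expected failure"  # flaked once, passed on retry
    return "failure"               # failed twice in a row

# A test with a ~50% pass rate will often fail twice in a row, so
# marking it flakey would still leave the build bot red at times.
```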

I checked some stats for TestRaise on the build bot, and under the current
definition of expected flakey we shouldn't mark it as flakey, because it
will often fail 2 times in a row (its passing rate is ~50%), which will be
reported as a failure, making the build bot red.

I will send you the full stats from the last 100 builds in a separate
off-list mail as it is too big for the mailing list. If somebody else is
interested in it then let me know.

Tamas

On Sun, Oct 18, 2015 at 2:18 AM Todd Fiala  wrote:

> Nope, no good either when I limit the flakey to DWO.
>
> So perhaps I don't understand how the flakey marking works.  I thought it
> meant:
> * run the test.
> * If it passes, it goes as a successful test.  Then we're done.
> * run the test again.
> * If it passes, then we're done and mark it a successful test.  If it
> fails, then mark it an expected failure.
>
> But that's definitely not the behavior I'm seeing, as a flakey marking in
> the above scheme should never produce a failing test.
>
> I'll have to revisit the flakey test marking to see what it's really doing
> since my understanding is clearly flawed!
>
> On Sat, Oct 17, 2015 at 5:57 PM, Todd Fiala  wrote:
>
>> Hmm, the flakey behavior may be specific to dwo.  Testing it locally as
>> unconditionally flaky on Linux is failing on dwarf.  All the ones I see
>> succeed are dwo.  I wouldn't expect a diff there but that seems to be the
>> case.
>>
>> So, the request still stands but I won't be surprised if we find that dwo
>> sometimes passes while dwarf doesn't (or at least not enough to get through
>> the flakey setting).
>>
>> On Sat, Oct 17, 2015 at 4:57 PM, Todd Fiala  wrote:
>>
>>> Hi Tamas,
>>>
>>> I think you grabbed me stats on failing tests in the past.  Can you dig
>>> up the failure rate for TestRaise.py's test_restart_bug() variants on
>>> Ubuntu 14.04 x86_64?  I'd like to mark it as flaky on Linux, since it is
>>> passing most of the time over here.  But I want to see if that's valid
>>> across all Ubuntu 14.04 x86_64.  (If it is passing some of the time, I'd
>>> prefer marking it flakey so that we don't see unexpected successes).
>>>
>>> Thanks!
>>>
>>> --
>>> -Todd
>>>
>>
>>
>>
>> --
>> -Todd
>>
>
>
>
> --
> -Todd
>


Re: [lldb-dev] Question on assert

2015-10-15 Thread Tamas Berghammer via lldb-dev
Hi Todd,

The 64-bit ID of a DIE is built up in the following way:
* The offset of the DIE is in the lower 32 bits
* If we are using SymbolFileDWARF then the higher 32 bits are the offset of
the compile unit this DIE belongs to
* If we are using SymbolFileDWARFDwo then the higher 32 bits are the offset
of the base compile unit in the parent SymbolFileDWARF
* If we are using SymbolFileDWARFDebugMap then the higher 32 bits are the ID
of the SymbolFileDWARF this DIE belongs to
* If the higher 32 bits are 0 then that means the source of the DIE
isn't specified

The assert then tries to verify that one of the following conditions holds:
* The higher 32 bits of "id" are 0, which means we don't have a symbol
file pointer (AFAIK shouldn't happen) or we are coming from a
SymbolFileDWARF
* The higher 32 bits of "cu_id" are 0, which means the compile unit is at
offset 0, which is the case for the single compile unit in
SymbolFileDWARFDwo (and I think for SymbolFileDWARFDebugMap)
* The higher 32 bits of "id" (which are the ID of the SymbolFileDWARF we
belong to) match the higher 32 bits of "cu_id" (which are the offset
of the compile unit in the base object file)
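As an illustration, the ID layout and the assert's three conditions can be modeled like this (a sketch only; the function names are made up, and the upper-half mask mirrors the 0xffffffff00000000 mask in the real assert):

```python
UPPER = 0xffffffff00000000  # mask for the "source" half of the 64-bit ID

def make_die_id(source_id, die_offset):
    # Higher 32 bits identify the source (CU offset or SymbolFileDWARF
    # ID, as described above); lower 32 bits are the DIE offset.
    return (source_id << 32) | (die_offset & 0xffffffff)

def assert_holds(die_id, cu_id):
    # The assert passes if any one of the three conditions is true.
    return ((die_id & UPPER) == 0 or
            (cu_id & UPPER) == 0 or
            (die_id & UPPER) == (cu_id & UPPER))
```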

After thinking a bit more about the assert I think the problem is that the
way I calculate cu_id is incompatible with the case where we are using
SymbolFileDWARFDebugMap.

I think changing line 188 to the following should fix the issue:
lldb::user_id_t cu_id = m_cu->GetID() & 0xffffffffull;

Please give it a try on OSX and let me know if it helps. I tested it on
Linux and it doesn't cause any regression there.

Thanks,
Tamas

On Wed, Oct 14, 2015 at 9:13 PM Todd Fiala  wrote:

> Hi Tamas,
>
> There is an assert in DWARFDIE.cpp (lines 189 - 191) that we're hitting on
> the OS X side somewhat frequently nowadays:
>
> assert ((id & 0xffffffff00000000ull) == 0 ||
>         (cu_id & 0xffffffff00000000ll) == 0 ||
>         (id & 0xffffffff00000000ull) == (cu_id & 0xffffffff00000000ll));
>
>
> It does not seem to get hit consistently.  We're trying to tease apart
> what it is trying to do.  It's a bit strange since it is saying that the
> assert should not fire if any one of three clauses is true.  But it's hard
> to figure out what exactly is going on there.
>
>
> Can you elucidate what this is trying to do?  Thanks!
>
> --
> -Todd
>


Re: [lldb-dev] LLDB: Unwinding based on Assembly Instruction Profiling

2015-10-15 Thread Tamas Berghammer via lldb-dev
If we are trying to unwind from a non-call site (frame 0 or a signal
handler) then the current implementation first tries to use the non-call-site
unwind plan (usually assembly emulation) and if that one fails then it falls
back to the call-site unwind plan (eh_frame, compact unwind info, etc.)
instead of falling back to the architecture default unwind plan, because it
should be a better guess in general, and we usually fail with the assembly
emulation based unwind plan for hand-written assembly functions, where
eh_frame is usually valid at all addresses.
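The fallback order described above can be sketched as (the Plan tuple and its `valid` flag are illustrative, not LLDB's UnwindPlan API):

```python
from collections import namedtuple

# Toy stand-in for an unwind plan; `valid` models "this plan can
# describe the current PC".
Plan = namedtuple("Plan", "name valid")

def pick_unwind_plan(is_call_site, non_call_site_plan, call_site_plan,
                     arch_default_plan):
    """For a non-call-site frame (frame 0 or a signal handler), try the
    assembly-emulation plan first, then fall back to eh_frame/compact
    unwind, and only last to the architecture default."""
    if not is_call_site:
        for plan in (non_call_site_plan, call_site_plan, arch_default_plan):
            if plan is not None and plan.valid:
                return plan
    return call_site_plan
```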

Generating asynchronous eh_frame (valid at all addresses) is possible with
gcc (I am not sure about clang) but there is no way to tell whether a given
eh_frame inside an object file is valid at all addresses or only at call
sites. The best approximation we can make is to say that each eh_frame
entry is valid only at the address it specifies as its start address, but
we don't make use of that in LLDB at the moment.

For the 2nd part of the original question, I think changing the eh_frame
based unwind plan after a failed unwind using instruction emulation is only
a valid option for the PC we tried to unwind from, because the
assembly based unwind plan could be valid at other parts of the function.
Making the change for that one concrete PC address would make sense, but it
would have practically no effect, because the next time we want to unwind
from the given address we use the same fallback mechanism as in the first
case, so the change would bring only a very small performance gain.

Tamas

On Wed, Oct 14, 2015 at 9:36 PM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

>
> > On Oct 14, 2015, at 1:02 PM, Joerg Sonnenberger via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > On Wed, Oct 14, 2015 at 11:42:06AM -0700, Greg Clayton via lldb-dev
> wrote:
> >> EH frame can't be used to unwind when we are in the first frame because
> >> it is only valid at call sites. It also can't be used in frames that
> >> are asynchronously interrupted like signal handler frames.
> >
> > This is not necessarily true, GCC can build them like that. I don't
> > think we have a flag for clang/LLVM to create full async unwind tables.
>
> Most compilers don't generate stuff that is complete, and if it is
> complete, I am not aware of any markings on EH frame that states it is
> complete. So we really can't use it unless we know the info is complete.
> Was there ever an additional augmentation letter that was attached to the
> complete EH frame info?
>
>


Re: [lldb-dev] changing default test runner from multiprocessing-based to threading-based

2015-09-22 Thread Tamas Berghammer via lldb-dev
One more point in addition to Zachary's comment: currently, if LLDB crashes
during a test we report the test failure somewhat correctly (not perfectly).
With a multi-threaded approach I would expect an LLDB crash to take down the
full test run, which isn't something we want.
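The isolation point can be demonstrated with plain subprocesses: a child that dies on a signal is reported via its return code while the parent runner survives (a generic sketch, not the dotest/dosep implementation):

```python
import subprocess
import sys

def run_isolated(code):
    """Run a snippet in its own interpreter, so a hard crash only kills
    that one "test" process; this mirrors why a process-per-test runner
    can report a crashed LLDB as a single test failure."""
    proc = subprocess.run([sys.executable, "-c", code])
    if proc.returncode < 0:
        # On POSIX, a negative return code means death by signal.
        return "crashed (signal %d)" % -proc.returncode
    return "passed" if proc.returncode == 0 else "failed"
```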

On Tue, Sep 22, 2015 at 12:03 AM Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> After our last discussion, I thought about it some more and there are at
> least some problems with this.  The biggest problem is that with only a
> single process, you are doing all tests from effectively a single instance
> of LLDB.  There's a TestMultipleDebuggers.py for example, and whether or
> not that test passes is equivalent to whether or not the test suite can
> even work without dying horribly.  In other words, you are inherently
> relying on multiple debuggers working to even run the test suite.
>
> I don't know if that's a problem, but at the very least, it's kind of
> unfortunate.  And of course the problem grows to other areas.  What other
> things fail horribly when a single instance of LLDB is debugging 100
> processes at the same time?
>
> It's worth adding this as an alternate run mode, but I don't think we
> should make it default until it's more battle-tested.
>
> On Mon, Sep 21, 2015 at 12:49 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi all,
>>
>> I'm considering changing the default lldb test runner from
>> multiprocessing-based to threading-based.  Long ago I switched it from
>> threading to multiprocessing.  The only reason I did this was because OS X
>> was failing to allow more than one exec at a time in the worker threads -
>> way down in the Python Global Interpreter Lock (GIL).  And, at the time, I
>> didn't have the time to break out the test runner strategies.
>>
>> We have verified the threading-based issue is no longer manifesting on OS
>> X 10.10 and 10.11 beta.  That being the case, I'd like to convert us back
>> to being threading-based by default.  Specifically, this will have the same
>> effect as doing the following:
>> (non-Windows): --test-runner-name threading
>> (Windows): --test-runner-name threading-pool
>>
>> There are a couple benefits here:
>> 1. We'll remove a fork for creating the worker queues.  Each of those are
>> just threads when using threading, rather than being forked processes.
>> Depending on the underlying OS, a thread is typically cheaper.  Also, some
>> of the inter-worker communication now becomes cheap intra-process
>> communication instead of heavier multiprocessing constructs.
>> 2. Debugging is a bit easier.  The worker queues make a lot of noise in
>> 'ps aux'-style greps, and are a pain to debug relatively speaking vs. the
>> threaded version.
>>
>> I'm not yet looking to remove the multiprocessing support.  It is likely
>> I'll check the OS X version and default to the multiprocessing test runner
>> if it wasn't explicitly specified and the OS X version is < 10.10 as I'm
>> pretty sure I hit the issue on 10.9's python.
>>
>> Thoughts?
>> --
>> -Todd


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Tamas Berghammer via lldb-dev
Unfortunately the GCE logs aren't public at the moment, and the amount of
them doesn't make it easy to make them accessible in any way (~30MB/build);
they also aren't much more machine-parsable than the stdout of the build.

I think downloading data with the JSON API won't help, because it will only
list the failures displayed on the Web UI, which don't contain full test
names and don't contain info about the unexpected successes. If you want to
download it from the web interface then I am pretty sure we have to parse
the stdout of the test runner and change dotest in a way that it
displays more information about the outcome of the different tests.

On Tue, Sep 15, 2015 at 5:52 PM Todd Fiala  wrote:

> Yep looks like there's a decent interface to it.  Thanks, Siva!
>
> I see there's some docs here too:
> http://docs.buildbot.net/current/index.html
>
> On Tue, Sep 15, 2015 at 9:42 AM, Siva Chandra 
> wrote:
>
>> On Tue, Sep 15, 2015 at 9:25 AM, Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> > The cmake builder runs in GCE and it uploads all test logs to Google
>>> Cloud Storage (including full host logs and server logs). I used a python
>>> script (running also in GCE) to download this data and to parse the test
>>> output from the test traces.
>>>
>>> Are the GCE logs public?  If not, do you know if our buildbot protocol
>>> supports polling this info via another method straight from the build bot?
>>>
>>
>> You are probably looking for this: http://lab.llvm.org:8011/json/help
>>
>
>
>
> --
> -Todd
>


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Tamas Berghammer via lldb-dev
Yes, you are reading it correctly (by totclang we mean the top-of-tree clang
at the time the test suite was run).

The cmake builder runs in GCE and it uploads all test logs to Google Cloud
Storage (including full host logs and server logs). I used a Python script
(also running in GCE) to download this data and to parse the test output
from the test traces.

On Tue, Sep 15, 2015 at 5:08 PM Todd Fiala  wrote:

> Just to make sure I'm reading these right:
>
> == Compiler: totclang Architecture: x86_64 ==
>
> UnexpectedSuccess
> TestMiInterpreterExec.MiInterpreterExecTestCase.test_lldbmi_settings_set_target_run_args_before
> (250/250 100.00%)
> TestRaise.RaiseTestCase.test_restart_bug_with_dwarf (119/250 47.60%)
> TestMiSyntax.MiSyntaxTestCase.test_lldbmi_process_output (250/250
> 100.00%)
> TestInferiorAssert.AssertingInferiorTestCase.test_inferior_asserting_expr_dwarf
> (195/250 78.00%)
>
>
> This is saying that running the tests with a top of tree clang, on x86_64,
> we see (for example):
> * test_lldbmi_settings_set_target_run_args_before() is always passing,
> * test_inferior_asserting_expr_dwarf() is always passing
> * test_restart_bug_with_dwarf() is failing more often than passing.
>
> This is incredibly useful for figuring out the true disposition of a test
> on different configurations.  What method did you use to gather that data?
>
> On Tue, Sep 15, 2015 at 9:03 AM, Todd Fiala  wrote:
>
>> Wow Tamas, this is perfect.  Thanks for pulling that together!
>>
>> Don't worry about the bigger file.
>>
>> Thanks much.
>>
>> -Todd
>>
>> On Tue, Sep 15, 2015 at 8:56 AM, Tamas Berghammer > > wrote:
>>
>>> I created a new statistic that separates the data based on compiler and
>>> architecture, and I also extended it to the last 250 builds on the Linux
>>> build bot. If you would like to see the build IDs for the different
>>> outcomes then let me know, because I have them collected, but it is
>>> quite a big file.
>>>
>>> Tamas
>>>
>>> On Tue, Sep 15, 2015 at 3:37 PM Todd Fiala  wrote:
>>>
 On Tue, Sep 15, 2015 at 2:57 AM, Tamas Berghammer <
 tbergham...@google.com> wrote:

> Hi Todd,
>
> I attached the statistic of the last 100 test run on the Linux x86_64
> builder (
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake).
> The data might be a little bit noisy because of the actual test failures
> happening because of a temporary regression, but they should give you a
> general idea about what is happening.
>
>
 Thanks, Tamas!  I'll have a look.


> I will try to create a statistic where the results are displayed
> separately for each compiler and architecture to get a bit more detailed
> view, but it will take some time. If you want I can include the list of
> build numbers for every outcome, but it will be a very long list (currently
> only included for Timeout and Failure)
>
>
 I'll know better when I have a look at what you provided.  The hole I
 see right now is we're not adequately dealing with unexpected successes for
 different configurations.  Any reporting around that is helpful.

 Thanks!


> Tamas
>
> On Mon, Sep 14, 2015 at 11:24 PM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> On an Ubuntu 14.04 x86_64 system, I'm seeing the following results:
>>
>> *cmake/ninja/clang-3.6:*
>>
>> Testing: 395 test suites, 24 threads
>> 395 out of 395 test suites processed - TestGdbRemoteKill.py
>> Ran 395 test suites (0 failed) (0.00%)
>> Ran 478 test cases (0 failed) (0.00%)
>>
>> Unexpected Successes (6)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestConstVariables.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiBreak.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py
>>
>>
>> *cmake/ninja/gcc-4.9.2:*
>>
>> 395 out of 395 test suites processed - TestMultithreaded.py
>> Ran 395 test suites (1 failed) (0.253165%)
>> Ran 457 test cases (1 failed) (0.218818%)
>> Failing Tests (1)
>> FAIL: LLDB (suite) :: TestRegisterVariables.py
>>
>> Unexpected Successes (6)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestDataFormatterSynth.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiBreak.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py
>>
>>
>> I will look into those.  I suspect some of them are compiler-version
>> specific, much like some of the OS X ones I dug into earlier.
>> --
>> -Todd

Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Tamas Berghammer via lldb-dev
Hi Todd,

I attached the statistic of the last 100 test run on the Linux x86_64
builder (http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake).
The data might be a little bit noisy because of the actual test failures
happening because of a temporary regression, but they should give you a
general idea about what is happening.

I will try to create a statistic where the results are displayed separately
for each compiler and architecture to get a bit more detailed view, but it
will take some time. If you want I can include the list of build numbers
for every outcome, but it will be a very long list (currently it is only
included for Timeout and Failure).

Tamas

On Mon, Sep 14, 2015 at 11:24 PM Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> On an Ubuntu 14.04 x86_64 system, I'm seeing the following results:
>
> *cmake/ninja/clang-3.6:*
>
> Testing: 395 test suites, 24 threads
> 395 out of 395 test suites processed - TestGdbRemoteKill.py
> Ran 395 test suites (0 failed) (0.00%)
> Ran 478 test cases (0 failed) (0.00%)
>
> Unexpected Successes (6)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestConstVariables.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiBreak.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py
>
>
> *cmake/ninja/gcc-4.9.2:*
>
> 395 out of 395 test suites processed - TestMultithreaded.py
> Ran 395 test suites (1 failed) (0.253165%)
> Ran 457 test cases (1 failed) (0.218818%)
> Failing Tests (1)
> FAIL: LLDB (suite) :: TestRegisterVariables.py
>
> Unexpected Successes (6)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestDataFormatterSynth.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiBreak.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py
>
>
> I will look into those.  I suspect some of them are compiler-version
> specific, much like some of the OS X ones I dug into earlier.
> --
> -Todd




Re: [lldb-dev] test results look typical?

2015-08-25 Thread Tamas Berghammer via lldb-dev
In theory the test should be skipped when you are using gcc (cc is an alias
for it), but we detect the type of the compiler based on the executable name
and in the case of cc we don't recognize that it is gcc, so we don't skip
the test.
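A more robust check than matching the executable name would be to classify the compiler from its --version output (a hypothetical sketch; the matched strings are assumptions about typical gcc/clang banners, not LLDB's actual detection code):

```python
def detect_compiler_type(version_output):
    """Classify a compiler from its --version text instead of its
    executable name, so "cc" aliased to gcc is still recognized.
    The substrings matched here are assumptions about typical output."""
    text = version_output.lower()
    if "clang" in text:
        return "clang"
    if "gcc" in text or "free software foundation" in text:
        return "gcc"
    return "unknown"
```

In practice this would be fed the captured output of `cc --version`, so the alias problem described above goes away.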

On Tue, Aug 25, 2015 at 5:45 PM Chaoren Lin via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> You're using CC="/usr/bin/cc". It needs to be clang for USE_LIBCPP to do
> anything. :/
>
> On Tue, Aug 25, 2015 at 9:20 AM, Todd Fiala  wrote:
>
>> Here are a couple of the failures that came up (the log output from the
>> full dosep.py run).
>>
>> Let me know if that is not sufficient!
>>
>> On Tue, Aug 25, 2015 at 9:14 AM, Pavel Labath  wrote:
>>
>>> There's no need to do anything fancy (yet :) ). For initial diagnosis
>>> the output of `./dotest.py $your_usual_options -p SomeLibcxxTest.py
>>> -t` should suffice.
>>>
>>> pl
>>>
>>> On 25 August 2015 at 16:45, Todd Fiala  wrote:
>>> > Thanks, Pavel!  I'll dig that up and get back.
>>> >
>>> > On Tue, Aug 25, 2015 at 8:30 AM, Pavel Labath 
>>> wrote:
>>> >>
>>> >> There is no separate option, it should just work. :)
>>> >>
>>> >> I'm betting you are still missing some package there (we should
>>> >> document the prerequisites better). Could you send the error message
>>> >> you are getting so we can have a look.
>>> >>
>>> >> cheers,
>>> >> pl
>>> >>
>>> >>
>>> >> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
>>> >>  wrote:
>>> >> >
>>> >> >
>>> >> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
>>> >> > wrote:
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> The TestDataFormatterLibcc* tests require libc++-dev:
>>> >> >>>
>>> >> >>> $ sudo apt-get install libc++-dev
>>> >> >>>
>>> >> >>
>>> >> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to
>>> hear.
>>> >> >> Pre-14.04 I gave up on it.
>>> >> >>
>>> >> >> Will cmake automatically choose libc++ if it is present?  Or do I
>>> need
>>> >> >> to
>>> >> >> pass something to cmake to use libc++?
>>> >> >
>>> >> >
>>> >> > Hmm it appears I need to do more than just install libc++-dev.  I
>>> did a
>>> >> > clean build with that installed, then ran the tests, and I still
>>> have
>>> >> > the
>>> >> > Libcxc/Libcxx tests failing.  Is there some flag expected, either to
>>> >> > pass
>>> >> > along for the compile options to dotest.py to override/specify
>>> which c++
>>> >> > lib
>>> >> > it is using?
>>> >> >
>>> >> >>
>>> >> >>
>>> >> >> Thanks, Chaoren!
>>> >> >>
>>> >> >> -Todd
>>> >> >>
>>> >> >>>
>>> >> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
>>> >> >>>  wrote:
>>> >> 
>>> >> 
>>> >>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner <
>>> ztur...@google.com>
>>> >>  wrote:
>>> >> >
>>> >> > Can't comment on the failures for Linux, but I don't think we
>>> have a
>>> >> > good handle on the unexpected successes.  I only added that
>>> >> > information to
>>> >> > the output about a week ago, before that unexpected successes
>>> were
>>> >> > actually
>>> >> > going unnoticed.
>>> >> 
>>> >> 
>>> >>  Okay, thanks Zachary.   A while back we had some flapping tests
>>> that
>>> >>  would oscillate between unexpected success and failure on Linux.
>>> >>  Some of
>>> >>  those might still be in that state but maybe (!) are fixed.
>>> >> 
>>> >>  Anyone on the Linux end who happens to know if the fails in
>>> >>  particular
>>> >>  look normal, that'd be good to know.
>>> >> 
>>> >>  Thanks!
>>> >> 
>>> >> >
>>> >> >
>>> >> > It's likely that someone could just go in there and remove the
>>> XFAIL
>>> >> > from those tests.
>>> >> >
>>> >> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
>>> >> >  wrote:
>>> >> >>
>>> >> >> Hi all,
>>> >> >>
>>> >> >> I'm just trying to get a handle on current lldb test failures
>>> >> >> across
>>> >> >> different platforms.
>>> >> >>
>>> >> >> On Linux on non-virtualized hardware, I currently see the
>>> failures
>>> >> >> below on Ubuntu 14.04.2 using a setup like this:
>>> >> >> * stock linker (ld.bfd),
>>> >> >> * g++ 4.9.2
>>> >> >> * cmake
>>> >> >> * ninja
>>> >> >> * libstdc++
>>> >> >>
>>> >> >> ninja check-lldb output:
>>> >> >>
>>> >> >> Ran 394 test suites (15 failed) (3.807107%)
>>> >> >> Ran 474 test cases (17 failed) (3.586498%)
>>> >> >> Failing Tests (15)
>>> >> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad
>>> 3.13.0-57-generic
>>> >> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
>>> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux
>>> rad
>>> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>>> >> >> x86_64 x86_64)
>>> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
>>> >> >> 3.13.0-57-generic #95-Ubunt

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Tamas Berghammer via lldb-dev
Hi Todd,

I am using a clang-3.5-built release LLDB to debug another clang-3.5-built
debug LLDB on Linux x86_64 and it works pretty well for me (works better
than using GDB). The main issue I am hitting is around expression
evaluation, where I can't execute very small functions on std:: objects, but
I can get around it by accessing the internal data representation
(primarily for shared_ptr, unique_ptr and vector). We are still using gcc
to compile lldb-server for Android because the Android clang has some
issues (atomics not supported), but I don't know anybody testing a gcc-built
LLDB on Linux.

Tamas


On Tue, Aug 25, 2015 at 4:31 PM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> There is no separate option, it should just work. :)
>
> I'm betting you are still missing some package there (we should
> document the prerequisites better). Could you send the error message
> you are getting so we can have a look.
>
> cheers,
> pl
>
>
> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
>  wrote:
> >
> >
> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
> wrote:
> >>
> >>
> >>
> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
> wrote:
> >>>
> >>> The TestDataFormatterLibcc* tests require libc++-dev:
> >>>
> >>> $ sudo apt-get install libc++-dev
> >>>
> >>
> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
> >> Pre-14.04 I gave up on it.
> >>
> >> Will cmake automatically choose libc++ if it is present?  Or do I need
> to
> >> pass something to cmake to use libc++?
> >
> >
> > Hmm, it appears I need to do more than just install libc++-dev.  I did a
> > clean build with that installed, then ran the tests, and I still have the
> > Libcxc/Libcxx tests failing.  Is there some flag I should pass in the
> > compile options to dotest.py to override/specify which C++ library it
> > uses?
> >
> >>
> >>
> >> Thanks, Chaoren!
> >>
> >> -Todd
> >>
> >>>
> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
> >>>  wrote:
> 
> 
>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner 
>  wrote:
> >
> > Can't comment on the failures for Linux, but I don't think we have a
> > good handle on the unexpected successes.  I only added that
> information to
> > the output about a week ago, before that unexpected successes were
> actually
> > going unnoticed.
> 
> 
>  Okay, thanks Zachary.   A while back we had some flapping tests that
>  would oscillate between unexpected success and failure on Linux.
> Some of
>  those might still be in that state but maybe (!) are fixed.
> 
>  Anyone on the Linux end who happens to know if the fails in particular
>  look normal, that'd be good to know.
> 
>  Thanks!
> 
> >
> >
> > It's likely that someone could just go in there and remove the XFAIL
> > from those tests.
> >
> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
> >  wrote:
> >>
> >> Hi all,
> >>
> >> I'm just trying to get a handle on current lldb test failures across
> >> different platforms.
> >>
> >> On Linux on non-virtualized hardware, I currently see the failures
> >> below on Ubuntu 14.04.2 using a setup like this:
> >> * stock linker (ld.bfd),
> >> * g++ 4.9.2
> >> * cmake
> >> * ninja
> >> * libstdc++
> >>
> >> ninja check-lldb output:
> >>
> >> Ran 394 test suites (15 failed) (3.807107%)
> >> Ran 474 test cases (17 failed) (3.586498%)
> >> Failing Tests (15)
> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic
> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterSkipSummary.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterUnordered.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Tamas Berghammer via lldb-dev
Going back to the original question, I think you have more test failures
than expected. As Chaoren mentioned, all TestDataFormatterLibc* tests are
failing because of a missing dependency, but I think the rest of the tests
should pass (I wouldn't expect them to depend on libc++-dev).

You can see the up to date list of failures on the Linux buildbot here:
http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake

The buildbot is running in "Google Compute Engine" with Linux version:
"Linux buildbot-master-ubuntu-1404 3.16.0-31-generic #43~14.04.1-Ubuntu SMP
Tue Mar 10 20:13:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux"

LLDB is compiled by Clang (not sure which version, but I can find out if
somebody thinks it matters) and the inferiors are compiled with clang-3.5,
clang-tot, and gcc-4.9.2. In all tested configurations there should be no
failures (all failing tests should be XFAIL-ed).

For the flaky tests we introduced an "expectedFlaky" decorator, which
executes the test twice and expects it to pass at least once, but it hasn't
been applied to all flaky tests yet. The plan for the tests currently
passing with "unexpected success" is to gather statistics about them and,
based on the number of failures we have seen over the last few hundred
runs, either mark them as "expected flaky" or remove the "expected
failure".
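
[Editor's note: to make the retry semantics concrete, here is a small sketch
of the "run twice, pass if either attempt passes" policy the decorator
implements. LLDB's actual decorator is written in Python inside the test
suite; the C++ helper below, including the name RunExpectedFlaky, is an
invented illustration, not the real implementation.]

```cpp
#include <cassert>
#include <functional>

// Run a test up to twice and treat it as passing if either attempt
// passes. This mirrors the "expectedFlaky" policy described above:
// a single flaky failure is tolerated, two failures in a row count
// as a real failure.
bool RunExpectedFlaky(const std::function<bool()> &test) {
  if (test())
    return true;   // first attempt passed, no retry needed
  return test();   // retry once; pass if the second attempt passes
}
```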

Tamas

On Tue, Aug 25, 2015 at 2:50 AM via lldb-dev 
wrote:

> On Mon, Aug 24, 2015 at 05:37:43PM -0700, via lldb-dev wrote:
> > On Mon, Aug 24, 2015 at 03:37:52PM -0700, Todd Fiala via lldb-dev wrote:
> > > On Linux on non-virtualized hardware, I currently see the failures
> below on
> > > Ubuntu 14.04.2 using a setup like this:
> > > [...]
> > >
> > > ninja check-lldb output:
>
> FYI, ninja check-lldb actually calls dosep.
>
> > > Ran 394 test suites (15 failed) (3.807107%)
> > > Ran 474 test cases (17 failed) (3.586498%)
> >
> > I don't think you can trust the reporting of dosep.py's "Ran N test
> > cases", as it fails to count about 500 test cases.  The only way I've
> > found to get an accurate count is to add up all the Ns from "Ran N tests
> > in" as follows:
> >
> > ./dosep.py -s --options "-v --executable $BLDDIR/bin/lldb" 2>&1 | tee
> test_out.log
> > export total=`grep -E "^Ran [0-9]+ tests? in" test_out.log | awk
> '{count+=$2} END {print count}'`
>
> Of course, these commands assume you're running the tests from the
> lldb/test directory.
>
> > (See comments in http://reviews.llvm.org/rL238467.)
>
> I've pasted (and tweaked) the relevant comments from that review here,
> where I describe a narrowed case showing how dosep fails to count all the
> test cases from one test suite in test/types.  Note that the tests were run
> on OSX, so your counts may vary.
>
> The final count from:
> Ran N test cases .*
> is wrong, as I'll explain below. I've done a comparison between dosep and
> dotest on a narrowed subset of tests to show how dosep can omit the test
> cases from a test suite in its count.
>
> Tested on subset of lldb/test with just the following directories/files
> (i.e. all others directories/files were removed):
> test/make
> test/pexpect-2.4
> test/plugins
> test/types
> test/unittest2
> # The .py files kept in test/types are as follows (so
> test/types/TestIntegerTypes.py* was removed):
> test/types/AbstractBase.py
> test/types/HideTestFailures.py
> test/types/TestFloatTypes.py
> test/types/TestFloatTypesExpr.py
> test/types/TestIntegerTypesExpr.py
> test/types/TestRecursiveTypes.py
>
> Tests were run in the lldb/test directory using the following commands:
> dotest:
> ./dotest.py -v
> dosep:
> ./dosep.py -s --options "-v"
>
> Comparing the test case totals, dotest correctly counts 46, but dosep
> counts only 16:
> dotest:
> Ran 46 tests in 75.934s
> dosep:
> Testing: 23 tests, 4 threads ## note: this number changes randomly
> Ran 6 tests in 7.049s
> [PASSED TestFloatTypes.py] - 1 out of 23 test suites processed
> Ran 6 tests in 11.165s
> [PASSED TestFloatTypesExpr.py] - 2 out of 23 test suites processed
> Ran 30 tests in 54.581s ## FIXME: not counted?
> [PASSED TestIntegerTypesExpr.py] - 3 out of 23 test suites
> processed
> Ran 4 tests in 3.212s
> [PASSED TestRecursiveTypes.py] - 4 out of 23 test suites processed
> Ran 4 test suites (0 failed) (0.00%)
> Ran 16 test cases (0 failed) (0.00%)
>
> With test/types/TestIntegerTypesExpr.py* removed, both correctly count 16
> test cases:
> dosep:
> Testing: 16 tests, 4 threads
> Ran 6 tests in 7.059s
> Ran 6 tests in 11.186s
> Ran 4 tests in 3.241s
> Ran 3 test suites (0 failed) (0.00%)
> Ran 16 test cases (0 failed) (0.00%)
>
> Note: I couldn't compare the test counts on all the tests because of the
> concern raised in http://reviews.llvm.org/rL237053. That is, dotest can
> no longer comple

Re: [lldb-dev] [RFC] Simplifying logging code

2015-08-13 Thread Tamas Berghammer via lldb-dev
Thank you for the link to the previous discussion and the description of
the Windows logging. I like the idea of the macro-based logging on Windows
but agree that the explicit log channel definition is a bit too verbose.

Currently I would prefer a mixed solution with 'Log* log = ...; LOG_IF(log,
"pattern", ...);' for the usual case and 'Log* log = ...; LOG_IF_ANY(log,
categories, "pattern", ...);' when we want to log to different log channels.
I believe it would improve readability quite a lot, but it would only make
sense if we can apply it to the full code base within a reasonably short
period of time, to avoid confusion from having multiple logging patterns.
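
[Editor's note: the LOG_IF / LOG_IF_ANY proposal can be sketched roughly as
follows. The Log type and its category bitmask here are hypothetical
stand-ins for illustration, not LLDB's actual API; the key point is that a
macro skips evaluating the format arguments entirely when logging is off.]

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical stand-in for LLDB's Log class; captures formatted
// output into a string so the behavior is easy to observe.
struct Log {
  uint32_t categories = 0;  // bitmask of enabled log channels
  std::string output;

  void Printf(const char *fmt, ...) {
    va_list args;
    va_start(args, fmt);
    char buf[256];
    vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
    output += buf;
  }
};

// LOG_IF: log only when the Log* is non-null. Because the null check
// lives in the macro, the format arguments are never evaluated when
// logging is disabled.
#define LOG_IF(log, ...)                                                    \
  do {                                                                      \
    if (log)                                                                \
      (log)->Printf(__VA_ARGS__);                                           \
  } while (0)

// LOG_IF_ANY: additionally require at least one of the requested
// categories to be enabled on the log object.
#define LOG_IF_ANY(log, cats, ...)                                          \
  do {                                                                      \
    if ((log) && ((log)->categories & (cats)))                              \
      (log)->Printf(__VA_ARGS__);                                           \
  } while (0)
```

With this shape, `LOG_IF(log, "pid=%d", pid);` replaces the two-line
`if (log) log->Printf(...);` pattern at every call site.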

Tamas

On Wed, Aug 12, 2015 at 7:00 PM Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> After the previous discussion I agree that evaluating the arguments is
> unacceptable.  But you are correct here that a macro would solve this.  In
> fact, most C++ log libraries use macros I guess for this very reason.
>
> I decided to make some macros for the windows plugin which you can look at
> it in ProcessWindowsLog.h.
>
> There are some issues that are not obvious how to solve though.  For
> example, the macros I wrote in ProcessWindowsLog cannot be used outside of
> my plugin.  This is because each plugin statically defines its own channels
> as well as defines its own global Log object.  If this were to be done in a
> way that there were one set of macros that all current and future generic
> code and plugins could use, I think it would require a fairly substantial
> refactor.
>
> On Wed, Aug 12, 2015 at 6:11 AM Vince Harron via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> We could solve booth the efficiency concerns and the conciseness with a
>> macro.  (Gasp!)
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] Simplifying logging code

2015-08-12 Thread Tamas Berghammer via lldb-dev
I don't remember any discussion about it, but I might just have missed it
(I don't see it in the archive either).

From the efficiency perspective, in most cases evaluating the arguments for
Printf should be very fast (printing a local variable), and in the few
cases where it isn't we can keep the condition (using "if (log.Enabled())
log.Printf()"). From a readability perspective, I think dropping the "if
(log)" in most cases won't hurt, and it will eliminate the possibility of a
missing check causing a crash.

Tamas

On Wed, Aug 12, 2015 at 1:37 PM Colin Riley via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> From an efficiency perspective, the arguments to Printf will still need to
> be evaluated. Some of those arguments touch multiple areas and will require
> significant effort to change into a new format, which is essentially the
> exact same as we have now.
>
> Was there not a decision to stick with what we have now when this came up
> a few weeks ago? Clean and easy to understand over verbose any day of the
> week in my view.
>
> Colin
>
>
>
> On 12/08/2015 11:52, Tamas Berghammer via lldb-dev wrote:
>
> Hi All,
>
> At the moment logging in LLDB done in the following way:
> Log* log = GetLogIfAllCategoriesSet(...);
> if (log)
> log->Printf(...);
>
> This approach is clean and easy to understand but have the disadvantage of
> being a bit verbose. What is the general opinion about changing it to
> something like this?
> Logger log = GetLogIfAllCategoriesSet(...);
> log.Printf(...);
>
> The idea would be to return a new type of object from
> GetLogIfAllCategoriesSet with small size (size of a pointer) what will
> check if the log category is enabled. From efficiency perspective this
> change would have no effect and it will simplify the writing of the logging
> statements.
>
> Implementation details:
> Logger would just contain a pointer to a Log object and forward all call
> to that object if that one isn't null. Additionally it will have a method
> to check for nullness of the underlying log object if we want to do some
> calculation only if the logging is enabled.
>
> Thanks,
> Tamas
>
> P.S.: Other possible simplification in the logging system would be to
> use LogIfAllCategoriesSet but it require the specification of the log
> channel at each call and have a very minor overhead because of checking for
> the enabled log categories at each call.
>
>
> ___
> lldb-dev mailing 
> listlldb-...@lists.llvm.orghttp://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
> --
> - Colin Riley
> Senior Director,
> Parallel/Graphics Debugger Systems
>
> Codeplay Software Ltd
> 45 York Place, Edinburgh, EH1 3HP
> Tel: 0131 466 0503
> Fax: 0131 557 6600
> Website: http://www.codeplay.com
> Twitter: https://twitter.com/codeplaysoft
>
> This email and any attachments may contain confidential and /or privileged 
> information and is for use by the addressee only. If you are not the intended 
> recipient, please notify Codeplay Software Ltd immediately and delete the 
> message from your computer. You may not copy or forward it,or use or disclose 
> its contents to any other person. Any views or other information in this 
> message which do not relate to our business are not authorized by Codeplay 
> software Ltd, nor does this message form part of any contract unless so 
> stated.
> As internet communications are capable of data corruption Codeplay Software 
> Ltd does not accept any responsibility for any changes made to this message 
> after it was sent. Please note that Codeplay Software Ltd does not accept any 
> liability or responsibility for viruses and it is your responsibility to scan 
> any attachments.
> Company registered in England and Wales, number: 04567874
> Registered office: 81 Linkfield Street, Redhill RH1 6BY
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [RFC] Simplifying logging code

2015-08-12 Thread Tamas Berghammer via lldb-dev
Hi All,

At the moment, logging in LLDB is done in the following way:
Log* log = GetLogIfAllCategoriesSet(...);
if (log)
log->Printf(...);

This approach is clean and easy to understand, but it has the disadvantage
of being a bit verbose. What is the general opinion about changing it to
something like this?
Logger log = GetLogIfAllCategoriesSet(...);
log.Printf(...);

The idea would be to return a new type of object from
GetLogIfAllCategoriesSet with a small size (the size of a pointer) which
will check whether the log category is enabled. From an efficiency
perspective this change would have no effect, and it would simplify the
writing of logging statements.

Implementation details:
Logger would just contain a pointer to a Log object and forward all calls
to that object if it isn't null. Additionally, it would have a method to
check the nullness of the underlying log object, for cases where we want to
do some computation only when logging is enabled.
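
[Editor's note: a minimal sketch of the proposed pointer-sized wrapper.
The Log type here is a hypothetical stand-in, not LLDB's actual class, and
VPrintf is an assumed helper; the sketch only illustrates the forwarding
and nullness-check idea from the paragraph above.]

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <string>

// Hypothetical stand-in for LLDB's Log class, recording formatted
// output into a string for easy inspection.
class Log {
public:
  void VPrintf(const char *fmt, va_list args) {
    char buf[256];
    vsnprintf(buf, sizeof(buf), fmt, args);
    output += buf;
  }
  std::string output;
};

// Pointer-sized value type: forwards Printf only when the underlying
// Log* is non-null, so callers can drop the "if (log)" guard.
class Logger {
public:
  explicit Logger(Log *log = nullptr) : m_log(log) {}

  void Printf(const char *fmt, ...) {
    if (!m_log)
      return;  // logging disabled: silently do nothing
    va_list args;
    va_start(args, fmt);
    m_log->VPrintf(fmt, args);
    va_end(args);
  }

  // Nullness check, for guarding computations done only when logging
  // is enabled.
  explicit operator bool() const { return m_log != nullptr; }

private:
  Log *m_log;  // the whole object is the size of one pointer
};
```

A call site then shrinks to `Logger log(GetLogIfAllCategoriesSet(...));
log.Printf("...");` with no explicit null check.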

Thanks,
Tamas

P.S.: Another possible simplification in the logging system would be to use
LogIfAllCategoriesSet, but it requires specifying the log channel at each
call and has a very minor overhead from checking the enabled log categories
on each call.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev