Re: [lldb-dev] [llvm-dev] [lit] check-all hanging

2019-01-07 Thread Joel E. Denny via lldb-dev
On Mon, Jan 7, 2019 at 11:39 AM Adrian Prantl  wrote:

>
>
> On Jan 7, 2019, at 8:28 AM, Joel E. Denny via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> On Mon, Jan 7, 2019 at 11:15 AM Davide Italiano 
> wrote:
>
>> On Sat, Jan 5, 2019 at 9:48 AM Joel E. Denny via lldb-dev
>>  wrote:
>> >
>> > On Fri, Jan 4, 2019 at 11:39 AM Frédéric Riss  wrote:
>> >>
>> >>
>> >>
>> >> > On Jan 4, 2019, at 7:30 AM, Joel E. Denny 
>> wrote:
>> >> >
>> >> > On Thu, Jan 3, 2019 at 11:30 AM Frédéric Riss 
>> wrote:
>> >> > -llvm-dev + lldb-dev for the lldb test failures.
>> >> >
>> >> >> On Jan 3, 2019, at 7:33 AM, Joel E. Denny 
>> wrote:
>> >> >>
>> >> >> All,
>> >> >>
>> >> >> Thanks for the replies.  Kuba: For LLDB, when were things expected
>> to have improved?  It's possible things improved for me at some point, but
>> this isn't something I've found time to track carefully, and I still see
>> problems.
>> >> >>
>> >> >> I ran check-all a couple of times last night at r350238, which I
>> pulled yesterday.  Here are the results:
>> >> >>
>> >> >> ```
>> >> >> 
>> >> >> Testing Time: 5043.24s
>> >> >> 
>> >> >> Unexpected Passing Tests (2):
>> >> >> lldb-Suite :: functionalities/asan/TestMemoryHistory.py
>> >> >> lldb-Suite :: functionalities/asan/TestReportData.py
>> >> >>
>> >> >> 
>> >> >> Failing Tests (54):
>> >> >> Clang :: CXX/modules-ts/basic/basic.link/p2/module.cpp
>> >> >> Clang :: Modules/ExtDebugInfo.cpp
>> >> >> Clang :: Modules/using-directive-redecl.cpp
>> >> >> Clang :: Modules/using-directive.cpp
>> >> >> Clang :: PCH/chain-late-anonymous-namespace.cpp
>> >> >> Clang :: PCH/cxx-namespaces.cpp
>> >> >> Clang :: PCH/namespaces.cpp
>> >> >> LLDB :: ExecControl/StopHook/stop-hook-threads.test
>> >> >> LeakSanitizer-AddressSanitizer-x86_64 :: TestCases/Linux/
>> use_tls_dynamic.cc
>> >> >> LeakSanitizer-Standalone-x86_64 :: TestCases/Linux/
>> use_tls_dynamic.cc
>> >> >> MemorySanitizer-X86_64 :: dtls_test.c
>> >> >> MemorySanitizer-lld-X86_64 :: dtls_test.c
>> >> >> lldb-Suite ::
>> functionalities/register/register_command/TestRegisters.py
>> >> >> lldb-Suite :: tools/lldb-server/TestGdbRemoteRegisterState.py
>> >> >
>> >> > It’s hard to diagnose dotest failures without the log.
>> >> >
>> >> > (My last reply to this was rejected by the list because I wasn't
>> subscribed.  Trying again.)
>> >> >
>> >> > I have no experience debugging lldb.  Here's the lit output for the
>> last fail (now at r350377), but let me know if you want something more:
>> >> >
>> >> > ```
>> >> > FAIL: lldb-Suite :: tools/lldb-server/TestGdbRemoteRegisterState.py
>> (59083 of 59736)
>> >> >  TEST 'lldb-Suite ::
>> tools/lldb-server/TestGdbRemoteRegisterState.py' FAILED 
>> >> > lldb version 8.0.0
>> >> > LLDB library dir: /home/jdenny/ornl/llvm-mono-git-build/bin
>> >> > LLDB import library dir: /home/jdenny/ornl/llvm-mono-git-build/bin
>> >> > Libc++ tests will not be run because: Unable to find libc++
>> installation
>> >> > Skipping following debug info categories: ['dsym', 'gmodules']
>> >> >
>> >> > Session logs for test failures/errors/unexpected successes will go
>> into directory '/home/jdenny/ornl/llvm-mono-git-build/lldb-test-traces'
>> >> > Command invoked: /home/jdenny/ornl/llvm-mono-git/lldb/test/dotest.py
>> -q --arch=x86_64 -s /home/jdenny/ornl/llvm-mono-git-build/lldb-test-traces
>> --build-dir /home/jdenny/ornl/llvm-mono-git-build/lldb-test-build.noindex
>> -S nm -u CXXFLAGS -u CFLAGS --executable
>> /home/jdenny/ornl/llvm-mono-git-build/./bin/lldb --dsymutil
>> /home/jdenny/ornl/llvm-mono-git-build/./bin/dsymutil --filecheck
>> /home/jdenny/ornl/llvm-mono-git-build/./bin/FileCheck -C
>> /home/jdenny/ornl/llvm-mono-git-build/./bin/clang --env
>> ARCHIVER=/usr/bin/ar --env OBJCOPY=/usr/bin/objcopy
>> /home/jdenny/ornl/llvm-mono-git/lldb/packages/Python/lldbsuite/test/tools/lldb-server
>> -p TestGdbRemoteRegisterState.py
>> >> > UNSUPPORTED: LLDB
>> (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) ::
>> test_grp_register_save_restore_works_no_suffix_debugserver
>> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState) (debugserver tests)
>> >> > FAIL: LLDB
>> (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) ::
>> test_grp_register_save_restore_works_no_suffix_llgs
>> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState)
>> >> > lldb-server exiting...
>> >> > UNSUPPORTED: LLDB
>> (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) ::
>> test_grp_register_save_restore_works_with_suffix_debugserver
>> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState) (debugserver tests)
>> >> > FAIL: LLDB
>> (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) ::
>> test_grp_register_save_restore_works_with_suffix_llgs
>> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState)
>> >> > lldb-server exiting...
>> >> >
>> =

Re: [lldb-dev] [cfe-dev] [llvm-dev] Updates on SVN to GitHub migration

2019-01-07 Thread Nico Weber via lldb-dev
(I wanted to ask about another update, but it looks like there was one
posted to llvm-dev today:
http://lists.llvm.org/pipermail/llvm-dev/2019-January/128840.html
Mentioning this for others who subscribe to cfe-dev or similar but not
llvm-dev.)

On Mon, Dec 10, 2018 at 1:58 PM Nico Weber  wrote:

> Thanks for the update!
>
> On Mon, Dec 10, 2018 at 1:55 PM Tom Stellard  wrote:
>
>> On 12/10/2018 10:38 AM, Nico Weber wrote:
>> > Here's another question about the current status of this. It's close to
>> two months after the official monorepo was supposed to be published. Can
>> someone give an update? Is this on hold indefinitely? Are there concrete
>> issues that people are working on and this will happen as soon as those are
>> resolved?
>> >
>>
>> There were some issues raised in the thread on llvm-dev:
>> "Dealing with out of tree changes and the LLVM  git monorepo"  This
>> migration
>> has been delayed while discussing these issues.  Discussion on that
>> thread has died down and it seems like the consensus is to move forward
>> with
>> the original plan, but we are waiting to get some formal closure on that
>> thread.
>>
>> > At the least, I'm assuming the "SVN will shut down 1 year from now"
>> refers to 1 year from when the monorepo actually gets published, not 1 year
>> relative to when the initial mail got sent?
>> >
>>
>> The deadline for SVN shutdown remains unchanged.  It's still going to be
>> around the 2019 LLVM Developers meeting.
>>
>> > Someone mentioned an issue with github's svn bridge, but it wasn't
>> clear if that's blocking, and if it is if there's a plan for it.
>> >
>>
>> It's not a blocking issue and there haven't been any updates lately,
>> you can follow status on this bug:
>> https://bugs.llvm.org/show_bug.cgi?id=39396
>>
>> -Tom
>>
>> > Thanks
>> > Nico
>> >
>> > On Sat, Oct 20, 2018 at 4:10 AM Jonas Hahnfeld via cfe-dev <
>> cfe-...@lists.llvm.org > wrote:
>> >
>> > (+openmp-dev, they should know about this!)
>> >
>> > Recapping the "Concerns"
>> > (https://llvm.org/docs/Proposals/GitHubMove.html#id12) there is a
>> > proposal of "single-subproject Git mirrors" for people who are only
>> > contributing to standalone subprojects. I think this will be easy
>> in the
>> > transition period, we can just continue to move the current
>> official git
>> > mirrors. Will this "service" be continued after GitHub becomes the
>> 'one
>> > source of truth'? I'd strongly vote for yes, but I'm not sure how
>> that's
>> > going to work on a technical level.
>> >
>> > Thanks,
>> > Jonas
>> >
>> > On 2018-10-20 03:14, Tom Stellard via llvm-dev wrote:
>> > > On 10/19/2018 05:47 PM, Tom Stellard via lldb-dev wrote:
>> > >> TLDR: Official monorepo repository will be published on
>> > >> Tuesday, Oct 23, 2018.  After this date, you should modify
>> > >> your workflows to use the monorepo ASAP.  Current workflows
>> > >> will be supported for at most 1 more year.
>> > >>
>> > >> Hi,
>> > >>
>> > >> We had 2 round-tables this week at the Developer Meeting to
>> > >> discuss the SVN to GitHub migration, and I wanted to update
>> > >> the rest of the community on what we discussed.
>> > >>
>> > >> The most important outcome from that meeting is that we
>> > >> now have a timeline for completing the transition which looks
>> > >> like this:
>> > >>
>> > >
>> > > Step 1:
>> > >> Tues Oct 23, 2018:
>> > >>
>> > >> The latest monorepo prototype[1] will be moved over to the LLVM
>> > >> organization github project[2] and will begin mirroring the
>> current
>> > >> SVN repository.  Commits will still be made to the SVN repository
>> > >> just as they are today.
>> > >>
>> > >> All community members should begin migrating their workflows that
>> > >> rely on SVN or the current git mirrors to use the new monorepo.
>> > >>
>> > >> For CI jobs or internal mirrors pulling from SVN or
>> > >> http://llvm.org/git/*.git you should modify them to pull from
>> > >> the new monorepo and also to deal with the new repository
>> > >> layout.
>> > >>
>> > >> For Developers, you should begin using the new monorepo
>> > >> for your development and using the provided scripts[3]
>> > >> to commit your code.  These scripts will allow you to commit
>> > >> to SVN from the monorepo without using git-svn.
>> > >>
>> > >>
>> > >
>> > > Sorry, I hit send before I was done.  Here is the rest of the mail:
>> > >
>> > > Step 2:
>> > >
>> > > Around the time of next year's developer meeting (1 year at the
>> most),
>> > > we will turn off commit access to the SVN server and enable commit
>> > > access to the monorepo.  At this point the monorepo will become
>> the
>> > > 'one source of truth' for the project.  Community members *must*
>> have
>> > > updated their workflows

Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Frédéric Riss via lldb-dev


> On Jan 7, 2019, at 11:31 AM, Pavel Labath via lldb-dev 
>  wrote:
> 
> On 07/01/2019 19:26, Jonas Devlieghere wrote:
>> On Mon, Jan 7, 2019 at 1:40 AM Pavel Labath wrote:
>>I've been thinking about how could this be done better, and the best
>>(though not ideal) way I came up with is using the functions address as
>>the key. That's guaranteed to be unique everywhere. Of course, you
>>cannot serialize that to a file, but since you already have a central
>>place where you list all intercepted functions (to register their
>>replayers), that place can be also used to assign unique integer IDs to
>>these functions. So then the idea would be that the SB_RECORD macro
>>takes the address of the current function, that gets converted to an ID
>>in the lookup table, and the ID gets serialized.
>> It sounds like you would generate the indices at run-time. How would that 
>> work with regard to the reverse mapping?
> In the current implementation, SBReplayer::Init contains a list of all 
> intercepted methods, right? Each of the SB_REGISTER calls takes two 
> arguments: The method name, and the replay implementation.
> 
> I would change that so that this macro takes three arguments:
> - the function address (the "runtime" ID)
> - an integer (the "serialized" ID)
> - the replay implementation
> 
> This creates a link between the function address and the serialized ID. So 
> when, during capture, a method calls SB_RECORD_ENTRY and passes in the 
> function address, that address can be looked up and translated to an ID for 
> serialization.
> 
> The only thing that would need to be changed is to have SBReplayer::Init 
> execute during record too (which probably means it shouldn't be called 
> SBReplayer, but whatever..), so that the ID mapping is also available when 
> capturing.
> 
> Does that make sense?

I think I understand what you’re explaining, and the mapping side of things 
makes sense. But I’m concerned about the size and complexity of the SB_RECORD 
macro that will need to be written. IIUC, those would need to take the address 
of the current function and the prototype, which is a lot of cumbersome text to 
type. It seems like having a specialized tool to generate those would be nice, 
but once you have a tool you also don’t need all this complexity, do you?

Fred

>>The part that bugs me about this is that taking the address of an
>>overloaded function is extremely tedious (you have to write something
>>like static_cast<void (SBFoo::*)(...)>(&SBFoo::Bar)). That would mean all
>>of these things would have to be passed to the RECORD macro. OTOH, the
>>upshot of this would be that the macro would now have sufficient
>>information to perform pretty decent error checking on its invocation.
>>Another nice thing about this could be that once you already have a prototype
>>and an address of the function, it should be possible (with sufficient
>>template-fu) to synthesize replay code for the function automatically,
>>at least in the simple cases, which would avoid the repetitiveness of
>>the current replay code. Together, this might obviate the need for any
>>clang plugins or other funny build steps.
>> See my previous question. I see how the signature would help with decoding 
>> but still missing how you'd get the mapping.
>>The second thing I noticed is the usage of pointers for identifying
>>object. A pointer is good for that but only while the object it points
>>to is alive. Once the object is gone, the pointer can (and most likely
>>will) be reused. So, it sounds to me like you also need to track the
>>lifetime of these objects. That may be as simple as intercepting
>>constructor/destructor calls, but I haven't seen anything like that yet
>>(though I haven't looked at all details of the patch).
>> This shouldn't be a problem. When a new object is created it will be 
>> recorded in the table with a new identifier.
> Ok, sounds good.
> 
>>Tying into that is the recording of return values. It looks like the
>>current RECORD_RETURN macro will record the address of the temporary
>>object in the frame of the current function. However, that address will
>>become invalid as soon as the function returns as the result object
>>will
>>be copied into a location specified by the caller as a part of the
>>return processing. Are you handling this in any way?
>> We capture the temporary and the call to the copy-assignment constructor. 
>> This is not super efficient but it's the best we can do.
> 
> Ok, cool. I must have missed that part in the code.
> 
>>The final thing, which I noticed is the lack of any sign of threading
>>support. I'm not too worried about that, as that sounds like something
>>that could be fitted into the existing framework incrementally, but it
>>is something worth keeping in mind, as you're 

Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Jonas Devlieghere via lldb-dev
On Mon, Jan 7, 2019 at 3:52 AM Tamas Berghammer 
wrote:

> Thanks Pavel for looping me in. I haven't looked into the actual
> implementation of the prototype yet, but reading your description I have
> some concerns regarding the amount of data you capture, as I feel it isn't
> sufficient to reproduce a set of use cases.
>

Thanks Tamas!


> One problem is when the behavior of LLDB is not deterministic for whatever
> reason (e.g. multi-threading, unordered maps, etc...). Let's take
> SBModule::FindSymbols(), which returns an SBSymbolContextList without any
> specific order (I haven't checked the implementation, but I would consider a
> random order to be valid). If a user calls this function, then iterates
> through the elements to find an index `I`, calls `GetContextAtIndex(I)`, and
> passes the result into a subsequent function, then what do we do? Do we
> capture what `GetContextAtIndex(I)` returned in the trace and use that
> value, or do we capture the value of `I`, call `GetContextAtIndex(I)`
> during reproduction, and use that value? Doing the first would be correct in
> this case but would mean we don't call `GetContextAtIndex(I)`, while doing
> the second would mean we call `GetContextAtIndex(I)` with a wrong
> index if the order in SBSymbolContextList is non-deterministic. In this
> case, as we know that GetContextAtIndex is just an accessor into a vector,
> the first option is the correct one, but I can imagine cases where this is
> not the case (e.g. if GetContextAtIndex had some useful side effect).
>

Indeed, in this scenario we would replay the call with the same `I`
resulting in an incorrect value. I think the only solution is fixing the
non-determinism. This should be straightforward for lists (some kind of
sensible ordering), but maybe there are other issues I'm not aware of.


> Another interesting question is what to do with functions taking raw binary
> data in the form of a pointer + size (e.g. SBData::SetData). I think we
> will have to annotate these APIs to make the reproducer system aware of the
> amount of data they have to capture, and then allocate these buffers with
> the correct lifetime during replay. I am not sure what would be the best
> way to attach these annotations, but I think we might need a fairly generic
> framework, because I won't be surprised if there are more situations where
> we have to add annotations to the API. A slightly related question is: if a
> function returns a pointer to a raw buffer (e.g. const char* or void*),
> do we have to capture the content of it or the pointer to it, and in either
> case, what is the lifetime of the buffer returned? (E.g.
> SBError::GetCString() returns a buffer that goes out of scope when the
> SBError goes out of scope.)
>

This is a good concern and not something I had a good solution for at this
point. For const char* strings we work around this by serializing the actual
string. Obviously that won't always work. We also have the void* batons for
callbacks, which is another tricky thing that wouldn't be supported. I'm
wondering if we can get away with ignoring these at first (maybe printing
something in the replay logic that warns the user that the reproducer
contains an unsupported function?).


> Additionally, I am pretty sure we have at least some functions returning
> various indices that require remapping besides the pointers, either
> because they are just indexing into a data structure with undefined
> internal order or because they reference some other resource. Just by
> randomly browsing some of the SB APIs, I found for example
> SBHostOS::ThreadCreate, which returns the pid/tid for the newly created
> thread, which will have to be remapped (it also takes a function as an
> argument, which is a problem as well). Because of this, I am not sure we
> can get away with an automatically generated set of API descriptions
> instead of writing one with explicit annotations for the various remapping
> rules.
>

Fixing the non-determinism should also address this, right?


> If there is interest I can try to take a deeper look into the topic
> sometime later but I hope that those initial thoughts are useful.
>

Thank you. I'll start by incorporating the feedback and ping the thread
when the patch is ready for another look.


> Tamas
>
> On Mon, Jan 7, 2019 at 9:40 AM Pavel Labath  wrote:
>
>> On 04/01/2019 22:19, Jonas Devlieghere via lldb-dev wrote:
>> > Hi Everyone,
>> >
>> > In September I sent out an RFC [1] about adding reproducers to LLDB.
>> > Over the
>> > past few months, I landed the reproducer framework, support for the GDB
>> > remote
>> > protocol and a bunch of preparatory changes. There's still an open code
>> > review
>> > [2] for dealing with files, but that one is currently blocked by a
>> change to
>> > the VFS in LLVM [3].
>> >
>> > The next big piece of work is supporting user commands (e.g. in the
>> > driver) and
>> > SB API calls. Originally I expected these two things to be separate,
>> but
>> > Pavel
>> > made a good

Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Pavel Labath via lldb-dev

On 07/01/2019 19:26, Jonas Devlieghere wrote:



On Mon, Jan 7, 2019 at 1:40 AM Pavel Labath wrote:

I've been thinking about how could this be done better, and the best
(though not ideal) way I came up with is using the functions address as
the key. That's guaranteed to be unique everywhere. Of course, you
cannot serialize that to a file, but since you already have a central
place where you list all intercepted functions (to register their
replayers), that place can be also used to assign unique integer IDs to
these functions. So then the idea would be that the SB_RECORD macro
takes the address of the current function, that gets converted to an ID
in the lookup table, and the ID gets serialized.


It sounds like you would generate the indices at run-time. How would that 
work with regard to the reverse mapping?
In the current implementation, SBReplayer::Init contains a list of all 
intercepted methods, right? Each of the SB_REGISTER calls takes two 
arguments: The method name, and the replay implementation.


I would change that so that this macro takes three arguments:
- the function address (the "runtime" ID)
- an integer (the "serialized" ID)
- the replay implementation

This creates a link between the function address and the serialized ID. 
So when, during capture, a method calls SB_RECORD_ENTRY and passes in 
the function address, that address can be looked up and translated to an 
ID for serialization.


The only thing that would need to be changed is to have SBReplayer::Init 
execute during record too (which probably means it shouldn't be called 
SBReplayer, but whatever..), so that the ID mapping is also available 
when capturing.


Does that make sense?



The part that bugs me about this is that taking the address of an
overloaded function is extremely tedious (you have to write something
like static_cast<void (SBFoo::*)(...)>(&SBFoo::Bar)). That would mean all
of these things would have to be passed to the RECORD macro. OTOH, the
upshot of this would be that the macro would now have sufficient
information to perform pretty decent error checking on its invocation.
Another nice thing about this could be that once you already have a prototype
and an address of the function, it should be possible (with sufficient
template-fu) to synthesize replay code for the function automatically,
at least in the simple cases, which would avoid the repetitiveness of
the current replay code. Together, this might obviate the need for any
clang plugins or other funny build steps.


See my previous question. I see how the signature would help with 
decoding but still missing how you'd get the mapping.


The second thing I noticed is the usage of pointers for identifying
object. A pointer is good for that but only while the object it points
to is alive. Once the object is gone, the pointer can (and most likely
will) be reused. So, it sounds to me like you also need to track the
lifetime of these objects. That may be as simple as intercepting
constructor/destructor calls, but I haven't seen anything like that yet
(though I haven't looked at all details of the patch).


This shouldn't be a problem. When a new object is created it will be 
recorded in the table with a new identifier.

Ok, sounds good.



Tying into that is the recording of return values. It looks like the
current RECORD_RETURN macro will record the address of the temporary
object in the frame of the current function. However, that address will
become invalid as soon as the function returns as the result object
will
be copied into a location specified by the caller as a part of the
return processing. Are you handling this in any way?


We capture the temporary and the call to the copy-assignment 
constructor. This is not super efficient but it's the best we can do.


Ok, cool. I must have missed that part in the code.



The final thing, which I noticed is the lack of any sign of threading
support. I'm not too worried about that, as that sounds like something
that could be fitted into the existing framework incrementally, but it
is something worth keeping in mind, as you're going to run into that
pretty soon.


Yup, I've intentionally ignored this for now.


Awesome.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Jonas Devlieghere via lldb-dev
On Mon, Jan 7, 2019 at 1:40 AM Pavel Labath  wrote:

> On 04/01/2019 22:19, Jonas Devlieghere via lldb-dev wrote:
> > Hi Everyone,
> >
> > In September I sent out an RFC [1] about adding reproducers to LLDB.
> > Over the
> > past few months, I landed the reproducer framework, support for the GDB
> > remote
> > protocol and a bunch of preparatory changes. There's still an open code
> > review
> > [2] for dealing with files, but that one is currently blocked by a
> change to
> > the VFS in LLVM [3].
> >
> > The next big piece of work is supporting user commands (e.g. in the
> > driver) and
> > SB API calls. Originally I expected these two things to be separate, but
> > Pavel
> > made a good case [4] that they're actually very similar.
> >
> > I created a prototype of how I envision this to work. As usual, we can
> > differentiate between capture and replay.
> >
> > ## SB API Capture
> >
> > When capturing a reproducer, every SB function/method is instrumented
> > using a
> > macro at function entry. The added code tracks the function identifier
> > (currently we use its name with __PRETTY_FUNCTION__) and its arguments.
> >
> > It also tracks when a function crosses the boundary between internal and
> > external use. For example, when someone (be it the driver, the python
> > binding
> > or the RPC server) call SBFoo, and in its implementation SBFoo calls
> > SBBar, we
> > don't need to record SBBar. When invoking SBFoo during replay, it will
> > itself
> > call SBBar.
> >
> > When a boundary is crossed, the function name and arguments are
> > serialized to a
> > file. This is trivial for basic types. For objects, we maintain a table
> that
> > maps pointer values to indices and serialize the index.
> >
> > To keep our table consistent, we also need to track return for functions
> > that
> > return an object by value. We have a separate macro that wraps the
> returned
> > object.
> >
> > The index is sufficient because every object that is passed to a
> > function has
> > crossed the boundary and hence was recorded. During replay (see below)
> > we map
> > the index to an address again which ensures consistency.
> >
> > ## SB API Replay
> >
> > To replay the SB function calls we need a way to invoke the corresponding
> > function from its serialized identifier. For every SB function, there's a
> > counterpart that deserializes its arguments and invokes the function.
> These
> > functions are added to the map and are called by the replay logic.
> >
> > Replaying is just a matter of looping over the function identifiers in the
> > serialized file, dispatching the right deserialization function, until
> > no more
> > data is available.
> >
> > The deserialization function for constructors or functions that return
> > by value
> > contains additional logic for dealing with the aforementioned indices.
> The
> > resulting objects are added to a table (similar to the one described
> > earlier)
> > that maps indices to pointers. Whenever an object is passed as an
> > argument, the
> > index is used to get the actual object from the table.
> >
> > ## Tool
> >
> > Even when using macros, adding the necessary capturing and replay code is
> > tedious and scales poorly. For the prototype, we did this by hand, but we
> > propose a new clang-based tool to streamline the process.
> >
> > For the capture code, the tool would validate that the macro matches the
> > function signature, suggesting a fixit if the macros are incorrect or
> > missing.
> > Compared to generating the macros altogether, it has the advantage that
> we
> > don't have "configured" files that are harder to debug (without faking
> line
> > numbers etc).
> >
> > The deserialization code would be fully generated. As shown in the
> prototype
> > there are a few different cases, depending on whether we have to account
> for
> > objects or not.
> >
> > ## Prototype Code
> >
> > I created a differential [5] on Phabricator with the prototype. It
> > contains the
> > necessary methods to re-run the gdb remote (reproducer) lit test.
> >
> > ## Feedback
> >
> > Before moving forward I'd like to get the community's input. What do you
> > think
> > about this approach? Do you have concerns or can we be smarter
> > somewhere? Any
> > feedback would be greatly appreciated!
> >
> > Thanks,
> > Jonas
> >
> > [1] http://lists.llvm.org/pipermail/lldb-dev/2018-September/014184.html
> > [2] https://reviews.llvm.org/D54617
> > [3] https://reviews.llvm.org/D54277
> > [4] https://reviews.llvm.org/D55582
> > [5] https://reviews.llvm.org/D56322
> >
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >
>
>
Thanks for the feedback Pavel!


> [Adding Tamas for his experience with recording and replaying APIs.]
>
>
> Thank you for sharing the prototype Jonas. It looks very interesting,
> but there are a couple of things that worry me about it.
>
> The first one is the usage

On Mon, Jan 7, 2019 at 11:15 AM Davide Italiano 
wrote:

> On Sat, Jan 5, 2019 at 9:48 AM Joel E. Denny via lldb-dev
>  wrote:
> >
> > On Fri, Jan 4, 2019 at 11:39 AM Frédéric Riss  wrote:
> >>
> >>
> >>
> >> > On Jan 4, 2019, at 7:30 AM, Joel E. Denny 
> wrote:
> >> >
> >> > On Thu, Jan 3, 2019 at 11:30 AM Frédéric Riss 
> wrote:
> >> > -llvm-dev + lldb-dev for the lldb test failures.
> >> >
> >> >> On Jan 3, 2019, at 7:33 AM, Joel E. Denny 
> wrote:
> >> >>
> >> >> All,
> >> >>
> >> >> Thanks for the replies.  Kuba: For LLDB, when were things expected
> to have improved?  It's possible things improved for me at some point, but
> this isn't something I've found time to track carefully, and I still see
> problems.
> >> >>
> >> >> I ran check-all a couple of times last night at r350238, which I
> pulled yesterday.  Here are the results:
> >> >>
> >> >> ```
> >> >> 
> >> >> Testing Time: 5043.24s
> >> >> 
> >> >> Unexpected Passing Tests (2):
> >> >> lldb-Suite :: functionalities/asan/TestMemoryHistory.py
> >> >> lldb-Suite :: functionalities/asan/TestReportData.py
> >> >>
> >> >> 
> >> >> Failing Tests (54):
> >> >> Clang :: CXX/modules-ts/basic/basic.link/p2/module.cpp
> >> >> Clang :: Modules/ExtDebugInfo.cpp
> >> >> Clang :: Modules/using-directive-redecl.cpp
> >> >> Clang :: Modules/using-directive.cpp
> >> >> Clang :: PCH/chain-late-anonymous-namespace.cpp
> >> >> Clang :: PCH/cxx-namespaces.cpp
> >> >> Clang :: PCH/namespaces.cpp
> >> >> LLDB :: ExecControl/StopHook/stop-hook-threads.test
> >> >> LeakSanitizer-AddressSanitizer-x86_64 ::
> TestCases/Linux/use_tls_dynamic.cc
> >> >> LeakSanitizer-Standalone-x86_64 ::
> TestCases/Linux/use_tls_dynamic.cc
> >> >> MemorySanitizer-X86_64 :: dtls_test.c
> >> >> MemorySanitizer-lld-X86_64 :: dtls_test.c
> >> >> lldb-Suite ::
> functionalities/register/register_command/TestRegisters.py
> >> >> lldb-Suite :: tools/lldb-server/TestGdbRemoteRegisterState.py
> >> >
> >> > It’s hard to diagnose dotest failures without the log.
> >> >
> >> > (My last reply to this was rejected by the list because I wasn't
> subscribed.  Trying again.)
> >> >
> >> > I have no experience debugging lldb.  Here's the lit output for the
> last fail (now at r350377), but let me know if you want something more:
> >> >
> >> > ```
> >> > FAIL: lldb-Suite :: tools/lldb-server/TestGdbRemoteRegisterState.py
> (59083 of 59736)
> >> >  TEST 'lldb-Suite ::
> tools/lldb-server/TestGdbRemoteRegisterState.py' FAILED 
> >> > lldb version 8.0.0
> >> > LLDB library dir: /home/jdenny/ornl/llvm-mono-git-build/bin
> >> > LLDB import library dir: /home/jdenny/ornl/llvm-mono-git-build/bin
> >> > Libc++ tests will not be run because: Unable to find libc++
> installation
> >> > Skipping following debug info categories: ['dsym', 'gmodules']
> >> >
> >> > Session logs for test failures/errors/unexpected successes will go
> into directory '/home/jdenny/ornl/llvm-mono-git-build/lldb-test-traces'
> >> > Command invoked: /home/jdenny/ornl/llvm-mono-git/lldb/test/dotest.py
> -q --arch=x86_64 -s /home/jdenny/ornl/llvm-mono-git-build/lldb-test-traces
> --build-dir /home/jdenny/ornl/llvm-mono-git-build/lldb-test-build.noindex
> -S nm -u CXXFLAGS -u CFLAGS --executable
> /home/jdenny/ornl/llvm-mono-git-build/./bin/lldb --dsymutil
> /home/jdenny/ornl/llvm-mono-git-build/./bin/dsymutil --filecheck
> /home/jdenny/ornl/llvm-mono-git-build/./bin/FileCheck -C
> /home/jdenny/ornl/llvm-mono-git-build/./bin/clang --env
> ARCHIVER=/usr/bin/ar --env OBJCOPY=/usr/bin/objcopy
> /home/jdenny/ornl/llvm-mono-git/lldb/packages/Python/lldbsuite/test/tools/lldb-server
> -p TestGdbRemoteRegisterState.py
> >> > UNSUPPORTED: LLDB
> (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) ::
> test_grp_register_save_restore_works_no_suffix_debugserver
> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState) (debugserver tests)
> >> > FAIL: LLDB (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64)
> :: test_grp_register_save_restore_works_no_suffix_llgs
> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState)
> >> > lldb-server exiting...
> >> > UNSUPPORTED: LLDB
> (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) ::
> test_grp_register_save_restore_works_with_suffix_debugserver
> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState) (debugserver tests)
> >> > FAIL: LLDB (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64)
> :: test_grp_register_save_restore_works_with_suffix_llgs
> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState)
> >> > lldb-server exiting...
> >> > ==
> >> > FAIL: test_grp_register_save_restore_works_no_suffix_llgs
> (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState)
> >> > --
> >> > Traceback

Re: [lldb-dev] [llvm-dev] [lit] check-all hanging

2019-01-07 Thread Davide Italiano via lldb-dev
On Sat, Jan 5, 2019 at 9:48 AM Joel E. Denny via lldb-dev
 wrote:
>
> On Fri, Jan 4, 2019 at 11:39 AM Frédéric Riss  wrote:
>>
>>
>>
>> > On Jan 4, 2019, at 7:30 AM, Joel E. Denny  wrote:
>> >
>> > On Thu, Jan 3, 2019 at 11:30 AM Frédéric Riss  wrote:
>> > -llvm-dev + lldb-dev for the lldb test failures.
>> >
>> >> On Jan 3, 2019, at 7:33 AM, Joel E. Denny  wrote:
>> >>
>> >> All,
>> >>
>> >> Thanks for the replies.  Kuba: For LLDB, when were things expected to 
>> >> have improved?  It's possible things improved for me at some point, but 
>> >> this isn't something I've found time to track carefully, and I still see 
>> >> problems.
>> >>
>> >> I ran check-all a couple of times last night at r350238, which I pulled 
>> >> yesterday.  Here are the results:
>> >>
>> >> ```
>> >> 
>> >> Testing Time: 5043.24s
>> >> 
>> >> Unexpected Passing Tests (2):
>> >> lldb-Suite :: functionalities/asan/TestMemoryHistory.py
>> >> lldb-Suite :: functionalities/asan/TestReportData.py
>> >>
>> >> 
>> >> Failing Tests (54):
>> >> Clang :: CXX/modules-ts/basic/basic.link/p2/module.cpp
>> >> Clang :: Modules/ExtDebugInfo.cpp
>> >> Clang :: Modules/using-directive-redecl.cpp
>> >> Clang :: Modules/using-directive.cpp
>> >> Clang :: PCH/chain-late-anonymous-namespace.cpp
>> >> Clang :: PCH/cxx-namespaces.cpp
>> >> Clang :: PCH/namespaces.cpp
>> >> LLDB :: ExecControl/StopHook/stop-hook-threads.test
>> >> LeakSanitizer-AddressSanitizer-x86_64 :: 
>> >> TestCases/Linux/use_tls_dynamic.cc
>> >> LeakSanitizer-Standalone-x86_64 :: TestCases/Linux/use_tls_dynamic.cc
>> >> MemorySanitizer-X86_64 :: dtls_test.c
>> >> MemorySanitizer-lld-X86_64 :: dtls_test.c
>> >> lldb-Suite :: 
>> >> functionalities/register/register_command/TestRegisters.py
>> >> lldb-Suite :: tools/lldb-server/TestGdbRemoteRegisterState.py
>> >
>> > It’s hard to diagnose dotest failures without the log.
>> >
>> > (My last reply to this was rejected by the list because I wasn't 
>> > subscribed.  Trying again.)
>> >
>> > I have no experience debugging lldb.  Here's the lit output for the last 
>> > fail (now at r350377), but let me know if you want something more:
>> >
>> > ```
>> > FAIL: lldb-Suite :: tools/lldb-server/TestGdbRemoteRegisterState.py (59083 
>> > of 59736)
>> >  TEST 'lldb-Suite :: 
>> > tools/lldb-server/TestGdbRemoteRegisterState.py' FAILED 
>> > 
>> > lldb version 8.0.0
>> > LLDB library dir: /home/jdenny/ornl/llvm-mono-git-build/bin
>> > LLDB import library dir: /home/jdenny/ornl/llvm-mono-git-build/bin
>> > Libc++ tests will not be run because: Unable to find libc++ installation
>> > Skipping following debug info categories: ['dsym', 'gmodules']
>> >
>> > Session logs for test failures/errors/unexpected successes will go into 
>> > directory '/home/jdenny/ornl/llvm-mono-git-build/lldb-test-traces'
>> > Command invoked: /home/jdenny/ornl/llvm-mono-git/lldb/test/dotest.py -q 
>> > --arch=x86_64 -s /home/jdenny/ornl/llvm-mono-git-build/lldb-test-traces 
>> > --build-dir /home/jdenny/ornl/llvm-mono-git-build/lldb-test-build.noindex 
>> > -S nm -u CXXFLAGS -u CFLAGS --executable 
>> > /home/jdenny/ornl/llvm-mono-git-build/./bin/lldb --dsymutil 
>> > /home/jdenny/ornl/llvm-mono-git-build/./bin/dsymutil --filecheck 
>> > /home/jdenny/ornl/llvm-mono-git-build/./bin/FileCheck -C 
>> > /home/jdenny/ornl/llvm-mono-git-build/./bin/clang --env 
>> > ARCHIVER=/usr/bin/ar --env OBJCOPY=/usr/bin/objcopy 
>> > /home/jdenny/ornl/llvm-mono-git/lldb/packages/Python/lldbsuite/test/tools/lldb-server
>> >  -p TestGdbRemoteRegisterState.py
>> > UNSUPPORTED: LLDB 
>> > (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) :: 
>> > test_grp_register_save_restore_works_no_suffix_debugserver 
>> > (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState) (debugserver tests)
>> > FAIL: LLDB (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) :: 
>> > test_grp_register_save_restore_works_no_suffix_llgs 
>> > (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState)
>> > lldb-server exiting...
>> > UNSUPPORTED: LLDB 
>> > (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) :: 
>> > test_grp_register_save_restore_works_with_suffix_debugserver 
>> > (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState) (debugserver tests)
>> > FAIL: LLDB (/home/jdenny/ornl/llvm-mono-git-build/bin/clang-8-x86_64) :: 
>> > test_grp_register_save_restore_works_with_suffix_llgs 
>> > (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState)
>> > lldb-server exiting...
>> > ==
>> > FAIL: test_grp_register_save_restore_works_no_suffix_llgs 
>> > (TestGdbRemoteRegisterState.TestGdbRemoteRegisterState)
>> > --
>> > Traceback (most recent call last):
>> >   File 
>> > "/home/jdenny/ornl/llvm-mon

Re: [lldb-dev] Unreliable process attach on Linux

2019-01-07 Thread Florian Weimer via lldb-dev
* Pavel Labath:

> Yes, there's a dns lookup being done on the other end. TBH, I'm not
> really sure what it's being used for. Maybe we should try deleting the
> hostname field from the qHostInfo response (or just put an IP address
> there).

Or use the system host name without resorting to DNS (using uname or
gethostname on GNU/Linux).  The DNS lookup is really surprising.

Thanks,
Florian
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Accessing physical memory while remote debugging

2019-01-07 Thread Zdenek Prikryl via lldb-dev

Hi Daniel, Sanimir,

This feature request comes up every now and then :-). I think it's 
very useful, but it's not that easy to implement.


Additional comments embedded.

--
Zdenek Prikryl

On 11/28/2018 07:01 PM, Sanimir Agovic via lldb-dev wrote:

Hi Daniel,


On Sat, Nov 24, 2018 at 9:34 PM Daniel Shaulov via lldb-dev 
<lldb-dev@lists.llvm.org> wrote:
> The one thing that is really missing is the ability to read/write to 
physical memory addresses.
This would indeed be a neat addition to improve debugging bare-metal 
targets, be it simulator or jtag based e.g. openocd.
My suggestion is to generalize your idea. Add support/api to access 
memory in arbitrary address spaces. Accessing physical memory would be 
just a user of this api. This way lldb could support llvm 
architectures with multiple address spaces e.g. nvidia cuda and some 
opencl implementations.



> I looked a bit at the gdb protocol and it only supports 'm' and 'M' 
for reading and writing to virtual memory, and nothing for physical 
memory.

>
> So I suggest we add a new extensions to the gdb protocol:
> QReadPhysicalMemory - works just like 'm', but with physical memory.
> QWritePhysicalMemory - works just like 'M', but with physical memory.
Have a look at the qXfer rsp packets[1], which are used for transferring 
target objects. A prototype might look like this: 
qXfer:memory:read:annex:tid:offset,length (write is analogous), where 
annex denotes an address space identifier, and offset and length are 
obvious. Similar to the x/X packets the payload is binary encoded and not 
hex as in m/M, making this new packet a superset of both x and m. I also 
highly recommend propagating memory access errors back to the 
debugger; there are plenty of reasons why memory access may fail on an 
on-chip-debugger. Afaik gdb/rsp supports error messages with the 
E.errtext notation, where errtext is the error message.


Seems fine.



Coming back to tid, it is the thread id. Rsp is a stateful protocol 
and for certain operations it needs to switch the thread. This avoids 
switching back and forth and is similar to the lldb extension 
QThreadSuffixSupported[2].
Passing a tid is not needed to read memory from a process and it seems 
rather unusual, but for a jtag debugger it is required to correctly 
translate the virtual address if an MMU is enabled. It is up to the 
target how to interpret tid.



> I am willing to work on adding support for this in lldb and in qemu. 
In fact, the qemu part was so easy and straightforward, that I already 
have a branch ready with the change.
Provide an API similar to llvm to support address spaces. A prototype 
might look like this: size_t ReadMemory(addr_t addr, void *buf, size_t 
size, unsigned addr_space, lldb::SBError &error)
The current ReadMemory would call this new API with addr_space = 0, 
the default address space.


The last time we discussed this issue we ended with an additional type 
for the address with an address space id (e.g., class AddressBase). The 
reason for it is that you need to propagate the address space id to 
expression evaluation and other parts as well. So, the relation would be 
lldb::addr_t < AddressBase < Address.


The big challenge here is to patch all lldb::addr_t instances that 
represent memory addresses to AddressBase (lldb::addr_t is used for 
non-address data from time to time as well). Who's volunteering for it? :-)...





> The lldb part is a bit more tricky. At the core, changing 
ProcessGDBRemote.cpp:2776, writing  "QReadPhysicalMemory" instead of 
'm', is enough to change ALL the reads to physical memory. But we 
don't want that. So we need to add a new flag to 
CommandObjectMemoryRead, and pass it in CommandObjectMemory.cpp:669, 
then pass the flag to Process::ReadMemory. Here it gets a bit tricky, 
since Process::ReadMemory has a cache, so we can't just pass the flag 
to ReadMemoryFromInferior, we need to have a separate cache for it.

You need a per addresspace cache.


Correct, caches have to be address space aware (I think there are several 
of them).





> 3. I know it's the wrong place to ask, but does anyone know how 
accepting the qemu community will be with the patch? Have they ever 
accepted patches aimed at making lldb work better with the gdbstub, or 
is it strictly for debugging with gdb proper?
There is no right way but providing tests with your patches, keeping 
them small and rather independent of each other, and adding 
documentation is a good start.


To fully support address spaces one needs to interpret the debug 
information correctly to dispatch the memory access to the right 
address space and the type system needs to be extended as well. Having 
a way to query for available address spaces would also be helpful. 
Keep in mind to extend the lldb commands to expose this feature to the 
user:

memory read/write --asid  | --asid-name 
memory list
disassemble --asid  | --asid-name 


That is correct as well.



[1] 
https://sourceware.org/gdb/onlinedocs/gdb/Gene

Re: [lldb-dev] Unreliable process attach on Linux

2019-01-07 Thread Pavel Labath via lldb-dev

On 07/01/2019 13:22, Florian Weimer wrote:

* Pavel Labath:


Thanks. I think this is what I suspected. The server is extremely slow
in responding to the qHostInfo packet. The timeout for this was
recently increased to 10 seconds, but it looks like 7.0 still has the
default (1 second) timeout.

If you don't want to recompile or update, you should be able to work
around this by increasing the default timeout with the following
command "settings set plugin.process.gdb-remote.packet-timeout 10".


I see, that helps.

There's a host name in the qHostInfo response?  Where's the code that
determines the host name?  On the other end?  I wonder if it performs a
DNS lookup.  That could explain the delay.

Thanks,
Florian



Yes, there's a dns lookup being done on the other end. TBH, I'm not 
really sure what it's being used for. Maybe we should try deleting the 
hostname field from the qHostInfo response (or just put an IP address 
there).



Re: [lldb-dev] Unreliable process attach on Linux

2019-01-07 Thread Florian Weimer via lldb-dev
* Pavel Labath:

> Thanks. I think this is what I suspected. The server is extremely slow
> in responding to the qHostInfo packet. The timeout for this was
> recently increased to 10 seconds, but it looks like 7.0 still has the
> default (1 second) timeout.
>
> If you don't want to recompile or update, you should be able to work
> around this by increasing the default timeout with the following
> command "settings set plugin.process.gdb-remote.packet-timeout 10".

I see, that helps.

There's a host name in the qHostInfo response?  Where's the code that
determines the host name?  On the other end?  I wonder if it performs a
DNS lookup.  That could explain the delay.

Thanks,
Florian


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Tamas Berghammer via lldb-dev
Thanks Pavel for looping me in. I haven't looked into the actual
implementation of the prototype yet, but reading your description I have
some concerns regarding the amount of data you capture, as I feel it isn't
sufficient to reproduce a set of use cases.

One problem is when the behavior of LLDB is not deterministic for whatever
reason (e.g. multi threading, unordered maps, etc...). Let's take
SBModule::FindSymbols(), which returns an SBSymbolContextList without any
specific order (I haven't checked the implementation, but I would consider a
random order to be valid). If a user calls this function, then iterates
through the elements to find an index `I`, calls `GetContextAtIndex(I)` and
passes the result into a subsequent function, then what will we do? Will we
capture what `GetContextAtIndex(I)` returned in the trace and use that
value, or will we capture the value of `I`, call `GetContextAtIndex(I)`
during reproduction and use that value? Doing the first would be correct in
this case but would mean we don't call `GetContextAtIndex(I)`, while doing
the second would mean we call `GetContextAtIndex(I)` with a wrong
index if the order in SBSymbolContextList is non-deterministic. In this
case, as we know that GetContextAtIndex is just an accessor into a vector,
the first option is the correct one, but I can imagine cases where this is
not the case (e.g. if GetContextAtIndex had some useful side effect).

Another interesting question is what to do with functions taking raw binary
data in the form of a pointer + size (e.g. SBData::SetData). I think we
will have to annotate these APIs to make the reproducer system aware of the
amount of data they have to capture, and then allocate these buffers with
the correct lifetime during replay. I am not sure what would be the best
way to attach these annotations, but I think we might need a fairly generic
framework, because I won't be surprised if there are more situations where we
have to add annotations to the API. A slightly related question: if a
function returns a pointer to a raw buffer (e.g. const char* or void*), then
do we have to capture its contents or just the pointer, and in either
case what is the lifetime of the buffer returned (e.g.
SBError::GetCString() returns a buffer that goes out of scope when the
SBError goes out of scope)?

Additionally I am pretty sure we have at least some functions returning
various indices that require remapping, other than the pointers, either
because they are just indexing into a data structure with undefined
internal order or because they reference some other resource. Just by randomly
browsing some of the SB APIs I found for example SBHostOS::ThreadCreate,
which returns the pid/tid for the newly created thread, which will have to be
remapped (it also takes a function as an argument, which is a problem as
well). Because of this I am not sure if we can get away with an
automatically generated set of API descriptions instead of writing one with
explicit annotations for the various remapping rules.

If there is interest I can try to take a deeper look into the topic
sometime later but I hope that those initial thoughts are useful.

Tamas

On Mon, Jan 7, 2019 at 9:40 AM Pavel Labath  wrote:

> On 04/01/2019 22:19, Jonas Devlieghere via lldb-dev wrote:
> > Hi Everyone,
> >
> > In September I sent out an RFC [1] about adding reproducers to LLDB.
> > Over the
> > past few months, I landed the reproducer framework, support for the GDB
> > remote
> > protocol and a bunch of preparatory changes. There's still an open code
> > review
> > [2] for dealing with files, but that one is currently blocked by a
> change to
> > the VFS in LLVM [3].
> >
> > The next big piece of work is supporting user commands (e.g. in the
> > driver) and
> > SB API calls. Originally I expected these two things to be separate, but
> > Pavel
> > made a good case [4] that they're actually very similar.
> >
> > I created a prototype of how I envision this to work. As usual, we can
> > differentiate between capture and replay.
> >
> > ## SB API Capture
> >
> > When capturing a reproducer, every SB function/method is instrumented
> > using a
> > macro at function entry. The added code tracks the function identifier
> > (currently we use its name with __PRETTY_FUNCTION__) and its arguments.
> >
> > It also tracks when a function crosses the boundary between internal and
> > external use. For example, when someone (be it the driver, the python
> > binding
> > or the RPC server) call SBFoo, and in its implementation SBFoo calls
> > SBBar, we
> > don't need to record SBBar. When invoking SBFoo during replay, it will
> > itself
> > call SBBar.
> >
> > When a boundary is crossed, the function name and arguments are
> > serialized to a
> > file. This is trivial for basic types. For objects, we maintain a table
> that
> > maps pointer values to indices and serialize the index.
> >
> > To keep our table consistent, we also need to track return for functions
> > that
> > return an object by v

Re: [lldb-dev] Unreliable process attach on Linux

2019-01-07 Thread Pavel Labath via lldb-dev

On 07/01/2019 09:29, Florian Weimer wrote:

* Pavel Labath:


On 04/01/2019 17:38, Florian Weimer via lldb-dev wrote:

Consider this example program:

#include <err.h>      // err, errx -- include names reconstructed; the
#include <signal.h>   // kill, SIGKILL -- archive stripped the <...> text
#include <sys/wait.h> // waitpid
#include <unistd.h>   // fork, pause

#include <lldb/API/LLDB.h> // SB API umbrella header

int
main(void)
{
// Target process for the debugger.
pid_t pid = fork();
if (pid < 0)
  err(1, "fork");
if (pid == 0)
  while (true)
pause();

lldb::SBDebugger::Initialize();
{
  auto debugger(lldb::SBDebugger::Create());
  if (!debugger.IsValid())
errx(1, "SBDebugger::Create failed");

  auto target(debugger.CreateTarget(nullptr));
  if (!target.IsValid())
errx(1, "SBDebugger::CreateTarget failed");

  lldb::SBAttachInfo attachinfo(pid);
  lldb::SBError error;
  auto process(target.Attach(attachinfo, error));
  if (!process.IsValid())
errx(1, "SBTarget::Attach failed: %s", error.GetCString());
  error = process.Detach();
  if (error.Fail())
errx(1, "SBProcess::Detach failed: %s", error.GetCString());
}
lldb::SBDebugger::Terminate();

if (kill(pid, SIGKILL) != 0)
  err(1, "kill");
if (waitpid(pid, NULL, 0) < 0)
  err(1, "waitpid");

return 0;
}

Run it in a loop like this:

$ while ./test-attach ; do date; done

On Linux x86-64 (Fedora 29), with LLDB 7 (lldb-7.0.0-1.fc29.x86_64) and
kernel 4.19.12 (kernel-4.19.12-301.fc29.x86_64), after 100 iterations or
so, attaching to the newly created process fails:

test-attach: SBTarget::Attach failed: lost connection

This also reproduces occasionally with LLDB itself (with “lldb -p PID”).

Any suggestions how to get more information about the cause of this
error?



I would recommend enabling gdb-remote logging (so something like:
debugger.HandleCommand("log enable gdb-remote packets")) to see at
which stage we actually lose the gdb-server connection.


Thanks.  I enabled logging like this:

 auto debugger(lldb::SBDebugger::Create());
 if (!debugger.IsValid())
   errx(1, "SBDebugger::Create failed");

 debugger.HandleCommand("log enable gdb-remote packets");

 auto target(debugger.CreateTarget(nullptr));
 if (!target.IsValid())
   errx(1, "SBDebugger::CreateTarget failed");

And here's the output I get:

test-attach  <   1> send packet: +
test-attach  history[1] tid=0x1cab <   1> send packet: +
test-attach  <  19> send packet: $QStartNoAckMode#b0
test-attach  <   1> read packet: +
test-attach  <   6> read packet: $OK#9a
test-attach  <   1> send packet: +
test-attach  <  41> send packet: $qSupported:xmlRegisters=i386,arm,mips#12
test-attach  < 124> read packet: 
$PacketSize=2;QStartNoAckMode+;QThreadSuffixSupported+;QListThreadsInStopReply+;qEcho+;QPassSignals+;qXfer:auxv:read+#be
test-attach  <  26> send packet: $QThreadSuffixSupported#e4
test-attach  <   6> read packet: $OK#9a
test-attach  <  27> send packet: $QListThreadsInStopReply#21
test-attach  <   6> read packet: $OK#9a
test-attach  <  13> send packet: $qHostInfo#9b
test-attach  <  11> send packet: $qEcho:1#5b
test-attach: SBTarget::Attach failed: lost connection

Florian



Thanks. I think this is what I suspected. The server is extremely slow 
in responding to the qHostInfo packet. The timeout for this was 
recently increased to 10 seconds, but it looks like 7.0 still has the 
default (1 second) timeout.


If you don't want to recompile or update, you should be able to work 
around this by increasing the default timeout with the following command 
"settings set plugin.process.gdb-remote.packet-timeout 10".


regards,
pavel


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Pavel Labath via lldb-dev

On 04/01/2019 22:19, Jonas Devlieghere via lldb-dev wrote:

Hi Everyone,

In September I sent out an RFC [1] about adding reproducers to LLDB. Over 
the past few months, I landed the reproducer framework, support for the 
GDB remote protocol and a bunch of preparatory changes. There's still an 
open code review [2] for dealing with files, but that one is currently 
blocked by a change to the VFS in LLVM [3].

The next big piece of work is supporting user commands (e.g. in the 
driver) and SB API calls. Originally I expected these two things to be 
separate, but Pavel made a good case [4] that they're actually very similar.

I created a prototype of how I envision this to work. As usual, we can 
differentiate between capture and replay.

## SB API Capture

When capturing a reproducer, every SB function/method is instrumented 
using a macro at function entry. The added code tracks the function 
identifier (currently we use its name with __PRETTY_FUNCTION__) and its 
arguments.

It also tracks when a function crosses the boundary between internal and 
external use. For example, when someone (be it the driver, the python 
binding or the RPC server) calls SBFoo, and in its implementation SBFoo 
calls SBBar, we don't need to record SBBar. When invoking SBFoo during 
replay, it will itself call SBBar.

When a boundary is crossed, the function name and arguments are serialized 
to a file. This is trivial for basic types. For objects, we maintain a 
table that maps pointer values to indices and serialize the index.

To keep our table consistent, we also need to track returns for functions 
that return an object by value. We have a separate macro that wraps the 
returned object.

The index is sufficient because every object that is passed to a function 
has crossed the boundary and hence was recorded. During replay (see below) 
we map the index to an address again, which ensures consistency.

## SB API Replay

To replay the SB function calls we need a way to invoke the corresponding 
function from its serialized identifier. For every SB function, there's a 
counterpart that deserializes its arguments and invokes the function. 
These functions are added to the map and are called by the replay logic.

Replaying is just a matter of looping over the function identifiers in the 
serialized file, dispatching the right deserialization function, until no 
more data is available.

The deserialization function for constructors or functions that return by 
value contains additional logic for dealing with the aforementioned 
indices. The resulting objects are added to a table (similar to the one 
described earlier) that maps indices to pointers. Whenever an object is 
passed as an argument, the index is used to get the actual object from the 
table.

## Tool

Even when using macros, adding the necessary capturing and replay code is 
tedious and scales poorly. For the prototype, we did this by hand, but we 
propose a new clang-based tool to streamline the process.

For the capture code, the tool would validate that the macro matches the 
function signature, suggesting a fixit if the macros are incorrect or 
missing. Compared to generating the macros altogether, it has the 
advantage that we don't have "configured" files that are harder to debug 
(without faking line numbers etc).

The deserialization code would be fully generated. As shown in the 
prototype there are a few different cases, depending on whether we have to 
account for objects or not.

## Prototype Code

I created a differential [5] on Phabricator with the prototype. It 
contains the necessary methods to re-run the gdb remote (reproducer) lit 
test.

## Feedback

Before moving forward I'd like to get the community's input. What do you 
think about this approach? Do you have concerns or can we be smarter 
somewhere? Any feedback would be greatly appreciated!

Thanks,
Jonas

[1] http://lists.llvm.org/pipermail/lldb-dev/2018-September/014184.html
[2] https://reviews.llvm.org/D54617
[3] https://reviews.llvm.org/D54277
[4] https://reviews.llvm.org/D55582
[5] https://reviews.llvm.org/D56322

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev



[Adding Tamas for his experience with recording and replaying APIs.]


Thank you for sharing the prototype, Jonas. It looks very interesting, but
there are a couple of things that worry me about it.


The first one is the usage of __PRETTY_FUNCTION__. That sounds like a
non-starter even for an initial implementation, as the string it expands to is
going to differ between compilers (gcc and clang will probably agree on it, but
I know for a fact it will be different on MSVC). If that were just an internal
property of the serialization format, then it might be fine, but it looks like
you are hardcoding the values in code to connect the methods with their
replayers, which is going to be a problem.


I've been thinking about how this could be
Re: [lldb-dev] Unreliable process attach on Linux

2019-01-07 Thread Florian Weimer via lldb-dev
* Pavel Labath:

> On 04/01/2019 17:38, Florian Weimer via lldb-dev wrote:
>> Consider this example program:
>>
>> #include <err.h>
>> #include <signal.h>
>> #include <unistd.h>
>>
>> #include <sys/types.h>
>> #include <sys/wait.h>
>> #include <lldb/API/LLDB.h>
>>
>> int
>> main(void)
>> {
>>// Target process for the debugger.
>>pid_t pid = fork();
>>if (pid < 0)
>>  err(1, "fork");
>>if (pid == 0)
>>  while (true)
>>pause();
>>
>>lldb::SBDebugger::Initialize();
>>{
>>  auto debugger(lldb::SBDebugger::Create());
>>  if (!debugger.IsValid())
>>errx(1, "SBDebugger::Create failed");
>>
>>  auto target(debugger.CreateTarget(nullptr));
>>  if (!target.IsValid())
>>errx(1, "SBDebugger::CreateTarget failed");
>>
>>  lldb::SBAttachInfo attachinfo(pid);
>>  lldb::SBError error;
>>  auto process(target.Attach(attachinfo, error));
>>  if (!process.IsValid())
>>errx(1, "SBTarget::Attach failed: %s", error.GetCString());
>>  error = process.Detach();
>>  if (error.Fail())
>>errx(1, "SBProcess::Detach failed: %s", error.GetCString());
>>}
>>lldb::SBDebugger::Terminate();
>>
>>if (kill(pid, SIGKILL) != 0)
>>  err(1, "kill");
>>if (waitpid(pid, NULL, 0) < 0)
>>  err(1, "waitpid");
>>
>>return 0;
>> }
>>
>> Run it in a loop like this:
>>
>> $ while ./test-attach ; do date; done
>>
>> On Linux x86-64 (Fedora 29), with LLDB 7 (lldb-7.0.0-1.fc29.x86_64) and
>> kernel 4.19.12 (kernel-4.19.12-301.fc29.x86_64), after 100 iterations or
>> so, attaching to the newly created process fails:
>>
>> test-attach: SBTarget::Attach failed: lost connection
>>
>> This also reproduces occasionally with LLDB itself (with “lldb -p PID”).
>>
>> Any suggestions how to get more information about the cause of this
>> error?
>>
>
> I would recommend enabling gdb-remote logging (so something like:
> debugger.HandleCommand("log enable gdb-remote packets")) to see at
> which stage we actually lose the gdb-server connection.

Thanks.  I enabled logging like this:

auto debugger(lldb::SBDebugger::Create());
if (!debugger.IsValid())
  errx(1, "SBDebugger::Create failed");

debugger.HandleCommand("log enable gdb-remote packets");

auto target(debugger.CreateTarget(nullptr));
if (!target.IsValid())
  errx(1, "SBDebugger::CreateTarget failed");

And here's the output I get:

test-attach  <   1> send packet: +
test-attach  history[1] tid=0x1cab <   1> send packet: +
test-attach  <  19> send packet: $QStartNoAckMode#b0
test-attach  <   1> read packet: +
test-attach  <   6> read packet: $OK#9a
test-attach  <   1> send packet: +
test-attach  <  41> send packet: $qSupported:xmlRegisters=i386,arm,mips#12
test-attach  < 124> read packet: 
$PacketSize=2;QStartNoAckMode+;QThreadSuffixSupported+;QListThreadsInStopReply+;qEcho+;QPassSignals+;qXfer:auxv:read+#be
test-attach  <  26> send packet: $QThreadSuffixSupported#e4
test-attach  <   6> read packet: $OK#9a
test-attach  <  27> send packet: $QListThreadsInStopReply#21
test-attach  <   6> read packet: $OK#9a
test-attach  <  13> send packet: $qHostInfo#9b
test-attach  <  11> send packet: $qEcho:1#5b
test-attach: SBTarget::Attach failed: lost connection

Florian


Re: [lldb-dev] Unreliable process attach on Linux

2019-01-07 Thread Pavel Labath via lldb-dev

On 04/01/2019 17:38, Florian Weimer via lldb-dev wrote:

Consider this example program:

#include <err.h>
#include <signal.h>
#include <unistd.h>

#include <sys/types.h>
#include <sys/wait.h>
#include <lldb/API/LLDB.h>

int
main(void)
{
   // Target process for the debugger.
   pid_t pid = fork();
   if (pid < 0)
 err(1, "fork");
   if (pid == 0)
 while (true)
   pause();

   lldb::SBDebugger::Initialize();
   {
 auto debugger(lldb::SBDebugger::Create());
 if (!debugger.IsValid())
   errx(1, "SBDebugger::Create failed");

 auto target(debugger.CreateTarget(nullptr));
 if (!target.IsValid())
   errx(1, "SBDebugger::CreateTarget failed");

 lldb::SBAttachInfo attachinfo(pid);
 lldb::SBError error;
 auto process(target.Attach(attachinfo, error));
 if (!process.IsValid())
   errx(1, "SBTarget::Attach failed: %s", error.GetCString());
 error = process.Detach();
 if (error.Fail())
   errx(1, "SBProcess::Detach failed: %s", error.GetCString());
   }
   lldb::SBDebugger::Terminate();

   if (kill(pid, SIGKILL) != 0)
 err(1, "kill");
   if (waitpid(pid, NULL, 0) < 0)
 err(1, "waitpid");

   return 0;
}

Run it in a loop like this:

$ while ./test-attach ; do date; done

On Linux x86-64 (Fedora 29), with LLDB 7 (lldb-7.0.0-1.fc29.x86_64) and
kernel 4.19.12 (kernel-4.19.12-301.fc29.x86_64), after 100 iterations or
so, attaching to the newly created process fails:

test-attach: SBTarget::Attach failed: lost connection

This also reproduces occasionally with LLDB itself (with “lldb -p PID”).

Any suggestions how to get more information about the cause of this
error?



I would recommend enabling gdb-remote logging (so something like:
debugger.HandleCommand("log enable gdb-remote packets")) to see at which
stage we actually lose the gdb-server connection.


My best bet would be that on your machine/build the server is slower than
usual in responding to one of the client packets, and that causes the
connection to be dropped.


cheers,
pavel