Re: [lldb-dev] Can I call a python script from LLDB c++ code?

2018-04-03 Thread Ted Woodward via lldb-dev
Responses inline

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

> -----Original Message-----
> From: Greg Clayton [mailto:clayb...@gmail.com]
> Sent: Tuesday, April 03, 2018 5:19 PM
> To: Ted Woodward 
> Cc: lldb-dev@lists.llvm.org
> Subject: Re: [lldb-dev] Can I call a python script from LLDB c++ code?
> 
> 
> 
> > On Apr 3, 2018, at 12:18 PM, Ted Woodward via lldb-dev <lldb-dev@lists.llvm.org> wrote:
> >
> > LLDB for Hexagon can automatically launch and connect to the Hexagon
> > simulator, much like LLDB can launch and connect to debugserver/lldb-
> server.
> > I've got a copy of GDBRemoteCommunication::StartDebugserverProcess
> > that does this. A copy because there are feature incompatibilities
> > between hexagon-sim and debugserver/lldb-server.
> >
> > On a hardware target, our OS has a debug stub. We'd like to run the
> > lldb test suite talking to this stub on the simulator, instead of
> > talking to the RSP interface the simulator publishes. We have a module
> > that will forward ports to the OS under simulation, but to do this I need to:
> > 1) open an http connection to port x
> > 2) parse some xml coming back that contains the actual port for the
> > stub I want to connect to
> > 3) connect to the new port
> 
> 
> Can't you forward ports in advance and then run lldb-server in platform mode
> and tell it to use only those ports? Then lldb-server will do everything it
> needs. There is a port offset option to lldb-server that can be used in case
> the lldb-server that runs on the simulator returns say port , but it needs to
> have 1 added to it...

Short answer - no.  It's a custom stub, not lldb-server, but that's not the
issue. The issue is that the mechanism to get data into the simulation mimics
what we do on hardware, where the DSP doesn't have access to the outside
world, and everything goes through an Android app. The system publishes 1 port
per process that the stub controls. These ports are picked randomly, and are
set up when the http connection is made. The data that is read over that
connection needs to be parsed to find the ports that the stub is publishing.
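
To make that handshake concrete, here's a rough, self-contained sketch of what
the connect path has to do, in plain POSIX C++. The URL path ("/processes") and
the <port> element name are invented for illustration - the forwarding module's
actual schema isn't shown here:

  #include <netdb.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <unistd.h>
  #include <cstdio>
  #include <cstdlib>
  #include <cstring>
  #include <string>

  // Do an HTTP GET against the known port, read the XML reply, and pull out
  // the first <port> value.
  static int DiscoverStubPort(const char *host, const char *http_port) {
    addrinfo hints = {}, *res = nullptr;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, http_port, &hints, &res) != 0)
      return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
      if (fd >= 0)
        close(fd);
      freeaddrinfo(res);
      return -1;
    }
    freeaddrinfo(res);

    std::string request = "GET /processes HTTP/1.0\r\nHost: ";
    request += host;
    request += "\r\n\r\n";
    send(fd, request.data(), request.size(), 0);

    std::string reply;
    char buf[1024];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
      reply.append(buf, n);
    close(fd);

    // Assume the XML carries something like <port>12345</port> for the stub.
    const char *open_tag = "<port>";
    size_t pos = reply.find(open_tag);
    if (pos == std::string::npos)
      return -1;
    return atoi(reply.c_str() + pos + strlen(open_tag));
  }

  int main() {
    int port = DiscoverStubPort("localhost", "8080"); // "port x" from the steps above
    printf("stub port: %d\n", port);
    return port > 0 ? 0 : 1;
  }

The real code would use a proper XML parser and the real element names, but
that's the whole flow: one GET, one parse, then connect to the port that comes
back.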

> > I have a python script that will do this, but I need to do it inside
> > LLDB
> > c++ code in GDBRemoteCommunication.cpp, so when I do a "run" it will jump
> > through the correct hoops and connect to the stub under simulation.
> >
> > Is there a good way to call a python script from LLDB c++ code and get
> > a result back? Or is there a better solution?
> >
> 
> The main question is: can you run lldb-server in the simulator and have the
> test suite just work? What is stopping you from being able to do that if the
> answer is no?

I've got the test suite working using the simulator's RSP interface, but the
next step is to exercise the OS stub. And to get to it I have to jump through
the hoops I talked about earlier.

> It sounds like a real hack if you have to run a python script in
> ProcessGDBRemote. It sounds like you need to just modify your hexagon
> simulator platform code to "do the right thing".

"Do the right thing" in this case involves opening an http connection,
parsing XML,
and telling LLDB to connect to the port I get from the XML. The launch is
done inside
Process::Launch, which is called from the platform, so I can't do any
processing
In the platform.

Worst case, I could do something like 'system("python sim_stub_connect.py")'
to get the port that's being published, if using LLDB's interpreter is not a
good idea.
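
If it comes to that, popen() would at least let me read the port back instead
of firing and forgetting with system(). A minimal sketch - the script name is
from above, but its "print just the port on stdout" contract is an assumption:

  #include <cstdio>
  #include <cstdlib>

  static int GetStubPortFromScript() {
    // Hypothetical helper script; expected to print the stub port and exit.
    FILE *pipe = popen("python sim_stub_connect.py", "r");
    if (!pipe)
      return -1;
    char line[64] = {};
    int port = -1;
    if (fgets(line, sizeof line, pipe))
      port = atoi(line);   // whatever the script printed on its first line
    pclose(pipe);
    return port;
  }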

> > Ted
> >
> > --
> > Qualcomm Innovation Center, Inc.
> > The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> > a Linux Foundation Collaborative Project
> >
> >


Re: [lldb-dev] Can I call a python script from LLDB c++ code?

2018-04-03 Thread Greg Clayton via lldb-dev


> On Apr 3, 2018, at 12:18 PM, Ted Woodward via lldb-dev
> <lldb-dev@lists.llvm.org> wrote:
> 
> LLDB for Hexagon can automatically launch and connect to the Hexagon
> simulator, much like LLDB can launch and connect to debugserver/lldb-server.
> I've got a copy of GDBRemoteCommunication::StartDebugserverProcess that does
> this. A copy because there are feature incompatibilities between hexagon-sim
> and debugserver/lldb-server.
> 
> On a hardware target, our OS has a debug stub. We'd like to run the lldb
> test suite talking to this stub on the simulator, instead of talking to the
> RSP interface the simulator publishes. We have a module that will forward
> ports to the OS under simulation, but to do this I need to:
> 1) open an http connection to port x
> 2) parse some xml coming back that contains the actual port for the stub I
> want to connect to
> 3) connect to the new port


Can't you forward ports in advance and then run lldb-server in platform mode 
and tell it to use only those ports? Then lldb-server will do everything it 
needs. There is a port offset option to lldb-server that can be used in case 
the lldb-server that runs on the simulator returns say port , but it needs 
to have 1 added to it...

> 
> I have a python script that will do this, but I need to do it inside LLDB
> c++ code in GDBRemoteCommunication.cpp, so when I do a "run" it will jump
> through the correct hoops and connect to the stub under simulation.
> 
> Is there a good way to call a python script from LLDB c++ code and get a
> result back? Or is there a better solution?
> 

The main question is: can you run lldb-server in the simulator and have the
test suite just work? What is stopping you from being able to do that if the 
answer is no?

It sounds like a real hack if you have to run a python script in 
ProcessGDBRemote. It sounds like you need to just modify your hexagon simulator 
platform code to "do the right thing".


> Ted
> 
> --
> Qualcomm Innovation Center, Inc.
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
> Linux Foundation Collaborative Project
> 
> 


[lldb-dev] Can I call a python script from LLDB c++ code?

2018-04-03 Thread Ted Woodward via lldb-dev
LLDB for Hexagon can automatically launch and connect to the Hexagon
simulator, much like LLDB can launch and connect to debugserver/lldb-server.
I've got a copy of GDBRemoteCommunication::StartDebugserverProcess that does
this. A copy because there are feature incompatibilities between hexagon-sim
and debugserver/lldb-server.

On a hardware target, our OS has a debug stub. We'd like to run the lldb
test suite talking to this stub on the simulator, instead of talking to the
RSP interface the simulator publishes. We have a module that will forward
ports to the OS under simulation, but to do this I need to:
1) open an http connection to port x
2) parse some xml coming back that contains the actual port for the stub I
want to connect to
3) connect to the new port

I have a python script that will do this, but I need to do it inside LLDB
c++ code in GDBRemoteCommunication.cpp, so when I do a "run" it will jump
through the correct hoops and connect to the stub under simulation.

Is there a good way to call a python script from LLDB c++ code and get a
result back? Or is there a better solution?
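
For concreteness, the generic CPython-embedding shape of what I mean is
sketched below. It deliberately ignores LLDB's own ScriptInterpreter plumbing,
and inside LLDB the Py_Initialize/Py_Finalize calls wouldn't apply since the
interpreter is already up; the module and function names are invented:

  #include <Python.h>

  // Import a module, call a no-argument function in it, and get a number back.
  static long GetPortFromPython() {
    Py_Initialize();
    long port = -1;
    PyObject *module = PyImport_ImportModule("sim_stub_connect");      // hypothetical module
    if (module) {
      PyObject *func = PyObject_GetAttrString(module, "get_stub_port"); // hypothetical function
      if (func && PyCallable_Check(func)) {
        PyObject *result = PyObject_CallObject(func, nullptr);
        if (result) {
          port = PyLong_AsLong(result);
          Py_DECREF(result);
        }
      }
      Py_XDECREF(func);
      Py_DECREF(module);
    }
    if (PyErr_Occurred())
      PyErr_Print();
    Py_Finalize();
    return port;
  }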

Ted

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project




Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0

2018-04-03 Thread Jim Ingham via lldb-dev


> On Apr 3, 2018, at 1:28 AM, Ramana  wrote:
> 
> 
> 
> On Thu, Mar 29, 2018 at 11:17 PM, Jim Ingham wrote:
> The breakpoints aren't a structural problem.  If you can figure out a 
> non-code modifying way to handle breakpoints, that would be a very surgical 
> change.  And as Fred points out, out of place execution in the target would 
> be really handy for other things, like offloading breakpoint conditions into 
> the target, and only stopping if the condition is true.  So this is a well 
> motivated project.
> 
> And our model for handling both expression evaluation and execution control 
> are already thread-centric.  It would be pretty straight-forward to treat 
> "still running" threads the same way as threads with no interesting stop 
> reasons, for instance.
> 
> I think the real difficulty will come at the higher layers.  First off, we 
> gate a lot of Command & SB API operations on "is the process running" and 
> that will have to get much more fine-grained.  Figuring out a good model for 
> this will be important.
> 
> Then you're going to have to figure out what exactly to do when somebody is 
> in the middle of say running a long expression on thread A when thread B 
> stops.  What's a useful way to present this information?  If lldb is sharing 
> the terminal with the process, you can't just dump output in the middle of 
> command output, but you don't want to delay too long...
> 
> Also, the IOHandlers are currently a stack, but that model won't work when 
> the process IOHandler is going to have to be live (at least the output part 
> of it) while the CommandInterpreter IOHandler is also live.  That's going to 
> take reworking.
> 
> On the event and operations side, I think the fact that we have the 
> separation between the private and public states will make this a lot easier. 
>  We can use the event transition from private to public state to serialize 
> the activity that's going on under the covers so that it appears coherent to 
> the user.  The fact that lldb goes through separate channels for process I/O 
> and command I/O and we very seldom just dump stuff to stdout will also make 
> solving the problem of competing demands for the user's attention more 
> possible.
> 
> Thanks Jim for the elaborate view on the non-stop mode support.
> 
> BTW my understanding on public vs private states is that the public state is 
> as known by the user and all the process state changes will be first tracked 
> with private state which then will be made public, i.e. public state will be 
> updated, should the user need to know about that process state change. Is 
> there anything else I am missing on public vs private states?

That’s exactly it.  Another detail worth noting is that all the user-supplied
work - particularly for breakpoints, the commands, conditions etc. - happens
when the event is pulled off the public queue.  So we can postpone all this
work for threads whose stop notifications we are suspending by holding back
some events from the private->public transition, which I think will be
helpful in making the non-stop mode behave nicely.
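
Roughly the shape I have in mind - this is not our actual Process event
machinery, just a self-contained sketch of parking events at that boundary:

  #include <cstdint>
  #include <deque>
  #include <functional>
  #include <set>

  struct StopEvent {
    uint64_t thread_id;
    // ... stop reason, etc. ...
  };

  class PublicEventGate {
  public:
    void SuppressThread(uint64_t tid) { m_suppressed.insert(tid); }

    // Called for every event coming off the private state thread.
    void OnPrivateEvent(const StopEvent &event,
                        const std::function<void(const StopEvent &)> &broadcast) {
      if (m_suppressed.count(event.thread_id)) {
        m_held_back.push_back(event);   // the user doesn't hear about this one yet
        return;
      }
      broadcast(event);                 // becomes a public state change now
    }

    // Later, when the user should catch up on a thread's activity.
    void ReleaseThread(uint64_t tid,
                       const std::function<void(const StopEvent &)> &broadcast) {
      std::deque<StopEvent> still_held;
      for (const StopEvent &event : m_held_back) {
        if (event.thread_id == tid)
          broadcast(event);             // breakpoint commands/conditions run here
        else
          still_held.push_back(event);
      }
      m_held_back.swap(still_held);
      m_suppressed.erase(tid);
    }

  private:
    std::set<uint64_t> m_suppressed;
    std::deque<StopEvent> m_held_back;
  };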

Jim

> 
> 
> And I think we can't do any of this till we have a robust "ProcessMock" 
> plugin that we can use to emulate end-to-end through the debugger all the 
> corner cases that non-stop debugging will bring up.  Otherwise there will be 
> no way to reliably test any of this stuff, and it won't ever be stable.
> 
> I don't think any of this will be impossible, but it's going to be a lot of 
> work.
> 
> Jim
> 
> 
> > On Mar 29, 2018, at 9:27 AM, Greg Clayton via lldb-dev
> > <lldb-dev@lists.llvm.org> wrote:
> >
> >
> >
> >> On Mar 29, 2018, at 9:10 AM, Frédéric Riss wrote:
> >>
> >>
> >>
> >>> On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev
> >>> <lldb-dev@lists.llvm.org> wrote:
> >>>
> >>>
> >>>
>  On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev
>  <lldb-dev@lists.llvm.org> wrote:
> 
>  Hi,
> 
>  It appears that the lldb-server, as of v5.0, did not implement the GDB 
>  RSPs non-stop mode 
>  (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop).
>  Am I wrong?
> 
>  If the support is actually not there, what needs to be changed to enable 
>  the same in lldb-server?
> >>>
> >>> As Pavel said, adding support into lldb-server will be easy. Adding 
> >>> support to LLDB will be harder. One downside of enabling this mode will 
> >>> be a performance loss in the GDB remote packet transfer. Why? IIRC this 
> >>> mode requires a read thread where one thread is always reading packets 
> >>> and putting them into a packet 

Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0

2018-04-03 Thread Pavel Labath via lldb-dev
On Mon, 2 Apr 2018 at 16:28, Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

>
>
> On Apr 2, 2018, at 6:18 AM, Ramana  wrote:
>
>
>
> On Thu, Mar 29, 2018 at 8:02 PM, Greg Clayton  wrote:
>
>>
>>
>> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev <lldb-dev@lists.llvm.org> wrote:
>>
>> Hi,
>>
>> It appears that the lldb-server, as of v5.0, did not implement the GDB
>> RSPs non-stop mode (
>> https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop).
>> Am I wrong?
>>
>> If the support is actually not there, what needs to be changed to enable
>> the same in lldb-server?
>>
>>
>> As Pavel said, adding support into lldb-server will be easy. Adding
>> support to LLDB will be harder. One downside of enabling this mode will be
>> a performance loss in the GDB remote packet transfer. Why? IIRC this mode
>> requires a read thread where one thread is always reading packets and
>> putting them into a packet buffer. Threads that want to send a packet and
>> get a reply must now send the packet then use a condition variable + mutex
>> to wait for the response. This threading overhead really slows down the
>> packet transfers. Currently we have a mutex on the GDB remote communication
>> where each thread that needs to send a packet will take the mutex and then
>> send the packet and wait for the response on the same thread. I know the
>> performance differences are large on MacOS, not sure how they are on other
>> systems. If you do end up enabling this, please run the "process plugin
>> packet speed-test" command which is available only when debugging with
>> ProcessGDBRemote. It will send and receive various packets of various sizes
>> and report speed statistics back to you.
>>
>
> So, in non-stop mode, though we can have threads running asynchronously
> (some running, some stopped), the GDB remote packet transfer will be
> synchronous i.e. will get queued?
>
>
> In the normal mode there is no queueing which means we don't need a thread
> to read packets and deliver the right response to the right thread. With
> non-stop mode we will need a read thread IIRC. The extra threading overhead
> is costly.
>
> And this is because the packet responses should be matched appropriately
> as there typically will be a single connection to the remote target and
> hence this queueing cannot be avoided?
>
>
> It can't be avoided because you have to be ready to receive a thread stop
> packet at any time, even if no packets are being sent. With the normal
> protocol, you can only receive a stop packet in response to a continue
> packet. So there is never a time where you can't just send the packet and
> receive the response on the same thread. With non-stop mode, there must be
> a thread for the stop reply packets for any thread that can stop at any
> time. Adding threads means ~10,000 cycles of thread synchronization code
> for each packet.
>
>
I think this is one of the least important obstacles in tackling the
non-stop feature, but since we're already discussing it, I just wanted to
point out that there are many ways we can improve the performance here. The
read thread *is* necessary, but only so that we can receive asynchronous
responses when we're not doing any gdb-remote work. If we are already
sending some packets, it is superfluous.
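
For illustration, the hand-off that a persistent read thread forces looks
roughly like this - a self-contained sketch, not the actual
GDBRemoteCommunication code:

  #include <condition_variable>
  #include <deque>
  #include <mutex>
  #include <string>

  class PacketConnection {
  public:
    // Called by the read thread for every packet it pulls off the socket.
    void DeliverResponse(std::string packet) {
      std::lock_guard<std::mutex> lock(m_mutex);
      m_responses.push_back(std::move(packet));
      m_response_cv.notify_one();   // one lock/notify/wake round trip per packet
    }

    // Called by the thread that sent a request and wants its reply.
    std::string WaitForResponse() {
      std::unique_lock<std::mutex> lock(m_mutex);
      m_response_cv.wait(lock, [this] { return !m_responses.empty(); });
      std::string packet = std::move(m_responses.front());
      m_responses.pop_front();
      return packet;
    }

  private:
    std::mutex m_mutex;
    std::condition_variable m_response_cv;
    std::deque<std::string> m_responses;
  };

That per-packet wake-up is the overhead Greg describes; asynchronous
notifications (the non-stop stop packets) would be routed out of the same read
loop to a callback instead of the response queue.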

As one optimization, we could make sure that the read thread is disabled
while we are sending a packet. E.g., SendPacketAndWaitForResponse could do
something like:

  SendPacket(msg);       // fine even while the read thread is doing work
  SuspendReadThread();   // cheap: happens while the remote stub is processing our packet
  GetResponse();         // happens on the main thread, as before
  ResumeReadThread();    // fast

We could even take this further and have some sort of a RAII object which
disables the read thread at a higher level for when we want to be sending a
bunch of packets.
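
Something along these lines, where SuspendReadThread/ResumeReadThread are
placeholders for whatever mechanism we end up with, not existing methods:

  // Stand-in for the real communication class.
  class GDBRemoteClient {
  public:
    void SuspendReadThread() { /* park the reader before its next read */ }
    void ResumeReadThread()  { /* let the reader pull packets again */ }
  };

  // RAII guard: while it is alive, the calling thread reads responses itself
  // and the background read thread stays out of the way.
  class ReadThreadSuspender {
  public:
    explicit ReadThreadSuspender(GDBRemoteClient &client) : m_client(client) {
      m_client.SuspendReadThread();
    }
    ~ReadThreadSuspender() { m_client.ResumeReadThread(); }
    ReadThreadSuspender(const ReadThreadSuspender &) = delete;
    ReadThreadSuspender &operator=(const ReadThreadSuspender &) = delete;

  private:
    GDBRemoteClient &m_client;
  };

  void SendManyPackets(GDBRemoteClient &client) {
    ReadThreadSuspender guard(client);   // reader parked for the whole burst
    // ... SendPacket()/GetResponse() pairs on this thread ...
  }                                      // reader resumes when guard goes away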

Of course, this would need to be implemented with a steady hand and
carefully tested, but the good news here is that the gdb-remote protocol is
one of the better tested aspects of lldb, with many testing approaches
available.

However, I think the place for this discussion is once we have something
which is >90% functional.


Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0

2018-04-03 Thread Ramana via lldb-dev
On Thu, Mar 29, 2018 at 11:17 PM, Jim Ingham  wrote:

> The breakpoints aren't a structural problem.  If you can figure out a
> non-code modifying way to handle breakpoints, that would be a very surgical
> change.  And as Fred points out, out of place execution in the target would
> be really handy for other things, like offloading breakpoint conditions
> into the target, and only stopping if the condition is true.  So this is a
> well motivated project.
>
> And our model for handling both expression evaluation and execution
> control are already thread-centric.  It would be pretty straight-forward to
> treat "still running" threads the same way as threads with no interesting
> stop reasons, for instance.
>
> I think the real difficulty will come at the higher layers.  First off, we
> gate a lot of Command & SB API operations on "is the process running" and
> that will have to get much more fine-grained.  Figuring out a good model
> for this will be important.
>
> Then you're going to have to figure out what exactly to do when somebody
> is in the middle of say running a long expression on thread A when thread B
> stops.  What's a useful way to present this information?  If lldb is
> sharing the terminal with the process, you can't just dump output in the
> middle of command output, but you don't want to delay too long...
>
> Also, the IOHandlers are currently a stack, but that model won't work when
> the process IOHandler is going to have to be live (at least the output part
> of it) while the CommandInterpreter IOHandler is also live.  That's going
> to take reworking.
>
> On the event and operations side, I think the fact that we have the
> separation between the private and public states will make this a lot
> easier.  We can use the event transition from private to public state to
> serialize the activity that's going on under the covers so that it appears
> coherent to the user.  The fact that lldb goes through separate channels
> for process I/O and command I/O and we very seldom just dump stuff to
> stdout will also make solving the problem of competing demands for the
> user's attention more possible.
>

Thanks Jim for the elaborate view on the non-stop mode support.

BTW my understanding on public vs private states is that the public state
is as known by the user and all the process state changes will be first
tracked with private state which then will be made public, i.e. public
state will be updated, should the user need to know about that process
state change. Is there anything else I am missing on public vs private
states?


> And I think we can't do any of this till we have a robust "ProcessMock"
> plugin that we can use to emulate end-to-end through the debugger all the
> corner cases that non-stop debugging will bring up.  Otherwise there will
> be no way to reliably test any of this stuff, and it won't ever be stable.
>
> I don't think any of this will be impossible, but it's going to be a lot
> of work.
>
> Jim
>
>
> > On Mar 29, 2018, at 9:27 AM, Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> >
> >
> >> On Mar 29, 2018, at 9:10 AM, Frédéric Riss  wrote:
> >>
> >>
> >>
> >>> On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>>
> >>>
> >>>
>  On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> 
>  Hi,
> 
>  It appears that the lldb-server, as of v5.0, did not implement the
> GDB RSPs non-stop mode
> (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong?
> 
>  If the support is actually not there, what needs to be changed to
> enable the same in lldb-server?
> >>>
> >>> As Pavel said, adding support into lldb-server will be easy. Adding
> support to LLDB will be harder. One downside of enabling this mode will be
> a performance loss in the GDB remote packet transfer. Why? IIRC this mode
> requires a read thread where one thread is always reading packets and
> putting them into a packet buffer. Threads that want to send a packet and
> get a reply must now send the packet then use a condition variable + mutex
> to wait for the response. This threading overhead really slows down the
> packet transfers. Currently we have a mutex on the GDB remote communication
> where each thread that needs to send a packet will take the mutex and then
> send the packet and wait for the response on the same thread. I know the
> performance differences are large on MacOS, not sure how they are on other
> systems. If you do end up enabling this, please run the "process plugin
> packet speed-test" command which is available only when debugging with
> ProcessGDBRemote. It will send and receive various packets of various sizes
> and report speed statistics back to you.
> 
>  Also, in lldb at least I see some code relevant to non-stop mode, but
> is non-stop mode fully implemented in lldb or there is 

Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0

2018-04-03 Thread Ramana via lldb-dev
On Thu, Mar 29, 2018 at 11:58 PM, Greg Clayton  wrote:

>
>
> On Mar 29, 2018, at 11:07 AM, Jim Ingham  wrote:
>
>
>
> On Mar 29, 2018, at 10:40 AM, Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>
>
> On Mar 29, 2018, at 10:36 AM, Frédéric Riss  wrote:
>
>
>
> On Mar 29, 2018, at 9:27 AM, Greg Clayton  wrote:
>
>
>
> On Mar 29, 2018, at 9:10 AM, Frédéric Riss  wrote:
>
>
>
> On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>
>
> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev 
> wrote:
>
> Hi,
>
> It appears that the lldb-server, as of v5.0, did not implement the GDB
> RSPs non-stop mode
> (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong?
>
> If the support is actually not there, what needs to be changed to enable
> the same in lldb-server?
>
>
> As Pavel said, adding support into lldb-server will be easy. Adding
> support to LLDB will be harder. One downside of enabling this mode will be
> a performance loss in the GDB remote packet transfer. Why? IIRC this mode
> requires a read thread where one thread is always reading packets and
> putting them into a packet buffer. Threads that want to send a packet and
> get a reply must now send the packet then use a condition variable + mutex
> to wait for the response. This threading overhead really slows down the
> packet transfers. Currently we have a mutex on the GDB remote communication
> where each thread that needs to send a packet will take the mutex and then
> send the packet and wait for the response on the same thread. I know the
> performance differences are large on MacOS, not sure how they are on other
> systems. If you do end up enabling this, please run the "process plugin
> packet speed-test" command which is available only when debugging with
> ProcessGDBRemote. It will send and receive various packets of various sizes
> and report speed statistics back to you.
>
>
> Also, in lldb at least I see some code relevant to non-stop mode, but is
> non-stop mode fully implemented in lldb, or is there only partial support?
>
>
> Everything in LLDB right now assumes a process centric debugging model
> where when one thread stops all threads are stopped. There will be quite a
> large amount of changes needed for a thread centric model. The biggest
> issue I know about is breakpoints. Any time you need to step over a
> breakpoint, you must stop all threads, disable the breakpoint, single step
> the thread and re-enable the breakpoint, then start all threads again. So
> even the thread centric model would need to start and stop all threads many
> times.
>
>
> If we work on this, that’s not the way we should approach breakpoints in
> non-stop mode (and it’s not how GDB does it). I’m not sure why Ramana is
> interested in it, but I think one of the main motivations to add it to GDB
> was systems where stopping all or some threads for even a small amount of time
> would just break things. You want a way to step over breakpoints without
> disrupting the other threads.
>
> Instead of removing the breakpoint, you can just teach the debugger to
> execute the code that has been patched in a different context. You can
> either move the code someplace else and execute it there or emulate it.
> Sometimes you’ll need to patch it if it is PC-relative. IIRC, GDB calls
> this displaced stepping. It’s relatively simple and works great.
>
>
> This indeed is one of the changes we would need to do for non-stop mode.
> We have the EmulateInstruction class in LLDB that is designed just for this
> kind of thing. You can give the emulator function read/write memory and
> read/write register callbacks and a baton, and it can execute the
> instruction, reading and writing memory and registers as needed through the
> context. It would be very easy to have the read register callback know to
> take the PC of the original instruction and return it if the PC is
> requested.
>
> We always got push back in the past about adding full instruction
> emulation support as Chris Lattner wanted it to exist in LLVM in the
> tablegen tables, but no one ever got around to doing that part. So we added
> prologue instruction parsing and any instructions that can modify the PC
> (for single stepping) to the supported emulated instructions.
>
> So yes, emulating instructions without removing them from the code is one
> of the things required for this feature. Not impossible, just very time
> consuming to be able to emulate every instruction out of place. I would
> _love_ to see that go in and would be happy to review patches for anyone
> wanting to take this on. Though the question still remains: does this
> happen in LLVM or in LLDB. Emulating instruction in LLVM might provide some
> great testing that could happen in the LLVM layers.
>
>
> In my porting experience, emulation is