[lldb-dev] April LLVM bay-area social is this Thursday!

2018-04-02 Thread George Burgess IV via lldb-dev
We'll be at Tied House as usual, starting on Thursday the 5th at 7pm!

If you can, help us plan and RSVP here:
https://www.meetup.com/LLVM-Bay-Area-Social/events/kncsjlyxgbhb/

See everyone there!
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0

2018-04-02 Thread Greg Clayton via lldb-dev


> On Apr 2, 2018, at 6:18 AM, Ramana wrote:
> 
> On Thu, Mar 29, 2018 at 8:02 PM, Greg Clayton wrote:
> 
>> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev wrote:
>> 
>>> Hi,
>>> 
>>> It appears that lldb-server, as of v5.0, does not implement the GDB RSP's 
>>> non-stop mode 
>>> (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). 
>>> Am I wrong?
>>> 
>>> If the support is actually not there, what needs to be changed to enable 
>>> the same in lldb-server?
> 
>> As Pavel said, adding support into lldb-server will be easy. Adding support 
>> to LLDB will be harder. One downside of enabling this mode will be a 
>> performance loss in the GDB remote packet transfer. Why? IIRC this mode 
>> requires a read thread where one thread is always reading packets and 
>> putting them into a packet buffer. Threads that want to send a packet and 
>> get a reply must now send the packet and then use a condition variable + 
>> mutex to wait for the response. This threading overhead really slows down 
>> the packet transfers. Currently we have a mutex on the GDB remote 
>> communication where each thread that needs to send a packet will take the 
>> mutex, send the packet and then wait for the response on the same thread. I 
>> know the performance differences are large on MacOS; I am not sure how they 
>> are on other systems. If you do end up enabling this, please run the 
>> "process plugin packet speed-test" command, which is available only when 
>> debugging with ProcessGDBRemote. It will send and receive packets of 
>> various sizes and report speed statistics back to you.
> 
> So, in non-stop mode, though threads can run asynchronously (some running, 
> some stopped), the GDB remote packet transfer will be synchronous, i.e. 
> packets will get queued?

In the normal mode there is no queueing, which means we don't need a thread to 
read packets and deliver the right response to the right thread. With non-stop 
mode we will need a read thread, IIRC. The extra threading overhead is costly.

> And is this because the packet responses must be matched appropriately, as 
> there will typically be a single connection to the remote target, and hence 
> this queueing cannot be avoided?

It can't be avoided because you have to be ready to receive a thread stop 
packet at any time, even if no packets are being sent. With the normal 
protocol, you can only receive a stop packet in response to a continue packet, 
so there is never a time where you can't just send the packet and receive the 
response on the same thread. With non-stop mode, there must be a thread 
listening for stop reply packets, since any thread can stop at any time. 
Adding threads means ~10,000 cycles of thread synchronization code for each 
packet.

>>> Also, in lldb at least I see some code relevant to non-stop mode, but is 
>>> non-stop mode fully implemented in lldb or is there only partial support?
>> 
>> Everything in LLDB right now assumes a process-centric debugging model 
>> where when one thread stops all threads are stopped. There will be quite a 
>> large number of changes needed for a thread-centric model. The biggest 
>> issue I know about is breakpoints. Any time you need to step over a 
>> breakpoint, you must stop all threads, disable the breakpoint, single step 
>> the thread, re-enable the breakpoint, then start all threads again. So even 
>> the thread-centric model would need to start and stop all threads many 
>> times.
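To spell that sequence out (a pseudocode sketch with invented helper names,
not LLDB's real Process/ThreadPlan API):

// Stand-in types for illustration only.
struct Breakpoint {
  void RestoreOriginalOpcode() {}  // take the trap out of the program text
  void WriteTrapOpcode() {}        // put the trap back
};
struct Thread {
  void SingleStep() {}             // step one instruction, this thread only
};
struct Process {
  void StopAllThreads() {}
  void ResumeAllThreads() {}
};

void StepOverBreakpoint(Process &process, Thread &thread, Breakpoint &bp) {
  process.StopAllThreads();    // every thread halts, even uninvolved ones
  bp.RestoreOriginalOpcode();
  thread.SingleStep();         // safe only while no other thread can run
                               // through the un-trapped address
  bp.WriteTrapOpcode();
  process.ResumeAllThreads();
}

The stop-all/resume-all bracketing is exactly what non-stop mode cannot
afford, which is why the question below matters.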
> 
> Greg, what if, while stepping over a breakpoint, the remaining threads can 
> still continue, with no need to disable the breakpoint? What else would I 
> need to take care of?

This is where we would really need the instruction emulation support for 
executing breakpoint opcodes out of place. I believe the other discussions have 
highlighted this need. Let me know if that isn't clear. That is really the only 
way this feature truly works.
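Concretely, the callback trick would look something like this (a sketch only;
these are not the real EmulateInstruction signatures):

#include <cstdint>
#include <unordered_map>

// Invented, simplified shapes; LLDB's real EmulateInstruction class takes
// read/write register/memory callbacks plus a baton, but differs in detail.
struct StepOverContext {
  uint64_t original_pc;                    // address the trap opcode replaced
  std::unordered_map<int, uint64_t> regs;  // the stepping thread's registers
};

constexpr int kPCRegNum = 16;  // placeholder PC register number

// The trick: when the emulated instruction reads the PC, report the original
// address rather than wherever the displaced copy of the instruction lives,
// so PC-relative instructions compute correct results.
uint64_t ReadRegister(StepOverContext &ctx, int reg_num) {
  if (reg_num == kPCRegNum)
    return ctx.original_pc;
  return ctx.regs[reg_num];
}

void WriteRegister(StepOverContext &ctx, int reg_num, uint64_t value) {
  ctx.regs[reg_num] = value;
}

With that in place, the trap opcode never has to leave the program text, so
the other threads can keep running through that address while one thread
steps.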

Greg


Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0

2018-04-02 Thread Ramana via lldb-dev
On Thu, Mar 29, 2018 at 11:37 PM, Jim Ingham wrote:

> On Mar 29, 2018, at 10:40 AM, Greg Clayton via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
>
>> On Mar 29, 2018, at 10:36 AM, Frédéric Riss wrote:
>>
>>> On Mar 29, 2018, at 9:27 AM, Greg Clayton wrote:
>>>
>>>> On Mar 29, 2018, at 9:10 AM, Frédéric Riss wrote:
>>>>
>>>>> On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev 
>>>>> <lldb-dev@lists.llvm.org> wrote:
>>>>>
>>>>>> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev 
>>>>>> <lldb-dev@lists.llvm.org> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> It appears that lldb-server, as of v5.0, does not implement the GDB 
>>>>>>> RSP's non-stop mode 
>>>>>>> (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). 
>>>>>>> Am I wrong?
>>>>>>>
>>>>>>> If the support is actually not there, what needs to be changed to 
>>>>>>> enable the same in lldb-server?
>>>>>>
>>>>>> As Pavel said, adding support into lldb-server will be easy. Adding 
>>>>>> support to LLDB will be harder. One downside of enabling this mode will 
>>>>>> be a performance loss in the GDB remote packet transfer. Why? IIRC this 
>>>>>> mode requires a read thread where one thread is always reading packets 
>>>>>> and putting them into a packet buffer. Threads that want to send a 
>>>>>> packet and get a reply must now send the packet and then use a 
>>>>>> condition variable + mutex to wait for the response. This threading 
>>>>>> overhead really slows down the packet transfers. Currently we have a 
>>>>>> mutex on the GDB remote communication where each thread that needs to 
>>>>>> send a packet will take the mutex, send the packet and then wait for 
>>>>>> the response on the same thread. I know the performance differences are 
>>>>>> large on MacOS; I am not sure how they are on other systems. If you do 
>>>>>> end up enabling this, please run the "process plugin packet speed-test" 
>>>>>> command, which is available only when debugging with ProcessGDBRemote. 
>>>>>> It will send and receive packets of various sizes and report speed 
>>>>>> statistics back to you.
>>>>>>
>>>>>>> Also, in lldb at least I see some code relevant to non-stop mode, but 
>>>>>>> is non-stop mode fully implemented in lldb or is there only partial 
>>>>>>> support?
>>>>>>
>>>>>> Everything in LLDB right now assumes a process-centric debugging model 
>>>>>> where when one thread stops all threads are stopped. There will be 
>>>>>> quite a large number of changes needed for a thread-centric model. The 
>>>>>> biggest issue I know about is breakpoints. Any time you need to step 
>>>>>> over a breakpoint, you must stop all threads, disable the breakpoint, 
>>>>>> single step the thread, re-enable the breakpoint, then start all 
>>>>>> threads again. So even the thread-centric model would need to start and 
>>>>>> stop all threads many times.
>>>>>
>>>>> If we work on this, that's not the way we should approach breakpoints in 
>>>>> non-stop mode (and it's not how GDB does it). I'm not sure why Ramana is 
>>>>> interested in it, but I think one of the main motivations to add it to 
>>>>> GDB was systems where stopping all the threads for even a small amount 
>>>>> of time would just break things. You want a way to step over breakpoints 
>>>>> without disrupting the other threads.
>>>>>
>>>>> Instead of removing the breakpoint, you can just teach the debugger to 
>>>>> execute the code that has been patched in a different context. You can 
>>>>> either move the code someplace else and execute it there or emulate it. 
>>>>> Sometimes you'll need to patch it if it is PC-relative. IIRC, GDB calls 
>>>>> this displaced stepping. It's relatively simple and works great.
>>>>
>>>> This indeed is one of the changes we would need to do for non-stop mode. 
>>>> We have the EmulateInstruction class in LLDB that is designed just for 
>>>> this kind of thing. You can give the emulator function read/write memory 
>>>> and read/write register callbacks and a baton, and it can execute the 
>>>> instruction, reading and writing memory and registers as needed through 
>>>> the context. It would be very easy to have the read register callback 
>>>> know to take the PC of the original instruction and return it if the PC 
>>>> is requested.
>>>>
>>>> We always got push back in the past about adding full instruction 
>>>> emulation support, as Chris Lattner wanted it to exist in LLVM in the 
>>>> tablegen tables, but no one ever got around to doing that part. So we 
>>>> added prologue instruction parsing and any instructions that can modify 
>>>> the PC (for single stepping) to the supported emulated instructions.
>>>>
>>>> So yes, emulating instructions without removing them from the code is one 
>>>> of the things required for this feature. Not impossible, just very 
>>>> time-consuming to be able to emulate every instruction out of place. I 
>>>> would _love_ to see that go in and would be happy to review patches for 
>>>> anyone wanting to take this on. Though the question still remains: does 
>>>> this happen in LLVM or in LLDB? Emulating instruction in 

Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0

2018-04-02 Thread Ramana via lldb-dev
On Thu, Mar 29, 2018 at 11:17 PM, Jim Ingham wrote:

> The breakpoints aren't a structural problem.  If you can figure out a
> non-code-modifying way to handle breakpoints, that would be a very surgical
> change.  And as Fred points out, out-of-place execution in the target would
> be really handy for other things, like offloading breakpoint conditions
> into the target and only stopping if the condition is true.  So this is a
> well-motivated project.
>
> And our model for handling both expression evaluation and execution
> control is already thread-centric.  It would be pretty straightforward to
> treat "still running" threads the same way as threads with no interesting
> stop reasons, for instance.
>
> I think the real difficulty will come at the higher layers.  First off, we
> gate a lot of Command & SB API operations on "is the process running" and
> that will have to get much more fine-grained.  Figuring out a good model
> for this will be important.
>
> Then you're going to have to figure out what exactly to do when somebody
> is in the middle of, say, running a long expression on thread A when thread
> B stops.  What's a useful way to present this information?  If lldb is
> sharing the terminal with the process, you can't just dump output in the
> middle of command output, but you don't want to delay too long...
>
> Also, the IOHandlers are currently a stack, but that model won't work when
> the process IOHandler is going to have to be live (at least the output part
> of it) while the CommandInterpreter IOHandler is also live.  That's going
> to take reworking.
>
> On the event and operations side, I think the fact that we have the
> separation between the private and public states will make this a lot
> easier.  We can use the event transition from private to public state to
> serialize the activity that's going on under the covers so that it appears
> coherent to the user.  The fact that lldb goes through separate channels
> for process I/O and command I/O and we very seldom just dump stuff to
> stdout will also make solving the problem of competing demands for the
> user's attention more possible.
>
> And I think we can't do any of this till we have a robust "ProcessMock"
> plugin that we can use to emulate end-to-end through the debugger all the
> corner cases that non-stop debugging will bring up.  Otherwise there will
> be no way to reliably test any of this stuff, and it won't ever be stable.
>
> I don't think any of this will be impossible, but it's going to be a lot
> of work.
>
> Jim
>

Thanks, Jim, for the comments. Being new to lldb, I find that a lot of food
for thought. I will get back here after doing some homework on what all this
means.
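As a first bit of that homework: the gating change sounds like going from one
process-wide check to a per-thread one, roughly like this (a sketch with
invented names, certainly not the real Command/SB API code):

// Stand-in types for illustration only.
enum class RunState { Stopped, Running };

struct Thread {
  RunState state = RunState::Stopped;
  RunState GetRunState() const { return state; }
};

struct Process {
  bool any_thread_running = false;
  bool IsRunning() const { return any_thread_running; }
};

// Today (all-stop): one answer for the whole process.
bool CanInspect(const Process &process) { return !process.IsRunning(); }

// Non-stop: the same question has to be asked per thread, so commands like
// "frame variable" stay legal on a stopped thread while others run.
bool CanInspect(const Thread &thread) {
  return thread.GetRunState() == RunState::Stopped;
}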


Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0

2018-04-02 Thread Ramana via lldb-dev
On Thu, Mar 29, 2018 at 8:02 PM, Greg Clayton wrote:

> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev wrote:
>
>> Hi,
>>
>> It appears that lldb-server, as of v5.0, does not implement the GDB RSP's 
>> non-stop mode 
>> (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). 
>> Am I wrong?
>>
>> If the support is actually not there, what needs to be changed to enable 
>> the same in lldb-server?
>
> As Pavel said, adding support into lldb-server will be easy. Adding support 
> to LLDB will be harder. One downside of enabling this mode will be a 
> performance loss in the GDB remote packet transfer. Why? IIRC this mode 
> requires a read thread where one thread is always reading packets and 
> putting them into a packet buffer. Threads that want to send a packet and 
> get a reply must now send the packet and then use a condition variable + 
> mutex to wait for the response. This threading overhead really slows down 
> the packet transfers. Currently we have a mutex on the GDB remote 
> communication where each thread that needs to send a packet will take the 
> mutex, send the packet and then wait for the response on the same thread. I 
> know the performance differences are large on MacOS; I am not sure how they 
> are on other systems. If you do end up enabling this, please run the 
> "process plugin packet speed-test" command, which is available only when 
> debugging with ProcessGDBRemote. It will send and receive packets of 
> various sizes and report speed statistics back to you.

So, in non-stop mode, though threads can run asynchronously (some running,
some stopped), the GDB remote packet transfer will be synchronous, i.e.
packets will get queued? And is this because the packet responses must be
matched appropriately, as there will typically be a single connection to the
remote target, and hence this queueing cannot be avoided?

>> Also, in lldb at least I see some code relevant to non-stop mode, but is 
>> non-stop mode fully implemented in lldb or is there only partial support?
>
> Everything in LLDB right now assumes a process-centric debugging model 
> where when one thread stops all threads are stopped. There will be quite a 
> large number of changes needed for a thread-centric model. The biggest 
> issue I know about is breakpoints. Any time you need to step over a 
> breakpoint, you must stop all threads, disable the breakpoint, single step 
> the thread, re-enable the breakpoint, then start all threads again. So even 
> the thread-centric model would need to start and stop all threads many 
> times.

Greg, what if, while stepping over a breakpoint, the remaining threads can
still continue, with no need to disable the breakpoint? What else would I
need to take care of?


>
> Be sure to speak with me, Jim Ingham and Pavel in depth before undertaking 
> this task, as there will be many changes required.
>
> Greg
>
>> Thanks,
>> Ramana