On Mon, 2 Apr 2018 at 16:28, Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote:
>> On Apr 2, 2018, at 6:18 AM, Ramana <ramana.venka...@gmail.com> wrote:
>>
>> On Thu, Mar 29, 2018 at 8:02 PM, Greg Clayton <clayb...@gmail.com> wrote:
>>>
>>>> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev <lldb-dev@lists.llvm.org> wrote:
>>>>
>>>> Hi,
>>>>
>>>> It appears that lldb-server, as of v5.0, did not implement the GDB
>>>> RSP's non-stop mode
>>>> (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop).
>>>> Am I wrong?
>>>>
>>>> If the support is actually not there, what needs to be changed to
>>>> enable the same in lldb-server?
>>>
>>> As Pavel said, adding support into lldb-server will be easy. Adding
>>> support to LLDB will be harder. One downside of enabling this mode will
>>> be a performance loss in the GDB remote packet transfer. Why? IIRC this
>>> mode requires a read thread, where one thread is always reading packets
>>> and putting them into a packet buffer. Threads that want to send a
>>> packet and get a reply must now send the packet, then use a condition
>>> variable + mutex to wait for the response. This threading overhead
>>> really slows down the packet transfers. Currently we have a mutex on
>>> the GDB remote communication where each thread that needs to send a
>>> packet will take the mutex, send the packet, and wait for the response
>>> on the same thread. I know the performance differences are large on
>>> macOS; I am not sure how they are on other systems. If you do end up
>>> enabling this, please run the "process plugin packet speed-test"
>>> command, which is available only when debugging with ProcessGDBRemote.
>>> It will send and receive various packets of various sizes and report
>>> speed statistics back to you.
>>
>> So, in non-stop mode, though we can have threads running asynchronously
>> (some running, some stopped), the GDB remote packet transfer will be
>> synchronous, i.e. will get queued?
> In the normal mode there is no queueing, which means we don't need a
> thread to read packets and deliver the right response to the right
> thread. With non-stop mode we will need a read thread, IIRC. The extra
> threading overhead is costly.

>> And this is because the packet responses should be matched
>> appropriately, as there typically will be a single connection to the
>> remote target, and hence this queueing cannot be avoided?

> It can't be avoided because you have to be ready to receive a thread
> stop packet at any time, even if no packets are being sent. With the
> normal protocol, you can only receive a stop packet in response to a
> continue packet, so there is never a time where you can't just send the
> packet and receive the response on the same thread. With non-stop mode,
> there must be a thread reading the stop reply packets for any thread
> that can stop at any time. Adding threads means ~10,000 cycles of thread
> synchronization code for each packet.

I think this is one of the least important obstacles in tackling the
non-stop feature, but since we're already discussing it, I just wanted to
point out that there are many ways we can improve the performance here. The
read thread *is* necessary, but only so that we can receive asynchronous
responses when we're not doing any gdb-remote work. If we are already
sending some packets, it is superfluous. As one optimization, we could make
sure that the read thread is disabled while we are sending a packet. E.g.,
SendPacketAndWaitForResponse could do something like:

  SendPacket(msg);     // We can do this even while the read thread is doing work
  SuspendReadThread(); // Should be cheap, as it happens while the remote
                       // stub is processing our packet
  GetResponse();       // Happens on the main thread, as before
  ResumeReadThread();  // Fast.

We could even take this further and have some sort of a RAII object which
disables the read thread at a higher level, for when we want to be sending a
bunch of packets.
Of course, this would need to be implemented with a steady hand and
carefully tested, but the good news here is that the gdb-remote protocol is
one of the better-tested aspects of lldb, with many testing approaches
available. However, I think the place for this discussion is once we have
something which is >90% functional.
_______________________________________________ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev