Most packet calls to lldb-server use the instance variable 
GDBRemoteCommunication::m_packet_timeout, which you could then modify. But the 
timeout you are talking about is the time the expression is allowed to run. I 
would just bump these values up temporarily while you are debugging to avoid 
the timeouts; just don't check the change in.

So for GDB Remote packets, we already bump the timeout up in the 
GDBRemoteCommunication constructor:

#ifdef LLDB_CONFIGURATION_DEBUG
    m_packet_timeout (1000),   // debug builds: 1000 seconds, so packets survive breakpoints
#else
    m_packet_timeout (1),      // normal builds: 1 second
#endif
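
Since you are driving LLDB through its C++ API, you may also be able to raise 
the packet timeout at runtime instead of rebuilding, via the 
plugin.process.gdb-remote.packet-timeout setting. A minimal sketch, assuming 
you already have an SBDebugger (the 600-second value is just an example):

#include <cstdio>

#include "lldb/API/SBDebugger.h"
#include "lldb/API/SBError.h"

// Raise the gdb-remote packet timeout for this debugger instance so
// packets don't time out while you sit at a breakpoint in lldb-server.
void BumpPacketTimeout (lldb::SBDebugger &debugger)
{
    lldb::SBError error = lldb::SBDebugger::SetInternalVariable (
        "plugin.process.gdb-remote.packet-timeout", "600",
        debugger.GetInstanceName());
    if (!error.Success())
        fprintf (stderr, "failed to set packet timeout: %s\n", error.GetCString());
}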


Anything else is probably an expression timeout, and you will need to bump 
those up manually in order to debug, or you could do the same thing as the GDB 
Remote code in InferiorCallPOSIX.cpp:

#ifdef LLDB_CONFIGURATION_DEBUG
    options.SetTimeoutUsec(50000000);   // debug builds: 50 seconds
#else
    options.SetTimeoutUsec(500000);     // normal builds: 500,000us = 0.5 seconds
#endif
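
And since you are using the C++ API, you can also raise the timeout per 
expression instead of patching the sources. A minimal sketch, assuming a valid 
SBFrame named frame (the expression and the 50-second timeout are just 
placeholders):

#include "lldb/API/SBExpressionOptions.h"
#include "lldb/API/SBFrame.h"
#include "lldb/API/SBValue.h"

// Evaluate an expression with a generous timeout so it survives time
// spent stopped at a breakpoint inside lldb-server.
lldb::SBValue EvaluateWithLongTimeout (lldb::SBFrame &frame)
{
    lldb::SBExpressionOptions options;
    options.SetTimeoutInMicroSeconds (50 * 1000 * 1000);  // 50 seconds
    return frame.EvaluateExpression ("1 + 1", options);
}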


> On Oct 7, 2015, at 10:33 AM, Eugene Birukov via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
> 
> Hello,
>  
> I am trying to see what is going on inside LLDB server 3.7.0, but there are a 
> lot of timeouts scattered everywhere. Say, InferiorCallPOSIX.cpp:74 sets a 
> hard-coded timeout of 500,000us, etc. These timeouts fire if I spend any time 
> at a breakpoint inside the server, and they make the debugging experience 
> miserable. Is there any way to turn them all off?
>  
> BTW, I am using LLDB through its C++ API, not as a standalone program, but I 
> have a debugger attached to it and can alter its memory state.
>  
> Thanks,
> Eugene
